diff --git a/.changelog/24774.txt b/.changelog/24774.txt new file mode 100644 index 00000000000..c55249ac202 --- /dev/null +++ b/.changelog/24774.txt @@ -0,0 +1,7 @@ +```release-note:bug +resource/aws_instance: Fix default for `metadata_options.http_endpoint` argument. +``` + +```release-note:note +resource/aws_instance: The `metadata_options.http_endpoint` argument now correctly defaults to `enabled`. +``` diff --git a/.changelog/25210.txt b/.changelog/25210.txt new file mode 100644 index 00000000000..55c0476e12a --- /dev/null +++ b/.changelog/25210.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_cloudformation_stack_set: Add `managed_execution` argument +``` \ No newline at end of file diff --git a/.changelog/25569.txt b/.changelog/25569.txt new file mode 100644 index 00000000000..da95c6e2c87 --- /dev/null +++ b/.changelog/25569.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_iam_principal_policy_simulation +``` diff --git a/.changelog/25724.txt b/.changelog/25724.txt new file mode 100644 index 00000000000..092987cfb61 --- /dev/null +++ b/.changelog/25724.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_s3_bucket: Fix `bucket_regional_domain_name` not including region for buckets in `us-east-1` +``` diff --git a/.changelog/26880.txt b/.changelog/26880.txt new file mode 100644 index 00000000000..efaa1acbfe2 --- /dev/null +++ b/.changelog/26880.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_ecs_task_definition: Prevent persistent diff when `efs_volume_configuration` has both `root_volume` and `authorization_config` set. 
+``` diff --git a/.changelog/27197.txt b/.changelog/27197.txt new file mode 100644 index 00000000000..5e13bde5fb5 --- /dev/null +++ b/.changelog/27197.txt @@ -0,0 +1,7 @@ +```release-note:bug +resource/aws_s3_object: Remove `acl` default in order to work with S3 buckets that have ACL disabled +``` + +```release-note:bug +resource/aws_s3_object_copy: Remove `acl` default in order to work with S3 buckets that have ACL disabled +``` diff --git a/.changelog/27604.txt b/.changelog/27604.txt new file mode 100644 index 00000000000..95935b2c377 --- /dev/null +++ b/.changelog/27604.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_ses_active_receipt_rule_set: Support import +``` diff --git a/.changelog/27811.txt b/.changelog/27811.txt new file mode 100644 index 00000000000..efd85ca5d5c --- /dev/null +++ b/.changelog/27811.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_datasync_task: Add `object_tags` attribute to `options` configuration block +``` + +```release-note:bug +resource/aws_datasync_location_object_storage: Don't ignore `server_certificate` argument +``` \ No newline at end of file diff --git a/.changelog/28470.txt b/.changelog/28470.txt new file mode 100644 index 00000000000..b23c27b2c0e --- /dev/null +++ b/.changelog/28470.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_opensearchserverless_security_policy +``` diff --git a/.changelog/28518.txt b/.changelog/28518.txt new file mode 100644 index 00000000000..62768271008 --- /dev/null +++ b/.changelog/28518.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_opensearchserverless_access_policy +``` diff --git a/.changelog/28651.txt b/.changelog/28651.txt new file mode 100644 index 00000000000..ca15a3ec408 --- /dev/null +++ b/.changelog/28651.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_opensearchserverless_vpc_endpoint +``` diff --git a/.changelog/28776.txt b/.changelog/28776.txt new file mode 100644 index 00000000000..d727ef44042 --- /dev/null +++ 
b/.changelog/28776.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_opensearchserverless_security_config +``` diff --git a/.changelog/29367.txt b/.changelog/29367.txt new file mode 100644 index 00000000000..1c4807b04e2 --- /dev/null +++ b/.changelog/29367.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_invocation: Add `lifecycle_scope` argument; the `CRUD` value invokes the function on each resource state transition +``` diff --git a/.changelog/29571.txt b/.changelog/29571.txt new file mode 100644 index 00000000000..8b1c268e1e8 --- /dev/null +++ b/.changelog/29571.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_layer_version_permission: Add `skip_destroy` attribute +``` diff --git a/.changelog/30242.txt b/.changelog/30242.txt new file mode 100644 index 00000000000..62e2f21ee72 --- /dev/null +++ b/.changelog/30242.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_resourcegroups_group: `resource_query` no longer conflicts with `configuration` +``` + +```release-note:bug +resource/aws_resourcegroups_resource: Fix crash when resource Create fails +``` \ No newline at end of file diff --git a/.changelog/30545.txt b/.changelog/30545.txt new file mode 100644 index 00000000000..1860a0376d5 --- /dev/null +++ b/.changelog/30545.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_launch_template: Remove default values in `metadata_options` to allow default condition +``` \ No newline at end of file diff --git a/.changelog/30817.txt b/.changelog/30817.txt new file mode 100644 index 00000000000..9703205b81a --- /dev/null +++ b/.changelog/30817.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +data-source/aws_ssm_parameter: Add `insecure_value` attribute +``` diff --git a/.changelog/30916.txt b/.changelog/30916.txt new file mode 100644 index 00000000000..e2a3351e1a3 --- /dev/null +++ b/.changelog/30916.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket_logging: Retry on empty read of logging config +``` diff --git 
a/.changelog/30995.txt b/.changelog/30995.txt new file mode 100644 index 00000000000..cadd8d4d38b --- /dev/null +++ b/.changelog/30995.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_s3_bucket_replication_configuration: Add retry logic to fix a race condition when reading a newly created replication configuration, which can take several seconds to propagate +``` \ No newline at end of file diff --git a/.changelog/30999.txt b/.changelog/30999.txt new file mode 100644 index 00000000000..33a72466954 --- /dev/null +++ b/.changelog/30999.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_route_table: Fix `reading Route Table (rtb-abcd1234): couldn't find resource` errors when reading new resource +``` \ No newline at end of file diff --git a/.changelog/31006.txt b/.changelog/31006.txt index 694bab7f76c..4d1cba25988 100644 --- a/.changelog/31006.txt +++ b/.changelog/31006.txt @@ -1,3 +1,3 @@ ```release-note:note -data-source/aws_db_security_group: The `aws_redshift_service_account` data source has been deprecated and will be removed in a future version. AWS documentation [states that](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) a [service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) should be used instead of an AWS account ID in any relevant IAM policy -``` \ No newline at end of file +data-source/aws_redshift_service_account: The `aws_redshift_service_account` data source has been deprecated and will be removed in a future version. 
AWS documentation [states that](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) a [service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) should be used instead of an AWS account ID in any relevant IAM policy +``` diff --git a/.changelog/31091.txt b/.changelog/31091.txt new file mode 100644 index 00000000000..29ab877f12b --- /dev/null +++ b/.changelog/31091.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_opensearchserverless_collection +``` diff --git a/.changelog/31264.txt b/.changelog/31264.txt new file mode 100644 index 00000000000..db4eb8a39cc --- /dev/null +++ b/.changelog/31264.txt @@ -0,0 +1,11 @@ +```release-note:bug +resource/aws_elasticache_cluster: Correctly supports ElastiCache Redis version 7+ +``` + +```release-note:bug +resource/aws_elasticache_global_replication_group: Correctly supports ElastiCache Redis version 7+ +``` + +```release-note:bug +resource/aws_elasticache_replication_group: Correctly supports ElastiCache Redis version 7+ +``` diff --git a/.changelog/31282.txt b/.changelog/31282.txt new file mode 100644 index 00000000000..c7ba02f01a9 --- /dev/null +++ b/.changelog/31282.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_autoscaling_group: Ignore any `Failed` scaling activities due to IAM eventual consistency +``` \ No newline at end of file diff --git a/.changelog/31352.txt b/.changelog/31352.txt new file mode 100644 index 00000000000..fea17218a3c --- /dev/null +++ b/.changelog/31352.txt @@ -0,0 +1,7 @@ +```release-note:bug +resource/aws_keyspaces_keyspace: Correct plan time validation for `name` +``` + +```release-note:bug +resource/aws_keyspaces_table: Correct plan time validation for `keyspace_name`, `table_name` and column names +``` \ No newline at end of file diff --git a/.changelog/31365.txt b/.changelog/31365.txt new file mode 100644 index 00000000000..1126542afb4 --- /dev/null +++ 
b/.changelog/31365.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_chimesdkvoice_global_settings +``` diff --git a/.changelog/31369.txt b/.changelog/31369.txt new file mode 100644 index 00000000000..b58e7c5d001 --- /dev/null +++ b/.changelog/31369.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_appsync_graphql_api: Add `visibility` argument +``` + +```release-note:enhancement +resource/aws_appsync_graphql_api: Add plan time validation for `log_config.cloudwatch_logs_role_arn` +``` \ No newline at end of file diff --git a/.changelog/31372.txt b/.changelog/31372.txt new file mode 100644 index 00000000000..741b6aa0d2c --- /dev/null +++ b/.changelog/31372.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_vpclattice_resource_policy +``` \ No newline at end of file diff --git a/.changelog/31393.txt b/.changelog/31393.txt new file mode 100644 index 00000000000..cc15d35befd --- /dev/null +++ b/.changelog/31393.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_rbin_rule: Fix crash when multiple `resource_tags` blocks are configured +``` \ No newline at end of file diff --git a/.changelog/31398.txt b/.changelog/31398.txt new file mode 100644 index 00000000000..98c60b551ad --- /dev/null +++ b/.changelog/31398.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_sagemaker_endpoint_configuration: Add `production_variants.serverless_config.provisioned_concurrency` and `shadow_production_variants.serverless_config.provisioned_concurrency` arguments +``` \ No newline at end of file diff --git a/.changelog/31399.txt b/.changelog/31399.txt new file mode 100644 index 00000000000..d9d1c53408c --- /dev/null +++ b/.changelog/31399.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_emrcontainers_job_template +``` \ No newline at end of file diff --git a/.changelog/31422.txt b/.changelog/31422.txt new file mode 100644 index 00000000000..986ec9e0373 --- /dev/null +++ b/.changelog/31422.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_grafana_workspace: Increase default Create and Update 
timeouts to 30 minutes +``` diff --git a/.changelog/31424.txt b/.changelog/31424.txt new file mode 100644 index 00000000000..46e470a5514 --- /dev/null +++ b/.changelog/31424.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_quicksight_data_set: Fix `join_instruction` not being applied when creating a dataset +``` diff --git a/.changelog/31430.txt b/.changelog/31430.txt new file mode 100644 index 00000000000..80f3f768e49 --- /dev/null +++ b/.changelog/31430.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_resourcegroups_resource +``` diff --git a/.changelog/31448.txt b/.changelog/31448.txt new file mode 100644 index 00000000000..ecf330a5089 --- /dev/null +++ b/.changelog/31448.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_quicksight_dashboard +``` \ No newline at end of file diff --git a/.changelog/31452.txt b/.changelog/31452.txt new file mode 100644 index 00000000000..e580342ea25 --- /dev/null +++ b/.changelog/31452.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_ec2_transit_gateway_route_table_association: Add `replace_existing_association` argument +``` diff --git a/.changelog/31480.txt b/.changelog/31480.txt new file mode 100644 index 00000000000..d89831fdacb --- /dev/null +++ b/.changelog/31480.txt @@ -0,0 +1,7 @@ +```release-note:bug +data-source/aws_dx_connection: Fix the `vlan_id` being returned as null +``` + +```release-note:bug +resource/aws_dx_connection: Fix the `vlan_id` being returned as null +``` \ No newline at end of file diff --git a/.changelog/31483.txt b/.changelog/31483.txt new file mode 100644 index 00000000000..f63b0d054f4 --- /dev/null +++ b/.changelog/31483.txt @@ -0,0 +1,3 @@ +```release-note:breaking-change +resource/aws_codebuild_project: The `secondary_sources.auth` and `source.auth` attributes have been removed +``` \ No newline at end of file diff --git a/.changelog/31484.txt b/.changelog/31484.txt new file mode 100644 index 00000000000..472d2623a4b --- /dev/null +++ b/.changelog/31484.txt @@ -0,0 +1,7 @@ 
+```release-note:breaking-change +resource/aws_connect_hours_of_operation: The `hours_of_operation_arn` attribute has been removed +``` + +```release-note:breaking-change +data-source/aws_connect_hours_of_operation: The `hours_of_operation_arn` attribute has been removed +``` \ No newline at end of file diff --git a/.changelog/31486.txt b/.changelog/31486.txt new file mode 100644 index 00000000000..8047c98a6b9 --- /dev/null +++ b/.changelog/31486.txt @@ -0,0 +1,3 @@ +```release-note:breaking-change +resource/aws_wafv2_web_acl_logging_configuration: The `redacted_fields.all_query_arguments`, `redacted_fields.body` and `redacted_fields.single_query_argument` attributes have been removed +``` \ No newline at end of file diff --git a/.changelog/31487.txt b/.changelog/31487.txt new file mode 100644 index 00000000000..34d383d5c74 --- /dev/null +++ b/.changelog/31487.txt @@ -0,0 +1,7 @@ +```release-note:breaking-change +resource/aws_secretsmanager_secret: The `rotation_enabled`, `rotation_lambda_arn` and `rotation_rules` attributes have been removed +``` + +```release-note:breaking-change +data-source/aws_secretsmanager_secret: The `rotation_enabled`, `rotation_lambda_arn` and `rotation_rules` attributes have been removed +``` \ No newline at end of file diff --git a/.changelog/31488.txt b/.changelog/31488.txt new file mode 100644 index 00000000000..039e1f1299b --- /dev/null +++ b/.changelog/31488.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_quicksight_data_set: Ignore failure to read refresh properties for non-SPICE datasets +``` diff --git a/.changelog/31489.txt b/.changelog/31489.txt new file mode 100644 index 00000000000..c529440f63e --- /dev/null +++ b/.changelog/31489.txt @@ -0,0 +1,3 @@ +```release-note:breaking-change +resource/aws_lightsail_instance: The `ipv6_address` attribute has been removed +``` \ No newline at end of file diff --git a/.changelog/31490.txt b/.changelog/31490.txt new file mode 100644 index 00000000000..4c1366c8ae0 --- /dev/null +++ 
b/.changelog/31490.txt @@ -0,0 +1,7 @@ +```release-note:note +resource/aws_opensearch_domain: The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead +``` + +```release-note:note +data-source/aws_opensearch_domain: The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead +``` \ No newline at end of file diff --git a/.changelog/31495.txt b/.changelog/31495.txt new file mode 100644 index 00000000000..0da7a2fd911 --- /dev/null +++ b/.changelog/31495.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_instance: Add `instance_market_options` configuration block and `instance_lifecycle` and `spot_instance_request_id` attributes +``` \ No newline at end of file diff --git a/.changelog/31499.txt b/.changelog/31499.txt new file mode 100644 index 00000000000..a2348d8650d --- /dev/null +++ b/.changelog/31499.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fis_experiment_template: Add support for `Volumes` to `actions.*.target` +``` \ No newline at end of file diff --git a/.changelog/31527.txt b/.changelog/31527.txt new file mode 100644 index 00000000000..78d8af75628 --- /dev/null +++ b/.changelog/31527.txt @@ -0,0 +1,15 @@ +```release-note:new-resource +aws_autoscaling_traffic_source_attachment +``` + +```release-note:enhancement +resource/aws_autoscaling_group: Add `traffic_source` configuration block +``` + +```release-note:enhancement +data-source/aws_autoscaling_group: Add `traffic_source` attribute +``` + +```release-note:note +resource/aws_autoscaling_group: The `load_balancers` and `target_group_arns` attributes have been changed to `Computed`. This means that omitting this argument is interpreted as ignoring any existing load balancer or target group attachments. 
To remove all load balancer or target group attachments an empty list should be specified. +``` diff --git a/.changelog/31536.txt b/.changelog/31536.txt new file mode 100644 index 00000000000..3d8f315e516 --- /dev/null +++ b/.changelog/31536.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_transfer_server: Add support for `TransferSecurityPolicy-2023-05` `security_policy_name` value +``` \ No newline at end of file diff --git a/.changelog/31541.txt b/.changelog/31541.txt new file mode 100644 index 00000000000..17c9faae3ad --- /dev/null +++ b/.changelog/31541.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_networkfirewall_firewall_policy: Add `stream_exception_policy` option to `firewall_policy.stateful_engine_options` +``` diff --git a/.changelog/31542.txt b/.changelog/31542.txt new file mode 100644 index 00000000000..312ed18b2b8 --- /dev/null +++ b/.changelog/31542.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_quicksight_analysis +``` \ No newline at end of file diff --git a/.changelog/31544.txt b/.changelog/31544.txt new file mode 100644 index 00000000000..348eb1ae38b --- /dev/null +++ b/.changelog/31544.txt @@ -0,0 +1,27 @@ +```release-note:enhancement +resource/aws_fsx_ontap_volume: Update `ontap_volume_type` attribute to be configurable +``` + +```release-note:enhancement +resource/aws_fsx_ontap_volume: `junction_path` is Optional +``` + +```release-note:enhancement +resource/aws_fsx_ontap_volume: Remove default value for `security_style` argument and mark as Computed +``` + +```release-note:enhancement +resource/aws_fsx_ontap_volume: `storage_efficiency_enabled` is Optional +``` + +```release-note:bug +resource/aws_fsx_ontap_volume: Change `volume_type` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) +``` + +```release-note:enhancement +resource/aws_fsx_ontap_volume: Add `skip_final_backup` argument +``` + +```release-note:bug +resource/aws_fsx_ontap_volume: Change 
`storage_virtual_machine_id` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) +``` diff --git a/.changelog/31545.txt b/.changelog/31545.txt new file mode 100644 index 00000000000..1df174314b0 --- /dev/null +++ b/.changelog/31545.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_organizations_policies +``` \ No newline at end of file diff --git a/.changelog/31551.txt b/.changelog/31551.txt new file mode 100644 index 00000000000..c801766f7c2 --- /dev/null +++ b/.changelog/31551.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_autoscaling_group: Now ignores previous failed scaling activities +``` diff --git a/.changelog/31567.txt b/.changelog/31567.txt new file mode 100644 index 00000000000..274b5de39a2 --- /dev/null +++ b/.changelog/31567.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_eip: Deprecate `vpc` attribute. Use `domain` instead +``` \ No newline at end of file diff --git a/.changelog/31568.txt b/.changelog/31568.txt new file mode 100644 index 00000000000..30a91965303 --- /dev/null +++ b/.changelog/31568.txt @@ -0,0 +1,8 @@ + +```release-note:enhancement +resource/aws_opensearch_domain: Removed `engine_version` default value +``` + +```release-note:note +resource/aws_opensearch_domain: The `engine_version` attribute no longer has a default value. When omitted, the underlying AWS API will use the latest OpenSearch engine version. 
+``` diff --git a/.changelog/31587.txt b/.changelog/31587.txt new file mode 100644 index 00000000000..19305eab49a --- /dev/null +++ b/.changelog/31587.txt @@ -0,0 +1,3 @@ +```release-note:bug +provider/tags: Fix crash when tags are `null` +``` \ No newline at end of file diff --git a/.changelog/31588.txt b/.changelog/31588.txt new file mode 100644 index 00000000000..95d31f93e61 --- /dev/null +++ b/.changelog/31588.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_route53_vpc_association_authorization: Fix `ConcurrentModification` error +``` diff --git a/.changelog/31604.txt b/.changelog/31604.txt new file mode 100644 index 00000000000..3ce15542038 --- /dev/null +++ b/.changelog/31604.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_glue_data_quality_ruleset +``` \ No newline at end of file diff --git a/.changelog/31607.txt b/.changelog/31607.txt new file mode 100644 index 00000000000..7344c3e368c --- /dev/null +++ b/.changelog/31607.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/aws_pipes_pipe: Add `enrichment_parameters` argument +``` + +```release-note:enhancement +resource/aws_pipes_pipe: Add `activemq_broker_parameters`, `dynamodb_stream_parameters`, `kinesis_stream_parameters`, `managed_streaming_kafka_parameters`, `rabbitmq_broker_parameters`, `self_managed_kafka_parameters` and `sqs_queue_parameters` attributes to the `source_parameters` configuration block. NOTE: Because we cannot easily test all this functionality, it is best effort and we ask for community help in testing +``` + +```release-note:enhancement +resource/aws_pipes_pipe: Add `batch_job_parameters`, `cloudwatch_logs_parameters`, `ecs_task_parameters`, `eventbridge_event_bus_parameters`, `http_parameters`, `kinesis_stream_parameters`, `lambda_function_parameters`, `redshift_data_parameters`, `sagemaker_pipeline_parameters`, `sqs_queue_parameters` and `step_function_state_machine_parameters` attributes to the `target_parameters` configuration block. 
NOTE: Because we cannot easily test all this functionality, it is best effort and we ask for community help in testing +``` \ No newline at end of file diff --git a/.changelog/31608.txt b/.changelog/31608.txt new file mode 100644 index 00000000000..d255c3e3401 --- /dev/null +++ b/.changelog/31608.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_cloudfront_distribution: Remove the upper limit on `origin_keepalive_timeout` +``` diff --git a/.changelog/31612.txt b/.changelog/31612.txt new file mode 100644 index 00000000000..57c9713a9ae --- /dev/null +++ b/.changelog/31612.txt @@ -0,0 +1,7 @@ +```release-note:note +resource/aws_redshift_cluster: The `aqua_configuration_status` argument is now ignored, since the AWS API ignores it and always returns `auto`. +``` + +```release-note:bug +resource/aws_redshift_cluster: No longer errors on deletion when status is `Maintenance` +``` diff --git a/.changelog/31646.txt b/.changelog/31646.txt new file mode 100644 index 00000000000..796b4732726 --- /dev/null +++ b/.changelog/31646.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_provisioned_concurrency_configuration: Add `skip_destroy` argument +``` diff --git a/.changelog/31647.txt b/.changelog/31647.txt new file mode 100644 index 00000000000..5c70b8c01f3 --- /dev/null +++ b/.changelog/31647.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_medialive_channel: Fix attribute spelling in `hls_cdn_settings` expander +``` \ No newline at end of file diff --git a/.changelog/31656.txt b/.changelog/31656.txt new file mode 100644 index 00000000000..7e432fa8687 --- /dev/null +++ b/.changelog/31656.txt @@ -0,0 +1,7 @@ +```release-note:breaking-change +resource/aws_iam_role: The `role_last_used` attribute has been removed. Use the `aws_iam_role` data source instead. +``` + +```release-note:note +resource/aws_iam_role: The `role_last_used` attribute has been removed. Use the `aws_iam_role` data source instead. 
See the community feedback provided in the [linked issue](https://github.com/hashicorp/terraform-provider-aws/issues/30861) for additional justification on this change. As the attribute is read-only, unlikely to be used as an input to another resource, and available in the corresponding data source, a breaking change in a minor version was deemed preferable to a long deprecation/removal cycle in this circumstance. +``` diff --git a/.changelog/31669.txt b/.changelog/31669.txt new file mode 100644 index 00000000000..95e8ac5b870 --- /dev/null +++ b/.changelog/31669.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_kendra_index: Persist `user_group_resolution_mode` value to state after creation +``` \ No newline at end of file diff --git a/.changelog/31680.txt b/.changelog/31680.txt new file mode 100644 index 00000000000..e688a1b8785 --- /dev/null +++ b/.changelog/31680.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_cleanrooms_collaboration +``` diff --git a/.changelog/31682.txt b/.changelog/31682.txt new file mode 100644 index 00000000000..817026ff074 --- /dev/null +++ b/.changelog/31682.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_organizations_policies_for_target +``` \ No newline at end of file diff --git a/.changelog/31683.txt b/.changelog/31683.txt new file mode 100644 index 00000000000..42f3a28ca19 --- /dev/null +++ b/.changelog/31683.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_ecs_service: Fix crash when just `alarms` is updated +``` \ No newline at end of file diff --git a/.changelog/31687.txt b/.changelog/31687.txt new file mode 100644 index 00000000000..64f53fb5625 --- /dev/null +++ b/.changelog/31687.txt @@ -0,0 +1,3 @@ +```release-note:bug +provider/tags: Fix crash when some `tags` are `null` and others are `computed` +``` \ No newline at end of file diff --git a/.changelog/31689.txt b/.changelog/31689.txt new file mode 100644 index 00000000000..f578880c6d8 --- /dev/null +++ b/.changelog/31689.txt @@ -0,0 +1,3 @@ 
+```release-note:bug +resource/aws_connect_instance: Fix crash when reading instances with `CREATION_FAILED` status +``` \ No newline at end of file diff --git a/.changelog/31691.txt b/.changelog/31691.txt new file mode 100644 index 00000000000..a526125df61 --- /dev/null +++ b/.changelog/31691.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_budgets_budget +``` \ No newline at end of file diff --git a/.changelog/31696.txt b/.changelog/31696.txt new file mode 100644 index 00000000000..be57fd2da32 --- /dev/null +++ b/.changelog/31696.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_ecr_pull_through_cache_rule +``` diff --git a/.changelog/31709.txt b/.changelog/31709.txt new file mode 100644 index 00000000000..e06ad5279b1 --- /dev/null +++ b/.changelog/31709.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_rds_cluster: Correctly update `db_cluster_instance_class` +``` \ No newline at end of file diff --git a/.changelog/31711.txt b/.changelog/31711.txt new file mode 100644 index 00000000000..a69db22f3a8 --- /dev/null +++ b/.changelog/31711.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_guardduty_finding_ids +``` diff --git a/.changelog/31715.txt b/.changelog/31715.txt new file mode 100644 index 00000000000..7f47edc7eaa --- /dev/null +++ b/.changelog/31715.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_autoscaling_group: Fix `The AutoRollback parameter cannot be set to true when the DesiredConfiguration parameter is empty` errors when refreshing instances +``` \ No newline at end of file diff --git a/.changelog/31716.txt b/.changelog/31716.txt new file mode 100644 index 00000000000..4f843a813c5 --- /dev/null +++ b/.changelog/31716.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_connect_security_profile: Set correct `tags` in state +``` \ No newline at end of file diff --git a/.changelog/31718.txt b/.changelog/31718.txt new file mode 100644 index 00000000000..8f6f30e7e70 --- /dev/null +++ b/.changelog/31718.txt @@ -0,0 
+1,7 @@ +```release-note:bug +provider: Limits size of HTTP response bodies in logs to 4 KB +``` + +```release-note:enhancement +provider: Increases size of HTTP request bodies in logs to 1 KB +``` diff --git a/.changelog/31735.txt b/.changelog/31735.txt new file mode 100644 index 00000000000..c683a3f7a20 --- /dev/null +++ b/.changelog/31735.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_dx_connection: Convert `vlan_id` from [`TypeString`](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-types#typestring) to [`TypeInt`](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-types#typeint) in [Terraform state](https://developer.hashicorp.com/terraform/language/state) for existing resources. This fixes a regression introduced in [v5.1.0](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#510-june--1-2023) causing `a number is required` errors +``` \ No newline at end of file diff --git a/.changelog/31745.txt b/.changelog/31745.txt new file mode 100644 index 00000000000..efe0b504440 --- /dev/null +++ b/.changelog/31745.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +provider: Adds `retry_mode` parameter +``` diff --git a/.changelog/31746.txt b/.changelog/31746.txt new file mode 100644 index 00000000000..fe66b33ad07 --- /dev/null +++ b/.changelog/31746.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_chime_voice_connector: Add tagging support +``` diff --git a/.changelog/31747.txt b/.changelog/31747.txt new file mode 100644 index 00000000000..1dc0421ecf8 --- /dev/null +++ b/.changelog/31747.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/aws_redshiftserverless_workgroup: Additional supported values for `config_parameter.parameter_key` +``` + +```release-note:bug +resource/aws_redshiftserverless_workgroup: Change `config_parameter` from `TypeList` to `TypeSet` as order is not significant +``` + +```release-note:bug +resource/aws_redshiftserverless_workgroup: Fix 
`ValidationException: Can't update multiple configurations at the same time` errors +``` \ No newline at end of file diff --git a/.changelog/31749.txt b/.changelog/31749.txt new file mode 100644 index 00000000000..c0efd601366 --- /dev/null +++ b/.changelog/31749.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_redshiftserverless_namespace: Fix perpetual `iam_roles` diffs when the namespace contains a workgroup +``` \ No newline at end of file diff --git a/.changelog/31752.txt b/.changelog/31752.txt new file mode 100644 index 00000000000..bce1a7b318b --- /dev/null +++ b/.changelog/31752.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_ec2_transit_gateway_connect_peer: Add `bgp_peer_address` and `bgp_transit_gateway_addresses` attributes +``` + +```release-note:enhancement +data/aws_ec2_transit_gateway_connect_peer: Add `bgp_peer_address` and `bgp_transit_gateway_addresses` attributes +``` diff --git a/.changelog/31755.txt b/.changelog/31755.txt new file mode 100644 index 00000000000..4bcd501601b --- /dev/null +++ b/.changelog/31755.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_sagemaker_model: Add `container.model_package_name` and `primary_container.model_package_name` arguments +``` \ No newline at end of file diff --git a/.changelog/31767.txt b/.changelog/31767.txt new file mode 100644 index 00000000000..f8ef58a9784 --- /dev/null +++ b/.changelog/31767.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_globalaccelerator_endpoint_group: Fix bug updating `endpoint_configuration.weight` to `0` +``` \ No newline at end of file diff --git a/.changelog/31772.txt b/.changelog/31772.txt new file mode 100644 index 00000000000..005bb0f81a5 --- /dev/null +++ b/.changelog/31772.txt @@ -0,0 +1,3 @@ +```release-note:bug +data-source/aws_redshift_cluster: Fix crash reading clusters in `modifying` state +``` \ No newline at end of file diff --git a/.changelog/31801.txt b/.changelog/31801.txt new file mode 100644 index 
00000000000..df134479af9 --- /dev/null +++ b/.changelog/31801.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_vpc_endpoint: Fix tagging error preventing use in ISO partitions +``` diff --git a/.changelog/31802.txt b/.changelog/31802.txt new file mode 100644 index 00000000000..d811382f06f --- /dev/null +++ b/.changelog/31802.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_finspace_kx_environment +``` diff --git a/.changelog/31803.txt b/.changelog/31803.txt new file mode 100644 index 00000000000..88d9e8aa57f --- /dev/null +++ b/.changelog/31803.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_finspace_kx_database +``` diff --git a/.changelog/31804.txt b/.changelog/31804.txt new file mode 100644 index 00000000000..b29dc89086f --- /dev/null +++ b/.changelog/31804.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_finspace_kx_user +``` diff --git a/.changelog/31806.txt b/.changelog/31806.txt new file mode 100644 index 00000000000..c467564e0d0 --- /dev/null +++ b/.changelog/31806.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_finspace_kx_cluster +``` diff --git a/.changelog/31826.txt b/.changelog/31826.txt new file mode 100644 index 00000000000..2cb06c5aa43 --- /dev/null +++ b/.changelog/31826.txt @@ -0,0 +1,3 @@ +```release-note:bug +provider/default_tags: Fix perpetual diff when identical tags are moved from `default_tags` to resource `tags`, and vice versa +``` \ No newline at end of file diff --git a/.changelog/31833.txt b/.changelog/31833.txt new file mode 100644 index 00000000000..11874055cb5 --- /dev/null +++ b/.changelog/31833.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_mwaa_environment: Consider `CREATING_SNAPSHOT` a valid pending state for resource update +``` + +```release-note:note +resource/aws_mwaa_environment: Upgrading your environment to a new major version of Apache Airflow forces replacement of the resource +``` diff --git a/.changelog/31842.txt b/.changelog/31842.txt new file mode 100644 index 
00000000000..d8ef283ea04 --- /dev/null +++ b/.changelog/31842.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_lambda_function: Add support for `ruby3.2` `runtime` value +``` + +```release-note:enhancement +resource/aws_lambda_layer_version: Add support for `ruby3.2` `compatible_runtimes` value +``` \ No newline at end of file diff --git a/.changelog/31844.txt b/.changelog/31844.txt new file mode 100644 index 00000000000..eccf355e1e1 --- /dev/null +++ b/.changelog/31844.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_medialive_channel: Fix spelling in `hls_cdn_settings` expander. +``` \ No newline at end of file diff --git a/.changelog/31858.txt b/.changelog/31858.txt new file mode 100644 index 00000000000..a5282852e04 --- /dev/null +++ b/.changelog/31858.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_ec2_instance_connect_endpoint +``` \ No newline at end of file diff --git a/.changelog/31863.txt b/.changelog/31863.txt new file mode 100644 index 00000000000..3c229164938 --- /dev/null +++ b/.changelog/31863.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_quicksight_data_set: Allow physical table map to be optional +``` diff --git a/.changelog/31873.txt b/.changelog/31873.txt new file mode 100644 index 00000000000..d6417821ee8 --- /dev/null +++ b/.changelog/31873.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_vpc_endpoint: Add `private_dns_only_for_inbound_resolver_endpoint` attribute to the `dns_options` configuration block +``` \ No newline at end of file diff --git a/.changelog/31877.txt b/.changelog/31877.txt new file mode 100644 index 00000000000..66f22e89b40 --- /dev/null +++ b/.changelog/31877.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_vpc: Fix `reading EC2 VPC (vpc-abcd1234) Attribute (enableDnsSupport): couldn't find resource` errors when reading new resource +``` \ No newline at end of file diff --git a/.changelog/31884.txt b/.changelog/31884.txt new file mode 100644 index 
00000000000..07b488d5168 --- /dev/null +++ b/.changelog/31884.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_redshift_cluster: Add `cluster_namespace_arn` attribute +``` + +```release-note:enhancement +data-source/aws_redshift_cluster: Add `cluster_namespace_arn` attribute +``` \ No newline at end of file diff --git a/.changelog/31886.txt b/.changelog/31886.txt new file mode 100644 index 00000000000..dc929d42934 --- /dev/null +++ b/.changelog/31886.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_redshift_cluster: Allow `availability_zone_relocation_enabled` to be `true` when `publicly_accessible` is `true` +``` diff --git a/.changelog/31900.txt b/.changelog/31900.txt new file mode 100644 index 00000000000..3398c3d4a18 --- /dev/null +++ b/.changelog/31900.txt @@ -0,0 +1,6 @@ +```release-note:new-data-source +aws_quicksight_theme +``` +```release-note:new-resource +aws_quicksight_theme +``` \ No newline at end of file diff --git a/.changelog/31901.txt b/.changelog/31901.txt new file mode 100644 index 00000000000..5fda2ef8732 --- /dev/null +++ b/.changelog/31901.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_quicksight_analysis: Fix assignment of KPI visual field well target values +``` \ No newline at end of file diff --git a/.changelog/31903.txt b/.changelog/31903.txt new file mode 100644 index 00000000000..6b8b8e07c4e --- /dev/null +++ b/.changelog/31903.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_quicksight_analysis: Fix schema mapping for string set elements +``` \ No newline at end of file diff --git a/.changelog/31904.txt b/.changelog/31904.txt new file mode 100644 index 00000000000..296bac2db9f --- /dev/null +++ b/.changelog/31904.txt @@ -0,0 +1,3 @@ +```release-note:note +resource/aws_lambda_function: The `replace_security_groups_on_destroy` and `replacement_security_group_ids` attributes are being deprecated as AWS no longer supports this operation. 
These attributes now have no effect, and will be removed in a future major version. +``` diff --git a/.changelog/31915.txt b/.changelog/31915.txt new file mode 100644 index 00000000000..7da2067aa2a --- /dev/null +++ b/.changelog/31915.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_secretsmanager_secret_rotation: Fix `InvalidParameterException: You cannot specify both rotation frequency and schedule expression together` errors on resource Update +``` \ No newline at end of file diff --git a/.changelog/31926.txt b/.changelog/31926.txt new file mode 100644 index 00000000000..100e4a3ba32 --- /dev/null +++ b/.changelog/31926.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_glue_data_quality_ruleset: Add `catalog_id` argument to `target_table` block +``` \ No newline at end of file diff --git a/.changelog/31928.txt b/.changelog/31928.txt new file mode 100644 index 00000000000..b710c6990ef --- /dev/null +++ b/.changelog/31928.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_ssm_default_patch_baseline: Fix `*conns.AWSClient is not ssm.ssmClient: missing method SSMClient` panic +``` \ No newline at end of file diff --git a/.changelog/31931.txt b/.changelog/31931.txt new file mode 100644 index 00000000000..ec716eaa117 --- /dev/null +++ b/.changelog/31931.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_event_source_mapping: The `queues` argument has changed from a set to a list with a maximum of one element. 
+``` diff --git a/.changelog/31933.txt b/.changelog/31933.txt new file mode 100644 index 00000000000..bc19467a49a --- /dev/null +++ b/.changelog/31933.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_lambda_provisioned_concurrency_config: The `function_name` argument now properly handles ARN values +``` diff --git a/.changelog/31937.txt b/.changelog/31937.txt new file mode 100644 index 00000000000..b8c978c1574 --- /dev/null +++ b/.changelog/31937.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_chimesdkvoice_sip_media_application +``` diff --git a/.changelog/31976.txt b/.changelog/31976.txt new file mode 100644 index 00000000000..0307bffa0b8 --- /dev/null +++ b/.changelog/31976.txt @@ -0,0 +1,7 @@ +```release-note:bug +resource/aws_elb: Recreate the resource if `subnets` is updated to an empty list +``` + +```release-note:enhancement +resource/aws_elb: Add configurable Create and Update timeouts +``` \ No newline at end of file diff --git a/.changelog/32004.txt b/.changelog/32004.txt new file mode 100644 index 00000000000..0484af4ef57 --- /dev/null +++ b/.changelog/32004.txt @@ -0,0 +1,3 @@ +```release-note:bug +provider: Fix `index out of range [0] with length 0` panic +``` \ No newline at end of file diff --git a/.changelog/32007.txt b/.changelog/32007.txt new file mode 100644 index 00000000000..3fbc22f9ebb --- /dev/null +++ b/.changelog/32007.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_config_configuration_recorder: Add `exclusion_by_resource_types` and `recording_strategy` attributes to the `recording_group` configuration block +``` \ No newline at end of file diff --git a/.changelog/32016.txt b/.changelog/32016.txt new file mode 100644 index 00000000000..5537436b70b --- /dev/null +++ b/.changelog/32016.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_eip: Fix `reading EC2 EIP (eipalloc-abcd1234): couldn't find resource` errors when reading new resource +``` diff --git a/.changelog/32023.txt b/.changelog/32023.txt new file 
mode 100644 index 00000000000..5595ba98dee --- /dev/null +++ b/.changelog/32023.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_networkmanager_attachment_accepter: Added support for Transit Gateway route table attachments +``` \ No newline at end of file diff --git a/.changelog/32026.txt b/.changelog/32026.txt new file mode 100644 index 00000000000..d076a1345b5 --- /dev/null +++ b/.changelog/32026.txt @@ -0,0 +1,7 @@ +```release-note:new-data-source +aws_sesv2_email_identity +``` + +```release-note:new-data-source +aws_sesv2_email_identity_mail_from_attributes +``` diff --git a/.changelog/32056.txt b/.changelog/32056.txt new file mode 100644 index 00000000000..be335df7326 --- /dev/null +++ b/.changelog/32056.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +data-source/aws_organizations_organization: Return the full set of attributes when running as a [delegated administrator for AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_delegate_policies.html) +``` + +```release-note:new-resource +aws_organizations_resource_policy +``` \ No newline at end of file diff --git a/.changelog/32067.txt b/.changelog/32067.txt new file mode 100644 index 00000000000..d63eb98451b --- /dev/null +++ b/.changelog/32067.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_redshiftserverless_workgroup: Fix `waiting for completion: unexpected state 'AVAILABLE'` errors when deleting resource +``` \ No newline at end of file diff --git a/.changelog/32070.txt b/.changelog/32070.txt new file mode 100644 index 00000000000..b481c3c829f --- /dev/null +++ b/.changelog/32070.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_chimesdkvoice_sip_rule +``` diff --git a/.changelog/32102.txt b/.changelog/32102.txt new file mode 100644 index 00000000000..9cf866f750d --- /dev/null +++ b/.changelog/32102.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fis_experiment_template: Add `log_configuration` configuration block +``` \ No newline 
at end of file diff --git a/.changelog/32108.txt b/.changelog/32108.txt new file mode 100644 index 00000000000..f937fa38162 --- /dev/null +++ b/.changelog/32108.txt @@ -0,0 +1,11 @@ +```release-note:note +provider: Updates to Go 1.20, the last release that will run on macOS 10.13 High Sierra or 10.14 Mojave. A future release will update to Go 1.21, and these platforms will no longer be supported. +``` + +```release-note:note +provider: Updates to Go 1.20, the last release that will run on any release of Windows 7, 8, Server 2008 and Server 2012. A future release will update to Go 1.21, and these platforms will no longer be supported. +``` + +```release-note:note +provider: Updates to Go 1.20. The provider will now notice the `trust-ad` option in `/etc/resolv.conf` and, if set, will set the "authentic data" option in outgoing DNS requests in order to better match the behavior of the GNU libc resolver. +``` diff --git a/.changelog/32148.txt b/.changelog/32148.txt new file mode 100644 index 00000000000..f1246f4ed3a --- /dev/null +++ b/.changelog/32148.txt @@ -0,0 +1,7 @@ +```release-note:bug +resource/aws_vpc_security_group_egress_rule: `security_group_id` is Required +``` + +```release-note:bug +resource/aws_vpc_security_group_ingress_rule: `security_group_id` is Required +``` diff --git a/.changelog/32152.txt b/.changelog/32152.txt new file mode 100644 index 00000000000..73ae6532dbe --- /dev/null +++ b/.changelog/32152.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fis_experiment_template: Add support for `Pods` and `Tasks` to `action.*.target` +``` diff --git a/.changelog/32160.txt b/.changelog/32160.txt new file mode 100644 index 00000000000..41a064ba6a4 --- /dev/null +++ b/.changelog/32160.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fis_experiment_template: Add `parameters` attribute to the `target` configuration block +``` \ No newline at end of file diff --git a/.changelog/32169.txt b/.changelog/32169.txt new file mode 100644 
index 00000000000..2abe2d405d2 --- /dev/null +++ b/.changelog/32169.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_api_gateway_rest_api: Fix crash when `binary_media_types` is `null` +``` \ No newline at end of file diff --git a/.changelog/32171.txt b/.changelog/32171.txt new file mode 100644 index 00000000000..bd2d89cdd7b --- /dev/null +++ b/.changelog/32171.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_storagegateway_smb_file_share: Fix update error when `kms_encrypted` is `true` but `kms_key_arn` is not sent in the request +``` \ No newline at end of file diff --git a/.changelog/32174.txt b/.changelog/32174.txt new file mode 100644 index 00000000000..f3179d0b1fc --- /dev/null +++ b/.changelog/32174.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +provider: Mask all sensitive values that appear when `TF_LOG` level is `TRACE` +``` \ No newline at end of file diff --git a/.changelog/32176.txt b/.changelog/32176.txt new file mode 100644 index 00000000000..c6d6e530605 --- /dev/null +++ b/.changelog/32176.txt @@ -0,0 +1,15 @@ +```release-note:new-resource +aws_sfn_alias +``` + +```release-note:new-data-source +aws_sfn_alias +``` + +```release-note:new-data-source +aws_sfn_state_machine_versions +``` + +```release-note:enhancement +resource/aws_sfn_state_machine: Add `description`, `publish`, `revision_id`, `state_machine_version_arn` and `version_description` attributes +``` \ No newline at end of file diff --git a/.changelog/32196.txt b/.changelog/32196.txt new file mode 100644 index 00000000000..67beca78654 --- /dev/null +++ b/.changelog/32196.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_route: Fix `reading Route in Route Table (rtb-1234abcd) with destination (1.2.3.4/5): couldn't find resource` errors when reading new resource +``` \ No newline at end of file diff --git a/.changelog/32200.txt b/.changelog/32200.txt new file mode 100644 index 00000000000..abf4dbd8780 --- /dev/null +++ b/.changelog/32200.txt @@ -0,0 +1,3 @@ 
+```release-note:enhancement +resource/aws_batch_compute_environment: Add `placement_group` attribute to the `compute_resources` configuration block +``` \ No newline at end of file diff --git a/.changelog/32203.txt b/.changelog/32203.txt new file mode 100644 index 00000000000..a088dee0865 --- /dev/null +++ b/.changelog/32203.txt @@ -0,0 +1,15 @@ +```release-note:new-resource +aws_transfer_profile +``` + +```release-note:new-resource +aws_transfer_certificate +``` + +```release-note:new-resource +aws_transfer_agreement +``` + +```release-note:new-resource +aws_transfer_connector +``` \ No newline at end of file diff --git a/.changelog/32226.txt b/.changelog/32226.txt new file mode 100644 index 00000000000..dcec0198a1d --- /dev/null +++ b/.changelog/32226.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_opensearchserverless_security_policy +``` \ No newline at end of file diff --git a/.changelog/32231.txt b/.changelog/32231.txt new file mode 100644 index 00000000000..a8a4c136a23 --- /dev/null +++ b/.changelog/32231.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_opensearchserverless_access_policy +``` \ No newline at end of file diff --git a/.changelog/32247.txt b/.changelog/32247.txt new file mode 100644 index 00000000000..f59487ddd3a --- /dev/null +++ b/.changelog/32247.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_opensearchserverless_collection +``` \ No newline at end of file diff --git a/.changelog/32276.txt b/.changelog/32276.txt new file mode 100644 index 00000000000..8cb046fbcca --- /dev/null +++ b/.changelog/32276.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_opensearchserverless_vpc_endpoint +``` \ No newline at end of file diff --git a/.changelog/32278.txt b/.changelog/32278.txt new file mode 100644 index 00000000000..7ebd393ad8a --- /dev/null +++ b/.changelog/32278.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_emrserverless_application: Do not recreate the resource if `release_label` changes 
+``` \ No newline at end of file diff --git a/.changelog/32283.txt b/.changelog/32283.txt new file mode 100644 index 00000000000..ead62dbb895 --- /dev/null +++ b/.changelog/32283.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_glue_catalog_database: Add `target_database.region` argument +``` \ No newline at end of file diff --git a/.changelog/32287.txt b/.changelog/32287.txt new file mode 100644 index 00000000000..aa573ee6132 --- /dev/null +++ b/.changelog/32287.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_db_instance: Fix resource Create returning instances not in the `available` state when `identifier_prefix` is specified +``` diff --git a/.changelog/32297.txt b/.changelog/32297.txt new file mode 100644 index 00000000000..49dcac6c5e2 --- /dev/null +++ b/.changelog/32297.txt @@ -0,0 +1,3 @@ +```release-note:bug +provider: Prevent resource recreation if `tags` or `tags_all` are updated +``` \ No newline at end of file diff --git a/.changelog/32317.txt b/.changelog/32317.txt new file mode 100644 index 00000000000..c68807279da --- /dev/null +++ b/.changelog/32317.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_s3_bucket: Fix `InvalidArgument: Invalid attribute name specified` errors when listing S3 Bucket objects, caused by an [AWS SDK for Go regression](https://github.com/aws/aws-sdk-go/issues/4897) +``` \ No newline at end of file diff --git a/.changelog/32321.txt b/.changelog/32321.txt new file mode 100644 index 00000000000..e0c67113799 --- /dev/null +++ b/.changelog/32321.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_opensearchserverless_security_config +``` \ No newline at end of file diff --git a/.changelog/32327.txt b/.changelog/32327.txt new file mode 100644 index 00000000000..a25d270e978 --- /dev/null +++ b/.changelog/32327.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_function: Support `code_signing_config_arn` in the `ap-east-1` AWS Region +``` \ No newline at end of file diff --git 
a/.changelog/32329.txt b/.changelog/32329.txt new file mode 100644 index 00000000000..d69669f7cea --- /dev/null +++ b/.changelog/32329.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_wafregional_web_acl_association: Increase creation timeout value from 2 to 5 minutes to prevent WAFUnavailableEntityException +``` diff --git a/.changelog/32332.txt b/.changelog/32332.txt new file mode 100644 index 00000000000..bc261a76c3b --- /dev/null +++ b/.changelog/32332.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_glue_crawler: Add `iceberg_target` configuration block +``` \ No newline at end of file diff --git a/.changelog/32339.txt b/.changelog/32339.txt new file mode 100644 index 00000000000..9d90109ec77 --- /dev/null +++ b/.changelog/32339.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_keyspaces_table: Add `client_side_timestamps` configuration block +``` \ No newline at end of file diff --git a/.changelog/32342.txt b/.changelog/32342.txt new file mode 100644 index 00000000000..efb4e943c6e --- /dev/null +++ b/.changelog/32342.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_service_discovery_private_dns_namespace: Allow `description` to be updated in-place +``` + +```release-note:enhancement +resource/aws_service_discovery_public_dns_namespace: Allow `description` to be updated in-place +``` \ No newline at end of file diff --git a/.changelog/32343.txt b/.changelog/32343.txt new file mode 100644 index 00000000000..15bda9ef708 --- /dev/null +++ b/.changelog/32343.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_internetmonitor_monitor: Add `health_events_config` configuration block +``` \ No newline at end of file diff --git a/.changelog/32345.txt b/.changelog/32345.txt new file mode 100644 index 00000000000..07c541efdd9 --- /dev/null +++ b/.changelog/32345.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_qldb_stream: Add configurable Create and Delete timeouts +``` \ No newline at end of 
file diff --git a/.changelog/32352.txt b/.changelog/32352.txt new file mode 100644 index 00000000000..2323a089913 --- /dev/null +++ b/.changelog/32352.txt @@ -0,0 +1,3 @@ +```release-note:bug +provider: Correctly handle `forbidden_account_ids` +``` diff --git a/.changelog/32354.txt b/.changelog/32354.txt new file mode 100644 index 00000000000..e635ab19be0 --- /dev/null +++ b/.changelog/32354.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_timestreamwrite_table: Add `schema` configuration block +``` \ No newline at end of file diff --git a/.changelog/32355.txt b/.changelog/32355.txt new file mode 100644 index 00000000000..d3dd2db718a --- /dev/null +++ b/.changelog/32355.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_vpc_endpoint: Fix `InvalidParameter: PrivateDnsOnlyForInboundResolverEndpoint not supported for this service` errors creating S3 _Interface_ VPC endpoints +``` \ No newline at end of file diff --git a/.changelog/32371.txt b/.changelog/32371.txt new file mode 100644 index 00000000000..2de6f5655fb --- /dev/null +++ b/.changelog/32371.txt @@ -0,0 +1,15 @@ +```release-note:bug +resource/aws_kms_external_key: Correctly remove all tags +``` + +```release-note:bug +resource/aws_kms_key: Correctly remove all tags +``` + +```release-note:bug +resource/aws_kms_replica_external_key: Correctly remove all tags +``` + +```release-note:bug +resource/aws_kms_replica_key: Correctly remove all tags +``` diff --git a/.changelog/32372.txt b/.changelog/32372.txt new file mode 100644 index 00000000000..2453cc73d35 --- /dev/null +++ b/.changelog/32372.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_ssm_parameter: Skip Update if only `overwrite` parameter changes +``` diff --git a/.changelog/32439.txt b/.changelog/32439.txt new file mode 100644 index 00000000000..c4d8922ad95 --- /dev/null +++ b/.changelog/32439.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_config_config_rule: Prevent crash on nil describe output +``` diff --git 
a/.changelog/32454.txt b/.changelog/32454.txt new file mode 100644 index 00000000000..094ccc8c54a --- /dev/null +++ b/.changelog/32454.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_mq_broker: default `replication_user` to `false` +``` diff --git a/.changelog/32462.txt b/.changelog/32462.txt new file mode 100644 index 00000000000..7ac8b9a0549 --- /dev/null +++ b/.changelog/32462.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_iam_virtual_mfa_device: Add `enable_date` and `user_name` attributes +``` diff --git a/.changelog/32464.txt b/.changelog/32464.txt new file mode 100644 index 00000000000..7b7a95cfb70 --- /dev/null +++ b/.changelog/32464.txt @@ -0,0 +1,35 @@ +```release-note:bug +resource/aws_quicksight_template: Fix exception thrown when specifying `definition.sheets.visuals.bar_chart_visual.chart_configuration.category_axis.scrollbar_options.visible_range` +``` + +```release-note:bug +resource/aws_quicksight_template: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_options.selected_field_options.visibility` +``` + +```release-note:bug +resource/aws_quicksight_template: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_wells.pivot_table_aggregated_field_wells.rows` +``` + +```release-note:bug +resource/aws_quicksight_dashboard: Fix exception thrown when specifying `definition.sheets.visuals.bar_chart_visual.chart_configuration.category_axis.scrollbar_options.visible_range` +``` + +```release-note:bug +resource/aws_quicksight_dashboard: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_options.selected_field_options.visibility` +``` + +```release-note:bug +resource/aws_quicksight_dashboard: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_wells.pivot_table_aggregated_field_wells.rows` +``` + 
+```release-note:bug +resource/aws_quicksight_analysis: Fix exception thrown when specifying `definition.sheets.visuals.bar_chart_visual.chart_configuration.category_axis.scrollbar_options.visible_range` +``` + +```release-note:bug +resource/aws_quicksight_analysis: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_options.selected_field_options.visibility` +``` + +```release-note:bug +resource/aws_quicksight_analysis: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_wells.pivot_table_aggregated_field_wells.rows` +``` diff --git a/.changelog/35970.txt b/.changelog/35970.txt new file mode 100644 index 00000000000..cb01c838ca0 --- /dev/null +++ b/.changelog/35970.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_opensearch_domain: Add `off_peak_window_options` configuration block +``` + +```release-note:enhancement +data-source/aws_opensearch_domain: Add `off_peak_window_options` attribute +``` \ No newline at end of file diff --git a/.ci/.golangci2.yml b/.ci/.golangci2.yml index adb1c7804f5..81350b4bfe0 100644 --- a/.ci/.golangci2.yml +++ b/.ci/.golangci2.yml @@ -38,6 +38,10 @@ issues: - staticcheck path: internal/service/batch/ text: "SA1019: apiObject.ImageId is deprecated: This field is deprecated" + - linters: + - staticcheck + path: internal/service/chime/ + text: "SA1019: conn.\\w+ is deprecated: Replaced by \\w+ in the Amazon Chime SDK Voice Namespace" - linters: - staticcheck path: "internal/service/cloudfront" diff --git a/.ci/.semgrep-service-name0.yml b/.ci/.semgrep-service-name0.yml index 10e20096620..99438359d08 100644 --- a/.ci/.semgrep-service-name0.yml +++ b/.ci/.semgrep-service-name0.yml @@ -3405,3 +3405,32 @@ rules: - pattern-regex: "(?i)ConfigService" - pattern-not-regex: ^TestAcc.* severity: WARNING + - id: configservice-in-test-name + languages: + - go + message: Include "ConfigService" in test name + paths: + 
include: + - internal/service/configservice/*_test.go + patterns: + - pattern: func $NAME( ... ) { ... } + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-not-regex: "^TestAccConfigService" + - pattern-regex: ^TestAcc.* + severity: WARNING + - id: configservice-in-const-name + languages: + - go + message: Do not use "ConfigService" in const name inside configservice package + paths: + include: + - internal/service/configservice + patterns: + - pattern: const $NAME = ... + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)ConfigService" + severity: WARNING diff --git a/.ci/.semgrep-service-name1.yml b/.ci/.semgrep-service-name1.yml index 45148c64670..cc00082037a 100644 --- a/.ci/.semgrep-service-name1.yml +++ b/.ci/.semgrep-service-name1.yml @@ -1,34 +1,5 @@ # Generated by internal/generate/servicesemgrep/main.go; DO NOT EDIT. rules: - - id: configservice-in-test-name - languages: - - go - message: Include "ConfigService" in test name - paths: - include: - - internal/service/configservice/*_test.go - patterns: - - pattern: func $NAME( ... ) { ... } - - metavariable-pattern: - metavariable: $NAME - patterns: - - pattern-not-regex: "^TestAccConfigService" - - pattern-regex: ^TestAcc.* - severity: WARNING - - id: configservice-in-const-name - languages: - - go - message: Do not use "ConfigService" in const name inside configservice package - paths: - include: - - internal/service/configservice - patterns: - - pattern: const $NAME = ... - - metavariable-pattern: - metavariable: $NAME - patterns: - - pattern-regex: "(?i)ConfigService" - severity: WARNING - id: configservice-in-var-name languages: - go @@ -2393,6 +2364,64 @@ rules: patterns: - pattern-regex: "(?i)Evidently" severity: WARNING + - id: finspace-in-func-name + languages: + - go + message: Do not use "FinSpace" in func name inside finspace package + paths: + include: + - internal/service/finspace + patterns: + - pattern: func $NAME( ... ) { ... 
} + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)FinSpace" + - pattern-not-regex: ^TestAcc.* + severity: WARNING + - id: finspace-in-test-name + languages: + - go + message: Include "FinSpace" in test name + paths: + include: + - internal/service/finspace/*_test.go + patterns: + - pattern: func $NAME( ... ) { ... } + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-not-regex: "^TestAccFinSpace" + - pattern-regex: ^TestAcc.* + severity: WARNING + - id: finspace-in-const-name + languages: + - go + message: Do not use "FinSpace" in const name inside finspace package + paths: + include: + - internal/service/finspace + patterns: + - pattern: const $NAME = ... + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)FinSpace" + severity: WARNING + - id: finspace-in-var-name + languages: + - go + message: Do not use "FinSpace" in var name inside finspace package + paths: + include: + - internal/service/finspace + patterns: + - pattern: var $NAME = ... + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)FinSpace" + severity: WARNING - id: firehose-in-func-name languages: - go diff --git a/.ci/.semgrep-service-name2.yml b/.ci/.semgrep-service-name2.yml index 26f7d500f6b..f6645fb5ee3 100644 --- a/.ci/.semgrep-service-name2.yml +++ b/.ci/.semgrep-service-name2.yml @@ -3392,3 +3392,32 @@ rules: - pattern-regex: "(?i)RedshiftData" - pattern-not-regex: ^TestAcc.* severity: WARNING + - id: redshiftdata-in-test-name + languages: + - go + message: Include "RedshiftData" in test name + paths: + include: + - internal/service/redshiftdata/*_test.go + patterns: + - pattern: func $NAME( ... ) { ... 
} + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-not-regex: "^TestAccRedshiftData" + - pattern-regex: ^TestAcc.* + severity: WARNING + - id: redshiftdata-in-const-name + languages: + - go + message: Do not use "RedshiftData" in const name inside redshiftdata package + paths: + include: + - internal/service/redshiftdata + patterns: + - pattern: const $NAME = ... + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)RedshiftData" + severity: WARNING diff --git a/.ci/.semgrep-service-name3.yml b/.ci/.semgrep-service-name3.yml index 6d40269a840..6401a6d9f05 100644 --- a/.ci/.semgrep-service-name3.yml +++ b/.ci/.semgrep-service-name3.yml @@ -1,34 +1,5 @@ # Generated by internal/generate/servicesemgrep/main.go; DO NOT EDIT. rules: - - id: redshiftdata-in-test-name - languages: - - go - message: Include "RedshiftData" in test name - paths: - include: - - internal/service/redshiftdata/*_test.go - patterns: - - pattern: func $NAME( ... ) { ... } - - metavariable-pattern: - metavariable: $NAME - patterns: - - pattern-not-regex: "^TestAccRedshiftData" - - pattern-regex: ^TestAcc.* - severity: WARNING - - id: redshiftdata-in-const-name - languages: - - go - message: Do not use "RedshiftData" in const name inside redshiftdata package - paths: - include: - - internal/service/redshiftdata - patterns: - - pattern: const $NAME = ... - - metavariable-pattern: - metavariable: $NAME - patterns: - - pattern-regex: "(?i)RedshiftData" - severity: WARNING - id: redshiftdata-in-var-name languages: - go @@ -2913,6 +2884,64 @@ rules: - pattern-not-regex: "^TestAccVerifiedAccess" - pattern-regex: ^TestAcc.* severity: WARNING + - id: verifiedpermissions-in-func-name + languages: + - go + message: Do not use "VerifiedPermissions" in func name inside verifiedpermissions package + paths: + include: + - internal/service/verifiedpermissions + patterns: + - pattern: func $NAME( ... ) { ... 
} + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)VerifiedPermissions" + - pattern-not-regex: ^TestAcc.* + severity: WARNING + - id: verifiedpermissions-in-test-name + languages: + - go + message: Include "VerifiedPermissions" in test name + paths: + include: + - internal/service/verifiedpermissions/*_test.go + patterns: + - pattern: func $NAME( ... ) { ... } + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-not-regex: "^TestAccVerifiedPermissions" + - pattern-regex: ^TestAcc.* + severity: WARNING + - id: verifiedpermissions-in-const-name + languages: + - go + message: Do not use "VerifiedPermissions" in const name inside verifiedpermissions package + paths: + include: + - internal/service/verifiedpermissions + patterns: + - pattern: const $NAME = ... + - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)VerifiedPermissions" + severity: WARNING + - id: verifiedpermissions-in-var-name + languages: + - go + message: Do not use "VerifiedPermissions" in var name inside verifiedpermissions package + paths: + include: + - internal/service/verifiedpermissions + patterns: + - pattern: var $NAME = ... 
+ - metavariable-pattern: + metavariable: $NAME + patterns: + - pattern-regex: "(?i)VerifiedPermissions" + severity: WARNING - id: vpc-in-test-name languages: - go diff --git a/.ci/.semgrep.yml b/.ci/.semgrep.yml index a94c793083f..79b047015cb 100644 --- a/.ci/.semgrep.yml +++ b/.ci/.semgrep.yml @@ -13,96 +13,6 @@ rules: - focus-metavariable: $FUNCNAME severity: WARNING - - id: aws-sdk-go-multiple-service-imports - languages: [go] - message: Resources should not implement multiple AWS service functionality - paths: - include: - - internal/ - exclude: - - "internal/service/**/*_test.go" - - "internal/service/**/sweep.go" - - "internal/acctest/acctest.go" - - "internal/conns/**/*.go" - patterns: - - pattern: | - import ("$X") - import ("$Y") - - metavariable-regex: - metavariable: "$X" - regex: '^"github.com/aws/aws-sdk-go/service/[^/]+"$' - - metavariable-regex: - metavariable: "$Y" - regex: '^"github.com/aws/aws-sdk-go/service/[^/]+"$' - # wafregional uses a number of resources from waf - - pattern-not: | - import ("github.com/aws/aws-sdk-go/service/waf") - import ("github.com/aws/aws-sdk-go/service/wafregional") - severity: WARNING - - - id: prefer-aws-go-sdk-pointer-conversion-assignment - languages: [go] - message: Prefer AWS Go SDK pointer conversion functions for dereferencing during assignment, e.g. aws.StringValue() - paths: - include: - - internal/service - exclude: - - "internal/service/**/*_test.go" - patterns: - - pattern: "$LHS = *$RHS" - - pattern-not: "*$LHS2 = *$RHS" - severity: WARNING - - - id: prefer-aws-go-sdk-pointer-conversion-conditional - languages: [go] - message: Prefer AWS Go SDK pointer conversion functions for dereferencing during conditionals, e.g. 
aws.StringValue() - paths: - include: - - internal/service - exclude: - - "internal/service/**/*_test.go" - patterns: - - pattern-either: - - pattern: "$LHS == *$RHS" - - pattern: "$LHS != *$RHS" - - pattern: "$LHS > *$RHS" - - pattern: "$LHS < *$RHS" - - pattern: "$LHS >= *$RHS" - - pattern: "$LHS <= *$RHS" - - pattern: "*$LHS == $RHS" - - pattern: "*$LHS != $RHS" - - pattern: "*$LHS > $RHS" - - pattern: "*$LHS < $RHS" - - pattern: "*$LHS >= $RHS" - - pattern: "*$LHS <= $RHS" - severity: WARNING - - - id: aws-go-sdk-pointer-conversion-ResourceData-SetId - fix: d.SetId(aws.StringValue($VALUE)) - languages: [go] - message: Prefer AWS Go SDK pointer conversion aws.StringValue() function for dereferencing during d.SetId() - paths: - include: - - internal/ - pattern: "d.SetId(*$VALUE)" - severity: WARNING - - - id: aws-go-sdk-pointer-conversion-immediate-dereference - fix: $VALUE - languages: [go] - message: Using AWS Go SDK pointer conversion, e.g. aws.String(), with immediate dereferencing is extraneous - paths: - include: - - internal/ - patterns: - - pattern-either: - - pattern: "*aws.Bool($VALUE)" - - pattern: "*aws.Float64($VALUE)" - - pattern: "*aws.Int64($VALUE)" - - pattern: "*aws.String($VALUE)" - - pattern: "*aws.Time($VALUE)" - severity: WARNING - - id: data-source-with-resource-read languages: [go] message: Calling a resource's Read method from within a data-source is discouraged @@ -157,6 +67,19 @@ rules: - pattern: var $VAR = fmt.Sprintf(..., <... acctest.RandomWithPrefix(...) ...>, ...) severity: WARNING + - id: helper-schema-Elem-check-valid-type + languages: [go] + message: Elem must be either a *schema.Schema or *schema.Resource type + paths: + include: + - internal/service/**/*.go + exclude: + - internal/service/**/*_data_source.go + patterns: + - pattern-inside: "Schema: map[string]*schema.Schema{ ... 
}" + - pattern-regex: "Elem:[ ]*schema.Type[a-zA-Z]*," + severity: WARNING + - id: helper-schema-Set-extraneous-NewSet-with-flattenStringList languages: [go] message: Prefer `flex.FlattenStringSet()` or `flex.FlattenStringValueSet()` @@ -203,23 +126,6 @@ rules: - pattern: if $VALUE, $OK := d.GetOk($KEY); $OK && len($VALUE.(string)) > 0 { $BODY } severity: WARNING - - id: helper-schema-ResourceData-Set-extraneous-value-pointer-conversion - fix: d.Set($ATTRIBUTE, $APIOBJECT) - languages: [go] - message: AWS Go SDK pointer conversion function for `d.Set()` value is extraneous - paths: - include: - - internal/ - patterns: - - pattern-either: - - pattern: d.Set($ATTRIBUTE, aws.BoolValue($APIOBJECT)) - - pattern: d.Set($ATTRIBUTE, aws.Float64Value($APIOBJECT)) - - pattern: d.Set($ATTRIBUTE, aws.IntValue($APIOBJECT)) - - pattern: d.Set($ATTRIBUTE, aws.Int64Value($APIOBJECT)) - - pattern: d.Set($ATTRIBUTE, int(aws.Int64Value($APIOBJECT))) - - pattern: d.Set($ATTRIBUTE, aws.StringValue($APIOBJECT)) - severity: WARNING - - id: helper-schema-ResourceData-Set-extraneous-nil-check languages: [go] message: Nil value check before `d.Set()` is extraneous @@ -684,16 +590,6 @@ rules: - pattern-regex: ProviderFactories:\s+(acctest\.)?ProviderFactories, severity: WARNING - - id: prefer-aws-go-sdk-pointer-conversion-int-conversion-int64-pointer - fix: $VALUE - languages: [go] - message: Prefer AWS Go SDK pointer conversion functions for dereferencing when converting int64 to int - paths: - include: - - internal/ - pattern: int(*$VALUE) - severity: WARNING - - id: fmt-Errorf-awserr-Error-Code languages: [go] message: Prefer `err` with `%w` format verb instead of `err.Code()` or `err.Message()` @@ -725,20 +621,3 @@ rules: patterns: - pattern-regex: "(Create|Read|Update|Delete)Context:" severity: ERROR - - - id: bare-error-returns - languages: [go] - message: API errors should be wrapped with the CRUD info - paths: - include: - - internal/service - patterns: - - pattern: return $ERR - - 
pattern-inside: | - func $FUNC($D *schema.ResourceData, ...) error { - ... - } - - metavariable-regex: - metavariable: $ERR - regex: "[Ee]rr(?!ors\\.)" - severity: ERROR diff --git a/.ci/.tflint.hcl b/.ci/.tflint.hcl index 82ce377b904..77821358b44 100644 --- a/.ci/.tflint.hcl +++ b/.ci/.tflint.hcl @@ -1,13 +1,9 @@ plugin "aws" { enabled = true - version = "0.17.1" + version = "0.23.1" source = "github.com/terraform-linters/tflint-ruleset-aws" } rule "aws_acm_certificate_lifecycle" { enabled = false } - -rule "aws_route_not_specified_target" { - enabled = false -} diff --git a/.ci/providerlint/go.mod b/.ci/providerlint/go.mod index fefaa9cc537..ac567f0b467 100644 --- a/.ci/providerlint/go.mod +++ b/.ci/providerlint/go.mod @@ -1,39 +1,41 @@ module github.com/hashicorp/terraform-provider-aws/ci/providerlint -go 1.19 +go 1.20 require ( - github.com/aws/aws-sdk-go v1.44.261 + github.com/aws/aws-sdk-go v1.44.299 github.com/bflad/tfproviderlint v0.29.0 - github.com/hashicorp/terraform-plugin-sdk/v2 v2.26.1 + github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0 golang.org/x/tools v0.8.0 ) require ( + github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 // indirect github.com/agext/levenshtein v1.2.2 // indirect github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect github.com/bflad/gopaniccheck v0.1.0 // indirect + github.com/cloudflare/circl v1.3.3 // indirect github.com/fatih/color v1.13.0 // indirect - github.com/golang/protobuf v1.5.2 // indirect + github.com/golang/protobuf v1.5.3 // indirect github.com/google/go-cmp v0.5.9 // indirect github.com/hashicorp/errwrap v1.0.0 // indirect github.com/hashicorp/go-checkpoint v0.5.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 // indirect - github.com/hashicorp/go-hclog v1.4.0 // indirect + github.com/hashicorp/go-hclog v1.5.0 // indirect github.com/hashicorp/go-multierror v1.1.1 // indirect - github.com/hashicorp/go-plugin 
v1.4.8 // indirect + github.com/hashicorp/go-plugin v1.4.10 // indirect github.com/hashicorp/go-uuid v1.0.3 // indirect github.com/hashicorp/go-version v1.6.0 // indirect - github.com/hashicorp/hc-install v0.5.0 // indirect - github.com/hashicorp/hcl/v2 v2.16.2 // indirect + github.com/hashicorp/hc-install v0.5.2 // indirect + github.com/hashicorp/hcl/v2 v2.17.0 // indirect github.com/hashicorp/logutils v1.0.0 // indirect github.com/hashicorp/terraform-exec v0.18.1 // indirect - github.com/hashicorp/terraform-json v0.16.0 // indirect - github.com/hashicorp/terraform-plugin-go v0.14.3 // indirect - github.com/hashicorp/terraform-plugin-log v0.8.0 // indirect - github.com/hashicorp/terraform-registry-address v0.1.0 // indirect - github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 // indirect + github.com/hashicorp/terraform-json v0.17.0 // indirect + github.com/hashicorp/terraform-plugin-go v0.16.0 // indirect + github.com/hashicorp/terraform-plugin-log v0.9.0 // indirect + github.com/hashicorp/terraform-registry-address v0.2.1 // indirect + github.com/hashicorp/terraform-svchost v0.1.1 // indirect github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect github.com/mattn/go-colorable v0.1.12 // indirect github.com/mattn/go-isatty v0.0.14 // indirect @@ -44,16 +46,16 @@ require ( github.com/mitchellh/reflectwalk v1.0.2 // indirect github.com/oklog/run v1.0.0 // indirect github.com/vmihailenco/msgpack v4.0.4+incompatible // indirect - github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect - github.com/vmihailenco/tagparser v0.1.1 // indirect - github.com/zclconf/go-cty v1.13.1 // indirect - golang.org/x/crypto v0.8.0 // indirect + github.com/vmihailenco/msgpack/v5 v5.3.5 // indirect + github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect + github.com/zclconf/go-cty v1.13.2 // indirect + golang.org/x/crypto v0.10.0 // indirect golang.org/x/mod v0.10.0 // indirect - golang.org/x/net v0.9.0 // indirect - golang.org/x/sys v0.7.0 // 
indirect - golang.org/x/text v0.9.0 // indirect - google.golang.org/appengine v1.6.6 // indirect - google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d // indirect - google.golang.org/grpc v1.51.0 // indirect - google.golang.org/protobuf v1.28.1 // indirect + golang.org/x/net v0.11.0 // indirect + golang.org/x/sys v0.9.0 // indirect + golang.org/x/text v0.10.0 // indirect + google.golang.org/appengine v1.6.7 // indirect + google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect + google.golang.org/grpc v1.56.0 // indirect + google.golang.org/protobuf v1.30.0 // indirect ) diff --git a/.ci/providerlint/go.sum b/.ci/providerlint/go.sum index 09a43f386c1..02a55770e15 100644 --- a/.ci/providerlint/go.sum +++ b/.ci/providerlint/go.sum @@ -1,297 +1,182 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= -github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs= -github.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk= -github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= -github.com/Microsoft/go-winio v0.4.16 h1:FtSW/jqD+l4ba5iPBj9CODVtgfYAD8w2wS923g/cFDk= -github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0= -github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7 h1:YoJbenK9C67SkzkDfmQuVln04ygHj3vjZfd9FL+GmQQ= -github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo= -github.com/acomagu/bufpipe v1.0.3 h1:fxAGrHZTgQ9w5QqVItgzwj235/uYZYgbXitB+dLupOk= -github.com/acomagu/bufpipe v1.0.3/go.mod 
h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4= +github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 h1:wPbRQzjjwFc0ih8puEVAOFGELsn1zoIIYdxvML7mDxA= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g= +github.com/acomagu/bufpipe v1.0.4 h1:e3H4WUzM3npvo5uv95QuJM3cQspFNtFBzvJ2oNjKIDQ= github.com/agext/levenshtein v1.2.2 h1:0S/Yg6LYmFJ5stwQeRp6EeOcCbj7xiqQSdNelsXvaqE= github.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= -github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c= -github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk= github.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJEynmyUenKwP++x/+DdGV/Ec= github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw= github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo= -github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= -github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= -github.com/aws/aws-sdk-go v1.44.261 h1:PcTMX/QVk+P3yh2n34UzuXDF5FS2z5Lse2bt+r3IpU4= -github.com/aws/aws-sdk-go v1.44.261/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= +github.com/aws/aws-sdk-go v1.44.299 h1:HVD9lU4CAFHGxleMJp95FV/sRhtg7P4miHD1v88JAQk= +github.com/aws/aws-sdk-go v1.44.299/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= github.com/bflad/gopaniccheck v0.1.0 h1:tJftp+bv42ouERmUMWLoUn/5bi/iQZjHPznM00cP/bU= github.com/bflad/gopaniccheck v0.1.0/go.mod h1:ZCj2vSr7EqVeDaqVsWN4n2MwdROx1YL+LFo47TSWtsA= github.com/bflad/tfproviderlint v0.29.0 
h1:zxKYAAM6IZ4ace1a3LX+uzMRIMP8L+iOtEc+FP2Yoow= github.com/bflad/tfproviderlint v0.29.0/go.mod h1:ErKVd+GRA2tXKr8T7zC0EDSrtALJL25wj0ZJ/v/klqQ= -github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= +github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I= +github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs= +github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/emirpasic/gods v1.12.0 h1:QAUIPSaCu4G+POclxeqb3F+WPpdKqFGlw36+yOzGlrg= -github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= +github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc= github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w= github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= -github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod 
h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= -github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0= github.com/go-git/gcfg v1.5.0 h1:Q5ViNfGF8zFgyJWPqYwA7qGFoMTEiBmdlkcfRmpIMa4= -github.com/go-git/gcfg v1.5.0/go.mod h1:5m20vg6GwYabIxaOonVkTdrILxQMpEShl1xiMF4ua+E= -github.com/go-git/go-billy/v5 v5.2.0/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0= -github.com/go-git/go-billy/v5 v5.3.1 h1:CPiOUAzKtMRvolEKw+bG1PLRpT7D3LIs3/3ey4Aiu34= -github.com/go-git/go-billy/v5 v5.3.1/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0= -github.com/go-git/go-git-fixtures/v4 v4.2.1/go.mod h1:K8zd3kDUAykwTdDCr+I0per6Y6vMiRR/nnVTBtavnB0= -github.com/go-git/go-git/v5 v5.4.2 h1:BXyZu9t0VkbiHtqrsvdq39UDhGJTl1h55VW6CSC4aY4= -github.com/go-git/go-git/v5 v5.4.2/go.mod h1:gQ1kArt6d+n+BGd+/B/I74HwRTLhth2+zti4ihgckDc= +github.com/go-git/go-billy/v5 v5.4.1 h1:Uwp5tDRkPr+l/TnbHOQzp+tmJfLceOlbVucgpTz8ix4= +github.com/go-git/go-git/v5 v5.6.1 h1:q4ZRqQl4pR/ZJHc1L5CFjGA1a10u76aV1iC+nh+bHsk= github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf 
v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= +github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/go-checkpoint v0.5.0 h1:MFYpPZCnQqQTE18jFwSII6eUQrD/oxMFp3mlgcqk5mU= github.com/hashicorp/go-checkpoint v0.5.0/go.mod h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg= github.com/hashicorp/go-cleanhttp v0.5.0/go.mod 
h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= -github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ= github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 h1:1/D3zfFHttUKaCaGKZ/dR2roBXv0vKbSCnssIldfQdI= github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320/go.mod h1:EiZBMaudVLy8fmjf9Npq1dq9RalhveqZG5w/yz3mHWs= -github.com/hashicorp/go-hclog v1.4.0 h1:ctuWFGrhFha8BnnzxqeRGidlEcQkDyL5u8J8t5eA11I= -github.com/hashicorp/go-hclog v1.4.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M= -github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= +github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c= +github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= -github.com/hashicorp/go-plugin v1.4.8 h1:CHGwpxYDOttQOY7HOWgETU9dyVjOXzniXDqJcYJE1zM= -github.com/hashicorp/go-plugin v1.4.8/go.mod h1:viDMjcLJuDui6pXb8U4HVfb8AamCWhHGUjr2IrTF67s= +github.com/hashicorp/go-plugin v1.4.10 h1:xUbmA4jC6Dq163/fWcp8P3JuHilrHHMLNRxzGQJ9hNk= +github.com/hashicorp/go-plugin v1.4.10/go.mod h1:6/1TEzT0eQznvI/gV2CM29DLSkAK/e58mUWKVsPaph0= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= -github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.6.0 
h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek= github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= -github.com/hashicorp/hc-install v0.5.0 h1:D9bl4KayIYKEeJ4vUDe9L5huqxZXczKaykSRcmQ0xY0= -github.com/hashicorp/hc-install v0.5.0/go.mod h1:JyzMfbzfSBSjoDCRPna1vi/24BEDxFaCPfdHtM5SCdo= -github.com/hashicorp/hcl/v2 v2.16.2 h1:mpkHZh/Tv+xet3sy3F9Ld4FyI2tUpWe9x3XtPx9f1a0= -github.com/hashicorp/hcl/v2 v2.16.2/go.mod h1:JRmR89jycNkrrqnMmvPDMd56n1rQJ2Q6KocSLCMCXng= +github.com/hashicorp/hc-install v0.5.2 h1:SfwMFnEXVVirpwkDuSF5kymUOhrUxrTq3udEseZdOD0= +github.com/hashicorp/hc-install v0.5.2/go.mod h1:9QISwe6newMWIfEiXpzuu1k9HAGtQYgnSH8H9T8wmoI= +github.com/hashicorp/hcl/v2 v2.17.0 h1:z1XvSUyXd1HP10U4lrLg5e0JMVz6CPaJvAgxM0KNZVY= +github.com/hashicorp/hcl/v2 v2.17.0/go.mod h1:gJyW2PTShkJqQBKpAmPO3yxMxIuoXkOF2TpqXzrQyx4= github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= github.com/hashicorp/terraform-exec v0.18.1 h1:LAbfDvNQU1l0NOQlTuudjczVhHj061fNX5H8XZxHlH4= github.com/hashicorp/terraform-exec v0.18.1/go.mod h1:58wg4IeuAJ6LVsLUeD2DWZZoc/bYi6dzhLHzxM41980= -github.com/hashicorp/terraform-json v0.16.0 h1:UKkeWRWb23do5LNAFlh/K3N0ymn1qTOO8c+85Albo3s= -github.com/hashicorp/terraform-json v0.16.0/go.mod h1:v0Ufk9jJnk6tcIZvScHvetlKfiNTC+WS21mnXIlc0B0= -github.com/hashicorp/terraform-plugin-go v0.14.3 h1:nlnJ1GXKdMwsC8g1Nh05tK2wsC3+3BL/DBBxFEki+j0= -github.com/hashicorp/terraform-plugin-go v0.14.3/go.mod h1:7ees7DMZ263q8wQ6E4RdIdR6nHHJtrdt4ogX5lPkX1A= -github.com/hashicorp/terraform-plugin-log v0.8.0 h1:pX2VQ/TGKu+UU1rCay0OlzosNKe4Nz1pepLXj95oyy0= -github.com/hashicorp/terraform-plugin-log v0.8.0/go.mod h1:1myFrhVsBLeylQzYYEV17VVjtG8oYPRFdaZs7xdW2xs= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.26.1 h1:G9WAfb8LHeCxu7Ae8nc1agZlQOSCUWsb610iAogBhCs= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.26.1/go.mod 
h1:xcOSYlRVdPLmDUoqPhO9fiO/YCN/l6MGYeTzGt5jgkQ= -github.com/hashicorp/terraform-registry-address v0.1.0 h1:W6JkV9wbum+m516rCl5/NjKxCyTVaaUBbzYcMzBDO3U= -github.com/hashicorp/terraform-registry-address v0.1.0/go.mod h1:EnyO2jYO6j29DTHbJcm00E5nQTFeTtyZH3H5ycydQ5A= -github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 h1:HKLsbzeOsfXmKNpr3GiT18XAblV0BjCbzL8KQAMZGa0= -github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg= +github.com/hashicorp/terraform-json v0.17.0 h1:EiA1Wp07nknYQAiv+jIt4dX4Cq5crgP+TsTE45MjMmM= +github.com/hashicorp/terraform-json v0.17.0/go.mod h1:Huy6zt6euxaY9knPAFKjUITn8QxUFIe9VuSzb4zn/0o= +github.com/hashicorp/terraform-plugin-go v0.16.0 h1:DSOQ0rz5FUiVO4NUzMs8ln9gsPgHMTsfns7Nk+6gPuE= +github.com/hashicorp/terraform-plugin-go v0.16.0/go.mod h1:4sn8bFuDbt+2+Yztt35IbOrvZc0zyEi87gJzsTgCES8= +github.com/hashicorp/terraform-plugin-log v0.9.0 h1:i7hOA+vdAItN1/7UrfBqBwvYPQ9TFvymaRGZED3FCV0= +github.com/hashicorp/terraform-plugin-log v0.9.0/go.mod h1:rKL8egZQ/eXSyDqzLUuwUYLVdlYeamldAHSxjUFADow= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0 h1:I8efBnjuDrgPjNF1MEypHy48VgcTIUY4X6rOFunrR3Y= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0/go.mod h1:cUEP4ly/nxlHy5HzD6YRrHydtlheGvGRJDhiWqqVik4= +github.com/hashicorp/terraform-registry-address v0.2.1 h1:QuTf6oJ1+WSflJw6WYOHhLgwUiQ0FrROpHPYFtwTYWM= +github.com/hashicorp/terraform-registry-address v0.2.1/go.mod h1:BSE9fIFzp0qWsJUUyGquo4ldV9k2n+psif6NYkBRS3Y= +github.com/hashicorp/terraform-svchost v0.1.1 h1:EZZimZ1GxdqFRinZ1tpJwVxxt49xc/S52uzrw4x0jKQ= +github.com/hashicorp/terraform-svchost v0.1.1/go.mod h1:mNsjQfZyf/Jhz35v6/0LWcv26+X7JPS+buii2c9/ctc= github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ= github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= -github.com/huandu/xstrings 
v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= -github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= -github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= -github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU= -github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= +github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk= github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A= -github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo= -github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4= github.com/jhump/protoreflect v1.6.0 h1:h5jfMVslIg6l29nsMs0D8Wj17RDVdNYti0vDN/PZZoE= github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= -github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351 h1:DowS9hvgyYSX4TO5NpyC606/Z4SxnNYbT+WX27or6Ck= -github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= -github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4= +github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI= -github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pty v1.1.1/go.mod 
h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= -github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= -github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k= github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= -github.com/matryer/is v1.2.0/go.mod h1:2fLPjFQM9rhQ15aVEtbuwhJinnOqrmgXPNdZsdwlWXA= -github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40= github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4= -github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU= github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y= github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94= -github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4= -github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw= github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw= github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s= -github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= -github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/go-testing-interface v1.14.1 
h1:jrgshOhYAUVNMAJiKbEu7EqAwgJJ2JqpQmpLJOu07cU= github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8= github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4= github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= -github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ= github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= -github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw= github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= -github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pjbgf/sha1cd v0.3.0 h1:4D5XXmUUBUl/xQ6IjCkEAbqXskkq/4O7LmGn0AqMDs4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ= -github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= 
-github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= -github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/skeema/knownhosts v1.1.0 h1:Wvr9V0MxhjRbl3f9nMnKnFfiWTJmtECJ9Njkea3ysW0= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.2 h1:4jaiDzPyXQvSd7D0EjG45355tLlV3VOECpq10pLC+8s= github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals= github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= github.com/vmihailenco/msgpack v4.0.4+incompatible h1:dSLoQfGFAo3F6OoNhwUmLwVgaUXK79GlxNBwueZn0xI= github.com/vmihailenco/msgpack v4.0.4+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= -github.com/vmihailenco/msgpack/v4 v4.3.12 h1:07s4sz9IReOgdikxLTKNbBdqDMLsjPKXwvCazn8G65U= -github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4= -github.com/vmihailenco/tagparser v0.1.1 h1:quXMXlA39OCbd2wAdTsGDlK9RkOk6Wuw+x37wVyIuWY= -github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI= -github.com/xanzy/ssh-agent v0.3.0 h1:wUMzuKtKilRgBAD1sUb8gOwwRr2FGoBVumcjoOACClI= -github.com/xanzy/ssh-agent v0.3.0/go.mod h1:3s9xbODqPuuhK9JV1R321M/FlMZSBvE5aY6eAcqrDh0= +github.com/vmihailenco/msgpack/v5 v5.3.5 
h1:5gO0H1iULLWGhs2H5tbAHIZTV8/cYafcFOr9znI5mJU= +github.com/vmihailenco/msgpack/v5 v5.3.5/go.mod h1:7xyJ9e+0+9SaZT0Wt1RGleJXzli6Q/V5KbhBonMG9jc= +github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g= +github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds= +github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= -github.com/zclconf/go-cty v1.13.1 h1:0a6bRwuiSHtAmqCqNOE+c2oHgepv0ctoxU4FUe43kwc= -github.com/zclconf/go-cty v1.13.1/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0= -golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +github.com/zclconf/go-cty v1.13.2 h1:4GvrUxe/QUDYuJKAav4EYqdM47/kZa672LwmXFmEKT0= +github.com/zclconf/go-cty v1.13.2/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= -golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU= -golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ= 
-golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/crypto v0.10.0 h1:LKqV2xt9+kDzSTfOhx4FrkEBcMrAgHSYgzywV9zcGmM= +golang.org/x/crypto v0.10.0/go.mod h1:o4eNf7Ede1fv+hwOwZsTHl9EsPFO6q6ZvYR8vYfY45I= golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk= golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= 
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210326060303-6b1517762897/go.mod h1:uSPa2vr4CLtc/ILN5odXGNXS6mhrKVzTaCXzk9m6W3k= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= -golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws= -golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM= -golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= +golang.org/x/net v0.11.0 h1:Gi2tvZIJyBtO9SDr1q9h5hEQCp/4L2RQ+ar0qjx2oNU= +golang.org/x/net v0.11.0/go.mod h1:2L/ixqYpgIVXmeoSA/4Lu7BzTG4KIyPIryS4IsOd1oQ= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210502180810-71e4cd670f79/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU= -golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.9.0 h1:KS/R3tvhPqvJvwcKfnBHJwwthS11LRhmM5D59eEXa0s= +golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE= -golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.10.0 h1:UpjohKhiEgNc0CSauXmwYftY1+LlaC75SJwh0SgCX58= +golang.org/x/text v0.10.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod 
h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20200214201135-548b770e2dfa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= @@ -301,47 +186,21 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc= -google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d 
h1:92D1fum1bJLKSdr11OJ+54YeCMCGYIygTA7R/YZxH5M= -google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.51.0 h1:E1eGv1FTqoLIdnBCZufiSHgKjlqG6fKFf6pPWtMTh8U= -google.golang.org/grpc v1.51.0/go.mod h1:wgNDFcnuBGmxLKI/qn4T+m5BtEBYXJPvibbUPsAIPww= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= +google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= +google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= +google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A= +google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU= +google.golang.org/grpc v1.56.0 h1:+y7Bs8rtMd07LeXmL3NxcTLn7mUkbKZqEpPhMNkwJEE= +google.golang.org/grpc v1.56.0/go.mod 
h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w= -google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= +google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng= +google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME= -gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod 
h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/AUTHORS b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/AUTHORS new file mode 100644 index 00000000000..2b00ddba0df --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/AUTHORS @@ -0,0 +1,3 @@ +# This source code refers to The Go Authors for copyright purposes. +# The master list of authors is in the main Go distribution, +# visible at https://tip.golang.org/AUTHORS. diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/CONTRIBUTORS b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/CONTRIBUTORS new file mode 100644 index 00000000000..1fbd3e976fa --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/CONTRIBUTORS @@ -0,0 +1,3 @@ +# This source code was written by the Go contributors. +# The master list of contributors is in the main Go distribution, +# visible at https://tip.golang.org/CONTRIBUTORS. diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/LICENSE b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/LICENSE new file mode 100644 index 00000000000..6a66aea5eaf --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/PATENTS b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/PATENTS new file mode 100644 index 00000000000..733099041f8 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. 
+ +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/bitcurves/bitcurve.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/bitcurves/bitcurve.go new file mode 100644 index 00000000000..3ed3f435737 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/bitcurves/bitcurve.go @@ -0,0 +1,381 @@ +package bitcurves + +// Copyright 2010 The Go Authors. All rights reserved. +// Copyright 2011 ThePiachu. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package bitelliptic implements several Koblitz elliptic curves over prime +// fields. + +// This package operates, internally, on Jacobian coordinates. 
For a given +// (x, y) position on the curve, the Jacobian coordinates are (x1, y1, z1) +// where x = x1/z1² and y = y1/z1³. The greatest speedups come when the whole +// calculation can be performed within the transform (as in ScalarMult and +// ScalarBaseMult). But even for Add and Double, it's faster to apply and +// reverse the transform than to operate in affine coordinates. + +import ( + "crypto/elliptic" + "io" + "math/big" + "sync" +) + +// A BitCurve represents a Koblitz Curve with a=0. +// See http://www.hyperelliptic.org/EFD/g1p/auto-shortw.html +type BitCurve struct { + Name string + P *big.Int // the order of the underlying field + N *big.Int // the order of the base point + B *big.Int // the constant of the BitCurve equation + Gx, Gy *big.Int // (x,y) of the base point + BitSize int // the size of the underlying field +} + +// Params returns the parameters of the given BitCurve (see BitCurve struct) +func (bitCurve *BitCurve) Params() (cp *elliptic.CurveParams) { + cp = new(elliptic.CurveParams) + cp.Name = bitCurve.Name + cp.P = bitCurve.P + cp.N = bitCurve.N + cp.Gx = bitCurve.Gx + cp.Gy = bitCurve.Gy + cp.BitSize = bitCurve.BitSize + return cp +} + +// IsOnCurve returns true if the given (x,y) lies on the BitCurve. +func (bitCurve *BitCurve) IsOnCurve(x, y *big.Int) bool { + // y² = x³ + b + y2 := new(big.Int).Mul(y, y) //y² + y2.Mod(y2, bitCurve.P) //y²%P + + x3 := new(big.Int).Mul(x, x) //x² + x3.Mul(x3, x) //x³ + + x3.Add(x3, bitCurve.B) //x³+B + x3.Mod(x3, bitCurve.P) //(x³+B)%P + + return x3.Cmp(y2) == 0 +} + +// affineFromJacobian reverses the Jacobian transform. See the comment at the +// top of the file. 
+func (bitCurve *BitCurve) affineFromJacobian(x, y, z *big.Int) (xOut, yOut *big.Int) { + if z.Cmp(big.NewInt(0)) == 0 { + panic("bitcurve: Can't convert to affine with Jacobian Z = 0") + } + // x = X/Z² mod P + zinv := new(big.Int).ModInverse(z, bitCurve.P) + zinvsq := new(big.Int).Mul(zinv, zinv) + + xOut = new(big.Int).Mul(x, zinvsq) + xOut.Mod(xOut, bitCurve.P) + // y = Y/Z³ mod P + zinvsq.Mul(zinvsq, zinv) + yOut = new(big.Int).Mul(y, zinvsq) + yOut.Mod(yOut, bitCurve.P) + return xOut, yOut +} + +// Add returns the sum of (x1,y1) and (x2,y2) +func (bitCurve *BitCurve) Add(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) { + z := new(big.Int).SetInt64(1) + x, y, z := bitCurve.addJacobian(x1, y1, z, x2, y2, z) + return bitCurve.affineFromJacobian(x, y, z) +} + +// addJacobian takes two points in Jacobian coordinates, (x1, y1, z1) and +// (x2, y2, z2) and returns their sum, also in Jacobian form. +func (bitCurve *BitCurve) addJacobian(x1, y1, z1, x2, y2, z2 *big.Int) (*big.Int, *big.Int, *big.Int) { + // See http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian-0.html#addition-add-2007-bl + z1z1 := new(big.Int).Mul(z1, z1) + z1z1.Mod(z1z1, bitCurve.P) + z2z2 := new(big.Int).Mul(z2, z2) + z2z2.Mod(z2z2, bitCurve.P) + + u1 := new(big.Int).Mul(x1, z2z2) + u1.Mod(u1, bitCurve.P) + u2 := new(big.Int).Mul(x2, z1z1) + u2.Mod(u2, bitCurve.P) + h := new(big.Int).Sub(u2, u1) + if h.Sign() == -1 { + h.Add(h, bitCurve.P) + } + i := new(big.Int).Lsh(h, 1) + i.Mul(i, i) + j := new(big.Int).Mul(h, i) + + s1 := new(big.Int).Mul(y1, z2) + s1.Mul(s1, z2z2) + s1.Mod(s1, bitCurve.P) + s2 := new(big.Int).Mul(y2, z1) + s2.Mul(s2, z1z1) + s2.Mod(s2, bitCurve.P) + r := new(big.Int).Sub(s2, s1) + if r.Sign() == -1 { + r.Add(r, bitCurve.P) + } + r.Lsh(r, 1) + v := new(big.Int).Mul(u1, i) + + x3 := new(big.Int).Set(r) + x3.Mul(x3, x3) + x3.Sub(x3, j) + x3.Sub(x3, v) + x3.Sub(x3, v) + x3.Mod(x3, bitCurve.P) + + y3 := new(big.Int).Set(r) + v.Sub(v, x3) + y3.Mul(y3, v) + s1.Mul(s1, j) + 
s1.Lsh(s1, 1) + y3.Sub(y3, s1) + y3.Mod(y3, bitCurve.P) + + z3 := new(big.Int).Add(z1, z2) + z3.Mul(z3, z3) + z3.Sub(z3, z1z1) + if z3.Sign() == -1 { + z3.Add(z3, bitCurve.P) + } + z3.Sub(z3, z2z2) + if z3.Sign() == -1 { + z3.Add(z3, bitCurve.P) + } + z3.Mul(z3, h) + z3.Mod(z3, bitCurve.P) + + return x3, y3, z3 +} + +// Double returns 2*(x,y) +func (bitCurve *BitCurve) Double(x1, y1 *big.Int) (*big.Int, *big.Int) { + z1 := new(big.Int).SetInt64(1) + return bitCurve.affineFromJacobian(bitCurve.doubleJacobian(x1, y1, z1)) +} + +// doubleJacobian takes a point in Jacobian coordinates, (x, y, z), and +// returns its double, also in Jacobian form. +func (bitCurve *BitCurve) doubleJacobian(x, y, z *big.Int) (*big.Int, *big.Int, *big.Int) { + // See http://hyperelliptic.org/EFD/g1p/auto-shortw-jacobian-0.html#doubling-dbl-2009-l + + a := new(big.Int).Mul(x, x) //X1² + b := new(big.Int).Mul(y, y) //Y1² + c := new(big.Int).Mul(b, b) //B² + + d := new(big.Int).Add(x, b) //X1+B + d.Mul(d, d) //(X1+B)² + d.Sub(d, a) //(X1+B)²-A + d.Sub(d, c) //(X1+B)²-A-C + d.Mul(d, big.NewInt(2)) //2*((X1+B)²-A-C) + + e := new(big.Int).Mul(big.NewInt(3), a) //3*A + f := new(big.Int).Mul(e, e) //E² + + x3 := new(big.Int).Mul(big.NewInt(2), d) //2*D + x3.Sub(f, x3) //F-2*D + x3.Mod(x3, bitCurve.P) + + y3 := new(big.Int).Sub(d, x3) //D-X3 + y3.Mul(e, y3) //E*(D-X3) + y3.Sub(y3, new(big.Int).Mul(big.NewInt(8), c)) //E*(D-X3)-8*C + y3.Mod(y3, bitCurve.P) + + z3 := new(big.Int).Mul(y, z) //Y1*Z1 + z3.Mul(big.NewInt(2), z3) //2*Y1*Z1 + z3.Mod(z3, bitCurve.P) + + return x3, y3, z3 +} + +//TODO: double check if it is okay +// ScalarMult returns k*(Bx,By) where k is a number in big-endian form. +func (bitCurve *BitCurve) ScalarMult(Bx, By *big.Int, k []byte) (*big.Int, *big.Int) { + // We have a slight problem in that the identity of the group (the + // point at infinity) cannot be represented in (x, y) form on a finite
Thus the standard add/double algorithm has to be tweaked + // slightly: our initial state is not the identity, but x, and we + // ignore the first true bit in |k|. If we don't find any true bits in + // |k|, then we return nil, nil, because we cannot return the identity + // element. + + Bz := new(big.Int).SetInt64(1) + x := Bx + y := By + z := Bz + + seenFirstTrue := false + for _, byte := range k { + for bitNum := 0; bitNum < 8; bitNum++ { + if seenFirstTrue { + x, y, z = bitCurve.doubleJacobian(x, y, z) + } + if byte&0x80 == 0x80 { + if !seenFirstTrue { + seenFirstTrue = true + } else { + x, y, z = bitCurve.addJacobian(Bx, By, Bz, x, y, z) + } + } + byte <<= 1 + } + } + + if !seenFirstTrue { + return nil, nil + } + + return bitCurve.affineFromJacobian(x, y, z) +} + +// ScalarBaseMult returns k*G, where G is the base point of the group and k is +// an integer in big-endian form. +func (bitCurve *BitCurve) ScalarBaseMult(k []byte) (*big.Int, *big.Int) { + return bitCurve.ScalarMult(bitCurve.Gx, bitCurve.Gy, k) +} + +var mask = []byte{0xff, 0x1, 0x3, 0x7, 0xf, 0x1f, 0x3f, 0x7f} + +//TODO: double check if it is okay +// GenerateKey returns a public/private key pair. The private key is generated +// using the given reader, which must return random data. +func (bitCurve *BitCurve) GenerateKey(rand io.Reader) (priv []byte, x, y *big.Int, err error) { + byteLen := (bitCurve.BitSize + 7) >> 3 + priv = make([]byte, byteLen) + + for x == nil { + _, err = io.ReadFull(rand, priv) + if err != nil { + return + } + // We have to mask off any excess bits in the case that the size of the + // underlying field is not a whole number of bytes. + priv[0] &= mask[bitCurve.BitSize%8] + // This is because, in tests, rand will return all zeros and we don't + // want to get the point at infinity and loop forever. + priv[1] ^= 0x42 + x, y = bitCurve.ScalarBaseMult(priv) + } + return +} + +// Marshal converts a point into the form specified in section 4.3.6 of ANSI +// X9.62. 
+func (bitCurve *BitCurve) Marshal(x, y *big.Int) []byte { + byteLen := (bitCurve.BitSize + 7) >> 3 + + ret := make([]byte, 1+2*byteLen) + ret[0] = 4 // uncompressed point + + xBytes := x.Bytes() + copy(ret[1+byteLen-len(xBytes):], xBytes) + yBytes := y.Bytes() + copy(ret[1+2*byteLen-len(yBytes):], yBytes) + return ret +} + +// Unmarshal converts a point, serialised by Marshal, into an x, y pair. On +// error, x = nil. +func (bitCurve *BitCurve) Unmarshal(data []byte) (x, y *big.Int) { + byteLen := (bitCurve.BitSize + 7) >> 3 + if len(data) != 1+2*byteLen { + return + } + if data[0] != 4 { // uncompressed form + return + } + x = new(big.Int).SetBytes(data[1 : 1+byteLen]) + y = new(big.Int).SetBytes(data[1+byteLen:]) + return +} + +//curve parameters taken from: +//http://www.secg.org/collateral/sec2_final.pdf + +var initonce sync.Once +var secp160k1 *BitCurve +var secp192k1 *BitCurve +var secp224k1 *BitCurve +var secp256k1 *BitCurve + +func initAll() { + initS160() + initS192() + initS224() + initS256() +} + +func initS160() { + // See SEC 2 section 2.4.1 + secp160k1 = new(BitCurve) + secp160k1.Name = "secp160k1" + secp160k1.P, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFAC73", 16) + secp160k1.N, _ = new(big.Int).SetString("0100000000000000000001B8FA16DFAB9ACA16B6B3", 16) + secp160k1.B, _ = new(big.Int).SetString("0000000000000000000000000000000000000007", 16) + secp160k1.Gx, _ = new(big.Int).SetString("3B4C382CE37AA192A4019E763036F4F5DD4D7EBB", 16) + secp160k1.Gy, _ = new(big.Int).SetString("938CF935318FDCED6BC28286531733C3F03C4FEE", 16) + secp160k1.BitSize = 160 +} + +func initS192() { + // See SEC 2 section 2.5.1 + secp192k1 = new(BitCurve) + secp192k1.Name = "secp192k1" + secp192k1.P, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFEE37", 16) + secp192k1.N, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFE26F2FC170F69466A74DEFD8D", 16) + secp192k1.B, _ = 
new(big.Int).SetString("000000000000000000000000000000000000000000000003", 16) + secp192k1.Gx, _ = new(big.Int).SetString("DB4FF10EC057E9AE26B07D0280B7F4341DA5D1B1EAE06C7D", 16) + secp192k1.Gy, _ = new(big.Int).SetString("9B2F2F6D9C5628A7844163D015BE86344082AA88D95E2F9D", 16) + secp192k1.BitSize = 192 +} + +func initS224() { + // See SEC 2 section 2.6.1 + secp224k1 = new(BitCurve) + secp224k1.Name = "secp224k1" + secp224k1.P, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFE56D", 16) + secp224k1.N, _ = new(big.Int).SetString("010000000000000000000000000001DCE8D2EC6184CAF0A971769FB1F7", 16) + secp224k1.B, _ = new(big.Int).SetString("00000000000000000000000000000000000000000000000000000005", 16) + secp224k1.Gx, _ = new(big.Int).SetString("A1455B334DF099DF30FC28A169A467E9E47075A90F7E650EB6B7A45C", 16) + secp224k1.Gy, _ = new(big.Int).SetString("7E089FED7FBA344282CAFBD6F7E319F7C0B0BD59E2CA4BDB556D61A5", 16) + secp224k1.BitSize = 224 +} + +func initS256() { + // See SEC 2 section 2.7.1 + secp256k1 = new(BitCurve) + secp256k1.Name = "secp256k1" + secp256k1.P, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F", 16) + secp256k1.N, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16) + secp256k1.B, _ = new(big.Int).SetString("0000000000000000000000000000000000000000000000000000000000000007", 16) + secp256k1.Gx, _ = new(big.Int).SetString("79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798", 16) + secp256k1.Gy, _ = new(big.Int).SetString("483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8", 16) + secp256k1.BitSize = 256 +} + +// S160 returns a BitCurve which implements secp160k1 (see SEC 2 section 2.4.1) +func S160() *BitCurve { + initonce.Do(initAll) + return secp160k1 +} + +// S192 returns a BitCurve which implements secp192k1 (see SEC 2 section 2.5.1) +func S192() *BitCurve { + initonce.Do(initAll) + return secp192k1 +} + +// 
S224 returns a BitCurve which implements secp224k1 (see SEC 2 section 2.6.1) +func S224() *BitCurve { + initonce.Do(initAll) + return secp224k1 +} + +// S256 returns a BitCurve which implements secp256k1 (see SEC 2 section 2.7.1) +func S256() *BitCurve { + initonce.Do(initAll) + return secp256k1 +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/brainpool/brainpool.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/brainpool/brainpool.go new file mode 100644 index 00000000000..cb6676de24b --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/brainpool/brainpool.go @@ -0,0 +1,134 @@ +// Package brainpool implements Brainpool elliptic curves. +// Implementation of rcurve is from github.com/ebfe/brainpool +// Note that these curves are implemented with naive, non-constant time operations +// and are likely not suitable for environments where timing attacks are a concern. +package brainpool + +import ( + "crypto/elliptic" + "math/big" + "sync" +) + +var ( + once sync.Once + p256t1, p384t1, p512t1 *elliptic.CurveParams + p256r1, p384r1, p512r1 *rcurve +) + +func initAll() { + initP256t1() + initP384t1() + initP512t1() + initP256r1() + initP384r1() + initP512r1() +} + +func initP256t1() { + p256t1 = &elliptic.CurveParams{Name: "brainpoolP256t1"} + p256t1.P, _ = new(big.Int).SetString("A9FB57DBA1EEA9BC3E660A909D838D726E3BF623D52620282013481D1F6E5377", 16) + p256t1.N, _ = new(big.Int).SetString("A9FB57DBA1EEA9BC3E660A909D838D718C397AA3B561A6F7901E0E82974856A7", 16) + p256t1.B, _ = new(big.Int).SetString("662C61C430D84EA4FE66A7733D0B76B7BF93EBC4AF2F49256AE58101FEE92B04", 16) + p256t1.Gx, _ = new(big.Int).SetString("A3E8EB3CC1CFE7B7732213B23A656149AFA142C47AAFBC2B79A191562E1305F4", 16) + p256t1.Gy, _ = new(big.Int).SetString("2D996C823439C56D7F7B22E14644417E69BCB6DE39D027001DABE8F35B25C9BE", 16) + p256t1.BitSize = 256 +} + +func initP256r1() { + twisted := p256t1 + params := &elliptic.CurveParams{ + Name: 
"brainpoolP256r1", + P: twisted.P, + N: twisted.N, + BitSize: twisted.BitSize, + } + params.Gx, _ = new(big.Int).SetString("8BD2AEB9CB7E57CB2C4B482FFC81B7AFB9DE27E1E3BD23C23A4453BD9ACE3262", 16) + params.Gy, _ = new(big.Int).SetString("547EF835C3DAC4FD97F8461A14611DC9C27745132DED8E545C1D54C72F046997", 16) + z, _ := new(big.Int).SetString("3E2D4BD9597B58639AE7AA669CAB9837CF5CF20A2C852D10F655668DFC150EF0", 16) + p256r1 = newrcurve(twisted, params, z) +} + +func initP384t1() { + p384t1 = &elliptic.CurveParams{Name: "brainpoolP384t1"} + p384t1.P, _ = new(big.Int).SetString("8CB91E82A3386D280F5D6F7E50E641DF152F7109ED5456B412B1DA197FB71123ACD3A729901D1A71874700133107EC53", 16) + p384t1.N, _ = new(big.Int).SetString("8CB91E82A3386D280F5D6F7E50E641DF152F7109ED5456B31F166E6CAC0425A7CF3AB6AF6B7FC3103B883202E9046565", 16) + p384t1.B, _ = new(big.Int).SetString("7F519EADA7BDA81BD826DBA647910F8C4B9346ED8CCDC64E4B1ABD11756DCE1D2074AA263B88805CED70355A33B471EE", 16) + p384t1.Gx, _ = new(big.Int).SetString("18DE98B02DB9A306F2AFCD7235F72A819B80AB12EBD653172476FECD462AABFFC4FF191B946A5F54D8D0AA2F418808CC", 16) + p384t1.Gy, _ = new(big.Int).SetString("25AB056962D30651A114AFD2755AD336747F93475B7A1FCA3B88F2B6A208CCFE469408584DC2B2912675BF5B9E582928", 16) + p384t1.BitSize = 384 +} + +func initP384r1() { + twisted := p384t1 + params := &elliptic.CurveParams{ + Name: "brainpoolP384r1", + P: twisted.P, + N: twisted.N, + BitSize: twisted.BitSize, + } + params.Gx, _ = new(big.Int).SetString("1D1C64F068CF45FFA2A63A81B7C13F6B8847A3E77EF14FE3DB7FCAFE0CBD10E8E826E03436D646AAEF87B2E247D4AF1E", 16) + params.Gy, _ = new(big.Int).SetString("8ABE1D7520F9C2A45CB1EB8E95CFD55262B70B29FEEC5864E19C054FF99129280E4646217791811142820341263C5315", 16) + z, _ := new(big.Int).SetString("41DFE8DD399331F7166A66076734A89CD0D2BCDB7D068E44E1F378F41ECBAE97D2D63DBC87BCCDDCCC5DA39E8589291C", 16) + p384r1 = newrcurve(twisted, params, z) +} + +func initP512t1() { + p512t1 = &elliptic.CurveParams{Name: "brainpoolP512t1"} 
+ p512t1.P, _ = new(big.Int).SetString("AADD9DB8DBE9C48B3FD4E6AE33C9FC07CB308DB3B3C9D20ED6639CCA703308717D4D9B009BC66842AECDA12AE6A380E62881FF2F2D82C68528AA6056583A48F3", 16) + p512t1.N, _ = new(big.Int).SetString("AADD9DB8DBE9C48B3FD4E6AE33C9FC07CB308DB3B3C9D20ED6639CCA70330870553E5C414CA92619418661197FAC10471DB1D381085DDADDB58796829CA90069", 16) + p512t1.B, _ = new(big.Int).SetString("7CBBBCF9441CFAB76E1890E46884EAE321F70C0BCB4981527897504BEC3E36A62BCDFA2304976540F6450085F2DAE145C22553B465763689180EA2571867423E", 16) + p512t1.Gx, _ = new(big.Int).SetString("640ECE5C12788717B9C1BA06CBC2A6FEBA85842458C56DDE9DB1758D39C0313D82BA51735CDB3EA499AA77A7D6943A64F7A3F25FE26F06B51BAA2696FA9035DA", 16) + p512t1.Gy, _ = new(big.Int).SetString("5B534BD595F5AF0FA2C892376C84ACE1BB4E3019B71634C01131159CAE03CEE9D9932184BEEF216BD71DF2DADF86A627306ECFF96DBB8BACE198B61E00F8B332", 16) + p512t1.BitSize = 512 +} + +func initP512r1() { + twisted := p512t1 + params := &elliptic.CurveParams{ + Name: "brainpoolP512r1", + P: twisted.P, + N: twisted.N, + BitSize: twisted.BitSize, + } + params.Gx, _ = new(big.Int).SetString("81AEE4BDD82ED9645A21322E9C4C6A9385ED9F70B5D916C1B43B62EEF4D0098EFF3B1F78E2D0D48D50D1687B93B97D5F7C6D5047406A5E688B352209BCB9F822", 16) + params.Gy, _ = new(big.Int).SetString("7DDE385D566332ECC0EABFA9CF7822FDF209F70024A57B1AA000C55B881F8111B2DCDE494A5F485E5BCA4BD88A2763AED1CA2B2FA8F0540678CD1E0F3AD80892", 16) + z, _ := new(big.Int).SetString("12EE58E6764838B69782136F0F2D3BA06E27695716054092E60A80BEDB212B64E585D90BCE13761F85C3F1D2A64E3BE8FEA2220F01EBA5EEB0F35DBD29D922AB", 16) + p512r1 = newrcurve(twisted, params, z) +} + +// P256t1 returns a Curve which implements Brainpool P256t1 (see RFC 5639, section 3.4) +func P256t1() elliptic.Curve { + once.Do(initAll) + return p256t1 +} + +// P256r1 returns a Curve which implements Brainpool P256r1 (see RFC 5639, section 3.4) +func P256r1() elliptic.Curve { + once.Do(initAll) + return p256r1 +} + +// P384t1 returns a Curve which 
implements Brainpool P384t1 (see RFC 5639, section 3.6) +func P384t1() elliptic.Curve { + once.Do(initAll) + return p384t1 +} + +// P384r1 returns a Curve which implements Brainpool P384r1 (see RFC 5639, section 3.6) +func P384r1() elliptic.Curve { + once.Do(initAll) + return p384r1 +} + +// P512t1 returns a Curve which implements Brainpool P512t1 (see RFC 5639, section 3.7) +func P512t1() elliptic.Curve { + once.Do(initAll) + return p512t1 +} + +// P512r1 returns a Curve which implements Brainpool P512r1 (see RFC 5639, section 3.7) +func P512r1() elliptic.Curve { + once.Do(initAll) + return p512r1 +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/brainpool/rcurve.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/brainpool/rcurve.go new file mode 100644 index 00000000000..2d5355085f2 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/brainpool/rcurve.go @@ -0,0 +1,83 @@ +package brainpool + +import ( + "crypto/elliptic" + "math/big" +) + +var _ elliptic.Curve = (*rcurve)(nil) + +type rcurve struct { + twisted elliptic.Curve + params *elliptic.CurveParams + z *big.Int + zinv *big.Int + z2 *big.Int + z3 *big.Int + zinv2 *big.Int + zinv3 *big.Int +} + +var ( + two = big.NewInt(2) + three = big.NewInt(3) +) + +func newrcurve(twisted elliptic.Curve, params *elliptic.CurveParams, z *big.Int) *rcurve { + zinv := new(big.Int).ModInverse(z, params.P) + return &rcurve{ + twisted: twisted, + params: params, + z: z, + zinv: zinv, + z2: new(big.Int).Exp(z, two, params.P), + z3: new(big.Int).Exp(z, three, params.P), + zinv2: new(big.Int).Exp(zinv, two, params.P), + zinv3: new(big.Int).Exp(zinv, three, params.P), + } +} + +func (curve *rcurve) toTwisted(x, y *big.Int) (*big.Int, *big.Int) { + var tx, ty big.Int + tx.Mul(x, curve.z2) + tx.Mod(&tx, curve.params.P) + ty.Mul(y, curve.z3) + ty.Mod(&ty, curve.params.P) + return &tx, &ty +} + +func (curve *rcurve) fromTwisted(tx, ty *big.Int) (*big.Int, *big.Int) { + var x, y 
big.Int + x.Mul(tx, curve.zinv2) + x.Mod(&x, curve.params.P) + y.Mul(ty, curve.zinv3) + y.Mod(&y, curve.params.P) + return &x, &y +} + +func (curve *rcurve) Params() *elliptic.CurveParams { + return curve.params +} + +func (curve *rcurve) IsOnCurve(x, y *big.Int) bool { + return curve.twisted.IsOnCurve(curve.toTwisted(x, y)) +} + +func (curve *rcurve) Add(x1, y1, x2, y2 *big.Int) (x, y *big.Int) { + tx1, ty1 := curve.toTwisted(x1, y1) + tx2, ty2 := curve.toTwisted(x2, y2) + return curve.fromTwisted(curve.twisted.Add(tx1, ty1, tx2, ty2)) +} + +func (curve *rcurve) Double(x1, y1 *big.Int) (x, y *big.Int) { + return curve.fromTwisted(curve.twisted.Double(curve.toTwisted(x1, y1))) +} + +func (curve *rcurve) ScalarMult(x1, y1 *big.Int, scalar []byte) (x, y *big.Int) { + tx1, ty1 := curve.toTwisted(x1, y1) + return curve.fromTwisted(curve.twisted.ScalarMult(tx1, ty1, scalar)) +} + +func (curve *rcurve) ScalarBaseMult(scalar []byte) (x, y *big.Int) { + return curve.fromTwisted(curve.twisted.ScalarBaseMult(scalar)) +} \ No newline at end of file diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/eax.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/eax.go new file mode 100644 index 00000000000..6b6bc7aed04 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/eax.go @@ -0,0 +1,162 @@ +// Copyright (C) 2019 ProtonTech AG + +// Package eax provides an implementation of the EAX +// (encrypt-authenticate-translate) mode of operation, as described in +// Bellare, Rogaway, and Wagner "THE EAX MODE OF OPERATION: A TWO-PASS +// AUTHENTICATED-ENCRYPTION SCHEME OPTIMIZED FOR SIMPLICITY AND EFFICIENCY." 
+// In FSE'04, volume 3017 of LNCS, 2004 +package eax + +import ( + "crypto/cipher" + "crypto/subtle" + "errors" + "github.com/ProtonMail/go-crypto/internal/byteutil" +) + +const ( + defaultTagSize = 16 + defaultNonceSize = 16 +) + +type eax struct { + block cipher.Block // Only AES-{128, 192, 256} supported + tagSize int // At least 12 bytes recommended + nonceSize int +} + +func (e *eax) NonceSize() int { + return e.nonceSize +} + +func (e *eax) Overhead() int { + return e.tagSize +} + +// NewEAX returns an EAX instance with AES-{KEYLENGTH} and default nonce and +// tag lengths. Supports {128, 192, 256}- bit key length. +func NewEAX(block cipher.Block) (cipher.AEAD, error) { + return NewEAXWithNonceAndTagSize(block, defaultNonceSize, defaultTagSize) +} + +// NewEAXWithNonceAndTagSize returns an EAX instance with AES-{keyLength} and +// given nonce and tag lengths in bytes. Panics on zero nonceSize and +// exceedingly long tags. +// +// It is recommended to use at least 12 bytes as tag length (see, for instance, +// NIST SP 800-38D). +// +// Only to be used for compatibility with existing cryptosystems with +// non-standard parameters. For all other cases, prefer NewEAX. 
+func NewEAXWithNonceAndTagSize( + block cipher.Block, nonceSize, tagSize int) (cipher.AEAD, error) { + if nonceSize < 1 { + return nil, eaxError("Cannot initialize EAX with nonceSize = 0") + } + if tagSize > block.BlockSize() { + return nil, eaxError("Custom tag length exceeds blocksize") + } + return &eax{ + block: block, + tagSize: tagSize, + nonceSize: nonceSize, + }, nil +} + +func (e *eax) Seal(dst, nonce, plaintext, adata []byte) []byte { + if len(nonce) > e.nonceSize { + panic("crypto/eax: Nonce too long for this instance") + } + ret, out := byteutil.SliceForAppend(dst, len(plaintext) + e.tagSize) + omacNonce := e.omacT(0, nonce) + omacAdata := e.omacT(1, adata) + + // Encrypt message using CTR mode and omacNonce as IV + ctr := cipher.NewCTR(e.block, omacNonce) + ciphertextData := out[:len(plaintext)] + ctr.XORKeyStream(ciphertextData, plaintext) + + omacCiphertext := e.omacT(2, ciphertextData) + + tag := out[len(plaintext):] + for i := 0; i < e.tagSize; i++ { + tag[i] = omacCiphertext[i] ^ omacNonce[i] ^ omacAdata[i] + } + return ret +} + +func (e* eax) Open(dst, nonce, ciphertext, adata []byte) ([]byte, error) { + if len(nonce) > e.nonceSize { + panic("crypto/eax: Nonce too long for this instance") + } + if len(ciphertext) < e.tagSize { + return nil, eaxError("Ciphertext shorter than tag length") + } + sep := len(ciphertext) - e.tagSize + + // Compute tag + omacNonce := e.omacT(0, nonce) + omacAdata := e.omacT(1, adata) + omacCiphertext := e.omacT(2, ciphertext[:sep]) + + tag := make([]byte, e.tagSize) + for i := 0; i < e.tagSize; i++ { + tag[i] = omacCiphertext[i] ^ omacNonce[i] ^ omacAdata[i] + } + + // Compare tags + if subtle.ConstantTimeCompare(ciphertext[sep:], tag) != 1 { + return nil, eaxError("Tag authentication failed") + } + + // Decrypt ciphertext + ret, out := byteutil.SliceForAppend(dst, len(ciphertext)) + ctr := cipher.NewCTR(e.block, omacNonce) + ctr.XORKeyStream(out, ciphertext[:sep]) + + return ret[:sep], nil +} + +// Tweakable OMAC - 
Calls OMAC_K([t]_n || plaintext) +func (e *eax) omacT(t byte, plaintext []byte) []byte { + blockSize := e.block.BlockSize() + byteT := make([]byte, blockSize) + byteT[blockSize-1] = t + concat := append(byteT, plaintext...) + return e.omac(concat) +} + +func (e *eax) omac(plaintext []byte) []byte { + blockSize := e.block.BlockSize() + // L ← E_K(0^n); B ← 2L; P ← 4L + L := make([]byte, blockSize) + e.block.Encrypt(L, L) + B := byteutil.GfnDouble(L) + P := byteutil.GfnDouble(B) + + // CBC with IV = 0 + cbc := cipher.NewCBCEncrypter(e.block, make([]byte, blockSize)) + padded := e.pad(plaintext, B, P) + cbcCiphertext := make([]byte, len(padded)) + cbc.CryptBlocks(cbcCiphertext, padded) + + return cbcCiphertext[len(cbcCiphertext)-blockSize:] +} + +func (e *eax) pad(plaintext, B, P []byte) []byte { + // if |M| in {n, 2n, 3n, ...} + blockSize := e.block.BlockSize() + if len(plaintext) != 0 && len(plaintext)%blockSize == 0 { + return byteutil.RightXor(plaintext, B) + } + + // else return (M || 1 || 0^(n−1−(|M| % n))) xor→ P + ending := make([]byte, blockSize-len(plaintext)%blockSize) + ending[0] = 0x80 + padded := append(plaintext, ending...) 
+ return byteutil.RightXor(padded, P) +} + +func eaxError(err string) error { + return errors.New("crypto/eax: " + err) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/eax_test_vectors.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/eax_test_vectors.go new file mode 100644 index 00000000000..ddb53d07905 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/eax_test_vectors.go @@ -0,0 +1,58 @@ +package eax + +// Test vectors from +// https://web.cs.ucdavis.edu/~rogaway/papers/eax.pdf +var testVectors = []struct { + msg, key, nonce, header, ciphertext string +}{ + {"", + "233952DEE4D5ED5F9B9C6D6FF80FF478", + "62EC67F9C3A4A407FCB2A8C49031A8B3", + "6BFB914FD07EAE6B", + "E037830E8389F27B025A2D6527E79D01"}, + {"F7FB", + "91945D3F4DCBEE0BF45EF52255F095A4", + "BECAF043B0A23D843194BA972C66DEBD", + "FA3BFD4806EB53FA", + "19DD5C4C9331049D0BDAB0277408F67967E5"}, + {"1A47CB4933", + "01F74AD64077F2E704C0F60ADA3DD523", + "70C3DB4F0D26368400A10ED05D2BFF5E", + "234A3463C1264AC6", + "D851D5BAE03A59F238A23E39199DC9266626C40F80"}, + {"481C9E39B1", + "D07CF6CBB7F313BDDE66B727AFD3C5E8", + "8408DFFF3C1A2B1292DC199E46B7D617", + "33CCE2EABFF5A79D", + "632A9D131AD4C168A4225D8E1FF755939974A7BEDE"}, + {"40D0C07DA5E4", + "35B6D0580005BBC12B0587124557D2C2", + "FDB6B06676EEDC5C61D74276E1F8E816", + "AEB96EAEBE2970E9", + "071DFE16C675CB0677E536F73AFE6A14B74EE49844DD"}, + {"4DE3B35C3FC039245BD1FB7D", + "BD8E6E11475E60B268784C38C62FEB22", + "6EAC5C93072D8E8513F750935E46DA1B", + "D4482D1CA78DCE0F", + "835BB4F15D743E350E728414ABB8644FD6CCB86947C5E10590210A4F"}, + {"8B0A79306C9CE7ED99DAE4F87F8DD61636", + "7C77D6E813BED5AC98BAA417477A2E7D", + "1A8C98DCD73D38393B2BF1569DEEFC19", + "65D2017990D62528", + "02083E3979DA014812F59F11D52630DA30137327D10649B0AA6E1C181DB617D7F2"}, + {"1BDA122BCE8A8DBAF1877D962B8592DD2D56", + "5FFF20CAFAB119CA2FC73549E20F5B0D", + "DDE59B97D722156D4D9AFF2BC7559826", + "54B9F04E6A09189A", + 
"2EC47B2C4954A489AFC7BA4897EDCDAE8CC33B60450599BD02C96382902AEF7F832A"}, + {"6CF36720872B8513F6EAB1A8A44438D5EF11", + "A4A4782BCFFD3EC5E7EF6D8C34A56123", + "B781FCF2F75FA5A8DE97A9CA48E522EC", + "899A175897561D7E", + "0DE18FD0FDD91E7AF19F1D8EE8733938B1E8E7F6D2231618102FDB7FE55FF1991700"}, + {"CA40D7446E545FFAED3BD12A740A659FFBBB3CEAB7", + "8395FCF1E95BEBD697BD010BC766AAC3", + "22E7ADD93CFC6393C57EC0B3C17D6B44", + "126735FCC320D25A", + "CB8920F87A6C75CFF39627B56E3ED197C552D295A7CFC46AFC253B4652B1AF3795B124AB6E"}, +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/random_vectors.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/random_vectors.go new file mode 100644 index 00000000000..4eb19f28d9c --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/eax/random_vectors.go @@ -0,0 +1,131 @@ +// These vectors include key length in {128, 192, 256}, tag size 128, and +// random nonce, header, and plaintext lengths. + +// This file was automatically generated. 
+ +package eax + +var randomVectors = []struct { + key, nonce, header, plaintext, ciphertext string +}{ + {"DFDE093F36B0356E5A81F609786982E3", + "1D8AC604419001816905BA72B14CED7E", + "152A1517A998D7A24163FCDD146DE81AC347C8B97088F502093C1ABB8F6E33D9A219C34D7603A18B1F5ABE02E56661B7D7F67E81EC08C1302EF38D80A859486D450E94A4F26AD9E68EEBBC0C857A0FC5CF9E641D63D565A7E361BC8908F5A8DC8FD6", + "1C8EAAB71077FE18B39730A3156ADE29C5EE824C7EE86ED2A253B775603FB237116E654F6FEC588DD27F523A0E01246FE73FE348491F2A8E9ABC6CA58D663F71CDBCF4AD798BE46C42AE6EE8B599DB44A1A48D7BBBBA0F7D2750181E1C5E66967F7D57CBD30AFBDA5727", + "79E7E150934BBEBF7013F61C60462A14D8B15AF7A248AFB8A344EF021C1500E16666891D6E973D8BB56B71A371F12CA34660C4410C016982B20F547E3762A58B7BF4F20236CADCF559E2BE7D783B13723B2741FC7CDC8997D839E39A3DDD2BADB96743DD7049F1BDB0516A262869915B3F70498AFB7B191BF960"}, + {"F10619EF02E5D94D7550EB84ED364A21", + "8DC0D4F2F745BBAE835CC5574B942D20", + "FE561358F2E8DF7E1024FF1AE9A8D36EBD01352214505CB99D644777A8A1F6027FA2BDBFC529A9B91136D5F2416CFC5F0F4EC3A1AFD32BDDA23CA504C5A5CB451785FABF4DFE4CD50D817491991A60615B30286361C100A95D1712F2A45F8E374461F4CA2B", + "D7B5A971FC219631D30EFC3664AE3127D9CF3097DAD9C24AC7905D15E8D9B25B026B31D68CAE00975CDB81EB1FD96FD5E1A12E2BB83FA25F1B1D91363457657FC03875C27F2946C5", + "2F336ED42D3CC38FC61660C4CD60BA4BD438B05F5965D8B7B399D2E7167F5D34F792D318F94DB15D67463AC449E13D568CC09BFCE32A35EE3EE96A041927680AE329811811E27F2D1E8E657707AF99BA96D13A478D695D59"}, + {"429F514EFC64D98A698A9247274CFF45", + "976AA5EB072F912D126ACEBC954FEC38", + "A71D89DC5B6CEDBB7451A27C3C2CAE09126DB4C421", + "5632FE62AB1DC549D54D3BC3FC868ACCEDEFD9ECF5E9F8", + "848AE4306CA8C7F416F8707625B7F55881C0AB430353A5C967CDA2DA787F581A70E34DBEBB2385"}, + {"398138F309085F47F8457CDF53895A63", + "F8A8A7F2D28E5FFF7BBC2F24353F7A36", + "5D633C21BA7764B8855CAB586F3746E236AD486039C83C6B56EFA9C651D38A41D6B20DAEE3418BFEA44B8BD6", + 
"A3BBAA91920AF5E10659818B1B3B300AC79BFC129C8329E75251F73A66D3AE0128EB91D5031E0A65C329DB7D1E9C0493E268", + "D078097267606E5FB07CFB7E2B4B718172A82C6A4CEE65D549A4DFB9838003BD2FBF64A7A66988AC1A632FD88F9E9FBB57C5A78AD2E086EACBA3DB68511D81C2970A"}, + {"7A4151EBD3901B42CBA45DAFB2E931BA", + "0FC88ACEE74DD538040321C330974EB8", + "250464FB04733BAB934C59E6AD2D6AE8D662CBCFEFBE61E5A308D4211E58C4C25935B72C69107722E946BFCBF416796600542D76AEB73F2B25BF53BAF97BDEB36ED3A7A51C31E7F170EB897457E7C17571D1BA0A908954E9", + "88C41F3EBEC23FAB8A362D969CAC810FAD4F7CA6A7F7D0D44F060F92E37E1183768DD4A8C733F71C96058D362A39876D183B86C103DE", + "74A25B2182C51096D48A870D80F18E1CE15867778E34FCBA6BD7BFB3739FDCD42AD0F2D9F4EBA29085285C6048C15BCE5E5166F1F962D3337AA88E6062F05523029D0A7F0BF9"}, + {"BFB147E1CD5459424F8C0271FC0E0DC5", + "EABCC126442BF373969EA3015988CC45", + "4C0880E1D71AA2C7", + "BE1B5EC78FBF73E7A6682B21BA7E0E5D2D1C7ABE", + "5660D7C1380E2F306895B1402CB2D6C37876504276B414D120F4CF92FDDDBB293A238EA0"}, + {"595DD6F52D18BC2CA8EB4EDAA18D9FA3", + "0F84B5D36CF4BC3B863313AF3B4D2E97", + "30AE6CC5F99580F12A779D98BD379A60948020C0B6FBD5746B30BA3A15C6CD33DAF376C70A9F15B6C0EB410A93161F7958AE23", + "8EF3687A1642B070970B0B91462229D1D76ABC154D18211F7152AA9FF368", + "317C1DDB11417E5A9CC4DDE7FDFF6659A5AC4B31DE025212580A05CDAC6024D3E4AE7C2966E52B9129E9ECDBED86"}, + {"44E6F2DC8FDC778AD007137D11410F50", + "270A237AD977F7187AA6C158A0BAB24F", + "509B0F0EB12E2AA5C5BA2DE553C07FAF4CE0C9E926531AA709A3D6224FCB783ACCF1559E10B1123EBB7D52E8AB54E6B5352A9ED0D04124BF0E9D9BACFD7E32B817B2E625F5EE94A64EDE9E470DE7FE6886C19B294F9F828209FE257A78", + "8B3D7815DF25618A5D0C55A601711881483878F113A12EC36CF64900549A3199555528559DC118F789788A55FAFD944E6E99A9CA3F72F238CD3F4D88223F7A745992B3FAED1848", + "1CC00D79F7AD82FDA71B58D286E5F34D0CC4CEF30704E771CC1E50746BDF83E182B078DB27149A42BAE619DF0F85B0B1090AD55D3B4471B0D6F6ECCD09C8F876B30081F0E7537A9624F8AAF29DA85E324122EFB4D68A56"}, + {"BB7BC352A03044B4428D8DBB4B0701FDEC4649FD17B81452", + 
"8B4BBE26CCD9859DCD84884159D6B0A4", + "2212BEB0E78E0F044A86944CF33C8D5C80D9DBE1034BF3BCF73611835C7D3A52F5BD2D81B68FD681B68540A496EE5DA16FD8AC8824E60E1EC2042BE28FB0BFAD4E4B03596446BDD8C37D936D9B3D5295BE19F19CF5ACE1D33A46C952CE4DE5C12F92C1DD051E04AEED", + "9037234CC44FFF828FABED3A7084AF40FA7ABFF8E0C0EFB57A1CC361E18FC4FAC1AB54F3ABFE9FF77263ACE16C3A", + "A9391B805CCD956081E0B63D282BEA46E7025126F1C1631239C33E92AA6F92CD56E5A4C56F00FF9658E93D48AF4EF0EF81628E34AD4DB0CDAEDCD2A17EE7"}, + {"99C0AD703196D2F60A74E6B378B838B31F82EA861F06FC4E", + "92745C018AA708ECFEB1667E9F3F1B01", + "828C69F376C0C0EC651C67749C69577D589EE39E51404D80EBF70C8660A8F5FD375473F4A7C611D59CB546A605D67446CE2AA844135FCD78BB5FBC90222A00D42920BB1D7EEDFB0C4672554F583EF23184F89063CDECBE482367B5F9AF3ACBC3AF61392BD94CBCD9B64677", + "A879214658FD0A5B0E09836639BF82E05EC7A5EF71D4701934BDA228435C68AC3D5CEB54997878B06A655EEACEFB1345C15867E7FE6C6423660C8B88DF128EBD6BCD85118DBAE16E9252FFB204324E5C8F38CA97759BDBF3CB0083", + "51FE87996F194A2585E438B023B345439EA60D1AEBED4650CDAF48A4D4EEC4FC77DC71CC4B09D3BEEF8B7B7AF716CE2B4EFFB3AC9E6323C18AC35E0AA6E2BBBC8889490EB6226C896B0D105EAB42BFE7053CCF00ED66BA94C1BA09A792AA873F0C3B26C5C5F9A936E57B25"}, + {"7086816D00D648FB8304AA8C9E552E1B69A9955FB59B25D1", + "0F45CF7F0BF31CCEB85D9DA10F4D749F", + "93F27C60A417D9F0669E86ACC784FC8917B502DAF30A6338F11B30B94D74FEFE2F8BE1BBE2EAD10FAB7EED3C6F72B7C3ECEE1937C32ED4970A6404E139209C05", + "877F046601F3CBE4FB1491943FA29487E738F94B99AF206262A1D6FF856C9AA0B8D4D08A54370C98F8E88FA3DCC2B14C1F76D71B2A4C7963AEE8AF960464C5BEC8357AD00DC8", + "FE96906B895CE6A8E72BC72344E2C8BB3C63113D70EAFA26C299BAFE77A8A6568172EB447FB3E86648A0AF3512DEB1AAC0819F3EC553903BF28A9FB0F43411237A774BF9EE03E445D280FBB9CD12B9BAAB6EF5E52691"}, + {"062F65A896D5BF1401BADFF70E91B458E1F9BD4888CB2E4D", + "5B11EA1D6008EBB41CF892FCA5B943D1", + "BAF4FF5C8242", + 
"A8870E091238355984EB2F7D61A865B9170F440BFF999A5993DD41A10F4440D21FF948DDA2BF663B2E03AC3324492DC5E40262ECC6A65C07672353BE23E7FB3A9D79FF6AA38D97960905A38DECC312CB6A59E5467ECF06C311CD43ADC0B543EDF34FE8BE611F176460D5627CA51F8F8D9FED71F55C", + "B10E127A632172CF8AA7539B140D2C9C2590E6F28C3CB892FC498FCE56A34F732FBFF32E79C7B9747D9094E8635A0C084D6F0247F9768FB5FF83493799A9BEC6C39572120C40E9292C8C947AE8573462A9108C36D9D7112E6995AE5867E6C8BB387D1C5D4BEF524F391B9FD9F0A3B4BFA079E915BCD920185CFD38D114C558928BD7D47877"}, + {"38A8E45D6D705A11AF58AED5A1344896998EACF359F2E26A", + "FD82B5B31804FF47D44199B533D0CF84", + "DE454D4E62FE879F2050EE3E25853623D3E9AC52EEC1A1779A48CFAF5ECA0BFDE44749391866D1", + "B804", + "164BB965C05EBE0931A1A63293EDF9C38C27"}, + {"34C33C97C6D7A0850DA94D78A58DC61EC717CD7574833068", + "343BE00DA9483F05C14F2E9EB8EA6AE8", + "78312A43EFDE3CAE34A65796FF059A3FE15304EEA5CF1D9306949FE5BF3349D4977D4EBE76C040FE894C5949E4E4D6681153DA87FB9AC5062063CA2EA183566343362370944CE0362D25FC195E124FD60E8682E665D13F2229DDA3E4B2CB1DCA", + "CC11BB284B1153578E4A5ED9D937B869DAF00F5B1960C23455CA9CC43F486A3BE0B66254F1041F04FDF459C8640465B6E1D2CF899A381451E8E7FCB50CF87823BE77E24B132BBEEDC72E53369B275E1D8F49ECE59F4F215230AC4FE133FC80E4F634EE80BA4682B62C86", + "E7F703DC31A95E3A4919FF957836CB76C063D81702AEA4703E1C2BF30831E58C4609D626EC6810E12EAA5B930F049FF9EFC22C3E3F1EBD4A1FB285CB02A1AC5AD46B425199FC0A85670A5C4E3DAA9636C8F64C199F42F18AAC8EA7457FD377F322DD7752D7D01B946C8F0A97E6113F0D50106F319AFD291AAACE"}, + {"C6ECF7F053573E403E61B83052A343D93CBCC179D1E835BE", + "E280E13D7367042E3AA09A80111B6184", + "21486C9D7A9647", + "5F2639AFA6F17931853791CD8C92382BBB677FD72D0AB1A080D0E49BFAA21810E963E4FACD422E92F65CBFAD5884A60CD94740DF31AF02F95AA57DA0C4401B0ED906", + "5C51DB20755302070C45F52E50128A67C8B2E4ED0EACB7E29998CCE2E8C289DD5655913EC1A51CC3AABE5CDC2402B2BE7D6D4BF6945F266FBD70BA9F37109067157AE7530678B45F64475D4EBFCB5FFF46A5"}, + {"5EC6CF7401BC57B18EF154E8C38ACCA8959E57D2F3975FF5", + 
"656B41CB3F9CF8C08BAD7EBFC80BD225", + "6B817C2906E2AF425861A7EF59BA5801F143EE2A139EE72697CDE168B4", + "2C0E1DDC9B1E5389BA63845B18B1F8A1DB062037151BCC56EF7C21C0BB4DAE366636BBA975685D7CC5A94AFBE89C769016388C56FB7B57CE750A12B718A8BDCF70E80E8659A8330EFC8F86640F21735E8C80E23FE43ABF23507CE3F964AE4EC99D", + "ED780CF911E6D1AA8C979B889B0B9DC1ABE261832980BDBFB576901D9EF5AB8048998E31A15BE54B3E5845A4D136AD24D0BDA1C3006168DF2F8AC06729CB0818867398150020131D8F04EDF1923758C9EABB5F735DE5EA1758D4BC0ACFCA98AFD202E9839B8720253693B874C65586C6F0"}, + {"C92F678EB2208662F5BCF3403EC05F5961E957908A3E79421E1D25FC19054153", + "DA0F3A40983D92F2D4C01FED33C7A192", + "2B6E9D26DB406A0FAB47608657AA10EFC2B4AA5F459B29FF85AC9A40BFFE7AEB04F77E9A11FAAA116D7F6D4DA417671A9AB02C588E0EF59CB1BFB4B1CC931B63A3B3A159FCEC97A04D1E6F0C7E6A9CEF6B0ABB04758A69F1FE754DF4C2610E8C46B6CF413BDB31351D55BEDCB7B4A13A1C98E10984475E0F2F957853", + "F37326A80E08", + "83519E53E321D334F7C10B568183775C0E9AAE55F806"}, + {"6847E0491BE57E72995D186D50094B0B3593957A5146798FCE68B287B2FB37B5", + "3EE1182AEBB19A02B128F28E1D5F7F99", + "D9F35ABB16D776CE", + "DB7566ED8EA95BDF837F23DB277BAFBC5E70D1105ADFD0D9EF15475051B1EF94709C67DCA9F8D5", + "2CDCED0C9EBD6E2A508822A685F7DCD1CDD99E7A5FCA786C234E7F7F1D27EC49751AD5DCFA30C5EDA87C43CAE3B919B6BBCFE34C8EDA59"}, + {"82B019673642C08388D3E42075A4D5D587558C229E4AB8F660E37650C4C41A0A", + "336F5D681E0410FAE7B607246092C6DC", + "D430CBD8FE435B64214E9E9CDC5DE99D31CFCFB8C10AA0587A49DF276611", + "998404153AD77003E1737EDE93ED79859EE6DCCA93CB40C4363AA817ABF2DBBD46E42A14A7183B6CC01E12A577888141363D0AE011EB6E8D28C0B235", + "9BEF69EEB60BD3D6065707B7557F25292A8872857CFBD24F2F3C088E4450995333088DA50FD9121221C504DF1D0CD5EFE6A12666C5D5BB12282CF4C19906E9CFAB97E9BDF7F49DC17CFC384B"}, + {"747B2E269B1859F0622C15C8BAD6A725028B1F94B8DB7326948D1E6ED663A8BC", + "AB91F7245DDCE3F1C747872D47BE0A8A", + "3B03F786EF1DDD76E1D42646DA4CD2A5165DC5383CE86D1A0B5F13F910DC278A4E451EE0192CBA178E13B3BA27FDC7840DF73D2E104B", + 
"6B803F4701114F3E5FE21718845F8416F70F626303F545BE197189E0A2BA396F37CE06D389EB2658BC7D56D67868708F6D0D32", + "1570DDB0BCE75AA25D1957A287A2C36B1A5F2270186DA81BA6112B7F43B0F3D1D0ED072591DCF1F1C99BBB25621FC39B896FF9BD9413A2845363A9DCD310C32CF98E57"}, + {"02E59853FB29AEDA0FE1C5F19180AD99A12FF2F144670BB2B8BADF09AD812E0A", + "C691294EF67CD04D1B9242AF83DD1421", + "879334DAE3", + "1E17F46A98FEF5CBB40759D95354", + "FED8C3FF27DDF6313AED444A2985B36CBA268AAD6AAC563C0BA28F6DB5DB"}, + {"F6C1FB9B4188F2288FF03BD716023198C3582CF2A037FC2F29760916C2B7FCDB", + "4228DA0678CA3534588859E77DFF014C", + "D8153CAF35539A61DD8D05B3C9B44F01E564FB9348BCD09A1C23B84195171308861058F0A3CD2A55B912A3AAEE06FF4D356C77275828F2157C2FC7C115DA39E443210CCC56BEDB0CC99BBFB227ABD5CC454F4E7F547C7378A659EEB6A7E809101A84F866503CB18D4484E1FA09B3EC7FC75EB2E35270800AA7", + "23B660A779AD285704B12EC1C580387A47BEC7B00D452C6570", + "5AA642BBABA8E49849002A2FAF31DB8FC7773EFDD656E469CEC19B3206D4174C9A263D0A05484261F6"}, + {"8FF6086F1FADB9A3FBE245EAC52640C43B39D43F89526BB5A6EBA47710931446", + "943188480C99437495958B0AE4831AA9", + "AD5CD0BDA426F6EBA23C8EB23DC73FF9FEC173355EDBD6C9344C4C4383F211888F7CE6B29899A6801DF6B38651A7C77150941A", + "80CD5EA8D7F81DDF5070B934937912E8F541A5301877528EB41AB60C020968D459960ED8FB73083329841A", + "ABAE8EB7F36FCA2362551E72DAC890BA1BB6794797E0FC3B67426EC9372726ED4725D379EA0AC9147E48DCD0005C502863C2C5358A38817C8264B5"}, + {"A083B54E6B1FE01B65D42FCD248F97BB477A41462BBFE6FD591006C022C8FD84", + "B0490F5BD68A52459556B3749ACDF40E", + "8892E047DA5CFBBDF7F3CFCBD1BD21C6D4C80774B1826999234394BD3E513CC7C222BB40E1E3140A152F19B3802F0D036C24A590512AD0E8", + "D7B15752789DC94ED0F36778A5C7BBB207BEC32BAC66E702B39966F06E381E090C6757653C3D26A81EC6AD6C364D66867A334C91BB0B8A8A4B6EACDF0783D09010AEBA2DD2062308FE99CC1F", + "C071280A732ADC93DF272BF1E613B2BB7D46FC6665EF2DC1671F3E211D6BDE1D6ADDD28DF3AA2E47053FC8BB8AE9271EC8BC8B2CFFA320D225B451685B6D23ACEFDD241FE284F8ADC8DB07F456985B14330BBB66E0FB212213E05B3E"}, +} diff --git 
a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/internal/byteutil/byteutil.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/internal/byteutil/byteutil.go new file mode 100644 index 00000000000..a6bdf51232e --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/internal/byteutil/byteutil.go @@ -0,0 +1,92 @@ +// Copyright (C) 2019 ProtonTech AG +// This file contains necessary tools for the eax and ocb packages. +// +// These functions SHOULD NOT be used elsewhere, since they are optimized for +// specific input nature in the EAX and OCB modes of operation. + +package byteutil + +// GfnDouble computes 2 * input in the field of 2^n elements. +// The irreducible polynomial in the finite field for n=128 is +// x^128 + x^7 + x^2 + x + 1 (equals 0x87) +// Constant-time execution in order to avoid side-channel attacks +func GfnDouble(input []byte) []byte { + if len(input) != 16 { + panic("Doubling in GFn only implemented for n = 128") + } + // If the first bit is zero, return 2L = L << 1 + // Else return (L << 1) xor 0^120 10000111 + shifted := ShiftBytesLeft(input) + shifted[15] ^= ((input[0] >> 7) * 0x87) + return shifted +} + +// ShiftBytesLeft outputs the byte array corresponding to x << 1 in binary. +func ShiftBytesLeft(x []byte) []byte { + l := len(x) + dst := make([]byte, l) + for i := 0; i < l-1; i++ { + dst[i] = (x[i] << 1) | (x[i+1] >> 7) + } + dst[l-1] = x[l-1] << 1 + return dst +} + +// ShiftNBytesLeft puts in dst the byte array corresponding to x << n in binary. +func ShiftNBytesLeft(dst, x []byte, n int) { + // Erase first n / 8 bytes + copy(dst, x[n/8:]) + + // Shift the remaining n % 8 bits + bits := uint(n % 8) + l := len(dst) + for i := 0; i < l-1; i++ { + dst[i] = (dst[i] << bits) | (dst[i+1] >> uint(8 - bits)) + } + dst[l-1] = dst[l-1] << bits + + // Append trailing zeroes + dst = append(dst, make([]byte, n/8)...) 
+} + +// XorBytesMut assumes equal input length, replaces X with X XOR Y +func XorBytesMut(X, Y []byte) { + for i := 0; i < len(X); i++ { + X[i] ^= Y[i] + } +} + +// XorBytes assumes equal input length, puts X XOR Y into Z +func XorBytes(Z, X, Y []byte) { + for i := 0; i < len(X); i++ { + Z[i] = X[i] ^ Y[i] + } +} + +// RightXor XORs smaller input (assumed Y) at the right of the larger input (assumed X) +func RightXor(X, Y []byte) []byte { + offset := len(X) - len(Y) + xored := make([]byte, len(X)) + copy(xored, X) + for i := 0; i < len(Y); i++ { + xored[offset+i] ^= Y[i] + } + return xored +} + +// SliceForAppend takes a slice and a requested number of bytes. It returns a +// slice with the contents of the given slice followed by that many bytes and a +// second slice that aliases into it and contains only the extra bytes. If the +// original slice has sufficient capacity then no allocation is performed. +func SliceForAppend(in []byte, n int) (head, tail []byte) { + if total := len(in) + n; cap(in) >= total { + head = in[:total] + } else { + head = make([]byte, total) + copy(head, in) + } + tail = head[len(in):] + return +} + diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/ocb.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/ocb.go new file mode 100644 index 00000000000..7f78cfa759b --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/ocb.go @@ -0,0 +1,317 @@ +// Copyright (C) 2019 ProtonTech AG + +// Package ocb provides an implementation of the OCB (offset codebook) mode of +// operation, as described in RFC-7253 of the IRTF and in Rogaway, Bellare, +// Black and Krovetz - OCB: A BLOCK-CIPHER MODE OF OPERATION FOR EFFICIENT +// AUTHENTICATED ENCRYPTION (2003). +// Security considerations (from RFC-7253): A private key MUST NOT be used to +// encrypt more than 2^48 blocks. Tag length should be at least 12 bytes (a +// brute-force forging adversary succeeds after 2^{tag length} attempts).
A +// single key SHOULD NOT be used to decrypt ciphertext with different tag +// lengths. Nonces need not be secret, but MUST NOT be reused. +// This package only supports underlying block ciphers with 128-bit blocks, +// such as AES-{128, 192, 256}, but may be extended to other sizes. +package ocb + +import ( + "bytes" + "crypto/cipher" + "crypto/subtle" + "errors" + "github.com/ProtonMail/go-crypto/internal/byteutil" + "math/bits" +) + +type ocb struct { + block cipher.Block + tagSize int + nonceSize int + mask mask + // Optimized en/decrypt: For each nonce N used to en/decrypt, the 'Ktop' + // internal variable can be reused for en/decrypting with nonces sharing + // all but the last 6 bits with N. The prefix of the first nonce used to + // compute the new Ktop, and the Ktop value itself, are stored in + // reusableKtop. If using incremental nonces, this saves one block cipher + // call every 63 out of 64 OCB encryptions, and stores one nonce and one + // output of the block cipher in memory only. + reusableKtop reusableKtop +} + +type mask struct { + // L_*, L_$, (L_i)_{i ∈ N} + lAst []byte + lDol []byte + L [][]byte +} + +type reusableKtop struct { + noncePrefix []byte + Ktop []byte +} + +const ( + defaultTagSize = 16 + defaultNonceSize = 15 +) + +const ( + enc = iota + dec +) + +func (o *ocb) NonceSize() int { + return o.nonceSize +} + +func (o *ocb) Overhead() int { + return o.tagSize +} + +// NewOCB returns an OCB instance with the given block cipher and default +// tag and nonce sizes. +func NewOCB(block cipher.Block) (cipher.AEAD, error) { + return NewOCBWithNonceAndTagSize(block, defaultNonceSize, defaultTagSize) +} + +// NewOCBWithNonceAndTagSize returns an OCB instance with the given block +// cipher, nonce length, and tag length. It returns an error on a zero +// nonce length or an overlong tag length. +// +// It is recommended to use at least 12 bytes as tag length.
+func NewOCBWithNonceAndTagSize( + block cipher.Block, nonceSize, tagSize int) (cipher.AEAD, error) { + if block.BlockSize() != 16 { + return nil, ocbError("Block cipher must have 128-bit blocks") + } + if nonceSize < 1 { + return nil, ocbError("Incorrect nonce length") + } + if nonceSize >= block.BlockSize() { + return nil, ocbError("Nonce length exceeds blocksize - 1") + } + if tagSize > block.BlockSize() { + return nil, ocbError("Custom tag length exceeds blocksize") + } + return &ocb{ + block: block, + tagSize: tagSize, + nonceSize: nonceSize, + mask: initializeMaskTable(block), + reusableKtop: reusableKtop{ + noncePrefix: nil, + Ktop: nil, + }, + }, nil +} + +func (o *ocb) Seal(dst, nonce, plaintext, adata []byte) []byte { + if len(nonce) > o.nonceSize { + panic("crypto/ocb: Incorrect nonce length given to OCB") + } + ret, out := byteutil.SliceForAppend(dst, len(plaintext)+o.tagSize) + o.crypt(enc, out, nonce, adata, plaintext) + return ret +} + +func (o *ocb) Open(dst, nonce, ciphertext, adata []byte) ([]byte, error) { + if len(nonce) > o.nonceSize { + panic("Nonce too long for this instance") + } + if len(ciphertext) < o.tagSize { + return nil, ocbError("Ciphertext shorter than tag length") + } + sep := len(ciphertext) - o.tagSize + ret, out := byteutil.SliceForAppend(dst, len(ciphertext)) + ciphertextData := ciphertext[:sep] + tag := ciphertext[sep:] + o.crypt(dec, out, nonce, adata, ciphertextData) + if subtle.ConstantTimeCompare(ret[sep:], tag) == 1 { + ret = ret[:sep] + return ret, nil + } + for i := range out { + out[i] = 0 + } + return nil, ocbError("Tag authentication failed") +} + +// On instruction enc (resp. dec), crypt is the encrypt (resp. decrypt) +// function. It returns the resulting plain/ciphertext with the tag appended. +func (o *ocb) crypt(instruction int, Y, nonce, adata, X []byte) []byte { + // + // Consider X as a sequence of 128-bit blocks + // + // Note: For encryption (resp. 
decryption), X is the plaintext (resp., the + // ciphertext without the tag). + blockSize := o.block.BlockSize() + + // + // Nonce-dependent and per-encryption variables + // + // Zero out the last 6 bits of the nonce into truncatedNonce to see if Ktop + // is already computed. + truncatedNonce := make([]byte, len(nonce)) + copy(truncatedNonce, nonce) + truncatedNonce[len(truncatedNonce)-1] &= 192 + Ktop := make([]byte, blockSize) + if bytes.Equal(truncatedNonce, o.reusableKtop.noncePrefix) { + Ktop = o.reusableKtop.Ktop + } else { + // Nonce = num2str(TAGLEN mod 128, 7) || zeros(120 - bitlen(N)) || 1 || N + paddedNonce := append(make([]byte, blockSize-1-len(nonce)), 1) + paddedNonce = append(paddedNonce, truncatedNonce...) + paddedNonce[0] |= byte(((8 * o.tagSize) % (8 * blockSize)) << 1) + // Last 6 bits of paddedNonce are already zero. Encrypt into Ktop + paddedNonce[blockSize-1] &= 192 + Ktop = paddedNonce + o.block.Encrypt(Ktop, Ktop) + o.reusableKtop.noncePrefix = truncatedNonce + o.reusableKtop.Ktop = Ktop + } + + // Stretch = Ktop || ((lower half of Ktop) XOR (lower half of Ktop << 8)) + xorHalves := make([]byte, blockSize/2) + byteutil.XorBytes(xorHalves, Ktop[:blockSize/2], Ktop[1:1+blockSize/2]) + stretch := append(Ktop, xorHalves...) + bottom := int(nonce[len(nonce)-1] & 63) + offset := make([]byte, len(stretch)) + byteutil.ShiftNBytesLeft(offset, stretch, bottom) + offset = offset[:blockSize] + + // + // Process any whole blocks + // + // Note: For encryption Y is ciphertext || tag, for decryption Y is + // plaintext || tag. 
+ checksum := make([]byte, blockSize) + m := len(X) / blockSize + for i := 0; i < m; i++ { + index := bits.TrailingZeros(uint(i + 1)) + if len(o.mask.L)-1 < index { + o.mask.extendTable(index) + } + byteutil.XorBytesMut(offset, o.mask.L[bits.TrailingZeros(uint(i+1))]) + blockX := X[i*blockSize : (i+1)*blockSize] + blockY := Y[i*blockSize : (i+1)*blockSize] + byteutil.XorBytes(blockY, blockX, offset) + switch instruction { + case enc: + o.block.Encrypt(blockY, blockY) + byteutil.XorBytesMut(blockY, offset) + byteutil.XorBytesMut(checksum, blockX) + case dec: + o.block.Decrypt(blockY, blockY) + byteutil.XorBytesMut(blockY, offset) + byteutil.XorBytesMut(checksum, blockY) + } + } + // + // Process any final partial block and compute raw tag + // + tag := make([]byte, blockSize) + if len(X)%blockSize != 0 { + byteutil.XorBytesMut(offset, o.mask.lAst) + pad := make([]byte, blockSize) + o.block.Encrypt(pad, offset) + chunkX := X[blockSize*m:] + chunkY := Y[blockSize*m : len(X)] + byteutil.XorBytes(chunkY, chunkX, pad[:len(chunkX)]) + // P_* || bit(1) || zeroes(127) - len(P_*) + switch instruction { + case enc: + paddedY := append(chunkX, byte(128)) + paddedY = append(paddedY, make([]byte, blockSize-len(chunkX)-1)...) + byteutil.XorBytesMut(checksum, paddedY) + case dec: + paddedX := append(chunkY, byte(128)) + paddedX = append(paddedX, make([]byte, blockSize-len(chunkY)-1)...) + byteutil.XorBytesMut(checksum, paddedX) + } + byteutil.XorBytes(tag, checksum, offset) + byteutil.XorBytesMut(tag, o.mask.lDol) + o.block.Encrypt(tag, tag) + byteutil.XorBytesMut(tag, o.hash(adata)) + copy(Y[blockSize*m+len(chunkY):], tag[:o.tagSize]) + } else { + byteutil.XorBytes(tag, checksum, offset) + byteutil.XorBytesMut(tag, o.mask.lDol) + o.block.Encrypt(tag, tag) + byteutil.XorBytesMut(tag, o.hash(adata)) + copy(Y[blockSize*m:], tag[:o.tagSize]) + } + return Y +} + +// This hash function is used to compute the tag. 
Per design, on empty input it +// returns a slice of zeros, of the same length as the underlying block cipher +// block size. +func (o *ocb) hash(adata []byte) []byte { + // + // Consider A as a sequence of 128-bit blocks + // + A := make([]byte, len(adata)) + copy(A, adata) + blockSize := o.block.BlockSize() + + // + // Process any whole blocks + // + sum := make([]byte, blockSize) + offset := make([]byte, blockSize) + m := len(A) / blockSize + for i := 0; i < m; i++ { + chunk := A[blockSize*i : blockSize*(i+1)] + index := bits.TrailingZeros(uint(i + 1)) + // If the mask table is too short + if len(o.mask.L)-1 < index { + o.mask.extendTable(index) + } + byteutil.XorBytesMut(offset, o.mask.L[index]) + byteutil.XorBytesMut(chunk, offset) + o.block.Encrypt(chunk, chunk) + byteutil.XorBytesMut(sum, chunk) + } + + // + // Process any final partial block; compute final hash value + // + if len(A)%blockSize != 0 { + byteutil.XorBytesMut(offset, o.mask.lAst) + // Pad block with 1 || 0 ^ 127 - bitlength(a) + ending := make([]byte, blockSize-len(A)%blockSize) + ending[0] = 0x80 + encrypted := append(A[blockSize*m:], ending...) 
+ byteutil.XorBytesMut(encrypted, offset) + o.block.Encrypt(encrypted, encrypted) + byteutil.XorBytesMut(sum, encrypted) + } + return sum +} + +func initializeMaskTable(block cipher.Block) mask { + // + // Key-dependent variables + // + lAst := make([]byte, block.BlockSize()) + block.Encrypt(lAst, lAst) + lDol := byteutil.GfnDouble(lAst) + L := make([][]byte, 1) + L[0] = byteutil.GfnDouble(lDol) + + return mask{ + lAst: lAst, + lDol: lDol, + L: L, + } +} + +// Extends the L array of mask m up to L[limit], with L[i] = GfnDouble(L[i-1]) +func (m *mask) extendTable(limit int) { + for i := len(m.L); i <= limit; i++ { + m.L = append(m.L, byteutil.GfnDouble(m.L[i-1])) + } +} + +func ocbError(err string) error { + return errors.New("crypto/ocb: " + err) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/random_vectors.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/random_vectors.go new file mode 100644 index 00000000000..0efaf344fd5 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/random_vectors.go @@ -0,0 +1,136 @@ +// In the test vectors provided by RFC 7253, the "bottom" +// internal variable, which defines "offset" for the first time, does not +// exceed 15. However, it can attain values up to 63. + +// These vectors include key length in {128, 192, 256}, tag size 128, and +// random nonce, header, and plaintext lengths. + +// This file was automatically generated. 
+ +package ocb + +var randomVectors = []struct { + key, nonce, header, plaintext, ciphertext string +}{ + + {"9438C5D599308EAF13F800D2D31EA7F0", + "C38EE4801BEBFFA1CD8635BE", + "0E507B7DADD8A98CDFE272D3CB6B3E8332B56AE583FB049C0874D4200BED16BD1A044182434E9DA0E841F182DFD5B3016B34641CED0784F1745F63AB3D0DA22D3351C9EF9A658B8081E24498EBF61FCE40DA6D8E184536", + "962D227786FB8913A8BAD5DC3250", + "EEDEF5FFA5986D1E3BF86DDD33EF9ADC79DCA06E215FA772CCBA814F63AD"}, + {"BA7DE631C7D6712167C6724F5B9A2B1D", + "35263EBDA05765DC0E71F1F5", + "0103257B4224507C0242FEFE821EA7FA42E0A82863E5F8B68F7D881B4B44FA428A2B6B21D2F591260802D8AB6D83", + "9D6D1FC93AE8A64E7889B7B2E3521EFA9B920A8DDB692E6F833DDC4A38AFA535E5E2A3ED82CB7E26404AB86C54D01C4668F28398C2DF33D5D561CBA1C8DCFA7A912F5048E545B59483C0E3221F54B14DAA2E4EB657B3BEF9554F34CAD69B2724AE962D3D8A", + "E93852D1985C5E775655E937FA79CE5BF28A585F2AF53A5018853B9634BE3C84499AC0081918FDCE0624494D60E25F76ACD6853AC7576E3C350F332249BFCABD4E73CEABC36BE4EDDA40914E598AE74174A0D7442149B26990899491BDDFE8FC54D6C18E83AE9E9A6FFBF5D376565633862EEAD88D"}, + {"2E74B25289F6FD3E578C24866E9C72A5", + "FD912F15025AF8414642BA1D1D", + "FB5FB8C26F365EEDAB5FE260C6E3CCD27806729C8335F146063A7F9EA93290E56CF84576EB446350D22AD730547C267B1F0BBB97EB34E1E2C41A", + "6C092EBF78F76EE8C1C6E592277D9545BA16EDB67BC7D8480B9827702DC2F8A129E2B08A2CE710CA7E1DA45CE162BB6CD4B512E632116E2211D3C90871EFB06B8D4B902681C7FB", + "6AC0A77F26531BF4F354A1737F99E49BE32ECD909A7A71AD69352906F54B08A9CE9B8CA5D724CBFFC5673437F23F630697F3B84117A1431D6FA8CC13A974FB4AD360300522E09511B99E71065D5AC4BBCB1D791E864EF4"}, + {"E7EC507C802528F790AFF5303A017B17", + "4B97A7A568940A9E3CE7A99E93031E", + "28349BDC5A09390C480F9B8AA3EDEA3DDB8B9D64BCA322C570B8225DF0E31190DAB25A4014BA39519E02ABFB12B89AA28BBFD29E486E7FB28734258C817B63CED9912DBAFEBB93E2798AB2890DE3B0ACFCFF906AB15563EF7823CE83D27CDB251195E22BD1337BCBDE65E7C2C427321C463C2777BFE5AEAA", + "9455B3EA706B74", + "7F33BA3EA848D48A96B9530E26888F43EBD4463C9399B6"}, + 
{"6C928AA3224736F28EE7378DE0090191", + "8936138E2E4C6A13280017A1622D", + "6202717F2631565BDCDC57C6584543E72A7C8BD444D0D108ED35069819633C", + "DA0691439E5F035F3E455269D14FE5C201C8C9B0A3FE2D3F86BCC59387C868FE65733D388360B31E3CE28B4BF6A8BE636706B536D5720DB66B47CF1C7A5AFD6F61E0EF90F1726D6B0E169F9A768B2B7AE4EE00A17F630AC905FCAAA1B707FFF25B3A1AAE83B504837C64A5639B2A34002B300EC035C9B43654DA55", + "B8804D182AB0F0EEB464FA7BD1329AD6154F982013F3765FEDFE09E26DAC078C9C1439BFC1159D6C02A25E3FF83EF852570117B315852AD5EE20E0FA3AA0A626B0E43BC0CEA38B44579DD36803455FB46989B90E6D229F513FD727AF8372517E9488384C515D6067704119C931299A0982EDDFB9C2E86A90C450C077EB222511EC9CCABC9FCFDB19F70088"}, + {"ECEA315CA4B3F425B0C9957A17805EA4", + "664CDAE18403F4F9BA13015A44FC", + "642AFB090D6C6DB46783F08B01A3EF2A8FEB5736B531EAC226E7888FCC8505F396818F83105065FACB3267485B9E5E4A0261F621041C08FCCB2A809A49AB5252A91D0971BCC620B9D614BD77E57A0EED2FA5", + "6852C31F8083E20E364CEA21BB7854D67CEE812FE1C9ED2425C0932A90D3780728D1BB", + "2ECEF962A9695A463ADABB275BDA9FF8B2BA57AEC2F52EFFB700CD9271A74D2A011C24AEA946051BD6291776429B7E681BA33E"}, + {"4EE616C4A58AAA380878F71A373461F6", + "91B8C9C176D9C385E9C47E52", + "CDA440B7F9762C572A718AC754EDEECC119E5EE0CCB9FEA4FFB22EEE75087C032EBF3DA9CDD8A28CC010B99ED45143B41A4BA50EA2A005473F89639237838867A57F23B0F0ED3BF22490E4501DAC9C658A9B9F", + "D6E645FA9AE410D15B8123FD757FA356A8DBE9258DDB5BE88832E615910993F497EC", + "B70ED7BF959FB2AAED4F36174A2A99BFB16992C8CDF369C782C4DB9C73DE78C5DB8E0615F647243B97ACDB24503BC9CADC48"}, + {"DCD475773136C830D5E3D0C5FE05B7FF", + "BB8E1FBB483BE7616A922C4A", + "36FEF2E1CB29E76A6EA663FC3AF66ECD7404F466382F7B040AABED62293302B56E8783EF7EBC21B4A16C3E78A7483A0A403F253A2CDC5BBF79DC3DAE6C73F39A961D8FBBE8D41B", + "441E886EA38322B2437ECA7DEB5282518865A66780A454E510878E61BFEC3106A3CD93D2A02052E6F9E1832F9791053E3B76BF4C07EFDD6D4106E3027FABB752E60C1AA425416A87D53938163817A1051EBA1D1DEEB4B9B25C7E97368B52E5911A31810B0EC5AF547559B6142D9F4C4A6EF24A4CF75271BF9D48F62B", + 
"1BE4DD2F4E25A6512C2CC71D24BBB07368589A94C2714962CD0ACE5605688F06342587521E75F0ACAFFD86212FB5C34327D238DB36CF2B787794B9A4412E7CD1410EA5DDD2450C265F29CF96013CD213FD2880657694D718558964BC189B4A84AFCF47EB012935483052399DBA5B088B0A0477F20DFE0E85DCB735E21F22A439FB837DD365A93116D063E607"}, + {"3FBA2B3D30177FFE15C1C59ED2148BB2C091F5615FBA7C07", + "FACF804A4BEBF998505FF9DE", + "8213B9263B2971A5BDA18DBD02208EE1", + "15B323926993B326EA19F892D704439FC478828322AF72118748284A1FD8A6D814E641F70512FD706980337379F31DC63355974738D7FEA87AD2858C0C2EBBFBE74371C21450072373C7B651B334D7C4D43260B9D7CCD3AF9EDB", + "6D35DC1469B26E6AAB26272A41B46916397C24C485B61162E640A062D9275BC33DDCFD3D9E1A53B6C8F51AC89B66A41D59B3574197A40D9B6DCF8A4E2A001409C8112F16B9C389E0096179DB914E05D6D11ED0005AD17E1CE105A2F0BAB8F6B1540DEB968B7A5428FF44"}, + {"53B52B8D4D748BCDF1DDE68857832FA46227FA6E2F32EFA1", + "0B0EF53D4606B28D1398355F", + "F23882436349094AF98BCACA8218E81581A043B19009E28EFBF2DE37883E04864148CC01D240552CA8844EC1456F42034653067DA67E80F87105FD06E14FF771246C9612867BE4D215F6D761", + "F15030679BD4088D42CAC9BF2E9606EAD4798782FA3ED8C57EBE7F84A53236F51B25967C6489D0CD20C9EEA752F9BC", + "67B96E2D67C3729C96DAEAEDF821D61C17E648643A2134C5621FEC621186915AD80864BFD1EB5B238BF526A679385E012A457F583AFA78134242E9D9C1B4E4"}, + {"0272DD80F23399F49BFC320381A5CD8225867245A49A7D41", + "5C83F4896D0738E1366B1836", + "69B0337289B19F73A12BAEEA857CCAF396C11113715D9500CCCF48BA08CFF12BC8B4BADB3084E63B85719DB5058FA7C2C11DEB096D7943CFA7CAF5", + "C01AD10FC8B562CD17C7BC2FAB3E26CBDFF8D7F4DEA816794BBCC12336991712972F52816AABAB244EB43B0137E2BAC1DD413CE79531E78BEF782E6B439612BB3AEF154DE3502784F287958EBC159419F9EBA27916A28D6307324129F506B1DE80C1755A929F87", + "FEFE52DD7159C8DD6E8EC2D3D3C0F37AB6CB471A75A071D17EC4ACDD8F3AA4D7D4F7BB559F3C09099E3D9003E5E8AA1F556B79CECDE66F85B08FA5955E6976BF2695EA076388A62D2AD5BAB7CBF1A7F3F4C8D5CDF37CDE99BD3E30B685D9E5EEE48C7C89118EF4878EB89747F28271FA2CC45F8E9E7601"}, + 
{"3EEAED04A455D6E5E5AB53CFD5AFD2F2BC625C7BF4BE49A5", + "36B88F63ADBB5668588181D774", + "D367E3CB3703E762D23C6533188EF7028EFF9D935A3977150361997EC9DEAF1E4794BDE26AA8B53C124980B1362EC86FCDDFC7A90073171C1BAEE351A53234B86C66E8AB92FAE99EC6967A6D3428892D80", + "573454C719A9A55E04437BF7CBAAF27563CCCD92ADD5E515CD63305DFF0687E5EEF790C5DCA5C0033E9AB129505E2775438D92B38F08F3B0356BA142C6F694", + "E9F79A5B432D9E682C9AAA5661CFC2E49A0FCB81A431E54B42EB73DD3BED3F377FEC556ABA81624BA64A5D739AD41467460088F8D4F442180A9382CA635745473794C382FCDDC49BA4EB6D8A44AE3C"}, + {"B695C691538F8CBD60F039D0E28894E3693CC7C36D92D79D", + "BC099AEB637361BAC536B57618", + "BFFF1A65AE38D1DC142C71637319F5F6508E2CB33C9DCB94202B359ED5A5ED8042E7F4F09231D32A7242976677E6F4C549BF65FADC99E5AF43F7A46FD95E16C2", + "081DF3FD85B415D803F0BE5AC58CFF0023FDDED99788296C3731D8", + "E50C64E3614D94FE69C47092E46ACC9957C6FEA2CCBF96BC62FBABE7424753C75F9C147C42AE26FE171531"}, + {"C9ACBD2718F0689A1BE9802A551B6B8D9CF5614DAF5E65ED", + "B1B0AAF373B8B026EB80422051D8", + "6648C0E61AC733C76119D23FB24548D637751387AA2EAE9D80E912B7BD486CAAD9EAF4D7A5FE2B54AAD481E8EC94BB4D558000896E2010462B70C9FED1E7273080D1", + "189F591F6CB6D59AFEDD14C341741A8F1037DC0DF00FC57CE65C30F49E860255CEA5DC6019380CC0FE8880BC1A9E685F41C239C38F36E3F2A1388865C5C311059C0A", + "922A5E949B61D03BE34AB5F4E58607D4504EA14017BB363DAE3C873059EA7A1C77A746FB78981671D26C2CF6D9F24952D510044CE02A10177E9DB42D0145211DFE6E84369C5E3BC2669EAB4147B2822895F9"}, + {"7A832BD2CF5BF4919F353CE2A8C86A5E406DA2D52BE16A72", + "2F2F17CECF7E5A756D10785A3CB9DB", + "61DA05E3788CC2D8405DBA70C7A28E5AF699863C9F72E6C6770126929F5D6FA267F005EBCF49495CB46400958A3AE80D1289D1C671", + "44E91121195A41AF14E8CFDBD39A4B517BE0DF1A72977ED8A3EEF8EEDA1166B2EB6DB2C4AE2E74FA0F0C74537F659BFBD141E5DDEC67E64EDA85AABD3F52C85A785B9FB3CECD70E7DF", + "BEDF596EA21288D2B84901E188F6EE1468B14D5161D3802DBFE00D60203A24E2AB62714BF272A45551489838C3A7FEAADC177B591836E73684867CCF4E12901DCF2064058726BBA554E84ADC5136F507E961188D4AF06943D3"}, 
+ {"1508E8AE9079AA15F1CEC4F776B4D11BCCB061B58AA56C18", + "BCA625674F41D1E3AB47672DC0C3", + "8B12CF84F16360F0EAD2A41BC021530FFCEC7F3579CAE658E10E2D3D81870F65AFCED0C77C6C4C6E6BA424FF23088C796BA6195ABA35094BF1829E089662E7A95FC90750AE16D0C8AFA55DAC789D7735B970B58D4BE7CEC7341DA82A0179A01929C27A59C5063215B859EA43", + "E525422519ECE070E82C", + "B47BC07C3ED1C0A43BA52C43CBACBCDBB29CAF1001E09FDF7107"}, + {"7550C2761644E911FE9ADD119BAC07376BEA442845FEAD876D7E7AC1B713E464", + "36D2EC25ADD33CDEDF495205BBC923", + "7FCFE81A3790DE97FFC3DE160C470847EA7E841177C2F759571CBD837EA004A6CA8C6F4AEBFF2E9FD552D73EB8A30705D58D70C0B67AEEA280CBBF0A477358ACEF1E7508F2735CD9A0E4F9AC92B8C008F575D3B6278F1C18BD01227E3502E5255F3AB1893632AD00C717C588EF652A51A43209E7EE90", + "2B1A62F8FDFAA3C16470A21AD307C9A7D03ADE8EF72C69B06F8D738CDE578D7AEFD0D40BD9C022FB9F580DF5394C998ACCCEFC5471A3996FB8F1045A81FDC6F32D13502EA65A211390C8D882B8E0BEFD8DD8CBEF51D1597B124E9F7F", + "C873E02A22DB89EB0787DB6A60B99F7E4A0A085D5C4232A81ADCE2D60AA36F92DDC33F93DD8640AC0E08416B187FB382B3EC3EE85A64B0E6EE41C1366A5AD2A282F66605E87031CCBA2FA7B2DA201D975994AADE3DD1EE122AE09604AD489B84BF0C1AB7129EE16C6934850E"}, + {"A51300285E554FDBDE7F771A9A9A80955639DD87129FAEF74987C91FB9687C71", + "81691D5D20EC818FCFF24B33DECC", + "C948093218AA9EB2A8E44A87EEA73FC8B6B75A196819A14BD83709EA323E8DF8B491045220E1D88729A38DBCFFB60D3056DAD4564498FD6574F74512945DEB34B69329ACED9FFC05D5D59DFCD5B973E2ACAFE6AD1EF8BBBC49351A2DD12508ED89ED", + "EB861165DAF7625F827C6B574ED703F03215", + "C6CD1CE76D2B3679C1B5AA1CFD67CCB55444B6BFD3E22C81CBC9BB738796B83E54E3"}, + {"8CE0156D26FAEB7E0B9B800BBB2E9D4075B5EAC5C62358B0E7F6FCE610223282", + "D2A7B94DD12CDACA909D3AD7", + "E021A78F374FC271389AB9A3E97077D755", + "7C26000B58929F5095E1CEE154F76C2A299248E299F9B5ADE6C403AA1FD4A67FD4E0232F214CE7B919EE7A1027D2B76C57475715CD078461", + "C556FB38DF069B56F337B5FF5775CE6EAA16824DFA754F20B78819028EA635C3BB7AA731DE8776B2DCB67DCA2D33EEDF3C7E52EA450013722A41755A0752433ED17BDD5991AAE77A"}, + 
{"1E8000A2CE00A561C9920A30BF0D7B983FEF8A1014C8F04C35CA6970E6BA02BD", + "65ED3D63F79F90BBFD19775E", + "336A8C0B7243582A46B221AA677647FCAE91", + "134A8B34824A290E7B", + "914FBEF80D0E6E17F8BDBB6097EBF5FBB0554952DC2B9E5151"}, + {"53D5607BBE690B6E8D8F6D97F3DF2BA853B682597A214B8AA0EA6E598650AF15", + "C391A856B9FE234E14BA1AC7BB40FF", + "479682BC21349C4BE1641D5E78FE2C79EC1B9CF5470936DCAD9967A4DCD7C4EFADA593BC9EDE71E6A08829B8580901B61E274227E9D918502DE3", + "EAD154DC09C5E26C5D26FF33ED148B27120C7F2C23225CC0D0631B03E1F6C6D96FEB88C1A4052ACB4CE746B884B6502931F407021126C6AAB8C514C077A5A38438AE88EE", + "938821286EBB671D999B87C032E1D6055392EB564E57970D55E545FC5E8BAB90E6E3E3C0913F6320995FC636D72CD9919657CC38BD51552F4A502D8D1FE56DB33EBAC5092630E69EBB986F0E15CEE9FC8C052501"}, + {"294362FCC984F440CEA3E9F7D2C06AF20C53AAC1B3738CA2186C914A6E193ABB", + "B15B61C8BB39261A8F55AB178EC3", + "D0729B6B75BB", + "2BD089ADCE9F334BAE3B065996C7D616DD0C27DF4218DCEEA0FBCA0F968837CE26B0876083327E25681FDDD620A32EC0DA12F73FAE826CC94BFF2B90A54D2651", + "AC94B25E4E21DE2437B806966CCD5D9385EF0CD4A51AB9FA6DE675C7B8952D67802E9FEC1FDE9F5D1EAB06057498BC0EEA454804FC9D2068982A3E24182D9AC2E7AB9994DDC899A604264583F63D066B"}, + {"959DBFEB039B1A5B8CE6A44649B602AAA5F98A906DB96143D202CD2024F749D9", + "01D7BDB1133E9C347486C1EFA6", + "F3843955BD741F379DD750585EDC55E2CDA05CCBA8C1F4622AC2FE35214BC3A019B8BD12C4CC42D9213D1E1556941E8D8450830287FFB3B763A13722DD4140ED9846FB5FFF745D7B0B967D810A068222E10B259AF1D392035B0D83DC1498A6830B11B2418A840212599171E0258A1C203B05362978", + "A21811232C950FA8B12237C2EBD6A7CD2C3A155905E9E0C7C120", + "63C1CE397B22F1A03F1FA549B43178BC405B152D3C95E977426D519B3DFCA28498823240592B6EEE7A14"}, + {"096AE499F5294173F34FF2B375F0E5D5AB79D0D03B33B1A74D7D576826345DF4", + "0C52B3D11D636E5910A4DD76D32C", + "229E9ECA3053789E937447BC719467075B6138A142DA528DA8F0CF8DDF022FD9AF8E74779BA3AC306609", + 
"8B7A00038783E8BAF6EDEAE0C4EAB48FC8FD501A588C7E4A4DB71E3604F2155A97687D3D2FFF8569261375A513CF4398CE0F87CA1658A1050F6EF6C4EA3E25", + "C20B6CF8D3C8241825FD90B2EDAC7593600646E579A8D8DAAE9E2E40C3835FE801B2BE4379131452BC5182C90307B176DFBE2049544222FE7783147B690774F6D9D7CEF52A91E61E298E9AA15464AC"}, +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/rfc7253_test_vectors_suite_a.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/rfc7253_test_vectors_suite_a.go new file mode 100644 index 00000000000..330309ff5f8 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/rfc7253_test_vectors_suite_a.go @@ -0,0 +1,78 @@ +package ocb + +import ( + "encoding/hex" +) + +// Test vectors from https://tools.ietf.org/html/rfc7253. Note that key is +// shared across tests. +var testKey, _ = hex.DecodeString("000102030405060708090A0B0C0D0E0F") + +var rfc7253testVectors = []struct { + nonce, header, plaintext, ciphertext string +}{ + {"BBAA99887766554433221100", + "", + "", + "785407BFFFC8AD9EDCC5520AC9111EE6"}, + {"BBAA99887766554433221101", + "0001020304050607", + "0001020304050607", + "6820B3657B6F615A5725BDA0D3B4EB3A257C9AF1F8F03009"}, + {"BBAA99887766554433221102", + "0001020304050607", + "", + "81017F8203F081277152FADE694A0A00"}, + {"BBAA99887766554433221103", + "", + "0001020304050607", + "45DD69F8F5AAE72414054CD1F35D82760B2CD00D2F99BFA9"}, + {"BBAA99887766554433221104", + "000102030405060708090A0B0C0D0E0F", + "000102030405060708090A0B0C0D0E0F", + "571D535B60B277188BE5147170A9A22C3AD7A4FF3835B8C5701C1CCEC8FC3358"}, + {"BBAA99887766554433221105", + "000102030405060708090A0B0C0D0E0F", + "", + "8CF761B6902EF764462AD86498CA6B97"}, + {"BBAA99887766554433221106", + "", + "000102030405060708090A0B0C0D0E0F", + "5CE88EC2E0692706A915C00AEB8B2396F40E1C743F52436BDF06D8FA1ECA343D"}, + {"BBAA99887766554433221107", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "000102030405060708090A0B0C0D0E0F1011121314151617", + 
"1CA2207308C87C010756104D8840CE1952F09673A448A122C92C62241051F57356D7F3C90BB0E07F"}, + {"BBAA99887766554433221108", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "", + "6DC225A071FC1B9F7C69F93B0F1E10DE"}, + {"BBAA99887766554433221109", + "", + "000102030405060708090A0B0C0D0E0F1011121314151617", + "221BD0DE7FA6FE993ECCD769460A0AF2D6CDED0C395B1C3CE725F32494B9F914D85C0B1EB38357FF"}, + {"BBAA9988776655443322110A", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F", + "BD6F6C496201C69296C11EFD138A467ABD3C707924B964DEAFFC40319AF5A48540FBBA186C5553C68AD9F592A79A4240"}, + {"BBAA9988776655443322110B", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F", + "", + "FE80690BEE8A485D11F32965BC9D2A32"}, + {"BBAA9988776655443322110C", + "", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F", + "2942BFC773BDA23CABC6ACFD9BFD5835BD300F0973792EF46040C53F1432BCDFB5E1DDE3BC18A5F840B52E653444D5DF"}, + {"BBAA9988776655443322110D", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F2021222324252627", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F2021222324252627", + "D5CA91748410C1751FF8A2F618255B68A0A12E093FF454606E59F9C1D0DDC54B65E8628E568BAD7AED07BA06A4A69483A7035490C5769E60"}, + {"BBAA9988776655443322110E", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F2021222324252627", + "", + "C5CD9D1850C141E358649994EE701B68"}, + {"BBAA9988776655443322110F", + "", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F2021222324252627", + "4412923493C57D5DE0D700F753CCE0D1D2D95060122E9F15A5DDBFC5787E50B5CC55EE507BCB084E479AD363AC366B95A98CA5F3000B1479"}, +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/rfc7253_test_vectors_suite_b.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/rfc7253_test_vectors_suite_b.go new file mode 100644 index 
00000000000..5dc158f0128 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/ocb/rfc7253_test_vectors_suite_b.go @@ -0,0 +1,24 @@ +package ocb + +// Second set of test vectors from https://tools.ietf.org/html/rfc7253 +var rfc7253TestVectorTaglen96 = struct { + key, nonce, header, plaintext, ciphertext string +}{"0F0E0D0C0B0A09080706050403020100", + "BBAA9988776655443322110D", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F2021222324252627", + "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F2021222324252627", + "1792A4E31E0755FB03E31B22116E6C2DDF9EFD6E33D536F1A0124B0A55BAE884ED93481529C76B6AD0C515F4D1CDD4FDAC4F02AA"} + +var rfc7253AlgorithmTest = []struct { + KEYLEN, TAGLEN int + OUTPUT string }{ + {128, 128, "67E944D23256C5E0B6C61FA22FDF1EA2"}, + {192, 128, "F673F2C3E7174AAE7BAE986CA9F29E17"}, + {256, 128, "D90EB8E9C977C88B79DD793D7FFA161C"}, + {128, 96, "77A3D8E73589158D25D01209"}, + {192, 96, "05D56EAD2752C86BE6932C5E"}, + {256, 96, "5458359AC23B0CBA9E6330DD"}, + {128, 64, "192C9B7BD90BA06A"}, + {192, 64, "0066BC6E0EF34E24"}, + {256, 64, "7D4EA5D445501CBE"}, + } diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/aes/keywrap/keywrap.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/aes/keywrap/keywrap.go new file mode 100644 index 00000000000..3c6251d1ce6 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/aes/keywrap/keywrap.go @@ -0,0 +1,153 @@ +// Copyright 2014 Matthew Endsley +// All rights reserved +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted providing that the following conditions +// are met: +// 1. Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// 2. 
Redistributions in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// +// THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR +// IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +// ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY +// DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +// OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +// HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, +// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING +// IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +// POSSIBILITY OF SUCH DAMAGE. + +// Package keywrap is an implementation of the RFC 3394 AES key wrapping +// algorithm. This is used in OpenPGP with elliptic curve keys. +package keywrap + +import ( + "crypto/aes" + "encoding/binary" + "errors" +) + +var ( + // ErrWrapPlaintext is returned if the plaintext is not a multiple + // of 64 bits. + ErrWrapPlaintext = errors.New("keywrap: plainText must be a multiple of 64 bits") + + // ErrUnwrapCiphertext is returned if the ciphertext is not a + // multiple of 64 bits. + ErrUnwrapCiphertext = errors.New("keywrap: cipherText must be a multiple of 64 bits") + + // ErrUnwrapFailed is returned if unwrapping a key fails. + ErrUnwrapFailed = errors.New("keywrap: failed to unwrap key") + + // NB: the AES NewCipher call only fails if the key is an invalid length. + + // ErrInvalidKey is returned when the AES key is invalid. + ErrInvalidKey = errors.New("keywrap: invalid AES key") +) + +// Wrap a key using the RFC 3394 AES Key Wrap Algorithm.
+func Wrap(key, plainText []byte) ([]byte, error) { + if len(plainText)%8 != 0 { + return nil, ErrWrapPlaintext + } + + c, err := aes.NewCipher(key) + if err != nil { + return nil, ErrInvalidKey + } + + nblocks := len(plainText) / 8 + + // 1) Initialize variables. + var block [aes.BlockSize]byte + // - Set A = IV, an initial value (see 2.2.3) + for ii := 0; ii < 8; ii++ { + block[ii] = 0xA6 + } + + // - For i = 1 to n + // - Set R[i] = P[i] + intermediate := make([]byte, len(plainText)) + copy(intermediate, plainText) + + // 2) Calculate intermediate values. + for ii := 0; ii < 6; ii++ { + for jj := 0; jj < nblocks; jj++ { + // - B = AES(K, A | R[i]) + copy(block[8:], intermediate[jj*8:jj*8+8]) + c.Encrypt(block[:], block[:]) + + // - A = MSB(64, B) ^ t where t = (n*j)+1 + t := uint64(ii*nblocks + jj + 1) + val := binary.BigEndian.Uint64(block[:8]) ^ t + binary.BigEndian.PutUint64(block[:8], val) + + // - R[i] = LSB(64, B) + copy(intermediate[jj*8:jj*8+8], block[8:]) + } + } + + // 3) Output results. + // - Set C[0] = A + // - For i = 1 to n + // - C[i] = R[i] + return append(block[:8], intermediate...), nil +} + +// Unwrap a key using the RFC 3394 AES Key Wrap Algorithm. +func Unwrap(key, cipherText []byte) ([]byte, error) { + if len(cipherText)%8 != 0 { + return nil, ErrUnwrapCiphertext + } + + c, err := aes.NewCipher(key) + if err != nil { + return nil, ErrInvalidKey + } + + nblocks := len(cipherText)/8 - 1 + + // 1) Initialize variables. + var block [aes.BlockSize]byte + // - Set A = C[0] + copy(block[:8], cipherText[:8]) + + // - For i = 1 to n + // - Set R[i] = C[i] + intermediate := make([]byte, len(cipherText)-8) + copy(intermediate, cipherText[8:]) + + // 2) Compute intermediate values. 
+ for jj := 5; jj >= 0; jj-- { + for ii := nblocks - 1; ii >= 0; ii-- { + // - B = AES-1(K, (A ^ t) | R[i]) where t = n*j+1 + // - A = MSB(64, B) + t := uint64(jj*nblocks + ii + 1) + val := binary.BigEndian.Uint64(block[:8]) ^ t + binary.BigEndian.PutUint64(block[:8], val) + + copy(block[8:], intermediate[ii*8:ii*8+8]) + c.Decrypt(block[:], block[:]) + + // - R[i] = LSB(B, 64) + copy(intermediate[ii*8:ii*8+8], block[8:]) + } + } + + // 3) Output results. + // - If A is an appropriate initial value (see 2.2.3), + for ii := 0; ii < 8; ii++ { + if block[ii] != 0xA6 { + return nil, ErrUnwrapFailed + } + } + + // - For i = 1 to n + // - P[i] = R[i] + return intermediate, nil +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/armor/armor.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/armor/armor.go similarity index 88% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/armor/armor.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/armor/armor.go index 8907183ec0a..3b357e5851b 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/armor/armor.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/armor/armor.go @@ -4,33 +4,25 @@ // Package armor implements OpenPGP ASCII Armor, see RFC 4880. OpenPGP Armor is // very similar to PEM except that it has an additional CRC checksum. -// -// Deprecated: this package is unmaintained except for security fixes. New -// applications should consider a more focused, modern alternative to OpenPGP -// for their specific task. If you are required to interoperate with OpenPGP -// systems and need a maintained package, consider a community fork. -// See https://golang.org/issue/44226. 
-package armor // import "golang.org/x/crypto/openpgp/armor" +package armor // import "github.com/ProtonMail/go-crypto/openpgp/armor" import ( "bufio" "bytes" "encoding/base64" - "golang.org/x/crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/errors" "io" ) // A Block represents an OpenPGP armored structure. // // The encoded form is: +// -----BEGIN Type----- +// Headers // -// -----BEGIN Type----- -// Headers -// -// base64-encoded Bytes -// '=' base64 encoded checksum -// -----END Type----- -// +// base64-encoded Bytes +// '=' base64 encoded checksum +// -----END Type----- // where Headers is a possibly empty sequence of Key: Value lines. // // Since the armored data can be very large, this package presents a streaming @@ -156,7 +148,7 @@ func (r *openpgpReader) Read(p []byte) (n int, err error) { n, err = r.b64Reader.Read(p) r.currentCRC = crc24(r.currentCRC, p[:n]) - if err == io.EOF && r.lReader.crcSet && r.lReader.crc != r.currentCRC&crc24Mask { + if err == io.EOF && r.lReader.crcSet && r.lReader.crc != uint32(r.currentCRC&crc24Mask) { return 0, ArmorCorrupt } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/armor/encode.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/armor/encode.go similarity index 98% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/armor/encode.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/armor/encode.go index 5b6e16c19d5..6f07582c37c 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/armor/encode.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/armor/encode.go @@ -96,8 +96,7 @@ func (l *lineBreaker) Close() (err error) { // trailer. 
// // It's built into a stack of io.Writers: -// -// encoding -> base64 encoder -> lineBreaker -> out +// encoding -> base64 encoder -> lineBreaker -> out type encoding struct { out io.Writer breaker *lineBreaker diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/canonical_text.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/canonical_text.go similarity index 80% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/canonical_text.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/canonical_text.go index e601e389f12..a94f6150c4b 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/canonical_text.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/canonical_text.go @@ -4,7 +4,10 @@ package openpgp -import "hash" +import ( + "hash" + "io" +) // NewCanonicalTextHash reformats text written to it into the canonical // form and then applies the hash h. See RFC 4880, section 5.2.1. @@ -19,28 +22,31 @@ type canonicalTextHash struct { var newline = []byte{'\r', '\n'} -func (cth *canonicalTextHash) Write(buf []byte) (int, error) { +func writeCanonical(cw io.Writer, buf []byte, s *int) (int, error) { start := 0 - for i, c := range buf { - switch cth.s { + switch *s { case 0: if c == '\r' { - cth.s = 1 + *s = 1 } else if c == '\n' { - cth.h.Write(buf[start:i]) - cth.h.Write(newline) + cw.Write(buf[start:i]) + cw.Write(newline) start = i + 1 } case 1: - cth.s = 0 + *s = 0 } } - cth.h.Write(buf[start:]) + cw.Write(buf[start:]) return len(buf), nil } +func (cth *canonicalTextHash) Write(buf []byte) (int, error) { + return writeCanonical(cth.h, buf, &cth.s) +} + func (cth *canonicalTextHash) Sum(in []byte) []byte { return cth.h.Sum(in) } diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/ecdh/ecdh.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/ecdh/ecdh.go new file mode 100644 index 00000000000..b09e2a7359d --- /dev/null +++ 
b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/ecdh/ecdh.go @@ -0,0 +1,206 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package ecdh implements ECDH encryption, suitable for OpenPGP, +// as specified in RFC 6637, section 8. +package ecdh + +import ( + "bytes" + "errors" + "io" + + "github.com/ProtonMail/go-crypto/openpgp/aes/keywrap" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" + "github.com/ProtonMail/go-crypto/openpgp/internal/ecc" +) + +type KDF struct { + Hash algorithm.Hash + Cipher algorithm.Cipher +} + +type PublicKey struct { + curve ecc.ECDHCurve + Point []byte + KDF +} + +type PrivateKey struct { + PublicKey + D []byte +} + +func NewPublicKey(curve ecc.ECDHCurve, kdfHash algorithm.Hash, kdfCipher algorithm.Cipher) *PublicKey { + return &PublicKey{ + curve: curve, + KDF: KDF{ + Hash: kdfHash, + Cipher: kdfCipher, + }, + } +} + +func NewPrivateKey(key PublicKey) *PrivateKey { + return &PrivateKey{ + PublicKey: key, + } +} + +func (pk *PublicKey) GetCurve() ecc.ECDHCurve { + return pk.curve +} + +func (pk *PublicKey) MarshalPoint() []byte { + return pk.curve.MarshalBytePoint(pk.Point) +} + +func (pk *PublicKey) UnmarshalPoint(p []byte) error { + pk.Point = pk.curve.UnmarshalBytePoint(p) + if pk.Point == nil { + return errors.New("ecdh: failed to parse EC point") + } + return nil +} + +func (sk *PrivateKey) MarshalByteSecret() []byte { + return sk.curve.MarshalByteSecret(sk.D) +} + +func (sk *PrivateKey) UnmarshalByteSecret(d []byte) error { + sk.D = sk.curve.UnmarshalByteSecret(d) + + if sk.D == nil { + return errors.New("ecdh: failed to parse scalar") + } + return nil +} + +func GenerateKey(rand io.Reader, c ecc.ECDHCurve, kdf KDF) (priv *PrivateKey, err error) { + priv = new(PrivateKey) + priv.PublicKey.curve = c + priv.PublicKey.KDF = kdf + priv.PublicKey.Point, priv.D, err = 
c.GenerateECDH(rand) + return +} + +func Encrypt(random io.Reader, pub *PublicKey, msg, curveOID, fingerprint []byte) (vsG, c []byte, err error) { + if len(msg) > 40 { + return nil, nil, errors.New("ecdh: message too long") + } + // the sender MAY use 21, 13, and 5 bytes of padding for AES-128, + // AES-192, and AES-256, respectively, to provide the same number of + // octets, 40 total, as an input to the key wrapping method. + padding := make([]byte, 40-len(msg)) + for i := range padding { + padding[i] = byte(40 - len(msg)) + } + m := append(msg, padding...) + + ephemeral, zb, err := pub.curve.Encaps(random, pub.Point) + if err != nil { + return nil, nil, err + } + + vsG = pub.curve.MarshalBytePoint(ephemeral) + + z, err := buildKey(pub, zb, curveOID, fingerprint, false, false) + if err != nil { + return nil, nil, err + } + + if c, err = keywrap.Wrap(z, m); err != nil { + return nil, nil, err + } + + return vsG, c, nil + +} + +func Decrypt(priv *PrivateKey, vsG, c, curveOID, fingerprint []byte) (msg []byte, err error) { + var m []byte + zb, err := priv.PublicKey.curve.Decaps(priv.curve.UnmarshalBytePoint(vsG), priv.D) + + // Try buildKey three times to workaround an old bug, see comments in buildKey. + for i := 0; i < 3; i++ { + var z []byte + // RFC6637 §8: "Compute Z = KDF( S, Z_len, Param );" + z, err = buildKey(&priv.PublicKey, zb, curveOID, fingerprint, i == 1, i == 2) + if err != nil { + return nil, err + } + + // RFC6637 §8: "Compute C = AESKeyWrap( Z, c ) as per [RFC3394]" + m, err = keywrap.Unwrap(z, c) + if err == nil { + break + } + } + + // Only return an error after we've tried all (required) variants of buildKey. + if err != nil { + return nil, err + } + + // RFC6637 §8: "m = symm_alg_ID || session key || checksum || pkcs5_padding" + // The last byte should be the length of the padding, as per PKCS5; strip it off. 
+ return m[:len(m)-int(m[len(m)-1])], nil +} + +func buildKey(pub *PublicKey, zb []byte, curveOID, fingerprint []byte, stripLeading, stripTrailing bool) ([]byte, error) { + // Param = curve_OID_len || curve_OID || public_key_alg_ID || 03 + // || 01 || KDF_hash_ID || KEK_alg_ID for AESKeyWrap + // || "Anonymous Sender " || recipient_fingerprint; + param := new(bytes.Buffer) + if _, err := param.Write(curveOID); err != nil { + return nil, err + } + algKDF := []byte{18, 3, 1, pub.KDF.Hash.Id(), pub.KDF.Cipher.Id()} + if _, err := param.Write(algKDF); err != nil { + return nil, err + } + if _, err := param.Write([]byte("Anonymous Sender ")); err != nil { + return nil, err + } + // For v5 keys, the 20 leftmost octets of the fingerprint are used. + if _, err := param.Write(fingerprint[:20]); err != nil { + return nil, err + } + if param.Len() - len(curveOID) != 45 { + return nil, errors.New("ecdh: malformed KDF Param") + } + + // MB = Hash ( 00 || 00 || 00 || 01 || ZB || Param ); + h := pub.KDF.Hash.New() + if _, err := h.Write([]byte{0x0, 0x0, 0x0, 0x1}); err != nil { + return nil, err + } + zbLen := len(zb) + i := 0 + j := zbLen - 1 + if stripLeading { + // Work around old go crypto bug where the leading zeros are missing. + for ; i < zbLen && zb[i] == 0; i++ {} + } + if stripTrailing { + // Work around old OpenPGP.js bug where insignificant trailing zeros in + // this little-endian number are missing. + // (See https://github.com/openpgpjs/openpgpjs/pull/853.) + for ; j >= 0 && zb[j] == 0; j-- {} + } + if _, err := h.Write(zb[i:j+1]); err != nil { + return nil, err + } + if _, err := h.Write(param.Bytes()); err != nil { + return nil, err + } + mb := h.Sum(nil) + + return mb[:pub.KDF.Cipher.KeySize()], nil // return oBits leftmost bits of MB. 
+ +} + +func Validate(priv *PrivateKey) error { + return priv.curve.ValidateECDH(priv.Point, priv.D) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/ecdsa/ecdsa.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/ecdsa/ecdsa.go new file mode 100644 index 00000000000..6682a21a60f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/ecdsa/ecdsa.go @@ -0,0 +1,80 @@ +// Package ecdsa implements ECDSA signature, suitable for OpenPGP, +// as specified in RFC 6637, section 5. +package ecdsa + +import ( + "errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/ecc" + "io" + "math/big" +) + +type PublicKey struct { + X, Y *big.Int + curve ecc.ECDSACurve +} + +type PrivateKey struct { + PublicKey + D *big.Int +} + +func NewPublicKey(curve ecc.ECDSACurve) *PublicKey { + return &PublicKey{ + curve: curve, + } +} + +func NewPrivateKey(key PublicKey) *PrivateKey { + return &PrivateKey{ + PublicKey: key, + } +} + +func (pk *PublicKey) GetCurve() ecc.ECDSACurve { + return pk.curve +} + +func (pk *PublicKey) MarshalPoint() []byte { + return pk.curve.MarshalIntegerPoint(pk.X, pk.Y) +} + +func (pk *PublicKey) UnmarshalPoint(p []byte) error { + pk.X, pk.Y = pk.curve.UnmarshalIntegerPoint(p) + if pk.X == nil { + return errors.New("ecdsa: failed to parse EC point") + } + return nil +} + +func (sk *PrivateKey) MarshalIntegerSecret() []byte { + return sk.curve.MarshalIntegerSecret(sk.D) +} + +func (sk *PrivateKey) UnmarshalIntegerSecret(d []byte) error { + sk.D = sk.curve.UnmarshalIntegerSecret(d) + + if sk.D == nil { + return errors.New("ecdsa: failed to parse scalar") + } + return nil +} + +func GenerateKey(rand io.Reader, c ecc.ECDSACurve) (priv *PrivateKey, err error) { + priv = new(PrivateKey) + priv.PublicKey.curve = c + priv.PublicKey.X, priv.PublicKey.Y, priv.D, err = c.GenerateECDSA(rand) + return +} + +func Sign(rand io.Reader, priv *PrivateKey, hash []byte) (r, s *big.Int, err error) { + return 
priv.PublicKey.curve.Sign(rand, priv.X, priv.Y, priv.D, hash) +} + +func Verify(pub *PublicKey, hash []byte, r, s *big.Int) bool { + return pub.curve.Verify(pub.X, pub.Y, hash, r, s) +} + +func Validate(priv *PrivateKey) error { + return priv.curve.ValidateECDSA(priv.X, priv.Y, priv.D.Bytes()) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/eddsa/eddsa.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/eddsa/eddsa.go new file mode 100644 index 00000000000..12866c12df0 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/eddsa/eddsa.go @@ -0,0 +1,91 @@ +// Package eddsa implements EdDSA signature, suitable for OpenPGP, as specified in +// https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-13.7 +package eddsa + +import ( + "errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/ecc" + "io" +) + +type PublicKey struct { + X []byte + curve ecc.EdDSACurve +} + +type PrivateKey struct { + PublicKey + D []byte +} + +func NewPublicKey(curve ecc.EdDSACurve) *PublicKey { + return &PublicKey{ + curve: curve, + } +} + +func NewPrivateKey(key PublicKey) *PrivateKey { + return &PrivateKey{ + PublicKey: key, + } +} + +func (pk *PublicKey) GetCurve() ecc.EdDSACurve { + return pk.curve +} + +func (pk *PublicKey) MarshalPoint() []byte { + return pk.curve.MarshalBytePoint(pk.X) +} + +func (pk *PublicKey) UnmarshalPoint(x []byte) error { + pk.X = pk.curve.UnmarshalBytePoint(x) + + if pk.X == nil { + return errors.New("eddsa: failed to parse EC point") + } + return nil +} + +func (sk *PrivateKey) MarshalByteSecret() []byte { + return sk.curve.MarshalByteSecret(sk.D) +} + +func (sk *PrivateKey) UnmarshalByteSecret(d []byte) error { + sk.D = sk.curve.UnmarshalByteSecret(d) + + if sk.D == nil { + return errors.New("eddsa: failed to parse scalar") + } + return nil +} + +func GenerateKey(rand io.Reader, c ecc.EdDSACurve) (priv *PrivateKey, err error) { + priv = 
new(PrivateKey) + priv.PublicKey.curve = c + priv.PublicKey.X, priv.D, err = c.GenerateEdDSA(rand) + return +} + +func Sign(priv *PrivateKey, message []byte) (r, s []byte, err error) { + sig, err := priv.PublicKey.curve.Sign(priv.PublicKey.X, priv.D, message) + if err != nil { + return nil, nil, err + } + + r, s = priv.PublicKey.curve.MarshalSignature(sig) + return +} + +func Verify(pub *PublicKey, message, r, s []byte) bool { + sig := pub.curve.UnmarshalSignature(r, s) + if sig == nil { + return false + } + + return pub.curve.Verify(pub.X, message, sig) +} + +func Validate(priv *PrivateKey) error { + return priv.curve.ValidateEdDSA(priv.PublicKey.X, priv.D) +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/elgamal/elgamal.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/elgamal/elgamal.go similarity index 86% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/elgamal/elgamal.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/elgamal/elgamal.go index 743b35a1204..6a07d8ff279 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/elgamal/elgamal.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/elgamal/elgamal.go @@ -10,13 +10,7 @@ // This form of ElGamal embeds PKCS#1 v1.5 padding, which may make it // unsuitable for other protocols. RSA should be used in preference in any // case. -// -// Deprecated: this package was only provided to support ElGamal encryption in -// OpenPGP. The golang.org/x/crypto/openpgp package is now deprecated (see -// https://golang.org/issue/44226), and ElGamal in the OpenPGP ecosystem has -// compatibility and security issues (see https://eprint.iacr.org/2021/923). -// Moreover, this package doesn't protect against side-channel attacks. 
-package elgamal // import "golang.org/x/crypto/openpgp/elgamal" +package elgamal // import "github.com/ProtonMail/go-crypto/openpgp/elgamal" import ( "crypto/rand" @@ -77,8 +71,8 @@ func Encrypt(random io.Reader, pub *PublicKey, msg []byte) (c1, c2 *big.Int, err // returns the plaintext of the message. An error can result only if the // ciphertext is invalid. Users should keep in mind that this is a padding // oracle and thus, if exposed to an adaptive chosen ciphertext attack, can -// be used to break the cryptosystem. See “Chosen Ciphertext Attacks -// Against Protocols Based on the RSA Encryption Standard PKCS #1”, Daniel +// be used to break the cryptosystem. See ``Chosen Ciphertext Attacks +// Against Protocols Based on the RSA Encryption Standard PKCS #1'', Daniel // Bleichenbacher, Advances in Cryptology (Crypto '98), func Decrypt(priv *PrivateKey, c1, c2 *big.Int) (msg []byte, err error) { s := new(big.Int).Exp(c1, priv.X, priv.P) diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/errors/errors.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/errors/errors.go similarity index 56% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/errors/errors.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/errors/errors.go index 1d7a0ea05ad..17e2bcfed20 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/errors/errors.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/errors/errors.go @@ -3,13 +3,7 @@ // license that can be found in the LICENSE file. // Package errors contains common error types for the OpenPGP packages. -// -// Deprecated: this package is unmaintained except for security fixes. New -// applications should consider a more focused, modern alternative to OpenPGP -// for their specific task. If you are required to interoperate with OpenPGP -// systems and need a maintained package, consider a community fork. -// See https://golang.org/issue/44226. 
-package errors // import "golang.org/x/crypto/openpgp/errors" +package errors // import "github.com/ProtonMail/go-crypto/openpgp/errors" import ( "strconv" @@ -47,6 +41,25 @@ func (b SignatureError) Error() string { return "openpgp: invalid signature: " + string(b) } +var ErrMDCHashMismatch error = SignatureError("MDC hash mismatch") +var ErrMDCMissing error = SignatureError("MDC packet not found") + +type signatureExpiredError int + +func (se signatureExpiredError) Error() string { + return "openpgp: signature expired" +} + +var ErrSignatureExpired error = signatureExpiredError(0) + +type keyExpiredError int + +func (ke keyExpiredError) Error() string { + return "openpgp: key expired" +} + +var ErrKeyExpired error = keyExpiredError(0) + type keyIncorrectError int func (ki keyIncorrectError) Error() string { @@ -55,6 +68,14 @@ func (ki keyIncorrectError) Error() string { var ErrKeyIncorrect error = keyIncorrectError(0) +// KeyInvalidError indicates that the public key parameters are invalid +// as they do not match the private ones +type KeyInvalidError string + +func (e KeyInvalidError) Error() string { + return "openpgp: invalid key: " + string(e) +} + type unknownIssuerError int func (unknownIssuerError) Error() string { @@ -76,3 +97,20 @@ type UnknownPacketTypeError uint8 func (upte UnknownPacketTypeError) Error() string { return "openpgp: unknown packet type: " + strconv.Itoa(int(upte)) } + +// AEADError indicates that there is a problem when initializing or using a +// AEAD instance, configuration struct, nonces or index values. +type AEADError string + +func (ae AEADError) Error() string { + return "openpgp: aead error: " + string(ae) +} + +// ErrDummyPrivateKey results when operations are attempted on a private key +// that is just a dummy key. 
See +// https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS;h=fe55ae16ab4e26d8356dc574c9e8bc935e71aef1;hb=23191d7851eae2217ecdac6484349849a24fd94a#l1109 +type ErrDummyPrivateKey string + +func (dke ErrDummyPrivateKey) Error() string { + return "openpgp: s2k GNU dummy key: " + string(dke) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/aead.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/aead.go new file mode 100644 index 00000000000..d0670651866 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/aead.go @@ -0,0 +1,65 @@ +// Copyright (C) 2019 ProtonTech AG + +package algorithm + +import ( + "crypto/cipher" + "github.com/ProtonMail/go-crypto/eax" + "github.com/ProtonMail/go-crypto/ocb" +) + +// AEADMode defines the Authenticated Encryption with Associated Data mode of +// operation. +type AEADMode uint8 + +// Supported modes of operation (see RFC4880bis [EAX] and RFC7253) +const ( + AEADModeEAX = AEADMode(1) + AEADModeOCB = AEADMode(2) + AEADModeGCM = AEADMode(3) +) + +// TagLength returns the length in bytes of authentication tags. +func (mode AEADMode) TagLength() int { + switch mode { + case AEADModeEAX: + return 16 + case AEADModeOCB: + return 16 + case AEADModeGCM: + return 16 + default: + return 0 + } +} + +// NonceLength returns the length in bytes of nonces. 
+func (mode AEADMode) NonceLength() int { + switch mode { + case AEADModeEAX: + return 16 + case AEADModeOCB: + return 15 + case AEADModeGCM: + return 12 + default: + return 0 + } +} + +// New returns a fresh instance of the given mode +func (mode AEADMode) New(block cipher.Block) (alg cipher.AEAD) { + var err error + switch mode { + case AEADModeEAX: + alg, err = eax.NewEAX(block) + case AEADModeOCB: + alg, err = ocb.NewOCB(block) + case AEADModeGCM: + alg, err = cipher.NewGCM(block) + } + if err != nil { + panic(err.Error()) + } + return alg +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/cipher.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/cipher.go new file mode 100644 index 00000000000..5760cff80ea --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/cipher.go @@ -0,0 +1,107 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package algorithm + +import ( + "crypto/aes" + "crypto/cipher" + "crypto/des" + + "golang.org/x/crypto/cast5" +) + +// Cipher is an official symmetric key cipher algorithm. See RFC 4880, +// section 9.2. +type Cipher interface { + // Id returns the algorithm ID, as a byte, of the cipher. + Id() uint8 + // KeySize returns the key size, in bytes, of the cipher. + KeySize() int + // BlockSize returns the block size, in bytes, of the cipher. + BlockSize() int + // New returns a fresh instance of the given cipher. + New(key []byte) cipher.Block +} + +// The following constants mirror the OpenPGP standard (RFC 4880). +const ( + TripleDES = CipherFunction(2) + CAST5 = CipherFunction(3) + AES128 = CipherFunction(7) + AES192 = CipherFunction(8) + AES256 = CipherFunction(9) +) + +// CipherById represents the different block ciphers specified for OpenPGP. 
See +// http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-13 +var CipherById = map[uint8]Cipher{ + TripleDES.Id(): TripleDES, + CAST5.Id(): CAST5, + AES128.Id(): AES128, + AES192.Id(): AES192, + AES256.Id(): AES256, +} + +type CipherFunction uint8 + +// ID returns the algorithm Id, as a byte, of cipher. +func (sk CipherFunction) Id() uint8 { + return uint8(sk) +} + +var keySizeByID = map[uint8]int{ + TripleDES.Id(): 24, + CAST5.Id(): cast5.KeySize, + AES128.Id(): 16, + AES192.Id(): 24, + AES256.Id(): 32, +} + +// KeySize returns the key size, in bytes, of cipher. +func (cipher CipherFunction) KeySize() int { + switch cipher { + case TripleDES: + return 24 + case CAST5: + return cast5.KeySize + case AES128: + return 16 + case AES192: + return 24 + case AES256: + return 32 + } + return 0 +} + +// BlockSize returns the block size, in bytes, of cipher. +func (cipher CipherFunction) BlockSize() int { + switch cipher { + case TripleDES: + return des.BlockSize + case CAST5: + return 8 + case AES128, AES192, AES256: + return 16 + } + return 0 +} + +// New returns a fresh instance of the given cipher. +func (cipher CipherFunction) New(key []byte) (block cipher.Block) { + var err error + switch cipher { + case TripleDES: + block, err = des.NewTripleDESCipher(key) + case CAST5: + block, err = cast5.NewCipher(key) + case AES128, AES192, AES256: + block, err = aes.NewCipher(key) + } + if err != nil { + panic(err.Error()) + } + return +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/hash.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/hash.go new file mode 100644 index 00000000000..82e43d67401 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/algorithm/hash.go @@ -0,0 +1,143 @@ +// Copyright 2017 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package algorithm + +import ( + "crypto" + "fmt" + "hash" +) + +// Hash is an official hash function algorithm. See RFC 4880, section 9.4. +type Hash interface { + // Id returns the algorithm ID, as a byte, of Hash. + Id() uint8 + // Available reports whether the given hash function is linked into the binary. + Available() bool + // HashFunc simply returns the value of h so that Hash implements SignerOpts. + HashFunc() crypto.Hash + // New returns a new hash.Hash calculating the given hash function. New + // panics if the hash function is not linked into the binary. + New() hash.Hash + // Size returns the length, in bytes, of a digest resulting from the given + // hash function. It doesn't require that the hash function in question be + // linked into the program. + Size() int + // String is the name of the hash function corresponding to the given + // OpenPGP hash id. + String() string +} + +// The following vars mirror the crypto/Hash supported hash functions. +var ( + SHA1 Hash = cryptoHash{2, crypto.SHA1} + SHA256 Hash = cryptoHash{8, crypto.SHA256} + SHA384 Hash = cryptoHash{9, crypto.SHA384} + SHA512 Hash = cryptoHash{10, crypto.SHA512} + SHA224 Hash = cryptoHash{11, crypto.SHA224} + SHA3_256 Hash = cryptoHash{12, crypto.SHA3_256} + SHA3_512 Hash = cryptoHash{14, crypto.SHA3_512} +) + +// HashById represents the different hash functions specified for OpenPGP. See +// http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-14 +var ( + HashById = map[uint8]Hash{ + SHA256.Id(): SHA256, + SHA384.Id(): SHA384, + SHA512.Id(): SHA512, + SHA224.Id(): SHA224, + SHA3_256.Id(): SHA3_256, + SHA3_512.Id(): SHA3_512, + } +) + +// cryptoHash contains pairs relating OpenPGP's hash identifier with +// Go's crypto.Hash type. See RFC 4880, section 9.4. 
+type cryptoHash struct { + id uint8 + crypto.Hash +} + +// Id returns the algorithm ID, as a byte, of cryptoHash. +func (h cryptoHash) Id() uint8 { + return h.id +} + +var hashNames = map[uint8]string{ + SHA256.Id(): "SHA256", + SHA384.Id(): "SHA384", + SHA512.Id(): "SHA512", + SHA224.Id(): "SHA224", + SHA3_256.Id(): "SHA3-256", + SHA3_512.Id(): "SHA3-512", +} + +func (h cryptoHash) String() string { + s, ok := hashNames[h.id] + if !ok { + panic(fmt.Sprintf("Unsupported hash function %d", h.id)) + } + return s +} + +// HashIdToHash returns a crypto.Hash which corresponds to the given OpenPGP +// hash id. +func HashIdToHash(id byte) (h crypto.Hash, ok bool) { + if hash, ok := HashById[id]; ok { + return hash.HashFunc(), true + } + return 0, false +} + +// HashIdToHashWithSha1 returns a crypto.Hash which corresponds to the given OpenPGP +// hash id, allowing sha1. +func HashIdToHashWithSha1(id byte) (h crypto.Hash, ok bool) { + if hash, ok := HashById[id]; ok { + return hash.HashFunc(), true + } + + if id == SHA1.Id() { + return SHA1.HashFunc(), true + } + + return 0, false +} + +// HashIdToString returns the name of the hash function corresponding to the +// given OpenPGP hash id. +func HashIdToString(id byte) (name string, ok bool) { + if hash, ok := HashById[id]; ok { + return hash.String(), true + } + return "", false +} + +// HashToHashId returns an OpenPGP hash id which corresponds the given Hash. 
+func HashToHashId(h crypto.Hash) (id byte, ok bool) { + for id, hash := range HashById { + if hash.HashFunc() == h { + return id, true + } + } + + return 0, false +} + +// HashToHashIdWithSha1 returns an OpenPGP hash id which corresponds the given Hash, +// allowing instances of SHA1 +func HashToHashIdWithSha1(h crypto.Hash) (id byte, ok bool) { + for id, hash := range HashById { + if hash.HashFunc() == h { + return id, true + } + } + + if h == SHA1.HashFunc() { + return SHA1.Id(), true + } + + return 0, false +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curve25519.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curve25519.go new file mode 100644 index 00000000000..266635ec579 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curve25519.go @@ -0,0 +1,171 @@ +// Package ecc implements a generic interface for ECDH, ECDSA, and EdDSA. +package ecc + +import ( + "crypto/subtle" + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" + x25519lib "github.com/cloudflare/circl/dh/x25519" +) + +type curve25519 struct {} + +func NewCurve25519() *curve25519 { + return &curve25519{} +} + +func (c *curve25519) GetCurveName() string { + return "curve25519" +} + +// MarshalBytePoint encodes the public point from native format, adding the prefix. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6 +func (c *curve25519) MarshalBytePoint(point [] byte) []byte { + return append([]byte{0x40}, point...) +} + +// UnmarshalBytePoint decodes the public point to native format, removing the prefix. 
+// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6 +func (c *curve25519) UnmarshalBytePoint(point []byte) []byte { + if len(point) != x25519lib.Size + 1 { + return nil + } + + // Remove prefix + return point[1:] +} + +// MarshalByteSecret encodes the secret scalar from native format. +// Note that the EC secret scalar differs from the definition of public keys in +// [Curve25519] in two ways: (1) the byte-ordering is big-endian, which is +// more uniform with how big integers are represented in OpenPGP, and (2) the +// leading zeros are truncated. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6.1.1 +// Note that leading zero bytes are stripped later when encoding as an MPI. +func (c *curve25519) MarshalByteSecret(secret []byte) []byte { + d := make([]byte, x25519lib.Size) + copyReversed(d, secret) + + // The following ensures that the private key is a number of the form + // 2^{254} + 8 * [0, 2^{251}), in order to avoid the small subgroup of + // the curve. + // + // This masking is done internally in the underlying lib and so is unnecessary + // for security, but OpenPGP implementations require that private keys be + // pre-masked. + d[0] &= 127 + d[0] |= 64 + d[31] &= 248 + + return d +} + +// UnmarshalByteSecret decodes the secret scalar from native format. +// Note that the EC secret scalar differs from the definition of public keys in +// [Curve25519] in two ways: (1) the byte-ordering is big-endian, which is +// more uniform with how big integers are represented in OpenPGP, and (2) the +// leading zeros are truncated. 
+// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6.1.1 +func (c *curve25519) UnmarshalByteSecret(d []byte) []byte { + if len(d) > x25519lib.Size { + return nil + } + + // Ensure truncated leading bytes are re-added + secret := make([]byte, x25519lib.Size) + copyReversed(secret, d) + + return secret +} + +// generateKeyPairBytes Generates a private-public key-pair. +// 'priv' is a private key; a little-endian scalar belonging to the set +// 2^{254} + 8 * [0, 2^{251}), in order to avoid the small subgroup of the +// curve. 'pub' is simply 'priv' * G where G is the base point. +// See https://cr.yp.to/ecdh.html and RFC7748, sec 5. +func (c *curve25519) generateKeyPairBytes(rand io.Reader) (priv, pub x25519lib.Key, err error) { + _, err = io.ReadFull(rand, priv[:]) + if err != nil { + return + } + + x25519lib.KeyGen(&pub, &priv) + return +} + +func (c *curve25519) GenerateECDH(rand io.Reader) (point []byte, secret []byte, err error) { + priv, pub, err := c.generateKeyPairBytes(rand) + if err != nil { + return + } + + return pub[:], priv[:], nil +} + +func (c *curve25519) Encaps(rand io.Reader, point []byte) (ephemeral, sharedSecret []byte, err error) { + // RFC6637 §8: "Generate an ephemeral key pair {v, V=vG}" + // ephemeralPrivate corresponds to `v`. + // ephemeralPublic corresponds to `V`. + ephemeralPrivate, ephemeralPublic, err := c.generateKeyPairBytes(rand) + if err != nil { + return nil, nil, err + } + + // RFC6637 §8: "Obtain the authenticated recipient public key R" + // pubKey corresponds to `R`. + var pubKey x25519lib.Key + copy(pubKey[:], point) + + // RFC6637 §8: "Compute the shared point S = vR" + // "VB = convert point V to the octet string" + // sharedPoint corresponds to `VB`.
+ var sharedPoint x25519lib.Key + x25519lib.Shared(&sharedPoint, &ephemeralPrivate, &pubKey) + + return ephemeralPublic[:], sharedPoint[:], nil +} + +func (c *curve25519) Decaps(vsG, secret []byte) (sharedSecret []byte, err error) { + var ephemeralPublic, decodedPrivate, sharedPoint x25519lib.Key + // RFC6637 §8: "The decryption is the inverse of the method given." + // All quoted descriptions in comments below describe encryption, and + // the reverse is performed. + // vsG corresponds to `VB` in RFC6637 §8 . + + // RFC6637 §8: "VB = convert point V to the octet string" + copy(ephemeralPublic[:], vsG) + + // decodedPrivate corresponds to `r` in RFC6637 §8 . + copy(decodedPrivate[:], secret) + + // RFC6637 §8: "Note that the recipient obtains the shared secret by calculating + // S = rV = rvG, where (r,R) is the recipient's key pair." + // sharedPoint corresponds to `S`. + x25519lib.Shared(&sharedPoint, &decodedPrivate, &ephemeralPublic) + + return sharedPoint[:], nil +} + +func (c *curve25519) ValidateECDH(point []byte, secret []byte) (err error) { + var pk, sk x25519lib.Key + copy(sk[:], secret) + x25519lib.KeyGen(&pk, &sk) + + if subtle.ConstantTimeCompare(point, pk[:]) == 0 { + return errors.KeyInvalidError("ecc: invalid curve25519 public point") + } + + return nil +} + +func copyReversed(out []byte, in []byte) { + l := len(in) + for i := 0; i < l; i++ { + out[i] = in[l-i-1] + } +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curve_info.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curve_info.go new file mode 100644 index 00000000000..df2878c9570 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curve_info.go @@ -0,0 +1,140 @@ +// Package ecc implements a generic interface for ECDH, ECDSA, and EdDSA. 
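The Encaps/Decaps pair above can be sketched as a round trip with the standard library's crypto/ecdh (Go 1.20+); using stdlib X25519 in place of the vendored circl dependency is an assumption made only so the example is self-contained:

```go
package main

import (
	"bytes"
	"crypto/ecdh"
	"crypto/rand"
	"fmt"
)

// encapsDecapsRoundTrip mirrors the RFC 6637 §8 flow: the sender derives
// S = vR from an ephemeral pair (v, V) and the recipient's public key R;
// the recipient recovers the same S = rV from V alone.
func encapsDecapsRoundTrip() (bool, error) {
	curve := ecdh.X25519()

	// Recipient key pair (r, R).
	recipient, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		return false, err
	}

	// Encaps: ephemeral pair (v, V); shared point S = vR.
	ephemeral, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		return false, err
	}
	encShared, err := ephemeral.ECDH(recipient.PublicKey())
	if err != nil {
		return false, err
	}

	// Decaps: S = rV, recovered from the ephemeral public key V alone.
	decShared, err := recipient.ECDH(ephemeral.PublicKey())
	if err != nil {
		return false, err
	}

	return bytes.Equal(encShared, decShared), nil
}

func main() {
	ok, err := encapsDecapsRoundTrip()
	if err != nil {
		panic(err)
	}
	fmt.Println("shared secrets match:", ok)
}
```

Note that, unlike the vendored code, crypto/ecdh clamps the scalar and validates point lengths internally.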
+package ecc + +import ( + "bytes" + "crypto/elliptic" + "github.com/ProtonMail/go-crypto/bitcurves" + "github.com/ProtonMail/go-crypto/brainpool" + "github.com/ProtonMail/go-crypto/openpgp/internal/encoding" +) + +type CurveInfo struct { + GenName string + Oid *encoding.OID + Curve Curve +} + +var Curves = []CurveInfo{ + { + // NIST P-256 + GenName: "P256", + Oid: encoding.NewOID([]byte{0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x03, 0x01, 0x07}), + Curve: NewGenericCurve(elliptic.P256()), + }, + { + // NIST P-384 + GenName: "P384", + Oid: encoding.NewOID([]byte{0x2B, 0x81, 0x04, 0x00, 0x22}), + Curve: NewGenericCurve(elliptic.P384()), + }, + { + // NIST P-521 + GenName: "P521", + Oid: encoding.NewOID([]byte{0x2B, 0x81, 0x04, 0x00, 0x23}), + Curve: NewGenericCurve(elliptic.P521()), + }, + { + // SecP256k1 + GenName: "SecP256k1", + Oid: encoding.NewOID([]byte{0x2B, 0x81, 0x04, 0x00, 0x0A}), + Curve: NewGenericCurve(bitcurves.S256()), + }, + { + // Curve25519 + GenName: "Curve25519", + Oid: encoding.NewOID([]byte{0x2B, 0x06, 0x01, 0x04, 0x01, 0x97, 0x55, 0x01, 0x05, 0x01}), + Curve: NewCurve25519(), + }, + { + // X448 + GenName: "Curve448", + Oid: encoding.NewOID([]byte{0x2B, 0x65, 0x6F}), + Curve: NewX448(), + }, + { + // Ed25519 + GenName: "Curve25519", + Oid: encoding.NewOID([]byte{0x2B, 0x06, 0x01, 0x04, 0x01, 0xDA, 0x47, 0x0F, 0x01}), + Curve: NewEd25519(), + }, + { + // Ed448 + GenName: "Curve448", + Oid: encoding.NewOID([]byte{0x2B, 0x65, 0x71}), + Curve: NewEd448(), + }, + { + // BrainpoolP256r1 + GenName: "BrainpoolP256", + Oid: encoding.NewOID([]byte{0x2B, 0x24, 0x03, 0x03, 0x02, 0x08, 0x01, 0x01, 0x07}), + Curve: NewGenericCurve(brainpool.P256r1()), + }, + { + // BrainpoolP384r1 + GenName: "BrainpoolP384", + Oid: encoding.NewOID([]byte{0x2B, 0x24, 0x03, 0x03, 0x02, 0x08, 0x01, 0x01, 0x0B}), + Curve: NewGenericCurve(brainpool.P384r1()), + }, + { + // BrainpoolP512r1 + GenName: "BrainpoolP512", + Oid: encoding.NewOID([]byte{0x2B, 0x24, 0x03, 0x03, 0x02, 0x08, 0x01, 
0x01, 0x0D}), + Curve: NewGenericCurve(brainpool.P512r1()), + }, +} + +func FindByCurve(curve Curve) *CurveInfo { + for _, curveInfo := range Curves { + if curveInfo.Curve.GetCurveName() == curve.GetCurveName() { + return &curveInfo + } + } + return nil +} + +func FindByOid(oid encoding.Field) *CurveInfo { + var rawBytes = oid.Bytes() + for _, curveInfo := range Curves { + if bytes.Equal(curveInfo.Oid.Bytes(), rawBytes) { + return &curveInfo + } + } + return nil +} + +func FindEdDSAByGenName(curveGenName string) EdDSACurve { + for _, curveInfo := range Curves { + if curveInfo.GenName == curveGenName { + curve, ok := curveInfo.Curve.(EdDSACurve) + if ok { + return curve + } + } + } + return nil +} + +func FindECDSAByGenName(curveGenName string) ECDSACurve { + for _, curveInfo := range Curves { + if curveInfo.GenName == curveGenName { + curve, ok := curveInfo.Curve.(ECDSACurve) + if ok { + return curve + } + } + } + return nil +} + +func FindECDHByGenName(curveGenName string) ECDHCurve { + for _, curveInfo := range Curves { + if curveInfo.GenName == curveGenName { + curve, ok := curveInfo.Curve.(ECDHCurve) + if ok { + return curve + } + } + } + return nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curves.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curves.go new file mode 100644 index 00000000000..c47072b49ec --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/curves.go @@ -0,0 +1,48 @@ +// Package ecc implements a generic interface for ECDH, ECDSA, and EdDSA. 
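The FindByOid lookup above reduces to a raw byte comparison over the curve table. A minimal self-contained sketch (reproducing only two table entries, with names and OID bytes copied from the table above):

```go
package main

import (
	"bytes"
	"fmt"
)

// curveInfo is a trimmed-down stand-in for the vendored CurveInfo struct.
type curveInfo struct {
	name string
	oid  []byte
}

var curves = []curveInfo{
	{"P256", []byte{0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x03, 0x01, 0x07}},
	{"Curve25519", []byte{0x2B, 0x06, 0x01, 0x04, 0x01, 0x97, 0x55, 0x01, 0x05, 0x01}},
}

// findByOid returns the curve name whose OID bytes match, or "" if none does.
func findByOid(oid []byte) string {
	for _, ci := range curves {
		if bytes.Equal(ci.oid, oid) {
			return ci.name
		}
	}
	return ""
}

func main() {
	fmt.Println(findByOid([]byte{0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x03, 0x01, 0x07}))
}
```

This also shows why two table entries may share a GenName (Curve25519 appears for both ECDH and EdDSA): the OID, not the name, is the unique key.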
+package ecc + +import ( + "io" + "math/big" +) + +type Curve interface { + GetCurveName() string +} + +type ECDSACurve interface { + Curve + MarshalIntegerPoint(x, y *big.Int) []byte + UnmarshalIntegerPoint([]byte) (x, y *big.Int) + MarshalIntegerSecret(d *big.Int) []byte + UnmarshalIntegerSecret(d []byte) *big.Int + GenerateECDSA(rand io.Reader) (x, y, secret *big.Int, err error) + Sign(rand io.Reader, x, y, d *big.Int, hash []byte) (r, s *big.Int, err error) + Verify(x, y *big.Int, hash []byte, r, s *big.Int) bool + ValidateECDSA(x, y *big.Int, secret []byte) error +} + +type EdDSACurve interface { + Curve + MarshalBytePoint(x []byte) []byte + UnmarshalBytePoint([]byte) (x []byte) + MarshalByteSecret(d []byte) []byte + UnmarshalByteSecret(d []byte) []byte + MarshalSignature(sig []byte) (r, s []byte) + UnmarshalSignature(r, s []byte) (sig []byte) + GenerateEdDSA(rand io.Reader) (pub, priv []byte, err error) + Sign(publicKey, privateKey, message []byte) (sig []byte, err error) + Verify(publicKey, message, sig []byte) bool + ValidateEdDSA(publicKey, privateKey []byte) (err error) +} +type ECDHCurve interface { + Curve + MarshalBytePoint([]byte) (encoded []byte) + UnmarshalBytePoint(encoded []byte) ([]byte) + MarshalByteSecret(d []byte) []byte + UnmarshalByteSecret(d []byte) []byte + GenerateECDH(rand io.Reader) (point []byte, secret []byte, err error) + Encaps(rand io.Reader, point []byte) (ephemeral, sharedSecret []byte, err error) + Decaps(ephemeral, secret []byte) (sharedSecret []byte, err error) + ValidateECDH(public []byte, secret []byte) error +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/ed25519.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/ed25519.go new file mode 100644 index 00000000000..29f6cba9d84 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/ed25519.go @@ -0,0 +1,111 @@ +// Package ecc implements a generic interface for 
ECDH, ECDSA, and EdDSA. +package ecc + +import ( + "crypto/subtle" + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" + ed25519lib "github.com/cloudflare/circl/sign/ed25519" +) + +const ed25519Size = 32 +type ed25519 struct {} + +func NewEd25519() *ed25519 { + return &ed25519{} +} + +func (c *ed25519) GetCurveName() string { + return "ed25519" +} + +// MarshalBytePoint encodes the public point from native format, adding the prefix. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed25519) MarshalBytePoint(x []byte) []byte { + return append([]byte{0x40}, x...) +} + +// UnmarshalBytePoint decodes a point from prefixed format to native. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed25519) UnmarshalBytePoint(point []byte) (x []byte) { + if len(point) != ed25519lib.PublicKeySize + 1 { + return nil + } + + // Return unprefixed + return point[1:] +} + +// MarshalByteSecret encodes a scalar in native format. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed25519) MarshalByteSecret(d []byte) []byte { + return d +} + +// UnmarshalByteSecret decodes a scalar in native format and re-adds the stripped leading zeroes +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed25519) UnmarshalByteSecret(s []byte) (d []byte) { + if len(s) > ed25519lib.SeedSize { + return nil + } + + // Handle stripped leading zeroes + d = make([]byte, ed25519lib.SeedSize) + copy(d[ed25519lib.SeedSize - len(s):], s) + return +} + +// MarshalSignature splits a signature in R and S. 
+// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.2.3.3.1 +func (c *ed25519) MarshalSignature(sig []byte) (r, s []byte) { + return sig[:ed25519Size], sig[ed25519Size:] +} + +// UnmarshalSignature decodes R and S in the native format, re-adding the stripped leading zeroes +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.2.3.3.1 +func (c *ed25519) UnmarshalSignature(r, s []byte) (sig []byte) { + // Check size + if len(r) > 32 || len(s) > 32 { + return nil + } + + sig = make([]byte, ed25519lib.SignatureSize) + + // Handle stripped leading zeroes + copy(sig[ed25519Size-len(r):ed25519Size], r) + copy(sig[ed25519lib.SignatureSize-len(s):], s) + return sig +} + +func (c *ed25519) GenerateEdDSA(rand io.Reader) (pub, priv []byte, err error) { + pk, sk, err := ed25519lib.GenerateKey(rand) + + if err != nil { + return nil, nil, err + } + + return pk, sk[:ed25519lib.SeedSize], nil +} + +func getEd25519Sk(publicKey, privateKey []byte) ed25519lib.PrivateKey { + return append(privateKey, publicKey...) 
+} + +func (c *ed25519) Sign(publicKey, privateKey, message []byte) (sig []byte, err error) { + sig = ed25519lib.Sign(getEd25519Sk(publicKey, privateKey), message) + return sig, nil +} + +func (c *ed25519) Verify(publicKey, message, sig []byte) bool { + return ed25519lib.Verify(publicKey, message, sig) +} + +func (c *ed25519) ValidateEdDSA(publicKey, privateKey []byte) (err error) { + priv := getEd25519Sk(publicKey, privateKey) + expectedPriv := ed25519lib.NewKeyFromSeed(priv.Seed()) + if subtle.ConstantTimeCompare(priv, expectedPriv) == 0 { + return errors.KeyInvalidError("ecc: invalid ed25519 secret") + } + return nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/ed448.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/ed448.go new file mode 100644 index 00000000000..a2df3dab874 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/ed448.go @@ -0,0 +1,111 @@ +// Package ecc implements a generic interface for ECDH, ECDSA, and EdDSA. +package ecc + +import ( + "crypto/subtle" + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" + ed448lib "github.com/cloudflare/circl/sign/ed448" +) + +type ed448 struct {} + +func NewEd448() *ed448 { + return &ed448{} +} + +func (c *ed448) GetCurveName() string { + return "ed448" +} + +// MarshalBytePoint encodes the public point from native format, adding the prefix. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed448) MarshalBytePoint(x []byte) []byte { + // Return prefixed + return append([]byte{0x40}, x...) +} + +// UnmarshalBytePoint decodes a point from prefixed format to native. 
+// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed448) UnmarshalBytePoint(point []byte) (x []byte) { + if len(point) != ed448lib.PublicKeySize + 1 { + return nil + } + + // Strip prefix + return point[1:] +} + +// MarshalByteSecret encoded a scalar from native format to prefixed. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed448) MarshalByteSecret(d []byte) []byte { + // Return prefixed + return append([]byte{0x40}, d...) +} + +// UnmarshalByteSecret decodes a scalar from prefixed format to native. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.5 +func (c *ed448) UnmarshalByteSecret(s []byte) (d []byte) { + // Check prefixed size + if len(s) != ed448lib.SeedSize + 1 { + return nil + } + + // Strip prefix + return s[1:] +} + +// MarshalSignature splits a signature in R and S, where R is in prefixed native format and +// S is an MPI with value zero. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.2.3.3.2 +func (c *ed448) MarshalSignature(sig []byte) (r, s []byte) { + return append([]byte{0x40}, sig...), []byte{} +} + +// UnmarshalSignature decodes R and S in the native format. Only R is used, in prefixed native format. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.2.3.3.2 +func (c *ed448) UnmarshalSignature(r, s []byte) (sig []byte) { + if len(r) != ed448lib.SignatureSize + 1 { + return nil + } + + return r[1:] +} + +func (c *ed448) GenerateEdDSA(rand io.Reader) (pub, priv []byte, err error) { + pk, sk, err := ed448lib.GenerateKey(rand) + + if err != nil { + return nil, nil, err + } + + return pk, sk[:ed448lib.SeedSize], nil +} + +func getEd448Sk(publicKey, privateKey []byte) ed448lib.PrivateKey { + return append(privateKey, publicKey...) 
+} + +func (c *ed448) Sign(publicKey, privateKey, message []byte) (sig []byte, err error) { + // Ed448 is used with the empty string as a context string. + // See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-13.7 + sig = ed448lib.Sign(getEd448Sk(publicKey, privateKey), message, "") + + return sig, nil +} + +func (c *ed448) Verify(publicKey, message, sig []byte) bool { + // Ed448 is used with the empty string as a context string. + // See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-13.7 + return ed448lib.Verify(publicKey, message, sig, "") +} + +func (c *ed448) ValidateEdDSA(publicKey, privateKey []byte) (err error) { + priv := getEd448Sk(publicKey, privateKey) + expectedPriv := ed448lib.NewKeyFromSeed(priv.Seed()) + if subtle.ConstantTimeCompare(priv, expectedPriv) == 0 { + return errors.KeyInvalidError("ecc: invalid ed448 secret") + } + return nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/generic.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/generic.go new file mode 100644 index 00000000000..e28d7c7106a --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/generic.go @@ -0,0 +1,149 @@ +// Package ecc implements a generic interface for ECDH, ECDSA, and EdDSA. 
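The genericCurve Sign/Verify pair below is a thin wrapper over the standard library; the same round trip can be sketched directly with crypto/ecdsa on P-256 (one of the NIST curves registered in the table above):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// signAndVerify generates a P-256 key, signs the SHA-256 digest of msg, and
// verifies the (r, s) signature against the public key, as the vendored
// genericCurve methods do for NIST curves.
func signAndVerify(msg []byte) (bool, error) {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return false, err
	}
	hash := sha256.Sum256(msg)
	r, s, err := ecdsa.Sign(rand.Reader, priv, hash[:])
	if err != nil {
		return false, err
	}
	return ecdsa.Verify(&priv.PublicKey, hash[:], r, s), nil
}

func main() {
	ok, err := signAndVerify([]byte("hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println("signature verifies:", ok)
}
```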
+package ecc + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "fmt" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "io" + "math/big" +) + +type genericCurve struct { + Curve elliptic.Curve +} + +func NewGenericCurve(c elliptic.Curve) *genericCurve { + return &genericCurve{ + Curve: c, + } +} + +func (c *genericCurve) GetCurveName() string { + return c.Curve.Params().Name +} + +func (c *genericCurve) MarshalBytePoint(point []byte) []byte { + return point +} + +func (c *genericCurve) UnmarshalBytePoint(point []byte) []byte { + return point +} + +func (c *genericCurve) MarshalIntegerPoint(x, y *big.Int) []byte { + return elliptic.Marshal(c.Curve, x, y) +} + +func (c *genericCurve) UnmarshalIntegerPoint(point []byte) (x, y *big.Int) { + return elliptic.Unmarshal(c.Curve, point) +} + +func (c *genericCurve) MarshalByteSecret(d []byte) []byte { + return d +} + +func (c *genericCurve) UnmarshalByteSecret(d []byte) []byte { + return d +} + +func (c *genericCurve) MarshalIntegerSecret(d *big.Int) []byte { + return d.Bytes() +} + +func (c *genericCurve) UnmarshalIntegerSecret(d []byte) *big.Int { + return new(big.Int).SetBytes(d) +} + +func (c *genericCurve) GenerateECDH(rand io.Reader) (point, secret []byte, err error) { + secret, x, y, err := elliptic.GenerateKey(c.Curve, rand) + if err != nil { + return nil, nil, err + } + + point = elliptic.Marshal(c.Curve, x, y) + return point, secret, nil +} + +func (c *genericCurve) GenerateECDSA(rand io.Reader) (x, y, secret *big.Int, err error) { + priv, err := ecdsa.GenerateKey(c.Curve, rand) + if err != nil { + return + } + + return priv.X, priv.Y, priv.D, nil +} + +func (c *genericCurve) Encaps(rand io.Reader, point []byte) (ephemeral, sharedSecret []byte, err error) { + xP, yP := elliptic.Unmarshal(c.Curve, point) + if xP == nil { + panic("invalid point") + } + + d, x, y, err := elliptic.GenerateKey(c.Curve, rand) + if err != nil { + return nil, nil, err + } + + vsG := elliptic.Marshal(c.Curve, x, y) + zbBig, _ := 
c.Curve.ScalarMult(xP, yP, d) + + byteLen := (c.Curve.Params().BitSize + 7) >> 3 + zb := make([]byte, byteLen) + zbBytes := zbBig.Bytes() + copy(zb[byteLen-len(zbBytes):], zbBytes) + + return vsG, zb, nil +} + +func (c *genericCurve) Decaps(ephemeral, secret []byte) (sharedSecret []byte, err error) { + x, y := elliptic.Unmarshal(c.Curve, ephemeral) + zbBig, _ := c.Curve.ScalarMult(x, y, secret) + byteLen := (c.Curve.Params().BitSize + 7) >> 3 + zb := make([]byte, byteLen) + zbBytes := zbBig.Bytes() + copy(zb[byteLen-len(zbBytes):], zbBytes) + + return zb, nil +} + +func (c *genericCurve) Sign(rand io.Reader, x, y, d *big.Int, hash []byte) (r, s *big.Int, err error) { + priv := &ecdsa.PrivateKey{D: d, PublicKey: ecdsa.PublicKey{X: x, Y: y, Curve: c.Curve}} + return ecdsa.Sign(rand, priv, hash) +} + +func (c *genericCurve) Verify(x, y *big.Int, hash []byte, r, s *big.Int) bool { + pub := &ecdsa.PublicKey{X: x, Y: y, Curve: c.Curve} + return ecdsa.Verify(pub, hash, r, s) +} + +func (c *genericCurve) validate(xP, yP *big.Int, secret []byte) error { + // the public point should not be at infinity (0,0) + zero := new(big.Int) + if xP.Cmp(zero) == 0 && yP.Cmp(zero) == 0 { + return errors.KeyInvalidError(fmt.Sprintf("ecc (%s): infinity point", c.Curve.Params().Name)) + } + + // re-derive the public point Q' = (X,Y) = dG + // to compare to declared Q in public key + expectedX, expectedY := c.Curve.ScalarBaseMult(secret) + if xP.Cmp(expectedX) != 0 || yP.Cmp(expectedY) != 0 { + return errors.KeyInvalidError(fmt.Sprintf("ecc (%s): invalid point", c.Curve.Params().Name)) + } + + return nil +} + +func (c *genericCurve) ValidateECDSA(xP, yP *big.Int, secret []byte) error { + return c.validate(xP, yP, secret) +} + +func (c *genericCurve) ValidateECDH(point []byte, secret []byte) error { + xP, yP := elliptic.Unmarshal(c.Curve, point) + if xP == nil { + return errors.KeyInvalidError(fmt.Sprintf("ecc (%s): invalid point", c.Curve.Params().Name)) + } + + return c.validate(xP, yP, 
secret) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/x448.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/x448.go new file mode 100644 index 00000000000..4a940b4f4d3 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/ecc/x448.go @@ -0,0 +1,105 @@ +// Package ecc implements a generic interface for ECDH, ECDSA, and EdDSA. +package ecc + +import ( + "crypto/subtle" + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" + x448lib "github.com/cloudflare/circl/dh/x448" +) + +type x448 struct {} + +func NewX448() *x448 { + return &x448{} +} + +func (c *x448) GetCurveName() string { + return "x448" +} + +// MarshalBytePoint encodes the public point from native format, adding the prefix. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6 +func (c *x448) MarshalBytePoint(point []byte) []byte { + return append([]byte{0x40}, point...) +} + +// UnmarshalBytePoint decodes a point from prefixed format to native. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6 +func (c *x448) UnmarshalBytePoint(point []byte) []byte { + if len(point) != x448lib.Size + 1 { + return nil + } + + return point[1:] +} + +// MarshalByteSecret encoded a scalar from native format to prefixed. +// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6.1.2 +func (c *x448) MarshalByteSecret(d []byte) []byte { + return append([]byte{0x40}, d...) +} + +// UnmarshalByteSecret decodes a scalar from prefixed format to native. 
+// See https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh-06#section-5.5.5.6.1.2 +func (c *x448) UnmarshalByteSecret(d []byte) []byte { + if len(d) != x448lib.Size + 1 { + return nil + } + + // Store without prefix + return d[1:] +} + +func (c *x448) generateKeyPairBytes(rand io.Reader) (sk, pk x448lib.Key, err error) { + if _, err = rand.Read(sk[:]); err != nil { + return + } + + x448lib.KeyGen(&pk, &sk) + return +} + +func (c *x448) GenerateECDH(rand io.Reader) (point []byte, secret []byte, err error) { + priv, pub, err := c.generateKeyPairBytes(rand) + if err != nil { + return + } + + return pub[:], priv[:], nil +} + +func (c *x448) Encaps(rand io.Reader, point []byte) (ephemeral, sharedSecret []byte, err error) { + var pk, ss x448lib.Key + seed, e, err := c.generateKeyPairBytes(rand) + + copy(pk[:], point) + x448lib.Shared(&ss, &seed, &pk) + + return e[:], ss[:], nil +} + +func (c *x448) Decaps(ephemeral, secret []byte) (sharedSecret []byte, err error) { + var ss, sk, e x448lib.Key + + copy(sk[:], secret) + copy(e[:], ephemeral) + x448lib.Shared(&ss, &sk, &e) + + return ss[:], nil +} + +func (c *x448) ValidateECDH(point []byte, secret []byte) error { + var sk, pk, expectedPk x448lib.Key + + copy(pk[:], point) + copy(sk[:], secret) + x448lib.KeyGen(&expectedPk, &sk) + + if subtle.ConstantTimeCompare(expectedPk[:], pk[:]) == 0 { + return errors.KeyInvalidError("ecc: invalid curve25519 public point") + } + + return nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/encoding.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/encoding.go new file mode 100644 index 00000000000..6c921481b7b --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/encoding.go @@ -0,0 +1,27 @@ +// Copyright 2017 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package encoding implements openpgp packet field encodings as specified in +// RFC 4880 and 6637. +package encoding + +import "io" + +// Field is an encoded field of an openpgp packet. +type Field interface { + // Bytes returns the decoded data. + Bytes() []byte + + // BitLength is the size in bits of the decoded data. + BitLength() uint16 + + // EncodedBytes returns the encoded data. + EncodedBytes() []byte + + // EncodedLength is the size in bytes of the encoded data. + EncodedLength() uint16 + + // ReadFrom reads the next Field from r. + ReadFrom(r io.Reader) (int64, error) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/mpi.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/mpi.go new file mode 100644 index 00000000000..02e5e695c38 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/mpi.go @@ -0,0 +1,91 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package encoding + +import ( + "io" + "math/big" + "math/bits" +) + +// An MPI is used to store the contents of a big integer, along with the bit +// length that was specified in the original input. This allows the MPI to be +// reserialized exactly. +type MPI struct { + bytes []byte + bitLength uint16 +} + +// NewMPI returns a MPI initialized with bytes. +func NewMPI(bytes []byte) *MPI { + for len(bytes) != 0 && bytes[0] == 0 { + bytes = bytes[1:] + } + if len(bytes) == 0 { + bitLength := uint16(0) + return &MPI{bytes, bitLength} + } + bitLength := 8*uint16(len(bytes)-1) + uint16(bits.Len8(bytes[0])) + return &MPI{bytes, bitLength} +} + +// Bytes returns the decoded data. 
+func (m *MPI) Bytes() []byte { + return m.bytes +} + +// BitLength is the size in bits of the decoded data. +func (m *MPI) BitLength() uint16 { + return m.bitLength +} + +// EncodedBytes returns the encoded data. +func (m *MPI) EncodedBytes() []byte { + return append([]byte{byte(m.bitLength >> 8), byte(m.bitLength)}, m.bytes...) +} + +// EncodedLength is the size in bytes of the encoded data. +func (m *MPI) EncodedLength() uint16 { + return uint16(2 + len(m.bytes)) +} + +// ReadFrom reads into m the next MPI from r. +func (m *MPI) ReadFrom(r io.Reader) (int64, error) { + var buf [2]byte + n, err := io.ReadFull(r, buf[0:]) + if err != nil { + if err == io.EOF { + err = io.ErrUnexpectedEOF + } + return int64(n), err + } + + m.bitLength = uint16(buf[0])<<8 | uint16(buf[1]) + m.bytes = make([]byte, (int(m.bitLength)+7)/8) + + nn, err := io.ReadFull(r, m.bytes) + if err == io.EOF { + err = io.ErrUnexpectedEOF + } + + // remove leading zero bytes from malformed GnuPG encoded MPIs: + // https://bugs.gnupg.org/gnupg/issue1853 + // for _, b := range m.bytes { + // if b != 0 { + // break + // } + // m.bytes = m.bytes[1:] + // m.bitLength -= 8 + // } + + return int64(n) + int64(nn), err +} + +// SetBig initializes m with the bits from n. +func (m *MPI) SetBig(n *big.Int) *MPI { + m.bytes = n.Bytes() + m.bitLength = uint16(n.BitLen()) + return m +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/oid.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/oid.go new file mode 100644 index 00000000000..c9df9fe2322 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/internal/encoding/oid.go @@ -0,0 +1,88 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
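The MPI wire format implemented by NewMPI and EncodedBytes above (a two-octet big-endian bit count, then the magnitude with leading zero bytes stripped) can be sketched in a single helper:

```go
package main

import (
	"fmt"
	"math/bits"
)

// encodeMPI strips leading zero bytes, computes the bit length of the
// remaining magnitude (8 bits per full byte plus the significant bits of the
// leading byte), and prefixes it as a two-octet big-endian count.
func encodeMPI(b []byte) []byte {
	for len(b) > 0 && b[0] == 0 {
		b = b[1:]
	}
	var bitLen uint16
	if len(b) > 0 {
		bitLen = 8*uint16(len(b)-1) + uint16(bits.Len8(b[0]))
	}
	return append([]byte{byte(bitLen >> 8), byte(bitLen)}, b...)
}

func main() {
	// 0x01 0xFF has 9 significant bits, so the encoding is 00 09 01 FF.
	fmt.Printf("% X\n", encodeMPI([]byte{0x00, 0x01, 0xFF}))
}
```

The bit count, not the byte count, is what allows an MPI to be reserialized exactly, which is why ReadFrom stores it alongside the raw bytes.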
+ +package encoding + +import ( + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" +) + +// OID is used to store a variable-length field with a one-octet size +// prefix. See https://tools.ietf.org/html/rfc6637#section-9. +type OID struct { + bytes []byte +} + +const ( + // maxOID is the maximum number of bytes in a OID. + maxOID = 254 + // reservedOIDLength1 and reservedOIDLength2 are OID lengths that the RFC + // specifies are reserved. + reservedOIDLength1 = 0 + reservedOIDLength2 = 0xff +) + +// NewOID returns a OID initialized with bytes. +func NewOID(bytes []byte) *OID { + switch len(bytes) { + case reservedOIDLength1, reservedOIDLength2: + panic("encoding: NewOID argument length is reserved") + default: + if len(bytes) > maxOID { + panic("encoding: NewOID argument too large") + } + } + + return &OID{ + bytes: bytes, + } +} + +// Bytes returns the decoded data. +func (o *OID) Bytes() []byte { + return o.bytes +} + +// BitLength is the size in bits of the decoded data. +func (o *OID) BitLength() uint16 { + return uint16(len(o.bytes) * 8) +} + +// EncodedBytes returns the encoded data. +func (o *OID) EncodedBytes() []byte { + return append([]byte{byte(len(o.bytes))}, o.bytes...) +} + +// EncodedLength is the size in bytes of the encoded data. +func (o *OID) EncodedLength() uint16 { + return uint16(1 + len(o.bytes)) +} + +// ReadFrom reads into b the next OID from r. 
+func (o *OID) ReadFrom(r io.Reader) (int64, error) { + var buf [1]byte + n, err := io.ReadFull(r, buf[:]) + if err != nil { + if err == io.EOF { + err = io.ErrUnexpectedEOF + } + return int64(n), err + } + + switch buf[0] { + case reservedOIDLength1, reservedOIDLength2: + return int64(n), errors.UnsupportedError("reserved for future extensions") + } + + o.bytes = make([]byte, buf[0]) + + nn, err := io.ReadFull(r, o.bytes) + if err == io.EOF { + err = io.ErrUnexpectedEOF + } + + return int64(n) + int64(nn), err +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/key_generation.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/key_generation.go new file mode 100644 index 00000000000..0e71934cd90 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/key_generation.go @@ -0,0 +1,389 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package openpgp + +import ( + "crypto" + "crypto/rand" + "crypto/rsa" + goerrors "errors" + "io" + "math/big" + "time" + + "github.com/ProtonMail/go-crypto/openpgp/ecdh" + "github.com/ProtonMail/go-crypto/openpgp/ecdsa" + "github.com/ProtonMail/go-crypto/openpgp/eddsa" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" + "github.com/ProtonMail/go-crypto/openpgp/internal/ecc" + "github.com/ProtonMail/go-crypto/openpgp/packet" +) + +// NewEntity returns an Entity that contains a fresh RSA/RSA keypair with a +// single identity composed of the given full name, comment and email, any of +// which may be empty but must not contain any of "()<>\x00". +// If config is nil, sensible defaults will be used. 
+func NewEntity(name, comment, email string, config *packet.Config) (*Entity, error) { + creationTime := config.Now() + keyLifetimeSecs := config.KeyLifetime() + + // Generate a primary signing key + primaryPrivRaw, err := newSigner(config) + if err != nil { + return nil, err + } + primary := packet.NewSignerPrivateKey(creationTime, primaryPrivRaw) + if config != nil && config.V5Keys { + primary.UpgradeToV5() + } + + e := &Entity{ + PrimaryKey: &primary.PublicKey, + PrivateKey: primary, + Identities: make(map[string]*Identity), + Subkeys: []Subkey{}, + } + + err = e.addUserId(name, comment, email, config, creationTime, keyLifetimeSecs) + if err != nil { + return nil, err + } + + // NOTE: No key expiry here, but we will not return this subkey in EncryptionKey() + // if the primary/master key has expired. + err = e.addEncryptionSubkey(config, creationTime, 0) + if err != nil { + return nil, err + } + + return e, nil +} + +func (t *Entity) AddUserId(name, comment, email string, config *packet.Config) error { + creationTime := config.Now() + keyLifetimeSecs := config.KeyLifetime() + return t.addUserId(name, comment, email, config, creationTime, keyLifetimeSecs) +} + +func (t *Entity) addUserId(name, comment, email string, config *packet.Config, creationTime time.Time, keyLifetimeSecs uint32) error { + uid := packet.NewUserId(name, comment, email) + if uid == nil { + return errors.InvalidArgumentError("user id field contained invalid characters") + } + + if _, ok := t.Identities[uid.Id]; ok { + return errors.InvalidArgumentError("user id exist") + } + + primary := t.PrivateKey + + isPrimaryId := len(t.Identities) == 0 + + selfSignature := createSignaturePacket(&primary.PublicKey, packet.SigTypePositiveCert, config) + selfSignature.CreationTime = creationTime + selfSignature.KeyLifetimeSecs = &keyLifetimeSecs + selfSignature.IsPrimaryId = &isPrimaryId + selfSignature.FlagsValid = true + selfSignature.FlagSign = true + selfSignature.FlagCertify = true + 
selfSignature.SEIPDv1 = true // true by default, see 5.8 vs. 5.14 + selfSignature.SEIPDv2 = config.AEAD() != nil + + // Set the PreferredHash for the SelfSignature from the packet.Config. + // If it is not the must-implement algorithm from rfc4880bis, append that. + hash, ok := algorithm.HashToHashId(config.Hash()) + if !ok { + return errors.UnsupportedError("unsupported preferred hash function") + } + + selfSignature.PreferredHash = []uint8{hash} + if config.Hash() != crypto.SHA256 { + selfSignature.PreferredHash = append(selfSignature.PreferredHash, hashToHashId(crypto.SHA256)) + } + + // Likewise for DefaultCipher. + selfSignature.PreferredSymmetric = []uint8{uint8(config.Cipher())} + if config.Cipher() != packet.CipherAES128 { + selfSignature.PreferredSymmetric = append(selfSignature.PreferredSymmetric, uint8(packet.CipherAES128)) + } + + // We set CompressionNone as the preferred compression algorithm because + // of compression side channel attacks, then append the configured + // DefaultCompressionAlgo if any is set (to signal support for cases + // where the application knows that using compression is safe). + selfSignature.PreferredCompression = []uint8{uint8(packet.CompressionNone)} + if config.Compression() != packet.CompressionNone { + selfSignature.PreferredCompression = append(selfSignature.PreferredCompression, uint8(config.Compression())) + } + + // And for DefaultMode. 
+ modes := []uint8{uint8(config.AEAD().Mode())} + if config.AEAD().Mode() != packet.AEADModeOCB { + modes = append(modes, uint8(packet.AEADModeOCB)) + } + + // For preferred (AES256, GCM), we'll generate (AES256, GCM), (AES256, OCB), (AES128, GCM), (AES128, OCB) + for _, cipher := range selfSignature.PreferredSymmetric { + for _, mode := range modes { + selfSignature.PreferredCipherSuites = append(selfSignature.PreferredCipherSuites, [2]uint8{cipher, mode}) + } + } + + // User ID binding signature + err := selfSignature.SignUserId(uid.Id, &primary.PublicKey, primary, config) + if err != nil { + return err + } + t.Identities[uid.Id] = &Identity{ + Name: uid.Id, + UserId: uid, + SelfSignature: selfSignature, + Signatures: []*packet.Signature{selfSignature}, + } + return nil +} + +// AddSigningSubkey adds a signing keypair as a subkey to the Entity. +// If config is nil, sensible defaults will be used. +func (e *Entity) AddSigningSubkey(config *packet.Config) error { + creationTime := config.Now() + keyLifetimeSecs := config.KeyLifetime() + + subPrivRaw, err := newSigner(config) + if err != nil { + return err + } + sub := packet.NewSignerPrivateKey(creationTime, subPrivRaw) + sub.IsSubkey = true + if config != nil && config.V5Keys { + sub.UpgradeToV5() + } + + subkey := Subkey{ + PublicKey: &sub.PublicKey, + PrivateKey: sub, + } + subkey.Sig = createSignaturePacket(e.PrimaryKey, packet.SigTypeSubkeyBinding, config) + subkey.Sig.CreationTime = creationTime + subkey.Sig.KeyLifetimeSecs = &keyLifetimeSecs + subkey.Sig.FlagsValid = true + subkey.Sig.FlagSign = true + subkey.Sig.EmbeddedSignature = createSignaturePacket(subkey.PublicKey, packet.SigTypePrimaryKeyBinding, config) + subkey.Sig.EmbeddedSignature.CreationTime = creationTime + + err = subkey.Sig.EmbeddedSignature.CrossSignKey(subkey.PublicKey, e.PrimaryKey, subkey.PrivateKey, config) + if err != nil { + return err + } + + err = subkey.Sig.SignKey(subkey.PublicKey, e.PrivateKey, config) + if err != nil { + return 
err + } + + e.Subkeys = append(e.Subkeys, subkey) + return nil +} + +// AddEncryptionSubkey adds an encryption keypair as a subkey to the Entity. +// If config is nil, sensible defaults will be used. +func (e *Entity) AddEncryptionSubkey(config *packet.Config) error { + creationTime := config.Now() + keyLifetimeSecs := config.KeyLifetime() + return e.addEncryptionSubkey(config, creationTime, keyLifetimeSecs) +} + +func (e *Entity) addEncryptionSubkey(config *packet.Config, creationTime time.Time, keyLifetimeSecs uint32) error { + subPrivRaw, err := newDecrypter(config) + if err != nil { + return err + } + sub := packet.NewDecrypterPrivateKey(creationTime, subPrivRaw) + sub.IsSubkey = true + if config != nil && config.V5Keys { + sub.UpgradeToV5() + } + + subkey := Subkey{ + PublicKey: &sub.PublicKey, + PrivateKey: sub, + } + subkey.Sig = createSignaturePacket(e.PrimaryKey, packet.SigTypeSubkeyBinding, config) + subkey.Sig.CreationTime = creationTime + subkey.Sig.KeyLifetimeSecs = &keyLifetimeSecs + subkey.Sig.FlagsValid = true + subkey.Sig.FlagEncryptStorage = true + subkey.Sig.FlagEncryptCommunications = true + + err = subkey.Sig.SignKey(subkey.PublicKey, e.PrivateKey, config) + if err != nil { + return err + } + + e.Subkeys = append(e.Subkeys, subkey) + return nil +} + +// Generates a signing key +func newSigner(config *packet.Config) (signer interface{}, err error) { + switch config.PublicKeyAlgorithm() { + case packet.PubKeyAlgoRSA: + bits := config.RSAModulusBits() + if bits < 1024 { + return nil, errors.InvalidArgumentError("bits must be >= 1024") + } + if config != nil && len(config.RSAPrimes) >= 2 { + primes := config.RSAPrimes[0:2] + config.RSAPrimes = config.RSAPrimes[2:] + return generateRSAKeyWithPrimes(config.Random(), 2, bits, primes) + } + return rsa.GenerateKey(config.Random(), bits) + case packet.PubKeyAlgoEdDSA: + curve := ecc.FindEdDSAByGenName(string(config.CurveName())) + if curve == nil { + return nil, errors.InvalidArgumentError("unsupported 
curve") + } + + priv, err := eddsa.GenerateKey(config.Random(), curve) + if err != nil { + return nil, err + } + return priv, nil + case packet.PubKeyAlgoECDSA: + curve := ecc.FindECDSAByGenName(string(config.CurveName())) + if curve == nil { + return nil, errors.InvalidArgumentError("unsupported curve") + } + + priv, err := ecdsa.GenerateKey(config.Random(), curve) + if err != nil { + return nil, err + } + return priv, nil + default: + return nil, errors.InvalidArgumentError("unsupported public key algorithm") + } +} + +// Generates an encryption/decryption key +func newDecrypter(config *packet.Config) (decrypter interface{}, err error) { + switch config.PublicKeyAlgorithm() { + case packet.PubKeyAlgoRSA: + bits := config.RSAModulusBits() + if bits < 1024 { + return nil, errors.InvalidArgumentError("bits must be >= 1024") + } + if config != nil && len(config.RSAPrimes) >= 2 { + primes := config.RSAPrimes[0:2] + config.RSAPrimes = config.RSAPrimes[2:] + return generateRSAKeyWithPrimes(config.Random(), 2, bits, primes) + } + return rsa.GenerateKey(config.Random(), bits) + case packet.PubKeyAlgoEdDSA, packet.PubKeyAlgoECDSA: + fallthrough // When passing EdDSA or ECDSA, we generate an ECDH subkey + case packet.PubKeyAlgoECDH: + var kdf = ecdh.KDF{ + Hash: algorithm.SHA512, + Cipher: algorithm.AES256, + } + curve := ecc.FindECDHByGenName(string(config.CurveName())) + if curve == nil { + return nil, errors.InvalidArgumentError("unsupported curve") + } + return ecdh.GenerateKey(config.Random(), curve, kdf) + default: + return nil, errors.InvalidArgumentError("unsupported public key algorithm") + } +} + +var bigOne = big.NewInt(1) + +// generateRSAKeyWithPrimes generates a multi-prime RSA keypair of the +// given bit size, using the given random source and prepopulated primes. 
+func generateRSAKeyWithPrimes(random io.Reader, nprimes int, bits int, prepopulatedPrimes []*big.Int) (*rsa.PrivateKey, error) { + priv := new(rsa.PrivateKey) + priv.E = 65537 + + if nprimes < 2 { + return nil, goerrors.New("generateRSAKeyWithPrimes: nprimes must be >= 2") + } + + if bits < 1024 { + return nil, goerrors.New("generateRSAKeyWithPrimes: bits must be >= 1024") + } + + primes := make([]*big.Int, nprimes) + +NextSetOfPrimes: + for { + todo := bits + // crypto/rand should set the top two bits in each prime. + // Thus each prime has the form + // p_i = 2^bitlen(p_i) × 0.11... (in base 2). + // And the product is: + // P = 2^todo × α + // where α is the product of nprimes numbers of the form 0.11... + // + // If α < 1/2 (which can happen for nprimes > 2), we need to + // shift todo to compensate for lost bits: the mean value of 0.11... + // is 7/8, so todo + shift - nprimes * log2(7/8) ~= bits - 1/2 + // will give good results. + if nprimes >= 7 { + todo += (nprimes - 2) / 5 + } + for i := 0; i < nprimes; i++ { + var err error + if len(prepopulatedPrimes) == 0 { + primes[i], err = rand.Prime(random, todo/(nprimes-i)) + if err != nil { + return nil, err + } + } else { + primes[i] = prepopulatedPrimes[0] + prepopulatedPrimes = prepopulatedPrimes[1:] + } + + todo -= primes[i].BitLen() + } + + // Make sure that primes is pairwise unequal. + for i, prime := range primes { + for j := 0; j < i; j++ { + if prime.Cmp(primes[j]) == 0 { + continue NextSetOfPrimes + } + } + } + + n := new(big.Int).Set(bigOne) + totient := new(big.Int).Set(bigOne) + pminus1 := new(big.Int) + for _, prime := range primes { + n.Mul(n, prime) + pminus1.Sub(prime, bigOne) + totient.Mul(totient, pminus1) + } + if n.BitLen() != bits { + // This should never happen for nprimes == 2 because + // crypto/rand should set the top two bits in each prime. + // For nprimes > 2 we hope it does not happen often. 
+ continue NextSetOfPrimes + } + + priv.D = new(big.Int) + e := big.NewInt(int64(priv.E)) + ok := priv.D.ModInverse(e, totient) + + if ok != nil { + priv.Primes = primes + priv.N = n + break + } + } + + priv.Precompute() + return priv, nil +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/keys.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/keys.go similarity index 52% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/keys.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/keys.go index d62f787e9d5..120f081ada0 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/keys.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/keys.go @@ -5,13 +5,13 @@ package openpgp import ( - "crypto/rsa" + goerrors "errors" "io" "time" - "golang.org/x/crypto/openpgp/armor" - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/packet" + "github.com/ProtonMail/go-crypto/openpgp/armor" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/packet" ) // PublicKeyType is the armor type for a PGP public key. @@ -37,15 +37,17 @@ type Identity struct { Name string // by convention, has the form "Full Name (comment) " UserId *packet.UserId SelfSignature *packet.Signature - Signatures []*packet.Signature + Revocations []*packet.Signature + Signatures []*packet.Signature // all (potentially unverified) self-signatures, revocations, and third-party signatures } // A Subkey is an additional public key in an Entity. Subkeys can be used for // encryption. type Subkey struct { - PublicKey *packet.PublicKey - PrivateKey *packet.PrivateKey - Sig *packet.Signature + PublicKey *packet.PublicKey + PrivateKey *packet.PrivateKey + Sig *packet.Signature + Revocations []*packet.Signature } // A Key identifies a specific public key in an Entity. 
This is either the @@ -55,13 +57,14 @@ type Key struct { PublicKey *packet.PublicKey PrivateKey *packet.PrivateKey SelfSignature *packet.Signature + Revocations []*packet.Signature } // A KeyRing provides access to public and private keys. type KeyRing interface { // KeysById returns the set of keys that have the given key id. KeysById(id uint64) []Key - // KeysByIdUsage returns the set of keys with the given id + // KeysByIdAndUsage returns the set of keys with the given id // that also meet the key usage given by requiredUsage. // The requiredUsage is expressed as the bitwise-OR of // packet.KeyFlag* values. @@ -71,33 +74,71 @@ type KeyRing interface { DecryptionKeys() []Key } -// primaryIdentity returns the Identity marked as primary or the first identity -// if none are so marked. -func (e *Entity) primaryIdentity() *Identity { - var firstIdentity *Identity +// PrimaryIdentity returns an Identity, preferring non-revoked identities, +// identities marked as primary, or the latest-created identity, in that order. 
+func (e *Entity) PrimaryIdentity() *Identity { + var primaryIdentity *Identity for _, ident := range e.Identities { - if firstIdentity == nil { - firstIdentity = ident - } - if ident.SelfSignature.IsPrimaryId != nil && *ident.SelfSignature.IsPrimaryId { - return ident + if shouldPreferIdentity(primaryIdentity, ident) { + primaryIdentity = ident } } - return firstIdentity + return primaryIdentity +} + +func shouldPreferIdentity(existingId, potentialNewId *Identity) bool { + if existingId == nil { + return true + } + + if len(existingId.Revocations) > len(potentialNewId.Revocations) { + return true + } + + if len(existingId.Revocations) < len(potentialNewId.Revocations) { + return false + } + + if existingId.SelfSignature == nil { + return true + } + + if existingId.SelfSignature.IsPrimaryId != nil && *existingId.SelfSignature.IsPrimaryId && + !(potentialNewId.SelfSignature.IsPrimaryId != nil && *potentialNewId.SelfSignature.IsPrimaryId) { + return false + } + + if !(existingId.SelfSignature.IsPrimaryId != nil && *existingId.SelfSignature.IsPrimaryId) && + potentialNewId.SelfSignature.IsPrimaryId != nil && *potentialNewId.SelfSignature.IsPrimaryId { + return true + } + + return potentialNewId.SelfSignature.CreationTime.After(existingId.SelfSignature.CreationTime) } -// encryptionKey returns the best candidate Key for encrypting a message to the +// EncryptionKey returns the best candidate Key for encrypting a message to the // given Entity. -func (e *Entity) encryptionKey(now time.Time) (Key, bool) { +func (e *Entity) EncryptionKey(now time.Time) (Key, bool) { + // Fail to find any encryption key if the... 
+ i := e.PrimaryIdentity() + if e.PrimaryKey.KeyExpired(i.SelfSignature, now) || // primary key has expired + i.SelfSignature == nil || // user ID has no self-signature + i.SelfSignature.SigExpired(now) || // user ID self-signature has expired + e.Revoked(now) || // primary key has been revoked + i.Revoked(now) { // user ID has been revoked + return Key{}, false + } + + // Iterate the keys to find the newest, unexpired one candidateSubkey := -1 - - // Iterate the keys to find the newest key var maxTime time.Time for i, subkey := range e.Subkeys { if subkey.Sig.FlagsValid && subkey.Sig.FlagEncryptCommunications && subkey.PublicKey.PubKeyAlgo.CanEncrypt() && - !subkey.Sig.KeyExpired(now) && + !subkey.PublicKey.KeyExpired(subkey.Sig, now) && + !subkey.Sig.SigExpired(now) && + !subkey.Revoked(now) && (maxTime.IsZero() || subkey.Sig.CreationTime.After(maxTime)) { candidateSubkey = i maxTime = subkey.Sig.CreationTime @@ -106,55 +147,136 @@ func (e *Entity) encryptionKey(now time.Time) (Key, bool) { if candidateSubkey != -1 { subkey := e.Subkeys[candidateSubkey] - return Key{e, subkey.PublicKey, subkey.PrivateKey, subkey.Sig}, true + return Key{e, subkey.PublicKey, subkey.PrivateKey, subkey.Sig, subkey.Revocations}, true } // If we don't have any candidate subkeys for encryption and // the primary key doesn't have any usage metadata then we // assume that the primary key is ok. Or, if the primary key is - // marked as ok to encrypt to, then we can obviously use it. - i := e.primaryIdentity() + // marked as ok to encrypt with, then we can obviously use it. if !i.SelfSignature.FlagsValid || i.SelfSignature.FlagEncryptCommunications && - e.PrimaryKey.PubKeyAlgo.CanEncrypt() && - !i.SelfSignature.KeyExpired(now) { - return Key{e, e.PrimaryKey, e.PrivateKey, i.SelfSignature}, true + e.PrimaryKey.PubKeyAlgo.CanEncrypt() { + return Key{e, e.PrimaryKey, e.PrivateKey, i.SelfSignature, e.Revocations}, true } - // This Entity appears to be signing only. 
return Key{}, false } -// signingKey return the best candidate Key for signing a message with this + +// CertificationKey returns the best candidate Key for certifying a key with this // Entity. -func (e *Entity) signingKey(now time.Time) (Key, bool) { - candidateSubkey := -1 +func (e *Entity) CertificationKey(now time.Time) (Key, bool) { + return e.CertificationKeyById(now, 0) +} - for i, subkey := range e.Subkeys { +// CertificationKeyById returns the Key for key certification with this +// Entity and keyID. +func (e *Entity) CertificationKeyById(now time.Time, id uint64) (Key, bool) { + return e.signingKeyByIdUsage(now, id, packet.KeyFlagCertify) +} + +// SigningKey returns the best candidate Key for signing a message with this +// Entity. +func (e *Entity) SigningKey(now time.Time) (Key, bool) { + return e.SigningKeyById(now, 0) +} + +// SigningKeyById returns the Key for signing a message with this +// Entity and keyID. +func (e *Entity) SigningKeyById(now time.Time, id uint64) (Key, bool) { + return e.signingKeyByIdUsage(now, id, packet.KeyFlagSign) +} + +func (e *Entity) signingKeyByIdUsage(now time.Time, id uint64, flags int) (Key, bool) { + // Fail to find any signing key if the...
+ i := e.PrimaryIdentity() + if e.PrimaryKey.KeyExpired(i.SelfSignature, now) || // primary key has expired + i.SelfSignature == nil || // user ID has no self-signature + i.SelfSignature.SigExpired(now) || // user ID self-signature has expired + e.Revoked(now) || // primary key has been revoked + i.Revoked(now) { // user ID has been revoked + return Key{}, false + } + + // Iterate the keys to find the newest, unexpired one + candidateSubkey := -1 + var maxTime time.Time + for idx, subkey := range e.Subkeys { if subkey.Sig.FlagsValid && - subkey.Sig.FlagSign && + (flags & packet.KeyFlagCertify == 0 || subkey.Sig.FlagCertify) && + (flags & packet.KeyFlagSign == 0 || subkey.Sig.FlagSign) && subkey.PublicKey.PubKeyAlgo.CanSign() && - !subkey.Sig.KeyExpired(now) { - candidateSubkey = i - break + !subkey.PublicKey.KeyExpired(subkey.Sig, now) && + !subkey.Sig.SigExpired(now) && + !subkey.Revoked(now) && + (maxTime.IsZero() || subkey.Sig.CreationTime.After(maxTime)) && + (id == 0 || subkey.PublicKey.KeyId == id) { + candidateSubkey = idx + maxTime = subkey.Sig.CreationTime } } if candidateSubkey != -1 { subkey := e.Subkeys[candidateSubkey] - return Key{e, subkey.PublicKey, subkey.PrivateKey, subkey.Sig}, true + return Key{e, subkey.PublicKey, subkey.PrivateKey, subkey.Sig, subkey.Revocations}, true } // If we have no candidate subkey then we assume that it's ok to sign - // with the primary key. - i := e.primaryIdentity() - if !i.SelfSignature.FlagsValid || i.SelfSignature.FlagSign && - !i.SelfSignature.KeyExpired(now) { - return Key{e, e.PrimaryKey, e.PrivateKey, i.SelfSignature}, true + // with the primary key. Or, if the primary key is marked as ok to + // sign with, then we can use it. 
+ if !i.SelfSignature.FlagsValid || ( + (flags & packet.KeyFlagCertify == 0 || i.SelfSignature.FlagCertify) && + (flags & packet.KeyFlagSign == 0 || i.SelfSignature.FlagSign)) && + e.PrimaryKey.PubKeyAlgo.CanSign() && + (id == 0 || e.PrimaryKey.KeyId == id) { + return Key{e, e.PrimaryKey, e.PrivateKey, i.SelfSignature, e.Revocations}, true } + // No keys with a valid Signing Flag or no keys matched the id passed in return Key{}, false } +func revoked(revocations []*packet.Signature, now time.Time) bool { + for _, revocation := range revocations { + if revocation.RevocationReason != nil && *revocation.RevocationReason == packet.KeyCompromised { + // If the key is compromised, the key is considered revoked even before the revocation date. + return true + } + if !revocation.SigExpired(now) { + return true + } + } + return false +} + +// Revoked returns whether the entity has any direct key revocation signatures. +// Note that third-party revocation signatures are not supported. +// Note also that Identity and Subkey revocation should be checked separately. +func (e *Entity) Revoked(now time.Time) bool { + return revoked(e.Revocations, now) +} + +// Revoked returns whether the identity has been revoked by a self-signature. +// Note that third-party revocation signatures are not supported. +func (i *Identity) Revoked(now time.Time) bool { + return revoked(i.Revocations, now) +} + +// Revoked returns whether the subkey has been revoked by a self-signature. +// Note that third-party revocation signatures are not supported. +func (s *Subkey) Revoked(now time.Time) bool { + return revoked(s.Revocations, now) +} + +// Revoked returns whether the key or subkey has been revoked by a self-signature. +// Note that third-party revocation signatures are not supported. +// Note also that Identity revocation should be checked separately. +// Normally, it's not necessary to call this function, except on keys returned by +// KeysById or KeysByIdUsage. 
+func (key *Key) Revoked(now time.Time) bool { + return revoked(key.Revocations, now) +} + // An EntityList contains one or more Entities. type EntityList []*Entity @@ -162,41 +284,26 @@ type EntityList []*Entity func (el EntityList) KeysById(id uint64) (keys []Key) { for _, e := range el { if e.PrimaryKey.KeyId == id { - var selfSig *packet.Signature - for _, ident := range e.Identities { - if selfSig == nil { - selfSig = ident.SelfSignature - } else if ident.SelfSignature.IsPrimaryId != nil && *ident.SelfSignature.IsPrimaryId { - selfSig = ident.SelfSignature - break - } - } - keys = append(keys, Key{e, e.PrimaryKey, e.PrivateKey, selfSig}) + ident := e.PrimaryIdentity() + selfSig := ident.SelfSignature + keys = append(keys, Key{e, e.PrimaryKey, e.PrivateKey, selfSig, e.Revocations}) } for _, subKey := range e.Subkeys { if subKey.PublicKey.KeyId == id { - keys = append(keys, Key{e, subKey.PublicKey, subKey.PrivateKey, subKey.Sig}) + keys = append(keys, Key{e, subKey.PublicKey, subKey.PrivateKey, subKey.Sig, subKey.Revocations}) } } } return } -// KeysByIdUsage returns the set of keys with the given id that also meet +// KeysByIdAndUsage returns the set of keys with the given id that also meet // the key usage given by requiredUsage. The requiredUsage is expressed as // the bitwise-OR of packet.KeyFlag* values. 
func (el EntityList) KeysByIdUsage(id uint64, requiredUsage byte) (keys []Key) { for _, key := range el.KeysById(id) { - if len(key.Entity.Revocations) > 0 { - continue - } - - if key.SelfSignature.RevocationReason != nil { - continue - } - - if key.SelfSignature.FlagsValid && requiredUsage != 0 { + if key.SelfSignature != nil && key.SelfSignature.FlagsValid && requiredUsage != 0 { var usage byte if key.SelfSignature.FlagCertify { usage |= packet.KeyFlagCertify @@ -225,7 +332,7 @@ func (el EntityList) DecryptionKeys() (keys []Key) { for _, e := range el { for _, subKey := range e.Subkeys { if subKey.PrivateKey != nil && (!subKey.Sig.FlagsValid || subKey.Sig.FlagEncryptStorage || subKey.Sig.FlagEncryptCommunications) { - keys = append(keys, Key{e, subKey.PublicKey, subKey.PrivateKey, subKey.Sig}) + keys = append(keys, Key{e, subKey.PublicKey, subKey.PrivateKey, subKey.Sig, subKey.Revocations}) } } } @@ -420,11 +527,24 @@ func addUserID(e *Entity, packets *packet.Reader, pkt *packet.UserId) error { break } - if (sig.SigType == packet.SigTypePositiveCert || sig.SigType == packet.SigTypeGenericCert) && sig.IssuerKeyId != nil && *sig.IssuerKeyId == e.PrimaryKey.KeyId { + if sig.SigType != packet.SigTypeGenericCert && + sig.SigType != packet.SigTypePersonaCert && + sig.SigType != packet.SigTypeCasualCert && + sig.SigType != packet.SigTypePositiveCert && + sig.SigType != packet.SigTypeCertificationRevocation { + return errors.StructuralError("user ID signature with wrong type") + } + + if sig.CheckKeyIdOrFingerprint(e.PrimaryKey) { if err = e.PrimaryKey.VerifyUserIdSignature(pkt.Id, e.PrimaryKey, sig); err != nil { return errors.StructuralError("user ID self-signature invalid: " + err.Error()) } - identity.SelfSignature = sig + if sig.SigType == packet.SigTypeCertificationRevocation { + identity.Revocations = append(identity.Revocations, sig) + } else if identity.SelfSignature == nil || sig.CreationTime.After(identity.SelfSignature.CreationTime) { + identity.SelfSignature 
= sig + } + identity.Signatures = append(identity.Signatures, sig) e.Identities[pkt.Id] = identity } else { identity.Signatures = append(identity.Signatures, sig) @@ -463,10 +583,9 @@ func addSubkey(e *Entity, packets *packet.Reader, pub *packet.PublicKey, priv *p switch sig.SigType { case packet.SigTypeSubkeyRevocation: - subKey.Sig = sig + subKey.Revocations = append(subKey.Revocations, sig) case packet.SigTypeSubkeyBinding: - - if shouldReplaceSubkeySig(subKey.Sig, sig) { + if subKey.Sig == nil || sig.CreationTime.After(subKey.Sig.CreationTime) { + subKey.Sig = sig } } @@ -481,131 +600,59 @@ func addSubkey(e *Entity, packets *packet.Reader, pub *packet.PublicKey, priv *p return nil } -func shouldReplaceSubkeySig(existingSig, potentialNewSig *packet.Signature) bool { - if potentialNewSig == nil { - return false - } - - if existingSig == nil { - return true - } - - if existingSig.SigType == packet.SigTypeSubkeyRevocation { - return false // never override a revocation signature +// SerializePrivate serializes an Entity, including private key material, but +// excluding signatures from other entities, to the given Writer. +// Identities and subkeys are re-signed in case they changed since NewEntity. +// If config is nil, sensible defaults will be used. +func (e *Entity) SerializePrivate(w io.Writer, config *packet.Config) (err error) { + if e.PrivateKey.Dummy() { + return errors.ErrDummyPrivateKey("dummy private key cannot re-sign identities") } - - return potentialNewSig.CreationTime.After(existingSig.CreationTime) + return e.serializePrivate(w, config, true) } -const defaultRSAKeyBits = 2048 - -// NewEntity returns an Entity that contains a fresh RSA/RSA keypair with a -// single identity composed of the given full name, comment and email, any of -// which may be empty but must not contain any of "()<>\x00". +// SerializePrivateWithoutSigning serializes an Entity, including private key +// material, but excluding signatures from other entities, to the given Writer.
+// Self-signatures of identities and subkeys are not re-signed. This is useful +// when serializing GNU dummy keys, among other things. // If config is nil, sensible defaults will be used. -func NewEntity(name, comment, email string, config *packet.Config) (*Entity, error) { - creationTime := config.Now() - - bits := defaultRSAKeyBits - if config != nil && config.RSABits != 0 { - bits = config.RSABits - } - - uid := packet.NewUserId(name, comment, email) - if uid == nil { - return nil, errors.InvalidArgumentError("user id field contained invalid characters") - } - signingPriv, err := rsa.GenerateKey(config.Random(), bits) - if err != nil { - return nil, err - } - encryptingPriv, err := rsa.GenerateKey(config.Random(), bits) - if err != nil { - return nil, err - } - - e := &Entity{ - PrimaryKey: packet.NewRSAPublicKey(creationTime, &signingPriv.PublicKey), - PrivateKey: packet.NewRSAPrivateKey(creationTime, signingPriv), - Identities: make(map[string]*Identity), - } - isPrimaryId := true - e.Identities[uid.Id] = &Identity{ - Name: uid.Id, - UserId: uid, - SelfSignature: &packet.Signature{ - CreationTime: creationTime, - SigType: packet.SigTypePositiveCert, - PubKeyAlgo: packet.PubKeyAlgoRSA, - Hash: config.Hash(), - IsPrimaryId: &isPrimaryId, - FlagsValid: true, - FlagSign: true, - FlagCertify: true, - IssuerKeyId: &e.PrimaryKey.KeyId, - }, - } - err = e.Identities[uid.Id].SelfSignature.SignUserId(uid.Id, e.PrimaryKey, e.PrivateKey, config) - if err != nil { - return nil, err - } - - // If the user passes in a DefaultHash via packet.Config, - // set the PreferredHash for the SelfSignature. - if config != nil && config.DefaultHash != 0 { - e.Identities[uid.Id].SelfSignature.PreferredHash = []uint8{hashToHashId(config.DefaultHash)} - } - - // Likewise for DefaultCipher. 
- if config != nil && config.DefaultCipher != 0 { - e.Identities[uid.Id].SelfSignature.PreferredSymmetric = []uint8{uint8(config.DefaultCipher)} - } - - e.Subkeys = make([]Subkey, 1) - e.Subkeys[0] = Subkey{ - PublicKey: packet.NewRSAPublicKey(creationTime, &encryptingPriv.PublicKey), - PrivateKey: packet.NewRSAPrivateKey(creationTime, encryptingPriv), - Sig: &packet.Signature{ - CreationTime: creationTime, - SigType: packet.SigTypeSubkeyBinding, - PubKeyAlgo: packet.PubKeyAlgoRSA, - Hash: config.Hash(), - FlagsValid: true, - FlagEncryptStorage: true, - FlagEncryptCommunications: true, - IssuerKeyId: &e.PrimaryKey.KeyId, - }, - } - e.Subkeys[0].PublicKey.IsSubkey = true - e.Subkeys[0].PrivateKey.IsSubkey = true - err = e.Subkeys[0].Sig.SignKey(e.Subkeys[0].PublicKey, e.PrivateKey, config) - if err != nil { - return nil, err - } - return e, nil +func (e *Entity) SerializePrivateWithoutSigning(w io.Writer, config *packet.Config) (err error) { + return e.serializePrivate(w, config, false) } -// SerializePrivate serializes an Entity, including private key material, but -// excluding signatures from other entities, to the given Writer. -// Identities and subkeys are re-signed in case they changed since NewEntry. -// If config is nil, sensible defaults will be used. 
-func (e *Entity) SerializePrivate(w io.Writer, config *packet.Config) (err error) { +func (e *Entity) serializePrivate(w io.Writer, config *packet.Config, reSign bool) (err error) { + if e.PrivateKey == nil { + return goerrors.New("openpgp: private key is missing") + } err = e.PrivateKey.Serialize(w) if err != nil { return } + for _, revocation := range e.Revocations { + err := revocation.Serialize(w) + if err != nil { + return err + } + } for _, ident := range e.Identities { err = ident.UserId.Serialize(w) if err != nil { return } - err = ident.SelfSignature.SignUserId(ident.UserId.Id, e.PrimaryKey, e.PrivateKey, config) - if err != nil { - return + if reSign { + if ident.SelfSignature == nil { + return goerrors.New("openpgp: can't re-sign identity without valid self-signature") + } + err = ident.SelfSignature.SignUserId(ident.UserId.Id, e.PrimaryKey, e.PrivateKey, config) + if err != nil { + return + } } - err = ident.SelfSignature.Serialize(w) - if err != nil { - return + for _, sig := range ident.Signatures { + err = sig.Serialize(w) + if err != nil { + return err + } } } for _, subkey := range e.Subkeys { @@ -613,9 +660,24 @@ func (e *Entity) SerializePrivate(w io.Writer, config *packet.Config) (err error if err != nil { return } - err = subkey.Sig.SignKey(subkey.PublicKey, e.PrivateKey, config) - if err != nil { - return + if reSign { + err = subkey.Sig.SignKey(subkey.PublicKey, e.PrivateKey, config) + if err != nil { + return + } + if subkey.Sig.EmbeddedSignature != nil { + err = subkey.Sig.EmbeddedSignature.CrossSignKey(subkey.PublicKey, e.PrimaryKey, + subkey.PrivateKey, config) + if err != nil { + return + } + } + } + for _, revocation := range subkey.Revocations { + err := revocation.Serialize(w) + if err != nil { + return err + } } err = subkey.Sig.Serialize(w) if err != nil { @@ -632,12 +694,14 @@ func (e *Entity) Serialize(w io.Writer) error { if err != nil { return err } - for _, ident := range e.Identities { - err = ident.UserId.Serialize(w) + for 
_, revocation := range e.Revocations { + err := revocation.Serialize(w) if err != nil { return err } - err = ident.SelfSignature.Serialize(w) + } + for _, ident := range e.Identities { + err = ident.UserId.Serialize(w) if err != nil { return err } @@ -653,6 +717,12 @@ func (e *Entity) Serialize(w io.Writer) error { if err != nil { return err } + for _, revocation := range subkey.Revocations { + err := revocation.Serialize(w) + if err != nil { + return err + } + } err = subkey.Sig.Serialize(w) if err != nil { return err @@ -667,27 +737,68 @@ func (e *Entity) Serialize(w io.Writer) error { // necessary. // If config is nil, sensible defaults will be used. func (e *Entity) SignIdentity(identity string, signer *Entity, config *packet.Config) error { - if signer.PrivateKey == nil { - return errors.InvalidArgumentError("signing Entity must have a private key") + certificationKey, ok := signer.CertificationKey(config.Now()) + if !ok { + return errors.InvalidArgumentError("no valid certification key found") } - if signer.PrivateKey.Encrypted { + + if certificationKey.PrivateKey.Encrypted { return errors.InvalidArgumentError("signing Entity's private key must be decrypted") } + ident, ok := e.Identities[identity] if !ok { return errors.InvalidArgumentError("given identity string not found in Entity") } - sig := &packet.Signature{ - SigType: packet.SigTypeGenericCert, - PubKeyAlgo: signer.PrivateKey.PubKeyAlgo, - Hash: config.Hash(), - CreationTime: config.Now(), - IssuerKeyId: &signer.PrivateKey.KeyId, + sig := createSignaturePacket(certificationKey.PublicKey, packet.SigTypeGenericCert, config) + + signingUserID := config.SigningUserId() + if signingUserID != "" { + if _, ok := signer.Identities[signingUserID]; !ok { + return errors.InvalidArgumentError("signer identity string not found in signer Entity") + } + sig.SignerUserId = &signingUserID } - if err := sig.SignUserId(identity, e.PrimaryKey, signer.PrivateKey, config); err != nil { + + if err := 
sig.SignUserId(identity, e.PrimaryKey, certificationKey.PrivateKey, config); err != nil { return err } ident.Signatures = append(ident.Signatures, sig) return nil } + +// RevokeKey generates a key revocation signature (packet.SigTypeKeyRevocation) with the +// specified reason code and text (RFC4880 section-5.2.3.23). +// If config is nil, sensible defaults will be used. +func (e *Entity) RevokeKey(reason packet.ReasonForRevocation, reasonText string, config *packet.Config) error { + revSig := createSignaturePacket(e.PrimaryKey, packet.SigTypeKeyRevocation, config) + revSig.RevocationReason = &reason + revSig.RevocationReasonText = reasonText + + if err := revSig.RevokeKey(e.PrimaryKey, e.PrivateKey, config); err != nil { + return err + } + e.Revocations = append(e.Revocations, revSig) + return nil +} + +// RevokeSubkey generates a subkey revocation signature (packet.SigTypeSubkeyRevocation) for +// a subkey with the specified reason code and text (RFC4880 section-5.2.3.23). +// If config is nil, sensible defaults will be used. 
+func (e *Entity) RevokeSubkey(sk *Subkey, reason packet.ReasonForRevocation, reasonText string, config *packet.Config) error { + if err := e.PrimaryKey.VerifyKeySignature(sk.PublicKey, sk.Sig); err != nil { + return errors.InvalidArgumentError("given subkey is not associated with this key") + } + + revSig := createSignaturePacket(e.PrimaryKey, packet.SigTypeSubkeyRevocation, config) + revSig.RevocationReason = &reason + revSig.RevocationReasonText = reasonText + + if err := revSig.RevokeSubkey(sk.PublicKey, e.PrivateKey, config); err != nil { + return err + } + + sk.Revocations = append(sk.Revocations, revSig) + return nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/keys_test_data.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/keys_test_data.go new file mode 100644 index 00000000000..108fd096f3c --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/keys_test_data.go @@ -0,0 +1,538 @@ +package openpgp + +const expiringKeyHex = 
"c6c04d0451d0c680010800abbb021fd03ffc4e96618901180c3fdcb060ee69eeead97b91256d11420d80b5f1b51930248044130bd300605cf8a05b7a40d3d8cfb0a910be2e3db50dcd50a9c54064c2a5550801daa834ff4480b33d3d3ca495ff8a4e84a886977d17d998f881241a874083d8b995beab555b6d22b8a4817ab17ac3e7304f7d4d2c05c495fb2218348d3bc13651db1d92732e368a9dd7dcefa6eddff30b94706a9aaee47e9d39321460b740c59c6fc3c2fd8ab6c0fb868cb87c0051f0321301fe0f0e1820b15e7fb7063395769b525005c7e30a7ce85984f5cac00504e7b4fdc45d74958de8388436fd5c7ba9ea121f1c851b5911dd1b47a14d81a09e92ef37721e2325b6790011010001cd00c2c07b041001080025050251d0c680050900278d00060b09070803020415080a0203160201021901021b03021e01000a0910e7b484133a890a35ae4b0800a1beb82e7f28eaf5273d6af9d3391314f6280b2b624eaca2851f89a9ebcaf80ac589ebd509f168bc4322106ca2e2ce77a76e071a3c7444787d65216b5f05e82c77928860b92aace3b7d0327db59492f422eb9dfab7249266d37429870b091a98aba8724c2259ebf8f85093f21255eafa75aa841e31d94f2ac891b9755fed455e539044ee69fc47950b80e003fc9f298d695660f28329eaa38037c367efde1727458e514faf990d439a21461b719edaddf9296d3d0647b43ca56cb8dbf63b4fcf8b9968e7928c463470fab3b98e44d0d95645062f94b2d04fe56bd52822b71934db8ce845622c40b92fcbe765a142e7f38b61a6aa9606c8e8858dcd3b6eb1894acec04d0451d1f06b01080088bea67444e1789390e7c0335c86775502d58ec783d99c8ef4e06de235ed3dd4b0467f6f358d818c7d8989d43ec6d69fcbc8c32632d5a1b605e3fa8e41d695fcdcaa535936cd0157f9040dce362519803b908eafe838bb13216c885c6f93e9e8d5745607f0d062322085d6bdc760969149a8ff8dd9f5c18d9bfe2e6f63a06e17694cf1f67587c6fb70e9aebf90ffc528ca3b615ac7c9d4a21ea4f7c06f2e98fbbd90a859b8608bf9ea638e3a54289ce44c283110d0c45fa458de6251cd6e7baf71f80f12c8978340490fd90c92b81736ae902ed958e478dceae2835953d189c45d182aff02ea2be61b81d8e94430f041d638647b43e2fcb45fd512fbf5068b810011010001c2c06504180108000f050251d1f06b050900081095021b0c000a0910e7b484133a890a35e63407fe2ec88d6d1e6c9ce7553ece0cb2524747217bad29f251d33df84599ffcc900141a355abd62126800744068a5e05dc167056aa9205273dc7765a2ed49db15c2a83b8d6e6429c902136f1e12229086c1c10c0053242c2a4ae1930db58163387a48
cad64607ff2153c320e42843dec28e3fce90e7399d63ac0affa2fee1f0adc0953c89eb3f46ef1d6c04328ed13b491669d5120a3782e3ffb7c69575fb77eebd108794f4dda9d34be2bae57e8e59ec8ebfda2f6f06104b2321be408ea146e2db482b00c5055c8618de36ac9716f80da2617e225556d0fce61b01c8cea2d1e0ea982c31711060ca370f2739366e1e708f38405d784b49d16a26cf62d152eae734327cec04d0451d1f07b010800d5af91c5e7c2fd8951c8d254eab0c97cdcb66822f868b79b78c366255059a68fd74ebca9adb9b970cd9e586690e6e0756705432306878c897b10a4b4ca0005966f99ac8fa4e6f9caf54bf8e53844544beee9872a7ac64c119cf1393d96e674254b661f61ee975633d0e8a8672531edb6bb8e211204e7754a9efa802342118eee850beea742bac95a3f706cc2024cf6037a308bb68162b2f53b9a6346a96e6d31871a2456186e24a1c7a82b82ac04afdfd57cd7fb9ba77a9c760d40b76a170f7be525e5fb6a9848cc726e806187710d9b190387df28700f321f988a392899f93815cc937f309129eb94d5299c5547cb2c085898e6639496e70d746c9d3fb9881d0011010001c2c06504180108000f050251d1f07b050900266305021b0c000a0910e7b484133a890a35bff207fd10dfe8c4a6ea1dd30568012b6fd6891a763c87ad0f7a1d112aad9e8e3239378a3b85588c235865bac2e614348cb4f216d7217f53b3ef48c192e0a4d31d64d7bfa5faccf21155965fa156e887056db644a05ad08a85cc6152d1377d9e37b46f4ff462bbe68ace2dc586ef90070314576c985d8037c2ba63f0a7dc17a62e15bd77e88bc61d9d00858979709f12304264a4cf4225c5cf86f12c8e19486cb9cdcc69f18f027e5f16f4ca8b50e28b3115eaff3a345acd21f624aef81f6ede515c1b55b26b84c1e32264754eab672d5489b287e7277ea855e0a5ff2aa9e8b8c76d579a964ec225255f4d57bf66639ccb34b64798846943e162a41096a7002ca21c7f56" +const subkeyUsageHex = 
"988d04533a52bc010400d26af43085558f65b9e7dbc90cb9238015259aed5e954637adcfa2181548b2d0b60c65f1f42ec5081cbf1bc0a8aa4900acfb77070837c58f26012fbce297d70afe96e759ad63531f0037538e70dbf8e384569b9720d99d8eb39d8d0a2947233ed242436cb6ac7dfe74123354b3d0119b5c235d3dd9c9d6c004f8ffaf67ad8583001101000188b7041f010200210502533b8552170c8001ce094aa433f7040bb2ddf0be3893cb843d0fe70c020700000a0910a42704b92866382aa98404009d63d916a27543da4221c60087c33f1c44bec9998c5438018ed370cca4962876c748e94b73eb39c58eb698063f3fd6346d58dd2a11c0247934c4a9d71f24754f7468f96fb24c3e791dd2392b62f626148ad724189498cbf993db2df7c0cdc2d677c35da0f16cb16c9ce7c33b4de65a4a91b1d21a130ae9cc26067718910ef8e2b417556d627261203c756d627261407379642e65642e61753e88b80413010200220502533a52bc021b03060b090807030206150802090a0b0416020301021e01021780000a0910a42704b92866382a47840400c0c2bd04f5fca586de408b395b3c280a278259c93eaaa8b79a53b97003f8ed502a8a00446dd9947fb462677e4fcac0dac2f0701847d15130aadb6cd9e0705ea0cf5f92f129136c7be21a718d46c8e641eb7f044f2adae573e11ae423a0a9ca51324f03a8a2f34b91fa40c3cc764bee4dccadedb54c768ba0469b683ea53f1c29b88d04533a52bc01040099c92a5d6f8b744224da27bc2369127c35269b58bec179de6bbc038f749344222f85a31933224f26b70243c4e4b2d242f0c4777eaef7b5502f9dad6d8bf3aaeb471210674b74de2d7078af497d55f5cdad97c7bedfbc1b41e8065a97c9c3d344b21fc81d27723af8e374bc595da26ea242dccb6ae497be26eea57e563ed517e90011010001889f0418010200090502533a52bc021b0c000a0910a42704b92866382afa1403ff70284c2de8a043ff51d8d29772602fa98009b7861c540535f874f2c230af8caf5638151a636b21f8255003997ccd29747fdd06777bb24f9593bd7d98a3e887689bf902f999915fcc94625ae487e5d13e6616f89090ebc4fdc7eb5cad8943e4056995bb61c6af37f8043016876a958ec7ebf39c43d20d53b7f546cfa83e8d2604b88d04533b8283010400c0b529316dbdf58b4c54461e7e669dc11c09eb7f73819f178ccd4177b9182b91d138605fcf1e463262fabefa73f94a52b5e15d1904635541c7ea540f07050ce0fb51b73e6f88644cec86e91107c957a114f69554548a85295d2b70bd0b203992f76eb5d493d86d9eabcaa7ef3fc7db7e458438db3fcdb0ca1cc97c638439a9170011010001889f0418010200090502533b828
3021b0c000a0910a42704b92866382adc6d0400cfff6258485a21675adb7a811c3e19ebca18851533f75a7ba317950b9997fda8d1a4c8c76505c08c04b6c2cc31dc704d33da36a21273f2b388a1a706f7c3378b66d887197a525936ed9a69acb57fe7f718133da85ec742001c5d1864e9c6c8ea1b94f1c3759cebfd93b18606066c063a63be86085b7e37bdbc65f9a915bf084bb901a204533b85cd110400aed3d2c52af2b38b5b67904b0ef73d6dd7aef86adb770e2b153cd22489654dcc91730892087bb9856ae2d9f7ed1eb48f214243fe86bfe87b349ebd7c30e630e49c07b21fdabf78b7a95c8b7f969e97e3d33f2e074c63552ba64a2ded7badc05ce0ea2be6d53485f6900c7860c7aa76560376ce963d7271b9b54638a4028b573f00a0d8854bfcdb04986141568046202192263b9b67350400aaa1049dbc7943141ef590a70dcb028d730371d92ea4863de715f7f0f16d168bd3dc266c2450457d46dcbbf0b071547e5fbee7700a820c3750b236335d8d5848adb3c0da010e998908dfd93d961480084f3aea20b247034f8988eccb5546efaa35a92d0451df3aaf1aee5aa36a4c4d462c760ecd9cebcabfbe1412b1f21450f203fd126687cd486496e971a87fd9e1a8a765fe654baa219a6871ab97768596ab05c26c1aeea8f1a2c72395a58dbc12ef9640d2b95784e974a4d2d5a9b17c25fedacfe551bda52602de8f6d2e48443f5dd1a2a2a8e6a5e70ecdb88cd6e766ad9745c7ee91d78cc55c3d06536b49c3fee6c3d0b6ff0fb2bf13a314f57c953b8f4d93bf88e70418010200090502533b85cd021b0200520910a42704b92866382a47200419110200060502533b85cd000a091042ce2c64bc0ba99214b2009e26b26852c8b13b10c35768e40e78fbbb48bd084100a0c79d9ea0844fa5853dd3c85ff3ecae6f2c9dd6c557aa04008bbbc964cd65b9b8299d4ebf31f41cc7264b8cf33a00e82c5af022331fac79efc9563a822497ba012953cefe2629f1242fcdcb911dbb2315985bab060bfd58261ace3c654bdbbe2e8ed27a46e836490145c86dc7bae15c011f7e1ffc33730109b9338cd9f483e7cef3d2f396aab5bd80efb6646d7e778270ee99d934d187dd98" +const revokedKeyHex = 
"988d045331ce82010400c4fdf7b40a5477f206e6ee278eaef888ca73bf9128a9eef9f2f1ddb8b7b71a4c07cfa241f028a04edb405e4d916c61d6beabc333813dc7b484d2b3c52ee233c6a79b1eea4e9cc51596ba9cd5ac5aeb9df62d86ea051055b79d03f8a4fa9f38386f5bd17529138f3325d46801514ea9047977e0829ed728e68636802796801be10011010001889f04200102000905025331d0e3021d03000a0910a401d9f09a34f7c042aa040086631196405b7e6af71026b88e98012eab44aa9849f6ef3fa930c7c9f23deaedba9db1538830f8652fb7648ec3fcade8dbcbf9eaf428e83c6cbcc272201bfe2fbb90d41963397a7c0637a1a9d9448ce695d9790db2dc95433ad7be19eb3de72dacf1d6db82c3644c13eae2a3d072b99bb341debba012c5ce4006a7d34a1f4b94b444526567205265766f6b657220283c52656727732022424d204261726973746122204b657920262530305c303e5c29203c72656740626d626172697374612e636f2e61753e88b704130102002205025331ce82021b03060b090807030206150802090a0b0416020301021e01021780000a0910a401d9f09a34f7c0019c03f75edfbeb6a73e7225ad3cc52724e2872e04260d7daf0d693c170d8c4b243b8767bc7785763533febc62ec2600c30603c433c095453ede59ff2fcabeb84ce32e0ed9d5cf15ffcbc816202b64370d4d77c1e9077d74e94a16fb4fa2e5bec23a56d7a73cf275f91691ae1801a976fcde09e981a2f6327ac27ea1fecf3185df0d56889c04100102000605025331cfb5000a0910fe9645554e8266b64b4303fc084075396674fb6f778d302ac07cef6bc0b5d07b66b2004c44aef711cbac79617ef06d836b4957522d8772dd94bf41a2f4ac8b1ee6d70c57503f837445a74765a076d07b829b8111fc2a918423ddb817ead7ca2a613ef0bfb9c6b3562aec6c3cf3c75ef3031d81d95f6563e4cdcc9960bcb386c5d757b104fcca5fe11fc709df884604101102000605025331cfe7000a09107b15a67f0b3ddc0317f6009e360beea58f29c1d963a22b962b80788c3fa6c84e009d148cfde6b351469b8eae91187eff07ad9d08fcaab88d045331ce820104009f25e20a42b904f3fa555530fe5c46737cf7bd076c35a2a0d22b11f7e0b61a69320b768f4a80fe13980ce380d1cfc4a0cd8fbe2d2e2ef85416668b77208baa65bf973fe8e500e78cc310d7c8705cdb34328bf80e24f0385fce5845c33bc7943cf6b11b02348a23da0bf6428e57c05135f2dc6bd7c1ce325d666d5a5fd2fd5e410011010001889f04180102000905025331ce82021b0c000a0910a401d9f09a34f7c0418003fe34feafcbeaef348a800a0d908a7a6809cc7304017d820f70f0474d5e23cb17e38b67d
c6dca282c6ca00961f4ec9edf2738d0f087b1d81e4871ef08e1798010863afb4eac4c44a376cb343be929c5be66a78cfd4456ae9ec6a99d97f4e1c3ff3583351db2147a65c0acef5c003fb544ab3a2e2dc4d43646f58b811a6c3a369d1f" +const revokedSubkeyHex = "988d04533121f6010400aefc803a3e4bb1a61c86e8a86d2726c6a43e0079e9f2713f1fa017e9854c83877f4aced8e331d675c67ea83ddab80aacbfa0b9040bb12d96f5a3d6be09455e2a76546cbd21677537db941cab710216b6d24ec277ee0bd65b910f416737ed120f6b93a9d3b306245c8cfd8394606fdb462e5cf43c551438d2864506c63367fc890011010001b41d416c696365203c616c69636540626d626172697374612e636f2e61753e88bb041301020025021b03060b090807030206150802090a0b0416020301021e01021780050253312798021901000a09104ef7e4beccde97f015a803ff5448437780f63263b0df8442a995e7f76c221351a51edd06f2063d8166cf3157aada4923dfc44aa0f2a6a4da5cf83b7fe722ba8ab416c976e77c6b5682e7f1069026673bd0de56ba06fd5d7a9f177607f277d9b55ff940a638c3e68525c67517e2b3d976899b93ca267f705b3e5efad7d61220e96b618a4497eab8d04403d23f8846041011020006050253312910000a09107b15a67f0b3ddc03d96e009f50b6365d86c4be5d5e9d0ea42d5e56f5794c617700a0ab274e19c2827780016d23417ce89e0a2c0d987d889c04100102000605025331cf7a000a0910a401d9f09a34f7c0ee970400aca292f213041c9f3b3fc49148cbda9d84afee6183c8dd6c5ff2600b29482db5fecd4303797be1ee6d544a20a858080fec43412061c9a71fae4039fd58013b4ae341273e6c66ad4c7cdd9e68245bedb260562e7b166f2461a1032f2b38c0e0e5715fb3d1656979e052b55ca827a76f872b78a9fdae64bc298170bfcebedc1271b41a416c696365203c616c696365407379646973702e6f722e61753e88b804130102002205025331278b021b03060b090807030206150802090a0b0416020301021e01021780000a09104ef7e4beccde97f06a7003fa03c3af68d272ebc1fa08aa72a03b02189c26496a2833d90450801c4e42c5b5f51ad96ce2d2c9cef4b7c02a6a2fcf1412d6a2d486098eb762f5010a201819c17fd2888aec8eda20c65a3b75744de7ee5cc8ac7bfc470cbe3cb982720405a27a3c6a8c229cfe36905f881b02ed5680f6a8f05866efb9d6c5844897e631deb949ca8846041011020006050253312910000a09107b15a67f0b3ddc0347bc009f7fa35db59147469eb6f2c5aaf6428accb138b22800a0caa2f5f0874bacc5909c652a57a31beda65eddd5889c04100102000605025331cf
7a000a0910a401d9f09a34f7c0316403ff46f2a5c101256627f16384d34a38fb47a6c88ba60506843e532d91614339fccae5f884a5741e7582ffaf292ba38ee10a270a05f139bde3814b6a077e8cd2db0f105ebea2a83af70d385f13b507fac2ad93ff79d84950328bb86f3074745a8b7f9b64990fb142e2a12976e27e8d09a28dc5621f957ac49091116da410ac3cbde1b88d04533121f6010400cbd785b56905e4192e2fb62a720727d43c4fa487821203cf72138b884b78b701093243e1d8c92a0248a6c0203a5a88693da34af357499abacaf4b3309c640797d03093870a323b4b6f37865f6eaa2838148a67df4735d43a90ca87942554cdf1c4a751b1e75f9fd4ce4e97e278d6c1c7ed59d33441df7d084f3f02beb68896c70011010001889f0418010200090502533121f6021b0c000a09104ef7e4beccde97f0b98b03fc0a5ccf6a372995835a2f5da33b282a7d612c0ab2a97f59cf9fff73e9110981aac2858c41399afa29624a7fd8a0add11654e3d882c0fd199e161bdad65e5e2548f7b68a437ea64293db1246e3011cbb94dc1bcdeaf0f2539bd88ff16d95547144d97cead6a8c5927660a91e6db0d16eb36b7b49a3525b54d1644e65599b032b7eb901a204533127a0110400bd3edaa09eff9809c4edc2c2a0ebe52e53c50a19c1e49ab78e6167bf61473bb08f2050d78a5cbbc6ed66aff7b42cd503f16b4a0b99fa1609681fca9b7ce2bbb1a5b3864d6cdda4d7ef7849d156d534dea30fb0efb9e4cf8959a2b2ce623905882d5430b995a15c3b9fe92906086788b891002924f94abe139b42cbbfaaabe42f00a0b65dc1a1ad27d798adbcb5b5ad02d2688c89477b03ff4eebb6f7b15a73b96a96bed201c0e5e4ea27e4c6e2dd1005b94d4b90137a5b1cf5e01c6226c070c4cc999938101578877ee76d296b9aab8246d57049caacf489e80a3f40589cade790a020b1ac146d6f7a6241184b8c7fcde680eae3188f5dcbe846d7f7bdad34f6fcfca08413e19c1d5df83fc7c7c627d493492e009c2f52a80400a2fe82de87136fd2e8845888c4431b032ba29d9a29a804277e31002a8201fb8591a3e55c7a0d0881496caf8b9fb07544a5a4879291d0dc026a0ea9e5bd88eb4aa4947bbd694b25012e208a250d65ddc6f1eea59d3aed3b4ec15fcab85e2afaa23a40ab1ef9ce3e11e1bc1c34a0e758e7aa64deb8739276df0af7d4121f834a9b88e70418010200090502533127a0021b02005209104ef7e4beccde97f047200419110200060502533127a0000a0910dbce4ee19529437fe045009c0b32f5ead48ee8a7e98fac0dea3d3e6c0e2c552500a0ad71fadc5007cfaf842d9b7db3335a8cdad15d3d1a6404009b08e2c68fe8f3b45c1bb72a4b3278cdf3012aa0f229883ad7
4aa1f6000bb90b18301b2f85372ca5d6b9bf478d235b733b1b197d19ccca48e9daf8e890cb64546b4ce1b178faccfff07003c172a2d4f5ebaba9f57153955f3f61a9b80a4f5cb959908f8b211b03b7026a8a82fc612bfedd3794969bcf458c4ce92be215a1176ab88d045331d144010400a5063000c5aaf34953c1aa3bfc95045b3aab9882b9a8027fecfe2142dc6b47ba8aca667399990244d513dd0504716908c17d92c65e74219e004f7b83fc125e575dd58efec3ab6dd22e3580106998523dea42ec75bf9aa111734c82df54630bebdff20fe981cfc36c76f865eb1c2fb62c9e85bc3a6e5015a361a2eb1c8431578d0011010001889f04280102000905025331d433021d03000a09104ef7e4beccde97f02e5503ff5e0630d1b65291f4882b6d40a29da4616bb5088717d469fbcc3648b8276de04a04988b1f1b9f3e18f52265c1f8b6c85861691c1a6b8a3a25a1809a0b32ad330aec5667cb4262f4450649184e8113849b05e5ad06a316ea80c001e8e71838190339a6e48bbde30647bcf245134b9a97fa875c1d83a9862cae87ffd7e2c4ce3a1b89013d04180102000905025331d144021b0200a809104ef7e4beccde97f09d2004190102000605025331d144000a0910677815e371c2fd23522203fe22ab62b8e7a151383cea3edd3a12995693911426f8ccf125e1f6426388c0010f88d9ca7da2224aee8d1c12135998640c5e1813d55a93df472faae75bef858457248db41b4505827590aeccf6f9eb646da7f980655dd3050c6897feddddaca90676dee856d66db8923477d251712bb9b3186b4d0114daf7d6b59272b53218dd1da94a03ff64006fcbe71211e5daecd9961fba66cdb6de3f914882c58ba5beddeba7dcb950c1156d7fba18c19ea880dccc800eae335deec34e3b84ac75ffa24864f782f87815cda1c0f634b3dd2fa67cea30811d21723d21d9551fa12ccbcfa62b6d3a15d01307b99925707992556d50065505b090aadb8579083a20fe65bd2a270da9b011" + +const missingCrossSignatureKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- +Charset: UTF-8 + +mQENBFMYynYBCACVOZ3/e8Bm2b9KH9QyIlHGo/i1bnkpqsgXj8tpJ2MIUOnXMMAY +ztW7kKFLCmgVdLIC0vSoLA4yhaLcMojznh/2CcUglZeb6Ao8Gtelr//Rd5DRfPpG +zqcfUo+m+eO1co2Orabw0tZDfGpg5p3AYl0hmxhUyYSc/xUq93xL1UJzBFgYXY54 +QsM8dgeQgFseSk/YvdP5SMx1ev+eraUyiiUtWzWrWC1TdyRa5p4UZg6Rkoppf+WJ +QrW6BWrhAtqATHc8ozV7uJjeONjUEq24roRc/OFZdmQQGK6yrzKnnbA6MdHhqpdo +9kWDcXYb7pSE63Lc+OBa5X2GUVvXJLS/3nrtABEBAAG0F2ludmFsaWQtc2lnbmlu 
+Zy1zdWJrZXlziQEoBBMBAgASBQJTnKB5AhsBAgsHAhUIAh4BAAoJEO3UDQUIHpI/ +dN4H/idX4FQ1LIZCnpHS/oxoWQWfpRgdKAEM0qCqjMgiipJeEwSQbqjTCynuh5/R +JlODDz85ABR06aoF4l5ebGLQWFCYifPnJZ/Yf5OYcMGtb7dIbqxWVFL9iLMO/oDL +ioI3dotjPui5e+2hI9pVH1UHB/bZ/GvMGo6Zg0XxLPolKQODMVjpjLAQ0YJ3spew +RAmOGre6tIvbDsMBnm8qREt7a07cBJ6XK7xjxYaZHQBiHVxyEWDa6gyANONx8duW +/fhQ/zDTnyVM/ik6VO0Ty9BhPpcEYLFwh5c1ilFari1ta3e6qKo6ZGa9YMk/REhu +yBHd9nTkI+0CiQUmbckUiVjDKKe5AQ0EUxjKdgEIAJcXQeP+NmuciE99YcJoffxv +2gVLU4ZXBNHEaP0mgaJ1+tmMD089vUQAcyGRvw8jfsNsVZQIOAuRxY94aHQhIRHR +bUzBN28ofo/AJJtfx62C15xt6fDKRV6HXYqAiygrHIpEoRLyiN69iScUsjIJeyFL +C8wa72e8pSL6dkHoaV1N9ZH/xmrJ+k0vsgkQaAh9CzYufncDxcwkoP+aOlGtX1gP +WwWoIbz0JwLEMPHBWvDDXQcQPQTYQyj+LGC9U6f9VZHN25E94subM1MjuT9OhN9Y +MLfWaaIc5WyhLFyQKW2Upofn9wSFi8ubyBnv640Dfd0rVmaWv7LNTZpoZ/GbJAMA +EQEAAYkBHwQYAQIACQUCU5ygeQIbAgAKCRDt1A0FCB6SP0zCB/sEzaVR38vpx+OQ +MMynCBJrakiqDmUZv9xtplY7zsHSQjpd6xGflbU2n+iX99Q+nav0ETQZifNUEd4N +1ljDGQejcTyKD6Pkg6wBL3x9/RJye7Zszazm4+toJXZ8xJ3800+BtaPoI39akYJm ++ijzbskvN0v/j5GOFJwQO0pPRAFtdHqRs9Kf4YanxhedB4dIUblzlIJuKsxFit6N +lgGRblagG3Vv2eBszbxzPbJjHCgVLR3RmrVezKOsZjr/2i7X+xLWIR0uD3IN1qOW +CXQxLBizEEmSNVNxsp7KPGTLnqO3bPtqFirxS9PJLIMPTPLNBY7ZYuPNTMqVIUWF +4artDmrG +=7FfJ +-----END PGP PUBLIC KEY BLOCK-----` + +const invalidCrossSignatureKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +mQENBFMYynYBCACVOZ3/e8Bm2b9KH9QyIlHGo/i1bnkpqsgXj8tpJ2MIUOnXMMAY +ztW7kKFLCmgVdLIC0vSoLA4yhaLcMojznh/2CcUglZeb6Ao8Gtelr//Rd5DRfPpG +zqcfUo+m+eO1co2Orabw0tZDfGpg5p3AYl0hmxhUyYSc/xUq93xL1UJzBFgYXY54 +QsM8dgeQgFseSk/YvdP5SMx1ev+eraUyiiUtWzWrWC1TdyRa5p4UZg6Rkoppf+WJ +QrW6BWrhAtqATHc8ozV7uJjeONjUEq24roRc/OFZdmQQGK6yrzKnnbA6MdHhqpdo +9kWDcXYb7pSE63Lc+OBa5X2GUVvXJLS/3nrtABEBAAG0F2ludmFsaWQtc2lnbmlu +Zy1zdWJrZXlziQEoBBMBAgASBQJTnKB5AhsBAgsHAhUIAh4BAAoJEO3UDQUIHpI/ +dN4H/idX4FQ1LIZCnpHS/oxoWQWfpRgdKAEM0qCqjMgiipJeEwSQbqjTCynuh5/R +JlODDz85ABR06aoF4l5ebGLQWFCYifPnJZ/Yf5OYcMGtb7dIbqxWVFL9iLMO/oDL +ioI3dotjPui5e+2hI9pVH1UHB/bZ/GvMGo6Zg0XxLPolKQODMVjpjLAQ0YJ3spew 
+RAmOGre6tIvbDsMBnm8qREt7a07cBJ6XK7xjxYaZHQBiHVxyEWDa6gyANONx8duW +/fhQ/zDTnyVM/ik6VO0Ty9BhPpcEYLFwh5c1ilFari1ta3e6qKo6ZGa9YMk/REhu +yBHd9nTkI+0CiQUmbckUiVjDKKe5AQ0EUxjKdgEIAIINDqlj7X6jYKc6DjwrOkjQ +UIRWbQQar0LwmNilehmt70g5DCL1SYm9q4LcgJJ2Nhxj0/5qqsYib50OSWMcKeEe +iRXpXzv1ObpcQtI5ithp0gR53YPXBib80t3bUzomQ5UyZqAAHzMp3BKC54/vUrSK +FeRaxDzNLrCeyI00+LHNUtwghAqHvdNcsIf8VRumK8oTm3RmDh0TyjASWYbrt9c8 +R1Um3zuoACOVy+mEIgIzsfHq0u7dwYwJB5+KeM7ZLx+HGIYdUYzHuUE1sLwVoELh ++SHIGHI1HDicOjzqgajShuIjj5hZTyQySVprrsLKiXS6NEwHAP20+XjayJ/R3tEA +EQEAAYkCPgQYAQIBKAUCU5ygeQIbAsBdIAQZAQIABgUCU5ygeQAKCRCpVlnFZmhO +52RJB/9uD1MSa0wjY6tHOIgquZcP3bHBvHmrHNMw9HR2wRCMO91ZkhrpdS3ZHtgb +u3/55etj0FdvDo1tb8P8FGSVtO5Vcwf5APM8sbbqoi8L951Q3i7qt847lfhu6sMl +w0LWFvPTOLHrliZHItPRjOltS1WAWfr2jUYhsU9ytaDAJmvf9DujxEOsN5G1YJep +54JCKVCkM/y585Zcnn+yxk/XwqoNQ0/iJUT9qRrZWvoeasxhl1PQcwihCwss44A+ +YXaAt3hbk+6LEQuZoYS73yR3WHj+42tfm7YxRGeubXfgCEz/brETEWXMh4pe0vCL +bfWrmfSPq2rDegYcAybxRQz0lF8PAAoJEO3UDQUIHpI/exkH/0vQfdHA8g/N4T6E +i6b1CUVBAkvtdJpCATZjWPhXmShOw62gkDw306vHPilL4SCvEEi4KzG72zkp6VsB +DSRcpxCwT4mHue+duiy53/aRMtSJ+vDfiV1Vhq+3sWAck/yUtfDU9/u4eFaiNok1 +8/Gd7reyuZt5CiJnpdPpjCwelK21l2w7sHAnJF55ITXdOxI8oG3BRKufz0z5lyDY +s2tXYmhhQIggdgelN8LbcMhWs/PBbtUr6uZlNJG2lW1yscD4aI529VjwJlCeo745 +U7pO4eF05VViUJ2mmfoivL3tkhoTUWhx8xs8xCUcCg8DoEoSIhxtOmoTPR22Z9BL +6LCg2mg= +=Dhm4 +-----END PGP PUBLIC KEY BLOCK-----` + +const goodCrossSignatureKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- +Version: GnuPG v1 + +mI0EVUqeVwEEAMufHRrMPWK3gyvi0O0tABCs/oON9zV9KDZlr1a1M91ShCSFwCPo +7r80PxdWVWcj0V5h50/CJYtpN3eE/mUIgW2z1uDYQF1OzrQ8ubrksfsJvpAhENom +lTQEppv9mV8qhcM278teb7TX0pgrUHLYF5CfPdp1L957JLLXoQR/lwLVABEBAAG0 +E2dvb2Qtc2lnbmluZy1zdWJrZXmIuAQTAQIAIgUCVUqeVwIbAwYLCQgHAwIGFQgC +CQoLBBYCAwECHgECF4AACgkQNRjL95IRWP69XQQAlH6+eyXJN4DZTLX78KGjHrsw +6FCvxxClEPtPUjcJy/1KCRQmtLAt9PbbA78dvgzjDeZMZqRAwdjyJhjyg/fkU2OH +7wq4ktjUu+dLcOBb+BFMEY+YjKZhf6EJuVfxoTVr5f82XNPbYHfTho9/OABKH6kv +X70PaKZhbwnwij8Nts65AaIEVUqftREEAJ3WxZfqAX0bTDbQPf2CMT2IVMGDfhK7 
+GyubOZgDFFjwUJQvHNvsrbeGLZ0xOBumLINyPO1amIfTgJNm1iiWFWfmnHReGcDl +y5mpYG60Mb79Whdcer7CMm3AqYh/dW4g6IB02NwZMKoUHo3PXmFLxMKXnWyJ0clw +R0LI/Qn509yXAKDh1SO20rqrBM+EAP2c5bfI98kyNwQAi3buu94qo3RR1ZbvfxgW +CKXDVm6N99jdZGNK7FbRifXqzJJDLcXZKLnstnC4Sd3uyfyf1uFhmDLIQRryn5m+ +LBYHfDBPN3kdm7bsZDDq9GbTHiFZUfm/tChVKXWxkhpAmHhU/tH6GGzNSMXuIWSO +aOz3Rqq0ED4NXyNKjdF9MiwD/i83S0ZBc0LmJYt4Z10jtH2B6tYdqnAK29uQaadx +yZCX2scE09UIm32/w7pV77CKr1Cp/4OzAXS1tmFzQ+bX7DR+Gl8t4wxr57VeEMvl +BGw4Vjh3X8//m3xynxycQU18Q1zJ6PkiMyPw2owZ/nss3hpSRKFJsxMLhW3fKmKr +Ey2KiOcEGAECAAkFAlVKn7UCGwIAUgkQNRjL95IRWP5HIAQZEQIABgUCVUqftQAK +CRD98VjDN10SqkWrAKDTpEY8D8HC02E/KVC5YUI01B30wgCgurpILm20kXEDCeHp +C5pygfXw1DJrhAP+NyPJ4um/bU1I+rXaHHJYroYJs8YSweiNcwiHDQn0Engh/mVZ +SqLHvbKh2dL/RXymC3+rjPvQf5cup9bPxNMa6WagdYBNAfzWGtkVISeaQW+cTEp/ +MtgVijRGXR/lGLGETPg2X3Afwn9N9bLMBkBprKgbBqU7lpaoPupxT61bL70= +=vtbN +-----END PGP PUBLIC KEY BLOCK-----` + +const revokedUserIDKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +mQENBFsgO5EBCADhREPmcjsPkXe1z7ctvyWL0S7oa9JaoGZ9oPDHFDlQxd0qlX2e +DZJZDg0qYvVixmaULIulApq1puEsaJCn3lHUbHlb4PYKwLEywYXM28JN91KtLsz/ +uaEX2KC5WqeP40utmzkNLq+oRX/xnRMgwbO7yUNVG2UlEa6eI+xOXO3YtLdmJMBW +ClQ066ZnOIzEo1JxnIwha1CDBMWLLfOLrg6l8InUqaXbtEBbnaIYO6fXVXELUjkx +nmk7t/QOk0tXCy8muH9UDqJkwDUESY2l79XwBAcx9riX8vY7vwC34pm22fAUVLCJ +x1SJx0J8bkeNp38jKM2Zd9SUQqSbfBopQ4pPABEBAAG0I0dvbGFuZyBHb3BoZXIg +PG5vLXJlcGx5QGdvbGFuZy5jb20+iQFUBBMBCgA+FiEE5Ik5JLcNx6l6rZfw1oFy +9I6cUoMFAlsgO5ECGwMFCQPCZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ +1oFy9I6cUoMIkwf8DNPeD23i4jRwd/pylbvxwZintZl1fSwTJW1xcOa1emXaEtX2 +depuqhP04fjlRQGfsYAQh7X9jOJxAHjTmhqFBi5sD7QvKU00cPFYbJ/JTx0B41bl +aXnSbGhRPh63QtEZL7ACAs+shwvvojJqysx7kyVRu0EW2wqjXdHwR/SJO6nhNBa2 +DXzSiOU/SUA42mmG+5kjF8Aabq9wPwT9wjraHShEweNerNMmOqJExBOy3yFeyDpa +XwEZFzBfOKoxFNkIaVf5GSdIUGhFECkGvBMB935khftmgR8APxdU4BE7XrXexFJU +8RCuPXonm4WQOwTWR0vQg64pb2WKAzZ8HhwTGbQiR29sYW5nIEdvcGhlciA8cmV2 +b2tlZEBnb2xhbmcuY29tPokBNgQwAQoAIBYhBOSJOSS3Dcepeq2X8NaBcvSOnFKD 
+BQJbIDv3Ah0AAAoJENaBcvSOnFKDfWMIAKhI/Tvu3h8fSUxp/gSAcduT6bC1JttG +0lYQ5ilKB/58lBUA5CO3ZrKDKlzW3M8VEcvohVaqeTMKeoQd5rCZq8KxHn/KvN6N +s85REfXfniCKfAbnGgVXX3kDmZ1g63pkxrFu0fDZjVDXC6vy+I0sGyI/Inro0Pzb +tvn0QCsxjapKK15BtmSrpgHgzVqVg0cUp8vqZeKFxarYbYB2idtGRci4b9tObOK0 +BSTVFy26+I/mrFGaPrySYiy2Kz5NMEcRhjmTxJ8jSwEr2O2sUR0yjbgUAXbTxDVE +/jg5fQZ1ACvBRQnB7LvMHcInbzjyeTM3FazkkSYQD6b97+dkWwb1iWG5AQ0EWyA7 +kQEIALkg04REDZo1JgdYV4x8HJKFS4xAYWbIva1ZPqvDNmZRUbQZR2+gpJGEwn7z +VofGvnOYiGW56AS5j31SFf5kro1+1bZQ5iOONBng08OOo58/l1hRseIIVGB5TGSa +PCdChKKHreJI6hS3mShxH6hdfFtiZuB45rwoaArMMsYcjaezLwKeLc396cpUwwcZ +snLUNd1Xu5EWEF2OdFkZ2a1qYdxBvAYdQf4+1Nr+NRIx1u1NS9c8jp3PuMOkrQEi +bNtc1v6v0Jy52mKLG4y7mC/erIkvkQBYJdxPaP7LZVaPYc3/xskcyijrJ/5ufoD8 +K71/ShtsZUXSQn9jlRaYR0EbojMAEQEAAYkBPAQYAQoAJhYhBOSJOSS3Dcepeq2X +8NaBcvSOnFKDBQJbIDuRAhsMBQkDwmcAAAoJENaBcvSOnFKDkFMIAIt64bVZ8x7+ +TitH1bR4pgcNkaKmgKoZz6FXu80+SnbuEt2NnDyf1cLOSimSTILpwLIuv9Uft5Pb +OraQbYt3xi9yrqdKqGLv80bxqK0NuryNkvh9yyx5WoG1iKqMj9/FjGghuPrRaT4l +QinNAghGVkEy1+aXGFrG2DsOC1FFI51CC2WVTzZ5RwR2GpiNRfESsU1rZAUqf/2V +yJl9bD5R4SUNy8oQmhOxi+gbhD4Ao34e4W0ilibslI/uawvCiOwlu5NGd8zv5n+U +heiQvzkApQup5c+BhH5zFDFdKJ2CBByxw9+7QjMFI/wgLixKuE0Ob2kAokXf7RlB +7qTZOahrETw= +=IKnw +-----END PGP PUBLIC KEY BLOCK-----` + +const keyWithFirstUserIDRevoked = `-----BEGIN PGP PUBLIC KEY BLOCK----- +Version: OpenPGP.js v4.10.10 +Comment: https://openpgpjs.org + +xsBNBFsgO5EBCADhREPmcjsPkXe1z7ctvyWL0S7oa9JaoGZ9oPDHFDlQxd0q +lX2eDZJZDg0qYvVixmaULIulApq1puEsaJCn3lHUbHlb4PYKwLEywYXM28JN +91KtLsz/uaEX2KC5WqeP40utmzkNLq+oRX/xnRMgwbO7yUNVG2UlEa6eI+xO +XO3YtLdmJMBWClQ066ZnOIzEo1JxnIwha1CDBMWLLfOLrg6l8InUqaXbtEBb +naIYO6fXVXELUjkxnmk7t/QOk0tXCy8muH9UDqJkwDUESY2l79XwBAcx9riX +8vY7vwC34pm22fAUVLCJx1SJx0J8bkeNp38jKM2Zd9SUQqSbfBopQ4pPABEB +AAHNIkdvbGFuZyBHb3BoZXIgPHJldm9rZWRAZ29sYW5nLmNvbT7CwI0EMAEK +ACAWIQTkiTkktw3HqXqtl/DWgXL0jpxSgwUCWyA79wIdAAAhCRDWgXL0jpxS +gxYhBOSJOSS3Dcepeq2X8NaBcvSOnFKDfWMIAKhI/Tvu3h8fSUxp/gSAcduT 
+6bC1JttG0lYQ5ilKB/58lBUA5CO3ZrKDKlzW3M8VEcvohVaqeTMKeoQd5rCZ +q8KxHn/KvN6Ns85REfXfniCKfAbnGgVXX3kDmZ1g63pkxrFu0fDZjVDXC6vy ++I0sGyI/Inro0Pzbtvn0QCsxjapKK15BtmSrpgHgzVqVg0cUp8vqZeKFxarY +bYB2idtGRci4b9tObOK0BSTVFy26+I/mrFGaPrySYiy2Kz5NMEcRhjmTxJ8j +SwEr2O2sUR0yjbgUAXbTxDVE/jg5fQZ1ACvBRQnB7LvMHcInbzjyeTM3Fazk +kSYQD6b97+dkWwb1iWHNI0dvbGFuZyBHb3BoZXIgPG5vLXJlcGx5QGdvbGFu +Zy5jb20+wsCrBBMBCgA+FiEE5Ik5JLcNx6l6rZfw1oFy9I6cUoMFAlsgO5EC +GwMFCQPCZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AAIQkQ1oFy9I6cUoMW +IQTkiTkktw3HqXqtl/DWgXL0jpxSgwiTB/wM094PbeLiNHB3+nKVu/HBmKe1 +mXV9LBMlbXFw5rV6ZdoS1fZ16m6qE/Th+OVFAZ+xgBCHtf2M4nEAeNOaGoUG +LmwPtC8pTTRw8Vhsn8lPHQHjVuVpedJsaFE+HrdC0RkvsAICz6yHC++iMmrK +zHuTJVG7QRbbCqNd0fBH9Ik7qeE0FrYNfNKI5T9JQDjaaYb7mSMXwBpur3A/ +BP3COtodKETB416s0yY6okTEE7LfIV7IOlpfARkXMF84qjEU2QhpV/kZJ0hQ +aEUQKQa8EwH3fmSF+2aBHwA/F1TgETtetd7EUlTxEK49eiebhZA7BNZHS9CD +rilvZYoDNnweHBMZzsBNBFsgO5EBCAC5INOERA2aNSYHWFeMfByShUuMQGFm +yL2tWT6rwzZmUVG0GUdvoKSRhMJ+81aHxr5zmIhluegEuY99UhX+ZK6NftW2 +UOYjjjQZ4NPDjqOfP5dYUbHiCFRgeUxkmjwnQoSih63iSOoUt5kocR+oXXxb +YmbgeOa8KGgKzDLGHI2nsy8Cni3N/enKVMMHGbJy1DXdV7uRFhBdjnRZGdmt +amHcQbwGHUH+PtTa/jUSMdbtTUvXPI6dz7jDpK0BImzbXNb+r9CcudpiixuM +u5gv3qyJL5EAWCXcT2j+y2VWj2HN/8bJHMoo6yf+bn6A/Cu9f0obbGVF0kJ/ +Y5UWmEdBG6IzABEBAAHCwJMEGAEKACYWIQTkiTkktw3HqXqtl/DWgXL0jpxS +gwUCWyA7kQIbDAUJA8JnAAAhCRDWgXL0jpxSgxYhBOSJOSS3Dcepeq2X8NaB +cvSOnFKDkFMIAIt64bVZ8x7+TitH1bR4pgcNkaKmgKoZz6FXu80+SnbuEt2N +nDyf1cLOSimSTILpwLIuv9Uft5PbOraQbYt3xi9yrqdKqGLv80bxqK0NuryN +kvh9yyx5WoG1iKqMj9/FjGghuPrRaT4lQinNAghGVkEy1+aXGFrG2DsOC1FF +I51CC2WVTzZ5RwR2GpiNRfESsU1rZAUqf/2VyJl9bD5R4SUNy8oQmhOxi+gb +hD4Ao34e4W0ilibslI/uawvCiOwlu5NGd8zv5n+UheiQvzkApQup5c+BhH5z +FDFdKJ2CBByxw9+7QjMFI/wgLixKuE0Ob2kAokXf7RlB7qTZOahrETw= +=+2T8 +-----END PGP PUBLIC KEY BLOCK----- +` + +const keyWithOnlyUserIDRevoked = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +mDMEYYwB7RYJKwYBBAHaRw8BAQdARimqhPPzyGAXmfQJjcqM1QVPzLtURJSzNVll +JV4tEaW0KVJldm9rZWQgUHJpbWFyeSBVc2VyIElEIDxyZXZva2VkQGtleS5jb20+ 
+iHgEMBYIACAWIQSpyJZAXYqVEFkjyKutFcS0yeB0LQUCYYwCtgIdAAAKCRCtFcS0 +yeB0LbSsAQD8OYMaaBjrdzzpwIkP1stgmPd4/kzN/ZG28Ywl6a5F5QEA5Xg7aq4e +/t6Fsb4F5iqB956kSPe6YJrikobD/tBbMwSIkAQTFggAOBYhBKnIlkBdipUQWSPI +q60VxLTJ4HQtBQJhjAHtAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEK0V +xLTJ4HQtBaoBAPZL7luTCji+Tqhn7XNfFE/0QIahCt8k9wfO1cGlB3inAQDf8Tzw +ZGR5fNluUcNoVxQT7bUSFStbaGo3k0BaOYPbCLg4BGGMAe0SCisGAQQBl1UBBQEB +B0DLwSpveSrbIO/IVZD13yrs1XuB3FURZUnafGrRq7+jUAMBCAeIeAQYFggAIBYh +BKnIlkBdipUQWSPIq60VxLTJ4HQtBQJhjAHtAhsMAAoJEK0VxLTJ4HQtZ1oA/j9u +8+p3xTNzsmabTL6BkNbMeB/RUKCrlm6woM6AV+vxAQCcXTn3JC2sNoNrLoXuVzaA +mcG3/TwG5GSQUUPkrDsGDA== +=mFWy +-----END PGP PUBLIC KEY BLOCK----- +` + +const keyWithSubKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +mI0EWyKwKQEEALwXhKBnyaaNFeK3ljfc/qn9X/QFw+28EUfgZPHjRmHubuXLE2uR +s3ZoSXY2z7Dkv+NyHYMt8p+X8q5fR7JvUjK2XbPyKoiJVnHINll83yl67DaWfKNL +EjNoO0kIfbXfCkZ7EG6DL+iKtuxniGTcnGT47e+HJSqb/STpLMnWwXjBABEBAAG0 +I0dvbGFuZyBHb3BoZXIgPG5vLXJlcGx5QGdvbGFuZy5jb20+iM4EEwEKADgWIQQ/ +lRafP/p9PytHbwxMvYJsOQdOOAUCWyKwKQIbAwULCQgHAwUVCgkICwUWAgMBAAIe +AQIXgAAKCRBMvYJsOQdOOOsFBAC62mXww8XuqvYLcVOvHkWLT6mhxrQOJXnlfpn7 +2uBV9CMhoG/Ycd43NONsJrB95Apr9TDIqWnVszNbqPCuBhZQSGLdbiDKjxnCWBk0 +69qv4RNtkpOhYB7jK4s8F5oQZqId6JasT/PmJTH92mhBYhhTQr0GYFuPX2UJdkw9 +Sn9C67iNBFsisDUBBAC3A+Yo9lgCnxi/pfskyLrweYif6kIXWLAtLTsM6g/6jt7b +wTrknuCPyTv0QKGXsAEe/cK/Xq3HvX9WfXPGIHc/X56ZIsHQ+RLowbZV/Lhok1IW +FAuQm8axr/by80cRwFnzhfPc/ukkAq2Qyj4hLsGblu6mxeAhzcp8aqmWOO2H9QAR +AQABiLYEKAEKACAWIQQ/lRafP/p9PytHbwxMvYJsOQdOOAUCWyK16gIdAAAKCRBM +vYJsOQdOOB1vA/4u4uLONsE+2GVOyBsHyy7uTdkuxaR9b54A/cz6jT/tzUbeIzgx +22neWhgvIEghnUZd0vEyK9k1wy5vbDlEo6nKzHso32N1QExGr5upRERAxweDxGOj +7luDwNypI7QcifE64lS/JmlnunwRCdRWMKc0Fp+7jtRc5mpwyHN/Suf5RokBagQY +AQoAIBYhBD+VFp8/+n0/K0dvDEy9gmw5B044BQJbIrA1AhsCAL8JEEy9gmw5B044 +tCAEGQEKAB0WIQSNdnkaWY6t62iX336UXbGvYdhXJwUCWyKwNQAKCRCUXbGvYdhX +JxJSA/9fCPHP6sUtGF1o3G1a3yvOUDGr1JWcct9U+QpbCt1mZoNopCNDDQAJvDWl +mvDgHfuogmgNJRjOMznvahbF+wpTXmB7LS0SK412gJzl1fFIpK4bgnhu0TwxNsO1 
+8UkCZWqxRMgcNUn9z6XWONK8dgt5JNvHSHrwF4CxxwjL23AAtK+FA/UUoi3U4kbC +0XnSr1Sl+mrzQi1+H7xyMe7zjqe+gGANtskqexHzwWPUJCPZ5qpIa2l8ghiUim6b +4ymJ+N8/T8Yva1FaPEqfMzzqJr8McYFm0URioXJPvOAlRxdHPteZ0qUopt/Jawxl +Xt6B9h1YpeLoJwjwsvbi98UTRs0jXwoY +=3fWu +-----END PGP PUBLIC KEY BLOCK-----` + +const keyWithSubKeyAndBadSelfSigOrder = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +mI0EWyLLDQEEAOqIOpJ/ha1OYAGduu9tS3rBz5vyjbNgJO4sFveEM0mgsHQ0X9/L +plonW+d0gRoO1dhJ8QICjDAc6+cna1DE3tEb5m6JtQ30teLZuqrR398Cf6w7NNVz +r3lrlmnH9JaKRuXl7tZciwyovneBfZVCdtsRZjaLI1uMQCz/BToiYe3DABEBAAG0 +I0dvbGFuZyBHb3BoZXIgPG5vLXJlcGx5QGdvbGFuZy5jb20+iM4EEwEKADgWIQRZ +sixZOfQcZdW0wUqmgmdsv1O9xgUCWyLLDQIbAwULCQgHAwUVCgkICwUWAgMBAAIe +AQIXgAAKCRCmgmdsv1O9xql2A/4pix98NxjhdsXtazA9agpAKeADf9tG4Za27Gj+ +3DCww/E4iP2X35jZimSm/30QRB6j08uGCqd9vXkkJxtOt63y/IpVOtWX6vMWSTUm +k8xKkaYMP0/IzKNJ1qC/qYEUYpwERBKg9Z+k99E2Ql4kRHdxXUHq6OzY79H18Y+s +GdeM/riNBFsiyxsBBAC54Pxg/8ZWaZX1phGdwfe5mek27SOYpC0AxIDCSOdMeQ6G +HPk38pywl1d+S+KmF/F4Tdi+kWro62O4eG2uc/T8JQuRDUhSjX0Qa51gPzJrUOVT +CFyUkiZ/3ZDhtXkgfuso8ua2ChBgR9Ngr4v43tSqa9y6AK7v0qjxD1x+xMrjXQAR +AQABiQFxBBgBCgAmAhsCFiEEWbIsWTn0HGXVtMFKpoJnbL9TvcYFAlsizTIFCQAN +MRcAv7QgBBkBCgAdFiEEJcoVUVJIk5RWj1c/o62jUpRPICQFAlsiyxsACgkQo62j +UpRPICQq5gQApoWIigZxXFoM0uw4uJBS5JFZtirTANvirZV5RhndwHeMN6JttaBS +YnjyA4+n1D+zB2VqliD2QrsX12KJN6rGOehCtEIClQ1Hodo9nC6kMzzAwW1O8bZs +nRJmXV+bsvD4sidLZLjdwOVa3Cxh6pvq4Uur6a7/UYx121hEY0Qx0s8JEKaCZ2y/ +U73GGi0D/i20VW8AWYAPACm2zMlzExKTOAV01YTQH/3vW0WLrOse53WcIVZga6es +HuO4So0SOEAvxKMe5HpRIu2dJxTvd99Bo9xk9xJU0AoFrO0vNCRnL+5y68xMlODK +lEw5/kl0jeaTBp6xX0HDQOEVOpPGUwWV4Ij2EnvfNDXaE1vK1kffiQFrBBgBCgAg +AhsCFiEEWbIsWTn0HGXVtMFKpoJnbL9TvcYFAlsi0AYAv7QgBBkBCgAdFiEEJcoV +UVJIk5RWj1c/o62jUpRPICQFAlsiyxsACgkQo62jUpRPICQq5gQApoWIigZxXFoM +0uw4uJBS5JFZtirTANvirZV5RhndwHeMN6JttaBSYnjyA4+n1D+zB2VqliD2QrsX +12KJN6rGOehCtEIClQ1Hodo9nC6kMzzAwW1O8bZsnRJmXV+bsvD4sidLZLjdwOVa +3Cxh6pvq4Uur6a7/UYx121hEY0Qx0s8JEKaCZ2y/U73GRl0EAJokkXmy4zKDHWWi 
+wvK9gi2gQgRkVnu2AiONxJb5vjeLhM/07BRmH6K1o+w3fOeEQp4FjXj1eQ5fPSM6 +Hhwx2CTl9SDnPSBMiKXsEFRkmwQ2AAsQZLmQZvKBkLZYeBiwf+IY621eYDhZfo+G +1dh1WoUCyREZsJQg2YoIpWIcvw+a +=bNRo +-----END PGP PUBLIC KEY BLOCK----- +` + +const onlySubkeyNoPrivateKey = `-----BEGIN PGP PRIVATE KEY BLOCK----- +Version: GnuPG v1 + +lQCVBFggvocBBAC7vBsHn7MKmS6IiiZNTXdciplVgS9cqVd+RTdIAoyNTcsiV1H0 +GQ3QtodOPeDlQDNoqinqaobd7R9g3m3hS53Nor7yBZkCWQ5x9v9JxRtoAq0sklh1 +I1X2zEqZk2l6YrfBF/64zWrhjnW3j23szkrAIVu0faQXbQ4z56tmZrw11wARAQAB +/gdlAkdOVQG0CUdOVSBEdW1teYi4BBMBAgAiBQJYIL6HAhsDBgsJCAcDAgYVCAIJ +CgsEFgIDAQIeAQIXgAAKCRCd1xxWp1CYAnjGA/9synn6ZXJUKAXQzySgmCZvCIbl +rqBfEpxwLG4Q/lONhm5vthAE0z49I8hj5Gc5e2tLYUtq0o0OCRdCrYHa/efOYWpJ +6RsK99bePOisVzmOABLIgZkcr022kHoMCmkPgv9CUGKP1yqbGl+zzAwQfUjRUmvD +ZIcWLHi2ge4GzPMPi50B2ARYIL6cAQQAxWHnicKejAFcFcF1/3gUSgSH7eiwuBPX +M7vDdgGzlve1o1jbV4tzrjN9jsCl6r0nJPDMfBSzgLr1auNTRG6HpJ4abcOx86ED +Ad+avDcQPZb7z3dPhH/gb2lQejZsHh7bbeOS8WMSzHV3RqCLd8J/xwWPNR5zKn1f +yp4IGfopidMAEQEAAQAD+wQOelnR82+dxyM2IFmZdOB9wSXQeCVOvxSaNMh6Y3lk +UOOkO8Nlic4x0ungQRvjoRs4wBmCuwFK/MII6jKui0B7dn/NDf51i7rGdNGuJXDH +e676By1sEY/NGkc74jr74T+5GWNU64W0vkpfgVmjSAzsUtpmhJMXsc7beBhJdnVl +AgDKCb8hZqj1alcdmLoNvb7ibA3K/V8J462CPD7bMySPBa/uayoFhNxibpoXml2r +oOtHa5izF3b0/9JY97F6rqkdAgD6GdTJ+xmlCoz1Sewoif1I6krq6xoa7gOYpIXo +UL1Afr+LiJeyAnF/M34j/kjIVmPanZJjry0kkjHE5ILjH3uvAf4/6n9np+Th8ujS +YDCIzKwR7639+H+qccOaddCep8Y6KGUMVdD/vTKEx1rMtK+hK/CDkkkxnFslifMJ +kqoqv3WUqCWJAT0EGAECAAkFAlggvpwCGwIAqAkQndccVqdQmAKdIAQZAQIABgUC +WCC+nAAKCRDmGUholQPwvQk+A/9latnSsR5s5/1A9TFki11GzSEnfLbx46FYOdkW +n3YBxZoPQGxNA1vIn8GmouxZInw9CF4jdOJxEdzLlYQJ9YLTLtN5tQEMl/19/bR8 +/qLacAZ9IOezYRWxxZsyn6//jfl7A0Y+FV59d4YajKkEfItcIIlgVBSW6T+TNQT3 +R+EH5HJ/A/4/AN0CmBhhE2vGzTnVU0VPrE4V64pjn1rufFdclgpixNZCuuqpKpoE +VVHn6mnBf4njKjZrAGPs5kfQ+H4NsM7v3Zz4yV6deu9FZc4O6E+V1WJ38rO8eBix +7G2jko106CC6vtxsCPVIzY7aaG3H5pjRtomw+pX7SzrQ7FUg2PGumg== +=F/T0 +-----END PGP PRIVATE KEY BLOCK-----` + +const ecdsaPrivateKey = `-----BEGIN PGP PRIVATE KEY BLOCK----- + 
+xaUEX1KsSRMIKoZIzj0DAQcCAwTpYqJsnJiFhKKh+8TulWD+lVmerBFNS+Ii +B+nlG3T0xQQ4Sy5eIjJ0CExIQQzi3EElF/Z2l4F3WC5taFA11NgA/gkDCHSS +PThf1M2K4LN8F1MRcvR+sb7i0nH55ojkwuVB1DE6jqIT9m9i+mX1tzjSAS+6 +lPQiweCJvG7xTC7Hs3AzRapf/r1At4TB+v+5G2/CKynNFEJpbGwgPGJpbGxA +aG9tZS5jb20+wncEEBMIAB8FAl9SrEkGCwkHCAMCBBUICgIDFgIBAhkBAhsD +Ah4BAAoJEMpwT3+q3+xqw5UBAMebZN9isEZ1ML+R/jWAAWMwa/knMugrEZ1v +Bl9+ZwM0AQCZdf80/wYY4Nve01qSRFv8OmKswLli3TvDv6FKc4cLz8epBF9S +rEkSCCqGSM49AwEHAgMEAjKnT9b5wY2bf9TpAV3d7OUfPOxKj9c4VzeVzSrH +AtQgo/MuI1cdYVURicV4i76DNjFhQHQFTk7BrC+C2u1yqQMBCAf+CQMIHImA +iYfzQtjgQWSFZYUkCFpbbwhNF0ch+3HNaZkaHCnZRIsWsRnc6FCb6lRQyK9+ +Dq59kHlduE5QgY40894jfmP2JdJHU6nBdYrivbEdbMJhBBgTCAAJBQJfUqxJ +AhsMAAoJEMpwT3+q3+xqUI0BAMykhV08kQ4Ip9Qlbss6Jdufv7YrU0Vd5hou +b5TmiPd0APoDBh3qIic+aLLUcAuG3+Gt1P1AbUlmqV61ozn1WfHxfw== +=KLN8 +-----END PGP PRIVATE KEY BLOCK-----` + +const dsaPrivateKeyWithElGamalSubkey = `-----BEGIN PGP PRIVATE KEY BLOCK----- + +lQOBBF9/MLsRCACeaF6BI0jTgDAs86t8/kXPfwlPvR2MCYzB0BCqAdcq1hV/GTYd +oNmJRna/ZJfsI/vf+d8Nv+EYOQkPheFS1MJVBitkAXjQPgm8i1tQWen1FCWZxqGk +/vwZYF4yo8GhZ+Wxi3w09W9Cp9QM/CTmyE1Xe7wpPBGe+oD+me8Zxjyt8JBS4Qx+ +gvWbfHxfHnggh4pz7U8QkItlLsBNQEdX4R5+zwRN66g2ZSX/shaa/EkVnihUhD7r +njP9I51ORWucTQD6OvgooaNQZCkQ/Se9TzdakwWKS2XSIFXiY/e2E5ZgKI/pfKDU +iA/KessxddPb7nP/05OIJqg9AoDrD4vmehLzAQD+zsUS3LDU1m9/cG4LMsQbT2VK +Te4HqbGIAle+eu/asQf8DDJMrbZpiJZvADum9j0TJ0oep6VdMbzo9RSDKvlLKT9m +kG63H8oDWnCZm1a+HmGq9YIX+JHWmsLXXsFLeEouLzHO+mZo0X28eji3V2T87hyR +MmUM0wFo4k7jK8uVmkDXv3XwNp2uByWxUKZd7EnWmcEZWqIiexJ7XpCS0Pg3tRaI +zxve0SRe/dxfUPnTk/9KQ9hS6DWroBKquL182zx1Fggh4LIWWE2zq+UYn8BI0E8A +rmIDFJdF8ymFQGRrEy6g79NnkPmkrZWsgMRYY65P6v4zLVmqohJKkpm3/Uxa6QAP +CCoPh/JTOvPeCP2bOJH8z4Z9Py3ouMIjofQW8sXqRgf/RIHbh0KsINHrwwZ4gVIr +MK3RofpaYxw1ztPIWb4cMWoWZHH1Pxh7ggTGSBpAhKXkiWw2Rxat8QF5aA7e962c +bLvVv8dqsPrD/RnVJHag89cbPTzjn7gY9elE8EM8ithV3oQkwHTr4avYlpDZsgNd +hUW3YgRwGo31tdzxoG04AcpV2t+07P8XMPr9hsfWs4rHohXPi38Hseu1Ji+dBoWQ +3+1w/HH3o55s+jy4Ruaz78AIrjbmAJq+6rA2mIcCgrhw3DnzuwQAKeBvSeqn9zfS 
+ZC812osMBVmkycwelpaIh64WZ0vWL3GvdXDctV2kXM+qVpDTLEny0LuiXxrwCKQL +Ev4HAwK9uQBcreDEEud7pfRb8EYP5lzO2ZA7RaIvje6EWAGBvJGMRT0QQE5SGqc7 +Fw5geigBdt+vVyRuNNhg3c2fdn/OBQaYu0J/8AiOogG8EaM8tCFlbGdhbWFsQGRz +YS5jb20gPGVsZ2FtYWxAZHNhLmNvbT6IkAQTEQgAOBYhBI+gnfiHQxB35/Dp0XAQ +aE/rsWC5BQJffzC7AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEHAQaE/r +sWC5A4EA/0GcJmyPtN+Klc7b9sVT3JgKTRnB/URxOJfYJofP0hZLAQCkqyMO+adV +JvbgDH0zaITQWZSSXPqpgMpCA6juTrDsd50CawRffzC7EAgAxFFFSAAEQzWTgKU5 +EBtpxxoPzHqcChawTHRxHxjcELXzmUBS5PzfA1HXSPnNqK/x3Ut5ycC3CsW41Fnt +Gm3706Wu9VFbFZVn55F9lPiplUo61n5pqMvOr1gmuQsdXiTa0t5FRa4TZ2VSiHFw +vdAVSPTUsT4ZxJ1rPyFYRtq1n3pQcvdZowd07r0JnzTMjLLMFYCKhwIowoOC4zqJ +iB8enjwOlpaqBATRm9xpVF7SJkroPF6/B1vdhj7E3c1aJyHlo0PYBAg756sSHWHg +UuLyUQ4TA0hcCVenn/L/aSY2LnbdZB1EBhlYjA7dTCgwIqsQhfQmPkjz6g64A7+Y +HbbrLwADBQgAk14QIEQ+J/VHetpQV/jt2pNsFK1kVK7mXK0spTExaC2yj2sXlHjL +Ie3bO5T/KqmIaBEB5db5fA5xK9cZt79qrQHDKsEqUetUeMUWLBx77zBsus3grIgy +bwDZKseRzQ715pwxquxQlScGoDIBKEh08HpwHkq140eIj3w+MAIfndaZaSCNaxaP +Snky7BQmJ7Wc7qrIwoQP6yrnUqyW2yNi81nJYUhxjChqaFSlwzLs/iNGryBKo0ic +BqVIRjikKHBlwBng6WyrltQo/Vt9GG8w+lqaAVXbJRlaBZJUR+2NKi/YhP3qQse3 +v8fi4kns0gh5LK+2C01RvdX4T49QSExuIf4HAwLJqYIGwadA2uem5v7/765ZtFWV +oL0iZ0ueTJDby4wTFDpLVzzDi/uVcB0ZRFrGOp7w6OYcNYTtV8n3xmli2Q5Trw0c +wZVzvg+ABKWiv7faBjMczIFF8y6WZKOIeAQYEQgAIBYhBI+gnfiHQxB35/Dp0XAQ +aE/rsWC5BQJffzC7AhsMAAoJEHAQaE/rsWC5ZmIA/jhS4r4lClbvjuPWt0Yqdn7R +fss2SPMYvMrrDh42aE0OAQD8xn4G6CN8UtW9xihXOY6FpxiJ/sMc2VaneeUd34oa +4g== +=XZm8 +-----END PGP PRIVATE KEY BLOCK-----` + +// https://tests.sequoia-pgp.org/#Certificate_expiration +// P _ U p +const expiringPrimaryUIDKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +xsDNBF2lnPIBDAC5cL9PQoQLTMuhjbYvb4Ncuuo0bfmgPRFywX53jPhoFf4Zg6mv +/seOXpgecTdOcVttfzC8ycIKrt3aQTiwOG/ctaR4Bk/t6ayNFfdUNxHWk4WCKzdz +/56fW2O0F23qIRd8UUJp5IIlN4RDdRCtdhVQIAuzvp2oVy/LaS2kxQoKvph/5pQ/ +5whqsyroEWDJoSV0yOb25B/iwk/pLUFoyhDG9bj0kIzDxrEqW+7Ba8nocQlecMF3 +X5KMN5kp2zraLv9dlBBpWW43XktjcCZgMy20SouraVma8Je/ECwUWYUiAZxLIlMv 
+9CurEOtxUw6N3RdOtLmYZS9uEnn5y1UkF88o8Nku890uk6BrewFzJyLAx5wRZ4F0 +qV/yq36UWQ0JB/AUGhHVPdFf6pl6eaxBwT5GXvbBUibtf8YI2og5RsgTWtXfU7eb +SGXrl5ZMpbA6mbfhd0R8aPxWfmDWiIOhBufhMCvUHh1sApMKVZnvIff9/0Dca3wb +vLIwa3T4CyshfT0AEQEAAc0hQm9iIEJhYmJhZ2UgPGJvYkBvcGVucGdwLmV4YW1w +bGU+wsFcBBMBCgCQBYJhesp/BYkEWQPJBQsJCAcCCRD7/MgqAV5zMEcUAAAAAAAe +ACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmeEOQlNyTLFkc9I/elp+BpY +495V7KatqtDmsyDr+zDAdwYVCgkICwIEFgIDAQIXgAIbAwIeARYhBNGmbhojsYLJ +mA94jPv8yCoBXnMwAABSCQv/av8hKyynMtXVKFuWOGJw0mR8auDm84WdhMFRZg8t +yTJ1L88+Ny4WUAFeqo2j7DU2yPGrm5rmuvzlEedFYFeOWt+A4adz+oumgRd0nsgG +Lf3QYUWQhLWVlz+H7zubgKqSB2A2RqV65S7mTTVro42nb2Mng6rvGWiqeKG5nrXN +/01p1mIBQGR/KnZSqYLzA2Pw2PiJoSkXT26PDz/kiEMXpjKMR6sicV4bKVlEdUvm +pIImIPBHZq1EsKXEyWtWC41w/pc+FofGE+uSFs2aef1vvEHFkj3BHSK8gRcH3kfR +eFroTET8C2q9V1AOELWm+Ys6PzGzF72URK1MKXlThuL4t4LjvXWGNA78IKW+/RQH +DzK4U0jqSO0mL6qxqVS5Ij6jjL6OTrVEGdtDf5n0vI8tcUTBKtVqYAYk+t2YGT05 +ayxALtb7viVKo8f10WEcCuKshn0gdsEFMRZQzJ89uQIY3R3FbsdRCaE6OEaDgKMQ +UTFROyfhthgzRKbRxfcplMUCzsDNBF2lnPIBDADWML9cbGMrp12CtF9b2P6z9TTT +74S8iyBOzaSvdGDQY/sUtZXRg21HWamXnn9sSXvIDEINOQ6A9QxdxoqWdCHrOuW3 +ofneYXoG+zeKc4dC86wa1TR2q9vW+RMXSO4uImA+Uzula/6k1DogDf28qhCxMwG/ +i/m9g1c/0aApuDyKdQ1PXsHHNlgd/Dn6rrd5y2AObaifV7wIhEJnvqgFXDN2RXGj +LeCOHV4Q2WTYPg/S4k1nMXVDwZXrvIsA0YwIMgIT86Rafp1qKlgPNbiIlC1g9RY/ +iFaGN2b4Ir6GDohBQSfZW2+LXoPZuVE/wGlQ01rh827KVZW4lXvqsge+wtnWlszc +selGATyzqOK9LdHPdZGzROZYI2e8c+paLNDdVPL6vdRBUnkCaEkOtl1mr2JpQi5n +TU+gTX4IeInC7E+1a9UDF/Y85ybUz8XV8rUnR76UqVC7KidNepdHbZjjXCt8/Zo+ +Tec9JNbYNQB/e9ExmDntmlHEsSEQzFwzj8sxH48AEQEAAcLA9gQYAQoAIBYhBNGm +bhojsYLJmA94jPv8yCoBXnMwBQJdpZzyAhsMAAoJEPv8yCoBXnMw6f8L/26C34dk +jBffTzMj5Bdzm8MtF67OYneJ4TQMw7+41IL4rVcSKhIhk/3Ud5knaRtP2ef1+5F6 +6h9/RPQOJ5+tvBwhBAcUWSupKnUrdVaZQanYmtSxcVV2PL9+QEiNN3tzluhaWO// +rACxJ+K/ZXQlIzwQVTpNhfGzAaMVV9zpf3u0k14itcv6alKY8+rLZvO1wIIeRZLm +U0tZDD5HtWDvUV7rIFI1WuoLb+KZgbYn3OWjCPHVdTrdZ2CqnZbG3SXw6awH9bzR +LV9EXkbhIMez0deCVdeo+wFFklh8/5VK2b0vk/+wqMJxfpa1lHvJLobzOP9fvrsw 
+sr92MA2+k901WeISR7qEzcI0Fdg8AyFAExaEK6VyjP7SXGLwvfisw34OxuZr3qmx +1Sufu4toH3XrB7QJN8XyqqbsGxUCBqWif9RSK4xjzRTe56iPeiSJJOIciMP9i2ld +I+KgLycyeDvGoBj0HCLO3gVaBe4ubVrj5KjhX2PVNEJd3XZRzaXZE2aAMQ== +=AmgT +-----END PGP PUBLIC KEY BLOCK-----` + +const rsa2048PrivateKey = `-----BEGIN PGP PRIVATE KEY BLOCK----- +Comment: gpg (GnuPG) 2.2.27 with libgcrypt 1.9.4 + +lQPGBGL07P0BCADL0etN8efyAXA6sL2WfQvHe5wEKYXPWeN2+jiqSppfeRZAOlzP +kZ3U+cloeJriplYvVJwI3ID2aw52Z/TRn8iKRP5eOUFrEgcgl06lazLtOndK7o7p +oBV5mLtHEirFHm6W61fNt10jzM0jx0PV6nseLhFB2J42F1cmU/aBgFo41wjLSZYr +owR+v+O9S5sUXblQF6sEDcY01sBEu09zrIgT49VFwQ1Cvdh9XZEOTQBfdiugoj5a +DS3fAqAka3r1VoQK4eR7/upnYSgSACGeaQ4pUelKku5rpm50gdWTY8ppq0k9e1eT +y2x0OQcW3hWE+j4os1ca0ZEADMdqr/99MOxrABEBAAH+BwMCJWxU4VOZOJ7/I6vX +FxdfBhIBEXlJ52FM3S/oYtXqLhkGyrtmZOeEazVvUtuCe3M3ScHI8xCthcmE8E0j +bi+ZEHPS2NiBZtgHFF27BLn7zZuTc+oD5WKduZdK3463egnyThTqIIMl25WZBuab +k5ycwYrWwBH0jfA4gwJ13ai4pufKC2RM8qIu6YAVPglYBKFLKGvvJHa5vI+LuA0E +K+k35hIic7yVUcQneNnAF2598X5yWiieYnOZpmHlRw1zfbMwOJr3ZNj2v94u7b+L +sTa/1Uv9887Vb6sJp0c2Sh4cwEccoPYkvMqFn3ZrJUr3UdDu1K2vWohPtswzhrYV ++RdPZE5RLoCQufKvlPezk0Pzhzb3bBU7XjUbdGY1nH/EyQeBNp+Gw6qldKvzcBaB +cyOK1c6hPSszpJX93m5UxCN55IeifmcNjmbDh8vGCCdajy6d56qV2n4F3k7vt1J1 +0UlxIGhqijJoaTCX66xjLMC6VXkSz6aHQ35rnXosm/cqPcQshsZTdlfSyWkorfdr +4Hj8viBER26mjYurTMLBKDtUN724ZrR0Ev5jorX9uoKlgl87bDZHty2Ku2S+vR68 +VAvnj6Fi1BYNclnDoqxdRB2z5T9JbWE52HuG83/QsplhEqXxESDxriTyTHMbNxEe +88soVCDh4tgflZFa2ucUr6gEKJKij7jgahARnyaXfPZlQBUAS1YUeILYmN+VR+M/ +sHENpwDWc7TInn8VN638nJV+ScZGMih3AwWZTIoiLju3MMt1K0YZ3NuiqwGH4Jwg +/BbEdTWeCci9y3NEQHQ3uZZ5p6j2CwFVlK11idemCMvAiTVxF+gKdaLMkeCwKxru +J3YzhKEo+iDVYbPYBYizx/EHBn2U5kITQ5SBXzjTaaFMNZJEf9JYsL1ybPB6HOFY +VNVB2KT8CGVwtCJHb2xhbmcgR29waGVyIDxnb2xhbmdAZXhhbXBsZS5vcmc+iQFO +BBMBCgA4FiEEC6K7U7f4qesybTnqSkra7gHusm0FAmL07P0CGwMFCwkIBwIGFQoJ +CAsCBBYCAwECHgECF4AACgkQSkra7gHusm1MvwgAxpClWkeSqIhMQfbiuz0+lOkE +89y1DCFw8bHjZoUf4/4K8hFA3dGkk+q72XFgiyaCpfXxMt6Gi+dN47t+tTv9NIqC 
+sukbaoJBmJDhN6+djmJOgOYy+FWsW2LAk2LOwKYulpnBZdcA5rlMAhBg7gevQpF+ +ruSU69P7UUaFJl/DC7hDmaIcj+4cjBE/HO26SnVQjoTfjZT82rDh1Wsuf8LnkJUk +b3wezBLpXKjDvdHikdv4gdlR4AputVM38aZntYYglh/EASo5TneyZ7ZscdLNRdcF +r5O2fKqrOJLOdaoYRFZZWOvP5GtEVFDU7WGivOSVfiszBE0wZR3dgZRJipHCXJ0D +xgRi9Oz9AQgAtMJcJqLLVANJHl90tWuoizDkm+Imcwq2ubQAjpclnNrODnDK+7o4 +pBsWmXbZSdkC4gY+LhOQA6bPDD0JEHM58DOnrm49BddxXAyK0HPsk4sGGt2SS86B +OawWNdfJVyqw4bAiHWDmQg4PcjBbt3ocOIxAR6I5kBSiQVxuGQs9T+Zvg3G1r3Or +fS6DzlgY3HFUML5YsGH4lOxNSOoKAP68GIH/WNdUZ+feiRg9knIib6I3Hgtf5eO8 +JRH7aWE/TD7eNu36bLLjT5TZPq5r6xaD2plbtPOyXbNPWs9qI1yG+VnErfaLY0w8 +Qo0aqzbgID+CTZVomXSOpOcQseaFKw8ZfQARAQAB/gcDArha6+/+d4OY/w9N32K9 +hFNYt4LufTETMQ+k/sBeaMuAVzmT47DlAXzkrZhGW4dZOtXMu1rXaUwHlqkhEyzL +L4MYEWVXfD+LbZNEK3MEFss6RK+UAMeT/PTV9aA8cXQVPcSJYzfBXHQ1U1hnOgrO +apn92MN8RmkhX8wJLyeWTMMuP4lXByJMmmGo8WvifeRD2kFY4y0WVBDAXJAV4Ljf +Di/bBiwoc5a+gxHuZT2W9ZSxBQJNXdt4Un2IlyZuo58s5MLx2N0EaNJ8PwRUE6fM +RZYO8aZCEPUtINE4njbvsWOMCtrblsMPwZ1B0SiIaWmLaNyGdCNKea+fCIW7kasC +JYMhnLumpUTXg5HNexkCsl7ABWj0PYBflOE61h8EjWpnQ7JBBVKS2ua4lMjwHRX7 +5o5yxym9k5UZNFdGoXVL7xpizCcdGawxTJvwhs3vBqu1ZWYCegOAZWDrOkCyhUpq +8uKMROZFbn+FwE+7tjt+v2ed62FVEvD6g4V3ThCA6mQqeOARfJWN8GZY8BDm8lht +crOXriUkrx+FlrgGtm2CkwjW5/9Xd7AhFpHnQdFeozOHyq1asNSgJF9sNi9Lz94W +skQSVRi0IExxSXYGI3Y0nnAZUe2BAQflYPJdEveSr3sKlUqXiETTA1VXsTPK3kOC +92CbLzj/Hz199jZvywwyu53I+GKMpF42rMq7zxr2oa61YWY4YE/GDezwwys/wLx/ +QpCW4X3ppI7wJjCSSqEV0baYZSSli1ayheS6dxi8QnSpX1Bmpz6gU7m/M9Sns+hl +J7ZvgpjCAiV7KJTjtclr5/S02zP78LTVkoTWoz/6MOTROwaP63VBUXX8pbJhf/vu +DLmNnDk8joMJxoDXWeNU0EnNl4hP7Z/jExRBOEO4oAnUf/Sf6gCWQhL5qcajtg6w +tGv7vx3f2IkBNgQYAQoAIBYhBAuiu1O3+KnrMm056kpK2u4B7rJtBQJi9Oz9AhsM +AAoJEEpK2u4B7rJt6lgIAMBWqP4BCOGnQXBbgJ0+ACVghpkFUXZTb/tXJc8UUvTM +8uov6k/RsqDGZrvhhufD7Wwt7j9v7dD7VPp7bPyjVWyimglQzWguTUUqLDGlstYH +5uYv1pzma0ZsAGNqFeGlTLsKOSGKFMH4rB2KfN2n51L8POvtp1y7GKZQbWIWneaB +cZr3BINU5GMvYYU7pAYcoR+mJPdJx5Up3Ocn+bn8Tu1sy9C/ArtCQucazGnoE9u1 +HhNLrh0CdzzX7TNH6TQ8LwPOvq0K5l/WqbN9lE0WBBhMv2HydxhluO8AhU+A5GqC 
+C+wET7nVDnhoOm/fstIeb7/LN7OYejKPeHdFBJEL9GA= +=u442 +-----END PGP PRIVATE KEY BLOCK-----` + +const curve25519PrivateKey = `-----BEGIN PGP PRIVATE KEY BLOCK----- +Comment: gpg (GnuPG) 2.2.27 with libgcrypt 1.9.4 + +lFgEYvTtQBYJKwYBBAHaRw8BAQdAxsNXLbrk5xOjpO24VhOMvQ0/F+JcyIkckMDH +X3FIGxcAAQDFOlunZWYuPsCx5JLp78vKqUTfgef9TGG4oD6I/Sa0zBMstCJHb2xh +bmcgR29waGVyIDxnb2xhbmdAZXhhbXBsZS5vcmc+iJAEExYIADgWIQSFQHEOazmo +h1ldII4MvfnLQ4JBNwUCYvTtQAIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAK +CRAMvfnLQ4JBN5yeAQCKdry8B5ScCPrev2+UByMCss7Sdu5RhomCFsHdNPLcKAEA +8ugei+1owHsV+3cGwWWzKk6sLa8ZN87i3SKuOGp9DQycXQRi9O1AEgorBgEEAZdV +AQUBAQdA5CubPp8l7lrVQ25h7Hx5XN2C8xanRnnpcjzEooCaEA0DAQgHAAD/Rpc+ +sOZUXrFk9HOWB1XU41LoWbDBoG8sP8RWAVYwD5AQRYh4BBgWCAAgFiEEhUBxDms5 +qIdZXSCODL35y0OCQTcFAmL07UACGwwACgkQDL35y0OCQTcvdwEA7lb5g/YisrEf +iq660uwMGoepLUfvtqKzuQ6heYe83y0BAN65Ffg5HYOJzUEi0kZQRf7OhdtuL2kJ +SRXn8DmCTfEB +=cELM +-----END PGP PRIVATE KEY BLOCK-----` + +const curve448PrivateKey = `-----BEGIN PGP PRIVATE KEY BLOCK----- +Comment: C1DB 65D5 80D7 B922 7254 4B1E A699 9895 FABA CE52 + +xYUEYV2UmRYDK2VxAc9AFyxgh5xnSbyt50TWl558mw9xdMN+/UBLr5+UMP8IsrvV +MdXuTIE8CyaUQKSotHtH2RkYEXj5nsMAAAHPQIbTMSzjIWug8UFECzAex5FHgAgH +gYF3RK+TS8D24wX8kOu2C/NoVxwGY+p+i0JHaB+7yljriSKAGxs6wsBEBB8WCgCD +BYJhXZSZBYkFpI+9AwsJBwkQppmYlfq6zlJHFAAAAAAAHgAgc2FsdEBub3RhdGlv +bnMuc2VxdW9pYS1wZ3Aub3Jn5wSpIutJ5HncJWk4ruUV8GzQF390rR5+qWEAnAoY +akcDFQoIApsBAh4BFiEEwdtl1YDXuSJyVEseppmYlfq6zlIAALzdA5dA/fsgYg/J +qaQriYKaPUkyHL7EB3BXhV2d1h/gk+qJLvXQuU2WEJ/XSs3GrsBRiiZwvPH4o+7b +mleAxjy5wpS523vqrrBR2YZ5FwIku7WS4litSdn4AtVam/TlLdMNIf41CtFeZKBe +c5R5VNdQy8y7qy8AAADNEUN1cnZlNDQ4IE9wdGlvbiA4wsBHBBMWCgCGBYJhXZSZ +BYkFpI+9AwsJBwkQppmYlfq6zlJHFAAAAAAAHgAgc2FsdEBub3RhdGlvbnMuc2Vx +dW9pYS1wZ3Aub3JnD55UsYMzE6OACP+mgw5zvT+BBgol8/uFQjHg4krjUCMDFQoI +ApkBApsBAh4BFiEEwdtl1YDXuSJyVEseppmYlfq6zlIAAPQJA5dA0Xqwzn/0uwCq +RlsOVCB3f5NOj1exKnlBvRw0xT1VBee1yxvlUt5eIAoCxWoRlWBJob3TTkhm9AEA +8dyhwPmyGfWHzPw5NFG3xsXrZdNXNvit9WMVAPcmsyR7teXuDlJItxRAdJJc/qfJ 
+YVbBFoaNrhYAAADHhQRhXZSZFgMrZXEBz0BL7THZ9MnCLfSPJ1FMLim9eGkQ3Bfn +M3he5rOwO3t14QI1LjI96OjkeJipMgcFAmEP1Bq/ZHGO7oAAAc9AFnE8iNBaT3OU +EFtxkmWHXtdaYMmGGRdopw9JPXr/UxuunDln5o9dxPxf7q7z26zXrZen+qed/Isa +HsDCwSwEGBYKAWsFgmFdlJkFiQWkj70JEKaZmJX6us5SRxQAAAAAAB4AIHNhbHRA +bm90YXRpb25zLnNlcXVvaWEtcGdwLm9yZxREUizdTcepBzgSMOv2VWQCWbl++3CZ +EbgAWDryvSsyApsCwDGgBBkWCgBvBYJhXZSZCRBKo3SL4S5djkcUAAAAAAAeACBz +YWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmemoGTDjmNQiIzw6HOEddvS0OB7 +UZ/P07jM/EVmnYxTlBYhBAxsnkGpx1UCiH6gUUqjdIvhLl2OAAALYQOXQAMB1oKq +OWxSFmvmgCKNcbAAyA3piF5ERIqs4z07oJvqDYrOWt75UsEIH/04gU/vHc4EmfG2 +JDLJgOLlyTUPkL/08f0ydGZPofFQBhn8HkuFFjnNtJ5oz3GIP4cdWMQFaUw0uvjb +PM9Tm3ptENGd6Ts1AAAAFiEEwdtl1YDXuSJyVEseppmYlfq6zlIAAGpTA5dATR6i +U2GrpUcQgpG+JqfAsGmF4yAOhgFxc1UfidFk3nTup3fLgjipkYY170WLRNbyKkVO +Sodx93GAs58rizO1acDAWiLq3cyEPBFXbyFThbcNPcLl+/77Uk/mgkYrPQFAQWdK +1kSRm4SizDBK37K8ChAAAADHhwRhXZSZEgMrZW8Bx0DMhzvhQo+OsXeqQ6QVw4sF +CaexHh6rLohh7TzL3hQSjoJ27fV6JBkIWdn0LfrMlJIDbSv2SLdlgQMBCgkAAcdA +MO7Dc1myF6Co1fAH+EuP+OxhxP/7V6ljuSCZENDfA49tQkzTta+PniG+pOVB2LHb +huyaKBkqiaogo8LAOQQYFgoAeAWCYV2UmQWJBaSPvQkQppmYlfq6zlJHFAAAAAAA +HgAgc2FsdEBub3RhdGlvbnMuc2VxdW9pYS1wZ3Aub3JnEjBMQAmc/2u45u5FQGmB +QAytjSG2LM3JQN+PPVl5vEkCmwwWIQTB22XVgNe5InJUSx6mmZiV+rrOUgAASdYD +l0DXEHQ9ykNP2rZP35ET1dmiFagFtTj/hLQcWlg16LqvJNGqOgYXuqTerbiOOt02 +XLCBln+wdewpU4ChEffMUDRBfqfQco/YsMqWV7bHJHAO0eC/DMKCjyU90xdH7R/d +QgqsfguR1PqPuJxpXV4bSr6CGAAAAA== +=MSvh +-----END PGP PRIVATE KEY BLOCK-----` + +const keyWithNotation = `-----BEGIN PGP PRIVATE KEY BLOCK----- + +xVgEY9gIshYJKwYBBAHaRw8BAQdAF25fSM8OpFlXZhop4Qpqo5ywGZ4jgWlR +ppjhIKDthREAAQC+LFpzFcMJYcjxGKzBGHN0Px2jU4d04YSRnFAik+lVVQ6u +zRdUZXN0IDx0ZXN0QGV4YW1wbGUuY29tPsLACgQQFgoAfAUCY9gIsgQLCQcI +CRD/utJOCym8pR0UgAAAAAAQAAR0ZXh0QGV4YW1wbGUuY29tdGVzdB8UAAAA +AAASAARiaW5hcnlAZXhhbXBsZS5jb20AAQIDAxUICgQWAAIBAhkBAhsDAh4B +FiEEEMCQTUVGKgCX5rDQ/7rSTgspvKUAAPl5AP9Npz90LxzrB97Qr2DrGwfG +wuYn4FSYwtuPfZHHeoIabwD/QEbvpQJ/NBb9EAZuow4Rirlt1yv19mmnF+j5 
+8yUzhQjHXQRj2AiyEgorBgEEAZdVAQUBAQdARXAo30DmKcyUg6co7OUm0RNT +z9iqFbDBzA8A47JEt1MDAQgHAAD/XKK3lBm0SqMR558HLWdBrNG6NqKuqb5X +joCML987ZNgRD8J4BBgWCAAqBQJj2AiyCRD/utJOCym8pQIbDBYhBBDAkE1F +RioAl+aw0P+60k4LKbylAADRxgEAg7UfBDiDPp5LHcW9D+SgFHk6+GyEU4ev +VppQxdtxPvAA/34snHBX7Twnip1nMt7P4e2hDiw/hwQ7oqioOvc6jMkP +=Z8YJ +-----END PGP PRIVATE KEY BLOCK----- +` diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_config.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_config.go new file mode 100644 index 00000000000..fec41a0e73f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_config.go @@ -0,0 +1,67 @@ +// Copyright (C) 2019 ProtonTech AG + +package packet + +import "math/bits" + +// CipherSuite contains a combination of Cipher and Mode +type CipherSuite struct { + // The cipher function + Cipher CipherFunction + // The AEAD mode of operation. + Mode AEADMode +} + +// AEADConfig collects a number of AEAD parameters along with sensible defaults. +// A nil AEADConfig is valid and results in all default values. +type AEADConfig struct { + // The AEAD mode of operation. + DefaultMode AEADMode + // Amount of octets in each chunk of data + ChunkSize uint64 +} + +// Mode returns the AEAD mode of operation. +func (conf *AEADConfig) Mode() AEADMode { + // If no preference is specified, OCB is used (which is mandatory to implement). + if conf == nil || conf.DefaultMode == 0 { + return AEADModeOCB + } + + mode := conf.DefaultMode + if mode != AEADModeEAX && mode != AEADModeOCB && mode != AEADModeGCM { + panic("AEAD mode unsupported") + } + return mode +} + +// ChunkSizeByte returns the byte indicating the chunk size. 
The effective +// chunk size is computed with the formula uint64(1) << (chunkSizeByte + 6) +// limit to 16 = 4 MiB +// https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-5.13.2 +func (conf *AEADConfig) ChunkSizeByte() byte { + if conf == nil || conf.ChunkSize == 0 { + return 12 // 1 << (12 + 6) == 262144 bytes + } + + chunkSize := conf.ChunkSize + exponent := bits.Len64(chunkSize) - 1 + switch { + case exponent < 6: + exponent = 6 + case exponent > 16: + exponent = 16 + } + + return byte(exponent - 6) +} + +// decodeAEADChunkSize returns the effective chunk size. In 32-bit systems, the +// maximum returned value is 1 << 30. +func decodeAEADChunkSize(c byte) int { + size := uint64(1 << (c + 6)) + if size != uint64(int(size)) { + return 1 << 30 + } + return int(size) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_crypter.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_crypter.go new file mode 100644 index 00000000000..a82b040bdd1 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_crypter.go @@ -0,0 +1,265 @@ +// Copyright (C) 2019 ProtonTech AG + +package packet + +import ( + "bytes" + "crypto/cipher" + "encoding/binary" + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" +) + +// aeadCrypter is an AEAD opener/sealer, its configuration, and data for en/decryption. +type aeadCrypter struct { + aead cipher.AEAD + chunkSize int + initialNonce []byte + associatedData []byte // Chunk-independent associated data + chunkIndex []byte // Chunk counter + packetTag packetType + bytesProcessed int // Amount of plaintext bytes encrypted/decrypted + buffer bytes.Buffer // Buffered bytes across chunks +} + +// computeNonce takes the incremental index and computes an eXclusive OR with +// the least significant 8 bytes of the receivers' initial nonce (see sec. +// 5.16.1 and 5.16.2). It returns the resulting nonce. 
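The chunk-size byte logic in `aead_config.go` above (stored exponent `c`, effective size `1 << (c + 6)`, default byte 12) can be sanity-checked with a small standalone sketch. The helper names here (`chunkSize`, `chunkSizeByte`) are illustrative only and are not part of the vendored package:

```go
package main

import "fmt"

// chunkSize returns the effective chunk size for a stored
// chunk-size byte c, per the formula 1 << (c + 6).
func chunkSize(c byte) int {
	return 1 << (uint(c) + 6)
}

// chunkSizeByte inverts the mapping for power-of-two sizes,
// clamping to the encodable range (byte values 0..16).
func chunkSizeByte(size uint64) byte {
	exponent := 6
	for (uint64(1) << uint(exponent)) < size && exponent < 22 {
		exponent++
	}
	return byte(exponent - 6)
}

func main() {
	fmt.Println(chunkSize(12))         // default byte 12 -> 262144 bytes
	fmt.Println(chunkSizeByte(262144)) // round-trips back to 12
}
```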
+func (wo *aeadCrypter) computeNextNonce() (nonce []byte) { + if wo.packetTag == packetTypeSymmetricallyEncryptedIntegrityProtected { + return append(wo.initialNonce, wo.chunkIndex...) + } + + nonce = make([]byte, len(wo.initialNonce)) + copy(nonce, wo.initialNonce) + offset := len(wo.initialNonce) - 8 + for i := 0; i < 8; i++ { + nonce[i+offset] ^= wo.chunkIndex[i] + } + return +} + +// incrementIndex performs an integer increment by 1 of the integer represented by the +// slice, modifying it accordingly. +func (wo *aeadCrypter) incrementIndex() error { + index := wo.chunkIndex + if len(index) == 0 { + return errors.AEADError("Index has length 0") + } + for i := len(index) - 1; i >= 0; i-- { + if index[i] < 255 { + index[i]++ + return nil + } + index[i] = 0 + } + return errors.AEADError("cannot further increment index") +} + +// aeadDecrypter reads and decrypts bytes. It buffers extra decrypted bytes when +// necessary, similar to aeadEncrypter. +type aeadDecrypter struct { + aeadCrypter // Embedded ciphertext opener + reader io.Reader // 'reader' is a partialLengthReader + peekedBytes []byte // Used to detect last chunk + eof bool +} + +// Read decrypts bytes and reads them into dst. It decrypts when necessary and +// buffers extra decrypted bytes. It returns the number of bytes copied into dst +// and an error. 
+func (ar *aeadDecrypter) Read(dst []byte) (n int, err error) { + // Return buffered plaintext bytes from previous calls + if ar.buffer.Len() > 0 { + return ar.buffer.Read(dst) + } + + // Return EOF if we've previously validated the final tag + if ar.eof { + return 0, io.EOF + } + + // Read a chunk + tagLen := ar.aead.Overhead() + cipherChunkBuf := new(bytes.Buffer) + _, errRead := io.CopyN(cipherChunkBuf, ar.reader, int64(ar.chunkSize + tagLen)) + cipherChunk := cipherChunkBuf.Bytes() + if errRead != nil && errRead != io.EOF { + return 0, errRead + } + decrypted, errChunk := ar.openChunk(cipherChunk) + if errChunk != nil { + return 0, errChunk + } + + // Return decrypted bytes, buffering if necessary + if len(dst) < len(decrypted) { + n = copy(dst, decrypted[:len(dst)]) + ar.buffer.Write(decrypted[len(dst):]) + } else { + n = copy(dst, decrypted) + } + + // Check final authentication tag + if errRead == io.EOF { + errChunk := ar.validateFinalTag(ar.peekedBytes) + if errChunk != nil { + return n, errChunk + } + ar.eof = true // Mark EOF for when we've returned all buffered data + } + return +} + +// Close is noOp. The final authentication tag of the stream was already +// checked in the last Read call. In the future, this function could be used to +// wipe the reader and peeked, decrypted bytes, if necessary. +func (ar *aeadDecrypter) Close() (err error) { + return nil +} + +// openChunk decrypts and checks integrity of an encrypted chunk, returning +// the underlying plaintext and an error. It accesses peeked bytes from next +// chunk, to identify the last chunk and decrypt/validate accordingly. +func (ar *aeadDecrypter) openChunk(data []byte) ([]byte, error) { + tagLen := ar.aead.Overhead() + // Restore carried bytes from last call + chunkExtra := append(ar.peekedBytes, data...) + // 'chunk' contains encrypted bytes, followed by an authentication tag. 
+ chunk := chunkExtra[:len(chunkExtra)-tagLen] + ar.peekedBytes = chunkExtra[len(chunkExtra)-tagLen:] + + adata := ar.associatedData + if ar.aeadCrypter.packetTag == packetTypeAEADEncrypted { + adata = append(ar.associatedData, ar.chunkIndex...) + } + + nonce := ar.computeNextNonce() + plainChunk, err := ar.aead.Open(nil, nonce, chunk, adata) + if err != nil { + return nil, err + } + ar.bytesProcessed += len(plainChunk) + if err = ar.aeadCrypter.incrementIndex(); err != nil { + return nil, err + } + return plainChunk, nil +} + +// Checks the summary tag. It takes into account the total decrypted bytes into +// the associated data. It returns an error, or nil if the tag is valid. +func (ar *aeadDecrypter) validateFinalTag(tag []byte) error { + // Associated: tag, version, cipher, aead, chunk size, ... + amountBytes := make([]byte, 8) + binary.BigEndian.PutUint64(amountBytes, uint64(ar.bytesProcessed)) + + adata := ar.associatedData + if ar.aeadCrypter.packetTag == packetTypeAEADEncrypted { + // ... index ... + adata = append(ar.associatedData, ar.chunkIndex...) + } + + // ... and total number of encrypted octets + adata = append(adata, amountBytes...) + nonce := ar.computeNextNonce() + _, err := ar.aead.Open(nil, nonce, tag, adata) + if err != nil { + return err + } + return nil +} + +// aeadEncrypter encrypts and writes bytes. It encrypts when necessary according +// to the AEAD block size, and buffers the extra encrypted bytes for next write. +type aeadEncrypter struct { + aeadCrypter // Embedded plaintext sealer + writer io.WriteCloser // 'writer' is a partialLengthWriter +} + + +// Write encrypts and writes bytes. It encrypts when necessary and buffers extra +// plaintext bytes for next call. When the stream is finished, Close() MUST be +// called to append the final tag. 
+func (aw *aeadEncrypter) Write(plaintextBytes []byte) (n int, err error) { + // Append plaintextBytes to existing buffered bytes + n, err = aw.buffer.Write(plaintextBytes) + if err != nil { + return n, err + } + // Encrypt and write chunks + for aw.buffer.Len() >= aw.chunkSize { + plainChunk := aw.buffer.Next(aw.chunkSize) + encryptedChunk, err := aw.sealChunk(plainChunk) + if err != nil { + return n, err + } + _, err = aw.writer.Write(encryptedChunk) + if err != nil { + return n, err + } + } + return +} + +// Close encrypts and writes the remaining buffered plaintext if any, appends +// the final authentication tag, and closes the embedded writer. This function +// MUST be called at the end of a stream. +func (aw *aeadEncrypter) Close() (err error) { + // Encrypt and write a chunk if there's buffered data left, or if we haven't + // written any chunks yet. + if aw.buffer.Len() > 0 || aw.bytesProcessed == 0 { + plainChunk := aw.buffer.Bytes() + lastEncryptedChunk, err := aw.sealChunk(plainChunk) + if err != nil { + return err + } + _, err = aw.writer.Write(lastEncryptedChunk) + if err != nil { + return err + } + } + // Compute final tag (associated data: packet tag, version, cipher, aead, + // chunk size... + adata := aw.associatedData + + if aw.aeadCrypter.packetTag == packetTypeAEADEncrypted { + // ... index ... + adata = append(aw.associatedData, aw.chunkIndex...) + } + + // ... and total number of encrypted octets + amountBytes := make([]byte, 8) + binary.BigEndian.PutUint64(amountBytes, uint64(aw.bytesProcessed)) + adata = append(adata, amountBytes...) + + nonce := aw.computeNextNonce() + finalTag := aw.aead.Seal(nil, nonce, nil, adata) + _, err = aw.writer.Write(finalTag) + if err != nil { + return err + } + return aw.writer.Close() +} + +// sealChunk Encrypts and authenticates the given chunk. 
+func (aw *aeadEncrypter) sealChunk(data []byte) ([]byte, error) { + if len(data) > aw.chunkSize { + return nil, errors.AEADError("chunk exceeds maximum length") + } + if aw.associatedData == nil { + return nil, errors.AEADError("can't seal without headers") + } + adata := aw.associatedData + if aw.aeadCrypter.packetTag == packetTypeAEADEncrypted { + adata = append(aw.associatedData, aw.chunkIndex...) + } + + nonce := aw.computeNextNonce() + encrypted := aw.aead.Seal(nil, nonce, data, adata) + aw.bytesProcessed += len(data) + if err := aw.aeadCrypter.incrementIndex(); err != nil { + return nil, err + } + return encrypted, nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_encrypted.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_encrypted.go new file mode 100644 index 00000000000..98bd876bf29 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/aead_encrypted.go @@ -0,0 +1,96 @@ +// Copyright (C) 2019 ProtonTech AG + +package packet + +import ( + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" +) + +// AEADEncrypted represents an AEAD Encrypted Packet. 
+// See https://www.ietf.org/archive/id/draft-koch-openpgp-2015-rfc4880bis-00.html#name-aead-encrypted-data-packet-t +type AEADEncrypted struct { + cipher CipherFunction + mode AEADMode + chunkSizeByte byte + Contents io.Reader // Encrypted chunks and tags + initialNonce []byte // Referred to as IV in RFC4880-bis +} + +// Only currently defined version +const aeadEncryptedVersion = 1 + +func (ae *AEADEncrypted) parse(buf io.Reader) error { + headerData := make([]byte, 4) + if n, err := io.ReadFull(buf, headerData); n < 4 { + return errors.AEADError("could not read aead header:" + err.Error()) + } + // Read initial nonce + mode := AEADMode(headerData[2]) + nonceLen := mode.IvLength() + + // This packet supports only EAX and OCB + // https://www.ietf.org/archive/id/draft-koch-openpgp-2015-rfc4880bis-00.html#name-aead-encrypted-data-packet-t + if nonceLen == 0 || mode > AEADModeOCB { + return errors.AEADError("unknown mode") + } + + initialNonce := make([]byte, nonceLen) + if n, err := io.ReadFull(buf, initialNonce); n < nonceLen { + return errors.AEADError("could not read aead nonce:" + err.Error()) + } + ae.Contents = buf + ae.initialNonce = initialNonce + c := headerData[1] + if _, ok := algorithm.CipherById[c]; !ok { + return errors.UnsupportedError("unknown cipher: " + string(c)) + } + ae.cipher = CipherFunction(c) + ae.mode = mode + ae.chunkSizeByte = headerData[3] + return nil +} + +// Decrypt returns a io.ReadCloser from which decrypted bytes can be read, or +// an error. +func (ae *AEADEncrypted) Decrypt(ciph CipherFunction, key []byte) (io.ReadCloser, error) { + return ae.decrypt(key) +} + +// decrypt prepares an aeadCrypter and returns a ReadCloser from which +// decrypted bytes can be read (see aeadDecrypter.Read()). 
+func (ae *AEADEncrypted) decrypt(key []byte) (io.ReadCloser, error) { + blockCipher := ae.cipher.new(key) + aead := ae.mode.new(blockCipher) + // Carry the first tagLen bytes + tagLen := ae.mode.TagLength() + peekedBytes := make([]byte, tagLen) + n, err := io.ReadFull(ae.Contents, peekedBytes) + if n < tagLen || (err != nil && err != io.EOF) { + return nil, errors.AEADError("Not enough data to decrypt:" + err.Error()) + } + chunkSize := decodeAEADChunkSize(ae.chunkSizeByte) + return &aeadDecrypter{ + aeadCrypter: aeadCrypter{ + aead: aead, + chunkSize: chunkSize, + initialNonce: ae.initialNonce, + associatedData: ae.associatedData(), + chunkIndex: make([]byte, 8), + packetTag: packetTypeAEADEncrypted, + }, + reader: ae.Contents, + peekedBytes: peekedBytes}, nil +} + +// associatedData for chunks: tag, version, cipher, mode, chunk size byte +func (ae *AEADEncrypted) associatedData() []byte { + return []byte{ + 0xD4, + aeadEncryptedVersion, + byte(ae.cipher), + byte(ae.mode), + ae.chunkSizeByte} +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/compressed.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/compressed.go similarity index 95% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/compressed.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/compressed.go index 353f945247c..2f5cad71dab 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/compressed.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/compressed.go @@ -8,7 +8,7 @@ import ( "compress/bzip2" "compress/flate" "compress/zlib" - "golang.org/x/crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/errors" "io" "strconv" ) @@ -47,6 +47,8 @@ func (c *Compressed) parse(r io.Reader) error { } switch buf[0] { + case 0: + c.Body = r case 1: c.Body = flate.NewReader(r) case 2: @@ -60,7 +62,7 @@ func (c *Compressed) parse(r io.Reader) error { 
return err } -// compressedWriteCloser represents the serialized compression stream +// compressedWriterCloser represents the serialized compression stream // header and the compressor. Its Close() method ensures that both the // compressor and serialized stream header are closed. Its Write() // method writes to the compressor. diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/config.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/config.go new file mode 100644 index 00000000000..82ae539981a --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/config.go @@ -0,0 +1,224 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packet + +import ( + "crypto" + "crypto/rand" + "io" + "math/big" + "time" +) + +// Config collects a number of parameters along with sensible defaults. +// A nil *Config is valid and results in all default values. +type Config struct { + // Rand provides the source of entropy. + // If nil, the crypto/rand Reader is used. + Rand io.Reader + // DefaultHash is the default hash function to be used. + // If zero, SHA-256 is used. + DefaultHash crypto.Hash + // DefaultCipher is the cipher to be used. + // If zero, AES-128 is used. + DefaultCipher CipherFunction + // Time returns the current time as the number of seconds since the + // epoch. If Time is nil, time.Now is used. + Time func() time.Time + // DefaultCompressionAlgo is the compression algorithm to be + // applied to the plaintext before encryption. If zero, no + // compression is done. + DefaultCompressionAlgo CompressionAlgo + // CompressionConfig configures the compression settings. + CompressionConfig *CompressionConfig + // S2KCount is only used for symmetric encryption. 
It + // determines the strength of the passphrase stretching when + // the said passphrase is hashed to produce a key. S2KCount + // should be between 1024 and 65011712, inclusive. If Config + // is nil or S2KCount is 0, the value 65536 is used. Not all + // values in the above range can be represented. S2KCount will + // be rounded up to the next representable value if it cannot + // be encoded exactly. When set, it is strongly encouraged to + // use a value that is at least 65536. See RFC 4880 Section + // 3.7.1.3. + S2KCount int + // RSABits is the number of bits in new RSA keys made with NewEntity. + // If zero, then 2048 bit keys are created. + RSABits int + // The public key algorithm to use - will always create a signing primary + // key and encryption subkey. + Algorithm PublicKeyAlgorithm + // Some known primes that are optionally prepopulated by the caller + RSAPrimes []*big.Int + // Curve configures the desired packet.Curve if the Algorithm is PubKeyAlgoECDSA, + // PubKeyAlgoEdDSA, or PubKeyAlgoECDH. If empty Curve25519 is used. + Curve Curve + // AEADConfig configures the use of the new AEAD Encrypted Data Packet, + // defined in the draft of the next version of the OpenPGP specification. + // If a non-nil AEADConfig is passed, usage of this packet is enabled. By + // default, it is disabled. See the documentation of AEADConfig for more + // configuration options related to AEAD. + // **Note: using this option may break compatibility with other OpenPGP + // implementations, as well as future versions of this library.** + AEADConfig *AEADConfig + // V5Keys configures version 5 key generation. If false, this package still + // supports version 5 keys, but produces version 4 keys. + V5Keys bool + // "The validity period of the key. This is the number of seconds after + // the key creation time that the key expires. If this is not present + // or has a value of zero, the key never expires. 
This is found only on + // a self-signature." + // https://tools.ietf.org/html/rfc4880#section-5.2.3.6 + KeyLifetimeSecs uint32 + // "The validity period of the signature. This is the number of seconds + // after the signature creation time that the signature expires. If + // this is not present or has a value of zero, it never expires." + // https://tools.ietf.org/html/rfc4880#section-5.2.3.10 + SigLifetimeSecs uint32 + // SigningKeyId is used to specify the signing key to use (by Key ID). + // By default, the signing key is selected automatically, preferring + // signing subkeys if available. + SigningKeyId uint64 + // SigningIdentity is used to specify a user ID (packet Signer's User ID, type 28) + // when producing a generic certification signature onto an existing user ID. + // The identity must be present in the signer Entity. + SigningIdentity string + // InsecureAllowUnauthenticatedMessages controls whether it is tolerated to read + // encrypted messages without Modification Detection Code (MDC). + // MDC is mandated by the IETF OpenPGP Crypto Refresh draft and has long been implemented + // in most OpenPGP implementations. Messages without MDC are considered unnecessarily + // insecure and should be prevented whenever possible. + // In case one needs to deal with messages from very old OpenPGP implementations, there + // might be no other way than to tolerate the missing MDC. Setting this flag allows this + // mode of operation. It should be considered a measure of last resort. + InsecureAllowUnauthenticatedMessages bool + // KnownNotations is a map of Notation Data names to bools, which controls + // the notation names that are allowed to be present in critical Notation Data + // signature subpackets. + KnownNotations map[string]bool + // SignatureNotations is a list of Notations to be added to any signatures.
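Config documents that a nil *Config is valid; every accessor in the file guards the nil receiver and substitutes a default. The idiom can be sketched in isolation; the `config` type and its field here are illustrative stand-ins, not the package's own API:

```go
package main

import "fmt"

// config mimics the nil-receiver pattern used by packet.Config: every
// accessor is safe to call on a nil *config and falls back to a default.
// This type is illustrative only, not the library's.
type config struct {
	S2KCount int
}

// PasswordHashIterations returns the configured S2K count, or 0 so the
// caller applies the library default (65536, per the docs above).
func (c *config) PasswordHashIterations() int {
	if c == nil || c.S2KCount == 0 {
		return 0
	}
	return c.S2KCount
}

func main() {
	var c *config // nil is a valid configuration
	fmt.Println(c.PasswordHashIterations(), (&config{S2KCount: 65536}).PasswordHashIterations())
}
```

Because the methods tolerate a nil receiver, callers never have to allocate a Config just to get defaults.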
+ SignatureNotations []*Notation +} + +func (c *Config) Random() io.Reader { + if c == nil || c.Rand == nil { + return rand.Reader + } + return c.Rand +} + +func (c *Config) Hash() crypto.Hash { + if c == nil || uint(c.DefaultHash) == 0 { + return crypto.SHA256 + } + return c.DefaultHash +} + +func (c *Config) Cipher() CipherFunction { + if c == nil || uint8(c.DefaultCipher) == 0 { + return CipherAES128 + } + return c.DefaultCipher +} + +func (c *Config) Now() time.Time { + if c == nil || c.Time == nil { + return time.Now() + } + return c.Time() +} + +// KeyLifetime returns the validity period of the key. +func (c *Config) KeyLifetime() uint32 { + if c == nil { + return 0 + } + return c.KeyLifetimeSecs +} + +// SigLifetime returns the validity period of the signature. +func (c *Config) SigLifetime() uint32 { + if c == nil { + return 0 + } + return c.SigLifetimeSecs +} + +func (c *Config) Compression() CompressionAlgo { + if c == nil { + return CompressionNone + } + return c.DefaultCompressionAlgo +} + +func (c *Config) PasswordHashIterations() int { + if c == nil || c.S2KCount == 0 { + return 0 + } + return c.S2KCount +} + +func (c *Config) RSAModulusBits() int { + if c == nil || c.RSABits == 0 { + return 2048 + } + return c.RSABits +} + +func (c *Config) PublicKeyAlgorithm() PublicKeyAlgorithm { + if c == nil || c.Algorithm == 0 { + return PubKeyAlgoRSA + } + return c.Algorithm +} + +func (c *Config) CurveName() Curve { + if c == nil || c.Curve == "" { + return Curve25519 + } + return c.Curve +} + +func (c *Config) AEAD() *AEADConfig { + if c == nil { + return nil + } + return c.AEADConfig +} + +func (c *Config) SigningKey() uint64 { + if c == nil { + return 0 + } + return c.SigningKeyId +} + +func (c *Config) SigningUserId() string { + if c == nil { + return "" + } + return c.SigningIdentity +} + +func (c *Config) AllowUnauthenticatedMessages() bool { + if c == nil { + return false + } + return c.InsecureAllowUnauthenticatedMessages +} + +func (c *Config) 
KnownNotation(notationName string) bool { + if c == nil { + return false + } + return c.KnownNotations[notationName] +} + +func (c *Config) Notations() []*Notation { + if c == nil { + return nil + } + return c.SignatureNotations +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/encrypted_key.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/encrypted_key.go similarity index 57% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/encrypted_key.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/encrypted_key.go index 6d7639722c9..eeff2902c12 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/encrypted_key.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/encrypted_key.go @@ -12,8 +12,10 @@ import ( "math/big" "strconv" - "golang.org/x/crypto/openpgp/elgamal" - "golang.org/x/crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/ecdh" + "github.com/ProtonMail/go-crypto/openpgp/elgamal" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/encoding" ) const encryptedKeyVersion = 3 @@ -23,10 +25,10 @@ const encryptedKeyVersion = 3 type EncryptedKey struct { KeyId uint64 Algo PublicKeyAlgorithm - CipherFunc CipherFunction // only valid after a successful Decrypt + CipherFunc CipherFunction // only valid after a successful Decrypt for a v3 packet Key []byte // only valid after a successful Decrypt - encryptedMPI1, encryptedMPI2 parsedMPI + encryptedMPI1, encryptedMPI2 encoding.Field } func (e *EncryptedKey) parse(r io.Reader) (err error) { @@ -42,17 +44,28 @@ func (e *EncryptedKey) parse(r io.Reader) (err error) { e.Algo = PublicKeyAlgorithm(buf[9]) switch e.Algo { case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: - e.encryptedMPI1.bytes, e.encryptedMPI1.bitLength, err = readMPI(r) - if err != nil { + e.encryptedMPI1 = new(encoding.MPI) + if _, err = 
e.encryptedMPI1.ReadFrom(r); err != nil { return } case PubKeyAlgoElGamal: - e.encryptedMPI1.bytes, e.encryptedMPI1.bitLength, err = readMPI(r) - if err != nil { + e.encryptedMPI1 = new(encoding.MPI) + if _, err = e.encryptedMPI1.ReadFrom(r); err != nil { return } - e.encryptedMPI2.bytes, e.encryptedMPI2.bitLength, err = readMPI(r) - if err != nil { + + e.encryptedMPI2 = new(encoding.MPI) + if _, err = e.encryptedMPI2.ReadFrom(r); err != nil { + return + } + case PubKeyAlgoECDH: + e.encryptedMPI1 = new(encoding.MPI) + if _, err = e.encryptedMPI1.ReadFrom(r); err != nil { + return + } + + e.encryptedMPI2 = new(encoding.OID) + if _, err = e.encryptedMPI2.ReadFrom(r); err != nil { return } } @@ -72,6 +85,16 @@ func checksumKeyMaterial(key []byte) uint16 { // private key must have been decrypted first. // If config is nil, sensible defaults will be used. func (e *EncryptedKey) Decrypt(priv *PrivateKey, config *Config) error { + if e.KeyId != 0 && e.KeyId != priv.KeyId { + return errors.InvalidArgumentError("cannot decrypt encrypted session key for key id " + strconv.FormatUint(e.KeyId, 16) + " with private key id " + strconv.FormatUint(priv.KeyId, 16)) + } + if e.Algo != priv.PubKeyAlgo { + return errors.InvalidArgumentError("cannot decrypt encrypted session key of type " + strconv.Itoa(int(e.Algo)) + " with private key of type " + strconv.Itoa(int(priv.PubKeyAlgo))) + } + if priv.Dummy() { + return errors.ErrDummyPrivateKey("dummy key found") + } + var err error var b []byte @@ -81,13 +104,18 @@ func (e *EncryptedKey) Decrypt(priv *PrivateKey, config *Config) error { case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: // Supports both *rsa.PrivateKey and crypto.Decrypter k := priv.PrivateKey.(crypto.Decrypter) - b, err = k.Decrypt(config.Random(), padToKeySize(k.Public().(*rsa.PublicKey), e.encryptedMPI1.bytes), nil) + b, err = k.Decrypt(config.Random(), padToKeySize(k.Public().(*rsa.PublicKey), e.encryptedMPI1.Bytes()), nil) case PubKeyAlgoElGamal: - c1 := 
new(big.Int).SetBytes(e.encryptedMPI1.bytes) - c2 := new(big.Int).SetBytes(e.encryptedMPI2.bytes) + c1 := new(big.Int).SetBytes(e.encryptedMPI1.Bytes()) + c2 := new(big.Int).SetBytes(e.encryptedMPI2.Bytes()) b, err = elgamal.Decrypt(priv.PrivateKey.(*elgamal.PrivateKey), c1, c2) + case PubKeyAlgoECDH: + vsG := e.encryptedMPI1.Bytes() + m := e.encryptedMPI2.Bytes() + oid := priv.PublicKey.oid.EncodedBytes() + b, err = ecdh.Decrypt(priv.PrivateKey.(*ecdh.PrivateKey), vsG, m, oid, priv.PublicKey.Fingerprint[:]) default: - err = errors.InvalidArgumentError("cannot decrypted encrypted session key with private key of type " + strconv.Itoa(int(priv.PubKeyAlgo))) + err = errors.InvalidArgumentError("cannot decrypt encrypted session key with private key of type " + strconv.Itoa(int(priv.PubKeyAlgo))) } if err != nil { @@ -95,6 +123,10 @@ func (e *EncryptedKey) Decrypt(priv *PrivateKey, config *Config) error { } e.CipherFunc = CipherFunction(b[0]) + if !e.CipherFunc.IsSupported() { + return errors.UnsupportedError("unsupported encryption function") + } + e.Key = b[1 : len(b)-2] expectedChecksum := uint16(b[len(b)-2])<<8 | uint16(b[len(b)-1]) checksum := checksumKeyMaterial(e.Key) @@ -110,14 +142,19 @@ func (e *EncryptedKey) Serialize(w io.Writer) error { var mpiLen int switch e.Algo { case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: - mpiLen = 2 + len(e.encryptedMPI1.bytes) + mpiLen = int(e.encryptedMPI1.EncodedLength()) case PubKeyAlgoElGamal: - mpiLen = 2 + len(e.encryptedMPI1.bytes) + 2 + len(e.encryptedMPI2.bytes) + mpiLen = int(e.encryptedMPI1.EncodedLength()) + int(e.encryptedMPI2.EncodedLength()) + case PubKeyAlgoECDH: + mpiLen = int(e.encryptedMPI1.EncodedLength()) + int(e.encryptedMPI2.EncodedLength()) default: return errors.InvalidArgumentError("don't know how to serialize encrypted key type " + strconv.Itoa(int(e.Algo))) } - serializeHeader(w, packetTypeEncryptedKey, 1 /* version */ +8 /* key id */ +1 /* algo */ +mpiLen) + err := serializeHeader(w, 
packetTypeEncryptedKey, 1 /* version */ +8 /* key id */ +1 /* algo */ +mpiLen) + if err != nil { + return err + } w.Write([]byte{encryptedKeyVersion}) binary.Write(w, binary.BigEndian, e.KeyId) @@ -125,14 +162,23 @@ func (e *EncryptedKey) Serialize(w io.Writer) error { switch e.Algo { case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: - writeMPIs(w, e.encryptedMPI1) + _, err := w.Write(e.encryptedMPI1.EncodedBytes()) + return err case PubKeyAlgoElGamal: - writeMPIs(w, e.encryptedMPI1, e.encryptedMPI2) + if _, err := w.Write(e.encryptedMPI1.EncodedBytes()); err != nil { + return err + } + _, err := w.Write(e.encryptedMPI2.EncodedBytes()) + return err + case PubKeyAlgoECDH: + if _, err := w.Write(e.encryptedMPI1.EncodedBytes()); err != nil { + return err + } + _, err := w.Write(e.encryptedMPI2.EncodedBytes()) + return err default: panic("internal error") } - - return nil } // SerializeEncryptedKey serializes an encrypted key packet to w that contains @@ -156,6 +202,8 @@ func SerializeEncryptedKey(w io.Writer, pub *PublicKey, cipherFunc CipherFunctio return serializeEncryptedKeyRSA(w, config.Random(), buf, pub.PublicKey.(*rsa.PublicKey), keyBlock) case PubKeyAlgoElGamal: return serializeEncryptedKeyElGamal(w, config.Random(), buf, pub.PublicKey.(*elgamal.PublicKey), keyBlock) + case PubKeyAlgoECDH: + return serializeEncryptedKeyECDH(w, config.Random(), buf, pub.PublicKey.(*ecdh.PublicKey), keyBlock, pub.oid, pub.Fingerprint) case PubKeyAlgoDSA, PubKeyAlgoRSASignOnly: return errors.InvalidArgumentError("cannot encrypt to public key of type " + strconv.Itoa(int(pub.PubKeyAlgo))) } @@ -169,7 +217,8 @@ func serializeEncryptedKeyRSA(w io.Writer, rand io.Reader, header [10]byte, pub return errors.InvalidArgumentError("RSA encryption failed: " + err.Error()) } - packetLen := 10 /* header length */ + 2 /* mpi size */ + len(cipherText) + cipherMPI := encoding.NewMPI(cipherText) + packetLen := 10 /* header length */ + int(cipherMPI.EncodedLength()) err = serializeHeader(w, 
packetTypeEncryptedKey, packetLen) if err != nil { @@ -179,7 +228,8 @@ func serializeEncryptedKeyRSA(w io.Writer, rand io.Reader, header [10]byte, pub if err != nil { return err } - return writeMPI(w, 8*uint16(len(cipherText)), cipherText) + _, err = w.Write(cipherMPI.EncodedBytes()) + return err } func serializeEncryptedKeyElGamal(w io.Writer, rand io.Reader, header [10]byte, pub *elgamal.PublicKey, keyBlock []byte) error { @@ -200,9 +250,37 @@ func serializeEncryptedKeyElGamal(w io.Writer, rand io.Reader, header [10]byte, if err != nil { return err } - err = writeBig(w, c1) + if _, err = w.Write(new(encoding.MPI).SetBig(c1).EncodedBytes()); err != nil { + return err + } + _, err = w.Write(new(encoding.MPI).SetBig(c2).EncodedBytes()) + return err +} + +func serializeEncryptedKeyECDH(w io.Writer, rand io.Reader, header [10]byte, pub *ecdh.PublicKey, keyBlock []byte, oid encoding.Field, fingerprint []byte) error { + vsG, c, err := ecdh.Encrypt(rand, pub, keyBlock, oid.EncodedBytes(), fingerprint) + if err != nil { + return errors.InvalidArgumentError("ECDH encryption failed: " + err.Error()) + } + + g := encoding.NewMPI(vsG) + m := encoding.NewOID(c) + + packetLen := 10 /* header length */ + packetLen += int(g.EncodedLength()) + int(m.EncodedLength()) + + err = serializeHeader(w, packetTypeEncryptedKey, packetLen) + if err != nil { + return err + } + + _, err = w.Write(header[:]) if err != nil { return err } - return writeBig(w, c2) + if _, err = w.Write(g.EncodedBytes()); err != nil { + return err + } + _, err = w.Write(m.EncodedBytes()) + return err } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/literal.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/literal.go similarity index 96% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/literal.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/literal.go index 1a9ec6e51e8..4be987609be 100644 --- 
a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/literal.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/literal.go @@ -11,6 +11,7 @@ import ( // LiteralData represents an encrypted file. See RFC 4880, section 5.9. type LiteralData struct { + Format uint8 IsBinary bool FileName string Time uint32 // Unix epoch time. Either creation time or modification time. 0 means undefined. @@ -31,7 +32,8 @@ func (l *LiteralData) parse(r io.Reader) (err error) { return } - l.IsBinary = buf[0] == 'b' + l.Format = buf[0] + l.IsBinary = l.Format == 'b' fileNameLen := int(buf[1]) _, err = readFull(r, buf[:fileNameLen]) diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/notation.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/notation.go new file mode 100644 index 00000000000..2c3e3f50b25 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/notation.go @@ -0,0 +1,29 @@ +package packet + +// Notation type represents a Notation Data subpacket +// see https://tools.ietf.org/html/rfc4880#section-5.2.3.16 +type Notation struct { + Name string + Value []byte + IsCritical bool + IsHumanReadable bool +} + +func (notation *Notation) getData() []byte { + nameData := []byte(notation.Name) + nameLen := len(nameData) + valueLen := len(notation.Value) + + data := make([]byte, 8+nameLen+valueLen) + if notation.IsHumanReadable { + data[0] = 0x80 + } + + data[4] = byte(nameLen >> 8) + data[5] = byte(nameLen) + data[6] = byte(valueLen >> 8) + data[7] = byte(valueLen) + copy(data[8:8+nameLen], nameData) + copy(data[8+nameLen:], notation.Value) + return data +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/ocfb.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/ocfb.go similarity index 92% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/ocfb.go rename to 
.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/ocfb.go index ce2a33a547c..4f26d0a00b7 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/ocfb.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/ocfb.go @@ -85,8 +85,7 @@ type ocfbDecrypter struct { // NewOCFBDecrypter returns a cipher.Stream which decrypts data with OpenPGP's // cipher feedback mode using the given cipher.Block. Prefix must be the first // blockSize + 2 bytes of the ciphertext, where blockSize is the cipher.Block's -// block size. If an incorrect key is detected then nil is returned. On -// successful exit, blockSize+2 bytes of decrypted data are written into +// block size. On successful exit, blockSize+2 bytes of decrypted data are written into // prefix. Resync determines if the "resynchronization step" from RFC 4880, // 13.9 step 7 is performed. Different parts of OpenPGP vary on this point. func NewOCFBDecrypter(block cipher.Block, prefix []byte, resync OCFBResyncOption) cipher.Stream { @@ -112,11 +111,6 @@ func NewOCFBDecrypter(block cipher.Block, prefix []byte, resync OCFBResyncOption prefixCopy[blockSize] ^= x.fre[0] prefixCopy[blockSize+1] ^= x.fre[1] - if prefixCopy[blockSize-2] != prefixCopy[blockSize] || - prefixCopy[blockSize-1] != prefixCopy[blockSize+1] { - return nil - } - if resync { block.Encrypt(x.fre, prefix[2:]) } else { diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/one_pass_signature.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/one_pass_signature.go similarity index 88% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/one_pass_signature.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/one_pass_signature.go index 1713503395e..fff119e6397 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/one_pass_signature.go +++ 
b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/one_pass_signature.go @@ -7,8 +7,8 @@ package packet import ( "crypto" "encoding/binary" - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/s2k" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" "io" "strconv" ) @@ -37,7 +37,7 @@ func (ops *OnePassSignature) parse(r io.Reader) (err error) { } var ok bool - ops.Hash, ok = s2k.HashIdToHash(buf[2]) + ops.Hash, ok = algorithm.HashIdToHashWithSha1(buf[2]) if !ok { return errors.UnsupportedError("hash function: " + strconv.Itoa(int(buf[2]))) } @@ -55,7 +55,7 @@ func (ops *OnePassSignature) Serialize(w io.Writer) error { buf[0] = onePassSignatureVersion buf[1] = uint8(ops.SigType) var ok bool - buf[2], ok = s2k.HashToHashId(ops.Hash) + buf[2], ok = algorithm.HashToHashId(ops.Hash) if !ok { return errors.UnsupportedError("hash type: " + strconv.Itoa(int(ops.Hash))) } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/opaque.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/opaque.go similarity index 89% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/opaque.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/opaque.go index 3984477310f..4f8204079f2 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/opaque.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/opaque.go @@ -7,8 +7,9 @@ package packet import ( "bytes" "io" + "io/ioutil" - "golang.org/x/crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/errors" ) // OpaquePacket represents an OpenPGP packet as raw, unparsed data. 
This is @@ -25,7 +26,7 @@ type OpaquePacket struct { } func (op *OpaquePacket) parse(r io.Reader) (err error) { - op.Contents, err = io.ReadAll(r) + op.Contents, err = ioutil.ReadAll(r) return } @@ -83,8 +84,9 @@ func (or *OpaqueReader) Next() (op *OpaquePacket, err error) { // OpaqueSubpacket represents an unparsed OpenPGP subpacket, // as found in signature and user attribute packets. type OpaqueSubpacket struct { - SubType uint8 - Contents []byte + SubType uint8 + EncodedLength []byte // Store the original encoded length for signature verifications. + Contents []byte } // OpaqueSubpackets extracts opaque, unparsed OpenPGP subpackets from @@ -108,6 +110,7 @@ func OpaqueSubpackets(contents []byte) (result []*OpaqueSubpacket, err error) { func nextSubpacket(contents []byte) (subHeaderLen int, subPacket *OpaqueSubpacket, err error) { // RFC 4880, section 5.2.3.1 var subLen uint32 + var encodedLength []byte if len(contents) < 1 { goto Truncated } @@ -118,6 +121,7 @@ func nextSubpacket(contents []byte) (subHeaderLen int, subPacket *OpaqueSubpacke if len(contents) < subHeaderLen { goto Truncated } + encodedLength = contents[0:1] subLen = uint32(contents[0]) contents = contents[1:] case contents[0] < 255: @@ -125,6 +129,7 @@ func nextSubpacket(contents []byte) (subHeaderLen int, subPacket *OpaqueSubpacke if len(contents) < subHeaderLen { goto Truncated } + encodedLength = contents[0:2] subLen = uint32(contents[0]-192)<<8 + uint32(contents[1]) + 192 contents = contents[2:] default: @@ -132,16 +137,19 @@ func nextSubpacket(contents []byte) (subHeaderLen int, subPacket *OpaqueSubpacke if len(contents) < subHeaderLen { goto Truncated } + encodedLength = contents[0:5] subLen = uint32(contents[1])<<24 | uint32(contents[2])<<16 | uint32(contents[3])<<8 | uint32(contents[4]) contents = contents[5:] + } if subLen > uint32(len(contents)) || subLen == 0 { goto Truncated } subPacket.SubType = contents[0] + subPacket.EncodedLength = encodedLength subPacket.Contents = 
contents[1:subLen] return Truncated: @@ -151,7 +159,9 @@ Truncated: func (osp *OpaqueSubpacket) Serialize(w io.Writer) (err error) { buf := make([]byte, 6) - n := serializeSubpacketLength(buf, len(osp.Contents)+1) + copy(buf, osp.EncodedLength) + n := len(osp.EncodedLength) + buf[n] = osp.SubType if _, err = w.Write(buf[:n+1]); err != nil { return diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/packet.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/packet.go similarity index 64% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/packet.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/packet.go index 0a19794a8e4..f73f6f40d62 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/packet.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/packet.go @@ -4,26 +4,16 @@ // Package packet implements parsing and serialization of OpenPGP packets, as // specified in RFC 4880. -// -// Deprecated: this package is unmaintained except for security fixes. New -// applications should consider a more focused, modern alternative to OpenPGP -// for their specific task. If you are required to interoperate with OpenPGP -// systems and need a maintained package, consider a community fork. -// See https://golang.org/issue/44226. 
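nextSubpacket above decodes the variable-length subpacket length from RFC 4880, Section 5.2.3.1: one octet below 192, two octets from 192 through 254, and five octets behind a 0xFF marker. A minimal standalone sketch of that decoding; the function name is mine, not the package's:

```go
package main

import "fmt"

// decodeSubpacketLength decodes an OpenPGP signature subpacket length
// (RFC 4880, Section 5.2.3.1) and reports how many octets the encoding
// consumed. Illustrative helper; the vendored code inlines this logic.
func decodeSubpacketLength(b []byte) (length uint32, consumed int) {
	switch {
	case b[0] < 192: // one-octet form
		return uint32(b[0]), 1
	case b[0] < 255: // two-octet form
		return uint32(b[0]-192)<<8 + uint32(b[1]) + 192, 2
	default: // five-octet form behind a 0xFF marker
		return uint32(b[1])<<24 | uint32(b[2])<<16 | uint32(b[3])<<8 | uint32(b[4]), 5
	}
}

func main() {
	length, consumed := decodeSubpacketLength([]byte{195, 40})
	fmt.Println(length, consumed)
}
```

Keeping the raw octets around, as the new OpaqueSubpacket.EncodedLength field does, lets Serialize re-emit the original encoding byte for byte, which matters when the subpacket bytes feed a signature hash.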
-package packet // import "golang.org/x/crypto/openpgp/packet" +package packet // import "github.com/ProtonMail/go-crypto/openpgp/packet" import ( - "bufio" - "crypto/aes" + "bytes" "crypto/cipher" - "crypto/des" "crypto/rsa" "io" - "math/big" - "math/bits" - "golang.org/x/crypto/cast5" - "golang.org/x/crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" ) // readFull is the same as io.ReadFull except that reading zero bytes returns @@ -106,68 +96,43 @@ func (r *partialLengthReader) Read(p []byte) (n int, err error) { // See RFC 4880, section 4.2.2.4. type partialLengthWriter struct { w io.WriteCloser + buf bytes.Buffer lengthByte [1]byte - sentFirst bool - buf []byte } -// RFC 4880 4.2.2.4: the first partial length MUST be at least 512 octets long. -const minFirstPartialWrite = 512 - func (w *partialLengthWriter) Write(p []byte) (n int, err error) { - off := 0 - if !w.sentFirst { - if len(w.buf) > 0 || len(p) < minFirstPartialWrite { - off = len(w.buf) - w.buf = append(w.buf, p...) 
- if len(w.buf) < minFirstPartialWrite { - return len(p), nil - } - p = w.buf - w.buf = nil - } - w.sentFirst = true - } - - power := uint8(30) - for len(p) > 0 { - l := 1 << power - if len(p) < l { - power = uint8(bits.Len32(uint32(len(p)))) - 1 - l = 1 << power - } - w.lengthByte[0] = 224 + power - _, err = w.w.Write(w.lengthByte[:]) - if err == nil { - var m int - m, err = w.w.Write(p[:l]) - n += m - } - if err != nil { - if n < off { - return 0, err + bufLen := w.buf.Len() + if bufLen > 512 { + for power := uint(30); ; power-- { + l := 1 << power + if bufLen >= l { + w.lengthByte[0] = 224 + uint8(power) + _, err = w.w.Write(w.lengthByte[:]) + if err != nil { + return + } + var m int + m, err = w.w.Write(w.buf.Next(l)) + if err != nil { + return + } + if m != l { + return 0, io.ErrShortWrite + } + break } - return n - off, err } - p = p[l:] } - return n - off, nil + return w.buf.Write(p) } -func (w *partialLengthWriter) Close() error { - if len(w.buf) > 0 { - // In this case we can't send a 512 byte packet. - // Just send what we have. - p := w.buf - w.sentFirst = true - w.buf = nil - if _, err := w.Write(p); err != nil { - return err - } +func (w *partialLengthWriter) Close() (err error) { + len := w.buf.Len() + err = serializeLength(w.w, len) + if err != nil { + return err } - - w.lengthByte[0] = 0 - _, err := w.w.Write(w.lengthByte[:]) + _, err = w.buf.WriteTo(w.w) if err != nil { return err } @@ -252,25 +217,43 @@ func readHeader(r io.Reader) (tag packetType, length int64, contents io.Reader, // serializeHeader writes an OpenPGP packet header to w. See RFC 4880, section // 4.2. func serializeHeader(w io.Writer, ptype packetType, length int) (err error) { - var buf [6]byte - var n int + err = serializeType(w, ptype) + if err != nil { + return + } + return serializeLength(w, length) +} +// serializeType writes an OpenPGP packet type to w. See RFC 4880, section +// 4.2. 
+func serializeType(w io.Writer, ptype packetType) (err error) { + var buf [1]byte buf[0] = 0x80 | 0x40 | byte(ptype) + _, err = w.Write(buf[:]) + return +} + +// serializeLength writes an OpenPGP packet length to w. See RFC 4880, section +// 4.2.2. +func serializeLength(w io.Writer, length int) (err error) { + var buf [5]byte + var n int + if length < 192 { - buf[1] = byte(length) - n = 2 + buf[0] = byte(length) + n = 1 } else if length < 8384 { length -= 192 - buf[1] = 192 + byte(length>>8) - buf[2] = byte(length) - n = 3 + buf[0] = 192 + byte(length>>8) + buf[1] = byte(length) + n = 2 } else { - buf[1] = 255 - buf[2] = byte(length >> 24) - buf[3] = byte(length >> 16) - buf[4] = byte(length >> 8) - buf[5] = byte(length) - n = 6 + buf[0] = 255 + buf[1] = byte(length >> 24) + buf[2] = byte(length >> 16) + buf[3] = byte(length >> 8) + buf[4] = byte(length) + n = 5 } _, err = w.Write(buf[:n]) @@ -281,9 +264,7 @@ func serializeHeader(w io.Writer, ptype packetType, length int) (err error) { // length of the packet is unknown. It returns a io.WriteCloser which can be // used to write the contents of the packet. See RFC 4880, section 4.2. 
func serializeStreamHeader(w io.WriteCloser, ptype packetType) (out io.WriteCloser, err error) { - var buf [1]byte - buf[0] = 0x80 | 0x40 | byte(ptype) - _, err = w.Write(buf[:]) + err = serializeType(w, ptype) if err != nil { return } @@ -321,33 +302,27 @@ func consumeAll(r io.Reader) (n int64, err error) { type packetType uint8 const ( - packetTypeEncryptedKey packetType = 1 - packetTypeSignature packetType = 2 - packetTypeSymmetricKeyEncrypted packetType = 3 - packetTypeOnePassSignature packetType = 4 - packetTypePrivateKey packetType = 5 - packetTypePublicKey packetType = 6 - packetTypePrivateSubkey packetType = 7 - packetTypeCompressed packetType = 8 - packetTypeSymmetricallyEncrypted packetType = 9 - packetTypeLiteralData packetType = 11 - packetTypeUserId packetType = 13 - packetTypePublicSubkey packetType = 14 - packetTypeUserAttribute packetType = 17 - packetTypeSymmetricallyEncryptedMDC packetType = 18 + packetTypeEncryptedKey packetType = 1 + packetTypeSignature packetType = 2 + packetTypeSymmetricKeyEncrypted packetType = 3 + packetTypeOnePassSignature packetType = 4 + packetTypePrivateKey packetType = 5 + packetTypePublicKey packetType = 6 + packetTypePrivateSubkey packetType = 7 + packetTypeCompressed packetType = 8 + packetTypeSymmetricallyEncrypted packetType = 9 + packetTypeLiteralData packetType = 11 + packetTypeUserId packetType = 13 + packetTypePublicSubkey packetType = 14 + packetTypeUserAttribute packetType = 17 + packetTypeSymmetricallyEncryptedIntegrityProtected packetType = 18 + packetTypeAEADEncrypted packetType = 20 ) -// peekVersion detects the version of a public key packet about to -// be read. A bufio.Reader at the original position of the io.Reader -// is returned. -func peekVersion(r io.Reader) (bufr *bufio.Reader, ver byte, err error) { - bufr = bufio.NewReader(r) - var verBuf []byte - if verBuf, err = bufr.Peek(1); err != nil { - return - } - ver = verBuf[0] - return +// EncryptedDataPacket holds encrypted data. 
It is currently implemented by +// SymmetricallyEncrypted and AEADEncrypted. +type EncryptedDataPacket interface { + Decrypt(CipherFunction, []byte) (io.ReadCloser, error) } // Read reads a single OpenPGP packet from the given io.Reader. If there is an @@ -362,16 +337,7 @@ func Read(r io.Reader) (p Packet, err error) { case packetTypeEncryptedKey: p = new(EncryptedKey) case packetTypeSignature: - var version byte - // Detect signature version - if contents, version, err = peekVersion(contents); err != nil { - return - } - if version < 4 { - p = new(SignatureV3) - } else { - p = new(Signature) - } + p = new(Signature) case packetTypeSymmetricKeyEncrypted: p = new(SymmetricKeyEncrypted) case packetTypeOnePassSignature: @@ -383,16 +349,8 @@ func Read(r io.Reader) (p Packet, err error) { } p = pk case packetTypePublicKey, packetTypePublicSubkey: - var version byte - if contents, version, err = peekVersion(contents); err != nil { - return - } isSubkey := tag == packetTypePublicSubkey - if version < 4 { - p = &PublicKeyV3{IsSubkey: isSubkey} - } else { - p = &PublicKey{IsSubkey: isSubkey} - } + p = &PublicKey{IsSubkey: isSubkey} case packetTypeCompressed: p = new(Compressed) case packetTypeSymmetricallyEncrypted: @@ -403,10 +361,12 @@ func Read(r io.Reader) (p Packet, err error) { p = new(UserId) case packetTypeUserAttribute: p = new(UserAttribute) - case packetTypeSymmetricallyEncryptedMDC: + case packetTypeSymmetricallyEncryptedIntegrityProtected: se := new(SymmetricallyEncrypted) - se.MDC = true + se.IntegrityProtected = true p = se + case packetTypeAEADEncrypted: + p = new(AEADEncrypted) default: err = errors.UnknownPacketTypeError(tag) } @@ -424,8 +384,8 @@ func Read(r io.Reader) (p Packet, err error) { type SignatureType uint8 const ( - SigTypeBinary SignatureType = 0 - SigTypeText = 1 + SigTypeBinary SignatureType = 0x00 + SigTypeText = 0x01 SigTypeGenericCert = 0x10 SigTypePersonaCert = 0x11 SigTypeCasualCert = 0x12 @@ -435,6 +395,7 @@ const ( 
SigTypeDirectSignature = 0x1F SigTypeKeyRevocation = 0x20 SigTypeSubkeyRevocation = 0x28 + SigTypeCertificationRevocation = 0x30 ) // PublicKeyAlgorithm represents the different public key system specified for @@ -449,6 +410,8 @@ const ( // RFC 6637, Section 5. PubKeyAlgoECDH PublicKeyAlgorithm = 18 PubKeyAlgoECDSA PublicKeyAlgorithm = 19 + // https://www.ietf.org/archive/id/draft-koch-eddsa-for-openpgp-04.txt + PubKeyAlgoEdDSA PublicKeyAlgorithm = 22 // Deprecated in RFC 4880, Section 13.5. Use key flags instead. PubKeyAlgoRSAEncryptOnly PublicKeyAlgorithm = 2 @@ -459,7 +422,7 @@ const ( // key of the given type. func (pka PublicKeyAlgorithm) CanEncrypt() bool { switch pka { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoElGamal: + case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoElGamal, PubKeyAlgoECDH: return true } return false @@ -469,7 +432,7 @@ func (pka PublicKeyAlgorithm) CanEncrypt() bool { // sign a message. func (pka PublicKeyAlgorithm) CanSign() bool { switch pka { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA: + case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA, PubKeyAlgoEdDSA: return true } return false @@ -477,7 +440,7 @@ func (pka PublicKeyAlgorithm) CanSign() bool { // CipherFunction represents the different block ciphers specified for OpenPGP. See // http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-13 -type CipherFunction uint8 +type CipherFunction algorithm.CipherFunction const ( Cipher3DES CipherFunction = 2 @@ -489,81 +452,22 @@ const ( // KeySize returns the key size, in bytes, of cipher. 
func (cipher CipherFunction) KeySize() int { - switch cipher { - case Cipher3DES: - return 24 - case CipherCAST5: - return cast5.KeySize - case CipherAES128: - return 16 - case CipherAES192: - return 24 - case CipherAES256: - return 32 - } - return 0 + return algorithm.CipherFunction(cipher).KeySize() +} + +// IsSupported returns true if the cipher is supported from the library +func (cipher CipherFunction) IsSupported() bool { + return algorithm.CipherFunction(cipher).KeySize() > 0 } // blockSize returns the block size, in bytes, of cipher. func (cipher CipherFunction) blockSize() int { - switch cipher { - case Cipher3DES: - return des.BlockSize - case CipherCAST5: - return 8 - case CipherAES128, CipherAES192, CipherAES256: - return 16 - } - return 0 + return algorithm.CipherFunction(cipher).BlockSize() } // new returns a fresh instance of the given cipher. func (cipher CipherFunction) new(key []byte) (block cipher.Block) { - switch cipher { - case Cipher3DES: - block, _ = des.NewTripleDESCipher(key) - case CipherCAST5: - block, _ = cast5.NewCipher(key) - case CipherAES128, CipherAES192, CipherAES256: - block, _ = aes.NewCipher(key) - } - return -} - -// readMPI reads a big integer from r. The bit length returned is the bit -// length that was specified in r. This is preserved so that the integer can be -// reserialized exactly. -func readMPI(r io.Reader) (mpi []byte, bitLength uint16, err error) { - var buf [2]byte - _, err = readFull(r, buf[0:]) - if err != nil { - return - } - bitLength = uint16(buf[0])<<8 | uint16(buf[1]) - numBytes := (int(bitLength) + 7) / 8 - mpi = make([]byte, numBytes) - _, err = readFull(r, mpi) - // According to RFC 4880 3.2. we should check that the MPI has no leading - // zeroes (at least when not an encrypted MPI?), but this implementation - // does generate leading zeroes, so we keep accepting them. - return -} - -// writeMPI serializes a big integer to w. 
-func writeMPI(w io.Writer, bitLength uint16, mpiBytes []byte) (err error) { - // Note that we can produce leading zeroes, in violation of RFC 4880 3.2. - // Implementations seem to be tolerant of them, and stripping them would - // make it complex to guarantee matching re-serialization. - _, err = w.Write([]byte{byte(bitLength >> 8), byte(bitLength)}) - if err == nil { - _, err = w.Write(mpiBytes) - } - return -} - -// writeBig serializes a *big.Int to w. -func writeBig(w io.Writer, i *big.Int) error { - return writeMPI(w, uint16(i.BitLen()), i.Bytes()) + return algorithm.CipherFunction(cipher).New(key) } // padToKeySize left-pads a MPI with zeroes to match the length of the @@ -588,3 +492,60 @@ const ( CompressionZIP CompressionAlgo = 1 CompressionZLIB CompressionAlgo = 2 ) + +// AEADMode represents the different Authenticated Encryption with Associated +// Data specified for OpenPGP. +// See https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-9.6 +type AEADMode algorithm.AEADMode + +const ( + AEADModeEAX AEADMode = 1 + AEADModeOCB AEADMode = 2 + AEADModeGCM AEADMode = 3 +) + +func (mode AEADMode) IvLength() int { + return algorithm.AEADMode(mode).NonceLength() +} + +func (mode AEADMode) TagLength() int { + return algorithm.AEADMode(mode).TagLength() +} + +// new returns a fresh instance of the given mode. +func (mode AEADMode) new(block cipher.Block) cipher.AEAD { + return algorithm.AEADMode(mode).New(block) +} + +// ReasonForRevocation represents a revocation reason code as per RFC4880 +// section 5.2.3.23. +type ReasonForRevocation uint8 + +const ( + NoReason ReasonForRevocation = 0 + KeySuperseded ReasonForRevocation = 1 + KeyCompromised ReasonForRevocation = 2 + KeyRetired ReasonForRevocation = 3 +) + +// Curve is a mapping to supported ECC curves for key generation. 
+// See https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-06.html#name-curve-specific-wire-formats +type Curve string + +const ( + Curve25519 Curve = "Curve25519" + Curve448 Curve = "Curve448" + CurveNistP256 Curve = "P256" + CurveNistP384 Curve = "P384" + CurveNistP521 Curve = "P521" + CurveSecP256k1 Curve = "SecP256k1" + CurveBrainpoolP256 Curve = "BrainpoolP256" + CurveBrainpoolP384 Curve = "BrainpoolP384" + CurveBrainpoolP512 Curve = "BrainpoolP512" +) + +// TrustLevel represents a trust level per RFC4880 5.2.3.13 +type TrustLevel uint8 + +// TrustAmount represents a trust amount per RFC4880 5.2.3.13 +type TrustAmount uint8 diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/private_key.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/private_key.go new file mode 100644 index 00000000000..2898fa74c32 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/private_key.go @@ -0,0 +1,739 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packet + +import ( + "bytes" + "crypto" + "crypto/cipher" + "crypto/dsa" + "crypto/rand" + "crypto/rsa" + "crypto/sha1" + "io" + "io/ioutil" + "math/big" + "strconv" + "time" + + "github.com/ProtonMail/go-crypto/openpgp/ecdh" + "github.com/ProtonMail/go-crypto/openpgp/ecdsa" + "github.com/ProtonMail/go-crypto/openpgp/eddsa" + "github.com/ProtonMail/go-crypto/openpgp/elgamal" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/encoding" + "github.com/ProtonMail/go-crypto/openpgp/s2k" +) + +// PrivateKey represents a possibly encrypted private key. See RFC 4880, +// section 5.5.3. +type PrivateKey struct { + PublicKey + Encrypted bool // if true then the private key is unavailable until Decrypt has been called. 
+ encryptedData []byte + cipher CipherFunction + s2k func(out, in []byte) + // An *{rsa|dsa|elgamal|ecdh|ecdsa|ed25519}.PrivateKey or + // crypto.Signer/crypto.Decrypter (Decryptor RSA only). + PrivateKey interface{} + sha1Checksum bool + iv []byte + + // Type of encryption of the S2K packet + // Allowed values are 0 (Not encrypted), 254 (SHA1), or + // 255 (2-byte checksum) + s2kType S2KType + // Full parameters of the S2K packet + s2kParams *s2k.Params +} + +//S2KType s2k packet type +type S2KType uint8 + +const ( + // S2KNON unencrypt + S2KNON S2KType = 0 + // S2KSHA1 sha1 sum check + S2KSHA1 S2KType = 254 + // S2KCHECKSUM sum check + S2KCHECKSUM S2KType = 255 +) + +func NewRSAPrivateKey(creationTime time.Time, priv *rsa.PrivateKey) *PrivateKey { + pk := new(PrivateKey) + pk.PublicKey = *NewRSAPublicKey(creationTime, &priv.PublicKey) + pk.PrivateKey = priv + return pk +} + +func NewDSAPrivateKey(creationTime time.Time, priv *dsa.PrivateKey) *PrivateKey { + pk := new(PrivateKey) + pk.PublicKey = *NewDSAPublicKey(creationTime, &priv.PublicKey) + pk.PrivateKey = priv + return pk +} + +func NewElGamalPrivateKey(creationTime time.Time, priv *elgamal.PrivateKey) *PrivateKey { + pk := new(PrivateKey) + pk.PublicKey = *NewElGamalPublicKey(creationTime, &priv.PublicKey) + pk.PrivateKey = priv + return pk +} + +func NewECDSAPrivateKey(creationTime time.Time, priv *ecdsa.PrivateKey) *PrivateKey { + pk := new(PrivateKey) + pk.PublicKey = *NewECDSAPublicKey(creationTime, &priv.PublicKey) + pk.PrivateKey = priv + return pk +} + +func NewEdDSAPrivateKey(creationTime time.Time, priv *eddsa.PrivateKey) *PrivateKey { + pk := new(PrivateKey) + pk.PublicKey = *NewEdDSAPublicKey(creationTime, &priv.PublicKey) + pk.PrivateKey = priv + return pk +} + +func NewECDHPrivateKey(creationTime time.Time, priv *ecdh.PrivateKey) *PrivateKey { + pk := new(PrivateKey) + pk.PublicKey = *NewECDHPublicKey(creationTime, &priv.PublicKey) + pk.PrivateKey = priv + return pk +} + +// NewSignerPrivateKey 
creates a PrivateKey from a crypto.Signer that +// implements RSA, ECDSA or EdDSA. +func NewSignerPrivateKey(creationTime time.Time, signer interface{}) *PrivateKey { + pk := new(PrivateKey) + // In general, the public Keys should be used as pointers. We still + // type-switch on the values, for backwards-compatibility. + switch pubkey := signer.(type) { + case *rsa.PrivateKey: + pk.PublicKey = *NewRSAPublicKey(creationTime, &pubkey.PublicKey) + case rsa.PrivateKey: + pk.PublicKey = *NewRSAPublicKey(creationTime, &pubkey.PublicKey) + case *ecdsa.PrivateKey: + pk.PublicKey = *NewECDSAPublicKey(creationTime, &pubkey.PublicKey) + case ecdsa.PrivateKey: + pk.PublicKey = *NewECDSAPublicKey(creationTime, &pubkey.PublicKey) + case *eddsa.PrivateKey: + pk.PublicKey = *NewEdDSAPublicKey(creationTime, &pubkey.PublicKey) + case eddsa.PrivateKey: + pk.PublicKey = *NewEdDSAPublicKey(creationTime, &pubkey.PublicKey) + default: + panic("openpgp: unknown signer type in NewSignerPrivateKey") + } + pk.PrivateKey = signer + return pk +} + +// NewDecrypterPrivateKey creates a PrivateKey from a *{rsa|elgamal|ecdh}.PrivateKey. 
+func NewDecrypterPrivateKey(creationTime time.Time, decrypter interface{}) *PrivateKey { + pk := new(PrivateKey) + switch priv := decrypter.(type) { + case *rsa.PrivateKey: + pk.PublicKey = *NewRSAPublicKey(creationTime, &priv.PublicKey) + case *elgamal.PrivateKey: + pk.PublicKey = *NewElGamalPublicKey(creationTime, &priv.PublicKey) + case *ecdh.PrivateKey: + pk.PublicKey = *NewECDHPublicKey(creationTime, &priv.PublicKey) + default: + panic("openpgp: unknown decrypter type in NewDecrypterPrivateKey") + } + pk.PrivateKey = decrypter + return pk +} + +func (pk *PrivateKey) parse(r io.Reader) (err error) { + err = (&pk.PublicKey).parse(r) + if err != nil { + return + } + v5 := pk.PublicKey.Version == 5 + + var buf [1]byte + _, err = readFull(r, buf[:]) + if err != nil { + return + } + pk.s2kType = S2KType(buf[0]) + var optCount [1]byte + if v5 { + if _, err = readFull(r, optCount[:]); err != nil { + return + } + } + + switch pk.s2kType { + case S2KNON: + pk.s2k = nil + pk.Encrypted = false + case S2KSHA1, S2KCHECKSUM: + if v5 && pk.s2kType == S2KCHECKSUM { + return errors.StructuralError("wrong s2k identifier for version 5") + } + _, err = readFull(r, buf[:]) + if err != nil { + return + } + pk.cipher = CipherFunction(buf[0]) + if pk.cipher != 0 && !pk.cipher.IsSupported() { + return errors.UnsupportedError("unsupported cipher function in private key") + } + pk.s2kParams, err = s2k.ParseIntoParams(r) + if err != nil { + return + } + if pk.s2kParams.Dummy() { + return + } + pk.s2k, err = pk.s2kParams.Function() + if err != nil { + return + } + pk.Encrypted = true + if pk.s2kType == S2KSHA1 { + pk.sha1Checksum = true + } + default: + return errors.UnsupportedError("deprecated s2k function in private key") + } + + if pk.Encrypted { + blockSize := pk.cipher.blockSize() + if blockSize == 0 { + return errors.UnsupportedError("unsupported cipher in private key: " + strconv.Itoa(int(pk.cipher))) + } + pk.iv = make([]byte, blockSize) + _, err = readFull(r, pk.iv) + if err != 
nil { + return + } + } + + var privateKeyData []byte + if v5 { + var n [4]byte /* secret material four octet count */ + _, err = readFull(r, n[:]) + if err != nil { + return + } + count := uint32(uint32(n[0])<<24 | uint32(n[1])<<16 | uint32(n[2])<<8 | uint32(n[3])) + if !pk.Encrypted { + count = count + 2 /* two octet checksum */ + } + privateKeyData = make([]byte, count) + _, err = readFull(r, privateKeyData) + if err != nil { + return + } + } else { + privateKeyData, err = ioutil.ReadAll(r) + if err != nil { + return + } + } + if !pk.Encrypted { + if len(privateKeyData) < 2 { + return errors.StructuralError("truncated private key data") + } + var sum uint16 + for i := 0; i < len(privateKeyData)-2; i++ { + sum += uint16(privateKeyData[i]) + } + if privateKeyData[len(privateKeyData)-2] != uint8(sum>>8) || + privateKeyData[len(privateKeyData)-1] != uint8(sum) { + return errors.StructuralError("private key checksum failure") + } + privateKeyData = privateKeyData[:len(privateKeyData)-2] + return pk.parsePrivateKey(privateKeyData) + } + + pk.encryptedData = privateKeyData + return +} + +// Dummy returns true if the private key is a dummy key. This is a GNU extension. 
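The checksum verification in `parse` above implements RFC 4880's framing for unencrypted secret-key material: the last two bytes are the big-endian sum of all preceding bytes mod 65536. A standalone sketch (`checkMod64k` is a hypothetical name):

```go
package main

import "fmt"

// Standalone sketch of the two-byte checksum verified in parse: RFC 4880,
// section 5.5.3 frames unencrypted secret-key material with the big-endian
// sum of all bytes mod 65536. checkMod64k is a hypothetical name.
func checkMod64k(data []byte) bool {
	if len(data) < 2 {
		return false
	}
	var sum uint16
	for _, b := range data[:len(data)-2] {
		sum += uint16(b)
	}
	return data[len(data)-2] == byte(sum>>8) && data[len(data)-1] == byte(sum)
}

func main() {
	body := []byte{0x01, 0x02, 0xff}   // sum = 0x0102
	framed := append(body, 0x01, 0x02) // body plus big-endian checksum
	fmt.Println(checkMod64k(framed)) // true
	framed[0] ^= 1
	fmt.Println(checkMod64k(framed)) // false
}
```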
+func (pk *PrivateKey) Dummy() bool { + return pk.s2kParams.Dummy() +} + +func mod64kHash(d []byte) uint16 { + var h uint16 + for _, b := range d { + h += uint16(b) + } + return h +} + +func (pk *PrivateKey) Serialize(w io.Writer) (err error) { + contents := bytes.NewBuffer(nil) + err = pk.PublicKey.serializeWithoutHeaders(contents) + if err != nil { + return + } + if _, err = contents.Write([]byte{uint8(pk.s2kType)}); err != nil { + return + } + + optional := bytes.NewBuffer(nil) + if pk.Encrypted || pk.Dummy() { + optional.Write([]byte{uint8(pk.cipher)}) + if err := pk.s2kParams.Serialize(optional); err != nil { + return err + } + if pk.Encrypted { + optional.Write(pk.iv) + } + } + if pk.Version == 5 { + contents.Write([]byte{uint8(optional.Len())}) + } + io.Copy(contents, optional) + + if !pk.Dummy() { + l := 0 + var priv []byte + if !pk.Encrypted { + buf := bytes.NewBuffer(nil) + err = pk.serializePrivateKey(buf) + if err != nil { + return err + } + l = buf.Len() + checksum := mod64kHash(buf.Bytes()) + buf.Write([]byte{byte(checksum >> 8), byte(checksum)}) + priv = buf.Bytes() + } else { + priv, l = pk.encryptedData, len(pk.encryptedData) + } + + if pk.Version == 5 { + contents.Write([]byte{byte(l >> 24), byte(l >> 16), byte(l >> 8), byte(l)}) + } + contents.Write(priv) + } + + ptype := packetTypePrivateKey + if pk.IsSubkey { + ptype = packetTypePrivateSubkey + } + err = serializeHeader(w, ptype, contents.Len()) + if err != nil { + return + } + _, err = io.Copy(w, contents) + if err != nil { + return + } + return +} + +func serializeRSAPrivateKey(w io.Writer, priv *rsa.PrivateKey) error { + if _, err := w.Write(new(encoding.MPI).SetBig(priv.D).EncodedBytes()); err != nil { + return err + } + if _, err := w.Write(new(encoding.MPI).SetBig(priv.Primes[1]).EncodedBytes()); err != nil { + return err + } + if _, err := w.Write(new(encoding.MPI).SetBig(priv.Primes[0]).EncodedBytes()); err != nil { + return err + } + _, err := 
w.Write(new(encoding.MPI).SetBig(priv.Precomputed.Qinv).EncodedBytes()) + return err +} + +func serializeDSAPrivateKey(w io.Writer, priv *dsa.PrivateKey) error { + _, err := w.Write(new(encoding.MPI).SetBig(priv.X).EncodedBytes()) + return err +} + +func serializeElGamalPrivateKey(w io.Writer, priv *elgamal.PrivateKey) error { + _, err := w.Write(new(encoding.MPI).SetBig(priv.X).EncodedBytes()) + return err +} + +func serializeECDSAPrivateKey(w io.Writer, priv *ecdsa.PrivateKey) error { + _, err := w.Write(encoding.NewMPI(priv.MarshalIntegerSecret()).EncodedBytes()) + return err +} + +func serializeEdDSAPrivateKey(w io.Writer, priv *eddsa.PrivateKey) error { + _, err := w.Write(encoding.NewMPI(priv.MarshalByteSecret()).EncodedBytes()) + return err +} + +func serializeECDHPrivateKey(w io.Writer, priv *ecdh.PrivateKey) error { + _, err := w.Write(encoding.NewMPI(priv.MarshalByteSecret()).EncodedBytes()) + return err +} + +// Decrypt decrypts an encrypted private key using a passphrase. +func (pk *PrivateKey) Decrypt(passphrase []byte) error { + if pk.Dummy() { + return errors.ErrDummyPrivateKey("dummy key found") + } + if !pk.Encrypted { + return nil + } + + key := make([]byte, pk.cipher.KeySize()) + pk.s2k(key, passphrase) + block := pk.cipher.new(key) + cfb := cipher.NewCFBDecrypter(block, pk.iv) + + data := make([]byte, len(pk.encryptedData)) + cfb.XORKeyStream(data, pk.encryptedData) + + if pk.sha1Checksum { + if len(data) < sha1.Size { + return errors.StructuralError("truncated private key data") + } + h := sha1.New() + h.Write(data[:len(data)-sha1.Size]) + sum := h.Sum(nil) + if !bytes.Equal(sum, data[len(data)-sha1.Size:]) { + return errors.StructuralError("private key checksum failure") + } + data = data[:len(data)-sha1.Size] + } else { + if len(data) < 2 { + return errors.StructuralError("truncated private key data") + } + var sum uint16 + for i := 0; i < len(data)-2; i++ { + sum += uint16(data[i]) + } + if data[len(data)-2] != uint8(sum>>8) || + 
data[len(data)-1] != uint8(sum) { + return errors.StructuralError("private key checksum failure") + } + data = data[:len(data)-2] + } + + err := pk.parsePrivateKey(data) + if _, ok := err.(errors.KeyInvalidError); ok { + return errors.KeyInvalidError("invalid key parameters") + } + if err != nil { + return err + } + + // Mark key as unencrypted + pk.s2kType = S2KNON + pk.s2k = nil + pk.Encrypted = false + pk.encryptedData = nil + + return nil +} + +// Encrypt encrypts an unencrypted private key using a passphrase. +func (pk *PrivateKey) Encrypt(passphrase []byte) error { + priv := bytes.NewBuffer(nil) + err := pk.serializePrivateKey(priv) + if err != nil { + return err + } + + //Default config of private key encryption + pk.cipher = CipherAES256 + s2kConfig := &s2k.Config{ + S2KMode: 3, //Iterated + S2KCount: 65536, + Hash: crypto.SHA256, + } + + pk.s2kParams, err = s2k.Generate(rand.Reader, s2kConfig) + if err != nil { + return err + } + privateKeyBytes := priv.Bytes() + key := make([]byte, pk.cipher.KeySize()) + + pk.sha1Checksum = true + pk.s2k, err = pk.s2kParams.Function() + if err != nil { + return err + } + pk.s2k(key, passphrase) + block := pk.cipher.new(key) + pk.iv = make([]byte, pk.cipher.blockSize()) + _, err = rand.Read(pk.iv) + if err != nil { + return err + } + cfb := cipher.NewCFBEncrypter(block, pk.iv) + + if pk.sha1Checksum { + pk.s2kType = S2KSHA1 + h := sha1.New() + h.Write(privateKeyBytes) + sum := h.Sum(nil) + privateKeyBytes = append(privateKeyBytes, sum...) 
+ } else { + pk.s2kType = S2KCHECKSUM + var sum uint16 + for _, b := range privateKeyBytes { + sum += uint16(b) + } + priv.Write([]byte{uint8(sum >> 8), uint8(sum)}) + } + + pk.encryptedData = make([]byte, len(privateKeyBytes)) + cfb.XORKeyStream(pk.encryptedData, privateKeyBytes) + pk.Encrypted = true + pk.PrivateKey = nil + return err +} + +func (pk *PrivateKey) serializePrivateKey(w io.Writer) (err error) { + switch priv := pk.PrivateKey.(type) { + case *rsa.PrivateKey: + err = serializeRSAPrivateKey(w, priv) + case *dsa.PrivateKey: + err = serializeDSAPrivateKey(w, priv) + case *elgamal.PrivateKey: + err = serializeElGamalPrivateKey(w, priv) + case *ecdsa.PrivateKey: + err = serializeECDSAPrivateKey(w, priv) + case *eddsa.PrivateKey: + err = serializeEdDSAPrivateKey(w, priv) + case *ecdh.PrivateKey: + err = serializeECDHPrivateKey(w, priv) + default: + err = errors.InvalidArgumentError("unknown private key type") + } + return +} + +func (pk *PrivateKey) parsePrivateKey(data []byte) (err error) { + switch pk.PublicKey.PubKeyAlgo { + case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoRSAEncryptOnly: + return pk.parseRSAPrivateKey(data) + case PubKeyAlgoDSA: + return pk.parseDSAPrivateKey(data) + case PubKeyAlgoElGamal: + return pk.parseElGamalPrivateKey(data) + case PubKeyAlgoECDSA: + return pk.parseECDSAPrivateKey(data) + case PubKeyAlgoECDH: + return pk.parseECDHPrivateKey(data) + case PubKeyAlgoEdDSA: + return pk.parseEdDSAPrivateKey(data) + } + panic("impossible") +} + +func (pk *PrivateKey) parseRSAPrivateKey(data []byte) (err error) { + rsaPub := pk.PublicKey.PublicKey.(*rsa.PublicKey) + rsaPriv := new(rsa.PrivateKey) + rsaPriv.PublicKey = *rsaPub + + buf := bytes.NewBuffer(data) + d := new(encoding.MPI) + if _, err := d.ReadFrom(buf); err != nil { + return err + } + + p := new(encoding.MPI) + if _, err := p.ReadFrom(buf); err != nil { + return err + } + + q := new(encoding.MPI) + if _, err := q.ReadFrom(buf); err != nil { + return err + } + + rsaPriv.D = 
new(big.Int).SetBytes(d.Bytes()) + rsaPriv.Primes = make([]*big.Int, 2) + rsaPriv.Primes[0] = new(big.Int).SetBytes(p.Bytes()) + rsaPriv.Primes[1] = new(big.Int).SetBytes(q.Bytes()) + if err := rsaPriv.Validate(); err != nil { + return errors.KeyInvalidError(err.Error()) + } + rsaPriv.Precompute() + pk.PrivateKey = rsaPriv + + return nil +} + +func (pk *PrivateKey) parseDSAPrivateKey(data []byte) (err error) { + dsaPub := pk.PublicKey.PublicKey.(*dsa.PublicKey) + dsaPriv := new(dsa.PrivateKey) + dsaPriv.PublicKey = *dsaPub + + buf := bytes.NewBuffer(data) + x := new(encoding.MPI) + if _, err := x.ReadFrom(buf); err != nil { + return err + } + + dsaPriv.X = new(big.Int).SetBytes(x.Bytes()) + if err := validateDSAParameters(dsaPriv); err != nil { + return err + } + pk.PrivateKey = dsaPriv + + return nil +} + +func (pk *PrivateKey) parseElGamalPrivateKey(data []byte) (err error) { + pub := pk.PublicKey.PublicKey.(*elgamal.PublicKey) + priv := new(elgamal.PrivateKey) + priv.PublicKey = *pub + + buf := bytes.NewBuffer(data) + x := new(encoding.MPI) + if _, err := x.ReadFrom(buf); err != nil { + return err + } + + priv.X = new(big.Int).SetBytes(x.Bytes()) + if err := validateElGamalParameters(priv); err != nil { + return err + } + pk.PrivateKey = priv + + return nil +} + +func (pk *PrivateKey) parseECDSAPrivateKey(data []byte) (err error) { + ecdsaPub := pk.PublicKey.PublicKey.(*ecdsa.PublicKey) + ecdsaPriv := ecdsa.NewPrivateKey(*ecdsaPub) + + buf := bytes.NewBuffer(data) + d := new(encoding.MPI) + if _, err := d.ReadFrom(buf); err != nil { + return err + } + + if err := ecdsaPriv.UnmarshalIntegerSecret(d.Bytes()); err != nil { + return err + } + if err := ecdsa.Validate(ecdsaPriv); err != nil { + return err + } + pk.PrivateKey = ecdsaPriv + + return nil +} + +func (pk *PrivateKey) parseECDHPrivateKey(data []byte) (err error) { + ecdhPub := pk.PublicKey.PublicKey.(*ecdh.PublicKey) + ecdhPriv := ecdh.NewPrivateKey(*ecdhPub) + + buf := bytes.NewBuffer(data) + d := 
new(encoding.MPI) + if _, err := d.ReadFrom(buf); err != nil { + return err + } + + if err := ecdhPriv.UnmarshalByteSecret(d.Bytes()); err != nil { + return err + } + + if err := ecdh.Validate(ecdhPriv); err != nil { + return err + } + + pk.PrivateKey = ecdhPriv + + return nil +} + +func (pk *PrivateKey) parseEdDSAPrivateKey(data []byte) (err error) { + eddsaPub := pk.PublicKey.PublicKey.(*eddsa.PublicKey) + eddsaPriv := eddsa.NewPrivateKey(*eddsaPub) + eddsaPriv.PublicKey = *eddsaPub + + buf := bytes.NewBuffer(data) + d := new(encoding.MPI) + if _, err := d.ReadFrom(buf); err != nil { + return err + } + + if err = eddsaPriv.UnmarshalByteSecret(d.Bytes()); err != nil { + return err + } + + if err := eddsa.Validate(eddsaPriv); err != nil { + return err + } + + pk.PrivateKey = eddsaPriv + + return nil +} + +func validateDSAParameters(priv *dsa.PrivateKey) error { + p := priv.P // group prime + q := priv.Q // subgroup order + g := priv.G // g has order q mod p + x := priv.X // secret + y := priv.Y // y == g**x mod p + one := big.NewInt(1) + // expect g, y >= 2 and g < p + if g.Cmp(one) <= 0 || y.Cmp(one) <= 0 || g.Cmp(p) > 0 { + return errors.KeyInvalidError("dsa: invalid group") + } + // expect p > q + if p.Cmp(q) <= 0 { + return errors.KeyInvalidError("dsa: invalid group prime") + } + // q should be large enough and divide p-1 + pSub1 := new(big.Int).Sub(p, one) + if q.BitLen() < 150 || new(big.Int).Mod(pSub1, q).Cmp(big.NewInt(0)) != 0 { + return errors.KeyInvalidError("dsa: invalid order") + } + // confirm that g has order q mod p + if !q.ProbablyPrime(32) || new(big.Int).Exp(g, q, p).Cmp(one) != 0 { + return errors.KeyInvalidError("dsa: invalid order") + } + // check y + if new(big.Int).Exp(g, x, p).Cmp(y) != 0 { + return errors.KeyInvalidError("dsa: mismatching values") + } + + return nil +} + +func validateElGamalParameters(priv *elgamal.PrivateKey) error { + p := priv.P // group prime + g := priv.G // g has order p-1 mod p + x := priv.X // secret + y := priv.Y 
// y == g**x mod p + one := big.NewInt(1) + // Expect g, y >= 2 and g < p + if g.Cmp(one) <= 0 || y.Cmp(one) <= 0 || g.Cmp(p) > 0 { + return errors.KeyInvalidError("elgamal: invalid group") + } + if p.BitLen() < 1024 { + return errors.KeyInvalidError("elgamal: group order too small") + } + pSub1 := new(big.Int).Sub(p, one) + if new(big.Int).Exp(g, pSub1, p).Cmp(one) != 0 { + return errors.KeyInvalidError("elgamal: invalid group") + } + // Since p-1 is not prime, g might have a smaller order that divides p-1. + // We cannot confirm the exact order of g, but we make sure it is not too small. + gExpI := new(big.Int).Set(g) + i := 1 + threshold := 2 << 17 // we want order > threshold + for i < threshold { + i++ // we check every order to make sure key validation is not easily bypassed by guessing y' + gExpI.Mod(new(big.Int).Mul(gExpI, g), p) + if gExpI.Cmp(one) == 0 { + return errors.KeyInvalidError("elgamal: order too small") + } + } + // Check y + if new(big.Int).Exp(g, x, p).Cmp(y) != 0 { + return errors.KeyInvalidError("elgamal: mismatching values") + } + + return nil +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/private_key_test_data.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/private_key_test_data.go new file mode 100644 index 00000000000..029b8f1aab2 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/private_key_test_data.go @@ -0,0 +1,12 @@ +package packet + +// Generated with `gpg --export-secret-keys "Test Key 2"` +const privKeyRSAHex = 
"9501fe044cc349a8010400b70ca0010e98c090008d45d1ee8f9113bd5861fd57b88bacb7c68658747663f1e1a3b5a98f32fda6472373c024b97359cd2efc88ff60f77751adfbf6af5e615e6a1408cfad8bf0cea30b0d5f53aa27ad59089ba9b15b7ebc2777a25d7b436144027e3bcd203909f147d0e332b240cf63d3395f5dfe0df0a6c04e8655af7eacdf0011010001fe0303024a252e7d475fd445607de39a265472aa74a9320ba2dac395faa687e9e0336aeb7e9a7397e511b5afd9dc84557c80ac0f3d4d7bfec5ae16f20d41c8c84a04552a33870b930420e230e179564f6d19bb153145e76c33ae993886c388832b0fa042ddda7f133924f3854481533e0ede31d51278c0519b29abc3bf53da673e13e3e1214b52413d179d7f66deee35cac8eacb060f78379d70ef4af8607e68131ff529439668fc39c9ce6dfef8a5ac234d234802cbfb749a26107db26406213ae5c06d4673253a3cbee1fcbae58d6ab77e38d6e2c0e7c6317c48e054edadb5a40d0d48acb44643d998139a8a66bb820be1f3f80185bc777d14b5954b60effe2448a036d565c6bc0b915fcea518acdd20ab07bc1529f561c58cd044f723109b93f6fd99f876ff891d64306b5d08f48bab59f38695e9109c4dec34013ba3153488ce070268381ba923ee1eb77125b36afcb4347ec3478c8f2735b06ef17351d872e577fa95d0c397c88c71b59629a36aec" + +// Generated by `gpg --export-secret-keys` followed by a manual extraction of +// the ElGamal subkey from the packets. +const privKeyElGamalHex = "9d0157044df9ee1a100400eb8e136a58ec39b582629cdadf830bc64e0a94ed8103ca8bb247b27b11b46d1d25297ef4bcc3071785ba0c0bedfe89eabc5287fcc0edf81ab5896c1c8e4b20d27d79813c7aede75320b33eaeeaa586edc00fd1036c10133e6ba0ff277245d0d59d04b2b3421b7244aca5f4a8d870c6f1c1fbff9e1c26699a860b9504f35ca1d700030503fd1ededd3b840795be6d9ccbe3c51ee42e2f39233c432b831ddd9c4e72b7025a819317e47bf94f9ee316d7273b05d5fcf2999c3a681f519b1234bbfa6d359b4752bd9c3f77d6b6456cde152464763414ca130f4e91d91041432f90620fec0e6d6b5116076c2985d5aeaae13be492b9b329efcaf7ee25120159a0a30cd976b42d7afe030302dae7eb80db744d4960c4df930d57e87fe81412eaace9f900e6c839817a614ddb75ba6603b9417c33ea7b6c93967dfa2bcff3fa3c74a5ce2c962db65b03aece14c96cbd0038fc" + +// pkcs1PrivKeyHex is a PKCS#1, RSA private key. 
+// Generated by `openssl genrsa 1024 | openssl rsa -outform DER | xxd -p` +const pkcs1PrivKeyHex = "3082025d02010002818100e98edfa1c3b35884a54d0b36a6a603b0290fa85e49e30fa23fc94fef9c6790bc4849928607aa48d809da326fb42a969d06ad756b98b9c1a90f5d4a2b6d0ac05953c97f4da3120164a21a679793ce181c906dc01d235cc085ddcdf6ea06c389b6ab8885dfd685959e693138856a68a7e5db263337ff82a088d583a897cf2d59e9020301000102818100b6d5c9eb70b02d5369b3ee5b520a14490b5bde8a317d36f7e4c74b7460141311d1e5067735f8f01d6f5908b2b96fbd881f7a1ab9a84d82753e39e19e2d36856be960d05ac9ef8e8782ea1b6d65aee28fdfe1d61451e8cff0adfe84322f12cf455028b581cf60eb9e0e140ba5d21aeba6c2634d7c65318b9a665fc01c3191ca21024100fa5e818da3705b0fa33278bb28d4b6f6050388af2d4b75ec9375dd91ccf2e7d7068086a8b82a8f6282e4fbbdb8a7f2622eb97295249d87acea7f5f816f54d347024100eecf9406d7dc49cdfb95ab1eff4064de84c7a30f64b2798936a0d2018ba9eb52e4b636f82e96c49cc63b80b675e91e40d1b2e4017d4b9adaf33ab3d9cf1c214f024100c173704ace742c082323066226a4655226819a85304c542b9dacbeacbf5d1881ee863485fcf6f59f3a604f9b42289282067447f2b13dfeed3eab7851fc81e0550240741fc41f3fc002b382eed8730e33c5d8de40256e4accee846667f536832f711ab1d4590e7db91a8a116ac5bff3be13d3f9243ff2e976662aa9b395d907f8e9c9024046a5696c9ef882363e06c9fa4e2f5b580906452befba03f4a99d0f873697ef1f851d2226ca7934b30b7c3e80cb634a67172bbbf4781735fe3e09263e2dd723e7" diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/public_key.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/public_key.go new file mode 100644 index 00000000000..e0f5f74a93a --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/public_key.go @@ -0,0 +1,802 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package packet + +import ( + "crypto" + "crypto/dsa" + "crypto/rsa" + "crypto/sha1" + "crypto/sha256" + _ "crypto/sha512" + "encoding/binary" + "fmt" + "hash" + "io" + "math/big" + "strconv" + "time" + + "github.com/ProtonMail/go-crypto/openpgp/ecdh" + "github.com/ProtonMail/go-crypto/openpgp/ecdsa" + "github.com/ProtonMail/go-crypto/openpgp/eddsa" + "github.com/ProtonMail/go-crypto/openpgp/elgamal" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" + "github.com/ProtonMail/go-crypto/openpgp/internal/ecc" + "github.com/ProtonMail/go-crypto/openpgp/internal/encoding" +) + +type kdfHashFunction byte +type kdfAlgorithm byte + +// PublicKey represents an OpenPGP public key. See RFC 4880, section 5.5.2. +type PublicKey struct { + Version int + CreationTime time.Time + PubKeyAlgo PublicKeyAlgorithm + PublicKey interface{} // *rsa.PublicKey, *dsa.PublicKey, *ecdsa.PublicKey or *eddsa.PublicKey + Fingerprint []byte + KeyId uint64 + IsSubkey bool + + // RFC 4880 fields + n, e, p, q, g, y encoding.Field + + // RFC 6637 fields + // oid contains the OID byte sequence identifying the elliptic curve used + oid encoding.Field + + // kdf stores key derivation function parameters + // used for ECDH encryption. See RFC 6637, Section 9. + kdf encoding.Field +} + +// UpgradeToV5 updates the version of the key to v5, and updates all necessary +// fields. +func (pk *PublicKey) UpgradeToV5() { + pk.Version = 5 + pk.setFingerprintAndKeyId() +} + +// signingKey provides a convenient abstraction over signature verification +// for v3 and v4 public keys. +type signingKey interface { + SerializeForHash(io.Writer) error + SerializeSignaturePrefix(io.Writer) + serializeWithoutHeaders(io.Writer) error +} + +// NewRSAPublicKey returns a PublicKey that wraps the given rsa.PublicKey. 
+func NewRSAPublicKey(creationTime time.Time, pub *rsa.PublicKey) *PublicKey { + pk := &PublicKey{ + Version: 4, + CreationTime: creationTime, + PubKeyAlgo: PubKeyAlgoRSA, + PublicKey: pub, + n: new(encoding.MPI).SetBig(pub.N), + e: new(encoding.MPI).SetBig(big.NewInt(int64(pub.E))), + } + + pk.setFingerprintAndKeyId() + return pk +} + +// NewDSAPublicKey returns a PublicKey that wraps the given dsa.PublicKey. +func NewDSAPublicKey(creationTime time.Time, pub *dsa.PublicKey) *PublicKey { + pk := &PublicKey{ + Version: 4, + CreationTime: creationTime, + PubKeyAlgo: PubKeyAlgoDSA, + PublicKey: pub, + p: new(encoding.MPI).SetBig(pub.P), + q: new(encoding.MPI).SetBig(pub.Q), + g: new(encoding.MPI).SetBig(pub.G), + y: new(encoding.MPI).SetBig(pub.Y), + } + + pk.setFingerprintAndKeyId() + return pk +} + +// NewElGamalPublicKey returns a PublicKey that wraps the given elgamal.PublicKey. +func NewElGamalPublicKey(creationTime time.Time, pub *elgamal.PublicKey) *PublicKey { + pk := &PublicKey{ + Version: 4, + CreationTime: creationTime, + PubKeyAlgo: PubKeyAlgoElGamal, + PublicKey: pub, + p: new(encoding.MPI).SetBig(pub.P), + g: new(encoding.MPI).SetBig(pub.G), + y: new(encoding.MPI).SetBig(pub.Y), + } + + pk.setFingerprintAndKeyId() + return pk +} + +func NewECDSAPublicKey(creationTime time.Time, pub *ecdsa.PublicKey) *PublicKey { + pk := &PublicKey{ + Version: 4, + CreationTime: creationTime, + PubKeyAlgo: PubKeyAlgoECDSA, + PublicKey: pub, + p: encoding.NewMPI(pub.MarshalPoint()), + } + + curveInfo := ecc.FindByCurve(pub.GetCurve()) + if curveInfo == nil { + panic("unknown elliptic curve") + } + pk.oid = curveInfo.Oid + pk.setFingerprintAndKeyId() + return pk +} + +func NewECDHPublicKey(creationTime time.Time, pub *ecdh.PublicKey) *PublicKey { + var pk *PublicKey + var kdf = encoding.NewOID([]byte{0x1, pub.Hash.Id(), pub.Cipher.Id()}) + pk = &PublicKey{ + Version: 4, + CreationTime: creationTime, + PubKeyAlgo: PubKeyAlgoECDH, + PublicKey: pub, + p: 
encoding.NewMPI(pub.MarshalPoint()), + kdf: kdf, + } + + curveInfo := ecc.FindByCurve(pub.GetCurve()) + + if curveInfo == nil { + panic("unknown elliptic curve") + } + + pk.oid = curveInfo.Oid + pk.setFingerprintAndKeyId() + return pk +} + +func NewEdDSAPublicKey(creationTime time.Time, pub *eddsa.PublicKey) *PublicKey { + curveInfo := ecc.FindByCurve(pub.GetCurve()) + pk := &PublicKey{ + Version: 4, + CreationTime: creationTime, + PubKeyAlgo: PubKeyAlgoEdDSA, + PublicKey: pub, + oid: curveInfo.Oid, + // Native point format, see draft-koch-eddsa-for-openpgp-04, Appendix B + p: encoding.NewMPI(pub.MarshalPoint()), + } + + pk.setFingerprintAndKeyId() + return pk +} + +func (pk *PublicKey) parse(r io.Reader) (err error) { + // RFC 4880, section 5.5.2 + var buf [6]byte + _, err = readFull(r, buf[:]) + if err != nil { + return + } + if buf[0] != 4 && buf[0] != 5 { + return errors.UnsupportedError("public key version " + strconv.Itoa(int(buf[0]))) + } + + pk.Version = int(buf[0]) + if pk.Version == 5 { + var n [4]byte + _, err = readFull(r, n[:]) + if err != nil { + return + } + } + pk.CreationTime = time.Unix(int64(uint32(buf[1])<<24|uint32(buf[2])<<16|uint32(buf[3])<<8|uint32(buf[4])), 0) + pk.PubKeyAlgo = PublicKeyAlgorithm(buf[5]) + switch pk.PubKeyAlgo { + case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: + err = pk.parseRSA(r) + case PubKeyAlgoDSA: + err = pk.parseDSA(r) + case PubKeyAlgoElGamal: + err = pk.parseElGamal(r) + case PubKeyAlgoECDSA: + err = pk.parseECDSA(r) + case PubKeyAlgoECDH: + err = pk.parseECDH(r) + case PubKeyAlgoEdDSA: + err = pk.parseEdDSA(r) + default: + err = errors.UnsupportedError("public key type: " + strconv.Itoa(int(pk.PubKeyAlgo))) + } + if err != nil { + return + } + + pk.setFingerprintAndKeyId() + return +} + +func (pk *PublicKey) setFingerprintAndKeyId() { + // RFC 4880, section 12.2 + if pk.Version == 5 { + fingerprint := sha256.New() + pk.SerializeForHash(fingerprint) + pk.Fingerprint = make([]byte, 32) + 
copy(pk.Fingerprint, fingerprint.Sum(nil)) + pk.KeyId = binary.BigEndian.Uint64(pk.Fingerprint[:8]) + } else { + fingerprint := sha1.New() + pk.SerializeForHash(fingerprint) + pk.Fingerprint = make([]byte, 20) + copy(pk.Fingerprint, fingerprint.Sum(nil)) + pk.KeyId = binary.BigEndian.Uint64(pk.Fingerprint[12:20]) + } +} + +// parseRSA parses RSA public key material from the given Reader. See RFC 4880, +// section 5.5.2. +func (pk *PublicKey) parseRSA(r io.Reader) (err error) { + pk.n = new(encoding.MPI) + if _, err = pk.n.ReadFrom(r); err != nil { + return + } + pk.e = new(encoding.MPI) + if _, err = pk.e.ReadFrom(r); err != nil { + return + } + + if len(pk.e.Bytes()) > 3 { + err = errors.UnsupportedError("large public exponent") + return + } + rsa := &rsa.PublicKey{ + N: new(big.Int).SetBytes(pk.n.Bytes()), + E: 0, + } + for i := 0; i < len(pk.e.Bytes()); i++ { + rsa.E <<= 8 + rsa.E |= int(pk.e.Bytes()[i]) + } + pk.PublicKey = rsa + return +} + +// parseDSA parses DSA public key material from the given Reader. See RFC 4880, +// section 5.5.2. +func (pk *PublicKey) parseDSA(r io.Reader) (err error) { + pk.p = new(encoding.MPI) + if _, err = pk.p.ReadFrom(r); err != nil { + return + } + pk.q = new(encoding.MPI) + if _, err = pk.q.ReadFrom(r); err != nil { + return + } + pk.g = new(encoding.MPI) + if _, err = pk.g.ReadFrom(r); err != nil { + return + } + pk.y = new(encoding.MPI) + if _, err = pk.y.ReadFrom(r); err != nil { + return + } + + dsa := new(dsa.PublicKey) + dsa.P = new(big.Int).SetBytes(pk.p.Bytes()) + dsa.Q = new(big.Int).SetBytes(pk.q.Bytes()) + dsa.G = new(big.Int).SetBytes(pk.g.Bytes()) + dsa.Y = new(big.Int).SetBytes(pk.y.Bytes()) + pk.PublicKey = dsa + return +} + +// parseElGamal parses ElGamal public key material from the given Reader. See +// RFC 4880, section 5.5.2. 
+func (pk *PublicKey) parseElGamal(r io.Reader) (err error) { + pk.p = new(encoding.MPI) + if _, err = pk.p.ReadFrom(r); err != nil { + return + } + pk.g = new(encoding.MPI) + if _, err = pk.g.ReadFrom(r); err != nil { + return + } + pk.y = new(encoding.MPI) + if _, err = pk.y.ReadFrom(r); err != nil { + return + } + + elgamal := new(elgamal.PublicKey) + elgamal.P = new(big.Int).SetBytes(pk.p.Bytes()) + elgamal.G = new(big.Int).SetBytes(pk.g.Bytes()) + elgamal.Y = new(big.Int).SetBytes(pk.y.Bytes()) + pk.PublicKey = elgamal + return +} + +// parseECDSA parses ECDSA public key material from the given Reader. See +// RFC 6637, Section 9. +func (pk *PublicKey) parseECDSA(r io.Reader) (err error) { + pk.oid = new(encoding.OID) + if _, err = pk.oid.ReadFrom(r); err != nil { + return + } + pk.p = new(encoding.MPI) + if _, err = pk.p.ReadFrom(r); err != nil { + return + } + + curveInfo := ecc.FindByOid(pk.oid) + if curveInfo == nil { + return errors.UnsupportedError(fmt.Sprintf("unknown oid: %x", pk.oid)) + } + + c, ok := curveInfo.Curve.(ecc.ECDSACurve) + if !ok { + return errors.UnsupportedError(fmt.Sprintf("unsupported oid: %x", pk.oid)) + } + + ecdsaKey := ecdsa.NewPublicKey(c) + err = ecdsaKey.UnmarshalPoint(pk.p.Bytes()) + pk.PublicKey = ecdsaKey + + return +} + +// parseECDH parses ECDH public key material from the given Reader. See +// RFC 6637, Section 9. 
+func (pk *PublicKey) parseECDH(r io.Reader) (err error) { + pk.oid = new(encoding.OID) + if _, err = pk.oid.ReadFrom(r); err != nil { + return + } + pk.p = new(encoding.MPI) + if _, err = pk.p.ReadFrom(r); err != nil { + return + } + pk.kdf = new(encoding.OID) + if _, err = pk.kdf.ReadFrom(r); err != nil { + return + } + + curveInfo := ecc.FindByOid(pk.oid) + + if curveInfo == nil { + return errors.UnsupportedError(fmt.Sprintf("unknown oid: %x", pk.oid)) + } + + c, ok := curveInfo.Curve.(ecc.ECDHCurve) + if !ok { + return errors.UnsupportedError(fmt.Sprintf("unsupported oid: %x", pk.oid)) + } + + if kdfLen := len(pk.kdf.Bytes()); kdfLen < 3 { + return errors.UnsupportedError("unsupported ECDH KDF length: " + strconv.Itoa(kdfLen)) + } + if reserved := pk.kdf.Bytes()[0]; reserved != 0x01 { + return errors.UnsupportedError("unsupported KDF reserved field: " + strconv.Itoa(int(reserved))) + } + kdfHash, ok := algorithm.HashById[pk.kdf.Bytes()[1]] + if !ok { + return errors.UnsupportedError("unsupported ECDH KDF hash: " + strconv.Itoa(int(pk.kdf.Bytes()[1]))) + } + kdfCipher, ok := algorithm.CipherById[pk.kdf.Bytes()[2]] + if !ok { + return errors.UnsupportedError("unsupported ECDH KDF cipher: " + strconv.Itoa(int(pk.kdf.Bytes()[2]))) + } + + ecdhKey := ecdh.NewPublicKey(c, kdfHash, kdfCipher) + err = ecdhKey.UnmarshalPoint(pk.p.Bytes()) + pk.PublicKey = ecdhKey + + return +} + +func (pk *PublicKey) parseEdDSA(r io.Reader) (err error) { + pk.oid = new(encoding.OID) + if _, err = pk.oid.ReadFrom(r); err != nil { + return + } + curveInfo := ecc.FindByOid(pk.oid) + if curveInfo == nil { + return errors.UnsupportedError(fmt.Sprintf("unknown oid: %x", pk.oid)) + } + + c, ok := curveInfo.Curve.(ecc.EdDSACurve) + if !ok { + return errors.UnsupportedError(fmt.Sprintf("unsupported oid: %x", pk.oid)) + } + + pk.p = new(encoding.MPI) + if _, err = pk.p.ReadFrom(r); err != nil { + return + } + + pub := eddsa.NewPublicKey(c) + + switch flag := pk.p.Bytes()[0]; flag { + case 0x04: + 
// TODO: see _gcry_ecc_eddsa_ensure_compact in libgcrypt + return errors.UnsupportedError("unsupported EdDSA compression: " + strconv.Itoa(int(flag))) + case 0x40: + err = pub.UnmarshalPoint(pk.p.Bytes()) + default: + return errors.UnsupportedError("unsupported EdDSA compression: " + strconv.Itoa(int(flag))) + } + + pk.PublicKey = pub + return +} + +// SerializeForHash serializes the PublicKey to w with the special packet +// header format needed for hashing. +func (pk *PublicKey) SerializeForHash(w io.Writer) error { + pk.SerializeSignaturePrefix(w) + return pk.serializeWithoutHeaders(w) +} + +// SerializeSignaturePrefix writes the prefix for this public key to the given Writer. +// The prefix is used when calculating a signature over this public key. See +// RFC 4880, section 5.2.4. +func (pk *PublicKey) SerializeSignaturePrefix(w io.Writer) { + var pLength = pk.algorithmSpecificByteCount() + if pk.Version == 5 { + pLength += 10 // version, timestamp (4), algorithm, key octet count (4). + w.Write([]byte{ + 0x9A, + byte(pLength >> 24), + byte(pLength >> 16), + byte(pLength >> 8), + byte(pLength), + }) + return + } + pLength += 6 + w.Write([]byte{0x99, byte(pLength >> 8), byte(pLength)}) +} + +func (pk *PublicKey) Serialize(w io.Writer) (err error) { + length := 6 // 6 byte header + length += pk.algorithmSpecificByteCount() + if pk.Version == 5 { + length += 4 // octet key count + } + packetType := packetTypePublicKey + if pk.IsSubkey { + packetType = packetTypePublicSubkey + } + err = serializeHeader(w, packetType, length) + if err != nil { + return + } + return pk.serializeWithoutHeaders(w) +} + +func (pk *PublicKey) algorithmSpecificByteCount() int { + length := 0 + switch pk.PubKeyAlgo { + case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: + length += int(pk.n.EncodedLength()) + length += int(pk.e.EncodedLength()) + case PubKeyAlgoDSA: + length += int(pk.p.EncodedLength()) + length += int(pk.q.EncodedLength()) + length += 
int(pk.g.EncodedLength()) + length += int(pk.y.EncodedLength()) + case PubKeyAlgoElGamal: + length += int(pk.p.EncodedLength()) + length += int(pk.g.EncodedLength()) + length += int(pk.y.EncodedLength()) + case PubKeyAlgoECDSA: + length += int(pk.oid.EncodedLength()) + length += int(pk.p.EncodedLength()) + case PubKeyAlgoECDH: + length += int(pk.oid.EncodedLength()) + length += int(pk.p.EncodedLength()) + length += int(pk.kdf.EncodedLength()) + case PubKeyAlgoEdDSA: + length += int(pk.oid.EncodedLength()) + length += int(pk.p.EncodedLength()) + default: + panic("unknown public key algorithm") + } + return length +} + +// serializeWithoutHeaders marshals the PublicKey to w in the form of an +// OpenPGP public key packet, not including the packet header. +func (pk *PublicKey) serializeWithoutHeaders(w io.Writer) (err error) { + t := uint32(pk.CreationTime.Unix()) + if _, err = w.Write([]byte{ + byte(pk.Version), + byte(t >> 24), byte(t >> 16), byte(t >> 8), byte(t), + byte(pk.PubKeyAlgo), + }); err != nil { + return + } + + if pk.Version == 5 { + n := pk.algorithmSpecificByteCount() + if _, err = w.Write([]byte{ + byte(n >> 24), byte(n >> 16), byte(n >> 8), byte(n), + }); err != nil { + return + } + } + + switch pk.PubKeyAlgo { + case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: + if _, err = w.Write(pk.n.EncodedBytes()); err != nil { + return + } + _, err = w.Write(pk.e.EncodedBytes()) + return + case PubKeyAlgoDSA: + if _, err = w.Write(pk.p.EncodedBytes()); err != nil { + return + } + if _, err = w.Write(pk.q.EncodedBytes()); err != nil { + return + } + if _, err = w.Write(pk.g.EncodedBytes()); err != nil { + return + } + _, err = w.Write(pk.y.EncodedBytes()) + return + case PubKeyAlgoElGamal: + if _, err = w.Write(pk.p.EncodedBytes()); err != nil { + return + } + if _, err = w.Write(pk.g.EncodedBytes()); err != nil { + return + } + _, err = w.Write(pk.y.EncodedBytes()) + return + case PubKeyAlgoECDSA: + if _, err = 
w.Write(pk.oid.EncodedBytes()); err != nil { + return + } + _, err = w.Write(pk.p.EncodedBytes()) + return + case PubKeyAlgoECDH: + if _, err = w.Write(pk.oid.EncodedBytes()); err != nil { + return + } + if _, err = w.Write(pk.p.EncodedBytes()); err != nil { + return + } + _, err = w.Write(pk.kdf.EncodedBytes()) + return + case PubKeyAlgoEdDSA: + if _, err = w.Write(pk.oid.EncodedBytes()); err != nil { + return + } + _, err = w.Write(pk.p.EncodedBytes()) + return + } + return errors.InvalidArgumentError("bad public-key algorithm") +} + +// CanSign returns true iff this public key can generate signatures +func (pk *PublicKey) CanSign() bool { + return pk.PubKeyAlgo != PubKeyAlgoRSAEncryptOnly && pk.PubKeyAlgo != PubKeyAlgoElGamal && pk.PubKeyAlgo != PubKeyAlgoECDH +} + +// VerifySignature returns nil iff sig is a valid signature, made by this +// public key, of the data hashed into signed. signed is mutated by this call. +func (pk *PublicKey) VerifySignature(signed hash.Hash, sig *Signature) (err error) { + if !pk.CanSign() { + return errors.InvalidArgumentError("public key cannot generate signatures") + } + if sig.Version == 5 && (sig.SigType == 0x00 || sig.SigType == 0x01) { + sig.AddMetadataToHashSuffix() + } + signed.Write(sig.HashSuffix) + hashBytes := signed.Sum(nil) + if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { + return errors.SignatureError("hash tag doesn't match") + } + + if pk.PubKeyAlgo != sig.PubKeyAlgo { + return errors.InvalidArgumentError("public key and signature use different algorithms") + } + + switch pk.PubKeyAlgo { + case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: + rsaPublicKey, _ := pk.PublicKey.(*rsa.PublicKey) + err = rsa.VerifyPKCS1v15(rsaPublicKey, sig.Hash, hashBytes, padToKeySize(rsaPublicKey, sig.RSASignature.Bytes())) + if err != nil { + return errors.SignatureError("RSA verification failure") + } + return nil + case PubKeyAlgoDSA: + dsaPublicKey, _ := pk.PublicKey.(*dsa.PublicKey) + // Need to truncate hashBytes 
to match FIPS 186-3 section 4.6. + subgroupSize := (dsaPublicKey.Q.BitLen() + 7) / 8 + if len(hashBytes) > subgroupSize { + hashBytes = hashBytes[:subgroupSize] + } + if !dsa.Verify(dsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.DSASigR.Bytes()), new(big.Int).SetBytes(sig.DSASigS.Bytes())) { + return errors.SignatureError("DSA verification failure") + } + return nil + case PubKeyAlgoECDSA: + ecdsaPublicKey := pk.PublicKey.(*ecdsa.PublicKey) + if !ecdsa.Verify(ecdsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.ECDSASigR.Bytes()), new(big.Int).SetBytes(sig.ECDSASigS.Bytes())) { + return errors.SignatureError("ECDSA verification failure") + } + return nil + case PubKeyAlgoEdDSA: + eddsaPublicKey := pk.PublicKey.(*eddsa.PublicKey) + if !eddsa.Verify(eddsaPublicKey, hashBytes, sig.EdDSASigR.Bytes(), sig.EdDSASigS.Bytes()) { + return errors.SignatureError("EdDSA verification failure") + } + return nil + default: + return errors.SignatureError("Unsupported public key algorithm used in signature") + } +} + +// keySignatureHash returns a Hash of the message that needs to be signed for +// pk to assert a subkey relationship to signed. +func keySignatureHash(pk, signed signingKey, hashFunc crypto.Hash) (h hash.Hash, err error) { + if !hashFunc.Available() { + return nil, errors.UnsupportedError("hash function") + } + h = hashFunc.New() + + // RFC 4880, section 5.2.4 + err = pk.SerializeForHash(h) + if err != nil { + return nil, err + } + + err = signed.SerializeForHash(h) + return +} + +// VerifyKeySignature returns nil iff sig is a valid signature, made by this +// public key, of signed. +func (pk *PublicKey) VerifyKeySignature(signed *PublicKey, sig *Signature) error { + h, err := keySignatureHash(pk, signed, sig.Hash) + if err != nil { + return err + } + if err = pk.VerifySignature(h, sig); err != nil { + return err + } + + if sig.FlagSign { + // Signing subkeys must be cross-signed. See + // https://www.gnupg.org/faq/subkey-cross-certify.html. 
+ if sig.EmbeddedSignature == nil { + return errors.StructuralError("signing subkey is missing cross-signature") + } + // Verify the cross-signature. This is calculated over the same + // data as the main signature, so we cannot just recursively + // call signed.VerifyKeySignature(...) + if h, err = keySignatureHash(pk, signed, sig.EmbeddedSignature.Hash); err != nil { + return errors.StructuralError("error while hashing for cross-signature: " + err.Error()) + } + if err := signed.VerifySignature(h, sig.EmbeddedSignature); err != nil { + return errors.StructuralError("error while verifying cross-signature: " + err.Error()) + } + } + + return nil +} + +func keyRevocationHash(pk signingKey, hashFunc crypto.Hash) (h hash.Hash, err error) { + if !hashFunc.Available() { + return nil, errors.UnsupportedError("hash function") + } + h = hashFunc.New() + + // RFC 4880, section 5.2.4 + err = pk.SerializeForHash(h) + + return +} + +// VerifyRevocationSignature returns nil iff sig is a valid signature, made by this +// public key. +func (pk *PublicKey) VerifyRevocationSignature(sig *Signature) (err error) { + h, err := keyRevocationHash(pk, sig.Hash) + if err != nil { + return err + } + return pk.VerifySignature(h, sig) +} + +// VerifySubkeyRevocationSignature returns nil iff sig is a valid subkey revocation signature, +// made by this public key, of signed. +func (pk *PublicKey) VerifySubkeyRevocationSignature(sig *Signature, signed *PublicKey) (err error) { + h, err := keySignatureHash(pk, signed, sig.Hash) + if err != nil { + return err + } + return pk.VerifySignature(h, sig) +} + +// userIdSignatureHash returns a Hash of the message that needs to be signed +// to assert that pk is a valid key for id. 
+func userIdSignatureHash(id string, pk *PublicKey, hashFunc crypto.Hash) (h hash.Hash, err error) { + if !hashFunc.Available() { + return nil, errors.UnsupportedError("hash function") + } + h = hashFunc.New() + + // RFC 4880, section 5.2.4 + pk.SerializeSignaturePrefix(h) + pk.serializeWithoutHeaders(h) + + var buf [5]byte + buf[0] = 0xb4 + buf[1] = byte(len(id) >> 24) + buf[2] = byte(len(id) >> 16) + buf[3] = byte(len(id) >> 8) + buf[4] = byte(len(id)) + h.Write(buf[:]) + h.Write([]byte(id)) + + return +} + +// VerifyUserIdSignature returns nil iff sig is a valid signature, made by this +// public key, that id is the identity of pub. +func (pk *PublicKey) VerifyUserIdSignature(id string, pub *PublicKey, sig *Signature) (err error) { + h, err := userIdSignatureHash(id, pub, sig.Hash) + if err != nil { + return err + } + return pk.VerifySignature(h, sig) +} + +// KeyIdString returns the public key's KeyId in capital hex +// (e.g. "6C7EE1B8621CC013"). +func (pk *PublicKey) KeyIdString() string { + return fmt.Sprintf("%X", pk.Fingerprint[12:20]) +} + +// KeyIdShortString returns the short form of the public key's KeyId +// in capital hex, as shown by gpg --list-keys (e.g. "621CC013"). +func (pk *PublicKey) KeyIdShortString() string { + return fmt.Sprintf("%X", pk.Fingerprint[16:20]) +} + +// BitLength returns the bit length for the given public key. 
+func (pk *PublicKey) BitLength() (bitLength uint16, err error) { + switch pk.PubKeyAlgo { + case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: + bitLength = pk.n.BitLength() + case PubKeyAlgoDSA: + bitLength = pk.p.BitLength() + case PubKeyAlgoElGamal: + bitLength = pk.p.BitLength() + case PubKeyAlgoECDSA: + bitLength = pk.p.BitLength() + case PubKeyAlgoECDH: + bitLength = pk.p.BitLength() + case PubKeyAlgoEdDSA: + bitLength = pk.p.BitLength() + default: + err = errors.InvalidArgumentError("bad public-key algorithm") + } + return +} + +// KeyExpired returns whether sig is a self-signature of a key that has +// expired or is created in the future. +func (pk *PublicKey) KeyExpired(sig *Signature, currentTime time.Time) bool { + if pk.CreationTime.After(currentTime) { + return true + } + if sig.KeyLifetimeSecs == nil || *sig.KeyLifetimeSecs == 0 { + return false + } + expiry := pk.CreationTime.Add(time.Duration(*sig.KeyLifetimeSecs) * time.Second) + return currentTime.After(expiry) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/public_key_test_data.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/public_key_test_data.go new file mode 100644 index 00000000000..b255f1f6f8f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/public_key_test_data.go @@ -0,0 +1,24 @@ +package packet + +const rsaFingerprintHex = "5fb74b1d03b1e3cb31bc2f8aa34d7e18c20c31bb" + +const rsaPkDataHex = "988d044d3c5c10010400b1d13382944bd5aba23a4312968b5095d14f947f600eb478e14a6fcb16b0e0cac764884909c020bc495cfcc39a935387c661507bdb236a0612fb582cac3af9b29cc2c8c70090616c41b662f4da4c1201e195472eb7f4ae1ccbcbf9940fe21d985e379a5563dde5b9a23d35f1cfaa5790da3b79db26f23695107bfaca8e7b5bcd0011010001" + +const dsaFingerprintHex = "eece4c094db002103714c63c8e8fbe54062f19ed" + +const dsaPkDataHex = 
"9901a2044d432f89110400cd581334f0d7a1e1bdc8b9d6d8c0baf68793632735d2bb0903224cbaa1dfbf35a60ee7a13b92643421e1eb41aa8d79bea19a115a677f6b8ba3c7818ce53a6c2a24a1608bd8b8d6e55c5090cbde09dd26e356267465ae25e69ec8bdd57c7bbb2623e4d73336f73a0a9098f7f16da2e25252130fd694c0e8070c55a812a423ae7f00a0ebf50e70c2f19c3520a551bd4b08d30f23530d3d03ff7d0bf4a53a64a09dc5e6e6e35854b7d70c882b0c60293401958b1bd9e40abec3ea05ba87cf64899299d4bd6aa7f459c201d3fbbd6c82004bdc5e8a9eb8082d12054cc90fa9d4ec251a843236a588bf49552441817436c4f43326966fe85447d4e6d0acf8fa1ef0f014730770603ad7634c3088dc52501c237328417c31c89ed70400b2f1a98b0bf42f11fefc430704bebbaa41d9f355600c3facee1e490f64208e0e094ea55e3a598a219a58500bf78ac677b670a14f4e47e9cf8eab4f368cc1ddcaa18cc59309d4cc62dd4f680e73e6cc3e1ce87a84d0925efbcb26c575c093fc42eecf45135fabf6403a25c2016e1774c0484e440a18319072c617cc97ac0a3bb0" + +const ecdsaFingerprintHex = "9892270b38b8980b05c8d56d43fe956c542ca00b" + +const ecdsaPkDataHex = "9893045071c29413052b8104002304230401f4867769cedfa52c325018896245443968e52e51d0c2df8d939949cb5b330f2921711fbee1c9b9dddb95d15cb0255e99badeddda7cc23d9ddcaacbc290969b9f24019375d61c2e4e3b36953a28d8b2bc95f78c3f1d592fb24499be348656a7b17e3963187b4361afe497bc5f9f81213f04069f8e1fb9e6a6290ae295ca1a92b894396cb4" + +const ecdhFingerprintHex = "722354df2475a42164d1d49faa8b938f9a201946" + +const ecdhPkDataHex = "b90073044d53059212052b810400220303042faa84024a20b6735c4897efa5bfb41bf85b7eefeab5ca0cb9ffc8ea04a46acb25534a577694f9e25340a4ab5223a9dd1eda530c8aa2e6718db10d7e672558c7736fe09369ea5739a2a3554bf16d41faa50562f11c6d39bbd5dffb6b9a9ec91803010909" + +const eddsaFingerprintHex = "b2d5e5ec0e6deca6bc8eeeb00907e75e1dd99ad8" + +const eddsaPkDataHex = 
"98330456e2132b16092b06010401da470f01010740bbda39266affa511a8c2d02edf690fb784b0499c4406185811a163539ef11dc1b41d74657374696e67203c74657374696e674074657374696e672e636f6d3e8879041316080021050256e2132b021b03050b09080702061508090a0b020416020301021e01021780000a09100907e75e1dd99ad86d0c00fe39d2008359352782bc9b61ac382584cd8eff3f57a18c2287e3afeeb05d1f04ba00fe2d0bc1ddf3ff8adb9afa3e7d9287244b4ec567f3db4d60b74a9b5465ed528203" + +// Source: https://sites.google.com/site/brainhub/pgpecckeys#TOC-ECC-NIST-P-384-key +const ecc384PubHex = `99006f044d53059213052b81040022030304f6b8c5aced5b84ef9f4a209db2e4a9dfb70d28cb8c10ecd57674a9fa5a67389942b62d5e51367df4c7bfd3f8e500feecf07ed265a621a8ebbbe53e947ec78c677eba143bd1533c2b350e1c29f82313e1e1108eba063be1e64b10e6950e799c2db42465635f6473615f64685f333834203c6f70656e70677040627261696e6875622e6f72673e8900cb04101309005305024d530592301480000000002000077072656665727265642d656d61696c2d656e636f64696e67407067702e636f6d7067706d696d65040b090807021901051b03000000021602051e010000000415090a08000a0910098033880f54719fca2b0180aa37350968bd5f115afd8ce7bc7b103822152dbff06d0afcda835329510905b98cb469ba208faab87c7412b799e7b633017f58364ea480e8a1a3f253a0c5f22c446e8be9a9fce6210136ee30811abbd49139de28b5bdf8dc36d06ae748579e9ff503b90073044d53059212052b810400220303042faa84024a20b6735c4897efa5bfb41bf85b7eefeab5ca0cb9ffc8ea04a46acb25534a577694f9e25340a4ab5223a9dd1eda530c8aa2e6718db10d7e672558c7736fe09369ea5739a2a3554bf16d41faa50562f11c6d39bbd5dffb6b9a9ec9180301090989008404181309000c05024d530592051b0c000000000a0910098033880f54719f80970180eee7a6d8fcee41ee4f9289df17f9bcf9d955dca25c583b94336f3a2b2d4986dc5cf417b8d2dc86f741a9e1a6d236c0e3017d1c76575458a0cfb93ae8a2b274fcc65ceecd7a91eec83656ba13219969f06945b48c56bd04152c3a0553c5f2f4bd1267` diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/reader.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/reader.go similarity index 83% rename from 
.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/reader.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/reader.go index 34bc7c613e6..10215fe5f23 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/reader.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/reader.go @@ -5,8 +5,9 @@ package packet import ( - "golang.org/x/crypto/openpgp/errors" "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" ) // Reader reads packets from an io.Reader and allows packets to be 'unread' so @@ -42,9 +43,18 @@ func (r *Reader) Next() (p Packet, err error) { r.readers = r.readers[:len(r.readers)-1] continue } - if _, ok := err.(errors.UnknownPacketTypeError); !ok { - return nil, err + // TODO: Add strict mode that rejects unknown packets, instead of ignoring them. + if _, ok := err.(errors.UnknownPacketTypeError); ok { + continue + } + if _, ok := err.(errors.UnsupportedError); ok { + switch p.(type) { + case *SymmetricallyEncrypted, *AEADEncrypted, *Compressed, *LiteralData: + return nil, err + } + continue } + return nil, err } return nil, io.EOF diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/signature.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/signature.go similarity index 51% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/signature.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/signature.go index b2a24a53232..9f0b1b1978c 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/signature.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/signature.go @@ -8,17 +8,17 @@ import ( "bytes" "crypto" "crypto/dsa" - "crypto/ecdsa" - "encoding/asn1" "encoding/binary" "hash" "io" - "math/big" "strconv" "time" - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/s2k" + "github.com/ProtonMail/go-crypto/openpgp/ecdsa" + 
"github.com/ProtonMail/go-crypto/openpgp/eddsa" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" + "github.com/ProtonMail/go-crypto/openpgp/internal/encoding" ) const ( @@ -27,10 +27,15 @@ const ( KeyFlagSign KeyFlagEncryptCommunications KeyFlagEncryptStorage + KeyFlagSplitKey + KeyFlagAuthenticate + _ + KeyFlagGroupKey ) // Signature represents a signature. See RFC 4880, section 5.2. type Signature struct { + Version int SigType SignatureType PubKeyAlgo PublicKeyAlgorithm Hash crypto.Hash @@ -39,12 +44,19 @@ type Signature struct { HashSuffix []byte // HashTag contains the first two bytes of the hash for fast rejection // of bad signed data. - HashTag [2]byte + HashTag [2]byte + + // Metadata includes format, filename and time, and is protected by v5 + // signatures of type 0x00 or 0x01. This metadata is included into the hash + // computation; if nil, six 0x00 bytes are used instead. See section 5.2.4. + Metadata *LiteralData + CreationTime time.Time - RSASignature parsedMPI - DSASigR, DSASigS parsedMPI - ECDSASigR, ECDSASigS parsedMPI + RSASignature encoding.Field + DSASigR, DSASigS encoding.Field + ECDSASigR, ECDSASigS encoding.Field + EdDSASigR, EdDSASigS encoding.Field // rawSubpackets contains the unparsed subpackets, in order. rawSubpackets []outputSubpacket @@ -54,22 +66,44 @@ type Signature struct { SigLifetimeSecs, KeyLifetimeSecs *uint32 PreferredSymmetric, PreferredHash, PreferredCompression []uint8 + PreferredCipherSuites [][2]uint8 IssuerKeyId *uint64 + IssuerFingerprint []byte + SignerUserId *string IsPrimaryId *bool + Notations []*Notation + + // TrustLevel and TrustAmount can be set by the signer to assert that + // the key is not only valid but also trustworthy at the specified + // level. + // See RFC 4880, section 5.2.3.13 for details. 
+ TrustLevel TrustLevel + TrustAmount TrustAmount + + // TrustRegularExpression can be used in conjunction with trust Signature + // packets to limit the scope of the trust that is extended. + // See RFC 4880, section 5.2.3.14 for details. + TrustRegularExpression *string + + // PolicyURI can be set to the URI of a document that describes the + // policy under which the signature was issued. See RFC 4880, section + // 5.2.3.20 for details. + PolicyURI string // FlagsValid is set if any flags were given. See RFC 4880, section // 5.2.3.21 for details. - FlagsValid bool - FlagCertify, FlagSign, FlagEncryptCommunications, FlagEncryptStorage bool + FlagsValid bool + FlagCertify, FlagSign, FlagEncryptCommunications, FlagEncryptStorage, FlagSplitKey, FlagAuthenticate, FlagGroupKey bool // RevocationReason is set if this signature has been revoked. // See RFC 4880, section 5.2.3.23 for details. - RevocationReason *uint8 + RevocationReason *ReasonForRevocation RevocationReasonText string - // MDC is set if this signature has a feature packet that indicates - // support for MDC subpackets. - MDC bool + // In a self-signature, these flags are set if there is a features subpacket + // indicating that the issuer implementation supports these features; + // see https://datatracker.ietf.org/doc/html/draft-ietf-openpgp-crypto-refresh#features-subpacket + SEIPDv1, SEIPDv2 bool // EmbeddedSignature, if non-nil, is a signature of the parent key, by // this key. 
This prevents an attacker from claiming another's signing @@ -86,11 +120,11 @@ func (sig *Signature) parse(r io.Reader) (err error) { if err != nil { return } - if buf[0] != 4 { + if buf[0] != 4 && buf[0] != 5 { err = errors.UnsupportedError("signature packet version " + strconv.Itoa(int(buf[0]))) return } - + sig.Version = int(buf[0]) _, err = readFull(r, buf[:5]) if err != nil { return @@ -98,36 +132,34 @@ func (sig *Signature) parse(r io.Reader) (err error) { sig.SigType = SignatureType(buf[0]) sig.PubKeyAlgo = PublicKeyAlgorithm(buf[1]) switch sig.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA: + case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA, PubKeyAlgoEdDSA: default: err = errors.UnsupportedError("public key algorithm " + strconv.Itoa(int(sig.PubKeyAlgo))) return } var ok bool - sig.Hash, ok = s2k.HashIdToHash(buf[2]) + + if sig.Version < 5 { + sig.Hash, ok = algorithm.HashIdToHashWithSha1(buf[2]) + } else { + sig.Hash, ok = algorithm.HashIdToHash(buf[2]) + } + if !ok { return errors.UnsupportedError("hash function " + strconv.Itoa(int(buf[2]))) } hashedSubpacketsLength := int(buf[3])<<8 | int(buf[4]) - l := 6 + hashedSubpacketsLength - sig.HashSuffix = make([]byte, l+6) - sig.HashSuffix[0] = 4 - copy(sig.HashSuffix[1:], buf[:5]) - hashedSubpackets := sig.HashSuffix[6:l] + hashedSubpackets := make([]byte, hashedSubpacketsLength) _, err = readFull(r, hashedSubpackets) if err != nil { return } - // See RFC 4880, section 5.2.4 - trailer := sig.HashSuffix[l:] - trailer[0] = 4 - trailer[1] = 0xff - trailer[2] = uint8(l >> 24) - trailer[3] = uint8(l >> 16) - trailer[4] = uint8(l >> 8) - trailer[5] = uint8(l) + err = sig.buildHashSuffix(hashedSubpackets) + if err != nil { + return + } err = parseSignatureSubpackets(sig, hashedSubpackets, true) if err != nil { @@ -156,16 +188,33 @@ func (sig *Signature) parse(r io.Reader) (err error) { switch sig.PubKeyAlgo { case PubKeyAlgoRSA, 
PubKeyAlgoRSASignOnly: - sig.RSASignature.bytes, sig.RSASignature.bitLength, err = readMPI(r) + sig.RSASignature = new(encoding.MPI) + _, err = sig.RSASignature.ReadFrom(r) case PubKeyAlgoDSA: - sig.DSASigR.bytes, sig.DSASigR.bitLength, err = readMPI(r) - if err == nil { - sig.DSASigS.bytes, sig.DSASigS.bitLength, err = readMPI(r) + sig.DSASigR = new(encoding.MPI) + if _, err = sig.DSASigR.ReadFrom(r); err != nil { + return } + + sig.DSASigS = new(encoding.MPI) + _, err = sig.DSASigS.ReadFrom(r) case PubKeyAlgoECDSA: - sig.ECDSASigR.bytes, sig.ECDSASigR.bitLength, err = readMPI(r) - if err == nil { - sig.ECDSASigS.bytes, sig.ECDSASigS.bitLength, err = readMPI(r) + sig.ECDSASigR = new(encoding.MPI) + if _, err = sig.ECDSASigR.ReadFrom(r); err != nil { + return + } + + sig.ECDSASigS = new(encoding.MPI) + _, err = sig.ECDSASigS.ReadFrom(r) + case PubKeyAlgoEdDSA: + sig.EdDSASigR = new(encoding.MPI) + if _, err = sig.EdDSASigR.ReadFrom(r); err != nil { + return + } + + sig.EdDSASigS = new(encoding.MPI) + if _, err = sig.EdDSASigS.ReadFrom(r); err != nil { + return } default: panic("unreachable") @@ -195,16 +244,23 @@ type signatureSubpacketType uint8 const ( creationTimeSubpacket signatureSubpacketType = 2 signatureExpirationSubpacket signatureSubpacketType = 3 + trustSubpacket signatureSubpacketType = 5 + regularExpressionSubpacket signatureSubpacketType = 6 keyExpirationSubpacket signatureSubpacketType = 9 prefSymmetricAlgosSubpacket signatureSubpacketType = 11 issuerSubpacket signatureSubpacketType = 16 + notationDataSubpacket signatureSubpacketType = 20 prefHashAlgosSubpacket signatureSubpacketType = 21 prefCompressionSubpacket signatureSubpacketType = 22 primaryUserIdSubpacket signatureSubpacketType = 25 + policyUriSubpacket signatureSubpacketType = 26 keyFlagsSubpacket signatureSubpacketType = 27 + signerUserIdSubpacket signatureSubpacketType = 28 reasonForRevocationSubpacket signatureSubpacketType = 29 featuresSubpacket signatureSubpacketType = 30 
embeddedSignatureSubpacket signatureSubpacketType = 32 + issuerFingerprintSubpacket signatureSubpacketType = 33 + prefCipherSuitesSubpacket signatureSubpacketType = 39 ) // parseSignatureSubpacket parses a single subpacket. len(subpacket) is >= 1. @@ -248,12 +304,14 @@ func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (r isCritical = subpacket[0]&0x80 == 0x80 subpacket = subpacket[1:] sig.rawSubpackets = append(sig.rawSubpackets, outputSubpacket{isHashed, packetType, isCritical, subpacket}) + if !isHashed && + packetType != issuerSubpacket && + packetType != issuerFingerprintSubpacket && + packetType != embeddedSignatureSubpacket { + return + } switch packetType { case creationTimeSubpacket: - if !isHashed { - err = errors.StructuralError("signature creation time in non-hashed area") - return - } if len(subpacket) != 4 { err = errors.StructuralError("signature creation time not four bytes") return @@ -262,20 +320,27 @@ func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (r sig.CreationTime = time.Unix(int64(t), 0) case signatureExpirationSubpacket: // Signature expiration time, section 5.2.3.10 - if !isHashed { - return - } if len(subpacket) != 4 { err = errors.StructuralError("expiration subpacket with bad length") return } sig.SigLifetimeSecs = new(uint32) *sig.SigLifetimeSecs = binary.BigEndian.Uint32(subpacket) - case keyExpirationSubpacket: - // Key expiration time, section 5.2.3.6 - if !isHashed { + case trustSubpacket: + // Trust level and amount, section 5.2.3.13 + sig.TrustLevel = TrustLevel(subpacket[0]) + sig.TrustAmount = TrustAmount(subpacket[1]) + case regularExpressionSubpacket: + // Trust regular expression, section 5.2.3.14 + // RFC specifies the string should be null-terminated; remove a null byte from the end + if subpacket[len(subpacket)-1] != 0x00 { + err = errors.StructuralError("expected regular expression to be null-terminated") return } + trustRegularExpression := 
string(subpacket[:len(subpacket)-1]) + sig.TrustRegularExpression = &trustRegularExpression + case keyExpirationSubpacket: + // Key expiration time, section 5.2.3.6 if len(subpacket) != 4 { err = errors.StructuralError("key expiration subpacket with bad length") return @@ -284,38 +349,52 @@ func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (r *sig.KeyLifetimeSecs = binary.BigEndian.Uint32(subpacket) case prefSymmetricAlgosSubpacket: // Preferred symmetric algorithms, section 5.2.3.7 - if !isHashed { - return - } sig.PreferredSymmetric = make([]byte, len(subpacket)) copy(sig.PreferredSymmetric, subpacket) case issuerSubpacket: // Issuer, section 5.2.3.5 + if sig.Version > 4 { + err = errors.StructuralError("issuer subpacket found in v5 key") + return + } if len(subpacket) != 8 { err = errors.StructuralError("issuer subpacket with bad length") return } sig.IssuerKeyId = new(uint64) *sig.IssuerKeyId = binary.BigEndian.Uint64(subpacket) - case prefHashAlgosSubpacket: - // Preferred hash algorithms, section 5.2.3.8 - if !isHashed { + case notationDataSubpacket: + // Notation data, section 5.2.3.16 + if len(subpacket) < 8 { + err = errors.StructuralError("notation data subpacket with bad length") return } + + nameLength := uint32(subpacket[4])<<8 | uint32(subpacket[5]) + valueLength := uint32(subpacket[6])<<8 | uint32(subpacket[7]) + if len(subpacket) != int(nameLength) + int(valueLength) + 8 { + err = errors.StructuralError("notation data subpacket with bad length") + return + } + + notation := Notation{ + IsHumanReadable: (subpacket[0] & 0x80) == 0x80, + Name: string(subpacket[8: (nameLength + 8)]), + Value: subpacket[(nameLength + 8) : (valueLength + nameLength + 8)], + IsCritical: isCritical, + } + + sig.Notations = append(sig.Notations, ¬ation) + case prefHashAlgosSubpacket: + // Preferred hash algorithms, section 5.2.3.8 sig.PreferredHash = make([]byte, len(subpacket)) copy(sig.PreferredHash, subpacket) case prefCompressionSubpacket: // 
Preferred compression algorithms, section 5.2.3.9 - if !isHashed { - return - } sig.PreferredCompression = make([]byte, len(subpacket)) copy(sig.PreferredCompression, subpacket) case primaryUserIdSubpacket: // Primary User ID, section 5.2.3.19 - if !isHashed { - return - } if len(subpacket) != 1 { err = errors.StructuralError("primary user id subpacket with bad length") return @@ -326,9 +405,6 @@ func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (r } case keyFlagsSubpacket: // Key flags, section 5.2.3.21 - if !isHashed { - return - } if len(subpacket) == 0 { err = errors.StructuralError("empty key flags subpacket") return @@ -346,24 +422,40 @@ func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (r if subpacket[0]&KeyFlagEncryptStorage != 0 { sig.FlagEncryptStorage = true } + if subpacket[0]&KeyFlagSplitKey != 0 { + sig.FlagSplitKey = true + } + if subpacket[0]&KeyFlagAuthenticate != 0 { + sig.FlagAuthenticate = true + } + if subpacket[0]&KeyFlagGroupKey != 0 { + sig.FlagGroupKey = true + } + case signerUserIdSubpacket: + userId := string(subpacket) + sig.SignerUserId = &userId case reasonForRevocationSubpacket: // Reason For Revocation, section 5.2.3.23 - if !isHashed { - return - } if len(subpacket) == 0 { err = errors.StructuralError("empty revocation reason subpacket") return } - sig.RevocationReason = new(uint8) - *sig.RevocationReason = subpacket[0] + sig.RevocationReason = new(ReasonForRevocation) + *sig.RevocationReason = ReasonForRevocation(subpacket[0]) sig.RevocationReasonText = string(subpacket[1:]) case featuresSubpacket: // Features subpacket, section 5.2.3.24 specifies a very general // mechanism for OpenPGP implementations to signal support for new - // features. In practice, the subpacket is used exclusively to - // indicate support for MDC-protected encryption. - sig.MDC = len(subpacket) >= 1 && subpacket[0]&1 == 1 + // features. 
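The features-subpacket change above replaces the old single-purpose `sig.MDC` flag with per-bit feature flags. As a standalone sketch of that bit layout (the constant and function names here are illustrative, not part of the package; bit values follow the diff: `0x01` for v1 SEIPD/MDC, `0x08` for v2 SEIPD, with `0x02` and `0x04` reserved):

```go
package main

import "fmt"

const (
	featureSEIPDv1 = 0x01 // MDC-protected encryption (RFC 4880 features bit)
	featureSEIPDv2 = 0x08 // v2 SEIPD per crypto-refresh; 0x02 and 0x04 are reserved
)

// parseFeatures mirrors the subpacket logic above: each supported
// feature is a single bit in the first octet of the subpacket body.
func parseFeatures(b byte) (seipdV1, seipdV2 bool) {
	return b&featureSEIPDv1 != 0, b&featureSEIPDv2 != 0
}

func main() {
	v1, v2 := parseFeatures(0x09) // both bits set
	fmt.Println(v1, v2)
}
```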
+ if len(subpacket) > 0 { + if subpacket[0]&0x01 != 0 { + sig.SEIPDv1 = true + } + // 0x02 and 0x04 are reserved + if subpacket[0]&0x08 != 0 { + sig.SEIPDv2 = true + } + } case embeddedSignatureSubpacket: // Only usage is in signatures that cross-certify // signing subkeys. section 5.2.3.26 describes the @@ -382,6 +474,35 @@ func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (r if sigType := sig.EmbeddedSignature.SigType; sigType != SigTypePrimaryKeyBinding { return nil, errors.StructuralError("cross-signature has unexpected type " + strconv.Itoa(int(sigType))) } + case policyUriSubpacket: + // Policy URI, section 5.2.3.20 + sig.PolicyURI = string(subpacket) + case issuerFingerprintSubpacket: + v, l := subpacket[0], len(subpacket[1:]) + if v == 5 && l != 32 || v != 5 && l != 20 { + return nil, errors.StructuralError("bad fingerprint length") + } + sig.IssuerFingerprint = make([]byte, l) + copy(sig.IssuerFingerprint, subpacket[1:]) + sig.IssuerKeyId = new(uint64) + if v == 5 { + *sig.IssuerKeyId = binary.BigEndian.Uint64(subpacket[1:9]) + } else { + *sig.IssuerKeyId = binary.BigEndian.Uint64(subpacket[13:21]) + } + case prefCipherSuitesSubpacket: + // Preferred AEAD cipher suites + // See https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#name-preferred-aead-ciphersuites + if len(subpacket) % 2 != 0 { + err = errors.StructuralError("invalid aead cipher suite length") + return + } + + sig.PreferredCipherSuites = make([][2]byte, len(subpacket) / 2) + + for i := 0; i < len(subpacket) / 2; i++ { + sig.PreferredCipherSuites[i] = [2]uint8{subpacket[2*i], subpacket[2*i+1]} + } default: if isCritical { err = errors.UnsupportedError("unknown critical signature subpacket type " + strconv.Itoa(int(packetType))) @@ -406,6 +527,13 @@ func subpacketLengthLength(length int) int { return 5 } +func (sig *Signature) CheckKeyIdOrFingerprint(pk *PublicKey) bool { + if sig.IssuerFingerprint != nil && len(sig.IssuerFingerprint) >= 20 { + 
return bytes.Equal(sig.IssuerFingerprint, pk.Fingerprint) + } + return sig.IssuerKeyId != nil && *sig.IssuerKeyId == pk.KeyId +} + // serializeSubpacketLength marshals the given length into to. func serializeSubpacketLength(to []byte, length int) int { // RFC 4880, Section 4.2.2. @@ -446,6 +574,9 @@ func serializeSubpackets(to []byte, subpackets []outputSubpacket, hashed bool) { if subpacket.hashed == hashed { n := serializeSubpacketLength(to, len(subpacket.contents)+1) to[n] = byte(subpacket.subpacketType) + if subpacket.isCritical { + to[n] |= 0x80 + } to = to[1+n:] n = copy(to, subpacket.contents) to = to[n:] @@ -454,49 +585,74 @@ func serializeSubpackets(to []byte, subpackets []outputSubpacket, hashed bool) { return } -// KeyExpired returns whether sig is a self-signature of a key that has -// expired. -func (sig *Signature) KeyExpired(currentTime time.Time) bool { - if sig.KeyLifetimeSecs == nil { +// SigExpired returns whether sig is a signature that has expired or is created +// in the future. +func (sig *Signature) SigExpired(currentTime time.Time) bool { + if sig.CreationTime.After(currentTime) { + return true + } + if sig.SigLifetimeSecs == nil || *sig.SigLifetimeSecs == 0 { return false } - expiry := sig.CreationTime.Add(time.Duration(*sig.KeyLifetimeSecs) * time.Second) + expiry := sig.CreationTime.Add(time.Duration(*sig.SigLifetimeSecs) * time.Second) return currentTime.After(expiry) } // buildHashSuffix constructs the HashSuffix member of sig in preparation for signing. 
-func (sig *Signature) buildHashSuffix() (err error) { - hashedSubpacketsLen := subpacketsLength(sig.outSubpackets, true) - +func (sig *Signature) buildHashSuffix(hashedSubpackets []byte) (err error) { + var hashId byte var ok bool - l := 6 + hashedSubpacketsLen - sig.HashSuffix = make([]byte, l+6) - sig.HashSuffix[0] = 4 - sig.HashSuffix[1] = uint8(sig.SigType) - sig.HashSuffix[2] = uint8(sig.PubKeyAlgo) - sig.HashSuffix[3], ok = s2k.HashToHashId(sig.Hash) + + if sig.Version < 5 { + hashId, ok = algorithm.HashToHashIdWithSha1(sig.Hash) + } else { + hashId, ok = algorithm.HashToHashId(sig.Hash) + } + if !ok { sig.HashSuffix = nil return errors.InvalidArgumentError("hash cannot be represented in OpenPGP: " + strconv.Itoa(int(sig.Hash))) } - sig.HashSuffix[4] = byte(hashedSubpacketsLen >> 8) - sig.HashSuffix[5] = byte(hashedSubpacketsLen) - serializeSubpackets(sig.HashSuffix[6:l], sig.outSubpackets, true) - trailer := sig.HashSuffix[l:] - trailer[0] = 4 - trailer[1] = 0xff - trailer[2] = byte(l >> 24) - trailer[3] = byte(l >> 16) - trailer[4] = byte(l >> 8) - trailer[5] = byte(l) + + hashedFields := bytes.NewBuffer([]byte{ + uint8(sig.Version), + uint8(sig.SigType), + uint8(sig.PubKeyAlgo), + uint8(hashId), + uint8(len(hashedSubpackets) >> 8), + uint8(len(hashedSubpackets)), + }) + hashedFields.Write(hashedSubpackets) + + var l uint64 = uint64(6 + len(hashedSubpackets)) + if sig.Version == 5 { + hashedFields.Write([]byte{0x05, 0xff}) + hashedFields.Write([]byte{ + uint8(l >> 56), uint8(l >> 48), uint8(l >> 40), uint8(l >> 32), + uint8(l >> 24), uint8(l >> 16), uint8(l >> 8), uint8(l), + }) + } else { + hashedFields.Write([]byte{0x04, 0xff}) + hashedFields.Write([]byte{ + uint8(l >> 24), uint8(l >> 16), uint8(l >> 8), uint8(l), + }) + } + sig.HashSuffix = make([]byte, hashedFields.Len()) + copy(sig.HashSuffix, hashedFields.Bytes()) return } func (sig *Signature) signPrepareHash(h hash.Hash) (digest []byte, err error) { - err = sig.buildHashSuffix() + 
hashedSubpacketsLen := subpacketsLength(sig.outSubpackets, true) + hashedSubpackets := make([]byte, hashedSubpacketsLen) + serializeSubpackets(hashedSubpackets, sig.outSubpackets, true) + err = sig.buildHashSuffix(hashedSubpackets) if err != nil { return } + if sig.Version == 5 && (sig.SigType == 0x00 || sig.SigType == 0x01) { + sig.AddMetadataToHashSuffix() + } h.Write(sig.HashSuffix) digest = h.Sum(nil) @@ -509,17 +665,26 @@ func (sig *Signature) signPrepareHash(h hash.Hash) (digest []byte, err error) { // On success, the signature is stored in sig. Call Serialize to write it out. // If config is nil, sensible defaults will be used. func (sig *Signature) Sign(h hash.Hash, priv *PrivateKey, config *Config) (err error) { - sig.outSubpackets = sig.buildSubpackets() + if priv.Dummy() { + return errors.ErrDummyPrivateKey("dummy key found") + } + sig.Version = priv.PublicKey.Version + sig.IssuerFingerprint = priv.PublicKey.Fingerprint + sig.outSubpackets, err = sig.buildSubpackets(priv.PublicKey) + if err != nil { + return err + } digest, err := sig.signPrepareHash(h) if err != nil { return } - switch priv.PubKeyAlgo { case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: // supports both *rsa.PrivateKey and crypto.Signer - sig.RSASignature.bytes, err = priv.PrivateKey.(crypto.Signer).Sign(config.Random(), digest, sig.Hash) - sig.RSASignature.bitLength = uint16(8 * len(sig.RSASignature.bytes)) + sigdata, err := priv.PrivateKey.(crypto.Signer).Sign(config.Random(), digest, sig.Hash) + if err == nil { + sig.RSASignature = encoding.NewMPI(sigdata) + } case PubKeyAlgoDSA: dsaPriv := priv.PrivateKey.(*dsa.PrivateKey) @@ -530,26 +695,23 @@ func (sig *Signature) Sign(h hash.Hash, priv *PrivateKey, config *Config) (err e } r, s, err := dsa.Sign(config.Random(), dsaPriv, digest) if err == nil { - sig.DSASigR.bytes = r.Bytes() - sig.DSASigR.bitLength = uint16(8 * len(sig.DSASigR.bytes)) - sig.DSASigS.bytes = s.Bytes() - sig.DSASigS.bitLength = uint16(8 * len(sig.DSASigS.bytes)) + 
sig.DSASigR = new(encoding.MPI).SetBig(r) + sig.DSASigS = new(encoding.MPI).SetBig(s) } case PubKeyAlgoECDSA: - var r, s *big.Int - if pk, ok := priv.PrivateKey.(*ecdsa.PrivateKey); ok { - // direct support, avoid asn1 wrapping/unwrapping - r, s, err = ecdsa.Sign(config.Random(), pk, digest) - } else { - var b []byte - b, err = priv.PrivateKey.(crypto.Signer).Sign(config.Random(), digest, sig.Hash) - if err == nil { - r, s, err = unwrapECDSASig(b) - } + sk := priv.PrivateKey.(*ecdsa.PrivateKey) + r, s, err := ecdsa.Sign(config.Random(), sk, digest) + + if err == nil { + sig.ECDSASigR = new(encoding.MPI).SetBig(r) + sig.ECDSASigS = new(encoding.MPI).SetBig(s) } + case PubKeyAlgoEdDSA: + sk := priv.PrivateKey.(*eddsa.PrivateKey) + r, s, err := eddsa.Sign(sk, digest) if err == nil { - sig.ECDSASigR = fromBig(r) - sig.ECDSASigS = fromBig(s) + sig.EdDSASigR = encoding.NewMPI(r) + sig.EdDSASigS = encoding.NewMPI(s) } default: err = errors.UnsupportedError("public key algorithm: " + strconv.Itoa(int(sig.PubKeyAlgo))) @@ -558,24 +720,14 @@ func (sig *Signature) Sign(h hash.Hash, priv *PrivateKey, config *Config) (err e return } -// unwrapECDSASig parses the two integer components of an ASN.1-encoded ECDSA -// signature. -func unwrapECDSASig(b []byte) (r, s *big.Int, err error) { - var ecsdaSig struct { - R, S *big.Int - } - _, err = asn1.Unmarshal(b, &ecsdaSig) - if err != nil { - return - } - return ecsdaSig.R, ecsdaSig.S, nil -} - // SignUserId computes a signature from priv, asserting that pub is a valid // key for the identity id. On success, the signature is stored in sig. Call // Serialize to write it out. // If config is nil, sensible defaults will be used. 
func (sig *Signature) SignUserId(id string, pub *PublicKey, priv *PrivateKey, config *Config) error { + if priv.Dummy() { + return errors.ErrDummyPrivateKey("dummy key found") + } h, err := userIdSignatureHash(id, pub, sig.Hash) if err != nil { return err @@ -583,10 +735,25 @@ func (sig *Signature) SignUserId(id string, pub *PublicKey, priv *PrivateKey, co return sig.Sign(h, priv, config) } +// CrossSignKey computes a signature from signingKey on pub hashed using hashKey. On success, +// the signature is stored in sig. Call Serialize to write it out. +// If config is nil, sensible defaults will be used. +func (sig *Signature) CrossSignKey(pub *PublicKey, hashKey *PublicKey, signingKey *PrivateKey, + config *Config) error { + h, err := keySignatureHash(hashKey, pub, sig.Hash) + if err != nil { + return err + } + return sig.Sign(h, signingKey, config) +} + // SignKey computes a signature from priv, asserting that pub is a subkey. On // success, the signature is stored in sig. Call Serialize to write it out. // If config is nil, sensible defaults will be used. func (sig *Signature) SignKey(pub *PublicKey, priv *PrivateKey, config *Config) error { + if priv.Dummy() { + return errors.ErrDummyPrivateKey("dummy key found") + } h, err := keySignatureHash(&priv.PublicKey, pub, sig.Hash) if err != nil { return err @@ -594,26 +761,48 @@ func (sig *Signature) SignKey(pub *PublicKey, priv *PrivateKey, config *Config) return sig.Sign(h, priv, config) } +// RevokeKey computes a revocation signature of pub using priv. On success, the signature is +// stored in sig. Call Serialize to write it out. +// If config is nil, sensible defaults will be used. +func (sig *Signature) RevokeKey(pub *PublicKey, priv *PrivateKey, config *Config) error { + h, err := keyRevocationHash(pub, sig.Hash) + if err != nil { + return err + } + return sig.Sign(h, priv, config) +} + +// RevokeSubkey computes a subkey revocation signature of pub using priv. +// On success, the signature is stored in sig. 
Call Serialize to write it out. +// If config is nil, sensible defaults will be used. +func (sig *Signature) RevokeSubkey(pub *PublicKey, priv *PrivateKey, config *Config) error { + // Identical to a subkey binding signature + return sig.SignKey(pub, priv, config) +} + // Serialize marshals sig to w. Sign, SignUserId or SignKey must have been // called first. func (sig *Signature) Serialize(w io.Writer) (err error) { if len(sig.outSubpackets) == 0 { sig.outSubpackets = sig.rawSubpackets } - if sig.RSASignature.bytes == nil && sig.DSASigR.bytes == nil && sig.ECDSASigR.bytes == nil { + if sig.RSASignature == nil && sig.DSASigR == nil && sig.ECDSASigR == nil && sig.EdDSASigR == nil { return errors.InvalidArgumentError("Signature: need to call Sign, SignUserId or SignKey before Serialize") } sigLength := 0 switch sig.PubKeyAlgo { case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - sigLength = 2 + len(sig.RSASignature.bytes) + sigLength = int(sig.RSASignature.EncodedLength()) case PubKeyAlgoDSA: - sigLength = 2 + len(sig.DSASigR.bytes) - sigLength += 2 + len(sig.DSASigS.bytes) + sigLength = int(sig.DSASigR.EncodedLength()) + sigLength += int(sig.DSASigS.EncodedLength()) case PubKeyAlgoECDSA: - sigLength = 2 + len(sig.ECDSASigR.bytes) - sigLength += 2 + len(sig.ECDSASigS.bytes) + sigLength = int(sig.ECDSASigR.EncodedLength()) + sigLength += int(sig.ECDSASigS.EncodedLength()) + case PubKeyAlgoEdDSA: + sigLength = int(sig.EdDSASigR.EncodedLength()) + sigLength += int(sig.EdDSASigS.EncodedLength()) default: panic("impossible") } @@ -622,16 +811,29 @@ func (sig *Signature) Serialize(w io.Writer) (err error) { length := len(sig.HashSuffix) - 6 /* trailer not included */ + 2 /* length of unhashed subpackets */ + unhashedSubpacketsLen + 2 /* hash tag */ + sigLength + if sig.Version == 5 { + length -= 4 // eight-octet instead of four-octet big endian + } err = serializeHeader(w, packetTypeSignature, length) if err != nil { return } + err = sig.serializeBody(w) + if err != nil { + 
return err + } + return +} - _, err = w.Write(sig.HashSuffix[:len(sig.HashSuffix)-6]) +func (sig *Signature) serializeBody(w io.Writer) (err error) { + hashedSubpacketsLen := uint16(uint16(sig.HashSuffix[4])<<8) | uint16(sig.HashSuffix[5]) + fields := sig.HashSuffix[:6+hashedSubpacketsLen] + _, err = w.Write(fields) if err != nil { return } + unhashedSubpacketsLen := subpacketsLength(sig.outSubpackets, false) unhashedSubpackets := make([]byte, 2+unhashedSubpacketsLen) unhashedSubpackets[0] = byte(unhashedSubpacketsLen >> 8) unhashedSubpackets[1] = byte(unhashedSubpacketsLen) @@ -648,11 +850,22 @@ func (sig *Signature) Serialize(w io.Writer) (err error) { switch sig.PubKeyAlgo { case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - err = writeMPIs(w, sig.RSASignature) + _, err = w.Write(sig.RSASignature.EncodedBytes()) case PubKeyAlgoDSA: - err = writeMPIs(w, sig.DSASigR, sig.DSASigS) + if _, err = w.Write(sig.DSASigR.EncodedBytes()); err != nil { + return + } + _, err = w.Write(sig.DSASigS.EncodedBytes()) case PubKeyAlgoECDSA: - err = writeMPIs(w, sig.ECDSASigR, sig.ECDSASigS) + if _, err = w.Write(sig.ECDSASigR.EncodedBytes()); err != nil { + return + } + _, err = w.Write(sig.ECDSASigS.EncodedBytes()) + case PubKeyAlgoEdDSA: + if _, err = w.Write(sig.EdDSASigR.EncodedBytes()); err != nil { + return + } + _, err = w.Write(sig.EdDSASigS.EncodedBytes()) default: panic("impossible") } @@ -667,17 +880,23 @@ type outputSubpacket struct { contents []byte } -func (sig *Signature) buildSubpackets() (subpackets []outputSubpacket) { +func (sig *Signature) buildSubpackets(issuer PublicKey) (subpackets []outputSubpacket, err error) { creationTime := make([]byte, 4) binary.BigEndian.PutUint32(creationTime, uint32(sig.CreationTime.Unix())) subpackets = append(subpackets, outputSubpacket{true, creationTimeSubpacket, false, creationTime}) - if sig.IssuerKeyId != nil { + if sig.IssuerKeyId != nil && sig.Version == 4 { keyId := make([]byte, 8) binary.BigEndian.PutUint64(keyId, 
*sig.IssuerKeyId) - subpackets = append(subpackets, outputSubpacket{true, issuerSubpacket, false, keyId}) + subpackets = append(subpackets, outputSubpacket{true, issuerSubpacket, true, keyId}) + } + if sig.IssuerFingerprint != nil { + contents := append([]uint8{uint8(issuer.Version)}, sig.IssuerFingerprint...) + subpackets = append(subpackets, outputSubpacket{true, issuerFingerprintSubpacket, sig.Version == 5, contents}) + } + if sig.SignerUserId != nil { + subpackets = append(subpackets, outputSubpacket{true, signerUserIdSubpacket, false, []byte(*sig.SignerUserId)}) } - if sig.SigLifetimeSecs != nil && *sig.SigLifetimeSecs != 0 { sigLifetime := make([]byte, 4) binary.BigEndian.PutUint32(sigLifetime, *sig.SigLifetimeSecs) @@ -700,10 +919,51 @@ func (sig *Signature) buildSubpackets() (subpackets []outputSubpacket) { if sig.FlagEncryptStorage { flags |= KeyFlagEncryptStorage } + if sig.FlagSplitKey { + flags |= KeyFlagSplitKey + } + if sig.FlagAuthenticate { + flags |= KeyFlagAuthenticate + } + if sig.FlagGroupKey { + flags |= KeyFlagGroupKey + } subpackets = append(subpackets, outputSubpacket{true, keyFlagsSubpacket, false, []byte{flags}}) } - // The following subpackets may only appear in self-signatures + for _, notation := range sig.Notations { + subpackets = append( + subpackets, + outputSubpacket{ + true, + notationDataSubpacket, + notation.IsCritical, + notation.getData(), + }) + } + + // The following subpackets may only appear in self-signatures. 
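The issuer-fingerprint handling in this diff encodes a detail worth making explicit: the 64-bit key ID is the *last* 8 octets of a 20-octet v4 fingerprint, but the *first* 8 octets of a 32-octet v5 fingerprint (hence `subpacket[13:21]` versus `subpacket[1:9]` in the parser above, which skips the leading version octet). A standalone sketch of that extraction, with an illustrative function name:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// keyIDFromFingerprint mirrors the issuerFingerprintSubpacket parsing:
// v4 key IDs are the low (trailing) 8 octets of the fingerprint, while
// v5 key IDs are the leading 8 octets.
func keyIDFromFingerprint(version int, fp []byte) uint64 {
	if version == 5 {
		return binary.BigEndian.Uint64(fp[:8])
	}
	return binary.BigEndian.Uint64(fp[len(fp)-8:])
}

func main() {
	v4fp := make([]byte, 20)
	copy(v4fp[12:], []byte{1, 2, 3, 4, 5, 6, 7, 8})
	fmt.Printf("%x\n", keyIDFromFingerprint(4, v4fp))
}
```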
+ + var features = byte(0x00) + if sig.SEIPDv1 { + features |= 0x01 + } + if sig.SEIPDv2 { + features |= 0x08 + } + + if features != 0x00 { + subpackets = append(subpackets, outputSubpacket{true, featuresSubpacket, false, []byte{features}}) + } + + if sig.TrustLevel != 0 { + subpackets = append(subpackets, outputSubpacket{true, trustSubpacket, true, []byte{byte(sig.TrustLevel), byte(sig.TrustAmount)}}) + } + + if sig.TrustRegularExpression != nil { + // RFC specifies the string should be null-terminated; add a null byte to the end + subpackets = append(subpackets, outputSubpacket{true, regularExpressionSubpacket, true, []byte(*sig.TrustRegularExpression + "\000")}) + } if sig.KeyLifetimeSecs != nil && *sig.KeyLifetimeSecs != 0 { keyLifetime := make([]byte, 4) @@ -727,5 +987,82 @@ func (sig *Signature) buildSubpackets() (subpackets []outputSubpacket) { subpackets = append(subpackets, outputSubpacket{true, prefCompressionSubpacket, false, sig.PreferredCompression}) } + if len(sig.PolicyURI) > 0 { + subpackets = append(subpackets, outputSubpacket{true, policyUriSubpacket, false, []uint8(sig.PolicyURI)}) + } + + if len(sig.PreferredCipherSuites) > 0 { + serialized := make([]byte, len(sig.PreferredCipherSuites)*2) + for i, cipherSuite := range sig.PreferredCipherSuites { + serialized[2*i] = cipherSuite[0] + serialized[2*i+1] = cipherSuite[1] + } + subpackets = append(subpackets, outputSubpacket{true, prefCipherSuitesSubpacket, false, serialized}) + } + + // Revocation reason appears only in revocation signatures and is serialized as per section 5.2.3.23. + if sig.RevocationReason != nil { + subpackets = append(subpackets, outputSubpacket{true, reasonForRevocationSubpacket, true, + append([]uint8{uint8(*sig.RevocationReason)}, []uint8(sig.RevocationReasonText)...)}) + } + + // EmbeddedSignature appears only in subkeys capable of signing and is serialized as per section 5.2.3.26. 
+ if sig.EmbeddedSignature != nil { + var buf bytes.Buffer + err = sig.EmbeddedSignature.serializeBody(&buf) + if err != nil { + return + } + subpackets = append(subpackets, outputSubpacket{true, embeddedSignatureSubpacket, true, buf.Bytes()}) + } + return } + +// AddMetadataToHashSuffix modifies the current hash suffix to include metadata +// (format, filename, and time). Version 5 keys protect this data including it +// in the hash computation. See section 5.2.4. +func (sig *Signature) AddMetadataToHashSuffix() { + if sig == nil || sig.Version != 5 { + return + } + if sig.SigType != 0x00 && sig.SigType != 0x01 { + return + } + lit := sig.Metadata + if lit == nil { + // This will translate into six 0x00 bytes. + lit = &LiteralData{} + } + + // Extract the current byte count + n := sig.HashSuffix[len(sig.HashSuffix)-8:] + l := uint64( + uint64(n[0])<<56 | uint64(n[1])<<48 | uint64(n[2])<<40 | uint64(n[3])<<32 | + uint64(n[4])<<24 | uint64(n[5])<<16 | uint64(n[6])<<8 | uint64(n[7])) + + suffix := bytes.NewBuffer(nil) + suffix.Write(sig.HashSuffix[:l]) + + // Add the metadata + var buf [4]byte + buf[0] = lit.Format + fileName := lit.FileName + if len(lit.FileName) > 255 { + fileName = fileName[:255] + } + buf[1] = byte(len(fileName)) + suffix.Write(buf[:2]) + suffix.Write([]byte(lit.FileName)) + binary.BigEndian.PutUint32(buf[:], lit.Time) + suffix.Write(buf[:]) + + // Update the counter and restore trailing bytes + l = uint64(suffix.Len()) + suffix.Write([]byte{0x05, 0xff}) + suffix.Write([]byte{ + uint8(l >> 56), uint8(l >> 48), uint8(l >> 40), uint8(l >> 32), + uint8(l >> 24), uint8(l >> 16), uint8(l >> 8), uint8(l), + }) + sig.HashSuffix = suffix.Bytes() +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetric_key_encrypted.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetric_key_encrypted.go new file mode 100644 index 00000000000..bc2caf0e8bd --- /dev/null +++ 
b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetric_key_encrypted.go @@ -0,0 +1,303 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packet + +import ( + "bytes" + "crypto/cipher" + "crypto/sha256" + "io" + "strconv" + + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/s2k" + "golang.org/x/crypto/hkdf" +) + +// This is the largest session key that we'll support. Since at most 256-bit cipher +// is supported in OpenPGP, this is large enough to contain also the auth tag. +const maxSessionKeySizeInBytes = 64 + +// SymmetricKeyEncrypted represents a passphrase protected session key. See RFC +// 4880, section 5.3. +type SymmetricKeyEncrypted struct { + Version int + CipherFunc CipherFunction + Mode AEADMode + s2k func(out, in []byte) + iv []byte + encryptedKey []byte // Contains also the authentication tag for AEAD +} + +// parse parses an SymmetricKeyEncrypted packet as specified in +// https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#name-symmetric-key-encrypted-ses +func (ske *SymmetricKeyEncrypted) parse(r io.Reader) error { + var buf [1]byte + + // Version + if _, err := readFull(r, buf[:]); err != nil { + return err + } + ske.Version = int(buf[0]) + if ske.Version != 4 && ske.Version != 5 { + return errors.UnsupportedError("unknown SymmetricKeyEncrypted version") + } + + if ske.Version == 5 { + // Scalar octet count + if _, err := readFull(r, buf[:]); err != nil { + return err + } + } + + // Cipher function + if _, err := readFull(r, buf[:]); err != nil { + return err + } + ske.CipherFunc = CipherFunction(buf[0]) + if !ske.CipherFunc.IsSupported() { + return errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(buf[0]))) + } + + if ske.Version == 5 { + // AEAD mode + if _, err := readFull(r, buf[:]); err != nil { + return 
errors.StructuralError("cannot read AEAD octet from packet") + } + ske.Mode = AEADMode(buf[0]) + + // Scalar octet count + if _, err := readFull(r, buf[:]); err != nil { + return err + } + } + + var err error + if ske.s2k, err = s2k.Parse(r); err != nil { + if _, ok := err.(errors.ErrDummyPrivateKey); ok { + return errors.UnsupportedError("missing key GNU extension in session key") + } + return err + } + + if ske.Version == 5 { + // AEAD IV + iv := make([]byte, ske.Mode.IvLength()) + _, err := readFull(r, iv) + if err != nil { + return errors.StructuralError("cannot read AEAD IV") + } + + ske.iv = iv + } + + encryptedKey := make([]byte, maxSessionKeySizeInBytes) + // The session key may follow. We just have to try and read to find + // out. If it exists then we limit it to maxSessionKeySizeInBytes. + n, err := readFull(r, encryptedKey) + if err != nil && err != io.ErrUnexpectedEOF { + return err + } + + if n != 0 { + if n == maxSessionKeySizeInBytes { + return errors.UnsupportedError("oversized encrypted session key") + } + ske.encryptedKey = encryptedKey[:n] + } + return nil +} + +// Decrypt attempts to decrypt an encrypted session key and returns the key and +// the cipher to use when decrypting a subsequent Symmetrically Encrypted Data +// packet. 
+func (ske *SymmetricKeyEncrypted) Decrypt(passphrase []byte) ([]byte, CipherFunction, error) { + key := make([]byte, ske.CipherFunc.KeySize()) + ske.s2k(key, passphrase) + if len(ske.encryptedKey) == 0 { + return key, ske.CipherFunc, nil + } + switch ske.Version { + case 4: + plaintextKey, cipherFunc, err := ske.decryptV4(key) + return plaintextKey, cipherFunc, err + case 5: + plaintextKey, err := ske.decryptV5(key) + return plaintextKey, CipherFunction(0), err + } + err := errors.UnsupportedError("unknown SymmetricKeyEncrypted version") + return nil, CipherFunction(0), err +} + +func (ske *SymmetricKeyEncrypted) decryptV4(key []byte) ([]byte, CipherFunction, error) { + // the IV is all zeros + iv := make([]byte, ske.CipherFunc.blockSize()) + c := cipher.NewCFBDecrypter(ske.CipherFunc.new(key), iv) + plaintextKey := make([]byte, len(ske.encryptedKey)) + c.XORKeyStream(plaintextKey, ske.encryptedKey) + cipherFunc := CipherFunction(plaintextKey[0]) + if cipherFunc.blockSize() == 0 { + return nil, ske.CipherFunc, errors.UnsupportedError( + "unknown cipher: " + strconv.Itoa(int(cipherFunc))) + } + plaintextKey = plaintextKey[1:] + if len(plaintextKey) != cipherFunc.KeySize() { + return nil, cipherFunc, errors.StructuralError( + "length of decrypted key not equal to cipher keysize") + } + return plaintextKey, cipherFunc, nil +} + +func (ske *SymmetricKeyEncrypted) decryptV5(key []byte) ([]byte, error) { + adata := []byte{0xc3, byte(5), byte(ske.CipherFunc), byte(ske.Mode)} + aead := getEncryptedKeyAeadInstance(ske.CipherFunc, ske.Mode, key, adata) + + plaintextKey, err := aead.Open(nil, ske.iv, ske.encryptedKey, adata) + if err != nil { + return nil, err + } + return plaintextKey, nil +} + +// SerializeSymmetricKeyEncrypted serializes a symmetric key packet to w. +// The packet contains a random session key, encrypted by a key derived from +// the given passphrase. The session key is returned and must be passed to +// SerializeSymmetricallyEncrypted. 
+// If config is nil, sensible defaults will be used. +func SerializeSymmetricKeyEncrypted(w io.Writer, passphrase []byte, config *Config) (key []byte, err error) { + cipherFunc := config.Cipher() + + sessionKey := make([]byte, cipherFunc.KeySize()) + _, err = io.ReadFull(config.Random(), sessionKey) + if err != nil { + return + } + + err = SerializeSymmetricKeyEncryptedReuseKey(w, sessionKey, passphrase, config) + if err != nil { + return + } + + key = sessionKey + return +} + +// SerializeSymmetricKeyEncryptedReuseKey serializes a symmetric key packet to w. +// The packet contains the given session key, encrypted by a key derived from +// the given passphrase. The returned session key must be passed to +// SerializeSymmetricallyEncrypted. +// If config is nil, sensible defaults will be used. +func SerializeSymmetricKeyEncryptedReuseKey(w io.Writer, sessionKey []byte, passphrase []byte, config *Config) (err error) { + var version int + if config.AEAD() != nil { + version = 5 + } else { + version = 4 + } + cipherFunc := config.Cipher() + // cipherFunc must be AES + if !cipherFunc.IsSupported() || cipherFunc < CipherAES128 || cipherFunc > CipherAES256 { + return errors.UnsupportedError("unsupported cipher: " + strconv.Itoa(int(cipherFunc))) + } + + keySize := cipherFunc.KeySize() + s2kBuf := new(bytes.Buffer) + keyEncryptingKey := make([]byte, keySize) + // s2k.Serialize salts and stretches the passphrase, and writes the + // resulting key to keyEncryptingKey and the s2k descriptor to s2kBuf. 
+ err = s2k.Serialize(s2kBuf, keyEncryptingKey, config.Random(), passphrase, &s2k.Config{Hash: config.Hash(), S2KCount: config.PasswordHashIterations()}) + if err != nil { + return + } + s2kBytes := s2kBuf.Bytes() + + var packetLength int + switch version { + case 4: + packetLength = 2 /* header */ + len(s2kBytes) + 1 /* cipher type */ + keySize + case 5: + ivLen := config.AEAD().Mode().IvLength() + tagLen := config.AEAD().Mode().TagLength() + packetLength = 5 + len(s2kBytes) + ivLen + keySize + tagLen + } + err = serializeHeader(w, packetTypeSymmetricKeyEncrypted, packetLength) + if err != nil { + return + } + + // Symmetric Key Encrypted Version + buf := []byte{byte(version)} + + if version == 5 { + // Scalar octet count + buf = append(buf, byte(3 + len(s2kBytes) + config.AEAD().Mode().IvLength())) + } + + // Cipher function + buf = append(buf, byte(cipherFunc)) + + if version == 5 { + // AEAD mode + buf = append(buf, byte(config.AEAD().Mode())) + + // Scalar octet count + buf = append(buf, byte(len(s2kBytes))) + } + _, err = w.Write(buf) + if err != nil { + return + } + _, err = w.Write(s2kBytes) + if err != nil { + return + } + + switch version { + case 4: + iv := make([]byte, cipherFunc.blockSize()) + c := cipher.NewCFBEncrypter(cipherFunc.new(keyEncryptingKey), iv) + encryptedCipherAndKey := make([]byte, keySize+1) + c.XORKeyStream(encryptedCipherAndKey, buf[1:]) + c.XORKeyStream(encryptedCipherAndKey[1:], sessionKey) + _, err = w.Write(encryptedCipherAndKey) + if err != nil { + return + } + case 5: + mode := config.AEAD().Mode() + adata := []byte{0xc3, byte(5), byte(cipherFunc), byte(mode)} + aead := getEncryptedKeyAeadInstance(cipherFunc, mode, keyEncryptingKey, adata) + + // Sample iv using random reader + iv := make([]byte, config.AEAD().Mode().IvLength()) + _, err = io.ReadFull(config.Random(), iv) + if err != nil { + return + } + // Seal and write (encryptedData includes auth. 
tag) + + encryptedData := aead.Seal(nil, iv, sessionKey, adata) + _, err = w.Write(iv) + if err != nil { + return + } + _, err = w.Write(encryptedData) + if err != nil { + return + } + } + + return +} + +func getEncryptedKeyAeadInstance(c CipherFunction, mode AEADMode, inputKey, associatedData []byte) (aead cipher.AEAD) { + hkdfReader := hkdf.New(sha256.New, inputKey, []byte{}, associatedData) + + encryptionKey := make([]byte, c.KeySize()) + _, _ = readFull(hkdfReader, encryptionKey) + + blockCipher := c.new(encryptionKey) + return mode.new(blockCipher) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted.go new file mode 100644 index 00000000000..dc1a2403573 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted.go @@ -0,0 +1,91 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packet + +import ( + "io" + + "github.com/ProtonMail/go-crypto/openpgp/errors" +) + +const aeadSaltSize = 32 + +// SymmetricallyEncrypted represents a symmetrically encrypted byte string. The +// encrypted Contents will consist of more OpenPGP packets. See RFC 4880, +// sections 5.7 and 5.13. +type SymmetricallyEncrypted struct { + Version int + Contents io.Reader // contains tag for version 2 + IntegrityProtected bool // If true it is type 18 (with MDC or AEAD). 
False is packet type 9 + + // Specific to version 1 + prefix []byte + + // Specific to version 2 + cipher CipherFunction + mode AEADMode + chunkSizeByte byte + salt [aeadSaltSize]byte +} + +const ( + symmetricallyEncryptedVersionMdc = 1 + symmetricallyEncryptedVersionAead = 2 +) + + +func (se *SymmetricallyEncrypted) parse(r io.Reader) error { + if se.IntegrityProtected { + // See RFC 4880, section 5.13. + var buf [1]byte + _, err := readFull(r, buf[:]) + if err != nil { + return err + } + + switch buf[0] { + case symmetricallyEncryptedVersionMdc: + se.Version = symmetricallyEncryptedVersionMdc + case symmetricallyEncryptedVersionAead: + se.Version = symmetricallyEncryptedVersionAead + if err := se.parseAead(r); err != nil { + return err + } + default: + return errors.UnsupportedError("unknown SymmetricallyEncrypted version") + } + } + se.Contents = r + return nil +} + +// Decrypt returns a ReadCloser, from which the decrypted Contents of the +// packet can be read. An incorrect key will only be detected after trying +// to decrypt the entire data. +func (se *SymmetricallyEncrypted) Decrypt(c CipherFunction, key []byte) (io.ReadCloser, error) { + if se.Version == symmetricallyEncryptedVersionAead { + return se.decryptAead(key) + } + + return se.decryptMdc(c, key) +} + +// SerializeSymmetricallyEncrypted serializes a symmetrically encrypted packet +// to w and returns a WriteCloser to which the to-be-encrypted packets can be +// written. +// If config is nil, sensible defaults will be used. 
+func SerializeSymmetricallyEncrypted(w io.Writer, c CipherFunction, aeadSupported bool, cipherSuite CipherSuite, key []byte, config *Config) (Contents io.WriteCloser, err error) { + writeCloser := noOpCloser{w} + ciphertext, err := serializeStreamHeader(writeCloser, packetTypeSymmetricallyEncryptedIntegrityProtected) + if err != nil { + return + } + + if aeadSupported { + return serializeSymmetricallyEncryptedAead(ciphertext, cipherSuite, config.AEADConfig.ChunkSizeByte(), config.Random(), key) + } + + return serializeSymmetricallyEncryptedMdc(ciphertext, c, key, config) +} diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted_aead.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted_aead.go new file mode 100644 index 00000000000..241800c02eb --- /dev/null +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted_aead.go @@ -0,0 +1,155 @@ +// Copyright 2023 Proton AG. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+
+package packet
+
+import (
+	"crypto/cipher"
+	"crypto/sha256"
+	"github.com/ProtonMail/go-crypto/openpgp/errors"
+	"golang.org/x/crypto/hkdf"
+	"io"
+	"strconv"
+)
+
+// parseAead parses a V2 SEIPD packet (AEAD) as specified in
+// https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-5.13.2
+func (se *SymmetricallyEncrypted) parseAead(r io.Reader) error {
+	headerData := make([]byte, 3)
+	if n, err := io.ReadFull(r, headerData); n < 3 {
+		return errors.StructuralError("could not read aead header: " + err.Error())
+	}
+
+	// Cipher
+	se.cipher = CipherFunction(headerData[0])
+	// cipherFunc must have block size 16 to use AEAD
+	if se.cipher.blockSize() != 16 {
+		return errors.UnsupportedError("invalid aead cipher: " + strconv.Itoa(int(se.cipher)))
+	}
+
+	// Mode
+	se.mode = AEADMode(headerData[1])
+	if se.mode.TagLength() == 0 {
+		return errors.UnsupportedError("unknown aead mode: " + strconv.Itoa(int(se.mode)))
+	}
+
+	// Chunk size
+	se.chunkSizeByte = headerData[2]
+	if se.chunkSizeByte > 16 {
+		return errors.UnsupportedError("invalid aead chunk size byte: " + strconv.Itoa(int(se.chunkSizeByte)))
+	}
+
+	// Salt
+	if n, err := io.ReadFull(r, se.salt[:]); n < aeadSaltSize {
+		return errors.StructuralError("could not read aead salt: " + err.Error())
+	}
+
+	return nil
+}
+
+// associatedData for chunks: tag, version, cipher, mode, chunk size byte
+func (se *SymmetricallyEncrypted) associatedData() []byte {
+	return []byte{
+		0xD2,
+		symmetricallyEncryptedVersionAead,
+		byte(se.cipher),
+		byte(se.mode),
+		se.chunkSizeByte,
+	}
+}
+
+// decryptAead decrypts a V2 SEIPD packet (AEAD) as specified in
+// https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-5.13.2
+func (se *SymmetricallyEncrypted) decryptAead(inputKey []byte) (io.ReadCloser, error) {
+	aead, nonce := getSymmetricallyEncryptedAeadInstance(se.cipher, se.mode, inputKey, se.salt[:], se.associatedData())
+
+	// Carry the first tagLen bytes
+	tagLen := se.mode.TagLength()
+	peekedBytes := make([]byte, tagLen)
+	n, err := io.ReadFull(se.Contents, peekedBytes)
+	if n < tagLen || (err != nil && err != io.EOF) {
+		return nil, errors.StructuralError("not enough data to decrypt: " + err.Error())
+	}
+
+	return &aeadDecrypter{
+		aeadCrypter: aeadCrypter{
+			aead:           aead,
+			chunkSize:      decodeAEADChunkSize(se.chunkSizeByte),
+			initialNonce:   nonce,
+			associatedData: se.associatedData(),
+			chunkIndex:     make([]byte, 8),
+			packetTag:      packetTypeSymmetricallyEncryptedIntegrityProtected,
+		},
+		reader:      se.Contents,
+		peekedBytes: peekedBytes,
+	}, nil
+}
+
+// serializeSymmetricallyEncryptedAead encrypts to a writer a V2 SEIPD packet (AEAD) as specified in
+// https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-5.13.2
+func serializeSymmetricallyEncryptedAead(ciphertext io.WriteCloser, cipherSuite CipherSuite, chunkSizeByte byte, rand io.Reader, inputKey []byte) (Contents io.WriteCloser, err error) {
+	// cipherFunc must have block size 16 to use AEAD
+	if cipherSuite.Cipher.blockSize() != 16 {
+		return nil, errors.InvalidArgumentError("invalid aead cipher function")
+	}
+
+	if cipherSuite.Cipher.KeySize() != len(inputKey) {
+		return nil, errors.InvalidArgumentError("error in aead serialization: bad key length")
+	}
+
+	// Data for en/decryption: tag, version, cipher, aead mode, chunk size
+	prefix := []byte{
+		0xD2,
+		symmetricallyEncryptedVersionAead,
+		byte(cipherSuite.Cipher),
+		byte(cipherSuite.Mode),
+		chunkSizeByte,
+	}
+
+	// Write the header (which corresponds to the prefix, minus its first byte)
+	n, err := ciphertext.Write(prefix[1:])
+	if err != nil || n < 4 {
+		return nil, err
+	}
+
+	// Random salt
+	salt := make([]byte, aeadSaltSize)
+	if _, err := rand.Read(salt); err != nil {
+		return nil, err
+	}
+
+	if _, err := ciphertext.Write(salt); err != nil {
+		return nil, err
+	}
+
+	aead, nonce := getSymmetricallyEncryptedAeadInstance(cipherSuite.Cipher, cipherSuite.Mode, inputKey, salt, prefix)
+
+	return &aeadEncrypter{
+		aeadCrypter: 
aeadCrypter{ + aead: aead, + chunkSize: decodeAEADChunkSize(chunkSizeByte), + associatedData: prefix, + chunkIndex: make([]byte, 8), + initialNonce: nonce, + packetTag: packetTypeSymmetricallyEncryptedIntegrityProtected, + }, + writer: ciphertext, + }, nil +} + +func getSymmetricallyEncryptedAeadInstance(c CipherFunction, mode AEADMode, inputKey, salt, associatedData []byte) (aead cipher.AEAD, nonce []byte) { + hkdfReader := hkdf.New(sha256.New, inputKey, salt, associatedData) + + encryptionKey := make([]byte, c.KeySize()) + _, _ = readFull(hkdfReader, encryptionKey) + + // Last 64 bits of nonce are the counter + nonce = make([]byte, mode.IvLength() - 8) + + _, _ = readFull(hkdfReader, nonce) + + blockCipher := c.new(encryptionKey) + aead = mode.new(blockCipher) + + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/symmetrically_encrypted.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted_mdc.go similarity index 61% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/symmetrically_encrypted.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted_mdc.go index 1a1a62964fc..3e070f8b820 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/symmetrically_encrypted.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/symmetrically_encrypted_mdc.go @@ -8,54 +8,38 @@ import ( "crypto/cipher" "crypto/sha1" "crypto/subtle" - "golang.org/x/crypto/openpgp/errors" "hash" "io" "strconv" + + "github.com/ProtonMail/go-crypto/openpgp/errors" ) -// SymmetricallyEncrypted represents a symmetrically encrypted byte string. The -// encrypted contents will consist of more OpenPGP packets. See RFC 4880, -// sections 5.7 and 5.13. -type SymmetricallyEncrypted struct { - MDC bool // true iff this is a type 18 packet and thus has an embedded MAC. 
- contents io.Reader - prefix []byte +// seMdcReader wraps an io.Reader with a no-op Close method. +type seMdcReader struct { + in io.Reader } -const symmetricallyEncryptedVersion = 1 +func (ser seMdcReader) Read(buf []byte) (int, error) { + return ser.in.Read(buf) +} -func (se *SymmetricallyEncrypted) parse(r io.Reader) error { - if se.MDC { - // See RFC 4880, section 5.13. - var buf [1]byte - _, err := readFull(r, buf[:]) - if err != nil { - return err - } - if buf[0] != symmetricallyEncryptedVersion { - return errors.UnsupportedError("unknown SymmetricallyEncrypted version") - } - } - se.contents = r +func (ser seMdcReader) Close() error { return nil } -// Decrypt returns a ReadCloser, from which the decrypted contents of the -// packet can be read. An incorrect key can, with high probability, be detected -// immediately and this will result in a KeyIncorrect error being returned. -func (se *SymmetricallyEncrypted) Decrypt(c CipherFunction, key []byte) (io.ReadCloser, error) { - keySize := c.KeySize() - if keySize == 0 { - return nil, errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(c))) +func (se *SymmetricallyEncrypted) decryptMdc(c CipherFunction, key []byte) (io.ReadCloser, error) { + if !c.IsSupported() { + return nil, errors.UnsupportedError("unsupported cipher: " + strconv.Itoa(int(c))) } - if len(key) != keySize { + + if len(key) != c.KeySize() { return nil, errors.InvalidArgumentError("SymmetricallyEncrypted: incorrect key length") } if se.prefix == nil { se.prefix = make([]byte, c.blockSize()+2) - _, err := readFull(se.contents, se.prefix) + _, err := readFull(se.Contents, se.prefix) if err != nil { return nil, err } @@ -64,47 +48,31 @@ func (se *SymmetricallyEncrypted) Decrypt(c CipherFunction, key []byte) (io.Read } ocfbResync := OCFBResync - if se.MDC { + if se.IntegrityProtected { // MDC packets use a different form of OCFB mode. 
ocfbResync = OCFBNoResync } s := NewOCFBDecrypter(c.new(key), se.prefix, ocfbResync) - if s == nil { - return nil, errors.ErrKeyIncorrect - } - plaintext := cipher.StreamReader{S: s, R: se.contents} + plaintext := cipher.StreamReader{S: s, R: se.Contents} - if se.MDC { - // MDC packets have an embedded hash that we need to check. + if se.IntegrityProtected { + // IntegrityProtected packets have an embedded hash that we need to check. h := sha1.New() h.Write(se.prefix) return &seMDCReader{in: plaintext, h: h}, nil } // Otherwise, we just need to wrap plaintext so that it's a valid ReadCloser. - return seReader{plaintext}, nil -} - -// seReader wraps an io.Reader with a no-op Close method. -type seReader struct { - in io.Reader -} - -func (ser seReader) Read(buf []byte) (int, error) { - return ser.in.Read(buf) -} - -func (ser seReader) Close() error { - return nil + return seMdcReader{plaintext}, nil } const mdcTrailerSize = 1 /* tag byte */ + 1 /* length byte */ + sha1.Size // An seMDCReader wraps an io.Reader, maintains a running hash and keeps hold // of the most recent 22 bytes (mdcTrailerSize). Upon EOF, those bytes form an -// MDC packet containing a hash of the previous contents which is checked +// MDC packet containing a hash of the previous Contents which is checked // against the running hash. See RFC 4880, section 5.13. type seMDCReader struct { in io.Reader @@ -175,12 +143,12 @@ func (ser *seMDCReader) Read(buf []byte) (n int, err error) { return } -// This is a new-format packet tag byte for a type 19 (MDC) packet. +// This is a new-format packet tag byte for a type 19 (Integrity Protected) packet. 
const mdcPacketTagByte = byte(0x80) | 0x40 | 19 func (ser *seMDCReader) Close() error { if ser.error { - return errors.SignatureError("error during reading") + return errors.ErrMDCMissing } for !ser.eof { @@ -191,18 +159,20 @@ func (ser *seMDCReader) Close() error { break } if err != nil { - return errors.SignatureError("error during reading") + return errors.ErrMDCMissing } } - if ser.trailer[0] != mdcPacketTagByte || ser.trailer[1] != sha1.Size { - return errors.SignatureError("MDC packet not found") - } ser.h.Write(ser.trailer[:2]) final := ser.h.Sum(nil) if subtle.ConstantTimeCompare(final, ser.trailer[2:]) != 1 { - return errors.SignatureError("hash mismatch") + return errors.ErrMDCHashMismatch + } + // The hash already includes the MDC header, but we still check its value + // to confirm encryption correctness + if ser.trailer[0] != mdcPacketTagByte || ser.trailer[1] != sha1.Size { + return errors.ErrMDCMissing } return nil } @@ -236,7 +206,7 @@ func (w *seMDCWriter) Close() (err error) { return w.w.Close() } -// noOpCloser is like an io.NopCloser, but for an io.Writer. +// noOpCloser is like an ioutil.NopCloser, but for an io.Writer. type noOpCloser struct { w io.Writer } @@ -249,21 +219,17 @@ func (c noOpCloser) Close() error { return nil } -// SerializeSymmetricallyEncrypted serializes a symmetrically encrypted packet -// to w and returns a WriteCloser to which the to-be-encrypted packets can be -// written. -// If config is nil, sensible defaults will be used. 
-func SerializeSymmetricallyEncrypted(w io.Writer, c CipherFunction, key []byte, config *Config) (contents io.WriteCloser, err error) { - if c.KeySize() != len(key) { - return nil, errors.InvalidArgumentError("SymmetricallyEncrypted.Serialize: bad key length") +func serializeSymmetricallyEncryptedMdc(ciphertext io.WriteCloser, c CipherFunction, key []byte, config *Config) (Contents io.WriteCloser, err error) { + // Disallow old cipher suites + if !c.IsSupported() || c < CipherAES128 { + return nil, errors.InvalidArgumentError("invalid mdc cipher function") } - writeCloser := noOpCloser{w} - ciphertext, err := serializeStreamHeader(writeCloser, packetTypeSymmetricallyEncryptedMDC) - if err != nil { - return + + if c.KeySize() != len(key) { + return nil, errors.InvalidArgumentError("error in mdc serialization: bad key length") } - _, err = ciphertext.Write([]byte{symmetricallyEncryptedVersion}) + _, err = ciphertext.Write([]byte{symmetricallyEncryptedVersionMdc}) if err != nil { return } @@ -285,6 +251,6 @@ func SerializeSymmetricallyEncrypted(w io.Writer, c CipherFunction, key []byte, h := sha1.New() h.Write(iv) h.Write(iv[blockSize-2:]) - contents = &seMDCWriter{w: plaintext, h: h} + Contents = &seMDCWriter{w: plaintext, h: h} return } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/userattribute.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/userattribute.go similarity index 87% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/userattribute.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/userattribute.go index ff7ef530755..88ec72c6c4a 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/userattribute.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/userattribute.go @@ -9,6 +9,7 @@ import ( "image" "image/jpeg" "io" + "io/ioutil" ) const UserAttrImageSubpacket = 1 @@ -41,9 +42,16 @@ func 
NewUserAttributePhoto(photos ...image.Image) (uat *UserAttribute, err error if err = jpeg.Encode(&buf, photo, nil); err != nil { return } + + lengthBuf := make([]byte, 5) + n := serializeSubpacketLength(lengthBuf, len(buf.Bytes())+1) + lengthBuf = lengthBuf[:n] + uat.Contents = append(uat.Contents, &OpaqueSubpacket{ - SubType: UserAttrImageSubpacket, - Contents: buf.Bytes()}) + SubType: UserAttrImageSubpacket, + EncodedLength: lengthBuf, + Contents: buf.Bytes(), + }) } return } @@ -55,7 +63,7 @@ func NewUserAttribute(contents ...*OpaqueSubpacket) *UserAttribute { func (uat *UserAttribute) parse(r io.Reader) (err error) { // RFC 4880, section 5.13 - b, err := io.ReadAll(r) + b, err := ioutil.ReadAll(r) if err != nil { return } @@ -68,7 +76,10 @@ func (uat *UserAttribute) parse(r io.Reader) (err error) { func (uat *UserAttribute) Serialize(w io.Writer) (err error) { var buf bytes.Buffer for _, sp := range uat.Contents { - sp.Serialize(&buf) + err = sp.Serialize(&buf) + if err != nil { + return err + } } if err = serializeHeader(w, packetTypeUserAttribute, buf.Len()); err != nil { return err diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/userid.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/userid.go similarity index 95% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/userid.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/userid.go index 359a462eb8a..614fbafd5e1 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/userid.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/packet/userid.go @@ -6,6 +6,7 @@ package packet import ( "io" + "io/ioutil" "strings" ) @@ -65,7 +66,7 @@ func NewUserId(name, comment, email string) *UserId { func (uid *UserId) parse(r io.Reader) (err error) { // RFC 4880, section 5.11 - b, err := io.ReadAll(r) + b, err := ioutil.ReadAll(r) if err != nil { return } @@ -155,5 +156,12 @@ func 
parseUserId(id string) (name, comment, email string) { name = strings.TrimSpace(id[n.start:n.end]) comment = strings.TrimSpace(id[c.start:c.end]) email = strings.TrimSpace(id[e.start:e.end]) + + // RFC 2822 3.4: alternate simple form of a mailbox + if email == "" && strings.ContainsRune(name, '@') { + email = name + name = "" + } + return } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/read.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/read.go similarity index 51% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/read.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/read.go index 48a89314685..e910e184417 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/read.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/read.go @@ -3,24 +3,21 @@ // license that can be found in the LICENSE file. // Package openpgp implements high level operations on OpenPGP messages. -// -// Deprecated: this package is unmaintained except for security fixes. New -// applications should consider a more focused, modern alternative to OpenPGP -// for their specific task. If you are required to interoperate with OpenPGP -// systems and need a maintained package, consider a community fork. -// See https://golang.org/issue/44226. -package openpgp // import "golang.org/x/crypto/openpgp" +package openpgp // import "github.com/ProtonMail/go-crypto/openpgp" import ( "crypto" _ "crypto/sha256" + _ "crypto/sha512" "hash" "io" "strconv" - "golang.org/x/crypto/openpgp/armor" - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/packet" + "github.com/ProtonMail/go-crypto/openpgp/armor" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" + "github.com/ProtonMail/go-crypto/openpgp/packet" + _ "golang.org/x/crypto/sha3" ) // SignatureType is the armor type for a PGP signature. 
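The `parseUserId` change above implements the "simple form" of a mailbox from RFC 2822 section 3.4: a User ID that is just a bare address like `alice@example.com`, with no angle-bracketed part, is reported as the email rather than the name. A standalone sketch of that fallback (the helper name is illustrative, not part of the package API):

```go
package main

import (
	"fmt"
	"strings"
)

// splitNameEmail mimics the fallback added to parseUserId: when no
// angle-bracketed address was found but the bare name contains '@',
// treat the whole User ID as an RFC 2822 simple-form mailbox.
func splitNameEmail(name, email string) (string, string) {
	if email == "" && strings.ContainsRune(name, '@') {
		return "", name
	}
	return name, email
}

func main() {
	n, e := splitNameEmail("alice@example.com", "")
	fmt.Printf("name=%q email=%q\n", n, e) // name="" email="alice@example.com"

	n, e = splitNameEmail("Alice", "alice@example.com")
	fmt.Printf("name=%q email=%q\n", n, e) // name="Alice" email="alice@example.com"
}
```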
@@ -62,9 +59,9 @@ type MessageDetails struct { // been consumed. Once EOF has been seen, the following fields are // valid. (An authentication code failure is reported as a // SignatureError error when reading from UnverifiedBody.) - SignatureError error // nil if the signature is good. - Signature *packet.Signature // the signature packet itself, if v4 (default) - SignatureV3 *packet.SignatureV3 // the signature packet if it is a v2 or v3 signature + Signature *packet.Signature // the signature packet itself. + SignatureError error // nil if the signature is good. + UnverifiedSignatures []*packet.Signature // all other unverified signature packets. decrypted io.ReadCloser } @@ -94,7 +91,8 @@ func ReadMessage(r io.Reader, keyring KeyRing, prompt PromptFunction, config *pa var symKeys []*packet.SymmetricKeyEncrypted var pubKeys []keyEnvelopePair - var se *packet.SymmetricallyEncrypted + // Integrity protected encrypted packet: SymmetricallyEncrypted or AEADEncrypted + var edp packet.EncryptedDataPacket packets := packet.NewReader(r) md = new(MessageDetails) @@ -119,22 +117,30 @@ ParsePackets: // This packet contains the decryption key encrypted to a public key. 
md.EncryptedToKeyIds = append(md.EncryptedToKeyIds, p.KeyId) switch p.Algo { - case packet.PubKeyAlgoRSA, packet.PubKeyAlgoRSAEncryptOnly, packet.PubKeyAlgoElGamal: + case packet.PubKeyAlgoRSA, packet.PubKeyAlgoRSAEncryptOnly, packet.PubKeyAlgoElGamal, packet.PubKeyAlgoECDH: break default: continue } - var keys []Key - if p.KeyId == 0 { - keys = keyring.DecryptionKeys() - } else { - keys = keyring.KeysById(p.KeyId) - } - for _, k := range keys { - pubKeys = append(pubKeys, keyEnvelopePair{k, p}) + if keyring != nil { + var keys []Key + if p.KeyId == 0 { + keys = keyring.DecryptionKeys() + } else { + keys = keyring.KeysById(p.KeyId) + } + for _, k := range keys { + pubKeys = append(pubKeys, keyEnvelopePair{k, p}) + } } case *packet.SymmetricallyEncrypted: - se = p + if !p.IntegrityProtected && !config.AllowUnauthenticatedMessages() { + return nil, errors.UnsupportedError("message is not integrity protected") + } + edp = p + break ParsePackets + case *packet.AEADEncrypted: + edp = p break ParsePackets case *packet.Compressed, *packet.LiteralData, *packet.OnePassSignature: // This message isn't encrypted. 
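Whichever path supplies the session key above, the v2 SEIPD decryption it feeds into derives its AEAD key material with HKDF-SHA256: keyed by the session key, salted by the 32-byte packet salt, and bound to the packet's associated data (see `getSymmetricallyEncryptedAeadInstance`). The library uses `golang.org/x/crypto/hkdf`; the sketch below inlines a minimal RFC 5869 extract-then-expand using only the standard library, with illustrative inputs (a zero session key and salt, made-up header bytes) rather than real packet values:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// hkdfSha256 is a minimal RFC 5869 extract-then-expand with SHA-256,
// inlined here to illustrate how the v2 SEIPD code derives key material.
func hkdfSha256(ikm, salt, info []byte, length int) []byte {
	// Extract: PRK = HMAC(salt, IKM)
	ext := hmac.New(sha256.New, salt)
	ext.Write(ikm)
	prk := ext.Sum(nil)

	// Expand: T(i) = HMAC(PRK, T(i-1) || info || i)
	var out, t []byte
	for i := byte(1); len(out) < length; i++ {
		h := hmac.New(sha256.New, prk)
		h.Write(t)
		h.Write(info)
		h.Write([]byte{i})
		t = h.Sum(nil)
		out = append(out, t...)
	}
	return out[:length]
}

func main() {
	sessionKey := make([]byte, 32)    // session key from the SKESK/PKESK packet (zero here)
	salt := make([]byte, 32)          // 32-byte salt read from the SEIPD v2 header (zero here)
	adata := []byte{0xD2, 2, 9, 1, 6} // tag, version, cipher (AES-256), mode (EAX), chunk size byte

	// AES-256 key (32) plus nonce prefix (IV length 16 minus the 8-byte chunk counter)
	okm := hkdfSha256(sessionKey, salt, adata, 32+16-8)
	key, noncePrefix := okm[:32], okm[32:]
	fmt.Printf("key=%x\nnonce-prefix=%x\n", key, noncePrefix)
}
```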
@@ -142,7 +148,7 @@ ParsePackets: return nil, errors.StructuralError("key material not followed by encrypted message") } packets.Unread(p) - return readSignedMessage(packets, nil, keyring) + return readSignedMessage(packets, nil, keyring, config) } } @@ -164,12 +170,13 @@ FindKey: } if !pk.key.PrivateKey.Encrypted { if len(pk.encryptedKey.Key) == 0 { - pk.encryptedKey.Decrypt(pk.key.PrivateKey, config) - } - if len(pk.encryptedKey.Key) == 0 { - continue + errDec := pk.encryptedKey.Decrypt(pk.key.PrivateKey, config) + if errDec != nil { + continue + } } - decrypted, err = se.Decrypt(pk.encryptedKey.CipherFunc, pk.encryptedKey.Key) + // Try to decrypt symmetrically encrypted + decrypted, err = edp.Decrypt(pk.encryptedKey.CipherFunc, pk.encryptedKey.Key) if err != nil && err != errors.ErrKeyIncorrect { return nil, err } @@ -204,16 +211,17 @@ FindKey: if len(symKeys) != 0 && passphrase != nil { for _, s := range symKeys { key, cipherFunc, err := s.Decrypt(passphrase) + // In v4, on wrong passphrase, session key decryption is very likely to result in an invalid cipherFunc: + // only for < 5% of cases we will proceed to decrypt the data if err == nil { - decrypted, err = se.Decrypt(cipherFunc, key) - if err != nil && err != errors.ErrKeyIncorrect { + decrypted, err = edp.Decrypt(cipherFunc, key) + if err != nil { return nil, err } if decrypted != nil { break FindKey } } - } } } @@ -222,13 +230,17 @@ FindKey: if err := packets.Push(decrypted); err != nil { return nil, err } - return readSignedMessage(packets, md, keyring) + mdFinal, sensitiveParsingErr := readSignedMessage(packets, md, keyring, config) + if sensitiveParsingErr != nil { + return nil, errors.StructuralError("parsing error") + } + return mdFinal, nil } // readSignedMessage reads a possibly signed message if mdin is non-zero then // that structure is updated and returned. Otherwise a fresh MessageDetails is // used. 
-func readSignedMessage(packets *packet.Reader, mdin *MessageDetails, keyring KeyRing) (md *MessageDetails, err error) { +func readSignedMessage(packets *packet.Reader, mdin *MessageDetails, keyring KeyRing, config *packet.Config) (md *MessageDetails, err error) { if mdin == nil { mdin = new(MessageDetails) } @@ -237,6 +249,7 @@ func readSignedMessage(packets *packet.Reader, mdin *MessageDetails, keyring Key var p packet.Packet var h hash.Hash var wrappedHash hash.Hash + var prevLast bool FindLiteralData: for { p, err = packets.Next() @@ -249,21 +262,26 @@ FindLiteralData: return nil, err } case *packet.OnePassSignature: - if !p.IsLast { - return nil, errors.UnsupportedError("nested signatures") + if prevLast { + return nil, errors.UnsupportedError("nested signature packets") + } + + if p.IsLast { + prevLast = true } h, wrappedHash, err = hashForSignature(p.Hash, p.SigType) if err != nil { - md = nil - return + md.SignatureError = err } md.IsSigned = true md.SignedByKeyId = p.KeyId - keys := keyring.KeysByIdUsage(p.KeyId, packet.KeyFlagSign) - if len(keys) > 0 { - md.SignedBy = &keys[0] + if keyring != nil { + keys := keyring.KeysByIdUsage(p.KeyId, packet.KeyFlagSign) + if len(keys) > 0 { + md.SignedBy = &keys[0] + } } case *packet.LiteralData: md.LiteralData = p @@ -271,8 +289,8 @@ FindLiteralData: } } - if md.SignedBy != nil { - md.UnverifiedBody = &signatureCheckReader{packets, h, wrappedHash, md} + if md.IsSigned && md.SignatureError == nil { + md.UnverifiedBody = &signatureCheckReader{packets, h, wrappedHash, md, config} } else if md.decrypted != nil { md.UnverifiedBody = checkReader{md} } else { @@ -287,11 +305,14 @@ FindLiteralData: // should be preprocessed (i.e. to normalize line endings). Thus this function // returns two hashes. The second should be used to hash the message itself and // performs any needed preprocessing. 
-func hashForSignature(hashId crypto.Hash, sigType packet.SignatureType) (hash.Hash, hash.Hash, error) { - if !hashId.Available() { - return nil, nil, errors.UnsupportedError("hash not available: " + strconv.Itoa(int(hashId))) +func hashForSignature(hashFunc crypto.Hash, sigType packet.SignatureType) (hash.Hash, hash.Hash, error) { + if _, ok := algorithm.HashToHashIdWithSha1(hashFunc); !ok { + return nil, nil, errors.UnsupportedError("unsupported hash function") } - h := hashId.New() + if !hashFunc.Available() { + return nil, nil, errors.UnsupportedError("hash not available: " + strconv.Itoa(int(hashFunc))) + } + h := hashFunc.New() switch sigType { case packet.SigTypeBinary: @@ -310,15 +331,21 @@ type checkReader struct { md *MessageDetails } -func (cr checkReader) Read(buf []byte) (n int, err error) { - n, err = cr.md.LiteralData.Body.Read(buf) - if err == io.EOF { +func (cr checkReader) Read(buf []byte) (int, error) { + n, sensitiveParsingError := cr.md.LiteralData.Body.Read(buf) + if sensitiveParsingError == io.EOF { mdcErr := cr.md.decrypted.Close() if mdcErr != nil { - err = mdcErr + return n, mdcErr } + return n, io.EOF } - return + + if sensitiveParsingError != nil { + return n, errors.StructuralError("parsing error") + } + + return n, nil } // signatureCheckReader wraps an io.Reader from a LiteralData packet and hashes @@ -328,26 +355,53 @@ type signatureCheckReader struct { packets *packet.Reader h, wrappedHash hash.Hash md *MessageDetails + config *packet.Config } -func (scr *signatureCheckReader) Read(buf []byte) (n int, err error) { - n, err = scr.md.LiteralData.Body.Read(buf) - scr.wrappedHash.Write(buf[:n]) - if err == io.EOF { +func (scr *signatureCheckReader) Read(buf []byte) (int, error) { + n, sensitiveParsingError := scr.md.LiteralData.Body.Read(buf) + + // Hash only if required + if scr.md.SignedBy != nil { + scr.wrappedHash.Write(buf[:n]) + } + + if sensitiveParsingError == io.EOF { var p packet.Packet - p, scr.md.SignatureError = 
scr.packets.Next() - if scr.md.SignatureError != nil { - return + var readError error + var sig *packet.Signature + + p, readError = scr.packets.Next() + for readError == nil { + var ok bool + if sig, ok = p.(*packet.Signature); ok { + if sig.Version == 5 && (sig.SigType == 0x00 || sig.SigType == 0x01) { + sig.Metadata = scr.md.LiteralData + } + + // If signature KeyID matches + if scr.md.SignedBy != nil && *sig.IssuerKeyId == scr.md.SignedByKeyId { + key := scr.md.SignedBy + signatureError := key.PublicKey.VerifySignature(scr.h, sig) + if signatureError == nil { + signatureError = checkSignatureDetails(key, sig, scr.config) + } + scr.md.Signature = sig + scr.md.SignatureError = signatureError + } else { + scr.md.UnverifiedSignatures = append(scr.md.UnverifiedSignatures, sig) + } + } + + p, readError = scr.packets.Next() } - var ok bool - if scr.md.Signature, ok = p.(*packet.Signature); ok { - scr.md.SignatureError = scr.md.SignedBy.PublicKey.VerifySignature(scr.h, scr.md.Signature) - } else if scr.md.SignatureV3, ok = p.(*packet.SignatureV3); ok { - scr.md.SignatureError = scr.md.SignedBy.PublicKey.VerifySignatureV3(scr.h, scr.md.SignatureV3) - } else { - scr.md.SignatureError = errors.StructuralError("LiteralData not followed by Signature") - return + if scr.md.SignedBy != nil && scr.md.Signature == nil { + if scr.md.UnverifiedSignatures == nil { + scr.md.SignatureError = errors.StructuralError("LiteralData not followed by signature") + } else { + scr.md.SignatureError = errors.StructuralError("No matching signature found") + } } // The SymmetricallyEncrypted packet, if any, might have an @@ -356,47 +410,87 @@ func (scr *signatureCheckReader) Read(buf []byte) (n int, err error) { if scr.md.decrypted != nil { mdcErr := scr.md.decrypted.Close() if mdcErr != nil { - err = mdcErr + return n, mdcErr } } + return n, io.EOF } - return + + if sensitiveParsingError != nil { + return n, errors.StructuralError("parsing error") + } + + return n, nil +} + +// 
VerifyDetachedSignature takes a signed file and a detached signature and +// returns the signature packet and the entity the signature was signed by, +// if any, and a possible signature verification error. +// If the signer isn't known, ErrUnknownIssuer is returned. +func VerifyDetachedSignature(keyring KeyRing, signed, signature io.Reader, config *packet.Config) (sig *packet.Signature, signer *Entity, err error) { + var expectedHashes []crypto.Hash + return verifyDetachedSignature(keyring, signed, signature, expectedHashes, config) +} + +// VerifyDetachedSignatureAndHash performs the same actions as +// VerifyDetachedSignature and checks that the expected hash functions were used. +func VerifyDetachedSignatureAndHash(keyring KeyRing, signed, signature io.Reader, expectedHashes []crypto.Hash, config *packet.Config) (sig *packet.Signature, signer *Entity, err error) { + return verifyDetachedSignature(keyring, signed, signature, expectedHashes, config) } // CheckDetachedSignature takes a signed file and a detached signature and -// returns the signer if the signature is valid. If the signer isn't known, +// returns the entity the signature was signed by, if any, and a possible +// signature verification error. If the signer isn't known, // ErrUnknownIssuer is returned. -func CheckDetachedSignature(keyring KeyRing, signed, signature io.Reader) (signer *Entity, err error) { +func CheckDetachedSignature(keyring KeyRing, signed, signature io.Reader, config *packet.Config) (signer *Entity, err error) { + var expectedHashes []crypto.Hash + return CheckDetachedSignatureAndHash(keyring, signed, signature, expectedHashes, config) +} + +// CheckDetachedSignatureAndHash performs the same actions as +// CheckDetachedSignature and checks that the expected hash functions were used. 
+func CheckDetachedSignatureAndHash(keyring KeyRing, signed, signature io.Reader, expectedHashes []crypto.Hash, config *packet.Config) (signer *Entity, err error) { + _, signer, err = verifyDetachedSignature(keyring, signed, signature, expectedHashes, config) + return +} + +func verifyDetachedSignature(keyring KeyRing, signed, signature io.Reader, expectedHashes []crypto.Hash, config *packet.Config) (sig *packet.Signature, signer *Entity, err error) { var issuerKeyId uint64 var hashFunc crypto.Hash var sigType packet.SignatureType var keys []Key var p packet.Packet + expectedHashesLen := len(expectedHashes) packets := packet.NewReader(signature) for { p, err = packets.Next() if err == io.EOF { - return nil, errors.ErrUnknownIssuer + return nil, nil, errors.ErrUnknownIssuer } if err != nil { - return nil, err + return nil, nil, err + } + + var ok bool + sig, ok = p.(*packet.Signature) + if !ok { + return nil, nil, errors.StructuralError("non signature packet found") } + if sig.IssuerKeyId == nil { + return nil, nil, errors.StructuralError("signature doesn't have an issuer") + } + issuerKeyId = *sig.IssuerKeyId + hashFunc = sig.Hash + sigType = sig.SigType - switch sig := p.(type) { - case *packet.Signature: - if sig.IssuerKeyId == nil { - return nil, errors.StructuralError("signature doesn't have an issuer") + for i, expectedHash := range expectedHashes { + if hashFunc == expectedHash { + break + } + if i+1 == expectedHashesLen { + return nil, nil, errors.StructuralError("hash algorithm mismatch with cleartext message headers") } - issuerKeyId = *sig.IssuerKeyId - hashFunc = sig.Hash - sigType = sig.SigType - case *packet.SignatureV3: - issuerKeyId = sig.IssuerKeyId - hashFunc = sig.Hash - sigType = sig.SigType - default: - return nil, errors.StructuralError("non signature packet found") } keys = keyring.KeysByIdUsage(issuerKeyId, packet.KeyFlagSign) @@ -411,38 +505,86 @@ func CheckDetachedSignature(keyring KeyRing, signed, signature io.Reader) (signe h, 
wrappedHash, err := hashForSignature(hashFunc, sigType) if err != nil { - return nil, err + return nil, nil, err } if _, err := io.Copy(wrappedHash, signed); err != nil && err != io.EOF { - return nil, err + return nil, nil, err } for _, key := range keys { - switch sig := p.(type) { - case *packet.Signature: - err = key.PublicKey.VerifySignature(h, sig) - case *packet.SignatureV3: - err = key.PublicKey.VerifySignatureV3(h, sig) - default: - panic("unreachable") - } - + err = key.PublicKey.VerifySignature(h, sig) if err == nil { - return key.Entity, nil + return sig, key.Entity, checkSignatureDetails(&key, sig, config) } } - return nil, err + return nil, nil, err } // CheckArmoredDetachedSignature performs the same actions as // CheckDetachedSignature but expects the signature to be armored. -func CheckArmoredDetachedSignature(keyring KeyRing, signed, signature io.Reader) (signer *Entity, err error) { +func CheckArmoredDetachedSignature(keyring KeyRing, signed, signature io.Reader, config *packet.Config) (signer *Entity, err error) { body, err := readArmored(signature, SignatureType) if err != nil { return } - return CheckDetachedSignature(keyring, signed, body) + return CheckDetachedSignature(keyring, signed, body, config) +} + +// checkSignatureDetails returns an error if: +// - The signature (or one of the binding signatures mentioned below) +// has an unknown critical notation data subpacket +// - The primary key of the signing entity is revoked +// The signature was signed by a subkey and: +// - The signing subkey is revoked +// - The primary identity is revoked +// - The signature is expired +// - The primary key of the signing entity is expired according to the +// primary identity binding signature +// The signature was signed by a subkey and: +// - The signing subkey is expired according to the subkey binding signature +// - The signing subkey binding signature is expired +// - The signing subkey cross-signature is expired +// NOTE: The order of these
checks is important, as the caller may choose to +// ignore ErrSignatureExpired or ErrKeyExpired errors, but should never +// ignore any other errors. +// TODO: Also return an error if: +// - The primary key is expired according to a direct-key signature +// - (For V5 keys only:) The direct-key signature (exists and) is expired +func checkSignatureDetails(key *Key, signature *packet.Signature, config *packet.Config) error { + now := config.Now() + primaryIdentity := key.Entity.PrimaryIdentity() + signedBySubKey := key.PublicKey != key.Entity.PrimaryKey + sigsToCheck := []*packet.Signature{ signature, primaryIdentity.SelfSignature } + if signedBySubKey { + sigsToCheck = append(sigsToCheck, key.SelfSignature, key.SelfSignature.EmbeddedSignature) + } + for _, sig := range sigsToCheck { + for _, notation := range sig.Notations { + if notation.IsCritical && !config.KnownNotation(notation.Name) { + return errors.SignatureError("unknown critical notation: " + notation.Name) + } + } + } + if key.Entity.Revoked(now) || // primary key is revoked + (signedBySubKey && key.Revoked(now)) || // subkey is revoked + primaryIdentity.Revoked(now) { // primary identity is revoked + return errors.ErrKeyRevoked + } + if key.Entity.PrimaryKey.KeyExpired(primaryIdentity.SelfSignature, now) { // primary key is expired + return errors.ErrKeyExpired + } + if signedBySubKey { + if key.PublicKey.KeyExpired(key.SelfSignature, now) { // subkey is expired + return errors.ErrKeyExpired + } + } + for _, sig := range sigsToCheck { + if sig.SigExpired(now) { // any of the relevant signatures are expired + return errors.ErrSignatureExpired + } + } + return nil } diff --git a/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/read_write_test_data.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/read_write_test_data.go new file mode 100644 index 00000000000..db6dad5c0b7 --- /dev/null +++ 
b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/read_write_test_data.go @@ -0,0 +1,274 @@ +package openpgp + +const testKey1KeyId uint64 = 0xA34D7E18C20C31BB +const testKey3KeyId uint64 = 0x338934250CCC0360 +const testKeyP256KeyId uint64 = 0xd44a2c495918513e + +const signedInput = "Signed message\nline 2\nline 3\n" +const signedTextInput = "Signed message\r\nline 2\r\nline 3\r\n" + +const recipientUnspecifiedHex = "848c0300000000000000000103ff62d4d578d03cf40c3da998dfe216c074fa6ddec5e31c197c9666ba292830d91d18716a80f699f9d897389a90e6d62d0238f5f07a5248073c0f24920e4bc4a30c2d17ee4e0cae7c3d4aaa4e8dced50e3010a80ee692175fa0385f62ecca4b56ee6e9980aa3ec51b61b077096ac9e800edaf161268593eedb6cc7027ff5cb32745d250010d407a6221ae22ef18469b444f2822478c4d190b24d36371a95cb40087cdd42d9399c3d06a53c0673349bfb607927f20d1e122bde1e2bf3aa6cae6edf489629bcaa0689539ae3b718914d88ededc3b" + +const detachedSignatureHex = "889c04000102000605024d449cd1000a0910a34d7e18c20c31bb167603ff57718d09f28a519fdc7b5a68b6a3336da04df85e38c5cd5d5bd2092fa4629848a33d85b1729402a2aab39c3ac19f9d573f773cc62c264dc924c067a79dfd8a863ae06c7c8686120760749f5fd9b1e03a64d20a7df3446ddc8f0aeadeaeba7cbaee5c1e366d65b6a0c6cc749bcb912d2f15013f812795c2e29eb7f7b77f39ce77" + +const detachedSignatureTextHex = "889c04010102000605024d449d21000a0910a34d7e18c20c31bbc8c60400a24fbef7342603a41cb1165767bd18985d015fb72fe05db42db36cfb2f1d455967f1e491194fbf6cf88146222b23bf6ffbd50d17598d976a0417d3192ff9cc0034fd00f287b02e90418bbefe609484b09231e4e7a5f3562e199bf39909ab5276c4d37382fe088f6b5c3426fc1052865da8b3ab158672d58b6264b10823dc4b39" + +const detachedSignatureDSAHex = "884604001102000605024d6c4eac000a0910338934250ccc0360f18d00a087d743d6405ed7b87755476629600b8b694a39e900a0abff8126f46faf1547c1743c37b21b4ea15b8f83" + +const detachedSignatureP256Hex = 
"885e0400130a0006050256e5bb00000a0910d44a2c495918513edef001009841a4f792beb0befccb35c8838a6a87d9b936beaa86db6745ddc7b045eee0cf00fd1ac1f78306b17e965935dd3f8bae4587a76587e4af231efe19cc4011a8434817" + +// The plaintext is https://www.gutenberg.org/cache/epub/1080/pg1080.txt +const modestProposalSha512 = "lbbrB1+WP3T9AaC9OQqBdOcCjgeEQadlulXsNPgVx0tyqPzDHwUugZ2gE7V0ESKAw6kAVfgkcuvfgxAAGaeHtw==" + +const testKeys1And2Hex = "988d044d3c5c10010400b1d13382944bd5aba23a4312968b5095d14f947f600eb478e14a6fcb16b0e0cac764884909c020bc495cfcc39a935387c661507bdb236a0612fb582cac3af9b29cc2c8c70090616c41b662f4da4c1201e195472eb7f4ae1ccbcbf9940fe21d985e379a5563dde5b9a23d35f1cfaa5790da3b79db26f23695107bfaca8e7b5bcd0011010001b41054657374204b6579203120285253412988b804130102002205024d3c5c10021b03060b090807030206150802090a0b0416020301021e01021780000a0910a34d7e18c20c31bbb5b304009cc45fe610b641a2c146331be94dade0a396e73ca725e1b25c21708d9cab46ecca5ccebc23055879df8f99eea39b377962a400f2ebdc36a7c99c333d74aeba346315137c3ff9d0a09b0273299090343048afb8107cf94cbd1400e3026f0ccac7ecebbc4d78588eb3e478fe2754d3ca664bcf3eac96ca4a6b0c8d7df5102f60f6b0020003b88d044d3c5c10010400b201df61d67487301f11879d514f4248ade90c8f68c7af1284c161098de4c28c2850f1ec7b8e30f959793e571542ffc6532189409cb51c3d30dad78c4ad5165eda18b20d9826d8707d0f742e2ab492103a85bbd9ddf4f5720f6de7064feb0d39ee002219765bb07bcfb8b877f47abe270ddeda4f676108cecb6b9bb2ad484a4f0011010001889f04180102000905024d3c5c10021b0c000a0910a34d7e18c20c31bb1a03040085c8d62e16d05dc4e9dad64953c8a2eed8b6c12f92b1575eeaa6dcf7be9473dd5b24b37b6dffbb4e7c99ed1bd3cb11634be19b3e6e207bed7505c7ca111ccf47cb323bf1f8851eb6360e8034cbff8dd149993c959de89f8f77f38e7e98b8e3076323aa719328e2b408db5ec0d03936efd57422ba04f925cdc7b4c1af7590e40ab0020003988d044d3c5c33010400b488c3e5f83f4d561f317817538d9d0397981e9aef1321ca68ebfae1cf8b7d388e19f4b5a24a82e2fbbf1c6c26557a6c5845307a03d815756f564ac7325b02bc83e87d5480a8fae848f07cb891f2d51ce7df83dcafdc12324517c86d472cc0ee10d47a68fd1d9ae49a6c19bbd36d82af597a0d88cc9c49de9
df4e696fc1f0b5d0011010001b42754657374204b6579203220285253412c20656e637279707465642070726976617465206b65792988b804130102002205024d3c5c33021b03060b090807030206150802090a0b0416020301021e01021780000a0910d4984f961e35246b98940400908a73b6a6169f700434f076c6c79015a49bee37130eaf23aaa3cfa9ce60bfe4acaa7bc95f1146ada5867e0079babb38804891f4f0b8ebca57a86b249dee786161a755b7a342e68ccf3f78ed6440a93a6626beb9a37aa66afcd4f888790cb4bb46d94a4ae3eb3d7d3e6b00f6bfec940303e89ec5b32a1eaaacce66497d539328b0020003b88d044d3c5c33010400a4e913f9442abcc7f1804ccab27d2f787ffa592077ca935a8bb23165bd8d57576acac647cc596b2c3f814518cc8c82953c7a4478f32e0cf645630a5ba38d9618ef2bc3add69d459ae3dece5cab778938d988239f8c5ae437807075e06c828019959c644ff05ef6a5a1dab72227c98e3a040b0cf219026640698d7a13d8538a570011010001889f04180102000905024d3c5c33021b0c000a0910d4984f961e35246b26c703ff7ee29ef53bc1ae1ead533c408fa136db508434e233d6e62be621e031e5940bbd4c08142aed0f82217e7c3e1ec8de574bc06ccf3c36633be41ad78a9eacd209f861cae7b064100758545cc9dd83db71806dc1cfd5fb9ae5c7474bba0c19c44034ae61bae5eca379383339dece94ff56ff7aa44a582f3e5c38f45763af577c0934b0020003" + +const testKeys1And2PrivateHex = 
"9501d8044d3c5c10010400b1d13382944bd5aba23a4312968b5095d14f947f600eb478e14a6fcb16b0e0cac764884909c020bc495cfcc39a935387c661507bdb236a0612fb582cac3af9b29cc2c8c70090616c41b662f4da4c1201e195472eb7f4ae1ccbcbf9940fe21d985e379a5563dde5b9a23d35f1cfaa5790da3b79db26f23695107bfaca8e7b5bcd00110100010003ff4d91393b9a8e3430b14d6209df42f98dc927425b881f1209f319220841273a802a97c7bdb8b3a7740b3ab5866c4d1d308ad0d3a79bd1e883aacf1ac92dfe720285d10d08752a7efe3c609b1d00f17f2805b217be53999a7da7e493bfc3e9618fd17018991b8128aea70a05dbce30e4fbe626aa45775fa255dd9177aabf4df7cf0200c1ded12566e4bc2bb590455e5becfb2e2c9796482270a943343a7835de41080582c2be3caf5981aa838140e97afa40ad652a0b544f83eb1833b0957dce26e47b0200eacd6046741e9ce2ec5beb6fb5e6335457844fb09477f83b050a96be7da043e17f3a9523567ed40e7a521f818813a8b8a72209f1442844843ccc7eb9805442570200bdafe0438d97ac36e773c7162028d65844c4d463e2420aa2228c6e50dc2743c3d6c72d0d782a5173fe7be2169c8a9f4ef8a7cf3e37165e8c61b89c346cdc6c1799d2b41054657374204b6579203120285253412988b804130102002205024d3c5c10021b03060b090807030206150802090a0b0416020301021e01021780000a0910a34d7e18c20c31bbb5b304009cc45fe610b641a2c146331be94dade0a396e73ca725e1b25c21708d9cab46ecca5ccebc23055879df8f99eea39b377962a400f2ebdc36a7c99c333d74aeba346315137c3ff9d0a09b0273299090343048afb8107cf94cbd1400e3026f0ccac7ecebbc4d78588eb3e478fe2754d3ca664bcf3eac96ca4a6b0c8d7df5102f60f6b00200009d01d8044d3c5c10010400b201df61d67487301f11879d514f4248ade90c8f68c7af1284c161098de4c28c2850f1ec7b8e30f959793e571542ffc6532189409cb51c3d30dad78c4ad5165eda18b20d9826d8707d0f742e2ab492103a85bbd9ddf4f5720f6de7064feb0d39ee002219765bb07bcfb8b877f47abe270ddeda4f676108cecb6b9bb2ad484a4f00110100010003fd17a7490c22a79c59281fb7b20f5e6553ec0c1637ae382e8adaea295f50241037f8997cf42c1ce26417e015091451b15424b2c59eb8d4161b0975630408e394d3b00f88d4b4e18e2cc85e8251d4753a27c639c83f5ad4a571c4f19d7cd460b9b73c25ade730c99df09637bd173d8e3e981ac64432078263bb6dc30d3e974150dd0200d0ee05be3d4604d2146fb0457f31ba17c057560785aa804e8ca5530a7cd81d3440d0f4ba6851efc
fd3954b7e68908fc0ba47f7ac37bf559c6c168b70d3a7c8cd0200da1c677c4bce06a068070f2b3733b0a714e88d62aa3f9a26c6f5216d48d5c2b5624144f3807c0df30be66b3268eeeca4df1fbded58faf49fc95dc3c35f134f8b01fd1396b6c0fc1b6c4f0eb8f5e44b8eace1e6073e20d0b8bc5385f86f1cf3f050f66af789f3ef1fc107b7f4421e19e0349c730c68f0a226981f4e889054fdb4dc149e8e889f04180102000905024d3c5c10021b0c000a0910a34d7e18c20c31bb1a03040085c8d62e16d05dc4e9dad64953c8a2eed8b6c12f92b1575eeaa6dcf7be9473dd5b24b37b6dffbb4e7c99ed1bd3cb11634be19b3e6e207bed7505c7ca111ccf47cb323bf1f8851eb6360e8034cbff8dd149993c959de89f8f77f38e7e98b8e3076323aa719328e2b408db5ec0d03936efd57422ba04f925cdc7b4c1af7590e40ab00200009501fe044d3c5c33010400b488c3e5f83f4d561f317817538d9d0397981e9aef1321ca68ebfae1cf8b7d388e19f4b5a24a82e2fbbf1c6c26557a6c5845307a03d815756f564ac7325b02bc83e87d5480a8fae848f07cb891f2d51ce7df83dcafdc12324517c86d472cc0ee10d47a68fd1d9ae49a6c19bbd36d82af597a0d88cc9c49de9df4e696fc1f0b5d0011010001fe030302e9030f3c783e14856063f16938530e148bc57a7aa3f3e4f90df9dceccdc779bc0835e1ad3d006e4a8d7b36d08b8e0de5a0d947254ecfbd22037e6572b426bcfdc517796b224b0036ff90bc574b5509bede85512f2eefb520fb4b02aa523ba739bff424a6fe81c5041f253f8d757e69a503d3563a104d0d49e9e890b9d0c26f96b55b743883b472caa7050c4acfd4a21f875bdf1258d88bd61224d303dc9df77f743137d51e6d5246b88c406780528fd9a3e15bab5452e5b93970d9dcc79f48b38651b9f15bfbcf6da452837e9cc70683d1bdca94507870f743e4ad902005812488dd342f836e72869afd00ce1850eea4cfa53ce10e3608e13d3c149394ee3cbd0e23d018fcbcb6e2ec5a1a22972d1d462ca05355d0d290dd2751e550d5efb38c6c89686344df64852bf4ff86638708f644e8ec6bd4af9b50d8541cb91891a431326ab2e332faa7ae86cfb6e0540aa63160c1e5cdd5a4add518b303fff0a20117c6bc77f7cfbaf36b04c865c6c2b42754657374204b6579203220285253412c20656e637279707465642070726976617465206b65792988b804130102002205024d3c5c33021b03060b090807030206150802090a0b0416020301021e01021780000a0910d4984f961e35246b98940400908a73b6a6169f700434f076c6c79015a49bee37130eaf23aaa3cfa9ce60bfe4acaa7bc95f1146ada5867e0079babb38804891f4f0b8ebca57a86b249dee78616
1a755b7a342e68ccf3f78ed6440a93a6626beb9a37aa66afcd4f888790cb4bb46d94a4ae3eb3d7d3e6b00f6bfec940303e89ec5b32a1eaaacce66497d539328b00200009d01fe044d3c5c33010400a4e913f9442abcc7f1804ccab27d2f787ffa592077ca935a8bb23165bd8d57576acac647cc596b2c3f814518cc8c82953c7a4478f32e0cf645630a5ba38d9618ef2bc3add69d459ae3dece5cab778938d988239f8c5ae437807075e06c828019959c644ff05ef6a5a1dab72227c98e3a040b0cf219026640698d7a13d8538a570011010001fe030302e9030f3c783e148560f936097339ae381d63116efcf802ff8b1c9360767db5219cc987375702a4123fd8657d3e22700f23f95020d1b261eda5257e9a72f9a918e8ef22dd5b3323ae03bbc1923dd224db988cadc16acc04b120a9f8b7e84da9716c53e0334d7b66586ddb9014df604b41be1e960dcfcbc96f4ed150a1a0dd070b9eb14276b9b6be413a769a75b519a53d3ecc0c220e85cd91ca354d57e7344517e64b43b6e29823cbd87eae26e2b2e78e6dedfbb76e3e9f77bcb844f9a8932eb3db2c3f9e44316e6f5d60e9e2a56e46b72abe6b06dc9a31cc63f10023d1f5e12d2a3ee93b675c96f504af0001220991c88db759e231b3320dcedf814dcf723fd9857e3d72d66a0f2af26950b915abdf56c1596f46a325bf17ad4810d3535fb02a259b247ac3dbd4cc3ecf9c51b6c07cebb009c1506fba0a89321ec8683e3fd009a6e551d50243e2d5092fefb3321083a4bad91320dc624bd6b5dddf93553e3d53924c05bfebec1fb4bd47e89a1a889f04180102000905024d3c5c33021b0c000a0910d4984f961e35246b26c703ff7ee29ef53bc1ae1ead533c408fa136db508434e233d6e62be621e031e5940bbd4c08142aed0f82217e7c3e1ec8de574bc06ccf3c36633be41ad78a9eacd209f861cae7b064100758545cc9dd83db71806dc1cfd5fb9ae5c7474bba0c19c44034ae61bae5eca379383339dece94ff56ff7aa44a582f3e5c38f45763af577c0934b0020000" + +const dsaElGamalTestKeysHex = 
"9501e1044dfcb16a110400aa3e5c1a1f43dd28c2ffae8abf5cfce555ee874134d8ba0a0f7b868ce2214beddc74e5e1e21ded354a95d18acdaf69e5e342371a71fbb9093162e0c5f3427de413a7f2c157d83f5cd2f9d791256dc4f6f0e13f13c3302af27f2384075ab3021dff7a050e14854bbde0a1094174855fc02f0bae8e00a340d94a1f22b32e48485700a0cec672ac21258fb95f61de2ce1af74b2c4fa3e6703ff698edc9be22c02ae4d916e4fa223f819d46582c0516235848a77b577ea49018dcd5e9e15cff9dbb4663a1ae6dd7580fa40946d40c05f72814b0f88481207e6c0832c3bded4853ebba0a7e3bd8e8c66df33d5a537cd4acf946d1080e7a3dcea679cb2b11a72a33a2b6a9dc85f466ad2ddf4c3db6283fa645343286971e3dd700703fc0c4e290d45767f370831a90187e74e9972aae5bff488eeff7d620af0362bfb95c1a6c3413ab5d15a2e4139e5d07a54d72583914661ed6a87cce810be28a0aa8879a2dd39e52fb6fe800f4f181ac7e328f740cde3d09a05cecf9483e4cca4253e60d4429ffd679d9996a520012aad119878c941e3cf151459873bdfc2a9563472fe0303027a728f9feb3b864260a1babe83925ce794710cfd642ee4ae0e5b9d74cee49e9c67b6cd0ea5dfbb582132195a121356a1513e1bca73e5b80c58c7ccb4164453412f456c47616d616c2054657374204b65792031886204131102002205024dfcb16a021b03060b090807030206150802090a0b0416020301021e01021780000a091033af447ccd759b09fadd00a0b8fd6f5a790bad7e9f2dbb7632046dc4493588db009c087c6a9ba9f7f49fab221587a74788c00db4889ab00200009d0157044dfcb16a1004008dec3f9291205255ccff8c532318133a6840739dd68b03ba942676f9038612071447bf07d00d559c5c0875724ea16a4c774f80d8338b55fca691a0522e530e604215b467bbc9ccfd483a1da99d7bc2648b4318fdbd27766fc8bfad3fddb37c62b8ae7ccfe9577e9b8d1e77c1d417ed2c2ef02d52f4da11600d85d3229607943700030503ff506c94c87c8cab778e963b76cf63770f0a79bf48fb49d3b4e52234620fc9f7657f9f8d56c96a2b7c7826ae6b57ebb2221a3fe154b03b6637cea7e6d98e3e45d87cf8dc432f723d3d71f89c5192ac8d7290684d2c25ce55846a80c9a7823f6acd9bb29fa6cd71f20bc90eccfca20451d0c976e460e672b000df49466408d527affe0303027a728f9feb3b864260abd761730327bca2aaa4ea0525c175e92bf240682a0e83b226f97ecb2e935b62c9a133858ce31b271fa8eb41f6a1b3cd72a63025ce1a75ee4180dcc284884904181102000905024dfcb16a021b0c000a091033af447ccd759b09dd0b009e3c3e7296092c81bee
5a19929462caaf2fff3ae26009e218c437a2340e7ea628149af1ec98ec091a43992b00200009501e1044dfcb1be1104009f61faa61aa43df75d128cbe53de528c4aec49ce9360c992e70c77072ad5623de0a3a6212771b66b39a30dad6781799e92608316900518ec01184a85d872365b7d2ba4bacfb5882ea3c2473d3750dc6178cc1cf82147fb58caa28b28e9f12f6d1efcb0534abed644156c91cca4ab78834268495160b2400bc422beb37d237c2300a0cac94911b6d493bda1e1fbc6feeca7cb7421d34b03fe22cec6ccb39675bb7b94a335c2b7be888fd3906a1125f33301d8aa6ec6ee6878f46f73961c8d57a3e9544d8ef2a2cbfd4d52da665b1266928cfe4cb347a58c412815f3b2d2369dec04b41ac9a71cc9547426d5ab941cccf3b18575637ccfb42df1a802df3cfe0a999f9e7109331170e3a221991bf868543960f8c816c28097e503fe319db10fb98049f3a57d7c80c420da66d56f3644371631fad3f0ff4040a19a4fedc2d07727a1b27576f75a4d28c47d8246f27071e12d7a8de62aad216ddbae6aa02efd6b8a3e2818cda48526549791ab277e447b3a36c57cefe9b592f5eab73959743fcc8e83cbefec03a329b55018b53eec196765ae40ef9e20521a603c551efe0303020950d53a146bf9c66034d00c23130cce95576a2ff78016ca471276e8227fb30b1ffbd92e61804fb0c3eff9e30b1a826ee8f3e4730b4d86273ca977b4164453412f456c47616d616c2054657374204b65792032886204131102002205024dfcb1be021b03060b090807030206150802090a0b0416020301021e01021780000a0910a86bf526325b21b22bd9009e34511620415c974750a20df5cb56b182f3b48e6600a0a9466cb1a1305a84953445f77d461593f1d42bc1b00200009d0157044dfcb1be1004009565a951da1ee87119d600c077198f1c1bceb0f7aa54552489298e41ff788fa8f0d43a69871f0f6f77ebdfb14a4260cf9fbeb65d5844b4272a1904dd95136d06c3da745dc46327dd44a0f16f60135914368c8039a34033862261806bb2c5ce1152e2840254697872c85441ccb7321431d75a747a4bfb1d2c66362b51ce76311700030503fc0ea76601c196768070b7365a200e6ddb09307f262d5f39eec467b5f5784e22abdf1aa49226f59ab37cb49969d8f5230ea65caf56015abda62604544ed526c5c522bf92bed178a078789f6c807b6d34885688024a5bed9e9f8c58d11d4b82487b44c5f470c5606806a0443b79cadb45e0f897a561a53f724e5349b9267c75ca17fe0303020950d53a146bf9c660bc5f4ce8f072465e2d2466434320c1e712272fafc20e342fe7608101580fa1a1a367e60486a7cd1246b7ef5586cf5e10b32762b710a30144f12dd17dd4884904181
102000905024dfcb1be021b0c000a0910a86bf526325b21b2904c00a0b2b66b4b39ccffda1d10f3ea8d58f827e30a8b8e009f4255b2d8112a184e40cde43a34e8655ca7809370b0020000" + +const signedMessageHex = "a3019bc0cbccc0c4b8d8b74ee2108fe16ec6d3ca490cbe362d3f8333d3f352531472538b8b13d353b97232f352158c20943157c71c16064626063656269052062e4e01987e9b6fccff4b7df3a34c534b23e679cbec3bc0f8f6e64dfb4b55fe3f8efa9ce110ddb5cd79faf1d753c51aecfa669f7e7aa043436596cccc3359cb7dd6bbe9ecaa69e5989d9e57209571edc0b2fa7f57b9b79a64ee6e99ce1371395fee92fec2796f7b15a77c386ff668ee27f6d38f0baa6c438b561657377bf6acff3c5947befd7bf4c196252f1d6e5c524d0300" + +const signedTextMessageHex = "a3019bc0cbccc8c4b8d8b74ee2108fe16ec6d36a250cbece0c178233d3f352531472538b8b13d35379b97232f352158ca0b4312f57c71c1646462606365626906a062e4e019811591798ff99bf8afee860b0d8a8c2a85c3387e3bcf0bb3b17987f2bbcfab2aa526d930cbfd3d98757184df3995c9f3e7790e36e3e9779f06089d4c64e9e47dd6202cb6e9bc73c5d11bb59fbaf89d22d8dc7cf199ddf17af96e77c5f65f9bbed56f427bd8db7af37f6c9984bf9385efaf5f184f986fb3e6adb0ecfe35bbf92d16a7aa2a344fb0bc52fb7624f0200" + +const signedEncryptedMessageHex = 
"c18c032a67d68660df41c70103ff5a84c9a72f80e74ef0384c2d6a9ebfe2b09e06a8f298394f6d2abf174e40934ab0ec01fb2d0ddf21211c6fe13eb238563663b017a6b44edca552eb4736c4b7dc6ed907dd9e12a21b51b64b46f902f76fb7aaf805c1db8070574d8d0431a23e324a750f77fb72340a17a42300ee4ca8207301e95a731da229a63ab9c6b44541fbd2c11d016d810b3b3b2b38f15b5b40f0a4910332829c2062f1f7cc61f5b03677d73c54cafa1004ced41f315d46444946faae571d6f426e6dbd45d9780eb466df042005298adabf7ce0ef766dfeb94cd449c7ed0046c880339599c4711af073ce649b1e237c40b50a5536283e03bdbb7afad78bd08707715c67fb43295f905b4c479178809d429a8e167a9a8c6dfd8ab20b4edebdc38d6dec879a3202e1b752690d9bb5b0c07c5a227c79cc200e713a99251a4219d62ad5556900cf69bd384b6c8e726c7be267471d0d23af956da165af4af757246c2ebcc302b39e8ef2fccb4971b234fcda22d759ddb20e27269ee7f7fe67898a9de721bfa02ab0becaa046d00ea16cb1afc4e2eab40d0ac17121c565686e5cbd0cbdfbd9d6db5c70278b9c9db5a83176d04f61fbfbc4471d721340ede2746e5c312ded4f26787985af92b64fae3f253dbdde97f6a5e1996fd4d865599e32ff76325d3e9abe93184c02988ee89a4504356a4ef3b9b7a57cbb9637ca90af34a7676b9ef559325c3cca4e29d69fec1887f5440bb101361d744ad292a8547f22b4f22b419a42aa836169b89190f46d9560824cb2ac6e8771de8223216a5e647e132ab9eebcba89569ab339cb1c3d70fe806b31f4f4c600b4103b8d7583ebff16e43dcda551e6530f975122eb8b29" + +const verifiedSignatureEncryptedMessageHex = "c2b304000108000605026048f6d600210910a34d7e18c20c31bb1621045fb74b1d03b1e3cb31bc2f8aa34d7e18c20c31bb9a3b0400a32ddac1af259c1b0abab0041327ea04970944401978fb647dd1cf9aba4f164e43f0d8a9389501886474bdd4a6e77f6aea945c07dfbf87743835b44cc2c39a1f9aeecfa83135abc92e18e50396f2e6a06c44e0188b0081effbfb4160d28f118d4ff73dd199a102e47cffd8c7ff2bacd83ae72b5820c021a486766dd587b5da61" + +const unverifiedSignatureEncryptedMessageHex = 
"c2b304000108000605026048f6d600210910a34d7e18c20c31bb1621045fb74b1d03b1e3cb31bc2f8aa34d7e18c20c31bb9a3b0400a32ddac1af259c1b0abab0041327ea04970944401978fb647dd1cf9aba4f164e43f0d8a9389501886474bdd4a6e77f6aea945c07dfbf87743835b44cc2c39a1f9aeecfa83135abc92e18e50396f2e6a06c44e0188b0081effbfb4160d28f118d4ff73dd199a102e47cffd8c7ff2bacd83ae72b5820c021a486766dd587b5da61" + +const signedEncryptedMessage2Hex = "85010e03cf6a7abcd43e36731003fb057f5495b79db367e277cdbe4ab90d924ddee0c0381494112ff8c1238fb0184af35d1731573b01bc4c55ecacd2aafbe2003d36310487d1ecc9ac994f3fada7f9f7f5c3a64248ab7782906c82c6ff1303b69a84d9a9529c31ecafbcdb9ba87e05439897d87e8a2a3dec55e14df19bba7f7bd316291c002ae2efd24f83f9e3441203fc081c0c23dc3092a454ca8a082b27f631abf73aca341686982e8fbda7e0e7d863941d68f3de4a755c2964407f4b5e0477b3196b8c93d551dd23c8beef7d0f03fbb1b6066f78907faf4bf1677d8fcec72651124080e0b7feae6b476e72ab207d38d90b958759fdedfc3c6c35717c9dbfc979b3cfbbff0a76d24a5e57056bb88acbd2a901ef64bc6e4db02adc05b6250ff378de81dca18c1910ab257dff1b9771b85bb9bbe0a69f5989e6d1710a35e6dfcceb7d8fb5ccea8db3932b3d9ff3fe0d327597c68b3622aec8e3716c83a6c93f497543b459b58ba504ed6bcaa747d37d2ca746fe49ae0a6ce4a8b694234e941b5159ff8bd34b9023da2814076163b86f40eed7c9472f81b551452d5ab87004a373c0172ec87ea6ce42ccfa7dbdad66b745496c4873d8019e8c28d6b3" + +const signatureEncryptedMessage2Hex = "c24604001102000605024dfd0166000a091033af447ccd759b09bae600a096ec5e63ecf0a403085e10f75cc3bab327663282009f51fad9df457ed8d2b70d8a73c76e0443eac0f377" + +const symmetricallyEncryptedCompressedHex = "c32e040903085a357c1a7b5614ed00cc0d1d92f428162058b3f558a0fb0980d221ebac6c97d5eda4e0fe32f6e706e94dd263012d6ca1ef8c4bbd324098225e603a10c85ebf09cbf7b5aeeb5ce46381a52edc51038b76a8454483be74e6dcd1e50d5689a8ae7eceaeefed98a0023d49b22eb1f65c2aa1ef1783bb5e1995713b0457102ec3c3075fe871267ffa4b686ad5d52000d857" + +const dsaTestKeyHex = 
"9901a2044d6c49de110400cb5ce438cf9250907ac2ba5bf6547931270b89f7c4b53d9d09f4d0213a5ef2ec1f26806d3d259960f872a4a102ef1581ea3f6d6882d15134f21ef6a84de933cc34c47cc9106efe3bd84c6aec12e78523661e29bc1a61f0aab17fa58a627fd5fd33f5149153fbe8cd70edf3d963bc287ef875270ff14b5bfdd1bca4483793923b00a0fe46d76cb6e4cbdc568435cd5480af3266d610d303fe33ae8273f30a96d4d34f42fa28ce1112d425b2e3bf7ea553d526e2db6b9255e9dc7419045ce817214d1a0056dbc8d5289956a4b1b69f20f1105124096e6a438f41f2e2495923b0f34b70642607d45559595c7fe94d7fa85fc41bf7d68c1fd509ebeaa5f315f6059a446b9369c277597e4f474a9591535354c7e7f4fd98a08aa60400b130c24ff20bdfbf683313f5daebf1c9b34b3bdadfc77f2ddd72ee1fb17e56c473664bc21d66467655dd74b9005e3a2bacce446f1920cd7017231ae447b67036c9b431b8179deacd5120262d894c26bc015bffe3d827ba7087ad9b700d2ca1f6d16cc1786581e5dd065f293c31209300f9b0afcc3f7c08dd26d0a22d87580b4db41054657374204b65792033202844534129886204131102002205024d6c49de021b03060b090807030206150802090a0b0416020301021e01021780000a0910338934250ccc03607e0400a0bdb9193e8a6b96fc2dfc108ae848914b504481f100a09c4dc148cb693293a67af24dd40d2b13a9e36794" + +const dsaTestKeyPrivateHex = 
"9501bb044d6c49de110400cb5ce438cf9250907ac2ba5bf6547931270b89f7c4b53d9d09f4d0213a5ef2ec1f26806d3d259960f872a4a102ef1581ea3f6d6882d15134f21ef6a84de933cc34c47cc9106efe3bd84c6aec12e78523661e29bc1a61f0aab17fa58a627fd5fd33f5149153fbe8cd70edf3d963bc287ef875270ff14b5bfdd1bca4483793923b00a0fe46d76cb6e4cbdc568435cd5480af3266d610d303fe33ae8273f30a96d4d34f42fa28ce1112d425b2e3bf7ea553d526e2db6b9255e9dc7419045ce817214d1a0056dbc8d5289956a4b1b69f20f1105124096e6a438f41f2e2495923b0f34b70642607d45559595c7fe94d7fa85fc41bf7d68c1fd509ebeaa5f315f6059a446b9369c277597e4f474a9591535354c7e7f4fd98a08aa60400b130c24ff20bdfbf683313f5daebf1c9b34b3bdadfc77f2ddd72ee1fb17e56c473664bc21d66467655dd74b9005e3a2bacce446f1920cd7017231ae447b67036c9b431b8179deacd5120262d894c26bc015bffe3d827ba7087ad9b700d2ca1f6d16cc1786581e5dd065f293c31209300f9b0afcc3f7c08dd26d0a22d87580b4d00009f592e0619d823953577d4503061706843317e4fee083db41054657374204b65792033202844534129886204131102002205024d6c49de021b03060b090807030206150802090a0b0416020301021e01021780000a0910338934250ccc03607e0400a0bdb9193e8a6b96fc2dfc108ae848914b504481f100a09c4dc148cb693293a67af24dd40d2b13a9e36794" + +const p256TestKeyHex = 
"98520456e5b83813082a8648ce3d030107020304a2072cd6d21321266c758cc5b83fab0510f751cb8d91897cddb7047d8d6f185546e2107111b0a95cb8ef063c33245502af7a65f004d5919d93ee74eb71a66253b424502d3235362054657374204b6579203c696e76616c6964406578616d706c652e636f6d3e8879041313080021050256e5b838021b03050b09080702061508090a0b020416020301021e01021780000a0910d44a2c495918513e54e50100dfa64f97d9b47766fc1943c6314ba3f2b2a103d71ad286dc5b1efb96a345b0c80100dbc8150b54241f559da6ef4baacea6d31902b4f4b1bdc09b34bf0502334b7754b8560456e5b83812082a8648ce3d030107020304bfe3cea9cee13486f8d518aa487fecab451f25467d2bf08e58f63e5fa525d5482133e6a79299c274b068ef0be448152ad65cf11cf764348588ca4f6a0bcf22b6030108078861041813080009050256e5b838021b0c000a0910d44a2c495918513e4a4800ff49d589fa64024ad30be363a032e3a0e0e6f5db56ba4c73db850518bf0121b8f20100fd78e065f4c70ea5be9df319ea67e493b936fc78da834a71828043d3154af56e" + +const p256TestKeyPrivateHex = "94a50456e5b83813082a8648ce3d030107020304a2072cd6d21321266c758cc5b83fab0510f751cb8d91897cddb7047d8d6f185546e2107111b0a95cb8ef063c33245502af7a65f004d5919d93ee74eb71a66253fe070302f0c2bfb0b6c30f87ee1599472b8636477eab23ced13b271886a4b50ed34c9d8436af5af5b8f88921f0efba6ef8c37c459bbb88bc1c6a13bbd25c4ce9b1e97679569ee77645d469bf4b43de637f5561b424502d3235362054657374204b6579203c696e76616c6964406578616d706c652e636f6d3e8879041313080021050256e5b838021b03050b09080702061508090a0b020416020301021e01021780000a0910d44a2c495918513e54e50100dfa64f97d9b47766fc1943c6314ba3f2b2a103d71ad286dc5b1efb96a345b0c80100dbc8150b54241f559da6ef4baacea6d31902b4f4b1bdc09b34bf0502334b77549ca90456e5b83812082a8648ce3d030107020304bfe3cea9cee13486f8d518aa487fecab451f25467d2bf08e58f63e5fa525d5482133e6a79299c274b068ef0be448152ad65cf11cf764348588ca4f6a0bcf22b603010807fe0703027510012471a603cfee2968dce19f732721ddf03e966fd133b4e3c7a685b788705cbc46fb026dc94724b830c9edbaecd2fb2c662f23169516cacd1fe423f0475c364ecc10abcabcfd4bbbda1a36a1bd8861041813080009050256e5b838021b0c000a0910d44a2c495918513e4a4800ff49d589fa64024ad30be363a032e3a0e0e6f
5db56ba4c73db850518bf0121b8f20100fd78e065f4c70ea5be9df319ea67e493b936fc78da834a71828043d3154af56e" + +const armoredPrivateKeyBlock = `-----BEGIN PGP PRIVATE KEY BLOCK----- +Version: GnuPG v1.4.10 (GNU/Linux) + +lQHYBE2rFNoBBADFwqWQIW/DSqcB4yCQqnAFTJ27qS5AnB46ccAdw3u4Greeu3Bp +idpoHdjULy7zSKlwR1EA873dO/k/e11Ml3dlAFUinWeejWaK2ugFP6JjiieSsrKn +vWNicdCS4HTWn0X4sjl0ZiAygw6GNhqEQ3cpLeL0g8E9hnYzJKQ0LWJa0QARAQAB +AAP/TB81EIo2VYNmTq0pK1ZXwUpxCrvAAIG3hwKjEzHcbQznsjNvPUihZ+NZQ6+X +0HCfPAdPkGDCLCb6NavcSW+iNnLTrdDnSI6+3BbIONqWWdRDYJhqZCkqmG6zqSfL +IdkJgCw94taUg5BWP/AAeQrhzjChvpMQTVKQL5mnuZbUCeMCAN5qrYMP2S9iKdnk +VANIFj7656ARKt/nf4CBzxcpHTyB8+d2CtPDKCmlJP6vL8t58Jmih+kHJMvC0dzn +gr5f5+sCAOOe5gt9e0am7AvQWhdbHVfJU0TQJx+m2OiCJAqGTB1nvtBLHdJnfdC9 +TnXXQ6ZXibqLyBies/xeY2sCKL5qtTMCAKnX9+9d/5yQxRyrQUHt1NYhaXZnJbHx +q4ytu0eWz+5i68IYUSK69jJ1NWPM0T6SkqpB3KCAIv68VFm9PxqG1KmhSrQIVGVz +dCBLZXmIuAQTAQIAIgUCTasU2gIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AA +CgkQO9o98PRieSoLhgQAkLEZex02Qt7vGhZzMwuN0R22w3VwyYyjBx+fM3JFETy1 +ut4xcLJoJfIaF5ZS38UplgakHG0FQ+b49i8dMij0aZmDqGxrew1m4kBfjXw9B/v+ +eIqpODryb6cOSwyQFH0lQkXC040pjq9YqDsO5w0WYNXYKDnzRV0p4H1pweo2VDid +AdgETasU2gEEAN46UPeWRqKHvA99arOxee38fBt2CI08iiWyI8T3J6ivtFGixSqV +bRcPxYO/qLpVe5l84Nb3X71GfVXlc9hyv7CD6tcowL59hg1E/DC5ydI8K8iEpUmK +/UnHdIY5h8/kqgGxkY/T/hgp5fRQgW1ZoZxLajVlMRZ8W4tFtT0DeA+JABEBAAEA +A/0bE1jaaZKj6ndqcw86jd+QtD1SF+Cf21CWRNeLKnUds4FRRvclzTyUMuWPkUeX +TaNNsUOFqBsf6QQ2oHUBBK4VCHffHCW4ZEX2cd6umz7mpHW6XzN4DECEzOVksXtc +lUC1j4UB91DC/RNQqwX1IV2QLSwssVotPMPqhOi0ZLNY7wIA3n7DWKInxYZZ4K+6 +rQ+POsz6brEoRHwr8x6XlHenq1Oki855pSa1yXIARoTrSJkBtn5oI+f8AzrnN0BN +oyeQAwIA/7E++3HDi5aweWrViiul9cd3rcsS0dEnksPhvS0ozCJiHsq/6GFmy7J8 +QSHZPteedBnZyNp5jR+H7cIfVN3KgwH/Skq4PsuPhDq5TKK6i8Pc1WW8MA6DXTdU +nLkX7RGmMwjC0DBf7KWAlPjFaONAX3a8ndnz//fy1q7u2l9AZwrj1qa1iJ8EGAEC +AAkFAk2rFNoCGwwACgkQO9o98PRieSo2/QP/WTzr4ioINVsvN1akKuekmEMI3LAp +BfHwatufxxP1U+3Si/6YIk7kuPB9Hs+pRqCXzbvPRrI8NHZBmc8qIGthishdCYad +AHcVnXjtxrULkQFGbGvhKURLvS9WnzD/m1K2zzwxzkPTzT9/Yf06O6Mal5AdugPL 
+VrM0m72/jnpKo04= +=zNCn +-----END PGP PRIVATE KEY BLOCK-----` + +const e2ePublicKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- +Charset: UTF-8 + +xv8AAABSBAAAAAATCCqGSM49AwEHAgME1LRoXSpOxtHXDUdmuvzchyg6005qIBJ4 +sfaSxX7QgH9RV2ONUhC+WiayCNADq+UMzuR/vunSr4aQffXvuGnR383/AAAAFDxk +Z2lsQHlhaG9vLWluYy5jb20+wv8AAACGBBATCAA4/wAAAAWCVGvAG/8AAAACiwn/ +AAAACZC2VkQCOjdvYf8AAAAFlQgJCgv/AAAAA5YBAv8AAAACngEAAE1BAP0X8veD +24IjmI5/C6ZAfVNXxgZZFhTAACFX75jUA3oD6AEAzoSwKf1aqH6oq62qhCN/pekX ++WAsVMBhNwzLpqtCRjLO/wAAAFYEAAAAABIIKoZIzj0DAQcCAwT50ain7vXiIRv8 +B1DO3x3cE/aattZ5sHNixJzRCXi2vQIA5QmOxZ6b5jjUekNbdHG3SZi1a2Ak5mfX +fRxC/5VGAwEIB8L/AAAAZQQYEwgAGP8AAAAFglRrwBz/AAAACZC2VkQCOjdvYQAA +FJAA9isX3xtGyMLYwp2F3nXm7QEdY5bq5VUcD/RJlj792VwA/1wH0pCzVLl4Q9F9 +ex7En5r7rHR5xwX82Msc+Rq9dSyO +=7MrZ +-----END PGP PUBLIC KEY BLOCK-----` + +const dsaKeyWithSHA512 = `9901a2044f04b07f110400db244efecc7316553ee08d179972aab87bb1214de7692593fcf5b6feb1c80fba268722dd464748539b85b81d574cd2d7ad0ca2444de4d849b8756bad7768c486c83a824f9bba4af773d11742bdfb4ac3b89ef8cc9452d4aad31a37e4b630d33927bff68e879284a1672659b8b298222fc68f370f3e24dccacc4a862442b9438b00a0ea444a24088dc23e26df7daf8f43cba3bffc4fe703fe3d6cd7fdca199d54ed8ae501c30e3ec7871ea9cdd4cf63cfe6fc82281d70a5b8bb493f922cd99fba5f088935596af087c8d818d5ec4d0b9afa7f070b3d7c1dd32a84fca08d8280b4890c8da1dde334de8e3cad8450eed2a4a4fcc2db7b8e5528b869a74a7f0189e11ef097ef1253582348de072bb07a9fa8ab838e993cef0ee203ff49298723e2d1f549b00559f886cd417a41692ce58d0ac1307dc71d85a8af21b0cf6eaa14baf2922d3a70389bedf17cc514ba0febbd107675a372fe84b90162a9e88b14d4b1c6be855b96b33fb198c46f058568817780435b6936167ebb3724b680f32bf27382ada2e37a879b3d9de2abe0c3f399350afd1ad438883f4791e2e3b4184453412068617368207472756e636174696f6e207465737488620413110a002205024f04b07f021b03060b090807030206150802090a0b0416020301021e01021780000a0910ef20e0cefca131581318009e2bf3bf047a44d75a9bacd00161ee04d435522397009a03a60d51bd8a568c6c021c8d7cf1be8d990d6417b0020003` + +const unknownHashFunctionHex = 
`8a00000040040001990006050253863c24000a09103b4fe6acc0b21f32ffff0101010101010101010101010101010101010101010101010101010101010101010101010101` + +const rsaSignatureBadMPIlength = `8a00000040040001030006050253863c24000a09103b4fe6acc0b21f32ffff0101010101010101010101010101010101010101010101010101010101010101010101010101` + +const missingHashFunctionHex = `8a00000040040001030006050253863c24000a09103b4fe6acc0b21f32ffff0101010101010101010101010101010101010101010101010101010101010101010101010101` + +const campbellQuine = `a0b001000300fcffa0b001000d00f2ff000300fcffa0b001000d00f2ff8270a01c00000500faff8270a01c00000500faff000500faff001400ebff8270a01c00000500faff000500faff001400ebff428821c400001400ebff428821c400001400ebff428821c400001400ebff428821c400001400ebff428821c400000000ffff000000ffff000b00f4ff428821c400000000ffff000000ffff000b00f4ff0233214c40000100feff000233214c40000100feff0000` + +const keyV4forVerifyingSignedMessageV3 = `-----BEGIN PGP PUBLIC KEY BLOCK----- +Comment: GPGTools - https://gpgtools.org + +mI0EVfxoFQEEAMBIqmbDfYygcvP6Phr1wr1XI41IF7Qixqybs/foBF8qqblD9gIY +BKpXjnBOtbkcVOJ0nljd3/sQIfH4E0vQwK5/4YRQSI59eKOqd6Fx+fWQOLG+uu6z +tewpeCj9LLHvibx/Sc7VWRnrznia6ftrXxJ/wHMezSab3tnGC0YPVdGNABEBAAG0 +JEdvY3J5cHRvIFRlc3QgS2V5IDx0aGVtYXhAZ21haWwuY29tPoi5BBMBCgAjBQJV +/GgVAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQeXnQmhdGW9PFVAP+ +K7TU0qX5ArvIONIxh/WAweyOk884c5cE8f+3NOPOOCRGyVy0FId5A7MmD5GOQh4H +JseOZVEVCqlmngEvtHZb3U1VYtVGE5WZ+6rQhGsMcWP5qaT4soYwMBlSYxgYwQcx +YhN9qOr292f9j2Y//TTIJmZT4Oa+lMxhWdqTfX+qMgG4jQRV/GgVAQQArhFSiij1 +b+hT3dnapbEU+23Z1yTu1DfF6zsxQ4XQWEV3eR8v+8mEDDNcz8oyyF56k6UQ3rXi +UMTIwRDg4V6SbZmaFbZYCOwp/EmXJ3rfhm7z7yzXj2OFN22luuqbyVhuL7LRdB0M +pxgmjXb4tTvfgKd26x34S+QqUJ7W6uprY4sAEQEAAYifBBgBCgAJBQJV/GgVAhsM +AAoJEHl50JoXRlvT7y8D/02ckx4OMkKBZo7viyrBw0MLG92i+DC2bs35PooHR6zz +786mitjOp5z2QWNLBvxC70S0qVfCIz8jKupO1J6rq6Z8CcbLF3qjm6h1omUBf8Nd +EfXKD2/2HV6zMKVknnKzIEzauh+eCKS2CeJUSSSryap/QLVAjRnckaES/OsEWhNB +=RZia +-----END PGP PUBLIC KEY BLOCK----- +` + +const 
signedMessageV3 = `-----BEGIN PGP MESSAGE----- +Comment: GPGTools - https://gpgtools.org + +owGbwMvMwMVYWXlhlrhb9GXG03JJDKF/MtxDMjKLFYAoUaEktbhEITe1uDgxPVWP +q5NhKjMrWAVcC9evD8z/bF/uWNjqtk/X3y5/38XGRQHm/57rrDRYuGnTw597Xqka +uM3137/hH3Os+Jf2dc0fXOITKwJvXJvecPVs0ta+Vg7ZO1MLn8w58Xx+6L58mbka +DGHyU9yTueZE8D+QF/Tz28Y78dqtF56R1VPn9Xw4uJqrWYdd7b3vIZ1V6R4Nh05d +iT57d/OhWwA= +=hG7R +-----END PGP MESSAGE----- +` + +// https://mailarchive.ietf.org/arch/msg/openpgp/9SheW_LENE0Kxf7haNllovPyAdY/ +const v5PrivKey = `-----BEGIN PGP PRIVATE KEY BLOCK----- + +lGEFXJH05BYAAAAtCSsGAQQB2kcPAQEHQFhZlVcVVtwf+21xNQPX+ecMJJBL0MPd +fj75iux+my8QAAAAAAAiAQCHZ1SnSUmWqxEsoI6facIVZQu6mph3cBFzzTvcm5lA +Ng5ctBhlbW1hLmdvbGRtYW5AZXhhbXBsZS5uZXSIlgUTFggASCIhBRk0e8mHJGQC +X5nfPsLgAA7ZiEiS4fez6kyUAJFZVptUBQJckfTkAhsDBQsJCAcCAyICAQYVCgkI +CwIEFgIDAQIeBwIXgAAA9cAA/jiR3yMsZMeEQ40u6uzEoXa6UXeV/S3wwJAXRJy9 +M8s0AP9vuL/7AyTfFXwwzSjDnYmzS0qAhbLDQ643N+MXGBJ2BZxmBVyR9OQSAAAA +MgorBgEEAZdVAQUBAQdA+nysrzml2UCweAqtpDuncSPlvrcBWKU0yfU0YvYWWAoD +AQgHAAAAAAAiAP9OdAPppjU1WwpqjIItkxr+VPQRT8Zm/Riw7U3F6v3OiBFHiHoF +GBYIACwiIQUZNHvJhyRkAl+Z3z7C4AAO2YhIkuH3s+pMlACRWVabVAUCXJH05AIb +DAAAOSQBAP4BOOIR/sGLNMOfeb5fPs/02QMieoiSjIBnijhob2U5AQC+RtOHCHx7 +TcIYl5/Uyoi+FOvPLcNw4hOv2nwUzSSVAw== +=IiS2 +-----END PGP PRIVATE KEY BLOCK-----` + +// Generated with the above private key +const v5PrivKeyMsg = `-----BEGIN PGP MESSAGE----- +Version: OpenPGP.js v4.10.7 +Comment: https://openpgpjs.org + +xA0DAQoWGTR7yYckZAIByxF1B21zZy50eHRfbIGSdGVzdMJ3BQEWCgAGBQJf +bIGSACMiIQUZNHvJhyRkAl+Z3z7C4AAO2YhIkuH3s+pMlACRWVabVDQvAP9G +y29VPonFXqi2zKkpZrvyvZxg+n5e8Nt9wNbuxeCd3QD/TtO2s+JvjrE4Siwv +UQdl5MlBka1QSNbMq2Bz7XwNPg4= +=6lbM +-----END PGP MESSAGE-----` + +const keyWithExpiredCrossSig = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +xsDNBF2lnPIBDAC5cL9PQoQLTMuhjbYvb4Ncuuo0bfmgPRFywX53jPhoFf4Zg6mv +/seOXpgecTdOcVttfzC8ycIKrt3aQTiwOG/ctaR4Bk/t6ayNFfdUNxHWk4WCKzdz +/56fW2O0F23qIRd8UUJp5IIlN4RDdRCtdhVQIAuzvp2oVy/LaS2kxQoKvph/5pQ/ 
+5whqsyroEWDJoSV0yOb25B/iwk/pLUFoyhDG9bj0kIzDxrEqW+7Ba8nocQlecMF3 +X5KMN5kp2zraLv9dlBBpWW43XktjcCZgMy20SouraVma8Je/ECwUWYUiAZxLIlMv +9CurEOtxUw6N3RdOtLmYZS9uEnn5y1UkF88o8Nku890uk6BrewFzJyLAx5wRZ4F0 +qV/yq36UWQ0JB/AUGhHVPdFf6pl6eaxBwT5GXvbBUibtf8YI2og5RsgTWtXfU7eb +SGXrl5ZMpbA6mbfhd0R8aPxWfmDWiIOhBufhMCvUHh1sApMKVZnvIff9/0Dca3wb +vLIwa3T4CyshfT0AEQEAAc0hQm9iIEJhYmJhZ2UgPGJvYkBvcGVucGdwLmV4YW1w +bGU+wsEABBMBCgATBYJeO2eVAgsJAxUICgKbAQIeAQAhCRD7/MgqAV5zMBYhBNGm +bhojsYLJmA94jPv8yCoBXnMwKWUMAJ3FKZfJ2mXvh+GFqgymvK4NoKkDRPB0CbUN +aDdG7ZOizQrWXo7Da2MYIZ6eZUDqBKLdhZ5gZfVnisDfu/yeCgpENaKib1MPHpA8 +nZQjnPejbBDomNqY8HRzr5jvXNlwywBpjWGtegCKUY9xbSynjbfzIlMrWL4S+Rfl ++bOOQKRyYJWXmECmVyqY8cz2VUYmETjNcwC8VCDUxQnhtcCJ7Aej22hfYwVEPb/J +BsJBPq8WECCiGfJ9Y2y6TF+62KzG9Kfs5hqUeHhQy8V4TSi479ewwL7DH86XmIIK +chSANBS+7iyMtctjNZfmF9zYdGJFvjI/mbBR/lK66E515Inuf75XnL8hqlXuwqvG +ni+i03Aet1DzULZEIio4uIU6ioc1lGO9h7K2Xn4S7QQH1QoISNMWqXibUR0RCGjw +FsEDTt2QwJl8XXxoJCooM7BCcCQo+rMNVUHDjIwrdoQjPld3YZsUQQRcqH6bLuln +cfn5ufl8zTGWKydoj/iTz8KcjZ7w187AzQRdpZzyAQwA1jC/XGxjK6ddgrRfW9j+ +s/U00++EvIsgTs2kr3Rg0GP7FLWV0YNtR1mpl55/bEl7yAxCDTkOgPUMXcaKlnQh +6zrlt6H53mF6Bvs3inOHQvOsGtU0dqvb1vkTF0juLiJgPlM7pWv+pNQ6IA39vKoQ +sTMBv4v5vYNXP9GgKbg8inUNT17BxzZYHfw5+q63ectgDm2on1e8CIRCZ76oBVwz +dkVxoy3gjh1eENlk2D4P0uJNZzF1Q8GV67yLANGMCDICE/OkWn6daipYDzW4iJQt +YPUWP4hWhjdm+CK+hg6IQUEn2Vtvi16D2blRP8BpUNNa4fNuylWVuJV76rIHvsLZ +1pbM3LHpRgE8s6jivS3Rz3WRs0TmWCNnvHPqWizQ3VTy+r3UQVJ5AmhJDrZdZq9i +aUIuZ01PoE1+CHiJwuxPtWvVAxf2POcm1M/F1fK1J0e+lKlQuyonTXqXR22Y41wr +fP2aPk3nPSTW2DUAf3vRMZg57ZpRxLEhEMxcM4/LMR+PABEBAAHCwrIEGAEKAAkF +gl8sAVYCmwIB3QkQ+/zIKgFeczDA+qAEGQEKAAwFgl47Z5UFgwB4TOAAIQkQfC+q +Tfk8N7IWIQQd3OFfCSF87i87N2B8L6pN+Tw3st58C/0exp0X2U4LqicSHEOSqHZj +jiysdqIELHGyo5DSPv92UFPp36aqjF9OFgtNNwSa56fmAVCD4+hor/fKARRIeIjF +qdIC5Y/9a4B10NQFJa5lsvB38x/d39LI2kEoglZnqWgdJskROo3vNQF4KlIcm6FH +dn4WI8UkC5oUUcrpZVMSKoacIaxLwqnXT42nIVgYYuqrd/ZagZZjG5WlrTOd5+NI +zi/l0fWProcPHGLjmAh4Thu8i7omtVw1nQaMnq9I77ffg3cPDgXknYrLL+q8xXh/ 
+0mEJyIhnmPwllWCSZuLv9DrD5pOexFfdlwXhf6cLzNpW6QhXD/Tf5KrqIPr9aOv8 +9xaEEXWh0vEby2kIsI2++ft+vfdIyxYw/wKqx0awTSnuBV1rG3z1dswX4BfoY66x +Bz3KOVqlz9+mG/FTRQwrgPvR+qgLCHbuotxoGN7fzW+PI75hQG5JQAqhsC9sHjQH +UrI21/VUNwzfw3v5pYsWuFb5bdQ3ASJetICQiMy7IW8WIQTRpm4aI7GCyZgPeIz7 +/MgqAV5zMG6/C/wLpPl/9e6Hf5wmXIUwpZNQbNZvpiCcyx9sXsHXaycOQVxn3McZ +nYOUP9/mobl1tIeDQyTNbkxWjU0zzJl8XQsDZerb5098pg+x7oGIL7M1vn5s5JMl +owROourqF88JEtOBxLMxlAM7X4hB48xKQ3Hu9hS1GdnqLKki4MqRGl4l5FUwyGOM +GjyS3TzkfiDJNwQxybQiC9n57ij20ieNyLfuWCMLcNNnZUgZtnF6wCctoq/0ZIWu +a7nvuA/XC2WW9YjEJJiWdy5109pqac+qWiY11HWy/nms4gpMdxVpT0RhrKGWq4o0 +M5q3ZElOoeN70UO3OSbU5EVrG7gB1GuwF9mTHUVlV0veSTw0axkta3FGT//XfSpD +lRrCkyLzwq0M+UUHQAuYpAfobDlDdnxxOD2jm5GyTzak3GSVFfjW09QFVO6HlGp5 +01/jtzkUiS6nwoHHkfnyn0beZuR8X6KlcrzLB0VFgQFLmkSM9cSOgYhD0PTu9aHb +hW1Hj9AO8lzggBQ= +=Nt+N +-----END PGP PUBLIC KEY BLOCK----- +` + +const sigFromKeyWithExpiredCrossSig = `-----BEGIN PGP SIGNATURE----- + +wsDzBAABCgAGBYJfLAFsACEJEHwvqk35PDeyFiEEHdzhXwkhfO4vOzdgfC+qTfk8 +N7KiqwwAts4QGB7v9bABCC2qkTxJhmStC0wQMcHRcjL/qAiVnmasQWmvE9KVsdm3 +AaXd8mIx4a37/RRvr9dYrY2eE4uw72cMqPxNja2tvVXkHQvk1oEUqfkvbXs4ypKI +NyeTWjXNOTZEbg0hbm3nMy+Wv7zgB1CEvAsEboLDJlhGqPcD+X8a6CJGrBGUBUrv +KVmZr3U6vEzClz3DBLpoddCQseJRhT4YM1nKmBlZ5quh2LFgTSpajv5OsZheqt9y +EZAPbqmLhDmWRQwGzkWHKceKS7nZ/ox2WK6OS7Ob8ZGZkM64iPo6/EGj5Yc19vQN +AGiIaPEGszBBWlOpHTPhNm0LB0nMWqqaT87oNYwP8CQuuxDb6rKJ2lffCmZH27Lb +UbQZcH8J+0UhpeaiadPZxH5ATJAcenmVtVVMLVOFnm+eIlxzov9ntpgGYt8hLdXB +ITEG9mMgp3TGS9ZzSifMZ8UGtHdp9QdBg8NEVPFzDOMGxpc/Bftav7RRRuPiAER+ +7A5CBid5 +=aQkm +-----END PGP SIGNATURE----- +` + +const signedMessageWithCriticalNotation = `-----BEGIN PGP MESSAGE----- + +owGbwMvMwMH4oOW7S46CznTG09xJDDE3Wl1KUotLuDousDAwcjBYiSmyXL+48d6x +U1PSGUxcj8IUszKBVMpMaWAAAgEGZpAeh9SKxNyCnFS95PzcytRiBi5OAZjyXXzM +f8WYLqv7TXP61Sa4rqT12CI3xaN73YS2pt089f96odCKaEPnWJ3iSGmzJaW/ug10 +2Zo8Wj2k4s7t8wt4H3HtTu+y5UZfV3VOO+l//sdE/o+Lsub8FZH7/eOq7OnbNp4n +vwjE8mqJXetNMfj8r2SCyvkEnlVRYR+/mnge+ib56FdJ8uKtqSxyvgA= +=fRXs +-----END PGP 
MESSAGE-----` + +const criticalNotationSigner = `-----BEGIN PGP PUBLIC KEY BLOCK----- + +mI0EUmEvTgEEANyWtQQMOybQ9JltDqmaX0WnNPJeLILIM36sw6zL0nfTQ5zXSS3+ +fIF6P29lJFxpblWk02PSID5zX/DYU9/zjM2xPO8Oa4xo0cVTOTLj++Ri5mtr//f5 +GLsIXxFrBJhD/ghFsL3Op0GXOeLJ9A5bsOn8th7x6JucNKuaRB6bQbSPABEBAAG0 +JFRlc3QgTWNUZXN0aW5ndG9uIDx0ZXN0QGV4YW1wbGUuY29tPoi5BBMBAgAjBQJS +YS9OAhsvBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQSmNhOk1uQJQwDAP6 +AgrTyqkRlJVqz2pb46TfbDM2TDF7o9CBnBzIGoxBhlRwpqALz7z2kxBDmwpQa+ki +Bq3jZN/UosY9y8bhwMAlnrDY9jP1gdCo+H0sD48CdXybblNwaYpwqC8VSpDdTndf +9j2wE/weihGp/DAdy/2kyBCaiOY1sjhUfJ1GogF49rC4jQRSYS9OAQQA6R/PtBFa +JaT4jq10yqASk4sqwVMsc6HcifM5lSdxzExFP74naUMMyEsKHP53QxTF0Grqusag +Qg/ZtgT0CN1HUM152y7ACOdp1giKjpMzOTQClqCoclyvWOFB+L/SwGEIJf7LSCEr +woBuJifJc8xAVr0XX0JthoW+uP91eTQ3XpsAEQEAAYkBPQQYAQIACQUCUmEvTgIb +LgCoCRBKY2E6TW5AlJ0gBBkBAgAGBQJSYS9OAAoJEOCE90RsICyXuqIEANmmiRCA +SF7YK7PvFkieJNwzeK0V3F2lGX+uu6Y3Q/Zxdtwc4xR+me/CSBmsURyXTO29OWhP +GLszPH9zSJU9BdDi6v0yNprmFPX/1Ng0Abn/sCkwetvjxC1YIvTLFwtUL/7v6NS2 +bZpsUxRTg9+cSrMWWSNjiY9qUKajm1tuzPDZXAUEAMNmAN3xXN/Kjyvj2OK2ck0X +W748sl/tc3qiKPMJ+0AkMF7Pjhmh9nxqE9+QCEl7qinFqqBLjuzgUhBU4QlwX1GD +AtNTq6ihLMD5v1d82ZC7tNatdlDMGWnIdvEMCv2GZcuIqDQ9rXWs49e7tq1NncLY +hz3tYjKhoFTKEIq3y3Pp +=h/aX +-----END PGP PUBLIC KEY BLOCK-----` diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/s2k/s2k.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/s2k/s2k.go similarity index 50% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/s2k/s2k.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/s2k/s2k.go index f53244a1c7b..d0b858345fa 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/s2k/s2k.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/s2k/s2k.go @@ -4,13 +4,7 @@ // Package s2k implements the various OpenPGP string-to-key transforms as // specified in RFC 4800 section 3.7.1. 
-// -// Deprecated: this package is unmaintained except for security fixes. New -// applications should consider a more focused, modern alternative to OpenPGP -// for their specific task. If you are required to interoperate with OpenPGP -// systems and need a maintained package, consider a community fork. -// See https://golang.org/issue/44226. -package s2k // import "golang.org/x/crypto/openpgp/s2k" +package s2k // import "github.com/ProtonMail/go-crypto/openpgp/s2k" import ( "crypto" @@ -18,49 +12,67 @@ import ( "io" "strconv" - "golang.org/x/crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" ) // Config collects configuration parameters for s2k key-stretching -// transformatioms. A nil *Config is valid and results in all default +// transformations. A nil *Config is valid and results in all default // values. Currently, Config is used only by the Serialize function in // this package. type Config struct { + // S2KMode is the mode of s2k function. + // It can be 0 (simple), 1(salted), 3(iterated) + // 2(reserved) 100-110(private/experimental). + S2KMode uint8 // Hash is the default hash function to be used. If - // nil, SHA1 is used. + // nil, SHA256 is used. Hash crypto.Hash // S2KCount is only used for symmetric encryption. It // determines the strength of the passphrase stretching when // the said passphrase is hashed to produce a key. S2KCount - // should be between 1024 and 65011712, inclusive. If Config - // is nil or S2KCount is 0, the value 65536 used. Not all + // should be between 65536 and 65011712, inclusive. If Config + // is nil or S2KCount is 0, the value 16777216 used. Not all // values in the above range can be represented. S2KCount will // be rounded up to the next representable value if it cannot - // be encoded exactly. When set, it is strongly encrouraged to - // use a value that is at least 65536. See RFC 4880 Section - // 3.7.1.3. + // be encoded exactly. 
See RFC 4880 Section 3.7.1.3. S2KCount int } +// Params contains all the parameters of the s2k packet +type Params struct { + // mode is the mode of s2k function. + // It can be 0 (simple), 1(salted), 3(iterated) + // 2(reserved) 100-110(private/experimental). + mode uint8 + // hashId is the ID of the hash function used in any of the modes + hashId byte + // salt is a byte array to use as a salt in hashing process + salt []byte + // countByte is used to determine how many rounds of hashing are to + // be performed in s2k mode 3. See RFC 4880 Section 3.7.1.3. + countByte byte +} + func (c *Config) hash() crypto.Hash { if c == nil || uint(c.Hash) == 0 { - // SHA1 is the historical default in this package. - return crypto.SHA1 + return crypto.SHA256 } return c.Hash } -func (c *Config) encodedCount() uint8 { +// EncodedCount get encoded count +func (c *Config) EncodedCount() uint8 { if c == nil || c.S2KCount == 0 { - return 96 // The common case. Correspoding to 65536 + return 224 // The common case. Corresponding to 16777216 } i := c.S2KCount + switch { - // Behave like GPG. Should we make 65536 the lowest value used? - case i < 1024: - i = 1024 + case i < 65536: + i = 65536 case i > 65011712: i = 65011712 } @@ -74,11 +86,11 @@ func (c *Config) encodedCount() uint8 { // if i is not in the above range (encodedCount above takes care to // pass i in the correct range). See RFC 4880 Section 3.7.7.1. func encodeCount(i int) uint8 { - if i < 1024 || i > 65011712 { + if i < 65536 || i > 65011712 { panic("count arg i outside the required range") } - for encoded := 0; encoded < 256; encoded++ { + for encoded := 96; encoded < 256; encoded++ { count := decodeCount(uint8(encoded)) if count >= i { return uint8(encoded) @@ -157,9 +169,44 @@ func Iterated(out []byte, h hash.Hash, in []byte, salt []byte, count int) { } } +// Generate generates valid parameters from given configuration. 
+// It will enforce salted + hashed s2k method +func Generate(rand io.Reader, c *Config) (*Params, error) { + hashId, ok := algorithm.HashToHashId(c.Hash) + if !ok { + return nil, errors.UnsupportedError("no such hash") + } + + params := &Params{ + mode: 3, // Enforce iterated + salted method + hashId: hashId, + salt: make([]byte, 8), + countByte: c.EncodedCount(), + } + + if _, err := io.ReadFull(rand, params.salt); err != nil { + return nil, err + } + + return params, nil +} + // Parse reads a binary specification for a string-to-key transformation from r -// and returns a function which performs that transform. +// and returns a function which performs that transform. If the S2K is a special +// GNU extension that indicates that the private key is missing, then the error +// returned is errors.ErrDummyPrivateKey. func Parse(r io.Reader) (f func(out, in []byte), err error) { + params, err := ParseIntoParams(r) + if err != nil { + return nil, err + } + + return params.Function() +} + +// ParseIntoParams reads a binary specification for a string-to-key +// transformation from r and returns a struct describing the s2k parameters. +func ParseIntoParams(r io.Reader) (params *Params, err error) { var buf [9]byte _, err = io.ReadFull(r, buf[:2])
See + // https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS;h=fe55ae16ab4e26d8356dc574c9e8bc935e71aef1;hb=23191d7851eae2217ecdac6484349849a24fd94a#l1109 + if _, err = io.ReadFull(r, buf[:4]); err != nil { + return nil, err + } + if buf[0] == 'G' && buf[1] == 'N' && buf[2] == 'U' && buf[3] == 1 { + return params, nil + } + return nil, errors.UnsupportedError("GNU S2K extension") + } + + return nil, errors.UnsupportedError("S2K function") +} + +func (params *Params) Dummy() bool { + return params != nil && params.mode == 101 +} + +func (params *Params) Function() (f func(out, in []byte), err error) { + if params.Dummy() { + return nil, errors.ErrDummyPrivateKey("dummy key found") + } + hashObj, ok := algorithm.HashIdToHashWithSha1(params.hashId) if !ok { - return nil, errors.UnsupportedError("hash for S2K function: " + strconv.Itoa(int(buf[1]))) + return nil, errors.UnsupportedError("hash for S2K function: " + strconv.Itoa(int(params.hashId))) } - if !hash.Available() { - return nil, errors.UnsupportedError("hash not available: " + strconv.Itoa(int(hash))) + if !hashObj.Available() { + return nil, errors.UnsupportedError("hash not available: " + strconv.Itoa(int(hashObj))) } - h := hash.New() - switch buf[0] { + switch params.mode { case 0: f := func(out, in []byte) { - Simple(out, h, in) + Simple(out, hashObj.New(), in) } + return f, nil case 1: - _, err = io.ReadFull(r, buf[:8]) - if err != nil { - return - } f := func(out, in []byte) { - Salted(out, h, in, buf[:8]) + Salted(out, hashObj.New(), in, params.salt) } + return f, nil case 3: - _, err = io.ReadFull(r, buf[:9]) - if err != nil { - return - } - count := decodeCount(buf[8]) f := func(out, in []byte) { - Iterated(out, h, in, buf[:8], count) + Iterated(out, hashObj.New(), in, params.salt, decodeCount(params.countByte)) } + return f, nil } return nil, errors.UnsupportedError("S2K function") } +func (params *Params) Serialize(w io.Writer) (err error) { + if _, err = 
w.Write([]byte{params.mode}); err != nil { + return + } + if _, err = w.Write([]byte{params.hashId}); err != nil { + return + } + if params.Dummy() { + _, err = w.Write(append([]byte("GNU"), 1)) + return + } + if params.mode > 0 { + if _, err = w.Write(params.salt); err != nil { + return + } + if params.mode == 3 { + _, err = w.Write([]byte{params.countByte}) + } + } + return +} + // Serialize salts and stretches the given passphrase and writes the // resulting key into key. It also serializes an S2K descriptor to // w. The key stretching can be configured with c, which may be // nil. In that case, sensible defaults will be used. func Serialize(w io.Writer, key []byte, rand io.Reader, passphrase []byte, c *Config) error { - var buf [11]byte - buf[0] = 3 /* iterated and salted */ - buf[1], _ = HashToHashId(c.hash()) - salt := buf[2:10] - if _, err := io.ReadFull(rand, salt); err != nil { + params, err := Generate(rand, c) + if err != nil { return err } - encodedCount := c.encodedCount() - count := decodeCount(encodedCount) - buf[10] = encodedCount - if _, err := w.Write(buf[:]); err != nil { + err = params.Serialize(w) + if err != nil { return err } - Iterated(key, c.hash().New(), passphrase, salt, count) - return nil -} - -// hashToHashIdMapping contains pairs relating OpenPGP's hash identifier with -// Go's crypto.Hash type. See RFC 4880, section 9.4. -var hashToHashIdMapping = []struct { - id byte - hash crypto.Hash - name string -}{ - {1, crypto.MD5, "MD5"}, - {2, crypto.SHA1, "SHA1"}, - {3, crypto.RIPEMD160, "RIPEMD160"}, - {8, crypto.SHA256, "SHA256"}, - {9, crypto.SHA384, "SHA384"}, - {10, crypto.SHA512, "SHA512"}, - {11, crypto.SHA224, "SHA224"}, -} - -// HashIdToHash returns a crypto.Hash which corresponds to the given OpenPGP -// hash id. 
-func HashIdToHash(id byte) (h crypto.Hash, ok bool) { - for _, m := range hashToHashIdMapping { - if m.id == id { - return m.hash, true - } - } - return 0, false -} - -// HashIdToString returns the name of the hash function corresponding to the -// given OpenPGP hash id. -func HashIdToString(id byte) (name string, ok bool) { - for _, m := range hashToHashIdMapping { - if m.id == id { - return m.name, true - } - } - - return "", false -} - -// HashToHashId returns an OpenPGP hash id which corresponds the given Hash. -func HashToHashId(h crypto.Hash) (id byte, ok bool) { - for _, m := range hashToHashIdMapping { - if m.hash == h { - return m.id, true - } + f, err := params.Function() + if err != nil { + return err } - return 0, false + f(key, passphrase) + return nil } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/write.go b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/write.go similarity index 51% rename from .ci/providerlint/vendor/golang.org/x/crypto/openpgp/write.go rename to .ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/write.go index b89d48b81d7..b3ae72f7bbd 100644 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/write.go +++ b/.ci/providerlint/vendor/github.com/ProtonMail/go-crypto/openpgp/write.go @@ -11,10 +11,10 @@ import ( "strconv" "time" - "golang.org/x/crypto/openpgp/armor" - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/packet" - "golang.org/x/crypto/openpgp/s2k" + "github.com/ProtonMail/go-crypto/openpgp/armor" + "github.com/ProtonMail/go-crypto/openpgp/errors" + "github.com/ProtonMail/go-crypto/openpgp/internal/algorithm" + "github.com/ProtonMail/go-crypto/openpgp/packet" ) // DetachSign signs message with the private key from signer (which must @@ -60,27 +60,31 @@ func armoredDetachSign(w io.Writer, signer *Entity, message io.Reader, sigType p } func detachSign(w io.Writer, signer *Entity, message io.Reader, sigType packet.SignatureType, config *packet.Config) 
(err error) { - if signer.PrivateKey == nil { + signingKey, ok := signer.SigningKeyById(config.Now(), config.SigningKey()) + if !ok { + return errors.InvalidArgumentError("no valid signing keys") + } + if signingKey.PrivateKey == nil { return errors.InvalidArgumentError("signing key doesn't have a private key") } - if signer.PrivateKey.Encrypted { + if signingKey.PrivateKey.Encrypted { return errors.InvalidArgumentError("signing key is encrypted") } + if _, ok := algorithm.HashToHashId(config.Hash()); !ok { + return errors.InvalidArgumentError("invalid hash function") + } - sig := new(packet.Signature) - sig.SigType = sigType - sig.PubKeyAlgo = signer.PrivateKey.PubKeyAlgo - sig.Hash = config.Hash() - sig.CreationTime = config.Now() - sig.IssuerKeyId = &signer.PrivateKey.KeyId + sig := createSignaturePacket(signingKey.PublicKey, sigType, config) h, wrappedHash, err := hashForSignature(sig.Hash, sig.SigType) if err != nil { return } - io.Copy(wrappedHash, message) + if _, err = io.Copy(wrappedHash, message); err != nil { + return err + } - err = sig.Sign(h, signer.PrivateKey, config) + err = sig.Sign(h, signingKey.PrivateKey, config) if err != nil { return } @@ -115,18 +119,24 @@ func SymmetricallyEncrypt(ciphertext io.Writer, passphrase []byte, hints *FileHi if err != nil { return } - w, err := packet.SerializeSymmetricallyEncrypted(ciphertext, config.Cipher(), key, config) + + var w io.WriteCloser + cipherSuite := packet.CipherSuite{ + Cipher: config.Cipher(), + Mode: config.AEAD().Mode(), + } + w, err = packet.SerializeSymmetricallyEncrypted(ciphertext, config.Cipher(), config.AEAD() != nil, cipherSuite, key, config) if err != nil { return } - literaldata := w + literalData := w if algo := config.Compression(); algo != packet.CompressionNone { var compConfig *packet.CompressionConfig if config != nil { compConfig = config.CompressionConfig } - literaldata, err = packet.SerializeCompressed(w, algo, compConfig) + literalData, err = packet.SerializeCompressed(w, 
algo, compConfig) if err != nil { return } @@ -136,7 +146,7 @@ func SymmetricallyEncrypt(ciphertext io.Writer, passphrase []byte, hints *FileHi if !hints.ModTime.IsZero() { epochSeconds = uint32(hints.ModTime.Unix()) } - return packet.SerializeLiteral(literaldata, hints.IsBinary, hints.FileName, epochSeconds) + return packet.SerializeLiteral(literalData, hints.IsBinary, hints.FileName, epochSeconds) } // intersectPreferences mutates and returns a prefix of a that contains only @@ -156,23 +166,76 @@ func intersectPreferences(a []uint8, b []uint8) (intersection []uint8) { return a[:j] } +// intersectPreferences mutates and returns a prefix of a that contains only +// the values in the intersection of a and b. The order of a is preserved. +func intersectCipherSuites(a [][2]uint8, b [][2]uint8) (intersection [][2]uint8) { + var j int + for _, v := range a { + for _, v2 := range b { + if v[0] == v2[0] && v[1] == v2[1] { + a[j] = v + j++ + break + } + } + } + + return a[:j] +} + func hashToHashId(h crypto.Hash) uint8 { - v, ok := s2k.HashToHashId(h) + v, ok := algorithm.HashToHashId(h) if !ok { panic("tried to convert unknown hash") } return v } +// EncryptText encrypts a message to a number of recipients and, optionally, +// signs it. Optional information is contained in 'hints', also encrypted, that +// aids the recipients in processing the message. The resulting WriteCloser +// must be closed after the contents of the file have been written. If config +// is nil, sensible defaults will be used. The signing is done in text mode. +func EncryptText(ciphertext io.Writer, to []*Entity, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { + return encrypt(ciphertext, ciphertext, to, signed, hints, packet.SigTypeText, config) +} + +// Encrypt encrypts a message to a number of recipients and, optionally, signs +// it. hints contains optional information, that is also encrypted, that aids +// the recipients in processing the message. 
The resulting WriteCloser must +// be closed after the contents of the file have been written. +// If config is nil, sensible defaults will be used. +func Encrypt(ciphertext io.Writer, to []*Entity, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { + return encrypt(ciphertext, ciphertext, to, signed, hints, packet.SigTypeBinary, config) +} + +// EncryptSplit encrypts a message to a number of recipients and, optionally, signs +// it. hints contains optional information, that is also encrypted, that aids +// the recipients in processing the message. The resulting WriteCloser must +// be closed after the contents of the file have been written. +// If config is nil, sensible defaults will be used. +func EncryptSplit(keyWriter io.Writer, dataWriter io.Writer, to []*Entity, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { + return encrypt(keyWriter, dataWriter, to, signed, hints, packet.SigTypeBinary, config) +} + +// EncryptTextSplit encrypts a message to a number of recipients and, optionally, signs +// it. hints contains optional information, that is also encrypted, that aids +// the recipients in processing the message. The resulting WriteCloser must +// be closed after the contents of the file have been written. +// If config is nil, sensible defaults will be used. +func EncryptTextSplit(keyWriter io.Writer, dataWriter io.Writer, to []*Entity, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { + return encrypt(keyWriter, dataWriter, to, signed, hints, packet.SigTypeText, config) +} + // writeAndSign writes the data as a payload package and, optionally, signs // it. hints contains optional information, that is also encrypted, // that aids the recipients in processing the message. The resulting // WriteCloser must be closed after the contents of the file have been // written. If config is nil, sensible defaults will be used. 
-func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { +func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entity, hints *FileHints, sigType packet.SignatureType, config *packet.Config) (plaintext io.WriteCloser, err error) { var signer *packet.PrivateKey if signed != nil { - signKey, ok := signed.signingKey(config.Now()) + signKey, ok := signed.SigningKeyById(config.Now(), config.SigningKey()) if !ok { return nil, errors.InvalidArgumentError("no valid signing keys") } @@ -187,7 +250,7 @@ func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entit var hash crypto.Hash for _, hashId := range candidateHashes { - if h, ok := s2k.HashIdToHash(hashId); ok && h.Available() { + if h, ok := algorithm.HashIdToHash(hashId); ok && h.Available() { hash = h break } @@ -196,7 +259,7 @@ func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entit // If the hash specified by config is a candidate, we'll use that. 
if configuredHash := config.Hash(); configuredHash.Available() { for _, hashId := range candidateHashes { - if h, ok := s2k.HashIdToHash(hashId); ok && h == configuredHash { + if h, ok := algorithm.HashIdToHash(hashId); ok && h == configuredHash { hash = h break } @@ -205,7 +268,7 @@ func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entit if hash == 0 { hashId := candidateHashes[0] - name, ok := s2k.HashIdToString(hashId) + name, ok := algorithm.HashIdToString(hashId) if !ok { name = "#" + strconv.Itoa(int(hashId)) } @@ -214,7 +277,7 @@ func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entit if signer != nil { ops := &packet.OnePassSignature{ - SigType: packet.SigTypeBinary, + SigType: sigType, Hash: hash, PubKeyAlgo: signer.PubKeyAlgo, KeyId: signer.KeyId, @@ -247,68 +310,108 @@ func writeAndSign(payload io.WriteCloser, candidateHashes []uint8, signed *Entit } if signer != nil { - return signatureWriter{payload, literalData, hash, hash.New(), signer, config}, nil + h, wrappedHash, err := hashForSignature(hash, sigType) + if err != nil { + return nil, err + } + metadata := &packet.LiteralData{ + Format: 't', + FileName: hints.FileName, + Time: epochSeconds, + } + if hints.IsBinary { + metadata.Format = 'b' + } + return signatureWriter{payload, literalData, hash, wrappedHash, h, signer, sigType, config, metadata}, nil } return literalData, nil } -// Encrypt encrypts a message to a number of recipients and, optionally, signs +// encrypt encrypts a message to a number of recipients and, optionally, signs // it. hints contains optional information, that is also encrypted, that aids // the recipients in processing the message. The resulting WriteCloser must // be closed after the contents of the file have been written. // If config is nil, sensible defaults will be used. 
-func Encrypt(ciphertext io.Writer, to []*Entity, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { +func encrypt(keyWriter io.Writer, dataWriter io.Writer, to []*Entity, signed *Entity, hints *FileHints, sigType packet.SignatureType, config *packet.Config) (plaintext io.WriteCloser, err error) { if len(to) == 0 { return nil, errors.InvalidArgumentError("no encryption recipient provided") } // These are the possible ciphers that we'll use for the message. candidateCiphers := []uint8{ - uint8(packet.CipherAES128), uint8(packet.CipherAES256), - uint8(packet.CipherCAST5), + uint8(packet.CipherAES128), } + // These are the possible hash functions that we'll use for the signature. candidateHashes := []uint8{ hashToHashId(crypto.SHA256), hashToHashId(crypto.SHA384), hashToHashId(crypto.SHA512), - hashToHashId(crypto.SHA1), - hashToHashId(crypto.RIPEMD160), + hashToHashId(crypto.SHA3_256), + hashToHashId(crypto.SHA3_512), + } + + // Prefer GCM if everyone supports it + candidateCipherSuites := [][2]uint8{ + {uint8(packet.CipherAES256), uint8(packet.AEADModeGCM)}, + {uint8(packet.CipherAES256), uint8(packet.AEADModeEAX)}, + {uint8(packet.CipherAES256), uint8(packet.AEADModeOCB)}, + {uint8(packet.CipherAES128), uint8(packet.AEADModeGCM)}, + {uint8(packet.CipherAES128), uint8(packet.AEADModeEAX)}, + {uint8(packet.CipherAES128), uint8(packet.AEADModeOCB)}, + } + + candidateCompression := []uint8{ + uint8(packet.CompressionNone), + uint8(packet.CompressionZIP), + uint8(packet.CompressionZLIB), } - // In the event that a recipient doesn't specify any supported ciphers - // or hash functions, these are the ones that we assume that every - // implementation supports. 
- defaultCiphers := candidateCiphers[len(candidateCiphers)-1:] - defaultHashes := candidateHashes[len(candidateHashes)-1:] encryptKeys := make([]Key, len(to)) + + // AEAD is used only if config enables it and every key supports it + aeadSupported := config.AEAD() != nil + for i := range to { var ok bool - encryptKeys[i], ok = to[i].encryptionKey(config.Now()) + encryptKeys[i], ok = to[i].EncryptionKey(config.Now()) if !ok { - return nil, errors.InvalidArgumentError("cannot encrypt a message to key id " + strconv.FormatUint(to[i].PrimaryKey.KeyId, 16) + " because it has no encryption keys") + return nil, errors.InvalidArgumentError("cannot encrypt a message to key id " + strconv.FormatUint(to[i].PrimaryKey.KeyId, 16) + " because it has no valid encryption keys") } - sig := to[i].primaryIdentity().SelfSignature - - preferredSymmetric := sig.PreferredSymmetric - if len(preferredSymmetric) == 0 { - preferredSymmetric = defaultCiphers - } - preferredHashes := sig.PreferredHash - if len(preferredHashes) == 0 { - preferredHashes = defaultHashes + sig := to[i].PrimaryIdentity().SelfSignature + if sig.SEIPDv2 == false { + aeadSupported = false } - candidateCiphers = intersectPreferences(candidateCiphers, preferredSymmetric) - candidateHashes = intersectPreferences(candidateHashes, preferredHashes) + + candidateCiphers = intersectPreferences(candidateCiphers, sig.PreferredSymmetric) + candidateHashes = intersectPreferences(candidateHashes, sig.PreferredHash) + candidateCipherSuites = intersectCipherSuites(candidateCipherSuites, sig.PreferredCipherSuites) + candidateCompression = intersectPreferences(candidateCompression, sig.PreferredCompression) } - if len(candidateCiphers) == 0 || len(candidateHashes) == 0 { - return nil, errors.InvalidArgumentError("cannot encrypt because recipient set shares no common algorithms") + // In the event that the intersection of supported algorithms is empty we use the ones + // labelled as MUST that every implementation supports. 
+ if len(candidateCiphers) == 0 { + // https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-9.3 + candidateCiphers = []uint8{uint8(packet.CipherAES128)} + } + if len(candidateHashes) == 0 { + // https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#hash-algos + candidateHashes = []uint8{hashToHashId(crypto.SHA256)} + } + if len(candidateCipherSuites) == 0 { + // https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-9.6 + candidateCipherSuites = [][2]uint8{{uint8(packet.CipherAES128), uint8(packet.AEADModeOCB)}} } cipher := packet.CipherFunction(candidateCiphers[0]) + aeadCipherSuite := packet.CipherSuite{ + Cipher: packet.CipherFunction(candidateCipherSuites[0][0]), + Mode: packet.AEADMode(candidateCipherSuites[0][1]), + } + // If the cipher specified by config is a candidate, we'll use that. configuredCipher := config.Cipher() for _, c := range candidateCiphers { @@ -325,17 +428,23 @@ func Encrypt(ciphertext io.Writer, to []*Entity, signed *Entity, hints *FileHint } for _, key := range encryptKeys { - if err := packet.SerializeEncryptedKey(ciphertext, key.PublicKey, cipher, symKey, config); err != nil { + if err := packet.SerializeEncryptedKey(keyWriter, key.PublicKey, cipher, symKey, config); err != nil { return nil, err } } - payload, err := packet.SerializeSymmetricallyEncrypted(ciphertext, cipher, symKey, config) + var payload io.WriteCloser + payload, err = packet.SerializeSymmetricallyEncrypted(dataWriter, cipher, aeadSupported, aeadCipherSuite, symKey, config) if err != nil { - return + return + } + + payload, err = handleCompression(payload, candidateCompression, config) + if err != nil { + return nil, err } - return writeAndSign(payload, candidateHashes, signed, hints, config) + return writeAndSign(payload, candidateHashes, signed, hints, sigType, config) } // Sign signs a message. 
The resulting WriteCloser must be closed after the @@ -352,16 +461,20 @@ func Sign(output io.Writer, signed *Entity, hints *FileHints, config *packet.Con hashToHashId(crypto.SHA256), hashToHashId(crypto.SHA384), hashToHashId(crypto.SHA512), - hashToHashId(crypto.SHA1), - hashToHashId(crypto.RIPEMD160), + hashToHashId(crypto.SHA3_256), + hashToHashId(crypto.SHA3_512), } - defaultHashes := candidateHashes[len(candidateHashes)-1:] - preferredHashes := signed.primaryIdentity().SelfSignature.PreferredHash + defaultHashes := candidateHashes[0:1] + preferredHashes := signed.PrimaryIdentity().SelfSignature.PreferredHash if len(preferredHashes) == 0 { preferredHashes = defaultHashes } candidateHashes = intersectPreferences(candidateHashes, preferredHashes) - return writeAndSign(noOpCloser{output}, candidateHashes, signed, hints, config) + if len(candidateHashes) == 0 { + return nil, errors.InvalidArgumentError("cannot sign because signing key shares no common algorithms with candidate hashes") + } + + return writeAndSign(noOpCloser{output}, candidateHashes, signed, hints, packet.SigTypeBinary, config) } // signatureWriter hashes the contents of a message while passing it along to @@ -371,24 +484,30 @@ type signatureWriter struct { encryptedData io.WriteCloser literalData io.WriteCloser hashType crypto.Hash + wrappedHash hash.Hash h hash.Hash signer *packet.PrivateKey + sigType packet.SignatureType config *packet.Config + metadata *packet.LiteralData // V5 signatures protect document metadata } func (s signatureWriter) Write(data []byte) (int, error) { - s.h.Write(data) - return s.literalData.Write(data) + s.wrappedHash.Write(data) + switch s.sigType { + case packet.SigTypeBinary: + return s.literalData.Write(data) + case packet.SigTypeText: + flag := 0 + return writeCanonical(s.literalData, data, &flag) + } + return 0, errors.UnsupportedError("unsupported signature type: " + strconv.Itoa(int(s.sigType))) } func (s signatureWriter) Close() error { - sig := &packet.Signature{ 
- SigType: packet.SigTypeBinary, - PubKeyAlgo: s.signer.PubKeyAlgo, - Hash: s.hashType, - CreationTime: s.config.Now(), - IssuerKeyId: &s.signer.KeyId, - } + sig := createSignaturePacket(&s.signer.PublicKey, s.sigType, s.config) + sig.Hash = s.hashType + sig.Metadata = s.metadata if err := sig.Sign(s.h, s.signer, s.config); err != nil { return err @@ -402,7 +521,22 @@ func (s signatureWriter) Close() error { return s.encryptedData.Close() } -// noOpCloser is like an io.NopCloser, but for an io.Writer. +func createSignaturePacket(signer *packet.PublicKey, sigType packet.SignatureType, config *packet.Config) *packet.Signature { + sigLifetimeSecs := config.SigLifetime() + return &packet.Signature{ + Version: signer.Version, + SigType: sigType, + PubKeyAlgo: signer.PubKeyAlgo, + Hash: config.Hash(), + CreationTime: config.Now(), + IssuerKeyId: &signer.KeyId, + IssuerFingerprint: signer.Fingerprint, + Notations: config.Notations(), + SigLifetimeSecs: &sigLifetimeSecs, + } +} + +// noOpCloser is like an ioutil.NopCloser, but for an io.Writer. // TODO: we have two of these in OpenPGP packages alone. This probably needs // to be promoted somewhere more common. 
type noOpCloser struct { @@ -416,3 +550,34 @@ func (c noOpCloser) Write(data []byte) (n int, err error) { func (c noOpCloser) Close() error { return nil } + +func handleCompression(compressed io.WriteCloser, candidateCompression []uint8, config *packet.Config) (data io.WriteCloser, err error) { + data = compressed + confAlgo := config.Compression() + if confAlgo == packet.CompressionNone { + return + } + + // Set algorithm labelled as MUST as fallback + // https://www.ietf.org/archive/id/draft-ietf-openpgp-crypto-refresh-07.html#section-9.4 + finalAlgo := packet.CompressionNone + // if compression specified by config available we will use it + for _, c := range candidateCompression { + if uint8(confAlgo) == c { + finalAlgo = confAlgo + break + } + } + + if finalAlgo != packet.CompressionNone { + var compConfig *packet.CompressionConfig + if config != nil { + compConfig = config.CompressionConfig + } + data, err = packet.SerializeCompressed(compressed, finalAlgo, compConfig) + if err != nil { + return + } + } + return data, nil +} diff --git a/.ci/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/.ci/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index 0e8f4c6063f..8b35bfb89cd 100644 --- a/.ci/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/.ci/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -14,6 +14,7 @@ const ( AwsIsoPartitionID = "aws-iso" // AWS ISO (US) partition. AwsIsoBPartitionID = "aws-iso-b" // AWS ISOB (US) partition. AwsIsoEPartitionID = "aws-iso-e" // AWS ISOE (Europe) partition. + AwsIsoFPartitionID = "aws-iso-f" // AWS ISOF partition. ) // AWS Standard partition's regions. @@ -73,8 +74,11 @@ const ( // AWS ISOE (Europe) partition's regions. const () +// AWS ISOF partition's regions. 
+const () + // DefaultResolver returns an Endpoint resolver that will be able -// to resolve endpoints for: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), AWS ISOB (US), and AWS ISOE (Europe). +// to resolve endpoints for: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), AWS ISOB (US), AWS ISOE (Europe), and AWS ISOF. // // Use DefaultPartitions() to get the list of the default partitions. func DefaultResolver() Resolver { @@ -82,7 +86,7 @@ func DefaultResolver() Resolver { } // DefaultPartitions returns a list of the partitions the SDK is bundled -// with. The available partitions are: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), AWS ISOB (US), and AWS ISOE (Europe). +// with. The available partitions are: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), AWS ISOB (US), AWS ISOE (Europe), and AWS ISOF. // // partitions := endpoints.DefaultPartitions // for _, p := range partitions { @@ -99,6 +103,7 @@ var defaultPartitions = partitions{ awsisoPartition, awsisobPartition, awsisoePartition, + awsisofPartition, } // AwsPartition returns the Resolver for AWS Standard. 
@@ -1872,6 +1877,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -3572,6 +3580,15 @@ var awsPartition = partition{ }, Deprecated: boxedTrue, }, + endpointKey{ + Region: "me-central-1", + }: endpoint{}, + endpointKey{ + Region: "me-central-1", + Variant: dualStackVariant, + }: endpoint{ + Hostname: "athena.me-central-1.api.aws", + }, endpointKey{ Region: "me-south-1", }: endpoint{}, @@ -3605,6 +3622,12 @@ var awsPartition = partition{ }: endpoint{ Hostname: "athena-fips.us-east-1.amazonaws.com", }, + endpointKey{ + Region: "us-east-1", + Variant: fipsVariant | dualStackVariant, + }: endpoint{ + Hostname: "athena-fips.us-east-1.api.aws", + }, endpointKey{ Region: "us-east-2", }: endpoint{}, @@ -3620,6 +3643,12 @@ var awsPartition = partition{ }: endpoint{ Hostname: "athena-fips.us-east-2.amazonaws.com", }, + endpointKey{ + Region: "us-east-2", + Variant: fipsVariant | dualStackVariant, + }: endpoint{ + Hostname: "athena-fips.us-east-2.api.aws", + }, endpointKey{ Region: "us-west-1", }: endpoint{}, @@ -3635,6 +3664,12 @@ var awsPartition = partition{ }: endpoint{ Hostname: "athena-fips.us-west-1.amazonaws.com", }, + endpointKey{ + Region: "us-west-1", + Variant: fipsVariant | dualStackVariant, + }: endpoint{ + Hostname: "athena-fips.us-west-1.api.aws", + }, endpointKey{ Region: "us-west-2", }: endpoint{}, @@ -3650,6 +3685,12 @@ var awsPartition = partition{ }: endpoint{ Hostname: "athena-fips.us-west-2.amazonaws.com", }, + endpointKey{ + Region: "us-west-2", + Variant: fipsVariant | dualStackVariant, + }: endpoint{ + Hostname: "athena-fips.us-west-2.api.aws", + }, }, }, "auditmanager": service{ @@ -4011,15 +4052,84 @@ var awsPartition = partition{ }, "backupstorage": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "af-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-east-1", + }: endpoint{}, + 
endpointKey{ + Region: "ap-northeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-3", + }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-south-2", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-central-2", + }: endpoint{}, + endpointKey{ + Region: "eu-north-1", + }: endpoint{}, + endpointKey{ + Region: "eu-south-1", + }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, endpointKey{ Region: "eu-west-1", }: endpoint{}, + endpointKey{ + Region: "eu-west-2", + }: endpoint{}, + endpointKey{ + Region: "eu-west-3", + }: endpoint{}, + endpointKey{ + Region: "me-central-1", + }: endpoint{}, + endpointKey{ + Region: "me-south-1", + }: endpoint{}, + endpointKey{ + Region: "sa-east-1", + }: endpoint{}, endpointKey{ Region: "us-east-1", }: endpoint{}, endpointKey{ Region: "us-east-2", }: endpoint{}, + endpointKey{ + Region: "us-west-1", + }: endpoint{}, endpointKey{ Region: "us-west-2", }: endpoint{}, @@ -4065,6 +4175,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -5262,6 +5375,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -5933,6 +6049,9 @@ var awsPartition = partition{ endpointKey{ Region: "eu-north-1", }: endpoint{}, + endpointKey{ + Region: "eu-south-1", + }: endpoint{}, 
endpointKey{ Region: "eu-west-1", }: endpoint{}, @@ -6073,6 +6192,15 @@ var awsPartition = partition{ }, Deprecated: boxedTrue, }, + endpointKey{ + Region: "fips-us-west-1", + }: endpoint{ + Hostname: "cognito-identity-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "fips-us-west-2", }: endpoint{ @@ -6109,6 +6237,12 @@ var awsPartition = partition{ endpointKey{ Region: "us-west-1", }: endpoint{}, + endpointKey{ + Region: "us-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "cognito-identity-fips.us-west-1.amazonaws.com", + }, endpointKey{ Region: "us-west-2", }: endpoint{}, @@ -7562,6 +7696,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -7762,6 +7899,12 @@ var awsPartition = partition{ endpointKey{ Region: "ca-central-1", }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "devops-guru-fips.ca-central-1.amazonaws.com", + }, endpointKey{ Region: "eu-central-1", }: endpoint{}, @@ -7777,6 +7920,15 @@ var awsPartition = partition{ endpointKey{ Region: "eu-west-3", }: endpoint{}, + endpointKey{ + Region: "fips-ca-central-1", + }: endpoint{ + Hostname: "devops-guru-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "fips-us-east-1", }: endpoint{ @@ -7795,6 +7947,15 @@ var awsPartition = partition{ }, Deprecated: boxedTrue, }, + endpointKey{ + Region: "fips-us-west-1", + }: endpoint{ + Hostname: "devops-guru-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "fips-us-west-2", }: endpoint{ @@ -7828,6 +7989,12 @@ var awsPartition = partition{ endpointKey{ Region: 
"us-west-1", }: endpoint{}, + endpointKey{ + Region: "us-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "devops-guru-fips.us-west-1.amazonaws.com", + }, endpointKey{ Region: "us-west-2", }: endpoint{}, @@ -11577,6 +11744,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -12082,6 +12252,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -13796,13 +13969,6 @@ var awsPartition = partition{ }, }, "iot": service{ - Defaults: endpointDefaults{ - defaultKey{}: endpoint{ - CredentialScope: credentialScope{ - Service: "execute-api", - }, - }, - }, Endpoints: serviceEndpoints{ endpointKey{ Region: "ap-east-1", @@ -13850,45 +14016,35 @@ var awsPartition = partition{ Region: "fips-ca-central-1", }: endpoint{ Hostname: "iot-fips.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ Region: "fips-us-east-1", }: endpoint{ Hostname: "iot-fips.us-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ Region: "fips-us-east-2", }: endpoint{ Hostname: "iot-fips.us-east-2.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ Region: "fips-us-west-1", }: endpoint{ Hostname: "iot-fips.us-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ Region: "fips-us-west-2", }: endpoint{ Hostname: "iot-fips.us-west-2.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ @@ -14728,6 +14884,9 @@ var awsPartition = 
partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -14925,6 +15084,9 @@ var awsPartition = partition{ endpointKey{ Region: "eu-west-1", }: endpoint{}, + endpointKey{ + Region: "eu-west-2", + }: endpoint{}, endpointKey{ Region: "fips-us-east-1", }: endpoint{ @@ -15054,6 +15216,12 @@ var awsPartition = partition{ }: endpoint{ Hostname: "kendra-ranking.ca-central-1.api.aws", }, + endpointKey{ + Region: "ca-central-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "kendra-ranking-fips.ca-central-1.api.aws", + }, endpointKey{ Region: "eu-central-2", }: endpoint{ @@ -15104,11 +15272,23 @@ var awsPartition = partition{ }: endpoint{ Hostname: "kendra-ranking.us-east-1.api.aws", }, + endpointKey{ + Region: "us-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "kendra-ranking-fips.us-east-1.api.aws", + }, endpointKey{ Region: "us-east-2", }: endpoint{ Hostname: "kendra-ranking.us-east-2.api.aws", }, + endpointKey{ + Region: "us-east-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "kendra-ranking-fips.us-east-2.api.aws", + }, endpointKey{ Region: "us-west-1", }: endpoint{ @@ -15119,6 +15299,12 @@ var awsPartition = partition{ }: endpoint{ Hostname: "kendra-ranking.us-west-2.api.aws", }, + endpointKey{ + Region: "us-west-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "kendra-ranking-fips.us-west-2.api.aws", + }, }, }, "kinesis": service{ @@ -17371,6 +17557,9 @@ var awsPartition = partition{ }, "mediaconnect": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "af-south-1", + }: endpoint{}, endpointKey{ Region: "ap-east-1", }: endpoint{}, @@ -17389,6 +17578,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-2", }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + }: endpoint{}, endpointKey{ Region: "eu-central-1", }: endpoint{}, @@ -17740,6 +17932,55 @@ var awsPartition = partition{ }: 
endpoint{}, }, }, + "mediapackagev2": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "ap-northeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-2", + }: endpoint{}, + endpointKey{ + Region: "eu-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-north-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-2", + }: endpoint{}, + endpointKey{ + Region: "eu-west-3", + }: endpoint{}, + endpointKey{ + Region: "sa-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-2", + }: endpoint{}, + endpointKey{ + Region: "us-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-west-2", + }: endpoint{}, + }, + }, "mediastore": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -18059,6 +18300,9 @@ var awsPartition = partition{ endpointKey{ Region: "eu-west-2", }: endpoint{}, + endpointKey{ + Region: "eu-west-3", + }: endpoint{}, endpointKey{ Region: "me-central-1", }: endpoint{}, @@ -18127,6 +18371,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-south-1", }: endpoint{}, + endpointKey{ + Region: "ap-south-2", + }: endpoint{}, endpointKey{ Region: "ap-southeast-1", }: endpoint{}, @@ -18136,18 +18383,27 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, endpointKey{ Region: "eu-central-1", }: endpoint{}, + endpointKey{ + Region: "eu-central-2", + }: endpoint{}, endpointKey{ Region: "eu-north-1", }: endpoint{}, endpointKey{ Region: "eu-south-1", }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, endpointKey{ Region: "eu-west-1", }: endpoint{}, @@ -18577,6 +18833,9 @@ var 
awsPartition = partition{ endpointKey{ Region: "ap-south-1", }: endpoint{}, + endpointKey{ + Region: "ap-south-2", + }: endpoint{}, endpointKey{ Region: "ap-southeast-1", }: endpoint{}, @@ -18586,18 +18845,27 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, endpointKey{ Region: "eu-central-1", }: endpoint{}, + endpointKey{ + Region: "eu-central-2", + }: endpoint{}, endpointKey{ Region: "eu-north-1", }: endpoint{}, endpointKey{ Region: "eu-south-1", }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, endpointKey{ Region: "eu-west-1", }: endpoint{}, @@ -18871,6 +19139,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-south-1", }: endpoint{}, + endpointKey{ + Region: "ap-south-2", + }: endpoint{}, endpointKey{ Region: "ap-southeast-1", }: endpoint{}, @@ -18880,6 +19151,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -18892,12 +19166,18 @@ var awsPartition = partition{ endpointKey{ Region: "eu-central-1", }: endpoint{}, + endpointKey{ + Region: "eu-central-2", + }: endpoint{}, endpointKey{ Region: "eu-north-1", }: endpoint{}, endpointKey{ Region: "eu-south-1", }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, endpointKey{ Region: "eu-west-1", }: endpoint{}, @@ -19018,18 +19298,33 @@ var awsPartition = partition{ endpointKey{ Region: "ap-northeast-1", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, endpointKey{ Region: "ap-southeast-2", }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, + endpointKey{ + Region: "eu-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-north-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-1", + }: endpoint{}, endpointKey{ Region: 
"eu-west-2", }: endpoint{}, endpointKey{ Region: "us-east-1", }: endpoint{}, + endpointKey{ + Region: "us-east-2", + }: endpoint{}, endpointKey{ Region: "us-west-2", }: endpoint{}, @@ -20401,18 +20696,63 @@ var awsPartition = partition{ endpointKey{ Region: "ca-central-1", }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "profile-fips.ca-central-1.amazonaws.com", + }, endpointKey{ Region: "eu-central-1", }: endpoint{}, endpointKey{ Region: "eu-west-2", }: endpoint{}, + endpointKey{ + Region: "fips-ca-central-1", + }: endpoint{ + Hostname: "profile-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-east-1", + }: endpoint{ + Hostname: "profile-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-west-2", + }: endpoint{ + Hostname: "profile-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "us-east-1", }: endpoint{}, + endpointKey{ + Region: "us-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "profile-fips.us-east-1.amazonaws.com", + }, endpointKey{ Region: "us-west-2", }: endpoint{}, + endpointKey{ + Region: "us-west-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "profile-fips.us-west-2.amazonaws.com", + }, }, }, "projects.iot1click": service{ @@ -22553,6 +22893,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -24103,6 +24446,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-3", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ 
-24218,6 +24564,12 @@ var awsPartition = partition{ endpointKey{ Region: "ap-northeast-1", }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, endpointKey{ Region: "ap-southeast-1", }: endpoint{}, @@ -24242,6 +24594,9 @@ var awsPartition = partition{ endpointKey{ Region: "us-east-2", }: endpoint{}, + endpointKey{ + Region: "us-west-1", + }: endpoint{}, endpointKey{ Region: "us-west-2", }: endpoint{}, @@ -24863,33 +25218,6 @@ var awsPartition = partition{ }: endpoint{ Hostname: "servicediscovery.sa-east-1.amazonaws.com", }, - endpointKey{ - Region: "servicediscovery", - }: endpoint{ - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - Deprecated: boxedTrue, - }, - endpointKey{ - Region: "servicediscovery", - Variant: fipsVariant, - }: endpoint{ - Hostname: "servicediscovery-fips.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - Deprecated: boxedTrue, - }, - endpointKey{ - Region: "servicediscovery-fips", - }: endpoint{ - Hostname: "servicediscovery-fips.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - Deprecated: boxedTrue, - }, endpointKey{ Region: "us-east-1", }: endpoint{}, @@ -26327,6 +26655,118 @@ var awsPartition = partition{ }, }, }, + "ssm-contacts": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "ap-northeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-2", + }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-north-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-2", + }: endpoint{}, + 
endpointKey{ + Region: "eu-west-3", + }: endpoint{}, + endpointKey{ + Region: "fips-us-east-1", + }: endpoint{ + Hostname: "ssm-contacts-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-east-2", + }: endpoint{ + Hostname: "ssm-contacts-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-west-1", + }: endpoint{ + Hostname: "ssm-contacts-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-west-2", + }: endpoint{ + Hostname: "ssm-contacts-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "sa-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "ssm-contacts-fips.us-east-1.amazonaws.com", + }, + endpointKey{ + Region: "us-east-2", + }: endpoint{}, + endpointKey{ + Region: "us-east-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "ssm-contacts-fips.us-east-2.amazonaws.com", + }, + endpointKey{ + Region: "us-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "ssm-contacts-fips.us-west-1.amazonaws.com", + }, + endpointKey{ + Region: "us-west-2", + }: endpoint{}, + endpointKey{ + Region: "us-west-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "ssm-contacts-fips.us-west-2.amazonaws.com", + }, + }, + }, "ssm-incidents": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -26816,15 +27256,6 @@ var awsPartition = partition{ endpointKey{ Region: "eu-west-3", }: endpoint{}, - endpointKey{ - Region: "fips", - }: endpoint{ - Hostname: 
"storagegateway-fips.ca-central-1.amazonaws.com", - CredentialScope: credentialScope{ - Region: "ca-central-1", - }, - Deprecated: boxedTrue, - }, endpointKey{ Region: "me-central-1", }: endpoint{}, @@ -27832,12 +28263,21 @@ var awsPartition = partition{ }, "transcribestreaming": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "af-south-1", + }: endpoint{}, endpointKey{ Region: "ap-northeast-1", }: endpoint{}, endpointKey{ Region: "ap-northeast-2", }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, endpointKey{ Region: "ap-southeast-2", }: endpoint{}, @@ -27995,6 +28435,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-south-1", }: endpoint{}, + endpointKey{ + Region: "ap-south-2", + }: endpoint{}, endpointKey{ Region: "ap-southeast-1", }: endpoint{}, @@ -28016,12 +28459,18 @@ var awsPartition = partition{ endpointKey{ Region: "eu-central-1", }: endpoint{}, + endpointKey{ + Region: "eu-central-2", + }: endpoint{}, endpointKey{ Region: "eu-north-1", }: endpoint{}, endpointKey{ Region: "eu-south-1", }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, endpointKey{ Region: "eu-west-1", }: endpoint{}, @@ -28225,6 +28674,91 @@ var awsPartition = partition{ }, }, }, + "verifiedpermissions": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "af-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-east-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-3", + }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-south-2", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-4", + }: 
endpoint{}, + endpointKey{ + Region: "ca-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-central-2", + }: endpoint{}, + endpointKey{ + Region: "eu-north-1", + }: endpoint{}, + endpointKey{ + Region: "eu-south-1", + }: endpoint{}, + endpointKey{ + Region: "eu-south-2", + }: endpoint{}, + endpointKey{ + Region: "eu-west-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-2", + }: endpoint{}, + endpointKey{ + Region: "eu-west-3", + }: endpoint{}, + endpointKey{ + Region: "me-central-1", + }: endpoint{}, + endpointKey{ + Region: "me-south-1", + }: endpoint{}, + endpointKey{ + Region: "sa-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-2", + }: endpoint{}, + endpointKey{ + Region: "us-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-west-2", + }: endpoint{}, + }, + }, "voice-chime": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -30418,6 +30952,16 @@ var awscnPartition = partition{ }: endpoint{}, }, }, + "airflow": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "cn-north-1", + }: endpoint{}, + endpointKey{ + Region: "cn-northwest-1", + }: endpoint{}, + }, + }, "api.ecr": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -30607,6 +31151,16 @@ var awscnPartition = partition{ }: endpoint{}, }, }, + "backupstorage": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "cn-north-1", + }: endpoint{}, + endpointKey{ + Region: "cn-northwest-1", + }: endpoint{}, + }, + }, "batch": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -31062,6 +31616,16 @@ var awscnPartition = partition{ }: endpoint{}, }, }, + "emr-serverless": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "cn-north-1", + }: endpoint{}, + endpointKey{ + Region: "cn-northwest-1", + }: endpoint{}, + }, + }, "es": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -31253,13 +31817,6 @@ var awscnPartition = 
partition{ }, }, "iot": service{ - Defaults: endpointDefaults{ - defaultKey{}: endpoint{ - CredentialScope: credentialScope{ - Service: "execute-api", - }, - }, - }, Endpoints: serviceEndpoints{ endpointKey{ Region: "cn-north-1", @@ -32930,6 +33487,12 @@ var awsusgovPartition = partition{ }: endpoint{ Hostname: "athena-fips.us-gov-east-1.amazonaws.com", }, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant | dualStackVariant, + }: endpoint{ + Hostname: "athena-fips.us-gov-east-1.api.aws", + }, endpointKey{ Region: "us-gov-west-1", }: endpoint{}, @@ -32945,6 +33508,12 @@ var awsusgovPartition = partition{ }: endpoint{ Hostname: "athena-fips.us-gov-west-1.amazonaws.com", }, + endpointKey{ + Region: "us-gov-west-1", + Variant: fipsVariant | dualStackVariant, + }: endpoint{ + Hostname: "athena-fips.us-gov-west-1.api.aws", + }, }, }, "autoscaling": service{ @@ -33008,6 +33577,16 @@ var awsusgovPartition = partition{ }: endpoint{}, }, }, + "backupstorage": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-gov-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + }: endpoint{}, + }, + }, "batch": service{ Defaults: endpointDefaults{ defaultKey{}: endpoint{}, @@ -35154,30 +35733,19 @@ var awsusgovPartition = partition{ }, }, "iot": service{ - Defaults: endpointDefaults{ - defaultKey{}: endpoint{ - CredentialScope: credentialScope{ - Service: "execute-api", - }, - }, - }, Endpoints: serviceEndpoints{ endpointKey{ Region: "fips-us-gov-east-1", }: endpoint{ Hostname: "iot-fips.us-gov-east-1.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ Region: "fips-us-gov-west-1", }: endpoint{ Hostname: "iot-fips.us-gov-west-1.amazonaws.com", - CredentialScope: credentialScope{ - Service: "execute-api", - }, + Deprecated: boxedTrue, }, endpointKey{ @@ -35830,6 +36398,46 @@ var awsusgovPartition = partition{ }: endpoint{}, }, }, + "mgn": service{ + Endpoints: 
serviceEndpoints{ + endpointKey{ + Region: "fips-us-gov-east-1", + }: endpoint{ + Hostname: "mgn-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-gov-west-1", + }: endpoint{ + Hostname: "mgn-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "us-gov-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "mgn-fips.us-gov-east-1.amazonaws.com", + }, + endpointKey{ + Region: "us-gov-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "mgn-fips.us-gov-west-1.amazonaws.com", + }, + }, + }, "models.lex": service{ Defaults: endpointDefaults{ defaultKey{}: endpoint{ @@ -36590,9 +37198,35 @@ var awsusgovPartition = partition{ endpointKey{ Region: "us-gov-east-1", }: endpoint{}, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "route53resolver.us-gov-east-1.amazonaws.com", + }, + endpointKey{ + Region: "us-gov-east-1-fips", + }: endpoint{ + Hostname: "route53resolver.us-gov-east-1.amazonaws.com", + + Deprecated: boxedTrue, + }, endpointKey{ Region: "us-gov-west-1", }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "route53resolver.us-gov-west-1.amazonaws.com", + }, + endpointKey{ + Region: "us-gov-west-1-fips", + }: endpoint{ + Hostname: "route53resolver.us-gov-west-1.amazonaws.com", + + Deprecated: boxedTrue, + }, }, }, "runtime.lex": service{ @@ -37219,6 +37853,16 @@ var awsusgovPartition = partition{ }, }, }, + "simspaceweaver": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-gov-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + }: endpoint{}, + }, + }, "sms": service{ Endpoints: 
serviceEndpoints{ endpointKey{ @@ -38092,6 +38736,15 @@ var awsusgovPartition = partition{ }, "workspaces": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "fips-us-gov-east-1", + }: endpoint{ + Hostname: "workspaces-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "fips-us-gov-west-1", }: endpoint{ @@ -38104,6 +38757,12 @@ var awsusgovPartition = partition{ endpointKey{ Region: "us-gov-east-1", }: endpoint{}, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "workspaces-fips.us-gov-east-1.amazonaws.com", + }, endpointKey{ Region: "us-gov-west-1", }: endpoint{}, @@ -38285,6 +38944,16 @@ var awsisoPartition = partition{ }: endpoint{}, }, }, + "cloudcontrolapi": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-iso-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-iso-west-1", + }: endpoint{}, + }, + }, "cloudformation": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -38354,6 +39023,16 @@ var awsisoPartition = partition{ }: endpoint{}, }, }, + "dlm": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-iso-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-iso-west-1", + }: endpoint{}, + }, + }, "dms": service{ Defaults: endpointDefaults{ defaultKey{}: endpoint{}, @@ -38771,6 +39450,28 @@ var awsisoPartition = partition{ }: endpoint{}, }, }, + "rbin": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "fips-us-iso-east-1", + }: endpoint{ + Hostname: "rbin-fips.us-iso-east-1.c2s.ic.gov", + CredentialScope: credentialScope{ + Region: "us-iso-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "us-iso-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-iso-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "rbin-fips.us-iso-east-1.c2s.ic.gov", + }, + }, + }, "rds": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ 
-38810,6 +39511,9 @@ var awsisoPartition = partition{ endpointKey{ Region: "us-iso-east-1", }: endpoint{}, + endpointKey{ + Region: "us-iso-west-1", + }: endpoint{}, }, }, "runtime.sagemaker": service{ @@ -38963,6 +39667,9 @@ var awsisoPartition = partition{ endpointKey{ Region: "us-iso-east-1", }: endpoint{}, + endpointKey{ + Region: "us-iso-west-1", + }: endpoint{}, }, }, "transcribe": service{ @@ -39432,6 +40139,28 @@ var awsisobPartition = partition{ }: endpoint{}, }, }, + "rbin": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "fips-us-isob-east-1", + }: endpoint{ + Hostname: "rbin-fips.us-isob-east-1.sc2s.sgov.gov", + CredentialScope: credentialScope{ + Region: "us-isob-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "us-isob-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-isob-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "rbin-fips.us-isob-east-1.sc2s.sgov.gov", + }, + }, + }, "rds": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -39639,3 +40368,37 @@ var awsisoePartition = partition{ Regions: regions{}, Services: services{}, } + +// AwsIsoFPartition returns the Resolver for AWS ISOF. 
+func AwsIsoFPartition() Partition { + return awsisofPartition.Partition() +} + +var awsisofPartition = partition{ + ID: "aws-iso-f", + Name: "AWS ISOF", + DNSSuffix: "csp.hci.ic.gov", + RegionRegex: regionRegex{ + Regexp: func() *regexp.Regexp { + reg, _ := regexp.Compile("^us\\-isof\\-\\w+\\-\\d+$") + return reg + }(), + }, + Defaults: endpointDefaults{ + defaultKey{}: endpoint{ + Hostname: "{service}.{region}.{dnsSuffix}", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + defaultKey{ + Variant: fipsVariant, + }: endpoint{ + Hostname: "{service}-fips.{region}.{dnsSuffix}", + DNSSuffix: "csp.hci.ic.gov", + Protocols: []string{"https"}, + SignatureVersions: []string{"v4"}, + }, + }, + Regions: regions{}, + Services: services{}, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/LICENSE b/.ci/providerlint/vendor/github.com/cloudflare/circl/LICENSE new file mode 100644 index 00000000000..67edaa90a04 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/LICENSE @@ -0,0 +1,57 @@ +Copyright (c) 2019 Cloudflare. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Cloudflare nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +======================================================================== + +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve.go new file mode 100644 index 00000000000..f9057c2b866 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve.go @@ -0,0 +1,96 @@ +package x25519 + +import ( + fp "github.com/cloudflare/circl/math/fp25519" +) + +// ladderJoye calculates a fixed-point multiplication with the generator point. +// The algorithm is the right-to-left Joye's ladder as described +// in "How to precompute a ladder" in SAC'2017. +func ladderJoye(k *Key) { + w := [5]fp.Elt{} // [mu,x1,z1,x2,z2] order must be preserved. 
+ fp.SetOne(&w[1]) // x1 = 1 + fp.SetOne(&w[2]) // z1 = 1 + w[3] = fp.Elt{ // x2 = G-S + 0xbd, 0xaa, 0x2f, 0xc8, 0xfe, 0xe1, 0x94, 0x7e, + 0xf8, 0xed, 0xb2, 0x14, 0xae, 0x95, 0xf0, 0xbb, + 0xe2, 0x48, 0x5d, 0x23, 0xb9, 0xa0, 0xc7, 0xad, + 0x34, 0xab, 0x7c, 0xe2, 0xee, 0xcd, 0xae, 0x1e, + } + fp.SetOne(&w[4]) // z2 = 1 + + const n = 255 + const h = 3 + swap := uint(1) + for s := 0; s < n-h; s++ { + i := (s + h) / 8 + j := (s + h) % 8 + bit := uint((k[i] >> uint(j)) & 1) + copy(w[0][:], tableGenerator[s*Size:(s+1)*Size]) + diffAdd(&w, swap^bit) + swap = bit + } + for s := 0; s < h; s++ { + double(&w[1], &w[2]) + } + toAffine((*[fp.Size]byte)(k), &w[1], &w[2]) +} + +// ladderMontgomery calculates a generic scalar point multiplication +// The algorithm implemented is the left-to-right Montgomery's ladder. +func ladderMontgomery(k, xP *Key) { + w := [5]fp.Elt{} // [x1, x2, z2, x3, z3] order must be preserved. + w[0] = *(*fp.Elt)(xP) // x1 = xP + fp.SetOne(&w[1]) // x2 = 1 + w[3] = *(*fp.Elt)(xP) // x3 = xP + fp.SetOne(&w[4]) // z3 = 1 + + move := uint(0) + for s := 255 - 1; s >= 0; s-- { + i := s / 8 + j := s % 8 + bit := uint((k[i] >> uint(j)) & 1) + ladderStep(&w, move^bit) + move = bit + } + toAffine((*[fp.Size]byte)(k), &w[1], &w[2]) +} + +func toAffine(k *[fp.Size]byte, x, z *fp.Elt) { + fp.Inv(z, z) + fp.Mul(x, x, z) + _ = fp.ToBytes(k[:], x) +} + +var lowOrderPoints = [5]fp.Elt{ + { /* (0,_,1) point of order 2 on Curve25519 */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + }, + { /* (1,_,1) point of order 4 on Curve25519 */ + 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + }, + { /* (x,_,1) first point of order 8 on Curve25519 */ + 0xe0, 0xeb, 0x7a, 0x7c, 0x3b, 
0x41, 0xb8, 0xae, + 0x16, 0x56, 0xe3, 0xfa, 0xf1, 0x9f, 0xc4, 0x6a, + 0xda, 0x09, 0x8d, 0xeb, 0x9c, 0x32, 0xb1, 0xfd, + 0x86, 0x62, 0x05, 0x16, 0x5f, 0x49, 0xb8, 0x00, + }, + { /* (x,_,1) second point of order 8 on Curve25519 */ + 0x5f, 0x9c, 0x95, 0xbc, 0xa3, 0x50, 0x8c, 0x24, + 0xb1, 0xd0, 0xb1, 0x55, 0x9c, 0x83, 0xef, 0x5b, + 0x04, 0x44, 0x5c, 0xc4, 0x58, 0x1c, 0x8e, 0x86, + 0xd8, 0x22, 0x4e, 0xdd, 0xd0, 0x9f, 0x11, 0x57, + }, + { /* (-1,_,1) a point of order 4 on the twist of Curve25519 */ + 0xec, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f, + }, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.go new file mode 100644 index 00000000000..8a3d54c570f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.go @@ -0,0 +1,30 @@ +//go:build amd64 && !purego +// +build amd64,!purego + +package x25519 + +import ( + fp "github.com/cloudflare/circl/math/fp25519" + "golang.org/x/sys/cpu" +) + +var hasBmi2Adx = cpu.X86.HasBMI2 && cpu.X86.HasADX + +var _ = hasBmi2Adx + +func double(x, z *fp.Elt) { doubleAmd64(x, z) } +func diffAdd(w *[5]fp.Elt, b uint) { diffAddAmd64(w, b) } +func ladderStep(w *[5]fp.Elt, b uint) { ladderStepAmd64(w, b) } +func mulA24(z, x *fp.Elt) { mulA24Amd64(z, x) } + +//go:noescape +func ladderStepAmd64(w *[5]fp.Elt, b uint) + +//go:noescape +func diffAddAmd64(w *[5]fp.Elt, b uint) + +//go:noescape +func doubleAmd64(x, z *fp.Elt) + +//go:noescape +func mulA24Amd64(z, x *fp.Elt) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.h b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.h new file mode 100644 index 00000000000..8c1ae4d0fbb --- /dev/null +++ 
b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.h @@ -0,0 +1,111 @@ +#define ladderStepLeg \ + addSub(x2,z2) \ + addSub(x3,z3) \ + integerMulLeg(b0,x2,z3) \ + integerMulLeg(b1,x3,z2) \ + reduceFromDoubleLeg(t0,b0) \ + reduceFromDoubleLeg(t1,b1) \ + addSub(t0,t1) \ + cselect(x2,x3,regMove) \ + cselect(z2,z3,regMove) \ + integerSqrLeg(b0,t0) \ + integerSqrLeg(b1,t1) \ + reduceFromDoubleLeg(x3,b0) \ + reduceFromDoubleLeg(z3,b1) \ + integerMulLeg(b0,x1,z3) \ + reduceFromDoubleLeg(z3,b0) \ + integerSqrLeg(b0,x2) \ + integerSqrLeg(b1,z2) \ + reduceFromDoubleLeg(x2,b0) \ + reduceFromDoubleLeg(z2,b1) \ + subtraction(t0,x2,z2) \ + multiplyA24Leg(t1,t0) \ + additionLeg(t1,t1,z2) \ + integerMulLeg(b0,x2,z2) \ + integerMulLeg(b1,t0,t1) \ + reduceFromDoubleLeg(x2,b0) \ + reduceFromDoubleLeg(z2,b1) + +#define ladderStepBmi2Adx \ + addSub(x2,z2) \ + addSub(x3,z3) \ + integerMulAdx(b0,x2,z3) \ + integerMulAdx(b1,x3,z2) \ + reduceFromDoubleAdx(t0,b0) \ + reduceFromDoubleAdx(t1,b1) \ + addSub(t0,t1) \ + cselect(x2,x3,regMove) \ + cselect(z2,z3,regMove) \ + integerSqrAdx(b0,t0) \ + integerSqrAdx(b1,t1) \ + reduceFromDoubleAdx(x3,b0) \ + reduceFromDoubleAdx(z3,b1) \ + integerMulAdx(b0,x1,z3) \ + reduceFromDoubleAdx(z3,b0) \ + integerSqrAdx(b0,x2) \ + integerSqrAdx(b1,z2) \ + reduceFromDoubleAdx(x2,b0) \ + reduceFromDoubleAdx(z2,b1) \ + subtraction(t0,x2,z2) \ + multiplyA24Adx(t1,t0) \ + additionAdx(t1,t1,z2) \ + integerMulAdx(b0,x2,z2) \ + integerMulAdx(b1,t0,t1) \ + reduceFromDoubleAdx(x2,b0) \ + reduceFromDoubleAdx(z2,b1) + +#define difAddLeg \ + addSub(x1,z1) \ + integerMulLeg(b0,z1,ui) \ + reduceFromDoubleLeg(z1,b0) \ + addSub(x1,z1) \ + integerSqrLeg(b0,x1) \ + integerSqrLeg(b1,z1) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) \ + integerMulLeg(b0,x1,z2) \ + integerMulLeg(b1,z1,x2) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) + +#define difAddBmi2Adx \ + addSub(x1,z1) \ + integerMulAdx(b0,z1,ui) \ + 
reduceFromDoubleAdx(z1,b0) \ + addSub(x1,z1) \ + integerSqrAdx(b0,x1) \ + integerSqrAdx(b1,z1) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) \ + integerMulAdx(b0,x1,z2) \ + integerMulAdx(b1,z1,x2) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) + +#define doubleLeg \ + addSub(x1,z1) \ + integerSqrLeg(b0,x1) \ + integerSqrLeg(b1,z1) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) \ + subtraction(t0,x1,z1) \ + multiplyA24Leg(t1,t0) \ + additionLeg(t1,t1,z1) \ + integerMulLeg(b0,x1,z1) \ + integerMulLeg(b1,t0,t1) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) + +#define doubleBmi2Adx \ + addSub(x1,z1) \ + integerSqrAdx(b0,x1) \ + integerSqrAdx(b1,z1) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) \ + subtraction(t0,x1,z1) \ + multiplyA24Adx(t1,t0) \ + additionAdx(t1,t1,z1) \ + integerMulAdx(b0,x1,z1) \ + integerMulAdx(b1,t0,t1) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.s b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.s new file mode 100644 index 00000000000..b7723185b61 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_amd64.s @@ -0,0 +1,156 @@ +// +build amd64 + +#include "textflag.h" + +// Depends on circl/math/fp25519 package +#include "../../math/fp25519/fp_amd64.h" +#include "curve_amd64.h" + +// CTE_A24 is (A+2)/4 from Curve25519 +#define CTE_A24 121666 + +#define Size 32 + +// multiplyA24Leg multiplies x times CTE_A24 and stores in z +// Uses: AX, DX, R8-R13, FLAGS +// Instr: x86_64, cmov +#define multiplyA24Leg(z,x) \ + MOVL $CTE_A24, AX; MULQ 0+x; MOVQ AX, R8; MOVQ DX, R9; \ + MOVL $CTE_A24, AX; MULQ 8+x; MOVQ AX, R12; MOVQ DX, R10; \ + MOVL $CTE_A24, AX; MULQ 16+x; MOVQ AX, R13; MOVQ DX, R11; \ + MOVL $CTE_A24, AX; MULQ 24+x; \ + ADDQ R12, R9; \ + ADCQ R13, R10; \ + ADCQ AX, R11; \ + ADCQ $0, DX; \ + MOVL $38, AX; 
/* 2*C = 38 = 2^256 MOD 2^255-19*/ \ + IMULQ AX, DX; \ + ADDQ DX, R8; \ + ADCQ $0, R9; MOVQ R9, 8+z; \ + ADCQ $0, R10; MOVQ R10, 16+z; \ + ADCQ $0, R11; MOVQ R11, 24+z; \ + MOVQ $0, DX; \ + CMOVQCS AX, DX; \ + ADDQ DX, R8; MOVQ R8, 0+z; + +// multiplyA24Adx multiplies x times CTE_A24 and stores in z +// Uses: AX, DX, R8-R12, FLAGS +// Instr: x86_64, cmov, bmi2 +#define multiplyA24Adx(z,x) \ + MOVQ $CTE_A24, DX; \ + MULXQ 0+x, R8, R10; \ + MULXQ 8+x, R9, R11; ADDQ R10, R9; \ + MULXQ 16+x, R10, AX; ADCQ R11, R10; \ + MULXQ 24+x, R11, R12; ADCQ AX, R11; \ + ;;;;;;;;;;;;;;;;;;;;; ADCQ $0, R12; \ + MOVL $38, DX; /* 2*C = 38 = 2^256 MOD 2^255-19*/ \ + IMULQ DX, R12; \ + ADDQ R12, R8; \ + ADCQ $0, R9; MOVQ R9, 8+z; \ + ADCQ $0, R10; MOVQ R10, 16+z; \ + ADCQ $0, R11; MOVQ R11, 24+z; \ + MOVQ $0, R12; \ + CMOVQCS DX, R12; \ + ADDQ R12, R8; MOVQ R8, 0+z; + +#define mulA24Legacy \ + multiplyA24Leg(0(DI),0(SI)) +#define mulA24Bmi2Adx \ + multiplyA24Adx(0(DI),0(SI)) + +// func mulA24Amd64(z, x *fp255.Elt) +TEXT ·mulA24Amd64(SB),NOSPLIT,$0-16 + MOVQ z+0(FP), DI + MOVQ x+8(FP), SI + CHECK_BMI2ADX(LMA24, mulA24Legacy, mulA24Bmi2Adx) + + +// func ladderStepAmd64(w *[5]fp255.Elt, b uint) +// ladderStepAmd64 calculates a point addition and doubling as follows: +// (x2,z2) = 2*(x2,z2) and (x3,z3) = (x2,z2)+(x3,z3) using as a difference (x1,-). +// work = (x1,x2,z2,x3,z3) are five fp255.Elt of 32 bytes. +// stack = (t0,t1) are two fp.Elt of fp.Size bytes, and +// (b0,b1) are two-double precision fp.Elt of 2*fp.Size bytes. 
+TEXT ·ladderStepAmd64(SB),NOSPLIT,$192-16 + // Parameters + #define regWork DI + #define regMove SI + #define x1 0*Size(regWork) + #define x2 1*Size(regWork) + #define z2 2*Size(regWork) + #define x3 3*Size(regWork) + #define z3 4*Size(regWork) + // Local variables + #define t0 0*Size(SP) + #define t1 1*Size(SP) + #define b0 2*Size(SP) + #define b1 4*Size(SP) + MOVQ w+0(FP), regWork + MOVQ b+8(FP), regMove + CHECK_BMI2ADX(LLADSTEP, ladderStepLeg, ladderStepBmi2Adx) + #undef regWork + #undef regMove + #undef x1 + #undef x2 + #undef z2 + #undef x3 + #undef z3 + #undef t0 + #undef t1 + #undef b0 + #undef b1 + +// func diffAddAmd64(w *[5]fp255.Elt, b uint) +// diffAddAmd64 calculates a differential point addition using a precomputed point. +// (x1,z1) = (x1,z1)+(mu) using a difference point (x2,z2) +// w = (mu,x1,z1,x2,z2) are five fp.Elt, and +// stack = (b0,b1) are two-double precision fp.Elt of 2*fp.Size bytes. +TEXT ·diffAddAmd64(SB),NOSPLIT,$128-16 + // Parameters + #define regWork DI + #define regSwap SI + #define ui 0*Size(regWork) + #define x1 1*Size(regWork) + #define z1 2*Size(regWork) + #define x2 3*Size(regWork) + #define z2 4*Size(regWork) + // Local variables + #define b0 0*Size(SP) + #define b1 2*Size(SP) + MOVQ w+0(FP), regWork + MOVQ b+8(FP), regSwap + cswap(x1,x2,regSwap) + cswap(z1,z2,regSwap) + CHECK_BMI2ADX(LDIFADD, difAddLeg, difAddBmi2Adx) + #undef regWork + #undef regSwap + #undef ui + #undef x1 + #undef z1 + #undef x2 + #undef z2 + #undef b0 + #undef b1 + +// func doubleAmd64(x, z *fp255.Elt) +// doubleAmd64 calculates a point doubling (x1,z1) = 2*(x1,z1). +// stack = (t0,t1) are two fp.Elt of fp.Size bytes, and +// (b0,b1) are two-double precision fp.Elt of 2*fp.Size bytes. 
+TEXT ·doubleAmd64(SB),NOSPLIT,$192-16 + // Parameters + #define x1 0(DI) + #define z1 0(SI) + // Local variables + #define t0 0*Size(SP) + #define t1 1*Size(SP) + #define b0 2*Size(SP) + #define b1 4*Size(SP) + MOVQ x+0(FP), DI + MOVQ z+8(FP), SI + CHECK_BMI2ADX(LDOUB,doubleLeg,doubleBmi2Adx) + #undef x1 + #undef z1 + #undef t0 + #undef t1 + #undef b0 + #undef b1 diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_generic.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_generic.go new file mode 100644 index 00000000000..dae67ea37df --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_generic.go @@ -0,0 +1,85 @@ +package x25519 + +import ( + "encoding/binary" + "math/bits" + + fp "github.com/cloudflare/circl/math/fp25519" +) + +func doubleGeneric(x, z *fp.Elt) { + t0, t1 := &fp.Elt{}, &fp.Elt{} + fp.AddSub(x, z) + fp.Sqr(x, x) + fp.Sqr(z, z) + fp.Sub(t0, x, z) + mulA24Generic(t1, t0) + fp.Add(t1, t1, z) + fp.Mul(x, x, z) + fp.Mul(z, t0, t1) +} + +func diffAddGeneric(w *[5]fp.Elt, b uint) { + mu, x1, z1, x2, z2 := &w[0], &w[1], &w[2], &w[3], &w[4] + fp.Cswap(x1, x2, b) + fp.Cswap(z1, z2, b) + fp.AddSub(x1, z1) + fp.Mul(z1, z1, mu) + fp.AddSub(x1, z1) + fp.Sqr(x1, x1) + fp.Sqr(z1, z1) + fp.Mul(x1, x1, z2) + fp.Mul(z1, z1, x2) +} + +func ladderStepGeneric(w *[5]fp.Elt, b uint) { + x1, x2, z2, x3, z3 := &w[0], &w[1], &w[2], &w[3], &w[4] + t0 := &fp.Elt{} + t1 := &fp.Elt{} + fp.AddSub(x2, z2) + fp.AddSub(x3, z3) + fp.Mul(t0, x2, z3) + fp.Mul(t1, x3, z2) + fp.AddSub(t0, t1) + fp.Cmov(x2, x3, b) + fp.Cmov(z2, z3, b) + fp.Sqr(x3, t0) + fp.Sqr(z3, t1) + fp.Mul(z3, x1, z3) + fp.Sqr(x2, x2) + fp.Sqr(z2, z2) + fp.Sub(t0, x2, z2) + mulA24Generic(t1, t0) + fp.Add(t1, t1, z2) + fp.Mul(x2, x2, z2) + fp.Mul(z2, t0, t1) +} + +func mulA24Generic(z, x *fp.Elt) { + const A24 = 121666 + const n = 8 + var xx [4]uint64 + for i := range xx { + xx[i] = binary.LittleEndian.Uint64(x[i*n : (i+1)*n]) + } + + 
h0, l0 := bits.Mul64(xx[0], A24) + h1, l1 := bits.Mul64(xx[1], A24) + h2, l2 := bits.Mul64(xx[2], A24) + h3, l3 := bits.Mul64(xx[3], A24) + + var c3 uint64 + l1, c0 := bits.Add64(h0, l1, 0) + l2, c1 := bits.Add64(h1, l2, c0) + l3, c2 := bits.Add64(h2, l3, c1) + l4, _ := bits.Add64(h3, 0, c2) + _, l4 = bits.Mul64(l4, 38) + l0, c0 = bits.Add64(l0, l4, 0) + xx[1], c1 = bits.Add64(l1, 0, c0) + xx[2], c2 = bits.Add64(l2, 0, c1) + xx[3], c3 = bits.Add64(l3, 0, c2) + xx[0], _ = bits.Add64(l0, (-c3)&38, 0) + for i := range xx { + binary.LittleEndian.PutUint64(z[i*n:(i+1)*n], xx[i]) + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_noasm.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_noasm.go new file mode 100644 index 00000000000..07fab97d2af --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/curve_noasm.go @@ -0,0 +1,11 @@ +//go:build !amd64 || purego +// +build !amd64 purego + +package x25519 + +import fp "github.com/cloudflare/circl/math/fp25519" + +func double(x, z *fp.Elt) { doubleGeneric(x, z) } +func diffAdd(w *[5]fp.Elt, b uint) { diffAddGeneric(w, b) } +func ladderStep(w *[5]fp.Elt, b uint) { ladderStepGeneric(w, b) } +func mulA24(z, x *fp.Elt) { mulA24Generic(z, x) } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/doc.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/doc.go new file mode 100644 index 00000000000..3ce102d1457 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/doc.go @@ -0,0 +1,19 @@ +/* +Package x25519 provides Diffie-Hellman functions as specified in RFC-7748. + +Validation of public keys. + +The Diffie-Hellman function, as described in RFC-7748 [1], works for any +public key. However, if a different protocol requires contributory +behaviour [2,3], then the public keys must be validated against low-order +points [3,4]. 
To do that, the Shared function performs this validation +internally and returns false when the public key is invalid (i.e., it +is a low-order point). + +References: + - [1] RFC7748 by Langley, Hamburg, Turner (https://rfc-editor.org/rfc/rfc7748.txt) + - [2] Curve25519 by Bernstein (https://cr.yp.to/ecdh.html) + - [3] Bernstein (https://cr.yp.to/ecdh.html#validate) + - [4] Cremers&Jackson (https://eprint.iacr.org/2019/526) +*/ +package x25519 diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/key.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/key.go new file mode 100644 index 00000000000..c76f72ac7fa --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/key.go @@ -0,0 +1,47 @@ +package x25519 + +import ( + "crypto/subtle" + + fp "github.com/cloudflare/circl/math/fp25519" +) + +// Size is the length in bytes of a X25519 key. +const Size = 32 + +// Key represents a X25519 key. +type Key [Size]byte + +func (k *Key) clamp(in *Key) *Key { + *k = *in + k[0] &= 248 + k[31] = (k[31] & 127) | 64 + return k +} + +// isValidPubKey verifies if the public key is not a low-order point. +func (k *Key) isValidPubKey() bool { + fp.Modp((*fp.Elt)(k)) + var isLowOrder int + for _, P := range lowOrderPoints { + isLowOrder |= subtle.ConstantTimeCompare(P[:], k[:]) + } + return isLowOrder == 0 +} + +// KeyGen obtains a public key given a secret key. +func KeyGen(public, secret *Key) { + ladderJoye(public.clamp(secret)) +} + +// Shared calculates Alice's shared key from Alice's secret key and Bob's +// public key returning true on success. A failure case happens when the public +// key is a low-order point, thus the shared key is all-zeros and the function +// returns false. 
+func Shared(shared, secret, public *Key) bool { + validPk := *public + validPk[31] &= (1 << (255 % 8)) - 1 + ok := validPk.isValidPubKey() + ladderMontgomery(shared.clamp(secret), &validPk) + return ok +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/table.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/table.go new file mode 100644 index 00000000000..28c8c4ac032 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x25519/table.go @@ -0,0 +1,268 @@ +package x25519 + +import "github.com/cloudflare/circl/math/fp25519" + +// tableGenerator contains the set of points: +// +// t[i] = (xi+1)/(xi-1), +// +// where (xi,yi) = 2^iG and G is the generator point +// Size = (256)*(256/8) = 8192 bytes. +var tableGenerator = [256 * fp25519.Size]byte{ + /* (2^ 0)P */ 0xf3, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x5f, + /* (2^ 1)P */ 0x96, 0xfe, 0xaa, 0x16, 0xf4, 0x20, 0x82, 0x6b, 0x34, 0x6a, 0x56, 0x4f, 0x2b, 0xeb, 0xeb, 0x82, 0x0f, 0x95, 0xa5, 0x75, 0xb0, 0xa5, 0xa9, 0xd5, 0xf4, 0x88, 0x24, 0x4b, 0xcf, 0xb2, 0x42, 0x51, + /* (2^ 2)P */ 0x0c, 0x68, 0x69, 0x00, 0x75, 0xbc, 0xae, 0x6a, 0x41, 0x9c, 0xf9, 0xa0, 0x20, 0x78, 0xcf, 0x89, 0xf4, 0xd0, 0x56, 0x3b, 0x18, 0xd9, 0x58, 0x2a, 0xa4, 0x11, 0x60, 0xe3, 0x80, 0xca, 0x5a, 0x4b, + /* (2^ 3)P */ 0x5d, 0x74, 0x29, 0x8c, 0x34, 0x32, 0x91, 0x32, 0xd7, 0x2f, 0x64, 0xe1, 0x16, 0xe6, 0xa2, 0xf4, 0x34, 0xbc, 0x67, 0xff, 0x03, 0xbb, 0x45, 0x1e, 0x4a, 0x9b, 0x2a, 0xf4, 0xd0, 0x12, 0x69, 0x30, + /* (2^ 4)P */ 0x54, 0x71, 0xaf, 0xe6, 0x07, 0x65, 0x88, 0xff, 0x2f, 0xc8, 0xee, 0xdf, 0x13, 0x0e, 0xf5, 0x04, 0xce, 0xb5, 0xba, 0x2a, 0xe8, 0x2f, 0x51, 0xaa, 0x22, 0xf2, 0xd5, 0x68, 0x1a, 0x25, 0x4e, 0x17, + /* (2^ 5)P */ 0x98, 0x88, 0x02, 0x82, 0x0d, 0x70, 0x96, 0xcf, 0xc5, 0x02, 0x2c, 0x0a, 0x37, 0xe3, 0x43, 0x17, 0xaa, 0x6e, 0xe8, 0xb4, 0x98, 
0xec, 0x9e, 0x37, 0x2e, 0x48, 0xe0, 0x51, 0x8a, 0x88, 0x59, 0x0c, + /* (2^ 6)P */ 0x89, 0xd1, 0xb5, 0x99, 0xd6, 0xf1, 0xcb, 0xfb, 0x84, 0xdc, 0x9f, 0x8e, 0xd5, 0xf0, 0xae, 0xac, 0x14, 0x76, 0x1f, 0x23, 0x06, 0x0d, 0xc2, 0xc1, 0x72, 0xf9, 0x74, 0xa2, 0x8d, 0x21, 0x38, 0x29, + /* (2^ 7)P */ 0x18, 0x7f, 0x1d, 0xff, 0xbe, 0x49, 0xaf, 0xf6, 0xc2, 0xc9, 0x7a, 0x38, 0x22, 0x1c, 0x54, 0xcc, 0x6b, 0xc5, 0x15, 0x40, 0xef, 0xc9, 0xfc, 0x96, 0xa9, 0x13, 0x09, 0x69, 0x7c, 0x62, 0xc1, 0x69, + /* (2^ 8)P */ 0x0e, 0xdb, 0x33, 0x47, 0x2f, 0xfd, 0x86, 0x7a, 0xe9, 0x7d, 0x08, 0x9e, 0xf2, 0xc4, 0xb8, 0xfd, 0x29, 0xa2, 0xa2, 0x8e, 0x1a, 0x4b, 0x5e, 0x09, 0x79, 0x7a, 0xb3, 0x29, 0xc8, 0xa7, 0xd7, 0x1a, + /* (2^ 9)P */ 0xc0, 0xa0, 0x7e, 0xd1, 0xca, 0x89, 0x2d, 0x34, 0x51, 0x20, 0xed, 0xcc, 0xa6, 0xdd, 0xbe, 0x67, 0x74, 0x2f, 0xb4, 0x2b, 0xbf, 0x31, 0xca, 0x19, 0xbb, 0xac, 0x80, 0x49, 0xc8, 0xb4, 0xf7, 0x3d, + /* (2^ 10)P */ 0x83, 0xd8, 0x0a, 0xc8, 0x4d, 0x44, 0xc6, 0xa8, 0x85, 0xab, 0xe3, 0x66, 0x03, 0x44, 0x1e, 0xb9, 0xd8, 0xf6, 0x64, 0x01, 0xa0, 0xcd, 0x15, 0xc2, 0x68, 0xe6, 0x47, 0xf2, 0x6e, 0x7c, 0x86, 0x3d, + /* (2^ 11)P */ 0x8c, 0x65, 0x3e, 0xcc, 0x2b, 0x58, 0xdd, 0xc7, 0x28, 0x55, 0x0e, 0xee, 0x48, 0x47, 0x2c, 0xfd, 0x71, 0x4f, 0x9f, 0xcc, 0x95, 0x9b, 0xfd, 0xa0, 0xdf, 0x5d, 0x67, 0xb0, 0x71, 0xd8, 0x29, 0x75, + /* (2^ 12)P */ 0x78, 0xbd, 0x3c, 0x2d, 0xb4, 0x68, 0xf5, 0xb8, 0x82, 0xda, 0xf3, 0x91, 0x1b, 0x01, 0x33, 0x12, 0x62, 0x3b, 0x7c, 0x4a, 0xcd, 0x6c, 0xce, 0x2d, 0x03, 0x86, 0x49, 0x9e, 0x8e, 0xfc, 0xe7, 0x75, + /* (2^ 13)P */ 0xec, 0xb6, 0xd0, 0xfc, 0xf1, 0x13, 0x4f, 0x2f, 0x45, 0x7a, 0xff, 0x29, 0x1f, 0xca, 0xa8, 0xf1, 0x9b, 0xe2, 0x81, 0x29, 0xa7, 0xc1, 0x49, 0xc2, 0x6a, 0xb5, 0x83, 0x8c, 0xbb, 0x0d, 0xbe, 0x6e, + /* (2^ 14)P */ 0x22, 0xb2, 0x0b, 0x17, 0x8d, 0xfa, 0x14, 0x71, 0x5f, 0x93, 0x93, 0xbf, 0xd5, 0xdc, 0xa2, 0x65, 0x9a, 0x97, 0x9c, 0xb5, 0x68, 0x1f, 0xc4, 0xbd, 0x89, 0x92, 0xce, 0xa2, 0x79, 0xef, 0x0e, 0x2f, + /* (2^ 15)P */ 0xce, 0x37, 0x3c, 0x08, 0x0c, 0xbf, 
0xec, 0x42, 0x22, 0x63, 0x49, 0xec, 0x09, 0xbc, 0x30, 0x29, 0x0d, 0xac, 0xfe, 0x9c, 0xc1, 0xb0, 0x94, 0xf2, 0x80, 0xbb, 0xfa, 0xed, 0x4b, 0xaa, 0x80, 0x37, + /* (2^ 16)P */ 0x29, 0xd9, 0xea, 0x7c, 0x3e, 0x7d, 0xc1, 0x56, 0xc5, 0x22, 0x57, 0x2e, 0xeb, 0x4b, 0xcb, 0xe7, 0x5a, 0xe1, 0xbf, 0x2d, 0x73, 0x31, 0xe9, 0x0c, 0xf8, 0x52, 0x10, 0x62, 0xc7, 0x83, 0xb8, 0x41, + /* (2^ 17)P */ 0x50, 0x53, 0xd2, 0xc3, 0xa0, 0x5c, 0xf7, 0xdb, 0x51, 0xe3, 0xb1, 0x6e, 0x08, 0xbe, 0x36, 0x29, 0x12, 0xb2, 0xa9, 0xb4, 0x3c, 0xe0, 0x36, 0xc9, 0xaa, 0x25, 0x22, 0x32, 0x82, 0xbf, 0x45, 0x1d, + /* (2^ 18)P */ 0xc5, 0x4c, 0x02, 0x6a, 0x03, 0xb1, 0x1a, 0xe8, 0x72, 0x9a, 0x4c, 0x30, 0x1c, 0x20, 0x12, 0xe2, 0xfc, 0xb1, 0x32, 0x68, 0xba, 0x3f, 0xd7, 0xc5, 0x81, 0x95, 0x83, 0x4d, 0x5a, 0xdb, 0xff, 0x20, + /* (2^ 19)P */ 0xad, 0x0f, 0x5d, 0xbe, 0x67, 0xd3, 0x83, 0xa2, 0x75, 0x44, 0x16, 0x8b, 0xca, 0x25, 0x2b, 0x6c, 0x2e, 0xf2, 0xaa, 0x7c, 0x46, 0x35, 0x49, 0x9d, 0x49, 0xff, 0x85, 0xee, 0x8e, 0x40, 0x66, 0x51, + /* (2^ 20)P */ 0x61, 0xe3, 0xb4, 0xfa, 0xa2, 0xba, 0x67, 0x3c, 0xef, 0x5c, 0xf3, 0x7e, 0xc6, 0x33, 0xe4, 0xb3, 0x1c, 0x9b, 0x15, 0x41, 0x92, 0x72, 0x59, 0x52, 0x33, 0xab, 0xb0, 0xd5, 0x92, 0x18, 0x62, 0x6a, + /* (2^ 21)P */ 0xcb, 0xcd, 0x55, 0x75, 0x38, 0x4a, 0xb7, 0x20, 0x3f, 0x92, 0x08, 0x12, 0x0e, 0xa1, 0x2a, 0x53, 0xd1, 0x1d, 0x28, 0x62, 0x77, 0x7b, 0xa1, 0xea, 0xbf, 0x44, 0x5c, 0xf0, 0x43, 0x34, 0xab, 0x61, + /* (2^ 22)P */ 0xf8, 0xde, 0x24, 0x23, 0x42, 0x6c, 0x7a, 0x25, 0x7f, 0xcf, 0xe3, 0x17, 0x10, 0x6c, 0x1c, 0x13, 0x57, 0xa2, 0x30, 0xf6, 0x39, 0x87, 0x75, 0x23, 0x80, 0x85, 0xa7, 0x01, 0x7a, 0x40, 0x5a, 0x29, + /* (2^ 23)P */ 0xd9, 0xa8, 0x5d, 0x6d, 0x24, 0x43, 0xc4, 0xf8, 0x5d, 0xfa, 0x52, 0x0c, 0x45, 0x75, 0xd7, 0x19, 0x3d, 0xf8, 0x1b, 0x73, 0x92, 0xfc, 0xfc, 0x2a, 0x00, 0x47, 0x2b, 0x1b, 0xe8, 0xc8, 0x10, 0x7d, + /* (2^ 24)P */ 0x0b, 0xa2, 0xba, 0x70, 0x1f, 0x27, 0xe0, 0xc8, 0x57, 0x39, 0xa6, 0x7c, 0x86, 0x48, 0x37, 0x99, 0xbb, 0xd4, 0x7e, 0xcb, 0xb3, 0xef, 0x12, 0x54, 0x75, 
0x29, 0xe6, 0x73, 0x61, 0xd3, 0x96, 0x31, + /* (2^ 25)P */ 0xfc, 0xdf, 0xc7, 0x41, 0xd1, 0xca, 0x5b, 0xde, 0x48, 0xc8, 0x95, 0xb3, 0xd2, 0x8c, 0xcc, 0x47, 0xcb, 0xf3, 0x1a, 0xe1, 0x42, 0xd9, 0x4c, 0xa3, 0xc2, 0xce, 0x4e, 0xd0, 0xf2, 0xdb, 0x56, 0x02, + /* (2^ 26)P */ 0x7f, 0x66, 0x0e, 0x4b, 0xe9, 0xb7, 0x5a, 0x87, 0x10, 0x0d, 0x85, 0xc0, 0x83, 0xdd, 0xd4, 0xca, 0x9f, 0xc7, 0x72, 0x4e, 0x8f, 0x2e, 0xf1, 0x47, 0x9b, 0xb1, 0x85, 0x8c, 0xbb, 0x87, 0x1a, 0x5f, + /* (2^ 27)P */ 0xb8, 0x51, 0x7f, 0x43, 0xb6, 0xd0, 0xe9, 0x7a, 0x65, 0x90, 0x87, 0x18, 0x55, 0xce, 0xc7, 0x12, 0xee, 0x7a, 0xf7, 0x5c, 0xfe, 0x09, 0xde, 0x2a, 0x27, 0x56, 0x2c, 0x7d, 0x2f, 0x5a, 0xa0, 0x23, + /* (2^ 28)P */ 0x9a, 0x16, 0x7c, 0xf1, 0x28, 0xe1, 0x08, 0x59, 0x2d, 0x85, 0xd0, 0x8a, 0xdd, 0x98, 0x74, 0xf7, 0x64, 0x2f, 0x10, 0xab, 0xce, 0xc4, 0xb4, 0x74, 0x45, 0x98, 0x13, 0x10, 0xdd, 0xba, 0x3a, 0x18, + /* (2^ 29)P */ 0xac, 0xaa, 0x92, 0xaa, 0x8d, 0xba, 0x65, 0xb1, 0x05, 0x67, 0x38, 0x99, 0x95, 0xef, 0xc5, 0xd5, 0xd1, 0x40, 0xfc, 0xf8, 0x0c, 0x8f, 0x2f, 0xbe, 0x14, 0x45, 0x20, 0xee, 0x35, 0xe6, 0x01, 0x27, + /* (2^ 30)P */ 0x14, 0x65, 0x15, 0x20, 0x00, 0xa8, 0x9f, 0x62, 0xce, 0xc1, 0xa8, 0x64, 0x87, 0x86, 0x23, 0xf2, 0x0e, 0x06, 0x3f, 0x0b, 0xff, 0x4f, 0x89, 0x5b, 0xfa, 0xa3, 0x08, 0xf7, 0x4c, 0x94, 0xd9, 0x60, + /* (2^ 31)P */ 0x1f, 0x20, 0x7a, 0x1c, 0x1a, 0x00, 0xea, 0xae, 0x63, 0xce, 0xe2, 0x3e, 0x63, 0x6a, 0xf1, 0xeb, 0xe1, 0x07, 0x7a, 0x4c, 0x59, 0x09, 0x77, 0x6f, 0xcb, 0x08, 0x02, 0x0d, 0x15, 0x58, 0xb9, 0x79, + /* (2^ 32)P */ 0xe7, 0x10, 0xd4, 0x01, 0x53, 0x5e, 0xb5, 0x24, 0x4d, 0xc8, 0xfd, 0xf3, 0xdf, 0x4e, 0xa3, 0xe3, 0xd8, 0x32, 0x40, 0x90, 0xe4, 0x68, 0x87, 0xd8, 0xec, 0xae, 0x3a, 0x7b, 0x42, 0x84, 0x13, 0x13, + /* (2^ 33)P */ 0x14, 0x4f, 0x23, 0x86, 0x12, 0xe5, 0x05, 0x84, 0x29, 0xc5, 0xb4, 0xad, 0x39, 0x47, 0xdc, 0x14, 0xfd, 0x4f, 0x63, 0x50, 0xb2, 0xb5, 0xa2, 0xb8, 0x93, 0xff, 0xa7, 0xd8, 0x4a, 0xa9, 0xe2, 0x2f, + /* (2^ 34)P */ 0xdd, 0xfa, 0x43, 0xe8, 0xef, 0x57, 0x5c, 0xec, 0x18, 0x99, 
0xbb, 0xf0, 0x40, 0xce, 0x43, 0x28, 0x05, 0x63, 0x3d, 0xcf, 0xd6, 0x61, 0xb5, 0xa4, 0x7e, 0x77, 0xfb, 0xe8, 0xbd, 0x29, 0x36, 0x74, + /* (2^ 35)P */ 0x8f, 0x73, 0xaf, 0xbb, 0x46, 0xdd, 0x3e, 0x34, 0x51, 0xa6, 0x01, 0xb1, 0x28, 0x18, 0x98, 0xed, 0x7a, 0x79, 0x2c, 0x88, 0x0b, 0x76, 0x01, 0xa4, 0x30, 0x87, 0xc8, 0x8d, 0xe2, 0x23, 0xc2, 0x1f, + /* (2^ 36)P */ 0x0e, 0xba, 0x0f, 0xfc, 0x91, 0x4e, 0x60, 0x48, 0xa4, 0x6f, 0x2c, 0x05, 0x8f, 0xf7, 0x37, 0xb6, 0x9c, 0x23, 0xe9, 0x09, 0x3d, 0xac, 0xcc, 0x91, 0x7c, 0x68, 0x7a, 0x43, 0xd4, 0xee, 0xf7, 0x23, + /* (2^ 37)P */ 0x00, 0xd8, 0x9b, 0x8d, 0x11, 0xb1, 0x73, 0x51, 0xa7, 0xd4, 0x89, 0x31, 0xb6, 0x41, 0xd6, 0x29, 0x86, 0xc5, 0xbb, 0x88, 0x79, 0x17, 0xbf, 0xfd, 0xf5, 0x1d, 0xd8, 0xca, 0x4f, 0x89, 0x59, 0x29, + /* (2^ 38)P */ 0x99, 0xc8, 0xbb, 0xb4, 0xf3, 0x8e, 0xbc, 0xae, 0xb9, 0x92, 0x69, 0xb2, 0x5a, 0x99, 0x48, 0x41, 0xfb, 0x2c, 0xf9, 0x34, 0x01, 0x0b, 0xe2, 0x24, 0xe8, 0xde, 0x05, 0x4a, 0x89, 0x58, 0xd1, 0x40, + /* (2^ 39)P */ 0xf6, 0x76, 0xaf, 0x85, 0x11, 0x0b, 0xb0, 0x46, 0x79, 0x7a, 0x18, 0x73, 0x78, 0xc7, 0xba, 0x26, 0x5f, 0xff, 0x8f, 0xab, 0x95, 0xbf, 0xc0, 0x3d, 0xd7, 0x24, 0x55, 0x94, 0xd8, 0x8b, 0x60, 0x2a, + /* (2^ 40)P */ 0x02, 0x63, 0x44, 0xbd, 0x88, 0x95, 0x44, 0x26, 0x9c, 0x43, 0x88, 0x03, 0x1c, 0xc2, 0x4b, 0x7c, 0xb2, 0x11, 0xbd, 0x83, 0xf3, 0xa4, 0x98, 0x8e, 0xb9, 0x76, 0xd8, 0xc9, 0x7b, 0x8d, 0x21, 0x26, + /* (2^ 41)P */ 0x8a, 0x17, 0x7c, 0x99, 0x42, 0x15, 0x08, 0xe3, 0x6f, 0x60, 0xb6, 0x6f, 0xa8, 0x29, 0x2d, 0x3c, 0x74, 0x93, 0x27, 0xfa, 0x36, 0x77, 0x21, 0x5c, 0xfa, 0xb1, 0xfe, 0x4a, 0x73, 0x05, 0xde, 0x7d, + /* (2^ 42)P */ 0xab, 0x2b, 0xd4, 0x06, 0x39, 0x0e, 0xf1, 0x3b, 0x9c, 0x64, 0x80, 0x19, 0x3e, 0x80, 0xf7, 0xe4, 0x7a, 0xbf, 0x95, 0x95, 0xf8, 0x3b, 0x05, 0xe6, 0x30, 0x55, 0x24, 0xda, 0x38, 0xaf, 0x4f, 0x39, + /* (2^ 43)P */ 0xf4, 0x28, 0x69, 0x89, 0x58, 0xfb, 0x8e, 0x7a, 0x3c, 0x11, 0x6a, 0xcc, 0xe9, 0x78, 0xc7, 0xfb, 0x6f, 0x59, 0xaf, 0x30, 0xe3, 0x0c, 0x67, 0x72, 0xf7, 0x6c, 0x3d, 0x1d, 0xa8, 
0x22, 0xf2, 0x48, + /* (2^ 44)P */ 0xa7, 0xca, 0x72, 0x0d, 0x41, 0xce, 0x1f, 0xf0, 0x95, 0x55, 0x3b, 0x21, 0xc7, 0xec, 0x20, 0x5a, 0x83, 0x14, 0xfa, 0xc1, 0x65, 0x11, 0xc2, 0x7b, 0x41, 0xa7, 0xa8, 0x1d, 0xe3, 0x9a, 0xf8, 0x07, + /* (2^ 45)P */ 0xf9, 0x0f, 0x83, 0xc6, 0xb4, 0xc2, 0xd2, 0x05, 0x93, 0x62, 0x31, 0xc6, 0x0f, 0x33, 0x3e, 0xd4, 0x04, 0xa9, 0xd3, 0x96, 0x0a, 0x59, 0xa5, 0xa5, 0xb6, 0x33, 0x53, 0xa6, 0x91, 0xdb, 0x5e, 0x70, + /* (2^ 46)P */ 0xf7, 0xa5, 0xb9, 0x0b, 0x5e, 0xe1, 0x8e, 0x04, 0x5d, 0xaf, 0x0a, 0x9e, 0xca, 0xcf, 0x40, 0x32, 0x0b, 0xa4, 0xc4, 0xed, 0xce, 0x71, 0x4b, 0x8f, 0x6d, 0x4a, 0x54, 0xde, 0xa3, 0x0d, 0x1c, 0x62, + /* (2^ 47)P */ 0x91, 0x40, 0x8c, 0xa0, 0x36, 0x28, 0x87, 0x92, 0x45, 0x14, 0xc9, 0x10, 0xb0, 0x75, 0x83, 0xce, 0x94, 0x63, 0x27, 0x4f, 0x52, 0xeb, 0x72, 0x8a, 0x35, 0x36, 0xc8, 0x7e, 0xfa, 0xfc, 0x67, 0x26, + /* (2^ 48)P */ 0x2a, 0x75, 0xe8, 0x45, 0x33, 0x17, 0x4c, 0x7f, 0xa5, 0x79, 0x70, 0xee, 0xfe, 0x47, 0x1b, 0x06, 0x34, 0xff, 0x86, 0x9f, 0xfa, 0x9a, 0xdd, 0x25, 0x9c, 0xc8, 0x5d, 0x42, 0xf5, 0xce, 0x80, 0x37, + /* (2^ 49)P */ 0xe9, 0xb4, 0x3b, 0x51, 0x5a, 0x03, 0x46, 0x1a, 0xda, 0x5a, 0x57, 0xac, 0x79, 0xf3, 0x1e, 0x3e, 0x50, 0x4b, 0xa2, 0x5f, 0x1c, 0x5f, 0x8c, 0xc7, 0x22, 0x9f, 0xfd, 0x34, 0x76, 0x96, 0x1a, 0x32, + /* (2^ 50)P */ 0xfa, 0x27, 0x6e, 0x82, 0xb8, 0x07, 0x67, 0x94, 0xd0, 0x6f, 0x50, 0x4c, 0xd6, 0x84, 0xca, 0x3d, 0x36, 0x14, 0xe9, 0x75, 0x80, 0x21, 0x89, 0xc1, 0x84, 0x84, 0x3b, 0x9b, 0x16, 0x84, 0x92, 0x6d, + /* (2^ 51)P */ 0xdf, 0x2d, 0x3f, 0x38, 0x40, 0xe8, 0x67, 0x3a, 0x75, 0x9b, 0x4f, 0x0c, 0xa3, 0xc9, 0xee, 0x33, 0x47, 0xef, 0x83, 0xa7, 0x6f, 0xc8, 0xc7, 0x3e, 0xc4, 0xfb, 0xc9, 0xba, 0x9f, 0x44, 0xec, 0x26, + /* (2^ 52)P */ 0x7d, 0x9e, 0x9b, 0xa0, 0xcb, 0x38, 0x0f, 0x5c, 0x8c, 0x47, 0xa3, 0x62, 0xc7, 0x8c, 0x16, 0x81, 0x1c, 0x12, 0xfc, 0x06, 0xd3, 0xb0, 0x23, 0x3e, 0xdd, 0xdc, 0xef, 0xa5, 0xa0, 0x8a, 0x23, 0x5a, + /* (2^ 53)P */ 0xff, 0x43, 0xea, 0xc4, 0x21, 0x61, 0xa2, 0x1b, 0xb5, 0x32, 0x88, 0x7c, 0x7f, 0xc7, 
0xf8, 0x36, 0x9a, 0xf9, 0xdc, 0x0a, 0x0b, 0xea, 0xfb, 0x88, 0xf9, 0xeb, 0x5b, 0xc2, 0x8e, 0x93, 0xa9, 0x5c, + /* (2^ 54)P */ 0xa0, 0xcd, 0xfc, 0x51, 0x5e, 0x6a, 0x43, 0xd5, 0x3b, 0x89, 0xcd, 0xc2, 0x97, 0x47, 0xbc, 0x1d, 0x08, 0x4a, 0x22, 0xd3, 0x65, 0x6a, 0x34, 0x19, 0x66, 0xf4, 0x9a, 0x9b, 0xe4, 0x34, 0x50, 0x0f, + /* (2^ 55)P */ 0x6e, 0xb9, 0xe0, 0xa1, 0x67, 0x39, 0x3c, 0xf2, 0x88, 0x4d, 0x7a, 0x86, 0xfa, 0x08, 0x8b, 0xe5, 0x79, 0x16, 0x34, 0xa7, 0xc6, 0xab, 0x2f, 0xfb, 0x46, 0x69, 0x02, 0xb6, 0x1e, 0x38, 0x75, 0x2a, + /* (2^ 56)P */ 0xac, 0x20, 0x94, 0xc1, 0xe4, 0x3b, 0x0a, 0xc8, 0xdc, 0xb6, 0xf2, 0x81, 0xc6, 0xf6, 0xb1, 0x66, 0x88, 0x33, 0xe9, 0x61, 0x67, 0x03, 0xf7, 0x7c, 0xc4, 0xa4, 0x60, 0xa6, 0xd8, 0xbb, 0xab, 0x25, + /* (2^ 57)P */ 0x98, 0x51, 0xfd, 0x14, 0xba, 0x12, 0xea, 0x91, 0xa9, 0xff, 0x3c, 0x4a, 0xfc, 0x50, 0x49, 0x68, 0x28, 0xad, 0xf5, 0x30, 0x21, 0x84, 0x26, 0xf8, 0x41, 0xa4, 0x01, 0x53, 0xf7, 0x88, 0xa9, 0x3e, + /* (2^ 58)P */ 0x6f, 0x8c, 0x5f, 0x69, 0x9a, 0x10, 0x78, 0xc9, 0xf3, 0xc3, 0x30, 0x05, 0x4a, 0xeb, 0x46, 0x17, 0x95, 0x99, 0x45, 0xb4, 0x77, 0x6d, 0x4d, 0x44, 0xc7, 0x5c, 0x4e, 0x05, 0x8c, 0x2b, 0x95, 0x75, + /* (2^ 59)P */ 0xaa, 0xd6, 0xf4, 0x15, 0x79, 0x3f, 0x70, 0xa3, 0xd8, 0x47, 0x26, 0x2f, 0x20, 0x46, 0xc3, 0x66, 0x4b, 0x64, 0x1d, 0x81, 0xdf, 0x69, 0x14, 0xd0, 0x1f, 0xd7, 0xa5, 0x81, 0x7d, 0xa4, 0xfe, 0x77, + /* (2^ 60)P */ 0x81, 0xa3, 0x7c, 0xf5, 0x9e, 0x52, 0xe9, 0xc5, 0x1a, 0x88, 0x2f, 0xce, 0xb9, 0xb4, 0xee, 0x6e, 0xd6, 0x9b, 0x00, 0xe8, 0x28, 0x1a, 0xe9, 0xb6, 0xec, 0x3f, 0xfc, 0x9a, 0x3e, 0xbe, 0x80, 0x4b, + /* (2^ 61)P */ 0xc5, 0xd2, 0xae, 0x26, 0xc5, 0x73, 0x37, 0x7e, 0x9d, 0xa4, 0xc9, 0x53, 0xb4, 0xfc, 0x4a, 0x1b, 0x4d, 0xb2, 0xff, 0xba, 0xd7, 0xbd, 0x20, 0xa9, 0x0e, 0x40, 0x2d, 0x12, 0x9f, 0x69, 0x54, 0x7c, + /* (2^ 62)P */ 0xc8, 0x4b, 0xa9, 0x4f, 0xe1, 0xc8, 0x46, 0xef, 0x5e, 0xed, 0x52, 0x29, 0xce, 0x74, 0xb0, 0xe0, 0xd5, 0x85, 0xd8, 0xdb, 0xe1, 0x50, 0xa4, 0xbe, 0x2c, 0x71, 0x0f, 0x32, 0x49, 0x86, 0xb6, 0x61, + /* (2^ 
63)P */ 0xd1, 0xbd, 0xcc, 0x09, 0x73, 0x5f, 0x48, 0x8a, 0x2d, 0x1a, 0x4d, 0x7d, 0x0d, 0x32, 0x06, 0xbd, 0xf4, 0xbe, 0x2d, 0x32, 0x73, 0x29, 0x23, 0x25, 0x70, 0xf7, 0x17, 0x8c, 0x75, 0xc4, 0x5d, 0x44, + /* (2^ 64)P */ 0x3c, 0x93, 0xc8, 0x7c, 0x17, 0x34, 0x04, 0xdb, 0x9f, 0x05, 0xea, 0x75, 0x21, 0xe8, 0x6f, 0xed, 0x34, 0xdb, 0x53, 0xc0, 0xfd, 0xbe, 0xfe, 0x1e, 0x99, 0xaf, 0x5d, 0xc6, 0x67, 0xe8, 0xdb, 0x4a, + /* (2^ 65)P */ 0xdf, 0x09, 0x06, 0xa9, 0xa2, 0x71, 0xcd, 0x3a, 0x50, 0x40, 0xd0, 0x6d, 0x85, 0x91, 0xe9, 0xe5, 0x3c, 0xc2, 0x57, 0x81, 0x68, 0x9b, 0xc6, 0x1e, 0x4d, 0xfe, 0x5c, 0x88, 0xf6, 0x27, 0x74, 0x69, + /* (2^ 66)P */ 0x51, 0xa8, 0xe1, 0x65, 0x9b, 0x7b, 0xbe, 0xd7, 0xdd, 0x36, 0xc5, 0x22, 0xd5, 0x28, 0x3d, 0xa0, 0x45, 0xb6, 0xd2, 0x8f, 0x65, 0x9d, 0x39, 0x28, 0xe1, 0x41, 0x26, 0x7c, 0xe1, 0xb7, 0xe5, 0x49, + /* (2^ 67)P */ 0xa4, 0x57, 0x04, 0x70, 0x98, 0x3a, 0x8c, 0x6f, 0x78, 0x67, 0xbb, 0x5e, 0xa2, 0xf0, 0x78, 0x50, 0x0f, 0x96, 0x82, 0xc3, 0xcb, 0x3c, 0x3c, 0xd1, 0xb1, 0x84, 0xdf, 0xa7, 0x58, 0x32, 0x00, 0x2e, + /* (2^ 68)P */ 0x1c, 0x6a, 0x29, 0xe6, 0x9b, 0xf3, 0xd1, 0x8a, 0xb2, 0xbf, 0x5f, 0x2a, 0x65, 0xaa, 0xee, 0xc1, 0xcb, 0xf3, 0x26, 0xfd, 0x73, 0x06, 0xee, 0x33, 0xcc, 0x2c, 0x9d, 0xa6, 0x73, 0x61, 0x25, 0x59, + /* (2^ 69)P */ 0x41, 0xfc, 0x18, 0x4e, 0xaa, 0x07, 0xea, 0x41, 0x1e, 0xa5, 0x87, 0x7c, 0x52, 0x19, 0xfc, 0xd9, 0x6f, 0xca, 0x31, 0x58, 0x80, 0xcb, 0xaa, 0xbd, 0x4f, 0x69, 0x16, 0xc9, 0x2d, 0x65, 0x5b, 0x44, + /* (2^ 70)P */ 0x15, 0x23, 0x17, 0xf2, 0xa7, 0xa3, 0x92, 0xce, 0x64, 0x99, 0x1b, 0xe1, 0x2d, 0x28, 0xdc, 0x1e, 0x4a, 0x31, 0x4c, 0xe0, 0xaf, 0x3a, 0x82, 0xa1, 0x86, 0xf5, 0x7c, 0x43, 0x94, 0x2d, 0x0a, 0x79, + /* (2^ 71)P */ 0x09, 0xe0, 0xf6, 0x93, 0xfb, 0x47, 0xc4, 0x71, 0x76, 0x52, 0x84, 0x22, 0x67, 0xa5, 0x22, 0x89, 0x69, 0x51, 0x4f, 0x20, 0x3b, 0x90, 0x70, 0xbf, 0xfe, 0x19, 0xa3, 0x1b, 0x89, 0x89, 0x7a, 0x2f, + /* (2^ 72)P */ 0x0c, 0x14, 0xe2, 0x77, 0xb5, 0x8e, 0xa0, 0x02, 0xf4, 0xdc, 0x7b, 0x42, 0xd4, 0x4e, 0x9a, 0xed, 0xd1, 0x3c, 
0x32, 0xe4, 0x44, 0xec, 0x53, 0x52, 0x5b, 0x35, 0xe9, 0x14, 0x3c, 0x36, 0x88, 0x3e, + /* (2^ 73)P */ 0x8c, 0x0b, 0x11, 0x77, 0x42, 0xc1, 0x66, 0xaa, 0x90, 0x33, 0xa2, 0x10, 0x16, 0x39, 0xe0, 0x1a, 0xa2, 0xc2, 0x3f, 0xc9, 0x12, 0xbd, 0x30, 0x20, 0xab, 0xc7, 0x55, 0x95, 0x57, 0x41, 0xe1, 0x3e, + /* (2^ 74)P */ 0x41, 0x7d, 0x6e, 0x6d, 0x3a, 0xde, 0x14, 0x92, 0xfe, 0x7e, 0xf1, 0x07, 0x86, 0xd8, 0xcd, 0x3c, 0x17, 0x12, 0xe1, 0xf8, 0x88, 0x12, 0x4f, 0x67, 0xd0, 0x93, 0x9f, 0x32, 0x0f, 0x25, 0x82, 0x56, + /* (2^ 75)P */ 0x6e, 0x39, 0x2e, 0x6d, 0x13, 0x0b, 0xf0, 0x6c, 0xbf, 0xde, 0x14, 0x10, 0x6f, 0xf8, 0x4c, 0x6e, 0x83, 0x4e, 0xcc, 0xbf, 0xb5, 0xb1, 0x30, 0x59, 0xb6, 0x16, 0xba, 0x8a, 0xb4, 0x69, 0x70, 0x04, + /* (2^ 76)P */ 0x93, 0x07, 0xb2, 0x69, 0xab, 0xe4, 0x4c, 0x0d, 0x9e, 0xfb, 0xd0, 0x97, 0x1a, 0xb9, 0x4d, 0xb2, 0x1d, 0xd0, 0x00, 0x4e, 0xf5, 0x50, 0xfa, 0xcd, 0xb5, 0xdd, 0x8b, 0x36, 0x85, 0x10, 0x1b, 0x22, + /* (2^ 77)P */ 0xd2, 0xd8, 0xe3, 0xb1, 0x68, 0x94, 0xe5, 0xe7, 0x93, 0x2f, 0x12, 0xbd, 0x63, 0x65, 0xc5, 0x53, 0x09, 0x3f, 0x66, 0xe0, 0x03, 0xa9, 0xe8, 0xee, 0x42, 0x3d, 0xbe, 0xcb, 0x62, 0xa6, 0xef, 0x61, + /* (2^ 78)P */ 0x2a, 0xab, 0x6e, 0xde, 0xdd, 0xdd, 0xf8, 0x2c, 0x31, 0xf2, 0x35, 0x14, 0xd5, 0x0a, 0xf8, 0x9b, 0x73, 0x49, 0xf0, 0xc9, 0xce, 0xda, 0xea, 0x5d, 0x27, 0x9b, 0xd2, 0x41, 0x5d, 0x5b, 0x27, 0x29, + /* (2^ 79)P */ 0x4f, 0xf1, 0xeb, 0x95, 0x08, 0x0f, 0xde, 0xcf, 0xa7, 0x05, 0x49, 0x05, 0x6b, 0xb9, 0xaa, 0xb9, 0xfd, 0x20, 0xc4, 0xa1, 0xd9, 0x0d, 0xe8, 0xca, 0xc7, 0xbb, 0x73, 0x16, 0x2f, 0xbf, 0x63, 0x0a, + /* (2^ 80)P */ 0x8c, 0xbc, 0x8f, 0x95, 0x11, 0x6e, 0x2f, 0x09, 0xad, 0x2f, 0x82, 0x04, 0xe8, 0x81, 0x2a, 0x67, 0x17, 0x25, 0xd5, 0x60, 0x15, 0x35, 0xc8, 0xca, 0xf8, 0x92, 0xf1, 0xc8, 0x22, 0x77, 0x3f, 0x6f, + /* (2^ 81)P */ 0xb7, 0x94, 0xe8, 0xc2, 0xcc, 0x90, 0xba, 0xf8, 0x0d, 0x9f, 0xff, 0x38, 0xa4, 0x57, 0x75, 0x2c, 0x59, 0x23, 0xe5, 0x5a, 0x85, 0x1d, 0x4d, 0x89, 0x69, 0x3d, 0x74, 0x7b, 0x15, 0x22, 0xe1, 0x68, + /* (2^ 82)P */ 0xf3, 0x19, 0xb9, 
0xcf, 0x70, 0x55, 0x7e, 0xd8, 0xb9, 0x8d, 0x79, 0x95, 0xcd, 0xde, 0x2c, 0x3f, 0xce, 0xa2, 0xc0, 0x10, 0x47, 0x15, 0x21, 0x21, 0xb2, 0xc5, 0x6d, 0x24, 0x15, 0xa1, 0x66, 0x3c, + /* (2^ 83)P */ 0x72, 0xcb, 0x4e, 0x29, 0x62, 0xc5, 0xed, 0xcb, 0x16, 0x0b, 0x28, 0x6a, 0xc3, 0x43, 0x71, 0xba, 0x67, 0x8b, 0x07, 0xd4, 0xef, 0xc2, 0x10, 0x96, 0x1e, 0x4b, 0x6a, 0x94, 0x5d, 0x73, 0x44, 0x61, + /* (2^ 84)P */ 0x50, 0x33, 0x5b, 0xd7, 0x1e, 0x11, 0x6f, 0x53, 0x1b, 0xd8, 0x41, 0x20, 0x8c, 0xdb, 0x11, 0x02, 0x3c, 0x41, 0x10, 0x0e, 0x00, 0xb1, 0x3c, 0xf9, 0x76, 0x88, 0x9e, 0x03, 0x3c, 0xfd, 0x9d, 0x14, + /* (2^ 85)P */ 0x5b, 0x15, 0x63, 0x6b, 0xe4, 0xdd, 0x79, 0xd4, 0x76, 0x79, 0x83, 0x3c, 0xe9, 0x15, 0x6e, 0xb6, 0x38, 0xe0, 0x13, 0x1f, 0x3b, 0xe4, 0xfd, 0xda, 0x35, 0x0b, 0x4b, 0x2e, 0x1a, 0xda, 0xaf, 0x5f, + /* (2^ 86)P */ 0x81, 0x75, 0x19, 0x17, 0xdf, 0xbb, 0x00, 0x36, 0xc2, 0xd2, 0x3c, 0xbe, 0x0b, 0x05, 0x72, 0x39, 0x86, 0xbe, 0xd5, 0xbd, 0x6d, 0x90, 0x38, 0x59, 0x0f, 0x86, 0x9b, 0x3f, 0xe4, 0xe5, 0xfc, 0x34, + /* (2^ 87)P */ 0x02, 0x4d, 0xd1, 0x42, 0xcd, 0xa4, 0xa8, 0x75, 0x65, 0xdf, 0x41, 0x34, 0xc5, 0xab, 0x8d, 0x82, 0xd3, 0x31, 0xe1, 0xd2, 0xed, 0xab, 0xdc, 0x33, 0x5f, 0xd2, 0x14, 0xb8, 0x6f, 0xd7, 0xba, 0x3e, + /* (2^ 88)P */ 0x0f, 0xe1, 0x70, 0x6f, 0x56, 0x6f, 0x90, 0xd4, 0x5a, 0x0f, 0x69, 0x51, 0xaa, 0xf7, 0x12, 0x5d, 0xf2, 0xfc, 0xce, 0x76, 0x6e, 0xb1, 0xad, 0x45, 0x99, 0x29, 0x23, 0xad, 0xae, 0x68, 0xf7, 0x01, + /* (2^ 89)P */ 0xbd, 0xfe, 0x48, 0x62, 0x7b, 0xc7, 0x6c, 0x2b, 0xfd, 0xaf, 0x3a, 0xec, 0x28, 0x06, 0xd3, 0x3c, 0x6a, 0x48, 0xef, 0xd4, 0x80, 0x0b, 0x1c, 0xce, 0x23, 0x6c, 0xf6, 0xa6, 0x2e, 0xff, 0x3b, 0x4c, + /* (2^ 90)P */ 0x5f, 0xeb, 0xea, 0x4a, 0x09, 0xc4, 0x2e, 0x3f, 0xa7, 0x2c, 0x37, 0x6e, 0x28, 0x9b, 0xb1, 0x61, 0x1d, 0x70, 0x2a, 0xde, 0x66, 0xa9, 0xef, 0x5e, 0xef, 0xe3, 0x55, 0xde, 0x65, 0x05, 0xb2, 0x23, + /* (2^ 91)P */ 0x57, 0x85, 0xd5, 0x79, 0x52, 0xca, 0x01, 0xe3, 0x4f, 0x87, 0xc2, 0x27, 0xce, 0xd4, 0xb2, 0x07, 0x67, 0x1d, 0xcf, 0x9d, 0x8a, 0xcd, 
0x32, 0xa5, 0x56, 0xff, 0x2b, 0x3f, 0xe2, 0xfe, 0x52, 0x2a, + /* (2^ 92)P */ 0x3d, 0x66, 0xd8, 0x7c, 0xb3, 0xef, 0x24, 0x86, 0x94, 0x75, 0xbd, 0xff, 0x20, 0xac, 0xc7, 0xbb, 0x45, 0x74, 0xd3, 0x82, 0x9c, 0x5e, 0xb8, 0x57, 0x66, 0xec, 0xa6, 0x86, 0xcb, 0x52, 0x30, 0x7b, + /* (2^ 93)P */ 0x1e, 0xe9, 0x25, 0x25, 0xad, 0xf0, 0x82, 0x34, 0xa0, 0xdc, 0x8e, 0xd2, 0x43, 0x80, 0xb6, 0x2c, 0x3a, 0x00, 0x1b, 0x2e, 0x05, 0x6d, 0x4f, 0xaf, 0x0a, 0x1b, 0x78, 0x29, 0x25, 0x8c, 0x5f, 0x18, + /* (2^ 94)P */ 0xd6, 0xe0, 0x0c, 0xd8, 0x5b, 0xde, 0x41, 0xaa, 0xd6, 0xe9, 0x53, 0x68, 0x41, 0xb2, 0x07, 0x94, 0x3a, 0x4c, 0x7f, 0x35, 0x6e, 0xc3, 0x3e, 0x56, 0xce, 0x7b, 0x29, 0x0e, 0xdd, 0xb8, 0xc4, 0x4c, + /* (2^ 95)P */ 0x0e, 0x73, 0xb8, 0xff, 0x52, 0x1a, 0xfc, 0xa2, 0x37, 0x8e, 0x05, 0x67, 0x6e, 0xf1, 0x11, 0x18, 0xe1, 0x4e, 0xdf, 0xcd, 0x66, 0xa3, 0xf9, 0x10, 0x99, 0xf0, 0xb9, 0xa0, 0xc4, 0xa0, 0xf4, 0x72, + /* (2^ 96)P */ 0xa7, 0x4e, 0x3f, 0x66, 0x6f, 0xc0, 0x16, 0x8c, 0xba, 0x0f, 0x97, 0x4e, 0xf7, 0x3a, 0x3b, 0x69, 0x45, 0xc3, 0x9e, 0xd6, 0xf1, 0xe7, 0x02, 0x21, 0x89, 0x80, 0x8a, 0x96, 0xbc, 0x3c, 0xa5, 0x0b, + /* (2^ 97)P */ 0x37, 0x55, 0xa1, 0xfe, 0xc7, 0x9d, 0x3d, 0xca, 0x93, 0x64, 0x53, 0x51, 0xbb, 0x24, 0x68, 0x4c, 0xb1, 0x06, 0x40, 0x84, 0x14, 0x63, 0x88, 0xb9, 0x60, 0xcc, 0x54, 0xb4, 0x2a, 0xa7, 0xd2, 0x40, + /* (2^ 98)P */ 0x75, 0x09, 0x57, 0x12, 0xb7, 0xa1, 0x36, 0x59, 0x57, 0xa6, 0xbd, 0xde, 0x48, 0xd6, 0xb9, 0x91, 0xea, 0x30, 0x43, 0xb6, 0x4b, 0x09, 0x44, 0x33, 0xd0, 0x51, 0xee, 0x12, 0x0d, 0xa1, 0x6b, 0x00, + /* (2^ 99)P */ 0x58, 0x5d, 0xde, 0xf5, 0x68, 0x84, 0x22, 0x19, 0xb0, 0x05, 0xcc, 0x38, 0x4c, 0x2f, 0xb1, 0x0e, 0x90, 0x19, 0x60, 0xd5, 0x9d, 0x9f, 0x03, 0xa1, 0x0b, 0x0e, 0xff, 0x4f, 0xce, 0xd4, 0x02, 0x45, + /* (2^100)P */ 0x89, 0xc1, 0x37, 0x68, 0x10, 0x54, 0x20, 0xeb, 0x3c, 0xb9, 0xd3, 0x6d, 0x4c, 0x54, 0xf6, 0xd0, 0x4f, 0xd7, 0x16, 0xc4, 0x64, 0x70, 0x72, 0x40, 0xf0, 0x2e, 0x50, 0x4b, 0x11, 0xc6, 0x15, 0x6e, + /* (2^101)P */ 0x6b, 0xa7, 0xb1, 0xcf, 0x98, 0xa3, 0xf2, 
0x4d, 0xb1, 0xf6, 0xf2, 0x19, 0x74, 0x6c, 0x25, 0x11, 0x43, 0x60, 0x6e, 0x06, 0x62, 0x79, 0x49, 0x4a, 0x44, 0x5b, 0x35, 0x41, 0xab, 0x3a, 0x5b, 0x70, + /* (2^102)P */ 0xd8, 0xb1, 0x97, 0xd7, 0x36, 0xf5, 0x5e, 0x36, 0xdb, 0xf0, 0xdd, 0x22, 0xd6, 0x6b, 0x07, 0x00, 0x88, 0x5a, 0x57, 0xe0, 0xb0, 0x33, 0xbf, 0x3b, 0x4d, 0xca, 0xe4, 0xc8, 0x05, 0xaa, 0x77, 0x37, + /* (2^103)P */ 0x5f, 0xdb, 0x78, 0x55, 0xc8, 0x45, 0x27, 0x39, 0xe2, 0x5a, 0xae, 0xdb, 0x49, 0x41, 0xda, 0x6f, 0x67, 0x98, 0xdc, 0x8a, 0x0b, 0xb0, 0xf0, 0xb1, 0xa3, 0x1d, 0x6f, 0xd3, 0x37, 0x34, 0x96, 0x09, + /* (2^104)P */ 0x53, 0x38, 0xdc, 0xa5, 0x90, 0x4e, 0x82, 0x7e, 0xbd, 0x5c, 0x13, 0x1f, 0x64, 0xf6, 0xb5, 0xcc, 0xcc, 0x8f, 0xce, 0x87, 0x6c, 0xd8, 0x36, 0x67, 0x9f, 0x24, 0x04, 0x66, 0xe2, 0x3c, 0x5f, 0x62, + /* (2^105)P */ 0x3f, 0xf6, 0x02, 0x95, 0x05, 0xc8, 0x8a, 0xaf, 0x69, 0x14, 0x35, 0x2e, 0x0a, 0xe7, 0x05, 0x0c, 0x05, 0x63, 0x4b, 0x76, 0x9c, 0x2e, 0x29, 0x35, 0xc3, 0x3a, 0xe2, 0xc7, 0x60, 0x43, 0x39, 0x1a, + /* (2^106)P */ 0x64, 0x32, 0x18, 0x51, 0x32, 0xd5, 0xc6, 0xd5, 0x4f, 0xb7, 0xc2, 0x43, 0xbd, 0x5a, 0x06, 0x62, 0x9b, 0x3f, 0x97, 0x3b, 0xd0, 0xf5, 0xfb, 0xb5, 0x5e, 0x6e, 0x20, 0x61, 0x36, 0xda, 0xa3, 0x13, + /* (2^107)P */ 0xe5, 0x94, 0x5d, 0x72, 0x37, 0x58, 0xbd, 0xc6, 0xc5, 0x16, 0x50, 0x20, 0x12, 0x09, 0xe3, 0x18, 0x68, 0x3c, 0x03, 0x70, 0x15, 0xce, 0x88, 0x20, 0x87, 0x79, 0x83, 0x5c, 0x49, 0x1f, 0xba, 0x7f, + /* (2^108)P */ 0x9d, 0x07, 0xf9, 0xf2, 0x23, 0x74, 0x8c, 0x5a, 0xc5, 0x3f, 0x02, 0x34, 0x7b, 0x15, 0x35, 0x17, 0x51, 0xb3, 0xfa, 0xd2, 0x9a, 0xb4, 0xf9, 0xe4, 0x3c, 0xe3, 0x78, 0xc8, 0x72, 0xff, 0x91, 0x66, + /* (2^109)P */ 0x3e, 0xff, 0x5e, 0xdc, 0xde, 0x2a, 0x2c, 0x12, 0xf4, 0x6c, 0x95, 0xd8, 0xf1, 0x4b, 0xdd, 0xf8, 0xda, 0x5b, 0x9e, 0x9e, 0x5d, 0x20, 0x86, 0xeb, 0x43, 0xc7, 0x75, 0xd9, 0xb9, 0x92, 0x9b, 0x04, + /* (2^110)P */ 0x5a, 0xc0, 0xf6, 0xb0, 0x30, 0x97, 0x37, 0xa5, 0x53, 0xa5, 0xf3, 0xc6, 0xac, 0xff, 0xa0, 0x72, 0x6d, 0xcd, 0x0d, 0xb2, 0x34, 0x2c, 0x03, 0xb0, 0x4a, 0x16, 
0xd5, 0x88, 0xbc, 0x9d, 0x0e, 0x47, + /* (2^111)P */ 0x47, 0xc0, 0x37, 0xa2, 0x0c, 0xf1, 0x9c, 0xb1, 0xa2, 0x81, 0x6c, 0x1f, 0x71, 0x66, 0x54, 0xb6, 0x43, 0x0b, 0xd8, 0x6d, 0xd1, 0x1b, 0x32, 0xb3, 0x8e, 0xbe, 0x5f, 0x0c, 0x60, 0x4f, 0xc1, 0x48, + /* (2^112)P */ 0x03, 0xc8, 0xa6, 0x4a, 0x26, 0x1c, 0x45, 0x66, 0xa6, 0x7d, 0xfa, 0xa4, 0x04, 0x39, 0x6e, 0xb6, 0x95, 0x83, 0x12, 0xb3, 0xb0, 0x19, 0x5f, 0xd4, 0x10, 0xbc, 0xc9, 0xc3, 0x27, 0x26, 0x60, 0x31, + /* (2^113)P */ 0x0d, 0xe1, 0xe4, 0x32, 0x48, 0xdc, 0x20, 0x31, 0xf7, 0x17, 0xc7, 0x56, 0x67, 0xc4, 0x20, 0xeb, 0x94, 0x02, 0x28, 0x67, 0x3f, 0x2e, 0xf5, 0x00, 0x09, 0xc5, 0x30, 0x47, 0xc1, 0x4f, 0x6d, 0x56, + /* (2^114)P */ 0x06, 0x72, 0x83, 0xfd, 0x40, 0x5d, 0x3a, 0x7e, 0x7a, 0x54, 0x59, 0x71, 0xdc, 0x26, 0xe9, 0xc1, 0x95, 0x60, 0x8d, 0xa6, 0xfb, 0x30, 0x67, 0x21, 0xa7, 0xce, 0x69, 0x3f, 0x84, 0xc3, 0xe8, 0x22, + /* (2^115)P */ 0x2b, 0x4b, 0x0e, 0x93, 0xe8, 0x74, 0xd0, 0x33, 0x16, 0x58, 0xd1, 0x84, 0x0e, 0x35, 0xe4, 0xb6, 0x65, 0x23, 0xba, 0xd6, 0x6a, 0xc2, 0x34, 0x55, 0xf3, 0xf3, 0xf1, 0x89, 0x2f, 0xc1, 0x73, 0x77, + /* (2^116)P */ 0xaa, 0x62, 0x79, 0xa5, 0x4d, 0x40, 0xba, 0x8c, 0x56, 0xce, 0x99, 0x19, 0xa8, 0x97, 0x98, 0x5b, 0xfc, 0x92, 0x16, 0x12, 0x2f, 0x86, 0x8e, 0x50, 0x91, 0xc2, 0x93, 0xa0, 0x7f, 0x90, 0x81, 0x3a, + /* (2^117)P */ 0x10, 0xa5, 0x25, 0x47, 0xff, 0xd0, 0xde, 0x0d, 0x03, 0xc5, 0x3f, 0x67, 0x10, 0xcc, 0xd8, 0x10, 0x89, 0x4e, 0x1f, 0x9f, 0x1c, 0x15, 0x9d, 0x5b, 0x4c, 0xa4, 0x09, 0xcb, 0xd5, 0xc1, 0xa5, 0x32, + /* (2^118)P */ 0xfb, 0x41, 0x05, 0xb9, 0x42, 0xa4, 0x0a, 0x1e, 0xdb, 0x85, 0xb4, 0xc1, 0x7c, 0xeb, 0x85, 0x5f, 0xe5, 0xf2, 0x9d, 0x8a, 0xce, 0x95, 0xe5, 0xbe, 0x36, 0x22, 0x42, 0x22, 0xc7, 0x96, 0xe4, 0x25, + /* (2^119)P */ 0xb9, 0xe5, 0x0f, 0xcd, 0x46, 0x3c, 0xdf, 0x5e, 0x88, 0x33, 0xa4, 0xd2, 0x7e, 0x5a, 0xe7, 0x34, 0x52, 0xe3, 0x61, 0xd7, 0x11, 0xde, 0x88, 0xe4, 0x5c, 0x54, 0x85, 0xa0, 0x01, 0x8a, 0x87, 0x0e, + /* (2^120)P */ 0x04, 0xbb, 0x21, 0xe0, 0x77, 0x3c, 0x49, 0xba, 0x9a, 0x89, 0xdf, 
0xc7, 0x43, 0x18, 0x4d, 0x2b, 0x67, 0x0d, 0xe8, 0x7a, 0x48, 0x7a, 0xa3, 0x9e, 0x94, 0x17, 0xe4, 0x11, 0x80, 0x95, 0xa9, 0x67, + /* (2^121)P */ 0x65, 0xb0, 0x97, 0x66, 0x1a, 0x05, 0x58, 0x4b, 0xd4, 0xa6, 0x6b, 0x8d, 0x7d, 0x3f, 0xe3, 0x47, 0xc1, 0x46, 0xca, 0x83, 0xd4, 0xa8, 0x4d, 0xbb, 0x0d, 0xdb, 0xc2, 0x81, 0xa1, 0xca, 0xbe, 0x68, + /* (2^122)P */ 0xa5, 0x9a, 0x98, 0x0b, 0xe9, 0x80, 0x89, 0x8d, 0x9b, 0xc9, 0x93, 0x2c, 0x4a, 0xb1, 0x5e, 0xf9, 0xa2, 0x73, 0x6e, 0x79, 0xc4, 0xc7, 0xc6, 0x51, 0x69, 0xb5, 0xef, 0xb5, 0x63, 0x83, 0x22, 0x6e, + /* (2^123)P */ 0xc8, 0x24, 0xd6, 0x2d, 0xb0, 0xc0, 0xbb, 0xc6, 0xee, 0x70, 0x81, 0xec, 0x7d, 0xb4, 0x7e, 0x77, 0xa9, 0xaf, 0xcf, 0x04, 0xa0, 0x15, 0xde, 0x3c, 0x9b, 0xbf, 0x60, 0x71, 0x08, 0xbc, 0xc6, 0x1d, + /* (2^124)P */ 0x02, 0x40, 0xc3, 0xee, 0x43, 0xe0, 0x07, 0x2e, 0x7f, 0xdc, 0x68, 0x7a, 0x67, 0xfc, 0xe9, 0x18, 0x9a, 0x5b, 0xd1, 0x8b, 0x18, 0x03, 0xda, 0xd8, 0x53, 0x82, 0x56, 0x00, 0xbb, 0xc3, 0xfb, 0x48, + /* (2^125)P */ 0xe1, 0x4c, 0x65, 0xfb, 0x4c, 0x7d, 0x54, 0x57, 0xad, 0xe2, 0x58, 0xa0, 0x82, 0x5b, 0x56, 0xd3, 0x78, 0x44, 0x15, 0xbf, 0x0b, 0xaf, 0x3e, 0xf6, 0x18, 0xbb, 0xdf, 0x14, 0xf1, 0x1e, 0x53, 0x47, + /* (2^126)P */ 0x87, 0xc5, 0x78, 0x42, 0x0a, 0x63, 0xec, 0xe1, 0xf3, 0x83, 0x8e, 0xca, 0x46, 0xd5, 0x07, 0x55, 0x2b, 0x0c, 0xdc, 0x3a, 0xc6, 0x35, 0xe1, 0x85, 0x4e, 0x84, 0x82, 0x56, 0xa8, 0xef, 0xa7, 0x0a, + /* (2^127)P */ 0x15, 0xf6, 0xe1, 0xb3, 0xa8, 0x1b, 0x69, 0x72, 0xfa, 0x3f, 0xbe, 0x1f, 0x70, 0xe9, 0xb4, 0x32, 0x68, 0x78, 0xbb, 0x39, 0x2e, 0xd9, 0xb6, 0x97, 0xe8, 0x39, 0x2e, 0xa0, 0xde, 0x53, 0xfe, 0x2c, + /* (2^128)P */ 0xb0, 0x52, 0xcd, 0x85, 0xcd, 0x92, 0x73, 0x68, 0x31, 0x98, 0xe2, 0x10, 0xc9, 0x66, 0xff, 0x27, 0x06, 0x2d, 0x83, 0xa9, 0x56, 0x45, 0x13, 0x97, 0xa0, 0xf8, 0x84, 0x0a, 0x36, 0xb0, 0x9b, 0x26, + /* (2^129)P */ 0x5c, 0xf8, 0x43, 0x76, 0x45, 0x55, 0x6e, 0x70, 0x1b, 0x7d, 0x59, 0x9b, 0x8c, 0xa4, 0x34, 0x37, 0x72, 0xa4, 0xef, 0xc6, 0xe8, 0x91, 0xee, 0x7a, 0xe0, 0xd9, 0xa9, 0x98, 0xc1, 0xab, 
0xd6, 0x5c, + /* (2^130)P */ 0x1a, 0xe4, 0x3c, 0xcb, 0x06, 0xde, 0x04, 0x0e, 0x38, 0xe1, 0x02, 0x34, 0x89, 0xeb, 0xc6, 0xd8, 0x72, 0x37, 0x6e, 0x68, 0xbb, 0x59, 0x46, 0x90, 0xc8, 0xa8, 0x6b, 0x74, 0x71, 0xc3, 0x15, 0x72, + /* (2^131)P */ 0xd9, 0xa2, 0xe4, 0xea, 0x7e, 0xa9, 0x12, 0xfd, 0xc5, 0xf2, 0x94, 0x63, 0x51, 0xb7, 0x14, 0x95, 0x94, 0xf2, 0x08, 0x92, 0x80, 0xd5, 0x6f, 0x26, 0xb9, 0x26, 0x9a, 0x61, 0x85, 0x70, 0x84, 0x5c, + /* (2^132)P */ 0xea, 0x94, 0xd6, 0xfe, 0x10, 0x54, 0x98, 0x52, 0x54, 0xd2, 0x2e, 0x4a, 0x93, 0x5b, 0x90, 0x3c, 0x67, 0xe4, 0x3b, 0x2d, 0x69, 0x47, 0xbb, 0x10, 0xe1, 0xe9, 0xe5, 0x69, 0x2d, 0x3d, 0x3b, 0x06, + /* (2^133)P */ 0xeb, 0x7d, 0xa5, 0xdd, 0xee, 0x26, 0x27, 0x47, 0x91, 0x18, 0xf4, 0x10, 0xae, 0xc4, 0xb6, 0xef, 0x14, 0x76, 0x30, 0x7b, 0x91, 0x41, 0x16, 0x2b, 0x7c, 0x5b, 0xf4, 0xc4, 0x4f, 0x55, 0x7c, 0x11, + /* (2^134)P */ 0x12, 0x88, 0x9d, 0x8f, 0x11, 0xf3, 0x7c, 0xc0, 0x39, 0x79, 0x01, 0x50, 0x20, 0xd8, 0xdb, 0x01, 0x27, 0x28, 0x1b, 0x17, 0xf4, 0x03, 0xe8, 0xd7, 0xea, 0x25, 0xd2, 0x87, 0x74, 0xe8, 0x15, 0x10, + /* (2^135)P */ 0x4d, 0xcc, 0x3a, 0xd2, 0xfe, 0xe3, 0x8d, 0xc5, 0x2d, 0xbe, 0xa7, 0x94, 0xc2, 0x91, 0xdb, 0x50, 0x57, 0xf4, 0x9c, 0x1c, 0x3d, 0xd4, 0x94, 0x0b, 0x4a, 0x52, 0x37, 0x6e, 0xfa, 0x40, 0x16, 0x6b, + /* (2^136)P */ 0x09, 0x0d, 0xda, 0x5f, 0x6c, 0x34, 0x2f, 0x69, 0x51, 0x31, 0x4d, 0xfa, 0x59, 0x1c, 0x0b, 0x20, 0x96, 0xa2, 0x77, 0x07, 0x76, 0x6f, 0xc4, 0xb8, 0xcf, 0xfb, 0xfd, 0x3f, 0x5f, 0x39, 0x38, 0x4b, + /* (2^137)P */ 0x71, 0xd6, 0x54, 0xbe, 0x00, 0x5e, 0xd2, 0x18, 0xa6, 0xab, 0xc8, 0xbe, 0x82, 0x05, 0xd5, 0x60, 0x82, 0xb9, 0x78, 0x3b, 0x26, 0x8f, 0xad, 0x87, 0x32, 0x04, 0xda, 0x9c, 0x4e, 0xf6, 0xfd, 0x50, + /* (2^138)P */ 0xf0, 0xdc, 0x78, 0xc5, 0xaa, 0x67, 0xf5, 0x90, 0x3b, 0x13, 0xa3, 0xf2, 0x0e, 0x9b, 0x1e, 0xef, 0x71, 0xde, 0xd9, 0x42, 0x92, 0xba, 0xeb, 0x0e, 0xc7, 0x01, 0x31, 0xf0, 0x9b, 0x3c, 0x47, 0x15, + /* (2^139)P */ 0x95, 0x80, 0xb7, 0x56, 0xae, 0xe8, 0x77, 0x7c, 0x8e, 0x07, 0x6f, 0x6e, 0x66, 0xe7, 0x78, 
0xb6, 0x1f, 0xba, 0x48, 0x53, 0x61, 0xb9, 0xa0, 0x2d, 0x0b, 0x3f, 0x73, 0xff, 0xc1, 0x31, 0xf9, 0x7c, + /* (2^140)P */ 0x6c, 0x36, 0x0a, 0x0a, 0xf5, 0x57, 0xb3, 0x26, 0x32, 0xd7, 0x87, 0x2b, 0xf4, 0x8c, 0x70, 0xe9, 0xc0, 0xb2, 0x1c, 0xf9, 0xa5, 0xee, 0x3a, 0xc1, 0x4c, 0xbb, 0x43, 0x11, 0x99, 0x0c, 0xd9, 0x35, + /* (2^141)P */ 0xdc, 0xd9, 0xa0, 0xa9, 0x04, 0xc4, 0xc1, 0x47, 0x51, 0xd2, 0x72, 0x19, 0x45, 0x58, 0x9e, 0x65, 0x31, 0x8c, 0xb3, 0x73, 0xc4, 0xa8, 0x75, 0x38, 0x24, 0x1f, 0x56, 0x79, 0xd3, 0x9e, 0xbd, 0x1f, + /* (2^142)P */ 0x8d, 0xc2, 0x1e, 0xd4, 0x6f, 0xbc, 0xfa, 0x11, 0xca, 0x2d, 0x2a, 0xcd, 0xe3, 0xdf, 0xf8, 0x7e, 0x95, 0x45, 0x40, 0x8c, 0x5d, 0x3b, 0xe7, 0x72, 0x27, 0x2f, 0xb7, 0x54, 0x49, 0xfa, 0x35, 0x61, + /* (2^143)P */ 0x9c, 0xb6, 0x24, 0xde, 0xa2, 0x32, 0xfc, 0xcc, 0x88, 0x5d, 0x09, 0x1f, 0x8c, 0x69, 0x55, 0x3f, 0x29, 0xf9, 0xc3, 0x5a, 0xed, 0x50, 0x33, 0xbe, 0xeb, 0x7e, 0x47, 0xca, 0x06, 0xf8, 0x9b, 0x5e, + /* (2^144)P */ 0x68, 0x9f, 0x30, 0x3c, 0xb6, 0x8f, 0xce, 0xe9, 0xf4, 0xf9, 0xe1, 0x65, 0x35, 0xf6, 0x76, 0x53, 0xf1, 0x93, 0x63, 0x5a, 0xb3, 0xcf, 0xaf, 0xd1, 0x06, 0x35, 0x62, 0xe5, 0xed, 0xa1, 0x32, 0x66, + /* (2^145)P */ 0x4c, 0xed, 0x2d, 0x0c, 0x39, 0x6c, 0x7d, 0x0b, 0x1f, 0xcb, 0x04, 0xdf, 0x81, 0x32, 0xcb, 0x56, 0xc7, 0xc3, 0xec, 0x49, 0x12, 0x5a, 0x30, 0x66, 0x2a, 0xa7, 0x8c, 0xa3, 0x60, 0x8b, 0x58, 0x5d, + /* (2^146)P */ 0x2d, 0xf4, 0xe5, 0xe8, 0x78, 0xbf, 0xec, 0xa6, 0xec, 0x3e, 0x8a, 0x3c, 0x4b, 0xb4, 0xee, 0x86, 0x04, 0x16, 0xd2, 0xfb, 0x48, 0x9c, 0x21, 0xec, 0x31, 0x67, 0xc3, 0x17, 0xf5, 0x1a, 0xaf, 0x1a, + /* (2^147)P */ 0xe7, 0xbd, 0x69, 0x67, 0x83, 0xa2, 0x06, 0xc3, 0xdb, 0x2a, 0x1e, 0x2b, 0x62, 0x80, 0x82, 0x20, 0xa6, 0x94, 0xff, 0xfb, 0x1f, 0xf5, 0x27, 0x80, 0x6b, 0xf2, 0x24, 0x11, 0xce, 0xa1, 0xcf, 0x76, + /* (2^148)P */ 0xb6, 0xab, 0x22, 0x24, 0x56, 0x00, 0xeb, 0x18, 0xc3, 0x29, 0x8c, 0x8f, 0xd5, 0xc4, 0x77, 0xf3, 0x1a, 0x56, 0x31, 0xf5, 0x07, 0xc2, 0xbb, 0x4d, 0x27, 0x8a, 0x12, 0x82, 0xf0, 0xb7, 0x53, 0x02, + /* (2^149)P */ 
0xe0, 0x17, 0x2c, 0xb6, 0x1c, 0x09, 0x1f, 0x3d, 0xa9, 0x28, 0x46, 0xd6, 0xab, 0xe1, 0x60, 0x48, 0x53, 0x42, 0x9d, 0x30, 0x36, 0x74, 0xd1, 0x52, 0x76, 0xe5, 0xfa, 0x3e, 0xe1, 0x97, 0x6f, 0x35, + /* (2^150)P */ 0x5b, 0x53, 0x50, 0xa1, 0x1a, 0xe1, 0x51, 0xd3, 0xcc, 0x78, 0xd8, 0x1d, 0xbb, 0x45, 0x6b, 0x3e, 0x98, 0x2c, 0xd9, 0xbe, 0x28, 0x61, 0x77, 0x0c, 0xb8, 0x85, 0x28, 0x03, 0x93, 0xae, 0x34, 0x1d, + /* (2^151)P */ 0xc3, 0xa4, 0x5b, 0xa8, 0x8c, 0x48, 0xa0, 0x4b, 0xce, 0xe6, 0x9c, 0x3c, 0xc3, 0x48, 0x53, 0x98, 0x70, 0xa7, 0xbd, 0x97, 0x6f, 0x4c, 0x12, 0x66, 0x4a, 0x12, 0x54, 0x06, 0x29, 0xa0, 0x81, 0x0f, + /* (2^152)P */ 0xfd, 0x86, 0x9b, 0x56, 0xa6, 0x9c, 0xd0, 0x9e, 0x2d, 0x9a, 0xaf, 0x18, 0xfd, 0x09, 0x10, 0x81, 0x0a, 0xc2, 0xd8, 0x93, 0x3f, 0xd0, 0x08, 0xff, 0x6b, 0xf2, 0xae, 0x9f, 0x19, 0x48, 0xa1, 0x52, + /* (2^153)P */ 0x73, 0x1b, 0x8d, 0x2d, 0xdc, 0xf9, 0x03, 0x3e, 0x70, 0x1a, 0x96, 0x73, 0x18, 0x80, 0x05, 0x42, 0x70, 0x59, 0xa3, 0x41, 0xf0, 0x87, 0xd9, 0xc0, 0x49, 0xd5, 0xc0, 0xa1, 0x15, 0x1f, 0xaa, 0x07, + /* (2^154)P */ 0x24, 0x72, 0xd2, 0x8c, 0xe0, 0x6c, 0xd4, 0xdf, 0x39, 0x42, 0x4e, 0x93, 0x4f, 0x02, 0x0a, 0x6d, 0x59, 0x7b, 0x89, 0x99, 0x63, 0x7a, 0x8a, 0x80, 0xa2, 0x95, 0x3d, 0xe1, 0xe9, 0x56, 0x45, 0x0a, + /* (2^155)P */ 0x45, 0x30, 0xc1, 0xe9, 0x1f, 0x99, 0x1a, 0xd2, 0xb8, 0x51, 0x77, 0xfe, 0x48, 0x85, 0x0e, 0x9b, 0x35, 0x00, 0xf3, 0x4b, 0xcb, 0x43, 0xa6, 0x5d, 0x21, 0xf7, 0x40, 0x39, 0xd6, 0x28, 0xdb, 0x77, + /* (2^156)P */ 0x11, 0x90, 0xdc, 0x4a, 0x61, 0xeb, 0x5e, 0xfc, 0xeb, 0x11, 0xc4, 0xe8, 0x9a, 0x41, 0x29, 0x52, 0x74, 0xcf, 0x1d, 0x7d, 0x78, 0xe7, 0xc3, 0x9e, 0xb5, 0x4c, 0x6e, 0x21, 0x3e, 0x05, 0x0d, 0x34, + /* (2^157)P */ 0xb4, 0xf2, 0x8d, 0xb4, 0x39, 0xaf, 0xc7, 0xca, 0x94, 0x0a, 0xa1, 0x71, 0x28, 0xec, 0xfa, 0xc0, 0xed, 0x75, 0xa5, 0x5c, 0x24, 0x69, 0x0a, 0x14, 0x4c, 0x3a, 0x27, 0x34, 0x71, 0xc3, 0xf1, 0x0c, + /* (2^158)P */ 0xa5, 0xb8, 0x24, 0xc2, 0x6a, 0x30, 0xee, 0xc8, 0xb0, 0x30, 0x49, 0xcb, 0x7c, 0xee, 0xea, 0x57, 0x4f, 0xe7, 0xcb, 
0xaa, 0xbd, 0x06, 0xe8, 0xa1, 0x7d, 0x65, 0xeb, 0x2e, 0x74, 0x62, 0x9a, 0x7d, + /* (2^159)P */ 0x30, 0x48, 0x6c, 0x54, 0xef, 0xb6, 0xb6, 0x9e, 0x2e, 0x6e, 0xb3, 0xdd, 0x1f, 0xca, 0x5c, 0x88, 0x05, 0x71, 0x0d, 0xef, 0x83, 0xf3, 0xb9, 0xe6, 0x12, 0x04, 0x2e, 0x9d, 0xef, 0x4f, 0x65, 0x58, + /* (2^160)P */ 0x26, 0x8e, 0x0e, 0xbe, 0xff, 0xc4, 0x05, 0xa9, 0x6e, 0x81, 0x31, 0x9b, 0xdf, 0xe5, 0x2d, 0x94, 0xe1, 0x88, 0x2e, 0x80, 0x3f, 0x72, 0x7d, 0x49, 0x8d, 0x40, 0x2f, 0x60, 0xea, 0x4d, 0x68, 0x30, + /* (2^161)P */ 0x34, 0xcb, 0xe6, 0xa3, 0x78, 0xa2, 0xe5, 0x21, 0xc4, 0x1d, 0x15, 0x5b, 0x6f, 0x6e, 0xfb, 0xae, 0x15, 0xca, 0x77, 0x9d, 0x04, 0x8e, 0x0b, 0xb3, 0x81, 0x89, 0xb9, 0x53, 0xcf, 0xc9, 0xc3, 0x28, + /* (2^162)P */ 0x2a, 0xdd, 0x6c, 0x55, 0x21, 0xb7, 0x7f, 0x28, 0x74, 0x22, 0x02, 0x97, 0xa8, 0x7c, 0x31, 0x0d, 0x58, 0x32, 0x54, 0x3a, 0x42, 0xc7, 0x68, 0x74, 0x2f, 0x64, 0xb5, 0x4e, 0x46, 0x11, 0x7f, 0x4a, + /* (2^163)P */ 0xa6, 0x3a, 0x19, 0x4d, 0x77, 0xa4, 0x37, 0xa2, 0xa1, 0x29, 0x21, 0xa9, 0x6e, 0x98, 0x65, 0xd8, 0x88, 0x1a, 0x7c, 0xf8, 0xec, 0x15, 0xc5, 0x24, 0xeb, 0xf5, 0x39, 0x5f, 0x57, 0x03, 0x40, 0x60, + /* (2^164)P */ 0x27, 0x9b, 0x0a, 0x57, 0x89, 0xf1, 0xb9, 0x47, 0x78, 0x4b, 0x5e, 0x46, 0xde, 0xce, 0x98, 0x2b, 0x20, 0x5c, 0xb8, 0xdb, 0x51, 0xf5, 0x6d, 0x02, 0x01, 0x19, 0xe2, 0x47, 0x10, 0xd9, 0xfc, 0x74, + /* (2^165)P */ 0xa3, 0xbf, 0xc1, 0x23, 0x0a, 0xa9, 0xe2, 0x13, 0xf6, 0x19, 0x85, 0x47, 0x4e, 0x07, 0xb0, 0x0c, 0x44, 0xcf, 0xf6, 0x3a, 0xbe, 0xcb, 0xf1, 0x5f, 0xbe, 0x2d, 0x81, 0xbe, 0x38, 0x54, 0xfe, 0x67, + /* (2^166)P */ 0xb0, 0x05, 0x0f, 0xa4, 0x4f, 0xf6, 0x3c, 0xd1, 0x87, 0x37, 0x28, 0x32, 0x2f, 0xfb, 0x4d, 0x05, 0xea, 0x2a, 0x0d, 0x7f, 0x5b, 0x91, 0x73, 0x41, 0x4e, 0x0d, 0x61, 0x1f, 0x4f, 0x14, 0x2f, 0x48, + /* (2^167)P */ 0x34, 0x82, 0x7f, 0xb4, 0x01, 0x02, 0x21, 0xf6, 0x90, 0xb9, 0x70, 0x9e, 0x92, 0xe1, 0x0a, 0x5d, 0x7c, 0x56, 0x49, 0xb0, 0x55, 0xf4, 0xd7, 0xdc, 0x01, 0x6f, 0x91, 0xf0, 0xf1, 0xd0, 0x93, 0x7e, + /* (2^168)P */ 0xfa, 0xb4, 0x7d, 0x8a, 
0xf1, 0xcb, 0x79, 0xdd, 0x2f, 0xc6, 0x74, 0x6f, 0xbf, 0x91, 0x83, 0xbe, 0xbd, 0x91, 0x82, 0x4b, 0xd1, 0x45, 0x71, 0x02, 0x05, 0x17, 0xbf, 0x2c, 0xea, 0x73, 0x5a, 0x58, + /* (2^169)P */ 0xb2, 0x0d, 0x8a, 0x92, 0x3e, 0xa0, 0x5c, 0x48, 0xe7, 0x57, 0x28, 0x74, 0xa5, 0x01, 0xfc, 0x10, 0xa7, 0x51, 0xd5, 0xd6, 0xdb, 0x2e, 0x48, 0x2f, 0x8a, 0xdb, 0x8f, 0x04, 0xb5, 0x33, 0x04, 0x0f, + /* (2^170)P */ 0x47, 0x62, 0xdc, 0xd7, 0x8d, 0x2e, 0xda, 0x60, 0x9a, 0x81, 0xd4, 0x8c, 0xd3, 0xc9, 0xb4, 0x88, 0x97, 0x66, 0xf6, 0x01, 0xc0, 0x3a, 0x03, 0x13, 0x75, 0x7d, 0x36, 0x3b, 0xfe, 0x24, 0x3b, 0x27, + /* (2^171)P */ 0xd4, 0xb9, 0xb3, 0x31, 0x6a, 0xf6, 0xe8, 0xc6, 0xd5, 0x49, 0xdf, 0x94, 0xa4, 0x14, 0x15, 0x28, 0xa7, 0x3d, 0xb2, 0xc8, 0xdf, 0x6f, 0x72, 0xd1, 0x48, 0xe5, 0xde, 0x03, 0xd1, 0xe7, 0x3a, 0x4b, + /* (2^172)P */ 0x7e, 0x9d, 0x4b, 0xce, 0x19, 0x6e, 0x25, 0xc6, 0x1c, 0xc6, 0xe3, 0x86, 0xf1, 0x5c, 0x5c, 0xff, 0x45, 0xc1, 0x8e, 0x4b, 0xa3, 0x3c, 0xc6, 0xac, 0x74, 0x65, 0xe6, 0xfe, 0x88, 0x18, 0x62, 0x74, + /* (2^173)P */ 0x1e, 0x0a, 0x29, 0x45, 0x96, 0x40, 0x6f, 0x95, 0x2e, 0x96, 0x3a, 0x26, 0xe3, 0xf8, 0x0b, 0xef, 0x7b, 0x64, 0xc2, 0x5e, 0xeb, 0x50, 0x6a, 0xed, 0x02, 0x75, 0xca, 0x9d, 0x3a, 0x28, 0x94, 0x06, + /* (2^174)P */ 0xd1, 0xdc, 0xa2, 0x43, 0x36, 0x96, 0x9b, 0x76, 0x53, 0x53, 0xfc, 0x09, 0xea, 0xc8, 0xb7, 0x42, 0xab, 0x7e, 0x39, 0x13, 0xee, 0x2a, 0x00, 0x4f, 0x3a, 0xd6, 0xb7, 0x19, 0x2c, 0x5e, 0x00, 0x63, + /* (2^175)P */ 0xea, 0x3b, 0x02, 0x63, 0xda, 0x36, 0x67, 0xca, 0xb7, 0x99, 0x2a, 0xb1, 0x6d, 0x7f, 0x6c, 0x96, 0xe1, 0xc5, 0x37, 0xc5, 0x90, 0x93, 0xe0, 0xac, 0xee, 0x89, 0xaa, 0xa1, 0x63, 0x60, 0x69, 0x0b, + /* (2^176)P */ 0xe5, 0x56, 0x8c, 0x28, 0x97, 0x3e, 0xb0, 0xeb, 0xe8, 0x8b, 0x8c, 0x93, 0x9f, 0x9f, 0x2a, 0x43, 0x71, 0x7f, 0x71, 0x5b, 0x3d, 0xa9, 0xa5, 0xa6, 0x97, 0x9d, 0x8f, 0xe1, 0xc3, 0xb4, 0x5f, 0x1a, + /* (2^177)P */ 0xce, 0xcd, 0x60, 0x1c, 0xad, 0xe7, 0x94, 0x1c, 0xa0, 0xc4, 0x02, 0xfc, 0x43, 0x2a, 0x20, 0xee, 0x20, 0x6a, 0xc4, 0x67, 0xd8, 0xe4, 0xaf, 
0x8d, 0x58, 0x7b, 0xc2, 0x8a, 0x3c, 0x26, 0x10, 0x0a, + /* (2^178)P */ 0x4a, 0x2a, 0x43, 0xe4, 0xdf, 0xa9, 0xde, 0xd0, 0xc5, 0x77, 0x92, 0xbe, 0x7b, 0xf8, 0x6a, 0x85, 0x1a, 0xc7, 0x12, 0xc2, 0xac, 0x72, 0x84, 0xce, 0x91, 0x1e, 0xbb, 0x9b, 0x6d, 0x1b, 0x15, 0x6f, + /* (2^179)P */ 0x6a, 0xd5, 0xee, 0x7c, 0x52, 0x6c, 0x77, 0x26, 0xec, 0xfa, 0xf8, 0xfb, 0xb7, 0x1c, 0x21, 0x7d, 0xcc, 0x09, 0x46, 0xfd, 0xa6, 0x66, 0xae, 0x37, 0x42, 0x0c, 0x77, 0xd2, 0x02, 0xb7, 0x81, 0x1f, + /* (2^180)P */ 0x92, 0x83, 0xc5, 0xea, 0x57, 0xb0, 0xb0, 0x2f, 0x9d, 0x4e, 0x74, 0x29, 0xfe, 0x89, 0xdd, 0xe1, 0xf8, 0xb4, 0xbe, 0x17, 0xeb, 0xf8, 0x64, 0xc9, 0x1e, 0xd4, 0xa2, 0xc9, 0x73, 0x10, 0x57, 0x29, + /* (2^181)P */ 0x54, 0xe2, 0xc0, 0x81, 0x89, 0xa1, 0x48, 0xa9, 0x30, 0x28, 0xb2, 0x65, 0x9b, 0x36, 0xf6, 0x2d, 0xc6, 0xd3, 0xcf, 0x5f, 0xd7, 0xb2, 0x3e, 0xa3, 0x1f, 0xa0, 0x99, 0x41, 0xec, 0xd6, 0x8c, 0x07, + /* (2^182)P */ 0x2f, 0x0d, 0x90, 0xad, 0x41, 0x4a, 0x58, 0x4a, 0x52, 0x4c, 0xc7, 0xe2, 0x78, 0x2b, 0x14, 0x32, 0x78, 0xc9, 0x31, 0x84, 0x33, 0xe8, 0xc4, 0x68, 0xc2, 0x9f, 0x68, 0x08, 0x90, 0xea, 0x69, 0x7f, + /* (2^183)P */ 0x65, 0x82, 0xa3, 0x46, 0x1e, 0xc8, 0xf2, 0x52, 0xfd, 0x32, 0xa8, 0x04, 0x2d, 0x07, 0x78, 0xfd, 0x94, 0x9e, 0x35, 0x25, 0xfa, 0xd5, 0xd7, 0x8c, 0xd2, 0x29, 0xcc, 0x54, 0x74, 0x1b, 0xe7, 0x4d, + /* (2^184)P */ 0xc9, 0x6a, 0xda, 0x1e, 0xad, 0x60, 0xeb, 0x42, 0x3a, 0x9c, 0xc0, 0xdb, 0xdf, 0x37, 0xad, 0x0a, 0x91, 0xc1, 0x3c, 0xe3, 0x71, 0x4b, 0x00, 0x81, 0x3c, 0x80, 0x22, 0x51, 0x34, 0xbe, 0xe6, 0x44, + /* (2^185)P */ 0xdb, 0x20, 0x19, 0xba, 0x88, 0x83, 0xfe, 0x03, 0x08, 0xb0, 0x0d, 0x15, 0x32, 0x7c, 0xd5, 0xf5, 0x29, 0x0c, 0xf6, 0x1a, 0x28, 0xc4, 0xc8, 0x49, 0xee, 0x1a, 0x70, 0xde, 0x18, 0xb5, 0xed, 0x21, + /* (2^186)P */ 0x99, 0xdc, 0x06, 0x8f, 0x41, 0x3e, 0xb6, 0x7f, 0xb8, 0xd7, 0x66, 0xc1, 0x99, 0x0d, 0x46, 0xa4, 0x83, 0x0a, 0x52, 0xce, 0x48, 0x52, 0xdd, 0x24, 0x58, 0x83, 0x92, 0x2b, 0x71, 0xad, 0xc3, 0x5e, + /* (2^187)P */ 0x0f, 0x93, 0x17, 0xbd, 0x5f, 0x2a, 0x02, 0x15, 
0xe3, 0x70, 0x25, 0xd8, 0x77, 0x4a, 0xf6, 0xa4, 0x12, 0x37, 0x78, 0x15, 0x69, 0x8d, 0xbc, 0x12, 0xbb, 0x0a, 0x62, 0xfc, 0xc0, 0x94, 0x81, 0x49, + /* (2^188)P */ 0x82, 0x6c, 0x68, 0x55, 0xd2, 0xd9, 0xa2, 0x38, 0xf0, 0x21, 0x3e, 0x19, 0xd9, 0x6b, 0x5c, 0x78, 0x84, 0x54, 0x4a, 0xb2, 0x1a, 0xc8, 0xd5, 0xe4, 0x89, 0x09, 0xe2, 0xb2, 0x60, 0x78, 0x30, 0x56, + /* (2^189)P */ 0xc4, 0x74, 0x4d, 0x8b, 0xf7, 0x55, 0x9d, 0x42, 0x31, 0x01, 0x35, 0x43, 0x46, 0x83, 0xf1, 0x22, 0xff, 0x1f, 0xc7, 0x98, 0x45, 0xc2, 0x60, 0x1e, 0xef, 0x83, 0x99, 0x97, 0x14, 0xf0, 0xf2, 0x59, + /* (2^190)P */ 0x44, 0x4a, 0x49, 0xeb, 0x56, 0x7d, 0xa4, 0x46, 0x8e, 0xa1, 0x36, 0xd6, 0x54, 0xa8, 0x22, 0x3e, 0x3b, 0x1c, 0x49, 0x74, 0x52, 0xe1, 0x46, 0xb3, 0xe7, 0xcd, 0x90, 0x53, 0x4e, 0xfd, 0xea, 0x2c, + /* (2^191)P */ 0x75, 0x66, 0x0d, 0xbe, 0x38, 0x85, 0x8a, 0xba, 0x23, 0x8e, 0x81, 0x50, 0xbb, 0x74, 0x90, 0x4b, 0xc3, 0x04, 0xd3, 0x85, 0x90, 0xb8, 0xda, 0xcb, 0xc4, 0x92, 0x61, 0xe5, 0xe0, 0x4f, 0xa2, 0x61, + /* (2^192)P */ 0xcb, 0x5b, 0x52, 0xdb, 0xe6, 0x15, 0x76, 0xcb, 0xca, 0xe4, 0x67, 0xa5, 0x35, 0x8c, 0x7d, 0xdd, 0x69, 0xdd, 0xfc, 0xca, 0x3a, 0x15, 0xb4, 0xe6, 0x66, 0x97, 0x3c, 0x7f, 0x09, 0x8e, 0x66, 0x2d, + /* (2^193)P */ 0xf0, 0x5e, 0xe5, 0x5c, 0x26, 0x7e, 0x7e, 0xa5, 0x67, 0xb9, 0xd4, 0x7c, 0x52, 0x4e, 0x9f, 0x5d, 0xe5, 0xd1, 0x2f, 0x49, 0x06, 0x36, 0xc8, 0xfb, 0xae, 0xf7, 0xc3, 0xb7, 0xbe, 0x52, 0x0d, 0x09, + /* (2^194)P */ 0x7c, 0x4d, 0x7b, 0x1e, 0x5a, 0x51, 0xb9, 0x09, 0xc0, 0x44, 0xda, 0x99, 0x25, 0x6a, 0x26, 0x1f, 0x04, 0x55, 0xc5, 0xe2, 0x48, 0x95, 0xc4, 0xa1, 0xcc, 0x15, 0x6f, 0x12, 0x87, 0x42, 0xf0, 0x7e, + /* (2^195)P */ 0x15, 0xef, 0x30, 0xbd, 0x9d, 0x65, 0xd1, 0xfe, 0x7b, 0x27, 0xe0, 0xc4, 0xee, 0xb9, 0x4a, 0x8b, 0x91, 0x32, 0xdf, 0xa5, 0x36, 0x62, 0x4d, 0x88, 0x88, 0xf7, 0x5c, 0xbf, 0xa6, 0x6e, 0xd9, 0x1f, + /* (2^196)P */ 0x9a, 0x0d, 0x19, 0x1f, 0x98, 0x61, 0xa1, 0x42, 0xc1, 0x52, 0x60, 0x7e, 0x50, 0x49, 0xd8, 0x61, 0xd5, 0x2c, 0x5a, 0x28, 0xbf, 0x13, 0xe1, 0x9f, 0xd8, 0x85, 0xad, 
0xdb, 0x76, 0xd6, 0x22, 0x7c, + /* (2^197)P */ 0x7d, 0xd2, 0xfb, 0x2b, 0xed, 0x70, 0xe7, 0x82, 0xa5, 0xf5, 0x96, 0xe9, 0xec, 0xb2, 0x05, 0x4c, 0x50, 0x01, 0x90, 0xb0, 0xc2, 0xa9, 0x40, 0xcd, 0x64, 0xbf, 0xd9, 0x13, 0x92, 0x31, 0x95, 0x58, + /* (2^198)P */ 0x08, 0x2e, 0xea, 0x3f, 0x70, 0x5d, 0xcc, 0xe7, 0x8c, 0x18, 0xe2, 0x58, 0x12, 0x49, 0x0c, 0xb5, 0xf0, 0x5b, 0x20, 0x48, 0xaa, 0x0b, 0xe3, 0xcc, 0x62, 0x2d, 0xa3, 0xcf, 0x9c, 0x65, 0x7c, 0x53, + /* (2^199)P */ 0x88, 0xc0, 0xcf, 0x98, 0x3a, 0x62, 0xb6, 0x37, 0xa4, 0xac, 0xd6, 0xa4, 0x1f, 0xed, 0x9b, 0xfe, 0xb0, 0xd1, 0xa8, 0x56, 0x8e, 0x9b, 0xd2, 0x04, 0x75, 0x95, 0x51, 0x0b, 0xc4, 0x71, 0x5f, 0x72, + /* (2^200)P */ 0xe6, 0x9c, 0x33, 0xd0, 0x9c, 0xf8, 0xc7, 0x28, 0x8b, 0xc1, 0xdd, 0x69, 0x44, 0xb1, 0x67, 0x83, 0x2c, 0x65, 0xa1, 0xa6, 0x83, 0xda, 0x3a, 0x88, 0x17, 0x6c, 0x4d, 0x03, 0x74, 0x19, 0x5f, 0x58, + /* (2^201)P */ 0x88, 0x91, 0xb1, 0xf1, 0x66, 0xb2, 0xcf, 0x89, 0x17, 0x52, 0xc3, 0xe7, 0x63, 0x48, 0x3b, 0xe6, 0x6a, 0x52, 0xc0, 0xb4, 0xa6, 0x9d, 0x8c, 0xd8, 0x35, 0x46, 0x95, 0xf0, 0x9d, 0x5c, 0x03, 0x3e, + /* (2^202)P */ 0x9d, 0xde, 0x45, 0xfb, 0x12, 0x54, 0x9d, 0xdd, 0x0d, 0xf4, 0xcf, 0xe4, 0x32, 0x45, 0x68, 0xdd, 0x1c, 0x67, 0x1d, 0x15, 0x9b, 0x99, 0x5c, 0x4b, 0x90, 0xf6, 0xe7, 0x11, 0xc8, 0x2c, 0x8c, 0x2d, + /* (2^203)P */ 0x40, 0x5d, 0x05, 0x90, 0x1d, 0xbe, 0x54, 0x7f, 0x40, 0xaf, 0x4a, 0x46, 0xdf, 0xc5, 0x64, 0xa4, 0xbe, 0x17, 0xe9, 0xf0, 0x24, 0x96, 0x97, 0x33, 0x30, 0x6b, 0x35, 0x27, 0xc5, 0x8d, 0x01, 0x2c, + /* (2^204)P */ 0xd4, 0xb3, 0x30, 0xe3, 0x24, 0x50, 0x41, 0xa5, 0xd3, 0x52, 0x16, 0x69, 0x96, 0x3d, 0xff, 0x73, 0xf1, 0x59, 0x9b, 0xef, 0xc4, 0x42, 0xec, 0x94, 0x5a, 0x8e, 0xd0, 0x18, 0x16, 0x20, 0x47, 0x07, + /* (2^205)P */ 0x53, 0x1c, 0x41, 0xca, 0x8a, 0xa4, 0x6c, 0x4d, 0x19, 0x61, 0xa6, 0xcf, 0x2f, 0x5f, 0x41, 0x66, 0xff, 0x27, 0xe2, 0x51, 0x00, 0xd4, 0x4d, 0x9c, 0xeb, 0xf7, 0x02, 0x9a, 0xc0, 0x0b, 0x81, 0x59, + /* (2^206)P */ 0x1d, 0x10, 0xdc, 0xb3, 0x71, 0xb1, 0x7e, 0x2a, 0x8e, 0xf6, 0xfe, 0x9f, 
0xb9, 0x5a, 0x1c, 0x44, 0xea, 0x59, 0xb3, 0x93, 0x9b, 0x5c, 0x02, 0x32, 0x2f, 0x11, 0x9d, 0x1e, 0xa7, 0xe0, 0x8c, 0x5e, + /* (2^207)P */ 0xfd, 0x03, 0x95, 0x42, 0x92, 0xcb, 0xcc, 0xbf, 0x55, 0x5d, 0x09, 0x2f, 0x75, 0xba, 0x71, 0xd2, 0x1e, 0x09, 0x2d, 0x97, 0x5e, 0xad, 0x5e, 0x34, 0xba, 0x03, 0x31, 0xa8, 0x11, 0xdf, 0xc8, 0x18, + /* (2^208)P */ 0x4c, 0x0f, 0xed, 0x9a, 0x9a, 0x94, 0xcd, 0x90, 0x7e, 0xe3, 0x60, 0x66, 0xcb, 0xf4, 0xd1, 0xc5, 0x0b, 0x2e, 0xc5, 0x56, 0x2d, 0xc5, 0xca, 0xb8, 0x0d, 0x8e, 0x80, 0xc5, 0x00, 0xe4, 0x42, 0x6e, + /* (2^209)P */ 0x23, 0xfd, 0xae, 0xee, 0x66, 0x69, 0xb4, 0xa3, 0xca, 0xcd, 0x9e, 0xe3, 0x0b, 0x1f, 0x4f, 0x0c, 0x1d, 0xa5, 0x83, 0xd6, 0xc9, 0xc8, 0x9d, 0x18, 0x1b, 0x35, 0x09, 0x4c, 0x05, 0x7f, 0xf2, 0x51, + /* (2^210)P */ 0x82, 0x06, 0x32, 0x2a, 0xcd, 0x7c, 0x48, 0x4c, 0x96, 0x1c, 0xdf, 0xb3, 0x5b, 0xa9, 0x7e, 0x58, 0xe8, 0xb8, 0x5c, 0x55, 0x9e, 0xf7, 0xcc, 0xc8, 0x3d, 0xd7, 0x06, 0xa2, 0x29, 0xc8, 0x7d, 0x54, + /* (2^211)P */ 0x06, 0x9b, 0xc3, 0x80, 0xcd, 0xa6, 0x22, 0xb8, 0xc6, 0xd4, 0x00, 0x20, 0x73, 0x54, 0x6d, 0xe9, 0x4d, 0x3b, 0x46, 0x91, 0x6f, 0x5b, 0x53, 0x28, 0x1d, 0x6e, 0x48, 0xe2, 0x60, 0x46, 0x8f, 0x22, + /* (2^212)P */ 0xbf, 0x3a, 0x8d, 0xde, 0x38, 0x95, 0x79, 0x98, 0x6e, 0xca, 0xeb, 0x45, 0x00, 0x33, 0xd8, 0x8c, 0x38, 0xe7, 0x21, 0x82, 0x00, 0x2a, 0x95, 0x79, 0xbb, 0xd2, 0x5c, 0x53, 0xa7, 0xe1, 0x22, 0x43, + /* (2^213)P */ 0x1c, 0x80, 0xd1, 0x19, 0x18, 0xc1, 0x14, 0xb1, 0xc7, 0x5e, 0x3f, 0x4f, 0xd8, 0xe4, 0x16, 0x20, 0x4c, 0x0f, 0x26, 0x09, 0xf4, 0x2d, 0x0e, 0xdd, 0x66, 0x72, 0x5f, 0xae, 0xc0, 0x62, 0xc3, 0x5e, + /* (2^214)P */ 0xee, 0xb4, 0xb2, 0xb8, 0x18, 0x2b, 0x46, 0xc0, 0xfb, 0x1a, 0x4d, 0x27, 0x50, 0xd9, 0xc8, 0x7c, 0xd2, 0x02, 0x6b, 0x43, 0x05, 0x71, 0x5f, 0xf2, 0xd3, 0xcc, 0xf9, 0xbf, 0xdc, 0xf8, 0xbb, 0x43, + /* (2^215)P */ 0xdf, 0xe9, 0x39, 0xa0, 0x67, 0x17, 0xad, 0xb6, 0x83, 0x35, 0x9d, 0xf6, 0xa8, 0x4d, 0x71, 0xb0, 0xf5, 0x31, 0x29, 0xb4, 0x18, 0xfa, 0x55, 0x5e, 0x61, 0x09, 0xc6, 0x33, 0x8f, 0x55, 0xd5, 
0x4e, + /* (2^216)P */ 0xdd, 0xa5, 0x47, 0xc6, 0x01, 0x79, 0xe3, 0x1f, 0x57, 0xd3, 0x81, 0x80, 0x1f, 0xdf, 0x3d, 0x59, 0xa6, 0xd7, 0x3f, 0x81, 0xfd, 0xa4, 0x49, 0x02, 0x61, 0xaf, 0x9c, 0x4e, 0x27, 0xca, 0xac, 0x69, + /* (2^217)P */ 0xc9, 0x21, 0x07, 0x33, 0xea, 0xa3, 0x7b, 0x04, 0xa0, 0x1e, 0x7e, 0x0e, 0xc2, 0x3f, 0x42, 0x83, 0x60, 0x4a, 0x31, 0x01, 0xaf, 0xc0, 0xf4, 0x1d, 0x27, 0x95, 0x28, 0x89, 0xab, 0x2d, 0xa6, 0x09, + /* (2^218)P */ 0x00, 0xcb, 0xc6, 0x9c, 0xa4, 0x25, 0xb3, 0xa5, 0xb6, 0x6c, 0xb5, 0x54, 0xc6, 0x5d, 0x4b, 0xe9, 0xa0, 0x94, 0xc9, 0xad, 0x79, 0x87, 0xe2, 0x3b, 0xad, 0x4a, 0x3a, 0xba, 0xf8, 0xe8, 0x96, 0x42, + /* (2^219)P */ 0xab, 0x1e, 0x45, 0x1e, 0x76, 0x89, 0x86, 0x32, 0x4a, 0x59, 0x59, 0xff, 0x8b, 0x59, 0x4d, 0x2e, 0x4a, 0x08, 0xa7, 0xd7, 0x53, 0x68, 0xb9, 0x49, 0xa8, 0x20, 0x14, 0x60, 0x19, 0xa3, 0x80, 0x49, + /* (2^220)P */ 0x42, 0x2c, 0x55, 0x2f, 0xe1, 0xb9, 0x65, 0x95, 0x96, 0xfe, 0x00, 0x71, 0xdb, 0x18, 0x53, 0x8a, 0xd7, 0xd0, 0xad, 0x43, 0x4d, 0x0b, 0xc9, 0x05, 0xda, 0x4e, 0x5d, 0x6a, 0xd6, 0x4c, 0x8b, 0x53, + /* (2^221)P */ 0x9f, 0x03, 0x9f, 0xe8, 0xc3, 0x4f, 0xe9, 0xf4, 0x45, 0x80, 0x61, 0x6f, 0xf2, 0x9a, 0x2c, 0x59, 0x50, 0x95, 0x4b, 0xfd, 0xb5, 0x6e, 0xa3, 0x08, 0x19, 0x14, 0xed, 0xc2, 0xf6, 0xfa, 0xff, 0x25, + /* (2^222)P */ 0x54, 0xd3, 0x79, 0xcc, 0x59, 0x44, 0x43, 0x34, 0x6b, 0x47, 0xd5, 0xb1, 0xb4, 0xbf, 0xec, 0xee, 0x99, 0x5d, 0x61, 0x61, 0xa0, 0x34, 0xeb, 0xdd, 0x73, 0xb7, 0x64, 0xeb, 0xcc, 0xce, 0x29, 0x51, + /* (2^223)P */ 0x20, 0x35, 0x99, 0x94, 0x58, 0x21, 0x43, 0xee, 0x3b, 0x0b, 0x4c, 0xf1, 0x7c, 0x9c, 0x2f, 0x77, 0xd5, 0xda, 0xbe, 0x06, 0xe3, 0xfc, 0xe2, 0xd2, 0x97, 0x6a, 0xf0, 0x46, 0xb5, 0x42, 0x5f, 0x71, + /* (2^224)P */ 0x1a, 0x5f, 0x5b, 0xda, 0xce, 0xcd, 0x4e, 0x43, 0xa9, 0x41, 0x97, 0xa4, 0x15, 0x71, 0xa1, 0x0d, 0x2e, 0xad, 0xed, 0x73, 0x7c, 0xd7, 0x0b, 0x68, 0x41, 0x90, 0xdd, 0x4e, 0x35, 0x02, 0x7c, 0x48, + /* (2^225)P */ 0xc4, 0xd9, 0x0e, 0xa7, 0xf3, 0xef, 0xef, 0xb8, 0x02, 0xe3, 0x57, 0xe8, 0xa3, 0x2a, 0xa3, 0x56, 
0xa0, 0xa5, 0xa2, 0x48, 0xbd, 0x68, 0x3a, 0xdf, 0x44, 0xc4, 0x76, 0x31, 0xb7, 0x50, 0xf6, 0x07, + /* (2^226)P */ 0xb1, 0xcc, 0xe0, 0x26, 0x16, 0x9b, 0x8b, 0xe3, 0x36, 0xfb, 0x09, 0x8b, 0xc1, 0x53, 0xe0, 0x79, 0x64, 0x49, 0xf9, 0xc9, 0x19, 0x03, 0xd9, 0x56, 0xc4, 0xf5, 0x9f, 0xac, 0xe7, 0x41, 0xa9, 0x1c, + /* (2^227)P */ 0xbb, 0xa0, 0x2f, 0x16, 0x29, 0xdf, 0xc4, 0x49, 0x05, 0x33, 0xb3, 0x82, 0x32, 0xcf, 0x88, 0x84, 0x7d, 0x43, 0xbb, 0xca, 0x14, 0xda, 0xdf, 0x95, 0x86, 0xad, 0xd5, 0x64, 0x82, 0xf7, 0x91, 0x33, + /* (2^228)P */ 0x5d, 0x09, 0xb5, 0xe2, 0x6a, 0xe0, 0x9a, 0x72, 0x46, 0xa9, 0x59, 0x32, 0xd7, 0x58, 0x8a, 0xd5, 0xed, 0x21, 0x39, 0xd1, 0x62, 0x42, 0x83, 0xe9, 0x92, 0xb5, 0x4b, 0xa5, 0xfa, 0xda, 0xfe, 0x27, + /* (2^229)P */ 0xbb, 0x48, 0xad, 0x29, 0xb8, 0xc5, 0x9d, 0xa9, 0x60, 0xe2, 0x9e, 0x49, 0x42, 0x57, 0x02, 0x5f, 0xfd, 0x13, 0x75, 0x5d, 0xcd, 0x8e, 0x2c, 0x80, 0x38, 0xd9, 0x6d, 0x3f, 0xef, 0xb3, 0xce, 0x78, + /* (2^230)P */ 0x94, 0x5d, 0x13, 0x8a, 0x4f, 0xf4, 0x42, 0xc3, 0xa3, 0xdd, 0x8c, 0x82, 0x44, 0xdb, 0x9e, 0x7b, 0xe7, 0xcf, 0x37, 0x05, 0x1a, 0xd1, 0x36, 0x94, 0xc8, 0xb4, 0x1a, 0xec, 0x64, 0xb1, 0x64, 0x50, + /* (2^231)P */ 0xfc, 0xb2, 0x7e, 0xd3, 0xcf, 0xec, 0x20, 0x70, 0xfc, 0x25, 0x0d, 0xd9, 0x3e, 0xea, 0x31, 0x1f, 0x34, 0xbb, 0xa1, 0xdf, 0x7b, 0x0d, 0x93, 0x1b, 0x44, 0x30, 0x11, 0x48, 0x7a, 0x46, 0x44, 0x53, + /* (2^232)P */ 0xfb, 0x6d, 0x5e, 0xf2, 0x70, 0x31, 0x07, 0x70, 0xc8, 0x4c, 0x11, 0x50, 0x1a, 0xdc, 0x85, 0xe3, 0x00, 0x4f, 0xfc, 0xc8, 0x8a, 0x69, 0x48, 0x23, 0xd8, 0x40, 0xdd, 0x84, 0x52, 0xa5, 0x77, 0x2a, + /* (2^233)P */ 0xe4, 0x6c, 0x8c, 0xc9, 0xe0, 0xaf, 0x06, 0xfe, 0xe4, 0xd6, 0xdf, 0xdd, 0x96, 0xdf, 0x35, 0xc2, 0xd3, 0x1e, 0xbf, 0x33, 0x1e, 0xd0, 0x28, 0x14, 0xaf, 0xbd, 0x00, 0x93, 0xec, 0x68, 0x57, 0x78, + /* (2^234)P */ 0x3b, 0xb6, 0xde, 0x91, 0x7a, 0xe5, 0x02, 0x97, 0x80, 0x8b, 0xce, 0xe5, 0xbf, 0xb8, 0xbd, 0x61, 0xac, 0x58, 0x1d, 0x3d, 0x6f, 0x42, 0x5b, 0x64, 0xbc, 0x57, 0xa5, 0x27, 0x22, 0xa8, 0x04, 0x48, + /* (2^235)P */ 0x01, 
0x26, 0x4d, 0xb4, 0x8a, 0x04, 0x57, 0x8e, 0x35, 0x69, 0x3a, 0x4b, 0x1a, 0x50, 0xd6, 0x68, 0x93, 0xc2, 0xe1, 0xf9, 0xc3, 0x9e, 0x9c, 0xc3, 0xe2, 0x63, 0xde, 0xd4, 0x57, 0xf2, 0x72, 0x41, + /* (2^236)P */ 0x01, 0x64, 0x0c, 0x33, 0x50, 0xb4, 0x68, 0xd3, 0x91, 0x23, 0x8f, 0x41, 0x17, 0x30, 0x0d, 0x04, 0x0d, 0xd9, 0xb7, 0x90, 0x60, 0xbb, 0x34, 0x2c, 0x1f, 0xd5, 0xdf, 0x8f, 0x22, 0x49, 0xf6, 0x16, + /* (2^237)P */ 0xf5, 0x8e, 0x92, 0x2b, 0x8e, 0x81, 0xa6, 0xbe, 0x72, 0x1e, 0xc1, 0xcd, 0x91, 0xcf, 0x8c, 0xe2, 0xcd, 0x36, 0x7a, 0xe7, 0x68, 0xaa, 0x4a, 0x59, 0x0f, 0xfd, 0x7f, 0x6c, 0x80, 0x34, 0x30, 0x31, + /* (2^238)P */ 0x65, 0xbd, 0x49, 0x22, 0xac, 0x27, 0x9d, 0x8a, 0x12, 0x95, 0x8e, 0x01, 0x64, 0xb4, 0xa3, 0x19, 0xc7, 0x7e, 0xb3, 0x52, 0xf3, 0xcf, 0x6c, 0xc2, 0x21, 0x7b, 0x79, 0x1d, 0x34, 0x68, 0x6f, 0x05, + /* (2^239)P */ 0x27, 0x23, 0xfd, 0x7e, 0x75, 0xd6, 0x79, 0x5e, 0x15, 0xfe, 0x3a, 0x55, 0xb6, 0xbc, 0xbd, 0xfa, 0x60, 0x5a, 0xaf, 0x6e, 0x2c, 0x22, 0xe7, 0xd3, 0x3b, 0x74, 0xae, 0x4d, 0x6d, 0xc7, 0x46, 0x70, + /* (2^240)P */ 0x55, 0x4a, 0x8d, 0xb1, 0x72, 0xe8, 0x0b, 0x66, 0x96, 0x14, 0x4e, 0x57, 0x18, 0x25, 0x99, 0x19, 0xbb, 0xdc, 0x2b, 0x30, 0x3a, 0x05, 0x03, 0xc1, 0x8e, 0x8e, 0x21, 0x0b, 0x80, 0xe9, 0xd8, 0x3e, + /* (2^241)P */ 0x3e, 0xe0, 0x75, 0xfa, 0x39, 0x92, 0x0b, 0x7b, 0x83, 0xc0, 0x33, 0x46, 0x68, 0xfb, 0xe9, 0xef, 0x93, 0x77, 0x1a, 0x39, 0xbe, 0x5f, 0xa3, 0x98, 0x34, 0xfe, 0xd0, 0xe2, 0x0f, 0x51, 0x65, 0x60, + /* (2^242)P */ 0x0c, 0xad, 0xab, 0x48, 0x85, 0x66, 0xcb, 0x55, 0x27, 0xe5, 0x87, 0xda, 0x48, 0x45, 0x58, 0xb4, 0xdd, 0xc1, 0x07, 0x01, 0xea, 0xec, 0x43, 0x2c, 0x35, 0xde, 0x72, 0x93, 0x80, 0x28, 0x60, 0x52, + /* (2^243)P */ 0x1f, 0x3b, 0x21, 0xf9, 0x6a, 0xc5, 0x15, 0x34, 0xdb, 0x98, 0x7e, 0x01, 0x4d, 0x1a, 0xee, 0x5b, 0x9b, 0x70, 0xcf, 0xb5, 0x05, 0xb1, 0xf6, 0x13, 0xb6, 0x9a, 0xb2, 0x82, 0x34, 0x0e, 0xf2, 0x5f, + /* (2^244)P */ 0x90, 0x6c, 0x2e, 0xcc, 0x75, 0x9c, 0xa2, 0x0a, 0x06, 0xe2, 0x70, 0x3a, 0xca, 0x73, 0x7d, 0xfc, 0x15, 0xc5, 0xb5, 0xc4, 
0x8f, 0xc3, 0x9f, 0x89, 0x07, 0xc2, 0xff, 0x24, 0xb1, 0x86, 0x03, 0x25, + /* (2^245)P */ 0x56, 0x2b, 0x3d, 0xae, 0xd5, 0x28, 0xea, 0x54, 0xce, 0x60, 0xde, 0xd6, 0x9d, 0x14, 0x13, 0x99, 0xc1, 0xd6, 0x06, 0x8f, 0xc5, 0x4f, 0x69, 0x16, 0xc7, 0x8f, 0x01, 0xeb, 0x75, 0x39, 0xb2, 0x46, + /* (2^246)P */ 0xe2, 0xb4, 0xb7, 0xb4, 0x0f, 0x6a, 0x0a, 0x47, 0xde, 0x53, 0x72, 0x8f, 0x5a, 0x47, 0x92, 0x5d, 0xdb, 0x3a, 0xbd, 0x2f, 0xb5, 0xe5, 0xee, 0xab, 0x68, 0x69, 0x80, 0xa0, 0x01, 0x08, 0xa2, 0x7f, + /* (2^247)P */ 0xd2, 0x14, 0x77, 0x9f, 0xf1, 0xfa, 0xf3, 0x76, 0xc3, 0x60, 0x46, 0x2f, 0xc1, 0x40, 0xe8, 0xb3, 0x4e, 0x74, 0x12, 0xf2, 0x8d, 0xcd, 0xb4, 0x0f, 0xd2, 0x2d, 0x3a, 0x1d, 0x25, 0x5a, 0x06, 0x4b, + /* (2^248)P */ 0x4a, 0xcd, 0x77, 0x3d, 0x38, 0xde, 0xeb, 0x5c, 0xb1, 0x9c, 0x2c, 0x88, 0xdf, 0x39, 0xdf, 0x6a, 0x59, 0xf7, 0x9a, 0xb0, 0x2e, 0x24, 0xdd, 0xa2, 0x22, 0x64, 0x5f, 0x0e, 0xe5, 0xc0, 0x47, 0x31, + /* (2^249)P */ 0xdb, 0x50, 0x13, 0x1d, 0x10, 0xa5, 0x4c, 0x16, 0x62, 0xc9, 0x3f, 0xc3, 0x79, 0x34, 0xd1, 0xf8, 0x08, 0xda, 0xe5, 0x13, 0x4d, 0xce, 0x40, 0xe6, 0xba, 0xf8, 0x61, 0x50, 0xc4, 0xe0, 0xde, 0x4b, + /* (2^250)P */ 0xc9, 0xb1, 0xed, 0xa4, 0xc1, 0x6d, 0xc4, 0xd7, 0x8a, 0xd9, 0x7f, 0x43, 0xb6, 0xd7, 0x14, 0x55, 0x0b, 0xc0, 0xa1, 0xb2, 0x6b, 0x2f, 0x94, 0x58, 0x0e, 0x71, 0x70, 0x1d, 0xab, 0xb2, 0xff, 0x2d, + /* (2^251)P */ 0x68, 0x6d, 0x8b, 0xc1, 0x2f, 0xcf, 0xdf, 0xcc, 0x67, 0x61, 0x80, 0xb7, 0xa8, 0xcb, 0xeb, 0xa8, 0xe3, 0x37, 0x29, 0x5e, 0xf9, 0x97, 0x06, 0x98, 0x8c, 0x6e, 0x12, 0xd0, 0x1c, 0xba, 0xfb, 0x02, + /* (2^252)P */ 0x65, 0x45, 0xff, 0xad, 0x60, 0xc3, 0x98, 0xcb, 0x19, 0x15, 0xdb, 0x4b, 0xd2, 0x01, 0x71, 0x44, 0xd5, 0x15, 0xfb, 0x75, 0x74, 0xc8, 0xc4, 0x98, 0x7d, 0xa2, 0x22, 0x6e, 0x6d, 0xc7, 0xf8, 0x05, + /* (2^253)P */ 0x94, 0xf4, 0xb9, 0xfe, 0xdf, 0xe5, 0x69, 0xab, 0x75, 0x6b, 0x40, 0x18, 0x9d, 0xc7, 0x09, 0xae, 0x1d, 0x2d, 0xa4, 0x94, 0xfb, 0x45, 0x9b, 0x19, 0x84, 0xfa, 0x2a, 0xae, 0xeb, 0x0a, 0x71, 0x79, + /* (2^254)P */ 0xdf, 0xd2, 0x34, 0xf3, 0xa7, 
0xed, 0xad, 0xa6, 0xb4, 0x57, 0x2a, 0xaf, 0x51, 0x9c, 0xde, 0x7b, 0xa8, 0xea, 0xdc, 0x86, 0x4f, 0xc6, 0x8f, 0xa9, 0x7b, 0xd0, 0x0e, 0xc2, 0x35, 0x03, 0xbe, 0x6b, + /* (2^255)P */ 0x44, 0x43, 0x98, 0x53, 0xbe, 0xdc, 0x7f, 0x66, 0xa8, 0x49, 0x59, 0x00, 0x1c, 0xbc, 0x72, 0x07, 0x8e, 0xd6, 0xbe, 0x4e, 0x9f, 0xa4, 0x07, 0xba, 0xbf, 0x30, 0xdf, 0xba, 0x85, 0xb0, 0xa7, 0x1f, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve.go new file mode 100644 index 00000000000..d59564e4b42 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve.go @@ -0,0 +1,104 @@ +package x448 + +import ( + fp "github.com/cloudflare/circl/math/fp448" +) + +// ladderJoye calculates a fixed-point multiplication with the generator point. +// The algorithm is the right-to-left Joye's ladder as described +// in "How to precompute a ladder" in SAC'2017. +func ladderJoye(k *Key) { + w := [5]fp.Elt{} // [mu,x1,z1,x2,z2] order must be preserved. 
+ w[1] = fp.Elt{ // x1 = S + 0xfe, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + } + fp.SetOne(&w[2]) // z1 = 1 + w[3] = fp.Elt{ // x2 = G-S + 0x20, 0x27, 0x9d, 0xc9, 0x7d, 0x19, 0xb1, 0xac, + 0xf8, 0xba, 0x69, 0x1c, 0xff, 0x33, 0xac, 0x23, + 0x51, 0x1b, 0xce, 0x3a, 0x64, 0x65, 0xbd, 0xf1, + 0x23, 0xf8, 0xc1, 0x84, 0x9d, 0x45, 0x54, 0x29, + 0x67, 0xb9, 0x81, 0x1c, 0x03, 0xd1, 0xcd, 0xda, + 0x7b, 0xeb, 0xff, 0x1a, 0x88, 0x03, 0xcf, 0x3a, + 0x42, 0x44, 0x32, 0x01, 0x25, 0xb7, 0xfa, 0xf0, + } + fp.SetOne(&w[4]) // z2 = 1 + + const n = 448 + const h = 2 + swap := uint(1) + for s := 0; s < n-h; s++ { + i := (s + h) / 8 + j := (s + h) % 8 + bit := uint((k[i] >> uint(j)) & 1) + copy(w[0][:], tableGenerator[s*Size:(s+1)*Size]) + diffAdd(&w, swap^bit) + swap = bit + } + for s := 0; s < h; s++ { + double(&w[1], &w[2]) + } + toAffine((*[fp.Size]byte)(k), &w[1], &w[2]) +} + +// ladderMontgomery calculates a generic scalar point multiplication +// The algorithm implemented is the left-to-right Montgomery's ladder. +func ladderMontgomery(k, xP *Key) { + w := [5]fp.Elt{} // [x1, x2, z2, x3, z3] order must be preserved. 
+ w[0] = *(*fp.Elt)(xP) // x1 = xP + fp.SetOne(&w[1]) // x2 = 1 + w[3] = *(*fp.Elt)(xP) // x3 = xP + fp.SetOne(&w[4]) // z3 = 1 + + move := uint(0) + for s := 448 - 1; s >= 0; s-- { + i := s / 8 + j := s % 8 + bit := uint((k[i] >> uint(j)) & 1) + ladderStep(&w, move^bit) + move = bit + } + toAffine((*[fp.Size]byte)(k), &w[1], &w[2]) +} + +func toAffine(k *[fp.Size]byte, x, z *fp.Elt) { + fp.Inv(z, z) + fp.Mul(x, x, z) + _ = fp.ToBytes(k[:], x) +} + +var lowOrderPoints = [3]fp.Elt{ + { /* (0,_,1) point of order 2 on Curve448 */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + }, + { /* (1,_,1) a point of order 4 on the twist of Curve448 */ + 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + }, + { /* (-1,_,1) point of order 4 on Curve448 */ + 0xfe, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + }, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.go new file mode 100644 index 00000000000..a0622666136 --- /dev/null +++ 
b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.go @@ -0,0 +1,30 @@ +//go:build amd64 && !purego +// +build amd64,!purego + +package x448 + +import ( + fp "github.com/cloudflare/circl/math/fp448" + "golang.org/x/sys/cpu" +) + +var hasBmi2Adx = cpu.X86.HasBMI2 && cpu.X86.HasADX + +var _ = hasBmi2Adx + +func double(x, z *fp.Elt) { doubleAmd64(x, z) } +func diffAdd(w *[5]fp.Elt, b uint) { diffAddAmd64(w, b) } +func ladderStep(w *[5]fp.Elt, b uint) { ladderStepAmd64(w, b) } +func mulA24(z, x *fp.Elt) { mulA24Amd64(z, x) } + +//go:noescape +func doubleAmd64(x, z *fp.Elt) + +//go:noescape +func diffAddAmd64(w *[5]fp.Elt, b uint) + +//go:noescape +func ladderStepAmd64(w *[5]fp.Elt, b uint) + +//go:noescape +func mulA24Amd64(z, x *fp.Elt) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.h b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.h new file mode 100644 index 00000000000..8c1ae4d0fbb --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.h @@ -0,0 +1,111 @@ +#define ladderStepLeg \ + addSub(x2,z2) \ + addSub(x3,z3) \ + integerMulLeg(b0,x2,z3) \ + integerMulLeg(b1,x3,z2) \ + reduceFromDoubleLeg(t0,b0) \ + reduceFromDoubleLeg(t1,b1) \ + addSub(t0,t1) \ + cselect(x2,x3,regMove) \ + cselect(z2,z3,regMove) \ + integerSqrLeg(b0,t0) \ + integerSqrLeg(b1,t1) \ + reduceFromDoubleLeg(x3,b0) \ + reduceFromDoubleLeg(z3,b1) \ + integerMulLeg(b0,x1,z3) \ + reduceFromDoubleLeg(z3,b0) \ + integerSqrLeg(b0,x2) \ + integerSqrLeg(b1,z2) \ + reduceFromDoubleLeg(x2,b0) \ + reduceFromDoubleLeg(z2,b1) \ + subtraction(t0,x2,z2) \ + multiplyA24Leg(t1,t0) \ + additionLeg(t1,t1,z2) \ + integerMulLeg(b0,x2,z2) \ + integerMulLeg(b1,t0,t1) \ + reduceFromDoubleLeg(x2,b0) \ + reduceFromDoubleLeg(z2,b1) + +#define ladderStepBmi2Adx \ + addSub(x2,z2) \ + addSub(x3,z3) \ + integerMulAdx(b0,x2,z3) \ + integerMulAdx(b1,x3,z2) \ + reduceFromDoubleAdx(t0,b0) \ + 
reduceFromDoubleAdx(t1,b1) \ + addSub(t0,t1) \ + cselect(x2,x3,regMove) \ + cselect(z2,z3,regMove) \ + integerSqrAdx(b0,t0) \ + integerSqrAdx(b1,t1) \ + reduceFromDoubleAdx(x3,b0) \ + reduceFromDoubleAdx(z3,b1) \ + integerMulAdx(b0,x1,z3) \ + reduceFromDoubleAdx(z3,b0) \ + integerSqrAdx(b0,x2) \ + integerSqrAdx(b1,z2) \ + reduceFromDoubleAdx(x2,b0) \ + reduceFromDoubleAdx(z2,b1) \ + subtraction(t0,x2,z2) \ + multiplyA24Adx(t1,t0) \ + additionAdx(t1,t1,z2) \ + integerMulAdx(b0,x2,z2) \ + integerMulAdx(b1,t0,t1) \ + reduceFromDoubleAdx(x2,b0) \ + reduceFromDoubleAdx(z2,b1) + +#define difAddLeg \ + addSub(x1,z1) \ + integerMulLeg(b0,z1,ui) \ + reduceFromDoubleLeg(z1,b0) \ + addSub(x1,z1) \ + integerSqrLeg(b0,x1) \ + integerSqrLeg(b1,z1) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) \ + integerMulLeg(b0,x1,z2) \ + integerMulLeg(b1,z1,x2) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) + +#define difAddBmi2Adx \ + addSub(x1,z1) \ + integerMulAdx(b0,z1,ui) \ + reduceFromDoubleAdx(z1,b0) \ + addSub(x1,z1) \ + integerSqrAdx(b0,x1) \ + integerSqrAdx(b1,z1) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) \ + integerMulAdx(b0,x1,z2) \ + integerMulAdx(b1,z1,x2) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) + +#define doubleLeg \ + addSub(x1,z1) \ + integerSqrLeg(b0,x1) \ + integerSqrLeg(b1,z1) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) \ + subtraction(t0,x1,z1) \ + multiplyA24Leg(t1,t0) \ + additionLeg(t1,t1,z1) \ + integerMulLeg(b0,x1,z1) \ + integerMulLeg(b1,t0,t1) \ + reduceFromDoubleLeg(x1,b0) \ + reduceFromDoubleLeg(z1,b1) + +#define doubleBmi2Adx \ + addSub(x1,z1) \ + integerSqrAdx(b0,x1) \ + integerSqrAdx(b1,z1) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) \ + subtraction(t0,x1,z1) \ + multiplyA24Adx(t1,t0) \ + additionAdx(t1,t1,z1) \ + integerMulAdx(b0,x1,z1) \ + integerMulAdx(b1,t0,t1) \ + reduceFromDoubleAdx(x1,b0) \ + reduceFromDoubleAdx(z1,b1) diff --git 
a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.s b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.s new file mode 100644 index 00000000000..810aa9e6481 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_amd64.s @@ -0,0 +1,193 @@ +// +build amd64 + +#include "textflag.h" + +// Depends on circl/math/fp448 package +#include "../../math/fp448/fp_amd64.h" +#include "curve_amd64.h" + +// CTE_A24 is (A+2)/4 from Curve448 +#define CTE_A24 39082 + +#define Size 56 + +// multiplyA24Leg multiplies x times CTE_A24 and stores in z +// Uses: AX, DX, R8-R15, FLAGS +// Instr: x86_64, cmov, adx +#define multiplyA24Leg(z,x) \ + MOVQ $CTE_A24, R15; \ + MOVQ 0+x, AX; MULQ R15; MOVQ AX, R8; ;;;;;;;;;;;; MOVQ DX, R9; \ + MOVQ 8+x, AX; MULQ R15; ADDQ AX, R9; ADCQ $0, DX; MOVQ DX, R10; \ + MOVQ 16+x, AX; MULQ R15; ADDQ AX, R10; ADCQ $0, DX; MOVQ DX, R11; \ + MOVQ 24+x, AX; MULQ R15; ADDQ AX, R11; ADCQ $0, DX; MOVQ DX, R12; \ + MOVQ 32+x, AX; MULQ R15; ADDQ AX, R12; ADCQ $0, DX; MOVQ DX, R13; \ + MOVQ 40+x, AX; MULQ R15; ADDQ AX, R13; ADCQ $0, DX; MOVQ DX, R14; \ + MOVQ 48+x, AX; MULQ R15; ADDQ AX, R14; ADCQ $0, DX; \ + MOVQ DX, AX; \ + SHLQ $32, AX; \ + ADDQ DX, R8; MOVQ $0, DX; \ + ADCQ $0, R9; \ + ADCQ $0, R10; \ + ADCQ AX, R11; \ + ADCQ $0, R12; \ + ADCQ $0, R13; \ + ADCQ $0, R14; \ + ADCQ $0, DX; \ + MOVQ DX, AX; \ + SHLQ $32, AX; \ + ADDQ DX, R8; \ + ADCQ $0, R9; \ + ADCQ $0, R10; \ + ADCQ AX, R11; \ + ADCQ $0, R12; \ + ADCQ $0, R13; \ + ADCQ $0, R14; \ + MOVQ R8, 0+z; \ + MOVQ R9, 8+z; \ + MOVQ R10, 16+z; \ + MOVQ R11, 24+z; \ + MOVQ R12, 32+z; \ + MOVQ R13, 40+z; \ + MOVQ R14, 48+z; + +// multiplyA24Adx multiplies x times CTE_A24 and stores in z +// Uses: AX, DX, R8-R14, FLAGS +// Instr: x86_64, bmi2 +#define multiplyA24Adx(z,x) \ + MOVQ $CTE_A24, DX; \ + MULXQ 0+x, R8, R9; \ + MULXQ 8+x, AX, R10; ADDQ AX, R9; \ + MULXQ 16+x, AX, R11; ADCQ AX, R10; \ + MULXQ 24+x, AX, R12; ADCQ AX, R11; \ + MULXQ 
32+x, AX, R13; ADCQ AX, R12; \ + MULXQ 40+x, AX, R14; ADCQ AX, R13; \ + MULXQ 48+x, AX, DX; ADCQ AX, R14; \ + ;;;;;;;;;;;;;;;;;;;; ADCQ $0, DX; \ + MOVQ DX, AX; \ + SHLQ $32, AX; \ + ADDQ DX, R8; MOVQ $0, DX; \ + ADCQ $0, R9; \ + ADCQ $0, R10; \ + ADCQ AX, R11; \ + ADCQ $0, R12; \ + ADCQ $0, R13; \ + ADCQ $0, R14; \ + ADCQ $0, DX; \ + MOVQ DX, AX; \ + SHLQ $32, AX; \ + ADDQ DX, R8; \ + ADCQ $0, R9; \ + ADCQ $0, R10; \ + ADCQ AX, R11; \ + ADCQ $0, R12; \ + ADCQ $0, R13; \ + ADCQ $0, R14; \ + MOVQ R8, 0+z; \ + MOVQ R9, 8+z; \ + MOVQ R10, 16+z; \ + MOVQ R11, 24+z; \ + MOVQ R12, 32+z; \ + MOVQ R13, 40+z; \ + MOVQ R14, 48+z; + +#define mulA24Legacy \ + multiplyA24Leg(0(DI),0(SI)) +#define mulA24Bmi2Adx \ + multiplyA24Adx(0(DI),0(SI)) + +// func mulA24Amd64(z, x *fp448.Elt) +TEXT ·mulA24Amd64(SB),NOSPLIT,$0-16 + MOVQ z+0(FP), DI + MOVQ x+8(FP), SI + CHECK_BMI2ADX(LMA24, mulA24Legacy, mulA24Bmi2Adx) + +// func ladderStepAmd64(w *[5]fp448.Elt, b uint) +// ladderStepAmd64 calculates a point addition and doubling as follows: +// (x2,z2) = 2*(x2,z2) and (x3,z3) = (x2,z2)+(x3,z3) using as a difference (x1,-). +// w = {x1,x2,z2,x3,z3} are five fp448.Elt of 56 bytes. +// stack = (t0,t1) are two fp.Elt of fp.Size bytes, and +// (b0,b1) are two-double precision fp.Elt of 2*fp.Size bytes. 
+TEXT ·ladderStepAmd64(SB),NOSPLIT,$336-16 + // Parameters + #define regWork DI + #define regMove SI + #define x1 0*Size(regWork) + #define x2 1*Size(regWork) + #define z2 2*Size(regWork) + #define x3 3*Size(regWork) + #define z3 4*Size(regWork) + // Local variables + #define t0 0*Size(SP) + #define t1 1*Size(SP) + #define b0 2*Size(SP) + #define b1 4*Size(SP) + MOVQ w+0(FP), regWork + MOVQ b+8(FP), regMove + CHECK_BMI2ADX(LLADSTEP, ladderStepLeg, ladderStepBmi2Adx) + #undef regWork + #undef regMove + #undef x1 + #undef x2 + #undef z2 + #undef x3 + #undef z3 + #undef t0 + #undef t1 + #undef b0 + #undef b1 + +// func diffAddAmd64(work *[5]fp.Elt, swap uint) +// diffAddAmd64 calculates a differential point addition using a precomputed point. +// (x1,z1) = (x1,z1)+(mu) using a difference point (x2,z2) +// work = {mu,x1,z1,x2,z2} are five fp448.Elt of 56 bytes, and +// stack = (b0,b1) are two-double precision fp.Elt of 2*fp.Size bytes. +// This is Equation 7 at https://eprint.iacr.org/2017/264. +TEXT ·diffAddAmd64(SB),NOSPLIT,$224-16 + // Parameters + #define regWork DI + #define regSwap SI + #define ui 0*Size(regWork) + #define x1 1*Size(regWork) + #define z1 2*Size(regWork) + #define x2 3*Size(regWork) + #define z2 4*Size(regWork) + // Local variables + #define b0 0*Size(SP) + #define b1 2*Size(SP) + MOVQ w+0(FP), regWork + MOVQ b+8(FP), regSwap + cswap(x1,x2,regSwap) + cswap(z1,z2,regSwap) + CHECK_BMI2ADX(LDIFADD, difAddLeg, difAddBmi2Adx) + #undef regWork + #undef regSwap + #undef ui + #undef x1 + #undef z1 + #undef x2 + #undef z2 + #undef b0 + #undef b1 + +// func doubleAmd64(x, z *fp448.Elt) +// doubleAmd64 calculates a point doubling (x1,z1) = 2*(x1,z1). +// stack = (t0,t1) are two fp.Elt of fp.Size bytes, and +// (b0,b1) are two-double precision fp.Elt of 2*fp.Size bytes. 
+TEXT ·doubleAmd64(SB),NOSPLIT,$336-16 + // Parameters + #define x1 0(DI) + #define z1 0(SI) + // Local variables + #define t0 0*Size(SP) + #define t1 1*Size(SP) + #define b0 2*Size(SP) + #define b1 4*Size(SP) + MOVQ x+0(FP), DI + MOVQ z+8(FP), SI + CHECK_BMI2ADX(LDOUB,doubleLeg,doubleBmi2Adx) + #undef x1 + #undef z1 + #undef t0 + #undef t1 + #undef b0 + #undef b1 diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_generic.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_generic.go new file mode 100644 index 00000000000..b0b65ccf7eb --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_generic.go @@ -0,0 +1,100 @@ +package x448 + +import ( + "encoding/binary" + "math/bits" + + "github.com/cloudflare/circl/math/fp448" +) + +func doubleGeneric(x, z *fp448.Elt) { + t0, t1 := &fp448.Elt{}, &fp448.Elt{} + fp448.AddSub(x, z) + fp448.Sqr(x, x) + fp448.Sqr(z, z) + fp448.Sub(t0, x, z) + mulA24Generic(t1, t0) + fp448.Add(t1, t1, z) + fp448.Mul(x, x, z) + fp448.Mul(z, t0, t1) +} + +func diffAddGeneric(w *[5]fp448.Elt, b uint) { + mu, x1, z1, x2, z2 := &w[0], &w[1], &w[2], &w[3], &w[4] + fp448.Cswap(x1, x2, b) + fp448.Cswap(z1, z2, b) + fp448.AddSub(x1, z1) + fp448.Mul(z1, z1, mu) + fp448.AddSub(x1, z1) + fp448.Sqr(x1, x1) + fp448.Sqr(z1, z1) + fp448.Mul(x1, x1, z2) + fp448.Mul(z1, z1, x2) +} + +func ladderStepGeneric(w *[5]fp448.Elt, b uint) { + x1, x2, z2, x3, z3 := &w[0], &w[1], &w[2], &w[3], &w[4] + t0 := &fp448.Elt{} + t1 := &fp448.Elt{} + fp448.AddSub(x2, z2) + fp448.AddSub(x3, z3) + fp448.Mul(t0, x2, z3) + fp448.Mul(t1, x3, z2) + fp448.AddSub(t0, t1) + fp448.Cmov(x2, x3, b) + fp448.Cmov(z2, z3, b) + fp448.Sqr(x3, t0) + fp448.Sqr(z3, t1) + fp448.Mul(z3, x1, z3) + fp448.Sqr(x2, x2) + fp448.Sqr(z2, z2) + fp448.Sub(t0, x2, z2) + mulA24Generic(t1, t0) + fp448.Add(t1, t1, z2) + fp448.Mul(x2, x2, z2) + fp448.Mul(z2, t0, t1) +} + +func mulA24Generic(z, x *fp448.Elt) { + const A24 = 39082 + const n 
= 8 + var xx [7]uint64 + for i := range xx { + xx[i] = binary.LittleEndian.Uint64(x[i*n : (i+1)*n]) + } + h0, l0 := bits.Mul64(xx[0], A24) + h1, l1 := bits.Mul64(xx[1], A24) + h2, l2 := bits.Mul64(xx[2], A24) + h3, l3 := bits.Mul64(xx[3], A24) + h4, l4 := bits.Mul64(xx[4], A24) + h5, l5 := bits.Mul64(xx[5], A24) + h6, l6 := bits.Mul64(xx[6], A24) + + l1, c0 := bits.Add64(h0, l1, 0) + l2, c1 := bits.Add64(h1, l2, c0) + l3, c2 := bits.Add64(h2, l3, c1) + l4, c3 := bits.Add64(h3, l4, c2) + l5, c4 := bits.Add64(h4, l5, c3) + l6, c5 := bits.Add64(h5, l6, c4) + l7, _ := bits.Add64(h6, 0, c5) + + l0, c0 = bits.Add64(l0, l7, 0) + l1, c1 = bits.Add64(l1, 0, c0) + l2, c2 = bits.Add64(l2, 0, c1) + l3, c3 = bits.Add64(l3, l7<<32, c2) + l4, c4 = bits.Add64(l4, 0, c3) + l5, c5 = bits.Add64(l5, 0, c4) + l6, l7 = bits.Add64(l6, 0, c5) + + xx[0], c0 = bits.Add64(l0, l7, 0) + xx[1], c1 = bits.Add64(l1, 0, c0) + xx[2], c2 = bits.Add64(l2, 0, c1) + xx[3], c3 = bits.Add64(l3, l7<<32, c2) + xx[4], c4 = bits.Add64(l4, 0, c3) + xx[5], c5 = bits.Add64(l5, 0, c4) + xx[6], _ = bits.Add64(l6, 0, c5) + + for i := range xx { + binary.LittleEndian.PutUint64(z[i*n:(i+1)*n], xx[i]) + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_noasm.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_noasm.go new file mode 100644 index 00000000000..3755b7c83b3 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/curve_noasm.go @@ -0,0 +1,11 @@ +//go:build !amd64 || purego +// +build !amd64 purego + +package x448 + +import fp "github.com/cloudflare/circl/math/fp448" + +func double(x, z *fp.Elt) { doubleGeneric(x, z) } +func diffAdd(w *[5]fp.Elt, b uint) { diffAddGeneric(w, b) } +func ladderStep(w *[5]fp.Elt, b uint) { ladderStepGeneric(w, b) } +func mulA24(z, x *fp.Elt) { mulA24Generic(z, x) } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/doc.go 
b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/doc.go new file mode 100644 index 00000000000..c02904fedae --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/doc.go @@ -0,0 +1,19 @@ +/* +Package x448 provides Diffie-Hellman functions as specified in RFC-7748. + +Validation of public keys. + +The Diffie-Hellman function, as described in RFC-7748 [1], works for any +public key. However, if a different protocol requires contributory +behaviour [2,3], then the public keys must be validated against low-order +points [3,4]. To do that, the Shared function performs this validation +internally and returns false when the public key is invalid (i.e., it +is a low-order point). + +References: + - [1] RFC7748 by Langley, Hamburg, Turner (https://rfc-editor.org/rfc/rfc7748.txt) + - [2] Curve25519 by Bernstein (https://cr.yp.to/ecdh.html) + - [3] Bernstein (https://cr.yp.to/ecdh.html#validate) + - [4] Cremers&Jackson (https://eprint.iacr.org/2019/526) +*/ +package x448 diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/key.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/key.go new file mode 100644 index 00000000000..2fdde51168a --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/key.go @@ -0,0 +1,46 @@ +package x448 + +import ( + "crypto/subtle" + + fp "github.com/cloudflare/circl/math/fp448" +) + +// Size is the length in bytes of a X448 key. +const Size = 56 + +// Key represents a X448 key. +type Key [Size]byte + +func (k *Key) clamp(in *Key) *Key { + *k = *in + k[0] &= 252 + k[55] |= 128 + return k +} + +// isValidPubKey verifies if the public key is not a low-order point. +func (k *Key) isValidPubKey() bool { + fp.Modp((*fp.Elt)(k)) + var isLowOrder int + for _, P := range lowOrderPoints { + isLowOrder |= subtle.ConstantTimeCompare(P[:], k[:]) + } + return isLowOrder == 0 +} + +// KeyGen obtains a public key given a secret key. 
+func KeyGen(public, secret *Key) { + ladderJoye(public.clamp(secret)) +} + +// Shared calculates Alice's shared key from Alice's secret key and Bob's +// public key returning true on success. A failure case happens when the public +// key is a low-order point, thus the shared key is all-zeros and the function +// returns false. +func Shared(shared, secret, public *Key) bool { + validPk := *public + ok := validPk.isValidPubKey() + ladderMontgomery(shared.clamp(secret), &validPk) + return ok +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/table.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/table.go new file mode 100644 index 00000000000..eef53c30f80 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/dh/x448/table.go @@ -0,0 +1,460 @@ +package x448 + +import fp "github.com/cloudflare/circl/math/fp448" + +// tableGenerator contains the set of points: +// +// t[i] = (xi+1)/(xi-1), +// +// where (xi,yi) = 2^iG and G is the generator point +// Size = (448)*(448/8) = 25088 bytes. 
+var tableGenerator = [448 * fp.Size]byte{ + /* (2^ 0)P */ 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f, + /* (2^ 1)P */ 0x37, 0xfa, 0xaa, 0x0d, 0x86, 0xa6, 0x24, 0xe9, 0x6c, 0x95, 0x08, 0x34, 0xba, 0x1a, 0x81, 0x3a, 0xae, 0x01, 0xa5, 0xa7, 0x05, 0x85, 0x96, 0x00, 0x06, 0x5a, 0xd7, 0xff, 0xee, 0x8e, 0x8f, 0x94, 0xd2, 0xdc, 0xd7, 0xfc, 0xe7, 0xe5, 0x99, 0x1d, 0x05, 0x46, 0x43, 0xe8, 0xbc, 0x12, 0xb7, 0xeb, 0x30, 0x5e, 0x7a, 0x85, 0x68, 0xed, 0x9d, 0x28, + /* (2^ 2)P */ 0xf1, 0x7d, 0x08, 0x2b, 0x32, 0x4a, 0x62, 0x80, 0x36, 0xe7, 0xa4, 0x76, 0x5a, 0x2a, 0x1e, 0xf7, 0x9e, 0x3c, 0x40, 0x46, 0x9a, 0x1b, 0x61, 0xc1, 0xbf, 0x1a, 0x1b, 0xae, 0x91, 0x80, 0xa3, 0x76, 0x6c, 0xd4, 0x8f, 0xa4, 0xee, 0x26, 0x39, 0x23, 0xa4, 0x80, 0xf4, 0x66, 0x92, 0xe4, 0xe1, 0x18, 0x76, 0xc5, 0xe2, 0x19, 0x87, 0xd5, 0xc3, 0xe8, + /* (2^ 3)P */ 0xfb, 0xc9, 0xf0, 0x07, 0xf2, 0x93, 0xd8, 0x50, 0x36, 0xed, 0xfb, 0xbd, 0xb2, 0xd3, 0xfc, 0xdf, 0xd5, 0x2a, 0x6e, 0x26, 0x09, 0xce, 0xd4, 0x07, 0x64, 0x9f, 0x40, 0x74, 0xad, 0x98, 0x2f, 0x1c, 0xb6, 0xdc, 0x2d, 0x42, 0xff, 0xbf, 0x97, 0xd8, 0xdb, 0xef, 0x99, 0xca, 0x73, 0x99, 0x1a, 0x04, 0x3b, 0x56, 0x2c, 0x1f, 0x87, 0x9d, 0x9f, 0x03, + /* (2^ 4)P */ 0x4c, 0x35, 0x97, 0xf7, 0x81, 0x2c, 0x84, 0xa6, 0xe0, 0xcb, 0xce, 0x37, 0x4c, 0x21, 0x1c, 0x67, 0xfa, 0xab, 0x18, 0x4d, 0xef, 0xd0, 0xf0, 0x44, 0xa9, 0xfb, 0xc0, 0x8e, 0xda, 0x57, 0xa1, 0xd8, 0xeb, 0x87, 0xf4, 0x17, 0xea, 0x66, 0x0f, 0x16, 0xea, 0xcd, 0x5f, 0x3e, 0x88, 0xea, 0x09, 0x68, 0x40, 0xdf, 0x43, 0xcc, 0x54, 0x61, 0x58, 0xaa, + /* (2^ 5)P */ 0x8d, 0xe7, 0x59, 0xd7, 0x5e, 0x63, 0x37, 0xa7, 0x3f, 0xd1, 0x49, 0x85, 0x01, 0xdd, 0x5e, 0xb3, 0xe6, 0x29, 0xcb, 0x25, 0x93, 0xdd, 0x08, 0x96, 0x83, 0x52, 0x76, 0x85, 0xf5, 0x5d, 
0x02, 0xbf, 0xe9, 0x6d, 0x15, 0x27, 0xc1, 0x09, 0xd1, 0x14, 0x4d, 0x6e, 0xe8, 0xaf, 0x59, 0x58, 0x34, 0x9d, 0x2a, 0x99, 0x85, 0x26, 0xbe, 0x4b, 0x1e, 0xb9, + /* (2^ 6)P */ 0x8d, 0xce, 0x94, 0xe2, 0x18, 0x56, 0x0d, 0x82, 0x8e, 0xdf, 0x85, 0x01, 0x8f, 0x93, 0x3c, 0xc6, 0xbd, 0x61, 0xfb, 0xf4, 0x22, 0xc5, 0x16, 0x87, 0xd1, 0xb1, 0x9e, 0x09, 0xc5, 0x83, 0x2e, 0x4a, 0x07, 0x88, 0xee, 0xe0, 0x29, 0x8d, 0x2e, 0x1f, 0x88, 0xad, 0xfd, 0x18, 0x93, 0xb7, 0xed, 0x42, 0x86, 0x78, 0xf0, 0xb8, 0x70, 0xbe, 0x01, 0x67, + /* (2^ 7)P */ 0xdf, 0x62, 0x2d, 0x94, 0xc7, 0x35, 0x23, 0xda, 0x27, 0xbb, 0x2b, 0xdb, 0x30, 0x80, 0x68, 0x16, 0xa3, 0xae, 0xd7, 0xd2, 0xa7, 0x7c, 0xbf, 0x6a, 0x1d, 0x83, 0xde, 0x96, 0x0a, 0x43, 0xb6, 0x30, 0x37, 0xd6, 0xee, 0x63, 0x59, 0x9a, 0xbf, 0xa3, 0x30, 0x6c, 0xaf, 0x0c, 0xee, 0x3d, 0xcb, 0x35, 0x4b, 0x55, 0x5f, 0x84, 0x85, 0xcb, 0x4f, 0x1e, + /* (2^ 8)P */ 0x9d, 0x04, 0x68, 0x89, 0xa4, 0xa9, 0x0d, 0x87, 0xc1, 0x70, 0xf1, 0xeb, 0xfb, 0x47, 0x0a, 0xf0, 0xde, 0x67, 0xb7, 0x94, 0xcd, 0x36, 0x43, 0xa5, 0x49, 0x43, 0x67, 0xc3, 0xee, 0x3c, 0x6b, 0xec, 0xd0, 0x1a, 0xf4, 0xad, 0xef, 0x06, 0x4a, 0xe8, 0x46, 0x24, 0xd7, 0x93, 0xbf, 0xf0, 0xe3, 0x81, 0x61, 0xec, 0xea, 0x64, 0xfe, 0x67, 0xeb, 0xc7, + /* (2^ 9)P */ 0x95, 0x45, 0x79, 0xcf, 0x2c, 0xfd, 0x9b, 0xfe, 0x84, 0x46, 0x4b, 0x8f, 0xa1, 0xcf, 0xc3, 0x04, 0x94, 0x78, 0xdb, 0xc9, 0xa6, 0x01, 0x75, 0xa4, 0xb4, 0x93, 0x72, 0x43, 0xa7, 0x7d, 0xda, 0x31, 0x38, 0x54, 0xab, 0x4e, 0x3f, 0x89, 0xa6, 0xab, 0x57, 0xc0, 0x16, 0x65, 0xdb, 0x92, 0x96, 0xe4, 0xc8, 0xae, 0xe7, 0x4c, 0x7a, 0xeb, 0xbb, 0x5a, + /* (2^ 10)P */ 0xbe, 0xfe, 0x86, 0xc3, 0x97, 0xe0, 0x6a, 0x18, 0x20, 0x21, 0xca, 0x22, 0x55, 0xa1, 0xeb, 0xf5, 0x74, 0xe5, 0xc9, 0x59, 0xa7, 0x92, 0x65, 0x15, 0x08, 0x71, 0xd1, 0x09, 0x7e, 0x83, 0xfc, 0xbc, 0x5a, 0x93, 0x38, 0x0d, 0x43, 0x42, 0xfd, 0x76, 0x30, 0xe8, 0x63, 0x60, 0x09, 0x8d, 0x6c, 0xd3, 0xf8, 0x56, 0x3d, 0x68, 0x47, 0xab, 0xa0, 0x1d, + /* (2^ 11)P */ 0x38, 0x50, 0x1c, 0xb1, 0xac, 0x88, 0x8f, 0x38, 0xe3, 0x69, 0xe6, 
0xfc, 0x4f, 0x8f, 0xe1, 0x9b, 0xb1, 0x1a, 0x09, 0x39, 0x19, 0xdf, 0xcd, 0x98, 0x7b, 0x64, 0x42, 0xf6, 0x11, 0xea, 0xc7, 0xe8, 0x92, 0x65, 0x00, 0x2c, 0x75, 0xb5, 0x94, 0x1e, 0x5b, 0xa6, 0x66, 0x81, 0x77, 0xf3, 0x39, 0x94, 0xac, 0xbd, 0xe4, 0x2a, 0x66, 0x84, 0x9c, 0x60, + /* (2^ 12)P */ 0xb5, 0xb6, 0xd9, 0x03, 0x67, 0xa4, 0xa8, 0x0a, 0x4a, 0x2b, 0x9d, 0xfa, 0x13, 0xe1, 0x99, 0x25, 0x4a, 0x5c, 0x67, 0xb9, 0xb2, 0xb7, 0xdd, 0x1e, 0xaf, 0xeb, 0x63, 0x41, 0xb6, 0xb9, 0xa0, 0x87, 0x0a, 0xe0, 0x06, 0x07, 0xaa, 0x97, 0xf8, 0xf9, 0x38, 0x4f, 0xdf, 0x0c, 0x40, 0x7c, 0xc3, 0x98, 0xa9, 0x74, 0xf1, 0x5d, 0xda, 0xd1, 0xc0, 0x0a, + /* (2^ 13)P */ 0xf2, 0x0a, 0xab, 0xab, 0x94, 0x50, 0xf0, 0xa3, 0x6f, 0xc6, 0x66, 0xba, 0xa6, 0xdc, 0x44, 0xdd, 0xd6, 0x08, 0xf4, 0xd3, 0xed, 0xb1, 0x40, 0x93, 0xee, 0xf6, 0xb8, 0x8e, 0xb4, 0x7c, 0xb9, 0x82, 0xc9, 0x9d, 0x45, 0x3b, 0x8e, 0x10, 0xcb, 0x70, 0x1e, 0xba, 0x3c, 0x62, 0x50, 0xda, 0xa9, 0x93, 0xb5, 0xd7, 0xd0, 0x6f, 0x29, 0x52, 0x95, 0xae, + /* (2^ 14)P */ 0x14, 0x68, 0x69, 0x23, 0xa8, 0x44, 0x87, 0x9e, 0x22, 0x91, 0xe8, 0x92, 0xdf, 0xf7, 0xae, 0xba, 0x1c, 0x96, 0xe1, 0xc3, 0x94, 0xed, 0x6c, 0x95, 0xae, 0x96, 0xa7, 0x15, 0x9f, 0xf1, 0x17, 0x11, 0x92, 0x42, 0xd5, 0xcd, 0x18, 0xe7, 0xa9, 0xb5, 0x2f, 0xcd, 0xde, 0x6c, 0xc9, 0x7d, 0xfc, 0x7e, 0xbd, 0x7f, 0x10, 0x3d, 0x01, 0x00, 0x8d, 0x95, + /* (2^ 15)P */ 0x3b, 0x76, 0x72, 0xae, 0xaf, 0x84, 0xf2, 0xf7, 0xd1, 0x6d, 0x13, 0x9c, 0x47, 0xe1, 0xb7, 0xa3, 0x19, 0x16, 0xee, 0x75, 0x45, 0xf6, 0x1a, 0x7b, 0x78, 0x49, 0x79, 0x05, 0x86, 0xf0, 0x7f, 0x9f, 0xfc, 0xc4, 0xbd, 0x86, 0xf3, 0x41, 0xa7, 0xfe, 0x01, 0xd5, 0x67, 0x16, 0x10, 0x5b, 0xa5, 0x16, 0xf3, 0x7f, 0x60, 0xce, 0xd2, 0x0c, 0x8e, 0x4b, + /* (2^ 16)P */ 0x4a, 0x07, 0x99, 0x4a, 0x0f, 0x74, 0x91, 0x14, 0x68, 0xb9, 0x48, 0xb7, 0x44, 0x77, 0x9b, 0x4a, 0xe0, 0x68, 0x0e, 0x43, 0x4d, 0x98, 0x98, 0xbf, 0xa8, 0x3a, 0xb7, 0x6d, 0x2a, 0x9a, 0x77, 0x5f, 0x62, 0xf5, 0x6b, 0x4a, 0xb7, 0x7d, 0xe5, 0x09, 0x6b, 0xc0, 0x8b, 0x9c, 0x88, 0x37, 0x33, 0xf2, 0x41, 0xac, 
0x22, 0x1f, 0xcf, 0x3b, 0x82, 0x34, + /* (2^ 17)P */ 0x00, 0xc3, 0x78, 0x42, 0x32, 0x2e, 0xdc, 0xda, 0xb1, 0x96, 0x21, 0xa4, 0xe4, 0xbb, 0xe9, 0x9d, 0xbb, 0x0f, 0x93, 0xed, 0x26, 0x3d, 0xb5, 0xdb, 0x94, 0x31, 0x37, 0x07, 0xa2, 0xb2, 0xd5, 0x99, 0x0d, 0x93, 0xe1, 0xce, 0x3f, 0x0b, 0x96, 0x82, 0x47, 0xfe, 0x60, 0x6f, 0x8f, 0x61, 0x88, 0xd7, 0x05, 0x95, 0x0b, 0x46, 0x06, 0xb7, 0x32, 0x06, + /* (2^ 18)P */ 0x44, 0xf5, 0x34, 0xdf, 0x2f, 0x9c, 0x5d, 0x9f, 0x53, 0x5c, 0x42, 0x8f, 0xc9, 0xdc, 0xd8, 0x40, 0xa2, 0xe7, 0x6a, 0x4a, 0x05, 0xf7, 0x86, 0x77, 0x2b, 0xae, 0x37, 0xed, 0x48, 0xfb, 0xf7, 0x62, 0x7c, 0x17, 0x59, 0x92, 0x41, 0x61, 0x93, 0x38, 0x30, 0xd1, 0xef, 0x54, 0x54, 0x03, 0x17, 0x57, 0x91, 0x15, 0x11, 0x33, 0xb5, 0xfa, 0xfb, 0x17, + /* (2^ 19)P */ 0x29, 0xbb, 0xd4, 0xb4, 0x9c, 0xf1, 0x72, 0x94, 0xce, 0x6a, 0x29, 0xa8, 0x89, 0x18, 0x19, 0xf7, 0xb7, 0xcc, 0xee, 0x9a, 0x02, 0xe3, 0xc0, 0xb1, 0xe0, 0xee, 0x83, 0x78, 0xb4, 0x9e, 0x07, 0x87, 0xdf, 0xb0, 0x82, 0x26, 0x4e, 0xa4, 0x0c, 0x33, 0xaf, 0x40, 0x59, 0xb6, 0xdd, 0x52, 0x45, 0xf0, 0xb4, 0xf6, 0xe8, 0x4e, 0x4e, 0x79, 0x1a, 0x5d, + /* (2^ 20)P */ 0x27, 0x33, 0x4d, 0x4c, 0x6b, 0x4f, 0x75, 0xb1, 0xbc, 0x1f, 0xab, 0x5b, 0x2b, 0xf0, 0x1c, 0x57, 0x86, 0xdd, 0xfd, 0x60, 0xb0, 0x8c, 0xe7, 0x9a, 0xe5, 0x5c, 0xeb, 0x11, 0x3a, 0xda, 0x22, 0x25, 0x99, 0x06, 0x8d, 0xf4, 0xaf, 0x29, 0x7a, 0xc9, 0xe5, 0xd2, 0x16, 0x9e, 0xd4, 0x63, 0x1d, 0x64, 0xa6, 0x47, 0x96, 0x37, 0x6f, 0x93, 0x2c, 0xcc, + /* (2^ 21)P */ 0xc1, 0x94, 0x74, 0x86, 0x75, 0xf2, 0x91, 0x58, 0x23, 0x85, 0x63, 0x76, 0x54, 0xc7, 0xb4, 0x8c, 0xbc, 0x4e, 0xc4, 0xa7, 0xba, 0xa0, 0x55, 0x26, 0x71, 0xd5, 0x33, 0x72, 0xc9, 0xad, 0x1e, 0xf9, 0x5d, 0x78, 0x70, 0x93, 0x4e, 0x85, 0xfc, 0x39, 0x06, 0x73, 0x76, 0xff, 0xe8, 0x64, 0x69, 0x42, 0x45, 0xb2, 0x69, 0xb5, 0x32, 0xe7, 0x2c, 0xde, + /* (2^ 22)P */ 0xde, 0x16, 0xd8, 0x33, 0x49, 0x32, 0xe9, 0x0e, 0x3a, 0x60, 0xee, 0x2e, 0x24, 0x75, 0xe3, 0x9c, 0x92, 0x07, 0xdb, 0xad, 0x92, 0xf5, 0x11, 0xdf, 0xdb, 0xb0, 0x17, 0x5c, 0xd6, 0x1a, 
0x70, 0x00, 0xb7, 0xe2, 0x18, 0xec, 0xdc, 0xc2, 0x02, 0x93, 0xb3, 0xc8, 0x3f, 0x4f, 0x1b, 0x96, 0xe6, 0x33, 0x8c, 0xfb, 0xcc, 0xa5, 0x4e, 0xe8, 0xe7, 0x11, + /* (2^ 23)P */ 0x05, 0x7a, 0x74, 0x52, 0xf8, 0xdf, 0x0d, 0x7c, 0x6a, 0x1a, 0x4e, 0x9a, 0x02, 0x1d, 0xae, 0x77, 0xf8, 0x8e, 0xf9, 0xa2, 0x38, 0x54, 0x50, 0xb2, 0x2c, 0x08, 0x9d, 0x9b, 0x9f, 0xfb, 0x2b, 0x06, 0xde, 0x9d, 0xc2, 0x03, 0x0b, 0x22, 0x2b, 0x10, 0x5b, 0x3a, 0x73, 0x29, 0x8e, 0x3e, 0x37, 0x08, 0x2c, 0x3b, 0xf8, 0x80, 0xc1, 0x66, 0x1e, 0x98, + /* (2^ 24)P */ 0xd8, 0xd6, 0x3e, 0xcd, 0x63, 0x8c, 0x2b, 0x41, 0x81, 0xc0, 0x0c, 0x06, 0x87, 0xd6, 0xe7, 0x92, 0xfe, 0xf1, 0x0c, 0x4a, 0x84, 0x5b, 0xaf, 0x40, 0x53, 0x6f, 0x60, 0xd6, 0x6b, 0x76, 0x4b, 0xc2, 0xad, 0xc9, 0xb6, 0xb6, 0x6a, 0xa2, 0xb3, 0xf5, 0xf5, 0xc2, 0x55, 0x83, 0xb2, 0xd3, 0xe9, 0x41, 0x6c, 0x63, 0x51, 0xb8, 0x81, 0x74, 0xc8, 0x2c, + /* (2^ 25)P */ 0xb2, 0xaf, 0x1c, 0xee, 0x07, 0xb0, 0x58, 0xa8, 0x2c, 0x6a, 0xc9, 0x2d, 0x62, 0x28, 0x75, 0x0c, 0x40, 0xb6, 0x11, 0x33, 0x96, 0x80, 0x28, 0x6d, 0xd5, 0x9e, 0x87, 0x90, 0x01, 0x66, 0x1d, 0x1c, 0xf8, 0xb4, 0x92, 0xac, 0x38, 0x18, 0x05, 0xc2, 0x4c, 0x4b, 0x54, 0x7d, 0x80, 0x46, 0x87, 0x2d, 0x99, 0x8e, 0x70, 0x80, 0x69, 0x71, 0x8b, 0xed, + /* (2^ 26)P */ 0x37, 0xa7, 0x6b, 0x71, 0x36, 0x75, 0x8e, 0xff, 0x0f, 0x42, 0xda, 0x5a, 0x46, 0xa6, 0x97, 0x79, 0x7e, 0x30, 0xb3, 0x8f, 0xc7, 0x3a, 0xa0, 0xcb, 0x1d, 0x9c, 0x78, 0x77, 0x36, 0xc2, 0xe7, 0xf4, 0x2f, 0x29, 0x07, 0xb1, 0x07, 0xfd, 0xed, 0x1b, 0x39, 0x77, 0x06, 0x38, 0x77, 0x0f, 0x50, 0x31, 0x12, 0xbf, 0x92, 0xbf, 0x72, 0x79, 0x54, 0xa9, + /* (2^ 27)P */ 0xbd, 0x4d, 0x46, 0x6b, 0x1a, 0x80, 0x46, 0x2d, 0xed, 0xfd, 0x64, 0x6d, 0x94, 0xbc, 0x4a, 0x6e, 0x0c, 0x12, 0xf6, 0x12, 0xab, 0x54, 0x88, 0xd3, 0x85, 0xac, 0x51, 0xae, 0x6f, 0xca, 0xc4, 0xb7, 0xec, 0x22, 0x54, 0x6d, 0x80, 0xb2, 0x1c, 0x63, 0x33, 0x76, 0x6b, 0x8e, 0x6d, 0x59, 0xcd, 0x73, 0x92, 0x5f, 0xff, 0xad, 0x10, 0x35, 0x70, 0x5f, + /* (2^ 28)P */ 0xb3, 0x84, 0xde, 0xc8, 0x04, 0x43, 0x63, 0xfa, 0x29, 0xd9, 
0xf0, 0x69, 0x65, 0x5a, 0x0c, 0xe8, 0x2e, 0x0b, 0xfe, 0xb0, 0x7a, 0x42, 0xb3, 0xc3, 0xfc, 0xe6, 0xb8, 0x92, 0x29, 0xae, 0xed, 0xec, 0xd5, 0xe8, 0x4a, 0xa1, 0xbd, 0x3b, 0xd3, 0xc0, 0x07, 0xab, 0x65, 0x65, 0x35, 0x9a, 0xa6, 0x5e, 0x78, 0x18, 0x76, 0x1c, 0x15, 0x49, 0xe6, 0x75, + /* (2^ 29)P */ 0x45, 0xb3, 0x92, 0xa9, 0xc3, 0xb8, 0x11, 0x68, 0x64, 0x3a, 0x83, 0x5d, 0xa8, 0x94, 0x6a, 0x9d, 0xaa, 0x27, 0x9f, 0x98, 0x5d, 0xc0, 0x29, 0xf0, 0xc0, 0x4b, 0x14, 0x3c, 0x05, 0xe7, 0xf8, 0xbd, 0x38, 0x22, 0x96, 0x75, 0x65, 0x5e, 0x0d, 0x3f, 0xbb, 0x6f, 0xe8, 0x3f, 0x96, 0x76, 0x9f, 0xba, 0xd9, 0x44, 0x92, 0x96, 0x22, 0xe7, 0x52, 0xe7, + /* (2^ 30)P */ 0xf4, 0xa3, 0x95, 0x90, 0x47, 0xdf, 0x7d, 0xdc, 0xf4, 0x13, 0x87, 0x67, 0x7d, 0x4f, 0x9d, 0xa0, 0x00, 0x46, 0x72, 0x08, 0xc3, 0xa2, 0x7a, 0x3e, 0xe7, 0x6d, 0x52, 0x7c, 0x11, 0x36, 0x50, 0x83, 0x89, 0x64, 0xcb, 0x1f, 0x08, 0x83, 0x46, 0xcb, 0xac, 0xa6, 0xd8, 0x9c, 0x1b, 0xe8, 0x05, 0x47, 0xc7, 0x26, 0x06, 0x83, 0x39, 0xe9, 0xb1, 0x1c, + /* (2^ 31)P */ 0x11, 0xe8, 0xc8, 0x42, 0xbf, 0x30, 0x9c, 0xa3, 0xf1, 0x85, 0x96, 0x95, 0x4f, 0x4f, 0x52, 0xa2, 0xf5, 0x8b, 0x68, 0x24, 0x16, 0xac, 0x9b, 0xa9, 0x27, 0x28, 0x0e, 0x84, 0x03, 0x46, 0x22, 0x5f, 0xf7, 0x0d, 0xa6, 0x85, 0x88, 0xc1, 0x45, 0x4b, 0x85, 0x1a, 0x10, 0x7f, 0xc9, 0x94, 0x20, 0xb0, 0x04, 0x28, 0x12, 0x30, 0xb9, 0xe6, 0x40, 0x6b, + /* (2^ 32)P */ 0xac, 0x1b, 0x57, 0xb6, 0x42, 0xdb, 0x81, 0x8d, 0x76, 0xfd, 0x9b, 0x1c, 0x29, 0x30, 0xd5, 0x3a, 0xcc, 0x53, 0xd9, 0x26, 0x7a, 0x0f, 0x9c, 0x2e, 0x79, 0xf5, 0x62, 0xeb, 0x61, 0x9d, 0x9b, 0x80, 0x39, 0xcd, 0x60, 0x2e, 0x1f, 0x08, 0x22, 0xbc, 0x19, 0xb3, 0x2a, 0x43, 0x44, 0xf2, 0x4e, 0x66, 0xf4, 0x36, 0xa6, 0xa7, 0xbc, 0xa4, 0x15, 0x7e, + /* (2^ 33)P */ 0xc1, 0x90, 0x8a, 0xde, 0xff, 0x78, 0xc3, 0x73, 0x16, 0xee, 0x76, 0xa0, 0x84, 0x60, 0x8d, 0xe6, 0x82, 0x0f, 0xde, 0x4e, 0xc5, 0x99, 0x34, 0x06, 0x90, 0x44, 0x55, 0xf8, 0x91, 0xd8, 0xe1, 0xe4, 0x2c, 0x8a, 0xde, 0x94, 0x1e, 0x78, 0x25, 0x3d, 0xfd, 0xd8, 0x59, 0x7d, 0xaf, 0x6e, 0xbe, 0x96, 0xbe, 
0x3c, 0x16, 0x23, 0x0f, 0x4c, 0xa4, 0x28, + /* (2^ 34)P */ 0xba, 0x11, 0x35, 0x57, 0x03, 0xb6, 0xf4, 0x24, 0x89, 0xb8, 0x5a, 0x0d, 0x50, 0x9c, 0xaa, 0x51, 0x7f, 0xa4, 0x0e, 0xfc, 0x71, 0xb3, 0x3b, 0xf1, 0x96, 0x50, 0x23, 0x15, 0xf5, 0xf5, 0xd4, 0x23, 0xdc, 0x8b, 0x26, 0x9e, 0xae, 0xb7, 0x50, 0xcd, 0xc4, 0x25, 0xf6, 0x75, 0x40, 0x9c, 0x37, 0x79, 0x33, 0x60, 0xd4, 0x4b, 0x13, 0x32, 0xee, 0xe2, + /* (2^ 35)P */ 0x43, 0xb8, 0x56, 0x59, 0xf0, 0x68, 0x23, 0xb3, 0xea, 0x70, 0x58, 0x4c, 0x1e, 0x5a, 0x16, 0x54, 0x03, 0xb2, 0xf4, 0x73, 0xb6, 0xd9, 0x5c, 0x9c, 0x6f, 0xcf, 0x82, 0x2e, 0x54, 0x15, 0x46, 0x2c, 0xa3, 0xda, 0x4e, 0x87, 0xf5, 0x2b, 0xba, 0x91, 0xa3, 0xa0, 0x89, 0xba, 0x48, 0x2b, 0xfa, 0x64, 0x02, 0x7f, 0x78, 0x03, 0xd1, 0xe8, 0x3b, 0xe9, + /* (2^ 36)P */ 0x15, 0xa4, 0x71, 0xd4, 0x0c, 0x24, 0xe9, 0x07, 0xa1, 0x43, 0xf4, 0x7f, 0xbb, 0xa2, 0xa6, 0x6b, 0xfa, 0xb7, 0xea, 0x58, 0xd1, 0x96, 0xb0, 0x24, 0x5c, 0xc7, 0x37, 0x4e, 0x60, 0x0f, 0x40, 0xf2, 0x2f, 0x44, 0x70, 0xea, 0x80, 0x63, 0xfe, 0xfc, 0x46, 0x59, 0x12, 0x27, 0xb5, 0x27, 0xfd, 0xb7, 0x73, 0x0b, 0xca, 0x8b, 0xc2, 0xd3, 0x71, 0x08, + /* (2^ 37)P */ 0x26, 0x0e, 0xd7, 0x52, 0x6f, 0xf1, 0xf2, 0x9d, 0xb8, 0x3d, 0xbd, 0xd4, 0x75, 0x97, 0xd8, 0xbf, 0xa8, 0x86, 0x96, 0xa5, 0x80, 0xa0, 0x45, 0x75, 0xf6, 0x77, 0x71, 0xdb, 0x77, 0x96, 0x55, 0x99, 0x31, 0xd0, 0x4f, 0x34, 0xf4, 0x35, 0x39, 0x41, 0xd3, 0x7d, 0xf7, 0xe2, 0x74, 0xde, 0xbe, 0x5b, 0x1f, 0x39, 0x10, 0x21, 0xa3, 0x4d, 0x3b, 0xc8, + /* (2^ 38)P */ 0x04, 0x00, 0x2a, 0x45, 0xb2, 0xaf, 0x9b, 0x18, 0x6a, 0xeb, 0x96, 0x28, 0xa4, 0x77, 0xd0, 0x13, 0xcf, 0x17, 0x65, 0xe8, 0xc5, 0x81, 0x28, 0xad, 0x39, 0x7a, 0x0b, 0xaa, 0x55, 0x2b, 0xf3, 0xfc, 0x86, 0x40, 0xad, 0x0d, 0x1e, 0x28, 0xa2, 0x2d, 0xc5, 0xd6, 0x04, 0x15, 0xa2, 0x30, 0x3d, 0x12, 0x8e, 0xd6, 0xb5, 0xf7, 0x69, 0xbb, 0x84, 0x20, + /* (2^ 39)P */ 0xd7, 0x7a, 0x77, 0x2c, 0xfb, 0x81, 0x80, 0xe9, 0x1e, 0xc6, 0x36, 0x31, 0x79, 0xc3, 0x7c, 0xa9, 0x57, 0x6b, 0xb5, 0x70, 0xfb, 0xe4, 0xa1, 0xff, 0xfd, 0x21, 0xa5, 0x7c, 0xfa, 
0x44, 0xba, 0x0d, 0x96, 0x3d, 0xc4, 0x5c, 0x39, 0x52, 0x87, 0xd7, 0x22, 0x0f, 0x52, 0x88, 0x91, 0x87, 0x96, 0xac, 0xfa, 0x3b, 0xdf, 0xdc, 0x83, 0x8c, 0x99, 0x29, + /* (2^ 40)P */ 0x98, 0x6b, 0x3a, 0x8d, 0x83, 0x17, 0xe1, 0x62, 0xd8, 0x80, 0x4c, 0x97, 0xce, 0x6b, 0xaa, 0x10, 0xa7, 0xc4, 0xe9, 0xeb, 0xa5, 0xfb, 0xc9, 0xdd, 0x2d, 0xeb, 0xfc, 0x9a, 0x71, 0xcd, 0x68, 0x6e, 0xc0, 0x35, 0x64, 0x62, 0x1b, 0x95, 0x12, 0xe8, 0x53, 0xec, 0xf0, 0xf4, 0x86, 0x86, 0x78, 0x18, 0xc4, 0xc6, 0xbc, 0x5a, 0x59, 0x8f, 0x7c, 0x7e, + /* (2^ 41)P */ 0x7f, 0xd7, 0x1e, 0xc5, 0x83, 0xdc, 0x1f, 0xbe, 0x0b, 0xcf, 0x2e, 0x01, 0x01, 0xed, 0xac, 0x17, 0x3b, 0xed, 0xa4, 0x30, 0x96, 0x0e, 0x14, 0x7e, 0x19, 0x2b, 0xa5, 0x67, 0x1e, 0xb3, 0x34, 0x03, 0xa8, 0xbb, 0x0a, 0x7d, 0x08, 0x2d, 0xd5, 0x53, 0x19, 0x6f, 0x13, 0xd5, 0xc0, 0x90, 0x8a, 0xcc, 0xc9, 0x5c, 0xab, 0x24, 0xd7, 0x03, 0xf6, 0x57, + /* (2^ 42)P */ 0x49, 0xcb, 0xb4, 0x96, 0x5f, 0xa6, 0xf8, 0x71, 0x6f, 0x59, 0xad, 0x05, 0x24, 0x2d, 0xaf, 0x67, 0xa8, 0xbe, 0x95, 0xdf, 0x0d, 0x28, 0x5a, 0x7f, 0x6e, 0x87, 0x8c, 0x6e, 0x67, 0x0c, 0xf4, 0xe0, 0x1c, 0x30, 0xc2, 0x66, 0xae, 0x20, 0xa1, 0x34, 0xec, 0x9c, 0xbc, 0xae, 0x3d, 0xa1, 0x28, 0x28, 0x95, 0x1d, 0xc9, 0x3a, 0xa8, 0xfd, 0xfc, 0xa1, + /* (2^ 43)P */ 0xe2, 0x2b, 0x9d, 0xed, 0x02, 0x99, 0x67, 0xbb, 0x2e, 0x16, 0x62, 0x05, 0x70, 0xc7, 0x27, 0xb9, 0x1c, 0x3f, 0xf2, 0x11, 0x01, 0xd8, 0x51, 0xa4, 0x18, 0x92, 0xa9, 0x5d, 0xfb, 0xa9, 0xe4, 0x42, 0xba, 0x38, 0x34, 0x1a, 0x4a, 0xc5, 0x6a, 0x37, 0xde, 0xa7, 0x0c, 0xb4, 0x7e, 0x7f, 0xde, 0xa6, 0xee, 0xcd, 0x55, 0x57, 0x05, 0x06, 0xfd, 0x5d, + /* (2^ 44)P */ 0x2f, 0x32, 0xcf, 0x2e, 0x2c, 0x7b, 0xbe, 0x9a, 0x0c, 0x57, 0x35, 0xf8, 0x87, 0xda, 0x9c, 0xec, 0x48, 0xf2, 0xbb, 0xe2, 0xda, 0x10, 0x58, 0x20, 0xc6, 0xd3, 0x87, 0xe9, 0xc7, 0x26, 0xd1, 0x9a, 0x46, 0x87, 0x90, 0xda, 0xdc, 0xde, 0xc3, 0xb3, 0xf2, 0xe8, 0x6f, 0x4a, 0xe6, 0xe8, 0x9d, 0x98, 0x36, 0x20, 0x03, 0x47, 0x15, 0x3f, 0x64, 0x59, + /* (2^ 45)P */ 0xd4, 0x71, 0x49, 0x0a, 0x67, 0x97, 0xaa, 0x3f, 0xf4, 
0x1b, 0x3a, 0x6e, 0x5e, 0x17, 0xcc, 0x0a, 0x8f, 0x81, 0x6a, 0x41, 0x38, 0x77, 0x40, 0x8a, 0x11, 0x42, 0x62, 0xd2, 0x50, 0x32, 0x79, 0x78, 0x28, 0xc2, 0x2e, 0x10, 0x01, 0x94, 0x30, 0x4f, 0x7f, 0x18, 0x17, 0x56, 0x85, 0x4e, 0xad, 0xf7, 0xcb, 0x87, 0x3c, 0x3f, 0x50, 0x2c, 0xc0, 0xba, + /* (2^ 46)P */ 0xbc, 0x30, 0x8e, 0x65, 0x8e, 0x57, 0x5b, 0x38, 0x7a, 0xd4, 0x95, 0x52, 0x7a, 0x32, 0x59, 0x69, 0xcd, 0x9d, 0x47, 0x34, 0x5b, 0x55, 0xa5, 0x24, 0x60, 0xdd, 0xc0, 0xc1, 0x62, 0x73, 0x44, 0xae, 0x4c, 0x9c, 0x65, 0x55, 0x1b, 0x9d, 0x8a, 0x29, 0xb0, 0x1a, 0x52, 0xa8, 0xf1, 0xe6, 0x9a, 0xb3, 0xf6, 0xa3, 0xc9, 0x0a, 0x70, 0x7d, 0x0f, 0xee, + /* (2^ 47)P */ 0x77, 0xd3, 0xe5, 0x8e, 0xfa, 0x00, 0xeb, 0x1b, 0x7f, 0xdc, 0x68, 0x3f, 0x92, 0xbd, 0xb7, 0x0b, 0xb7, 0xb5, 0x24, 0xdf, 0xc5, 0x67, 0x53, 0xd4, 0x36, 0x79, 0xc4, 0x7b, 0x57, 0xbc, 0x99, 0x97, 0x60, 0xef, 0xe4, 0x01, 0xa1, 0xa7, 0xaa, 0x12, 0x36, 0x29, 0xb1, 0x03, 0xc2, 0x83, 0x1c, 0x2b, 0x83, 0xef, 0x2e, 0x2c, 0x23, 0x92, 0xfd, 0xd1, + /* (2^ 48)P */ 0x94, 0xef, 0x03, 0x59, 0xfa, 0x8a, 0x18, 0x76, 0xee, 0x58, 0x08, 0x4d, 0x44, 0xce, 0xf1, 0x52, 0x33, 0x49, 0xf6, 0x69, 0x71, 0xe3, 0xa9, 0xbc, 0x86, 0xe3, 0x43, 0xde, 0x33, 0x7b, 0x90, 0x8b, 0x3e, 0x7d, 0xd5, 0x4a, 0xf0, 0x23, 0x99, 0xa6, 0xea, 0x5f, 0x08, 0xe5, 0xb9, 0x49, 0x8b, 0x0d, 0x6a, 0x21, 0xab, 0x07, 0x62, 0xcd, 0xc4, 0xbe, + /* (2^ 49)P */ 0x61, 0xbf, 0x70, 0x14, 0xfa, 0x4e, 0x9e, 0x7c, 0x0c, 0xf8, 0xb2, 0x48, 0x71, 0x62, 0x83, 0xd6, 0xd1, 0xdc, 0x9c, 0x29, 0x66, 0xb1, 0x34, 0x9c, 0x8d, 0xe6, 0x88, 0xaf, 0xbe, 0xdc, 0x4d, 0xeb, 0xb0, 0xe7, 0x28, 0xae, 0xb2, 0x05, 0x56, 0xc6, 0x0e, 0x10, 0x26, 0xab, 0x2c, 0x59, 0x72, 0x03, 0x66, 0xfe, 0x8f, 0x2c, 0x51, 0x2d, 0xdc, 0xae, + /* (2^ 50)P */ 0xdc, 0x63, 0xf1, 0x8b, 0x5c, 0x65, 0x0b, 0xf1, 0xa6, 0x22, 0xe2, 0xd9, 0xdb, 0x49, 0xb1, 0x3c, 0x47, 0xc2, 0xfe, 0xac, 0x86, 0x07, 0x52, 0xec, 0xb0, 0x08, 0x69, 0xfb, 0xd1, 0x06, 0xdc, 0x48, 0x5c, 0x3d, 0xb2, 0x4d, 0xb8, 0x1a, 0x4e, 0xda, 0xb9, 0xc1, 0x2b, 0xab, 0x4b, 0x62, 0x81, 0x21, 
0x9a, 0xfc, 0x3d, 0x39, 0x83, 0x11, 0x36, 0xeb, + /* (2^ 51)P */ 0x94, 0xf3, 0x17, 0xef, 0xf9, 0x60, 0x54, 0xc3, 0xd7, 0x27, 0x35, 0xc5, 0x98, 0x5e, 0xf6, 0x63, 0x6c, 0xa0, 0x4a, 0xd3, 0xa3, 0x98, 0xd9, 0x42, 0xe3, 0xf1, 0xf8, 0x81, 0x96, 0xa9, 0xea, 0x6d, 0x4b, 0x8e, 0x33, 0xca, 0x94, 0x0d, 0xa0, 0xf7, 0xbb, 0x64, 0xa3, 0x36, 0x6f, 0xdc, 0x5a, 0x94, 0x42, 0xca, 0x06, 0xb2, 0x2b, 0x9a, 0x9f, 0x71, + /* (2^ 52)P */ 0xec, 0xdb, 0xa6, 0x1f, 0xdf, 0x15, 0x36, 0xa3, 0xda, 0x8a, 0x7a, 0xb6, 0xa7, 0xe3, 0xaf, 0x52, 0xe0, 0x8d, 0xe8, 0xf2, 0x44, 0x20, 0xeb, 0xa1, 0x20, 0xc4, 0x65, 0x3c, 0x7c, 0x6c, 0x49, 0xed, 0x2f, 0x66, 0x23, 0x68, 0x61, 0x91, 0x40, 0x9f, 0x50, 0x19, 0xd1, 0x84, 0xa7, 0xe2, 0xed, 0x34, 0x37, 0xe3, 0xe4, 0x11, 0x7f, 0x87, 0x55, 0x0f, + /* (2^ 53)P */ 0xb3, 0xa1, 0x0f, 0xb0, 0x48, 0xc0, 0x4d, 0x96, 0xa7, 0xcf, 0x5a, 0x81, 0xb8, 0x4a, 0x46, 0xef, 0x0a, 0xd3, 0x40, 0x7e, 0x02, 0xe3, 0x63, 0xaa, 0x50, 0xd1, 0x2a, 0x37, 0x22, 0x4a, 0x7f, 0x4f, 0xb6, 0xf9, 0x01, 0x82, 0x78, 0x3d, 0x93, 0x14, 0x11, 0x8a, 0x90, 0x60, 0xcd, 0x45, 0x4e, 0x7b, 0x42, 0xb9, 0x3e, 0x6e, 0x68, 0x1f, 0x36, 0x41, + /* (2^ 54)P */ 0x13, 0x73, 0x0e, 0x4f, 0x79, 0x93, 0x9e, 0x29, 0x70, 0x7b, 0x4a, 0x59, 0x1a, 0x9a, 0xf4, 0x55, 0x08, 0xf0, 0xdb, 0x17, 0x58, 0xec, 0x64, 0xad, 0x7f, 0x29, 0xeb, 0x3f, 0x85, 0x4e, 0x60, 0x28, 0x98, 0x1f, 0x73, 0x4e, 0xe6, 0xa8, 0xab, 0xd5, 0xd6, 0xfc, 0xa1, 0x36, 0x6d, 0x15, 0xc6, 0x13, 0x83, 0xa0, 0xc2, 0x6e, 0xd9, 0xdb, 0xc9, 0xcc, + /* (2^ 55)P */ 0xff, 0xd8, 0x52, 0xa3, 0xdc, 0x99, 0xcf, 0x3e, 0x19, 0xb3, 0x68, 0xd0, 0xb5, 0x0d, 0xb8, 0xee, 0x3f, 0xef, 0x6e, 0xc0, 0x38, 0x28, 0x44, 0x92, 0x78, 0x91, 0x1a, 0x08, 0x78, 0x6c, 0x65, 0x24, 0xf3, 0xa2, 0x3d, 0xf2, 0xe5, 0x79, 0x62, 0x69, 0x29, 0xf4, 0x22, 0xc5, 0xdb, 0x6a, 0xae, 0xf4, 0x44, 0xa3, 0x6f, 0xc7, 0x86, 0xab, 0xef, 0xef, + /* (2^ 56)P */ 0xbf, 0x54, 0x9a, 0x09, 0x5d, 0x17, 0xd0, 0xde, 0xfb, 0xf5, 0xca, 0xff, 0x13, 0x20, 0x88, 0x82, 0x3a, 0xe2, 0xd0, 0x3b, 0xfb, 0x05, 0x76, 0xd1, 0xc0, 0x02, 0x71, 0x3b, 
0x94, 0xe8, 0xc9, 0x84, 0xcf, 0xa4, 0xe9, 0x28, 0x7b, 0xf5, 0x09, 0xc3, 0x2b, 0x22, 0x40, 0xf1, 0x68, 0x24, 0x24, 0x7d, 0x9f, 0x6e, 0xcd, 0xfe, 0xb0, 0x19, 0x61, 0xf5, + /* (2^ 57)P */ 0xe8, 0x63, 0x51, 0xb3, 0x95, 0x6b, 0x7b, 0x74, 0x92, 0x52, 0x45, 0xa4, 0xed, 0xea, 0x0e, 0x0d, 0x2b, 0x01, 0x1e, 0x2c, 0xbc, 0x91, 0x06, 0x69, 0xdb, 0x1f, 0xb5, 0x77, 0x1d, 0x56, 0xf5, 0xb4, 0x02, 0x80, 0x49, 0x56, 0x12, 0xce, 0x86, 0x05, 0xc9, 0xd9, 0xae, 0xf3, 0x6d, 0xe6, 0x3f, 0x40, 0x52, 0xe9, 0x49, 0x2b, 0x31, 0x06, 0x86, 0x14, + /* (2^ 58)P */ 0xf5, 0x09, 0x3b, 0xd2, 0xff, 0xdf, 0x11, 0xa5, 0x1c, 0x99, 0xe8, 0x1b, 0xa4, 0x2c, 0x7d, 0x8e, 0xc8, 0xf7, 0x03, 0x46, 0xfa, 0xb6, 0xde, 0x73, 0x91, 0x7e, 0x5a, 0x7a, 0xd7, 0x9a, 0x5b, 0x80, 0x24, 0x62, 0x5e, 0x92, 0xf1, 0xa3, 0x45, 0xa3, 0x43, 0x92, 0x8a, 0x2a, 0x5b, 0x0c, 0xb4, 0xc8, 0xad, 0x1c, 0xb6, 0x6c, 0x5e, 0x81, 0x18, 0x91, + /* (2^ 59)P */ 0x96, 0xb3, 0xca, 0x2b, 0xe3, 0x7a, 0x59, 0x72, 0x17, 0x74, 0x29, 0x21, 0xe7, 0x78, 0x07, 0xad, 0xda, 0xb6, 0xcd, 0xf9, 0x27, 0x4d, 0xc8, 0xf2, 0x98, 0x22, 0xca, 0xf2, 0x33, 0x74, 0x7a, 0xdd, 0x1e, 0x71, 0xec, 0xe3, 0x3f, 0xe2, 0xa2, 0xd2, 0x38, 0x75, 0xb0, 0xd0, 0x0a, 0xcf, 0x7d, 0x36, 0xdc, 0x49, 0x38, 0x25, 0x34, 0x4f, 0x20, 0x9a, + /* (2^ 60)P */ 0x2b, 0x6e, 0x04, 0x0d, 0x4f, 0x3d, 0x3b, 0x24, 0xf6, 0x4e, 0x5e, 0x0a, 0xbd, 0x48, 0x96, 0xba, 0x81, 0x8f, 0x39, 0x82, 0x13, 0xe6, 0x72, 0xf3, 0x0f, 0xb6, 0x94, 0xf4, 0xc5, 0x90, 0x74, 0x91, 0xa8, 0xf2, 0xc9, 0xca, 0x9a, 0x4d, 0x98, 0xf2, 0xdf, 0x52, 0x4e, 0x97, 0x2f, 0xeb, 0x84, 0xd3, 0xaf, 0xc2, 0xcc, 0xfb, 0x4c, 0x26, 0x4b, 0xe4, + /* (2^ 61)P */ 0x12, 0x9e, 0xfb, 0x9d, 0x78, 0x79, 0x99, 0xdd, 0xb3, 0x0b, 0x2e, 0x56, 0x41, 0x8e, 0x3f, 0x39, 0xb8, 0x97, 0x89, 0x53, 0x9b, 0x8a, 0x3c, 0x40, 0x9d, 0xa4, 0x6c, 0x2e, 0x31, 0x71, 0xc6, 0x0a, 0x41, 0xd4, 0x95, 0x06, 0x5e, 0xc1, 0xab, 0xc2, 0x14, 0xc4, 0xc7, 0x15, 0x08, 0x3a, 0xad, 0x7a, 0xb4, 0x62, 0xa3, 0x0c, 0x90, 0xf4, 0x47, 0x08, + /* (2^ 62)P */ 0x7f, 0xec, 0x09, 0x82, 0xf5, 0x94, 0x09, 0x93, 
0x32, 0xd3, 0xdc, 0x56, 0x80, 0x7b, 0x5b, 0x22, 0x80, 0x6a, 0x96, 0x72, 0xb1, 0xc2, 0xd9, 0xa1, 0x8b, 0x66, 0x42, 0x16, 0xe2, 0x07, 0xb3, 0x2d, 0xf1, 0x75, 0x35, 0x72, 0xc7, 0x98, 0xbe, 0x63, 0x3b, 0x20, 0x75, 0x05, 0xc1, 0x3e, 0x31, 0x5a, 0xf7, 0xaa, 0xae, 0x4b, 0xdb, 0x1d, 0xd0, 0x74, + /* (2^ 63)P */ 0x36, 0x5c, 0x74, 0xe6, 0x5d, 0x59, 0x3f, 0x15, 0x4b, 0x4d, 0x4e, 0x67, 0x41, 0xfe, 0x98, 0x1f, 0x49, 0x76, 0x91, 0x0f, 0x9b, 0xf4, 0xaf, 0x86, 0xaf, 0x66, 0x19, 0xed, 0x46, 0xf1, 0x05, 0x9a, 0xcc, 0xd1, 0x14, 0x1f, 0x82, 0x12, 0x8e, 0xe6, 0xf4, 0xc3, 0x42, 0x5c, 0x4e, 0x33, 0x93, 0xbe, 0x30, 0xe7, 0x64, 0xa9, 0x35, 0x00, 0x4d, 0xf9, + /* (2^ 64)P */ 0x1f, 0xc1, 0x1e, 0xb7, 0xe3, 0x7c, 0xfa, 0xa3, 0x6b, 0x76, 0xaf, 0x9c, 0x05, 0x85, 0x4a, 0xa9, 0xfb, 0xe3, 0x7e, 0xf2, 0x49, 0x56, 0xdc, 0x2f, 0x57, 0x10, 0xba, 0x37, 0xb2, 0x62, 0xf5, 0x6b, 0xe5, 0x8f, 0x0a, 0x87, 0xd1, 0x6a, 0xcb, 0x9d, 0x07, 0xd0, 0xf6, 0x38, 0x99, 0x2c, 0x61, 0x4a, 0x4e, 0xd8, 0xd2, 0x88, 0x29, 0x99, 0x11, 0x95, + /* (2^ 65)P */ 0x6f, 0xdc, 0xd5, 0xd6, 0xd6, 0xa7, 0x4c, 0x46, 0x93, 0x65, 0x62, 0x23, 0x95, 0x32, 0x9c, 0xde, 0x40, 0x41, 0x68, 0x2c, 0x18, 0x4e, 0x5a, 0x8c, 0xc0, 0xc5, 0xc5, 0xea, 0x5c, 0x45, 0x0f, 0x60, 0x78, 0x39, 0xb6, 0x36, 0x23, 0x12, 0xbc, 0x21, 0x9a, 0xf8, 0x91, 0xac, 0xc4, 0x70, 0xdf, 0x85, 0x8e, 0x3c, 0xec, 0x22, 0x04, 0x98, 0xa8, 0xaa, + /* (2^ 66)P */ 0xcc, 0x52, 0x10, 0x5b, 0x4b, 0x6c, 0xc5, 0xfa, 0x3e, 0xd4, 0xf8, 0x1c, 0x04, 0x14, 0x48, 0x33, 0xd9, 0xfc, 0x5f, 0xb0, 0xa5, 0x48, 0x8c, 0x45, 0x8a, 0xee, 0x3e, 0xa7, 0xc1, 0x2e, 0x34, 0xca, 0xf6, 0xc9, 0xeb, 0x10, 0xbb, 0xe1, 0x59, 0x84, 0x25, 0xe8, 0x81, 0x70, 0xc0, 0x09, 0x42, 0xa7, 0x3b, 0x0d, 0x33, 0x00, 0xb5, 0x77, 0xbe, 0x25, + /* (2^ 67)P */ 0xcd, 0x1f, 0xbc, 0x7d, 0xef, 0xe5, 0xca, 0x91, 0xaf, 0xa9, 0x59, 0x6a, 0x09, 0xca, 0xd6, 0x1b, 0x3d, 0x55, 0xde, 0xa2, 0x6a, 0x80, 0xd6, 0x95, 0x47, 0xe4, 0x5f, 0x68, 0x54, 0x08, 0xdf, 0x29, 0xba, 0x2a, 0x02, 0x84, 0xe8, 0xe9, 0x00, 0x77, 0x99, 0x36, 0x03, 0xf6, 0x4a, 0x3e, 0x21, 
0x81, 0x7d, 0xb8, 0xa4, 0x8a, 0xa2, 0x05, 0xef, 0xbc, + /* (2^ 68)P */ 0x7c, 0x59, 0x5f, 0x66, 0xd9, 0xb7, 0x83, 0x43, 0x8a, 0xa1, 0x8d, 0x51, 0x70, 0xba, 0xf2, 0x9b, 0x95, 0xc0, 0x4b, 0x4c, 0xa0, 0x14, 0xd3, 0xa4, 0x5d, 0x4a, 0x37, 0x36, 0x97, 0x31, 0x1e, 0x12, 0xe7, 0xbb, 0x08, 0x67, 0xa5, 0x23, 0xd7, 0xfb, 0x97, 0xd8, 0x6a, 0x03, 0xb1, 0xf8, 0x7f, 0xda, 0x58, 0xd9, 0x3f, 0x73, 0x4a, 0x53, 0xe1, 0x7b, + /* (2^ 69)P */ 0x55, 0x83, 0x98, 0x78, 0x6c, 0x56, 0x5e, 0xed, 0xf7, 0x23, 0x3e, 0x4c, 0x7d, 0x09, 0x2d, 0x09, 0x9c, 0x58, 0x8b, 0x32, 0xca, 0xfe, 0xbf, 0x47, 0x03, 0xeb, 0x4d, 0xe7, 0xeb, 0x9c, 0x83, 0x05, 0x68, 0xaa, 0x80, 0x89, 0x44, 0xf9, 0xd4, 0xdc, 0xdb, 0xb1, 0xdb, 0x77, 0xac, 0xf9, 0x2a, 0xae, 0x35, 0xac, 0x74, 0xb5, 0x95, 0x62, 0x18, 0x85, + /* (2^ 70)P */ 0xab, 0x82, 0x7e, 0x10, 0xd7, 0xe6, 0x57, 0xd1, 0x66, 0x12, 0x31, 0x9c, 0x9c, 0xa6, 0x27, 0x59, 0x71, 0x2e, 0xeb, 0xa0, 0x68, 0xc5, 0x87, 0x51, 0xf4, 0xca, 0x3f, 0x98, 0x56, 0xb0, 0x89, 0xb1, 0xc7, 0x7b, 0x46, 0xb3, 0xae, 0x36, 0xf2, 0xee, 0x15, 0x1a, 0x60, 0xf4, 0x50, 0x76, 0x4f, 0xc4, 0x53, 0x0d, 0x36, 0x4d, 0x31, 0xb1, 0x20, 0x51, + /* (2^ 71)P */ 0xf7, 0x1d, 0x8c, 0x1b, 0x5e, 0xe5, 0x02, 0x6f, 0xc5, 0xa5, 0xe0, 0x5f, 0xc6, 0xb6, 0x63, 0x43, 0xaf, 0x3c, 0x19, 0x6c, 0xf4, 0xaf, 0xa4, 0x33, 0xb1, 0x0a, 0x37, 0x3d, 0xd9, 0x4d, 0xe2, 0x29, 0x24, 0x26, 0x94, 0x7c, 0x02, 0xe4, 0xe2, 0xf2, 0xbe, 0xbd, 0xac, 0x1b, 0x48, 0xb8, 0xdd, 0xe9, 0x0d, 0x9a, 0x50, 0x1a, 0x98, 0x71, 0x6e, 0xdc, + /* (2^ 72)P */ 0x9f, 0x40, 0xb1, 0xb3, 0x66, 0x28, 0x6c, 0xfe, 0xa6, 0x7d, 0xf8, 0x3e, 0xb8, 0xf3, 0xde, 0x52, 0x76, 0x52, 0xa3, 0x92, 0x98, 0x23, 0xab, 0x4f, 0x88, 0x97, 0xfc, 0x22, 0xe1, 0x6b, 0x67, 0xcd, 0x13, 0x95, 0xda, 0x65, 0xdd, 0x3b, 0x67, 0x3f, 0x5f, 0x4c, 0xf2, 0x8a, 0xad, 0x98, 0xa7, 0x94, 0x24, 0x45, 0x87, 0x11, 0x7c, 0x75, 0x79, 0x85, + /* (2^ 73)P */ 0x70, 0xbf, 0xf9, 0x3b, 0xa9, 0x44, 0x57, 0x72, 0x96, 0xc9, 0xa4, 0x98, 0x65, 0xbf, 0x87, 0xb3, 0x3a, 0x39, 0x12, 0xde, 0xe5, 0x39, 0x01, 0x4f, 0xf7, 0xc0, 0x71, 
0x52, 0x36, 0x85, 0xb3, 0x18, 0xf8, 0x14, 0xc0, 0x6d, 0xae, 0x9e, 0x4f, 0xb0, 0x72, 0x87, 0xac, 0x5c, 0xd1, 0x6c, 0x41, 0x6c, 0x90, 0x9d, 0x22, 0x81, 0xe4, 0x2b, 0xea, 0xe5, + /* (2^ 74)P */ 0xfc, 0xea, 0x1a, 0x65, 0xd9, 0x49, 0x6a, 0x39, 0xb5, 0x96, 0x72, 0x7b, 0x32, 0xf1, 0xd0, 0xe9, 0x45, 0xd9, 0x31, 0x55, 0xc7, 0x34, 0xe9, 0x5a, 0xec, 0x73, 0x0b, 0x03, 0xc4, 0xb3, 0xe6, 0xc9, 0x5e, 0x0a, 0x17, 0xfe, 0x53, 0x66, 0x7f, 0x21, 0x18, 0x74, 0x54, 0x1b, 0xc9, 0x49, 0x16, 0xd2, 0x48, 0xaf, 0x5b, 0x47, 0x7b, 0xeb, 0xaa, 0xc9, + /* (2^ 75)P */ 0x47, 0x04, 0xf5, 0x5a, 0x87, 0x77, 0x9e, 0x21, 0x34, 0x4e, 0x83, 0x88, 0xaf, 0x02, 0x1d, 0xb0, 0x5a, 0x1d, 0x1d, 0x7d, 0x8d, 0x2c, 0xd3, 0x8d, 0x63, 0xa9, 0x45, 0xfb, 0x15, 0x6d, 0x86, 0x45, 0xcd, 0x38, 0x0e, 0xf7, 0x37, 0x79, 0xed, 0x6d, 0x5a, 0xbc, 0x32, 0xcc, 0x66, 0xf1, 0x3a, 0xb2, 0x87, 0x6f, 0x70, 0x71, 0xd9, 0xf2, 0xfa, 0x7b, + /* (2^ 76)P */ 0x68, 0x07, 0xdc, 0x61, 0x40, 0xe4, 0xec, 0x32, 0xc8, 0xbe, 0x66, 0x30, 0x54, 0x80, 0xfd, 0x13, 0x7a, 0xef, 0xae, 0xed, 0x2e, 0x00, 0x6d, 0x3f, 0xbd, 0xfc, 0x91, 0x24, 0x53, 0x7f, 0x63, 0x9d, 0x2e, 0xe3, 0x76, 0xe0, 0xf3, 0xe1, 0x8f, 0x7a, 0xc4, 0x77, 0x0c, 0x91, 0xc0, 0xc2, 0x18, 0x6b, 0x04, 0xad, 0xb6, 0x70, 0x9a, 0x64, 0xc5, 0x82, + /* (2^ 77)P */ 0x7f, 0xea, 0x13, 0xd8, 0x9e, 0xfc, 0x5b, 0x06, 0xb5, 0x4f, 0xda, 0x38, 0xe0, 0x9c, 0xd2, 0x3a, 0xc1, 0x1c, 0x62, 0x70, 0x7f, 0xc6, 0x24, 0x0a, 0x47, 0x04, 0x01, 0xc4, 0x55, 0x09, 0xd1, 0x7a, 0x07, 0xba, 0xa3, 0x80, 0x4f, 0xc1, 0x65, 0x36, 0x6d, 0xc0, 0x10, 0xcf, 0x94, 0xa9, 0xa2, 0x01, 0x44, 0xd1, 0xf9, 0x1c, 0x4c, 0xfb, 0xf8, 0x99, + /* (2^ 78)P */ 0x6c, 0xb9, 0x6b, 0xee, 0x43, 0x5b, 0xb9, 0xbb, 0xee, 0x2e, 0x52, 0xc1, 0xc6, 0xb9, 0x61, 0xd2, 0x93, 0xa5, 0xaf, 0x52, 0xf4, 0xa4, 0x1a, 0x51, 0x61, 0xa7, 0xcb, 0x9e, 0xbb, 0x56, 0x65, 0xe2, 0xbf, 0x75, 0xb9, 0x9c, 0x50, 0x96, 0x60, 0x81, 0x74, 0x47, 0xc0, 0x04, 0x88, 0x71, 0x76, 0x39, 0x9a, 0xa7, 0xb1, 0x4e, 0x43, 0x15, 0xe0, 0xbb, + /* (2^ 79)P */ 0xbb, 0xce, 0xe2, 0xbb, 0xf9, 0x17, 0x0f, 
0x82, 0x40, 0xad, 0x73, 0xe3, 0xeb, 0x3b, 0x06, 0x1a, 0xcf, 0x8e, 0x6e, 0x28, 0xb8, 0x26, 0xd9, 0x5b, 0xb7, 0xb3, 0xcf, 0xb4, 0x6a, 0x1c, 0xbf, 0x7f, 0xb8, 0xb5, 0x79, 0xcf, 0x45, 0x68, 0x7d, 0xc5, 0xeb, 0xf3, 0xbe, 0x39, 0x40, 0xfc, 0x07, 0x90, 0x7a, 0x62, 0xad, 0x86, 0x08, 0x71, 0x25, 0xe1, + /* (2^ 80)P */ 0x9b, 0x46, 0xac, 0xef, 0xc1, 0x4e, 0xa1, 0x97, 0x95, 0x76, 0xf9, 0x1b, 0xc2, 0xb2, 0x6a, 0x41, 0xea, 0x80, 0x3d, 0xe9, 0x08, 0x52, 0x5a, 0xe3, 0xf2, 0x08, 0xc5, 0xea, 0x39, 0x3f, 0x44, 0x71, 0x4d, 0xea, 0x0d, 0x05, 0x23, 0xe4, 0x2e, 0x3c, 0x89, 0xfe, 0x12, 0x8a, 0x95, 0x42, 0x0a, 0x68, 0xea, 0x5a, 0x28, 0x06, 0x9e, 0xe3, 0x5f, 0xe0, + /* (2^ 81)P */ 0x00, 0x61, 0x6c, 0x98, 0x9b, 0xe7, 0xb9, 0x06, 0x1c, 0xc5, 0x1b, 0xed, 0xbe, 0xc8, 0xb3, 0xea, 0x87, 0xf0, 0xc4, 0x24, 0x7d, 0xbb, 0x5d, 0xa4, 0x1d, 0x7a, 0x16, 0x00, 0x55, 0x94, 0x67, 0x78, 0xbd, 0x58, 0x02, 0x82, 0x90, 0x53, 0x76, 0xd4, 0x72, 0x99, 0x51, 0x6f, 0x7b, 0xcf, 0x80, 0x30, 0x31, 0x3b, 0x01, 0xc7, 0xc1, 0xef, 0xe6, 0x42, + /* (2^ 82)P */ 0xe2, 0x35, 0xaf, 0x4b, 0x79, 0xc6, 0x12, 0x24, 0x99, 0xc0, 0x68, 0xb0, 0x43, 0x3e, 0xe5, 0xef, 0xe2, 0x29, 0xea, 0xb8, 0xb3, 0xbc, 0x6a, 0x53, 0x2c, 0x69, 0x18, 0x5a, 0xf9, 0x15, 0xae, 0x66, 0x58, 0x18, 0xd3, 0x2d, 0x4b, 0x00, 0xfd, 0x84, 0xab, 0x4f, 0xae, 0x70, 0x6b, 0x9e, 0x9a, 0xdf, 0x83, 0xfd, 0x2e, 0x3c, 0xcf, 0xf8, 0x88, 0x5b, + /* (2^ 83)P */ 0xa4, 0x90, 0x31, 0x85, 0x13, 0xcd, 0xdf, 0x64, 0xc9, 0xa1, 0x0b, 0xe7, 0xb6, 0x73, 0x8a, 0x1b, 0x22, 0x78, 0x4c, 0xd4, 0xae, 0x48, 0x18, 0x00, 0x00, 0xa8, 0x9f, 0x06, 0xf9, 0xfb, 0x2d, 0xc3, 0xb1, 0x2a, 0xbc, 0x13, 0x99, 0x57, 0xaf, 0xf0, 0x8d, 0x61, 0x54, 0x29, 0xd5, 0xf2, 0x72, 0x00, 0x96, 0xd1, 0x85, 0x12, 0x8a, 0xf0, 0x23, 0xfb, + /* (2^ 84)P */ 0x69, 0xc7, 0xdb, 0xd9, 0x92, 0x75, 0x08, 0x9b, 0xeb, 0xa5, 0x93, 0xd1, 0x1a, 0xf4, 0xf5, 0xaf, 0xe6, 0xc4, 0x4a, 0x0d, 0x35, 0x26, 0x39, 0x9d, 0xd3, 0x17, 0x3e, 0xae, 0x2d, 0xbf, 0x73, 0x9f, 0xb7, 0x74, 0x91, 0xd1, 0xd8, 0x5c, 0x14, 0xf9, 0x75, 0xdf, 0xeb, 0xc2, 0x22, 0xd8, 
0x14, 0x8d, 0x86, 0x23, 0x4d, 0xd1, 0x2d, 0xdb, 0x6b, 0x42, + /* (2^ 85)P */ 0x8c, 0xda, 0xc6, 0xf8, 0x71, 0xba, 0x2b, 0x06, 0x78, 0xae, 0xcc, 0x3a, 0xe3, 0xe3, 0xa1, 0x8b, 0xe2, 0x34, 0x6d, 0x28, 0x9e, 0x46, 0x13, 0x4d, 0x9e, 0xa6, 0x73, 0x49, 0x65, 0x79, 0x88, 0xb9, 0x3a, 0xd1, 0x6d, 0x2f, 0x48, 0x2b, 0x0a, 0x7f, 0x58, 0x20, 0x37, 0xf4, 0x0e, 0xbb, 0x4a, 0x95, 0x58, 0x0c, 0x88, 0x30, 0xc4, 0x74, 0xdd, 0xfd, + /* (2^ 86)P */ 0x6d, 0x13, 0x4e, 0x89, 0x2d, 0xa9, 0xa3, 0xed, 0x09, 0xe3, 0x0e, 0x71, 0x3e, 0x4a, 0xab, 0x90, 0xde, 0x03, 0xeb, 0x56, 0x46, 0x60, 0x06, 0xf5, 0x71, 0xe5, 0xee, 0x9b, 0xef, 0xff, 0xc4, 0x2c, 0x9f, 0x37, 0x48, 0x45, 0x94, 0x12, 0x41, 0x81, 0x15, 0x70, 0x91, 0x99, 0x5e, 0x56, 0x6b, 0xf4, 0xa6, 0xc9, 0xf5, 0x69, 0x9d, 0x78, 0x37, 0x57, + /* (2^ 87)P */ 0xf3, 0x51, 0x57, 0x7e, 0x43, 0x6f, 0xc6, 0x67, 0x59, 0x0c, 0xcf, 0x94, 0xe6, 0x3d, 0xb5, 0x07, 0xc9, 0x77, 0x48, 0xc9, 0x68, 0x0d, 0x98, 0x36, 0x62, 0x35, 0x38, 0x1c, 0xf5, 0xc5, 0xec, 0x66, 0x78, 0xfe, 0x47, 0xab, 0x26, 0xd6, 0x44, 0xb6, 0x06, 0x0f, 0x89, 0xe3, 0x19, 0x40, 0x1a, 0xe7, 0xd8, 0x65, 0x55, 0xf7, 0x1a, 0xfc, 0xa3, 0x0e, + /* (2^ 88)P */ 0x0e, 0x30, 0xa6, 0xb7, 0x58, 0x60, 0x62, 0x2a, 0x6c, 0x13, 0xa8, 0x14, 0x9b, 0xb8, 0xf2, 0x70, 0xd8, 0xb1, 0x71, 0x88, 0x8c, 0x18, 0x31, 0x25, 0x93, 0x90, 0xb4, 0xc7, 0x49, 0xd8, 0xd4, 0xdb, 0x1e, 0x1e, 0x7f, 0xaa, 0xba, 0xc9, 0xf2, 0x5d, 0xa9, 0x3a, 0x43, 0xb4, 0x5c, 0xee, 0x7b, 0xc7, 0x97, 0xb7, 0x66, 0xd7, 0x23, 0xd9, 0x22, 0x59, + /* (2^ 89)P */ 0x28, 0x19, 0xa6, 0xf9, 0x89, 0x20, 0x78, 0xd4, 0x6d, 0xcb, 0x79, 0x8f, 0x61, 0x6f, 0xb2, 0x5c, 0x4f, 0xa6, 0x54, 0x84, 0x95, 0x24, 0x36, 0x64, 0xcb, 0x39, 0xe7, 0x8f, 0x97, 0x9c, 0x5c, 0x3c, 0xfb, 0x51, 0x11, 0x01, 0x17, 0xdb, 0xc9, 0x9b, 0x51, 0x03, 0x9a, 0xe9, 0xe5, 0x24, 0x1e, 0xf5, 0xda, 0xe0, 0x48, 0x02, 0x23, 0xd0, 0x2c, 0x81, + /* (2^ 90)P */ 0x42, 0x1b, 0xe4, 0x91, 0x85, 0x2a, 0x0c, 0xd2, 0x28, 0x66, 0x57, 0x9e, 0x33, 0x8d, 0x25, 0x71, 0x10, 0x65, 0x76, 0xa2, 0x8c, 0x21, 0x86, 0x81, 0x15, 0xc2, 
0x27, 0xeb, 0x54, 0x2d, 0x4f, 0x6c, 0xe6, 0xd6, 0x24, 0x9c, 0x1a, 0x12, 0xb8, 0x81, 0xe2, 0x0a, 0xf3, 0xd3, 0xf0, 0xd3, 0xe1, 0x74, 0x1f, 0x9b, 0x11, 0x47, 0xd0, 0xcf, 0xb6, 0x54, + /* (2^ 91)P */ 0x26, 0x45, 0xa2, 0x10, 0xd4, 0x2d, 0xae, 0xc0, 0xb0, 0xe8, 0x86, 0xb3, 0xc7, 0xea, 0x70, 0x87, 0x61, 0xb5, 0xa5, 0x55, 0xbe, 0x88, 0x1d, 0x7a, 0xd9, 0x6f, 0xeb, 0x83, 0xe2, 0x44, 0x7f, 0x98, 0x04, 0xd6, 0x50, 0x9d, 0xa7, 0x86, 0x66, 0x09, 0x63, 0xe1, 0xed, 0x72, 0xb1, 0xe4, 0x1d, 0x3a, 0xfd, 0x47, 0xce, 0x1c, 0xaa, 0x3b, 0x8f, 0x1b, + /* (2^ 92)P */ 0xf4, 0x3c, 0x4a, 0xb6, 0xc2, 0x9c, 0xe0, 0x2e, 0xb7, 0x38, 0xea, 0x61, 0x35, 0x97, 0x10, 0x90, 0xae, 0x22, 0x48, 0xb3, 0xa9, 0xc6, 0x7a, 0xbb, 0x23, 0xf2, 0xf8, 0x1b, 0xa7, 0xa1, 0x79, 0xcc, 0xc4, 0xf8, 0x08, 0x76, 0x8a, 0x5a, 0x1c, 0x1b, 0xc5, 0x33, 0x91, 0xa9, 0xb8, 0xb9, 0xd3, 0xf8, 0x49, 0xcd, 0xe5, 0x82, 0x43, 0xf7, 0xca, 0x68, + /* (2^ 93)P */ 0x38, 0xba, 0xae, 0x44, 0xfe, 0x57, 0x64, 0x56, 0x7c, 0x0e, 0x9c, 0xca, 0xff, 0xa9, 0x82, 0xbb, 0x38, 0x4a, 0xa7, 0xf7, 0x47, 0xab, 0xbe, 0x6d, 0x23, 0x0b, 0x8a, 0xed, 0xc2, 0xb9, 0x8f, 0xf1, 0xec, 0x91, 0x44, 0x73, 0x64, 0xba, 0xd5, 0x8f, 0x37, 0x38, 0x0d, 0xd5, 0xf8, 0x73, 0x57, 0xb6, 0xc2, 0x45, 0xdc, 0x25, 0xb2, 0xb6, 0xea, 0xd9, + /* (2^ 94)P */ 0xbf, 0xe9, 0x1a, 0x40, 0x4d, 0xcc, 0xe6, 0x1d, 0x70, 0x1a, 0x65, 0xcc, 0x34, 0x2c, 0x37, 0x2c, 0x2d, 0x6b, 0x6d, 0xe5, 0x2f, 0x19, 0x9e, 0xe4, 0xe1, 0xaa, 0xd4, 0xab, 0x54, 0xf4, 0xa8, 0xe4, 0x69, 0x2d, 0x8e, 0x4d, 0xd7, 0xac, 0xb0, 0x5b, 0xfe, 0xe3, 0x26, 0x07, 0xc3, 0xf8, 0x1b, 0x43, 0xa8, 0x1d, 0x64, 0xa5, 0x25, 0x88, 0xbb, 0x77, + /* (2^ 95)P */ 0x92, 0xcd, 0x6e, 0xa0, 0x79, 0x04, 0x18, 0xf4, 0x11, 0x58, 0x48, 0xb5, 0x3c, 0x7b, 0xd1, 0xcc, 0xd3, 0x14, 0x2c, 0xa0, 0xdd, 0x04, 0x44, 0x11, 0xb3, 0x6d, 0x2f, 0x0d, 0xf5, 0x2a, 0x75, 0x5d, 0x1d, 0xda, 0x86, 0x8d, 0x7d, 0x6b, 0x32, 0x68, 0xb6, 0x6c, 0x64, 0x9e, 0xde, 0x80, 0x88, 0xce, 0x08, 0xbf, 0x0b, 0xe5, 0x8e, 0x4f, 0x1d, 0xfb, + /* (2^ 96)P */ 0xaf, 0xe8, 0x85, 0xbf, 0x7f, 0x37, 
0x8d, 0x66, 0x7c, 0xd5, 0xd3, 0x96, 0xa5, 0x81, 0x67, 0x95, 0xff, 0x48, 0xde, 0xde, 0xd7, 0x7a, 0x46, 0x34, 0xb1, 0x13, 0x70, 0x29, 0xed, 0x87, 0x90, 0xb0, 0x40, 0x2c, 0xa6, 0x43, 0x6e, 0xb6, 0xbc, 0x48, 0x8a, 0xc1, 0xae, 0xb8, 0xd4, 0xe2, 0xc0, 0x32, 0xb2, 0xa6, 0x2a, 0x8f, 0xb5, 0x16, 0x9e, 0xc3, + /* (2^ 97)P */ 0xff, 0x4d, 0xd2, 0xd6, 0x74, 0xef, 0x2c, 0x96, 0xc1, 0x11, 0xa8, 0xb8, 0xfe, 0x94, 0x87, 0x3e, 0xa0, 0xfb, 0x57, 0xa3, 0xfc, 0x7a, 0x7e, 0x6a, 0x59, 0x6c, 0x54, 0xbb, 0xbb, 0xa2, 0x25, 0x38, 0x1b, 0xdf, 0x5d, 0x7b, 0x94, 0x14, 0xde, 0x07, 0x6e, 0xd3, 0xab, 0x02, 0x26, 0x74, 0x16, 0x12, 0xdf, 0x2e, 0x2a, 0xa7, 0xb0, 0xe8, 0x29, 0xc0, + /* (2^ 98)P */ 0x6a, 0x38, 0x0b, 0xd3, 0xba, 0x45, 0x23, 0xe0, 0x04, 0x3b, 0x83, 0x39, 0xc5, 0x11, 0xe6, 0xcf, 0x39, 0x0a, 0xb3, 0xb0, 0x3b, 0x27, 0x29, 0x63, 0x1c, 0xf3, 0x00, 0xe6, 0xd2, 0x55, 0x21, 0x1f, 0x84, 0x97, 0x9f, 0x01, 0x49, 0x43, 0x30, 0x5f, 0xe0, 0x1d, 0x24, 0xc4, 0x4e, 0xa0, 0x2b, 0x0b, 0x12, 0x55, 0xc3, 0x27, 0xae, 0x08, 0x83, 0x7c, + /* (2^ 99)P */ 0x5d, 0x1a, 0xb7, 0xa9, 0xf5, 0xfd, 0xec, 0xad, 0xb7, 0x87, 0x02, 0x5f, 0x0d, 0x30, 0x4d, 0xe2, 0x65, 0x87, 0xa4, 0x41, 0x45, 0x1d, 0x67, 0xe0, 0x30, 0x5c, 0x13, 0x87, 0xf6, 0x2e, 0x08, 0xc1, 0xc7, 0x12, 0x45, 0xc8, 0x9b, 0xad, 0xb8, 0xd5, 0x57, 0xbb, 0x5c, 0x48, 0x3a, 0xe1, 0x91, 0x5e, 0xf6, 0x4d, 0x8a, 0x63, 0x75, 0x69, 0x0c, 0x01, + /* (2^100)P */ 0x8f, 0x53, 0x2d, 0xa0, 0x71, 0x3d, 0xfc, 0x45, 0x10, 0x96, 0xcf, 0x56, 0xf9, 0xbb, 0x40, 0x3c, 0x86, 0x52, 0x76, 0xbe, 0x84, 0xf9, 0xa6, 0x9d, 0x3d, 0x27, 0xbe, 0xb4, 0x00, 0x49, 0x94, 0xf5, 0x5d, 0xe1, 0x62, 0x85, 0x66, 0xe5, 0xb8, 0x20, 0x2c, 0x09, 0x7d, 0x9d, 0x3d, 0x6e, 0x74, 0x39, 0xab, 0xad, 0xa0, 0x90, 0x97, 0x5f, 0xbb, 0xa7, + /* (2^101)P */ 0xdb, 0x2d, 0x99, 0x08, 0x16, 0x46, 0x83, 0x7a, 0xa8, 0xea, 0x3d, 0x28, 0x5b, 0x49, 0xfc, 0xb9, 0x6d, 0x00, 0x9e, 0x54, 0x4f, 0x47, 0x64, 0x9b, 0x58, 0x4d, 0x07, 0x0c, 0x6f, 0x29, 0x56, 0x0b, 0x00, 0x14, 0x85, 0x96, 0x41, 0x04, 0xb9, 0x5c, 0xa4, 0xf6, 0x16, 0x73, 0x6a, 
0xc7, 0x62, 0x0c, 0x65, 0x2f, 0x93, 0xbf, 0xf7, 0xb9, 0xb7, 0xf1, + /* (2^102)P */ 0xeb, 0x6d, 0xb3, 0x46, 0x32, 0xd2, 0xcb, 0x08, 0x94, 0x14, 0xbf, 0x3f, 0xc5, 0xcb, 0x5f, 0x9f, 0x8a, 0x89, 0x0c, 0x1b, 0x45, 0xad, 0x4c, 0x50, 0xb4, 0xe1, 0xa0, 0x6b, 0x11, 0x92, 0xaf, 0x1f, 0x00, 0xcc, 0xe5, 0x13, 0x7e, 0xe4, 0x2e, 0xa0, 0x57, 0xf3, 0xa7, 0x84, 0x79, 0x7a, 0xc2, 0xb7, 0xb7, 0xfc, 0x5d, 0xa5, 0xa9, 0x64, 0xcc, 0xd8, + /* (2^103)P */ 0xa9, 0xc4, 0x12, 0x8b, 0x34, 0x78, 0x3e, 0x38, 0xfd, 0x3f, 0x87, 0xfa, 0x88, 0x94, 0xd5, 0xd9, 0x7f, 0xeb, 0x58, 0xff, 0xb9, 0x45, 0xdb, 0xa1, 0xed, 0x22, 0x28, 0x1d, 0x00, 0x6d, 0x79, 0x85, 0x7a, 0x75, 0x5d, 0xf0, 0xb1, 0x9e, 0x47, 0x28, 0x8c, 0x62, 0xdf, 0xfb, 0x4c, 0x7b, 0xc5, 0x1a, 0x42, 0x95, 0xef, 0x9a, 0xb7, 0x27, 0x7e, 0xda, + /* (2^104)P */ 0xca, 0xd5, 0xc0, 0x17, 0xa1, 0x66, 0x79, 0x9c, 0x2a, 0xb7, 0x0a, 0xfe, 0x62, 0xe4, 0x26, 0x78, 0x90, 0xa7, 0xcb, 0xb0, 0x4f, 0x6d, 0xf9, 0x8f, 0xf7, 0x7d, 0xac, 0xb8, 0x78, 0x1f, 0x41, 0xea, 0x97, 0x1e, 0x62, 0x97, 0x43, 0x80, 0x58, 0x80, 0xb6, 0x69, 0x7d, 0xee, 0x16, 0xd2, 0xa1, 0x81, 0xd7, 0xb1, 0x27, 0x03, 0x48, 0xda, 0xab, 0xec, + /* (2^105)P */ 0x5b, 0xed, 0x40, 0x8e, 0x8c, 0xc1, 0x66, 0x90, 0x7f, 0x0c, 0xb2, 0xfc, 0xbd, 0x16, 0xac, 0x7d, 0x4c, 0x6a, 0xf9, 0xae, 0xe7, 0x4e, 0x11, 0x12, 0xe9, 0xbe, 0x17, 0x09, 0xc6, 0xc1, 0x5e, 0xb5, 0x7b, 0x50, 0x5c, 0x27, 0xfb, 0x80, 0xab, 0x01, 0xfa, 0x5b, 0x9b, 0x75, 0x16, 0x6e, 0xb2, 0x5c, 0x8c, 0x2f, 0xa5, 0x6a, 0x1a, 0x68, 0xa6, 0x90, + /* (2^106)P */ 0x75, 0xfe, 0xb6, 0x96, 0x96, 0x87, 0x4c, 0x12, 0xa9, 0xd1, 0xd8, 0x03, 0xa3, 0xc1, 0x15, 0x96, 0xe8, 0xa0, 0x75, 0x82, 0xa0, 0x6d, 0xea, 0x54, 0xdc, 0x5f, 0x0d, 0x7e, 0xf6, 0x70, 0xb5, 0xdc, 0x7a, 0xf6, 0xc4, 0xd4, 0x21, 0x49, 0xf5, 0xd4, 0x14, 0x6d, 0x48, 0x1d, 0x7c, 0x99, 0x42, 0xdf, 0x78, 0x6b, 0x9d, 0xb9, 0x30, 0x3c, 0xd0, 0x29, + /* (2^107)P */ 0x85, 0xd6, 0xd8, 0xf3, 0x91, 0x74, 0xdd, 0xbd, 0x72, 0x96, 0x10, 0xe4, 0x76, 0x02, 0x5a, 0x72, 0x67, 0xd3, 0x17, 0x72, 0x14, 0x9a, 0x20, 0x5b, 0x0f, 
0x8d, 0xed, 0x6d, 0x4e, 0xe3, 0xd9, 0x82, 0xc2, 0x99, 0xee, 0x39, 0x61, 0x69, 0x8a, 0x24, 0x01, 0x92, 0x15, 0xe7, 0xfc, 0xf9, 0x4d, 0xac, 0xf1, 0x30, 0x49, 0x01, 0x0b, 0x6e, 0x0f, 0x20, + /* (2^108)P */ 0xd8, 0x25, 0x94, 0x5e, 0x43, 0x29, 0xf5, 0xcc, 0xe8, 0xe3, 0x55, 0x41, 0x3c, 0x9f, 0x58, 0x5b, 0x00, 0xeb, 0xc5, 0xdf, 0xcf, 0xfb, 0xfd, 0x6e, 0x92, 0xec, 0x99, 0x30, 0xd6, 0x05, 0xdd, 0x80, 0x7a, 0x5d, 0x6d, 0x16, 0x85, 0xd8, 0x9d, 0x43, 0x65, 0xd8, 0x2c, 0x33, 0x2f, 0x5c, 0x41, 0xea, 0xb7, 0x95, 0x77, 0xf2, 0x9e, 0x59, 0x09, 0xe8, + /* (2^109)P */ 0x00, 0xa0, 0x03, 0x80, 0xcd, 0x60, 0xe5, 0x17, 0xd4, 0x15, 0x99, 0xdd, 0x4f, 0xbf, 0x66, 0xb8, 0xc0, 0xf5, 0xf9, 0xfc, 0x6d, 0x42, 0x18, 0x34, 0x1c, 0x7d, 0x5b, 0xb5, 0x09, 0xd0, 0x99, 0x57, 0x81, 0x0b, 0x62, 0xb3, 0xa2, 0xf9, 0x0b, 0xae, 0x95, 0xb8, 0xc2, 0x3b, 0x0d, 0x5b, 0x00, 0xf1, 0xed, 0xbc, 0x05, 0x9d, 0x61, 0xbc, 0x73, 0x9d, + /* (2^110)P */ 0xd4, 0xdb, 0x29, 0xe5, 0x85, 0xe9, 0xc6, 0x89, 0x2a, 0xa8, 0x54, 0xab, 0xb3, 0x7f, 0x88, 0xc0, 0x4d, 0xe0, 0xd1, 0x74, 0x6e, 0xa3, 0xa7, 0x39, 0xd5, 0xcc, 0xa1, 0x8a, 0xcb, 0x5b, 0x34, 0xad, 0x92, 0xb4, 0xd8, 0xd5, 0x17, 0xf6, 0x77, 0x18, 0x9e, 0xaf, 0x45, 0x3b, 0x03, 0xe2, 0xf8, 0x52, 0x60, 0xdc, 0x15, 0x20, 0x9e, 0xdf, 0xd8, 0x5d, + /* (2^111)P */ 0x02, 0xc1, 0xac, 0x1a, 0x15, 0x8e, 0x6c, 0xf5, 0x1e, 0x1e, 0xba, 0x7e, 0xc2, 0xda, 0x7d, 0x02, 0xda, 0x43, 0xae, 0x04, 0x70, 0x28, 0x54, 0x78, 0x94, 0xf5, 0x4f, 0x07, 0x84, 0x8f, 0xed, 0xaa, 0xc0, 0xb8, 0xcd, 0x7f, 0x7e, 0x33, 0xa3, 0xbe, 0x21, 0x29, 0xc8, 0x56, 0x34, 0xc0, 0x76, 0x87, 0x8f, 0xc7, 0x73, 0x58, 0x90, 0x16, 0xfc, 0xd6, + /* (2^112)P */ 0xb8, 0x3f, 0xe1, 0xdf, 0x3a, 0x91, 0x25, 0x0c, 0xf6, 0x47, 0xa8, 0x89, 0xc4, 0xc6, 0x61, 0xec, 0x86, 0x2c, 0xfd, 0xbe, 0xa4, 0x6f, 0xc2, 0xd4, 0x46, 0x19, 0x70, 0x5d, 0x09, 0x02, 0x86, 0xd3, 0x4b, 0xe9, 0x16, 0x7b, 0xf0, 0x0d, 0x6c, 0xff, 0x91, 0x05, 0xbf, 0x55, 0xb4, 0x00, 0x8d, 0xe5, 0x6d, 0x68, 0x20, 0x90, 0x12, 0xb5, 0x5c, 0x32, + /* (2^113)P */ 0x80, 0x45, 0xc8, 0x51, 0x87, 
0xba, 0x1c, 0x5c, 0xcf, 0x5f, 0x4b, 0x3c, 0x9e, 0x3b, 0x36, 0xd2, 0x26, 0xa2, 0x7f, 0xab, 0xb7, 0xbf, 0xda, 0x68, 0x23, 0x8f, 0xc3, 0xa0, 0xfd, 0xad, 0xf1, 0x56, 0x3b, 0xd0, 0x75, 0x2b, 0x44, 0x61, 0xd8, 0xf4, 0xf1, 0x05, 0x49, 0x53, 0x07, 0xee, 0x47, 0xef, 0xc0, 0x7c, 0x9d, 0xe4, 0x15, 0x88, 0xc5, 0x47, + /* (2^114)P */ 0x2d, 0xb5, 0x09, 0x80, 0xb9, 0xd3, 0xd8, 0xfe, 0x4c, 0xd2, 0xa6, 0x6e, 0xd3, 0x75, 0xcf, 0xb0, 0x99, 0xcb, 0x50, 0x8d, 0xe9, 0x67, 0x9b, 0x20, 0xe8, 0x57, 0xd8, 0x14, 0x85, 0x73, 0x6a, 0x74, 0xe0, 0x99, 0xf0, 0x6b, 0x6e, 0x59, 0x30, 0x31, 0x33, 0x96, 0x5f, 0xa1, 0x0c, 0x1b, 0xf4, 0xca, 0x09, 0xe1, 0x9b, 0xb5, 0xcf, 0x6d, 0x0b, 0xeb, + /* (2^115)P */ 0x1a, 0xde, 0x50, 0xa9, 0xac, 0x3e, 0x10, 0x43, 0x4f, 0x82, 0x4f, 0xc0, 0xfe, 0x3f, 0x33, 0xd2, 0x64, 0x86, 0x50, 0xa9, 0x51, 0x76, 0x5e, 0x50, 0x97, 0x6c, 0x73, 0x8d, 0x77, 0xa3, 0x75, 0x03, 0xbc, 0xc9, 0xfb, 0x50, 0xd9, 0x6d, 0x16, 0xad, 0x5d, 0x32, 0x3d, 0xac, 0x44, 0xdf, 0x51, 0xf7, 0x19, 0xd4, 0x0b, 0x57, 0x78, 0x0b, 0x81, 0x4e, + /* (2^116)P */ 0x32, 0x24, 0xf1, 0x6c, 0x55, 0x62, 0x1d, 0xb3, 0x1f, 0xda, 0xfa, 0x6a, 0x8f, 0x98, 0x01, 0x16, 0xde, 0x44, 0x50, 0x0d, 0x2e, 0x6c, 0x0b, 0xa2, 0xd3, 0x74, 0x0e, 0xa9, 0xbf, 0x8d, 0xa9, 0xc8, 0xc8, 0x2f, 0x62, 0xc1, 0x35, 0x5e, 0xfd, 0x3a, 0xb3, 0x83, 0x2d, 0xee, 0x4e, 0xfd, 0x5c, 0x5e, 0xad, 0x85, 0xa5, 0x10, 0xb5, 0x4f, 0x34, 0xa7, + /* (2^117)P */ 0xd1, 0x58, 0x6f, 0xe6, 0x54, 0x2c, 0xc2, 0xcd, 0xcf, 0x83, 0xdc, 0x88, 0x0c, 0xb9, 0xb4, 0x62, 0x18, 0x89, 0x65, 0x28, 0xe9, 0x72, 0x4b, 0x65, 0xcf, 0xd6, 0x90, 0x88, 0xd7, 0x76, 0x17, 0x4f, 0x74, 0x64, 0x1e, 0xcb, 0xd3, 0xf5, 0x4b, 0xaa, 0x2e, 0x4d, 0x2d, 0x7c, 0x13, 0x1f, 0xfd, 0xd9, 0x60, 0x83, 0x7e, 0xda, 0x64, 0x1c, 0xdc, 0x9f, + /* (2^118)P */ 0xad, 0xef, 0xac, 0x1b, 0xc1, 0x30, 0x5a, 0x15, 0xc9, 0x1f, 0xac, 0xf1, 0xca, 0x44, 0x95, 0x95, 0xea, 0xf2, 0x22, 0xe7, 0x8d, 0x25, 0xf0, 0xff, 0xd8, 0x71, 0xf7, 0xf8, 0x8f, 0x8f, 0xcd, 0xf4, 0x1e, 0xfe, 0x6c, 0x68, 0x04, 0xb8, 0x78, 0xa1, 0x5f, 0xa6, 0x5d, 0x5e, 
0xf9, 0x8d, 0xea, 0x80, 0xcb, 0xf3, 0x17, 0xa6, 0x03, 0xc9, 0x38, 0xd5, + /* (2^119)P */ 0x79, 0x14, 0x31, 0xc3, 0x38, 0xe5, 0xaa, 0xbf, 0x17, 0xa3, 0x04, 0x4e, 0x80, 0x59, 0x9c, 0x9f, 0x19, 0x39, 0xe4, 0x2d, 0x23, 0x54, 0x4a, 0x7f, 0x3e, 0xf3, 0xd9, 0xc7, 0xba, 0x6c, 0x8f, 0x6b, 0xfa, 0x34, 0xb5, 0x23, 0x17, 0x1d, 0xff, 0x1d, 0xea, 0x1f, 0xd7, 0xba, 0x61, 0xb2, 0xe0, 0x38, 0x6a, 0xe9, 0xcf, 0x48, 0x5d, 0x6a, 0x10, 0x9c, + /* (2^120)P */ 0xc8, 0xbb, 0x13, 0x1c, 0x3f, 0x3c, 0x34, 0xfd, 0xac, 0x37, 0x52, 0x44, 0x25, 0xa8, 0xde, 0x1d, 0x63, 0xf4, 0x81, 0x9a, 0xbe, 0x0b, 0x74, 0x2e, 0xc8, 0x51, 0x16, 0xd3, 0xac, 0x4a, 0xaf, 0xe2, 0x5f, 0x3a, 0x89, 0x32, 0xd1, 0x9b, 0x7c, 0x90, 0x0d, 0xac, 0xdc, 0x8b, 0x73, 0x45, 0x45, 0x97, 0xb1, 0x90, 0x2c, 0x1b, 0x31, 0xca, 0xb1, 0x94, + /* (2^121)P */ 0x07, 0x28, 0xdd, 0x10, 0x14, 0xa5, 0x95, 0x7e, 0xf3, 0xe4, 0xd4, 0x14, 0xb4, 0x7e, 0x76, 0xdb, 0x42, 0xd6, 0x94, 0x3e, 0xeb, 0x44, 0x64, 0x88, 0x0d, 0xec, 0xc1, 0x21, 0xf0, 0x79, 0xe0, 0x83, 0x67, 0x55, 0x53, 0xc2, 0xf6, 0xc5, 0xc5, 0x89, 0x39, 0xe8, 0x42, 0xd0, 0x17, 0xbd, 0xff, 0x35, 0x59, 0x0e, 0xc3, 0x06, 0x86, 0xd4, 0x64, 0xcf, + /* (2^122)P */ 0x91, 0xa8, 0xdb, 0x57, 0x9b, 0xe2, 0x96, 0x31, 0x10, 0x6e, 0xd7, 0x9a, 0x97, 0xb3, 0xab, 0xb5, 0x15, 0x66, 0xbe, 0xcc, 0x6d, 0x9a, 0xac, 0x06, 0xb3, 0x0d, 0xaa, 0x4b, 0x9c, 0x96, 0x79, 0x6c, 0x34, 0xee, 0x9e, 0x53, 0x4d, 0x6e, 0xbd, 0x88, 0x02, 0xbf, 0x50, 0x54, 0x12, 0x5d, 0x01, 0x02, 0x46, 0xc6, 0x74, 0x02, 0x8c, 0x24, 0xae, 0xb1, + /* (2^123)P */ 0xf5, 0x22, 0xea, 0xac, 0x7d, 0x9c, 0x33, 0x8a, 0xa5, 0x36, 0x79, 0x6a, 0x4f, 0xa4, 0xdc, 0xa5, 0x73, 0x64, 0xc4, 0x6f, 0x43, 0x02, 0x3b, 0x94, 0x66, 0xd2, 0x4b, 0x4f, 0xf6, 0x45, 0x33, 0x5d, 0x10, 0x33, 0x18, 0x1e, 0xa3, 0xfc, 0xf7, 0xd2, 0xb8, 0xc8, 0xa7, 0xe0, 0x76, 0x8a, 0xcd, 0xff, 0x4f, 0x99, 0x34, 0x47, 0x84, 0x91, 0x96, 0x9f, + /* (2^124)P */ 0x8a, 0x48, 0x3b, 0x48, 0x4a, 0xbc, 0xac, 0xe2, 0x80, 0xd6, 0xd2, 0x35, 0xde, 0xd0, 0x56, 0x42, 0x33, 0xb3, 0x56, 0x5a, 0xcd, 0xb8, 0x3d, 0xb5, 
0x25, 0xc1, 0xed, 0xff, 0x87, 0x0b, 0x79, 0xff, 0xf2, 0x62, 0xe1, 0x76, 0xc6, 0xa2, 0x0f, 0xa8, 0x9b, 0x0d, 0xcc, 0x3f, 0x3d, 0x35, 0x27, 0x8d, 0x0b, 0x74, 0xb0, 0xc3, 0x78, 0x8c, 0xcc, 0xc8, + /* (2^125)P */ 0xfc, 0x9a, 0x0c, 0xa8, 0x49, 0x42, 0xb8, 0xdf, 0xcf, 0xb3, 0x19, 0xa6, 0x64, 0x57, 0xfe, 0xe8, 0xf8, 0xa6, 0x4b, 0x86, 0xa1, 0xd5, 0x83, 0x7f, 0x14, 0x99, 0x18, 0x0c, 0x7d, 0x5b, 0xf7, 0x3d, 0xf9, 0x4b, 0x79, 0xb1, 0x86, 0x30, 0xb4, 0x5e, 0x6a, 0xe8, 0x9d, 0xfa, 0x8a, 0x41, 0xc4, 0x30, 0xfc, 0x56, 0x74, 0x14, 0x42, 0xc8, 0x96, 0x0e, + /* (2^126)P */ 0xdf, 0x66, 0xec, 0xbc, 0x44, 0xdb, 0x19, 0xce, 0xd4, 0xb5, 0x49, 0x40, 0x07, 0x49, 0xe0, 0x3a, 0x61, 0x10, 0xfb, 0x7d, 0xba, 0xb1, 0xe0, 0x28, 0x5b, 0x99, 0x59, 0x96, 0xa2, 0xee, 0xe0, 0x23, 0x37, 0x39, 0x1f, 0xe6, 0x57, 0x9f, 0xf8, 0xf8, 0xdc, 0x74, 0xf6, 0x8f, 0x4f, 0x5e, 0x51, 0xa4, 0x12, 0xac, 0xbe, 0xe4, 0xf3, 0xd1, 0xf0, 0x24, + /* (2^127)P */ 0x1e, 0x3e, 0x9a, 0x5f, 0xdf, 0x9f, 0xd6, 0x4e, 0x8a, 0x28, 0xc3, 0xcd, 0x96, 0x9d, 0x57, 0xc7, 0x61, 0x81, 0x90, 0xff, 0xae, 0xb1, 0x4f, 0xc2, 0x96, 0x8b, 0x1a, 0x18, 0xf4, 0x50, 0xcb, 0x31, 0xe1, 0x57, 0xf4, 0x90, 0xa8, 0xea, 0xac, 0xe7, 0x61, 0x98, 0xb6, 0x15, 0xc1, 0x7b, 0x29, 0xa4, 0xc3, 0x18, 0xef, 0xb9, 0xd8, 0xdf, 0xf6, 0xac, + /* (2^128)P */ 0xca, 0xa8, 0x6c, 0xf1, 0xb4, 0xca, 0xfe, 0x31, 0xee, 0x48, 0x38, 0x8b, 0x0e, 0xbb, 0x7a, 0x30, 0xaa, 0xf9, 0xee, 0x27, 0x53, 0x24, 0xdc, 0x2e, 0x15, 0xa6, 0x48, 0x8f, 0xa0, 0x7e, 0xf1, 0xdc, 0x93, 0x87, 0x39, 0xeb, 0x7f, 0x38, 0x92, 0x92, 0x4c, 0x29, 0xe9, 0x57, 0xd8, 0x59, 0xfc, 0xe9, 0x9c, 0x44, 0xc0, 0x65, 0xcf, 0xac, 0x4b, 0xdc, + /* (2^129)P */ 0xa3, 0xd0, 0x37, 0x8f, 0x86, 0x2f, 0xc6, 0x47, 0x55, 0x46, 0x65, 0x26, 0x4b, 0x91, 0xe2, 0x18, 0x5c, 0x4f, 0x23, 0xc1, 0x37, 0x29, 0xb9, 0xc1, 0x27, 0xc5, 0x3c, 0xbf, 0x7e, 0x23, 0xdb, 0x73, 0x99, 0xbd, 0x1b, 0xb2, 0x31, 0x68, 0x3a, 0xad, 0xb7, 0xb0, 0x10, 0xc5, 0xe5, 0x11, 0x51, 0xba, 0xa7, 0x60, 0x66, 0x54, 0xf0, 0x08, 0xd7, 0x69, + /* (2^130)P */ 0x89, 0x41, 0x79, 0xcc, 
0xeb, 0x0a, 0xf5, 0x4b, 0xa3, 0x4c, 0xce, 0x52, 0xb0, 0xa7, 0xe4, 0x41, 0x75, 0x7d, 0x04, 0xbb, 0x09, 0x4c, 0x50, 0x9f, 0xdf, 0xea, 0x74, 0x61, 0x02, 0xad, 0xb4, 0x9d, 0xb7, 0x05, 0xb9, 0xea, 0xeb, 0x91, 0x35, 0xe7, 0x49, 0xea, 0xd3, 0x4f, 0x3c, 0x60, 0x21, 0x7a, 0xde, 0xc7, 0xe2, 0x5a, 0xee, 0x8e, 0x93, 0xc7, + /* (2^131)P */ 0x00, 0xe8, 0xed, 0xd0, 0xb3, 0x0d, 0xaf, 0xb2, 0xde, 0x2c, 0xf6, 0x00, 0xe2, 0xea, 0x6d, 0xf8, 0x0e, 0xd9, 0x67, 0x59, 0xa9, 0x50, 0xbb, 0x17, 0x8f, 0xff, 0xb1, 0x9f, 0x17, 0xb6, 0xf2, 0xb5, 0xba, 0x80, 0xf7, 0x0f, 0xba, 0xd5, 0x09, 0x43, 0xaa, 0x4e, 0x3a, 0x67, 0x6a, 0x89, 0x9b, 0x18, 0x65, 0x35, 0xf8, 0x3a, 0x49, 0x91, 0x30, 0x51, + /* (2^132)P */ 0x8d, 0x25, 0xe9, 0x0e, 0x7d, 0x50, 0x76, 0xe4, 0x58, 0x7e, 0xb9, 0x33, 0xe6, 0x65, 0x90, 0xc2, 0x50, 0x9d, 0x50, 0x2e, 0x11, 0xad, 0xd5, 0x43, 0x52, 0x32, 0x41, 0x4f, 0x7b, 0xb6, 0xa0, 0xec, 0x81, 0x75, 0x36, 0x7c, 0x77, 0x85, 0x59, 0x70, 0xe4, 0xf9, 0xef, 0x66, 0x8d, 0x35, 0xc8, 0x2a, 0x6e, 0x5b, 0xc6, 0x0d, 0x0b, 0x29, 0x60, 0x68, + /* (2^133)P */ 0xf8, 0xce, 0xb0, 0x3a, 0x56, 0x7d, 0x51, 0x9a, 0x25, 0x73, 0xea, 0xdd, 0xe4, 0xe0, 0x0e, 0xf0, 0x07, 0xc0, 0x31, 0x00, 0x73, 0x35, 0xd0, 0x39, 0xc4, 0x9b, 0xb7, 0x95, 0xe0, 0x62, 0x70, 0x36, 0x0b, 0xcb, 0xa0, 0x42, 0xde, 0x51, 0xcf, 0x41, 0xe0, 0xb8, 0xb4, 0xc0, 0xe5, 0x46, 0x99, 0x9f, 0x02, 0x7f, 0x14, 0x8c, 0xc1, 0x4e, 0xef, 0xe8, + /* (2^134)P */ 0x10, 0x01, 0x57, 0x0a, 0xbe, 0x8b, 0x18, 0xc8, 0xca, 0x00, 0x28, 0x77, 0x4a, 0x9a, 0xc7, 0x55, 0x2a, 0xcc, 0x0c, 0x7b, 0xb9, 0xe9, 0xc8, 0x97, 0x7c, 0x02, 0xe3, 0x09, 0x2f, 0x62, 0x30, 0xb8, 0x40, 0x09, 0x65, 0xe9, 0x55, 0x63, 0xb5, 0x07, 0xca, 0x9f, 0x00, 0xdf, 0x9d, 0x5c, 0xc7, 0xee, 0x57, 0xa5, 0x90, 0x15, 0x1e, 0x22, 0xa0, 0x12, + /* (2^135)P */ 0x71, 0x2d, 0xc9, 0xef, 0x27, 0xb9, 0xd8, 0x12, 0x43, 0x6b, 0xa8, 0xce, 0x3b, 0x6d, 0x6e, 0x91, 0x43, 0x23, 0xbc, 0x32, 0xb3, 0xbf, 0xe1, 0xc7, 0x39, 0xcf, 0x7c, 0x42, 0x4c, 0xb1, 0x30, 0xe2, 0xdd, 0x69, 0x06, 0xe5, 0xea, 0xf0, 0x2a, 0x16, 0x50, 0x71, 0xca, 
0x92, 0xdf, 0xc1, 0xcc, 0xec, 0xe6, 0x54, 0x07, 0xf3, 0x18, 0x8d, 0xd8, 0x29, + /* (2^136)P */ 0x98, 0x51, 0x48, 0x8f, 0xfa, 0x2e, 0x5e, 0x67, 0xb0, 0xc6, 0x17, 0x12, 0xb6, 0x7d, 0xc9, 0xad, 0x81, 0x11, 0xad, 0x0c, 0x1c, 0x2d, 0x45, 0xdf, 0xac, 0x66, 0xbd, 0x08, 0x6f, 0x7c, 0xc7, 0x06, 0x6e, 0x19, 0x08, 0x39, 0x64, 0xd7, 0xe4, 0xd1, 0x11, 0x5f, 0x1c, 0xf4, 0x67, 0xc3, 0x88, 0x6a, 0xe6, 0x07, 0xa3, 0x83, 0xd7, 0xfd, 0x2a, 0xf9, + /* (2^137)P */ 0x87, 0xed, 0xeb, 0xd9, 0xdf, 0xff, 0x43, 0x8b, 0xaa, 0x20, 0x58, 0xb0, 0xb4, 0x6b, 0x14, 0xb8, 0x02, 0xc5, 0x40, 0x20, 0x22, 0xbb, 0xf7, 0xb4, 0xf3, 0x05, 0x1e, 0x4d, 0x94, 0xff, 0xe3, 0xc5, 0x22, 0x82, 0xfe, 0xaf, 0x90, 0x42, 0x98, 0x6b, 0x76, 0x8b, 0x3e, 0x89, 0x3f, 0x42, 0x2a, 0xa7, 0x26, 0x00, 0xda, 0x5c, 0xa2, 0x2b, 0xec, 0xdd, + /* (2^138)P */ 0x5c, 0x21, 0x16, 0x0d, 0x46, 0xb8, 0xd0, 0xa7, 0x88, 0xe7, 0x25, 0xcb, 0x3e, 0x50, 0x73, 0x61, 0xe7, 0xaf, 0x5a, 0x3f, 0x47, 0x8b, 0x3d, 0x97, 0x79, 0x2c, 0xe6, 0x6d, 0x95, 0x74, 0x65, 0x70, 0x36, 0xfd, 0xd1, 0x9e, 0x13, 0x18, 0x63, 0xb1, 0x2d, 0x0b, 0xb5, 0x36, 0x3e, 0xe7, 0x35, 0x42, 0x3b, 0xe6, 0x1f, 0x4d, 0x9d, 0x59, 0xa2, 0x43, + /* (2^139)P */ 0x8c, 0x0c, 0x7c, 0x24, 0x9e, 0xe0, 0xf8, 0x05, 0x1c, 0x9e, 0x1f, 0x31, 0xc0, 0x70, 0xb3, 0xfb, 0x4e, 0xf8, 0x0a, 0x57, 0xb7, 0x49, 0xb5, 0x73, 0xa1, 0x5f, 0x9b, 0x6a, 0x07, 0x6c, 0x87, 0x71, 0x87, 0xd4, 0xbe, 0x98, 0x1e, 0x98, 0xee, 0x52, 0xc1, 0x7b, 0x95, 0x0f, 0x28, 0x32, 0x36, 0x28, 0xd0, 0x3a, 0x0f, 0x7d, 0x2a, 0xa9, 0x62, 0xb9, + /* (2^140)P */ 0x97, 0xe6, 0x18, 0x77, 0xf9, 0x34, 0xac, 0xbc, 0xe0, 0x62, 0x9f, 0x42, 0xde, 0xbd, 0x2f, 0xf7, 0x1f, 0xb7, 0x14, 0x52, 0x8a, 0x79, 0xb2, 0x3f, 0xd2, 0x95, 0x71, 0x01, 0xe8, 0xaf, 0x8c, 0xa4, 0xa4, 0xa7, 0x27, 0xf3, 0x5c, 0xdf, 0x3e, 0x57, 0x7a, 0xf1, 0x76, 0x49, 0xe6, 0x42, 0x3f, 0x8f, 0x1e, 0x63, 0x4a, 0x65, 0xb5, 0x41, 0xf5, 0x02, + /* (2^141)P */ 0x72, 0x85, 0xc5, 0x0b, 0xe1, 0x47, 0x64, 0x02, 0xc5, 0x4d, 0x81, 0x69, 0xb2, 0xcf, 0x0f, 0x6c, 0xd4, 0x6d, 0xd0, 0xc7, 0xb4, 0x1c, 0xd0, 
0x32, 0x59, 0x89, 0xe2, 0xe0, 0x96, 0x8b, 0x12, 0x98, 0xbf, 0x63, 0x7a, 0x4c, 0x76, 0x7e, 0x58, 0x17, 0x8f, 0x5b, 0x0a, 0x59, 0x65, 0x75, 0xbc, 0x61, 0x1f, 0xbe, 0xc5, 0x6e, 0x0a, 0x57, 0x52, 0x70, + /* (2^142)P */ 0x92, 0x1c, 0x77, 0xbb, 0x62, 0x02, 0x6c, 0x25, 0x9c, 0x66, 0x07, 0x83, 0xab, 0xcc, 0x80, 0x5d, 0xd2, 0x76, 0x0c, 0xa4, 0xc5, 0xb4, 0x8a, 0x68, 0x23, 0x31, 0x32, 0x29, 0x8a, 0x47, 0x92, 0x12, 0x80, 0xb3, 0xfa, 0x18, 0xe4, 0x8d, 0xc0, 0x4d, 0xfe, 0x97, 0x5f, 0x72, 0x41, 0xb5, 0x5c, 0x7a, 0xbd, 0xf0, 0xcf, 0x5e, 0x97, 0xaa, 0x64, 0x32, + /* (2^143)P */ 0x35, 0x3f, 0x75, 0xc1, 0x7a, 0x75, 0x7e, 0xa9, 0xc6, 0x0b, 0x4e, 0x32, 0x62, 0xec, 0xe3, 0x5c, 0xfb, 0x01, 0x43, 0xb6, 0xd4, 0x5b, 0x75, 0xd2, 0xee, 0x7f, 0x5d, 0x23, 0x2b, 0xb3, 0x54, 0x34, 0x4c, 0xd3, 0xb4, 0x32, 0x84, 0x81, 0xb5, 0x09, 0x76, 0x19, 0xda, 0x58, 0xda, 0x7c, 0xdb, 0x2e, 0xdd, 0x4c, 0x8e, 0xdd, 0x5d, 0x89, 0x10, 0x10, + /* (2^144)P */ 0x57, 0x25, 0x6a, 0x08, 0x37, 0x92, 0xa8, 0xdf, 0x24, 0xef, 0x8f, 0x33, 0x34, 0x52, 0xa4, 0x4c, 0xf0, 0x77, 0x9f, 0x69, 0x77, 0xd5, 0x8f, 0xd2, 0x9a, 0xb3, 0xb6, 0x1d, 0x2d, 0xa6, 0xf7, 0x1f, 0xda, 0xd7, 0xcb, 0x75, 0x11, 0xc3, 0x6b, 0xc0, 0x38, 0xb1, 0xd5, 0x2d, 0x96, 0x84, 0x16, 0xfa, 0x26, 0xb9, 0xcc, 0x3f, 0x16, 0x47, 0x23, 0x74, + /* (2^145)P */ 0x9b, 0x61, 0x2a, 0x1c, 0xdd, 0x39, 0xa5, 0xfa, 0x1c, 0x7d, 0x63, 0x50, 0xca, 0xe6, 0x9d, 0xfa, 0xb7, 0xc4, 0x4c, 0x6a, 0x97, 0x5f, 0x36, 0x4e, 0x47, 0xdd, 0x17, 0xf7, 0xf9, 0x19, 0xce, 0x75, 0x17, 0xad, 0xce, 0x2a, 0xf3, 0xfe, 0x27, 0x8f, 0x3e, 0x48, 0xc0, 0x60, 0x87, 0x24, 0x19, 0xae, 0x59, 0xe4, 0x5a, 0x00, 0x2a, 0xba, 0xa2, 0x1f, + /* (2^146)P */ 0x26, 0x88, 0x42, 0x60, 0x9f, 0x6e, 0x2c, 0x7c, 0x39, 0x0f, 0x47, 0x6a, 0x0e, 0x02, 0xbb, 0x4b, 0x34, 0x29, 0x55, 0x18, 0x36, 0xcf, 0x3b, 0x47, 0xf1, 0x2e, 0xfc, 0x6e, 0x94, 0xff, 0xe8, 0x6b, 0x06, 0xd2, 0xba, 0x77, 0x5e, 0x60, 0xd7, 0x19, 0xef, 0x02, 0x9d, 0x3a, 0xc2, 0xb7, 0xa9, 0xd8, 0x57, 0xee, 0x7e, 0x2b, 0xf2, 0x6d, 0x28, 0xda, + /* (2^147)P */ 0xdf, 0xd9, 0x92, 
0x11, 0x98, 0x23, 0xe2, 0x45, 0x2f, 0x74, 0x70, 0xee, 0x0e, 0x55, 0x65, 0x79, 0x86, 0x38, 0x17, 0x92, 0x85, 0x87, 0x99, 0x50, 0xd9, 0x7c, 0xdb, 0xa1, 0x10, 0xec, 0x30, 0xb7, 0x40, 0xa3, 0x23, 0x9b, 0x0e, 0x27, 0x49, 0x29, 0x03, 0x94, 0xff, 0x53, 0xdc, 0xd7, 0xed, 0x49, 0xa9, 0x5a, 0x3b, 0xee, 0xd7, 0xc7, 0x65, 0xaf, + /* (2^148)P */ 0xa0, 0xbd, 0xbe, 0x03, 0xee, 0x0c, 0xbe, 0x32, 0x00, 0x7b, 0x52, 0xcb, 0x92, 0x29, 0xbf, 0xa0, 0xc6, 0xd9, 0xd2, 0xd6, 0x15, 0xe8, 0x3a, 0x75, 0x61, 0x65, 0x56, 0xae, 0xad, 0x3c, 0x2a, 0x64, 0x14, 0x3f, 0x8e, 0xc1, 0x2d, 0x0c, 0x8d, 0x20, 0xdb, 0x58, 0x4b, 0xe5, 0x40, 0x15, 0x4b, 0xdc, 0xa8, 0xbd, 0xef, 0x08, 0xa7, 0xd1, 0xf4, 0xb0, + /* (2^149)P */ 0xa9, 0x0f, 0x05, 0x94, 0x66, 0xac, 0x1f, 0x65, 0x3f, 0xe1, 0xb8, 0xe1, 0x34, 0x5e, 0x1d, 0x8f, 0xe3, 0x93, 0x03, 0x15, 0xff, 0xb6, 0x65, 0xb6, 0x6e, 0xc0, 0x2f, 0xd4, 0x2e, 0xb9, 0x2c, 0x13, 0x3c, 0x99, 0x1c, 0xb5, 0x87, 0xba, 0x79, 0xcb, 0xf0, 0x18, 0x06, 0x86, 0x04, 0x14, 0x25, 0x09, 0xcd, 0x1c, 0x14, 0xda, 0x35, 0xd0, 0x38, 0x3b, + /* (2^150)P */ 0x1b, 0x04, 0xa3, 0x27, 0xb4, 0xd3, 0x37, 0x48, 0x1e, 0x8f, 0x69, 0xd3, 0x5a, 0x2f, 0x20, 0x02, 0x36, 0xbe, 0x06, 0x7b, 0x6b, 0x6c, 0x12, 0x5b, 0x80, 0x74, 0x44, 0xe6, 0xf8, 0xf5, 0x95, 0x59, 0x29, 0xab, 0x51, 0x47, 0x83, 0x28, 0xe0, 0xad, 0xde, 0xaa, 0xd3, 0xb1, 0x1a, 0xcb, 0xa3, 0xcd, 0x8b, 0x6a, 0xb1, 0xa7, 0x0a, 0xd1, 0xf9, 0xbe, + /* (2^151)P */ 0xce, 0x2f, 0x85, 0xca, 0x74, 0x6d, 0x49, 0xb8, 0xce, 0x80, 0x44, 0xe0, 0xda, 0x5b, 0xcf, 0x2f, 0x79, 0x74, 0xfe, 0xb4, 0x2c, 0x99, 0x20, 0x6e, 0x09, 0x04, 0xfb, 0x6d, 0x57, 0x5b, 0x95, 0x0c, 0x45, 0xda, 0x4f, 0x7f, 0x63, 0xcc, 0x85, 0x5a, 0x67, 0x50, 0x68, 0x71, 0xb4, 0x67, 0xb1, 0x2e, 0xc1, 0x1c, 0xdc, 0xff, 0x2a, 0x7c, 0x10, 0x5e, + /* (2^152)P */ 0xa6, 0xde, 0xf3, 0xd4, 0x22, 0x30, 0x24, 0x9e, 0x0b, 0x30, 0x54, 0x59, 0x7e, 0xa2, 0xeb, 0x89, 0x54, 0x65, 0x3e, 0x40, 0xd1, 0xde, 0xe6, 0xee, 0x4d, 0xbf, 0x5e, 0x40, 0x1d, 0xee, 0x4f, 0x68, 0xd9, 0xa7, 0x2f, 0xb3, 0x64, 0xb3, 0xf5, 0xc8, 0xd3, 0xaa, 
0x70, 0x70, 0x3d, 0xef, 0xd3, 0x95, 0x54, 0xdb, 0x3e, 0x94, 0x95, 0x92, 0x1f, 0x45, + /* (2^153)P */ 0x22, 0x80, 0x1d, 0x9d, 0x96, 0xa5, 0x78, 0x6f, 0xe0, 0x1e, 0x1b, 0x66, 0x42, 0xc8, 0xae, 0x9e, 0x46, 0x45, 0x08, 0x41, 0xdf, 0x80, 0xae, 0x6f, 0xdb, 0x15, 0x5a, 0x21, 0x31, 0x7a, 0xd0, 0xf2, 0x54, 0x15, 0x88, 0xd3, 0x0f, 0x7f, 0x14, 0x5a, 0x14, 0x97, 0xab, 0xf4, 0x58, 0x6a, 0x9f, 0xea, 0x74, 0xe5, 0x6b, 0x90, 0x59, 0x2b, 0x48, 0xd9, + /* (2^154)P */ 0x12, 0x24, 0x04, 0xf5, 0x50, 0xc2, 0x8c, 0xb0, 0x7c, 0x46, 0x98, 0xd5, 0x24, 0xad, 0xf6, 0x72, 0xdc, 0x82, 0x1a, 0x60, 0xc1, 0xeb, 0x48, 0xef, 0x7f, 0x6e, 0xe6, 0xcc, 0xdb, 0x7b, 0xae, 0xbe, 0x5e, 0x1e, 0x5c, 0xe6, 0x0a, 0x70, 0xdf, 0xa4, 0xa3, 0x85, 0x1b, 0x1b, 0x7f, 0x72, 0xb9, 0x96, 0x6f, 0xdc, 0x03, 0x76, 0x66, 0xfb, 0xa0, 0x33, + /* (2^155)P */ 0x37, 0x40, 0xbb, 0xbc, 0x68, 0x58, 0x86, 0xca, 0xbb, 0xa5, 0x24, 0x76, 0x3d, 0x48, 0xd1, 0xad, 0xb4, 0xa8, 0xcf, 0xc3, 0xb6, 0xa8, 0xba, 0x1a, 0x3a, 0xbe, 0x33, 0x75, 0x04, 0x5c, 0x13, 0x8c, 0x0d, 0x70, 0x8d, 0xa6, 0x4e, 0x2a, 0xeb, 0x17, 0x3c, 0x22, 0xdd, 0x3e, 0x96, 0x40, 0x11, 0x9e, 0x4e, 0xae, 0x3d, 0xf8, 0x91, 0xd7, 0x50, 0xc8, + /* (2^156)P */ 0xd8, 0xca, 0xde, 0x19, 0xcf, 0x00, 0xe4, 0x73, 0x18, 0x7f, 0x9b, 0x9f, 0xf4, 0x5b, 0x49, 0x49, 0x99, 0xdc, 0xa4, 0x46, 0x21, 0xb5, 0xd7, 0x3e, 0xb7, 0x47, 0x1b, 0xa9, 0x9f, 0x4c, 0x69, 0x7d, 0xec, 0x33, 0xd6, 0x1c, 0x51, 0x7f, 0x47, 0x74, 0x7a, 0x6c, 0xf3, 0xd2, 0x2e, 0xbf, 0xdf, 0x6c, 0x9e, 0x77, 0x3b, 0x34, 0xf6, 0x73, 0x80, 0xed, + /* (2^157)P */ 0x16, 0xfb, 0x16, 0xc3, 0xc2, 0x83, 0xe4, 0xf4, 0x03, 0x7f, 0x52, 0xb0, 0x67, 0x51, 0x7b, 0x24, 0x5a, 0x51, 0xd3, 0xb6, 0x4e, 0x59, 0x76, 0xcd, 0x08, 0x7b, 0x1d, 0x7a, 0x9c, 0x65, 0xae, 0xce, 0xaa, 0xd2, 0x1c, 0x85, 0x66, 0x68, 0x06, 0x15, 0xa8, 0x06, 0xe6, 0x16, 0x37, 0xf4, 0x49, 0x9e, 0x0f, 0x50, 0x37, 0xb1, 0xb2, 0x93, 0x70, 0x43, + /* (2^158)P */ 0x18, 0x3a, 0x16, 0xe5, 0x8d, 0xc8, 0x35, 0xd6, 0x7b, 0x09, 0xec, 0x61, 0x5f, 0x5c, 0x2a, 0x19, 0x96, 0x2e, 0xc3, 0xfd, 0xab, 0xe6, 
0x23, 0xae, 0xab, 0xc5, 0xcb, 0xb9, 0x7b, 0x2d, 0x34, 0x51, 0xb9, 0x41, 0x9e, 0x7d, 0xca, 0xda, 0x25, 0x45, 0x14, 0xb0, 0xc7, 0x4d, 0x26, 0x2b, 0xfe, 0x43, 0xb0, 0x21, 0x5e, 0xfa, 0xdc, 0x7c, 0xf9, 0x5a, + /* (2^159)P */ 0x94, 0xad, 0x42, 0x17, 0xf5, 0xcd, 0x1c, 0x0d, 0xf6, 0x41, 0xd2, 0x55, 0xbb, 0x50, 0xf1, 0xc6, 0xbc, 0xa6, 0xc5, 0x3a, 0xfd, 0x9b, 0x75, 0x3e, 0xf6, 0x1a, 0xa7, 0xb2, 0x6e, 0x64, 0x12, 0xdc, 0x3c, 0xe5, 0xf6, 0xfc, 0x3b, 0xfa, 0x43, 0x81, 0xd4, 0xa5, 0xee, 0xf5, 0x9c, 0x47, 0x2f, 0xd0, 0x9c, 0xde, 0xa1, 0x48, 0x91, 0x9a, 0x34, 0xc1, + /* (2^160)P */ 0x37, 0x1b, 0xb3, 0x88, 0xc9, 0x98, 0x4e, 0xfb, 0x84, 0x4f, 0x2b, 0x0a, 0xb6, 0x8f, 0x35, 0x15, 0xcd, 0x61, 0x7a, 0x5f, 0x5c, 0xa0, 0xca, 0x23, 0xa0, 0x93, 0x1f, 0xcc, 0x3c, 0x39, 0x3a, 0x24, 0xa7, 0x49, 0xad, 0x8d, 0x59, 0xcc, 0x94, 0x5a, 0x16, 0xf5, 0x70, 0xe8, 0x52, 0x1e, 0xee, 0x20, 0x30, 0x17, 0x7e, 0xf0, 0x4c, 0x93, 0x06, 0x5a, + /* (2^161)P */ 0x81, 0xba, 0x3b, 0xd7, 0x3e, 0xb4, 0x32, 0x3a, 0x22, 0x39, 0x2a, 0xfc, 0x19, 0xd9, 0xd2, 0xf6, 0xc5, 0x79, 0x6c, 0x0e, 0xde, 0xda, 0x01, 0xff, 0x52, 0xfb, 0xb6, 0x95, 0x4e, 0x7a, 0x10, 0xb8, 0x06, 0x86, 0x3c, 0xcd, 0x56, 0xd6, 0x15, 0xbf, 0x6e, 0x3e, 0x4f, 0x35, 0x5e, 0xca, 0xbc, 0xa5, 0x95, 0xa2, 0xdf, 0x2d, 0x1d, 0xaf, 0x59, 0xf9, + /* (2^162)P */ 0x69, 0xe5, 0xe2, 0xfa, 0xc9, 0x7f, 0xdd, 0x09, 0xf5, 0x6b, 0x4e, 0x2e, 0xbe, 0xb4, 0xbf, 0x3e, 0xb2, 0xf2, 0x81, 0x30, 0xe1, 0x07, 0xa8, 0x0d, 0x2b, 0xd2, 0x5a, 0x55, 0xbe, 0x4b, 0x86, 0x5d, 0xb0, 0x5e, 0x7c, 0x8f, 0xc1, 0x3c, 0x81, 0x4c, 0xf7, 0x6d, 0x7d, 0xe6, 0x4f, 0x8a, 0x85, 0xc2, 0x2f, 0x28, 0xef, 0x8c, 0x69, 0xc2, 0xc2, 0x1a, + /* (2^163)P */ 0xd9, 0xe4, 0x0e, 0x1e, 0xc2, 0xf7, 0x2f, 0x9f, 0xa1, 0x40, 0xfe, 0x46, 0x16, 0xaf, 0x2e, 0xd1, 0xec, 0x15, 0x9b, 0x61, 0x92, 0xce, 0xfc, 0x10, 0x43, 0x1d, 0x00, 0xf6, 0xbe, 0x20, 0x80, 0x80, 0x6f, 0x3c, 0x16, 0x94, 0x59, 0xba, 0x03, 0x53, 0x6e, 0xb6, 0xdd, 0x25, 0x7b, 0x86, 0xbf, 0x96, 0xf4, 0x2f, 0xa1, 0x96, 0x8d, 0xf9, 0xb3, 0x29, + /* (2^164)P */ 0x3b, 0x04, 
0x60, 0x6e, 0xce, 0xab, 0xd2, 0x63, 0x18, 0x53, 0x88, 0x16, 0x4a, 0x6a, 0xab, 0x72, 0x03, 0x68, 0xa5, 0xd4, 0x0d, 0xb2, 0x82, 0x81, 0x1f, 0x2b, 0x5c, 0x75, 0xe8, 0xd2, 0x1d, 0x7f, 0xe7, 0x1b, 0x35, 0x02, 0xde, 0xec, 0xbd, 0xcb, 0xc7, 0x01, 0xd3, 0x95, 0x61, 0xfe, 0xb2, 0x7a, 0x66, 0x09, 0x4c, 0x6d, 0xfd, 0x39, 0xf7, 0x52, + /* (2^165)P */ 0x42, 0xc1, 0x5f, 0xf8, 0x35, 0x52, 0xc1, 0xfe, 0xc5, 0x11, 0x80, 0x1c, 0x11, 0x46, 0x31, 0x11, 0xbe, 0xd0, 0xc4, 0xb6, 0x07, 0x13, 0x38, 0xa0, 0x8d, 0x65, 0xf0, 0x56, 0x9e, 0x16, 0xbf, 0x9d, 0xcd, 0x51, 0x34, 0xf9, 0x08, 0x48, 0x7b, 0x76, 0x0c, 0x7b, 0x30, 0x07, 0xa8, 0x76, 0xaf, 0xa3, 0x29, 0x38, 0xb0, 0x58, 0xde, 0x72, 0x4b, 0x45, + /* (2^166)P */ 0xd4, 0x16, 0xa7, 0xc0, 0xb4, 0x9f, 0xdf, 0x1a, 0x37, 0xc8, 0x35, 0xed, 0xc5, 0x85, 0x74, 0x64, 0x09, 0x22, 0xef, 0xe9, 0x0c, 0xaf, 0x12, 0x4c, 0x9e, 0xf8, 0x47, 0x56, 0xe0, 0x7f, 0x4e, 0x24, 0x6b, 0x0c, 0xe7, 0xad, 0xc6, 0x47, 0x1d, 0xa4, 0x0d, 0x86, 0x89, 0x65, 0xe8, 0x5f, 0x71, 0xc7, 0xe9, 0xcd, 0xec, 0x6c, 0x62, 0xc7, 0xe3, 0xb3, + /* (2^167)P */ 0xb5, 0xea, 0x86, 0xe3, 0x15, 0x18, 0x3f, 0x6d, 0x7b, 0x05, 0x95, 0x15, 0x53, 0x26, 0x1c, 0xeb, 0xbe, 0x7e, 0x16, 0x42, 0x4b, 0xa2, 0x3d, 0xdd, 0x0e, 0xff, 0xba, 0x67, 0xb5, 0xae, 0x7a, 0x17, 0xde, 0x23, 0xad, 0x14, 0xcc, 0xd7, 0xaf, 0x57, 0x01, 0xe0, 0xdd, 0x48, 0xdd, 0xd7, 0xe3, 0xdf, 0xe9, 0x2d, 0xda, 0x67, 0xa4, 0x9f, 0x29, 0x04, + /* (2^168)P */ 0x16, 0x53, 0xe6, 0x9c, 0x4e, 0xe5, 0x1e, 0x70, 0x81, 0x25, 0x02, 0x9b, 0x47, 0x6d, 0xd2, 0x08, 0x73, 0xbe, 0x0a, 0xf1, 0x7b, 0xeb, 0x24, 0xeb, 0x38, 0x23, 0x5c, 0xb6, 0x3e, 0xce, 0x1e, 0xe3, 0xbc, 0x82, 0x35, 0x1f, 0xaf, 0x3a, 0x3a, 0xe5, 0x4e, 0xc1, 0xca, 0xbf, 0x47, 0xb4, 0xbb, 0xbc, 0x5f, 0xea, 0xc6, 0xca, 0xf3, 0xa0, 0xa2, 0x73, + /* (2^169)P */ 0xef, 0xa4, 0x7a, 0x4e, 0xe4, 0xc7, 0xb6, 0x43, 0x2e, 0xa5, 0xe4, 0xa5, 0xba, 0x1e, 0xa5, 0xfe, 0x9e, 0xce, 0xa9, 0x80, 0x04, 0xcb, 0x4f, 0xd8, 0x74, 0x05, 0x48, 0xfa, 0x99, 0x11, 0x5d, 0x97, 0x3b, 0x07, 0x0d, 0xdd, 0xe6, 0xb1, 0x74, 0x87, 0x1a, 
0xd3, 0x26, 0xb7, 0x8f, 0xe1, 0x63, 0x3d, 0xec, 0x53, 0x93, 0xb0, 0x81, 0x78, 0x34, 0xa4, + /* (2^170)P */ 0xe1, 0xe7, 0xd4, 0x58, 0x9d, 0x0e, 0x8b, 0x65, 0x66, 0x37, 0x16, 0x48, 0x6f, 0xaa, 0x42, 0x37, 0x77, 0xad, 0xb1, 0x56, 0x48, 0xdf, 0x65, 0x36, 0x30, 0xb8, 0x00, 0x12, 0xd8, 0x32, 0x28, 0x7f, 0xc1, 0x71, 0xeb, 0x93, 0x0f, 0x48, 0x04, 0xe1, 0x5a, 0x6a, 0x96, 0xc1, 0xca, 0x89, 0x6d, 0x1b, 0x82, 0x4c, 0x18, 0x6d, 0x55, 0x4b, 0xea, 0xfd, + /* (2^171)P */ 0x62, 0x1a, 0x53, 0xb4, 0xb1, 0xbe, 0x6f, 0x15, 0x18, 0x88, 0xd4, 0x66, 0x61, 0xc7, 0x12, 0x69, 0x02, 0xbd, 0x03, 0x23, 0x2b, 0xef, 0xf9, 0x54, 0xa4, 0x85, 0xa8, 0xe3, 0xb7, 0xbd, 0xa9, 0xa3, 0xf3, 0x2a, 0xdd, 0xf1, 0xd4, 0x03, 0x0f, 0xa9, 0xa1, 0xd8, 0xa3, 0xcd, 0xb2, 0x71, 0x90, 0x4b, 0x35, 0x62, 0xf2, 0x2f, 0xce, 0x67, 0x1f, 0xaa, + /* (2^172)P */ 0x9e, 0x1e, 0xcd, 0x43, 0x7e, 0x87, 0x37, 0x94, 0x3a, 0x97, 0x4c, 0x7e, 0xee, 0xc9, 0x37, 0x85, 0xf1, 0xd9, 0x4f, 0xbf, 0xf9, 0x6f, 0x39, 0x9a, 0x39, 0x87, 0x2e, 0x25, 0x84, 0x42, 0xc3, 0x80, 0xcb, 0x07, 0x22, 0xae, 0x30, 0xd5, 0x50, 0xa1, 0x23, 0xcc, 0x31, 0x81, 0x9d, 0xf1, 0x30, 0xd9, 0x2b, 0x73, 0x41, 0x16, 0x50, 0xab, 0x2d, 0xa2, + /* (2^173)P */ 0xa4, 0x69, 0x4f, 0xa1, 0x4e, 0xb9, 0xbf, 0x14, 0xe8, 0x2b, 0x04, 0x93, 0xb7, 0x6e, 0x9f, 0x7d, 0x73, 0x0a, 0xc5, 0x14, 0xb8, 0xde, 0x8c, 0xc1, 0xfe, 0xc0, 0xa7, 0xa4, 0xcc, 0x42, 0x42, 0x81, 0x15, 0x65, 0x8a, 0x80, 0xb9, 0xde, 0x1f, 0x60, 0x33, 0x0e, 0xcb, 0xfc, 0xe0, 0xdb, 0x83, 0xa1, 0xe5, 0xd0, 0x16, 0x86, 0x2c, 0xe2, 0x87, 0xed, + /* (2^174)P */ 0x7a, 0xc0, 0xeb, 0x6b, 0xf6, 0x0d, 0x4c, 0x6d, 0x1e, 0xdb, 0xab, 0xe7, 0x19, 0x45, 0xc6, 0xe3, 0xb2, 0x06, 0xbb, 0xbc, 0x70, 0x99, 0x83, 0x33, 0xeb, 0x28, 0xc8, 0x77, 0xf6, 0x4d, 0x01, 0xb7, 0x59, 0xa0, 0xd2, 0xb3, 0x2a, 0x72, 0x30, 0xe7, 0x11, 0x39, 0xb6, 0x41, 0x29, 0x65, 0x5a, 0x14, 0xb9, 0x86, 0x08, 0xe0, 0x7d, 0x32, 0x8c, 0xf0, + /* (2^175)P */ 0x5c, 0x11, 0x30, 0x9e, 0x05, 0x27, 0xf5, 0x45, 0x0f, 0xb3, 0xc9, 0x75, 0xc3, 0xd7, 0xe1, 0x82, 0x3b, 0x8e, 0x87, 0x23, 0x00, 
0x15, 0x19, 0x07, 0xd9, 0x21, 0x53, 0xc7, 0xf1, 0xa3, 0xbf, 0x70, 0x64, 0x15, 0x18, 0xca, 0x23, 0x9e, 0xd3, 0x08, 0xc3, 0x2a, 0x8b, 0xe5, 0x83, 0x04, 0x89, 0x14, 0xfd, 0x28, 0x25, 0x1c, 0xe3, 0x26, 0xa7, 0x22, + /* (2^176)P */ 0xdc, 0xd4, 0x75, 0x60, 0x99, 0x94, 0xea, 0x09, 0x8e, 0x8a, 0x3c, 0x1b, 0xf9, 0xbd, 0x33, 0x0d, 0x51, 0x3d, 0x12, 0x6f, 0x4e, 0x72, 0xe0, 0x17, 0x20, 0xe9, 0x75, 0xe6, 0x3a, 0xb2, 0x13, 0x83, 0x4e, 0x7a, 0x08, 0x9e, 0xd1, 0x04, 0x5f, 0x6b, 0x42, 0x0b, 0x76, 0x2a, 0x2d, 0x77, 0x53, 0x6c, 0x65, 0x6d, 0x8e, 0x25, 0x3c, 0xb6, 0x8b, 0x69, + /* (2^177)P */ 0xb9, 0x49, 0x28, 0xd0, 0xdc, 0x6c, 0x8f, 0x4c, 0xc9, 0x14, 0x8a, 0x38, 0xa3, 0xcb, 0xc4, 0x9d, 0x53, 0xcf, 0xe9, 0xe3, 0xcf, 0xe0, 0xb1, 0xf2, 0x1b, 0x4c, 0x7f, 0x83, 0x2a, 0x7a, 0xe9, 0x8b, 0x3b, 0x86, 0x61, 0x30, 0xe9, 0x99, 0xbd, 0xba, 0x19, 0x6e, 0x65, 0x2a, 0x12, 0x3e, 0x9c, 0xa8, 0xaf, 0xc3, 0xcf, 0xf8, 0x1f, 0x77, 0x86, 0xea, + /* (2^178)P */ 0x30, 0xde, 0xe7, 0xff, 0x54, 0xf7, 0xa2, 0x59, 0xf6, 0x0b, 0xfb, 0x7a, 0xf2, 0x39, 0xf0, 0xdb, 0x39, 0xbc, 0xf0, 0xfa, 0x60, 0xeb, 0x6b, 0x4f, 0x47, 0x17, 0xc8, 0x00, 0x65, 0x6d, 0x25, 0x1c, 0xd0, 0x48, 0x56, 0x53, 0x45, 0x11, 0x30, 0x02, 0x49, 0x20, 0x27, 0xac, 0xf2, 0x4c, 0xac, 0x64, 0x3d, 0x52, 0xb8, 0x89, 0xe0, 0x93, 0x16, 0x0f, + /* (2^179)P */ 0x84, 0x09, 0xba, 0x40, 0xb2, 0x2f, 0xa3, 0xa8, 0xc2, 0xba, 0x46, 0x33, 0x05, 0x9d, 0x62, 0xad, 0xa1, 0x3c, 0x33, 0xef, 0x0d, 0xeb, 0xf0, 0x77, 0x11, 0x5a, 0xb0, 0x21, 0x9c, 0xdf, 0x55, 0x24, 0x25, 0x35, 0x51, 0x61, 0x92, 0xf0, 0xb1, 0xce, 0xf5, 0xd4, 0x7b, 0x6c, 0x21, 0x9d, 0x56, 0x52, 0xf8, 0xa1, 0x4c, 0xe9, 0x27, 0x55, 0xac, 0x91, + /* (2^180)P */ 0x03, 0x3e, 0x30, 0xd2, 0x0a, 0xfa, 0x7d, 0x82, 0x3d, 0x1f, 0x8b, 0xcb, 0xb6, 0x04, 0x5c, 0xcc, 0x8b, 0xda, 0xe2, 0x68, 0x74, 0x08, 0x8c, 0x44, 0x83, 0x57, 0x6d, 0x6f, 0x80, 0xb0, 0x7e, 0xa9, 0x82, 0x91, 0x7b, 0x4c, 0x37, 0x97, 0xd1, 0x63, 0xd1, 0xbd, 0x45, 0xe6, 0x8a, 0x86, 0xd6, 0x89, 0x54, 0xfd, 0xd2, 0xb1, 0xd7, 0x54, 0xad, 0xaf, + /* (2^181)P */ 0x8b, 
0x33, 0x62, 0x49, 0x9f, 0x63, 0xf9, 0x87, 0x42, 0x58, 0xbf, 0xb3, 0xe6, 0x68, 0x02, 0x60, 0x5c, 0x76, 0x62, 0xf7, 0x61, 0xd7, 0x36, 0x31, 0xf7, 0x9c, 0xb5, 0xe5, 0x13, 0x6c, 0xea, 0x78, 0xae, 0xcf, 0xde, 0xbf, 0xb6, 0xeb, 0x4f, 0xc8, 0x2a, 0xb4, 0x9a, 0x9f, 0xf3, 0xd1, 0x6a, 0xec, 0x0c, 0xbd, 0x85, 0x98, 0x40, 0x06, 0x1c, 0x2a, + /* (2^182)P */ 0x74, 0x3b, 0xe7, 0x81, 0xd5, 0xae, 0x54, 0x56, 0x03, 0xe8, 0x97, 0x16, 0x76, 0xcf, 0x24, 0x96, 0x96, 0x5b, 0xcc, 0x09, 0xab, 0x23, 0x6f, 0x54, 0xae, 0x8f, 0xe4, 0x12, 0xcb, 0xfd, 0xbc, 0xac, 0x93, 0x45, 0x3d, 0x68, 0x08, 0x22, 0x59, 0xc6, 0xf0, 0x47, 0x19, 0x8c, 0x79, 0x93, 0x1e, 0x0e, 0x30, 0xb0, 0x94, 0xfb, 0x17, 0x1d, 0x5a, 0x12, + /* (2^183)P */ 0x85, 0xff, 0x40, 0x18, 0x85, 0xff, 0x44, 0x37, 0x69, 0x23, 0x4d, 0x34, 0xe1, 0xeb, 0xa3, 0x1b, 0x55, 0x40, 0xc1, 0x64, 0xf4, 0xd4, 0x13, 0x0a, 0x9f, 0xb9, 0x19, 0xfc, 0x88, 0x7d, 0xc0, 0x72, 0xcf, 0x69, 0x2f, 0xd2, 0x0c, 0x82, 0x0f, 0xda, 0x08, 0xba, 0x0f, 0xaa, 0x3b, 0xe9, 0xe5, 0x83, 0x7a, 0x06, 0xe8, 0x1b, 0x38, 0x43, 0xc3, 0x54, + /* (2^184)P */ 0x14, 0xaa, 0xb3, 0x6e, 0xe6, 0x28, 0xee, 0xc5, 0x22, 0x6c, 0x7c, 0xf9, 0xa8, 0x71, 0xcc, 0xfe, 0x68, 0x7e, 0xd3, 0xb8, 0x37, 0x96, 0xca, 0x0b, 0xd9, 0xb6, 0x06, 0xa9, 0xf6, 0x71, 0xe8, 0x31, 0xf7, 0xd8, 0xf1, 0x5d, 0xab, 0xb9, 0xf0, 0x5c, 0x98, 0xcf, 0x22, 0xa2, 0x2a, 0xf6, 0xd0, 0x59, 0xf0, 0x9d, 0xd9, 0x6a, 0x4f, 0x59, 0x57, 0xad, + /* (2^185)P */ 0xd7, 0x2b, 0x3d, 0x38, 0x4c, 0x2e, 0x23, 0x4d, 0x49, 0xa2, 0x62, 0x62, 0xf9, 0x0f, 0xde, 0x08, 0xf3, 0x86, 0x71, 0xb6, 0xc7, 0xf9, 0x85, 0x9c, 0x33, 0xa1, 0xcf, 0x16, 0xaa, 0x60, 0xb9, 0xb7, 0xea, 0xed, 0x01, 0x1c, 0x59, 0xdb, 0x3f, 0x3f, 0x97, 0x2e, 0xf0, 0x09, 0x9f, 0x10, 0x85, 0x5f, 0x53, 0x39, 0xf3, 0x13, 0x40, 0x56, 0x95, 0xf9, + /* (2^186)P */ 0xb4, 0xe3, 0xda, 0xc6, 0x1f, 0x78, 0x8e, 0xac, 0xd4, 0x20, 0x1d, 0xa0, 0xbf, 0x4c, 0x09, 0x16, 0xa7, 0x30, 0xb5, 0x8d, 0x9e, 0xa1, 0x5f, 0x6d, 0x52, 0xf4, 0x71, 0xb6, 0x32, 0x2d, 0x21, 0x51, 0xc6, 0xfc, 0x2f, 0x08, 0xf4, 0x13, 0x6c, 0x55, 
0xba, 0x72, 0x81, 0x24, 0x49, 0x0e, 0x4f, 0x06, 0x36, 0x39, 0x6a, 0xc5, 0x81, 0xfc, 0xeb, 0xb2, + /* (2^187)P */ 0x7d, 0x8d, 0xc8, 0x6c, 0xea, 0xb4, 0xb9, 0xe8, 0x40, 0xc9, 0x69, 0xc9, 0x30, 0x05, 0xfd, 0x34, 0x46, 0xfd, 0x94, 0x05, 0x16, 0xf5, 0x4b, 0x13, 0x3d, 0x24, 0x1a, 0xd6, 0x64, 0x2b, 0x9c, 0xe2, 0xa5, 0xd9, 0x98, 0xe0, 0xe8, 0xf4, 0xbc, 0x2c, 0xbd, 0xa2, 0x56, 0xe3, 0x9e, 0x14, 0xdb, 0xbf, 0x05, 0xbf, 0x9a, 0x13, 0x5d, 0xf7, 0x91, 0xa3, + /* (2^188)P */ 0x8b, 0xcb, 0x27, 0xf3, 0x15, 0x26, 0x05, 0x40, 0x0f, 0xa6, 0x15, 0x13, 0x71, 0x95, 0xa2, 0xc6, 0x38, 0x04, 0x67, 0xf8, 0x9a, 0x83, 0x06, 0xaa, 0x25, 0x36, 0x72, 0x01, 0x6f, 0x74, 0x5f, 0xe5, 0x6e, 0x44, 0x99, 0xce, 0x13, 0xbc, 0x82, 0xc2, 0x0d, 0xa4, 0x98, 0x50, 0x38, 0xf3, 0xa2, 0xc5, 0xe5, 0x24, 0x1f, 0x6f, 0x56, 0x3e, 0x07, 0xb2, + /* (2^189)P */ 0xbd, 0x0f, 0x32, 0x60, 0x07, 0xb1, 0xd7, 0x0b, 0x11, 0x07, 0x57, 0x02, 0x89, 0xe8, 0x8b, 0xe8, 0x5a, 0x1f, 0xee, 0x54, 0x6b, 0xff, 0xb3, 0x04, 0x07, 0x57, 0x13, 0x0b, 0x94, 0xa8, 0x4d, 0x81, 0xe2, 0x17, 0x16, 0x45, 0xd4, 0x4b, 0xf7, 0x7e, 0x64, 0x66, 0x20, 0xe8, 0x0b, 0x26, 0xfd, 0xa9, 0x8a, 0x47, 0x52, 0x89, 0x14, 0xd0, 0xd1, 0xa1, + /* (2^190)P */ 0xdc, 0x03, 0xe6, 0x20, 0x44, 0x47, 0x8f, 0x04, 0x16, 0x24, 0x22, 0xc1, 0x55, 0x5c, 0xbe, 0x43, 0xc3, 0x92, 0xc5, 0x54, 0x3d, 0x5d, 0xd1, 0x05, 0x9c, 0xc6, 0x7c, 0xbf, 0x23, 0x84, 0x1a, 0xba, 0x4f, 0x1f, 0xfc, 0xa1, 0xae, 0x1a, 0x64, 0x02, 0x51, 0xf1, 0xcb, 0x7a, 0x20, 0xce, 0xb2, 0x34, 0x3c, 0xca, 0xe0, 0xe4, 0xba, 0x22, 0xd4, 0x7b, + /* (2^191)P */ 0xca, 0xfd, 0xca, 0xd7, 0xde, 0x61, 0xae, 0xf0, 0x79, 0x0c, 0x20, 0xab, 0xbc, 0x6f, 0x4d, 0x61, 0xf0, 0xc7, 0x9c, 0x8d, 0x4b, 0x52, 0xf3, 0xb9, 0x48, 0x63, 0x0b, 0xb6, 0xd2, 0x25, 0x9a, 0x96, 0x72, 0xc1, 0x6b, 0x0c, 0xb5, 0xfb, 0x71, 0xaa, 0xad, 0x47, 0x5b, 0xe7, 0xc0, 0x0a, 0x55, 0xb2, 0xd4, 0x16, 0x2f, 0xb1, 0x01, 0xfd, 0xce, 0x27, + /* (2^192)P */ 0x64, 0x11, 0x4b, 0xab, 0x57, 0x09, 0xc6, 0x49, 0x4a, 0x37, 0xc3, 0x36, 0xc4, 0x7b, 0x81, 0x1f, 0x42, 0xed, 0xbb, 0xe0, 
0xa0, 0x8d, 0x51, 0xe6, 0xca, 0x8b, 0xb9, 0xcd, 0x99, 0x2d, 0x91, 0x53, 0xa9, 0x47, 0xcb, 0x32, 0xc7, 0xa4, 0x92, 0xec, 0x46, 0x74, 0x44, 0x6d, 0x71, 0x9f, 0x6d, 0x0c, 0x69, 0xa4, 0xf8, 0xbe, 0x9f, 0x7f, 0xa0, 0xd7, + /* (2^193)P */ 0x5f, 0x33, 0xb6, 0x91, 0xc8, 0xa5, 0x3f, 0x5d, 0x7f, 0x38, 0x6e, 0x74, 0x20, 0x4a, 0xd6, 0x2b, 0x98, 0x2a, 0x41, 0x4b, 0x83, 0x64, 0x0b, 0x92, 0x7a, 0x06, 0x1e, 0xc6, 0x2c, 0xf6, 0xe4, 0x91, 0xe5, 0xb1, 0x2e, 0x6e, 0x4e, 0xa8, 0xc8, 0x14, 0x32, 0x57, 0x44, 0x1c, 0xe4, 0xb9, 0x7f, 0x54, 0x51, 0x08, 0x81, 0xaa, 0x4e, 0xce, 0xa1, 0x5d, + /* (2^194)P */ 0x5c, 0xd5, 0x9b, 0x5e, 0x7c, 0xb5, 0xb1, 0x52, 0x73, 0x00, 0x41, 0x56, 0x79, 0x08, 0x7e, 0x07, 0x28, 0x06, 0xa6, 0xfb, 0x7f, 0x69, 0xbd, 0x7a, 0x3c, 0xae, 0x9f, 0x39, 0xbb, 0x54, 0xa2, 0x79, 0xb9, 0x0e, 0x7f, 0xbb, 0xe0, 0xe6, 0xb7, 0x27, 0x64, 0x38, 0x45, 0xdb, 0x84, 0xe4, 0x61, 0x72, 0x3f, 0xe2, 0x24, 0xfe, 0x7a, 0x31, 0x9a, 0xc9, + /* (2^195)P */ 0xa1, 0xd2, 0xa4, 0xee, 0x24, 0x96, 0xe5, 0x5b, 0x79, 0x78, 0x3c, 0x7b, 0x82, 0x3b, 0x8b, 0x58, 0x0b, 0xa3, 0x63, 0x2d, 0xbc, 0x75, 0x46, 0xe8, 0x83, 0x1a, 0xc0, 0x2a, 0x92, 0x61, 0xa8, 0x75, 0x37, 0x3c, 0xbf, 0x0f, 0xef, 0x8f, 0x6c, 0x97, 0x75, 0x10, 0x05, 0x7a, 0xde, 0x23, 0xe8, 0x2a, 0x35, 0xeb, 0x41, 0x64, 0x7d, 0xcf, 0xe0, 0x52, + /* (2^196)P */ 0x4a, 0xd0, 0x49, 0x93, 0xae, 0xf3, 0x24, 0x8c, 0xe1, 0x09, 0x98, 0x45, 0xd8, 0xb9, 0xfe, 0x8e, 0x8c, 0xa8, 0x2c, 0xc9, 0x9f, 0xce, 0x01, 0xdc, 0x38, 0x11, 0xab, 0x85, 0xb9, 0xe8, 0x00, 0x51, 0xfd, 0x82, 0xe1, 0x9b, 0x4e, 0xfc, 0xb5, 0x2a, 0x0f, 0x8b, 0xda, 0x4e, 0x02, 0xca, 0xcc, 0xe3, 0x91, 0xc4, 0xe0, 0xcf, 0x7b, 0xd6, 0xe6, 0x6a, + /* (2^197)P */ 0xfe, 0x11, 0xd7, 0xaa, 0xe3, 0x0c, 0x52, 0x2e, 0x04, 0xe0, 0xe0, 0x61, 0xc8, 0x05, 0xd7, 0x31, 0x4c, 0xc3, 0x9b, 0x2d, 0xce, 0x59, 0xbe, 0x12, 0xb7, 0x30, 0x21, 0xfc, 0x81, 0xb8, 0x5e, 0x57, 0x73, 0xd0, 0xad, 0x8e, 0x9e, 0xe4, 0xeb, 0xcd, 0xcf, 0xd2, 0x0f, 0x01, 0x35, 0x16, 0xed, 0x7a, 0x43, 0x8e, 0x42, 0xdc, 0xea, 0x4c, 0xa8, 0x7c, + /* (2^198)P */ 
0x37, 0x26, 0xcc, 0x76, 0x0b, 0xe5, 0x76, 0xdd, 0x3e, 0x19, 0x3c, 0xc4, 0x6c, 0x7f, 0xd0, 0x03, 0xc1, 0xb8, 0x59, 0x82, 0xca, 0x36, 0xc1, 0xe4, 0xc8, 0xb2, 0x83, 0x69, 0x9c, 0xc5, 0x9d, 0x12, 0x82, 0x1c, 0xea, 0xb2, 0x84, 0x9f, 0xf3, 0x52, 0x6b, 0xbb, 0xd8, 0x81, 0x56, 0x83, 0x04, 0x66, 0x05, 0x22, 0x49, 0x37, 0x93, 0xb1, 0xfd, 0xd5, + /* (2^199)P */ 0xaf, 0x96, 0xbf, 0x03, 0xbe, 0xe6, 0x5d, 0x78, 0x19, 0xba, 0x37, 0x46, 0x0a, 0x2b, 0x52, 0x7c, 0xd8, 0x51, 0x9e, 0x3d, 0x29, 0x42, 0xdb, 0x0e, 0x31, 0x20, 0x94, 0xf8, 0x43, 0x9a, 0x2d, 0x22, 0xd3, 0xe3, 0xa1, 0x79, 0x68, 0xfb, 0x2d, 0x7e, 0xd6, 0x79, 0xda, 0x0b, 0xc6, 0x5b, 0x76, 0x68, 0xf0, 0xfe, 0x72, 0x59, 0xbb, 0xa1, 0x9c, 0x74, + /* (2^200)P */ 0x0a, 0xd9, 0xec, 0xc5, 0xbd, 0xf0, 0xda, 0xcf, 0x82, 0xab, 0x46, 0xc5, 0x32, 0x13, 0xdc, 0x5b, 0xac, 0xc3, 0x53, 0x9a, 0x7f, 0xef, 0xa5, 0x40, 0x5a, 0x1f, 0xc1, 0x12, 0x91, 0x54, 0x83, 0x6a, 0xb0, 0x9a, 0x85, 0x4d, 0xbf, 0x36, 0x8e, 0xd3, 0xa2, 0x2b, 0xe5, 0xd6, 0xc6, 0xe1, 0x58, 0x5b, 0x82, 0x9b, 0xc8, 0xf2, 0x03, 0xba, 0xf5, 0x92, + /* (2^201)P */ 0xfb, 0x21, 0x7e, 0xde, 0xe7, 0xb4, 0xc0, 0x56, 0x86, 0x3a, 0x5b, 0x78, 0xf8, 0xf0, 0xf4, 0xe7, 0x5c, 0x00, 0xd2, 0xd7, 0xd6, 0xf8, 0x75, 0x5e, 0x0f, 0x3e, 0xd1, 0x4b, 0x77, 0xd8, 0xad, 0xb0, 0xc9, 0x8b, 0x59, 0x7d, 0x30, 0x76, 0x64, 0x7a, 0x76, 0xd9, 0x51, 0x69, 0xfc, 0xbd, 0x8e, 0xb5, 0x55, 0xe0, 0xd2, 0x07, 0x15, 0xa9, 0xf7, 0xa4, + /* (2^202)P */ 0xaa, 0x2d, 0x2f, 0x2b, 0x3c, 0x15, 0xdd, 0xcd, 0xe9, 0x28, 0x82, 0x4f, 0xa2, 0xaa, 0x31, 0x48, 0xcc, 0xfa, 0x07, 0x73, 0x8a, 0x34, 0x74, 0x0d, 0xab, 0x1a, 0xca, 0xd2, 0xbf, 0x3a, 0xdb, 0x1a, 0x5f, 0x50, 0x62, 0xf4, 0x6b, 0x83, 0x38, 0x43, 0x96, 0xee, 0x6b, 0x39, 0x1e, 0xf0, 0x17, 0x80, 0x1e, 0x9b, 0xed, 0x2b, 0x2f, 0xcc, 0x65, 0xf7, + /* (2^203)P */ 0x03, 0xb3, 0x23, 0x9c, 0x0d, 0xd1, 0xeb, 0x7e, 0x34, 0x17, 0x8a, 0x4c, 0xde, 0x54, 0x39, 0xc4, 0x11, 0x82, 0xd3, 0xa4, 0x00, 0x32, 0x95, 0x9c, 0xa6, 0x64, 0x76, 0x6e, 0xd6, 0x53, 0x27, 0xb4, 0x6a, 0x14, 0x8c, 0x54, 0xf6, 0x58, 0x9e, 
0x22, 0x4a, 0x55, 0x18, 0x77, 0xd0, 0x08, 0x6b, 0x19, 0x8a, 0xb5, 0xe7, 0x19, 0xb8, 0x60, 0x92, 0xb1, + /* (2^204)P */ 0x66, 0xec, 0xf3, 0x12, 0xde, 0x67, 0x7f, 0xd4, 0x5b, 0xf6, 0x70, 0x64, 0x0a, 0xb5, 0xc2, 0xf9, 0xb3, 0x64, 0xab, 0x56, 0x46, 0xc7, 0x93, 0xc2, 0x8b, 0x2d, 0xd0, 0xd6, 0x39, 0x3b, 0x1f, 0xcd, 0xb3, 0xac, 0xcc, 0x2c, 0x27, 0x6a, 0xbc, 0xb3, 0x4b, 0xa8, 0x3c, 0x69, 0x20, 0xe2, 0x18, 0x35, 0x17, 0xe1, 0x8a, 0xd3, 0x11, 0x74, 0xaa, 0x4d, + /* (2^205)P */ 0x96, 0xc4, 0x16, 0x7e, 0xfd, 0xf5, 0xd0, 0x7d, 0x1f, 0x32, 0x1b, 0xdb, 0xa6, 0xfd, 0x51, 0x75, 0x4d, 0xd7, 0x00, 0xe5, 0x7f, 0x58, 0x5b, 0xeb, 0x4b, 0x6a, 0x78, 0xfe, 0xe5, 0xd6, 0x8f, 0x99, 0x17, 0xca, 0x96, 0x45, 0xf7, 0x52, 0xdf, 0x84, 0x06, 0x77, 0xb9, 0x05, 0x63, 0x5d, 0xe9, 0x91, 0xb1, 0x4b, 0x82, 0x5a, 0xdb, 0xd7, 0xca, 0x69, + /* (2^206)P */ 0x02, 0xd3, 0x38, 0x38, 0x87, 0xea, 0xbd, 0x9f, 0x11, 0xca, 0xf3, 0x21, 0xf1, 0x9b, 0x35, 0x97, 0x98, 0xff, 0x8e, 0x6d, 0x3d, 0xd6, 0xb2, 0xfa, 0x68, 0xcb, 0x7e, 0x62, 0x85, 0xbb, 0xc7, 0x5d, 0xee, 0x32, 0x30, 0x2e, 0x71, 0x96, 0x63, 0x43, 0x98, 0xc4, 0xa7, 0xde, 0x60, 0xb2, 0xd9, 0x43, 0x4a, 0xfa, 0x97, 0x2d, 0x5f, 0x21, 0xd4, 0xfe, + /* (2^207)P */ 0x3b, 0x20, 0x29, 0x07, 0x07, 0xb5, 0x78, 0xc3, 0xc7, 0xab, 0x56, 0xba, 0x40, 0xde, 0x1d, 0xcf, 0xc3, 0x00, 0x56, 0x21, 0x0c, 0xc8, 0x42, 0xd9, 0x0e, 0xcd, 0x02, 0x7c, 0x07, 0xb9, 0x11, 0xd7, 0x96, 0xaf, 0xff, 0xad, 0xc5, 0xba, 0x30, 0x6d, 0x82, 0x3a, 0xbf, 0xef, 0x7b, 0xf7, 0x0a, 0x74, 0xbd, 0x31, 0x0c, 0xe4, 0xec, 0x1a, 0xe5, 0xc5, + /* (2^208)P */ 0xcc, 0xf2, 0x28, 0x16, 0x12, 0xbf, 0xef, 0x85, 0xbc, 0xf7, 0xcb, 0x9f, 0xdb, 0xa8, 0xb2, 0x49, 0x53, 0x48, 0xa8, 0x24, 0xa8, 0x68, 0x8d, 0xbb, 0x21, 0x0a, 0x5a, 0xbd, 0xb2, 0x91, 0x61, 0x47, 0xc4, 0x43, 0x08, 0xa6, 0x19, 0xef, 0x8e, 0x88, 0x39, 0xc6, 0x33, 0x30, 0xf3, 0x0e, 0xc5, 0x92, 0x66, 0xd6, 0xfe, 0xc5, 0x12, 0xd9, 0x4c, 0x2d, + /* (2^209)P */ 0x30, 0x34, 0x07, 0xbf, 0x9c, 0x5a, 0x4e, 0x65, 0xf1, 0x39, 0x35, 0x38, 0xae, 0x7b, 0x55, 0xac, 0x6a, 0x92, 0x24, 
0x7e, 0x50, 0xd3, 0xba, 0x78, 0x51, 0xfe, 0x4d, 0x32, 0x05, 0x11, 0xf5, 0x52, 0xf1, 0x31, 0x45, 0x39, 0x98, 0x7b, 0x28, 0x56, 0xc3, 0x5d, 0x4f, 0x07, 0x6f, 0x84, 0xb8, 0x1a, 0x58, 0x0b, 0xc4, 0x7c, 0xc4, 0x8d, 0x32, 0x8e, + /* (2^210)P */ 0x7e, 0xaf, 0x98, 0xce, 0xc5, 0x2b, 0x9d, 0xf6, 0xfa, 0x2c, 0xb6, 0x2a, 0x5a, 0x1d, 0xc0, 0x24, 0x8d, 0xa4, 0xce, 0xb1, 0x12, 0x01, 0xf9, 0x79, 0xc6, 0x79, 0x38, 0x0c, 0xd4, 0x07, 0xc9, 0xf7, 0x37, 0xa1, 0x0b, 0xfe, 0x72, 0xec, 0x5d, 0xd6, 0xb0, 0x1c, 0x70, 0xbe, 0x70, 0x01, 0x13, 0xe0, 0x86, 0x95, 0xc7, 0x2e, 0x12, 0x3b, 0xe6, 0xa6, + /* (2^211)P */ 0x24, 0x82, 0x67, 0xe0, 0x14, 0x7b, 0x56, 0x08, 0x38, 0x44, 0xdb, 0xa0, 0x3a, 0x05, 0x47, 0xb2, 0xc0, 0xac, 0xd1, 0xcc, 0x3f, 0x82, 0xb8, 0x8a, 0x88, 0xbc, 0xf5, 0x33, 0xa1, 0x35, 0x0f, 0xf6, 0xe2, 0xef, 0x6c, 0xf7, 0x37, 0x9e, 0xe8, 0x10, 0xca, 0xb0, 0x8e, 0x80, 0x86, 0x00, 0x23, 0xd0, 0x4a, 0x76, 0x9f, 0xf7, 0x2c, 0x52, 0x15, 0x0e, + /* (2^212)P */ 0x5e, 0x49, 0xe1, 0x2c, 0x9a, 0x01, 0x76, 0xa6, 0xb3, 0x07, 0x5b, 0xa4, 0x07, 0xef, 0x1d, 0xc3, 0x6a, 0xbb, 0x64, 0xbe, 0x71, 0x15, 0x6e, 0x32, 0x31, 0x46, 0x9a, 0x9e, 0x8f, 0x45, 0x73, 0xce, 0x0b, 0x94, 0x1a, 0x52, 0x07, 0xf4, 0x50, 0x30, 0x49, 0x53, 0x50, 0xfb, 0x71, 0x1f, 0x5a, 0x03, 0xa9, 0x76, 0xf2, 0x8f, 0x42, 0xff, 0xed, 0xed, + /* (2^213)P */ 0xed, 0x08, 0xdb, 0x91, 0x1c, 0xee, 0xa2, 0xb4, 0x47, 0xa2, 0xfa, 0xcb, 0x03, 0xd1, 0xff, 0x8c, 0xad, 0x64, 0x50, 0x61, 0xcd, 0xfc, 0x88, 0xa0, 0x31, 0x95, 0x30, 0xb9, 0x58, 0xdd, 0xd7, 0x43, 0xe4, 0x46, 0xc2, 0x16, 0xd9, 0x72, 0x4a, 0x56, 0x51, 0x70, 0x85, 0xf1, 0xa1, 0x80, 0x40, 0xd5, 0xba, 0x67, 0x81, 0xda, 0xcd, 0x03, 0xea, 0x51, + /* (2^214)P */ 0x42, 0x50, 0xf0, 0xef, 0x37, 0x61, 0x72, 0x85, 0xe1, 0xf1, 0xff, 0x6f, 0x3d, 0xe8, 0x7b, 0x21, 0x5c, 0xe5, 0x50, 0x03, 0xde, 0x00, 0xc1, 0xf7, 0x3a, 0x55, 0x12, 0x1c, 0x9e, 0x1e, 0xce, 0xd1, 0x2f, 0xaf, 0x05, 0x70, 0x5b, 0x47, 0xf2, 0x04, 0x7a, 0x89, 0xbc, 0x78, 0xa6, 0x65, 0x6c, 0xaa, 0x3c, 0xa2, 0x3c, 0x8b, 0x5c, 0xa9, 0x22, 0x48, + /* 
(2^215)P */ 0x7e, 0x8c, 0x8f, 0x2f, 0x60, 0xe3, 0x5a, 0x94, 0xd4, 0xce, 0xdd, 0x9d, 0x83, 0x3b, 0x77, 0x78, 0x43, 0x1d, 0xfd, 0x8f, 0xc8, 0xe8, 0x02, 0x90, 0xab, 0xf6, 0xc9, 0xfc, 0xf1, 0x63, 0xaa, 0x5f, 0x42, 0xf1, 0x78, 0x34, 0x64, 0x16, 0x75, 0x9c, 0x7d, 0xd0, 0xe4, 0x74, 0x5a, 0xa8, 0xfb, 0xcb, 0xac, 0x20, 0xa3, 0xc2, 0xa6, 0x20, 0xf8, 0x1b, + /* (2^216)P */ 0x00, 0x4f, 0x1e, 0x56, 0xb5, 0x34, 0xb2, 0x87, 0x31, 0xe5, 0xee, 0x8d, 0xf1, 0x41, 0x67, 0xb7, 0x67, 0x3a, 0x54, 0x86, 0x5c, 0xf0, 0x0b, 0x37, 0x2f, 0x1b, 0x92, 0x5d, 0x58, 0x93, 0xdc, 0xd8, 0x58, 0xcc, 0x9e, 0x67, 0xd0, 0x97, 0x3a, 0xaf, 0x49, 0x39, 0x2d, 0x3b, 0xd8, 0x98, 0xfb, 0x76, 0x6b, 0xe7, 0xaf, 0xc3, 0x45, 0x44, 0x53, 0x94, + /* (2^217)P */ 0x30, 0xbd, 0x90, 0x75, 0xd3, 0xbd, 0x3b, 0x58, 0x27, 0x14, 0x9f, 0x6b, 0xd4, 0x31, 0x99, 0xcd, 0xde, 0x3a, 0x21, 0x1e, 0xb4, 0x02, 0xe4, 0x33, 0x04, 0x02, 0xb0, 0x50, 0x66, 0x68, 0x90, 0xdd, 0x7b, 0x69, 0x31, 0xd9, 0xcf, 0x68, 0x73, 0xf1, 0x60, 0xdd, 0xc8, 0x1d, 0x5d, 0xe3, 0xd6, 0x5b, 0x2a, 0xa4, 0xea, 0xc4, 0x3f, 0x08, 0xcd, 0x9c, + /* (2^218)P */ 0x6b, 0x1a, 0xbf, 0x55, 0xc1, 0x1b, 0x0c, 0x05, 0x09, 0xdf, 0xf5, 0x5e, 0xa3, 0x77, 0x95, 0xe9, 0xdf, 0x19, 0xdd, 0xc7, 0x94, 0xcb, 0x06, 0x73, 0xd0, 0x88, 0x02, 0x33, 0x94, 0xca, 0x7a, 0x2f, 0x8e, 0x3d, 0x72, 0x61, 0x2d, 0x4d, 0xa6, 0x61, 0x1f, 0x32, 0x5e, 0x87, 0x53, 0x36, 0x11, 0x15, 0x20, 0xb3, 0x5a, 0x57, 0x51, 0x93, 0x20, 0xd8, + /* (2^219)P */ 0xb7, 0x56, 0xf4, 0xab, 0x7d, 0x0c, 0xfb, 0x99, 0x1a, 0x30, 0x29, 0xb0, 0x75, 0x2a, 0xf8, 0x53, 0x71, 0x23, 0xbd, 0xa7, 0xd8, 0x0a, 0xe2, 0x27, 0x65, 0xe9, 0x74, 0x26, 0x98, 0x4a, 0x69, 0x19, 0xb2, 0x4d, 0x0a, 0x17, 0x98, 0xb2, 0xa9, 0x57, 0x4e, 0xf6, 0x86, 0xc8, 0x01, 0xa4, 0xc6, 0x98, 0xad, 0x5a, 0x90, 0x2c, 0x05, 0x46, 0x64, 0xb7, + /* (2^220)P */ 0x7b, 0x91, 0xdf, 0xfc, 0xf8, 0x1c, 0x8c, 0x15, 0x9e, 0xf7, 0xd5, 0xa8, 0xe8, 0xe7, 0xe3, 0xa3, 0xb0, 0x04, 0x74, 0xfa, 0x78, 0xfb, 0x26, 0xbf, 0x67, 0x42, 0xf9, 0x8c, 0x9b, 0xb4, 0x69, 0x5b, 0x02, 0x13, 0x6d, 0x09, 0x6c, 
0xd6, 0x99, 0x61, 0x7b, 0x89, 0x4a, 0x67, 0x75, 0xa3, 0x98, 0x13, 0x23, 0x1d, 0x18, 0x24, 0x0e, 0xef, 0x41, 0x79, + /* (2^221)P */ 0x86, 0x33, 0xab, 0x08, 0xcb, 0xbf, 0x1e, 0x76, 0x3c, 0x0b, 0xbd, 0x30, 0xdb, 0xe9, 0xa3, 0x35, 0x87, 0x1b, 0xe9, 0x07, 0x00, 0x66, 0x7f, 0x3b, 0x35, 0x0c, 0x8a, 0x3f, 0x61, 0xbc, 0xe0, 0xae, 0xf6, 0xcc, 0x54, 0xe1, 0x72, 0x36, 0x2d, 0xee, 0x93, 0x24, 0xf8, 0xd7, 0xc5, 0xf9, 0xcb, 0xb0, 0xe5, 0x88, 0x0d, 0x23, 0x4b, 0x76, 0x15, 0xa2, + /* (2^222)P */ 0x37, 0xdb, 0x83, 0xd5, 0x6d, 0x06, 0x24, 0x37, 0x1b, 0x15, 0x85, 0x15, 0xe2, 0xc0, 0x4e, 0x02, 0xa9, 0x6d, 0x0a, 0x3a, 0x94, 0x4a, 0x6f, 0x49, 0x00, 0x01, 0x72, 0xbb, 0x60, 0x14, 0x35, 0xae, 0xb4, 0xc6, 0x01, 0x0a, 0x00, 0x9e, 0xc3, 0x58, 0xc5, 0xd1, 0x5e, 0x30, 0x73, 0x96, 0x24, 0x85, 0x9d, 0xf0, 0xf9, 0xec, 0x09, 0xd3, 0xe7, 0x70, + /* (2^223)P */ 0xf3, 0xbd, 0x96, 0x87, 0xe9, 0x71, 0xbd, 0xd6, 0xa2, 0x45, 0xeb, 0x0a, 0xcd, 0x2c, 0xf1, 0x72, 0xa6, 0x31, 0xa9, 0x6f, 0x09, 0xa1, 0x5e, 0xdd, 0xc8, 0x8d, 0x0d, 0xbc, 0x5a, 0x8d, 0xb1, 0x2c, 0x9a, 0xcc, 0x37, 0x74, 0xc2, 0xa9, 0x4e, 0xd6, 0xc0, 0x3c, 0xa0, 0x23, 0xb0, 0xa0, 0x77, 0x14, 0x80, 0x45, 0x71, 0x6a, 0x2d, 0x41, 0xc3, 0x82, + /* (2^224)P */ 0x37, 0x44, 0xec, 0x8a, 0x3e, 0xc1, 0x0c, 0xa9, 0x12, 0x9c, 0x08, 0x88, 0xcb, 0xd9, 0xf8, 0xba, 0x00, 0xd6, 0xc3, 0xdf, 0xef, 0x7a, 0x44, 0x7e, 0x25, 0x69, 0xc9, 0xc1, 0x46, 0xe5, 0x20, 0x9e, 0xcc, 0x0b, 0x05, 0x3e, 0xf4, 0x78, 0x43, 0x0c, 0xa6, 0x2f, 0xc1, 0xfa, 0x70, 0xb2, 0x3c, 0x31, 0x7a, 0x63, 0x58, 0xab, 0x17, 0xcf, 0x4c, 0x4f, + /* (2^225)P */ 0x2b, 0x08, 0x31, 0x59, 0x75, 0x8b, 0xec, 0x0a, 0xa9, 0x79, 0x70, 0xdd, 0xf1, 0x11, 0xc3, 0x11, 0x1f, 0xab, 0x37, 0xaa, 0x26, 0xea, 0x53, 0xc4, 0x79, 0xa7, 0x91, 0x00, 0xaa, 0x08, 0x42, 0xeb, 0x8b, 0x8b, 0xe8, 0xc3, 0x2f, 0xb8, 0x78, 0x90, 0x38, 0x0e, 0x8a, 0x42, 0x0c, 0x0f, 0xbf, 0x3e, 0xf8, 0xd8, 0x07, 0xcf, 0x6a, 0x34, 0xc9, 0xfa, + /* (2^226)P */ 0x11, 0xe0, 0x76, 0x4d, 0x23, 0xc5, 0xa6, 0xcc, 0x9f, 0x9a, 0x2a, 0xde, 0x3a, 0xb5, 0x92, 0x39, 0x19, 
0x8a, 0xf1, 0x8d, 0xf9, 0x4d, 0xc9, 0xb4, 0x39, 0x9f, 0x57, 0xd8, 0x72, 0xab, 0x1d, 0x61, 0x6a, 0xb2, 0xff, 0x52, 0xba, 0x54, 0x0e, 0xfb, 0x83, 0x30, 0x8a, 0xf7, 0x3b, 0xf4, 0xd8, 0xae, 0x1a, 0x94, 0x3a, 0xec, 0x63, 0xfe, 0x6e, 0x7c, + /* (2^227)P */ 0xdc, 0x70, 0x8e, 0x55, 0x44, 0xbf, 0xd2, 0x6a, 0xa0, 0x14, 0x61, 0x89, 0xd5, 0x55, 0x45, 0x3c, 0xf6, 0x40, 0x0d, 0x83, 0x85, 0x44, 0xb4, 0x62, 0x56, 0xfe, 0x60, 0xd7, 0x07, 0x1d, 0x47, 0x30, 0x3b, 0x73, 0xa4, 0xb5, 0xb7, 0xea, 0xac, 0xda, 0xf1, 0x17, 0xaa, 0x60, 0xdf, 0xe9, 0x84, 0xda, 0x31, 0x32, 0x61, 0xbf, 0xd0, 0x7e, 0x8a, 0x02, + /* (2^228)P */ 0xb9, 0x51, 0xb3, 0x89, 0x21, 0x5d, 0xa2, 0xfe, 0x79, 0x2a, 0xb3, 0x2a, 0x3b, 0xe6, 0x6f, 0x2b, 0x22, 0x03, 0xea, 0x7b, 0x1f, 0xaf, 0x85, 0xc3, 0x38, 0x55, 0x5b, 0x8e, 0xb4, 0xaa, 0x77, 0xfe, 0x03, 0x6e, 0xda, 0x91, 0x24, 0x0c, 0x48, 0x39, 0x27, 0x43, 0x16, 0xd2, 0x0a, 0x0d, 0x43, 0xa3, 0x0e, 0xca, 0x45, 0xd1, 0x7f, 0xf5, 0xd3, 0x16, + /* (2^229)P */ 0x3d, 0x32, 0x9b, 0x38, 0xf8, 0x06, 0x93, 0x78, 0x5b, 0x50, 0x2b, 0x06, 0xd8, 0x66, 0xfe, 0xab, 0x9b, 0x58, 0xc7, 0xd1, 0x4d, 0xd5, 0xf8, 0x3b, 0x10, 0x7e, 0x85, 0xde, 0x58, 0x4e, 0xdf, 0x53, 0xd9, 0x58, 0xe0, 0x15, 0x81, 0x9f, 0x1a, 0x78, 0xfc, 0x9f, 0x10, 0xc2, 0x23, 0xd6, 0x78, 0xd1, 0x9d, 0xd2, 0xd5, 0x1c, 0x53, 0xe2, 0xc9, 0x76, + /* (2^230)P */ 0x98, 0x1e, 0x38, 0x7b, 0x71, 0x18, 0x4b, 0x15, 0xaf, 0xa1, 0xa6, 0x98, 0xcb, 0x26, 0xa3, 0xc8, 0x07, 0x46, 0xda, 0x3b, 0x70, 0x65, 0xec, 0x7a, 0x2b, 0x34, 0x94, 0xa8, 0xb6, 0x14, 0xf8, 0x1a, 0xce, 0xf7, 0xc8, 0x60, 0xf3, 0x88, 0xf4, 0x33, 0x60, 0x7b, 0xd1, 0x02, 0xe7, 0xda, 0x00, 0x4a, 0xea, 0xd2, 0xfd, 0x88, 0xd2, 0x99, 0x28, 0xf3, + /* (2^231)P */ 0x28, 0x24, 0x1d, 0x26, 0xc2, 0xeb, 0x8b, 0x3b, 0xb4, 0x6b, 0xbe, 0x6b, 0x77, 0xff, 0xf3, 0x21, 0x3b, 0x26, 0x6a, 0x8c, 0x8e, 0x2a, 0x44, 0xa8, 0x01, 0x2b, 0x71, 0xea, 0x64, 0x30, 0xfd, 0xfd, 0x95, 0xcb, 0x39, 0x38, 0x48, 0xfa, 0x96, 0x97, 0x8c, 0x2f, 0x33, 0xca, 0x03, 0xe6, 0xd7, 0x94, 0x55, 0x6c, 0xc3, 0xb3, 0xa8, 0xf7, 0xae, 0x8c, 
+ /* (2^232)P */ 0xea, 0x62, 0x8a, 0xb4, 0xeb, 0x74, 0xf7, 0xb8, 0xae, 0xc5, 0x20, 0x71, 0x06, 0xd6, 0x7c, 0x62, 0x9b, 0x69, 0x74, 0xef, 0xa7, 0x6d, 0xd6, 0x8c, 0x37, 0xb9, 0xbf, 0xcf, 0xeb, 0xe4, 0x2f, 0x04, 0x02, 0x21, 0x7d, 0x75, 0x6b, 0x92, 0x48, 0xf8, 0x70, 0xad, 0x69, 0xe2, 0xea, 0x0e, 0x88, 0x67, 0x72, 0xcc, 0x2d, 0x10, 0xce, 0x2d, 0xcf, 0x65, + /* (2^233)P */ 0x49, 0xf3, 0x57, 0x64, 0xe5, 0x5c, 0xc5, 0x65, 0x49, 0x97, 0xc4, 0x8a, 0xcc, 0xa9, 0xca, 0x94, 0x7b, 0x86, 0x88, 0xb6, 0x51, 0x27, 0x69, 0xa5, 0x0f, 0x8b, 0x06, 0x59, 0xa0, 0x94, 0xef, 0x63, 0x1a, 0x01, 0x9e, 0x4f, 0xd2, 0x5a, 0x93, 0xc0, 0x7c, 0xe6, 0x61, 0x77, 0xb6, 0xf5, 0x40, 0xd9, 0x98, 0x43, 0x5b, 0x56, 0x68, 0xe9, 0x37, 0x8f, + /* (2^234)P */ 0xee, 0x87, 0xd2, 0x05, 0x1b, 0x39, 0x89, 0x10, 0x07, 0x6d, 0xe8, 0xfd, 0x8b, 0x4d, 0xb2, 0xa7, 0x7b, 0x1e, 0xa0, 0x6c, 0x0d, 0x3d, 0x3d, 0x49, 0xba, 0x61, 0x36, 0x1f, 0xc2, 0x84, 0x4a, 0xcc, 0x87, 0xa9, 0x1b, 0x23, 0x04, 0xe2, 0x3e, 0x97, 0xe1, 0xdb, 0xd5, 0x5a, 0xe8, 0x41, 0x6b, 0xe5, 0x5a, 0xa1, 0x99, 0xe5, 0x7b, 0xa7, 0xe0, 0x3b, + /* (2^235)P */ 0xea, 0xa3, 0x6a, 0xdd, 0x77, 0x7f, 0x77, 0x41, 0xc5, 0x6a, 0xe4, 0xaf, 0x11, 0x5f, 0x88, 0xa5, 0x10, 0xee, 0xd0, 0x8c, 0x0c, 0xb4, 0xa5, 0x2a, 0xd0, 0xd8, 0x1d, 0x47, 0x06, 0xc0, 0xd5, 0xce, 0x51, 0x54, 0x9b, 0x2b, 0xe6, 0x2f, 0xe7, 0xe7, 0x31, 0x5f, 0x5c, 0x23, 0x81, 0x3e, 0x03, 0x93, 0xaa, 0x2d, 0x71, 0x84, 0xa0, 0x89, 0x32, 0xa6, + /* (2^236)P */ 0x55, 0xa3, 0x13, 0x92, 0x4e, 0x93, 0x7d, 0xec, 0xca, 0x57, 0xfb, 0x37, 0xae, 0xd2, 0x18, 0x2e, 0x54, 0x05, 0x6c, 0xd1, 0x28, 0xca, 0x90, 0x40, 0x82, 0x2e, 0x79, 0xc6, 0x5a, 0xc7, 0xdd, 0x84, 0x93, 0xdf, 0x15, 0xb8, 0x1f, 0xb1, 0xf9, 0xaf, 0x2c, 0xe5, 0x32, 0xcd, 0xc2, 0x99, 0x6d, 0xac, 0x85, 0x5c, 0x63, 0xd3, 0xe2, 0xff, 0x24, 0xda, + /* (2^237)P */ 0x2d, 0x8d, 0xfd, 0x65, 0xcc, 0xe5, 0x02, 0xa0, 0xe5, 0xb9, 0xec, 0x59, 0x09, 0x50, 0x27, 0xb7, 0x3d, 0x2a, 0x79, 0xb2, 0x76, 0x5d, 0x64, 0x95, 0xf8, 0xc5, 0xaf, 0x8a, 0x62, 0x11, 0x5c, 0x56, 0x1c, 0x05, 0x64, 0x9e, 
0x5e, 0xbd, 0x54, 0x04, 0xe6, 0x9e, 0xab, 0xe6, 0x22, 0x7e, 0x42, 0x54, 0xb5, 0xa5, 0xd0, 0x8d, 0x28, 0x6b, 0x0f, 0x0b, + /* (2^238)P */ 0x2d, 0xb2, 0x8c, 0x59, 0x10, 0x37, 0x84, 0x3b, 0x9b, 0x65, 0x1b, 0x0f, 0x10, 0xf9, 0xea, 0x60, 0x1b, 0x02, 0xf5, 0xee, 0x8b, 0xe6, 0x32, 0x7d, 0x10, 0x7f, 0x5f, 0x8c, 0x72, 0x09, 0x4e, 0x1f, 0x29, 0xff, 0x65, 0xcb, 0x3e, 0x3a, 0xd2, 0x96, 0x50, 0x1e, 0xea, 0x64, 0x99, 0xb5, 0x4c, 0x7a, 0x69, 0xb8, 0x95, 0xae, 0x48, 0xc0, 0x7c, 0xb1, + /* (2^239)P */ 0xcd, 0x7c, 0x4f, 0x3e, 0xea, 0xf3, 0x90, 0xcb, 0x12, 0x76, 0xd1, 0x17, 0xdc, 0x0d, 0x13, 0x0f, 0xfd, 0x4d, 0xb5, 0x1f, 0xe4, 0xdd, 0xf2, 0x4d, 0x58, 0xea, 0xa5, 0x66, 0x92, 0xcf, 0xe5, 0x54, 0xea, 0x9b, 0x35, 0x83, 0x1a, 0x44, 0x8e, 0x62, 0x73, 0x45, 0x98, 0xa3, 0x89, 0x95, 0x52, 0x93, 0x1a, 0x8d, 0x63, 0x0f, 0xc2, 0x57, 0x3c, 0xb1, + /* (2^240)P */ 0x72, 0xb4, 0xdf, 0x51, 0xb7, 0xf6, 0x52, 0xa2, 0x14, 0x56, 0xe5, 0x0a, 0x2e, 0x75, 0x81, 0x02, 0xee, 0x93, 0x48, 0x0a, 0x92, 0x4e, 0x0c, 0x0f, 0xdf, 0x09, 0x89, 0x99, 0xf6, 0xf9, 0x22, 0xa2, 0x32, 0xf8, 0xb0, 0x76, 0x0c, 0xb2, 0x4d, 0x6e, 0xbe, 0x83, 0x35, 0x61, 0x44, 0xd2, 0x58, 0xc7, 0xdd, 0x14, 0xcf, 0xc3, 0x4b, 0x7c, 0x07, 0xee, + /* (2^241)P */ 0x8b, 0x03, 0xee, 0xcb, 0xa7, 0x2e, 0x28, 0xbd, 0x97, 0xd1, 0x4c, 0x2b, 0xd1, 0x92, 0x67, 0x5b, 0x5a, 0x12, 0xbf, 0x29, 0x17, 0xfc, 0x50, 0x09, 0x74, 0x76, 0xa2, 0xd4, 0x82, 0xfd, 0x2c, 0x0c, 0x90, 0xf7, 0xe7, 0xe5, 0x9a, 0x2c, 0x16, 0x40, 0xb9, 0x6c, 0xd9, 0xe0, 0x22, 0x9e, 0xf8, 0xdd, 0x73, 0xe4, 0x7b, 0x9e, 0xbe, 0x4f, 0x66, 0x22, + /* (2^242)P */ 0xa4, 0x10, 0xbe, 0xb8, 0x83, 0x3a, 0x77, 0x8e, 0xea, 0x0a, 0xc4, 0x97, 0x3e, 0xb6, 0x6c, 0x81, 0xd7, 0x65, 0xd9, 0xf7, 0xae, 0xe6, 0xbe, 0xab, 0x59, 0x81, 0x29, 0x4b, 0xff, 0xe1, 0x0f, 0xc3, 0x2b, 0xad, 0x4b, 0xef, 0xc4, 0x50, 0x9f, 0x88, 0x31, 0xf2, 0xde, 0x80, 0xd6, 0xf4, 0x20, 0x9c, 0x77, 0x9b, 0xbe, 0xbe, 0x08, 0xf5, 0xf0, 0x95, + /* (2^243)P */ 0x0e, 0x7c, 0x7b, 0x7c, 0xb3, 0xd8, 0x83, 0xfc, 0x8c, 0x75, 0x51, 0x74, 0x1b, 0xe1, 0x6d, 0x11, 
0x05, 0x46, 0x24, 0x0d, 0xa4, 0x2b, 0x32, 0xfd, 0x2c, 0x4e, 0x21, 0xdf, 0x39, 0x6b, 0x96, 0xfc, 0xff, 0x92, 0xfc, 0x35, 0x0d, 0x9a, 0x4b, 0xc0, 0x70, 0x46, 0x32, 0x7d, 0xc0, 0xc4, 0x04, 0xe0, 0x2d, 0x83, 0xa7, 0x00, 0xc7, 0xcb, 0xb4, 0x8f, + /* (2^244)P */ 0xa9, 0x5a, 0x7f, 0x0e, 0xdd, 0x2c, 0x85, 0xaa, 0x4d, 0xac, 0xde, 0xb3, 0xb6, 0xaf, 0xe6, 0xd1, 0x06, 0x7b, 0x2c, 0xa4, 0x01, 0x19, 0x22, 0x7d, 0x78, 0xf0, 0x3a, 0xea, 0x89, 0xfe, 0x21, 0x61, 0x6d, 0xb8, 0xfe, 0xa5, 0x2a, 0xab, 0x0d, 0x7b, 0x51, 0x39, 0xb6, 0xde, 0xbc, 0xf0, 0xc5, 0x48, 0xd7, 0x09, 0x82, 0x6e, 0x66, 0x75, 0xc5, 0xcd, + /* (2^245)P */ 0xee, 0xdf, 0x2b, 0x6c, 0xa8, 0xde, 0x61, 0xe1, 0x27, 0xfa, 0x2a, 0x0f, 0x68, 0xe7, 0x7a, 0x9b, 0x13, 0xe9, 0x56, 0xd2, 0x1c, 0x3d, 0x2f, 0x3c, 0x7a, 0xf6, 0x6f, 0x45, 0xee, 0xe8, 0xf4, 0xa0, 0xa6, 0xe8, 0xa5, 0x27, 0xee, 0xf2, 0x85, 0xa9, 0xd5, 0x0e, 0xa9, 0x26, 0x60, 0xfe, 0xee, 0xc7, 0x59, 0x99, 0x5e, 0xa3, 0xdf, 0x23, 0x36, 0xd5, + /* (2^246)P */ 0x15, 0x66, 0x6f, 0xd5, 0x78, 0xa4, 0x0a, 0xf7, 0xb1, 0xe8, 0x75, 0x6b, 0x48, 0x7d, 0xa6, 0x4d, 0x3d, 0x36, 0x9b, 0xc7, 0xcc, 0x68, 0x9a, 0xfe, 0x2f, 0x39, 0x2a, 0x51, 0x31, 0x39, 0x7d, 0x73, 0x6f, 0xc8, 0x74, 0x72, 0x6f, 0x6e, 0xda, 0x5f, 0xad, 0x48, 0xc8, 0x40, 0xe1, 0x06, 0x01, 0x36, 0xa1, 0x88, 0xc8, 0x99, 0x9c, 0xd1, 0x11, 0x8f, + /* (2^247)P */ 0xab, 0xc5, 0xcb, 0xcf, 0xbd, 0x73, 0x21, 0xd0, 0x82, 0xb1, 0x2e, 0x2d, 0xd4, 0x36, 0x1b, 0xed, 0xa9, 0x8a, 0x26, 0x79, 0xc4, 0x17, 0xae, 0xe5, 0x09, 0x0a, 0x0c, 0xa4, 0x21, 0xa0, 0x6e, 0xdd, 0x62, 0x8e, 0x44, 0x62, 0xcc, 0x50, 0xff, 0x93, 0xb3, 0x9a, 0x72, 0x8c, 0x3f, 0xa1, 0xa6, 0x4d, 0x87, 0xd5, 0x1c, 0x5a, 0xc0, 0x0b, 0x1a, 0xd6, + /* (2^248)P */ 0x67, 0x36, 0x6a, 0x1f, 0x96, 0xe5, 0x80, 0x20, 0xa9, 0xe8, 0x0b, 0x0e, 0x21, 0x29, 0x3f, 0xc8, 0x0a, 0x6d, 0x27, 0x47, 0xca, 0xd9, 0x05, 0x55, 0xbf, 0x11, 0xcf, 0x31, 0x7a, 0x37, 0xc7, 0x90, 0xa9, 0xf4, 0x07, 0x5e, 0xd5, 0xc3, 0x92, 0xaa, 0x95, 0xc8, 0x23, 0x2a, 0x53, 0x45, 0xe3, 0x3a, 0x24, 0xe9, 0x67, 0x97, 0x3a, 0x82, 0xf9, 
0xa6, + /* (2^249)P */ 0x92, 0x9e, 0x6d, 0x82, 0x67, 0xe9, 0xf9, 0x17, 0x96, 0x2c, 0xa7, 0xd3, 0x89, 0xf9, 0xdb, 0xd8, 0x20, 0xc6, 0x2e, 0xec, 0x4a, 0x76, 0x64, 0xbf, 0x27, 0x40, 0xe2, 0xb4, 0xdf, 0x1f, 0xa0, 0xef, 0x07, 0x80, 0xfb, 0x8e, 0x12, 0xf8, 0xb8, 0xe1, 0xc6, 0xdf, 0x7c, 0x69, 0x35, 0x5a, 0xe1, 0x8e, 0x5d, 0x69, 0x84, 0x56, 0xb6, 0x31, 0x1c, 0x0b, + /* (2^250)P */ 0xd6, 0x94, 0x5c, 0xef, 0xbb, 0x46, 0x45, 0x44, 0x5b, 0xa1, 0xae, 0x03, 0x65, 0xdd, 0xb5, 0x66, 0x88, 0x35, 0x29, 0x95, 0x16, 0x54, 0xa6, 0xf5, 0xc9, 0x78, 0x34, 0xe6, 0x0f, 0xc4, 0x2b, 0x5b, 0x79, 0x51, 0x68, 0x48, 0x3a, 0x26, 0x87, 0x05, 0x70, 0xaf, 0x8b, 0xa6, 0xc7, 0x2e, 0xb3, 0xa9, 0x10, 0x01, 0xb0, 0xb9, 0x31, 0xfd, 0xdc, 0x80, + /* (2^251)P */ 0x25, 0xf2, 0xad, 0xd6, 0x75, 0xa3, 0x04, 0x05, 0x64, 0x8a, 0x97, 0x60, 0x27, 0x2a, 0xe5, 0x6d, 0xb0, 0x73, 0xf4, 0x07, 0x2a, 0x9d, 0xe9, 0x46, 0xb4, 0x1c, 0x51, 0xf8, 0x63, 0x98, 0x7e, 0xe5, 0x13, 0x51, 0xed, 0x98, 0x65, 0x98, 0x4f, 0x8f, 0xe7, 0x7e, 0x72, 0xd7, 0x64, 0x11, 0x2f, 0xcd, 0x12, 0xf8, 0xc4, 0x63, 0x52, 0x0f, 0x7f, 0xc4, + /* (2^252)P */ 0x5c, 0xd9, 0x85, 0x63, 0xc7, 0x8a, 0x65, 0x9a, 0x25, 0x83, 0x31, 0x73, 0x49, 0xf0, 0x93, 0x96, 0x70, 0x67, 0x6d, 0xb1, 0xff, 0x95, 0x54, 0xe4, 0xf8, 0x15, 0x6c, 0x5f, 0xbd, 0xf6, 0x0f, 0x38, 0x7b, 0x68, 0x7d, 0xd9, 0x3d, 0xf0, 0xa9, 0xa0, 0xe4, 0xd1, 0xb6, 0x34, 0x6d, 0x14, 0x16, 0xc2, 0x4c, 0x30, 0x0e, 0x67, 0xd3, 0xbe, 0x2e, 0xc0, + /* (2^253)P */ 0x06, 0x6b, 0x52, 0xc8, 0x14, 0xcd, 0xae, 0x03, 0x93, 0xea, 0xc1, 0xf2, 0xf6, 0x8b, 0xc5, 0xb6, 0xdc, 0x82, 0x42, 0x29, 0x94, 0xe0, 0x25, 0x6c, 0x3f, 0x9f, 0x5d, 0xe4, 0x96, 0xf6, 0x8e, 0x3f, 0xf9, 0x72, 0xc4, 0x77, 0x60, 0x8b, 0xa4, 0xf9, 0xa8, 0xc3, 0x0a, 0x81, 0xb1, 0x97, 0x70, 0x18, 0xab, 0xea, 0x37, 0x8a, 0x08, 0xc7, 0xe2, 0x95, + /* (2^254)P */ 0x94, 0x49, 0xd9, 0x5f, 0x76, 0x72, 0x82, 0xad, 0x2d, 0x50, 0x1a, 0x7a, 0x5b, 0xe6, 0x95, 0x1e, 0x95, 0x65, 0x87, 0x1c, 0x52, 0xd7, 0x44, 0xe6, 0x9b, 0x56, 0xcd, 0x6f, 0x05, 0xff, 0x67, 0xc5, 0xdb, 0xa2, 0xac, 
0xe4, 0xa2, 0x28, 0x63, 0x5f, 0xfb, 0x0c, 0x3b, 0xf1, 0x87, 0xc3, 0x36, 0x78, 0x3f, 0x77, 0xfa, 0x50, 0x85, 0xf9, 0xd7, 0x82, + /* (2^255)P */ 0x64, 0xc0, 0xe0, 0xd8, 0x2d, 0xed, 0xcb, 0x6a, 0xfd, 0xcd, 0xbc, 0x7e, 0x9f, 0xc8, 0x85, 0xe9, 0xc1, 0x7c, 0x0f, 0xe5, 0x18, 0xea, 0xd4, 0x51, 0xad, 0x59, 0x13, 0x75, 0xd9, 0x3d, 0xd4, 0x8a, 0xb2, 0xbe, 0x78, 0x52, 0x2b, 0x52, 0x94, 0x37, 0x41, 0xd6, 0xb4, 0xb6, 0x45, 0x20, 0x76, 0xe0, 0x1f, 0x31, 0xdb, 0xb1, 0xa1, 0x43, 0xf0, 0x18, + /* (2^256)P */ 0x74, 0xa9, 0xa4, 0xa9, 0xdd, 0x6e, 0x3e, 0x68, 0xe5, 0xc3, 0x2e, 0x92, 0x17, 0xa4, 0xcb, 0x80, 0xb1, 0xf0, 0x06, 0x93, 0xef, 0xe6, 0x00, 0xe6, 0x3b, 0xb1, 0x32, 0x65, 0x7b, 0x83, 0xb6, 0x8a, 0x49, 0x1b, 0x14, 0x89, 0xee, 0xba, 0xf5, 0x6a, 0x8d, 0x36, 0xef, 0xb0, 0xd8, 0xb2, 0x16, 0x99, 0x17, 0x35, 0x02, 0x16, 0x55, 0x58, 0xdd, 0x82, + /* (2^257)P */ 0x36, 0x95, 0xe8, 0xf4, 0x36, 0x42, 0xbb, 0xc5, 0x3e, 0xfa, 0x30, 0x84, 0x9e, 0x59, 0xfd, 0xd2, 0x95, 0x42, 0xf8, 0x64, 0xd9, 0xb9, 0x0e, 0x9f, 0xfa, 0xd0, 0x7b, 0x20, 0x31, 0x77, 0x48, 0x29, 0x4d, 0xd0, 0x32, 0x57, 0x56, 0x30, 0xa6, 0x17, 0x53, 0x04, 0xbf, 0x08, 0x28, 0xec, 0xb8, 0x46, 0xc1, 0x03, 0x89, 0xdc, 0xed, 0xa0, 0x35, 0x53, + /* (2^258)P */ 0xc5, 0x7f, 0x9e, 0xd8, 0xc5, 0xba, 0x5f, 0x68, 0xc8, 0x23, 0x75, 0xea, 0x0d, 0xd9, 0x5a, 0xfd, 0x61, 0x1a, 0xa3, 0x2e, 0x45, 0x63, 0x14, 0x55, 0x86, 0x21, 0x29, 0xbe, 0xef, 0x5e, 0x50, 0xe5, 0x18, 0x59, 0xe7, 0xe3, 0xce, 0x4d, 0x8c, 0x15, 0x8f, 0x89, 0x66, 0x44, 0x52, 0x3d, 0xfa, 0xc7, 0x9a, 0x59, 0x90, 0x8e, 0xc0, 0x06, 0x3f, 0xc9, + /* (2^259)P */ 0x8e, 0x04, 0xd9, 0x16, 0x50, 0x1d, 0x8c, 0x9f, 0xd5, 0xe3, 0xce, 0xfd, 0x47, 0x04, 0x27, 0x4d, 0xc2, 0xfa, 0x71, 0xd9, 0x0b, 0xb8, 0x65, 0xf4, 0x11, 0xf3, 0x08, 0xee, 0x81, 0xc8, 0x67, 0x99, 0x0b, 0x8d, 0x77, 0xa3, 0x4f, 0xb5, 0x9b, 0xdb, 0x26, 0xf1, 0x97, 0xeb, 0x04, 0x54, 0xeb, 0x80, 0x08, 0x1d, 0x1d, 0xf6, 0x3d, 0x1f, 0x5a, 0xb8, + /* (2^260)P */ 0xb7, 0x9c, 0x9d, 0xee, 0xb9, 0x5c, 0xad, 0x0d, 0x9e, 0xfd, 0x60, 0x3c, 0x27, 0x4e, 0xa2, 
0x95, 0xfb, 0x64, 0x7e, 0x79, 0x64, 0x87, 0x10, 0xb4, 0x73, 0xe0, 0x9d, 0x46, 0x4d, 0x3d, 0xee, 0x83, 0xe4, 0x16, 0x88, 0x97, 0xe6, 0x4d, 0xba, 0x70, 0xb6, 0x96, 0x7b, 0xff, 0x4b, 0xc8, 0xcf, 0x72, 0x83, 0x3e, 0x5b, 0x24, 0x2e, 0x57, 0xf1, 0x82, + /* (2^261)P */ 0x30, 0x71, 0x40, 0x51, 0x4f, 0x44, 0xbb, 0xc7, 0xf0, 0x54, 0x6e, 0x9d, 0xeb, 0x15, 0xad, 0xf8, 0x61, 0x43, 0x5a, 0xef, 0xc0, 0xb1, 0x57, 0xae, 0x03, 0x40, 0xe8, 0x68, 0x6f, 0x03, 0x20, 0x4f, 0x8a, 0x51, 0x2a, 0x9e, 0xd2, 0x45, 0xaf, 0xb4, 0xf5, 0xd4, 0x95, 0x7f, 0x3d, 0x3d, 0xb7, 0xb6, 0x28, 0xc5, 0x08, 0x8b, 0x44, 0xd6, 0x3f, 0xe7, + /* (2^262)P */ 0xa9, 0x52, 0x04, 0x67, 0xcb, 0x20, 0x63, 0xf8, 0x18, 0x01, 0x44, 0x21, 0x6a, 0x8a, 0x83, 0x48, 0xd4, 0xaf, 0x23, 0x0f, 0x35, 0x8d, 0xe5, 0x5a, 0xc4, 0x7c, 0x55, 0x46, 0x19, 0x5f, 0x35, 0xe0, 0x5d, 0x97, 0x4c, 0x2d, 0x04, 0xed, 0x59, 0xd4, 0xb0, 0xb2, 0xc6, 0xe3, 0x51, 0xe1, 0x38, 0xc6, 0x30, 0x49, 0x8f, 0xae, 0x61, 0x64, 0xce, 0xa8, + /* (2^263)P */ 0x9b, 0x64, 0x83, 0x3c, 0xd3, 0xdf, 0xb9, 0x27, 0xe7, 0x5b, 0x7f, 0xeb, 0xf3, 0x26, 0xcf, 0xb1, 0x8f, 0xaf, 0x26, 0xc8, 0x48, 0xce, 0xa1, 0xac, 0x7d, 0x10, 0x34, 0x28, 0xe1, 0x1f, 0x69, 0x03, 0x64, 0x77, 0x61, 0xdd, 0x4a, 0x9b, 0x18, 0x47, 0xf8, 0xca, 0x63, 0xc9, 0x03, 0x2d, 0x20, 0x2a, 0x69, 0x6e, 0x42, 0xd0, 0xe7, 0xaa, 0xb5, 0xf3, + /* (2^264)P */ 0xea, 0x31, 0x0c, 0x57, 0x0f, 0x3e, 0xe3, 0x35, 0xd8, 0x30, 0xa5, 0x6f, 0xdd, 0x95, 0x43, 0xc6, 0x66, 0x07, 0x4f, 0x34, 0xc3, 0x7e, 0x04, 0x10, 0x2d, 0xc4, 0x1c, 0x94, 0x52, 0x2e, 0x5b, 0x9a, 0x65, 0x2f, 0x91, 0xaa, 0x4f, 0x3c, 0xdc, 0x23, 0x18, 0xe1, 0x4f, 0x85, 0xcd, 0xf4, 0x8c, 0x51, 0xf7, 0xab, 0x4f, 0xdc, 0x15, 0x5c, 0x9e, 0xc5, + /* (2^265)P */ 0x54, 0x57, 0x23, 0x17, 0xe7, 0x82, 0x2f, 0x04, 0x7d, 0xfe, 0xe7, 0x1f, 0xa2, 0x57, 0x79, 0xe9, 0x58, 0x9b, 0xbe, 0xc6, 0x16, 0x4a, 0x17, 0x50, 0x90, 0x4a, 0x34, 0x70, 0x87, 0x37, 0x01, 0x26, 0xd8, 0xa3, 0x5f, 0x07, 0x7c, 0xd0, 0x7d, 0x05, 0x8a, 0x93, 0x51, 0x2f, 0x99, 0xea, 0xcf, 0x00, 0xd8, 0xc7, 0xe6, 0x9b, 0x8c, 0x62, 
0x45, 0x87, + /* (2^266)P */ 0xc3, 0xfd, 0x29, 0x66, 0xe7, 0x30, 0x29, 0x77, 0xe0, 0x0d, 0x63, 0x5b, 0xe6, 0x90, 0x1a, 0x1e, 0x99, 0xc2, 0xa7, 0xab, 0xff, 0xa7, 0xbd, 0x79, 0x01, 0x97, 0xfd, 0x27, 0x1b, 0x43, 0x2b, 0xe6, 0xfe, 0x5e, 0xf1, 0xb9, 0x35, 0x38, 0x08, 0x25, 0x55, 0x90, 0x68, 0x2e, 0xc3, 0x67, 0x39, 0x9f, 0x2b, 0x2c, 0x70, 0x48, 0x8c, 0x47, 0xee, 0x56, + /* (2^267)P */ 0xf7, 0x32, 0x70, 0xb5, 0xe6, 0x42, 0xfd, 0x0a, 0x39, 0x9b, 0x07, 0xfe, 0x0e, 0xf4, 0x47, 0xba, 0x6a, 0x3f, 0xf5, 0x2c, 0x15, 0xf3, 0x60, 0x3f, 0xb1, 0x83, 0x7b, 0x2e, 0x34, 0x58, 0x1a, 0x6e, 0x4a, 0x49, 0x05, 0x45, 0xca, 0xdb, 0x00, 0x01, 0x0c, 0x42, 0x5e, 0x60, 0x40, 0x5f, 0xd9, 0xc7, 0x3a, 0x9e, 0x1c, 0x8d, 0xab, 0x11, 0x55, 0x65, + /* (2^268)P */ 0x87, 0x40, 0xb7, 0x0d, 0xaa, 0x34, 0x89, 0x90, 0x75, 0x6d, 0xa2, 0xfe, 0x3b, 0x6d, 0x5c, 0x39, 0x98, 0x10, 0x9e, 0x15, 0xc5, 0x35, 0xa2, 0x27, 0x23, 0x0a, 0x2d, 0x60, 0xe2, 0xa8, 0x7f, 0x3e, 0x77, 0x8f, 0xcc, 0x44, 0xcc, 0x30, 0x28, 0xe2, 0xf0, 0x04, 0x8c, 0xee, 0xe4, 0x5f, 0x68, 0x8c, 0xdf, 0x70, 0xbf, 0x31, 0xee, 0x2a, 0xfc, 0xce, + /* (2^269)P */ 0x92, 0xf2, 0xa0, 0xd9, 0x58, 0x3b, 0x7c, 0x1a, 0x99, 0x46, 0x59, 0x54, 0x60, 0x06, 0x8d, 0x5e, 0xf0, 0x22, 0xa1, 0xed, 0x92, 0x8a, 0x4d, 0x76, 0x95, 0x05, 0x0b, 0xff, 0xfc, 0x9a, 0xd1, 0xcc, 0x05, 0xb9, 0x5e, 0x99, 0xe8, 0x2a, 0x76, 0x7b, 0xfd, 0xa6, 0xe2, 0xd1, 0x1a, 0xd6, 0x76, 0x9f, 0x2f, 0x0e, 0xd1, 0xa8, 0x77, 0x5a, 0x40, 0x5a, + /* (2^270)P */ 0xff, 0xf9, 0x3f, 0xa9, 0xa6, 0x6c, 0x6d, 0x03, 0x8b, 0xa7, 0x10, 0x5d, 0x3f, 0xec, 0x3e, 0x1c, 0x0b, 0x6b, 0xa2, 0x6a, 0x22, 0xa9, 0x28, 0xd0, 0x66, 0xc9, 0xc2, 0x3d, 0x47, 0x20, 0x7d, 0xa6, 0x1d, 0xd8, 0x25, 0xb5, 0xf2, 0xf9, 0x70, 0x19, 0x6b, 0xf8, 0x43, 0x36, 0xc5, 0x1f, 0xe4, 0x5a, 0x4c, 0x13, 0xe4, 0x6d, 0x08, 0x0b, 0x1d, 0xb1, + /* (2^271)P */ 0x3f, 0x20, 0x9b, 0xfb, 0xec, 0x7d, 0x31, 0xc5, 0xfc, 0x88, 0x0b, 0x30, 0xed, 0x36, 0xc0, 0x63, 0xb1, 0x7d, 0x10, 0xda, 0xb6, 0x2e, 0xad, 0xf3, 0xec, 0x94, 0xe7, 0xec, 0xb5, 0x9c, 0xfe, 0xf5, 0x35, 0xf0, 
0xa2, 0x2d, 0x7f, 0xca, 0x6b, 0x67, 0x1a, 0xf6, 0xb3, 0xda, 0x09, 0x2a, 0xaa, 0xdf, 0xb1, 0xca, 0x9b, 0xfb, 0xeb, 0xb3, 0xcd, 0xc0, + /* (2^272)P */ 0xcd, 0x4d, 0x89, 0x00, 0xa4, 0x3b, 0x48, 0xf0, 0x76, 0x91, 0x35, 0xa5, 0xf8, 0xc9, 0xb6, 0x46, 0xbc, 0xf6, 0x9a, 0x45, 0x47, 0x17, 0x96, 0x80, 0x5b, 0x3a, 0x28, 0x33, 0xf9, 0x5a, 0xef, 0x43, 0x07, 0xfe, 0x3b, 0xf4, 0x8e, 0x19, 0xce, 0xd2, 0x94, 0x4b, 0x6d, 0x8e, 0x67, 0x20, 0xc7, 0x4f, 0x2f, 0x59, 0x8e, 0xe1, 0xa1, 0xa9, 0xf9, 0x0e, + /* (2^273)P */ 0xdc, 0x7b, 0xb5, 0x50, 0x2e, 0xe9, 0x7e, 0x8b, 0x78, 0xa1, 0x38, 0x96, 0x22, 0xc3, 0x61, 0x67, 0x6d, 0xc8, 0x58, 0xed, 0x41, 0x1d, 0x5d, 0x86, 0x98, 0x7f, 0x2f, 0x1b, 0x8d, 0x3e, 0xaa, 0xc1, 0xd2, 0x0a, 0xf3, 0xbf, 0x95, 0x04, 0xf3, 0x10, 0x3c, 0x2b, 0x7f, 0x90, 0x46, 0x04, 0xaa, 0x6a, 0xa9, 0x35, 0x76, 0xac, 0x49, 0xb5, 0x00, 0x45, + /* (2^274)P */ 0xb1, 0x93, 0x79, 0x84, 0x4a, 0x2a, 0x30, 0x78, 0x16, 0xaa, 0xc5, 0x74, 0x06, 0xce, 0xa5, 0xa7, 0x32, 0x86, 0xe0, 0xf9, 0x10, 0xd2, 0x58, 0x76, 0xfb, 0x66, 0x49, 0x76, 0x3a, 0x90, 0xba, 0xb5, 0xcc, 0x99, 0xcd, 0x09, 0xc1, 0x9a, 0x74, 0x23, 0xdf, 0x0c, 0xfe, 0x99, 0x52, 0x80, 0xa3, 0x7c, 0x1c, 0x71, 0x5f, 0x2c, 0x49, 0x57, 0xf4, 0xf9, + /* (2^275)P */ 0x6d, 0xbf, 0x52, 0xe6, 0x25, 0x98, 0xed, 0xcf, 0xe3, 0xbc, 0x08, 0xa2, 0x1a, 0x90, 0xae, 0xa0, 0xbf, 0x07, 0x15, 0xad, 0x0a, 0x9f, 0x3e, 0x47, 0x44, 0xc2, 0x10, 0x46, 0xa6, 0x7a, 0x9e, 0x2f, 0x57, 0xbc, 0xe2, 0xf0, 0x1d, 0xd6, 0x9a, 0x06, 0xed, 0xfc, 0x54, 0x95, 0x92, 0x15, 0xa2, 0xf7, 0x8d, 0x6b, 0xef, 0xb2, 0x05, 0xed, 0x5c, 0x63, + /* (2^276)P */ 0xbc, 0x0b, 0x27, 0x3a, 0x3a, 0xf8, 0xe1, 0x48, 0x02, 0x7e, 0x27, 0xe6, 0x81, 0x62, 0x07, 0x73, 0x74, 0xe5, 0x52, 0xd7, 0xf8, 0x26, 0xca, 0x93, 0x4d, 0x3e, 0x9b, 0x55, 0x09, 0x8e, 0xe3, 0xd7, 0xa6, 0xe3, 0xb6, 0x2a, 0xa9, 0xb3, 0xb0, 0xa0, 0x8c, 0x01, 0xbb, 0x07, 0x90, 0x78, 0x6d, 0x6d, 0xe9, 0xf0, 0x7a, 0x90, 0xbd, 0xdc, 0x0c, 0x36, + /* (2^277)P */ 0x7f, 0x20, 0x12, 0x0f, 0x40, 0x00, 0x53, 0xd8, 0x0c, 0x27, 0x47, 0x47, 0x22, 0x80, 
0xfb, 0x62, 0xe4, 0xa7, 0xf7, 0xbd, 0x42, 0xa5, 0xc3, 0x2b, 0xb2, 0x7f, 0x50, 0xcc, 0xe2, 0xfb, 0xd5, 0xc0, 0x63, 0xdd, 0x24, 0x5f, 0x7c, 0x08, 0x91, 0xbf, 0x6e, 0x47, 0x44, 0xd4, 0x6a, 0xc0, 0xc3, 0x09, 0x39, 0x27, 0xdd, 0xc7, 0xca, 0x06, 0x29, 0x55, + /* (2^278)P */ 0x76, 0x28, 0x58, 0xb0, 0xd2, 0xf3, 0x0f, 0x04, 0xe9, 0xc9, 0xab, 0x66, 0x5b, 0x75, 0x51, 0xdc, 0xe5, 0x8f, 0xe8, 0x1f, 0xdb, 0x03, 0x0f, 0xb0, 0x7d, 0xf9, 0x20, 0x64, 0x89, 0xe9, 0xdc, 0xe6, 0x24, 0xc3, 0xd5, 0xd2, 0x41, 0xa6, 0xe4, 0xe3, 0xc4, 0x79, 0x7c, 0x0f, 0xa1, 0x61, 0x2f, 0xda, 0xa4, 0xc9, 0xfd, 0xad, 0x5c, 0x65, 0x6a, 0xf3, + /* (2^279)P */ 0xd5, 0xab, 0x72, 0x7a, 0x3b, 0x59, 0xea, 0xcf, 0xd5, 0x17, 0xd2, 0xb2, 0x5f, 0x2d, 0xab, 0xad, 0x9e, 0x88, 0x64, 0x55, 0x96, 0x6e, 0xf3, 0x44, 0xa9, 0x11, 0xf5, 0xf8, 0x3a, 0xf1, 0xcd, 0x79, 0x4c, 0x99, 0x6d, 0x23, 0x6a, 0xa0, 0xc2, 0x1a, 0x19, 0x45, 0xb5, 0xd8, 0x95, 0x2f, 0x49, 0xe9, 0x46, 0x39, 0x26, 0x60, 0x04, 0x15, 0x8b, 0xcc, + /* (2^280)P */ 0x66, 0x0c, 0xf0, 0x54, 0x41, 0x02, 0x91, 0xab, 0xe5, 0x85, 0x8a, 0x44, 0xa6, 0x34, 0x96, 0x32, 0xc0, 0xdf, 0x6c, 0x41, 0x39, 0xd4, 0xc6, 0xe1, 0xe3, 0x81, 0xb0, 0x4c, 0x34, 0x4f, 0xe5, 0xf4, 0x35, 0x46, 0x1f, 0xeb, 0x75, 0xfd, 0x43, 0x37, 0x50, 0x99, 0xab, 0xad, 0xb7, 0x8c, 0xa1, 0x57, 0xcb, 0xe6, 0xce, 0x16, 0x2e, 0x85, 0xcc, 0xf9, + /* (2^281)P */ 0x63, 0xd1, 0x3f, 0x9e, 0xa2, 0x17, 0x2e, 0x1d, 0x3e, 0xce, 0x48, 0x2d, 0xbb, 0x8f, 0x69, 0xc9, 0xa6, 0x3d, 0x4e, 0xfe, 0x09, 0x56, 0xb3, 0x02, 0x5f, 0x99, 0x97, 0x0c, 0x54, 0xda, 0x32, 0x97, 0x9b, 0xf4, 0x95, 0xf1, 0xad, 0xe3, 0x2b, 0x04, 0xa7, 0x9b, 0x3f, 0xbb, 0xe7, 0x87, 0x2e, 0x1f, 0x8b, 0x4b, 0x7a, 0xa4, 0x43, 0x0c, 0x0f, 0x35, + /* (2^282)P */ 0x05, 0xdc, 0xe0, 0x2c, 0xa1, 0xc1, 0xd0, 0xf1, 0x1f, 0x4e, 0xc0, 0x6c, 0x35, 0x7b, 0xca, 0x8f, 0x8b, 0x02, 0xb1, 0xf7, 0xd6, 0x2e, 0xe7, 0x93, 0x80, 0x85, 0x18, 0x88, 0x19, 0xb9, 0xb4, 0x4a, 0xbc, 0xeb, 0x5a, 0x78, 0x38, 0xed, 0xc6, 0x27, 0x2a, 0x74, 0x76, 0xf0, 0x1b, 0x79, 0x92, 0x2f, 0xd2, 0x81, 0x98, 0xdf, 0xa9, 
0x50, 0x19, 0xeb, + /* (2^283)P */ 0xb5, 0xe7, 0xb4, 0x11, 0x3a, 0x81, 0xb6, 0xb4, 0xf8, 0xa2, 0xb3, 0x6c, 0xfc, 0x9d, 0xe0, 0xc0, 0xe0, 0x59, 0x7f, 0x05, 0x37, 0xef, 0x2c, 0xa9, 0x3a, 0x24, 0xac, 0x7b, 0x25, 0xa0, 0x55, 0xd2, 0x44, 0x82, 0x82, 0x6e, 0x64, 0xa3, 0x58, 0xc8, 0x67, 0xae, 0x26, 0xa7, 0x0f, 0x42, 0x63, 0xe1, 0x93, 0x01, 0x52, 0x19, 0xaf, 0x49, 0x3e, 0x33, + /* (2^284)P */ 0x05, 0x85, 0xe6, 0x66, 0xaf, 0x5f, 0xdf, 0xbf, 0x9d, 0x24, 0x62, 0x60, 0x90, 0xe2, 0x4c, 0x7d, 0x4e, 0xc3, 0x74, 0x5d, 0x4f, 0x53, 0xf3, 0x63, 0x13, 0xf4, 0x74, 0x28, 0x6b, 0x7d, 0x57, 0x0c, 0x9d, 0x84, 0xa7, 0x1a, 0xff, 0xa0, 0x79, 0xdf, 0xfc, 0x65, 0x98, 0x8e, 0x22, 0x0d, 0x62, 0x7e, 0xf2, 0x34, 0x60, 0x83, 0x05, 0x14, 0xb1, 0xc1, + /* (2^285)P */ 0x64, 0x22, 0xcc, 0xdf, 0x5c, 0xbc, 0x88, 0x68, 0x4c, 0xd9, 0xbc, 0x0e, 0xc9, 0x8b, 0xb4, 0x23, 0x52, 0xad, 0xb0, 0xb3, 0xf1, 0x17, 0xd8, 0x15, 0x04, 0x6b, 0x99, 0xf0, 0xc4, 0x7d, 0x48, 0x22, 0x4a, 0xf8, 0x6f, 0xaa, 0x88, 0x0d, 0xc5, 0x5e, 0xa9, 0x1c, 0x61, 0x3d, 0x95, 0xa9, 0x7b, 0x6a, 0x79, 0x33, 0x0a, 0x2b, 0x99, 0xe3, 0x4e, 0x48, + /* (2^286)P */ 0x6b, 0x9b, 0x6a, 0x2a, 0xf1, 0x60, 0x31, 0xb4, 0x73, 0xd1, 0x87, 0x45, 0x9c, 0x15, 0x58, 0x4b, 0x91, 0x6d, 0x94, 0x1c, 0x41, 0x11, 0x4a, 0x83, 0xec, 0xaf, 0x65, 0xbc, 0x34, 0xaa, 0x26, 0xe2, 0xaf, 0xed, 0x46, 0x05, 0x4e, 0xdb, 0xc6, 0x4e, 0x10, 0x28, 0x4e, 0x72, 0xe5, 0x31, 0xa3, 0x20, 0xd7, 0xb1, 0x96, 0x64, 0xf6, 0xce, 0x08, 0x08, + /* (2^287)P */ 0x16, 0xa9, 0x5c, 0x9f, 0x9a, 0xb4, 0xb8, 0xc8, 0x32, 0x78, 0xc0, 0x3a, 0xd9, 0x5f, 0x94, 0xac, 0x3a, 0x42, 0x1f, 0x43, 0xd6, 0x80, 0x47, 0x2c, 0xdc, 0x76, 0x27, 0xfa, 0x50, 0xe5, 0xa1, 0xe4, 0xc3, 0xcb, 0x61, 0x31, 0xe1, 0x2e, 0xde, 0x81, 0x3b, 0x77, 0x1c, 0x39, 0x3c, 0xdb, 0xda, 0x87, 0x4b, 0x84, 0x12, 0xeb, 0xdd, 0x54, 0xbf, 0xe7, + /* (2^288)P */ 0xbf, 0xcb, 0x73, 0x21, 0x3d, 0x7e, 0x13, 0x8c, 0xa6, 0x34, 0x21, 0x2b, 0xa5, 0xe4, 0x9f, 0x8e, 0x9c, 0x01, 0x9c, 0x43, 0xd9, 0xc7, 0xb9, 0xf1, 0xbe, 0x7f, 0x45, 0x51, 0x97, 0xa1, 0x8e, 0x01, 0xf8, 
0xbd, 0xd2, 0xbf, 0x81, 0x3a, 0x8b, 0xab, 0xe4, 0x89, 0xb7, 0xbd, 0xf2, 0xcd, 0xa9, 0x8a, 0x8a, 0xde, 0xfb, 0x8a, 0x55, 0x12, 0x7b, 0x17, + /* (2^289)P */ 0x1b, 0x95, 0x58, 0x4d, 0xe6, 0x51, 0x31, 0x52, 0x1c, 0xd8, 0x15, 0x84, 0xb1, 0x0d, 0x36, 0x25, 0x88, 0x91, 0x46, 0x71, 0x42, 0x56, 0xe2, 0x90, 0x08, 0x9e, 0x77, 0x1b, 0xee, 0x22, 0x3f, 0xec, 0xee, 0x8c, 0x7b, 0x2e, 0x79, 0xc4, 0x6c, 0x07, 0xa1, 0x7e, 0x52, 0xf5, 0x26, 0x5c, 0x84, 0x2a, 0x50, 0x6e, 0x82, 0xb3, 0x76, 0xda, 0x35, 0x16, + /* (2^290)P */ 0x0a, 0x6f, 0x99, 0x87, 0xc0, 0x7d, 0x8a, 0xb2, 0xca, 0xae, 0xe8, 0x65, 0x98, 0x0f, 0xb3, 0x44, 0xe1, 0xdc, 0x52, 0x79, 0x75, 0xec, 0x8f, 0x95, 0x87, 0x45, 0xd1, 0x32, 0x18, 0x55, 0x15, 0xce, 0x64, 0x9b, 0x08, 0x4f, 0x2c, 0xea, 0xba, 0x1c, 0x57, 0x06, 0x63, 0xc8, 0xb1, 0xfd, 0xc5, 0x67, 0xe7, 0x1f, 0x87, 0x9e, 0xde, 0x72, 0x7d, 0xec, + /* (2^291)P */ 0x36, 0x8b, 0x4d, 0x2c, 0xc2, 0x46, 0xe8, 0x96, 0xac, 0x0b, 0x8c, 0xc5, 0x09, 0x10, 0xfc, 0xf2, 0xda, 0xea, 0x22, 0xb2, 0xd3, 0x89, 0xeb, 0xb2, 0x85, 0x0f, 0xff, 0x59, 0x50, 0x2c, 0x99, 0x5a, 0x1f, 0xec, 0x2a, 0x6f, 0xec, 0xcf, 0xe9, 0xce, 0x12, 0x6b, 0x19, 0xd8, 0xde, 0x9b, 0xce, 0x0e, 0x6a, 0xaa, 0xe1, 0x32, 0xea, 0x4c, 0xfe, 0x92, + /* (2^292)P */ 0x5f, 0x17, 0x70, 0x53, 0x26, 0x03, 0x0b, 0xab, 0xd1, 0xc1, 0x42, 0x0b, 0xab, 0x2b, 0x3d, 0x31, 0xa4, 0xd5, 0x2b, 0x5e, 0x00, 0xd5, 0x9a, 0x22, 0x34, 0xe0, 0x53, 0x3f, 0x59, 0x7f, 0x2c, 0x6d, 0x72, 0x9a, 0xa4, 0xbe, 0x3d, 0x42, 0x05, 0x1b, 0xf2, 0x7f, 0x88, 0x56, 0xd1, 0x7c, 0x7d, 0x6b, 0x9f, 0x43, 0xfe, 0x65, 0x19, 0xae, 0x9c, 0x4c, + /* (2^293)P */ 0xf3, 0x7c, 0x20, 0xa9, 0xfc, 0xf2, 0xf2, 0x3b, 0x3c, 0x57, 0x41, 0x94, 0xe5, 0xcc, 0x6a, 0x37, 0x5d, 0x09, 0xf2, 0xab, 0xc2, 0xca, 0x60, 0x38, 0x6b, 0x7a, 0xe1, 0x78, 0x2b, 0xc1, 0x1d, 0xe8, 0xfd, 0xbc, 0x3d, 0x5c, 0xa2, 0xdb, 0x49, 0x20, 0x79, 0xe6, 0x1b, 0x9b, 0x65, 0xd9, 0x6d, 0xec, 0x57, 0x1d, 0xd2, 0xe9, 0x90, 0xeb, 0x43, 0x7b, + /* (2^294)P */ 0x2a, 0x8b, 0x2e, 0x19, 0x18, 0x10, 0xb8, 0x83, 0xe7, 0x7d, 0x2d, 0x9a, 0x3a, 
0xe5, 0xd1, 0xe4, 0x7c, 0x38, 0xe5, 0x59, 0x2a, 0x6e, 0xd9, 0x01, 0x29, 0x3d, 0x23, 0xf7, 0x52, 0xba, 0x61, 0x04, 0x9a, 0xde, 0xc4, 0x31, 0x50, 0xeb, 0x1b, 0xaa, 0xde, 0x39, 0x58, 0xd8, 0x1b, 0x1e, 0xfc, 0x57, 0x9a, 0x28, 0x43, 0x9e, 0x97, 0x5e, 0xaa, 0xa3, + /* (2^295)P */ 0x97, 0x0a, 0x74, 0xc4, 0x39, 0x99, 0x6b, 0x40, 0xc7, 0x3e, 0x8c, 0xa7, 0xb1, 0x4e, 0x9a, 0x59, 0x6e, 0x1c, 0xfe, 0xfc, 0x2a, 0x5e, 0x73, 0x2b, 0x8c, 0xa9, 0x71, 0xf5, 0xda, 0x6b, 0x15, 0xab, 0xf7, 0xbe, 0x2a, 0x44, 0x5f, 0xba, 0xae, 0x67, 0x93, 0xc5, 0x86, 0xc1, 0xb8, 0xdf, 0xdc, 0xcb, 0xd7, 0xff, 0xb1, 0x71, 0x7c, 0x6f, 0x88, 0xf8, + /* (2^296)P */ 0x3f, 0x89, 0xb1, 0xbf, 0x24, 0x16, 0xac, 0x56, 0xfe, 0xdf, 0x94, 0x71, 0xbf, 0xd6, 0x57, 0x0c, 0xb4, 0x77, 0x37, 0xaa, 0x2a, 0x70, 0x76, 0x49, 0xaf, 0x0c, 0x97, 0x8e, 0x78, 0x2a, 0x67, 0xc9, 0x3b, 0x3d, 0x5b, 0x01, 0x2f, 0xda, 0xd5, 0xa8, 0xde, 0x02, 0xa9, 0xac, 0x76, 0x00, 0x0b, 0x46, 0xc6, 0x2d, 0xdc, 0x08, 0xf4, 0x10, 0x2c, 0xbe, + /* (2^297)P */ 0xcb, 0x07, 0xf9, 0x91, 0xc6, 0xd5, 0x3e, 0x54, 0x63, 0xae, 0xfc, 0x10, 0xbe, 0x3a, 0x20, 0x73, 0x4e, 0x65, 0x0e, 0x2d, 0x86, 0x77, 0x83, 0x9d, 0xe2, 0x0a, 0xe9, 0xac, 0x22, 0x52, 0x76, 0xd4, 0x6e, 0xfa, 0xe0, 0x09, 0xef, 0x78, 0x82, 0x9f, 0x26, 0xf9, 0x06, 0xb5, 0xe7, 0x05, 0x0e, 0xf2, 0x46, 0x72, 0x93, 0xd3, 0x24, 0xbd, 0x87, 0x60, + /* (2^298)P */ 0x14, 0x55, 0x84, 0x7b, 0x6c, 0x60, 0x80, 0x73, 0x8c, 0xbe, 0x2d, 0xd6, 0x69, 0xd6, 0x17, 0x26, 0x44, 0x9f, 0x88, 0xa2, 0x39, 0x7c, 0x89, 0xbc, 0x6d, 0x9e, 0x46, 0xb6, 0x68, 0x66, 0xea, 0xdc, 0x31, 0xd6, 0x21, 0x51, 0x9f, 0x28, 0x28, 0xaf, 0x9e, 0x47, 0x2c, 0x4c, 0x8f, 0xf3, 0xaf, 0x1f, 0xe4, 0xab, 0xac, 0xe9, 0x0c, 0x91, 0x3a, 0x61, + /* (2^299)P */ 0xb0, 0x37, 0x55, 0x4b, 0xe9, 0xc3, 0xb1, 0xce, 0x42, 0xe6, 0xc5, 0x11, 0x7f, 0x2c, 0x11, 0xfc, 0x4e, 0x71, 0x17, 0x00, 0x74, 0x7f, 0xbf, 0x07, 0x4d, 0xfd, 0x40, 0xb2, 0x87, 0xb0, 0xef, 0x1f, 0x35, 0x2c, 0x2d, 0xd7, 0xe1, 0xe4, 0xad, 0x0e, 0x7f, 0x63, 0x66, 0x62, 0x23, 0x41, 0xf6, 0xc1, 0x14, 0xa6, 0xd7, 0xa9, 
0x11, 0x56, 0x9d, 0x1b, + /* (2^300)P */ 0x02, 0x82, 0x42, 0x18, 0x4f, 0x1b, 0xc9, 0x5d, 0x78, 0x5f, 0xee, 0xed, 0x01, 0x49, 0x8f, 0xf2, 0xa0, 0xe2, 0x6e, 0xbb, 0x6b, 0x04, 0x8d, 0xb2, 0x41, 0xae, 0xc8, 0x1b, 0x59, 0x34, 0xb8, 0x2a, 0xdb, 0x1f, 0xd2, 0x52, 0xdf, 0x3f, 0x35, 0x00, 0x8b, 0x61, 0xbc, 0x97, 0xa0, 0xc4, 0x77, 0xd1, 0xe4, 0x2c, 0x59, 0x68, 0xff, 0x30, 0xf2, 0xe2, + /* (2^301)P */ 0x79, 0x08, 0xb1, 0xdb, 0x55, 0xae, 0xd0, 0xed, 0xda, 0xa0, 0xec, 0x6c, 0xae, 0x68, 0xf2, 0x0b, 0x61, 0xb3, 0xf5, 0x21, 0x69, 0x87, 0x0b, 0x03, 0xea, 0x8a, 0x15, 0xd9, 0x7e, 0xca, 0xf7, 0xcd, 0xf3, 0x33, 0xb3, 0x4c, 0x5b, 0x23, 0x4e, 0x6f, 0x90, 0xad, 0x91, 0x4b, 0x4f, 0x46, 0x37, 0xe5, 0xe8, 0xb7, 0xeb, 0xd5, 0xca, 0x34, 0x4e, 0x23, + /* (2^302)P */ 0x09, 0x02, 0xdd, 0xfd, 0x70, 0xac, 0x56, 0x80, 0x36, 0x5e, 0x49, 0xd0, 0x3f, 0xc2, 0xe0, 0xba, 0x46, 0x7f, 0x5c, 0xf7, 0xc5, 0xbd, 0xd5, 0x55, 0x7d, 0x3f, 0xd5, 0x7d, 0x06, 0xdf, 0x27, 0x20, 0x4f, 0xe9, 0x30, 0xec, 0x1b, 0xa0, 0x0c, 0xd4, 0x2c, 0xe1, 0x2b, 0x65, 0x73, 0xea, 0x75, 0x35, 0xe8, 0xe6, 0x56, 0xd6, 0x07, 0x15, 0x99, 0xdf, + /* (2^303)P */ 0x4e, 0x10, 0xb7, 0xd0, 0x63, 0x8c, 0xcf, 0x16, 0x00, 0x7c, 0x58, 0xdf, 0x86, 0xdc, 0x4e, 0xca, 0x9c, 0x40, 0x5a, 0x42, 0xfd, 0xec, 0x98, 0xa4, 0x42, 0x53, 0xae, 0x16, 0x9d, 0xfd, 0x75, 0x5a, 0x12, 0x56, 0x1e, 0xc6, 0x57, 0xcc, 0x79, 0x27, 0x96, 0x00, 0xcf, 0x80, 0x4f, 0x8a, 0x36, 0x5c, 0xbb, 0xe9, 0x12, 0xdb, 0xb6, 0x2b, 0xad, 0x96, + /* (2^304)P */ 0x92, 0x32, 0x1f, 0xfd, 0xc6, 0x02, 0x94, 0x08, 0x1b, 0x60, 0x6a, 0x9f, 0x8b, 0xd6, 0xc8, 0xad, 0xd5, 0x1b, 0x27, 0x4e, 0xa4, 0x4d, 0x4a, 0x00, 0x10, 0x5f, 0x86, 0x11, 0xf5, 0xe3, 0x14, 0x32, 0x43, 0xee, 0xb9, 0xc7, 0xab, 0xf4, 0x6f, 0xe5, 0x66, 0x0c, 0x06, 0x0d, 0x96, 0x79, 0x28, 0xaf, 0x45, 0x2b, 0x56, 0xbe, 0xe4, 0x4a, 0x52, 0xd6, + /* (2^305)P */ 0x15, 0x16, 0x69, 0xef, 0x60, 0xca, 0x82, 0x25, 0x0f, 0xc6, 0x30, 0xa0, 0x0a, 0xd1, 0x83, 0x29, 0xcd, 0xb6, 0x89, 0x6c, 0xf5, 0xb2, 0x08, 0x38, 0xe6, 0xca, 0x6b, 0x19, 0x93, 0xc6, 0x5f, 0x75, 
0x8e, 0x60, 0x34, 0x23, 0xc4, 0x13, 0x17, 0x69, 0x55, 0xcc, 0x72, 0x9c, 0x2b, 0x6c, 0x80, 0xf4, 0x4b, 0x8b, 0xb6, 0x97, 0x65, 0x07, 0xb6, 0xfb, + /* (2^306)P */ 0x01, 0x99, 0x74, 0x28, 0xa6, 0x67, 0xa3, 0xe5, 0x25, 0xfb, 0xdf, 0x82, 0x93, 0xe7, 0x35, 0x74, 0xce, 0xe3, 0x15, 0x1c, 0x1d, 0x79, 0x52, 0x84, 0x08, 0x04, 0x2f, 0x5c, 0xb8, 0xcd, 0x7f, 0x89, 0xb0, 0x39, 0x93, 0x63, 0xc9, 0x5d, 0x06, 0x01, 0x59, 0xf7, 0x7e, 0xf1, 0x4c, 0x3d, 0x12, 0x8d, 0x69, 0x1d, 0xb7, 0x21, 0x5e, 0x88, 0x82, 0xa2, + /* (2^307)P */ 0x8e, 0x69, 0xaf, 0x9a, 0x41, 0x0d, 0x9d, 0xcf, 0x8e, 0x8d, 0x5c, 0x51, 0x6e, 0xde, 0x0e, 0x48, 0x23, 0x89, 0xe5, 0x37, 0x80, 0xd6, 0x9d, 0x72, 0x32, 0x26, 0x38, 0x2d, 0x63, 0xa0, 0xfa, 0xd3, 0x40, 0xc0, 0x8c, 0x68, 0x6f, 0x2b, 0x1e, 0x9a, 0x39, 0x51, 0x78, 0x74, 0x9a, 0x7b, 0x4a, 0x8f, 0x0c, 0xa0, 0x88, 0x60, 0xa5, 0x21, 0xcd, 0xc7, + /* (2^308)P */ 0x3a, 0x7f, 0x73, 0x14, 0xbf, 0x89, 0x6a, 0x4c, 0x09, 0x5d, 0xf2, 0x93, 0x20, 0x2d, 0xc4, 0x29, 0x86, 0x06, 0x95, 0xab, 0x22, 0x76, 0x4c, 0x54, 0xe1, 0x7e, 0x80, 0x6d, 0xab, 0x29, 0x61, 0x87, 0x77, 0xf6, 0xc0, 0x3e, 0xda, 0xab, 0x65, 0x7e, 0x39, 0x12, 0xa1, 0x6b, 0x42, 0xf7, 0xc5, 0x97, 0x77, 0xec, 0x6f, 0x22, 0xbe, 0x44, 0xc7, 0x03, + /* (2^309)P */ 0xa5, 0x23, 0x90, 0x41, 0xa3, 0xc5, 0x3e, 0xe0, 0xa5, 0x32, 0x49, 0x1f, 0x39, 0x78, 0xb1, 0xd8, 0x24, 0xea, 0xd4, 0x87, 0x53, 0x42, 0x51, 0xf4, 0xd9, 0x46, 0x25, 0x2f, 0x62, 0xa9, 0x90, 0x9a, 0x4a, 0x25, 0x8a, 0xd2, 0x10, 0xe7, 0x3c, 0xbc, 0x58, 0x8d, 0x16, 0x14, 0x96, 0xa4, 0x6f, 0xf8, 0x12, 0x69, 0x91, 0x73, 0xe2, 0xfa, 0xf4, 0x57, + /* (2^310)P */ 0x51, 0x45, 0x3f, 0x96, 0xdc, 0x97, 0x38, 0xa6, 0x01, 0x63, 0x09, 0xea, 0xc2, 0x13, 0x30, 0xb0, 0x00, 0xb8, 0x0a, 0xce, 0xd1, 0x8f, 0x3e, 0x69, 0x62, 0x46, 0x33, 0x9c, 0xbf, 0x4b, 0xcb, 0x0c, 0x90, 0x1c, 0x45, 0xcf, 0x37, 0x5b, 0xf7, 0x4b, 0x5e, 0x95, 0xc3, 0x28, 0x9f, 0x08, 0x83, 0x53, 0x74, 0xab, 0x0c, 0xb4, 0xc0, 0xa1, 0xbc, 0x89, + /* (2^311)P */ 0x06, 0xb1, 0x51, 0x15, 0x65, 0x60, 0x21, 0x17, 0x7a, 0x20, 0x65, 0xee, 
0x12, 0x35, 0x4d, 0x46, 0xf4, 0xf8, 0xd0, 0xb1, 0xca, 0x09, 0x30, 0x08, 0x89, 0x23, 0x3b, 0xe7, 0xab, 0x8b, 0x77, 0xa6, 0xad, 0x25, 0xdd, 0xea, 0x3c, 0x7d, 0xa5, 0x24, 0xb3, 0xe8, 0xfa, 0xfb, 0xc9, 0xf2, 0x71, 0xe9, 0xfa, 0xf2, 0xdc, 0x54, 0xdd, 0x55, 0x2e, 0x2f, + /* (2^312)P */ 0x7f, 0x96, 0x96, 0xfb, 0x52, 0x86, 0xcf, 0xea, 0x62, 0x18, 0xf1, 0x53, 0x1f, 0x61, 0x2a, 0x9f, 0x8c, 0x51, 0xca, 0x2c, 0xde, 0x6d, 0xce, 0xab, 0x58, 0x32, 0x0b, 0x33, 0x9b, 0x99, 0xb4, 0x5c, 0x88, 0x2a, 0x76, 0xcc, 0x3e, 0x54, 0x1e, 0x9d, 0xa2, 0x89, 0xe4, 0x19, 0xba, 0x80, 0xc8, 0x39, 0x32, 0x7f, 0x0f, 0xc7, 0x84, 0xbb, 0x43, 0x56, + /* (2^313)P */ 0x9b, 0x07, 0xb4, 0x42, 0xa9, 0xa0, 0x78, 0x4f, 0x28, 0x70, 0x2b, 0x7e, 0x61, 0xe0, 0xdd, 0x02, 0x98, 0xfc, 0xed, 0x31, 0x80, 0xf1, 0x15, 0x52, 0x89, 0x23, 0xcd, 0x5d, 0x2b, 0xc5, 0x19, 0x32, 0xfb, 0x70, 0x50, 0x7a, 0x97, 0x6b, 0x42, 0xdb, 0xca, 0xdb, 0xc4, 0x59, 0x99, 0xe0, 0x12, 0x1f, 0x17, 0xba, 0x8b, 0xf0, 0xc4, 0x38, 0x5d, 0x27, + /* (2^314)P */ 0x29, 0x1d, 0xdc, 0x2b, 0xf6, 0x5b, 0x04, 0x61, 0x36, 0x76, 0xa0, 0x56, 0x36, 0x6e, 0xd7, 0x24, 0x4d, 0xe7, 0xef, 0x44, 0xd2, 0xd5, 0x07, 0xcd, 0xc4, 0x9d, 0x80, 0x48, 0xc3, 0x38, 0xcf, 0xd8, 0xa3, 0xdd, 0xb2, 0x5e, 0xb5, 0x70, 0x15, 0xbb, 0x36, 0x85, 0x8a, 0xd7, 0xfb, 0x56, 0x94, 0x73, 0x9c, 0x81, 0xbe, 0xb1, 0x44, 0x28, 0xf1, 0x37, + /* (2^315)P */ 0xbf, 0xcf, 0x5c, 0xd2, 0xe2, 0xea, 0xc2, 0xcd, 0x70, 0x7a, 0x9d, 0xcb, 0x81, 0xc1, 0xe9, 0xf1, 0x56, 0x71, 0x52, 0xf7, 0x1b, 0x87, 0xc6, 0xd8, 0xcc, 0xb2, 0x69, 0xf3, 0xb0, 0xbd, 0xba, 0x83, 0x12, 0x26, 0xc4, 0xce, 0x72, 0xde, 0x3b, 0x21, 0x28, 0x9e, 0x5a, 0x94, 0xf5, 0x04, 0xa3, 0xc8, 0x0f, 0x5e, 0xbc, 0x71, 0xf9, 0x0d, 0xce, 0xf5, + /* (2^316)P */ 0x93, 0x97, 0x00, 0x85, 0xf4, 0xb4, 0x40, 0xec, 0xd9, 0x2b, 0x6c, 0xd6, 0x63, 0x9e, 0x93, 0x0a, 0x5a, 0xf4, 0xa7, 0x9a, 0xe3, 0x3c, 0xf0, 0x55, 0xd1, 0x96, 0x6c, 0xf5, 0x2a, 0xce, 0xd7, 0x95, 0x72, 0xbf, 0xc5, 0x0c, 0xce, 0x79, 0xa2, 0x0a, 0x78, 0xe0, 0x72, 0xd0, 0x66, 0x28, 0x05, 0x75, 0xd3, 0x23, 0x09, 
0x91, 0xed, 0x7e, 0xc4, 0xbc, + /* (2^317)P */ 0x77, 0xc2, 0x9a, 0xf7, 0xa6, 0xe6, 0x18, 0xb4, 0xe7, 0xf6, 0xda, 0xec, 0x44, 0x6d, 0xfb, 0x08, 0xee, 0x65, 0xa8, 0x92, 0x85, 0x1f, 0xba, 0x38, 0x93, 0x20, 0x5c, 0x4d, 0xd2, 0x18, 0x0f, 0x24, 0xbe, 0x1a, 0x96, 0x44, 0x7d, 0xeb, 0xb3, 0xda, 0x95, 0xf4, 0xaf, 0x6c, 0x06, 0x0f, 0x47, 0x37, 0xc8, 0x77, 0x63, 0xe1, 0x29, 0xef, 0xff, 0xa5, + /* (2^318)P */ 0x16, 0x12, 0xd9, 0x47, 0x90, 0x22, 0x9b, 0x05, 0xf2, 0xa5, 0x9a, 0xae, 0x83, 0x98, 0xb5, 0xac, 0xab, 0x29, 0xaa, 0xdc, 0x5f, 0xde, 0xcd, 0xf7, 0x42, 0xad, 0x3b, 0x96, 0xd6, 0x3e, 0x6e, 0x52, 0x47, 0xb1, 0xab, 0x51, 0xde, 0x49, 0x7c, 0x87, 0x8d, 0x86, 0xe2, 0x70, 0x13, 0x21, 0x51, 0x1c, 0x0c, 0x25, 0xc1, 0xb0, 0xe6, 0x19, 0xcf, 0x12, + /* (2^319)P */ 0xf0, 0xbc, 0x97, 0x8f, 0x4b, 0x2f, 0xd1, 0x1f, 0x8c, 0x57, 0xed, 0x3c, 0xf4, 0x26, 0x19, 0xbb, 0x60, 0xca, 0x24, 0xc5, 0xd9, 0x97, 0xe2, 0x5f, 0x76, 0x49, 0x39, 0x7e, 0x2d, 0x12, 0x21, 0x98, 0xda, 0xe6, 0xdb, 0xd2, 0xd8, 0x9f, 0x18, 0xd8, 0x83, 0x6c, 0xba, 0x89, 0x8d, 0x29, 0xfa, 0x46, 0x33, 0x8c, 0x28, 0xdf, 0x6a, 0xb3, 0x69, 0x28, + /* (2^320)P */ 0x86, 0x17, 0xbc, 0xd6, 0x7c, 0xba, 0x1e, 0x83, 0xbb, 0x84, 0xb5, 0x8c, 0xad, 0xdf, 0xa1, 0x24, 0x81, 0x70, 0x40, 0x0f, 0xad, 0xad, 0x3b, 0x23, 0xd0, 0x93, 0xa0, 0x49, 0x5c, 0x4b, 0x51, 0xbe, 0x20, 0x49, 0x4e, 0xda, 0x2d, 0xd3, 0xad, 0x1b, 0x74, 0x08, 0x41, 0xf0, 0xef, 0x19, 0xe9, 0x45, 0x5d, 0x02, 0xae, 0x26, 0x25, 0xd9, 0xd1, 0xc2, + /* (2^321)P */ 0x48, 0x81, 0x3e, 0xb2, 0x83, 0xf8, 0x4d, 0xb3, 0xd0, 0x4c, 0x75, 0xb3, 0xa0, 0x52, 0x26, 0xf2, 0xaf, 0x5d, 0x36, 0x70, 0x72, 0xd6, 0xb7, 0x88, 0x08, 0x69, 0xbd, 0x15, 0x25, 0xb1, 0x45, 0x1b, 0xb7, 0x0b, 0x5f, 0x71, 0x5d, 0x83, 0x49, 0xb9, 0x84, 0x3b, 0x7c, 0xc1, 0x50, 0x93, 0x05, 0x53, 0xe0, 0x61, 0xea, 0xc1, 0xef, 0xdb, 0x82, 0x97, + /* (2^322)P */ 0x00, 0xd5, 0xc3, 0x3a, 0x4d, 0x8a, 0x23, 0x7a, 0xef, 0xff, 0x37, 0xef, 0xf3, 0xbc, 0xa9, 0xb6, 0xae, 0xd7, 0x3a, 0x7b, 0xfd, 0x3e, 0x8e, 0x9b, 0xab, 0x44, 0x54, 0x60, 0x28, 0x6c, 0xbf, 
0x15, 0x24, 0x4a, 0x56, 0x60, 0x7f, 0xa9, 0x7a, 0x28, 0x59, 0x2c, 0x8a, 0xd1, 0x7d, 0x6b, 0x00, 0xfd, 0xa5, 0xad, 0xbc, 0x19, 0x3f, 0xcb, 0x73, 0xe0, + /* (2^323)P */ 0xcf, 0x9e, 0x66, 0x06, 0x4d, 0x2b, 0xf5, 0x9c, 0xc2, 0x9d, 0x9e, 0xed, 0x5a, 0x5c, 0x2d, 0x00, 0xbf, 0x29, 0x90, 0x88, 0xe4, 0x5d, 0xfd, 0xe2, 0xf0, 0x38, 0xec, 0x4d, 0x26, 0xea, 0x54, 0xf0, 0x3c, 0x84, 0x10, 0x6a, 0xf9, 0x66, 0x9c, 0xe7, 0x21, 0xfd, 0x0f, 0xc7, 0x13, 0x50, 0x81, 0xb6, 0x50, 0xf9, 0x04, 0x7f, 0xa4, 0x37, 0x85, 0x14, + /* (2^324)P */ 0xdb, 0x87, 0x49, 0xc7, 0xa8, 0x39, 0x0c, 0x32, 0x98, 0x0c, 0xb9, 0x1a, 0x1b, 0x4d, 0xe0, 0x8a, 0x9a, 0x8e, 0x8f, 0xab, 0x5a, 0x17, 0x3d, 0x04, 0x21, 0xce, 0x3e, 0x2c, 0xf9, 0xa3, 0x97, 0xe4, 0x77, 0x95, 0x0e, 0xb6, 0xa5, 0x15, 0xad, 0x3a, 0x1e, 0x46, 0x53, 0x17, 0x09, 0x83, 0x71, 0x4e, 0x86, 0x38, 0xd5, 0x23, 0x44, 0x16, 0x8d, 0xc8, + /* (2^325)P */ 0x05, 0x5e, 0x99, 0x08, 0xbb, 0xc3, 0xc0, 0xb7, 0x6c, 0x12, 0xf2, 0xf3, 0xf4, 0x7c, 0x6a, 0x4d, 0x9e, 0xeb, 0x3d, 0xb9, 0x63, 0x94, 0xce, 0x81, 0xd8, 0x11, 0xcb, 0x55, 0x69, 0x4a, 0x20, 0x0b, 0x4c, 0x2e, 0x14, 0xb8, 0xd4, 0x6a, 0x7c, 0xf0, 0xed, 0xfc, 0x8f, 0xef, 0xa0, 0xeb, 0x6c, 0x01, 0xe2, 0xdc, 0x10, 0x22, 0xa2, 0x01, 0x85, 0x64, + /* (2^326)P */ 0x58, 0xe1, 0x9c, 0x27, 0x55, 0xc6, 0x25, 0xa6, 0x7d, 0x67, 0x88, 0x65, 0x99, 0x6c, 0xcb, 0xdb, 0x27, 0x4f, 0x44, 0x29, 0xf5, 0x4a, 0x23, 0x10, 0xbc, 0x03, 0x3f, 0x36, 0x1e, 0xef, 0xb0, 0xba, 0x75, 0xe8, 0x74, 0x5f, 0x69, 0x3e, 0x26, 0x40, 0xb4, 0x2f, 0xdc, 0x43, 0xbf, 0xa1, 0x8b, 0xbd, 0xca, 0x6e, 0xc1, 0x6e, 0x21, 0x79, 0xa0, 0xd0, + /* (2^327)P */ 0x78, 0x93, 0x4a, 0x2d, 0x22, 0x6e, 0x6e, 0x7d, 0x74, 0xd2, 0x66, 0x58, 0xce, 0x7b, 0x1d, 0x97, 0xb1, 0xf2, 0xda, 0x1c, 0x79, 0xfb, 0xba, 0xd1, 0xc0, 0xc5, 0x6e, 0xc9, 0x11, 0x89, 0xd2, 0x41, 0x8d, 0x70, 0xb9, 0xcc, 0xea, 0x6a, 0xb3, 0x45, 0xb6, 0x05, 0x2e, 0xf2, 0x17, 0xf1, 0x27, 0xb8, 0xed, 0x06, 0x1f, 0xdb, 0x9d, 0x1f, 0x69, 0x28, + /* (2^328)P */ 0x93, 0x12, 0xa8, 0x11, 0xe1, 0x92, 0x30, 0x8d, 0xac, 0xe1, 0x1c, 
0x60, 0x7c, 0xed, 0x2d, 0x2e, 0xd3, 0x03, 0x5c, 0x9c, 0xc5, 0xbd, 0x64, 0x4a, 0x8c, 0xba, 0x76, 0xfe, 0xc6, 0xc1, 0xea, 0xc2, 0x4f, 0xbe, 0x70, 0x3d, 0x64, 0xcf, 0x8e, 0x18, 0xcb, 0xcd, 0x57, 0xa7, 0xf7, 0x36, 0xa9, 0x6b, 0x3e, 0xb8, 0x69, 0xee, 0x47, 0xa2, 0x7e, 0xb2, + /* (2^329)P */ 0x96, 0xaf, 0x3a, 0xf5, 0xed, 0xcd, 0xaf, 0xf7, 0x82, 0xaf, 0x59, 0x62, 0x0b, 0x36, 0x85, 0xf9, 0xaf, 0xd6, 0x38, 0xff, 0x87, 0x2e, 0x1d, 0x6c, 0x8b, 0xaf, 0x3b, 0xdf, 0x28, 0xa2, 0xd6, 0x4d, 0x80, 0x92, 0xc3, 0x0f, 0x34, 0xa8, 0xae, 0x69, 0x5d, 0x7b, 0x9d, 0xbc, 0xf5, 0xfd, 0x1d, 0xb1, 0x96, 0x55, 0x86, 0xe1, 0x5c, 0xb6, 0xac, 0xb9, + /* (2^330)P */ 0x50, 0x9e, 0x37, 0x28, 0x7d, 0xa8, 0x33, 0x63, 0xda, 0x3f, 0x20, 0x98, 0x0e, 0x09, 0xa8, 0x77, 0x3b, 0x7a, 0xfc, 0x16, 0x85, 0x44, 0x64, 0x77, 0x65, 0x68, 0x92, 0x41, 0xc6, 0x1f, 0xdf, 0x27, 0xf9, 0xec, 0xa0, 0x61, 0x22, 0xea, 0x19, 0xe7, 0x75, 0x8b, 0x4e, 0xe5, 0x0f, 0xb7, 0xf7, 0xd2, 0x53, 0xf4, 0xdd, 0x4a, 0xaa, 0x78, 0x40, 0xb7, + /* (2^331)P */ 0xd4, 0x89, 0xe3, 0x79, 0xba, 0xb6, 0xc3, 0xda, 0xe6, 0x78, 0x65, 0x7d, 0x6e, 0x22, 0x62, 0xb1, 0x3d, 0xea, 0x90, 0x84, 0x30, 0x5e, 0xd4, 0x39, 0x84, 0x78, 0xd9, 0x75, 0xd6, 0xce, 0x2a, 0x11, 0x29, 0x69, 0xa4, 0x5e, 0xaa, 0x2a, 0x98, 0x5a, 0xe5, 0x91, 0x8f, 0xb2, 0xfb, 0xda, 0x97, 0xe8, 0x83, 0x6f, 0x04, 0xb9, 0x5d, 0xaf, 0xe1, 0x9b, + /* (2^332)P */ 0x8b, 0xe4, 0xe1, 0x48, 0x9c, 0xc4, 0x83, 0x89, 0xdf, 0x65, 0xd3, 0x35, 0x55, 0x13, 0xf4, 0x1f, 0x36, 0x92, 0x33, 0x38, 0xcb, 0xed, 0x15, 0xe6, 0x60, 0x2d, 0x25, 0xf5, 0x36, 0x60, 0x3a, 0x37, 0x9b, 0x71, 0x9d, 0x42, 0xb0, 0x14, 0xc8, 0xba, 0x62, 0xa3, 0x49, 0xb0, 0x88, 0xc1, 0x72, 0x73, 0xdd, 0x62, 0x40, 0xa9, 0x62, 0x88, 0x99, 0xca, + /* (2^333)P */ 0x47, 0x7b, 0xea, 0xda, 0x46, 0x2f, 0x45, 0xc6, 0xe3, 0xb4, 0x4d, 0x8d, 0xac, 0x0b, 0x54, 0x22, 0x06, 0x31, 0x16, 0x66, 0x3e, 0xe4, 0x38, 0x12, 0xcd, 0xf3, 0xe7, 0x99, 0x37, 0xd9, 0x62, 0x24, 0x4b, 0x05, 0xf2, 0x58, 0xe6, 0x29, 0x4b, 0x0d, 0xf6, 0xc1, 0xba, 0xa0, 0x1e, 0x0f, 0xcb, 0x1f, 0xc6, 0x2b, 
0x19, 0xfc, 0x82, 0x01, 0xd0, 0x86, + /* (2^334)P */ 0xa2, 0xae, 0x77, 0x20, 0xfb, 0xa8, 0x18, 0xb4, 0x61, 0xef, 0xe8, 0x52, 0x79, 0xbb, 0x86, 0x90, 0x5d, 0x2e, 0x76, 0xed, 0x66, 0x60, 0x5d, 0x00, 0xb5, 0xa4, 0x00, 0x40, 0x89, 0xec, 0xd1, 0xd2, 0x0d, 0x26, 0xb9, 0x30, 0xb2, 0xd2, 0xb8, 0xe8, 0x0e, 0x56, 0xf9, 0x67, 0x94, 0x2e, 0x62, 0xe1, 0x79, 0x48, 0x2b, 0xa9, 0xfa, 0xea, 0xdb, 0x28, + /* (2^335)P */ 0x35, 0xf1, 0xb0, 0x43, 0xbd, 0x27, 0xef, 0x18, 0x44, 0xa2, 0x04, 0xb4, 0x69, 0xa1, 0x97, 0x1f, 0x8c, 0x04, 0x82, 0x9b, 0x00, 0x6d, 0xf8, 0xbf, 0x7d, 0xc1, 0x5b, 0xab, 0xe8, 0xb2, 0x34, 0xbd, 0xaf, 0x7f, 0xb2, 0x0d, 0xf3, 0xed, 0xfc, 0x5b, 0x50, 0xee, 0xe7, 0x4a, 0x20, 0xd9, 0xf5, 0xc6, 0x9a, 0x97, 0x6d, 0x07, 0x2f, 0xb9, 0x31, 0x02, + /* (2^336)P */ 0xf9, 0x54, 0x4a, 0xc5, 0x61, 0x7e, 0x1d, 0xa6, 0x0e, 0x1a, 0xa8, 0xd3, 0x8c, 0x36, 0x7d, 0xf1, 0x06, 0xb1, 0xac, 0x93, 0xcd, 0xe9, 0x8f, 0x61, 0x6c, 0x5d, 0x03, 0x23, 0xdf, 0x85, 0x53, 0x39, 0x63, 0x5e, 0xeb, 0xf3, 0xd3, 0xd3, 0x75, 0x97, 0x9b, 0x62, 0x9b, 0x01, 0xb3, 0x19, 0xd8, 0x2b, 0x36, 0xf2, 0x2c, 0x2c, 0x6f, 0x36, 0xc6, 0x3c, + /* (2^337)P */ 0x05, 0x74, 0x43, 0x10, 0xb6, 0xb0, 0xf8, 0xbf, 0x02, 0x46, 0x9a, 0xee, 0xc1, 0xaf, 0xc1, 0xe5, 0x5a, 0x2e, 0xbb, 0xe1, 0xdc, 0xc6, 0xce, 0x51, 0x29, 0x50, 0xbf, 0x1b, 0xde, 0xff, 0xba, 0x4d, 0x8d, 0x8b, 0x7e, 0xe7, 0xbd, 0x5b, 0x8f, 0xbe, 0xe3, 0x75, 0x71, 0xff, 0x37, 0x05, 0x5a, 0x10, 0xeb, 0x54, 0x7e, 0x44, 0x72, 0x2c, 0xd4, 0xfc, + /* (2^338)P */ 0x03, 0x12, 0x1c, 0xb2, 0x08, 0x90, 0xa1, 0x2d, 0x50, 0xa0, 0xad, 0x7f, 0x8d, 0xa6, 0x97, 0xc1, 0xbd, 0xdc, 0xc3, 0xa7, 0xad, 0x31, 0xdf, 0xb8, 0x03, 0x84, 0xc3, 0xb9, 0x29, 0x3d, 0x92, 0x2e, 0xc3, 0x90, 0x07, 0xe8, 0xa7, 0xc7, 0xbc, 0x61, 0xe9, 0x3e, 0xa0, 0x35, 0xda, 0x1d, 0xab, 0x48, 0xfe, 0x50, 0xc9, 0x25, 0x59, 0x23, 0x69, 0x3f, + /* (2^339)P */ 0x8e, 0x91, 0xab, 0x6b, 0x91, 0x4f, 0x89, 0x76, 0x67, 0xad, 0xb2, 0x65, 0x9d, 0xad, 0x02, 0x36, 0xdc, 0xac, 0x96, 0x93, 0x97, 0x21, 0x14, 0xd0, 0xe8, 0x11, 0x60, 0x1e, 0xeb, 0x96, 
0x06, 0xf2, 0x53, 0xf2, 0x6d, 0xb7, 0x93, 0x6f, 0x26, 0x91, 0x23, 0xe3, 0x34, 0x04, 0x92, 0x91, 0x37, 0x08, 0x50, 0xd6, 0x28, 0x09, 0x27, 0xa1, 0x0c, 0x00, + /* (2^340)P */ 0x1f, 0xbb, 0x21, 0x26, 0x33, 0xcb, 0xa4, 0xd1, 0xee, 0x85, 0xf9, 0xd9, 0x3c, 0x90, 0xc3, 0xd1, 0x26, 0xa2, 0x25, 0x93, 0x43, 0x61, 0xed, 0x91, 0x6e, 0x54, 0x03, 0x2e, 0x42, 0x9d, 0xf7, 0xa6, 0x02, 0x0f, 0x2f, 0x9c, 0x7a, 0x8d, 0x12, 0xc2, 0x18, 0xfc, 0x41, 0xff, 0x85, 0x26, 0x1a, 0x44, 0x55, 0x0b, 0x89, 0xab, 0x6f, 0x62, 0x33, 0x8c, + /* (2^341)P */ 0xe0, 0x3c, 0x5d, 0x70, 0x64, 0x87, 0x81, 0x35, 0xf2, 0x37, 0xa6, 0x24, 0x3e, 0xe0, 0x62, 0xd5, 0x71, 0xe7, 0x93, 0xfb, 0xac, 0xc3, 0xe7, 0xc7, 0x04, 0xe2, 0x70, 0xd3, 0x29, 0x5b, 0x21, 0xbf, 0xf4, 0x26, 0x5d, 0xf3, 0x95, 0xb4, 0x2a, 0x6a, 0x07, 0x55, 0xa6, 0x4b, 0x3b, 0x15, 0xf2, 0x25, 0x8a, 0x95, 0x3f, 0x63, 0x2f, 0x7a, 0x23, 0x96, + /* (2^342)P */ 0x0d, 0x3d, 0xd9, 0x13, 0xa7, 0xb3, 0x5e, 0x67, 0xf7, 0x02, 0x23, 0xee, 0x84, 0xff, 0x99, 0xda, 0xb9, 0x53, 0xf8, 0xf0, 0x0e, 0x39, 0x2f, 0x3c, 0x64, 0x34, 0xe3, 0x09, 0xfd, 0x2b, 0x33, 0xc7, 0xfe, 0x62, 0x2b, 0x84, 0xdf, 0x2b, 0xd2, 0x7c, 0x26, 0x01, 0x70, 0x66, 0x5b, 0x85, 0xc2, 0xbe, 0x88, 0x37, 0xf1, 0x30, 0xac, 0xb8, 0x76, 0xa3, + /* (2^343)P */ 0x6e, 0x01, 0xf0, 0x55, 0x35, 0xe4, 0xbd, 0x43, 0x62, 0x9d, 0xd6, 0x11, 0xef, 0x6f, 0xb8, 0x8c, 0xaa, 0x98, 0x87, 0xc6, 0x6d, 0xc4, 0xcc, 0x74, 0x92, 0x53, 0x4a, 0xdf, 0xe4, 0x08, 0x89, 0x17, 0xd0, 0x0f, 0xf4, 0x00, 0x60, 0x78, 0x08, 0x44, 0xb5, 0xda, 0x18, 0xed, 0x98, 0xc8, 0x61, 0x3d, 0x39, 0xdb, 0xcf, 0x1d, 0x49, 0x40, 0x65, 0x75, + /* (2^344)P */ 0x8e, 0x10, 0xae, 0x5f, 0x06, 0xd2, 0x95, 0xfd, 0x20, 0x16, 0x49, 0x5b, 0x57, 0xbe, 0x22, 0x8b, 0x43, 0xfb, 0xe6, 0xcc, 0x26, 0xa5, 0x5d, 0xd3, 0x68, 0xc5, 0xf9, 0x5a, 0x86, 0x24, 0x87, 0x27, 0x05, 0xfd, 0xe2, 0xff, 0xb3, 0xa3, 0x7b, 0x37, 0x59, 0xc5, 0x4e, 0x14, 0x94, 0xf9, 0x3b, 0xcb, 0x7c, 0xed, 0xca, 0x1d, 0xb2, 0xac, 0x05, 0x4a, + /* (2^345)P */ 0xf4, 0xd1, 0x81, 0xeb, 0x89, 0xbf, 0xfe, 0x1e, 0x41, 0x92, 
0x29, 0xee, 0xe1, 0x43, 0xf5, 0x86, 0x1d, 0x2f, 0xbb, 0x1e, 0x84, 0x5d, 0x7b, 0x8d, 0xd5, 0xda, 0xee, 0x1e, 0x8a, 0xd0, 0x27, 0xf2, 0x60, 0x51, 0x59, 0x82, 0xf4, 0x84, 0x2b, 0x5b, 0x14, 0x2d, 0x81, 0x82, 0x3e, 0x2b, 0xb4, 0x6d, 0x51, 0x4f, 0xc5, 0xcb, 0xbf, 0x74, 0xe3, 0xb4, + /* (2^346)P */ 0x19, 0x2f, 0x22, 0xb3, 0x04, 0x5f, 0x81, 0xca, 0x05, 0x60, 0xb9, 0xaa, 0xee, 0x0e, 0x2f, 0x48, 0x38, 0xf9, 0x91, 0xb4, 0x66, 0xe4, 0x57, 0x28, 0x54, 0x10, 0xe9, 0x61, 0x9d, 0xd4, 0x90, 0x75, 0xb1, 0x39, 0x23, 0xb6, 0xfc, 0x82, 0xe0, 0xfa, 0xbb, 0x5c, 0x6e, 0xc3, 0x44, 0x13, 0x00, 0x83, 0x55, 0x9e, 0x8e, 0x10, 0x61, 0x81, 0x91, 0x04, + /* (2^347)P */ 0x5f, 0x2a, 0xd7, 0x81, 0xd9, 0x9c, 0xbb, 0x79, 0xbc, 0x62, 0x56, 0x98, 0x03, 0x5a, 0x18, 0x85, 0x2a, 0x9c, 0xd0, 0xfb, 0xd2, 0xb1, 0xaf, 0xef, 0x0d, 0x24, 0xc5, 0xfa, 0x39, 0xbb, 0x6b, 0xed, 0xa4, 0xdf, 0xe4, 0x87, 0xcd, 0x41, 0xd3, 0x72, 0x32, 0xc6, 0x28, 0x21, 0xb1, 0xba, 0x8b, 0xa3, 0x91, 0x79, 0x76, 0x22, 0x25, 0x10, 0x61, 0xd1, + /* (2^348)P */ 0x73, 0xb5, 0x32, 0x97, 0xdd, 0xeb, 0xdd, 0x22, 0x22, 0xf1, 0x33, 0x3c, 0x77, 0x56, 0x7d, 0x6b, 0x48, 0x2b, 0x05, 0x81, 0x03, 0x03, 0x91, 0x9a, 0xe3, 0x5e, 0xd4, 0xee, 0x3f, 0xf8, 0xbb, 0x50, 0x21, 0x32, 0x4c, 0x4a, 0x58, 0x49, 0xde, 0x0c, 0xde, 0x30, 0x82, 0x3d, 0x92, 0xf0, 0x6c, 0xcc, 0x32, 0x3e, 0xd2, 0x78, 0x8a, 0x6e, 0x2c, 0xd0, + /* (2^349)P */ 0xf0, 0xf7, 0xa1, 0x0b, 0xc1, 0x74, 0x85, 0xa8, 0xe9, 0xdd, 0x48, 0xa1, 0xc0, 0x16, 0xd8, 0x2b, 0x61, 0x08, 0xc2, 0x2b, 0x30, 0x26, 0x79, 0xce, 0x9e, 0xfd, 0x39, 0xd7, 0x81, 0xa4, 0x63, 0x8c, 0xd5, 0x74, 0xa0, 0x88, 0xfa, 0x03, 0x30, 0xe9, 0x7f, 0x2b, 0xc6, 0x02, 0xc9, 0x5e, 0xe4, 0xd5, 0x4d, 0x92, 0xd0, 0xf6, 0xf2, 0x5b, 0x79, 0x08, + /* (2^350)P */ 0x34, 0x89, 0x81, 0x43, 0xd1, 0x94, 0x2c, 0x10, 0x54, 0x9b, 0xa0, 0xe5, 0x44, 0xe8, 0xc2, 0x2f, 0x3e, 0x0e, 0x74, 0xae, 0xba, 0xe2, 0xac, 0x85, 0x6b, 0xd3, 0x5c, 0x97, 0xf7, 0x90, 0xf1, 0x12, 0xc0, 0x03, 0xc8, 0x1f, 0x37, 0x72, 0x8c, 0x9b, 0x9c, 0x17, 0x96, 0x9d, 0xc7, 0xbf, 0xa3, 0x3f, 0x44, 
0x3d, 0x87, 0x81, 0xbd, 0x81, 0xa6, 0x5f, + /* (2^351)P */ 0xe4, 0xff, 0x78, 0x62, 0x82, 0x5b, 0x76, 0x58, 0xf5, 0x5b, 0xa6, 0xc4, 0x53, 0x11, 0x3b, 0x7b, 0xaa, 0x67, 0xf8, 0xea, 0x3b, 0x5d, 0x9a, 0x2e, 0x04, 0xeb, 0x4a, 0x24, 0xfb, 0x56, 0xf0, 0xa8, 0xd4, 0x14, 0xed, 0x0f, 0xfd, 0xc5, 0x26, 0x17, 0x2a, 0xf0, 0xb9, 0x13, 0x8c, 0xbd, 0x65, 0x14, 0x24, 0x95, 0x27, 0x12, 0x63, 0x2a, 0x09, 0x18, + /* (2^352)P */ 0xe1, 0x5c, 0xe7, 0xe0, 0x00, 0x6a, 0x96, 0xf2, 0x49, 0x6a, 0x39, 0xa5, 0xe0, 0x17, 0x79, 0x4a, 0x63, 0x07, 0x62, 0x09, 0x61, 0x1b, 0x6e, 0xa9, 0xb5, 0x62, 0xb7, 0xde, 0xdf, 0x80, 0x4c, 0x5a, 0x99, 0x73, 0x59, 0x9d, 0xfb, 0xb1, 0x5e, 0xbe, 0xb8, 0xb7, 0x63, 0x93, 0xe8, 0xad, 0x5e, 0x1f, 0xae, 0x59, 0x1c, 0xcd, 0xb4, 0xc2, 0xb3, 0x8a, + /* (2^353)P */ 0x78, 0x53, 0xa1, 0x4c, 0x70, 0x9c, 0x63, 0x7e, 0xb3, 0x12, 0x40, 0x5f, 0xbb, 0x23, 0xa7, 0xf7, 0x77, 0x96, 0x5b, 0x4d, 0x91, 0x10, 0x52, 0x85, 0x9e, 0xa5, 0x38, 0x0b, 0xfd, 0x25, 0x01, 0x4b, 0xfa, 0x4d, 0xd3, 0x3f, 0x78, 0x74, 0x42, 0xff, 0x62, 0x2d, 0x27, 0xdc, 0x9d, 0xd1, 0x29, 0x76, 0x2e, 0x78, 0xb3, 0x35, 0xfa, 0x15, 0xd5, 0x38, + /* (2^354)P */ 0x8b, 0xc7, 0x43, 0xce, 0xf0, 0x5e, 0xf1, 0x0d, 0x02, 0x38, 0xe8, 0x82, 0xc9, 0x25, 0xad, 0x2d, 0x27, 0xa4, 0x54, 0x18, 0xb2, 0x30, 0x73, 0xa4, 0x41, 0x08, 0xe4, 0x86, 0xe6, 0x8c, 0xe9, 0x2a, 0x34, 0xb3, 0xd6, 0x61, 0x8f, 0x66, 0x26, 0x08, 0xb6, 0x06, 0x33, 0xaa, 0x12, 0xac, 0x72, 0xec, 0x2e, 0x52, 0xa3, 0x25, 0x3e, 0xd7, 0x62, 0xe8, + /* (2^355)P */ 0xc4, 0xbb, 0x89, 0xc8, 0x40, 0xcc, 0x84, 0xec, 0x4a, 0xd9, 0xc4, 0x55, 0x78, 0x00, 0xcf, 0xd8, 0xe9, 0x24, 0x59, 0xdc, 0x5e, 0xf0, 0x66, 0xa1, 0x83, 0xae, 0x97, 0x18, 0xc5, 0x54, 0x27, 0xa2, 0x21, 0x52, 0x03, 0x31, 0x5b, 0x11, 0x67, 0xf6, 0x12, 0x00, 0x87, 0x2f, 0xff, 0x59, 0x70, 0x8f, 0x6d, 0x71, 0xab, 0xab, 0x24, 0xb8, 0xba, 0x35, + /* (2^356)P */ 0x69, 0x43, 0xa7, 0x14, 0x06, 0x96, 0xe9, 0xc2, 0xe3, 0x2b, 0x45, 0x22, 0xc0, 0xd0, 0x2f, 0x34, 0xd1, 0x01, 0x99, 0xfc, 0x99, 0x38, 0xa1, 0x25, 0x2e, 0x59, 0x6c, 0x27, 0xc9, 
0xeb, 0x7b, 0xdc, 0x4e, 0x26, 0x68, 0xba, 0xfa, 0xec, 0x02, 0x05, 0x64, 0x80, 0x30, 0x20, 0x5c, 0x26, 0x7f, 0xaf, 0x95, 0x17, 0x3d, 0x5c, 0x9e, 0x96, 0x96, 0xaf, + /* (2^357)P */ 0xa6, 0xba, 0x21, 0x29, 0x32, 0xe2, 0x98, 0xde, 0x9b, 0x6d, 0x0b, 0x44, 0x91, 0xa8, 0x3e, 0xd4, 0xb8, 0x04, 0x6c, 0xf6, 0x04, 0x39, 0xbd, 0x52, 0x05, 0x15, 0x27, 0x78, 0x8e, 0x55, 0xac, 0x79, 0xc5, 0xe6, 0x00, 0x7f, 0x90, 0xa2, 0xdd, 0x07, 0x13, 0xe0, 0x24, 0x70, 0x5c, 0x0f, 0x4d, 0xa9, 0xf9, 0xae, 0xcb, 0x34, 0x10, 0x9d, 0x89, 0x9d, + /* (2^358)P */ 0x12, 0xe0, 0xb3, 0x9f, 0xc4, 0x96, 0x1d, 0xcf, 0xed, 0x99, 0x64, 0x28, 0x8d, 0xc7, 0x31, 0x82, 0xee, 0x5e, 0x75, 0x48, 0xff, 0x3a, 0xf2, 0x09, 0x34, 0x03, 0x93, 0x52, 0x19, 0xb2, 0xc5, 0x81, 0x93, 0x45, 0x5e, 0x59, 0x21, 0x2b, 0xec, 0x89, 0xba, 0x36, 0x6e, 0xf9, 0x82, 0x75, 0x7e, 0x82, 0x3f, 0xaa, 0xe2, 0xe3, 0x3b, 0x94, 0xfd, 0x98, + /* (2^359)P */ 0x7c, 0xdb, 0x75, 0x31, 0x61, 0xfb, 0x15, 0x28, 0x94, 0xd7, 0xc3, 0x5a, 0xa9, 0xa1, 0x0a, 0x66, 0x0f, 0x2b, 0x13, 0x3e, 0x42, 0xb5, 0x28, 0x3a, 0xca, 0x83, 0xf3, 0x61, 0x22, 0xf4, 0x40, 0xc5, 0xdf, 0xe7, 0x31, 0x9f, 0x7e, 0x51, 0x75, 0x06, 0x9d, 0x51, 0xc8, 0xe7, 0x9f, 0xc3, 0x71, 0x4f, 0x3d, 0x5b, 0xfb, 0xe9, 0x8e, 0x08, 0x40, 0x8e, + /* (2^360)P */ 0xf7, 0x31, 0xad, 0x50, 0x5d, 0x25, 0x93, 0x73, 0x68, 0xf6, 0x7c, 0x89, 0x5a, 0x3d, 0x9f, 0x9b, 0x05, 0x82, 0xe7, 0x70, 0x4b, 0x19, 0xaa, 0xcf, 0xff, 0xde, 0x50, 0x8f, 0x2f, 0x69, 0xd3, 0xf0, 0x99, 0x51, 0x6b, 0x9d, 0xb6, 0x56, 0x6f, 0xf8, 0x4c, 0x74, 0x8b, 0x4c, 0x91, 0xf9, 0xa9, 0xb1, 0x3e, 0x07, 0xdf, 0x0b, 0x27, 0x8a, 0xb1, 0xed, + /* (2^361)P */ 0xfb, 0x67, 0xd9, 0x48, 0xd2, 0xe4, 0x44, 0x9b, 0x43, 0x15, 0x8a, 0xeb, 0x00, 0x53, 0xad, 0x25, 0xc7, 0x7e, 0x19, 0x30, 0x87, 0xb7, 0xd5, 0x5f, 0x04, 0xf8, 0xaa, 0xdd, 0x57, 0xae, 0x34, 0x75, 0xe2, 0x84, 0x4b, 0x54, 0x60, 0x37, 0x95, 0xe4, 0xd3, 0xec, 0xac, 0xef, 0x47, 0x31, 0xa3, 0xc8, 0x31, 0x22, 0xdb, 0x26, 0xe7, 0x6a, 0xb5, 0xad, + /* (2^362)P */ 0x44, 0x09, 0x5c, 0x95, 0xe4, 0x72, 0x3c, 0x1a, 0xd1, 
0xac, 0x42, 0x51, 0x99, 0x6f, 0xfa, 0x1f, 0xf2, 0x22, 0xbe, 0xff, 0x7b, 0x66, 0xf5, 0x6c, 0xb3, 0x66, 0xc7, 0x4d, 0x78, 0x31, 0x83, 0x80, 0xf5, 0x41, 0xe9, 0x7f, 0xbe, 0xf7, 0x23, 0x49, 0x6b, 0x84, 0x4e, 0x7e, 0x47, 0x07, 0x6e, 0x74, 0xdf, 0xe5, 0x9d, 0x9e, 0x56, 0x2a, 0xc0, 0xbc, + /* (2^363)P */ 0xac, 0x10, 0x80, 0x8c, 0x7c, 0xfa, 0x83, 0xdf, 0xb3, 0xd0, 0xc4, 0xbe, 0xfb, 0x9f, 0xac, 0xc9, 0xc3, 0x40, 0x95, 0x0b, 0x09, 0x23, 0xda, 0x63, 0x67, 0xcf, 0xe7, 0x9f, 0x7d, 0x7b, 0x6b, 0xe2, 0xe6, 0x6d, 0xdb, 0x87, 0x9e, 0xa6, 0xff, 0x6d, 0xab, 0xbd, 0xfb, 0x54, 0x84, 0x68, 0xcf, 0x89, 0xf1, 0xd0, 0xe2, 0x85, 0x61, 0xdc, 0x22, 0xd1, + /* (2^364)P */ 0xa8, 0x48, 0xfb, 0x8c, 0x6a, 0x63, 0x01, 0x72, 0x43, 0x43, 0xeb, 0x21, 0xa3, 0x00, 0x8a, 0xc0, 0x87, 0x51, 0x9e, 0x86, 0x75, 0x16, 0x79, 0xf9, 0x6b, 0x11, 0x80, 0x62, 0xc2, 0x9d, 0xb8, 0x8c, 0x30, 0x8e, 0x8d, 0x03, 0x52, 0x7e, 0x31, 0x59, 0x38, 0xf9, 0x25, 0xc7, 0x0f, 0xc7, 0xa8, 0x2b, 0x5c, 0x80, 0xfa, 0x90, 0xa2, 0x63, 0xca, 0xe7, + /* (2^365)P */ 0xf1, 0x5d, 0xb5, 0xd9, 0x20, 0x10, 0x7d, 0x0f, 0xc5, 0x50, 0x46, 0x07, 0xff, 0x02, 0x75, 0x2b, 0x4a, 0xf3, 0x39, 0x91, 0x72, 0xb7, 0xd5, 0xcc, 0x38, 0xb8, 0xe7, 0x36, 0x26, 0x5e, 0x11, 0x97, 0x25, 0xfb, 0x49, 0x68, 0xdc, 0xb4, 0x46, 0x87, 0x5c, 0xc2, 0x7f, 0xaa, 0x7d, 0x36, 0x23, 0xa6, 0xc6, 0x53, 0xec, 0xbc, 0x57, 0x47, 0xc1, 0x2b, + /* (2^366)P */ 0x25, 0x5d, 0x7d, 0x95, 0xda, 0x0b, 0x8f, 0x78, 0x1e, 0x19, 0x09, 0xfa, 0x67, 0xe0, 0xa0, 0x17, 0x24, 0x76, 0x6c, 0x30, 0x1f, 0x62, 0x3d, 0xbe, 0x45, 0x70, 0xcc, 0xb6, 0x1e, 0x68, 0x06, 0x25, 0x68, 0x16, 0x1a, 0x33, 0x3f, 0x90, 0xc7, 0x78, 0x2d, 0x98, 0x3c, 0x2f, 0xb9, 0x2d, 0x94, 0x0b, 0xfb, 0x49, 0x56, 0x30, 0xd7, 0xc1, 0xe6, 0x48, + /* (2^367)P */ 0x7a, 0xd1, 0xe0, 0x8e, 0x67, 0xfc, 0x0b, 0x50, 0x1f, 0x84, 0x98, 0xfa, 0xaf, 0xae, 0x2e, 0x31, 0x27, 0xcf, 0x3f, 0xf2, 0x6e, 0x8d, 0x81, 0x8f, 0xd2, 0x5f, 0xde, 0xd3, 0x5e, 0xe9, 0xe7, 0x13, 0x48, 0x83, 0x5a, 0x4e, 0x84, 0xd1, 0x58, 0xcf, 0x6b, 0x84, 0xdf, 0x13, 0x1d, 0x91, 0x85, 0xe8, 
0xcb, 0x29, 0x79, 0xd2, 0xca, 0xac, 0x6a, 0x93, + /* (2^368)P */ 0x53, 0x82, 0xce, 0x61, 0x96, 0x88, 0x6f, 0xe1, 0x4a, 0x4c, 0x1e, 0x30, 0x73, 0xe8, 0x74, 0xde, 0x40, 0x2b, 0xe0, 0xc4, 0xb5, 0xd8, 0x7c, 0x15, 0xe7, 0xe1, 0xb1, 0xe0, 0xd6, 0x88, 0xb1, 0x6a, 0x57, 0x19, 0x6a, 0x22, 0x66, 0x57, 0xf6, 0x8d, 0xfd, 0xc0, 0xf2, 0xa3, 0x03, 0x56, 0xfb, 0x2e, 0x75, 0x5e, 0xc7, 0x8e, 0x22, 0x96, 0x5c, 0x06, + /* (2^369)P */ 0x98, 0x7e, 0xbf, 0x3e, 0xbf, 0x24, 0x9d, 0x15, 0xd3, 0xf6, 0xd3, 0xd2, 0xf0, 0x11, 0xf2, 0xdb, 0x36, 0x23, 0x38, 0xf7, 0x1d, 0x71, 0x20, 0xd2, 0x54, 0x7f, 0x1e, 0x24, 0x8f, 0xe2, 0xaa, 0xf7, 0x3f, 0x6b, 0x41, 0x4e, 0xdc, 0x0e, 0xec, 0xe8, 0x35, 0x0a, 0x08, 0x6d, 0x89, 0x5b, 0x32, 0x91, 0x01, 0xb6, 0xe0, 0x2c, 0xc6, 0xa1, 0xbe, 0xb4, + /* (2^370)P */ 0x29, 0xf2, 0x1e, 0x1c, 0xdc, 0x68, 0x8a, 0x43, 0x87, 0x2c, 0x48, 0xb3, 0x9e, 0xed, 0xd2, 0x82, 0x46, 0xac, 0x2f, 0xef, 0x93, 0x34, 0x37, 0xca, 0x64, 0x8d, 0xc9, 0x06, 0x90, 0xbb, 0x78, 0x0a, 0x3c, 0x4c, 0xcf, 0x35, 0x7a, 0x0f, 0xf7, 0xa7, 0xf4, 0x2f, 0x45, 0x69, 0x3f, 0xa9, 0x5d, 0xce, 0x7b, 0x8a, 0x84, 0xc3, 0xae, 0xf4, 0xda, 0xd5, + /* (2^371)P */ 0xca, 0xba, 0x95, 0x43, 0x05, 0x7b, 0x06, 0xd9, 0x5c, 0x0a, 0x18, 0x5f, 0x6a, 0x6a, 0xce, 0xc0, 0x3d, 0x95, 0x51, 0x0e, 0x1a, 0xbe, 0x85, 0x7a, 0xf2, 0x69, 0xec, 0xc0, 0x8c, 0xca, 0xa3, 0x32, 0x0a, 0x76, 0x50, 0xc6, 0x76, 0x61, 0x00, 0x89, 0xbf, 0x6e, 0x0f, 0x48, 0x90, 0x31, 0x93, 0xec, 0x34, 0x70, 0xf0, 0xc3, 0x8d, 0xf0, 0x0f, 0xb5, + /* (2^372)P */ 0xbe, 0x23, 0xe2, 0x18, 0x99, 0xf1, 0xed, 0x8a, 0xf6, 0xc9, 0xac, 0xb8, 0x1e, 0x9a, 0x3c, 0x15, 0xae, 0xd7, 0x6d, 0xb3, 0x04, 0xee, 0x5b, 0x0d, 0x1e, 0x79, 0xb7, 0xf9, 0xf9, 0x8d, 0xad, 0xf9, 0x8f, 0x5a, 0x6a, 0x7b, 0xd7, 0x9b, 0xca, 0x62, 0xfe, 0x9c, 0xc0, 0x6f, 0x6d, 0x9d, 0x76, 0xa3, 0x69, 0xb9, 0x4c, 0xa1, 0xc4, 0x0c, 0x76, 0xaa, + /* (2^373)P */ 0x1c, 0x06, 0xfe, 0x3f, 0x45, 0x70, 0xcd, 0x97, 0xa9, 0xa2, 0xb1, 0xd3, 0xf2, 0xa5, 0x0c, 0x49, 0x2c, 0x75, 0x73, 0x1f, 0xcf, 0x00, 0xaf, 0xd5, 0x2e, 0xde, 0x0d, 0x8f, 
0x8f, 0x7c, 0xc4, 0x58, 0xce, 0xd4, 0xf6, 0x24, 0x19, 0x2e, 0xd8, 0xc5, 0x1d, 0x1a, 0x3f, 0xb8, 0x4f, 0xbc, 0x7d, 0xbd, 0x68, 0xe3, 0x81, 0x98, 0x1b, 0xa8, 0xc9, 0xd9, + /* (2^374)P */ 0x39, 0x95, 0x78, 0x24, 0x6c, 0x38, 0xe4, 0xe7, 0xd0, 0x8d, 0xb9, 0x38, 0x71, 0x5e, 0xc1, 0x62, 0x80, 0xcc, 0xcb, 0x8c, 0x97, 0xca, 0xf8, 0xb9, 0xd9, 0x9c, 0xce, 0x72, 0x7b, 0x70, 0xee, 0x5f, 0xea, 0xa2, 0xdf, 0xa9, 0x14, 0x10, 0xf9, 0x6e, 0x59, 0x9f, 0x9c, 0xe0, 0x0c, 0xb2, 0x07, 0x97, 0xcd, 0xd2, 0x89, 0x16, 0xfd, 0x9c, 0xa8, 0xa5, + /* (2^375)P */ 0x5a, 0x61, 0xf1, 0x59, 0x7c, 0x38, 0xda, 0xe2, 0x85, 0x99, 0x68, 0xe9, 0xc9, 0xf7, 0x32, 0x7e, 0xc4, 0xca, 0xb7, 0x11, 0x08, 0x69, 0x2b, 0x66, 0x02, 0xf7, 0x2e, 0x18, 0xc3, 0x8e, 0xe1, 0xf9, 0xc5, 0x19, 0x9a, 0x0a, 0x9c, 0x07, 0xba, 0xc7, 0x9c, 0x03, 0x34, 0x89, 0x99, 0x67, 0x0b, 0x16, 0x4b, 0x07, 0x36, 0x16, 0x36, 0x2c, 0xe2, 0xa1, + /* (2^376)P */ 0x70, 0x10, 0x91, 0x27, 0xa8, 0x24, 0x8e, 0x29, 0x04, 0x6f, 0x79, 0x1f, 0xd3, 0xa5, 0x68, 0xd3, 0x0b, 0x7d, 0x56, 0x4d, 0x14, 0x57, 0x7b, 0x2e, 0x00, 0x9f, 0x9a, 0xfd, 0x6c, 0x63, 0x18, 0x81, 0xdb, 0x9d, 0xb7, 0xd7, 0xa4, 0x1e, 0xe8, 0x40, 0xf1, 0x4c, 0xa3, 0x01, 0xd5, 0x4b, 0x75, 0xea, 0xdd, 0x97, 0xfd, 0x5b, 0xb2, 0x66, 0x6a, 0x24, + /* (2^377)P */ 0x72, 0x11, 0xfe, 0x73, 0x1b, 0xd3, 0xea, 0x7f, 0x93, 0x15, 0x15, 0x05, 0xfe, 0x40, 0xe8, 0x28, 0xd8, 0x50, 0x47, 0x66, 0xfa, 0xb7, 0xb5, 0x04, 0xba, 0x35, 0x1e, 0x32, 0x9f, 0x5f, 0x32, 0xba, 0x3d, 0xd1, 0xed, 0x9a, 0x76, 0xca, 0xa3, 0x3e, 0x77, 0xd8, 0xd8, 0x7c, 0x5f, 0x68, 0x42, 0xb5, 0x86, 0x7f, 0x3b, 0xc9, 0xc1, 0x89, 0x64, 0xda, + /* (2^378)P */ 0xd5, 0xd4, 0x17, 0x31, 0xfc, 0x6a, 0xfd, 0xb8, 0xe8, 0xe5, 0x3e, 0x39, 0x06, 0xe4, 0xd1, 0x90, 0x2a, 0xca, 0xf6, 0x54, 0x6c, 0x1b, 0x2f, 0x49, 0x97, 0xb1, 0x2a, 0x82, 0x43, 0x3d, 0x1f, 0x8b, 0xe2, 0x47, 0xc5, 0x24, 0xa8, 0xd5, 0x53, 0x29, 0x7d, 0xc6, 0x87, 0xa6, 0x25, 0x3a, 0x64, 0xdd, 0x71, 0x08, 0x9e, 0xcd, 0xe9, 0x45, 0xc7, 0xba, + /* (2^379)P */ 0x37, 0x72, 0x6d, 0x13, 0x7a, 0x8d, 0x04, 0x31, 
0xe6, 0xe3, 0x9e, 0x36, 0x71, 0x3e, 0xc0, 0x1e, 0xe3, 0x71, 0xd3, 0x49, 0x4e, 0x4a, 0x36, 0x42, 0x68, 0x68, 0x61, 0xc7, 0x3c, 0xdb, 0x81, 0x49, 0xf7, 0x91, 0x4d, 0xea, 0x4c, 0x4f, 0x98, 0xc6, 0x7e, 0x60, 0x84, 0x4b, 0x6a, 0x37, 0xbb, 0x52, 0xf7, 0xce, 0x02, 0xe4, 0xad, 0xd1, 0x3c, 0xa7, + /* (2^380)P */ 0x51, 0x06, 0x2d, 0xf8, 0x08, 0xe8, 0xf1, 0x0c, 0xe5, 0xa9, 0xac, 0x29, 0x73, 0x3b, 0xed, 0x98, 0x5f, 0x55, 0x08, 0x38, 0x51, 0x44, 0x36, 0x5d, 0xea, 0xc3, 0xb8, 0x0e, 0xa0, 0x4f, 0xd2, 0x79, 0xe9, 0x98, 0xc3, 0xf5, 0x00, 0xb9, 0x26, 0x27, 0x42, 0xa8, 0x07, 0xc1, 0x12, 0x31, 0xc1, 0xc3, 0x3c, 0x3b, 0x7a, 0x72, 0x97, 0xc2, 0x70, 0x3a, + /* (2^381)P */ 0xf4, 0xb2, 0xba, 0x32, 0xbc, 0xa9, 0x2f, 0x87, 0xc7, 0x3c, 0x45, 0xcd, 0xae, 0xe2, 0x13, 0x6d, 0x3a, 0xf2, 0xf5, 0x66, 0x97, 0x29, 0xaf, 0x53, 0x9f, 0xda, 0xea, 0x14, 0xdf, 0x04, 0x98, 0x19, 0x95, 0x9e, 0x2a, 0x00, 0x5c, 0x9d, 0x1d, 0xf0, 0x39, 0x23, 0xff, 0xfc, 0xca, 0x36, 0xb7, 0xde, 0xdf, 0x37, 0x78, 0x52, 0x21, 0xfa, 0x19, 0x10, + /* (2^382)P */ 0x50, 0x20, 0x73, 0x74, 0x62, 0x21, 0xf2, 0xf7, 0x9b, 0x66, 0x85, 0x34, 0x74, 0xd4, 0x9d, 0x60, 0xd7, 0xbc, 0xc8, 0x46, 0x3b, 0xb8, 0x80, 0x42, 0x15, 0x0a, 0x6c, 0x35, 0x1a, 0x69, 0xf0, 0x1d, 0x4b, 0x29, 0x54, 0x5a, 0x9a, 0x48, 0xec, 0x9f, 0x37, 0x74, 0x91, 0xd0, 0xd1, 0x9e, 0x00, 0xc2, 0x76, 0x56, 0xd6, 0xa0, 0x15, 0x14, 0x83, 0x59, + /* (2^383)P */ 0xc2, 0xf8, 0x22, 0x20, 0x23, 0x07, 0xbd, 0x1d, 0x6f, 0x1e, 0x8c, 0x56, 0x06, 0x6a, 0x4b, 0x9f, 0xe2, 0xa9, 0x92, 0x46, 0x4b, 0x46, 0x59, 0xd7, 0xe1, 0xda, 0x14, 0x98, 0x07, 0x65, 0x7e, 0x28, 0x20, 0xf2, 0x9d, 0x4f, 0x36, 0x5c, 0x92, 0xe0, 0x9d, 0xfe, 0x3e, 0xda, 0xe4, 0x47, 0x19, 0x3c, 0x00, 0x7f, 0x22, 0xf2, 0x9e, 0x51, 0xae, 0x4d, + /* (2^384)P */ 0xbe, 0x8c, 0x1b, 0x10, 0xb6, 0xad, 0xcc, 0xcc, 0xd8, 0x5e, 0x21, 0xa6, 0xfb, 0xf1, 0xf6, 0xbd, 0x0a, 0x24, 0x67, 0xb4, 0x57, 0x7a, 0xbc, 0xe8, 0xe9, 0xff, 0xee, 0x0a, 0x1f, 0xee, 0xbd, 0xc8, 0x44, 0xed, 0x2b, 0xbb, 0x55, 0x1f, 0xdd, 0x7c, 0xb3, 0xeb, 0x3f, 0x63, 0xa1, 0x28, 0x91, 
0x21, 0xab, 0x71, 0xc6, 0x4c, 0xd0, 0xe9, 0xb0, 0x21, + /* (2^385)P */ 0xad, 0xc9, 0x77, 0x2b, 0xee, 0x89, 0xa4, 0x7b, 0xfd, 0xf9, 0xf6, 0x14, 0xe4, 0xed, 0x1a, 0x16, 0x9b, 0x78, 0x41, 0x43, 0xa8, 0x83, 0x72, 0x06, 0x2e, 0x7c, 0xdf, 0xeb, 0x7e, 0xdd, 0xd7, 0x8b, 0xea, 0x9a, 0x2b, 0x03, 0xba, 0x57, 0xf3, 0xf1, 0xd9, 0xe5, 0x09, 0xc5, 0x98, 0x61, 0x1c, 0x51, 0x6d, 0x5d, 0x6e, 0xfb, 0x5e, 0x95, 0x9f, 0xb5, + /* (2^386)P */ 0x23, 0xe2, 0x1e, 0x95, 0xa3, 0x5e, 0x42, 0x10, 0xc7, 0xc3, 0x70, 0xbf, 0x4b, 0x6b, 0x83, 0x36, 0x93, 0xb7, 0x68, 0x47, 0x88, 0x3a, 0x10, 0x88, 0x48, 0x7f, 0x8c, 0xae, 0x54, 0x10, 0x02, 0xa4, 0x52, 0x8f, 0x8d, 0xf7, 0x26, 0x4f, 0x50, 0xc3, 0x6a, 0xe2, 0x4e, 0x3b, 0x4c, 0xb9, 0x8a, 0x14, 0x15, 0x6d, 0x21, 0x29, 0xb3, 0x6e, 0x4e, 0xd0, + /* (2^387)P */ 0x4c, 0x8a, 0x18, 0x3f, 0xb7, 0x20, 0xfd, 0x3e, 0x54, 0xca, 0x68, 0x3c, 0xea, 0x6f, 0xf4, 0x6b, 0xa2, 0xbd, 0x01, 0xbd, 0xfe, 0x08, 0xa8, 0xd8, 0xc2, 0x20, 0x36, 0x05, 0xcd, 0xe9, 0xf3, 0x9e, 0xfa, 0x85, 0x66, 0x8f, 0x4b, 0x1d, 0x8c, 0x64, 0x4f, 0xb8, 0xc6, 0x0f, 0x5b, 0x57, 0xd8, 0x24, 0x19, 0x5a, 0x14, 0x4b, 0x92, 0xd3, 0x96, 0xbc, + /* (2^388)P */ 0xa9, 0x3f, 0xc9, 0x6c, 0xca, 0x64, 0x1e, 0x6f, 0xdf, 0x65, 0x7f, 0x9a, 0x47, 0x6b, 0x8a, 0x60, 0x31, 0xa6, 0x06, 0xac, 0x69, 0x30, 0xe6, 0xea, 0x63, 0x42, 0x26, 0x5f, 0xdb, 0xd0, 0xf2, 0x8e, 0x34, 0x0a, 0x3a, 0xeb, 0xf3, 0x79, 0xc8, 0xb7, 0x60, 0x56, 0x5c, 0x37, 0x95, 0x71, 0xf8, 0x7f, 0x49, 0x3e, 0x9e, 0x01, 0x26, 0x1e, 0x80, 0x9f, + /* (2^389)P */ 0xf8, 0x16, 0x9a, 0xaa, 0xb0, 0x28, 0xb5, 0x8e, 0xd0, 0x60, 0xe5, 0x26, 0xa9, 0x47, 0xc4, 0x5c, 0xa9, 0x39, 0xfe, 0x0a, 0xd8, 0x07, 0x2b, 0xb3, 0xce, 0xf1, 0xea, 0x1a, 0xf4, 0x7b, 0x98, 0x31, 0x3d, 0x13, 0x29, 0x80, 0xe8, 0x0d, 0xcf, 0x56, 0x39, 0x86, 0x50, 0x0c, 0xb3, 0x18, 0xf4, 0xc5, 0xca, 0xf2, 0x6f, 0xcd, 0x8d, 0xd5, 0x02, 0xb0, + /* (2^390)P */ 0xbf, 0x39, 0x3f, 0xac, 0x6d, 0x1a, 0x6a, 0xe4, 0x42, 0x24, 0xd6, 0x41, 0x9d, 0xb9, 0x5b, 0x46, 0x73, 0x93, 0x76, 0xaa, 0xb7, 0x37, 0x36, 0xa6, 0x09, 0xe5, 0x04, 
0x3b, 0x66, 0xc4, 0x29, 0x3e, 0x41, 0xc2, 0xcb, 0xe5, 0x17, 0xd7, 0x34, 0x67, 0x1d, 0x2c, 0x12, 0xec, 0x24, 0x7a, 0x40, 0xa2, 0x45, 0x41, 0xf0, 0x75, 0xed, 0x43, 0x30, 0xc9, + /* (2^391)P */ 0x80, 0xf6, 0x47, 0x5b, 0xad, 0x54, 0x02, 0xbc, 0xdd, 0xa4, 0xb2, 0xd7, 0x42, 0x95, 0xf2, 0x0d, 0x1b, 0xef, 0x37, 0xa7, 0xb4, 0x34, 0x04, 0x08, 0x71, 0x1b, 0xd3, 0xdf, 0xa1, 0xf0, 0x2b, 0xfa, 0xc0, 0x1f, 0xf3, 0x44, 0xb5, 0xc6, 0x47, 0x3d, 0x65, 0x67, 0x45, 0x4d, 0x2f, 0xde, 0x52, 0x73, 0xfc, 0x30, 0x01, 0x6b, 0xc1, 0x03, 0xd8, 0xd7, + /* (2^392)P */ 0x1c, 0x67, 0x55, 0x3e, 0x01, 0x17, 0x0f, 0x3e, 0xe5, 0x34, 0x58, 0xfc, 0xcb, 0x71, 0x24, 0x74, 0x5d, 0x36, 0x1e, 0x89, 0x2a, 0x63, 0xf8, 0xf8, 0x9f, 0x50, 0x9f, 0x32, 0x92, 0x29, 0xd8, 0x1a, 0xec, 0x76, 0x57, 0x6c, 0x67, 0x12, 0x6a, 0x6e, 0xef, 0x97, 0x1f, 0xc3, 0x77, 0x60, 0x3c, 0x22, 0xcb, 0xc7, 0x04, 0x1a, 0x89, 0x2d, 0x10, 0xa6, + /* (2^393)P */ 0x12, 0xf5, 0xa9, 0x26, 0x16, 0xd9, 0x3c, 0x65, 0x5d, 0x83, 0xab, 0xd1, 0x70, 0x6b, 0x1c, 0xdb, 0xe7, 0x86, 0x0d, 0xfb, 0xe7, 0xf8, 0x2a, 0x58, 0x6e, 0x7a, 0x66, 0x13, 0x53, 0x3a, 0x6f, 0x8d, 0x43, 0x5f, 0x14, 0x23, 0x14, 0xff, 0x3d, 0x52, 0x7f, 0xee, 0xbd, 0x7a, 0x34, 0x8b, 0x35, 0x24, 0xc3, 0x7a, 0xdb, 0xcf, 0x22, 0x74, 0x9a, 0x8f, + /* (2^394)P */ 0xdb, 0x20, 0xfc, 0xe5, 0x39, 0x4e, 0x7d, 0x78, 0xee, 0x0b, 0xbf, 0x1d, 0x80, 0xd4, 0x05, 0x4f, 0xb9, 0xd7, 0x4e, 0x94, 0x88, 0x9a, 0x50, 0x78, 0x1a, 0x70, 0x8c, 0xcc, 0x25, 0xb6, 0x61, 0x09, 0xdc, 0x7b, 0xea, 0x3f, 0x7f, 0xea, 0x2a, 0x0d, 0x47, 0x1c, 0x8e, 0xa6, 0x5b, 0xd2, 0xa3, 0x61, 0x93, 0x3c, 0x68, 0x9f, 0x8b, 0xea, 0xb0, 0xcb, + /* (2^395)P */ 0xff, 0x54, 0x02, 0x19, 0xae, 0x8b, 0x4c, 0x2c, 0x3a, 0xe0, 0xe4, 0xac, 0x87, 0xf7, 0x51, 0x45, 0x41, 0x43, 0xdc, 0xaa, 0xcd, 0xcb, 0xdc, 0x40, 0xe3, 0x44, 0x3b, 0x1d, 0x9e, 0x3d, 0xb9, 0x82, 0xcc, 0x7a, 0xc5, 0x12, 0xf8, 0x1e, 0xdd, 0xdb, 0x8d, 0xb0, 0x2a, 0xe8, 0xe6, 0x6c, 0x94, 0x3b, 0xb7, 0x2d, 0xba, 0x79, 0x3b, 0xb5, 0x86, 0xfb, + /* (2^396)P */ 0x82, 0x88, 0x13, 0xdd, 0x6c, 0xcd, 0x85, 
0x2b, 0x90, 0x86, 0xb7, 0xac, 0x16, 0xa6, 0x6e, 0x6a, 0x94, 0xd8, 0x1e, 0x4e, 0x41, 0x0f, 0xce, 0x81, 0x6a, 0xa8, 0x26, 0x56, 0x43, 0x52, 0x52, 0xe6, 0xff, 0x88, 0xcf, 0x47, 0x05, 0x1d, 0xff, 0xf3, 0xa0, 0x10, 0xb2, 0x97, 0x87, 0xeb, 0x47, 0xbb, 0xfa, 0x1f, 0xe8, 0x4c, 0xce, 0xc4, 0xcd, 0x93, + /* (2^397)P */ 0xf4, 0x11, 0xf5, 0x8d, 0x89, 0x29, 0x79, 0xb3, 0x59, 0x0b, 0x29, 0x7d, 0x9c, 0x12, 0x4a, 0x65, 0x72, 0x3a, 0xf9, 0xec, 0x37, 0x18, 0x86, 0xef, 0x44, 0x07, 0x25, 0x74, 0x76, 0x53, 0xed, 0x51, 0x01, 0xc6, 0x28, 0xc5, 0xc3, 0x4a, 0x0f, 0x99, 0xec, 0xc8, 0x40, 0x5a, 0x83, 0x30, 0x79, 0xa2, 0x3e, 0x63, 0x09, 0x2d, 0x6f, 0x23, 0x54, 0x1c, + /* (2^398)P */ 0x5c, 0x6f, 0x3b, 0x1c, 0x30, 0x77, 0x7e, 0x87, 0x66, 0x83, 0x2e, 0x7e, 0x85, 0x50, 0xfd, 0xa0, 0x7a, 0xc2, 0xf5, 0x0f, 0xc1, 0x64, 0xe7, 0x0b, 0xbd, 0x59, 0xa7, 0xe7, 0x65, 0x53, 0xc3, 0xf5, 0x55, 0x5b, 0xe1, 0x82, 0x30, 0x5a, 0x61, 0xcd, 0xa0, 0x89, 0x32, 0xdb, 0x87, 0xfc, 0x21, 0x8a, 0xab, 0x6d, 0x82, 0xa8, 0x42, 0x81, 0x4f, 0xf2, + /* (2^399)P */ 0xb3, 0xeb, 0x88, 0x18, 0xf6, 0x56, 0x96, 0xbf, 0xba, 0x5d, 0x71, 0xa1, 0x5a, 0xd1, 0x04, 0x7b, 0xd5, 0x46, 0x01, 0x74, 0xfe, 0x15, 0x25, 0xb7, 0xff, 0x0c, 0x24, 0x47, 0xac, 0xfd, 0xab, 0x47, 0x32, 0xe1, 0x6a, 0x4e, 0xca, 0xcf, 0x7f, 0xdd, 0xf8, 0xd2, 0x4b, 0x3b, 0xf5, 0x17, 0xba, 0xba, 0x8b, 0xa1, 0xec, 0x28, 0x3f, 0x97, 0xab, 0x2a, + /* (2^400)P */ 0x51, 0x38, 0xc9, 0x5e, 0xc6, 0xb3, 0x64, 0xf2, 0x24, 0x4d, 0x04, 0x7d, 0xc8, 0x39, 0x0c, 0x4a, 0xc9, 0x73, 0x74, 0x1b, 0x5c, 0xb2, 0xc5, 0x41, 0x62, 0xa0, 0x4c, 0x6d, 0x8d, 0x91, 0x9a, 0x7b, 0x88, 0xab, 0x9c, 0x7e, 0x23, 0xdb, 0x6f, 0xb5, 0x72, 0xd6, 0x47, 0x40, 0xef, 0x22, 0x58, 0x62, 0x19, 0x6c, 0x38, 0xba, 0x5b, 0x00, 0x30, 0x9f, + /* (2^401)P */ 0x65, 0xbb, 0x3b, 0x9b, 0xe9, 0xae, 0xbf, 0xbe, 0xe4, 0x13, 0x95, 0xf3, 0xe3, 0x77, 0xcb, 0xe4, 0x9a, 0x22, 0xb5, 0x4a, 0x08, 0x9d, 0xb3, 0x9e, 0x27, 0xe0, 0x15, 0x6c, 0x9f, 0x7e, 0x9a, 0x5e, 0x15, 0x45, 0x25, 0x8d, 0x01, 0x0a, 0xd2, 0x2b, 0xbd, 0x48, 0x06, 0x0d, 0x18, 0x97, 
0x4b, 0xdc, 0xbc, 0xf0, 0xcd, 0xb2, 0x52, 0x3c, 0xac, 0xf5, + /* (2^402)P */ 0x3e, 0xed, 0x47, 0x6b, 0x5c, 0xf6, 0x76, 0xd0, 0xe9, 0x15, 0xa3, 0xcb, 0x36, 0x00, 0x21, 0xa3, 0x79, 0x20, 0xa5, 0x3e, 0x88, 0x03, 0xcb, 0x7e, 0x63, 0xbb, 0xed, 0xa9, 0x13, 0x35, 0x16, 0xaf, 0x2e, 0xb4, 0x70, 0x14, 0x93, 0xfb, 0xc4, 0x9b, 0xd8, 0xb1, 0xbe, 0x43, 0xd1, 0x85, 0xb8, 0x97, 0xef, 0xea, 0x88, 0xa1, 0x25, 0x52, 0x62, 0x75, + /* (2^403)P */ 0x8e, 0x4f, 0xaa, 0x23, 0x62, 0x7e, 0x2b, 0x37, 0x89, 0x00, 0x11, 0x30, 0xc5, 0x33, 0x4a, 0x89, 0x8a, 0xe2, 0xfc, 0x5c, 0x6a, 0x75, 0xe5, 0xf7, 0x02, 0x4a, 0x9b, 0xf7, 0xb5, 0x6a, 0x85, 0x31, 0xd3, 0x5a, 0xcf, 0xc3, 0xf8, 0xde, 0x2f, 0xcf, 0xb5, 0x24, 0xf4, 0xe3, 0xa1, 0xad, 0x42, 0xae, 0x09, 0xb9, 0x2e, 0x04, 0x2d, 0x01, 0x22, 0x3f, + /* (2^404)P */ 0x41, 0x16, 0xfb, 0x7d, 0x50, 0xfd, 0xb5, 0xba, 0x88, 0x24, 0xba, 0xfd, 0x3d, 0xb2, 0x90, 0x15, 0xb7, 0xfa, 0xa2, 0xe1, 0x4c, 0x7d, 0xb9, 0xc6, 0xff, 0x81, 0x57, 0xb6, 0xc2, 0x9e, 0xcb, 0xc4, 0x35, 0xbd, 0x01, 0xb7, 0xaa, 0xce, 0xd0, 0xe9, 0xb5, 0xd6, 0x72, 0xbf, 0xd2, 0xee, 0xc7, 0xac, 0x94, 0xff, 0x29, 0x57, 0x02, 0x49, 0x09, 0xad, + /* (2^405)P */ 0x27, 0xa5, 0x78, 0x1b, 0xbf, 0x6b, 0xaf, 0x0b, 0x8c, 0xd9, 0xa8, 0x37, 0xb0, 0x67, 0x18, 0xb6, 0xc7, 0x05, 0x8a, 0x67, 0x03, 0x30, 0x62, 0x6e, 0x56, 0x82, 0xa9, 0x54, 0x3e, 0x0c, 0x4e, 0x07, 0xe1, 0x5a, 0x38, 0xed, 0xfa, 0xc8, 0x55, 0x6b, 0x08, 0xa3, 0x6b, 0x64, 0x2a, 0x15, 0xd6, 0x39, 0x6f, 0x47, 0x99, 0x42, 0x3f, 0x33, 0x84, 0x8f, + /* (2^406)P */ 0xbc, 0x45, 0x29, 0x81, 0x0e, 0xa4, 0xc5, 0x72, 0x3a, 0x10, 0xe1, 0xc4, 0x1e, 0xda, 0xc3, 0xfe, 0xb0, 0xce, 0xd2, 0x13, 0x34, 0x67, 0x21, 0xc6, 0x7e, 0xf9, 0x8c, 0xff, 0x39, 0x50, 0xae, 0x92, 0x60, 0x35, 0x2f, 0x8b, 0x6e, 0xc9, 0xc1, 0x27, 0x3a, 0x94, 0x66, 0x3e, 0x26, 0x84, 0x93, 0xc8, 0x6c, 0xcf, 0xd2, 0x03, 0xa1, 0x10, 0xcf, 0xb7, + /* (2^407)P */ 0x64, 0xda, 0x19, 0xf6, 0xc5, 0x73, 0x17, 0x44, 0x88, 0x81, 0x07, 0x0d, 0x34, 0xb2, 0x75, 0xf9, 0xd9, 0xe2, 0xe0, 0x8b, 0x71, 0xcf, 0x72, 0x34, 0x83, 0xb4, 
0xce, 0xfc, 0xd7, 0x29, 0x09, 0x5a, 0x98, 0xbf, 0x14, 0xac, 0x77, 0x55, 0x38, 0x47, 0x5b, 0x0f, 0x40, 0x24, 0xe5, 0xa5, 0xa6, 0xac, 0x2d, 0xa6, 0xff, 0x9c, 0x73, 0xfe, 0x5c, 0x7e, + /* (2^408)P */ 0x1e, 0x33, 0xcc, 0x68, 0xb2, 0xbc, 0x8c, 0x93, 0xaf, 0xcc, 0x38, 0xf8, 0xd9, 0x16, 0x72, 0x50, 0xac, 0xd9, 0xb5, 0x0b, 0x9a, 0xbe, 0x46, 0x7a, 0xf1, 0xee, 0xf1, 0xad, 0xec, 0x5b, 0x59, 0x27, 0x9c, 0x05, 0xa3, 0x87, 0xe0, 0x37, 0x2c, 0x83, 0xce, 0xb3, 0x65, 0x09, 0x8e, 0xc3, 0x9c, 0xbf, 0x6a, 0xa2, 0x00, 0xcc, 0x12, 0x36, 0xc5, 0x95, + /* (2^409)P */ 0x36, 0x11, 0x02, 0x14, 0x9c, 0x3c, 0xeb, 0x2f, 0x23, 0x5b, 0x6b, 0x2b, 0x08, 0x54, 0x53, 0xac, 0xb2, 0xa3, 0xe0, 0x26, 0x62, 0x3c, 0xe4, 0xe1, 0x81, 0xee, 0x13, 0x3e, 0xa4, 0x97, 0xef, 0xf9, 0x92, 0x27, 0x01, 0xce, 0x54, 0x8b, 0x3e, 0x31, 0xbe, 0xa7, 0x88, 0xcf, 0x47, 0x99, 0x3c, 0x10, 0x6f, 0x60, 0xb3, 0x06, 0x4e, 0xee, 0x1b, 0xf0, + /* (2^410)P */ 0x59, 0x49, 0x66, 0xcf, 0x22, 0xe6, 0xf6, 0x73, 0xfe, 0xa3, 0x1c, 0x09, 0xfa, 0x5f, 0x65, 0xa8, 0xf0, 0x82, 0xc2, 0xef, 0x16, 0x63, 0x6e, 0x79, 0x69, 0x51, 0x39, 0x07, 0x65, 0xc4, 0x81, 0xec, 0x73, 0x0f, 0x15, 0x93, 0xe1, 0x30, 0x33, 0xe9, 0x37, 0x86, 0x42, 0x4c, 0x1f, 0x9b, 0xad, 0xee, 0x3f, 0xf1, 0x2a, 0x8e, 0x6a, 0xa3, 0xc8, 0x35, + /* (2^411)P */ 0x1e, 0x49, 0xf1, 0xdd, 0xd2, 0x9c, 0x8e, 0x78, 0xb2, 0x06, 0xe4, 0x6a, 0xab, 0x3a, 0xdc, 0xcd, 0xf4, 0xeb, 0xe1, 0xe7, 0x2f, 0xaa, 0xeb, 0x40, 0x31, 0x9f, 0xb9, 0xab, 0x13, 0xa9, 0x78, 0xbf, 0x38, 0x89, 0x0e, 0x85, 0x14, 0x8b, 0x46, 0x76, 0x14, 0xda, 0xcf, 0x33, 0xc8, 0x79, 0xd3, 0xd5, 0xa3, 0x6a, 0x69, 0x45, 0x70, 0x34, 0xc3, 0xe9, + /* (2^412)P */ 0x5e, 0xe7, 0x78, 0xe9, 0x24, 0xcc, 0xe9, 0xf4, 0xc8, 0x6b, 0xe0, 0xfb, 0x3a, 0xbe, 0xcc, 0x42, 0x4a, 0x00, 0x22, 0xf8, 0xe6, 0x32, 0xbe, 0x6d, 0x18, 0x55, 0x60, 0xe9, 0x72, 0x69, 0x50, 0x56, 0xca, 0x04, 0x18, 0x38, 0xa1, 0xee, 0xd8, 0x38, 0x3c, 0xa7, 0x70, 0xe2, 0xb9, 0x4c, 0xa0, 0xc8, 0x89, 0x72, 0xcf, 0x49, 0x7f, 0xdf, 0xbc, 0x67, + /* (2^413)P */ 0x1d, 0x17, 0xcb, 0x0b, 0xbd, 0xb2, 
0x36, 0xe3, 0xa8, 0x99, 0x31, 0xb6, 0x26, 0x9c, 0x0c, 0x74, 0xaf, 0x4d, 0x24, 0x61, 0xcf, 0x31, 0x7b, 0xed, 0xdd, 0xc3, 0xf6, 0x32, 0x70, 0xfe, 0x17, 0xf6, 0x51, 0x37, 0x65, 0xce, 0x5d, 0xaf, 0xa5, 0x2f, 0x2a, 0xfe, 0x00, 0x71, 0x7c, 0x50, 0xbe, 0x21, 0xc7, 0xed, 0xc6, 0xfc, 0x67, 0xcf, 0x9c, 0xdd, + /* (2^414)P */ 0x26, 0x3e, 0xf8, 0xbb, 0xd0, 0xb1, 0x01, 0xd8, 0xeb, 0x0b, 0x62, 0x87, 0x35, 0x4c, 0xde, 0xca, 0x99, 0x9c, 0x6d, 0xf7, 0xb6, 0xf0, 0x57, 0x0a, 0x52, 0x29, 0x6a, 0x3f, 0x26, 0x31, 0x04, 0x07, 0x2a, 0xc9, 0xfa, 0x9b, 0x0e, 0x62, 0x8e, 0x72, 0xf2, 0xad, 0xce, 0xb6, 0x35, 0x7a, 0xc1, 0xae, 0x35, 0xc7, 0xa3, 0x14, 0xcf, 0x0c, 0x28, 0xb7, + /* (2^415)P */ 0xa6, 0xf1, 0x32, 0x3a, 0x20, 0xd2, 0x24, 0x97, 0xcf, 0x5d, 0x37, 0x99, 0xaf, 0x33, 0x7a, 0x5b, 0x7a, 0xcc, 0x4e, 0x41, 0x38, 0xb1, 0x4e, 0xad, 0xc9, 0xd9, 0x71, 0x7e, 0xb2, 0xf5, 0xd5, 0x01, 0x6c, 0x4d, 0xfd, 0xa1, 0xda, 0x03, 0x38, 0x9b, 0x3d, 0x92, 0x92, 0xf2, 0xca, 0xbf, 0x1f, 0x24, 0xa4, 0xbb, 0x30, 0x6a, 0x74, 0x56, 0xc8, 0xce, + /* (2^416)P */ 0x27, 0xf4, 0xed, 0xc9, 0xc3, 0xb1, 0x79, 0x85, 0xbe, 0xf6, 0xeb, 0xf3, 0x55, 0xc7, 0xaa, 0xa6, 0xe9, 0x07, 0x5d, 0xf4, 0xeb, 0xa6, 0x81, 0xe3, 0x0e, 0xcf, 0xa3, 0xc1, 0xef, 0xe7, 0x34, 0xb2, 0x03, 0x73, 0x8a, 0x91, 0xf1, 0xad, 0x05, 0xc7, 0x0b, 0x43, 0x99, 0x12, 0x31, 0xc8, 0xc7, 0xc5, 0xa4, 0x3d, 0xcd, 0xe5, 0x4e, 0x6d, 0x24, 0xdd, + /* (2^417)P */ 0x61, 0x54, 0xd0, 0x95, 0x2c, 0x45, 0x75, 0xac, 0xb5, 0x1a, 0x9d, 0x11, 0xeb, 0xed, 0x6b, 0x57, 0xa3, 0xe6, 0xcd, 0x77, 0xd4, 0x83, 0x8e, 0x39, 0xf1, 0x0f, 0x98, 0xcb, 0x40, 0x02, 0x6e, 0x10, 0x82, 0x9e, 0xb4, 0x93, 0x76, 0xd7, 0x97, 0xa3, 0x53, 0x12, 0x86, 0xc6, 0x15, 0x78, 0x73, 0x93, 0xe7, 0x7f, 0xcf, 0x1f, 0xbf, 0xcd, 0xd2, 0x7a, + /* (2^418)P */ 0xc2, 0x21, 0xdc, 0xd5, 0x69, 0xff, 0xca, 0x49, 0x3a, 0xe1, 0xc3, 0x69, 0x41, 0x56, 0xc1, 0x76, 0x63, 0x24, 0xbd, 0x64, 0x1b, 0x3d, 0x92, 0xf9, 0x13, 0x04, 0x25, 0xeb, 0x27, 0xa6, 0xef, 0x39, 0x3a, 0x80, 0xe0, 0xf8, 0x27, 0xee, 0xc9, 0x49, 0x77, 0xef, 0x3f, 0x29, 0x3d, 
0x5e, 0xe6, 0x66, 0x83, 0xd1, 0xf6, 0xfe, 0x9d, 0xbc, 0xf1, 0x96, + /* (2^419)P */ 0x6b, 0xc6, 0x99, 0x26, 0x3c, 0xf3, 0x63, 0xf9, 0xc7, 0x29, 0x8c, 0x52, 0x62, 0x2d, 0xdc, 0x8a, 0x66, 0xce, 0x2c, 0xa7, 0xe4, 0xf0, 0xd7, 0x37, 0x17, 0x1e, 0xe4, 0xa3, 0x53, 0x7b, 0x29, 0x8e, 0x60, 0x99, 0xf9, 0x0c, 0x7c, 0x6f, 0xa2, 0xcc, 0x9f, 0x80, 0xdd, 0x5e, 0x46, 0xaa, 0x0d, 0x6c, 0xc9, 0x6c, 0xf7, 0x78, 0x5b, 0x38, 0xe3, 0x24, + /* (2^420)P */ 0x4b, 0x75, 0x6a, 0x2f, 0x08, 0xe1, 0x72, 0x76, 0xab, 0x82, 0x96, 0xdf, 0x3b, 0x1f, 0x9b, 0xd8, 0xed, 0xdb, 0xcd, 0x15, 0x09, 0x5a, 0x1e, 0xb7, 0xc5, 0x26, 0x72, 0x07, 0x0c, 0x50, 0xcd, 0x3b, 0x4d, 0x3f, 0xa2, 0x67, 0xc2, 0x02, 0x61, 0x2e, 0x68, 0xe9, 0x6f, 0xf0, 0x21, 0x2a, 0xa7, 0x3b, 0x88, 0x04, 0x11, 0x64, 0x49, 0x0d, 0xb4, 0x46, + /* (2^421)P */ 0x63, 0x85, 0xf3, 0xc5, 0x2b, 0x5a, 0x9f, 0xf0, 0x17, 0xcb, 0x45, 0x0a, 0xf3, 0x6e, 0x7e, 0xb0, 0x7c, 0xbc, 0xf0, 0x4f, 0x3a, 0xb0, 0xbc, 0x36, 0x36, 0x52, 0x51, 0xcb, 0xfe, 0x9a, 0xcb, 0xe8, 0x7e, 0x4b, 0x06, 0x7f, 0xaa, 0x35, 0xc8, 0x0e, 0x7a, 0x30, 0xa3, 0xb1, 0x09, 0xbb, 0x86, 0x4c, 0xbe, 0xb8, 0xbd, 0xe0, 0x32, 0xa5, 0xd4, 0xf7, + /* (2^422)P */ 0x7d, 0x50, 0x37, 0x68, 0x4e, 0x22, 0xb2, 0x2c, 0xd5, 0x0f, 0x2b, 0x6d, 0xb1, 0x51, 0xf2, 0x82, 0xe9, 0x98, 0x7c, 0x50, 0xc7, 0x96, 0x7e, 0x0e, 0xdc, 0xb1, 0x0e, 0xb2, 0x63, 0x8c, 0x30, 0x37, 0x72, 0x21, 0x9c, 0x61, 0xc2, 0xa7, 0x33, 0xd9, 0xb2, 0x63, 0x93, 0xd1, 0x6b, 0x6a, 0x73, 0xa5, 0x58, 0x80, 0xff, 0x04, 0xc7, 0x83, 0x21, 0x29, + /* (2^423)P */ 0x29, 0x04, 0xbc, 0x99, 0x39, 0xc9, 0x58, 0xc9, 0x6b, 0x17, 0xe8, 0x90, 0xb3, 0xe6, 0xa9, 0xb6, 0x28, 0x9b, 0xcb, 0x3b, 0x28, 0x90, 0x68, 0x71, 0xff, 0xcf, 0x08, 0x78, 0xc9, 0x8d, 0xa8, 0x4e, 0x43, 0xd1, 0x1c, 0x9e, 0xa4, 0xe3, 0xdf, 0xbf, 0x92, 0xf4, 0xf9, 0x41, 0xba, 0x4d, 0x1c, 0xf9, 0xdd, 0x74, 0x76, 0x1c, 0x6e, 0x3e, 0x94, 0x87, + /* (2^424)P */ 0xe4, 0xda, 0xc5, 0xd7, 0xfb, 0x87, 0xc5, 0x4d, 0x6b, 0x19, 0xaa, 0xb9, 0xbc, 0x8c, 0xf2, 0x8a, 0xd8, 0x5d, 0xdb, 0x4d, 0xef, 0xa6, 0xf2, 0x65, 0xf1, 
0x22, 0x9c, 0xf1, 0x46, 0x30, 0x71, 0x7c, 0xe4, 0x53, 0x8e, 0x55, 0x2e, 0x9c, 0x9a, 0x31, 0x2a, 0xc3, 0xab, 0x0f, 0xde, 0xe4, 0xbe, 0xd8, 0x96, 0x50, 0x6e, 0x0c, 0x54, 0x49, 0xe6, 0xec, + /* (2^425)P */ 0x3c, 0x1d, 0x5a, 0xa5, 0xda, 0xad, 0xdd, 0xc2, 0xae, 0xac, 0x6f, 0x86, 0x75, 0x31, 0x91, 0x64, 0x45, 0x9d, 0xa4, 0xf0, 0x81, 0xf1, 0x0e, 0xba, 0x74, 0xaf, 0x7b, 0xcd, 0x6f, 0xfe, 0xac, 0x4e, 0xdb, 0x4e, 0x45, 0x35, 0x36, 0xc5, 0xc0, 0x6c, 0x3d, 0x64, 0xf4, 0xd8, 0x07, 0x62, 0xd1, 0xec, 0xf3, 0xfc, 0x93, 0xc9, 0x28, 0x0c, 0x2c, 0xf3, + /* (2^426)P */ 0x0c, 0x69, 0x2b, 0x5c, 0xb6, 0x41, 0x69, 0xf1, 0xa4, 0xf1, 0x5b, 0x75, 0x4c, 0x42, 0x8b, 0x47, 0xeb, 0x69, 0xfb, 0xa8, 0xe6, 0xf9, 0x7b, 0x48, 0x50, 0xaf, 0xd3, 0xda, 0xb2, 0x35, 0x10, 0xb5, 0x5b, 0x40, 0x90, 0x39, 0xc9, 0x07, 0x06, 0x73, 0x26, 0x20, 0x95, 0x01, 0xa4, 0x2d, 0xf0, 0xe7, 0x2e, 0x00, 0x7d, 0x41, 0x09, 0x68, 0x13, 0xc4, + /* (2^427)P */ 0xbe, 0x38, 0x78, 0xcf, 0xc9, 0x4f, 0x36, 0xca, 0x09, 0x61, 0x31, 0x3c, 0x57, 0x2e, 0xec, 0x17, 0xa4, 0x7d, 0x19, 0x2b, 0x9b, 0x5b, 0xbe, 0x8f, 0xd6, 0xc5, 0x2f, 0x86, 0xf2, 0x64, 0x76, 0x17, 0x00, 0x6e, 0x1a, 0x8c, 0x67, 0x1b, 0x68, 0xeb, 0x15, 0xa2, 0xd6, 0x09, 0x91, 0xdd, 0x23, 0x0d, 0x98, 0xb2, 0x10, 0x19, 0x55, 0x9b, 0x63, 0xf2, + /* (2^428)P */ 0x51, 0x1f, 0x93, 0xea, 0x2a, 0x3a, 0xfa, 0x41, 0xc0, 0x57, 0xfb, 0x74, 0xa6, 0x65, 0x09, 0x56, 0x14, 0xb6, 0x12, 0xaa, 0xb3, 0x1a, 0x8d, 0x3b, 0x76, 0x91, 0x7a, 0x23, 0x56, 0x9c, 0x6a, 0xc0, 0xe0, 0x3c, 0x3f, 0xb5, 0x1a, 0xf4, 0x57, 0x71, 0x93, 0x2b, 0xb1, 0xa7, 0x70, 0x57, 0x22, 0x80, 0xf5, 0xb8, 0x07, 0x77, 0x87, 0x0c, 0xbe, 0x83, + /* (2^429)P */ 0x07, 0x9b, 0x0e, 0x52, 0x38, 0x63, 0x13, 0x86, 0x6a, 0xa6, 0xb4, 0xd2, 0x60, 0x68, 0x9a, 0x99, 0x82, 0x0a, 0x04, 0x5f, 0x89, 0x7a, 0x1a, 0x2a, 0xae, 0x2d, 0x35, 0x0c, 0x1e, 0xad, 0xef, 0x4f, 0x9a, 0xfc, 0xc8, 0xd9, 0xcf, 0x9d, 0x48, 0x71, 0xa5, 0x55, 0x79, 0x73, 0x39, 0x1b, 0xd8, 0x73, 0xec, 0x9b, 0x03, 0x16, 0xd8, 0x82, 0xf7, 0x67, + /* (2^430)P */ 0x52, 0x67, 0x42, 0x21, 0xc9, 
0x40, 0x78, 0x82, 0x2b, 0x95, 0x2d, 0x20, 0x92, 0xd1, 0xe2, 0x61, 0x25, 0xb0, 0xc6, 0x9c, 0x20, 0x59, 0x8e, 0x28, 0x6f, 0xf3, 0xfd, 0xd3, 0xc1, 0x32, 0x43, 0xc9, 0xa6, 0x08, 0x7a, 0x77, 0x9c, 0x4c, 0x8c, 0x33, 0x71, 0x13, 0x69, 0xe3, 0x52, 0x30, 0xa7, 0xf5, 0x07, 0x67, 0xac, 0xad, 0x46, 0x8a, 0x26, 0x25, + /* (2^431)P */ 0xda, 0x86, 0xc4, 0xa2, 0x71, 0x56, 0xdd, 0xd2, 0x48, 0xd3, 0xde, 0x42, 0x63, 0x01, 0xa7, 0x2c, 0x92, 0x83, 0x6f, 0x2e, 0xd8, 0x1e, 0x3f, 0xc1, 0xc5, 0x42, 0x4e, 0x34, 0x19, 0x54, 0x6e, 0x35, 0x2c, 0x51, 0x2e, 0xfd, 0x0f, 0x9a, 0x45, 0x66, 0x5e, 0x4a, 0x83, 0xda, 0x0a, 0x53, 0x68, 0x63, 0xfa, 0xce, 0x47, 0x20, 0xd3, 0x34, 0xba, 0x0d, + /* (2^432)P */ 0xd0, 0xe9, 0x64, 0xa4, 0x61, 0x4b, 0x86, 0xe5, 0x93, 0x6f, 0xda, 0x0e, 0x31, 0x7e, 0x6e, 0xe3, 0xc6, 0x73, 0xd8, 0xa3, 0x08, 0x57, 0x52, 0xcd, 0x51, 0x63, 0x1d, 0x9f, 0x93, 0x00, 0x62, 0x91, 0x26, 0x21, 0xa7, 0xdd, 0x25, 0x0f, 0x09, 0x0d, 0x35, 0xad, 0xcf, 0x11, 0x8e, 0x6e, 0xe8, 0xae, 0x1d, 0x95, 0xcb, 0x88, 0xf8, 0x70, 0x7b, 0x91, + /* (2^433)P */ 0x0c, 0x19, 0x5c, 0xd9, 0x8d, 0xda, 0x9d, 0x2c, 0x90, 0x54, 0x65, 0xe8, 0xb6, 0x35, 0x50, 0xae, 0xea, 0xae, 0x43, 0xb7, 0x1e, 0x99, 0x8b, 0x4c, 0x36, 0x4e, 0xe4, 0x1e, 0xc4, 0x64, 0x43, 0xb6, 0xeb, 0xd4, 0xe9, 0x60, 0x22, 0xee, 0xcf, 0xb8, 0x52, 0x1b, 0xf0, 0x04, 0xce, 0xbc, 0x2b, 0xf0, 0xbe, 0xcd, 0x44, 0x74, 0x1e, 0x1f, 0x63, 0xf9, + /* (2^434)P */ 0xe1, 0x3f, 0x95, 0x94, 0xb2, 0xb6, 0x31, 0xa9, 0x1b, 0xdb, 0xfd, 0x0e, 0xdb, 0xdd, 0x1a, 0x22, 0x78, 0x60, 0x9f, 0x75, 0x5f, 0x93, 0x06, 0x0c, 0xd8, 0xbb, 0xa2, 0x85, 0x2b, 0x5e, 0xc0, 0x9b, 0xa8, 0x5d, 0xaf, 0x93, 0x91, 0x91, 0x47, 0x41, 0x1a, 0xfc, 0xb4, 0x51, 0x85, 0xad, 0x69, 0x4d, 0x73, 0x69, 0xd5, 0x4e, 0x82, 0xfb, 0x66, 0xcb, + /* (2^435)P */ 0x7c, 0xbe, 0xc7, 0x51, 0xc4, 0x74, 0x6e, 0xab, 0xfd, 0x41, 0x4f, 0x76, 0x4f, 0x24, 0x03, 0xd6, 0x2a, 0xb7, 0x42, 0xb4, 0xda, 0x41, 0x2c, 0x82, 0x48, 0x4c, 0x7f, 0x6f, 0x25, 0x5d, 0x36, 0xd4, 0x69, 0xf5, 0xef, 0x02, 0x81, 0xea, 0x6f, 0x19, 0x69, 0xe8, 0x6f, 0x5b, 
0x2f, 0x14, 0x0e, 0x6f, 0x89, 0xb4, 0xb5, 0xd8, 0xae, 0xef, 0x7b, 0x87, + /* (2^436)P */ 0xe9, 0x91, 0xa0, 0x8b, 0xc9, 0xe0, 0x01, 0x90, 0x37, 0xc1, 0x6f, 0xdc, 0x5e, 0xf7, 0xbf, 0x43, 0x00, 0xaa, 0x10, 0x76, 0x76, 0x18, 0x6e, 0x19, 0x1e, 0x94, 0x50, 0x11, 0x0a, 0xd1, 0xe2, 0xdb, 0x08, 0x21, 0xa0, 0x1f, 0xdb, 0x54, 0xfe, 0xea, 0x6e, 0xa3, 0x68, 0x56, 0x87, 0x0b, 0x22, 0x4e, 0x66, 0xf3, 0x82, 0x82, 0x00, 0xcd, 0xd4, 0x12, + /* (2^437)P */ 0x25, 0x8e, 0x24, 0x77, 0x64, 0x4c, 0xe0, 0xf8, 0x18, 0xc0, 0xdc, 0xc7, 0x1b, 0x35, 0x65, 0xde, 0x67, 0x41, 0x5e, 0x6f, 0x90, 0x82, 0xa7, 0x2e, 0x6d, 0xf1, 0x47, 0xb4, 0x92, 0x9c, 0xfd, 0x6a, 0x9a, 0x41, 0x36, 0x20, 0x24, 0x58, 0xc3, 0x59, 0x07, 0x9a, 0xfa, 0x9f, 0x03, 0xcb, 0xc7, 0x69, 0x37, 0x60, 0xe1, 0xab, 0x13, 0x72, 0xee, 0xa2, + /* (2^438)P */ 0x74, 0x78, 0xfb, 0x13, 0xcb, 0x8e, 0x37, 0x1a, 0xf6, 0x1d, 0x17, 0x83, 0x06, 0xd4, 0x27, 0x06, 0x21, 0xe8, 0xda, 0xdf, 0x6b, 0xf3, 0x83, 0x6b, 0x34, 0x8a, 0x8c, 0xee, 0x01, 0x05, 0x5b, 0xed, 0xd3, 0x1b, 0xc9, 0x64, 0x83, 0xc9, 0x49, 0xc2, 0x57, 0x1b, 0xdd, 0xcf, 0xf1, 0x9d, 0x63, 0xee, 0x1c, 0x0d, 0xa0, 0x0a, 0x73, 0x1f, 0x5b, 0x32, + /* (2^439)P */ 0x29, 0xce, 0x1e, 0xc0, 0x6a, 0xf5, 0xeb, 0x99, 0x5a, 0x39, 0x23, 0xe9, 0xdd, 0xac, 0x44, 0x88, 0xbc, 0x80, 0x22, 0xde, 0x2c, 0xcb, 0xa8, 0x3b, 0xff, 0xf7, 0x6f, 0xc7, 0x71, 0x72, 0xa8, 0xa3, 0xf6, 0x4d, 0xc6, 0x75, 0xda, 0x80, 0xdc, 0xd9, 0x30, 0xd9, 0x07, 0x50, 0x5a, 0x54, 0x7d, 0xda, 0x39, 0x6f, 0x78, 0x94, 0xbf, 0x25, 0x98, 0xdc, + /* (2^440)P */ 0x01, 0x26, 0x62, 0x44, 0xfb, 0x0f, 0x11, 0x72, 0x73, 0x0a, 0x16, 0xc7, 0x16, 0x9c, 0x9b, 0x37, 0xd8, 0xff, 0x4f, 0xfe, 0x57, 0xdb, 0xae, 0xef, 0x7d, 0x94, 0x30, 0x04, 0x70, 0x83, 0xde, 0x3c, 0xd4, 0xb5, 0x70, 0xda, 0xa7, 0x55, 0xc8, 0x19, 0xe1, 0x36, 0x15, 0x61, 0xe7, 0x3b, 0x7d, 0x85, 0xbb, 0xf3, 0x42, 0x5a, 0x94, 0xf4, 0x53, 0x2a, + /* (2^441)P */ 0x14, 0x60, 0xa6, 0x0b, 0x83, 0xe1, 0x23, 0x77, 0xc0, 0xce, 0x50, 0xed, 0x35, 0x8d, 0x98, 0x99, 0x7d, 0xf5, 0x8d, 0xce, 0x94, 0x25, 0xc8, 0x0f, 
0x6d, 0xfa, 0x4a, 0xa4, 0x3a, 0x1f, 0x66, 0xfb, 0x5a, 0x64, 0xaf, 0x8b, 0x54, 0x54, 0x44, 0x3f, 0x5b, 0x88, 0x61, 0xe4, 0x48, 0x45, 0x26, 0x20, 0xbe, 0x0d, 0x06, 0xbb, 0x65, 0x59, 0xe1, 0x36, + /* (2^442)P */ 0xb7, 0x98, 0xce, 0xa3, 0xe3, 0xee, 0x11, 0x1b, 0x9e, 0x24, 0x59, 0x75, 0x31, 0x37, 0x44, 0x6f, 0x6b, 0x9e, 0xec, 0xb7, 0x44, 0x01, 0x7e, 0xab, 0xbb, 0x69, 0x5d, 0x11, 0xb0, 0x30, 0x64, 0xea, 0x91, 0xb4, 0x7a, 0x8c, 0x02, 0x4c, 0xb9, 0x10, 0xa7, 0xc7, 0x79, 0xe6, 0xdc, 0x77, 0xe3, 0xc8, 0xef, 0x3e, 0xf9, 0x38, 0x81, 0xce, 0x9a, 0xb2, + /* (2^443)P */ 0x91, 0x12, 0x76, 0xd0, 0x10, 0xb4, 0xaf, 0xe1, 0x89, 0x3a, 0x93, 0x6b, 0x5c, 0x19, 0x5f, 0x24, 0xed, 0x04, 0x92, 0xc7, 0xf0, 0x00, 0x08, 0xc1, 0x92, 0xff, 0x90, 0xdb, 0xb2, 0xbf, 0xdf, 0x49, 0xcd, 0xbd, 0x5c, 0x6e, 0xbf, 0x16, 0xbb, 0x61, 0xf9, 0x20, 0x33, 0x35, 0x93, 0x11, 0xbc, 0x59, 0x69, 0xce, 0x18, 0x9f, 0xf8, 0x7b, 0xa1, 0x6e, + /* (2^444)P */ 0xa1, 0xf4, 0xaf, 0xad, 0xf8, 0xe6, 0x99, 0xd2, 0xa1, 0x4d, 0xde, 0x56, 0xc9, 0x7b, 0x0b, 0x11, 0x3e, 0xbf, 0x89, 0x1a, 0x9a, 0x90, 0xe5, 0xe2, 0xa6, 0x37, 0x88, 0xa1, 0x68, 0x59, 0xae, 0x8c, 0xec, 0x02, 0x14, 0x8d, 0xb7, 0x2e, 0x25, 0x75, 0x7f, 0x76, 0x1a, 0xd3, 0x4d, 0xad, 0x8a, 0x00, 0x6c, 0x96, 0x49, 0xa4, 0xc3, 0x2e, 0x5c, 0x7b, + /* (2^445)P */ 0x26, 0x53, 0xf7, 0xda, 0xa8, 0x01, 0x14, 0xb1, 0x63, 0xe3, 0xc3, 0x89, 0x88, 0xb0, 0x85, 0x40, 0x2b, 0x26, 0x9a, 0x10, 0x1a, 0x70, 0x33, 0xf4, 0x50, 0x9d, 0x4d, 0xd8, 0x64, 0xc6, 0x0f, 0xe1, 0x17, 0xc8, 0x10, 0x4b, 0xfc, 0xa0, 0xc9, 0xba, 0x2c, 0x98, 0x09, 0xf5, 0x84, 0xb6, 0x7c, 0x4e, 0xa3, 0xe3, 0x81, 0x1b, 0x32, 0x60, 0x02, 0xdd, + /* (2^446)P */ 0xa3, 0xe5, 0x86, 0xd4, 0x43, 0xa8, 0xd1, 0x98, 0x9d, 0x9d, 0xdb, 0x04, 0xcf, 0x6e, 0x35, 0x05, 0x30, 0x53, 0x3b, 0xbc, 0x90, 0x00, 0x4a, 0xc5, 0x40, 0x2a, 0x0f, 0xde, 0x1a, 0xd7, 0x36, 0x27, 0x44, 0x62, 0xa6, 0xac, 0x9d, 0xd2, 0x70, 0x69, 0x14, 0x39, 0x9b, 0xd1, 0xc3, 0x0a, 0x3a, 0x82, 0x0e, 0xf1, 0x94, 0xd7, 0x42, 0x94, 0xd5, 0x7d, + /* (2^447)P */ 0x04, 0xc0, 0x6e, 0x12, 
0x90, 0x70, 0xf9, 0xdf, 0xf7, 0xc9, 0x86, 0xc0, 0xe6, 0x92, 0x8b, 0x0a, 0xa1, 0xc1, 0x3b, 0xcc, 0x33, 0xb7, 0xf0, 0xeb, 0x51, 0x50, 0x80, 0x20, 0x69, 0x1c, 0x4f, 0x89, 0x05, 0x1e, 0xe4, 0x7a, 0x0a, 0xc2, 0xf0, 0xf5, 0x78, 0x91, 0x76, 0x34, 0x45, 0xdc, 0x24, 0x53, 0x24, 0x98, 0xe2, 0x73, 0x6f, 0xe6, 0x46, 0x67, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/constants.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/constants.go new file mode 100644 index 00000000000..b6b236e5d3d --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/constants.go @@ -0,0 +1,71 @@ +package goldilocks + +import fp "github.com/cloudflare/circl/math/fp448" + +var ( + // genX is the x-coordinate of the generator of Goldilocks curve. + genX = fp.Elt{ + 0x5e, 0xc0, 0x0c, 0xc7, 0x2b, 0xa8, 0x26, 0x26, + 0x8e, 0x93, 0x00, 0x8b, 0xe1, 0x80, 0x3b, 0x43, + 0x11, 0x65, 0xb6, 0x2a, 0xf7, 0x1a, 0xae, 0x12, + 0x64, 0xa4, 0xd3, 0xa3, 0x24, 0xe3, 0x6d, 0xea, + 0x67, 0x17, 0x0f, 0x47, 0x70, 0x65, 0x14, 0x9e, + 0xda, 0x36, 0xbf, 0x22, 0xa6, 0x15, 0x1d, 0x22, + 0xed, 0x0d, 0xed, 0x6b, 0xc6, 0x70, 0x19, 0x4f, + } + // genY is the y-coordinate of the generator of Goldilocks curve. + genY = fp.Elt{ + 0x14, 0xfa, 0x30, 0xf2, 0x5b, 0x79, 0x08, 0x98, + 0xad, 0xc8, 0xd7, 0x4e, 0x2c, 0x13, 0xbd, 0xfd, + 0xc4, 0x39, 0x7c, 0xe6, 0x1c, 0xff, 0xd3, 0x3a, + 0xd7, 0xc2, 0xa0, 0x05, 0x1e, 0x9c, 0x78, 0x87, + 0x40, 0x98, 0xa3, 0x6c, 0x73, 0x73, 0xea, 0x4b, + 0x62, 0xc7, 0xc9, 0x56, 0x37, 0x20, 0x76, 0x88, + 0x24, 0xbc, 0xb6, 0x6e, 0x71, 0x46, 0x3f, 0x69, + } + // paramD is -39081 in Fp. 
+ paramD = fp.Elt{ + 0x56, 0x67, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + } + // order is 2^446-0x8335dc163bb124b65129c96fde933d8d723a70aadc873d6d54a7bb0d, + // which is the number of points in the prime subgroup. + order = Scalar{ + 0xf3, 0x44, 0x58, 0xab, 0x92, 0xc2, 0x78, 0x23, + 0x55, 0x8f, 0xc5, 0x8d, 0x72, 0xc2, 0x6c, 0x21, + 0x90, 0x36, 0xd6, 0xae, 0x49, 0xdb, 0x4e, 0xc4, + 0xe9, 0x23, 0xca, 0x7c, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, + } + // residue448 is 2^448 mod order. + residue448 = [4]uint64{ + 0x721cf5b5529eec34, 0x7a4cf635c8e9c2ab, 0xeec492d944a725bf, 0x20cd77058, + } + // invFour is 1/4 mod order. + invFour = Scalar{ + 0x3d, 0x11, 0xd6, 0xaa, 0xa4, 0x30, 0xde, 0x48, + 0xd5, 0x63, 0x71, 0xa3, 0x9c, 0x30, 0x5b, 0x08, + 0xa4, 0x8d, 0xb5, 0x6b, 0xd2, 0xb6, 0x13, 0x71, + 0xfa, 0x88, 0x32, 0xdf, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x0f, + } + // paramDTwist is -39082 in Fp. The D parameter of the twist curve. 
+ paramDTwist = fp.Elt{ + 0x55, 0x67, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + } +) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/curve.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/curve.go new file mode 100644 index 00000000000..5a939100d2c --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/curve.go @@ -0,0 +1,80 @@ +// Package goldilocks provides elliptic curve operations over the goldilocks curve. +package goldilocks + +import fp "github.com/cloudflare/circl/math/fp448" + +// Curve is the Goldilocks curve x^2+y^2=z^2-39081x^2y^2. +type Curve struct{} + +// Identity returns the identity point. +func (Curve) Identity() *Point { + return &Point{ + y: fp.One(), + z: fp.One(), + } +} + +// IsOnCurve returns true if the point lies on the curve. +func (Curve) IsOnCurve(P *Point) bool { + x2, y2, t, t2, z2 := &fp.Elt{}, &fp.Elt{}, &fp.Elt{}, &fp.Elt{}, &fp.Elt{} + rhs, lhs := &fp.Elt{}, &fp.Elt{} + fp.Mul(t, &P.ta, &P.tb) // t = ta*tb + fp.Sqr(x2, &P.x) // x^2 + fp.Sqr(y2, &P.y) // y^2 + fp.Sqr(z2, &P.z) // z^2 + fp.Sqr(t2, t) // t^2 + fp.Add(lhs, x2, y2) // x^2 + y^2 + fp.Mul(rhs, t2, ¶mD) // dt^2 + fp.Add(rhs, rhs, z2) // z^2 + dt^2 + fp.Sub(lhs, lhs, rhs) // x^2 + y^2 - (z^2 + dt^2) + eq0 := fp.IsZero(lhs) + + fp.Mul(lhs, &P.x, &P.y) // xy + fp.Mul(rhs, t, &P.z) // tz + fp.Sub(lhs, lhs, rhs) // xy - tz + eq1 := fp.IsZero(lhs) + return eq0 && eq1 +} + +// Generator returns the generator point. +func (Curve) Generator() *Point { + return &Point{ + x: genX, + y: genY, + z: fp.One(), + ta: genX, + tb: genY, + } +} + +// Order returns the number of points in the prime subgroup. 
+func (Curve) Order() Scalar { return order } + +// Double returns 2P. +func (Curve) Double(P *Point) *Point { R := *P; R.Double(); return &R } + +// Add returns P+Q. +func (Curve) Add(P, Q *Point) *Point { R := *P; R.Add(Q); return &R } + +// ScalarMult returns kP. This function runs in constant time. +func (e Curve) ScalarMult(k *Scalar, P *Point) *Point { + k4 := &Scalar{} + k4.divBy4(k) + return e.pull(twistCurve{}.ScalarMult(k4, e.push(P))) +} + +// ScalarBaseMult returns kG where G is the generator point. This function runs in constant time. +func (e Curve) ScalarBaseMult(k *Scalar) *Point { + k4 := &Scalar{} + k4.divBy4(k) + return e.pull(twistCurve{}.ScalarBaseMult(k4)) +} + +// CombinedMult returns mG+nP, where G is the generator point. This function is non-constant time. +func (e Curve) CombinedMult(m, n *Scalar, P *Point) *Point { + m4 := &Scalar{} + n4 := &Scalar{} + m4.divBy4(m) + n4.divBy4(n) + return e.pull(twistCurve{}.CombinedMult(m4, n4, twistCurve{}.pull(P))) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/isogeny.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/isogeny.go new file mode 100644 index 00000000000..b1daab851c5 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/isogeny.go @@ -0,0 +1,52 @@ +package goldilocks + +import fp "github.com/cloudflare/circl/math/fp448" + +func (Curve) pull(P *twistPoint) *Point { return twistCurve{}.push(P) } +func (twistCurve) pull(P *Point) *twistPoint { return Curve{}.push(P) } + +// push sends a point on the Goldilocks curve to a point on the twist curve. 
+func (Curve) push(P *Point) *twistPoint { + Q := &twistPoint{} + Px, Py, Pz := &P.x, &P.y, &P.z + a, b, c, d, e, f, g, h := &Q.x, &Q.y, &Q.z, &fp.Elt{}, &Q.ta, &Q.x, &Q.y, &Q.tb + fp.Add(e, Px, Py) // x+y + fp.Sqr(a, Px) // A = x^2 + fp.Sqr(b, Py) // B = y^2 + fp.Sqr(c, Pz) // z^2 + fp.Add(c, c, c) // C = 2*z^2 + *d = *a // D = A + fp.Sqr(e, e) // (x+y)^2 + fp.Sub(e, e, a) // (x+y)^2-A + fp.Sub(e, e, b) // E = (x+y)^2-A-B + fp.Add(h, b, d) // H = B+D + fp.Sub(g, b, d) // G = B-D + fp.Sub(f, c, h) // F = C-H + fp.Mul(&Q.z, f, g) // Z = F * G + fp.Mul(&Q.x, e, f) // X = E * F + fp.Mul(&Q.y, g, h) // Y = G * H, T = E * H + return Q +} + +// push sends a point on the twist curve to a point on the Goldilocks curve. +func (twistCurve) push(P *twistPoint) *Point { + Q := &Point{} + Px, Py, Pz := &P.x, &P.y, &P.z + a, b, c, d, e, f, g, h := &Q.x, &Q.y, &Q.z, &fp.Elt{}, &Q.ta, &Q.x, &Q.y, &Q.tb + fp.Add(e, Px, Py) // x+y + fp.Sqr(a, Px) // A = x^2 + fp.Sqr(b, Py) // B = y^2 + fp.Sqr(c, Pz) // z^2 + fp.Add(c, c, c) // C = 2*z^2 + fp.Neg(d, a) // D = -A + fp.Sqr(e, e) // (x+y)^2 + fp.Sub(e, e, a) // (x+y)^2-A + fp.Sub(e, e, b) // E = (x+y)^2-A-B + fp.Add(h, b, d) // H = B+D + fp.Sub(g, b, d) // G = B-D + fp.Sub(f, c, h) // F = C-H + fp.Mul(&Q.z, f, g) // Z = F * G + fp.Mul(&Q.x, e, f) // X = E * F + fp.Mul(&Q.y, g, h) // Y = G * H, T = E * H + return Q +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/point.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/point.go new file mode 100644 index 00000000000..11f73de0542 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/point.go @@ -0,0 +1,171 @@ +package goldilocks + +import ( + "errors" + "fmt" + + fp "github.com/cloudflare/circl/math/fp448" +) + +// Point is a point on the Goldilocks Curve.
+type Point struct{ x, y, z, ta, tb fp.Elt } + +func (P Point) String() string { + return fmt.Sprintf("x: %v\ny: %v\nz: %v\nta: %v\ntb: %v", P.x, P.y, P.z, P.ta, P.tb) +} + +// FromAffine creates a point from affine coordinates. +func FromAffine(x, y *fp.Elt) (*Point, error) { + P := &Point{ + x: *x, + y: *y, + z: fp.One(), + ta: *x, + tb: *y, + } + if !(Curve{}).IsOnCurve(P) { + return P, errors.New("point not on curve") + } + return P, nil +} + +// isLessThan returns true if 0 <= x < y, and assumes that slices are of the +// same length and are interpreted in little-endian order. +func isLessThan(x, y []byte) bool { + i := len(x) - 1 + for i > 0 && x[i] == y[i] { + i-- + } + return x[i] < y[i] +} + +// FromBytes returns a point from the input buffer. +func FromBytes(in []byte) (*Point, error) { + if len(in) < fp.Size+1 { + return nil, errors.New("wrong input length") + } + err := errors.New("invalid decoding") + P := &Point{} + signX := in[fp.Size] >> 7 + copy(P.y[:], in[:fp.Size]) + p := fp.P() + if !isLessThan(P.y[:], p[:]) { + return nil, err + } + + u, v := &fp.Elt{}, &fp.Elt{} + one := fp.One() + fp.Sqr(u, &P.y) // u = y^2 + fp.Mul(v, u, &paramD) // v = dy^2 + fp.Sub(u, u, &one) // u = y^2-1 + fp.Sub(v, v, &one) // v = dy^2-1 + isQR := fp.InvSqrt(&P.x, u, v) // x = sqrt(u/v) + if !isQR { + return nil, err + } + fp.Modp(&P.x) // x = x mod p + if fp.IsZero(&P.x) && signX == 1 { + return nil, err + } + if signX != (P.x[0] & 1) { + fp.Neg(&P.x, &P.x) + } + P.ta = P.x + P.tb = P.y + P.z = fp.One() + return P, nil +} + +// IsIdentity returns true if P is the identity Point. +func (P *Point) IsIdentity() bool { + return fp.IsZero(&P.x) && !fp.IsZero(&P.y) && !fp.IsZero(&P.z) && P.y == P.z +} + +// IsEqual returns true if P is equivalent to Q.
+func (P *Point) IsEqual(Q *Point) bool { + l, r := &fp.Elt{}, &fp.Elt{} + fp.Mul(l, &P.x, &Q.z) + fp.Mul(r, &Q.x, &P.z) + fp.Sub(l, l, r) + b := fp.IsZero(l) + fp.Mul(l, &P.y, &Q.z) + fp.Mul(r, &Q.y, &P.z) + fp.Sub(l, l, r) + b = b && fp.IsZero(l) + fp.Mul(l, &P.ta, &P.tb) + fp.Mul(l, l, &Q.z) + fp.Mul(r, &Q.ta, &Q.tb) + fp.Mul(r, r, &P.z) + fp.Sub(l, l, r) + b = b && fp.IsZero(l) + return b +} + +// Neg obtains the inverse of the Point. +func (P *Point) Neg() { fp.Neg(&P.x, &P.x); fp.Neg(&P.ta, &P.ta) } + +// ToAffine returns the x,y affine coordinates of P. +func (P *Point) ToAffine() (x, y fp.Elt) { + fp.Inv(&P.z, &P.z) // 1/z + fp.Mul(&P.x, &P.x, &P.z) // x/z + fp.Mul(&P.y, &P.y, &P.z) // y/z + fp.Modp(&P.x) + fp.Modp(&P.y) + fp.SetOne(&P.z) + P.ta = P.x + P.tb = P.y + return P.x, P.y +} + +// ToBytes stores P into a slice of bytes. +func (P *Point) ToBytes(out []byte) error { + if len(out) < fp.Size+1 { + return errors.New("invalid decoding") + } + x, y := P.ToAffine() + out[fp.Size] = (x[0] & 1) << 7 + return fp.ToBytes(out[:fp.Size], &y) +} + +// MarshalBinary encodes the receiver into a binary form and returns the result. +func (P *Point) MarshalBinary() (data []byte, err error) { + data = make([]byte, fp.Size+1) + err = P.ToBytes(data[:fp.Size+1]) + return data, err +} + +// UnmarshalBinary must be able to decode the form generated by MarshalBinary. +func (P *Point) UnmarshalBinary(data []byte) error { Q, err := FromBytes(data); *P = *Q; return err } + +// Double sets P = 2P. +func (P *Point) Double() { P.Add(P) } + +// Add sets P = P+Q. +func (P *Point) Add(Q *Point) { + // This is formula (5) from "Twisted Edwards Curves Revisited" by + Hisil H., Wong K.KH., Carter G., Dawson E.
(2008) + https://doi.org/10.1007/978-3-540-89255-7_20 + x1, y1, z1, ta1, tb1 := &P.x, &P.y, &P.z, &P.ta, &P.tb + x2, y2, z2, ta2, tb2 := &Q.x, &Q.y, &Q.z, &Q.ta, &Q.tb + x3, y3, z3, E, H := &P.x, &P.y, &P.z, &P.ta, &P.tb + A, B, C, D := &fp.Elt{}, &fp.Elt{}, &fp.Elt{}, &fp.Elt{} + t1, t2, F, G := C, D, &fp.Elt{}, &fp.Elt{} + fp.Mul(t1, ta1, tb1) // t1 = ta1*tb1 + fp.Mul(t2, ta2, tb2) // t2 = ta2*tb2 + fp.Mul(A, x1, x2) // A = x1*x2 + fp.Mul(B, y1, y2) // B = y1*y2 + fp.Mul(C, t1, t2) // t1*t2 + fp.Mul(C, C, &paramD) // C = d*t1*t2 + fp.Mul(D, z1, z2) // D = z1*z2 + fp.Add(F, x1, y1) // x1+y1 + fp.Add(E, x2, y2) // x2+y2 + fp.Mul(E, E, F) // (x1+y1)*(x2+y2) + fp.Sub(E, E, A) // (x1+y1)*(x2+y2)-A + fp.Sub(E, E, B) // E = (x1+y1)*(x2+y2)-A-B + fp.Sub(F, D, C) // F = D-C + fp.Add(G, D, C) // G = D+C + fp.Sub(H, B, A) // H = B-A + fp.Mul(z3, F, G) // Z = F * G + fp.Mul(x3, E, F) // X = E * F + fp.Mul(y3, G, H) // Y = G * H, T = E * H +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/scalar.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/scalar.go new file mode 100644 index 00000000000..f98117b2527 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/scalar.go @@ -0,0 +1,203 @@ +package goldilocks + +import ( + "encoding/binary" + "math/bits" +) + +// ScalarSize is the size (in bytes) of scalars. +const ScalarSize = 56 // 448 / 8 + +// _N is the number of 64-bit words to store scalars. +const _N = 7 // 448 / 64 + +// Scalar represents a positive integer stored in little-endian order.
+type Scalar [ScalarSize]byte + +type scalar64 [_N]uint64 + +func (z *scalar64) fromScalar(x *Scalar) { + z[0] = binary.LittleEndian.Uint64(x[0*8 : 1*8]) + z[1] = binary.LittleEndian.Uint64(x[1*8 : 2*8]) + z[2] = binary.LittleEndian.Uint64(x[2*8 : 3*8]) + z[3] = binary.LittleEndian.Uint64(x[3*8 : 4*8]) + z[4] = binary.LittleEndian.Uint64(x[4*8 : 5*8]) + z[5] = binary.LittleEndian.Uint64(x[5*8 : 6*8]) + z[6] = binary.LittleEndian.Uint64(x[6*8 : 7*8]) +} + +func (z *scalar64) toScalar(x *Scalar) { + binary.LittleEndian.PutUint64(x[0*8:1*8], z[0]) + binary.LittleEndian.PutUint64(x[1*8:2*8], z[1]) + binary.LittleEndian.PutUint64(x[2*8:3*8], z[2]) + binary.LittleEndian.PutUint64(x[3*8:4*8], z[3]) + binary.LittleEndian.PutUint64(x[4*8:5*8], z[4]) + binary.LittleEndian.PutUint64(x[5*8:6*8], z[5]) + binary.LittleEndian.PutUint64(x[6*8:7*8], z[6]) +} + +// add calculates z = x + y. Assumes len(z) > max(len(x),len(y)). +func add(z, x, y []uint64) uint64 { + l, L, zz := len(x), len(y), y + if l > L { + l, L, zz = L, l, x + } + c := uint64(0) + for i := 0; i < l; i++ { + z[i], c = bits.Add64(x[i], y[i], c) + } + for i := l; i < L; i++ { + z[i], c = bits.Add64(zz[i], 0, c) + } + return c +} + +// sub calculates z = x - y. Assumes len(z) > max(len(x),len(y)). +func sub(z, x, y []uint64) uint64 { + l, L, zz := len(x), len(y), y + if l > L { + l, L, zz = L, l, x + } + c := uint64(0) + for i := 0; i < l; i++ { + z[i], c = bits.Sub64(x[i], y[i], c) + } + for i := l; i < L; i++ { + z[i], c = bits.Sub64(zz[i], 0, c) + } + return c +} + +// mulWord calculates z = x * y. Assumes len(z) >= len(x)+1. +func mulWord(z, x []uint64, y uint64) { + for i := range z { + z[i] = 0 + } + carry := uint64(0) + for i := range x { + hi, lo := bits.Mul64(x[i], y) + lo, cc := bits.Add64(lo, z[i], 0) + hi, _ = bits.Add64(hi, 0, cc) + z[i], cc = bits.Add64(lo, carry, 0) + carry, _ = bits.Add64(hi, 0, cc) + } + z[len(x)] = carry +} + +// Cmov moves x into z if b=1. 
+func (z *scalar64) Cmov(b uint64, x *scalar64) { + m := uint64(0) - b + for i := range z { + z[i] = (z[i] &^ m) | (x[i] & m) + } +} + +// leftShift shifts the words of z one position to the left, inserting low as the least +// significant word, and returns the word shifted out (the most significant). +func (z *scalar64) leftShift(low uint64) uint64 { + high := z[_N-1] + for i := _N - 1; i > 0; i-- { + z[i] = z[i-1] + } + z[0] = low + return high +} + +// reduceOneWord calculates z = z + 2^448*x such that the result fits in a Scalar. +func (z *scalar64) reduceOneWord(x uint64) { + prod := (&scalar64{})[:] + mulWord(prod, residue448[:], x) + cc := add(z[:], z[:], prod) + mulWord(prod, residue448[:], cc) + add(z[:], z[:], prod) +} + +// modOrder reduces z mod order. +func (z *scalar64) modOrder() { + var o64, x scalar64 + o64.fromScalar(&order) + // Performs: while (z >= order) { z = z-order } + // At most 8 (eight) iterations reduce 3 bits by subtracting. + for i := 0; i < 8; i++ { + c := sub(x[:], z[:], o64[:]) // (c || x) = z-order + z.Cmov(1-c, &x) // if c != 0 { z = x } + } +} + +// FromBytes stores z = x mod order, where x is a number stored in little-endian order. +func (z *Scalar) FromBytes(x []byte) { + n := len(x) + nCeil := (n + 7) >> 3 + for i := range z { + z[i] = 0 + } + if nCeil < _N { + copy(z[:], x) + return + } + copy(z[:], x[8*(nCeil-_N):]) + var z64 scalar64 + z64.fromScalar(z) + for i := nCeil - _N - 1; i >= 0; i-- { + low := binary.LittleEndian.Uint64(x[8*i:]) + high := z64.leftShift(low) + z64.reduceOneWord(high) + } + z64.modOrder() + z64.toScalar(z) +} + +// divBy4 calculates z = x/4 mod order. +func (z *Scalar) divBy4(x *Scalar) { z.Mul(x, &invFour) } + +// Red reduces z mod order. +func (z *Scalar) Red() { var t scalar64; t.fromScalar(z); t.modOrder(); t.toScalar(z) } + +// Neg calculates z = -z mod order. +func (z *Scalar) Neg() { z.Sub(&order, z) } + +// Add calculates z = x+y mod order.
+func (z *Scalar) Add(x, y *Scalar) { + var z64, x64, y64, t scalar64 + x64.fromScalar(x) + y64.fromScalar(y) + c := add(z64[:], x64[:], y64[:]) + add(t[:], z64[:], residue448[:]) + z64.Cmov(c, &t) + z64.modOrder() + z64.toScalar(z) +} + +// Sub calculates z = x-y mod order. +func (z *Scalar) Sub(x, y *Scalar) { + var z64, x64, y64, t scalar64 + x64.fromScalar(x) + y64.fromScalar(y) + c := sub(z64[:], x64[:], y64[:]) + sub(t[:], z64[:], residue448[:]) + z64.Cmov(c, &t) + z64.modOrder() + z64.toScalar(z) +} + +// Mul calculates z = x*y mod order. +func (z *Scalar) Mul(x, y *Scalar) { + var z64, x64, y64 scalar64 + prod := (&[_N + 1]uint64{})[:] + x64.fromScalar(x) + y64.fromScalar(y) + mulWord(prod, x64[:], y64[_N-1]) + copy(z64[:], prod[:_N]) + z64.reduceOneWord(prod[_N]) + for i := _N - 2; i >= 0; i-- { + h := z64.leftShift(0) + z64.reduceOneWord(h) + mulWord(prod, x64[:], y64[i]) + c := add(z64[:], z64[:], prod[:_N]) + z64.reduceOneWord(prod[_N] + c) + } + z64.modOrder() + z64.toScalar(z) +} + +// IsZero returns true if z=0. +func (z *Scalar) IsZero() bool { z.Red(); return *z == Scalar{} } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twist.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twist.go new file mode 100644 index 00000000000..8cd4e333b96 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twist.go @@ -0,0 +1,138 @@ +package goldilocks + +import ( + "crypto/subtle" + "math/bits" + + "github.com/cloudflare/circl/internal/conv" + "github.com/cloudflare/circl/math" + fp "github.com/cloudflare/circl/math/fp448" +) + +// twistCurve is -x^2+y^2=1-39082x^2y^2 and is 4-isogenous to Goldilocks. +type twistCurve struct{} + +// Identity returns the identity point. +func (twistCurve) Identity() *twistPoint { + return &twistPoint{ + y: fp.One(), + z: fp.One(), + } +} + +// subYDiv16 updates x = (x - y) / 16.
+func subYDiv16(x *scalar64, y int64) { + s := uint64(y >> 63) + x0, b0 := bits.Sub64((*x)[0], uint64(y), 0) + x1, b1 := bits.Sub64((*x)[1], s, b0) + x2, b2 := bits.Sub64((*x)[2], s, b1) + x3, b3 := bits.Sub64((*x)[3], s, b2) + x4, b4 := bits.Sub64((*x)[4], s, b3) + x5, b5 := bits.Sub64((*x)[5], s, b4) + x6, _ := bits.Sub64((*x)[6], s, b5) + x[0] = (x0 >> 4) | (x1 << 60) + x[1] = (x1 >> 4) | (x2 << 60) + x[2] = (x2 >> 4) | (x3 << 60) + x[3] = (x3 >> 4) | (x4 << 60) + x[4] = (x4 >> 4) | (x5 << 60) + x[5] = (x5 >> 4) | (x6 << 60) + x[6] = (x6 >> 4) +} + +func recodeScalar(d *[113]int8, k *Scalar) { + var k64 scalar64 + k64.fromScalar(k) + for i := 0; i < 112; i++ { + d[i] = int8((k64[0] & 0x1f) - 16) + subYDiv16(&k64, int64(d[i])) + } + d[112] = int8(k64[0]) +} + +// ScalarMult returns kP. +func (e twistCurve) ScalarMult(k *Scalar, P *twistPoint) *twistPoint { + var TabP [8]preTwistPointProy + var S preTwistPointProy + var d [113]int8 + + var isZero int + if k.IsZero() { + isZero = 1 + } + subtle.ConstantTimeCopy(isZero, k[:], order[:]) + + minusK := *k + isEven := 1 - int(k[0]&0x1) + minusK.Neg() + subtle.ConstantTimeCopy(isEven, k[:], minusK[:]) + recodeScalar(&d, k) + + P.oddMultiples(TabP[:]) + Q := e.Identity() + for i := 112; i >= 0; i-- { + Q.Double() + Q.Double() + Q.Double() + Q.Double() + mask := d[i] >> 7 + absDi := (d[i] + mask) ^ mask + inx := int32((absDi - 1) >> 1) + sig := int((d[i] >> 7) & 0x1) + for j := range TabP { + S.cmov(&TabP[j], uint(subtle.ConstantTimeEq(inx, int32(j)))) + } + S.cneg(sig) + Q.mixAdd(&S) + } + Q.cneg(uint(isEven)) + return Q +} + +const ( + omegaFix = 7 + omegaVar = 5 +) + +// CombinedMult returns mG+nP. +func (e twistCurve) CombinedMult(m, n *Scalar, P *twistPoint) *twistPoint { + nafFix := math.OmegaNAF(conv.BytesLe2BigInt(m[:]), omegaFix) + nafVar := math.OmegaNAF(conv.BytesLe2BigInt(n[:]), omegaVar) + + if len(nafFix) > len(nafVar) { + nafVar = append(nafVar, make([]int32, len(nafFix)-len(nafVar))...) 
+ } else if len(nafFix) < len(nafVar) { + nafFix = append(nafFix, make([]int32, len(nafVar)-len(nafFix))...) + } + + var TabQ [1 << (omegaVar - 2)]preTwistPointProy + P.oddMultiples(TabQ[:]) + Q := e.Identity() + for i := len(nafFix) - 1; i >= 0; i-- { + Q.Double() + // Generator point + if nafFix[i] != 0 { + idxM := absolute(nafFix[i]) >> 1 + R := tabVerif[idxM] + if nafFix[i] < 0 { + R.neg() + } + Q.mixAddZ1(&R) + } + // Variable input point + if nafVar[i] != 0 { + idxN := absolute(nafVar[i]) >> 1 + S := TabQ[idxN] + if nafVar[i] < 0 { + S.neg() + } + Q.mixAdd(&S) + } + } + return Q +} + +// absolute always returns a non-negative value. +func absolute(x int32) int32 { + mask := x >> 31 + return (x + mask) ^ mask +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twistPoint.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twistPoint.go new file mode 100644 index 00000000000..c55db77b069 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twistPoint.go @@ -0,0 +1,135 @@ +package goldilocks + +import ( + "fmt" + + fp "github.com/cloudflare/circl/math/fp448" +) + +type twistPoint struct{ x, y, z, ta, tb fp.Elt } + +type preTwistPointAffine struct{ addYX, subYX, dt2 fp.Elt } + +type preTwistPointProy struct { + preTwistPointAffine + z2 fp.Elt +} + +func (P *twistPoint) String() string { + return fmt.Sprintf("x: %v\ny: %v\nz: %v\nta: %v\ntb: %v", P.x, P.y, P.z, P.ta, P.tb) +} + +// cneg conditionally negates the point if b=1. +func (P *twistPoint) cneg(b uint) { + t := &fp.Elt{} + fp.Neg(t, &P.x) + fp.Cmov(&P.x, t, b) + fp.Neg(t, &P.ta) + fp.Cmov(&P.ta, t, b) +} + +// Double updates P with 2P. +func (P *twistPoint) Double() { + // This is formula (7) from "Twisted Edwards Curves Revisited" by + Hisil H., Wong K.KH., Carter G., Dawson E.
(2008) + https://doi.org/10.1007/978-3-540-89255-7_20 + Px, Py, Pz, Pta, Ptb := &P.x, &P.y, &P.z, &P.ta, &P.tb + a, b, c, e, f, g, h := Px, Py, Pz, Pta, Px, Py, Ptb + fp.Add(e, Px, Py) // x+y + fp.Sqr(a, Px) // A = x^2 + fp.Sqr(b, Py) // B = y^2 + fp.Sqr(c, Pz) // z^2 + fp.Add(c, c, c) // C = 2*z^2 + fp.Add(h, a, b) // H = A+B + fp.Sqr(e, e) // (x+y)^2 + fp.Sub(e, e, h) // E = (x+y)^2-A-B + fp.Sub(g, b, a) // G = B-A + fp.Sub(f, c, g) // F = C-G + fp.Mul(Pz, f, g) // Z = F * G + fp.Mul(Px, e, f) // X = E * F + fp.Mul(Py, g, h) // Y = G * H, T = E * H +} + +// mixAddZ1 calculates P = P+Q, where Q is a precomputed point with Z_Q = 1. +func (P *twistPoint) mixAddZ1(Q *preTwistPointAffine) { + fp.Add(&P.z, &P.z, &P.z) // D = 2*z1 (z2=1) + P.coreAddition(Q) +} + +// coreAddition calculates P=P+Q for curves with A=-1. +func (P *twistPoint) coreAddition(Q *preTwistPointAffine) { + // This is the formula following (5) from "Twisted Edwards Curves Revisited" by + Hisil H., Wong K.KH., Carter G., Dawson E.
(2008) + https://doi.org/10.1007/978-3-540-89255-7_20 + Px, Py, Pz, Pta, Ptb := &P.x, &P.y, &P.z, &P.ta, &P.tb + addYX2, subYX2, dt2 := &Q.addYX, &Q.subYX, &Q.dt2 + a, b, c, d, e, f, g, h := Px, Py, &fp.Elt{}, Pz, Pta, Px, Py, Ptb + fp.Mul(c, Pta, Ptb) // t1 = ta*tb + fp.Sub(h, Py, Px) // y1-x1 + fp.Add(b, Py, Px) // y1+x1 + fp.Mul(a, h, subYX2) // A = (y1-x1)*(y2-x2) + fp.Mul(b, b, addYX2) // B = (y1+x1)*(y2+x2) + fp.Mul(c, c, dt2) // C = 2*D*t1*t2 + fp.Sub(e, b, a) // E = B-A + fp.Add(h, b, a) // H = B+A + fp.Sub(f, d, c) // F = D-C + fp.Add(g, d, c) // G = D+C + fp.Mul(Pz, f, g) // Z = F * G + fp.Mul(Px, e, f) // X = E * F + fp.Mul(Py, g, h) // Y = G * H, T = E * H +} + +func (P *preTwistPointAffine) neg() { + P.addYX, P.subYX = P.subYX, P.addYX + fp.Neg(&P.dt2, &P.dt2) +} + +func (P *preTwistPointAffine) cneg(b int) { + t := &fp.Elt{} + fp.Cswap(&P.addYX, &P.subYX, uint(b)) + fp.Neg(t, &P.dt2) + fp.Cmov(&P.dt2, t, uint(b)) +} + +func (P *preTwistPointAffine) cmov(Q *preTwistPointAffine, b uint) { + fp.Cmov(&P.addYX, &Q.addYX, b) + fp.Cmov(&P.subYX, &Q.subYX, b) + fp.Cmov(&P.dt2, &Q.dt2, b) +} + +// mixAdd calculates P = P+Q, where Q is a precomputed point with Z_Q != 1. +func (P *twistPoint) mixAdd(Q *preTwistPointProy) { + fp.Mul(&P.z, &P.z, &Q.z2) // D = 2*z1*z2 + P.coreAddition(&Q.preTwistPointAffine) +} + +// oddMultiples calculates T[i] = (2*i-1)P for 0 < i < len(T). +func (P *twistPoint) oddMultiples(T []preTwistPointProy) { + if n := len(T); n > 0 { + T[0].FromTwistPoint(P) + _2P := *P + _2P.Double() + R := &preTwistPointProy{} + R.FromTwistPoint(&_2P) + for i := 1; i < n; i++ { + P.mixAdd(R) + T[i].FromTwistPoint(P) + } + } +} + +// cmov conditionally moves Q into P if b=1. +func (P *preTwistPointProy) cmov(Q *preTwistPointProy, b uint) { + P.preTwistPointAffine.cmov(&Q.preTwistPointAffine, b) + fp.Cmov(&P.z2, &Q.z2, b) +} + +// FromTwistPoint precomputes some coordinates of Q for mixed addition.
+func (P *preTwistPointProy) FromTwistPoint(Q *twistPoint) { + fp.Add(&P.addYX, &Q.y, &Q.x) // addYX = X + Y + fp.Sub(&P.subYX, &Q.y, &Q.x) // subYX = Y - X + fp.Mul(&P.dt2, &Q.ta, &Q.tb) // T = ta*tb + fp.Mul(&P.dt2, &P.dt2, &paramDTwist) // D*T + fp.Add(&P.dt2, &P.dt2, &P.dt2) // dt2 = 2*D*T + fp.Add(&P.z2, &Q.z, &Q.z) // z2 = 2*Z +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twistTables.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twistTables.go new file mode 100644 index 00000000000..ed432e02c78 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twistTables.go @@ -0,0 +1,216 @@ +package goldilocks + +import fp "github.com/cloudflare/circl/math/fp448" + +var tabFixMult = [fxV][fx2w1]preTwistPointAffine{ + { + { + addYX: fp.Elt{0x65, 0x4a, 0xdd, 0xdf, 0xb4, 0x79, 0x60, 0xc8, 0xa1, 0x70, 0xb4, 0x3a, 0x1e, 0x0c, 0x9b, 0x19, 0xe5, 0x48, 0x3f, 0xd7, 0x44, 0x18, 0x18, 0x14, 0x14, 0x27, 0x45, 0xd0, 0x2b, 0x24, 0xd5, 0x93, 0xc3, 0x74, 0x4c, 0x50, 0x70, 0x43, 0x26, 0x05, 0x08, 0x24, 0xca, 0x78, 0x30, 0xc1, 0x06, 0x8d, 0xd4, 0x86, 0x42, 0xf0, 0x14, 0xde, 0x08, 0x05}, + subYX: fp.Elt{0x64, 0x4a, 0xdd, 0xdf, 0xb4, 0x79, 0x60, 0xc8, 0xa1, 0x70, 0xb4, 0x3a, 0x1e, 0x0c, 0x9b, 0x19, 0xe5, 0x48, 0x3f, 0xd7, 0x44, 0x18, 0x18, 0x14, 0x14, 0x27, 0x45, 0xd0, 0x2d, 0x24, 0xd5, 0x93, 0xc3, 0x74, 0x4c, 0x50, 0x70, 0x43, 0x26, 0x05, 0x08, 0x24, 0xca, 0x78, 0x30, 0xc1, 0x06, 0x8d, 0xd4, 0x86, 0x42, 0xf0, 0x14, 0xde, 0x08, 0x05}, + dt2: fp.Elt{0x1a, 0x33, 0xea, 0x64, 0x45, 0x1c, 0xdf, 0x17, 0x1d, 0x16, 0x34, 0x28, 0xd6, 0x61, 0x19, 0x67, 0x79, 0xb4, 0x13, 0xcf, 0x3e, 0x7c, 0x0e, 0x72, 0xda, 0xf1, 0x5f, 0xda, 0xe6, 0xcf, 0x42, 0xd3, 0xb6, 0x17, 0xc2, 0x68, 0x13, 0x2d, 0xd9, 0x60, 0x3e, 0xae, 0xf0, 0x5b, 0x96, 0xf0, 0xcd, 0xaf, 0xea, 0xb7, 0x0d, 0x59, 0x16, 0xa7, 0xff, 0x55}, + }, + { + addYX: fp.Elt{0xca, 0xd8, 0x7d, 0x86, 0x1a, 0xef, 0xad, 0x11, 0xe3, 0x27, 0x41, 0x7e, 0x7f, 0x3e, 0xa9, 0xd2,
0xb5, 0x4e, 0x50, 0xe0, 0x77, 0x91, 0xc2, 0x13, 0x52, 0x73, 0x41, 0x09, 0xa6, 0x57, 0x9a, 0xc8, 0xa8, 0x90, 0x9d, 0x26, 0x14, 0xbb, 0xa1, 0x2a, 0xf7, 0x45, 0x43, 0x4e, 0xea, 0x35, 0x62, 0xe1, 0x08, 0x85, 0x46, 0xb8, 0x24, 0x05, 0x2d, 0xab}, + subYX: fp.Elt{0x9b, 0xe6, 0xd3, 0xe5, 0xfe, 0x50, 0x36, 0x3c, 0x3c, 0x6d, 0x74, 0x1d, 0x74, 0xc0, 0xde, 0x5b, 0x45, 0x27, 0xe5, 0x12, 0xee, 0x63, 0x35, 0x6b, 0x13, 0xe2, 0x41, 0x6b, 0x3a, 0x05, 0x2b, 0xb1, 0x89, 0x26, 0xb6, 0xc6, 0xd1, 0x84, 0xff, 0x0e, 0x9b, 0xa3, 0xfb, 0x21, 0x36, 0x6b, 0x01, 0xf7, 0x9f, 0x7c, 0xeb, 0xf5, 0x18, 0x7a, 0x2a, 0x70}, + dt2: fp.Elt{0x09, 0xad, 0x99, 0x1a, 0x38, 0xd3, 0xdf, 0x22, 0x37, 0x32, 0x61, 0x8b, 0xf3, 0x19, 0x48, 0x08, 0xe8, 0x49, 0xb6, 0x4a, 0xa7, 0xed, 0xa4, 0xa2, 0xee, 0x86, 0xd7, 0x31, 0x5e, 0xce, 0x95, 0x76, 0x86, 0x42, 0x1c, 0x9d, 0x07, 0x14, 0x8c, 0x34, 0x18, 0x9c, 0x6d, 0x3a, 0xdf, 0xa9, 0xe8, 0x36, 0x7e, 0xe4, 0x95, 0xbe, 0xb5, 0x09, 0xf8, 0x9c}, + }, + { + addYX: fp.Elt{0x51, 0xdb, 0x49, 0xa8, 0x9f, 0xe3, 0xd7, 0xec, 0x0d, 0x0f, 0x49, 0xe8, 0xb6, 0xc5, 0x0f, 0x5a, 0x1c, 0xce, 0x54, 0x0d, 0xb1, 0x8d, 0x5b, 0xbf, 0xf4, 0xaa, 0x34, 0x77, 0xc4, 0x5d, 0x59, 0xb6, 0xc5, 0x0e, 0x5a, 0xd8, 0x5b, 0x30, 0xc2, 0x1d, 0xec, 0x85, 0x1c, 0x42, 0xbe, 0x24, 0x2e, 0x50, 0x55, 0x44, 0xb2, 0x3a, 0x01, 0xaa, 0x98, 0xfb}, + subYX: fp.Elt{0xe7, 0x29, 0xb7, 0xd0, 0xaa, 0x4f, 0x32, 0x53, 0x56, 0xde, 0xbc, 0xd1, 0x92, 0x5d, 0x19, 0xbe, 0xa3, 0xe3, 0x75, 0x48, 0xe0, 0x7a, 0x1b, 0x54, 0x7a, 0xb7, 0x41, 0x77, 0x84, 0x38, 0xdd, 0x14, 0x9f, 0xca, 0x3f, 0xa3, 0xc8, 0xa7, 0x04, 0x70, 0xf1, 0x4d, 0x3d, 0xb3, 0x84, 0x79, 0xcb, 0xdb, 0xe4, 0xc5, 0x42, 0x9b, 0x57, 0x19, 0xf1, 0x2d}, + dt2: fp.Elt{0x20, 0xb4, 0x94, 0x9e, 0xdf, 0x31, 0x44, 0x0b, 0xc9, 0x7b, 0x75, 0x40, 0x9d, 0xd1, 0x96, 0x39, 0x70, 0x71, 0x15, 0xc8, 0x93, 0xd5, 0xc5, 0xe5, 0xba, 0xfe, 0xee, 0x08, 0x6a, 0x98, 0x0a, 0x1b, 0xb2, 0xaa, 0x3a, 0xf4, 0xa4, 0x79, 0xf9, 0x8e, 0x4d, 0x65, 0x10, 0x9b, 0x3a, 0x6e, 0x7c, 0x87, 0x94, 0x92, 0x11, 0x65, 0xbf, 0x1a, 
0x09, 0xde}, + }, + { + addYX: fp.Elt{0xf3, 0x84, 0x76, 0x77, 0xa5, 0x6b, 0x27, 0x3b, 0x83, 0x3d, 0xdf, 0xa0, 0xeb, 0x32, 0x6d, 0x58, 0x81, 0x57, 0x64, 0xc2, 0x21, 0x7c, 0x9b, 0xea, 0xe6, 0xb0, 0x93, 0xf9, 0xe7, 0xc3, 0xed, 0x5a, 0x8e, 0xe2, 0xb4, 0x72, 0x76, 0x66, 0x0f, 0x22, 0x29, 0x94, 0x3e, 0x63, 0x48, 0x5e, 0x80, 0xcb, 0xac, 0xfa, 0x95, 0xb6, 0x4b, 0xc4, 0x95, 0x33}, + subYX: fp.Elt{0x0c, 0x55, 0xd1, 0x5e, 0x5f, 0xbf, 0xbf, 0xe2, 0x4c, 0xfc, 0x37, 0x4a, 0xc4, 0xb1, 0xf4, 0x83, 0x61, 0x93, 0x60, 0x8e, 0x9f, 0x31, 0xf0, 0xa0, 0x41, 0xff, 0x1d, 0xe2, 0x7f, 0xca, 0x40, 0xd6, 0x88, 0xe8, 0x91, 0x61, 0xe2, 0x11, 0x18, 0x83, 0xf3, 0x25, 0x2f, 0x3f, 0x49, 0x40, 0xd4, 0x83, 0xe2, 0xd7, 0x74, 0x6a, 0x16, 0x86, 0x4e, 0xab}, + dt2: fp.Elt{0xdd, 0x58, 0x65, 0xd8, 0x9f, 0xdd, 0x70, 0x7f, 0x0f, 0xec, 0xbd, 0x5c, 0x5c, 0x9b, 0x7e, 0x1b, 0x9f, 0x79, 0x36, 0x1f, 0xfd, 0x79, 0x10, 0x1c, 0x52, 0xf3, 0x22, 0xa4, 0x1f, 0x71, 0x6e, 0x63, 0x14, 0xf4, 0xa7, 0x3e, 0xbe, 0xad, 0x43, 0x30, 0x38, 0x8c, 0x29, 0xc6, 0xcf, 0x50, 0x75, 0x21, 0xe5, 0x78, 0xfd, 0xb0, 0x9a, 0xc4, 0x6d, 0xd4}, + }, + }, + { + { + addYX: fp.Elt{0x7a, 0xa1, 0x38, 0xa6, 0xfd, 0x0e, 0x96, 0xd5, 0x26, 0x76, 0x86, 0x70, 0x80, 0x30, 0xa6, 0x67, 0xeb, 0xf4, 0x39, 0xdb, 0x22, 0xf5, 0x9f, 0x98, 0xe4, 0xb5, 0x3a, 0x0c, 0x59, 0xbf, 0x85, 0xc6, 0xf0, 0x0b, 0x1c, 0x41, 0x38, 0x09, 0x01, 0xdb, 0xd6, 0x3c, 0xb7, 0xf1, 0x08, 0x6b, 0x4b, 0x9e, 0x63, 0x53, 0x83, 0xd3, 0xab, 0xa3, 0x72, 0x0d}, + subYX: fp.Elt{0x84, 0x68, 0x25, 0xe8, 0xe9, 0x8f, 0x91, 0xbf, 0xf7, 0xa4, 0x30, 0xae, 0xea, 0x9f, 0xdd, 0x56, 0x64, 0x09, 0xc9, 0x54, 0x68, 0x4e, 0x33, 0xc5, 0x6f, 0x7b, 0x2d, 0x52, 0x2e, 0x42, 0xbe, 0xbe, 0xf5, 0x64, 0xbf, 0x77, 0x54, 0xdf, 0xb0, 0x10, 0xd2, 0x16, 0x5d, 0xce, 0xaf, 0x9f, 0xfb, 0xa3, 0x63, 0x50, 0xcb, 0xc0, 0xd0, 0x88, 0x44, 0xa3}, + dt2: fp.Elt{0xc3, 0x8b, 0xa5, 0xf1, 0x44, 0xe4, 0x41, 0xcd, 0x75, 0xe3, 0x17, 0x69, 0x5b, 0xb9, 0xbb, 0xee, 0x82, 0xbb, 0xce, 0x57, 0xdf, 0x2a, 0x9c, 0x12, 0xab, 0x66, 0x08, 0x68, 0x05, 0x1b, 
0x87, 0xee, 0x5d, 0x1e, 0x18, 0x14, 0x22, 0x4b, 0x99, 0x61, 0x75, 0x28, 0xe7, 0x65, 0x1c, 0x36, 0xb6, 0x18, 0x09, 0xa8, 0xdf, 0xef, 0x30, 0x35, 0xbc, 0x58}, + }, + { + addYX: fp.Elt{0xc5, 0xd3, 0x0e, 0x6f, 0xaf, 0x06, 0x69, 0xc4, 0x07, 0x9e, 0x58, 0x6e, 0x3f, 0x49, 0xd9, 0x0a, 0x3c, 0x2c, 0x37, 0xcd, 0x27, 0x4d, 0x87, 0x91, 0x7a, 0xb0, 0x28, 0xad, 0x2f, 0x68, 0x92, 0x05, 0x97, 0xf1, 0x30, 0x5f, 0x4c, 0x10, 0x20, 0x30, 0xd3, 0x08, 0x3f, 0xc1, 0xc6, 0xb7, 0xb5, 0xd1, 0x71, 0x7b, 0xa8, 0x0a, 0xd8, 0xf5, 0x17, 0xcf}, + subYX: fp.Elt{0x64, 0xd4, 0x8f, 0x91, 0x40, 0xab, 0x6e, 0x1a, 0x62, 0x83, 0xdc, 0xd7, 0x30, 0x1a, 0x4a, 0x2a, 0x4c, 0x54, 0x86, 0x19, 0x81, 0x5d, 0x04, 0x52, 0xa3, 0xca, 0x82, 0x38, 0xdc, 0x1e, 0xf0, 0x7a, 0x78, 0x76, 0x49, 0x4f, 0x71, 0xc4, 0x74, 0x2f, 0xf0, 0x5b, 0x2e, 0x5e, 0xac, 0xef, 0x17, 0xe4, 0x8e, 0x6e, 0xed, 0x43, 0x23, 0x61, 0x99, 0x49}, + dt2: fp.Elt{0x64, 0x90, 0x72, 0x76, 0xf8, 0x2c, 0x7d, 0x57, 0xf9, 0x30, 0x5e, 0x7a, 0x10, 0x74, 0x19, 0x39, 0xd9, 0xaf, 0x0a, 0xf1, 0x43, 0xed, 0x88, 0x9c, 0x8b, 0xdc, 0x9b, 0x1c, 0x90, 0xe7, 0xf7, 0xa3, 0xa5, 0x0d, 0xc6, 0xbc, 0x30, 0xfb, 0x91, 0x1a, 0x51, 0xba, 0x2d, 0xbe, 0x89, 0xdf, 0x1d, 0xdc, 0x53, 0xa8, 0x82, 0x8a, 0xd3, 0x8d, 0x16, 0x68}, + }, + { + addYX: fp.Elt{0xef, 0x5c, 0xe3, 0x74, 0xbf, 0x13, 0x4a, 0xbf, 0x66, 0x73, 0x64, 0xb7, 0xd4, 0xce, 0x98, 0x82, 0x05, 0xfa, 0x98, 0x0c, 0x0a, 0xae, 0xe5, 0x6b, 0x9f, 0xac, 0xbb, 0x6e, 0x1f, 0xcf, 0xff, 0xa6, 0x71, 0x9a, 0xa8, 0x7a, 0x9e, 0x64, 0x1f, 0x20, 0x4a, 0x61, 0xa2, 0xd6, 0x50, 0xe3, 0xba, 0x81, 0x0c, 0x50, 0x59, 0x69, 0x59, 0x15, 0x55, 0xdb}, + subYX: fp.Elt{0xe8, 0x77, 0x4d, 0xe8, 0x66, 0x3d, 0xc1, 0x00, 0x3c, 0xf2, 0x25, 0x00, 0xdc, 0xb2, 0xe5, 0x9b, 0x12, 0x89, 0xf3, 0xd6, 0xea, 0x85, 0x60, 0xfe, 0x67, 0x91, 0xfd, 0x04, 0x7c, 0xe0, 0xf1, 0x86, 0x06, 0x11, 0x66, 0xee, 0xd4, 0xd5, 0xbe, 0x3b, 0x0f, 0xe3, 0x59, 0xb3, 0x4f, 0x00, 0xb6, 0xce, 0x80, 0xc1, 0x61, 0xf7, 0xaf, 0x04, 0x6a, 0x3c}, + dt2: fp.Elt{0x00, 0xd7, 0x32, 0x93, 0x67, 0x70, 0x6f, 0xd7, 
0x69, 0xab, 0xb1, 0xd3, 0xdc, 0xd6, 0xa8, 0xdd, 0x35, 0x25, 0xca, 0xd3, 0x8a, 0x6d, 0xce, 0xfb, 0xfd, 0x2b, 0x83, 0xf0, 0xd4, 0xac, 0x66, 0xfb, 0x72, 0x87, 0x7e, 0x55, 0xb7, 0x91, 0x58, 0x10, 0xc3, 0x11, 0x7e, 0x15, 0xfe, 0x7c, 0x55, 0x90, 0xa3, 0x9e, 0xed, 0x9a, 0x7f, 0xa7, 0xb7, 0xeb}, + }, + { + addYX: fp.Elt{0x25, 0x0f, 0xc2, 0x09, 0x9c, 0x10, 0xc8, 0x7c, 0x93, 0xa7, 0xbe, 0xe9, 0x26, 0x25, 0x7c, 0x21, 0xfe, 0xe7, 0x5f, 0x3c, 0x02, 0x83, 0xa7, 0x9e, 0xdf, 0xc0, 0x94, 0x2b, 0x7d, 0x1a, 0xd0, 0x1d, 0xcc, 0x2e, 0x7d, 0xd4, 0x85, 0xe7, 0xc1, 0x15, 0x66, 0xd6, 0xd6, 0x32, 0xb8, 0xf7, 0x63, 0xaa, 0x3b, 0xa5, 0xea, 0x49, 0xad, 0x88, 0x9b, 0x66}, + subYX: fp.Elt{0x09, 0x97, 0x79, 0x36, 0x41, 0x56, 0x9b, 0xdf, 0x15, 0xd8, 0x43, 0x28, 0x17, 0x5b, 0x96, 0xc9, 0xcf, 0x39, 0x1f, 0x13, 0xf7, 0x4d, 0x1d, 0x1f, 0xda, 0x51, 0x56, 0xe7, 0x0a, 0x5a, 0x65, 0xb6, 0x2a, 0x87, 0x49, 0x86, 0xc2, 0x2b, 0xcd, 0xfe, 0x07, 0xf6, 0x4c, 0xe2, 0x1d, 0x9b, 0xd8, 0x82, 0x09, 0x5b, 0x11, 0x10, 0x62, 0x56, 0x89, 0xbd}, + dt2: fp.Elt{0xd9, 0x15, 0x73, 0xf2, 0x96, 0x35, 0x53, 0xb0, 0xe7, 0xa8, 0x0b, 0x93, 0x35, 0x0b, 0x3a, 0x00, 0xf5, 0x18, 0xb1, 0xc3, 0x12, 0x3f, 0x91, 0x17, 0xc1, 0x4c, 0x15, 0x5a, 0x86, 0x92, 0x11, 0xbd, 0x44, 0x40, 0x5a, 0x7b, 0x15, 0x89, 0xba, 0xc1, 0xc1, 0xbc, 0x43, 0x45, 0xe6, 0x52, 0x02, 0x73, 0x0a, 0xd0, 0x2a, 0x19, 0xda, 0x47, 0xa8, 0xff}, + }, + }, +} + +// tabVerif contains the odd multiples of P. The entry T[i] = (2i+1)P, where +// P = phi(G) and G is the generator of the Goldilocks curve, and phi is a +// 4-degree isogeny. 
+var tabVerif = [1 << (omegaFix - 2)]preTwistPointAffine{ + { /* 1P*/ + addYX: fp.Elt{0x65, 0x4a, 0xdd, 0xdf, 0xb4, 0x79, 0x60, 0xc8, 0xa1, 0x70, 0xb4, 0x3a, 0x1e, 0x0c, 0x9b, 0x19, 0xe5, 0x48, 0x3f, 0xd7, 0x44, 0x18, 0x18, 0x14, 0x14, 0x27, 0x45, 0xd0, 0x2b, 0x24, 0xd5, 0x93, 0xc3, 0x74, 0x4c, 0x50, 0x70, 0x43, 0x26, 0x05, 0x08, 0x24, 0xca, 0x78, 0x30, 0xc1, 0x06, 0x8d, 0xd4, 0x86, 0x42, 0xf0, 0x14, 0xde, 0x08, 0x05}, + subYX: fp.Elt{0x64, 0x4a, 0xdd, 0xdf, 0xb4, 0x79, 0x60, 0xc8, 0xa1, 0x70, 0xb4, 0x3a, 0x1e, 0x0c, 0x9b, 0x19, 0xe5, 0x48, 0x3f, 0xd7, 0x44, 0x18, 0x18, 0x14, 0x14, 0x27, 0x45, 0xd0, 0x2d, 0x24, 0xd5, 0x93, 0xc3, 0x74, 0x4c, 0x50, 0x70, 0x43, 0x26, 0x05, 0x08, 0x24, 0xca, 0x78, 0x30, 0xc1, 0x06, 0x8d, 0xd4, 0x86, 0x42, 0xf0, 0x14, 0xde, 0x08, 0x05}, + dt2: fp.Elt{0x1a, 0x33, 0xea, 0x64, 0x45, 0x1c, 0xdf, 0x17, 0x1d, 0x16, 0x34, 0x28, 0xd6, 0x61, 0x19, 0x67, 0x79, 0xb4, 0x13, 0xcf, 0x3e, 0x7c, 0x0e, 0x72, 0xda, 0xf1, 0x5f, 0xda, 0xe6, 0xcf, 0x42, 0xd3, 0xb6, 0x17, 0xc2, 0x68, 0x13, 0x2d, 0xd9, 0x60, 0x3e, 0xae, 0xf0, 0x5b, 0x96, 0xf0, 0xcd, 0xaf, 0xea, 0xb7, 0x0d, 0x59, 0x16, 0xa7, 0xff, 0x55}, + }, + { /* 3P*/ + addYX: fp.Elt{0xd1, 0xe9, 0xa8, 0x33, 0x20, 0x76, 0x18, 0x08, 0x45, 0x2a, 0xc9, 0x67, 0x2a, 0xc3, 0x15, 0x24, 0xf9, 0x74, 0x21, 0x30, 0x99, 0x59, 0x8b, 0xb2, 0xf0, 0xa4, 0x07, 0xe2, 0x6a, 0x36, 0x8d, 0xd9, 0xd2, 0x4a, 0x7f, 0x73, 0x50, 0x39, 0x3d, 0xaa, 0xa7, 0x51, 0x73, 0x0d, 0x2b, 0x8b, 0x96, 0x47, 0xac, 0x3c, 0x5d, 0xaa, 0x39, 0x9c, 0xcf, 0xd5}, + subYX: fp.Elt{0x6b, 0x11, 0x5d, 0x1a, 0xf9, 0x41, 0x9d, 0xc5, 0x30, 0x3e, 0xad, 0x25, 0x2c, 0x04, 0x45, 0xea, 0xcc, 0x67, 0x07, 0x85, 0xe9, 0xda, 0x0e, 0xb5, 0x40, 0xb7, 0x32, 0xb4, 0x49, 0xdd, 0xff, 0xaa, 0xfc, 0xbb, 0x19, 0xca, 0x8b, 0x79, 0x2b, 0x8f, 0x8d, 0x00, 0x33, 0xc2, 0xad, 0xe9, 0xd3, 0x12, 0xa8, 0xaa, 0x87, 0x62, 0xad, 0x2d, 0xff, 0xa4}, + dt2: fp.Elt{0xb0, 0xaf, 0x3b, 0xea, 0xf0, 0x42, 0x0b, 0x5e, 0x88, 0xd3, 0x98, 0x08, 0x87, 0x59, 0x72, 0x0a, 0xc2, 0xdf, 0xcb, 0x7f, 0x59, 0xb5, 
0x4c, 0x63, 0x68, 0xe8, 0x41, 0x38, 0x67, 0x4f, 0xe9, 0xc6, 0xb2, 0x6b, 0x08, 0xa7, 0xf7, 0x0e, 0xcd, 0xea, 0xca, 0x3d, 0xaf, 0x8e, 0xda, 0x4b, 0x2e, 0xd2, 0x88, 0x64, 0x8d, 0xc5, 0x5f, 0x76, 0x0f, 0x3d}, + }, + { /* 5P*/ + addYX: fp.Elt{0xe5, 0x65, 0xc9, 0xe2, 0x75, 0xf0, 0x7d, 0x1a, 0xba, 0xa4, 0x40, 0x4b, 0x93, 0x12, 0xa2, 0x80, 0x95, 0x0d, 0x03, 0x93, 0xe8, 0xa5, 0x4d, 0xe2, 0x3d, 0x81, 0xf5, 0xce, 0xd4, 0x2d, 0x25, 0x59, 0x16, 0x5c, 0xe7, 0xda, 0xc7, 0x45, 0xd2, 0x7e, 0x2c, 0x38, 0xd4, 0x37, 0x64, 0xb2, 0xc2, 0x28, 0xc5, 0x72, 0x16, 0x32, 0x45, 0x36, 0x6f, 0x9f}, + subYX: fp.Elt{0x09, 0xf4, 0x7e, 0xbd, 0x89, 0xdb, 0x19, 0x58, 0xe1, 0x08, 0x00, 0x8a, 0xf4, 0x5f, 0x2a, 0x32, 0x40, 0xf0, 0x2c, 0x3f, 0x5d, 0xe4, 0xfc, 0x89, 0x11, 0x24, 0xb4, 0x2f, 0x97, 0xad, 0xac, 0x8f, 0x19, 0xab, 0xfa, 0x12, 0xe5, 0xf9, 0x50, 0x4e, 0x50, 0x6f, 0x32, 0x30, 0x88, 0xa6, 0xe5, 0x48, 0x28, 0xa2, 0x1b, 0x9f, 0xcd, 0xe2, 0x43, 0x38}, + dt2: fp.Elt{0xa9, 0xcc, 0x53, 0x39, 0x86, 0x02, 0x60, 0x75, 0x34, 0x99, 0x57, 0xbd, 0xfc, 0x5a, 0x8e, 0xce, 0x5e, 0x98, 0x22, 0xd0, 0xa5, 0x24, 0xff, 0x90, 0x28, 0x9f, 0x58, 0xf3, 0x39, 0xe9, 0xba, 0x36, 0x23, 0xfb, 0x7f, 0x41, 0xcc, 0x2b, 0x5a, 0x25, 0x3f, 0x4c, 0x2a, 0xf1, 0x52, 0x6f, 0x2f, 0x07, 0xe3, 0x88, 0x81, 0x77, 0xdd, 0x7c, 0x88, 0x82}, + }, + { /* 7P*/ + addYX: fp.Elt{0xf7, 0xee, 0x88, 0xfd, 0x3a, 0xbf, 0x7e, 0x28, 0x39, 0x23, 0x79, 0xe6, 0x5c, 0x56, 0xcb, 0xb5, 0x48, 0x6a, 0x80, 0x6d, 0x37, 0x60, 0x6c, 0x10, 0x35, 0x49, 0x4b, 0x46, 0x60, 0xd4, 0x79, 0xd4, 0x53, 0xd3, 0x67, 0x88, 0xd0, 0x41, 0xd5, 0x43, 0x85, 0xc8, 0x71, 0xe3, 0x1c, 0xb6, 0xda, 0x22, 0x64, 0x8f, 0x80, 0xac, 0xad, 0x7d, 0xd5, 0x82}, + subYX: fp.Elt{0x92, 0x40, 0xc1, 0x83, 0x21, 0x9b, 0xd5, 0x7d, 0x3f, 0x29, 0xb6, 0x26, 0xef, 0x12, 0xb9, 0x27, 0x39, 0x42, 0x37, 0x97, 0x09, 0x9a, 0x08, 0xe1, 0x68, 0xb6, 0x7a, 0x3f, 0x9f, 0x45, 0xf8, 0x37, 0x19, 0x83, 0x97, 0xe6, 0x73, 0x30, 0x32, 0x35, 0xcf, 0xae, 0x5c, 0x12, 0x68, 0xdf, 0x6e, 0x2b, 0xde, 0x83, 0xa0, 0x44, 0x74, 0x2e, 0x4a, 
0xe9}, + dt2: fp.Elt{0xcb, 0x22, 0x0a, 0xda, 0x6b, 0xc1, 0x8a, 0x29, 0xa1, 0xac, 0x8b, 0x5b, 0x8b, 0x32, 0x20, 0xf2, 0x21, 0xae, 0x0c, 0x43, 0xc4, 0xd7, 0x19, 0x37, 0x3d, 0x79, 0x25, 0x98, 0x6c, 0x9c, 0x22, 0x31, 0x2a, 0x55, 0x9f, 0xda, 0x5e, 0xa8, 0x13, 0xdb, 0x8e, 0x2e, 0x16, 0x39, 0xf4, 0x91, 0x6f, 0xec, 0x71, 0x71, 0xc9, 0x10, 0xf2, 0xa4, 0x8f, 0x11}, + }, + { /* 9P*/ + addYX: fp.Elt{0x85, 0xdd, 0x37, 0x62, 0x74, 0x8e, 0x33, 0x5b, 0x25, 0x12, 0x1b, 0xe7, 0xdf, 0x47, 0xe5, 0x12, 0xfd, 0x3a, 0x3a, 0xf5, 0x5d, 0x4c, 0xa2, 0x29, 0x3c, 0x5c, 0x2f, 0xee, 0x18, 0x19, 0x0a, 0x2b, 0xef, 0x67, 0x50, 0x7a, 0x0d, 0x29, 0xae, 0x55, 0x82, 0xcd, 0xd6, 0x41, 0x90, 0xb4, 0x13, 0x31, 0x5d, 0x11, 0xb8, 0xaa, 0x12, 0x86, 0x08, 0xac}, + subYX: fp.Elt{0xcc, 0x37, 0x8d, 0x83, 0x5f, 0xfd, 0xde, 0xd5, 0xf7, 0xf1, 0xae, 0x0a, 0xa7, 0x0b, 0xeb, 0x6d, 0x19, 0x8a, 0xb6, 0x1a, 0x59, 0xd8, 0xff, 0x3c, 0xbc, 0xbc, 0xef, 0x9c, 0xda, 0x7b, 0x75, 0x12, 0xaf, 0x80, 0x8f, 0x2c, 0x3c, 0xaa, 0x0b, 0x17, 0x86, 0x36, 0x78, 0x18, 0xc8, 0x8a, 0xf6, 0xb8, 0x2c, 0x2f, 0x57, 0x2c, 0x62, 0x57, 0xf6, 0x90}, + dt2: fp.Elt{0x83, 0xbc, 0xa2, 0x07, 0xa5, 0x38, 0x96, 0xea, 0xfe, 0x11, 0x46, 0x1d, 0x3b, 0xcd, 0x42, 0xc5, 0xee, 0x67, 0x04, 0x72, 0x08, 0xd8, 0xd9, 0x96, 0x07, 0xf7, 0xac, 0xc3, 0x64, 0xf1, 0x98, 0x2c, 0x55, 0xd7, 0x7d, 0xc8, 0x6c, 0xbd, 0x2c, 0xff, 0x15, 0xd6, 0x6e, 0xb8, 0x17, 0x8e, 0xa8, 0x27, 0x66, 0xb1, 0x73, 0x79, 0x96, 0xff, 0x29, 0x10}, + }, + { /* 11P*/ + addYX: fp.Elt{0x76, 0xcb, 0x9b, 0x0c, 0x5b, 0xfe, 0xe1, 0x2a, 0xdd, 0x6f, 0x6c, 0xdd, 0x6f, 0xb4, 0xc0, 0xc2, 0x1b, 0x4b, 0x38, 0xe8, 0x66, 0x8c, 0x1e, 0x31, 0x63, 0xb9, 0x94, 0xcd, 0xc3, 0x8c, 0x44, 0x25, 0x7b, 0xd5, 0x39, 0x80, 0xfc, 0x01, 0xaa, 0xf7, 0x2a, 0x61, 0x8a, 0x25, 0xd2, 0x5f, 0xc5, 0x66, 0x38, 0xa4, 0x17, 0xcf, 0x3e, 0x11, 0x0f, 0xa3}, + subYX: fp.Elt{0xe0, 0xb6, 0xd1, 0x9c, 0x71, 0x49, 0x2e, 0x7b, 0xde, 0x00, 0xda, 0x6b, 0xf1, 0xec, 0xe6, 0x7a, 0x15, 0x38, 0x71, 0xe9, 0x7b, 0xdb, 0xf8, 0x98, 0xc0, 0x91, 0x2e, 0x53, 0xee, 0x92, 
0x87, 0x25, 0xc9, 0xb0, 0xbb, 0x33, 0x15, 0x46, 0x7f, 0xfd, 0x4f, 0x8b, 0x77, 0x05, 0x96, 0xb6, 0xe2, 0x08, 0xdb, 0x0d, 0x09, 0xee, 0x5b, 0xd1, 0x2a, 0x63}, + dt2: fp.Elt{0x8f, 0x7b, 0x57, 0x8c, 0xbf, 0x06, 0x0d, 0x43, 0x21, 0x92, 0x94, 0x2d, 0x6a, 0x38, 0x07, 0x0f, 0xa0, 0xf1, 0xe3, 0xd8, 0x2a, 0xbf, 0x46, 0xc6, 0x9e, 0x1f, 0x8f, 0x2b, 0x46, 0x84, 0x0b, 0x74, 0xed, 0xff, 0xf8, 0xa5, 0x94, 0xae, 0xf1, 0x67, 0xb1, 0x9b, 0xdd, 0x4a, 0xd0, 0xdb, 0xc2, 0xb5, 0x58, 0x49, 0x0c, 0xa9, 0x1d, 0x7d, 0xa9, 0xd3}, + }, + { /* 13P*/ + addYX: fp.Elt{0x73, 0x84, 0x2e, 0x31, 0x1f, 0xdc, 0xed, 0x9f, 0x74, 0xfa, 0xe0, 0x35, 0xb1, 0x85, 0x6a, 0x8d, 0x86, 0xd0, 0xff, 0xd6, 0x08, 0x43, 0x73, 0x1a, 0xd5, 0xf8, 0x43, 0xd4, 0xb3, 0xe5, 0x3f, 0xa8, 0x84, 0x17, 0x59, 0x65, 0x4e, 0xe6, 0xee, 0x54, 0x9c, 0xda, 0x5e, 0x7e, 0x98, 0x29, 0x6d, 0x73, 0x34, 0x1f, 0x99, 0x80, 0x54, 0x54, 0x81, 0x0b}, + subYX: fp.Elt{0xb1, 0xe5, 0xbb, 0x80, 0x22, 0x9c, 0x81, 0x6d, 0xaf, 0x27, 0x65, 0x6f, 0x7e, 0x9c, 0xb6, 0x8d, 0x35, 0x5c, 0x2e, 0x20, 0x48, 0x7a, 0x28, 0xf0, 0x97, 0xfe, 0xb7, 0x71, 0xce, 0xd6, 0xad, 0x3a, 0x81, 0xf6, 0x74, 0x5e, 0xf3, 0xfd, 0x1b, 0xd4, 0x1e, 0x7c, 0xc2, 0xb7, 0xc8, 0xa6, 0xc9, 0x89, 0x03, 0x47, 0xec, 0x24, 0xd6, 0x0e, 0xec, 0x9c}, + dt2: fp.Elt{0x91, 0x0a, 0x43, 0x34, 0x20, 0xc2, 0x64, 0xf7, 0x4e, 0x48, 0xc8, 0xd2, 0x95, 0x83, 0xd1, 0xa4, 0xfb, 0x4e, 0x41, 0x3b, 0x0d, 0xd5, 0x07, 0xd9, 0xf1, 0x13, 0x16, 0x78, 0x54, 0x57, 0xd0, 0xf1, 0x4f, 0x20, 0xac, 0xcf, 0x9c, 0x3b, 0x33, 0x0b, 0x99, 0x54, 0xc3, 0x7f, 0x3e, 0x57, 0x26, 0x86, 0xd5, 0xa5, 0x2b, 0x8d, 0xe3, 0x19, 0x36, 0xf7}, + }, + { /* 15P*/ + addYX: fp.Elt{0x23, 0x69, 0x47, 0x14, 0xf9, 0x9a, 0x50, 0xff, 0x64, 0xd1, 0x50, 0x35, 0xc3, 0x11, 0xd3, 0x19, 0xcf, 0x87, 0xda, 0x30, 0x0b, 0x50, 0xda, 0xc0, 0xe0, 0x25, 0x00, 0xe5, 0x68, 0x93, 0x04, 0xc2, 0xaf, 0xbd, 0x2f, 0x36, 0x5f, 0x47, 0x96, 0x10, 0xa8, 0xbd, 0xe4, 0x88, 0xac, 0x80, 0x52, 0x61, 0x73, 0xe9, 0x63, 0xdd, 0x99, 0xad, 0x20, 0x5b}, + subYX: fp.Elt{0x1b, 0x5e, 0xa2, 0x2a, 0x25, 
0x0f, 0x86, 0xc0, 0xb1, 0x2e, 0x0c, 0x13, 0x40, 0x8d, 0xf0, 0xe6, 0x00, 0x55, 0x08, 0xc5, 0x7d, 0xf4, 0xc9, 0x31, 0x25, 0x3a, 0x99, 0x69, 0xdd, 0x67, 0x63, 0x9a, 0xd6, 0x89, 0x2e, 0xa1, 0x19, 0xca, 0x2c, 0xd9, 0x59, 0x5f, 0x5d, 0xc3, 0x6e, 0x62, 0x36, 0x12, 0x59, 0x15, 0xe1, 0xdc, 0xa4, 0xad, 0xc9, 0xd0}, + dt2: fp.Elt{0xbc, 0xea, 0xfc, 0xaf, 0x66, 0x23, 0xb7, 0x39, 0x6b, 0x2a, 0x96, 0xa8, 0x54, 0x43, 0xe9, 0xaa, 0x32, 0x40, 0x63, 0x92, 0x5e, 0xdf, 0x35, 0xc2, 0x9f, 0x24, 0x0c, 0xed, 0xfc, 0xde, 0x73, 0x8f, 0xa7, 0xd5, 0xa3, 0x2b, 0x18, 0x1f, 0xb0, 0xf8, 0xeb, 0x55, 0xd9, 0xc3, 0xfd, 0x28, 0x7c, 0x4f, 0xce, 0x0d, 0xf7, 0xae, 0xc2, 0x83, 0xc3, 0x78}, + }, + { /* 17P*/ + addYX: fp.Elt{0x71, 0xe6, 0x60, 0x93, 0x37, 0xdb, 0x01, 0xa5, 0x4c, 0xba, 0xe8, 0x8e, 0xd5, 0xf9, 0xd3, 0x98, 0xe5, 0xeb, 0xab, 0x3a, 0x15, 0x8b, 0x35, 0x60, 0xbe, 0xe5, 0x9c, 0x2d, 0x10, 0x9b, 0x2e, 0xcf, 0x65, 0x64, 0xea, 0x8f, 0x72, 0xce, 0xf5, 0x18, 0xe5, 0xe2, 0xf0, 0x0e, 0xae, 0x04, 0xec, 0xa0, 0x20, 0x65, 0x63, 0x07, 0xb1, 0x9f, 0x03, 0x97}, + subYX: fp.Elt{0x9e, 0x41, 0x64, 0x30, 0x95, 0x7f, 0x3a, 0x89, 0x7b, 0x0a, 0x79, 0x59, 0x23, 0x9a, 0x3b, 0xfe, 0xa4, 0x13, 0x08, 0xb2, 0x2e, 0x04, 0x50, 0x10, 0x30, 0xcd, 0x2e, 0xa4, 0x91, 0x71, 0x50, 0x36, 0x4a, 0x02, 0xf4, 0x8d, 0xa3, 0x36, 0x1b, 0xf4, 0x52, 0xba, 0x15, 0x04, 0x8b, 0x80, 0x25, 0xd9, 0xae, 0x67, 0x20, 0xd9, 0x88, 0x8f, 0x97, 0xa6}, + dt2: fp.Elt{0xb5, 0xe7, 0x46, 0xbd, 0x55, 0x23, 0xa0, 0x68, 0xc0, 0x12, 0xd9, 0xf1, 0x0a, 0x75, 0xe2, 0xda, 0xf4, 0x6b, 0xca, 0x14, 0xe4, 0x9f, 0x0f, 0xb5, 0x3c, 0xa6, 0xa5, 0xa2, 0x63, 0x94, 0xd1, 0x1c, 0x39, 0x58, 0x57, 0x02, 0x27, 0x98, 0xb6, 0x47, 0xc6, 0x61, 0x4b, 0x5c, 0xab, 0x6f, 0x2d, 0xab, 0xe3, 0xc1, 0x69, 0xf9, 0x12, 0xb0, 0xc8, 0xd5}, + }, + { /* 19P*/ + addYX: fp.Elt{0x19, 0x7d, 0xd5, 0xac, 0x79, 0xa2, 0x82, 0x9b, 0x28, 0x31, 0x22, 0xc0, 0x73, 0x02, 0x76, 0x17, 0x10, 0x70, 0x79, 0x57, 0xc9, 0x84, 0x62, 0x8e, 0x04, 0x04, 0x61, 0x67, 0x08, 0x48, 0xb4, 0x4b, 0xde, 0x53, 0x8c, 0xff, 0x36, 0x1b, 
0x62, 0x86, 0x5d, 0xe1, 0x9b, 0xb1, 0xe5, 0xe8, 0x44, 0x64, 0xa1, 0x68, 0x3f, 0xa8, 0x45, 0x52, 0x91, 0xed}, + subYX: fp.Elt{0x42, 0x1a, 0x36, 0x1f, 0x90, 0x15, 0x24, 0x8d, 0x24, 0x80, 0xe6, 0xfe, 0x1e, 0xf0, 0xad, 0xaf, 0x6a, 0x93, 0xf0, 0xa6, 0x0d, 0x5d, 0xea, 0xf6, 0x62, 0x96, 0x7a, 0x05, 0x76, 0x85, 0x74, 0x32, 0xc7, 0xc8, 0x64, 0x53, 0x62, 0xe7, 0x54, 0x84, 0xe0, 0x40, 0x66, 0x19, 0x70, 0x40, 0x95, 0x35, 0x68, 0x64, 0x43, 0xcd, 0xba, 0x29, 0x32, 0xa8}, + dt2: fp.Elt{0x3e, 0xf6, 0xd6, 0xe4, 0x99, 0xeb, 0x20, 0x66, 0x08, 0x2e, 0x26, 0x64, 0xd7, 0x76, 0xf3, 0xb4, 0xc5, 0xa4, 0x35, 0x92, 0xd2, 0x99, 0x70, 0x5a, 0x1a, 0xe9, 0xe9, 0x3d, 0x3b, 0xe1, 0xcd, 0x0e, 0xee, 0x24, 0x13, 0x03, 0x22, 0xd6, 0xd6, 0x72, 0x08, 0x2b, 0xde, 0xfd, 0x93, 0xed, 0x0c, 0x7f, 0x5e, 0x31, 0x22, 0x4d, 0x80, 0x78, 0xc0, 0x48}, + }, + { /* 21P*/ + addYX: fp.Elt{0x8f, 0x72, 0xd2, 0x9e, 0xc4, 0xcd, 0x2c, 0xbf, 0xa8, 0xd3, 0x24, 0x62, 0x28, 0xee, 0x39, 0x0a, 0x19, 0x3a, 0x58, 0xff, 0x21, 0x2e, 0x69, 0x6c, 0x6e, 0x18, 0xd0, 0xcd, 0x61, 0xc1, 0x18, 0x02, 0x5a, 0xe9, 0xe3, 0xef, 0x1f, 0x8e, 0x10, 0xe8, 0x90, 0x2b, 0x48, 0xcd, 0xee, 0x38, 0xbd, 0x3a, 0xca, 0xbc, 0x2d, 0xe2, 0x3a, 0x03, 0x71, 0x02}, + subYX: fp.Elt{0xf8, 0xa4, 0x32, 0x26, 0x66, 0xaf, 0x3b, 0x53, 0xe7, 0xb0, 0x91, 0x92, 0xf5, 0x3c, 0x74, 0xce, 0xf2, 0xdd, 0x68, 0xa9, 0xf4, 0xcd, 0x5f, 0x60, 0xab, 0x71, 0xdf, 0xcd, 0x5c, 0x5d, 0x51, 0x72, 0x3a, 0x96, 0xea, 0xd6, 0xde, 0x54, 0x8e, 0x55, 0x4c, 0x08, 0x4c, 0x60, 0xdd, 0x34, 0xa9, 0x6f, 0xf3, 0x04, 0x02, 0xa8, 0xa6, 0x4e, 0x4d, 0x62}, + dt2: fp.Elt{0x76, 0x4a, 0xae, 0x38, 0x62, 0x69, 0x72, 0xdc, 0xe8, 0x43, 0xbe, 0x1d, 0x61, 0xde, 0x31, 0xc3, 0x42, 0x8f, 0x33, 0x9d, 0xca, 0xc7, 0x9c, 0xec, 0x6a, 0xe2, 0xaa, 0x01, 0x49, 0x78, 0x8d, 0x72, 0x4f, 0x38, 0xea, 0x52, 0xc2, 0xd3, 0xc9, 0x39, 0x71, 0xba, 0xb9, 0x09, 0x9b, 0xa3, 0x7f, 0x45, 0x43, 0x65, 0x36, 0x29, 0xca, 0xe7, 0x5c, 0x5f}, + }, + { /* 23P*/ + addYX: fp.Elt{0x89, 0x42, 0x35, 0x48, 0x6d, 0x74, 0xe5, 0x1f, 0xc3, 0xdd, 0x28, 0x5b, 0x84, 
0x41, 0x33, 0x9f, 0x42, 0xf3, 0x1d, 0x5d, 0x15, 0x6d, 0x76, 0x33, 0x36, 0xaf, 0xe9, 0xdd, 0xfa, 0x63, 0x4f, 0x7a, 0x9c, 0xeb, 0x1c, 0x4f, 0x34, 0x65, 0x07, 0x54, 0xbb, 0x4c, 0x8b, 0x62, 0x9d, 0xd0, 0x06, 0x99, 0xb3, 0xe9, 0xda, 0x85, 0x19, 0xb0, 0x3d, 0x3c}, + subYX: fp.Elt{0xbb, 0x99, 0xf6, 0xbf, 0xaf, 0x2c, 0x22, 0x0d, 0x7a, 0xaa, 0x98, 0x6f, 0x01, 0x82, 0x99, 0xcf, 0x88, 0xbd, 0x0e, 0x3a, 0x89, 0xe0, 0x9c, 0x8c, 0x17, 0x20, 0xc4, 0xe0, 0xcf, 0x43, 0x7a, 0xef, 0x0d, 0x9f, 0x87, 0xd4, 0xfb, 0xf2, 0x96, 0xb8, 0x03, 0xe8, 0xcb, 0x5c, 0xec, 0x65, 0x5f, 0x49, 0xa4, 0x7c, 0x85, 0xb4, 0xf6, 0xc7, 0xdb, 0xa3}, + dt2: fp.Elt{0x11, 0xf3, 0x32, 0xa3, 0xa7, 0xb2, 0x7d, 0x51, 0x82, 0x44, 0xeb, 0xa2, 0x7d, 0x72, 0xcb, 0xc6, 0xf6, 0xc7, 0xb2, 0x38, 0x0e, 0x0f, 0x4f, 0x29, 0x00, 0xe4, 0x5b, 0x94, 0x46, 0x86, 0x66, 0xa1, 0x83, 0xb3, 0xeb, 0x15, 0xb6, 0x31, 0x50, 0x28, 0xeb, 0xed, 0x0d, 0x32, 0x39, 0xe9, 0x23, 0x81, 0x99, 0x3e, 0xff, 0x17, 0x4c, 0x11, 0x43, 0xd1}, + }, + { /* 25P*/ + addYX: fp.Elt{0xce, 0xe7, 0xf8, 0x94, 0x8f, 0x96, 0xf8, 0x96, 0xe6, 0x72, 0x20, 0x44, 0x2c, 0xa7, 0xfc, 0xba, 0xc8, 0xe1, 0xbb, 0xc9, 0x16, 0x85, 0xcd, 0x0b, 0xe5, 0xb5, 0x5a, 0x7f, 0x51, 0x43, 0x63, 0x8b, 0x23, 0x8e, 0x1d, 0x31, 0xff, 0x46, 0x02, 0x66, 0xcc, 0x9e, 0x4d, 0xa2, 0xca, 0xe2, 0xc7, 0xfd, 0x22, 0xb1, 0xdb, 0xdf, 0x6f, 0xe6, 0xa5, 0x82}, + subYX: fp.Elt{0xd0, 0xf5, 0x65, 0x40, 0xec, 0x8e, 0x65, 0x42, 0x78, 0xc1, 0x65, 0xe4, 0x10, 0xc8, 0x0b, 0x1b, 0xdd, 0x96, 0x68, 0xce, 0xee, 0x45, 0x55, 0xd8, 0x6e, 0xd3, 0xe6, 0x77, 0x19, 0xae, 0xc2, 0x8d, 0x8d, 0x3e, 0x14, 0x3f, 0x6d, 0x00, 0x2f, 0x9b, 0xd1, 0x26, 0x60, 0x28, 0x0f, 0x3a, 0x47, 0xb3, 0xe6, 0x68, 0x28, 0x24, 0x25, 0xca, 0xc8, 0x06}, + dt2: fp.Elt{0x54, 0xbb, 0x60, 0x92, 0xdb, 0x8f, 0x0f, 0x38, 0xe0, 0xe6, 0xe4, 0xc9, 0xcc, 0x14, 0x62, 0x01, 0xc4, 0x2b, 0x0f, 0xcf, 0xed, 0x7d, 0x8e, 0xa4, 0xd9, 0x73, 0x0b, 0xba, 0x0c, 0xaf, 0x0c, 0xf9, 0xe2, 0xeb, 0x29, 0x2a, 0x53, 0xdf, 0x2c, 0x5a, 0xfa, 0x8f, 0xc1, 0x01, 0xd7, 0xb1, 0x45, 0x73, 0x92, 
0x32, 0x83, 0x85, 0x12, 0x74, 0x89, 0x44}, + }, + { /* 27P*/ + addYX: fp.Elt{0x0b, 0x73, 0x3c, 0xc2, 0xb1, 0x2e, 0xe1, 0xa7, 0xf5, 0xc9, 0x7a, 0xfb, 0x3d, 0x2d, 0xac, 0x59, 0xdb, 0xfa, 0x36, 0x11, 0xd1, 0x13, 0x04, 0x51, 0x1d, 0xab, 0x9b, 0x6b, 0x93, 0xfe, 0xda, 0xb0, 0x8e, 0xb4, 0x79, 0x11, 0x21, 0x0f, 0x65, 0xb9, 0xbb, 0x79, 0x96, 0x2a, 0xfd, 0x30, 0xe0, 0xb4, 0x2d, 0x9a, 0x55, 0x25, 0x5d, 0xd4, 0xad, 0x2a}, + subYX: fp.Elt{0x9e, 0xc5, 0x04, 0xfe, 0xec, 0x3c, 0x64, 0x1c, 0xed, 0x95, 0xed, 0xae, 0xaf, 0x5c, 0x6e, 0x08, 0x9e, 0x02, 0x29, 0x59, 0x7e, 0x5f, 0xc4, 0x9a, 0xd5, 0x32, 0x72, 0x86, 0xe1, 0x4e, 0x3c, 0xce, 0x99, 0x69, 0x3b, 0xc4, 0xdd, 0x4d, 0xb7, 0xbb, 0xda, 0x3b, 0x1a, 0x99, 0xaa, 0x62, 0x15, 0xc1, 0xf0, 0xb6, 0x6c, 0xec, 0x56, 0xc1, 0xff, 0x0c}, + dt2: fp.Elt{0x2f, 0xf1, 0x3f, 0x7a, 0x2d, 0x56, 0x19, 0x7f, 0xea, 0xbe, 0x59, 0x2e, 0x13, 0x67, 0x81, 0xfb, 0xdb, 0xc8, 0xa3, 0x1d, 0xd5, 0xe9, 0x13, 0x8b, 0x29, 0xdf, 0xcf, 0x9f, 0xe7, 0xd9, 0x0b, 0x70, 0xd3, 0x15, 0x57, 0x4a, 0xe9, 0x50, 0x12, 0x1b, 0x81, 0x4b, 0x98, 0x98, 0xa8, 0x31, 0x1d, 0x27, 0x47, 0x38, 0xed, 0x57, 0x99, 0x26, 0xb2, 0xee}, + }, + { /* 29P*/ + addYX: fp.Elt{0x1c, 0xb2, 0xb2, 0x67, 0x3b, 0x8b, 0x3d, 0x5a, 0x30, 0x7e, 0x38, 0x7e, 0x3c, 0x3d, 0x28, 0x56, 0x59, 0xd8, 0x87, 0x53, 0x8b, 0xe6, 0x6c, 0x5d, 0xe5, 0x0a, 0x33, 0x10, 0xce, 0xa2, 0x17, 0x0d, 0xe8, 0x76, 0xee, 0x68, 0xa8, 0x72, 0x54, 0xbd, 0xa6, 0x24, 0x94, 0x6e, 0x77, 0xc7, 0x53, 0xb7, 0x89, 0x1c, 0x7a, 0xe9, 0x78, 0x9a, 0x74, 0x5f}, + subYX: fp.Elt{0x76, 0x96, 0x1c, 0xcf, 0x08, 0x55, 0xd8, 0x1e, 0x0d, 0xa3, 0x59, 0x95, 0x32, 0xf4, 0xc2, 0x8e, 0x84, 0x5e, 0x4b, 0x04, 0xda, 0x71, 0xc9, 0x78, 0x52, 0xde, 0x14, 0xb4, 0x31, 0xf4, 0xd4, 0xb8, 0x58, 0xc5, 0x20, 0xe8, 0xdd, 0x15, 0xb5, 0xee, 0xea, 0x61, 0xe0, 0xf5, 0xd6, 0xae, 0x55, 0x59, 0x05, 0x3e, 0xaf, 0x74, 0xac, 0x1f, 0x17, 0x82}, + dt2: fp.Elt{0x59, 0x24, 0xcd, 0xfc, 0x11, 0x7e, 0x85, 0x18, 0x3d, 0x69, 0xf7, 0x71, 0x31, 0x66, 0x98, 0x42, 0x95, 0x00, 0x8c, 0xb2, 0xae, 0x39, 0x7e, 0x85, 
0xd6, 0xb0, 0x02, 0xec, 0xce, 0xfc, 0x25, 0xb2, 0xe3, 0x99, 0x8e, 0x5b, 0x61, 0x96, 0x2e, 0x6d, 0x96, 0x57, 0x71, 0xa5, 0x93, 0x41, 0x0e, 0x6f, 0xfd, 0x0a, 0xbf, 0xa9, 0xf7, 0x56, 0xa9, 0x3e}, + }, + { /* 31P*/ + addYX: fp.Elt{0xa2, 0x2e, 0x0c, 0x17, 0x4d, 0xcc, 0x85, 0x2c, 0x18, 0xa0, 0xd2, 0x08, 0xba, 0x11, 0xfa, 0x47, 0x71, 0x86, 0xaf, 0x36, 0x6a, 0xd7, 0xfe, 0xb9, 0xb0, 0x2f, 0x89, 0x98, 0x49, 0x69, 0xf8, 0x6a, 0xad, 0x27, 0x5e, 0x0a, 0x22, 0x60, 0x5e, 0x5d, 0xca, 0x06, 0x51, 0x27, 0x99, 0x29, 0x85, 0x68, 0x98, 0xe1, 0xc4, 0x21, 0x50, 0xa0, 0xe9, 0xc1}, + subYX: fp.Elt{0x4d, 0x70, 0xee, 0x91, 0x92, 0x3f, 0xb7, 0xd3, 0x1d, 0xdb, 0x8d, 0x6e, 0x16, 0xf5, 0x65, 0x7d, 0x5f, 0xb5, 0x6c, 0x59, 0x26, 0x70, 0x4b, 0xf2, 0xfc, 0xe7, 0xdf, 0x86, 0xfe, 0xa5, 0xa7, 0xa6, 0x5d, 0xfb, 0x06, 0xe9, 0xf9, 0xcc, 0xc0, 0x37, 0xcc, 0xd8, 0x09, 0x04, 0xd2, 0xa5, 0x1d, 0xd7, 0xb7, 0xce, 0x92, 0xac, 0x3c, 0xad, 0xfb, 0xae}, + dt2: fp.Elt{0x17, 0xa3, 0x9a, 0xc7, 0x86, 0x2a, 0x51, 0xf7, 0x96, 0x79, 0x49, 0x22, 0x2e, 0x5a, 0x01, 0x5c, 0xb5, 0x95, 0xd4, 0xe8, 0xcb, 0x00, 0xca, 0x2d, 0x55, 0xb6, 0x34, 0x36, 0x0b, 0x65, 0x46, 0xf0, 0x49, 0xfc, 0x87, 0x86, 0xe5, 0xc3, 0x15, 0xdb, 0x32, 0xcd, 0xf2, 0xd3, 0x82, 0x4c, 0xe6, 0x61, 0x8a, 0xaf, 0xd4, 0x9e, 0x0f, 0x5a, 0xf2, 0x81}, + }, + { /* 33P*/ + addYX: fp.Elt{0x88, 0x10, 0xc0, 0xcb, 0xf5, 0x77, 0xae, 0xa5, 0xbe, 0xf6, 0xcd, 0x2e, 0x8b, 0x7e, 0xbd, 0x79, 0x62, 0x4a, 0xeb, 0x69, 0xc3, 0x28, 0xaa, 0x72, 0x87, 0xa9, 0x25, 0x87, 0x46, 0xea, 0x0e, 0x62, 0xa3, 0x6a, 0x1a, 0xe2, 0xba, 0xdc, 0x81, 0x10, 0x33, 0x01, 0xf6, 0x16, 0x89, 0x80, 0xc6, 0xcd, 0xdb, 0xdc, 0xba, 0x0e, 0x09, 0x4a, 0x35, 0x4a}, + subYX: fp.Elt{0x86, 0xb2, 0x2b, 0xd0, 0xb8, 0x4a, 0x6d, 0x66, 0x7b, 0x32, 0xdf, 0x3b, 0x1a, 0x19, 0x1f, 0x63, 0xee, 0x1f, 0x3d, 0x1c, 0x5c, 0x14, 0x60, 0x5b, 0x72, 0x49, 0x07, 0xb1, 0x0d, 0x72, 0xc6, 0x35, 0xf0, 0xbc, 0x5e, 0xda, 0x80, 0x6b, 0x64, 0x5b, 0xe5, 0x34, 0x54, 0x39, 0xdd, 0xe6, 0x3c, 0xcb, 0xe5, 0x29, 0x32, 0x06, 0xc6, 0xb1, 0x96, 0x34}, + dt2: 
fp.Elt{0x85, 0x86, 0xf5, 0x84, 0x86, 0xe6, 0x77, 0x8a, 0x71, 0x85, 0x0c, 0x4f, 0x81, 0x5b, 0x29, 0x06, 0xb5, 0x2e, 0x26, 0x71, 0x07, 0x78, 0x07, 0xae, 0xbc, 0x95, 0x46, 0xc3, 0x65, 0xac, 0xe3, 0x76, 0x51, 0x7d, 0xd4, 0x85, 0x31, 0xe3, 0x43, 0xf3, 0x1b, 0x7c, 0xf7, 0x6b, 0x2c, 0xf8, 0x1c, 0xbb, 0x8d, 0xca, 0xab, 0x4b, 0xba, 0x7f, 0xa4, 0xe2}, + }, + { /* 35P*/ + addYX: fp.Elt{0x1a, 0xee, 0xe7, 0xa4, 0x8a, 0x9d, 0x53, 0x80, 0xc6, 0xb8, 0x4e, 0xdc, 0x89, 0xe0, 0xc4, 0x2b, 0x60, 0x52, 0x6f, 0xec, 0x81, 0xd2, 0x55, 0x6b, 0x1b, 0x6f, 0x17, 0x67, 0x8e, 0x42, 0x26, 0x4c, 0x65, 0x23, 0x29, 0xc6, 0x7b, 0xcd, 0x9f, 0xad, 0x4b, 0x42, 0xd3, 0x0c, 0x75, 0xc3, 0x8a, 0xf5, 0xbe, 0x9e, 0x55, 0xf7, 0x47, 0x5d, 0xbd, 0x3a}, + subYX: fp.Elt{0x0d, 0xa8, 0x3b, 0xf9, 0xc7, 0x7e, 0xc6, 0x86, 0x94, 0xc0, 0x01, 0xff, 0x27, 0xce, 0x43, 0xac, 0xe5, 0xe1, 0xd2, 0x8d, 0xc1, 0x22, 0x31, 0xbe, 0xe1, 0xaf, 0xf9, 0x4a, 0x78, 0xa1, 0x0c, 0xaa, 0xd4, 0x80, 0xe4, 0x09, 0x8d, 0xfb, 0x1d, 0x52, 0xc8, 0x60, 0x2d, 0xf2, 0xa2, 0x89, 0x02, 0x56, 0x3d, 0x56, 0x27, 0x85, 0xc7, 0xf0, 0x2b, 0x9a}, + dt2: fp.Elt{0x62, 0x7c, 0xc7, 0x6b, 0x2c, 0x9d, 0x0a, 0x7c, 0xe5, 0x50, 0x3c, 0xe6, 0x87, 0x1c, 0x82, 0x30, 0x67, 0x3c, 0x39, 0xb6, 0xa0, 0x31, 0xfb, 0x03, 0x7b, 0xa1, 0x58, 0xdf, 0x12, 0x76, 0x5d, 0x5d, 0x0a, 0x8f, 0x9b, 0x37, 0x32, 0xc3, 0x60, 0x33, 0xea, 0x9f, 0x0a, 0x99, 0xfa, 0x20, 0xd0, 0x33, 0x21, 0xc3, 0x94, 0xd4, 0x86, 0x49, 0x7c, 0x4e}, + }, + { /* 37P*/ + addYX: fp.Elt{0xc7, 0x0c, 0x71, 0xfe, 0x55, 0xd1, 0x95, 0x8f, 0x43, 0xbb, 0x6b, 0x74, 0x30, 0xbd, 0xe8, 0x6f, 0x1c, 0x1b, 0x06, 0x62, 0xf5, 0xfc, 0x65, 0xa0, 0xeb, 0x81, 0x12, 0xc9, 0x64, 0x66, 0x61, 0xde, 0xf3, 0x6d, 0xd4, 0xae, 0x8e, 0xb1, 0x72, 0xe0, 0xcd, 0x37, 0x01, 0x28, 0x52, 0xd7, 0x39, 0x46, 0x0c, 0x55, 0xcf, 0x47, 0x70, 0xef, 0xa1, 0x17}, + subYX: fp.Elt{0x8d, 0x58, 0xde, 0x83, 0x88, 0x16, 0x0e, 0x12, 0x42, 0x03, 0x50, 0x60, 0x4b, 0xdf, 0xbf, 0x95, 0xcc, 0x7d, 0x18, 0x17, 0x7e, 0x31, 0x5d, 0x8a, 0x66, 0xc1, 0xcf, 0x14, 0xea, 0xf4, 0xf4, 0xe5, 
0x63, 0x2d, 0x32, 0x86, 0x9b, 0xed, 0x1f, 0x4f, 0x03, 0xaf, 0x33, 0x92, 0xcb, 0xaf, 0x9c, 0x05, 0x0d, 0x47, 0x1b, 0x42, 0xba, 0x13, 0x22, 0x98}, + dt2: fp.Elt{0xb5, 0x48, 0xeb, 0x7d, 0x3d, 0x10, 0x9f, 0x59, 0xde, 0xf8, 0x1c, 0x4f, 0x7d, 0x9d, 0x40, 0x4d, 0x9e, 0x13, 0x24, 0xb5, 0x21, 0x09, 0xb7, 0xee, 0x98, 0x5c, 0x56, 0xbc, 0x5e, 0x2b, 0x78, 0x38, 0x06, 0xac, 0xe3, 0xe0, 0xfa, 0x2e, 0xde, 0x4f, 0xd2, 0xb3, 0xfb, 0x2d, 0x71, 0x84, 0xd1, 0x9d, 0x12, 0x5b, 0x35, 0xc8, 0x03, 0x68, 0x67, 0xc7}, + }, + { /* 39P*/ + addYX: fp.Elt{0xb6, 0x65, 0xfb, 0xa7, 0x06, 0x35, 0xbb, 0xe0, 0x31, 0x8d, 0x91, 0x40, 0x98, 0xab, 0x30, 0xe4, 0xca, 0x12, 0x59, 0x89, 0xed, 0x65, 0x5d, 0x7f, 0xae, 0x69, 0xa0, 0xa4, 0xfa, 0x78, 0xb4, 0xf7, 0xed, 0xae, 0x86, 0x78, 0x79, 0x64, 0x24, 0xa6, 0xd4, 0xe1, 0xf6, 0xd3, 0xa0, 0x89, 0xba, 0x20, 0xf4, 0x54, 0x0d, 0x8f, 0xdb, 0x1a, 0x79, 0xdb}, + subYX: fp.Elt{0xe1, 0x82, 0x0c, 0x4d, 0xde, 0x9f, 0x40, 0xf0, 0xc1, 0xbd, 0x8b, 0xd3, 0x24, 0x03, 0xcd, 0xf2, 0x92, 0x7d, 0xe2, 0x68, 0x7f, 0xf1, 0xbe, 0x69, 0xde, 0x34, 0x67, 0x4c, 0x85, 0x3b, 0xec, 0x98, 0xcc, 0x4d, 0x3e, 0xc0, 0x96, 0x27, 0xe6, 0x75, 0xfc, 0xdf, 0x37, 0xc0, 0x1e, 0x27, 0xe0, 0xf6, 0xc2, 0xbd, 0xbc, 0x3d, 0x9b, 0x39, 0xdc, 0xe2}, + dt2: fp.Elt{0xd8, 0x29, 0xa7, 0x39, 0xe3, 0x9f, 0x2f, 0x0e, 0x4b, 0x24, 0x21, 0x70, 0xef, 0xfd, 0x91, 0xea, 0xbf, 0xe1, 0x72, 0x90, 0xcc, 0xc9, 0x84, 0x0e, 0xad, 0xd5, 0xe6, 0xbb, 0xc5, 0x99, 0x7f, 0xa4, 0xf0, 0x2e, 0xcc, 0x95, 0x64, 0x27, 0x19, 0xd8, 0x4c, 0x27, 0x0d, 0xff, 0xb6, 0x29, 0xe2, 0x6c, 0xfa, 0xbb, 0x4d, 0x9c, 0xbb, 0xaf, 0xa5, 0xec}, + }, + { /* 41P*/ + addYX: fp.Elt{0xd6, 0x33, 0x3f, 0x9f, 0xcf, 0xfd, 0x4c, 0xd1, 0xfe, 0xe5, 0xeb, 0x64, 0x27, 0xae, 0x7a, 0xa2, 0x82, 0x50, 0x6d, 0xaa, 0xe3, 0x5d, 0xe2, 0x48, 0x60, 0xb3, 0x76, 0x04, 0xd9, 0x19, 0xa7, 0xa1, 0x73, 0x8d, 0x38, 0xa9, 0xaf, 0x45, 0xb5, 0xb2, 0x62, 0x9b, 0xf1, 0x35, 0x7b, 0x84, 0x66, 0xeb, 0x06, 0xef, 0xf1, 0xb2, 0x2d, 0x6a, 0x61, 0x15}, + subYX: fp.Elt{0x86, 0x50, 0x42, 0xf7, 0xda, 0x59, 0xb2, 
0xcf, 0x0d, 0x3d, 0xee, 0x8e, 0x53, 0x5d, 0xf7, 0x9e, 0x6a, 0x26, 0x2d, 0xc7, 0x8c, 0x8e, 0x18, 0x50, 0x6d, 0xb7, 0x51, 0x4c, 0xa7, 0x52, 0x6e, 0x0e, 0x0a, 0x16, 0x74, 0xb2, 0x81, 0x8b, 0x56, 0x27, 0x22, 0x84, 0xf4, 0x56, 0xc5, 0x06, 0xe1, 0x8b, 0xca, 0x2d, 0xdb, 0x9a, 0xf6, 0x10, 0x9c, 0x51}, + dt2: fp.Elt{0x1f, 0x16, 0xa2, 0x78, 0x96, 0x1b, 0x85, 0x9c, 0x76, 0x49, 0xd4, 0x0f, 0xac, 0xb0, 0xf4, 0xd0, 0x06, 0x2c, 0x7e, 0x6d, 0x6e, 0x8e, 0xc7, 0x9f, 0x18, 0xad, 0xfc, 0x88, 0x0c, 0x0c, 0x09, 0x05, 0x05, 0xa0, 0x79, 0x72, 0x32, 0x72, 0x87, 0x0f, 0x49, 0x87, 0x0c, 0xb4, 0x12, 0xc2, 0x09, 0xf8, 0x9f, 0x30, 0x72, 0xa9, 0x47, 0x13, 0x93, 0x49}, + }, + { /* 43P*/ + addYX: fp.Elt{0xcc, 0xb1, 0x4c, 0xd3, 0xc0, 0x9e, 0x9e, 0x4d, 0x6d, 0x28, 0x0b, 0xa5, 0x94, 0xa7, 0x2e, 0xc2, 0xc7, 0xaf, 0x29, 0x73, 0xc9, 0x68, 0xea, 0x0f, 0x34, 0x37, 0x8d, 0x96, 0x8f, 0x3a, 0x3d, 0x73, 0x1e, 0x6d, 0x9f, 0xcf, 0x8d, 0x83, 0xb5, 0x71, 0xb9, 0xe1, 0x4b, 0x67, 0x71, 0xea, 0xcf, 0x56, 0xe5, 0xeb, 0x72, 0x15, 0x2f, 0x9e, 0xa8, 0xaa}, + subYX: fp.Elt{0xf4, 0x3e, 0x85, 0x1c, 0x1a, 0xef, 0x50, 0xd1, 0xb4, 0x20, 0xb2, 0x60, 0x05, 0x98, 0xfe, 0x47, 0x3b, 0xc1, 0x76, 0xca, 0x2c, 0x4e, 0x5a, 0x42, 0xa3, 0xf7, 0x20, 0xaa, 0x57, 0x39, 0xee, 0x34, 0x1f, 0xe1, 0x68, 0xd3, 0x7e, 0x06, 0xc4, 0x6c, 0xc7, 0x76, 0x2b, 0xe4, 0x1c, 0x48, 0x44, 0xe6, 0xe5, 0x44, 0x24, 0x8d, 0xb3, 0xb6, 0x88, 0x32}, + dt2: fp.Elt{0x18, 0xa7, 0xba, 0xd0, 0x44, 0x6f, 0x33, 0x31, 0x00, 0xf8, 0xf6, 0x12, 0xe3, 0xc5, 0xc7, 0xb5, 0x91, 0x9c, 0x91, 0xb5, 0x75, 0x18, 0x18, 0x8a, 0xab, 0xed, 0x24, 0x11, 0x2e, 0xce, 0x5a, 0x0f, 0x94, 0x5f, 0x2e, 0xca, 0xd3, 0x80, 0xea, 0xe5, 0x34, 0x96, 0x67, 0x8b, 0x6a, 0x26, 0x5e, 0xc8, 0x9d, 0x2c, 0x5e, 0x6c, 0xa2, 0x0c, 0xbf, 0xf0}, + }, + { /* 45P*/ + addYX: fp.Elt{0xb3, 0xbf, 0xa3, 0x85, 0xee, 0xf6, 0x58, 0x02, 0x78, 0xc4, 0x30, 0xd6, 0x57, 0x59, 0x8c, 0x88, 0x08, 0x7c, 0xbc, 0xbe, 0x0a, 0x74, 0xa9, 0xde, 0x69, 0xe7, 0x41, 0xd8, 0xbf, 0x66, 0x8d, 0x3d, 0x28, 0x00, 0x8c, 0x47, 0x65, 0x34, 0xfe, 0x86, 
0x9e, 0x6a, 0xf2, 0x41, 0x6a, 0x94, 0xc4, 0x88, 0x75, 0x23, 0x0d, 0x52, 0x69, 0xee, 0x07, 0x89}, + subYX: fp.Elt{0x22, 0x3c, 0xa1, 0x70, 0x58, 0x97, 0x93, 0xbe, 0x59, 0xa8, 0x0b, 0x8a, 0x46, 0x2a, 0x38, 0x1e, 0x08, 0x6b, 0x61, 0x9f, 0xf2, 0x4a, 0x8b, 0x80, 0x68, 0x6e, 0xc8, 0x92, 0x60, 0xf3, 0xc9, 0x89, 0xb2, 0x6d, 0x63, 0xb0, 0xeb, 0x83, 0x15, 0x63, 0x0e, 0x64, 0xbb, 0xb8, 0xfe, 0xb4, 0x81, 0x90, 0x01, 0x28, 0x10, 0xb9, 0x74, 0x6e, 0xde, 0xa4}, + dt2: fp.Elt{0x1a, 0x23, 0x45, 0xa8, 0x6f, 0x4e, 0xa7, 0x4a, 0x0c, 0xeb, 0xb0, 0x43, 0xf9, 0xef, 0x99, 0x60, 0x5b, 0xdb, 0x66, 0xc0, 0x86, 0x71, 0x43, 0xb1, 0x22, 0x7b, 0x1c, 0xe7, 0x8d, 0x09, 0x1d, 0x83, 0x76, 0x9c, 0xd3, 0x5a, 0xdd, 0x42, 0xd9, 0x2f, 0x2d, 0xba, 0x7a, 0xc2, 0xd9, 0x6b, 0xd4, 0x7a, 0xf1, 0xd5, 0x5f, 0x6b, 0x85, 0xbf, 0x0b, 0xf1}, + }, + { /* 47P*/ + addYX: fp.Elt{0xb2, 0x83, 0xfa, 0x1f, 0xd2, 0xce, 0xb6, 0xf2, 0x2d, 0xea, 0x1b, 0xe5, 0x29, 0xa5, 0x72, 0xf9, 0x25, 0x48, 0x4e, 0xf2, 0x50, 0x1b, 0x39, 0xda, 0x34, 0xc5, 0x16, 0x13, 0xb4, 0x0c, 0xa1, 0x00, 0x79, 0x7a, 0xf5, 0x8b, 0xf3, 0x70, 0x14, 0xb6, 0xfc, 0x9a, 0x47, 0x68, 0x1e, 0x42, 0x70, 0x64, 0x2a, 0x84, 0x3e, 0x3d, 0x20, 0x58, 0xf9, 0x6a}, + subYX: fp.Elt{0xd9, 0xee, 0xc0, 0xc4, 0xf5, 0xc2, 0x86, 0xaf, 0x45, 0xd2, 0xd2, 0x87, 0x1b, 0x64, 0xd5, 0xe0, 0x8c, 0x44, 0x00, 0x4f, 0x43, 0x89, 0x04, 0x48, 0x4a, 0x0b, 0xca, 0x94, 0x06, 0x2f, 0x23, 0x5b, 0x6c, 0x8d, 0x44, 0x66, 0x53, 0xf5, 0x5a, 0x20, 0x72, 0x28, 0x58, 0x84, 0xcc, 0x73, 0x22, 0x5e, 0xd1, 0x0b, 0x56, 0x5e, 0x6a, 0xa3, 0x11, 0x91}, + dt2: fp.Elt{0x6e, 0x9f, 0x88, 0xa8, 0x68, 0x2f, 0x12, 0x37, 0x88, 0xfc, 0x92, 0x8f, 0x24, 0xeb, 0x5b, 0x2a, 0x2a, 0xd0, 0x14, 0x40, 0x4c, 0xa9, 0xa4, 0x03, 0x0c, 0x45, 0x48, 0x13, 0xe8, 0xa6, 0x37, 0xab, 0xc0, 0x06, 0x38, 0x6c, 0x96, 0x73, 0x40, 0x6c, 0xc6, 0xea, 0x56, 0xc6, 0xe9, 0x1a, 0x69, 0xeb, 0x7a, 0xd1, 0x33, 0x69, 0x58, 0x2b, 0xea, 0x2f}, + }, + { /* 49P*/ + addYX: fp.Elt{0x58, 0xa8, 0x05, 0x41, 0x00, 0x9d, 0xaa, 0xd9, 0x98, 0xcf, 0xb9, 0x41, 0xb5, 0x4a, 0x8d, 
0xe2, 0xe7, 0xc0, 0x72, 0xef, 0xc8, 0x28, 0x6b, 0x68, 0x9d, 0xc9, 0xdf, 0x05, 0x8b, 0xd0, 0x04, 0x74, 0x79, 0x45, 0x52, 0x05, 0xa3, 0x6e, 0x35, 0x3a, 0xe3, 0xef, 0xb2, 0xdc, 0x08, 0x6f, 0x4e, 0x76, 0x85, 0x67, 0xba, 0x23, 0x8f, 0xdd, 0xaf, 0x09}, + subYX: fp.Elt{0xb4, 0x38, 0xc8, 0xff, 0x4f, 0x65, 0x2a, 0x7e, 0xad, 0xb1, 0xc6, 0xb9, 0x3d, 0xd6, 0xf7, 0x14, 0xcf, 0xf6, 0x98, 0x75, 0xbb, 0x47, 0x83, 0x90, 0xe7, 0xe1, 0xf6, 0x14, 0x99, 0x7e, 0xfa, 0xe4, 0x77, 0x24, 0xe3, 0xe7, 0xf0, 0x1e, 0xdb, 0x27, 0x4e, 0x16, 0x04, 0xf2, 0x08, 0x52, 0xfc, 0xec, 0x55, 0xdb, 0x2e, 0x67, 0xe1, 0x94, 0x32, 0x89}, + dt2: fp.Elt{0x00, 0xad, 0x03, 0x35, 0x1a, 0xb1, 0x88, 0xf0, 0xc9, 0x11, 0xe4, 0x12, 0x52, 0x61, 0xfd, 0x8a, 0x1b, 0x6a, 0x0a, 0x4c, 0x42, 0x46, 0x22, 0x0e, 0xa5, 0xf9, 0xe2, 0x50, 0xf2, 0xb2, 0x1f, 0x20, 0x78, 0x10, 0xf6, 0xbf, 0x7f, 0x0c, 0x9c, 0xad, 0x40, 0x8b, 0x82, 0xd4, 0xba, 0x69, 0x09, 0xac, 0x4b, 0x6d, 0xc4, 0x49, 0x17, 0x81, 0x57, 0x3b}, + }, + { /* 51P*/ + addYX: fp.Elt{0x0d, 0xfe, 0xb4, 0x35, 0x11, 0xbd, 0x1d, 0x6b, 0xc2, 0xc5, 0x3b, 0xd2, 0x23, 0x2c, 0x72, 0xe3, 0x48, 0xb1, 0x48, 0x73, 0xfb, 0xa3, 0x21, 0x6e, 0xc0, 0x09, 0x69, 0xac, 0xe1, 0x60, 0xbc, 0x24, 0x03, 0x99, 0x63, 0x0a, 0x00, 0xf0, 0x75, 0xf6, 0x92, 0xc5, 0xd6, 0xdb, 0x51, 0xd4, 0x7d, 0xe6, 0xf4, 0x11, 0x79, 0xd7, 0xc3, 0xaf, 0x48, 0xd0}, + subYX: fp.Elt{0xf4, 0x4f, 0xaf, 0x31, 0xe3, 0x10, 0x89, 0x95, 0xf0, 0x8a, 0xf6, 0x31, 0x9f, 0x48, 0x02, 0xba, 0x42, 0x2b, 0x3c, 0x22, 0x8b, 0xcc, 0x12, 0x98, 0x6e, 0x7a, 0x64, 0x3a, 0xc4, 0xca, 0x32, 0x2a, 0x72, 0xf8, 0x2c, 0xcf, 0x78, 0x5e, 0x7a, 0x75, 0x6e, 0x72, 0x46, 0x48, 0x62, 0x28, 0xac, 0x58, 0x1a, 0xc6, 0x59, 0x88, 0x2a, 0x44, 0x9e, 0x83}, + dt2: fp.Elt{0xb3, 0xde, 0x36, 0xfd, 0xeb, 0x1b, 0xd4, 0x24, 0x1b, 0x08, 0x8c, 0xfe, 0xa9, 0x41, 0xa1, 0x64, 0xf2, 0x6d, 0xdb, 0xf9, 0x94, 0xae, 0x86, 0x71, 0xab, 0x10, 0xbf, 0xa3, 0xb2, 0xa0, 0xdf, 0x10, 0x8c, 0x74, 0xce, 0xb3, 0xfc, 0xdb, 0xba, 0x15, 0xf6, 0x91, 0x7a, 0x9c, 0x36, 0x1e, 0x45, 0x07, 0x3c, 0xec, 0x1a, 
0x61, 0x26, 0x93, 0xe3, 0x50}, + }, + { /* 53P*/ + addYX: fp.Elt{0xc5, 0x50, 0xc5, 0x83, 0xb0, 0xbd, 0xd9, 0xf6, 0x6d, 0x15, 0x5e, 0xc1, 0x1a, 0x33, 0xa0, 0xce, 0x13, 0x70, 0x3b, 0xe1, 0x31, 0xc6, 0xc4, 0x02, 0xec, 0x8c, 0xd5, 0x9c, 0x97, 0xd3, 0x12, 0xc4, 0xa2, 0xf9, 0xd5, 0xfb, 0x22, 0x69, 0x94, 0x09, 0x2f, 0x59, 0xce, 0xdb, 0xf2, 0xf2, 0x00, 0xe0, 0xa9, 0x08, 0x44, 0x2e, 0x8b, 0x6b, 0xf5, 0xb3}, + subYX: fp.Elt{0x90, 0xdd, 0xec, 0xa2, 0x65, 0xb7, 0x61, 0xbc, 0xaa, 0x70, 0xa2, 0x15, 0xd8, 0xb0, 0xf8, 0x8e, 0x23, 0x3d, 0x9f, 0x46, 0xa3, 0x29, 0x20, 0xd1, 0xa1, 0x15, 0x81, 0xc6, 0xb6, 0xde, 0xbe, 0x60, 0x63, 0x24, 0xac, 0x15, 0xfb, 0xeb, 0xd3, 0xea, 0x57, 0x13, 0x86, 0x38, 0x1e, 0x22, 0xf4, 0x8c, 0x5d, 0xaf, 0x1b, 0x27, 0x21, 0x4f, 0xa3, 0x63}, + dt2: fp.Elt{0x07, 0x15, 0x87, 0xc4, 0xfd, 0xa1, 0x97, 0x7a, 0x07, 0x1f, 0x56, 0xcc, 0xe3, 0x6a, 0x01, 0x90, 0xce, 0xf9, 0xfa, 0x50, 0xb2, 0xe0, 0x87, 0x8b, 0x6c, 0x63, 0x6c, 0xf6, 0x2a, 0x09, 0xef, 0xef, 0xd2, 0x31, 0x40, 0x25, 0xf6, 0x84, 0xcb, 0xe0, 0xc4, 0x23, 0xc1, 0xcb, 0xe2, 0x02, 0x83, 0x2d, 0xed, 0x74, 0x74, 0x8b, 0xf8, 0x7c, 0x81, 0x18}, + }, + { /* 55P*/ + addYX: fp.Elt{0x9e, 0xe5, 0x59, 0x95, 0x63, 0x2e, 0xac, 0x8b, 0x03, 0x3c, 0xc1, 0x8e, 0xe1, 0x5b, 0x56, 0x3c, 0x16, 0x41, 0xe4, 0xc2, 0x60, 0x0c, 0x6d, 0x65, 0x9f, 0xfc, 0x27, 0x68, 0x43, 0x44, 0x05, 0x12, 0x6c, 0xda, 0x04, 0xef, 0xcf, 0xcf, 0xdc, 0x0a, 0x1a, 0x7f, 0x12, 0xd3, 0xeb, 0x02, 0xb6, 0x04, 0xca, 0xd6, 0xcb, 0xf0, 0x22, 0xba, 0x35, 0x6d}, + subYX: fp.Elt{0x09, 0x6d, 0xf9, 0x64, 0x4c, 0xe6, 0x41, 0xff, 0x01, 0x4d, 0xce, 0x1e, 0xfa, 0x38, 0xa2, 0x25, 0x62, 0xff, 0x03, 0x39, 0x18, 0x91, 0xbb, 0x9d, 0xce, 0x02, 0xf0, 0xf1, 0x3c, 0x55, 0x18, 0xa9, 0xab, 0x4d, 0xd2, 0x35, 0xfd, 0x8d, 0xa9, 0xb2, 0xad, 0xb7, 0x06, 0x6e, 0xc6, 0x69, 0x49, 0xd6, 0x98, 0x98, 0x0b, 0x22, 0x81, 0x6b, 0xbd, 0xa0}, + dt2: fp.Elt{0x22, 0xf4, 0x85, 0x5d, 0x2b, 0xf1, 0x55, 0xa5, 0xd6, 0x27, 0x86, 0x57, 0x12, 0x1f, 0x16, 0x0a, 0x5a, 0x9b, 0xf2, 0x38, 0xb6, 0x28, 0xd8, 0x99, 0x0c, 0x89, 
0x1d, 0x7f, 0xca, 0x21, 0x17, 0x1a, 0x0b, 0x02, 0x5f, 0x77, 0x2f, 0x73, 0x30, 0x7c, 0xc8, 0xd7, 0x2b, 0xcc, 0xe7, 0xf3, 0x21, 0xac, 0x53, 0xa7, 0x11, 0x5d, 0xd8, 0x1d, 0x9b, 0xf5}, + }, + { /* 57P*/ + addYX: fp.Elt{0x94, 0x63, 0x5d, 0xef, 0xfd, 0x6d, 0x25, 0x4e, 0x6d, 0x29, 0x03, 0xed, 0x24, 0x28, 0x27, 0x57, 0x47, 0x3e, 0x6a, 0x1a, 0xfe, 0x37, 0xee, 0x5f, 0x83, 0x29, 0x14, 0xfd, 0x78, 0x25, 0x8a, 0xe1, 0x02, 0x38, 0xd8, 0xca, 0x65, 0x55, 0x40, 0x7d, 0x48, 0x2c, 0x7c, 0x7e, 0x60, 0xb6, 0x0c, 0x6d, 0xf7, 0xe8, 0xb3, 0x62, 0x53, 0xd6, 0x9c, 0x2b}, + subYX: fp.Elt{0x47, 0x25, 0x70, 0x62, 0xf5, 0x65, 0x93, 0x62, 0x08, 0xac, 0x59, 0x66, 0xdb, 0x08, 0xd9, 0x1a, 0x19, 0xaf, 0xf4, 0xef, 0x02, 0xa2, 0x78, 0xa9, 0x55, 0x1c, 0xfa, 0x08, 0x11, 0xcb, 0xa3, 0x71, 0x74, 0xb1, 0x62, 0xe7, 0xc7, 0xf3, 0x5a, 0xb5, 0x8b, 0xd4, 0xf6, 0x10, 0x57, 0x79, 0x72, 0x2f, 0x13, 0x86, 0x7b, 0x44, 0x5f, 0x48, 0xfd, 0x88}, + dt2: fp.Elt{0x10, 0x02, 0xcd, 0x05, 0x9a, 0xc3, 0x32, 0x6d, 0x10, 0x3a, 0x74, 0xba, 0x06, 0xc4, 0x3b, 0x34, 0xbc, 0x36, 0xed, 0xa3, 0xba, 0x9a, 0xdb, 0x6d, 0xd4, 0x69, 0x99, 0x97, 0xd0, 0xe4, 0xdd, 0xf5, 0xd4, 0x7c, 0xd3, 0x4e, 0xab, 0xd1, 0x3b, 0xbb, 0xe9, 0xc7, 0x6a, 0x94, 0x25, 0x61, 0xf0, 0x06, 0xc5, 0x12, 0xa8, 0x86, 0xe5, 0x35, 0x46, 0xeb}, + }, + { /* 59P*/ + addYX: fp.Elt{0x9e, 0x95, 0x11, 0xc6, 0xc7, 0xe8, 0xee, 0x5a, 0x26, 0xa0, 0x72, 0x72, 0x59, 0x91, 0x59, 0x16, 0x49, 0x99, 0x7e, 0xbb, 0xd7, 0x15, 0xb4, 0xf2, 0x40, 0xf9, 0x5a, 0x4d, 0xc8, 0xa0, 0xe2, 0x34, 0x7b, 0x34, 0xf3, 0x99, 0xbf, 0xa9, 0xf3, 0x79, 0xc1, 0x1a, 0x0c, 0xf4, 0x86, 0x74, 0x4e, 0xcb, 0xbc, 0x90, 0xad, 0xb6, 0x51, 0x6d, 0xaa, 0x33}, + subYX: fp.Elt{0x9f, 0xd1, 0xc5, 0xa2, 0x6c, 0x24, 0x88, 0x15, 0x71, 0x68, 0xf6, 0x07, 0x45, 0x02, 0xc4, 0x73, 0x7e, 0x75, 0x87, 0xca, 0x7c, 0xf0, 0x92, 0x00, 0x75, 0xd6, 0x5a, 0xdd, 0xe0, 0x64, 0x16, 0x9d, 0x62, 0x80, 0x33, 0x9f, 0xf4, 0x8e, 0x1a, 0x15, 0x1c, 0xd3, 0x0f, 0x4d, 0x4f, 0x62, 0x2d, 0xd7, 0xa5, 0x77, 0xe3, 0xea, 0xf0, 0xfb, 0x1a, 0xdb}, + dt2: fp.Elt{0x6a, 
0xa2, 0xb1, 0xaa, 0xfb, 0x5a, 0x32, 0x4e, 0xff, 0x47, 0x06, 0xd5, 0x9a, 0x4f, 0xce, 0x83, 0x5b, 0x82, 0x34, 0x3e, 0x47, 0xb8, 0xf8, 0xe9, 0x7c, 0x67, 0x69, 0x8d, 0x9c, 0xb7, 0xde, 0x57, 0xf4, 0x88, 0x41, 0x56, 0x0c, 0x87, 0x1e, 0xc9, 0x2f, 0x54, 0xbf, 0x5c, 0x68, 0x2c, 0xd9, 0xc4, 0xef, 0x53, 0x73, 0x1e, 0xa6, 0x38, 0x02, 0x10}, + }, + { /* 61P*/ + addYX: fp.Elt{0x08, 0x80, 0x4a, 0xc9, 0xb7, 0xa8, 0x88, 0xd9, 0xfc, 0x6a, 0xc0, 0x3e, 0xc2, 0x33, 0x4d, 0x2b, 0x2a, 0xa3, 0x6d, 0x72, 0x3e, 0xdc, 0x34, 0x68, 0x08, 0xbf, 0x27, 0xef, 0xf4, 0xff, 0xe2, 0x0c, 0x31, 0x0c, 0xa2, 0x0a, 0x1f, 0x65, 0xc1, 0x4c, 0x61, 0xd3, 0x1b, 0xbc, 0x25, 0xb1, 0xd0, 0xd4, 0x89, 0xb2, 0x53, 0xfb, 0x43, 0xa5, 0xaf, 0x04}, + subYX: fp.Elt{0xe3, 0xe1, 0x37, 0xad, 0x58, 0xa9, 0x55, 0x81, 0xee, 0x64, 0x21, 0xb9, 0xf5, 0x4c, 0x35, 0xea, 0x4a, 0xd3, 0x26, 0xaa, 0x90, 0xd4, 0x60, 0x46, 0x09, 0x4b, 0x4a, 0x62, 0xf9, 0xcd, 0xe1, 0xee, 0xbb, 0xc2, 0x09, 0x0b, 0xb0, 0x96, 0x8e, 0x43, 0x77, 0xaf, 0x25, 0x20, 0x5e, 0x47, 0xe4, 0x1d, 0x50, 0x69, 0x74, 0x08, 0xd7, 0xb9, 0x90, 0x13}, + dt2: fp.Elt{0x51, 0x91, 0x95, 0x64, 0x03, 0x16, 0xfd, 0x6e, 0x26, 0x94, 0x6b, 0x61, 0xe7, 0xd9, 0xe0, 0x4a, 0x6d, 0x7c, 0xfa, 0xc0, 0xe2, 0x43, 0x23, 0x53, 0x70, 0xf5, 0x6f, 0x73, 0x8b, 0x81, 0xb0, 0x0c, 0xee, 0x2e, 0x46, 0xf2, 0x8d, 0xa6, 0xfb, 0xb5, 0x1c, 0x33, 0xbf, 0x90, 0x59, 0xc9, 0x7c, 0xb8, 0x6f, 0xad, 0x75, 0x02, 0x90, 0x8e, 0x59, 0x75}, + }, + { /* 63P*/ + addYX: fp.Elt{0x36, 0x4d, 0x77, 0x04, 0xb8, 0x7d, 0x4a, 0xd1, 0xc5, 0xbb, 0x7b, 0x50, 0x5f, 0x8d, 0x9d, 0x62, 0x0f, 0x66, 0x71, 0xec, 0x87, 0xc5, 0x80, 0x82, 0xc8, 0xf4, 0x6a, 0x94, 0x92, 0x5b, 0xb0, 0x16, 0x9b, 0xb2, 0xc9, 0x6f, 0x2b, 0x2d, 0xee, 0x95, 0x73, 0x2e, 0xc2, 0x1b, 0xc5, 0x55, 0x36, 0x86, 0x24, 0xf8, 0x20, 0x05, 0x0d, 0x93, 0xd7, 0x76}, + subYX: fp.Elt{0x7f, 0x01, 0xeb, 0x2e, 0x48, 0x4d, 0x1d, 0xf1, 0x06, 0x7e, 0x7c, 0x2a, 0x43, 0xbf, 0x28, 0xac, 0xe9, 0x58, 0x13, 0xc8, 0xbf, 0x8e, 0xc0, 0xef, 0xe8, 0x4f, 0x46, 0x8a, 0xe7, 0xc0, 0xf6, 0x0f, 0x0a, 0x03, 
0x48, 0x91, 0x55, 0x39, 0x2a, 0xe3, 0xdc, 0xf6, 0x22, 0x9d, 0x4d, 0x71, 0x55, 0x68, 0x25, 0x6e, 0x95, 0x52, 0xee, 0x4c, 0xd9, 0x01}, + dt2: fp.Elt{0xac, 0x33, 0x3f, 0x7c, 0x27, 0x35, 0x15, 0x91, 0x33, 0x8d, 0xf9, 0xc4, 0xf4, 0xf3, 0x90, 0x09, 0x75, 0x69, 0x62, 0x9f, 0x61, 0x35, 0x83, 0x92, 0x04, 0xef, 0x96, 0x38, 0x80, 0x9e, 0x88, 0xb3, 0x67, 0x95, 0xbe, 0x79, 0x3c, 0x35, 0xd8, 0xdc, 0xb2, 0x3e, 0x2d, 0xe6, 0x46, 0xbe, 0x81, 0xf3, 0x32, 0x0e, 0x37, 0x23, 0x75, 0x2a, 0x3d, 0xa0}, + }, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twist_basemult.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twist_basemult.go new file mode 100644 index 00000000000..f6ac5edbbbc --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/ecc/goldilocks/twist_basemult.go @@ -0,0 +1,62 @@ +package goldilocks + +import ( + "crypto/subtle" + + mlsb "github.com/cloudflare/circl/math/mlsbset" +) + +const ( + // MLSBRecoding parameters + fxT = 448 + fxV = 2 + fxW = 3 + fx2w1 = 1 << (uint(fxW) - 1) +) + +// ScalarBaseMult returns kG where G is the generator point. 
+func (e twistCurve) ScalarBaseMult(k *Scalar) *twistPoint { + m, err := mlsb.New(fxT, fxV, fxW) + if err != nil { + panic(err) + } + if m.IsExtended() { + panic("not extended") + } + + var isZero int + if k.IsZero() { + isZero = 1 + } + subtle.ConstantTimeCopy(isZero, k[:], order[:]) + + minusK := *k + isEven := 1 - int(k[0]&0x1) + minusK.Neg() + subtle.ConstantTimeCopy(isEven, k[:], minusK[:]) + c, err := m.Encode(k[:]) + if err != nil { + panic(err) + } + + gP := c.Exp(groupMLSB{}) + P := gP.(*twistPoint) + P.cneg(uint(isEven)) + return P +} + +type groupMLSB struct{} + +func (e groupMLSB) ExtendedEltP() mlsb.EltP { return nil } +func (e groupMLSB) Sqr(x mlsb.EltG) { x.(*twistPoint).Double() } +func (e groupMLSB) Mul(x mlsb.EltG, y mlsb.EltP) { x.(*twistPoint).mixAddZ1(y.(*preTwistPointAffine)) } +func (e groupMLSB) Identity() mlsb.EltG { return twistCurve{}.Identity() } +func (e groupMLSB) NewEltP() mlsb.EltP { return &preTwistPointAffine{} } +func (e groupMLSB) Lookup(a mlsb.EltP, v uint, s, u int32) { + Tabj := &tabFixMult[v] + P := a.(*preTwistPointAffine) + for k := range Tabj { + P.cmov(&Tabj[k], uint(subtle.ConstantTimeEq(int32(k), u))) + } + P.cneg(int(s >> 31)) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/conv/conv.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/conv/conv.go new file mode 100644 index 00000000000..649a8e931d6 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/conv/conv.go @@ -0,0 +1,140 @@ +package conv + +import ( + "encoding/binary" + "fmt" + "math/big" + "strings" +) + +// BytesLe2Hex returns an hexadecimal string of a number stored in a +// little-endian order slice x. 
+func BytesLe2Hex(x []byte) string { + b := &strings.Builder{} + b.Grow(2*len(x) + 2) + fmt.Fprint(b, "0x") + if len(x) == 0 { + fmt.Fprint(b, "00") + } + for i := len(x) - 1; i >= 0; i-- { + fmt.Fprintf(b, "%02x", x[i]) + } + return b.String() +} + +// BytesLe2BigInt converts a little-endian slice x into a big-endian +// math/big.Int. +func BytesLe2BigInt(x []byte) *big.Int { + n := len(x) + b := new(big.Int) + if len(x) > 0 { + y := make([]byte, n) + for i := 0; i < n; i++ { + y[n-1-i] = x[i] + } + b.SetBytes(y) + } + return b +} + +// BytesBe2Uint64Le converts a big-endian slice x to a little-endian slice of uint64. +func BytesBe2Uint64Le(x []byte) []uint64 { + l := len(x) + z := make([]uint64, (l+7)/8) + blocks := l / 8 + for i := 0; i < blocks; i++ { + z[i] = binary.BigEndian.Uint64(x[l-8*(i+1):]) + } + remBytes := l % 8 + for i := 0; i < remBytes; i++ { + z[blocks] |= uint64(x[l-1-8*blocks-i]) << uint(8*i) + } + return z +} + +// BigInt2BytesLe stores a positive big.Int number x into a little-endian slice z. +// The slice is modified if the bitlength of x <= 8*len(z) (padding with zeros). +// If x does not fit in the slice or is negative, z is not modified. +func BigInt2BytesLe(z []byte, x *big.Int) { + xLen := (x.BitLen() + 7) >> 3 + zLen := len(z) + if zLen >= xLen && x.Sign() >= 0 { + y := x.Bytes() + for i := 0; i < xLen; i++ { + z[i] = y[xLen-1-i] + } + for i := xLen; i < zLen; i++ { + z[i] = 0 + } + } +} + +// Uint64Le2BigInt converts a little-endian slice x into a big number. +func Uint64Le2BigInt(x []uint64) *big.Int { + n := len(x) + b := new(big.Int) + var bi big.Int + for i := n - 1; i >= 0; i-- { + bi.SetUint64(x[i]) + b.Lsh(b, 64) + b.Add(b, &bi) + } + return b +} + +// Uint64Le2BytesLe converts a little-endian slice x to a little-endian slice of bytes. 
+func Uint64Le2BytesLe(x []uint64) []byte { + b := make([]byte, 8*len(x)) + n := len(x) + for i := 0; i < n; i++ { + binary.LittleEndian.PutUint64(b[i*8:], x[i]) + } + return b +} + +// Uint64Le2BytesBe converts a little-endian slice x to a big-endian slice of bytes. +func Uint64Le2BytesBe(x []uint64) []byte { + b := make([]byte, 8*len(x)) + n := len(x) + for i := 0; i < n; i++ { + binary.BigEndian.PutUint64(b[i*8:], x[n-1-i]) + } + return b +} + +// Uint64Le2Hex returns an hexadecimal string of a number stored in a +// little-endian order slice x. +func Uint64Le2Hex(x []uint64) string { + b := new(strings.Builder) + b.Grow(16*len(x) + 2) + fmt.Fprint(b, "0x") + if len(x) == 0 { + fmt.Fprint(b, "00") + } + for i := len(x) - 1; i >= 0; i-- { + fmt.Fprintf(b, "%016x", x[i]) + } + return b.String() +} + +// BigInt2Uint64Le stores a positive big.Int number x into a little-endian slice z. +// The slice is modified if the bitlength of x <= 8*len(z) (padding with zeros). +// If x does not fit in the slice or is negative, z is not modified. +func BigInt2Uint64Le(z []uint64, x *big.Int) { + xLen := (x.BitLen() + 63) >> 6 // number of 64-bit words + zLen := len(z) + if zLen >= xLen && x.Sign() > 0 { + var y, yi big.Int + y.Set(x) + two64 := big.NewInt(1) + two64.Lsh(two64, 64).Sub(two64, big.NewInt(1)) + for i := 0; i < xLen; i++ { + yi.And(&y, two64) + z[i] = yi.Uint64() + y.Rsh(&y, 64) + } + } + for i := xLen; i < zLen; i++ { + z[i] = 0 + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/doc.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/doc.go new file mode 100644 index 00000000000..7e023090707 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/doc.go @@ -0,0 +1,62 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// Package sha3 implements the SHA-3 fixed-output-length hash functions and +// the SHAKE variable-output-length hash functions defined by FIPS-202. +// +// Both types of hash function use the "sponge" construction and the Keccak +// permutation. For a detailed specification see http://keccak.noekeon.org/ +// +// # Guidance +// +// If you aren't sure what function you need, use SHAKE256 with at least 64 +// bytes of output. The SHAKE instances are faster than the SHA3 instances; +// the latter have to allocate memory to conform to the hash.Hash interface. +// +// If you need a secret-key MAC (message authentication code), prepend the +// secret key to the input, hash with SHAKE256 and read at least 32 bytes of +// output. +// +// # Security strengths +// +// The SHA3-x (x equals 224, 256, 384, or 512) functions have a security +// strength against preimage attacks of x bits. Since they only produce "x" +// bits of output, their collision-resistance is only "x/2" bits. +// +// The SHAKE-256 and -128 functions have a generic security strength of 256 and +// 128 bits against all attacks, provided that at least 2x bits of their output +// is used. Requesting more than 64 or 32 bytes of output, respectively, does +// not increase the collision-resistance of the SHAKE functions. +// +// # The sponge construction +// +// A sponge builds a pseudo-random function from a public pseudo-random +// permutation, by applying the permutation to a state of "rate + capacity" +// bytes, but hiding "capacity" of the bytes. +// +// A sponge starts out with a zero state. To hash an input using a sponge, up +// to "rate" bytes of the input are XORed into the sponge's state. The sponge +// is then "full" and the permutation is applied to "empty" it. This process is +// repeated until all the input has been "absorbed". The input is then padded. +// The digest is "squeezed" from the sponge in the same way, except that output +// is copied out instead of input being XORed in. 
+// +// A sponge is parameterized by its generic security strength, which is equal +// to half its capacity; capacity + rate is equal to the permutation's width. +// Since the KeccakF-1600 permutation is 1600 bits (200 bytes) wide, this means +// that the security strength of a sponge instance is equal to (1600 - bitrate) / 2. +// +// # Recommendations +// +// The SHAKE functions are recommended for most new uses. They can produce +// output of arbitrary length. SHAKE256, with an output length of at least +// 64 bytes, provides 256-bit security against all attacks. The Keccak team +// recommends it for most applications upgrading from SHA2-512. (NIST chose a +// much stronger, but much slower, sponge instance for SHA3-512.) +// +// The SHA-3 functions are "drop-in" replacements for the SHA-2 functions. +// They produce output of the same length, with the same security strengths +// against all attacks. This means, in particular, that SHA3-256 only has +// 128-bit collision resistance, because its output length is 32 bytes. +package sha3 diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/hashes.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/hashes.go new file mode 100644 index 00000000000..7d2365a76ed --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/hashes.go @@ -0,0 +1,69 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +// This file provides functions for creating instances of the SHA-3 +// and SHAKE hash functions, as well as utility functions for hashing +// bytes. + +// New224 creates a new SHA3-224 hash. +// Its generic security strength is 224 bits against preimage attacks, +// and 112 bits against collision attacks. +func New224() State { + return State{rate: 144, outputLen: 28, dsbyte: 0x06} +} + +// New256 creates a new SHA3-256 hash. 
+// Its generic security strength is 256 bits against preimage attacks, +// and 128 bits against collision attacks. +func New256() State { + return State{rate: 136, outputLen: 32, dsbyte: 0x06} +} + +// New384 creates a new SHA3-384 hash. +// Its generic security strength is 384 bits against preimage attacks, +// and 192 bits against collision attacks. +func New384() State { + return State{rate: 104, outputLen: 48, dsbyte: 0x06} +} + +// New512 creates a new SHA3-512 hash. +// Its generic security strength is 512 bits against preimage attacks, +// and 256 bits against collision attacks. +func New512() State { + return State{rate: 72, outputLen: 64, dsbyte: 0x06} +} + +// Sum224 returns the SHA3-224 digest of the data. +func Sum224(data []byte) (digest [28]byte) { + h := New224() + _, _ = h.Write(data) + h.Sum(digest[:0]) + return +} + +// Sum256 returns the SHA3-256 digest of the data. +func Sum256(data []byte) (digest [32]byte) { + h := New256() + _, _ = h.Write(data) + h.Sum(digest[:0]) + return +} + +// Sum384 returns the SHA3-384 digest of the data. +func Sum384(data []byte) (digest [48]byte) { + h := New384() + _, _ = h.Write(data) + h.Sum(digest[:0]) + return +} + +// Sum512 returns the SHA3-512 digest of the data. +func Sum512(data []byte) (digest [64]byte) { + h := New512() + _, _ = h.Write(data) + h.Sum(digest[:0]) + return +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/keccakf.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/keccakf.go new file mode 100644 index 00000000000..ab19d0ad124 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/keccakf.go @@ -0,0 +1,383 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +// KeccakF1600 applies the Keccak permutation to a 1600b-wide +// state represented as a slice of 25 uint64s. 
+// nolint:funlen +func KeccakF1600(a *[25]uint64) { + // Implementation translated from Keccak-inplace.c + // in the keccak reference code. + var t, bc0, bc1, bc2, bc3, bc4, d0, d1, d2, d3, d4 uint64 + + for i := 0; i < 24; i += 4 { + // Combines the 5 steps in each round into 2 steps. + // Unrolls 4 rounds per loop and spreads some steps across rounds. + + // Round 1 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[6] ^ d1 + bc1 = t<<44 | t>>(64-44) + t = a[12] ^ d2 + bc2 = t<<43 | t>>(64-43) + t = a[18] ^ d3 + bc3 = t<<21 | t>>(64-21) + t = a[24] ^ d4 + bc4 = t<<14 | t>>(64-14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ RC[i] + a[6] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + + t = a[10] ^ d0 + bc2 = t<<3 | t>>(64-3) + t = a[16] ^ d1 + bc3 = t<<45 | t>>(64-45) + t = a[22] ^ d2 + bc4 = t<<61 | t>>(64-61) + t = a[3] ^ d3 + bc0 = t<<28 | t>>(64-28) + t = a[9] ^ d4 + bc1 = t<<20 | t>>(64-20) + a[10] = bc0 ^ (bc2 &^ bc1) + a[16] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc4 = t<<18 | t>>(64-18) + t = a[1] ^ d1 + bc0 = t<<1 | t>>(64-1) + t = a[7] ^ d2 + bc1 = t<<6 | t>>(64-6) + t = a[13] ^ d3 + bc2 = t<<25 | t>>(64-25) + t = a[19] ^ d4 + bc3 = t<<8 | t>>(64-8) + a[20] = bc0 ^ (bc2 &^ bc1) + a[1] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc1 = t<<36 | t>>(64-36) + t = a[11] ^ d1 + bc2 = t<<10 | t>>(64-10) + t = a[17] ^ d2 + bc3 = t<<15 | t>>(64-15) + t = a[23] ^ d3 + bc4 = t<<56 | t>>(64-56) + t = 
a[4] ^ d4 + bc0 = t<<27 | t>>(64-27) + a[5] = bc0 ^ (bc2 &^ bc1) + a[11] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc3 = t<<41 | t>>(64-41) + t = a[21] ^ d1 + bc4 = t<<2 | t>>(64-2) + t = a[2] ^ d2 + bc0 = t<<62 | t>>(64-62) + t = a[8] ^ d3 + bc1 = t<<55 | t>>(64-55) + t = a[14] ^ d4 + bc2 = t<<39 | t>>(64-39) + a[15] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + // Round 2 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[16] ^ d1 + bc1 = t<<44 | t>>(64-44) + t = a[7] ^ d2 + bc2 = t<<43 | t>>(64-43) + t = a[23] ^ d3 + bc3 = t<<21 | t>>(64-21) + t = a[14] ^ d4 + bc4 = t<<14 | t>>(64-14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ RC[i+1] + a[16] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc2 = t<<3 | t>>(64-3) + t = a[11] ^ d1 + bc3 = t<<45 | t>>(64-45) + t = a[2] ^ d2 + bc4 = t<<61 | t>>(64-61) + t = a[18] ^ d3 + bc0 = t<<28 | t>>(64-28) + t = a[9] ^ d4 + bc1 = t<<20 | t>>(64-20) + a[20] = bc0 ^ (bc2 &^ bc1) + a[11] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc4 = t<<18 | t>>(64-18) + t = a[6] ^ d1 + bc0 = t<<1 | t>>(64-1) + t = a[22] ^ d2 + bc1 = t<<6 | t>>(64-6) + t = a[13] ^ d3 + bc2 = t<<25 | t>>(64-25) + t = a[4] ^ d4 + bc3 = t<<8 | t>>(64-8) + a[15] = bc0 ^ (bc2 &^ bc1) + a[6] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ 
bc0) + + t = a[10] ^ d0 + bc1 = t<<36 | t>>(64-36) + t = a[1] ^ d1 + bc2 = t<<10 | t>>(64-10) + t = a[17] ^ d2 + bc3 = t<<15 | t>>(64-15) + t = a[8] ^ d3 + bc4 = t<<56 | t>>(64-56) + t = a[24] ^ d4 + bc0 = t<<27 | t>>(64-27) + a[10] = bc0 ^ (bc2 &^ bc1) + a[1] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc3 = t<<41 | t>>(64-41) + t = a[21] ^ d1 + bc4 = t<<2 | t>>(64-2) + t = a[12] ^ d2 + bc0 = t<<62 | t>>(64-62) + t = a[3] ^ d3 + bc1 = t<<55 | t>>(64-55) + t = a[19] ^ d4 + bc2 = t<<39 | t>>(64-39) + a[5] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + // Round 3 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[11] ^ d1 + bc1 = t<<44 | t>>(64-44) + t = a[22] ^ d2 + bc2 = t<<43 | t>>(64-43) + t = a[8] ^ d3 + bc3 = t<<21 | t>>(64-21) + t = a[19] ^ d4 + bc4 = t<<14 | t>>(64-14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ RC[i+2] + a[11] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc2 = t<<3 | t>>(64-3) + t = a[1] ^ d1 + bc3 = t<<45 | t>>(64-45) + t = a[12] ^ d2 + bc4 = t<<61 | t>>(64-61) + t = a[23] ^ d3 + bc0 = t<<28 | t>>(64-28) + t = a[9] ^ d4 + bc1 = t<<20 | t>>(64-20) + a[15] = bc0 ^ (bc2 &^ bc1) + a[1] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc4 = t<<18 | t>>(64-18) + t = a[16] ^ d1 + bc0 = t<<1 | t>>(64-1) + t = a[2] ^ d2 + bc1 = t<<6 | t>>(64-6) + t = a[13] ^ d3 + bc2 = t<<25 | 
t>>(64-25) + t = a[24] ^ d4 + bc3 = t<<8 | t>>(64-8) + a[5] = bc0 ^ (bc2 &^ bc1) + a[16] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc1 = t<<36 | t>>(64-36) + t = a[6] ^ d1 + bc2 = t<<10 | t>>(64-10) + t = a[17] ^ d2 + bc3 = t<<15 | t>>(64-15) + t = a[3] ^ d3 + bc4 = t<<56 | t>>(64-56) + t = a[14] ^ d4 + bc0 = t<<27 | t>>(64-27) + a[20] = bc0 ^ (bc2 &^ bc1) + a[6] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + t = a[10] ^ d0 + bc3 = t<<41 | t>>(64-41) + t = a[21] ^ d1 + bc4 = t<<2 | t>>(64-2) + t = a[7] ^ d2 + bc0 = t<<62 | t>>(64-62) + t = a[18] ^ d3 + bc1 = t<<55 | t>>(64-55) + t = a[4] ^ d4 + bc2 = t<<39 | t>>(64-39) + a[10] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + // Round 4 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[1] ^ d1 + bc1 = t<<44 | t>>(64-44) + t = a[2] ^ d2 + bc2 = t<<43 | t>>(64-43) + t = a[3] ^ d3 + bc3 = t<<21 | t>>(64-21) + t = a[4] ^ d4 + bc4 = t<<14 | t>>(64-14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ RC[i+3] + a[1] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc2 = t<<3 | t>>(64-3) + t = a[6] ^ d1 + bc3 = t<<45 | t>>(64-45) + t = a[7] ^ d2 + bc4 = t<<61 | t>>(64-61) + t = a[8] ^ d3 + bc0 = t<<28 | t>>(64-28) + t = a[9] ^ d4 + bc1 = t<<20 | t>>(64-20) + a[5] = bc0 ^ (bc2 &^ bc1) + a[6] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ 
(bc1 &^ bc0) + + t = a[10] ^ d0 + bc4 = t<<18 | t>>(64-18) + t = a[11] ^ d1 + bc0 = t<<1 | t>>(64-1) + t = a[12] ^ d2 + bc1 = t<<6 | t>>(64-6) + t = a[13] ^ d3 + bc2 = t<<25 | t>>(64-25) + t = a[14] ^ d4 + bc3 = t<<8 | t>>(64-8) + a[10] = bc0 ^ (bc2 &^ bc1) + a[11] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc1 = t<<36 | t>>(64-36) + t = a[16] ^ d1 + bc2 = t<<10 | t>>(64-10) + t = a[17] ^ d2 + bc3 = t<<15 | t>>(64-15) + t = a[18] ^ d3 + bc4 = t<<56 | t>>(64-56) + t = a[19] ^ d4 + bc0 = t<<27 | t>>(64-27) + a[15] = bc0 ^ (bc2 &^ bc1) + a[16] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc3 = t<<41 | t>>(64-41) + t = a[21] ^ d1 + bc4 = t<<2 | t>>(64-2) + t = a[22] ^ d2 + bc0 = t<<62 | t>>(64-62) + t = a[23] ^ d3 + bc1 = t<<55 | t>>(64-55) + t = a[24] ^ d4 + bc2 = t<<39 | t>>(64-39) + a[20] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/rc.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/rc.go new file mode 100644 index 00000000000..6a3df42f305 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/rc.go @@ -0,0 +1,29 @@ +package sha3 + +// RC stores the round constants for use in the ι step. 
+var RC = [24]uint64{ + 0x0000000000000001, + 0x0000000000008082, + 0x800000000000808A, + 0x8000000080008000, + 0x000000000000808B, + 0x0000000080000001, + 0x8000000080008081, + 0x8000000000008009, + 0x000000000000008A, + 0x0000000000000088, + 0x0000000080008009, + 0x000000008000000A, + 0x000000008000808B, + 0x800000000000008B, + 0x8000000000008089, + 0x8000000000008003, + 0x8000000000008002, + 0x8000000000000080, + 0x000000000000800A, + 0x800000008000000A, + 0x8000000080008081, + 0x8000000000008080, + 0x0000000080000001, + 0x8000000080008008, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/sha3.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/sha3.go new file mode 100644 index 00000000000..b35cd006b03 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/sha3.go @@ -0,0 +1,195 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +// spongeDirection indicates the direction bytes are flowing through the sponge. +type spongeDirection int + +const ( + // spongeAbsorbing indicates that the sponge is absorbing input. + spongeAbsorbing spongeDirection = iota + // spongeSqueezing indicates that the sponge is being squeezed. + spongeSqueezing +) + +const ( + // maxRate is the maximum size of the internal buffer. SHAKE-256 + // currently needs the largest buffer. + maxRate = 168 +) + +func (d *State) buf() []byte { + return d.storage.asBytes()[d.bufo:d.bufe] +} + +type State struct { + // Generic sponge components. + a [25]uint64 // main state of the hash + rate int // the number of bytes of state to use + + bufo int // offset of buffer in storage + bufe int // end of buffer in storage + + // dsbyte contains the "domain separation" bits and the first bit of + // the padding. 
Sections 6.1 and 6.2 of [1] separate the outputs of the + // SHA-3 and SHAKE functions by appending bitstrings to the message. + // Using a little-endian bit-ordering convention, these are "01" for SHA-3 + // and "1111" for SHAKE, or 00000010b and 00001111b, respectively. Then the + // padding rule from section 5.1 is applied to pad the message to a multiple + // of the rate, which involves adding a "1" bit, zero or more "0" bits, and + // a final "1" bit. We merge the first "1" bit from the padding into dsbyte, + // giving 00000110b (0x06) and 00011111b (0x1f). + // [1] http://csrc.nist.gov/publications/drafts/fips-202/fips_202_draft.pdf + // "Draft FIPS 202: SHA-3 Standard: Permutation-Based Hash and + // Extendable-Output Functions (May 2014)" + dsbyte byte + + storage storageBuf + + // Specific to SHA-3 and SHAKE. + outputLen int // the default output size in bytes + state spongeDirection // whether the sponge is absorbing or squeezing +} + +// BlockSize returns the rate of sponge underlying this hash function. +func (d *State) BlockSize() int { return d.rate } + +// Size returns the output size of the hash function in bytes. +func (d *State) Size() int { return d.outputLen } + +// Reset clears the internal state by zeroing the sponge state and +// the byte buffer, and setting Sponge.state to absorbing. +func (d *State) Reset() { + // Zero the permutation's state. + for i := range d.a { + d.a[i] = 0 + } + d.state = spongeAbsorbing + d.bufo = 0 + d.bufe = 0 +} + +func (d *State) clone() *State { + ret := *d + return &ret +} + +// permute applies the KeccakF-1600 permutation. It handles +// any input-output buffering. +func (d *State) permute() { + switch d.state { + case spongeAbsorbing: + // If we're absorbing, we need to xor the input into the state + // before applying the permutation. 
+ xorIn(d, d.buf()) + d.bufe = 0 + d.bufo = 0 + KeccakF1600(&d.a) + case spongeSqueezing: + // If we're squeezing, we need to apply the permutation before + // copying more output. + KeccakF1600(&d.a) + d.bufe = d.rate + d.bufo = 0 + copyOut(d, d.buf()) + } +} + +// padAndPermute appends the domain separation bits in dsbyte, applies +// the multi-bitrate 10..1 padding rule, and permutes the state. +func (d *State) padAndPermute(dsbyte byte) { + // Pad with this instance's domain-separator bits. We know that there's + // at least one byte of space in d.buf() because, if it were full, + // permute would have been called to empty it. dsbyte also contains the + // first one bit for the padding. See the comment in the state struct. + zerosStart := d.bufe + 1 + d.bufe = d.rate + buf := d.buf() + buf[zerosStart-1] = dsbyte + for i := zerosStart; i < d.rate; i++ { + buf[i] = 0 + } + // This adds the final one bit for the padding. Because of the way that + // bits are numbered from the LSB upwards, the final bit is the MSB of + // the last byte. + buf[d.rate-1] ^= 0x80 + // Apply the permutation + d.permute() + d.state = spongeSqueezing + d.bufe = d.rate + copyOut(d, buf) +} + +// Write absorbs more data into the hash's state. It panics if input is +// written after output has been read from it. +func (d *State) Write(p []byte) (written int, err error) { + if d.state != spongeAbsorbing { + panic("sha3: write to sponge after read") + } + written = len(p) + + for len(p) > 0 { + bufl := d.bufe - d.bufo + if bufl == 0 && len(p) >= d.rate { + // The fast path; absorb a full "rate" bytes of input and apply the permutation. + xorIn(d, p[:d.rate]) + p = p[d.rate:] + KeccakF1600(&d.a) + } else { + // The slow path; buffer the input until we can fill the sponge, and then xor it in. + todo := d.rate - bufl + if todo > len(p) { + todo = len(p) + } + d.bufe += todo + buf := d.buf() + copy(buf[bufl:], p[:todo]) + p = p[todo:] + + // If the sponge is full, apply the permutation. 
+ if d.bufe == d.rate { + d.permute() + } + } + } + + return written, nil +} + +// Read squeezes an arbitrary number of bytes from the sponge. +func (d *State) Read(out []byte) (n int, err error) { + // If we're still absorbing, pad and apply the permutation. + if d.state == spongeAbsorbing { + d.padAndPermute(d.dsbyte) + } + + n = len(out) + + // Now, do the squeezing. + for len(out) > 0 { + buf := d.buf() + n := copy(out, buf) + d.bufo += n + out = out[n:] + + // Apply the permutation if we've squeezed the sponge dry. + if d.bufo == d.bufe { + d.permute() + } + } + + return +} + +// Sum applies padding to the hash state and then squeezes out the desired +// number of output bytes. +func (d *State) Sum(in []byte) []byte { + // Make a copy of the original hash so that caller can keep writing + // and summing. + dup := d.clone() + hash := make([]byte, dup.outputLen) + _, _ = dup.Read(hash) + return append(in, hash...) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/sha3_s390x.s b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/sha3_s390x.s new file mode 100644 index 00000000000..8a4458f63f9 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/sha3_s390x.s @@ -0,0 +1,33 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// +build !gccgo,!appengine + +#include "textflag.h" + +// func kimd(function code, chain *[200]byte, src []byte) +TEXT ·kimd(SB), NOFRAME|NOSPLIT, $0-40 + MOVD function+0(FP), R0 + MOVD chain+8(FP), R1 + LMG src+16(FP), R2, R3 // R2=base, R3=len + +continue: + WORD $0xB93E0002 // KIMD --, R2 + BVS continue // continue if interrupted + MOVD $0, R0 // reset R0 for pre-go1.8 compilers + RET + +// func klmd(function code, chain *[200]byte, dst, src []byte) +TEXT ·klmd(SB), NOFRAME|NOSPLIT, $0-64 + // TODO: SHAKE support + MOVD function+0(FP), R0 + MOVD chain+8(FP), R1 + LMG dst+16(FP), R2, R3 // R2=base, R3=len + LMG src+40(FP), R4, R5 // R4=base, R5=len + +continue: + WORD $0xB93F0024 // KLMD R2, R4 + BVS continue // continue if interrupted + MOVD $0, R0 // reset R0 for pre-go1.8 compilers + RET diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/shake.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/shake.go new file mode 100644 index 00000000000..b92c5b7d785 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/shake.go @@ -0,0 +1,79 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +// This file defines the ShakeHash interface, and provides +// functions for creating SHAKE and cSHAKE instances, as well as utility +// functions for hashing bytes to arbitrary-length output. +// +// +// SHAKE implementation is based on FIPS PUB 202 [1] +// cSHAKE implementations is based on NIST SP 800-185 [2] +// +// [1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf +// [2] https://doi.org/10.6028/NIST.SP.800-185 + +import ( + "io" +) + +// ShakeHash defines the interface to hash functions that +// support arbitrary-length output. +type ShakeHash interface { + // Write absorbs more data into the hash's state. 
It panics if input is + // written to it after output has been read from it. + io.Writer + + // Read reads more output from the hash; reading affects the hash's + // state. (ShakeHash.Read is thus very different from Hash.Sum) + // It never returns an error. + io.Reader + + // Clone returns a copy of the ShakeHash in its current state. + Clone() ShakeHash + + // Reset resets the ShakeHash to its initial state. + Reset() +} + +// Consts for configuring initial SHA-3 state +const ( + dsbyteShake = 0x1f + rate128 = 168 + rate256 = 136 +) + +// Clone returns copy of SHAKE context within its current state. +func (d *State) Clone() ShakeHash { + return d.clone() +} + +// NewShake128 creates a new SHAKE128 variable-output-length ShakeHash. +// Its generic security strength is 128 bits against all attacks if at +// least 32 bytes of its output are used. +func NewShake128() State { + return State{rate: rate128, dsbyte: dsbyteShake} +} + +// NewShake256 creates a new SHAKE256 variable-output-length ShakeHash. +// Its generic security strength is 256 bits against all attacks if +// at least 64 bytes of its output are used. +func NewShake256() State { + return State{rate: rate256, dsbyte: dsbyteShake} +} + +// ShakeSum128 writes an arbitrary-length digest of data into hash. +func ShakeSum128(hash, data []byte) { + h := NewShake128() + _, _ = h.Write(data) + _, _ = h.Read(hash) +} + +// ShakeSum256 writes an arbitrary-length digest of data into hash. +func ShakeSum256(hash, data []byte) { + h := NewShake256() + _, _ = h.Write(data) + _, _ = h.Read(hash) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor.go new file mode 100644 index 00000000000..1e21337454f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor.go @@ -0,0 +1,15 @@ +// Copyright 2015 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (!amd64 && !386 && !ppc64le) || appengine +// +build !amd64,!386,!ppc64le appengine + +package sha3 + +// A storageBuf is an aligned array of maxRate bytes. +type storageBuf [maxRate]byte + +func (b *storageBuf) asBytes() *[maxRate]byte { + return (*[maxRate]byte)(b) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor_generic.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor_generic.go new file mode 100644 index 00000000000..2b0c6617906 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor_generic.go @@ -0,0 +1,33 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (!amd64 || appengine) && (!386 || appengine) && (!ppc64le || appengine) +// +build !amd64 appengine +// +build !386 appengine +// +build !ppc64le appengine + +package sha3 + +import "encoding/binary" + +// xorIn xors the bytes in buf into the state; it +// makes no non-portable assumptions about memory layout +// or alignment. +func xorIn(d *State, buf []byte) { + n := len(buf) / 8 + + for i := 0; i < n; i++ { + a := binary.LittleEndian.Uint64(buf) + d.a[i] ^= a + buf = buf[8:] + } +} + +// copyOut copies uint64s to a byte buffer. +func copyOut(d *State, b []byte) { + for i := 0; len(b) >= 8; i++ { + binary.LittleEndian.PutUint64(b, d.a[i]) + b = b[8:] + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor_unaligned.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor_unaligned.go new file mode 100644 index 00000000000..052fc8d32d2 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/internal/sha3/xor_unaligned.go @@ -0,0 +1,61 @@ +// Copyright 2015 The Go Authors. All rights reserved.
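The portable `xorIn` in xor_generic.go absorbs input by loading eight bytes at a time with `encoding/binary` and XOR-ing the little-endian words into the 64-bit state lanes. A minimal, self-contained sketch of that absorb step (the `state` type and lane count here are stand-ins for the package's own `State`, not its actual definitions):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// state mimics the sponge state's 64-bit lanes (25 lanes for Keccak-f[1600]).
type state struct{ a [25]uint64 }

// xorIn mirrors the portable absorb step: XOR each little-endian
// 64-bit word of buf into the next state lane.
func xorIn(d *state, buf []byte) {
	n := len(buf) / 8
	for i := 0; i < n; i++ {
		d.a[i] ^= binary.LittleEndian.Uint64(buf)
		buf = buf[8:]
	}
}

func main() {
	var d state
	buf := []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
	xorIn(&d, buf)
	// Lane 0 holds bytes 1..8 little-endian: 0x0807060504030201.
	fmt.Printf("%x %x\n", d.a[0], d.a[1])
}
```

The unaligned variant that follows reaches the same result by reinterpreting the byte slice as `uint64` words directly, trading portability for fewer loads.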
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (amd64 || 386 || ppc64le) && !appengine +// +build amd64 386 ppc64le +// +build !appengine + +package sha3 + +import "unsafe" + +// A storageBuf is an aligned array of maxRate bytes. +type storageBuf [maxRate / 8]uint64 + +func (b *storageBuf) asBytes() *[maxRate]byte { + return (*[maxRate]byte)(unsafe.Pointer(b)) +} + +// xorIn uses unaligned reads and writes to update d.a to contain d.a +// XOR buf. +func xorIn(d *State, buf []byte) { + n := len(buf) + bw := (*[maxRate / 8]uint64)(unsafe.Pointer(&buf[0]))[: n/8 : n/8] + if n >= 72 { + d.a[0] ^= bw[0] + d.a[1] ^= bw[1] + d.a[2] ^= bw[2] + d.a[3] ^= bw[3] + d.a[4] ^= bw[4] + d.a[5] ^= bw[5] + d.a[6] ^= bw[6] + d.a[7] ^= bw[7] + d.a[8] ^= bw[8] + } + if n >= 104 { + d.a[9] ^= bw[9] + d.a[10] ^= bw[10] + d.a[11] ^= bw[11] + d.a[12] ^= bw[12] + } + if n >= 136 { + d.a[13] ^= bw[13] + d.a[14] ^= bw[14] + d.a[15] ^= bw[15] + d.a[16] ^= bw[16] + } + if n >= 144 { + d.a[17] ^= bw[17] + } + if n >= 168 { + d.a[18] ^= bw[18] + d.a[19] ^= bw[19] + d.a[20] ^= bw[20] + } +} + +func copyOut(d *State, buf []byte) { + ab := (*[maxRate]uint8)(unsafe.Pointer(&d.a[0])) + copy(buf, ab[:]) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp.go new file mode 100644 index 00000000000..57a50ff5e9b --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp.go @@ -0,0 +1,205 @@ +// Package fp25519 provides prime field arithmetic over GF(2^255-19). +package fp25519 + +import ( + "errors" + + "github.com/cloudflare/circl/internal/conv" +) + +// Size in bytes of an element. +const Size = 32 + +// Elt is a prime field element. +type Elt [Size]byte + +func (e Elt) String() string { return conv.BytesLe2Hex(e[:]) } + +// p is the prime modulus 2^255-19.
+var p = Elt{ + 0xed, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x7f, +} + +// P returns the prime modulus 2^255-19. +func P() Elt { return p } + +// ToBytes stores in b the little-endian byte representation of x. +func ToBytes(b []byte, x *Elt) error { + if len(b) != Size { + return errors.New("wrong size") + } + Modp(x) + copy(b, x[:]) + return nil +} + +// IsZero returns true if x is equal to 0. +func IsZero(x *Elt) bool { Modp(x); return *x == Elt{} } + +// SetOne assigns x=1. +func SetOne(x *Elt) { *x = Elt{}; x[0] = 1 } + +// Neg calculates z = -x. +func Neg(z, x *Elt) { Sub(z, &p, x) } + +// InvSqrt calculates z = sqrt(x/y) iff x/y is a quadratic-residue, which is +// indicated by returning isQR = true. Otherwise, when x/y is a quadratic +// non-residue, z will have an undetermined value and isQR = false. +func InvSqrt(z, x, y *Elt) (isQR bool) { + sqrtMinusOne := &Elt{ + 0xb0, 0xa0, 0x0e, 0x4a, 0x27, 0x1b, 0xee, 0xc4, + 0x78, 0xe4, 0x2f, 0xad, 0x06, 0x18, 0x43, 0x2f, + 0xa7, 0xd7, 0xfb, 0x3d, 0x99, 0x00, 0x4d, 0x2b, + 0x0b, 0xdf, 0xc1, 0x4f, 0x80, 0x24, 0x83, 0x2b, + } + t0, t1, t2, t3 := &Elt{}, &Elt{}, &Elt{}, &Elt{} + + Mul(t0, x, y) // t0 = u*v + Sqr(t1, y) // t1 = v^2 + Mul(t2, t0, t1) // t2 = u*v^3 + Sqr(t0, t1) // t0 = v^4 + Mul(t1, t0, t2) // t1 = u*v^7 + + var Tab [4]*Elt + Tab[0] = &Elt{} + Tab[1] = &Elt{} + Tab[2] = t3 + Tab[3] = t1 + + *Tab[0] = *t1 + Sqr(Tab[0], Tab[0]) + Sqr(Tab[1], Tab[0]) + Sqr(Tab[1], Tab[1]) + Mul(Tab[1], Tab[1], Tab[3]) + Mul(Tab[0], Tab[0], Tab[1]) + Sqr(Tab[0], Tab[0]) + Mul(Tab[0], Tab[0], Tab[1]) + Sqr(Tab[1], Tab[0]) + for i := 0; i < 4; i++ { + Sqr(Tab[1], Tab[1]) + } + Mul(Tab[1], Tab[1], Tab[0]) + Sqr(Tab[2], Tab[1]) + for i := 0; i < 4; i++ { + Sqr(Tab[2], Tab[2]) + } + Mul(Tab[2], Tab[2], Tab[0]) + Sqr(Tab[1], Tab[2]) + for i := 0; i < 14; i++ { + Sqr(Tab[1], Tab[1]) + } 
+ Mul(Tab[1], Tab[1], Tab[2]) + Sqr(Tab[2], Tab[1]) + for i := 0; i < 29; i++ { + Sqr(Tab[2], Tab[2]) + } + Mul(Tab[2], Tab[2], Tab[1]) + Sqr(Tab[1], Tab[2]) + for i := 0; i < 59; i++ { + Sqr(Tab[1], Tab[1]) + } + Mul(Tab[1], Tab[1], Tab[2]) + for i := 0; i < 5; i++ { + Sqr(Tab[1], Tab[1]) + } + Mul(Tab[1], Tab[1], Tab[0]) + Sqr(Tab[2], Tab[1]) + for i := 0; i < 124; i++ { + Sqr(Tab[2], Tab[2]) + } + Mul(Tab[2], Tab[2], Tab[1]) + Sqr(Tab[2], Tab[2]) + Sqr(Tab[2], Tab[2]) + Mul(Tab[2], Tab[2], Tab[3]) + + Mul(z, t3, t2) // z = xy^(p+3)/8 = xy^3*(xy^7)^(p-5)/8 + // Checking whether y z^2 == x + Sqr(t0, z) // t0 = z^2 + Mul(t0, t0, y) // t0 = yz^2 + Sub(t1, t0, x) // t1 = t0-u + Add(t2, t0, x) // t2 = t0+u + if IsZero(t1) { + return true + } else if IsZero(t2) { + Mul(z, z, sqrtMinusOne) // z = z*sqrt(-1) + return true + } else { + return false + } +} + +// Inv calculates z = 1/x mod p. +func Inv(z, x *Elt) { + x0, x1, x2 := &Elt{}, &Elt{}, &Elt{} + Sqr(x1, x) + Sqr(x0, x1) + Sqr(x0, x0) + Mul(x0, x0, x) + Mul(z, x0, x1) + Sqr(x1, z) + Mul(x0, x0, x1) + Sqr(x1, x0) + for i := 0; i < 4; i++ { + Sqr(x1, x1) + } + Mul(x0, x0, x1) + Sqr(x1, x0) + for i := 0; i < 9; i++ { + Sqr(x1, x1) + } + Mul(x1, x1, x0) + Sqr(x2, x1) + for i := 0; i < 19; i++ { + Sqr(x2, x2) + } + Mul(x2, x2, x1) + for i := 0; i < 10; i++ { + Sqr(x2, x2) + } + Mul(x2, x2, x0) + Sqr(x0, x2) + for i := 0; i < 49; i++ { + Sqr(x0, x0) + } + Mul(x0, x0, x2) + Sqr(x1, x0) + for i := 0; i < 99; i++ { + Sqr(x1, x1) + } + Mul(x1, x1, x0) + for i := 0; i < 50; i++ { + Sqr(x1, x1) + } + Mul(x1, x1, x2) + for i := 0; i < 5; i++ { + Sqr(x1, x1) + } + Mul(z, z, x1) +} + +// Cmov assigns y to x if n is 1. +func Cmov(x, y *Elt, n uint) { cmov(x, y, n) } + +// Cswap interchanges x and y if n is 1. +func Cswap(x, y *Elt, n uint) { cswap(x, y, n) } + +// Add calculates z = x+y mod p. +func Add(z, x, y *Elt) { add(z, x, y) } + +// Sub calculates z = x-y mod p. 
+func Sub(z, x, y *Elt) { sub(z, x, y) } + +// AddSub calculates (x,y) = (x+y mod p, x-y mod p). +func AddSub(x, y *Elt) { addsub(x, y) } + +// Mul calculates z = x*y mod p. +func Mul(z, x, y *Elt) { mul(z, x, y) } + +// Sqr calculates z = x^2 mod p. +func Sqr(z, x *Elt) { sqr(z, x) } + +// Modp ensures that z is between [0,p-1]. +func Modp(z *Elt) { modp(z) } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_amd64.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_amd64.go new file mode 100644 index 00000000000..057f0d2803f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_amd64.go @@ -0,0 +1,45 @@ +//go:build amd64 && !purego +// +build amd64,!purego + +package fp25519 + +import ( + "golang.org/x/sys/cpu" +) + +var hasBmi2Adx = cpu.X86.HasBMI2 && cpu.X86.HasADX + +var _ = hasBmi2Adx + +func cmov(x, y *Elt, n uint) { cmovAmd64(x, y, n) } +func cswap(x, y *Elt, n uint) { cswapAmd64(x, y, n) } +func add(z, x, y *Elt) { addAmd64(z, x, y) } +func sub(z, x, y *Elt) { subAmd64(z, x, y) } +func addsub(x, y *Elt) { addsubAmd64(x, y) } +func mul(z, x, y *Elt) { mulAmd64(z, x, y) } +func sqr(z, x *Elt) { sqrAmd64(z, x) } +func modp(z *Elt) { modpAmd64(z) } + +//go:noescape +func cmovAmd64(x, y *Elt, n uint) + +//go:noescape +func cswapAmd64(x, y *Elt, n uint) + +//go:noescape +func addAmd64(z, x, y *Elt) + +//go:noescape +func subAmd64(z, x, y *Elt) + +//go:noescape +func addsubAmd64(x, y *Elt) + +//go:noescape +func mulAmd64(z, x, y *Elt) + +//go:noescape +func sqrAmd64(z, x *Elt) + +//go:noescape +func modpAmd64(z *Elt) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_amd64.h b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_amd64.h new file mode 100644 index 00000000000..b884b584ab3 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_amd64.h @@ -0,0 +1,351 @@ +// This code was imported from 
https://github.com/armfazh/rfc7748_precomputed + +// CHECK_BMI2ADX triggers bmi2adx if supported, +// otherwise it fallbacks to legacy code. +#define CHECK_BMI2ADX(label, legacy, bmi2adx) \ + CMPB ·hasBmi2Adx(SB), $0 \ + JE label \ + bmi2adx \ + RET \ + label: \ + legacy \ + RET + +// cselect is a conditional move +// if b=1: it copies y into x; +// if b=0: x remains with the same value; +// if b<> 0,1: undefined. +// Uses: AX, DX, FLAGS +// Instr: x86_64, cmov +#define cselect(x,y,b) \ + TESTQ b, b \ + MOVQ 0+x, AX; MOVQ 0+y, DX; CMOVQNE DX, AX; MOVQ AX, 0+x; \ + MOVQ 8+x, AX; MOVQ 8+y, DX; CMOVQNE DX, AX; MOVQ AX, 8+x; \ + MOVQ 16+x, AX; MOVQ 16+y, DX; CMOVQNE DX, AX; MOVQ AX, 16+x; \ + MOVQ 24+x, AX; MOVQ 24+y, DX; CMOVQNE DX, AX; MOVQ AX, 24+x; + +// cswap is a conditional swap +// if b=1: x,y <- y,x; +// if b=0: x,y remain with the same values; +// if b<> 0,1: undefined. +// Uses: AX, DX, R8, FLAGS +// Instr: x86_64, cmov +#define cswap(x,y,b) \ + TESTQ b, b \ + MOVQ 0+x, AX; MOVQ AX, R8; MOVQ 0+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 0+x; MOVQ DX, 0+y; \ + MOVQ 8+x, AX; MOVQ AX, R8; MOVQ 8+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 8+x; MOVQ DX, 8+y; \ + MOVQ 16+x, AX; MOVQ AX, R8; MOVQ 16+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 16+x; MOVQ DX, 16+y; \ + MOVQ 24+x, AX; MOVQ AX, R8; MOVQ 24+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 24+x; MOVQ DX, 24+y; + +// additionLeg adds x and y and stores in z +// Uses: AX, DX, R8-R11, FLAGS +// Instr: x86_64, cmov +#define additionLeg(z,x,y) \ + MOVL $38, AX; \ + MOVL $0, DX; \ + MOVQ 0+x, R8; ADDQ 0+y, R8; \ + MOVQ 8+x, R9; ADCQ 8+y, R9; \ + MOVQ 16+x, R10; ADCQ 16+y, R10; \ + MOVQ 24+x, R11; ADCQ 24+y, R11; \ + CMOVQCS AX, DX; \ + ADDQ DX, R8; \ + ADCQ $0, R9; MOVQ R9, 8+z; \ + ADCQ $0, R10; MOVQ R10, 16+z; \ + ADCQ $0, R11; MOVQ R11, 24+z; \ + MOVL $0, DX; \ + CMOVQCS AX, DX; \ + ADDQ DX, R8; MOVQ R8, 0+z; + +// additionAdx adds x and y and stores in z +// Uses: AX, DX, R8-R11, FLAGS +// 
Instr: x86_64, cmov, adx +#define additionAdx(z,x,y) \ + MOVL $38, AX; \ + XORL DX, DX; \ + MOVQ 0+x, R8; ADCXQ 0+y, R8; \ + MOVQ 8+x, R9; ADCXQ 8+y, R9; \ + MOVQ 16+x, R10; ADCXQ 16+y, R10; \ + MOVQ 24+x, R11; ADCXQ 24+y, R11; \ + CMOVQCS AX, DX ; \ + XORL AX, AX; \ + ADCXQ DX, R8; \ + ADCXQ AX, R9; MOVQ R9, 8+z; \ + ADCXQ AX, R10; MOVQ R10, 16+z; \ + ADCXQ AX, R11; MOVQ R11, 24+z; \ + MOVL $38, DX; \ + CMOVQCS DX, AX; \ + ADDQ AX, R8; MOVQ R8, 0+z; + +// subtraction subtracts y from x and stores in z +// Uses: AX, DX, R8-R11, FLAGS +// Instr: x86_64, cmov +#define subtraction(z,x,y) \ + MOVL $38, AX; \ + MOVQ 0+x, R8; SUBQ 0+y, R8; \ + MOVQ 8+x, R9; SBBQ 8+y, R9; \ + MOVQ 16+x, R10; SBBQ 16+y, R10; \ + MOVQ 24+x, R11; SBBQ 24+y, R11; \ + MOVL $0, DX; \ + CMOVQCS AX, DX; \ + SUBQ DX, R8; \ + SBBQ $0, R9; MOVQ R9, 8+z; \ + SBBQ $0, R10; MOVQ R10, 16+z; \ + SBBQ $0, R11; MOVQ R11, 24+z; \ + MOVL $0, DX; \ + CMOVQCS AX, DX; \ + SUBQ DX, R8; MOVQ R8, 0+z; + +// integerMulAdx multiplies x and y and stores in z +// Uses: AX, DX, R8-R15, FLAGS +// Instr: x86_64, bmi2, adx +#define integerMulAdx(z,x,y) \ + MOVL $0,R15; \ + MOVQ 0+y, DX; XORL AX, AX; \ + MULXQ 0+x, AX, R8; MOVQ AX, 0+z; \ + MULXQ 8+x, AX, R9; ADCXQ AX, R8; \ + MULXQ 16+x, AX, R10; ADCXQ AX, R9; \ + MULXQ 24+x, AX, R11; ADCXQ AX, R10; \ + MOVL $0, AX;;;;;;;;; ADCXQ AX, R11; \ + MOVQ 8+y, DX; XORL AX, AX; \ + MULXQ 0+x, AX, R12; ADCXQ R8, AX; MOVQ AX, 8+z; \ + MULXQ 8+x, AX, R13; ADCXQ R9, R12; ADOXQ AX, R12; \ + MULXQ 16+x, AX, R14; ADCXQ R10, R13; ADOXQ AX, R13; \ + MULXQ 24+x, AX, R15; ADCXQ R11, R14; ADOXQ AX, R14; \ + MOVL $0, AX;;;;;;;;; ADCXQ AX, R15; ADOXQ AX, R15; \ + MOVQ 16+y, DX; XORL AX, AX; \ + MULXQ 0+x, AX, R8; ADCXQ R12, AX; MOVQ AX, 16+z; \ + MULXQ 8+x, AX, R9; ADCXQ R13, R8; ADOXQ AX, R8; \ + MULXQ 16+x, AX, R10; ADCXQ R14, R9; ADOXQ AX, R9; \ + MULXQ 24+x, AX, R11; ADCXQ R15, R10; ADOXQ AX, R10; \ + MOVL $0, AX;;;;;;;;; ADCXQ AX, R11; ADOXQ AX, R11; \ + MOVQ 24+y, DX; XORL AX, AX; \ + 
MULXQ 0+x, AX, R12; ADCXQ R8, AX; MOVQ AX, 24+z; \ + MULXQ 8+x, AX, R13; ADCXQ R9, R12; ADOXQ AX, R12; MOVQ R12, 32+z; \ + MULXQ 16+x, AX, R14; ADCXQ R10, R13; ADOXQ AX, R13; MOVQ R13, 40+z; \ + MULXQ 24+x, AX, R15; ADCXQ R11, R14; ADOXQ AX, R14; MOVQ R14, 48+z; \ + MOVL $0, AX;;;;;;;;; ADCXQ AX, R15; ADOXQ AX, R15; MOVQ R15, 56+z; + +// integerMulLeg multiplies x and y and stores in z +// Uses: AX, DX, R8-R15, FLAGS +// Instr: x86_64 +#define integerMulLeg(z,x,y) \ + MOVQ 0+y, R8; \ + MOVQ 0+x, AX; MULQ R8; MOVQ AX, 0+z; MOVQ DX, R15; \ + MOVQ 8+x, AX; MULQ R8; MOVQ AX, R13; MOVQ DX, R10; \ + MOVQ 16+x, AX; MULQ R8; MOVQ AX, R14; MOVQ DX, R11; \ + MOVQ 24+x, AX; MULQ R8; \ + ADDQ R13, R15; \ + ADCQ R14, R10; MOVQ R10, 16+z; \ + ADCQ AX, R11; MOVQ R11, 24+z; \ + ADCQ $0, DX; MOVQ DX, 32+z; \ + MOVQ 8+y, R8; \ + MOVQ 0+x, AX; MULQ R8; MOVQ AX, R12; MOVQ DX, R9; \ + MOVQ 8+x, AX; MULQ R8; MOVQ AX, R13; MOVQ DX, R10; \ + MOVQ 16+x, AX; MULQ R8; MOVQ AX, R14; MOVQ DX, R11; \ + MOVQ 24+x, AX; MULQ R8; \ + ADDQ R12, R15; MOVQ R15, 8+z; \ + ADCQ R13, R9; \ + ADCQ R14, R10; \ + ADCQ AX, R11; \ + ADCQ $0, DX; \ + ADCQ 16+z, R9; MOVQ R9, R15; \ + ADCQ 24+z, R10; MOVQ R10, 24+z; \ + ADCQ 32+z, R11; MOVQ R11, 32+z; \ + ADCQ $0, DX; MOVQ DX, 40+z; \ + MOVQ 16+y, R8; \ + MOVQ 0+x, AX; MULQ R8; MOVQ AX, R12; MOVQ DX, R9; \ + MOVQ 8+x, AX; MULQ R8; MOVQ AX, R13; MOVQ DX, R10; \ + MOVQ 16+x, AX; MULQ R8; MOVQ AX, R14; MOVQ DX, R11; \ + MOVQ 24+x, AX; MULQ R8; \ + ADDQ R12, R15; MOVQ R15, 16+z; \ + ADCQ R13, R9; \ + ADCQ R14, R10; \ + ADCQ AX, R11; \ + ADCQ $0, DX; \ + ADCQ 24+z, R9; MOVQ R9, R15; \ + ADCQ 32+z, R10; MOVQ R10, 32+z; \ + ADCQ 40+z, R11; MOVQ R11, 40+z; \ + ADCQ $0, DX; MOVQ DX, 48+z; \ + MOVQ 24+y, R8; \ + MOVQ 0+x, AX; MULQ R8; MOVQ AX, R12; MOVQ DX, R9; \ + MOVQ 8+x, AX; MULQ R8; MOVQ AX, R13; MOVQ DX, R10; \ + MOVQ 16+x, AX; MULQ R8; MOVQ AX, R14; MOVQ DX, R11; \ + MOVQ 24+x, AX; MULQ R8; \ + ADDQ R12, R15; MOVQ R15, 24+z; \ + ADCQ R13, R9; \ + ADCQ R14, R10; \ + 
ADCQ AX, R11; \ + ADCQ $0, DX; \ + ADCQ 32+z, R9; MOVQ R9, 32+z; \ + ADCQ 40+z, R10; MOVQ R10, 40+z; \ + ADCQ 48+z, R11; MOVQ R11, 48+z; \ + ADCQ $0, DX; MOVQ DX, 56+z; + +// integerSqrLeg squares x and stores in z +// Uses: AX, CX, DX, R8-R15, FLAGS +// Instr: x86_64 +#define integerSqrLeg(z,x) \ + MOVQ 0+x, R8; \ + MOVQ 8+x, AX; MULQ R8; MOVQ AX, R9; MOVQ DX, R10; /* A[0]*A[1] */ \ + MOVQ 16+x, AX; MULQ R8; MOVQ AX, R14; MOVQ DX, R11; /* A[0]*A[2] */ \ + MOVQ 24+x, AX; MULQ R8; MOVQ AX, R15; MOVQ DX, R12; /* A[0]*A[3] */ \ + MOVQ 24+x, R8; \ + MOVQ 8+x, AX; MULQ R8; MOVQ AX, CX; MOVQ DX, R13; /* A[3]*A[1] */ \ + MOVQ 16+x, AX; MULQ R8; /* A[3]*A[2] */ \ + \ + ADDQ R14, R10;\ + ADCQ R15, R11; MOVL $0, R15;\ + ADCQ CX, R12;\ + ADCQ AX, R13;\ + ADCQ $0, DX; MOVQ DX, R14;\ + MOVQ 8+x, AX; MULQ 16+x;\ + \ + ADDQ AX, R11;\ + ADCQ DX, R12;\ + ADCQ $0, R13;\ + ADCQ $0, R14;\ + ADCQ $0, R15;\ + \ + SHLQ $1, R14, R15; MOVQ R15, 56+z;\ + SHLQ $1, R13, R14; MOVQ R14, 48+z;\ + SHLQ $1, R12, R13; MOVQ R13, 40+z;\ + SHLQ $1, R11, R12; MOVQ R12, 32+z;\ + SHLQ $1, R10, R11; MOVQ R11, 24+z;\ + SHLQ $1, R9, R10; MOVQ R10, 16+z;\ + SHLQ $1, R9; MOVQ R9, 8+z;\ + \ + MOVQ 0+x,AX; MULQ AX; MOVQ AX, 0+z; MOVQ DX, R9;\ + MOVQ 8+x,AX; MULQ AX; MOVQ AX, R10; MOVQ DX, R11;\ + MOVQ 16+x,AX; MULQ AX; MOVQ AX, R12; MOVQ DX, R13;\ + MOVQ 24+x,AX; MULQ AX; MOVQ AX, R14; MOVQ DX, R15;\ + \ + ADDQ 8+z, R9; MOVQ R9, 8+z;\ + ADCQ 16+z, R10; MOVQ R10, 16+z;\ + ADCQ 24+z, R11; MOVQ R11, 24+z;\ + ADCQ 32+z, R12; MOVQ R12, 32+z;\ + ADCQ 40+z, R13; MOVQ R13, 40+z;\ + ADCQ 48+z, R14; MOVQ R14, 48+z;\ + ADCQ 56+z, R15; MOVQ R15, 56+z; + +// integerSqrAdx squares x and stores in z +// Uses: AX, CX, DX, R8-R15, FLAGS +// Instr: x86_64, bmi2, adx +#define integerSqrAdx(z,x) \ + MOVQ 0+x, DX; /* A[0] */ \ + MULXQ 8+x, R8, R14; /* A[1]*A[0] */ XORL R15, R15; \ + MULXQ 16+x, R9, R10; /* A[2]*A[0] */ ADCXQ R14, R9; \ + MULXQ 24+x, AX, CX; /* A[3]*A[0] */ ADCXQ AX, R10; \ + MOVQ 24+x, DX; /* A[3] */ \ + MULXQ 8+x, 
R11, R12; /* A[1]*A[3] */ ADCXQ CX, R11; \ + MULXQ 16+x, AX, R13; /* A[2]*A[3] */ ADCXQ AX, R12; \ + MOVQ 8+x, DX; /* A[1] */ ADCXQ R15, R13; \ + MULXQ 16+x, AX, CX; /* A[2]*A[1] */ MOVL $0, R14; \ + ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ADCXQ R15, R14; \ + XORL R15, R15; \ + ADOXQ AX, R10; ADCXQ R8, R8; \ + ADOXQ CX, R11; ADCXQ R9, R9; \ + ADOXQ R15, R12; ADCXQ R10, R10; \ + ADOXQ R15, R13; ADCXQ R11, R11; \ + ADOXQ R15, R14; ADCXQ R12, R12; \ + ;;;;;;;;;;;;;;; ADCXQ R13, R13; \ + ;;;;;;;;;;;;;;; ADCXQ R14, R14; \ + MOVQ 0+x, DX; MULXQ DX, AX, CX; /* A[0]^2 */ \ + ;;;;;;;;;;;;;;; MOVQ AX, 0+z; \ + ADDQ CX, R8; MOVQ R8, 8+z; \ + MOVQ 8+x, DX; MULXQ DX, AX, CX; /* A[1]^2 */ \ + ADCQ AX, R9; MOVQ R9, 16+z; \ + ADCQ CX, R10; MOVQ R10, 24+z; \ + MOVQ 16+x, DX; MULXQ DX, AX, CX; /* A[2]^2 */ \ + ADCQ AX, R11; MOVQ R11, 32+z; \ + ADCQ CX, R12; MOVQ R12, 40+z; \ + MOVQ 24+x, DX; MULXQ DX, AX, CX; /* A[3]^2 */ \ + ADCQ AX, R13; MOVQ R13, 48+z; \ + ADCQ CX, R14; MOVQ R14, 56+z; + +// reduceFromDouble finds z congruent to x modulo p such that 0> 63) + // PUT BIT 255 IN CARRY FLAG AND CLEAR + x3 &^= 1 << 63 + + x0, c0 := bits.Add64(x0, cx, 0) + x1, c1 := bits.Add64(x1, 0, c0) + x2, c2 := bits.Add64(x2, 0, c1) + x3, _ = bits.Add64(x3, 0, c2) + + // TEST FOR BIT 255 AGAIN; ONLY TRIGGERED ON OVERFLOW MODULO 2^255-19 + // cx = C[255] ? 
0 : 19 + cx = uint64(19) &^ (-(x3 >> 63)) + // CLEAR BIT 255 + x3 &^= 1 << 63 + + x0, c0 = bits.Sub64(x0, cx, 0) + x1, c1 = bits.Sub64(x1, 0, c0) + x2, c2 = bits.Sub64(x2, 0, c1) + x3, _ = bits.Sub64(x3, 0, c2) + + binary.LittleEndian.PutUint64(x[0*8:1*8], x0) + binary.LittleEndian.PutUint64(x[1*8:2*8], x1) + binary.LittleEndian.PutUint64(x[2*8:3*8], x2) + binary.LittleEndian.PutUint64(x[3*8:4*8], x3) +} + +func red64(z *Elt, x0, x1, x2, x3, x4, x5, x6, x7 uint64) { + h0, l0 := bits.Mul64(x4, 38) + h1, l1 := bits.Mul64(x5, 38) + h2, l2 := bits.Mul64(x6, 38) + h3, l3 := bits.Mul64(x7, 38) + + l1, c0 := bits.Add64(h0, l1, 0) + l2, c1 := bits.Add64(h1, l2, c0) + l3, c2 := bits.Add64(h2, l3, c1) + l4, _ := bits.Add64(h3, 0, c2) + + l0, c0 = bits.Add64(l0, x0, 0) + l1, c1 = bits.Add64(l1, x1, c0) + l2, c2 = bits.Add64(l2, x2, c1) + l3, c3 := bits.Add64(l3, x3, c2) + l4, _ = bits.Add64(l4, 0, c3) + + _, l4 = bits.Mul64(l4, 38) + l0, c0 = bits.Add64(l0, l4, 0) + z1, c1 := bits.Add64(l1, 0, c0) + z2, c2 := bits.Add64(l2, 0, c1) + z3, c3 := bits.Add64(l3, 0, c2) + z0, _ := bits.Add64(l0, (-c3)&38, 0) + + binary.LittleEndian.PutUint64(z[0*8:1*8], z0) + binary.LittleEndian.PutUint64(z[1*8:2*8], z1) + binary.LittleEndian.PutUint64(z[2*8:3*8], z2) + binary.LittleEndian.PutUint64(z[3*8:4*8], z3) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_noasm.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_noasm.go new file mode 100644 index 00000000000..26ca4d01b7e --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp25519/fp_noasm.go @@ -0,0 +1,13 @@ +//go:build !amd64 || purego +// +build !amd64 purego + +package fp25519 + +func cmov(x, y *Elt, n uint) { cmovGeneric(x, y, n) } +func cswap(x, y *Elt, n uint) { cswapGeneric(x, y, n) } +func add(z, x, y *Elt) { addGeneric(z, x, y) } +func sub(z, x, y *Elt) { subGeneric(z, x, y) } +func addsub(x, y *Elt) { addsubGeneric(x, y) } +func mul(z, x, y *Elt) { 
mulGeneric(z, x, y) } +func sqr(z, x *Elt) { sqrGeneric(z, x) } +func modp(z *Elt) { modpGeneric(z) } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp.go new file mode 100644 index 00000000000..a5e36600bb6 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp.go @@ -0,0 +1,164 @@ +// Package fp448 provides prime field arithmetic over GF(2^448-2^224-1). +package fp448 + +import ( + "errors" + + "github.com/cloudflare/circl/internal/conv" +) + +// Size in bytes of an element. +const Size = 56 + +// Elt is a prime field element. +type Elt [Size]byte + +func (e Elt) String() string { return conv.BytesLe2Hex(e[:]) } + +// p is the prime modulus 2^448-2^224-1. +var p = Elt{ + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xfe, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, +} + +// P returns the prime modulus 2^448-2^224-1. +func P() Elt { return p } + +// ToBytes stores in b the little-endian byte representation of x. +func ToBytes(b []byte, x *Elt) error { + if len(b) != Size { + return errors.New("wrong size") + } + Modp(x) + copy(b, x[:]) + return nil +} + +// IsZero returns true if x is equal to 0. +func IsZero(x *Elt) bool { Modp(x); return *x == Elt{} } + +// IsOne returns true if x is equal to 1. +func IsOne(x *Elt) bool { Modp(x); return *x == Elt{1} } + +// SetOne assigns x=1. +func SetOne(x *Elt) { *x = Elt{1} } + +// One returns the 1 element. +func One() (x Elt) { x = Elt{1}; return } + +// Neg calculates z = -x. +func Neg(z, x *Elt) { Sub(z, &p, x) } + +// Modp ensures that z is between [0,p-1]. 
+func Modp(z *Elt) { Sub(z, z, &p) } + +// InvSqrt calculates z = sqrt(x/y) iff x/y is a quadratic-residue. If so, +// isQR = true; otherwise, isQR = false, since x/y is a quadratic non-residue, +// and z = sqrt(-x/y). +func InvSqrt(z, x, y *Elt) (isQR bool) { + // First note that x^(2(k+1)) = x^(p-1)/2 * x = legendre(x) * x + // so that's x if x is a quadratic residue and -x otherwise. + // Next, y^(6k+3) = y^(4k+2) * y^(2k+1) = y^(p-1) * y^((p-1)/2) = legendre(y). + // So the z we compute satisfies z^2 y = x^(2(k+1)) y^(6k+3) = legendre(x)*legendre(y). + // Thus if x and y are quadratic residues, then z is indeed sqrt(x/y). + t0, t1 := &Elt{}, &Elt{} + Mul(t0, x, y) // x*y + Sqr(t1, y) // y^2 + Mul(t1, t0, t1) // x*y^3 + powPminus3div4(z, t1) // (x*y^3)^k + Mul(z, z, t0) // z = x*y*(x*y^3)^k = x^(k+1) * y^(3k+1) + + // Check if x/y is a quadratic residue + Sqr(t0, z) // z^2 + Mul(t0, t0, y) // y*z^2 + Sub(t0, t0, x) // y*z^2-x + return IsZero(t0) +} + +// Inv calculates z = 1/x mod p. +func Inv(z, x *Elt) { + // Calculates z = x^(4k+1) = x^(p-3+1) = x^(p-2) = x^-1, where k = (p-3)/4. + t := &Elt{} + powPminus3div4(t, x) // t = x^k + Sqr(t, t) // t = x^2k + Sqr(t, t) // t = x^4k + Mul(z, t, x) // z = x^(4k+1) +} + +// powPminus3div4 calculates z = x^k mod p, where k = (p-3)/4. 
+func powPminus3div4(z, x *Elt) { + x0, x1 := &Elt{}, &Elt{} + Sqr(z, x) + Mul(z, z, x) + Sqr(x0, z) + Mul(x0, x0, x) + Sqr(z, x0) + Sqr(z, z) + Sqr(z, z) + Mul(z, z, x0) + Sqr(x1, z) + for i := 0; i < 5; i++ { + Sqr(x1, x1) + } + Mul(x1, x1, z) + Sqr(z, x1) + for i := 0; i < 11; i++ { + Sqr(z, z) + } + Mul(z, z, x1) + Sqr(z, z) + Sqr(z, z) + Sqr(z, z) + Mul(z, z, x0) + Sqr(x1, z) + for i := 0; i < 26; i++ { + Sqr(x1, x1) + } + Mul(x1, x1, z) + Sqr(z, x1) + for i := 0; i < 53; i++ { + Sqr(z, z) + } + Mul(z, z, x1) + Sqr(z, z) + Sqr(z, z) + Sqr(z, z) + Mul(z, z, x0) + Sqr(x1, z) + for i := 0; i < 110; i++ { + Sqr(x1, x1) + } + Mul(x1, x1, z) + Sqr(z, x1) + Mul(z, z, x) + for i := 0; i < 223; i++ { + Sqr(z, z) + } + Mul(z, z, x1) +} + +// Cmov assigns y to x if n is 1. +func Cmov(x, y *Elt, n uint) { cmov(x, y, n) } + +// Cswap interchanges x and y if n is 1. +func Cswap(x, y *Elt, n uint) { cswap(x, y, n) } + +// Add calculates z = x+y mod p. +func Add(z, x, y *Elt) { add(z, x, y) } + +// Sub calculates z = x-y mod p. +func Sub(z, x, y *Elt) { sub(z, x, y) } + +// AddSub calculates (x,y) = (x+y mod p, x-y mod p). +func AddSub(x, y *Elt) { addsub(x, y) } + +// Mul calculates z = x*y mod p. +func Mul(z, x, y *Elt) { mul(z, x, y) } + +// Sqr calculates z = x^2 mod p. 
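`Cswap` and `Cmov` must run in constant time so the condition bit never leaks through a branch. The branch-free mask technique that the amd64 and generic backends implement can be sketched on plain uint64 limbs (illustrative only; this is not the package's actual `cswapGeneric`):

```go
package main

import "fmt"

// cswap swaps x and y iff n == 1, without branching: m is all-ones
// when n is 1 and zero when n is 0, so t is either x^y or 0.
func cswap(x, y *uint64, n uint) {
	m := -uint64(n & 1)
	t := m & (*x ^ *y)
	*x ^= t
	*y ^= t
}

func main() {
	a, b := uint64(5), uint64(9)
	cswap(&a, &b, 0) // no-op
	fmt.Println(a, b) // 5 9
	cswap(&a, &b, 1) // swap
	fmt.Println(a, b) // 9 5
}
```

The same pattern, applied limb by limb over the 56-byte element, is what the `cswap`/`cselect` macros in the assembly express with `CMOVQNE`.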
+func Sqr(z, x *Elt) { sqr(z, x) } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.go new file mode 100644 index 00000000000..6a12209a704 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.go @@ -0,0 +1,43 @@ +//go:build amd64 && !purego +// +build amd64,!purego + +package fp448 + +import ( + "golang.org/x/sys/cpu" +) + +var hasBmi2Adx = cpu.X86.HasBMI2 && cpu.X86.HasADX + +var _ = hasBmi2Adx + +func cmov(x, y *Elt, n uint) { cmovAmd64(x, y, n) } +func cswap(x, y *Elt, n uint) { cswapAmd64(x, y, n) } +func add(z, x, y *Elt) { addAmd64(z, x, y) } +func sub(z, x, y *Elt) { subAmd64(z, x, y) } +func addsub(x, y *Elt) { addsubAmd64(x, y) } +func mul(z, x, y *Elt) { mulAmd64(z, x, y) } +func sqr(z, x *Elt) { sqrAmd64(z, x) } + +/* Functions defined in fp_amd64.s */ + +//go:noescape +func cmovAmd64(x, y *Elt, n uint) + +//go:noescape +func cswapAmd64(x, y *Elt, n uint) + +//go:noescape +func addAmd64(z, x, y *Elt) + +//go:noescape +func subAmd64(z, x, y *Elt) + +//go:noescape +func addsubAmd64(x, y *Elt) + +//go:noescape +func mulAmd64(z, x, y *Elt) + +//go:noescape +func sqrAmd64(z, x *Elt) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.h b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.h new file mode 100644 index 00000000000..536fe5bdfe0 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.h @@ -0,0 +1,591 @@ +// This code was imported from https://github.com/armfazh/rfc7748_precomputed + +// CHECK_BMI2ADX triggers bmi2adx if supported, +// otherwise it fallbacks to legacy code. 
+#define CHECK_BMI2ADX(label, legacy, bmi2adx) \ + CMPB ·hasBmi2Adx(SB), $0 \ + JE label \ + bmi2adx \ + RET \ + label: \ + legacy \ + RET + +// cselect is a conditional move +// if b=1: it copies y into x; +// if b=0: x remains with the same value; +// if b<> 0,1: undefined. +// Uses: AX, DX, FLAGS +// Instr: x86_64, cmov +#define cselect(x,y,b) \ + TESTQ b, b \ + MOVQ 0+x, AX; MOVQ 0+y, DX; CMOVQNE DX, AX; MOVQ AX, 0+x; \ + MOVQ 8+x, AX; MOVQ 8+y, DX; CMOVQNE DX, AX; MOVQ AX, 8+x; \ + MOVQ 16+x, AX; MOVQ 16+y, DX; CMOVQNE DX, AX; MOVQ AX, 16+x; \ + MOVQ 24+x, AX; MOVQ 24+y, DX; CMOVQNE DX, AX; MOVQ AX, 24+x; \ + MOVQ 32+x, AX; MOVQ 32+y, DX; CMOVQNE DX, AX; MOVQ AX, 32+x; \ + MOVQ 40+x, AX; MOVQ 40+y, DX; CMOVQNE DX, AX; MOVQ AX, 40+x; \ + MOVQ 48+x, AX; MOVQ 48+y, DX; CMOVQNE DX, AX; MOVQ AX, 48+x; + +// cswap is a conditional swap +// if b=1: x,y <- y,x; +// if b=0: x,y remain with the same values; +// if b<> 0,1: undefined. +// Uses: AX, DX, R8, FLAGS +// Instr: x86_64, cmov +#define cswap(x,y,b) \ + TESTQ b, b \ + MOVQ 0+x, AX; MOVQ AX, R8; MOVQ 0+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 0+x; MOVQ DX, 0+y; \ + MOVQ 8+x, AX; MOVQ AX, R8; MOVQ 8+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 8+x; MOVQ DX, 8+y; \ + MOVQ 16+x, AX; MOVQ AX, R8; MOVQ 16+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 16+x; MOVQ DX, 16+y; \ + MOVQ 24+x, AX; MOVQ AX, R8; MOVQ 24+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 24+x; MOVQ DX, 24+y; \ + MOVQ 32+x, AX; MOVQ AX, R8; MOVQ 32+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 32+x; MOVQ DX, 32+y; \ + MOVQ 40+x, AX; MOVQ AX, R8; MOVQ 40+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 40+x; MOVQ DX, 40+y; \ + MOVQ 48+x, AX; MOVQ AX, R8; MOVQ 48+y, DX; CMOVQNE DX, AX; CMOVQNE R8, DX; MOVQ AX, 48+x; MOVQ DX, 48+y; + +// additionLeg adds x and y and stores in z +// Uses: AX, DX, R8-R14, FLAGS +// Instr: x86_64 +#define additionLeg(z,x,y) \ + MOVQ 0+x, R8; ADDQ 0+y, R8; \ + MOVQ 8+x, R9; ADCQ 8+y, R9; \ + MOVQ 16+x, R10; 
ADCQ 16+y, R10; \ + MOVQ 24+x, R11; ADCQ 24+y, R11; \ + MOVQ 32+x, R12; ADCQ 32+y, R12; \ + MOVQ 40+x, R13; ADCQ 40+y, R13; \ + MOVQ 48+x, R14; ADCQ 48+y, R14; \ + MOVQ $0, AX; ADCQ $0, AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + ADDQ AX, R8; MOVQ $0, AX; \ + ADCQ $0, R9; \ + ADCQ $0, R10; \ + ADCQ DX, R11; \ + ADCQ $0, R12; \ + ADCQ $0, R13; \ + ADCQ $0, R14; \ + ADCQ $0, AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + ADDQ AX, R8; MOVQ R8, 0+z; \ + ADCQ $0, R9; MOVQ R9, 8+z; \ + ADCQ $0, R10; MOVQ R10, 16+z; \ + ADCQ DX, R11; MOVQ R11, 24+z; \ + ADCQ $0, R12; MOVQ R12, 32+z; \ + ADCQ $0, R13; MOVQ R13, 40+z; \ + ADCQ $0, R14; MOVQ R14, 48+z; + + +// additionAdx adds x and y and stores in z +// Uses: AX, DX, R8-R15, FLAGS +// Instr: x86_64, adx +#define additionAdx(z,x,y) \ + MOVL $32, R15; \ + XORL DX, DX; \ + MOVQ 0+x, R8; ADCXQ 0+y, R8; \ + MOVQ 8+x, R9; ADCXQ 8+y, R9; \ + MOVQ 16+x, R10; ADCXQ 16+y, R10; \ + MOVQ 24+x, R11; ADCXQ 24+y, R11; \ + MOVQ 32+x, R12; ADCXQ 32+y, R12; \ + MOVQ 40+x, R13; ADCXQ 40+y, R13; \ + MOVQ 48+x, R14; ADCXQ 48+y, R14; \ + ;;;;;;;;;;;;;;; ADCXQ DX, DX; \ + XORL AX, AX; \ + ADCXQ DX, R8; SHLXQ R15, DX, DX; \ + ADCXQ AX, R9; \ + ADCXQ AX, R10; \ + ADCXQ DX, R11; \ + ADCXQ AX, R12; \ + ADCXQ AX, R13; \ + ADCXQ AX, R14; \ + ADCXQ AX, AX; \ + XORL DX, DX; \ + ADCXQ AX, R8; MOVQ R8, 0+z; SHLXQ R15, AX, AX; \ + ADCXQ DX, R9; MOVQ R9, 8+z; \ + ADCXQ DX, R10; MOVQ R10, 16+z; \ + ADCXQ AX, R11; MOVQ R11, 24+z; \ + ADCXQ DX, R12; MOVQ R12, 32+z; \ + ADCXQ DX, R13; MOVQ R13, 40+z; \ + ADCXQ DX, R14; MOVQ R14, 48+z; + +// subtraction subtracts y from x and stores in z +// Uses: AX, DX, R8-R14, FLAGS +// Instr: x86_64 +#define subtraction(z,x,y) \ + MOVQ 0+x, R8; SUBQ 0+y, R8; \ + MOVQ 8+x, R9; SBBQ 8+y, R9; \ + MOVQ 16+x, R10; SBBQ 16+y, R10; \ + MOVQ 24+x, R11; SBBQ 24+y, R11; \ + MOVQ 32+x, R12; SBBQ 32+y, R12; \ + MOVQ 40+x, R13; SBBQ 40+y, R13; \ + MOVQ 48+x, R14; SBBQ 48+y, R14; \ + MOVQ $0, AX; SETCS AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + 
SUBQ AX, R8; MOVQ $0, AX; \ + SBBQ $0, R9; \ + SBBQ $0, R10; \ + SBBQ DX, R11; \ + SBBQ $0, R12; \ + SBBQ $0, R13; \ + SBBQ $0, R14; \ + SETCS AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + SUBQ AX, R8; MOVQ R8, 0+z; \ + SBBQ $0, R9; MOVQ R9, 8+z; \ + SBBQ $0, R10; MOVQ R10, 16+z; \ + SBBQ DX, R11; MOVQ R11, 24+z; \ + SBBQ $0, R12; MOVQ R12, 32+z; \ + SBBQ $0, R13; MOVQ R13, 40+z; \ + SBBQ $0, R14; MOVQ R14, 48+z; + +// maddBmi2Adx multiplies x and y and accumulates in z +// Uses: AX, DX, R15, FLAGS +// Instr: x86_64, bmi2, adx +#define maddBmi2Adx(z,x,y,i,r0,r1,r2,r3,r4,r5,r6) \ + MOVQ i+y, DX; XORL AX, AX; \ + MULXQ 0+x, AX, R8; ADOXQ AX, r0; ADCXQ R8, r1; MOVQ r0,i+z; \ + MULXQ 8+x, AX, r0; ADOXQ AX, r1; ADCXQ r0, r2; MOVQ $0, R8; \ + MULXQ 16+x, AX, r0; ADOXQ AX, r2; ADCXQ r0, r3; \ + MULXQ 24+x, AX, r0; ADOXQ AX, r3; ADCXQ r0, r4; \ + MULXQ 32+x, AX, r0; ADOXQ AX, r4; ADCXQ r0, r5; \ + MULXQ 40+x, AX, r0; ADOXQ AX, r5; ADCXQ r0, r6; \ + MULXQ 48+x, AX, r0; ADOXQ AX, r6; ADCXQ R8, r0; \ + ;;;;;;;;;;;;;;;;;;; ADOXQ R8, r0; + +// integerMulAdx multiplies x and y and stores in z +// Uses: AX, DX, R8-R15, FLAGS +// Instr: x86_64, bmi2, adx +#define integerMulAdx(z,x,y) \ + MOVL $0,R15; \ + MOVQ 0+y, DX; XORL AX, AX; MOVQ $0, R8; \ + MULXQ 0+x, AX, R9; MOVQ AX, 0+z; \ + MULXQ 8+x, AX, R10; ADCXQ AX, R9; \ + MULXQ 16+x, AX, R11; ADCXQ AX, R10; \ + MULXQ 24+x, AX, R12; ADCXQ AX, R11; \ + MULXQ 32+x, AX, R13; ADCXQ AX, R12; \ + MULXQ 40+x, AX, R14; ADCXQ AX, R13; \ + MULXQ 48+x, AX, R15; ADCXQ AX, R14; \ + ;;;;;;;;;;;;;;;;;;;; ADCXQ R8, R15; \ + maddBmi2Adx(z,x,y, 8, R9,R10,R11,R12,R13,R14,R15) \ + maddBmi2Adx(z,x,y,16,R10,R11,R12,R13,R14,R15, R9) \ + maddBmi2Adx(z,x,y,24,R11,R12,R13,R14,R15, R9,R10) \ + maddBmi2Adx(z,x,y,32,R12,R13,R14,R15, R9,R10,R11) \ + maddBmi2Adx(z,x,y,40,R13,R14,R15, R9,R10,R11,R12) \ + maddBmi2Adx(z,x,y,48,R14,R15, R9,R10,R11,R12,R13) \ + MOVQ R15, 56+z; \ + MOVQ R9, 64+z; \ + MOVQ R10, 72+z; \ + MOVQ R11, 80+z; \ + MOVQ R12, 88+z; \ + MOVQ R13, 
96+z; \ + MOVQ R14, 104+z; + +// maddLegacy multiplies x and y and accumulates in z +// Uses: AX, DX, R15, FLAGS +// Instr: x86_64 +#define maddLegacy(z,x,y,i) \ + MOVQ i+y, R15; \ + MOVQ 0+x, AX; MULQ R15; MOVQ AX, R8; ;;;;;;;;;;;; MOVQ DX, R9; \ + MOVQ 8+x, AX; MULQ R15; ADDQ AX, R9; ADCQ $0, DX; MOVQ DX, R10; \ + MOVQ 16+x, AX; MULQ R15; ADDQ AX, R10; ADCQ $0, DX; MOVQ DX, R11; \ + MOVQ 24+x, AX; MULQ R15; ADDQ AX, R11; ADCQ $0, DX; MOVQ DX, R12; \ + MOVQ 32+x, AX; MULQ R15; ADDQ AX, R12; ADCQ $0, DX; MOVQ DX, R13; \ + MOVQ 40+x, AX; MULQ R15; ADDQ AX, R13; ADCQ $0, DX; MOVQ DX, R14; \ + MOVQ 48+x, AX; MULQ R15; ADDQ AX, R14; ADCQ $0, DX; \ + ADDQ 0+i+z, R8; MOVQ R8, 0+i+z; \ + ADCQ 8+i+z, R9; MOVQ R9, 8+i+z; \ + ADCQ 16+i+z, R10; MOVQ R10, 16+i+z; \ + ADCQ 24+i+z, R11; MOVQ R11, 24+i+z; \ + ADCQ 32+i+z, R12; MOVQ R12, 32+i+z; \ + ADCQ 40+i+z, R13; MOVQ R13, 40+i+z; \ + ADCQ 48+i+z, R14; MOVQ R14, 48+i+z; \ + ADCQ $0, DX; MOVQ DX, 56+i+z; + +// integerMulLeg multiplies x and y and stores in z +// Uses: AX, DX, R8-R15, FLAGS +// Instr: x86_64 +#define integerMulLeg(z,x,y) \ + MOVQ 0+y, R15; \ + MOVQ 0+x, AX; MULQ R15; MOVQ AX, 0+z; ;;;;;;;;;;;; MOVQ DX, R8; \ + MOVQ 8+x, AX; MULQ R15; ADDQ AX, R8; ADCQ $0, DX; MOVQ DX, R9; MOVQ R8, 8+z; \ + MOVQ 16+x, AX; MULQ R15; ADDQ AX, R9; ADCQ $0, DX; MOVQ DX, R10; MOVQ R9, 16+z; \ + MOVQ 24+x, AX; MULQ R15; ADDQ AX, R10; ADCQ $0, DX; MOVQ DX, R11; MOVQ R10, 24+z; \ + MOVQ 32+x, AX; MULQ R15; ADDQ AX, R11; ADCQ $0, DX; MOVQ DX, R12; MOVQ R11, 32+z; \ + MOVQ 40+x, AX; MULQ R15; ADDQ AX, R12; ADCQ $0, DX; MOVQ DX, R13; MOVQ R12, 40+z; \ + MOVQ 48+x, AX; MULQ R15; ADDQ AX, R13; ADCQ $0, DX; MOVQ DX,56+z; MOVQ R13, 48+z; \ + maddLegacy(z,x,y, 8) \ + maddLegacy(z,x,y,16) \ + maddLegacy(z,x,y,24) \ + maddLegacy(z,x,y,32) \ + maddLegacy(z,x,y,40) \ + maddLegacy(z,x,y,48) + +// integerSqrLeg squares x and stores in z +// Uses: AX, CX, DX, R8-R15, FLAGS +// Instr: x86_64 +#define integerSqrLeg(z,x) \ + XORL R15, R15; \ + MOVQ 0+x, 
CX; \ + MOVQ CX, AX; MULQ CX; MOVQ AX, 0+z; MOVQ DX, R8; \ + ADDQ CX, CX; ADCQ $0, R15; \ + MOVQ 8+x, AX; MULQ CX; ADDQ AX, R8; ADCQ $0, DX; MOVQ DX, R9; MOVQ R8, 8+z; \ + MOVQ 16+x, AX; MULQ CX; ADDQ AX, R9; ADCQ $0, DX; MOVQ DX, R10; \ + MOVQ 24+x, AX; MULQ CX; ADDQ AX, R10; ADCQ $0, DX; MOVQ DX, R11; \ + MOVQ 32+x, AX; MULQ CX; ADDQ AX, R11; ADCQ $0, DX; MOVQ DX, R12; \ + MOVQ 40+x, AX; MULQ CX; ADDQ AX, R12; ADCQ $0, DX; MOVQ DX, R13; \ + MOVQ 48+x, AX; MULQ CX; ADDQ AX, R13; ADCQ $0, DX; MOVQ DX, R14; \ + \ + MOVQ 8+x, CX; \ + MOVQ CX, AX; ADDQ R15, CX; MOVQ $0, R15; ADCQ $0, R15; \ + ;;;;;;;;;;;;;; MULQ CX; ADDQ AX, R9; ADCQ $0, DX; MOVQ R9,16+z; \ + MOVQ R15, AX; NEGQ AX; ANDQ 8+x, AX; ADDQ AX, DX; ADCQ $0, R11; MOVQ DX, R8; \ + ADDQ 8+x, CX; ADCQ $0, R15; \ + MOVQ 16+x, AX; MULQ CX; ADDQ AX, R10; ADCQ $0, DX; ADDQ R8, R10; ADCQ $0, DX; MOVQ DX, R8; MOVQ R10, 24+z; \ + MOVQ 24+x, AX; MULQ CX; ADDQ AX, R11; ADCQ $0, DX; ADDQ R8, R11; ADCQ $0, DX; MOVQ DX, R8; \ + MOVQ 32+x, AX; MULQ CX; ADDQ AX, R12; ADCQ $0, DX; ADDQ R8, R12; ADCQ $0, DX; MOVQ DX, R8; \ + MOVQ 40+x, AX; MULQ CX; ADDQ AX, R13; ADCQ $0, DX; ADDQ R8, R13; ADCQ $0, DX; MOVQ DX, R8; \ + MOVQ 48+x, AX; MULQ CX; ADDQ AX, R14; ADCQ $0, DX; ADDQ R8, R14; ADCQ $0, DX; MOVQ DX, R9; \ + \ + MOVQ 16+x, CX; \ + MOVQ CX, AX; ADDQ R15, CX; MOVQ $0, R15; ADCQ $0, R15; \ + ;;;;;;;;;;;;;; MULQ CX; ADDQ AX, R11; ADCQ $0, DX; MOVQ R11, 32+z; \ + MOVQ R15, AX; NEGQ AX; ANDQ 16+x,AX; ADDQ AX, DX; ADCQ $0, R13; MOVQ DX, R8; \ + ADDQ 16+x, CX; ADCQ $0, R15; \ + MOVQ 24+x, AX; MULQ CX; ADDQ AX, R12; ADCQ $0, DX; ADDQ R8, R12; ADCQ $0, DX; MOVQ DX, R8; MOVQ R12, 40+z; \ + MOVQ 32+x, AX; MULQ CX; ADDQ AX, R13; ADCQ $0, DX; ADDQ R8, R13; ADCQ $0, DX; MOVQ DX, R8; \ + MOVQ 40+x, AX; MULQ CX; ADDQ AX, R14; ADCQ $0, DX; ADDQ R8, R14; ADCQ $0, DX; MOVQ DX, R8; \ + MOVQ 48+x, AX; MULQ CX; ADDQ AX, R9; ADCQ $0, DX; ADDQ R8, R9; ADCQ $0, DX; MOVQ DX,R10; \ + \ + MOVQ 24+x, CX; \ + MOVQ CX, AX; ADDQ R15, CX; MOVQ $0, R15; ADCQ 
$0, R15; \ + ;;;;;;;;;;;;;; MULQ CX; ADDQ AX, R13; ADCQ $0, DX; MOVQ R13, 48+z; \ + MOVQ R15, AX; NEGQ AX; ANDQ 24+x,AX; ADDQ AX, DX; ADCQ $0, R9; MOVQ DX, R8; \ + ADDQ 24+x, CX; ADCQ $0, R15; \ + MOVQ 32+x, AX; MULQ CX; ADDQ AX, R14; ADCQ $0, DX; ADDQ R8, R14; ADCQ $0, DX; MOVQ DX, R8; MOVQ R14, 56+z; \ + MOVQ 40+x, AX; MULQ CX; ADDQ AX, R9; ADCQ $0, DX; ADDQ R8, R9; ADCQ $0, DX; MOVQ DX, R8; \ + MOVQ 48+x, AX; MULQ CX; ADDQ AX, R10; ADCQ $0, DX; ADDQ R8, R10; ADCQ $0, DX; MOVQ DX,R11; \ + \ + MOVQ 32+x, CX; \ + MOVQ CX, AX; ADDQ R15, CX; MOVQ $0, R15; ADCQ $0, R15; \ + ;;;;;;;;;;;;;; MULQ CX; ADDQ AX, R9; ADCQ $0, DX; MOVQ R9, 64+z; \ + MOVQ R15, AX; NEGQ AX; ANDQ 32+x,AX; ADDQ AX, DX; ADCQ $0, R11; MOVQ DX, R8; \ + ADDQ 32+x, CX; ADCQ $0, R15; \ + MOVQ 40+x, AX; MULQ CX; ADDQ AX, R10; ADCQ $0, DX; ADDQ R8, R10; ADCQ $0, DX; MOVQ DX, R8; MOVQ R10, 72+z; \ + MOVQ 48+x, AX; MULQ CX; ADDQ AX, R11; ADCQ $0, DX; ADDQ R8, R11; ADCQ $0, DX; MOVQ DX,R12; \ + \ + XORL R13, R13; \ + XORL R14, R14; \ + MOVQ 40+x, CX; \ + MOVQ CX, AX; ADDQ R15, CX; MOVQ $0, R15; ADCQ $0, R15; \ + ;;;;;;;;;;;;;; MULQ CX; ADDQ AX, R11; ADCQ $0, DX; MOVQ R11, 80+z; \ + MOVQ R15, AX; NEGQ AX; ANDQ 40+x,AX; ADDQ AX, DX; ADCQ $0, R13; MOVQ DX, R8; \ + ADDQ 40+x, CX; ADCQ $0, R15; \ + MOVQ 48+x, AX; MULQ CX; ADDQ AX, R12; ADCQ $0, DX; ADDQ R8, R12; ADCQ $0, DX; MOVQ DX, R8; MOVQ R12, 88+z; \ + ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ADDQ R8, R13; ADCQ $0,R14; \ + \ + XORL R9, R9; \ + MOVQ 48+x, CX; \ + MOVQ CX, AX; ADDQ R15, CX; MOVQ $0, R15; ADCQ $0, R15; \ + ;;;;;;;;;;;;;; MULQ CX; ADDQ AX, R13; ADCQ $0, DX; MOVQ R13, 96+z; \ + MOVQ R15, AX; NEGQ AX; ANDQ 48+x,AX; ADDQ AX, DX; ADCQ $0, R9; MOVQ DX, R8; \ + ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ADDQ R8,R14; ADCQ $0, R9; MOVQ R14, 104+z; + + +// integerSqrAdx squares x and stores in z +// Uses: AX, CX, DX, R8-R15, FLAGS +// Instr: x86_64, bmi2, adx +#define integerSqrAdx(z,x) \ + XORL R15, R15; \ + MOVQ 0+x, DX; \ + 
;;;;;;;;;;;;;; MULXQ DX, AX, R8; MOVQ AX, 0+z; \ + ADDQ DX, DX; ADCQ $0, R15; CLC; \ + MULXQ 8+x, AX, R9; ADCXQ AX, R8; MOVQ R8, 8+z; \ + MULXQ 16+x, AX, R10; ADCXQ AX, R9; MOVQ $0, R8;\ + MULXQ 24+x, AX, R11; ADCXQ AX, R10; \ + MULXQ 32+x, AX, R12; ADCXQ AX, R11; \ + MULXQ 40+x, AX, R13; ADCXQ AX, R12; \ + MULXQ 48+x, AX, R14; ADCXQ AX, R13; \ + ;;;;;;;;;;;;;;;;;;;; ADCXQ R8, R14; \ + \ + MOVQ 8+x, DX; \ + MOVQ DX, AX; ADDQ R15, DX; MOVQ $0, R15; ADCQ $0, R15; \ + MULXQ AX, AX, CX; \ + MOVQ R15, R8; NEGQ R8; ANDQ 8+x, R8; \ + ADDQ AX, R9; MOVQ R9, 16+z; \ + ADCQ CX, R8; \ + ADCQ $0, R11; \ + ADDQ 8+x, DX; \ + ADCQ $0, R15; \ + XORL R9, R9; ;;;;;;;;;;;;;;;;;;;;; ADOXQ R8, R10; \ + MULXQ 16+x, AX, CX; ADCXQ AX, R10; ADOXQ CX, R11; MOVQ R10, 24+z; \ + MULXQ 24+x, AX, CX; ADCXQ AX, R11; ADOXQ CX, R12; MOVQ $0, R10; \ + MULXQ 32+x, AX, CX; ADCXQ AX, R12; ADOXQ CX, R13; \ + MULXQ 40+x, AX, CX; ADCXQ AX, R13; ADOXQ CX, R14; \ + MULXQ 48+x, AX, CX; ADCXQ AX, R14; ADOXQ CX, R9; \ + ;;;;;;;;;;;;;;;;;;; ADCXQ R10, R9; \ + \ + MOVQ 16+x, DX; \ + MOVQ DX, AX; ADDQ R15, DX; MOVQ $0, R15; ADCQ $0, R15; \ + MULXQ AX, AX, CX; \ + MOVQ R15, R8; NEGQ R8; ANDQ 16+x, R8; \ + ADDQ AX, R11; MOVQ R11, 32+z; \ + ADCQ CX, R8; \ + ADCQ $0, R13; \ + ADDQ 16+x, DX; \ + ADCQ $0, R15; \ + XORL R11, R11; ;;;;;;;;;;;;;;;;;;; ADOXQ R8, R12; \ + MULXQ 24+x, AX, CX; ADCXQ AX, R12; ADOXQ CX, R13; MOVQ R12, 40+z; \ + MULXQ 32+x, AX, CX; ADCXQ AX, R13; ADOXQ CX, R14; MOVQ $0, R12; \ + MULXQ 40+x, AX, CX; ADCXQ AX, R14; ADOXQ CX, R9; \ + MULXQ 48+x, AX, CX; ADCXQ AX, R9; ADOXQ CX, R10; \ + ;;;;;;;;;;;;;;;;;;; ADCXQ R11,R10; \ + \ + MOVQ 24+x, DX; \ + MOVQ DX, AX; ADDQ R15, DX; MOVQ $0, R15; ADCQ $0, R15; \ + MULXQ AX, AX, CX; \ + MOVQ R15, R8; NEGQ R8; ANDQ 24+x, R8; \ + ADDQ AX, R13; MOVQ R13, 48+z; \ + ADCQ CX, R8; \ + ADCQ $0, R9; \ + ADDQ 24+x, DX; \ + ADCQ $0, R15; \ + XORL R13, R13; ;;;;;;;;;;;;;;;;;;; ADOXQ R8, R14; \ + MULXQ 32+x, AX, CX; ADCXQ AX, R14; ADOXQ CX, R9; MOVQ R14, 56+z; \ + MULXQ 
40+x, AX, CX; ADCXQ AX, R9; ADOXQ CX, R10; MOVQ $0, R14; \ + MULXQ 48+x, AX, CX; ADCXQ AX, R10; ADOXQ CX, R11; \ + ;;;;;;;;;;;;;;;;;;; ADCXQ R12,R11; \ + \ + MOVQ 32+x, DX; \ + MOVQ DX, AX; ADDQ R15, DX; MOVQ $0, R15; ADCQ $0, R15; \ + MULXQ AX, AX, CX; \ + MOVQ R15, R8; NEGQ R8; ANDQ 32+x, R8; \ + ADDQ AX, R9; MOVQ R9, 64+z; \ + ADCQ CX, R8; \ + ADCQ $0, R11; \ + ADDQ 32+x, DX; \ + ADCQ $0, R15; \ + XORL R9, R9; ;;;;;;;;;;;;;;;;;;;;; ADOXQ R8, R10; \ + MULXQ 40+x, AX, CX; ADCXQ AX, R10; ADOXQ CX, R11; MOVQ R10, 72+z; \ + MULXQ 48+x, AX, CX; ADCXQ AX, R11; ADOXQ CX, R12; \ + ;;;;;;;;;;;;;;;;;;; ADCXQ R13,R12; \ + \ + MOVQ 40+x, DX; \ + MOVQ DX, AX; ADDQ R15, DX; MOVQ $0, R15; ADCQ $0, R15; \ + MULXQ AX, AX, CX; \ + MOVQ R15, R8; NEGQ R8; ANDQ 40+x, R8; \ + ADDQ AX, R11; MOVQ R11, 80+z; \ + ADCQ CX, R8; \ + ADCQ $0, R13; \ + ADDQ 40+x, DX; \ + ADCQ $0, R15; \ + XORL R11, R11; ;;;;;;;;;;;;;;;;;;; ADOXQ R8, R12; \ + MULXQ 48+x, AX, CX; ADCXQ AX, R12; ADOXQ CX, R13; MOVQ R12, 88+z; \ + ;;;;;;;;;;;;;;;;;;; ADCXQ R14,R13; \ + \ + MOVQ 48+x, DX; \ + MOVQ DX, AX; ADDQ R15, DX; MOVQ $0, R15; ADCQ $0, R15; \ + MULXQ AX, AX, CX; \ + MOVQ R15, R8; NEGQ R8; ANDQ 48+x, R8; \ + XORL R10, R10; ;;;;;;;;;;;;;; ADOXQ CX, R14; \ + ;;;;;;;;;;;;;; ADCXQ AX, R13; ;;;;;;;;;;;;;; MOVQ R13, 96+z; \ + ;;;;;;;;;;;;;; ADCXQ R8, R14; MOVQ R14, 104+z; + +// reduceFromDoubleLeg finds a z=x modulo p such that z<2^448 and stores in z +// Uses: AX, R8-R15, FLAGS +// Instr: x86_64 +#define reduceFromDoubleLeg(z,x) \ + /* ( ,2C13,2C12,2C11,2C10|C10,C9,C8, C7) + (C6,...,C0) */ \ + /* (r14, r13, r12, r11, r10,r9,r8,r15) */ \ + MOVQ 80+x,AX; MOVQ AX,R10; \ + MOVQ $0xFFFFFFFF00000000, R8; \ + ANDQ R8,R10; \ + \ + MOVQ $0,R14; \ + MOVQ 104+x,R13; SHLQ $1,R13,R14; \ + MOVQ 96+x,R12; SHLQ $1,R12,R13; \ + MOVQ 88+x,R11; SHLQ $1,R11,R12; \ + MOVQ 72+x, R9; SHLQ $1,R10,R11; \ + MOVQ 64+x, R8; SHLQ $1,R10; \ + MOVQ $0xFFFFFFFF,R15; ANDQ R15,AX; ORQ AX,R10; \ + MOVQ 56+x,R15; \ + \ + ADDQ 0+x,R15; MOVQ R15, 0+z; 
MOVQ 56+x,R15; \ + ADCQ 8+x, R8; MOVQ R8, 8+z; MOVQ 64+x, R8; \ + ADCQ 16+x, R9; MOVQ R9,16+z; MOVQ 72+x, R9; \ + ADCQ 24+x,R10; MOVQ R10,24+z; MOVQ 80+x,R10; \ + ADCQ 32+x,R11; MOVQ R11,32+z; MOVQ 88+x,R11; \ + ADCQ 40+x,R12; MOVQ R12,40+z; MOVQ 96+x,R12; \ + ADCQ 48+x,R13; MOVQ R13,48+z; MOVQ 104+x,R13; \ + ADCQ $0,R14; \ + /* (c10c9,c9c8,c8c7,c7c13,c13c12,c12c11,c11c10) + (c6,...,c0) */ \ + /* ( r9, r8, r15, r13, r12, r11, r10) */ \ + MOVQ R10, AX; \ + SHRQ $32,R11,R10; \ + SHRQ $32,R12,R11; \ + SHRQ $32,R13,R12; \ + SHRQ $32,R15,R13; \ + SHRQ $32, R8,R15; \ + SHRQ $32, R9, R8; \ + SHRQ $32, AX, R9; \ + \ + ADDQ 0+z,R10; \ + ADCQ 8+z,R11; \ + ADCQ 16+z,R12; \ + ADCQ 24+z,R13; \ + ADCQ 32+z,R15; \ + ADCQ 40+z, R8; \ + ADCQ 48+z, R9; \ + ADCQ $0,R14; \ + /* ( c7) + (c6,...,c0) */ \ + /* (r14) */ \ + MOVQ R14, AX; SHLQ $32, AX; \ + ADDQ R14,R10; MOVQ $0,R14; \ + ADCQ $0,R11; \ + ADCQ $0,R12; \ + ADCQ AX,R13; \ + ADCQ $0,R15; \ + ADCQ $0, R8; \ + ADCQ $0, R9; \ + ADCQ $0,R14; \ + /* ( c7) + (c6,...,c0) */ \ + /* (r14) */ \ + MOVQ R14, AX; SHLQ $32,AX; \ + ADDQ R14,R10; MOVQ R10, 0+z; \ + ADCQ $0,R11; MOVQ R11, 8+z; \ + ADCQ $0,R12; MOVQ R12,16+z; \ + ADCQ AX,R13; MOVQ R13,24+z; \ + ADCQ $0,R15; MOVQ R15,32+z; \ + ADCQ $0, R8; MOVQ R8,40+z; \ + ADCQ $0, R9; MOVQ R9,48+z; + +// reduceFromDoubleAdx finds a z=x modulo p such that z<2^448 and stores in z +// Uses: AX, R8-R15, FLAGS +// Instr: x86_64, adx +#define reduceFromDoubleAdx(z,x) \ + /* ( ,2C13,2C12,2C11,2C10|C10,C9,C8, C7) + (C6,...,C0) */ \ + /* (r14, r13, r12, r11, r10,r9,r8,r15) */ \ + MOVQ 80+x,AX; MOVQ AX,R10; \ + MOVQ $0xFFFFFFFF00000000, R8; \ + ANDQ R8,R10; \ + \ + MOVQ $0,R14; \ + MOVQ 104+x,R13; SHLQ $1,R13,R14; \ + MOVQ 96+x,R12; SHLQ $1,R12,R13; \ + MOVQ 88+x,R11; SHLQ $1,R11,R12; \ + MOVQ 72+x, R9; SHLQ $1,R10,R11; \ + MOVQ 64+x, R8; SHLQ $1,R10; \ + MOVQ $0xFFFFFFFF,R15; ANDQ R15,AX; ORQ AX,R10; \ + MOVQ 56+x,R15; \ + \ + XORL AX,AX; \ + ADCXQ 0+x,R15; MOVQ R15, 0+z; MOVQ 56+x,R15; \ + ADCXQ 8+x, 
R8; MOVQ R8, 8+z; MOVQ 64+x, R8; \ + ADCXQ 16+x, R9; MOVQ R9,16+z; MOVQ 72+x, R9; \ + ADCXQ 24+x,R10; MOVQ R10,24+z; MOVQ 80+x,R10; \ + ADCXQ 32+x,R11; MOVQ R11,32+z; MOVQ 88+x,R11; \ + ADCXQ 40+x,R12; MOVQ R12,40+z; MOVQ 96+x,R12; \ + ADCXQ 48+x,R13; MOVQ R13,48+z; MOVQ 104+x,R13; \ + ADCXQ AX,R14; \ + /* (c10c9,c9c8,c8c7,c7c13,c13c12,c12c11,c11c10) + (c6,...,c0) */ \ + /* ( r9, r8, r15, r13, r12, r11, r10) */ \ + MOVQ R10, AX; \ + SHRQ $32,R11,R10; \ + SHRQ $32,R12,R11; \ + SHRQ $32,R13,R12; \ + SHRQ $32,R15,R13; \ + SHRQ $32, R8,R15; \ + SHRQ $32, R9, R8; \ + SHRQ $32, AX, R9; \ + \ + XORL AX,AX; \ + ADCXQ 0+z,R10; \ + ADCXQ 8+z,R11; \ + ADCXQ 16+z,R12; \ + ADCXQ 24+z,R13; \ + ADCXQ 32+z,R15; \ + ADCXQ 40+z, R8; \ + ADCXQ 48+z, R9; \ + ADCXQ AX,R14; \ + /* ( c7) + (c6,...,c0) */ \ + /* (r14) */ \ + MOVQ R14, AX; SHLQ $32, AX; \ + CLC; \ + ADCXQ R14,R10; MOVQ $0,R14; \ + ADCXQ R14,R11; \ + ADCXQ R14,R12; \ + ADCXQ AX,R13; \ + ADCXQ R14,R15; \ + ADCXQ R14, R8; \ + ADCXQ R14, R9; \ + ADCXQ R14,R14; \ + /* ( c7) + (c6,...,c0) */ \ + /* (r14) */ \ + MOVQ R14, AX; SHLQ $32, AX; \ + CLC; \ + ADCXQ R14,R10; MOVQ R10, 0+z; MOVQ $0,R14; \ + ADCXQ R14,R11; MOVQ R11, 8+z; \ + ADCXQ R14,R12; MOVQ R12,16+z; \ + ADCXQ AX,R13; MOVQ R13,24+z; \ + ADCXQ R14,R15; MOVQ R15,32+z; \ + ADCXQ R14, R8; MOVQ R8,40+z; \ + ADCXQ R14, R9; MOVQ R9,48+z; + +// addSub calculates two operations: x,y = x+y,x-y +// Uses: AX, DX, R8-R15, FLAGS +#define addSub(x,y) \ + MOVQ 0+x, R8; ADDQ 0+y, R8; \ + MOVQ 8+x, R9; ADCQ 8+y, R9; \ + MOVQ 16+x, R10; ADCQ 16+y, R10; \ + MOVQ 24+x, R11; ADCQ 24+y, R11; \ + MOVQ 32+x, R12; ADCQ 32+y, R12; \ + MOVQ 40+x, R13; ADCQ 40+y, R13; \ + MOVQ 48+x, R14; ADCQ 48+y, R14; \ + MOVQ $0, AX; ADCQ $0, AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + ADDQ AX, R8; MOVQ $0, AX; \ + ADCQ $0, R9; \ + ADCQ $0, R10; \ + ADCQ DX, R11; \ + ADCQ $0, R12; \ + ADCQ $0, R13; \ + ADCQ $0, R14; \ + ADCQ $0, AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + ADDQ AX, R8; MOVQ 0+x,AX; MOVQ R8, 0+x; 
MOVQ AX, R8; \ + ADCQ $0, R9; MOVQ 8+x,AX; MOVQ R9, 8+x; MOVQ AX, R9; \ + ADCQ $0, R10; MOVQ 16+x,AX; MOVQ R10, 16+x; MOVQ AX, R10; \ + ADCQ DX, R11; MOVQ 24+x,AX; MOVQ R11, 24+x; MOVQ AX, R11; \ + ADCQ $0, R12; MOVQ 32+x,AX; MOVQ R12, 32+x; MOVQ AX, R12; \ + ADCQ $0, R13; MOVQ 40+x,AX; MOVQ R13, 40+x; MOVQ AX, R13; \ + ADCQ $0, R14; MOVQ 48+x,AX; MOVQ R14, 48+x; MOVQ AX, R14; \ + SUBQ 0+y, R8; \ + SBBQ 8+y, R9; \ + SBBQ 16+y, R10; \ + SBBQ 24+y, R11; \ + SBBQ 32+y, R12; \ + SBBQ 40+y, R13; \ + SBBQ 48+y, R14; \ + MOVQ $0, AX; SETCS AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + SUBQ AX, R8; MOVQ $0, AX; \ + SBBQ $0, R9; \ + SBBQ $0, R10; \ + SBBQ DX, R11; \ + SBBQ $0, R12; \ + SBBQ $0, R13; \ + SBBQ $0, R14; \ + SETCS AX; \ + MOVQ AX, DX; \ + SHLQ $32, DX; \ + SUBQ AX, R8; MOVQ R8, 0+y; \ + SBBQ $0, R9; MOVQ R9, 8+y; \ + SBBQ $0, R10; MOVQ R10, 16+y; \ + SBBQ DX, R11; MOVQ R11, 24+y; \ + SBBQ $0, R12; MOVQ R12, 32+y; \ + SBBQ $0, R13; MOVQ R13, 40+y; \ + SBBQ $0, R14; MOVQ R14, 48+y; diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.s b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.s new file mode 100644 index 00000000000..435addf5e6c --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_amd64.s @@ -0,0 +1,74 @@ +// +build amd64 + +#include "textflag.h" +#include "fp_amd64.h" + +// func cmovAmd64(x, y *Elt, n uint) +TEXT ·cmovAmd64(SB),NOSPLIT,$0-24 + MOVQ x+0(FP), DI + MOVQ y+8(FP), SI + MOVQ n+16(FP), BX + cselect(0(DI),0(SI),BX) + RET + +// func cswapAmd64(x, y *Elt, n uint) +TEXT ·cswapAmd64(SB),NOSPLIT,$0-24 + MOVQ x+0(FP), DI + MOVQ y+8(FP), SI + MOVQ n+16(FP), BX + cswap(0(DI),0(SI),BX) + RET + +// func subAmd64(z, x, y *Elt) +TEXT ·subAmd64(SB),NOSPLIT,$0-24 + MOVQ z+0(FP), DI + MOVQ x+8(FP), SI + MOVQ y+16(FP), BX + subtraction(0(DI),0(SI),0(BX)) + RET + +// func addsubAmd64(x, y *Elt) +TEXT ·addsubAmd64(SB),NOSPLIT,$0-16 + MOVQ x+0(FP), DI + MOVQ y+8(FP), SI + 
addSub(0(DI),0(SI)) + RET + +#define addLegacy \ + additionLeg(0(DI),0(SI),0(BX)) +#define addBmi2Adx \ + additionAdx(0(DI),0(SI),0(BX)) + +#define mulLegacy \ + integerMulLeg(0(SP),0(SI),0(BX)) \ + reduceFromDoubleLeg(0(DI),0(SP)) +#define mulBmi2Adx \ + integerMulAdx(0(SP),0(SI),0(BX)) \ + reduceFromDoubleAdx(0(DI),0(SP)) + +#define sqrLegacy \ + integerSqrLeg(0(SP),0(SI)) \ + reduceFromDoubleLeg(0(DI),0(SP)) +#define sqrBmi2Adx \ + integerSqrAdx(0(SP),0(SI)) \ + reduceFromDoubleAdx(0(DI),0(SP)) + +// func addAmd64(z, x, y *Elt) +TEXT ·addAmd64(SB),NOSPLIT,$0-24 + MOVQ z+0(FP), DI + MOVQ x+8(FP), SI + MOVQ y+16(FP), BX + CHECK_BMI2ADX(LADD, addLegacy, addBmi2Adx) + +// func mulAmd64(z, x, y *Elt) +TEXT ·mulAmd64(SB),NOSPLIT,$112-24 + MOVQ z+0(FP), DI + MOVQ x+8(FP), SI + MOVQ y+16(FP), BX + CHECK_BMI2ADX(LMUL, mulLegacy, mulBmi2Adx) + +// func sqrAmd64(z, x *Elt) +TEXT ·sqrAmd64(SB),NOSPLIT,$112-16 + MOVQ z+0(FP), DI + MOVQ x+8(FP), SI + CHECK_BMI2ADX(LSQR, sqrLegacy, sqrBmi2Adx) diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_generic.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_generic.go new file mode 100644 index 00000000000..47a0b63205f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_generic.go @@ -0,0 +1,339 @@ +package fp448 + +import ( + "encoding/binary" + "math/bits" +) + +func cmovGeneric(x, y *Elt, n uint) { + m := -uint64(n & 0x1) + x0 := binary.LittleEndian.Uint64(x[0*8 : 1*8]) + x1 := binary.LittleEndian.Uint64(x[1*8 : 2*8]) + x2 := binary.LittleEndian.Uint64(x[2*8 : 3*8]) + x3 := binary.LittleEndian.Uint64(x[3*8 : 4*8]) + x4 := binary.LittleEndian.Uint64(x[4*8 : 5*8]) + x5 := binary.LittleEndian.Uint64(x[5*8 : 6*8]) + x6 := binary.LittleEndian.Uint64(x[6*8 : 7*8]) + + y0 := binary.LittleEndian.Uint64(y[0*8 : 1*8]) + y1 := binary.LittleEndian.Uint64(y[1*8 : 2*8]) + y2 := binary.LittleEndian.Uint64(y[2*8 : 3*8]) + y3 := binary.LittleEndian.Uint64(y[3*8 : 
4*8]) + y4 := binary.LittleEndian.Uint64(y[4*8 : 5*8]) + y5 := binary.LittleEndian.Uint64(y[5*8 : 6*8]) + y6 := binary.LittleEndian.Uint64(y[6*8 : 7*8]) + + x0 = (x0 &^ m) | (y0 & m) + x1 = (x1 &^ m) | (y1 & m) + x2 = (x2 &^ m) | (y2 & m) + x3 = (x3 &^ m) | (y3 & m) + x4 = (x4 &^ m) | (y4 & m) + x5 = (x5 &^ m) | (y5 & m) + x6 = (x6 &^ m) | (y6 & m) + + binary.LittleEndian.PutUint64(x[0*8:1*8], x0) + binary.LittleEndian.PutUint64(x[1*8:2*8], x1) + binary.LittleEndian.PutUint64(x[2*8:3*8], x2) + binary.LittleEndian.PutUint64(x[3*8:4*8], x3) + binary.LittleEndian.PutUint64(x[4*8:5*8], x4) + binary.LittleEndian.PutUint64(x[5*8:6*8], x5) + binary.LittleEndian.PutUint64(x[6*8:7*8], x6) +} + +func cswapGeneric(x, y *Elt, n uint) { + m := -uint64(n & 0x1) + x0 := binary.LittleEndian.Uint64(x[0*8 : 1*8]) + x1 := binary.LittleEndian.Uint64(x[1*8 : 2*8]) + x2 := binary.LittleEndian.Uint64(x[2*8 : 3*8]) + x3 := binary.LittleEndian.Uint64(x[3*8 : 4*8]) + x4 := binary.LittleEndian.Uint64(x[4*8 : 5*8]) + x5 := binary.LittleEndian.Uint64(x[5*8 : 6*8]) + x6 := binary.LittleEndian.Uint64(x[6*8 : 7*8]) + + y0 := binary.LittleEndian.Uint64(y[0*8 : 1*8]) + y1 := binary.LittleEndian.Uint64(y[1*8 : 2*8]) + y2 := binary.LittleEndian.Uint64(y[2*8 : 3*8]) + y3 := binary.LittleEndian.Uint64(y[3*8 : 4*8]) + y4 := binary.LittleEndian.Uint64(y[4*8 : 5*8]) + y5 := binary.LittleEndian.Uint64(y[5*8 : 6*8]) + y6 := binary.LittleEndian.Uint64(y[6*8 : 7*8]) + + t0 := m & (x0 ^ y0) + t1 := m & (x1 ^ y1) + t2 := m & (x2 ^ y2) + t3 := m & (x3 ^ y3) + t4 := m & (x4 ^ y4) + t5 := m & (x5 ^ y5) + t6 := m & (x6 ^ y6) + x0 ^= t0 + x1 ^= t1 + x2 ^= t2 + x3 ^= t3 + x4 ^= t4 + x5 ^= t5 + x6 ^= t6 + y0 ^= t0 + y1 ^= t1 + y2 ^= t2 + y3 ^= t3 + y4 ^= t4 + y5 ^= t5 + y6 ^= t6 + + binary.LittleEndian.PutUint64(x[0*8:1*8], x0) + binary.LittleEndian.PutUint64(x[1*8:2*8], x1) + binary.LittleEndian.PutUint64(x[2*8:3*8], x2) + binary.LittleEndian.PutUint64(x[3*8:4*8], x3) + binary.LittleEndian.PutUint64(x[4*8:5*8], x4) + 
binary.LittleEndian.PutUint64(x[5*8:6*8], x5) + binary.LittleEndian.PutUint64(x[6*8:7*8], x6) + + binary.LittleEndian.PutUint64(y[0*8:1*8], y0) + binary.LittleEndian.PutUint64(y[1*8:2*8], y1) + binary.LittleEndian.PutUint64(y[2*8:3*8], y2) + binary.LittleEndian.PutUint64(y[3*8:4*8], y3) + binary.LittleEndian.PutUint64(y[4*8:5*8], y4) + binary.LittleEndian.PutUint64(y[5*8:6*8], y5) + binary.LittleEndian.PutUint64(y[6*8:7*8], y6) +} + +func addGeneric(z, x, y *Elt) { + x0 := binary.LittleEndian.Uint64(x[0*8 : 1*8]) + x1 := binary.LittleEndian.Uint64(x[1*8 : 2*8]) + x2 := binary.LittleEndian.Uint64(x[2*8 : 3*8]) + x3 := binary.LittleEndian.Uint64(x[3*8 : 4*8]) + x4 := binary.LittleEndian.Uint64(x[4*8 : 5*8]) + x5 := binary.LittleEndian.Uint64(x[5*8 : 6*8]) + x6 := binary.LittleEndian.Uint64(x[6*8 : 7*8]) + + y0 := binary.LittleEndian.Uint64(y[0*8 : 1*8]) + y1 := binary.LittleEndian.Uint64(y[1*8 : 2*8]) + y2 := binary.LittleEndian.Uint64(y[2*8 : 3*8]) + y3 := binary.LittleEndian.Uint64(y[3*8 : 4*8]) + y4 := binary.LittleEndian.Uint64(y[4*8 : 5*8]) + y5 := binary.LittleEndian.Uint64(y[5*8 : 6*8]) + y6 := binary.LittleEndian.Uint64(y[6*8 : 7*8]) + + z0, c0 := bits.Add64(x0, y0, 0) + z1, c1 := bits.Add64(x1, y1, c0) + z2, c2 := bits.Add64(x2, y2, c1) + z3, c3 := bits.Add64(x3, y3, c2) + z4, c4 := bits.Add64(x4, y4, c3) + z5, c5 := bits.Add64(x5, y5, c4) + z6, z7 := bits.Add64(x6, y6, c5) + + z0, c0 = bits.Add64(z0, z7, 0) + z1, c1 = bits.Add64(z1, 0, c0) + z2, c2 = bits.Add64(z2, 0, c1) + z3, c3 = bits.Add64(z3, z7<<32, c2) + z4, c4 = bits.Add64(z4, 0, c3) + z5, c5 = bits.Add64(z5, 0, c4) + z6, z7 = bits.Add64(z6, 0, c5) + + z0, c0 = bits.Add64(z0, z7, 0) + z1, c1 = bits.Add64(z1, 0, c0) + z2, c2 = bits.Add64(z2, 0, c1) + z3, c3 = bits.Add64(z3, z7<<32, c2) + z4, c4 = bits.Add64(z4, 0, c3) + z5, c5 = bits.Add64(z5, 0, c4) + z6, _ = bits.Add64(z6, 0, c5) + + binary.LittleEndian.PutUint64(z[0*8:1*8], z0) + binary.LittleEndian.PutUint64(z[1*8:2*8], z1) + 
binary.LittleEndian.PutUint64(z[2*8:3*8], z2) + binary.LittleEndian.PutUint64(z[3*8:4*8], z3) + binary.LittleEndian.PutUint64(z[4*8:5*8], z4) + binary.LittleEndian.PutUint64(z[5*8:6*8], z5) + binary.LittleEndian.PutUint64(z[6*8:7*8], z6) +} + +func subGeneric(z, x, y *Elt) { + x0 := binary.LittleEndian.Uint64(x[0*8 : 1*8]) + x1 := binary.LittleEndian.Uint64(x[1*8 : 2*8]) + x2 := binary.LittleEndian.Uint64(x[2*8 : 3*8]) + x3 := binary.LittleEndian.Uint64(x[3*8 : 4*8]) + x4 := binary.LittleEndian.Uint64(x[4*8 : 5*8]) + x5 := binary.LittleEndian.Uint64(x[5*8 : 6*8]) + x6 := binary.LittleEndian.Uint64(x[6*8 : 7*8]) + + y0 := binary.LittleEndian.Uint64(y[0*8 : 1*8]) + y1 := binary.LittleEndian.Uint64(y[1*8 : 2*8]) + y2 := binary.LittleEndian.Uint64(y[2*8 : 3*8]) + y3 := binary.LittleEndian.Uint64(y[3*8 : 4*8]) + y4 := binary.LittleEndian.Uint64(y[4*8 : 5*8]) + y5 := binary.LittleEndian.Uint64(y[5*8 : 6*8]) + y6 := binary.LittleEndian.Uint64(y[6*8 : 7*8]) + + z0, c0 := bits.Sub64(x0, y0, 0) + z1, c1 := bits.Sub64(x1, y1, c0) + z2, c2 := bits.Sub64(x2, y2, c1) + z3, c3 := bits.Sub64(x3, y3, c2) + z4, c4 := bits.Sub64(x4, y4, c3) + z5, c5 := bits.Sub64(x5, y5, c4) + z6, z7 := bits.Sub64(x6, y6, c5) + + z0, c0 = bits.Sub64(z0, z7, 0) + z1, c1 = bits.Sub64(z1, 0, c0) + z2, c2 = bits.Sub64(z2, 0, c1) + z3, c3 = bits.Sub64(z3, z7<<32, c2) + z4, c4 = bits.Sub64(z4, 0, c3) + z5, c5 = bits.Sub64(z5, 0, c4) + z6, z7 = bits.Sub64(z6, 0, c5) + + z0, c0 = bits.Sub64(z0, z7, 0) + z1, c1 = bits.Sub64(z1, 0, c0) + z2, c2 = bits.Sub64(z2, 0, c1) + z3, c3 = bits.Sub64(z3, z7<<32, c2) + z4, c4 = bits.Sub64(z4, 0, c3) + z5, c5 = bits.Sub64(z5, 0, c4) + z6, _ = bits.Sub64(z6, 0, c5) + + binary.LittleEndian.PutUint64(z[0*8:1*8], z0) + binary.LittleEndian.PutUint64(z[1*8:2*8], z1) + binary.LittleEndian.PutUint64(z[2*8:3*8], z2) + binary.LittleEndian.PutUint64(z[3*8:4*8], z3) + binary.LittleEndian.PutUint64(z[4*8:5*8], z4) + binary.LittleEndian.PutUint64(z[5*8:6*8], z5) + 
binary.LittleEndian.PutUint64(z[6*8:7*8], z6) +} + +func addsubGeneric(x, y *Elt) { + z := &Elt{} + addGeneric(z, x, y) + subGeneric(y, x, y) + *x = *z +} + +func mulGeneric(z, x, y *Elt) { + x0 := binary.LittleEndian.Uint64(x[0*8 : 1*8]) + x1 := binary.LittleEndian.Uint64(x[1*8 : 2*8]) + x2 := binary.LittleEndian.Uint64(x[2*8 : 3*8]) + x3 := binary.LittleEndian.Uint64(x[3*8 : 4*8]) + x4 := binary.LittleEndian.Uint64(x[4*8 : 5*8]) + x5 := binary.LittleEndian.Uint64(x[5*8 : 6*8]) + x6 := binary.LittleEndian.Uint64(x[6*8 : 7*8]) + + y0 := binary.LittleEndian.Uint64(y[0*8 : 1*8]) + y1 := binary.LittleEndian.Uint64(y[1*8 : 2*8]) + y2 := binary.LittleEndian.Uint64(y[2*8 : 3*8]) + y3 := binary.LittleEndian.Uint64(y[3*8 : 4*8]) + y4 := binary.LittleEndian.Uint64(y[4*8 : 5*8]) + y5 := binary.LittleEndian.Uint64(y[5*8 : 6*8]) + y6 := binary.LittleEndian.Uint64(y[6*8 : 7*8]) + + yy := [7]uint64{y0, y1, y2, y3, y4, y5, y6} + zz := [7]uint64{} + + yi := yy[0] + h0, l0 := bits.Mul64(x0, yi) + h1, l1 := bits.Mul64(x1, yi) + h2, l2 := bits.Mul64(x2, yi) + h3, l3 := bits.Mul64(x3, yi) + h4, l4 := bits.Mul64(x4, yi) + h5, l5 := bits.Mul64(x5, yi) + h6, l6 := bits.Mul64(x6, yi) + + zz[0] = l0 + a0, c0 := bits.Add64(h0, l1, 0) + a1, c1 := bits.Add64(h1, l2, c0) + a2, c2 := bits.Add64(h2, l3, c1) + a3, c3 := bits.Add64(h3, l4, c2) + a4, c4 := bits.Add64(h4, l5, c3) + a5, c5 := bits.Add64(h5, l6, c4) + a6, _ := bits.Add64(h6, 0, c5) + + for i := 1; i < 7; i++ { + yi = yy[i] + h0, l0 = bits.Mul64(x0, yi) + h1, l1 = bits.Mul64(x1, yi) + h2, l2 = bits.Mul64(x2, yi) + h3, l3 = bits.Mul64(x3, yi) + h4, l4 = bits.Mul64(x4, yi) + h5, l5 = bits.Mul64(x5, yi) + h6, l6 = bits.Mul64(x6, yi) + + zz[i], c0 = bits.Add64(a0, l0, 0) + a0, c1 = bits.Add64(a1, l1, c0) + a1, c2 = bits.Add64(a2, l2, c1) + a2, c3 = bits.Add64(a3, l3, c2) + a3, c4 = bits.Add64(a4, l4, c3) + a4, c5 = bits.Add64(a5, l5, c4) + a5, a6 = bits.Add64(a6, l6, c5) + + a0, c0 = bits.Add64(a0, h0, 0) + a1, c1 = bits.Add64(a1, h1, c0) 
+ a2, c2 = bits.Add64(a2, h2, c1) + a3, c3 = bits.Add64(a3, h3, c2) + a4, c4 = bits.Add64(a4, h4, c3) + a5, c5 = bits.Add64(a5, h5, c4) + a6, _ = bits.Add64(a6, h6, c5) + } + red64(z, &zz, &[7]uint64{a0, a1, a2, a3, a4, a5, a6}) +} + +func sqrGeneric(z, x *Elt) { mulGeneric(z, x, x) } + +func red64(z *Elt, l, h *[7]uint64) { + /* (2C13, 2C12, 2C11, 2C10|C10, C9, C8, C7) + (C6,...,C0) */ + h0 := h[0] + h1 := h[1] + h2 := h[2] + h3 := ((h[3] & (0xFFFFFFFF << 32)) << 1) | (h[3] & 0xFFFFFFFF) + h4 := (h[3] >> 63) | (h[4] << 1) + h5 := (h[4] >> 63) | (h[5] << 1) + h6 := (h[5] >> 63) | (h[6] << 1) + h7 := (h[6] >> 63) + + l0, c0 := bits.Add64(h0, l[0], 0) + l1, c1 := bits.Add64(h1, l[1], c0) + l2, c2 := bits.Add64(h2, l[2], c1) + l3, c3 := bits.Add64(h3, l[3], c2) + l4, c4 := bits.Add64(h4, l[4], c3) + l5, c5 := bits.Add64(h5, l[5], c4) + l6, c6 := bits.Add64(h6, l[6], c5) + l7, _ := bits.Add64(h7, 0, c6) + + /* (C10C9, C9C8,C8C7,C7C13,C13C12,C12C11,C11C10) + (C6,...,C0) */ + h0 = (h[3] >> 32) | (h[4] << 32) + h1 = (h[4] >> 32) | (h[5] << 32) + h2 = (h[5] >> 32) | (h[6] << 32) + h3 = (h[6] >> 32) | (h[0] << 32) + h4 = (h[0] >> 32) | (h[1] << 32) + h5 = (h[1] >> 32) | (h[2] << 32) + h6 = (h[2] >> 32) | (h[3] << 32) + + l0, c0 = bits.Add64(l0, h0, 0) + l1, c1 = bits.Add64(l1, h1, c0) + l2, c2 = bits.Add64(l2, h2, c1) + l3, c3 = bits.Add64(l3, h3, c2) + l4, c4 = bits.Add64(l4, h4, c3) + l5, c5 = bits.Add64(l5, h5, c4) + l6, c6 = bits.Add64(l6, h6, c5) + l7, _ = bits.Add64(l7, 0, c6) + + /* (C7) + (C6,...,C0) */ + l0, c0 = bits.Add64(l0, l7, 0) + l1, c1 = bits.Add64(l1, 0, c0) + l2, c2 = bits.Add64(l2, 0, c1) + l3, c3 = bits.Add64(l3, l7<<32, c2) + l4, c4 = bits.Add64(l4, 0, c3) + l5, c5 = bits.Add64(l5, 0, c4) + l6, l7 = bits.Add64(l6, 0, c5) + + /* (C7) + (C6,...,C0) */ + l0, c0 = bits.Add64(l0, l7, 0) + l1, c1 = bits.Add64(l1, 0, c0) + l2, c2 = bits.Add64(l2, 0, c1) + l3, c3 = bits.Add64(l3, l7<<32, c2) + l4, c4 = bits.Add64(l4, 0, c3) + l5, c5 = bits.Add64(l5, 0, c4) + 
l6, _ = bits.Add64(l6, 0, c5) + + binary.LittleEndian.PutUint64(z[0*8:1*8], l0) + binary.LittleEndian.PutUint64(z[1*8:2*8], l1) + binary.LittleEndian.PutUint64(z[2*8:3*8], l2) + binary.LittleEndian.PutUint64(z[3*8:4*8], l3) + binary.LittleEndian.PutUint64(z[4*8:5*8], l4) + binary.LittleEndian.PutUint64(z[5*8:6*8], l5) + binary.LittleEndian.PutUint64(z[6*8:7*8], l6) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_noasm.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_noasm.go new file mode 100644 index 00000000000..a62225d2962 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fp_noasm.go @@ -0,0 +1,12 @@ +//go:build !amd64 || purego +// +build !amd64 purego + +package fp448 + +func cmov(x, y *Elt, n uint) { cmovGeneric(x, y, n) } +func cswap(x, y *Elt, n uint) { cswapGeneric(x, y, n) } +func add(z, x, y *Elt) { addGeneric(z, x, y) } +func sub(z, x, y *Elt) { subGeneric(z, x, y) } +func addsub(x, y *Elt) { addsubGeneric(x, y) } +func mul(z, x, y *Elt) { mulGeneric(z, x, y) } +func sqr(z, x *Elt) { sqrGeneric(z, x) } diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fuzzer.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fuzzer.go new file mode 100644 index 00000000000..2d7afc80598 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/fp448/fuzzer.go @@ -0,0 +1,75 @@ +//go:build gofuzz +// +build gofuzz + +// How to run the fuzzer: +// +// $ go get -u github.com/dvyukov/go-fuzz/go-fuzz +// $ go get -u github.com/dvyukov/go-fuzz/go-fuzz-build +// $ go-fuzz-build -libfuzzer -func FuzzReduction -o lib.a +// $ clang -fsanitize=fuzzer lib.a -o fu.exe +// $ ./fu.exe +package fp448 + +import ( + "encoding/binary" + "fmt" + "math/big" + + "github.com/cloudflare/circl/internal/conv" +) + +// FuzzReduction is a fuzzer target for red64 function, which reduces t +// (112 bits) to a number t' (56 bits) congruent modulo p448. 
+func FuzzReduction(data []byte) int { + if len(data) != 2*Size { + return -1 + } + var got, want Elt + var lo, hi [7]uint64 + a := data[:Size] + b := data[Size:] + lo[0] = binary.LittleEndian.Uint64(a[0*8 : 1*8]) + lo[1] = binary.LittleEndian.Uint64(a[1*8 : 2*8]) + lo[2] = binary.LittleEndian.Uint64(a[2*8 : 3*8]) + lo[3] = binary.LittleEndian.Uint64(a[3*8 : 4*8]) + lo[4] = binary.LittleEndian.Uint64(a[4*8 : 5*8]) + lo[5] = binary.LittleEndian.Uint64(a[5*8 : 6*8]) + lo[6] = binary.LittleEndian.Uint64(a[6*8 : 7*8]) + + hi[0] = binary.LittleEndian.Uint64(b[0*8 : 1*8]) + hi[1] = binary.LittleEndian.Uint64(b[1*8 : 2*8]) + hi[2] = binary.LittleEndian.Uint64(b[2*8 : 3*8]) + hi[3] = binary.LittleEndian.Uint64(b[3*8 : 4*8]) + hi[4] = binary.LittleEndian.Uint64(b[4*8 : 5*8]) + hi[5] = binary.LittleEndian.Uint64(b[5*8 : 6*8]) + hi[6] = binary.LittleEndian.Uint64(b[6*8 : 7*8]) + + red64(&got, &lo, &hi) + + t := conv.BytesLe2BigInt(data[:2*Size]) + + two448 := big.NewInt(1) + two448.Lsh(two448, 448) // 2^448 + mask448 := big.NewInt(1) + mask448.Sub(two448, mask448) // 2^448-1 + two224plus1 := big.NewInt(1) + two224plus1.Lsh(two224plus1, 224) + two224plus1.Add(two224plus1, big.NewInt(1)) // 2^224+1 + + var loBig, hiBig big.Int + for t.Cmp(two448) >= 0 { + loBig.And(t, mask448) + hiBig.Rsh(t, 448) + t.Mul(&hiBig, two224plus1) + t.Add(t, &loBig) + } + conv.BigInt2BytesLe(want[:], t) + + if got != want { + fmt.Printf("in: %v\n", conv.BytesLe2BigInt(data[:2*Size])) + fmt.Printf("got: %v\n", got) + fmt.Printf("want: %v\n", want) + panic("error found") + } + return 1 +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/mlsbset/mlsbset.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/mlsbset/mlsbset.go new file mode 100644 index 00000000000..a43851b8bb2 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/mlsbset/mlsbset.go @@ -0,0 +1,122 @@ +// Package mlsbset provides a constant-time exponentiation method with precomputation. 
+// +// References: "Efficient and secure algorithms for GLV-based scalar +// multiplication and their implementation on GLV–GLS curves" by Faz-Hernandez et al. +// - https://doi.org/10.1007/s13389-014-0085-7 +// - https://eprint.iacr.org/2013/158 +package mlsbset + +import ( + "errors" + "fmt" + "math/big" + + "github.com/cloudflare/circl/internal/conv" +) + +// EltG is a group element. +type EltG interface{} + +// EltP is a precomputed group element. +type EltP interface{} + +// Group defines the operations required by the MLSBSet exponentiation method. +type Group interface { + Identity() EltG // Returns the identity of the group. + Sqr(x EltG) // Calculates x = x^2. + Mul(x EltG, y EltP) // Calculates x = x*y. + NewEltP() EltP // Returns an arbitrary precomputed element. + ExtendedEltP() EltP // Returns the precomputed element x^(2^(w*d)). + Lookup(a EltP, v uint, s, u int32) // Sets a = s*T[v][u]. +} + +// Params contains the parameters of the encoding. +type Params struct { + T uint // T is the maximum size (in bits) of exponents. + V uint // V is the number of tables. + W uint // W is the window size. + E uint // E is the number of digits per table. + D uint // D is the number of digits in total. + L uint // L is the length of the code. +} + +// Encoder converts integers into valid powers. +type Encoder struct{ p Params } + +// New produces an encoder of the MLSBSet algorithm. +func New(t, v, w uint) (Encoder, error) { + if !(t > 1 && v >= 1 && w >= 2) { + return Encoder{}, errors.New("t>1, v>=1, w>=2") + } + e := (t + w*v - 1) / (w * v) + d := e * v + l := d * w + return Encoder{Params{t, v, w, e, d, l}}, nil +} + +// Encode converts an odd integer k into a valid power for exponentiation. 
+func (m Encoder) Encode(k []byte) (*Power, error) { + if len(k) == 0 { + return nil, errors.New("empty slice") + } + if !(len(k) <= int(m.p.L+7)>>3) { + return nil, errors.New("k too big") + } + if k[0]%2 == 0 { + return nil, errors.New("k must be odd") + } + ap := int((m.p.L+7)/8) - len(k) + k = append(k, make([]byte, ap)...) + s := m.signs(k) + b := make([]int32, m.p.L-m.p.D) + c := conv.BytesLe2BigInt(k) + c.Rsh(c, m.p.D) + var bi big.Int + for i := m.p.D; i < m.p.L; i++ { + c0 := int32(c.Bit(0)) + b[i-m.p.D] = s[i%m.p.D] * c0 + bi.SetInt64(int64(b[i-m.p.D] >> 1)) + c.Rsh(c, 1) + c.Sub(c, &bi) + } + carry := int(c.Int64()) + return &Power{m, s, b, carry}, nil +} + +// signs calculates the set of signs. +func (m Encoder) signs(k []byte) []int32 { + s := make([]int32, m.p.D) + s[m.p.D-1] = 1 + for i := uint(1); i < m.p.D; i++ { + ki := int32((k[i>>3] >> (i & 0x7)) & 0x1) + s[i-1] = 2*ki - 1 + } + return s +} + +// GetParams returns the complementary parameters of the encoding. +func (m Encoder) GetParams() Params { return m.p } + +// tableSize returns the size of each table. +func (m Encoder) tableSize() uint { return 1 << (m.p.W - 1) } + +// Elts returns the total number of elements that must be precomputed. +func (m Encoder) Elts() uint { return m.p.V * m.tableSize() } + +// IsExtended returns true if the element x^(2^(wd)) must be calculated. +func (m Encoder) IsExtended() bool { q := m.p.T / (m.p.V * m.p.W); return m.p.T == q*m.p.V*m.p.W } + +// Ops returns the number of squares and multiplications executed during an exponentiation. 
+func (m Encoder) Ops() (S uint, M uint) { + S = m.p.E + M = m.p.E * m.p.V + if m.IsExtended() { + M++ + } + return +} + +func (m Encoder) String() string { + return fmt.Sprintf("T: %v W: %v V: %v e: %v d: %v l: %v wv|t: %v", + m.p.T, m.p.W, m.p.V, m.p.E, m.p.D, m.p.L, m.IsExtended()) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/mlsbset/power.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/mlsbset/power.go new file mode 100644 index 00000000000..3f214c3046a --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/mlsbset/power.go @@ -0,0 +1,64 @@ +package mlsbset + +import "fmt" + +// Power is a valid exponent produced by the MLSBSet encoding algorithm. +type Power struct { + set Encoder // parameters of the code. + s []int32 // set of signs. + b []int32 // set of digits. + c int // carry is {0,1}. +} + +// Exp calculates x^k, where x is a predetermined element of a group G. +func (p *Power) Exp(G Group) EltG { + a, b := G.Identity(), G.NewEltP() + for e := int(p.set.p.E - 1); e >= 0; e-- { + G.Sqr(a) + for v := uint(0); v < p.set.p.V; v++ { + sgnElt, idElt := p.Digit(v, uint(e)) + G.Lookup(b, v, sgnElt, idElt) + G.Mul(a, b) + } + } + if p.set.IsExtended() && p.c == 1 { + G.Mul(a, G.ExtendedEltP()) + } + return a +} + +// Digit returns the (v,e)-th digit and its sign. +func (p *Power) Digit(v, e uint) (sgn, dig int32) { + sgn = p.bit(0, v, e) + dig = 0 + for i := p.set.p.W - 1; i > 0; i-- { + dig = 2*dig + p.bit(i, v, e) + } + mask := dig >> 31 + dig = (dig + mask) ^ mask + return sgn, dig +} + +// bit returns the (w,v,e)-th bit of the code.
+func (p *Power) bit(w, v, e uint) int32 { + if !(w < p.set.p.W && + v < p.set.p.V && + e < p.set.p.E) { + panic(fmt.Errorf("indexes outside (%v,%v,%v)", w, v, e)) + } + if w == 0 { + return p.s[p.set.p.E*v+e] + } + return p.b[p.set.p.D*(w-1)+p.set.p.E*v+e] +} + +func (p *Power) String() string { + dig := "" + for j := uint(0); j < p.set.p.V; j++ { + for i := uint(0); i < p.set.p.E; i++ { + s, d := p.Digit(j, i) + dig += fmt.Sprintf("(%2v,%2v) = %+2v %+2v\n", j, i, s, d) + } + } + return fmt.Sprintf("len: %v\ncarry: %v\ndigits:\n%v", len(p.b)+len(p.s), p.c, dig) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/math/wnaf.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/wnaf.go new file mode 100644 index 00000000000..94a1ec50429 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/math/wnaf.go @@ -0,0 +1,84 @@ +// Package math provides some utility functions for big integers. +package math + +import "math/big" + +// SignedDigit obtains the signed-digit recoding of n and returns a list L of +// digits such that n = sum( L[i]*2^(i*(w-1)) ), and each L[i] is an odd number +// in the set {±1, ±3, ..., ±2^(w-1)-1}. The third parameter ensures that the +// output has ceil(l/(w-1)) digits. +// +// Restrictions: +// - n is odd and n > 0. +// - 1 < w < 32. +// - l >= bit length of n. +// +// References: +// - Alg.6 in "Exponent Recoding and Regular Exponentiation Algorithms" +// by Joye-Tunstall. http://doi.org/10.1007/978-3-642-02384-2_21 +// - Alg.6 in "Selecting Elliptic Curves for Cryptography: An Efficiency and +// Security Analysis" by Bos et al. 
http://doi.org/10.1007/s13389-015-0097-y +func SignedDigit(n *big.Int, w, l uint) []int32 { + if n.Sign() <= 0 || n.Bit(0) == 0 { + panic("n must be non-zero, odd, and positive") + } + if w <= 1 || w >= 32 { + panic("Verify that 1 < w < 32") + } + if uint(n.BitLen()) > l { + panic("n is too big to fit in l digits") + } + lenN := (l + (w - 1) - 1) / (w - 1) // ceil(l/(w-1)) + L := make([]int32, lenN+1) + var k, v big.Int + k.Set(n) + + var i uint + for i = 0; i < lenN; i++ { + words := k.Bits() + value := int32(words[0] & ((1 << w) - 1)) + value -= int32(1) << (w - 1) + L[i] = value + v.SetInt64(int64(value)) + k.Sub(&k, &v) + k.Rsh(&k, w-1) + } + L[i] = int32(k.Int64()) + return L +} + +// OmegaNAF obtains the window-w Non-Adjacent Form of a positive number n and +// 1 < w < 32. The returned slice L holds n = sum( L[i]*2^i ). +// +// Reference: +// - Alg.9 "Efficient arithmetic on Koblitz curves" by Solinas. +// http://doi.org/10.1023/A:1008306223194 +func OmegaNAF(n *big.Int, w uint) (L []int32) { + if n.Sign() < 0 { + panic("n must be positive") + } + if w <= 1 || w >= 32 { + panic("Verify that 1 < w < 32") + } + + L = make([]int32, n.BitLen()+1) + var k, v big.Int + k.Set(n) + + i := 0 + for ; k.Sign() > 0; i++ { + value := int32(0) + if k.Bit(0) == 1 { + words := k.Bits() + value = int32(words[0] & ((1 << w) - 1)) + if value >= (int32(1) << (w - 1)) { + value -= int32(1) << w + } + v.SetInt64(int64(value)) + k.Sub(&k, &v) + } + L[i] = value + k.Rsh(&k, 1) + } + return L[:i] +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/ed25519.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/ed25519.go new file mode 100644 index 00000000000..08ca65d799a --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/ed25519.go @@ -0,0 +1,453 @@ +// Package ed25519 implements Ed25519 signature scheme as described in RFC-8032. 
+// +// This package provides optimized implementations of the three signature +// variants and maintains close compatibility with crypto/ed25519. +// +// | Scheme Name | Sign Function | Verification | Context | +// |-------------|-------------------|---------------|-------------------| +// | Ed25519 | Sign | Verify | None | +// | Ed25519Ph | SignPh | VerifyPh | Yes, can be empty | +// | Ed25519Ctx | SignWithCtx | VerifyWithCtx | Yes, non-empty | +// | All above | (PrivateKey).Sign | VerifyAny | As above | +// +// Specific functions for sign and verify are defined. A generic signing +// function for all schemes is available through the crypto.Signer interface, +// which is implemented by the PrivateKey type. A corresponding all-in-one +// verification method is provided by the VerifyAny function. +// +// Signing with Ed25519Ph or Ed25519Ctx requires a context string for domain +// separation. This parameter is passed using a SignerOptions struct defined +// in this package. While Ed25519Ph accepts an empty context, Ed25519Ctx +// enforces non-empty context strings. +// +// # Compatibility with crypto/ed25519 +// +// These functions are compatible with the “Ed25519” function defined in +// RFC-8032. However, unlike RFC 8032's formulation, this package's private +// key representation includes a public key suffix to make multiple signing +// operations with the same key more efficient. This package refers to the +// RFC-8032 private key as the “seed”. +// +// References +// +// - RFC-8032: https://rfc-editor.org/rfc/rfc8032.txt +// - Ed25519: https://ed25519.cr.yp.to/ +// - EdDSA: High-speed high-security signatures. https://doi.org/10.1007/s13389-012-0027-1 +package ed25519 + +import ( + "bytes" + "crypto" + cryptoRand "crypto/rand" + "crypto/sha512" + "crypto/subtle" + "errors" + "fmt" + "io" + "strconv" + + "github.com/cloudflare/circl/sign" +) + +const ( + // ContextMaxSize is the maximum length (in bytes) allowed for context.
+ ContextMaxSize = 255 + // PublicKeySize is the size, in bytes, of public keys as used in this package. + PublicKeySize = 32 + // PrivateKeySize is the size, in bytes, of private keys as used in this package. + PrivateKeySize = 64 + // SignatureSize is the size, in bytes, of signatures generated and verified by this package. + SignatureSize = 64 + // SeedSize is the size, in bytes, of private key seeds. These are the private key representations used by RFC 8032. + SeedSize = 32 +) + +const ( + paramB = 256 / 8 // Size of keys in bytes. +) + +// SignerOptions implements crypto.SignerOpts and augments it with parameters +// that are specific to the Ed25519 signature schemes. +type SignerOptions struct { + // Hash must be crypto.Hash(0) for Ed25519/Ed25519ctx, or crypto.SHA512 + // for Ed25519ph. + crypto.Hash + + // Context is an optional domain separation string for Ed25519ph and + // mandatory for Ed25519ctx. Its length must be less than or equal to 255 bytes. + Context string + + // Scheme is an identifier for choosing a signature scheme. The zero value + // is ED25519. + Scheme SchemeID +} + +// SchemeID is an identifier for each signature scheme. +type SchemeID uint + +const ( + ED25519 SchemeID = iota + ED25519Ph + ED25519Ctx +) + +// PrivateKey is the type of Ed25519 private keys. It implements crypto.Signer. +type PrivateKey []byte + +// Equal reports whether priv and x have the same value. +func (priv PrivateKey) Equal(x crypto.PrivateKey) bool { + xx, ok := x.(PrivateKey) + return ok && subtle.ConstantTimeCompare(priv, xx) == 1 +} + +// Public returns the PublicKey corresponding to priv. +func (priv PrivateKey) Public() crypto.PublicKey { + publicKey := make(PublicKey, PublicKeySize) + copy(publicKey, priv[SeedSize:]) + return publicKey +} + +// Seed returns the private key seed corresponding to priv. It is provided for +// interoperability with RFC 8032. RFC 8032's private keys correspond to seeds +// in this package.
+func (priv PrivateKey) Seed() []byte { + seed := make([]byte, SeedSize) + copy(seed, priv[:SeedSize]) + return seed +} + +func (priv PrivateKey) Scheme() sign.Scheme { return sch } + +func (pub PublicKey) Scheme() sign.Scheme { return sch } + +func (priv PrivateKey) MarshalBinary() (data []byte, err error) { + privateKey := make(PrivateKey, PrivateKeySize) + copy(privateKey, priv) + return privateKey, nil +} + +func (pub PublicKey) MarshalBinary() (data []byte, err error) { + publicKey := make(PublicKey, PublicKeySize) + copy(publicKey, pub) + return publicKey, nil +} + +// Equal reports whether pub and x have the same value. +func (pub PublicKey) Equal(x crypto.PublicKey) bool { + xx, ok := x.(PublicKey) + return ok && bytes.Equal(pub, xx) +} + +// Sign creates a signature of a message with priv key. +// This function is compatible with crypto.ed25519 and also supports the +// three signature variants defined in RFC-8032, namely Ed25519 (or pure +// EdDSA), Ed25519Ph, and Ed25519Ctx. +// The opts.HashFunc() must return zero to specify either Ed25519 or Ed25519Ctx +// variant. This can be achieved by passing crypto.Hash(0) as the value for +// opts. +// The opts.HashFunc() must return SHA512 to specify the Ed25519Ph variant. +// This can be achieved by passing crypto.SHA512 as the value for opts. +// Use a SignerOptions struct (defined in this package) to pass a context +// string for signing. 
+func (priv PrivateKey) Sign( + rand io.Reader, + message []byte, + opts crypto.SignerOpts, +) (signature []byte, err error) { + var ctx string + var scheme SchemeID + if o, ok := opts.(SignerOptions); ok { + ctx = o.Context + scheme = o.Scheme + } + + switch true { + case scheme == ED25519 && opts.HashFunc() == crypto.Hash(0): + return Sign(priv, message), nil + case scheme == ED25519Ph && opts.HashFunc() == crypto.SHA512: + return SignPh(priv, message, ctx), nil + case scheme == ED25519Ctx && opts.HashFunc() == crypto.Hash(0) && len(ctx) > 0: + return SignWithCtx(priv, message, ctx), nil + default: + return nil, errors.New("ed25519: bad hash algorithm") + } +} + +// GenerateKey generates a public/private key pair using entropy from rand. +// If rand is nil, crypto/rand.Reader will be used. +func GenerateKey(rand io.Reader) (PublicKey, PrivateKey, error) { + if rand == nil { + rand = cryptoRand.Reader + } + + seed := make([]byte, SeedSize) + if _, err := io.ReadFull(rand, seed); err != nil { + return nil, nil, err + } + + privateKey := NewKeyFromSeed(seed) + publicKey := make(PublicKey, PublicKeySize) + copy(publicKey, privateKey[SeedSize:]) + + return publicKey, privateKey, nil +} + +// NewKeyFromSeed calculates a private key from a seed. It will panic if +// len(seed) is not SeedSize. This function is provided for interoperability +// with RFC 8032. RFC 8032's private keys correspond to seeds in this +// package. 
+func NewKeyFromSeed(seed []byte) PrivateKey { + privateKey := make(PrivateKey, PrivateKeySize) + newKeyFromSeed(privateKey, seed) + return privateKey +} + +func newKeyFromSeed(privateKey, seed []byte) { + if l := len(seed); l != SeedSize { + panic("ed25519: bad seed length: " + strconv.Itoa(l)) + } + var P pointR1 + k := sha512.Sum512(seed) + clamp(k[:]) + reduceModOrder(k[:paramB], false) + P.fixedMult(k[:paramB]) + copy(privateKey[:SeedSize], seed) + _ = P.ToBytes(privateKey[SeedSize:]) +} + +func signAll(signature []byte, privateKey PrivateKey, message, ctx []byte, preHash bool) { + if l := len(privateKey); l != PrivateKeySize { + panic("ed25519: bad private key length: " + strconv.Itoa(l)) + } + + H := sha512.New() + var PHM []byte + + if preHash { + _, _ = H.Write(message) + PHM = H.Sum(nil) + H.Reset() + } else { + PHM = message + } + + // 1. Hash the 32-byte private key using SHA-512. + _, _ = H.Write(privateKey[:SeedSize]) + h := H.Sum(nil) + clamp(h[:]) + prefix, s := h[paramB:], h[:paramB] + + // 2. Compute SHA-512(dom2(F, C) || prefix || PH(M)) + H.Reset() + + writeDom(H, ctx, preHash) + + _, _ = H.Write(prefix) + _, _ = H.Write(PHM) + r := H.Sum(nil) + reduceModOrder(r[:], true) + + // 3. Compute the point [r]B. + var P pointR1 + P.fixedMult(r[:paramB]) + R := (&[paramB]byte{})[:] + if err := P.ToBytes(R); err != nil { + panic(err) + } + + // 4. Compute SHA512(dom2(F, C) || R || A || PH(M)). + H.Reset() + + writeDom(H, ctx, preHash) + + _, _ = H.Write(R) + _, _ = H.Write(privateKey[SeedSize:]) + _, _ = H.Write(PHM) + hRAM := H.Sum(nil) + + reduceModOrder(hRAM[:], true) + + // 5. Compute S = (r + k * s) mod order. + S := (&[paramB]byte{})[:] + calculateS(S, r[:paramB], hRAM[:paramB], s) + + // 6. The signature is the concatenation of R and S. + copy(signature[:paramB], R[:]) + copy(signature[paramB:], S[:]) +} + +// Sign signs the message with privateKey and returns a signature. 
+// This function supports the signature variant defined in RFC-8032: Ed25519, +// also known as the pure version of EdDSA. +// It will panic if len(privateKey) is not PrivateKeySize. +func Sign(privateKey PrivateKey, message []byte) []byte { + signature := make([]byte, SignatureSize) + signAll(signature, privateKey, message, []byte(""), false) + return signature +} + +// SignPh creates a signature of a message with private key and context. +// This function supports the signature variant defined in RFC-8032: Ed25519ph, +// meaning it internally hashes the message using SHA-512, and optionally +// accepts a context string. +// It will panic if len(privateKey) is not PrivateKeySize. +// An optional context string of length at most ContextMaxSize=255 bytes may +// be passed. It can be empty. +func SignPh(privateKey PrivateKey, message []byte, ctx string) []byte { + if len(ctx) > ContextMaxSize { + panic(fmt.Errorf("ed25519: bad context length: %v", len(ctx))) + } + + signature := make([]byte, SignatureSize) + signAll(signature, privateKey, message, []byte(ctx), true) + return signature +} + +// SignWithCtx creates a signature of a message with private key and context. +// This function supports the signature variant defined in RFC-8032: Ed25519ctx, +// meaning it accepts a non-empty context string. +// It will panic if len(privateKey) is not PrivateKeySize. +// A non-empty context string of length at most ContextMaxSize=255 bytes must +// be passed.
+func SignWithCtx(privateKey PrivateKey, message []byte, ctx string) []byte { + if len(ctx) == 0 || len(ctx) > ContextMaxSize { + panic(fmt.Errorf("ed25519: bad context length: %v > %v", len(ctx), ContextMaxSize)) + } + + signature := make([]byte, SignatureSize) + signAll(signature, privateKey, message, []byte(ctx), false) + return signature +} + +func verify(public PublicKey, message, signature, ctx []byte, preHash bool) bool { + if len(public) != PublicKeySize || + len(signature) != SignatureSize || + !isLessThanOrder(signature[paramB:]) { + return false + } + + var P pointR1 + if ok := P.FromBytes(public); !ok { + return false + } + + H := sha512.New() + var PHM []byte + + if preHash { + _, _ = H.Write(message) + PHM = H.Sum(nil) + H.Reset() + } else { + PHM = message + } + + R := signature[:paramB] + + writeDom(H, ctx, preHash) + + _, _ = H.Write(R) + _, _ = H.Write(public) + _, _ = H.Write(PHM) + hRAM := H.Sum(nil) + reduceModOrder(hRAM[:], true) + + var Q pointR1 + encR := (&[paramB]byte{})[:] + P.neg() + Q.doubleMult(&P, signature[paramB:], hRAM[:paramB]) + _ = Q.ToBytes(encR) + return bytes.Equal(R, encR) +} + +// VerifyAny returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded. +// This function supports all the three signature variants defined in RFC-8032, +// namely Ed25519 (or pure EdDSA), Ed25519Ph, and Ed25519Ctx. +// The opts.HashFunc() must return zero to specify either Ed25519 or Ed25519Ctx +// variant. This can be achieved by passing crypto.Hash(0) as the value for opts. +// The opts.HashFunc() must return SHA512 to specify the Ed25519Ph variant. +// This can be achieved by passing crypto.SHA512 as the value for opts. +// Use a SignerOptions struct to pass a context string for signing. 
+func VerifyAny(public PublicKey, message, signature []byte, opts crypto.SignerOpts) bool { + var ctx string + var scheme SchemeID + if o, ok := opts.(SignerOptions); ok { + ctx = o.Context + scheme = o.Scheme + } + + switch true { + case scheme == ED25519 && opts.HashFunc() == crypto.Hash(0): + return Verify(public, message, signature) + case scheme == ED25519Ph && opts.HashFunc() == crypto.SHA512: + return VerifyPh(public, message, signature, ctx) + case scheme == ED25519Ctx && opts.HashFunc() == crypto.Hash(0) && len(ctx) > 0: + return VerifyWithCtx(public, message, signature, ctx) + default: + return false + } +} + +// Verify returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded. +// This function supports the signature variant defined in RFC-8032: Ed25519, +// also known as the pure version of EdDSA. +func Verify(public PublicKey, message, signature []byte) bool { + return verify(public, message, signature, []byte(""), false) +} + +// VerifyPh returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded. +// This function supports the signature variant defined in RFC-8032: Ed25519ph, +// meaning it internally hashes the message using SHA-512. +// An optional context string of length at most 255 bytes may be passed. It +// can be empty. +func VerifyPh(public PublicKey, message, signature []byte, ctx string) bool { + return verify(public, message, signature, []byte(ctx), true) +} + +// VerifyWithCtx returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded, or when context is +// not provided. +// This function supports the signature variant defined in RFC-8032: Ed25519ctx, +// meaning it does not handle prehashed messages. A non-empty context string of +// length at most 255 bytes must be provided.
+func VerifyWithCtx(public PublicKey, message, signature []byte, ctx string) bool { + if len(ctx) == 0 || len(ctx) > ContextMaxSize { + return false + } + + return verify(public, message, signature, []byte(ctx), false) +} + +func clamp(k []byte) { + k[0] &= 248 + k[paramB-1] = (k[paramB-1] & 127) | 64 +} + +// isLessThanOrder returns true if 0 <= x < order. +func isLessThanOrder(x []byte) bool { + i := len(order) - 1 + for i > 0 && x[i] == order[i] { + i-- + } + return x[i] < order[i] +} + +func writeDom(h io.Writer, ctx []byte, preHash bool) { + dom2 := "SigEd25519 no Ed25519 collisions" + + if len(ctx) > 0 { + _, _ = h.Write([]byte(dom2)) + if preHash { + _, _ = h.Write([]byte{byte(0x01), byte(len(ctx))}) + } else { + _, _ = h.Write([]byte{byte(0x00), byte(len(ctx))}) + } + _, _ = h.Write(ctx) + } else if preHash { + _, _ = h.Write([]byte(dom2)) + _, _ = h.Write([]byte{0x01, 0x00}) + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/modular.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/modular.go new file mode 100644 index 00000000000..10efafdcafb --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/modular.go @@ -0,0 +1,175 @@ +package ed25519 + +import ( + "encoding/binary" + "math/bits" +) + +var order = [paramB]byte{ + 0xed, 0xd3, 0xf5, 0x5c, 0x1a, 0x63, 0x12, 0x58, + 0xd6, 0x9c, 0xf7, 0xa2, 0xde, 0xf9, 0xde, 0x14, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, +} + +// isLessThan returns true if 0 <= x < y, and assumes that slices have the same length. +func isLessThan(x, y []byte) bool { + i := len(x) - 1 + for i > 0 && x[i] == y[i] { + i-- + } + return x[i] < y[i] +} + +// reduceModOrder calculates k = k mod order of the curve. 
+func reduceModOrder(k []byte, is512Bit bool) { + var X [((2 * paramB) * 8) / 64]uint64 + numWords := len(k) >> 3 + for i := 0; i < numWords; i++ { + X[i] = binary.LittleEndian.Uint64(k[i*8 : (i+1)*8]) + } + red512(&X, is512Bit) + for i := 0; i < numWords; i++ { + binary.LittleEndian.PutUint64(k[i*8:(i+1)*8], X[i]) + } +} + +// red512 calculates x = x mod Order of the curve. +func red512(x *[8]uint64, full bool) { + // Implementation of Algs.(14.47)+(14.52) of Handbook of Applied + // Cryptography, by A. Menezes, P. van Oorschot, and S. Vanstone. + const ( + ell0 = uint64(0x5812631a5cf5d3ed) + ell1 = uint64(0x14def9dea2f79cd6) + ell160 = uint64(0x812631a5cf5d3ed0) + ell161 = uint64(0x4def9dea2f79cd65) + ell162 = uint64(0x0000000000000001) + ) + + var c0, c1, c2, c3 uint64 + r0, r1, r2, r3, r4 := x[0], x[1], x[2], x[3], uint64(0) + + if full { + q0, q1, q2, q3 := x[4], x[5], x[6], x[7] + + for i := 0; i < 3; i++ { + h0, s0 := bits.Mul64(q0, ell160) + h1, s1 := bits.Mul64(q1, ell160) + h2, s2 := bits.Mul64(q2, ell160) + h3, s3 := bits.Mul64(q3, ell160) + + s1, c0 = bits.Add64(h0, s1, 0) + s2, c1 = bits.Add64(h1, s2, c0) + s3, c2 = bits.Add64(h2, s3, c1) + s4, _ := bits.Add64(h3, 0, c2) + + h0, l0 := bits.Mul64(q0, ell161) + h1, l1 := bits.Mul64(q1, ell161) + h2, l2 := bits.Mul64(q2, ell161) + h3, l3 := bits.Mul64(q3, ell161) + + l1, c0 = bits.Add64(h0, l1, 0) + l2, c1 = bits.Add64(h1, l2, c0) + l3, c2 = bits.Add64(h2, l3, c1) + l4, _ := bits.Add64(h3, 0, c2) + + s1, c0 = bits.Add64(s1, l0, 0) + s2, c1 = bits.Add64(s2, l1, c0) + s3, c2 = bits.Add64(s3, l2, c1) + s4, c3 = bits.Add64(s4, l3, c2) + s5, s6 := bits.Add64(l4, 0, c3) + + s2, c0 = bits.Add64(s2, q0, 0) + s3, c1 = bits.Add64(s3, q1, c0) + s4, c2 = bits.Add64(s4, q2, c1) + s5, c3 = bits.Add64(s5, q3, c2) + s6, s7 := bits.Add64(s6, 0, c3) + + q := q0 | q1 | q2 | q3 + m := -((q | -q) >> 63) // if q=0 then m=0...0 else m=1..1 + s0 &= m + s1 &= m + s2 &= m + s3 &= m + q0, q1, q2, q3 = s4, s5, s6, s7 + + if (i+1)%2 
== 0 { + r0, c0 = bits.Add64(r0, s0, 0) + r1, c1 = bits.Add64(r1, s1, c0) + r2, c2 = bits.Add64(r2, s2, c1) + r3, c3 = bits.Add64(r3, s3, c2) + r4, _ = bits.Add64(r4, 0, c3) + } else { + r0, c0 = bits.Sub64(r0, s0, 0) + r1, c1 = bits.Sub64(r1, s1, c0) + r2, c2 = bits.Sub64(r2, s2, c1) + r3, c3 = bits.Sub64(r3, s3, c2) + r4, _ = bits.Sub64(r4, 0, c3) + } + } + + m := -(r4 >> 63) + r0, c0 = bits.Add64(r0, m&ell160, 0) + r1, c1 = bits.Add64(r1, m&ell161, c0) + r2, c2 = bits.Add64(r2, m&ell162, c1) + r3, c3 = bits.Add64(r3, 0, c2) + r4, _ = bits.Add64(r4, m&1, c3) + x[4], x[5], x[6], x[7] = 0, 0, 0, 0 + } + + q0 := (r4 << 4) | (r3 >> 60) + r3 &= (uint64(1) << 60) - 1 + + h0, s0 := bits.Mul64(ell0, q0) + h1, s1 := bits.Mul64(ell1, q0) + s1, c0 = bits.Add64(h0, s1, 0) + s2, _ := bits.Add64(h1, 0, c0) + + r0, c0 = bits.Sub64(r0, s0, 0) + r1, c1 = bits.Sub64(r1, s1, c0) + r2, c2 = bits.Sub64(r2, s2, c1) + r3, _ = bits.Sub64(r3, 0, c2) + + x[0], x[1], x[2], x[3] = r0, r1, r2, r3 +} + +// calculateS performs s = r+k*a mod Order of the curve. 
+func calculateS(s, r, k, a []byte) { + K := [4]uint64{ + binary.LittleEndian.Uint64(k[0*8 : 1*8]), + binary.LittleEndian.Uint64(k[1*8 : 2*8]), + binary.LittleEndian.Uint64(k[2*8 : 3*8]), + binary.LittleEndian.Uint64(k[3*8 : 4*8]), + } + S := [8]uint64{ + binary.LittleEndian.Uint64(r[0*8 : 1*8]), + binary.LittleEndian.Uint64(r[1*8 : 2*8]), + binary.LittleEndian.Uint64(r[2*8 : 3*8]), + binary.LittleEndian.Uint64(r[3*8 : 4*8]), + } + var c3 uint64 + for i := range K { + ai := binary.LittleEndian.Uint64(a[i*8 : (i+1)*8]) + + h0, l0 := bits.Mul64(K[0], ai) + h1, l1 := bits.Mul64(K[1], ai) + h2, l2 := bits.Mul64(K[2], ai) + h3, l3 := bits.Mul64(K[3], ai) + + l1, c0 := bits.Add64(h0, l1, 0) + l2, c1 := bits.Add64(h1, l2, c0) + l3, c2 := bits.Add64(h2, l3, c1) + l4, _ := bits.Add64(h3, 0, c2) + + S[i+0], c0 = bits.Add64(S[i+0], l0, 0) + S[i+1], c1 = bits.Add64(S[i+1], l1, c0) + S[i+2], c2 = bits.Add64(S[i+2], l2, c1) + S[i+3], c3 = bits.Add64(S[i+3], l3, c2) + S[i+4], _ = bits.Add64(S[i+4], l4, c3) + } + red512(&S, true) + binary.LittleEndian.PutUint64(s[0*8:1*8], S[0]) + binary.LittleEndian.PutUint64(s[1*8:2*8], S[1]) + binary.LittleEndian.PutUint64(s[2*8:3*8], S[2]) + binary.LittleEndian.PutUint64(s[3*8:4*8], S[3]) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/mult.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/mult.go new file mode 100644 index 00000000000..3216aae303c --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/mult.go @@ -0,0 +1,180 @@ +package ed25519 + +import ( + "crypto/subtle" + "encoding/binary" + "math/bits" + + "github.com/cloudflare/circl/internal/conv" + "github.com/cloudflare/circl/math" + fp "github.com/cloudflare/circl/math/fp25519" +) + +var paramD = fp.Elt{ + 0xa3, 0x78, 0x59, 0x13, 0xca, 0x4d, 0xeb, 0x75, + 0xab, 0xd8, 0x41, 0x41, 0x4d, 0x0a, 0x70, 0x00, + 0x98, 0xe8, 0x79, 0x77, 0x79, 0x40, 0xc7, 0x8c, + 0x73, 0xfe, 0x6f, 0x2b, 0xee, 0x6c, 0x03, 0x52, 
+} + +// mLSBRecoding parameters. +const ( + fxT = 257 + fxV = 2 + fxW = 3 + fx2w1 = 1 << (uint(fxW) - 1) + numWords64 = (paramB * 8 / 64) +) + +// mLSBRecoding is the odd-only modified LSB-set. +// +// Reference: +// +// "Efficient and secure algorithms for GLV-based scalar multiplication and +// their implementation on GLV–GLS curves" by Faz-Hernandez et al. +// http://doi.org/10.1007/s13389-014-0085-7. +func mLSBRecoding(L []int8, k []byte) { + const ee = (fxT + fxW*fxV - 1) / (fxW * fxV) + const dd = ee * fxV + const ll = dd * fxW + if len(L) == (ll + 1) { + var m [numWords64 + 1]uint64 + for i := 0; i < numWords64; i++ { + m[i] = binary.LittleEndian.Uint64(k[8*i : 8*i+8]) + } + condAddOrderN(&m) + L[dd-1] = 1 + for i := 0; i < dd-1; i++ { + kip1 := (m[(i+1)/64] >> (uint(i+1) % 64)) & 0x1 + L[i] = int8(kip1<<1) - 1 + } + { // right-shift by d + right := uint(dd % 64) + left := uint(64) - right + lim := ((numWords64+1)*64 - dd) / 64 + j := dd / 64 + for i := 0; i < lim; i++ { + m[i] = (m[i+j] >> right) | (m[i+j+1] << left) + } + m[lim] = m[lim+j] >> right + } + for i := dd; i < ll; i++ { + L[i] = L[i%dd] * int8(m[0]&0x1) + div2subY(m[:], int64(L[i]>>1), numWords64) + } + L[ll] = int8(m[0]) + } +} + +// absolute always returns a positive value. +func absolute(x int32) int32 { + mask := x >> 31 + return (x + mask) ^ mask +} + +// condAddOrderN updates x = x+order if x is even, otherwise x remains unchanged. +func condAddOrderN(x *[numWords64 + 1]uint64) { + isOdd := (x[0] & 0x1) - 1 + c := uint64(0) + for i := 0; i < numWords64; i++ { + orderWord := binary.LittleEndian.Uint64(order[8*i : 8*i+8]) + o := isOdd & orderWord + x0, c0 := bits.Add64(x[i], o, c) + x[i] = x0 + c = c0 + } + x[numWords64], _ = bits.Add64(x[numWords64], 0, c) +} + +// div2subY updates x = (x/2) - y.
+func div2subY(x []uint64, y int64, l int) { + s := uint64(y >> 63) + for i := 0; i < l-1; i++ { + x[i] = (x[i] >> 1) | (x[i+1] << 63) + } + x[l-1] = (x[l-1] >> 1) + + b := uint64(0) + x0, b0 := bits.Sub64(x[0], uint64(y), b) + x[0] = x0 + b = b0 + for i := 1; i < l-1; i++ { + x0, b0 := bits.Sub64(x[i], s, b) + x[i] = x0 + b = b0 + } + x[l-1], _ = bits.Sub64(x[l-1], s, b) +} + +func (P *pointR1) fixedMult(scalar []byte) { + if len(scalar) != paramB { + panic("wrong scalar size") + } + const ee = (fxT + fxW*fxV - 1) / (fxW * fxV) + const dd = ee * fxV + const ll = dd * fxW + + L := make([]int8, ll+1) + mLSBRecoding(L[:], scalar) + S := &pointR3{} + P.SetIdentity() + for ii := ee - 1; ii >= 0; ii-- { + P.double() + for j := 0; j < fxV; j++ { + dig := L[fxW*dd-j*ee+ii-ee] + for i := (fxW-1)*dd - j*ee + ii - ee; i >= (2*dd - j*ee + ii - ee); i = i - dd { + dig = 2*dig + L[i] + } + idx := absolute(int32(dig)) + sig := L[dd-j*ee+ii-ee] + Tabj := &tabSign[fxV-j-1] + for k := 0; k < fx2w1; k++ { + S.cmov(&Tabj[k], subtle.ConstantTimeEq(int32(k), idx)) + } + S.cneg(subtle.ConstantTimeEq(int32(sig), -1)) + P.mixAdd(S) + } + } +} + +const ( + omegaFix = 7 + omegaVar = 5 +) + +// doubleMult returns P=mG+nQ. +func (P *pointR1) doubleMult(Q *pointR1, m, n []byte) { + nafFix := math.OmegaNAF(conv.BytesLe2BigInt(m), omegaFix) + nafVar := math.OmegaNAF(conv.BytesLe2BigInt(n), omegaVar) + + if len(nafFix) > len(nafVar) { + nafVar = append(nafVar, make([]int32, len(nafFix)-len(nafVar))...) + } else if len(nafFix) < len(nafVar) { + nafFix = append(nafFix, make([]int32, len(nafVar)-len(nafFix))...) 
+ } + + var TabQ [1 << (omegaVar - 2)]pointR2 + Q.oddMultiples(TabQ[:]) + P.SetIdentity() + for i := len(nafFix) - 1; i >= 0; i-- { + P.double() + // Generator point + if nafFix[i] != 0 { + idxM := absolute(nafFix[i]) >> 1 + R := tabVerif[idxM] + if nafFix[i] < 0 { + R.neg() + } + P.mixAdd(&R) + } + // Variable input point + if nafVar[i] != 0 { + idxN := absolute(nafVar[i]) >> 1 + S := TabQ[idxN] + if nafVar[i] < 0 { + S.neg() + } + P.add(&S) + } + } +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/point.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/point.go new file mode 100644 index 00000000000..374a69503c3 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/point.go @@ -0,0 +1,195 @@ +package ed25519 + +import fp "github.com/cloudflare/circl/math/fp25519" + +type ( + pointR1 struct{ x, y, z, ta, tb fp.Elt } + pointR2 struct { + pointR3 + z2 fp.Elt + } +) +type pointR3 struct{ addYX, subYX, dt2 fp.Elt } + +func (P *pointR1) neg() { + fp.Neg(&P.x, &P.x) + fp.Neg(&P.ta, &P.ta) +} + +func (P *pointR1) SetIdentity() { + P.x = fp.Elt{} + fp.SetOne(&P.y) + fp.SetOne(&P.z) + P.ta = fp.Elt{} + P.tb = fp.Elt{} +} + +func (P *pointR1) toAffine() { + fp.Inv(&P.z, &P.z) + fp.Mul(&P.x, &P.x, &P.z) + fp.Mul(&P.y, &P.y, &P.z) + fp.Modp(&P.x) + fp.Modp(&P.y) + fp.SetOne(&P.z) + P.ta = P.x + P.tb = P.y +} + +func (P *pointR1) ToBytes(k []byte) error { + P.toAffine() + var x [fp.Size]byte + err := fp.ToBytes(k[:fp.Size], &P.y) + if err != nil { + return err + } + err = fp.ToBytes(x[:], &P.x) + if err != nil { + return err + } + b := x[0] & 1 + k[paramB-1] = k[paramB-1] | (b << 7) + return nil +} + +func (P *pointR1) FromBytes(k []byte) bool { + if len(k) != paramB { + panic("wrong size") + } + signX := k[paramB-1] >> 7 + copy(P.y[:], k[:fp.Size]) + P.y[fp.Size-1] &= 0x7F + p := fp.P() + if !isLessThan(P.y[:], p[:]) { + return false + } + + one, u, v := &fp.Elt{}, &fp.Elt{}, &fp.Elt{} + 
fp.SetOne(one) + fp.Sqr(u, &P.y) // u = y^2 + fp.Mul(v, u, &paramD) // v = dy^2 + fp.Sub(u, u, one) // u = y^2-1 + fp.Add(v, v, one) // v = dy^2+1 + isQR := fp.InvSqrt(&P.x, u, v) // x = sqrt(u/v) + if !isQR { + return false + } + fp.Modp(&P.x) // x = x mod p + if fp.IsZero(&P.x) && signX == 1 { + return false + } + if signX != (P.x[0] & 1) { + fp.Neg(&P.x, &P.x) + } + P.ta = P.x + P.tb = P.y + fp.SetOne(&P.z) + return true +} + +// double calculates 2P for curves with A=-1. +func (P *pointR1) double() { + Px, Py, Pz, Pta, Ptb := &P.x, &P.y, &P.z, &P.ta, &P.tb + a, b, c, e, f, g, h := Px, Py, Pz, Pta, Px, Py, Ptb + fp.Add(e, Px, Py) // x+y + fp.Sqr(a, Px) // A = x^2 + fp.Sqr(b, Py) // B = y^2 + fp.Sqr(c, Pz) // z^2 + fp.Add(c, c, c) // C = 2*z^2 + fp.Add(h, a, b) // H = A+B + fp.Sqr(e, e) // (x+y)^2 + fp.Sub(e, e, h) // E = (x+y)^2-A-B + fp.Sub(g, b, a) // G = B-A + fp.Sub(f, c, g) // F = C-G + fp.Mul(Pz, f, g) // Z = F * G + fp.Mul(Px, e, f) // X = E * F + fp.Mul(Py, g, h) // Y = G * H, T = E * H +} + +func (P *pointR1) mixAdd(Q *pointR3) { + fp.Add(&P.z, &P.z, &P.z) // D = 2*z1 + P.coreAddition(Q) +} + +func (P *pointR1) add(Q *pointR2) { + fp.Mul(&P.z, &P.z, &Q.z2) // D = 2*z1*z2 + P.coreAddition(&Q.pointR3) +} + +// coreAddition calculates P=P+Q for curves with A=-1.
+func (P *pointR1) coreAddition(Q *pointR3) { + Px, Py, Pz, Pta, Ptb := &P.x, &P.y, &P.z, &P.ta, &P.tb + addYX2, subYX2, dt2 := &Q.addYX, &Q.subYX, &Q.dt2 + a, b, c, d, e, f, g, h := Px, Py, &fp.Elt{}, Pz, Pta, Px, Py, Ptb + fp.Mul(c, Pta, Ptb) // t1 = ta*tb + fp.Sub(h, Py, Px) // y1-x1 + fp.Add(b, Py, Px) // y1+x1 + fp.Mul(a, h, subYX2) // A = (y1-x1)*(y2-x2) + fp.Mul(b, b, addYX2) // B = (y1+x1)*(y2+x2) + fp.Mul(c, c, dt2) // C = 2*D*t1*t2 + fp.Sub(e, b, a) // E = B-A + fp.Add(h, b, a) // H = B+A + fp.Sub(f, d, c) // F = D-C + fp.Add(g, d, c) // G = D+C + fp.Mul(Pz, f, g) // Z = F * G + fp.Mul(Px, e, f) // X = E * F + fp.Mul(Py, g, h) // Y = G * H, T = E * H +} + +func (P *pointR1) oddMultiples(T []pointR2) { + var R pointR2 + n := len(T) + T[0].fromR1(P) + _2P := *P + _2P.double() + R.fromR1(&_2P) + for i := 1; i < n; i++ { + P.add(&R) + T[i].fromR1(P) + } +} + +func (P *pointR1) isEqual(Q *pointR1) bool { + l, r := &fp.Elt{}, &fp.Elt{} + fp.Mul(l, &P.x, &Q.z) + fp.Mul(r, &Q.x, &P.z) + fp.Sub(l, l, r) + b := fp.IsZero(l) + fp.Mul(l, &P.y, &Q.z) + fp.Mul(r, &Q.y, &P.z) + fp.Sub(l, l, r) + b = b && fp.IsZero(l) + fp.Mul(l, &P.ta, &P.tb) + fp.Mul(l, l, &Q.z) + fp.Mul(r, &Q.ta, &Q.tb) + fp.Mul(r, r, &P.z) + fp.Sub(l, l, r) + b = b && fp.IsZero(l) + return b +} + +func (P *pointR3) neg() { + P.addYX, P.subYX = P.subYX, P.addYX + fp.Neg(&P.dt2, &P.dt2) +} + +func (P *pointR2) fromR1(Q *pointR1) { + fp.Add(&P.addYX, &Q.y, &Q.x) + fp.Sub(&P.subYX, &Q.y, &Q.x) + fp.Mul(&P.dt2, &Q.ta, &Q.tb) + fp.Mul(&P.dt2, &P.dt2, &paramD) + fp.Add(&P.dt2, &P.dt2, &P.dt2) + fp.Add(&P.z2, &Q.z, &Q.z) +} + +func (P *pointR3) cneg(b int) { + t := &fp.Elt{} + fp.Cswap(&P.addYX, &P.subYX, uint(b)) + fp.Neg(t, &P.dt2) + fp.Cmov(&P.dt2, t, uint(b)) +} + +func (P *pointR3) cmov(Q *pointR3, b int) { + fp.Cmov(&P.addYX, &Q.addYX, uint(b)) + fp.Cmov(&P.subYX, &Q.subYX, uint(b)) + fp.Cmov(&P.dt2, &Q.dt2, uint(b)) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/pubkey.go
b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/pubkey.go new file mode 100644 index 00000000000..c3505b67ace --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/pubkey.go @@ -0,0 +1,9 @@ +//go:build go1.13 +// +build go1.13 + +package ed25519 + +import cryptoEd25519 "crypto/ed25519" + +// PublicKey is the type of Ed25519 public keys. +type PublicKey cryptoEd25519.PublicKey diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/pubkey112.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/pubkey112.go new file mode 100644 index 00000000000..d57d86eff08 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/pubkey112.go @@ -0,0 +1,7 @@ +//go:build !go1.13 +// +build !go1.13 + +package ed25519 + +// PublicKey is the type of Ed25519 public keys. +type PublicKey []byte diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/signapi.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/signapi.go new file mode 100644 index 00000000000..e4520f52034 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/signapi.go @@ -0,0 +1,87 @@ +package ed25519 + +import ( + "crypto/rand" + "encoding/asn1" + + "github.com/cloudflare/circl/sign" +) + +var sch sign.Scheme = &scheme{} + +// Scheme returns a signature interface. 
+func Scheme() sign.Scheme { return sch } + +type scheme struct{} + +func (*scheme) Name() string { return "Ed25519" } +func (*scheme) PublicKeySize() int { return PublicKeySize } +func (*scheme) PrivateKeySize() int { return PrivateKeySize } +func (*scheme) SignatureSize() int { return SignatureSize } +func (*scheme) SeedSize() int { return SeedSize } +func (*scheme) TLSIdentifier() uint { return 0x0807 } +func (*scheme) SupportsContext() bool { return false } +func (*scheme) Oid() asn1.ObjectIdentifier { + return asn1.ObjectIdentifier{1, 3, 101, 112} +} + +func (*scheme) GenerateKey() (sign.PublicKey, sign.PrivateKey, error) { + return GenerateKey(rand.Reader) +} + +func (*scheme) Sign( + sk sign.PrivateKey, + message []byte, + opts *sign.SignatureOpts, +) []byte { + priv, ok := sk.(PrivateKey) + if !ok { + panic(sign.ErrTypeMismatch) + } + if opts != nil && opts.Context != "" { + panic(sign.ErrContextNotSupported) + } + return Sign(priv, message) +} + +func (*scheme) Verify( + pk sign.PublicKey, + message, signature []byte, + opts *sign.SignatureOpts, +) bool { + pub, ok := pk.(PublicKey) + if !ok { + panic(sign.ErrTypeMismatch) + } + if opts != nil { + if opts.Context != "" { + panic(sign.ErrContextNotSupported) + } + } + return Verify(pub, message, signature) +} + +func (*scheme) DeriveKey(seed []byte) (sign.PublicKey, sign.PrivateKey) { + privateKey := NewKeyFromSeed(seed) + publicKey := make(PublicKey, PublicKeySize) + copy(publicKey, privateKey[SeedSize:]) + return publicKey, privateKey +} + +func (*scheme) UnmarshalBinaryPublicKey(buf []byte) (sign.PublicKey, error) { + if len(buf) < PublicKeySize { + return nil, sign.ErrPubKeySize + } + pub := make(PublicKey, PublicKeySize) + copy(pub, buf[:PublicKeySize]) + return pub, nil +} + +func (*scheme) UnmarshalBinaryPrivateKey(buf []byte) (sign.PrivateKey, error) { + if len(buf) < PrivateKeySize { + return nil, sign.ErrPrivKeySize + } + priv := make(PrivateKey, PrivateKeySize) + copy(priv, buf[:PrivateKeySize]) 
+ return priv, nil +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/tables.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/tables.go new file mode 100644 index 00000000000..8763b426fc0 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed25519/tables.go @@ -0,0 +1,213 @@ +package ed25519 + +import fp "github.com/cloudflare/circl/math/fp25519" + +var tabSign = [fxV][fx2w1]pointR3{ + { + pointR3{ + addYX: fp.Elt{0x85, 0x3b, 0x8c, 0xf5, 0xc6, 0x93, 0xbc, 0x2f, 0x19, 0x0e, 0x8c, 0xfb, 0xc6, 0x2d, 0x93, 0xcf, 0xc2, 0x42, 0x3d, 0x64, 0x98, 0x48, 0x0b, 0x27, 0x65, 0xba, 0xd4, 0x33, 0x3a, 0x9d, 0xcf, 0x07}, + subYX: fp.Elt{0x3e, 0x91, 0x40, 0xd7, 0x05, 0x39, 0x10, 0x9d, 0xb3, 0xbe, 0x40, 0xd1, 0x05, 0x9f, 0x39, 0xfd, 0x09, 0x8a, 0x8f, 0x68, 0x34, 0x84, 0xc1, 0xa5, 0x67, 0x12, 0xf8, 0x98, 0x92, 0x2f, 0xfd, 0x44}, + dt2: fp.Elt{0x68, 0xaa, 0x7a, 0x87, 0x05, 0x12, 0xc9, 0xab, 0x9e, 0xc4, 0xaa, 0xcc, 0x23, 0xe8, 0xd9, 0x26, 0x8c, 0x59, 0x43, 0xdd, 0xcb, 0x7d, 0x1b, 0x5a, 0xa8, 0x65, 0x0c, 0x9f, 0x68, 0x7b, 0x11, 0x6f}, + }, + { + addYX: fp.Elt{0x7c, 0xb0, 0x9e, 0xe6, 0xc5, 0xbf, 0xfa, 0x13, 0x8e, 0x0d, 0x22, 0xde, 0xc8, 0xd1, 0xce, 0x52, 0x02, 0xd5, 0x62, 0x31, 0x71, 0x0e, 0x8e, 0x9d, 0xb0, 0xd6, 0x00, 0xa5, 0x5a, 0x0e, 0xce, 0x72}, + subYX: fp.Elt{0x1a, 0x8e, 0x5c, 0xdc, 0xa4, 0xb3, 0x6c, 0x51, 0x18, 0xa0, 0x09, 0x80, 0x9a, 0x46, 0x33, 0xd5, 0xe0, 0x3c, 0x4d, 0x3b, 0xfc, 0x49, 0xa2, 0x43, 0x29, 0xe1, 0x29, 0xa9, 0x93, 0xea, 0x7c, 0x35}, + dt2: fp.Elt{0x08, 0x46, 0x6f, 0x68, 0x7f, 0x0b, 0x7c, 0x9e, 0xad, 0xba, 0x07, 0x61, 0x74, 0x83, 0x2f, 0xfc, 0x26, 0xd6, 0x09, 0xb9, 0x00, 0x34, 0x36, 0x4f, 0x01, 0xf3, 0x48, 0xdb, 0x43, 0xba, 0x04, 0x44}, + }, + { + addYX: fp.Elt{0x4c, 0xda, 0x0d, 0x13, 0x66, 0xfd, 0x82, 0x84, 0x9f, 0x75, 0x5b, 0xa2, 0x17, 0xfe, 0x34, 0xbf, 0x1f, 0xcb, 0xba, 0x90, 0x55, 0x80, 0x83, 0xfd, 0x63, 0xb9, 0x18, 0xf8, 0x5b, 0x5d, 0x94, 0x1e}, + subYX: fp.Elt{0xb9, 0xdb, 0x6c, 0x04, 
0x88, 0x22, 0xd8, 0x79, 0x83, 0x2f, 0x8d, 0x65, 0x6b, 0xd2, 0xab, 0x1b, 0xdd, 0x65, 0xe5, 0x93, 0x63, 0xf8, 0xa2, 0xd8, 0x3c, 0xf1, 0x4b, 0xc5, 0x99, 0xd1, 0xf2, 0x12}, + dt2: fp.Elt{0x05, 0x4c, 0xb8, 0x3b, 0xfe, 0xf5, 0x9f, 0x2e, 0xd1, 0xb2, 0xb8, 0xff, 0xfe, 0x6d, 0xd9, 0x37, 0xe0, 0xae, 0xb4, 0x5a, 0x51, 0x80, 0x7e, 0x9b, 0x1d, 0xd1, 0x8d, 0x8c, 0x56, 0xb1, 0x84, 0x35}, + }, + { + addYX: fp.Elt{0x39, 0x71, 0x43, 0x34, 0xe3, 0x42, 0x45, 0xa1, 0xf2, 0x68, 0x71, 0xa7, 0xe8, 0x23, 0xfd, 0x9f, 0x86, 0x48, 0xff, 0xe5, 0x96, 0x74, 0xcf, 0x05, 0x49, 0xe2, 0xb3, 0x6c, 0x17, 0x77, 0x2f, 0x6d}, + subYX: fp.Elt{0x73, 0x3f, 0xc1, 0xc7, 0x6a, 0x66, 0xa1, 0x20, 0xdd, 0x11, 0xfb, 0x7a, 0x6e, 0xa8, 0x51, 0xb8, 0x3f, 0x9d, 0xa2, 0x97, 0x84, 0xb5, 0xc7, 0x90, 0x7c, 0xab, 0x48, 0xd6, 0x84, 0xa3, 0xd5, 0x1a}, + dt2: fp.Elt{0x63, 0x27, 0x3c, 0x49, 0x4b, 0xfc, 0x22, 0xf2, 0x0b, 0x50, 0xc2, 0x0f, 0xb4, 0x1f, 0x31, 0x0c, 0x2f, 0x53, 0xab, 0xaa, 0x75, 0x6f, 0xe0, 0x69, 0x39, 0x56, 0xe0, 0x3b, 0xb7, 0xa8, 0xbf, 0x45}, + }, + }, + { + { + addYX: fp.Elt{0x00, 0x45, 0xd9, 0x0d, 0x58, 0x03, 0xfc, 0x29, 0x93, 0xec, 0xbb, 0x6f, 0xa4, 0x7a, 0xd2, 0xec, 0xf8, 0xa7, 0xe2, 0xc2, 0x5f, 0x15, 0x0a, 0x13, 0xd5, 0xa1, 0x06, 0xb7, 0x1a, 0x15, 0x6b, 0x41}, + subYX: fp.Elt{0x85, 0x8c, 0xb2, 0x17, 0xd6, 0x3b, 0x0a, 0xd3, 0xea, 0x3b, 0x77, 0x39, 0xb7, 0x77, 0xd3, 0xc5, 0xbf, 0x5c, 0x6a, 0x1e, 0x8c, 0xe7, 0xc6, 0xc6, 0xc4, 0xb7, 0x2a, 0x8b, 0xf7, 0xb8, 0x61, 0x0d}, + dt2: fp.Elt{0xb0, 0x36, 0xc1, 0xe9, 0xef, 0xd7, 0xa8, 0x56, 0x20, 0x4b, 0xe4, 0x58, 0xcd, 0xe5, 0x07, 0xbd, 0xab, 0xe0, 0x57, 0x1b, 0xda, 0x2f, 0xe6, 0xaf, 0xd2, 0xe8, 0x77, 0x42, 0xf7, 0x2a, 0x1a, 0x19}, + }, + { + addYX: fp.Elt{0x6a, 0x6d, 0x6d, 0xd1, 0xfa, 0xf5, 0x03, 0x30, 0xbd, 0x6d, 0xc2, 0xc8, 0xf5, 0x38, 0x80, 0x4f, 0xb2, 0xbe, 0xa1, 0x76, 0x50, 0x1a, 0x73, 0xf2, 0x78, 0x2b, 0x8e, 0x3a, 0x1e, 0x34, 0x47, 0x7b}, + subYX: fp.Elt{0xc3, 0x2c, 0x36, 0xdc, 0xc5, 0x45, 0xbc, 0xef, 0x1b, 0x64, 0xd6, 0x65, 0x28, 0xe9, 0xda, 0x84, 0x13, 0xbe, 
0x27, 0x8e, 0x3f, 0x98, 0x2a, 0x37, 0xee, 0x78, 0x97, 0xd6, 0xc0, 0x6f, 0xb4, 0x53}, + dt2: fp.Elt{0x58, 0x5d, 0xa7, 0xa3, 0x68, 0xbb, 0x20, 0x30, 0x2e, 0x03, 0xe9, 0xb1, 0xd4, 0x90, 0x72, 0xe3, 0x71, 0xb2, 0x36, 0x3e, 0x73, 0xa0, 0x2e, 0x3d, 0xd1, 0x85, 0x33, 0x62, 0x4e, 0xa7, 0x7b, 0x31}, + }, + { + addYX: fp.Elt{0xbf, 0xc4, 0x38, 0x53, 0xfb, 0x68, 0xa9, 0x77, 0xce, 0x55, 0xf9, 0x05, 0xcb, 0xeb, 0xfb, 0x8c, 0x46, 0xc2, 0x32, 0x7c, 0xf0, 0xdb, 0xd7, 0x2c, 0x62, 0x8e, 0xdd, 0x54, 0x75, 0xcf, 0x3f, 0x33}, + subYX: fp.Elt{0x49, 0x50, 0x1f, 0x4e, 0x6e, 0x55, 0x55, 0xde, 0x8c, 0x4e, 0x77, 0x96, 0x38, 0x3b, 0xfe, 0xb6, 0x43, 0x3c, 0x86, 0x69, 0xc2, 0x72, 0x66, 0x1f, 0x6b, 0xf9, 0x87, 0xbc, 0x4f, 0x37, 0x3e, 0x3c}, + dt2: fp.Elt{0xd2, 0x2f, 0x06, 0x6b, 0x08, 0x07, 0x69, 0x77, 0xc0, 0x94, 0xcc, 0xae, 0x43, 0x00, 0x59, 0x6e, 0xa3, 0x63, 0xa8, 0xdd, 0xfa, 0x24, 0x18, 0xd0, 0x35, 0xc7, 0x78, 0xf7, 0x0d, 0xd4, 0x5a, 0x1e}, + }, + { + addYX: fp.Elt{0x45, 0xc1, 0x17, 0x51, 0xf8, 0xed, 0x7e, 0xc7, 0xa9, 0x1a, 0x11, 0x6e, 0x2d, 0xef, 0x0b, 0xd5, 0x3f, 0x98, 0xb0, 0xa3, 0x9d, 0x65, 0xf1, 0xcd, 0x53, 0x4a, 0x8a, 0x18, 0x70, 0x0a, 0x7f, 0x23}, + subYX: fp.Elt{0xdd, 0xef, 0xbe, 0x3a, 0x31, 0xe0, 0xbc, 0xbe, 0x6d, 0x5d, 0x79, 0x87, 0xd6, 0xbe, 0x68, 0xe3, 0x59, 0x76, 0x8c, 0x86, 0x0e, 0x7a, 0x92, 0x13, 0x14, 0x8f, 0x67, 0xb3, 0xcb, 0x1a, 0x76, 0x76}, + dt2: fp.Elt{0x56, 0x7a, 0x1c, 0x9d, 0xca, 0x96, 0xf9, 0xf9, 0x03, 0x21, 0xd4, 0xe8, 0xb3, 0xd5, 0xe9, 0x52, 0xc8, 0x54, 0x1e, 0x1b, 0x13, 0xb6, 0xfd, 0x47, 0x7d, 0x02, 0x32, 0x33, 0x27, 0xe2, 0x1f, 0x19}, + }, + }, +} + +var tabVerif = [1 << (omegaFix - 2)]pointR3{ + { /* 1P */ + addYX: fp.Elt{0x85, 0x3b, 0x8c, 0xf5, 0xc6, 0x93, 0xbc, 0x2f, 0x19, 0x0e, 0x8c, 0xfb, 0xc6, 0x2d, 0x93, 0xcf, 0xc2, 0x42, 0x3d, 0x64, 0x98, 0x48, 0x0b, 0x27, 0x65, 0xba, 0xd4, 0x33, 0x3a, 0x9d, 0xcf, 0x07}, + subYX: fp.Elt{0x3e, 0x91, 0x40, 0xd7, 0x05, 0x39, 0x10, 0x9d, 0xb3, 0xbe, 0x40, 0xd1, 0x05, 0x9f, 0x39, 0xfd, 0x09, 0x8a, 0x8f, 0x68, 0x34, 0x84, 0xc1, 
0xa5, 0x67, 0x12, 0xf8, 0x98, 0x92, 0x2f, 0xfd, 0x44}, + dt2: fp.Elt{0x68, 0xaa, 0x7a, 0x87, 0x05, 0x12, 0xc9, 0xab, 0x9e, 0xc4, 0xaa, 0xcc, 0x23, 0xe8, 0xd9, 0x26, 0x8c, 0x59, 0x43, 0xdd, 0xcb, 0x7d, 0x1b, 0x5a, 0xa8, 0x65, 0x0c, 0x9f, 0x68, 0x7b, 0x11, 0x6f}, + }, + { /* 3P */ + addYX: fp.Elt{0x30, 0x97, 0xee, 0x4c, 0xa8, 0xb0, 0x25, 0xaf, 0x8a, 0x4b, 0x86, 0xe8, 0x30, 0x84, 0x5a, 0x02, 0x32, 0x67, 0x01, 0x9f, 0x02, 0x50, 0x1b, 0xc1, 0xf4, 0xf8, 0x80, 0x9a, 0x1b, 0x4e, 0x16, 0x7a}, + subYX: fp.Elt{0x65, 0xd2, 0xfc, 0xa4, 0xe8, 0x1f, 0x61, 0x56, 0x7d, 0xba, 0xc1, 0xe5, 0xfd, 0x53, 0xd3, 0x3b, 0xbd, 0xd6, 0x4b, 0x21, 0x1a, 0xf3, 0x31, 0x81, 0x62, 0xda, 0x5b, 0x55, 0x87, 0x15, 0xb9, 0x2a}, + dt2: fp.Elt{0x89, 0xd8, 0xd0, 0x0d, 0x3f, 0x93, 0xae, 0x14, 0x62, 0xda, 0x35, 0x1c, 0x22, 0x23, 0x94, 0x58, 0x4c, 0xdb, 0xf2, 0x8c, 0x45, 0xe5, 0x70, 0xd1, 0xc6, 0xb4, 0xb9, 0x12, 0xaf, 0x26, 0x28, 0x5a}, + }, + { /* 5P */ + addYX: fp.Elt{0x33, 0xbb, 0xa5, 0x08, 0x44, 0xbc, 0x12, 0xa2, 0x02, 0xed, 0x5e, 0xc7, 0xc3, 0x48, 0x50, 0x8d, 0x44, 0xec, 0xbf, 0x5a, 0x0c, 0xeb, 0x1b, 0xdd, 0xeb, 0x06, 0xe2, 0x46, 0xf1, 0xcc, 0x45, 0x29}, + subYX: fp.Elt{0xba, 0xd6, 0x47, 0xa4, 0xc3, 0x82, 0x91, 0x7f, 0xb7, 0x29, 0x27, 0x4b, 0xd1, 0x14, 0x00, 0xd5, 0x87, 0xa0, 0x64, 0xb8, 0x1c, 0xf1, 0x3c, 0xe3, 0xf3, 0x55, 0x1b, 0xeb, 0x73, 0x7e, 0x4a, 0x15}, + dt2: fp.Elt{0x85, 0x82, 0x2a, 0x81, 0xf1, 0xdb, 0xbb, 0xbc, 0xfc, 0xd1, 0xbd, 0xd0, 0x07, 0x08, 0x0e, 0x27, 0x2d, 0xa7, 0xbd, 0x1b, 0x0b, 0x67, 0x1b, 0xb4, 0x9a, 0xb6, 0x3b, 0x6b, 0x69, 0xbe, 0xaa, 0x43}, + }, + { /* 7P */ + addYX: fp.Elt{0xbf, 0xa3, 0x4e, 0x94, 0xd0, 0x5c, 0x1a, 0x6b, 0xd2, 0xc0, 0x9d, 0xb3, 0x3a, 0x35, 0x70, 0x74, 0x49, 0x2e, 0x54, 0x28, 0x82, 0x52, 0xb2, 0x71, 0x7e, 0x92, 0x3c, 0x28, 0x69, 0xea, 0x1b, 0x46}, + subYX: fp.Elt{0xb1, 0x21, 0x32, 0xaa, 0x9a, 0x2c, 0x6f, 0xba, 0xa7, 0x23, 0xba, 0x3b, 0x53, 0x21, 0xa0, 0x6c, 0x3a, 0x2c, 0x19, 0x92, 0x4f, 0x76, 0xea, 0x9d, 0xe0, 0x17, 0x53, 0x2e, 0x5d, 0xdd, 0x6e, 0x1d}, + dt2: 
fp.Elt{0xa2, 0xb3, 0xb8, 0x01, 0xc8, 0x6d, 0x83, 0xf1, 0x9a, 0xa4, 0x3e, 0x05, 0x47, 0x5f, 0x03, 0xb3, 0xf3, 0xad, 0x77, 0x58, 0xba, 0x41, 0x9c, 0x52, 0xa7, 0x90, 0x0f, 0x6a, 0x1c, 0xbb, 0x9f, 0x7a}, + }, + { /* 9P */ + addYX: fp.Elt{0x2f, 0x63, 0xa8, 0xa6, 0x8a, 0x67, 0x2e, 0x9b, 0xc5, 0x46, 0xbc, 0x51, 0x6f, 0x9e, 0x50, 0xa6, 0xb5, 0xf5, 0x86, 0xc6, 0xc9, 0x33, 0xb2, 0xce, 0x59, 0x7f, 0xdd, 0x8a, 0x33, 0xed, 0xb9, 0x34}, + subYX: fp.Elt{0x64, 0x80, 0x9d, 0x03, 0x7e, 0x21, 0x6e, 0xf3, 0x9b, 0x41, 0x20, 0xf5, 0xb6, 0x81, 0xa0, 0x98, 0x44, 0xb0, 0x5e, 0xe7, 0x08, 0xc6, 0xcb, 0x96, 0x8f, 0x9c, 0xdc, 0xfa, 0x51, 0x5a, 0xc0, 0x49}, + dt2: fp.Elt{0x1b, 0xaf, 0x45, 0x90, 0xbf, 0xe8, 0xb4, 0x06, 0x2f, 0xd2, 0x19, 0xa7, 0xe8, 0x83, 0xff, 0xe2, 0x16, 0xcf, 0xd4, 0x93, 0x29, 0xfc, 0xf6, 0xaa, 0x06, 0x8b, 0x00, 0x1b, 0x02, 0x72, 0xc1, 0x73}, + }, + { /* 11P */ + addYX: fp.Elt{0xde, 0x2a, 0x80, 0x8a, 0x84, 0x00, 0xbf, 0x2f, 0x27, 0x2e, 0x30, 0x02, 0xcf, 0xfe, 0xd9, 0xe5, 0x06, 0x34, 0x70, 0x17, 0x71, 0x84, 0x3e, 0x11, 0xaf, 0x8f, 0x6d, 0x54, 0xe2, 0xaa, 0x75, 0x42}, + subYX: fp.Elt{0x48, 0x43, 0x86, 0x49, 0x02, 0x5b, 0x5f, 0x31, 0x81, 0x83, 0x08, 0x77, 0x69, 0xb3, 0xd6, 0x3e, 0x95, 0xeb, 0x8d, 0x6a, 0x55, 0x75, 0xa0, 0xa3, 0x7f, 0xc7, 0xd5, 0x29, 0x80, 0x59, 0xab, 0x18}, + dt2: fp.Elt{0xe9, 0x89, 0x60, 0xfd, 0xc5, 0x2c, 0x2b, 0xd8, 0xa4, 0xe4, 0x82, 0x32, 0xa1, 0xb4, 0x1e, 0x03, 0x22, 0x86, 0x1a, 0xb5, 0x99, 0x11, 0x31, 0x44, 0x48, 0xf9, 0x3d, 0xb5, 0x22, 0x55, 0xc6, 0x3d}, + }, + { /* 13P */ + addYX: fp.Elt{0x6d, 0x7f, 0x00, 0xa2, 0x22, 0xc2, 0x70, 0xbf, 0xdb, 0xde, 0xbc, 0xb5, 0x9a, 0xb3, 0x84, 0xbf, 0x07, 0xba, 0x07, 0xfb, 0x12, 0x0e, 0x7a, 0x53, 0x41, 0xf2, 0x46, 0xc3, 0xee, 0xd7, 0x4f, 0x23}, + subYX: fp.Elt{0x93, 0xbf, 0x7f, 0x32, 0x3b, 0x01, 0x6f, 0x50, 0x6b, 0x6f, 0x77, 0x9b, 0xc9, 0xeb, 0xfc, 0xae, 0x68, 0x59, 0xad, 0xaa, 0x32, 0xb2, 0x12, 0x9d, 0xa7, 0x24, 0x60, 0x17, 0x2d, 0x88, 0x67, 0x02}, + dt2: fp.Elt{0x78, 0xa3, 0x2e, 0x73, 0x19, 0xa1, 0x60, 0x53, 0x71, 0xd4, 
0x8d, 0xdf, 0xb1, 0xe6, 0x37, 0x24, 0x33, 0xe5, 0xa7, 0x91, 0xf8, 0x37, 0xef, 0xa2, 0x63, 0x78, 0x09, 0xaa, 0xfd, 0xa6, 0x7b, 0x49}, + }, + { /* 15P */ + addYX: fp.Elt{0xa0, 0xea, 0xcf, 0x13, 0x03, 0xcc, 0xce, 0x24, 0x6d, 0x24, 0x9c, 0x18, 0x8d, 0xc2, 0x48, 0x86, 0xd0, 0xd4, 0xf2, 0xc1, 0xfa, 0xbd, 0xbd, 0x2d, 0x2b, 0xe7, 0x2d, 0xf1, 0x17, 0x29, 0xe2, 0x61}, + subYX: fp.Elt{0x0b, 0xcf, 0x8c, 0x46, 0x86, 0xcd, 0x0b, 0x04, 0xd6, 0x10, 0x99, 0x2a, 0xa4, 0x9b, 0x82, 0xd3, 0x92, 0x51, 0xb2, 0x07, 0x08, 0x30, 0x08, 0x75, 0xbf, 0x5e, 0xd0, 0x18, 0x42, 0xcd, 0xb5, 0x43}, + dt2: fp.Elt{0x16, 0xb5, 0xd0, 0x9b, 0x2f, 0x76, 0x9a, 0x5d, 0xee, 0xde, 0x3f, 0x37, 0x4e, 0xaf, 0x38, 0xeb, 0x70, 0x42, 0xd6, 0x93, 0x7d, 0x5a, 0x2e, 0x03, 0x42, 0xd8, 0xe4, 0x0a, 0x21, 0x61, 0x1d, 0x51}, + }, + { /* 17P */ + addYX: fp.Elt{0x81, 0x9d, 0x0e, 0x95, 0xef, 0x76, 0xc6, 0x92, 0x4f, 0x04, 0xd7, 0xc0, 0xcd, 0x20, 0x46, 0xa5, 0x48, 0x12, 0x8f, 0x6f, 0x64, 0x36, 0x9b, 0xaa, 0xe3, 0x55, 0xb8, 0xdd, 0x24, 0x59, 0x32, 0x6d}, + subYX: fp.Elt{0x87, 0xde, 0x20, 0x44, 0x48, 0x86, 0x13, 0x08, 0xb4, 0xed, 0x92, 0xb5, 0x16, 0xf0, 0x1c, 0x8a, 0x25, 0x2d, 0x94, 0x29, 0x27, 0x4e, 0xfa, 0x39, 0x10, 0x28, 0x48, 0xe2, 0x6f, 0xfe, 0xa7, 0x71}, + dt2: fp.Elt{0x54, 0xc8, 0xc8, 0xa5, 0xb8, 0x82, 0x71, 0x6c, 0x03, 0x2a, 0x5f, 0xfe, 0x79, 0x14, 0xfd, 0x33, 0x0c, 0x8d, 0x77, 0x83, 0x18, 0x59, 0xcf, 0x72, 0xa9, 0xea, 0x9e, 0x55, 0xb6, 0xc4, 0x46, 0x47}, + }, + { /* 19P */ + addYX: fp.Elt{0x2b, 0x9a, 0xc6, 0x6d, 0x3c, 0x7b, 0x77, 0xd3, 0x17, 0xf6, 0x89, 0x6f, 0x27, 0xb2, 0xfa, 0xde, 0xb5, 0x16, 0x3a, 0xb5, 0xf7, 0x1c, 0x65, 0x45, 0xb7, 0x9f, 0xfe, 0x34, 0xde, 0x51, 0x9a, 0x5c}, + subYX: fp.Elt{0x47, 0x11, 0x74, 0x64, 0xc8, 0x46, 0x85, 0x34, 0x49, 0xc8, 0xfc, 0x0e, 0xdd, 0xae, 0x35, 0x7d, 0x32, 0xa3, 0x72, 0x06, 0x76, 0x9a, 0x93, 0xff, 0xd6, 0xe6, 0xb5, 0x7d, 0x49, 0x63, 0x96, 0x21}, + dt2: fp.Elt{0x67, 0x0e, 0xf1, 0x79, 0xcf, 0xf1, 0x10, 0xf5, 0x5b, 0x51, 0x58, 0xe6, 0xa1, 0xda, 0xdd, 0xff, 0x77, 0x22, 0x14, 0x10, 0x17, 
0xa7, 0xc3, 0x09, 0xbb, 0x23, 0x82, 0x60, 0x3c, 0x50, 0x04, 0x48}, + }, + { /* 21P */ + addYX: fp.Elt{0xc7, 0x7f, 0xa3, 0x2c, 0xd0, 0x9e, 0x24, 0xc4, 0xab, 0xac, 0x15, 0xa6, 0xe3, 0xa0, 0x59, 0xa0, 0x23, 0x0e, 0x6e, 0xc9, 0xd7, 0x6e, 0xa9, 0x88, 0x6d, 0x69, 0x50, 0x16, 0xa5, 0x98, 0x33, 0x55}, + subYX: fp.Elt{0x75, 0xd1, 0x36, 0x3a, 0xd2, 0x21, 0x68, 0x3b, 0x32, 0x9e, 0x9b, 0xe9, 0xa7, 0x0a, 0xb4, 0xbb, 0x47, 0x8a, 0x83, 0x20, 0xe4, 0x5c, 0x9e, 0x5d, 0x5e, 0x4c, 0xde, 0x58, 0x88, 0x09, 0x1e, 0x77}, + dt2: fp.Elt{0xdf, 0x1e, 0x45, 0x78, 0xd2, 0xf5, 0x12, 0x9a, 0xcb, 0x9c, 0x89, 0x85, 0x79, 0x5d, 0xda, 0x3a, 0x08, 0x95, 0xa5, 0x9f, 0x2d, 0x4a, 0x7f, 0x47, 0x11, 0xa6, 0xf5, 0x8f, 0xd6, 0xd1, 0x5e, 0x5a}, + }, + { /* 23P */ + addYX: fp.Elt{0x83, 0x0e, 0x15, 0xfe, 0x2a, 0x12, 0x95, 0x11, 0xd8, 0x35, 0x4b, 0x7e, 0x25, 0x9a, 0x20, 0xcf, 0x20, 0x1e, 0x71, 0x1e, 0x29, 0xf8, 0x87, 0x73, 0xf0, 0x92, 0xbf, 0xd8, 0x97, 0xb8, 0xac, 0x44}, + subYX: fp.Elt{0x59, 0x73, 0x52, 0x58, 0xc5, 0xe0, 0xe5, 0xba, 0x7e, 0x9d, 0xdb, 0xca, 0x19, 0x5c, 0x2e, 0x39, 0xe9, 0xab, 0x1c, 0xda, 0x1e, 0x3c, 0x65, 0x28, 0x44, 0xdc, 0xef, 0x5f, 0x13, 0x60, 0x9b, 0x01}, + dt2: fp.Elt{0x83, 0x4b, 0x13, 0x5e, 0x14, 0x68, 0x60, 0x1e, 0x16, 0x4c, 0x30, 0x24, 0x4f, 0xe6, 0xf5, 0xc4, 0xd7, 0x3e, 0x1a, 0xfc, 0xa8, 0x88, 0x6e, 0x50, 0x92, 0x2f, 0xad, 0xe6, 0xfd, 0x49, 0x0c, 0x15}, + }, + { /* 25P */ + addYX: fp.Elt{0x38, 0x11, 0x47, 0x09, 0x95, 0xf2, 0x7b, 0x8e, 0x51, 0xa6, 0x75, 0x4f, 0x39, 0xef, 0x6f, 0x5d, 0xad, 0x08, 0xa7, 0x25, 0xc4, 0x79, 0xaf, 0x10, 0x22, 0x99, 0xb9, 0x5b, 0x07, 0x5a, 0x2b, 0x6b}, + subYX: fp.Elt{0x68, 0xa8, 0xdc, 0x9c, 0x3c, 0x86, 0x49, 0xb8, 0xd0, 0x4a, 0x71, 0xb8, 0xdb, 0x44, 0x3f, 0xc8, 0x8d, 0x16, 0x36, 0x0c, 0x56, 0xe3, 0x3e, 0xfe, 0xc1, 0xfb, 0x05, 0x1e, 0x79, 0xd7, 0xa6, 0x78}, + dt2: fp.Elt{0x76, 0xb9, 0xa0, 0x47, 0x4b, 0x70, 0xbf, 0x58, 0xd5, 0x48, 0x17, 0x74, 0x55, 0xb3, 0x01, 0xa6, 0x90, 0xf5, 0x42, 0xd5, 0xb1, 0x1f, 0x2b, 0xaa, 0x00, 0x5d, 0xd5, 0x4a, 0xfc, 0x7f, 0x5c, 0x72}, 
+ }, + { /* 27P */ + addYX: fp.Elt{0xb2, 0x99, 0xcf, 0xd1, 0x15, 0x67, 0x42, 0xe4, 0x34, 0x0d, 0xa2, 0x02, 0x11, 0xd5, 0x52, 0x73, 0x9f, 0x10, 0x12, 0x8b, 0x7b, 0x15, 0xd1, 0x23, 0xa3, 0xf3, 0xb1, 0x7c, 0x27, 0xc9, 0x4c, 0x79}, + subYX: fp.Elt{0xc0, 0x98, 0xd0, 0x1c, 0xf7, 0x2b, 0x80, 0x91, 0x66, 0x63, 0x5e, 0xed, 0xa4, 0x6c, 0x41, 0xfe, 0x4c, 0x99, 0x02, 0x49, 0x71, 0x5d, 0x58, 0xdf, 0xe7, 0xfa, 0x55, 0xf8, 0x25, 0x46, 0xd5, 0x4c}, + dt2: fp.Elt{0x53, 0x50, 0xac, 0xc2, 0x26, 0xc4, 0xf6, 0x4a, 0x58, 0x72, 0xf6, 0x32, 0xad, 0xed, 0x9a, 0xbc, 0x21, 0x10, 0x31, 0x0a, 0xf1, 0x32, 0xd0, 0x2a, 0x85, 0x8e, 0xcc, 0x6f, 0x7b, 0x35, 0x08, 0x70}, + }, + { /* 29P */ + addYX: fp.Elt{0x01, 0x3f, 0x77, 0x38, 0x27, 0x67, 0x88, 0x0b, 0xfb, 0xcc, 0xfb, 0x95, 0xfa, 0xc8, 0xcc, 0xb8, 0xb6, 0x29, 0xad, 0xb9, 0xa3, 0xd5, 0x2d, 0x8d, 0x6a, 0x0f, 0xad, 0x51, 0x98, 0x7e, 0xef, 0x06}, + subYX: fp.Elt{0x34, 0x4a, 0x58, 0x82, 0xbb, 0x9f, 0x1b, 0xd0, 0x2b, 0x79, 0xb4, 0xd2, 0x63, 0x64, 0xab, 0x47, 0x02, 0x62, 0x53, 0x48, 0x9c, 0x63, 0x31, 0xb6, 0x28, 0xd4, 0xd6, 0x69, 0x36, 0x2a, 0xa9, 0x13}, + dt2: fp.Elt{0xe5, 0x7d, 0x57, 0xc0, 0x1c, 0x77, 0x93, 0xca, 0x5c, 0xdc, 0x35, 0x50, 0x1e, 0xe4, 0x40, 0x75, 0x71, 0xe0, 0x02, 0xd8, 0x01, 0x0f, 0x68, 0x24, 0x6a, 0xf8, 0x2a, 0x8a, 0xdf, 0x6d, 0x29, 0x3c}, + }, + { /* 31P */ + addYX: fp.Elt{0x13, 0xa7, 0x14, 0xd9, 0xf9, 0x15, 0xad, 0xae, 0x12, 0xf9, 0x8f, 0x8c, 0xf9, 0x7b, 0x2f, 0xa9, 0x30, 0xd7, 0x53, 0x9f, 0x17, 0x23, 0xf8, 0xaf, 0xba, 0x77, 0x0c, 0x49, 0x93, 0xd3, 0x99, 0x7a}, + subYX: fp.Elt{0x41, 0x25, 0x1f, 0xbb, 0x2e, 0x4d, 0xeb, 0xfc, 0x1f, 0xb9, 0xad, 0x40, 0xc7, 0x10, 0x95, 0xb8, 0x05, 0xad, 0xa1, 0xd0, 0x7d, 0xa3, 0x71, 0xfc, 0x7b, 0x71, 0x47, 0x07, 0x70, 0x2c, 0x89, 0x0a}, + dt2: fp.Elt{0xe8, 0xa3, 0xbd, 0x36, 0x24, 0xed, 0x52, 0x8f, 0x94, 0x07, 0xe8, 0x57, 0x41, 0xc8, 0xa8, 0x77, 0xe0, 0x9c, 0x2f, 0x26, 0x63, 0x65, 0xa9, 0xa5, 0xd2, 0xf7, 0x02, 0x83, 0xd2, 0x62, 0x67, 0x28}, + }, + { /* 33P */ + addYX: fp.Elt{0x25, 0x5b, 0xe3, 0x3c, 0x09, 
0x36, 0x78, 0x4e, 0x97, 0xaa, 0x6b, 0xb2, 0x1d, 0x18, 0xe1, 0x82, 0x3f, 0xb8, 0xc7, 0xcb, 0xd3, 0x92, 0xc1, 0x0c, 0x3a, 0x9d, 0x9d, 0x6a, 0x04, 0xda, 0xf1, 0x32}, + subYX: fp.Elt{0xbd, 0xf5, 0x2e, 0xce, 0x2b, 0x8e, 0x55, 0x7c, 0x63, 0xbc, 0x47, 0x67, 0xb4, 0x6c, 0x98, 0xe4, 0xb8, 0x89, 0xbb, 0x3b, 0x9f, 0x17, 0x4a, 0x15, 0x7a, 0x76, 0xf1, 0xd6, 0xa3, 0xf2, 0x86, 0x76}, + dt2: fp.Elt{0x6a, 0x7c, 0x59, 0x6d, 0xa6, 0x12, 0x8d, 0xaa, 0x2b, 0x85, 0xd3, 0x04, 0x03, 0x93, 0x11, 0x8f, 0x22, 0xb0, 0x09, 0xc2, 0x73, 0xdc, 0x91, 0x3f, 0xa6, 0x28, 0xad, 0xa9, 0xf8, 0x05, 0x13, 0x56}, + }, + { /* 35P */ + addYX: fp.Elt{0xd1, 0xae, 0x92, 0xec, 0x8d, 0x97, 0x0c, 0x10, 0xe5, 0x73, 0x6d, 0x4d, 0x43, 0xd5, 0x43, 0xca, 0x48, 0xba, 0x47, 0xd8, 0x22, 0x1b, 0x13, 0x83, 0x2c, 0x4d, 0x5d, 0xe3, 0x53, 0xec, 0xaa}, + subYX: fp.Elt{0xd5, 0xc0, 0xb0, 0xe7, 0x28, 0xcc, 0x22, 0x67, 0x53, 0x5c, 0x07, 0xdb, 0xbb, 0xe9, 0x9d, 0x70, 0x61, 0x0a, 0x01, 0xd7, 0xa7, 0x8d, 0xf6, 0xca, 0x6c, 0xcc, 0x57, 0x2c, 0xef, 0x1a, 0x0a, 0x03}, + dt2: fp.Elt{0xaa, 0xd2, 0x3a, 0x00, 0x73, 0xf7, 0xb1, 0x7b, 0x08, 0x66, 0x21, 0x2b, 0x80, 0x29, 0x3f, 0x0b, 0x3e, 0xd2, 0x0e, 0x52, 0x86, 0xdc, 0x21, 0x78, 0x80, 0x54, 0x06, 0x24, 0x1c, 0x9c, 0xbe, 0x20}, + }, + { /* 37P */ + addYX: fp.Elt{0xa6, 0x73, 0x96, 0x24, 0xd8, 0x87, 0x53, 0xe1, 0x93, 0xe4, 0x46, 0xf5, 0x2d, 0xbc, 0x43, 0x59, 0xb5, 0x63, 0x6f, 0xc3, 0x81, 0x9a, 0x7f, 0x1c, 0xde, 0xc1, 0x0a, 0x1f, 0x36, 0xb3, 0x0a, 0x75}, + subYX: fp.Elt{0x60, 0x5e, 0x02, 0xe2, 0x4a, 0xe4, 0xe0, 0x20, 0x38, 0xb9, 0xdc, 0xcb, 0x2f, 0x3b, 0x3b, 0xb0, 0x1c, 0x0d, 0x5a, 0xf9, 0x9c, 0x63, 0x5d, 0x10, 0x11, 0xe3, 0x67, 0x50, 0x54, 0x4c, 0x76, 0x69}, + dt2: fp.Elt{0x37, 0x10, 0xf8, 0xa2, 0x83, 0x32, 0x8a, 0x1e, 0xf1, 0xcb, 0x7f, 0xbd, 0x23, 0xda, 0x2e, 0x6f, 0x63, 0x25, 0x2e, 0xac, 0x5b, 0xd1, 0x2f, 0xb7, 0x40, 0x50, 0x07, 0xb7, 0x3f, 0x6b, 0xf9, 0x54}, + }, + { /* 39P */ + addYX: fp.Elt{0x79, 0x92, 0x66, 0x29, 0x04, 0xf2, 0xad, 0x0f, 0x4a, 0x72, 0x7d, 0x7d, 0x04, 0xa2, 0xdd, 0x3a, 0xf1, 
0x60, 0x57, 0x8c, 0x82, 0x94, 0x3d, 0x6f, 0x9e, 0x53, 0xb7, 0x2b, 0xc5, 0xe9, 0x7f, 0x3d}, + subYX: fp.Elt{0xcd, 0x1e, 0xb1, 0x16, 0xc6, 0xaf, 0x7d, 0x17, 0x79, 0x64, 0x57, 0xfa, 0x9c, 0x4b, 0x76, 0x89, 0x85, 0xe7, 0xec, 0xe6, 0x10, 0xa1, 0xa8, 0xb7, 0xf0, 0xdb, 0x85, 0xbe, 0x9f, 0x83, 0xe6, 0x78}, + dt2: fp.Elt{0x6b, 0x85, 0xb8, 0x37, 0xf7, 0x2d, 0x33, 0x70, 0x8a, 0x17, 0x1a, 0x04, 0x43, 0x5d, 0xd0, 0x75, 0x22, 0x9e, 0xe5, 0xa0, 0x4a, 0xf7, 0x0f, 0x32, 0x42, 0x82, 0x08, 0x50, 0xf3, 0x68, 0xf2, 0x70}, + }, + { /* 41P */ + addYX: fp.Elt{0x47, 0x5f, 0x80, 0xb1, 0x83, 0x45, 0x86, 0x66, 0x19, 0x7c, 0xdd, 0x60, 0xd1, 0xc5, 0x35, 0xf5, 0x06, 0xb0, 0x4c, 0x1e, 0xb7, 0x4e, 0x87, 0xe9, 0xd9, 0x89, 0xd8, 0xfa, 0x5c, 0x34, 0x0d, 0x7c}, + subYX: fp.Elt{0x55, 0xf3, 0xdc, 0x70, 0x20, 0x11, 0x24, 0x23, 0x17, 0xe1, 0xfc, 0xe7, 0x7e, 0xc9, 0x0c, 0x38, 0x98, 0xb6, 0x52, 0x35, 0xed, 0xde, 0x1d, 0xb3, 0xb9, 0xc4, 0xb8, 0x39, 0xc0, 0x56, 0x4e, 0x40}, + dt2: fp.Elt{0x8a, 0x33, 0x78, 0x8c, 0x4b, 0x1f, 0x1f, 0x59, 0xe1, 0xb5, 0xe0, 0x67, 0xb1, 0x6a, 0x36, 0xa0, 0x44, 0x3d, 0x5f, 0xb4, 0x52, 0x41, 0xbc, 0x5c, 0x77, 0xc7, 0xae, 0x2a, 0x76, 0x54, 0xd7, 0x20}, + }, + { /* 43P */ + addYX: fp.Elt{0x58, 0xb7, 0x3b, 0xc7, 0x6f, 0xc3, 0x8f, 0x5e, 0x9a, 0xbb, 0x3c, 0x36, 0xa5, 0x43, 0xe5, 0xac, 0x22, 0xc9, 0x3b, 0x90, 0x7d, 0x4a, 0x93, 0xa9, 0x62, 0xec, 0xce, 0xf3, 0x46, 0x1e, 0x8f, 0x2b}, + subYX: fp.Elt{0x43, 0xf5, 0xb9, 0x35, 0xb1, 0xfe, 0x74, 0x9d, 0x6c, 0x95, 0x8c, 0xde, 0xf1, 0x7d, 0xb3, 0x84, 0xa9, 0x8b, 0x13, 0x57, 0x07, 0x2b, 0x32, 0xe9, 0xe1, 0x4c, 0x0b, 0x79, 0xa8, 0xad, 0xb8, 0x38}, + dt2: fp.Elt{0x5d, 0xf9, 0x51, 0xdf, 0x9c, 0x4a, 0xc0, 0xb5, 0xac, 0xde, 0x1f, 0xcb, 0xae, 0x52, 0x39, 0x2b, 0xda, 0x66, 0x8b, 0x32, 0x8b, 0x6d, 0x10, 0x1d, 0x53, 0x19, 0xba, 0xce, 0x32, 0xeb, 0x9a, 0x04}, + }, + { /* 45P */ + addYX: fp.Elt{0x31, 0x79, 0xfc, 0x75, 0x0b, 0x7d, 0x50, 0xaa, 0xd3, 0x25, 0x67, 0x7a, 0x4b, 0x92, 0xef, 0x0f, 0x30, 0x39, 0x6b, 0x39, 0x2b, 0x54, 0x82, 0x1d, 0xfc, 0x74, 0xf6, 0x30, 
0x75, 0xe1, 0x5e, 0x79}, + subYX: fp.Elt{0x7e, 0xfe, 0xdc, 0x63, 0x3c, 0x7d, 0x76, 0xd7, 0x40, 0x6e, 0x85, 0x97, 0x48, 0x59, 0x9c, 0x20, 0x13, 0x7c, 0x4f, 0xe1, 0x61, 0x68, 0x67, 0xb6, 0xfc, 0x25, 0xd6, 0xc8, 0xe0, 0x65, 0xc6, 0x51}, + dt2: fp.Elt{0x81, 0xbd, 0xec, 0x52, 0x0a, 0x5b, 0x4a, 0x25, 0xe7, 0xaf, 0x34, 0xe0, 0x6e, 0x1f, 0x41, 0x5d, 0x31, 0x4a, 0xee, 0xca, 0x0d, 0x4d, 0xa2, 0xe6, 0x77, 0x44, 0xc5, 0x9d, 0xf4, 0x9b, 0xd1, 0x6c}, + }, + { /* 47P */ + addYX: fp.Elt{0x86, 0xc3, 0xaf, 0x65, 0x21, 0x61, 0xfe, 0x1f, 0x10, 0x1b, 0xd5, 0xb8, 0x88, 0x2a, 0x2a, 0x08, 0xaa, 0x0b, 0x99, 0x20, 0x7e, 0x62, 0xf6, 0x76, 0xe7, 0x43, 0x9e, 0x42, 0xa7, 0xb3, 0x01, 0x5e}, + subYX: fp.Elt{0xa3, 0x9c, 0x17, 0x52, 0x90, 0x61, 0x87, 0x7e, 0x85, 0x9f, 0x2c, 0x0b, 0x06, 0x0a, 0x1d, 0x57, 0x1e, 0x71, 0x99, 0x84, 0xa8, 0xba, 0xa2, 0x80, 0x38, 0xe6, 0xb2, 0x40, 0xdb, 0xf3, 0x20, 0x75}, + dt2: fp.Elt{0xa1, 0x57, 0x93, 0xd3, 0xe3, 0x0b, 0xb5, 0x3d, 0xa5, 0x94, 0x9e, 0x59, 0xdd, 0x6c, 0x7b, 0x96, 0x6e, 0x1e, 0x31, 0xdf, 0x64, 0x9a, 0x30, 0x1a, 0x86, 0xc9, 0xf3, 0xce, 0x9c, 0x2c, 0x09, 0x71}, + }, + { /* 49P */ + addYX: fp.Elt{0xcf, 0x1d, 0x05, 0x74, 0xac, 0xd8, 0x6b, 0x85, 0x1e, 0xaa, 0xb7, 0x55, 0x08, 0xa4, 0xf6, 0x03, 0xeb, 0x3c, 0x74, 0xc9, 0xcb, 0xe7, 0x4a, 0x3a, 0xde, 0xab, 0x37, 0x71, 0xbb, 0xa5, 0x73, 0x41}, + subYX: fp.Elt{0x8c, 0x91, 0x64, 0x03, 0x3f, 0x52, 0xd8, 0x53, 0x1c, 0x6b, 0xab, 0x3f, 0xf4, 0x04, 0xb4, 0xa2, 0xa4, 0xe5, 0x81, 0x66, 0x9e, 0x4a, 0x0b, 0x08, 0xa7, 0x7b, 0x25, 0xd0, 0x03, 0x5b, 0xa1, 0x0e}, + dt2: fp.Elt{0x8a, 0x21, 0xf9, 0xf0, 0x31, 0x6e, 0xc5, 0x17, 0x08, 0x47, 0xfc, 0x1a, 0x2b, 0x6e, 0x69, 0x5a, 0x76, 0xf1, 0xb2, 0xf4, 0x68, 0x16, 0x93, 0xf7, 0x67, 0x3a, 0x4e, 0x4a, 0x61, 0x65, 0xc5, 0x5f}, + }, + { /* 51P */ + addYX: fp.Elt{0x8e, 0x98, 0x90, 0x77, 0xe6, 0xe1, 0x92, 0x48, 0x22, 0xd7, 0x5c, 0x1c, 0x0f, 0x95, 0xd5, 0x01, 0xed, 0x3e, 0x92, 0xe5, 0x9a, 0x81, 0xb0, 0xe3, 0x1b, 0x65, 0x46, 0x9d, 0x40, 0xc7, 0x14, 0x32}, + subYX: fp.Elt{0xe5, 0x7a, 0x6d, 0xc4, 
0x0d, 0x57, 0x6e, 0x13, 0x8f, 0xdc, 0xf8, 0x54, 0xcc, 0xaa, 0xd0, 0x0f, 0x86, 0xad, 0x0d, 0x31, 0x03, 0x9f, 0x54, 0x59, 0xa1, 0x4a, 0x45, 0x4c, 0x41, 0x1c, 0x71, 0x62}, + dt2: fp.Elt{0x70, 0x17, 0x65, 0x06, 0x74, 0x82, 0x29, 0x13, 0x36, 0x94, 0x27, 0x8a, 0x66, 0xa0, 0xa4, 0x3b, 0x3c, 0x22, 0x5d, 0x18, 0xec, 0xb8, 0xb6, 0xd9, 0x3c, 0x83, 0xcb, 0x3e, 0x07, 0x94, 0xea, 0x5b}, + }, + { /* 53P */ + addYX: fp.Elt{0xf8, 0xd2, 0x43, 0xf3, 0x63, 0xce, 0x70, 0xb4, 0xf1, 0xe8, 0x43, 0x05, 0x8f, 0xba, 0x67, 0x00, 0x6f, 0x7b, 0x11, 0xa2, 0xa1, 0x51, 0xda, 0x35, 0x2f, 0xbd, 0xf1, 0x44, 0x59, 0x78, 0xd0, 0x4a}, + subYX: fp.Elt{0xe4, 0x9b, 0xc8, 0x12, 0x09, 0xbf, 0x1d, 0x64, 0x9c, 0x57, 0x6e, 0x7d, 0x31, 0x8b, 0xf3, 0xac, 0x65, 0xb0, 0x97, 0xf6, 0x02, 0x9e, 0xfe, 0xab, 0xec, 0x1e, 0xf6, 0x48, 0xc1, 0xd5, 0xac, 0x3a}, + dt2: fp.Elt{0x01, 0x83, 0x31, 0xc3, 0x34, 0x3b, 0x8e, 0x85, 0x26, 0x68, 0x31, 0x07, 0x47, 0xc0, 0x99, 0xdc, 0x8c, 0xa8, 0x9d, 0xd3, 0x2e, 0x5b, 0x08, 0x34, 0x3d, 0x85, 0x02, 0xd9, 0xb1, 0x0c, 0xff, 0x3a}, + }, + { /* 55P */ + addYX: fp.Elt{0x05, 0x35, 0xc5, 0xf4, 0x0b, 0x43, 0x26, 0x92, 0x83, 0x22, 0x1f, 0x26, 0x13, 0x9c, 0xe4, 0x68, 0xc6, 0x27, 0xd3, 0x8f, 0x78, 0x33, 0xef, 0x09, 0x7f, 0x9e, 0xd9, 0x2b, 0x73, 0x9f, 0xcf, 0x2c}, + subYX: fp.Elt{0x5e, 0x40, 0x20, 0x3a, 0xeb, 0xc7, 0xc5, 0x87, 0xc9, 0x56, 0xad, 0xed, 0xef, 0x11, 0xe3, 0x8e, 0xf9, 0xd5, 0x29, 0xad, 0x48, 0x2e, 0x25, 0x29, 0x1d, 0x25, 0xcd, 0xf4, 0x86, 0x7e, 0x0e, 0x11}, + dt2: fp.Elt{0xe4, 0xf5, 0x03, 0xd6, 0x9e, 0xd8, 0xc0, 0x57, 0x0c, 0x20, 0xb0, 0xf0, 0x28, 0x86, 0x88, 0x12, 0xb7, 0x3b, 0x2e, 0xa0, 0x09, 0x27, 0x17, 0x53, 0x37, 0x3a, 0x69, 0xb9, 0xe0, 0x57, 0xc5, 0x05}, + }, + { /* 57P */ + addYX: fp.Elt{0xb0, 0x0e, 0xc2, 0x89, 0xb0, 0xbb, 0x76, 0xf7, 0x5c, 0xd8, 0x0f, 0xfa, 0xf6, 0x5b, 0xf8, 0x61, 0xfb, 0x21, 0x44, 0x63, 0x4e, 0x3f, 0xb9, 0xb6, 0x05, 0x12, 0x86, 0x41, 0x08, 0xef, 0x9f, 0x28}, + subYX: fp.Elt{0x6f, 0x7e, 0xc9, 0x1f, 0x31, 0xce, 0xf9, 0xd8, 0xae, 0xfd, 0xf9, 0x11, 0x30, 0x26, 0x3f, 
0x7a, 0xdd, 0x25, 0xed, 0x8b, 0xa0, 0x7e, 0x5b, 0xe1, 0x5a, 0x87, 0xe9, 0x8f, 0x17, 0x4c, 0x15, 0x6e}, + dt2: fp.Elt{0xbf, 0x9a, 0xd6, 0xfe, 0x36, 0x63, 0x61, 0xcf, 0x4f, 0xc9, 0x35, 0x83, 0xe7, 0xe4, 0x16, 0x9b, 0xe7, 0x7f, 0x3a, 0x75, 0x65, 0x97, 0x78, 0x13, 0x19, 0xa3, 0x5c, 0xa9, 0x42, 0xf6, 0xfb, 0x6a}, + }, + { /* 59P */ + addYX: fp.Elt{0xcc, 0xa8, 0x13, 0xf9, 0x70, 0x50, 0xe5, 0x5d, 0x61, 0xf5, 0x0c, 0x2b, 0x7b, 0x16, 0x1d, 0x7d, 0x89, 0xd4, 0xea, 0x90, 0xb6, 0x56, 0x29, 0xda, 0xd9, 0x1e, 0x80, 0xdb, 0xce, 0x93, 0xc0, 0x12}, + subYX: fp.Elt{0xc1, 0xd2, 0xf5, 0x62, 0x0c, 0xde, 0xa8, 0x7d, 0x9a, 0x7b, 0x0e, 0xb0, 0xa4, 0x3d, 0xfc, 0x98, 0xe0, 0x70, 0xad, 0x0d, 0xda, 0x6a, 0xeb, 0x7d, 0xc4, 0x38, 0x50, 0xb9, 0x51, 0xb8, 0xb4, 0x0d}, + dt2: fp.Elt{0x0f, 0x19, 0xb8, 0x08, 0x93, 0x7f, 0x14, 0xfc, 0x10, 0xe3, 0x1a, 0xa1, 0xa0, 0x9d, 0x96, 0x06, 0xfd, 0xd7, 0xc7, 0xda, 0x72, 0x55, 0xe7, 0xce, 0xe6, 0x5c, 0x63, 0xc6, 0x99, 0x87, 0xaa, 0x33}, + }, + { /* 61P */ + addYX: fp.Elt{0xb1, 0x6c, 0x15, 0xfc, 0x88, 0xf5, 0x48, 0x83, 0x27, 0x6d, 0x0a, 0x1a, 0x9b, 0xba, 0xa2, 0x6d, 0xb6, 0x5a, 0xca, 0x87, 0x5c, 0x2d, 0x26, 0xe2, 0xa6, 0x89, 0xd5, 0xc8, 0xc1, 0xd0, 0x2c, 0x21}, + subYX: fp.Elt{0xf2, 0x5c, 0x08, 0xbd, 0x1e, 0xf5, 0x0f, 0xaf, 0x1f, 0x3f, 0xd3, 0x67, 0x89, 0x1a, 0xf5, 0x78, 0x3c, 0x03, 0x60, 0x50, 0xe1, 0xbf, 0xc2, 0x6e, 0x86, 0x1a, 0xe2, 0xe8, 0x29, 0x6f, 0x3c, 0x23}, + dt2: fp.Elt{0x81, 0xc7, 0x18, 0x7f, 0x10, 0xd5, 0xf4, 0xd2, 0x28, 0x9d, 0x7e, 0x52, 0xf2, 0xcd, 0x2e, 0x12, 0x41, 0x33, 0x3d, 0x3d, 0x2a, 0x86, 0x0a, 0xa7, 0xe3, 0x4c, 0x91, 0x11, 0x89, 0x77, 0xb7, 0x1d}, + }, + { /* 63P */ + addYX: fp.Elt{0xb6, 0x1a, 0x70, 0xdd, 0x69, 0x47, 0x39, 0xb3, 0xa5, 0x8d, 0xcf, 0x19, 0xd4, 0xde, 0xb8, 0xe2, 0x52, 0xc8, 0x2a, 0xfd, 0x61, 0x41, 0xdf, 0x15, 0xbe, 0x24, 0x7d, 0x01, 0x8a, 0xca, 0xe2, 0x7a}, + subYX: fp.Elt{0x6f, 0xc2, 0x6b, 0x7c, 0x39, 0x52, 0xf3, 0xdd, 0x13, 0x01, 0xd5, 0x53, 0xcc, 0xe2, 0x97, 0x7a, 0x30, 0xa3, 0x79, 0xbf, 0x3a, 0xf4, 0x74, 0x7c, 0xfc, 0xad, 
0xe2, 0x26, 0xad, 0x97, 0xad, 0x31}, + dt2: fp.Elt{0x62, 0xb9, 0x20, 0x09, 0xed, 0x17, 0xe8, 0xb7, 0x9d, 0xda, 0x19, 0x3f, 0xcc, 0x18, 0x85, 0x1e, 0x64, 0x0a, 0x56, 0x25, 0x4f, 0xc1, 0x91, 0xe4, 0x83, 0x2c, 0x62, 0xa6, 0x53, 0xfc, 0xd1, 0x1e}, + }, +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed448/ed448.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed448/ed448.go new file mode 100644 index 00000000000..324bd8f3346 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed448/ed448.go @@ -0,0 +1,411 @@ +// Package ed448 implements the Ed448 signature scheme as described in RFC-8032. +// +// This package implements two signature variants. +// +// | Scheme Name | Sign Function | Verification | Context | +// |-------------|-------------------|---------------|-------------------| +// | Ed448 | Sign | Verify | Yes, can be empty | +// | Ed448Ph | SignPh | VerifyPh | Yes, can be empty | +// | All above | (PrivateKey).Sign | VerifyAny | As above | +// +// Specific functions for sign and verify are defined. A generic signing +// function for all schemes is available through the crypto.Signer interface, +// which is implemented by the PrivateKey type. A corresponding all-in-one +// verification method is provided by the VerifyAny function. +// +// Both schemes require a context string for domain separation. This parameter +// is passed using a SignerOptions struct defined in this package.
+// +// References: +// +// - RFC8032: https://rfc-editor.org/rfc/rfc8032.txt +// - EdDSA for more curves: https://eprint.iacr.org/2015/677 +// - High-speed high-security signatures: https://doi.org/10.1007/s13389-012-0027-1 +package ed448 + +import ( + "bytes" + "crypto" + cryptoRand "crypto/rand" + "crypto/subtle" + "errors" + "fmt" + "io" + "strconv" + + "github.com/cloudflare/circl/ecc/goldilocks" + "github.com/cloudflare/circl/internal/sha3" + "github.com/cloudflare/circl/sign" +) + +const ( + // ContextMaxSize is the maximum length (in bytes) allowed for context. + ContextMaxSize = 255 + // PublicKeySize is the length in bytes of Ed448 public keys. + PublicKeySize = 57 + // PrivateKeySize is the length in bytes of Ed448 private keys. + PrivateKeySize = 114 + // SignatureSize is the length in bytes of signatures. + SignatureSize = 114 + // SeedSize is the size, in bytes, of private key seeds. These are the private key representations used by RFC 8032. + SeedSize = 57 +) + +const ( + paramB = 456 / 8 // Size of keys in bytes. + hashSize = 2 * paramB // Size of the hash function's output. +) + +// SignerOptions implements crypto.SignerOpts and augments with parameters +// that are specific to the Ed448 signature schemes. +type SignerOptions struct { + // Hash must be crypto.Hash(0) for both Ed448 and Ed448Ph. + crypto.Hash + + // Context is an optional domain separation string for signing. + // Its length must be at most 255 bytes. + Context string + + // Scheme is an identifier for choosing a signature scheme. + Scheme SchemeID +} + +// SchemeID is an identifier for each signature scheme. +type SchemeID uint + +const ( + ED448 SchemeID = iota + ED448Ph +) + +// PublicKey is the type of Ed448 public keys. +type PublicKey []byte + +// Equal reports whether pub and x have the same value.
+func (pub PublicKey) Equal(x crypto.PublicKey) bool { + xx, ok := x.(PublicKey) + return ok && bytes.Equal(pub, xx) +} + +// PrivateKey is the type of Ed448 private keys. It implements crypto.Signer. +type PrivateKey []byte + +// Equal reports whether priv and x have the same value. +func (priv PrivateKey) Equal(x crypto.PrivateKey) bool { + xx, ok := x.(PrivateKey) + return ok && subtle.ConstantTimeCompare(priv, xx) == 1 +} + +// Public returns the PublicKey corresponding to priv. +func (priv PrivateKey) Public() crypto.PublicKey { + publicKey := make([]byte, PublicKeySize) + copy(publicKey, priv[SeedSize:]) + return PublicKey(publicKey) +} + +// Seed returns the private key seed corresponding to priv. It is provided for +// interoperability with RFC 8032. RFC 8032's private keys correspond to seeds +// in this package. +func (priv PrivateKey) Seed() []byte { + seed := make([]byte, SeedSize) + copy(seed, priv[:SeedSize]) + return seed +} + +func (priv PrivateKey) Scheme() sign.Scheme { return sch } + +func (pub PublicKey) Scheme() sign.Scheme { return sch } + +func (priv PrivateKey) MarshalBinary() (data []byte, err error) { + privateKey := make(PrivateKey, PrivateKeySize) + copy(privateKey, priv) + return privateKey, nil +} + +func (pub PublicKey) MarshalBinary() (data []byte, err error) { + publicKey := make(PublicKey, PublicKeySize) + copy(publicKey, pub) + return publicKey, nil +} + +// Sign creates a signature of a message given a key pair. +// This function supports both signature variants defined in RFC-8032, +// namely Ed448 (or pure EdDSA) and Ed448Ph. +// The opts.HashFunc() must return zero to specify the Ed448 variant. This can +// be achieved by passing crypto.Hash(0) as the value for opts. +// Use a SignerOptions struct to indicate that the Ed448Ph variant +// should be used. +// The struct can also be optionally used to pass a context string for signing.
+func (priv PrivateKey) Sign( + rand io.Reader, + message []byte, + opts crypto.SignerOpts, +) (signature []byte, err error) { + var ctx string + var scheme SchemeID + + if o, ok := opts.(SignerOptions); ok { + ctx = o.Context + scheme = o.Scheme + } + + switch true { + case scheme == ED448 && opts.HashFunc() == crypto.Hash(0): + return Sign(priv, message, ctx), nil + case scheme == ED448Ph && opts.HashFunc() == crypto.Hash(0): + return SignPh(priv, message, ctx), nil + default: + return nil, errors.New("ed448: bad hash algorithm") + } +} + +// GenerateKey generates a public/private key pair using entropy from rand. +// If rand is nil, crypto/rand.Reader will be used. +func GenerateKey(rand io.Reader) (PublicKey, PrivateKey, error) { + if rand == nil { + rand = cryptoRand.Reader + } + + seed := make(PrivateKey, SeedSize) + if _, err := io.ReadFull(rand, seed); err != nil { + return nil, nil, err + } + + privateKey := NewKeyFromSeed(seed) + publicKey := make([]byte, PublicKeySize) + copy(publicKey, privateKey[SeedSize:]) + + return publicKey, privateKey, nil +} + +// NewKeyFromSeed calculates a private key from a seed. It will panic if +// len(seed) is not SeedSize. This function is provided for interoperability +// with RFC 8032. RFC 8032's private keys correspond to seeds in this +// package. 
+func NewKeyFromSeed(seed []byte) PrivateKey { + privateKey := make([]byte, PrivateKeySize) + newKeyFromSeed(privateKey, seed) + return privateKey +} + +func newKeyFromSeed(privateKey, seed []byte) { + if l := len(seed); l != SeedSize { + panic("ed448: bad seed length: " + strconv.Itoa(l)) + } + + var h [hashSize]byte + H := sha3.NewShake256() + _, _ = H.Write(seed) + _, _ = H.Read(h[:]) + s := &goldilocks.Scalar{} + deriveSecretScalar(s, h[:paramB]) + + copy(privateKey[:SeedSize], seed) + _ = goldilocks.Curve{}.ScalarBaseMult(s).ToBytes(privateKey[SeedSize:]) +} + +func signAll(signature []byte, privateKey PrivateKey, message, ctx []byte, preHash bool) { + if len(ctx) > ContextMaxSize { + panic(fmt.Errorf("ed448: bad context length: " + strconv.Itoa(len(ctx)))) + } + + H := sha3.NewShake256() + var PHM []byte + + if preHash { + var h [64]byte + _, _ = H.Write(message) + _, _ = H.Read(h[:]) + PHM = h[:] + H.Reset() + } else { + PHM = message + } + + // 1. Hash the 57-byte private key using SHAKE256(x, 114). + var h [hashSize]byte + _, _ = H.Write(privateKey[:SeedSize]) + _, _ = H.Read(h[:]) + s := &goldilocks.Scalar{} + deriveSecretScalar(s, h[:paramB]) + prefix := h[paramB:] + + // 2. Compute SHAKE256(dom4(F, C) || prefix || PH(M), 114). + var rPM [hashSize]byte + H.Reset() + + writeDom(&H, ctx, preHash) + + _, _ = H.Write(prefix) + _, _ = H.Write(PHM) + _, _ = H.Read(rPM[:]) + + // 3. Compute the point [r]B. + r := &goldilocks.Scalar{} + r.FromBytes(rPM[:]) + R := (&[paramB]byte{})[:] + if err := (goldilocks.Curve{}.ScalarBaseMult(r).ToBytes(R)); err != nil { + panic(err) + } + // 4. Compute SHAKE256(dom4(F, C) || R || A || PH(M), 114) + var hRAM [hashSize]byte + H.Reset() + + writeDom(&H, ctx, preHash) + + _, _ = H.Write(R) + _, _ = H.Write(privateKey[SeedSize:]) + _, _ = H.Write(PHM) + _, _ = H.Read(hRAM[:]) + + // 5. Compute S = (r + k * s) mod order. 
+ k := &goldilocks.Scalar{} + k.FromBytes(hRAM[:]) + S := &goldilocks.Scalar{} + S.Mul(k, s) + S.Add(S, r) + + // 6. The signature is the concatenation of R and S. + copy(signature[:paramB], R[:]) + copy(signature[paramB:], S[:]) +} + +// Sign signs the message with privateKey and returns a signature. +// This function supports the signature variant defined in RFC-8032: Ed448, +// also known as the pure version of EdDSA. +// It will panic if len(privateKey) is not PrivateKeySize. +func Sign(priv PrivateKey, message []byte, ctx string) []byte { + signature := make([]byte, SignatureSize) + signAll(signature, priv, message, []byte(ctx), false) + return signature +} + +// SignPh creates a signature of a message given a keypair. +// This function supports the signature variant defined in RFC-8032: Ed448ph, +// meaning it internally hashes the message using SHAKE-256. +// An optional context string of at most 255 bytes may be passed. It can be +// empty. +func SignPh(priv PrivateKey, message []byte, ctx string) []byte { + signature := make([]byte, SignatureSize) + signAll(signature, priv, message, []byte(ctx), true) + return signature +} + +func verify(public PublicKey, message, signature, ctx []byte, preHash bool) bool { + if len(public) != PublicKeySize || + len(signature) != SignatureSize || + len(ctx) > ContextMaxSize || + !isLessThanOrder(signature[paramB:]) { + return false + } + + P, err := goldilocks.FromBytes(public) + if err != nil { + return false + } + + H := sha3.NewShake256() + var PHM []byte + + if preHash { + var h [64]byte + _, _ = H.Write(message) + _, _ = H.Read(h[:]) + PHM = h[:] + H.Reset() + } else { + PHM = message + } + + var hRAM [hashSize]byte + R := signature[:paramB] + + writeDom(&H, ctx, preHash) + + _, _ = H.Write(R) + _, _ = H.Write(public) + _, _ = H.Write(PHM) + _, _ = H.Read(hRAM[:]) + + k := &goldilocks.Scalar{} + k.FromBytes(hRAM[:]) + S := &goldilocks.Scalar{} + S.FromBytes(signature[paramB:]) + + encR =
(&[paramB]byte{})[:] + P.Neg() + _ = goldilocks.Curve{}.CombinedMult(S, k, P).ToBytes(encR) + return bytes.Equal(R, encR) +} + +// VerifyAny returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded. +// This function supports both signature variants defined in RFC-8032, +// namely Ed448 (or pure EdDSA) and Ed448Ph. +// The opts.HashFunc() must return zero; this can be achieved by passing +// crypto.Hash(0) as the value for opts. +// Use a SignerOptions struct to pass a context string for signing. +func VerifyAny(public PublicKey, message, signature []byte, opts crypto.SignerOpts) bool { + var ctx string + var scheme SchemeID + if o, ok := opts.(SignerOptions); ok { + ctx = o.Context + scheme = o.Scheme + } + + switch true { + case scheme == ED448 && opts.HashFunc() == crypto.Hash(0): + return Verify(public, message, signature, ctx) + case scheme == ED448Ph && opts.HashFunc() == crypto.Hash(0): + return VerifyPh(public, message, signature, ctx) + default: + return false + } +} + +// Verify returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded. +// This function supports the signature variant defined in RFC-8032: Ed448, +// also known as the pure version of EdDSA. +func Verify(public PublicKey, message, signature []byte, ctx string) bool { + return verify(public, message, signature, []byte(ctx), false) +} + +// VerifyPh returns true if the signature is valid. Failure cases are invalid +// signature, or when the public key cannot be decoded. +// This function supports the signature variant defined in RFC-8032: Ed448ph, +// meaning it internally hashes the message using SHAKE-256. +// An optional context string of at most 255 bytes may be passed. It can be +// empty.
+func VerifyPh(public PublicKey, message, signature []byte, ctx string) bool { + return verify(public, message, signature, []byte(ctx), true) +} + +func deriveSecretScalar(s *goldilocks.Scalar, h []byte) { + h[0] &= 0xFC // The two least significant bits of the first octet are cleared, + h[paramB-1] = 0x00 // all eight bits of the last octet are cleared, and + h[paramB-2] |= 0x80 // the highest bit of the second to last octet is set. + s.FromBytes(h[:paramB]) +} + +// isLessThanOrder returns true if 0 <= x < order and if the last byte of x is zero. +func isLessThanOrder(x []byte) bool { + order := goldilocks.Curve{}.Order() + i := len(order) - 1 + for i > 0 && x[i] == order[i] { + i-- + } + return x[paramB-1] == 0 && x[i] < order[i] +} + +func writeDom(h io.Writer, ctx []byte, preHash bool) { + dom4 := "SigEd448" + _, _ = h.Write([]byte(dom4)) + + if preHash { + _, _ = h.Write([]byte{byte(0x01), byte(len(ctx))}) + } else { + _, _ = h.Write([]byte{byte(0x00), byte(len(ctx))}) + } + _, _ = h.Write(ctx) +} diff --git a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed448/signapi.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed448/signapi.go new file mode 100644 index 00000000000..22da8bc0a57 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/ed448/signapi.go @@ -0,0 +1,87 @@ +package ed448 + +import ( + "crypto/rand" + "encoding/asn1" + + "github.com/cloudflare/circl/sign" +) + +var sch sign.Scheme = &scheme{} + +// Scheme returns a signature interface.
+func Scheme() sign.Scheme { return sch } + +type scheme struct{} + +func (*scheme) Name() string { return "Ed448" } +func (*scheme) PublicKeySize() int { return PublicKeySize } +func (*scheme) PrivateKeySize() int { return PrivateKeySize } +func (*scheme) SignatureSize() int { return SignatureSize } +func (*scheme) SeedSize() int { return SeedSize } +func (*scheme) TLSIdentifier() uint { return 0x0808 } +func (*scheme) SupportsContext() bool { return true } +func (*scheme) Oid() asn1.ObjectIdentifier { + return asn1.ObjectIdentifier{1, 3, 101, 113} +} + +func (*scheme) GenerateKey() (sign.PublicKey, sign.PrivateKey, error) { + return GenerateKey(rand.Reader) +} + +func (*scheme) Sign( + sk sign.PrivateKey, + message []byte, + opts *sign.SignatureOpts, +) []byte { + priv, ok := sk.(PrivateKey) + if !ok { + panic(sign.ErrTypeMismatch) + } + ctx := "" + if opts != nil { + ctx = opts.Context + } + return Sign(priv, message, ctx) +} + +func (*scheme) Verify( + pk sign.PublicKey, + message, signature []byte, + opts *sign.SignatureOpts, +) bool { + pub, ok := pk.(PublicKey) + if !ok { + panic(sign.ErrTypeMismatch) + } + ctx := "" + if opts != nil { + ctx = opts.Context + } + return Verify(pub, message, signature, ctx) +} + +func (*scheme) DeriveKey(seed []byte) (sign.PublicKey, sign.PrivateKey) { + privateKey := NewKeyFromSeed(seed) + publicKey := make(PublicKey, PublicKeySize) + copy(publicKey, privateKey[SeedSize:]) + return publicKey, privateKey +} + +func (*scheme) UnmarshalBinaryPublicKey(buf []byte) (sign.PublicKey, error) { + if len(buf) < PublicKeySize { + return nil, sign.ErrPubKeySize + } + pub := make(PublicKey, PublicKeySize) + copy(pub, buf[:PublicKeySize]) + return pub, nil +} + +func (*scheme) UnmarshalBinaryPrivateKey(buf []byte) (sign.PrivateKey, error) { + if len(buf) < PrivateKeySize { + return nil, sign.ErrPrivKeySize + } + priv := make(PrivateKey, PrivateKeySize) + copy(priv, buf[:PrivateKeySize]) + return priv, nil +} diff --git 
a/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/sign.go b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/sign.go new file mode 100644 index 00000000000..13b20fa4b04 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/cloudflare/circl/sign/sign.go @@ -0,0 +1,110 @@ +// Package sign provides unified interfaces for signature schemes. +// +// A register of schemes is available in the package +// +// github.com/cloudflare/circl/sign/schemes +package sign + +import ( + "crypto" + "encoding" + "errors" +) + +type SignatureOpts struct { + // If non-empty, includes the given context in the signature if supported + // and will cause an error during signing otherwise. + Context string +} + +// A public key is used to verify a signature set by the corresponding private +// key. +type PublicKey interface { + // Returns the signature scheme for this public key. + Scheme() Scheme + Equal(crypto.PublicKey) bool + encoding.BinaryMarshaler + crypto.PublicKey +} + +// A private key allows one to create signatures. +type PrivateKey interface { + // Returns the signature scheme for this private key. + Scheme() Scheme + Equal(crypto.PrivateKey) bool + // For compatibility with Go standard library + crypto.Signer + crypto.PrivateKey + encoding.BinaryMarshaler +} + +// A Scheme represents a specific instance of a signature scheme. +type Scheme interface { + // Name of the scheme. + Name() string + + // GenerateKey creates a new key-pair. + GenerateKey() (PublicKey, PrivateKey, error) + + // Creates a signature using the PrivateKey on the given message and + // returns the signature. opts are additional options which can be nil. + // + // Panics if key is nil or wrong type or opts context is not supported. + Sign(sk PrivateKey, message []byte, opts *SignatureOpts) []byte + + // Checks whether the given signature is a valid signature set by + // the private key corresponding to the given public key on the + // given message. 
opts are additional options which can be nil. + // + // Panics if key is nil or wrong type or opts context is not supported. + Verify(pk PublicKey, message []byte, signature []byte, opts *SignatureOpts) bool + + // Deterministically derives a keypair from a seed. If you're unsure, + // you're better off using GenerateKey(). + // + // Panics if seed is not of length SeedSize(). + DeriveKey(seed []byte) (PublicKey, PrivateKey) + + // Unmarshals a PublicKey from the provided buffer. + UnmarshalBinaryPublicKey([]byte) (PublicKey, error) + + // Unmarshals a PrivateKey from the provided buffer. + UnmarshalBinaryPrivateKey([]byte) (PrivateKey, error) + + // Size of binary marshalled public keys. + PublicKeySize() int + + // Size of binary marshalled private keys. + PrivateKeySize() int + + // Size of signatures. + SignatureSize() int + + // Size of seeds. + SeedSize() int + + // Returns whether contexts are supported. + SupportsContext() bool +} + +var ( + // ErrTypeMismatch is the error used if types of, for instance, private + // and public keys don't match. + ErrTypeMismatch = errors.New("types mismatch") + + // ErrSeedSize is the error used if the provided seed is of the wrong + // size. + ErrSeedSize = errors.New("wrong seed size") + + // ErrPubKeySize is the error used if the provided public key is of + // the wrong size. + ErrPubKeySize = errors.New("wrong size for public key") + + // ErrPrivKeySize is the error used if the provided private key is of + // the wrong size. + ErrPrivKeySize = errors.New("wrong size for private key") + + // ErrContextNotSupported is the error used if a context is not + // supported.
+ ErrContextNotSupported = errors.New("context not supported") +) diff --git a/.ci/providerlint/vendor/github.com/golang/protobuf/jsonpb/decode.go b/.ci/providerlint/vendor/github.com/golang/protobuf/jsonpb/decode.go index 60e82caa9a2..6c16c255ffb 100644 --- a/.ci/providerlint/vendor/github.com/golang/protobuf/jsonpb/decode.go +++ b/.ci/providerlint/vendor/github.com/golang/protobuf/jsonpb/decode.go @@ -386,8 +386,14 @@ func (u *Unmarshaler) unmarshalMessage(m protoreflect.Message, in []byte) error } func isSingularWellKnownValue(fd protoreflect.FieldDescriptor) bool { + if fd.Cardinality() == protoreflect.Repeated { + return false + } if md := fd.Message(); md != nil { - return md.FullName() == "google.protobuf.Value" && fd.Cardinality() != protoreflect.Repeated + return md.FullName() == "google.protobuf.Value" + } + if ed := fd.Enum(); ed != nil { + return ed.FullName() == "google.protobuf.NullValue" } return false } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/LICENSE b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/LICENSE index abaf1e45f2a..9938fb50ee1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/LICENSE +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/LICENSE @@ -1,6 +1,4 @@ -MIT License - -Copyright (c) 2017 HashiCorp +Copyright (c) 2017 HashiCorp, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_unix.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_unix.go index 99cc176a416..d00816b38fa 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_unix.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_unix.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MIT + //go:build !windows // +build !windows @@ -7,23 +10,35 @@ import ( "github.com/mattn/go-isatty" ) +// hasFD is used to check if the writer has an Fd value to check +// if it's a terminal. +type hasFD interface { + Fd() uintptr +} + // setColorization will mutate the values of this logger // to appropriately configure colorization options. It provides // a wrapper to the output stream on Windows systems. func (l *intLogger) setColorization(opts *LoggerOptions) { - switch opts.Color { - case ColorOff: - fallthrough - case ForceColor: + if opts.Color != AutoColor { return - case AutoColor: - fi := l.checkWriterIsFile() - isUnixTerm := isatty.IsTerminal(fi.Fd()) - isCygwinTerm := isatty.IsCygwinTerminal(fi.Fd()) - isTerm := isUnixTerm || isCygwinTerm - if !isTerm { + } + + if sc, ok := l.writer.w.(SupportsColor); ok { + if !sc.SupportsColor() { l.headerColor = ColorOff l.writer.color = ColorOff } + return + } + + fi, ok := l.writer.w.(hasFD) + if !ok { + return + } + + if !isatty.IsTerminal(fi.Fd()) { + l.headerColor = ColorOff + l.writer.color = ColorOff } } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_windows.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_windows.go index 26f8cef8d12..2c3fb9ea6f0 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_windows.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/colorize_windows.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + //go:build windows // +build windows @@ -7,32 +10,32 @@ import ( "os" colorable "github.com/mattn/go-colorable" - "github.com/mattn/go-isatty" ) // setColorization will mutate the values of this logger // to appropriately configure colorization options. It provides // a wrapper to the output stream on Windows systems. 
func (l *intLogger) setColorization(opts *LoggerOptions) { - switch opts.Color { - case ColorOff: + if opts.Color == ColorOff { + return + } + + fi, ok := l.writer.w.(*os.File) + if !ok { + l.writer.color = ColorOff + l.headerColor = ColorOff return - case ForceColor: - fi := l.checkWriterIsFile() - l.writer.w = colorable.NewColorable(fi) - case AutoColor: - fi := l.checkWriterIsFile() - isUnixTerm := isatty.IsTerminal(os.Stdout.Fd()) - isCygwinTerm := isatty.IsCygwinTerminal(os.Stdout.Fd()) - isTerm := isUnixTerm || isCygwinTerm - if !isTerm { - l.writer.color = ColorOff - l.headerColor = ColorOff - return - } - - if l.headerColor == ColorOff { - l.writer.w = colorable.NewColorable(fi) - } + } + + cfi := colorable.NewColorable(fi) + + // NewColorable detects if color is possible and if it's not, then it + // returns the original value. So we can test if we got the original + // value back to know if color is possible. + if cfi == fi { + l.writer.color = ColorOff + l.headerColor = ColorOff + } else { + l.writer.w = cfi } } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/context.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/context.go index 7815f501942..eb5aba556bb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/context.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/exclude.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/exclude.go index cfd4307a803..4b73ba553da 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/exclude.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/exclude.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/global.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/global.go index 48ff1f3a4e9..a7403f593a9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/global.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/global.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/interceptlogger.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/interceptlogger.go index ff42f1bfc1d..e9b1c188530 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/interceptlogger.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/interceptlogger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/intlogger.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/intlogger.go index 89d26c9b01f..b45064acf1a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/intlogger.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/intlogger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( @@ -8,7 +11,6 @@ import ( "fmt" "io" "log" - "os" "reflect" "runtime" "sort" @@ -86,6 +88,8 @@ type intLogger struct { // create subloggers with their own level setting independentLevels bool + + subloggerHook func(sub Logger) Logger } // New returns a configured logger. 
@@ -152,6 +156,7 @@ func newLogger(opts *LoggerOptions) *intLogger { independentLevels: opts.IndependentLevels, headerColor: headerColor, fieldColor: fieldColor, + subloggerHook: opts.SubloggerHook, } if opts.IncludeLocation { l.callerOffset = offsetIntLogger + opts.AdditionalLocationOffset @@ -167,6 +172,10 @@ func newLogger(opts *LoggerOptions) *intLogger { l.timeFormat = opts.TimeFormat } + if l.subloggerHook == nil { + l.subloggerHook = identityHook + } + l.setColorization(opts) atomic.StoreInt32(l.level, int32(level)) @@ -174,6 +183,10 @@ func newLogger(opts *LoggerOptions) *intLogger { return l } +func identityHook(logger Logger) Logger { + return logger +} + // offsetIntLogger is the stack frame offset in the call stack for the caller to // one of the Warn, Info, Log, etc methods. const offsetIntLogger = 3 @@ -775,7 +788,7 @@ func (l *intLogger) With(args ...interface{}) Logger { sl.implied = append(sl.implied, MissingKey, extra) } - return sl + return l.subloggerHook(sl) } // Create a new sub-Logger with a name descending from the current name. @@ -789,7 +802,7 @@ func (l *intLogger) Named(name string) Logger { sl.name = name } - return sl + return l.subloggerHook(sl) } // Create a new sub-Logger with an explicit name. This ignores the current @@ -800,7 +813,7 @@ func (l *intLogger) ResetNamed(name string) Logger { sl.name = name - return sl + return l.subloggerHook(sl) } func (l *intLogger) ResetOutput(opts *LoggerOptions) error { @@ -876,16 +889,6 @@ func (l *intLogger) StandardWriter(opts *StandardLoggerOptions) io.Writer { } } -// checks if the underlying io.Writer is a file, and -// panics if not. For use by colorization.
-func (l *intLogger) checkWriterIsFile() *os.File { - fi, ok := l.writer.w.(*os.File) - if !ok { - panic("Cannot enable coloring of non-file Writers") - } - return fi -} - // Accept implements the SinkAdapter interface func (i *intLogger) Accept(name string, level Level, msg string, args ...interface{}) { i.log(name, level, msg, args...) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/logger.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/logger.go index 3cdb2837d79..947ac0c9afc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/logger.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/logger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( @@ -89,6 +92,13 @@ const ( ForceColor ) +// SupportsColor is an optional interface that can be implemented by the output +// value. If implemented and SupportsColor() returns true, then AutoColor will +// enable colorization. +type SupportsColor interface { + SupportsColor() bool +} + // LevelFromString returns a Level type for the named log level, or "NoLevel" if // the level string is invalid. This facilitates setting the log level via // config or environment variable by name in a predictable way. @@ -292,6 +302,13 @@ type LoggerOptions struct { // logger will not affect any subloggers, and SetLevel on any subloggers // will not affect the parent or sibling loggers. IndependentLevels bool + + // SubloggerHook registers a function that is called when a sublogger via + // Named, With, or ResetNamed is created. If defined, the function is passed + // the newly created Logger and the returned Logger is returned from the + // original function. This option allows customization via interception and + // wrapping of Logger instances. 
+ SubloggerHook func(sub Logger) Logger } // InterceptLogger describes the interface for using a logger diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/nulllogger.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/nulllogger.go index 55e89dd31ca..d43da809eb9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/nulllogger.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/nulllogger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/stdlog.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/stdlog.go index 641f20ccbcc..03739b61fa7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/stdlog.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/stdlog.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/writer.go b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/writer.go index 421a1f06c0b..4ee219bf0ce 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/writer.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-hclog/writer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MIT + package hclog import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/CHANGELOG.md b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/CHANGELOG.md index d40ad613615..dbb20c96190 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/CHANGELOG.md +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/CHANGELOG.md @@ -1,3 +1,19 @@ +## v1.4.10 + +BUG FIXES: + +* additional notes: ensure to close files [[GH-241](https://github.com/hashicorp/go-plugin/pull/241)] + +ENHANCEMENTS: + +* deps: Remove direct dependency on golang.org/x/net [[GH-240](https://github.com/hashicorp/go-plugin/pull/240)] + +## v1.4.9 + +ENHANCEMENTS: + +* client: Remove log warning introduced in 1.4.5 when SecureConfig is nil. [[GH-238](https://github.com/hashicorp/go-plugin/pull/238)] + ## v1.4.8 BUG FIXES: @@ -33,5 +49,3 @@ BUG FIXES: * Bidirectional communication: fix bidirectional communication when AutoMTLS is enabled [[GH-193](https://github.com/hashicorp/go-plugin/pull/193)] * RPC: Trim a spurious log message for plugins using RPC [[GH-186](https://github.com/hashicorp/go-plugin/pull/186)] - - diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/README.md b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/README.md index 39391f24fe4..50baee06e15 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/README.md +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/README.md @@ -4,8 +4,9 @@ that has been in use by HashiCorp tooling for over 4 years. While initially created for [Packer](https://www.packer.io), it is additionally in use by [Terraform](https://www.terraform.io), [Nomad](https://www.nomadproject.io), -[Vault](https://www.vaultproject.io), and -[Boundary](https://www.boundaryproject.io). +[Vault](https://www.vaultproject.io), +[Boundary](https://www.boundaryproject.io), +and [Waypoint](https://www.waypointproject.io).
While the plugin system is over RPC, it is currently only designed to work over a local [reliable] network. Plugins over a real network are not supported diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/client.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/client.go index d0baf7e8d76..a6a9ffa263a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/client.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/client.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( @@ -565,9 +568,7 @@ func (c *Client) Start() (addr net.Addr, err error) { return nil, err } - if c.config.SecureConfig == nil { - c.logger.Warn("plugin configured with a nil SecureConfig") - } else { + if c.config.SecureConfig != nil { if ok, err := c.config.SecureConfig.Check(cmd.Path); err != nil { return nil, fmt.Errorf("error verifying checksum: %s", err) } else if !ok { diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/discover.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/discover.go index d22c566ed50..c5b96242b1a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/discover.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/discover.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/error.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/error.go index 22a7baa6a0d..e62a21913f4 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/error.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/error.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin // This is a type that wraps error types so that they can be messaged diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_broker.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_broker.go index daf142d1709..9bf5677684f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_broker.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_broker.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_client.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_client.go index 842903c922b..6454d426461 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_client.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_client.go @@ -1,6 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( + "context" "crypto/tls" "fmt" "math" @@ -8,7 +12,6 @@ import ( "time" "github.com/hashicorp/go-plugin/internal/plugin" - "golang.org/x/net/context" "google.golang.org/grpc" "google.golang.org/grpc/credentials" "google.golang.org/grpc/health/grpc_health_v1" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_controller.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_controller.go index 1a8a8e70ea4..2085356cd34 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_controller.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_controller.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_server.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_server.go index 54b061cc365..7203a2cf5d3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_server.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_server.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go index a582181505f..ae06c116313 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/grpc_stdio.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/gen.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/gen.go index fb9d415254f..a3b5fb124e0 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/gen.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/gen.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate protoc -I ./ ./grpc_broker.proto ./grpc_controller.proto ./grpc_stdio.proto --go_out=plugins=grpc:. 
package plugin diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.pb.go index 6bf103859f8..303b63e43b1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.pb.go @@ -8,7 +8,7 @@ import fmt "fmt" import math "math" import ( - context "golang.org/x/net/context" + context "context" grpc "google.golang.org/grpc" ) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.proto b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.proto index aa3df4630a7..038423ded7a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.proto +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_broker.proto @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + syntax = "proto3"; package plugin; option go_package = "plugin"; diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.pb.go index 3e39da95a89..982fca0a574 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.pb.go @@ -8,7 +8,7 @@ import fmt "fmt" import math "math" import ( - context "golang.org/x/net/context" + context "context" grpc "google.golang.org/grpc" ) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.proto b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.proto index 345d0a1c1f2..3157eb885de 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.proto +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_controller.proto @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + syntax = "proto3"; package plugin; option go_package = "plugin"; diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.pb.go index c8f94921b46..bdef71b8aa0 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.pb.go @@ -9,7 +9,7 @@ import math "math" import empty "github.com/golang/protobuf/ptypes/empty" import ( - context "golang.org/x/net/context" + context "context" grpc "google.golang.org/grpc" ) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.proto b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.proto index ce1a1223035..1c0d1d0526a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.proto +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/internal/plugin/grpc_stdio.proto @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + syntax = "proto3"; package plugin; option go_package = "plugin"; diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/log_entry.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/log_entry.go index fb2ef930caa..ab963d56b54 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/log_entry.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/log_entry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mtls.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mtls.go index 88955245877..09ecafaf45a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mtls.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mtls.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mux_broker.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mux_broker.go index 01c45ad7c68..4eb1208fbb7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mux_broker.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/mux_broker.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_unix.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_unix.go index dae1c411dec..9734557529c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_unix.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_unix.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build !windows // +build !windows @@ -50,10 +53,13 @@ func additionalNotesAboutCommand(path string) string { } if elfFile, err := elf.Open(path); err == nil { + defer elfFile.Close() notes += fmt.Sprintf(" ELF architecture: %s (current architecture: %s)\n", elfFile.Machine, runtime.GOARCH) } else if machoFile, err := macho.Open(path); err == nil { + defer machoFile.Close() notes += fmt.Sprintf(" MachO architecture: %s (current architecture: %s)\n", machoFile.Cpu, runtime.GOARCH) } else if peFile, err := pe.Open(path); err == nil { + defer peFile.Close() machine, ok := peTypes[peFile.Machine] if !ok { machine = "unknown" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_windows.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_windows.go index 900b93319c1..7afcaf3fe45 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_windows.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/notes_windows.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build windows // +build windows @@ -26,10 +29,13 @@ func additionalNotesAboutCommand(path string) string { notes += fmt.Sprintf(" Mode: %s\n", stat.Mode()) if elfFile, err := elf.Open(path); err == nil { + defer elfFile.Close() notes += fmt.Sprintf(" ELF architecture: %s (current architecture: %s)\n", elfFile.Machine, runtime.GOARCH) } else if machoFile, err := macho.Open(path); err == nil { + defer machoFile.Close() notes += fmt.Sprintf(" MachO architecture: %s (current architecture: %s)\n", machoFile.Cpu, runtime.GOARCH) } else if peFile, err := pe.Open(path); err == nil { + defer peFile.Close() machine, ok := peTypes[peFile.Machine] if !ok { machine = "unknown" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/plugin.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/plugin.go index 79d9674633a..184749b96ef 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/plugin.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/plugin.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // The plugin package exposes functions and helpers for communicating to // plugins which are implemented as standalone binary applications. // diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process.go index 88c999a580d..68b028c6eb1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_posix.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_posix.go index 185957f8d11..b73a360758a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_posix.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_posix.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build !windows // +build !windows diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_windows.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_windows.go index 0eaa7705d22..ffa9b9e0299 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_windows.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/process_windows.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/protocol.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/protocol.go index 0cfc19e52d6..e4b7be38378 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/protocol.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/protocol.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_client.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_client.go index f30a4b1d387..142454df80d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_client.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_client.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_server.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_server.go index 064809d2918..cec0a3d93a2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_server.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/rpc_server.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server.go index e134999103f..3f4a017da5e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server_mux.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server_mux.go index 033079ea0fc..6b14b0c291d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server_mux.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/server_mux.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/stream.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/stream.go index 1d547aaaab3..a2348642d86 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/stream.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/testing.go b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/testing.go index e36f2eb2b7c..ffe6fa46823 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/testing.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/go-plugin/testing.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.copywrite.hcl b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.copywrite.hcl new file mode 100644 index 00000000000..45ec82c768d --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.copywrite.hcl @@ -0,0 +1,7 @@ +schema_version = 1 + +project { + license = "MPL-2.0" + copyright_year = 2020 + header_ignore = [] +} diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.go-version b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.go-version index 83d5e73f00e..5fb5a6b4f54 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.go-version +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/.go-version @@ -1 +1 @@ -1.19.5 +1.20 diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/README.md b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/README.md index eb287ff0fc7..d3012d75843 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/README.md +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/README.md @@ -44,13 +44,13 @@ Each comes with different trade-offs described below. 
- **Cons:** - Installation may consume some bandwith, disk space and a little time - Potentially less stable builds (see `checkpoint` below) - - `checkpoint.{LatestVersion}` - Downloads, verifies & installs any known product available in HashiCorp Checkpoint + - `checkpoint.LatestVersion` - Downloads, verifies & installs any known product available in HashiCorp Checkpoint - **Pros:** - Checkpoint typically contains only product versions considered stable - **Cons:** - Installation may consume some bandwith, disk space and a little time - Currently doesn't allow installation of a old versions (see `releases` above) - - `build.{GitRevision}` - Clones raw source code and builds the product from it + - `build.GitRevision` - Clones raw source code and builds the product from it - **Pros:** - Useful for catching bugs and incompatibilities as early as possible (prior to product release). - **Cons:** @@ -61,73 +61,59 @@ Each comes with different trade-offs described below. ## Example Usage -### Install single version +See examples at https://pkg.go.dev/github.com/hashicorp/hc-install#example-Installer. -```go -TODO -``` +## CLI + +In addition to the Go library, which is the intended primary use case of `hc-install`, we also distribute CLI. + +The CLI comes with some trade-offs: -### Find or install single version + - more limited interface compared to the flexible Go API (installs specific versions of products via `releases.ExactVersion`) + - minimal environment pre-requisites (no need to compile Go code) + - see ["hc-install is not a package manager"](https://github.com/hashicorp/hc-install#hc-install-is-not-a-package-manager) -```go -i := NewInstaller() +### Installation -v0_14_0 := version.Must(version.NewVersion("0.14.0")) +Given that one of the key roles of the CLI/library is integrity checking, you should choose the installation method which involves the same level of integrity checks, and/or perform these checks yourself. 
`go install` provides only minimal to no integrity checks, depending on exact use. We recommend any of the installation methods documented below. -execPath, err := i.Ensure(context.Background(), []src.Source{ - &fs.ExactVersion{ - Product: product.Terraform, - Version: v0_14_0, - }, - &releases.ExactVersion{ - Product: product.Terraform, - Version: v0_14_0, - }, -}) -if err != nil { - // process err -} +#### Homebrew (macOS / Linux) -// run any tests +[Homebrew](https://brew.sh) -defer i.Remove() ``` +brew install hashicorp/tap/hc-install +``` + +#### Linux + +We support Debian & Ubuntu via apt and RHEL, CentOS, Fedora and Amazon Linux via RPM. -### Install multiple versions +You can follow the instructions in the [Official Packaging Guide](https://www.hashicorp.com/official-packaging-guide) to install the package from the official HashiCorp-maintained repositories. The package name is `hc-install` in all repositories. + +#### Other platforms + +1. [Download for the latest version](https://releases.hashicorp.com/hc-install/) relevant for your operating system and architecture. +2. Verify integrity by comparing the SHA256 checksums which are part of the release (called `hc-install__SHA256SUMS`). +3. Install it by unzipping it and moving it to a directory included in your system's `PATH`. +4. Check that you have installed it correctly via `hc-install --version`. + You should see the latest version printed to your terminal. 
+ +### Usage -```go -TODO ``` +Usage: hc-install install [options] -version -### Install and build multiple versions - -```go -i := NewInstaller() - -vc, _ := version.NewConstraint(">= 0.12") -rv := &releases.Versions{ - Product: product.Terraform, - Constraints: vc, -} - -versions, err := rv.List(context.Background()) -if err != nil { - return err -} -versions = append(versions, &build.GitRevision{Ref: "HEAD"}) - -for _, installableVersion := range versions { - execPath, err := i.Ensure(context.Background(), []src.Source{ - installableVersion, - }) - if err != nil { - return err - } - - // Do some testing here - _ = execPath - - // clean up - os.Remove(execPath) -} + This command installs a HashiCorp product. + Options: + -version [REQUIRED] Version of product to install. + -path Path to directory where the product will be installed. Defaults + to current working directory. +``` +```sh +hc-install install -version 1.3.7 terraform +``` +``` +hc-install: will install terraform@1.3.7 +installed terraform@1.3.7 to /current/working/dir/terraform ``` diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/checkpoint/latest_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/checkpoint/latest_version.go index 2d52b339f97..23c6891e10e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/checkpoint/latest_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/checkpoint/latest_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package checkpoint import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/errors/errors.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/errors/errors.go index 8d4f1d22d87..15d51b60263 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/errors/errors.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/errors/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package errors type skippableErr struct { diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/any_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/any_version.go index fc1f9463403..8071dfcf52c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/any_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/any_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/exact_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/exact_version.go index 018c1fbadcb..c3cc49bfc91 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/exact_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/exact_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs.go index 5adb9c329c2..216df2c2cdc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_unix.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_unix.go index 95c5c11f1d9..eebd98b82c0 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_unix.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_unix.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build !windows // +build !windows diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_windows.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_windows.go index 2a03c7ad2f2..e2e4e73fb19 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_windows.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/fs_windows.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/version.go index 26633b8afc8..39efb52d9fa 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/fs/version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/installer.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/installer.go index 8b773c56db0..6c704eede3d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/installer.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/installer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package install import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/get_go_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/get_go_version.go index 3a929859ac1..858f8ab2974 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/get_go_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/get_go_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package build import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_build.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_build.go index 9581c322cc2..504bf45a305 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_build.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_build.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package build import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_is_installed.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_is_installed.go index 6a81d196b68..00165fff5cb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_is_installed.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/go_is_installed.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package build import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/install_go_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/install_go_version.go index a6610e076f6..385509a08c6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/install_go_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/build/install_go_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package build import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/httpclient/httpclient.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/httpclient/httpclient.go index 0ae6ae4af86..a9503dfdb88 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/httpclient/httpclient.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/httpclient/httpclient.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package httpclient import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/pubkey/pubkey.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/pubkey/pubkey.go index c36fba47166..d06f1045063 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/pubkey/pubkey.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/pubkey/pubkey.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pubkey const ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/checksum_downloader.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/checksum_downloader.go index c012a20aab1..843de8cdfad 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/checksum_downloader.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/checksum_downloader.go @@ -1,7 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package releasesjson import ( - "bytes" "context" "crypto/sha256" "encoding/hex" @@ -12,8 +14,8 @@ import ( "net/url" "strings" + "github.com/ProtonMail/go-crypto/openpgp" "github.com/hashicorp/hc-install/internal/httpclient" - "golang.org/x/crypto/openpgp" ) type ChecksumDownloader struct { @@ -133,43 +135,13 @@ func fileMapFromChecksums(checksums strings.Builder) (ChecksumFileMap, error) { return csMap, nil } -func compareChecksum(logger *log.Logger, r io.Reader, verifiedHashSum HashSum, filename string, expectedSize int64) error { - h := sha256.New() - - // This may take a while depending on network connection as the io.Reader - // is expected to be http.Response.Body which streams the bytes - // on demand over the network. - logger.Printf("copying %q (%d bytes) to calculate checksum", filename, expectedSize) - bytesCopied, err := io.Copy(h, r) - if err != nil { - return err - } - logger.Printf("copied %d bytes of %q", bytesCopied, filename) - - if expectedSize != 0 && bytesCopied != int64(expectedSize) { - return fmt.Errorf("unexpected size (downloaded: %d, expected: %d)", - bytesCopied, expectedSize) - } - - calculatedSum := h.Sum(nil) - if !bytes.Equal(calculatedSum, verifiedHashSum) { - return fmt.Errorf("checksum mismatch (expected %q, calculated %q)", - verifiedHashSum, - hex.EncodeToString(calculatedSum)) - } - - logger.Printf("checksum matches: %q", hex.EncodeToString(calculatedSum)) - - return nil -} - func (cd *ChecksumDownloader) verifySumsSignature(checksums, signature io.Reader) error { el, err := cd.keyEntityList() if err != nil { return err } - _, err = openpgp.CheckDetachedSignature(el, checksums, signature) + _, err = openpgp.CheckDetachedSignature(el, checksums, signature, nil) if err != nil { return fmt.Errorf("unable to verify checksums signature: %w", err) } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/downloader.go 
b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/downloader.go index 8b2097d7615..c5d71b381b1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/downloader.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/downloader.go @@ -1,9 +1,13 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package releasesjson import ( "archive/zip" "bytes" "context" + "crypto/sha256" "fmt" "io" "io/ioutil" @@ -92,8 +96,7 @@ func (d *Downloader) DownloadAndUnpack(ctx context.Context, pv *ProductVersion, defer resp.Body.Close() - var pkgReader io.Reader - pkgReader = resp.Body + pkgReader := resp.Body contentType := resp.Header.Get("content-type") if !contentTypeIsZip(contentType) { @@ -103,19 +106,6 @@ func (d *Downloader) DownloadAndUnpack(ctx context.Context, pv *ProductVersion, expectedSize := resp.ContentLength - if d.VerifyChecksum { - d.Logger.Printf("verifying checksum of %q", pb.Filename) - // provide extra reader to calculate & compare checksum - var buf bytes.Buffer - r := io.TeeReader(resp.Body, &buf) - pkgReader = &buf - - err := compareChecksum(d.Logger, r, verifiedChecksum, pb.Filename, expectedSize) - if err != nil { - return "", err - } - } - pkgFile, err := ioutil.TempFile("", pb.Filename) if err != nil { return "", err @@ -124,19 +114,39 @@ func (d *Downloader) DownloadAndUnpack(ctx context.Context, pv *ProductVersion, pkgFilePath, err := filepath.Abs(pkgFile.Name()) d.Logger.Printf("copying %q (%d bytes) to %s", pb.Filename, expectedSize, pkgFile.Name()) - // Unless the bytes were already downloaded above for checksum verification - // this may take a while depending on network connection as the io.Reader - // is expected to be http.Response.Body which streams the bytes - // on demand over the network. 
- bytesCopied, err := io.Copy(pkgFile, pkgReader) - if err != nil { - return pkgFilePath, err + + var bytesCopied int64 + if d.VerifyChecksum { + d.Logger.Printf("verifying checksum of %q", pb.Filename) + h := sha256.New() + r := io.TeeReader(resp.Body, pkgFile) + + bytesCopied, err = io.Copy(h, r) + if err != nil { + return "", err + } + + calculatedSum := h.Sum(nil) + if !bytes.Equal(calculatedSum, verifiedChecksum) { + return pkgFilePath, fmt.Errorf( + "checksum mismatch (expected: %x, got: %x)", + verifiedChecksum, calculatedSum, + ) + } + } else { + bytesCopied, err = io.Copy(pkgFile, pkgReader) + if err != nil { + return pkgFilePath, err + } } + d.Logger.Printf("copied %d bytes to %s", bytesCopied, pkgFile.Name()) if expectedSize != 0 && bytesCopied != int64(expectedSize) { - return pkgFilePath, fmt.Errorf("unexpected size (downloaded: %d, expected: %d)", - bytesCopied, expectedSize) + return pkgFilePath, fmt.Errorf( + "unexpected size (downloaded: %d, expected: %d)", + bytesCopied, expectedSize, + ) } r, err := zip.OpenReader(pkgFile.Name()) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/product_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/product_version.go index 5eecb013647..99b811a6458 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/product_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/product_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package releasesjson import "github.com/hashicorp/go-version" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/releases.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/releases.go index b100a8d2b57..acbb676131a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/releases.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/releasesjson/releases.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package releasesjson import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/src/src.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/src/src.go index 5b53d92b12f..9cac8a64e31 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/src/src.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/src/src.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package src type InstallSrcSigil struct{} diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/validators/validators.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/validators/validators.go index 5e3e6c81637..8a331c4cf72 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/validators/validators.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/internal/validators/validators.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package validators import "regexp" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/consul.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/consul.go index 72e490e2f30..9789d7c318e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/consul.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/consul.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package product import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/product.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/product.go index 0b5e2a54671..85f2e11bf2c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/product.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/product.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package product import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/terraform.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/terraform.go index d820203a77e..afb6b35fb3a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/terraform.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/terraform.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package product import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/vault.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/vault.go index f00827bfecb..0b25965963a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/vault.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/product/vault.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package product import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/exact_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/exact_version.go index 7e4091ef7e1..03111230535 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/exact_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/exact_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package releases import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/latest_version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/latest_version.go index 36aa5b2b3cb..0dc1ffa4e93 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/latest_version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/latest_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package releases import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/releases.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/releases.go index 2c3f3099280..7bef49ba30f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/releases.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/releases.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package releases import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/versions.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/versions.go index bf0f799faaf..9084c91bf6b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/versions.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/releases/versions.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package releases import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/src/src.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/src/src.go index 11fef7869ff..f7b8265efba 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/src/src.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/src/src.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package src import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/VERSION b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/VERSION index 79a2734bbf3..cb0c939a936 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/VERSION +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/VERSION @@ -1 +1 @@ -0.5.0 \ No newline at end of file +0.5.2 diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/version.go b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/version.go index db367e560ae..facd42949c7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hc-install/version/version.go @@ -1,7 +1,11 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package version import ( _ "embed" + "strings" "github.com/hashicorp/go-version" ) @@ -9,6 +13,9 @@ import ( //go:embed VERSION var rawVersion string +// parsedVersion declared here ensures that invalid versions panic early, on import +var parsedVersion = version.Must(version.NewVersion(strings.TrimSpace(rawVersion))) + // Version returns the version of the library // // Note: This is only exposed as public function/package @@ -16,5 +23,5 @@ var rawVersion string // In general downstream should not implement version-specific // logic and rely on this function to be present in future releases. 
func Version() *version.Version { - return version.Must(version.NewVersion(rawVersion)) + return parsedVersion } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md b/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md index daf3ad90f60..82175271b88 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md +++ b/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md @@ -1,5 +1,15 @@ # HCL Changelog +## v2.17.0 (May 31, 2023) + +### Enhancements + +* HCL now uses a newer version of the upstream `cty` library which has improved treatment of unknown values: it can now track additional optional information that reduces the range of an unknown value, which allows some operations against unknown values to return known or partially-known results. ([#590](https://github.com/hashicorp/hcl/pull/590)) + + **Note:** This change effectively passes on [`cty`'s notion of backward compatibility](https://github.com/zclconf/go-cty/blob/main/COMPATIBILITY.md) whereby unknown values can become "more known" in later releases. In particular, if your caller is using `cty.Value.RawEquals` in its tests against the results of operations with unknown values then you may see those tests begin failing after upgrading, due to the values now being more "refined". + + If so, you should review the refinements with consideration to [the `cty` refinements docs](https://github.com/zclconf/go-cty/blob/7dcbae46a6f247e983efb1fa774d2bb68781a333/docs/refinements.md) and update your expected results to match only if the reported refinements seem correct for the given situation. The `RawEquals` method is intended only for making exact value comparisons in test cases, so main application code should not use it; use `Equals` instead for real logic, which will take refinements into account automatically. 
+ ## v2.16.2 (March 9, 2023) ### Bugs Fixed diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go b/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go index 55fecd4eaf0..5df423fec24 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression.go @@ -696,7 +696,59 @@ func (e *ConditionalExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostic return cty.UnknownVal(resultType), diags } if !condResult.IsKnown() { - return cty.UnknownVal(resultType), diags + // We might be able to offer a refined range for the result based on + // the two possible outcomes. + if trueResult.Type() == cty.Number && falseResult.Type() == cty.Number { + // This case deals with the common case of (predicate ? 1 : 0) and + // significantly decreases the range of the result in that case. + if !(trueResult.IsNull() || falseResult.IsNull()) { + if gt := trueResult.GreaterThan(falseResult); gt.IsKnown() { + b := cty.UnknownVal(cty.Number).Refine() + if gt.True() { + b = b. + NumberRangeLowerBound(falseResult, true). + NumberRangeUpperBound(trueResult, true) + } else { + b = b. + NumberRangeLowerBound(trueResult, true). 
+ NumberRangeUpperBound(falseResult, true) + } + b = b.NotNull() // If neither of the results is null then the result can't be either + return b.NewValue().WithSameMarks(condResult).WithSameMarks(trueResult).WithSameMarks(falseResult), diags + } + } + } + if trueResult.Type().IsCollectionType() && falseResult.Type().IsCollectionType() { + if trueResult.Type().Equals(falseResult.Type()) { + if !(trueResult.IsNull() || falseResult.IsNull()) { + trueLen := trueResult.Length() + falseLen := falseResult.Length() + if gt := trueLen.GreaterThan(falseLen); gt.IsKnown() { + b := cty.UnknownVal(resultType).Refine() + trueLen, _ := trueLen.AsBigFloat().Int64() + falseLen, _ := falseLen.AsBigFloat().Int64() + if gt.True() { + b = b. + CollectionLengthLowerBound(int(falseLen)). + CollectionLengthUpperBound(int(trueLen)) + } else { + b = b. + CollectionLengthLowerBound(int(trueLen)). + CollectionLengthUpperBound(int(falseLen)) + } + b = b.NotNull() // If neither of the results is null then the result can't be either + return b.NewValue().WithSameMarks(condResult).WithSameMarks(trueResult).WithSameMarks(falseResult), diags + } + } + } + } + trueRng := trueResult.Range() + falseRng := falseResult.Range() + ret := cty.UnknownVal(resultType) + if trueRng.DefinitelyNotNull() && falseRng.DefinitelyNotNull() { + ret = ret.RefineNotNull() + } + return ret.WithSameMarks(condResult).WithSameMarks(trueResult).WithSameMarks(falseResult), diags } condResult, err := convert.Convert(condResult, cty.Bool) if err != nil { @@ -1632,11 +1684,15 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { // example, it is valid to use a splat on a single object to retrieve a // list of a single attribute, but we still need to check if that // attribute actually exists. 
- upgradedUnknown = !sourceVal.IsKnown() + if !sourceVal.IsKnown() { + sourceRng := sourceVal.Range() + if sourceRng.CouldBeNull() { + upgradedUnknown = true + } + } sourceVal = cty.TupleVal([]cty.Value{sourceVal}) sourceTy = sourceVal.Type() - } // We'll compute our result type lazily if we need it. In the normal case @@ -1675,7 +1731,20 @@ func (e *SplatExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) { // checking to proceed. ty, tyDiags := resultTy() diags = append(diags, tyDiags...) - return cty.UnknownVal(ty), diags + ret := cty.UnknownVal(ty) + if ty != cty.DynamicPseudoType { + ret = ret.RefineNotNull() + } + if ty.IsListType() && sourceVal.Type().IsCollectionType() { + // We can refine the length of an unknown list result based on + // the source collection's own length. + sourceRng := sourceVal.Range() + ret = ret.Refine(). + CollectionLengthLowerBound(sourceRng.LengthLowerBound()). + CollectionLengthUpperBound(sourceRng.LengthUpperBound()). + NewValue() + } + return ret.WithSameMarks(sourceVal), diags } // Unmark the collection, and save the marks to apply to the returned diff --git a/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_template.go b/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_template.go index 0b5ac19553e..f175fc5f62f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_template.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/hcl/v2/hclsyntax/expression_template.go @@ -38,11 +38,9 @@ func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) if partVal.IsNull() { diags = append(diags, &hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Invalid template interpolation value", - Detail: fmt.Sprintf( - "The expression result is null. Cannot include a null value in a string template.", - ), + Severity: hcl.DiagError, + Summary: "Invalid template interpolation value", + Detail: "The expression result is null. 
Cannot include a null value in a string template.", Subject: part.Range().Ptr(), Context: &e.SrcRange, Expression: part, @@ -83,16 +81,29 @@ func (e *TemplateExpr) Value(ctx *hcl.EvalContext) (cty.Value, hcl.Diagnostics) continue } - buf.WriteString(strVal.AsString()) + // If we're just continuing to validate after we found an unknown value + // then we'll skip appending so that "buf" will contain only the + // known prefix of the result. + if isKnown && !diags.HasErrors() { + buf.WriteString(strVal.AsString()) + } } var ret cty.Value if !isKnown { ret = cty.UnknownVal(cty.String) + if !diags.HasErrors() { // Invalid input means our partial result buffer is suspect + if knownPrefix := buf.String(); knownPrefix != "" { + ret = ret.Refine().StringPrefix(knownPrefix).NewValue() + } + } } else { ret = cty.StringVal(buf.String()) } + // A template rendering result is never null. + ret = ret.RefineNotNull() + // Apply the full set of marks to the returned value return ret.WithMarks(marks), diags } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/.copywrite.hcl b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/.copywrite.hcl new file mode 100644 index 00000000000..ada7d74aa73 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/.copywrite.hcl @@ -0,0 +1,13 @@ +schema_version = 1 + +project { + license = "MPL-2.0" + copyright_year = 2019 + + # (OPTIONAL) A list of globs that should not have copyright/license headers. 
+ # Supports doublestar glob patterns for more flexibility in defining which + # files or folders should be ignored + header_ignore = [ + "testdata/**", + ] +} diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/action.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/action.go index 51c4c8369af..c74f7e68a31 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/action.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/action.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson // Action is a valid action type for a resource change. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/checks.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/checks.go new file mode 100644 index 00000000000..1a4c8cce53d --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/checks.go @@ -0,0 +1,145 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package tfjson + +// CheckKind is a string representation of the type of conditional check +// referenced in a check result. +type CheckKind string + +const ( + // CheckKindResource indicates the check result is from a pre- or + // post-condition on a resource or data source. + CheckKindResource CheckKind = "resource" + + // CheckKindOutputValue indicates the check result is from an output + // post-condition. + CheckKindOutputValue CheckKind = "output_value" + + // CheckKindCheckBlock indicates the check result is from a check block. + CheckKindCheckBlock CheckKind = "check" +) + +// CheckStatus is a string representation of the status of a given conditional +// check. +type CheckStatus string + +const ( + // CheckStatusPass indicates the check passed. + CheckStatusPass CheckStatus = "pass" + + // CheckStatusFail indicates the check failed. + CheckStatusFail CheckStatus = "fail" + + // CheckStatusError indicates the check errored. 
This is distinct from + // CheckStatusFail in that it represents a logical or configuration error + // within the check block that prevented the check from executing, as + // opposed to the check being attempted and evaluated to false. + CheckStatusError CheckStatus = "error" + + // CheckStatusUnknown indicates the result of the check was not known. This + // could be because a value within the check could not be known at plan + // time, or because the overall plan failed for an unrelated reason before + // this check could be executed. + CheckStatusUnknown CheckStatus = "unknown" +) + +// CheckStaticAddress details the address of the object that performed a given +// check. The static address points to the overall resource, as opposed to the +// dynamic address which contains the instance key for any resource that has +// multiple instances. +type CheckStaticAddress struct { + // ToDisplay is a formatted and ready to display representation of the + // address. + ToDisplay string `json:"to_display"` + + // Kind represents the CheckKind of this check. + Kind CheckKind `json:"kind"` + + // Module is the module part of the address. This will be empty for any + // resources in the root module. + Module string `json:"module,omitempty"` + + // Mode is the ResourceMode of the resource that contains this check. This + // field is only set if Kind equals CheckKindResource. + Mode ResourceMode `json:"mode,omitempty"` + + // Type is the resource type for the resource that contains this check. This + // field is only set if Kind equals CheckKindResource. + Type string `json:"type,omitempty"` + + // Name is the name of the resource, check block, or output that contains + // this check. + Name string `json:"name,omitempty"` +} + +// CheckDynamicAddress contains the InstanceKey field for any resources that +// have multiple instances. A complete address can be built by combining the +// CheckStaticAddress with the CheckDynamicAddress.
+type CheckDynamicAddress struct { + // ToDisplay is a formatted and ready to display representation of the + // full address, including the additional information from the relevant + // CheckStaticAddress. + ToDisplay string `json:"to_display"` + + // Module is the module part of the address. This address will include the + // instance key for any module expansions resulting from foreach or count + // arguments. This field will be empty for any resources within the root + // module. + Module string `json:"module,omitempty"` + + // InstanceKey is the instance key for any instances of a given resource. + // + // InstanceKey will be empty if there was no foreach or count argument + // defined on the containing object. + InstanceKey string `json:"instance_key,omitempty"` +} + +// CheckResultStatic is the container for a "checkable object". +// +// A "checkable object" is a resource or data source, an output, or a check +// block. +type CheckResultStatic struct { + // Address is the absolute address of the "checkable object" + Address CheckStaticAddress `json:"address"` + + // Status is the overall status for all the checks within this object. + Status CheckStatus `json:"status"` + + // Instances contains the results for any dynamic objects that belong to this + // static object. For example, any instances created from an object using + // the foreach or count meta arguments. + // + // Check blocks and outputs will only contain a single instance, while + // resources can contain 1 to many. + Instances []CheckResultDynamic `json:"instances,omitempty"` +} + +// CheckResultDynamic describes the check result for a dynamic object that +// results from the expansion of the containing object. +type CheckResultDynamic struct { + // Address is the relative address of this instance given the Address in the + // parent object. + Address CheckDynamicAddress `json:"address"` + + // Status is the overall status for the checks within this dynamic object.
+ Status CheckStatus `json:"status"` + + // Problems describes any additional optional details about this check if + // the check failed. + // + // This will not include the errors resulting from this check block, as they + // will be exposed as diagnostics in the original terraform execution. It + // may contain any failure messages even if the overall status is + // CheckStatusError, however, as the instance could contain multiple checks + // that returned a mix of error and failure statuses. + Problems []CheckResultProblem `json:"problems,omitempty"` +} + +// CheckResultProblem describes one of potentially several problems that led to +// a check being classified as CheckStatusFail. +type CheckResultProblem struct { + // Message is the condition error message provided by the original check + // author. + Message string `json:"message"` +} diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/config.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/config.go index 5ebe4bc840c..e8ea638acc1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/config.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/expression.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/expression.go index 8a39face7b6..5ecb15ce50f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/expression.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/expression.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package tfjson import "encoding/json" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/metadata.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/metadata.go index 70d7ce4380a..eb525776078 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/metadata.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/metadata.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/plan.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/plan.go index 6cfd533e992..de529accc05 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/plan.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/plan.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson import ( @@ -63,6 +66,14 @@ type Plan struct { // RelevantAttributes represents any resource instances and their // attributes which may have contributed to the planned changes RelevantAttributes []ResourceAttribute `json:"relevant_attributes,omitempty"` + + // Checks contains the results of any conditional checks executed, or + // planned to be executed, during this plan. + Checks []CheckResultStatic `json:"checks,omitempty"` + + // Timestamp contains the static timestamp that Terraform considers to be + // the time this plan executed, in UTC. + Timestamp string `json:"timestamp,omitempty"` } // ResourceAttribute describes a full path to a resource attribute @@ -197,6 +208,28 @@ type Change struct { // display of sensitive values in user interfaces. BeforeSensitive interface{} `json:"before_sensitive,omitempty"` AfterSensitive interface{} `json:"after_sensitive,omitempty"` + + // Importing contains the import metadata about this operation. If importing + // is present (ie. 
not null) then the change is an import operation in + // addition to anything mentioned in the actions field. The actual contents + // of the Importing struct is subject to change, so downstream consumers + // should treat any values in here as strictly optional. + Importing *Importing `json:"importing,omitempty"` + + // GeneratedConfig contains any HCL config generated for this resource + // during planning as a string. + // + // If this is populated, then Importing should also be populated but this + // might change in the future. However, not all Importing changes will + // contain generated config. + GeneratedConfig string `json:"generated_config,omitempty"` +} + +// Importing is a nested object for the resource import metadata. +type Importing struct { + // The original ID of this resource used to target it as part of planned + // import operation. + ID string `json:"id,omitempty"` } // PlanVariable is a top-level variable in the Terraform plan. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/schemas.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/schemas.go index f7c8abd50f6..64f87d84a90 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/schemas.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/schemas.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/state.go index 3c3f6a4b0aa..0f2a9996966 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson import ( @@ -32,6 +35,10 @@ type State struct { // The values that make up the state. 
Values *StateValues `json:"values,omitempty"` + + // Checks contains the results of any conditional checks when Values was + // last updated. + Checks *CheckResultStatic `json:"checks,omitempty"` } // UseJSONNumber controls whether the State will be decoded using the diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/tfjson.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/tfjson.go index 55f9ac44498..3a78a4f0fed 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/tfjson.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/tfjson.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tfjson is a de-coupled helper library containing types for // the plan format output by "terraform show -json" command. This // command is designed for the export of Terraform plan data in diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/validate.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/validate.go index 97b82d0a979..53652eff300 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/validate.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfjson import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/version.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/version.go index 16f0a853e34..7516ad6dd49 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/version.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-json/version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfjson // VersionOutput represents output from the version -json command diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/context.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/context.go index 385283bb915..d99e19796c2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/context.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/doc.go index cc1a033e614..1d29f515eac 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/doc.go @@ -1,2 +1,5 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package logging contains shared environment variable and log functionality. package logging diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/environment_variables.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/environment_variables.go index 49b8072c8c8..c203345769d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/environment_variables.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/environment_variables.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging // Environment variables. 
diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/keys.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/keys.go index ce803d752bd..0b2b9d2f109 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/keys.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/keys.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging // Global logging keys attached to all requests. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol.go index ee124769055..a1d49eae12c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol_data.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol_data.go index 36dfcd764df..e96188f59c6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol_data.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/protocol_data.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/provider.go index b40763c6e17..1ece6992d09 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/internal/logging/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/data_source.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/data_source.go index bdeac863848..c84e807668e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/data_source.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/diagnostic.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/diagnostic.go index de0906e33a0..15ab6a4ab13 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/diagnostic.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/diagnostic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import "github.com/hashicorp/terraform-plugin-go/tftypes" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/doc.go index fdaf5cde4c7..2d35c9251fb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tfprotov5 provides the interfaces and types needed to build a // Terraform provider server. // diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/dynamic_value.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/dynamic_value.go index 71d5cc4e97b..a190d3cb273 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/dynamic_value.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/dynamic_value.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import ( @@ -35,7 +38,7 @@ func NewDynamicValue(t tftypes.Type, v tftypes.Value) (DynamicValue, error) { // DynamicValue represents a nested encoding value that came from the protocol. // The only way providers should ever interact with it is by calling its -// `Unmarshal` method to retrive a `tftypes.Value`. Although the type system +// `Unmarshal` method to retrieve a `tftypes.Value`. Although the type system // allows for other interactions, they are explicitly not supported, and will // not be considered when evaluating for breaking changes. Treat this type as // an opaque value, and *only* call its `Unmarshal` method. 
diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go index 9032b901fdb..cc40d861fce 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package diag import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/doc.go index 0c73dab12fe..faaba2285cd 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package diag contains diagnostics helpers. These implementations are // intentionally outside the public API. package diag diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/attribute_path.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/attribute_path.go index 421bfbec548..182479ec9a8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/attribute_path.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/attribute_path.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/data_source.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/data_source.go index bb1ad54250c..ae2957e4c96 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/data_source.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/diagnostic.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/diagnostic.go index e0feaea5e94..1dc41687dc1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/diagnostic.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/diagnostic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/provider.go index 8184bebf387..3442814b6e3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/resource.go index 9b7d505be4d..9133bb91042 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/schema.go index 90f2e032226..35828c3115c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/state.go index ad89004d2d1..910f0c7304c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/string_kind.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/string_kind.go index c15d2df4614..b2b47248f38 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/string_kind.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/string_kind.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/types.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/types.go index 17f31f852b9..f99ff2b777c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/types.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/fromproto/types.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/context_keys.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/context_keys.go index cc72fe4bbf6..5b2556a27b4 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/context_keys.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/context_keys.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tf5serverlogging // Context key types. 
diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/doc.go index e77a831c03e..82b9d39e5c5 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tf5serverlogging contains logging functionality specific to // tf5server and tfprotov5 types. package tf5serverlogging diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/downstream_request.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/downstream_request.go index 65ed7011321..efa4d9b529b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/downstream_request.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/downstream_request.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tf5serverlogging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.pb.go index 9b7abe79e46..664d7b08a41 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.pb.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + // Terraform Plugin RPC protocol version 5.3 // // This file defines version 5.3 of the RPC protocol. To implement a plugin @@ -19,8 +22,8 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.28.1 -// protoc v3.19.4 +// protoc-gen-go v1.30.0 +// protoc v4.23.2 // source: tfplugin5.proto package tfplugin5 @@ -1061,6 +1064,7 @@ type AttributePath_Step struct { unknownFields protoimpl.UnknownFields // Types that are assignable to Selector: + // // *AttributePath_Step_AttributeName // *AttributePath_Step_ElementKeyString // *AttributePath_Step_ElementKeyInt @@ -2486,9 +2490,9 @@ type PlanResourceChange_Response struct { // specific details of the legacy SDK type system, and are not a general // mechanism to avoid proper type handling in providers. // - // ==== DO NOT USE THIS ==== - // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== - // ==== DO NOT USE THIS ==== + // ==== DO NOT USE THIS ==== + // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== + // ==== DO NOT USE THIS ==== LegacyTypeSystem bool `protobuf:"varint,5,opt,name=legacy_type_system,json=legacyTypeSystem,proto3" json:"legacy_type_system,omitempty"` } @@ -2662,9 +2666,9 @@ type ApplyResourceChange_Response struct { // specific details of the legacy SDK type system, and are not a general // mechanism to avoid proper type handling in providers. 
// - // ==== DO NOT USE THIS ==== - // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== - // ==== DO NOT USE THIS ==== + // ==== DO NOT USE THIS ==== + // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== + // ==== DO NOT USE THIS ==== LegacyTypeSystem bool `protobuf:"varint,4,opt,name=legacy_type_system,json=legacyTypeSystem,proto3" json:"legacy_type_system,omitempty"` } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.proto b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.proto index 6cdaf2c0305..f79feee42cb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.proto +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5.proto @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Terraform Plugin RPC protocol version 5.3 // // This file defines version 5.3 of the RPC protocol. To implement a plugin diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go index ddbdea31e71..548f966137d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go @@ -1,7 +1,29 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// Terraform Plugin RPC protocol version 5.3 +// +// This file defines version 5.3 of the RPC protocol. To implement a plugin +// against this protocol, copy this definition into your own codebase and +// use protoc to generate stubs for your target language. 
+// +// This file will not be updated. Any minor versions of protocol 5 to follow +// should copy this file and modify the copy while maintaining backwards +// compatibility. Breaking changes, if any are required, will come +// in a subsequent major version with its own separate proto definition. +// +// Note that only the proto files included in a release tag of Terraform are +// official protocol releases. Proto files taken from other commits may include +// incomplete changes or features that did not make it into a final release. +// In all reasonable cases, plugin developers should take the proto file from +// the tag of the most recent release of Terraform, and not from the main +// branch or any other development branch. +// + // Code generated by protoc-gen-go-grpc. DO NOT EDIT. // versions: -// - protoc-gen-go-grpc v1.2.0 -// - protoc v3.19.4 +// - protoc-gen-go-grpc v1.3.0 +// - protoc v4.23.2 // source: tfplugin5.proto package tfplugin5 @@ -18,25 +40,40 @@ import ( // Requires gRPC-Go v1.32.0 or later. 
const _ = grpc.SupportPackageIsVersion7 +const ( + Provider_GetSchema_FullMethodName = "/tfplugin5.Provider/GetSchema" + Provider_PrepareProviderConfig_FullMethodName = "/tfplugin5.Provider/PrepareProviderConfig" + Provider_ValidateResourceTypeConfig_FullMethodName = "/tfplugin5.Provider/ValidateResourceTypeConfig" + Provider_ValidateDataSourceConfig_FullMethodName = "/tfplugin5.Provider/ValidateDataSourceConfig" + Provider_UpgradeResourceState_FullMethodName = "/tfplugin5.Provider/UpgradeResourceState" + Provider_Configure_FullMethodName = "/tfplugin5.Provider/Configure" + Provider_ReadResource_FullMethodName = "/tfplugin5.Provider/ReadResource" + Provider_PlanResourceChange_FullMethodName = "/tfplugin5.Provider/PlanResourceChange" + Provider_ApplyResourceChange_FullMethodName = "/tfplugin5.Provider/ApplyResourceChange" + Provider_ImportResourceState_FullMethodName = "/tfplugin5.Provider/ImportResourceState" + Provider_ReadDataSource_FullMethodName = "/tfplugin5.Provider/ReadDataSource" + Provider_Stop_FullMethodName = "/tfplugin5.Provider/Stop" +) + // ProviderClient is the client API for Provider service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. 
type ProviderClient interface { - //////// Information about what a provider supports/expects + // ////// Information about what a provider supports/expects GetSchema(ctx context.Context, in *GetProviderSchema_Request, opts ...grpc.CallOption) (*GetProviderSchema_Response, error) PrepareProviderConfig(ctx context.Context, in *PrepareProviderConfig_Request, opts ...grpc.CallOption) (*PrepareProviderConfig_Response, error) ValidateResourceTypeConfig(ctx context.Context, in *ValidateResourceTypeConfig_Request, opts ...grpc.CallOption) (*ValidateResourceTypeConfig_Response, error) ValidateDataSourceConfig(ctx context.Context, in *ValidateDataSourceConfig_Request, opts ...grpc.CallOption) (*ValidateDataSourceConfig_Response, error) UpgradeResourceState(ctx context.Context, in *UpgradeResourceState_Request, opts ...grpc.CallOption) (*UpgradeResourceState_Response, error) - //////// One-time initialization, called before other functions below + // ////// One-time initialization, called before other functions below Configure(ctx context.Context, in *Configure_Request, opts ...grpc.CallOption) (*Configure_Response, error) - //////// Managed Resource Lifecycle + // ////// Managed Resource Lifecycle ReadResource(ctx context.Context, in *ReadResource_Request, opts ...grpc.CallOption) (*ReadResource_Response, error) PlanResourceChange(ctx context.Context, in *PlanResourceChange_Request, opts ...grpc.CallOption) (*PlanResourceChange_Response, error) ApplyResourceChange(ctx context.Context, in *ApplyResourceChange_Request, opts ...grpc.CallOption) (*ApplyResourceChange_Response, error) ImportResourceState(ctx context.Context, in *ImportResourceState_Request, opts ...grpc.CallOption) (*ImportResourceState_Response, error) ReadDataSource(ctx context.Context, in *ReadDataSource_Request, opts ...grpc.CallOption) (*ReadDataSource_Response, error) - //////// Graceful Shutdown + // ////// Graceful Shutdown Stop(ctx context.Context, in *Stop_Request, opts ...grpc.CallOption) 
(*Stop_Response, error) } @@ -50,7 +87,7 @@ func NewProviderClient(cc grpc.ClientConnInterface) ProviderClient { func (c *providerClient) GetSchema(ctx context.Context, in *GetProviderSchema_Request, opts ...grpc.CallOption) (*GetProviderSchema_Response, error) { out := new(GetProviderSchema_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/GetSchema", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_GetSchema_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -59,7 +96,7 @@ func (c *providerClient) GetSchema(ctx context.Context, in *GetProviderSchema_Re func (c *providerClient) PrepareProviderConfig(ctx context.Context, in *PrepareProviderConfig_Request, opts ...grpc.CallOption) (*PrepareProviderConfig_Response, error) { out := new(PrepareProviderConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/PrepareProviderConfig", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_PrepareProviderConfig_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -68,7 +105,7 @@ func (c *providerClient) PrepareProviderConfig(ctx context.Context, in *PrepareP func (c *providerClient) ValidateResourceTypeConfig(ctx context.Context, in *ValidateResourceTypeConfig_Request, opts ...grpc.CallOption) (*ValidateResourceTypeConfig_Response, error) { out := new(ValidateResourceTypeConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/ValidateResourceTypeConfig", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ValidateResourceTypeConfig_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -77,7 +114,7 @@ func (c *providerClient) ValidateResourceTypeConfig(ctx context.Context, in *Val func (c *providerClient) ValidateDataSourceConfig(ctx context.Context, in *ValidateDataSourceConfig_Request, opts ...grpc.CallOption) (*ValidateDataSourceConfig_Response, error) { out := new(ValidateDataSourceConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/ValidateDataSourceConfig", in, out, opts...) 
+ err := c.cc.Invoke(ctx, Provider_ValidateDataSourceConfig_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -86,7 +123,7 @@ func (c *providerClient) ValidateDataSourceConfig(ctx context.Context, in *Valid func (c *providerClient) UpgradeResourceState(ctx context.Context, in *UpgradeResourceState_Request, opts ...grpc.CallOption) (*UpgradeResourceState_Response, error) { out := new(UpgradeResourceState_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/UpgradeResourceState", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_UpgradeResourceState_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -95,7 +132,7 @@ func (c *providerClient) UpgradeResourceState(ctx context.Context, in *UpgradeRe func (c *providerClient) Configure(ctx context.Context, in *Configure_Request, opts ...grpc.CallOption) (*Configure_Response, error) { out := new(Configure_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/Configure", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_Configure_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -104,7 +141,7 @@ func (c *providerClient) Configure(ctx context.Context, in *Configure_Request, o func (c *providerClient) ReadResource(ctx context.Context, in *ReadResource_Request, opts ...grpc.CallOption) (*ReadResource_Response, error) { out := new(ReadResource_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/ReadResource", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ReadResource_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -113,7 +150,7 @@ func (c *providerClient) ReadResource(ctx context.Context, in *ReadResource_Requ func (c *providerClient) PlanResourceChange(ctx context.Context, in *PlanResourceChange_Request, opts ...grpc.CallOption) (*PlanResourceChange_Response, error) { out := new(PlanResourceChange_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/PlanResourceChange", in, out, opts...) 
+ err := c.cc.Invoke(ctx, Provider_PlanResourceChange_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -122,7 +159,7 @@ func (c *providerClient) PlanResourceChange(ctx context.Context, in *PlanResourc func (c *providerClient) ApplyResourceChange(ctx context.Context, in *ApplyResourceChange_Request, opts ...grpc.CallOption) (*ApplyResourceChange_Response, error) { out := new(ApplyResourceChange_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/ApplyResourceChange", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ApplyResourceChange_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -131,7 +168,7 @@ func (c *providerClient) ApplyResourceChange(ctx context.Context, in *ApplyResou func (c *providerClient) ImportResourceState(ctx context.Context, in *ImportResourceState_Request, opts ...grpc.CallOption) (*ImportResourceState_Response, error) { out := new(ImportResourceState_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/ImportResourceState", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ImportResourceState_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -140,7 +177,7 @@ func (c *providerClient) ImportResourceState(ctx context.Context, in *ImportReso func (c *providerClient) ReadDataSource(ctx context.Context, in *ReadDataSource_Request, opts ...grpc.CallOption) (*ReadDataSource_Response, error) { out := new(ReadDataSource_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/ReadDataSource", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ReadDataSource_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -149,7 +186,7 @@ func (c *providerClient) ReadDataSource(ctx context.Context, in *ReadDataSource_ func (c *providerClient) Stop(ctx context.Context, in *Stop_Request, opts ...grpc.CallOption) (*Stop_Response, error) { out := new(Stop_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provider/Stop", in, out, opts...) 
+ err := c.cc.Invoke(ctx, Provider_Stop_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -160,21 +197,21 @@ func (c *providerClient) Stop(ctx context.Context, in *Stop_Request, opts ...grp // All implementations must embed UnimplementedProviderServer // for forward compatibility type ProviderServer interface { - //////// Information about what a provider supports/expects + // ////// Information about what a provider supports/expects GetSchema(context.Context, *GetProviderSchema_Request) (*GetProviderSchema_Response, error) PrepareProviderConfig(context.Context, *PrepareProviderConfig_Request) (*PrepareProviderConfig_Response, error) ValidateResourceTypeConfig(context.Context, *ValidateResourceTypeConfig_Request) (*ValidateResourceTypeConfig_Response, error) ValidateDataSourceConfig(context.Context, *ValidateDataSourceConfig_Request) (*ValidateDataSourceConfig_Response, error) UpgradeResourceState(context.Context, *UpgradeResourceState_Request) (*UpgradeResourceState_Response, error) - //////// One-time initialization, called before other functions below + // ////// One-time initialization, called before other functions below Configure(context.Context, *Configure_Request) (*Configure_Response, error) - //////// Managed Resource Lifecycle + // ////// Managed Resource Lifecycle ReadResource(context.Context, *ReadResource_Request) (*ReadResource_Response, error) PlanResourceChange(context.Context, *PlanResourceChange_Request) (*PlanResourceChange_Response, error) ApplyResourceChange(context.Context, *ApplyResourceChange_Request) (*ApplyResourceChange_Response, error) ImportResourceState(context.Context, *ImportResourceState_Request) (*ImportResourceState_Response, error) ReadDataSource(context.Context, *ReadDataSource_Request) (*ReadDataSource_Response, error) - //////// Graceful Shutdown + // ////// Graceful Shutdown Stop(context.Context, *Stop_Request) (*Stop_Response, error) mustEmbedUnimplementedProviderServer() } @@ -242,7 +279,7 @@ func 
_Provider_GetSchema_Handler(srv interface{}, ctx context.Context, dec func( } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/GetSchema", + FullMethod: Provider_GetSchema_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).GetSchema(ctx, req.(*GetProviderSchema_Request)) @@ -260,7 +297,7 @@ func _Provider_PrepareProviderConfig_Handler(srv interface{}, ctx context.Contex } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/PrepareProviderConfig", + FullMethod: Provider_PrepareProviderConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).PrepareProviderConfig(ctx, req.(*PrepareProviderConfig_Request)) @@ -278,7 +315,7 @@ func _Provider_ValidateResourceTypeConfig_Handler(srv interface{}, ctx context.C } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/ValidateResourceTypeConfig", + FullMethod: Provider_ValidateResourceTypeConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ValidateResourceTypeConfig(ctx, req.(*ValidateResourceTypeConfig_Request)) @@ -296,7 +333,7 @@ func _Provider_ValidateDataSourceConfig_Handler(srv interface{}, ctx context.Con } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/ValidateDataSourceConfig", + FullMethod: Provider_ValidateDataSourceConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ValidateDataSourceConfig(ctx, req.(*ValidateDataSourceConfig_Request)) @@ -314,7 +351,7 @@ func _Provider_UpgradeResourceState_Handler(srv interface{}, ctx context.Context } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/UpgradeResourceState", + FullMethod: Provider_UpgradeResourceState_FullMethodName, } handler 
:= func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).UpgradeResourceState(ctx, req.(*UpgradeResourceState_Request)) @@ -332,7 +369,7 @@ func _Provider_Configure_Handler(srv interface{}, ctx context.Context, dec func( } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/Configure", + FullMethod: Provider_Configure_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).Configure(ctx, req.(*Configure_Request)) @@ -350,7 +387,7 @@ func _Provider_ReadResource_Handler(srv interface{}, ctx context.Context, dec fu } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/ReadResource", + FullMethod: Provider_ReadResource_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ReadResource(ctx, req.(*ReadResource_Request)) @@ -368,7 +405,7 @@ func _Provider_PlanResourceChange_Handler(srv interface{}, ctx context.Context, } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/PlanResourceChange", + FullMethod: Provider_PlanResourceChange_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).PlanResourceChange(ctx, req.(*PlanResourceChange_Request)) @@ -386,7 +423,7 @@ func _Provider_ApplyResourceChange_Handler(srv interface{}, ctx context.Context, } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/ApplyResourceChange", + FullMethod: Provider_ApplyResourceChange_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ApplyResourceChange(ctx, req.(*ApplyResourceChange_Request)) @@ -404,7 +441,7 @@ func _Provider_ImportResourceState_Handler(srv interface{}, ctx context.Context, } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: 
"/tfplugin5.Provider/ImportResourceState", + FullMethod: Provider_ImportResourceState_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ImportResourceState(ctx, req.(*ImportResourceState_Request)) @@ -422,7 +459,7 @@ func _Provider_ReadDataSource_Handler(srv interface{}, ctx context.Context, dec } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/ReadDataSource", + FullMethod: Provider_ReadDataSource_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ReadDataSource(ctx, req.(*ReadDataSource_Request)) @@ -440,7 +477,7 @@ func _Provider_Stop_Handler(srv interface{}, ctx context.Context, dec func(inter } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provider/Stop", + FullMethod: Provider_Stop_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).Stop(ctx, req.(*Stop_Request)) @@ -508,6 +545,13 @@ var Provider_ServiceDesc = grpc.ServiceDesc{ Metadata: "tfplugin5.proto", } +const ( + Provisioner_GetSchema_FullMethodName = "/tfplugin5.Provisioner/GetSchema" + Provisioner_ValidateProvisionerConfig_FullMethodName = "/tfplugin5.Provisioner/ValidateProvisionerConfig" + Provisioner_ProvisionResource_FullMethodName = "/tfplugin5.Provisioner/ProvisionResource" + Provisioner_Stop_FullMethodName = "/tfplugin5.Provisioner/Stop" +) + // ProvisionerClient is the client API for Provisioner service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. 
@@ -528,7 +572,7 @@ func NewProvisionerClient(cc grpc.ClientConnInterface) ProvisionerClient { func (c *provisionerClient) GetSchema(ctx context.Context, in *GetProvisionerSchema_Request, opts ...grpc.CallOption) (*GetProvisionerSchema_Response, error) { out := new(GetProvisionerSchema_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provisioner/GetSchema", in, out, opts...) + err := c.cc.Invoke(ctx, Provisioner_GetSchema_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -537,7 +581,7 @@ func (c *provisionerClient) GetSchema(ctx context.Context, in *GetProvisionerSch func (c *provisionerClient) ValidateProvisionerConfig(ctx context.Context, in *ValidateProvisionerConfig_Request, opts ...grpc.CallOption) (*ValidateProvisionerConfig_Response, error) { out := new(ValidateProvisionerConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provisioner/ValidateProvisionerConfig", in, out, opts...) + err := c.cc.Invoke(ctx, Provisioner_ValidateProvisionerConfig_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -545,7 +589,7 @@ func (c *provisionerClient) ValidateProvisionerConfig(ctx context.Context, in *V } func (c *provisionerClient) ProvisionResource(ctx context.Context, in *ProvisionResource_Request, opts ...grpc.CallOption) (Provisioner_ProvisionResourceClient, error) { - stream, err := c.cc.NewStream(ctx, &Provisioner_ServiceDesc.Streams[0], "/tfplugin5.Provisioner/ProvisionResource", opts...) + stream, err := c.cc.NewStream(ctx, &Provisioner_ServiceDesc.Streams[0], Provisioner_ProvisionResource_FullMethodName, opts...) if err != nil { return nil, err } @@ -578,7 +622,7 @@ func (x *provisionerProvisionResourceClient) Recv() (*ProvisionResource_Response func (c *provisionerClient) Stop(ctx context.Context, in *Stop_Request, opts ...grpc.CallOption) (*Stop_Response, error) { out := new(Stop_Response) - err := c.cc.Invoke(ctx, "/tfplugin5.Provisioner/Stop", in, out, opts...) 
+ err := c.cc.Invoke(ctx, Provisioner_Stop_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -635,7 +679,7 @@ func _Provisioner_GetSchema_Handler(srv interface{}, ctx context.Context, dec fu } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provisioner/GetSchema", + FullMethod: Provisioner_GetSchema_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProvisionerServer).GetSchema(ctx, req.(*GetProvisionerSchema_Request)) @@ -653,7 +697,7 @@ func _Provisioner_ValidateProvisionerConfig_Handler(srv interface{}, ctx context } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provisioner/ValidateProvisionerConfig", + FullMethod: Provisioner_ValidateProvisionerConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProvisionerServer).ValidateProvisionerConfig(ctx, req.(*ValidateProvisionerConfig_Request)) @@ -692,7 +736,7 @@ func _Provisioner_Stop_Handler(srv interface{}, ctx context.Context, dec func(in } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin5.Provisioner/Stop", + FullMethod: Provisioner_Stop_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProvisionerServer).Stop(ctx, req.(*Stop_Request)) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/toproto/server_capabilities.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/toproto/server_capabilities.go index c1ff45ae413..bde7d4bf94c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/toproto/server_capabilities.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/toproto/server_capabilities.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package toproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/provider.go index 0bc84ff2630..63112fe0443 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/resource.go index b86a045e957..e2c15374580 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/schema.go index b0b6dff2843..9b860275f13 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import "github.com/hashicorp/terraform-plugin-go/tftypes" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/server_capabilities.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/server_capabilities.go index f87cf2f4aa3..32ffe9a96d6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/server_capabilities.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/server_capabilities.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 // ServerCapabilities allows providers to communicate optionally supported diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/state.go index 1f85dcad106..08c118958ef 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/string_kind.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/string_kind.go index 35d89d038c0..69169b678a8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/string_kind.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/string_kind.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfprotov5 const ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/doc.go index d8360a7fba3..96089e6ba12 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tf5server implements a server implementation to run // tfprotov5.ProviderServers as gRPC servers. // diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/plugin.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/plugin.go index a33afd84c0e..602246aa3f2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/plugin.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/plugin.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tf5server import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go index 952b789f585..9afebe9b24f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tf5server import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/data_source.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/data_source.go index 1feb2cf3450..f6871ceea9b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/data_source.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/diagnostic.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/diagnostic.go index c40d52ad07c..8f856abbba7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/diagnostic.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/diagnostic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import "github.com/hashicorp/terraform-plugin-go/tftypes" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/doc.go index 875120cba38..e9adfdedb69 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tfprotov6 provides the interfaces and types needed to build a // Terraform provider server. 
// diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/dynamic_value.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/dynamic_value.go index a9fd78420a3..deb18a86675 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/dynamic_value.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/dynamic_value.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import ( @@ -35,7 +38,7 @@ func NewDynamicValue(t tftypes.Type, v tftypes.Value) (DynamicValue, error) { // DynamicValue represents a nested encoding value that came from the protocol. // The only way providers should ever interact with it is by calling its -// `Unmarshal` method to retrive a `tftypes.Value`. Although the type system +// `Unmarshal` method to retrieve a `tftypes.Value`. Although the type system // allows for other interactions, they are explicitly not supported, and will // not be considered when evaluating for breaking changes. Treat this type as // an opaque value, and *only* call its `Unmarshal` method. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/diagnostics.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/diagnostics.go index fbec5677cb1..29bed4540c4 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/diagnostics.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/diagnostics.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package diag import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/doc.go index 0c73dab12fe..faaba2285cd 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/diag/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package diag contains diagnostics helpers. These implementations are // intentionally outside the public API. package diag diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/attribute_path.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/attribute_path.go index c25c3cb0d95..267bc223da3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/attribute_path.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/attribute_path.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/data_source.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/data_source.go index 130f6a82198..3abff4ba7e9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/data_source.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/diagnostic.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/diagnostic.go index ac92796fb1f..21673ca0da8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/diagnostic.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/diagnostic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/provider.go index 2000b9bb0c4..b302b5d6f1f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/resource.go index 94addd80125..d69e38229dc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/schema.go index db720246f51..b4cb1399fac 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/state.go index 0aa2a5a6a06..e5470b50d52 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/string_kind.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/string_kind.go index f2b1107538a..bfe82ca8aa6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/string_kind.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/string_kind.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/types.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/types.go index 87127f4a6df..967256b728e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/types.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/fromproto/types.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fromproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/context_keys.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/context_keys.go index 15386cd2cc2..99f55e467d3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/context_keys.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/context_keys.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tf6serverlogging // Context key types. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/doc.go index 167a61825ac..29c51245efb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + // Package tf5serverlogging contains logging functionality specific to // tf5server and tfprotov5 types. package tf6serverlogging diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/downstream_request.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/downstream_request.go index 31ce386aebb..0fd36326e56 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/downstream_request.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tf6serverlogging/downstream_request.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tf6serverlogging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.pb.go index 817f059ccd7..3e97e831da8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.pb.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Terraform Plugin RPC protocol version 6.3 // // This file defines version 6.3 of the RPC protocol. To implement a plugin @@ -19,8 +22,8 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.28.1 -// protoc v3.19.4 +// protoc-gen-go v1.30.0 +// protoc v4.23.2 // source: tfplugin6.proto package tfplugin6 @@ -1002,6 +1005,7 @@ type AttributePath_Step struct { unknownFields protoimpl.UnknownFields // Types that are assignable to Selector: + // // *AttributePath_Step_AttributeName // *AttributePath_Step_ElementKeyString // *AttributePath_Step_ElementKeyInt @@ -1474,9 +1478,9 @@ type Schema_Object struct { // MinItems and MaxItems were never used in the protocol, and have no // effect on validation. // - // Deprecated: Do not use. + // Deprecated: Marked as deprecated in tfplugin6.proto. MinItems int64 `protobuf:"varint,4,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"` - // Deprecated: Do not use. + // Deprecated: Marked as deprecated in tfplugin6.proto. MaxItems int64 `protobuf:"varint,5,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"` } @@ -1526,7 +1530,7 @@ func (x *Schema_Object) GetNesting() Schema_Object_NestingMode { return Schema_Object_INVALID } -// Deprecated: Do not use. +// Deprecated: Marked as deprecated in tfplugin6.proto. func (x *Schema_Object) GetMinItems() int64 { if x != nil { return x.MinItems @@ -1534,7 +1538,7 @@ func (x *Schema_Object) GetMinItems() int64 { return 0 } -// Deprecated: Do not use. +// Deprecated: Marked as deprecated in tfplugin6.proto. func (x *Schema_Object) GetMaxItems() int64 { if x != nil { return x.MaxItems @@ -2505,9 +2509,9 @@ type PlanResourceChange_Response struct { // specific details of the legacy SDK type system, and are not a general // mechanism to avoid proper type handling in providers. 
// - // ==== DO NOT USE THIS ==== - // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== - // ==== DO NOT USE THIS ==== + // ==== DO NOT USE THIS ==== + // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== + // ==== DO NOT USE THIS ==== LegacyTypeSystem bool `protobuf:"varint,5,opt,name=legacy_type_system,json=legacyTypeSystem,proto3" json:"legacy_type_system,omitempty"` } @@ -2681,9 +2685,9 @@ type ApplyResourceChange_Response struct { // specific details of the legacy SDK type system, and are not a general // mechanism to avoid proper type handling in providers. // - // ==== DO NOT USE THIS ==== - // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== - // ==== DO NOT USE THIS ==== + // ==== DO NOT USE THIS ==== + // ==== THIS MUST BE LEFT UNSET IN ALL OTHER SDKS ==== + // ==== DO NOT USE THIS ==== LegacyTypeSystem bool `protobuf:"varint,4,opt,name=legacy_type_system,json=legacyTypeSystem,proto3" json:"legacy_type_system,omitempty"` } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.proto b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.proto index affb15fb460..01a72fd9a31 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.proto +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6.proto @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Terraform Plugin RPC protocol version 6.3 // // This file defines version 6.3 of the RPC protocol. 
To implement a plugin diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6_grpc.pb.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6_grpc.pb.go index a22d32dc9ed..01e8f1628c4 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6_grpc.pb.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6/tfplugin6_grpc.pb.go @@ -1,7 +1,29 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +// Terraform Plugin RPC protocol version 6.3 +// +// This file defines version 6.3 of the RPC protocol. To implement a plugin +// against this protocol, copy this definition into your own codebase and +// use protoc to generate stubs for your target language. +// +// This file will not be updated. Any minor versions of protocol 6 to follow +// should copy this file and modify the copy while maintaining backwards +// compatibility. Breaking changes, if any are required, will come +// in a subsequent major version with its own separate proto definition. +// +// Note that only the proto files included in a release tag of Terraform are +// official protocol releases. Proto files taken from other commits may include +// incomplete changes or features that did not make it into a final release. +// In all reasonable cases, plugin developers should take the proto file from +// the tag of the most recent release of Terraform, and not from the main +// branch or any other development branch. +// + // Code generated by protoc-gen-go-grpc. DO NOT EDIT. // versions: -// - protoc-gen-go-grpc v1.2.0 -// - protoc v3.19.4 +// - protoc-gen-go-grpc v1.3.0 +// - protoc v4.23.2 // source: tfplugin6.proto package tfplugin6 @@ -18,25 +40,40 @@ import ( // Requires gRPC-Go v1.32.0 or later. 
const _ = grpc.SupportPackageIsVersion7 +const ( + Provider_GetProviderSchema_FullMethodName = "/tfplugin6.Provider/GetProviderSchema" + Provider_ValidateProviderConfig_FullMethodName = "/tfplugin6.Provider/ValidateProviderConfig" + Provider_ValidateResourceConfig_FullMethodName = "/tfplugin6.Provider/ValidateResourceConfig" + Provider_ValidateDataResourceConfig_FullMethodName = "/tfplugin6.Provider/ValidateDataResourceConfig" + Provider_UpgradeResourceState_FullMethodName = "/tfplugin6.Provider/UpgradeResourceState" + Provider_ConfigureProvider_FullMethodName = "/tfplugin6.Provider/ConfigureProvider" + Provider_ReadResource_FullMethodName = "/tfplugin6.Provider/ReadResource" + Provider_PlanResourceChange_FullMethodName = "/tfplugin6.Provider/PlanResourceChange" + Provider_ApplyResourceChange_FullMethodName = "/tfplugin6.Provider/ApplyResourceChange" + Provider_ImportResourceState_FullMethodName = "/tfplugin6.Provider/ImportResourceState" + Provider_ReadDataSource_FullMethodName = "/tfplugin6.Provider/ReadDataSource" + Provider_StopProvider_FullMethodName = "/tfplugin6.Provider/StopProvider" +) + // ProviderClient is the client API for Provider service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. 
type ProviderClient interface { - //////// Information about what a provider supports/expects + // ////// Information about what a provider supports/expects GetProviderSchema(ctx context.Context, in *GetProviderSchema_Request, opts ...grpc.CallOption) (*GetProviderSchema_Response, error) ValidateProviderConfig(ctx context.Context, in *ValidateProviderConfig_Request, opts ...grpc.CallOption) (*ValidateProviderConfig_Response, error) ValidateResourceConfig(ctx context.Context, in *ValidateResourceConfig_Request, opts ...grpc.CallOption) (*ValidateResourceConfig_Response, error) ValidateDataResourceConfig(ctx context.Context, in *ValidateDataResourceConfig_Request, opts ...grpc.CallOption) (*ValidateDataResourceConfig_Response, error) UpgradeResourceState(ctx context.Context, in *UpgradeResourceState_Request, opts ...grpc.CallOption) (*UpgradeResourceState_Response, error) - //////// One-time initialization, called before other functions below + // ////// One-time initialization, called before other functions below ConfigureProvider(ctx context.Context, in *ConfigureProvider_Request, opts ...grpc.CallOption) (*ConfigureProvider_Response, error) - //////// Managed Resource Lifecycle + // ////// Managed Resource Lifecycle ReadResource(ctx context.Context, in *ReadResource_Request, opts ...grpc.CallOption) (*ReadResource_Response, error) PlanResourceChange(ctx context.Context, in *PlanResourceChange_Request, opts ...grpc.CallOption) (*PlanResourceChange_Response, error) ApplyResourceChange(ctx context.Context, in *ApplyResourceChange_Request, opts ...grpc.CallOption) (*ApplyResourceChange_Response, error) ImportResourceState(ctx context.Context, in *ImportResourceState_Request, opts ...grpc.CallOption) (*ImportResourceState_Response, error) ReadDataSource(ctx context.Context, in *ReadDataSource_Request, opts ...grpc.CallOption) (*ReadDataSource_Response, error) - //////// Graceful Shutdown + // ////// Graceful Shutdown StopProvider(ctx context.Context, in 
*StopProvider_Request, opts ...grpc.CallOption) (*StopProvider_Response, error) } @@ -50,7 +87,7 @@ func NewProviderClient(cc grpc.ClientConnInterface) ProviderClient { func (c *providerClient) GetProviderSchema(ctx context.Context, in *GetProviderSchema_Request, opts ...grpc.CallOption) (*GetProviderSchema_Response, error) { out := new(GetProviderSchema_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/GetProviderSchema", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_GetProviderSchema_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -59,7 +96,7 @@ func (c *providerClient) GetProviderSchema(ctx context.Context, in *GetProviderS func (c *providerClient) ValidateProviderConfig(ctx context.Context, in *ValidateProviderConfig_Request, opts ...grpc.CallOption) (*ValidateProviderConfig_Response, error) { out := new(ValidateProviderConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ValidateProviderConfig", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ValidateProviderConfig_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -68,7 +105,7 @@ func (c *providerClient) ValidateProviderConfig(ctx context.Context, in *Validat func (c *providerClient) ValidateResourceConfig(ctx context.Context, in *ValidateResourceConfig_Request, opts ...grpc.CallOption) (*ValidateResourceConfig_Response, error) { out := new(ValidateResourceConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ValidateResourceConfig", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ValidateResourceConfig_FullMethodName, in, out, opts...) 
if err != nil { return nil, err } @@ -77,7 +114,7 @@ func (c *providerClient) ValidateResourceConfig(ctx context.Context, in *Validat func (c *providerClient) ValidateDataResourceConfig(ctx context.Context, in *ValidateDataResourceConfig_Request, opts ...grpc.CallOption) (*ValidateDataResourceConfig_Response, error) { out := new(ValidateDataResourceConfig_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ValidateDataResourceConfig", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ValidateDataResourceConfig_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -86,7 +123,7 @@ func (c *providerClient) ValidateDataResourceConfig(ctx context.Context, in *Val func (c *providerClient) UpgradeResourceState(ctx context.Context, in *UpgradeResourceState_Request, opts ...grpc.CallOption) (*UpgradeResourceState_Response, error) { out := new(UpgradeResourceState_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/UpgradeResourceState", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_UpgradeResourceState_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -95,7 +132,7 @@ func (c *providerClient) UpgradeResourceState(ctx context.Context, in *UpgradeRe func (c *providerClient) ConfigureProvider(ctx context.Context, in *ConfigureProvider_Request, opts ...grpc.CallOption) (*ConfigureProvider_Response, error) { out := new(ConfigureProvider_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ConfigureProvider", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ConfigureProvider_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -104,7 +141,7 @@ func (c *providerClient) ConfigureProvider(ctx context.Context, in *ConfigurePro func (c *providerClient) ReadResource(ctx context.Context, in *ReadResource_Request, opts ...grpc.CallOption) (*ReadResource_Response, error) { out := new(ReadResource_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ReadResource", in, out, opts...) 
+ err := c.cc.Invoke(ctx, Provider_ReadResource_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -113,7 +150,7 @@ func (c *providerClient) ReadResource(ctx context.Context, in *ReadResource_Requ func (c *providerClient) PlanResourceChange(ctx context.Context, in *PlanResourceChange_Request, opts ...grpc.CallOption) (*PlanResourceChange_Response, error) { out := new(PlanResourceChange_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/PlanResourceChange", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_PlanResourceChange_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -122,7 +159,7 @@ func (c *providerClient) PlanResourceChange(ctx context.Context, in *PlanResourc func (c *providerClient) ApplyResourceChange(ctx context.Context, in *ApplyResourceChange_Request, opts ...grpc.CallOption) (*ApplyResourceChange_Response, error) { out := new(ApplyResourceChange_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ApplyResourceChange", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ApplyResourceChange_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -131,7 +168,7 @@ func (c *providerClient) ApplyResourceChange(ctx context.Context, in *ApplyResou func (c *providerClient) ImportResourceState(ctx context.Context, in *ImportResourceState_Request, opts ...grpc.CallOption) (*ImportResourceState_Response, error) { out := new(ImportResourceState_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ImportResourceState", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ImportResourceState_FullMethodName, in, out, opts...) 
if err != nil { return nil, err } @@ -140,7 +177,7 @@ func (c *providerClient) ImportResourceState(ctx context.Context, in *ImportReso func (c *providerClient) ReadDataSource(ctx context.Context, in *ReadDataSource_Request, opts ...grpc.CallOption) (*ReadDataSource_Response, error) { out := new(ReadDataSource_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/ReadDataSource", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_ReadDataSource_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -149,7 +186,7 @@ func (c *providerClient) ReadDataSource(ctx context.Context, in *ReadDataSource_ func (c *providerClient) StopProvider(ctx context.Context, in *StopProvider_Request, opts ...grpc.CallOption) (*StopProvider_Response, error) { out := new(StopProvider_Response) - err := c.cc.Invoke(ctx, "/tfplugin6.Provider/StopProvider", in, out, opts...) + err := c.cc.Invoke(ctx, Provider_StopProvider_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -160,21 +197,21 @@ func (c *providerClient) StopProvider(ctx context.Context, in *StopProvider_Requ // All implementations must embed UnimplementedProviderServer // for forward compatibility type ProviderServer interface { - //////// Information about what a provider supports/expects + // ////// Information about what a provider supports/expects GetProviderSchema(context.Context, *GetProviderSchema_Request) (*GetProviderSchema_Response, error) ValidateProviderConfig(context.Context, *ValidateProviderConfig_Request) (*ValidateProviderConfig_Response, error) ValidateResourceConfig(context.Context, *ValidateResourceConfig_Request) (*ValidateResourceConfig_Response, error) ValidateDataResourceConfig(context.Context, *ValidateDataResourceConfig_Request) (*ValidateDataResourceConfig_Response, error) UpgradeResourceState(context.Context, *UpgradeResourceState_Request) (*UpgradeResourceState_Response, error) - //////// One-time initialization, called before other functions below + // ////// 
One-time initialization, called before other functions below ConfigureProvider(context.Context, *ConfigureProvider_Request) (*ConfigureProvider_Response, error) - //////// Managed Resource Lifecycle + // ////// Managed Resource Lifecycle ReadResource(context.Context, *ReadResource_Request) (*ReadResource_Response, error) PlanResourceChange(context.Context, *PlanResourceChange_Request) (*PlanResourceChange_Response, error) ApplyResourceChange(context.Context, *ApplyResourceChange_Request) (*ApplyResourceChange_Response, error) ImportResourceState(context.Context, *ImportResourceState_Request) (*ImportResourceState_Response, error) ReadDataSource(context.Context, *ReadDataSource_Request) (*ReadDataSource_Response, error) - //////// Graceful Shutdown + // ////// Graceful Shutdown StopProvider(context.Context, *StopProvider_Request) (*StopProvider_Response, error) mustEmbedUnimplementedProviderServer() } @@ -242,7 +279,7 @@ func _Provider_GetProviderSchema_Handler(srv interface{}, ctx context.Context, d } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/GetProviderSchema", + FullMethod: Provider_GetProviderSchema_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).GetProviderSchema(ctx, req.(*GetProviderSchema_Request)) @@ -260,7 +297,7 @@ func _Provider_ValidateProviderConfig_Handler(srv interface{}, ctx context.Conte } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ValidateProviderConfig", + FullMethod: Provider_ValidateProviderConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ValidateProviderConfig(ctx, req.(*ValidateProviderConfig_Request)) @@ -278,7 +315,7 @@ func _Provider_ValidateResourceConfig_Handler(srv interface{}, ctx context.Conte } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ValidateResourceConfig", + 
FullMethod: Provider_ValidateResourceConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ValidateResourceConfig(ctx, req.(*ValidateResourceConfig_Request)) @@ -296,7 +333,7 @@ func _Provider_ValidateDataResourceConfig_Handler(srv interface{}, ctx context.C } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ValidateDataResourceConfig", + FullMethod: Provider_ValidateDataResourceConfig_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ValidateDataResourceConfig(ctx, req.(*ValidateDataResourceConfig_Request)) @@ -314,7 +351,7 @@ func _Provider_UpgradeResourceState_Handler(srv interface{}, ctx context.Context } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/UpgradeResourceState", + FullMethod: Provider_UpgradeResourceState_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).UpgradeResourceState(ctx, req.(*UpgradeResourceState_Request)) @@ -332,7 +369,7 @@ func _Provider_ConfigureProvider_Handler(srv interface{}, ctx context.Context, d } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ConfigureProvider", + FullMethod: Provider_ConfigureProvider_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ConfigureProvider(ctx, req.(*ConfigureProvider_Request)) @@ -350,7 +387,7 @@ func _Provider_ReadResource_Handler(srv interface{}, ctx context.Context, dec fu } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ReadResource", + FullMethod: Provider_ReadResource_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ReadResource(ctx, req.(*ReadResource_Request)) @@ -368,7 +405,7 @@ func 
_Provider_PlanResourceChange_Handler(srv interface{}, ctx context.Context, } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/PlanResourceChange", + FullMethod: Provider_PlanResourceChange_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).PlanResourceChange(ctx, req.(*PlanResourceChange_Request)) @@ -386,7 +423,7 @@ func _Provider_ApplyResourceChange_Handler(srv interface{}, ctx context.Context, } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ApplyResourceChange", + FullMethod: Provider_ApplyResourceChange_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ApplyResourceChange(ctx, req.(*ApplyResourceChange_Request)) @@ -404,7 +441,7 @@ func _Provider_ImportResourceState_Handler(srv interface{}, ctx context.Context, } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ImportResourceState", + FullMethod: Provider_ImportResourceState_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ImportResourceState(ctx, req.(*ImportResourceState_Request)) @@ -422,7 +459,7 @@ func _Provider_ReadDataSource_Handler(srv interface{}, ctx context.Context, dec } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/ReadDataSource", + FullMethod: Provider_ReadDataSource_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(ProviderServer).ReadDataSource(ctx, req.(*ReadDataSource_Request)) @@ -440,7 +477,7 @@ func _Provider_StopProvider_Handler(srv interface{}, ctx context.Context, dec fu } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/tfplugin6.Provider/StopProvider", + FullMethod: Provider_StopProvider_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) 
{ return srv.(ProviderServer).StopProvider(ctx, req.(*StopProvider_Request)) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/toproto/server_capabilities.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/toproto/server_capabilities.go index fa335d6913e..26a16417fc7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/toproto/server_capabilities.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/toproto/server_capabilities.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package toproto import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/provider.go index 970ef8e70c2..0681cfe1758 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/resource.go index 2768bb526e8..d2e213f22c3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/schema.go index c17e2cceebb..b368c620fbc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import "github.com/hashicorp/terraform-plugin-go/tftypes" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/server_capabilities.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/server_capabilities.go index eed59219d55..153ea46de21 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/server_capabilities.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/server_capabilities.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 // ServerCapabilities allows providers to communicate optionally supported diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/state.go index 41506b8d700..5f7a14518b8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/string_kind.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/string_kind.go index 359e036e7a3..7f0d9131b6a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/string_kind.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/string_kind.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfprotov6 const ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/doc.go index 8b5313d953b..5501d70d713 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tf6server implements a server implementation to run // tfprotov6.ProviderServers as gRPC servers. // diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/plugin.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/plugin.go index e868d30a5ba..c5d24ffdb49 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/plugin.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/plugin.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tf6server import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/server.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/server.go index 9889a93e695..a639e82d478 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/server.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server/server.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tf6server import ( @@ -578,7 +581,7 @@ func (s *server) stop() { s.stopCh = make(chan struct{}) } -func (s *server) Stop(ctx context.Context, req *tfplugin6.StopProvider_Request) (*tfplugin6.StopProvider_Response, error) { +func (s *server) StopProvider(ctx context.Context, req *tfplugin6.StopProvider_Request) (*tfplugin6.StopProvider_Response, error) { rpc := "StopProvider" ctx = s.loggingContext(ctx) ctx = logging.RpcContext(ctx, rpc) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path.go index d6108d4ab33..486c2d8b751 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( @@ -324,17 +327,17 @@ func builtinAttributePathStepper(in interface{}) (AttributePathStepper, bool) { type mapStringInterfaceAttributePathStepper map[string]interface{} func (m mapStringInterfaceAttributePathStepper) ApplyTerraform5AttributePathStep(step AttributePathStep) (interface{}, error) { - _, isAttributeName := step.(AttributeName) - _, isElementKeyString := step.(ElementKeyString) + attributeName, isAttributeName := step.(AttributeName) + elementKeyString, isElementKeyString := step.(ElementKeyString) if !isAttributeName && !isElementKeyString { return nil, ErrInvalidStep } var stepValue string if isAttributeName { - stepValue = string(step.(AttributeName)) + stepValue = string(attributeName) } if isElementKeyString { - stepValue = string(step.(ElementKeyString)) + stepValue = string(elementKeyString) } v, ok := m[stepValue] if !ok { diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path_error.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path_error.go index be000207217..df887c2fd61 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path_error.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/attribute_path_error.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/diff.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/diff.go index 328b1adf67c..72486b65e59 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/diff.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( @@ -122,8 +125,10 @@ func (val1 Value) Diff(val2 Value) ([]ValueDiff, error) { return true, nil } - // convert from an interface{} to a Value - value2 := value2I.(Value) + value2, ok := value2I.(Value) + if !ok { + return false, fmt.Errorf("unexpected type %T in Diff", value2I) + } // if they're both unknown, no need to continue if !value1.IsKnown() && !value2.IsKnown() { diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/doc.go index 414a797d96c..ea82da6e906 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tftypes provides a type system for Terraform configuration and state // values. // diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/list.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/list.go index a0bc2f37009..512c7c9004a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/list.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/map.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/map.go index 7c1e7d05ae0..0d299dcbee8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/map.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/object.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/object.go index ce2bf1325c6..b901c99e01e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/object.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/object.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/primitive.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/primitive.go index 0349f414553..5a631baea2a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/primitive.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/primitive.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/set.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/set.go index 00163067ea1..e4549b5945b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/set.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/tuple.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/tuple.go index 5ca5709a340..a5c5e52390f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/tuple.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/tuple.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/type.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/type.go index 4f0e4af7c85..c5965992187 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/type.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/type.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/unknown_value.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/unknown_value.go index 016d3bbe8ac..4aefe719e4c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/unknown_value.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/unknown_value.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes const ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value.go index f997347e2b2..af9f680f9c6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( @@ -8,7 +11,7 @@ import ( "strconv" "strings" - msgpack "github.com/vmihailenco/msgpack/v4" + msgpack "github.com/vmihailenco/msgpack/v5" ) // ValueConverter is an interface that provider-defined types can implement to @@ -540,6 +543,7 @@ func (val Value) IsFullyKnown() bool { case primitive: return true case List, Set, Tuple: + //nolint:forcetypeassert // NewValue func validates the type for _, v := range val.value.([]Value) { if !v.IsFullyKnown() { return false @@ -547,6 +551,7 @@ func (val Value) IsFullyKnown() bool { } return true case Map, Object: + //nolint:forcetypeassert // NewValue func validates the type for _, v := range val.value.(map[string]Value) { if !v.IsFullyKnown() { return false diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_json.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_json.go index b889b3bebf5..8a61918e2cc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_json.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( @@ -65,14 +68,19 @@ func jsonUnmarshal(buf []byte, typ Type, p *AttributePath, opts ValueFromJSONOpt case typ.Is(DynamicPseudoType): return jsonUnmarshalDynamicPseudoType(buf, typ, p, opts) case typ.Is(List{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return jsonUnmarshalList(buf, typ.(List).ElementType, p, opts) case typ.Is(Set{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return jsonUnmarshalSet(buf, typ.(Set).ElementType, p, opts) case typ.Is(Map{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return jsonUnmarshalMap(buf, typ.(Map).ElementType, p, opts) case typ.Is(Tuple{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return jsonUnmarshalTuple(buf, typ.(Tuple).ElementTypes, p, opts) case typ.Is(Object{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return jsonUnmarshalObject(buf, typ.(Object).AttributeTypes, p, opts) } return Value{}, p.NewErrorf("unknown type %s", typ) diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_msgpack.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_msgpack.go index c4047aeab2b..ed03ef9833d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_msgpack.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-go/tftypes/value_msgpack.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tftypes import ( @@ -7,8 +10,8 @@ import ( "math/big" "sort" - msgpack "github.com/vmihailenco/msgpack/v4" - msgpackCodes "github.com/vmihailenco/msgpack/v4/codes" + msgpack "github.com/vmihailenco/msgpack/v5" + msgpackCodes "github.com/vmihailenco/msgpack/v5/msgpcode" ) type msgPackUnknownType struct{} @@ -118,14 +121,19 @@ func msgpackUnmarshal(dec *msgpack.Decoder, typ Type, path *AttributePath) (Valu } return NewValue(Bool, rv), nil case typ.Is(List{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return msgpackUnmarshalList(dec, typ.(List).ElementType, path) case typ.Is(Set{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return msgpackUnmarshalSet(dec, typ.(Set).ElementType, path) case typ.Is(Map{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return msgpackUnmarshalMap(dec, typ.(Map).ElementType, path) case typ.Is(Tuple{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return msgpackUnmarshalTuple(dec, typ.(Tuple).ElementTypes, path) case typ.Is(Object{}): + //nolint:forcetypeassert // Is func above guarantees this type assertion return msgpackUnmarshalObject(dec, typ.(Object).AttributeTypes, path) } return Value{}, path.NewErrorf("unsupported type %s", typ.String()) @@ -364,15 +372,20 @@ func marshalMsgPack(val Value, typ Type, p *AttributePath, enc *msgpack.Encoder) case typ.Is(Bool): return marshalMsgPackBool(val, typ, p, enc) case typ.Is(List{}): - return marshalMsgPackList(val, typ, p, enc) + //nolint:forcetypeassert // Is func above guarantees this type assertion + return marshalMsgPackList(val, typ.(List), p, enc) case typ.Is(Set{}): - return marshalMsgPackSet(val, typ, p, enc) + //nolint:forcetypeassert // Is func above guarantees this type assertion + return marshalMsgPackSet(val, typ.(Set), p, enc) case typ.Is(Map{}): - return marshalMsgPackMap(val, typ, p, enc) + 
//nolint:forcetypeassert // Is func above guarantees this type assertion + return marshalMsgPackMap(val, typ.(Map), p, enc) case typ.Is(Tuple{}): - return marshalMsgPackTuple(val, typ, p, enc) + //nolint:forcetypeassert // Is func above guarantees this type assertion + return marshalMsgPackTuple(val, typ.(Tuple), p, enc) case typ.Is(Object{}): - return marshalMsgPackObject(val, typ, p, enc) + //nolint:forcetypeassert // Is func above guarantees this type assertion + return marshalMsgPackObject(val, typ.(Object), p, enc) } return fmt.Errorf("unknown type %s", typ) } @@ -459,7 +472,7 @@ func marshalMsgPackBool(val Value, typ Type, p *AttributePath, enc *msgpack.Enco return nil } -func marshalMsgPackList(val Value, typ Type, p *AttributePath, enc *msgpack.Encoder) error { +func marshalMsgPackList(val Value, typ List, p *AttributePath, enc *msgpack.Encoder) error { l, ok := val.value.([]Value) if !ok { return unexpectedValueTypeError(p, l, val.value, typ) @@ -469,7 +482,7 @@ func marshalMsgPackList(val Value, typ Type, p *AttributePath, enc *msgpack.Enco return p.NewErrorf("error encoding list length: %w", err) } for pos, i := range l { - err := marshalMsgPack(i, typ.(List).ElementType, p.WithElementKeyInt(pos), enc) + err := marshalMsgPack(i, typ.ElementType, p.WithElementKeyInt(pos), enc) if err != nil { return err } @@ -477,7 +490,7 @@ func marshalMsgPackList(val Value, typ Type, p *AttributePath, enc *msgpack.Enco return nil } -func marshalMsgPackSet(val Value, typ Type, p *AttributePath, enc *msgpack.Encoder) error { +func marshalMsgPackSet(val Value, typ Set, p *AttributePath, enc *msgpack.Encoder) error { s, ok := val.value.([]Value) if !ok { return unexpectedValueTypeError(p, s, val.value, typ) @@ -487,7 +500,7 @@ func marshalMsgPackSet(val Value, typ Type, p *AttributePath, enc *msgpack.Encod return p.NewErrorf("error encoding set length: %w", err) } for _, i := range s { - err := marshalMsgPack(i, typ.(Set).ElementType, p.WithElementKeyValue(i), enc) + err := 
marshalMsgPack(i, typ.ElementType, p.WithElementKeyValue(i), enc) if err != nil { return err } @@ -495,7 +508,7 @@ func marshalMsgPackSet(val Value, typ Type, p *AttributePath, enc *msgpack.Encod return nil } -func marshalMsgPackMap(val Value, typ Type, p *AttributePath, enc *msgpack.Encoder) error { +func marshalMsgPackMap(val Value, typ Map, p *AttributePath, enc *msgpack.Encoder) error { m, ok := val.value.(map[string]Value) if !ok { return unexpectedValueTypeError(p, m, val.value, typ) @@ -509,7 +522,7 @@ func marshalMsgPackMap(val Value, typ Type, p *AttributePath, enc *msgpack.Encod if err != nil { return p.NewErrorf("error encoding map key: %w", err) } - err = marshalMsgPack(v, typ.(Map).ElementType, p, enc) + err = marshalMsgPack(v, typ.ElementType, p, enc) if err != nil { return err } @@ -517,12 +530,12 @@ func marshalMsgPackMap(val Value, typ Type, p *AttributePath, enc *msgpack.Encod return nil } -func marshalMsgPackTuple(val Value, typ Type, p *AttributePath, enc *msgpack.Encoder) error { +func marshalMsgPackTuple(val Value, typ Tuple, p *AttributePath, enc *msgpack.Encoder) error { t, ok := val.value.([]Value) if !ok { return unexpectedValueTypeError(p, t, val.value, typ) } - types := typ.(Tuple).ElementTypes + types := typ.ElementTypes err := enc.EncodeArrayLen(len(types)) if err != nil { return p.NewErrorf("error encoding tuple length: %w", err) @@ -537,12 +550,12 @@ func marshalMsgPackTuple(val Value, typ Type, p *AttributePath, enc *msgpack.Enc return nil } -func marshalMsgPackObject(val Value, typ Type, p *AttributePath, enc *msgpack.Encoder) error { +func marshalMsgPackObject(val Value, typ Object, p *AttributePath, enc *msgpack.Encoder) error { o, ok := val.value.(map[string]Value) if !ok { return unexpectedValueTypeError(p, o, val.value, typ) } - types := typ.(Object).AttributeTypes + types := typ.AttributeTypes keys := make([]string, 0, len(types)) for k := range types { keys = append(keys, k) diff --git 
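The hunks above refactor the `marshalMsgPack*` helpers from taking the `Type` interface (and re-asserting `typ.(List)` internally) to taking the concrete `List`/`Set`/`Map`/`Tuple`/`Object`, with a single `Is()`-guarded assertion at the dispatch site. A minimal sketch of that "assert once at the boundary, pass the concrete type onward" pattern, using toy stand-in types rather than the real tftypes API:

```go
package main

import "fmt"

// Toy stand-ins for tftypes' Type and List; names are illustrative only.
type Type interface{ String() string }

type List struct{ ElementType Type }

func (l List) String() string { return "List" }

type String struct{}

func (s String) String() string { return "String" }

// Before the refactor, a helper like this took Type and asserted typ.(List)
// itself. After, it receives the concrete List, so no assertion (and no
// forcetypeassert lint finding) is needed inside the helper.
func describeList(typ List) string {
	return fmt.Sprintf("list of %s", typ.ElementType.String())
}

func describe(typ Type) string {
	switch t := typ.(type) { // one checked assertion at the dispatch site
	case List:
		return describeList(t) // concrete type flows to the helper
	default:
		return typ.String()
	}
}

func main() {
	fmt.Println(describe(List{ElementType: String{}})) // list of String
}
```

The type switch plays the role of the `typ.Is(List{})` guard in the diff: the assertion can no longer fail, which is exactly what the `//nolint:forcetypeassert` comments document.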
a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/levels.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/levels.go new file mode 100644 index 00000000000..ed475a13790 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/levels.go @@ -0,0 +1,81 @@ +package tfsdklog + +import ( + "sync" + + "github.com/hashicorp/go-hclog" +) + +var ( + // rootLevel stores the effective level of the root SDK logger during + // NewRootSDKLogger where the value is deterministically chosen based on + // environment variables, etc. This call generally happens with each new + // provider RPC request. If the environment variable values changed during + // runtime between calls, then inflight provider requests checking this + // value would receive the most up-to-date value which would potentially + // differ with the actual in-context logger level. This tradeoff would only + // effect the inflight requests and should not be an overall performance + // concern in the case of this level causing more context checks until the + // request is over. + rootLevel hclog.Level = hclog.NoLevel + + // rootLevelMutex is a read-write mutex that protects rootLevel from + // triggering the data race detector. + rootLevelMutex = sync.RWMutex{} + + // subsystemLevels stores the effective level of all subsystem SDK loggers + // during NewSubsystem where the value is deterministically chosen based on + // environment variables, etc. This call generally happens with each new + // provider RPC request. If the environment variable values changed during + // runtime between calls, then inflight provider requests checking this + // value would receive the most up-to-date value which would potentially + // differ with the actual in-context logger level. 
This tradeoff would only + // effect the inflight requests and should not be an overall performance + // concern in the case of this level causing more context checks until the + // request is over. + subsystemLevels map[string]hclog.Level = make(map[string]hclog.Level) + + // subsystemLevelsMutex is a read-write mutex that protects the + // subsystemLevels map from concurrent read and write panics. + subsystemLevelsMutex = sync.RWMutex{} +) + +// subsystemWouldLog returns true if the subsystem SDK logger would emit a log +// at the given level. This is performed outside the context-based logger for +// performance. +func subsystemWouldLog(subsystem string, level hclog.Level) bool { + subsystemLevelsMutex.RLock() + + setLevel, ok := subsystemLevels[subsystem] + + subsystemLevelsMutex.RUnlock() + + if !ok { + return false + } + + return wouldLog(setLevel, level) +} + +// rootWouldLog returns true if the root SDK logger would emit a log at the +// given level. This is performed outside the context-based logger for +// performance. +func rootWouldLog(level hclog.Level) bool { + rootLevelMutex.RLock() + + setLevel := rootLevel + + rootLevelMutex.RUnlock() + + return wouldLog(setLevel, level) +} + +// wouldLog returns true if the set level would emit a log at the given +// level. This is performed outside the context-based logger for performance. 
+func wouldLog(setLevel, checkLevel hclog.Level) bool { + if checkLevel == hclog.Off { + return false + } + + return checkLevel >= setLevel +} diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/sdk.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/sdk.go index 2891ddf29c0..4ffb2cc2f53 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/sdk.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/sdk.go @@ -35,6 +35,14 @@ func NewRootSDKLogger(ctx context.Context, options ...logging.Option) context.Co if opts.Level == hclog.NoLevel { opts.Level = hclog.Trace } + + // Cache root logger level outside context for performance reasons. + rootLevelMutex.Lock() + + rootLevel = opts.Level + + rootLevelMutex.Unlock() + loggerOptions := &hclog.LoggerOptions{ Name: opts.Name, Level: opts.Level, diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/subsystem.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/subsystem.go index 4d2a1a1a54a..0aeb5463cab 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/subsystem.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-log/tfsdklog/subsystem.go @@ -58,6 +58,13 @@ func NewSubsystem(ctx context.Context, subsystem string, options ...logging.Opti subLogger = hclog.New(subLoggerOptions) } + // Cache subsystem logger level outside context for performance reasons. + subsystemLevelsMutex.Lock() + + subsystemLevels[subsystem] = subLoggerTFLoggerOpts.Level + + subsystemLevelsMutex.Unlock() + // Set the configured log level if subLoggerTFLoggerOpts.Level != hclog.NoLevel { subLogger.SetLevel(subLoggerTFLoggerOpts.Level) @@ -97,6 +104,10 @@ func SubsystemSetField(ctx context.Context, subsystem, key string, value interfa // subsystem logger, e.g. 
by the `SubsystemSetField()` function, and across // multiple maps. func SubsystemTrace(ctx context.Context, subsystem, msg string, additionalFields ...map[string]interface{}) { + if !subsystemWouldLog(subsystem, hclog.Trace) { + return + } + logger := logging.GetSDKSubsystemLogger(ctx, subsystem) if logger == nil { if logging.GetSDKRootLogger(ctx) == nil { @@ -122,6 +133,10 @@ func SubsystemTrace(ctx context.Context, subsystem, msg string, additionalFields // subsystem logger, e.g. by the `SubsystemSetField()` function, and across // multiple maps. func SubsystemDebug(ctx context.Context, subsystem, msg string, additionalFields ...map[string]interface{}) { + if !subsystemWouldLog(subsystem, hclog.Debug) { + return + } + logger := logging.GetSDKSubsystemLogger(ctx, subsystem) if logger == nil { if logging.GetSDKRootLogger(ctx) == nil { @@ -147,6 +162,10 @@ func SubsystemDebug(ctx context.Context, subsystem, msg string, additionalFields // subsystem logger, e.g. by the `SubsystemSetField()` function, and across // multiple maps. func SubsystemInfo(ctx context.Context, subsystem, msg string, additionalFields ...map[string]interface{}) { + if !subsystemWouldLog(subsystem, hclog.Info) { + return + } + logger := logging.GetSDKSubsystemLogger(ctx, subsystem) if logger == nil { if logging.GetSDKRootLogger(ctx) == nil { @@ -172,6 +191,10 @@ func SubsystemInfo(ctx context.Context, subsystem, msg string, additionalFields // subsystem logger, e.g. by the `SubsystemSetField()` function, and across // multiple maps. func SubsystemWarn(ctx context.Context, subsystem, msg string, additionalFields ...map[string]interface{}) { + if !subsystemWouldLog(subsystem, hclog.Warn) { + return + } + logger := logging.GetSDKSubsystemLogger(ctx, subsystem) if logger == nil { if logging.GetSDKRootLogger(ctx) == nil { @@ -197,6 +220,10 @@ func SubsystemWarn(ctx context.Context, subsystem, msg string, additionalFields // subsystem logger, e.g. 
by the `SubsystemSetField()` function, and across // multiple maps. func SubsystemError(ctx context.Context, subsystem, msg string, additionalFields ...map[string]interface{}) { + if !subsystemWouldLog(subsystem, hclog.Error) { + return + } + logger := logging.GetSDKSubsystemLogger(ctx, subsystem) if logger == nil { if logging.GetSDKRootLogger(ctx) == nil { diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/diagnostic.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/diagnostic.go index 3b1ced344f5..5a533042697 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/diagnostic.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/diagnostic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package diag import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/helpers.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/helpers.go index 08e9291784d..4f2b7a88332 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/helpers.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/diag/helpers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package diag import "fmt" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/id/id.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/id/id.go index f0af80efd5c..0491252c855 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/id/id.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/id/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
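The tfsdklog changes above cache each logger's effective level outside the context (`rootLevel`, `subsystemLevels`) and guard every `Subsystem*` call with an early return, whose comparison bottoms out in `wouldLog`. A self-contained sketch of that comparison, assuming hclog's documented level ordering (`NoLevel`=0 through `Off`=6); the constants here are local mirrors, not the hclog package itself:

```go
package main

import "fmt"

// Local mirror of hclog's level ordering (assumed: NoLevel=0 .. Off=6).
const (
	NoLevel = iota
	Trace
	Debug
	Info
	Warn
	Error
	Off
)

// wouldLog reproduces the fast-path check from the diff: a message at
// checkLevel is emitted only if it is at or above the configured setLevel,
// and never when the check level is Off.
func wouldLog(setLevel, checkLevel int) bool {
	if checkLevel == Off {
		return false
	}
	return checkLevel >= setLevel
}

func main() {
	// With the cached level at Info, a Debug call returns early before any
	// context lookup or field-map work, which is the performance win the
	// comments in levels.go describe.
	fmt.Println(wouldLog(Info, Debug)) // false
	fmt.Println(wouldLog(Info, Warn))  // true
}
```

This is why stale cached levels are an acceptable tradeoff per the diff's comments: the worst case is an extra context check on in-flight requests, never a dropped log at a level the context logger would emit.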
+// SPDX-License-Identifier: MPL-2.0 + package id import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging.go index 74eb7f6667a..ea0764d13ad 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging_http_transport.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging_http_transport.go index 335e1784e22..ca8ba4ecce9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging_http_transport.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/logging_http_transport.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/transport.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/transport.go index bda3813d961..774bc480f69 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/transport.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging/transport.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/aliases.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/aliases.go index 582adada891..275137d8129 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/aliases.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/aliases.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/environment_variables.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/environment_variables.go index fed6aa25c32..981908799e6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/environment_variables.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/environment_variables.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource // Environment variables for acceptance testing. Additional environment diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/json.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/json.go index 345abf71995..9cd6a1b9832 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/json.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go index 120f04276b3..14d869466b5 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/state_shim.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/state_shim.go index 497bcc1c996..43c809f2ba6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/state_shim.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/state_shim.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_providers.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_providers.go index 19a548cdb1f..9639cb04dfc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_providers.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_providers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_validate.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_validate.go index 4a4911615cb..8eb85a14abf 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_validate.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testcase_validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go index caadc3ddc5f..a14eede9e5e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_config.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_config.go index 35bfea0988d..f56c885be32 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_config.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new.go index 3298ccc86a5..14b247306af 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_config.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_config.go index 0d4e0b0e4ae..a52008768fb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_config.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_import_state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_import_state.go index 48ec26c1ee0..4ddf56c5b28 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_import_state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_import_state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_refresh_state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_refresh_state.go index 37ed3fc7cdd..627190a9d19 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_refresh_state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_new_refresh_state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go index 3cdb4b9ecc4..8f5a731c32e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // These test helpers were developed by the AWS provider team at HashiCorp. package resource diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_providers.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_providers.go index 757f8f638cf..9b759bde03a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_providers.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_providers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_validate.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_validate.go index 0ba655168c8..7dbf883b504 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_validate.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/teststep_validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/error.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/error.go index d9175bdcbe1..789c712f51e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/error.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/error.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package retry import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/state.go index e2d8bfff40c..4780090d956 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package retry import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/wait.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/wait.go index 913acb12d69..c8d2de14398 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/wait.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package retry import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/context.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/context.go index ed8d0964bfa..bced9ead54d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/context.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema type Key string diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/core_schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/core_schema.go index bb1fc1dbe54..736af218da2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/core_schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/core_schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -364,5 +367,5 @@ func (r *Resource) CoreConfigSchema() *configschema.Block { } func (r *Resource) coreConfigSchema() *configschema.Block { - return schemaMap(r.Schema).CoreConfigSchema() + return schemaMap(r.SchemaMap()).CoreConfigSchema() } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/data_source_resource_shim.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/data_source_resource_shim.go index 8d93750aede..3a01c0f32f6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/data_source_resource_shim.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/data_source_resource_shim.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/equal.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/equal.go index d5e20e03889..92a02b3b3ac 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/equal.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/equal.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema // Equal is an interface that checks for deep equality between two objects. 
diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader.go index 855bc4028a1..6aae74d95b6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -81,7 +84,7 @@ func addrToSchema(addr []string, schemaMap map[string]*Schema) []*Schema { case *Resource: current = &Schema{ Type: typeObject, - Elem: v.Schema, + Elem: v.SchemaMap(), } case *Schema: current = v diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_config.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_config.go index cd4a993a35c..df317c20bb0 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_config.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -300,7 +303,7 @@ func (r *ConfigFieldReader) hasComputedSubKeys(key string, schema *Schema) bool switch t := schema.Elem.(type) { case *Resource: - for k, schema := range t.Schema { + for k, schema := range t.SchemaMap() { if r.Config.IsComputed(prefix + k) { return true } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_diff.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_diff.go index ca3b714b1dd..c9da00e9119 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_diff.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_map.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_map.go index 6fa19152274..6697606c6f2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_map.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_multi.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_multi.go index 89ad3a86f2b..da4c9c81500 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_multi.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_reader_multi.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer.go index 9abc41b54f4..be4fae5060e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema // FieldWriters are responsible for writing fields by address into diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer_map.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer_map.go index 39c708b27fa..c9147cec191 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer_map.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/field_writer_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go index 17a0de4cc63..831d78777af 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -1300,7 +1303,7 @@ func stripResourceModifiers(r *Resource) *Resource { newResource.CustomizeDiff = nil newResource.Schema = map[string]*Schema{} - for k, s := range r.Schema { + for k, s := range r.SchemaMap() { newResource.Schema[k] = stripSchema(s) } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/json.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/json.go index 265099a6b6f..8cf22a51855 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/json.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/provider.go index 91a21b38908..75e1a7c42e1 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go index 28a7600b077..8a1472e45d8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -53,13 +56,25 @@ var ReservedResourceFields = []string{ // being implemented. type Resource struct { // Schema is the structure and type information for this component. This - // field is required for all Resource concepts. + // field, or SchemaFunc, is required for all Resource concepts. To prevent + // storing all schema information in memory for the lifecycle of a provider, + // use SchemaFunc instead. // // The keys of this map are the names used in a practitioner configuration, // such as the attribute or block name. The values describe the structure // and type information of that attribute or block. Schema map[string]*Schema + // SchemaFunc is the structure and type information for this component. This + // field, or Schema, is required for all Resource concepts. Use this field + // instead of Schema on top level Resource declarations to prevent storing + // all schema information in memory for the lifecycle of a provider. + // + // The keys of this map are the names used in a practitioner configuration, + // such as the attribute or block name. The values describe the structure + // and type information of that attribute or block. + SchemaFunc func() map[string]*Schema + // SchemaVersion is the version number for this resource's Schema // definition. This field is only valid when the Resource is a managed // resource. 
@@ -582,6 +597,17 @@ type Resource struct { UseJSONNumber bool } +// SchemaMap returns the schema information for this Resource whether it is +// defined via the SchemaFunc field or Schema field. The SchemaFunc field, if +// defined, takes precedence over the Schema field. +func (r *Resource) SchemaMap() map[string]*Schema { + if r.SchemaFunc != nil { + return r.SchemaFunc() + } + + return r.Schema +} + // ShimInstanceStateFromValue converts a cty.Value to a // terraform.InstanceState. func (r *Resource) ShimInstanceStateFromValue(state cty.Value) (*terraform.InstanceState, error) { @@ -591,7 +617,7 @@ func (r *Resource) ShimInstanceStateFromValue(state cty.Value) (*terraform.Insta // We now rebuild the state through the ResourceData, so that the set indexes // match what helper/schema expects. - data, err := schemaMap(r.Schema).Data(s, nil) + data, err := schemaMap(r.SchemaMap()).Data(s, nil) if err != nil { return nil, err } @@ -764,7 +790,8 @@ func (r *Resource) Apply( s *terraform.InstanceState, d *terraform.InstanceDiff, meta interface{}) (*terraform.InstanceState, diag.Diagnostics) { - data, err := schemaMap(r.Schema).Data(s, d) + schema := schemaMap(r.SchemaMap()) + data, err := schema.Data(s, d) if err != nil { return s, diag.FromErr(err) } @@ -821,7 +848,7 @@ func (r *Resource) Apply( } // Reset the data to be stateless since we just destroyed - data, err = schemaMap(r.Schema).Data(nil, d) + data, err = schema.Data(nil, d) if err != nil { return nil, append(diags, diag.FromErr(err)...) 
} @@ -865,7 +892,7 @@ func (r *Resource) Diff( return nil, fmt.Errorf("[ERR] Error decoding timeout: %s", err) } - instanceDiff, err := schemaMap(r.Schema).Diff(ctx, s, c, r.CustomizeDiff, meta, true) + instanceDiff, err := schemaMap(r.SchemaMap()).Diff(ctx, s, c, r.CustomizeDiff, meta, true) if err != nil { return instanceDiff, err } @@ -887,7 +914,7 @@ func (r *Resource) SimpleDiff( c *terraform.ResourceConfig, meta interface{}) (*terraform.InstanceDiff, error) { - instanceDiff, err := schemaMap(r.Schema).Diff(ctx, s, c, r.CustomizeDiff, meta, false) + instanceDiff, err := schemaMap(r.SchemaMap()).Diff(ctx, s, c, r.CustomizeDiff, meta, false) if err != nil { return instanceDiff, err } @@ -912,7 +939,7 @@ func (r *Resource) SimpleDiff( // Validate validates the resource configuration against the schema. func (r *Resource) Validate(c *terraform.ResourceConfig) diag.Diagnostics { - diags := schemaMap(r.Schema).Validate(c) + diags := schemaMap(r.SchemaMap()).Validate(c) if r.DeprecationMessage != "" { diags = append(diags, diag.Diagnostic{ @@ -934,7 +961,7 @@ func (r *Resource) ReadDataApply( ) (*terraform.InstanceState, diag.Diagnostics) { // Data sources are always built completely from scratch // on each read, so the source state is always nil. - data, err := schemaMap(r.Schema).Data(nil, d) + data, err := schemaMap(r.SchemaMap()).Data(nil, d) if err != nil { return nil, diag.FromErr(err) } @@ -975,10 +1002,12 @@ func (r *Resource) RefreshWithoutUpgrade( } } + schema := schemaMap(r.SchemaMap()) + if r.Exists != nil { // Make a copy of data so that if it is modified it doesn't // affect our Read later. 
- data, err := schemaMap(r.Schema).Data(s, nil) + data, err := schema.Data(s, nil) if err != nil { return s, diag.FromErr(err) } @@ -1001,7 +1030,7 @@ func (r *Resource) RefreshWithoutUpgrade( } } - data, err := schemaMap(r.Schema).Data(s, nil) + data, err := schema.Data(s, nil) if err != nil { return s, diag.FromErr(err) } @@ -1020,7 +1049,7 @@ func (r *Resource) RefreshWithoutUpgrade( state = nil } - schemaMap(r.Schema).handleDiffSuppressOnRefresh(ctx, s, state) + schema.handleDiffSuppressOnRefresh(ctx, s, state) return r.recordCurrentSchemaVersion(state), diags } @@ -1066,13 +1095,14 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap, writable bool) error } } + schema := schemaMap(r.SchemaMap()) tsm := topSchemaMap if r.isTopLevel() && writable { // All non-Computed attributes must be ForceNew if Update is not defined if !r.updateFuncSet() { nonForceNewAttrs := make([]string, 0) - for k, v := range r.Schema { + for k, v := range schema { if !v.ForceNew && !v.Computed { nonForceNewAttrs = append(nonForceNewAttrs, k) } @@ -1083,19 +1113,19 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap, writable bool) error } } else { nonUpdateableAttrs := make([]string, 0) - for k, v := range r.Schema { + for k, v := range schema { if v.ForceNew || v.Computed && !v.Optional { nonUpdateableAttrs = append(nonUpdateableAttrs, k) } } - updateableAttrs := len(r.Schema) - len(nonUpdateableAttrs) + updateableAttrs := len(schema) - len(nonUpdateableAttrs) if updateableAttrs == 0 { return fmt.Errorf( "All fields are ForceNew or Computed w/out Optional, Update is superfluous") } } - tsm = schemaMap(r.Schema) + tsm = schema // Destroy, and Read are required if !r.readFuncSet() { @@ -1154,7 +1184,7 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap, writable bool) error // Data source if r.isTopLevel() && !writable { - tsm = schemaMap(r.Schema) + tsm = schema for k := range tsm { if isReservedDataSourceFieldName(k) { return fmt.Errorf("%s is a reserved 
field name", k) @@ -1162,6 +1192,10 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap, writable bool) error } } + if r.SchemaFunc != nil && r.Schema != nil { + return fmt.Errorf("SchemaFunc and Schema should not both be set") + } + // check context funcs are not set alongside their nonctx counterparts if r.CreateContext != nil && r.Create != nil { return fmt.Errorf("CreateContext and Create should not both be set") @@ -1204,7 +1238,7 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap, writable bool) error return fmt.Errorf("Delete and DeleteWithoutTimeout should not both be set") } - return schemaMap(r.Schema).InternalValidate(tsm) + return schema.InternalValidate(tsm) } func isReservedDataSourceFieldName(name string) bool { @@ -1251,7 +1285,7 @@ func isReservedResourceFieldName(name string) bool { // // This function is useful for unit tests and ResourceImporter functions. func (r *Resource) Data(s *terraform.InstanceState) *ResourceData { - result, err := schemaMap(r.Schema).Data(s, nil) + result, err := schemaMap(r.SchemaMap()).Data(s, nil) if err != nil { // At the time of writing, this isn't possible (Data never returns // non-nil errors). We panic to find this in the future if we have to. @@ -1278,7 +1312,7 @@ func (r *Resource) Data(s *terraform.InstanceState) *ResourceData { // TODO: May be able to be removed with the above ResourceData function. 
func (r *Resource) TestResourceData() *ResourceData { return &ResourceData{ - schema: r.Schema, + schema: r.SchemaMap(), } } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data.go index 396b5bc6f15..b7eb7d2711f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data_get_source.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data_get_source.go index 9e4cc53e78f..0639540b47b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data_get_source.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_data_get_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema // This code was previously generated with a go:generate directive calling: diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_diff.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_diff.go index 27575b29770..6af9490b9e0 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_diff.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_importer.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_importer.go index 3b17c8e96d4..ad9a5c3b9c6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_importer.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_importer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_timeout.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_timeout.go index 93f6fc243cf..90d29e6259a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_timeout.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource_timeout.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go index 4a3e146b60d..176288b0cd8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // schema is a high-level framework for easily writing new providers // for Terraform. Usage of schema is recommended over attempting to write // to the low-level plugin interfaces manually. 
@@ -88,10 +91,6 @@ type Schema struct { // Optional indicates whether the practitioner can choose to not enter // a value in the configuration for this attribute. Optional cannot be used // with Required. - // - // If also using Default or DefaultFunc, Computed should also be enabled, - // otherwise Terraform can output warning logs or "inconsistent result - // after apply" errors. Optional bool // Computed indicates whether the provider may return its own value for @@ -940,7 +939,7 @@ func (m schemaMap) internalValidate(topSchemaMap schemaMap, attrsOnly bool) erro case *Resource: attrsOnly := attrsOnly || v.ConfigMode == SchemaConfigModeAttr - if err := schemaMap(t.Schema).internalValidate(topSchemaMap, attrsOnly); err != nil { + if err := schemaMap(t.SchemaMap()).internalValidate(topSchemaMap, attrsOnly); err != nil { return err } case *Schema: @@ -1070,7 +1069,7 @@ func checkKeysAgainstSchemaFlags(k string, keys []string, topSchemaMap schemaMap return fmt.Errorf("%s configuration block reference (%s) can only be used with TypeList and MaxItems: 1 configuration blocks", k, key) } - sm = schemaMap(subResource.Schema) + sm = subResource.SchemaMap() } if target == nil { @@ -1260,7 +1259,7 @@ func (m schemaMap) diffList( case *Resource: // This is a complex resource for i := 0; i < maxLen; i++ { - for k2, schema := range t.Schema { + for k2, schema := range t.SchemaMap() { subK := fmt.Sprintf("%s.%d.%s", k, i, k2) err := m.diff(ctx, subK, schema, diff, d, all) if err != nil { @@ -1507,7 +1506,7 @@ func (m schemaMap) diffSet( switch t := schema.Elem.(type) { case *Resource: // This is a complex resource - for k2, schema := range t.Schema { + for k2, schema := range t.SchemaMap() { subK := fmt.Sprintf("%s.%s.%s", k, code, k2) err := m.diff(ctx, subK, schema, diff, d, true) if err != nil { @@ -1974,7 +1973,7 @@ func (m schemaMap) validateList( switch t := schema.Elem.(type) { case *Resource: // This is a sub-resource - diags = append(diags, m.validateObject(key, 
t.Schema, c, p)...) + diags = append(diags, m.validateObject(key, t.SchemaMap(), c, p)...) case *Schema: diags = append(diags, m.validateType(key, raw, t, c, p)...) } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/serialize.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/serialize.go index 0e0e3cca9e5..d629240fd38 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/serialize.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/serialize.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -88,7 +91,7 @@ func SerializeResourceForHash(buf *bytes.Buffer, val interface{}, resource *Reso if val == nil { return } - sm := resource.Schema + sm := resource.SchemaMap() m := val.(map[string]interface{}) var keys []string allComputed := true diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/set.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/set.go index b937ab6e561..e897817fd30 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/set.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/shims.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/shims.go index 9c7f0906c64..e8baebd70cf 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/shims.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/shims.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -40,7 +43,7 @@ func diffFromValues(ctx context.Context, prior, planned, config cty.Value, res * removeConfigUnknowns(cfg.Config) removeConfigUnknowns(cfg.Raw) - diff, err := schemaMap(res.Schema).Diff(ctx, instanceState, cfg, cust, nil, false) + diff, err := schemaMap(res.SchemaMap()).Diff(ctx, instanceState, cfg, cust, nil, false) if err != nil { return nil, err } diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/testing.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/testing.go index f345f832631..bdf56d9012f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/testing.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/testing.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/unknown.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/unknown.go index c58d3648ab4..1089e4d2d79 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/unknown.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/unknown.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/valuetype.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/valuetype.go index 2b355cc9ca1..2563ff84106 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/valuetype.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/valuetype.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema // This code was previously generated with a go:generate directive calling: diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/expand_json.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/expand_json.go index b3eb90fdff5..520afeffcb2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/expand_json.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/expand_json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package structure import "encoding/json" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/flatten_json.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/flatten_json.go index 578ad2eade3..d0913ac65fc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/flatten_json.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/flatten_json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package structure import "encoding/json" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/normalize_json.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/normalize_json.go index 3256b476dd0..0521420849b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/normalize_json.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/normalize_json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package structure import "encoding/json" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/suppress_json_diff.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/suppress_json_diff.go index c99b73846ee..c610616374c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/suppress_json_diff.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure/suppress_json_diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package structure import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/float.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/float.go index 05a30531e2d..dfc261842d3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/float.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/float.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/int.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/int.go index f2738201cc0..2873897f276 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/int.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/int.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/list.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/list.go index 75702b5b8a3..9f0eb4b65d6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/list.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validation import "fmt" diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/map.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/map.go index 9e4510b1e55..7c925090541 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/map.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go index b0515c8d085..f24f7fa28ab 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/network.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/network.go index 8aa7139a004..9bc6da2b8ee 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/network.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/network.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package validation import ( @@ -89,7 +92,7 @@ func IsCIDR(i interface{}, k string) (warnings []string, errors []error) { } if _, _, err := net.ParseCIDR(v); err != nil { - errors = append(errors, fmt.Errorf("expected %q to be a valid IPv4 Value, got %v: %v", k, i, err)) + errors = append(errors, fmt.Errorf("expected %q to be a valid CIDR Value, got %v: %v", k, i, err)) } return warnings, errors diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/strings.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/strings.go index e739a1a1bfa..19c9055f39b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/strings.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/strings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go index d861f5a2af9..8dadd66fcfb 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/time.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/time.go index fde3a019923..940a9ec1cf3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/time.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/time.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/uuid.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/uuid.go index 00783fafce8..91122ce714d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/uuid.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/uuid.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/web.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/web.go index 615e0feb7c1..2e875442f4d 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/web.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/web.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package validation import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/doc.go index 46093314fe2..0d29d9f4563 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package addrs contains types that represent "addresses", which are // references to specific objects within a Terraform configuration or // state. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/instance_key.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/instance_key.go index 064aeda28c3..8373297f876 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/instance_key.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/instance_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package addrs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module.go index 98699f46e27..8dbbb469d48 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package addrs // Module is an address for a module call within configuration. 
This is diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module_instance.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module_instance.go index 113d3d675ab..12cd83051a7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module_instance.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/addrs/module_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package addrs import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/coerce_value.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/coerce_value.go index 48278abed25..d12ff8cced9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/coerce_value.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/coerce_value.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configschema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/doc.go index caf8d730c1e..d96be9c7f0f 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package configschema contains types for describing the expected structure // of a configuration block whose shape is not known until runtime. 
// diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/empty_value.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/empty_value.go index 51b8c5d24c0..3c9573bc56a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/empty_value.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/empty_value.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configschema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/implied_type.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/implied_type.go index edc9dadccee..4de413519f6 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/implied_type.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/implied_type.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configschema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/schema.go index c41751b71ca..c445b4ba55e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/configschema/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configschema import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/flatmap.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/flatmap.go index b96e17c5853..2bad034de94 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/flatmap.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/flatmap.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package hcl2shim import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/paths.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/paths.go index e557845a703..628a8bf6868 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/paths.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/paths.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package hcl2shim import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values.go index 91e91547a85..f5f5de3b259 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package hcl2shim import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values_equiv.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values_equiv.go index 87638b4e499..6b2be2239d3 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values_equiv.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/configs/hcl2shim/values_equiv.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package hcl2shim import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/hashcode/hashcode.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/hashcode/hashcode.go index 6ccc5231834..97bc709b2a7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/hashcode/hashcode.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/hashcode/hashcode.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package hashcode import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/context.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/context.go index 6036b0d0446..0fe8002aa7a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/context.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/environment_variables.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/environment_variables.go index db1a27a8147..2ffc73eee6c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/environment_variables.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/environment_variables.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging // Environment variables. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_resource.go index a889601c7c4..1b1459f2461 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_schema.go index b2fe71d15d6..0ecf6bf2e48 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/helper_schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logging import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/keys.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/keys.go index ad510721847..983fde437a2 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/keys.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging/keys.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logging // Structured logging keys. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plans/objchange/normalize_obj.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plans/objchange/normalize_obj.go index d6aabbe3382..b888237fc29 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plans/objchange/normalize_obj.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plans/objchange/normalize_obj.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package objchange import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go index e02c1e44394..672f75e6d83 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package convert import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/schema.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/schema.go index 07d0b89783c..e2b4e431ce9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/schema.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package convert import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/config.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/config.go index 920c4716558..d3cb35bcec9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/config.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugintest import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/doc.go index 3f84c6a37e4..1b34a0b233e 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package plugintest contains utilities to help with writing tests for // Terraform plugins. 
// diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go index 6fd001a07d0..6df86f89f8c 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugintest // Environment variables diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/guard.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/guard.go index ad796be08c6..77f87398009 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/guard.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/guard.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugintest import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go index bfe89e1b9ea..f9617887218 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugintest import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/util.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/util.go index c77a9cb0827..0d4bbe52667 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/util.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/util.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugintest import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go index dba079eb266..05b02844206 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugintest import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/config_traversals.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/config_traversals.go index 2201d0c0bef..6208117cbff 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/config_traversals.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/config_traversals.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfdiags import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/contextual.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/contextual.go index b063bc1de23..a9b5c7e83e8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/contextual.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/contextual.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfdiags import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic.go index 21fb177262f..547271346aa 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfdiags type Diagnostic interface { diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic_base.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic_base.go index 34816208786..505692ce51a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic_base.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostic_base.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfdiags // diagnosticBase can be embedded in other diagnostic structs to get diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostics.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostics.go index 2e04b72a629..4fc99c1bb70 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostics.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/diagnostics.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfdiags import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/doc.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/doc.go index c427879ebc7..23be0a8bece 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/doc.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/doc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package tfdiags is a utility package for representing errors and // warnings in a manner that allows us to produce good messages for the // user. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/error.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/error.go index b63c5d74150..f7c9c65d382 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/error.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/error.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfdiags // nativeError is a Diagnostic implementation that wraps a normal Go error diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/simple_warning.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/simple_warning.go index 0919170456c..0c90c478891 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/simple_warning.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags/simple_warning.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfdiags type simpleWarning string diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/meta/meta.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/meta/meta.go index 35f20f7ee53..494f61ce1bd 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/meta/meta.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/meta/meta.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // The meta package provides a location to set the release version // and any other relevant metadata for the SDK. // diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/debug.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/debug.go index e1875e134cb..3e33dfed6b9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/debug.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/debug.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/serve.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/serve.go index cce6212b159..f089ab5215b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/serve.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/plugin/serve.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package plugin import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/diff.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/diff.go index 8a2940f31e2..7b988d9f3dc 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/diff.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/instancetype.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/instancetype.go index c0d1288f00b..1871445819a 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/instancetype.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/instancetype.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package terraform // This code was previously generated with a go:generate directive calling: diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource.go index 703547745e8..0da593711b9 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_address.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_address.go index ec2665d3133..8d92fbb5e45 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_address.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_address.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_mode.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_mode.go index d2a806c0ba9..2d7b10bcff7 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_mode.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_mode.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package terraform // This code was previously generated with a go:generate directive calling: diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_provider.go index ece8fc66042..c8e7008c0a8 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/resource_provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package terraform // ResourceType is a type of resource that a resource provider can manage. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/schemas.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/schemas.go index 07e5a84fa1d..86ad0e7f1cf 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/schemas.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/schemas.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state.go index 1a4d3d6d112..9357070b401 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state_filter.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state_filter.go index d2229f8ce76..caf2c79674b 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state_filter.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/state_filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/util.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/util.go index 01ac810f103..6353ad27d95 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/util.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/terraform/util.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package terraform import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.copywrite.hcl b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.copywrite.hcl new file mode 100644 index 00000000000..235a80dc462 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.copywrite.hcl @@ -0,0 +1,7 @@ +schema_version = 1 + +project { + license = "MPL-2.0" + copyright_year = 2021 + header_ignore = [] +} diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.go-version b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.go-version index adc97d8e221..bc4493477ae 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.go-version +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/.go-version @@ -1 +1 @@ -1.18 +1.19 diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/errors.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/errors.go index 1ca735383a6..cf977115a55 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/errors.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfaddr import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module.go index 6af0c5976b5..f000a7410ee 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfaddr import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module_package.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module_package.go index d8ad2534a09..be065966138 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module_package.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/module_package.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfaddr import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/provider.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/provider.go index 23e1e221f2f..7fb252ea222 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/provider.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-registry-address/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfaddr import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/CHANGELOG.md b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/CHANGELOG.md new file mode 100644 index 00000000000..ed9f9932a93 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/CHANGELOG.md @@ -0,0 +1,13 @@ +## v0.1.1 + +The `disco.Disco` and `auth.CachingCredentialsSource` implementations are now safe for concurrent calls. Previously concurrent calls could potentially corrupt the internal cache maps or cause the Go runtime to panic. + +## v0.1.0 + +#### Features: + +- Adds hostname `Alias` method to service discovery, making it possible to interpret one hostname as another. 
+ +## v0.0.1 + +Initial release diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/CONTRIBUTING.md b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/CONTRIBUTING.md new file mode 100644 index 00000000000..c12e3bb41a7 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/CONTRIBUTING.md @@ -0,0 +1,3 @@ +# Contributing to the svchost library + +If you find an issue or would like to add a feature, please add an issue in GitHub. We welcome your contributions - fork the repo and submit a pull request. diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/LICENSE b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/LICENSE index 82b4de97c7e..342bbb5bb91 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/LICENSE +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/LICENSE @@ -1,3 +1,5 @@ +Copyright (c) 2019 HashiCorp, Inc. + Mozilla Public License, version 2.0 1. 
Definitions diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/README.md b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/README.md new file mode 100644 index 00000000000..3a12f013be4 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/README.md @@ -0,0 +1,9 @@ +# terraform-svchost + +[![CI Tests](https://github.com/hashicorp/terraform-svchost/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/hashicorp/terraform-svchost/actions/workflows/ci.yml) +[![GitHub license](https://img.shields.io/github/license/hashicorp/terraform-svchost.svg)](https://github.com/hashicorp/terraform-svchost/blob/main/LICENSE) +[![GoDoc](https://godoc.org/github.com/hashicorp/terraform-svchost?status.svg)](https://godoc.org/github.com/hashicorp/terraform-svchost) +[![Go Report Card](https://goreportcard.com/badge/github.com/hashicorp/terraform-svchost)](https://goreportcard.com/report/github.com/hashicorp/terraform-svchost) +[![GitHub issues](https://img.shields.io/github/issues/hashicorp/terraform-svchost.svg)](https://github.com/hashicorp/terraform-svchost/issues) + +This package provides friendly hostnames, and is used by [terraform](https://github.com/hashicorp/terraform). diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/label_iter.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/label_iter.go index af8ccbab208..eb875681445 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/label_iter.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/label_iter.go @@ -1,3 +1,5 @@ +// Copyright (c) HashiCorp, Inc. 
+ package svchost import ( diff --git a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/svchost.go b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/svchost.go index 4060b767e58..45a702978ef 100644 --- a/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/svchost.go +++ b/.ci/providerlint/vendor/github.com/hashicorp/terraform-svchost/svchost.go @@ -1,3 +1,5 @@ +// Copyright (c) HashiCorp, Inc. + // Package svchost deals with the representations of the so-called "friendly // hostnames" that we use to represent systems that provide Terraform-native // remote services, such as module registry, remote operations, etc. @@ -101,6 +103,18 @@ func ForComparison(given string) (Hostname, error) { var err error portPortion, err = normalizePortPortion(portPortion) if err != nil { + // We can get in here if someone has incorrectly specified a URL + // instead of a hostname, because normalizePortPortion will try to + // treat the colon after the scheme as the port number separator. + // We'll return a more specific error message for that situation. + given = strings.ToLower(given) + if given == "https" || given == "http" { + // Technically it's valid to have a host called "https" or "http" + // which would generate a false positive here with input like + // "http:foo", but we can only get here if the hostname exactly + // matches one of the schemes _and_ the port number is also invalid. 
+ return Hostname(""), fmt.Errorf("need just a hostname and optional port number, not a full URL") + } return Hostname(""), err } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/.golangci.yml b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/.golangci.yml deleted file mode 100644 index 98d6cb7797f..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/.golangci.yml +++ /dev/null @@ -1,12 +0,0 @@ -run: - concurrency: 8 - deadline: 5m - tests: false -linters: - enable-all: true - disable: - - gochecknoglobals - - gocognit - - godox - - wsl - - funlen diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/README.md b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/README.md deleted file mode 100644 index a5b1004e08c..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# MessagePack encoding for Golang - -[![Build Status](https://travis-ci.org/vmihailenco/msgpack.svg?branch=v2)](https://travis-ci.org/vmihailenco/msgpack) -[![GoDoc](https://godoc.org/github.com/vmihailenco/msgpack?status.svg)](https://godoc.org/github.com/vmihailenco/msgpack) - -Supports: -- Primitives, arrays, maps, structs, time.Time and interface{}. -- Appengine *datastore.Key and datastore.Cursor. -- [CustomEncoder](https://godoc.org/github.com/vmihailenco/msgpack#example-CustomEncoder)/CustomDecoder interfaces for custom encoding. -- [Extensions](https://godoc.org/github.com/vmihailenco/msgpack#example-RegisterExt) to encode type information. -- Renaming fields via `msgpack:"my_field_name"` and alias via `msgpack:"alias:another_name"`. -- Omitting individual empty fields via `msgpack:",omitempty"` tag or all [empty fields in a struct](https://godoc.org/github.com/vmihailenco/msgpack#example-Marshal--OmitEmpty). -- [Map keys sorting](https://godoc.org/github.com/vmihailenco/msgpack#Encoder.SortMapKeys). 
-- Encoding/decoding all [structs as arrays](https://godoc.org/github.com/vmihailenco/msgpack#Encoder.UseArrayForStructs) or [individual structs](https://godoc.org/github.com/vmihailenco/msgpack#example-Marshal--AsArray). -- [Encoder.UseJSONTag](https://godoc.org/github.com/vmihailenco/msgpack#Encoder.UseJSONTag) with [Decoder.UseJSONTag](https://godoc.org/github.com/vmihailenco/msgpack#Decoder.UseJSONTag) can turn msgpack into drop-in replacement for JSON. -- Simple but very fast and efficient [queries](https://godoc.org/github.com/vmihailenco/msgpack#example-Decoder-Query). - -API docs: https://godoc.org/github.com/vmihailenco/msgpack. -Examples: https://godoc.org/github.com/vmihailenco/msgpack#pkg-examples. - -## Installation - -This project uses [Go Modules](https://github.com/golang/go/wiki/Modules) and semantic import versioning since v4: - -``` shell -go mod init github.com/my/repo -go get github.com/vmihailenco/msgpack/v4 -``` - -## Quickstart - -``` go -import "github.com/vmihailenco/msgpack/v4" - -func ExampleMarshal() { - type Item struct { - Foo string - } - - b, err := msgpack.Marshal(&Item{Foo: "bar"}) - if err != nil { - panic(err) - } - - var item Item - err = msgpack.Unmarshal(b, &item) - if err != nil { - panic(err) - } - fmt.Println(item.Foo) - // Output: bar -} -``` - -## Benchmark - -``` -BenchmarkStructVmihailencoMsgpack-4 200000 12814 ns/op 2128 B/op 26 allocs/op -BenchmarkStructUgorjiGoMsgpack-4 100000 17678 ns/op 3616 B/op 70 allocs/op -BenchmarkStructUgorjiGoCodec-4 100000 19053 ns/op 7346 B/op 23 allocs/op -BenchmarkStructJSON-4 20000 69438 ns/op 7864 B/op 26 allocs/op -BenchmarkStructGOB-4 10000 104331 ns/op 14664 B/op 278 allocs/op -``` - -## Howto - -Please go through [examples](https://godoc.org/github.com/vmihailenco/msgpack#pkg-examples) to get an idea how to use this package. 
- -## See also - -- [Golang PostgreSQL ORM](https://github.com/go-pg/pg) -- [Golang message task queue](https://github.com/vmihailenco/taskq) diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/appengine.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/appengine.go deleted file mode 100644 index e8e91e53f35..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/appengine.go +++ /dev/null @@ -1,64 +0,0 @@ -// +build appengine - -package msgpack - -import ( - "reflect" - - ds "google.golang.org/appengine/datastore" -) - -func init() { - Register((*ds.Key)(nil), encodeDatastoreKeyValue, decodeDatastoreKeyValue) - Register((*ds.Cursor)(nil), encodeDatastoreCursorValue, decodeDatastoreCursorValue) -} - -func EncodeDatastoreKey(e *Encoder, key *ds.Key) error { - if key == nil { - return e.EncodeNil() - } - return e.EncodeString(key.Encode()) -} - -func encodeDatastoreKeyValue(e *Encoder, v reflect.Value) error { - key := v.Interface().(*ds.Key) - return EncodeDatastoreKey(e, key) -} - -func DecodeDatastoreKey(d *Decoder) (*ds.Key, error) { - v, err := d.DecodeString() - if err != nil { - return nil, err - } - if v == "" { - return nil, nil - } - return ds.DecodeKey(v) -} - -func decodeDatastoreKeyValue(d *Decoder, v reflect.Value) error { - key, err := DecodeDatastoreKey(d) - if err != nil { - return err - } - v.Set(reflect.ValueOf(key)) - return nil -} - -func encodeDatastoreCursorValue(e *Encoder, v reflect.Value) error { - cursor := v.Interface().(ds.Cursor) - return e.Encode(cursor.String()) -} - -func decodeDatastoreCursorValue(d *Decoder, v reflect.Value) error { - s, err := d.DecodeString() - if err != nil { - return err - } - cursor, err := ds.DecodeCursor(s) - if err != nil { - return err - } - v.Set(reflect.ValueOf(cursor)) - return nil -} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/codes/codes.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/codes/codes.go deleted 
file mode 100644 index 28e0a5a88b4..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/codes/codes.go +++ /dev/null @@ -1,90 +0,0 @@ -package codes - -type Code byte - -var ( - PosFixedNumHigh Code = 0x7f - NegFixedNumLow Code = 0xe0 - - Nil Code = 0xc0 - - False Code = 0xc2 - True Code = 0xc3 - - Float Code = 0xca - Double Code = 0xcb - - Uint8 Code = 0xcc - Uint16 Code = 0xcd - Uint32 Code = 0xce - Uint64 Code = 0xcf - - Int8 Code = 0xd0 - Int16 Code = 0xd1 - Int32 Code = 0xd2 - Int64 Code = 0xd3 - - FixedStrLow Code = 0xa0 - FixedStrHigh Code = 0xbf - FixedStrMask Code = 0x1f - Str8 Code = 0xd9 - Str16 Code = 0xda - Str32 Code = 0xdb - - Bin8 Code = 0xc4 - Bin16 Code = 0xc5 - Bin32 Code = 0xc6 - - FixedArrayLow Code = 0x90 - FixedArrayHigh Code = 0x9f - FixedArrayMask Code = 0xf - Array16 Code = 0xdc - Array32 Code = 0xdd - - FixedMapLow Code = 0x80 - FixedMapHigh Code = 0x8f - FixedMapMask Code = 0xf - Map16 Code = 0xde - Map32 Code = 0xdf - - FixExt1 Code = 0xd4 - FixExt2 Code = 0xd5 - FixExt4 Code = 0xd6 - FixExt8 Code = 0xd7 - FixExt16 Code = 0xd8 - Ext8 Code = 0xc7 - Ext16 Code = 0xc8 - Ext32 Code = 0xc9 -) - -func IsFixedNum(c Code) bool { - return c <= PosFixedNumHigh || c >= NegFixedNumLow -} - -func IsFixedMap(c Code) bool { - return c >= FixedMapLow && c <= FixedMapHigh -} - -func IsFixedArray(c Code) bool { - return c >= FixedArrayLow && c <= FixedArrayHigh -} - -func IsFixedString(c Code) bool { - return c >= FixedStrLow && c <= FixedStrHigh -} - -func IsString(c Code) bool { - return IsFixedString(c) || c == Str8 || c == Str16 || c == Str32 -} - -func IsBin(c Code) bool { - return c == Bin8 || c == Bin16 || c == Bin32 -} - -func IsFixedExt(c Code) bool { - return c >= FixExt1 && c <= FixExt16 -} - -func IsExt(c Code) bool { - return IsFixedExt(c) || c == Ext8 || c == Ext16 || c == Ext32 -} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/ext.go 
b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/ext.go deleted file mode 100644 index 17e709bc8ff..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/ext.go +++ /dev/null @@ -1,244 +0,0 @@ -package msgpack - -import ( - "bytes" - "fmt" - "reflect" - "sync" - - "github.com/vmihailenco/msgpack/v4/codes" -) - -type extInfo struct { - Type reflect.Type - Decoder decoderFunc -} - -var extTypes = make(map[int8]*extInfo) - -var bufferPool = &sync.Pool{ - New: func() interface{} { - return new(bytes.Buffer) - }, -} - -// RegisterExt records a type, identified by a value for that type, -// under the provided id. That id will identify the concrete type of a value -// sent or received as an interface variable. Only types that will be -// transferred as implementations of interface values need to be registered. -// Expecting to be used only during initialization, it panics if the mapping -// between types and ids is not a bijection. -func RegisterExt(id int8, value interface{}) { - typ := reflect.TypeOf(value) - if typ.Kind() == reflect.Ptr { - typ = typ.Elem() - } - ptr := reflect.PtrTo(typ) - - if _, ok := extTypes[id]; ok { - panic(fmt.Errorf("msgpack: ext with id=%d is already registered", id)) - } - - registerExt(id, ptr, getEncoder(ptr), getDecoder(ptr)) - registerExt(id, typ, getEncoder(typ), getDecoder(typ)) -} - -func registerExt(id int8, typ reflect.Type, enc encoderFunc, dec decoderFunc) { - if enc != nil { - typeEncMap.Store(typ, makeExtEncoder(id, enc)) - } - if dec != nil { - extTypes[id] = &extInfo{ - Type: typ, - Decoder: dec, - } - typeDecMap.Store(typ, makeExtDecoder(id, dec)) - } -} - -func (e *Encoder) EncodeExtHeader(typeID int8, length int) error { - if err := e.encodeExtLen(length); err != nil { - return err - } - if err := e.w.WriteByte(byte(typeID)); err != nil { - return err - } - return nil -} - -func makeExtEncoder(typeID int8, enc encoderFunc) encoderFunc { - return func(e *Encoder, v reflect.Value) error { - 
buf := bufferPool.Get().(*bytes.Buffer) - defer bufferPool.Put(buf) - buf.Reset() - - oldw := e.w - e.w = buf - err := enc(e, v) - e.w = oldw - - if err != nil { - return err - } - - err = e.EncodeExtHeader(typeID, buf.Len()) - if err != nil { - return err - } - return e.write(buf.Bytes()) - } -} - -func makeExtDecoder(typeID int8, dec decoderFunc) decoderFunc { - return func(d *Decoder, v reflect.Value) error { - c, err := d.PeekCode() - if err != nil { - return err - } - - if !codes.IsExt(c) { - return dec(d, v) - } - - id, extLen, err := d.DecodeExtHeader() - if err != nil { - return err - } - - if id != typeID { - return fmt.Errorf("msgpack: got ext type=%d, wanted %d", id, typeID) - } - - d.extLen = extLen - return dec(d, v) - } -} - -func (e *Encoder) encodeExtLen(l int) error { - switch l { - case 1: - return e.writeCode(codes.FixExt1) - case 2: - return e.writeCode(codes.FixExt2) - case 4: - return e.writeCode(codes.FixExt4) - case 8: - return e.writeCode(codes.FixExt8) - case 16: - return e.writeCode(codes.FixExt16) - } - if l < 256 { - return e.write1(codes.Ext8, uint8(l)) - } - if l < 65536 { - return e.write2(codes.Ext16, uint16(l)) - } - return e.write4(codes.Ext32, uint32(l)) -} - -func (d *Decoder) parseExtLen(c codes.Code) (int, error) { - switch c { - case codes.FixExt1: - return 1, nil - case codes.FixExt2: - return 2, nil - case codes.FixExt4: - return 4, nil - case codes.FixExt8: - return 8, nil - case codes.FixExt16: - return 16, nil - case codes.Ext8: - n, err := d.uint8() - return int(n), err - case codes.Ext16: - n, err := d.uint16() - return int(n), err - case codes.Ext32: - n, err := d.uint32() - return int(n), err - default: - return 0, fmt.Errorf("msgpack: invalid code=%x decoding ext length", c) - } -} - -func (d *Decoder) extHeader(c codes.Code) (int8, int, error) { - length, err := d.parseExtLen(c) - if err != nil { - return 0, 0, err - } - - typeID, err := d.readCode() - if err != nil { - return 0, 0, err - } - - return int8(typeID), 
length, nil -} - -func (d *Decoder) DecodeExtHeader() (typeID int8, length int, err error) { - c, err := d.readCode() - if err != nil { - return - } - return d.extHeader(c) -} - -func (d *Decoder) extInterface(c codes.Code) (interface{}, error) { - extID, extLen, err := d.extHeader(c) - if err != nil { - return nil, err - } - - info, ok := extTypes[extID] - if !ok { - return nil, fmt.Errorf("msgpack: unknown ext id=%d", extID) - } - - v := reflect.New(info.Type) - - d.extLen = extLen - err = info.Decoder(d, v.Elem()) - d.extLen = 0 - if err != nil { - return nil, err - } - - return v.Interface(), nil -} - -func (d *Decoder) skipExt(c codes.Code) error { - n, err := d.parseExtLen(c) - if err != nil { - return err - } - return d.skipN(n + 1) -} - -func (d *Decoder) skipExtHeader(c codes.Code) error { - // Read ext type. - _, err := d.readCode() - if err != nil { - return err - } - // Read ext body len. - for i := 0; i < extHeaderLen(c); i++ { - _, err := d.readCode() - if err != nil { - return err - } - } - return nil -} - -func extHeaderLen(c codes.Code) int { - switch c { - case codes.Ext8: - return 1 - case codes.Ext16: - return 2 - case codes.Ext32: - return 4 - } - return 0 -} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/intern.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/intern.go deleted file mode 100644 index 6ca5692739a..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/intern.go +++ /dev/null @@ -1,236 +0,0 @@ -package msgpack - -import ( - "encoding/binary" - "errors" - "fmt" - "math" - "reflect" - - "github.com/vmihailenco/msgpack/v4/codes" -) - -var internStringExtID int8 = -128 - -var errUnexpectedCode = errors.New("msgpack: unexpected code") - -func encodeInternInterfaceValue(e *Encoder, v reflect.Value) error { - if v.IsNil() { - return e.EncodeNil() - } - - v = v.Elem() - if v.Kind() == reflect.String { - return encodeInternStringValue(e, v) - } - return e.EncodeValue(v) -} - -func 
encodeInternStringValue(e *Encoder, v reflect.Value) error { - s := v.String() - - if s != "" { - if idx, ok := e.intern[s]; ok { - return e.internStringIndex(idx) - } - - if e.intern == nil { - e.intern = make(map[string]int) - } - - idx := len(e.intern) - e.intern[s] = idx - } - - return e.EncodeString(s) -} - -func (e *Encoder) internStringIndex(idx int) error { - if idx < math.MaxUint8 { - if err := e.writeCode(codes.FixExt1); err != nil { - return err - } - if err := e.w.WriteByte(byte(internStringExtID)); err != nil { - return err - } - return e.w.WriteByte(byte(idx)) - } - - if idx < math.MaxUint16 { - if err := e.writeCode(codes.FixExt2); err != nil { - return err - } - if err := e.w.WriteByte(byte(internStringExtID)); err != nil { - return err - } - if err := e.w.WriteByte(byte(idx >> 8)); err != nil { - return err - } - return e.w.WriteByte(byte(idx)) - } - - if int64(idx) < math.MaxUint32 { - if err := e.writeCode(codes.FixExt4); err != nil { - return err - } - if err := e.w.WriteByte(byte(internStringExtID)); err != nil { - return err - } - if err := e.w.WriteByte(byte(idx >> 24)); err != nil { - return err - } - if err := e.w.WriteByte(byte(idx >> 16)); err != nil { - return err - } - if err := e.w.WriteByte(byte(idx >> 8)); err != nil { - return err - } - return e.w.WriteByte(byte(idx)) - } - - return fmt.Errorf("msgpack: intern string index=%d is too large", idx) -} - -//------------------------------------------------------------------------------ - -func decodeInternInterfaceValue(d *Decoder, v reflect.Value) error { - c, err := d.readCode() - if err != nil { - return err - } - - s, err := d.internString(c) - if err == nil { - v.Set(reflect.ValueOf(s)) - return nil - } - if err != nil && err != errUnexpectedCode { - return err - } - - if err := d.s.UnreadByte(); err != nil { - return err - } - - return decodeInterfaceValue(d, v) -} - -func decodeInternStringValue(d *Decoder, v reflect.Value) error { - if err := mustSet(v); err != nil { - return err 
- } - - c, err := d.readCode() - if err != nil { - return err - } - - s, err := d.internString(c) - if err != nil { - if err == errUnexpectedCode { - return fmt.Errorf("msgpack: invalid code=%x decoding intern string", c) - } - return err - } - - v.SetString(s) - return nil -} - -func (d *Decoder) internString(c codes.Code) (string, error) { - if codes.IsFixedString(c) { - n := int(c & codes.FixedStrMask) - return d.internStringWithLen(n) - } - - switch c { - case codes.FixExt1, codes.FixExt2, codes.FixExt4: - typeID, length, err := d.extHeader(c) - if err != nil { - return "", err - } - if typeID != internStringExtID { - err := fmt.Errorf("msgpack: got ext type=%d, wanted %d", - typeID, internStringExtID) - return "", err - } - - idx, err := d.internStringIndex(length) - if err != nil { - return "", err - } - - return d.internStringAtIndex(idx) - case codes.Str8, codes.Bin8: - n, err := d.uint8() - if err != nil { - return "", err - } - return d.internStringWithLen(int(n)) - case codes.Str16, codes.Bin16: - n, err := d.uint16() - if err != nil { - return "", err - } - return d.internStringWithLen(int(n)) - case codes.Str32, codes.Bin32: - n, err := d.uint32() - if err != nil { - return "", err - } - return d.internStringWithLen(int(n)) - } - - return "", errUnexpectedCode -} - -func (d *Decoder) internStringIndex(length int) (int, error) { - switch length { - case 1: - c, err := d.s.ReadByte() - if err != nil { - return 0, err - } - return int(c), nil - case 2: - b, err := d.readN(2) - if err != nil { - return 0, err - } - n := binary.BigEndian.Uint16(b) - return int(n), nil - case 4: - b, err := d.readN(4) - if err != nil { - return 0, err - } - n := binary.BigEndian.Uint32(b) - return int(n), nil - } - - err := fmt.Errorf("msgpack: unsupported intern string index length=%d", length) - return 0, err -} - -func (d *Decoder) internStringAtIndex(idx int) (string, error) { - if idx >= len(d.intern) { - err := fmt.Errorf("msgpack: intern string with index=%d does not 
exist", idx) - return "", err - } - return d.intern[idx], nil -} - -func (d *Decoder) internStringWithLen(n int) (string, error) { - if n <= 0 { - return "", nil - } - - s, err := d.stringWithLen(n) - if err != nil { - return "", err - } - - d.intern = append(d.intern, s) - - return s, nil -} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/msgpack.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/msgpack.go deleted file mode 100644 index 220b43c47b3..00000000000 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/msgpack.go +++ /dev/null @@ -1,17 +0,0 @@ -package msgpack - -type Marshaler interface { - MarshalMsgpack() ([]byte, error) -} - -type Unmarshaler interface { - UnmarshalMsgpack([]byte) error -} - -type CustomEncoder interface { - EncodeMsgpack(*Encoder) error -} - -type CustomDecoder interface { - DecodeMsgpack(*Decoder) error -} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/.prettierrc b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/.prettierrc new file mode 100644 index 00000000000..8b7f044ad1f --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/.prettierrc @@ -0,0 +1,4 @@ +semi: false +singleQuote: true +proseWrap: always +printWidth: 100 diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/.travis.yml b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/.travis.yml similarity index 68% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/.travis.yml rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/.travis.yml index b35bf5484e7..e2ce06c49f0 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/.travis.yml +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/.travis.yml @@ -2,10 +2,8 @@ sudo: false language: go go: - - 1.11.x - - 1.12.x - - 1.13.x - - 1.14.x + - 1.15.x + - 1.16.x - tip matrix: @@ -18,4 +16,5 @@ env: go_import_path: 
github.com/vmihailenco/msgpack before_install: - - curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.21.0 + - curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go + env GOPATH)/bin v1.31.0 diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/CHANGELOG.md b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/CHANGELOG.md similarity index 50% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/CHANGELOG.md rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/CHANGELOG.md index fac97090e45..f6b19d5ba40 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/CHANGELOG.md +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/CHANGELOG.md @@ -1,8 +1,32 @@ +## [5.3.5](https://github.com/vmihailenco/msgpack/compare/v5.3.4...v5.3.5) (2021-10-22) + + + +## v5 + +### Added + +- `DecodeMap` is split into `DecodeMap`, `DecodeTypedMap`, and `DecodeUntypedMap`. +- New msgpack extensions API. + +### Changed + +- `Reset*` functions also reset flags. +- `SetMapDecodeFunc` is renamed to `SetMapDecoder`. +- `StructAsArray` is renamed to `UseArrayEncodedStructs`. +- `SortMapKeys` is renamed to `SetSortMapKeys`. + +### Removed + +- `UseJSONTag` is removed. Use `SetCustomStructTag("json")` instead. + ## v4 -- Encode, Decode, Marshal, and Unmarshal are changed to accept single argument. EncodeMulti and DecodeMulti are added as replacement. +- Encode, Decode, Marshal, and Unmarshal are changed to accept single argument. EncodeMulti and + DecodeMulti are added as replacement. - Added EncodeInt8/16/32/64 and EncodeUint8/16/32/64. -- Encoder changed to preserve type of numbers instead of chosing most compact encoding. The old behavior can be achieved with Encoder.UseCompactEncoding. +- Encoder changed to preserve type of numbers instead of chosing most compact encoding. 
The old + behavior can be achieved with Encoder.UseCompactEncoding. ## v3.3 @@ -16,9 +40,12 @@ - gopkg.in is not supported any more. Update import path to github.com/vmihailenco/msgpack. - Msgpack maps are decoded into map[string]interface{} by default. -- EncodeSliceLen is removed in favor of EncodeArrayLen. DecodeSliceLen is removed in favor of DecodeArrayLen. +- EncodeSliceLen is removed in favor of EncodeArrayLen. DecodeSliceLen is removed in favor of + DecodeArrayLen. - Embedded structs are automatically inlined where possible. -- Time is encoded using extension as described in https://github.com/msgpack/msgpack/pull/209. Old format is supported as well. -- EncodeInt8/16/32/64 is replaced with EncodeInt. EncodeUint8/16/32/64 is replaced with EncodeUint. There should be no performance differences. +- Time is encoded using extension as described in https://github.com/msgpack/msgpack/pull/209. Old + format is supported as well. +- EncodeInt8/16/32/64 is replaced with EncodeInt. EncodeUint8/16/32/64 is replaced with EncodeUint. + There should be no performance differences. - DecodeInterface can now return int8/16/32 and uint8/16/32. - PeekCode returns codes.Code instead of byte. 
diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/LICENSE b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/LICENSE similarity index 100% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/LICENSE rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/LICENSE diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/Makefile b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/Makefile similarity index 84% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/Makefile rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/Makefile index 57914e333a8..e9aade78294 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/Makefile +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/Makefile @@ -1,6 +1,6 @@ -all: +test: go test ./... go test ./... -short -race go test ./... -run=NONE -bench=. -benchmem env GOOS=linux GOARCH=386 go test ./... - golangci-lint run + go vet diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/README.md b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/README.md new file mode 100644 index 00000000000..66ad98b9c8d --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/README.md @@ -0,0 +1,86 @@ +# MessagePack encoding for Golang + +[![Build Status](https://travis-ci.org/vmihailenco/msgpack.svg)](https://travis-ci.org/vmihailenco/msgpack) +[![PkgGoDev](https://pkg.go.dev/badge/github.com/vmihailenco/msgpack/v5)](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5) +[![Documentation](https://img.shields.io/badge/msgpack-documentation-informational)](https://msgpack.uptrace.dev/) +[![Chat](https://discordapp.com/api/guilds/752070105847955518/widget.png)](https://discord.gg/rWtp5Aj) + +> :heart: +> [**Uptrace.dev** - All-in-one tool to optimize performance and monitor errors & logs](https://uptrace.dev/?utm_source=gh-msgpack&utm_campaign=gh-msgpack-var2) + +- 
Join [Discord](https://discord.gg/rWtp5Aj) to ask questions. +- [Documentation](https://msgpack.uptrace.dev) +- [Reference](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5) +- [Examples](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#pkg-examples) + +Other projects you may like: + +- [Bun](https://bun.uptrace.dev) - fast and simple SQL client for PostgreSQL, MySQL, and SQLite. +- [BunRouter](https://bunrouter.uptrace.dev/) - fast and flexible HTTP router for Go. + +## Features + +- Primitives, arrays, maps, structs, time.Time and interface{}. +- Appengine \*datastore.Key and datastore.Cursor. +- [CustomEncoder]/[CustomDecoder] interfaces for custom encoding. +- [Extensions](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-RegisterExt) to encode + type information. +- Renaming fields via `msgpack:"my_field_name"` and alias via `msgpack:"alias:another_name"`. +- Omitting individual empty fields via `msgpack:",omitempty"` tag or all + [empty fields in a struct](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-Marshal-OmitEmpty). +- [Map keys sorting](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Encoder.SetSortMapKeys). +- Encoding/decoding all + [structs as arrays](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Encoder.UseArrayEncodedStructs) + or + [individual structs](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-Marshal-AsArray). +- [Encoder.SetCustomStructTag] with [Decoder.SetCustomStructTag] can turn msgpack into drop-in + replacement for any tag. +- Simple but very fast and efficient + [queries](https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#example-Decoder.Query). 
+ +[customencoder]: https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#CustomEncoder +[customdecoder]: https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#CustomDecoder +[encoder.setcustomstructtag]: + https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Encoder.SetCustomStructTag +[decoder.setcustomstructtag]: + https://pkg.go.dev/github.com/vmihailenco/msgpack/v5#Decoder.SetCustomStructTag + +## Installation + +msgpack supports 2 last Go versions and requires support for +[Go modules](https://github.com/golang/go/wiki/Modules). So make sure to initialize a Go module: + +```shell +go mod init github.com/my/repo +``` + +And then install msgpack/v5 (note _v5_ in the import; omitting it is a popular mistake): + +```shell +go get github.com/vmihailenco/msgpack/v5 +``` + +## Quickstart + +```go +import "github.com/vmihailenco/msgpack/v5" + +func ExampleMarshal() { + type Item struct { + Foo string + } + + b, err := msgpack.Marshal(&Item{Foo: "bar"}) + if err != nil { + panic(err) + } + + var item Item + err = msgpack.Unmarshal(b, &item) + if err != nil { + panic(err) + } + fmt.Println(item.Foo) + // Output: bar +} +``` diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/commitlint.config.js b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/commitlint.config.js new file mode 100644 index 00000000000..4fedde6daf0 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/commitlint.config.js @@ -0,0 +1 @@ +module.exports = { extends: ['@commitlint/config-conventional'] } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode.go similarity index 63% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode.go index 1711675e9a1..5df40e5d9ca 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode.go +++ 
b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode.go @@ -10,12 +10,11 @@ import ( "sync" "time" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) const ( - looseIfaceFlag uint32 = 1 << iota - decodeUsingJSONFlag + looseInterfaceDecodingFlag uint32 = 1 << iota disallowUnknownFieldsFlag ) @@ -38,19 +37,27 @@ var decPool = sync.Pool{ }, } +func GetDecoder() *Decoder { + return decPool.Get().(*Decoder) +} + +func PutDecoder(dec *Decoder) { + dec.r = nil + dec.s = nil + decPool.Put(dec) +} + +//------------------------------------------------------------------------------ + // Unmarshal decodes the MessagePack-encoded data and stores the result // in the value pointed to by v. func Unmarshal(data []byte, v interface{}) error { - dec := decPool.Get().(*Decoder) + dec := GetDecoder() - if r, ok := dec.r.(*bytes.Reader); ok { - r.Reset(data) - } else { - dec.Reset(bytes.NewReader(data)) - } + dec.Reset(bytes.NewReader(data)) err := dec.Decode(v) - decPool.Put(dec) + PutDecoder(dec) return err } @@ -61,18 +68,18 @@ type Decoder struct { s io.ByteScanner buf []byte - extLen int - rec []byte // accumulates read data if not nil + rec []byte // accumulates read data if not nil - intern []string - flags uint32 - decodeMapFunc func(*Decoder) (interface{}, error) + dict []string + flags uint32 + structTag string + mapDecoder func(*Decoder) (interface{}, error) } // NewDecoder returns a new decoder that reads from r. // // The decoder introduces its own buffering and may read data from r -// beyond the MessagePack values requested. Buffering can be disabled +// beyond the requested msgpack values. Buffering can be disabled // by passing a reader that implements io.ByteScanner interface. func NewDecoder(r io.Reader) *Decoder { d := new(Decoder) @@ -83,65 +90,77 @@ func NewDecoder(r io.Reader) *Decoder { // Reset discards any buffered data, resets all state, and switches the buffered // reader to read from r. 
func (d *Decoder) Reset(r io.Reader) { + d.ResetDict(r, nil) +} + +// ResetDict is like Reset, but also resets the dict. +func (d *Decoder) ResetDict(r io.Reader, dict []string) { + d.resetReader(r) + d.flags = 0 + d.structTag = "" + d.mapDecoder = nil + d.dict = dict +} + +func (d *Decoder) WithDict(dict []string, fn func(*Decoder) error) error { + oldDict := d.dict + d.dict = dict + err := fn(d) + d.dict = oldDict + return err +} + +func (d *Decoder) resetReader(r io.Reader) { if br, ok := r.(bufReader); ok { d.r = br d.s = br - } else if br, ok := d.r.(*bufio.Reader); ok { - br.Reset(r) } else { br := bufio.NewReader(r) d.r = br d.s = br } - - if d.intern != nil { - d.intern = d.intern[:0] - } - - //TODO: - //d.useLoose = false - //d.useJSONTag = false - //d.disallowUnknownFields = false - //d.decodeMapFunc = nil } -func (d *Decoder) SetDecodeMapFunc(fn func(*Decoder) (interface{}, error)) { - d.decodeMapFunc = fn +func (d *Decoder) SetMapDecoder(fn func(*Decoder) (interface{}, error)) { + d.mapDecoder = fn } -// UseDecodeInterfaceLoose causes decoder to use DecodeInterfaceLoose +// UseLooseInterfaceDecoding causes decoder to use DecodeInterfaceLoose // to decode msgpack value into Go interface{}. -func (d *Decoder) UseDecodeInterfaceLoose(on bool) *Decoder { +func (d *Decoder) UseLooseInterfaceDecoding(on bool) { if on { - d.flags |= looseIfaceFlag + d.flags |= looseInterfaceDecodingFlag } else { - d.flags &= ^looseIfaceFlag + d.flags &= ^looseInterfaceDecodingFlag } - return d } -// UseJSONTag causes the Decoder to use json struct tag as fallback option +// SetCustomStructTag causes the decoder to use the supplied tag as a fallback option // if there is no msgpack tag. 
-func (d *Decoder) UseJSONTag(on bool) *Decoder { - if on { - d.flags |= decodeUsingJSONFlag - } else { - d.flags &= ^decodeUsingJSONFlag - } - return d +func (d *Decoder) SetCustomStructTag(tag string) { + d.structTag = tag } // DisallowUnknownFields causes the Decoder to return an error when the destination // is a struct and the input contains object keys which do not match any // non-ignored, exported fields in the destination. -func (d *Decoder) DisallowUnknownFields() { - if true { +func (d *Decoder) DisallowUnknownFields(on bool) { + if on { d.flags |= disallowUnknownFieldsFlag } else { d.flags &= ^disallowUnknownFieldsFlag } } +// UseInternedStrings enables support for decoding interned strings. +func (d *Decoder) UseInternedStrings(on bool) { + if on { + d.flags |= useInternedStringsFlag + } else { + d.flags &= ^useInternedStringsFlag + } +} + // Buffered returns a reader of the data remaining in the Decoder's buffer. // The reader is valid until the next call to Decode. func (d *Decoder) Buffered() io.Reader { @@ -250,12 +269,22 @@ func (d *Decoder) Decode(v interface{}) error { return errors.New("msgpack: Decode(nil)") } if vv.Kind() != reflect.Ptr { - return fmt.Errorf("msgpack: Decode(nonsettable %T)", v) + return fmt.Errorf("msgpack: Decode(non-pointer %T)", v) + } + if vv.IsNil() { + return fmt.Errorf("msgpack: Decode(non-settable %T)", v) } + vv = vv.Elem() - if !vv.IsValid() { - return fmt.Errorf("msgpack: Decode(nonsettable %T)", v) + if vv.Kind() == reflect.Interface { + if !vv.IsNil() { + vv = vv.Elem() + if vv.Kind() != reflect.Ptr { + return fmt.Errorf("msgpack: Decode(non-pointer %s)", vv.Type().String()) + } + } } + return d.DecodeValue(vv) } @@ -269,7 +298,7 @@ func (d *Decoder) DecodeMulti(v ...interface{}) error { } func (d *Decoder) decodeInterfaceCond() (interface{}, error) { - if d.flags&looseIfaceFlag != 0 { + if d.flags&looseInterfaceDecodingFlag != 0 { return d.DecodeInterfaceLoose() } return d.DecodeInterface() @@ -285,7 +314,7 @@ 
func (d *Decoder) DecodeNil() error { if err != nil { return err } - if c != codes.Nil { + if c != msgpcode.Nil { return fmt.Errorf("msgpack: invalid code=%x decoding nil", c) } return nil @@ -311,11 +340,14 @@ func (d *Decoder) DecodeBool() (bool, error) { return d.bool(c) } -func (d *Decoder) bool(c codes.Code) (bool, error) { - if c == codes.False { +func (d *Decoder) bool(c byte) (bool, error) { + if c == msgpcode.Nil { + return false, nil + } + if c == msgpcode.False { return false, nil } - if c == codes.True { + if c == msgpcode.True { return true, nil } return false, fmt.Errorf("msgpack: invalid code=%x decoding bool", c) @@ -349,63 +381,63 @@ func (d *Decoder) DecodeInterface() (interface{}, error) { return nil, err } - if codes.IsFixedNum(c) { + if msgpcode.IsFixedNum(c) { return int8(c), nil } - if codes.IsFixedMap(c) { + if msgpcode.IsFixedMap(c) { err = d.s.UnreadByte() if err != nil { return nil, err } - return d.DecodeMap() + return d.decodeMapDefault() } - if codes.IsFixedArray(c) { + if msgpcode.IsFixedArray(c) { return d.decodeSlice(c) } - if codes.IsFixedString(c) { + if msgpcode.IsFixedString(c) { return d.string(c) } switch c { - case codes.Nil: + case msgpcode.Nil: return nil, nil - case codes.False, codes.True: + case msgpcode.False, msgpcode.True: return d.bool(c) - case codes.Float: + case msgpcode.Float: return d.float32(c) - case codes.Double: + case msgpcode.Double: return d.float64(c) - case codes.Uint8: + case msgpcode.Uint8: return d.uint8() - case codes.Uint16: + case msgpcode.Uint16: return d.uint16() - case codes.Uint32: + case msgpcode.Uint32: return d.uint32() - case codes.Uint64: + case msgpcode.Uint64: return d.uint64() - case codes.Int8: + case msgpcode.Int8: return d.int8() - case codes.Int16: + case msgpcode.Int16: return d.int16() - case codes.Int32: + case msgpcode.Int32: return d.int32() - case codes.Int64: + case msgpcode.Int64: return d.int64() - case codes.Bin8, codes.Bin16, codes.Bin32: + case msgpcode.Bin8, 
msgpcode.Bin16, msgpcode.Bin32: return d.bytes(c, nil) - case codes.Str8, codes.Str16, codes.Str32: + case msgpcode.Str8, msgpcode.Str16, msgpcode.Str32: return d.string(c) - case codes.Array16, codes.Array32: + case msgpcode.Array16, msgpcode.Array32: return d.decodeSlice(c) - case codes.Map16, codes.Map32: + case msgpcode.Map16, msgpcode.Map32: err = d.s.UnreadByte() if err != nil { return nil, err } - return d.DecodeMap() - case codes.FixExt1, codes.FixExt2, codes.FixExt4, codes.FixExt8, codes.FixExt16, - codes.Ext8, codes.Ext16, codes.Ext32: - return d.extInterface(c) + return d.decodeMapDefault() + case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4, msgpcode.FixExt8, msgpcode.FixExt16, + msgpcode.Ext8, msgpcode.Ext16, msgpcode.Ext32: + return d.decodeInterfaceExt(c) } return 0, fmt.Errorf("msgpack: unknown code %x decoding interface{}", c) @@ -415,55 +447,55 @@ func (d *Decoder) DecodeInterface() (interface{}, error) { // - int8, int16, and int32 are converted to int64, // - uint8, uint16, and uint32 are converted to uint64, // - float32 is converted to float64. +// - []byte is converted to string. 
func (d *Decoder) DecodeInterfaceLoose() (interface{}, error) { c, err := d.readCode() if err != nil { return nil, err } - if codes.IsFixedNum(c) { + if msgpcode.IsFixedNum(c) { return int64(int8(c)), nil } - if codes.IsFixedMap(c) { + if msgpcode.IsFixedMap(c) { err = d.s.UnreadByte() if err != nil { return nil, err } - return d.DecodeMap() + return d.decodeMapDefault() } - if codes.IsFixedArray(c) { + if msgpcode.IsFixedArray(c) { return d.decodeSlice(c) } - if codes.IsFixedString(c) { + if msgpcode.IsFixedString(c) { return d.string(c) } switch c { - case codes.Nil: + case msgpcode.Nil: return nil, nil - case codes.False, codes.True: + case msgpcode.False, msgpcode.True: return d.bool(c) - case codes.Float, codes.Double: + case msgpcode.Float, msgpcode.Double: return d.float64(c) - case codes.Uint8, codes.Uint16, codes.Uint32, codes.Uint64: + case msgpcode.Uint8, msgpcode.Uint16, msgpcode.Uint32, msgpcode.Uint64: return d.uint(c) - case codes.Int8, codes.Int16, codes.Int32, codes.Int64: + case msgpcode.Int8, msgpcode.Int16, msgpcode.Int32, msgpcode.Int64: return d.int(c) - case codes.Bin8, codes.Bin16, codes.Bin32: - return d.bytes(c, nil) - case codes.Str8, codes.Str16, codes.Str32: + case msgpcode.Str8, msgpcode.Str16, msgpcode.Str32, + msgpcode.Bin8, msgpcode.Bin16, msgpcode.Bin32: return d.string(c) - case codes.Array16, codes.Array32: + case msgpcode.Array16, msgpcode.Array32: return d.decodeSlice(c) - case codes.Map16, codes.Map32: + case msgpcode.Map16, msgpcode.Map32: err = d.s.UnreadByte() if err != nil { return nil, err } - return d.DecodeMap() - case codes.FixExt1, codes.FixExt2, codes.FixExt4, codes.FixExt8, codes.FixExt16, - codes.Ext8, codes.Ext16, codes.Ext32: - return d.extInterface(c) + return d.decodeMapDefault() + case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4, msgpcode.FixExt8, msgpcode.FixExt16, + msgpcode.Ext8, msgpcode.Ext16, msgpcode.Ext32: + return d.decodeInterfaceExt(c) } return 0, fmt.Errorf("msgpack: unknown code %x 
decoding interface{}", c) @@ -476,63 +508,78 @@ func (d *Decoder) Skip() error { return err } - if codes.IsFixedNum(c) { + if msgpcode.IsFixedNum(c) { return nil } - if codes.IsFixedMap(c) { + if msgpcode.IsFixedMap(c) { return d.skipMap(c) } - if codes.IsFixedArray(c) { + if msgpcode.IsFixedArray(c) { return d.skipSlice(c) } - if codes.IsFixedString(c) { + if msgpcode.IsFixedString(c) { return d.skipBytes(c) } switch c { - case codes.Nil, codes.False, codes.True: + case msgpcode.Nil, msgpcode.False, msgpcode.True: return nil - case codes.Uint8, codes.Int8: + case msgpcode.Uint8, msgpcode.Int8: return d.skipN(1) - case codes.Uint16, codes.Int16: + case msgpcode.Uint16, msgpcode.Int16: return d.skipN(2) - case codes.Uint32, codes.Int32, codes.Float: + case msgpcode.Uint32, msgpcode.Int32, msgpcode.Float: return d.skipN(4) - case codes.Uint64, codes.Int64, codes.Double: + case msgpcode.Uint64, msgpcode.Int64, msgpcode.Double: return d.skipN(8) - case codes.Bin8, codes.Bin16, codes.Bin32: + case msgpcode.Bin8, msgpcode.Bin16, msgpcode.Bin32: return d.skipBytes(c) - case codes.Str8, codes.Str16, codes.Str32: + case msgpcode.Str8, msgpcode.Str16, msgpcode.Str32: return d.skipBytes(c) - case codes.Array16, codes.Array32: + case msgpcode.Array16, msgpcode.Array32: return d.skipSlice(c) - case codes.Map16, codes.Map32: + case msgpcode.Map16, msgpcode.Map32: return d.skipMap(c) - case codes.FixExt1, codes.FixExt2, codes.FixExt4, codes.FixExt8, codes.FixExt16, - codes.Ext8, codes.Ext16, codes.Ext32: + case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4, msgpcode.FixExt8, msgpcode.FixExt16, + msgpcode.Ext8, msgpcode.Ext16, msgpcode.Ext32: return d.skipExt(c) } return fmt.Errorf("msgpack: unknown code %x", c) } +func (d *Decoder) DecodeRaw() (RawMessage, error) { + d.rec = make([]byte, 0) + if err := d.Skip(); err != nil { + return nil, err + } + msg := RawMessage(d.rec) + d.rec = nil + return msg, nil +} + // PeekCode returns the next MessagePack code without advancing 
the reader. -// Subpackage msgpack/codes contains list of available codes. -func (d *Decoder) PeekCode() (codes.Code, error) { +// Subpackage msgpack/codes defines the list of available msgpcode. +func (d *Decoder) PeekCode() (byte, error) { c, err := d.s.ReadByte() if err != nil { return 0, err } - return codes.Code(c), d.s.UnreadByte() + return c, d.s.UnreadByte() +} + +// ReadFull reads exactly len(buf) bytes into the buf. +func (d *Decoder) ReadFull(buf []byte) error { + _, err := readN(d.r, buf, len(buf)) + return err } func (d *Decoder) hasNilCode() bool { code, err := d.PeekCode() - return err == nil && code == codes.Nil + return err == nil && code == msgpcode.Nil } -func (d *Decoder) readCode() (codes.Code, error) { - d.extLen = 0 +func (d *Decoder) readCode() (byte, error) { c, err := d.s.ReadByte() if err != nil { return 0, err @@ -540,7 +587,7 @@ func (d *Decoder) readCode() (codes.Code, error) { if d.rec != nil { d.rec = append(d.rec, c) } - return codes.Code(c), nil + return c, nil } func (d *Decoder) readFull(b []byte) error { @@ -549,7 +596,6 @@ func (d *Decoder) readFull(b []byte) error { return err } if d.rec != nil { - //TODO: read directly into d.rec? d.rec = append(d.rec, b...) } return nil @@ -562,7 +608,7 @@ func (d *Decoder) readN(n int) ([]byte, error) { return nil, err } if d.rec != nil { - //TODO: read directly into d.rec? + // TODO: read directly into d.rec? d.rec = append(d.rec, d.buf...) 
} return d.buf, nil diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_map.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_map.go similarity index 61% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_map.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_map.go index 16c40fe77e1..52e0526cc51 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_map.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_map.go @@ -5,9 +5,11 @@ import ( "fmt" "reflect" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) +var errArrayStruct = errors.New("msgpack: number of fields in array-encoded struct has changed") + var ( mapStringStringPtrType = reflect.TypeOf((*map[string]string)(nil)) mapStringStringType = mapStringStringPtrType.Elem() @@ -19,13 +21,13 @@ var ( ) func decodeMapValue(d *Decoder, v reflect.Value) error { - size, err := d.DecodeMapLen() + n, err := d.DecodeMapLen() if err != nil { return err } typ := v.Type() - if size == -1 { + if n == -1 { v.Set(reflect.Zero(typ)) return nil } @@ -33,33 +35,18 @@ func decodeMapValue(d *Decoder, v reflect.Value) error { if v.IsNil() { v.Set(reflect.MakeMap(typ)) } - if size == 0 { + if n == 0 { return nil } - return decodeMapValueSize(d, v, size) + return d.decodeTypedMapValue(v, n) } -func decodeMapValueSize(d *Decoder, v reflect.Value, size int) error { - typ := v.Type() - keyType := typ.Key() - valueType := typ.Elem() - - for i := 0; i < size; i++ { - mk := reflect.New(keyType).Elem() - if err := d.DecodeValue(mk); err != nil { - return err - } - - mv := reflect.New(valueType).Elem() - if err := d.DecodeValue(mv); err != nil { - return err - } - - v.SetMapIndex(mk, mv) +func (d *Decoder) decodeMapDefault() (interface{}, error) { + if d.mapDecoder != nil { + return d.mapDecoder(d) } - - return nil + return d.DecodeMap() } // DecodeMapLen 
decodes map length. Length is -1 when map is nil. @@ -69,7 +56,7 @@ func (d *Decoder) DecodeMapLen() (int, error) { return 0, err } - if codes.IsExt(c) { + if msgpcode.IsExt(c) { if err = d.skipExtHeader(c); err != nil { return 0, err } @@ -82,37 +69,22 @@ func (d *Decoder) DecodeMapLen() (int, error) { return d.mapLen(c) } -func (d *Decoder) mapLen(c codes.Code) (int, error) { - size, err := d._mapLen(c) - err = expandInvalidCodeMapLenError(c, err) - return size, err -} - -func (d *Decoder) _mapLen(c codes.Code) (int, error) { - if c == codes.Nil { +func (d *Decoder) mapLen(c byte) (int, error) { + if c == msgpcode.Nil { return -1, nil } - if c >= codes.FixedMapLow && c <= codes.FixedMapHigh { - return int(c & codes.FixedMapMask), nil + if c >= msgpcode.FixedMapLow && c <= msgpcode.FixedMapHigh { + return int(c & msgpcode.FixedMapMask), nil } - if c == codes.Map16 { + if c == msgpcode.Map16 { size, err := d.uint16() return int(size), err } - if c == codes.Map32 { + if c == msgpcode.Map32 { size, err := d.uint32() return int(size), err } - return 0, errInvalidCode -} - -var errInvalidCode = errors.New("invalid code") - -func expandInvalidCodeMapLenError(c codes.Code, err error) error { - if err == errInvalidCode { - return fmt.Errorf("msgpack: invalid code=%x decoding map length", c) - } - return err + return 0, unexpectedCodeError{code: c, hint: "map length"} } func decodeMapStringStringValue(d *Decoder, v reflect.Value) error { @@ -157,61 +129,79 @@ func decodeMapStringInterfaceValue(d *Decoder, v reflect.Value) error { } func (d *Decoder) decodeMapStringInterfacePtr(ptr *map[string]interface{}) error { - n, err := d.DecodeMapLen() + m, err := d.DecodeMap() if err != nil { return err } - if n == -1 { - *ptr = nil - return nil + *ptr = m + return nil +} + +func (d *Decoder) DecodeMap() (map[string]interface{}, error) { + n, err := d.DecodeMapLen() + if err != nil { + return nil, err } - m := *ptr - if m == nil { - *ptr = make(map[string]interface{}, min(n, 
maxMapSize)) - m = *ptr + if n == -1 { + return nil, nil } + m := make(map[string]interface{}, min(n, maxMapSize)) + for i := 0; i < n; i++ { mk, err := d.DecodeString() if err != nil { - return err + return nil, err } mv, err := d.decodeInterfaceCond() if err != nil { - return err + return nil, err } m[mk] = mv } - return nil + return m, nil } -var errUnsupportedMapKey = errors.New("msgpack: unsupported map key") - -func (d *Decoder) DecodeMap() (interface{}, error) { - if d.decodeMapFunc != nil { - return d.decodeMapFunc(d) - } - - size, err := d.DecodeMapLen() +func (d *Decoder) DecodeUntypedMap() (map[interface{}]interface{}, error) { + n, err := d.DecodeMapLen() if err != nil { return nil, err } - if size == -1 { + + if n == -1 { return nil, nil } - if size == 0 { - return make(map[string]interface{}), nil + + m := make(map[interface{}]interface{}, min(n, maxMapSize)) + + for i := 0; i < n; i++ { + mk, err := d.decodeInterfaceCond() + if err != nil { + return nil, err + } + + mv, err := d.decodeInterfaceCond() + if err != nil { + return nil, err + } + + m[mk] = mv } - code, err := d.PeekCode() + return m, nil +} + +// DecodeTypedMap decodes a typed map. Typed map is a map that has a fixed type for keys and values. +// Key and value types may be different. 
+func (d *Decoder) DecodeTypedMap() (interface{}, error) { + n, err := d.DecodeMapLen() if err != nil { return nil, err } - - if codes.IsString(code) || codes.IsBin(code) { - return d.decodeMapStringInterfaceSize(size) + if n <= 0 { + return nil, nil } key, err := d.decodeInterfaceCond() @@ -228,40 +218,44 @@ func (d *Decoder) DecodeMap() (interface{}, error) { valueType := reflect.TypeOf(value) if !keyType.Comparable() { - return nil, errUnsupportedMapKey + return nil, fmt.Errorf("msgpack: unsupported map key: %s", keyType.String()) } mapType := reflect.MapOf(keyType, valueType) mapValue := reflect.MakeMap(mapType) - mapValue.SetMapIndex(reflect.ValueOf(key), reflect.ValueOf(value)) - size-- - err = decodeMapValueSize(d, mapValue, size) - if err != nil { + n-- + if err := d.decodeTypedMapValue(mapValue, n); err != nil { return nil, err } return mapValue.Interface(), nil } -func (d *Decoder) decodeMapStringInterfaceSize(size int) (map[string]interface{}, error) { - m := make(map[string]interface{}, min(size, maxMapSize)) - for i := 0; i < size; i++ { - mk, err := d.DecodeString() - if err != nil { - return nil, err +func (d *Decoder) decodeTypedMapValue(v reflect.Value, n int) error { + typ := v.Type() + keyType := typ.Key() + valueType := typ.Elem() + + for i := 0; i < n; i++ { + mk := reflect.New(keyType).Elem() + if err := d.DecodeValue(mk); err != nil { + return err } - mv, err := d.decodeInterfaceCond() - if err != nil { - return nil, err + + mv := reflect.New(valueType).Elem() + if err := d.DecodeValue(mv); err != nil { + return err } - m[mk] = mv + + v.SetMapIndex(mk, mv) } - return m, nil + + return nil } -func (d *Decoder) skipMap(c codes.Code) error { +func (d *Decoder) skipMap(c byte) error { n, err := d.mapLen(c) if err != nil { return err @@ -283,54 +277,45 @@ func decodeStructValue(d *Decoder, v reflect.Value) error { return err } - var isArray bool + n, err := d.mapLen(c) + if err == nil { + return d.decodeStruct(v, n) + } - n, err := d._mapLen(c) - 
if err != nil { - var err2 error - n, err2 = d.arrayLen(c) - if err2 != nil { - return expandInvalidCodeMapLenError(c, err) - } - isArray = true + var err2 error + n, err2 = d.arrayLen(c) + if err2 != nil { + return err } - if n == -1 { - if err = mustSet(v); err != nil { - return err - } + + if n <= 0 { v.Set(reflect.Zero(v.Type())) return nil } - var fields *fields - if d.flags&decodeUsingJSONFlag != 0 { - fields = jsonStructs.Fields(v.Type()) - } else { - fields = structs.Fields(v.Type()) + fields := structs.Fields(v.Type(), d.structTag) + if n != len(fields.List) { + return errArrayStruct } - if isArray { - for i, f := range fields.List { - if i >= n { - break - } - if err := f.DecodeValue(d, v); err != nil { - return err - } + for _, f := range fields.List { + if err := f.DecodeValue(d, v); err != nil { + return err } + } - // Skip extra values. - for i := len(fields.List); i < n; i++ { - if err := d.Skip(); err != nil { - return err - } - } + return nil +} +func (d *Decoder) decodeStruct(v reflect.Value, n int) error { + if n == -1 { + v.Set(reflect.Zero(v.Type())) return nil } + fields := structs.Fields(v.Type(), d.structTag) for i := 0; i < n; i++ { - name, err := d.DecodeString() + name, err := d.decodeStringTemp() if err != nil { return err } @@ -339,9 +324,13 @@ func decodeStructValue(d *Decoder, v reflect.Value) error { if err := f.DecodeValue(d, v); err != nil { return err } - } else if d.flags&disallowUnknownFieldsFlag != 0 { + continue + } + + if d.flags&disallowUnknownFieldsFlag != 0 { return fmt.Errorf("msgpack: unknown field %q", name) - } else if err := d.Skip(); err != nil { + } + if err := d.Skip(); err != nil { return err } } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_number.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_number.go similarity index 83% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_number.go rename to 
.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_number.go index f6b9151f01e..45d6a741868 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_number.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_number.go @@ -5,7 +5,7 @@ import ( "math" "reflect" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) func (d *Decoder) skipN(n int) error { @@ -18,7 +18,7 @@ func (d *Decoder) uint8() (uint8, error) { if err != nil { return 0, err } - return uint8(c), nil + return c, nil } func (d *Decoder) int8() (int8, error) { @@ -87,33 +87,33 @@ func (d *Decoder) DecodeUint64() (uint64, error) { return d.uint(c) } -func (d *Decoder) uint(c codes.Code) (uint64, error) { - if c == codes.Nil { +func (d *Decoder) uint(c byte) (uint64, error) { + if c == msgpcode.Nil { return 0, nil } - if codes.IsFixedNum(c) { + if msgpcode.IsFixedNum(c) { return uint64(int8(c)), nil } switch c { - case codes.Uint8: + case msgpcode.Uint8: n, err := d.uint8() return uint64(n), err - case codes.Int8: + case msgpcode.Int8: n, err := d.int8() return uint64(n), err - case codes.Uint16: + case msgpcode.Uint16: n, err := d.uint16() return uint64(n), err - case codes.Int16: + case msgpcode.Int16: n, err := d.int16() return uint64(n), err - case codes.Uint32: + case msgpcode.Uint32: n, err := d.uint32() return uint64(n), err - case codes.Int32: + case msgpcode.Int32: n, err := d.int32() return uint64(n), err - case codes.Uint64, codes.Int64: + case msgpcode.Uint64, msgpcode.Int64: return d.uint64() } return 0, fmt.Errorf("msgpack: invalid code=%x decoding uint64", c) @@ -129,33 +129,33 @@ func (d *Decoder) DecodeInt64() (int64, error) { return d.int(c) } -func (d *Decoder) int(c codes.Code) (int64, error) { - if c == codes.Nil { +func (d *Decoder) int(c byte) (int64, error) { + if c == msgpcode.Nil { return 0, nil } - if codes.IsFixedNum(c) { + if msgpcode.IsFixedNum(c) { return int64(int8(c)), nil } 
switch c { - case codes.Uint8: + case msgpcode.Uint8: n, err := d.uint8() return int64(n), err - case codes.Int8: + case msgpcode.Int8: n, err := d.uint8() return int64(int8(n)), err - case codes.Uint16: + case msgpcode.Uint16: n, err := d.uint16() return int64(n), err - case codes.Int16: + case msgpcode.Int16: n, err := d.uint16() return int64(int16(n)), err - case codes.Uint32: + case msgpcode.Uint32: n, err := d.uint32() return int64(n), err - case codes.Int32: + case msgpcode.Int32: n, err := d.uint32() return int64(int32(n)), err - case codes.Uint64, codes.Int64: + case msgpcode.Uint64, msgpcode.Int64: n, err := d.uint64() return int64(n), err } @@ -170,8 +170,8 @@ func (d *Decoder) DecodeFloat32() (float32, error) { return d.float32(c) } -func (d *Decoder) float32(c codes.Code) (float32, error) { - if c == codes.Float { +func (d *Decoder) float32(c byte) (float32, error) { + if c == msgpcode.Float { n, err := d.uint32() if err != nil { return 0, err @@ -195,15 +195,15 @@ func (d *Decoder) DecodeFloat64() (float64, error) { return d.float64(c) } -func (d *Decoder) float64(c codes.Code) (float64, error) { +func (d *Decoder) float64(c byte) (float64, error) { switch c { - case codes.Float: + case msgpcode.Float: n, err := d.float32(c) if err != nil { return 0, err } return float64(n), nil - case codes.Double: + case msgpcode.Double: n, err := d.uint64() if err != nil { return 0, err @@ -263,9 +263,6 @@ func decodeFloat32Value(d *Decoder, v reflect.Value) error { if err != nil { return err } - if err = mustSet(v); err != nil { - return err - } v.SetFloat(float64(f)) return nil } @@ -275,9 +272,6 @@ func decodeFloat64Value(d *Decoder, v reflect.Value) error { if err != nil { return err } - if err = mustSet(v); err != nil { - return err - } v.SetFloat(f) return nil } @@ -287,9 +281,6 @@ func decodeInt64Value(d *Decoder, v reflect.Value) error { if err != nil { return err } - if err = mustSet(v); err != nil { - return err - } v.SetInt(n) return nil } @@ -299,9 
+290,6 @@ func decodeUint64Value(d *Decoder, v reflect.Value) error { if err != nil { return err } - if err = mustSet(v); err != nil { - return err - } v.SetUint(n) return nil } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_query.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_query.go similarity index 89% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_query.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_query.go index 80cd80e7852..c302ed1f33e 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_query.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_query.go @@ -5,7 +5,7 @@ import ( "strconv" "strings" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) type queryResult struct { @@ -57,9 +57,9 @@ func (d *Decoder) query(q *queryResult) error { } switch { - case code == codes.Map16 || code == codes.Map32 || codes.IsFixedMap(code): + case code == msgpcode.Map16 || code == msgpcode.Map32 || msgpcode.IsFixedMap(code): err = d.queryMapKey(q) - case code == codes.Array16 || code == codes.Array32 || codes.IsFixedArray(code): + case code == msgpcode.Array16 || code == msgpcode.Array32 || msgpcode.IsFixedArray(code): err = d.queryArrayIndex(q) default: err = fmt.Errorf("msgpack: unsupported code=%x decoding key=%q", code, q.key) @@ -77,12 +77,12 @@ func (d *Decoder) queryMapKey(q *queryResult) error { } for i := 0; i < n; i++ { - k, err := d.bytesNoCopy() + key, err := d.decodeStringTemp() if err != nil { return err } - if string(k) == q.key { + if key == q.key { if err := d.query(q); err != nil { return err } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_slice.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_slice.go similarity index 88% rename from 
.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_slice.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_slice.go index adf17ae5cfb..db6f7c5472d 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_slice.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_slice.go @@ -4,7 +4,7 @@ import ( "fmt" "reflect" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) var sliceStringPtrType = reflect.TypeOf((*[]string)(nil)) @@ -18,17 +18,17 @@ func (d *Decoder) DecodeArrayLen() (int, error) { return d.arrayLen(c) } -func (d *Decoder) arrayLen(c codes.Code) (int, error) { - if c == codes.Nil { +func (d *Decoder) arrayLen(c byte) (int, error) { + if c == msgpcode.Nil { return -1, nil - } else if c >= codes.FixedArrayLow && c <= codes.FixedArrayHigh { - return int(c & codes.FixedArrayMask), nil + } else if c >= msgpcode.FixedArrayLow && c <= msgpcode.FixedArrayHigh { + return int(c & msgpcode.FixedArrayMask), nil } switch c { - case codes.Array16: + case msgpcode.Array16: n, err := d.uint16() return int(n), err - case codes.Array32: + case msgpcode.Array32: n, err := d.uint32() return int(n), err } @@ -154,7 +154,7 @@ func (d *Decoder) DecodeSlice() ([]interface{}, error) { return d.decodeSlice(c) } -func (d *Decoder) decodeSlice(c codes.Code) ([]interface{}, error) { +func (d *Decoder) decodeSlice(c byte) ([]interface{}, error) { n, err := d.arrayLen(c) if err != nil { return nil, err @@ -175,7 +175,7 @@ func (d *Decoder) decodeSlice(c codes.Code) ([]interface{}, error) { return s, nil } -func (d *Decoder) skipSlice(c codes.Code) error { +func (d *Decoder) skipSlice(c byte) error { n, err := d.arrayLen(c) if err != nil { return err diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_string.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_string.go similarity index 68% rename from 
.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_string.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_string.go index c5adf3865cb..e837e08bf14 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_string.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_string.go @@ -4,34 +4,38 @@ import ( "fmt" "reflect" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) -func (d *Decoder) bytesLen(c codes.Code) (int, error) { - if c == codes.Nil { +func (d *Decoder) bytesLen(c byte) (int, error) { + if c == msgpcode.Nil { return -1, nil } - if codes.IsFixedString(c) { - return int(c & codes.FixedStrMask), nil + if msgpcode.IsFixedString(c) { + return int(c & msgpcode.FixedStrMask), nil } switch c { - case codes.Str8, codes.Bin8: + case msgpcode.Str8, msgpcode.Bin8: n, err := d.uint8() return int(n), err - case codes.Str16, codes.Bin16: + case msgpcode.Str16, msgpcode.Bin16: n, err := d.uint16() return int(n), err - case codes.Str32, codes.Bin32: + case msgpcode.Str32, msgpcode.Bin32: n, err := d.uint32() return int(n), err } - return 0, fmt.Errorf("msgpack: invalid code=%x decoding bytes length", c) + return 0, fmt.Errorf("msgpack: invalid code=%x decoding string/bytes length", c) } func (d *Decoder) DecodeString() (string, error) { + if intern := d.flags&useInternedStringsFlag != 0; intern || len(d.dict) > 0 { + return d.decodeInternedString(intern) + } + c, err := d.readCode() if err != nil { return "", err @@ -39,7 +43,7 @@ func (d *Decoder) DecodeString() (string, error) { return d.string(c) } -func (d *Decoder) string(c codes.Code) (string, error) { +func (d *Decoder) string(c byte) (string, error) { n, err := d.bytesLen(c) if err != nil { return "", err @@ -56,15 +60,10 @@ func (d *Decoder) stringWithLen(n int) (string, error) { } func decodeStringValue(d *Decoder, v reflect.Value) error { - if err := mustSet(v); err != nil { - return err 
- } - s, err := d.DecodeString() if err != nil { return err } - v.SetString(s) return nil } @@ -85,7 +84,7 @@ func (d *Decoder) DecodeBytes() ([]byte, error) { return d.bytes(c, nil) } -func (d *Decoder) bytes(c codes.Code, b []byte) ([]byte, error) { +func (d *Decoder) bytes(c byte, b []byte) ([]byte, error) { n, err := d.bytesLen(c) if err != nil { return nil, err @@ -96,19 +95,30 @@ func (d *Decoder) bytes(c codes.Code, b []byte) ([]byte, error) { return readN(d.r, b, n) } -func (d *Decoder) bytesNoCopy() ([]byte, error) { +func (d *Decoder) decodeStringTemp() (string, error) { + if intern := d.flags&useInternedStringsFlag != 0; intern || len(d.dict) > 0 { + return d.decodeInternedString(intern) + } + c, err := d.readCode() if err != nil { - return nil, err + return "", err } + n, err := d.bytesLen(c) if err != nil { - return nil, err + return "", err } if n == -1 { - return nil, nil + return "", nil + } + + b, err := d.readN(n) + if err != nil { + return "", err } - return d.readN(n) + + return bytesToString(b), nil } func (d *Decoder) decodeBytesPtr(ptr *[]byte) error { @@ -119,7 +129,7 @@ func (d *Decoder) decodeBytesPtr(ptr *[]byte) error { return d.bytesPtr(c, ptr) } -func (d *Decoder) bytesPtr(c codes.Code, ptr *[]byte) error { +func (d *Decoder) bytesPtr(c byte, ptr *[]byte) error { n, err := d.bytesLen(c) if err != nil { return err @@ -133,7 +143,7 @@ func (d *Decoder) bytesPtr(c codes.Code, ptr *[]byte) error { return err } -func (d *Decoder) skipBytes(c codes.Code) error { +func (d *Decoder) skipBytes(c byte) error { n, err := d.bytesLen(c) if err != nil { return err @@ -145,10 +155,6 @@ func (d *Decoder) skipBytes(c codes.Code) error { } func decodeBytesValue(d *Decoder, v reflect.Value) error { - if err := mustSet(v); err != nil { - return err - } - c, err := d.readCode() if err != nil { return err diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_value.go 
b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_value.go similarity index 68% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_value.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_value.go index 810d3be8f92..d2ff2aea50f 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/decode_value.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/decode_value.go @@ -7,8 +7,10 @@ import ( "reflect" ) -var interfaceType = reflect.TypeOf((*interface{})(nil)).Elem() -var stringType = reflect.TypeOf((*string)(nil)).Elem() +var ( + interfaceType = reflect.TypeOf((*interface{})(nil)).Elem() + stringType = reflect.TypeOf((*string)(nil)).Elem() +) var valueDecoders []decoderFunc @@ -43,13 +45,6 @@ func init() { } } -func mustSet(v reflect.Value) error { - if !v.CanSet() { - return fmt.Errorf("msgpack: Decode(nonsettable %s)", v.Type()) - } - return nil -} - func getDecoder(typ reflect.Type) decoderFunc { if v, ok := typeDecMap.Load(typ); ok { return v.(decoderFunc) @@ -62,33 +57,45 @@ func getDecoder(typ reflect.Type) decoderFunc { func _getDecoder(typ reflect.Type) decoderFunc { kind := typ.Kind() + if kind == reflect.Ptr { + if _, ok := typeDecMap.Load(typ.Elem()); ok { + return ptrValueDecoder(typ) + } + } + if typ.Implements(customDecoderType) { - return decodeCustomValue + return nilAwareDecoder(typ, decodeCustomValue) } if typ.Implements(unmarshalerType) { - return unmarshalValue + return nilAwareDecoder(typ, unmarshalValue) } if typ.Implements(binaryUnmarshalerType) { - return unmarshalBinaryValue + return nilAwareDecoder(typ, unmarshalBinaryValue) + } + if typ.Implements(textUnmarshalerType) { + return nilAwareDecoder(typ, unmarshalTextValue) } // Addressable struct field value. 
if kind != reflect.Ptr { ptr := reflect.PtrTo(typ) if ptr.Implements(customDecoderType) { - return decodeCustomValueAddr + return addrDecoder(nilAwareDecoder(typ, decodeCustomValue)) } if ptr.Implements(unmarshalerType) { - return unmarshalValueAddr + return addrDecoder(nilAwareDecoder(typ, unmarshalValue)) } if ptr.Implements(binaryUnmarshalerType) { - return unmarshalBinaryValueAddr + return addrDecoder(nilAwareDecoder(typ, unmarshalBinaryValue)) + } + if ptr.Implements(textUnmarshalerType) { + return addrDecoder(nilAwareDecoder(typ, unmarshalTextValue)) } } switch kind { case reflect.Ptr: - return ptrDecoderFunc(typ) + return ptrValueDecoder(typ) case reflect.Slice: elem := typ.Elem() if elem.Kind() == reflect.Uint8 { @@ -115,85 +122,50 @@ func _getDecoder(typ reflect.Type) decoderFunc { return valueDecoders[kind] } -func ptrDecoderFunc(typ reflect.Type) decoderFunc { +func ptrValueDecoder(typ reflect.Type) decoderFunc { decoder := getDecoder(typ.Elem()) return func(d *Decoder, v reflect.Value) error { if d.hasNilCode() { - if err := mustSet(v); err != nil { - return err - } if !v.IsNil() { v.Set(reflect.Zero(v.Type())) } return d.DecodeNil() } if v.IsNil() { - if err := mustSet(v); err != nil { - return err - } v.Set(reflect.New(v.Type().Elem())) } return decoder(d, v.Elem()) } } -func decodeCustomValueAddr(d *Decoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) - } - return decodeCustomValue(d, v.Addr()) -} - -func decodeCustomValue(d *Decoder, v reflect.Value) error { - if d.hasNilCode() { - return d.decodeNilValue(v) - } - - if v.IsNil() { - v.Set(reflect.New(v.Type().Elem())) +func addrDecoder(fn decoderFunc) decoderFunc { + return func(d *Decoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) + } + return fn(d, v.Addr()) } - - decoder := v.Interface().(CustomDecoder) - return decoder.DecodeMsgpack(d) } -func 
unmarshalValueAddr(d *Decoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) +func nilAwareDecoder(typ reflect.Type, fn decoderFunc) decoderFunc { + if nilable(typ.Kind()) { + return func(d *Decoder, v reflect.Value) error { + if d.hasNilCode() { + return d.decodeNilValue(v) + } + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + return fn(d, v) + } } - return unmarshalValue(d, v.Addr()) -} -func unmarshalValue(d *Decoder, v reflect.Value) error { - if d.extLen == 0 || d.extLen == 1 { + return func(d *Decoder, v reflect.Value) error { if d.hasNilCode() { return d.decodeNilValue(v) } + return fn(d, v) } - - if v.IsNil() { - v.Set(reflect.New(v.Type().Elem())) - } - - var b []byte - - if d.extLen != 0 { - var err error - b, err = d.readN(d.extLen) - if err != nil { - return err - } - } else { - d.rec = make([]byte, 0, 64) - if err := d.Skip(); err != nil { - return err - } - b = d.rec - d.rec = nil - } - - unmarshaler := v.Interface().(Unmarshaler) - return unmarshaler.UnmarshalMsgpack(b) } func decodeBoolValue(d *Decoder, v reflect.Value) error { @@ -201,9 +173,6 @@ func decodeBoolValue(d *Decoder, v reflect.Value) error { if err != nil { return err } - if err = mustSet(v); err != nil { - return err - } v.SetBool(flag) return nil } @@ -212,16 +181,7 @@ func decodeInterfaceValue(d *Decoder, v reflect.Value) error { if v.IsNil() { return d.interfaceValue(v) } - - elem := v.Elem() - if !elem.CanAddr() { - if d.hasNilCode() { - v.Set(reflect.Zero(v.Type())) - return d.DecodeNil() - } - } - - return d.DecodeValue(elem) + return d.DecodeValue(v.Elem()) } func (d *Decoder) interfaceValue(v reflect.Value) error { @@ -250,22 +210,26 @@ func decodeUnsupportedValue(d *Decoder, v reflect.Value) error { //------------------------------------------------------------------------------ -func unmarshalBinaryValueAddr(d *Decoder, v reflect.Value) error { - if !v.CanAddr() { - return fmt.Errorf("msgpack: 
Decode(nonaddressable %T)", v.Interface()) - } - return unmarshalBinaryValue(d, v.Addr()) +func decodeCustomValue(d *Decoder, v reflect.Value) error { + decoder := v.Interface().(CustomDecoder) + return decoder.DecodeMsgpack(d) } -func unmarshalBinaryValue(d *Decoder, v reflect.Value) error { - if d.hasNilCode() { - return d.decodeNilValue(v) - } +func unmarshalValue(d *Decoder, v reflect.Value) error { + var b []byte - if v.IsNil() { - v.Set(reflect.New(v.Type().Elem())) + d.rec = make([]byte, 0, 64) + if err := d.Skip(); err != nil { + return err } + b = d.rec + d.rec = nil + + unmarshaler := v.Interface().(Unmarshaler) + return unmarshaler.UnmarshalMsgpack(b) +} +func unmarshalBinaryValue(d *Decoder, v reflect.Value) error { data, err := d.DecodeBytes() if err != nil { return err @@ -274,3 +238,13 @@ func unmarshalBinaryValue(d *Decoder, v reflect.Value) error { unmarshaler := v.Interface().(encoding.BinaryUnmarshaler) return unmarshaler.UnmarshalBinary(data) } + +func unmarshalTextValue(d *Decoder, v reflect.Value) error { + data, err := d.DecodeBytes() + if err != nil { + return err + } + + unmarshaler := v.Interface().(encoding.TextUnmarshaler) + return unmarshaler.UnmarshalText(data) +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode.go similarity index 55% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode.go index 37f098701ea..0ef6212e63b 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode.go @@ -7,15 +7,16 @@ import ( "sync" "time" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) const ( sortMapKeysFlag uint32 = 1 << iota - structAsArrayFlag - encodeUsingJSONFlag + arrayEncodedStructsFlag useCompactIntsFlag 
useCompactFloatsFlag + useInternedStringsFlag + omitEmptyFlag ) type writer interface { @@ -25,23 +26,16 @@ type writer interface { type byteWriter struct { io.Writer - - buf [1]byte -} - -func newByteWriter(w io.Writer) *byteWriter { - bw := new(byteWriter) - bw.Reset(w) - return bw } -func (bw *byteWriter) Reset(w io.Writer) { - bw.Writer = w +func newByteWriter(w io.Writer) byteWriter { + return byteWriter{ + Writer: w, + } } -func (bw *byteWriter) WriteByte(c byte) error { - bw.buf[0] = c - _, err := bw.Write(bw.buf[:]) +func (bw byteWriter) WriteByte(c byte) error { + _, err := bw.Write([]byte{c}) return err } @@ -53,9 +47,18 @@ var encPool = sync.Pool{ }, } +func GetEncoder() *Encoder { + return encPool.Get().(*Encoder) +} + +func PutEncoder(enc *Encoder) { + enc.w = nil + encPool.Put(enc) +} + // Marshal returns the MessagePack encoding of v. func Marshal(v interface{}) ([]byte, error) { - enc := encPool.Get().(*Encoder) + enc := GetEncoder() var buf bytes.Buffer enc.Reset(&buf) @@ -63,7 +66,7 @@ func Marshal(v interface{}) ([]byte, error) { err := enc.Encode(v) b := buf.Bytes() - encPool.Put(enc) + PutEncoder(enc) if err != nil { return nil, err @@ -74,49 +77,63 @@ func Marshal(v interface{}) ([]byte, error) { type Encoder struct { w writer - buf []byte - timeBuf []byte - bootstrap [9 + 12]byte + buf []byte + timeBuf []byte - intern map[string]int + dict map[string]int - flags uint32 + flags uint32 + structTag string } // NewEncoder returns a new encoder that writes to w. func NewEncoder(w io.Writer) *Encoder { - e := new(Encoder) - e.buf = e.bootstrap[:9] - e.timeBuf = e.bootstrap[9 : 9+12] + e := &Encoder{ + buf: make([]byte, 9), + } e.Reset(w) return e } +// Writer returns the Encoder's writer. +func (e *Encoder) Writer() io.Writer { + return e.w +} + +// Reset discards any buffered data, resets all state, and switches the writer to write to w. 
func (e *Encoder) Reset(w io.Writer) { + e.ResetDict(w, nil) +} + +// ResetDict is like Reset, but also resets the dict. +func (e *Encoder) ResetDict(w io.Writer, dict map[string]int) { + e.resetWriter(w) + e.flags = 0 + e.structTag = "" + e.dict = dict +} + +func (e *Encoder) WithDict(dict map[string]int, fn func(*Encoder) error) error { + oldDict := e.dict + e.dict = dict + err := fn(e) + e.dict = oldDict + return err +} + +func (e *Encoder) resetWriter(w io.Writer) { if bw, ok := w.(writer); ok { e.w = bw - } else if bw, ok := e.w.(*byteWriter); ok { - bw.Reset(w) } else { e.w = newByteWriter(w) } - - for k := range e.intern { - delete(e.intern, k) - } - - //TODO: - //e.sortMapKeys = false - //e.structAsArray = false - //e.useJSONTag = false - //e.useCompact = false } -// SortMapKeys causes the Encoder to encode map keys in increasing order. +// SetSortMapKeys causes the Encoder to encode map keys in increasing order. // Supported map types are: // - map[string]string // - map[string]interface{} -func (e *Encoder) SortMapKeys(on bool) *Encoder { +func (e *Encoder) SetSortMapKeys(on bool) *Encoder { if on { e.flags |= sortMapKeysFlag } else { @@ -125,36 +142,38 @@ func (e *Encoder) SortMapKeys(on bool) *Encoder { return e } -// StructAsArray causes the Encoder to encode Go structs as msgpack arrays. -func (e *Encoder) StructAsArray(on bool) *Encoder { +// SetCustomStructTag causes the Encoder to use a custom struct tag as +// fallback option if there is no msgpack tag. +func (e *Encoder) SetCustomStructTag(tag string) { + e.structTag = tag +} + +// SetOmitEmpty causes the Encoder to omit empty values by default. +func (e *Encoder) SetOmitEmpty(on bool) { if on { - e.flags |= structAsArrayFlag + e.flags |= omitEmptyFlag } else { - e.flags &= ^structAsArrayFlag + e.flags &= ^omitEmptyFlag } - return e } -// UseJSONTag causes the Encoder to use json struct tag as fallback option -// if there is no msgpack tag. 
-func (e *Encoder) UseJSONTag(on bool) *Encoder { +// UseArrayEncodedStructs causes the Encoder to encode Go structs as msgpack arrays. +func (e *Encoder) UseArrayEncodedStructs(on bool) { if on { - e.flags |= encodeUsingJSONFlag + e.flags |= arrayEncodedStructsFlag } else { - e.flags &= ^encodeUsingJSONFlag + e.flags &= ^arrayEncodedStructsFlag } - return e } // UseCompactEncoding causes the Encoder to chose the most compact encoding. // For example, it allows to encode small Go int64 as msgpack int8 saving 7 bytes. -func (e *Encoder) UseCompactEncoding(on bool) *Encoder { +func (e *Encoder) UseCompactInts(on bool) { if on { e.flags |= useCompactIntsFlag } else { e.flags &= ^useCompactIntsFlag } - return e } // UseCompactFloats causes the Encoder to chose a compact integer encoding @@ -167,6 +186,15 @@ func (e *Encoder) UseCompactFloats(on bool) { } } +// UseInternedStrings causes the Encoder to intern strings. +func (e *Encoder) UseInternedStrings(on bool) { + if on { + e.flags |= useInternedStringsFlag + } else { + e.flags &= ^useInternedStringsFlag + } +} + func (e *Encoder) Encode(v interface{}) error { switch v := v.(type) { case nil: @@ -176,11 +204,11 @@ func (e *Encoder) Encode(v interface{}) error { case []byte: return e.EncodeBytes(v) case int: - return e.encodeInt64Cond(int64(v)) + return e.EncodeInt(int64(v)) case int64: return e.encodeInt64Cond(v) case uint: - return e.encodeUint64Cond(uint64(v)) + return e.EncodeUint(uint64(v)) case uint64: return e.encodeUint64Cond(v) case bool: @@ -212,22 +240,22 @@ func (e *Encoder) EncodeValue(v reflect.Value) error { } func (e *Encoder) EncodeNil() error { - return e.writeCode(codes.Nil) + return e.writeCode(msgpcode.Nil) } func (e *Encoder) EncodeBool(value bool) error { if value { - return e.writeCode(codes.True) + return e.writeCode(msgpcode.True) } - return e.writeCode(codes.False) + return e.writeCode(msgpcode.False) } func (e *Encoder) EncodeDuration(d time.Duration) error { return e.EncodeInt(int64(d)) } 
-func (e *Encoder) writeCode(c codes.Code) error { - return e.w.WriteByte(byte(c)) +func (e *Encoder) writeCode(c byte) error { + return e.w.WriteByte(c) } func (e *Encoder) write(b []byte) error { diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_map.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_map.go similarity index 70% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_map.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_map.go index d9b954d8667..ba4c61be72d 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_map.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_map.go @@ -1,10 +1,11 @@ package msgpack import ( + "math" "reflect" "sort" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) func encodeMapValue(e *Encoder, v reflect.Value) error { @@ -16,11 +17,12 @@ func encodeMapValue(e *Encoder, v reflect.Value) error { return err } - for _, key := range v.MapKeys() { - if err := e.EncodeValue(key); err != nil { + iter := v.MapRange() + for iter.Next() { + if err := e.EncodeValue(iter.Key()); err != nil { return err } - if err := e.EncodeValue(v.MapIndex(key)); err != nil { + if err := e.EncodeValue(iter.Value()); err != nil { return err } } @@ -58,16 +60,20 @@ func encodeMapStringInterfaceValue(e *Encoder, v reflect.Value) error { if v.IsNil() { return e.EncodeNil() } - - if err := e.EncodeMapLen(v.Len()); err != nil { - return err - } - m := v.Convert(mapStringInterfaceType).Interface().(map[string]interface{}) if e.flags&sortMapKeysFlag != 0 { - return e.encodeSortedMapStringInterface(m) + return e.EncodeMapSorted(m) } + return e.EncodeMap(m) +} +func (e *Encoder) EncodeMap(m map[string]interface{}) error { + if m == nil { + return e.EncodeNil() + } + if err := e.EncodeMapLen(len(m)); err != nil { + return err + } for mk, mv := range m { if err := 
e.EncodeString(mk); err != nil { return err @@ -76,23 +82,30 @@ func encodeMapStringInterfaceValue(e *Encoder, v reflect.Value) error { return err } } - return nil } -func (e *Encoder) encodeSortedMapStringString(m map[string]string) error { +func (e *Encoder) EncodeMapSorted(m map[string]interface{}) error { + if m == nil { + return e.EncodeNil() + } + if err := e.EncodeMapLen(len(m)); err != nil { + return err + } + keys := make([]string, 0, len(m)) + for k := range m { keys = append(keys, k) } + sort.Strings(keys) for _, k := range keys { - err := e.EncodeString(k) - if err != nil { + if err := e.EncodeString(k); err != nil { return err } - if err = e.EncodeString(m[k]); err != nil { + if err := e.Encode(m[k]); err != nil { return err } } @@ -100,7 +113,7 @@ func (e *Encoder) encodeSortedMapStringString(m map[string]string) error { return nil } -func (e *Encoder) encodeSortedMapStringInterface(m map[string]interface{}) error { +func (e *Encoder) encodeSortedMapStringString(m map[string]string) error { keys := make([]string, 0, len(m)) for k := range m { keys = append(keys, k) @@ -112,7 +125,7 @@ func (e *Encoder) encodeSortedMapStringInterface(m map[string]interface{}) error if err != nil { return err } - if err = e.Encode(m[k]); err != nil { + if err = e.EncodeString(m[k]); err != nil { return err } } @@ -122,26 +135,20 @@ func (e *Encoder) encodeSortedMapStringInterface(m map[string]interface{}) error func (e *Encoder) EncodeMapLen(l int) error { if l < 16 { - return e.writeCode(codes.FixedMapLow | codes.Code(l)) + return e.writeCode(msgpcode.FixedMapLow | byte(l)) } - if l < 65536 { - return e.write2(codes.Map16, uint16(l)) + if l <= math.MaxUint16 { + return e.write2(msgpcode.Map16, uint16(l)) } - return e.write4(codes.Map32, uint32(l)) + return e.write4(msgpcode.Map32, uint32(l)) } func encodeStructValue(e *Encoder, strct reflect.Value) error { - var structFields *fields - if e.flags&encodeUsingJSONFlag != 0 { - structFields = 
jsonStructs.Fields(strct.Type()) - } else { - structFields = structs.Fields(strct.Type()) - } - - if e.flags&structAsArrayFlag != 0 || structFields.AsArray { + structFields := structs.Fields(strct.Type(), e.structTag) + if e.flags&arrayEncodedStructsFlag != 0 || structFields.AsArray { return encodeStructValueAsArray(e, strct, structFields.List) } - fields := structFields.OmitEmpty(strct) + fields := structFields.OmitEmpty(strct, e.flags&omitEmptyFlag != 0) if err := e.EncodeMapLen(len(fields)); err != nil { return err diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_number.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_number.go similarity index 84% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_number.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_number.go index bf3c2f851ae..63c311bfae8 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_number.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_number.go @@ -4,12 +4,12 @@ import ( "math" "reflect" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) // EncodeUint8 encodes an uint8 in 2 bytes preserving type of the number. func (e *Encoder) EncodeUint8(n uint8) error { - return e.write1(codes.Uint8, n) + return e.write1(msgpcode.Uint8, n) } func (e *Encoder) encodeUint8Cond(n uint8) error { @@ -21,7 +21,7 @@ func (e *Encoder) encodeUint8Cond(n uint8) error { // EncodeUint16 encodes an uint16 in 3 bytes preserving type of the number. func (e *Encoder) EncodeUint16(n uint16) error { - return e.write2(codes.Uint16, n) + return e.write2(msgpcode.Uint16, n) } func (e *Encoder) encodeUint16Cond(n uint16) error { @@ -33,7 +33,7 @@ func (e *Encoder) encodeUint16Cond(n uint16) error { // EncodeUint32 encodes an uint16 in 5 bytes preserving type of the number. 
func (e *Encoder) EncodeUint32(n uint32) error { - return e.write4(codes.Uint32, n) + return e.write4(msgpcode.Uint32, n) } func (e *Encoder) encodeUint32Cond(n uint32) error { @@ -45,7 +45,7 @@ func (e *Encoder) encodeUint32Cond(n uint32) error { // EncodeUint64 encodes an uint16 in 9 bytes preserving type of the number. func (e *Encoder) EncodeUint64(n uint64) error { - return e.write8(codes.Uint64, n) + return e.write8(msgpcode.Uint64, n) } func (e *Encoder) encodeUint64Cond(n uint64) error { @@ -57,7 +57,7 @@ func (e *Encoder) encodeUint64Cond(n uint64) error { // EncodeInt8 encodes an int8 in 2 bytes preserving type of the number. func (e *Encoder) EncodeInt8(n int8) error { - return e.write1(codes.Int8, uint8(n)) + return e.write1(msgpcode.Int8, uint8(n)) } func (e *Encoder) encodeInt8Cond(n int8) error { @@ -69,7 +69,7 @@ func (e *Encoder) encodeInt8Cond(n int8) error { // EncodeInt16 encodes an int16 in 3 bytes preserving type of the number. func (e *Encoder) EncodeInt16(n int16) error { - return e.write2(codes.Int16, uint16(n)) + return e.write2(msgpcode.Int16, uint16(n)) } func (e *Encoder) encodeInt16Cond(n int16) error { @@ -81,7 +81,7 @@ func (e *Encoder) encodeInt16Cond(n int16) error { // EncodeInt32 encodes an int32 in 5 bytes preserving type of the number. func (e *Encoder) EncodeInt32(n int32) error { - return e.write4(codes.Int32, uint32(n)) + return e.write4(msgpcode.Int32, uint32(n)) } func (e *Encoder) encodeInt32Cond(n int32) error { @@ -93,7 +93,7 @@ func (e *Encoder) encodeInt32Cond(n int32) error { // EncodeInt64 encodes an int64 in 9 bytes preserving type of the number. 
func (e *Encoder) EncodeInt64(n int64) error { - return e.write8(codes.Int64, uint64(n)) + return e.write8(msgpcode.Int64, uint64(n)) } func (e *Encoder) encodeInt64Cond(n int64) error { @@ -127,7 +127,7 @@ func (e *Encoder) EncodeInt(n int64) error { if n >= 0 { return e.EncodeUint(uint64(n)) } - if n >= int64(int8(codes.NegFixedNumLow)) { + if n >= int64(int8(msgpcode.NegFixedNumLow)) { return e.w.WriteByte(byte(n)) } if n >= math.MinInt8 { @@ -148,7 +148,7 @@ func (e *Encoder) EncodeFloat32(n float32) error { return e.EncodeInt(int64(n)) } } - return e.write4(codes.Float, math.Float32bits(n)) + return e.write4(msgpcode.Float, math.Float32bits(n)) } func (e *Encoder) EncodeFloat64(n float64) error { @@ -161,27 +161,27 @@ func (e *Encoder) EncodeFloat64(n float64) error { return e.EncodeInt(int64(n)) } } - return e.write8(codes.Double, math.Float64bits(n)) + return e.write8(msgpcode.Double, math.Float64bits(n)) } -func (e *Encoder) write1(code codes.Code, n uint8) error { +func (e *Encoder) write1(code byte, n uint8) error { e.buf = e.buf[:2] - e.buf[0] = byte(code) + e.buf[0] = code e.buf[1] = n return e.write(e.buf) } -func (e *Encoder) write2(code codes.Code, n uint16) error { +func (e *Encoder) write2(code byte, n uint16) error { e.buf = e.buf[:3] - e.buf[0] = byte(code) + e.buf[0] = code e.buf[1] = byte(n >> 8) e.buf[2] = byte(n) return e.write(e.buf) } -func (e *Encoder) write4(code codes.Code, n uint32) error { +func (e *Encoder) write4(code byte, n uint32) error { e.buf = e.buf[:5] - e.buf[0] = byte(code) + e.buf[0] = code e.buf[1] = byte(n >> 24) e.buf[2] = byte(n >> 16) e.buf[3] = byte(n >> 8) @@ -189,9 +189,9 @@ func (e *Encoder) write4(code codes.Code, n uint32) error { return e.write(e.buf) } -func (e *Encoder) write8(code codes.Code, n uint64) error { +func (e *Encoder) write8(code byte, n uint64) error { e.buf = e.buf[:9] - e.buf[0] = byte(code) + e.buf[0] = code e.buf[1] = byte(n >> 56) e.buf[2] = byte(n >> 48) e.buf[3] = byte(n >> 40) @@ -203,6 
+203,14 @@ func (e *Encoder) write8(code codes.Code, n uint64) error { return e.write(e.buf) } +func encodeUintValue(e *Encoder, v reflect.Value) error { + return e.EncodeUint(v.Uint()) +} + +func encodeIntValue(e *Encoder, v reflect.Value) error { + return e.EncodeInt(v.Int()) +} + func encodeUint8CondValue(e *Encoder, v reflect.Value) error { return e.encodeUint8Cond(uint8(v.Uint())) } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_slice.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_slice.go similarity index 64% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_slice.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_slice.go index 69a9618e064..ca46eadae51 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_slice.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_slice.go @@ -1,12 +1,13 @@ package msgpack import ( + "math" "reflect" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) -var sliceStringType = reflect.TypeOf(([]string)(nil)) +var stringSliceType = reflect.TypeOf(([]string)(nil)) func encodeStringValue(e *Encoder, v reflect.Value) error { return e.EncodeString(v.String()) @@ -42,29 +43,36 @@ func grow(b []byte, n int) []byte { func (e *Encoder) EncodeBytesLen(l int) error { if l < 256 { - return e.write1(codes.Bin8, uint8(l)) + return e.write1(msgpcode.Bin8, uint8(l)) } - if l < 65536 { - return e.write2(codes.Bin16, uint16(l)) + if l <= math.MaxUint16 { + return e.write2(msgpcode.Bin16, uint16(l)) } - return e.write4(codes.Bin32, uint32(l)) + return e.write4(msgpcode.Bin32, uint32(l)) } -func (e *Encoder) encodeStrLen(l int) error { +func (e *Encoder) encodeStringLen(l int) error { if l < 32 { - return e.writeCode(codes.FixedStrLow | codes.Code(l)) + return e.writeCode(msgpcode.FixedStrLow | byte(l)) } if l < 256 { - return e.write1(codes.Str8, uint8(l)) 
+ return e.write1(msgpcode.Str8, uint8(l)) } - if l < 65536 { - return e.write2(codes.Str16, uint16(l)) + if l <= math.MaxUint16 { + return e.write2(msgpcode.Str16, uint16(l)) } - return e.write4(codes.Str32, uint32(l)) + return e.write4(msgpcode.Str32, uint32(l)) } func (e *Encoder) EncodeString(v string) error { - if err := e.encodeStrLen(len(v)); err != nil { + if intern := e.flags&useInternedStringsFlag != 0; intern || len(e.dict) > 0 { + return e.encodeInternedString(v, intern) + } + return e.encodeNormalString(v) +} + +func (e *Encoder) encodeNormalString(v string) error { + if err := e.encodeStringLen(len(v)); err != nil { return err } return e.writeString(v) @@ -82,16 +90,16 @@ func (e *Encoder) EncodeBytes(v []byte) error { func (e *Encoder) EncodeArrayLen(l int) error { if l < 16 { - return e.writeCode(codes.FixedArrayLow | codes.Code(l)) + return e.writeCode(msgpcode.FixedArrayLow | byte(l)) } - if l < 65536 { - return e.write2(codes.Array16, uint16(l)) + if l <= math.MaxUint16 { + return e.write2(msgpcode.Array16, uint16(l)) } - return e.write4(codes.Array32, uint32(l)) + return e.write4(msgpcode.Array32, uint32(l)) } func encodeStringSliceValue(e *Encoder, v reflect.Value) error { - ss := v.Convert(sliceStringType).Interface().([]string) + ss := v.Convert(stringSliceType).Interface().([]string) return e.encodeStringSlice(ss) } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_value.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_value.go similarity index 82% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_value.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_value.go index 335fcdb7ed6..48cf489fa1f 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/encode_value.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/encode_value.go @@ -12,12 +12,12 @@ var valueEncoders []encoderFunc func init() { valueEncoders = 
[]encoderFunc{ reflect.Bool: encodeBoolValue, - reflect.Int: encodeInt64CondValue, + reflect.Int: encodeIntValue, reflect.Int8: encodeInt8CondValue, reflect.Int16: encodeInt16CondValue, reflect.Int32: encodeInt32CondValue, reflect.Int64: encodeInt64CondValue, - reflect.Uint: encodeUint64CondValue, + reflect.Uint: encodeUintValue, reflect.Uint8: encodeUint8CondValue, reflect.Uint16: encodeUint16CondValue, reflect.Uint32: encodeUint32CondValue, @@ -66,6 +66,9 @@ func _getEncoder(typ reflect.Type) encoderFunc { if typ.Implements(binaryMarshalerType) { return marshalBinaryValue } + if typ.Implements(textMarshalerType) { + return marshalTextValue + } // Addressable struct field value. if kind != reflect.Ptr { @@ -77,7 +80,10 @@ func _getEncoder(typ reflect.Type) encoderFunc { return marshalValuePtr } if ptr.Implements(binaryMarshalerType) { - return marshalBinaryValuePtr + return marshalBinaryValueAddr + } + if ptr.Implements(textMarshalerType) { + return marshalTextValueAddr } } @@ -133,7 +139,7 @@ func encodeCustomValuePtr(e *Encoder, v reflect.Value) error { } func encodeCustomValue(e *Encoder, v reflect.Value) error { - if nilable(v) && v.IsNil() { + if nilable(v.Kind()) && v.IsNil() { return e.EncodeNil() } @@ -149,7 +155,7 @@ func marshalValuePtr(e *Encoder, v reflect.Value) error { } func marshalValue(e *Encoder, v reflect.Value) error { - if nilable(v) && v.IsNil() { + if nilable(v.Kind()) && v.IsNil() { return e.EncodeNil() } @@ -184,8 +190,8 @@ func encodeUnsupportedValue(e *Encoder, v reflect.Value) error { return fmt.Errorf("msgpack: Encode(unsupported %s)", v.Type()) } -func nilable(v reflect.Value) bool { - switch v.Kind() { +func nilable(kind reflect.Kind) bool { + switch kind { case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: return true } @@ -194,7 +200,7 @@ func nilable(v reflect.Value) bool { //------------------------------------------------------------------------------ -func marshalBinaryValuePtr(e 
*Encoder, v reflect.Value) error { +func marshalBinaryValueAddr(e *Encoder, v reflect.Value) error { if !v.CanAddr() { return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) } @@ -202,7 +208,7 @@ func marshalBinaryValuePtr(e *Encoder, v reflect.Value) error { } func marshalBinaryValue(e *Encoder, v reflect.Value) error { - if nilable(v) && v.IsNil() { + if nilable(v.Kind()) && v.IsNil() { return e.EncodeNil() } @@ -214,3 +220,26 @@ func marshalBinaryValue(e *Encoder, v reflect.Value) error { return e.EncodeBytes(data) } + +//------------------------------------------------------------------------------ + +func marshalTextValueAddr(e *Encoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Encode(non-addressable %T)", v.Interface()) + } + return marshalTextValue(e, v.Addr()) +} + +func marshalTextValue(e *Encoder, v reflect.Value) error { + if nilable(v.Kind()) && v.IsNil() { + return e.EncodeNil() + } + + marshaler := v.Interface().(encoding.TextMarshaler) + data, err := marshaler.MarshalText() + if err != nil { + return err + } + + return e.EncodeBytes(data) +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/ext.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/ext.go new file mode 100644 index 00000000000..76e11603d92 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/ext.go @@ -0,0 +1,303 @@ +package msgpack + +import ( + "fmt" + "math" + "reflect" + + "github.com/vmihailenco/msgpack/v5/msgpcode" +) + +type extInfo struct { + Type reflect.Type + Decoder func(d *Decoder, v reflect.Value, extLen int) error +} + +var extTypes = make(map[int8]*extInfo) + +type MarshalerUnmarshaler interface { + Marshaler + Unmarshaler +} + +func RegisterExt(extID int8, value MarshalerUnmarshaler) { + RegisterExtEncoder(extID, value, func(e *Encoder, v reflect.Value) ([]byte, error) { + marshaler := v.Interface().(Marshaler) + return marshaler.MarshalMsgpack() + }) + 
RegisterExtDecoder(extID, value, func(d *Decoder, v reflect.Value, extLen int) error { + b, err := d.readN(extLen) + if err != nil { + return err + } + return v.Interface().(Unmarshaler).UnmarshalMsgpack(b) + }) +} + +func UnregisterExt(extID int8) { + unregisterExtEncoder(extID) + unregisterExtDecoder(extID) +} + +func RegisterExtEncoder( + extID int8, + value interface{}, + encoder func(enc *Encoder, v reflect.Value) ([]byte, error), +) { + unregisterExtEncoder(extID) + + typ := reflect.TypeOf(value) + extEncoder := makeExtEncoder(extID, typ, encoder) + typeEncMap.Store(extID, typ) + typeEncMap.Store(typ, extEncoder) + if typ.Kind() == reflect.Ptr { + typeEncMap.Store(typ.Elem(), makeExtEncoderAddr(extEncoder)) + } +} + +func unregisterExtEncoder(extID int8) { + t, ok := typeEncMap.Load(extID) + if !ok { + return + } + typeEncMap.Delete(extID) + typ := t.(reflect.Type) + typeEncMap.Delete(typ) + if typ.Kind() == reflect.Ptr { + typeEncMap.Delete(typ.Elem()) + } +} + +func makeExtEncoder( + extID int8, + typ reflect.Type, + encoder func(enc *Encoder, v reflect.Value) ([]byte, error), +) encoderFunc { + nilable := typ.Kind() == reflect.Ptr + + return func(e *Encoder, v reflect.Value) error { + if nilable && v.IsNil() { + return e.EncodeNil() + } + + b, err := encoder(e, v) + if err != nil { + return err + } + + if err := e.EncodeExtHeader(extID, len(b)); err != nil { + return err + } + + return e.write(b) + } +} + +func makeExtEncoderAddr(extEncoder encoderFunc) encoderFunc { + return func(e *Encoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) + } + return extEncoder(e, v.Addr()) + } +} + +func RegisterExtDecoder( + extID int8, + value interface{}, + decoder func(dec *Decoder, v reflect.Value, extLen int) error, +) { + unregisterExtDecoder(extID) + + typ := reflect.TypeOf(value) + extDecoder := makeExtDecoder(extID, typ, decoder) + extTypes[extID] = &extInfo{ + Type: typ, + Decoder: decoder, 
+ } + + typeDecMap.Store(extID, typ) + typeDecMap.Store(typ, extDecoder) + if typ.Kind() == reflect.Ptr { + typeDecMap.Store(typ.Elem(), makeExtDecoderAddr(extDecoder)) + } +} + +func unregisterExtDecoder(extID int8) { + t, ok := typeDecMap.Load(extID) + if !ok { + return + } + typeDecMap.Delete(extID) + delete(extTypes, extID) + typ := t.(reflect.Type) + typeDecMap.Delete(typ) + if typ.Kind() == reflect.Ptr { + typeDecMap.Delete(typ.Elem()) + } +} + +func makeExtDecoder( + wantedExtID int8, + typ reflect.Type, + decoder func(d *Decoder, v reflect.Value, extLen int) error, +) decoderFunc { + return nilAwareDecoder(typ, func(d *Decoder, v reflect.Value) error { + extID, extLen, err := d.DecodeExtHeader() + if err != nil { + return err + } + if extID != wantedExtID { + return fmt.Errorf("msgpack: got ext type=%d, wanted %d", extID, wantedExtID) + } + return decoder(d, v, extLen) + }) +} + +func makeExtDecoderAddr(extDecoder decoderFunc) decoderFunc { + return func(d *Decoder, v reflect.Value) error { + if !v.CanAddr() { + return fmt.Errorf("msgpack: Decode(nonaddressable %T)", v.Interface()) + } + return extDecoder(d, v.Addr()) + } +} + +func (e *Encoder) EncodeExtHeader(extID int8, extLen int) error { + if err := e.encodeExtLen(extLen); err != nil { + return err + } + if err := e.w.WriteByte(byte(extID)); err != nil { + return err + } + return nil +} + +func (e *Encoder) encodeExtLen(l int) error { + switch l { + case 1: + return e.writeCode(msgpcode.FixExt1) + case 2: + return e.writeCode(msgpcode.FixExt2) + case 4: + return e.writeCode(msgpcode.FixExt4) + case 8: + return e.writeCode(msgpcode.FixExt8) + case 16: + return e.writeCode(msgpcode.FixExt16) + } + if l <= math.MaxUint8 { + return e.write1(msgpcode.Ext8, uint8(l)) + } + if l <= math.MaxUint16 { + return e.write2(msgpcode.Ext16, uint16(l)) + } + return e.write4(msgpcode.Ext32, uint32(l)) +} + +func (d *Decoder) DecodeExtHeader() (extID int8, extLen int, err error) { + c, err := d.readCode() + if err != nil 
{ + return + } + return d.extHeader(c) +} + +func (d *Decoder) extHeader(c byte) (int8, int, error) { + extLen, err := d.parseExtLen(c) + if err != nil { + return 0, 0, err + } + + extID, err := d.readCode() + if err != nil { + return 0, 0, err + } + + return int8(extID), extLen, nil +} + +func (d *Decoder) parseExtLen(c byte) (int, error) { + switch c { + case msgpcode.FixExt1: + return 1, nil + case msgpcode.FixExt2: + return 2, nil + case msgpcode.FixExt4: + return 4, nil + case msgpcode.FixExt8: + return 8, nil + case msgpcode.FixExt16: + return 16, nil + case msgpcode.Ext8: + n, err := d.uint8() + return int(n), err + case msgpcode.Ext16: + n, err := d.uint16() + return int(n), err + case msgpcode.Ext32: + n, err := d.uint32() + return int(n), err + default: + return 0, fmt.Errorf("msgpack: invalid code=%x decoding ext len", c) + } +} + +func (d *Decoder) decodeInterfaceExt(c byte) (interface{}, error) { + extID, extLen, err := d.extHeader(c) + if err != nil { + return nil, err + } + + info, ok := extTypes[extID] + if !ok { + return nil, fmt.Errorf("msgpack: unknown ext id=%d", extID) + } + + v := reflect.New(info.Type).Elem() + if nilable(v.Kind()) && v.IsNil() { + v.Set(reflect.New(info.Type.Elem())) + } + + if err := info.Decoder(d, v, extLen); err != nil { + return nil, err + } + + return v.Interface(), nil +} + +func (d *Decoder) skipExt(c byte) error { + n, err := d.parseExtLen(c) + if err != nil { + return err + } + return d.skipN(n + 1) +} + +func (d *Decoder) skipExtHeader(c byte) error { + // Read ext type. + _, err := d.readCode() + if err != nil { + return err + } + // Read ext body len. 
+ for i := 0; i < extHeaderLen(c); i++ { + _, err := d.readCode() + if err != nil { + return err + } + } + return nil +} + +func extHeaderLen(c byte) int { + switch c { + case msgpcode.Ext8: + return 1 + case msgpcode.Ext16: + return 2 + case msgpcode.Ext32: + return 4 + } + return 0 +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/intern.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/intern.go new file mode 100644 index 00000000000..be0316a83d8 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/intern.go @@ -0,0 +1,238 @@ +package msgpack + +import ( + "fmt" + "math" + "reflect" + + "github.com/vmihailenco/msgpack/v5/msgpcode" +) + +const ( + minInternedStringLen = 3 + maxDictLen = math.MaxUint16 +) + +var internedStringExtID = int8(math.MinInt8) + +func init() { + extTypes[internedStringExtID] = &extInfo{ + Type: stringType, + Decoder: decodeInternedStringExt, + } +} + +func decodeInternedStringExt(d *Decoder, v reflect.Value, extLen int) error { + idx, err := d.decodeInternedStringIndex(extLen) + if err != nil { + return err + } + + s, err := d.internedStringAtIndex(idx) + if err != nil { + return err + } + + v.SetString(s) + return nil +} + +//------------------------------------------------------------------------------ + +func encodeInternedInterfaceValue(e *Encoder, v reflect.Value) error { + if v.IsNil() { + return e.EncodeNil() + } + + v = v.Elem() + if v.Kind() == reflect.String { + return e.encodeInternedString(v.String(), true) + } + return e.EncodeValue(v) +} + +func encodeInternedStringValue(e *Encoder, v reflect.Value) error { + return e.encodeInternedString(v.String(), true) +} + +func (e *Encoder) encodeInternedString(s string, intern bool) error { + // Interned string takes at least 3 bytes. Plain string 1 byte + string len. 
+ if len(s) >= minInternedStringLen { + if idx, ok := e.dict[s]; ok { + return e.encodeInternedStringIndex(idx) + } + + if intern && len(e.dict) < maxDictLen { + if e.dict == nil { + e.dict = make(map[string]int) + } + idx := len(e.dict) + e.dict[s] = idx + } + } + + return e.encodeNormalString(s) +} + +func (e *Encoder) encodeInternedStringIndex(idx int) error { + if idx <= math.MaxUint8 { + if err := e.writeCode(msgpcode.FixExt1); err != nil { + return err + } + return e.write1(byte(internedStringExtID), uint8(idx)) + } + + if idx <= math.MaxUint16 { + if err := e.writeCode(msgpcode.FixExt2); err != nil { + return err + } + return e.write2(byte(internedStringExtID), uint16(idx)) + } + + if uint64(idx) <= math.MaxUint32 { + if err := e.writeCode(msgpcode.FixExt4); err != nil { + return err + } + return e.write4(byte(internedStringExtID), uint32(idx)) + } + + return fmt.Errorf("msgpack: interned string index=%d is too large", idx) +} + +//------------------------------------------------------------------------------ + +func decodeInternedInterfaceValue(d *Decoder, v reflect.Value) error { + s, err := d.decodeInternedString(true) + if err == nil { + v.Set(reflect.ValueOf(s)) + return nil + } + if err != nil { + if _, ok := err.(unexpectedCodeError); !ok { + return err + } + } + + if err := d.s.UnreadByte(); err != nil { + return err + } + return decodeInterfaceValue(d, v) +} + +func decodeInternedStringValue(d *Decoder, v reflect.Value) error { + s, err := d.decodeInternedString(true) + if err != nil { + return err + } + + v.SetString(s) + return nil +} + +func (d *Decoder) decodeInternedString(intern bool) (string, error) { + c, err := d.readCode() + if err != nil { + return "", err + } + + if msgpcode.IsFixedString(c) { + n := int(c & msgpcode.FixedStrMask) + return d.decodeInternedStringWithLen(n, intern) + } + + switch c { + case msgpcode.Nil: + return "", nil + case msgpcode.FixExt1, msgpcode.FixExt2, msgpcode.FixExt4: + typeID, extLen, err := d.extHeader(c) + 
if err != nil { + return "", err + } + if typeID != internedStringExtID { + err := fmt.Errorf("msgpack: got ext type=%d, wanted %d", + typeID, internedStringExtID) + return "", err + } + + idx, err := d.decodeInternedStringIndex(extLen) + if err != nil { + return "", err + } + + return d.internedStringAtIndex(idx) + case msgpcode.Str8, msgpcode.Bin8: + n, err := d.uint8() + if err != nil { + return "", err + } + return d.decodeInternedStringWithLen(int(n), intern) + case msgpcode.Str16, msgpcode.Bin16: + n, err := d.uint16() + if err != nil { + return "", err + } + return d.decodeInternedStringWithLen(int(n), intern) + case msgpcode.Str32, msgpcode.Bin32: + n, err := d.uint32() + if err != nil { + return "", err + } + return d.decodeInternedStringWithLen(int(n), intern) + } + + return "", unexpectedCodeError{ + code: c, + hint: "interned string", + } +} + +func (d *Decoder) decodeInternedStringIndex(extLen int) (int, error) { + switch extLen { + case 1: + n, err := d.uint8() + if err != nil { + return 0, err + } + return int(n), nil + case 2: + n, err := d.uint16() + if err != nil { + return 0, err + } + return int(n), nil + case 4: + n, err := d.uint32() + if err != nil { + return 0, err + } + return int(n), nil + } + + err := fmt.Errorf("msgpack: unsupported ext len=%d decoding interned string", extLen) + return 0, err +} + +func (d *Decoder) internedStringAtIndex(idx int) (string, error) { + if idx >= len(d.dict) { + err := fmt.Errorf("msgpack: interned string at index=%d does not exist", idx) + return "", err + } + return d.dict[idx], nil +} + +func (d *Decoder) decodeInternedStringWithLen(n int, intern bool) (string, error) { + if n <= 0 { + return "", nil + } + + s, err := d.stringWithLen(n) + if err != nil { + return "", err + } + + if intern && len(s) >= minInternedStringLen && len(d.dict) < maxDictLen { + d.dict = append(d.dict, s) + } + + return s, nil +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/msgpack.go 
b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/msgpack.go new file mode 100644 index 00000000000..4db2fa2c71d --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/msgpack.go @@ -0,0 +1,52 @@ +package msgpack + +import "fmt" + +type Marshaler interface { + MarshalMsgpack() ([]byte, error) +} + +type Unmarshaler interface { + UnmarshalMsgpack([]byte) error +} + +type CustomEncoder interface { + EncodeMsgpack(*Encoder) error +} + +type CustomDecoder interface { + DecodeMsgpack(*Decoder) error +} + +//------------------------------------------------------------------------------ + +type RawMessage []byte + +var ( + _ CustomEncoder = (RawMessage)(nil) + _ CustomDecoder = (*RawMessage)(nil) +) + +func (m RawMessage) EncodeMsgpack(enc *Encoder) error { + return enc.write(m) +} + +func (m *RawMessage) DecodeMsgpack(dec *Decoder) error { + msg, err := dec.DecodeRaw() + if err != nil { + return err + } + *m = msg + return nil +} + +//------------------------------------------------------------------------------ + +type unexpectedCodeError struct { + code byte + hint string +} + +func (err unexpectedCodeError) Error() string { + return fmt.Sprintf("msgpack: unexpected code=%x decoding %s", err.code, err.hint) +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/msgpcode/msgpcode.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/msgpcode/msgpcode.go new file mode 100644 index 00000000000..e35389cccfe --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/msgpcode/msgpcode.go @@ -0,0 +1,88 @@ +package msgpcode + +var ( + PosFixedNumHigh byte = 0x7f + NegFixedNumLow byte = 0xe0 + + Nil byte = 0xc0 + + False byte = 0xc2 + True byte = 0xc3 + + Float byte = 0xca + Double byte = 0xcb + + Uint8 byte = 0xcc + Uint16 byte = 0xcd + Uint32 byte = 0xce + Uint64 byte = 0xcf + + Int8 byte = 0xd0 + Int16 byte = 0xd1 + Int32 byte = 0xd2 + Int64 byte = 0xd3 + + FixedStrLow byte = 0xa0 + FixedStrHigh 
byte = 0xbf + FixedStrMask byte = 0x1f + Str8 byte = 0xd9 + Str16 byte = 0xda + Str32 byte = 0xdb + + Bin8 byte = 0xc4 + Bin16 byte = 0xc5 + Bin32 byte = 0xc6 + + FixedArrayLow byte = 0x90 + FixedArrayHigh byte = 0x9f + FixedArrayMask byte = 0xf + Array16 byte = 0xdc + Array32 byte = 0xdd + + FixedMapLow byte = 0x80 + FixedMapHigh byte = 0x8f + FixedMapMask byte = 0xf + Map16 byte = 0xde + Map32 byte = 0xdf + + FixExt1 byte = 0xd4 + FixExt2 byte = 0xd5 + FixExt4 byte = 0xd6 + FixExt8 byte = 0xd7 + FixExt16 byte = 0xd8 + Ext8 byte = 0xc7 + Ext16 byte = 0xc8 + Ext32 byte = 0xc9 +) + +func IsFixedNum(c byte) bool { + return c <= PosFixedNumHigh || c >= NegFixedNumLow +} + +func IsFixedMap(c byte) bool { + return c >= FixedMapLow && c <= FixedMapHigh +} + +func IsFixedArray(c byte) bool { + return c >= FixedArrayLow && c <= FixedArrayHigh +} + +func IsFixedString(c byte) bool { + return c >= FixedStrLow && c <= FixedStrHigh +} + +func IsString(c byte) bool { + return IsFixedString(c) || c == Str8 || c == Str16 || c == Str32 +} + +func IsBin(c byte) bool { + return c == Bin8 || c == Bin16 || c == Bin32 +} + +func IsFixedExt(c byte) bool { + return c >= FixExt1 && c <= FixExt16 +} + +func IsExt(c byte) bool { + return IsFixedExt(c) || c == Ext8 || c == Ext16 || c == Ext32 +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/package.json b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/package.json new file mode 100644 index 00000000000..298910d45cf --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/package.json @@ -0,0 +1,4 @@ +{ + "name": "msgpack", + "version": "5.3.5" +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/safe.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/safe.go similarity index 100% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/safe.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/safe.go diff --git 
a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/time.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/time.go similarity index 60% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/time.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/time.go index bf53eb2a36f..44566ec0761 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/time.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/time.go @@ -6,16 +6,30 @@ import ( "reflect" "time" - "github.com/vmihailenco/msgpack/v4/codes" + "github.com/vmihailenco/msgpack/v5/msgpcode" ) var timeExtID int8 = -1 -var timePtrType = reflect.TypeOf((*time.Time)(nil)) - -//nolint:gochecknoinits func init() { - registerExt(timeExtID, timePtrType.Elem(), encodeTimeValue, decodeTimeValue) + RegisterExtEncoder(timeExtID, time.Time{}, timeEncoder) + RegisterExtDecoder(timeExtID, time.Time{}, timeDecoder) +} + +func timeEncoder(e *Encoder, v reflect.Value) ([]byte, error) { + return e.encodeTime(v.Interface().(time.Time)), nil +} + +func timeDecoder(d *Decoder, v reflect.Value, extLen int) error { + tm, err := d.decodeTime(extLen) + if err != nil { + return err + } + + ptr := v.Addr().Interface().(*time.Time) + *ptr = tm + + return nil } func (e *Encoder) EncodeTime(tm time.Time) error { @@ -30,14 +44,20 @@ func (e *Encoder) EncodeTime(tm time.Time) error { } func (e *Encoder) encodeTime(tm time.Time) []byte { + if e.timeBuf == nil { + e.timeBuf = make([]byte, 12) + } + secs := uint64(tm.Unix()) if secs>>34 == 0 { data := uint64(tm.Nanosecond())<<34 | secs + if data&0xffffffff00000000 == 0 { b := e.timeBuf[:4] binary.BigEndian.PutUint32(b, uint32(data)) return b } + b := e.timeBuf[:8] binary.BigEndian.PutUint64(b, data) return b @@ -50,62 +70,56 @@ func (e *Encoder) encodeTime(tm time.Time) []byte { } func (d *Decoder) DecodeTime() (time.Time, error) { - tm, err := d.decodeTime() + c, err := d.readCode() if err != nil { - return tm, err + 
return time.Time{}, err } - if tm.IsZero() { - // Assume that zero time does not have timezone information. - return tm.UTC(), nil - } - return tm, nil -} - -func (d *Decoder) decodeTime() (time.Time, error) { - extLen := d.extLen - d.extLen = 0 - if extLen == 0 { - c, err := d.readCode() + // Legacy format. + if c == msgpcode.FixedArrayLow|2 { + sec, err := d.DecodeInt64() if err != nil { return time.Time{}, err } - // Legacy format. - if c == codes.FixedArrayLow|2 { - sec, err := d.DecodeInt64() - if err != nil { - return time.Time{}, err - } - - nsec, err := d.DecodeInt64() - if err != nil { - return time.Time{}, err - } - - return time.Unix(sec, nsec), nil + nsec, err := d.DecodeInt64() + if err != nil { + return time.Time{}, err } - if codes.IsString(c) { - s, err := d.string(c) - if err != nil { - return time.Time{}, err - } - return time.Parse(time.RFC3339Nano, s) - } + return time.Unix(sec, nsec), nil + } - extLen, err = d.parseExtLen(c) + if msgpcode.IsString(c) { + s, err := d.string(c) if err != nil { return time.Time{}, err } + return time.Parse(time.RFC3339Nano, s) + } - // Skip ext id. - _, err = d.s.ReadByte() - if err != nil { - return time.Time{}, nil - } + extID, extLen, err := d.extHeader(c) + if err != nil { + return time.Time{}, err + } + + if extID != timeExtID { + return time.Time{}, fmt.Errorf("msgpack: invalid time ext id=%d", extID) + } + + tm, err := d.decodeTime(extLen) + if err != nil { + return tm, err } + if tm.IsZero() { + // Zero time does not have timezone information. 
+ return tm.UTC(), nil + } + return tm, nil +} + +func (d *Decoder) decodeTime(extLen int) (time.Time, error) { b, err := d.readN(extLen) if err != nil { return time.Time{}, err @@ -129,21 +143,3 @@ func (d *Decoder) decodeTime() (time.Time, error) { return time.Time{}, err } } - -func encodeTimeValue(e *Encoder, v reflect.Value) error { - tm := v.Interface().(time.Time) - b := e.encodeTime(tm) - return e.write(b) -} - -func decodeTimeValue(d *Decoder, v reflect.Value) error { - tm, err := d.DecodeTime() - if err != nil { - return err - } - - ptr := v.Addr().Interface().(*time.Time) - *ptr = tm - - return nil -} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/types.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/types.go similarity index 73% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/types.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/types.go index 08cf099dd2a..69aca611b23 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/types.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/types.go @@ -7,7 +7,7 @@ import ( "reflect" "sync" - "github.com/vmihailenco/tagparser" + "github.com/vmihailenco/tagparser/v2" ) var errorType = reflect.TypeOf((*error)(nil)).Elem() @@ -27,6 +27,11 @@ var ( binaryUnmarshalerType = reflect.TypeOf((*encoding.BinaryUnmarshaler)(nil)).Elem() ) +var ( + textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem() + textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem() +) + type ( encoderFunc func(*Encoder, reflect.Value) error decoderFunc func(*Decoder, reflect.Value) error @@ -39,7 +44,7 @@ var ( // Register registers encoder and decoder functions for a value. // This is low level API and in most cases you should prefer implementing -// Marshaler/CustomEncoder and Unmarshaler/CustomDecoder interfaces. +// CustomEncoder/CustomDecoder or Marshaler/Unmarshaler interfaces. 
func Register(value interface{}, enc encoderFunc, dec decoderFunc) { typ := reflect.TypeOf(value) if enc != nil { @@ -52,30 +57,33 @@ func Register(value interface{}, enc encoderFunc, dec decoderFunc) { //------------------------------------------------------------------------------ -var ( - structs = newStructCache(false) - jsonStructs = newStructCache(true) -) +const defaultStructTag = "msgpack" + +var structs = newStructCache() type structCache struct { m sync.Map +} - useJSONTag bool +type structCacheKey struct { + tag string + typ reflect.Type } -func newStructCache(useJSONTag bool) *structCache { - return &structCache{ - useJSONTag: useJSONTag, - } +func newStructCache() *structCache { + return new(structCache) } -func (m *structCache) Fields(typ reflect.Type) *fields { - if v, ok := m.m.Load(typ); ok { +func (m *structCache) Fields(typ reflect.Type, tag string) *fields { + key := structCacheKey{tag: tag, typ: typ} + + if v, ok := m.m.Load(key); ok { return v.(*fields) } - fs := getFields(typ, m.useJSONTag) - m.m.Store(typ, fs) + fs := getFields(typ, tag) + m.m.Store(key, fs) + return fs } @@ -89,17 +97,17 @@ type field struct { decoder decoderFunc } -func (f *field) Omit(strct reflect.Value) bool { - v, isNil := fieldByIndex(strct, f.index) - if isNil { +func (f *field) Omit(strct reflect.Value, forced bool) bool { + v, ok := fieldByIndex(strct, f.index) + if !ok { return true } - return f.omitEmpty && isEmptyValue(v) + return (f.omitEmpty || forced) && isEmptyValue(v) } func (f *field) EncodeValue(e *Encoder, strct reflect.Value) error { - v, isNil := fieldByIndex(strct, f.index) - if isNil { + v, ok := fieldByIndex(strct, f.index) + if !ok { return e.EncodeNil() } return f.encoder(e, v) @@ -144,15 +152,15 @@ func (fs *fields) warnIfFieldExists(name string) { } } -func (fs *fields) OmitEmpty(strct reflect.Value) []*field { - if !fs.hasOmitEmpty { +func (fs *fields) OmitEmpty(strct reflect.Value, forced bool) []*field { + if !fs.hasOmitEmpty && !forced { 
return fs.List } fields := make([]*field, 0, len(fs.List)) for _, f := range fs.List { - if !f.Omit(strct) { + if !f.Omit(strct, forced) { fields = append(fields, f) } } @@ -160,16 +168,16 @@ func (fs *fields) OmitEmpty(strct reflect.Value) []*field { return fields } -func getFields(typ reflect.Type, useJSONTag bool) *fields { +func getFields(typ reflect.Type, fallbackTag string) *fields { fs := newFields(typ) var omitEmpty bool for i := 0; i < typ.NumField(); i++ { f := typ.Field(i) - tagStr := f.Tag.Get("msgpack") - if useJSONTag && tagStr == "" { - tagStr = f.Tag.Get("json") + tagStr := f.Tag.Get(defaultStructTag) + if tagStr == "" && fallbackTag != "" { + tagStr = f.Tag.Get(fallbackTag) } tag := tagparser.Parse(tagStr) @@ -178,9 +186,7 @@ func getFields(typ reflect.Type, useJSONTag bool) *fields { } if f.Name == "_msgpack" { - if tag.HasOption("asArray") { - fs.AsArray = true - } + fs.AsArray = tag.HasOption("as_array") || tag.HasOption("asArray") if tag.HasOption("omitempty") { omitEmpty = true } @@ -199,11 +205,11 @@ func getFields(typ reflect.Type, useJSONTag bool) *fields { if tag.HasOption("intern") { switch f.Type.Kind() { case reflect.Interface: - field.encoder = encodeInternInterfaceValue - field.decoder = decodeInternInterfaceValue + field.encoder = encodeInternedInterfaceValue + field.decoder = decodeInternedInterfaceValue case reflect.String: - field.encoder = encodeInternStringValue - field.decoder = decodeInternStringValue + field.encoder = encodeInternedStringValue + field.decoder = decodeInternedStringValue default: err := fmt.Errorf("msgpack: intern strings are not supported on %s", f.Type) panic(err) @@ -220,9 +226,9 @@ func getFields(typ reflect.Type, useJSONTag bool) *fields { if f.Anonymous && !tag.HasOption("noinline") { inline := tag.HasOption("inline") if inline { - inlineFields(fs, f.Type, field, useJSONTag) + inlineFields(fs, f.Type, field, fallbackTag) } else { - inline = shouldInline(fs, f.Type, field, useJSONTag) + inline = 
shouldInline(fs, f.Type, field, fallbackTag) } if inline { @@ -255,8 +261,8 @@ func init() { decodeStructValuePtr = reflect.ValueOf(decodeStructValue).Pointer() } -func inlineFields(fs *fields, typ reflect.Type, f *field, useJSONTag bool) { - inlinedFields := getFields(typ, useJSONTag).List +func inlineFields(fs *fields, typ reflect.Type, f *field, tag string) { + inlinedFields := getFields(typ, tag).List for _, field := range inlinedFields { if _, ok := fs.Map[field.name]; ok { // Don't inline shadowed fields. @@ -267,7 +273,7 @@ func inlineFields(fs *fields, typ reflect.Type, f *field, useJSONTag bool) { } } -func shouldInline(fs *fields, typ reflect.Type, f *field, useJSONTag bool) bool { +func shouldInline(fs *fields, typ reflect.Type, f *field, tag string) bool { var encoder encoderFunc var decoder decoderFunc @@ -292,7 +298,7 @@ func shouldInline(fs *fields, typ reflect.Type, f *field, useJSONTag bool) bool return false } - inlinedFields := getFields(typ, useJSONTag).List + inlinedFields := getFields(typ, tag).List for _, field := range inlinedFields { if _, ok := fs.Map[field.name]; ok { // Don't auto inline if there are shadowed fields. 
@@ -307,8 +313,26 @@ func shouldInline(fs *fields, typ reflect.Type, f *field, useJSONTag bool) bool return true } +type isZeroer interface { + IsZero() bool +} + func isEmptyValue(v reflect.Value) bool { - switch v.Kind() { + kind := v.Kind() + + for kind == reflect.Interface { + if v.IsNil() { + return true + } + v = v.Elem() + kind = v.Kind() + } + + if z, ok := v.Interface().(isZeroer); ok { + return nilable(kind) && v.IsNil() || z.IsZero() + } + + switch kind { case reflect.Array, reflect.Map, reflect.Slice, reflect.String: return v.Len() == 0 case reflect.Bool: @@ -319,22 +343,23 @@ func isEmptyValue(v reflect.Value) bool { return v.Uint() == 0 case reflect.Float32, reflect.Float64: return v.Float() == 0 - case reflect.Interface, reflect.Ptr: + case reflect.Ptr: return v.IsNil() + default: + return false } - return false } -func fieldByIndex(v reflect.Value, index []int) (_ reflect.Value, isNil bool) { +func fieldByIndex(v reflect.Value, index []int) (_ reflect.Value, ok bool) { if len(index) == 1 { - return v.Field(index[0]), false + return v.Field(index[0]), true } for i, idx := range index { if i > 0 { if v.Kind() == reflect.Ptr { if v.IsNil() { - return v, true + return v, false } v = v.Elem() } @@ -342,7 +367,7 @@ func fieldByIndex(v reflect.Value, index []int) (_ reflect.Value, isNil bool) { v = v.Field(idx) } - return v, false + return v, true } func fieldByIndexAlloc(v reflect.Value, index []int) reflect.Value { @@ -353,7 +378,7 @@ func fieldByIndexAlloc(v reflect.Value, index []int) reflect.Value { for i, idx := range index { if i > 0 { var ok bool - v, ok = indirectNew(v) + v, ok = indirectNil(v) if !ok { return v } @@ -364,7 +389,7 @@ func fieldByIndexAlloc(v reflect.Value, index []int) reflect.Value { return v } -func indirectNew(v reflect.Value) (reflect.Value, bool) { +func indirectNil(v reflect.Value) (reflect.Value, bool) { if v.Kind() == reflect.Ptr { if v.IsNil() { if !v.CanSet() { diff --git 
a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/unsafe.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/unsafe.go similarity index 83% rename from .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/unsafe.go rename to .ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/unsafe.go index 50c0da8b5be..192ac47920d 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v4/unsafe.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/unsafe.go @@ -7,7 +7,7 @@ import ( ) // bytesToString converts byte slice to string. -func bytesToString(b []byte) string { //nolint:deadcode,unused +func bytesToString(b []byte) string { return *(*string)(unsafe.Pointer(&b)) } diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/version.go b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/version.go new file mode 100644 index 00000000000..1d49337c359 --- /dev/null +++ b/.ci/providerlint/vendor/github.com/vmihailenco/msgpack/v5/version.go @@ -0,0 +1,6 @@ +package msgpack + +// Version is the current release version. 
+func Version() string { + return "5.3.5" +} diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/.travis.yml b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/.travis.yml similarity index 80% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/.travis.yml rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/.travis.yml index ec538452322..7194cd00109 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/.travis.yml +++ b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/.travis.yml @@ -1,10 +1,9 @@ dist: xenial -sudo: false language: go go: - - 1.11.x - - 1.12.x + - 1.14.x + - 1.15.x - tip matrix: @@ -18,7 +17,3 @@ go_import_path: github.com/vmihailenco/tagparser before_install: - curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.17.1 - -script: - - make - - golangci-lint run diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/LICENSE b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/LICENSE similarity index 100% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/LICENSE rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/LICENSE diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/Makefile b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/Makefile similarity index 91% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/Makefile rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/Makefile index fe9dc5bdbaf..0b1b59595ac 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/Makefile +++ b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/Makefile @@ -6,3 +6,4 @@ all: go vet ./... go get github.com/gordonklaus/ineffassign ineffassign . 
+ golangci-lint run diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/README.md b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/README.md similarity index 92% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/README.md rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/README.md index 411aa5444dc..c0259de5659 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/README.md +++ b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/README.md @@ -8,7 +8,7 @@ Install: ```shell -go get -u github.com/vmihailenco/tagparser +go get github.com/vmihailenco/tagparser/v2 ``` ## Quickstart diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/parser/parser.go b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/parser/parser.go similarity index 95% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/parser/parser.go rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/parser/parser.go index 2de1c6f7bde..21a9bc7f747 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/parser/parser.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/parser/parser.go @@ -3,7 +3,7 @@ package parser import ( "bytes" - "github.com/vmihailenco/tagparser/internal" + "github.com/vmihailenco/tagparser/v2/internal" ) type Parser struct { diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/safe.go b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/safe.go similarity index 100% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/safe.go rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/safe.go diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/unsafe.go b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/unsafe.go 
similarity index 100% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/internal/unsafe.go rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/internal/unsafe.go diff --git a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/tagparser.go b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/tagparser.go similarity index 88% rename from .ci/providerlint/vendor/github.com/vmihailenco/tagparser/tagparser.go rename to .ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/tagparser.go index 56b918011b0..5002e6453ea 100644 --- a/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/tagparser.go +++ b/.ci/providerlint/vendor/github.com/vmihailenco/tagparser/v2/tagparser.go @@ -1,7 +1,9 @@ package tagparser import ( - "github.com/vmihailenco/tagparser/internal/parser" + "strings" + + "github.com/vmihailenco/tagparser/v2/internal/parser" ) type Tag struct { @@ -31,6 +33,9 @@ type tagParser struct { } func (p *tagParser) setTagOption(key, value string) { + key = strings.TrimSpace(key) + value = strings.TrimSpace(value) + if !p.hasName { p.hasName = true if key == "" { @@ -79,7 +84,6 @@ func (p *tagParser) parseKey() { func (p *tagParser) parseValue() { const quote = '\'' - c := p.Peek() if c == quote { p.Skip(quote) @@ -134,10 +138,7 @@ loop: func (p *tagParser) parseQuotedValue() { const quote = '\'' - var b []byte - b = append(b, quote) - for p.Valid() { bb, ok := p.ReadSep(quote) if !ok { @@ -145,6 +146,8 @@ func (p *tagParser) parseQuotedValue() { break } + // keep the escaped single-quote, and continue until we've found the + // one that isn't. if len(bb) > 0 && bb[len(bb)-1] == '\\' { b = append(b, bb[:len(bb)-1]...) b = append(b, quote) @@ -152,7 +155,6 @@ func (p *tagParser) parseQuotedValue() { } b = append(b, bb...) 
- b = append(b, quote) break } @@ -162,15 +164,3 @@ func (p *tagParser) parseQuotedValue() { } p.parseKey() } - -func Unquote(s string) (string, bool) { - const quote = '\'' - - if len(s) < 2 { - return s, false - } - if s[0] == quote && s[len(s)-1] == quote { - return s[1 : len(s)-1], true - } - return s, false -} diff --git a/.ci/providerlint/vendor/github.com/zclconf/go-cty/cty/path.go b/.ci/providerlint/vendor/github.com/zclconf/go-cty/cty/path.go index 636e68c63dd..4995a8c7bfc 100644 --- a/.ci/providerlint/vendor/github.com/zclconf/go-cty/cty/path.go +++ b/.ci/providerlint/vendor/github.com/zclconf/go-cty/cty/path.go @@ -225,7 +225,9 @@ func (s IndexStep) Apply(val Value) (Value, error) { return NilVal, errors.New("key value not number or string") } - has := val.HasIndex(s.Key) + // This value needs to be stripped of marks to check True(), but Index will + // apply the correct marks for the result. + has, _ := val.HasIndex(s.Key).Unmark() if !has.IsKnown() { return UnknownVal(val.Type().ElementType()), nil } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/hkdf/hkdf.go b/.ci/providerlint/vendor/golang.org/x/crypto/hkdf/hkdf.go new file mode 100644 index 00000000000..dda3f143bec --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/hkdf/hkdf.go @@ -0,0 +1,93 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package hkdf implements the HMAC-based Extract-and-Expand Key Derivation +// Function (HKDF) as defined in RFC 5869. +// +// HKDF is a cryptographic key derivation function (KDF) with the goal of +// expanding limited input keying material into one or more cryptographically +// strong secret keys. 
+package hkdf // import "golang.org/x/crypto/hkdf" + +import ( + "crypto/hmac" + "errors" + "hash" + "io" +) + +// Extract generates a pseudorandom key for use with Expand from an input secret +// and an optional independent salt. +// +// Only use this function if you need to reuse the extracted key with multiple +// Expand invocations and different context values. Most common scenarios, +// including the generation of multiple keys, should use New instead. +func Extract(hash func() hash.Hash, secret, salt []byte) []byte { + if salt == nil { + salt = make([]byte, hash().Size()) + } + extractor := hmac.New(hash, salt) + extractor.Write(secret) + return extractor.Sum(nil) +} + +type hkdf struct { + expander hash.Hash + size int + + info []byte + counter byte + + prev []byte + buf []byte +} + +func (f *hkdf) Read(p []byte) (int, error) { + // Check whether enough data can be generated + need := len(p) + remains := len(f.buf) + int(255-f.counter+1)*f.size + if remains < need { + return 0, errors.New("hkdf: entropy limit reached") + } + // Read any leftover from the buffer + n := copy(p, f.buf) + p = p[n:] + + // Fill the rest of the buffer + for len(p) > 0 { + f.expander.Reset() + f.expander.Write(f.prev) + f.expander.Write(f.info) + f.expander.Write([]byte{f.counter}) + f.prev = f.expander.Sum(f.prev[:0]) + f.counter++ + + // Copy the new batch into p + f.buf = f.prev + n = copy(p, f.buf) + p = p[n:] + } + // Save leftovers for next run + f.buf = f.buf[n:] + + return need, nil +} + +// Expand returns a Reader, from which keys can be read, using the given +// pseudorandom key and optional context info, skipping the extraction step. +// +// The pseudorandomKey should have been generated by Extract, or be a uniformly +// random or pseudorandom cryptographically strong key. See RFC 5869, Section +// 3.3. Most common scenarios will want to use New instead. 
+func Expand(hash func() hash.Hash, pseudorandomKey, info []byte) io.Reader { + expander := hmac.New(hash, pseudorandomKey) + return &hkdf{expander, expander.Size(), info, 1, nil, nil} +} + +// New returns a Reader, from which keys can be read, using the given hash, +// secret, salt and context info. Salt and info can be nil. +func New(hash func() hash.Hash, secret, salt, info []byte) io.Reader { + prk := Extract(hash, secret, salt) + return Expand(hash, prk, info) +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/config.go b/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/config.go deleted file mode 100644 index c76eecc963a..00000000000 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/config.go +++ /dev/null @@ -1,91 +0,0 @@ -// Copyright 2012 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package packet - -import ( - "crypto" - "crypto/rand" - "io" - "time" -) - -// Config collects a number of parameters along with sensible defaults. -// A nil *Config is valid and results in all default values. -type Config struct { - // Rand provides the source of entropy. - // If nil, the crypto/rand Reader is used. - Rand io.Reader - // DefaultHash is the default hash function to be used. - // If zero, SHA-256 is used. - DefaultHash crypto.Hash - // DefaultCipher is the cipher to be used. - // If zero, AES-128 is used. - DefaultCipher CipherFunction - // Time returns the current time as the number of seconds since the - // epoch. If Time is nil, time.Now is used. - Time func() time.Time - // DefaultCompressionAlgo is the compression algorithm to be - // applied to the plaintext before encryption. If zero, no - // compression is done. - DefaultCompressionAlgo CompressionAlgo - // CompressionConfig configures the compression settings. - CompressionConfig *CompressionConfig - // S2KCount is only used for symmetric encryption. 
It - // determines the strength of the passphrase stretching when - // the said passphrase is hashed to produce a key. S2KCount - // should be between 1024 and 65011712, inclusive. If Config - // is nil or S2KCount is 0, the value 65536 used. Not all - // values in the above range can be represented. S2KCount will - // be rounded up to the next representable value if it cannot - // be encoded exactly. When set, it is strongly encrouraged to - // use a value that is at least 65536. See RFC 4880 Section - // 3.7.1.3. - S2KCount int - // RSABits is the number of bits in new RSA keys made with NewEntity. - // If zero, then 2048 bit keys are created. - RSABits int -} - -func (c *Config) Random() io.Reader { - if c == nil || c.Rand == nil { - return rand.Reader - } - return c.Rand -} - -func (c *Config) Hash() crypto.Hash { - if c == nil || uint(c.DefaultHash) == 0 { - return crypto.SHA256 - } - return c.DefaultHash -} - -func (c *Config) Cipher() CipherFunction { - if c == nil || uint8(c.DefaultCipher) == 0 { - return CipherAES128 - } - return c.DefaultCipher -} - -func (c *Config) Now() time.Time { - if c == nil || c.Time == nil { - return time.Now() - } - return c.Time() -} - -func (c *Config) Compression() CompressionAlgo { - if c == nil { - return CompressionNone - } - return c.DefaultCompressionAlgo -} - -func (c *Config) PasswordHashIterations() int { - if c == nil || c.S2KCount == 0 { - return 0 - } - return c.S2KCount -} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/private_key.go b/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/private_key.go deleted file mode 100644 index 192aac376d1..00000000000 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/private_key.go +++ /dev/null @@ -1,384 +0,0 @@ -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -package packet - -import ( - "bytes" - "crypto" - "crypto/cipher" - "crypto/dsa" - "crypto/ecdsa" - "crypto/rsa" - "crypto/sha1" - "io" - "math/big" - "strconv" - "time" - - "golang.org/x/crypto/openpgp/elgamal" - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/s2k" -) - -// PrivateKey represents a possibly encrypted private key. See RFC 4880, -// section 5.5.3. -type PrivateKey struct { - PublicKey - Encrypted bool // if true then the private key is unavailable until Decrypt has been called. - encryptedData []byte - cipher CipherFunction - s2k func(out, in []byte) - PrivateKey interface{} // An *{rsa|dsa|ecdsa}.PrivateKey or crypto.Signer/crypto.Decrypter (Decryptor RSA only). - sha1Checksum bool - iv []byte -} - -func NewRSAPrivateKey(creationTime time.Time, priv *rsa.PrivateKey) *PrivateKey { - pk := new(PrivateKey) - pk.PublicKey = *NewRSAPublicKey(creationTime, &priv.PublicKey) - pk.PrivateKey = priv - return pk -} - -func NewDSAPrivateKey(creationTime time.Time, priv *dsa.PrivateKey) *PrivateKey { - pk := new(PrivateKey) - pk.PublicKey = *NewDSAPublicKey(creationTime, &priv.PublicKey) - pk.PrivateKey = priv - return pk -} - -func NewElGamalPrivateKey(creationTime time.Time, priv *elgamal.PrivateKey) *PrivateKey { - pk := new(PrivateKey) - pk.PublicKey = *NewElGamalPublicKey(creationTime, &priv.PublicKey) - pk.PrivateKey = priv - return pk -} - -func NewECDSAPrivateKey(creationTime time.Time, priv *ecdsa.PrivateKey) *PrivateKey { - pk := new(PrivateKey) - pk.PublicKey = *NewECDSAPublicKey(creationTime, &priv.PublicKey) - pk.PrivateKey = priv - return pk -} - -// NewSignerPrivateKey creates a PrivateKey from a crypto.Signer that -// implements RSA or ECDSA. -func NewSignerPrivateKey(creationTime time.Time, signer crypto.Signer) *PrivateKey { - pk := new(PrivateKey) - // In general, the public Keys should be used as pointers. We still - // type-switch on the values, for backwards-compatibility. 
- switch pubkey := signer.Public().(type) { - case *rsa.PublicKey: - pk.PublicKey = *NewRSAPublicKey(creationTime, pubkey) - case rsa.PublicKey: - pk.PublicKey = *NewRSAPublicKey(creationTime, &pubkey) - case *ecdsa.PublicKey: - pk.PublicKey = *NewECDSAPublicKey(creationTime, pubkey) - case ecdsa.PublicKey: - pk.PublicKey = *NewECDSAPublicKey(creationTime, &pubkey) - default: - panic("openpgp: unknown crypto.Signer type in NewSignerPrivateKey") - } - pk.PrivateKey = signer - return pk -} - -func (pk *PrivateKey) parse(r io.Reader) (err error) { - err = (&pk.PublicKey).parse(r) - if err != nil { - return - } - var buf [1]byte - _, err = readFull(r, buf[:]) - if err != nil { - return - } - - s2kType := buf[0] - - switch s2kType { - case 0: - pk.s2k = nil - pk.Encrypted = false - case 254, 255: - _, err = readFull(r, buf[:]) - if err != nil { - return - } - pk.cipher = CipherFunction(buf[0]) - pk.Encrypted = true - pk.s2k, err = s2k.Parse(r) - if err != nil { - return - } - if s2kType == 254 { - pk.sha1Checksum = true - } - default: - return errors.UnsupportedError("deprecated s2k function in private key") - } - - if pk.Encrypted { - blockSize := pk.cipher.blockSize() - if blockSize == 0 { - return errors.UnsupportedError("unsupported cipher in private key: " + strconv.Itoa(int(pk.cipher))) - } - pk.iv = make([]byte, blockSize) - _, err = readFull(r, pk.iv) - if err != nil { - return - } - } - - pk.encryptedData, err = io.ReadAll(r) - if err != nil { - return - } - - if !pk.Encrypted { - return pk.parsePrivateKey(pk.encryptedData) - } - - return -} - -func mod64kHash(d []byte) uint16 { - var h uint16 - for _, b := range d { - h += uint16(b) - } - return h -} - -func (pk *PrivateKey) Serialize(w io.Writer) (err error) { - // TODO(agl): support encrypted private keys - buf := bytes.NewBuffer(nil) - err = pk.PublicKey.serializeWithoutHeaders(buf) - if err != nil { - return - } - buf.WriteByte(0 /* no encryption */) - - privateKeyBuf := bytes.NewBuffer(nil) - - switch 
priv := pk.PrivateKey.(type) { - case *rsa.PrivateKey: - err = serializeRSAPrivateKey(privateKeyBuf, priv) - case *dsa.PrivateKey: - err = serializeDSAPrivateKey(privateKeyBuf, priv) - case *elgamal.PrivateKey: - err = serializeElGamalPrivateKey(privateKeyBuf, priv) - case *ecdsa.PrivateKey: - err = serializeECDSAPrivateKey(privateKeyBuf, priv) - default: - err = errors.InvalidArgumentError("unknown private key type") - } - if err != nil { - return - } - - ptype := packetTypePrivateKey - contents := buf.Bytes() - privateKeyBytes := privateKeyBuf.Bytes() - if pk.IsSubkey { - ptype = packetTypePrivateSubkey - } - err = serializeHeader(w, ptype, len(contents)+len(privateKeyBytes)+2) - if err != nil { - return - } - _, err = w.Write(contents) - if err != nil { - return - } - _, err = w.Write(privateKeyBytes) - if err != nil { - return - } - - checksum := mod64kHash(privateKeyBytes) - var checksumBytes [2]byte - checksumBytes[0] = byte(checksum >> 8) - checksumBytes[1] = byte(checksum) - _, err = w.Write(checksumBytes[:]) - - return -} - -func serializeRSAPrivateKey(w io.Writer, priv *rsa.PrivateKey) error { - err := writeBig(w, priv.D) - if err != nil { - return err - } - err = writeBig(w, priv.Primes[1]) - if err != nil { - return err - } - err = writeBig(w, priv.Primes[0]) - if err != nil { - return err - } - return writeBig(w, priv.Precomputed.Qinv) -} - -func serializeDSAPrivateKey(w io.Writer, priv *dsa.PrivateKey) error { - return writeBig(w, priv.X) -} - -func serializeElGamalPrivateKey(w io.Writer, priv *elgamal.PrivateKey) error { - return writeBig(w, priv.X) -} - -func serializeECDSAPrivateKey(w io.Writer, priv *ecdsa.PrivateKey) error { - return writeBig(w, priv.D) -} - -// Decrypt decrypts an encrypted private key using a passphrase. 
-func (pk *PrivateKey) Decrypt(passphrase []byte) error { - if !pk.Encrypted { - return nil - } - - key := make([]byte, pk.cipher.KeySize()) - pk.s2k(key, passphrase) - block := pk.cipher.new(key) - cfb := cipher.NewCFBDecrypter(block, pk.iv) - - data := make([]byte, len(pk.encryptedData)) - cfb.XORKeyStream(data, pk.encryptedData) - - if pk.sha1Checksum { - if len(data) < sha1.Size { - return errors.StructuralError("truncated private key data") - } - h := sha1.New() - h.Write(data[:len(data)-sha1.Size]) - sum := h.Sum(nil) - if !bytes.Equal(sum, data[len(data)-sha1.Size:]) { - return errors.StructuralError("private key checksum failure") - } - data = data[:len(data)-sha1.Size] - } else { - if len(data) < 2 { - return errors.StructuralError("truncated private key data") - } - var sum uint16 - for i := 0; i < len(data)-2; i++ { - sum += uint16(data[i]) - } - if data[len(data)-2] != uint8(sum>>8) || - data[len(data)-1] != uint8(sum) { - return errors.StructuralError("private key checksum failure") - } - data = data[:len(data)-2] - } - - return pk.parsePrivateKey(data) -} - -func (pk *PrivateKey) parsePrivateKey(data []byte) (err error) { - switch pk.PublicKey.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoRSAEncryptOnly: - return pk.parseRSAPrivateKey(data) - case PubKeyAlgoDSA: - return pk.parseDSAPrivateKey(data) - case PubKeyAlgoElGamal: - return pk.parseElGamalPrivateKey(data) - case PubKeyAlgoECDSA: - return pk.parseECDSAPrivateKey(data) - } - panic("impossible") -} - -func (pk *PrivateKey) parseRSAPrivateKey(data []byte) (err error) { - rsaPub := pk.PublicKey.PublicKey.(*rsa.PublicKey) - rsaPriv := new(rsa.PrivateKey) - rsaPriv.PublicKey = *rsaPub - - buf := bytes.NewBuffer(data) - d, _, err := readMPI(buf) - if err != nil { - return - } - p, _, err := readMPI(buf) - if err != nil { - return - } - q, _, err := readMPI(buf) - if err != nil { - return - } - - rsaPriv.D = new(big.Int).SetBytes(d) - rsaPriv.Primes = make([]*big.Int, 2) - 
rsaPriv.Primes[0] = new(big.Int).SetBytes(p) - rsaPriv.Primes[1] = new(big.Int).SetBytes(q) - if err := rsaPriv.Validate(); err != nil { - return err - } - rsaPriv.Precompute() - pk.PrivateKey = rsaPriv - pk.Encrypted = false - pk.encryptedData = nil - - return nil -} - -func (pk *PrivateKey) parseDSAPrivateKey(data []byte) (err error) { - dsaPub := pk.PublicKey.PublicKey.(*dsa.PublicKey) - dsaPriv := new(dsa.PrivateKey) - dsaPriv.PublicKey = *dsaPub - - buf := bytes.NewBuffer(data) - x, _, err := readMPI(buf) - if err != nil { - return - } - - dsaPriv.X = new(big.Int).SetBytes(x) - pk.PrivateKey = dsaPriv - pk.Encrypted = false - pk.encryptedData = nil - - return nil -} - -func (pk *PrivateKey) parseElGamalPrivateKey(data []byte) (err error) { - pub := pk.PublicKey.PublicKey.(*elgamal.PublicKey) - priv := new(elgamal.PrivateKey) - priv.PublicKey = *pub - - buf := bytes.NewBuffer(data) - x, _, err := readMPI(buf) - if err != nil { - return - } - - priv.X = new(big.Int).SetBytes(x) - pk.PrivateKey = priv - pk.Encrypted = false - pk.encryptedData = nil - - return nil -} - -func (pk *PrivateKey) parseECDSAPrivateKey(data []byte) (err error) { - ecdsaPub := pk.PublicKey.PublicKey.(*ecdsa.PublicKey) - - buf := bytes.NewBuffer(data) - d, _, err := readMPI(buf) - if err != nil { - return - } - - pk.PrivateKey = &ecdsa.PrivateKey{ - PublicKey: *ecdsaPub, - D: new(big.Int).SetBytes(d), - } - pk.Encrypted = false - pk.encryptedData = nil - - return nil -} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/public_key.go b/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/public_key.go deleted file mode 100644 index fcd5f525196..00000000000 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/public_key.go +++ /dev/null @@ -1,753 +0,0 @@ -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -package packet - -import ( - "bytes" - "crypto" - "crypto/dsa" - "crypto/ecdsa" - "crypto/elliptic" - "crypto/rsa" - "crypto/sha1" - _ "crypto/sha256" - _ "crypto/sha512" - "encoding/binary" - "fmt" - "hash" - "io" - "math/big" - "strconv" - "time" - - "golang.org/x/crypto/openpgp/elgamal" - "golang.org/x/crypto/openpgp/errors" -) - -var ( - // NIST curve P-256 - oidCurveP256 []byte = []byte{0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x03, 0x01, 0x07} - // NIST curve P-384 - oidCurveP384 []byte = []byte{0x2B, 0x81, 0x04, 0x00, 0x22} - // NIST curve P-521 - oidCurveP521 []byte = []byte{0x2B, 0x81, 0x04, 0x00, 0x23} -) - -const maxOIDLength = 8 - -// ecdsaKey stores the algorithm-specific fields for ECDSA keys. -// as defined in RFC 6637, Section 9. -type ecdsaKey struct { - // oid contains the OID byte sequence identifying the elliptic curve used - oid []byte - // p contains the elliptic curve point that represents the public key - p parsedMPI -} - -// parseOID reads the OID for the curve as defined in RFC 6637, Section 9. 
-func parseOID(r io.Reader) (oid []byte, err error) { - buf := make([]byte, maxOIDLength) - if _, err = readFull(r, buf[:1]); err != nil { - return - } - oidLen := buf[0] - if int(oidLen) > len(buf) { - err = errors.UnsupportedError("invalid oid length: " + strconv.Itoa(int(oidLen))) - return - } - oid = buf[:oidLen] - _, err = readFull(r, oid) - return -} - -func (f *ecdsaKey) parse(r io.Reader) (err error) { - if f.oid, err = parseOID(r); err != nil { - return err - } - f.p.bytes, f.p.bitLength, err = readMPI(r) - return -} - -func (f *ecdsaKey) serialize(w io.Writer) (err error) { - buf := make([]byte, maxOIDLength+1) - buf[0] = byte(len(f.oid)) - copy(buf[1:], f.oid) - if _, err = w.Write(buf[:len(f.oid)+1]); err != nil { - return - } - return writeMPIs(w, f.p) -} - -func (f *ecdsaKey) newECDSA() (*ecdsa.PublicKey, error) { - var c elliptic.Curve - if bytes.Equal(f.oid, oidCurveP256) { - c = elliptic.P256() - } else if bytes.Equal(f.oid, oidCurveP384) { - c = elliptic.P384() - } else if bytes.Equal(f.oid, oidCurveP521) { - c = elliptic.P521() - } else { - return nil, errors.UnsupportedError(fmt.Sprintf("unsupported oid: %x", f.oid)) - } - x, y := elliptic.Unmarshal(c, f.p.bytes) - if x == nil { - return nil, errors.UnsupportedError("failed to parse EC point") - } - return &ecdsa.PublicKey{Curve: c, X: x, Y: y}, nil -} - -func (f *ecdsaKey) byteLen() int { - return 1 + len(f.oid) + 2 + len(f.p.bytes) -} - -type kdfHashFunction byte -type kdfAlgorithm byte - -// ecdhKdf stores key derivation function parameters -// used for ECDH encryption. See RFC 6637, Section 9. 
-type ecdhKdf struct { - KdfHash kdfHashFunction - KdfAlgo kdfAlgorithm -} - -func (f *ecdhKdf) parse(r io.Reader) (err error) { - buf := make([]byte, 1) - if _, err = readFull(r, buf); err != nil { - return - } - kdfLen := int(buf[0]) - if kdfLen < 3 { - return errors.UnsupportedError("Unsupported ECDH KDF length: " + strconv.Itoa(kdfLen)) - } - buf = make([]byte, kdfLen) - if _, err = readFull(r, buf); err != nil { - return - } - reserved := int(buf[0]) - f.KdfHash = kdfHashFunction(buf[1]) - f.KdfAlgo = kdfAlgorithm(buf[2]) - if reserved != 0x01 { - return errors.UnsupportedError("Unsupported KDF reserved field: " + strconv.Itoa(reserved)) - } - return -} - -func (f *ecdhKdf) serialize(w io.Writer) (err error) { - buf := make([]byte, 4) - // See RFC 6637, Section 9, Algorithm-Specific Fields for ECDH keys. - buf[0] = byte(0x03) // Length of the following fields - buf[1] = byte(0x01) // Reserved for future extensions, must be 1 for now - buf[2] = byte(f.KdfHash) - buf[3] = byte(f.KdfAlgo) - _, err = w.Write(buf[:]) - return -} - -func (f *ecdhKdf) byteLen() int { - return 4 -} - -// PublicKey represents an OpenPGP public key. See RFC 4880, section 5.5.2. -type PublicKey struct { - CreationTime time.Time - PubKeyAlgo PublicKeyAlgorithm - PublicKey interface{} // *rsa.PublicKey, *dsa.PublicKey or *ecdsa.PublicKey - Fingerprint [20]byte - KeyId uint64 - IsSubkey bool - - n, e, p, q, g, y parsedMPI - - // RFC 6637 fields - ec *ecdsaKey - ecdh *ecdhKdf -} - -// signingKey provides a convenient abstraction over signature verification -// for v3 and v4 public keys. -type signingKey interface { - SerializeSignaturePrefix(io.Writer) - serializeWithoutHeaders(io.Writer) error -} - -func fromBig(n *big.Int) parsedMPI { - return parsedMPI{ - bytes: n.Bytes(), - bitLength: uint16(n.BitLen()), - } -} - -// NewRSAPublicKey returns a PublicKey that wraps the given rsa.PublicKey. 
-func NewRSAPublicKey(creationTime time.Time, pub *rsa.PublicKey) *PublicKey { - pk := &PublicKey{ - CreationTime: creationTime, - PubKeyAlgo: PubKeyAlgoRSA, - PublicKey: pub, - n: fromBig(pub.N), - e: fromBig(big.NewInt(int64(pub.E))), - } - - pk.setFingerPrintAndKeyId() - return pk -} - -// NewDSAPublicKey returns a PublicKey that wraps the given dsa.PublicKey. -func NewDSAPublicKey(creationTime time.Time, pub *dsa.PublicKey) *PublicKey { - pk := &PublicKey{ - CreationTime: creationTime, - PubKeyAlgo: PubKeyAlgoDSA, - PublicKey: pub, - p: fromBig(pub.P), - q: fromBig(pub.Q), - g: fromBig(pub.G), - y: fromBig(pub.Y), - } - - pk.setFingerPrintAndKeyId() - return pk -} - -// NewElGamalPublicKey returns a PublicKey that wraps the given elgamal.PublicKey. -func NewElGamalPublicKey(creationTime time.Time, pub *elgamal.PublicKey) *PublicKey { - pk := &PublicKey{ - CreationTime: creationTime, - PubKeyAlgo: PubKeyAlgoElGamal, - PublicKey: pub, - p: fromBig(pub.P), - g: fromBig(pub.G), - y: fromBig(pub.Y), - } - - pk.setFingerPrintAndKeyId() - return pk -} - -func NewECDSAPublicKey(creationTime time.Time, pub *ecdsa.PublicKey) *PublicKey { - pk := &PublicKey{ - CreationTime: creationTime, - PubKeyAlgo: PubKeyAlgoECDSA, - PublicKey: pub, - ec: new(ecdsaKey), - } - - switch pub.Curve { - case elliptic.P256(): - pk.ec.oid = oidCurveP256 - case elliptic.P384(): - pk.ec.oid = oidCurveP384 - case elliptic.P521(): - pk.ec.oid = oidCurveP521 - default: - panic("unknown elliptic curve") - } - - pk.ec.p.bytes = elliptic.Marshal(pub.Curve, pub.X, pub.Y) - - // The bit length is 3 (for the 0x04 specifying an uncompressed key) - // plus two field elements (for x and y), which are rounded up to the - // nearest byte. 
See https://tools.ietf.org/html/rfc6637#section-6 - fieldBytes := (pub.Curve.Params().BitSize + 7) & ^7 - pk.ec.p.bitLength = uint16(3 + fieldBytes + fieldBytes) - - pk.setFingerPrintAndKeyId() - return pk -} - -func (pk *PublicKey) parse(r io.Reader) (err error) { - // RFC 4880, section 5.5.2 - var buf [6]byte - _, err = readFull(r, buf[:]) - if err != nil { - return - } - if buf[0] != 4 { - return errors.UnsupportedError("public key version") - } - pk.CreationTime = time.Unix(int64(uint32(buf[1])<<24|uint32(buf[2])<<16|uint32(buf[3])<<8|uint32(buf[4])), 0) - pk.PubKeyAlgo = PublicKeyAlgorithm(buf[5]) - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - err = pk.parseRSA(r) - case PubKeyAlgoDSA: - err = pk.parseDSA(r) - case PubKeyAlgoElGamal: - err = pk.parseElGamal(r) - case PubKeyAlgoECDSA: - pk.ec = new(ecdsaKey) - if err = pk.ec.parse(r); err != nil { - return err - } - pk.PublicKey, err = pk.ec.newECDSA() - case PubKeyAlgoECDH: - pk.ec = new(ecdsaKey) - if err = pk.ec.parse(r); err != nil { - return - } - pk.ecdh = new(ecdhKdf) - if err = pk.ecdh.parse(r); err != nil { - return - } - // The ECDH key is stored in an ecdsa.PublicKey for convenience. - pk.PublicKey, err = pk.ec.newECDSA() - default: - err = errors.UnsupportedError("public key type: " + strconv.Itoa(int(pk.PubKeyAlgo))) - } - if err != nil { - return - } - - pk.setFingerPrintAndKeyId() - return -} - -func (pk *PublicKey) setFingerPrintAndKeyId() { - // RFC 4880, section 12.2 - fingerPrint := sha1.New() - pk.SerializeSignaturePrefix(fingerPrint) - pk.serializeWithoutHeaders(fingerPrint) - copy(pk.Fingerprint[:], fingerPrint.Sum(nil)) - pk.KeyId = binary.BigEndian.Uint64(pk.Fingerprint[12:20]) -} - -// parseRSA parses RSA public key material from the given Reader. See RFC 4880, -// section 5.5.2. 
-func (pk *PublicKey) parseRSA(r io.Reader) (err error) { - pk.n.bytes, pk.n.bitLength, err = readMPI(r) - if err != nil { - return - } - pk.e.bytes, pk.e.bitLength, err = readMPI(r) - if err != nil { - return - } - - if len(pk.e.bytes) > 3 { - err = errors.UnsupportedError("large public exponent") - return - } - rsa := &rsa.PublicKey{ - N: new(big.Int).SetBytes(pk.n.bytes), - E: 0, - } - for i := 0; i < len(pk.e.bytes); i++ { - rsa.E <<= 8 - rsa.E |= int(pk.e.bytes[i]) - } - pk.PublicKey = rsa - return -} - -// parseDSA parses DSA public key material from the given Reader. See RFC 4880, -// section 5.5.2. -func (pk *PublicKey) parseDSA(r io.Reader) (err error) { - pk.p.bytes, pk.p.bitLength, err = readMPI(r) - if err != nil { - return - } - pk.q.bytes, pk.q.bitLength, err = readMPI(r) - if err != nil { - return - } - pk.g.bytes, pk.g.bitLength, err = readMPI(r) - if err != nil { - return - } - pk.y.bytes, pk.y.bitLength, err = readMPI(r) - if err != nil { - return - } - - dsa := new(dsa.PublicKey) - dsa.P = new(big.Int).SetBytes(pk.p.bytes) - dsa.Q = new(big.Int).SetBytes(pk.q.bytes) - dsa.G = new(big.Int).SetBytes(pk.g.bytes) - dsa.Y = new(big.Int).SetBytes(pk.y.bytes) - pk.PublicKey = dsa - return -} - -// parseElGamal parses ElGamal public key material from the given Reader. See -// RFC 4880, section 5.5.2. -func (pk *PublicKey) parseElGamal(r io.Reader) (err error) { - pk.p.bytes, pk.p.bitLength, err = readMPI(r) - if err != nil { - return - } - pk.g.bytes, pk.g.bitLength, err = readMPI(r) - if err != nil { - return - } - pk.y.bytes, pk.y.bitLength, err = readMPI(r) - if err != nil { - return - } - - elgamal := new(elgamal.PublicKey) - elgamal.P = new(big.Int).SetBytes(pk.p.bytes) - elgamal.G = new(big.Int).SetBytes(pk.g.bytes) - elgamal.Y = new(big.Int).SetBytes(pk.y.bytes) - pk.PublicKey = elgamal - return -} - -// SerializeSignaturePrefix writes the prefix for this public key to the given Writer. 
-// The prefix is used when calculating a signature over this public key. See -// RFC 4880, section 5.2.4. -func (pk *PublicKey) SerializeSignaturePrefix(h io.Writer) { - var pLength uint16 - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - pLength += 2 + uint16(len(pk.n.bytes)) - pLength += 2 + uint16(len(pk.e.bytes)) - case PubKeyAlgoDSA: - pLength += 2 + uint16(len(pk.p.bytes)) - pLength += 2 + uint16(len(pk.q.bytes)) - pLength += 2 + uint16(len(pk.g.bytes)) - pLength += 2 + uint16(len(pk.y.bytes)) - case PubKeyAlgoElGamal: - pLength += 2 + uint16(len(pk.p.bytes)) - pLength += 2 + uint16(len(pk.g.bytes)) - pLength += 2 + uint16(len(pk.y.bytes)) - case PubKeyAlgoECDSA: - pLength += uint16(pk.ec.byteLen()) - case PubKeyAlgoECDH: - pLength += uint16(pk.ec.byteLen()) - pLength += uint16(pk.ecdh.byteLen()) - default: - panic("unknown public key algorithm") - } - pLength += 6 - h.Write([]byte{0x99, byte(pLength >> 8), byte(pLength)}) - return -} - -func (pk *PublicKey) Serialize(w io.Writer) (err error) { - length := 6 // 6 byte header - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - length += 2 + len(pk.n.bytes) - length += 2 + len(pk.e.bytes) - case PubKeyAlgoDSA: - length += 2 + len(pk.p.bytes) - length += 2 + len(pk.q.bytes) - length += 2 + len(pk.g.bytes) - length += 2 + len(pk.y.bytes) - case PubKeyAlgoElGamal: - length += 2 + len(pk.p.bytes) - length += 2 + len(pk.g.bytes) - length += 2 + len(pk.y.bytes) - case PubKeyAlgoECDSA: - length += pk.ec.byteLen() - case PubKeyAlgoECDH: - length += pk.ec.byteLen() - length += pk.ecdh.byteLen() - default: - panic("unknown public key algorithm") - } - - packetType := packetTypePublicKey - if pk.IsSubkey { - packetType = packetTypePublicSubkey - } - err = serializeHeader(w, packetType, length) - if err != nil { - return - } - return pk.serializeWithoutHeaders(w) -} - -// serializeWithoutHeaders marshals the PublicKey to w in 
the form of an -// OpenPGP public key packet, not including the packet header. -func (pk *PublicKey) serializeWithoutHeaders(w io.Writer) (err error) { - var buf [6]byte - buf[0] = 4 - t := uint32(pk.CreationTime.Unix()) - buf[1] = byte(t >> 24) - buf[2] = byte(t >> 16) - buf[3] = byte(t >> 8) - buf[4] = byte(t) - buf[5] = byte(pk.PubKeyAlgo) - - _, err = w.Write(buf[:]) - if err != nil { - return - } - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - return writeMPIs(w, pk.n, pk.e) - case PubKeyAlgoDSA: - return writeMPIs(w, pk.p, pk.q, pk.g, pk.y) - case PubKeyAlgoElGamal: - return writeMPIs(w, pk.p, pk.g, pk.y) - case PubKeyAlgoECDSA: - return pk.ec.serialize(w) - case PubKeyAlgoECDH: - if err = pk.ec.serialize(w); err != nil { - return - } - return pk.ecdh.serialize(w) - } - return errors.InvalidArgumentError("bad public-key algorithm") -} - -// CanSign returns true iff this public key can generate signatures -func (pk *PublicKey) CanSign() bool { - return pk.PubKeyAlgo != PubKeyAlgoRSAEncryptOnly && pk.PubKeyAlgo != PubKeyAlgoElGamal -} - -// VerifySignature returns nil iff sig is a valid signature, made by this -// public key, of the data hashed into signed. signed is mutated by this call. 
-func (pk *PublicKey) VerifySignature(signed hash.Hash, sig *Signature) (err error) { - if !pk.CanSign() { - return errors.InvalidArgumentError("public key cannot generate signatures") - } - - signed.Write(sig.HashSuffix) - hashBytes := signed.Sum(nil) - - if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { - return errors.SignatureError("hash tag doesn't match") - } - - if pk.PubKeyAlgo != sig.PubKeyAlgo { - return errors.InvalidArgumentError("public key and signature use different algorithms") - } - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - rsaPublicKey, _ := pk.PublicKey.(*rsa.PublicKey) - err = rsa.VerifyPKCS1v15(rsaPublicKey, sig.Hash, hashBytes, padToKeySize(rsaPublicKey, sig.RSASignature.bytes)) - if err != nil { - return errors.SignatureError("RSA verification failure") - } - return nil - case PubKeyAlgoDSA: - dsaPublicKey, _ := pk.PublicKey.(*dsa.PublicKey) - // Need to truncate hashBytes to match FIPS 186-3 section 4.6. - subgroupSize := (dsaPublicKey.Q.BitLen() + 7) / 8 - if len(hashBytes) > subgroupSize { - hashBytes = hashBytes[:subgroupSize] - } - if !dsa.Verify(dsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.DSASigR.bytes), new(big.Int).SetBytes(sig.DSASigS.bytes)) { - return errors.SignatureError("DSA verification failure") - } - return nil - case PubKeyAlgoECDSA: - ecdsaPublicKey := pk.PublicKey.(*ecdsa.PublicKey) - if !ecdsa.Verify(ecdsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.ECDSASigR.bytes), new(big.Int).SetBytes(sig.ECDSASigS.bytes)) { - return errors.SignatureError("ECDSA verification failure") - } - return nil - default: - return errors.SignatureError("Unsupported public key algorithm used in signature") - } -} - -// VerifySignatureV3 returns nil iff sig is a valid signature, made by this -// public key, of the data hashed into signed. signed is mutated by this call. 
-func (pk *PublicKey) VerifySignatureV3(signed hash.Hash, sig *SignatureV3) (err error) { - if !pk.CanSign() { - return errors.InvalidArgumentError("public key cannot generate signatures") - } - - suffix := make([]byte, 5) - suffix[0] = byte(sig.SigType) - binary.BigEndian.PutUint32(suffix[1:], uint32(sig.CreationTime.Unix())) - signed.Write(suffix) - hashBytes := signed.Sum(nil) - - if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { - return errors.SignatureError("hash tag doesn't match") - } - - if pk.PubKeyAlgo != sig.PubKeyAlgo { - return errors.InvalidArgumentError("public key and signature use different algorithms") - } - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - rsaPublicKey := pk.PublicKey.(*rsa.PublicKey) - if err = rsa.VerifyPKCS1v15(rsaPublicKey, sig.Hash, hashBytes, padToKeySize(rsaPublicKey, sig.RSASignature.bytes)); err != nil { - return errors.SignatureError("RSA verification failure") - } - return - case PubKeyAlgoDSA: - dsaPublicKey := pk.PublicKey.(*dsa.PublicKey) - // Need to truncate hashBytes to match FIPS 186-3 section 4.6. - subgroupSize := (dsaPublicKey.Q.BitLen() + 7) / 8 - if len(hashBytes) > subgroupSize { - hashBytes = hashBytes[:subgroupSize] - } - if !dsa.Verify(dsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.DSASigR.bytes), new(big.Int).SetBytes(sig.DSASigS.bytes)) { - return errors.SignatureError("DSA verification failure") - } - return nil - default: - panic("shouldn't happen") - } -} - -// keySignatureHash returns a Hash of the message that needs to be signed for -// pk to assert a subkey relationship to signed. 
-func keySignatureHash(pk, signed signingKey, hashFunc crypto.Hash) (h hash.Hash, err error) { - if !hashFunc.Available() { - return nil, errors.UnsupportedError("hash function") - } - h = hashFunc.New() - - // RFC 4880, section 5.2.4 - pk.SerializeSignaturePrefix(h) - pk.serializeWithoutHeaders(h) - signed.SerializeSignaturePrefix(h) - signed.serializeWithoutHeaders(h) - return -} - -// VerifyKeySignature returns nil iff sig is a valid signature, made by this -// public key, of signed. -func (pk *PublicKey) VerifyKeySignature(signed *PublicKey, sig *Signature) error { - h, err := keySignatureHash(pk, signed, sig.Hash) - if err != nil { - return err - } - if err = pk.VerifySignature(h, sig); err != nil { - return err - } - - if sig.FlagSign { - // Signing subkeys must be cross-signed. See - // https://www.gnupg.org/faq/subkey-cross-certify.html. - if sig.EmbeddedSignature == nil { - return errors.StructuralError("signing subkey is missing cross-signature") - } - // Verify the cross-signature. This is calculated over the same - // data as the main signature, so we cannot just recursively - // call signed.VerifyKeySignature(...) - if h, err = keySignatureHash(pk, signed, sig.EmbeddedSignature.Hash); err != nil { - return errors.StructuralError("error while hashing for cross-signature: " + err.Error()) - } - if err := signed.VerifySignature(h, sig.EmbeddedSignature); err != nil { - return errors.StructuralError("error while verifying cross-signature: " + err.Error()) - } - } - - return nil -} - -func keyRevocationHash(pk signingKey, hashFunc crypto.Hash) (h hash.Hash, err error) { - if !hashFunc.Available() { - return nil, errors.UnsupportedError("hash function") - } - h = hashFunc.New() - - // RFC 4880, section 5.2.4 - pk.SerializeSignaturePrefix(h) - pk.serializeWithoutHeaders(h) - - return -} - -// VerifyRevocationSignature returns nil iff sig is a valid signature, made by this -// public key. 
-func (pk *PublicKey) VerifyRevocationSignature(sig *Signature) (err error) { - h, err := keyRevocationHash(pk, sig.Hash) - if err != nil { - return err - } - return pk.VerifySignature(h, sig) -} - -// userIdSignatureHash returns a Hash of the message that needs to be signed -// to assert that pk is a valid key for id. -func userIdSignatureHash(id string, pk *PublicKey, hashFunc crypto.Hash) (h hash.Hash, err error) { - if !hashFunc.Available() { - return nil, errors.UnsupportedError("hash function") - } - h = hashFunc.New() - - // RFC 4880, section 5.2.4 - pk.SerializeSignaturePrefix(h) - pk.serializeWithoutHeaders(h) - - var buf [5]byte - buf[0] = 0xb4 - buf[1] = byte(len(id) >> 24) - buf[2] = byte(len(id) >> 16) - buf[3] = byte(len(id) >> 8) - buf[4] = byte(len(id)) - h.Write(buf[:]) - h.Write([]byte(id)) - - return -} - -// VerifyUserIdSignature returns nil iff sig is a valid signature, made by this -// public key, that id is the identity of pub. -func (pk *PublicKey) VerifyUserIdSignature(id string, pub *PublicKey, sig *Signature) (err error) { - h, err := userIdSignatureHash(id, pub, sig.Hash) - if err != nil { - return err - } - return pk.VerifySignature(h, sig) -} - -// VerifyUserIdSignatureV3 returns nil iff sig is a valid signature, made by this -// public key, that id is the identity of pub. -func (pk *PublicKey) VerifyUserIdSignatureV3(id string, pub *PublicKey, sig *SignatureV3) (err error) { - h, err := userIdSignatureV3Hash(id, pub, sig.Hash) - if err != nil { - return err - } - return pk.VerifySignatureV3(h, sig) -} - -// KeyIdString returns the public key's fingerprint in capital hex -// (e.g. "6C7EE1B8621CC013"). -func (pk *PublicKey) KeyIdString() string { - return fmt.Sprintf("%X", pk.Fingerprint[12:20]) -} - -// KeyIdShortString returns the short form of public key's fingerprint -// in capital hex, as shown by gpg --list-keys (e.g. "621CC013"). 
-func (pk *PublicKey) KeyIdShortString() string { - return fmt.Sprintf("%X", pk.Fingerprint[16:20]) -} - -// A parsedMPI is used to store the contents of a big integer, along with the -// bit length that was specified in the original input. This allows the MPI to -// be reserialized exactly. -type parsedMPI struct { - bytes []byte - bitLength uint16 -} - -// writeMPIs is a utility function for serializing several big integers to the -// given Writer. -func writeMPIs(w io.Writer, mpis ...parsedMPI) (err error) { - for _, mpi := range mpis { - err = writeMPI(w, mpi.bitLength, mpi.bytes) - if err != nil { - return - } - } - return -} - -// BitLength returns the bit length for the given public key. -func (pk *PublicKey) BitLength() (bitLength uint16, err error) { - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - bitLength = pk.n.bitLength - case PubKeyAlgoDSA: - bitLength = pk.p.bitLength - case PubKeyAlgoElGamal: - bitLength = pk.p.bitLength - default: - err = errors.InvalidArgumentError("bad public-key algorithm") - } - return -} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/public_key_v3.go b/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/public_key_v3.go deleted file mode 100644 index 5daf7b6cfd4..00000000000 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/public_key_v3.go +++ /dev/null @@ -1,279 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package packet - -import ( - "crypto" - "crypto/md5" - "crypto/rsa" - "encoding/binary" - "fmt" - "hash" - "io" - "math/big" - "strconv" - "time" - - "golang.org/x/crypto/openpgp/errors" -) - -// PublicKeyV3 represents older, version 3 public keys. These keys are less secure and -// should not be used for signing or encrypting. 
They are supported here only for -// parsing version 3 key material and validating signatures. -// See RFC 4880, section 5.5.2. -type PublicKeyV3 struct { - CreationTime time.Time - DaysToExpire uint16 - PubKeyAlgo PublicKeyAlgorithm - PublicKey *rsa.PublicKey - Fingerprint [16]byte - KeyId uint64 - IsSubkey bool - - n, e parsedMPI -} - -// newRSAPublicKeyV3 returns a PublicKey that wraps the given rsa.PublicKey. -// Included here for testing purposes only. RFC 4880, section 5.5.2: -// "an implementation MUST NOT generate a V3 key, but MAY accept it." -func newRSAPublicKeyV3(creationTime time.Time, pub *rsa.PublicKey) *PublicKeyV3 { - pk := &PublicKeyV3{ - CreationTime: creationTime, - PublicKey: pub, - n: fromBig(pub.N), - e: fromBig(big.NewInt(int64(pub.E))), - } - - pk.setFingerPrintAndKeyId() - return pk -} - -func (pk *PublicKeyV3) parse(r io.Reader) (err error) { - // RFC 4880, section 5.5.2 - var buf [8]byte - if _, err = readFull(r, buf[:]); err != nil { - return - } - if buf[0] < 2 || buf[0] > 3 { - return errors.UnsupportedError("public key version") - } - pk.CreationTime = time.Unix(int64(uint32(buf[1])<<24|uint32(buf[2])<<16|uint32(buf[3])<<8|uint32(buf[4])), 0) - pk.DaysToExpire = binary.BigEndian.Uint16(buf[5:7]) - pk.PubKeyAlgo = PublicKeyAlgorithm(buf[7]) - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - err = pk.parseRSA(r) - default: - err = errors.UnsupportedError("public key type: " + strconv.Itoa(int(pk.PubKeyAlgo))) - } - if err != nil { - return - } - - pk.setFingerPrintAndKeyId() - return -} - -func (pk *PublicKeyV3) setFingerPrintAndKeyId() { - // RFC 4880, section 12.2 - fingerPrint := md5.New() - fingerPrint.Write(pk.n.bytes) - fingerPrint.Write(pk.e.bytes) - fingerPrint.Sum(pk.Fingerprint[:0]) - pk.KeyId = binary.BigEndian.Uint64(pk.n.bytes[len(pk.n.bytes)-8:]) -} - -// parseRSA parses RSA public key material from the given Reader. See RFC 4880, -// section 5.5.2. 
-func (pk *PublicKeyV3) parseRSA(r io.Reader) (err error) { - if pk.n.bytes, pk.n.bitLength, err = readMPI(r); err != nil { - return - } - if pk.e.bytes, pk.e.bitLength, err = readMPI(r); err != nil { - return - } - - // RFC 4880 Section 12.2 requires the low 8 bytes of the - // modulus to form the key id. - if len(pk.n.bytes) < 8 { - return errors.StructuralError("v3 public key modulus is too short") - } - if len(pk.e.bytes) > 3 { - err = errors.UnsupportedError("large public exponent") - return - } - rsa := &rsa.PublicKey{N: new(big.Int).SetBytes(pk.n.bytes)} - for i := 0; i < len(pk.e.bytes); i++ { - rsa.E <<= 8 - rsa.E |= int(pk.e.bytes[i]) - } - pk.PublicKey = rsa - return -} - -// SerializeSignaturePrefix writes the prefix for this public key to the given Writer. -// The prefix is used when calculating a signature over this public key. See -// RFC 4880, section 5.2.4. -func (pk *PublicKeyV3) SerializeSignaturePrefix(w io.Writer) { - var pLength uint16 - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - pLength += 2 + uint16(len(pk.n.bytes)) - pLength += 2 + uint16(len(pk.e.bytes)) - default: - panic("unknown public key algorithm") - } - pLength += 6 - w.Write([]byte{0x99, byte(pLength >> 8), byte(pLength)}) - return -} - -func (pk *PublicKeyV3) Serialize(w io.Writer) (err error) { - length := 8 // 8 byte header - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - length += 2 + len(pk.n.bytes) - length += 2 + len(pk.e.bytes) - default: - panic("unknown public key algorithm") - } - - packetType := packetTypePublicKey - if pk.IsSubkey { - packetType = packetTypePublicSubkey - } - if err = serializeHeader(w, packetType, length); err != nil { - return - } - return pk.serializeWithoutHeaders(w) -} - -// serializeWithoutHeaders marshals the PublicKey to w in the form of an -// OpenPGP public key packet, not including the packet header. 
-func (pk *PublicKeyV3) serializeWithoutHeaders(w io.Writer) (err error) { - var buf [8]byte - // Version 3 - buf[0] = 3 - // Creation time - t := uint32(pk.CreationTime.Unix()) - buf[1] = byte(t >> 24) - buf[2] = byte(t >> 16) - buf[3] = byte(t >> 8) - buf[4] = byte(t) - // Days to expire - buf[5] = byte(pk.DaysToExpire >> 8) - buf[6] = byte(pk.DaysToExpire) - // Public key algorithm - buf[7] = byte(pk.PubKeyAlgo) - - if _, err = w.Write(buf[:]); err != nil { - return - } - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - return writeMPIs(w, pk.n, pk.e) - } - return errors.InvalidArgumentError("bad public-key algorithm") -} - -// CanSign returns true iff this public key can generate signatures -func (pk *PublicKeyV3) CanSign() bool { - return pk.PubKeyAlgo != PubKeyAlgoRSAEncryptOnly -} - -// VerifySignatureV3 returns nil iff sig is a valid signature, made by this -// public key, of the data hashed into signed. signed is mutated by this call. -func (pk *PublicKeyV3) VerifySignatureV3(signed hash.Hash, sig *SignatureV3) (err error) { - if !pk.CanSign() { - return errors.InvalidArgumentError("public key cannot generate signatures") - } - - suffix := make([]byte, 5) - suffix[0] = byte(sig.SigType) - binary.BigEndian.PutUint32(suffix[1:], uint32(sig.CreationTime.Unix())) - signed.Write(suffix) - hashBytes := signed.Sum(nil) - - if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { - return errors.SignatureError("hash tag doesn't match") - } - - if pk.PubKeyAlgo != sig.PubKeyAlgo { - return errors.InvalidArgumentError("public key and signature use different algorithms") - } - - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - if err = rsa.VerifyPKCS1v15(pk.PublicKey, sig.Hash, hashBytes, sig.RSASignature.bytes); err != nil { - return errors.SignatureError("RSA verification failure") - } - return - default: - // V3 public keys only support RSA. 
- panic("shouldn't happen") - } -} - -// VerifyUserIdSignatureV3 returns nil iff sig is a valid signature, made by this -// public key, that id is the identity of pub. -func (pk *PublicKeyV3) VerifyUserIdSignatureV3(id string, pub *PublicKeyV3, sig *SignatureV3) (err error) { - h, err := userIdSignatureV3Hash(id, pk, sig.Hash) - if err != nil { - return err - } - return pk.VerifySignatureV3(h, sig) -} - -// VerifyKeySignatureV3 returns nil iff sig is a valid signature, made by this -// public key, of signed. -func (pk *PublicKeyV3) VerifyKeySignatureV3(signed *PublicKeyV3, sig *SignatureV3) (err error) { - h, err := keySignatureHash(pk, signed, sig.Hash) - if err != nil { - return err - } - return pk.VerifySignatureV3(h, sig) -} - -// userIdSignatureV3Hash returns a Hash of the message that needs to be signed -// to assert that pk is a valid key for id. -func userIdSignatureV3Hash(id string, pk signingKey, hfn crypto.Hash) (h hash.Hash, err error) { - if !hfn.Available() { - return nil, errors.UnsupportedError("hash function") - } - h = hfn.New() - - // RFC 4880, section 5.2.4 - pk.SerializeSignaturePrefix(h) - pk.serializeWithoutHeaders(h) - - h.Write([]byte(id)) - - return -} - -// KeyIdString returns the public key's fingerprint in capital hex -// (e.g. "6C7EE1B8621CC013"). -func (pk *PublicKeyV3) KeyIdString() string { - return fmt.Sprintf("%X", pk.KeyId) -} - -// KeyIdShortString returns the short form of public key's fingerprint -// in capital hex, as shown by gpg --list-keys (e.g. "621CC013"). -func (pk *PublicKeyV3) KeyIdShortString() string { - return fmt.Sprintf("%X", pk.KeyId&0xFFFFFFFF) -} - -// BitLength returns the bit length for the given public key. 
-func (pk *PublicKeyV3) BitLength() (bitLength uint16, err error) { - switch pk.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: - bitLength = pk.n.bitLength - default: - err = errors.InvalidArgumentError("bad public-key algorithm") - } - return -} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/signature_v3.go b/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/signature_v3.go deleted file mode 100644 index 6edff889349..00000000000 --- a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/signature_v3.go +++ /dev/null @@ -1,146 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package packet - -import ( - "crypto" - "encoding/binary" - "fmt" - "io" - "strconv" - "time" - - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/s2k" -) - -// SignatureV3 represents older version 3 signatures. These signatures are less secure -// than version 4 and should not be used to create new signatures. They are included -// here for backwards compatibility to read and validate with older key material. -// See RFC 4880, section 5.2.2. 
-type SignatureV3 struct { - SigType SignatureType - CreationTime time.Time - IssuerKeyId uint64 - PubKeyAlgo PublicKeyAlgorithm - Hash crypto.Hash - HashTag [2]byte - - RSASignature parsedMPI - DSASigR, DSASigS parsedMPI -} - -func (sig *SignatureV3) parse(r io.Reader) (err error) { - // RFC 4880, section 5.2.2 - var buf [8]byte - if _, err = readFull(r, buf[:1]); err != nil { - return - } - if buf[0] < 2 || buf[0] > 3 { - err = errors.UnsupportedError("signature packet version " + strconv.Itoa(int(buf[0]))) - return - } - if _, err = readFull(r, buf[:1]); err != nil { - return - } - if buf[0] != 5 { - err = errors.UnsupportedError( - "invalid hashed material length " + strconv.Itoa(int(buf[0]))) - return - } - - // Read hashed material: signature type + creation time - if _, err = readFull(r, buf[:5]); err != nil { - return - } - sig.SigType = SignatureType(buf[0]) - t := binary.BigEndian.Uint32(buf[1:5]) - sig.CreationTime = time.Unix(int64(t), 0) - - // Eight-octet Key ID of signer. - if _, err = readFull(r, buf[:8]); err != nil { - return - } - sig.IssuerKeyId = binary.BigEndian.Uint64(buf[:]) - - // Public-key and hash algorithm - if _, err = readFull(r, buf[:2]); err != nil { - return - } - sig.PubKeyAlgo = PublicKeyAlgorithm(buf[0]) - switch sig.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA: - default: - err = errors.UnsupportedError("public key algorithm " + strconv.Itoa(int(sig.PubKeyAlgo))) - return - } - var ok bool - if sig.Hash, ok = s2k.HashIdToHash(buf[1]); !ok { - return errors.UnsupportedError("hash function " + strconv.Itoa(int(buf[2]))) - } - - // Two-octet field holding left 16 bits of signed hash value. 
- if _, err = readFull(r, sig.HashTag[:2]); err != nil { - return - } - - switch sig.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - sig.RSASignature.bytes, sig.RSASignature.bitLength, err = readMPI(r) - case PubKeyAlgoDSA: - if sig.DSASigR.bytes, sig.DSASigR.bitLength, err = readMPI(r); err != nil { - return - } - sig.DSASigS.bytes, sig.DSASigS.bitLength, err = readMPI(r) - default: - panic("unreachable") - } - return -} - -// Serialize marshals sig to w. Sign, SignUserId or SignKey must have been -// called first. -func (sig *SignatureV3) Serialize(w io.Writer) (err error) { - buf := make([]byte, 8) - - // Write the sig type and creation time - buf[0] = byte(sig.SigType) - binary.BigEndian.PutUint32(buf[1:5], uint32(sig.CreationTime.Unix())) - if _, err = w.Write(buf[:5]); err != nil { - return - } - - // Write the issuer long key ID - binary.BigEndian.PutUint64(buf[:8], sig.IssuerKeyId) - if _, err = w.Write(buf[:8]); err != nil { - return - } - - // Write public key algorithm, hash ID, and hash value - buf[0] = byte(sig.PubKeyAlgo) - hashId, ok := s2k.HashToHashId(sig.Hash) - if !ok { - return errors.UnsupportedError(fmt.Sprintf("hash function %v", sig.Hash)) - } - buf[1] = hashId - copy(buf[2:4], sig.HashTag[:]) - if _, err = w.Write(buf[:4]); err != nil { - return - } - - if sig.RSASignature.bytes == nil && sig.DSASigR.bytes == nil { - return errors.InvalidArgumentError("Signature: need to call Sign, SignUserId or SignKey before Serialize") - } - - switch sig.PubKeyAlgo { - case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: - err = writeMPIs(w, sig.RSASignature) - case PubKeyAlgoDSA: - err = writeMPIs(w, sig.DSASigR, sig.DSASigS) - default: - panic("impossible") - } - return -} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/symmetric_key_encrypted.go b/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/symmetric_key_encrypted.go deleted file mode 100644 index 744c2d2c42d..00000000000 --- 
a/.ci/providerlint/vendor/golang.org/x/crypto/openpgp/packet/symmetric_key_encrypted.go +++ /dev/null @@ -1,155 +0,0 @@ -// Copyright 2011 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package packet - -import ( - "bytes" - "crypto/cipher" - "io" - "strconv" - - "golang.org/x/crypto/openpgp/errors" - "golang.org/x/crypto/openpgp/s2k" -) - -// This is the largest session key that we'll support. Since no 512-bit cipher -// has even been seriously used, this is comfortably large. -const maxSessionKeySizeInBytes = 64 - -// SymmetricKeyEncrypted represents a passphrase protected session key. See RFC -// 4880, section 5.3. -type SymmetricKeyEncrypted struct { - CipherFunc CipherFunction - s2k func(out, in []byte) - encryptedKey []byte -} - -const symmetricKeyEncryptedVersion = 4 - -func (ske *SymmetricKeyEncrypted) parse(r io.Reader) error { - // RFC 4880, section 5.3. - var buf [2]byte - if _, err := readFull(r, buf[:]); err != nil { - return err - } - if buf[0] != symmetricKeyEncryptedVersion { - return errors.UnsupportedError("SymmetricKeyEncrypted version") - } - ske.CipherFunc = CipherFunction(buf[1]) - - if ske.CipherFunc.KeySize() == 0 { - return errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(buf[1]))) - } - - var err error - ske.s2k, err = s2k.Parse(r) - if err != nil { - return err - } - - encryptedKey := make([]byte, maxSessionKeySizeInBytes) - // The session key may follow. We just have to try and read to find - // out. If it exists then we limit it to maxSessionKeySizeInBytes. 
- n, err := readFull(r, encryptedKey) - if err != nil && err != io.ErrUnexpectedEOF { - return err - } - - if n != 0 { - if n == maxSessionKeySizeInBytes { - return errors.UnsupportedError("oversized encrypted session key") - } - ske.encryptedKey = encryptedKey[:n] - } - - return nil -} - -// Decrypt attempts to decrypt an encrypted session key and returns the key and -// the cipher to use when decrypting a subsequent Symmetrically Encrypted Data -// packet. -func (ske *SymmetricKeyEncrypted) Decrypt(passphrase []byte) ([]byte, CipherFunction, error) { - key := make([]byte, ske.CipherFunc.KeySize()) - ske.s2k(key, passphrase) - - if len(ske.encryptedKey) == 0 { - return key, ske.CipherFunc, nil - } - - // the IV is all zeros - iv := make([]byte, ske.CipherFunc.blockSize()) - c := cipher.NewCFBDecrypter(ske.CipherFunc.new(key), iv) - plaintextKey := make([]byte, len(ske.encryptedKey)) - c.XORKeyStream(plaintextKey, ske.encryptedKey) - cipherFunc := CipherFunction(plaintextKey[0]) - if cipherFunc.blockSize() == 0 { - return nil, ske.CipherFunc, errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(cipherFunc))) - } - plaintextKey = plaintextKey[1:] - if l, cipherKeySize := len(plaintextKey), cipherFunc.KeySize(); l != cipherFunc.KeySize() { - return nil, cipherFunc, errors.StructuralError("length of decrypted key (" + strconv.Itoa(l) + ") " + - "not equal to cipher keysize (" + strconv.Itoa(cipherKeySize) + ")") - } - return plaintextKey, cipherFunc, nil -} - -// SerializeSymmetricKeyEncrypted serializes a symmetric key packet to w. The -// packet contains a random session key, encrypted by a key derived from the -// given passphrase. The session key is returned and must be passed to -// SerializeSymmetricallyEncrypted. -// If config is nil, sensible defaults will be used. 
-func SerializeSymmetricKeyEncrypted(w io.Writer, passphrase []byte, config *Config) (key []byte, err error) { - cipherFunc := config.Cipher() - keySize := cipherFunc.KeySize() - if keySize == 0 { - return nil, errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(cipherFunc))) - } - - s2kBuf := new(bytes.Buffer) - keyEncryptingKey := make([]byte, keySize) - // s2k.Serialize salts and stretches the passphrase, and writes the - // resulting key to keyEncryptingKey and the s2k descriptor to s2kBuf. - err = s2k.Serialize(s2kBuf, keyEncryptingKey, config.Random(), passphrase, &s2k.Config{Hash: config.Hash(), S2KCount: config.PasswordHashIterations()}) - if err != nil { - return - } - s2kBytes := s2kBuf.Bytes() - - packetLength := 2 /* header */ + len(s2kBytes) + 1 /* cipher type */ + keySize - err = serializeHeader(w, packetTypeSymmetricKeyEncrypted, packetLength) - if err != nil { - return - } - - var buf [2]byte - buf[0] = symmetricKeyEncryptedVersion - buf[1] = byte(cipherFunc) - _, err = w.Write(buf[:]) - if err != nil { - return - } - _, err = w.Write(s2kBytes) - if err != nil { - return - } - - sessionKey := make([]byte, keySize) - _, err = io.ReadFull(config.Random(), sessionKey) - if err != nil { - return - } - iv := make([]byte, cipherFunc.blockSize()) - c := cipher.NewCFBEncrypter(cipherFunc.new(keyEncryptingKey), iv) - encryptedCipherAndKey := make([]byte, keySize+1) - c.XORKeyStream(encryptedCipherAndKey, buf[1:]) - c.XORKeyStream(encryptedCipherAndKey[1:], sessionKey) - _, err = w.Write(encryptedCipherAndKey) - if err != nil { - return - } - - key = sessionKey - return -} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/doc.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/doc.go new file mode 100644 index 00000000000..decd8cf9bf7 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/doc.go @@ -0,0 +1,62 @@ +// Copyright 2014 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package sha3 implements the SHA-3 fixed-output-length hash functions and +// the SHAKE variable-output-length hash functions defined by FIPS-202. +// +// Both types of hash function use the "sponge" construction and the Keccak +// permutation. For a detailed specification see http://keccak.noekeon.org/ +// +// # Guidance +// +// If you aren't sure what function you need, use SHAKE256 with at least 64 +// bytes of output. The SHAKE instances are faster than the SHA3 instances; +// the latter have to allocate memory to conform to the hash.Hash interface. +// +// If you need a secret-key MAC (message authentication code), prepend the +// secret key to the input, hash with SHAKE256 and read at least 32 bytes of +// output. +// +// # Security strengths +// +// The SHA3-x (x equals 224, 256, 384, or 512) functions have a security +// strength against preimage attacks of x bits. Since they only produce "x" +// bits of output, their collision-resistance is only "x/2" bits. +// +// The SHAKE-256 and -128 functions have a generic security strength of 256 and +// 128 bits against all attacks, provided that at least 2x bits of their output +// is used. Requesting more than 64 or 32 bytes of output, respectively, does +// not increase the collision-resistance of the SHAKE functions. +// +// # The sponge construction +// +// A sponge builds a pseudo-random function from a public pseudo-random +// permutation, by applying the permutation to a state of "rate + capacity" +// bytes, but hiding "capacity" of the bytes. +// +// A sponge starts out with a zero state. To hash an input using a sponge, up +// to "rate" bytes of the input are XORed into the sponge's state. The sponge +// is then "full" and the permutation is applied to "empty" it. This process is +// repeated until all the input has been "absorbed". The input is then padded. 
+// The digest is "squeezed" from the sponge in the same way, except that output +// is copied out instead of input being XORed in. +// +// A sponge is parameterized by its generic security strength, which is equal +// to half its capacity; capacity + rate is equal to the permutation's width. +// Since the KeccakF-1600 permutation is 1600 bits (200 bytes) wide, this means +// that the security strength of a sponge instance is equal to (1600 - bitrate) / 2. +// +// # Recommendations +// +// The SHAKE functions are recommended for most new uses. They can produce +// output of arbitrary length. SHAKE256, with an output length of at least +// 64 bytes, provides 256-bit security against all attacks. The Keccak team +// recommends it for most applications upgrading from SHA2-512. (NIST chose a +// much stronger, but much slower, sponge instance for SHA3-512.) +// +// The SHA-3 functions are "drop-in" replacements for the SHA-2 functions. +// They produce output of the same length, with the same security strengths +// against all attacks. This means, in particular, that SHA3-256 only has +// 128-bit collision resistance, because its output length is 32 bytes. +package sha3 // import "golang.org/x/crypto/sha3" diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/hashes.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/hashes.go new file mode 100644 index 00000000000..0d8043fd2a1 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/hashes.go @@ -0,0 +1,97 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +// This file provides functions for creating instances of the SHA-3 +// and SHAKE hash functions, as well as utility functions for hashing +// bytes. + +import ( + "hash" +) + +// New224 creates a new SHA3-224 hash. 
+// Its generic security strength is 224 bits against preimage attacks, +// and 112 bits against collision attacks. +func New224() hash.Hash { + if h := new224Asm(); h != nil { + return h + } + return &state{rate: 144, outputLen: 28, dsbyte: 0x06} +} + +// New256 creates a new SHA3-256 hash. +// Its generic security strength is 256 bits against preimage attacks, +// and 128 bits against collision attacks. +func New256() hash.Hash { + if h := new256Asm(); h != nil { + return h + } + return &state{rate: 136, outputLen: 32, dsbyte: 0x06} +} + +// New384 creates a new SHA3-384 hash. +// Its generic security strength is 384 bits against preimage attacks, +// and 192 bits against collision attacks. +func New384() hash.Hash { + if h := new384Asm(); h != nil { + return h + } + return &state{rate: 104, outputLen: 48, dsbyte: 0x06} +} + +// New512 creates a new SHA3-512 hash. +// Its generic security strength is 512 bits against preimage attacks, +// and 256 bits against collision attacks. +func New512() hash.Hash { + if h := new512Asm(); h != nil { + return h + } + return &state{rate: 72, outputLen: 64, dsbyte: 0x06} +} + +// NewLegacyKeccak256 creates a new Keccak-256 hash. +// +// Only use this function if you require compatibility with an existing cryptosystem +// that uses non-standard padding. All other users should use New256 instead. +func NewLegacyKeccak256() hash.Hash { return &state{rate: 136, outputLen: 32, dsbyte: 0x01} } + +// NewLegacyKeccak512 creates a new Keccak-512 hash. +// +// Only use this function if you require compatibility with an existing cryptosystem +// that uses non-standard padding. All other users should use New512 instead. +func NewLegacyKeccak512() hash.Hash { return &state{rate: 72, outputLen: 64, dsbyte: 0x01} } + +// Sum224 returns the SHA3-224 digest of the data. +func Sum224(data []byte) (digest [28]byte) { + h := New224() + h.Write(data) + h.Sum(digest[:0]) + return +} + +// Sum256 returns the SHA3-256 digest of the data. 
+func Sum256(data []byte) (digest [32]byte) { + h := New256() + h.Write(data) + h.Sum(digest[:0]) + return +} + +// Sum384 returns the SHA3-384 digest of the data. +func Sum384(data []byte) (digest [48]byte) { + h := New384() + h.Write(data) + h.Sum(digest[:0]) + return +} + +// Sum512 returns the SHA3-512 digest of the data. +func Sum512(data []byte) (digest [64]byte) { + h := New512() + h.Write(data) + h.Sum(digest[:0]) + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/hashes_generic.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/hashes_generic.go new file mode 100644 index 00000000000..c74fc20fcb3 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/hashes_generic.go @@ -0,0 +1,28 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !gc || purego || !s390x +// +build !gc purego !s390x + +package sha3 + +import ( + "hash" +) + +// new224Asm returns an assembly implementation of SHA3-224 if available, +// otherwise it returns nil. +func new224Asm() hash.Hash { return nil } + +// new256Asm returns an assembly implementation of SHA3-256 if available, +// otherwise it returns nil. +func new256Asm() hash.Hash { return nil } + +// new384Asm returns an assembly implementation of SHA3-384 if available, +// otherwise it returns nil. +func new384Asm() hash.Hash { return nil } + +// new512Asm returns an assembly implementation of SHA3-512 if available, +// otherwise it returns nil. +func new512Asm() hash.Hash { return nil } diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf.go new file mode 100644 index 00000000000..e5faa375c04 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf.go @@ -0,0 +1,415 @@ +// Copyright 2014 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !amd64 || purego || !gc +// +build !amd64 purego !gc + +package sha3 + +import "math/bits" + +// rc stores the round constants for use in the ι step. +var rc = [24]uint64{ + 0x0000000000000001, + 0x0000000000008082, + 0x800000000000808A, + 0x8000000080008000, + 0x000000000000808B, + 0x0000000080000001, + 0x8000000080008081, + 0x8000000000008009, + 0x000000000000008A, + 0x0000000000000088, + 0x0000000080008009, + 0x000000008000000A, + 0x000000008000808B, + 0x800000000000008B, + 0x8000000000008089, + 0x8000000000008003, + 0x8000000000008002, + 0x8000000000000080, + 0x000000000000800A, + 0x800000008000000A, + 0x8000000080008081, + 0x8000000000008080, + 0x0000000080000001, + 0x8000000080008008, +} + +// keccakF1600 applies the Keccak permutation to a 1600b-wide +// state represented as a slice of 25 uint64s. +func keccakF1600(a *[25]uint64) { + // Implementation translated from Keccak-inplace.c + // in the keccak reference code. + var t, bc0, bc1, bc2, bc3, bc4, d0, d1, d2, d3, d4 uint64 + + for i := 0; i < 24; i += 4 { + // Combines the 5 steps in each round into 2 steps. + // Unrolls 4 rounds per loop and spreads some steps across rounds. 
+ + // Round 1 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[6] ^ d1 + bc1 = bits.RotateLeft64(t, 44) + t = a[12] ^ d2 + bc2 = bits.RotateLeft64(t, 43) + t = a[18] ^ d3 + bc3 = bits.RotateLeft64(t, 21) + t = a[24] ^ d4 + bc4 = bits.RotateLeft64(t, 14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i] + a[6] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + + t = a[10] ^ d0 + bc2 = bits.RotateLeft64(t, 3) + t = a[16] ^ d1 + bc3 = bits.RotateLeft64(t, 45) + t = a[22] ^ d2 + bc4 = bits.RotateLeft64(t, 61) + t = a[3] ^ d3 + bc0 = bits.RotateLeft64(t, 28) + t = a[9] ^ d4 + bc1 = bits.RotateLeft64(t, 20) + a[10] = bc0 ^ (bc2 &^ bc1) + a[16] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc4 = bits.RotateLeft64(t, 18) + t = a[1] ^ d1 + bc0 = bits.RotateLeft64(t, 1) + t = a[7] ^ d2 + bc1 = bits.RotateLeft64(t, 6) + t = a[13] ^ d3 + bc2 = bits.RotateLeft64(t, 25) + t = a[19] ^ d4 + bc3 = bits.RotateLeft64(t, 8) + a[20] = bc0 ^ (bc2 &^ bc1) + a[1] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc1 = bits.RotateLeft64(t, 36) + t = a[11] ^ d1 + bc2 = bits.RotateLeft64(t, 10) + t = a[17] ^ d2 + bc3 = bits.RotateLeft64(t, 15) + t = a[23] ^ d3 + bc4 = bits.RotateLeft64(t, 56) + t = a[4] ^ d4 + bc0 = bits.RotateLeft64(t, 27) + a[5] = bc0 ^ (bc2 &^ bc1) + a[11] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc3 = bits.RotateLeft64(t, 41) + t = 
a[21] ^ d1 + bc4 = bits.RotateLeft64(t, 2) + t = a[2] ^ d2 + bc0 = bits.RotateLeft64(t, 62) + t = a[8] ^ d3 + bc1 = bits.RotateLeft64(t, 55) + t = a[14] ^ d4 + bc2 = bits.RotateLeft64(t, 39) + a[15] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + // Round 2 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[16] ^ d1 + bc1 = bits.RotateLeft64(t, 44) + t = a[7] ^ d2 + bc2 = bits.RotateLeft64(t, 43) + t = a[23] ^ d3 + bc3 = bits.RotateLeft64(t, 21) + t = a[14] ^ d4 + bc4 = bits.RotateLeft64(t, 14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i+1] + a[16] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc2 = bits.RotateLeft64(t, 3) + t = a[11] ^ d1 + bc3 = bits.RotateLeft64(t, 45) + t = a[2] ^ d2 + bc4 = bits.RotateLeft64(t, 61) + t = a[18] ^ d3 + bc0 = bits.RotateLeft64(t, 28) + t = a[9] ^ d4 + bc1 = bits.RotateLeft64(t, 20) + a[20] = bc0 ^ (bc2 &^ bc1) + a[11] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc4 = bits.RotateLeft64(t, 18) + t = a[6] ^ d1 + bc0 = bits.RotateLeft64(t, 1) + t = a[22] ^ d2 + bc1 = bits.RotateLeft64(t, 6) + t = a[13] ^ d3 + bc2 = bits.RotateLeft64(t, 25) + t = a[4] ^ d4 + bc3 = bits.RotateLeft64(t, 8) + a[15] = bc0 ^ (bc2 &^ bc1) + a[6] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + t = a[10] ^ d0 + bc1 = bits.RotateLeft64(t, 36) + t = a[1] ^ d1 + bc2 = bits.RotateLeft64(t, 10) + t = a[17] ^ 
d2 + bc3 = bits.RotateLeft64(t, 15) + t = a[8] ^ d3 + bc4 = bits.RotateLeft64(t, 56) + t = a[24] ^ d4 + bc0 = bits.RotateLeft64(t, 27) + a[10] = bc0 ^ (bc2 &^ bc1) + a[1] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc3 = bits.RotateLeft64(t, 41) + t = a[21] ^ d1 + bc4 = bits.RotateLeft64(t, 2) + t = a[12] ^ d2 + bc0 = bits.RotateLeft64(t, 62) + t = a[3] ^ d3 + bc1 = bits.RotateLeft64(t, 55) + t = a[19] ^ d4 + bc2 = bits.RotateLeft64(t, 39) + a[5] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + // Round 3 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[11] ^ d1 + bc1 = bits.RotateLeft64(t, 44) + t = a[22] ^ d2 + bc2 = bits.RotateLeft64(t, 43) + t = a[8] ^ d3 + bc3 = bits.RotateLeft64(t, 21) + t = a[19] ^ d4 + bc4 = bits.RotateLeft64(t, 14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i+2] + a[11] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc2 = bits.RotateLeft64(t, 3) + t = a[1] ^ d1 + bc3 = bits.RotateLeft64(t, 45) + t = a[12] ^ d2 + bc4 = bits.RotateLeft64(t, 61) + t = a[23] ^ d3 + bc0 = bits.RotateLeft64(t, 28) + t = a[9] ^ d4 + bc1 = bits.RotateLeft64(t, 20) + a[15] = bc0 ^ (bc2 &^ bc1) + a[1] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc4 = bits.RotateLeft64(t, 18) + t = a[16] ^ d1 + bc0 = bits.RotateLeft64(t, 1) + t = a[2] ^ d2 + bc1 = bits.RotateLeft64(t, 6) + t = a[13] ^ d3 + bc2 
= bits.RotateLeft64(t, 25) + t = a[24] ^ d4 + bc3 = bits.RotateLeft64(t, 8) + a[5] = bc0 ^ (bc2 &^ bc1) + a[16] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc1 = bits.RotateLeft64(t, 36) + t = a[6] ^ d1 + bc2 = bits.RotateLeft64(t, 10) + t = a[17] ^ d2 + bc3 = bits.RotateLeft64(t, 15) + t = a[3] ^ d3 + bc4 = bits.RotateLeft64(t, 56) + t = a[14] ^ d4 + bc0 = bits.RotateLeft64(t, 27) + a[20] = bc0 ^ (bc2 &^ bc1) + a[6] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + t = a[10] ^ d0 + bc3 = bits.RotateLeft64(t, 41) + t = a[21] ^ d1 + bc4 = bits.RotateLeft64(t, 2) + t = a[7] ^ d2 + bc0 = bits.RotateLeft64(t, 62) + t = a[18] ^ d3 + bc1 = bits.RotateLeft64(t, 55) + t = a[4] ^ d4 + bc2 = bits.RotateLeft64(t, 39) + a[10] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + // Round 4 + bc0 = a[0] ^ a[5] ^ a[10] ^ a[15] ^ a[20] + bc1 = a[1] ^ a[6] ^ a[11] ^ a[16] ^ a[21] + bc2 = a[2] ^ a[7] ^ a[12] ^ a[17] ^ a[22] + bc3 = a[3] ^ a[8] ^ a[13] ^ a[18] ^ a[23] + bc4 = a[4] ^ a[9] ^ a[14] ^ a[19] ^ a[24] + d0 = bc4 ^ (bc1<<1 | bc1>>63) + d1 = bc0 ^ (bc2<<1 | bc2>>63) + d2 = bc1 ^ (bc3<<1 | bc3>>63) + d3 = bc2 ^ (bc4<<1 | bc4>>63) + d4 = bc3 ^ (bc0<<1 | bc0>>63) + + bc0 = a[0] ^ d0 + t = a[1] ^ d1 + bc1 = bits.RotateLeft64(t, 44) + t = a[2] ^ d2 + bc2 = bits.RotateLeft64(t, 43) + t = a[3] ^ d3 + bc3 = bits.RotateLeft64(t, 21) + t = a[4] ^ d4 + bc4 = bits.RotateLeft64(t, 14) + a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i+3] + a[1] = bc1 ^ (bc3 &^ bc2) + a[2] = bc2 ^ (bc4 &^ bc3) + a[3] = bc3 ^ (bc0 &^ bc4) + a[4] = bc4 ^ (bc1 &^ bc0) + + t = a[5] ^ d0 + bc2 = bits.RotateLeft64(t, 3) + t = a[6] ^ d1 + bc3 = bits.RotateLeft64(t, 45) + t = a[7] ^ d2 + bc4 = bits.RotateLeft64(t, 61) + t = a[8] ^ d3 + bc0 = bits.RotateLeft64(t, 28) + t = a[9] ^ d4 + bc1 = 
bits.RotateLeft64(t, 20) + a[5] = bc0 ^ (bc2 &^ bc1) + a[6] = bc1 ^ (bc3 &^ bc2) + a[7] = bc2 ^ (bc4 &^ bc3) + a[8] = bc3 ^ (bc0 &^ bc4) + a[9] = bc4 ^ (bc1 &^ bc0) + + t = a[10] ^ d0 + bc4 = bits.RotateLeft64(t, 18) + t = a[11] ^ d1 + bc0 = bits.RotateLeft64(t, 1) + t = a[12] ^ d2 + bc1 = bits.RotateLeft64(t, 6) + t = a[13] ^ d3 + bc2 = bits.RotateLeft64(t, 25) + t = a[14] ^ d4 + bc3 = bits.RotateLeft64(t, 8) + a[10] = bc0 ^ (bc2 &^ bc1) + a[11] = bc1 ^ (bc3 &^ bc2) + a[12] = bc2 ^ (bc4 &^ bc3) + a[13] = bc3 ^ (bc0 &^ bc4) + a[14] = bc4 ^ (bc1 &^ bc0) + + t = a[15] ^ d0 + bc1 = bits.RotateLeft64(t, 36) + t = a[16] ^ d1 + bc2 = bits.RotateLeft64(t, 10) + t = a[17] ^ d2 + bc3 = bits.RotateLeft64(t, 15) + t = a[18] ^ d3 + bc4 = bits.RotateLeft64(t, 56) + t = a[19] ^ d4 + bc0 = bits.RotateLeft64(t, 27) + a[15] = bc0 ^ (bc2 &^ bc1) + a[16] = bc1 ^ (bc3 &^ bc2) + a[17] = bc2 ^ (bc4 &^ bc3) + a[18] = bc3 ^ (bc0 &^ bc4) + a[19] = bc4 ^ (bc1 &^ bc0) + + t = a[20] ^ d0 + bc3 = bits.RotateLeft64(t, 41) + t = a[21] ^ d1 + bc4 = bits.RotateLeft64(t, 2) + t = a[22] ^ d2 + bc0 = bits.RotateLeft64(t, 62) + t = a[23] ^ d3 + bc1 = bits.RotateLeft64(t, 55) + t = a[24] ^ d4 + bc2 = bits.RotateLeft64(t, 39) + a[20] = bc0 ^ (bc2 &^ bc1) + a[21] = bc1 ^ (bc3 &^ bc2) + a[22] = bc2 ^ (bc4 &^ bc3) + a[23] = bc3 ^ (bc0 &^ bc4) + a[24] = bc4 ^ (bc1 &^ bc0) + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf_amd64.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf_amd64.go new file mode 100644 index 00000000000..248a38241ff --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf_amd64.go @@ -0,0 +1,14 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build amd64 && !purego && gc +// +build amd64,!purego,gc + +package sha3 + +// This function is implemented in keccakf_amd64.s. 
+ +//go:noescape + +func keccakF1600(a *[25]uint64) diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf_amd64.s b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf_amd64.s new file mode 100644 index 00000000000..4cfa54383bf --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/keccakf_amd64.s @@ -0,0 +1,391 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build amd64 && !purego && gc +// +build amd64,!purego,gc + +// This code was translated into a form compatible with 6a from the public +// domain sources at https://github.com/gvanas/KeccakCodePackage + +// Offsets in state +#define _ba (0*8) +#define _be (1*8) +#define _bi (2*8) +#define _bo (3*8) +#define _bu (4*8) +#define _ga (5*8) +#define _ge (6*8) +#define _gi (7*8) +#define _go (8*8) +#define _gu (9*8) +#define _ka (10*8) +#define _ke (11*8) +#define _ki (12*8) +#define _ko (13*8) +#define _ku (14*8) +#define _ma (15*8) +#define _me (16*8) +#define _mi (17*8) +#define _mo (18*8) +#define _mu (19*8) +#define _sa (20*8) +#define _se (21*8) +#define _si (22*8) +#define _so (23*8) +#define _su (24*8) + +// Temporary registers +#define rT1 AX + +// Round vars +#define rpState DI +#define rpStack SP + +#define rDa BX +#define rDe CX +#define rDi DX +#define rDo R8 +#define rDu R9 + +#define rBa R10 +#define rBe R11 +#define rBi R12 +#define rBo R13 +#define rBu R14 + +#define rCa SI +#define rCe BP +#define rCi rBi +#define rCo rBo +#define rCu R15 + +#define MOVQ_RBI_RCE MOVQ rBi, rCe +#define XORQ_RT1_RCA XORQ rT1, rCa +#define XORQ_RT1_RCE XORQ rT1, rCe +#define XORQ_RBA_RCU XORQ rBa, rCu +#define XORQ_RBE_RCU XORQ rBe, rCu +#define XORQ_RDU_RCU XORQ rDu, rCu +#define XORQ_RDA_RCA XORQ rDa, rCa +#define XORQ_RDE_RCE XORQ rDe, rCe + +#define mKeccakRound(iState, oState, rc, B_RBI_RCE, G_RT1_RCA, G_RT1_RCE, G_RBA_RCU, K_RT1_RCA, K_RT1_RCE, K_RBA_RCU, 
M_RT1_RCA, M_RT1_RCE, M_RBE_RCU, S_RDU_RCU, S_RDA_RCA, S_RDE_RCE) \ + /* Prepare round */ \ + MOVQ rCe, rDa; \ + ROLQ $1, rDa; \ + \ + MOVQ _bi(iState), rCi; \ + XORQ _gi(iState), rDi; \ + XORQ rCu, rDa; \ + XORQ _ki(iState), rCi; \ + XORQ _mi(iState), rDi; \ + XORQ rDi, rCi; \ + \ + MOVQ rCi, rDe; \ + ROLQ $1, rDe; \ + \ + MOVQ _bo(iState), rCo; \ + XORQ _go(iState), rDo; \ + XORQ rCa, rDe; \ + XORQ _ko(iState), rCo; \ + XORQ _mo(iState), rDo; \ + XORQ rDo, rCo; \ + \ + MOVQ rCo, rDi; \ + ROLQ $1, rDi; \ + \ + MOVQ rCu, rDo; \ + XORQ rCe, rDi; \ + ROLQ $1, rDo; \ + \ + MOVQ rCa, rDu; \ + XORQ rCi, rDo; \ + ROLQ $1, rDu; \ + \ + /* Result b */ \ + MOVQ _ba(iState), rBa; \ + MOVQ _ge(iState), rBe; \ + XORQ rCo, rDu; \ + MOVQ _ki(iState), rBi; \ + MOVQ _mo(iState), rBo; \ + MOVQ _su(iState), rBu; \ + XORQ rDe, rBe; \ + ROLQ $44, rBe; \ + XORQ rDi, rBi; \ + XORQ rDa, rBa; \ + ROLQ $43, rBi; \ + \ + MOVQ rBe, rCa; \ + MOVQ rc, rT1; \ + ORQ rBi, rCa; \ + XORQ rBa, rT1; \ + XORQ rT1, rCa; \ + MOVQ rCa, _ba(oState); \ + \ + XORQ rDu, rBu; \ + ROLQ $14, rBu; \ + MOVQ rBa, rCu; \ + ANDQ rBe, rCu; \ + XORQ rBu, rCu; \ + MOVQ rCu, _bu(oState); \ + \ + XORQ rDo, rBo; \ + ROLQ $21, rBo; \ + MOVQ rBo, rT1; \ + ANDQ rBu, rT1; \ + XORQ rBi, rT1; \ + MOVQ rT1, _bi(oState); \ + \ + NOTQ rBi; \ + ORQ rBa, rBu; \ + ORQ rBo, rBi; \ + XORQ rBo, rBu; \ + XORQ rBe, rBi; \ + MOVQ rBu, _bo(oState); \ + MOVQ rBi, _be(oState); \ + B_RBI_RCE; \ + \ + /* Result g */ \ + MOVQ _gu(iState), rBe; \ + XORQ rDu, rBe; \ + MOVQ _ka(iState), rBi; \ + ROLQ $20, rBe; \ + XORQ rDa, rBi; \ + ROLQ $3, rBi; \ + MOVQ _bo(iState), rBa; \ + MOVQ rBe, rT1; \ + ORQ rBi, rT1; \ + XORQ rDo, rBa; \ + MOVQ _me(iState), rBo; \ + MOVQ _si(iState), rBu; \ + ROLQ $28, rBa; \ + XORQ rBa, rT1; \ + MOVQ rT1, _ga(oState); \ + G_RT1_RCA; \ + \ + XORQ rDe, rBo; \ + ROLQ $45, rBo; \ + MOVQ rBi, rT1; \ + ANDQ rBo, rT1; \ + XORQ rBe, rT1; \ + MOVQ rT1, _ge(oState); \ + G_RT1_RCE; \ + \ + XORQ rDi, rBu; \ + ROLQ $61, rBu; \ + MOVQ 
rBu, rT1; \ + ORQ rBa, rT1; \ + XORQ rBo, rT1; \ + MOVQ rT1, _go(oState); \ + \ + ANDQ rBe, rBa; \ + XORQ rBu, rBa; \ + MOVQ rBa, _gu(oState); \ + NOTQ rBu; \ + G_RBA_RCU; \ + \ + ORQ rBu, rBo; \ + XORQ rBi, rBo; \ + MOVQ rBo, _gi(oState); \ + \ + /* Result k */ \ + MOVQ _be(iState), rBa; \ + MOVQ _gi(iState), rBe; \ + MOVQ _ko(iState), rBi; \ + MOVQ _mu(iState), rBo; \ + MOVQ _sa(iState), rBu; \ + XORQ rDi, rBe; \ + ROLQ $6, rBe; \ + XORQ rDo, rBi; \ + ROLQ $25, rBi; \ + MOVQ rBe, rT1; \ + ORQ rBi, rT1; \ + XORQ rDe, rBa; \ + ROLQ $1, rBa; \ + XORQ rBa, rT1; \ + MOVQ rT1, _ka(oState); \ + K_RT1_RCA; \ + \ + XORQ rDu, rBo; \ + ROLQ $8, rBo; \ + MOVQ rBi, rT1; \ + ANDQ rBo, rT1; \ + XORQ rBe, rT1; \ + MOVQ rT1, _ke(oState); \ + K_RT1_RCE; \ + \ + XORQ rDa, rBu; \ + ROLQ $18, rBu; \ + NOTQ rBo; \ + MOVQ rBo, rT1; \ + ANDQ rBu, rT1; \ + XORQ rBi, rT1; \ + MOVQ rT1, _ki(oState); \ + \ + MOVQ rBu, rT1; \ + ORQ rBa, rT1; \ + XORQ rBo, rT1; \ + MOVQ rT1, _ko(oState); \ + \ + ANDQ rBe, rBa; \ + XORQ rBu, rBa; \ + MOVQ rBa, _ku(oState); \ + K_RBA_RCU; \ + \ + /* Result m */ \ + MOVQ _ga(iState), rBe; \ + XORQ rDa, rBe; \ + MOVQ _ke(iState), rBi; \ + ROLQ $36, rBe; \ + XORQ rDe, rBi; \ + MOVQ _bu(iState), rBa; \ + ROLQ $10, rBi; \ + MOVQ rBe, rT1; \ + MOVQ _mi(iState), rBo; \ + ANDQ rBi, rT1; \ + XORQ rDu, rBa; \ + MOVQ _so(iState), rBu; \ + ROLQ $27, rBa; \ + XORQ rBa, rT1; \ + MOVQ rT1, _ma(oState); \ + M_RT1_RCA; \ + \ + XORQ rDi, rBo; \ + ROLQ $15, rBo; \ + MOVQ rBi, rT1; \ + ORQ rBo, rT1; \ + XORQ rBe, rT1; \ + MOVQ rT1, _me(oState); \ + M_RT1_RCE; \ + \ + XORQ rDo, rBu; \ + ROLQ $56, rBu; \ + NOTQ rBo; \ + MOVQ rBo, rT1; \ + ORQ rBu, rT1; \ + XORQ rBi, rT1; \ + MOVQ rT1, _mi(oState); \ + \ + ORQ rBa, rBe; \ + XORQ rBu, rBe; \ + MOVQ rBe, _mu(oState); \ + \ + ANDQ rBa, rBu; \ + XORQ rBo, rBu; \ + MOVQ rBu, _mo(oState); \ + M_RBE_RCU; \ + \ + /* Result s */ \ + MOVQ _bi(iState), rBa; \ + MOVQ _go(iState), rBe; \ + MOVQ _ku(iState), rBi; \ + XORQ rDi, rBa; \ + MOVQ 
_ma(iState), rBo; \ + ROLQ $62, rBa; \ + XORQ rDo, rBe; \ + MOVQ _se(iState), rBu; \ + ROLQ $55, rBe; \ + \ + XORQ rDu, rBi; \ + MOVQ rBa, rDu; \ + XORQ rDe, rBu; \ + ROLQ $2, rBu; \ + ANDQ rBe, rDu; \ + XORQ rBu, rDu; \ + MOVQ rDu, _su(oState); \ + \ + ROLQ $39, rBi; \ + S_RDU_RCU; \ + NOTQ rBe; \ + XORQ rDa, rBo; \ + MOVQ rBe, rDa; \ + ANDQ rBi, rDa; \ + XORQ rBa, rDa; \ + MOVQ rDa, _sa(oState); \ + S_RDA_RCA; \ + \ + ROLQ $41, rBo; \ + MOVQ rBi, rDe; \ + ORQ rBo, rDe; \ + XORQ rBe, rDe; \ + MOVQ rDe, _se(oState); \ + S_RDE_RCE; \ + \ + MOVQ rBo, rDi; \ + MOVQ rBu, rDo; \ + ANDQ rBu, rDi; \ + ORQ rBa, rDo; \ + XORQ rBi, rDi; \ + XORQ rBo, rDo; \ + MOVQ rDi, _si(oState); \ + MOVQ rDo, _so(oState) \ + +// func keccakF1600(state *[25]uint64) +TEXT ·keccakF1600(SB), 0, $200-8 + MOVQ state+0(FP), rpState + + // Convert the user state into an internal state + NOTQ _be(rpState) + NOTQ _bi(rpState) + NOTQ _go(rpState) + NOTQ _ki(rpState) + NOTQ _mi(rpState) + NOTQ _sa(rpState) + + // Execute the KeccakF permutation + MOVQ _ba(rpState), rCa + MOVQ _be(rpState), rCe + MOVQ _bu(rpState), rCu + + XORQ _ga(rpState), rCa + XORQ _ge(rpState), rCe + XORQ _gu(rpState), rCu + + XORQ _ka(rpState), rCa + XORQ _ke(rpState), rCe + XORQ _ku(rpState), rCu + + XORQ _ma(rpState), rCa + XORQ _me(rpState), rCe + XORQ _mu(rpState), rCu + + XORQ _sa(rpState), rCa + XORQ _se(rpState), rCe + MOVQ _si(rpState), rDi + MOVQ _so(rpState), rDo + XORQ _su(rpState), rCu + + mKeccakRound(rpState, rpStack, $0x0000000000000001, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x0000000000008082, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x800000000000808a, MOVQ_RBI_RCE, 
XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x8000000080008000, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x000000000000808b, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x0000000080000001, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x8000000080008081, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x8000000000008009, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x000000000000008a, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x0000000000000088, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x0000000080008009, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, 
XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x000000008000000a, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x000000008000808b, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x800000000000008b, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x8000000000008089, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x8000000000008003, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x8000000000008002, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x8000000000000080, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x000000000000800a, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x800000008000000a, 
MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x8000000080008081, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x8000000000008080, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpState, rpStack, $0x0000000080000001, MOVQ_RBI_RCE, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBA_RCU, XORQ_RT1_RCA, XORQ_RT1_RCE, XORQ_RBE_RCU, XORQ_RDU_RCU, XORQ_RDA_RCA, XORQ_RDE_RCE) + mKeccakRound(rpStack, rpState, $0x8000000080008008, NOP, NOP, NOP, NOP, NOP, NOP, NOP, NOP, NOP, NOP, NOP, NOP, NOP) + + // Revert the internal state to the user state + NOTQ _be(rpState) + NOTQ _bi(rpState) + NOTQ _go(rpState) + NOTQ _ki(rpState) + NOTQ _mi(rpState) + NOTQ _sa(rpState) + + RET diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/register.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/register.go new file mode 100644 index 00000000000..8b4453aac3c --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/register.go @@ -0,0 +1,19 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build go1.4 +// +build go1.4 + +package sha3 + +import ( + "crypto" +) + +func init() { + crypto.RegisterHash(crypto.SHA3_224, New224) + crypto.RegisterHash(crypto.SHA3_256, New256) + crypto.RegisterHash(crypto.SHA3_384, New384) + crypto.RegisterHash(crypto.SHA3_512, New512) +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3.go new file mode 100644 index 00000000000..fa182beb40b --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3.go @@ -0,0 +1,193 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +// spongeDirection indicates the direction bytes are flowing through the sponge. +type spongeDirection int + +const ( + // spongeAbsorbing indicates that the sponge is absorbing input. + spongeAbsorbing spongeDirection = iota + // spongeSqueezing indicates that the sponge is being squeezed. + spongeSqueezing +) + +const ( + // maxRate is the maximum size of the internal buffer. SHAKE-256 + // currently needs the largest buffer. + maxRate = 168 +) + +type state struct { + // Generic sponge components. + a [25]uint64 // main state of the hash + buf []byte // points into storage + rate int // the number of bytes of state to use + + // dsbyte contains the "domain separation" bits and the first bit of + // the padding. Sections 6.1 and 6.2 of [1] separate the outputs of the + // SHA-3 and SHAKE functions by appending bitstrings to the message. + // Using a little-endian bit-ordering convention, these are "01" for SHA-3 + // and "1111" for SHAKE, or 00000010b and 00001111b, respectively. Then the + // padding rule from section 5.1 is applied to pad the message to a multiple + // of the rate, which involves adding a "1" bit, zero or more "0" bits, and + // a final "1" bit. 
We merge the first "1" bit from the padding into dsbyte, + // giving 00000110b (0x06) and 00011111b (0x1f). + // [1] http://csrc.nist.gov/publications/drafts/fips-202/fips_202_draft.pdf + // "Draft FIPS 202: SHA-3 Standard: Permutation-Based Hash and + // Extendable-Output Functions (May 2014)" + dsbyte byte + + storage storageBuf + + // Specific to SHA-3 and SHAKE. + outputLen int // the default output size in bytes + state spongeDirection // whether the sponge is absorbing or squeezing +} + +// BlockSize returns the rate of sponge underlying this hash function. +func (d *state) BlockSize() int { return d.rate } + +// Size returns the output size of the hash function in bytes. +func (d *state) Size() int { return d.outputLen } + +// Reset clears the internal state by zeroing the sponge state and +// the byte buffer, and setting Sponge.state to absorbing. +func (d *state) Reset() { + // Zero the permutation's state. + for i := range d.a { + d.a[i] = 0 + } + d.state = spongeAbsorbing + d.buf = d.storage.asBytes()[:0] +} + +func (d *state) clone() *state { + ret := *d + if ret.state == spongeAbsorbing { + ret.buf = ret.storage.asBytes()[:len(ret.buf)] + } else { + ret.buf = ret.storage.asBytes()[d.rate-cap(d.buf) : d.rate] + } + + return &ret +} + +// permute applies the KeccakF-1600 permutation. It handles +// any input-output buffering. +func (d *state) permute() { + switch d.state { + case spongeAbsorbing: + // If we're absorbing, we need to xor the input into the state + // before applying the permutation. + xorIn(d, d.buf) + d.buf = d.storage.asBytes()[:0] + keccakF1600(&d.a) + case spongeSqueezing: + // If we're squeezing, we need to apply the permutation before + // copying more output. + keccakF1600(&d.a) + d.buf = d.storage.asBytes()[:d.rate] + copyOut(d, d.buf) + } +} + +// pads appends the domain separation bits in dsbyte, applies +// the multi-bitrate 10..1 padding rule, and permutes the state. 
+func (d *state) padAndPermute(dsbyte byte) { + if d.buf == nil { + d.buf = d.storage.asBytes()[:0] + } + // Pad with this instance's domain-separator bits. We know that there's + // at least one byte of space in d.buf because, if it were full, + // permute would have been called to empty it. dsbyte also contains the + // first one bit for the padding. See the comment in the state struct. + d.buf = append(d.buf, dsbyte) + zerosStart := len(d.buf) + d.buf = d.storage.asBytes()[:d.rate] + for i := zerosStart; i < d.rate; i++ { + d.buf[i] = 0 + } + // This adds the final one bit for the padding. Because of the way that + // bits are numbered from the LSB upwards, the final bit is the MSB of + // the last byte. + d.buf[d.rate-1] ^= 0x80 + // Apply the permutation + d.permute() + d.state = spongeSqueezing + d.buf = d.storage.asBytes()[:d.rate] + copyOut(d, d.buf) +} + +// Write absorbs more data into the hash's state. It panics if more +// data is written after output has been read. +func (d *state) Write(p []byte) (written int, err error) { + if d.state != spongeAbsorbing { + panic("sha3: write to sponge after read") + } + if d.buf == nil { + d.buf = d.storage.asBytes()[:0] + } + written = len(p) + + for len(p) > 0 { + if len(d.buf) == 0 && len(p) >= d.rate { + // The fast path; absorb a full "rate" bytes of input and apply the permutation. + xorIn(d, p[:d.rate]) + p = p[d.rate:] + keccakF1600(&d.a) + } else { + // The slow path; buffer the input until we can fill the sponge, and then xor it in. + todo := d.rate - len(d.buf) + if todo > len(p) { + todo = len(p) + } + d.buf = append(d.buf, p[:todo]...) + p = p[todo:] + + // If the sponge is full, apply the permutation. + if len(d.buf) == d.rate { + d.permute() + } + } + } + + return +} + +// Read squeezes an arbitrary number of bytes from the sponge. +func (d *state) Read(out []byte) (n int, err error) { + // If we're still absorbing, pad and apply the permutation.
+ if d.state == spongeAbsorbing { + d.padAndPermute(d.dsbyte) + } + + n = len(out) + + // Now, do the squeezing. + for len(out) > 0 { + n := copy(out, d.buf) + d.buf = d.buf[n:] + out = out[n:] + + // Apply the permutation if we've squeezed the sponge dry. + if len(d.buf) == 0 { + d.permute() + } + } + + return +} + +// Sum applies padding to the hash state and then squeezes out the desired +// number of output bytes. +func (d *state) Sum(in []byte) []byte { + // Make a copy of the original hash so that caller can keep writing + // and summing. + dup := d.clone() + hash := make([]byte, dup.outputLen) + dup.Read(hash) + return append(in, hash...) +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3_s390x.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3_s390x.go new file mode 100644 index 00000000000..63a3edb4cea --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3_s390x.go @@ -0,0 +1,287 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gc && !purego +// +build gc,!purego + +package sha3 + +// This file contains code for using the 'compute intermediate +// message digest' (KIMD) and 'compute last message digest' (KLMD) +// instructions to compute SHA-3 and SHAKE hashes on IBM Z. + +import ( + "hash" + + "golang.org/x/sys/cpu" +) + +// codes represent 7-bit KIMD/KLMD function codes as defined in +// the Principles of Operation. +type code uint64 + +const ( + // function codes for KIMD/KLMD + sha3_224 code = 32 + sha3_256 = 33 + sha3_384 = 34 + sha3_512 = 35 + shake_128 = 36 + shake_256 = 37 + nopad = 0x100 +) + +// kimd is a wrapper for the 'compute intermediate message digest' instruction. +// src must be a multiple of the rate for the given function code. 
+// +//go:noescape +func kimd(function code, chain *[200]byte, src []byte) + +// klmd is a wrapper for the 'compute last message digest' instruction. +// src padding is handled by the instruction. +// +//go:noescape +func klmd(function code, chain *[200]byte, dst, src []byte) + +type asmState struct { + a [200]byte // 1600 bit state + buf []byte // care must be taken to ensure cap(buf) is a multiple of rate + rate int // equivalent to block size + storage [3072]byte // underlying storage for buf + outputLen int // output length if fixed, 0 if not + function code // KIMD/KLMD function code + state spongeDirection // whether the sponge is absorbing or squeezing +} + +func newAsmState(function code) *asmState { + var s asmState + s.function = function + switch function { + case sha3_224: + s.rate = 144 + s.outputLen = 28 + case sha3_256: + s.rate = 136 + s.outputLen = 32 + case sha3_384: + s.rate = 104 + s.outputLen = 48 + case sha3_512: + s.rate = 72 + s.outputLen = 64 + case shake_128: + s.rate = 168 + case shake_256: + s.rate = 136 + default: + panic("sha3: unrecognized function code") + } + + // limit s.buf size to a multiple of s.rate + s.resetBuf() + return &s +} + +func (s *asmState) clone() *asmState { + c := *s + c.buf = c.storage[:len(s.buf):cap(s.buf)] + return &c +} + +// copyIntoBuf copies b into buf. It will panic if there is not enough space to +// store all of b. +func (s *asmState) copyIntoBuf(b []byte) { + bufLen := len(s.buf) + s.buf = s.buf[:len(s.buf)+len(b)] + copy(s.buf[bufLen:], b) +} + +// resetBuf points buf at storage, sets the length to 0 and sets cap to be a +// multiple of the rate. +func (s *asmState) resetBuf() { + max := (cap(s.storage) / s.rate) * s.rate + s.buf = s.storage[:0:max] +} + +// Write (via the embedded io.Writer interface) adds more data to the running hash. +// It never returns an error. 
+func (s *asmState) Write(b []byte) (int, error) { + if s.state != spongeAbsorbing { + panic("sha3: write to sponge after read") + } + length := len(b) + for len(b) > 0 { + if len(s.buf) == 0 && len(b) >= cap(s.buf) { + // Hash the data directly and push any remaining bytes + // into the buffer. + remainder := len(b) % s.rate + kimd(s.function, &s.a, b[:len(b)-remainder]) + if remainder != 0 { + s.copyIntoBuf(b[len(b)-remainder:]) + } + return length, nil + } + + if len(s.buf) == cap(s.buf) { + // flush the buffer + kimd(s.function, &s.a, s.buf) + s.buf = s.buf[:0] + } + + // copy as much as we can into the buffer + n := len(b) + if len(b) > cap(s.buf)-len(s.buf) { + n = cap(s.buf) - len(s.buf) + } + s.copyIntoBuf(b[:n]) + b = b[n:] + } + return length, nil +} + +// Read squeezes an arbitrary number of bytes from the sponge. +func (s *asmState) Read(out []byte) (n int, err error) { + n = len(out) + + // need to pad if we were absorbing + if s.state == spongeAbsorbing { + s.state = spongeSqueezing + + // write hash directly into out if possible + if len(out)%s.rate == 0 { + klmd(s.function, &s.a, out, s.buf) // len(out) may be 0 + s.buf = s.buf[:0] + return + } + + // write hash into buffer + max := cap(s.buf) + if max > len(out) { + max = (len(out)/s.rate)*s.rate + s.rate + } + klmd(s.function, &s.a, s.buf[:max], s.buf) + s.buf = s.buf[:max] + } + + for len(out) > 0 { + // flush the buffer + if len(s.buf) != 0 { + c := copy(out, s.buf) + out = out[c:] + s.buf = s.buf[c:] + continue + } + + // write hash directly into out if possible + if len(out)%s.rate == 0 { + klmd(s.function|nopad, &s.a, out, nil) + return + } + + // write hash into buffer + s.resetBuf() + if cap(s.buf) > len(out) { + s.buf = s.buf[:(len(out)/s.rate)*s.rate+s.rate] + } + klmd(s.function|nopad, &s.a, s.buf, nil) + } + return +} + +// Sum appends the current hash to b and returns the resulting slice. +// It does not change the underlying hash state. 
+func (s *asmState) Sum(b []byte) []byte { + if s.outputLen == 0 { + panic("sha3: cannot call Sum on SHAKE functions") + } + + // Copy the state to preserve the original. + a := s.a + + // Hash the buffer. Note that we don't clear it because we + // aren't updating the state. + klmd(s.function, &a, nil, s.buf) + return append(b, a[:s.outputLen]...) +} + +// Reset resets the Hash to its initial state. +func (s *asmState) Reset() { + for i := range s.a { + s.a[i] = 0 + } + s.resetBuf() + s.state = spongeAbsorbing +} + +// Size returns the number of bytes Sum will return. +func (s *asmState) Size() int { + return s.outputLen +} + +// BlockSize returns the hash's underlying block size. +// The Write method must be able to accept any amount +// of data, but it may operate more efficiently if all writes +// are a multiple of the block size. +func (s *asmState) BlockSize() int { + return s.rate +} + +// Clone returns a copy of the ShakeHash in its current state. +func (s *asmState) Clone() ShakeHash { + return s.clone() +} + +// new224Asm returns an assembly implementation of SHA3-224 if available, +// otherwise it returns nil. +func new224Asm() hash.Hash { + if cpu.S390X.HasSHA3 { + return newAsmState(sha3_224) + } + return nil +} + +// new256Asm returns an assembly implementation of SHA3-256 if available, +// otherwise it returns nil. +func new256Asm() hash.Hash { + if cpu.S390X.HasSHA3 { + return newAsmState(sha3_256) + } + return nil +} + +// new384Asm returns an assembly implementation of SHA3-384 if available, +// otherwise it returns nil. +func new384Asm() hash.Hash { + if cpu.S390X.HasSHA3 { + return newAsmState(sha3_384) + } + return nil +} + +// new512Asm returns an assembly implementation of SHA3-512 if available, +// otherwise it returns nil. +func new512Asm() hash.Hash { + if cpu.S390X.HasSHA3 { + return newAsmState(sha3_512) + } + return nil +} + +// newShake128Asm returns an assembly implementation of SHAKE-128 if available, +// otherwise it returns nil. 
+func newShake128Asm() ShakeHash { + if cpu.S390X.HasSHA3 { + return newAsmState(shake_128) + } + return nil +} + +// newShake256Asm returns an assembly implementation of SHAKE-256 if available, +// otherwise it returns nil. +func newShake256Asm() ShakeHash { + if cpu.S390X.HasSHA3 { + return newAsmState(shake_256) + } + return nil +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3_s390x.s b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3_s390x.s new file mode 100644 index 00000000000..a0e051b0451 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/sha3_s390x.s @@ -0,0 +1,34 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gc && !purego +// +build gc,!purego + +#include "textflag.h" + +// func kimd(function code, chain *[200]byte, src []byte) +TEXT ·kimd(SB), NOFRAME|NOSPLIT, $0-40 + MOVD function+0(FP), R0 + MOVD chain+8(FP), R1 + LMG src+16(FP), R2, R3 // R2=base, R3=len + +continue: + WORD $0xB93E0002 // KIMD --, R2 + BVS continue // continue if interrupted + MOVD $0, R0 // reset R0 for pre-go1.8 compilers + RET + +// func klmd(function code, chain *[200]byte, dst, src []byte) +TEXT ·klmd(SB), NOFRAME|NOSPLIT, $0-64 + // TODO: SHAKE support + MOVD function+0(FP), R0 + MOVD chain+8(FP), R1 + LMG dst+16(FP), R2, R3 // R2=base, R3=len + LMG src+40(FP), R4, R5 // R4=base, R5=len + +continue: + WORD $0xB93F0024 // KLMD R2, R4 + BVS continue // continue if interrupted + MOVD $0, R0 // reset R0 for pre-go1.8 compilers + RET diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/shake.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/shake.go new file mode 100644 index 00000000000..d7be2954ab2 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/shake.go @@ -0,0 +1,173 @@ +// Copyright 2014 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package sha3
+
+// This file defines the ShakeHash interface, and provides
+// functions for creating SHAKE and cSHAKE instances, as well as utility
+// functions for hashing bytes to arbitrary-length output.
+//
+// The SHAKE implementation is based on FIPS PUB 202 [1].
+// The cSHAKE implementation is based on NIST SP 800-185 [2].
+//
+// [1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
+// [2] https://doi.org/10.6028/NIST.SP.800-185
+
+import (
+	"encoding/binary"
+	"io"
+)
+
+// ShakeHash defines the interface to hash functions that
+// support arbitrary-length output.
+type ShakeHash interface {
+	// Write absorbs more data into the hash's state. It panics if input is
+	// written to it after output has been read from it.
+	io.Writer
+
+	// Read reads more output from the hash; reading affects the hash's
+	// state. (ShakeHash.Read is thus very different from Hash.Sum)
+	// It never returns an error.
+	io.Reader
+
+	// Clone returns a copy of the ShakeHash in its current state.
+	Clone() ShakeHash
+
+	// Reset resets the ShakeHash to its initial state.
+	Reset()
+}
+
+// cSHAKE specific context
+type cshakeState struct {
+	*state // SHA-3 state context and Read/Write operations
+
+	// initBlock is the cSHAKE specific initialization set of bytes. It is initialized
+	// by newCShake function and stores concatenation of N followed by S, encoded
+	// by the method specified in 3.3 of [1].
+	// It is stored here in order for Reset() to be able to put context into
+	// initial state.
+	initBlock []byte
+}
+
+// Consts for configuring initial SHA-3 state
+const (
+	dsbyteShake  = 0x1f
+	dsbyteCShake = 0x04
+	rate128      = 168
+	rate256      = 136
+)
+
+func bytepad(input []byte, w int) []byte {
+	// leftEncode always returns max 9 bytes
+	buf := make([]byte, 0, 9+len(input)+w)
+	buf = append(buf, leftEncode(uint64(w))...)
+	buf = append(buf, input...)
+ padlen := w - (len(buf) % w) + return append(buf, make([]byte, padlen)...) +} + +func leftEncode(value uint64) []byte { + var b [9]byte + binary.BigEndian.PutUint64(b[1:], value) + // Trim all but last leading zero bytes + i := byte(1) + for i < 8 && b[i] == 0 { + i++ + } + // Prepend number of encoded bytes + b[i-1] = 9 - i + return b[i-1:] +} + +func newCShake(N, S []byte, rate int, dsbyte byte) ShakeHash { + c := cshakeState{state: &state{rate: rate, dsbyte: dsbyte}} + + // leftEncode returns max 9 bytes + c.initBlock = make([]byte, 0, 9*2+len(N)+len(S)) + c.initBlock = append(c.initBlock, leftEncode(uint64(len(N)*8))...) + c.initBlock = append(c.initBlock, N...) + c.initBlock = append(c.initBlock, leftEncode(uint64(len(S)*8))...) + c.initBlock = append(c.initBlock, S...) + c.Write(bytepad(c.initBlock, c.rate)) + return &c +} + +// Reset resets the hash to initial state. +func (c *cshakeState) Reset() { + c.state.Reset() + c.Write(bytepad(c.initBlock, c.rate)) +} + +// Clone returns copy of a cSHAKE context within its current state. +func (c *cshakeState) Clone() ShakeHash { + b := make([]byte, len(c.initBlock)) + copy(b, c.initBlock) + return &cshakeState{state: c.clone(), initBlock: b} +} + +// Clone returns copy of SHAKE context within its current state. +func (c *state) Clone() ShakeHash { + return c.clone() +} + +// NewShake128 creates a new SHAKE128 variable-output-length ShakeHash. +// Its generic security strength is 128 bits against all attacks if at +// least 32 bytes of its output are used. +func NewShake128() ShakeHash { + if h := newShake128Asm(); h != nil { + return h + } + return &state{rate: rate128, dsbyte: dsbyteShake} +} + +// NewShake256 creates a new SHAKE256 variable-output-length ShakeHash. +// Its generic security strength is 256 bits against all attacks if +// at least 64 bytes of its output are used. 
+func NewShake256() ShakeHash { + if h := newShake256Asm(); h != nil { + return h + } + return &state{rate: rate256, dsbyte: dsbyteShake} +} + +// NewCShake128 creates a new instance of cSHAKE128 variable-output-length ShakeHash, +// a customizable variant of SHAKE128. +// N is used to define functions based on cSHAKE, it can be empty when plain cSHAKE is +// desired. S is a customization byte string used for domain separation - two cSHAKE +// computations on same input with different S yield unrelated outputs. +// When N and S are both empty, this is equivalent to NewShake128. +func NewCShake128(N, S []byte) ShakeHash { + if len(N) == 0 && len(S) == 0 { + return NewShake128() + } + return newCShake(N, S, rate128, dsbyteCShake) +} + +// NewCShake256 creates a new instance of cSHAKE256 variable-output-length ShakeHash, +// a customizable variant of SHAKE256. +// N is used to define functions based on cSHAKE, it can be empty when plain cSHAKE is +// desired. S is a customization byte string used for domain separation - two cSHAKE +// computations on same input with different S yield unrelated outputs. +// When N and S are both empty, this is equivalent to NewShake256. +func NewCShake256(N, S []byte) ShakeHash { + if len(N) == 0 && len(S) == 0 { + return NewShake256() + } + return newCShake(N, S, rate256, dsbyteCShake) +} + +// ShakeSum128 writes an arbitrary-length digest of data into hash. +func ShakeSum128(hash, data []byte) { + h := NewShake128() + h.Write(data) + h.Read(hash) +} + +// ShakeSum256 writes an arbitrary-length digest of data into hash. 
+func ShakeSum256(hash, data []byte) { + h := NewShake256() + h.Write(data) + h.Read(hash) +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/shake_generic.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/shake_generic.go new file mode 100644 index 00000000000..5c0710ef98f --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/shake_generic.go @@ -0,0 +1,20 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !gc || purego || !s390x +// +build !gc purego !s390x + +package sha3 + +// newShake128Asm returns an assembly implementation of SHAKE-128 if available, +// otherwise it returns nil. +func newShake128Asm() ShakeHash { + return nil +} + +// newShake256Asm returns an assembly implementation of SHAKE-256 if available, +// otherwise it returns nil. +func newShake256Asm() ShakeHash { + return nil +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor.go new file mode 100644 index 00000000000..59c8eb94186 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor.go @@ -0,0 +1,24 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (!amd64 && !386 && !ppc64le) || purego +// +build !amd64,!386,!ppc64le purego + +package sha3 + +// A storageBuf is an aligned array of maxRate bytes. 
+type storageBuf [maxRate]byte + +func (b *storageBuf) asBytes() *[maxRate]byte { + return (*[maxRate]byte)(b) +} + +var ( + xorIn = xorInGeneric + copyOut = copyOutGeneric + xorInUnaligned = xorInGeneric + copyOutUnaligned = copyOutGeneric +) + +const xorImplementationUnaligned = "generic" diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor_generic.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor_generic.go new file mode 100644 index 00000000000..8d947711272 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor_generic.go @@ -0,0 +1,28 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package sha3 + +import "encoding/binary" + +// xorInGeneric xors the bytes in buf into the state; it +// makes no non-portable assumptions about memory layout +// or alignment. +func xorInGeneric(d *state, buf []byte) { + n := len(buf) / 8 + + for i := 0; i < n; i++ { + a := binary.LittleEndian.Uint64(buf) + d.a[i] ^= a + buf = buf[8:] + } +} + +// copyOutGeneric copies uint64s to a byte buffer. +func copyOutGeneric(d *state, b []byte) { + for i := 0; len(b) >= 8; i++ { + binary.LittleEndian.PutUint64(b, d.a[i]) + b = b[8:] + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor_unaligned.go b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor_unaligned.go new file mode 100644 index 00000000000..1ce606246d5 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/crypto/sha3/xor_unaligned.go @@ -0,0 +1,68 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (amd64 || 386 || ppc64le) && !purego +// +build amd64 386 ppc64le +// +build !purego + +package sha3 + +import "unsafe" + +// A storageBuf is an aligned array of maxRate bytes. 
+type storageBuf [maxRate / 8]uint64 + +func (b *storageBuf) asBytes() *[maxRate]byte { + return (*[maxRate]byte)(unsafe.Pointer(b)) +} + +// xorInUnaligned uses unaligned reads and writes to update d.a to contain d.a +// XOR buf. +func xorInUnaligned(d *state, buf []byte) { + n := len(buf) + bw := (*[maxRate / 8]uint64)(unsafe.Pointer(&buf[0]))[: n/8 : n/8] + if n >= 72 { + d.a[0] ^= bw[0] + d.a[1] ^= bw[1] + d.a[2] ^= bw[2] + d.a[3] ^= bw[3] + d.a[4] ^= bw[4] + d.a[5] ^= bw[5] + d.a[6] ^= bw[6] + d.a[7] ^= bw[7] + d.a[8] ^= bw[8] + } + if n >= 104 { + d.a[9] ^= bw[9] + d.a[10] ^= bw[10] + d.a[11] ^= bw[11] + d.a[12] ^= bw[12] + } + if n >= 136 { + d.a[13] ^= bw[13] + d.a[14] ^= bw[14] + d.a[15] ^= bw[15] + d.a[16] ^= bw[16] + } + if n >= 144 { + d.a[17] ^= bw[17] + } + if n >= 168 { + d.a[18] ^= bw[18] + d.a[19] ^= bw[19] + d.a[20] ^= bw[20] + } +} + +func copyOutUnaligned(d *state, buf []byte) { + ab := (*[maxRate]uint8)(unsafe.Pointer(&d.a[0])) + copy(buf, ab[:]) +} + +var ( + xorIn = xorInUnaligned + copyOut = copyOutUnaligned +) + +const xorImplementationUnaligned = "unaligned" diff --git a/.ci/providerlint/vendor/golang.org/x/net/http2/server.go b/.ci/providerlint/vendor/golang.org/x/net/http2/server.go index cd057f39824..033b6e6db64 100644 --- a/.ci/providerlint/vendor/golang.org/x/net/http2/server.go +++ b/.ci/providerlint/vendor/golang.org/x/net/http2/server.go @@ -441,7 +441,7 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) { if s.NewWriteScheduler != nil { sc.writeSched = s.NewWriteScheduler() } else { - sc.writeSched = NewPriorityWriteScheduler(nil) + sc.writeSched = newRoundRobinWriteScheduler() } // These start at the RFC-specified defaults. 
If there is a higher @@ -2429,7 +2429,7 @@ type requestBody struct { conn *serverConn closeOnce sync.Once // for use by Close only sawEOF bool // for use by Read only - pipe *pipe // non-nil if we have a HTTP entity message body + pipe *pipe // non-nil if we have an HTTP entity message body needsContinue bool // need to send a 100-continue } @@ -2569,7 +2569,8 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) { clen = "" } } - if clen == "" && rws.handlerDone && bodyAllowedForStatus(rws.status) && (len(p) > 0 || !isHeadResp) { + _, hasContentLength := rws.snapHeader["Content-Length"] + if !hasContentLength && clen == "" && rws.handlerDone && bodyAllowedForStatus(rws.status) && (len(p) > 0 || !isHeadResp) { clen = strconv.Itoa(len(p)) } _, hasContentType := rws.snapHeader["Content-Type"] @@ -2774,7 +2775,7 @@ func (w *responseWriter) FlushError() error { err = rws.bw.Flush() } else { // The bufio.Writer won't call chunkWriter.Write - // (writeChunk with zero bytes, so we have to do it + // (writeChunk with zero bytes), so we have to do it // ourselves to force the HTTP response header and/or // final DATA frame (with END_STREAM) to be sent. _, err = chunkWriter{rws}.Write(nil) diff --git a/.ci/providerlint/vendor/golang.org/x/net/http2/transport.go b/.ci/providerlint/vendor/golang.org/x/net/http2/transport.go index f965579f7d5..4f08ccba9ab 100644 --- a/.ci/providerlint/vendor/golang.org/x/net/http2/transport.go +++ b/.ci/providerlint/vendor/golang.org/x/net/http2/transport.go @@ -1266,6 +1266,44 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) { return res, nil } + cancelRequest := func(cs *clientStream, err error) error { + cs.cc.mu.Lock() + cs.abortStreamLocked(err) + bodyClosed := cs.reqBodyClosed + if cs.ID != 0 { + // This request may have failed because of a problem with the connection, + // or for some unrelated reason. 
(For example, the user might have canceled + // the request without waiting for a response.) Mark the connection as + // not reusable, since trying to reuse a dead connection is worse than + // unnecessarily creating a new one. + // + // If cs.ID is 0, then the request was never allocated a stream ID and + // whatever went wrong was unrelated to the connection. We might have + // timed out waiting for a stream slot when StrictMaxConcurrentStreams + // is set, for example, in which case retrying on a different connection + // will not help. + cs.cc.doNotReuse = true + } + cs.cc.mu.Unlock() + // Wait for the request body to be closed. + // + // If nothing closed the body before now, abortStreamLocked + // will have started a goroutine to close it. + // + // Closing the body before returning avoids a race condition + // with net/http checking its readTrackingBody to see if the + // body was read from or closed. See golang/go#60041. + // + // The body is closed in a separate goroutine without the + // connection mutex held, but dropping the mutex before waiting + // will keep us from holding it indefinitely if the body + // close is slow for some reason. 
+ if bodyClosed != nil { + <-bodyClosed + } + return err + } + for { select { case <-cs.respHeaderRecv: @@ -1280,15 +1318,12 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) { return handleResponseHeaders() default: waitDone() - return nil, cs.abortErr + return nil, cancelRequest(cs, cs.abortErr) } case <-ctx.Done(): - err := ctx.Err() - cs.abortStream(err) - return nil, err + return nil, cancelRequest(cs, ctx.Err()) case <-cs.reqCancel: - cs.abortStream(errRequestCanceled) - return nil, errRequestCanceled + return nil, cancelRequest(cs, errRequestCanceled) } } } @@ -1881,7 +1916,7 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail // 8.1.2.3 Request Pseudo-Header Fields // The :path pseudo-header field includes the path and query parts of the // target URI (the path-absolute production and optionally a '?' character - // followed by the query production (see Sections 3.3 and 3.4 of + // followed by the query production, see Sections 3.3 and 3.4 of // [RFC3986]). f(":authority", host) m := req.Method diff --git a/.ci/providerlint/vendor/golang.org/x/net/http2/writesched.go b/.ci/providerlint/vendor/golang.org/x/net/http2/writesched.go index c7cd0017392..cc893adc29a 100644 --- a/.ci/providerlint/vendor/golang.org/x/net/http2/writesched.go +++ b/.ci/providerlint/vendor/golang.org/x/net/http2/writesched.go @@ -184,7 +184,8 @@ func (wr *FrameWriteRequest) replyToWriter(err error) { // writeQueue is used by implementations of WriteScheduler. 
 type writeQueue struct {
-	s []FrameWriteRequest
+	s          []FrameWriteRequest
+	prev, next *writeQueue
 }
 
 func (q *writeQueue) empty() bool { return len(q.s) == 0 }
diff --git a/.ci/providerlint/vendor/golang.org/x/net/http2/writesched_roundrobin.go b/.ci/providerlint/vendor/golang.org/x/net/http2/writesched_roundrobin.go
new file mode 100644
index 00000000000..54fe86322d2
--- /dev/null
+++ b/.ci/providerlint/vendor/golang.org/x/net/http2/writesched_roundrobin.go
@@ -0,0 +1,119 @@
+// Copyright 2023 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package http2
+
+import (
+	"fmt"
+	"math"
+)
+
+type roundRobinWriteScheduler struct {
+	// control contains control frames (SETTINGS, PING, etc.).
+	control writeQueue
+
+	// streams maps stream ID to a queue.
+	streams map[uint32]*writeQueue
+
+	// stream queues are stored in a circular linked list.
+	// head is the next stream to write, or nil if there are no streams open.
+	head *writeQueue
+
+	// pool of empty queues for reuse.
+	queuePool writeQueuePool
+}
+
+// newRoundRobinWriteScheduler constructs a new write scheduler.
+// The round-robin scheduler prioritizes control frames
+// like SETTINGS and PING over DATA frames.
+// When there are no control frames to send, it performs a round-robin
+// selection from the ready streams.
+func newRoundRobinWriteScheduler() WriteScheduler {
+	ws := &roundRobinWriteScheduler{
+		streams: make(map[uint32]*writeQueue),
+	}
+	return ws
+}
+
+func (ws *roundRobinWriteScheduler) OpenStream(streamID uint32, options OpenStreamOptions) {
+	if ws.streams[streamID] != nil {
+		panic(fmt.Errorf("stream %d already opened", streamID))
+	}
+	q := ws.queuePool.get()
+	ws.streams[streamID] = q
+	if ws.head == nil {
+		ws.head = q
+		q.next = q
+		q.prev = q
+	} else {
+		// Queues are stored in a ring.
+		// Insert the new stream before ws.head, putting it at the end of the list.
+ q.prev = ws.head.prev + q.next = ws.head + q.prev.next = q + q.next.prev = q + } +} + +func (ws *roundRobinWriteScheduler) CloseStream(streamID uint32) { + q := ws.streams[streamID] + if q == nil { + return + } + if q.next == q { + // This was the only open stream. + ws.head = nil + } else { + q.prev.next = q.next + q.next.prev = q.prev + if ws.head == q { + ws.head = q.next + } + } + delete(ws.streams, streamID) + ws.queuePool.put(q) +} + +func (ws *roundRobinWriteScheduler) AdjustStream(streamID uint32, priority PriorityParam) {} + +func (ws *roundRobinWriteScheduler) Push(wr FrameWriteRequest) { + if wr.isControl() { + ws.control.push(wr) + return + } + q := ws.streams[wr.StreamID()] + if q == nil { + // This is a closed stream. + // wr should not be a HEADERS or DATA frame. + // We push the request onto the control queue. + if wr.DataSize() > 0 { + panic("add DATA on non-open stream") + } + ws.control.push(wr) + return + } + q.push(wr) +} + +func (ws *roundRobinWriteScheduler) Pop() (FrameWriteRequest, bool) { + // Control and RST_STREAM frames first. + if !ws.control.empty() { + return ws.control.shift(), true + } + if ws.head == nil { + return FrameWriteRequest{}, false + } + q := ws.head + for { + if wr, ok := q.consume(math.MaxInt32); ok { + ws.head = q.next + return wr, true + } + q = q.next + if q == ws.head { + break + } + } + return FrameWriteRequest{}, false +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/asm_aix_ppc64.s b/.ci/providerlint/vendor/golang.org/x/sys/cpu/asm_aix_ppc64.s new file mode 100644 index 00000000000..db9171c2e49 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/asm_aix_ppc64.s @@ -0,0 +1,18 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
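The `OpenStream`/`CloseStream` pointer surgery in the new round-robin scheduler is the standard circular doubly-linked list insert/remove. A standalone sketch of the same ring operations (illustrative names, not part of the diff):

```go
package main

import "fmt"

// node stands in for the scheduler's per-stream writeQueue.
type node struct {
	id         uint32
	prev, next *node
}

type ring struct{ head *node }

// insert places n just before head, i.e. at the end of the rotation order.
func (r *ring) insert(n *node) {
	if r.head == nil {
		r.head = n
		n.prev, n.next = n, n
		return
	}
	n.prev = r.head.prev
	n.next = r.head
	n.prev.next = n
	n.next.prev = n
}

// remove unlinks n, advancing head if n was the next node to serve.
func (r *ring) remove(n *node) {
	if n.next == n {
		// n was the only node in the ring.
		r.head = nil
		return
	}
	n.prev.next = n.next
	n.next.prev = n.prev
	if r.head == n {
		r.head = n.next
	}
}

func main() {
	var r ring
	a, b, c := &node{id: 1}, &node{id: 3}, &node{id: 5}
	r.insert(a)
	r.insert(b)
	r.insert(c)
	r.remove(b)
	// Walk the ring once from head: prints 1 then 5.
	for n, first := r.head, true; first || n != r.head; n, first = n.next, false {
		fmt.Println(n.id)
	}
}
```

Keeping the ring doubly linked makes both insert and remove O(1), which is why the scheduler can open and close streams without scanning its queue list.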
+ +//go:build gc +// +build gc + +#include "textflag.h" + +// +// System calls for ppc64, AIX are implemented in runtime/syscall_aix.go +// + +TEXT ·syscall6(SB),NOSPLIT,$0-88 + JMP syscall·syscall6(SB) + +TEXT ·rawSyscall6(SB),NOSPLIT,$0-88 + JMP syscall·rawSyscall6(SB) diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/byteorder.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/byteorder.go new file mode 100644 index 00000000000..271055be0b1 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/byteorder.go @@ -0,0 +1,66 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +import ( + "runtime" +) + +// byteOrder is a subset of encoding/binary.ByteOrder. +type byteOrder interface { + Uint32([]byte) uint32 + Uint64([]byte) uint64 +} + +type littleEndian struct{} +type bigEndian struct{} + +func (littleEndian) Uint32(b []byte) uint32 { + _ = b[3] // bounds check hint to compiler; see golang.org/issue/14808 + return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24 +} + +func (littleEndian) Uint64(b []byte) uint64 { + _ = b[7] // bounds check hint to compiler; see golang.org/issue/14808 + return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | + uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 +} + +func (bigEndian) Uint32(b []byte) uint32 { + _ = b[3] // bounds check hint to compiler; see golang.org/issue/14808 + return uint32(b[3]) | uint32(b[2])<<8 | uint32(b[1])<<16 | uint32(b[0])<<24 +} + +func (bigEndian) Uint64(b []byte) uint64 { + _ = b[7] // bounds check hint to compiler; see golang.org/issue/14808 + return uint64(b[7]) | uint64(b[6])<<8 | uint64(b[5])<<16 | uint64(b[4])<<24 | + uint64(b[3])<<32 | uint64(b[2])<<40 | uint64(b[1])<<48 | uint64(b[0])<<56 +} + +// hostByteOrder returns littleEndian on little-endian machines and +// bigEndian on 
big-endian machines. +func hostByteOrder() byteOrder { + switch runtime.GOARCH { + case "386", "amd64", "amd64p32", + "alpha", + "arm", "arm64", + "loong64", + "mipsle", "mips64le", "mips64p32le", + "nios2", + "ppc64le", + "riscv", "riscv64", + "sh": + return littleEndian{} + case "armbe", "arm64be", + "m68k", + "mips", "mips64", "mips64p32", + "ppc", "ppc64", + "s390", "s390x", + "shbe", + "sparc", "sparc64": + return bigEndian{} + } + panic("unknown architecture") +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu.go new file mode 100644 index 00000000000..83f112c4c80 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu.go @@ -0,0 +1,287 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package cpu implements processor feature detection for +// various CPU architectures. +package cpu + +import ( + "os" + "strings" +) + +// Initialized reports whether the CPU features were initialized. +// +// For some GOOS/GOARCH combinations initialization of the CPU features depends +// on reading an operating specific file, e.g. /proc/self/auxv on linux/arm +// Initialized will report false if reading the file fails. +var Initialized bool + +// CacheLinePad is used to pad structs to avoid false sharing. +type CacheLinePad struct{ _ [cacheLineSize]byte } + +// X86 contains the supported CPU features of the +// current X86/AMD64 platform. If the current platform +// is not X86/AMD64 then all feature flags are false. +// +// X86 is padded to avoid false sharing. Further the HasAVX +// and HasAVX2 are only set if the OS supports XMM and YMM +// registers in addition to the CPUID feature bit being set. 
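The `littleEndian`/`bigEndian` readers in the vendored `byteorder.go` decode fixed-width integers with plain shifts, so they are portable across hosts. A standalone sketch of the little-endian 32-bit case (the body mirrors the vendored method; not part of the diff):

```go
package main

import "fmt"

// leUint32 assembles a uint32 from 4 little-endian bytes.
func leUint32(b []byte) uint32 {
	_ = b[3] // bounds check hint to compiler; see golang.org/issue/14808
	return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
}

func main() {
	// Least significant byte first.
	fmt.Printf("%#x\n", leUint32([]byte{0x78, 0x56, 0x34, 0x12})) // 0x12345678
}
```

The single `_ = b[3]` read up front lets the compiler prove all four accesses are in bounds and elide the later checks.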
+var X86 struct { + _ CacheLinePad + HasAES bool // AES hardware implementation (AES NI) + HasADX bool // Multi-precision add-carry instruction extensions + HasAVX bool // Advanced vector extension + HasAVX2 bool // Advanced vector extension 2 + HasAVX512 bool // Advanced vector extension 512 + HasAVX512F bool // Advanced vector extension 512 Foundation Instructions + HasAVX512CD bool // Advanced vector extension 512 Conflict Detection Instructions + HasAVX512ER bool // Advanced vector extension 512 Exponential and Reciprocal Instructions + HasAVX512PF bool // Advanced vector extension 512 Prefetch Instructions Instructions + HasAVX512VL bool // Advanced vector extension 512 Vector Length Extensions + HasAVX512BW bool // Advanced vector extension 512 Byte and Word Instructions + HasAVX512DQ bool // Advanced vector extension 512 Doubleword and Quadword Instructions + HasAVX512IFMA bool // Advanced vector extension 512 Integer Fused Multiply Add + HasAVX512VBMI bool // Advanced vector extension 512 Vector Byte Manipulation Instructions + HasAVX5124VNNIW bool // Advanced vector extension 512 Vector Neural Network Instructions Word variable precision + HasAVX5124FMAPS bool // Advanced vector extension 512 Fused Multiply Accumulation Packed Single precision + HasAVX512VPOPCNTDQ bool // Advanced vector extension 512 Double and quad word population count instructions + HasAVX512VPCLMULQDQ bool // Advanced vector extension 512 Vector carry-less multiply operations + HasAVX512VNNI bool // Advanced vector extension 512 Vector Neural Network Instructions + HasAVX512GFNI bool // Advanced vector extension 512 Galois field New Instructions + HasAVX512VAES bool // Advanced vector extension 512 Vector AES instructions + HasAVX512VBMI2 bool // Advanced vector extension 512 Vector Byte Manipulation Instructions 2 + HasAVX512BITALG bool // Advanced vector extension 512 Bit Algorithms + HasAVX512BF16 bool // Advanced vector extension 512 BFloat16 Instructions + HasBMI1 bool // Bit 
manipulation instruction set 1 + HasBMI2 bool // Bit manipulation instruction set 2 + HasCX16 bool // Compare and exchange 16 Bytes + HasERMS bool // Enhanced REP for MOVSB and STOSB + HasFMA bool // Fused-multiply-add instructions + HasOSXSAVE bool // OS supports XSAVE/XRESTOR for saving/restoring XMM registers. + HasPCLMULQDQ bool // PCLMULQDQ instruction - most often used for AES-GCM + HasPOPCNT bool // Hamming weight instruction POPCNT. + HasRDRAND bool // RDRAND instruction (on-chip random number generator) + HasRDSEED bool // RDSEED instruction (on-chip random number generator) + HasSSE2 bool // Streaming SIMD extension 2 (always available on amd64) + HasSSE3 bool // Streaming SIMD extension 3 + HasSSSE3 bool // Supplemental streaming SIMD extension 3 + HasSSE41 bool // Streaming SIMD extension 4 and 4.1 + HasSSE42 bool // Streaming SIMD extension 4 and 4.2 + _ CacheLinePad +} + +// ARM64 contains the supported CPU features of the +// current ARMv8(aarch64) platform. If the current platform +// is not arm64 then all feature flags are false. 
+var ARM64 struct { + _ CacheLinePad + HasFP bool // Floating-point instruction set (always available) + HasASIMD bool // Advanced SIMD (always available) + HasEVTSTRM bool // Event stream support + HasAES bool // AES hardware implementation + HasPMULL bool // Polynomial multiplication instruction set + HasSHA1 bool // SHA1 hardware implementation + HasSHA2 bool // SHA2 hardware implementation + HasCRC32 bool // CRC32 hardware implementation + HasATOMICS bool // Atomic memory operation instruction set + HasFPHP bool // Half precision floating-point instruction set + HasASIMDHP bool // Advanced SIMD half precision instruction set + HasCPUID bool // CPUID identification scheme registers + HasASIMDRDM bool // Rounding double multiply add/subtract instruction set + HasJSCVT bool // Javascript conversion from floating-point to integer + HasFCMA bool // Floating-point multiplication and addition of complex numbers + HasLRCPC bool // Release Consistent processor consistent support + HasDCPOP bool // Persistent memory support + HasSHA3 bool // SHA3 hardware implementation + HasSM3 bool // SM3 hardware implementation + HasSM4 bool // SM4 hardware implementation + HasASIMDDP bool // Advanced SIMD double precision instruction set + HasSHA512 bool // SHA512 hardware implementation + HasSVE bool // Scalable Vector Extensions + HasASIMDFHM bool // Advanced SIMD multiplication FP16 to FP32 + _ CacheLinePad +} + +// ARM contains the supported CPU features of the current ARM (32-bit) platform. +// All feature flags are false if: +// 1. the current platform is not arm, or +// 2. the current operating system is not Linux. 
+var ARM struct { + _ CacheLinePad + HasSWP bool // SWP instruction support + HasHALF bool // Half-word load and store support + HasTHUMB bool // ARM Thumb instruction set + Has26BIT bool // Address space limited to 26-bits + HasFASTMUL bool // 32-bit operand, 64-bit result multiplication support + HasFPA bool // Floating point arithmetic support + HasVFP bool // Vector floating point support + HasEDSP bool // DSP Extensions support + HasJAVA bool // Java instruction set + HasIWMMXT bool // Intel Wireless MMX technology support + HasCRUNCH bool // MaverickCrunch context switching and handling + HasTHUMBEE bool // Thumb EE instruction set + HasNEON bool // NEON instruction set + HasVFPv3 bool // Vector floating point version 3 support + HasVFPv3D16 bool // Vector floating point version 3 D8-D15 + HasTLS bool // Thread local storage support + HasVFPv4 bool // Vector floating point version 4 support + HasIDIVA bool // Integer divide instruction support in ARM mode + HasIDIVT bool // Integer divide instruction support in Thumb mode + HasVFPD32 bool // Vector floating point version 3 D16-D31 + HasLPAE bool // Large Physical Address Extensions + HasEVTSTRM bool // Event stream support + HasAES bool // AES hardware implementation + HasPMULL bool // Polynomial multiplication instruction set + HasSHA1 bool // SHA1 hardware implementation + HasSHA2 bool // SHA2 hardware implementation + HasCRC32 bool // CRC32 hardware implementation + _ CacheLinePad +} + +// MIPS64X contains the supported CPU features of the current mips64/mips64le +// platforms. If the current platform is not mips64/mips64le or the current +// operating system is not Linux then all feature flags are false. +var MIPS64X struct { + _ CacheLinePad + HasMSA bool // MIPS SIMD architecture + _ CacheLinePad +} + +// PPC64 contains the supported CPU features of the current ppc64/ppc64le platforms. +// If the current platform is not ppc64/ppc64le then all feature flags are false.
+// +// For ppc64/ppc64le, it is safe to check only for ISA level starting on ISA v3.00, +// since there are no optional categories. There are some exceptions that also +// require kernel support to work (DARN, SCV), so there are feature bits for +// those as well. The struct is padded to avoid false sharing. +var PPC64 struct { + _ CacheLinePad + HasDARN bool // Hardware random number generator (requires kernel enablement) + HasSCV bool // Syscall vectored (requires kernel enablement) + IsPOWER8 bool // ISA v2.07 (POWER8) + IsPOWER9 bool // ISA v3.00 (POWER9), implies IsPOWER8 + _ CacheLinePad +} + +// S390X contains the supported CPU features of the current IBM Z +// (s390x) platform. If the current platform is not IBM Z then all +// feature flags are false. +// +// S390X is padded to avoid false sharing. Further HasVX is only set +// if the OS supports vector registers in addition to the STFLE +// feature bit being set. +var S390X struct { + _ CacheLinePad + HasZARCH bool // z/Architecture mode is active [mandatory] + HasSTFLE bool // store facility list extended + HasLDISP bool // long (20-bit) displacements + HasEIMM bool // 32-bit immediates + HasDFP bool // decimal floating point + HasETF3EH bool // ETF-3 enhanced + HasMSA bool // message security assist (CPACF) + HasAES bool // KM-AES{128,192,256} functions + HasAESCBC bool // KMC-AES{128,192,256} functions + HasAESCTR bool // KMCTR-AES{128,192,256} functions + HasAESGCM bool // KMA-GCM-AES{128,192,256} functions + HasGHASH bool // KIMD-GHASH function + HasSHA1 bool // K{I,L}MD-SHA-1 functions + HasSHA256 bool // K{I,L}MD-SHA-256 functions + HasSHA512 bool // K{I,L}MD-SHA-512 functions + HasSHA3 bool // K{I,L}MD-SHA3-{224,256,384,512} and K{I,L}MD-SHAKE-{128,256} functions + HasVX bool // vector facility + HasVXE bool // vector-enhancements facility 1 + _ CacheLinePad +} + +func init() { + archInit() + initOptions() + processOptions() +} + +// options contains the cpu debug options that can be used in 
GODEBUG. +// Options are arch dependent and are added by the arch specific initOptions functions. +// Features that are mandatory for the specific GOARCH should have the Required field set +// (e.g. SSE2 on amd64). +var options []option + +// Option names should be lower case. e.g. avx instead of AVX. +type option struct { + Name string + Feature *bool + Specified bool // whether feature value was specified in GODEBUG + Enable bool // whether feature should be enabled + Required bool // whether feature is mandatory and can not be disabled +} + +func processOptions() { + env := os.Getenv("GODEBUG") +field: + for env != "" { + field := "" + i := strings.IndexByte(env, ',') + if i < 0 { + field, env = env, "" + } else { + field, env = env[:i], env[i+1:] + } + if len(field) < 4 || field[:4] != "cpu." { + continue + } + i = strings.IndexByte(field, '=') + if i < 0 { + print("GODEBUG sys/cpu: no value specified for \"", field, "\"\n") + continue + } + key, value := field[4:i], field[i+1:] // e.g. 
"SSE2", "on" + + var enable bool + switch value { + case "on": + enable = true + case "off": + enable = false + default: + print("GODEBUG sys/cpu: value \"", value, "\" not supported for cpu option \"", key, "\"\n") + continue field + } + + if key == "all" { + for i := range options { + options[i].Specified = true + options[i].Enable = enable || options[i].Required + } + continue field + } + + for i := range options { + if options[i].Name == key { + options[i].Specified = true + options[i].Enable = enable + continue field + } + } + + print("GODEBUG sys/cpu: unknown cpu feature \"", key, "\"\n") + } + + for _, o := range options { + if !o.Specified { + continue + } + + if o.Enable && !*o.Feature { + print("GODEBUG sys/cpu: can not enable \"", o.Name, "\", missing CPU support\n") + continue + } + + if !o.Enable && o.Required { + print("GODEBUG sys/cpu: can not disable \"", o.Name, "\", required CPU feature\n") + continue + } + + *o.Feature = o.Enable + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_aix.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_aix.go new file mode 100644 index 00000000000..8aaeef545a7 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_aix.go @@ -0,0 +1,34 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build aix +// +build aix + +package cpu + +const ( + // getsystemcfg constants + _SC_IMPL = 2 + _IMPL_POWER8 = 0x10000 + _IMPL_POWER9 = 0x20000 +) + +func archInit() { + impl := getsystemcfg(_SC_IMPL) + if impl&_IMPL_POWER8 != 0 { + PPC64.IsPOWER8 = true + } + if impl&_IMPL_POWER9 != 0 { + PPC64.IsPOWER8 = true + PPC64.IsPOWER9 = true + } + + Initialized = true +} + +func getsystemcfg(label int) (n uint64) { + r0, _ := callgetsystemcfg(label) + n = uint64(r0) + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm.go new file mode 100644 index 00000000000..301b752e9c5 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm.go @@ -0,0 +1,73 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +const cacheLineSize = 32 + +// HWCAP/HWCAP2 bits. +// These are specific to Linux. 
+const ( + hwcap_SWP = 1 << 0 + hwcap_HALF = 1 << 1 + hwcap_THUMB = 1 << 2 + hwcap_26BIT = 1 << 3 + hwcap_FAST_MULT = 1 << 4 + hwcap_FPA = 1 << 5 + hwcap_VFP = 1 << 6 + hwcap_EDSP = 1 << 7 + hwcap_JAVA = 1 << 8 + hwcap_IWMMXT = 1 << 9 + hwcap_CRUNCH = 1 << 10 + hwcap_THUMBEE = 1 << 11 + hwcap_NEON = 1 << 12 + hwcap_VFPv3 = 1 << 13 + hwcap_VFPv3D16 = 1 << 14 + hwcap_TLS = 1 << 15 + hwcap_VFPv4 = 1 << 16 + hwcap_IDIVA = 1 << 17 + hwcap_IDIVT = 1 << 18 + hwcap_VFPD32 = 1 << 19 + hwcap_LPAE = 1 << 20 + hwcap_EVTSTRM = 1 << 21 + + hwcap2_AES = 1 << 0 + hwcap2_PMULL = 1 << 1 + hwcap2_SHA1 = 1 << 2 + hwcap2_SHA2 = 1 << 3 + hwcap2_CRC32 = 1 << 4 +) + +func initOptions() { + options = []option{ + {Name: "pmull", Feature: &ARM.HasPMULL}, + {Name: "sha1", Feature: &ARM.HasSHA1}, + {Name: "sha2", Feature: &ARM.HasSHA2}, + {Name: "swp", Feature: &ARM.HasSWP}, + {Name: "thumb", Feature: &ARM.HasTHUMB}, + {Name: "thumbee", Feature: &ARM.HasTHUMBEE}, + {Name: "tls", Feature: &ARM.HasTLS}, + {Name: "vfp", Feature: &ARM.HasVFP}, + {Name: "vfpd32", Feature: &ARM.HasVFPD32}, + {Name: "vfpv3", Feature: &ARM.HasVFPv3}, + {Name: "vfpv3d16", Feature: &ARM.HasVFPv3D16}, + {Name: "vfpv4", Feature: &ARM.HasVFPv4}, + {Name: "half", Feature: &ARM.HasHALF}, + {Name: "26bit", Feature: &ARM.Has26BIT}, + {Name: "fastmul", Feature: &ARM.HasFASTMUL}, + {Name: "fpa", Feature: &ARM.HasFPA}, + {Name: "edsp", Feature: &ARM.HasEDSP}, + {Name: "java", Feature: &ARM.HasJAVA}, + {Name: "iwmmxt", Feature: &ARM.HasIWMMXT}, + {Name: "crunch", Feature: &ARM.HasCRUNCH}, + {Name: "neon", Feature: &ARM.HasNEON}, + {Name: "idivt", Feature: &ARM.HasIDIVT}, + {Name: "idiva", Feature: &ARM.HasIDIVA}, + {Name: "lpae", Feature: &ARM.HasLPAE}, + {Name: "evtstrm", Feature: &ARM.HasEVTSTRM}, + {Name: "aes", Feature: &ARM.HasAES}, + {Name: "crc32", Feature: &ARM.HasCRC32}, + } + +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm64.go new file mode 
100644 index 00000000000..f3eb993bf24 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm64.go @@ -0,0 +1,172 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +import "runtime" + +// cacheLineSize is used to prevent false sharing of cache lines. +// We choose 128 because Apple Silicon, a.k.a. M1, has 128-byte cache line size. +// It doesn't cost much and is much more future-proof. +const cacheLineSize = 128 + +func initOptions() { + options = []option{ + {Name: "fp", Feature: &ARM64.HasFP}, + {Name: "asimd", Feature: &ARM64.HasASIMD}, + {Name: "evstrm", Feature: &ARM64.HasEVTSTRM}, + {Name: "aes", Feature: &ARM64.HasAES}, + {Name: "fphp", Feature: &ARM64.HasFPHP}, + {Name: "jscvt", Feature: &ARM64.HasJSCVT}, + {Name: "lrcpc", Feature: &ARM64.HasLRCPC}, + {Name: "pmull", Feature: &ARM64.HasPMULL}, + {Name: "sha1", Feature: &ARM64.HasSHA1}, + {Name: "sha2", Feature: &ARM64.HasSHA2}, + {Name: "sha3", Feature: &ARM64.HasSHA3}, + {Name: "sha512", Feature: &ARM64.HasSHA512}, + {Name: "sm3", Feature: &ARM64.HasSM3}, + {Name: "sm4", Feature: &ARM64.HasSM4}, + {Name: "sve", Feature: &ARM64.HasSVE}, + {Name: "crc32", Feature: &ARM64.HasCRC32}, + {Name: "atomics", Feature: &ARM64.HasATOMICS}, + {Name: "asimdhp", Feature: &ARM64.HasASIMDHP}, + {Name: "cpuid", Feature: &ARM64.HasCPUID}, + {Name: "asimrdm", Feature: &ARM64.HasASIMDRDM}, + {Name: "fcma", Feature: &ARM64.HasFCMA}, + {Name: "dcpop", Feature: &ARM64.HasDCPOP}, + {Name: "asimddp", Feature: &ARM64.HasASIMDDP}, + {Name: "asimdfhm", Feature: &ARM64.HasASIMDFHM}, + } +} + +func archInit() { + switch runtime.GOOS { + case "freebsd": + readARM64Registers() + case "linux", "netbsd", "openbsd": + doinit() + default: + // Many platforms don't seem to allow reading these registers. 
+ setMinimalFeatures() + } +} + +// setMinimalFeatures fakes the minimal ARM64 features expected by +// TestARM64minimalFeatures. +func setMinimalFeatures() { + ARM64.HasASIMD = true + ARM64.HasFP = true +} + +func readARM64Registers() { + Initialized = true + + parseARM64SystemRegisters(getisar0(), getisar1(), getpfr0()) +} + +func parseARM64SystemRegisters(isar0, isar1, pfr0 uint64) { + // ID_AA64ISAR0_EL1 + switch extractBits(isar0, 4, 7) { + case 1: + ARM64.HasAES = true + case 2: + ARM64.HasAES = true + ARM64.HasPMULL = true + } + + switch extractBits(isar0, 8, 11) { + case 1: + ARM64.HasSHA1 = true + } + + switch extractBits(isar0, 12, 15) { + case 1: + ARM64.HasSHA2 = true + case 2: + ARM64.HasSHA2 = true + ARM64.HasSHA512 = true + } + + switch extractBits(isar0, 16, 19) { + case 1: + ARM64.HasCRC32 = true + } + + switch extractBits(isar0, 20, 23) { + case 2: + ARM64.HasATOMICS = true + } + + switch extractBits(isar0, 28, 31) { + case 1: + ARM64.HasASIMDRDM = true + } + + switch extractBits(isar0, 32, 35) { + case 1: + ARM64.HasSHA3 = true + } + + switch extractBits(isar0, 36, 39) { + case 1: + ARM64.HasSM3 = true + } + + switch extractBits(isar0, 40, 43) { + case 1: + ARM64.HasSM4 = true + } + + switch extractBits(isar0, 44, 47) { + case 1: + ARM64.HasASIMDDP = true + } + + // ID_AA64ISAR1_EL1 + switch extractBits(isar1, 0, 3) { + case 1: + ARM64.HasDCPOP = true + } + + switch extractBits(isar1, 12, 15) { + case 1: + ARM64.HasJSCVT = true + } + + switch extractBits(isar1, 16, 19) { + case 1: + ARM64.HasFCMA = true + } + + switch extractBits(isar1, 20, 23) { + case 1: + ARM64.HasLRCPC = true + } + + // ID_AA64PFR0_EL1 + switch extractBits(pfr0, 16, 19) { + case 0: + ARM64.HasFP = true + case 1: + ARM64.HasFP = true + ARM64.HasFPHP = true + } + + switch extractBits(pfr0, 20, 23) { + case 0: + ARM64.HasASIMD = true + case 1: + ARM64.HasASIMD = true + ARM64.HasASIMDHP = true + } + + switch extractBits(pfr0, 32, 35) { + case 1: + ARM64.HasSVE = true + } +} + 
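The register parsing above decodes 4-bit ID-register fields with the `extractBits` helper defined just below. As a standalone illustration (the sample register value here is hypothetical, not read from real hardware), the same field extraction can be exercised like this:

```go
package main

import "fmt"

// extractBits is the same helper defined in cpu_arm64.go: it returns the
// value of the inclusive bit field [start, end] of data.
func extractBits(data uint64, start, end uint) uint {
	return (uint)(data>>start) & ((1 << (end - start + 1)) - 1)
}

func main() {
	// Hypothetical ID_AA64ISAR0_EL1 value: AES field (bits 4-7) set to 2
	// and SHA2 field (bits 12-15) set to 1. Per parseARM64SystemRegisters,
	// an AES field of 2 implies both HasAES and HasPMULL.
	var isar0 uint64 = 0x2<<4 | 0x1<<12

	fmt.Println(extractBits(isar0, 4, 7))   // AES field: 2
	fmt.Println(extractBits(isar0, 12, 15)) // SHA2 field: 1
}
```

This is why the switch statements above match on small integers: each ID register packs one feature per 4-bit field, with larger field values indicating supersets (e.g. SHA2 field value 2 adds SHA512).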
+func extractBits(data uint64, start, end uint) uint { + return (uint)(data>>start) & ((1 << (end - start + 1)) - 1) +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm64.s b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm64.s new file mode 100644 index 00000000000..c61f95a05a7 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_arm64.s @@ -0,0 +1,32 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gc +// +build gc + +#include "textflag.h" + +// func getisar0() uint64 +TEXT ·getisar0(SB),NOSPLIT,$0-8 + // get Instruction Set Attributes 0 into x0 + // mrs x0, ID_AA64ISAR0_EL1 = d5380600 + WORD $0xd5380600 + MOVD R0, ret+0(FP) + RET + +// func getisar1() uint64 +TEXT ·getisar1(SB),NOSPLIT,$0-8 + // get Instruction Set Attributes 1 into x0 + // mrs x0, ID_AA64ISAR1_EL1 = d5380620 + WORD $0xd5380620 + MOVD R0, ret+0(FP) + RET + +// func getpfr0() uint64 +TEXT ·getpfr0(SB),NOSPLIT,$0-8 + // get Processor Feature Register 0 into x0 + // mrs x0, ID_AA64PFR0_EL1 = d5380400 + WORD $0xd5380400 + MOVD R0, ret+0(FP) + RET diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_arm64.go new file mode 100644 index 00000000000..ccf542a73da --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_arm64.go @@ -0,0 +1,12 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build gc +// +build gc + +package cpu + +func getisar0() uint64 +func getisar1() uint64 +func getpfr0() uint64 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_s390x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_s390x.go new file mode 100644 index 00000000000..0af2f248412 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_s390x.go @@ -0,0 +1,22 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gc +// +build gc + +package cpu + +// haveAsmFunctions reports whether the other functions in this file can +// be safely called. +func haveAsmFunctions() bool { return true } + +// The following feature detection functions are defined in cpu_s390x.s. +// They are likely to be expensive to call so the results should be cached. +func stfle() facilityList +func kmQuery() queryResult +func kmcQuery() queryResult +func kmctrQuery() queryResult +func kmaQuery() queryResult +func kimdQuery() queryResult +func klmdQuery() queryResult diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_x86.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_x86.go new file mode 100644 index 00000000000..fa7cdb9bcd5 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gc_x86.go @@ -0,0 +1,17 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (386 || amd64 || amd64p32) && gc +// +build 386 amd64 amd64p32 +// +build gc + +package cpu + +// cpuid is implemented in cpu_x86.s for gc compiler +// and in cpu_gccgo.c for gccgo. +func cpuid(eaxArg, ecxArg uint32) (eax, ebx, ecx, edx uint32) + +// xgetbv with ecx = 0 is implemented in cpu_x86.s for gc compiler +// and in cpu_gccgo.c for gccgo. 
+func xgetbv() (eax, edx uint32) diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_arm64.go new file mode 100644 index 00000000000..2aff3189116 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_arm64.go @@ -0,0 +1,12 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gccgo +// +build gccgo + +package cpu + +func getisar0() uint64 { return 0 } +func getisar1() uint64 { return 0 } +func getpfr0() uint64 { return 0 } diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_s390x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_s390x.go new file mode 100644 index 00000000000..4bfbda61993 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_s390x.go @@ -0,0 +1,23 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gccgo +// +build gccgo + +package cpu + +// haveAsmFunctions reports whether the other functions in this file can +// be safely called. +func haveAsmFunctions() bool { return false } + +// TODO(mundaym): the following feature detection functions are currently +// stubs. See https://golang.org/cl/162887 for how to fix this. +// They are likely to be expensive to call so the results should be cached. 
+func stfle() facilityList { panic("not implemented for gccgo") } +func kmQuery() queryResult { panic("not implemented for gccgo") } +func kmcQuery() queryResult { panic("not implemented for gccgo") } +func kmctrQuery() queryResult { panic("not implemented for gccgo") } +func kmaQuery() queryResult { panic("not implemented for gccgo") } +func kimdQuery() queryResult { panic("not implemented for gccgo") } +func klmdQuery() queryResult { panic("not implemented for gccgo") } diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_x86.c b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_x86.c new file mode 100644 index 00000000000..6cc73109f59 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_x86.c @@ -0,0 +1,39 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (386 || amd64 || amd64p32) && gccgo +// +build 386 amd64 amd64p32 +// +build gccgo + +#include <cpuid.h> +#include <stdint.h> +#include <x86intrin.h> + +// Need to wrap __get_cpuid_count because it's declared as static. +int +gccgoGetCpuidCount(uint32_t leaf, uint32_t subleaf, + uint32_t *eax, uint32_t *ebx, + uint32_t *ecx, uint32_t *edx) +{ + return __get_cpuid_count(leaf, subleaf, eax, ebx, ecx, edx); +} + +#pragma GCC diagnostic ignored "-Wunknown-pragmas" +#pragma GCC push_options +#pragma GCC target("xsave") +#pragma clang attribute push (__attribute__((target("xsave"))), apply_to=function) + +// xgetbv reads the contents of an XCR (Extended Control Register) +// specified in the ECX register into registers EDX:EAX. +// Currently, the only supported value for XCR is 0.
+void +gccgoXgetbv(uint32_t *eax, uint32_t *edx) +{ + uint64_t v = _xgetbv(0); + *eax = v & 0xffffffff; + *edx = v >> 32; +} + +#pragma clang attribute pop +#pragma GCC pop_options diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_x86.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_x86.go new file mode 100644 index 00000000000..863d415ab49 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_gccgo_x86.go @@ -0,0 +1,33 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build (386 || amd64 || amd64p32) && gccgo +// +build 386 amd64 amd64p32 +// +build gccgo + +package cpu + +//extern gccgoGetCpuidCount +func gccgoGetCpuidCount(eaxArg, ecxArg uint32, eax, ebx, ecx, edx *uint32) + +func cpuid(eaxArg, ecxArg uint32) (eax, ebx, ecx, edx uint32) { + var a, b, c, d uint32 + gccgoGetCpuidCount(eaxArg, ecxArg, &a, &b, &c, &d) + return a, b, c, d +} + +//extern gccgoXgetbv +func gccgoXgetbv(eax, edx *uint32) + +func xgetbv() (eax, edx uint32) { + var a, d uint32 + gccgoXgetbv(&a, &d) + return a, d +} + +// gccgo doesn't build on Darwin, per: +// https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/gcc.rb#L76 +func darwinSupportsAVX512() bool { + return false +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux.go new file mode 100644 index 00000000000..159a686f6f7 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux.go @@ -0,0 +1,16 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build !386 && !amd64 && !amd64p32 && !arm64 +// +build !386,!amd64,!amd64p32,!arm64 + +package cpu + +func archInit() { + if err := readHWCAP(); err != nil { + return + } + doinit() + Initialized = true +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_arm.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_arm.go new file mode 100644 index 00000000000..2057006dce4 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_arm.go @@ -0,0 +1,39 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +func doinit() { + ARM.HasSWP = isSet(hwCap, hwcap_SWP) + ARM.HasHALF = isSet(hwCap, hwcap_HALF) + ARM.HasTHUMB = isSet(hwCap, hwcap_THUMB) + ARM.Has26BIT = isSet(hwCap, hwcap_26BIT) + ARM.HasFASTMUL = isSet(hwCap, hwcap_FAST_MULT) + ARM.HasFPA = isSet(hwCap, hwcap_FPA) + ARM.HasVFP = isSet(hwCap, hwcap_VFP) + ARM.HasEDSP = isSet(hwCap, hwcap_EDSP) + ARM.HasJAVA = isSet(hwCap, hwcap_JAVA) + ARM.HasIWMMXT = isSet(hwCap, hwcap_IWMMXT) + ARM.HasCRUNCH = isSet(hwCap, hwcap_CRUNCH) + ARM.HasTHUMBEE = isSet(hwCap, hwcap_THUMBEE) + ARM.HasNEON = isSet(hwCap, hwcap_NEON) + ARM.HasVFPv3 = isSet(hwCap, hwcap_VFPv3) + ARM.HasVFPv3D16 = isSet(hwCap, hwcap_VFPv3D16) + ARM.HasTLS = isSet(hwCap, hwcap_TLS) + ARM.HasVFPv4 = isSet(hwCap, hwcap_VFPv4) + ARM.HasIDIVA = isSet(hwCap, hwcap_IDIVA) + ARM.HasIDIVT = isSet(hwCap, hwcap_IDIVT) + ARM.HasVFPD32 = isSet(hwCap, hwcap_VFPD32) + ARM.HasLPAE = isSet(hwCap, hwcap_LPAE) + ARM.HasEVTSTRM = isSet(hwCap, hwcap_EVTSTRM) + ARM.HasAES = isSet(hwCap2, hwcap2_AES) + ARM.HasPMULL = isSet(hwCap2, hwcap2_PMULL) + ARM.HasSHA1 = isSet(hwCap2, hwcap2_SHA1) + ARM.HasSHA2 = isSet(hwCap2, hwcap2_SHA2) + ARM.HasCRC32 = isSet(hwCap2, hwcap2_CRC32) +} + +func isSet(hwc uint, value uint) bool { + return hwc&value != 0 +} diff --git 
a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_arm64.go new file mode 100644 index 00000000000..a968b80fa6a --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_arm64.go @@ -0,0 +1,111 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +import ( + "strings" + "syscall" +) + +// HWCAP/HWCAP2 bits. These are exposed by Linux. +const ( + hwcap_FP = 1 << 0 + hwcap_ASIMD = 1 << 1 + hwcap_EVTSTRM = 1 << 2 + hwcap_AES = 1 << 3 + hwcap_PMULL = 1 << 4 + hwcap_SHA1 = 1 << 5 + hwcap_SHA2 = 1 << 6 + hwcap_CRC32 = 1 << 7 + hwcap_ATOMICS = 1 << 8 + hwcap_FPHP = 1 << 9 + hwcap_ASIMDHP = 1 << 10 + hwcap_CPUID = 1 << 11 + hwcap_ASIMDRDM = 1 << 12 + hwcap_JSCVT = 1 << 13 + hwcap_FCMA = 1 << 14 + hwcap_LRCPC = 1 << 15 + hwcap_DCPOP = 1 << 16 + hwcap_SHA3 = 1 << 17 + hwcap_SM3 = 1 << 18 + hwcap_SM4 = 1 << 19 + hwcap_ASIMDDP = 1 << 20 + hwcap_SHA512 = 1 << 21 + hwcap_SVE = 1 << 22 + hwcap_ASIMDFHM = 1 << 23 +) + +// linuxKernelCanEmulateCPUID reports whether we're running +// on Linux 4.11+. Ideally we'd like to ask the question about +// whether the current kernel contains +// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=77c97b4ee21290f5f083173d957843b615abbff2 +// but the version number will have to do. +func linuxKernelCanEmulateCPUID() bool { + var un syscall.Utsname + syscall.Uname(&un) + var sb strings.Builder + for _, b := range un.Release[:] { + if b == 0 { + break + } + sb.WriteByte(byte(b)) + } + major, minor, _, ok := parseRelease(sb.String()) + return ok && (major > 4 || major == 4 && minor >= 11) +} + +func doinit() { + if err := readHWCAP(); err != nil { + // We failed to read /proc/self/auxv. This can happen if the binary has + // been given extra capabilities(7) with /bin/setcap. 
+ // + // When this happens, we have two options. If the Linux kernel is new + // enough (4.11+), we can read the arm64 registers directly which'll + // trap into the kernel and then return back to userspace. + // + // But on older kernels, such as Linux 4.4.180 as used on many Synology + // devices, calling readARM64Registers (specifically getisar0) will + // cause a SIGILL and we'll die. So for older kernels, parse /proc/cpuinfo + // instead. + // + // See golang/go#57336. + if linuxKernelCanEmulateCPUID() { + readARM64Registers() + } else { + readLinuxProcCPUInfo() + } + return + } + + // HWCAP feature bits + ARM64.HasFP = isSet(hwCap, hwcap_FP) + ARM64.HasASIMD = isSet(hwCap, hwcap_ASIMD) + ARM64.HasEVTSTRM = isSet(hwCap, hwcap_EVTSTRM) + ARM64.HasAES = isSet(hwCap, hwcap_AES) + ARM64.HasPMULL = isSet(hwCap, hwcap_PMULL) + ARM64.HasSHA1 = isSet(hwCap, hwcap_SHA1) + ARM64.HasSHA2 = isSet(hwCap, hwcap_SHA2) + ARM64.HasCRC32 = isSet(hwCap, hwcap_CRC32) + ARM64.HasATOMICS = isSet(hwCap, hwcap_ATOMICS) + ARM64.HasFPHP = isSet(hwCap, hwcap_FPHP) + ARM64.HasASIMDHP = isSet(hwCap, hwcap_ASIMDHP) + ARM64.HasCPUID = isSet(hwCap, hwcap_CPUID) + ARM64.HasASIMDRDM = isSet(hwCap, hwcap_ASIMDRDM) + ARM64.HasJSCVT = isSet(hwCap, hwcap_JSCVT) + ARM64.HasFCMA = isSet(hwCap, hwcap_FCMA) + ARM64.HasLRCPC = isSet(hwCap, hwcap_LRCPC) + ARM64.HasDCPOP = isSet(hwCap, hwcap_DCPOP) + ARM64.HasSHA3 = isSet(hwCap, hwcap_SHA3) + ARM64.HasSM3 = isSet(hwCap, hwcap_SM3) + ARM64.HasSM4 = isSet(hwCap, hwcap_SM4) + ARM64.HasASIMDDP = isSet(hwCap, hwcap_ASIMDDP) + ARM64.HasSHA512 = isSet(hwCap, hwcap_SHA512) + ARM64.HasSVE = isSet(hwCap, hwcap_SVE) + ARM64.HasASIMDFHM = isSet(hwCap, hwcap_ASIMDFHM) +} + +func isSet(hwc uint, value uint) bool { + return hwc&value != 0 +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_mips64x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_mips64x.go new file mode 100644 index 00000000000..6000db4cdd1 --- /dev/null +++ 
b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_mips64x.go @@ -0,0 +1,24 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build linux && (mips64 || mips64le) +// +build linux +// +build mips64 mips64le + +package cpu + +// HWCAP bits. These are exposed by the Linux kernel 5.4. +const ( + // CPU features + hwcap_MIPS_MSA = 1 << 1 +) + +func doinit() { + // HWCAP feature bits + MIPS64X.HasMSA = isSet(hwCap, hwcap_MIPS_MSA) +} + +func isSet(hwc uint, value uint) bool { + return hwc&value != 0 +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_noinit.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_noinit.go new file mode 100644 index 00000000000..f4992b1a593 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_noinit.go @@ -0,0 +1,10 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build linux && !arm && !arm64 && !mips64 && !mips64le && !ppc64 && !ppc64le && !s390x +// +build linux,!arm,!arm64,!mips64,!mips64le,!ppc64,!ppc64le,!s390x + +package cpu + +func doinit() {} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_ppc64x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_ppc64x.go new file mode 100644 index 00000000000..021356d6deb --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_ppc64x.go @@ -0,0 +1,32 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build linux && (ppc64 || ppc64le) +// +build linux +// +build ppc64 ppc64le + +package cpu + +// HWCAP/HWCAP2 bits. These are exposed by the kernel. 
+const ( + // ISA Level + _PPC_FEATURE2_ARCH_2_07 = 0x80000000 + _PPC_FEATURE2_ARCH_3_00 = 0x00800000 + + // CPU features + _PPC_FEATURE2_DARN = 0x00200000 + _PPC_FEATURE2_SCV = 0x00100000 +) + +func doinit() { + // HWCAP2 feature bits + PPC64.IsPOWER8 = isSet(hwCap2, _PPC_FEATURE2_ARCH_2_07) + PPC64.IsPOWER9 = isSet(hwCap2, _PPC_FEATURE2_ARCH_3_00) + PPC64.HasDARN = isSet(hwCap2, _PPC_FEATURE2_DARN) + PPC64.HasSCV = isSet(hwCap2, _PPC_FEATURE2_SCV) +} + +func isSet(hwc uint, value uint) bool { + return hwc&value != 0 +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_s390x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_s390x.go new file mode 100644 index 00000000000..1517ac61d31 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_linux_s390x.go @@ -0,0 +1,40 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +const ( + // bit mask values from /usr/include/bits/hwcap.h + hwcap_ZARCH = 2 + hwcap_STFLE = 4 + hwcap_MSA = 8 + hwcap_LDISP = 16 + hwcap_EIMM = 32 + hwcap_DFP = 64 + hwcap_ETF3EH = 256 + hwcap_VX = 2048 + hwcap_VXE = 8192 +) + +func initS390Xbase() { + // test HWCAP bit vector + has := func(featureMask uint) bool { + return hwCap&featureMask == featureMask + } + + // mandatory + S390X.HasZARCH = has(hwcap_ZARCH) + + // optional + S390X.HasSTFLE = has(hwcap_STFLE) + S390X.HasLDISP = has(hwcap_LDISP) + S390X.HasEIMM = has(hwcap_EIMM) + S390X.HasETF3EH = has(hwcap_ETF3EH) + S390X.HasDFP = has(hwcap_DFP) + S390X.HasMSA = has(hwcap_MSA) + S390X.HasVX = has(hwcap_VX) + if S390X.HasVX { + S390X.HasVXE = has(hwcap_VXE) + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_loong64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_loong64.go new file mode 100644 index 00000000000..0f57b05bdbe --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_loong64.go 
@@ -0,0 +1,13 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build loong64 +// +build loong64 + +package cpu + +const cacheLineSize = 64 + +func initOptions() { +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_mips64x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_mips64x.go new file mode 100644 index 00000000000..f4063c66423 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_mips64x.go @@ -0,0 +1,16 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build mips64 || mips64le +// +build mips64 mips64le + +package cpu + +const cacheLineSize = 32 + +func initOptions() { + options = []option{ + {Name: "msa", Feature: &MIPS64X.HasMSA}, + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_mipsx.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_mipsx.go new file mode 100644 index 00000000000..07c4e36d8f5 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_mipsx.go @@ -0,0 +1,12 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build mips || mipsle +// +build mips mipsle + +package cpu + +const cacheLineSize = 32 + +func initOptions() {} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_netbsd_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_netbsd_arm64.go new file mode 100644 index 00000000000..ebfb3fc8e76 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_netbsd_arm64.go @@ -0,0 +1,173 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package cpu + +import ( + "syscall" + "unsafe" +) + +// Minimal copy of functionality from x/sys/unix so the cpu package can call +// sysctl without depending on x/sys/unix. + +const ( + _CTL_QUERY = -2 + + _SYSCTL_VERS_1 = 0x1000000 +) + +var _zero uintptr + +func sysctl(mib []int32, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { + var _p0 unsafe.Pointer + if len(mib) > 0 { + _p0 = unsafe.Pointer(&mib[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, errno := syscall.Syscall6( + syscall.SYS___SYSCTL, + uintptr(_p0), + uintptr(len(mib)), + uintptr(unsafe.Pointer(old)), + uintptr(unsafe.Pointer(oldlen)), + uintptr(unsafe.Pointer(new)), + uintptr(newlen)) + if errno != 0 { + return errno + } + return nil +} + +type sysctlNode struct { + Flags uint32 + Num int32 + Name [32]int8 + Ver uint32 + __rsvd uint32 + Un [16]byte + _sysctl_size [8]byte + _sysctl_func [8]byte + _sysctl_parent [8]byte + _sysctl_desc [8]byte +} + +func sysctlNodes(mib []int32) ([]sysctlNode, error) { + var olen uintptr + + // Get a list of all sysctl nodes below the given MIB by performing + // a sysctl for the given MIB with CTL_QUERY appended. + mib = append(mib, _CTL_QUERY) + qnode := sysctlNode{Flags: _SYSCTL_VERS_1} + qp := (*byte)(unsafe.Pointer(&qnode)) + sz := unsafe.Sizeof(qnode) + if err := sysctl(mib, nil, &olen, qp, sz); err != nil { + return nil, err + } + + // Now that we know the size, get the actual nodes. + nodes := make([]sysctlNode, olen/sz) + np := (*byte)(unsafe.Pointer(&nodes[0])) + if err := sysctl(mib, np, &olen, qp, sz); err != nil { + return nil, err + } + + return nodes, nil +} + +func nametomib(name string) ([]int32, error) { + // Split name into components. + var parts []string + last := 0 + for i := 0; i < len(name); i++ { + if name[i] == '.' { + parts = append(parts, name[last:i]) + last = i + 1 + } + } + parts = append(parts, name[last:]) + + mib := []int32{} + // Discover the nodes and construct the MIB OID. 
+ for partno, part := range parts { + nodes, err := sysctlNodes(mib) + if err != nil { + return nil, err + } + for _, node := range nodes { + n := make([]byte, 0) + for i := range node.Name { + if node.Name[i] != 0 { + n = append(n, byte(node.Name[i])) + } + } + if string(n) == part { + mib = append(mib, int32(node.Num)) + break + } + } + if len(mib) != partno+1 { + return nil, err + } + } + + return mib, nil +} + +// aarch64SysctlCPUID is struct aarch64_sysctl_cpu_id from NetBSD's <aarch64/armreg.h> +type aarch64SysctlCPUID struct { + midr uint64 /* Main ID Register */ + revidr uint64 /* Revision ID Register */ + mpidr uint64 /* Multiprocessor Affinity Register */ + aa64dfr0 uint64 /* A64 Debug Feature Register 0 */ + aa64dfr1 uint64 /* A64 Debug Feature Register 1 */ + aa64isar0 uint64 /* A64 Instruction Set Attribute Register 0 */ + aa64isar1 uint64 /* A64 Instruction Set Attribute Register 1 */ + aa64mmfr0 uint64 /* A64 Memory Model Feature Register 0 */ + aa64mmfr1 uint64 /* A64 Memory Model Feature Register 1 */ + aa64mmfr2 uint64 /* A64 Memory Model Feature Register 2 */ + aa64pfr0 uint64 /* A64 Processor Feature Register 0 */ + aa64pfr1 uint64 /* A64 Processor Feature Register 1 */ + aa64zfr0 uint64 /* A64 SVE Feature ID Register 0 */ + mvfr0 uint32 /* Media and VFP Feature Register 0 */ + mvfr1 uint32 /* Media and VFP Feature Register 1 */ + mvfr2 uint32 /* Media and VFP Feature Register 2 */ + pad uint32 + clidr uint64 /* Cache Level ID Register */ + ctr uint64 /* Cache Type Register */ +} + +func sysctlCPUID(name string) (*aarch64SysctlCPUID, error) { + mib, err := nametomib(name) + if err != nil { + return nil, err + } + + out := aarch64SysctlCPUID{} + n := unsafe.Sizeof(out) + _, _, errno := syscall.Syscall6( + syscall.SYS___SYSCTL, + uintptr(unsafe.Pointer(&mib[0])), + uintptr(len(mib)), + uintptr(unsafe.Pointer(&out)), + uintptr(unsafe.Pointer(&n)), + uintptr(0), + uintptr(0)) + if errno != 0 { + return nil, errno + } + return &out, nil +} + +func doinit() { + cpuid,
err := sysctlCPUID("machdep.cpu0.cpu_id") + if err != nil { + setMinimalFeatures() + return + } + parseARM64SystemRegisters(cpuid.aa64isar0, cpuid.aa64isar1, cpuid.aa64pfr0) + + Initialized = true +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_openbsd_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_openbsd_arm64.go new file mode 100644 index 00000000000..85b64d5ccb7 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_openbsd_arm64.go @@ -0,0 +1,65 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +import ( + "syscall" + "unsafe" +) + +// Minimal copy of functionality from x/sys/unix so the cpu package can call +// sysctl without depending on x/sys/unix. + +const ( + // From OpenBSD's sys/sysctl.h. + _CTL_MACHDEP = 7 + + // From OpenBSD's machine/cpu.h. + _CPU_ID_AA64ISAR0 = 2 + _CPU_ID_AA64ISAR1 = 3 +) + +// Implemented in the runtime package (runtime/sys_openbsd3.go) +func syscall_syscall6(fn, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) + +//go:linkname syscall_syscall6 syscall.syscall6 + +func sysctl(mib []uint32, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { + _, _, errno := syscall_syscall6(libc_sysctl_trampoline_addr, uintptr(unsafe.Pointer(&mib[0])), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) + if errno != 0 { + return errno + } + return nil +} + +var libc_sysctl_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_sysctl sysctl "libc.so" + +func sysctlUint64(mib []uint32) (uint64, bool) { + var out uint64 + nout := unsafe.Sizeof(out) + if err := sysctl(mib, (*byte)(unsafe.Pointer(&out)), &nout, nil, 0); err != nil { + return 0, false + } + return out, true +} + +func doinit() { + setMinimalFeatures() + + // Get ID_AA64ISAR0 and ID_AA64ISAR1 from 
sysctl. + isar0, ok := sysctlUint64([]uint32{_CTL_MACHDEP, _CPU_ID_AA64ISAR0}) + if !ok { + return + } + isar1, ok := sysctlUint64([]uint32{_CTL_MACHDEP, _CPU_ID_AA64ISAR1}) + if !ok { + return + } + parseARM64SystemRegisters(isar0, isar1, 0) + + Initialized = true +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_openbsd_arm64.s b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_openbsd_arm64.s new file mode 100644 index 00000000000..054ba05d607 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_openbsd_arm64.s @@ -0,0 +1,11 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +#include "textflag.h" + +TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_sysctl(SB) + +GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $8 +DATA ·libc_sysctl_trampoline_addr(SB)/8, $libc_sysctl_trampoline<>(SB) diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_arm.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_arm.go new file mode 100644 index 00000000000..d7b4fb4ccc2 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_arm.go @@ -0,0 +1,10 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !linux && arm +// +build !linux,arm + +package cpu + +func archInit() {} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_arm64.go new file mode 100644 index 00000000000..f3cde129b63 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_arm64.go @@ -0,0 +1,10 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build !linux && !netbsd && !openbsd && arm64 +// +build !linux,!netbsd,!openbsd,arm64 + +package cpu + +func doinit() {} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_mips64x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_mips64x.go new file mode 100644 index 00000000000..0dafe9644a5 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_mips64x.go @@ -0,0 +1,13 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !linux && (mips64 || mips64le) +// +build !linux +// +build mips64 mips64le + +package cpu + +func archInit() { + Initialized = true +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_ppc64x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_ppc64x.go new file mode 100644 index 00000000000..060d46b6eac --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_ppc64x.go @@ -0,0 +1,15 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !aix && !linux && (ppc64 || ppc64le) +// +build !aix +// +build !linux +// +build ppc64 ppc64le + +package cpu + +func archInit() { + PPC64.IsPOWER8 = true + Initialized = true +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_riscv64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_riscv64.go new file mode 100644 index 00000000000..dd10eb79fee --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_other_riscv64.go @@ -0,0 +1,12 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build !linux && riscv64 +// +build !linux,riscv64 + +package cpu + +func archInit() { + Initialized = true +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_ppc64x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_ppc64x.go new file mode 100644 index 00000000000..4e8acd16583 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_ppc64x.go @@ -0,0 +1,17 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build ppc64 || ppc64le +// +build ppc64 ppc64le + +package cpu + +const cacheLineSize = 128 + +func initOptions() { + options = []option{ + {Name: "darn", Feature: &PPC64.HasDARN}, + {Name: "scv", Feature: &PPC64.HasSCV}, + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_riscv64.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_riscv64.go new file mode 100644 index 00000000000..bd6c128af9b --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_riscv64.go @@ -0,0 +1,12 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build riscv64 +// +build riscv64 + +package cpu + +const cacheLineSize = 32 + +func initOptions() {} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_s390x.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_s390x.go new file mode 100644 index 00000000000..5881b8833f5 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_s390x.go @@ -0,0 +1,172 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package cpu + +const cacheLineSize = 256 + +func initOptions() { + options = []option{ + {Name: "zarch", Feature: &S390X.HasZARCH, Required: true}, + {Name: "stfle", Feature: &S390X.HasSTFLE, Required: true}, + {Name: "ldisp", Feature: &S390X.HasLDISP, Required: true}, + {Name: "eimm", Feature: &S390X.HasEIMM, Required: true}, + {Name: "dfp", Feature: &S390X.HasDFP}, + {Name: "etf3eh", Feature: &S390X.HasETF3EH}, + {Name: "msa", Feature: &S390X.HasMSA}, + {Name: "aes", Feature: &S390X.HasAES}, + {Name: "aescbc", Feature: &S390X.HasAESCBC}, + {Name: "aesctr", Feature: &S390X.HasAESCTR}, + {Name: "aesgcm", Feature: &S390X.HasAESGCM}, + {Name: "ghash", Feature: &S390X.HasGHASH}, + {Name: "sha1", Feature: &S390X.HasSHA1}, + {Name: "sha256", Feature: &S390X.HasSHA256}, + {Name: "sha3", Feature: &S390X.HasSHA3}, + {Name: "sha512", Feature: &S390X.HasSHA512}, + {Name: "vx", Feature: &S390X.HasVX}, + {Name: "vxe", Feature: &S390X.HasVXE}, + } +} + +// bitIsSet reports whether the bit at index is set. The bit index +// is in big endian order, so bit index 0 is the leftmost bit. +func bitIsSet(bits []uint64, index uint) bool { + return bits[index/64]&((1<<63)>>(index%64)) != 0 +} + +// facility is a bit index for the named facility. 
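The big-endian bit numbering used by `bitIsSet` above is easy to get backwards: index 0 is the most significant bit of the first word, matching how s390x documentation numbers STFLE facility bits. A quick standalone check of that indexing (facility numbers chosen only to exercise word boundaries):

```go
package main

import "fmt"

// bitIsSet mirrors the helper above: bit index 0 is the MSB of bits[0],
// index 64 is the MSB of bits[1], and so on.
func bitIsSet(bits []uint64, index uint) bool {
	return bits[index/64]&((1<<63)>>(index%64)) != 0
}

func main() {
	var fl [4]uint64
	fl[0] = 1 << 63 // bit index 0 (the leftmost bit of the list)
	fl[2] = 1 << 61 // bit index 130 (two words in, two bits down)
	fmt.Println(bitIsSet(fl[:], 0))   // set
	fmt.Println(bitIsSet(fl[:], 1))   // clear
	fmt.Println(bitIsSet(fl[:], 130)) // set
}
```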
+type facility uint8 + +const ( + // mandatory facilities + zarch facility = 1 // z architecture mode is active + stflef facility = 7 // store-facility-list-extended + ldisp facility = 18 // long-displacement + eimm facility = 21 // extended-immediate + + // miscellaneous facilities + dfp facility = 42 // decimal-floating-point + etf3eh facility = 30 // extended-translation 3 enhancement + + // cryptography facilities + msa facility = 17 // message-security-assist + msa3 facility = 76 // message-security-assist extension 3 + msa4 facility = 77 // message-security-assist extension 4 + msa5 facility = 57 // message-security-assist extension 5 + msa8 facility = 146 // message-security-assist extension 8 + msa9 facility = 155 // message-security-assist extension 9 + + // vector facilities + vx facility = 129 // vector facility + vxe facility = 135 // vector-enhancements 1 + vxe2 facility = 148 // vector-enhancements 2 +) + +// facilityList contains the result of an STFLE call. +// Bits are numbered in big endian order so the +// leftmost bit (the MSB) is at index 0. +type facilityList struct { + bits [4]uint64 +} + +// Has reports whether the given facilities are present. +func (s *facilityList) Has(fs ...facility) bool { + if len(fs) == 0 { + panic("no facility bits provided") + } + for _, f := range fs { + if !bitIsSet(s.bits[:], uint(f)) { + return false + } + } + return true +} + +// function is the code for the named cryptographic function. 
+type function uint8 + +const ( + // KM{,A,C,CTR} function codes + aes128 function = 18 // AES-128 + aes192 function = 19 // AES-192 + aes256 function = 20 // AES-256 + + // K{I,L}MD function codes + sha1 function = 1 // SHA-1 + sha256 function = 2 // SHA-256 + sha512 function = 3 // SHA-512 + sha3_224 function = 32 // SHA3-224 + sha3_256 function = 33 // SHA3-256 + sha3_384 function = 34 // SHA3-384 + sha3_512 function = 35 // SHA3-512 + shake128 function = 36 // SHAKE-128 + shake256 function = 37 // SHAKE-256 + + // KLMD function codes + ghash function = 65 // GHASH +) + +// queryResult contains the result of a Query function +// call. Bits are numbered in big endian order so the +// leftmost bit (the MSB) is at index 0. +type queryResult struct { + bits [2]uint64 +} + +// Has reports whether the given functions are present. +func (q *queryResult) Has(fns ...function) bool { + if len(fns) == 0 { + panic("no function codes provided") + } + for _, f := range fns { + if !bitIsSet(q.bits[:], uint(f)) { + return false + } + } + return true +} + +func doinit() { + initS390Xbase() + + // We need implementations of stfle, km and so on + // to detect cryptographic features. + if !haveAsmFunctions() { + return + } + + // optional cryptographic functions + if S390X.HasMSA { + aes := []function{aes128, aes192, aes256} + + // cipher message + km, kmc := kmQuery(), kmcQuery() + S390X.HasAES = km.Has(aes...) + S390X.HasAESCBC = kmc.Has(aes...) + if S390X.HasSTFLE { + facilities := stfle() + if facilities.Has(msa4) { + kmctr := kmctrQuery() + S390X.HasAESCTR = kmctr.Has(aes...) + } + if facilities.Has(msa8) { + kma := kmaQuery() + S390X.HasAESGCM = kma.Has(aes...) 
+ } + } + + // compute message digest + kimd := kimdQuery() // intermediate (no padding) + klmd := klmdQuery() // last (padding) + S390X.HasSHA1 = kimd.Has(sha1) && klmd.Has(sha1) + S390X.HasSHA256 = kimd.Has(sha256) && klmd.Has(sha256) + S390X.HasSHA512 = kimd.Has(sha512) && klmd.Has(sha512) + S390X.HasGHASH = kimd.Has(ghash) // KLMD-GHASH does not exist + sha3 := []function{ + sha3_224, sha3_256, sha3_384, sha3_512, + shake128, shake256, + } + S390X.HasSHA3 = kimd.Has(sha3...) && klmd.Has(sha3...) + } +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_s390x.s b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_s390x.s new file mode 100644 index 00000000000..96f81e20971 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_s390x.s @@ -0,0 +1,58 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build gc +// +build gc + +#include "textflag.h" + +// func stfle() facilityList +TEXT ·stfle(SB), NOSPLIT|NOFRAME, $0-32 + MOVD $ret+0(FP), R1 + MOVD $3, R0 // last doubleword index to store + XC $32, (R1), (R1) // clear 4 doublewords (32 bytes) + WORD $0xb2b01000 // store facility list extended (STFLE) + RET + +// func kmQuery() queryResult +TEXT ·kmQuery(SB), NOSPLIT|NOFRAME, $0-16 + MOVD $0, R0 // set function code to 0 (KM-Query) + MOVD $ret+0(FP), R1 // address of 16-byte return value + WORD $0xB92E0024 // cipher message (KM) + RET + +// func kmcQuery() queryResult +TEXT ·kmcQuery(SB), NOSPLIT|NOFRAME, $0-16 + MOVD $0, R0 // set function code to 0 (KMC-Query) + MOVD $ret+0(FP), R1 // address of 16-byte return value + WORD $0xB92F0024 // cipher message with chaining (KMC) + RET + +// func kmctrQuery() queryResult +TEXT ·kmctrQuery(SB), NOSPLIT|NOFRAME, $0-16 + MOVD $0, R0 // set function code to 0 (KMCTR-Query) + MOVD $ret+0(FP), R1 // address of 16-byte return value + WORD $0xB92D4024 // cipher message with counter 
(KMCTR) + RET + +// func kmaQuery() queryResult +TEXT ·kmaQuery(SB), NOSPLIT|NOFRAME, $0-16 + MOVD $0, R0 // set function code to 0 (KMA-Query) + MOVD $ret+0(FP), R1 // address of 16-byte return value + WORD $0xb9296024 // cipher message with authentication (KMA) + RET + +// func kimdQuery() queryResult +TEXT ·kimdQuery(SB), NOSPLIT|NOFRAME, $0-16 + MOVD $0, R0 // set function code to 0 (KIMD-Query) + MOVD $ret+0(FP), R1 // address of 16-byte return value + WORD $0xB93E0024 // compute intermediate message digest (KIMD) + RET + +// func klmdQuery() queryResult +TEXT ·klmdQuery(SB), NOSPLIT|NOFRAME, $0-16 + MOVD $0, R0 // set function code to 0 (KLMD-Query) + MOVD $ret+0(FP), R1 // address of 16-byte return value + WORD $0xB93F0024 // compute last message digest (KLMD) + RET diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_wasm.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_wasm.go new file mode 100644 index 00000000000..7747d888a69 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_wasm.go @@ -0,0 +1,18 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build wasm +// +build wasm + +package cpu + +// We're compiling the cpu package for an unknown (software-abstracted) CPU. +// Make CacheLinePad an empty struct and hope that the usual struct alignment +// rules are good enough. + +const cacheLineSize = 0 + +func initOptions() {} + +func archInit() {} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_x86.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_x86.go new file mode 100644 index 00000000000..f5aacfc825d --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/cpu_x86.go @@ -0,0 +1,145 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build 386 || amd64 || amd64p32 +// +build 386 amd64 amd64p32 + +package cpu + +import "runtime" + +const cacheLineSize = 64 + +func initOptions() { + options = []option{ + {Name: "adx", Feature: &X86.HasADX}, + {Name: "aes", Feature: &X86.HasAES}, + {Name: "avx", Feature: &X86.HasAVX}, + {Name: "avx2", Feature: &X86.HasAVX2}, + {Name: "avx512", Feature: &X86.HasAVX512}, + {Name: "avx512f", Feature: &X86.HasAVX512F}, + {Name: "avx512cd", Feature: &X86.HasAVX512CD}, + {Name: "avx512er", Feature: &X86.HasAVX512ER}, + {Name: "avx512pf", Feature: &X86.HasAVX512PF}, + {Name: "avx512vl", Feature: &X86.HasAVX512VL}, + {Name: "avx512bw", Feature: &X86.HasAVX512BW}, + {Name: "avx512dq", Feature: &X86.HasAVX512DQ}, + {Name: "avx512ifma", Feature: &X86.HasAVX512IFMA}, + {Name: "avx512vbmi", Feature: &X86.HasAVX512VBMI}, + {Name: "avx5124vnniw", Feature: &X86.HasAVX5124VNNIW}, + {Name: "avx5124fmaps", Feature: &X86.HasAVX5124FMAPS}, + {Name: "avx512vpopcntdq", Feature: &X86.HasAVX512VPOPCNTDQ}, + {Name: "avx512vpclmulqdq", Feature: &X86.HasAVX512VPCLMULQDQ}, + {Name: "avx512vnni", Feature: &X86.HasAVX512VNNI}, + {Name: "avx512gfni", Feature: &X86.HasAVX512GFNI}, + {Name: "avx512vaes", Feature: &X86.HasAVX512VAES}, + {Name: "avx512vbmi2", Feature: &X86.HasAVX512VBMI2}, + {Name: "avx512bitalg", Feature: &X86.HasAVX512BITALG}, + {Name: "avx512bf16", Feature: &X86.HasAVX512BF16}, + {Name: "bmi1", Feature: &X86.HasBMI1}, + {Name: "bmi2", Feature: &X86.HasBMI2}, + {Name: "cx16", Feature: &X86.HasCX16}, + {Name: "erms", Feature: &X86.HasERMS}, + {Name: "fma", Feature: &X86.HasFMA}, + {Name: "osxsave", Feature: &X86.HasOSXSAVE}, + {Name: "pclmulqdq", Feature: &X86.HasPCLMULQDQ}, + {Name: "popcnt", Feature: &X86.HasPOPCNT}, + {Name: "rdrand", Feature: &X86.HasRDRAND}, + {Name: "rdseed", Feature: &X86.HasRDSEED}, + {Name: "sse3", Feature: &X86.HasSSE3}, + {Name: "sse41", Feature: &X86.HasSSE41}, + {Name: "sse42", Feature: &X86.HasSSE42}, + {Name: "ssse3", Feature: &X86.HasSSSE3},
+ + // These capabilities should always be enabled on amd64: + {Name: "sse2", Feature: &X86.HasSSE2, Required: runtime.GOARCH == "amd64"}, + } +} + +func archInit() { + + Initialized = true + + maxID, _, _, _ := cpuid(0, 0) + + if maxID < 1 { + return + } + + _, _, ecx1, edx1 := cpuid(1, 0) + X86.HasSSE2 = isSet(26, edx1) + + X86.HasSSE3 = isSet(0, ecx1) + X86.HasPCLMULQDQ = isSet(1, ecx1) + X86.HasSSSE3 = isSet(9, ecx1) + X86.HasFMA = isSet(12, ecx1) + X86.HasCX16 = isSet(13, ecx1) + X86.HasSSE41 = isSet(19, ecx1) + X86.HasSSE42 = isSet(20, ecx1) + X86.HasPOPCNT = isSet(23, ecx1) + X86.HasAES = isSet(25, ecx1) + X86.HasOSXSAVE = isSet(27, ecx1) + X86.HasRDRAND = isSet(30, ecx1) + + var osSupportsAVX, osSupportsAVX512 bool + // For XGETBV, OSXSAVE bit is required and sufficient. + if X86.HasOSXSAVE { + eax, _ := xgetbv() + // Check if XMM and YMM registers have OS support. + osSupportsAVX = isSet(1, eax) && isSet(2, eax) + + if runtime.GOOS == "darwin" { + // Darwin doesn't save/restore AVX-512 mask registers correctly across signal handlers. + // Since users can't rely on mask register contents, let's not advertise AVX-512 support. + // See issue 49233. + osSupportsAVX512 = false + } else { + // Check if OPMASK and ZMM registers have OS support. 
+ osSupportsAVX512 = osSupportsAVX && isSet(5, eax) && isSet(6, eax) && isSet(7, eax) + } + } + + X86.HasAVX = isSet(28, ecx1) && osSupportsAVX + + if maxID < 7 { + return + } + + _, ebx7, ecx7, edx7 := cpuid(7, 0) + X86.HasBMI1 = isSet(3, ebx7) + X86.HasAVX2 = isSet(5, ebx7) && osSupportsAVX + X86.HasBMI2 = isSet(8, ebx7) + X86.HasERMS = isSet(9, ebx7) + X86.HasRDSEED = isSet(18, ebx7) + X86.HasADX = isSet(19, ebx7) + + X86.HasAVX512 = isSet(16, ebx7) && osSupportsAVX512 // Because avx-512 foundation is the core required extension + if X86.HasAVX512 { + X86.HasAVX512F = true + X86.HasAVX512CD = isSet(28, ebx7) + X86.HasAVX512ER = isSet(27, ebx7) + X86.HasAVX512PF = isSet(26, ebx7) + X86.HasAVX512VL = isSet(31, ebx7) + X86.HasAVX512BW = isSet(30, ebx7) + X86.HasAVX512DQ = isSet(17, ebx7) + X86.HasAVX512IFMA = isSet(21, ebx7) + X86.HasAVX512VBMI = isSet(1, ecx7) + X86.HasAVX5124VNNIW = isSet(2, edx7) + X86.HasAVX5124FMAPS = isSet(3, edx7) + X86.HasAVX512VPOPCNTDQ = isSet(14, ecx7) + X86.HasAVX512VPCLMULQDQ = isSet(10, ecx7) + X86.HasAVX512VNNI = isSet(11, ecx7) + X86.HasAVX512GFNI = isSet(8, ecx7) + X86.HasAVX512VAES = isSet(9, ecx7) + X86.HasAVX512VBMI2 = isSet(6, ecx7) + X86.HasAVX512BITALG = isSet(12, ecx7) + + eax71, _, _, _ := cpuid(7, 1) + X86.HasAVX512BF16 = isSet(5, eax71) + } +} + +func isSet(bitpos uint, value uint32) bool { + return value&(1<<bitpos) != 0 +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/hwcap_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/hwcap_linux.go new file mode 100644 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/hwcap_linux.go +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +import ( + "io/ioutil" +) + +const ( + _AT_HWCAP = 16 + _AT_HWCAP2 = 26 + + procAuxv = "/proc/self/auxv" + + uintSize = int(32 << (^uint(0) >> 63)) +) + +// For those platforms don't have a 'cpuid' equivalent we use HWCAP/HWCAP2 +// These are initialized in cpu_$GOARCH.go +// and should not be changed after they are initialized. +var hwCap uint +var hwCap2 uint + +func readHWCAP() error { + // For Go 1.21+, get auxv from the Go runtime. + if a := getAuxv(); len(a) > 0 { + for len(a) >= 2 { + tag, val := a[0], uint(a[1]) + a = a[2:] + switch tag { + case _AT_HWCAP: + hwCap = val + case _AT_HWCAP2: + hwCap2 = val + } + } + return nil + } + + buf, err := ioutil.ReadFile(procAuxv) + if err != nil { + // e.g.
on android /proc/self/auxv is not accessible, so silently + // ignore the error and leave Initialized = false. On some + // architectures (e.g. arm64) doinit() implements a fallback + // readout and will set Initialized = true again. + return err + } + bo := hostByteOrder() + for len(buf) >= 2*(uintSize/8) { + var tag, val uint + switch uintSize { + case 32: + tag = uint(bo.Uint32(buf[0:])) + val = uint(bo.Uint32(buf[4:])) + buf = buf[8:] + case 64: + tag = uint(bo.Uint64(buf[0:])) + val = uint(bo.Uint64(buf[8:])) + buf = buf[16:] + } + switch tag { + case _AT_HWCAP: + hwCap = val + case _AT_HWCAP2: + hwCap2 = val + } + } + return nil +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/parse.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/parse.go new file mode 100644 index 00000000000..762b63d6882 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/parse.go @@ -0,0 +1,43 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +import "strconv" + +// parseRelease parses a dot-separated version number. It follows the semver +// syntax, but allows the minor and patch versions to be elided. +// +// This is a copy of the Go runtime's parseRelease from +// https://golang.org/cl/209597. +func parseRelease(rel string) (major, minor, patch int, ok bool) { + // Strip anything after a dash or plus. + for i := 0; i < len(rel); i++ { + if rel[i] == '-' || rel[i] == '+' { + rel = rel[:i] + break + } + } + + next := func() (int, bool) { + for i := 0; i < len(rel); i++ { + if rel[i] == '.' 
{ + ver, err := strconv.Atoi(rel[:i]) + rel = rel[i+1:] + return ver, err == nil + } + } + ver, err := strconv.Atoi(rel) + rel = "" + return ver, err == nil + } + if major, ok = next(); !ok || rel == "" { + return + } + if minor, ok = next(); !ok || rel == "" { + return + } + patch, ok = next() + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/proc_cpuinfo_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/proc_cpuinfo_linux.go new file mode 100644 index 00000000000..d87bd6b3eb0 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/proc_cpuinfo_linux.go @@ -0,0 +1,54 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build linux && arm64 +// +build linux,arm64 + +package cpu + +import ( + "errors" + "io" + "os" + "strings" +) + +func readLinuxProcCPUInfo() error { + f, err := os.Open("/proc/cpuinfo") + if err != nil { + return err + } + defer f.Close() + + var buf [1 << 10]byte // enough for first CPU + n, err := io.ReadFull(f, buf[:]) + if err != nil && err != io.ErrUnexpectedEOF { + return err + } + in := string(buf[:n]) + const features = "\nFeatures : " + i := strings.Index(in, features) + if i == -1 { + return errors.New("no CPU features found") + } + in = in[i+len(features):] + if i := strings.Index(in, "\n"); i != -1 { + in = in[:i] + } + m := map[string]*bool{} + + initOptions() // need it early here; it's harmless to call twice + for _, o := range options { + m[o.Name] = o.Feature + } + // The EVTSTRM field has alias "evstrm" in Go, but Linux calls it "evtstrm". 
+ m["evtstrm"] = &ARM64.HasEVTSTRM + + for _, f := range strings.Fields(in) { + if p, ok := m[f]; ok { + *p = true + } + } + return nil +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/runtime_auxv.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/runtime_auxv.go new file mode 100644 index 00000000000..5f92ac9a2e2 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/runtime_auxv.go @@ -0,0 +1,16 @@ +// Copyright 2023 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package cpu + +// getAuxvFn is non-nil on Go 1.21+ (via runtime_auxv_go121.go init) +// on platforms that use auxv. +var getAuxvFn func() []uintptr + +func getAuxv() []uintptr { + if getAuxvFn == nil { + return nil + } + return getAuxvFn() +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/runtime_auxv_go121.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/runtime_auxv_go121.go new file mode 100644 index 00000000000..b975ea2a04e --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/runtime_auxv_go121.go @@ -0,0 +1,19 @@ +// Copyright 2023 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build go1.21 +// +build go1.21 + +package cpu + +import ( + _ "unsafe" // for linkname +) + +//go:linkname runtime_getAuxv runtime.getAuxv +func runtime_getAuxv() []uintptr + +func init() { + getAuxvFn = runtime_getAuxv +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/syscall_aix_gccgo.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/syscall_aix_gccgo.go new file mode 100644 index 00000000000..96134157a10 --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/syscall_aix_gccgo.go @@ -0,0 +1,27 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
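The `/proc/cpuinfo` readout above follows a simple pattern: locate the first `Features :` line, truncate at the next newline, then set one flag per whitespace-separated token. A minimal standalone sketch of that pattern (the separator between `Features` and `:` varies across kernels — a tab in real files — so this sketch just matches the marker used in its own sample input):

```go
package main

import (
	"fmt"
	"strings"
)

// featuresFromCPUInfo extracts the whitespace-separated tokens of the
// first "Features :" line, mirroring how the vendored reader maps each
// token onto a *bool feature flag.
func featuresFromCPUInfo(in string) map[string]bool {
	const marker = "\nFeatures : "
	i := strings.Index(in, marker)
	if i == -1 {
		return nil // no Features line found
	}
	in = in[i+len(marker):]
	if j := strings.IndexByte(in, '\n'); j != -1 {
		in = in[:j] // keep only the first CPU's Features line
	}
	set := make(map[string]bool)
	for _, f := range strings.Fields(in) {
		set[f] = true
	}
	return set
}

func main() {
	sample := "processor : 0\nFeatures : fp asimd evtstrm aes sha1 sha2\nCPU part : 0xd08\n"
	set := featuresFromCPUInfo(sample)
	fmt.Println(set["aes"], set["sve"]) // true false
}
```

The vendored code additionally aliases "evtstrm" by hand because the Go-side option table spells it "evstrm"; a map lookup per token keeps that remapping a one-line patch.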
+ +// Recreate a getsystemcfg syscall handler instead of +// using the one provided by x/sys/unix to avoid having +// the dependency between them. (See golang.org/issue/32102) +// Moreover, this file will be used during the building of +// gccgo's libgo and thus must not use a CGo method. +
//go:build aix && gccgo +// +build aix,gccgo + +package cpu + +import ( + "syscall" +) + +//extern getsystemcfg +func gccgoGetsystemcfg(label uint32) (r uint64) + +func callgetsystemcfg(label int) (r1 uintptr, e1 syscall.Errno) { + r1 = uintptr(gccgoGetsystemcfg(uint32(label))) + e1 = syscall.GetErrno() + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/cpu/syscall_aix_ppc64_gc.go b/.ci/providerlint/vendor/golang.org/x/sys/cpu/syscall_aix_ppc64_gc.go new file mode 100644 index 00000000000..904be42ffdc --- /dev/null +++ b/.ci/providerlint/vendor/golang.org/x/sys/cpu/syscall_aix_ppc64_gc.go @@ -0,0 +1,36 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Minimal copy of x/sys/unix so the cpu package can make a +// system call on AIX without depending on x/sys/unix. +// (See golang.org/issue/32102) + +//go:build aix && ppc64 && gc +// +build aix,ppc64,gc + +package cpu + +import ( + "syscall" + "unsafe" +) + +//go:cgo_import_dynamic libc_getsystemcfg getsystemcfg "libc.a/shr_64.o" + +//go:linkname libc_getsystemcfg libc_getsystemcfg + +type syscallFunc uintptr + +var libc_getsystemcfg syscallFunc + +type errno = syscall.Errno + +// Implemented in runtime/syscall_aix.go. 
+func rawSyscall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err errno) +func syscall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err errno) + +func callgetsystemcfg(label int) (r1 uintptr, e1 errno) { + r1, _, e1 = syscall6(uintptr(unsafe.Pointer(&libc_getsystemcfg)), 1, uintptr(label), 0, 0, 0, 0, 0) + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/mkall.sh b/.ci/providerlint/vendor/golang.org/x/sys/unix/mkall.sh index 8e3947c3686..e6f31d374df 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/mkall.sh +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/mkall.sh @@ -50,7 +50,7 @@ if [[ "$GOOS" = "linux" ]]; then # Use the Docker-based build system # Files generated through docker (use $cmd so you can Ctl-C the build or run) $cmd docker build --tag generate:$GOOS $GOOS - $cmd docker run --interactive --tty --volume $(cd -- "$(dirname -- "$0")/.." && /bin/pwd):/build generate:$GOOS + $cmd docker run --interactive --tty --volume $(cd -- "$(dirname -- "$0")/.." 
&& pwd):/build generate:$GOOS exit fi diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/mkerrors.sh b/.ci/providerlint/vendor/golang.org/x/sys/unix/mkerrors.sh index 2045d3dadb8..3156462715a 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/mkerrors.sh +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/mkerrors.sh @@ -204,6 +204,7 @@ struct ltchars { #include #include #include +#include #include #include #include @@ -518,7 +519,7 @@ ccflags="$@" $2 ~ /^LOCK_(SH|EX|NB|UN)$/ || $2 ~ /^LO_(KEY|NAME)_SIZE$/ || $2 ~ /^LOOP_(CLR|CTL|GET|SET)_/ || - $2 ~ /^(AF|SOCK|SO|SOL|IPPROTO|IP|IPV6|TCP|MCAST|EVFILT|NOTE|SHUT|PROT|MAP|MFD|T?PACKET|MSG|SCM|MCL|DT|MADV|PR|LOCAL|TCPOPT)_/ || + $2 ~ /^(AF|SOCK|SO|SOL|IPPROTO|IP|IPV6|TCP|MCAST|EVFILT|NOTE|SHUT|PROT|MAP|MFD|T?PACKET|MSG|SCM|MCL|DT|MADV|PR|LOCAL|TCPOPT|UDP)_/ || $2 ~ /^NFC_(GENL|PROTO|COMM|RF|SE|DIRECTION|LLCP|SOCKPROTO)_/ || $2 ~ /^NFC_.*_(MAX)?SIZE$/ || $2 ~ /^RAW_PAYLOAD_/ || @@ -740,7 +741,8 @@ main(void) e = errors[i].num; if(i > 0 && errors[i-1].num == e) continue; - strcpy(buf, strerror(e)); + strncpy(buf, strerror(e), sizeof(buf) - 1); + buf[sizeof(buf) - 1] = '\0'; // lowercase first letter: Bad -> bad, but STREAM -> STREAM. if(A <= buf[0] && buf[0] <= Z && a <= buf[1] && buf[1] <= z) buf[0] += a - A; @@ -759,7 +761,8 @@ main(void) e = signals[i].num; if(i > 0 && signals[i-1].num == e) continue; - strcpy(buf, strsignal(e)); + strncpy(buf, strsignal(e), sizeof(buf) - 1); + buf[sizeof(buf) - 1] = '\0'; // lowercase first letter: Bad -> bad, but STREAM -> STREAM. 
if(A <= buf[0] && buf[0] <= Z && a <= buf[1] && buf[1] <= z) buf[0] += a - A; diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_linux.go index fbaeb5fff14..6de486befe1 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_linux.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_linux.go @@ -1699,12 +1699,23 @@ func PtracePokeUser(pid int, addr uintptr, data []byte) (count int, err error) { return ptracePoke(PTRACE_POKEUSR, PTRACE_PEEKUSR, pid, addr, data) } +// elfNT_PRSTATUS is a copy of the debug/elf.NT_PRSTATUS constant so +// x/sys/unix doesn't need to depend on debug/elf and thus +// compress/zlib, debug/dwarf, and other packages. +const elfNT_PRSTATUS = 1 + func PtraceGetRegs(pid int, regsout *PtraceRegs) (err error) { - return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) + var iov Iovec + iov.Base = (*byte)(unsafe.Pointer(regsout)) + iov.SetLen(int(unsafe.Sizeof(*regsout))) + return ptracePtr(PTRACE_GETREGSET, pid, uintptr(elfNT_PRSTATUS), unsafe.Pointer(&iov)) } func PtraceSetRegs(pid int, regs *PtraceRegs) (err error) { - return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) + var iov Iovec + iov.Base = (*byte)(unsafe.Pointer(regs)) + iov.SetLen(int(unsafe.Sizeof(*regs))) + return ptracePtr(PTRACE_SETREGSET, pid, uintptr(elfNT_PRSTATUS), unsafe.Pointer(&iov)) } func PtraceSetOptions(pid int, options int) (err error) { @@ -2420,6 +2431,21 @@ func PthreadSigmask(how int, set, oldset *Sigset_t) error { return rtSigprocmask(how, set, oldset, _C__NSIG/8) } +//sysnb getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) +//sysnb getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) + +func Getresuid() (ruid, euid, suid int) { + var r, e, s _C_int + getresuid(&r, &e, &s) + return int(r), int(e), int(s) +} + +func Getresgid() (rgid, egid, sgid int) { + var r, e, s _C_int + getresgid(&r, &e, &s) + return int(r), int(e), int(s) +} + /* * 
Unimplemented */ diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_openbsd.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_openbsd.go index f9c7a9663c6..c5f166a1152 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_openbsd.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/syscall_openbsd.go @@ -151,6 +151,21 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { return } +//sysnb getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) +//sysnb getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) + +func Getresuid() (ruid, euid, suid int) { + var r, e, s _C_int + getresuid(&r, &e, &s) + return int(r), int(e), int(s) +} + +func Getresgid() (rgid, egid, sgid int) { + var r, e, s _C_int + getresgid(&r, &e, &s) + return int(r), int(e), int(s) +} + //sys ioctl(fd int, req uint, arg uintptr) (err error) //sys ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) = SYS_IOCTL @@ -338,8 +353,6 @@ func Uname(uname *Utsname) error { // getgid // getitimer // getlogin -// getresgid -// getresuid // getthrid // ktrace // lfs_bmapv diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux.go index 398c37e52d6..de936b677b6 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux.go @@ -2967,6 +2967,7 @@ const ( SOL_TCP = 0x6 SOL_TIPC = 0x10f SOL_TLS = 0x11a + SOL_UDP = 0x11 SOL_X25 = 0x106 SOL_XDP = 0x11b SOMAXCONN = 0x1000 @@ -3251,6 +3252,19 @@ const ( TRACEFS_MAGIC = 0x74726163 TS_COMM_LEN = 0x20 UDF_SUPER_MAGIC = 0x15013346 + UDP_CORK = 0x1 + UDP_ENCAP = 0x64 + UDP_ENCAP_ESPINUDP = 0x2 + UDP_ENCAP_ESPINUDP_NON_IKE = 0x1 + UDP_ENCAP_GTP0 = 0x4 + UDP_ENCAP_GTP1U = 0x5 + UDP_ENCAP_L2TPINUDP = 0x3 + UDP_GRO = 0x68 + UDP_NO_CHECK6_RX = 0x66 + UDP_NO_CHECK6_TX = 0x65 + UDP_SEGMENT = 0x67 + UDP_V4_FLOW = 0x2 + UDP_V6_FLOW = 0x6 UMOUNT_NOFOLLOW = 0x8 
USBDEVICE_SUPER_MAGIC = 0x9fa2 UTIME_NOW = 0x3fffffff diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go index f619252691e..48984202c65 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go @@ -329,6 +329,54 @@ const ( SCM_WIFI_STATUS = 0x25 SFD_CLOEXEC = 0x400000 SFD_NONBLOCK = 0x4000 + SF_FP = 0x38 + SF_I0 = 0x20 + SF_I1 = 0x24 + SF_I2 = 0x28 + SF_I3 = 0x2c + SF_I4 = 0x30 + SF_I5 = 0x34 + SF_L0 = 0x0 + SF_L1 = 0x4 + SF_L2 = 0x8 + SF_L3 = 0xc + SF_L4 = 0x10 + SF_L5 = 0x14 + SF_L6 = 0x18 + SF_L7 = 0x1c + SF_PC = 0x3c + SF_RETP = 0x40 + SF_V9_FP = 0x70 + SF_V9_I0 = 0x40 + SF_V9_I1 = 0x48 + SF_V9_I2 = 0x50 + SF_V9_I3 = 0x58 + SF_V9_I4 = 0x60 + SF_V9_I5 = 0x68 + SF_V9_L0 = 0x0 + SF_V9_L1 = 0x8 + SF_V9_L2 = 0x10 + SF_V9_L3 = 0x18 + SF_V9_L4 = 0x20 + SF_V9_L5 = 0x28 + SF_V9_L6 = 0x30 + SF_V9_L7 = 0x38 + SF_V9_PC = 0x78 + SF_V9_RETP = 0x80 + SF_V9_XARG0 = 0x88 + SF_V9_XARG1 = 0x90 + SF_V9_XARG2 = 0x98 + SF_V9_XARG3 = 0xa0 + SF_V9_XARG4 = 0xa8 + SF_V9_XARG5 = 0xb0 + SF_V9_XXARG = 0xb8 + SF_XARG0 = 0x44 + SF_XARG1 = 0x48 + SF_XARG2 = 0x4c + SF_XARG3 = 0x50 + SF_XARG4 = 0x54 + SF_XARG5 = 0x58 + SF_XXARG = 0x5c SIOCATMARK = 0x8905 SIOCGPGRP = 0x8904 SIOCGSTAMPNS_NEW = 0x40108907 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_linux.go index da63d9d7822..722c29a0087 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_linux.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_linux.go @@ -2172,3 +2172,17 @@ func rtSigprocmask(how int, set *Sigset_t, oldset *Sigset_t, sigsetsize uintptr) } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + 
RawSyscallNoError(SYS_GETRESUID, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + RawSyscallNoError(SYS_GETRESGID, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go index 6699a783e1f..9ab9abf7215 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s index 
04f0de34b2e..3dcacd30d7e 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s @@ -158,6 +158,16 @@ TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $4 DATA ·libc_getcwd_trampoline_addr(SB)/4, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresuid(SB) +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $4 +DATA ·libc_getresuid_trampoline_addr(SB)/4, $libc_getresuid_trampoline<>(SB) + +TEXT libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresgid(SB) +GLOBL ·libc_getresgid_trampoline_addr(SB), RODATA, $4 +DATA ·libc_getresgid_trampoline_addr(SB)/4, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $4 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go index 1e775fe0571..915761eab77 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return 
+} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { @@ -527,6 +549,12 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +var libc_ioctl_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_ioctl ioctl "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { @@ -535,10 +563,6 @@ func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { return } -var libc_ioctl_trampoline_addr uintptr - -//go:cgo_import_dynamic libc_ioctl ioctl "libc.so" - // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s index 27b6f4df74f..2763620b01a 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s @@ -158,6 +158,16 @@ TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresuid(SB) +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresuid_trampoline_addr(SB)/8, $libc_getresuid_trampoline<>(SB) + +TEXT libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresgid(SB) +GLOBL ·libc_getresgid_trampoline_addr(SB), 
RODATA, $8 +DATA ·libc_getresgid_trampoline_addr(SB)/8, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go index 7f6427899a5..8e87fdf153f 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s index b797045fd2d..c922314048f 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s @@ -158,6 +158,16 @@ 
TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $4 DATA ·libc_getcwd_trampoline_addr(SB)/4, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresuid(SB) +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $4 +DATA ·libc_getresuid_trampoline_addr(SB)/4, $libc_getresuid_trampoline<>(SB) + +TEXT libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresgid(SB) +GLOBL ·libc_getresgid_trampoline_addr(SB), RODATA, $4 +DATA ·libc_getresgid_trampoline_addr(SB)/4, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $4 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go index 756ef7b1736..12a7a2160e0 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req 
uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s index a871266221e..a6bc32c9220 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s @@ -158,6 +158,16 @@ TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresuid(SB) +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresuid_trampoline_addr(SB)/8, $libc_getresuid_trampoline<>(SB) + +TEXT libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresgid(SB) +GLOBL ·libc_getresgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresgid_trampoline_addr(SB)/8, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go index 7bc2e24eb95..b19e8aa031d 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic 
libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s index 05d4bffd791..b4e7bceabf3 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s @@ -158,6 +158,16 @@ TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresuid(SB) +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresuid_trampoline_addr(SB)/8, $libc_getresuid_trampoline<>(SB) + +TEXT libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresgid(SB) +GLOBL ·libc_getresgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresgid_trampoline_addr(SB)/8, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go index 739be6217a3..fb99594c937 100644 --- 
a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s index 74a25f8d643..ca3f766009c 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s @@ -189,6 +189,18 @@ TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + CALL libc_getresuid(SB) + RET +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresuid_trampoline_addr(SB)/8, $libc_getresuid_trampoline<>(SB) + +TEXT 
libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + CALL libc_getresgid(SB) + RET +GLOBL ·libc_getresgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresgid_trampoline_addr(SB)/8, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 CALL libc_ioctl(SB) RET diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go index 7d95a197803..32cbbbc52b5 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go @@ -519,6 +519,28 @@ var libc_getcwd_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func getresuid(ruid *_C_int, euid *_C_int, suid *_C_int) { + syscall_rawSyscall(libc_getresuid_trampoline_addr, uintptr(unsafe.Pointer(ruid)), uintptr(unsafe.Pointer(euid)), uintptr(unsafe.Pointer(suid))) + return +} + +var libc_getresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresuid getresuid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func getresgid(rgid *_C_int, egid *_C_int, sgid *_C_int) { + syscall_rawSyscall(libc_getresgid_trampoline_addr, uintptr(unsafe.Pointer(rgid)), uintptr(unsafe.Pointer(egid)), uintptr(unsafe.Pointer(sgid))) + return +} + +var libc_getresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getresgid getresgid "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s index 990be245740..477a7d5b21e 100644 --- 
a/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s @@ -158,6 +158,16 @@ TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) +TEXT libc_getresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresuid(SB) +GLOBL ·libc_getresuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresuid_trampoline_addr(SB)/8, $libc_getresuid_trampoline<>(SB) + +TEXT libc_getresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getresgid(SB) +GLOBL ·libc_getresgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getresgid_trampoline_addr(SB)/8, $libc_getresgid_trampoline<>(SB) + TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 diff --git a/.ci/providerlint/vendor/golang.org/x/sys/unix/ztypes_linux.go b/.ci/providerlint/vendor/golang.org/x/sys/unix/ztypes_linux.go index ca84727cfe8..00c3b8c20f3 100644 --- a/.ci/providerlint/vendor/golang.org/x/sys/unix/ztypes_linux.go +++ b/.ci/providerlint/vendor/golang.org/x/sys/unix/ztypes_linux.go @@ -2555,6 +2555,11 @@ const ( BPF_REG_8 = 0x8 BPF_REG_9 = 0x9 BPF_REG_10 = 0xa + BPF_CGROUP_ITER_ORDER_UNSPEC = 0x0 + BPF_CGROUP_ITER_SELF_ONLY = 0x1 + BPF_CGROUP_ITER_DESCENDANTS_PRE = 0x2 + BPF_CGROUP_ITER_DESCENDANTS_POST = 0x3 + BPF_CGROUP_ITER_ANCESTORS_UP = 0x4 BPF_MAP_CREATE = 0x0 BPF_MAP_LOOKUP_ELEM = 0x1 BPF_MAP_UPDATE_ELEM = 0x2 @@ -2566,6 +2571,7 @@ const ( BPF_PROG_ATTACH = 0x8 BPF_PROG_DETACH = 0x9 BPF_PROG_TEST_RUN = 0xa + BPF_PROG_RUN = 0xa BPF_PROG_GET_NEXT_ID = 0xb BPF_MAP_GET_NEXT_ID = 0xc BPF_PROG_GET_FD_BY_ID = 0xd @@ -2610,6 +2616,7 @@ const ( BPF_MAP_TYPE_CPUMAP = 0x10 BPF_MAP_TYPE_XSKMAP = 0x11 BPF_MAP_TYPE_SOCKHASH = 0x12 + BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 0x13 BPF_MAP_TYPE_CGROUP_STORAGE = 0x13 BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 0x14 
BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE = 0x15 @@ -2620,6 +2627,10 @@ const ( BPF_MAP_TYPE_STRUCT_OPS = 0x1a BPF_MAP_TYPE_RINGBUF = 0x1b BPF_MAP_TYPE_INODE_STORAGE = 0x1c + BPF_MAP_TYPE_TASK_STORAGE = 0x1d + BPF_MAP_TYPE_BLOOM_FILTER = 0x1e + BPF_MAP_TYPE_USER_RINGBUF = 0x1f + BPF_MAP_TYPE_CGRP_STORAGE = 0x20 BPF_PROG_TYPE_UNSPEC = 0x0 BPF_PROG_TYPE_SOCKET_FILTER = 0x1 BPF_PROG_TYPE_KPROBE = 0x2 @@ -2651,6 +2662,7 @@ const ( BPF_PROG_TYPE_EXT = 0x1c BPF_PROG_TYPE_LSM = 0x1d BPF_PROG_TYPE_SK_LOOKUP = 0x1e + BPF_PROG_TYPE_SYSCALL = 0x1f BPF_CGROUP_INET_INGRESS = 0x0 BPF_CGROUP_INET_EGRESS = 0x1 BPF_CGROUP_INET_SOCK_CREATE = 0x2 @@ -2689,6 +2701,12 @@ const ( BPF_XDP_CPUMAP = 0x23 BPF_SK_LOOKUP = 0x24 BPF_XDP = 0x25 + BPF_SK_SKB_VERDICT = 0x26 + BPF_SK_REUSEPORT_SELECT = 0x27 + BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 0x28 + BPF_PERF_EVENT = 0x29 + BPF_TRACE_KPROBE_MULTI = 0x2a + BPF_LSM_CGROUP = 0x2b BPF_LINK_TYPE_UNSPEC = 0x0 BPF_LINK_TYPE_RAW_TRACEPOINT = 0x1 BPF_LINK_TYPE_TRACING = 0x2 @@ -2696,6 +2714,9 @@ const ( BPF_LINK_TYPE_ITER = 0x4 BPF_LINK_TYPE_NETNS = 0x5 BPF_LINK_TYPE_XDP = 0x6 + BPF_LINK_TYPE_PERF_EVENT = 0x7 + BPF_LINK_TYPE_KPROBE_MULTI = 0x8 + BPF_LINK_TYPE_STRUCT_OPS = 0x9 BPF_ANY = 0x0 BPF_NOEXIST = 0x1 BPF_EXIST = 0x2 @@ -2733,6 +2754,7 @@ const ( BPF_F_ZERO_CSUM_TX = 0x2 BPF_F_DONT_FRAGMENT = 0x4 BPF_F_SEQ_NUMBER = 0x8 + BPF_F_TUNINFO_FLAGS = 0x10 BPF_F_INDEX_MASK = 0xffffffff BPF_F_CURRENT_CPU = 0xffffffff BPF_F_CTXLEN_MASK = 0xfffff00000000 @@ -2747,6 +2769,7 @@ const ( BPF_F_ADJ_ROOM_ENCAP_L4_GRE = 0x8 BPF_F_ADJ_ROOM_ENCAP_L4_UDP = 0x10 BPF_F_ADJ_ROOM_NO_CSUM_RESET = 0x20 + BPF_F_ADJ_ROOM_ENCAP_L2_ETH = 0x40 BPF_ADJ_ROOM_ENCAP_L2_MASK = 0xff BPF_ADJ_ROOM_ENCAP_L2_SHIFT = 0x38 BPF_F_SYSCTL_BASE_NAME = 0x1 @@ -2771,10 +2794,16 @@ const ( BPF_LWT_ENCAP_SEG6 = 0x0 BPF_LWT_ENCAP_SEG6_INLINE = 0x1 BPF_LWT_ENCAP_IP = 0x2 + BPF_F_BPRM_SECUREEXEC = 0x1 + BPF_F_BROADCAST = 0x8 + BPF_F_EXCLUDE_INGRESS = 0x10 + BPF_SKB_TSTAMP_UNSPEC = 0x0 + 
BPF_SKB_TSTAMP_DELIVERY_MONO = 0x1 BPF_OK = 0x0 BPF_DROP = 0x2 BPF_REDIRECT = 0x7 BPF_LWT_REROUTE = 0x80 + BPF_FLOW_DISSECTOR_CONTINUE = 0x81 BPF_SOCK_OPS_RTO_CB_FLAG = 0x1 BPF_SOCK_OPS_RETRANS_CB_FLAG = 0x2 BPF_SOCK_OPS_STATE_CB_FLAG = 0x4 @@ -2838,6 +2867,10 @@ const ( BPF_FIB_LKUP_RET_UNSUPP_LWT = 0x6 BPF_FIB_LKUP_RET_NO_NEIGH = 0x7 BPF_FIB_LKUP_RET_FRAG_NEEDED = 0x8 + BPF_MTU_CHK_SEGS = 0x1 + BPF_MTU_CHK_RET_SUCCESS = 0x0 + BPF_MTU_CHK_RET_FRAG_NEEDED = 0x1 + BPF_MTU_CHK_RET_SEGS_TOOBIG = 0x2 BPF_FD_TYPE_RAW_TRACEPOINT = 0x0 BPF_FD_TYPE_TRACEPOINT = 0x1 BPF_FD_TYPE_KPROBE = 0x2 @@ -2847,6 +2880,19 @@ const ( BPF_FLOW_DISSECTOR_F_PARSE_1ST_FRAG = 0x1 BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL = 0x2 BPF_FLOW_DISSECTOR_F_STOP_AT_ENCAP = 0x4 + BPF_CORE_FIELD_BYTE_OFFSET = 0x0 + BPF_CORE_FIELD_BYTE_SIZE = 0x1 + BPF_CORE_FIELD_EXISTS = 0x2 + BPF_CORE_FIELD_SIGNED = 0x3 + BPF_CORE_FIELD_LSHIFT_U64 = 0x4 + BPF_CORE_FIELD_RSHIFT_U64 = 0x5 + BPF_CORE_TYPE_ID_LOCAL = 0x6 + BPF_CORE_TYPE_ID_TARGET = 0x7 + BPF_CORE_TYPE_EXISTS = 0x8 + BPF_CORE_TYPE_SIZE = 0x9 + BPF_CORE_ENUMVAL_EXISTS = 0xa + BPF_CORE_ENUMVAL_VALUE = 0xb + BPF_CORE_TYPE_MATCHES = 0xc ) const ( diff --git a/.ci/providerlint/vendor/google.golang.org/appengine/datastore/save.go b/.ci/providerlint/vendor/google.golang.org/appengine/datastore/save.go index 7b045a59556..b9a2a8f07a7 100644 --- a/.ci/providerlint/vendor/google.golang.org/appengine/datastore/save.go +++ b/.ci/providerlint/vendor/google.golang.org/appengine/datastore/save.go @@ -310,14 +310,14 @@ func propertiesToProto(defaultAppID string, key *Key, props []Property) (*pb.Ent func isEmptyValue(v reflect.Value) bool { switch v.Kind() { case reflect.Array, reflect.Map, reflect.Slice, reflect.String: - // TODO(perfomance): Only reflect.String needed, other property types are not supported (copy/paste from json package) + // TODO(performance): Only reflect.String needed, other property types are not supported (copy/paste from json package) return v.Len() == 
0 case reflect.Bool: return !v.Bool() case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: return v.Int() == 0 case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: - // TODO(perfomance): Uint* are unsupported property types - should be removed (copy/paste from json package) + // TODO(performance): Uint* are unsupported property types - should be removed (copy/paste from json package) return v.Uint() == 0 case reflect.Float32, reflect.Float64: return v.Float() == 0 diff --git a/.ci/providerlint/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go b/.ci/providerlint/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go index e79a5388465..a6b5081888b 100644 --- a/.ci/providerlint/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go @@ -1,4 +1,4 @@ -// Copyright 2020 Google LLC +// Copyright 2022 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -14,8 +14,8 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.25.0 -// protoc v3.13.0 +// protoc-gen-go v1.26.0 +// protoc v3.21.9 // source: google/rpc/status.proto package status @@ -24,7 +24,6 @@ import ( reflect "reflect" sync "sync" - proto "github.com/golang/protobuf/proto" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" anypb "google.golang.org/protobuf/types/known/anypb" @@ -37,10 +36,6 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. 
-const _ = proto.ProtoPackageIsVersion4 - // The `Status` type defines a logical error model that is suitable for // different programming environments, including REST APIs and RPC APIs. It is // used by [gRPC](https://github.com/grpc). Each `Status` message contains @@ -53,11 +48,13 @@ type Status struct { sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields - // The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code]. + // The status code, which should be an enum value of + // [google.rpc.Code][google.rpc.Code]. Code int32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"` // A developer-facing error message, which should be in English. Any // user-facing error message should be localized and sent in the - // [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client. + // [google.rpc.Status.details][google.rpc.Status.details] field, or localized + // by the client. Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"` // A list of messages that carry the error details. There is a common set of // message types for APIs to use. diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/CONTRIBUTING.md b/.ci/providerlint/vendor/google.golang.org/grpc/CONTRIBUTING.md index 52338d004ce..608aa6e1ac5 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/CONTRIBUTING.md +++ b/.ci/providerlint/vendor/google.golang.org/grpc/CONTRIBUTING.md @@ -20,6 +20,15 @@ How to get your contributions merged smoothly and quickly. both author's & review's time is wasted. Create more PRs to address different concerns and everyone will be happy. +- If you are searching for features to work on, issues labeled [Status: Help + Wanted](https://github.com/grpc/grpc-go/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22Status%3A+Help+Wanted%22) + is a great place to start. 
These issues are well-documented and usually can be + resolved with a single pull request. + +- If you are adding a new file, make sure it has the copyright message template + at the top as a comment. You can copy over the message from an existing file + and update the year. + - The grpc package should only depend on standard Go packages and a small number of exceptions. If your contribution introduces new dependencies which are NOT in the [list](https://godoc.org/google.golang.org/grpc?imports), you need a @@ -32,14 +41,18 @@ How to get your contributions merged smoothly and quickly. - Provide a good **PR description** as a record of **what** change is being made and **why** it was made. Link to a github issue if it exists. -- Don't fix code style and formatting unless you are already changing that line - to address an issue. PRs with irrelevant changes won't be merged. If you do - want to fix formatting or style, do that in a separate PR. +- If you want to fix formatting or style, consider whether your changes are an + obvious improvement or might be considered a personal preference. If a style + change is based on preference, it likely will not be accepted. If it corrects + widely agreed-upon anti-patterns, then please do create a PR and explain the + benefits of the change. - Unless your PR is trivial, you should expect there will be reviewer comments - that you'll need to address before merging. We expect you to be reasonably - responsive to those comments, otherwise the PR will be closed after 2-3 weeks - of inactivity. + that you'll need to address before merging. We'll mark it as `Status: Requires + Reporter Clarification` if we expect you to respond to these comments in a + timely manner. If the PR remains inactive for 6 days, it will be marked as + `stale` and automatically close 7 days after that if we don't hear back from + you. - Maintain **clean commit history** and use **meaningful commit messages**. 
PRs with messy commit history are difficult to review and won't be merged. Use diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/attributes/attributes.go b/.ci/providerlint/vendor/google.golang.org/grpc/attributes/attributes.go index 02f5dc53189..3efca459149 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/attributes/attributes.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/attributes/attributes.go @@ -25,6 +25,11 @@ // later release. package attributes +import ( + "fmt" + "strings" +) + // Attributes is an immutable struct for storing and retrieving generic // key/value pairs. Keys must be hashable, and users should define their own // types for keys. Values should not be modified after they are added to an @@ -99,3 +104,27 @@ func (a *Attributes) Equal(o *Attributes) bool { } return true } + +// String prints the attribute map. If any key or values throughout the map +// implement fmt.Stringer, it calls that method and appends. +func (a *Attributes) String() string { + var sb strings.Builder + sb.WriteString("{") + first := true + for k, v := range a.m { + var key, val string + if str, ok := k.(interface{ String() string }); ok { + key = str.String() + } + if str, ok := v.(interface{ String() string }); ok { + val = str.String() + } + if !first { + sb.WriteString(", ") + } + sb.WriteString(fmt.Sprintf("%q: %q, ", key, val)) + first = false + } + sb.WriteString("}") + return sb.String() +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/balancer/balancer.go b/.ci/providerlint/vendor/google.golang.org/grpc/balancer/balancer.go index 392b21fb2d8..8f00523c0e2 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/balancer/balancer.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/balancer/balancer.go @@ -279,6 +279,14 @@ type PickResult struct { // type, Done may not be called. May be nil if the balancer does not wish // to be notified when the RPC completes. 
Done func(DoneInfo) + + // Metadata provides a way for LB policies to inject arbitrary per-call + // metadata. Any metadata returned here will be merged with existing + // metadata added by the client application. + // + // LB policies with child policies are responsible for propagating metadata + // injected by their children to the ClientConn, as part of Pick(). + Metadata metadata.MD } // TransientFailureError returns e. It exists for backward compatibility and diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/balancer_conn_wrappers.go b/.ci/providerlint/vendor/google.golang.org/grpc/balancer_conn_wrappers.go index 0359956d36f..04b9ad41169 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/balancer_conn_wrappers.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/balancer_conn_wrappers.go @@ -25,14 +25,20 @@ import ( "sync" "google.golang.org/grpc/balancer" - "google.golang.org/grpc/codes" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/internal/balancer/gracefulswitch" - "google.golang.org/grpc/internal/buffer" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/resolver" - "google.golang.org/grpc/status" +) + +type ccbMode int + +const ( + ccbModeActive = iota + ccbModeIdle + ccbModeClosed + ccbModeExitingIdle ) // ccBalancerWrapper sits between the ClientConn and the Balancer. @@ -49,192 +55,101 @@ import ( // It uses the gracefulswitch.Balancer internally to ensure that balancer // switches happen in a graceful manner. type ccBalancerWrapper struct { - cc *ClientConn - - // Since these fields are accessed only from handleXxx() methods which are - // synchronized by the watcher goroutine, we do not need a mutex to protect - // these fields. + // The following fields are initialized when the wrapper is created and are + // read-only afterwards, and therefore can be accessed without a mutex. 
+ cc *ClientConn + opts balancer.BuildOptions + + // Outgoing (gRPC --> balancer) calls are guaranteed to execute in a + // mutually exclusive manner as they are scheduled in the serializer. Fields + // accessed *only* in these serializer callbacks, can therefore be accessed + // without a mutex. balancer *gracefulswitch.Balancer curBalancerName string - updateCh *buffer.Unbounded // Updates written on this channel are processed by watcher(). - resultCh *buffer.Unbounded // Results of calls to UpdateClientConnState() are pushed here. - closed *grpcsync.Event // Indicates if close has been called. - done *grpcsync.Event // Indicates if close has completed its work. + // mu guards access to the below fields. Access to the serializer and its + // cancel function needs to be mutex protected because they are overwritten + // when the wrapper exits idle mode. + mu sync.Mutex + serializer *grpcsync.CallbackSerializer // To serialize all outgoing calls. + serializerCancel context.CancelFunc // To close the serializer at close/enterIdle time. + mode ccbMode // Tracks the current mode of the wrapper. } // newCCBalancerWrapper creates a new balancer wrapper. The underlying balancer // is not created until the switchTo() method is invoked. func newCCBalancerWrapper(cc *ClientConn, bopts balancer.BuildOptions) *ccBalancerWrapper { + ctx, cancel := context.WithCancel(context.Background()) ccb := &ccBalancerWrapper{ - cc: cc, - updateCh: buffer.NewUnbounded(), - resultCh: buffer.NewUnbounded(), - closed: grpcsync.NewEvent(), - done: grpcsync.NewEvent(), + cc: cc, + opts: bopts, + serializer: grpcsync.NewCallbackSerializer(ctx), + serializerCancel: cancel, } - go ccb.watcher() ccb.balancer = gracefulswitch.NewBalancer(ccb, bopts) return ccb } -// The following xxxUpdate structs wrap the arguments received as part of the -// corresponding update. The watcher goroutine uses the 'type' of the update to -// invoke the appropriate handler routine to handle the update.
- -type ccStateUpdate struct { - ccs *balancer.ClientConnState -} - -type scStateUpdate struct { - sc balancer.SubConn - state connectivity.State - err error -} - -type exitIdleUpdate struct{} - -type resolverErrorUpdate struct { - err error -} - -type switchToUpdate struct { - name string -} - -type subConnUpdate struct { - acbw *acBalancerWrapper -} - -// watcher is a long-running goroutine which reads updates from a channel and -// invokes corresponding methods on the underlying balancer. It ensures that -// these methods are invoked in a synchronous fashion. It also ensures that -// these methods are invoked in the order in which the updates were received. -func (ccb *ccBalancerWrapper) watcher() { - for { - select { - case u := <-ccb.updateCh.Get(): - ccb.updateCh.Load() - if ccb.closed.HasFired() { - break - } - switch update := u.(type) { - case *ccStateUpdate: - ccb.handleClientConnStateChange(update.ccs) - case *scStateUpdate: - ccb.handleSubConnStateChange(update) - case *exitIdleUpdate: - ccb.handleExitIdle() - case *resolverErrorUpdate: - ccb.handleResolverError(update.err) - case *switchToUpdate: - ccb.handleSwitchTo(update.name) - case *subConnUpdate: - ccb.handleRemoveSubConn(update.acbw) - default: - logger.Errorf("ccBalancerWrapper.watcher: unknown update %+v, type %T", update, update) - } - case <-ccb.closed.Done(): - } - - if ccb.closed.HasFired() { - ccb.handleClose() - return - } - } -} - // updateClientConnState is invoked by grpc to push a ClientConnState update to // the underlying balancer. -// -// Unlike other methods invoked by grpc to push updates to the underlying -// balancer, this method cannot simply push the update onto the update channel -// and return. It needs to return the error returned by the underlying balancer -// back to grpc which propagates that to the resolver. 
func (ccb *ccBalancerWrapper) updateClientConnState(ccs *balancer.ClientConnState) error { - ccb.updateCh.Put(&ccStateUpdate{ccs: ccs}) - - var res interface{} - select { - case res = <-ccb.resultCh.Get(): - ccb.resultCh.Load() - case <-ccb.closed.Done(): - // Return early if the balancer wrapper is closed while we are waiting for - // the underlying balancer to process a ClientConnState update. - return nil - } - // If the returned error is nil, attempting to type assert to error leads to - // panic. So, this needs to handled separately. - if res == nil { - return nil - } - return res.(error) -} - -// handleClientConnStateChange handles a ClientConnState update from the update -// channel and invokes the appropriate method on the underlying balancer. -// -// If the addresses specified in the update contain addresses of type "grpclb" -// and the selected LB policy is not "grpclb", these addresses will be filtered -// out and ccs will be modified with the updated address list. -func (ccb *ccBalancerWrapper) handleClientConnStateChange(ccs *balancer.ClientConnState) { - if ccb.curBalancerName != grpclbName { - // Filter any grpclb addresses since we don't have the grpclb balancer. - var addrs []resolver.Address - for _, addr := range ccs.ResolverState.Addresses { - if addr.Type == resolver.GRPCLB { - continue + ccb.mu.Lock() + errCh := make(chan error, 1) + // Here and everywhere else where Schedule() is called, it is done with the + // lock held. But the lock guards only the scheduling part. The actual + // callback is called asynchronously without the lock being held. + ok := ccb.serializer.Schedule(func(_ context.Context) { + // If the addresses specified in the update contain addresses of type + // "grpclb" and the selected LB policy is not "grpclb", these addresses + // will be filtered out and ccs will be modified with the updated + // address list. 
+ if ccb.curBalancerName != grpclbName { + var addrs []resolver.Address + for _, addr := range ccs.ResolverState.Addresses { + if addr.Type == resolver.GRPCLB { + continue + } + addrs = append(addrs, addr) } - addrs = append(addrs, addr) + ccs.ResolverState.Addresses = addrs } - ccs.ResolverState.Addresses = addrs + errCh <- ccb.balancer.UpdateClientConnState(*ccs) + }) + if !ok { + // If we are unable to schedule a function with the serializer, it + // indicates that it has been closed. A serializer is only closed when + // the wrapper is closed or is in idle. + ccb.mu.Unlock() + return fmt.Errorf("grpc: cannot send state update to a closed or idle balancer") } - ccb.resultCh.Put(ccb.balancer.UpdateClientConnState(*ccs)) + ccb.mu.Unlock() + + // We get here only if the above call to Schedule succeeds, in which case it + // is guaranteed that the scheduled function will run. Therefore it is safe + // to block on this channel. + err := <-errCh + if logger.V(2) && err != nil { + logger.Infof("error from balancer.UpdateClientConnState: %v", err) + } + return err } // updateSubConnState is invoked by grpc to push a subConn state update to the // underlying balancer. func (ccb *ccBalancerWrapper) updateSubConnState(sc balancer.SubConn, s connectivity.State, err error) { - // When updating addresses for a SubConn, if the address in use is not in - // the new addresses, the old ac will be tearDown() and a new ac will be - // created. tearDown() generates a state change with Shutdown state, we - // don't want the balancer to receive this state change. So before - // tearDown() on the old ac, ac.acbw (acWrapper) will be set to nil, and - // this function will be called with (nil, Shutdown). We don't need to call - // balancer method in this case. 
- if sc == nil { - return - } - ccb.updateCh.Put(&scStateUpdate{ - sc: sc, - state: s, - err: err, + ccb.mu.Lock() + ccb.serializer.Schedule(func(_ context.Context) { + ccb.balancer.UpdateSubConnState(sc, balancer.SubConnState{ConnectivityState: s, ConnectionError: err}) }) -} - -// handleSubConnStateChange handles a SubConnState update from the update -// channel and invokes the appropriate method on the underlying balancer. -func (ccb *ccBalancerWrapper) handleSubConnStateChange(update *scStateUpdate) { - ccb.balancer.UpdateSubConnState(update.sc, balancer.SubConnState{ConnectivityState: update.state, ConnectionError: update.err}) -} - -func (ccb *ccBalancerWrapper) exitIdle() { - ccb.updateCh.Put(&exitIdleUpdate{}) -} - -func (ccb *ccBalancerWrapper) handleExitIdle() { - if ccb.cc.GetState() != connectivity.Idle { - return - } - ccb.balancer.ExitIdle() + ccb.mu.Unlock() } func (ccb *ccBalancerWrapper) resolverError(err error) { - ccb.updateCh.Put(&resolverErrorUpdate{err: err}) -} - -func (ccb *ccBalancerWrapper) handleResolverError(err error) { - ccb.balancer.ResolverError(err) + ccb.mu.Lock() + ccb.serializer.Schedule(func(_ context.Context) { + ccb.balancer.ResolverError(err) + }) + ccb.mu.Unlock() } // switchTo is invoked by grpc to instruct the balancer wrapper to switch to the @@ -248,24 +163,27 @@ func (ccb *ccBalancerWrapper) handleResolverError(err error) { // the ccBalancerWrapper keeps track of the current LB policy name, and skips // the graceful balancer switching process if the name does not change. func (ccb *ccBalancerWrapper) switchTo(name string) { - ccb.updateCh.Put(&switchToUpdate{name: name}) + ccb.mu.Lock() + ccb.serializer.Schedule(func(_ context.Context) { + // TODO: Other languages use case-sensitive balancer registries. We should + // switch as well. See: https://github.com/grpc/grpc-go/issues/5288. 
+ if strings.EqualFold(ccb.curBalancerName, name) { + return + } + ccb.buildLoadBalancingPolicy(name) + }) + ccb.mu.Unlock() } -// handleSwitchTo handles a balancer switch update from the update channel. It -// calls the SwitchTo() method on the gracefulswitch.Balancer with a -// balancer.Builder corresponding to name. If no balancer.Builder is registered -// for the given name, it uses the default LB policy which is "pick_first". -func (ccb *ccBalancerWrapper) handleSwitchTo(name string) { - // TODO: Other languages use case-insensitive balancer registries. We should - // switch as well. See: https://github.com/grpc/grpc-go/issues/5288. - if strings.EqualFold(ccb.curBalancerName, name) { - return - } - - // TODO: Ensure that name is a registered LB policy when we get here. - // We currently only validate the `loadBalancingConfig` field. We need to do - // the same for the `loadBalancingPolicy` field and reject the service config - // if the specified policy is not registered. +// buildLoadBalancingPolicy performs the following: +// - retrieve a balancer builder for the given name. Use the default LB +// policy, pick_first, if no LB policy with name is found in the registry. +// - instruct the gracefulswitch balancer to switch to the above builder. This +// will actually build the new balancer. +// - update the `curBalancerName` field +// +// Must be called from a serializer callback. +func (ccb *ccBalancerWrapper) buildLoadBalancingPolicy(name string) { builder := balancer.Get(name) if builder == nil { channelz.Warningf(logger, ccb.cc.channelzID, "Channel switches to new LB policy %q, since the specified LB policy %q was not registered", PickFirstBalancerName, name) @@ -281,26 +199,114 @@ func (ccb *ccBalancerWrapper) handleSwitchTo(name string) { ccb.curBalancerName = builder.Name() } -// handleRemoveSucConn handles a request from the underlying balancer to remove -// a subConn. -// -// See comments in RemoveSubConn() for more details. 
-func (ccb *ccBalancerWrapper) handleRemoveSubConn(acbw *acBalancerWrapper) { - ccb.cc.removeAddrConn(acbw.getAddrConn(), errConnDrain) +func (ccb *ccBalancerWrapper) close() { + channelz.Info(logger, ccb.cc.channelzID, "ccBalancerWrapper: closing") + ccb.closeBalancer(ccbModeClosed) } -func (ccb *ccBalancerWrapper) close() { - ccb.closed.Fire() - <-ccb.done.Done() +// enterIdleMode is invoked by grpc when the channel enters idle mode upon +// expiry of idle_timeout. This call blocks until the balancer is closed. +func (ccb *ccBalancerWrapper) enterIdleMode() { + channelz.Info(logger, ccb.cc.channelzID, "ccBalancerWrapper: entering idle mode") + ccb.closeBalancer(ccbModeIdle) +} + +// closeBalancer is invoked when the channel is being closed or when it enters +// idle mode upon expiry of idle_timeout. +func (ccb *ccBalancerWrapper) closeBalancer(m ccbMode) { + ccb.mu.Lock() + if ccb.mode == ccbModeClosed || ccb.mode == ccbModeIdle { + ccb.mu.Unlock() + return + } + + ccb.mode = m + done := ccb.serializer.Done + b := ccb.balancer + ok := ccb.serializer.Schedule(func(_ context.Context) { + // Close the serializer to ensure that no more calls from gRPC are sent + // to the balancer. + ccb.serializerCancel() + // Empty the current balancer name because we don't have a balancer + // anymore and also so that we act on the next call to switchTo by + // creating a new balancer specified by the new resolver. + ccb.curBalancerName = "" + }) + if !ok { + ccb.mu.Unlock() + return + } + ccb.mu.Unlock() + + // Give enqueued callbacks a chance to finish. + <-done + // Spawn a goroutine to close the balancer (since it may block trying to + // cleanup all allocated resources) and return early. + go b.Close() } -func (ccb *ccBalancerWrapper) handleClose() { - ccb.balancer.Close() - ccb.done.Fire() +// exitIdleMode is invoked by grpc when the channel exits idle mode either +// because of an RPC or because of an invocation of the Connect() API. 
This +// recreates the balancer that was closed previously when entering idle mode. +// +// If the channel is not in idle mode, we know for a fact that we are here as a +// result of the user calling the Connect() method on the ClientConn. In this +// case, we can simply forward the call to the underlying balancer, instructing +// it to reconnect to the backends. +func (ccb *ccBalancerWrapper) exitIdleMode() { + ccb.mu.Lock() + if ccb.mode == ccbModeClosed { + // Request to exit idle is a no-op when wrapper is already closed. + ccb.mu.Unlock() + return + } + + if ccb.mode == ccbModeIdle { + // Recreate the serializer which was closed when we entered idle. + ctx, cancel := context.WithCancel(context.Background()) + ccb.serializer = grpcsync.NewCallbackSerializer(ctx) + ccb.serializerCancel = cancel + } + + // The ClientConn guarantees that mutual exclusion between close() and + // exitIdleMode(), and since we just created a new serializer, we can be + // sure that the below function will be scheduled. + done := make(chan struct{}) + ccb.serializer.Schedule(func(_ context.Context) { + defer close(done) + + ccb.mu.Lock() + defer ccb.mu.Unlock() + + if ccb.mode != ccbModeIdle { + ccb.balancer.ExitIdle() + return + } + + // Gracefulswitch balancer does not support a switchTo operation after + // being closed. Hence we need to create a new one here. 
+ ccb.balancer = gracefulswitch.NewBalancer(ccb, ccb.opts) + ccb.mode = ccbModeActive + channelz.Info(logger, ccb.cc.channelzID, "ccBalancerWrapper: exiting idle mode") + + }) + ccb.mu.Unlock() + + <-done +} + +func (ccb *ccBalancerWrapper) isIdleOrClosed() bool { + ccb.mu.Lock() + defer ccb.mu.Unlock() + return ccb.mode == ccbModeIdle || ccb.mode == ccbModeClosed } func (ccb *ccBalancerWrapper) NewSubConn(addrs []resolver.Address, opts balancer.NewSubConnOptions) (balancer.SubConn, error) { - if len(addrs) <= 0 { + if ccb.isIdleOrClosed() { + return nil, fmt.Errorf("grpc: cannot create SubConn when balancer is closed or idle") + } + + if len(addrs) == 0 { return nil, fmt.Errorf("grpc: cannot create SubConn with empty address list") } ac, err := ccb.cc.newAddrConn(addrs, opts) @@ -309,31 +315,35 @@ func (ccb *ccBalancerWrapper) NewSubConn(addrs []resolver.Address, opts balancer return nil, err } acbw := &acBalancerWrapper{ac: ac, producers: make(map[balancer.ProducerBuilder]*refCountedProducer)} - acbw.ac.mu.Lock() ac.acbw = acbw - acbw.ac.mu.Unlock() return acbw, nil } func (ccb *ccBalancerWrapper) RemoveSubConn(sc balancer.SubConn) { - // Before we switched the ccBalancerWrapper to use gracefulswitch.Balancer, it - // was required to handle the RemoveSubConn() method asynchronously by pushing - // the update onto the update channel. This was done to avoid a deadlock as - // switchBalancer() was holding cc.mu when calling Close() on the old - // balancer, which would in turn call RemoveSubConn(). - // - // With the use of gracefulswitch.Balancer in ccBalancerWrapper, handling this - // asynchronously is probably not required anymore since the switchTo() method - // handles the balancer switch by pushing the update onto the channel. - // TODO(easwars): Handle this inline. + if ccb.isIdleOrClosed() { + // It is safe to ignore this call when the balancer is closed or in idle + // because the ClientConn takes care of closing the connections.
+ // + // Not returning early from here when the balancer is closed or in idle + // leads to a deadlock though, because of the following sequence of + // calls when holding cc.mu: + // cc.exitIdleMode --> ccb.enterIdleMode --> gsw.Close --> + // ccb.RemoveAddrConn --> cc.removeAddrConn + return + } + acbw, ok := sc.(*acBalancerWrapper) if !ok { return } - ccb.updateCh.Put(&subConnUpdate{acbw: acbw}) + ccb.cc.removeAddrConn(acbw.ac, errConnDrain) } func (ccb *ccBalancerWrapper) UpdateAddresses(sc balancer.SubConn, addrs []resolver.Address) { + if ccb.isIdleOrClosed() { + return + } + acbw, ok := sc.(*acBalancerWrapper) if !ok { return @@ -342,6 +352,10 @@ func (ccb *ccBalancerWrapper) UpdateAddresses(sc balancer.SubConn, addrs []resol } func (ccb *ccBalancerWrapper) UpdateState(s balancer.State) { + if ccb.isIdleOrClosed() { + return + } + // Update picker before updating state. Even though the ordering here does // not matter, it can lead to multiple calls of Pick in the common start-up // case where we wait for ready and then perform an RPC. If the picker is @@ -352,6 +366,10 @@ func (ccb *ccBalancerWrapper) UpdateState(s balancer.State) { } func (ccb *ccBalancerWrapper) ResolveNow(o resolver.ResolveNowOptions) { + if ccb.isIdleOrClosed() { + return + } + ccb.cc.resolveNow(o) } @@ -362,71 +380,31 @@ func (ccb *ccBalancerWrapper) Target() string { // acBalancerWrapper is a wrapper on top of ac for balancers. // It implements balancer.SubConn interface. 
type acBalancerWrapper struct { + ac *addrConn // read-only + mu sync.Mutex - ac *addrConn producers map[balancer.ProducerBuilder]*refCountedProducer } -func (acbw *acBalancerWrapper) UpdateAddresses(addrs []resolver.Address) { - acbw.mu.Lock() - defer acbw.mu.Unlock() - if len(addrs) <= 0 { - acbw.ac.cc.removeAddrConn(acbw.ac, errConnDrain) - return - } - if !acbw.ac.tryUpdateAddrs(addrs) { - cc := acbw.ac.cc - opts := acbw.ac.scopts - acbw.ac.mu.Lock() - // Set old ac.acbw to nil so the Shutdown state update will be ignored - // by balancer. - // - // TODO(bar) the state transition could be wrong when tearDown() old ac - // and creating new ac, fix the transition. - acbw.ac.acbw = nil - acbw.ac.mu.Unlock() - acState := acbw.ac.getState() - acbw.ac.cc.removeAddrConn(acbw.ac, errConnDrain) - - if acState == connectivity.Shutdown { - return - } +func (acbw *acBalancerWrapper) String() string { + return fmt.Sprintf("SubConn(id:%d)", acbw.ac.channelzID.Int()) +} - newAC, err := cc.newAddrConn(addrs, opts) - if err != nil { - channelz.Warningf(logger, acbw.ac.channelzID, "acBalancerWrapper: UpdateAddresses: failed to newAddrConn: %v", err) - return - } - acbw.ac = newAC - newAC.mu.Lock() - newAC.acbw = acbw - newAC.mu.Unlock() - if acState != connectivity.Idle { - go newAC.connect() - } - } +func (acbw *acBalancerWrapper) UpdateAddresses(addrs []resolver.Address) { + acbw.ac.updateAddrs(addrs) } func (acbw *acBalancerWrapper) Connect() { - acbw.mu.Lock() - defer acbw.mu.Unlock() go acbw.ac.connect() } -func (acbw *acBalancerWrapper) getAddrConn() *addrConn { - acbw.mu.Lock() - defer acbw.mu.Unlock() - return acbw.ac -} - -var errSubConnNotReady = status.Error(codes.Unavailable, "SubConn not currently connected") - // NewStream begins a streaming RPC on the addrConn. If the addrConn is not -// ready, returns errSubConnNotReady. +// ready, blocks until it is or ctx expires. Returns an error when the context +// expires or the addrConn is shut down. 
func (acbw *acBalancerWrapper) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error) { - transport := acbw.ac.getReadyTransport() - if transport == nil { - return nil, errSubConnNotReady + transport, err := acbw.ac.getTransport(ctx) + if err != nil { + return nil, err } return newNonRetryClientStream(ctx, desc, method, transport, acbw.ac, opts...) } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/binarylog/grpc_binarylog_v1/binarylog.pb.go b/.ci/providerlint/vendor/google.golang.org/grpc/binarylog/grpc_binarylog_v1/binarylog.pb.go index 64a232f2811..ec2c2fa14dd 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/binarylog/grpc_binarylog_v1/binarylog.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/binarylog/grpc_binarylog_v1/binarylog.pb.go @@ -18,14 +18,13 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.25.0 -// protoc v3.14.0 +// protoc-gen-go v1.30.0 +// protoc v4.22.0 // source: grpc/binlog/v1/binarylog.proto package grpc_binarylog_v1 import ( - proto "github.com/golang/protobuf/proto" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" durationpb "google.golang.org/protobuf/types/known/durationpb" @@ -41,10 +40,6 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. -const _ = proto.ProtoPackageIsVersion4 - // Enumerates the type of event // Note the terminology is different from the RPC semantics // definition, but the same meaning is expressed here. 
diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/call.go b/.ci/providerlint/vendor/google.golang.org/grpc/call.go index 9e20e4d385f..e6a1dc5d75e 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/call.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/call.go @@ -27,6 +27,11 @@ import ( // // All errors returned by Invoke are compatible with the status package. func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error { + if err := cc.idlenessMgr.onCallBegin(); err != nil { + return err + } + defer cc.idlenessMgr.onCallEnd() + // allow interceptor to see all applicable call options, which means those // configured as defaults from dial option as well as per-call options opts = combine(cc.dopts.callOptions, opts) diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/clientconn.go b/.ci/providerlint/vendor/google.golang.org/grpc/clientconn.go index 422639c79db..314addcaa1c 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/clientconn.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/clientconn.go @@ -24,7 +24,6 @@ import ( "fmt" "math" "net/url" - "reflect" "strings" "sync" "sync/atomic" @@ -69,6 +68,9 @@ var ( errConnDrain = errors.New("grpc: the connection is drained") // errConnClosing indicates that the connection is closing. errConnClosing = errors.New("grpc: the connection is closing") + // errConnIdling indicates that the connection is being closed as the channel + // is moving to an idle mode due to inactivity. + errConnIdling = errors.New("grpc: the connection is closing due to channel idleness") // invalidDefaultServiceConfigErrPrefix is used to prefix the json parsing error for the default // service config. invalidDefaultServiceConfigErrPrefix = "grpc: the provided default service config is invalid" @@ -134,20 +136,42 @@ func (dcs *defaultConfigSelector) SelectConfig(rpcInfo iresolver.RPCInfo) (*ires // e.g.
to use dns resolver, a "dns:///" prefix should be applied to the target. func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) { cc := &ClientConn{ - target: target, - csMgr: &connectivityStateManager{}, - conns: make(map[*addrConn]struct{}), - dopts: defaultDialOptions(), - blockingpicker: newPickerWrapper(), - czData: new(channelzData), - firstResolveEvent: grpcsync.NewEvent(), - } + target: target, + csMgr: &connectivityStateManager{}, + conns: make(map[*addrConn]struct{}), + dopts: defaultDialOptions(), + czData: new(channelzData), + } + + // We start the channel off in idle mode, but kick it out of idle at the end + // of this method, instead of waiting for the first RPC. Other gRPC + // implementations do wait for the first RPC to kick the channel out of + // idle. But doing so would be a major behavior change for our users who are + // used to seeing the channel active after Dial. + // + // Taking this approach of kicking it out of idle at the end of this method + // allows us to share the code between channel creation and exiting idle + // mode. This will also make it easy for us to switch to starting the + // channel off in idle, if at all we ever get to do that. 
+ cc.idlenessState = ccIdlenessStateIdle + cc.retryThrottler.Store((*retryThrottler)(nil)) cc.safeConfigSelector.UpdateConfigSelector(&defaultConfigSelector{nil}) cc.ctx, cc.cancel = context.WithCancel(context.Background()) + cc.exitIdleCond = sync.NewCond(&cc.mu) - for _, opt := range extraDialOptions { - opt.apply(&cc.dopts) + disableGlobalOpts := false + for _, opt := range opts { + if _, ok := opt.(*disableGlobalDialOptions); ok { + disableGlobalOpts = true + break + } + } + + if !disableGlobalOpts { + for _, opt := range globalDialOptions { + opt.apply(&cc.dopts) + } } for _, opt := range opts { @@ -163,40 +187,11 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn * } }() - pid := cc.dopts.channelzParentID - cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, pid, target) - ted := &channelz.TraceEventDesc{ - Desc: "Channel created", - Severity: channelz.CtInfo, - } - if cc.dopts.channelzParentID != nil { - ted.Parent = &channelz.TraceEventDesc{ - Desc: fmt.Sprintf("Nested Channel(id:%d) created", cc.channelzID.Int()), - Severity: channelz.CtInfo, - } - } - channelz.AddTraceEvent(logger, cc.channelzID, 1, ted) - cc.csMgr.channelzID = cc.channelzID + // Register ClientConn with channelz. 
+ cc.channelzRegistration(target) - if cc.dopts.copts.TransportCredentials == nil && cc.dopts.copts.CredsBundle == nil { - return nil, errNoTransportSecurity - } - if cc.dopts.copts.TransportCredentials != nil && cc.dopts.copts.CredsBundle != nil { - return nil, errTransportCredsAndBundle - } - if cc.dopts.copts.CredsBundle != nil && cc.dopts.copts.CredsBundle.TransportCredentials() == nil { - return nil, errNoTransportCredsInBundle - } - transportCreds := cc.dopts.copts.TransportCredentials - if transportCreds == nil { - transportCreds = cc.dopts.copts.CredsBundle.TransportCredentials() - } - if transportCreds.Info().SecurityProtocol == "insecure" { - for _, cd := range cc.dopts.copts.PerRPCCredentials { - if cd.RequireTransportSecurity() { - return nil, errTransportCredentialsMissing - } - } + if err := cc.validateTransportCredentials(); err != nil { + return nil, err } if cc.dopts.defaultServiceConfigRawJSON != nil { @@ -234,35 +229,19 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn * } }() - scSet := false - if cc.dopts.scChan != nil { - // Try to get an initial service config. - select { - case sc, ok := <-cc.dopts.scChan: - if ok { - cc.sc = &sc - cc.safeConfigSelector.UpdateConfigSelector(&defaultConfigSelector{&sc}) - scSet = true - } - default: - } - } if cc.dopts.bs == nil { cc.dopts.bs = backoff.DefaultExponential } // Determine the resolver to use. - resolverBuilder, err := cc.parseTargetAndFindResolver() - if err != nil { + if err := cc.parseTargetAndFindResolver(); err != nil { return nil, err } - cc.authority, err = determineAuthority(cc.parsedTarget.Endpoint, cc.target, cc.dopts) - if err != nil { + if err = cc.determineAuthority(); err != nil { return nil, err } - channelz.Infof(logger, cc.channelzID, "Channel authority set to %q", cc.authority) - if cc.dopts.scChan != nil && !scSet { + if cc.dopts.scChan != nil { // Blocking wait for the initial service config. 
select { case sc, ok := <-cc.dopts.scChan: @@ -278,57 +257,224 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn * go cc.scWatcher() } + // This creates the name resolver, load balancer, blocking picker etc. + if err := cc.exitIdleMode(); err != nil { + return nil, err + } + + // Configure idleness support with configured idle timeout or default idle + // timeout duration. Idleness can be explicitly disabled by the user, by + // setting the dial option to 0. + cc.idlenessMgr = newIdlenessManager(cc, cc.dopts.idleTimeout) + + // Return early for non-blocking dials. + if !cc.dopts.block { + return cc, nil + } + + // A blocking dial blocks until the clientConn is ready. + for { + s := cc.GetState() + if s == connectivity.Idle { + cc.Connect() + } + if s == connectivity.Ready { + return cc, nil + } else if cc.dopts.copts.FailOnNonTempDialError && s == connectivity.TransientFailure { + if err = cc.connectionError(); err != nil { + terr, ok := err.(interface { + Temporary() bool + }) + if ok && !terr.Temporary() { + return nil, err + } + } + } + if !cc.WaitForStateChange(ctx, s) { + // ctx got timeout or canceled. + if err = cc.connectionError(); err != nil && cc.dopts.returnLastError { + return nil, err + } + return nil, ctx.Err() + } + } +} + +// addTraceEvent is a helper method to add a trace event on the channel. If the +// channel is a nested one, the same event is also added on the parent channel. +func (cc *ClientConn) addTraceEvent(msg string) { + ted := &channelz.TraceEventDesc{ + Desc: fmt.Sprintf("Channel %s", msg), + Severity: channelz.CtInfo, + } + if cc.dopts.channelzParentID != nil { + ted.Parent = &channelz.TraceEventDesc{ + Desc: fmt.Sprintf("Nested channel(id:%d) %s", cc.channelzID.Int(), msg), + Severity: channelz.CtInfo, + } + } + channelz.AddTraceEvent(logger, cc.channelzID, 0, ted) +} + +// exitIdleMode moves the channel out of idle mode by recreating the name +// resolver and load balancer. 
+func (cc *ClientConn) exitIdleMode() error { + cc.mu.Lock() + if cc.conns == nil { + cc.mu.Unlock() + return errConnClosing + } + if cc.idlenessState != ccIdlenessStateIdle { + cc.mu.Unlock() + logger.Info("ClientConn asked to exit idle mode when not in idle mode") + return nil + } + + defer func() { + // When Close() and exitIdleMode() race against each other, one of the + // following two can happen: + // - Close() wins the race and runs first. exitIdleMode() runs after, and + // sees that the ClientConn is already closed and hence returns early. + // - exitIdleMode() wins the race and runs first and recreates the balancer + // and releases the lock before recreating the resolver. If Close() runs + // in this window, it will wait for exitIdleMode to complete. + // + // We achieve this synchronization using the below condition variable. + cc.mu.Lock() + cc.idlenessState = ccIdlenessStateActive + cc.exitIdleCond.Signal() + cc.mu.Unlock() + }() + + cc.idlenessState = ccIdlenessStateExitingIdle + exitedIdle := false + if cc.blockingpicker == nil { + cc.blockingpicker = newPickerWrapper() + } else { + cc.blockingpicker.exitIdleMode() + exitedIdle = true + } + var credsClone credentials.TransportCredentials if creds := cc.dopts.copts.TransportCredentials; creds != nil { credsClone = creds.Clone() } - cc.balancerWrapper = newCCBalancerWrapper(cc, balancer.BuildOptions{ - DialCreds: credsClone, - CredsBundle: cc.dopts.copts.CredsBundle, - Dialer: cc.dopts.copts.Dialer, - Authority: cc.authority, - CustomUserAgent: cc.dopts.copts.UserAgent, - ChannelzParentID: cc.channelzID, - Target: cc.parsedTarget, - }) + if cc.balancerWrapper == nil { + cc.balancerWrapper = newCCBalancerWrapper(cc, balancer.BuildOptions{ + DialCreds: credsClone, + CredsBundle: cc.dopts.copts.CredsBundle, + Dialer: cc.dopts.copts.Dialer, + Authority: cc.authority, + CustomUserAgent: cc.dopts.copts.UserAgent, + ChannelzParentID: cc.channelzID, + Target: cc.parsedTarget, + }) + } else { + 
cc.balancerWrapper.exitIdleMode() + } + cc.firstResolveEvent = grpcsync.NewEvent() + cc.mu.Unlock() - // Build the resolver. - rWrapper, err := newCCResolverWrapper(cc, resolverBuilder) - if err != nil { - return nil, fmt.Errorf("failed to build resolver: %v", err) + // This needs to be called without cc.mu because this builds a new resolver + // which might update state or report error inline which needs to be handled + // by cc.updateResolverState() which also grabs cc.mu. + if err := cc.initResolverWrapper(credsClone); err != nil { + return err + } + + if exitedIdle { + cc.addTraceEvent("exiting idle mode") } + return nil +} + +// enterIdleMode puts the channel in idle mode, and as part of it shuts down the +// name resolver, load balancer and any subchannels. +func (cc *ClientConn) enterIdleMode() error { cc.mu.Lock() - cc.resolverWrapper = rWrapper + if cc.conns == nil { + cc.mu.Unlock() + return ErrClientConnClosing + } + if cc.idlenessState != ccIdlenessStateActive { + logger.Error("ClientConn asked to enter idle mode when not active") + return nil + } + + // cc.conns == nil is a proxy for the ClientConn being closed. So, instead + // of setting it to nil here, we recreate the map. This also means that we + // don't have to do this when exiting idle mode. + conns := cc.conns + cc.conns = make(map[*addrConn]struct{}) + + // TODO: Currently, we close the resolver wrapper upon entering idle mode + // and create a new one upon exiting idle mode. This means that the + // `cc.resolverWrapper` field would be overwritten everytime we exit idle + // mode. While this means that we need to hold `cc.mu` when accessing + // `cc.resolverWrapper`, it makes the code simpler in the wrapper. We should + // try to do the same for the balancer and picker wrappers too. 
+ cc.resolverWrapper.close() + cc.blockingpicker.enterIdleMode() + cc.balancerWrapper.enterIdleMode() + cc.csMgr.updateState(connectivity.Idle) + cc.idlenessState = ccIdlenessStateIdle cc.mu.Unlock() - // A blocking dial blocks until the clientConn is ready. - if cc.dopts.block { - for { - cc.Connect() - s := cc.GetState() - if s == connectivity.Ready { - break - } else if cc.dopts.copts.FailOnNonTempDialError && s == connectivity.TransientFailure { - if err = cc.connectionError(); err != nil { - terr, ok := err.(interface { - Temporary() bool - }) - if ok && !terr.Temporary() { - return nil, err - } - } - } - if !cc.WaitForStateChange(ctx, s) { - // ctx got timeout or canceled. - if err = cc.connectionError(); err != nil && cc.dopts.returnLastError { - return nil, err - } - return nil, ctx.Err() + go func() { + cc.addTraceEvent("entering idle mode") + for ac := range conns { + ac.tearDown(errConnIdling) + } + }() + return nil +} + +// validateTransportCredentials performs a series of checks on the configured +// transport credentials. It returns a non-nil error if any of these conditions +// are met: +// - no transport creds and no creds bundle is configured +// - both transport creds and creds bundle are configured +// - creds bundle is configured, but it lacks a transport credentials +// - insecure transport creds configured alongside call creds that require +// transport level security +// +// If none of the above conditions are met, the configured credentials are +// deemed valid and a nil error is returned. 
+func (cc *ClientConn) validateTransportCredentials() error { + if cc.dopts.copts.TransportCredentials == nil && cc.dopts.copts.CredsBundle == nil { + return errNoTransportSecurity + } + if cc.dopts.copts.TransportCredentials != nil && cc.dopts.copts.CredsBundle != nil { + return errTransportCredsAndBundle + } + if cc.dopts.copts.CredsBundle != nil && cc.dopts.copts.CredsBundle.TransportCredentials() == nil { + return errNoTransportCredsInBundle + } + transportCreds := cc.dopts.copts.TransportCredentials + if transportCreds == nil { + transportCreds = cc.dopts.copts.CredsBundle.TransportCredentials() + } + if transportCreds.Info().SecurityProtocol == "insecure" { + for _, cd := range cc.dopts.copts.PerRPCCredentials { + if cd.RequireTransportSecurity() { + return errTransportCredentialsMissing } } } + return nil +} - return cc, nil +// channelzRegistration registers the newly created ClientConn with channelz and +// stores the returned identifier in `cc.channelzID` and `cc.csMgr.channelzID`. +// A channelz trace event is emitted for ClientConn creation. If the newly +// created ClientConn is a nested one, i.e a valid parent ClientConn ID is +// specified via a dial option, the trace event is also added to the parent. +// +// Doesn't grab cc.mu as this method is expected to be called only at Dial time. +func (cc *ClientConn) channelzRegistration(target string) { + cc.channelzID = channelz.RegisterChannel(&channelzChannel{cc}, cc.dopts.channelzParentID, target) + cc.addTraceEvent("created") + cc.csMgr.channelzID = cc.channelzID } // chainUnaryClientInterceptors chains all unary client interceptors into one. @@ -474,7 +620,9 @@ type ClientConn struct { authority string // See determineAuthority(). dopts dialOptions // Default and user specified dial options. channelzID *channelz.Identifier // Channelz identifier for the channel. + resolverBuilder resolver.Builder // See parseTargetAndFindResolver(). 
balancerWrapper *ccBalancerWrapper // Uses gracefulswitch.balancer underneath. + idlenessMgr idlenessManager // The following provide their own synchronization, and therefore don't // require cc.mu to be held to access them. @@ -495,11 +643,31 @@ type ClientConn struct { sc *ServiceConfig // Latest service config received from the resolver. conns map[*addrConn]struct{} // Set to nil on close. mkp keepalive.ClientParameters // May be updated upon receipt of a GoAway. + idlenessState ccIdlenessState // Tracks idleness state of the channel. + exitIdleCond *sync.Cond // Signalled when channel exits idle. lceMu sync.Mutex // protects lastConnectionError lastConnectionError error } +// ccIdlenessState tracks the idleness state of the channel. +// +// Channels start off in `active` and move to `idle` after a period of +// inactivity. When moving back to `active` upon an incoming RPC, they +// transition through `exiting_idle`. This state is useful for synchronization +// with Close(). +// +// This state tracking is mostly for self-protection. The idlenessManager is +// expected to keep track of the state as well, and is expected not to call into +// the ClientConn unnecessarily. +type ccIdlenessState int8 + +const ( + ccIdlenessStateActive ccIdlenessState = iota + ccIdlenessStateIdle + ccIdlenessStateExitingIdle +) + // WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or // ctx expires. A true value is returned in former case and false in latter. // @@ -539,7 +707,10 @@ func (cc *ClientConn) GetState() connectivity.State { // Notice: This API is EXPERIMENTAL and may be changed or removed in a later // release. func (cc *ClientConn) Connect() { - cc.balancerWrapper.exitIdle() + cc.exitIdleMode() + // If the ClientConn was not in idle mode, we need to call ExitIdle on the + // LB policy so that connections can be created. 
+ cc.balancerWrapper.exitIdleMode() } func (cc *ClientConn) scWatcher() { @@ -708,6 +879,7 @@ func (cc *ClientConn) newAddrConn(addrs []resolver.Address, opts balancer.NewSub dopts: cc.dopts, czData: new(channelzData), resetBackoff: make(chan struct{}), + stateChan: make(chan struct{}), } ac.ctx, ac.cancel = context.WithCancel(cc.ctx) // Track ac in cc. This needs to be done before any getTransport(...) is called. @@ -788,16 +960,19 @@ func (cc *ClientConn) incrCallsFailed() { func (ac *addrConn) connect() error { ac.mu.Lock() if ac.state == connectivity.Shutdown { + if logger.V(2) { + logger.Infof("connect called on shutdown addrConn; ignoring.") + } ac.mu.Unlock() return errConnClosing } if ac.state != connectivity.Idle { + if logger.V(2) { + logger.Infof("connect called on addrConn in non-idle state (%v); ignoring.", ac.state) + } ac.mu.Unlock() return nil } - // Update connectivity state within the lock to prevent subsequent or - // concurrent calls from resetting the transport more than once. - ac.updateConnectivityState(connectivity.Connecting, nil) ac.mu.Unlock() ac.resetTransport() @@ -816,58 +991,60 @@ func equalAddresses(a, b []resolver.Address) bool { return true } -// tryUpdateAddrs tries to update ac.addrs with the new addresses list. -// -// If ac is TransientFailure, it updates ac.addrs and returns true. The updated -// addresses will be picked up by retry in the next iteration after backoff. -// -// If ac is Shutdown or Idle, it updates ac.addrs and returns true. -// -// If the addresses is the same as the old list, it does nothing and returns -// true. -// -// If ac is Connecting, it returns false. The caller should tear down the ac and -// create a new one. Note that the backoff will be reset when this happens. -// -// If ac is Ready, it checks whether current connected address of ac is in the -// new addrs list. -// - If true, it updates ac.addrs and returns true. The ac will keep using -// the existing connection. 
-// - If false, it does nothing and returns false. -func (ac *addrConn) tryUpdateAddrs(addrs []resolver.Address) bool { +// updateAddrs updates ac.addrs with the new addresses list and handles active +// connections or connection attempts. +func (ac *addrConn) updateAddrs(addrs []resolver.Address) { ac.mu.Lock() - defer ac.mu.Unlock() - channelz.Infof(logger, ac.channelzID, "addrConn: tryUpdateAddrs curAddr: %v, addrs: %v", ac.curAddr, addrs) - if ac.state == connectivity.Shutdown || - ac.state == connectivity.TransientFailure || - ac.state == connectivity.Idle { - ac.addrs = addrs - return true - } + channelz.Infof(logger, ac.channelzID, "addrConn: updateAddrs curAddr: %v, addrs: %v", ac.curAddr, addrs) if equalAddresses(ac.addrs, addrs) { - return true + ac.mu.Unlock() + return } - if ac.state == connectivity.Connecting { - return false + ac.addrs = addrs + + if ac.state == connectivity.Shutdown || + ac.state == connectivity.TransientFailure || + ac.state == connectivity.Idle { + // We were not connecting, so do nothing but update the addresses. + ac.mu.Unlock() + return } - // ac.state is Ready, try to find the connected address. - var curAddrFound bool - for _, a := range addrs { - a.ServerName = ac.cc.getServerName(a) - if reflect.DeepEqual(ac.curAddr, a) { - curAddrFound = true - break + if ac.state == connectivity.Ready { + // Try to find the connected address. + for _, a := range addrs { + a.ServerName = ac.cc.getServerName(a) + if a.Equal(ac.curAddr) { + // We are connected to a valid address, so do nothing but + // update the addresses. + ac.mu.Unlock() + return + } } } - channelz.Infof(logger, ac.channelzID, "addrConn: tryUpdateAddrs curAddrFound: %v", curAddrFound) - if curAddrFound { - ac.addrs = addrs + + // We are either connected to the wrong address or currently connecting. + // Stop the current iteration and restart. 
+ + ac.cancel() + ac.ctx, ac.cancel = context.WithCancel(ac.cc.ctx) + + // We have to defer here because GracefulClose => Close => onClose, which + // requires locking ac.mu. + defer ac.transport.GracefulClose() + ac.transport = nil + + if len(addrs) == 0 { + ac.updateConnectivityState(connectivity.Idle, nil) } - return curAddrFound + ac.mu.Unlock() + + // Since we were connecting/connected, we should start a new connection + // attempt. + go ac.resetTransport() } // getServerName determines the serverName to be used in the connection @@ -928,7 +1105,7 @@ func (cc *ClientConn) healthCheckConfig() *healthCheckConfig { return cc.sc.healthCheckConfig } -func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method string) (transport.ClientTransport, func(balancer.DoneInfo), error) { +func (cc *ClientConn) getTransport(ctx context.Context, failfast bool, method string) (transport.ClientTransport, balancer.PickResult, error) { return cc.blockingpicker.pick(ctx, failfast, balancer.PickInfo{ Ctx: ctx, FullMethodName: method, @@ -1020,39 +1197,40 @@ func (cc *ClientConn) Close() error { cc.mu.Unlock() return ErrClientConnClosing } + + for cc.idlenessState == ccIdlenessStateExitingIdle { + cc.exitIdleCond.Wait() + } + conns := cc.conns cc.conns = nil cc.csMgr.updateState(connectivity.Shutdown) + pWrapper := cc.blockingpicker rWrapper := cc.resolverWrapper - cc.resolverWrapper = nil bWrapper := cc.balancerWrapper + idlenessMgr := cc.idlenessMgr cc.mu.Unlock() // The order of closing matters here since the balancer wrapper assumes the // picker is closed before it is closed. 
- cc.blockingpicker.close() + if pWrapper != nil { + pWrapper.close() + } if bWrapper != nil { bWrapper.close() } if rWrapper != nil { rWrapper.close() } + if idlenessMgr != nil { + idlenessMgr.close() + } for ac := range conns { ac.tearDown(ErrClientConnClosing) } - ted := &channelz.TraceEventDesc{ - Desc: "Channel deleted", - Severity: channelz.CtInfo, - } - if cc.dopts.channelzParentID != nil { - ted.Parent = &channelz.TraceEventDesc{ - Desc: fmt.Sprintf("Nested channel(id:%d) deleted", cc.channelzID.Int()), - Severity: channelz.CtInfo, - } - } - channelz.AddTraceEvent(logger, cc.channelzID, 0, ted) + cc.addTraceEvent("deleted") // TraceEvent needs to be called before RemoveEntry, as TraceEvent may add // trace reference to the entity being deleted, and thus prevent it from being // deleted right away. @@ -1082,7 +1260,8 @@ type addrConn struct { addrs []resolver.Address // All addresses that the resolver resolved to. // Use updateConnectivityState for updating addrConn's connectivity state. - state connectivity.State + state connectivity.State + stateChan chan struct{} // closed and recreated on every state change. backoffIdx int // Needs to be stateful for resetConnectBackoff. resetBackoff chan struct{} @@ -1096,8 +1275,15 @@ func (ac *addrConn) updateConnectivityState(s connectivity.State, lastErr error) if ac.state == s { return } + // When changing states, reset the state change channel. 
+ close(ac.stateChan) + ac.stateChan = make(chan struct{}) ac.state = s - channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v", s) + if lastErr == nil { + channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v", s) + } else { + channelz.Infof(logger, ac.channelzID, "Subchannel Connectivity change to %v, last error: %s", s, lastErr) + } ac.cc.handleSubConnStateChange(ac.acbw, s, lastErr) } @@ -1117,7 +1303,8 @@ func (ac *addrConn) adjustParams(r transport.GoAwayReason) { func (ac *addrConn) resetTransport() { ac.mu.Lock() - if ac.state == connectivity.Shutdown { + acCtx := ac.ctx + if acCtx.Err() != nil { ac.mu.Unlock() return } @@ -1145,15 +1332,14 @@ func (ac *addrConn) resetTransport() { ac.updateConnectivityState(connectivity.Connecting, nil) ac.mu.Unlock() - if err := ac.tryAllAddrs(addrs, connectDeadline); err != nil { + if err := ac.tryAllAddrs(acCtx, addrs, connectDeadline); err != nil { ac.cc.resolveNow(resolver.ResolveNowOptions{}) // After exhausting all addresses, the addrConn enters // TRANSIENT_FAILURE. - ac.mu.Lock() - if ac.state == connectivity.Shutdown { - ac.mu.Unlock() + if acCtx.Err() != nil { return } + ac.mu.Lock() ac.updateConnectivityState(connectivity.TransientFailure, err) // Backoff. @@ -1168,13 +1354,13 @@ func (ac *addrConn) resetTransport() { ac.mu.Unlock() case <-b: timer.Stop() - case <-ac.ctx.Done(): + case <-acCtx.Done(): timer.Stop() return } ac.mu.Lock() - if ac.state != connectivity.Shutdown { + if acCtx.Err() == nil { ac.updateConnectivityState(connectivity.Idle, err) } ac.mu.Unlock() @@ -1189,14 +1375,13 @@ func (ac *addrConn) resetTransport() { // tryAllAddrs tries to create a connection to the addresses, and stops at // the first successful one. It returns an error if no address was successfully // connected, or updates ac appropriately with the new transport.
-func (ac *addrConn) tryAllAddrs(addrs []resolver.Address, connectDeadline time.Time) error { +func (ac *addrConn) tryAllAddrs(ctx context.Context, addrs []resolver.Address, connectDeadline time.Time) error { var firstConnErr error for _, addr := range addrs { - ac.mu.Lock() - if ac.state == connectivity.Shutdown { - ac.mu.Unlock() + if ctx.Err() != nil { return errConnClosing } + ac.mu.Lock() ac.cc.mu.RLock() ac.dopts.copts.KeepaliveParams = ac.cc.mkp @@ -1210,7 +1395,7 @@ func (ac *addrConn) tryAllAddrs(addrs []resolver.Address, connectDeadline time.T channelz.Infof(logger, ac.channelzID, "Subchannel picks a new address %q to connect", addr.Addr) - err := ac.createTransport(addr, copts, connectDeadline) + err := ac.createTransport(ctx, addr, copts, connectDeadline) if err == nil { return nil } @@ -1227,17 +1412,20 @@ func (ac *addrConn) tryAllAddrs(addrs []resolver.Address, connectDeadline time.T // createTransport creates a connection to addr. It returns an error if the // address was not successfully connected, or updates ac appropriately with the // new transport. -func (ac *addrConn) createTransport(addr resolver.Address, copts transport.ConnectOptions, connectDeadline time.Time) error { +func (ac *addrConn) createTransport(ctx context.Context, addr resolver.Address, copts transport.ConnectOptions, connectDeadline time.Time) error { addr.ServerName = ac.cc.getServerName(addr) - hctx, hcancel := context.WithCancel(ac.ctx) + hctx, hcancel := context.WithCancel(ctx) - onClose := grpcsync.OnceFunc(func() { + onClose := func(r transport.GoAwayReason) { ac.mu.Lock() defer ac.mu.Unlock() - if ac.state == connectivity.Shutdown { - // Already shut down. tearDown() already cleared the transport and - // canceled hctx via ac.ctx, and we expected this connection to be - // closed, so do nothing here. + // adjust params based on GoAwayReason + ac.adjustParams(r) + if ctx.Err() != nil { + // Already shut down or connection attempt canceled. 
tearDown() or + // updateAddrs() already cleared the transport and canceled hctx + // via ac.ctx, and we expected this connection to be closed, so do + // nothing here. return } hcancel() @@ -1254,20 +1442,17 @@ func (ac *addrConn) createTransport(addr resolver.Address, copts transport.Conne // Always go idle and wait for the LB policy to initiate a new // connection attempt. ac.updateConnectivityState(connectivity.Idle, nil) - }) - onGoAway := func(r transport.GoAwayReason) { - ac.mu.Lock() - ac.adjustParams(r) - ac.mu.Unlock() - onClose() } - connectCtx, cancel := context.WithDeadline(ac.ctx, connectDeadline) + connectCtx, cancel := context.WithDeadline(ctx, connectDeadline) defer cancel() copts.ChannelzParentID = ac.channelzID - newTr, err := transport.NewClientTransport(connectCtx, ac.cc.ctx, addr, copts, onGoAway, onClose) + newTr, err := transport.NewClientTransport(connectCtx, ac.cc.ctx, addr, copts, onClose) if err != nil { + if logger.V(2) { + logger.Infof("Creating new client transport to %q: %v", addr, err) + } // newTr is either nil, or closed. hcancel() channelz.Warningf(logger, ac.channelzID, "grpc: addrConn.createTransport failed to connect to %s. Err: %v", addr, err) @@ -1276,7 +1461,7 @@ func (ac *addrConn) createTransport(addr resolver.Address, copts transport.Conne ac.mu.Lock() defer ac.mu.Unlock() - if ac.state == connectivity.Shutdown { + if ctx.Err() != nil { // This can happen if the subConn was removed while in `Connecting` // state. tearDown() would have set the state to `Shutdown`, but // would not have closed the transport since ac.transport would not @@ -1288,6 +1473,9 @@ func (ac *addrConn) createTransport(addr resolver.Address, copts transport.Conne // The error we pass to Close() is immaterial since there are no open // streams at this point, so no trailers with error details will be sent // out. We just need to pass a non-nil error. + // + // This can also happen when updateAddrs is called during a connection + // attempt. 
go newTr.Close(transport.ErrConnClosing) return nil } @@ -1371,7 +1559,7 @@ func (ac *addrConn) startHealthCheck(ctx context.Context) { if status.Code(err) == codes.Unimplemented { channelz.Error(logger, ac.channelzID, "Subchannel health check is unimplemented at server side, thus health check is disabled") } else { - channelz.Errorf(logger, ac.channelzID, "HealthCheckFunc exits with unexpected error %v", err) + channelz.Errorf(logger, ac.channelzID, "Health checking failed: %v", err) } } }() @@ -1395,6 +1583,29 @@ func (ac *addrConn) getReadyTransport() transport.ClientTransport { return nil } +// getTransport waits until the addrconn is ready and returns the transport. +// If the context expires first, returns an appropriate status. If the +// addrConn is stopped first, returns an Unavailable status error. +func (ac *addrConn) getTransport(ctx context.Context) (transport.ClientTransport, error) { + for ctx.Err() == nil { + ac.mu.Lock() + t, state, sc := ac.transport, ac.state, ac.stateChan + ac.mu.Unlock() + if state == connectivity.Ready { + return t, nil + } + if state == connectivity.Shutdown { + return nil, status.Errorf(codes.Unavailable, "SubConn shutting down") + } + + select { + case <-ctx.Done(): + case <-sc: + } + } + return nil, status.FromContextError(ctx.Err()).Err() +} + // tearDown starts to tear down the addrConn. // // Note that tearDown doesn't remove ac from ac.cc.conns, so the addrConn struct @@ -1522,6 +1733,9 @@ func (c *channelzChannel) ChannelzMetric() *channelz.ChannelInternalMetric { // referenced by users. var ErrClientConnTimeout = errors.New("grpc: timed out when dialing") +// getResolver finds the scheme in the cc's resolvers or the global registry. +// scheme should always be lowercase (typically by virtue of url.Parse() +// performing proper RFC3986 behavior). 
func (cc *ClientConn) getResolver(scheme string) resolver.Builder { for _, rb := range cc.dopts.resolvers { if scheme == rb.Scheme() { @@ -1543,7 +1757,14 @@ func (cc *ClientConn) connectionError() error { return cc.lastConnectionError } -func (cc *ClientConn) parseTargetAndFindResolver() (resolver.Builder, error) { +// parseTargetAndFindResolver parses the user's dial target and stores the +// parsed target in `cc.parsedTarget`. +// +// The resolver to use is determined based on the scheme in the parsed target +// and the same is stored in `cc.resolverBuilder`. +// +// Doesn't grab cc.mu as this method is expected to be called only at Dial time. +func (cc *ClientConn) parseTargetAndFindResolver() error { channelz.Infof(logger, cc.channelzID, "original dial target is: %q", cc.target) var rb resolver.Builder @@ -1555,7 +1776,8 @@ func (cc *ClientConn) parseTargetAndFindResolver() (resolver.Builder, error) { rb = cc.getResolver(parsedTarget.URL.Scheme) if rb != nil { cc.parsedTarget = parsedTarget - return rb, nil + cc.resolverBuilder = rb + return nil } } @@ -1570,42 +1792,30 @@ func (cc *ClientConn) parseTargetAndFindResolver() (resolver.Builder, error) { parsedTarget, err = parseTarget(canonicalTarget) if err != nil { channelz.Infof(logger, cc.channelzID, "dial target %q parse failed: %v", canonicalTarget, err) - return nil, err + return err } channelz.Infof(logger, cc.channelzID, "parsed dial target is: %+v", parsedTarget) rb = cc.getResolver(parsedTarget.URL.Scheme) if rb == nil { - return nil, fmt.Errorf("could not get resolver for default scheme: %q", parsedTarget.URL.Scheme) + return fmt.Errorf("could not get resolver for default scheme: %q", parsedTarget.URL.Scheme) } cc.parsedTarget = parsedTarget - return rb, nil + cc.resolverBuilder = rb + return nil } // parseTarget uses RFC 3986 semantics to parse the given target into a -// resolver.Target struct containing scheme, authority and endpoint. 
Query +// resolver.Target struct containing scheme, authority and url. Query // params are stripped from the endpoint. func parseTarget(target string) (resolver.Target, error) { u, err := url.Parse(target) if err != nil { return resolver.Target{}, err } - // For targets of the form "[scheme]://[authority]/endpoint, the endpoint - // value returned from url.Parse() contains a leading "/". Although this is - // in accordance with RFC 3986, we do not want to break existing resolver - // implementations which expect the endpoint without the leading "/". So, we - // end up stripping the leading "/" here. But this will result in an - // incorrect parsing for something like "unix:///path/to/socket". Since we - // own the "unix" resolver, we can workaround in the unix resolver by using - // the `URL` field instead of the `Endpoint` field. - endpoint := u.Path - if endpoint == "" { - endpoint = u.Opaque - } - endpoint = strings.TrimPrefix(endpoint, "/") + return resolver.Target{ Scheme: u.Scheme, Authority: u.Host, - Endpoint: endpoint, URL: *u, }, nil } @@ -1614,7 +1824,15 @@ func parseTarget(target string) (resolver.Target, error) { // - user specified authority override using `WithAuthority` dial option // - creds' notion of server name for the authentication handshake // - endpoint from dial target of the form "scheme://[authority]/endpoint" -func determineAuthority(endpoint, target string, dopts dialOptions) (string, error) { +// +// Stores the determined authority in `cc.authority`. +// +// Returns a non-nil error if the authority returned by the transport +// credentials does not match the authority configured through the dial option. +// +// Doesn't grab cc.mu as this method is expected to be called only at Dial time. +func (cc *ClientConn) determineAuthority() error { + dopts := cc.dopts // Historically, we had two options for users to specify the serverName or // authority for a channel.
One was through the transport credentials // (either in its constructor, or through the OverrideServerName() method). @@ -1631,25 +1849,58 @@ func determineAuthority(endpoint, target string, dopts dialOptions) (string, err } authorityFromDialOption := dopts.authority if (authorityFromCreds != "" && authorityFromDialOption != "") && authorityFromCreds != authorityFromDialOption { - return "", fmt.Errorf("ClientConn's authority from transport creds %q and dial option %q don't match", authorityFromCreds, authorityFromDialOption) + return fmt.Errorf("ClientConn's authority from transport creds %q and dial option %q don't match", authorityFromCreds, authorityFromDialOption) } + endpoint := cc.parsedTarget.Endpoint() + target := cc.target switch { case authorityFromDialOption != "": - return authorityFromDialOption, nil + cc.authority = authorityFromDialOption case authorityFromCreds != "": - return authorityFromCreds, nil + cc.authority = authorityFromCreds case strings.HasPrefix(target, "unix:") || strings.HasPrefix(target, "unix-abstract:"): // TODO: remove when the unix resolver implements optional interface to // return channel authority. - return "localhost", nil + cc.authority = "localhost" case strings.HasPrefix(endpoint, ":"): - return "localhost" + endpoint, nil + cc.authority = "localhost" + endpoint default: // TODO: Define an optional interface on the resolver builder to return // the channel authority given the user's dial target. For resolvers // which don't implement this interface, we will use the endpoint from // "scheme://authority/endpoint" as the default authority. - return endpoint, nil + cc.authority = endpoint } + channelz.Infof(logger, cc.channelzID, "Channel authority set to %q", cc.authority) + return nil +} + +// initResolverWrapper creates a ccResolverWrapper, which builds the name +// resolver. This method grabs the lock to assign the newly built resolver +// wrapper to the cc.resolverWrapper field. 
+func (cc *ClientConn) initResolverWrapper(creds credentials.TransportCredentials) error { + rw, err := newCCResolverWrapper(cc, ccResolverWrapperOpts{ + target: cc.parsedTarget, + builder: cc.resolverBuilder, + bOpts: resolver.BuildOptions{ + DisableServiceConfig: cc.dopts.disableServiceConfig, + DialCreds: creds, + CredsBundle: cc.dopts.copts.CredsBundle, + Dialer: cc.dopts.copts.Dialer, + }, + channelzID: cc.channelzID, + }) + if err != nil { + return fmt.Errorf("failed to build resolver: %v", err) + } + // Resolver implementations may report state update or error inline when + // built (or right after), and this is handled in cc.updateResolverState. + // Also, an error from the resolver might lead to a re-resolution request + // from the balancer, which is handled in resolveNow() where + // `cc.resolverWrapper` is accessed. Hence, we need to hold the lock here. + cc.mu.Lock() + cc.resolverWrapper = rw + cc.mu.Unlock() + return nil } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/codes/code_string.go b/.ci/providerlint/vendor/google.golang.org/grpc/codes/code_string.go index 0b206a57822..934fac2b090 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/codes/code_string.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/codes/code_string.go @@ -18,7 +18,15 @@ package codes -import "strconv" +import ( + "strconv" + + "google.golang.org/grpc/internal" +) + +func init() { + internal.CanonicalString = canonicalString +} func (c Code) String() string { switch c { @@ -60,3 +68,44 @@ func (c Code) String() string { return "Code(" + strconv.FormatInt(int64(c), 10) + ")" } } + +func canonicalString(c Code) string { + switch c { + case OK: + return "OK" + case Canceled: + return "CANCELLED" + case Unknown: + return "UNKNOWN" + case InvalidArgument: + return "INVALID_ARGUMENT" + case DeadlineExceeded: + return "DEADLINE_EXCEEDED" + case NotFound: + return "NOT_FOUND" + case AlreadyExists: + return "ALREADY_EXISTS" + case PermissionDenied: + return 
"PERMISSION_DENIED" + case ResourceExhausted: + return "RESOURCE_EXHAUSTED" + case FailedPrecondition: + return "FAILED_PRECONDITION" + case Aborted: + return "ABORTED" + case OutOfRange: + return "OUT_OF_RANGE" + case Unimplemented: + return "UNIMPLEMENTED" + case Internal: + return "INTERNAL" + case Unavailable: + return "UNAVAILABLE" + case DataLoss: + return "DATA_LOSS" + case Unauthenticated: + return "UNAUTHENTICATED" + default: + return "CODE(" + strconv.FormatInt(int64(c), 10) + ")" + } +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/credentials/tls.go b/.ci/providerlint/vendor/google.golang.org/grpc/credentials/tls.go index ce2bbc10a14..877b7cd21af 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/credentials/tls.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/credentials/tls.go @@ -23,9 +23,9 @@ import ( "crypto/tls" "crypto/x509" "fmt" - "io/ioutil" "net" "net/url" + "os" credinternal "google.golang.org/grpc/internal/credentials" ) @@ -166,7 +166,7 @@ func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) Transpor // it will override the virtual host name of authority (e.g. :authority header // field) in requests. func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) { - b, err := ioutil.ReadFile(certFile) + b, err := os.ReadFile(certFile) if err != nil { return nil, err } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/dialoptions.go b/.ci/providerlint/vendor/google.golang.org/grpc/dialoptions.go index 9372dc322e8..15a3d5102a9 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/dialoptions.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/dialoptions.go @@ -38,12 +38,14 @@ import ( func init() { internal.AddGlobalDialOptions = func(opt ...DialOption) { - extraDialOptions = append(extraDialOptions, opt...) + globalDialOptions = append(globalDialOptions, opt...) 
} internal.ClearGlobalDialOptions = func() { - extraDialOptions = nil + globalDialOptions = nil } internal.WithBinaryLogger = withBinaryLogger + internal.JoinDialOptions = newJoinDialOption + internal.DisableGlobalDialOptions = newDisableGlobalDialOptions } // dialOptions configure a Dial call. dialOptions are set by the DialOption @@ -75,6 +77,7 @@ type dialOptions struct { defaultServiceConfig *ServiceConfig // defaultServiceConfig is parsed from defaultServiceConfigRawJSON. defaultServiceConfigRawJSON *string resolvers []resolver.Builder + idleTimeout time.Duration } // DialOption configures how we set up the connection. @@ -82,7 +85,7 @@ type DialOption interface { apply(*dialOptions) } -var extraDialOptions []DialOption +var globalDialOptions []DialOption // EmptyDialOption does not alter the dial configuration. It can be embedded in // another structure to build custom dial options. @@ -95,6 +98,16 @@ type EmptyDialOption struct{} func (EmptyDialOption) apply(*dialOptions) {} +type disableGlobalDialOptions struct{} + +func (disableGlobalDialOptions) apply(*dialOptions) {} + +// newDisableGlobalDialOptions returns a DialOption that prevents the ClientConn +// from applying the global DialOptions (set via AddGlobalDialOptions). +func newDisableGlobalDialOptions() DialOption { + return &disableGlobalDialOptions{} +} + // funcDialOption wraps a function that modifies dialOptions into an // implementation of the DialOption interface. type funcDialOption struct { @@ -111,13 +124,28 @@ func newFuncDialOption(f func(*dialOptions)) *funcDialOption { } } +type joinDialOption struct { + opts []DialOption +} + +func (jdo *joinDialOption) apply(do *dialOptions) { + for _, opt := range jdo.opts { + opt.apply(do) + } +} + +func newJoinDialOption(opts ...DialOption) DialOption { + return &joinDialOption{opts: opts} +} + // WithWriteBufferSize determines how much data can be batched before doing a // write on the wire. 
The corresponding memory allocation for this buffer will // be twice the size to keep syscalls low. The default value for this buffer is // 32KB. // -// Zero will disable the write buffer such that each write will be on underlying -// connection. Note: A Send call may not directly translate to a write. +// Zero or negative values will disable the write buffer such that each write +// will be on underlying connection. Note: A Send call may not directly +// translate to a write. func WithWriteBufferSize(s int) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.WriteBufferSize = s @@ -127,8 +155,9 @@ func WithWriteBufferSize(s int) DialOption { // WithReadBufferSize lets you set the size of read buffer, this determines how // much data can be read at most for each read syscall. // -// The default value for this buffer is 32KB. Zero will disable read buffer for -// a connection so data framer can access the underlying conn directly. +// The default value for this buffer is 32KB. Zero or negative values will +// disable read buffer for a connection so data framer can access the +// underlying conn directly. func WithReadBufferSize(s int) DialOption { return newFuncDialOption(func(o *dialOptions) { o.copts.ReadBufferSize = s @@ -267,6 +296,9 @@ func withBackoff(bs internalbackoff.Strategy) DialOption { // WithBlock returns a DialOption which makes callers of Dial block until the // underlying connection is up. Without this, Dial returns immediately and // connecting the server happens in background. +// +// Use of this feature is not recommended. For more information, please see: +// https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md func WithBlock() DialOption { return newFuncDialOption(func(o *dialOptions) { o.block = true @@ -278,6 +310,9 @@ func WithBlock() DialOption { // the context.DeadlineExceeded error. // Implies WithBlock() // +// Use of this feature is not recommended. 
For more information, please see: +// https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md +// +// # Experimental +// +// Notice: This API is EXPERIMENTAL and may be changed or removed in a @@ -420,6 +455,9 @@ func withBinaryLogger(bl binarylog.Logger) DialOption { // FailOnNonTempDialError only affects the initial dial, and does not do // anything useful unless you are also using WithBlock(). // +// Use of this feature is not recommended. For more information, please see: +// https://github.com/grpc/grpc-go/blob/master/Documentation/anti-patterns.md +// +// # Experimental +// +// Notice: This API is EXPERIMENTAL and may be changed or removed in a @@ -618,3 +656,23 @@ func WithResolvers(rs ...resolver.Builder) DialOption { o.resolvers = append(o.resolvers, rs...) }) } + +// WithIdleTimeout returns a DialOption that configures an idle timeout for the +// channel. If the channel is idle for the configured timeout, i.e. there are no +// ongoing RPCs and no new RPCs are initiated, the channel will enter idle mode +// and as a result the name resolver and load balancer will be shut down. The +// channel will exit idle mode when the Connect() method is called or when an +// RPC is initiated. +// +// By default this feature is disabled, which can also be explicitly configured +// by passing zero to this function. +// +// # Experimental +// +// Notice: This API is EXPERIMENTAL and may be changed or removed in a +// later release.
+func WithIdleTimeout(d time.Duration) DialOption { + return newFuncDialOption(func(o *dialOptions) { + o.idleTimeout = d + }) +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/encoding/encoding.go b/.ci/providerlint/vendor/google.golang.org/grpc/encoding/encoding.go index 711763d54fb..07a5861352a 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/encoding/encoding.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/encoding/encoding.go @@ -75,7 +75,9 @@ var registeredCompressor = make(map[string]Compressor) // registered with the same name, the one registered last will take effect. func RegisterCompressor(c Compressor) { registeredCompressor[c.Name()] = c - grpcutil.RegisteredCompressorNames = append(grpcutil.RegisteredCompressorNames, c.Name()) + if !grpcutil.IsCompressorNameRegistered(c.Name()) { + grpcutil.RegisteredCompressorNames = append(grpcutil.RegisteredCompressorNames, c.Name()) + } } // GetCompressor returns Compressor for the given compressor name. diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/grpclog/loggerv2.go b/.ci/providerlint/vendor/google.golang.org/grpc/grpclog/loggerv2.go index b5560b47ec4..5de66e40d36 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/grpclog/loggerv2.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/grpclog/loggerv2.go @@ -22,7 +22,6 @@ import ( "encoding/json" "fmt" "io" - "io/ioutil" "log" "os" "strconv" @@ -140,9 +139,9 @@ func newLoggerV2WithConfig(infoW, warningW, errorW io.Writer, c loggerV2Config) // newLoggerV2 creates a loggerV2 to be used as default logger. // All logs are written to stderr. 
func newLoggerV2() LoggerV2 { - errorW := ioutil.Discard - warningW := ioutil.Discard - infoW := ioutil.Discard + errorW := io.Discard + warningW := io.Discard + infoW := io.Discard logLevel := os.Getenv("GRPC_GO_LOG_SEVERITY_LEVEL") switch logLevel { diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go b/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go index a66024d23e3..142d35f753e 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go @@ -17,14 +17,13 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.25.0 -// protoc v3.14.0 +// protoc-gen-go v1.30.0 +// protoc v4.22.0 // source: grpc/health/v1/health.proto package grpc_health_v1 import ( - proto "github.com/golang/protobuf/proto" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" reflect "reflect" @@ -38,10 +37,6 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. -const _ = proto.ProtoPackageIsVersion4 - type HealthCheckResponse_ServingStatus int32 const ( diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go b/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go index a332dfd7b54..a01a1b4d54b 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go @@ -17,8 +17,8 @@ // Code generated by protoc-gen-go-grpc. DO NOT EDIT. 
// versions: -// - protoc-gen-go-grpc v1.2.0 -// - protoc v3.14.0 +// - protoc-gen-go-grpc v1.3.0 +// - protoc v4.22.0 // source: grpc/health/v1/health.proto package grpc_health_v1 @@ -35,6 +35,11 @@ import ( // Requires gRPC-Go v1.32.0 or later. const _ = grpc.SupportPackageIsVersion7 +const ( + Health_Check_FullMethodName = "/grpc.health.v1.Health/Check" + Health_Watch_FullMethodName = "/grpc.health.v1.Health/Watch" +) + // HealthClient is the client API for Health service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. @@ -70,7 +75,7 @@ func NewHealthClient(cc grpc.ClientConnInterface) HealthClient { func (c *healthClient) Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) { out := new(HealthCheckResponse) - err := c.cc.Invoke(ctx, "/grpc.health.v1.Health/Check", in, out, opts...) + err := c.cc.Invoke(ctx, Health_Check_FullMethodName, in, out, opts...) if err != nil { return nil, err } @@ -78,7 +83,7 @@ func (c *healthClient) Check(ctx context.Context, in *HealthCheckRequest, opts . } func (c *healthClient) Watch(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (Health_WatchClient, error) { - stream, err := c.cc.NewStream(ctx, &Health_ServiceDesc.Streams[0], "/grpc.health.v1.Health/Watch", opts...) + stream, err := c.cc.NewStream(ctx, &Health_ServiceDesc.Streams[0], Health_Watch_FullMethodName, opts...) 
if err != nil { return nil, err } @@ -166,7 +171,7 @@ func _Health_Check_Handler(srv interface{}, ctx context.Context, dec func(interf } info := &grpc.UnaryServerInfo{ Server: srv, - FullMethod: "/grpc.health.v1.Health/Check", + FullMethod: Health_Check_FullMethodName, } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(HealthServer).Check(ctx, req.(*HealthCheckRequest)) diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/idle.go b/.ci/providerlint/vendor/google.golang.org/grpc/idle.go new file mode 100644 index 00000000000..dc3dc72f6b0 --- /dev/null +++ b/.ci/providerlint/vendor/google.golang.org/grpc/idle.go @@ -0,0 +1,287 @@ +/* + * + * Copyright 2023 gRPC authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package grpc + +import ( + "fmt" + "math" + "sync" + "sync/atomic" + "time" +) + +// For overriding in unit tests. +var timeAfterFunc = func(d time.Duration, f func()) *time.Timer { + return time.AfterFunc(d, f) +} + +// idlenessEnforcer is the functionality provided by grpc.ClientConn to enter +// and exit from idle mode. +type idlenessEnforcer interface { + exitIdleMode() error + enterIdleMode() error +} + +// idlenessManager defines the functionality required to track RPC activity on a +// channel. 
+type idlenessManager interface { + onCallBegin() error + onCallEnd() + close() +} + +type noopIdlenessManager struct{} + +func (noopIdlenessManager) onCallBegin() error { return nil } +func (noopIdlenessManager) onCallEnd() {} +func (noopIdlenessManager) close() {} + +// idlenessManagerImpl implements the idlenessManager interface. It uses atomic +// operations to synchronize access to shared state and a mutex to guarantee +// mutual exclusion in a critical section. +type idlenessManagerImpl struct { + // State accessed atomically. + lastCallEndTime int64 // Unix timestamp in nanos; time when the most recent RPC completed. + activeCallsCount int32 // Count of active RPCs; -math.MaxInt32 means channel is idle or is trying to get there. + activeSinceLastTimerCheck int32 // Boolean; True if there was an RPC since the last timer callback. + closed int32 // Boolean; True when the manager is closed. + + // Can be accessed without atomics or mutex since these are set at creation + // time and read-only after that. + enforcer idlenessEnforcer // Functionality provided by grpc.ClientConn. + timeout int64 // Idle timeout duration nanos stored as an int64. + + // idleMu is used to guarantee mutual exclusion in two scenarios: + // - Opposing intentions: + // - a: Idle timeout has fired and handleIdleTimeout() is trying to put + // the channel in idle mode because the channel has been inactive. + // - b: At the same time an RPC is made on the channel, and onCallBegin() + // is trying to prevent the channel from going idle. + // - Competing intentions: + // - The channel is in idle mode and there are multiple RPCs starting at + // the same time, all trying to move the channel out of idle. Only one + // of them should succeed in doing so, while the other RPCs should + // piggyback on the first one and be successfully handled. 
+ idleMu sync.RWMutex + actuallyIdle bool + timer *time.Timer +} + +// newIdlenessManager creates a new idleness manager implementation for the +// given idle timeout. +func newIdlenessManager(enforcer idlenessEnforcer, idleTimeout time.Duration) idlenessManager { + if idleTimeout == 0 { + return noopIdlenessManager{} + } + + i := &idlenessManagerImpl{ + enforcer: enforcer, + timeout: int64(idleTimeout), + } + i.timer = timeAfterFunc(idleTimeout, i.handleIdleTimeout) + return i +} + +// resetIdleTimer resets the idle timer to the given duration. This method +// should only be called from the timer callback. +func (i *idlenessManagerImpl) resetIdleTimer(d time.Duration) { + i.idleMu.Lock() + defer i.idleMu.Unlock() + + if i.timer == nil { + // Only close sets timer to nil. We are done. + return + } + + // It is safe to ignore the return value from Reset() because this method is + // only ever called from the timer callback, which means the timer has + // already fired. + i.timer.Reset(d) +} + +// handleIdleTimeout is the timer callback that is invoked upon expiry of the +// configured idle timeout. The channel is considered inactive if there are no +// ongoing calls and no RPC activity since the last time the timer fired. +func (i *idlenessManagerImpl) handleIdleTimeout() { + if i.isClosed() { + return + } + + if atomic.LoadInt32(&i.activeCallsCount) > 0 { + i.resetIdleTimer(time.Duration(i.timeout)) + return + } + + // There has been activity on the channel since we last got here. Reset the + // timer and return. + if atomic.LoadInt32(&i.activeSinceLastTimerCheck) == 1 { + // Set the timer to fire after a duration of idle timeout, calculated + // from the time the most recent RPC completed. 
+ atomic.StoreInt32(&i.activeSinceLastTimerCheck, 0) + i.resetIdleTimer(time.Duration(atomic.LoadInt64(&i.lastCallEndTime) + i.timeout - time.Now().UnixNano())) + return + } + + // This CAS operation is extremely likely to succeed given that there has + // been no activity since the last time we were here. Setting the + // activeCallsCount to -math.MaxInt32 indicates to onCallBegin() that the + // channel is either in idle mode or is trying to get there. + if !atomic.CompareAndSwapInt32(&i.activeCallsCount, 0, -math.MaxInt32) { + // This CAS operation can fail if an RPC started after we checked for + // activity at the top of this method, or one was ongoing from before + // the last time we were here. In both cases, reset the timer and return. + i.resetIdleTimer(time.Duration(i.timeout)) + return + } + + // Now that we've set the active calls count to -math.MaxInt32, it's time to + // actually move to idle mode. + if i.tryEnterIdleMode() { + // Successfully entered idle mode. No timer needed until we exit idle. + return + } + + // Failed to enter idle mode due to a concurrent RPC that kept the channel + // active, or because of an error from the channel. Undo the attempt to + // enter idle, and reset the timer to try again later. + atomic.AddInt32(&i.activeCallsCount, math.MaxInt32) + i.resetIdleTimer(time.Duration(i.timeout)) +} + +// tryEnterIdleMode instructs the channel to enter idle mode. But before +// that, it performs a last minute check to ensure that no new RPC has come in, +// making the channel active. +// +// Return value indicates whether or not the channel moved to idle mode. +// +// Holds idleMu which ensures mutual exclusion with exitIdleMode. +func (i *idlenessManagerImpl) tryEnterIdleMode() bool { + i.idleMu.Lock() + defer i.idleMu.Unlock() + + if atomic.LoadInt32(&i.activeCallsCount) != -math.MaxInt32 { + // We raced and lost to a new RPC. Very rare, but stop entering idle.
+ return false + } + if atomic.LoadInt32(&i.activeSinceLastTimerCheck) == 1 { + // A very short RPC could have come in (and also finished) after we + // checked the calls count and activity in handleIdleTimeout(), but + // before the CAS operation. So, we need to check for activity again. + return false + } + + // No new RPCs have come in since we last set the active calls count value + // -math.MaxInt32 in the timer callback. And since we have the lock, it is + // safe to enter idle mode now. + if err := i.enforcer.enterIdleMode(); err != nil { + logger.Errorf("Failed to enter idle mode: %v", err) + return false + } + + // Successfully entered idle mode. + i.actuallyIdle = true + return true +} + +// onCallBegin is invoked at the start of every RPC. +func (i *idlenessManagerImpl) onCallBegin() error { + if i.isClosed() { + return nil + } + + if atomic.AddInt32(&i.activeCallsCount, 1) > 0 { + // Channel is not idle now. Set the activity bit and allow the call. + atomic.StoreInt32(&i.activeSinceLastTimerCheck, 1) + return nil + } + + // Channel is either in idle mode or is in the process of moving to idle + // mode. Attempt to exit idle mode to allow this RPC. + if err := i.exitIdleMode(); err != nil { + // Undo the increment to calls count, and return an error causing the + // RPC to fail. + atomic.AddInt32(&i.activeCallsCount, -1) + return err + } + + atomic.StoreInt32(&i.activeSinceLastTimerCheck, 1) + return nil +} + +// exitIdleMode instructs the channel to exit idle mode. +// +// Holds idleMu which ensures mutual exclusion with tryEnterIdleMode. +func (i *idlenessManagerImpl) exitIdleMode() error { + i.idleMu.Lock() + defer i.idleMu.Unlock() + + if !i.actuallyIdle { + // This can happen in two scenarios: + // - handleIdleTimeout() set the calls count to -math.MaxInt32 and called + // tryEnterIdleMode(). But before the latter could grab the lock, an RPC + // came in and onCallBegin() noticed that the calls count is negative.
+ // - Channel is in idle mode, and multiple new RPCs come in at the same + // time, all of them notice a negative calls count in onCallBegin and get + // here. The first one to get the lock would get the channel to exit idle. + // + // Either way, nothing to do here. + return nil + } + + if err := i.enforcer.exitIdleMode(); err != nil { + return fmt.Errorf("channel failed to exit idle mode: %v", err) + } + + // Undo the idle entry process. This also respects any new RPC attempts. + atomic.AddInt32(&i.activeCallsCount, math.MaxInt32) + i.actuallyIdle = false + + // Start a new timer to fire after the configured idle timeout. + i.timer = timeAfterFunc(time.Duration(i.timeout), i.handleIdleTimeout) + return nil +} + +// onCallEnd is invoked at the end of every RPC. +func (i *idlenessManagerImpl) onCallEnd() { + if i.isClosed() { + return + } + + // Record the time at which the most recent call finished. + atomic.StoreInt64(&i.lastCallEndTime, time.Now().UnixNano()) + + // Decrement the active calls count. This count can temporarily go negative + // when the timer callback is in the process of moving the channel to idle + // mode, but one or more RPCs come in and complete before the timer callback + // can get done with the process of moving to idle mode.
+ atomic.AddInt32(&i.activeCallsCount, -1) +} + +func (i *idlenessManagerImpl) isClosed() bool { + return atomic.LoadInt32(&i.closed) == 1 +} + +func (i *idlenessManagerImpl) close() { + atomic.StoreInt32(&i.closed, 1) + + i.idleMu.Lock() + i.timer.Stop() + i.timer = nil + i.idleMu.Unlock() +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/binarylog.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/binarylog.go index 809d73ccafb..755fdebc1b1 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/binarylog.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/binarylog.go @@ -28,8 +28,13 @@ import ( "google.golang.org/grpc/internal/grpcutil" ) -// Logger is the global binary logger. It can be used to get binary logger for -// each method. +var grpclogLogger = grpclog.Component("binarylog") + +// Logger specifies MethodLoggers for method names with a Log call that +// takes a context. +// +// This is used in the 1.0 release of gcp/observability, and thus must not be +// deleted or changed. type Logger interface { GetMethodLogger(methodName string) MethodLogger } @@ -40,8 +45,6 @@ type Logger interface { // It is used to get a MethodLogger for each individual method. var binLogger Logger -var grpclogLogger = grpclog.Component("binarylog") - // SetLogger sets the binary logger. // // Only call this at init time. 
diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/method_logger.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/method_logger.go index 179f4a26d13..6c3f632215f 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/method_logger.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/method_logger.go @@ -19,6 +19,7 @@ package binarylog import ( + "context" "net" "strings" "sync/atomic" @@ -26,7 +27,7 @@ import ( "github.com/golang/protobuf/proto" "github.com/golang/protobuf/ptypes" - pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" + binlogpb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) @@ -48,8 +49,11 @@ func (g *callIDGenerator) reset() { var idGen callIDGenerator // MethodLogger is the sub-logger for each method. +// +// This is used in the 1.0 release of gcp/observability, and thus must not be +// deleted or changed. type MethodLogger interface { - Log(LogEntryConfig) + Log(context.Context, LogEntryConfig) } // TruncatingMethodLogger is a method logger that truncates headers and messages @@ -64,6 +68,9 @@ type TruncatingMethodLogger struct { } // NewTruncatingMethodLogger returns a new truncating method logger. +// +// This is used in the 1.0 release of gcp/observability, and thus must not be +// deleted or changed. func NewTruncatingMethodLogger(h, m uint64) *TruncatingMethodLogger { return &TruncatingMethodLogger{ headerMaxLen: h, @@ -79,7 +86,7 @@ func NewTruncatingMethodLogger(h, m uint64) *TruncatingMethodLogger { // Build is an internal only method for building the proto message out of the // input event. It's made public to enable other library to reuse as much logic // in TruncatingMethodLogger as possible. 
-func (ml *TruncatingMethodLogger) Build(c LogEntryConfig) *pb.GrpcLogEntry { +func (ml *TruncatingMethodLogger) Build(c LogEntryConfig) *binlogpb.GrpcLogEntry { m := c.toProto() timestamp, _ := ptypes.TimestampProto(time.Now()) m.Timestamp = timestamp @@ -87,22 +94,22 @@ func (ml *TruncatingMethodLogger) Build(c LogEntryConfig) *pb.GrpcLogEntry { m.SequenceIdWithinCall = ml.idWithinCallGen.next() switch pay := m.Payload.(type) { - case *pb.GrpcLogEntry_ClientHeader: + case *binlogpb.GrpcLogEntry_ClientHeader: m.PayloadTruncated = ml.truncateMetadata(pay.ClientHeader.GetMetadata()) - case *pb.GrpcLogEntry_ServerHeader: + case *binlogpb.GrpcLogEntry_ServerHeader: m.PayloadTruncated = ml.truncateMetadata(pay.ServerHeader.GetMetadata()) - case *pb.GrpcLogEntry_Message: + case *binlogpb.GrpcLogEntry_Message: m.PayloadTruncated = ml.truncateMessage(pay.Message) } return m } // Log creates a proto binary log entry, and logs it to the sink. -func (ml *TruncatingMethodLogger) Log(c LogEntryConfig) { +func (ml *TruncatingMethodLogger) Log(ctx context.Context, c LogEntryConfig) { ml.sink.Write(ml.Build(c)) } -func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated bool) { +func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *binlogpb.Metadata) (truncated bool) { if ml.headerMaxLen == maxUInt { return false } @@ -121,7 +128,7 @@ func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated // but not counted towards the size limit. 
continue } - currentEntryLen := uint64(len(entry.Value)) + currentEntryLen := uint64(len(entry.GetKey())) + uint64(len(entry.GetValue())) if currentEntryLen > bytesLimit { break } @@ -132,7 +139,7 @@ func (ml *TruncatingMethodLogger) truncateMetadata(mdPb *pb.Metadata) (truncated return truncated } -func (ml *TruncatingMethodLogger) truncateMessage(msgPb *pb.Message) (truncated bool) { +func (ml *TruncatingMethodLogger) truncateMessage(msgPb *binlogpb.Message) (truncated bool) { if ml.messageMaxLen == maxUInt { return false } @@ -144,8 +151,11 @@ func (ml *TruncatingMethodLogger) truncateMessage(msgPb *pb.Message) (truncated } // LogEntryConfig represents the configuration for binary log entry. +// +// This is used in the 1.0 release of gcp/observability, and thus must not be +// deleted or changed. type LogEntryConfig interface { - toProto() *pb.GrpcLogEntry + toProto() *binlogpb.GrpcLogEntry } // ClientHeader configs the binary log entry to be a ClientHeader entry. @@ -159,10 +169,10 @@ type ClientHeader struct { PeerAddr net.Addr } -func (c *ClientHeader) toProto() *pb.GrpcLogEntry { +func (c *ClientHeader) toProto() *binlogpb.GrpcLogEntry { // This function doesn't need to set all the fields (e.g. seq ID). The Log // function will set the fields when necessary. 
- clientHeader := &pb.ClientHeader{ + clientHeader := &binlogpb.ClientHeader{ Metadata: mdToMetadataProto(c.Header), MethodName: c.MethodName, Authority: c.Authority, @@ -170,16 +180,16 @@ func (c *ClientHeader) toProto() *pb.GrpcLogEntry { if c.Timeout > 0 { clientHeader.Timeout = ptypes.DurationProto(c.Timeout) } - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, - Payload: &pb.GrpcLogEntry_ClientHeader{ + ret := &binlogpb.GrpcLogEntry{ + Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CLIENT_HEADER, + Payload: &binlogpb.GrpcLogEntry_ClientHeader{ ClientHeader: clientHeader, }, } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { - ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } if c.PeerAddr != nil { ret.Peer = addrToProto(c.PeerAddr) @@ -195,19 +205,19 @@ type ServerHeader struct { PeerAddr net.Addr } -func (c *ServerHeader) toProto() *pb.GrpcLogEntry { - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER, - Payload: &pb.GrpcLogEntry_ServerHeader{ - ServerHeader: &pb.ServerHeader{ +func (c *ServerHeader) toProto() *binlogpb.GrpcLogEntry { + ret := &binlogpb.GrpcLogEntry{ + Type: binlogpb.GrpcLogEntry_EVENT_TYPE_SERVER_HEADER, + Payload: &binlogpb.GrpcLogEntry_ServerHeader{ + ServerHeader: &binlogpb.ServerHeader{ Metadata: mdToMetadataProto(c.Header), }, }, } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { - ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } if c.PeerAddr != nil { ret.Peer = addrToProto(c.PeerAddr) @@ -223,7 +233,7 @@ type ClientMessage struct { Message interface{} } -func (c *ClientMessage) toProto() *pb.GrpcLogEntry { +func (c *ClientMessage) toProto() *binlogpb.GrpcLogEntry { var ( data []byte err error @@ -238,19 +248,19 @@ func (c *ClientMessage) toProto() 
*pb.GrpcLogEntry { } else { grpclogLogger.Infof("binarylogging: message to log is neither proto.message nor []byte") } - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE, - Payload: &pb.GrpcLogEntry_Message{ - Message: &pb.Message{ + ret := &binlogpb.GrpcLogEntry{ + Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CLIENT_MESSAGE, + Payload: &binlogpb.GrpcLogEntry_Message{ + Message: &binlogpb.Message{ Length: uint32(len(data)), Data: data, }, }, } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { - ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } return ret } @@ -263,7 +273,7 @@ type ServerMessage struct { Message interface{} } -func (c *ServerMessage) toProto() *pb.GrpcLogEntry { +func (c *ServerMessage) toProto() *binlogpb.GrpcLogEntry { var ( data []byte err error @@ -278,19 +288,19 @@ func (c *ServerMessage) toProto() *pb.GrpcLogEntry { } else { grpclogLogger.Infof("binarylogging: message to log is neither proto.message nor []byte") } - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE, - Payload: &pb.GrpcLogEntry_Message{ - Message: &pb.Message{ + ret := &binlogpb.GrpcLogEntry{ + Type: binlogpb.GrpcLogEntry_EVENT_TYPE_SERVER_MESSAGE, + Payload: &binlogpb.GrpcLogEntry_Message{ + Message: &binlogpb.Message{ Length: uint32(len(data)), Data: data, }, }, } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { - ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } return ret } @@ -300,15 +310,15 @@ type ClientHalfClose struct { OnClientSide bool } -func (c *ClientHalfClose) toProto() *pb.GrpcLogEntry { - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE, +func (c *ClientHalfClose) toProto() *binlogpb.GrpcLogEntry { + ret := &binlogpb.GrpcLogEntry{ + Type: 
binlogpb.GrpcLogEntry_EVENT_TYPE_CLIENT_HALF_CLOSE, Payload: nil, // No payload here. } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { - ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } return ret } @@ -324,7 +334,7 @@ type ServerTrailer struct { PeerAddr net.Addr } -func (c *ServerTrailer) toProto() *pb.GrpcLogEntry { +func (c *ServerTrailer) toProto() *binlogpb.GrpcLogEntry { st, ok := status.FromError(c.Err) if !ok { grpclogLogger.Info("binarylogging: error in trailer is not a status error") @@ -340,10 +350,10 @@ func (c *ServerTrailer) toProto() *pb.GrpcLogEntry { grpclogLogger.Infof("binarylogging: failed to marshal status proto: %v", err) } } - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER, - Payload: &pb.GrpcLogEntry_Trailer{ - Trailer: &pb.Trailer{ + ret := &binlogpb.GrpcLogEntry{ + Type: binlogpb.GrpcLogEntry_EVENT_TYPE_SERVER_TRAILER, + Payload: &binlogpb.GrpcLogEntry_Trailer{ + Trailer: &binlogpb.Trailer{ Metadata: mdToMetadataProto(c.Trailer), StatusCode: uint32(st.Code()), StatusMessage: st.Message(), @@ -352,9 +362,9 @@ func (c *ServerTrailer) toProto() *pb.GrpcLogEntry { }, } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { - ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } if c.PeerAddr != nil { ret.Peer = addrToProto(c.PeerAddr) @@ -367,15 +377,15 @@ type Cancel struct { OnClientSide bool } -func (c *Cancel) toProto() *pb.GrpcLogEntry { - ret := &pb.GrpcLogEntry{ - Type: pb.GrpcLogEntry_EVENT_TYPE_CANCEL, +func (c *Cancel) toProto() *binlogpb.GrpcLogEntry { + ret := &binlogpb.GrpcLogEntry{ + Type: binlogpb.GrpcLogEntry_EVENT_TYPE_CANCEL, Payload: nil, } if c.OnClientSide { - ret.Logger = pb.GrpcLogEntry_LOGGER_CLIENT + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_CLIENT } else { 
- ret.Logger = pb.GrpcLogEntry_LOGGER_SERVER + ret.Logger = binlogpb.GrpcLogEntry_LOGGER_SERVER } return ret } @@ -392,15 +402,15 @@ func metadataKeyOmit(key string) bool { return strings.HasPrefix(key, "grpc-") } -func mdToMetadataProto(md metadata.MD) *pb.Metadata { - ret := &pb.Metadata{} +func mdToMetadataProto(md metadata.MD) *binlogpb.Metadata { + ret := &binlogpb.Metadata{} for k, vv := range md { if metadataKeyOmit(k) { continue } for _, v := range vv { ret.Entry = append(ret.Entry, - &pb.MetadataEntry{ + &binlogpb.MetadataEntry{ Key: k, Value: []byte(v), }, @@ -410,26 +420,26 @@ func mdToMetadataProto(md metadata.MD) *pb.Metadata { return ret } -func addrToProto(addr net.Addr) *pb.Address { - ret := &pb.Address{} +func addrToProto(addr net.Addr) *binlogpb.Address { + ret := &binlogpb.Address{} switch a := addr.(type) { case *net.TCPAddr: if a.IP.To4() != nil { - ret.Type = pb.Address_TYPE_IPV4 + ret.Type = binlogpb.Address_TYPE_IPV4 } else if a.IP.To16() != nil { - ret.Type = pb.Address_TYPE_IPV6 + ret.Type = binlogpb.Address_TYPE_IPV6 } else { - ret.Type = pb.Address_TYPE_UNKNOWN + ret.Type = binlogpb.Address_TYPE_UNKNOWN // Do not set address and port fields. 
break } ret.Address = a.IP.String() ret.IpPort = uint32(a.Port) case *net.UnixAddr: - ret.Type = pb.Address_TYPE_UNIX + ret.Type = binlogpb.Address_TYPE_UNIX ret.Address = a.String() default: - ret.Type = pb.Address_TYPE_UNKNOWN + ret.Type = binlogpb.Address_TYPE_UNKNOWN } return ret } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/sink.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/sink.go index c2fdd58b319..264de387c2a 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/sink.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/binarylog/sink.go @@ -26,7 +26,7 @@ import ( "time" "github.com/golang/protobuf/proto" - pb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" + binlogpb "google.golang.org/grpc/binarylog/grpc_binarylog_v1" ) var ( @@ -42,15 +42,15 @@ type Sink interface { // Write will be called to write the log entry into the sink. // // It should be thread-safe so it can be called in parallel. - Write(*pb.GrpcLogEntry) error + Write(*binlogpb.GrpcLogEntry) error // Close will be called when the Sink is replaced by a new Sink. Close() error } type noopSink struct{} -func (ns *noopSink) Write(*pb.GrpcLogEntry) error { return nil } -func (ns *noopSink) Close() error { return nil } +func (ns *noopSink) Write(*binlogpb.GrpcLogEntry) error { return nil } +func (ns *noopSink) Close() error { return nil } // newWriterSink creates a binary log sink with the given writer. 
// @@ -66,7 +66,7 @@ type writerSink struct { out io.Writer } -func (ws *writerSink) Write(e *pb.GrpcLogEntry) error { +func (ws *writerSink) Write(e *binlogpb.GrpcLogEntry) error { b, err := proto.Marshal(e) if err != nil { grpclogLogger.Errorf("binary logging: failed to marshal proto message: %v", err) @@ -96,7 +96,7 @@ type bufferedSink struct { done chan struct{} } -func (fs *bufferedSink) Write(e *pb.GrpcLogEntry) error { +func (fs *bufferedSink) Write(e *binlogpb.GrpcLogEntry) error { fs.mu.Lock() defer fs.mu.Unlock() if !fs.flusherStarted { diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/buffer/unbounded.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/buffer/unbounded.go index 9f6a0c1200d..81c2f5fd761 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/buffer/unbounded.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/buffer/unbounded.go @@ -35,6 +35,7 @@ import "sync" // internal/transport/transport.go for an example of this. type Unbounded struct { c chan interface{} + closed bool mu sync.Mutex backlog []interface{} } @@ -47,16 +48,18 @@ func NewUnbounded() *Unbounded { // Put adds t to the unbounded buffer. func (b *Unbounded) Put(t interface{}) { b.mu.Lock() + defer b.mu.Unlock() + if b.closed { + return + } if len(b.backlog) == 0 { select { case b.c <- t: - b.mu.Unlock() return default: } } b.backlog = append(b.backlog, t) - b.mu.Unlock() } // Load sends the earliest buffered data, if any, onto the read channel @@ -64,6 +67,10 @@ func (b *Unbounded) Put(t interface{}) { // value from the read channel. 
func (b *Unbounded) Load() { b.mu.Lock() + defer b.mu.Unlock() + if b.closed { + return + } if len(b.backlog) > 0 { select { case b.c <- b.backlog[0]: @@ -72,7 +79,6 @@ func (b *Unbounded) Load() { default: } } - b.mu.Unlock() } // Get returns a read channel on which values added to the buffer, via Put(), @@ -80,6 +86,20 @@ func (b *Unbounded) Load() { // // Upon reading a value from this channel, users are expected to call Load() to // send the next buffered value onto the channel if there is any. +// +// If the unbounded buffer is closed, the read channel returned by this method +// is closed. func (b *Unbounded) Get() <-chan interface{} { return b.c } + +// Close closes the unbounded buffer. +func (b *Unbounded) Close() { + b.mu.Lock() + defer b.mu.Unlock() + if b.closed { + return + } + b.closed = true + close(b.c) +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/envconfig.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/envconfig.go index 7edd196bd3d..80fd5c7d2a4 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/envconfig.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/envconfig.go @@ -21,19 +21,46 @@ package envconfig import ( "os" + "strconv" "strings" ) -const ( - prefix = "GRPC_GO_" - txtErrIgnoreStr = prefix + "IGNORE_TXT_ERRORS" - advertiseCompressorsStr = prefix + "ADVERTISE_COMPRESSORS" -) - var ( // TXTErrIgnore is set if TXT errors should be ignored ("GRPC_GO_IGNORE_TXT_ERRORS" is not "false"). - TXTErrIgnore = !strings.EqualFold(os.Getenv(txtErrIgnoreStr), "false") + TXTErrIgnore = boolFromEnv("GRPC_GO_IGNORE_TXT_ERRORS", true) // AdvertiseCompressors is set if registered compressor should be advertised // ("GRPC_GO_ADVERTISE_COMPRESSORS" is not "false"). 
- AdvertiseCompressors = !strings.EqualFold(os.Getenv(advertiseCompressorsStr), "false") + AdvertiseCompressors = boolFromEnv("GRPC_GO_ADVERTISE_COMPRESSORS", true) + // RingHashCap indicates the maximum ring size which defaults to 4096 + // entries but may be overridden by setting the environment variable + // "GRPC_RING_HASH_CAP". This does not override the default bounds + // checking which NACKs configs specifying ring sizes > 8*1024*1024 (~8M). + RingHashCap = uint64FromEnv("GRPC_RING_HASH_CAP", 4096, 1, 8*1024*1024) + // PickFirstLBConfig is set if we should support configuration of the + // pick_first LB policy, which can be enabled by setting the environment + // variable "GRPC_EXPERIMENTAL_PICKFIRST_LB_CONFIG" to "true". + PickFirstLBConfig = boolFromEnv("GRPC_EXPERIMENTAL_PICKFIRST_LB_CONFIG", false) ) + +func boolFromEnv(envVar string, def bool) bool { + if def { + // The default is true; return true unless the variable is "false". + return !strings.EqualFold(os.Getenv(envVar), "false") + } + // The default is false; return false unless the variable is "true". + return strings.EqualFold(os.Getenv(envVar), "true") +} + +func uint64FromEnv(envVar string, def, min, max uint64) uint64 { + v, err := strconv.ParseUint(os.Getenv(envVar), 10, 64) + if err != nil { + return def + } + if v < min { + return min + } + if v > max { + return max + } + return v +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/observability.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/observability.go index 821dd0a7c19..dd314cfb18f 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/observability.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/observability.go @@ -28,9 +28,15 @@ const ( var ( // ObservabilityConfig is the json configuration for the gcp/observability // package specified directly in the envObservabilityConfig env var. 
+ // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. ObservabilityConfig = os.Getenv(envObservabilityConfig) // ObservabilityConfigFile is the json configuration for the // gcp/observability specified in a file with the location specified in // envObservabilityConfigFile env var. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. ObservabilityConfigFile = os.Getenv(envObservabilityConfigFile) ) diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/xds.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/xds.go index af09711a3e8..02b4b6a1c10 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/xds.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/envconfig/xds.go @@ -20,7 +20,6 @@ package envconfig import ( "os" - "strings" ) const ( @@ -36,16 +35,6 @@ const ( // // When both bootstrap FileName and FileContent are set, FileName is used. XDSBootstrapFileContentEnv = "GRPC_XDS_BOOTSTRAP_CONFIG" - - ringHashSupportEnv = "GRPC_XDS_EXPERIMENTAL_ENABLE_RING_HASH" - clientSideSecuritySupportEnv = "GRPC_XDS_EXPERIMENTAL_SECURITY_SUPPORT" - aggregateAndDNSSupportEnv = "GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER" - rbacSupportEnv = "GRPC_XDS_EXPERIMENTAL_RBAC" - outlierDetectionSupportEnv = "GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION" - federationEnv = "GRPC_EXPERIMENTAL_XDS_FEDERATION" - rlsInXDSEnv = "GRPC_EXPERIMENTAL_XDS_RLS_LB" - - c2pResolverTestOnlyTrafficDirectorURIEnv = "GRPC_TEST_ONLY_GOOGLE_C2P_RESOLVER_TRAFFIC_DIRECTOR_URI" ) var ( @@ -64,38 +53,43 @@ var ( // XDSRingHash indicates whether ring hash support is enabled, which can be // disabled by setting the environment variable // "GRPC_XDS_EXPERIMENTAL_ENABLE_RING_HASH" to "false". 
- XDSRingHash = !strings.EqualFold(os.Getenv(ringHashSupportEnv), "false") + XDSRingHash = boolFromEnv("GRPC_XDS_EXPERIMENTAL_ENABLE_RING_HASH", true) // XDSClientSideSecurity is used to control processing of security // configuration on the client-side. // // Note that there is no env var protection for the server-side because we // have a brand new API on the server-side and users explicitly need to use // the new API to get security integration on the server. - XDSClientSideSecurity = !strings.EqualFold(os.Getenv(clientSideSecuritySupportEnv), "false") - // XDSAggregateAndDNS indicates whether processing of aggregated cluster - // and DNS cluster is enabled, which can be enabled by setting the - // environment variable - // "GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER" to - // "true". - XDSAggregateAndDNS = !strings.EqualFold(os.Getenv(aggregateAndDNSSupportEnv), "false") + XDSClientSideSecurity = boolFromEnv("GRPC_XDS_EXPERIMENTAL_SECURITY_SUPPORT", true) + // XDSAggregateAndDNS indicates whether processing of aggregated cluster and + // DNS cluster is enabled, which can be disabled by setting the environment + // variable "GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER" + // to "false". + XDSAggregateAndDNS = boolFromEnv("GRPC_XDS_EXPERIMENTAL_ENABLE_AGGREGATE_AND_LOGICAL_DNS_CLUSTER", true) // XDSRBAC indicates whether xDS configured RBAC HTTP Filter is enabled, // which can be disabled by setting the environment variable // "GRPC_XDS_EXPERIMENTAL_RBAC" to "false". - XDSRBAC = !strings.EqualFold(os.Getenv(rbacSupportEnv), "false") + XDSRBAC = boolFromEnv("GRPC_XDS_EXPERIMENTAL_RBAC", true) // XDSOutlierDetection indicates whether outlier detection support is // enabled, which can be disabled by setting the environment variable // "GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION" to "false". 
- XDSOutlierDetection = !strings.EqualFold(os.Getenv(outlierDetectionSupportEnv), "false") - // XDSFederation indicates whether federation support is enabled. - XDSFederation = strings.EqualFold(os.Getenv(federationEnv), "true") + XDSOutlierDetection = boolFromEnv("GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION", true) + // XDSFederation indicates whether federation support is enabled, which can + // be enabled by setting the environment variable + // "GRPC_EXPERIMENTAL_XDS_FEDERATION" to "true". + XDSFederation = boolFromEnv("GRPC_EXPERIMENTAL_XDS_FEDERATION", true) // XDSRLS indicates whether processing of Cluster Specifier plugins and - // support for the RLS CLuster Specifier is enabled, which can be enabled by + // support for the RLS CLuster Specifier is enabled, which can be disabled by // setting the environment variable "GRPC_EXPERIMENTAL_XDS_RLS_LB" to - // "true". - XDSRLS = strings.EqualFold(os.Getenv(rlsInXDSEnv), "true") + // "false". + XDSRLS = boolFromEnv("GRPC_EXPERIMENTAL_XDS_RLS_LB", true) // C2PResolverTestOnlyTrafficDirectorURI is the TD URI for testing. - C2PResolverTestOnlyTrafficDirectorURI = os.Getenv(c2pResolverTestOnlyTrafficDirectorURIEnv) + C2PResolverTestOnlyTrafficDirectorURI = os.Getenv("GRPC_TEST_ONLY_GOOGLE_C2P_RESOLVER_TRAFFIC_DIRECTOR_URI") + // XDSCustomLBPolicy indicates whether Custom LB Policies are enabled, which + // can be disabled by setting the environment variable + // "GRPC_EXPERIMENTAL_XDS_CUSTOM_LB_CONFIG" to "false". 
+ XDSCustomLBPolicy = boolFromEnv("GRPC_EXPERIMENTAL_XDS_CUSTOM_LB_CONFIG", true) ) diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpclog/prefixLogger.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpclog/prefixLogger.go index 82af70e96f1..02224b42ca8 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpclog/prefixLogger.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpclog/prefixLogger.go @@ -63,6 +63,9 @@ func (pl *PrefixLogger) Errorf(format string, args ...interface{}) { // Debugf does info logging at verbose level 2. func (pl *PrefixLogger) Debugf(format string, args ...interface{}) { + // TODO(6044): Refactor interfaces LoggerV2 and DepthLogger, and maybe + // rewrite PrefixLogger a little to ensure that we don't use the global + // `Logger` here, and instead use the `logger` field. if !Logger.V(2) { return } @@ -73,6 +76,15 @@ func (pl *PrefixLogger) Debugf(format string, args ...interface{}) { return } InfoDepth(1, fmt.Sprintf(format, args...)) + +} + +// V reports whether verbosity level l is at least the requested verbose level. +func (pl *PrefixLogger) V(l int) bool { + // TODO(6044): Refactor interfaces LoggerV2 and DepthLogger, and maybe + // rewrite PrefixLogger a little to ensure that we don't use the global + // `Logger` here, and instead use the `logger` field. + return Logger.V(l) } // NewPrefixLogger creates a prefix logger with the given prefix. diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcrand/grpcrand.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcrand/grpcrand.go index 517ea70642a..d08e3e90766 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcrand/grpcrand.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcrand/grpcrand.go @@ -72,3 +72,17 @@ func Uint64() uint64 { defer mu.Unlock() return r.Uint64() } + +// Uint32 implements rand.Uint32 on the grpcrand global source. 
+func Uint32() uint32 { + mu.Lock() + defer mu.Unlock() + return r.Uint32() +} + +// Shuffle implements rand.Shuffle on the grpcrand global source. +var Shuffle = func(n int, f func(int, int)) { + mu.Lock() + defer mu.Unlock() + r.Shuffle(n, f) +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go new file mode 100644 index 00000000000..37b8d4117e7 --- /dev/null +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/grpcsync/callback_serializer.go @@ -0,0 +1,119 @@ +/* + * + * Copyright 2022 gRPC authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package grpcsync + +import ( + "context" + "sync" + + "google.golang.org/grpc/internal/buffer" +) + +// CallbackSerializer provides a mechanism to schedule callbacks in a +// synchronized manner. It provides a FIFO guarantee on the order of execution +// of scheduled callbacks. New callbacks can be scheduled by invoking the +// Schedule() method. +// +// This type is safe for concurrent access. +type CallbackSerializer struct { + // Done is closed once the serializer is shut down completely, i.e all + // scheduled callbacks are executed and the serializer has deallocated all + // its resources. 
+ Done chan struct{} + + callbacks *buffer.Unbounded + closedMu sync.Mutex + closed bool +} + +// NewCallbackSerializer returns a new CallbackSerializer instance. The provided +// context will be passed to the scheduled callbacks. Users should cancel the +// provided context to shut down the CallbackSerializer. It is guaranteed that no +// callbacks will be added once this context is canceled, and any pending un-run +// callbacks will be executed before the serializer is shut down. +func NewCallbackSerializer(ctx context.Context) *CallbackSerializer { + t := &CallbackSerializer{ + Done: make(chan struct{}), + callbacks: buffer.NewUnbounded(), + } + go t.run(ctx) + return t +} + +// Schedule adds a callback to be scheduled after existing callbacks are run. +// +// Callbacks are expected to honor the context when performing any blocking +// operations, and should return early when the context is canceled. +// +// Return value indicates if the callback was successfully added to the list of +// callbacks to be executed by the serializer. It is not possible to add +// callbacks once the context passed to NewCallbackSerializer is canceled. +func (t *CallbackSerializer) Schedule(f func(ctx context.Context)) bool { + t.closedMu.Lock() + defer t.closedMu.Unlock() + + if t.closed { + return false + } + t.callbacks.Put(f) + return true +} + +func (t *CallbackSerializer) run(ctx context.Context) { + var backlog []func(context.Context) + + defer close(t.Done) + for ctx.Err() == nil { + select { + case <-ctx.Done(): + // Do nothing here. Next iteration of the for loop will not happen, + // since ctx.Err() would be non-nil. + case callback, ok := <-t.callbacks.Get(): + if !ok { + return + } + t.callbacks.Load() + callback.(func(ctx context.Context))(ctx) + } + } + + // Fetch pending callbacks if any, and execute them before returning from + // this method and closing t.Done. 
+ t.closedMu.Lock() + t.closed = true + backlog = t.fetchPendingCallbacks() + t.callbacks.Close() + t.closedMu.Unlock() + for _, b := range backlog { + b(ctx) + } +} + +func (t *CallbackSerializer) fetchPendingCallbacks() []func(context.Context) { + var backlog []func(context.Context) + for { + select { + case b := <-t.callbacks.Get(): + backlog = append(backlog, b.(func(context.Context))) + t.callbacks.Load() + default: + return backlog + } + } +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/internal.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/internal.go index fd0ee3dcaf1..42ff39c8444 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/internal.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/internal.go @@ -58,6 +58,12 @@ var ( // gRPC server. An xDS-enabled server needs to know what type of credentials // is configured on the underlying gRPC server. This is set by server.go. GetServerCredentials interface{} // func (*grpc.Server) credentials.TransportCredentials + // CanonicalString returns the canonical string of the code defined here: + // https://github.com/grpc/grpc/blob/master/doc/statuscodes.md. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. + CanonicalString interface{} // func (codes.Code) string // DrainServerTransports initiates a graceful close of existing connections // on a gRPC server accepted on the provided listener address. An // xDS-enabled server invokes this method on a grpc.Server when a particular @@ -66,26 +72,54 @@ var ( // AddGlobalServerOptions adds an array of ServerOption that will be // effective globally for newly created servers. The priority will be: 1. // user-provided; 2. this method; 3. default values. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. 
AddGlobalServerOptions interface{} // func(opt ...ServerOption) // ClearGlobalServerOptions clears the array of extra ServerOption. This // method is useful in testing and benchmarking. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. ClearGlobalServerOptions func() // AddGlobalDialOptions adds an array of DialOption that will be effective // globally for newly created client channels. The priority will be: 1. // user-provided; 2. this method; 3. default values. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. AddGlobalDialOptions interface{} // func(opt ...DialOption) + // DisableGlobalDialOptions returns a DialOption that prevents the + // ClientConn from applying the global DialOptions (set via + // AddGlobalDialOptions). + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. + DisableGlobalDialOptions interface{} // func() grpc.DialOption // ClearGlobalDialOptions clears the array of extra DialOption. This // method is useful in testing and benchmarking. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. ClearGlobalDialOptions func() + // JoinDialOptions combines the dial options passed as arguments into a + // single dial option. + JoinDialOptions interface{} // func(...grpc.DialOption) grpc.DialOption // JoinServerOptions combines the server options passed as arguments into a // single server option. JoinServerOptions interface{} // func(...grpc.ServerOption) grpc.ServerOption // WithBinaryLogger returns a DialOption that specifies the binary logger // for a ClientConn. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. 
WithBinaryLogger interface{} // func(binarylog.Logger) grpc.DialOption // BinaryLogger returns a ServerOption that can set the binary logger for a // server. + // + // This is used in the 1.0 release of gcp/observability, and thus must not be + // deleted or changed. BinaryLogger interface{} // func(binarylog.Logger) grpc.ServerOption // NewXDSResolverWithConfigForTesting creates a new xds resolver builder using @@ -127,6 +161,9 @@ var ( // // TODO: Remove this function once the RBAC env var is removed. UnregisterRBACHTTPFilterForTesting func() + + // ORCAAllowAnyMinReportingInterval is for examples/orca use ONLY. + ORCAAllowAnyMinReportingInterval interface{} // func(so *orca.ServiceOptions) ) // HealthChecker defines the signature of the client-side LB channel health checking function. diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/metadata/metadata.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/metadata/metadata.go index b2980f8ac44..c82e608e077 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/metadata/metadata.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/metadata/metadata.go @@ -76,33 +76,11 @@ func Set(addr resolver.Address, md metadata.MD) resolver.Address { return addr } -// Validate returns an error if the input md contains invalid keys or values. -// -// If the header is not a pseudo-header, the following items are checked: -// - header names must contain one or more characters from this set [0-9 a-z _ - .]. -// - if the header-name ends with a "-bin" suffix, no validation of the header value is performed. -// - otherwise, the header value must contain one or more characters from the set [%x20-%x7E]. +// Validate validates every pair in md with ValidatePair. 
func Validate(md metadata.MD) error { for k, vals := range md { - // pseudo-header will be ignored - if k[0] == ':' { - continue - } - // check key, for i that saving a conversion if not using for range - for i := 0; i < len(k); i++ { - r := k[i] - if !(r >= 'a' && r <= 'z') && !(r >= '0' && r <= '9') && r != '.' && r != '-' && r != '_' { - return fmt.Errorf("header key %q contains illegal characters not in [0-9a-z-_.]", k) - } - } - if strings.HasSuffix(k, "-bin") { - continue - } - // check value - for _, val := range vals { - if hasNotPrintable(val) { - return fmt.Errorf("header key %q contains value with non-printable ASCII characters", k) - } + if err := ValidatePair(k, vals...); err != nil { + return err + } } } return nil @@ -118,3 +96,37 @@ func hasNotPrintable(msg string) bool { } return false } + +// ValidatePair validates a key-value pair with the following rules (pseudo-headers are skipped): +// +// - key must contain one or more characters. +// - the characters in the key must be contained in [0-9 a-z _ - .]. +// - if the key ends with a "-bin" suffix, no validation of the corresponding value is performed. +// - the characters in every value must be printable (in [%x20-%x7E]). +func ValidatePair(key string, vals ...string) error { + // key should not be empty + if key == "" { + return fmt.Errorf("there is an empty key in the header") + } + // pseudo-header will be ignored + if key[0] == ':' { + return nil + } + // check key; iterate over bytes to avoid the rune conversion of a range loop + for i := 0; i < len(key); i++ { + r := key[i] + if !(r >= 'a' && r <= 'z') && !(r >= '0' && r <= '9') && r != '.' 
&& r != '-' && r != '_' { + return fmt.Errorf("header key %q contains illegal characters not in [0-9a-z-_.]", key) + } + } + if strings.HasSuffix(key, "-bin") { + return nil + } + // check value + for _, val := range vals { + if hasNotPrintable(val) { + return fmt.Errorf("header key %q contains value with non-printable ASCII characters", key) + } + } + return nil +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/dns/dns_resolver.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/dns/dns_resolver.go index 75301c51491..09a667f33cb 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/dns/dns_resolver.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/dns/dns_resolver.go @@ -116,7 +116,7 @@ type dnsBuilder struct{} // Build creates and starts a DNS resolver that watches the name resolution of the target. func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) { - host, port, err := parseTarget(target.Endpoint, defaultPort) + host, port, err := parseTarget(target.Endpoint(), defaultPort) if err != nil { return nil, err } @@ -140,10 +140,10 @@ func (b *dnsBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts disableServiceConfig: opts.DisableServiceConfig, } - if target.Authority == "" { + if target.URL.Host == "" { d.resolver = defaultResolver } else { - d.resolver, err = customAuthorityResolver(target.Authority) + d.resolver, err = customAuthorityResolver(target.URL.Host) if err != nil { return nil, err } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/passthrough/passthrough.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/passthrough/passthrough.go index 520d9229e1e..afac56572ad 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/passthrough/passthrough.go +++ 
b/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/passthrough/passthrough.go @@ -20,13 +20,20 @@ // name without scheme back to gRPC as resolved address. package passthrough -import "google.golang.org/grpc/resolver" +import ( + "errors" + + "google.golang.org/grpc/resolver" +) const scheme = "passthrough" type passthroughBuilder struct{} func (*passthroughBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) { + if target.Endpoint() == "" && opts.Dialer == nil { + return nil, errors.New("passthrough: received empty target in Build()") + } r := &passthroughResolver{ target: target, cc: cc, @@ -45,7 +52,7 @@ type passthroughResolver struct { } func (r *passthroughResolver) start() { - r.cc.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: r.target.Endpoint}}}) + r.cc.UpdateState(resolver.State{Addresses: []resolver.Address{{Addr: r.target.Endpoint()}}}) } func (*passthroughResolver) ResolveNow(o resolver.ResolveNowOptions) {} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/unix/unix.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/unix/unix.go index 7f1a702cacb..16091168773 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/unix/unix.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/resolver/unix/unix.go @@ -34,8 +34,8 @@ type builder struct { } func (b *builder) Build(target resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (resolver.Resolver, error) { - if target.Authority != "" { - return nil, fmt.Errorf("invalid (non-empty) authority: %v", target.Authority) + if target.URL.Host != "" { + return nil, fmt.Errorf("invalid (non-empty) authority: %v", target.URL.Host) } // gRPC was parsing the dial target manually before PR #4817, and we diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/serviceconfig/duration.go 
b/.ci/providerlint/vendor/google.golang.org/grpc/internal/serviceconfig/duration.go new file mode 100644 index 00000000000..11d82afcc7e --- /dev/null +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/serviceconfig/duration.go @@ -0,0 +1,130 @@ +/* + * + * Copyright 2023 gRPC authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package serviceconfig + +import ( + "encoding/json" + "fmt" + "math" + "strconv" + "strings" + "time" +) + +// Duration defines JSON marshal and unmarshal methods to conform to the +// protobuf JSON spec defined [here]. +// +// [here]: https://protobuf.dev/reference/protobuf/google.protobuf/#duration +type Duration time.Duration + +func (d Duration) String() string { + return fmt.Sprint(time.Duration(d)) +} + +// MarshalJSON converts from d to a JSON string output. +func (d Duration) MarshalJSON() ([]byte, error) { + ns := time.Duration(d).Nanoseconds() + sec := ns / int64(time.Second) + ns = ns % int64(time.Second) + + var sign string + if sec < 0 || ns < 0 { + sign, sec, ns = "-", -1*sec, -1*ns + } + + // Generated output always contains 0, 3, 6, or 9 fractional digits, + // depending on required precision. + str := fmt.Sprintf("%s%d.%09d", sign, sec, ns) + str = strings.TrimSuffix(str, "000") + str = strings.TrimSuffix(str, "000") + str = strings.TrimSuffix(str, ".000") + return []byte(fmt.Sprintf("\"%ss\"", str)), nil +} + +// UnmarshalJSON unmarshals b as a duration JSON string into d. 
+func (d *Duration) UnmarshalJSON(b []byte) error { + var s string + if err := json.Unmarshal(b, &s); err != nil { + return err + } + if !strings.HasSuffix(s, "s") { + return fmt.Errorf("malformed duration %q: missing seconds unit", s) + } + neg := false + if s[0] == '-' { + neg = true + s = s[1:] + } + ss := strings.SplitN(s[:len(s)-1], ".", 3) + if len(ss) > 2 { + return fmt.Errorf("malformed duration %q: too many decimals", s) + } + // hasDigits is set if either the whole or fractional part of the number is + // present, since both are optional but one is required. + hasDigits := false + var sec, ns int64 + if len(ss[0]) > 0 { + var err error + if sec, err = strconv.ParseInt(ss[0], 10, 64); err != nil { + return fmt.Errorf("malformed duration %q: %v", s, err) + } + // Maximum seconds value per the durationpb spec. + const maxProtoSeconds = 315_576_000_000 + if sec > maxProtoSeconds { + return fmt.Errorf("out of range: %q", s) + } + hasDigits = true + } + if len(ss) == 2 && len(ss[1]) > 0 { + if len(ss[1]) > 9 { + return fmt.Errorf("malformed duration %q: too many digits after decimal", s) + } + var err error + if ns, err = strconv.ParseInt(ss[1], 10, 64); err != nil { + return fmt.Errorf("malformed duration %q: %v", s, err) + } + for i := 9; i > len(ss[1]); i-- { + ns *= 10 + } + hasDigits = true + } + if !hasDigits { + return fmt.Errorf("malformed duration %q: contains no numbers", s) + } + + if neg { + sec *= -1 + ns *= -1 + } + + // Maximum/minimum seconds/nanoseconds representable by Go's time.Duration. 
+ const maxSeconds = math.MaxInt64 / int64(time.Second) + const maxNanosAtMaxSeconds = math.MaxInt64 % int64(time.Second) + const minSeconds = math.MinInt64 / int64(time.Second) + const minNanosAtMinSeconds = math.MinInt64 % int64(time.Second) + + if sec > maxSeconds || (sec == maxSeconds && ns >= maxNanosAtMaxSeconds) { + *d = Duration(math.MaxInt64) + } else if sec < minSeconds || (sec == minSeconds && ns <= minNanosAtMinSeconds) { + *d = Duration(math.MinInt64) + } else { + *d = Duration(sec*int64(time.Second) + ns) + } + return nil +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/controlbuf.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/controlbuf.go index 409769f48fd..be5a9c81eb9 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/controlbuf.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/controlbuf.go @@ -22,6 +22,7 @@ import ( "bytes" "errors" "fmt" + "net" "runtime" "strconv" "sync" @@ -29,6 +30,7 @@ import ( "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" + "google.golang.org/grpc/internal/grpclog" "google.golang.org/grpc/internal/grpcutil" "google.golang.org/grpc/status" ) @@ -191,7 +193,7 @@ type goAway struct { code http2.ErrCode debugData []byte headsUp bool - closeConn bool + closeConn error // if set, loopyWriter will exit, resulting in conn closure } func (*goAway) isTransportResponseFrame() bool { return false } @@ -209,6 +211,14 @@ type outFlowControlSizeRequest struct { func (*outFlowControlSizeRequest) isTransportResponseFrame() bool { return false } +// closeConnection is an instruction to tell the loopy writer to flush the +// framer and exit, which will cause the transport's connection to be closed +// (by the client or server). The transport itself will close after the reader +// encounters the EOF caused by the connection closure. 
+type closeConnection struct{} + +func (closeConnection) isTransportResponseFrame() bool { return false } + type outStreamState int const ( @@ -408,7 +418,7 @@ func (c *controlBuffer) get(block bool) (interface{}, error) { select { case <-c.ch: case <-c.done: - return nil, ErrConnClosing + return nil, errors.New("transport closed by client") } } } @@ -478,12 +488,14 @@ type loopyWriter struct { hEnc *hpack.Encoder // HPACK encoder. bdpEst *bdpEstimator draining bool + conn net.Conn + logger *grpclog.PrefixLogger // Side-specific handlers ssGoAwayHandler func(*goAway) (bool, error) } -func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimator) *loopyWriter { +func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimator, conn net.Conn, logger *grpclog.PrefixLogger) *loopyWriter { var buf bytes.Buffer l := &loopyWriter{ side: s, @@ -496,6 +508,8 @@ func newLoopyWriter(s side, fr *framer, cbuf *controlBuffer, bdpEst *bdpEstimato hBuf: &buf, hEnc: hpack.NewEncoder(&buf), bdpEst: bdpEst, + conn: conn, + logger: logger, } return l } @@ -513,23 +527,26 @@ const minBatchSize = 1000 // 2. Stream level flow control quota available. // // In each iteration of run loop, other than processing the incoming control -// frame, loopy calls processData, which processes one node from the activeStreams linked-list. -// This results in writing of HTTP2 frames into an underlying write buffer. -// When there's no more control frames to read from controlBuf, loopy flushes the write buffer. -// As an optimization, to increase the batch size for each flush, loopy yields the processor, once -// if the batch size is too low to give stream goroutines a chance to fill it up. +// frame, loopy calls processData, which processes one node from the +// activeStreams linked-list. This results in writing of HTTP2 frames into an +// underlying write buffer. When there's no more control frames to read from +// controlBuf, loopy flushes the write buffer. 
As an optimization, to increase +// the batch size for each flush, loopy yields the processor, once if the batch +// size is too low to give stream goroutines a chance to fill it up. +// +// Upon exiting, if the error causing the exit is not an I/O error, run() +// flushes and closes the underlying connection. Otherwise, the connection is +// left open to allow the I/O error to be encountered by the reader instead. func (l *loopyWriter) run() (err error) { defer func() { - if err == ErrConnClosing { - // Don't log ErrConnClosing as error since it happens - // 1. When the connection is closed by some other known issue. - // 2. User closed the connection. - // 3. A graceful close of connection. - if logger.V(logLevel) { - logger.Infof("transport: loopyWriter.run returning. %v", err) - } - err = nil + if l.logger.V(logLevel) { + l.logger.Infof("loopyWriter exiting with error: %v", err) } + if !isIOError(err) { + l.framer.writer.Flush() + l.conn.Close() + } + l.cbuf.finish() }() for { it, err := l.cbuf.get(true) @@ -574,7 +591,6 @@ func (l *loopyWriter) run() (err error) { } l.framer.writer.Flush() break hasdata - } } } @@ -583,11 +599,11 @@ func (l *loopyWriter) outgoingWindowUpdateHandler(w *outgoingWindowUpdate) error return l.framer.fr.WriteWindowUpdate(w.streamID, w.increment) } -func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) error { +func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) { // Otherwise update the quota. if w.streamID == 0 { l.sendQuota += w.increment - return nil + return } // Find the stream and update it. 
if str, ok := l.estdStreams[w.streamID]; ok { @@ -595,10 +611,9 @@ func (l *loopyWriter) incomingWindowUpdateHandler(w *incomingWindowUpdate) error if strQuota := int(l.oiws) - str.bytesOutStanding; strQuota > 0 && str.state == waitingOnStreamQuota { str.state = active l.activeStreams.enqueue(str) - return nil + return } } - return nil } func (l *loopyWriter) outgoingSettingsHandler(s *outgoingSettings) error { @@ -606,13 +621,11 @@ func (l *loopyWriter) outgoingSettingsHandler(s *outgoingSettings) error { } func (l *loopyWriter) incomingSettingsHandler(s *incomingSettings) error { - if err := l.applySettings(s.ss); err != nil { - return err - } + l.applySettings(s.ss) return l.framer.fr.WriteSettingsAck() } -func (l *loopyWriter) registerStreamHandler(h *registerStream) error { +func (l *loopyWriter) registerStreamHandler(h *registerStream) { str := &outStream{ id: h.streamID, state: empty, @@ -620,15 +633,14 @@ func (l *loopyWriter) registerStreamHandler(h *registerStream) error { wq: h.wq, } l.estdStreams[h.streamID] = str - return nil } func (l *loopyWriter) headerHandler(h *headerFrame) error { if l.side == serverSide { str, ok := l.estdStreams[h.streamID] if !ok { - if logger.V(logLevel) { - logger.Warningf("transport: loopy doesn't recognize the stream: %d", h.streamID) + if l.logger.V(logLevel) { + l.logger.Infof("Unrecognized streamID %d in loopyWriter", h.streamID) } return nil } @@ -655,19 +667,20 @@ func (l *loopyWriter) headerHandler(h *headerFrame) error { itl: &itemList{}, wq: h.wq, } - str.itl.enqueue(h) - return l.originateStream(str) + return l.originateStream(str, h) } -func (l *loopyWriter) originateStream(str *outStream) error { - hdr := str.itl.dequeue().(*headerFrame) - if err := hdr.initStream(str.id); err != nil { - if err == ErrConnClosing { - return err - } - // Other errors(errStreamDrain) need not close transport. 
+func (l *loopyWriter) originateStream(str *outStream, hdr *headerFrame) error { + // l.draining is set when handling GoAway. In which case, we want to avoid + // creating new streams. + if l.draining { + // TODO: provide a better error with the reason we are in draining. + hdr.onOrphaned(errStreamDrain) return nil } + if err := hdr.initStream(str.id); err != nil { + return err + } if err := l.writeHeader(str.id, hdr.endStream, hdr.hf, hdr.onWrite); err != nil { return err } @@ -682,8 +695,8 @@ func (l *loopyWriter) writeHeader(streamID uint32, endStream bool, hf []hpack.He l.hBuf.Reset() for _, f := range hf { if err := l.hEnc.WriteField(f); err != nil { - if logger.V(logLevel) { - logger.Warningf("transport: loopyWriter.writeHeader encountered error while encoding headers: %v", err) + if l.logger.V(logLevel) { + l.logger.Warningf("Encountered error while encoding headers: %v", err) } } } @@ -721,10 +734,10 @@ func (l *loopyWriter) writeHeader(streamID uint32, endStream bool, hf []hpack.He return nil } -func (l *loopyWriter) preprocessData(df *dataFrame) error { +func (l *loopyWriter) preprocessData(df *dataFrame) { str, ok := l.estdStreams[df.streamID] if !ok { - return nil + return } // If we got data for a stream it means that // stream was originated and the headers were sent out. 
@@ -733,7 +746,6 @@ func (l *loopyWriter) preprocessData(df *dataFrame) error { str.state = active l.activeStreams.enqueue(str) } - return nil } func (l *loopyWriter) pingHandler(p *ping) error { @@ -744,9 +756,8 @@ func (l *loopyWriter) pingHandler(p *ping) error { } -func (l *loopyWriter) outFlowControlSizeRequestHandler(o *outFlowControlSizeRequest) error { +func (l *loopyWriter) outFlowControlSizeRequestHandler(o *outFlowControlSizeRequest) { o.resp <- l.sendQuota - return nil } func (l *loopyWriter) cleanupStreamHandler(c *cleanupStream) error { @@ -763,8 +774,9 @@ func (l *loopyWriter) cleanupStreamHandler(c *cleanupStream) error { return err } } - if l.side == clientSide && l.draining && len(l.estdStreams) == 0 { - return ErrConnClosing + if l.draining && len(l.estdStreams) == 0 { + // Flush and close the connection; we are done with it. + return errors.New("finished processing active streams while in draining mode") } return nil } @@ -799,7 +811,8 @@ func (l *loopyWriter) incomingGoAwayHandler(*incomingGoAway) error { if l.side == clientSide { l.draining = true if len(l.estdStreams) == 0 { - return ErrConnClosing + // Flush and close the connection; we are done with it. 
+ return errors.New("received GOAWAY with no active streams") } } return nil @@ -820,7 +833,7 @@ func (l *loopyWriter) goAwayHandler(g *goAway) error { func (l *loopyWriter) handle(i interface{}) error { switch i := i.(type) { case *incomingWindowUpdate: - return l.incomingWindowUpdateHandler(i) + l.incomingWindowUpdateHandler(i) case *outgoingWindowUpdate: return l.outgoingWindowUpdateHandler(i) case *incomingSettings: @@ -830,7 +843,7 @@ func (l *loopyWriter) handle(i interface{}) error { case *headerFrame: return l.headerHandler(i) case *registerStream: - return l.registerStreamHandler(i) + l.registerStreamHandler(i) case *cleanupStream: return l.cleanupStreamHandler(i) case *earlyAbortStream: @@ -838,19 +851,24 @@ func (l *loopyWriter) handle(i interface{}) error { case *incomingGoAway: return l.incomingGoAwayHandler(i) case *dataFrame: - return l.preprocessData(i) + l.preprocessData(i) case *ping: return l.pingHandler(i) case *goAway: return l.goAwayHandler(i) case *outFlowControlSizeRequest: - return l.outFlowControlSizeRequestHandler(i) + l.outFlowControlSizeRequestHandler(i) + case closeConnection: + // Just return a non-I/O error and run() will flush and close the + // connection. 
+ return ErrConnClosing default: return fmt.Errorf("transport: unknown control message type %T", i) } + return nil } -func (l *loopyWriter) applySettings(ss []http2.Setting) error { +func (l *loopyWriter) applySettings(ss []http2.Setting) { for _, s := range ss { switch s.ID { case http2.SettingInitialWindowSize: @@ -869,7 +887,6 @@ func (l *loopyWriter) applySettings(ss []http2.Setting) error { updateHeaderTblSize(l.hEnc, s.Val) } } - return nil } // processData removes the first stream from active streams, writes out at most 16KB @@ -903,7 +920,7 @@ func (l *loopyWriter) processData() (bool, error) { return false, err } if err := l.cleanupStreamHandler(trailer.cleanup); err != nil { - return false, nil + return false, err } } else { l.activeStreams.enqueue(str) diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/defaults.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/defaults.go index 9fa306b2e07..bc8ee074749 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/defaults.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/defaults.go @@ -47,3 +47,9 @@ const ( defaultClientMaxHeaderListSize = uint32(16 << 20) defaultServerMaxHeaderListSize = uint32(16 << 20) ) + +// MaxStreamID is the upper bound for the stream ID before the current +// transport gracefully closes and a new transport is created for subsequent RPCs. +// This is set to 75% of 2^31-1. Streams are identified with an unsigned 31-bit +// integer. It's exported so that tests can override it. 
+var MaxStreamID = uint32(math.MaxInt32 * 3 / 4) diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/handler_server.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/handler_server.go index fb272235d81..98f80e3fa00 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/handler_server.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/handler_server.go @@ -39,6 +39,7 @@ import ( "golang.org/x/net/http2" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" + "google.golang.org/grpc/internal/grpclog" "google.golang.org/grpc/internal/grpcutil" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" @@ -46,24 +47,32 @@ import ( "google.golang.org/grpc/status" ) -// NewServerHandlerTransport returns a ServerTransport handling gRPC -// from inside an http.Handler. It requires that the http Server -// supports HTTP/2. +// NewServerHandlerTransport returns a ServerTransport handling gRPC from +// inside an http.Handler, or writes an HTTP error to w and returns an error. +// It requires that the http Server supports HTTP/2. func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats []stats.Handler) (ServerTransport, error) { if r.ProtoMajor != 2 { - return nil, errors.New("gRPC requires HTTP/2") + msg := "gRPC requires HTTP/2" + http.Error(w, msg, http.StatusBadRequest) + return nil, errors.New(msg) } if r.Method != "POST" { - return nil, errors.New("invalid gRPC request method") + msg := fmt.Sprintf("invalid gRPC request method %q", r.Method) + http.Error(w, msg, http.StatusBadRequest) + return nil, errors.New(msg) } contentType := r.Header.Get("Content-Type") // TODO: do we assume contentType is lowercase? 
we did before contentSubtype, validContentType := grpcutil.ContentSubtype(contentType) if !validContentType { - return nil, errors.New("invalid gRPC request content-type") + msg := fmt.Sprintf("invalid gRPC request content-type %q", contentType) + http.Error(w, msg, http.StatusUnsupportedMediaType) + return nil, errors.New(msg) } if _, ok := w.(http.Flusher); !ok { - return nil, errors.New("gRPC requires a ResponseWriter supporting http.Flusher") + msg := "gRPC requires a ResponseWriter supporting http.Flusher" + http.Error(w, msg, http.StatusInternalServerError) + return nil, errors.New(msg) } st := &serverHandlerTransport{ @@ -75,11 +84,14 @@ func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats []s contentSubtype: contentSubtype, stats: stats, } + st.logger = prefixLoggerForServerHandlerTransport(st) if v := r.Header.Get("grpc-timeout"); v != "" { to, err := decodeTimeout(v) if err != nil { - return nil, status.Errorf(codes.Internal, "malformed time-out: %v", err) + msg := fmt.Sprintf("malformed grpc-timeout: %v", err) + http.Error(w, msg, http.StatusBadRequest) + return nil, status.Error(codes.Internal, msg) } st.timeoutSet = true st.timeout = to @@ -97,7 +109,9 @@ func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request, stats []s for _, v := range vv { v, err := decodeMetadataHeader(k, v) if err != nil { - return nil, status.Errorf(codes.Internal, "malformed binary metadata: %v", err) + msg := fmt.Sprintf("malformed binary metadata %q in header %q: %v", v, k, err) + http.Error(w, msg, http.StatusBadRequest) + return nil, status.Error(codes.Internal, msg) } metakv = append(metakv, k, v) } @@ -138,15 +152,19 @@ type serverHandlerTransport struct { // TODO make sure this is consistent across handler_server and http2_server contentSubtype string - stats []stats.Handler + stats []stats.Handler + logger *grpclog.PrefixLogger } -func (ht *serverHandlerTransport) Close() { - ht.closeOnce.Do(ht.closeCloseChanOnce) +func (ht 
*serverHandlerTransport) Close(err error) { + ht.closeOnce.Do(func() { + if ht.logger.V(logLevel) { + ht.logger.Infof("Closing: %v", err) + } + close(ht.closedCh) + }) } -func (ht *serverHandlerTransport) closeCloseChanOnce() { close(ht.closedCh) } - func (ht *serverHandlerTransport) RemoteAddr() net.Addr { return strAddr(ht.req.RemoteAddr) } // strAddr is a net.Addr backed by either a TCP "ip:port" string, or @@ -236,7 +254,7 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) erro }) } } - ht.Close() + ht.Close(errors.New("finished writing status")) return err } @@ -346,7 +364,7 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace case <-ht.req.Context().Done(): } cancel() - ht.Close() + ht.Close(errors.New("request is done processing")) }() req := ht.req @@ -435,7 +453,7 @@ func (ht *serverHandlerTransport) IncrMsgSent() {} func (ht *serverHandlerTransport) IncrMsgRecv() {} -func (ht *serverHandlerTransport) Drain() { +func (ht *serverHandlerTransport) Drain(debugData string) { panic("Drain() is not implemented") } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_client.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_client.go index d518b07e16f..326bf084800 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_client.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_client.go @@ -38,6 +38,7 @@ import ( "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/channelz" icredentials "google.golang.org/grpc/internal/credentials" + "google.golang.org/grpc/internal/grpclog" "google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/internal/grpcutil" imetadata "google.golang.org/grpc/internal/metadata" @@ -59,11 +60,15 @@ var clientConnectionCounter uint64 // http2Client implements the ClientTransport interface with HTTP2. 
type http2Client struct { - lastRead int64 // Keep this field 64-bit aligned. Accessed atomically. - ctx context.Context - cancel context.CancelFunc - ctxDone <-chan struct{} // Cache the ctx.Done() chan. - userAgent string + lastRead int64 // Keep this field 64-bit aligned. Accessed atomically. + ctx context.Context + cancel context.CancelFunc + ctxDone <-chan struct{} // Cache the ctx.Done() chan. + userAgent string + // address contains the resolver returned address for this transport. + // If the `ServerName` field is set, it takes precedence over `CallHdr.Host` + // passed to `NewStream`, when determining the :authority header. + address resolver.Address md metadata.MD conn net.Conn // underlying communication channel loopy *loopyWriter @@ -136,12 +141,12 @@ type http2Client struct { channelzID *channelz.Identifier czData *channelzData - onGoAway func(GoAwayReason) - onClose func() + onClose func(GoAwayReason) bufferPool *bufferPool connectionID uint64 + logger *grpclog.PrefixLogger } func dial(ctx context.Context, fn func(context.Context, string) (net.Conn, error), addr resolver.Address, useProxy bool, grpcUA string) (net.Conn, error) { @@ -193,7 +198,7 @@ func isTemporary(err error) bool { // newHTTP2Client constructs a connected ClientTransport to addr based on HTTP2 // and starts to receive messages on it. Non-nil error returns if construction // fails. 
-func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onGoAway func(GoAwayReason), onClose func()) (_ *http2Client, err error) { +func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onClose func(GoAwayReason)) (_ *http2Client, err error) { scheme := "http" ctx, cancel := context.WithCancel(ctx) defer func() { @@ -213,7 +218,7 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts if opts.FailOnNonTempDialError { return nil, connectionErrorf(isTemporary(err), err, "transport: error while dialing: %v", err) } - return nil, connectionErrorf(true, err, "transport: Error while dialing %v", err) + return nil, connectionErrorf(true, err, "transport: Error while dialing: %v", err) } // Any further errors will close the underlying connection @@ -238,8 +243,11 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts go func(conn net.Conn) { defer ctxMonitorDone.Fire() // Signal this goroutine has exited. <-newClientCtx.Done() // Block until connectCtx expires or the defer above executes. - if connectCtx.Err() != nil { + if err := connectCtx.Err(); err != nil { // connectCtx expired before exiting the function. Hard close the connection. 
+ if logger.V(logLevel) { + logger.Infof("Aborting due to connect deadline expiring: %v", err) + } conn.Close() } }(conn) @@ -314,6 +322,7 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts cancel: cancel, userAgent: opts.UserAgent, registeredCompressors: grpcutil.RegisteredCompressors(), + address: addr, conn: conn, remoteAddr: conn.RemoteAddr(), localAddr: conn.LocalAddr(), @@ -335,11 +344,11 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts streamQuota: defaultMaxStreamsClient, streamsQuotaAvailable: make(chan struct{}, 1), czData: new(channelzData), - onGoAway: onGoAway, keepaliveEnabled: keepaliveEnabled, bufferPool: newBufferPool(), onClose: onClose, } + t.logger = prefixLoggerForClientTransport(t) // Add peer information to the http2client context. t.ctx = peer.NewContext(t.ctx, t.getPeer()) @@ -438,17 +447,8 @@ func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts return nil, err } go func() { - t.loopy = newLoopyWriter(clientSide, t.framer, t.controlBuf, t.bdpEst) - err := t.loopy.run() - if err != nil { - if logger.V(logLevel) { - logger.Errorf("transport: loopyWriter.run returning. Err: %v", err) - } - } - // Do not close the transport. Let reader goroutine handle it since - // there might be data in the buffers. - t.conn.Close() - t.controlBuf.finish() + t.loopy = newLoopyWriter(clientSide, t.framer, t.controlBuf, t.bdpEst, t.conn, t.logger) + t.loopy.run() close(t.writerDone) }() return t, nil @@ -702,6 +702,18 @@ func (e NewStreamError) Error() string { // streams. All non-nil errors returned will be *NewStreamError. func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, error) { ctx = peer.NewContext(ctx, t.getPeer()) + + // ServerName field of the resolver returned address takes precedence over + // Host field of CallHdr to determine the :authority header. 
This is because + // the ServerName field takes precedence for server authentication during + // TLS handshake, and the :authority header should match the value used + // for server authentication. + if t.address.ServerName != "" { + newCallHdr := *callHdr + newCallHdr.Host = t.address.ServerName + callHdr = &newCallHdr + } + headerFields, err := t.createHeaderFields(ctx, callHdr) if err != nil { return nil, &NewStreamError{Err: err, AllowTransparentRetry: false} @@ -726,15 +738,12 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, endStream: false, initStream: func(id uint32) error { t.mu.Lock() - if state := t.state; state != reachable { + // TODO: handle transport closure in loopy instead and remove this + // initStream is never called when transport is draining. + if t.state == closing { t.mu.Unlock() - // Do a quick cleanup. - err := error(errStreamDrain) - if state == closing { - err = ErrConnClosing - } - cleanup(err) - return err + cleanup(ErrConnClosing) + return ErrConnClosing } if channelz.IsOn() { atomic.AddInt64(&t.czData.streamsStarted, 1) @@ -752,6 +761,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, } firstTry := true var ch chan struct{} + transportDrainRequired := false checkForStreamQuota := func(it interface{}) bool { if t.streamQuota <= 0 { // Can go negative if server decreases it. if firstTry { @@ -767,10 +777,15 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, h := it.(*headerFrame) h.streamID = t.nextID t.nextID += 2 + + // Drain client transport if nextID > MaxStreamID which signals gRPC that + // the connection is closed and a new one must be created for subsequent RPCs. + transportDrainRequired = t.nextID > MaxStreamID + s.id = h.streamID s.fc = &inFlow{limit: uint32(t.initialWindowSize)} t.mu.Lock() - if t.activeStreams == nil { // Can be niled from Close().
+ if t.state == draining || t.activeStreams == nil { // Can be niled from Close(). t.mu.Unlock() return false // Don't create a stream if the transport is already closed. } @@ -846,6 +861,12 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (*Stream, sh.HandleRPC(s.ctx, outHeader) } } + if transportDrainRequired { + if t.logger.V(logLevel) { + t.logger.Infof("Draining transport: t.nextID > MaxStreamID") + } + t.GracefulClose() + } return s, nil } @@ -934,9 +955,14 @@ func (t *http2Client) Close(err error) { t.mu.Unlock() return } + if t.logger.V(logLevel) { + t.logger.Infof("Closing: %v", err) + } // Call t.onClose ASAP to prevent the client from attempting to create new // streams. - t.onClose() + if t.state != draining { + t.onClose(GoAwayInvalid) + } t.state = closing streams := t.activeStreams t.activeStreams = nil @@ -986,11 +1012,15 @@ func (t *http2Client) GracefulClose() { t.mu.Unlock() return } + if t.logger.V(logLevel) { + t.logger.Infof("GracefulClose called") + } + t.onClose(GoAwayInvalid) t.state = draining active := len(t.activeStreams) t.mu.Unlock() if active == 0 { - t.Close(ErrConnClosing) + t.Close(connectionErrorf(true, nil, "no active streams left to process while draining")) return } t.controlBuf.put(&incomingGoAway{}) @@ -1147,8 +1177,8 @@ func (t *http2Client) handleRSTStream(f *http2.RSTStreamFrame) { } statusCode, ok := http2ErrConvTab[f.ErrCode] if !ok { - if logger.V(logLevel) { - logger.Warningf("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error %v", f.ErrCode) + if t.logger.V(logLevel) { + t.logger.Infof("Received a RST_STREAM frame with code %q, but found no mapped gRPC status", f.ErrCode) } statusCode = codes.Unknown } @@ -1230,10 +1260,12 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) { t.mu.Unlock() return } - if f.ErrCode == http2.ErrCodeEnhanceYourCalm { - if logger.V(logLevel) { - logger.Infof("Client received GoAway with 
http2.ErrCodeEnhanceYourCalm.") - } + if f.ErrCode == http2.ErrCodeEnhanceYourCalm && string(f.DebugData()) == "too_many_pings" { + // When a client receives a GOAWAY with error code ENHANCE_YOUR_CALM and debug + // data equal to ASCII "too_many_pings", it should log the occurrence at a log level that is + // enabled by default and double the configured KEEPALIVE_TIME used for new connections + // on that channel. + logger.Errorf("Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal to ASCII \"too_many_pings\".") } id := f.LastStreamID if id > 0 && id%2 == 0 { @@ -1266,8 +1298,10 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) { // Notify the clientconn about the GOAWAY before we set the state to // draining, to allow the client to stop attempting to create streams // before disallowing new streams on this connection. - t.onGoAway(t.goAwayReason) - t.state = draining + if t.state != draining { + t.onClose(t.goAwayReason) + t.state = draining + } } // All streams with IDs greater than the GoAwayId // and smaller than the previous GoAway ID should be killed. @@ -1303,7 +1337,7 @@ func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) { // setGoAwayReason sets the value of t.goAwayReason based // on the GoAway frame received. -// It expects a lock on transport's mutext to be held by +// It expects a lock on transport's mutex to be held by // the caller.
func (t *http2Client) setGoAwayReason(f *http2.GoAwayFrame) { t.goAwayReason = GoAwayNoReason @@ -1756,3 +1790,9 @@ func (t *http2Client) getOutFlowWindow() int64 { return -2 } } + +func (t *http2Client) stateForTesting() transportState { + t.mu.Lock() + defer t.mu.Unlock() + return t.state +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_server.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_server.go index 3dd15647bc8..79e86ba0883 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_server.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http2_server.go @@ -21,6 +21,7 @@ package transport import ( "bytes" "context" + "errors" "fmt" "io" "math" @@ -34,13 +35,16 @@ import ( "github.com/golang/protobuf/proto" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" + "google.golang.org/grpc/internal/grpclog" "google.golang.org/grpc/internal/grpcutil" + "google.golang.org/grpc/internal/pretty" "google.golang.org/grpc/internal/syscall" "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/grpcrand" + "google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" "google.golang.org/grpc/peer" @@ -101,13 +105,13 @@ type http2Server struct { mu sync.Mutex // guard the following - // drainChan is initialized when Drain() is called the first time. - // After which the server writes out the first GoAway(with ID 2^31-1) frame. - // Then an independent goroutine will be launched to later send the second GoAway. - // During this time we don't want to write another first GoAway(with ID 2^31 -1) frame. - // Thus call to Drain() will be a no-op if drainChan is already initialized since draining is - // already underway. - drainChan chan struct{} + // drainEvent is initialized when Drain() is called the first time. 
After + // which the server writes out the first GoAway(with ID 2^31-1) frame. Then + // an independent goroutine will be launched to later send the second + // GoAway. During this time we don't want to write another first GoAway(with + // ID 2^31 -1) frame. Thus call to Drain() will be a no-op if drainEvent is + // already initialized since draining is already underway. + drainEvent *grpcsync.Event state transportState activeStreams map[uint32]*Stream // idle is the time instant when the connection went idle. @@ -127,6 +131,8 @@ type http2Server struct { // This lock may not be taken if mu is already held. maxStreamMu sync.Mutex maxStreamID uint32 // max stream ID ever seen + + logger *grpclog.PrefixLogger } // NewServerTransport creates a http2 transport with conn and configuration @@ -265,6 +271,7 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport, czData: new(channelzData), bufferPool: newBufferPool(), } + t.logger = prefixLoggerForServerTransport(t) // Add peer information to the http2server context. t.ctx = peer.NewContext(t.ctx, t.getPeer()) @@ -293,7 +300,7 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport, defer func() { if err != nil { - t.Close() + t.Close(err) } }() @@ -329,23 +336,18 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport, t.handleSettings(sf) go func() { - t.loopy = newLoopyWriter(serverSide, t.framer, t.controlBuf, t.bdpEst) + t.loopy = newLoopyWriter(serverSide, t.framer, t.controlBuf, t.bdpEst, t.conn, t.logger) t.loopy.ssGoAwayHandler = t.outgoingGoAwayHandler - if err := t.loopy.run(); err != nil { - if logger.V(logLevel) { - logger.Errorf("transport: loopyWriter.run returning. Err: %v", err) - } - } - t.conn.Close() - t.controlBuf.finish() + t.loopy.run() close(t.writerDone) }() go t.keepalive() return t, nil } -// operateHeader takes action on the decoded headers. 
-func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) (fatal bool) { +// operateHeaders takes action on the decoded headers. Returns an error if a +// fatal error is encountered and the transport needs to close, otherwise returns nil. +func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) error { // Acquire max stream ID lock for entire duration t.maxStreamMu.Lock() defer t.maxStreamMu.Unlock() @@ -361,15 +363,12 @@ rstCode: http2.ErrCodeFrameSize, onWrite: func() {}, }) - return false + return nil } if streamID%2 != 1 || streamID <= t.maxStreamID { // illegal gRPC stream id. - if logger.V(logLevel) { - logger.Errorf("transport: http2Server.HandleStreams received an illegal stream id: %v", streamID) - } - return true + return fmt.Errorf("received an illegal stream id: %v. headers frame: %+v", streamID, frame) } t.maxStreamID = streamID @@ -381,13 +380,14 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( fc: &inFlow{limit: uint32(t.initialWindowSize)}, } var ( - // If a gRPC Response-Headers has already been received, then it means - // that the peer is speaking gRPC and we are in gRPC mode.
- isGRPC = false - mdata = make(map[string][]string) - httpMethod string - // headerError is set if an error is encountered while parsing the headers - headerError bool + // if false, content-type was missing or invalid + isGRPC = false + contentType = "" + mdata = make(metadata.MD, len(frame.Fields)) + httpMethod string + // these are set if an error is encountered while parsing the headers + protocolError bool + headerError *status.Status timeoutSet bool timeout time.Duration @@ -398,11 +398,23 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( case "content-type": contentSubtype, validContentType := grpcutil.ContentSubtype(hf.Value) if !validContentType { + contentType = hf.Value break } mdata[hf.Name] = append(mdata[hf.Name], hf.Value) s.contentSubtype = contentSubtype isGRPC = true + + case "grpc-accept-encoding": + mdata[hf.Name] = append(mdata[hf.Name], hf.Value) + if hf.Value == "" { + continue + } + compressors := hf.Value + if s.clientAdvertisedCompressors != "" { + compressors = s.clientAdvertisedCompressors + "," + compressors + } + s.clientAdvertisedCompressors = compressors case "grpc-encoding": s.recvCompress = hf.Value case ":method": @@ -413,23 +425,23 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( timeoutSet = true var err error if timeout, err = decodeTimeout(hf.Value); err != nil { - headerError = true + headerError = status.Newf(codes.Internal, "malformed grpc-timeout: %v", err) } // "Transports must consider requests containing the Connection header // as malformed." 
- A41 case "connection": - if logger.V(logLevel) { - logger.Errorf("transport: http2Server.operateHeaders parsed a :connection header which makes a request malformed as per the HTTP/2 spec") + if t.logger.V(logLevel) { + t.logger.Infof("Received a HEADERS frame with a :connection header which makes the request malformed, as per the HTTP/2 spec") } - headerError = true + protocolError = true default: if isReservedHeader(hf.Name) && !isWhitelistedHeader(hf.Name) { break } v, err := decodeMetadataHeader(hf.Name, hf.Value) if err != nil { - headerError = true - logger.Warningf("Failed to decode metadata header (%q, %q): %v", hf.Name, hf.Value, err) + headerError = status.Newf(codes.Internal, "malformed binary metadata %q in header %q: %v", hf.Value, hf.Name, err) + t.logger.Warningf("Failed to decode metadata header (%q, %q): %v", hf.Name, hf.Value, err) break } mdata[hf.Name] = append(mdata[hf.Name], v) @@ -443,27 +455,47 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( // error, this takes precedence over a client not speaking gRPC. 
if len(mdata[":authority"]) > 1 || len(mdata["host"]) > 1 { errMsg := fmt.Sprintf("num values of :authority: %v, num values of host: %v, both must only have 1 value as per HTTP/2 spec", len(mdata[":authority"]), len(mdata["host"])) - if logger.V(logLevel) { - logger.Errorf("transport: %v", errMsg) + if t.logger.V(logLevel) { + t.logger.Infof("Aborting the stream early: %v", errMsg) } t.controlBuf.put(&earlyAbortStream{ - httpStatus: 400, + httpStatus: http.StatusBadRequest, streamID: streamID, contentSubtype: s.contentSubtype, status: status.New(codes.Internal, errMsg), rst: !frame.StreamEnded(), }) - return false + return nil } - if !isGRPC || headerError { + if protocolError { t.controlBuf.put(&cleanupStream{ streamID: streamID, rst: true, rstCode: http2.ErrCodeProtocol, onWrite: func() {}, }) - return false + return nil + } + if !isGRPC { + t.controlBuf.put(&earlyAbortStream{ + httpStatus: http.StatusUnsupportedMediaType, + streamID: streamID, + contentSubtype: s.contentSubtype, + status: status.Newf(codes.InvalidArgument, "invalid gRPC request content-type %q", contentType), + rst: !frame.StreamEnded(), + }) + return nil + } + if headerError != nil { + t.controlBuf.put(&earlyAbortStream{ + httpStatus: http.StatusBadRequest, + streamID: streamID, + contentSubtype: s.contentSubtype, + status: headerError, + rst: !frame.StreamEnded(), + }) + return nil } // "If :authority is missing, Host must be renamed to :authority." 
- A41 @@ -503,7 +535,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( if t.state != reachable { t.mu.Unlock() s.cancel() - return false + return nil } if uint32(len(t.activeStreams)) >= t.maxStreams { t.mu.Unlock() @@ -514,13 +546,13 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( onWrite: func() {}, }) s.cancel() - return false + return nil } if httpMethod != http.MethodPost { t.mu.Unlock() - errMsg := fmt.Sprintf("http2Server.operateHeaders parsed a :method field: %v which should be POST", httpMethod) - if logger.V(logLevel) { - logger.Infof("transport: %v", errMsg) + errMsg := fmt.Sprintf("Received a HEADERS frame with :method %q which should be POST", httpMethod) + if t.logger.V(logLevel) { + t.logger.Infof("Aborting the stream early: %v", errMsg) } t.controlBuf.put(&earlyAbortStream{ httpStatus: 405, @@ -530,14 +562,14 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( rst: !frame.StreamEnded(), }) s.cancel() - return false + return nil } if t.inTapHandle != nil { var err error if s.ctx, err = t.inTapHandle(s.ctx, &tap.Info{FullMethodName: s.method}); err != nil { t.mu.Unlock() - if logger.V(logLevel) { - logger.Infof("transport: http2Server.operateHeaders got an error from InTapHandle: %v", err) + if t.logger.V(logLevel) { + t.logger.Infof("Aborting the stream early due to InTapHandle failure: %v", err) } stat, ok := status.FromError(err) if !ok { @@ -550,7 +582,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( status: stat, rst: !frame.StreamEnded(), }) - return false + return nil } } t.activeStreams[streamID] = s @@ -574,7 +606,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( LocalAddr: t.localAddr, Compression: s.recvCompress, WireLength: int(frame.Header().Length), - Header: metadata.MD(mdata).Copy(), + Header: mdata.Copy(), } sh.HandleRPC(s.ctx, inHeader) } @@ -597,7 +629,7 @@ 
func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( wq: s.wq, }) handle(s) - return false + return nil } // HandleStreams receives incoming streams using the given handler. This is @@ -611,8 +643,8 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context. atomic.StoreInt64(&t.lastRead, time.Now().UnixNano()) if err != nil { if se, ok := err.(http2.StreamError); ok { - if logger.V(logLevel) { - logger.Warningf("transport: http2Server.HandleStreams encountered http2.StreamError: %v", se) + if t.logger.V(logLevel) { + t.logger.Warningf("Encountered http2.StreamError: %v", se) } t.mu.Lock() s := t.activeStreams[se.StreamID] @@ -630,19 +662,16 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context. continue } if err == io.EOF || err == io.ErrUnexpectedEOF { - t.Close() + t.Close(err) return } - if logger.V(logLevel) { - logger.Warningf("transport: http2Server.HandleStreams failed to read frame: %v", err) - } - t.Close() + t.Close(err) return } switch frame := frame.(type) { case *http2.MetaHeadersFrame: - if t.operateHeaders(frame, handle, traceCtx) { - t.Close() + if err := t.operateHeaders(frame, handle, traceCtx); err != nil { + t.Close(err) break } case *http2.DataFrame: @@ -658,8 +687,8 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context. case *http2.GoAwayFrame: // TODO: Handle GoAway from the client appropriately. default: - if logger.V(logLevel) { - logger.Errorf("transport: http2Server.HandleStreams found unhandled frame type %v.", frame) + if t.logger.V(logLevel) { + t.logger.Infof("Received unsupported frame type %T", frame) } } } @@ -843,8 +872,8 @@ const ( func (t *http2Server) handlePing(f *http2.PingFrame) { if f.IsAck() { - if f.Data == goAwayPing.data && t.drainChan != nil { - close(t.drainChan) + if f.Data == goAwayPing.data && t.drainEvent != nil { + t.drainEvent.Fire() return } // Maybe it's a BDP ping. 
@@ -886,10 +915,7 @@ func (t *http2Server) handlePing(f *http2.PingFrame) { if t.pingStrikes > maxPingStrikes { // Send goaway and close the connection. - if logger.V(logLevel) { - logger.Errorf("transport: Got too many pings from the client, closing the connection.") - } - t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: true}) + t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: errors.New("got too many pings from the client")}) } } @@ -921,8 +947,8 @@ func (t *http2Server) checkForHeaderListSize(it interface{}) bool { var sz int64 for _, f := range hdrFrame.hf { if sz += int64(f.Size()); sz > int64(*t.maxSendHeaderListSize) { - if logger.V(logLevel) { - logger.Errorf("header list size to send violates the maximum size (%d bytes) set by client", *t.maxSendHeaderListSize) + if t.logger.V(logLevel) { + t.logger.Infof("Header list size to send violates the maximum size (%d bytes) set by client", *t.maxSendHeaderListSize) } return false } @@ -1035,7 +1061,7 @@ func (t *http2Server) WriteStatus(s *Stream, st *status.Status) error { stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. - logger.Errorf("transport: failed to marshal rpc status: %v, error: %v", p, err) + t.logger.Errorf("Failed to marshal rpc status: %s, error: %v", pretty.ToJSON(p), err) } else { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-status-details-bin", Value: encodeBinHeader(stBytes)}) } @@ -1140,20 +1166,20 @@ func (t *http2Server) keepalive() { if val <= 0 { // The connection has been idle for a duration of keepalive.MaxConnectionIdle or more. // Gracefully close the connection. 
- t.Drain() + t.Drain("max_idle") return } idleTimer.Reset(val) case <-ageTimer.C: - t.Drain() + t.Drain("max_age") ageTimer.Reset(t.kp.MaxConnectionAgeGrace) select { case <-ageTimer.C: // Close the connection after grace period. - if logger.V(logLevel) { - logger.Infof("transport: closing server transport due to maximum connection age.") + if t.logger.V(logLevel) { + t.logger.Infof("Closing server transport due to maximum connection age") } - t.Close() + t.controlBuf.put(closeConnection{}) case <-t.done: } return @@ -1169,10 +1195,7 @@ func (t *http2Server) keepalive() { continue } if outstandingPing && kpTimeoutLeft <= 0 { - if logger.V(logLevel) { - logger.Infof("transport: closing server transport due to idleness.") - } - t.Close() + t.Close(fmt.Errorf("keepalive ping not acked within timeout %s", t.kp.Time)) return } if !outstandingPing { @@ -1199,20 +1222,23 @@ func (t *http2Server) keepalive() { // Close starts shutting down the http2Server transport. // TODO(zhaoq): Now the destruction is not blocked on any pending streams. This // could cause some resource issue. Revisit this later. -func (t *http2Server) Close() { +func (t *http2Server) Close(err error) { t.mu.Lock() if t.state == closing { t.mu.Unlock() return } + if t.logger.V(logLevel) { + t.logger.Infof("Closing: %v", err) + } t.state = closing streams := t.activeStreams t.activeStreams = nil t.mu.Unlock() t.controlBuf.finish() close(t.done) - if err := t.conn.Close(); err != nil && logger.V(logLevel) { - logger.Infof("transport: error closing conn during Close: %v", err) + if err := t.conn.Close(); err != nil && t.logger.V(logLevel) { + t.logger.Infof("Error closing underlying net.Conn during Close: %v", err) } channelz.RemoveEntry(t.channelzID) // Cancel all active streams. 
@@ -1292,14 +1318,14 @@ func (t *http2Server) RemoteAddr() net.Addr { return t.remoteAddr } -func (t *http2Server) Drain() { +func (t *http2Server) Drain(debugData string) { t.mu.Lock() defer t.mu.Unlock() - if t.drainChan != nil { + if t.drainEvent != nil { return } - t.drainChan = make(chan struct{}) - t.controlBuf.put(&goAway{code: http2.ErrCodeNo, debugData: []byte{}, headsUp: true}) + t.drainEvent = grpcsync.NewEvent() + t.controlBuf.put(&goAway{code: http2.ErrCodeNo, debugData: []byte(debugData), headsUp: true}) } var goAwayPing = &ping{data: [8]byte{1, 6, 1, 8, 0, 3, 3, 9}} @@ -1319,19 +1345,17 @@ func (t *http2Server) outgoingGoAwayHandler(g *goAway) (bool, error) { // Stop accepting more streams now. t.state = draining sid := t.maxStreamID + retErr := g.closeConn if len(t.activeStreams) == 0 { - g.closeConn = true + retErr = errors.New("second GOAWAY written and no active streams left to process") } t.mu.Unlock() t.maxStreamMu.Unlock() if err := t.framer.fr.WriteGoAway(sid, g.code, g.debugData); err != nil { return false, err } - if g.closeConn { - // Abruptly close the connection following the GoAway (via - // loopywriter). But flush out what's inside the buffer first. - t.framer.writer.Flush() - return false, fmt.Errorf("transport: Connection closing") + if retErr != nil { + return false, retErr } return true, nil } @@ -1343,7 +1367,7 @@ func (t *http2Server) outgoingGoAwayHandler(g *goAway) (bool, error) { // originated before the GoAway reaches the client. // After getting the ack or timer expiration send out another GoAway this // time with an ID of the max stream server intends to process. 
- if err := t.framer.fr.WriteGoAway(math.MaxUint32, http2.ErrCodeNo, []byte{}); err != nil { + if err := t.framer.fr.WriteGoAway(math.MaxUint32, http2.ErrCodeNo, g.debugData); err != nil { return false, err } if err := t.framer.fr.WritePing(false, goAwayPing.data); err != nil { @@ -1353,7 +1377,7 @@ func (t *http2Server) outgoingGoAwayHandler(g *goAway) (bool, error) { timer := time.NewTimer(time.Minute) defer timer.Stop() select { - case <-t.drainChan: + case <-t.drainEvent.Done(): case <-timer.C: case <-t.done: return diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http_util.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http_util.go index 2c601a864d9..19cbb18f5ab 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http_util.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/http_util.go @@ -21,6 +21,7 @@ package transport import ( "bufio" "encoding/base64" + "errors" "fmt" "io" "math" @@ -37,7 +38,6 @@ import ( "golang.org/x/net/http2/hpack" spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" - "google.golang.org/grpc/grpclog" "google.golang.org/grpc/status" ) @@ -85,7 +85,6 @@ var ( // 504 Gateway timeout - UNAVAILABLE. http.StatusGatewayTimeout: codes.Unavailable, } - logger = grpclog.Component("transport") ) // isReservedHeader checks whether hdr belongs to HTTP2 headers @@ -330,7 +329,8 @@ func (w *bufWriter) Write(b []byte) (n int, err error) { return 0, w.err } if w.batchSize == 0 { // Buffer has been disabled. 
- return w.conn.Write(b) + n, err = w.conn.Write(b) + return n, toIOError(err) } for len(b) > 0 { nn := copy(w.buf[w.offset:], b) @@ -352,10 +352,30 @@ func (w *bufWriter) Flush() error { return nil } _, w.err = w.conn.Write(w.buf[:w.offset]) + w.err = toIOError(w.err) w.offset = 0 return w.err } +type ioError struct { + error +} + +func (i ioError) Unwrap() error { + return i.error +} + +func isIOError(err error) bool { + return errors.As(err, &ioError{}) +} + +func toIOError(err error) error { + if err == nil { + return nil + } + return ioError{error: err} +} + type framer struct { writer *bufWriter fr *http2.Framer diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/logging.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/logging.go new file mode 100644 index 00000000000..42ed2b07af6 --- /dev/null +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/logging.go @@ -0,0 +1,40 @@ +/* + * + * Copyright 2023 gRPC authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package transport + +import ( + "fmt" + + "google.golang.org/grpc/grpclog" + internalgrpclog "google.golang.org/grpc/internal/grpclog" +) + +var logger = grpclog.Component("transport") + +func prefixLoggerForServerTransport(p *http2Server) *internalgrpclog.PrefixLogger { + return internalgrpclog.NewPrefixLogger(logger, fmt.Sprintf("[server-transport %p] ", p)) +} + +func prefixLoggerForServerHandlerTransport(p *serverHandlerTransport) *internalgrpclog.PrefixLogger { + return internalgrpclog.NewPrefixLogger(logger, fmt.Sprintf("[server-handler-transport %p] ", p)) +} + +func prefixLoggerForClientTransport(p *http2Client) *internalgrpclog.PrefixLogger { + return internalgrpclog.NewPrefixLogger(logger, fmt.Sprintf("[client-transport %p] ", p)) +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/transport.go b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/transport.go index 2e615ee20cc..aa1c896595d 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/transport.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/internal/transport/transport.go @@ -257,6 +257,9 @@ type Stream struct { fc *inFlow wq *writeQuota + // Holds compressor names passed in grpc-accept-encoding metadata from the + // client. This is empty for the client side stream. + clientAdvertisedCompressors string // Callback to state application's intentions to read data. This // is used to adjust flow control, if needed. requestRead func(int) @@ -345,8 +348,24 @@ func (s *Stream) RecvCompress() string { } // SetSendCompress sets the compression algorithm to the stream. 
-func (s *Stream) SetSendCompress(str string) { - s.sendCompress = str +func (s *Stream) SetSendCompress(name string) error { + if s.isHeaderSent() || s.getState() == streamDone { + return errors.New("transport: set send compressor called after headers sent or stream done") + } + + s.sendCompress = name + return nil +} + +// SendCompress returns the send compressor name. +func (s *Stream) SendCompress() string { + return s.sendCompress +} + +// ClientAdvertisedCompressors returns the compressor names advertised by the +// client via grpc-accept-encoding header. +func (s *Stream) ClientAdvertisedCompressors() string { + return s.clientAdvertisedCompressors } // Done returns a channel which is closed when it receives the final status @@ -583,8 +602,8 @@ type ConnectOptions struct { // NewClientTransport establishes the transport with the required ConnectOptions // and returns it to the caller. -func NewClientTransport(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onGoAway func(GoAwayReason), onClose func()) (ClientTransport, error) { - return newHTTP2Client(connectCtx, ctx, addr, opts, onGoAway, onClose) +func NewClientTransport(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onClose func(GoAwayReason)) (ClientTransport, error) { + return newHTTP2Client(connectCtx, ctx, addr, opts, onClose) } // Options provides additional hints and information for message @@ -701,13 +720,13 @@ type ServerTransport interface { // Close tears down the transport. Once it is called, the transport // should not be accessed any more. All the pending streams and their // handlers will be terminated asynchronously. - Close() + Close(err error) // RemoteAddr returns the remote network address. RemoteAddr() net.Addr // Drain notifies the client this ServerTransport stops accepting new RPCs. - Drain() + Drain(debugData string) // IncrMsgSent increments the number of message sent through this transport. 
IncrMsgSent() diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/metadata/metadata.go b/.ci/providerlint/vendor/google.golang.org/grpc/metadata/metadata.go index fb4a88f59bd..a2cdcaf12a8 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/metadata/metadata.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/metadata/metadata.go @@ -91,7 +91,11 @@ func (md MD) Len() int { // Copy returns a copy of md. func (md MD) Copy() MD { - return Join(md) + out := make(MD, len(md)) + for k, v := range md { + out[k] = copyOf(v) + } + return out } // Get obtains the values for a given key. @@ -171,8 +175,11 @@ func AppendToOutgoingContext(ctx context.Context, kv ...string) context.Context md, _ := ctx.Value(mdOutgoingKey{}).(rawMD) added := make([][]string, len(md.added)+1) copy(added, md.added) - added[len(added)-1] = make([]string, len(kv)) - copy(added[len(added)-1], kv) + kvCopy := make([]string, 0, len(kv)) + for i := 0; i < len(kv); i += 2 { + kvCopy = append(kvCopy, strings.ToLower(kv[i]), kv[i+1]) + } + added[len(added)-1] = kvCopy return context.WithValue(ctx, mdOutgoingKey{}, rawMD{md: md.md, added: added}) } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/picker_wrapper.go b/.ci/providerlint/vendor/google.golang.org/grpc/picker_wrapper.go index a5d5516ee06..02f97595124 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/picker_wrapper.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/picker_wrapper.go @@ -36,6 +36,7 @@ import ( type pickerWrapper struct { mu sync.Mutex done bool + idle bool blockingCh chan struct{} picker balancer.Picker } @@ -47,7 +48,11 @@ func newPickerWrapper() *pickerWrapper { // updatePicker is called by UpdateBalancerState. It unblocks all blocked pick. func (pw *pickerWrapper) updatePicker(p balancer.Picker) { pw.mu.Lock() - if pw.done { + if pw.done || pw.idle { + // There is a small window where a picker update from the LB policy can + // race with the channel going to idle mode. 
If the picker is idle here, + it is because the channel asked it to do so, and therefore it is safe + to ignore the update from the LB policy. pw.mu.Unlock() return } @@ -58,12 +63,16 @@ func (pw *pickerWrapper) updatePicker(p balancer.Picker) { pw.mu.Unlock() } -func doneChannelzWrapper(acw *acBalancerWrapper, done func(balancer.DoneInfo)) func(balancer.DoneInfo) { - acw.mu.Lock() - ac := acw.ac - acw.mu.Unlock() +// doneChannelzWrapper performs the following: +// - increments the calls started channelz counter +// - wraps the done function in the passed in result to increment the calls +// failed or calls succeeded channelz counter before invoking the actual +// done function. +func doneChannelzWrapper(acbw *acBalancerWrapper, result *balancer.PickResult) { + ac := acbw.ac ac.incrCallsStarted() - return func(b balancer.DoneInfo) { + done := result.Done + result.Done = func(b balancer.DoneInfo) { if b.Err != nil && b.Err != io.EOF { ac.incrCallsFailed() } else { @@ -82,7 +91,7 @@ func doneChannelzWrapper(acw *acBalancerWrapper, done func(balancer.DoneInfo)) f // - the current picker returns other errors and failfast is false. // - the subConn returned by the current picker is not READY // When one of these situations happens, pick blocks until the picker gets updated. -func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.PickInfo) (transport.ClientTransport, func(balancer.DoneInfo), error) { +func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer.PickInfo) (transport.ClientTransport, balancer.PickResult, error) { var ch chan struct{} var lastPickErr error @@ -90,7 +99,7 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer. 
pw.mu.Lock() if pw.done { pw.mu.Unlock() - return nil, nil, ErrClientConnClosing + return nil, balancer.PickResult{}, ErrClientConnClosing } if pw.picker == nil { @@ -111,9 +120,9 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer. } switch ctx.Err() { case context.DeadlineExceeded: - return nil, nil, status.Error(codes.DeadlineExceeded, errStr) + return nil, balancer.PickResult{}, status.Error(codes.DeadlineExceeded, errStr) case context.Canceled: - return nil, nil, status.Error(codes.Canceled, errStr) + return nil, balancer.PickResult{}, status.Error(codes.Canceled, errStr) } case <-ch: } @@ -125,7 +134,6 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer. pw.mu.Unlock() pickResult, err := p.Pick(info) - if err != nil { if err == balancer.ErrNoSubConnAvailable { continue @@ -136,7 +144,7 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer. if istatus.IsRestrictedControlPlaneCode(st) { err = status.Errorf(codes.Internal, "received picker error with illegal status: %v", err) } - return nil, nil, dropError{error: err} + return nil, balancer.PickResult{}, dropError{error: err} } // For all other errors, wait for ready RPCs should block and other // RPCs should fail with unavailable. @@ -144,19 +152,20 @@ func (pw *pickerWrapper) pick(ctx context.Context, failfast bool, info balancer. 
lastPickErr = err continue } - return nil, nil, status.Error(codes.Unavailable, err.Error()) + return nil, balancer.PickResult{}, status.Error(codes.Unavailable, err.Error()) } - acw, ok := pickResult.SubConn.(*acBalancerWrapper) + acbw, ok := pickResult.SubConn.(*acBalancerWrapper) if !ok { logger.Errorf("subconn returned from pick is type %T, not *acBalancerWrapper", pickResult.SubConn) continue } - if t := acw.getAddrConn().getReadyTransport(); t != nil { + if t := acbw.ac.getReadyTransport(); t != nil { if channelz.IsOn() { - return t, doneChannelzWrapper(acw, pickResult.Done), nil + doneChannelzWrapper(acbw, &pickResult) + return t, pickResult, nil } - return t, pickResult.Done, nil + return t, pickResult, nil } if pickResult.Done != nil { // Calling done with nil error, no bytes sent and no bytes received. @@ -181,6 +190,25 @@ func (pw *pickerWrapper) close() { close(pw.blockingCh) } +func (pw *pickerWrapper) enterIdleMode() { + pw.mu.Lock() + defer pw.mu.Unlock() + if pw.done { + return + } + pw.idle = true +} + +func (pw *pickerWrapper) exitIdleMode() { + pw.mu.Lock() + defer pw.mu.Unlock() + if pw.done { + return + } + pw.blockingCh = make(chan struct{}) + pw.idle = false +} + // dropError is a wrapper error that indicates the LB policy wishes to drop the // RPC and not retry it. type dropError struct { diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/pickfirst.go b/.ci/providerlint/vendor/google.golang.org/grpc/pickfirst.go index fb7a99e0a27..abe266b021d 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/pickfirst.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/pickfirst.go @@ -19,11 +19,15 @@ package grpc import ( + "encoding/json" "errors" "fmt" "google.golang.org/grpc/balancer" "google.golang.org/grpc/connectivity" + "google.golang.org/grpc/internal/envconfig" + "google.golang.org/grpc/internal/grpcrand" + "google.golang.org/grpc/serviceconfig" ) // PickFirstBalancerName is the name of the pick_first balancer. 
@@ -43,15 +47,33 @@ func (*pickfirstBuilder) Name() string { return PickFirstBalancerName } +type pfConfig struct { + serviceconfig.LoadBalancingConfig `json:"-"` + + // If set to true, instructs the LB policy to shuffle the order of the list + // of addresses received from the name resolver before attempting to + // connect to them. + ShuffleAddressList bool `json:"shuffleAddressList"` +} + +func (*pickfirstBuilder) ParseConfig(js json.RawMessage) (serviceconfig.LoadBalancingConfig, error) { + cfg := &pfConfig{} + if err := json.Unmarshal(js, cfg); err != nil { + return nil, fmt.Errorf("pickfirst: unable to unmarshal LB policy config: %s, error: %v", string(js), err) + } + return cfg, nil +} + type pickfirstBalancer struct { state connectivity.State cc balancer.ClientConn subConn balancer.SubConn + cfg *pfConfig } func (b *pickfirstBalancer) ResolverError(err error) { if logger.V(2) { - logger.Infof("pickfirstBalancer: ResolverError called with error %v", err) + logger.Infof("pickfirstBalancer: ResolverError called with error: %v", err) } if b.subConn == nil { b.state = connectivity.TransientFailure @@ -69,7 +91,8 @@ func (b *pickfirstBalancer) ResolverError(err error) { } func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState) error { - if len(state.ResolverState.Addresses) == 0 { + addrs := state.ResolverState.Addresses + if len(addrs) == 0 { // The resolver reported an empty address list. Treat it like an error by // calling b.ResolverError. 
if b.subConn != nil { @@ -82,12 +105,23 @@ func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState return balancer.ErrBadResolverState } + if state.BalancerConfig != nil { + cfg, ok := state.BalancerConfig.(*pfConfig) + if !ok { + return fmt.Errorf("pickfirstBalancer: received nil or illegal BalancerConfig (type %T): %v", state.BalancerConfig, state.BalancerConfig) + } + b.cfg = cfg + } + + if envconfig.PickFirstLBConfig && b.cfg != nil && b.cfg.ShuffleAddressList { + grpcrand.Shuffle(len(addrs), func(i, j int) { addrs[i], addrs[j] = addrs[j], addrs[i] }) + } if b.subConn != nil { - b.cc.UpdateAddresses(b.subConn, state.ResolverState.Addresses) + b.cc.UpdateAddresses(b.subConn, addrs) return nil } - subConn, err := b.cc.NewSubConn(state.ResolverState.Addresses, balancer.NewSubConnOptions{}) + subConn, err := b.cc.NewSubConn(addrs, balancer.NewSubConnOptions{}) if err != nil { if logger.V(2) { logger.Errorf("pickfirstBalancer: failed to NewSubConn: %v", err) @@ -102,8 +136,8 @@ func (b *pickfirstBalancer) UpdateClientConnState(state balancer.ClientConnState b.subConn = subConn b.state = connectivity.Idle b.cc.UpdateState(balancer.State{ - ConnectivityState: connectivity.Idle, - Picker: &picker{result: balancer.PickResult{SubConn: b.subConn}}, + ConnectivityState: connectivity.Connecting, + Picker: &picker{err: balancer.ErrNoSubConnAvailable}, }) b.subConn.Connect() return nil @@ -119,7 +153,6 @@ func (b *pickfirstBalancer) UpdateSubConnState(subConn balancer.SubConn, state b } return } - b.state = state.ConnectivityState if state.ConnectivityState == connectivity.Shutdown { b.subConn = nil return @@ -132,11 +165,21 @@ func (b *pickfirstBalancer) UpdateSubConnState(subConn balancer.SubConn, state b Picker: &picker{result: balancer.PickResult{SubConn: subConn}}, }) case connectivity.Connecting: + if b.state == connectivity.TransientFailure { + // We stay in TransientFailure until we are Ready. See A62. 
+ return + } b.cc.UpdateState(balancer.State{ ConnectivityState: state.ConnectivityState, Picker: &picker{err: balancer.ErrNoSubConnAvailable}, }) case connectivity.Idle: + if b.state == connectivity.TransientFailure { + // We stay in TransientFailure until we are Ready. Also kick the + // subConn out of Idle into Connecting. See A62. + b.subConn.Connect() + return + } b.cc.UpdateState(balancer.State{ ConnectivityState: state.ConnectivityState, Picker: &idlePicker{subConn: subConn}, @@ -147,6 +190,7 @@ func (b *pickfirstBalancer) UpdateSubConnState(subConn balancer.SubConn, state b Picker: &picker{err: state.ConnectionError}, }) } + b.state = state.ConnectivityState } func (b *pickfirstBalancer) Close() { diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.pb.go b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.pb.go index c22f9a52db4..d54c07676d5 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.pb.go @@ -1,4 +1,4 @@ -// Copyright 2016 gRPC authors. +// Copyright 2016 The gRPC Authors // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -11,19 +11,20 @@ // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. - // Service exported by server reflection +// Warning: this entire file is deprecated. Use this instead: +// https://github.com/grpc/grpc-proto/blob/master/grpc/reflection/v1/reflection.proto + // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.25.0 -// protoc v3.14.0 -// source: reflection/grpc_reflection_v1alpha/reflection.proto +// protoc-gen-go v1.30.0 +// protoc v4.22.0 +// grpc/reflection/v1alpha/reflection.proto is a deprecated file. package grpc_reflection_v1alpha import ( - proto "github.com/golang/protobuf/proto" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" reflect "reflect" @@ -37,16 +38,15 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. -const _ = proto.ProtoPackageIsVersion4 - // The message sent by the client when calling ServerReflectionInfo method. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type ServerReflectionRequest struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"` // To use reflection service, the client should set one of the following // fields in message_request. 
The server distinguishes requests by their @@ -65,7 +65,7 @@ type ServerReflectionRequest struct { func (x *ServerReflectionRequest) Reset() { *x = ServerReflectionRequest{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[0] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[0] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -78,7 +78,7 @@ func (x *ServerReflectionRequest) String() string { func (*ServerReflectionRequest) ProtoMessage() {} func (x *ServerReflectionRequest) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[0] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[0] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -91,9 +91,10 @@ func (x *ServerReflectionRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ServerReflectionRequest.ProtoReflect.Descriptor instead. func (*ServerReflectionRequest) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{0} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{0} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionRequest) GetHost() string { if x != nil { return x.Host @@ -108,6 +109,7 @@ func (m *ServerReflectionRequest) GetMessageRequest() isServerReflectionRequest_ return nil } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
func (x *ServerReflectionRequest) GetFileByFilename() string { if x, ok := x.GetMessageRequest().(*ServerReflectionRequest_FileByFilename); ok { return x.FileByFilename @@ -115,6 +117,7 @@ func (x *ServerReflectionRequest) GetFileByFilename() string { return "" } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionRequest) GetFileContainingSymbol() string { if x, ok := x.GetMessageRequest().(*ServerReflectionRequest_FileContainingSymbol); ok { return x.FileContainingSymbol @@ -122,6 +125,7 @@ func (x *ServerReflectionRequest) GetFileContainingSymbol() string { return "" } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionRequest) GetFileContainingExtension() *ExtensionRequest { if x, ok := x.GetMessageRequest().(*ServerReflectionRequest_FileContainingExtension); ok { return x.FileContainingExtension @@ -129,6 +133,7 @@ func (x *ServerReflectionRequest) GetFileContainingExtension() *ExtensionRequest return nil } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionRequest) GetAllExtensionNumbersOfType() string { if x, ok := x.GetMessageRequest().(*ServerReflectionRequest_AllExtensionNumbersOfType); ok { return x.AllExtensionNumbersOfType @@ -136,6 +141,7 @@ func (x *ServerReflectionRequest) GetAllExtensionNumbersOfType() string { return "" } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionRequest) GetListServices() string { if x, ok := x.GetMessageRequest().(*ServerReflectionRequest_ListServices); ok { return x.ListServices @@ -149,6 +155,8 @@ type isServerReflectionRequest_MessageRequest interface { type ServerReflectionRequest_FileByFilename struct { // Find a proto file by the file name. 
+ // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. FileByFilename string `protobuf:"bytes,3,opt,name=file_by_filename,json=fileByFilename,proto3,oneof"` } @@ -156,12 +164,16 @@ type ServerReflectionRequest_FileContainingSymbol struct { // Find the proto file that declares the given fully-qualified symbol name. // This field should be a fully-qualified symbol name // (e.g. .[.] or .). + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. FileContainingSymbol string `protobuf:"bytes,4,opt,name=file_containing_symbol,json=fileContainingSymbol,proto3,oneof"` } type ServerReflectionRequest_FileContainingExtension struct { // Find the proto file which defines an extension extending the given // message type with the given field number. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. FileContainingExtension *ExtensionRequest `protobuf:"bytes,5,opt,name=file_containing_extension,json=fileContainingExtension,proto3,oneof"` } @@ -174,12 +186,16 @@ type ServerReflectionRequest_AllExtensionNumbersOfType struct { // StatusCode::UNIMPLEMENTED if it's not implemented. // This field should be a fully-qualified type name. The format is // . + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. AllExtensionNumbersOfType string `protobuf:"bytes,6,opt,name=all_extension_numbers_of_type,json=allExtensionNumbersOfType,proto3,oneof"` } type ServerReflectionRequest_ListServices struct { // List the full names of registered services. The content will not be // checked. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
ListServices string `protobuf:"bytes,7,opt,name=list_services,json=listServices,proto3,oneof"` } @@ -196,20 +212,25 @@ func (*ServerReflectionRequest_ListServices) isServerReflectionRequest_MessageRe // The type name and extension number sent by the client when requesting // file_containing_extension. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type ExtensionRequest struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields // Fully-qualified type name. The format should be . - ContainingType string `protobuf:"bytes,1,opt,name=containing_type,json=containingType,proto3" json:"containing_type,omitempty"` - ExtensionNumber int32 `protobuf:"varint,2,opt,name=extension_number,json=extensionNumber,proto3" json:"extension_number,omitempty"` + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. + ContainingType string `protobuf:"bytes,1,opt,name=containing_type,json=containingType,proto3" json:"containing_type,omitempty"` + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
+ ExtensionNumber int32 `protobuf:"varint,2,opt,name=extension_number,json=extensionNumber,proto3" json:"extension_number,omitempty"` } func (x *ExtensionRequest) Reset() { *x = ExtensionRequest{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[1] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[1] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -222,7 +243,7 @@ func (x *ExtensionRequest) String() string { func (*ExtensionRequest) ProtoMessage() {} func (x *ExtensionRequest) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[1] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[1] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -235,9 +256,10 @@ func (x *ExtensionRequest) ProtoReflect() protoreflect.Message { // Deprecated: Use ExtensionRequest.ProtoReflect.Descriptor instead. func (*ExtensionRequest) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{1} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{1} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ExtensionRequest) GetContainingType() string { if x != nil { return x.ContainingType @@ -245,6 +267,7 @@ func (x *ExtensionRequest) GetContainingType() string { return "" } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ExtensionRequest) GetExtensionNumber() int32 { if x != nil { return x.ExtensionNumber @@ -253,15 +276,19 @@ func (x *ExtensionRequest) GetExtensionNumber() int32 { } // The message sent by the server to answer ServerReflectionInfo method. 
+// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type ServerReflectionResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields - ValidHost string `protobuf:"bytes,1,opt,name=valid_host,json=validHost,proto3" json:"valid_host,omitempty"` + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. + ValidHost string `protobuf:"bytes,1,opt,name=valid_host,json=validHost,proto3" json:"valid_host,omitempty"` + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. OriginalRequest *ServerReflectionRequest `protobuf:"bytes,2,opt,name=original_request,json=originalRequest,proto3" json:"original_request,omitempty"` - // The server sets one of the following fields according to the - // message_request in the request. + // The server sets one of the following fields according to the message_request + // in the request. 
// // Types that are assignable to MessageResponse: // @@ -275,7 +302,7 @@ type ServerReflectionResponse struct { func (x *ServerReflectionResponse) Reset() { *x = ServerReflectionResponse{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[2] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[2] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -288,7 +315,7 @@ func (x *ServerReflectionResponse) String() string { func (*ServerReflectionResponse) ProtoMessage() {} func (x *ServerReflectionResponse) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[2] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[2] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -301,9 +328,10 @@ func (x *ServerReflectionResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ServerReflectionResponse.ProtoReflect.Descriptor instead. func (*ServerReflectionResponse) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{2} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{2} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionResponse) GetValidHost() string { if x != nil { return x.ValidHost @@ -311,6 +339,7 @@ func (x *ServerReflectionResponse) GetValidHost() string { return "" } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
func (x *ServerReflectionResponse) GetOriginalRequest() *ServerReflectionRequest { if x != nil { return x.OriginalRequest @@ -325,6 +354,7 @@ func (m *ServerReflectionResponse) GetMessageResponse() isServerReflectionRespon return nil } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionResponse) GetFileDescriptorResponse() *FileDescriptorResponse { if x, ok := x.GetMessageResponse().(*ServerReflectionResponse_FileDescriptorResponse); ok { return x.FileDescriptorResponse @@ -332,6 +362,7 @@ func (x *ServerReflectionResponse) GetFileDescriptorResponse() *FileDescriptorRe return nil } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionResponse) GetAllExtensionNumbersResponse() *ExtensionNumberResponse { if x, ok := x.GetMessageResponse().(*ServerReflectionResponse_AllExtensionNumbersResponse); ok { return x.AllExtensionNumbersResponse @@ -339,6 +370,7 @@ func (x *ServerReflectionResponse) GetAllExtensionNumbersResponse() *ExtensionNu return nil } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServerReflectionResponse) GetListServicesResponse() *ListServiceResponse { if x, ok := x.GetMessageResponse().(*ServerReflectionResponse_ListServicesResponse); ok { return x.ListServicesResponse @@ -346,6 +378,7 @@ func (x *ServerReflectionResponse) GetListServicesResponse() *ListServiceRespons return nil } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
func (x *ServerReflectionResponse) GetErrorResponse() *ErrorResponse { if x, ok := x.GetMessageResponse().(*ServerReflectionResponse_ErrorResponse); ok { return x.ErrorResponse @@ -359,26 +392,34 @@ type isServerReflectionResponse_MessageResponse interface { type ServerReflectionResponse_FileDescriptorResponse struct { // This message is used to answer file_by_filename, file_containing_symbol, - // file_containing_extension requests with transitive dependencies. - // As the repeated label is not allowed in oneof fields, we use a + // file_containing_extension requests with transitive dependencies. As + // the repeated label is not allowed in oneof fields, we use a // FileDescriptorResponse message to encapsulate the repeated fields. // The reflection service is allowed to avoid sending FileDescriptorProtos // that were previously sent in response to earlier requests in the stream. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. FileDescriptorResponse *FileDescriptorResponse `protobuf:"bytes,4,opt,name=file_descriptor_response,json=fileDescriptorResponse,proto3,oneof"` } type ServerReflectionResponse_AllExtensionNumbersResponse struct { - // This message is used to answer all_extension_numbers_of_type requests. + // This message is used to answer all_extension_numbers_of_type request. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. AllExtensionNumbersResponse *ExtensionNumberResponse `protobuf:"bytes,5,opt,name=all_extension_numbers_response,json=allExtensionNumbersResponse,proto3,oneof"` } type ServerReflectionResponse_ListServicesResponse struct { - // This message is used to answer list_services requests. + // This message is used to answer list_services request. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
ListServicesResponse *ListServiceResponse `protobuf:"bytes,6,opt,name=list_services_response,json=listServicesResponse,proto3,oneof"` } type ServerReflectionResponse_ErrorResponse struct { // This message is used when an error occurs. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. ErrorResponse *ErrorResponse `protobuf:"bytes,7,opt,name=error_response,json=errorResponse,proto3,oneof"` } @@ -395,6 +436,8 @@ func (*ServerReflectionResponse_ErrorResponse) isServerReflectionResponse_Messag // Serialized FileDescriptorProto messages sent by the server answering // a file_by_filename, file_containing_symbol, or file_containing_extension // request. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type FileDescriptorResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -403,13 +446,15 @@ type FileDescriptorResponse struct { // Serialized FileDescriptorProto messages. We avoid taking a dependency on // descriptor.proto, which uses proto2 only features, by making them opaque // bytes instead. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
FileDescriptorProto [][]byte `protobuf:"bytes,1,rep,name=file_descriptor_proto,json=fileDescriptorProto,proto3" json:"file_descriptor_proto,omitempty"` } func (x *FileDescriptorResponse) Reset() { *x = FileDescriptorResponse{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[3] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[3] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -422,7 +467,7 @@ func (x *FileDescriptorResponse) String() string { func (*FileDescriptorResponse) ProtoMessage() {} func (x *FileDescriptorResponse) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[3] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[3] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -435,9 +480,10 @@ func (x *FileDescriptorResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use FileDescriptorResponse.ProtoReflect.Descriptor instead. func (*FileDescriptorResponse) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{3} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{3} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *FileDescriptorResponse) GetFileDescriptorProto() [][]byte { if x != nil { return x.FileDescriptorProto @@ -447,6 +493,8 @@ func (x *FileDescriptorResponse) GetFileDescriptorProto() [][]byte { // A list of extension numbers sent by the server answering // all_extension_numbers_of_type request. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
type ExtensionNumberResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -454,14 +502,17 @@ type ExtensionNumberResponse struct { // Full name of the base type, including the package name. The format // is . - BaseTypeName string `protobuf:"bytes,1,opt,name=base_type_name,json=baseTypeName,proto3" json:"base_type_name,omitempty"` + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. + BaseTypeName string `protobuf:"bytes,1,opt,name=base_type_name,json=baseTypeName,proto3" json:"base_type_name,omitempty"` + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. ExtensionNumber []int32 `protobuf:"varint,2,rep,packed,name=extension_number,json=extensionNumber,proto3" json:"extension_number,omitempty"` } func (x *ExtensionNumberResponse) Reset() { *x = ExtensionNumberResponse{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[4] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[4] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -474,7 +525,7 @@ func (x *ExtensionNumberResponse) String() string { func (*ExtensionNumberResponse) ProtoMessage() {} func (x *ExtensionNumberResponse) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[4] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[4] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -487,9 +538,10 @@ func (x *ExtensionNumberResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ExtensionNumberResponse.ProtoReflect.Descriptor instead. 
func (*ExtensionNumberResponse) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{4} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{4} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ExtensionNumberResponse) GetBaseTypeName() string { if x != nil { return x.BaseTypeName @@ -497,6 +549,7 @@ func (x *ExtensionNumberResponse) GetBaseTypeName() string { return "" } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ExtensionNumberResponse) GetExtensionNumber() []int32 { if x != nil { return x.ExtensionNumber @@ -505,6 +558,8 @@ func (x *ExtensionNumberResponse) GetExtensionNumber() []int32 { } // A list of ServiceResponse sent by the server answering list_services request. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type ListServiceResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -512,13 +567,15 @@ type ListServiceResponse struct { // The information of each service may be expanded in the future, so we use // ServiceResponse message to encapsulate it. + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
Service []*ServiceResponse `protobuf:"bytes,1,rep,name=service,proto3" json:"service,omitempty"` } func (x *ListServiceResponse) Reset() { *x = ListServiceResponse{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[5] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[5] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -531,7 +588,7 @@ func (x *ListServiceResponse) String() string { func (*ListServiceResponse) ProtoMessage() {} func (x *ListServiceResponse) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[5] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[5] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -544,9 +601,10 @@ func (x *ListServiceResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ListServiceResponse.ProtoReflect.Descriptor instead. func (*ListServiceResponse) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{5} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{5} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ListServiceResponse) GetService() []*ServiceResponse { if x != nil { return x.Service @@ -556,6 +614,8 @@ func (x *ListServiceResponse) GetService() []*ServiceResponse { // The information of a single service used by ListServiceResponse to answer // list_services request. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type ServiceResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -563,13 +623,15 @@ type ServiceResponse struct { // Full name of a registered service, including its package name. The format // is . 
+ // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` } func (x *ServiceResponse) Reset() { *x = ServiceResponse{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[6] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[6] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -582,7 +644,7 @@ func (x *ServiceResponse) String() string { func (*ServiceResponse) ProtoMessage() {} func (x *ServiceResponse) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[6] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[6] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -595,9 +657,10 @@ func (x *ServiceResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ServiceResponse.ProtoReflect.Descriptor instead. func (*ServiceResponse) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{6} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{6} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ServiceResponse) GetName() string { if x != nil { return x.Name @@ -606,20 +669,25 @@ func (x *ServiceResponse) GetName() string { } // The error code and error message sent by the server when an error occurs. +// +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. type ErrorResponse struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields // This field uses the error codes defined in grpc::StatusCode. 
- ErrorCode int32 `protobuf:"varint,1,opt,name=error_code,json=errorCode,proto3" json:"error_code,omitempty"` + // + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. + ErrorCode int32 `protobuf:"varint,1,opt,name=error_code,json=errorCode,proto3" json:"error_code,omitempty"` + // Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. ErrorMessage string `protobuf:"bytes,2,opt,name=error_message,json=errorMessage,proto3" json:"error_message,omitempty"` } func (x *ErrorResponse) Reset() { *x = ErrorResponse{} if protoimpl.UnsafeEnabled { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[7] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[7] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -632,7 +700,7 @@ func (x *ErrorResponse) String() string { func (*ErrorResponse) ProtoMessage() {} func (x *ErrorResponse) ProtoReflect() protoreflect.Message { - mi := &file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[7] + mi := &file_grpc_reflection_v1alpha_reflection_proto_msgTypes[7] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -645,9 +713,10 @@ func (x *ErrorResponse) ProtoReflect() protoreflect.Message { // Deprecated: Use ErrorResponse.ProtoReflect.Descriptor instead. func (*ErrorResponse) Descriptor() ([]byte, []int) { - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{7} + return file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP(), []int{7} } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. 
func (x *ErrorResponse) GetErrorCode() int32 { if x != nil { return x.ErrorCode @@ -655,6 +724,7 @@ func (x *ErrorResponse) GetErrorCode() int32 { return 0 } +// Deprecated: The entire proto file grpc/reflection/v1alpha/reflection.proto is marked as deprecated. func (x *ErrorResponse) GetErrorMessage() string { if x != nil { return x.ErrorMessage @@ -662,136 +732,139 @@ func (x *ErrorResponse) GetErrorMessage() string { return "" } -var File_reflection_grpc_reflection_v1alpha_reflection_proto protoreflect.FileDescriptor - -var file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDesc = []byte{ - 0x0a, 0x33, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x67, 0x72, 0x70, - 0x63, 0x5f, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x76, 0x31, 0x61, - 0x6c, 0x70, 0x68, 0x61, 0x2f, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, - 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x17, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, - 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x22, 0xf8, - 0x02, 0x0a, 0x17, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, - 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, - 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x2a, - 0x0a, 0x10, 0x66, 0x69, 0x6c, 0x65, 0x5f, 0x62, 0x79, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x6e, 0x61, - 0x6d, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0e, 0x66, 0x69, 0x6c, 0x65, - 0x42, 0x79, 0x46, 0x69, 0x6c, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x36, 0x0a, 0x16, 0x66, 0x69, - 0x6c, 0x65, 0x5f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x5f, 0x73, 0x79, - 0x6d, 0x62, 0x6f, 0x6c, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x14, 0x66, 0x69, - 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x53, 0x79, 0x6d, 0x62, - 0x6f, 0x6c, 0x12, 
0x67, 0x0a, 0x19, 0x66, 0x69, 0x6c, 0x65, 0x5f, 0x63, 0x6f, 0x6e, 0x74, 0x61, - 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x18, - 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, - 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, - 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, - 0x48, 0x00, 0x52, 0x17, 0x66, 0x69, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, - 0x6e, 0x67, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x42, 0x0a, 0x1d, 0x61, - 0x6c, 0x6c, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, 0x6d, - 0x62, 0x65, 0x72, 0x73, 0x5f, 0x6f, 0x66, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x06, 0x20, 0x01, - 0x28, 0x09, 0x48, 0x00, 0x52, 0x19, 0x61, 0x6c, 0x6c, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, - 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x73, 0x4f, 0x66, 0x54, 0x79, 0x70, 0x65, 0x12, - 0x25, 0x0a, 0x0d, 0x6c, 0x69, 0x73, 0x74, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, - 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0c, 0x6c, 0x69, 0x73, 0x74, 0x53, 0x65, - 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x42, 0x11, 0x0a, 0x0f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, - 0x65, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x66, 0x0a, 0x10, 0x45, 0x78, 0x74, - 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x27, 0x0a, - 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x5f, 0x74, 0x79, 0x70, 0x65, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, - 0x6e, 0x67, 0x54, 0x79, 0x70, 0x65, 0x12, 0x29, 0x0a, 0x10, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, - 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, - 0x52, 0x0f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 
0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, - 0x72, 0x22, 0xc7, 0x04, 0x0a, 0x18, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, - 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x1d, - 0x0a, 0x0a, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x5f, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x09, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x48, 0x6f, 0x73, 0x74, 0x12, 0x5b, 0x0a, - 0x10, 0x6f, 0x72, 0x69, 0x67, 0x69, 0x6e, 0x61, 0x6c, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, - 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x30, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, - 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, - 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, - 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x52, 0x0f, 0x6f, 0x72, 0x69, 0x67, 0x69, - 0x6e, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x6b, 0x0a, 0x18, 0x66, 0x69, - 0x6c, 0x65, 0x5f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x5f, 0x72, 0x65, - 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x67, +var File_grpc_reflection_v1alpha_reflection_proto protoreflect.FileDescriptor + +var file_grpc_reflection_v1alpha_reflection_proto_rawDesc = []byte{ + 0x0a, 0x28, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, + 0x6e, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2f, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, + 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x17, 0x67, 0x72, 0x70, 0x63, + 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, + 0x70, 0x68, 0x61, 0x22, 0xf8, 0x02, 0x0a, 0x17, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, + 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, + 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, + 0x6f, 0x73, 0x74, 0x12, 0x2a, 0x0a, 0x10, 0x66, 0x69, 0x6c, 0x65, 0x5f, 0x62, 0x79, 0x5f, 0x66, + 0x69, 0x6c, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, + 0x0e, 0x66, 0x69, 0x6c, 0x65, 0x42, 0x79, 0x46, 0x69, 0x6c, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x12, + 0x36, 0x0a, 0x16, 0x66, 0x69, 0x6c, 0x65, 0x5f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, + 0x6e, 0x67, 0x5f, 0x73, 0x79, 0x6d, 0x62, 0x6f, 0x6c, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x48, + 0x00, 0x52, 0x14, 0x66, 0x69, 0x6c, 0x65, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, + 0x67, 0x53, 0x79, 0x6d, 0x62, 0x6f, 0x6c, 0x12, 0x67, 0x0a, 0x19, 0x66, 0x69, 0x6c, 0x65, 0x5f, + 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, + 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x67, 0x72, 0x70, + 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, + 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x65, + 0x71, 0x75, 0x65, 0x73, 0x74, 0x48, 0x00, 0x52, 0x17, 0x66, 0x69, 0x6c, 0x65, 0x43, 0x6f, 0x6e, + 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, + 0x12, 0x42, 0x0a, 0x1d, 0x61, 0x6c, 0x6c, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, + 0x6e, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x73, 0x5f, 0x6f, 0x66, 0x5f, 0x74, 0x79, 0x70, + 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x19, 0x61, 0x6c, 0x6c, 0x45, 0x78, + 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x73, 0x4f, 0x66, + 0x54, 0x79, 0x70, 0x65, 0x12, 0x25, 0x0a, 0x0d, 0x6c, 0x69, 0x73, 0x74, 0x5f, 0x73, 0x65, 0x72, + 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0c, 0x6c, + 0x69, 0x73, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x42, 0x11, 
0x0a, 0x0f, 0x6d, + 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x66, + 0x0a, 0x10, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, + 0x73, 0x74, 0x12, 0x27, 0x0a, 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, + 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x63, 0x6f, 0x6e, + 0x74, 0x61, 0x69, 0x6e, 0x69, 0x6e, 0x67, 0x54, 0x79, 0x70, 0x65, 0x12, 0x29, 0x0a, 0x10, 0x65, + 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, + 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, + 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x22, 0xc7, 0x04, 0x0a, 0x18, 0x53, 0x65, 0x72, 0x76, 0x65, + 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, + 0x6e, 0x73, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x5f, 0x68, 0x6f, 0x73, + 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x48, 0x6f, + 0x73, 0x74, 0x12, 0x5b, 0x0a, 0x10, 0x6f, 0x72, 0x69, 0x67, 0x69, 0x6e, 0x61, 0x6c, 0x5f, 0x72, + 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x30, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, - 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x46, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, - 0x69, 0x70, 0x74, 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x00, 0x52, - 0x16, 0x66, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x52, - 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x77, 0x0a, 0x1e, 0x61, 0x6c, 0x6c, 0x5f, 0x65, - 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x73, - 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, - 0x30, 0x2e, 0x67, 
0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, - 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, - 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, - 0x65, 0x48, 0x00, 0x52, 0x1b, 0x61, 0x6c, 0x6c, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, - 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, - 0x12, 0x64, 0x0a, 0x16, 0x6c, 0x69, 0x73, 0x74, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, - 0x73, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, - 0x32, 0x2c, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, - 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x53, - 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x00, - 0x52, 0x14, 0x6c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x52, 0x65, - 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x4f, 0x0a, 0x0e, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x5f, - 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x26, + 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, + 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x52, 0x0f, + 0x6f, 0x72, 0x69, 0x67, 0x69, 0x6e, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, + 0x6b, 0x0a, 0x18, 0x66, 0x69, 0x6c, 0x65, 0x5f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, + 0x6f, 0x72, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, + 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x46, 0x69, 0x6c, 0x65, + 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 
0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, + 0x73, 0x65, 0x48, 0x00, 0x52, 0x16, 0x66, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, + 0x70, 0x74, 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x77, 0x0a, 0x1e, + 0x61, 0x6c, 0x6c, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, + 0x6d, 0x62, 0x65, 0x72, 0x73, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x05, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x30, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, + 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x45, + 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x52, 0x65, + 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x00, 0x52, 0x1b, 0x61, 0x6c, 0x6c, 0x45, 0x78, 0x74, + 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x73, 0x52, 0x65, 0x73, + 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x64, 0x0a, 0x16, 0x6c, 0x69, 0x73, 0x74, 0x5f, 0x73, 0x65, + 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, + 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2c, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, + 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, + 0x4c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, + 0x6e, 0x73, 0x65, 0x48, 0x00, 0x52, 0x14, 0x6c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, + 0x63, 0x65, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x4f, 0x0a, 0x0e, 0x65, + 0x72, 0x72, 0x6f, 0x72, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x07, 0x20, + 0x01, 0x28, 0x0b, 0x32, 0x26, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, + 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x45, 0x72, + 0x72, 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x00, 0x52, 0x0d, 
0x65, + 0x72, 0x72, 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x12, 0x0a, 0x10, + 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, + 0x22, 0x4c, 0x0a, 0x16, 0x46, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, + 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x32, 0x0a, 0x15, 0x66, 0x69, + 0x6c, 0x65, 0x5f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x5f, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x13, 0x66, 0x69, 0x6c, 0x65, 0x44, + 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x6a, + 0x0a, 0x17, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, + 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x24, 0x0a, 0x0e, 0x62, 0x61, 0x73, + 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x0c, 0x62, 0x61, 0x73, 0x65, 0x54, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, + 0x29, 0x0a, 0x10, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, 0x6d, + 0x62, 0x65, 0x72, 0x18, 0x02, 0x20, 0x03, 0x28, 0x05, 0x52, 0x0f, 0x65, 0x78, 0x74, 0x65, 0x6e, + 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x22, 0x59, 0x0a, 0x13, 0x4c, 0x69, + 0x73, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, + 0x65, 0x12, 0x42, 0x0a, 0x07, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x01, 0x20, 0x03, + 0x28, 0x0b, 0x32, 0x28, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, + 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, + 0x76, 0x69, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x52, 0x07, 0x73, 0x65, + 0x72, 0x76, 0x69, 0x63, 0x65, 0x22, 0x25, 0x0a, 0x0f, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, + 0x52, 0x65, 0x73, 0x70, 0x6f, 
0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, + 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x53, 0x0a, 0x0d, + 0x45, 0x72, 0x72, 0x6f, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x1d, 0x0a, + 0x0a, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x5f, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x05, 0x52, 0x09, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x43, 0x6f, 0x64, 0x65, 0x12, 0x23, 0x0a, 0x0d, + 0x65, 0x72, 0x72, 0x6f, 0x72, 0x5f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x0c, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, + 0x65, 0x32, 0x93, 0x01, 0x0a, 0x10, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, + 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x7f, 0x0a, 0x14, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, + 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x30, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, - 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x52, 0x65, - 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x00, 0x52, 0x0d, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x52, - 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x12, 0x0a, 0x10, 0x6d, 0x65, 0x73, 0x73, 0x61, - 0x67, 0x65, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x4c, 0x0a, 0x16, 0x46, - 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x52, 0x65, 0x73, - 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x32, 0x0a, 0x15, 0x66, 0x69, 0x6c, 0x65, 0x5f, 0x64, 0x65, - 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x5f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x18, 0x01, - 0x20, 0x03, 0x28, 0x0c, 0x52, 0x13, 0x66, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, - 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x6a, 0x0a, 0x17, 0x45, 0x78, 0x74, - 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 
0x52, 0x65, 0x73, 0x70, - 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x24, 0x0a, 0x0e, 0x62, 0x61, 0x73, 0x65, 0x5f, 0x74, 0x79, 0x70, - 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x62, 0x61, - 0x73, 0x65, 0x54, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x29, 0x0a, 0x10, 0x65, 0x78, - 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x02, - 0x20, 0x03, 0x28, 0x05, 0x52, 0x0f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x4e, - 0x75, 0x6d, 0x62, 0x65, 0x72, 0x22, 0x59, 0x0a, 0x13, 0x4c, 0x69, 0x73, 0x74, 0x53, 0x65, 0x72, - 0x76, 0x69, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x42, 0x0a, 0x07, - 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x28, 0x2e, - 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, - 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x52, - 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x52, 0x07, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, - 0x22, 0x25, 0x0a, 0x0f, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, - 0x6e, 0x73, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x22, 0x53, 0x0a, 0x0d, 0x45, 0x72, 0x72, 0x6f, 0x72, - 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x65, 0x72, 0x72, 0x6f, - 0x72, 0x5f, 0x63, 0x6f, 0x64, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x09, 0x65, 0x72, - 0x72, 0x6f, 0x72, 0x43, 0x6f, 0x64, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x65, 0x72, 0x72, 0x6f, 0x72, - 0x5f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, - 0x65, 0x72, 0x72, 0x6f, 0x72, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x32, 0x93, 0x01, 0x0a, - 0x10, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, - 0x6e, 0x12, 
0x7f, 0x0a, 0x14, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, - 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x30, 0x2e, 0x67, 0x72, 0x70, 0x63, - 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, - 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, - 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x31, 0x2e, 0x67, 0x72, + 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, + 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, + 0x1a, 0x31, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, + 0x6f, 0x6e, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, + 0x72, 0x52, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, + 0x6e, 0x73, 0x65, 0x28, 0x01, 0x30, 0x01, 0x42, 0x73, 0x0a, 0x1a, 0x69, 0x6f, 0x2e, 0x67, 0x72, 0x70, 0x63, 0x2e, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x76, 0x31, - 0x61, 0x6c, 0x70, 0x68, 0x61, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, 0x6c, - 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x28, 0x01, - 0x30, 0x01, 0x42, 0x3b, 0x5a, 0x39, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, - 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x72, 0x65, 0x66, - 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x72, 0x65, 0x66, - 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x62, - 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x61, 0x6c, 0x70, 0x68, 0x61, 0x42, 0x15, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x52, 0x65, 0x66, + 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x39, + 0x67, 
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, + 0x67, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, + 0x6e, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x72, 0x65, 0x66, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, + 0x6e, 0x5f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0xb8, 0x01, 0x01, 0x62, 0x06, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x33, } var ( - file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescOnce sync.Once - file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescData = file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDesc + file_grpc_reflection_v1alpha_reflection_proto_rawDescOnce sync.Once + file_grpc_reflection_v1alpha_reflection_proto_rawDescData = file_grpc_reflection_v1alpha_reflection_proto_rawDesc ) -func file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP() []byte { - file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescOnce.Do(func() { - file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescData = protoimpl.X.CompressGZIP(file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescData) +func file_grpc_reflection_v1alpha_reflection_proto_rawDescGZIP() []byte { + file_grpc_reflection_v1alpha_reflection_proto_rawDescOnce.Do(func() { + file_grpc_reflection_v1alpha_reflection_proto_rawDescData = protoimpl.X.CompressGZIP(file_grpc_reflection_v1alpha_reflection_proto_rawDescData) }) - return file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDescData + return file_grpc_reflection_v1alpha_reflection_proto_rawDescData } -var file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes = make([]protoimpl.MessageInfo, 8) -var file_reflection_grpc_reflection_v1alpha_reflection_proto_goTypes = []interface{}{ +var file_grpc_reflection_v1alpha_reflection_proto_msgTypes = make([]protoimpl.MessageInfo, 8) +var file_grpc_reflection_v1alpha_reflection_proto_goTypes = []interface{}{ 
(*ServerReflectionRequest)(nil), // 0: grpc.reflection.v1alpha.ServerReflectionRequest (*ExtensionRequest)(nil), // 1: grpc.reflection.v1alpha.ExtensionRequest (*ServerReflectionResponse)(nil), // 2: grpc.reflection.v1alpha.ServerReflectionResponse @@ -801,7 +874,7 @@ var file_reflection_grpc_reflection_v1alpha_reflection_proto_goTypes = []interfa (*ServiceResponse)(nil), // 6: grpc.reflection.v1alpha.ServiceResponse (*ErrorResponse)(nil), // 7: grpc.reflection.v1alpha.ErrorResponse } -var file_reflection_grpc_reflection_v1alpha_reflection_proto_depIdxs = []int32{ +var file_grpc_reflection_v1alpha_reflection_proto_depIdxs = []int32{ 1, // 0: grpc.reflection.v1alpha.ServerReflectionRequest.file_containing_extension:type_name -> grpc.reflection.v1alpha.ExtensionRequest 0, // 1: grpc.reflection.v1alpha.ServerReflectionResponse.original_request:type_name -> grpc.reflection.v1alpha.ServerReflectionRequest 3, // 2: grpc.reflection.v1alpha.ServerReflectionResponse.file_descriptor_response:type_name -> grpc.reflection.v1alpha.FileDescriptorResponse @@ -818,13 +891,13 @@ var file_reflection_grpc_reflection_v1alpha_reflection_proto_depIdxs = []int32{ 0, // [0:7] is the sub-list for field type_name } -func init() { file_reflection_grpc_reflection_v1alpha_reflection_proto_init() } -func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { - if File_reflection_grpc_reflection_v1alpha_reflection_proto != nil { +func init() { file_grpc_reflection_v1alpha_reflection_proto_init() } +func file_grpc_reflection_v1alpha_reflection_proto_init() { + if File_grpc_reflection_v1alpha_reflection_proto != nil { return } if !protoimpl.UnsafeEnabled { - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ServerReflectionRequest); i { case 0: return &v.state @@ -836,7 +909,7 @@ func 
file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ExtensionRequest); i { case 0: return &v.state @@ -848,7 +921,7 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ServerReflectionResponse); i { case 0: return &v.state @@ -860,7 +933,7 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*FileDescriptorResponse); i { case 0: return &v.state @@ -872,7 +945,7 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ExtensionNumberResponse); i { case 0: return &v.state @@ -884,7 +957,7 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ListServiceResponse); i { case 
0: return &v.state @@ -896,7 +969,7 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ServiceResponse); i { case 0: return &v.state @@ -908,7 +981,7 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { return nil } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*ErrorResponse); i { case 0: return &v.state @@ -921,14 +994,14 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { } } } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[0].OneofWrappers = []interface{}{ + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[0].OneofWrappers = []interface{}{ (*ServerReflectionRequest_FileByFilename)(nil), (*ServerReflectionRequest_FileContainingSymbol)(nil), (*ServerReflectionRequest_FileContainingExtension)(nil), (*ServerReflectionRequest_AllExtensionNumbersOfType)(nil), (*ServerReflectionRequest_ListServices)(nil), } - file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes[2].OneofWrappers = []interface{}{ + file_grpc_reflection_v1alpha_reflection_proto_msgTypes[2].OneofWrappers = []interface{}{ (*ServerReflectionResponse_FileDescriptorResponse)(nil), (*ServerReflectionResponse_AllExtensionNumbersResponse)(nil), (*ServerReflectionResponse_ListServicesResponse)(nil), @@ -938,18 +1011,18 @@ func file_reflection_grpc_reflection_v1alpha_reflection_proto_init() { out := protoimpl.TypeBuilder{ File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: 
file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDesc, + RawDescriptor: file_grpc_reflection_v1alpha_reflection_proto_rawDesc, NumEnums: 0, NumMessages: 8, NumExtensions: 0, NumServices: 1, }, - GoTypes: file_reflection_grpc_reflection_v1alpha_reflection_proto_goTypes, - DependencyIndexes: file_reflection_grpc_reflection_v1alpha_reflection_proto_depIdxs, - MessageInfos: file_reflection_grpc_reflection_v1alpha_reflection_proto_msgTypes, + GoTypes: file_grpc_reflection_v1alpha_reflection_proto_goTypes, + DependencyIndexes: file_grpc_reflection_v1alpha_reflection_proto_depIdxs, + MessageInfos: file_grpc_reflection_v1alpha_reflection_proto_msgTypes, }.Build() - File_reflection_grpc_reflection_v1alpha_reflection_proto = out.File - file_reflection_grpc_reflection_v1alpha_reflection_proto_rawDesc = nil - file_reflection_grpc_reflection_v1alpha_reflection_proto_goTypes = nil - file_reflection_grpc_reflection_v1alpha_reflection_proto_depIdxs = nil + File_grpc_reflection_v1alpha_reflection_proto = out.File + file_grpc_reflection_v1alpha_reflection_proto_rawDesc = nil + file_grpc_reflection_v1alpha_reflection_proto_goTypes = nil + file_grpc_reflection_v1alpha_reflection_proto_depIdxs = nil } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.proto b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.proto deleted file mode 100644 index ee2b82c0a5b..00000000000 --- a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection.proto +++ /dev/null @@ -1,138 +0,0 @@ -// Copyright 2016 gRPC authors. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Service exported by server reflection - -syntax = "proto3"; - -option go_package = "google.golang.org/grpc/reflection/grpc_reflection_v1alpha"; - -package grpc.reflection.v1alpha; - -service ServerReflection { - // The reflection service is structured as a bidirectional stream, ensuring - // all related requests go to a single server. - rpc ServerReflectionInfo(stream ServerReflectionRequest) - returns (stream ServerReflectionResponse); -} - -// The message sent by the client when calling ServerReflectionInfo method. -message ServerReflectionRequest { - string host = 1; - // To use reflection service, the client should set one of the following - // fields in message_request. The server distinguishes requests by their - // defined field and then handles them using corresponding methods. - oneof message_request { - // Find a proto file by the file name. - string file_by_filename = 3; - - // Find the proto file that declares the given fully-qualified symbol name. - // This field should be a fully-qualified symbol name - // (e.g. <package>.<service>[.<method>] or <package>.<type>). - string file_containing_symbol = 4; - - // Find the proto file which defines an extension extending the given - // message type with the given field number. - ExtensionRequest file_containing_extension = 5; - - // Finds the tag numbers used by all known extensions of extendee_type, and - // appends them to ExtensionNumberResponse in an undefined order.
- // Its corresponding method is best-effort: it's not guaranteed that the - // reflection service will implement this method, and it's not guaranteed - // that this method will provide all extensions. Returns - // StatusCode::UNIMPLEMENTED if it's not implemented. - // This field should be a fully-qualified type name. The format is - // <package>.<type>. - string all_extension_numbers_of_type = 6; - - // List the full names of registered services. The content will not be - // checked. - string list_services = 7; - } -} - -// The type name and extension number sent by the client when requesting -// file_containing_extension. -message ExtensionRequest { - // Fully-qualified type name. The format should be <package>.<type>. - string containing_type = 1; - int32 extension_number = 2; -} - -// The message sent by the server to answer ServerReflectionInfo method. -message ServerReflectionResponse { - string valid_host = 1; - ServerReflectionRequest original_request = 2; - // The server sets one of the following fields according to the - // message_request in the request. - oneof message_response { - // This message is used to answer file_by_filename, file_containing_symbol, - // file_containing_extension requests with transitive dependencies. - // As the repeated label is not allowed in oneof fields, we use a - // FileDescriptorResponse message to encapsulate the repeated fields. - // The reflection service is allowed to avoid sending FileDescriptorProtos - // that were previously sent in response to earlier requests in the stream. - FileDescriptorResponse file_descriptor_response = 4; - - // This message is used to answer all_extension_numbers_of_type requests. - ExtensionNumberResponse all_extension_numbers_response = 5; - - // This message is used to answer list_services requests. - ListServiceResponse list_services_response = 6; - - // This message is used when an error occurs.
- ErrorResponse error_response = 7; - } -} - -// Serialized FileDescriptorProto messages sent by the server answering -// a file_by_filename, file_containing_symbol, or file_containing_extension -// request. -message FileDescriptorResponse { - // Serialized FileDescriptorProto messages. We avoid taking a dependency on - // descriptor.proto, which uses proto2 only features, by making them opaque - // bytes instead. - repeated bytes file_descriptor_proto = 1; -} - -// A list of extension numbers sent by the server answering -// all_extension_numbers_of_type request. -message ExtensionNumberResponse { - // Full name of the base type, including the package name. The format - // is <package>.<type>. - string base_type_name = 1; - repeated int32 extension_number = 2; -} - -// A list of ServiceResponse sent by the server answering list_services request. -message ListServiceResponse { - // The information of each service may be expanded in the future, so we use - // ServiceResponse message to encapsulate it. - repeated ServiceResponse service = 1; -} - -// The information of a single service used by ListServiceResponse to answer -// list_services request. -message ServiceResponse { - // Full name of a registered service, including its package name. The format - // is <package>.<service>. - string name = 1; -} - -// The error code and error message sent by the server when an error occurs. -message ErrorResponse { - // This field uses the error codes defined in grpc::StatusCode.
- int32 error_code = 1; - string error_message = 2; -} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection_grpc.pb.go b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection_grpc.pb.go index b8e76a87dca..367a029be6b 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection_grpc.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/grpc_reflection_v1alpha/reflection_grpc.pb.go @@ -1,4 +1,4 @@ -// Copyright 2016 gRPC authors. +// Copyright 2016 The gRPC Authors // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -11,14 +11,16 @@ // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. - // Service exported by server reflection +// Warning: this entire file is deprecated. Use this instead: +// https://github.com/grpc/grpc-proto/blob/master/grpc/reflection/v1/reflection.proto + // Code generated by protoc-gen-go-grpc. DO NOT EDIT. // versions: -// - protoc-gen-go-grpc v1.2.0 -// - protoc v3.14.0 -// source: reflection/grpc_reflection_v1alpha/reflection.proto +// - protoc-gen-go-grpc v1.3.0 +// - protoc v4.22.0 +// grpc/reflection/v1alpha/reflection.proto is a deprecated file. package grpc_reflection_v1alpha @@ -34,6 +36,10 @@ import ( // Requires gRPC-Go v1.32.0 or later. const _ = grpc.SupportPackageIsVersion7 +const ( + ServerReflection_ServerReflectionInfo_FullMethodName = "/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo" +) + // ServerReflectionClient is the client API for ServerReflection service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. 
@@ -52,7 +58,7 @@ func NewServerReflectionClient(cc grpc.ClientConnInterface) ServerReflectionClie } func (c *serverReflectionClient) ServerReflectionInfo(ctx context.Context, opts ...grpc.CallOption) (ServerReflection_ServerReflectionInfoClient, error) { - stream, err := c.cc.NewStream(ctx, &ServerReflection_ServiceDesc.Streams[0], "/grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo", opts...) + stream, err := c.cc.NewStream(ctx, &ServerReflection_ServiceDesc.Streams[0], ServerReflection_ServerReflectionInfo_FullMethodName, opts...) if err != nil { return nil, err } @@ -151,5 +157,5 @@ var ServerReflection_ServiceDesc = grpc.ServiceDesc{ ClientStreams: true, }, }, - Metadata: "reflection/grpc_reflection_v1alpha/reflection.proto", + Metadata: "grpc/reflection/v1alpha/reflection.proto", } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/serverreflection.go b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/serverreflection.go index 0b41783aa53..e2f9ebfbbce 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/reflection/serverreflection.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/reflection/serverreflection.go @@ -42,12 +42,14 @@ import ( "google.golang.org/grpc" "google.golang.org/grpc/codes" - rpb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" "google.golang.org/grpc/status" "google.golang.org/protobuf/proto" "google.golang.org/protobuf/reflect/protodesc" "google.golang.org/protobuf/reflect/protoreflect" "google.golang.org/protobuf/reflect/protoregistry" + + v1alphagrpc "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" + v1alphapb "google.golang.org/grpc/reflection/grpc_reflection_v1alpha" ) // GRPCServer is the interface provided by a gRPC server. It is implemented by @@ -63,7 +65,7 @@ var _ GRPCServer = (*grpc.Server)(nil) // Register registers the server reflection service on the given gRPC server. 
func Register(s GRPCServer) { svr := NewServer(ServerOptions{Services: s}) - rpb.RegisterServerReflectionServer(s, svr) + v1alphagrpc.RegisterServerReflectionServer(s, svr) } // ServiceInfoProvider is an interface used to retrieve metadata about the @@ -124,7 +126,7 @@ type ServerOptions struct { // // Notice: This function is EXPERIMENTAL and may be changed or removed in a // later release. -func NewServer(opts ServerOptions) rpb.ServerReflectionServer { +func NewServer(opts ServerOptions) v1alphagrpc.ServerReflectionServer { if opts.DescriptorResolver == nil { opts.DescriptorResolver = protoregistry.GlobalFiles } @@ -139,7 +141,7 @@ func NewServer(opts ServerOptions) rpb.ServerReflectionServer { } type serverReflectionServer struct { - rpb.UnimplementedServerReflectionServer + v1alphagrpc.UnimplementedServerReflectionServer s ServiceInfoProvider descResolver protodesc.Resolver extResolver ExtensionResolver @@ -213,11 +215,11 @@ func (s *serverReflectionServer) allExtensionNumbersForTypeName(name string) ([] } // listServices returns the names of services this server exposes. -func (s *serverReflectionServer) listServices() []*rpb.ServiceResponse { +func (s *serverReflectionServer) listServices() []*v1alphapb.ServiceResponse { serviceInfo := s.s.GetServiceInfo() - resp := make([]*rpb.ServiceResponse, 0, len(serviceInfo)) + resp := make([]*v1alphapb.ServiceResponse, 0, len(serviceInfo)) for svc := range serviceInfo { - resp = append(resp, &rpb.ServiceResponse{Name: svc}) + resp = append(resp, &v1alphapb.ServiceResponse{Name: svc}) } sort.Slice(resp, func(i, j int) bool { return resp[i].Name < resp[j].Name @@ -226,7 +228,7 @@ func (s *serverReflectionServer) listServices() []*rpb.ServiceResponse { } // ServerReflectionInfo is the reflection service handler. 
-func (s *serverReflectionServer) ServerReflectionInfo(stream rpb.ServerReflection_ServerReflectionInfoServer) error { +func (s *serverReflectionServer) ServerReflectionInfo(stream v1alphagrpc.ServerReflection_ServerReflectionInfoServer) error { sentFileDescriptors := make(map[string]bool) for { in, err := stream.Recv() @@ -237,79 +239,79 @@ func (s *serverReflectionServer) ServerReflectionInfo(stream rpb.ServerReflectio return err } - out := &rpb.ServerReflectionResponse{ + out := &v1alphapb.ServerReflectionResponse{ ValidHost: in.Host, OriginalRequest: in, } switch req := in.MessageRequest.(type) { - case *rpb.ServerReflectionRequest_FileByFilename: + case *v1alphapb.ServerReflectionRequest_FileByFilename: var b [][]byte fd, err := s.descResolver.FindFileByPath(req.FileByFilename) if err == nil { b, err = s.fileDescWithDependencies(fd, sentFileDescriptors) } if err != nil { - out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ - ErrorResponse: &rpb.ErrorResponse{ + out.MessageResponse = &v1alphapb.ServerReflectionResponse_ErrorResponse{ + ErrorResponse: &v1alphapb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { - out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ - FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: b}, + out.MessageResponse = &v1alphapb.ServerReflectionResponse_FileDescriptorResponse{ + FileDescriptorResponse: &v1alphapb.FileDescriptorResponse{FileDescriptorProto: b}, } } - case *rpb.ServerReflectionRequest_FileContainingSymbol: + case *v1alphapb.ServerReflectionRequest_FileContainingSymbol: b, err := s.fileDescEncodingContainingSymbol(req.FileContainingSymbol, sentFileDescriptors) if err != nil { - out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ - ErrorResponse: &rpb.ErrorResponse{ + out.MessageResponse = &v1alphapb.ServerReflectionResponse_ErrorResponse{ + ErrorResponse: &v1alphapb.ErrorResponse{ ErrorCode: 
int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { - out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ - FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: b}, + out.MessageResponse = &v1alphapb.ServerReflectionResponse_FileDescriptorResponse{ + FileDescriptorResponse: &v1alphapb.FileDescriptorResponse{FileDescriptorProto: b}, } } - case *rpb.ServerReflectionRequest_FileContainingExtension: + case *v1alphapb.ServerReflectionRequest_FileContainingExtension: typeName := req.FileContainingExtension.ContainingType extNum := req.FileContainingExtension.ExtensionNumber b, err := s.fileDescEncodingContainingExtension(typeName, extNum, sentFileDescriptors) if err != nil { - out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ - ErrorResponse: &rpb.ErrorResponse{ + out.MessageResponse = &v1alphapb.ServerReflectionResponse_ErrorResponse{ + ErrorResponse: &v1alphapb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { - out.MessageResponse = &rpb.ServerReflectionResponse_FileDescriptorResponse{ - FileDescriptorResponse: &rpb.FileDescriptorResponse{FileDescriptorProto: b}, + out.MessageResponse = &v1alphapb.ServerReflectionResponse_FileDescriptorResponse{ + FileDescriptorResponse: &v1alphapb.FileDescriptorResponse{FileDescriptorProto: b}, } } - case *rpb.ServerReflectionRequest_AllExtensionNumbersOfType: + case *v1alphapb.ServerReflectionRequest_AllExtensionNumbersOfType: extNums, err := s.allExtensionNumbersForTypeName(req.AllExtensionNumbersOfType) if err != nil { - out.MessageResponse = &rpb.ServerReflectionResponse_ErrorResponse{ - ErrorResponse: &rpb.ErrorResponse{ + out.MessageResponse = &v1alphapb.ServerReflectionResponse_ErrorResponse{ + ErrorResponse: &v1alphapb.ErrorResponse{ ErrorCode: int32(codes.NotFound), ErrorMessage: err.Error(), }, } } else { - out.MessageResponse = &rpb.ServerReflectionResponse_AllExtensionNumbersResponse{ - 
AllExtensionNumbersResponse: &rpb.ExtensionNumberResponse{ + out.MessageResponse = &v1alphapb.ServerReflectionResponse_AllExtensionNumbersResponse{ + AllExtensionNumbersResponse: &v1alphapb.ExtensionNumberResponse{ BaseTypeName: req.AllExtensionNumbersOfType, ExtensionNumber: extNums, }, } } - case *rpb.ServerReflectionRequest_ListServices: - out.MessageResponse = &rpb.ServerReflectionResponse_ListServicesResponse{ - ListServicesResponse: &rpb.ListServiceResponse{ + case *v1alphapb.ServerReflectionRequest_ListServices: + out.MessageResponse = &v1alphapb.ServerReflectionResponse_ListServicesResponse{ + ListServicesResponse: &v1alphapb.ListServiceResponse{ Service: s.listServices(), }, } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/regenerate.sh b/.ci/providerlint/vendor/google.golang.org/grpc/regenerate.sh index 99db79fafcf..a6f26c8ab0f 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/regenerate.sh +++ b/.ci/providerlint/vendor/google.golang.org/grpc/regenerate.sh @@ -57,7 +57,8 @@ LEGACY_SOURCES=( ${WORKDIR}/grpc-proto/grpc/health/v1/health.proto ${WORKDIR}/grpc-proto/grpc/lb/v1/load_balancer.proto profiling/proto/service.proto - reflection/grpc_reflection_v1alpha/reflection.proto + ${WORKDIR}/grpc-proto/grpc/reflection/v1alpha/reflection.proto + ${WORKDIR}/grpc-proto/grpc/reflection/v1/reflection.proto ) # Generates only the new gRPC Service symbols @@ -119,8 +120,4 @@ mv ${WORKDIR}/out/google.golang.org/grpc/lookup/grpc_lookup_v1/* ${WORKDIR}/out/ # see grpc_testing_not_regenerate/README.md for details. rm ${WORKDIR}/out/google.golang.org/grpc/reflection/grpc_testing_not_regenerate/*.pb.go -# grpc/testing does not have a go_package option. -mv ${WORKDIR}/out/grpc/testing/*.pb.go interop/grpc_testing/ -mv ${WORKDIR}/out/grpc/core/*.pb.go interop/grpc_testing/core/ - cp -R ${WORKDIR}/out/google.golang.org/grpc/* . 
diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/resolver/resolver.go b/.ci/providerlint/vendor/google.golang.org/grpc/resolver/resolver.go index 967cbc7373a..353c10b69a5 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/resolver/resolver.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/resolver/resolver.go @@ -22,12 +22,13 @@ package resolver import ( "context" + "fmt" "net" "net/url" + "strings" "google.golang.org/grpc/attributes" "google.golang.org/grpc/credentials" - "google.golang.org/grpc/internal/pretty" "google.golang.org/grpc/serviceconfig" ) @@ -40,8 +41,9 @@ var ( // TODO(bar) install dns resolver in init(){}. -// Register registers the resolver builder to the resolver map. b.Scheme will be -// used as the scheme registered with this builder. +// Register registers the resolver builder to the resolver map. b.Scheme will +// be used as the scheme registered with this builder. The registry is case +// sensitive, and schemes should not contain any uppercase characters. // // NOTE: this function must only be called during initialization time (i.e. in // an init() function), and is not thread-safe. If multiple Resolvers are @@ -122,7 +124,7 @@ type Address struct { Attributes *attributes.Attributes // BalancerAttributes contains arbitrary data about this address intended - // for consumption by the LB policy. These attribes do not affect SubConn + // for consumption by the LB policy. These attributes do not affect SubConn // creation, connection establishment, handshaking, etc. BalancerAttributes *attributes.Attributes @@ -149,7 +151,17 @@ func (a Address) Equal(o Address) bool { // String returns JSON formatted string representation of the address. 
func (a Address) String() string { - return pretty.ToJSON(a) + var sb strings.Builder + sb.WriteString(fmt.Sprintf("{Addr: %q, ", a.Addr)) + sb.WriteString(fmt.Sprintf("ServerName: %q, ", a.ServerName)) + if a.Attributes != nil { + sb.WriteString(fmt.Sprintf("Attributes: %v, ", a.Attributes.String())) + } + if a.BalancerAttributes != nil { + sb.WriteString(fmt.Sprintf("BalancerAttributes: %v", a.BalancerAttributes.String())) + } + sb.WriteString("}") + return sb.String() } // BuildOptions includes additional information for the builder to create @@ -202,6 +214,15 @@ type State struct { // gRPC to add new methods to this interface. type ClientConn interface { // UpdateState updates the state of the ClientConn appropriately. + // + // If an error is returned, the resolver should try to resolve the + // target again. The resolver should use a backoff timer to prevent + // overloading the server with requests. If a resolver is certain that + // reresolving will not change the result, e.g. because it is + // a watch-based resolver, returned errors can be ignored. + // + // If the resolved State is the same as the last reported one, calling + // UpdateState can be omitted. UpdateState(State) error // ReportError notifies the ClientConn that the Resolver encountered an // error. The ClientConn will notify the load balancer and begin calling @@ -247,9 +268,6 @@ type Target struct { Scheme string // Deprecated: use URL.Host instead. Authority string - // Deprecated: use URL.Path or URL.Opaque instead. The latter is set when - // the former is empty. - Endpoint string // URL contains the parsed dial target with an optional default scheme added // to it if the original dial target contained no scheme or contained an // unregistered scheme. Any query params specified in the original dial @@ -257,6 +275,24 @@ type Target struct { URL url.URL } +// Endpoint retrieves endpoint without leading "/" from either `URL.Path` +// or `URL.Opaque`. 
The latter is used when the former is empty. +func (t Target) Endpoint() string { + endpoint := t.URL.Path + if endpoint == "" { + endpoint = t.URL.Opaque + } + // For targets of the form "[scheme]://[authority]/endpoint, the endpoint + // value returned from url.Parse() contains a leading "/". Although this is + // in accordance with RFC 3986, we do not want to break existing resolver + // implementations which expect the endpoint without the leading "/". So, we + // end up stripping the leading "/" here. But this will result in an + // incorrect parsing for something like "unix:///path/to/socket". Since we + // own the "unix" resolver, we can workaround in the unix resolver by using + // the `URL` field. + return strings.TrimPrefix(endpoint, "/") +} + // Builder creates a resolver that will be used to watch name resolution updates. type Builder interface { // Build creates a new resolver for the given target. @@ -264,8 +300,10 @@ type Builder interface { // gRPC dial calls Build synchronously, and fails if the returned error is // not nil. Build(target Target, cc ClientConn, opts BuildOptions) (Resolver, error) - // Scheme returns the scheme supported by this resolver. - // Scheme is defined at https://github.com/grpc/grpc/blob/master/doc/naming.md. + // Scheme returns the scheme supported by this resolver. Scheme is defined + // at https://github.com/grpc/grpc/blob/master/doc/naming.md. The returned + // string should not contain uppercase characters, as they will not match + // the parsed target's scheme as defined in RFC 3986. 
Scheme() string } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/resolver_conn_wrapper.go b/.ci/providerlint/vendor/google.golang.org/grpc/resolver_conn_wrapper.go index 05a9d4e0bac..b408b3688f2 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/resolver_conn_wrapper.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/resolver_conn_wrapper.go @@ -19,11 +19,11 @@ package grpc import ( + "context" "strings" "sync" "google.golang.org/grpc/balancer" - "google.golang.org/grpc/credentials" "google.golang.org/grpc/internal/channelz" "google.golang.org/grpc/internal/grpcsync" "google.golang.org/grpc/internal/pretty" @@ -31,129 +31,192 @@ import ( "google.golang.org/grpc/serviceconfig" ) +// resolverStateUpdater wraps the single method used by ccResolverWrapper to +// report a state update from the actual resolver implementation. +type resolverStateUpdater interface { + updateResolverState(s resolver.State, err error) error +} + // ccResolverWrapper is a wrapper on top of cc for resolvers. // It implements resolver.ClientConn interface. type ccResolverWrapper struct { - cc *ClientConn - resolverMu sync.Mutex - resolver resolver.Resolver - done *grpcsync.Event - curState resolver.State + // The following fields are initialized when the wrapper is created and are + // read-only afterwards, and therefore can be accessed without a mutex. + cc resolverStateUpdater + channelzID *channelz.Identifier + ignoreServiceConfig bool + opts ccResolverWrapperOpts + serializer *grpcsync.CallbackSerializer // To serialize all incoming calls. + serializerCancel context.CancelFunc // To close the serializer, accessed only from close(). + + // All incoming (resolver --> gRPC) calls are guaranteed to execute in a + // mutually exclusive manner as they are scheduled on the serializer. + // Fields accessed *only* in these serializer callbacks, can therefore be + // accessed without a mutex. + curState resolver.State + + // mu guards access to the below fields. 
+ mu sync.Mutex + closed bool + resolver resolver.Resolver // Accessed only from outgoing calls. +} - incomingMu sync.Mutex // Synchronizes all the incoming calls. +// ccResolverWrapperOpts wraps the arguments to be passed when creating a new +// ccResolverWrapper. +type ccResolverWrapperOpts struct { + target resolver.Target // User specified dial target to resolve. + builder resolver.Builder // Resolver builder to use. + bOpts resolver.BuildOptions // Resolver build options to use. + channelzID *channelz.Identifier // Channelz identifier for the channel. } // newCCResolverWrapper uses the resolver.Builder to build a Resolver and // returns a ccResolverWrapper object which wraps the newly built resolver. -func newCCResolverWrapper(cc *ClientConn, rb resolver.Builder) (*ccResolverWrapper, error) { +func newCCResolverWrapper(cc resolverStateUpdater, opts ccResolverWrapperOpts) (*ccResolverWrapper, error) { + ctx, cancel := context.WithCancel(context.Background()) ccr := &ccResolverWrapper{ - cc: cc, - done: grpcsync.NewEvent(), - } - - var credsClone credentials.TransportCredentials - if creds := cc.dopts.copts.TransportCredentials; creds != nil { - credsClone = creds.Clone() - } - rbo := resolver.BuildOptions{ - DisableServiceConfig: cc.dopts.disableServiceConfig, - DialCreds: credsClone, - CredsBundle: cc.dopts.copts.CredsBundle, - Dialer: cc.dopts.copts.Dialer, - } - - var err error - // We need to hold the lock here while we assign to the ccr.resolver field - // to guard against a data race caused by the following code path, - // rb.Build-->ccr.ReportError-->ccr.poll-->ccr.resolveNow, would end up - // accessing ccr.resolver which is being assigned here. 
- ccr.resolverMu.Lock() - defer ccr.resolverMu.Unlock() - ccr.resolver, err = rb.Build(cc.parsedTarget, ccr, rbo) + cc: cc, + channelzID: opts.channelzID, + ignoreServiceConfig: opts.bOpts.DisableServiceConfig, + opts: opts, + serializer: grpcsync.NewCallbackSerializer(ctx), + serializerCancel: cancel, + } + + // Cannot hold the lock at build time because the resolver can send an + // update or error inline and these incoming calls grab the lock to schedule + // a callback in the serializer. + r, err := opts.builder.Build(opts.target, ccr, opts.bOpts) if err != nil { + cancel() return nil, err } + + // Any error reported by the resolver at build time that leads to a + // re-resolution request from the balancer is dropped by grpc until we + // return from this function. So, we don't have to handle pending resolveNow + // requests here. + ccr.mu.Lock() + ccr.resolver = r + ccr.mu.Unlock() + return ccr, nil } func (ccr *ccResolverWrapper) resolveNow(o resolver.ResolveNowOptions) { - ccr.resolverMu.Lock() - if !ccr.done.HasFired() { - ccr.resolver.ResolveNow(o) + ccr.mu.Lock() + defer ccr.mu.Unlock() + + // ccr.resolver field is set only after the call to Build() returns. But in + // the process of building, the resolver may send an error update which when + // propagated to the balancer may result in a re-resolution request. + if ccr.closed || ccr.resolver == nil { + return } - ccr.resolverMu.Unlock() + ccr.resolver.ResolveNow(o) } func (ccr *ccResolverWrapper) close() { - ccr.resolverMu.Lock() - ccr.resolver.Close() - ccr.done.Fire() - ccr.resolverMu.Unlock() + ccr.mu.Lock() + if ccr.closed { + ccr.mu.Unlock() + return + } + + channelz.Info(logger, ccr.channelzID, "Closing the name resolver") + + // Close the serializer to ensure that no more calls from the resolver are + // handled, before actually closing the resolver. + ccr.serializerCancel() + ccr.closed = true + r := ccr.resolver + ccr.mu.Unlock() + + // Give enqueued callbacks a chance to finish. 
+ <-ccr.serializer.Done + + // Spawn a goroutine to close the resolver (since it may block trying to + // cleanup all allocated resources) and return early. + go r.Close() +} + +// serializerScheduleLocked is a convenience method to schedule a function to be +// run on the serializer while holding ccr.mu. +func (ccr *ccResolverWrapper) serializerScheduleLocked(f func(context.Context)) { + ccr.mu.Lock() + ccr.serializer.Schedule(f) + ccr.mu.Unlock() } +// UpdateState is called by resolver implementations to report new state to gRPC +// which includes addresses and service config. func (ccr *ccResolverWrapper) UpdateState(s resolver.State) error { - ccr.incomingMu.Lock() - defer ccr.incomingMu.Unlock() - if ccr.done.HasFired() { + errCh := make(chan error, 1) + ok := ccr.serializer.Schedule(func(context.Context) { + ccr.addChannelzTraceEvent(s) + ccr.curState = s + if err := ccr.cc.updateResolverState(ccr.curState, nil); err == balancer.ErrBadResolverState { + errCh <- balancer.ErrBadResolverState + return + } + errCh <- nil + }) + if !ok { + // The only time when Schedule() fail to add the callback to the + // serializer is when the serializer is closed, and this happens only + // when the resolver wrapper is closed. return nil } - ccr.addChannelzTraceEvent(s) - ccr.curState = s - if err := ccr.cc.updateResolverState(ccr.curState, nil); err == balancer.ErrBadResolverState { - return balancer.ErrBadResolverState - } - return nil + return <-errCh } +// ReportError is called by resolver implementations to report errors +// encountered during name resolution to gRPC. 
func (ccr *ccResolverWrapper) ReportError(err error) { - ccr.incomingMu.Lock() - defer ccr.incomingMu.Unlock() - if ccr.done.HasFired() { - return - } - channelz.Warningf(logger, ccr.cc.channelzID, "ccResolverWrapper: reporting error to cc: %v", err) - ccr.cc.updateResolverState(resolver.State{}, err) + ccr.serializerScheduleLocked(func(_ context.Context) { + channelz.Warningf(logger, ccr.channelzID, "ccResolverWrapper: reporting error to cc: %v", err) + ccr.cc.updateResolverState(resolver.State{}, err) + }) } -// NewAddress is called by the resolver implementation to send addresses to gRPC. +// NewAddress is called by the resolver implementation to send addresses to +// gRPC. func (ccr *ccResolverWrapper) NewAddress(addrs []resolver.Address) { - ccr.incomingMu.Lock() - defer ccr.incomingMu.Unlock() - if ccr.done.HasFired() { - return - } - ccr.addChannelzTraceEvent(resolver.State{Addresses: addrs, ServiceConfig: ccr.curState.ServiceConfig}) - ccr.curState.Addresses = addrs - ccr.cc.updateResolverState(ccr.curState, nil) + ccr.serializerScheduleLocked(func(_ context.Context) { + ccr.addChannelzTraceEvent(resolver.State{Addresses: addrs, ServiceConfig: ccr.curState.ServiceConfig}) + ccr.curState.Addresses = addrs + ccr.cc.updateResolverState(ccr.curState, nil) + }) } // NewServiceConfig is called by the resolver implementation to send service // configs to gRPC. 
func (ccr *ccResolverWrapper) NewServiceConfig(sc string) { - ccr.incomingMu.Lock() - defer ccr.incomingMu.Unlock() - if ccr.done.HasFired() { - return - } - channelz.Infof(logger, ccr.cc.channelzID, "ccResolverWrapper: got new service config: %s", sc) - if ccr.cc.dopts.disableServiceConfig { - channelz.Info(logger, ccr.cc.channelzID, "Service config lookups disabled; ignoring config") - return - } - scpr := parseServiceConfig(sc) - if scpr.Err != nil { - channelz.Warningf(logger, ccr.cc.channelzID, "ccResolverWrapper: error parsing service config: %v", scpr.Err) - return - } - ccr.addChannelzTraceEvent(resolver.State{Addresses: ccr.curState.Addresses, ServiceConfig: scpr}) - ccr.curState.ServiceConfig = scpr - ccr.cc.updateResolverState(ccr.curState, nil) + ccr.serializerScheduleLocked(func(_ context.Context) { + channelz.Infof(logger, ccr.channelzID, "ccResolverWrapper: got new service config: %s", sc) + if ccr.ignoreServiceConfig { + channelz.Info(logger, ccr.channelzID, "Service config lookups disabled; ignoring config") + return + } + scpr := parseServiceConfig(sc) + if scpr.Err != nil { + channelz.Warningf(logger, ccr.channelzID, "ccResolverWrapper: error parsing service config: %v", scpr.Err) + return + } + ccr.addChannelzTraceEvent(resolver.State{Addresses: ccr.curState.Addresses, ServiceConfig: scpr}) + ccr.curState.ServiceConfig = scpr + ccr.cc.updateResolverState(ccr.curState, nil) + }) } +// ParseServiceConfig is called by resolver implementations to parse a JSON +// representation of the service config. func (ccr *ccResolverWrapper) ParseServiceConfig(scJSON string) *serviceconfig.ParseResult { return parseServiceConfig(scJSON) } +// addChannelzTraceEvent adds a channelz trace event containing the new +// state received from resolver implementations. 
func (ccr *ccResolverWrapper) addChannelzTraceEvent(s resolver.State) { var updates []string var oldSC, newSC *ServiceConfig @@ -172,5 +235,5 @@ func (ccr *ccResolverWrapper) addChannelzTraceEvent(s resolver.State) { } else if len(ccr.curState.Addresses) == 0 && len(s.Addresses) > 0 { updates = append(updates, "resolver returned new addresses") } - channelz.Infof(logger, ccr.cc.channelzID, "Resolver state updated: %s (%v)", pretty.ToJSON(s), strings.Join(updates, "; ")) + channelz.Infof(logger, ccr.channelzID, "Resolver state updated: %s (%v)", pretty.ToJSON(s), strings.Join(updates, "; ")) } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/rpc_util.go b/.ci/providerlint/vendor/google.golang.org/grpc/rpc_util.go index 934fc1aa015..2030736a306 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/rpc_util.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/rpc_util.go @@ -25,7 +25,6 @@ import ( "encoding/binary" "fmt" "io" - "io/ioutil" "math" "strings" "sync" @@ -77,7 +76,7 @@ func NewGZIPCompressorWithLevel(level int) (Compressor, error) { return &gzipCompressor{ pool: sync.Pool{ New: func() interface{} { - w, err := gzip.NewWriterLevel(ioutil.Discard, level) + w, err := gzip.NewWriterLevel(io.Discard, level) if err != nil { panic(err) } @@ -143,7 +142,7 @@ func (d *gzipDecompressor) Do(r io.Reader) ([]byte, error) { z.Close() d.pool.Put(z) }() - return ioutil.ReadAll(z) + return io.ReadAll(z) } func (d *gzipDecompressor) Type() string { @@ -160,6 +159,7 @@ type callInfo struct { contentSubtype string codec baseCodec maxRetryRPCBufferSize int + onFinish []func(err error) } func defaultCallInfo() *callInfo { @@ -296,8 +296,44 @@ func (o FailFastCallOption) before(c *callInfo) error { } func (o FailFastCallOption) after(c *callInfo, attempt *csAttempt) {} +// OnFinish returns a CallOption that configures a callback to be called when +// the call completes. The error passed to the callback is the status of the +// RPC, and may be nil. 
The onFinish callback provided will only be called once +// by gRPC. This is mainly intended for use by streaming interceptors, to be +// notified when the RPC completes along with information about the status of +// the RPC. +// +// # Experimental +// +// Notice: This API is EXPERIMENTAL and may be changed or removed in a +// later release. +func OnFinish(onFinish func(err error)) CallOption { + return OnFinishCallOption{ + OnFinish: onFinish, + } +} + +// OnFinishCallOption is a CallOption that indicates a callback to be called when +// the call completes. +// +// # Experimental +// +// Notice: This type is EXPERIMENTAL and may be changed or removed in a +// later release. +type OnFinishCallOption struct { + OnFinish func(error) +} + +func (o OnFinishCallOption) before(c *callInfo) error { + c.onFinish = append(c.onFinish, o.OnFinish) + return nil +} + +func (o OnFinishCallOption) after(c *callInfo, attempt *csAttempt) {} + // MaxCallRecvMsgSize returns a CallOption which sets the maximum message size -// in bytes the client can receive. +// in bytes the client can receive. If this is not set, gRPC uses the default +// 4MB. func MaxCallRecvMsgSize(bytes int) CallOption { return MaxRecvMsgSizeCallOption{MaxRecvMsgSize: bytes} } @@ -320,7 +356,8 @@ func (o MaxRecvMsgSizeCallOption) before(c *callInfo) error { func (o MaxRecvMsgSizeCallOption) after(c *callInfo, attempt *csAttempt) {} // MaxCallSendMsgSize returns a CallOption which sets the maximum message size -// in bytes the client can send. +// in bytes the client can send. If this is not set, gRPC uses the default +// `math.MaxInt32`.
func MaxCallSendMsgSize(bytes int) CallOption { return MaxSendMsgSizeCallOption{MaxSendMsgSize: bytes} } @@ -657,12 +694,13 @@ func msgHeader(data, compData []byte) (hdr []byte, payload []byte) { func outPayload(client bool, msg interface{}, data, payload []byte, t time.Time) *stats.OutPayload { return &stats.OutPayload{ - Client: client, - Payload: msg, - Data: data, - Length: len(data), - WireLength: len(payload) + headerLen, - SentTime: t, + Client: client, + Payload: msg, + Data: data, + Length: len(data), + WireLength: len(payload) + headerLen, + CompressedLength: len(payload), + SentTime: t, } } @@ -683,7 +721,7 @@ func checkRecvPayload(pf payloadFormat, recvCompress string, haveCompressor bool } type payloadInfo struct { - wireLength int // The compressed length got from wire. + compressedLength int // The compressed length got from wire. uncompressedBytes []byte } @@ -693,7 +731,7 @@ func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxRecei return nil, err } if payInfo != nil { - payInfo.wireLength = len(d) + payInfo.compressedLength = len(d) } if st := checkRecvPayload(pf, s.RecvCompress(), compressor != nil || dc != nil); st != nil { @@ -711,7 +749,7 @@ func recvAndDecompress(p *parser, s *transport.Stream, dc Decompressor, maxRecei d, size, err = decompress(compressor, d, maxReceiveMessageSize) } if err != nil { - return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err) + return nil, status.Errorf(codes.Internal, "grpc: failed to decompress the received message: %v", err) } if size > maxReceiveMessageSize { // TODO: Revisit the error code. Currently keep it consistent with java @@ -746,7 +784,7 @@ func decompress(compressor encoding.Compressor, d []byte, maxReceiveMessageSize } // Read from LimitReader with limit max+1. So if the underlying // reader is over limit, the result will be bigger than max. 
- d, err = ioutil.ReadAll(io.LimitReader(dcReader, int64(maxReceiveMessageSize)+1)) + d, err = io.ReadAll(io.LimitReader(dcReader, int64(maxReceiveMessageSize)+1)) return d, len(d), err } @@ -759,7 +797,7 @@ func recv(p *parser, c baseCodec, s *transport.Stream, dc Decompressor, m interf return err } if err := c.Unmarshal(d, m); err != nil { - return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message %v", err) + return status.Errorf(codes.Internal, "grpc: failed to unmarshal the received message: %v", err) } if payInfo != nil { payInfo.uncompressedBytes = d diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/server.go b/.ci/providerlint/vendor/google.golang.org/grpc/server.go index f4dde72b41f..81969e7c15a 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/server.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/server.go @@ -43,8 +43,8 @@ import ( "google.golang.org/grpc/internal" "google.golang.org/grpc/internal/binarylog" "google.golang.org/grpc/internal/channelz" - "google.golang.org/grpc/internal/grpcrand" "google.golang.org/grpc/internal/grpcsync" + "google.golang.org/grpc/internal/grpcutil" "google.golang.org/grpc/internal/transport" "google.golang.org/grpc/keepalive" "google.golang.org/grpc/metadata" @@ -74,10 +74,10 @@ func init() { srv.drainServerTransports(addr) } internal.AddGlobalServerOptions = func(opt ...ServerOption) { - extraServerOptions = append(extraServerOptions, opt...) + globalServerOptions = append(globalServerOptions, opt...) 
} internal.ClearGlobalServerOptions = func() { - extraServerOptions = nil + globalServerOptions = nil } internal.BinaryLogger = binaryLogger internal.JoinServerOptions = newJoinServerOption @@ -145,7 +145,7 @@ type Server struct { channelzID *channelz.Identifier czData *channelzData - serverWorkerChannels []chan *serverWorkerData + serverWorkerChannel chan *serverWorkerData } type serverOptions struct { @@ -183,7 +183,7 @@ var defaultServerOptions = serverOptions{ writeBufferSize: defaultWriteBufSize, readBufferSize: defaultReadBufSize, } -var extraServerOptions []ServerOption +var globalServerOptions []ServerOption // A ServerOption sets options such as credentials, codec and keepalive parameters, etc. type ServerOption interface { @@ -233,10 +233,11 @@ func newJoinServerOption(opts ...ServerOption) ServerOption { return &joinServerOption{opts: opts} } -// WriteBufferSize determines how much data can be batched before doing a write on the wire. -// The corresponding memory allocation for this buffer will be twice the size to keep syscalls low. -// The default value for this buffer is 32KB. -// Zero will disable the write buffer such that each write will be on underlying connection. +// WriteBufferSize determines how much data can be batched before doing a write +// on the wire. The corresponding memory allocation for this buffer will be +// twice the size to keep syscalls low. The default value for this buffer is +// 32KB. Zero or negative values will disable the write buffer such that each +// write will be on underlying connection. // Note: A Send call may not directly translate to a write. func WriteBufferSize(s int) ServerOption { return newFuncServerOption(func(o *serverOptions) { @@ -244,11 +245,10 @@ func WriteBufferSize(s int) ServerOption { }) } -// ReadBufferSize lets you set the size of read buffer, this determines how much data can be read at most -// for one read syscall. -// The default value for this buffer is 32KB. 
-// Zero will disable read buffer for a connection so data framer can access the underlying -// conn directly. +// ReadBufferSize lets you set the size of read buffer, this determines how much +// data can be read at most for one read syscall. The default value for this +// buffer is 32KB. Zero or negative values will disable read buffer for a +// connection so data framer can access the underlying conn directly. func ReadBufferSize(s int) ServerOption { return newFuncServerOption(func(o *serverOptions) { o.readBufferSize = s @@ -560,47 +560,45 @@ func NumStreamWorkers(numServerWorkers uint32) ServerOption { const serverWorkerResetThreshold = 1 << 16 // serverWorkers blocks on a *transport.Stream channel forever and waits for -// data to be fed by serveStreams. This allows different requests to be +// data to be fed by serveStreams. This allows multiple requests to be // processed by the same goroutine, removing the need for expensive stack // re-allocations (see the runtime.morestack problem [1]). // // [1] https://github.com/golang/go/issues/18138 -func (s *Server) serverWorker(ch chan *serverWorkerData) { - // To make sure all server workers don't reset at the same time, choose a - // random number of iterations before resetting. 
- threshold := serverWorkerResetThreshold + grpcrand.Intn(serverWorkerResetThreshold) - for completed := 0; completed < threshold; completed++ { - data, ok := <-ch +func (s *Server) serverWorker() { + for completed := 0; completed < serverWorkerResetThreshold; completed++ { + data, ok := <-s.serverWorkerChannel if !ok { return } - s.handleStream(data.st, data.stream, s.traceInfo(data.st, data.stream)) - data.wg.Done() + s.handleSingleStream(data) } - go s.serverWorker(ch) + go s.serverWorker() } -// initServerWorkers creates worker goroutines and channels to process incoming +func (s *Server) handleSingleStream(data *serverWorkerData) { + defer data.wg.Done() + s.handleStream(data.st, data.stream, s.traceInfo(data.st, data.stream)) +} + +// initServerWorkers creates worker goroutines and a channel to process incoming // connections to reduce the time spent overall on runtime.morestack. func (s *Server) initServerWorkers() { - s.serverWorkerChannels = make([]chan *serverWorkerData, s.opts.numServerWorkers) + s.serverWorkerChannel = make(chan *serverWorkerData) for i := uint32(0); i < s.opts.numServerWorkers; i++ { - s.serverWorkerChannels[i] = make(chan *serverWorkerData) - go s.serverWorker(s.serverWorkerChannels[i]) + go s.serverWorker() } } func (s *Server) stopServerWorkers() { - for i := uint32(0); i < s.opts.numServerWorkers; i++ { - close(s.serverWorkerChannels[i]) - } + close(s.serverWorkerChannel) } // NewServer creates a gRPC server which has no service registered and has not // started to accept requests yet. 
func NewServer(opt ...ServerOption) *Server { opts := defaultServerOptions - for _, o := range extraServerOptions { + for _, o := range globalServerOptions { o.apply(&opts) } for _, o := range opt { @@ -897,7 +895,7 @@ func (s *Server) drainServerTransports(addr string) { s.mu.Lock() conns := s.conns[addr] for st := range conns { - st.Drain() + st.Drain("") } s.mu.Unlock() } @@ -942,29 +940,24 @@ func (s *Server) newHTTP2Transport(c net.Conn) transport.ServerTransport { } func (s *Server) serveStreams(st transport.ServerTransport) { - defer st.Close() + defer st.Close(errors.New("finished serving streams for the server transport")) var wg sync.WaitGroup - var roundRobinCounter uint32 st.HandleStreams(func(stream *transport.Stream) { wg.Add(1) if s.opts.numServerWorkers > 0 { data := &serverWorkerData{st: st, wg: &wg, stream: stream} select { - case s.serverWorkerChannels[atomic.AddUint32(&roundRobinCounter, 1)%s.opts.numServerWorkers] <- data: + case s.serverWorkerChannel <- data: + return default: // If all stream workers are busy, fallback to the default code path. - go func() { - s.handleStream(st, stream, s.traceInfo(st, stream)) - wg.Done() - }() } - } else { - go func() { - defer wg.Done() - s.handleStream(st, stream, s.traceInfo(st, stream)) - }() } + go func() { + defer wg.Done() + s.handleStream(st, stream, s.traceInfo(st, stream)) + }() }, func(ctx context.Context, method string) context.Context { if !EnableTracing { return ctx @@ -1008,7 +1001,8 @@ var _ http.Handler = (*Server)(nil) func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) { st, err := transport.NewServerHandlerTransport(w, r, s.opts.statsHandlers) if err != nil { - http.Error(w, err.Error(), http.StatusInternalServerError) + // Errors returned from transport.NewServerHandlerTransport have + // already been written to w. 
return } if !s.addConn(listenerAddressForServeHTTP, st) { @@ -1046,13 +1040,13 @@ func (s *Server) addConn(addr string, st transport.ServerTransport) bool { s.mu.Lock() defer s.mu.Unlock() if s.conns == nil { - st.Close() + st.Close(errors.New("Server.addConn called when server has already been stopped")) return false } if s.drain { // Transport added after we drained our existing conns: drain it // immediately. - st.Drain() + st.Drain("") } if s.conns[addr] == nil { @@ -1150,21 +1144,16 @@ func chainUnaryServerInterceptors(s *Server) { func chainUnaryInterceptors(interceptors []UnaryServerInterceptor) UnaryServerInterceptor { return func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (interface{}, error) { - // the struct ensures the variables are allocated together, rather than separately, since we - // know they should be garbage collected together. This saves 1 allocation and decreases - // time/call by about 10% on the microbenchmark. - var state struct { - i int - next UnaryHandler - } - state.next = func(ctx context.Context, req interface{}) (interface{}, error) { - if state.i == len(interceptors)-1 { - return interceptors[state.i](ctx, req, info, handler) - } - state.i++ - return interceptors[state.i-1](ctx, req, info, state.next) - } - return state.next(ctx, req) + return interceptors[0](ctx, req, info, getChainUnaryHandler(interceptors, 0, info, handler)) + } +} + +func getChainUnaryHandler(interceptors []UnaryServerInterceptor, curr int, info *UnaryServerInfo, finalHandler UnaryHandler) UnaryHandler { + if curr == len(interceptors)-1 { + return finalHandler + } + return func(ctx context.Context, req interface{}) (interface{}, error) { + return interceptors[curr+1](ctx, req, info, getChainUnaryHandler(interceptors, curr+1, info, finalHandler)) } } @@ -1256,7 +1245,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. 
logEntry.PeerAddr = peer.Addr } for _, binlog := range binlogs { - binlog.Log(logEntry) + binlog.Log(ctx, logEntry) } } @@ -1267,6 +1256,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. var comp, decomp encoding.Compressor var cp Compressor var dc Decompressor + var sendCompressorName string // If dc is set and matches the stream's compression, use it. Otherwise, try // to find a matching registered compressor for decomp. @@ -1287,12 +1277,18 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. // NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686. if s.opts.cp != nil { cp = s.opts.cp - stream.SetSendCompress(cp.Type()) + sendCompressorName = cp.Type() } else if rc := stream.RecvCompress(); rc != "" && rc != encoding.Identity { // Legacy compressor not specified; attempt to respond with same encoding. comp = encoding.GetCompressor(rc) if comp != nil { - stream.SetSendCompress(rc) + sendCompressorName = comp.Name() + } + } + + if sendCompressorName != "" { + if err := stream.SetSendCompress(sendCompressorName); err != nil { + return status.Errorf(codes.Internal, "grpc: failed to set send compressor: %v", err) } } @@ -1303,7 +1299,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. d, err := recvAndDecompress(&parser{r: stream}, stream, dc, s.opts.maxReceiveMessageSize, payInfo, decomp) if err != nil { if e := t.WriteStatus(stream, status.Convert(err)); e != nil { - channelz.Warningf(logger, s.channelzID, "grpc: Server.processUnaryRPC failed to write status %v", e) + channelz.Warningf(logger, s.channelzID, "grpc: Server.processUnaryRPC failed to write status: %v", e) } return err } @@ -1316,11 +1312,12 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. 
} for _, sh := range shs { sh.HandleRPC(stream.Context(), &stats.InPayload{ - RecvTime: time.Now(), - Payload: v, - WireLength: payInfo.wireLength + headerLen, - Data: d, - Length: len(d), + RecvTime: time.Now(), + Payload: v, + Length: len(d), + WireLength: payInfo.compressedLength + headerLen, + CompressedLength: payInfo.compressedLength, + Data: d, }) } if len(binlogs) != 0 { @@ -1328,7 +1325,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Message: d, } for _, binlog := range binlogs { - binlog.Log(cm) + binlog.Log(stream.Context(), cm) } } if trInfo != nil { @@ -1361,7 +1358,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Header: h, } for _, binlog := range binlogs { - binlog.Log(sh) + binlog.Log(stream.Context(), sh) } } st := &binarylog.ServerTrailer{ @@ -1369,7 +1366,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Err: appErr, } for _, binlog := range binlogs { - binlog.Log(st) + binlog.Log(stream.Context(), st) } } return appErr @@ -1379,6 +1376,11 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. } opts := &transport.Options{Last: true} + // Server handler could have set new compressor by calling SetSendCompressor. + // In case it is set, we need to use it for compressing outbound message. + if stream.SendCompress() != sendCompressorName { + comp = encoding.GetCompressor(stream.SendCompress()) + } if err := s.sendResponse(t, stream, reply, cp, opts, comp); err != nil { if err == io.EOF { // The entire stream is done (for unary RPC only). @@ -1406,8 +1408,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Err: appErr, } for _, binlog := range binlogs { - binlog.Log(sh) - binlog.Log(st) + binlog.Log(stream.Context(), sh) + binlog.Log(stream.Context(), st) } } return err @@ -1421,8 +1423,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. 
Message: reply, } for _, binlog := range binlogs { - binlog.Log(sh) - binlog.Log(sm) + binlog.Log(stream.Context(), sh) + binlog.Log(stream.Context(), sm) } } if channelz.IsOn() { @@ -1434,17 +1436,16 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. // TODO: Should we be logging if writing status failed here, like above? // Should the logging be in WriteStatus? Should we ignore the WriteStatus // error or allow the stats handler to see it? - err = t.WriteStatus(stream, statusOK) if len(binlogs) != 0 { st := &binarylog.ServerTrailer{ Trailer: stream.Trailer(), Err: appErr, } for _, binlog := range binlogs { - binlog.Log(st) + binlog.Log(stream.Context(), st) } } - return err + return t.WriteStatus(stream, statusOK) } // chainStreamServerInterceptors chains all stream server interceptors into one. @@ -1470,21 +1471,16 @@ func chainStreamServerInterceptors(s *Server) { func chainStreamInterceptors(interceptors []StreamServerInterceptor) StreamServerInterceptor { return func(srv interface{}, ss ServerStream, info *StreamServerInfo, handler StreamHandler) error { - // the struct ensures the variables are allocated together, rather than separately, since we - // know they should be garbage collected together. This saves 1 allocation and decreases - // time/call by about 10% on the microbenchmark. 
- var state struct { - i int - next StreamHandler - } - state.next = func(srv interface{}, ss ServerStream) error { - if state.i == len(interceptors)-1 { - return interceptors[state.i](srv, ss, info, handler) - } - state.i++ - return interceptors[state.i-1](srv, ss, info, state.next) - } - return state.next(srv, ss) + return interceptors[0](srv, ss, info, getChainStreamHandler(interceptors, 0, info, handler)) + } +} + +func getChainStreamHandler(interceptors []StreamServerInterceptor, curr int, info *StreamServerInfo, finalHandler StreamHandler) StreamHandler { + if curr == len(interceptors)-1 { + return finalHandler + } + return func(srv interface{}, stream ServerStream) error { + return interceptors[curr+1](srv, stream, info, getChainStreamHandler(interceptors, curr+1, info, finalHandler)) } } @@ -1583,7 +1579,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp logEntry.PeerAddr = peer.Addr } for _, binlog := range ss.binlogs { - binlog.Log(logEntry) + binlog.Log(stream.Context(), logEntry) } } @@ -1606,12 +1602,18 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp // NOTE: this needs to be ahead of all handling, https://github.com/grpc/grpc-go/issues/686. if s.opts.cp != nil { ss.cp = s.opts.cp - stream.SetSendCompress(s.opts.cp.Type()) + ss.sendCompressorName = s.opts.cp.Type() } else if rc := stream.RecvCompress(); rc != "" && rc != encoding.Identity { // Legacy compressor not specified; attempt to respond with same encoding. 
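The replacement of the mutable-`state` closure with `getChainUnaryHandler`/`getChainStreamHandler` above uses the same recursion in both the unary and stream paths: interceptor `i` receives a lazily built handler that invokes interceptor `i+1`, bottoming out at the final handler. A stripped-down version with placeholder handler types (`Handler`/`Interceptor` here are simplified stand-ins for `UnaryHandler`/`UnaryServerInterceptor`) shows the mechanism:

```go
package main

import "fmt"

// Handler and Interceptor are simplified stand-ins for grpc's
// UnaryHandler and UnaryServerInterceptor.
type Handler func(req string) string
type Interceptor func(req string, next Handler) string

// getChainHandler mirrors getChainUnaryHandler: the handler for position
// curr+1 is constructed lazily each time interceptor curr calls next.
func getChainHandler(ics []Interceptor, curr int, final Handler) Handler {
	if curr == len(ics)-1 {
		return final
	}
	return func(req string) string {
		return ics[curr+1](req, getChainHandler(ics, curr+1, final))
	}
}

// chain mirrors chainUnaryInterceptors: the first interceptor is outermost.
func chain(ics []Interceptor, final Handler) Handler {
	if len(ics) == 0 {
		return final
	}
	return func(req string) string {
		return ics[0](req, getChainHandler(ics, 0, final))
	}
}

func main() {
	// tag wraps the downstream result so invocation order is visible.
	tag := func(name string) Interceptor {
		return func(req string, next Handler) string {
			return name + "(" + next(req) + ")"
		}
	}
	h := chain([]Interceptor{tag("a"), tag("b")}, func(req string) string { return req })
	fmt.Println(h("req")) // first registered interceptor is outermost
}
```

The recursive form trades the old single-allocation trick for simpler code; each interceptor still sees a plain `next` handler with no shared mutable index.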
ss.comp = encoding.GetCompressor(rc) if ss.comp != nil { - stream.SetSendCompress(rc) + ss.sendCompressorName = rc + } + } + + if ss.sendCompressorName != "" { + if err := stream.SetSendCompress(ss.sendCompressorName); err != nil { + return status.Errorf(codes.Internal, "grpc: failed to set send compressor: %v", err) } } @@ -1649,16 +1651,16 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp ss.trInfo.tr.SetError() ss.mu.Unlock() } - t.WriteStatus(ss.s, appStatus) if len(ss.binlogs) != 0 { st := &binarylog.ServerTrailer{ Trailer: ss.s.Trailer(), Err: appErr, } for _, binlog := range ss.binlogs { - binlog.Log(st) + binlog.Log(stream.Context(), st) } } + t.WriteStatus(ss.s, appStatus) // TODO: Should we log an error from WriteStatus here and below? return appErr } @@ -1667,17 +1669,16 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp ss.trInfo.tr.LazyLog(stringer("OK"), false) ss.mu.Unlock() } - err = t.WriteStatus(ss.s, statusOK) if len(ss.binlogs) != 0 { st := &binarylog.ServerTrailer{ Trailer: ss.s.Trailer(), Err: appErr, } for _, binlog := range ss.binlogs { - binlog.Log(st) + binlog.Log(stream.Context(), st) } } - return err + return t.WriteStatus(ss.s, statusOK) } func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream, trInfo *traceInfo) { @@ -1819,7 +1820,7 @@ func (s *Server) Stop() { } for _, cs := range conns { for st := range cs { - st.Close() + st.Close(errors.New("Server.Stop called")) } } if s.opts.numServerWorkers > 0 { @@ -1855,7 +1856,7 @@ func (s *Server) GracefulStop() { if !s.drain { for _, conns := range s.conns { for st := range conns { - st.Drain() + st.Drain("graceful_stop") } } s.drain = true @@ -1944,6 +1945,60 @@ func SendHeader(ctx context.Context, md metadata.MD) error { return nil } +// SetSendCompressor sets a compressor for outbound messages from the server. 
+// It must not be called after any event that causes headers to be sent +// (see ServerStream.SetHeader for the complete list). Provided compressor is +// used when below conditions are met: +// +// - compressor is registered via encoding.RegisterCompressor +// - compressor name must exist in the client advertised compressor names +// sent in grpc-accept-encoding header. Use ClientSupportedCompressors to +// get client supported compressor names. +// +// The context provided must be the context passed to the server's handler. +// It must be noted that compressor name encoding.Identity disables the +// outbound compression. +// By default, server messages will be sent using the same compressor with +// which request messages were sent. +// +// It is not safe to call SetSendCompressor concurrently with SendHeader and +// SendMsg. +// +// # Experimental +// +// Notice: This function is EXPERIMENTAL and may be changed or removed in a +// later release. +func SetSendCompressor(ctx context.Context, name string) error { + stream, ok := ServerTransportStreamFromContext(ctx).(*transport.Stream) + if !ok || stream == nil { + return fmt.Errorf("failed to fetch the stream from the given context") + } + + if err := validateSendCompressor(name, stream.ClientAdvertisedCompressors()); err != nil { + return fmt.Errorf("unable to set send compressor: %w", err) + } + + return stream.SetSendCompress(name) +} + +// ClientSupportedCompressors returns compressor names advertised by the client +// via grpc-accept-encoding header. +// +// The context provided must be the context passed to the server's handler. +// +// # Experimental +// +// Notice: This function is EXPERIMENTAL and may be changed or removed in a +// later release. 
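`SetSendCompressor` above defers to `validateSendCompressor` (added at the end of this file's hunk): `identity` is always accepted, any other name must be registered locally and must appear in the client's comma-separated `grpc-accept-encoding` list. The sketch below reproduces that check in isolation; the `registered` map is a stand-in for the real compressor registry (`encoding.RegisterCompressor` / `grpcutil.IsCompressorNameRegistered`), and the entries in it are made up for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// registered stands in for the server-side compressor registry.
var registered = map[string]bool{"gzip": true, "snappy": true}

// identity mirrors encoding.Identity: the "no compression" name.
const identity = "identity"

// validateSendCompressor mirrors the logic added in server.go: identity is
// always allowed; otherwise the name must be registered locally and
// advertised by the client in grpc-accept-encoding.
func validateSendCompressor(name, clientCompressors string) error {
	if name == identity {
		return nil
	}
	if !registered[name] {
		return fmt.Errorf("compressor not registered %q", name)
	}
	for _, c := range strings.Split(clientCompressors, ",") {
		if c == name {
			return nil // client advertised support
		}
	}
	return fmt.Errorf("client does not support compressor %q", name)
}

func main() {
	fmt.Println(validateSendCompressor("gzip", "gzip,identity"))
	fmt.Println(validateSendCompressor("zstd", "gzip,identity"))
	fmt.Println(validateSendCompressor("snappy", "gzip,identity"))
}
```

Checking both sides (local registry and client advertisement) is what lets a handler switch compressors mid-stream without producing payloads the peer cannot decode.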
+func ClientSupportedCompressors(ctx context.Context) ([]string, error) { + stream, ok := ServerTransportStreamFromContext(ctx).(*transport.Stream) + if !ok || stream == nil { + return nil, fmt.Errorf("failed to fetch the stream from the given context %v", ctx) + } + + return strings.Split(stream.ClientAdvertisedCompressors(), ","), nil +} + // SetTrailer sets the trailer metadata that will be sent when an RPC returns. // When called more than once, all the provided metadata will be merged. // @@ -1978,3 +2033,22 @@ type channelzServer struct { func (c *channelzServer) ChannelzMetric() *channelz.ServerInternalMetric { return c.s.channelzMetric() } + +// validateSendCompressor returns an error when given compressor name cannot be +// handled by the server or the client based on the advertised compressors. +func validateSendCompressor(name, clientCompressors string) error { + if name == encoding.Identity { + return nil + } + + if !grpcutil.IsCompressorNameRegistered(name) { + return fmt.Errorf("compressor not registered %q", name) + } + + for _, c := range strings.Split(clientCompressors, ",") { + if c == name { + return nil // found match + } + } + return fmt.Errorf("client does not support compressor %q", name) +} diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/service_config.go b/.ci/providerlint/vendor/google.golang.org/grpc/service_config.go index 01bbb2025ae..0df11fc0988 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/service_config.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/service_config.go @@ -23,8 +23,6 @@ import ( "errors" "fmt" "reflect" - "strconv" - "strings" "time" "google.golang.org/grpc/codes" @@ -106,8 +104,8 @@ type healthCheckConfig struct { type jsonRetryPolicy struct { MaxAttempts int - InitialBackoff string - MaxBackoff string + InitialBackoff internalserviceconfig.Duration + MaxBackoff internalserviceconfig.Duration BackoffMultiplier float64 RetryableStatusCodes []codes.Code } @@ -129,50 +127,6 @@ type 
retryThrottlingPolicy struct { TokenRatio float64 } -func parseDuration(s *string) (*time.Duration, error) { - if s == nil { - return nil, nil - } - if !strings.HasSuffix(*s, "s") { - return nil, fmt.Errorf("malformed duration %q", *s) - } - ss := strings.SplitN((*s)[:len(*s)-1], ".", 3) - if len(ss) > 2 { - return nil, fmt.Errorf("malformed duration %q", *s) - } - // hasDigits is set if either the whole or fractional part of the number is - // present, since both are optional but one is required. - hasDigits := false - var d time.Duration - if len(ss[0]) > 0 { - i, err := strconv.ParseInt(ss[0], 10, 32) - if err != nil { - return nil, fmt.Errorf("malformed duration %q: %v", *s, err) - } - d = time.Duration(i) * time.Second - hasDigits = true - } - if len(ss) == 2 && len(ss[1]) > 0 { - if len(ss[1]) > 9 { - return nil, fmt.Errorf("malformed duration %q", *s) - } - f, err := strconv.ParseInt(ss[1], 10, 64) - if err != nil { - return nil, fmt.Errorf("malformed duration %q: %v", *s, err) - } - for i := 9; i > len(ss[1]); i-- { - f *= 10 - } - d += time.Duration(f) - hasDigits = true - } - if !hasDigits { - return nil, fmt.Errorf("malformed duration %q", *s) - } - - return &d, nil -} - type jsonName struct { Service string Method string @@ -201,7 +155,7 @@ func (j jsonName) generatePath() (string, error) { type jsonMC struct { Name *[]jsonName WaitForReady *bool - Timeout *string + Timeout *internalserviceconfig.Duration MaxRequestMessageBytes *int64 MaxResponseMessageBytes *int64 RetryPolicy *jsonRetryPolicy @@ -226,7 +180,7 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult { var rsc jsonSC err := json.Unmarshal([]byte(js), &rsc) if err != nil { - logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err) + logger.Warningf("grpc: unmarshaling service config %s: %v", js, err) return &serviceconfig.ParseResult{Err: err} } sc := ServiceConfig{ @@ -252,18 +206,13 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult { if 
m.Name == nil { continue } - d, err := parseDuration(m.Timeout) - if err != nil { - logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err) - return &serviceconfig.ParseResult{Err: err} - } mc := MethodConfig{ WaitForReady: m.WaitForReady, - Timeout: d, + Timeout: (*time.Duration)(m.Timeout), } if mc.RetryPolicy, err = convertRetryPolicy(m.RetryPolicy); err != nil { - logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to %v", js, err) + logger.Warningf("grpc: unmarshaling service config %s: %v", js, err) return &serviceconfig.ParseResult{Err: err} } if m.MaxRequestMessageBytes != nil { @@ -283,13 +232,13 @@ func parseServiceConfig(js string) *serviceconfig.ParseResult { for i, n := range *m.Name { path, err := n.generatePath() if err != nil { - logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to methodConfig[%d]: %v", js, i, err) + logger.Warningf("grpc: error unmarshaling service config %s due to methodConfig[%d]: %v", js, i, err) return &serviceconfig.ParseResult{Err: err} } if _, ok := paths[path]; ok { err = errDuplicatedName - logger.Warningf("grpc: parseServiceConfig error unmarshaling %s due to methodConfig[%d]: %v", js, i, err) + logger.Warningf("grpc: error unmarshaling service config %s due to methodConfig[%d]: %v", js, i, err) return &serviceconfig.ParseResult{Err: err} } paths[path] = struct{}{} @@ -312,18 +261,10 @@ func convertRetryPolicy(jrp *jsonRetryPolicy) (p *internalserviceconfig.RetryPol if jrp == nil { return nil, nil } - ib, err := parseDuration(&jrp.InitialBackoff) - if err != nil { - return nil, err - } - mb, err := parseDuration(&jrp.MaxBackoff) - if err != nil { - return nil, err - } if jrp.MaxAttempts <= 1 || - *ib <= 0 || - *mb <= 0 || + jrp.InitialBackoff <= 0 || + jrp.MaxBackoff <= 0 || jrp.BackoffMultiplier <= 0 || len(jrp.RetryableStatusCodes) == 0 { logger.Warningf("grpc: ignoring retry policy %v due to illegal configuration", jrp) @@ -332,8 +273,8 @@ func 
convertRetryPolicy(jrp *jsonRetryPolicy) (p *internalserviceconfig.RetryPol rp := &internalserviceconfig.RetryPolicy{ MaxAttempts: jrp.MaxAttempts, - InitialBackoff: *ib, - MaxBackoff: *mb, + InitialBackoff: time.Duration(jrp.InitialBackoff), + MaxBackoff: time.Duration(jrp.MaxBackoff), BackoffMultiplier: jrp.BackoffMultiplier, RetryableStatusCodes: make(map[codes.Code]bool), } diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/stats/stats.go b/.ci/providerlint/vendor/google.golang.org/grpc/stats/stats.go index 0285dcc6a26..7a552a9b787 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/stats/stats.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/stats/stats.go @@ -67,10 +67,18 @@ type InPayload struct { Payload interface{} // Data is the serialized message payload. Data []byte - // Length is the length of uncompressed data. + + // Length is the size of the uncompressed payload data. Does not include any + // framing (gRPC or HTTP/2). Length int - // WireLength is the length of data on wire (compressed, signed, encrypted). + // CompressedLength is the size of the compressed payload data. Does not + // include any framing (gRPC or HTTP/2). Same as Length if compression not + // enabled. + CompressedLength int + // WireLength is the size of the compressed payload data plus gRPC framing. + // Does not include HTTP/2 framing. WireLength int + // RecvTime is the time when the payload is received. RecvTime time.Time } @@ -129,9 +137,15 @@ type OutPayload struct { Payload interface{} // Data is the serialized message payload. Data []byte - // Length is the length of uncompressed data. + // Length is the size of the uncompressed payload data. Does not include any + // framing (gRPC or HTTP/2). Length int - // WireLength is the length of data on wire (compressed, signed, encrypted). + // CompressedLength is the size of the compressed payload data. Does not + // include any framing (gRPC or HTTP/2). Same as Length if compression not + // enabled. 
+ CompressedLength int + // WireLength is the size of the compressed payload data plus gRPC framing. + // Does not include HTTP/2 framing. WireLength int // SentTime is the time when the payload is sent. SentTime time.Time diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/status/status.go b/.ci/providerlint/vendor/google.golang.org/grpc/status/status.go index 623be39f26b..53910fb7c90 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/status/status.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/status/status.go @@ -77,7 +77,9 @@ func FromProto(s *spb.Status) *Status { // FromError returns a Status representation of err. // // - If err was produced by this package or implements the method `GRPCStatus() -// *Status`, the appropriate Status is returned. +// *Status`, or if err wraps a type satisfying this, the appropriate Status is +// returned. For wrapped errors, the message returned contains the entire +// err.Error() text and not just the wrapped status. // // - If err is nil, a Status is returned with codes.OK and no message. // @@ -88,10 +90,15 @@ func FromError(err error) (s *Status, ok bool) { if err == nil { return nil, true } - if se, ok := err.(interface { - GRPCStatus() *Status - }); ok { - return se.GRPCStatus(), true + type grpcstatus interface{ GRPCStatus() *Status } + if gs, ok := err.(grpcstatus); ok { + return gs.GRPCStatus(), true + } + var gs grpcstatus + if errors.As(err, &gs) { + p := gs.GRPCStatus().Proto() + p.Message = err.Error() + return status.FromProto(p), true } return New(codes.Unknown, err.Error()), false } @@ -103,19 +110,16 @@ func Convert(err error) *Status { return s } -// Code returns the Code of the error if it is a Status error, codes.OK if err -// is nil, or codes.Unknown otherwise. +// Code returns the Code of the error if it is a Status error or if it wraps a +// Status error. If that is not the case, it returns codes.OK if err is nil, or +// codes.Unknown otherwise. 
func Code(err error) codes.Code { // Don't use FromError to avoid allocation of OK status. if err == nil { return codes.OK } - if se, ok := err.(interface { - GRPCStatus() *Status - }); ok { - return se.GRPCStatus().Code() - } - return codes.Unknown + + return Convert(err).Code() } // FromContextError converts a context error or wrapped context error into a diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/stream.go b/.ci/providerlint/vendor/google.golang.org/grpc/stream.go index 960c3e33dfd..10092685b22 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/stream.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/stream.go @@ -123,6 +123,9 @@ type ClientStream interface { // calling RecvMsg on the same stream at the same time, but it is not safe // to call SendMsg on the same stream in different goroutines. It is also // not safe to call CloseSend concurrently with SendMsg. + // + // It is not safe to modify the message after calling SendMsg. Tracing + // libraries and stats handlers may use the message lazily. SendMsg(m interface{}) error // RecvMsg blocks until it receives a message into m or the stream is // done. It returns io.EOF when the stream completes successfully. On @@ -152,6 +155,11 @@ type ClientStream interface { // If none of the above happen, a goroutine and a context will be leaked, and grpc // will not call the optionally-configured stats handler with a stats.End message. 
func (cc *ClientConn) NewStream(ctx context.Context, desc *StreamDesc, method string, opts ...CallOption) (ClientStream, error) { + if err := cc.idlenessMgr.onCallBegin(); err != nil { + return nil, err + } + defer cc.idlenessMgr.onCallEnd() + // allow interceptor to see all applicable call options, which means those // configured as defaults from dial option as well as per-call options opts = combine(cc.dopts.callOptions, opts) @@ -168,10 +176,19 @@ func NewClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth } func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) { - if md, _, ok := metadata.FromOutgoingContextRaw(ctx); ok { + if md, added, ok := metadata.FromOutgoingContextRaw(ctx); ok { + // validate md if err := imetadata.Validate(md); err != nil { return nil, status.Error(codes.Internal, err.Error()) } + // validate added + for _, kvs := range added { + for i := 0; i < len(kvs); i += 2 { + if err := imetadata.ValidatePair(kvs[i], kvs[i+1]); err != nil { + return nil, status.Error(codes.Internal, err.Error()) + } + } + } } if channelz.IsOn() { cc.incrCallsStarted() @@ -352,7 +369,7 @@ func newClientStreamWithParams(ctx context.Context, desc *StreamDesc, cc *Client } } for _, binlog := range cs.binlogs { - binlog.Log(logEntry) + binlog.Log(cs.ctx, logEntry) } } @@ -416,7 +433,7 @@ func (cs *clientStream) newAttemptLocked(isTransparent bool) (*csAttempt, error) ctx = trace.NewContext(ctx, trInfo.tr) } - if cs.cc.parsedTarget.Scheme == "xds" { + if cs.cc.parsedTarget.URL.Scheme == "xds" { // Add extra metadata (metadata that will be added by transport) to context // so the balancer can see them. 
ctx = grpcutil.WithExtraMetadata(ctx, metadata.Pairs( @@ -438,7 +455,7 @@ func (a *csAttempt) getTransport() error { cs := a.cs var err error - a.t, a.done, err = cs.cc.getTransport(a.ctx, cs.callInfo.failFast, cs.callHdr.Method) + a.t, a.pickResult, err = cs.cc.getTransport(a.ctx, cs.callInfo.failFast, cs.callHdr.Method) if err != nil { if de, ok := err.(dropError); ok { err = de.error @@ -455,6 +472,25 @@ func (a *csAttempt) getTransport() error { func (a *csAttempt) newStream() error { cs := a.cs cs.callHdr.PreviousAttempts = cs.numRetries + + // Merge metadata stored in PickResult, if any, with existing call metadata. + // It is safe to overwrite the csAttempt's context here, since all state + // maintained in it are local to the attempt. When the attempt has to be + // retried, a new instance of csAttempt will be created. + if a.pickResult.Metadata != nil { + // We currently do not have a function in the metadata package which + // merges given metadata with existing metadata in a context. Existing + // function `AppendToOutgoingContext()` takes a variadic argument of key + // value pairs. + // + // TODO: Make it possible to retrieve key value pairs from metadata.MD + // in a form passable to AppendToOutgoingContext(), or create a version + // of AppendToOutgoingContext() that accepts a metadata.MD. + md, _ := metadata.FromOutgoingContext(a.ctx) + md = metadata.Join(md, a.pickResult.Metadata) + a.ctx = metadata.NewOutgoingContext(a.ctx, md) + } + s, err := a.t.NewStream(a.ctx, cs.callHdr) if err != nil { nse, ok := err.(*transport.NewStreamError) @@ -529,12 +565,12 @@ type clientStream struct { // csAttempt implements a single transport stream attempt within a // clientStream.

type csAttempt struct { - ctx context.Context - cs *clientStream - t transport.ClientTransport - s *transport.Stream - p *parser - done func(balancer.DoneInfo) + ctx context.Context + cs *clientStream + t transport.ClientTransport + s *transport.Stream + p *parser + pickResult balancer.PickResult finished bool dc Decompressor @@ -781,7 +817,7 @@ func (cs *clientStream) Header() (metadata.MD, error) { } cs.serverHeaderBinlogged = true for _, binlog := range cs.binlogs { - binlog.Log(logEntry) + binlog.Log(cs.ctx, logEntry) } } return m, nil @@ -862,7 +898,7 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) { Message: data, } for _, binlog := range cs.binlogs { - binlog.Log(cm) + binlog.Log(cs.ctx, cm) } } return err @@ -886,7 +922,7 @@ func (cs *clientStream) RecvMsg(m interface{}) error { Message: recvInfo.uncompressedBytes, } for _, binlog := range cs.binlogs { - binlog.Log(sm) + binlog.Log(cs.ctx, sm) } } if err != nil || !cs.desc.ServerStreams { @@ -907,7 +943,7 @@ func (cs *clientStream) RecvMsg(m interface{}) error { logEntry.PeerAddr = peer.Addr } for _, binlog := range cs.binlogs { - binlog.Log(logEntry) + binlog.Log(cs.ctx, logEntry) } } } @@ -934,7 +970,7 @@ func (cs *clientStream) CloseSend() error { OnClientSide: true, } for _, binlog := range cs.binlogs { - binlog.Log(chc) + binlog.Log(cs.ctx, chc) } } // We never returned an error here for reasons. @@ -952,6 +988,9 @@ func (cs *clientStream) finish(err error) { return } cs.finished = true + for _, onFinish := range cs.callInfo.onFinish { + onFinish(err) + } cs.commitAttemptLocked() if cs.attempt != nil { cs.attempt.finish(err) @@ -973,7 +1012,7 @@ func (cs *clientStream) finish(err error) { OnClientSide: true, } for _, binlog := range cs.binlogs { - binlog.Log(c) + binlog.Log(cs.ctx, c) } } if err == nil { @@ -1062,9 +1101,10 @@ func (a *csAttempt) recvMsg(m interface{}, payInfo *payloadInfo) (err error) { RecvTime: time.Now(), Payload: m, // TODO truncate large payload. 
- Data: payInfo.uncompressedBytes, - WireLength: payInfo.wireLength + headerLen, - Length: len(payInfo.uncompressedBytes), + Data: payInfo.uncompressedBytes, + WireLength: payInfo.compressedLength + headerLen, + CompressedLength: payInfo.compressedLength, + Length: len(payInfo.uncompressedBytes), }) } if channelz.IsOn() { @@ -1103,12 +1143,12 @@ func (a *csAttempt) finish(err error) { tr = a.s.Trailer() } - if a.done != nil { + if a.pickResult.Done != nil { br := false if a.s != nil { br = a.s.BytesReceived() } - a.done(balancer.DoneInfo{ + a.pickResult.Done(balancer.DoneInfo{ Err: err, Trailer: tr, BytesSent: a.s != nil, @@ -1233,14 +1273,19 @@ func newNonRetryClientStream(ctx context.Context, desc *StreamDesc, method strin as.p = &parser{r: s} ac.incrCallsStarted() if desc != unaryStreamDesc { - // Listen on cc and stream contexts to cleanup when the user closes the - // ClientConn or cancels the stream context. In all other cases, an error - // should already be injected into the recv buffer by the transport, which - // the client will eventually receive, and then we will cancel the stream's - // context in clientStream.finish. + // Listen on stream context to cleanup when the stream context is + // canceled. Also listen for the addrConn's context in case the + // addrConn is closed or reconnects to a different address. In all + // other cases, an error should already be injected into the recv + // buffer by the transport, which the client will eventually receive, + // and then we will cancel the stream's context in + // addrConnStream.finish. 
go func() { + ac.mu.Lock() + acCtx := ac.ctx + ac.mu.Unlock() select { - case <-ac.ctx.Done(): + case <-acCtx.Done(): as.finish(status.Error(codes.Canceled, "grpc: the SubConn is closing")) case <-ctx.Done(): as.finish(toRPCErr(ctx.Err())) @@ -1464,6 +1509,9 @@ type ServerStream interface { // It is safe to have a goroutine calling SendMsg and another goroutine // calling RecvMsg on the same stream at the same time, but it is not safe // to call SendMsg on the same stream in different goroutines. + // + // It is not safe to modify the message after calling SendMsg. Tracing + // libraries and stats handlers may use the message lazily. SendMsg(m interface{}) error // RecvMsg blocks until it receives a message into m or the stream is // done. It returns io.EOF when the client has performed a CloseSend. On @@ -1489,6 +1537,8 @@ type serverStream struct { comp encoding.Compressor decomp encoding.Compressor + sendCompressorName string + maxReceiveMessageSize int maxSendMessageSize int trInfo *traceInfo @@ -1536,7 +1586,7 @@ func (ss *serverStream) SendHeader(md metadata.MD) error { } ss.serverHeaderBinlogged = true for _, binlog := range ss.binlogs { - binlog.Log(sh) + binlog.Log(ss.ctx, sh) } } return err @@ -1581,6 +1631,13 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) { } }() + // Server handler could have set new compressor by calling SetSendCompressor. + // In case it is set, we need to use it for compressing outbound message. 
+ if sendCompressorsName := ss.s.SendCompress(); sendCompressorsName != ss.sendCompressorName { + ss.comp = encoding.GetCompressor(sendCompressorsName) + ss.sendCompressorName = sendCompressorsName + } + // load hdr, payload, data hdr, payload, data, err := prepareMsg(m, ss.codec, ss.cp, ss.comp) if err != nil { @@ -1602,14 +1659,14 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) { } ss.serverHeaderBinlogged = true for _, binlog := range ss.binlogs { - binlog.Log(sh) + binlog.Log(ss.ctx, sh) } } sm := &binarylog.ServerMessage{ Message: data, } for _, binlog := range ss.binlogs { - binlog.Log(sm) + binlog.Log(ss.ctx, sm) } } if len(ss.statsHandler) != 0 { @@ -1657,7 +1714,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) { if len(ss.binlogs) != 0 { chc := &binarylog.ClientHalfClose{} for _, binlog := range ss.binlogs { - binlog.Log(chc) + binlog.Log(ss.ctx, chc) } } return err @@ -1673,9 +1730,10 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) { RecvTime: time.Now(), Payload: m, // TODO truncate large payload. - Data: payInfo.uncompressedBytes, - WireLength: payInfo.wireLength + headerLen, - Length: len(payInfo.uncompressedBytes), + Data: payInfo.uncompressedBytes, + Length: len(payInfo.uncompressedBytes), + WireLength: payInfo.compressedLength + headerLen, + CompressedLength: payInfo.compressedLength, }) } } @@ -1684,7 +1742,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) { Message: payInfo.uncompressedBytes, } for _, binlog := range ss.binlogs { - binlog.Log(cm) + binlog.Log(ss.ctx, cm) } } return nil diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/version.go b/.ci/providerlint/vendor/google.golang.org/grpc/version.go index 2198e7098d7..d0460ec92f3 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/version.go +++ b/.ci/providerlint/vendor/google.golang.org/grpc/version.go @@ -19,4 +19,4 @@ package grpc // Version is the current grpc version. 
-const Version = "1.51.0" +const Version = "1.56.0" diff --git a/.ci/providerlint/vendor/google.golang.org/grpc/vet.sh b/.ci/providerlint/vendor/google.golang.org/grpc/vet.sh index bd8e0cdb33c..a8e4732b3d2 100644 --- a/.ci/providerlint/vendor/google.golang.org/grpc/vet.sh +++ b/.ci/providerlint/vendor/google.golang.org/grpc/vet.sh @@ -41,16 +41,8 @@ if [[ "$1" = "-install" ]]; then github.com/client9/misspell/cmd/misspell popd if [[ -z "${VET_SKIP_PROTO}" ]]; then - if [[ "${TRAVIS}" = "true" ]]; then - PROTOBUF_VERSION=3.14.0 - PROTOC_FILENAME=protoc-${PROTOBUF_VERSION}-linux-x86_64.zip - pushd /home/travis - wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/${PROTOC_FILENAME} - unzip ${PROTOC_FILENAME} - bin/protoc --version - popd - elif [[ "${GITHUB_ACTIONS}" = "true" ]]; then - PROTOBUF_VERSION=3.14.0 + if [[ "${GITHUB_ACTIONS}" = "true" ]]; then + PROTOBUF_VERSION=22.0 # a.k.a v4.22.0 in pb.go files. PROTOC_FILENAME=protoc-${PROTOBUF_VERSION}-linux-x86_64.zip pushd /home/runner/go wget https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/${PROTOC_FILENAME} @@ -66,6 +58,16 @@ elif [[ "$#" -ne 0 ]]; then die "Unknown argument(s): $*" fi +# - Check that generated proto files are up to date. +if [[ -z "${VET_SKIP_PROTO}" ]]; then + make proto && git status --porcelain 2>&1 | fail_on_output || \ + (git status; git --no-pager diff; exit 1) +fi + +if [[ -n "${VET_ONLY_PROTO}" ]]; then + exit 0 +fi + # - Ensure all source files contain a copyright message. # (Done in two parts because Darwin "git grep" has broken support for compound # exclusion matches.) @@ -93,13 +95,6 @@ git grep '"github.com/envoyproxy/go-control-plane/envoy' -- '*.go' ':(exclude)*. misspell -error . -# - Check that generated proto files are up to date. 
-if [[ -z "${VET_SKIP_PROTO}" ]]; then - PATH="/home/travis/bin:${PATH}" make proto && \ - git status --porcelain 2>&1 | fail_on_output || \ - (git status; git --no-pager diff; exit 1) -fi - # - gofmt, goimports, golint (with exceptions for generated code), go vet, # go mod tidy. # Perform these checks on each module inside gRPC. @@ -111,7 +106,7 @@ for MOD_FILE in $(find . -name 'go.mod'); do goimports -l . 2>&1 | not grep -vE "\.pb\.go" golint ./... 2>&1 | not grep -vE "/grpc_testing_not_regenerate/.*\.pb\.go:" - go mod tidy + go mod tidy -compat=1.17 git status --porcelain 2>&1 | fail_on_output || \ (git status; git --no-pager diff; exit 1) popd @@ -121,8 +116,9 @@ done # # TODO(dfawley): don't use deprecated functions in examples or first-party # plugins. +# TODO(dfawley): enable ST1019 (duplicate imports) but allow for protobufs. SC_OUT="$(mktemp)" -staticcheck -go 1.9 -checks 'inherit,-ST1015' ./... > "${SC_OUT}" || true +staticcheck -go 1.19 -checks 'inherit,-ST1015,-ST1019,-SA1019' ./... > "${SC_OUT}" || true # Error if anything other than deprecation warnings are printed. not grep -v "is deprecated:.*SA1019" "${SC_OUT}" # Only ignore the following deprecated types/fields/functions. diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/doc.go b/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/doc.go index 00ea2fecfb7..21d5d2cb18e 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/doc.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/doc.go @@ -4,7 +4,7 @@ // Package protojson marshals and unmarshals protocol buffer messages as JSON // format. It follows the guide at -// https://developers.google.com/protocol-buffers/docs/proto3#json. +// https://protobuf.dev/programming-guides/proto3#json. // // This package produces a different output than the standard "encoding/json" // package, which does not operate correctly on protocol buffer messages. 
diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/well_known_types.go b/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/well_known_types.go index c85f8469480..6c37d417449 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/well_known_types.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protojson/well_known_types.go @@ -814,16 +814,22 @@ func (d decoder) unmarshalTimestamp(m protoreflect.Message) error { return d.unexpectedTokenError(tok) } - t, err := time.Parse(time.RFC3339Nano, tok.ParsedString()) + s := tok.ParsedString() + t, err := time.Parse(time.RFC3339Nano, s) if err != nil { return d.newError(tok.Pos(), "invalid %v value %v", genid.Timestamp_message_fullname, tok.RawString()) } - // Validate seconds. No need to validate nanos because time.Parse would have - // covered that already. + // Validate seconds. secs := t.Unix() if secs < minTimestampSeconds || secs > maxTimestampSeconds { return d.newError(tok.Pos(), "%v value out of range: %v", genid.Timestamp_message_fullname, tok.RawString()) } + // Validate subseconds. + i := strings.LastIndexByte(s, '.') // start of subsecond field + j := strings.LastIndexAny(s, "Z-+") // start of timezone field + if i >= 0 && j >= i && j-i > len(".999999999") { + return d.newError(tok.Pos(), "invalid %v value %v", genid.Timestamp_message_fullname, tok.RawString()) + } fds := m.Descriptor().Fields() fdSeconds := fds.ByNumber(genid.Timestamp_Seconds_field_number) diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protowire/wire.go b/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protowire/wire.go index ce57f57ebd4..f4b4686cf9d 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protowire/wire.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/encoding/protowire/wire.go @@ -3,7 +3,7 @@ // license that can be found in the LICENSE file. 
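The protojson change above adds an explicit bound on the subsecond field because time.Parse with time.RFC3339Nano tolerates fractions longer than nine digits (truncating them to nanosecond precision), which would otherwise let out-of-spec timestamps through. A hedged stand-alone sketch of the same check (validTimestamp is a hypothetical helper, not the library API):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// validTimestamp accepts an RFC 3339 timestamp only if it parses and its
// fractional-second field, when present, is at most nine digits long. The
// bound is found by locating the last '.' (start of the subsecond field)
// and the last 'Z', '-', or '+' (start of the timezone field), mirroring
// the added protojson validation.
func validTimestamp(s string) bool {
	if _, err := time.Parse(time.RFC3339Nano, s); err != nil {
		return false
	}
	i := strings.LastIndexByte(s, '.')  // start of subsecond field
	j := strings.LastIndexAny(s, "Z-+") // start of timezone field
	if i >= 0 && j >= i && j-i > len(".999999999") {
		return false
	}
	return true
}

func main() {
	fmt.Println(validTimestamp("2021-01-01T00:00:00.123456789Z"))
	fmt.Println(validTimestamp("2021-01-01T00:00:00.1234567891Z"))
}
```

The `j >= i` guard matters: a '-' in the date part always precedes any '.', so the check only fires when the marker found is a genuine timezone delimiter after the fraction.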
// Package protowire parses and formats the raw wire encoding. -// See https://developers.google.com/protocol-buffers/docs/encoding. +// See https://protobuf.dev/programming-guides/encoding. // // For marshaling and unmarshaling entire protobuf messages, // use the "google.golang.org/protobuf/proto" package instead. @@ -29,12 +29,8 @@ const ( ) // IsValid reports whether the field number is semantically valid. -// -// Note that while numbers within the reserved range are semantically invalid, -// they are syntactically valid in the wire format. -// Implementations may treat records with reserved field numbers as unknown. func (n Number) IsValid() bool { - return MinValidNumber <= n && n < FirstReservedNumber || LastReservedNumber < n && n <= MaxValidNumber + return MinValidNumber <= n && n <= MaxValidNumber } // Type represents the wire type. diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/json/decode.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/json/decode.go index b13fd29e81e..d043a6ebe0b 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/json/decode.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/json/decode.go @@ -294,7 +294,7 @@ func (d *Decoder) isValueNext() bool { } // consumeToken constructs a Token for given Kind with raw value derived from -// current d.in and given size, and consumes the given size-lenght of it. +// current d.in and given size, and consumes the given size-length of it. 
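The protowire change above widens Number.IsValid to accept the reserved range 19000-19999, since such field numbers, while semantically reserved, are syntactically representable in the wire format and may appear in records an implementation should treat as unknown rather than invalid. A small sketch contrasting the old and new predicates (isValidOld and isValidNew are illustrative names; the constants are copied from protowire):

```go
package main

import "fmt"

// Bounds mirroring protowire's exported constants.
const (
	minValidNumber = 1
	firstReserved  = 19000
	lastReserved   = 19999
	maxValidNumber = 1<<29 - 1
)

// isValidOld reproduces the removed behavior: reserved numbers rejected.
func isValidOld(n int32) bool {
	return minValidNumber <= n && n < firstReserved ||
		lastReserved < n && n <= maxValidNumber
}

// isValidNew reproduces the new behavior: any number in the wire-format
// range is valid, reserved or not.
func isValidNew(n int32) bool {
	return minValidNumber <= n && n <= maxValidNumber
}

func main() {
	fmt.Println(isValidOld(19500), isValidNew(19500))
}
```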
func (d *Decoder) consumeToken(kind Kind, size int) Token { tok := Token{ kind: kind, diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode.go index 427c62d037f..87853e786d0 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode.go @@ -412,12 +412,13 @@ func (d *Decoder) parseFieldName() (tok Token, err error) { // Field number. Identify if input is a valid number that is not negative // and is decimal integer within 32-bit range. if num := parseNumber(d.in); num.size > 0 { + str := num.string(d.in) if !num.neg && num.kind == numDec { - if _, err := strconv.ParseInt(string(d.in[:num.size]), 10, 32); err == nil { + if _, err := strconv.ParseInt(str, 10, 32); err == nil { return d.consumeToken(Name, num.size, uint8(FieldNumber)), nil } } - return Token{}, d.newSyntaxError("invalid field number: %s", d.in[:num.size]) + return Token{}, d.newSyntaxError("invalid field number: %s", str) } return Token{}, d.newSyntaxError("invalid field name: %s", errId(d.in)) diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode_number.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode_number.go index 81a5d8c8613..45c81f0298e 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode_number.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/encoding/text/decode_number.go @@ -15,17 +15,12 @@ func (d *Decoder) parseNumberValue() (Token, bool) { if num.neg { numAttrs |= isNegative } - strSize := num.size - last := num.size - 1 - if num.kind == numFloat && (d.in[last] == 'f' || d.in[last] == 'F') { - strSize = last - } tok := Token{ kind: Scalar, attrs: numberValue, pos: len(d.orig) - len(d.in), raw: 
d.in[:num.size], - str: string(d.in[:strSize]), + str: num.string(d.in), numAttrs: numAttrs, } d.consume(num.size) @@ -46,6 +41,27 @@ type number struct { kind uint8 neg bool size int + // if neg, this is the length of whitespace and comments between + // the minus sign and the rest of the number literal + sep int +} + +func (num number) string(data []byte) string { + strSize := num.size + last := num.size - 1 + if num.kind == numFloat && (data[last] == 'f' || data[last] == 'F') { + strSize = last + } + if num.neg && num.sep > 0 { + // strip whitespace/comments between negative sign and the rest + strLen := strSize - num.sep + str := make([]byte, strLen) + str[0] = data[0] + copy(str[1:], data[num.sep+1:strSize]) + return string(str) + } + return string(data[:strSize]) + } // parseNumber constructs a number object from given input. It allows for the @@ -67,19 +83,22 @@ func parseNumber(input []byte) number { } // Optional - + var sep int if s[0] == '-' { neg = true s = s[1:] size++ + // Consume any whitespace or comments between the + // negative sign and the rest of the number + lenBefore := len(s) + s = consume(s, 0) + sep = lenBefore - len(s) + size += sep if len(s) == 0 { return number{} } } - // C++ allows for whitespace and comments in between the negative sign and - // the rest of the number. This logic currently does not but is consistent - // with v1.
- switch { case s[0] == '0': if len(s) > 1 { @@ -116,7 +135,7 @@ func parseNumber(input []byte) number { if len(s) > 0 && !isDelim(s[0]) { return number{} } - return number{kind: kind, neg: neg, size: size} + return number{kind: kind, neg: neg, size: size, sep: sep} } } s = s[1:] @@ -188,5 +207,5 @@ func parseNumber(input []byte) number { return number{} } - return number{kind: kind, neg: neg, size: size} + return number{kind: kind, neg: neg, size: size, sep: sep} } diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/genid/descriptor_gen.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/genid/descriptor_gen.go index e3cdf1c2059..5c0e8f73f4e 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/genid/descriptor_gen.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/genid/descriptor_gen.go @@ -50,6 +50,7 @@ const ( FileDescriptorProto_Options_field_name protoreflect.Name = "options" FileDescriptorProto_SourceCodeInfo_field_name protoreflect.Name = "source_code_info" FileDescriptorProto_Syntax_field_name protoreflect.Name = "syntax" + FileDescriptorProto_Edition_field_name protoreflect.Name = "edition" FileDescriptorProto_Name_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.name" FileDescriptorProto_Package_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.package" @@ -63,6 +64,7 @@ const ( FileDescriptorProto_Options_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.options" FileDescriptorProto_SourceCodeInfo_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.source_code_info" FileDescriptorProto_Syntax_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.syntax" + FileDescriptorProto_Edition_field_fullname protoreflect.FullName = "google.protobuf.FileDescriptorProto.edition" ) // Field numbers for google.protobuf.FileDescriptorProto. 
@@ -79,6 +81,7 @@ const ( FileDescriptorProto_Options_field_number protoreflect.FieldNumber = 8 FileDescriptorProto_SourceCodeInfo_field_number protoreflect.FieldNumber = 9 FileDescriptorProto_Syntax_field_number protoreflect.FieldNumber = 12 + FileDescriptorProto_Edition_field_number protoreflect.FieldNumber = 13 ) // Names for google.protobuf.DescriptorProto. @@ -494,26 +497,29 @@ const ( // Field names for google.protobuf.MessageOptions. const ( - MessageOptions_MessageSetWireFormat_field_name protoreflect.Name = "message_set_wire_format" - MessageOptions_NoStandardDescriptorAccessor_field_name protoreflect.Name = "no_standard_descriptor_accessor" - MessageOptions_Deprecated_field_name protoreflect.Name = "deprecated" - MessageOptions_MapEntry_field_name protoreflect.Name = "map_entry" - MessageOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option" + MessageOptions_MessageSetWireFormat_field_name protoreflect.Name = "message_set_wire_format" + MessageOptions_NoStandardDescriptorAccessor_field_name protoreflect.Name = "no_standard_descriptor_accessor" + MessageOptions_Deprecated_field_name protoreflect.Name = "deprecated" + MessageOptions_MapEntry_field_name protoreflect.Name = "map_entry" + MessageOptions_DeprecatedLegacyJsonFieldConflicts_field_name protoreflect.Name = "deprecated_legacy_json_field_conflicts" + MessageOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option" - MessageOptions_MessageSetWireFormat_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.message_set_wire_format" - MessageOptions_NoStandardDescriptorAccessor_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.no_standard_descriptor_accessor" - MessageOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.deprecated" - MessageOptions_MapEntry_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.map_entry" - 
MessageOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.uninterpreted_option" + MessageOptions_MessageSetWireFormat_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.message_set_wire_format" + MessageOptions_NoStandardDescriptorAccessor_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.no_standard_descriptor_accessor" + MessageOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.deprecated" + MessageOptions_MapEntry_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.map_entry" + MessageOptions_DeprecatedLegacyJsonFieldConflicts_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.deprecated_legacy_json_field_conflicts" + MessageOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.MessageOptions.uninterpreted_option" ) // Field numbers for google.protobuf.MessageOptions. const ( - MessageOptions_MessageSetWireFormat_field_number protoreflect.FieldNumber = 1 - MessageOptions_NoStandardDescriptorAccessor_field_number protoreflect.FieldNumber = 2 - MessageOptions_Deprecated_field_number protoreflect.FieldNumber = 3 - MessageOptions_MapEntry_field_number protoreflect.FieldNumber = 7 - MessageOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999 + MessageOptions_MessageSetWireFormat_field_number protoreflect.FieldNumber = 1 + MessageOptions_NoStandardDescriptorAccessor_field_number protoreflect.FieldNumber = 2 + MessageOptions_Deprecated_field_number protoreflect.FieldNumber = 3 + MessageOptions_MapEntry_field_number protoreflect.FieldNumber = 7 + MessageOptions_DeprecatedLegacyJsonFieldConflicts_field_number protoreflect.FieldNumber = 11 + MessageOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999 ) // Names for google.protobuf.FieldOptions. 
@@ -528,16 +534,24 @@ const ( FieldOptions_Packed_field_name protoreflect.Name = "packed" FieldOptions_Jstype_field_name protoreflect.Name = "jstype" FieldOptions_Lazy_field_name protoreflect.Name = "lazy" + FieldOptions_UnverifiedLazy_field_name protoreflect.Name = "unverified_lazy" FieldOptions_Deprecated_field_name protoreflect.Name = "deprecated" FieldOptions_Weak_field_name protoreflect.Name = "weak" + FieldOptions_DebugRedact_field_name protoreflect.Name = "debug_redact" + FieldOptions_Retention_field_name protoreflect.Name = "retention" + FieldOptions_Target_field_name protoreflect.Name = "target" FieldOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option" FieldOptions_Ctype_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.ctype" FieldOptions_Packed_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.packed" FieldOptions_Jstype_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.jstype" FieldOptions_Lazy_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.lazy" + FieldOptions_UnverifiedLazy_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.unverified_lazy" FieldOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.deprecated" FieldOptions_Weak_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.weak" + FieldOptions_DebugRedact_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.debug_redact" + FieldOptions_Retention_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.retention" + FieldOptions_Target_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.target" FieldOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.FieldOptions.uninterpreted_option" ) @@ -547,8 +561,12 @@ const ( FieldOptions_Packed_field_number protoreflect.FieldNumber = 2 FieldOptions_Jstype_field_number protoreflect.FieldNumber = 6 
FieldOptions_Lazy_field_number protoreflect.FieldNumber = 5 + FieldOptions_UnverifiedLazy_field_number protoreflect.FieldNumber = 15 FieldOptions_Deprecated_field_number protoreflect.FieldNumber = 3 FieldOptions_Weak_field_number protoreflect.FieldNumber = 10 + FieldOptions_DebugRedact_field_number protoreflect.FieldNumber = 16 + FieldOptions_Retention_field_number protoreflect.FieldNumber = 17 + FieldOptions_Target_field_number protoreflect.FieldNumber = 18 FieldOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999 ) @@ -564,6 +582,18 @@ const ( FieldOptions_JSType_enum_name = "JSType" ) +// Full and short names for google.protobuf.FieldOptions.OptionRetention. +const ( + FieldOptions_OptionRetention_enum_fullname = "google.protobuf.FieldOptions.OptionRetention" + FieldOptions_OptionRetention_enum_name = "OptionRetention" +) + +// Full and short names for google.protobuf.FieldOptions.OptionTargetType. +const ( + FieldOptions_OptionTargetType_enum_fullname = "google.protobuf.FieldOptions.OptionTargetType" + FieldOptions_OptionTargetType_enum_name = "OptionTargetType" +) + // Names for google.protobuf.OneofOptions. const ( OneofOptions_message_name protoreflect.Name = "OneofOptions" @@ -590,20 +620,23 @@ const ( // Field names for google.protobuf.EnumOptions. 
const ( - EnumOptions_AllowAlias_field_name protoreflect.Name = "allow_alias" - EnumOptions_Deprecated_field_name protoreflect.Name = "deprecated" - EnumOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option" + EnumOptions_AllowAlias_field_name protoreflect.Name = "allow_alias" + EnumOptions_Deprecated_field_name protoreflect.Name = "deprecated" + EnumOptions_DeprecatedLegacyJsonFieldConflicts_field_name protoreflect.Name = "deprecated_legacy_json_field_conflicts" + EnumOptions_UninterpretedOption_field_name protoreflect.Name = "uninterpreted_option" - EnumOptions_AllowAlias_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.allow_alias" - EnumOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.deprecated" - EnumOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.uninterpreted_option" + EnumOptions_AllowAlias_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.allow_alias" + EnumOptions_Deprecated_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.deprecated" + EnumOptions_DeprecatedLegacyJsonFieldConflicts_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.deprecated_legacy_json_field_conflicts" + EnumOptions_UninterpretedOption_field_fullname protoreflect.FullName = "google.protobuf.EnumOptions.uninterpreted_option" ) // Field numbers for google.protobuf.EnumOptions. 
const ( - EnumOptions_AllowAlias_field_number protoreflect.FieldNumber = 2 - EnumOptions_Deprecated_field_number protoreflect.FieldNumber = 3 - EnumOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999 + EnumOptions_AllowAlias_field_number protoreflect.FieldNumber = 2 + EnumOptions_Deprecated_field_number protoreflect.FieldNumber = 3 + EnumOptions_DeprecatedLegacyJsonFieldConflicts_field_number protoreflect.FieldNumber = 6 + EnumOptions_UninterpretedOption_field_number protoreflect.FieldNumber = 999 ) // Names for google.protobuf.EnumValueOptions. @@ -813,11 +846,13 @@ const ( GeneratedCodeInfo_Annotation_SourceFile_field_name protoreflect.Name = "source_file" GeneratedCodeInfo_Annotation_Begin_field_name protoreflect.Name = "begin" GeneratedCodeInfo_Annotation_End_field_name protoreflect.Name = "end" + GeneratedCodeInfo_Annotation_Semantic_field_name protoreflect.Name = "semantic" GeneratedCodeInfo_Annotation_Path_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.path" GeneratedCodeInfo_Annotation_SourceFile_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.source_file" GeneratedCodeInfo_Annotation_Begin_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.begin" GeneratedCodeInfo_Annotation_End_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.end" + GeneratedCodeInfo_Annotation_Semantic_field_fullname protoreflect.FullName = "google.protobuf.GeneratedCodeInfo.Annotation.semantic" ) // Field numbers for google.protobuf.GeneratedCodeInfo.Annotation. 
@@ -826,4 +861,11 @@ const ( GeneratedCodeInfo_Annotation_SourceFile_field_number protoreflect.FieldNumber = 2 GeneratedCodeInfo_Annotation_Begin_field_number protoreflect.FieldNumber = 3 GeneratedCodeInfo_Annotation_End_field_number protoreflect.FieldNumber = 4 + GeneratedCodeInfo_Annotation_Semantic_field_number protoreflect.FieldNumber = 5 +) + +// Full and short names for google.protobuf.GeneratedCodeInfo.Annotation.Semantic. +const ( + GeneratedCodeInfo_Annotation_Semantic_enum_fullname = "google.protobuf.GeneratedCodeInfo.Annotation.Semantic" + GeneratedCodeInfo_Annotation_Semantic_enum_name = "Semantic" ) diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/impl/convert.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/impl/convert.go index 11a6128ba56..185ef2efa5b 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/impl/convert.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/impl/convert.go @@ -59,7 +59,6 @@ func NewConverter(t reflect.Type, fd protoreflect.FieldDescriptor) Converter { default: return newSingularConverter(t, fd) } - panic(fmt.Sprintf("invalid Go type %v for field %v", t, fd.FullName())) } var ( diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/strs/strings_unsafe.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/strs/strings_unsafe.go index fea589c457e..61a84d34185 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/strs/strings_unsafe.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/strs/strings_unsafe.go @@ -87,7 +87,7 @@ func (sb *Builder) grow(n int) { // Unlike strings.Builder, we do not need to copy over the contents // of the old buffer since our builder provides no API for // retrieving previously created strings. 
- sb.buf = make([]byte, 2*(cap(sb.buf)+n)) + sb.buf = make([]byte, 0, 2*(cap(sb.buf)+n)) } func (sb *Builder) last(n int) string { diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/version/version.go b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/version/version.go index b480c5010f1..f7014cd51cd 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/internal/version/version.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/internal/version/version.go @@ -51,8 +51,8 @@ import ( // 10. Send out the CL for review and submit it. const ( Major = 1 - Minor = 28 - Patch = 1 + Minor = 30 + Patch = 0 PreRelease = "" ) diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/proto/doc.go b/.ci/providerlint/vendor/google.golang.org/protobuf/proto/doc.go index 08d2a46f535..ec71e717fe7 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/proto/doc.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/proto/doc.go @@ -5,16 +5,13 @@ // Package proto provides functions operating on protocol buffer messages. // // For documentation on protocol buffers in general, see: -// -// https://developers.google.com/protocol-buffers +// https://protobuf.dev. // // For a tutorial on using protocol buffers with Go, see: -// -// https://developers.google.com/protocol-buffers/docs/gotutorial +// https://protobuf.dev/getting-started/gotutorial. // // For a guide to generated Go protocol buffer code, see: -// -// https://developers.google.com/protocol-buffers/docs/reference/go-generated +// https://protobuf.dev/reference/go/go-generated. 
// // # Binary serialization // diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/proto/equal.go b/.ci/providerlint/vendor/google.golang.org/protobuf/proto/equal.go index 67948dd1df8..1a0be1b03c7 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/proto/equal.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/proto/equal.go @@ -5,30 +5,39 @@ package proto import ( - "bytes" - "math" "reflect" - "google.golang.org/protobuf/encoding/protowire" "google.golang.org/protobuf/reflect/protoreflect" ) -// Equal reports whether two messages are equal. -// If two messages marshal to the same bytes under deterministic serialization, -// then Equal is guaranteed to report true. +// Equal reports whether two messages are equal, +// by recursively comparing the fields of the message. // -// Two messages are equal if they belong to the same message descriptor, -// have the same set of populated known and extension field values, -// and the same set of unknown fields values. If either of the top-level -// messages are invalid, then Equal reports true only if both are invalid. +// - Bytes fields are equal if they contain identical bytes. +// Empty bytes (regardless of nil-ness) are considered equal. // -// Scalar values are compared with the equivalent of the == operator in Go, -// except bytes values which are compared using bytes.Equal and -// floating point values which specially treat NaNs as equal. -// Message values are compared by recursively calling Equal. -// Lists are equal if each element value is also equal. -// Maps are equal if they have the same set of keys, where the pair of values -// for each key is also equal. +// - Floating-point fields are equal if they contain the same value. +// Unlike the == operator, a NaN is equal to another NaN. +// +// - Other scalar fields are equal if they contain the same value. 
+// +// - Message fields are equal if they have +// the same set of populated known and extension field values, and +// the same set of unknown fields values. +// +// - Lists are equal if they are the same length and +// each corresponding element is equal. +// +// - Maps are equal if they have the same set of keys and +// the corresponding value for each key is equal. +// +// An invalid message is not equal to a valid message. +// An invalid message is only equal to another invalid message of the +// same type. An invalid message often corresponds to a nil pointer +// of the concrete message type. For example, (*pb.M)(nil) is not equal +// to &pb.M{}. +// If two valid messages marshal to the same bytes under deterministic +// serialization, then Equal is guaranteed to report true. func Equal(x, y Message) bool { if x == nil || y == nil { return x == nil && y == nil @@ -42,130 +51,7 @@ func Equal(x, y Message) bool { if mx.IsValid() != my.IsValid() { return false } - return equalMessage(mx, my) -} - -// equalMessage compares two messages. -func equalMessage(mx, my protoreflect.Message) bool { - if mx.Descriptor() != my.Descriptor() { - return false - } - - nx := 0 - equal := true - mx.Range(func(fd protoreflect.FieldDescriptor, vx protoreflect.Value) bool { - nx++ - vy := my.Get(fd) - equal = my.Has(fd) && equalField(fd, vx, vy) - return equal - }) - if !equal { - return false - } - ny := 0 - my.Range(func(fd protoreflect.FieldDescriptor, vx protoreflect.Value) bool { - ny++ - return true - }) - if nx != ny { - return false - } - - return equalUnknown(mx.GetUnknown(), my.GetUnknown()) -} - -// equalField compares two fields. -func equalField(fd protoreflect.FieldDescriptor, x, y protoreflect.Value) bool { - switch { - case fd.IsList(): - return equalList(fd, x.List(), y.List()) - case fd.IsMap(): - return equalMap(fd, x.Map(), y.Map()) - default: - return equalValue(fd, x, y) - } -} - -// equalMap compares two maps. 
-func equalMap(fd protoreflect.FieldDescriptor, x, y protoreflect.Map) bool { - if x.Len() != y.Len() { - return false - } - equal := true - x.Range(func(k protoreflect.MapKey, vx protoreflect.Value) bool { - vy := y.Get(k) - equal = y.Has(k) && equalValue(fd.MapValue(), vx, vy) - return equal - }) - return equal -} - -// equalList compares two lists. -func equalList(fd protoreflect.FieldDescriptor, x, y protoreflect.List) bool { - if x.Len() != y.Len() { - return false - } - for i := x.Len() - 1; i >= 0; i-- { - if !equalValue(fd, x.Get(i), y.Get(i)) { - return false - } - } - return true -} - -// equalValue compares two singular values. -func equalValue(fd protoreflect.FieldDescriptor, x, y protoreflect.Value) bool { - switch fd.Kind() { - case protoreflect.BoolKind: - return x.Bool() == y.Bool() - case protoreflect.EnumKind: - return x.Enum() == y.Enum() - case protoreflect.Int32Kind, protoreflect.Sint32Kind, - protoreflect.Int64Kind, protoreflect.Sint64Kind, - protoreflect.Sfixed32Kind, protoreflect.Sfixed64Kind: - return x.Int() == y.Int() - case protoreflect.Uint32Kind, protoreflect.Uint64Kind, - protoreflect.Fixed32Kind, protoreflect.Fixed64Kind: - return x.Uint() == y.Uint() - case protoreflect.FloatKind, protoreflect.DoubleKind: - fx := x.Float() - fy := y.Float() - if math.IsNaN(fx) || math.IsNaN(fy) { - return math.IsNaN(fx) && math.IsNaN(fy) - } - return fx == fy - case protoreflect.StringKind: - return x.String() == y.String() - case protoreflect.BytesKind: - return bytes.Equal(x.Bytes(), y.Bytes()) - case protoreflect.MessageKind, protoreflect.GroupKind: - return equalMessage(x.Message(), y.Message()) - default: - return x.Interface() == y.Interface() - } -} - -// equalUnknown compares unknown fields by direct comparison on the raw bytes -// of each individual field number. 
-func equalUnknown(x, y protoreflect.RawFields) bool { - if len(x) != len(y) { - return false - } - if bytes.Equal([]byte(x), []byte(y)) { - return true - } - - mx := make(map[protoreflect.FieldNumber]protoreflect.RawFields) - my := make(map[protoreflect.FieldNumber]protoreflect.RawFields) - for len(x) > 0 { - fnum, _, n := protowire.ConsumeField(x) - mx[fnum] = append(mx[fnum], x[:n]...) - x = x[n:] - } - for len(y) > 0 { - fnum, _, n := protowire.ConsumeField(y) - my[fnum] = append(my[fnum], y[:n]...) - y = y[n:] - } - return reflect.DeepEqual(mx, my) + vx := protoreflect.ValueOfMessage(mx) + vy := protoreflect.ValueOfMessage(my) + return vx.Equal(vy) } diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/source_gen.go b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/source_gen.go index b03c1223c4a..54ce326df94 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/source_gen.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/source_gen.go @@ -35,6 +35,8 @@ func (p *SourcePath) appendFileDescriptorProto(b []byte) []byte { b = p.appendSingularField(b, "source_code_info", (*SourcePath).appendSourceCodeInfo) case 12: b = p.appendSingularField(b, "syntax", nil) + case 13: + b = p.appendSingularField(b, "edition", nil) } return b } @@ -236,6 +238,8 @@ func (p *SourcePath) appendMessageOptions(b []byte) []byte { b = p.appendSingularField(b, "deprecated", nil) case 7: b = p.appendSingularField(b, "map_entry", nil) + case 11: + b = p.appendSingularField(b, "deprecated_legacy_json_field_conflicts", nil) case 999: b = p.appendRepeatedField(b, "uninterpreted_option", (*SourcePath).appendUninterpretedOption) } @@ -279,6 +283,8 @@ func (p *SourcePath) appendEnumOptions(b []byte) []byte { b = p.appendSingularField(b, "allow_alias", nil) case 3: b = p.appendSingularField(b, "deprecated", nil) + case 6: + b = p.appendSingularField(b, 
"deprecated_legacy_json_field_conflicts", nil) case 999: b = p.appendRepeatedField(b, "uninterpreted_option", (*SourcePath).appendUninterpretedOption) } @@ -345,10 +351,18 @@ func (p *SourcePath) appendFieldOptions(b []byte) []byte { b = p.appendSingularField(b, "jstype", nil) case 5: b = p.appendSingularField(b, "lazy", nil) + case 15: + b = p.appendSingularField(b, "unverified_lazy", nil) case 3: b = p.appendSingularField(b, "deprecated", nil) case 10: b = p.appendSingularField(b, "weak", nil) + case 16: + b = p.appendSingularField(b, "debug_redact", nil) + case 17: + b = p.appendSingularField(b, "retention", nil) + case 18: + b = p.appendSingularField(b, "target", nil) case 999: b = p.appendRepeatedField(b, "uninterpreted_option", (*SourcePath).appendUninterpretedOption) } diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value.go b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value.go index f3198107782..37601b78199 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value.go @@ -148,7 +148,7 @@ type Message interface { // be preserved in marshaling or other operations. IsValid() bool - // ProtoMethods returns optional fast-path implementions of various operations. + // ProtoMethods returns optional fast-path implementations of various operations. // This method may return nil. // // The returned methods type is identical to diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_equal.go b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_equal.go new file mode 100644 index 00000000000..591652541f2 --- /dev/null +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_equal.go @@ -0,0 +1,168 @@ +// Copyright 2022 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package protoreflect + +import ( + "bytes" + "fmt" + "math" + "reflect" + + "google.golang.org/protobuf/encoding/protowire" +) + +// Equal reports whether v1 and v2 are recursively equal. +// +// - Values of different types are always unequal. +// +// - Bytes values are equal if they contain identical bytes. +// Empty bytes (regardless of nil-ness) are considered equal. +// +// - Floating point values are equal if they contain the same value. +// Unlike the == operator, a NaN is equal to another NaN. +// +// - Enums are equal if they contain the same number. +// Since Value does not contain an enum descriptor, +// enum values do not consider the type of the enum. +// +// - Other scalar values are equal if they contain the same value. +// +// - Message values are equal if they belong to the same message descriptor, +// have the same set of populated known and extension field values, +// and the same set of unknown fields values. +// +// - Lists are equal if they are the same length and +// each corresponding element is equal. +// +// - Maps are equal if they have the same set of keys and +// the corresponding value for each key is equal. 
+func (v1 Value) Equal(v2 Value) bool { + return equalValue(v1, v2) +} + +func equalValue(x, y Value) bool { + eqType := x.typ == y.typ + switch x.typ { + case nilType: + return eqType + case boolType: + return eqType && x.Bool() == y.Bool() + case int32Type, int64Type: + return eqType && x.Int() == y.Int() + case uint32Type, uint64Type: + return eqType && x.Uint() == y.Uint() + case float32Type, float64Type: + return eqType && equalFloat(x.Float(), y.Float()) + case stringType: + return eqType && x.String() == y.String() + case bytesType: + return eqType && bytes.Equal(x.Bytes(), y.Bytes()) + case enumType: + return eqType && x.Enum() == y.Enum() + default: + switch x := x.Interface().(type) { + case Message: + y, ok := y.Interface().(Message) + return ok && equalMessage(x, y) + case List: + y, ok := y.Interface().(List) + return ok && equalList(x, y) + case Map: + y, ok := y.Interface().(Map) + return ok && equalMap(x, y) + default: + panic(fmt.Sprintf("unknown type: %T", x)) + } + } +} + +// equalFloat compares two floats, where NaNs are treated as equal. +func equalFloat(x, y float64) bool { + if math.IsNaN(x) || math.IsNaN(y) { + return math.IsNaN(x) && math.IsNaN(y) + } + return x == y +} + +// equalMessage compares two messages. +func equalMessage(mx, my Message) bool { + if mx.Descriptor() != my.Descriptor() { + return false + } + + nx := 0 + equal := true + mx.Range(func(fd FieldDescriptor, vx Value) bool { + nx++ + vy := my.Get(fd) + equal = my.Has(fd) && equalValue(vx, vy) + return equal + }) + if !equal { + return false + } + ny := 0 + my.Range(func(fd FieldDescriptor, vx Value) bool { + ny++ + return true + }) + if nx != ny { + return false + } + + return equalUnknown(mx.GetUnknown(), my.GetUnknown()) +} + +// equalList compares two lists. 
+func equalList(x, y List) bool { + if x.Len() != y.Len() { + return false + } + for i := x.Len() - 1; i >= 0; i-- { + if !equalValue(x.Get(i), y.Get(i)) { + return false + } + } + return true +} + +// equalMap compares two maps. +func equalMap(x, y Map) bool { + if x.Len() != y.Len() { + return false + } + equal := true + x.Range(func(k MapKey, vx Value) bool { + vy := y.Get(k) + equal = y.Has(k) && equalValue(vx, vy) + return equal + }) + return equal +} + +// equalUnknown compares unknown fields by direct comparison on the raw bytes +// of each individual field number. +func equalUnknown(x, y RawFields) bool { + if len(x) != len(y) { + return false + } + if bytes.Equal([]byte(x), []byte(y)) { + return true + } + + mx := make(map[FieldNumber]RawFields) + my := make(map[FieldNumber]RawFields) + for len(x) > 0 { + fnum, _, n := protowire.ConsumeField(x) + mx[fnum] = append(mx[fnum], x[:n]...) + x = x[n:] + } + for len(y) > 0 { + fnum, _, n := protowire.ConsumeField(y) + my[fnum] = append(my[fnum], y[:n]...) + y = y[n:] + } + return reflect.DeepEqual(mx, my) +} diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_union.go b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_union.go index ca8e28c5bc8..08e5ef73fc0 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_union.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoreflect/value_union.go @@ -54,11 +54,11 @@ import ( // // Append a 0 to a "repeated int32" field. // // Since the Value returned by Mutable is guaranteed to alias // // the source message, modifying the Value modifies the message. -// message.Mutable(fieldDesc).(List).Append(protoreflect.ValueOfInt32(0)) +// message.Mutable(fieldDesc).List().Append(protoreflect.ValueOfInt32(0)) // // // Assign [0] to a "repeated int32" field by creating a new Value, // // modifying it, and assigning it. 
-// list := message.NewField(fieldDesc).(List) +// list := message.NewField(fieldDesc).List() // list.Append(protoreflect.ValueOfInt32(0)) // message.Set(fieldDesc, list) // // ERROR: Since it is not defined whether Set aliases the source, diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoregistry/registry.go b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoregistry/registry.go index 58352a6978b..aeb55977446 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoregistry/registry.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/reflect/protoregistry/registry.go @@ -46,7 +46,7 @@ var conflictPolicy = "panic" // "panic" | "warn" | "ignore" // It is a variable so that the behavior is easily overridden in another file. var ignoreConflict = func(d protoreflect.Descriptor, err error) bool { const env = "GOLANG_PROTOBUF_REGISTRATION_CONFLICT" - const faq = "https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict" + const faq = "https://protobuf.dev/reference/go/faq#namespace-conflict" policy := conflictPolicy if v := os.Getenv(env); v != "" { policy = v diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/types/descriptorpb/descriptor.pb.go b/.ci/providerlint/vendor/google.golang.org/protobuf/types/descriptorpb/descriptor.pb.go index abe4ab5115b..dac5671db00 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/types/descriptorpb/descriptor.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/types/descriptorpb/descriptor.pb.go @@ -406,6 +406,152 @@ func (FieldOptions_JSType) EnumDescriptor() ([]byte, []int) { return file_google_protobuf_descriptor_proto_rawDescGZIP(), []int{12, 1} } +// If set to RETENTION_SOURCE, the option will be omitted from the binary. +// Note: as of January 2023, support for this is in progress and does not yet +// have an effect (b/264593489). 
+type FieldOptions_OptionRetention int32 + +const ( + FieldOptions_RETENTION_UNKNOWN FieldOptions_OptionRetention = 0 + FieldOptions_RETENTION_RUNTIME FieldOptions_OptionRetention = 1 + FieldOptions_RETENTION_SOURCE FieldOptions_OptionRetention = 2 +) + +// Enum value maps for FieldOptions_OptionRetention. +var ( + FieldOptions_OptionRetention_name = map[int32]string{ + 0: "RETENTION_UNKNOWN", + 1: "RETENTION_RUNTIME", + 2: "RETENTION_SOURCE", + } + FieldOptions_OptionRetention_value = map[string]int32{ + "RETENTION_UNKNOWN": 0, + "RETENTION_RUNTIME": 1, + "RETENTION_SOURCE": 2, + } +) + +func (x FieldOptions_OptionRetention) Enum() *FieldOptions_OptionRetention { + p := new(FieldOptions_OptionRetention) + *p = x + return p +} + +func (x FieldOptions_OptionRetention) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (FieldOptions_OptionRetention) Descriptor() protoreflect.EnumDescriptor { + return file_google_protobuf_descriptor_proto_enumTypes[5].Descriptor() +} + +func (FieldOptions_OptionRetention) Type() protoreflect.EnumType { + return &file_google_protobuf_descriptor_proto_enumTypes[5] +} + +func (x FieldOptions_OptionRetention) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Do not use. +func (x *FieldOptions_OptionRetention) UnmarshalJSON(b []byte) error { + num, err := protoimpl.X.UnmarshalJSONEnum(x.Descriptor(), b) + if err != nil { + return err + } + *x = FieldOptions_OptionRetention(num) + return nil +} + +// Deprecated: Use FieldOptions_OptionRetention.Descriptor instead. +func (FieldOptions_OptionRetention) EnumDescriptor() ([]byte, []int) { + return file_google_protobuf_descriptor_proto_rawDescGZIP(), []int{12, 2} +} + +// This indicates the types of entities that the field may apply to when used +// as an option. If it is unset, then the field may be freely used as an +// option on any kind of entity. 
Note: as of January 2023, support for this is +// in progress and does not yet have an effect (b/264593489). +type FieldOptions_OptionTargetType int32 + +const ( + FieldOptions_TARGET_TYPE_UNKNOWN FieldOptions_OptionTargetType = 0 + FieldOptions_TARGET_TYPE_FILE FieldOptions_OptionTargetType = 1 + FieldOptions_TARGET_TYPE_EXTENSION_RANGE FieldOptions_OptionTargetType = 2 + FieldOptions_TARGET_TYPE_MESSAGE FieldOptions_OptionTargetType = 3 + FieldOptions_TARGET_TYPE_FIELD FieldOptions_OptionTargetType = 4 + FieldOptions_TARGET_TYPE_ONEOF FieldOptions_OptionTargetType = 5 + FieldOptions_TARGET_TYPE_ENUM FieldOptions_OptionTargetType = 6 + FieldOptions_TARGET_TYPE_ENUM_ENTRY FieldOptions_OptionTargetType = 7 + FieldOptions_TARGET_TYPE_SERVICE FieldOptions_OptionTargetType = 8 + FieldOptions_TARGET_TYPE_METHOD FieldOptions_OptionTargetType = 9 +) + +// Enum value maps for FieldOptions_OptionTargetType. +var ( + FieldOptions_OptionTargetType_name = map[int32]string{ + 0: "TARGET_TYPE_UNKNOWN", + 1: "TARGET_TYPE_FILE", + 2: "TARGET_TYPE_EXTENSION_RANGE", + 3: "TARGET_TYPE_MESSAGE", + 4: "TARGET_TYPE_FIELD", + 5: "TARGET_TYPE_ONEOF", + 6: "TARGET_TYPE_ENUM", + 7: "TARGET_TYPE_ENUM_ENTRY", + 8: "TARGET_TYPE_SERVICE", + 9: "TARGET_TYPE_METHOD", + } + FieldOptions_OptionTargetType_value = map[string]int32{ + "TARGET_TYPE_UNKNOWN": 0, + "TARGET_TYPE_FILE": 1, + "TARGET_TYPE_EXTENSION_RANGE": 2, + "TARGET_TYPE_MESSAGE": 3, + "TARGET_TYPE_FIELD": 4, + "TARGET_TYPE_ONEOF": 5, + "TARGET_TYPE_ENUM": 6, + "TARGET_TYPE_ENUM_ENTRY": 7, + "TARGET_TYPE_SERVICE": 8, + "TARGET_TYPE_METHOD": 9, + } +) + +func (x FieldOptions_OptionTargetType) Enum() *FieldOptions_OptionTargetType { + p := new(FieldOptions_OptionTargetType) + *p = x + return p +} + +func (x FieldOptions_OptionTargetType) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (FieldOptions_OptionTargetType) Descriptor() protoreflect.EnumDescriptor { + return 
file_google_protobuf_descriptor_proto_enumTypes[6].Descriptor() +} + +func (FieldOptions_OptionTargetType) Type() protoreflect.EnumType { + return &file_google_protobuf_descriptor_proto_enumTypes[6] +} + +func (x FieldOptions_OptionTargetType) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Do not use. +func (x *FieldOptions_OptionTargetType) UnmarshalJSON(b []byte) error { + num, err := protoimpl.X.UnmarshalJSONEnum(x.Descriptor(), b) + if err != nil { + return err + } + *x = FieldOptions_OptionTargetType(num) + return nil +} + +// Deprecated: Use FieldOptions_OptionTargetType.Descriptor instead. +func (FieldOptions_OptionTargetType) EnumDescriptor() ([]byte, []int) { + return file_google_protobuf_descriptor_proto_rawDescGZIP(), []int{12, 3} +} + // Is this method side-effect-free (or safe in HTTP parlance), or idempotent, // or neither? HTTP based RPC implementation may choose GET verb for safe // methods, and PUT verb for idempotent methods instead of the default POST. @@ -442,11 +588,11 @@ func (x MethodOptions_IdempotencyLevel) String() string { } func (MethodOptions_IdempotencyLevel) Descriptor() protoreflect.EnumDescriptor { - return file_google_protobuf_descriptor_proto_enumTypes[5].Descriptor() + return file_google_protobuf_descriptor_proto_enumTypes[7].Descriptor() } func (MethodOptions_IdempotencyLevel) Type() protoreflect.EnumType { - return &file_google_protobuf_descriptor_proto_enumTypes[5] + return &file_google_protobuf_descriptor_proto_enumTypes[7] } func (x MethodOptions_IdempotencyLevel) Number() protoreflect.EnumNumber { @@ -468,6 +614,70 @@ func (MethodOptions_IdempotencyLevel) EnumDescriptor() ([]byte, []int) { return file_google_protobuf_descriptor_proto_rawDescGZIP(), []int{17, 0} } +// Represents the identified object's effect on the element in the original +// .proto file. +type GeneratedCodeInfo_Annotation_Semantic int32 + +const ( + // There is no effect or the effect is indescribable. 
+ GeneratedCodeInfo_Annotation_NONE GeneratedCodeInfo_Annotation_Semantic = 0 + // The element is set or otherwise mutated. + GeneratedCodeInfo_Annotation_SET GeneratedCodeInfo_Annotation_Semantic = 1 + // An alias to the element is returned. + GeneratedCodeInfo_Annotation_ALIAS GeneratedCodeInfo_Annotation_Semantic = 2 +) + +// Enum value maps for GeneratedCodeInfo_Annotation_Semantic. +var ( + GeneratedCodeInfo_Annotation_Semantic_name = map[int32]string{ + 0: "NONE", + 1: "SET", + 2: "ALIAS", + } + GeneratedCodeInfo_Annotation_Semantic_value = map[string]int32{ + "NONE": 0, + "SET": 1, + "ALIAS": 2, + } +) + +func (x GeneratedCodeInfo_Annotation_Semantic) Enum() *GeneratedCodeInfo_Annotation_Semantic { + p := new(GeneratedCodeInfo_Annotation_Semantic) + *p = x + return p +} + +func (x GeneratedCodeInfo_Annotation_Semantic) String() string { + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) +} + +func (GeneratedCodeInfo_Annotation_Semantic) Descriptor() protoreflect.EnumDescriptor { + return file_google_protobuf_descriptor_proto_enumTypes[8].Descriptor() +} + +func (GeneratedCodeInfo_Annotation_Semantic) Type() protoreflect.EnumType { + return &file_google_protobuf_descriptor_proto_enumTypes[8] +} + +func (x GeneratedCodeInfo_Annotation_Semantic) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Do not use. +func (x *GeneratedCodeInfo_Annotation_Semantic) UnmarshalJSON(b []byte) error { + num, err := protoimpl.X.UnmarshalJSONEnum(x.Descriptor(), b) + if err != nil { + return err + } + *x = GeneratedCodeInfo_Annotation_Semantic(num) + return nil +} + +// Deprecated: Use GeneratedCodeInfo_Annotation_Semantic.Descriptor instead. +func (GeneratedCodeInfo_Annotation_Semantic) EnumDescriptor() ([]byte, []int) { + return file_google_protobuf_descriptor_proto_rawDescGZIP(), []int{20, 0, 0} +} + // The protocol compiler can output a FileDescriptorSet containing the .proto // files it parses. 
type FileDescriptorSet struct { @@ -544,8 +754,12 @@ type FileDescriptorProto struct { // development tools. SourceCodeInfo *SourceCodeInfo `protobuf:"bytes,9,opt,name=source_code_info,json=sourceCodeInfo" json:"source_code_info,omitempty"` // The syntax of the proto file. - // The supported values are "proto2" and "proto3". + // The supported values are "proto2", "proto3", and "editions". + // + // If `edition` is present, this value must be "editions". Syntax *string `protobuf:"bytes,12,opt,name=syntax" json:"syntax,omitempty"` + // The edition of the proto file, which is an opaque string. + Edition *string `protobuf:"bytes,13,opt,name=edition" json:"edition,omitempty"` } func (x *FileDescriptorProto) Reset() { @@ -664,6 +878,13 @@ func (x *FileDescriptorProto) GetSyntax() string { return "" } +func (x *FileDescriptorProto) GetEdition() string { + if x != nil && x.Edition != nil { + return *x.Edition + } + return "" +} + // Describes a message type. type DescriptorProto struct { state protoimpl.MessageState @@ -860,7 +1081,6 @@ type FieldDescriptorProto struct { // For booleans, "true" or "false". // For strings, contains the default text contents (not escaped in any way). // For bytes, contains the C escaped value. All bytes >= 128 are escaped. - // TODO(kenton): Base-64 encode? DefaultValue *string `protobuf:"bytes,7,opt,name=default_value,json=defaultValue" json:"default_value,omitempty"` // If set, gives the index of a oneof in the containing type's oneof_decl // list. This field is a member of that oneof. @@ -1382,22 +1602,22 @@ type FileOptions struct { // inappropriate because proto packages do not normally start with backwards // domain names. JavaPackage *string `protobuf:"bytes,1,opt,name=java_package,json=javaPackage" json:"java_package,omitempty"` - // If set, all the classes from the .proto file are wrapped in a single - // outer class with the given name. 
This applies to both Proto1 - // (equivalent to the old "--one_java_file" option) and Proto2 (where - // a .proto always translates to a single class, but you may want to - // explicitly choose the class name). + // Controls the name of the wrapper Java class generated for the .proto file. + // That class will always contain the .proto file's getDescriptor() method as + // well as any top-level extensions defined in the .proto file. + // If java_multiple_files is disabled, then all the other classes from the + // .proto file will be nested inside the single wrapper outer class. JavaOuterClassname *string `protobuf:"bytes,8,opt,name=java_outer_classname,json=javaOuterClassname" json:"java_outer_classname,omitempty"` - // If set true, then the Java code generator will generate a separate .java + // If enabled, then the Java code generator will generate a separate .java // file for each top-level message, enum, and service defined in the .proto - // file. Thus, these types will *not* be nested inside the outer class - // named by java_outer_classname. However, the outer class will still be + // file. Thus, these types will *not* be nested inside the wrapper class + // named by java_outer_classname. However, the wrapper class will still be // generated to contain the file's getDescriptor() method as well as any // top-level extensions defined in the file. JavaMultipleFiles *bool `protobuf:"varint,10,opt,name=java_multiple_files,json=javaMultipleFiles,def=0" json:"java_multiple_files,omitempty"` // This option does nothing. // - // Deprecated: Do not use. + // Deprecated: Marked as deprecated in google/protobuf/descriptor.proto. 
JavaGenerateEqualsAndHash *bool `protobuf:"varint,20,opt,name=java_generate_equals_and_hash,json=javaGenerateEqualsAndHash" json:"java_generate_equals_and_hash,omitempty"` // If set true, then the Java2 code generator will generate code that // throws an exception whenever an attempt is made to assign a non-UTF-8 @@ -1531,7 +1751,7 @@ func (x *FileOptions) GetJavaMultipleFiles() bool { return Default_FileOptions_JavaMultipleFiles } -// Deprecated: Do not use. +// Deprecated: Marked as deprecated in google/protobuf/descriptor.proto. func (x *FileOptions) GetJavaGenerateEqualsAndHash() bool { if x != nil && x.JavaGenerateEqualsAndHash != nil { return *x.JavaGenerateEqualsAndHash @@ -1670,10 +1890,12 @@ type MessageOptions struct { // efficient, has fewer features, and is more complicated. // // The message must be defined exactly as follows: - // message Foo { - // option message_set_wire_format = true; - // extensions 4 to max; - // } + // + // message Foo { + // option message_set_wire_format = true; + // extensions 4 to max; + // } + // // Note that the message cannot have any defined fields; MessageSets only // have extensions. // @@ -1692,28 +1914,44 @@ type MessageOptions struct { // for the message, or it will be completely ignored; in the very least, // this is a formalization for deprecating messages. Deprecated *bool `protobuf:"varint,3,opt,name=deprecated,def=0" json:"deprecated,omitempty"` + // NOTE: Do not set the option in .proto files. Always use the maps syntax + // instead. The option should only be implicitly set by the proto compiler + // parser. + // // Whether the message is an automatically generated map entry type for the // maps field. 
// // For maps fields: - // map map_field = 1; + // + // map map_field = 1; + // // The parsed descriptor looks like: - // message MapFieldEntry { - // option map_entry = true; - // optional KeyType key = 1; - // optional ValueType value = 2; - // } - // repeated MapFieldEntry map_field = 1; + // + // message MapFieldEntry { + // option map_entry = true; + // optional KeyType key = 1; + // optional ValueType value = 2; + // } + // repeated MapFieldEntry map_field = 1; // // Implementations may choose not to generate the map_entry=true message, but // use a native map in the target language to hold the keys and values. // The reflection APIs in such implementations still need to work as // if the field is a repeated message field. - // - // NOTE: Do not set the option in .proto files. Always use the maps syntax - // instead. The option should only be implicitly set by the proto compiler - // parser. MapEntry *bool `protobuf:"varint,7,opt,name=map_entry,json=mapEntry" json:"map_entry,omitempty"` + // Enable the legacy handling of JSON field name conflicts. This lowercases + // and strips underscored from the fields before comparison in proto3 only. + // The new behavior takes `json_name` into account and applies to proto2 as + // well. + // + // This should only be used as a temporary measure against broken builds due + // to the change in behavior for JSON field name conflicts. + // + // TODO(b/261750190) This is legacy behavior we plan to remove once downstream + // teams have had time to migrate. + // + // Deprecated: Marked as deprecated in google/protobuf/descriptor.proto. + DeprecatedLegacyJsonFieldConflicts *bool `protobuf:"varint,11,opt,name=deprecated_legacy_json_field_conflicts,json=deprecatedLegacyJsonFieldConflicts" json:"deprecated_legacy_json_field_conflicts,omitempty"` // The parser stores options it doesn't recognize here. See above. 
UninterpretedOption []*UninterpretedOption `protobuf:"bytes,999,rep,name=uninterpreted_option,json=uninterpretedOption" json:"uninterpreted_option,omitempty"` } @@ -1785,6 +2023,14 @@ func (x *MessageOptions) GetMapEntry() bool { return false } +// Deprecated: Marked as deprecated in google/protobuf/descriptor.proto. +func (x *MessageOptions) GetDeprecatedLegacyJsonFieldConflicts() bool { + if x != nil && x.DeprecatedLegacyJsonFieldConflicts != nil { + return *x.DeprecatedLegacyJsonFieldConflicts + } + return false +} + func (x *MessageOptions) GetUninterpretedOption() []*UninterpretedOption { if x != nil { return x.UninterpretedOption @@ -1838,7 +2084,6 @@ type FieldOptions struct { // call from multiple threads concurrently, while non-const methods continue // to require exclusive access. // - // // Note that implementations may choose not to check required fields within // a lazy sub-message. That is, calling IsInitialized() on the outer message // may return true even if the inner message has missing required fields. @@ -1849,7 +2094,14 @@ type FieldOptions struct { // implementation must either *always* check its required fields, or *never* // check its required fields, regardless of whether or not the message has // been parsed. + // + // As of May 2022, lazy verifies the contents of the byte stream during + // parsing. An invalid byte stream will cause the overall parsing to fail. Lazy *bool `protobuf:"varint,5,opt,name=lazy,def=0" json:"lazy,omitempty"` + // unverified_lazy does no correctness checks on the byte stream. This should + // only be used where lazy with verification is prohibitive for performance + // reasons. + UnverifiedLazy *bool `protobuf:"varint,15,opt,name=unverified_lazy,json=unverifiedLazy,def=0" json:"unverified_lazy,omitempty"` // Is this field deprecated? 
// Depending on the target platform, this can emit Deprecated annotations // for accessors, or it will be completely ignored; in the very least, this @@ -1857,17 +2109,24 @@ type FieldOptions struct { Deprecated *bool `protobuf:"varint,3,opt,name=deprecated,def=0" json:"deprecated,omitempty"` // For Google-internal migration only. Do not use. Weak *bool `protobuf:"varint,10,opt,name=weak,def=0" json:"weak,omitempty"` + // Indicate that the field value should not be printed out when using debug + // formats, e.g. when the field contains sensitive credentials. + DebugRedact *bool `protobuf:"varint,16,opt,name=debug_redact,json=debugRedact,def=0" json:"debug_redact,omitempty"` + Retention *FieldOptions_OptionRetention `protobuf:"varint,17,opt,name=retention,enum=google.protobuf.FieldOptions_OptionRetention" json:"retention,omitempty"` + Target *FieldOptions_OptionTargetType `protobuf:"varint,18,opt,name=target,enum=google.protobuf.FieldOptions_OptionTargetType" json:"target,omitempty"` // The parser stores options it doesn't recognize here. See above. UninterpretedOption []*UninterpretedOption `protobuf:"bytes,999,rep,name=uninterpreted_option,json=uninterpretedOption" json:"uninterpreted_option,omitempty"` } // Default values for FieldOptions fields. 
const ( - Default_FieldOptions_Ctype = FieldOptions_STRING - Default_FieldOptions_Jstype = FieldOptions_JS_NORMAL - Default_FieldOptions_Lazy = bool(false) - Default_FieldOptions_Deprecated = bool(false) - Default_FieldOptions_Weak = bool(false) + Default_FieldOptions_Ctype = FieldOptions_STRING + Default_FieldOptions_Jstype = FieldOptions_JS_NORMAL + Default_FieldOptions_Lazy = bool(false) + Default_FieldOptions_UnverifiedLazy = bool(false) + Default_FieldOptions_Deprecated = bool(false) + Default_FieldOptions_Weak = bool(false) + Default_FieldOptions_DebugRedact = bool(false) ) func (x *FieldOptions) Reset() { @@ -1930,6 +2189,13 @@ func (x *FieldOptions) GetLazy() bool { return Default_FieldOptions_Lazy } +func (x *FieldOptions) GetUnverifiedLazy() bool { + if x != nil && x.UnverifiedLazy != nil { + return *x.UnverifiedLazy + } + return Default_FieldOptions_UnverifiedLazy +} + func (x *FieldOptions) GetDeprecated() bool { if x != nil && x.Deprecated != nil { return *x.Deprecated @@ -1944,6 +2210,27 @@ func (x *FieldOptions) GetWeak() bool { return Default_FieldOptions_Weak } +func (x *FieldOptions) GetDebugRedact() bool { + if x != nil && x.DebugRedact != nil { + return *x.DebugRedact + } + return Default_FieldOptions_DebugRedact +} + +func (x *FieldOptions) GetRetention() FieldOptions_OptionRetention { + if x != nil && x.Retention != nil { + return *x.Retention + } + return FieldOptions_RETENTION_UNKNOWN +} + +func (x *FieldOptions) GetTarget() FieldOptions_OptionTargetType { + if x != nil && x.Target != nil { + return *x.Target + } + return FieldOptions_TARGET_TYPE_UNKNOWN +} + func (x *FieldOptions) GetUninterpretedOption() []*UninterpretedOption { if x != nil { return x.UninterpretedOption @@ -2014,6 +2301,15 @@ type EnumOptions struct { // for the enum, or it will be completely ignored; in the very least, this // is a formalization for deprecating enums. 
Deprecated *bool `protobuf:"varint,3,opt,name=deprecated,def=0" json:"deprecated,omitempty"` + // Enable the legacy handling of JSON field name conflicts. This lowercases + // and strips underscored from the fields before comparison in proto3 only. + // The new behavior takes `json_name` into account and applies to proto2 as + // well. + // TODO(b/261750190) Remove this legacy behavior once downstream teams have + // had time to migrate. + // + // Deprecated: Marked as deprecated in google/protobuf/descriptor.proto. + DeprecatedLegacyJsonFieldConflicts *bool `protobuf:"varint,6,opt,name=deprecated_legacy_json_field_conflicts,json=deprecatedLegacyJsonFieldConflicts" json:"deprecated_legacy_json_field_conflicts,omitempty"` // The parser stores options it doesn't recognize here. See above. UninterpretedOption []*UninterpretedOption `protobuf:"bytes,999,rep,name=uninterpreted_option,json=uninterpretedOption" json:"uninterpreted_option,omitempty"` } @@ -2069,6 +2365,14 @@ func (x *EnumOptions) GetDeprecated() bool { return Default_EnumOptions_Deprecated } +// Deprecated: Marked as deprecated in google/protobuf/descriptor.proto. +func (x *EnumOptions) GetDeprecatedLegacyJsonFieldConflicts() bool { + if x != nil && x.DeprecatedLegacyJsonFieldConflicts != nil { + return *x.DeprecatedLegacyJsonFieldConflicts + } + return false +} + func (x *EnumOptions) GetUninterpretedOption() []*UninterpretedOption { if x != nil { return x.UninterpretedOption @@ -2399,43 +2703,48 @@ type SourceCodeInfo struct { // tools. // // For example, say we have a file like: - // message Foo { - // optional string foo = 1; - // } + // + // message Foo { + // optional string foo = 1; + // } + // // Let's look at just the field definition: - // optional string foo = 1; - // ^ ^^ ^^ ^ ^^^ - // a bc de f ghi + // + // optional string foo = 1; + // ^ ^^ ^^ ^ ^^^ + // a bc de f ghi + // // We have the following locations: - // span path represents - // [a,i) [ 4, 0, 2, 0 ] The whole field definition. 
- // [a,b) [ 4, 0, 2, 0, 4 ] The label (optional). - // [c,d) [ 4, 0, 2, 0, 5 ] The type (string). - // [e,f) [ 4, 0, 2, 0, 1 ] The name (foo). - // [g,h) [ 4, 0, 2, 0, 3 ] The number (1). + // + // span path represents + // [a,i) [ 4, 0, 2, 0 ] The whole field definition. + // [a,b) [ 4, 0, 2, 0, 4 ] The label (optional). + // [c,d) [ 4, 0, 2, 0, 5 ] The type (string). + // [e,f) [ 4, 0, 2, 0, 1 ] The name (foo). + // [g,h) [ 4, 0, 2, 0, 3 ] The number (1). // // Notes: - // - A location may refer to a repeated field itself (i.e. not to any - // particular index within it). This is used whenever a set of elements are - // logically enclosed in a single code segment. For example, an entire - // extend block (possibly containing multiple extension definitions) will - // have an outer location whose path refers to the "extensions" repeated - // field without an index. - // - Multiple locations may have the same path. This happens when a single - // logical declaration is spread out across multiple places. The most - // obvious example is the "extend" block again -- there may be multiple - // extend blocks in the same scope, each of which will have the same path. - // - A location's span is not always a subset of its parent's span. For - // example, the "extendee" of an extension declaration appears at the - // beginning of the "extend" block and is shared by all extensions within - // the block. - // - Just because a location's span is a subset of some other location's span - // does not mean that it is a descendant. For example, a "group" defines - // both a type and a field in a single declaration. Thus, the locations - // corresponding to the type and field and their components will overlap. - // - Code which tries to interpret locations should probably be designed to - // ignore those that it doesn't understand, as more types of locations could - // be recorded in the future. + // - A location may refer to a repeated field itself (i.e. 
not to any + // particular index within it). This is used whenever a set of elements are + // logically enclosed in a single code segment. For example, an entire + // extend block (possibly containing multiple extension definitions) will + // have an outer location whose path refers to the "extensions" repeated + // field without an index. + // - Multiple locations may have the same path. This happens when a single + // logical declaration is spread out across multiple places. The most + // obvious example is the "extend" block again -- there may be multiple + // extend blocks in the same scope, each of which will have the same path. + // - A location's span is not always a subset of its parent's span. For + // example, the "extendee" of an extension declaration appears at the + // beginning of the "extend" block and is shared by all extensions within + // the block. + // - Just because a location's span is a subset of some other location's span + // does not mean that it is a descendant. For example, a "group" defines + // both a type and a field in a single declaration. Thus, the locations + // corresponding to the type and field and their components will overlap. + // - Code which tries to interpret locations should probably be designed to + // ignore those that it doesn't understand, as more types of locations could + // be recorded in the future. Location []*SourceCodeInfo_Location `protobuf:"bytes,1,rep,name=location" json:"location,omitempty"` } @@ -2715,8 +3024,8 @@ func (x *EnumDescriptorProto_EnumReservedRange) GetEnd() int32 { // The name of the uninterpreted option. Each string represents a segment in // a dot-separated name. is_extension is true iff a segment represents an // extension (denoted with parentheses in options specs in .proto files). -// E.g.,{ ["foo", false], ["bar.baz", true], ["qux", false] } represents -// "foo.(bar.baz).qux". +// E.g.,{ ["foo", false], ["bar.baz", true], ["moo", false] } represents +// "foo.(bar.baz).moo". 
type UninterpretedOption_NamePart struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -2781,23 +3090,34 @@ type SourceCodeInfo_Location struct { // location. // // Each element is a field number or an index. They form a path from - // the root FileDescriptorProto to the place where the definition. For - // example, this path: - // [ 4, 3, 2, 7, 1 ] + // the root FileDescriptorProto to the place where the definition occurs. + // For example, this path: + // + // [ 4, 3, 2, 7, 1 ] + // // refers to: - // file.message_type(3) // 4, 3 - // .field(7) // 2, 7 - // .name() // 1 + // + // file.message_type(3) // 4, 3 + // .field(7) // 2, 7 + // .name() // 1 + // // This is because FileDescriptorProto.message_type has field number 4: - // repeated DescriptorProto message_type = 4; + // + // repeated DescriptorProto message_type = 4; + // // and DescriptorProto.field has field number 2: - // repeated FieldDescriptorProto field = 2; + // + // repeated FieldDescriptorProto field = 2; + // // and FieldDescriptorProto.name has field number 1: - // optional string name = 1; + // + // optional string name = 1; // // Thus, the above path gives the location of a field name. If we removed // the last element: - // [ 4, 3, 2, 7 ] + // + // [ 4, 3, 2, 7 ] + // // this path refers to the whole field declaration (from the beginning // of the label to the terminating semicolon). Path []int32 `protobuf:"varint,1,rep,packed,name=path" json:"path,omitempty"` @@ -2826,34 +3146,34 @@ type SourceCodeInfo_Location struct { // // Examples: // - // optional int32 foo = 1; // Comment attached to foo. - // // Comment attached to bar. - // optional int32 bar = 2; + // optional int32 foo = 1; // Comment attached to foo. + // // Comment attached to bar. + // optional int32 bar = 2; // - // optional string baz = 3; - // // Comment attached to baz. - // // Another line attached to baz. + // optional string baz = 3; + // // Comment attached to baz. + // // Another line attached to baz. 
// - // // Comment attached to qux. - // // - // // Another line attached to qux. - // optional double qux = 4; + // // Comment attached to moo. + // // + // // Another line attached to moo. + // optional double moo = 4; // - // // Detached comment for corge. This is not leading or trailing comments - // // to qux or corge because there are blank lines separating it from - // // both. + // // Detached comment for corge. This is not leading or trailing comments + // // to moo or corge because there are blank lines separating it from + // // both. // - // // Detached comment for corge paragraph 2. + // // Detached comment for corge paragraph 2. // - // optional string corge = 5; - // /* Block comment attached - // * to corge. Leading asterisks - // * will be removed. */ - // /* Block comment attached to - // * grault. */ - // optional int32 grault = 6; + // optional string corge = 5; + // /* Block comment attached + // * to corge. Leading asterisks + // * will be removed. */ + // /* Block comment attached to + // * grault. */ + // optional int32 grault = 6; // - // // ignored detached comments. + // // ignored detached comments. LeadingComments *string `protobuf:"bytes,3,opt,name=leading_comments,json=leadingComments" json:"leading_comments,omitempty"` TrailingComments *string `protobuf:"bytes,4,opt,name=trailing_comments,json=trailingComments" json:"trailing_comments,omitempty"` LeadingDetachedComments []string `protobuf:"bytes,6,rep,name=leading_detached_comments,json=leadingDetachedComments" json:"leading_detached_comments,omitempty"` @@ -2940,9 +3260,10 @@ type GeneratedCodeInfo_Annotation struct { // that relates to the identified object. Begin *int32 `protobuf:"varint,3,opt,name=begin" json:"begin,omitempty"` // Identifies the ending offset in bytes in the generated code that - // relates to the identified offset. The end offset should be one past + // relates to the identified object. 
The end offset should be one past // the last relevant byte (so the length of the text = end - begin). - End *int32 `protobuf:"varint,4,opt,name=end" json:"end,omitempty"` + End *int32 `protobuf:"varint,4,opt,name=end" json:"end,omitempty"` + Semantic *GeneratedCodeInfo_Annotation_Semantic `protobuf:"varint,5,opt,name=semantic,enum=google.protobuf.GeneratedCodeInfo_Annotation_Semantic" json:"semantic,omitempty"` } func (x *GeneratedCodeInfo_Annotation) Reset() { @@ -3005,6 +3326,13 @@ func (x *GeneratedCodeInfo_Annotation) GetEnd() int32 { return 0 } +func (x *GeneratedCodeInfo_Annotation) GetSemantic() GeneratedCodeInfo_Annotation_Semantic { + if x != nil && x.Semantic != nil { + return *x.Semantic + } + return GeneratedCodeInfo_Annotation_NONE +} + var File_google_protobuf_descriptor_proto protoreflect.FileDescriptor var file_google_protobuf_descriptor_proto_rawDesc = []byte{ @@ -3016,7 +3344,7 @@ var file_google_protobuf_descriptor_proto_rawDesc = []byte{ 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x04, 0x66, 0x69, - 0x6c, 0x65, 0x22, 0xe4, 0x04, 0x0a, 0x13, 0x46, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, + 0x6c, 0x65, 0x22, 0xfe, 0x04, 0x0a, 0x13, 0x46, 0x69, 0x6c, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, @@ -3054,330 +3382,391 @@ var file_google_protobuf_descriptor_proto_rawDesc = []byte{ 0x75, 0x66, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x64, 
0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x18, 0x0c, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x06, 0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x22, 0xb9, 0x06, 0x0a, 0x0f, 0x44, 0x65, - 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, - 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, - 0x65, 0x12, 0x3b, 0x0a, 0x05, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, - 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, - 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, - 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x05, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x12, 0x43, - 0x0a, 0x09, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x06, 0x20, 0x03, 0x28, - 0x0b, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, - 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, - 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x09, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, - 0x69, 0x6f, 0x6e, 0x12, 0x41, 0x0a, 0x0b, 0x6e, 0x65, 0x73, 0x74, 0x65, 0x64, 0x5f, 0x74, 0x79, - 0x70, 0x65, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x20, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, - 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x65, 0x73, 0x63, 0x72, - 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x0a, 0x6e, 0x65, 0x73, 0x74, - 0x65, 0x64, 0x54, 0x79, 0x70, 0x65, 0x12, 0x41, 0x0a, 0x09, 0x65, 0x6e, 0x75, 0x6d, 0x5f, 0x74, - 0x79, 0x70, 0x65, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, - 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, - 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, - 0x08, 0x65, 
0x6e, 0x75, 0x6d, 0x54, 0x79, 0x70, 0x65, 0x12, 0x58, 0x0a, 0x0f, 0x65, 0x78, 0x74, - 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x05, 0x20, 0x03, - 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, + 0x09, 0x52, 0x06, 0x73, 0x79, 0x6e, 0x74, 0x61, 0x78, 0x12, 0x18, 0x0a, 0x07, 0x65, 0x64, 0x69, + 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x65, 0x64, 0x69, 0x74, + 0x69, 0x6f, 0x6e, 0x22, 0xb9, 0x06, 0x0a, 0x0f, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, + 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, + 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x3b, 0x0a, 0x05, 0x66, + 0x69, 0x65, 0x6c, 0x64, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, + 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, + 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, + 0x6f, 0x52, 0x05, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x12, 0x43, 0x0a, 0x09, 0x65, 0x78, 0x74, 0x65, + 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x06, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x67, 0x6f, + 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, + 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, + 0x74, 0x6f, 0x52, 0x09, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x41, 0x0a, + 0x0b, 0x6e, 0x65, 0x73, 0x74, 0x65, 0x64, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x03, + 0x28, 0x0b, 0x32, 0x20, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, - 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, - 0x6e, 0x67, 0x65, 0x52, 0x0e, 0x65, 0x78, 0x74, 0x65, 
0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, - 0x6e, 0x67, 0x65, 0x12, 0x44, 0x0a, 0x0a, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x5f, 0x64, 0x65, 0x63, - 0x6c, 0x18, 0x08, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4f, 0x6e, 0x65, 0x6f, 0x66, 0x44, - 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x09, - 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x44, 0x65, 0x63, 0x6c, 0x12, 0x39, 0x0a, 0x07, 0x6f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, - 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x73, - 0x73, 0x61, 0x67, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x55, 0x0a, 0x0e, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, - 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x67, + 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x0a, 0x6e, 0x65, 0x73, 0x74, 0x65, 0x64, 0x54, 0x79, 0x70, 0x65, + 0x12, 0x41, 0x0a, 0x09, 0x65, 0x6e, 0x75, 0x6d, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x04, 0x20, + 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, + 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, + 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x08, 0x65, 0x6e, 0x75, 0x6d, 0x54, + 0x79, 0x70, 0x65, 0x12, 0x58, 0x0a, 0x0f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, + 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, - 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x52, - 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0d, 0x72, 0x65, 
- 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x72, - 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x0a, 0x20, 0x03, - 0x28, 0x09, 0x52, 0x0c, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x4e, 0x61, 0x6d, 0x65, - 0x1a, 0x7a, 0x0a, 0x0e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, 0x6e, - 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x05, 0x52, 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x65, 0x6e, 0x64, 0x18, - 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x65, 0x6e, 0x64, 0x12, 0x40, 0x0a, 0x07, 0x6f, 0x70, - 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x26, 0x2e, 0x67, 0x6f, - 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x78, - 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x4f, 0x70, 0x74, 0x69, - 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x37, 0x0a, 0x0d, - 0x52, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x14, 0x0a, - 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x73, 0x74, - 0x61, 0x72, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x65, 0x6e, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, - 0x52, 0x03, 0x65, 0x6e, 0x64, 0x22, 0x7c, 0x0a, 0x15, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, - 0x6f, 0x6e, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x58, - 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, - 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, - 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, - 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 
0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, - 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, - 0x80, 0x80, 0x02, 0x22, 0xc1, 0x06, 0x0a, 0x14, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, - 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, - 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, - 0x12, 0x16, 0x0a, 0x06, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05, - 0x52, 0x06, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x41, 0x0a, 0x05, 0x6c, 0x61, 0x62, 0x65, - 0x6c, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x2b, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, - 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x4c, - 0x61, 0x62, 0x65, 0x6c, 0x52, 0x05, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x3e, 0x0a, 0x04, 0x74, - 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x2a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, - 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, - 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, - 0x2e, 0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x74, - 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, - 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x65, 0x78, 0x74, 0x65, - 0x6e, 0x64, 0x65, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x65, 0x78, 0x74, 0x65, - 0x6e, 0x64, 0x65, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x5f, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x64, 0x65, 0x66, - 0x61, 0x75, 0x6c, 0x74, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x1f, 0x0a, 
0x0b, 0x6f, 0x6e, 0x65, - 0x6f, 0x66, 0x5f, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x18, 0x09, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0a, - 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x49, 0x6e, 0x64, 0x65, 0x78, 0x12, 0x1b, 0x0a, 0x09, 0x6a, 0x73, - 0x6f, 0x6e, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x6a, - 0x73, 0x6f, 0x6e, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x37, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, - 0x6e, 0x73, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, - 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, - 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, - 0x12, 0x27, 0x0a, 0x0f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, - 0x6e, 0x61, 0x6c, 0x18, 0x11, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x74, 0x6f, - 0x33, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x22, 0xb6, 0x02, 0x0a, 0x04, 0x54, 0x79, - 0x70, 0x65, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x44, 0x4f, 0x55, 0x42, 0x4c, - 0x45, 0x10, 0x01, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x46, 0x4c, 0x4f, 0x41, - 0x54, 0x10, 0x02, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x49, 0x4e, 0x54, 0x36, - 0x34, 0x10, 0x03, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x49, 0x4e, 0x54, - 0x36, 0x34, 0x10, 0x04, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x49, 0x4e, 0x54, - 0x33, 0x32, 0x10, 0x05, 0x12, 0x10, 0x0a, 0x0c, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x46, 0x49, 0x58, - 0x45, 0x44, 0x36, 0x34, 0x10, 0x06, 0x12, 0x10, 0x0a, 0x0c, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x46, - 0x49, 0x58, 0x45, 0x44, 0x33, 0x32, 0x10, 0x07, 0x12, 0x0d, 0x0a, 0x09, 0x54, 0x59, 0x50, 0x45, - 0x5f, 0x42, 0x4f, 0x4f, 0x4c, 0x10, 0x08, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, - 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x10, 0x09, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, - 0x5f, 0x47, 
0x52, 0x4f, 0x55, 0x50, 0x10, 0x0a, 0x12, 0x10, 0x0a, 0x0c, 0x54, 0x59, 0x50, 0x45, - 0x5f, 0x4d, 0x45, 0x53, 0x53, 0x41, 0x47, 0x45, 0x10, 0x0b, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, - 0x50, 0x45, 0x5f, 0x42, 0x59, 0x54, 0x45, 0x53, 0x10, 0x0c, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, - 0x50, 0x45, 0x5f, 0x55, 0x49, 0x4e, 0x54, 0x33, 0x32, 0x10, 0x0d, 0x12, 0x0d, 0x0a, 0x09, 0x54, - 0x59, 0x50, 0x45, 0x5f, 0x45, 0x4e, 0x55, 0x4d, 0x10, 0x0e, 0x12, 0x11, 0x0a, 0x0d, 0x54, 0x59, - 0x50, 0x45, 0x5f, 0x53, 0x46, 0x49, 0x58, 0x45, 0x44, 0x33, 0x32, 0x10, 0x0f, 0x12, 0x11, 0x0a, - 0x0d, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x46, 0x49, 0x58, 0x45, 0x44, 0x36, 0x34, 0x10, 0x10, - 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x49, 0x4e, 0x54, 0x33, 0x32, 0x10, - 0x11, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x49, 0x4e, 0x54, 0x36, 0x34, - 0x10, 0x12, 0x22, 0x43, 0x0a, 0x05, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x12, 0x0a, 0x0e, 0x4c, - 0x41, 0x42, 0x45, 0x4c, 0x5f, 0x4f, 0x50, 0x54, 0x49, 0x4f, 0x4e, 0x41, 0x4c, 0x10, 0x01, 0x12, - 0x12, 0x0a, 0x0e, 0x4c, 0x41, 0x42, 0x45, 0x4c, 0x5f, 0x52, 0x45, 0x51, 0x55, 0x49, 0x52, 0x45, - 0x44, 0x10, 0x02, 0x12, 0x12, 0x0a, 0x0e, 0x4c, 0x41, 0x42, 0x45, 0x4c, 0x5f, 0x52, 0x45, 0x50, - 0x45, 0x41, 0x54, 0x45, 0x44, 0x10, 0x03, 0x22, 0x63, 0x0a, 0x14, 0x4f, 0x6e, 0x65, 0x6f, 0x66, - 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, - 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, - 0x61, 0x6d, 0x65, 0x12, 0x37, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x02, - 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, - 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4f, 0x6e, 0x65, 0x6f, 0x66, 0x4f, 0x70, 0x74, 0x69, - 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0xe3, 0x02, 0x0a, - 0x13, 0x45, 0x6e, 0x75, 0x6d, 0x44, 0x65, 0x73, 
0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, - 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x3f, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, - 0x6c, 0x75, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, - 0x74, 0x6f, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x36, 0x0a, 0x07, 0x6f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x67, 0x6f, 0x6f, - 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, - 0x6d, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, - 0x73, 0x12, 0x5d, 0x0a, 0x0e, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x5f, 0x72, 0x61, - 0x6e, 0x67, 0x65, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x36, 0x2e, 0x67, 0x6f, 0x6f, 0x67, - 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, - 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, - 0x45, 0x6e, 0x75, 0x6d, 0x52, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, - 0x65, 0x52, 0x0d, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, - 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x5f, 0x6e, 0x61, 0x6d, - 0x65, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0c, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, - 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x1a, 0x3b, 0x0a, 0x11, 0x45, 0x6e, 0x75, 0x6d, 0x52, 0x65, 0x73, - 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x73, 0x74, - 0x61, 0x72, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x73, 0x74, 0x61, 
0x72, 0x74, - 0x12, 0x10, 0x0a, 0x03, 0x65, 0x6e, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x65, - 0x6e, 0x64, 0x22, 0x83, 0x01, 0x0a, 0x18, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, 0x75, 0x65, - 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, - 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, - 0x61, 0x6d, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x02, 0x20, - 0x01, 0x28, 0x05, 0x52, 0x06, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x3b, 0x0a, 0x07, 0x6f, - 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x67, - 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, - 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, - 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0xa7, 0x01, 0x0a, 0x16, 0x53, 0x65, 0x72, - 0x76, 0x69, 0x63, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, - 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x3e, 0x0a, 0x06, 0x6d, 0x65, 0x74, 0x68, 0x6f, - 0x64, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x26, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, - 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, - 0x06, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x12, 0x39, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, - 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, - 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, - 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, - 0x6e, 0x73, 0x22, 0x89, 
0x02, 0x0a, 0x15, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x44, 0x65, 0x73, - 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, - 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, - 0x12, 0x1d, 0x0a, 0x0a, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x02, - 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x54, 0x79, 0x70, 0x65, 0x12, - 0x1f, 0x0a, 0x0b, 0x6f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, - 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x6f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x54, 0x79, 0x70, 0x65, - 0x12, 0x38, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, - 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, - 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, - 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x30, 0x0a, 0x10, 0x63, 0x6c, - 0x69, 0x65, 0x6e, 0x74, 0x5f, 0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x18, 0x05, - 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0f, 0x63, 0x6c, 0x69, - 0x65, 0x6e, 0x74, 0x53, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x12, 0x30, 0x0a, 0x10, - 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x5f, 0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, - 0x18, 0x06, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0f, 0x73, - 0x65, 0x72, 0x76, 0x65, 0x72, 0x53, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x22, 0x91, - 0x09, 0x0a, 0x0b, 0x46, 0x69, 0x6c, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x21, - 0x0a, 0x0c, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, 0x01, - 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6a, 0x61, 0x76, 0x61, 0x50, 0x61, 0x63, 0x6b, 0x61, 0x67, - 0x65, 0x12, 0x30, 0x0a, 0x14, 0x6a, 0x61, 0x76, 0x61, 0x5f, 
0x6f, 0x75, 0x74, 0x65, 0x72, 0x5f, - 0x63, 0x6c, 0x61, 0x73, 0x73, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x12, 0x6a, 0x61, 0x76, 0x61, 0x4f, 0x75, 0x74, 0x65, 0x72, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x6e, - 0x61, 0x6d, 0x65, 0x12, 0x35, 0x0a, 0x13, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x6d, 0x75, 0x6c, 0x74, - 0x69, 0x70, 0x6c, 0x65, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x73, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x08, - 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x11, 0x6a, 0x61, 0x76, 0x61, 0x4d, 0x75, 0x6c, - 0x74, 0x69, 0x70, 0x6c, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x73, 0x12, 0x44, 0x0a, 0x1d, 0x6a, 0x61, - 0x76, 0x61, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x5f, 0x65, 0x71, 0x75, 0x61, - 0x6c, 0x73, 0x5f, 0x61, 0x6e, 0x64, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0x14, 0x20, 0x01, 0x28, - 0x08, 0x42, 0x02, 0x18, 0x01, 0x52, 0x19, 0x6a, 0x61, 0x76, 0x61, 0x47, 0x65, 0x6e, 0x65, 0x72, - 0x61, 0x74, 0x65, 0x45, 0x71, 0x75, 0x61, 0x6c, 0x73, 0x41, 0x6e, 0x64, 0x48, 0x61, 0x73, 0x68, - 0x12, 0x3a, 0x0a, 0x16, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x5f, - 0x63, 0x68, 0x65, 0x63, 0x6b, 0x5f, 0x75, 0x74, 0x66, 0x38, 0x18, 0x1b, 0x20, 0x01, 0x28, 0x08, - 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x13, 0x6a, 0x61, 0x76, 0x61, 0x53, 0x74, 0x72, - 0x69, 0x6e, 0x67, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x55, 0x74, 0x66, 0x38, 0x12, 0x53, 0x0a, 0x0c, - 0x6f, 0x70, 0x74, 0x69, 0x6d, 0x69, 0x7a, 0x65, 0x5f, 0x66, 0x6f, 0x72, 0x18, 0x09, 0x20, 0x01, - 0x28, 0x0e, 0x32, 0x29, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, - 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x6c, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, - 0x2e, 0x4f, 0x70, 0x74, 0x69, 0x6d, 0x69, 0x7a, 0x65, 0x4d, 0x6f, 0x64, 0x65, 0x3a, 0x05, 0x53, - 0x50, 0x45, 0x45, 0x44, 0x52, 0x0b, 0x6f, 0x70, 0x74, 0x69, 0x6d, 0x69, 0x7a, 0x65, 0x46, 0x6f, - 0x72, 0x12, 0x1d, 0x0a, 0x0a, 0x67, 0x6f, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, - 
0x0b, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x67, 0x6f, 0x50, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, - 0x12, 0x35, 0x0a, 0x13, 0x63, 0x63, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x5f, 0x73, - 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x10, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, - 0x61, 0x6c, 0x73, 0x65, 0x52, 0x11, 0x63, 0x63, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x53, - 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x12, 0x39, 0x0a, 0x15, 0x6a, 0x61, 0x76, 0x61, 0x5f, - 0x67, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, - 0x18, 0x11, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x13, 0x6a, - 0x61, 0x76, 0x61, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, - 0x65, 0x73, 0x12, 0x35, 0x0a, 0x13, 0x70, 0x79, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, - 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x12, 0x20, 0x01, 0x28, 0x08, 0x3a, - 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x11, 0x70, 0x79, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x69, - 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x12, 0x37, 0x0a, 0x14, 0x70, 0x68, 0x70, + 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x45, + 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0e, 0x65, + 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x44, 0x0a, + 0x0a, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x5f, 0x64, 0x65, 0x63, 0x6c, 0x18, 0x08, 0x20, 0x03, 0x28, + 0x0b, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x4f, 0x6e, 0x65, 0x6f, 0x66, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, + 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x09, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x44, + 0x65, 0x63, 0x6c, 0x12, 0x39, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x07, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 
0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x4f, 0x70, + 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x55, + 0x0a, 0x0e, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, + 0x18, 0x09, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, + 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x52, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, + 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0d, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, + 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, + 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x0a, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0c, 0x72, 0x65, + 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x1a, 0x7a, 0x0a, 0x0e, 0x45, 0x78, + 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, + 0x73, 0x74, 0x61, 0x72, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x73, 0x74, 0x61, + 0x72, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x65, 0x6e, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, + 0x03, 0x65, 0x6e, 0x64, 0x12, 0x40, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, + 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x26, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, + 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, + 0x6e, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, + 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x37, 0x0a, 0x0d, 0x52, 0x65, 0x73, 0x65, 0x72, 0x76, + 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, + 0x18, 0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x73, 0x74, 0x61, 0x72, 
0x74, 0x12, 0x10, 0x0a, + 0x03, 0x65, 0x6e, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x65, 0x6e, 0x64, 0x22, + 0x7c, 0x0a, 0x15, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x52, 0x61, 0x6e, 0x67, + 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, + 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, + 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, + 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, + 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, + 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, + 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x22, 0xc1, 0x06, + 0x0a, 0x14, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, + 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x6e, 0x75, + 0x6d, 0x62, 0x65, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05, 0x52, 0x06, 0x6e, 0x75, 0x6d, 0x62, + 0x65, 0x72, 0x12, 0x41, 0x0a, 0x05, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x18, 0x04, 0x20, 0x01, 0x28, + 0x0e, 0x32, 0x2b, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, + 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x52, 0x05, + 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x3e, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, + 0x01, 0x28, 0x0e, 0x32, 0x2a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, + 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, + 0x69, 0x70, 
0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x54, 0x79, 0x70, 0x65, 0x52, + 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, + 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, + 0x6d, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x64, 0x65, 0x65, 0x18, 0x02, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x64, 0x65, 0x65, 0x12, 0x23, + 0x0a, 0x0d, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, + 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x56, 0x61, + 0x6c, 0x75, 0x65, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x5f, 0x69, 0x6e, 0x64, + 0x65, 0x78, 0x18, 0x09, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0a, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x49, + 0x6e, 0x64, 0x65, 0x78, 0x12, 0x1b, 0x0a, 0x09, 0x6a, 0x73, 0x6f, 0x6e, 0x5f, 0x6e, 0x61, 0x6d, + 0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x6a, 0x73, 0x6f, 0x6e, 0x4e, 0x61, 0x6d, + 0x65, 0x12, 0x37, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x08, 0x20, 0x01, + 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, + 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, + 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x27, 0x0a, 0x0f, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x33, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x61, 0x6c, 0x18, 0x11, 0x20, + 0x01, 0x28, 0x08, 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, 0x4f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x61, 0x6c, 0x22, 0xb6, 0x02, 0x0a, 0x04, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0f, 0x0a, 0x0b, + 0x54, 0x59, 0x50, 0x45, 0x5f, 0x44, 0x4f, 0x55, 0x42, 0x4c, 0x45, 0x10, 0x01, 0x12, 0x0e, 0x0a, + 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x46, 0x4c, 0x4f, 0x41, 0x54, 0x10, 0x02, 0x12, 0x0e, 0x0a, + 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x49, 0x4e, 
0x54, 0x36, 0x34, 0x10, 0x03, 0x12, 0x0f, 0x0a, + 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x49, 0x4e, 0x54, 0x36, 0x34, 0x10, 0x04, 0x12, 0x0e, + 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x49, 0x4e, 0x54, 0x33, 0x32, 0x10, 0x05, 0x12, 0x10, + 0x0a, 0x0c, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x46, 0x49, 0x58, 0x45, 0x44, 0x36, 0x34, 0x10, 0x06, + 0x12, 0x10, 0x0a, 0x0c, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x46, 0x49, 0x58, 0x45, 0x44, 0x33, 0x32, + 0x10, 0x07, 0x12, 0x0d, 0x0a, 0x09, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x42, 0x4f, 0x4f, 0x4c, 0x10, + 0x08, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, + 0x10, 0x09, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x47, 0x52, 0x4f, 0x55, 0x50, + 0x10, 0x0a, 0x12, 0x10, 0x0a, 0x0c, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x4d, 0x45, 0x53, 0x53, 0x41, + 0x47, 0x45, 0x10, 0x0b, 0x12, 0x0e, 0x0a, 0x0a, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x42, 0x59, 0x54, + 0x45, 0x53, 0x10, 0x0c, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x49, 0x4e, + 0x54, 0x33, 0x32, 0x10, 0x0d, 0x12, 0x0d, 0x0a, 0x09, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x45, 0x4e, + 0x55, 0x4d, 0x10, 0x0e, 0x12, 0x11, 0x0a, 0x0d, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x53, 0x46, 0x49, + 0x58, 0x45, 0x44, 0x33, 0x32, 0x10, 0x0f, 0x12, 0x11, 0x0a, 0x0d, 0x54, 0x59, 0x50, 0x45, 0x5f, + 0x53, 0x46, 0x49, 0x58, 0x45, 0x44, 0x36, 0x34, 0x10, 0x10, 0x12, 0x0f, 0x0a, 0x0b, 0x54, 0x59, + 0x50, 0x45, 0x5f, 0x53, 0x49, 0x4e, 0x54, 0x33, 0x32, 0x10, 0x11, 0x12, 0x0f, 0x0a, 0x0b, 0x54, + 0x59, 0x50, 0x45, 0x5f, 0x53, 0x49, 0x4e, 0x54, 0x36, 0x34, 0x10, 0x12, 0x22, 0x43, 0x0a, 0x05, + 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x12, 0x0a, 0x0e, 0x4c, 0x41, 0x42, 0x45, 0x4c, 0x5f, 0x4f, + 0x50, 0x54, 0x49, 0x4f, 0x4e, 0x41, 0x4c, 0x10, 0x01, 0x12, 0x12, 0x0a, 0x0e, 0x4c, 0x41, 0x42, + 0x45, 0x4c, 0x5f, 0x52, 0x45, 0x51, 0x55, 0x49, 0x52, 0x45, 0x44, 0x10, 0x02, 0x12, 0x12, 0x0a, + 0x0e, 0x4c, 0x41, 0x42, 0x45, 0x4c, 0x5f, 0x52, 0x45, 0x50, 0x45, 0x41, 0x54, 0x45, 
0x44, 0x10, + 0x03, 0x22, 0x63, 0x0a, 0x14, 0x4f, 0x6e, 0x65, 0x6f, 0x66, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, + 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, + 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x37, 0x0a, + 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1d, + 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, + 0x2e, 0x4f, 0x6e, 0x65, 0x6f, 0x66, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, + 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0xe3, 0x02, 0x0a, 0x13, 0x45, 0x6e, 0x75, 0x6d, 0x44, + 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, + 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, + 0x6d, 0x65, 0x12, 0x3f, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, + 0x0b, 0x32, 0x29, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x44, 0x65, 0x73, + 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x05, 0x76, 0x61, + 0x6c, 0x75, 0x65, 0x12, 0x36, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x4f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x5d, 0x0a, 0x0e, 0x72, + 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x5f, 0x72, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x04, 0x20, + 0x03, 0x28, 0x0b, 0x32, 0x36, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, + 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, + 0x70, 0x74, 0x6f, 0x72, 
0x50, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x52, 0x65, + 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0d, 0x72, 0x65, 0x73, + 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65, + 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x05, 0x20, 0x03, 0x28, + 0x09, 0x52, 0x0c, 0x72, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x1a, + 0x3b, 0x0a, 0x11, 0x45, 0x6e, 0x75, 0x6d, 0x52, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x52, + 0x61, 0x6e, 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x05, 0x52, 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x65, 0x6e, + 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x65, 0x6e, 0x64, 0x22, 0x83, 0x01, 0x0a, + 0x18, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, + 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, + 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x16, 0x0a, + 0x06, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x06, 0x6e, + 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x3b, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, + 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, + 0x75, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x73, 0x22, 0xa7, 0x01, 0x0a, 0x16, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x44, 0x65, + 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, + 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, + 0x65, 0x12, 0x3e, 0x0a, 0x06, 0x6d, 0x65, 0x74, 0x68, 0x6f, 
0x64, 0x18, 0x02, 0x20, 0x03, 0x28, + 0x0b, 0x32, 0x26, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, + 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x06, 0x6d, 0x65, 0x74, 0x68, 0x6f, + 0x64, 0x12, 0x39, 0x0a, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x01, + 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, + 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, + 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x89, 0x02, 0x0a, + 0x15, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, + 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x69, 0x6e, + 0x70, 0x75, 0x74, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, + 0x69, 0x6e, 0x70, 0x75, 0x74, 0x54, 0x79, 0x70, 0x65, 0x12, 0x1f, 0x0a, 0x0b, 0x6f, 0x75, 0x74, + 0x70, 0x75, 0x74, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, + 0x6f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x54, 0x79, 0x70, 0x65, 0x12, 0x38, 0x0a, 0x07, 0x6f, 0x70, + 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x67, 0x6f, + 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, + 0x74, 0x68, 0x6f, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x52, 0x07, 0x6f, 0x70, 0x74, + 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x30, 0x0a, 0x10, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x5f, 0x73, + 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x18, 0x05, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, + 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0f, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x72, + 
0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x12, 0x30, 0x0a, 0x10, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, + 0x5f, 0x73, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x18, 0x06, 0x20, 0x01, 0x28, 0x08, + 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0f, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x53, + 0x74, 0x72, 0x65, 0x61, 0x6d, 0x69, 0x6e, 0x67, 0x22, 0x91, 0x09, 0x0a, 0x0b, 0x46, 0x69, 0x6c, + 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x21, 0x0a, 0x0c, 0x6a, 0x61, 0x76, 0x61, + 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, + 0x6a, 0x61, 0x76, 0x61, 0x50, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x12, 0x30, 0x0a, 0x14, 0x6a, + 0x61, 0x76, 0x61, 0x5f, 0x6f, 0x75, 0x74, 0x65, 0x72, 0x5f, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x6e, + 0x61, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x12, 0x6a, 0x61, 0x76, 0x61, 0x4f, + 0x75, 0x74, 0x65, 0x72, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x35, 0x0a, + 0x13, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x6d, 0x75, 0x6c, 0x74, 0x69, 0x70, 0x6c, 0x65, 0x5f, 0x66, + 0x69, 0x6c, 0x65, 0x73, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, + 0x65, 0x52, 0x11, 0x6a, 0x61, 0x76, 0x61, 0x4d, 0x75, 0x6c, 0x74, 0x69, 0x70, 0x6c, 0x65, 0x46, + 0x69, 0x6c, 0x65, 0x73, 0x12, 0x44, 0x0a, 0x1d, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x67, 0x65, 0x6e, + 0x65, 0x72, 0x61, 0x74, 0x65, 0x5f, 0x65, 0x71, 0x75, 0x61, 0x6c, 0x73, 0x5f, 0x61, 0x6e, 0x64, + 0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0x14, 0x20, 0x01, 0x28, 0x08, 0x42, 0x02, 0x18, 0x01, 0x52, + 0x19, 0x6a, 0x61, 0x76, 0x61, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x45, 0x71, 0x75, + 0x61, 0x6c, 0x73, 0x41, 0x6e, 0x64, 0x48, 0x61, 0x73, 0x68, 0x12, 0x3a, 0x0a, 0x16, 0x6a, 0x61, + 0x76, 0x61, 0x5f, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x5f, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x5f, + 0x75, 0x74, 0x66, 0x38, 0x18, 0x1b, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, + 0x65, 0x52, 0x13, 0x6a, 0x61, 0x76, 
0x61, 0x53, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x43, 0x68, 0x65, + 0x63, 0x6b, 0x55, 0x74, 0x66, 0x38, 0x12, 0x53, 0x0a, 0x0c, 0x6f, 0x70, 0x74, 0x69, 0x6d, 0x69, + 0x7a, 0x65, 0x5f, 0x66, 0x6f, 0x72, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x29, 0x2e, 0x67, + 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, + 0x69, 0x6c, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x4f, 0x70, 0x74, 0x69, 0x6d, + 0x69, 0x7a, 0x65, 0x4d, 0x6f, 0x64, 0x65, 0x3a, 0x05, 0x53, 0x50, 0x45, 0x45, 0x44, 0x52, 0x0b, + 0x6f, 0x70, 0x74, 0x69, 0x6d, 0x69, 0x7a, 0x65, 0x46, 0x6f, 0x72, 0x12, 0x1d, 0x0a, 0x0a, 0x67, + 0x6f, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x09, 0x52, + 0x09, 0x67, 0x6f, 0x50, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x12, 0x35, 0x0a, 0x13, 0x63, 0x63, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, - 0x73, 0x18, 0x2a, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x12, - 0x70, 0x68, 0x70, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, - 0x65, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, - 0x18, 0x17, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, - 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x2e, 0x0a, 0x10, 0x63, 0x63, 0x5f, - 0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x5f, 0x61, 0x72, 0x65, 0x6e, 0x61, 0x73, 0x18, 0x1f, 0x20, - 0x01, 0x28, 0x08, 0x3a, 0x04, 0x74, 0x72, 0x75, 0x65, 0x52, 0x0e, 0x63, 0x63, 0x45, 0x6e, 0x61, - 0x62, 0x6c, 0x65, 0x41, 0x72, 0x65, 0x6e, 0x61, 0x73, 0x12, 0x2a, 0x0a, 0x11, 0x6f, 0x62, 0x6a, - 0x63, 0x5f, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x24, - 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x6f, 0x62, 0x6a, 0x63, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x50, - 0x72, 0x65, 0x66, 0x69, 0x78, 0x12, 0x29, 0x0a, 0x10, 0x63, 0x73, 0x68, 0x61, 
0x72, 0x70, 0x5f, - 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x25, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x0f, 0x63, 0x73, 0x68, 0x61, 0x72, 0x70, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, - 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x77, 0x69, 0x66, 0x74, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, - 0x18, 0x27, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x77, 0x69, 0x66, 0x74, 0x50, 0x72, 0x65, - 0x66, 0x69, 0x78, 0x12, 0x28, 0x0a, 0x10, 0x70, 0x68, 0x70, 0x5f, 0x63, 0x6c, 0x61, 0x73, 0x73, - 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x28, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x70, - 0x68, 0x70, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78, 0x12, 0x23, 0x0a, - 0x0d, 0x70, 0x68, 0x70, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x29, - 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x70, 0x68, 0x70, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, - 0x63, 0x65, 0x12, 0x34, 0x0a, 0x16, 0x70, 0x68, 0x70, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, - 0x74, 0x61, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x2c, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x14, 0x70, 0x68, 0x70, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x4e, - 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x72, 0x75, 0x62, 0x79, - 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, 0x2d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, - 0x72, 0x75, 0x62, 0x79, 0x50, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x12, 0x58, 0x0a, 0x14, 0x75, - 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, - 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, - 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, - 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, - 0x70, 0x74, 0x69, 
0x6f, 0x6e, 0x22, 0x3a, 0x0a, 0x0c, 0x4f, 0x70, 0x74, 0x69, 0x6d, 0x69, 0x7a, - 0x65, 0x4d, 0x6f, 0x64, 0x65, 0x12, 0x09, 0x0a, 0x05, 0x53, 0x50, 0x45, 0x45, 0x44, 0x10, 0x01, - 0x12, 0x0d, 0x0a, 0x09, 0x43, 0x4f, 0x44, 0x45, 0x5f, 0x53, 0x49, 0x5a, 0x45, 0x10, 0x02, 0x12, - 0x10, 0x0a, 0x0c, 0x4c, 0x49, 0x54, 0x45, 0x5f, 0x52, 0x55, 0x4e, 0x54, 0x49, 0x4d, 0x45, 0x10, - 0x03, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, 0x08, 0x26, - 0x10, 0x27, 0x22, 0xd1, 0x02, 0x0a, 0x0e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x4f, 0x70, - 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x3c, 0x0a, 0x17, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, - 0x5f, 0x73, 0x65, 0x74, 0x5f, 0x77, 0x69, 0x72, 0x65, 0x5f, 0x66, 0x6f, 0x72, 0x6d, 0x61, 0x74, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x14, 0x6d, - 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x57, 0x69, 0x72, 0x65, 0x46, 0x6f, 0x72, - 0x6d, 0x61, 0x74, 0x12, 0x4c, 0x0a, 0x1f, 0x6e, 0x6f, 0x5f, 0x73, 0x74, 0x61, 0x6e, 0x64, 0x61, - 0x72, 0x64, 0x5f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x5f, 0x61, 0x63, - 0x63, 0x65, 0x73, 0x73, 0x6f, 0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, - 0x6c, 0x73, 0x65, 0x52, 0x1c, 0x6e, 0x6f, 0x53, 0x74, 0x61, 0x6e, 0x64, 0x61, 0x72, 0x64, 0x44, - 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x41, 0x63, 0x63, 0x65, 0x73, 0x73, 0x6f, - 0x72, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, - 0x03, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, - 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x6d, 0x61, 0x70, 0x5f, - 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, 0x08, 0x52, 0x08, 0x6d, 0x61, 0x70, - 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, + 0x73, 0x18, 0x10, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 
0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x11, + 0x63, 0x63, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, + 0x73, 0x12, 0x39, 0x0a, 0x15, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x69, + 0x63, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x11, 0x20, 0x01, 0x28, 0x08, + 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x13, 0x6a, 0x61, 0x76, 0x61, 0x47, 0x65, 0x6e, + 0x65, 0x72, 0x69, 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x12, 0x35, 0x0a, 0x13, + 0x70, 0x79, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, + 0x63, 0x65, 0x73, 0x18, 0x12, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, + 0x52, 0x11, 0x70, 0x79, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x69, 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, + 0x63, 0x65, 0x73, 0x12, 0x37, 0x0a, 0x14, 0x70, 0x68, 0x70, 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, + 0x69, 0x63, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x2a, 0x20, 0x01, 0x28, + 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x12, 0x70, 0x68, 0x70, 0x47, 0x65, 0x6e, + 0x65, 0x72, 0x69, 0x63, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x12, 0x25, 0x0a, 0x0a, + 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x17, 0x20, 0x01, 0x28, 0x08, + 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, + 0x74, 0x65, 0x64, 0x12, 0x2e, 0x0a, 0x10, 0x63, 0x63, 0x5f, 0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, + 0x5f, 0x61, 0x72, 0x65, 0x6e, 0x61, 0x73, 0x18, 0x1f, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x04, 0x74, + 0x72, 0x75, 0x65, 0x52, 0x0e, 0x63, 0x63, 0x45, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x41, 0x72, 0x65, + 0x6e, 0x61, 0x73, 0x12, 0x2a, 0x0a, 0x11, 0x6f, 0x62, 0x6a, 0x63, 0x5f, 0x63, 0x6c, 0x61, 0x73, + 0x73, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x24, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, + 0x6f, 0x62, 0x6a, 0x63, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78, 
0x12, + 0x29, 0x0a, 0x10, 0x63, 0x73, 0x68, 0x61, 0x72, 0x70, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, + 0x61, 0x63, 0x65, 0x18, 0x25, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x63, 0x73, 0x68, 0x61, 0x72, + 0x70, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x77, + 0x69, 0x66, 0x74, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x27, 0x20, 0x01, 0x28, 0x09, + 0x52, 0x0b, 0x73, 0x77, 0x69, 0x66, 0x74, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78, 0x12, 0x28, 0x0a, + 0x10, 0x70, 0x68, 0x70, 0x5f, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, + 0x78, 0x18, 0x28, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x70, 0x68, 0x70, 0x43, 0x6c, 0x61, 0x73, + 0x73, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78, 0x12, 0x23, 0x0a, 0x0d, 0x70, 0x68, 0x70, 0x5f, 0x6e, + 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x29, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, + 0x70, 0x68, 0x70, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x34, 0x0a, 0x16, + 0x70, 0x68, 0x70, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x6e, 0x61, 0x6d, + 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x2c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x14, 0x70, 0x68, + 0x70, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, + 0x63, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x72, 0x75, 0x62, 0x79, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, + 0x67, 0x65, 0x18, 0x2d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x72, 0x75, 0x62, 0x79, 0x50, 0x61, + 0x63, 0x6b, 0x61, 0x67, 0x65, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 
0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, - 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, 0x08, 0x08, 0x10, 0x09, - 0x4a, 0x04, 0x08, 0x09, 0x10, 0x0a, 0x22, 0xe2, 0x03, 0x0a, 0x0c, 0x46, 0x69, 0x65, 0x6c, 0x64, - 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x41, 0x0a, 0x05, 0x63, 0x74, 0x79, 0x70, 0x65, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x23, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, + 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, + 0x3a, 0x0a, 0x0c, 0x4f, 0x70, 0x74, 0x69, 0x6d, 0x69, 0x7a, 0x65, 0x4d, 0x6f, 0x64, 0x65, 0x12, + 0x09, 0x0a, 0x05, 0x53, 0x50, 0x45, 0x45, 0x44, 0x10, 0x01, 0x12, 0x0d, 0x0a, 0x09, 0x43, 0x4f, + 0x44, 0x45, 0x5f, 0x53, 0x49, 0x5a, 0x45, 0x10, 0x02, 0x12, 0x10, 0x0a, 0x0c, 0x4c, 0x49, 0x54, + 0x45, 0x5f, 0x52, 0x55, 0x4e, 0x54, 0x49, 0x4d, 0x45, 0x10, 0x03, 0x2a, 0x09, 0x08, 0xe8, 0x07, + 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, 0x08, 0x26, 0x10, 0x27, 0x22, 0xbb, 0x03, 0x0a, + 0x0e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, + 0x3c, 0x0a, 0x17, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x5f, 0x73, 0x65, 0x74, 0x5f, 0x77, + 0x69, 0x72, 0x65, 0x5f, 0x66, 0x6f, 0x72, 0x6d, 0x61, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, + 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x14, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, + 0x53, 0x65, 0x74, 0x57, 0x69, 0x72, 0x65, 0x46, 0x6f, 0x72, 0x6d, 0x61, 0x74, 0x12, 0x4c, 0x0a, + 0x1f, 0x6e, 0x6f, 0x5f, 0x73, 0x74, 0x61, 0x6e, 0x64, 0x61, 0x72, 0x64, 0x5f, 0x64, 0x65, 0x73, + 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x5f, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x6f, 0x72, + 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x1c, 0x6e, + 0x6f, 0x53, 0x74, 0x61, 0x6e, 0x64, 0x61, 0x72, 0x64, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, + 0x74, 0x6f, 0x72, 0x41, 0x63, 0x63, 0x65, 0x73, 0x73, 0x6f, 0x72, 0x12, 
0x25, 0x0a, 0x0a, 0x64, + 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x3a, + 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, + 0x65, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x6d, 0x61, 0x70, 0x5f, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, + 0x07, 0x20, 0x01, 0x28, 0x08, 0x52, 0x08, 0x6d, 0x61, 0x70, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, + 0x56, 0x0a, 0x26, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x5f, 0x6c, 0x65, + 0x67, 0x61, 0x63, 0x79, 0x5f, 0x6a, 0x73, 0x6f, 0x6e, 0x5f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x5f, + 0x63, 0x6f, 0x6e, 0x66, 0x6c, 0x69, 0x63, 0x74, 0x73, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x08, 0x42, + 0x02, 0x18, 0x01, 0x52, 0x22, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x4c, + 0x65, 0x67, 0x61, 0x63, 0x79, 0x4a, 0x73, 0x6f, 0x6e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x43, 0x6f, + 0x6e, 0x66, 0x6c, 0x69, 0x63, 0x74, 0x73, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, + 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, + 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, + 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, + 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, 0x08, 0x04, + 0x10, 0x05, 0x4a, 0x04, 0x08, 0x05, 0x10, 0x06, 0x4a, 0x04, 0x08, 0x06, 0x10, 0x07, 0x4a, 0x04, + 0x08, 0x08, 0x10, 0x09, 0x4a, 0x04, 0x08, 0x09, 0x10, 0x0a, 0x22, 0xb7, 0x08, 0x0a, 0x0c, 0x46, + 0x69, 0x65, 0x6c, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x41, 0x0a, 0x05, 0x63, + 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x23, 0x2e, 0x67, 0x6f, 0x6f, + 0x67, 0x6c, 
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, + 0x6c, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x43, 0x54, 0x79, 0x70, 0x65, 0x3a, + 0x06, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x52, 0x05, 0x63, 0x74, 0x79, 0x70, 0x65, 0x12, 0x16, + 0x0a, 0x06, 0x70, 0x61, 0x63, 0x6b, 0x65, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x06, + 0x70, 0x61, 0x63, 0x6b, 0x65, 0x64, 0x12, 0x47, 0x0a, 0x06, 0x6a, 0x73, 0x74, 0x79, 0x70, 0x65, + 0x18, 0x06, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x4f, 0x70, - 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x43, 0x54, 0x79, 0x70, 0x65, 0x3a, 0x06, 0x53, 0x54, 0x52, - 0x49, 0x4e, 0x47, 0x52, 0x05, 0x63, 0x74, 0x79, 0x70, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x61, - 0x63, 0x6b, 0x65, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x06, 0x70, 0x61, 0x63, 0x6b, - 0x65, 0x64, 0x12, 0x47, 0x0a, 0x06, 0x6a, 0x73, 0x74, 0x79, 0x70, 0x65, 0x18, 0x06, 0x20, 0x01, - 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, - 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, - 0x73, 0x2e, 0x4a, 0x53, 0x54, 0x79, 0x70, 0x65, 0x3a, 0x09, 0x4a, 0x53, 0x5f, 0x4e, 0x4f, 0x52, - 0x4d, 0x41, 0x4c, 0x52, 0x06, 0x6a, 0x73, 0x74, 0x79, 0x70, 0x65, 0x12, 0x19, 0x0a, 0x04, 0x6c, - 0x61, 0x7a, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, - 0x52, 0x04, 0x6c, 0x61, 0x7a, 0x79, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, - 0x61, 0x74, 0x65, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, - 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x19, 0x0a, - 0x04, 0x77, 0x65, 0x61, 0x6b, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, - 0x73, 0x65, 0x52, 0x04, 0x77, 0x65, 0x61, 0x6b, 0x12, 
0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, - 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, - 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, - 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, - 0x6f, 0x6e, 0x22, 0x2f, 0x0a, 0x05, 0x43, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x53, - 0x54, 0x52, 0x49, 0x4e, 0x47, 0x10, 0x00, 0x12, 0x08, 0x0a, 0x04, 0x43, 0x4f, 0x52, 0x44, 0x10, - 0x01, 0x12, 0x10, 0x0a, 0x0c, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x5f, 0x50, 0x49, 0x45, 0x43, - 0x45, 0x10, 0x02, 0x22, 0x35, 0x0a, 0x06, 0x4a, 0x53, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0d, 0x0a, - 0x09, 0x4a, 0x53, 0x5f, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, 0x0d, 0x0a, 0x09, - 0x4a, 0x53, 0x5f, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x10, 0x01, 0x12, 0x0d, 0x0a, 0x09, 0x4a, - 0x53, 0x5f, 0x4e, 0x55, 0x4d, 0x42, 0x45, 0x52, 0x10, 0x02, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, - 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, 0x08, 0x04, 0x10, 0x05, 0x22, 0x73, 0x0a, 0x0c, 0x4f, - 0x6e, 0x65, 0x6f, 0x66, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x58, 0x0a, 0x14, 0x75, + 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x4a, 0x53, 0x54, 0x79, 0x70, 0x65, 0x3a, 0x09, 0x4a, 0x53, + 0x5f, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x52, 0x06, 0x6a, 0x73, 0x74, 0x79, 0x70, 0x65, 0x12, + 0x19, 0x0a, 0x04, 0x6c, 0x61, 0x7a, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, + 0x61, 0x6c, 0x73, 0x65, 0x52, 0x04, 0x6c, 0x61, 0x7a, 0x79, 0x12, 0x2e, 0x0a, 0x0f, 0x75, 0x6e, + 0x76, 0x65, 0x72, 0x69, 0x66, 0x69, 0x65, 0x64, 0x5f, 0x6c, 0x61, 0x7a, 0x79, 0x18, 0x0f, 0x20, + 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0e, 0x75, 0x6e, 0x76, 
0x65, + 0x72, 0x69, 0x66, 0x69, 0x65, 0x64, 0x4c, 0x61, 0x7a, 0x79, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, + 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, + 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, + 0x64, 0x12, 0x19, 0x0a, 0x04, 0x77, 0x65, 0x61, 0x6b, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x08, 0x3a, + 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x04, 0x77, 0x65, 0x61, 0x6b, 0x12, 0x28, 0x0a, 0x0c, + 0x64, 0x65, 0x62, 0x75, 0x67, 0x5f, 0x72, 0x65, 0x64, 0x61, 0x63, 0x74, 0x18, 0x10, 0x20, 0x01, + 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0b, 0x64, 0x65, 0x62, 0x75, 0x67, + 0x52, 0x65, 0x64, 0x61, 0x63, 0x74, 0x12, 0x4b, 0x0a, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, + 0x69, 0x6f, 0x6e, 0x18, 0x11, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x2d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, + 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, + 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, + 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, + 0x69, 0x6f, 0x6e, 0x12, 0x46, 0x0a, 0x06, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x18, 0x12, 0x20, + 0x01, 0x28, 0x0e, 0x32, 0x2e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, + 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x73, 0x2e, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x54, + 0x79, 0x70, 0x65, 0x52, 0x06, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 
0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, - 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, - 0x22, 0xc0, 0x01, 0x0a, 0x0b, 0x45, 0x6e, 0x75, 0x6d, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, - 0x12, 0x1f, 0x0a, 0x0b, 0x61, 0x6c, 0x6c, 0x6f, 0x77, 0x5f, 0x61, 0x6c, 0x69, 0x61, 0x73, 0x18, - 0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0a, 0x61, 0x6c, 0x6c, 0x6f, 0x77, 0x41, 0x6c, 0x69, 0x61, - 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, - 0x03, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, - 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, - 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, - 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, - 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, - 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, 0x08, - 0x05, 0x10, 0x06, 0x22, 0x9e, 0x01, 0x0a, 0x10, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, 0x75, + 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x2f, 0x0a, 0x05, 0x43, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0a, + 0x0a, 0x06, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x10, 0x00, 0x12, 0x08, 0x0a, 0x04, 0x43, 0x4f, + 0x52, 0x44, 0x10, 0x01, 0x12, 0x10, 0x0a, 0x0c, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x5f, 0x50, + 0x49, 0x45, 0x43, 0x45, 0x10, 0x02, 0x22, 0x35, 0x0a, 0x06, 0x4a, 0x53, 0x54, 0x79, 0x70, 0x65, + 0x12, 0x0d, 0x0a, 0x09, 0x4a, 0x53, 0x5f, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 
0x10, 0x00, 0x12, + 0x0d, 0x0a, 0x09, 0x4a, 0x53, 0x5f, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47, 0x10, 0x01, 0x12, 0x0d, + 0x0a, 0x09, 0x4a, 0x53, 0x5f, 0x4e, 0x55, 0x4d, 0x42, 0x45, 0x52, 0x10, 0x02, 0x22, 0x55, 0x0a, + 0x0f, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, + 0x12, 0x15, 0x0a, 0x11, 0x52, 0x45, 0x54, 0x45, 0x4e, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, + 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, 0x00, 0x12, 0x15, 0x0a, 0x11, 0x52, 0x45, 0x54, 0x45, 0x4e, + 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x52, 0x55, 0x4e, 0x54, 0x49, 0x4d, 0x45, 0x10, 0x01, 0x12, 0x14, + 0x0a, 0x10, 0x52, 0x45, 0x54, 0x45, 0x4e, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x53, 0x4f, 0x55, 0x52, + 0x43, 0x45, 0x10, 0x02, 0x22, 0x8c, 0x02, 0x0a, 0x10, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x54, + 0x61, 0x72, 0x67, 0x65, 0x74, 0x54, 0x79, 0x70, 0x65, 0x12, 0x17, 0x0a, 0x13, 0x54, 0x41, 0x52, + 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, + 0x10, 0x00, 0x12, 0x14, 0x0a, 0x10, 0x54, 0x41, 0x52, 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, + 0x45, 0x5f, 0x46, 0x49, 0x4c, 0x45, 0x10, 0x01, 0x12, 0x1f, 0x0a, 0x1b, 0x54, 0x41, 0x52, 0x47, + 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x45, 0x58, 0x54, 0x45, 0x4e, 0x53, 0x49, 0x4f, + 0x4e, 0x5f, 0x52, 0x41, 0x4e, 0x47, 0x45, 0x10, 0x02, 0x12, 0x17, 0x0a, 0x13, 0x54, 0x41, 0x52, + 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x4d, 0x45, 0x53, 0x53, 0x41, 0x47, 0x45, + 0x10, 0x03, 0x12, 0x15, 0x0a, 0x11, 0x54, 0x41, 0x52, 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, + 0x45, 0x5f, 0x46, 0x49, 0x45, 0x4c, 0x44, 0x10, 0x04, 0x12, 0x15, 0x0a, 0x11, 0x54, 0x41, 0x52, + 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x4f, 0x4e, 0x45, 0x4f, 0x46, 0x10, 0x05, + 0x12, 0x14, 0x0a, 0x10, 0x54, 0x41, 0x52, 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, + 0x45, 0x4e, 0x55, 0x4d, 0x10, 0x06, 0x12, 0x1a, 0x0a, 0x16, 0x54, 0x41, 0x52, 0x47, 0x45, 0x54, + 0x5f, 0x54, 0x59, 
0x50, 0x45, 0x5f, 0x45, 0x4e, 0x55, 0x4d, 0x5f, 0x45, 0x4e, 0x54, 0x52, 0x59, + 0x10, 0x07, 0x12, 0x17, 0x0a, 0x13, 0x54, 0x41, 0x52, 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, + 0x45, 0x5f, 0x53, 0x45, 0x52, 0x56, 0x49, 0x43, 0x45, 0x10, 0x08, 0x12, 0x16, 0x0a, 0x12, 0x54, + 0x41, 0x52, 0x47, 0x45, 0x54, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x4d, 0x45, 0x54, 0x48, 0x4f, + 0x44, 0x10, 0x09, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, + 0x08, 0x04, 0x10, 0x05, 0x22, 0x73, 0x0a, 0x0c, 0x4f, 0x6e, 0x65, 0x6f, 0x66, 0x4f, 0x70, 0x74, + 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, + 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, + 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, + 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, + 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, + 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, 0x09, + 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x22, 0x98, 0x02, 0x0a, 0x0b, 0x45, 0x6e, + 0x75, 0x6d, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x1f, 0x0a, 0x0b, 0x61, 0x6c, 0x6c, + 0x6f, 0x77, 0x5f, 0x61, 0x6c, 0x69, 0x61, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0a, + 0x61, 0x6c, 0x6c, 0x6f, 0x77, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, + 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, + 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, + 0x64, 0x12, 0x56, 0x0a, 0x26, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x5f, + 0x6c, 0x65, 0x67, 0x61, 0x63, 0x79, 0x5f, 0x6a, 0x73, 0x6f, 0x6e, 0x5f, 0x66, 0x69, 0x65, 0x6c, + 0x64, 0x5f, 0x63, 0x6f, 0x6e, 0x66, 0x6c, 0x69, 0x63, 
0x74, 0x73, 0x18, 0x06, 0x20, 0x01, 0x28, + 0x08, 0x42, 0x02, 0x18, 0x01, 0x52, 0x22, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, + 0x64, 0x4c, 0x65, 0x67, 0x61, 0x63, 0x79, 0x4a, 0x73, 0x6f, 0x6e, 0x46, 0x69, 0x65, 0x6c, 0x64, + 0x43, 0x6f, 0x6e, 0x66, 0x6c, 0x69, 0x63, 0x74, 0x73, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, + 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, + 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, + 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, + 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, + 0x69, 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x4a, 0x04, + 0x08, 0x05, 0x10, 0x06, 0x22, 0x9e, 0x01, 0x0a, 0x10, 0x45, 0x6e, 0x75, 0x6d, 0x56, 0x61, 0x6c, + 0x75, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, + 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, + 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, + 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, + 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, + 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, + 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, + 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, + 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, + 0x80, 0x80, 0x80, 0x80, 0x02, 0x22, 0x9c, 0x01, 0x0a, 0x0e, 0x53, 0x65, 0x72, 0x76, 0x69, 
0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, - 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, + 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x21, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, @@ -3385,97 +3774,95 @@ var file_google_protobuf_descriptor_proto_rawDesc = []byte{ 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, - 0x80, 0x80, 0x80, 0x02, 0x22, 0x9c, 0x01, 0x0a, 0x0e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, - 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, - 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x21, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, - 0x73, 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x58, - 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x5f, - 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, + 0x80, 0x80, 0x80, 0x02, 0x22, 0xe0, 0x02, 0x0a, 0x0d, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x4f, + 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, + 0x61, 0x74, 0x65, 0x64, 0x18, 0x21, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, + 0x65, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x71, 0x0a, + 0x11, 0x69, 0x64, 0x65, 0x6d, 0x70, 0x6f, 0x74, 0x65, 0x6e, 
0x63, 0x79, 0x5f, 0x6c, 0x65, 0x76, + 0x65, 0x6c, 0x18, 0x22, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, + 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, + 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x49, 0x64, 0x65, 0x6d, 0x70, 0x6f, 0x74, + 0x65, 0x6e, 0x63, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x3a, 0x13, 0x49, 0x44, 0x45, 0x4d, 0x50, + 0x4f, 0x54, 0x45, 0x4e, 0x43, 0x59, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x52, 0x10, + 0x69, 0x64, 0x65, 0x6d, 0x70, 0x6f, 0x74, 0x65, 0x6e, 0x63, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, + 0x12, 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, + 0x64, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, + 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, + 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, + 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, + 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x50, 0x0a, 0x10, 0x49, 0x64, + 0x65, 0x6d, 0x70, 0x6f, 0x74, 0x65, 0x6e, 0x63, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x12, 0x17, + 0x0a, 0x13, 0x49, 0x44, 0x45, 0x4d, 0x50, 0x4f, 0x54, 0x45, 0x4e, 0x43, 0x59, 0x5f, 0x55, 0x4e, + 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, 0x00, 0x12, 0x13, 0x0a, 0x0f, 0x4e, 0x4f, 0x5f, 0x53, 0x49, + 0x44, 0x45, 0x5f, 0x45, 0x46, 0x46, 0x45, 0x43, 0x54, 0x53, 0x10, 0x01, 0x12, 0x0e, 0x0a, 0x0a, + 0x49, 0x44, 0x45, 0x4d, 0x50, 0x4f, 0x54, 0x45, 0x4e, 0x54, 0x10, 0x02, 0x2a, 0x09, 0x08, 0xe8, + 0x07, 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x22, 0x9a, 0x03, 0x0a, 0x13, 0x55, 0x6e, 0x69, 0x6e, + 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x12, + 0x41, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2d, 0x2e, 
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, - 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x2a, 0x09, 0x08, 0xe8, 0x07, 0x10, 0x80, 0x80, - 0x80, 0x80, 0x02, 0x22, 0xe0, 0x02, 0x0a, 0x0d, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x4f, 0x70, - 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x25, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, - 0x74, 0x65, 0x64, 0x18, 0x21, 0x20, 0x01, 0x28, 0x08, 0x3a, 0x05, 0x66, 0x61, 0x6c, 0x73, 0x65, - 0x52, 0x0a, 0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x71, 0x0a, 0x11, - 0x69, 0x64, 0x65, 0x6d, 0x70, 0x6f, 0x74, 0x65, 0x6e, 0x63, 0x79, 0x5f, 0x6c, 0x65, 0x76, 0x65, - 0x6c, 0x18, 0x22, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, - 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x49, 0x64, 0x65, 0x6d, 0x70, 0x6f, 0x74, 0x65, - 0x6e, 0x63, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x3a, 0x13, 0x49, 0x44, 0x45, 0x4d, 0x50, 0x4f, - 0x54, 0x45, 0x4e, 0x43, 0x59, 0x5f, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x52, 0x10, 0x69, - 0x64, 0x65, 0x6d, 0x70, 0x6f, 0x74, 0x65, 0x6e, 0x63, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x12, - 0x58, 0x0a, 0x14, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, - 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0xe7, 0x07, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, - 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, - 0x2e, 0x55, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, - 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x13, 0x75, 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, - 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 
0x6f, 0x6e, 0x22, 0x50, 0x0a, 0x10, 0x49, 0x64, 0x65, - 0x6d, 0x70, 0x6f, 0x74, 0x65, 0x6e, 0x63, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x12, 0x17, 0x0a, - 0x13, 0x49, 0x44, 0x45, 0x4d, 0x50, 0x4f, 0x54, 0x45, 0x4e, 0x43, 0x59, 0x5f, 0x55, 0x4e, 0x4b, - 0x4e, 0x4f, 0x57, 0x4e, 0x10, 0x00, 0x12, 0x13, 0x0a, 0x0f, 0x4e, 0x4f, 0x5f, 0x53, 0x49, 0x44, - 0x45, 0x5f, 0x45, 0x46, 0x46, 0x45, 0x43, 0x54, 0x53, 0x10, 0x01, 0x12, 0x0e, 0x0a, 0x0a, 0x49, - 0x44, 0x45, 0x4d, 0x50, 0x4f, 0x54, 0x45, 0x4e, 0x54, 0x10, 0x02, 0x2a, 0x09, 0x08, 0xe8, 0x07, - 0x10, 0x80, 0x80, 0x80, 0x80, 0x02, 0x22, 0x9a, 0x03, 0x0a, 0x13, 0x55, 0x6e, 0x69, 0x6e, 0x74, - 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x41, - 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2d, 0x2e, 0x67, - 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x55, - 0x6e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x72, 0x65, 0x74, 0x65, 0x64, 0x4f, 0x70, 0x74, 0x69, - 0x6f, 0x6e, 0x2e, 0x4e, 0x61, 0x6d, 0x65, 0x50, 0x61, 0x72, 0x74, 0x52, 0x04, 0x6e, 0x61, 0x6d, - 0x65, 0x12, 0x29, 0x0a, 0x10, 0x69, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x5f, - 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x69, 0x64, 0x65, - 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x2c, 0x0a, 0x12, - 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x69, 0x6e, 0x74, 0x5f, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x10, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, - 0x76, 0x65, 0x49, 0x6e, 0x74, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x2c, 0x0a, 0x12, 0x6e, 0x65, - 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x69, 0x6e, 0x74, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, - 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x52, 0x10, 0x6e, 0x65, 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, - 0x49, 0x6e, 0x74, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x64, 
0x6f, 0x75, 0x62, - 0x6c, 0x65, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x01, 0x52, 0x0b, - 0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x73, - 0x74, 0x72, 0x69, 0x6e, 0x67, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, - 0x0c, 0x52, 0x0b, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x27, - 0x0a, 0x0f, 0x61, 0x67, 0x67, 0x72, 0x65, 0x67, 0x61, 0x74, 0x65, 0x5f, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x61, 0x67, 0x67, 0x72, 0x65, 0x67, 0x61, - 0x74, 0x65, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x1a, 0x4a, 0x0a, 0x08, 0x4e, 0x61, 0x6d, 0x65, 0x50, - 0x61, 0x72, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x6e, 0x61, 0x6d, 0x65, 0x5f, 0x70, 0x61, 0x72, 0x74, - 0x18, 0x01, 0x20, 0x02, 0x28, 0x09, 0x52, 0x08, 0x6e, 0x61, 0x6d, 0x65, 0x50, 0x61, 0x72, 0x74, - 0x12, 0x21, 0x0a, 0x0c, 0x69, 0x73, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, - 0x18, 0x02, 0x20, 0x02, 0x28, 0x08, 0x52, 0x0b, 0x69, 0x73, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, - 0x69, 0x6f, 0x6e, 0x22, 0xa7, 0x02, 0x0a, 0x0e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, - 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x44, 0x0a, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, - 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x28, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, - 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x6f, 0x75, 0x72, 0x63, - 0x65, 0x43, 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x4c, 0x6f, 0x63, 0x61, 0x74, 0x69, - 0x6f, 0x6e, 0x52, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x1a, 0xce, 0x01, 0x0a, - 0x08, 0x4c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x04, 0x70, 0x61, 0x74, - 0x68, 0x18, 0x01, 0x20, 0x03, 0x28, 0x05, 0x42, 0x02, 0x10, 0x01, 0x52, 0x04, 0x70, 0x61, 0x74, - 0x68, 0x12, 0x16, 0x0a, 0x04, 0x73, 0x70, 0x61, 0x6e, 0x18, 0x02, 0x20, 0x03, 0x28, 0x05, 0x42, - 0x02, 0x10, 0x01, 
0x52, 0x04, 0x73, 0x70, 0x61, 0x6e, 0x12, 0x29, 0x0a, 0x10, 0x6c, 0x65, 0x61, - 0x64, 0x69, 0x6e, 0x67, 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x03, 0x20, - 0x01, 0x28, 0x09, 0x52, 0x0f, 0x6c, 0x65, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6d, 0x6d, - 0x65, 0x6e, 0x74, 0x73, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x72, 0x61, 0x69, 0x6c, 0x69, 0x6e, 0x67, - 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x10, 0x74, 0x72, 0x61, 0x69, 0x6c, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, - 0x73, 0x12, 0x3a, 0x0a, 0x19, 0x6c, 0x65, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x5f, 0x64, 0x65, 0x74, - 0x61, 0x63, 0x68, 0x65, 0x64, 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x06, - 0x20, 0x03, 0x28, 0x09, 0x52, 0x17, 0x6c, 0x65, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x44, 0x65, 0x74, - 0x61, 0x63, 0x68, 0x65, 0x64, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x22, 0xd1, 0x01, - 0x0a, 0x11, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x43, 0x6f, 0x64, 0x65, 0x49, - 0x6e, 0x66, 0x6f, 0x12, 0x4d, 0x0a, 0x0a, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, - 0x6e, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, - 0x74, 0x65, 0x64, 0x43, 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x41, 0x6e, 0x6e, 0x6f, - 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0a, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, - 0x6f, 0x6e, 0x1a, 0x6d, 0x0a, 0x0a, 0x41, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, - 0x12, 0x16, 0x0a, 0x04, 0x70, 0x61, 0x74, 0x68, 0x18, 0x01, 0x20, 0x03, 0x28, 0x05, 0x42, 0x02, - 0x10, 0x01, 0x52, 0x04, 0x70, 0x61, 0x74, 0x68, 0x12, 0x1f, 0x0a, 0x0b, 0x73, 0x6f, 0x75, 0x72, - 0x63, 0x65, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x73, - 0x6f, 0x75, 0x72, 0x63, 0x65, 0x46, 0x69, 0x6c, 0x65, 
0x12, 0x14, 0x0a, 0x05, 0x62, 0x65, 0x67, - 0x69, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x62, 0x65, 0x67, 0x69, 0x6e, 0x12, - 0x10, 0x0a, 0x03, 0x65, 0x6e, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x65, 0x6e, - 0x64, 0x42, 0x7e, 0x0a, 0x13, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, + 0x69, 0x6f, 0x6e, 0x2e, 0x4e, 0x61, 0x6d, 0x65, 0x50, 0x61, 0x72, 0x74, 0x52, 0x04, 0x6e, 0x61, + 0x6d, 0x65, 0x12, 0x29, 0x0a, 0x10, 0x69, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, + 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x69, 0x64, + 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x2c, 0x0a, + 0x12, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x69, 0x6e, 0x74, 0x5f, 0x76, 0x61, + 0x6c, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x10, 0x70, 0x6f, 0x73, 0x69, 0x74, + 0x69, 0x76, 0x65, 0x49, 0x6e, 0x74, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x2c, 0x0a, 0x12, 0x6e, + 0x65, 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x69, 0x6e, 0x74, 0x5f, 0x76, 0x61, 0x6c, 0x75, + 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x52, 0x10, 0x6e, 0x65, 0x67, 0x61, 0x74, 0x69, 0x76, + 0x65, 0x49, 0x6e, 0x74, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x64, 0x6f, 0x75, + 0x62, 0x6c, 0x65, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x01, 0x52, + 0x0b, 0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x21, 0x0a, 0x0c, + 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x5f, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, + 0x28, 0x0c, 0x52, 0x0b, 0x73, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, + 0x27, 0x0a, 0x0f, 0x61, 0x67, 0x67, 0x72, 0x65, 0x67, 0x61, 0x74, 0x65, 0x5f, 0x76, 0x61, 0x6c, + 0x75, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x61, 0x67, 0x67, 0x72, 0x65, 0x67, + 0x61, 0x74, 0x65, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x1a, 0x4a, 0x0a, 0x08, 0x4e, 0x61, 0x6d, 
0x65, + 0x50, 0x61, 0x72, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x6e, 0x61, 0x6d, 0x65, 0x5f, 0x70, 0x61, 0x72, + 0x74, 0x18, 0x01, 0x20, 0x02, 0x28, 0x09, 0x52, 0x08, 0x6e, 0x61, 0x6d, 0x65, 0x50, 0x61, 0x72, + 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x69, 0x73, 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, + 0x6e, 0x18, 0x02, 0x20, 0x02, 0x28, 0x08, 0x52, 0x0b, 0x69, 0x73, 0x45, 0x78, 0x74, 0x65, 0x6e, + 0x73, 0x69, 0x6f, 0x6e, 0x22, 0xa7, 0x02, 0x0a, 0x0e, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, + 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x44, 0x0a, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, + 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x28, 0x2e, 0x67, 0x6f, 0x6f, 0x67, + 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x6f, 0x75, 0x72, + 0x63, 0x65, 0x43, 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x4c, 0x6f, 0x63, 0x61, 0x74, + 0x69, 0x6f, 0x6e, 0x52, 0x08, 0x6c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x1a, 0xce, 0x01, + 0x0a, 0x08, 0x4c, 0x6f, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x04, 0x70, 0x61, + 0x74, 0x68, 0x18, 0x01, 0x20, 0x03, 0x28, 0x05, 0x42, 0x02, 0x10, 0x01, 0x52, 0x04, 0x70, 0x61, + 0x74, 0x68, 0x12, 0x16, 0x0a, 0x04, 0x73, 0x70, 0x61, 0x6e, 0x18, 0x02, 0x20, 0x03, 0x28, 0x05, + 0x42, 0x02, 0x10, 0x01, 0x52, 0x04, 0x73, 0x70, 0x61, 0x6e, 0x12, 0x29, 0x0a, 0x10, 0x6c, 0x65, + 0x61, 0x64, 0x69, 0x6e, 0x67, 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x03, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x0f, 0x6c, 0x65, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6d, + 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x72, 0x61, 0x69, 0x6c, 0x69, 0x6e, + 0x67, 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, + 0x52, 0x10, 0x74, 0x72, 0x61, 0x69, 0x6c, 0x69, 0x6e, 0x67, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, + 0x74, 0x73, 0x12, 0x3a, 0x0a, 0x19, 0x6c, 0x65, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x5f, 0x64, 0x65, + 0x74, 0x61, 0x63, 0x68, 0x65, 
0x64, 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, + 0x06, 0x20, 0x03, 0x28, 0x09, 0x52, 0x17, 0x6c, 0x65, 0x61, 0x64, 0x69, 0x6e, 0x67, 0x44, 0x65, + 0x74, 0x61, 0x63, 0x68, 0x65, 0x64, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x22, 0xd0, + 0x02, 0x0a, 0x11, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x64, 0x43, 0x6f, 0x64, 0x65, + 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x4d, 0x0a, 0x0a, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, + 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, + 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, + 0x61, 0x74, 0x65, 0x64, 0x43, 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x41, 0x6e, 0x6e, + 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0a, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, + 0x69, 0x6f, 0x6e, 0x1a, 0xeb, 0x01, 0x0a, 0x0a, 0x41, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, + 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x04, 0x70, 0x61, 0x74, 0x68, 0x18, 0x01, 0x20, 0x03, 0x28, 0x05, + 0x42, 0x02, 0x10, 0x01, 0x52, 0x04, 0x70, 0x61, 0x74, 0x68, 0x12, 0x1f, 0x0a, 0x0b, 0x73, 0x6f, + 0x75, 0x72, 0x63, 0x65, 0x5f, 0x66, 0x69, 0x6c, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, + 0x0a, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x46, 0x69, 0x6c, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x62, + 0x65, 0x67, 0x69, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x62, 0x65, 0x67, 0x69, + 0x6e, 0x12, 0x10, 0x0a, 0x03, 0x65, 0x6e, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, + 0x65, 0x6e, 0x64, 0x12, 0x52, 0x0a, 0x08, 0x73, 0x65, 0x6d, 0x61, 0x6e, 0x74, 0x69, 0x63, 0x18, + 0x05, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x36, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, + 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, + 0x64, 0x43, 0x6f, 0x64, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x41, 0x6e, 0x6e, 0x6f, 0x74, 0x61, + 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x53, 0x65, 0x6d, 0x61, 0x6e, 0x74, 
0x69, 0x63, 0x52, 0x08, 0x73, + 0x65, 0x6d, 0x61, 0x6e, 0x74, 0x69, 0x63, 0x22, 0x28, 0x0a, 0x08, 0x53, 0x65, 0x6d, 0x61, 0x6e, + 0x74, 0x69, 0x63, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4e, 0x45, 0x10, 0x00, 0x12, 0x07, 0x0a, + 0x03, 0x53, 0x45, 0x54, 0x10, 0x01, 0x12, 0x09, 0x0a, 0x05, 0x41, 0x4c, 0x49, 0x41, 0x53, 0x10, + 0x02, 0x42, 0x7e, 0x0a, 0x13, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x42, 0x10, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x48, 0x01, 0x5a, 0x2d, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, @@ -3498,7 +3885,7 @@ func file_google_protobuf_descriptor_proto_rawDescGZIP() []byte { return file_google_protobuf_descriptor_proto_rawDescData } -var file_google_protobuf_descriptor_proto_enumTypes = make([]protoimpl.EnumInfo, 6) +var file_google_protobuf_descriptor_proto_enumTypes = make([]protoimpl.EnumInfo, 9) var file_google_protobuf_descriptor_proto_msgTypes = make([]protoimpl.MessageInfo, 27) var file_google_protobuf_descriptor_proto_goTypes = []interface{}{ (FieldDescriptorProto_Type)(0), // 0: google.protobuf.FieldDescriptorProto.Type @@ -3506,84 +3893,90 @@ var file_google_protobuf_descriptor_proto_goTypes = []interface{}{ (FileOptions_OptimizeMode)(0), // 2: google.protobuf.FileOptions.OptimizeMode (FieldOptions_CType)(0), // 3: google.protobuf.FieldOptions.CType (FieldOptions_JSType)(0), // 4: google.protobuf.FieldOptions.JSType - (MethodOptions_IdempotencyLevel)(0), // 5: google.protobuf.MethodOptions.IdempotencyLevel - (*FileDescriptorSet)(nil), // 6: google.protobuf.FileDescriptorSet - (*FileDescriptorProto)(nil), // 7: google.protobuf.FileDescriptorProto - (*DescriptorProto)(nil), // 8: google.protobuf.DescriptorProto - (*ExtensionRangeOptions)(nil), // 9: google.protobuf.ExtensionRangeOptions - (*FieldDescriptorProto)(nil), // 10: 
google.protobuf.FieldDescriptorProto - (*OneofDescriptorProto)(nil), // 11: google.protobuf.OneofDescriptorProto - (*EnumDescriptorProto)(nil), // 12: google.protobuf.EnumDescriptorProto - (*EnumValueDescriptorProto)(nil), // 13: google.protobuf.EnumValueDescriptorProto - (*ServiceDescriptorProto)(nil), // 14: google.protobuf.ServiceDescriptorProto - (*MethodDescriptorProto)(nil), // 15: google.protobuf.MethodDescriptorProto - (*FileOptions)(nil), // 16: google.protobuf.FileOptions - (*MessageOptions)(nil), // 17: google.protobuf.MessageOptions - (*FieldOptions)(nil), // 18: google.protobuf.FieldOptions - (*OneofOptions)(nil), // 19: google.protobuf.OneofOptions - (*EnumOptions)(nil), // 20: google.protobuf.EnumOptions - (*EnumValueOptions)(nil), // 21: google.protobuf.EnumValueOptions - (*ServiceOptions)(nil), // 22: google.protobuf.ServiceOptions - (*MethodOptions)(nil), // 23: google.protobuf.MethodOptions - (*UninterpretedOption)(nil), // 24: google.protobuf.UninterpretedOption - (*SourceCodeInfo)(nil), // 25: google.protobuf.SourceCodeInfo - (*GeneratedCodeInfo)(nil), // 26: google.protobuf.GeneratedCodeInfo - (*DescriptorProto_ExtensionRange)(nil), // 27: google.protobuf.DescriptorProto.ExtensionRange - (*DescriptorProto_ReservedRange)(nil), // 28: google.protobuf.DescriptorProto.ReservedRange - (*EnumDescriptorProto_EnumReservedRange)(nil), // 29: google.protobuf.EnumDescriptorProto.EnumReservedRange - (*UninterpretedOption_NamePart)(nil), // 30: google.protobuf.UninterpretedOption.NamePart - (*SourceCodeInfo_Location)(nil), // 31: google.protobuf.SourceCodeInfo.Location - (*GeneratedCodeInfo_Annotation)(nil), // 32: google.protobuf.GeneratedCodeInfo.Annotation + (FieldOptions_OptionRetention)(0), // 5: google.protobuf.FieldOptions.OptionRetention + (FieldOptions_OptionTargetType)(0), // 6: google.protobuf.FieldOptions.OptionTargetType + (MethodOptions_IdempotencyLevel)(0), // 7: google.protobuf.MethodOptions.IdempotencyLevel + 
(GeneratedCodeInfo_Annotation_Semantic)(0), // 8: google.protobuf.GeneratedCodeInfo.Annotation.Semantic + (*FileDescriptorSet)(nil), // 9: google.protobuf.FileDescriptorSet + (*FileDescriptorProto)(nil), // 10: google.protobuf.FileDescriptorProto + (*DescriptorProto)(nil), // 11: google.protobuf.DescriptorProto + (*ExtensionRangeOptions)(nil), // 12: google.protobuf.ExtensionRangeOptions + (*FieldDescriptorProto)(nil), // 13: google.protobuf.FieldDescriptorProto + (*OneofDescriptorProto)(nil), // 14: google.protobuf.OneofDescriptorProto + (*EnumDescriptorProto)(nil), // 15: google.protobuf.EnumDescriptorProto + (*EnumValueDescriptorProto)(nil), // 16: google.protobuf.EnumValueDescriptorProto + (*ServiceDescriptorProto)(nil), // 17: google.protobuf.ServiceDescriptorProto + (*MethodDescriptorProto)(nil), // 18: google.protobuf.MethodDescriptorProto + (*FileOptions)(nil), // 19: google.protobuf.FileOptions + (*MessageOptions)(nil), // 20: google.protobuf.MessageOptions + (*FieldOptions)(nil), // 21: google.protobuf.FieldOptions + (*OneofOptions)(nil), // 22: google.protobuf.OneofOptions + (*EnumOptions)(nil), // 23: google.protobuf.EnumOptions + (*EnumValueOptions)(nil), // 24: google.protobuf.EnumValueOptions + (*ServiceOptions)(nil), // 25: google.protobuf.ServiceOptions + (*MethodOptions)(nil), // 26: google.protobuf.MethodOptions + (*UninterpretedOption)(nil), // 27: google.protobuf.UninterpretedOption + (*SourceCodeInfo)(nil), // 28: google.protobuf.SourceCodeInfo + (*GeneratedCodeInfo)(nil), // 29: google.protobuf.GeneratedCodeInfo + (*DescriptorProto_ExtensionRange)(nil), // 30: google.protobuf.DescriptorProto.ExtensionRange + (*DescriptorProto_ReservedRange)(nil), // 31: google.protobuf.DescriptorProto.ReservedRange + (*EnumDescriptorProto_EnumReservedRange)(nil), // 32: google.protobuf.EnumDescriptorProto.EnumReservedRange + (*UninterpretedOption_NamePart)(nil), // 33: google.protobuf.UninterpretedOption.NamePart + (*SourceCodeInfo_Location)(nil), // 34: 
google.protobuf.SourceCodeInfo.Location + (*GeneratedCodeInfo_Annotation)(nil), // 35: google.protobuf.GeneratedCodeInfo.Annotation } var file_google_protobuf_descriptor_proto_depIdxs = []int32{ - 7, // 0: google.protobuf.FileDescriptorSet.file:type_name -> google.protobuf.FileDescriptorProto - 8, // 1: google.protobuf.FileDescriptorProto.message_type:type_name -> google.protobuf.DescriptorProto - 12, // 2: google.protobuf.FileDescriptorProto.enum_type:type_name -> google.protobuf.EnumDescriptorProto - 14, // 3: google.protobuf.FileDescriptorProto.service:type_name -> google.protobuf.ServiceDescriptorProto - 10, // 4: google.protobuf.FileDescriptorProto.extension:type_name -> google.protobuf.FieldDescriptorProto - 16, // 5: google.protobuf.FileDescriptorProto.options:type_name -> google.protobuf.FileOptions - 25, // 6: google.protobuf.FileDescriptorProto.source_code_info:type_name -> google.protobuf.SourceCodeInfo - 10, // 7: google.protobuf.DescriptorProto.field:type_name -> google.protobuf.FieldDescriptorProto - 10, // 8: google.protobuf.DescriptorProto.extension:type_name -> google.protobuf.FieldDescriptorProto - 8, // 9: google.protobuf.DescriptorProto.nested_type:type_name -> google.protobuf.DescriptorProto - 12, // 10: google.protobuf.DescriptorProto.enum_type:type_name -> google.protobuf.EnumDescriptorProto - 27, // 11: google.protobuf.DescriptorProto.extension_range:type_name -> google.protobuf.DescriptorProto.ExtensionRange - 11, // 12: google.protobuf.DescriptorProto.oneof_decl:type_name -> google.protobuf.OneofDescriptorProto - 17, // 13: google.protobuf.DescriptorProto.options:type_name -> google.protobuf.MessageOptions - 28, // 14: google.protobuf.DescriptorProto.reserved_range:type_name -> google.protobuf.DescriptorProto.ReservedRange - 24, // 15: google.protobuf.ExtensionRangeOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 10, // 0: google.protobuf.FileDescriptorSet.file:type_name -> google.protobuf.FileDescriptorProto 
+ 11, // 1: google.protobuf.FileDescriptorProto.message_type:type_name -> google.protobuf.DescriptorProto + 15, // 2: google.protobuf.FileDescriptorProto.enum_type:type_name -> google.protobuf.EnumDescriptorProto + 17, // 3: google.protobuf.FileDescriptorProto.service:type_name -> google.protobuf.ServiceDescriptorProto + 13, // 4: google.protobuf.FileDescriptorProto.extension:type_name -> google.protobuf.FieldDescriptorProto + 19, // 5: google.protobuf.FileDescriptorProto.options:type_name -> google.protobuf.FileOptions + 28, // 6: google.protobuf.FileDescriptorProto.source_code_info:type_name -> google.protobuf.SourceCodeInfo + 13, // 7: google.protobuf.DescriptorProto.field:type_name -> google.protobuf.FieldDescriptorProto + 13, // 8: google.protobuf.DescriptorProto.extension:type_name -> google.protobuf.FieldDescriptorProto + 11, // 9: google.protobuf.DescriptorProto.nested_type:type_name -> google.protobuf.DescriptorProto + 15, // 10: google.protobuf.DescriptorProto.enum_type:type_name -> google.protobuf.EnumDescriptorProto + 30, // 11: google.protobuf.DescriptorProto.extension_range:type_name -> google.protobuf.DescriptorProto.ExtensionRange + 14, // 12: google.protobuf.DescriptorProto.oneof_decl:type_name -> google.protobuf.OneofDescriptorProto + 20, // 13: google.protobuf.DescriptorProto.options:type_name -> google.protobuf.MessageOptions + 31, // 14: google.protobuf.DescriptorProto.reserved_range:type_name -> google.protobuf.DescriptorProto.ReservedRange + 27, // 15: google.protobuf.ExtensionRangeOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption 1, // 16: google.protobuf.FieldDescriptorProto.label:type_name -> google.protobuf.FieldDescriptorProto.Label 0, // 17: google.protobuf.FieldDescriptorProto.type:type_name -> google.protobuf.FieldDescriptorProto.Type - 18, // 18: google.protobuf.FieldDescriptorProto.options:type_name -> google.protobuf.FieldOptions - 19, // 19: google.protobuf.OneofDescriptorProto.options:type_name -> 
google.protobuf.OneofOptions - 13, // 20: google.protobuf.EnumDescriptorProto.value:type_name -> google.protobuf.EnumValueDescriptorProto - 20, // 21: google.protobuf.EnumDescriptorProto.options:type_name -> google.protobuf.EnumOptions - 29, // 22: google.protobuf.EnumDescriptorProto.reserved_range:type_name -> google.protobuf.EnumDescriptorProto.EnumReservedRange - 21, // 23: google.protobuf.EnumValueDescriptorProto.options:type_name -> google.protobuf.EnumValueOptions - 15, // 24: google.protobuf.ServiceDescriptorProto.method:type_name -> google.protobuf.MethodDescriptorProto - 22, // 25: google.protobuf.ServiceDescriptorProto.options:type_name -> google.protobuf.ServiceOptions - 23, // 26: google.protobuf.MethodDescriptorProto.options:type_name -> google.protobuf.MethodOptions + 21, // 18: google.protobuf.FieldDescriptorProto.options:type_name -> google.protobuf.FieldOptions + 22, // 19: google.protobuf.OneofDescriptorProto.options:type_name -> google.protobuf.OneofOptions + 16, // 20: google.protobuf.EnumDescriptorProto.value:type_name -> google.protobuf.EnumValueDescriptorProto + 23, // 21: google.protobuf.EnumDescriptorProto.options:type_name -> google.protobuf.EnumOptions + 32, // 22: google.protobuf.EnumDescriptorProto.reserved_range:type_name -> google.protobuf.EnumDescriptorProto.EnumReservedRange + 24, // 23: google.protobuf.EnumValueDescriptorProto.options:type_name -> google.protobuf.EnumValueOptions + 18, // 24: google.protobuf.ServiceDescriptorProto.method:type_name -> google.protobuf.MethodDescriptorProto + 25, // 25: google.protobuf.ServiceDescriptorProto.options:type_name -> google.protobuf.ServiceOptions + 26, // 26: google.protobuf.MethodDescriptorProto.options:type_name -> google.protobuf.MethodOptions 2, // 27: google.protobuf.FileOptions.optimize_for:type_name -> google.protobuf.FileOptions.OptimizeMode - 24, // 28: google.protobuf.FileOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 24, // 29: 
google.protobuf.MessageOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 27, // 28: google.protobuf.FileOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 27, // 29: google.protobuf.MessageOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption 3, // 30: google.protobuf.FieldOptions.ctype:type_name -> google.protobuf.FieldOptions.CType 4, // 31: google.protobuf.FieldOptions.jstype:type_name -> google.protobuf.FieldOptions.JSType - 24, // 32: google.protobuf.FieldOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 24, // 33: google.protobuf.OneofOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 24, // 34: google.protobuf.EnumOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 24, // 35: google.protobuf.EnumValueOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 24, // 36: google.protobuf.ServiceOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 5, // 37: google.protobuf.MethodOptions.idempotency_level:type_name -> google.protobuf.MethodOptions.IdempotencyLevel - 24, // 38: google.protobuf.MethodOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption - 30, // 39: google.protobuf.UninterpretedOption.name:type_name -> google.protobuf.UninterpretedOption.NamePart - 31, // 40: google.protobuf.SourceCodeInfo.location:type_name -> google.protobuf.SourceCodeInfo.Location - 32, // 41: google.protobuf.GeneratedCodeInfo.annotation:type_name -> google.protobuf.GeneratedCodeInfo.Annotation - 9, // 42: google.protobuf.DescriptorProto.ExtensionRange.options:type_name -> google.protobuf.ExtensionRangeOptions - 43, // [43:43] is the sub-list for method output_type - 43, // [43:43] is the sub-list for method input_type - 43, // [43:43] is the sub-list for extension type_name - 43, // [43:43] is the sub-list for extension extendee - 0, // 
[0:43] is the sub-list for field type_name + 5, // 32: google.protobuf.FieldOptions.retention:type_name -> google.protobuf.FieldOptions.OptionRetention + 6, // 33: google.protobuf.FieldOptions.target:type_name -> google.protobuf.FieldOptions.OptionTargetType + 27, // 34: google.protobuf.FieldOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 27, // 35: google.protobuf.OneofOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 27, // 36: google.protobuf.EnumOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 27, // 37: google.protobuf.EnumValueOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 27, // 38: google.protobuf.ServiceOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 7, // 39: google.protobuf.MethodOptions.idempotency_level:type_name -> google.protobuf.MethodOptions.IdempotencyLevel + 27, // 40: google.protobuf.MethodOptions.uninterpreted_option:type_name -> google.protobuf.UninterpretedOption + 33, // 41: google.protobuf.UninterpretedOption.name:type_name -> google.protobuf.UninterpretedOption.NamePart + 34, // 42: google.protobuf.SourceCodeInfo.location:type_name -> google.protobuf.SourceCodeInfo.Location + 35, // 43: google.protobuf.GeneratedCodeInfo.annotation:type_name -> google.protobuf.GeneratedCodeInfo.Annotation + 12, // 44: google.protobuf.DescriptorProto.ExtensionRange.options:type_name -> google.protobuf.ExtensionRangeOptions + 8, // 45: google.protobuf.GeneratedCodeInfo.Annotation.semantic:type_name -> google.protobuf.GeneratedCodeInfo.Annotation.Semantic + 46, // [46:46] is the sub-list for method output_type + 46, // [46:46] is the sub-list for method input_type + 46, // [46:46] is the sub-list for extension type_name + 46, // [46:46] is the sub-list for extension extendee + 0, // [0:46] is the sub-list for field type_name } func init() { file_google_protobuf_descriptor_proto_init() } @@ -3940,7 +4333,7 
@@ func file_google_protobuf_descriptor_proto_init() { File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: file_google_protobuf_descriptor_proto_rawDesc, - NumEnums: 6, + NumEnums: 9, NumMessages: 27, NumExtensions: 0, NumServices: 0, diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/anypb/any.pb.go b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/anypb/any.pb.go index 8c10797b905..a6c7a33f333 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/anypb/any.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/anypb/any.pb.go @@ -37,8 +37,7 @@ // It is functionally a tuple of the full name of the remote message type and // the serialized bytes of the remote message value. // -// -// Constructing an Any +// # Constructing an Any // // An Any message containing another message value is constructed using New: // @@ -48,8 +47,7 @@ // } // ... // make use of any // -// -// Unmarshaling an Any +// # Unmarshaling an Any // // With a populated Any message, the underlying message can be serialized into // a remote concrete message value in a few ways. @@ -95,8 +93,7 @@ // listed in the case clauses are linked into the Go binary and therefore also // registered in the global registry. // -// -// Type checking an Any +// # Type checking an Any // // In order to type check whether an Any message represents some other message, // then use the MessageIs method: @@ -115,7 +112,6 @@ // } // ... // make use of m // } -// package anypb import ( @@ -136,45 +132,49 @@ import ( // // Example 1: Pack and unpack a message in C++. // -// Foo foo = ...; -// Any any; -// any.PackFrom(foo); -// ... -// if (any.UnpackTo(&foo)) { -// ... -// } +// Foo foo = ...; +// Any any; +// any.PackFrom(foo); +// ... +// if (any.UnpackTo(&foo)) { +// ... +// } // // Example 2: Pack and unpack a message in Java. // -// Foo foo = ...; -// Any any = Any.pack(foo); -// ... 
-// if (any.is(Foo.class)) { -// foo = any.unpack(Foo.class); -// } -// -// Example 3: Pack and unpack a message in Python. -// -// foo = Foo(...) -// any = Any() -// any.Pack(foo) -// ... -// if any.Is(Foo.DESCRIPTOR): -// any.Unpack(foo) -// ... -// -// Example 4: Pack and unpack a message in Go -// -// foo := &pb.Foo{...} -// any, err := anypb.New(foo) -// if err != nil { -// ... -// } -// ... -// foo := &pb.Foo{} -// if err := any.UnmarshalTo(foo); err != nil { -// ... -// } +// Foo foo = ...; +// Any any = Any.pack(foo); +// ... +// if (any.is(Foo.class)) { +// foo = any.unpack(Foo.class); +// } +// // or ... +// if (any.isSameTypeAs(Foo.getDefaultInstance())) { +// foo = any.unpack(Foo.getDefaultInstance()); +// } +// +// Example 3: Pack and unpack a message in Python. +// +// foo = Foo(...) +// any = Any() +// any.Pack(foo) +// ... +// if any.Is(Foo.DESCRIPTOR): +// any.Unpack(foo) +// ... +// +// Example 4: Pack and unpack a message in Go +// +// foo := &pb.Foo{...} +// any, err := anypb.New(foo) +// if err != nil { +// ... +// } +// ... +// foo := &pb.Foo{} +// if err := any.UnmarshalTo(foo); err != nil { +// ... +// } // // The pack methods provided by protobuf library will by default use // 'type.googleapis.com/full.type.name' as the type URL and the unpack @@ -182,35 +182,33 @@ import ( // in the type URL, for example "foo.bar.com/x/y.z" will yield type // name "y.z". // +// # JSON // -// JSON -// ==== // The JSON representation of an `Any` value uses the regular // representation of the deserialized, embedded message, with an // additional field `@type` which contains the type URL. 
Example: // -// package google.profile; -// message Person { -// string first_name = 1; -// string last_name = 2; -// } +// package google.profile; +// message Person { +// string first_name = 1; +// string last_name = 2; +// } // -// { -// "@type": "type.googleapis.com/google.profile.Person", -// "firstName": , -// "lastName": -// } +// { +// "@type": "type.googleapis.com/google.profile.Person", +// "firstName": , +// "lastName": +// } // // If the embedded message type is well-known and has a custom JSON // representation, that representation will be embedded adding a field // `value` which holds the custom JSON in addition to the `@type` // field. Example (for message [google.protobuf.Duration][]): // -// { -// "@type": "type.googleapis.com/google.protobuf.Duration", -// "value": "1.212s" -// } -// +// { +// "@type": "type.googleapis.com/google.protobuf.Duration", +// "value": "1.212s" +// } type Any struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -228,14 +226,14 @@ type Any struct { // scheme `http`, `https`, or no scheme, one can optionally set up a type // server that maps type URLs to message definitions as follows: // - // * If no scheme is provided, `https` is assumed. - // * An HTTP GET on the URL must yield a [google.protobuf.Type][] - // value in binary format, or produce an error. - // * Applications are allowed to cache lookup results based on the - // URL, or have them precompiled into a binary to avoid any - // lookup. Therefore, binary compatibility needs to be preserved - // on changes to types. (Use versioned type names to manage - // breaking changes.) + // - If no scheme is provided, `https` is assumed. + // - An HTTP GET on the URL must yield a [google.protobuf.Type][] + // value in binary format, or produce an error. + // - Applications are allowed to cache lookup results based on the + // URL, or have them precompiled into a binary to avoid any + // lookup. 
Therefore, binary compatibility needs to be preserved + // on changes to types. (Use versioned type names to manage + // breaking changes.) // // Note: this functionality is not currently available in the official // protobuf release, and it is not used for type URLs beginning with @@ -243,7 +241,6 @@ type Any struct { // // Schemes other than `http`, `https` (or the empty scheme) might be // used with implementation specific semantics. - // TypeUrl string `protobuf:"bytes,1,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` // Must be a valid serialized protocol buffer of the above specified type. Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/durationpb/duration.pb.go b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/durationpb/duration.pb.go index a583ca2f6c7..df709a8dd4c 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/durationpb/duration.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/durationpb/duration.pb.go @@ -35,8 +35,7 @@ // // The Duration message represents a signed span of time. // -// -// Conversion to a Go Duration +// # Conversion to a Go Duration // // The AsDuration method can be used to convert a Duration message to a // standard Go time.Duration value: @@ -65,15 +64,13 @@ // the resulting value to the closest representable value (e.g., math.MaxInt64 // for positive overflow and math.MinInt64 for negative overflow). // -// -// Conversion from a Go Duration +// # Conversion from a Go Duration // // The durationpb.New function can be used to construct a Duration message // from a standard Go time.Duration value: // // dur := durationpb.New(d) // ... // make use of d as a *durationpb.Duration -// package durationpb import ( @@ -96,43 +93,43 @@ import ( // // Example 1: Compute Duration from two Timestamps in pseudo code. 
// -// Timestamp start = ...; -// Timestamp end = ...; -// Duration duration = ...; +// Timestamp start = ...; +// Timestamp end = ...; +// Duration duration = ...; // -// duration.seconds = end.seconds - start.seconds; -// duration.nanos = end.nanos - start.nanos; +// duration.seconds = end.seconds - start.seconds; +// duration.nanos = end.nanos - start.nanos; // -// if (duration.seconds < 0 && duration.nanos > 0) { -// duration.seconds += 1; -// duration.nanos -= 1000000000; -// } else if (duration.seconds > 0 && duration.nanos < 0) { -// duration.seconds -= 1; -// duration.nanos += 1000000000; -// } +// if (duration.seconds < 0 && duration.nanos > 0) { +// duration.seconds += 1; +// duration.nanos -= 1000000000; +// } else if (duration.seconds > 0 && duration.nanos < 0) { +// duration.seconds -= 1; +// duration.nanos += 1000000000; +// } // // Example 2: Compute Timestamp from Timestamp + Duration in pseudo code. // -// Timestamp start = ...; -// Duration duration = ...; -// Timestamp end = ...; +// Timestamp start = ...; +// Duration duration = ...; +// Timestamp end = ...; // -// end.seconds = start.seconds + duration.seconds; -// end.nanos = start.nanos + duration.nanos; +// end.seconds = start.seconds + duration.seconds; +// end.nanos = start.nanos + duration.nanos; // -// if (end.nanos < 0) { -// end.seconds -= 1; -// end.nanos += 1000000000; -// } else if (end.nanos >= 1000000000) { -// end.seconds += 1; -// end.nanos -= 1000000000; -// } +// if (end.nanos < 0) { +// end.seconds -= 1; +// end.nanos += 1000000000; +// } else if (end.nanos >= 1000000000) { +// end.seconds += 1; +// end.nanos -= 1000000000; +// } // // Example 3: Compute Duration from datetime.timedelta in Python. 
// -// td = datetime.timedelta(days=3, minutes=10) -// duration = Duration() -// duration.FromTimedelta(td) +// td = datetime.timedelta(days=3, minutes=10) +// duration = Duration() +// duration.FromTimedelta(td) // // # JSON Mapping // @@ -143,8 +140,6 @@ import ( // encoded in JSON format as "3s", while 3 seconds and 1 nanosecond should // be expressed in JSON format as "3.000000001s", and 3 seconds and 1 // microsecond should be expressed in JSON format as "3.000001s". -// -// type Duration struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/emptypb/empty.pb.go b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/emptypb/empty.pb.go index e7fcea31f62..9a7277ba394 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/emptypb/empty.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/emptypb/empty.pb.go @@ -44,11 +44,9 @@ import ( // empty messages in your APIs. A typical example is to use it as the request // or the response type of an API method. For instance: // -// service Foo { -// rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); -// } -// -// The JSON representation for `Empty` is empty JSON object `{}`. 
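The sign normalization in the Duration pseudo code above (Examples 1 and 2) keeps `seconds` and `nanos` agreeing in sign by moving one whole second (1e9 nanoseconds) between the fields. A minimal runnable sketch; `normalizeDuration` is an illustrative name, not a function exported by the generated `durationpb` package:

```go
package main

import "fmt"

// normalizeDuration fixes up a (seconds, nanos) pair after field-wise
// timestamp subtraction: if the two fields disagree in sign, one whole
// second is shifted into nanos so both carry the same sign.
func normalizeDuration(seconds int64, nanos int32) (int64, int32) {
	if seconds < 0 && nanos > 0 {
		seconds++
		nanos -= 1000000000
	} else if seconds > 0 && nanos < 0 {
		seconds--
		nanos += 1000000000
	}
	return seconds, nanos
}

func main() {
	// end (10s, 200ns) minus start (12s, 100ns) gives the raw pair
	// (-2s, +100ns), which normalizes to (-1s, -999999900ns), i.e. -1.9999999s.
	s, n := normalizeDuration(10-12, 200-100)
	fmt.Println(s, n) // -1 -999999900
}
```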
+// service Foo { +// rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); +// } type Empty struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache diff --git a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/timestamppb/timestamp.pb.go b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/timestamppb/timestamp.pb.go index c9ae92132aa..61f69fc11b1 100644 --- a/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/timestamppb/timestamp.pb.go +++ b/.ci/providerlint/vendor/google.golang.org/protobuf/types/known/timestamppb/timestamp.pb.go @@ -36,8 +36,7 @@ // The Timestamp message represents a timestamp, // an instant in time since the Unix epoch (January 1st, 1970). // -// -// Conversion to a Go Time +// # Conversion to a Go Time // // The AsTime method can be used to convert a Timestamp message to a // standard Go time.Time value in UTC: @@ -59,8 +58,7 @@ // ... // handle error // } // -// -// Conversion from a Go Time +// # Conversion from a Go Time // // The timestamppb.New function can be used to construct a Timestamp message // from a standard Go time.Time value: @@ -72,7 +70,6 @@ // // ts := timestamppb.Now() // ... // make use of ts as a *timestamppb.Timestamp -// package timestamppb import ( @@ -101,52 +98,50 @@ import ( // // Example 1: Compute Timestamp from POSIX `time()`. // -// Timestamp timestamp; -// timestamp.set_seconds(time(NULL)); -// timestamp.set_nanos(0); +// Timestamp timestamp; +// timestamp.set_seconds(time(NULL)); +// timestamp.set_nanos(0); // // Example 2: Compute Timestamp from POSIX `gettimeofday()`. 
// -// struct timeval tv; -// gettimeofday(&tv, NULL); +// struct timeval tv; +// gettimeofday(&tv, NULL); // -// Timestamp timestamp; -// timestamp.set_seconds(tv.tv_sec); -// timestamp.set_nanos(tv.tv_usec * 1000); +// Timestamp timestamp; +// timestamp.set_seconds(tv.tv_sec); +// timestamp.set_nanos(tv.tv_usec * 1000); // // Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`. // -// FILETIME ft; -// GetSystemTimeAsFileTime(&ft); -// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime; +// FILETIME ft; +// GetSystemTimeAsFileTime(&ft); +// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime; // -// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z -// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z. -// Timestamp timestamp; -// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL)); -// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100)); +// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z +// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z. +// Timestamp timestamp; +// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL)); +// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100)); // // Example 4: Compute Timestamp from Java `System.currentTimeMillis()`. // -// long millis = System.currentTimeMillis(); -// -// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000) -// .setNanos((int) ((millis % 1000) * 1000000)).build(); +// long millis = System.currentTimeMillis(); // +// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000) +// .setNanos((int) ((millis % 1000) * 1000000)).build(); // // Example 5: Compute Timestamp from Java `Instant.now()`. 
// -// Instant now = Instant.now(); -// -// Timestamp timestamp = -// Timestamp.newBuilder().setSeconds(now.getEpochSecond()) -// .setNanos(now.getNano()).build(); +// Instant now = Instant.now(); // +// Timestamp timestamp = +// Timestamp.newBuilder().setSeconds(now.getEpochSecond()) +// .setNanos(now.getNano()).build(); // // Example 6: Compute Timestamp from current time in Python. // -// timestamp = Timestamp() -// timestamp.GetCurrentTime() +// timestamp = Timestamp() +// timestamp.GetCurrentTime() // // # JSON Mapping // @@ -174,8 +169,6 @@ import ( // the Joda Time's [`ISODateTimeFormat.dateTime()`]( // http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime%2D%2D // ) to obtain a formatter capable of generating timestamps in this format. -// -// type Timestamp struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache diff --git a/.ci/providerlint/vendor/modules.txt b/.ci/providerlint/vendor/modules.txt index df89de5337c..71e3cb9b540 100644 --- a/.ci/providerlint/vendor/modules.txt +++ b/.ci/providerlint/vendor/modules.txt @@ -1,10 +1,30 @@ +# github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 +## explicit; go 1.13 +github.com/ProtonMail/go-crypto/bitcurves +github.com/ProtonMail/go-crypto/brainpool +github.com/ProtonMail/go-crypto/eax +github.com/ProtonMail/go-crypto/internal/byteutil +github.com/ProtonMail/go-crypto/ocb +github.com/ProtonMail/go-crypto/openpgp +github.com/ProtonMail/go-crypto/openpgp/aes/keywrap +github.com/ProtonMail/go-crypto/openpgp/armor +github.com/ProtonMail/go-crypto/openpgp/ecdh +github.com/ProtonMail/go-crypto/openpgp/ecdsa +github.com/ProtonMail/go-crypto/openpgp/eddsa +github.com/ProtonMail/go-crypto/openpgp/elgamal +github.com/ProtonMail/go-crypto/openpgp/errors +github.com/ProtonMail/go-crypto/openpgp/internal/algorithm +github.com/ProtonMail/go-crypto/openpgp/internal/ecc +github.com/ProtonMail/go-crypto/openpgp/internal/encoding 
+github.com/ProtonMail/go-crypto/openpgp/packet +github.com/ProtonMail/go-crypto/openpgp/s2k # github.com/agext/levenshtein v1.2.2 ## explicit github.com/agext/levenshtein # github.com/apparentlymart/go-textseg/v13 v13.0.0 ## explicit; go 1.16 github.com/apparentlymart/go-textseg/v13/textseg -# github.com/aws/aws-sdk-go v1.44.261 +# github.com/aws/aws-sdk-go v1.44.299 ## explicit; go 1.11 github.com/aws/aws-sdk-go/aws/awserr github.com/aws/aws-sdk-go/aws/endpoints @@ -163,10 +183,24 @@ github.com/bflad/tfproviderlint/xpasses/XR007 github.com/bflad/tfproviderlint/xpasses/XR008 github.com/bflad/tfproviderlint/xpasses/XS001 github.com/bflad/tfproviderlint/xpasses/XS002 +# github.com/cloudflare/circl v1.3.3 +## explicit; go 1.19 +github.com/cloudflare/circl/dh/x25519 +github.com/cloudflare/circl/dh/x448 +github.com/cloudflare/circl/ecc/goldilocks +github.com/cloudflare/circl/internal/conv +github.com/cloudflare/circl/internal/sha3 +github.com/cloudflare/circl/math +github.com/cloudflare/circl/math/fp25519 +github.com/cloudflare/circl/math/fp448 +github.com/cloudflare/circl/math/mlsbset +github.com/cloudflare/circl/sign +github.com/cloudflare/circl/sign/ed25519 +github.com/cloudflare/circl/sign/ed448 # github.com/fatih/color v1.13.0 ## explicit; go 1.13 github.com/fatih/color -# github.com/golang/protobuf v1.5.2 +# github.com/golang/protobuf v1.5.3 ## explicit; go 1.9 github.com/golang/protobuf/jsonpb github.com/golang/protobuf/proto @@ -199,13 +233,13 @@ github.com/hashicorp/go-cty/cty/gocty github.com/hashicorp/go-cty/cty/json github.com/hashicorp/go-cty/cty/msgpack github.com/hashicorp/go-cty/cty/set -# github.com/hashicorp/go-hclog v1.4.0 +# github.com/hashicorp/go-hclog v1.5.0 ## explicit; go 1.13 github.com/hashicorp/go-hclog # github.com/hashicorp/go-multierror v1.1.1 ## explicit; go 1.13 github.com/hashicorp/go-multierror -# github.com/hashicorp/go-plugin v1.4.8 +# github.com/hashicorp/go-plugin v1.4.10 ## explicit; go 1.17 github.com/hashicorp/go-plugin 
github.com/hashicorp/go-plugin/internal/plugin @@ -215,8 +249,8 @@ github.com/hashicorp/go-uuid # github.com/hashicorp/go-version v1.6.0 ## explicit github.com/hashicorp/go-version -# github.com/hashicorp/hc-install v0.5.0 -## explicit; go 1.16 +# github.com/hashicorp/hc-install v0.5.2 +## explicit; go 1.18 github.com/hashicorp/hc-install github.com/hashicorp/hc-install/checkpoint github.com/hashicorp/hc-install/errors @@ -231,7 +265,7 @@ github.com/hashicorp/hc-install/product github.com/hashicorp/hc-install/releases github.com/hashicorp/hc-install/src github.com/hashicorp/hc-install/version -# github.com/hashicorp/hcl/v2 v2.16.2 +# github.com/hashicorp/hcl/v2 v2.17.0 ## explicit; go 1.18 github.com/hashicorp/hcl/v2 github.com/hashicorp/hcl/v2/ext/customdecode @@ -243,11 +277,11 @@ github.com/hashicorp/logutils ## explicit; go 1.18 github.com/hashicorp/terraform-exec/internal/version github.com/hashicorp/terraform-exec/tfexec -# github.com/hashicorp/terraform-json v0.16.0 +# github.com/hashicorp/terraform-json v0.17.0 ## explicit; go 1.18 github.com/hashicorp/terraform-json -# github.com/hashicorp/terraform-plugin-go v0.14.3 -## explicit; go 1.18 +# github.com/hashicorp/terraform-plugin-go v0.16.0 +## explicit; go 1.19 github.com/hashicorp/terraform-plugin-go/internal/logging github.com/hashicorp/terraform-plugin-go/tfprotov5 github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag @@ -264,14 +298,14 @@ github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/tfplugin6 github.com/hashicorp/terraform-plugin-go/tfprotov6/internal/toproto github.com/hashicorp/terraform-plugin-go/tfprotov6/tf6server github.com/hashicorp/terraform-plugin-go/tftypes -# github.com/hashicorp/terraform-plugin-log v0.8.0 -## explicit; go 1.18 +# github.com/hashicorp/terraform-plugin-log v0.9.0 +## explicit; go 1.19 github.com/hashicorp/terraform-plugin-log/internal/fieldutils github.com/hashicorp/terraform-plugin-log/internal/hclogutils 
github.com/hashicorp/terraform-plugin-log/internal/logging github.com/hashicorp/terraform-plugin-log/tflog github.com/hashicorp/terraform-plugin-log/tfsdklog -# github.com/hashicorp/terraform-plugin-sdk/v2 v2.26.1 +# github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0 ## explicit; go 1.19 github.com/hashicorp/terraform-plugin-sdk/v2/diag github.com/hashicorp/terraform-plugin-sdk/v2/helper/id @@ -293,11 +327,11 @@ github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfdiags github.com/hashicorp/terraform-plugin-sdk/v2/meta github.com/hashicorp/terraform-plugin-sdk/v2/plugin github.com/hashicorp/terraform-plugin-sdk/v2/terraform -# github.com/hashicorp/terraform-registry-address v0.1.0 -## explicit; go 1.14 +# github.com/hashicorp/terraform-registry-address v0.2.1 +## explicit; go 1.19 github.com/hashicorp/terraform-registry-address -# github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 -## explicit; go 1.12 +# github.com/hashicorp/terraform-svchost v0.1.1 +## explicit; go 1.19 github.com/hashicorp/terraform-svchost # github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d ## explicit @@ -330,16 +364,16 @@ github.com/oklog/run ## explicit github.com/vmihailenco/msgpack github.com/vmihailenco/msgpack/codes -# github.com/vmihailenco/msgpack/v4 v4.3.12 +# github.com/vmihailenco/msgpack/v5 v5.3.5 ## explicit; go 1.11 -github.com/vmihailenco/msgpack/v4 -github.com/vmihailenco/msgpack/v4/codes -# github.com/vmihailenco/tagparser v0.1.1 -## explicit; go 1.13 -github.com/vmihailenco/tagparser -github.com/vmihailenco/tagparser/internal -github.com/vmihailenco/tagparser/internal/parser -# github.com/zclconf/go-cty v1.13.1 +github.com/vmihailenco/msgpack/v5 +github.com/vmihailenco/msgpack/v5/msgpcode +# github.com/vmihailenco/tagparser/v2 v2.0.0 +## explicit; go 1.15 +github.com/vmihailenco/tagparser/v2 +github.com/vmihailenco/tagparser/v2/internal +github.com/vmihailenco/tagparser/v2/internal/parser +# github.com/zclconf/go-cty v1.13.2 ## 
explicit; go 1.18 github.com/zclconf/go-cty/cty github.com/zclconf/go-cty/cty/convert @@ -349,22 +383,18 @@ github.com/zclconf/go-cty/cty/function/stdlib github.com/zclconf/go-cty/cty/gocty github.com/zclconf/go-cty/cty/json github.com/zclconf/go-cty/cty/set -# golang.org/x/crypto v0.8.0 +# golang.org/x/crypto v0.10.0 ## explicit; go 1.17 golang.org/x/crypto/cast5 -golang.org/x/crypto/openpgp -golang.org/x/crypto/openpgp/armor -golang.org/x/crypto/openpgp/elgamal -golang.org/x/crypto/openpgp/errors -golang.org/x/crypto/openpgp/packet -golang.org/x/crypto/openpgp/s2k +golang.org/x/crypto/hkdf +golang.org/x/crypto/sha3 # golang.org/x/mod v0.10.0 ## explicit; go 1.17 golang.org/x/mod/internal/lazyregexp golang.org/x/mod/modfile golang.org/x/mod/module golang.org/x/mod/semver -# golang.org/x/net v0.9.0 +# golang.org/x/net v0.11.0 ## explicit; go 1.17 golang.org/x/net/context golang.org/x/net/http/httpguts @@ -373,11 +403,12 @@ golang.org/x/net/http2/hpack golang.org/x/net/idna golang.org/x/net/internal/timeseries golang.org/x/net/trace -# golang.org/x/sys v0.7.0 +# golang.org/x/sys v0.9.0 ## explicit; go 1.17 +golang.org/x/sys/cpu golang.org/x/sys/execabs golang.org/x/sys/unix -# golang.org/x/text v0.9.0 +# golang.org/x/text v0.10.0 ## explicit; go 1.17 golang.org/x/text/secure/bidirule golang.org/x/text/transform @@ -415,7 +446,7 @@ golang.org/x/tools/internal/tokeninternal golang.org/x/tools/internal/typeparams golang.org/x/tools/internal/typesinternal golang.org/x/tools/txtar -# google.golang.org/appengine v1.6.6 +# google.golang.org/appengine v1.6.7 ## explicit; go 1.11 google.golang.org/appengine google.golang.org/appengine/datastore @@ -428,10 +459,10 @@ google.golang.org/appengine/internal/datastore google.golang.org/appengine/internal/log google.golang.org/appengine/internal/modules google.golang.org/appengine/internal/remote_api -# google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d -## explicit; go 1.11 +# google.golang.org/genproto 
v0.0.0-20230410155749-daa745c078e1 +## explicit; go 1.19 google.golang.org/genproto/googleapis/rpc/status -# google.golang.org/grpc v1.51.0 +# google.golang.org/grpc v1.56.0 ## explicit; go 1.17 google.golang.org/grpc google.golang.org/grpc/attributes @@ -485,7 +516,7 @@ google.golang.org/grpc/serviceconfig google.golang.org/grpc/stats google.golang.org/grpc/status google.golang.org/grpc/tap -# google.golang.org/protobuf v1.28.1 +# google.golang.org/protobuf v1.30.0 ## explicit; go 1.11 google.golang.org/protobuf/encoding/protojson google.golang.org/protobuf/encoding/prototext diff --git a/.ci/scripts/validate-terraform-file.sh b/.ci/scripts/validate-terraform-file.sh index 87525bbcf24..4f93a4abe20 100755 --- a/.ci/scripts/validate-terraform-file.sh +++ b/.ci/scripts/validate-terraform-file.sh @@ -31,7 +31,7 @@ while IFS= read -r block ; do # We need to capture the output and error code here. We don't want to exit on the first error set +e - tflint_output=$(tflint --config .ci/.tflint.hcl "${rules[@]}" "$tf" 2>&1) + tflint_output=$(tflint --config .ci/.tflint.hcl --filter="$tf" "${rules[@]}" 2>&1) tflint_exitcode=$? set -e diff --git a/.ci/scripts/validate-terraform.sh b/.ci/scripts/validate-terraform.sh index 5771b834a89..c37620c2f5e 100755 --- a/.ci/scripts/validate-terraform.sh +++ b/.ci/scripts/validate-terraform.sh @@ -43,7 +43,7 @@ while read -r filename ; do # We need to capture the output and error code here. We don't want to exit on the first error set +e - tflint_output=$(${TFLINT_CMD} --config .ci/.tflint.hcl "${rules[@]}" "$tf" 2>&1) + tflint_output=$(${TFLINT_CMD} --config .ci/.tflint.hcl --filter="$tf" "${rules[@]}" 2>&1) tflint_exitcode=$?
set -e diff --git a/.ci/semgrep/aws/go-sdk.yml b/.ci/semgrep/aws/go-sdk.yml new file mode 100644 index 00000000000..4dce3e2b306 --- /dev/null +++ b/.ci/semgrep/aws/go-sdk.yml @@ -0,0 +1,125 @@ +rules: + - id: multiple-service-imports + languages: [go] + message: Resources should not implement multiple AWS Go SDK service functionality + paths: + include: + - internal/ + exclude: + - "internal/service/**/*_test.go" + - "internal/service/**/sweep.go" + - "internal/acctest/acctest.go" + - "internal/conns/**/*.go" + patterns: + - pattern: | + import ("$X") + import ("$Y") + - metavariable-regex: + metavariable: "$X" + regex: '^github.com/aws/aws-sdk-go(-v2)?/service/[^/]+$' + - metavariable-regex: + metavariable: "$Y" + regex: '^github.com/aws/aws-sdk-go(-v2)?/service/[^/]+$' + # wafregional uses a number of resources from waf + - pattern-not: | + import ("github.com/aws/aws-sdk-go/service/waf") + import ("github.com/aws/aws-sdk-go/service/wafregional") + severity: WARNING + + - id: prefer-pointer-conversion-assignment + languages: [go] + message: Prefer AWS Go SDK pointer conversion functions for dereferencing during assignment, e.g. aws.StringValue() + paths: + include: + - internal/service + exclude: + - "internal/service/**/*_test.go" + - "internal/service/*/service_package.go" + - "internal/service/*/service_package_gen.go" + patterns: + - pattern: "$LHS = *$RHS" + - pattern-not: "*$LHS2 = *$RHS" + severity: WARNING + + - id: prefer-pointer-conversion-conditional + languages: [go] + message: Prefer AWS Go SDK pointer conversion functions for dereferencing during conditionals, e.g.
aws.StringValue() + paths: + include: + - internal/service + exclude: + - "internal/service/**/*_test.go" + patterns: + - pattern-either: + - pattern: "$LHS == *$RHS" + - pattern: "$LHS != *$RHS" + - pattern: "$LHS > *$RHS" + - pattern: "$LHS < *$RHS" + - pattern: "$LHS >= *$RHS" + - pattern: "$LHS <= *$RHS" + - pattern: "*$LHS == $RHS" + - pattern: "*$LHS != $RHS" + - pattern: "*$LHS > $RHS" + - pattern: "*$LHS < $RHS" + - pattern: "*$LHS >= $RHS" + - pattern: "*$LHS <= $RHS" + severity: WARNING + + - id: pointer-conversion-immediate-dereference + fix: $VALUE + languages: [go] + message: Using AWS Go SDK pointer conversion, e.g. aws.String(), with immediate dereferencing is extraneous + paths: + include: + - internal/ + patterns: + - pattern-either: + - pattern: "*aws.Bool($VALUE)" + - pattern: "*aws.Float64($VALUE)" + - pattern: "*aws.Int64($VALUE)" + - pattern: "*aws.String($VALUE)" + - pattern: "*aws.Time($VALUE)" + severity: WARNING + + - id: pointer-conversion-ResourceData-SetId + fix: d.SetId(aws.StringValue($VALUE)) + languages: [go] + message: Prefer AWS Go SDK pointer conversion aws.StringValue() function for dereferencing during d.SetId() + paths: + include: + - internal/ + pattern: "d.SetId(*$VALUE)" + severity: WARNING + + - id: helper-schema-ResourceData-Set-extraneous-value-pointer-conversion + fix: d.Set($ATTRIBUTE, $APIOBJECT) + languages: [go] + message: AWS Go SDK pointer conversion function for `d.Set()` value is extraneous + paths: + include: + - internal/ + patterns: + - pattern-either: + - pattern: d.Set($ATTRIBUTE, aws.BoolValue($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.ToBool($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.Float64Value($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.ToFloat64($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.IntValue($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.ToInt($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.Int64Value($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.ToInt64($APIOBJECT)) + - 
pattern: d.Set($ATTRIBUTE, int(aws.Int64Value($APIOBJECT))) + - pattern: d.Set($ATTRIBUTE, int(aws.ToInt64($APIOBJECT))) + - pattern: d.Set($ATTRIBUTE, aws.StringValue($APIOBJECT)) + - pattern: d.Set($ATTRIBUTE, aws.ToString($APIOBJECT)) + severity: WARNING + + - id: prefer-pointer-conversion-int-conversion-int64-pointer + fix: $VALUE + languages: [go] + message: Prefer AWS Go SDK pointer conversion functions for dereferencing when converting int64 to int + paths: + include: + - internal/ + pattern: int(*$VALUE) + severity: WARNING diff --git a/.ci/semgrep/errors/msgfmt.yml b/.ci/semgrep/errors/msgfmt.yml new file mode 100644 index 00000000000..3a042b091a9 --- /dev/null +++ b/.ci/semgrep/errors/msgfmt.yml @@ -0,0 +1,59 @@ +rules: + # To fix: + # find internal/service/*/*.go -print | xargs ruby -p -i -e 'gsub(/diag.FromErr\(fmt.Errorf\((.*)\)\)/, "diag.Errorf(\\1)")' + # then + # find internal/service/*/*.go -print | xargs ruby -p -i -e 'gsub(/diag.Errorf\((.*)%w(.*)\)/, "diag.Errorf(\\1%s\\2)")' + # and maybe + # goimports -w ./internal/service + - id: no-diag.FromErr-fmt.Errorf + languages: [go] + message: Use diag.Errorf(...) 
instead of diag.FromErr(fmt.Errorf(...)) + paths: + include: + - internal/ + patterns: + - pattern-regex: diag.FromErr\(fmt.Errorf\( + severity: ERROR + + # To fix: + # find internal/service/*/*.go -print | xargs ruby -p -i -e 'gsub(/diag.Errorf\((")error (.*)\)/, "diag.Errorf(\\1\\2)")' + # find internal/service/*/*.go -print | xargs ruby -p -i -e 'gsub(/diag.Errorf\((")Error (.*)\)/, "diag.Errorf(\\1\\2)")' + - id: no-diag.Errorf-leading-error + languages: [go] + message: Remove leading 'error ' from diag.Errorf("error ...") + paths: + include: + - internal/ + patterns: + - pattern-regex: 'diag.Errorf\("\s*[Ee]rror ' + severity: ERROR + + # To fix: + # find internal/service/*/*.go -print | xargs ruby -p -i -e 'gsub(/AppendErrorf\(diags, (")Error (.*)\)/, "AppendErrorf(diags, \\1\\2)")' + # find internal/service/*/*.go -print | xargs ruby -p -i -e 'gsub(/AppendErrorf\(diags, (")error (.*)\)/, "AppendErrorf(diags, \\1\\2)")' + - id: no-AppendErrorf-leading-error + languages: [go] + message: Remove leading 'Error ' from AppendErrorf(diags, "Error ...") + paths: + include: + - internal/ + patterns: + - pattern-regex: 'AppendErrorf\(diags, "\s*[Ee]rror ' + severity: ERROR + + # To fix: + # find internal -name '*.go' ! -name 'sweep.go' ! -name '*_test.go' -print | xargs ruby -p -i -e 'gsub(/fmt.Errorf\((")error (.*)\)/, "fmt.Errorf(\\1\\2)")' + # find internal -name '*.go' ! -name 'sweep.go' ! 
-name '*_test.go' -print | xargs ruby -p -i -e 'gsub(/fmt.Errorf\((")Error (.*)\)/, "fmt.Errorf(\\1\\2)")' + - id: no-fmt.Errorf-leading-error + languages: [go] + message: Remove leading 'error ' from fmt.Errorf("error ...") + paths: + include: + - internal/ + exclude: + - "internal/service/**/*_test.go" + - "internal/service/**/sweep.go" + - "internal/acctest/acctest.go" + patterns: + - pattern-regex: 'fmt.Errorf\("\s*[Ee]rror ' + severity: ERROR diff --git a/.ci/semgrep/migrate/context.yml b/.ci/semgrep/migrate/context.yml index f7fab62e55a..7222f086859 100644 --- a/.ci/semgrep/migrate/context.yml +++ b/.ci/semgrep/migrate/context.yml @@ -6,6 +6,9 @@ rules: include: - internal/service/* - internal/acctest/* + exclude: + - "internal/service/*/service_package.go" + - "internal/service/*/service_package_gen.go" patterns: - pattern: | $CONN.$API(...) diff --git a/.ci/tools/go.mod b/.ci/tools/go.mod index 747f6c9f59d..4540623f654 100644 --- a/.ci/tools/go.mod +++ b/.ci/tools/go.mod @@ -1,63 +1,75 @@ module github.com/hashicorp/terraform-provider-aws/tools -go 1.19 +go 1.20 require ( - github.com/bflad/tfproviderdocs v0.9.1 + github.com/YakDriver/tfproviderdocs v0.5.0 github.com/client9/misspell v0.3.4 - github.com/golangci/golangci-lint v1.52.2 + github.com/golangci/golangci-lint v1.53.3 + github.com/hashicorp/copywrite v0.16.4 github.com/hashicorp/go-changelog v0.0.0-20201005170154-56335215ce3a github.com/katbyte/terrafmt v0.5.2 github.com/pavius/impi v0.0.3 - github.com/rhysd/actionlint v1.6.24 - github.com/terraform-linters/tflint v0.46.1 + github.com/rhysd/actionlint v1.6.25 + github.com/terraform-linters/tflint v0.47.0 mvdan.cc/gofumpt v0.5.0 ) require ( 4d63.com/gocheckcompilerdirectives v1.2.1 // indirect 4d63.com/gochecknoglobals v0.2.1 // indirect - cloud.google.com/go v0.107.0 // indirect - cloud.google.com/go/compute v1.15.1 // indirect + cloud.google.com/go v0.110.0 // indirect + cloud.google.com/go/compute v1.18.0 // indirect 
cloud.google.com/go/compute/metadata v0.2.3 // indirect - cloud.google.com/go/iam v0.8.0 // indirect - cloud.google.com/go/storage v1.27.0 // indirect + cloud.google.com/go/iam v0.12.0 // indirect + cloud.google.com/go/storage v1.28.1 // indirect + github.com/4meepo/tagalign v1.2.2 // indirect github.com/Abirdcfly/dupword v0.0.11 // indirect - github.com/Antonboom/errname v0.1.9 // indirect - github.com/Antonboom/nilnil v0.1.3 // indirect - github.com/BurntSushi/toml v1.2.1 // indirect + github.com/AlecAivazis/survey/v2 v2.3.6 // indirect + github.com/Antonboom/errname v0.1.10 // indirect + github.com/Antonboom/nilnil v0.1.5 // indirect + github.com/BurntSushi/toml v1.3.2 // indirect github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 // indirect github.com/Masterminds/goutils v1.1.1 // indirect github.com/Masterminds/semver v1.5.0 // indirect - github.com/Masterminds/semver/v3 v3.2.0 // indirect - github.com/Masterminds/sprig v2.22.0+incompatible // indirect + github.com/Masterminds/semver/v3 v3.2.1 // indirect + github.com/Masterminds/sprig/v3 v3.2.1 // indirect github.com/Microsoft/go-winio v0.4.16 // indirect - github.com/OpenPeeDeeP/depguard v1.1.1 // indirect - github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7 // indirect + github.com/OpenPeeDeeP/depguard/v2 v2.1.0 // indirect + github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 // indirect github.com/acomagu/bufpipe v1.0.3 // indirect github.com/agext/levenshtein v1.2.3 // indirect + github.com/alexkohler/nakedret/v2 v2.0.2 // indirect github.com/alexkohler/prealloc v1.0.0 // indirect github.com/alingse/asasalint v0.0.11 // indirect github.com/apparentlymart/go-cidr v1.1.0 // indirect github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect github.com/armon/go-radix v1.0.0 // indirect - github.com/ashanbrown/forbidigo v1.5.1 // indirect + github.com/asaskevich/govalidator 
v0.0.0-20200907205600-7a23bdc65eef // indirect + github.com/ashanbrown/forbidigo v1.5.3 // indirect github.com/ashanbrown/makezero v1.1.1 // indirect github.com/aws/aws-sdk-go v1.44.122 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect github.com/bgentry/speakeasy v0.1.0 // indirect - github.com/bkielbasa/cyclop v1.2.0 // indirect + github.com/bkielbasa/cyclop v1.2.1 // indirect github.com/blizzy78/varnamelen v0.8.0 // indirect github.com/bmatcuk/doublestar v1.3.4 // indirect + github.com/bmatcuk/doublestar/v4 v4.6.0 // indirect github.com/bombsimon/wsl/v3 v3.4.0 // indirect + github.com/bradleyfalzon/ghinstallation/v2 v2.5.0 // indirect github.com/breml/bidichk v0.2.4 // indirect github.com/breml/errchkjson v0.3.1 // indirect - github.com/butuzov/ireturn v0.1.1 // indirect + github.com/butuzov/ireturn v0.2.0 // indirect + github.com/butuzov/mirror v1.1.0 // indirect github.com/cespare/xxhash/v2 v2.2.0 // indirect github.com/charithe/durationcheck v0.0.10 // indirect github.com/chavacava/garif v0.0.0-20230227094218-b8c73b2037b8 // indirect + github.com/cli/go-gh v1.2.1 // indirect + github.com/cli/safeexec v1.0.0 // indirect + github.com/cli/shurcooL-graphql v0.0.2 // indirect + github.com/cloudflare/circl v1.3.3 // indirect github.com/curioswitch/go-reassign v0.2.0 // indirect github.com/daixiang0/gci v0.10.1 // indirect github.com/davecgh/go-spew v1.1.1 // indirect @@ -70,10 +82,12 @@ require ( github.com/firefart/nonamedreturns v1.0.4 // indirect github.com/fsnotify/fsnotify v1.5.4 // indirect github.com/fzipp/gocyclo v0.6.0 // indirect - github.com/go-critic/go-critic v0.7.0 // indirect + github.com/go-critic/go-critic v0.8.1 // indirect github.com/go-git/gcfg v1.5.0 // indirect github.com/go-git/go-billy/v5 v5.3.1 // indirect github.com/go-git/go-git/v5 v5.4.2 // indirect + github.com/go-openapi/errors v0.20.2 // indirect + github.com/go-openapi/strfmt v0.21.3 // indirect 
github.com/go-toolsmith/astcast v1.1.0 // indirect github.com/go-toolsmith/astcopy v1.1.0 // indirect github.com/go-toolsmith/astequal v1.1.0 // indirect @@ -84,8 +98,9 @@ require ( github.com/go-xmlfmt/xmlfmt v1.1.2 // indirect github.com/gobwas/glob v0.2.3 // indirect github.com/gofrs/flock v0.8.1 // indirect + github.com/golang-jwt/jwt/v4 v4.5.0 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect - github.com/golang/protobuf v1.5.2 // indirect + github.com/golang/protobuf v1.5.3 // indirect github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 // indirect github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a // indirect github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe // indirect @@ -97,72 +112,82 @@ require ( github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 // indirect github.com/google/go-cmp v0.5.9 // indirect github.com/google/go-github/v35 v35.3.0 // indirect - github.com/google/go-querystring v1.0.0 // indirect + github.com/google/go-github/v45 v45.2.0 // indirect + github.com/google/go-github/v53 v53.0.0 // indirect + github.com/google/go-querystring v1.1.0 // indirect github.com/google/uuid v1.3.0 // indirect - github.com/googleapis/enterprise-certificate-proxy v0.2.0 // indirect + github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect github.com/googleapis/gax-go/v2 v2.7.0 // indirect - github.com/gookit/color v1.5.2 // indirect - github.com/gordonklaus/ineffassign v0.0.0-20230107090616-13ace0543b28 // indirect + github.com/gookit/color v1.5.3 // indirect + github.com/gordonklaus/ineffassign v0.0.0-20230610083614-0e73809eb601 // indirect github.com/gostaticanalysis/analysisutil v0.7.1 // indirect github.com/gostaticanalysis/comment v1.4.2 // indirect github.com/gostaticanalysis/forcetypeassert v0.1.0 // indirect github.com/gostaticanalysis/nilerr v0.1.1 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // 
indirect - github.com/hashicorp/go-getter v1.7.0 // indirect + github.com/hashicorp/go-getter v1.7.1 // indirect github.com/hashicorp/go-hclog v1.5.0 // indirect github.com/hashicorp/go-multierror v1.1.1 // indirect - github.com/hashicorp/go-plugin v1.4.9 // indirect + github.com/hashicorp/go-plugin v1.4.10 // indirect github.com/hashicorp/go-safetemp v1.0.0 // indirect github.com/hashicorp/go-uuid v1.0.3 // indirect github.com/hashicorp/go-version v1.6.0 // indirect github.com/hashicorp/hc-install v0.4.0 // indirect github.com/hashicorp/hcl v1.0.0 // indirect - github.com/hashicorp/hcl/v2 v2.16.2 // indirect + github.com/hashicorp/hcl/v2 v2.17.0 // indirect github.com/hashicorp/logutils v1.0.0 // indirect github.com/hashicorp/terraform-exec v0.17.2 // indirect - github.com/hashicorp/terraform-json v0.14.0 // indirect - github.com/hashicorp/terraform-registry-address v0.1.0 // indirect - github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 // indirect + github.com/hashicorp/terraform-json v0.17.1 // indirect + github.com/hashicorp/terraform-registry-address v0.2.0 // indirect + github.com/hashicorp/terraform-svchost v0.0.1 // indirect github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect + github.com/henvic/httpretty v0.0.6 // indirect github.com/hexops/gotextdiff v1.0.3 // indirect github.com/huandu/xstrings v1.3.2 // indirect github.com/imdario/mergo v0.3.12 // indirect - github.com/inconshreveable/mousetrap v1.0.1 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect + github.com/jedib0t/go-pretty v4.3.0+incompatible // indirect + github.com/jedib0t/go-pretty/v6 v6.4.6 // indirect github.com/jessevdk/go-flags v1.5.0 // indirect github.com/jgautheron/goconst v1.5.1 // indirect github.com/jingyugao/rowserrcheck v1.1.1 // indirect github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af // indirect 
github.com/jmespath/go-jmespath v0.4.0 // indirect + github.com/joho/godotenv v1.3.0 // indirect github.com/jstemmer/go-junit-report v1.0.0 // indirect github.com/julz/importas v0.1.0 // indirect - github.com/junk1tm/musttag v0.5.0 // indirect github.com/katbyte/andreyvit-diff v0.0.1 // indirect github.com/katbyte/sergi-go-diff v1.1.1 // indirect + github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351 // indirect github.com/kisielk/errcheck v1.6.3 // indirect github.com/kisielk/gotool v1.0.0 // indirect github.com/kkHAIKE/contextcheck v1.1.4 // indirect github.com/klauspost/compress v1.15.11 // indirect + github.com/knadh/koanf v1.5.0 // indirect github.com/kulti/thelper v0.6.3 // indirect - github.com/kunwardeep/paralleltest v1.0.6 // indirect + github.com/kunwardeep/paralleltest v1.0.7 // indirect github.com/kyoh86/exportloopref v0.1.11 // indirect github.com/ldez/gomoddirectives v0.2.3 // indirect - github.com/ldez/tagliatelle v0.4.0 // indirect + github.com/ldez/tagliatelle v0.5.0 // indirect github.com/leonklingele/grouper v1.1.1 // indirect + github.com/lucasb-eyer/go-colorful v1.2.0 // indirect github.com/lufeee/execinquery v1.2.1 // indirect github.com/magiconair/properties v1.8.6 // indirect github.com/maratori/testableexamples v1.0.0 // indirect github.com/maratori/testpackage v1.1.1 // indirect github.com/matoous/godox v0.0.0-20230222163458-006bad1f9d26 // indirect github.com/mattn/go-colorable v0.1.13 // indirect - github.com/mattn/go-isatty v0.0.18 // indirect + github.com/mattn/go-isatty v0.0.19 // indirect github.com/mattn/go-runewidth v0.0.14 // indirect github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect github.com/mbilski/exhaustivestruct v1.2.0 // indirect - github.com/mgechev/revive v1.3.1 // indirect - github.com/mitchellh/cli v1.1.2 // indirect + github.com/mergestat/timediff v0.0.3 // indirect + github.com/mgechev/revive v1.3.2 // 
indirect + github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect + github.com/mitchellh/cli v1.1.5 // indirect github.com/mitchellh/copystructure v1.2.0 // indirect github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/mitchellh/go-testing-interface v1.14.1 // indirect @@ -170,18 +195,20 @@ require ( github.com/mitchellh/mapstructure v1.5.0 // indirect github.com/mitchellh/reflectwalk v1.0.2 // indirect github.com/moricho/tparallel v0.3.1 // indirect + github.com/muesli/termenv v0.12.0 // indirect github.com/nakabonne/nestif v0.3.1 // indirect github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 // indirect - github.com/nishanths/exhaustive v0.9.5 // indirect + github.com/nishanths/exhaustive v0.11.0 // indirect github.com/nishanths/predeclared v0.2.2 // indirect - github.com/nunnatsa/ginkgolinter v0.9.0 // indirect + github.com/nunnatsa/ginkgolinter v0.12.1 // indirect github.com/oklog/run v1.0.0 // indirect + github.com/oklog/ulid v1.3.1 // indirect github.com/olekukonko/tablewriter v0.0.5 // indirect github.com/owenrumney/go-sarif v1.1.1 // indirect github.com/pelletier/go-toml v1.9.5 // indirect github.com/pelletier/go-toml/v2 v2.0.5 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect - github.com/polyfloyd/go-errorlint v1.4.0 // indirect + github.com/polyfloyd/go-errorlint v1.4.2 // indirect github.com/posener/complete v1.2.3 // indirect github.com/prometheus/client_golang v1.12.1 // indirect github.com/prometheus/client_model v0.2.0 // indirect @@ -195,14 +222,16 @@ require ( github.com/robfig/cron v1.2.0 // indirect github.com/ryancurrah/gomodguard v1.3.0 // indirect github.com/ryanrolds/sqlclosecheck v0.4.0 // indirect + github.com/samber/lo v1.37.0 // indirect github.com/sanposhiho/wastedassign/v2 v2.0.7 // indirect github.com/sashamelentyev/interfacebloat v1.1.0 // indirect github.com/sashamelentyev/usestdlibvars v1.23.0 // indirect - github.com/securego/gosec/v2 v2.15.0 // indirect + 
github.com/securego/gosec/v2 v2.16.0 // indirect github.com/sergi/go-diff v1.2.0 // indirect github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c // indirect - github.com/sirupsen/logrus v1.9.0 // indirect - github.com/sivchari/containedctx v1.0.2 // indirect + github.com/shopspring/decimal v1.2.0 // indirect + github.com/sirupsen/logrus v1.9.3 // indirect + github.com/sivchari/containedctx v1.0.3 // indirect github.com/sivchari/nosnakecase v1.7.0 // indirect github.com/sivchari/tenv v1.7.1 // indirect github.com/sonatard/noctx v0.0.2 // indirect @@ -211,21 +240,23 @@ require ( github.com/sourcegraph/jsonrpc2 v0.2.0 // indirect github.com/spf13/afero v1.9.5 // indirect github.com/spf13/cast v1.5.0 // indirect - github.com/spf13/cobra v1.6.1 // indirect + github.com/spf13/cobra v1.7.0 // indirect github.com/spf13/jwalterweatherman v1.1.0 // indirect github.com/spf13/pflag v1.0.5 // indirect github.com/spf13/viper v1.13.0 // indirect github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect github.com/stbenjam/no-sprintf-host-port v0.1.1 // indirect github.com/stretchr/objx v0.5.0 // indirect - github.com/stretchr/testify v1.8.2 // indirect + github.com/stretchr/testify v1.8.4 // indirect github.com/subosito/gotenv v1.4.1 // indirect github.com/t-yuki/gocover-cobertura v0.0.0-20180217150009-aaee18c8195c // indirect github.com/tdakkota/asciicheck v0.2.0 // indirect - github.com/terraform-linters/tflint-plugin-sdk v0.16.1 // indirect - github.com/terraform-linters/tflint-ruleset-terraform v0.2.2 // indirect + github.com/terraform-linters/tflint-plugin-sdk v0.17.0 // indirect + github.com/terraform-linters/tflint-ruleset-terraform v0.4.0 // indirect github.com/tetafro/godot v1.4.11 // indirect - github.com/timakin/bodyclose v0.0.0-20221125081123-e39cf3fc478e // indirect + github.com/thanhpk/randstr v1.0.4 // indirect + github.com/thlib/go-timezone-local v0.0.0-20210907160436-ef149e42d28e // indirect + github.com/timakin/bodyclose v0.0.0-20230421092635-574207250966 // 
indirect github.com/timonwong/loggercheck v0.9.4 // indirect github.com/tomarrell/wrapcheck/v2 v2.8.1 // indirect github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect @@ -236,34 +267,39 @@ require ( github.com/vmihailenco/msgpack/v5 v5.3.5 // indirect github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect github.com/xanzy/ssh-agent v0.3.0 // indirect + github.com/xen0n/gosmopolitan v1.2.1 // indirect github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 // indirect github.com/yagipy/maintidx v1.0.0 // indirect github.com/yeya24/promlinter v0.2.0 // indirect + github.com/ykadowak/zerologlint v0.1.2 // indirect github.com/yuin/goldmark v1.5.4 // indirect - github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60 // indirect - github.com/zclconf/go-cty v1.13.1 // indirect + github.com/yuin/goldmark-meta v1.1.0 // indirect + github.com/zclconf/go-cty v1.13.2 // indirect github.com/zclconf/go-cty-yaml v1.0.3 // indirect gitlab.com/bosi/decorder v0.2.3 // indirect + go.mongodb.org/mongo-driver v1.10.0 // indirect go.opencensus.io v0.24.0 // indirect + go.tmz.dev/musttag v0.7.0 // indirect go.uber.org/atomic v1.7.0 // indirect go.uber.org/multierr v1.6.0 // indirect go.uber.org/zap v1.24.0 // indirect - golang.org/x/crypto v0.8.0 // indirect - golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect + golang.org/x/crypto v0.10.0 // indirect + golang.org/x/exp v0.0.0-20230510235704-dd950f8aeaea // indirect golang.org/x/exp/typeparams v0.0.0-20230224173230-c95f2b4c22f2 // indirect golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect - golang.org/x/mod v0.10.0 // indirect - golang.org/x/net v0.9.0 // indirect - golang.org/x/oauth2 v0.7.0 // indirect - golang.org/x/sync v0.1.0 // indirect - golang.org/x/sys v0.7.0 // indirect - golang.org/x/text v0.9.0 // indirect - golang.org/x/tools v0.8.0 // indirect + golang.org/x/mod v0.11.0 // indirect + golang.org/x/net v0.11.0 // indirect + golang.org/x/oauth2 v0.8.0 // indirect + golang.org/x/sync 
v0.3.0 // indirect + golang.org/x/sys v0.9.0 // indirect + golang.org/x/term v0.9.0 // indirect + golang.org/x/text v0.10.0 // indirect + golang.org/x/tools v0.10.0 // indirect golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect - google.golang.org/api v0.103.0 // indirect + google.golang.org/api v0.110.0 // indirect google.golang.org/appengine v1.6.7 // indirect - google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f // indirect - google.golang.org/grpc v1.54.0 // indirect + google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 // indirect + google.golang.org/grpc v1.55.0 // indirect google.golang.org/protobuf v1.30.0 // indirect gopkg.in/ini.v1 v1.67.0 // indirect gopkg.in/warnings.v0 v0.1.2 // indirect diff --git a/.ci/tools/go.sum b/.ci/tools/go.sum index 7cf1a2b6729..d3add9ebf7e 100644 --- a/.ci/tools/go.sum +++ b/.ci/tools/go.sum @@ -36,8 +36,8 @@ cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w9 cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc= cloud.google.com/go v0.102.1/go.mod h1:XZ77E9qnTEnrgEOvr4xzfdX5TRo7fB4T2F4O6+34hIU= cloud.google.com/go v0.104.0/go.mod h1:OO6xxXdJyvuJPcEPBLN9BJPD+jep5G1+2U5B5gkRYtA= -cloud.google.com/go v0.107.0 h1:qkj22L7bgkl6vIeZDlOY2po43Mx/TIa2Wsa7VR+PEww= -cloud.google.com/go v0.107.0/go.mod h1:wpc2eNrD7hXUTy8EKS10jkxpZBjASrORK7goS+3YX2I= +cloud.google.com/go v0.110.0 h1:Zc8gqp3+a9/Eyph2KDmcGaPtbKRIoqq4YTlL4NMD0Ys= +cloud.google.com/go v0.110.0/go.mod h1:SJnCLqQ0FCFGSZMUNUf84MV3Aia54kn7pi8st7tMzaY= cloud.google.com/go/aiplatform v1.22.0/go.mod h1:ig5Nct50bZlzV6NvKaTwmplLLddFx0YReh9WfTO5jKw= cloud.google.com/go/aiplatform v1.24.0/go.mod h1:67UUvRBKG6GTayHKV8DBv2RtR1t93YRu5B1P3x99mYY= cloud.google.com/go/analytics v0.11.0/go.mod h1:DjEWCu41bVbYcKyvlws9Er60YE4a//bK6mnhWvQeFNI= @@ -74,8 +74,9 @@ cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz cloud.google.com/go/compute v1.6.1/go.mod 
h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU= cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U= cloud.google.com/go/compute v1.10.0/go.mod h1:ER5CLbMxl90o2jtNbGSbtfOpQKR0t15FOtRsugnLrlU= -cloud.google.com/go/compute v1.15.1 h1:7UGq3QknM33pw5xATlpzeoomNxsacIVvTqTTvbfajmE= -cloud.google.com/go/compute v1.15.1/go.mod h1:bjjoF/NtFUrkD/urWfdHaKuOPDR5nWIs63rR+SXhcpA= +cloud.google.com/go/compute v1.18.0 h1:FEigFqoDbys2cvFkZ9Fjq4gnHBP55anJ0yQyau2f9oY= +cloud.google.com/go/compute v1.18.0/go.mod h1:1X7yHxec2Ga+Ss6jPyjxRxpu2uu7PLgsOVXvgU0yacs= +cloud.google.com/go/compute/metadata v0.2.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY= cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA= cloud.google.com/go/containeranalysis v0.5.1/go.mod h1:1D92jd8gRR/c0fGMlymRgxWD3Qw9C1ff6/T7mLgVL8I= @@ -115,13 +116,13 @@ cloud.google.com/go/gkehub v0.10.0/go.mod h1:UIPwxI0DsrpsVoWpLB0stwKCP+WFVG9+y97 cloud.google.com/go/grafeas v0.2.0/go.mod h1:KhxgtF2hb0P191HlY5besjYm6MqTSTj3LSI+M+ByZHc= cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY= cloud.google.com/go/iam v0.5.0/go.mod h1:wPU9Vt0P4UmCux7mqtRu6jcpPAb74cP1fh50J3QpkUc= -cloud.google.com/go/iam v0.8.0 h1:E2osAkZzxI/+8pZcxVLcDtAQx/u+hZXVryUaYQ5O0Kk= -cloud.google.com/go/iam v0.8.0/go.mod h1:lga0/y3iH6CX7sYqypWJ33hf7kkfXJag67naqGESjkE= +cloud.google.com/go/iam v0.12.0 h1:DRtTY29b75ciH6Ov1PHb4/iat2CLCvrOm40Q0a6DFpE= +cloud.google.com/go/iam v0.12.0/go.mod h1:knyHGviacl11zrtZUoDuYpDgLjvr28sLQaG0YB2GYAY= cloud.google.com/go/language v1.4.0/go.mod h1:F9dRpNFQmJbkaop6g0JhSBXCNlO90e1KWx5iDdxbWic= cloud.google.com/go/language v1.6.0/go.mod h1:6dJ8t3B+lUYfStgls25GusK04NLh3eDLQnWM3mdEbhI= cloud.google.com/go/lifesciences v0.5.0/go.mod h1:3oIKy8ycWGPUyZDR/8RNnTOYevhaMLqh5vLUXs9zvT8= cloud.google.com/go/lifesciences v0.6.0/go.mod 
h1:ddj6tSX/7BOnhxCSd3ZcETvtNr8NZ6t/iPhY2Tyfu08= -cloud.google.com/go/longrunning v0.3.0 h1:NjljC+FYPV3uh5/OwWT6pVU+doBqMg2x/rZlE+CamDs= +cloud.google.com/go/longrunning v0.4.1 h1:v+yFJOfKC3yZdY6ZUI933pIYdhyhV8S3NpWrXWmg7jM= cloud.google.com/go/mediatranslation v0.5.0/go.mod h1:jGPUhGTybqsPQn91pNXw0xVHfuJ3leR1wj37oU3y1f4= cloud.google.com/go/mediatranslation v0.6.0/go.mod h1:hHdBCTYNigsBxshbznuIMFNe5QXEowAuNmmC7h8pu5w= cloud.google.com/go/memcache v1.4.0/go.mod h1:rTOfiGZtJX1AaFUrOgsMHX5kAzaTQ8azHiuDoTPzNsE= @@ -178,8 +179,9 @@ cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9 cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo= cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y= cloud.google.com/go/storage v1.23.0/go.mod h1:vOEEDNFnciUMhBeT6hsJIn3ieU5cFRmzeLgDvXzfIXc= -cloud.google.com/go/storage v1.27.0 h1:YOO045NZI9RKfCj1c5A/ZtuuENUc8OAW+gHdGnDgyMQ= cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi80i/iqR2s= +cloud.google.com/go/storage v1.28.1 h1:F5QDG5ChchaAVQhINh24U99OWHURqrW8OmQcGKXcbgI= +cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y= cloud.google.com/go/talent v1.1.0/go.mod h1:Vl4pt9jiHKvOgF9KoZo6Kob9oV4lwd/ZD5Cto54zDRw= cloud.google.com/go/talent v1.2.0/go.mod h1:MoNF9bhFQbiJ6eFD3uSsg0uBALw4n4gaCaEjBw9zo8g= cloud.google.com/go/videointelligence v1.6.0/go.mod h1:w0DIDlVRKtwPCn/C4iwZIJdvC69yInhW0cfi+p546uU= @@ -192,37 +194,47 @@ cloud.google.com/go/webrisk v1.5.0/go.mod h1:iPG6fr52Tv7sGk0H6qUFzmL3HHZev1htXuW cloud.google.com/go/workflows v1.6.0/go.mod h1:6t9F5h/unJz41YqfBmqSASJSXccBLtD1Vwf+KmJENM0= cloud.google.com/go/workflows v1.7.0/go.mod h1:JhSrZuVZWuiDfKEFxU0/F1PQjmpnpcoISEXH2bcHC3M= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= +github.com/4meepo/tagalign v1.2.2 
h1:kQeUTkFTaBRtd/7jm8OKJl9iHk0gAO+TDFPHGSna0aw= +github.com/4meepo/tagalign v1.2.2/go.mod h1:Q9c1rYMZJc9dPRkbQPpcBNCLEmY2njbAsXhQOZFE2dE= github.com/Abirdcfly/dupword v0.0.11 h1:z6v8rMETchZXUIuHxYNmlUAuKuB21PeaSymTed16wgU= github.com/Abirdcfly/dupword v0.0.11/go.mod h1:wH8mVGuf3CP5fsBTkfWwwwKTjDnVVCxtU8d8rgeVYXA= -github.com/Antonboom/errname v0.1.9 h1:BZDX4r3l4TBZxZ2o2LNrlGxSHran4d1u4veZdoORTT4= -github.com/Antonboom/errname v0.1.9/go.mod h1:nLTcJzevREuAsgTbG85UsuiWpMpAqbKD1HNZ29OzE58= -github.com/Antonboom/nilnil v0.1.3 h1:6RTbx3d2mcEu3Zwq9TowQpQMVpP75zugwOtqY1RTtcE= -github.com/Antonboom/nilnil v0.1.3/go.mod h1:iOov/7gRcXkeEU+EMGpBu2ORih3iyVEiWjeste1SJm8= +github.com/AlecAivazis/survey/v2 v2.3.6 h1:NvTuVHISgTHEHeBFqt6BHOe4Ny/NwGZr7w+F8S9ziyw= +github.com/AlecAivazis/survey/v2 v2.3.6/go.mod h1:4AuI9b7RjAR+G7v9+C4YSlX/YL3K3cWNXgWXOhllqvI= +github.com/Antonboom/errname v0.1.10 h1:RZ7cYo/GuZqjr1nuJLNe8ZH+a+Jd9DaZzttWzak9Bls= +github.com/Antonboom/errname v0.1.10/go.mod h1:xLeiCIrvVNpUtsN0wxAh05bNIZpqE22/qDMnTBTttiA= +github.com/Antonboom/nilnil v0.1.5 h1:X2JAdEVcbPaOom2TUa1FxZ3uyuUlex0XMLGYMemu6l0= +github.com/Antonboom/nilnil v0.1.5/go.mod h1:I24toVuBKhfP5teihGWctrRiPbRKHwZIFOvc6v3HZXk= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak= -github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ= +github.com/BurntSushi/toml v1.3.2 h1:o7IhLm0Msx3BaB+n3Ag7L8EVlByGnpq14C4YWiu/gL8= +github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 h1:sHglBQTwgx+rWPdisA5ynNEsoARbiCBOyGcJM4/OzsM= github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5HIiMZrwgkSPdsjy2uOQExX/WEILpIrO9UPGuXs= 
github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 h1:+r1rSv4gvYn0wmRjC8X7IAzX8QezqtFV9m0MUHFJgts= github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0/go.mod h1:b3g59n2Y+T5xmcxJL+UEG2f8cQploZm1mR/v6BW0mU0= -github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= +github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ= github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww= github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y= -github.com/Masterminds/semver/v3 v3.2.0 h1:3MEsd0SM6jqZojhjLWWeBY+Kcjy9i6MQAeY7YgDP83g= -github.com/Masterminds/semver/v3 v3.2.0/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ= -github.com/Masterminds/sprig v2.22.0+incompatible h1:z4yfnGrZ7netVz+0EDJ0Wi+5VZCSYp4Z0m2dk6cEM60= -github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o= +github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs= +github.com/Masterminds/semver/v3 v3.2.1 h1:RN9w6+7QoMeJVGyfmbcgs28Br8cvmnucEXnY0rYXWg0= +github.com/Masterminds/semver/v3 v3.2.1/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ= +github.com/Masterminds/sprig/v3 v3.2.1 h1:n6EPaDyLSvCEa3frruQvAiHuNp2dhBlMSmkEr+HuzGc= +github.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk= github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= github.com/Microsoft/go-winio v0.4.16 h1:FtSW/jqD+l4ba5iPBj9CODVtgfYAD8w2wS923g/cFDk= github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0= +github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s= 
+github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= -github.com/OpenPeeDeeP/depguard v1.1.1 h1:TSUznLjvp/4IUP+OQ0t/4jF4QUyxIcVX8YnghZdunyA= -github.com/OpenPeeDeeP/depguard v1.1.1/go.mod h1:JtAMzWkmFEzDPyAd+W0NHl1lvpQKTvT9jnRVsohBKpc= -github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7 h1:YoJbenK9C67SkzkDfmQuVln04ygHj3vjZfd9FL+GmQQ= +github.com/OpenPeeDeeP/depguard/v2 v2.1.0 h1:aQl70G173h/GZYhWf36aE5H0KaujXfVMnn/f1kSDVYY= +github.com/OpenPeeDeeP/depguard/v2 v2.1.0/go.mod h1:PUBgk35fX4i7JDmwzlJwJ+GMe6NfO1723wmJMgPThNQ= github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 h1:wPbRQzjjwFc0ih8puEVAOFGELsn1zoIIYdxvML7mDxA= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g= +github.com/YakDriver/tfproviderdocs v0.5.0 h1:PmJnobXFVsx2WHTYFZhxmY633WtMZPozNyW2ouJ1APQ= +github.com/YakDriver/tfproviderdocs v0.5.0/go.mod h1:cmvxw7mRCeXngkZzN5rgygQqujN0irc5OJOCPfOJ9cQ= github.com/acomagu/bufpipe v1.0.3 h1:fxAGrHZTgQ9w5QqVItgzwj235/uYZYgbXitB+dLupOk= github.com/acomagu/bufpipe v1.0.3/go.mod h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4= github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo= @@ -233,6 +245,8 @@ github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuy github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho= +github.com/alexkohler/nakedret/v2 
v2.0.2 h1:qnXuZNvv3/AxkAb22q/sEsEpcA99YxLFACDtEw9TPxE= +github.com/alexkohler/nakedret/v2 v2.0.2/go.mod h1:2b8Gkk0GsOrqQv/gPWjNLDSKwG8I5moSXG1K4VIBcTQ= github.com/alexkohler/prealloc v1.0.0 h1:Hbq0/3fJPQhNkN0dR95AVrr6R7tou91y0uHG5pOcUuw= github.com/alexkohler/prealloc v1.0.0/go.mod h1:VetnK3dIgFBBKmg0YnD9F9x6Icjd+9cvfHR56wJVlKE= github.com/alingse/asasalint v0.0.11 h1:SFwnQXJ49Kx/1GghOFz1XGqHYKp21Kq1nHad/0WQRnw= @@ -243,48 +257,66 @@ github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kd github.com/apparentlymart/go-cidr v1.1.0 h1:2mAhrMoF+nhXqxTzSZMUzDHkLjmIHC+Zzn4tdgBZjnU= github.com/apparentlymart/go-cidr v1.1.0/go.mod h1:EBcsNrHc3zQeuaeCeCtQruQm+n9/YjEn/vI25Lg7Gwc= github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3 h1:ZSTrOEhiM5J5RFxEaFvMZVEAM1KvT1YzbEOwB2EAGjA= -github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk= github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw= github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo= +github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= +github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI= github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio= github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= -github.com/ashanbrown/forbidigo v1.5.1 h1:WXhzLjOlnuDYPYQo/eFlcFMi8X/kLfvWLYu6CSoebis= -github.com/ashanbrown/forbidigo 
v1.5.1/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= +github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef h1:46PFijGLmAjMPwCCCo7Jf0W6f9slllCkkv7vyc1yOSg= +github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw= +github.com/ashanbrown/forbidigo v1.5.3 h1:jfg+fkm/snMx+V9FBwsl1d340BV/99kZGv5jN9hBoXk= +github.com/ashanbrown/forbidigo v1.5.3/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU= github.com/ashanbrown/makezero v1.1.1 h1:iCQ87C0V0vSyO+M9E/FZYbu65auqH0lnsOkf5FcB28s= github.com/ashanbrown/makezero v1.1.1/go.mod h1:i1bJLCRSCHOcOa9Y6MyF2FTfMZMFdHvxKHxgO5Z1axI= github.com/aws/aws-sdk-go v1.44.122 h1:p6mw01WBaNpbdP2xrisz5tIkcNwzj/HysobNoaAHjgo= github.com/aws/aws-sdk-go v1.44.122/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= +github.com/aws/aws-sdk-go-v2 v1.9.2/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4= +github.com/aws/aws-sdk-go-v2/config v1.8.3/go.mod h1:4AEiLtAb8kLs7vgw2ZV3p2VZ1+hBavOc84hqxVNpCyw= +github.com/aws/aws-sdk-go-v2/credentials v1.4.3/go.mod h1:FNNC6nQZQUuyhq5aE5c7ata8o9e4ECGmS4lAXC7o1mQ= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.6.0/go.mod h1:gqlclDEZp4aqJOancXK6TN24aKhT0W0Ae9MHk3wzTMM= +github.com/aws/aws-sdk-go-v2/internal/ini v1.2.4/go.mod h1:ZcBrrI3zBKlhGFNYWvju0I3TR93I7YIgAfy82Fh4lcQ= +github.com/aws/aws-sdk-go-v2/service/appconfig v1.4.2/go.mod h1:FZ3HkCe+b10uFZZkFdvf98LHW21k49W8o8J366lqVKY= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.3.2/go.mod h1:72HRZDLMtmVQiLG2tLfQcaWLCssELvGl+Zf2WVxMmR8= +github.com/aws/aws-sdk-go-v2/service/sso v1.4.2/go.mod h1:NBvT9R1MEF+Ud6ApJKM0G+IkPchKS7p7c2YPKwHmBOk= +github.com/aws/aws-sdk-go-v2/service/sts v1.7.2/go.mod h1:8EzeIqfWt2wWT4rJVu3f21TfrhJ8AEMzVybRNSb/b4g= +github.com/aws/smithy-go v1.8.0/go.mod h1:SObp3lf9smib00L/v3U2eAKG8FyQ7iLrJnQiAmR5n+E= github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8= github.com/beorn7/perks 
v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= -github.com/bflad/tfproviderdocs v0.9.1 h1:C9Rkh61PgaQnRLbgfuBKRGCS8C8d8yEla3gpToC+b+o= -github.com/bflad/tfproviderdocs v0.9.1/go.mod h1:gC+fjQ+tT1x1bqq+2yZ9G/y9VjRI24c6HDi+uEdgBG4= github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d h1:xDfNPAt8lFiC1UJrqV3uuy861HCTo708pDMbjHHdCas= github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d/go.mod h1:6QX/PXZ00z/TKoufEY6K/a0k6AhaJrQKdFe6OfVXsa4= github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY= github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= -github.com/bkielbasa/cyclop v1.2.0 h1:7Jmnh0yL2DjKfw28p86YTd/B4lRGcNuu12sKE35sM7A= -github.com/bkielbasa/cyclop v1.2.0/go.mod h1:qOI0yy6A7dYC4Zgsa72Ppm9kONl0RoIlPbzot9mhmeI= +github.com/bkielbasa/cyclop v1.2.1 h1:AeF71HZDob1P2/pRm1so9cd1alZnrpyc4q2uP2l0gJY= +github.com/bkielbasa/cyclop v1.2.1/go.mod h1:K/dT/M0FPAiYjBgQGau7tz+3TMh4FWAEqlMhzFWCrgM= github.com/blizzy78/varnamelen v0.8.0 h1:oqSblyuQvFsW1hbBHh1zfwrKe3kcSj0rnXkKzsQ089M= github.com/blizzy78/varnamelen v0.8.0/go.mod h1:V9TzQZ4fLJ1DSrjVDfl89H7aMnTvKkApdHeyESmyR7k= -github.com/bmatcuk/doublestar v1.2.1/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE= github.com/bmatcuk/doublestar v1.3.4 h1:gPypJ5xD31uhX6Tf54sDPUOBXTqKH4c9aPY66CyQrS0= github.com/bmatcuk/doublestar v1.3.4/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE= +github.com/bmatcuk/doublestar/v4 v4.6.0 h1:HTuxyug8GyFbRkrffIpzNCSK4luc0TY3wzXvzIZhEXc= +github.com/bmatcuk/doublestar/v4 v4.6.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc= github.com/bombsimon/wsl/v3 v3.4.0 h1:RkSxjT3tmlptwfgEgTgU+KYKLI35p/tviNXNXiL2aNU= 
github.com/bombsimon/wsl/v3 v3.4.0/go.mod h1:KkIB+TXkqy6MvK9BDZVbZxKNYsE1/oLRJbIFtf14qqo= +github.com/bradleyfalzon/ghinstallation/v2 v2.5.0 h1:yaYcGQ7yEIGbsJfW/9z7v1sLiZg/5rSNNXwmMct5XaE= +github.com/bradleyfalzon/ghinstallation/v2 v2.5.0/go.mod h1:amcvPQMrRkWNdueWOjPytGL25xQGzox7425qMgzo+Vo= github.com/breathingdust/go-changelog v0.0.0-20210127001721-f985d5709c15 h1:OUv8PSGE8S6CPWWKc+2T7tyLwcKKERcvWn19O4KiUu4= github.com/breathingdust/go-changelog v0.0.0-20210127001721-f985d5709c15/go.mod h1:3cN0yNLxr97LobXDDmNQBh8tgBssK7ftuGC5y1sc17M= github.com/breml/bidichk v0.2.4 h1:i3yedFWWQ7YzjdZJHnPo9d/xURinSq3OM+gyM43K4/8= github.com/breml/bidichk v0.2.4/go.mod h1:7Zk0kRFt1LIZxtQdl9W9JwGAcLTTkOs+tN7wuEYGJ3s= github.com/breml/errchkjson v0.3.1 h1:hlIeXuspTyt8Y/UmP5qy1JocGNR00KQHgfaNtRAjoxQ= github.com/breml/errchkjson v0.3.1/go.mod h1:XroxrzKjdiutFyW3nWhw34VGg7kiMsDQox73yWCGI2U= -github.com/butuzov/ireturn v0.1.1 h1:QvrO2QF2+/Cx1WA/vETCIYBKtRjc30vesdoPUNo1EbY= -github.com/butuzov/ireturn v0.1.1/go.mod h1:Wh6Zl3IMtTpaIKbmwzqi6olnM9ptYQxxVacMsOEFPoc= +github.com/butuzov/ireturn v0.2.0 h1:kCHi+YzC150GE98WFuZQu9yrTn6GEydO2AuPLbTgnO4= +github.com/butuzov/ireturn v0.2.0/go.mod h1:Wh6Zl3IMtTpaIKbmwzqi6olnM9ptYQxxVacMsOEFPoc= +github.com/butuzov/mirror v1.1.0 h1:ZqX54gBVMXu78QLoiqdwpl2mgmoOJTk7s4p4o+0avZI= +github.com/butuzov/mirror v1.1.0/go.mod h1:8Q0BdQU6rC6WILDiBM60DBfvV78OLJmMmixe7GF45AE= +github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= +github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= @@ -299,8 +331,17 @@ github.com/cheggaaa/pb v1.0.27/go.mod h1:pQciLPpbU0oxA0h+VJYYLxO+XeDQb5pZijXscXH 
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= +github.com/cli/go-gh v1.2.1 h1:xFrjejSsgPiwXFP6VYynKWwxLQcNJy3Twbu82ZDlR/o= +github.com/cli/go-gh v1.2.1/go.mod h1:Jxk8X+TCO4Ui/GarwY9tByWm/8zp4jJktzVZNlTW5VM= +github.com/cli/safeexec v1.0.0 h1:0VngyaIyqACHdcMNWfo6+KdUYnqEr2Sg+bSP1pdF+dI= +github.com/cli/safeexec v1.0.0/go.mod h1:Z/D4tTN8Vs5gXYHDCbaM1S/anmEDnJb1iW0+EJ5zx3Q= +github.com/cli/shurcooL-graphql v0.0.2 h1:rwP5/qQQ2fM0TzkUTwtt6E2LbIYf6R+39cUXTa04NYk= +github.com/cli/shurcooL-graphql v0.0.2/go.mod h1:tlrLmw/n5Q/+4qSvosT+9/W5zc8ZMjnJeYBxSdb4nWA= github.com/client9/misspell v0.3.4 h1:ta993UF76GwbvJcIo3Y68y/M3WxlpEHPWIGDkJYwzJI= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I= +github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs= +github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= @@ -310,8 +351,12 @@ github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWH github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go 
v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= +github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI= +github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4= github.com/curioswitch/go-reassign v0.2.0 h1:G9UZyOcpk/d7Gd6mqYgd8XYWFMw/znxwGDUstnC9DIo= github.com/curioswitch/go-reassign v0.2.0/go.mod h1:x6OpXuWvgfQaMGks2BZybTngWjT84hqJfKoO8Tt/Roc= github.com/daixiang0/gci v0.10.1 h1:eheNA3ljF6SxnPD/vE4lCBusVHmV3Rs3dkKvFrJ7MR0= @@ -321,6 +366,7 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/denis-tingaikin/go-header v0.4.3 h1:tEaZKAlqql6SKCY++utLmkPLd6K8IBM20Ha7UVm+mtU= github.com/denis-tingaikin/go-header v0.4.3/go.mod h1:0wOCWuN71D5qIgE2nz9KrKmuYBAC2Mra5RassOIQ2/c= +github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/emirpasic/gods v1.12.0 h1:QAUIPSaCu4G+POclxeqb3F+WPpdKqFGlw36+yOzGlrg= github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= @@ -338,15 +384,18 @@ github.com/esimonov/ifshort v1.0.4/go.mod h1:Pe8zjlRrJ80+q2CxHLfEOfTwxCZ4O+MuhcH github.com/ettle/strcase v0.1.1 h1:htFueZyVeE1XNnMEfbqp5r67qAN/4r6ya1ysq8Q+Zcw= github.com/ettle/strcase v0.1.1/go.mod h1:hzDLsPC7/lwKyBOywSHEP89nt2pDgdy+No1NBA9o9VY= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= 
+github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs= github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw= +github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= github.com/fatih/structtag v1.2.0 h1:/OdNE99OxoI/PqaW/SuSK9uxxT3f/tcSZgon/ssNSx4= github.com/fatih/structtag v1.2.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94= github.com/firefart/nonamedreturns v1.0.4 h1:abzI1p7mAEPYuR4A+VLKn4eNDOycjYo2phmY9sfv40Y= github.com/firefart/nonamedreturns v1.0.4/go.mod h1:TDhe/tjI1BXo48CmYbUduTV7BdIga8MAO/xbKdcVsGI= github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY= +github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI= github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU= github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo= @@ -354,8 +403,8 @@ github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlya github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0= github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0= -github.com/go-critic/go-critic v0.7.0 h1:tqbKzB8pqi0NsRZ+1pyU4aweAF7A7QN0Pi4Q02+rYnQ= -github.com/go-critic/go-critic v0.7.0/go.mod h1:moYzd7GdVXE2C2hYTwd7h0CPcqlUeclsyBRwMa38v64= +github.com/go-critic/go-critic v0.8.1 h1:16omCF1gN3gTzt4j4J6fKI/HnRojhEp+Eks6EuKw3vw= +github.com/go-critic/go-critic 
v0.8.1/go.mod h1:kpzXl09SIJX1cr9TB/g/sAG+eFEl7ZS9f9cqvZtyNl0= github.com/go-git/gcfg v1.5.0 h1:Q5ViNfGF8zFgyJWPqYwA7qGFoMTEiBmdlkcfRmpIMa4= github.com/go-git/gcfg v1.5.0/go.mod h1:5m20vg6GwYabIxaOonVkTdrILxQMpEShl1xiMF4ua+E= github.com/go-git/go-billy/v5 v5.0.0/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0= @@ -374,11 +423,18 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2 github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY= +github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A= -github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0= +github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ= +github.com/go-openapi/errors v0.20.2 h1:dxy7PGTqEh94zj2E3h1cUmQQWiM1+aeCROfAr02EmK8= +github.com/go-openapi/errors v0.20.2/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M= +github.com/go-openapi/strfmt v0.21.3 h1:xwhj5X6CjXEZZHMWy1zKJxvW9AfHC9pkyUjLvHtKG7o= +github.com/go-openapi/strfmt v0.21.3/go.mod h1:k+RzNO0Da+k3FrrynSNN8F7n/peCmQQqbbXjtDfvmGg= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI= +github.com/go-test/deep v1.0.2-0.20181118220953-042da051cf31/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA= github.com/go-test/deep v1.1.0 h1:WOcxcdHcvdgThNXjw0t76K42FXTU7HpNQWHpA2HHNlg= github.com/go-toolsmith/astcast v1.1.0 
h1:+JN9xZV1A+Re+95pgnMgDboWNVnIMMQXwfBwLRPgSC8= github.com/go-toolsmith/astcast v1.1.0/go.mod h1:qdcuFWeGGS2xX5bLM/c3U9lewg7+Zu4mr+xPwZIB4ZU= @@ -401,9 +457,13 @@ github.com/go-xmlfmt/xmlfmt v1.1.2 h1:Nea7b4icn8s57fTx1M5AI4qQT5HEM3rVUO8MuE6g80 github.com/go-xmlfmt/xmlfmt v1.1.2/go.mod h1:aUCEOzzezBEjDBbFBoSiya/gduyIiWYRP6CnSFIV8AM= github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y= github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8= +github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw= github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg= +github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= @@ -419,7 +479,6 @@ github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4= github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8= github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs= -github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= @@ -436,8 +495,10 @@ github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= +github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= +github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= +github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 h1:23T5iq8rbUYlhpt5DB4XJkc6BU31uODLD1o1gKvZmD0= github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2/go.mod h1:k9Qvh+8juN+UKMCS/3jFtGICgW8O96FVaZsaxdzDkR4= @@ -447,8 +508,8 @@ github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe h1:6RGUuS7EGotKx6 github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe/go.mod h1:gjqyPShc/m8pEMpk0a3SeagVb0kaqvhscv+i9jI5ZhQ= github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 h1:amWTbTGqOZ71ruzrdA+Nx5WA3tV1N0goTspwmKCQvBY= github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2/go.mod h1:9wOXstvyDRshQ9LggQuzBCGysxs3b6Uo/1MvYCR2NMs= -github.com/golangci/golangci-lint v1.52.2 h1:FrPElUUI5rrHXg1mQ7KxI1MXPAw5lBVskiz7U7a8a1A= -github.com/golangci/golangci-lint v1.52.2/go.mod h1:S5fhC5sHM5kE22/HcATKd1XLWQxX+y7mHj8B5H91Q/0= +github.com/golangci/golangci-lint v1.53.3 
h1:CUcRafczT4t1F+mvdkUm6KuOpxUZTl0yWN/rSU6sSMo= +github.com/golangci/golangci-lint v1.53.3/go.mod h1:W4Gg3ONq6p3Jl+0s/h9Gr0j7yEgHJWWZO2bHl2tBUXM= github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 h1:MfyDlzVjl1hoaPzPD4Gpb/QgoRfSBR0jdhwGyAWwMSA= github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0/go.mod h1:66R6K6P6VWk9I95jvqGxkqJxVWGFy9XlDwLwVz1RCFg= github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca h1:kNY3/svz5T29MYHubXix4aDDuE3RWHkPvopM/EDv/MA= @@ -480,15 +541,20 @@ github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ= github.com/google/go-github/v35 v35.3.0 h1:fU+WBzuukn0VssbayTT+Zo3/ESKX9JYWjbZTLOTEyho= github.com/google/go-github/v35 v35.3.0/go.mod h1:yWB7uCcVWaUbUP74Aq3whuMySRMatyRmq5U9FTNlbio= -github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk= +github.com/google/go-github/v45 v45.2.0 h1:5oRLszbrkvxDDqBCNj2hjDZMKmvexaZ1xw/FCD+K3FI= +github.com/google/go-github/v45 v45.2.0/go.mod h1:FObaZJEDSTa/WGCzZ2Z3eoCDXWJKMenWWTrd8jrta28= +github.com/google/go-github/v53 v53.0.0 h1:T1RyHbSnpHYnoF0ZYKiIPSgPtuJ8G6vgc0MKodXsQDQ= +github.com/google/go-github/v53 v53.0.0/go.mod h1:XhFRObz+m/l+UCm9b7KSIC3lT3NWSXGt7mOsAWEloao= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= +github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= +github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= 
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= -github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ= github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk= +github.com/google/martian/v3 v3.3.2 h1:IqNFLAmvJOgVlpdEBiQbDc2EwKW77amAycfTuWKdfvw= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= @@ -503,15 +569,18 @@ github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLe github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= +github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec= github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= +github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I= github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8= github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod 
h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8= -github.com/googleapis/enterprise-certificate-proxy v0.2.0 h1:y8Yozv7SZtlU//QXbezB6QkpuE6jMD2/gfzk4AftXjs= github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg= +github.com/googleapis/enterprise-certificate-proxy v0.2.3 h1:yk9/cqRKtT9wXZSsRH9aurXEpJX+U6FLtpYTdC3R06k= +github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0= @@ -525,10 +594,10 @@ github.com/googleapis/gax-go/v2 v2.7.0 h1:IcsPKeInNvYi7eqSaDjiZqDDKu5rsmunY0Y1Yu github.com/googleapis/gax-go/v2 v2.7.0/go.mod h1:TEop28CZZQ2y+c0VxMUmu1lV+fQx57QpBWsYpwqHJx8= github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4= github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g= -github.com/gookit/color v1.5.2 h1:uLnfXcaFjlrDnQDT+NCBcfhrXqYTx/rcCa6xn01Y8yI= -github.com/gookit/color v1.5.2/go.mod h1:w8h4bGiHeeBpvQVePTutdbERIUf3oJE5lZ8HM0UgXyg= -github.com/gordonklaus/ineffassign v0.0.0-20230107090616-13ace0543b28 h1:9alfqbrhuD+9fLZ4iaAVwhlp5PEhmnBt7yvK2Oy5C1U= -github.com/gordonklaus/ineffassign v0.0.0-20230107090616-13ace0543b28/go.mod h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= +github.com/gookit/color v1.5.3 h1:twfIhZs4QLCtimkP7MOxlF3A0U/5cDPseRT9M/+2SCE= +github.com/gookit/color v1.5.3/go.mod h1:NUzwzeehUfl7GIb36pqId+UGmRfQcU/WiiyTTeNjHtE= +github.com/gordonklaus/ineffassign v0.0.0-20230610083614-0e73809eb601 h1:mrEEilTAUmaAORhssPPkxj84TsHrPMLBGW2Z4SoTxm8= +github.com/gordonklaus/ineffassign v0.0.0-20230610083614-0e73809eb601/go.mod 
h1:Qcp2HIAYhR7mNUVSIxZww3Guk4it82ghYcEXIAk+QT0= github.com/gorilla/websocket v1.4.1 h1:q7AeDBpnBk8AogcD4DSag/Ukw/KV+YhzLj2bP5HvKCM= github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/gostaticanalysis/analysisutil v0.7.1 h1:ZMCjoue3DtDWQ5WyU16YbjbQEQ3VuzwxALrpYd+HeKk= @@ -542,7 +611,13 @@ github.com/gostaticanalysis/nilerr v0.1.1 h1:ThE+hJP0fEp4zWLkWHWcRyI2Od0p7DlgYG3 github.com/gostaticanalysis/nilerr v0.1.1/go.mod h1:wZYb6YI5YAxxq0i1+VJbY0s2YONW0HU0GPE3+5PWN4A= github.com/gostaticanalysis/testutil v0.3.1-0.20210208050101-bfb5c8eec0e4/go.mod h1:D+FIZ+7OahH3ePw/izIEeH5I06eKs1IKI4Xr64/Am3M= github.com/gostaticanalysis/testutil v0.4.0 h1:nhdCmubdmDF6VEatUNjgUZBJKWRqugoISdUv3PPQgHY= +github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= +github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 h1:2VTzZjLZBgl62/EtslCrtky5vbi9dd7HrQPQIx6wqiw= +github.com/hashicorp/consul/api v1.13.0/go.mod h1:ZlVrynguJKcYr54zGaDbaL3fOvKC9m72FhPvA8T35KQ= +github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms= +github.com/hashicorp/copywrite v0.16.4 h1:qDuO1wJJZx30SCEqxJnFuStXK43R39juJKMMTn/u/9c= +github.com/hashicorp/copywrite v0.16.4/go.mod h1:6wvQH+ICDoD2bpjO1RJ6fi+h3aY5NeLEM12oTkEtFoc= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I= github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= @@ -551,22 +626,35 @@ github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtng github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ= 
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= -github.com/hashicorp/go-getter v1.7.0 h1:bzrYP+qu/gMrL1au7/aDvkoOVGUJpeKBgbqRHACAFDY= -github.com/hashicorp/go-getter v1.7.0/go.mod h1:W7TalhMmbPmsSMdNjD0ZskARur/9GJ17cfHTRtXV744= -github.com/hashicorp/go-hclog v0.10.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= +github.com/hashicorp/go-getter v1.7.1 h1:SWiSWN/42qdpR0MdhaOc/bLR48PLuP1ZQtYLRlM69uY= +github.com/hashicorp/go-getter v1.7.1/go.mod h1:W7TalhMmbPmsSMdNjD0ZskARur/9GJ17cfHTRtXV744= +github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI= +github.com/hashicorp/go-hclog v0.8.0/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= +github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c= github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M= +github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60= +github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= +github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= -github.com/hashicorp/go-plugin v1.4.9 h1:ESiK220/qE0aGxWdzKIvRH69iLiuN/PjoLTm69RoWtU= -github.com/hashicorp/go-plugin v1.4.9/go.mod h1:viDMjcLJuDui6pXb8U4HVfb8AamCWhHGUjr2IrTF67s= +github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY= +github.com/hashicorp/go-plugin v1.4.10 h1:xUbmA4jC6Dq163/fWcp8P3JuHilrHHMLNRxzGQJ9hNk= 
+github.com/hashicorp/go-plugin v1.4.10/go.mod h1:6/1TEzT0eQznvI/gV2CM29DLSkAK/e58mUWKVsPaph0= +github.com/hashicorp/go-retryablehttp v0.5.4/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs= +github.com/hashicorp/go-rootcerts v1.0.1/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8= +github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8= github.com/hashicorp/go-safetemp v1.0.0 h1:2HR189eFNrjHQyENnQMMpCiBAsRxzbTMIgBhEyExpmo= github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I= +github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU= +github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A= +github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= +github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= -github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.5.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek= @@ -577,23 +665,35 @@ github.com/hashicorp/hc-install v0.4.0 h1:cZkRFr1WVa0Ty6x5fTvL1TuO1flul231rWkGH9 github.com/hashicorp/hc-install v0.4.0/go.mod h1:5d155H8EC5ewegao9A4PUTMNPZaq+TbOzkJJZ4vrXeI= github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4= github.com/hashicorp/hcl v1.0.0/go.mod 
h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= -github.com/hashicorp/hcl/v2 v2.16.2 h1:mpkHZh/Tv+xet3sy3F9Ld4FyI2tUpWe9x3XtPx9f1a0= -github.com/hashicorp/hcl/v2 v2.16.2/go.mod h1:JRmR89jycNkrrqnMmvPDMd56n1rQJ2Q6KocSLCMCXng= +github.com/hashicorp/hcl/v2 v2.17.0 h1:z1XvSUyXd1HP10U4lrLg5e0JMVz6CPaJvAgxM0KNZVY= +github.com/hashicorp/hcl/v2 v2.17.0/go.mod h1:gJyW2PTShkJqQBKpAmPO3yxMxIuoXkOF2TpqXzrQyx4= github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= +github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc= +github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE= +github.com/hashicorp/serf v0.9.6/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4= github.com/hashicorp/terraform-exec v0.17.2 h1:EU7i3Fh7vDUI9nNRdMATCEfnm9axzTnad8zszYZ73Go= github.com/hashicorp/terraform-exec v0.17.2/go.mod h1:tuIbsL2l4MlwwIZx9HPM+LOV9vVyEfBYu2GsO1uH3/8= -github.com/hashicorp/terraform-json v0.5.0/go.mod h1:eAbqb4w0pSlRmdvl8fOyHAi/+8jnkVYN28gJkSJrLhU= -github.com/hashicorp/terraform-json v0.14.0 h1:sh9iZ1Y8IFJLx+xQiKHGud6/TSUCM0N8e17dKDpqV7s= -github.com/hashicorp/terraform-json v0.14.0/go.mod h1:5A9HIWPkk4e5aeeXIBbkcOvaZbIYnAIkEyqP2pNSckM= -github.com/hashicorp/terraform-registry-address v0.1.0 h1:W6JkV9wbum+m516rCl5/NjKxCyTVaaUBbzYcMzBDO3U= -github.com/hashicorp/terraform-registry-address v0.1.0/go.mod h1:EnyO2jYO6j29DTHbJcm00E5nQTFeTtyZH3H5ycydQ5A= -github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 h1:HKLsbzeOsfXmKNpr3GiT18XAblV0BjCbzL8KQAMZGa0= -github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg= +github.com/hashicorp/terraform-json v0.17.1 h1:eMfvh/uWggKmY7Pmb3T85u86E2EQg6EQHgyRwf3RkyA= +github.com/hashicorp/terraform-json v0.17.1/go.mod h1:Huy6zt6euxaY9knPAFKjUITn8QxUFIe9VuSzb4zn/0o= 
+github.com/hashicorp/terraform-registry-address v0.2.0 h1:92LUg03NhfgZv44zpNTLBGIbiyTokQCDcdH5BhVHT3s= +github.com/hashicorp/terraform-registry-address v0.2.0/go.mod h1:478wuzJPzdmqT6OGbB/iH82EDcI8VFM4yujknh/1nIs= +github.com/hashicorp/terraform-svchost v0.0.1 h1:Zj6fR5wnpOHnJUmLyWozjMeDaVuE+cstMPj41/eKmSQ= +github.com/hashicorp/terraform-svchost v0.0.1/go.mod h1:ut8JaH0vumgdCfJaihdcZULqkAwHdQNwNH7taIDdsZM= +github.com/hashicorp/vault/api v1.0.4/go.mod h1:gDcqh3WGcR1cpF5AJz/B1UFheUEneMoIospckxBxk6Q= +github.com/hashicorp/vault/sdk v0.1.13/go.mod h1:B+hVj7TpuQY1Y/GPbCpffmgd+tSEwvhkWnjtSYCaS2M= +github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ= github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM= +github.com/henvic/httpretty v0.0.6 h1:JdzGzKZBajBfnvlMALXXMVQWxWMF/ofTy8C3/OSUTxs= +github.com/henvic/httpretty v0.0.6/go.mod h1:X38wLjWXHkXT7r2+uK8LjCMne9rsuNaBLJ+5cU2/Pmo= github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM= github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg= +github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec h1:qv2VnGeEQHchGaZ/u7lxST/RaJw+cv273q79D81Xbog= +github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68= +github.com/hjson/hjson-go/v4 v4.0.0 h1:wlm6IYYqHjOdXH1gHev4VoXCaW20HdQAGCxdOEEg2cs= +github.com/hjson/hjson-go/v4 v4.0.0/go.mod h1:KaYt3bTw3zhBjYqnXkYywcYctk0A2nxeEFTse3rH13E= +github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= github.com/huandu/xstrings v1.3.2 h1:L18LIDzqlW6xN2rEkpdV8+oL/IXWJ1APd+vsdYy4Wdw= github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= github.com/ianlancetaylor/demangle 
v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= @@ -602,10 +702,14 @@ github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJ github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU= github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= -github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc= -github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A= github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo= +github.com/jedib0t/go-pretty v4.3.0+incompatible h1:CGs8AVhEKg/n9YbUenWmNStRW2PHJzaeDodcfvRAbIo= +github.com/jedib0t/go-pretty v4.3.0+incompatible/go.mod h1:XemHduiw8R651AF9Pt4FwCTKeG3oo7hrHJAoznj9nag= +github.com/jedib0t/go-pretty/v6 v6.4.6 h1:v6aG9h6Uby3IusSSEjHaZNXpHFhzqMmjXcPq1Rjl9Jw= +github.com/jedib0t/go-pretty/v6 v6.4.6/go.mod h1:Ndk3ase2CkQbXLLNf5QDHoYb6J9WtVfmHZu9n8rk2xs= github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc= github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4= @@ -620,6 +724,8 @@ github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9Y github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= github.com/jmespath/go-jmespath/internal/testify v1.5.1 
h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= +github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc= +github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= @@ -633,30 +739,35 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM= github.com/julz/importas v0.1.0 h1:F78HnrsjY3cR7j0etXy5+TU1Zuy7Xt08X/1aJnH5xXY= github.com/julz/importas v0.1.0/go.mod h1:oSFU2R4XK/P7kNBrnL/FEQlDGN1/6WoxXEjSSXO0DV0= -github.com/junk1tm/musttag v0.5.0 h1:bV1DTdi38Hi4pG4OVWa7Kap0hi0o7EczuK6wQt9zPOM= -github.com/junk1tm/musttag v0.5.0/go.mod h1:PcR7BA+oREQYvHwgjIDmw3exJeds5JzRcvEJTfjrA0M= github.com/katbyte/andreyvit-diff v0.0.1 h1:2u6ofZeHrVgJjUzJ6JFlcfb3LeDq0rHxUH+WMHerELo= github.com/katbyte/andreyvit-diff v0.0.1/go.mod h1:F6SME78YVaEk4agzLHhmsVwdVU+o/CtRnR0Bl9qBfrI= github.com/katbyte/sergi-go-diff v1.1.1 h1:HelbPXYFHziR633zFq8QzwDY44jQ0Xy7COcLxNEWJtY= github.com/katbyte/sergi-go-diff v1.1.1/go.mod h1:BxkLLDDB1iVQsnURErqoQMjkyXIlR0DefDKzZCUHNEw= github.com/katbyte/terrafmt v0.5.2 h1:s9FiRKPgh7fHomEXYbhuoeDpFjWnr/h9nusUUK3TPbI= github.com/katbyte/terrafmt v0.5.2/go.mod h1:xGuDBw9hFBIrP8c/BWiJgVPn16DV2AnSUlIFc2OMLYU= +github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs= +github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8= github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd/go.mod 
h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351 h1:DowS9hvgyYSX4TO5NpyC606/Z4SxnNYbT+WX27or6Ck= github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= github.com/kisielk/errcheck v1.6.3 h1:dEKh+GLHcWm2oN34nMvDzn1sqI0i0WxPvrgiJA5JuM8= github.com/kisielk/errcheck v1.6.3/go.mod h1:nXw/i/MfnvRHqXa7XXmQMUB0oNFGuBrNI8d8NLy0LPw= github.com/kisielk/gotool v1.0.0 h1:AV2c/EiW3KqPNT9ZKl07ehoAGi4C5/01Cfbblndcapg= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kkHAIKE/contextcheck v1.1.4 h1:B6zAaLhOEEcjvUgIYEqystmnFk1Oemn8bvJhbt0GMb8= github.com/kkHAIKE/contextcheck v1.1.4/go.mod h1:1+i/gWqokIa+dm31mqGLZhZJ7Uh44DJGZVmr6QRBNJg= +github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk= github.com/klauspost/compress v1.15.11 h1:Lcadnb3RKGin4FYM/orgq0qde+nc15E5Cbqg4B9Sx9c= github.com/klauspost/compress v1.15.11/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM= +github.com/knadh/koanf v1.5.0 h1:q2TSd/3Pyc/5yP9ldIrSdIz26MCcyNQzW0pEAugLPNs= +github.com/knadh/koanf v1.5.0/go.mod h1:Hgyjp4y8v44hpZtPzs7JZfRAW5AhN7KfZcwv1RYggDs= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.2.1/go.mod 
h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= @@ -665,18 +776,19 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= github.com/kulti/thelper v0.6.3 h1:ElhKf+AlItIu+xGnI990no4cE2+XaSu1ULymV2Yulxs= github.com/kulti/thelper v0.6.3/go.mod h1:DsqKShOvP40epevkFrvIwkCMNYxMeTNjdWL4dqWHZ6I= -github.com/kunwardeep/paralleltest v1.0.6 h1:FCKYMF1OF2+RveWlABsdnmsvJrei5aoyZoaGS+Ugg8g= -github.com/kunwardeep/paralleltest v1.0.6/go.mod h1:Y0Y0XISdZM5IKm3TREQMZ6iteqn1YuwCsJO/0kL9Zes= -github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k= +github.com/kunwardeep/paralleltest v1.0.7 h1:2uCk94js0+nVNQoHZNLBkAR1DQJrVzw6T0RMzJn55dQ= +github.com/kunwardeep/paralleltest v1.0.7/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY= github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= github.com/kyoh86/exportloopref v0.1.11 h1:1Z0bcmTypkL3Q4k+IDHMWTcnCliEZcaPiIe0/ymEyhQ= github.com/kyoh86/exportloopref v0.1.11/go.mod h1:qkV4UF1zGl6EkF1ox8L5t9SwyeBAZ3qLMd6up458uqA= github.com/ldez/gomoddirectives v0.2.3 h1:y7MBaisZVDYmKvt9/l1mjNCiSA1BVn34U0ObUcJwlhA= github.com/ldez/gomoddirectives v0.2.3/go.mod h1:cpgBogWITnCfRq2qGoDkKMEVSaarhdBr6g8G04uz6d0= -github.com/ldez/tagliatelle v0.4.0 h1:sylp7d9kh6AdXN2DpVGHBRb5guTVAgOxqNGhbqc4b1c= -github.com/ldez/tagliatelle v0.4.0/go.mod h1:mNtTfrHy2haaBAw+VT7IBV6VXBThS7TCreYWbBcJ87I= +github.com/ldez/tagliatelle v0.5.0 h1:epgfuYt9v0CG3fms0pEgIMNPuFf/LpPIfjk4kyqSioo= +github.com/ldez/tagliatelle v0.5.0/go.mod h1:rj1HmWiL1MiKQuOONhd09iySTEkUuE/8+5jtPYz9xa4= github.com/leonklingele/grouper v1.1.1 h1:suWXRU57D4/Enn6pXR0QVqqWWrnJ9Osrz+5rjt8ivzU= github.com/leonklingele/grouper v1.1.1/go.mod 
h1:uk3I3uDfi9B6PeUjsCKi6ndcf63Uy7snXgR4yDYQVDY= +github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY= +github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0= github.com/lufeee/execinquery v1.2.1 h1:hf0Ems4SHcUGBxpGN7Jz78z1ppVkP/837ZlETPCEtOM= github.com/lufeee/execinquery v1.2.1/go.mod h1:EC7DrEKView09ocscGHC+apXMIaorh4xqSxS/dy8SbM= github.com/magiconair/properties v1.8.6 h1:5ibWZ6iY0NctNGWo87LalDlEZ6R41TqbbDamhfG/Qzo= @@ -691,7 +803,9 @@ github.com/matryer/is v1.2.0/go.mod h1:2fLPjFQM9rhQ15aVEtbuwhJinnOqrmgXPNdZsdwlW github.com/matryer/is v1.4.0 h1:sosSmIWwkYITGrxZ25ULNDeKiMNzFSr4V/eqBQP0PeE= github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU= github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= +github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc= github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4= github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA= @@ -700,33 +814,49 @@ github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNx github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84= +github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE= github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU= github.com/mattn/go-isatty v0.0.14/go.mod 
h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94= github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM= -github.com/mattn/go-isatty v0.0.18 h1:DOKFKCQ7FNG2L1rbrmstDN4QVRdS89Nkh85u68Uwp98= -github.com/mattn/go-isatty v0.0.18/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA= +github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI= +github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= github.com/mattn/go-runewidth v0.0.14 h1:+xnbZSEeDbOIg5/mE6JF0w6n9duR1l3/WmbinWVwUuU= github.com/mattn/go-runewidth v0.0.14/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w= github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/mbilski/exhaustivestruct v1.2.0 h1:wCBmUnSYufAHO6J4AVWY6ff+oxWxsVFrwgOdMUQePUo= github.com/mbilski/exhaustivestruct v1.2.0/go.mod h1:OeTBVxQWoEmB2J2JCHmXWPJ0aksxSUOUy+nvtVEfzXc= -github.com/mgechev/revive v1.3.1 h1:OlQkcH40IB2cGuprTPcjB0iIUddgVZgGmDX3IAMR8D4= -github.com/mgechev/revive v1.3.1/go.mod h1:YlD6TTWl2B8A103R9KWJSPVI9DrEf+oqr15q21Ld+5I= +github.com/mergestat/timediff v0.0.3 h1:ucCNh4/ZrTPjFZ081PccNbhx9spymCJkFxSzgVuPU+Y= +github.com/mergestat/timediff v0.0.3/go.mod h1:yvMUaRu2oetc+9IbPLYBJviz6sA7xz8OXMDfhBl7YSI= +github.com/mgechev/revive v1.3.2 h1:Wb8NQKBaALBJ3xrrj4zpwJwqwNA6nDpyJSEQWcCka6U= +github.com/mgechev/revive v1.3.2/go.mod h1:UCLtc7o5vg5aXCwdUTU1kEBQ1v+YXPAkYDIDXbrs5I0= +github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= 
+github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI= +github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= +github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso= +github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI= github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= -github.com/mitchellh/cli v1.1.2 h1:PvH+lL2B7IQ101xQL63Of8yFS2y+aDlsFcsqNc+u/Kw= -github.com/mitchellh/cli v1.1.2/go.mod h1:6iaV0fGdElS6dPBx0EApTxHrcWvmJphyh2n8YBLPPZ4= +github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI= +github.com/mitchellh/cli v1.1.5 h1:OxRIeJXpAMztws/XHlN2vu6imG5Dpq+j61AzAX5fLng= +github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4= github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw= github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw= github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= +github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= github.com/mitchellh/go-testing-interface v1.14.1 h1:jrgshOhYAUVNMAJiKbEu7EqAwgJJ2JqpQmpLJOu07cU= github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8= +github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0= 
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0= +github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/mitchellh/mapstructure v1.3.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= @@ -737,8 +867,12 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= github.com/moricho/tparallel v0.3.1 h1:fQKD4U1wRMAYNngDonW5XupoB/ZGJHdpzrWqgyg9krA= github.com/moricho/tparallel v0.3.1/go.mod h1:leENX2cUv7Sv2qDgdi0D0fCftN8fRC67Bcn8pqzeYNI= +github.com/muesli/reflow v0.3.0 h1:IFsN6K9NfGtjeggFP+68I4chLZV2yIKsXJFNZ+eWh6s= +github.com/muesli/termenv v0.12.0 h1:KuQRUE3PgxRFWhq4gHvZtPSLCGDqM5q/cYr1pZ39ytc= +github.com/muesli/termenv v0.12.0/go.mod h1:WCCv32tusQ/EEZ5S8oUIIrC/nIuBcxCVqlN4Xfkv+7A= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/nakabonne/nestif v0.3.1 h1:wm28nZjhQY5HyYPx+weN3Q65k6ilSBxDb8v5S81B81U= @@ -746,18 +880,21 @@ 
github.com/nakabonne/nestif v0.3.1/go.mod h1:9EtoZochLn5iUprVDmDjqGKPofoUEBL8U4N github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 h1:4kuARK6Y6FxaNu/BnU2OAaLF86eTVhP2hjTB6iMvItA= github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354/go.mod h1:KSVJerMDfblTH7p5MZaTt+8zaT2iEk3AkVb9PQdZuE8= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= -github.com/nishanths/exhaustive v0.9.5 h1:TzssWan6orBiLYVqewCG8faud9qlFntJE30ACpzmGME= -github.com/nishanths/exhaustive v0.9.5/go.mod h1:IbwrGdVMizvDcIxPYGVdQn5BqWJaOwpCvg4RGb8r/TA= +github.com/nishanths/exhaustive v0.11.0 h1:T3I8nUGhl/Cwu5Z2hfc92l0e04D2GEW6e0l8pzda2l0= +github.com/nishanths/exhaustive v0.11.0/go.mod h1:RqwDsZ1xY0dNdqHho2z6X+bgzizwbLYOWnZbbl2wLB4= github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk= github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c= -github.com/nunnatsa/ginkgolinter v0.9.0 h1:Sm0zX5QfjJzkeCjEp+t6d3Ha0jwvoDjleP9XCsrEzOA= -github.com/nunnatsa/ginkgolinter v0.9.0/go.mod h1:FHaMLURXP7qImeH6bvxWJUpyH+2tuqe5j4rW1gxJRmI= +github.com/npillmayer/nestext v0.1.3/go.mod h1:h2lrijH8jpicr25dFY+oAJLyzlya6jhnuG+zWp9L0Uk= +github.com/nunnatsa/ginkgolinter v0.12.1 h1:vwOqb5Nu05OikTXqhvLdHCGcx5uthIYIl0t79UVrERQ= +github.com/nunnatsa/ginkgolinter v0.12.1/go.mod h1:AK8Ab1PypVrcGUusuKD8RDcl2KgsIwvNaaxAlyHSzso= github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw= github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA= +github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4= +github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec= github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY= -github.com/onsi/ginkgo/v2 v2.8.0 
h1:pAM+oBNPrpXRs+E/8spkeGx9QgekbRVyr74EUvRVOUI= -github.com/onsi/gomega v1.26.0 h1:03cDLK28U6hWvCAns6NeydX3zIm4SF3ci69ulidS32Q= +github.com/onsi/ginkgo/v2 v2.9.4 h1:xR7vG4IXt5RWx6FfIjyAtsoMAtnc3C/rFXBBd2AjZwE= +github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE= github.com/otiai10/copy v1.2.0 h1:HvG945u96iNadPoG2/Ja2+AUJeW5YuFQMixq9yirC+k= github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw= github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE= @@ -766,21 +903,26 @@ github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT9 github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc= github.com/owenrumney/go-sarif v1.1.1 h1:QNObu6YX1igyFKhdzd7vgzmw7XsWN3/6NMGuDzBgXmE= github.com/owenrumney/go-sarif v1.1.1/go.mod h1:dNDiPlF04ESR/6fHlPyq7gHKmrM0sHUvAGjsoh8ZH0U= +github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= +github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc= github.com/pavius/impi v0.0.3 h1:DND6MzU+BLABhOZXbELR3FU8b+zDgcq4dOCNLhiTYuI= github.com/pavius/impi v0.0.3/go.mod h1:x/hU0bfdWIhuOT1SKwiJg++yvkk6EuOtJk8WtDZqgr8= +github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE= github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8= github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c= github.com/pelletier/go-toml/v2 v2.0.5 h1:ipoSadvV8oGUjnUbMub59IDPPwfxF694nG/jwbMiyQg= github.com/pelletier/go-toml/v2 v2.0.5/go.mod h1:OMHamSCAODeSsVrwwvcJOaoN0LIUIaFVNZzmWyNfXas= +github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod 
h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pkg/profile v1.6.0/go.mod h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18= github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/polyfloyd/go-errorlint v1.4.0 h1:b+sQ5HibPIAjEZwtuwU8Wz/u0dMZ7YL+bk+9yWyHVJk= -github.com/polyfloyd/go-errorlint v1.4.0/go.mod h1:qJCkPeBn+0EXkdKTrUCcuFStM2xrDKfxI3MGLXPexUs= +github.com/polyfloyd/go-errorlint v1.4.2 h1:CU+O4181IxFDdPH6t/HT7IiDj1I7zxNi1RIUxYwn8d0= +github.com/polyfloyd/go-errorlint v1.4.2/go.mod h1:k6fU/+fQe38ednoZS51T7gSIGQW1y94d6TkSr35OzH8= github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI= github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo= github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s= @@ -788,6 +930,7 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= +github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= github.com/prometheus/client_golang v1.12.1 h1:ZiaPsmm9uiBeaSMRznKsCDNtPCS0T3JVDGF+06gjBzk= github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod 
h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= @@ -814,8 +957,9 @@ github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727 h1:TCg2WBOl github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0= github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs= github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567/go.mod h1:DWNGW8A4Y+GyBgPuaQJuWiy0XYftx4Xm/y5Jqk9I6VQ= -github.com/rhysd/actionlint v1.6.24 h1:5f61cF5ssP2pzG0jws5bEsfZBNhbBcO9nl7vTzVKjzs= -github.com/rhysd/actionlint v1.6.24/go.mod h1:gQmz9r2wlcpLy+VdbzK0GINJQnAK5/sNH3BpwW4Mt5I= +github.com/rhnvrm/simples3 v0.6.1/go.mod h1:Y+3vYm2V7Y4VijFoJHHTrja6OgPrJ2cBti8dPGkC3sA= +github.com/rhysd/actionlint v1.6.25 h1:0Is99a51w1iocdxKUzNYiBNwjoSlO2Klqzll98joVj4= +github.com/rhysd/actionlint v1.6.25/go.mod h1:Q+MtZKm1MdmJ9woOSKxLscMW7kU44/PShvjNy5ZKHA8= github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc= github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis= github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88= @@ -829,30 +973,37 @@ github.com/ryancurrah/gomodguard v1.3.0 h1:q15RT/pd6UggBXVBuLps8BXRvl5GPBcwVA7BJ github.com/ryancurrah/gomodguard v1.3.0/go.mod h1:ggBxb3luypPEzqVtq33ee7YSN35V28XeGnid8dnni50= github.com/ryanrolds/sqlclosecheck v0.4.0 h1:i8SX60Rppc1wRuyQjMciLqIzV3xnoHB7/tXbr6RGYNI= github.com/ryanrolds/sqlclosecheck v0.4.0/go.mod h1:TBRRjzL31JONc9i4XMinicuo+s+E8yKZ5FN8X3G6CKQ= +github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= +github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts= +github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc= +github.com/samber/lo v1.37.0 h1:XjVcB8g6tgUp8rsPsJ2CvhClfImrpL04YpQHXeHPhRw= +github.com/samber/lo 
v1.37.0/go.mod h1:9vaz2O4o8oOnK23pd2TrXufcbdbJIa3b6cstBWKpopA= github.com/sanposhiho/wastedassign/v2 v2.0.7 h1:J+6nrY4VW+gC9xFzUc+XjPD3g3wF3je/NsJFwFK7Uxc= github.com/sanposhiho/wastedassign/v2 v2.0.7/go.mod h1:KyZ0MWTwxxBmfwn33zh3k1dmsbF2ud9pAAGfoLfjhtI= github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw= github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ= github.com/sashamelentyev/usestdlibvars v1.23.0 h1:01h+/2Kd+NblNItNeux0veSL5cBF1jbEOPrEhDzGYq0= github.com/sashamelentyev/usestdlibvars v1.23.0/go.mod h1:YPwr/Y1LATzHI93CqoPUN/2BzGQ/6N/cl/KwgR0B/aU= -github.com/sebdah/goldie v1.0.0/go.mod h1:jXP4hmWywNEwZzhMuv2ccnqTSFpuq8iyQhtQdkkZBH4= -github.com/securego/gosec/v2 v2.15.0 h1:v4Ym7FF58/jlykYmmhZ7mTm7FQvN/setNm++0fgIAtw= -github.com/securego/gosec/v2 v2.15.0/go.mod h1:VOjTrZOkUtSDt2QLSJmQBMWnvwiQPEjg0l+5juIqGk8= +github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc= +github.com/securego/gosec/v2 v2.16.0 h1:Pi0JKoasQQ3NnoRao/ww/N/XdynIB9NRYYZT5CyOs5U= +github.com/securego/gosec/v2 v2.16.0/go.mod h1:xvLcVZqUfo4aAQu56TNv7/Ltz6emAOQAEsrZrt7uGlI= github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ= github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c h1:W65qqJCIOVP4jpqPQ0YvHYKwcMEMVWIzWC5iNQQfBTU= github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c/go.mod h1:/PevMnwAxekIXwN8qQyfc5gl2NlkB3CQlkizAbOkeBs= +github.com/shopspring/decimal v1.2.0 h1:abSATXmQEYyShuxI4/vyW3tV1MrKAJzCZ/0zLUXYbsQ= +github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk= 
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88= -github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0= -github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= -github.com/sivchari/containedctx v1.0.2 h1:0hLQKpgC53OVF1VT7CeoFHk9YKstur1XOgfYIc1yrHI= -github.com/sivchari/containedctx v1.0.2/go.mod h1:PwZOeqm4/DLoJOqMSIJs3aKqXRX4YO+uXww087KZ7Bw= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sivchari/containedctx v1.0.3 h1:x+etemjbsh2fB5ewm5FeLNi5bUjK0V8n0RB+Wwfd0XE= +github.com/sivchari/containedctx v1.0.3/go.mod h1:c1RDvCbnJLtH4lLcYD/GqwiBSSf4F5Qk0xld2rBqzJ4= github.com/sivchari/nosnakecase v1.7.0 h1:7QkpWIRMe8x25gckkFd2A5Pi6Ymo0qgr4JrhGt95do8= github.com/sivchari/nosnakecase v1.7.0/go.mod h1:CwDzrzPea40/GB6uynrNLiorAlgFRvRbFSgJx2Gs+QY= github.com/sivchari/tenv v1.7.1 h1:PSpuD4bu6fSmtWMxSGWcvqUUgIn7k3yOJhOIzVWn8Ak= @@ -868,10 +1019,11 @@ github.com/sourcegraph/jsonrpc2 v0.2.0/go.mod h1:ZafdZgk/axhT1cvZAPOhw+95nz2I/Ra github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= github.com/spf13/afero v1.9.5 h1:stMpOSZFs//0Lv29HduCmli3GUfpFoF3Y1Q/aXj/wVM= github.com/spf13/afero v1.9.5/go.mod h1:UBogFpq8E9Hx+xc5CNTTEpTnuHVmXDwZcZcE1eb/UhQ= +github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w= 
github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155UU= -github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA= -github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY= +github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I= +github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0= github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk= github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= @@ -896,10 +1048,12 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals= +github.com/stretchr/testify v1.7.4/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8= github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= +github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= github.com/subosito/gotenv v1.4.1 h1:jyEFiXpy21Wm81FBN71l9VoMMV8H8jG+qIK3GCpY6Qs= github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0= github.com/t-yuki/gocover-cobertura v0.0.0-20180217150009-aaee18c8195c h1:+aPplBwWcHBo6q9xrfWdMrT9o4kltkmmvpemgIjep/8= @@ -910,16 +1064,22 @@ 
github.com/tenntenn/modver v1.0.1 h1:2klLppGhDgzJrScMpkj9Ujy3rXPUspSjAcev9tSEBgA github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg6SAMGH0= github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3 h1:f+jULpRQGxTSkNYKJ51yaw6ChIqO+Je8UqsTKN/cDag= github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY= -github.com/terraform-linters/tflint v0.46.1 h1:Z2Psh60BgINuSwQQV+iL64P/61Bd9m5pXeP9LqJo8EM= -github.com/terraform-linters/tflint v0.46.1/go.mod h1:5DtZb3Zza9rQyZDcVWrcdJAZsrzCogPOseJlk0vUadI= -github.com/terraform-linters/tflint-plugin-sdk v0.16.1 h1:fBfLL8KzP3pkQrNp3iQxaGoKBoMo2sFYoqmhuo6yc+A= -github.com/terraform-linters/tflint-plugin-sdk v0.16.1/go.mod h1:ltxVy04PRwptL6P/Ugz2ZeTNclYapClrLn/kVFXJGzo= -github.com/terraform-linters/tflint-ruleset-terraform v0.2.2 h1:iTE09KkaZ0DE29xvp6IIM1/gmas9V0h8CER28SyBmQ8= -github.com/terraform-linters/tflint-ruleset-terraform v0.2.2/go.mod h1:bCkvH8Vqzr16bWEE3e6Q3hvdZlmSAOR8i6G3M5y+M+k= +github.com/terraform-linters/tflint v0.47.0 h1:RTN4b4E+O//rUgWDgQtEJ+DN6893aUSjrsL8c7wgPag= +github.com/terraform-linters/tflint v0.47.0/go.mod h1:bWFnRUxnFZA9uX5Iv5HMlgIWjVk8cE0ddD10FE8DC8Q= +github.com/terraform-linters/tflint-plugin-sdk v0.17.0 h1:epg9rqt6zaj7C3qaNqRMlqscOqX0S1UKiTk+UHGvi1Q= +github.com/terraform-linters/tflint-plugin-sdk v0.17.0/go.mod h1:XrMaY79YpbU4J4cZgo6o4F31ZiMO2ccETXISniTOCsA= +github.com/terraform-linters/tflint-ruleset-terraform v0.4.0 h1:HOkKth3zhtpEo4J0f122ln6xAo1RKnroDYzP+gnZWbM= +github.com/terraform-linters/tflint-ruleset-terraform v0.4.0/go.mod h1:rcgg6YCJIvU2zL2aJlYE9s1u0HirSunjJg7Gu/mqUNY= github.com/tetafro/godot v1.4.11 h1:BVoBIqAf/2QdbFmSwAWnaIqDivZdOV0ZRwEm6jivLKw= github.com/tetafro/godot v1.4.11/go.mod h1:LR3CJpxDVGlYOWn3ZZg1PgNZdTUvzsZWu8xaEohUpn8= -github.com/timakin/bodyclose v0.0.0-20221125081123-e39cf3fc478e h1:MV6KaVu/hzByHP0UvJ4HcMGE/8a6A4Rggc/0wx2AvJo= -github.com/timakin/bodyclose 
v0.0.0-20221125081123-e39cf3fc478e/go.mod h1:27bSVNWSBOHm+qRp1T9qzaIpsWEP6TbUnei/43HK+PQ= +github.com/thanhpk/randstr v1.0.4 h1:IN78qu/bR+My+gHCvMEXhR/i5oriVHcTB/BJJIRTsNo= +github.com/thanhpk/randstr v1.0.4/go.mod h1:M/H2P1eNLZzlDwAzpkkkUvoyNNMbzRGhESZuEQk3r0U= +github.com/thlib/go-timezone-local v0.0.0-20210907160436-ef149e42d28e h1:BuzhfgfWQbX0dWzYzT1zsORLnHRv3bcRcsaUk0VmXA8= +github.com/thlib/go-timezone-local v0.0.0-20210907160436-ef149e42d28e/go.mod h1:/Tnicc6m/lsJE0irFMA0LfIwTBo4QP7A8IfyIv4zZKI= +github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4= +github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk= +github.com/timakin/bodyclose v0.0.0-20230421092635-574207250966 h1:quvGphlmUVU+nhpFa4gg4yJyTRJ13reZMDHrKwYw53M= +github.com/timakin/bodyclose v0.0.0-20230421092635-574207250966/go.mod h1:27bSVNWSBOHm+qRp1T9qzaIpsWEP6TbUnei/43HK+PQ= github.com/timonwong/loggercheck v0.9.4 h1:HKKhqrjcVj8sxL7K77beXh0adEm6DLjV/QOGeMXEVi4= github.com/timonwong/loggercheck v0.9.4/go.mod h1:caz4zlPcgvpEkXgVnAJGowHAMW2NwHaNlpS8xDbVhTg= github.com/tomarrell/wrapcheck/v2 v2.8.1 h1:HxSqDSN0sAt0yJYsrcYVoEeyM4aI9yAm3KQpIXDJRhQ= @@ -934,7 +1094,6 @@ github.com/ultraware/whitespace v0.0.5 h1:hh+/cpIcopyMYbZNVov9iSxvJU3OYQg78Sfaqz github.com/ultraware/whitespace v0.0.5/go.mod h1:aVMh/gQve5Maj9hQ/hg+F75lr/X5A89uZnzAmWSineA= github.com/uudashr/gocognit v1.0.6 h1:2Cgi6MweCsdB6kpcVQp7EW4U23iBFQWfTXiWlyp842Y= github.com/uudashr/gocognit v1.0.6/go.mod h1:nAIUuVBnYU7pcninia3BHOvQkpQCeO76Uscky5BOwcY= -github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4= github.com/vmihailenco/msgpack/v5 v5.3.5 h1:5gO0H1iULLWGhs2H5tbAHIZTV8/cYafcFOr9znI5mJU= github.com/vmihailenco/msgpack/v5 v5.3.5/go.mod h1:7xyJ9e+0+9SaZT0Wt1RGleJXzli6Q/V5KbhBonMG9jc= @@ -944,16 +1103,23 @@ 
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4= github.com/xanzy/ssh-agent v0.3.0 h1:wUMzuKtKilRgBAD1sUb8gOwwRr2FGoBVumcjoOACClI= github.com/xanzy/ssh-agent v0.3.0/go.mod h1:3s9xbODqPuuhK9JV1R321M/FlMZSBvE5aY6eAcqrDh0= +github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI= +github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g= +github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8= github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c= github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= +github.com/xen0n/gosmopolitan v1.2.1 h1:3pttnTuFumELBRSh+KQs1zcz4fN6Zy7aB0xlnQSn1Iw= +github.com/xen0n/gosmopolitan v1.2.1/go.mod h1:JsHq/Brs1o050OOdmzHeOr0N7OtlnKRAGAsElF8xBQA= github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 h1:QldyIu/L63oPpyvQmHgvgickp1Yw510KJOqX7H24mg8= github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs= github.com/yagipy/maintidx v1.0.0 h1:h5NvIsCz+nRDapQ0exNv4aJ0yXSI0420omVANTv3GJM= github.com/yagipy/maintidx v1.0.0/go.mod h1:0qNf/I/CCZXSMhsRsrEPDZ+DkekpKLXAJfsTACwgXLk= github.com/yeya24/promlinter v0.2.0 h1:xFKDQ82orCU5jQujdaD8stOHiv8UN68BSdn2a8u8Y3o= github.com/yeya24/promlinter v0.2.0/go.mod h1:u54lkmBOZrpEbQQ6gox2zWKKLKu2SGe+2KOiextY+IA= -github.com/yuin/goldmark v1.1.7/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/ykadowak/zerologlint v0.1.2 h1:Um4P5RMmelfjQqQJKtE8ZW+dLZrXrENeIzWWKw800U4= +github.com/ykadowak/zerologlint v0.1.2/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg= +github.com/youmark/pkcs8 
v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA= github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= @@ -963,19 +1129,22 @@ github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= github.com/yuin/goldmark v1.5.4 h1:2uY/xC0roWy8IBEGLgB1ywIoEJFGmRrX21YQcvGZzjU= github.com/yuin/goldmark v1.5.4/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60 h1:gZucqLjL1eDzVWrXj4uiWeMbAopJlBR2mKQAsTGdPwo= -github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60/go.mod h1:i9VhcIHN2PxXMbQrKqXNueok6QNONoPjNMoj9MygVL0= -github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= -github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= -github.com/zclconf/go-cty v1.2.1/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= +github.com/yuin/goldmark-meta v1.1.0 h1:pWw+JLHGZe8Rk0EGsMVssiNb/AaPMHfSRszZeUeiOUc= +github.com/yuin/goldmark-meta v1.1.0/go.mod h1:U4spWENafuA7Zyg+Lj5RqK/MF+ovMYtBvXi1lBb2VP0= github.com/zclconf/go-cty v1.10.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk= -github.com/zclconf/go-cty v1.13.1 h1:0a6bRwuiSHtAmqCqNOE+c2oHgepv0ctoxU4FUe43kwc= -github.com/zclconf/go-cty v1.13.1/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0= -github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8= +github.com/zclconf/go-cty v1.13.2 h1:4GvrUxe/QUDYuJKAav4EYqdM47/kZa672LwmXFmEKT0= +github.com/zclconf/go-cty v1.13.2/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0= +github.com/zclconf/go-cty-debug 
v0.0.0-20191215020915-b22d67c1ba0b h1:FosyBZYxY34Wul7O/MSKey3txpPYyCqVO5ZyceuQJEI= github.com/zclconf/go-cty-yaml v1.0.3 h1:og/eOQ7lvA/WWhHGFETVWNduJM7Rjsv2RRpx1sdFMLc= github.com/zclconf/go-cty-yaml v1.0.3/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs= gitlab.com/bosi/decorder v0.2.3 h1:gX4/RgK16ijY8V+BRQHAySfQAb354T7/xQpDB2n10P0= gitlab.com/bosi/decorder v0.2.3/go.mod h1:9K1RB5+VPNQYtXtTDAzd2OEftsZb1oV0IrJrzChSdGE= +go-simpler.org/assert v0.5.0 h1:+5L/lajuQtzmbtEfh69sr5cRf2/xZzyJhFjoOz/PPqs= +go.etcd.io/etcd/api/v3 v3.5.4/go.mod h1:5GB2vv4A4AOn3yk7MftYGHkUfGtDHnEraIjym4dYz5A= +go.etcd.io/etcd/client/pkg/v3 v3.5.4/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g= +go.etcd.io/etcd/client/v3 v3.5.4/go.mod h1:ZaRkVgBZC+L+dLCjTcF1hRXpgZXQPOvnA/Ak/gq3kiY= +go.mongodb.org/mongo-driver v1.10.0 h1:UtV6N5k14upNp4LTduX0QCufG124fSu25Wz9tu94GLg= +go.mongodb.org/mongo-driver v1.10.0/go.mod h1:wsihk0Kdgv8Kqu1Anit4sfK+22vSFbUrAVEYRhCXrA8= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= @@ -986,11 +1155,14 @@ go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E= go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= +go.tmz.dev/musttag v0.7.0 h1:QfytzjTWGXZmChoX0L++7uQN+yRCPfyFm+whsM+lfGc= +go.tmz.dev/musttag v0.7.0/go.mod h1:oTFPvgOkJmp5kYL02S8+jrH0eLrBIl57rzWeA26zDEM= go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw= go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI= go.uber.org/multierr v1.6.0 
h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4= go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= +go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo= go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60= go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= @@ -998,18 +1170,23 @@ golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnf golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod 
h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw= -golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ= -golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE= +golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= +golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= +golang.org/x/crypto v0.10.0 h1:LKqV2xt9+kDzSTfOhx4FrkEBcMrAgHSYgzywV9zcGmM= +golang.org/x/crypto v0.10.0/go.mod h1:o4eNf7Ede1fv+hwOwZsTHl9EsPFO6q6ZvYR8vYfY45I= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -1020,8 +1197,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0 golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= -golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e h1:+WEEuIdZHnUeJJmEUjyYC2gfUMj69yZXw17EnHg/otA= -golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e/go.mod h1:Kr81I6Kryrl9sr8s2FK3vxD90NdsKWRuOIl2O4CvYbA= +golang.org/x/exp v0.0.0-20230510235704-dd950f8aeaea h1:vLCWI/yYrdEHyN2JzIzPO3aaQJHQdp89IZBA/+azVC4= +golang.org/x/exp v0.0.0-20230510235704-dd950f8aeaea/go.mod 
h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w= golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk= golang.org/x/exp/typeparams v0.0.0-20230224173230-c95f2b4c22f2 h1:J74nGeMgeFnYQJN59eFwh06jX/V8g0lB7LWpjSLxtgU= @@ -1058,10 +1235,9 @@ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91 golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI= golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk= -golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.11.0 h1:bUO06HqtnRcc/7l71XBe4WcqTZ+3AH1J59zWDDwLKgU= +golang.org/x/mod v0.11.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -1075,7 +1251,7 @@ golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLL golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod 
h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= @@ -1100,6 +1276,7 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc= golang.org/x/net v0.0.0-20210326060303-6b1517762897/go.mod h1:uSPa2vr4CLtc/ILN5odXGNXS6mhrKVzTaCXzk9m6W3k= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8= golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= @@ -1114,14 +1291,16 @@ golang.org/x/net v0.0.0-20220617184016-355a448f1bc9/go.mod h1:XRhObCWvk6IyKnWLug golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= +golang.org/x/net v0.0.0-20220923203811-8be639271d50/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net 
v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= -golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM= -golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= +golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= +golang.org/x/net v0.11.0 h1:Gi2tvZIJyBtO9SDr1q9h5hEQCp/4L2RQ+ar0qjx2oNU= +golang.org/x/net v0.11.0/go.mod h1:2L/ixqYpgIVXmeoSA/4Lu7BzTG4KIyPIryS4IsOd1oQ= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -1147,8 +1326,8 @@ golang.org/x/oauth2 v0.0.0-20220822191816-0ebed06d0094/go.mod h1:h4gKUeWbJ4rQPri golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg= golang.org/x/oauth2 v0.1.0/go.mod h1:G9FE4dLTsbXUu90h/Pf85g4w1D+SSAgR+q46nJZ8M4A= -golang.org/x/oauth2 v0.7.0 h1:qe6s0zUXlPX80/dITx3440hWZ7GwMwgDDyrSGTPJG/g= -golang.org/x/oauth2 v0.7.0/go.mod h1:hPLQkd9LyjfXTiRohC/41GhcFqxisoUQ99sCUOHO9x4= +golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8= +golang.org/x/oauth2 v0.8.0/go.mod 
h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -1163,16 +1342,19 @@ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E= +golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys 
v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1181,14 +1363,19 @@ golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1211,11 +1398,13 @@ golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys 
v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210502180810-71e4cd670f79/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1228,8 +1417,10 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -1241,6 +1432,7 @@ golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -1258,19 +1450,24 @@ golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU= -golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.9.0 h1:KS/R3tvhPqvJvwcKfnBHJwwthS11LRhmM5D59eEXa0s= +golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210503060354-a79de5458b56/go.mod h1:tfny5GFUkzUvx4ps4ajbZsCe5lw1metzhBm9T3x7oIY= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= -golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= -golang.org/x/term v0.7.0 h1:BEvjmm5fURWqcfbSKTdpkDXYBrUS1c0m8agp14W48vQ= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= +golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= +golang.org/x/term v0.9.0 
h1:GRRCnKYhdQrD8kfRAdQ6Zcw1P0OcELxGLKJvtjVMZ28= +golang.org/x/term v0.9.0/go.mod h1:M6DEAAIenWoTxdKrOltXcmDY3rSplQUkrvaDU5FcQyo= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= @@ -1278,15 +1475,15 @@ golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.10.0 h1:UpjohKhiEgNc0CSauXmwYftY1+LlaC75SJwh0SgCX58= +golang.org/x/text v0.10.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time 
v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= @@ -1301,6 +1498,7 @@ golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgw golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190910044552-dd2b5c81c578/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -1329,6 +1527,7 @@ golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roY golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20200724022722-7017fd6b1305/go.mod 
h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= @@ -1341,6 +1540,7 @@ golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= golang.org/x/tools v0.1.1-0.20210205202024-ef80cdb6ec6d/go.mod h1:9bzcO0MWcOuT0tm1iBGzDVPshzfwoVvREIui8C+MHqU= @@ -1356,11 +1556,10 @@ golang.org/x/tools v0.1.11/go.mod h1:SgwaegtQh8clINPpECJMqnxLv9I09HLqnW3RMqW0CA4 golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA= golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k= -golang.org/x/tools v0.4.0/go.mod h1:UE5sM2OK9E/d67R0ANs2xJizIymRP5gJU295PvKXxjQ= golang.org/x/tools v0.5.0/go.mod h1:N+Kgy78s5I24c24dU8OfWNEotWjutIs8SnJvn5IDq+k= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.8.0 h1:vSDcovVPld282ceKgDimkRSC8kpaH1dgyc9UMzlt84Y= -golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4= +golang.org/x/tools v0.10.0 h1:tvDr/iQoUqNdohiYm0LmmKcBk+q86lb9EprIUFhHHGg= +golang.org/x/tools v0.10.0/go.mod 
h1:UJwyiVBsOA2uwvK/e5OY3GTpDUJriEd+/YlqAwLPmyM= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -1418,8 +1617,8 @@ google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ google.golang.org/api v0.97.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s= google.golang.org/api v0.98.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s= google.golang.org/api v0.100.0/go.mod h1:ZE3Z2+ZOr87Rx7dqFsdRQkRBk36kDtp/h+QpHbB7a70= -google.golang.org/api v0.103.0 h1:9yuVqlu2JCvcLg9p8S3fcFLZij8EPSyvODIY1rkMizQ= -google.golang.org/api v0.103.0/go.mod h1:hGtW6nK1AC+d9si/UBhw8Xli+QMOf6xyNAyJw4qU9w0= +google.golang.org/api v0.110.0 h1:l+rh0KYUooe9JGbGVx71tbFo4SMbMTXK3I3ia2QSEeU= +google.golang.org/api v0.110.0/go.mod h1:7FC4Vvx1Mooxh8C5HWjzZHcavuS2f6pmJpZx60ca7iI= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -1430,6 +1629,7 @@ google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6 google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= +google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod 
h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= @@ -1531,11 +1731,13 @@ google.golang.org/genproto v0.0.0-20221010155953-15ba04fc1c0e/go.mod h1:3526vdqw google.golang.org/genproto v0.0.0-20221014173430-6e2ab493f96b/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM= google.golang.org/genproto v0.0.0-20221014213838-99cd37c6964a/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM= google.golang.org/genproto v0.0.0-20221025140454-527a21cfbd71/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s= -google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f h1:BWUVssLB0HVOSY78gIdvk1dTVYtT1y8SBWtPYuTJ/6w= -google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM= +google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4 h1:DdoeryqhaXp1LtT/emMP1BRJPHHKFi5akj/nbx/zNTA= +google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4/go.mod h1:NWraEVixdDnqcqQ30jipen1STv2r/n24Wb7twVTGR4s= +google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= +google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= @@ -1568,8 +1770,8 @@ google.golang.org/grpc v1.48.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACu 
google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= google.golang.org/grpc v1.50.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= -google.golang.org/grpc v1.54.0 h1:EhTqbhiYeixwWQtAEZAxmV9MGqcjEU2mFx52xCzNyag= -google.golang.org/grpc v1.54.0/go.mod h1:PUSEXI6iWghWaB6lXM4knEgpJNu2qUcKfDtNci3EC2g= +google.golang.org/grpc v1.55.0 h1:3Oj82/tFSCeUrRTg/5E/7d/W5A1tj6Ky1ABAuZuv5ag= +google.golang.org/grpc v1.55.0/go.mod h1:iYEXKGkEBhg1PjZQvoYEVPTDkHo1/bjTnfwTeGONTY8= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= @@ -1589,6 +1791,7 @@ google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqw google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng= google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= +gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d/go.mod h1:cuepJuh7vyXfUyUwEgHQXw849cJrilpS5NeIjOWESAw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -1597,8 +1800,10 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/cheggaaa/pb.v1 v1.0.27/go.mod 
h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw= gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/h2non/gock.v1 v1.1.2 h1:jBbHXgGBK/AoPVfJh5x4r/WxIrElvbLel8TCZkkZJoY= gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA= gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI= gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME= gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= @@ -1606,12 +1811,13 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 
@@ -1634,3 +1840,4 @@ mvdan.cc/unparam v0.0.0-20221223090309-7455f1af531d/go.mod h1:IeHQjmn6TOD+e4Z3RF rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= +sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc= diff --git a/.ci/tools/main.go b/.ci/tools/main.go index 9651e086dd5..362a3679208 100644 --- a/.ci/tools/main.go +++ b/.ci/tools/main.go @@ -4,9 +4,10 @@ package main import ( - _ "github.com/bflad/tfproviderdocs" + _ "github.com/YakDriver/tfproviderdocs" _ "github.com/client9/misspell/cmd/misspell" _ "github.com/golangci/golangci-lint/cmd/golangci-lint" + _ "github.com/hashicorp/copywrite" _ "github.com/hashicorp/go-changelog/cmd/changelog-build" _ "github.com/katbyte/terrafmt" _ "github.com/pavius/impi/cmd/impi" diff --git a/.copywrite.hcl b/.copywrite.hcl new file mode 100644 index 00000000000..86475faf338 --- /dev/null +++ b/.copywrite.hcl @@ -0,0 +1,18 @@ +schema_version = 1 + +project { + license = "MPL-2.0" + copyright_year = 2023 + + # (OPTIONAL) A list of globs that should not have copyright/license headers. 
+ # Supports doublestar glob patterns for more flexibility in defining which + # files or folders should be ignored + header_ignore = [ + ".ci/**", + ".github/**", + ".teamcity/**", + ".release/**", + "infrastructure/repository/labels-service.tf", + ".goreleaser.yml", + ] +} diff --git a/.github/actionlint.yaml b/.github/actionlint.yaml index 341c4dbe737..963a389f411 100644 --- a/.github/actionlint.yaml +++ b/.github/actionlint.yaml @@ -1,3 +1,3 @@ self-hosted-runner: # Labels of self-hosted runner in array of string - labels: [custom, small, medium, large, xl] + labels: [custom, small, medium, large, xl, custom-linux-medium] diff --git a/.github/dependabot.yml b/.github/dependabot.yml index 3e06ccdb1b5..cb6c6e92fb4 100644 --- a/.github/dependabot.yml +++ b/.github/dependabot.yml @@ -1,11 +1,21 @@ version: 2 updates: - - package-ecosystem: "github-actions" - directory: "/" + - directory: "/" + package-ecosystem: "github-actions" schedule: interval: "daily" - - package-ecosystem: "gomod" - directory: "/" + + - directory: "/" + package-ecosystem: "gomod" + groups: + aws-sdk-go: + patterns: + - "github.com/aws/aws-sdk-go" + - "github.com/aws/aws-sdk-go-v2/*" + aws-sdk-go-base: + patterns: + - "github.com/hashicorp/aws-sdk-go-base/v2" + - "github.com/hashicorp/aws-sdk-go-base/v2/*" ignore: # hcl/v2 should only be updated via terraform-plugin-sdk - dependency-name: "github.com/hashicorp/hcl/v2" @@ -20,28 +30,44 @@ updates: schedule: interval: "daily" open-pull-requests-limit: 30 - - package-ecosystem: "gomod" - directory: "/.ci/providerlint" + + - directory: "/.ci/providerlint" + package-ecosystem: "gomod" ignore: - dependency-name: "golang.org/x/tools" - dependency-name: "google.golang.org/grpc" schedule: interval: "daily" - - package-ecosystem: "gomod" - directory: "/.ci/tools" + + - directory: "/.ci/tools" + package-ecosystem: "gomod" ignore: - dependency-name: "golang.org/x/tools" - dependency-name: "google.golang.org/grpc" schedule: interval: "daily" - - 
package-ecosystem: "gomod" - directory: "/skaff" + + - directory: "/skaff" + package-ecosystem: "gomod" ignore: - dependency-name: "golang.org/x/tools" - dependency-name: "google.golang.org/grpc" schedule: interval: "daily" - - package-ecosystem: "terraform" - directory: "/infrastructure/repository" + + - directory: "/tools/tfsdk2fw" + package-ecosystem: "gomod" + allow: + - dependency-type: direct + ignore: + # terraform-plugin-sdk/v2 should only be updated via terraform-provider-aws + - dependency-name: "github.com/hashicorp/terraform-plugin-sdk/v2" + - dependency-name: "golang.org/x/tools" + - dependency-name: "google.golang.org/grpc" + schedule: + interval: "daily" + + - directory: "/infrastructure/repository" + package-ecosystem: "terraform" schedule: interval: "daily" diff --git a/.github/labeler-issue-triage.yml b/.github/labeler-issue-triage.yml index db45cd30ab3..6530687d66b 100644 --- a/.github/labeler-issue-triage.yml +++ b/.github/labeler-issue-triage.yml @@ -643,6 +643,8 @@ service/translate: - '((\*|-)\s*`?|(data|resource)\s+"?)aws_translate_' service/verifiedaccess: - '((\*|-)\s*`?|(data|resource)\s+"?)aws_verifiedaccess' +service/verifiedpermissions: + - '((\*|-)\s*`?|(data|resource)\s+"?)aws_verifiedpermissions_' service/voiceid: - '((\*|-)\s*`?|(data|resource)\s+"?)aws_voiceid_' service/vpc: diff --git a/.github/labeler-pr-needs-triage.yml b/.github/labeler-pr-needs-triage.yml deleted file mode 100644 index 3d0eca28900..00000000000 --- a/.github/labeler-pr-needs-triage.yml +++ /dev/null @@ -1,4 +0,0 @@ -needs-triage: - - '**' - - '.*' - - '.*/**' diff --git a/.github/labeler-pr-triage.yml b/.github/labeler-pr-triage.yml index c291ad111bc..5c251c6f36f 100644 --- a/.github/labeler-pr-triage.yml +++ b/.github/labeler-pr-triage.yml @@ -1043,6 +1043,9 @@ service/translate: service/verifiedaccess: - 'internal/service/ec2/**/verifiedaccess_*' - 'website/**/verifiedaccess*' +service/verifiedpermissions: + - 'internal/service/verifiedpermissions/**/*' + - 
'website/**/verifiedpermissions_*' service/voiceid: - 'internal/service/voiceid/**/*' - 'website/**/voiceid_*' diff --git a/.github/workflows/acctest-terraform-lint.yml b/.github/workflows/acctest-terraform-lint.yml index 95b435bf570..b378915d9ee 100644 --- a/.github/workflows/acctest-terraform-lint.yml +++ b/.github/workflows/acctest-terraform-lint.yml @@ -17,8 +17,8 @@ jobs: terrafmt: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -45,8 +45,8 @@ jobs: TEST_FILES_PARTITION: '\./internal/service/${{ matrix.path }}.*/.*_test\.go' steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 diff --git a/.github/workflows/autoremove_labels.yml b/.github/workflows/autoremove_labels.yml index ace26b84eae..e090fb6a626 100644 --- a/.github/workflows/autoremove_labels.yml +++ b/.github/workflows/autoremove_labels.yml @@ -12,23 +12,16 @@ jobs: secrets: inherit RemoveNeedsTriageFromMaintainers: + name: 'Remove needs-triage for Maintainers' needs: community_check if: github.event.action == 'opened' && needs.community_check.outputs.maintainer == 'true' - runs-on: ubuntu-latest - steps: - - name: Remove needs-triage label from member's Issues - uses: actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0 - with: - labels: | - needs-triage + uses: ./.github/workflows/reusable-add-or-remove-labels.yml + 
with: + remove: needs-triage RemoveTriagingLabelsFromClosedIssueOrPR: + name: 'Remove Triaging Labels from Closed Items' if: github.event.action == 'closed' - runs-on: ubuntu-latest - steps: - - name: Remove triaging labels from closed issues and PRs - uses: actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0 - with: - labels: | - needs-triage - waiting-response + uses: ./.github/workflows/reusable-add-or-remove-labels.yml + with: + remove: needs-triage,waiting-response diff --git a/.github/workflows/cdktf-documentation.yml b/.github/workflows/cdktf-documentation.yml new file mode 100644 index 00000000000..a0e2302a0ca --- /dev/null +++ b/.github/workflows/cdktf-documentation.yml @@ -0,0 +1,57 @@ +name: CDKTF Documentation +on: + schedule: + - cron: "0 0 * * WED" + workflow_dispatch: {} + +permissions: + contents: write + pull-requests: write + +jobs: + cdktfDocs: + runs-on: + - custom + - linux + - custom-linux-medium + container: + image: docker.mirror.hashicorp.services/hashicorp/jsii-terraform + env: + CHECKPOINT_DISABLE: "1" + timeout-minutes: 120 + steps: + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3 + - run: git config --global user.email "github-team-tf-cdk@hashicorp.com" + - run: git config --global user.name "team-tf-cdk" + - name: Get yarn cache directory path + id: global-cache-dir-path + run: echo "dir=$(yarn cache dir)" >> $GITHUB_OUTPUT + - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 # v3.3.1 + id: global-cache + with: + path: ${{ steps.global-cache-dir-path.outputs.dir }} + key: ${{ runner.os }}-integration-yarn-${{ hashFiles('**/yarn.lock') }} + - name: Setup Node.js + uses: actions/setup-node@e33196f7422957bea03ed53f6fbb155025ffc7b8 # v3.7.0 + with: + node-version: "18.x" + - name: Install cdktf-registry-docs + run: npm install -g cdktf-registry-docs@1.12.0 + - name: Run conversion + run: | + cdktf-registry-docs convert \ + --files='*/ec2_*.html.markdown' \ + 
--languages='typescript,python' \ + --parallel-file-conversions=1 \ + --provider-from-registry="hashicorp/aws" \ + . + env: + TF_PLUGIN_CACHE_DIR: ${{ steps.global-cache-dir-path.outputs.dir }}/terraform-plugins + + - name: Create Pull Request + uses: peter-evans/create-pull-request@153407881ec5c347639a548ade7d8ad1d6740e38 # v5.0.2 + with: + commit-message: "docs: update cdktf documentation" + title: "docs: update cdktf documentation" + body: "This PR updates the cdktf related documentation based on the current HCL-based documentation. It is automatically created by the cdktf-documentation GitHub action." + token: ${{ secrets.ORGSCOPED_GITHUB_TOKEN }} diff --git a/.github/workflows/changelog.yml b/.github/workflows/changelog.yml index 3eff4b9160e..315cc3bc167 100644 --- a/.github/workflows/changelog.yml +++ b/.github/workflows/changelog.yml @@ -42,7 +42,7 @@ jobs: - run: echo ${{ steps.prc.outputs.comment-id }} - name: PR Comment if: steps.prc.outputs.comment-id == '' - uses: peter-evans/create-or-update-comment@ca08ebd5dc95aa0cd97021e9708fcd6b87138c9b + uses: peter-evans/create-or-update-comment@c6c9a1a66007646a28c153e2a8580a5bad27bcfa env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: diff --git a/.github/workflows/changelog_misspell.yml b/.github/workflows/changelog_misspell.yml index 629f7106a00..115da54c72a 100644 --- a/.github/workflows/changelog_misspell.yml +++ b/.github/workflows/changelog_misspell.yml @@ -14,8 +14,8 @@ jobs: misspell: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 diff --git a/.github/workflows/community-check.yml b/.github/workflows/community-check.yml index 07d40497eae..785f9d0b756 100644 --- 
a/.github/workflows/community-check.yml +++ b/.github/workflows/community-check.yml @@ -1,5 +1,4 @@ name: Community Check - on: workflow_call: outputs: @@ -9,7 +8,6 @@ on: value: ${{ jobs.community_check.outputs.is_maintainer }} partner: value: ${{ jobs.community_check.outputs.is_partner }} - jobs: community_check: name: Check community lists for username @@ -43,10 +41,8 @@ jobs: - name: Determine if user is in lists id: determination env: - IS_CORE_CONTRIBUTOR: ${{ contains(fromJSON(steps.decode.outputs.core_contributors_list), github.actor) }} - IS_MAINTAINER: ${{ contains(fromJSON(steps.decode.outputs.maintainers_list), github.actor) }} - IS_PARTNER: ${{ contains(fromJSON(steps.decode.outputs.partners_list), github.actor) }} + USER_LOGIN: ${{ github.event.issue.user.login || github.event.pull_request.user.login }} run: | - echo "is_core_contributor=$IS_CORE_CONTRIBUTOR" >> $GITHUB_OUTPUT - echo "is_maintainer=$IS_MAINTAINER" >> $GITHUB_OUTPUT - echo "is_partner=$IS_PARTNER" >> $GITHUB_OUTPUT + echo "is_core_contributor="${{ contains(fromJSON(steps.decode.outputs.core_contributors_list), env.USER_LOGIN) }} >> $GITHUB_OUTPUT + echo "is_maintainer="${{ contains(fromJSON(steps.decode.outputs.maintainers_list), env.USER_LOGIN) }} >> $GITHUB_OUTPUT + echo "is_partner="${{ contains(fromJSON(steps.decode.outputs.partners_list), env.USER_LOGIN) }} >> $GITHUB_OUTPUT diff --git a/.github/workflows/community-comment.yml b/.github/workflows/community-comment.yml index 87a7fd4e62a..a854300f1b4 100644 --- a/.github/workflows/community-comment.yml +++ b/.github/workflows/community-comment.yml @@ -12,7 +12,7 @@ jobs: steps: - name: Add community note to new Issues if: github.event_name == 'issues' - uses: peter-evans/create-or-update-comment@ca08ebd5dc95aa0cd97021e9708fcd6b87138c9b + uses: peter-evans/create-or-update-comment@c6c9a1a66007646a28c153e2a8580a5bad27bcfa with: issue-number: ${{ github.event.issue.number }} body: | @@ -30,7 +30,7 @@ jobs: * If this would be your first 
contribution, please review the [contribution guide](https://hashicorp.github.io/terraform-provider-aws/). - name: Add community note to new Pull Requests - uses: peter-evans/create-or-update-comment@ca08ebd5dc95aa0cd97021e9708fcd6b87138c9b + uses: peter-evans/create-or-update-comment@c6c9a1a66007646a28c153e2a8580a5bad27bcfa if: github.event_name == 'pull_request_target' with: issue-number: ${{ github.event.pull_request.number }} diff --git a/.github/workflows/copyright.yml b/.github/workflows/copyright.yml new file mode 100644 index 00000000000..e7b0e37e4ee --- /dev/null +++ b/.github/workflows/copyright.yml @@ -0,0 +1,47 @@ +name: Copyright Checks + +on: + push: + branches: + - main + - "release/**" + pull_request: + paths-ignore: + - .ci/** + - .github/** + - .teamcity/** + - .release/** + - infrastructure/repository/labels-service.tf + - .goreleaser.yml + +jobs: + go_generate: + name: add headers check + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 + with: + go-version-file: go.mod + # See also: https://github.com/actions/setup-go/issues/54 + - name: go env + run: | + echo "GOCACHE=$(go env GOCACHE)" >> $GITHUB_ENV + - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 + continue-on-error: true + timeout-minutes: 2 + with: + path: ${{ env.GOCACHE }} + key: ${{ runner.os }}-GOCACHE-${{ hashFiles('go.sum') }}-${{ hashFiles('internal/**') }} + - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 + continue-on-error: true + timeout-minutes: 2 + with: + path: ~/go/pkg/mod + key: ${{ runner.os }}-go-pkg-mod-${{ hashFiles('go.sum') }} + - run: go install github.com/hashicorp/copywrite@latest + - run: copywrite headers + - name: Check for Git Differences + run: | + git diff --compact-summary --exit-code || \ + (echo; echo "Unexpected difference in directories after adding copyright headers. 
Run 'copywrite headers' command and commit."; exit 1) diff --git a/.github/workflows/dependencies.yml b/.github/workflows/dependencies.yml index 0052f3237ef..21a15ae8a86 100644 --- a/.github/workflows/dependencies.yml +++ b/.github/workflows/dependencies.yml @@ -45,7 +45,7 @@ jobs: - run: echo ${{ steps.prc.outputs.comment-id }} - name: PR Comment if: steps.prc.outputs.comment-id == '' - uses: peter-evans/create-or-update-comment@ca08ebd5dc95aa0cd97021e9708fcd6b87138c9b + uses: peter-evans/create-or-update-comment@c6c9a1a66007646a28c153e2a8580a5bad27bcfa env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: @@ -67,8 +67,8 @@ jobs: name: go mod runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - name: go mod diff --git a/.github/workflows/documentation.yml b/.github/workflows/documentation.yml index 796bb63031c..b5dc6d3a645 100644 --- a/.github/workflows/documentation.yml +++ b/.github/workflows/documentation.yml @@ -17,7 +17,7 @@ jobs: env: UV_THREADPOOL_SIZE: 128 steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: YakDriver/md-check-links@9044b7162323b91150cef35355e1a3514564aa82 with: use-quiet-mode: 'yes' @@ -30,15 +30,15 @@ jobs: markdown-lint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: avto-dev/markdown-lint@04d43ee9191307b50935a753da3b775ab695eceb with: args: 'docs' misspell: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: 
actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 diff --git a/.github/workflows/examples.yml b/.github/workflows/examples.yml index 8c73c0d181e..337c7504eff 100644 --- a/.github/workflows/examples.yml +++ b/.github/workflows/examples.yml @@ -18,14 +18,14 @@ jobs: tflint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 with: path: ~/go/pkg/mod key: ${{ runner.os }}-go-pkg-mod-${{ hashFiles('go.sum') }} - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod @@ -67,14 +67,14 @@ jobs: env: TF_IN_AUTOMATION: "1" steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 with: path: ~/go/pkg/mod key: ${{ runner.os }}-go-pkg-mod-${{ hashFiles('go.sum') }} - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - name: go build diff --git a/.github/workflows/gen-teamcity.yml b/.github/workflows/gen-teamcity.yml index 35a4142668f..ea92fbed087 100644 --- a/.github/workflows/gen-teamcity.yml +++ b/.github/workflows/gen-teamcity.yml @@ -13,13 +13,14 @@ jobs: name: Validate TeamCity Configuration runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v3.5.2 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3 - uses: 
actions/setup-java@5ffc13f4174014e2d4d4572b3d74c3fa61aeb2c2 # v3.11.0 with: distribution: adopt - java-version: 11 + java-version: 17 cache: maven - name: Build TeamCity Configuration run: | cd .teamcity - mvn org.jetbrains.teamcity:teamcity-configs-maven-plugin:generate + make tools + make validate diff --git a/.github/workflows/generate_changelog.yml b/.github/workflows/generate_changelog.yml index 062de2b8ea6..5781909e9e0 100644 --- a/.github/workflows/generate_changelog.yml +++ b/.github/workflows/generate_changelog.yml @@ -8,7 +8,7 @@ jobs: if: github.event.pull_request.merged || github.event_name == 'workflow_dispatch' runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - run: cd .ci/tools && go install github.com/hashicorp/go-changelog/cmd/changelog-build diff --git a/.github/workflows/golangci-lint.yml b/.github/workflows/golangci-lint.yml index d9b2517b85c..7df5d264f7a 100644 --- a/.github/workflows/golangci-lint.yml +++ b/.github/workflows/golangci-lint.yml @@ -21,8 +21,8 @@ jobs: name: 1 of 2 runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - id: golangci-lint-version @@ -43,8 +43,8 @@ jobs: needs: [golangci-linta] runs-on: [custom, linux, xl] steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - id: golangci-lint-version @@ -60,3 +60,8 @@ jobs: with: version: "${{ 
steps.golangci-lint-version.outputs.version }}" args: --config .ci/.golangci2.yml + env: + # Trigger garbage collection more frequently to reduce the likelihood + # of OOM errors. + # ref: https://golangci-lint.run/usage/performance/#memory-usage + GOGC: "50" diff --git a/.github/workflows/goreleaser-ci.yml b/.github/workflows/goreleaser-ci.yml index 61a53c5d57a..7bd1de6c7fd 100644 --- a/.github/workflows/goreleaser-ci.yml +++ b/.github/workflows/goreleaser-ci.yml @@ -23,7 +23,7 @@ jobs: outputs: goreleaser: ${{ steps.filter.outputs.goreleaser }} steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: dorny/paths-filter@4512585405083f25c027a35db413c2b3b9006d50 id: filter with: @@ -37,8 +37,8 @@ jobs: if: ${{ needs.changes.outputs.goreleaser == 'true' }} runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -57,8 +57,8 @@ jobs: # Ref: https://github.com/hashicorp/terraform-provider-aws/issues/8988 runs-on: [custom, linux, small] steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 diff --git a/.github/workflows/issue-comment-created.yml b/.github/workflows/issue-comment-created.yml index fb50c3941b9..99a801b9cbf 100644 --- a/.github/workflows/issue-comment-created.yml +++ b/.github/workflows/issue-comment-created.yml @@ 
-10,12 +10,9 @@ jobs: secrets: inherit issue_comment_triage: + name: 'Remove stale and waiting-response Labels on Response' needs: community_check - runs-on: ubuntu-latest - steps: - - uses: actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0 - if: github.event_name == 'issue_comment' && needs.community_check.outputs.maintainer == 'false' - with: - labels: | - stale - waiting-response + if: github.event_name == 'issue_comment' && needs.community_check.outputs.maintainer == 'false' + uses: ./.github/workflows/reusable-add-or-remove-labels.yml + with: + remove: stale,waiting-response diff --git a/.github/workflows/issues.yml b/.github/workflows/issues.yml index 4c38574830f..211aa4081f9 100644 --- a/.github/workflows/issues.yml +++ b/.github/workflows/issues.yml @@ -9,7 +9,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Apply Issue Triage Labels - uses: github/issue-labeler@v3.1 + uses: github/issue-labeler@v3.2 with: repo-token: "${{ secrets.GITHUB_TOKEN }}" configuration-path: .github/labeler-issue-triage.yml diff --git a/.github/workflows/maintainer-edit.yml b/.github/workflows/maintainer-edit.yml index 222e1910a4a..2184a2d0d65 100644 --- a/.github/workflows/maintainer-edit.yml +++ b/.github/workflows/maintainer-edit.yml @@ -14,7 +14,7 @@ jobs: steps: - name: Comment if maintainers cannot edit if: (!github.event.pull_request.maintainer_can_modify && needs.community_check.outputs.maintainer == 'false') - uses: peter-evans/create-or-update-comment@ca08ebd5dc95aa0cd97021e9708fcd6b87138c9b + uses: peter-evans/create-or-update-comment@c6c9a1a66007646a28c153e2a8580a5bad27bcfa with: issue-number: ${{ github.event.pull_request.number }} body: | diff --git a/.github/workflows/milestone.yml b/.github/workflows/milestone.yml index fc1065e6e63..1b9b83205c2 100644 --- a/.github/workflows/milestone.yml +++ b/.github/workflows/milestone.yml @@ -7,7 +7,7 @@ jobs: if: github.event.pull_request.merged runs-on: ubuntu-latest steps: - - uses: 
actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: ref: ${{ github.event.pull_request.base.ref }} - id: get-current-milestone diff --git a/.github/workflows/mkdocs.yml b/.github/workflows/mkdocs.yml index 8a69d835ca4..deabaed92d0 100644 --- a/.github/workflows/mkdocs.yml +++ b/.github/workflows/mkdocs.yml @@ -11,7 +11,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout main - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - name: Deploy docs uses: mhausenblas/mkdocs-deploy-gh-pages@master diff --git a/.github/workflows/new-project-automation.yml b/.github/workflows/new-project-automation.yml new file mode 100644 index 00000000000..1b50a66ab74 --- /dev/null +++ b/.github/workflows/new-project-automation.yml @@ -0,0 +1,33 @@ +name: 'Team GitHub Projects (new) Automation' +on: + issues: + types: ["labeled", "milestoned", "opened"] + pull_request_target: + types: ["labeled", "opened", "ready_for_review"] +jobs: + community_check: + uses: ./.github/workflows/community-check.yml + secrets: inherit + maintainer_prs: + name: 'Add Maintainer PRs that are Ready for Review' + needs: community_check + if: github.event_name == 'pull_request_target' && !github.event.pull_request.draft && needs.community_check.outputs.maintainer == 'true' + uses: ./.github/workflows/reusable-team-project.yml + secrets: inherit + with: + status: ${{ vars.team_project_status_maintainer_pr }} + partner_prs: + name: 'Add Partner PRs that are Ready for Review' + if: github.event_name == 'pull_request_target' && !github.event.pull_request.draft && contains(github.event.pull_request.labels.*.name, 'partner') + uses: ./.github/workflows/reusable-team-project.yml + secrets: inherit + roadmap: + name: 'Add Roadmap Items' + if: github.event.label.name == 'roadmap' || github.event.*.milestone.title == 'Roadmap' + uses: 
./.github/workflows/reusable-team-project.yml + secrets: inherit + regressions: + name: 'Add Regressions' + if: github.event.label.name == 'regression' + uses: ./.github/workflows/reusable-team-project.yml + secrets: inherit diff --git a/.github/workflows/post_publish.yml b/.github/workflows/post_publish.yml index 03ab0c5ed58..bfe7a092345 100644 --- a/.github/workflows/post_publish.yml +++ b/.github/workflows/post_publish.yml @@ -52,7 +52,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Archive Release Cards - uses: breathingdust/github-project-archive@v1 + uses: breathingdust/github-project-archive@4be60b012447c51a5d60d988168f6360815d9ec9 with: github_done_column_id: 11513756 github_release_name: ${{ needs.on-success-or-workflow-dispatch.outputs.release-tag }} diff --git a/.github/workflows/project-partner-label.yml b/.github/workflows/project-partner-label.yml index b660540f20e..932297d23b7 100644 --- a/.github/workflows/project-partner-label.yml +++ b/.github/workflows/project-partner-label.yml @@ -4,7 +4,7 @@ on: types: - labeled env: - DESTINATION_COLUMN_ID: 'MDEzOlByb2plY3RDb2x1bW4xMjE0MzAxNw==' + DESTINATION_COLUMN_ID: 'PC_lAPOAAuecM4AXZPtzgC5Sak' GH_TOKEN: ${{ secrets.ORGSCOPED_GITHUB_TOKEN }} jobs: tag-created: diff --git a/.github/workflows/project-roadmap-label.yml b/.github/workflows/project-roadmap-label.yml index fe9420b4a70..4a446b8ed7b 100644 --- a/.github/workflows/project-roadmap-label.yml +++ b/.github/workflows/project-roadmap-label.yml @@ -3,18 +3,21 @@ on: issues: types: - labeled + - milestoned pull_request: types: - labeled env: DESTINATION_COLUMN_ID: 'MDEzOlByb2plY3RDb2x1bW4xMTUxMzUyOQ==' GH_TOKEN: ${{ secrets.ORGSCOPED_GITHUB_TOKEN }} + jobs: tag-created: - if: ${{ github.event.label.name == 'roadmap' }} + if: ${{ github.event.label.name == 'roadmap' || github.event.issue.milestone.title == 'Roadmap' }} runs-on: ubuntu-latest steps: - - name: AddIssueToWorkingBoard + - name: 'Add Roadmapped or Milestoned Issue to Working Board' + if: 
github.event_name == 'issues' run: | PROJCOLUMNS=$(gh api graphql -f query=' query { @@ -40,3 +43,31 @@ jobs: } }' fi + + - name: 'Add Roadmapped Pull Requests to Working Board' + if: github.event_name == 'pull_request' + run: | + PROJCOLUMNS=$(gh api graphql -f query=' + query { + repository(name: "${{ github.event.repository.name }}", owner: "${{ github.repository_owner }}") { + pullRequest(number: ${{ github.event.pull_request.number }}) { + projectCards { + nodes { + column { + name + id + } + } + } + } + } + }') + INPROJECT=$(echo $PROJCOLUMNS | jq '.data.repository.pullRequest.projectCards.nodes[].column | select(.id=="${{ env.DESTINATION_COLUMN_ID }}")') + if [ -z "$INPROJECT" ]; + then + gh api graphql -f query='mutation { + addProjectCard(input: {projectColumnId: "${{ env.DESTINATION_COLUMN_ID }}", contentId: "${{ github.event.pull_request.node_id }}"}) { + clientMutationId + } + }' + fi diff --git a/.github/workflows/project.yml b/.github/workflows/project.yml index 6fb599fcdfd..a265e4d478c 100644 --- a/.github/workflows/project.yml +++ b/.github/workflows/project.yml @@ -13,10 +13,33 @@ jobs: needs: community_check runs-on: ubuntu-latest steps: - - name: Move team PRs to Review column - uses: alex-page/github-project-automation-plus@v0.8.3 + - name: 'Move team PRs to Review column' if: github.event.pull_request.draft == false && needs.community_check.outputs.maintainer == 'true' - with: - project: AWS Provider Working Board - column: Open Maintainer PR - repo-token: ${{ secrets.ORGSCOPED_GITHUB_TOKEN}} + env: + DESTINATION_COLUMN_ID: 'MDEzOlByb2plY3RDb2x1bW4xMTUxMzYzOA==' + GH_TOKEN: ${{ secrets.ORGSCOPED_GITHUB_TOKEN }} + run: | + PROJCOLUMNS=$(gh api graphql -f query=' + query { + repository(name: "${{ github.event.repository.name }}", owner: "${{ github.repository_owner }}") { + pullRequest(number: ${{ github.event.pull_request.number }}) { + projectCards { + nodes { + column { + name + id + } + } + } + } + } + }') + INPROJECT=$(echo $PROJCOLUMNS | jq 
'.data.repository.pullRequest.projectCards.nodes[].column | select(.id=="${{ env.DESTINATION_COLUMN_ID }}")') + if [ -z "$INPROJECT" ]; + then + gh api graphql -f query='mutation { + addProjectCard(input: {projectColumnId: "${{ env.DESTINATION_COLUMN_ID }}", contentId: "${{ github.event.pull_request.node_id }}"}) { + clientMutationId + } + }' + fi diff --git a/.github/workflows/providerlint.yml b/.github/workflows/providerlint.yml index da9c29b130d..678f5b36948 100644 --- a/.github/workflows/providerlint.yml +++ b/.github/workflows/providerlint.yml @@ -17,8 +17,8 @@ jobs: providerlint: runs-on: [custom, linux, medium] steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - name: go env diff --git a/.github/workflows/pull_request_feed.yml b/.github/workflows/pull_request_feed.yml index e89d023fc1a..7eec09cd8e2 100644 --- a/.github/workflows/pull_request_feed.yml +++ b/.github/workflows/pull_request_feed.yml @@ -17,7 +17,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Notification PR Merged - uses: slackapi/slack-github-action@007b2c3c751a190b6f0f040e47ed024deaa72844 + uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 with: payload: | { @@ -36,7 +36,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Notification Maintainer PR Opened - uses: slackapi/slack-github-action@007b2c3c751a190b6f0f040e47ed024deaa72844 + uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 if: github.event.action == 'opened' && needs.community_check.outputs.maintainer == 'true' && github.actor != 'dependabot[bot]' with: payload: | @@ -56,7 +56,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Notification Partner PR Opened - uses: 
slackapi/slack-github-action@007b2c3c751a190b6f0f040e47ed024deaa72844 + uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 if: github.event.action == 'opened' && needs.community_check.outputs.partner == 'true' with: payload: | @@ -71,8 +71,3 @@ jobs: } ] } - - name: Apply Partner Label - if: github.event.action == 'opened' && needs.community_check.outputs.partner == 'true' - run: | - gh api repos/${{ github.repository_owner }}/${{ github.event.repository.name }}/issues/${{ github.event.pull_request.number }}/labels \ - -f "labels[]=partner" diff --git a/.github/workflows/pull_requests.yml b/.github/workflows/pull_requests.yml index e58aa3f77a5..bbdf2f3ab2b 100644 --- a/.github/workflows/pull_requests.yml +++ b/.github/workflows/pull_requests.yml @@ -11,7 +11,7 @@ jobs: Labeler: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - name: Apply Labels uses: actions/labeler@v4 with: @@ -20,15 +20,19 @@ jobs: NeedsTriageLabeler: needs: community_check - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - name: Apply needs-triage Label - uses: actions/labeler@v4 - if: github.event.action == 'opened' && needs.community_check.outputs.maintainer == 'false' - with: - configuration-path: .github/labeler-pr-needs-triage.yml - repo-token: ${{ secrets.GITHUB_TOKEN }} + name: 'Apply needs-triage to New Pull Requests' + if: github.event.action == 'opened' && needs.community_check.outputs.maintainer == 'false' + uses: ./.github/workflows/reusable-add-or-remove-labels.yml + with: + add: needs-triage + + PartnerLabeler: + needs: community_check + name: 'Apply Partner Label' + if: github.event.action == 'opened' && needs.community_check.outputs.partner == 'true' + uses: ./.github/workflows/reusable-add-or-remove-labels.yml + with: + add: partner SizeLabeler: runs-on: ubuntu-latest diff --git 
a/.github/workflows/regressions.yml b/.github/workflows/regressions.yml index 388038cc623..7a6f7be612a 100644 --- a/.github/workflows/regressions.yml +++ b/.github/workflows/regressions.yml @@ -8,38 +8,98 @@ on: - labeled jobs: slack-notification: - if: ${{ github.event.label.name == 'regression' }} + name: Slack Notifier + if: github.event.label.name == 'regression' runs-on: ubuntu-latest steps: - - name: Issues - if: ${{ github.event_name == 'issues' }} - uses: actions-ecosystem/action-slack-notifier@v1 + - name: Send Slack Notification + uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 + env: + SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }} + EVENT_URL: ${{ github.event.issue.html_url || github.event.pull_request.html_url }} + EVENT_TITLE: ${{ github.event.issue.title || github.event.pull_request.title }} with: - slack_token: ${{ secrets.SLACK_BOT_TOKEN }} - channel: ${{ secrets.SLACK_CHANNEL }} - color: red - verbose: false - message: | - :warning: The following issue has been labeled as a regression: - https://github.com/${{ github.repository }}/issues/${{ github.event.issue.number }} - - name: Pull Requests - if: ${{ github.event_name == 'pull_request' }} - uses: actions-ecosystem/action-slack-notifier@v1 - with: - slack_token: ${{ secrets.SLACK_BOT_TOKEN }} - channel: ${{ secrets.SLACK_CHANNEL }} - color: red - verbose: false - message: | - :warning: The following pull request has been labeled as a regression: - https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }} + channel-id: ${{ secrets.SLACK_CHANNEL }} + payload: | + { + "blocks": [ + { + "type": "section", + "text": { + "type": "mrkdwn", + "text": ":warning: The following has been labeled as a regression:" + } + }, + { + "type": "section", + "text": { + "type": "mrkdwn", + "text": "<${{ env.EVENT_URL }}|${{ env.EVENT_TITLE }}>" + } + } + ] + } + AddToWorkingBoard: - if: ${{ github.event.label.name == 'regression' }} + name: Add regression 
to Working Board + if: github.event.label.name == 'regression' + env: + DESTINATION_COLUMN_ID: 'MDEzOlByb2plY3RDb2x1bW4xMTUxMzUyOQ==' + GH_TOKEN: ${{ secrets.ORGSCOPED_GITHUB_TOKEN }} runs-on: ubuntu-latest steps: - - name: Add regressions to To Do column - uses: alex-page/github-project-automation-plus@v0.8.3 - with: - project: AWS Provider Working Board - column: To Do - repo-token: ${{ secrets.ORGSCOPED_GITHUB_TOKEN }} + - name: 'Add Regression Issues to To Do column' + if: github.event_name == 'issues' + run: | + PROJCOLUMNS=$(gh api graphql -f query=' + query { + repository(name: "${{ github.event.repository.name }}", owner: "${{ github.repository_owner }}") { + issue(number: ${{ github.event.issue.number }}) { + projectCards { + nodes { + column { + name + id + } + } + } + } + } + }') + INPROJECT=$(echo $PROJCOLUMNS | jq '.data.repository.issue.projectCards.nodes[].column | select(.id=="${{ env.DESTINATION_COLUMN_ID }}")') + if [ -z "$INPROJECT" ]; + then + gh api graphql -f query='mutation { + addProjectCard(input: {projectColumnId: "${{ env.DESTINATION_COLUMN_ID }}", contentId: "${{ github.event.issue.node_id }}"}) { + clientMutationId + } + }' + fi + + - name: 'Add Regression Pull Requests to To Do column' + if: github.event_name == 'pull_request' + run: | + PROJCOLUMNS=$(gh api graphql -f query=' + query { + repository(name: "${{ github.event.repository.name }}", owner: "${{ github.repository_owner }}") { + pullRequest(number: ${{ github.event.pull_request.number }}) { + projectCards { + nodes { + column { + name + id + } + } + } + } + } + }') + INPROJECT=$(echo $PROJCOLUMNS | jq '.data.repository.pullRequest.projectCards.nodes[].column | select(.id=="${{ env.DESTINATION_COLUMN_ID }}")') + if [ -z "$INPROJECT" ]; + then + gh api graphql -f query='mutation { + addProjectCard(input: {projectColumnId: "${{ env.DESTINATION_COLUMN_ID }}", contentId: "${{ github.event.pull_request.node_id }}"}) { + clientMutationId + } + }' + fi diff --git 
a/.github/workflows/release-tag.yml b/.github/workflows/release-tag.yml index 0c5febc60bf..1cbd32c3457 100644 --- a/.github/workflows/release-tag.yml +++ b/.github/workflows/release-tag.yml @@ -9,7 +9,7 @@ jobs: steps: - name: Notify Slack id: slack - uses: slackapi/slack-github-action@007b2c3c751a190b6f0f040e47ed024deaa72844 + uses: slackapi/slack-github-action@e28cf165c92ffef168d23c5c9000cffc8a25e117 with: payload: | { diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index ad3e4d816d6..383e018eb55 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -12,7 +12,7 @@ jobs: release-notes: runs-on: macos-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - name: Generate Release Notes @@ -48,7 +48,7 @@ jobs: outputs: tag: ${{ steps.highest-version-tag.outputs.tag }} steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: # Allow tag to be fetched when ref is a commit fetch-depth: 0 @@ -66,7 +66,7 @@ jobs: if: github.ref_name == needs.highest-version-tag.outputs.tag runs-on: macos-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 ref: main diff --git a/.github/workflows/resource-counts.yml b/.github/workflows/resource-counts.yml new file mode 100644 index 00000000000..a0a57ec1b10 --- /dev/null +++ b/.github/workflows/resource-counts.yml @@ -0,0 +1,40 @@ +name: Resource Counts +on: + workflow_dispatch: {} + schedule: + - cron: '0 9 * * WED' +permissions: + contents: write + pull-requests: write +jobs: + coverage: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3 + - run: | + touch main.tf + cat << EOF > main.tf + terraform { + 
required_providers { + aws = { + source = "hashicorp/aws" + } + } + } + EOF + - run: terraform init + - run: | + datasources=$(terraform providers schema -json | jq '.provider_schemas[] .data_source_schemas | length') + resources=$(terraform providers schema -json | jq '.provider_schemas[] .resource_schemas | length') + sed -r -i "s/There are currently ([0-9]+) resources and ([0-9]+)(.*)/There are currently $resources resources and $datasources\3/" website/docs/index.html.markdown + - run: | + rm main.tf + rm .terraform.lock.hcl + rm -rf .terraform + - name: Create Pull Request + uses: peter-evans/create-pull-request@153407881ec5c347639a548ade7d8ad1d6740e38 # v5.0.2 + with: + branch: "resource-counts" + commit-message: "docs: update resource counts" + title: "docs: update resource counts" + body: "This PR updates the resource/data source counts included on the provider documentation index page." diff --git a/.github/workflows/reusable-add-or-remove-labels.yml b/.github/workflows/reusable-add-or-remove-labels.yml new file mode 100644 index 00000000000..b517d72f540 --- /dev/null +++ b/.github/workflows/reusable-add-or-remove-labels.yml @@ -0,0 +1,42 @@ +name: 'Modify Issue or Pull Request Labels' + +on: + workflow_call: + inputs: + add: + description: '(Optional) The comma-separated label(s) to add to the Issue or Pull Request.' + required: false + type: string + remove: + description: '(Optional) The comma-separated label(s) to remove from the Issue or Pull Request.' + required: false + type: string + +env: + GH_TOKEN: ${{ github.token }} + ITEM_URL: ${{ github.event.issue.html_url || github.event.pull_request.html_url }} + +jobs: + modify-labels: + name: 'Modify Label(s)' + runs-on: ubuntu-latest + env: + ITEM_TYPE: 'issue' + steps: + - name: 'Determine if the Item is an Issue or Pull Request' + # The issue_comment trigger happens for issues and pull requests. 
+ # The best way to determine if an issue_comment trigger is related to an issue or pull request is to check + # whether or not github.event.issue.pull_request is empty. + if: github.event.pull_request || github.event.issue.pull_request + run: | + echo 'ITEM_TYPE=pr' >> $GITHUB_ENV + + - name: 'Add Label(s) to Issue or Pull Request' + if: inputs.add != '' + run: | + gh $ITEM_TYPE edit $ITEM_URL --add-label ${{ inputs.add }} + + - name: 'Remove Label(s) From Issue or Pull Request' + if: inputs.remove != '' + run: | + gh $ITEM_TYPE edit $ITEM_URL --remove-label ${{ inputs.remove }} diff --git a/.github/workflows/reusable-team-project.yml b/.github/workflows/reusable-team-project.yml new file mode 100644 index 00000000000..920d95c9040 --- /dev/null +++ b/.github/workflows/reusable-team-project.yml @@ -0,0 +1,54 @@ +name: "Team GitHub Project Automation" +on: + workflow_call: + inputs: + status: + description: "(Optional) The ID of the value of the Status field to set." + required: false + type: string +env: + ITEM_NODE_ID: ${{ github.event.issue.node_id || github.event.pull_request.node_id }} + GH_TOKEN: ${{ secrets.PROJECT_SCOPED_TOKEN }} +jobs: + add-to-project: + name: "Add Item to Project and Optionally Set Status" + runs-on: ubuntu-latest + steps: + - name: "Add Item to Project" + run: | + project_item_id="$(gh api graphql -f query=' + mutation ( + $node_id: ID! + $project_id: ID! + ){ + addProjectV2ItemById(input: {projectId: $project_id, contentId: $node_id}) { + item { + id + } + } + }' -f node_id=$ITEM_NODE_ID -f project_id='PVT_kwDOAAuecM4AF-7h' --jq '.data.addProjectV2ItemById.item.id')" + + echo 'PROJECT_ITEM_ID='$project_item_id >> $GITHUB_ENV + - name: "Set Status Field Value" + if: inputs.status != '' + run: | + gh api graphql -f query=' + mutation( + $field_id: ID! + $project_id: ID! + $project_item: ID! + $status_value: String! 
+ ){ + updateProjectV2ItemFieldValue(input: { + projectId: $project_id + itemId: $project_item + fieldId: $field_id + value: { + singleSelectOptionId: $status_value + } + }){ + projectV2Item { + id + } + } + }' -f field_id='PVTSSF_lADOAAuecM4AF-7hzgDcsQA' -f project_id='PVT_kwDOAAuecM4AF-7h' -f project_item=$PROJECT_ITEM_ID -f status_value=${{ inputs.status }} --silent diff --git a/.github/workflows/roadmap_milestone.yml b/.github/workflows/roadmap_milestone.yml deleted file mode 100644 index e9dfdc01b0c..00000000000 --- a/.github/workflows/roadmap_milestone.yml +++ /dev/null @@ -1,15 +0,0 @@ -name: If roadmap milestone is assigned, add to working board. -on: - issues: - types: [milestoned] -jobs: - AddRoadmapItemsToBoard: - runs-on: ubuntu-latest - steps: - - name: Move Roadmap Items To Working Board - uses: alex-page/github-project-automation-plus@v0.8.3 - if: github.event.issue.milestone.title == 'Roadmap' - with: - project: AWS Provider Working Board - column: To Do - repo-token: ${{ secrets.ORGSCOPED_GITHUB_TOKEN}} diff --git a/.github/workflows/semgrep-ci.yml b/.github/workflows/semgrep-ci.yml index 102759080c9..76e521ecc5f 100644 --- a/.github/workflows/semgrep-ci.yml +++ b/.github/workflows/semgrep-ci.yml @@ -23,7 +23,7 @@ jobs: container: image: returntocorp/semgrep steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: | semgrep --validate \ --config .ci/.semgrep.yml \ @@ -46,7 +46,7 @@ jobs: image: returntocorp/semgrep if: (github.action != 'dependabot[bot]') steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: semgrep --validate --config .ci/.semgrep-caps-aws-ec2.yml - run: semgrep $COMMON_PARAMS --config .ci/.semgrep-caps-aws-ec2.yml @@ -57,7 +57,7 @@ jobs: image: returntocorp/semgrep if: (github.action != 'dependabot[bot]') steps: - - uses: 
actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: semgrep --validate --config .ci/.semgrep-configs.yml - run: semgrep $COMMON_PARAMS --config .ci/.semgrep-configs.yml @@ -68,7 +68,7 @@ jobs: image: returntocorp/semgrep if: (github.action != 'dependabot[bot]') steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: semgrep --validate --config .ci/.semgrep-service-name0.yml - run: semgrep $COMMON_PARAMS --config .ci/.semgrep-service-name0.yml @@ -79,7 +79,7 @@ jobs: image: returntocorp/semgrep if: (github.action != 'dependabot[bot]') steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: semgrep --validate --config .ci/.semgrep-service-name1.yml - run: semgrep $COMMON_PARAMS --config .ci/.semgrep-service-name1.yml @@ -90,7 +90,7 @@ jobs: image: returntocorp/semgrep if: (github.action != 'dependabot[bot]') steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: semgrep --validate --config .ci/.semgrep-service-name2.yml - run: semgrep $COMMON_PARAMS --config .ci/.semgrep-service-name2.yml @@ -101,6 +101,6 @@ jobs: image: returntocorp/semgrep if: (github.action != 'dependabot[bot]') steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - run: semgrep --validate --config .ci/.semgrep-service-name3.yml - run: semgrep $COMMON_PARAMS --config .ci/.semgrep-service-name3.yml diff --git a/.github/workflows/skaff.yml b/.github/workflows/skaff.yml index bfa46932b84..47bbdc1948d 100644 --- a/.github/workflows/skaff.yml +++ b/.github/workflows/skaff.yml @@ -15,10 +15,10 @@ jobs: name: Compile skaff runs-on: ubuntu-latest steps: - - 
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: skaff/go.mod # See also: https://github.com/actions/setup-go/issues/54 diff --git a/.github/workflows/snapshot.yml b/.github/workflows/snapshot.yml index c19d0bbbdfb..926e80a2057 100644 --- a/.github/workflows/snapshot.yml +++ b/.github/workflows/snapshot.yml @@ -9,8 +9,8 @@ jobs: goreleaser: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 diff --git a/.github/workflows/team_slack_bot.yml b/.github/workflows/team_slack_bot.yml index 71b73634e45..eedbdd47995 100644 --- a/.github/workflows/team_slack_bot.yml +++ b/.github/workflows/team_slack_bot.yml @@ -11,7 +11,7 @@ jobs: if: github.repository_owner == 'hashicorp' steps: - name: open-pr-stats - uses: breathingdust/github-team-slackbot@f54b7983243d7f5a015b659f71d3c3dbe7b04001 + uses: breathingdust/github-team-slackbot@8f1053f9b472b94e6564ebc499a92136c48ace1f with: github_token: ${{ secrets.ORGSCOPED_GITHUB_TOKEN}} org: hashicorp diff --git a/.github/workflows/terraform_provider.yml b/.github/workflows/terraform_provider.yml index eed71bc2fa0..0adefc2a23e 100644 --- a/.github/workflows/terraform_provider.yml +++ b/.github/workflows/terraform_provider.yml @@ -4,7 +4,7 @@ on: push: branches: - main - - 'release/**' + - "release/**" pull_request: paths: - .github/workflows/terraform_provider.yml @@ -31,8 +31,8 @@ jobs: name: go mod download runs-on: ubuntu-latest steps: - - 
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -50,7 +50,7 @@ jobs: needs: [go_mod_download] runs-on: [custom, linux, medium] steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 continue-on-error: true id: cache-terraform-plugin-dir @@ -59,7 +59,7 @@ jobs: path: terraform-plugin-dir key: ${{ runner.os }}-terraform-plugin-dir-${{ hashFiles('go.sum') }}-${{ hashFiles('internal/**') }} - if: steps.cache-terraform-plugin-dir.outputs.cache-hit != 'true' || steps.cache-terraform-plugin-dir.outcome == 'failure' - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod # See also: https://github.com/actions/setup-go/issues/54 @@ -86,8 +86,8 @@ jobs: needs: [go_build] runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod # See also: https://github.com/actions/setup-go/issues/54 @@ -118,10 +118,10 @@ jobs: needs: [go_build] runs-on: [custom, linux, large] steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: 
actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod # See also: https://github.com/actions/setup-go/issues/54 @@ -147,8 +147,8 @@ jobs: needs: [go_build] runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod # See also: https://github.com/actions/setup-go/issues/54 @@ -175,10 +175,10 @@ jobs: needs: [go_build] runs-on: [custom, linux, medium] steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod # See also: https://github.com/actions/setup-go/issues/54 @@ -205,7 +205,7 @@ jobs: needs: [go_build] runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 continue-on-error: true id: cache-terraform-providers-schema @@ -240,8 +240,8 @@ jobs: needs: [terraform_providers_schema] runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -250,7 +250,7 @@ jobs: with: path: ~/go/pkg/mod key: ${{ runner.os }}-go-pkg-mod-${{ hashFiles('go.sum') }} - - run: cd .ci/tools && go install 
github.com/bflad/tfproviderdocs + - run: cd .ci/tools && go install github.com/YakDriver/tfproviderdocs - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 timeout-minutes: 2 with: @@ -265,13 +265,14 @@ jobs: -ignore-file-missing-resources aws_alb,aws_alb_listener,aws_alb_listener_certificate,aws_alb_listener_rule,aws_alb_target_group,aws_alb_target_group_attachment \ -provider-source registry.terraform.io/hashicorp/aws \ -providers-schema-json terraform-providers-schema/schema.json \ - -require-resource-subcategory + -require-resource-subcategory \ + -ignore-cdktf-missing-files markdown-lint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: avto-dev/markdown-lint@04d43ee9191307b50935a753da3b775ab695eceb with: - args: '.' - ignore: './docs ./website/docs ./CHANGELOG.md ./internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/docs' + args: "." + ignore: "./docs ./website/docs ./CHANGELOG.md ./internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/docs" diff --git a/.github/workflows/tfsdk2fw.yml b/.github/workflows/tfsdk2fw.yml new file mode 100644 index 00000000000..20ae6841cfb --- /dev/null +++ b/.github/workflows/tfsdk2fw.yml @@ -0,0 +1,31 @@ +name: tfsdk2fw Checks + +on: + push: + branches: + - main + - 'release/**' + pull_request: + paths: + - go.mod + - go.sum + - internal/generate/common/** + - internal/provider/** + - tools/tfsdk2fw/** + +jobs: + tfsdk2fw_go_mod: + name: go mod + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 + with: + go-version-file: tools/tfsdk2fw/go.mod + - name: go mod + run: | + cd tools/tfsdk2fw/ + echo "==> Checking tfsdk2fw with go mod tidy..." 
+ go mod tidy + git diff --exit-code -- go.mod go.sum || \ + (echo; echo "Unexpected difference in tools/tfsdk2fw/go.mod or tools/tfsdk2fw/go.sum files. Run 'go mod tidy' command or revert any go.mod/go.sum changes and commit."; exit 1) diff --git a/.github/workflows/website.yml b/.github/workflows/website.yml index fdac3804404..f62746a3acb 100644 --- a/.github/workflows/website.yml +++ b/.github/workflows/website.yml @@ -20,7 +20,7 @@ jobs: markdown-link-check-a-h-markdown: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: YakDriver/md-check-links@9044b7162323b91150cef35355e1a3514564aa82 name: markdown-link-check website/docs/**/[a-h].markdown with: @@ -36,7 +36,7 @@ jobs: markdown-link-check-i-z-markdown: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: YakDriver/md-check-links@9044b7162323b91150cef35355e1a3514564aa82 name: markdown-link-check website/docs/**/[i-z].markdown with: @@ -52,7 +52,7 @@ jobs: markdown-link-check-md: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: YakDriver/md-check-links@9044b7162323b91150cef35355e1a3514564aa82 name: markdown-link-check website/docs/**/*.md with: @@ -67,7 +67,7 @@ jobs: markdown-lint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - uses: avto-dev/markdown-lint@04d43ee9191307b50935a753da3b775ab695eceb with: args: "website/docs" @@ -75,8 +75,8 @@ jobs: misspell: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: 
actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: .ci/tools/go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -91,8 +91,8 @@ jobs: terrafmt: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: .ci/tools/go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -107,10 +107,10 @@ jobs: tflint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 with: fetch-depth: 0 - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: .ci/tools/go.mod - uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 @@ -140,10 +140,6 @@ jobs: # being carried over from test cases. 
shared_rules=( "--enable-rule=terraform_comment_syntax" - "--disable-rule=aws_appsync_function_invalid_request_mapping_template" - "--disable-rule=aws_appsync_function_invalid_response_mapping_template" - "--disable-rule=aws_appsync_resolver_invalid_request_template" - "--disable-rule=aws_appsync_resolver_invalid_response_template" "--disable-rule=aws_cloudwatch_event_target_invalid_arn" "--disable-rule=aws_db_instance_default_parameter_group" "--disable-rule=aws_elasticache_cluster_default_parameter_group" diff --git a/.github/workflows/workflow-lint.yml b/.github/workflows/workflow-lint.yml index 5da89ebb685..deb82720ee4 100644 --- a/.github/workflows/workflow-lint.yml +++ b/.github/workflows/workflow-lint.yml @@ -12,8 +12,8 @@ jobs: actionlint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab - - uses: actions/setup-go@4d34df0c2316fe8122ab82dc22947d607c0c91f9 + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 + - uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 with: go-version-file: .ci/tools/go.mod - name: Install actionlint diff --git a/.github/workflows/yamllint.yml b/.github/workflows/yamllint.yml index f47a858e83a..2714d7f34dc 100644 --- a/.github/workflows/yamllint.yml +++ b/.github/workflows/yamllint.yml @@ -12,7 +12,7 @@ jobs: yamllint: runs-on: ubuntu-latest steps: - - uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab + - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 - name: Run yamllint uses: ibiqlik/action-yamllint@v3 with: diff --git a/.go-version b/.go-version index 1b92e588b79..7bf9455f08c 100644 --- a/.go-version +++ b/.go-version @@ -1 +1 @@ -1.19.3 +1.20.5 diff --git a/.markdownlint.yml b/.markdownlint.yml index e34dd1f6963..03e409bab2a 100644 --- a/.markdownlint.yml +++ b/.markdownlint.yml @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + # Configuration for markdownlint # https://github.com/DavidAnson/markdownlint#configuration diff --git a/.teamcity/.java-version b/.teamcity/.java-version index 5f3c84a4912..0b495a50c11 100644 --- a/.teamcity/.java-version +++ b/.teamcity/.java-version @@ -1 +1 @@ -corretto64-11.0.13 +corretto64-17.0.7 diff --git a/.teamcity/GNUmakefile b/.teamcity/GNUmakefile new file mode 100644 index 00000000000..667ef9283a4 --- /dev/null +++ b/.teamcity/GNUmakefile @@ -0,0 +1,7 @@ +default: tools + +tools: + mvn -U dependency:sources + +validate: + mvn teamcity-configs:generate diff --git a/.teamcity/components/generated/services_all.kt b/.teamcity/components/generated/services_all.kt index 0cc372c79a1..8781ce3dbba 100644 --- a/.teamcity/components/generated/services_all.kt +++ b/.teamcity/components/generated/services_all.kt @@ -84,6 +84,7 @@ val services = mapOf( "emrserverless" to ServiceSpec("EMR Serverless"), "events" to ServiceSpec("EventBridge"), "evidently" to ServiceSpec("CloudWatch Evidently"), + "finspace" to ServiceSpec("FinSpace"), "firehose" to ServiceSpec("Kinesis Firehose"), "fis" to ServiceSpec("FIS (Fault Injection Simulator)"), "fms" to ServiceSpec("FMS (Firewall Manager)", regionOverride = "us-east-1"), @@ -191,6 +192,7 @@ val services = mapOf( "timestreamwrite" to ServiceSpec("Timestream Write"), "transcribe" to ServiceSpec("Transcribe"), "transfer" to ServiceSpec("Transfer Family", vpcLock = true), + "verifiedpermissions" to ServiceSpec("Verified Permissions"), "vpclattice" to ServiceSpec("VPC Lattice"), "waf" to ServiceSpec("WAF Classic", regionOverride = "us-east-1"), "wafregional" to ServiceSpec("WAF Classic Regional"), diff --git a/.teamcity/components/service_build_config.kt b/.teamcity/components/service_build_config.kt index de209418856..bb65754bd1d 100644 --- a/.teamcity/components/service_build_config.kt +++ b/.teamcity/components/service_build_config.kt @@ -1,8 +1,11 @@ -import 
jetbrains.buildServer.configs.kotlin.v2019_2.AbsoluteId -import jetbrains.buildServer.configs.kotlin.v2019_2.BuildType -import jetbrains.buildServer.configs.kotlin.v2019_2.DslContext -import jetbrains.buildServer.configs.kotlin.v2019_2.ParameterDisplay -import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script +import jetbrains.buildServer.configs.kotlin.AbsoluteId +import jetbrains.buildServer.configs.kotlin.BuildSteps +import jetbrains.buildServer.configs.kotlin.BuildType +import jetbrains.buildServer.configs.kotlin.DslContext +import jetbrains.buildServer.configs.kotlin.ParameterDisplay +import jetbrains.buildServer.configs.kotlin.buildFeatures.notifications +import jetbrains.buildServer.configs.kotlin.buildSteps.ScriptBuildStep +import jetbrains.buildServer.configs.kotlin.buildSteps.script import java.io.File data class ServiceSpec( @@ -13,11 +16,16 @@ data class ServiceSpec( val regionOverride: String? = null, ) +data class Notifier( + val connectionID: String, + val destination: String, +) + class Service(name: String, spec: ServiceSpec) { val packageName = name val spec = spec - fun buildType(): BuildType { + fun buildType(notifier: Notifier?): BuildType { return BuildType { id = DslContext.createId("ServiceTest_$packageName") @@ -46,10 +54,7 @@ class Service(name: String, spec: ServiceSpec) { val serviceDir = "./internal/service/$packageName" steps { - script { - name = "Setup GOENV" - scriptContent = File("./scripts/setup_goenv.sh").readText() - } + ConfigureGoEnv() script { name = "Compile Test Binary" workingDir = serviceDir @@ -67,14 +72,42 @@ class Service(name: String, spec: ServiceSpec) { } } - if (spec.vpcLock) { - features { + features { + if (spec.vpcLock) { feature { type = "JetBrains.SharedResources" param("locks-param", "${DslContext.getParameter("aws_account.vpc_lock_id")} readLock") } } + + if (notifier != null) { + notifications { + notifierSettings = slackNotifier { + connection = notifier.connectionID + sendTo = 
notifier.destination + messageFormat = verboseMessageFormat { + addBranch = true + addStatusText = true + } + } + buildStarted = false // With the number of tests, this would be too noisy + buildFailedToStart = true + buildFailed = false // With the current number of failing tests, this would be too noisy + buildFinishedSuccessfully = false // With the number of tests, this would be too noisy + firstSuccessAfterFailure = true + buildProbablyHanging = false + // Ideally we'd have this enabled, but we have too many failures and this would get very noisy + // firstBuildErrorOccurs = true + } + } } } } } +
+fun BuildSteps.ConfigureGoEnv() { + step(ScriptBuildStep { + name = "Configure GOENV" + scriptContent = File("./scripts/configure_goenv.sh").readText() + }) +} diff --git a/.teamcity/pom.xml b/.teamcity/pom.xml index 75834555f55..ba5c9571263 100644 --- a/.teamcity/pom.xml +++ b/.teamcity/pom.xml @@ -1,9 +1,9 @@ 4.0.0 - AWS TeamCity Config DSL Script - TeamCity-Config-DSL-Script - TeamCity-Config-DSL-Script + Terraform-Provider-AWS Config DSL Script + TerraformProviderAWS + TerraformProviderAWS 1.0-SNAPSHOT @@ -22,7 +22,7 @@ teamcity-server - https://ci-oss.hashicorp.engineering/app/dsl-plugins-repository + https://teamcity.jetbrains.com/app/dsl-plugins-repository true @@ -70,10 +70,12 @@ kotlin target/generated-configs - + + --> @@ -82,13 +84,13 @@ org.jetbrains.teamcity - configs-dsl-kotlin + configs-dsl-kotlin-latest ${teamcity.dsl.version} compile org.jetbrains.teamcity - configs-dsl-kotlin-plugins + configs-dsl-kotlin-plugins-latest 1.0-SNAPSHOT pom compile diff --git a/.teamcity/scripts/configure_goenv.sh b/.teamcity/scripts/configure_goenv.sh new file mode 100644 index 00000000000..8d406c61c3e --- /dev/null +++ b/.teamcity/scripts/configure_goenv.sh @@ -0,0 +1,6 @@ +#!/usr/bin/env bash + +set -euo pipefail + +go_version="$(goenv local)" +goenv install --skip-existing "${go_version}" && goenv rehash diff --git a/.teamcity/scripts/setup_goenv.sh
b/.teamcity/scripts/setup_goenv.sh deleted file mode 100644 index 82de04b051d..00000000000 --- a/.teamcity/scripts/setup_goenv.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env bash - -set -euo pipefail - -pushd "${GOENV_ROOT}" -# Make sure we're using the main `goenv` -if ! git remote | grep -q syndbg; then - printf '\nInstalling syndbg/goenv\n' - git remote add -f syndbg https://github.com/syndbg/goenv.git -fi -printf '\nUpdating goenv to %s...\n' "${GOENV_TOOL_VERSION}" -git reset --hard syndbg/"${GOENV_TOOL_VERSION}" -popd - -go_version="$(goenv local)" -goenv install --skip-existing "${go_version}" && goenv rehash diff --git a/.teamcity/settings.kts b/.teamcity/settings.kts index 41ada09fa62..d7c3f8f530f 100644 --- a/.teamcity/settings.kts +++ b/.teamcity/settings.kts @@ -1,14 +1,15 @@ -import jetbrains.buildServer.configs.kotlin.v2019_2.* // ktlint-disable no-wildcard-imports -import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.golang -import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script -import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.schedule +import jetbrains.buildServer.configs.kotlin.* // ktlint-disable no-wildcard-imports +import jetbrains.buildServer.configs.kotlin.buildFeatures.golang +import jetbrains.buildServer.configs.kotlin.buildFeatures.notifications +import jetbrains.buildServer.configs.kotlin.buildSteps.script +import jetbrains.buildServer.configs.kotlin.triggers.schedule import java.io.File import java.time.Duration import java.time.LocalTime import java.time.ZoneId import java.time.format.DateTimeFormatter -version = "2020.2" +version = "2023.05" val defaultRegion = DslContext.getParameter("default_region") val alternateRegion = DslContext.getParameter("alternate_region", "") @@ -74,9 +75,9 @@ project { text("env.EC2_SECURITY_GROUP_RULES_PER_GROUP_LIMIT", securityGroupRulesPerGroup) } - val branchName = DslContext.getParameter("branch_name", "") - if (branchName != "") { - text("BRANCH_NAME", 
branchName, display = ParameterDisplay.HIDDEN) + val branchRef = DslContext.getParameter("branch_name", "") + if (branchRef != "") { + text("BRANCH_NAME", branchRef, display = ParameterDisplay.HIDDEN) } if (tfAccAssumeRoleArn != "") { @@ -111,10 +112,15 @@ project { // Define this parameter even when not set to allow individual builds to set the value text("env.TF_ACC_TERRAFORM_VERSION", DslContext.getParameter("terraform_version", "")) - // These should be overridden in the base AWS project - param("env.GOPATH", "") - param("env.GO111MODULE", "") // No longer needed as of Go 1.16 - param("env.GO_VERSION", "") // We're using `goenv` and `.go-version` + // These overrides exist because of the inherited dependency in the existing project structure and can + // be removed when this is moved outside of it + val isOnPrem = DslContext.getParameter("is_on_prem", "true").equals("true", ignoreCase = true) + if (isOnPrem) { + // These should be overridden in the base AWS project + param("env.GOPATH", "") + param("env.GO111MODULE", "") // No longer needed as of Go 1.16 + param("env.GO_VERSION", "") // We're using `goenv` and `.go-version` + } } subProject(Services) @@ -134,11 +140,9 @@ object PullRequest : BuildType({ executionTimeoutMin = Duration.ofHours(defaultPullRequestTimeoutHours).toMinutes().toInt() } + val accTestRoleARN = DslContext.getParameter("aws_account.role_arn", "") steps { - script { - name = "Setup GOENV" - scriptContent = File("./scripts/setup_goenv.sh").readText() - } + ConfigureGoEnv() script { name = "Run Tests" scriptContent = File("./scripts/pullrequest_tests/tests.sh").readText() @@ -157,6 +161,34 @@ object PullRequest : BuildType({ param("locks-param", "$alternateAccountLockId readLock") } } + + val notifierConnectionID = DslContext.getParameter("notifier.id", "") + val notifier: Notifier?
= if (notifierConnectionID != "") { + Notifier(notifierConnectionID, DslContext.getParameter("notifier.destination")) + } else { + null + } + + if (notifier != null) { + notifications { + notifierSettings = slackNotifier { + connection = notifier.connectionID + sendTo = notifier.destination + messageFormat = verboseMessageFormat { + addBranch = true + addStatusText = true + } + } + branchFilter = "+:*" + + buildStarted = true + buildFailedToStart = true + buildFailed = true + buildFinishedSuccessfully = true + firstBuildErrorOccurs = true + buildProbablyHanging = false + } + } } }) @@ -169,6 +201,13 @@ object FullBuild : BuildType({ showDependenciesChanges = true } + val notifierConnectionID = DslContext.getParameter("notifier.id", "") + val notifier: Notifier? = if (notifierConnectionID != "") { + Notifier(notifierConnectionID, DslContext.getParameter("notifier.destination")) + } else { + null + } + dependencies { snapshot(SetUp) { reuseBuilds = ReuseBuilds.NO @@ -179,7 +218,7 @@ object FullBuild : BuildType({ val testType = DslContext.getParameter("test_type", "") val serviceList = if (testType == "orgacct") orgacctServices else services serviceList.forEach { (serviceName, displayName) -> - snapshot(Service(serviceName, displayName).buildType()) { + snapshot(Service(serviceName, displayName).buildType(notifier)) { reuseBuilds = ReuseBuilds.NO onDependencyFailure = FailureAction.ADD_PROBLEM onDependencyCancel = FailureAction.IGNORE @@ -203,20 +242,24 @@ object FullBuild : BuildType({ } else { "Sun-Thu" } - triggers { - schedule { - schedulingPolicy = cron { - dayOfWeek = triggerDay - val triggerHM = LocalTime.from(triggerTime) - hours = triggerHM.getHour().toString() - minutes = triggerHM.getMinute().toString() - timezone = ZoneId.from(triggerTime).toString() + + val enableTestTriggersGlobally = DslContext.getParameter("enable_test_triggers_globally", "true").equals("true", ignoreCase = true) + if (enableTestTriggersGlobally) { + triggers { + schedule { + 
schedulingPolicy = cron { + dayOfWeek = triggerDay + val triggerHM = LocalTime.from(triggerTime) + hours = triggerHM.getHour().toString() + minutes = triggerHM.getMinute().toString() + timezone = ZoneId.from(triggerTime).toString() + } + branchFilter = "" // For a Composite build, the branch filter must be empty + triggerBuild = always() + withPendingChangesOnly = false + enableQueueOptimization = false + enforceCleanCheckoutForDependencies = true } - branchFilter = "" // For a Composite build, the branch filter must be empty - triggerBuild = always() - withPendingChangesOnly = false - enableQueueOptimization = false - enforceCleanCheckoutForDependencies = true } } } @@ -233,6 +276,21 @@ object FullBuild : BuildType({ param("locks-param", "$alternateAccountLockId readLock") } } + + if (notifier != null) { + notifications { + notifierSettings = slackNotifier { + connection = notifier.connectionID + sendTo = notifier.destination + messageFormat = simpleMessageFormat() + } + buildStarted = true + buildFailedToStart = true + buildFailed = true + buildFinishedSuccessfully = true + firstBuildErrorOccurs = true + } + } } }) @@ -246,10 +304,7 @@ object SetUp : BuildType({ } steps { - script { - name = "Setup GOENV" - scriptContent = File("./scripts/setup_goenv.sh").readText() - } + ConfigureGoEnv() script { name = "Run provider unit tests" scriptContent = File("./scripts/provider_tests/unit_tests.sh").readText() @@ -269,6 +324,34 @@ object SetUp : BuildType({ golang { testFormat = "json" } + + val notifierConnectionID = DslContext.getParameter("notifier.id", "") + val notifier: Notifier? 
= if (notifierConnectionID != "") { + Notifier(notifierConnectionID, DslContext.getParameter("notifier.destination")) + } else { + null + } + + if (notifier != null) { + notifications { + notifierSettings = slackNotifier { + connection = notifier.connectionID + sendTo = notifier.destination + messageFormat = verboseMessageFormat { + addBranch = true + addStatusText = true + } + } + buildStarted = false // With the number of tests, this would be too noisy + buildFailedToStart = true + buildFailed = true + buildFinishedSuccessfully = false // With the number of tests, this would be too noisy + firstSuccessAfterFailure = true + buildProbablyHanging = false + // Ideally we'd have this enabled, but we have too many failures and this would get very noisy + // firstBuildErrorOccurs = true + } + } } }) @@ -277,6 +360,13 @@ object Services : Project({ name = "Services" + val notifierConnectionID = DslContext.getParameter("notifier.id", "") + val notifier: Notifier? = if (notifierConnectionID != "") { + Notifier(notifierConnectionID, DslContext.getParameter("notifier.destination")) + } else { + null + } + val buildChain = sequential { buildType(SetUp) @@ -284,7 +374,7 @@ object Services : Project({ val serviceList = if (testType == "orgacct") orgacctServices else services parallel(options = { onDependencyFailure = FailureAction.IGNORE }) { serviceList.forEach { (serviceName, displayName) -> - buildType(Service(serviceName, displayName).buildType()) + buildType(Service(serviceName, displayName).buildType(notifier)) } } @@ -307,11 +397,7 @@ object CleanUp : BuildType({ } steps { - script { - name = "Setup GOENV" - enabled = false - scriptContent = File("./scripts/setup_goenv.sh").readText() - } + ConfigureGoEnv() script { name = "Post-Sweeper" enabled = false @@ -330,10 +416,7 @@ object Sweeper : BuildType({ } steps { - script { - name = "Setup GOENV" - scriptContent = File("./scripts/setup_goenv.sh").readText() - } + ConfigureGoEnv() script { name = "Sweeper" scriptContent = 
File("./scripts/sweeper.sh").readText() @@ -344,19 +427,22 @@ object Sweeper : BuildType({ if (triggerTimeRaw != "") { val formatter = DateTimeFormatter.ofPattern("HH':'mm' 'VV") val triggerTime = formatter.parse(triggerTimeRaw) - triggers { - schedule { - schedulingPolicy = daily { - val triggerHM = LocalTime.from(triggerTime) - hour = triggerHM.getHour() - minute = triggerHM.getMinute() - timezone = ZoneId.from(triggerTime).toString() + val enableTestTriggersGlobally = DslContext.getParameter("enable_test_triggers_globally", "true").equals("true", ignoreCase = true) + if (enableTestTriggersGlobally) { + triggers { + schedule { + schedulingPolicy = daily { + val triggerHM = LocalTime.from(triggerTime) + hour = triggerHM.getHour() + minute = triggerHM.getMinute() + timezone = ZoneId.from(triggerTime).toString() + } + branchFilter = "+:refs/heads/main" + triggerBuild = always() + withPendingChangesOnly = false + enableQueueOptimization = false + enforceCleanCheckoutForDependencies = true } - branchFilter = "+:refs/heads/main" - triggerBuild = always() - withPendingChangesOnly = false - enableQueueOptimization = false - enforceCleanCheckoutForDependencies = true } } } @@ -366,5 +452,31 @@ object Sweeper : BuildType({ type = "JetBrains.SharedResources" param("locks-param", "${DslContext.getParameter("aws_account.lock_id")} writeLock") } + + val notifierConnectionID = DslContext.getParameter("notifier.id", "") + val notifier: Notifier? 
= if (notifierConnectionID != "") { + Notifier(notifierConnectionID, DslContext.getParameter("notifier.destination")) + } else { + null + } + + if (notifier != null) { + val branchRef = DslContext.getParameter("branch_name", "") + notifications { + notifierSettings = slackNotifier { + connection = notifier.connectionID + sendTo = notifier.destination + messageFormat = verboseMessageFormat { + addBranch = branchRef != "refs/heads/main" + addStatusText = true + } + } + buildStarted = true + buildFailedToStart = true + buildFailed = true + buildFinishedSuccessfully = true + firstBuildErrorOccurs = true + } + } } }) diff --git a/CHANGELOG.md b/CHANGELOG.md index a6183407f40..0de1cb0b802 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,8 +1,321 @@ -## 5.0.0 (Unreleased) +## 5.8.0 (Unreleased) + +ENHANCEMENTS: + +* data-source/aws_ssm_parameter: Add `insecure_value` attribute ([#30817](https://github.com/hashicorp/terraform-provider-aws/issues/30817)) +* resource/aws_iam_virtual_mfa_device: Add `enable_date` and `user_name` attributes ([#32462](https://github.com/hashicorp/terraform-provider-aws/issues/32462)) + +BUG FIXES: + +* resource/aws_config_config_rule: Prevent crash on nil describe output ([#32439](https://github.com/hashicorp/terraform-provider-aws/issues/32439)) +* resource/aws_mq_broker: default `replication_user` to `false` ([#32454](https://github.com/hashicorp/terraform-provider-aws/issues/32454)) +* resource/aws_quicksight_analysis: Fix exception thrown when specifying `definition.sheets.visuals.bar_chart_visual.chart_configuration.category_axis.scrollbar_options.visible_range` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_analysis: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_options.selected_field_options.visibility` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_analysis: 
Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_wells.pivot_table_aggregated_field_wells.rows` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_dashboard: Fix exception thrown when specifying `definition.sheets.visuals.bar_chart_visual.chart_configuration.category_axis.scrollbar_options.visible_range` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_dashboard: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_options.selected_field_options.visibility` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_dashboard: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_wells.pivot_table_aggregated_field_wells.rows` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_template: Fix exception thrown when specifying `definition.sheets.visuals.bar_chart_visual.chart_configuration.category_axis.scrollbar_options.visible_range` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_template: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_options.selected_field_options.visibility` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) +* resource/aws_quicksight_template: Fix exception thrown when specifying `definition.sheets.visuals.pivot_table_visual.chart_configuration.field_wells.pivot_table_aggregated_field_wells.rows` ([#32464](https://github.com/hashicorp/terraform-provider-aws/issues/32464)) + +## 5.7.0 (July 7, 2023) + +FEATURES: + +* **New Data Source:** `aws_opensearchserverless_security_config` 
([#32321](https://github.com/hashicorp/terraform-provider-aws/issues/32321)) +* **New Data Source:** `aws_opensearchserverless_security_policy` ([#32226](https://github.com/hashicorp/terraform-provider-aws/issues/32226)) +* **New Data Source:** `aws_opensearchserverless_vpc_endpoint` ([#32276](https://github.com/hashicorp/terraform-provider-aws/issues/32276)) +* **New Resource:** `aws_cleanrooms_collaboration` ([#31680](https://github.com/hashicorp/terraform-provider-aws/issues/31680)) + +ENHANCEMENTS: + +* resource/aws_keyspaces_table: Add `client_side_timestamps` configuration block ([#32339](https://github.com/hashicorp/terraform-provider-aws/issues/32339)) +* resource/aws_glue_catalog_database: Add `target_database.region` argument ([#32283](https://github.com/hashicorp/terraform-provider-aws/issues/32283)) +* resource/aws_glue_crawler: Add `iceberg_target` configuration block ([#32332](https://github.com/hashicorp/terraform-provider-aws/issues/32332)) +* resource/aws_internetmonitor_monitor: Add `health_events_config` configuration block ([#32343](https://github.com/hashicorp/terraform-provider-aws/issues/32343)) +* resource/aws_lambda_function: Support `code_signing_config_arn` in the `ap-east-1` AWS Region ([#32327](https://github.com/hashicorp/terraform-provider-aws/issues/32327)) +* resource/aws_qldb_stream: Add configurable Create and Delete timeouts ([#32345](https://github.com/hashicorp/terraform-provider-aws/issues/32345)) +* resource/aws_service_discovery_private_dns_namespace: Allow `description` to be updated in-place ([#32342](https://github.com/hashicorp/terraform-provider-aws/issues/32342)) +* resource/aws_service_discovery_public_dns_namespace: Allow `description` to be updated in-place ([#32342](https://github.com/hashicorp/terraform-provider-aws/issues/32342)) +* resource/aws_timestreamwrite_table: Add `schema` configuration block ([#32354](https://github.com/hashicorp/terraform-provider-aws/issues/32354)) + +BUG FIXES: + +* provider:
Correctly handle `forbidden_account_ids` ([#32352](https://github.com/hashicorp/terraform-provider-aws/issues/32352)) +* resource/aws_kms_external_key: Correctly remove all tags ([#32371](https://github.com/hashicorp/terraform-provider-aws/issues/32371)) +* resource/aws_kms_key: Correctly remove all tags ([#32371](https://github.com/hashicorp/terraform-provider-aws/issues/32371)) +* resource/aws_kms_replica_external_key: Correctly remove all tags ([#32371](https://github.com/hashicorp/terraform-provider-aws/issues/32371)) +* resource/aws_kms_replica_key: Correctly remove all tags ([#32371](https://github.com/hashicorp/terraform-provider-aws/issues/32371)) +* resource/aws_secretsmanager_secret_rotation: Fix `InvalidParameterException: You cannot specify both rotation frequency and schedule expression together` errors on resource Update ([#31915](https://github.com/hashicorp/terraform-provider-aws/issues/31915)) +* resource/aws_ssm_parameter: Skip Update if only `overwrite` parameter changes ([#32372](https://github.com/hashicorp/terraform-provider-aws/issues/32372)) +* resource/aws_vpc_endpoint: Fix `InvalidParameter: PrivateDnsOnlyForInboundResolverEndpoint not supported for this service` errors creating S3 _Interface_ VPC endpoints ([#32355](https://github.com/hashicorp/terraform-provider-aws/issues/32355)) + +## 5.6.2 (June 30, 2023) + +BUG FIXES: + +* resource/aws_s3_bucket: Fix `InvalidArgument: Invalid attribute name specified` errors when listing S3 Bucket objects, caused by an [AWS SDK for Go regression](https://github.com/aws/aws-sdk-go/issues/4897) ([#32317](https://github.com/hashicorp/terraform-provider-aws/issues/32317)) + +## 5.6.1 (June 30, 2023) + +BUG FIXES: + +* provider: Prevent resource recreation if `tags` or `tags_all` are updated ([#32297](https://github.com/hashicorp/terraform-provider-aws/issues/32297)) + +## 5.6.0 (June 29, 2023) + +FEATURES: + +* **New Data Source:** `aws_opensearchserverless_access_policy` 
([#32231](https://github.com/hashicorp/terraform-provider-aws/issues/32231)) +* **New Data Source:** `aws_opensearchserverless_collection` ([#32247](https://github.com/hashicorp/terraform-provider-aws/issues/32247)) +* **New Data Source:** `aws_sfn_alias` ([#32176](https://github.com/hashicorp/terraform-provider-aws/issues/32176)) +* **New Data Source:** `aws_sfn_state_machine_versions` ([#32176](https://github.com/hashicorp/terraform-provider-aws/issues/32176)) +* **New Resource:** `aws_ec2_instance_connect_endpoint` ([#31858](https://github.com/hashicorp/terraform-provider-aws/issues/31858)) +* **New Resource:** `aws_sfn_alias` ([#32176](https://github.com/hashicorp/terraform-provider-aws/issues/32176)) +* **New Resource:** `aws_transfer_agreement` ([#32203](https://github.com/hashicorp/terraform-provider-aws/issues/32203)) +* **New Resource:** `aws_transfer_certificate` ([#32203](https://github.com/hashicorp/terraform-provider-aws/issues/32203)) +* **New Resource:** `aws_transfer_connector` ([#32203](https://github.com/hashicorp/terraform-provider-aws/issues/32203)) +* **New Resource:** `aws_transfer_profile` ([#32203](https://github.com/hashicorp/terraform-provider-aws/issues/32203)) + +ENHANCEMENTS: + +* resource/aws_batch_compute_environment: Add `placement_group` attribute to the `compute_resources` configuration block ([#32200](https://github.com/hashicorp/terraform-provider-aws/issues/32200)) +* resource/aws_emrserverless_application: Do not recreate the resource if `release_label` changes ([#32278](https://github.com/hashicorp/terraform-provider-aws/issues/32278)) +* resource/aws_fis_experiment_template: Add `log_configuration` configuration block ([#32102](https://github.com/hashicorp/terraform-provider-aws/issues/32102)) +* resource/aws_fis_experiment_template: Add `parameters` attribute to the `target` configuration block ([#32160](https://github.com/hashicorp/terraform-provider-aws/issues/32160)) +* resource/aws_fis_experiment_template: Add support 
for `Pods` and `Tasks` to `action.*.target` ([#32152](https://github.com/hashicorp/terraform-provider-aws/issues/32152)) +* resource/aws_lambda_event_source_mapping: The `queues` argument has changed from a set to a list with a maximum of one element. ([#31931](https://github.com/hashicorp/terraform-provider-aws/issues/31931)) +* resource/aws_pipes_pipe: Add `activemq_broker_parameters`, `dynamodb_stream_parameters`, `kinesis_stream_parameters`, `managed_streaming_kafka_parameters`, `rabbitmq_broker_parameters`, `self_managed_kafka_parameters` and `sqs_queue_parameters` attributes to the `source_parameters` configuration block. NOTE: Because we cannot easily test all this functionality, it is best effort and we ask for community help in testing ([#31607](https://github.com/hashicorp/terraform-provider-aws/issues/31607)) +* resource/aws_pipes_pipe: Add `batch_job_parameters`, `cloudwatch_logs_parameters`, `ecs_task_parameters`, `eventbridge_event_bus_parameters`, `http_parameters`, `kinesis_stream_parameters`, `lambda_function_parameters`, `redshift_data_parameters`, `sagemaker_pipeline_parameters`, `sqs_queue_parameters` and `step_function_state_machine_parameters` attributes to the `target_parameters` configuration block. 
NOTE: Because we cannot easily test all this functionality, it is best effort and we ask for community help in testing ([#31607](https://github.com/hashicorp/terraform-provider-aws/issues/31607)) +* resource/aws_pipes_pipe: Add `enrichment_parameters` argument ([#31607](https://github.com/hashicorp/terraform-provider-aws/issues/31607)) +* resource/aws_resourcegroups_group: `resource_query` no longer conflicts with `configuration` ([#30242](https://github.com/hashicorp/terraform-provider-aws/issues/30242)) +* resource/aws_s3_bucket_logging: Retry on empty read of logging config ([#30916](https://github.com/hashicorp/terraform-provider-aws/issues/30916)) +* resource/aws_sfn_state_machine: Add `description`, `publish`, `revision_id`, `state_machine_version_arn` and `version_description` attributes ([#32176](https://github.com/hashicorp/terraform-provider-aws/issues/32176)) + +BUG FIXES: + +* resource/aws_db_instance: Fix resource Create returning instances not in the `available` state when `identifier_prefix` is specified ([#32287](https://github.com/hashicorp/terraform-provider-aws/issues/32287)) +* resource/aws_resourcegroups_resource: Fix crash when resource Create fails ([#30242](https://github.com/hashicorp/terraform-provider-aws/issues/30242)) +* resource/aws_route: Fix `reading Route in Route Table (rtb-1234abcd) with destination (1.2.3.4/5): couldn't find resource` errors when reading new resource ([#32196](https://github.com/hashicorp/terraform-provider-aws/issues/32196)) +* resource/aws_vpc_security_group_egress_rule: `security_group_id` is Required ([#32148](https://github.com/hashicorp/terraform-provider-aws/issues/32148)) +* resource/aws_vpc_security_group_ingress_rule: `security_group_id` is Required ([#32148](https://github.com/hashicorp/terraform-provider-aws/issues/32148)) + +## 5.5.0 (June 23, 2023) + +NOTES: + +* provider: Updates to Go 1.20, the last release that will run on any release of Windows 7, 8, Server 2008 and Server 2012. 
A future release will update to Go 1.21, and these platforms will no longer be supported. ([#32108](https://github.com/hashicorp/terraform-provider-aws/issues/32108)) +* provider: Updates to Go 1.20, the last release that will run on macOS 10.13 High Sierra or 10.14 Mojave. A future release will update to Go 1.21, and these platforms will no longer be supported. ([#32108](https://github.com/hashicorp/terraform-provider-aws/issues/32108)) +* provider: Updates to Go 1.20. The provider will now notice the `trust-ad` option in `/etc/resolv.conf` and, if set, will set the "authentic data" option in outgoing DNS requests in order to better match the behavior of the GNU libc resolver. ([#32108](https://github.com/hashicorp/terraform-provider-aws/issues/32108)) + +FEATURES: + +* **New Data Source:** `aws_sesv2_email_identity` ([#32026](https://github.com/hashicorp/terraform-provider-aws/issues/32026)) +* **New Data Source:** `aws_sesv2_email_identity_mail_from_attributes` ([#32026](https://github.com/hashicorp/terraform-provider-aws/issues/32026)) +* **New Resource:** `aws_chimesdkvoice_sip_rule` ([#32070](https://github.com/hashicorp/terraform-provider-aws/issues/32070)) +* **New Resource:** `aws_organizations_resource_policy` ([#32056](https://github.com/hashicorp/terraform-provider-aws/issues/32056)) + +ENHANCEMENTS: + +* data-source/aws_organizations_organization: Return the full set of attributes when running as a [delegated administrator for AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_delegate_policies.html) ([#32056](https://github.com/hashicorp/terraform-provider-aws/issues/32056)) +* provider: Mask all sensitive values that appear when `TF_LOG` level is `TRACE` ([#32174](https://github.com/hashicorp/terraform-provider-aws/issues/32174)) +* resource/aws_config_configuration_recorder: Add `exclusion_by_resource_types` and `recording_strategy` attributes to the `recording_group` configuration block 
([#32007](https://github.com/hashicorp/terraform-provider-aws/issues/32007)) +* resource/aws_datasync_task: Add `object_tags` attribute to `options` configuration block ([#27811](https://github.com/hashicorp/terraform-provider-aws/issues/27811)) +* resource/aws_networkmanager_attachment_accepter: Added support for Transit Gateway route table attachments ([#32023](https://github.com/hashicorp/terraform-provider-aws/issues/32023)) +* resource/aws_ses_active_receipt_rule_set: Support import ([#27604](https://github.com/hashicorp/terraform-provider-aws/issues/27604)) + +BUG FIXES: + +* resource/aws_api_gateway_rest_api: Fix crash when `binary_media_types` is `null` ([#32169](https://github.com/hashicorp/terraform-provider-aws/issues/32169)) +* resource/aws_datasync_location_object_storage: Don't ignore `server_certificate` argument ([#27811](https://github.com/hashicorp/terraform-provider-aws/issues/27811)) +* resource/aws_eip: Fix `reading EC2 EIP (eipalloc-abcd1234): couldn't find resource` errors when reading new resource ([#32016](https://github.com/hashicorp/terraform-provider-aws/issues/32016)) +* resource/aws_quicksight_analysis: Fix schema mapping for string set elements ([#31903](https://github.com/hashicorp/terraform-provider-aws/issues/31903)) +* resource/aws_redshiftserverless_workgroup: Fix `waiting for completion: unexpected state 'AVAILABLE'` errors when deleting resource ([#32067](https://github.com/hashicorp/terraform-provider-aws/issues/32067)) +* resource/aws_route_table: Fix `reading Route Table (rtb-abcd1234): couldn't find resource` errors when reading new resource ([#30999](https://github.com/hashicorp/terraform-provider-aws/issues/30999)) +* resource/aws_storagegateway_smb_file_share: Fix update error when `kms_encrypted` is `true` but `kms_key_arn` is not sent in the request ([#32171](https://github.com/hashicorp/terraform-provider-aws/issues/32171)) + +## 5.4.0 (June 15, 2023) + +FEATURES: + +* **New Data Source:** `aws_organizations_policies` 
([#31545](https://github.com/hashicorp/terraform-provider-aws/issues/31545)) +* **New Data Source:** `aws_organizations_policies_for_target` ([#31682](https://github.com/hashicorp/terraform-provider-aws/issues/31682)) +* **New Resource:** `aws_chimesdkvoice_sip_media_application` ([#31937](https://github.com/hashicorp/terraform-provider-aws/issues/31937)) +* **New Resource:** `aws_opensearchserverless_collection` ([#31091](https://github.com/hashicorp/terraform-provider-aws/issues/31091)) +* **New Resource:** `aws_opensearchserverless_security_config` ([#28776](https://github.com/hashicorp/terraform-provider-aws/issues/28776)) +* **New Resource:** `aws_opensearchserverless_vpc_endpoint` ([#28651](https://github.com/hashicorp/terraform-provider-aws/issues/28651)) + +ENHANCEMENTS: + +* resource/aws_elb: Add configurable Create and Update timeouts ([#31976](https://github.com/hashicorp/terraform-provider-aws/issues/31976)) +* resource/aws_glue_data_quality_ruleset: Add `catalog_id` argument to `target_table` block ([#31926](https://github.com/hashicorp/terraform-provider-aws/issues/31926)) + +BUG FIXES: + +* provider: Fix `index out of range [0] with length 0` panic ([#32004](https://github.com/hashicorp/terraform-provider-aws/issues/32004)) +* resource/aws_elb: Recreate the resource if `subnets` is updated to an empty list ([#31976](https://github.com/hashicorp/terraform-provider-aws/issues/31976)) +* resource/aws_lambda_provisioned_concurrency_config: The `function_name` argument now properly handles ARN values ([#31933](https://github.com/hashicorp/terraform-provider-aws/issues/31933)) +* resource/aws_quicksight_data_set: Allow physical table map to be optional ([#31863](https://github.com/hashicorp/terraform-provider-aws/issues/31863)) +* resource/aws_ssm_default_patch_baseline: Fix `*conns.AWSClient is not ssm.ssmClient: missing method SSMClient` panic ([#31928](https://github.com/hashicorp/terraform-provider-aws/issues/31928)) + +## 5.3.0 (June 13, 2023) + 
+NOTES: + +* resource/aws_instance: The `metadata_options.http_endpoint` argument now correctly defaults to `enabled`. ([#24774](https://github.com/hashicorp/terraform-provider-aws/issues/24774)) +* resource/aws_lambda_function: The `replace_security_groups_on_destroy` and `replacement_security_group_ids` attributes are being deprecated as AWS no longer supports this operation. These attributes now have no effect, and will be removed in a future major version. ([#31904](https://github.com/hashicorp/terraform-provider-aws/issues/31904)) + +FEATURES: + +* **New Data Source:** `aws_quicksight_theme` ([#31900](https://github.com/hashicorp/terraform-provider-aws/issues/31900)) +* **New Resource:** `aws_opensearchserverless_access_policy` ([#28518](https://github.com/hashicorp/terraform-provider-aws/issues/28518)) +* **New Resource:** `aws_opensearchserverless_security_policy` ([#28470](https://github.com/hashicorp/terraform-provider-aws/issues/28470)) +* **New Resource:** `aws_quicksight_theme` ([#31900](https://github.com/hashicorp/terraform-provider-aws/issues/31900)) + +ENHANCEMENTS: + +* data-source/aws_redshift_cluster: Add `cluster_namespace_arn` attribute ([#31884](https://github.com/hashicorp/terraform-provider-aws/issues/31884)) +* resource/aws_redshift_cluster: Add `cluster_namespace_arn` attribute ([#31884](https://github.com/hashicorp/terraform-provider-aws/issues/31884)) +* resource/aws_vpc_endpoint: Add `private_dns_only_for_inbound_resolver_endpoint` attribute to the `dns_options` configuration block ([#31873](https://github.com/hashicorp/terraform-provider-aws/issues/31873)) + +BUG FIXES: + +* resource/aws_ecs_task_definition: Fix to prevent persistent diff when `efs_volume_configuration` has both `root_volume` and `authorization_config` set. ([#26880](https://github.com/hashicorp/terraform-provider-aws/issues/26880)) +* resource/aws_instance: Fix default for `metadata_options.http_endpoint` argument. 
([#24774](https://github.com/hashicorp/terraform-provider-aws/issues/24774)) +* resource/aws_keyspaces_keyspace: Correct plan time validation for `name` ([#31352](https://github.com/hashicorp/terraform-provider-aws/issues/31352)) +* resource/aws_keyspaces_table: Correct plan time validation for `keyspace_name`, `table_name` and column names ([#31352](https://github.com/hashicorp/terraform-provider-aws/issues/31352)) +* resource/aws_quicksight_analysis: Fix assignment of KPI visual field well target values ([#31901](https://github.com/hashicorp/terraform-provider-aws/issues/31901)) +* resource/aws_redshift_cluster: Allow `availability_zone_relocation_enabled` to be `true` when `publicly_accessible` is `true` ([#31886](https://github.com/hashicorp/terraform-provider-aws/issues/31886)) +* resource/aws_vpc: Fix `reading EC2 VPC (vpc-abcd1234) Attribute (enableDnsSupport): couldn't find resource` errors when reading new resource ([#31877](https://github.com/hashicorp/terraform-provider-aws/issues/31877)) + +## 5.2.0 (June 9, 2023) + +NOTES: + +* resource/aws_mwaa_environment: Upgrading your environment to a new major version of Apache Airflow forces replacement of the resource ([#31833](https://github.com/hashicorp/terraform-provider-aws/issues/31833)) + +FEATURES: + +* **New Data Source:** `aws_budgets_budget` ([#31691](https://github.com/hashicorp/terraform-provider-aws/issues/31691)) +* **New Data Source:** `aws_ecr_pull_through_cache_rule` ([#31696](https://github.com/hashicorp/terraform-provider-aws/issues/31696)) +* **New Data Source:** `aws_guardduty_finding_ids` ([#31711](https://github.com/hashicorp/terraform-provider-aws/issues/31711)) +* **New Data Source:** `aws_iam_principal_policy_simulation` ([#25569](https://github.com/hashicorp/terraform-provider-aws/issues/25569)) +* **New Resource:** `aws_chimesdkvoice_global_settings` ([#31365](https://github.com/hashicorp/terraform-provider-aws/issues/31365)) +* **New Resource:** `aws_finspace_kx_cluster` 
([#31806](https://github.com/hashicorp/terraform-provider-aws/issues/31806))
+* **New Resource:** `aws_finspace_kx_database` ([#31803](https://github.com/hashicorp/terraform-provider-aws/issues/31803))
+* **New Resource:** `aws_finspace_kx_environment` ([#31802](https://github.com/hashicorp/terraform-provider-aws/issues/31802))
+* **New Resource:** `aws_finspace_kx_user` ([#31804](https://github.com/hashicorp/terraform-provider-aws/issues/31804))
+
+ENHANCEMENTS:
+
+* data-source/aws_ec2_transit_gateway_connect_peer: Add `bgp_peer_address` and `bgp_transit_gateway_addresses` attributes ([#31752](https://github.com/hashicorp/terraform-provider-aws/issues/31752))
+* provider: Adds `retry_mode` parameter ([#31745](https://github.com/hashicorp/terraform-provider-aws/issues/31745))
+* resource/aws_chime_voice_connector: Add tagging support ([#31746](https://github.com/hashicorp/terraform-provider-aws/issues/31746))
+* resource/aws_ec2_transit_gateway_connect_peer: Add `bgp_peer_address` and `bgp_transit_gateway_addresses` attributes ([#31752](https://github.com/hashicorp/terraform-provider-aws/issues/31752))
+* resource/aws_ec2_transit_gateway_route_table_association: Add `replace_existing_association` argument ([#31452](https://github.com/hashicorp/terraform-provider-aws/issues/31452))
+* resource/aws_fis_experiment_template: Add support for `Volumes` to `actions.*.target` ([#31499](https://github.com/hashicorp/terraform-provider-aws/issues/31499))
+* resource/aws_instance: Add `instance_market_options` configuration block and `instance_lifecycle` and `spot_instance_request_id` attributes ([#31495](https://github.com/hashicorp/terraform-provider-aws/issues/31495))
+* resource/aws_lambda_function: Add support for `ruby3.2` `runtime` value ([#31842](https://github.com/hashicorp/terraform-provider-aws/issues/31842))
+* resource/aws_lambda_layer_version: Add support for `ruby3.2` `compatible_runtimes` value ([#31842](https://github.com/hashicorp/terraform-provider-aws/issues/31842))
+* resource/aws_mwaa_environment: Consider `CREATING_SNAPSHOT` a valid pending state for resource update ([#31833](https://github.com/hashicorp/terraform-provider-aws/issues/31833))
+* resource/aws_networkfirewall_firewall_policy: Add `stream_exception_policy` option to `firewall_policy.stateful_engine_options` ([#31541](https://github.com/hashicorp/terraform-provider-aws/issues/31541))
+* resource/aws_redshiftserverless_workgroup: Additional supported values for `config_parameter.parameter_key` ([#31747](https://github.com/hashicorp/terraform-provider-aws/issues/31747))
+* resource/aws_sagemaker_model: Add `container.model_package_name` and `primary_container.model_package_name` arguments ([#31755](https://github.com/hashicorp/terraform-provider-aws/issues/31755))
+
+BUG FIXES:
+
+* data-source/aws_redshift_cluster: Fix crash reading clusters in `modifying` state ([#31772](https://github.com/hashicorp/terraform-provider-aws/issues/31772))
+* provider/default_tags: Fix perpetual diff when identical tags are moved from `default_tags` to resource `tags`, and vice versa ([#31826](https://github.com/hashicorp/terraform-provider-aws/issues/31826))
+* resource/aws_autoscaling_group: Ignore any `Failed` scaling activities due to IAM eventual consistency ([#31282](https://github.com/hashicorp/terraform-provider-aws/issues/31282))
+* resource/aws_dx_connection: Convert `vlan_id` from [`TypeString`](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-types#typestring) to [`TypeInt`](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-types#typeint) in [Terraform state](https://developer.hashicorp.com/terraform/language/state) for existing resources.
This fixes a regression introduced in [v5.1.0](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#510-june--1-2023) causing `a number is required` errors ([#31735](https://github.com/hashicorp/terraform-provider-aws/issues/31735))
+* resource/aws_globalaccelerator_endpoint_group: Fix bug updating `endpoint_configuration.weight` to `0` ([#31767](https://github.com/hashicorp/terraform-provider-aws/issues/31767))
+* resource/aws_medialive_channel: Fix spelling in `hls_cdn_settings` expander. ([#31844](https://github.com/hashicorp/terraform-provider-aws/issues/31844))
+* resource/aws_redshiftserverless_namespace: Fix perpetual `iam_roles` diffs when the namespace contains a workgroup ([#31749](https://github.com/hashicorp/terraform-provider-aws/issues/31749))
+* resource/aws_redshiftserverless_workgroup: Change `config_parameter` from `TypeList` to `TypeSet` as order is not significant ([#31747](https://github.com/hashicorp/terraform-provider-aws/issues/31747))
+* resource/aws_redshiftserverless_workgroup: Fix `ValidationException: Can't update multiple configurations at the same time` errors ([#31747](https://github.com/hashicorp/terraform-provider-aws/issues/31747))
+* resource/aws_vpc_endpoint: Fix tagging error preventing use in ISO partitions ([#31801](https://github.com/hashicorp/terraform-provider-aws/issues/31801))
+
+## 5.1.0 (June 1, 2023)
+
+BREAKING CHANGES:
+
+* resource/aws_iam_role: The `role_last_used` attribute has been removed. Use the `aws_iam_role` data source instead. ([#31656](https://github.com/hashicorp/terraform-provider-aws/issues/31656))
+
+NOTES:
+
+* resource/aws_autoscaling_group: The `load_balancers` and `target_group_arns` attributes have been changed to `Computed`. This means that omitting this argument is interpreted as ignoring any existing load balancer or target group attachments. To remove all load balancer or target group attachments an empty list should be specified. ([#31527](https://github.com/hashicorp/terraform-provider-aws/issues/31527))
+* resource/aws_iam_role: The `role_last_used` attribute has been removed. Use the `aws_iam_role` data source instead. See the community feedback provided in the [linked issue](https://github.com/hashicorp/terraform-provider-aws/issues/30861) for additional justification on this change. As the attribute is read-only, unlikely to be used as an input to another resource, and available in the corresponding data source, a breaking change in a minor version was deemed preferable to a long deprecation/removal cycle in this circumstance. ([#31656](https://github.com/hashicorp/terraform-provider-aws/issues/31656))
+* resource/aws_redshift_cluster: Ignores the parameter `aqua_configuration_status`, since the AWS API ignores it. Now always returns `auto`. ([#31612](https://github.com/hashicorp/terraform-provider-aws/issues/31612))
+
+FEATURES:
+
+* **New Data Source:** `aws_vpclattice_resource_policy` ([#31372](https://github.com/hashicorp/terraform-provider-aws/issues/31372))
+* **New Resource:** `aws_autoscaling_traffic_source_attachment` ([#31527](https://github.com/hashicorp/terraform-provider-aws/issues/31527))
+* **New Resource:** `aws_emrcontainers_job_template` ([#31399](https://github.com/hashicorp/terraform-provider-aws/issues/31399))
+* **New Resource:** `aws_glue_data_quality_ruleset` ([#31604](https://github.com/hashicorp/terraform-provider-aws/issues/31604))
+* **New Resource:** `aws_quicksight_analysis` ([#31542](https://github.com/hashicorp/terraform-provider-aws/issues/31542))
+* **New Resource:** `aws_quicksight_dashboard` ([#31448](https://github.com/hashicorp/terraform-provider-aws/issues/31448))
+* **New Resource:** `aws_resourcegroups_resource` ([#31430](https://github.com/hashicorp/terraform-provider-aws/issues/31430))
+
+ENHANCEMENTS:
+
+* data-source/aws_autoscaling_group: Add `traffic_source` attribute
([#31527](https://github.com/hashicorp/terraform-provider-aws/issues/31527))
+* data-source/aws_opensearch_domain: Add `off_peak_window_options` attribute ([#35970](https://github.com/hashicorp/terraform-provider-aws/issues/35970))
+* provider: Increases size of HTTP request bodies in logs to 1 KB ([#31718](https://github.com/hashicorp/terraform-provider-aws/issues/31718))
+* resource/aws_appsync_graphql_api: Add `visibility` argument ([#31369](https://github.com/hashicorp/terraform-provider-aws/issues/31369))
+* resource/aws_appsync_graphql_api: Add plan time validation for `log_config.cloudwatch_logs_role_arn` ([#31369](https://github.com/hashicorp/terraform-provider-aws/issues/31369))
+* resource/aws_autoscaling_group: Add `traffic_source` configuration block ([#31527](https://github.com/hashicorp/terraform-provider-aws/issues/31527))
+* resource/aws_cloudformation_stack_set: Add `managed_execution` argument ([#25210](https://github.com/hashicorp/terraform-provider-aws/issues/25210))
+* resource/aws_fsx_ontap_volume: Add `skip_final_backup` argument ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_fsx_ontap_volume: Remove default value for `security_style` argument and mark as Computed ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_fsx_ontap_volume: Update `ontap_volume_type` attribute to be configurable ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_fsx_ontap_volume: `junction_path` is Optional ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_fsx_ontap_volume: `storage_efficiency_enabled` is Optional ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_grafana_workspace: Increase default Create and Update timeouts to 30 minutes ([#31422](https://github.com/hashicorp/terraform-provider-aws/issues/31422))
+* resource/aws_lambda_invocation: Add `lifecycle_scope` CRUD to invoke on each resource state transition ([#29367](https://github.com/hashicorp/terraform-provider-aws/issues/29367))
+* resource/aws_lambda_layer_version_permission: Add `skip_destroy` attribute ([#29571](https://github.com/hashicorp/terraform-provider-aws/issues/29571))
+* resource/aws_lambda_provisioned_concurrency_configuration: Add `skip_destroy` argument ([#31646](https://github.com/hashicorp/terraform-provider-aws/issues/31646))
+* resource/aws_opensearch_domain: Add `off_peak_window_options` configuration block ([#35970](https://github.com/hashicorp/terraform-provider-aws/issues/35970))
+* resource/aws_sagemaker_endpoint_configuration: Add `production_variants.serverless_config.provisioned_concurrency` and `shadow_production_variants.serverless_config.provisioned_concurrency` arguments ([#31398](https://github.com/hashicorp/terraform-provider-aws/issues/31398))
+* resource/aws_transfer_server: Add support for `TransferSecurityPolicy-2023-05` `security_policy_name` value ([#31536](https://github.com/hashicorp/terraform-provider-aws/issues/31536))
+
+BUG FIXES:
+
+* data-source/aws_dx_connection: Fix the `vlan_id` being returned as null ([#31480](https://github.com/hashicorp/terraform-provider-aws/issues/31480))
+* provider/tags: Fix crash when some `tags` are `null` and others are `computed` ([#31687](https://github.com/hashicorp/terraform-provider-aws/issues/31687))
+* provider: Limits size of HTTP response bodies in logs to 4 KB ([#31718](https://github.com/hashicorp/terraform-provider-aws/issues/31718))
+* resource/aws_autoscaling_group: Fix `The AutoRollback parameter cannot be set to true when the DesiredConfiguration parameter is empty` errors when refreshing instances ([#31715](https://github.com/hashicorp/terraform-provider-aws/issues/31715))
+* resource/aws_autoscaling_group: Now ignores previous failed scaling activities ([#31551](https://github.com/hashicorp/terraform-provider-aws/issues/31551))
+* resource/aws_cloudfront_distribution: Remove the upper limit on
`origin_keepalive_timeout` ([#31608](https://github.com/hashicorp/terraform-provider-aws/issues/31608))
+* resource/aws_connect_instance: Fix crash when reading instances with `CREATION_FAILED` status ([#31689](https://github.com/hashicorp/terraform-provider-aws/issues/31689))
+* resource/aws_connect_security_profile: Set correct `tags` in state ([#31716](https://github.com/hashicorp/terraform-provider-aws/issues/31716))
+* resource/aws_dx_connection: Fix the `vlan_id` being returned as null ([#31480](https://github.com/hashicorp/terraform-provider-aws/issues/31480))
+* resource/aws_ecs_service: Fix crash when just `alarms` is updated ([#31683](https://github.com/hashicorp/terraform-provider-aws/issues/31683))
+* resource/aws_fsx_ontap_volume: Change `storage_virtual_machine_id` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_fsx_ontap_volume: Change `volume_type` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) ([#31544](https://github.com/hashicorp/terraform-provider-aws/issues/31544))
+* resource/aws_kendra_index: Persist `user_group_resolution_mode` value to state after creation ([#31669](https://github.com/hashicorp/terraform-provider-aws/issues/31669))
+* resource/aws_medialive_channel: Fix attribute spelling in `hls_cdn_settings` expand ([#31647](https://github.com/hashicorp/terraform-provider-aws/issues/31647))
+* resource/aws_quicksight_data_set: Fix join_instruction not applied when creating dataset ([#31424](https://github.com/hashicorp/terraform-provider-aws/issues/31424))
+* resource/aws_quicksight_data_set: Ignore failure to read refresh properties for non-SPICE datasets ([#31488](https://github.com/hashicorp/terraform-provider-aws/issues/31488))
+* resource/aws_rbin_rule: Fix crash when multiple `resource_tags` blocks are configured ([#31393](https://github.com/hashicorp/terraform-provider-aws/issues/31393))
+* resource/aws_rds_cluster: Correctly update `db_cluster_instance_class` ([#31709](https://github.com/hashicorp/terraform-provider-aws/issues/31709))
+* resource/aws_redshift_cluster: No longer errors on deletion when status is `Maintenance` ([#31612](https://github.com/hashicorp/terraform-provider-aws/issues/31612))
+* resource/aws_route53_vpc_association_authorization: Fix `ConcurrentModification` error ([#31588](https://github.com/hashicorp/terraform-provider-aws/issues/31588))
+* resource/aws_s3_bucket_replication_configuration: Replication configs sometimes need more than a second or two. This resolves a race condition and adds retry logic when reading them. ([#30995](https://github.com/hashicorp/terraform-provider-aws/issues/30995))
+
+## 5.0.1 (May 26, 2023)
+
+BUG FIXES:
+
+* provider/tags: Fix crash when tags are `null` ([#31587](https://github.com/hashicorp/terraform-provider-aws/issues/31587))
+
 ## 5.0.0 (May 25, 2023)
 
 BREAKING CHANGES:
 
 * data-source/aws_api_gateway_rest_api: `minimum_compression_size` is now a string type to allow values set via the `body` attribute to be properly computed.
([#30969](https://github.com/hashicorp/terraform-provider-aws/issues/30969))
+* data-source/aws_connect_hours_of_operation: The `hours_of_operation_arn` attribute has been removed ([#31484](https://github.com/hashicorp/terraform-provider-aws/issues/31484))
 * data-source/aws_db_instance: With the retirement of EC2-Classic the `db_security_groups` attribute has been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * data-source/aws_elasticache_cluster: With the retirement of EC2-Classic the `security_group_names` attribute has been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * data-source/aws_elasticache_replication_group: Remove `number_cache_clusters`, `replication_group_description` arguments -- use `num_cache_clusters`, and `description`, respectively, instead ([#31008](https://github.com/hashicorp/terraform-provider-aws/issues/31008))
@@ -12,6 +325,7 @@ BREAKING CHANGES:
 * data-source/aws_identitystore_user: The `filter` argument has been removed ([#31312](https://github.com/hashicorp/terraform-provider-aws/issues/31312))
 * data-source/aws_launch_configuration: With the retirement of EC2-Classic the `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes have been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * data-source/aws_redshift_cluster: With the retirement of EC2-Classic the `cluster_security_groups` attribute has been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
+* data-source/aws_secretsmanager_secret: The `rotation_enabled`, `rotation_lambda_arn` and `rotation_rules` attributes have been removed ([#31487](https://github.com/hashicorp/terraform-provider-aws/issues/31487))
 * data-source/aws_vpc_peering_connection: With the retirement of EC2-Classic the `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` attributes have been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * provider: The `assume_role.duration_seconds`, `assume_role_with_web_identity.duration_seconds`, `s3_force_path_style`, `shared_credentials_file` and `skip_get_ec2_platforms` attributes have been removed ([#31155](https://github.com/hashicorp/terraform-provider-aws/issues/31155))
 * provider: The `aws_subnet_ids` data source has been removed ([#31140](https://github.com/hashicorp/terraform-provider-aws/issues/31140))
@@ -27,6 +341,8 @@ BREAKING CHANGES:
 * resource/aws_budgets_budget: The `cost_filters` attribute has been removed ([#31395](https://github.com/hashicorp/terraform-provider-aws/issues/31395))
 * resource/aws_ce_anomaly_subscription: The `threshold` attribute has been removed ([#30374](https://github.com/hashicorp/terraform-provider-aws/issues/30374))
 * resource/aws_cloudwatch_event_target: The `ecs_target.propagate_tags` attribute now has no default value ([#25233](https://github.com/hashicorp/terraform-provider-aws/issues/25233))
+* resource/aws_codebuild_project: The `secondary_sources.auth` and `source.auth` attributes have been removed ([#31483](https://github.com/hashicorp/terraform-provider-aws/issues/31483))
+* resource/aws_connect_hours_of_operation: The `hours_of_operation_arn` attribute has been removed ([#31484](https://github.com/hashicorp/terraform-provider-aws/issues/31484))
 * resource/aws_connect_queue: The `quick_connect_ids_associated` attribute has been removed ([#31376](https://github.com/hashicorp/terraform-provider-aws/issues/31376))
 * resource/aws_connect_routing_profile: The `queue_configs_associated` attribute has been removed ([#31376](https://github.com/hashicorp/terraform-provider-aws/issues/31376))
 * resource/aws_db_instance: Remove `name` - use `db_name` instead ([#31232](https://github.com/hashicorp/terraform-provider-aws/issues/31232))
@@ -50,6 +366,7 @@ BREAKING CHANGES:
 * resource/aws_kinesis_firehose_delivery_stream: Rename
`redshift_configuration.0.s3_backup_configuration.0.buffer_size` and `redshift_configuration.0.s3_backup_configuration.0.buffer_interval` to `redshift_configuration.0.s3_backup_configuration.0.buffering_size` and `redshift_configuration.0.s3_backup_configuration.0.buffering_interval`, respectively ([#31141](https://github.com/hashicorp/terraform-provider-aws/issues/31141))
 * resource/aws_kinesis_firehose_delivery_stream: Rename `s3_configuration.0.buffer_size` and `s3_configuration.0.buffer_interval` to `s3_configuration.0.buffering_size` and `s3_configuration.0.buffering_interval`, respectively ([#31141](https://github.com/hashicorp/terraform-provider-aws/issues/31141))
 * resource/aws_launch_configuration: With the retirement of EC2-Classic the `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes have been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
+* resource/aws_lightsail_instance: The `ipv6_address` attribute has been removed ([#31489](https://github.com/hashicorp/terraform-provider-aws/issues/31489))
 * resource/aws_medialive_multiplex_program: The `statemux_settings` attribute has been removed. Use `statmux_settings` argument instead ([#31034](https://github.com/hashicorp/terraform-provider-aws/issues/31034))
 * resource/aws_msk_cluster: The `broker_node_group_info.ebs_volume_size` attribute has been removed ([#31324](https://github.com/hashicorp/terraform-provider-aws/issues/31324))
 * resource/aws_neptune_cluster: `snapshot_identifier` change now properly forces replacement ([#29409](https://github.com/hashicorp/terraform-provider-aws/issues/29409))
@@ -60,6 +377,7 @@ BREAKING CHANGES:
 * resource/aws_redshift_cluster: With the retirement of EC2-Classic the `cluster_security_groups` attribute has been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * resource/aws_route: `instance_id` can no longer be set in configurations.
Use `network_interface_id` instead, for example, setting `network_interface_id` to `aws_instance.test.primary_network_interface_id`. ([#30804](https://github.com/hashicorp/terraform-provider-aws/issues/30804))
 * resource/aws_route_table: `route.*.instance_id` can no longer be set in configurations. Use `route.*.network_interface_id` instead, for example, setting `network_interface_id` to `aws_instance.test.primary_network_interface_id`. ([#30804](https://github.com/hashicorp/terraform-provider-aws/issues/30804))
+* resource/aws_secretsmanager_secret: The `rotation_enabled`, `rotation_lambda_arn` and `rotation_rules` attributes have been removed ([#31487](https://github.com/hashicorp/terraform-provider-aws/issues/31487))
 * resource/aws_security_group: With the retirement of EC2-Classic non-VPC security groups are no longer supported ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * resource/aws_security_group_rule: With the retirement of EC2-Classic non-VPC security groups are no longer supported ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * resource/aws_servicecatalog_product: Changes to any `provisioning_artifact_parameters` arguments now properly trigger a replacement. This fixes incorrect behavior, but may technically be breaking for configurations expecting non-functional in-place updates. ([#31061](https://github.com/hashicorp/terraform-provider-aws/issues/31061))
@@ -68,13 +386,15 @@ BREAKING CHANGES:
 * resource/aws_vpc_peering_connection_accepter: With the retirement of EC2-Classic the `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` attributes have been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * resource/aws_vpc_peering_connection_options: With the retirement of EC2-Classic the `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` attributes have been removed ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
 * resource/aws_wafv2_web_acl: The `statement.managed_rule_group_statement.excluded_rule` and `statement.rule_group_reference_statement.excluded_rule` attributes have been removed ([#31374](https://github.com/hashicorp/terraform-provider-aws/issues/31374))
+* resource/aws_wafv2_web_acl_logging_configuration: The `redacted_fields.all_query_arguments`, `redacted_fields.body` and `redacted_fields.single_query_argument` attributes have been removed ([#31486](https://github.com/hashicorp/terraform-provider-aws/issues/31486))
 
 NOTES:
 
-* data-source/aws_db_security_group: The `aws_redshift_service_account` data source has been deprecated and will be removed in a future version.
AWS documentation [states that](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) a [service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) should be used instead of an AWS account ID in any relevant IAM policy ([#31006](https://github.com/hashicorp/terraform-provider-aws/issues/31006))
 * data-source/aws_elasticache_replication_group: Update configurations to use `description` instead of the `replication_group_description` argument ([#31008](https://github.com/hashicorp/terraform-provider-aws/issues/31008))
 * data-source/aws_elasticache_replication_group: Update configurations to use `num_cache_clusters` instead of the `number_cache_clusters` argument ([#31008](https://github.com/hashicorp/terraform-provider-aws/issues/31008))
+* data-source/aws_opensearch_domain: The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead ([#31490](https://github.com/hashicorp/terraform-provider-aws/issues/31490))
 * data-source/aws_quicksight_data_set: The `tags_all` attribute has been deprecated and will be removed in a future version ([#31162](https://github.com/hashicorp/terraform-provider-aws/issues/31162))
+* data-source/aws_redshift_service_account: The `aws_redshift_service_account` data source has been deprecated and will be removed in a future version. AWS documentation [states that](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) a [service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) should be used instead of an AWS account ID in any relevant IAM policy ([#31006](https://github.com/hashicorp/terraform-provider-aws/issues/31006))
 * data-source/aws_service_discovery_service: The `tags_all` attribute has been deprecated and will be removed in a future version ([#31162](https://github.com/hashicorp/terraform-provider-aws/issues/31162))
 * resource/aws_api_gateway_rest_api: Update configurations with `minimum_compression_size` set to pass the value as a string. Valid values remain the same. ([#30969](https://github.com/hashicorp/terraform-provider-aws/issues/30969))
 * resource/aws_autoscaling_attachment: Update configurations to use `lb_target_group_arn` instead of `alb_target_group_arn` which has been removed ([#30828](https://github.com/hashicorp/terraform-provider-aws/issues/30828))
@@ -94,6 +414,8 @@ NOTES:
 * resource/aws_guardduty_organization_configuration: The `auto_enable` argument has been deprecated. Use the `auto_enable_organization_members` argument instead. ([#30736](https://github.com/hashicorp/terraform-provider-aws/issues/30736))
 * resource/aws_neptune_cluster: Changes to the `snapshot_identifier` attribute will now trigger a replacement, rather than an in-place update. This corrects the previous behavior which resulted in a successful apply, but did not actually restore the cluster from the designated snapshot.
([#29409](https://github.com/hashicorp/terraform-provider-aws/issues/29409))
 * resource/aws_networkmanager_core_network: Update configurations to use the `aws_networkmanager_core_network_policy_attachment` resource instead of the `policy_document` argument ([#30875](https://github.com/hashicorp/terraform-provider-aws/issues/30875))
+* resource/aws_opensearch_domain: The `engine_version` attribute no longer has a default value. When omitted, the underlying AWS API will use the latest OpenSearch engine version. ([#31568](https://github.com/hashicorp/terraform-provider-aws/issues/31568))
+* resource/aws_opensearch_domain: The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead ([#31490](https://github.com/hashicorp/terraform-provider-aws/issues/31490))
 * resource/aws_rds_cluster: Changes to the `snapshot_identifier` attribute will now trigger a replacement, rather than an in-place update. This corrects the previous behavior which resulted in a successful apply, but did not actually restore the cluster from the designated snapshot. ([#29409](https://github.com/hashicorp/terraform-provider-aws/issues/29409))
 * resource/aws_rds_cluster: Configurations not including the `engine` argument must be updated to include `engine` as it is now required. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster ([#31112](https://github.com/hashicorp/terraform-provider-aws/issues/31112))
 * resource/aws_rds_cluster_instance: Configurations not including the `engine` argument must be updated to include `engine` as it is now required. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster instance ([#31112](https://github.com/hashicorp/terraform-provider-aws/issues/31112))
@@ -107,3123 +429,26 @@ ENHANCEMENTS:
 * provider: Allow `default_tags` and resource `tags` to include zero values `""` ([#30793](https://github.com/hashicorp/terraform-provider-aws/issues/30793))
 * provider: Duplicate `default_tags` can now be included and will be overwritten by resource `tags` ([#30793](https://github.com/hashicorp/terraform-provider-aws/issues/30793))
 * resource/aws_db_instance: Updates to `identifier` and `identifier_prefix` will no longer cause the database instance to be destroyed and recreated ([#31232](https://github.com/hashicorp/terraform-provider-aws/issues/31232))
+* resource/aws_eip: Deprecate `vpc` attribute. Use `domain` instead ([#31567](https://github.com/hashicorp/terraform-provider-aws/issues/31567))
 * resource/aws_guardduty_organization_configuration: Add `auto_enable_organization_members` attribute ([#30736](https://github.com/hashicorp/terraform-provider-aws/issues/30736))
 * resource/aws_kinesis_firehose_delivery_stream: Add `s3_configuration` to `elasticsearch_configuration`, `opensearch_configuration`, `redshift_configuration`, `splunk_configuration`, and `http_endpoint_configuration` ([#31138](https://github.com/hashicorp/terraform-provider-aws/issues/31138))
+* resource/aws_opensearch_domain: Removed `engine_version` default value ([#31568](https://github.com/hashicorp/terraform-provider-aws/issues/31568))
 * resource/aws_wafv2_web_acl: Support `rule_action_override` on `rule_group_reference_statement` ([#31374](https://github.com/hashicorp/terraform-provider-aws/issues/31374))
 
 BUG FIXES:
 
 * resource/aws_ecs_capacity_provider: Allow an `instance_warmup_period` of `0` in the `auto_scaling_group_provider.managed_scaling` configuration block ([#24005](https://github.com/hashicorp/terraform-provider-aws/issues/24005))
+* resource/aws_launch_template: Remove default values in `metadata_options` to allow default condition ([#30545](https://github.com/hashicorp/terraform-provider-aws/issues/30545))
+* resource/aws_s3_bucket: Fix bucket_regional_domain_name not including region for buckets in us-east-1 ([#25724](https://github.com/hashicorp/terraform-provider-aws/issues/25724))
+* resource/aws_s3_object: Remove `acl` default in order to work with S3 buckets that have ACL disabled ([#27197](https://github.com/hashicorp/terraform-provider-aws/issues/27197))
+* resource/aws_s3_object_copy: Remove `acl` default in order to work with S3 buckets that have ACL disabled ([#27197](https://github.com/hashicorp/terraform-provider-aws/issues/27197))
 * resource/aws_servicecatalog_product: Changes to `provisioning_artifact_parameters` arguments now properly trigger a replacement ([#31061](https://github.com/hashicorp/terraform-provider-aws/issues/31061))
 * resource/aws_vpc_peering_connection: Fix crash in `vpcPeeringConnectionOptionsEqual` ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966))
-## 4.67.0 (May 11, 2023)
-
-NOTES:
-
-* resource/aws_lightsail_domain_entry: The `id` attribute is now comma-delimited ([#30820](https://github.com/hashicorp/terraform-provider-aws/issues/30820))
-
-FEATURES:
-
-* **New Data Source:** `aws_connect_user` ([#26156](https://github.com/hashicorp/terraform-provider-aws/issues/26156))
-* **New Data Source:** `aws_connect_vocabulary` ([#26158](https://github.com/hashicorp/terraform-provider-aws/issues/26158))
-* **New Data Source:** `aws_organizations_policy` ([#30920](https://github.com/hashicorp/terraform-provider-aws/issues/30920))
-* **New Data Source:** `aws_redshiftserverless_namespace` ([#31250](https://github.com/hashicorp/terraform-provider-aws/issues/31250))
-* **New Resource:** `aws_quicksight_template` ([#30453](https://github.com/hashicorp/terraform-provider-aws/issues/30453))
-* **New Resource:** `aws_quicksight_template_alias`
([#31310](https://github.com/hashicorp/terraform-provider-aws/issues/31310)) -* **New Resource:** `aws_quicksight_vpc_connection` ([#31309](https://github.com/hashicorp/terraform-provider-aws/issues/31309)) - -ENHANCEMENTS: - -* resource/aws_quicksight_data_set: Add support for configuring refresh properties ([#30744](https://github.com/hashicorp/terraform-provider-aws/issues/30744)) -* data-source/aws_acmpca_certificate_authority: Add `key_storage_security_standard` attribute ([#31280](https://github.com/hashicorp/terraform-provider-aws/issues/31280)) -* data-source/aws_elastic_beanstalk_hosted_zone: Add hosted zone ID for `ap-southeast-3` AWS Region ([#31248](https://github.com/hashicorp/terraform-provider-aws/issues/31248)) -* data-source/aws_s3_bucket: Set `hosted_zone_id` for `cn-north-1` AWS China Region ([#31247](https://github.com/hashicorp/terraform-provider-aws/issues/31247)) -* resource/aws_acmpca_certificate_authority: Add `key_storage_security_standard` argument ([#31280](https://github.com/hashicorp/terraform-provider-aws/issues/31280)) -* resource/aws_cloudwatch_metric_stream: Add `metric_names` to `include_filter` and `exclude_filter` configuration blocks ([#31288](https://github.com/hashicorp/terraform-provider-aws/issues/31288)) -* resource/aws_dms_endpoint: Add ability to use the `db2-zos` IBM DB2 for z/OS engine ([#31291](https://github.com/hashicorp/terraform-provider-aws/issues/31291)) -* resource/aws_fsx_ontap_file_system: Allow in-place update of `route_table_ids` ([#31251](https://github.com/hashicorp/terraform-provider-aws/issues/31251)) -* resource/aws_fsx_ontap_file_system: Support setting `throughput_capacity` to `4096` ([#31251](https://github.com/hashicorp/terraform-provider-aws/issues/31251)) -* resource/aws_rds_cluster: Add ability to specify Aurora IO Optimized `storage_type` ([#31336](https://github.com/hashicorp/terraform-provider-aws/issues/31336)) -* resource/aws_s3_bucket: Set `hosted_zone_id` for `cn-north-1` AWS China Region 
([#31247](https://github.com/hashicorp/terraform-provider-aws/issues/31247)) - -BUG FIXES: - -* resource/aws_appintegrations_data_integration: Correctly read `tags` into state ([#31241](https://github.com/hashicorp/terraform-provider-aws/issues/31241)) -* resource/aws_config_remediation_configuration: Change `parameter` attribute to `TypeList` for better diff calculation ([#31315](https://github.com/hashicorp/terraform-provider-aws/issues/31315)) -* resource/aws_iam_openid_connect_provider: Change `client_id_list` from `TypeList` to `TypeSet` as order is not significant ([#31253](https://github.com/hashicorp/terraform-provider-aws/issues/31253)) -* resource/aws_servicecatalog_provisioned_product: Fix to properly send `stack_set_provisioned_preferences.0.accounts` on create and update ([#31293](https://github.com/hashicorp/terraform-provider-aws/issues/31293)) -* resource/aws_servicecatalog_provisioned_product: Fix to properly set `stack_set_provisioned_preferences` integer types `failure_tolerance_count`, `failure_tolerance_percentage`, `max_concurrency_count`, `max_concurrency_percentage` ([#31289](https://github.com/hashicorp/terraform-provider-aws/issues/31289)) -* resource/aws_ssm_activation: Fix various `ValidationException` errors on resource Create ([#31340](https://github.com/hashicorp/terraform-provider-aws/issues/31340)) - -## 4.66.1 (May 5, 2023) - -BUG FIXES: - -* resource/aws_appautoscaling_target: Fix `InvalidParameter: 1 validation error(s) found. 
-minimum field size of 1, ListTagsForResourceInput.ResourceARN.` related to [Application Auto Scaling resource tagging](https://aws.amazon.com/about-aws/whats-new/2023/03/application-auto-scaling-resource-tagging/) introduced in [v4.66.0](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#4660-may--4-2023) ([#31214](https://github.com/hashicorp/terraform-provider-aws/issues/31214)) - -## 4.66.0 (May 4, 2023) - -NOTES: - -* resource/aws_instance: The `cpu_core_count` argument is deprecated in favor of the `cpu_options` block. The `cpu_options` block can set `core_count` ([#31035](https://github.com/hashicorp/terraform-provider-aws/issues/31035)) -* resource/aws_instance: The `cpu_threads_per_core` argument is deprecated in favor of the `cpu_options` block. The `cpu_options` block can set `threads_per_core` ([#31035](https://github.com/hashicorp/terraform-provider-aws/issues/31035)) - -FEATURES: - -* **New Data Source:** `aws_appintegrations_event_integration` ([#24965](https://github.com/hashicorp/terraform-provider-aws/issues/24965)) -* **New Data Source:** `aws_dms_replication_instance` ([#15406](https://github.com/hashicorp/terraform-provider-aws/issues/15406)) -* **New Data Source:** `aws_vpclattice_auth_policy` ([#30898](https://github.com/hashicorp/terraform-provider-aws/issues/30898)) -* **New Data Source:** `aws_vpclattice_service_network` ([#30904](https://github.com/hashicorp/terraform-provider-aws/issues/30904)) -* **New Resource:** `aws_account_primary_contact` ([#26123](https://github.com/hashicorp/terraform-provider-aws/issues/26123)) -* **New Resource:** `aws_appintegrations_data_integration` ([#24941](https://github.com/hashicorp/terraform-provider-aws/issues/24941)) -* **New Resource:** `aws_chimesdkvoice_voice_profile_domain` ([#30977](https://github.com/hashicorp/terraform-provider-aws/issues/30977)) -* **New Resource:** `aws_directory_service_trust` 
([#31037](https://github.com/hashicorp/terraform-provider-aws/issues/31037)) -* **New Resource:** `aws_vpclattice_access_log_subscription` ([#30896](https://github.com/hashicorp/terraform-provider-aws/issues/30896)) -* **New Resource:** `aws_vpclattice_auth_policy` ([#30891](https://github.com/hashicorp/terraform-provider-aws/issues/30891)) -* **New Resource:** `aws_vpclattice_resource_policy` ([#30900](https://github.com/hashicorp/terraform-provider-aws/issues/30900)) -* **New Resource:** `aws_vpclattice_target_group_attachment` ([#31039](https://github.com/hashicorp/terraform-provider-aws/issues/31039)) - -ENHANCEMENTS: - -* data-source/aws_autoscaling_group: Add `max_instance_lifetime` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_autoscaling_group: Add `mixed_instances_policy` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_autoscaling_group: Add `predicted_capacity` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_autoscaling_group: Add `suspended_processes` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_autoscaling_group: Add `tag` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_autoscaling_group: Add `warm_pool_size` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_autoscaling_group: Add `warm_pool` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* data-source/aws_launch_template: Add `amd_sev_snp` attribute ([#31035](https://github.com/hashicorp/terraform-provider-aws/issues/31035)) -* resource/aws_appautoscaling_policy: Add `metrics` to the `target_tracking_scaling_policy_configuration.customized_metric_specification` configuration block in support of [metric 
math](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking-metric-math.html) ([#30172](https://github.com/hashicorp/terraform-provider-aws/issues/30172)) -* resource/aws_appautoscaling_target: Add `arn` attribute ([#30172](https://github.com/hashicorp/terraform-provider-aws/issues/30172)) -* resource/aws_appautoscaling_target: Add `tags` argument and `tags_all` attribute to support resource tagging ([#30172](https://github.com/hashicorp/terraform-provider-aws/issues/30172)) -* resource/aws_autoscaling_group: Add `predicted_capacity` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* resource/aws_autoscaling_group: Add `warm_pool_size` attribute ([#31067](https://github.com/hashicorp/terraform-provider-aws/issues/31067)) -* resource/aws_directory_service_conditional_forwarder: Add plan time validation for `remote_domain_name` ([#31037](https://github.com/hashicorp/terraform-provider-aws/issues/31037)) -* resource/aws_directory_service_directory: Correct plan time validation for `remote_domain_name` ([#31037](https://github.com/hashicorp/terraform-provider-aws/issues/31037)) -* resource/aws_elasticache_user: Add support for defining custom timeouts ([#31076](https://github.com/hashicorp/terraform-provider-aws/issues/31076)) -* resource/aws_fsx_lustre_file_system: Add `root_squash_configuration` argument ([#31073](https://github.com/hashicorp/terraform-provider-aws/issues/31073)) -* resource/aws_glue_catalog_database: Add tagging support ([#31071](https://github.com/hashicorp/terraform-provider-aws/issues/31071)) -* resource/aws_grafana_workspace: Make `grafana_version` optional so that its value can be specified in configuration ([#31083](https://github.com/hashicorp/terraform-provider-aws/issues/31083)) -* resource/aws_instance: Add `amd_sev_snp` argument ([#31035](https://github.com/hashicorp/terraform-provider-aws/issues/31035)) -* resource/aws_instance: Add `cpu_options` 
argument ([#31035](https://github.com/hashicorp/terraform-provider-aws/issues/31035)) -* resource/aws_lambda_function: Add support for `java17` `runtime` value ([#31027](https://github.com/hashicorp/terraform-provider-aws/issues/31027)) -* resource/aws_lambda_layer_version: Add support for `java17` `compatible_runtimes` value ([#31028](https://github.com/hashicorp/terraform-provider-aws/issues/31028)) -* resource/aws_launch_template: Add `amd_sev_snp` argument ([#31035](https://github.com/hashicorp/terraform-provider-aws/issues/31035)) -* resource/aws_medialive_channel: Add H265 support ([#30908](https://github.com/hashicorp/terraform-provider-aws/issues/30908)) -* resource/aws_rds_cluster_role_association: Add configurable Create and Delete timeouts ([#31015](https://github.com/hashicorp/terraform-provider-aws/issues/31015)) -* resource/aws_redshift_scheduled_action: Add plan time validation for `name` argument ([#31020](https://github.com/hashicorp/terraform-provider-aws/issues/31020)) -* resource/aws_redshiftserverless_workgroup: Add support for defining custom timeouts ([#31054](https://github.com/hashicorp/terraform-provider-aws/issues/31054)) -* resource/aws_sagemaker_domain: Add `domain_settings.r_studio_server_pro_domain_settings`, `default_user_settings.canvas_app_settings.model_register_settings`, and `default_user_settings.r_studio_server_pro_app_settings` arguments ([#31031](https://github.com/hashicorp/terraform-provider-aws/issues/31031)) -* resource/aws_sagemaker_endpoint_configuration: Add `async_inference_config.output_config.notification_config.include_inference_response_in` and `async_inference_config.output_config.s3_failure_path` arguments ([#31070](https://github.com/hashicorp/terraform-provider-aws/issues/31070)) -* resource/aws_sagemaker_user_profile: Add `user_settings.canvas_app_settings.model_register_settings` and `user_settings.r_studio_server_pro_app_settings` arguments 
([#31072](https://github.com/hashicorp/terraform-provider-aws/issues/31072)) -* resource/aws_servicecatalog_provisioning_artifact: Add `provisioning_artifact_id` attribute ([#31086](https://github.com/hashicorp/terraform-provider-aws/issues/31086)) -* resource/aws_sfn_state_machine: Add configurable timeouts ([#31097](https://github.com/hashicorp/terraform-provider-aws/issues/31097)) -* resource/aws_spot_fleet_request: Add `context` argument ([#30918](https://github.com/hashicorp/terraform-provider-aws/issues/30918)) -* resource/aws_vpn_connection: Add `tunnel1_enable_tunnel_lifecycle_control` and `tunnel2_enable_tunnel_lifecycle_control` arguments ([#31064](https://github.com/hashicorp/terraform-provider-aws/issues/31064)) - -BUG FIXES: - -* data-source/aws_nat_gateway: Guarantee that all attributes are set when the NAT Gateway is associated with a single address ([#31118](https://github.com/hashicorp/terraform-provider-aws/issues/31118)) -* data-source/aws_networkfirewall_firewall_policy: Add `firewall_policy.stateful_rule_group_reference.override` attribute, fixing `setting firewall_policy: Invalid address to set` error ([#31089](https://github.com/hashicorp/terraform-provider-aws/issues/31089)) -* resource/aws_connect_routing_profile: Remove the limit on the maximum number of queues that can be associated with a routing profile. Batch processing is now done when there are more than 10 queues associated or disassociated at a time. 
([#30895](https://github.com/hashicorp/terraform-provider-aws/issues/30895)) -* resource/aws_db_instance: Consider `delete-precheck` a valid pending state for resource deletion ([#31047](https://github.com/hashicorp/terraform-provider-aws/issues/31047)) -* resource/aws_inspector2_enabler: Correctly supports `LAMBDA` resource scanning ([#31038](https://github.com/hashicorp/terraform-provider-aws/issues/31038)) -* resource/aws_inspector2_enabler: Correctly supports multiple accounts ([#31038](https://github.com/hashicorp/terraform-provider-aws/issues/31038)) -* resource/aws_inspector2_enabler: No longer calls `Disable` API for status checking ([#31038](https://github.com/hashicorp/terraform-provider-aws/issues/31038)) -* resource/aws_nat_gateway: Guarantee that all attributes are set when the NAT Gateway is associated with a single address ([#31118](https://github.com/hashicorp/terraform-provider-aws/issues/31118)) -* resource/aws_rds_cluster_instance: Consider `delete-precheck` a valid pending state for resource deletion ([#31047](https://github.com/hashicorp/terraform-provider-aws/issues/31047)) -* resource/aws_servicecatalog_provisioned_product: Changes in the `provisioning_artifact_name` attribute are now reflected correctly in AWS ([#26371](https://github.com/hashicorp/terraform-provider-aws/issues/26371)) -* resource/aws_servicecatalog_provisioned_product: Fix `product_name` update handling ([#31094](https://github.com/hashicorp/terraform-provider-aws/issues/31094)) - -## 4.65.0 (April 27, 2023) - -NOTES: - -* data-source/aws_db_instance: With the retirement of EC2-Classic the `db_security_groups` attribute has been deprecated and will be removed in a future version ([#30919](https://github.com/hashicorp/terraform-provider-aws/issues/30919)) -* data-source/aws_elasticache_cluster: With the retirement of EC2-Classic the `security_group_names` attribute has been deprecated and will be removed in a future version 
([#30919](https://github.com/hashicorp/terraform-provider-aws/issues/30919)) -* data-source/aws_launch_configuration: With the retirement of EC2-Classic the `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes have been deprecated and will be removed in a future version ([#30919](https://github.com/hashicorp/terraform-provider-aws/issues/30919)) -* data-source/aws_redshift_cluster: With the retirement of EC2-Classic the `cluster_security_groups` attribute has been deprecated and will be removed in a future version ([#30919](https://github.com/hashicorp/terraform-provider-aws/issues/30919)) -* resource/aws_config_organization_custom_policy_rule: Because we cannot easily test this functionality, it is best effort and we ask for community help in testing ([#21373](https://github.com/hashicorp/terraform-provider-aws/issues/21373)) - -FEATURES: - -* **New Data Source:** `aws_api_gateway_authorizer` ([#28148](https://github.com/hashicorp/terraform-provider-aws/issues/28148)) -* **New Data Source:** `aws_api_gateway_authorizers` ([#28148](https://github.com/hashicorp/terraform-provider-aws/issues/28148)) -* **New Data Source:** `aws_dms_replication_subnet_group` ([#30832](https://github.com/hashicorp/terraform-provider-aws/issues/30832)) -* **New Data Source:** `aws_dms_replication_task` ([#30967](https://github.com/hashicorp/terraform-provider-aws/issues/30967)) -* **New Data Source:** `aws_ssmcontacts_contact` ([#30667](https://github.com/hashicorp/terraform-provider-aws/issues/30667)) -* **New Data Source:** `aws_ssmcontacts_contact_channel` ([#30667](https://github.com/hashicorp/terraform-provider-aws/issues/30667)) -* **New Data Source:** `aws_ssmcontacts_plan` ([#30667](https://github.com/hashicorp/terraform-provider-aws/issues/30667)) -* **New Data Source:** `aws_ssmincidents_response_plan` ([#30665](https://github.com/hashicorp/terraform-provider-aws/issues/30665)) -* **New Resource:** `aws_config_organization_custom_policy_rule` 
([#28201](https://github.com/hashicorp/terraform-provider-aws/issues/28201)) -* **New Resource:** `aws_quicksight_folder_membership` ([#30871](https://github.com/hashicorp/terraform-provider-aws/issues/30871)) -* **New Resource:** `aws_quicksight_refresh_schedule` ([#30788](https://github.com/hashicorp/terraform-provider-aws/issues/30788)) -* **New Resource:** `aws_ssmcontacts_contact` ([#30667](https://github.com/hashicorp/terraform-provider-aws/issues/30667)) -* **New Resource:** `aws_ssmcontacts_contact_channel` ([#30667](https://github.com/hashicorp/terraform-provider-aws/issues/30667)) -* **New Resource:** `aws_ssmcontacts_plan` ([#30667](https://github.com/hashicorp/terraform-provider-aws/issues/30667)) -* **New Resource:** `aws_ssmincidents_response_plan` ([#30665](https://github.com/hashicorp/terraform-provider-aws/issues/30665)) -* **New Resource:** `aws_synthetics_group` ([#30678](https://github.com/hashicorp/terraform-provider-aws/issues/30678)) -* **New Resource:** `aws_synthetics_group_association` ([#30678](https://github.com/hashicorp/terraform-provider-aws/issues/30678)) - -ENHANCEMENTS: - -* data-source/aws_ami_ids: Add `include_deprecated` argument ([#30294](https://github.com/hashicorp/terraform-provider-aws/issues/30294)) -* data-source/aws_backup_report_plan: Add `accounts`, `organization_units` and `regions` attributes to the `report_setting` block ([#28309](https://github.com/hashicorp/terraform-provider-aws/issues/28309)) -* data-source/aws_imagebuilder_image: Add `containers` attribute to the `output_resources` block ([#30899](https://github.com/hashicorp/terraform-provider-aws/issues/30899)) -* resource/aws_appstream_stack: Add `streaming_experience_settings` attribute ([#28512](https://github.com/hashicorp/terraform-provider-aws/issues/28512)) -* resource/aws_backup_report_plan: Add `accounts`, `organization_units` and `regions` attributes to the `report_setting` block 
([#28309](https://github.com/hashicorp/terraform-provider-aws/issues/28309)) -* resource/aws_chime_voice_connector_streaming: Add `media_insights_configuration` argument ([#30713](https://github.com/hashicorp/terraform-provider-aws/issues/30713)) -* resource/aws_db_subnet_group: Add `vpc_id` attribute ([#30775](https://github.com/hashicorp/terraform-provider-aws/issues/30775)) -* resource/aws_fis_experiment_template: Add support for `Cluster` Network Actions to `actions.*.target` ([#27337](https://github.com/hashicorp/terraform-provider-aws/issues/27337)) -* resource/aws_gamelift_game_session_queue: Add `custom_event_data` argument ([#26206](https://github.com/hashicorp/terraform-provider-aws/issues/26206)) -* resource/aws_imagebuilder_image: Add `containers` attribute to the `output_resources` block ([#30899](https://github.com/hashicorp/terraform-provider-aws/issues/30899)) -* resource/aws_networkfirewall_rule_group: Add limit for `reference_sets` ([#30759](https://github.com/hashicorp/terraform-provider-aws/issues/30759)) -* resource/aws_networkmanager_core_network: Wait for the network policy to be in the `READY_TO_EXECUTE` state before executing any changes ([#30879](https://github.com/hashicorp/terraform-provider-aws/issues/30879)) -* resource/aws_s3outposts_endpoint: Add `access_type` and `customer_owned_ipv4_pool` arguments ([#23839](https://github.com/hashicorp/terraform-provider-aws/issues/23839)) -* resource/aws_wafv2_web_acl: Add `token_domains` argument ([#30340](https://github.com/hashicorp/terraform-provider-aws/issues/30340)) -* various IAM resource types: Provide more detailed error messages for invalid policy document JSON ([#27502](https://github.com/hashicorp/terraform-provider-aws/issues/27502)) - -BUG FIXES: - -* resource/aws_api_gateway_api_key: Fix `value` minimum length verification when specified 
([#30894](https://github.com/hashicorp/terraform-provider-aws/issues/30894)) -* resource/aws_apprunner_service: Allow additional `instance_configuration.cpu` and `instance_configuration.memory` values ([#30889](https://github.com/hashicorp/terraform-provider-aws/issues/30889)) -* resource/aws_dms_replication_task: Fix perpetual diff in `replication_task_settings` ([#30885](https://github.com/hashicorp/terraform-provider-aws/issues/30885)) -* resource/aws_ds_shared_directory: Properly handle paged response objects on read ([#30914](https://github.com/hashicorp/terraform-provider-aws/issues/30914)) -* resource/aws_ecs_service: Fix removal of `service_registries` configuration block ([#30852](https://github.com/hashicorp/terraform-provider-aws/issues/30852)) -* resource/aws_redshiftdata_statement: Fix `ValidationException` errors reading expired statements ([#26343](https://github.com/hashicorp/terraform-provider-aws/issues/26343)) -* resource/aws_vpc_endpoint_route_table_association: Retry resource Create for EC2 eventual consistency ([#30994](https://github.com/hashicorp/terraform-provider-aws/issues/30994)) -* resource/aws_vpc_endpoint_service_allowed_principal: Fix `too many results` error ([#30974](https://github.com/hashicorp/terraform-provider-aws/issues/30974)) -* resource/aws_vpc_peering_connection: Fix crash in `vpcPeeringConnectionOptionsEqual` ([#30966](https://github.com/hashicorp/terraform-provider-aws/issues/30966)) - -## 4.64.0 (April 20, 2023) - -BREAKING CHANGES: - -* data-source/aws_iam_policy_document: `source_json` and `override_json` have been removed -- use `source_policy_documents` and `override_policy_documents`, respectively, instead ([#30829](https://github.com/hashicorp/terraform-provider-aws/issues/30829)) -* resource/aws_autoscaling_attachment: `alb_target_group_arn` has been removed -- use `lb_target_group_arn` instead ([#30828](https://github.com/hashicorp/terraform-provider-aws/issues/30828)) -* resource/aws_route: 
`instance_id` can no longer be set in configurations. Use `network_interface_id` instead, for example, setting `network_interface_id` to `aws_instance.test.primary_network_interface_id`. ([#30804](https://github.com/hashicorp/terraform-provider-aws/issues/30804)) -* resource/aws_route_table: `route.*.instance_id` can no longer be set in configurations. Use `route.*.network_interface_id` instead, for example, setting `network_interface_id` to `aws_instance.test.primary_network_interface_id`. ([#30804](https://github.com/hashicorp/terraform-provider-aws/issues/30804)) - -NOTES: - -* resource/aws_autoscaling_attachment: Update configurations to use `lb_target_group_arn` instead of `alb_target_group_arn` which has been removed ([#30828](https://github.com/hashicorp/terraform-provider-aws/issues/30828)) -* resource/aws_route: Since `instance_id` can no longer be set in configurations, use `network_interface_id` instead. For example, set `network_interface_id` to `aws_instance.test.primary_network_interface_id`. ([#30804](https://github.com/hashicorp/terraform-provider-aws/issues/30804)) -* resource/aws_route_table: Since `route.*.instance_id` can no longer be set in configurations, use `route.*.network_interface_id` instead. For example, set `network_interface_id` to `aws_instance.test.primary_network_interface_id`. 
([#30804](https://github.com/hashicorp/terraform-provider-aws/issues/30804)) - -FEATURES: - -* **New Data Source:** `aws_dms_endpoint` ([#30717](https://github.com/hashicorp/terraform-provider-aws/issues/30717)) -* **New Data Source:** `aws_fsx_windows_file_system` ([#28622](https://github.com/hashicorp/terraform-provider-aws/issues/28622)) -* **New Data Source:** `aws_iam_access_keys` ([#29278](https://github.com/hashicorp/terraform-provider-aws/issues/29278)) -* **New Data Source:** `aws_networkfirewall_resource_policy` ([#25474](https://github.com/hashicorp/terraform-provider-aws/issues/25474)) -* **New Data Source:** `aws_prometheus_workspaces` ([#28574](https://github.com/hashicorp/terraform-provider-aws/issues/28574)) -* **New Data Source:** `aws_redshiftserverless_workgroup` ([#29208](https://github.com/hashicorp/terraform-provider-aws/issues/29208)) -* **New Data Source:** `aws_route53_resolver_query_log_config` ([#29111](https://github.com/hashicorp/terraform-provider-aws/issues/29111)) -* **New Data Source:** `aws_sesv2_configuration_set` ([#30108](https://github.com/hashicorp/terraform-provider-aws/issues/30108)) -* **New Data Source:** `aws_vpclattice_listener` ([#30843](https://github.com/hashicorp/terraform-provider-aws/issues/30843)) -* **New Resource:** `aws_cloudwatch_event_endpoint` ([#25846](https://github.com/hashicorp/terraform-provider-aws/issues/25846)) -* **New Resource:** `aws_vpclattice_listener` ([#30711](https://github.com/hashicorp/terraform-provider-aws/issues/30711)) -* **New Resource:** `aws_vpclattice_listener_rule` ([#30784](https://github.com/hashicorp/terraform-provider-aws/issues/30784)) - -ENHANCEMENTS: - -* data-source/aws_cloudfront_response_headers_policy: Add `remove_headers_config` attribute ([#28940](https://github.com/hashicorp/terraform-provider-aws/issues/28940)) -* data-source/aws_ecs_task_definition: Add `execution_role_arn` attribute ([#28662](https://github.com/hashicorp/terraform-provider-aws/issues/28662)) -* 
data-source/aws_eks_node_group: Add `launch_template` attribute ([#30780](https://github.com/hashicorp/terraform-provider-aws/issues/30780)) -* data-source/aws_iam_role: Add `role_last_used` attribute ([#30750](https://github.com/hashicorp/terraform-provider-aws/issues/30750)) -* data-source/aws_kms_key: Add `cloud_hsm_cluster_id`, `custom_key_store_id`, `key_spec`, `pending_deletion_window_in_days`, and `xks_key_configuration` attributes ([#29250](https://github.com/hashicorp/terraform-provider-aws/issues/29250)) -* data-source/aws_lakeformation_data_lake_settings: Add `allow_external_data_filtering`, `external_data_filtering_allow_list` and `authorized_session_tag_value_list` attributes ([#30207](https://github.com/hashicorp/terraform-provider-aws/issues/30207)) -* data-source/aws_outposts_outpost: Add `lifecycle_status`, `site_arn`, `supported_hardware_type` and `tags` attributes ([#30754](https://github.com/hashicorp/terraform-provider-aws/issues/30754)) -* data-source/aws_servicequotas_service_quota: Add `usage_metric` attribute ([#29499](https://github.com/hashicorp/terraform-provider-aws/issues/29499)) -* data-source/aws_subnet: Add `enable_lni_at_device_index` attribute ([#30798](https://github.com/hashicorp/terraform-provider-aws/issues/30798)) -* resource/aws_appsync_datasource: Add `opensearchservice_config` argument ([#29578](https://github.com/hashicorp/terraform-provider-aws/issues/29578)) -* resource/aws_cloudfront_response_headers_policy: Add `remove_headers_config` argument ([#28940](https://github.com/hashicorp/terraform-provider-aws/issues/28940)) -* resource/aws_cloudwatch_event_target: Add `ecs_target.ordered_placement_strategy` argument ([#28384](https://github.com/hashicorp/terraform-provider-aws/issues/28384)) -* resource/aws_cloudwatch_metric_stream: Add `include_linked_accounts_metrics` argument ([#29281](https://github.com/hashicorp/terraform-provider-aws/issues/29281)) -* resource/aws_dms_replication_instance: Increase default timeout 
for `create` ([#29905](https://github.com/hashicorp/terraform-provider-aws/issues/29905)) -* resource/aws_eks_node_group: Add plan time validation to `node_group_name` and `node_group_name_prefix` arguments ([#29975](https://github.com/hashicorp/terraform-provider-aws/issues/29975)) -* resource/aws_elastic_beanstalk_application: Add plan time validation to `appversion_lifecycle.service_role` and `name` arguments ([#17727](https://github.com/hashicorp/terraform-provider-aws/issues/17727)) -* resource/aws_emr_cluster: Add `placement_group_config` argument ([#30121](https://github.com/hashicorp/terraform-provider-aws/issues/30121)) -* resource/aws_fis_experiment_template: Add support for `Subnets` Network Actions to `actions.*.target` ([#30211](https://github.com/hashicorp/terraform-provider-aws/issues/30211)) -* resource/aws_iam_role: Add `role_last_used` attribute ([#30750](https://github.com/hashicorp/terraform-provider-aws/issues/30750)) -* resource/aws_iot_topic_rule: Add `error_action.firehose.batch_mode`, `error_action.iot_analytics.batch_mode`, `error_action.iot_events.batch_mode`, `firehose.batch_mode`, `iot_analytics.batch_mode` and `iot_events.batch_mode` arguments ([#28568](https://github.com/hashicorp/terraform-provider-aws/issues/28568)) -* resource/aws_kinesis_firehose_delivery_stream: Add `opensearch_configuration` block ([#29112](https://github.com/hashicorp/terraform-provider-aws/issues/29112)) -* resource/aws_kinesis_firehose_delivery_stream: Add `opensearch` as a valid `destination` value ([#29112](https://github.com/hashicorp/terraform-provider-aws/issues/29112)) -* resource/aws_lakeformation_data_lake_settings: Add `allow_external_data_filtering`, `external_data_filtering_allow_list` and `authorized_session_tag_value_list` arguments ([#30207](https://github.com/hashicorp/terraform-provider-aws/issues/30207)) -* resource/aws_lambda_event_source_mapping: Add `document_db_event_source_config` configuration block 
([#28586](https://github.com/hashicorp/terraform-provider-aws/issues/28586)) -* resource/aws_lambda_function: Add support for `python3.10` `runtime` value ([#30781](https://github.com/hashicorp/terraform-provider-aws/issues/30781)) -* resource/aws_lambda_layer_version: Add support for `python3.10` `compatible_runtimes` value ([#30781](https://github.com/hashicorp/terraform-provider-aws/issues/30781)) -* resource/aws_main_route_table_association: Add configurable timeouts ([#30755](https://github.com/hashicorp/terraform-provider-aws/issues/30755)) -* resource/aws_route: Allow `gateway_id` value of `local` when updating a Route ([#24507](https://github.com/hashicorp/terraform-provider-aws/issues/24507)) -* resource/aws_route_table_association: Add configurable timeouts ([#30755](https://github.com/hashicorp/terraform-provider-aws/issues/30755)) -* resource/aws_s3_bucket: Correct S3 Object Lock error handling for third-party S3-compatible API implementations ([#26317](https://github.com/hashicorp/terraform-provider-aws/issues/26317)) -* resource/aws_s3_bucket_object_lock_configuration: Correct error handling for third-party S3-compatible API implementations ([#26317](https://github.com/hashicorp/terraform-provider-aws/issues/26317)) -* resource/aws_securityhub_account: Add `control_finding_generator`, `auto_enable_controls` and `arn` attributes ([#30692](https://github.com/hashicorp/terraform-provider-aws/issues/30692)) -* resource/aws_servicequotas_service_quota: Add `usage_metric` attribute ([#29499](https://github.com/hashicorp/terraform-provider-aws/issues/29499)) -* resource/aws_ssoadmin_account_assignment: Extend timeout delay and min timeout ([#25849](https://github.com/hashicorp/terraform-provider-aws/issues/25849)) -* resource/aws_ssoadmin_permission_set: Extend timeout delay and min timeout ([#25849](https://github.com/hashicorp/terraform-provider-aws/issues/25849)) -* resource/aws_subnet: Add `enable_lni_at_device_index` attribute 
([#30798](https://github.com/hashicorp/terraform-provider-aws/issues/30798)) -* resource/aws_vpc_endpoint_service_allowed_principal: Change `id` to use `ServicePermissionId` ([#27640](https://github.com/hashicorp/terraform-provider-aws/issues/27640)) -* resource/aws_wafv2_rule_group: Add `rule.action.challenge` argument ([#29690](https://github.com/hashicorp/terraform-provider-aws/issues/29690)) -* resource/aws_wafv2_rule_group: Add `rule.captcha_config` argument ([#29608](https://github.com/hashicorp/terraform-provider-aws/issues/29608)) -* resource/aws_wafv2_web_acl: Add `captcha_config` and `rule.captcha_config` arguments ([#29608](https://github.com/hashicorp/terraform-provider-aws/issues/29608)) - -BUG FIXES: - -* data-source/aws_lakeformation_permissions: Change `lf_tag_policy.expression` from `TypeList` to `TypeSet` as order is not significant ([#26643](https://github.com/hashicorp/terraform-provider-aws/issues/26643)) -* data-source/aws_lakeformation_permissions: Remove limit on number of `lf_tag_policy.expression` blocks ([#26643](https://github.com/hashicorp/terraform-provider-aws/issues/26643)) -* resource/aws_cloudwatch_event_rule: Add retry to read step, resolving `couldn't find resource` error ([#25846](https://github.com/hashicorp/terraform-provider-aws/issues/25846)) -* resource/aws_default_vpc: Fix adoption of default VPC with generated IPv6 ([#29083](https://github.com/hashicorp/terraform-provider-aws/issues/29083)) -* resource/aws_dx_gateway: Remove plan time validation from `name` argument ([#30739](https://github.com/hashicorp/terraform-provider-aws/issues/30739)) -* resource/aws_ecs_service: Fix error importing service with an IAM role with a path ([#30170](https://github.com/hashicorp/terraform-provider-aws/issues/30170)) -* resource/aws_fsx_windows_file_system: Increase `throughput_capacity` first to avoid `BadRequest` errors ([#28622](https://github.com/hashicorp/terraform-provider-aws/issues/28622)) -* resource/aws_lakeformation_permissions: 
Change `lf_tag_policy.expression` from `TypeList` to `TypeSet` as order is not significant ([#26643](https://github.com/hashicorp/terraform-provider-aws/issues/26643)) -* resource/aws_lakeformation_permissions: Change `lf_tag`, `lf_tag.values`, `lf_tag_policy`, `lf_tag_policy.expression.key`, `lf_tag_policy.expression.values` and `lf_tag_policy.resource_type` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) ([#26643](https://github.com/hashicorp/terraform-provider-aws/issues/26643)) -* resource/aws_lakeformation_permissions: Remove limit on number of `lf_tag_policy.expression` blocks ([#26643](https://github.com/hashicorp/terraform-provider-aws/issues/26643)) -* resource/aws_lambda_event_source_mapping: Fix IAM eventual consistency errors on resource Update ([#28586](https://github.com/hashicorp/terraform-provider-aws/issues/28586)) -* resource/aws_medialive_channel: Fix to properly expand `destinations.media_package_settings` field ([#30660](https://github.com/hashicorp/terraform-provider-aws/issues/30660)) -* resource/aws_networkfirewall_firewall_policy: Fix unexpected `encryption_configuration.type` updates from `Customer_KMS` to `AWS_KMS` ([#30821](https://github.com/hashicorp/terraform-provider-aws/issues/30821)) -* resource/aws_networkfirewall_rule_group: Fix unexpected `encryption_configuration.type` updates from `Customer_KMS` to `AWS_KMS` ([#30821](https://github.com/hashicorp/terraform-provider-aws/issues/30821)) -* resource/aws_quicksight_data_set: Correct custom_sql documentation ([#30742](https://github.com/hashicorp/terraform-provider-aws/issues/30742)) -* resource/aws_quicksight_data_set: Correctly persist `create_columns_operation.expression` field ([#30708](https://github.com/hashicorp/terraform-provider-aws/issues/30708)) -* resource/aws_quicksight_data_set: Fix to properly expand `project_operation.projected_columns` field 
([#30699](https://github.com/hashicorp/terraform-provider-aws/issues/30699)) -* resource/aws_quicksight_data_set: Fix to properly flatten `cast_column_type_operation.format` field ([#30701](https://github.com/hashicorp/terraform-provider-aws/issues/30701)) -* resource/aws_sagemaker_app: Fix crash when app is not found ([#30786](https://github.com/hashicorp/terraform-provider-aws/issues/30786)) -* resource/aws_sns_topic: Fix IAM eventual consistency error creating SNS topics with ABAC-controlled permissions ([#30432](https://github.com/hashicorp/terraform-provider-aws/issues/30432)) -* resource/aws_vpc: Don't overwrite any configured value for `ipv6_ipam_pool_id` with _IPAM Managed_ ([#30795](https://github.com/hashicorp/terraform-provider-aws/issues/30795)) - -## 4.63.0 (April 14, 2023) - -FEATURES: - -* **New Data Source:** `aws_dms_certificate` ([#30498](https://github.com/hashicorp/terraform-provider-aws/issues/30498)) -* **New Data Source:** `aws_quicksight_group` ([#12311](https://github.com/hashicorp/terraform-provider-aws/issues/12311)) -* **New Data Source:** `aws_quicksight_user` ([#12310](https://github.com/hashicorp/terraform-provider-aws/issues/12310)) -* **New Resource:** `aws_chimesdkmediapipelines_media_insights_pipeline_configuration` ([#30603](https://github.com/hashicorp/terraform-provider-aws/issues/30603)) -* **New Resource:** `aws_pipes_pipe` ([#30538](https://github.com/hashicorp/terraform-provider-aws/issues/30538)) -* **New Resource:** `aws_quicksight_iam_policy_assignment` ([#30653](https://github.com/hashicorp/terraform-provider-aws/issues/30653)) -* **New Resource:** `aws_quicksight_ingestion` ([#30487](https://github.com/hashicorp/terraform-provider-aws/issues/30487)) -* **New Resource:** `aws_quicksight_namespace` ([#30681](https://github.com/hashicorp/terraform-provider-aws/issues/30681)) -* **New Resource:** `aws_sagemaker_data_quality_job_definition` ([#30301](https://github.com/hashicorp/terraform-provider-aws/issues/30301)) -* 
**New Resource:** `aws_sagemaker_monitoring_schedule` ([#30684](https://github.com/hashicorp/terraform-provider-aws/issues/30684)) -* **New Resource:** `aws_vpclattice_service_network_service_association` ([#30410](https://github.com/hashicorp/terraform-provider-aws/issues/30410)) -* **New Resource:** `aws_vpclattice_service_network_vpc_association` ([#30411](https://github.com/hashicorp/terraform-provider-aws/issues/30411)) -* **New Resource:** `aws_vpclattice_target_group` ([#30455](https://github.com/hashicorp/terraform-provider-aws/issues/30455)) - -ENHANCEMENTS: - -* data-source/aws_dx_connection: Add `partner_name` attribute ([#30385](https://github.com/hashicorp/terraform-provider-aws/issues/30385)) -* data-source/aws_lambda_function_url: Add `invoke_mode` attribute ([#30547](https://github.com/hashicorp/terraform-provider-aws/issues/30547)) -* data-source/aws_nat_gateway: Add `association_id` attribute ([#30546](https://github.com/hashicorp/terraform-provider-aws/issues/30546)) -* data-source/aws_sagemaker_prebuilt_ecr_image: Add sagemaker-model-monitor-analyzer images ([#30301](https://github.com/hashicorp/terraform-provider-aws/issues/30301)) -* resource/aws_acmpca_certificate: Add `api_passthrough` argument ([#28142](https://github.com/hashicorp/terraform-provider-aws/issues/28142)) -* resource/aws_api_gateway_rest_api: Add `fail_on_warnings` attribute ([#22300](https://github.com/hashicorp/terraform-provider-aws/issues/22300)) -* resource/aws_dx_connection: Add `partner_name` attribute ([#30385](https://github.com/hashicorp/terraform-provider-aws/issues/30385)) -* resource/aws_dx_gateway: Add plan time validation to `name` argument ([#30375](https://github.com/hashicorp/terraform-provider-aws/issues/30375)) -* resource/aws_dx_gateway: Allow updates to `name` without forcing resource replacement ([#30375](https://github.com/hashicorp/terraform-provider-aws/issues/30375)) -* resource/aws_ec2_client_vpn_route: Increase Create and Delete timeouts to 4
minutes ([#30552](https://github.com/hashicorp/terraform-provider-aws/issues/30552)) -* resource/aws_lambda_function_url: Add `invoke_mode` attribute ([#30547](https://github.com/hashicorp/terraform-provider-aws/issues/30547)) -* resource/aws_mwaa_environment: Add `startup_script_s3_path` and `startup_script_s3_object_version` attributes ([#30549](https://github.com/hashicorp/terraform-provider-aws/issues/30549)) -* resource/aws_nat_gateway: Add `association_id` attribute ([#30546](https://github.com/hashicorp/terraform-provider-aws/issues/30546)) -* resource/aws_servicecatalog_provisioned_product: Surface a clearer error message when the resource fails to apply ([#30663](https://github.com/hashicorp/terraform-provider-aws/issues/30663)) -* resource/aws_wafv2_web_acl: Add `aws_managed_rules_atp_rule_set` to `managed_rule_group_configs` configuration block ([#30518](https://github.com/hashicorp/terraform-provider-aws/issues/30518)) - -BUG FIXES: - -* resource/aws_batch_compute_environment: Fix crash when `compute_resources.launch_template` is empty ([#30537](https://github.com/hashicorp/terraform-provider-aws/issues/30537)) -* resource/aws_cognito_managed_user_pool_client: Allow removing `token_validity_units` ([#30662](https://github.com/hashicorp/terraform-provider-aws/issues/30662)) -* resource/aws_cognito_user_pool_client: Allow removing `token_validity_units` ([#30662](https://github.com/hashicorp/terraform-provider-aws/issues/30662)) -* resource/aws_db_instance: Allow `engine` and `engine_version` to be set when `replicate_source_db` is set ([#30703](https://github.com/hashicorp/terraform-provider-aws/issues/30703)) -* resource/aws_db_instance: Fix panic when updating `replica_mode` ([#30714](https://github.com/hashicorp/terraform-provider-aws/issues/30714)) -* resource/aws_dynamodb_table_item: Fix spurious diffs reported when List and Map attributes were changed out-of-band ([#30712](https://github.com/hashicorp/terraform-provider-aws/issues/30712)) -*
resource/aws_elasticache_user_group: Change `user_group_id` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) ([#30533](https://github.com/hashicorp/terraform-provider-aws/issues/30533)) -* resource/aws_launch_template: Fix crash when `instance_market_options.spot_options` is empty ([#30539](https://github.com/hashicorp/terraform-provider-aws/issues/30539)) -* resource/aws_msk_serverless_cluster: Change `vpc_config.security_group_ids` to Computed ([#30535](https://github.com/hashicorp/terraform-provider-aws/issues/30535)) -* resource/aws_quicksight_data_set: Fix to properly send `physical_table_map.*.relational_table.catalog` when set ([#30704](https://github.com/hashicorp/terraform-provider-aws/issues/30704)) -* resource/aws_quicksight_data_set: Fix to properly send `physical_table_map.*.relational_table.schema` when set ([#30704](https://github.com/hashicorp/terraform-provider-aws/issues/30704)) -* resource/aws_rds_cluster: Prevent `db_instance_parameter_group_name` from causing errors on minor upgrades ([#30679](https://github.com/hashicorp/terraform-provider-aws/issues/30679)) -* resource/aws_rds_cluster_parameter_group: Fix differences being reported on every apply when setting system-source parameters ([#30536](https://github.com/hashicorp/terraform-provider-aws/issues/30536)) - -## 4.62.0 (April 6, 2023) - -FEATURES: - -* **New Data Source:** `aws_ec2_transit_gateway_attachments` ([#29644](https://github.com/hashicorp/terraform-provider-aws/issues/29644)) -* **New Data Source:** `aws_ec2_transit_gateway_route_table_associations` ([#29642](https://github.com/hashicorp/terraform-provider-aws/issues/29642)) -* **New Data Source:** `aws_ec2_transit_gateway_route_table_propagations` ([#29640](https://github.com/hashicorp/terraform-provider-aws/issues/29640)) -* **New Data Source:** `aws_oam_link` ([#30401](https://github.com/hashicorp/terraform-provider-aws/issues/30401)) -* **New Data Source:**
`aws_oam_links` ([#30401](https://github.com/hashicorp/terraform-provider-aws/issues/30401)) -* **New Data Source:** `aws_quicksight_data_set` ([#30422](https://github.com/hashicorp/terraform-provider-aws/issues/30422)) -* **New Data Source:** `aws_vpclattice_service` ([#30490](https://github.com/hashicorp/terraform-provider-aws/issues/30490)) -* **New Resource:** `aws_inspector2_member_association` ([#28921](https://github.com/hashicorp/terraform-provider-aws/issues/28921)) -* **New Resource:** `aws_lightsail_distribution` ([#30124](https://github.com/hashicorp/terraform-provider-aws/issues/30124)) -* **New Resource:** `aws_quicksight_account_subscription` ([#30359](https://github.com/hashicorp/terraform-provider-aws/issues/30359)) -* **New Resource:** `aws_quicksight_data_set` ([#30349](https://github.com/hashicorp/terraform-provider-aws/issues/30349)) -* **New Resource:** `aws_quicksight_folder` ([#30400](https://github.com/hashicorp/terraform-provider-aws/issues/30400)) -* **New Resource:** `aws_vpclattice_service` ([#30429](https://github.com/hashicorp/terraform-provider-aws/issues/30429)) -* **New Resource:** `aws_vpclattice_service_network` ([#30482](https://github.com/hashicorp/terraform-provider-aws/issues/30482)) - -ENHANCEMENTS: - -* data-source/aws_route_table: Ignore routes managed by VPC Lattice ([#30515](https://github.com/hashicorp/terraform-provider-aws/issues/30515)) -* data-source/aws_secretsmanager_secret: Add `rotation_rules.duration` and `rotation_rules.schedule_expression` attributes ([#30425](https://github.com/hashicorp/terraform-provider-aws/issues/30425)) -* data-source/aws_secretsmanager_secret_rotation: Add `rotation_rules.duration` and `rotation_rules.schedule_expression` attributes ([#30425](https://github.com/hashicorp/terraform-provider-aws/issues/30425)) -* resource/aws_default_route_table: Ignore routes managed by VPC Lattice ([#30515](https://github.com/hashicorp/terraform-provider-aws/issues/30515)) -* 
resource/aws_emrserverless_application: Add `image_configuration` field ([#30398](https://github.com/hashicorp/terraform-provider-aws/issues/30398)) -* resource/aws_imagebuilder_container_recipe: Add `platform_override` field ([#30398](https://github.com/hashicorp/terraform-provider-aws/issues/30398)) -* resource/aws_route_table: Ignore routes managed by VPC Lattice ([#30515](https://github.com/hashicorp/terraform-provider-aws/issues/30515)) -* resource/aws_s3_bucket: Enable S3-compatible providers with no support for bucket tagging ([#30151](https://github.com/hashicorp/terraform-provider-aws/issues/30151)) -* resource/aws_sagemaker_endpoint_configuration: Add `name_prefix` argument ([#28785](https://github.com/hashicorp/terraform-provider-aws/issues/28785)) -* resource/aws_sagemaker_feature_group: Add `table_format` to the `offline_store_config` configuration block ([#30118](https://github.com/hashicorp/terraform-provider-aws/issues/30118)) -* resource/aws_secretsmanager_secret: Add `duration` and `schedule_expression` attributes to `rotation_rules` configuration block ([#30425](https://github.com/hashicorp/terraform-provider-aws/issues/30425)) -* resource/aws_secretsmanager_secret_rotation: Add `duration` and `schedule_expression` attributes to `rotation_rules` configuration block ([#30425](https://github.com/hashicorp/terraform-provider-aws/issues/30425)) - -BUG FIXES: - -* resource/aws_ce_cost_category: Fix `effective_start` being reset on any change even when its value is unchanged ([#30369](https://github.com/hashicorp/terraform-provider-aws/issues/30369)) -* resource/aws_db_instance: Fix crash when updating `password` ([#30379](https://github.com/hashicorp/terraform-provider-aws/issues/30379)) -* resource/aws_glue_crawler: Fix InvalidInputException error string matching ([#30370](https://github.com/hashicorp/terraform-provider-aws/issues/30370)) -* resource/aws_glue_trigger: Fix InvalidInputException error string matching
([#30370](https://github.com/hashicorp/terraform-provider-aws/issues/30370)) -* resource/aws_medialive_channel: Fix attribute `certificate_mode` spelling in `rtmp_output_settings` ([#30224](https://github.com/hashicorp/terraform-provider-aws/issues/30224)) -* resource/aws_rds_cluster: Fix crash when updating `master_password` ([#30379](https://github.com/hashicorp/terraform-provider-aws/issues/30379)) -* resource/aws_rds_cluster: Fix inconsistent final plan errors when `engine_version` updates are not applied immediately ([#30247](https://github.com/hashicorp/terraform-provider-aws/issues/30247)) -* resource/aws_rds_cluster: Send `db_instance_parameter_group_name` on all modify requests when set ([#30247](https://github.com/hashicorp/terraform-provider-aws/issues/30247)) -* resource/aws_rds_cluster_instance: Fix inconsistent final plan errors when `engine_version` updates are not applied immediately ([#30247](https://github.com/hashicorp/terraform-provider-aws/issues/30247)) -* resource/aws_rds_instance: Fix inconsistent final plan errors when `engine_version` updates are not applied immediately ([#30247](https://github.com/hashicorp/terraform-provider-aws/issues/30247)) -* resource/aws_s3_bucket_lifecycle_configuration: Allow `rule.filter.object_size_greater_than` = 0 ([#29857](https://github.com/hashicorp/terraform-provider-aws/issues/29857)) -* resource/aws_scheduler_schedule: Mark `arn` property of `dead_letter_config` as a required property ([#30360](https://github.com/hashicorp/terraform-provider-aws/issues/30360)) - -## 4.61.0 (March 30, 2023) - -FEATURES: - -* **New Data Source:** `aws_appmesh_gateway_route` ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* **New Data Source:** `aws_appmesh_virtual_node` ([#27545](https://github.com/hashicorp/terraform-provider-aws/issues/27545)) -* **New Data Source:** `aws_appmesh_virtual_router` ([#26908](https://github.com/hashicorp/terraform-provider-aws/issues/26908)) -* **New Data 
Source:** `aws_globalaccelerator_custom_routing_accelerator` ([#28922](https://github.com/hashicorp/terraform-provider-aws/issues/28922)) -* **New Data Source:** `aws_oam_sink` ([#30258](https://github.com/hashicorp/terraform-provider-aws/issues/30258)) -* **New Data Source:** `aws_oam_sinks` ([#30258](https://github.com/hashicorp/terraform-provider-aws/issues/30258)) -* **New Data Source:** `aws_ssmincidents_replication_set` ([#29769](https://github.com/hashicorp/terraform-provider-aws/issues/29769)) -* **New Resource:** `aws_globalaccelerator_custom_routing_accelerator` ([#28922](https://github.com/hashicorp/terraform-provider-aws/issues/28922)) -* **New Resource:** `aws_globalaccelerator_custom_routing_endpoint_group` ([#28922](https://github.com/hashicorp/terraform-provider-aws/issues/28922)) -* **New Resource:** `aws_globalaccelerator_custom_routing_listener` ([#28922](https://github.com/hashicorp/terraform-provider-aws/issues/28922)) -* **New Resource:** `aws_rbin_rule` ([#25926](https://github.com/hashicorp/terraform-provider-aws/issues/25926)) -* **New Resource:** `aws_sns_topic_data_protection_policy` ([#30008](https://github.com/hashicorp/terraform-provider-aws/issues/30008)) -* **New Resource:** `aws_ssmincidents_replication_set` ([#29769](https://github.com/hashicorp/terraform-provider-aws/issues/29769)) - -ENHANCEMENTS: - -* data-source/aws_db_instance: Add `master_user_secret` attribute ([#28848](https://github.com/hashicorp/terraform-provider-aws/issues/28848)) -* data-source/aws_globalaccelerator_accelerator: Add `dual_stack_dns_name` attribute ([#28922](https://github.com/hashicorp/terraform-provider-aws/issues/28922)) -* data-source/aws_rds_cluster: Add `master_user_secret` attribute ([#28848](https://github.com/hashicorp/terraform-provider-aws/issues/28848)) -* resource/aws_appmesh_gateway_route: Add `header`, `path` and `query_parameter` to the `spec.http_route.match` and `spec.http2_route.match` configuration blocks 
([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_appmesh_gateway_route: Add `port` to the `spec.grpc_route.action.target`, `spec.http_route.action.target` and `spec.http2_route.action.target` configuration blocks to support Virtual Services with multiple listeners ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_appmesh_gateway_route: Add `priority` to the `spec` configuration block ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_appmesh_route: Add `path` and `query_parameter` to the `spec.http_route.match` and `spec.http2_route.match` configuration blocks ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_appmesh_route: `spec.http_route.match.prefix` and `spec.http2_route.match.prefix` are Optional ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_appmesh_virtual_node: Add `ip_preference` and `response_type` to the `spec.service_discovery.dns` configuration block ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_db_instance: Add `manage_master_user_password`, `master_user_secret` and `master_user_secret_kms_key_id` arguments to support RDS managed master password in Secrets Manager ([#28848](https://github.com/hashicorp/terraform-provider-aws/issues/28848)) -* resource/aws_globalaccelerator_accelerator: Add `dual_stack_dns_name` attribute ([#28922](https://github.com/hashicorp/terraform-provider-aws/issues/28922)) -* resource/aws_lakeformation_lf_tag: Increase maximum number of `values` items to 1000 to match the actual AWS limit ([#26546](https://github.com/hashicorp/terraform-provider-aws/issues/26546)) -* resource/aws_rds_cluster: Add `manage_master_user_password`, `master_user_secret` and `master_user_secret_kms_key_id` arguments to support RDS managed master password in Secrets Manager
([#28848](https://github.com/hashicorp/terraform-provider-aws/issues/28848)) -* resource/aws_sagemaker_endpoint_configuration: Add `production_variants.enable_ssm_access` and `shadow_production_variants.enable_ssm_access` arguments ([#30267](https://github.com/hashicorp/terraform-provider-aws/issues/30267)) - -BUG FIXES: - -* data-source/aws_ecs_task_execution: Fix type assertion panic on `overrides.0.container_overrides.*.environment` attribute ([#30214](https://github.com/hashicorp/terraform-provider-aws/issues/30214)) -* data-source/aws_ecs_task_execution: Fix type assertion panic on `overrides.0.container_overrides.*.resource_requirements` attribute ([#30214](https://github.com/hashicorp/terraform-provider-aws/issues/30214)) -* data-source/aws_ecs_task_execution: Fix type assertion panic on `overrides.0.inference_accelerator_overrides` attribute ([#30214](https://github.com/hashicorp/terraform-provider-aws/issues/30214)) -* resource/aws_appmesh_virtual_router: `spec.listener` is Optional ([#29064](https://github.com/hashicorp/terraform-provider-aws/issues/29064)) -* resource/aws_fsx_openzfs_file_system: Fix `iops` validation in `disk_iops_configuration` to allow values for `SINGLE_AZ_1` and `SINGLE_AZ_2` ([#30299](https://github.com/hashicorp/terraform-provider-aws/issues/30299)) -* resource/aws_lakeformation_lf_tag: Fix support for lf-tag keys with colons in the name ([#28258](https://github.com/hashicorp/terraform-provider-aws/issues/28258)) -* resource/aws_launch_template: Allow `metadata_options` to be applied when `http_endpoint` is not configured ([#30107](https://github.com/hashicorp/terraform-provider-aws/issues/30107)) -* resource/aws_ssm_activation: Fix IAM eventual consistency errors on resource Create ([#30280](https://github.com/hashicorp/terraform-provider-aws/issues/30280)) -* resource/aws_ssm_document: Correctly set `default_version`, `document_version`, `hash`, `latest_version` and `parameter` as Computed when `content` changes
([#28489](https://github.com/hashicorp/terraform-provider-aws/issues/28489)) -* resource/aws_wafv2_ip_set: Fix `DiffSuppress` on `addresses` to detect changes for unknown values ([#30352](https://github.com/hashicorp/terraform-provider-aws/issues/30352)) - -## 4.60.0 (March 24, 2023) - -FEATURES: - -* **New Data Source:** `aws_appmesh_route` ([#26695](https://github.com/hashicorp/terraform-provider-aws/issues/26695)) -* **New Data Source:** `aws_appmesh_virtual_gateway` ([#27057](https://github.com/hashicorp/terraform-provider-aws/issues/27057)) -* **New Resource:** `aws_cognito_managed_user_pool_client` ([#30140](https://github.com/hashicorp/terraform-provider-aws/issues/30140)) -* **New Resource:** `aws_oam_link` ([#30125](https://github.com/hashicorp/terraform-provider-aws/issues/30125)) -* **New Resource:** `aws_sesv2_contact_list` ([#30094](https://github.com/hashicorp/terraform-provider-aws/issues/30094)) - -ENHANCEMENTS: - -* data-source/aws_ecs_cluster: Add `tags` attribute ([#30073](https://github.com/hashicorp/terraform-provider-aws/issues/30073)) -* resource/aws_appmesh_virtual_gateway: Add `logging.access_log.file.format` configuration block ([#29315](https://github.com/hashicorp/terraform-provider-aws/issues/29315)) -* resource/aws_appmesh_virtual_node: Add `logging.access_log.file.format` configuration block ([#29315](https://github.com/hashicorp/terraform-provider-aws/issues/29315)) -* resource/aws_rds_cluster: Conflict `snapshot_identifier` and `global_cluster_identifier` attributes, preventing misleading results on restore ([#30158](https://github.com/hashicorp/terraform-provider-aws/issues/30158)) -* resource/aws_securityhub_account: Add `enable_default_standards` argument ([#13477](https://github.com/hashicorp/terraform-provider-aws/issues/13477)) -* resource/aws_securityhub_member: `email` is Optional ([#19065](https://github.com/hashicorp/terraform-provider-aws/issues/19065)) - -BUG FIXES: - -* data-source/aws_appmesh_mesh: Don't attempt to 
list tags if the current AWS account is not the mesh owner ([#26695](https://github.com/hashicorp/terraform-provider-aws/issues/26695)) -* data-source/aws_appmesh_virtual_service: Don't attempt to list tags if the current AWS account is not the mesh owner ([#26695](https://github.com/hashicorp/terraform-provider-aws/issues/26695)) -* resource/aws_api_gateway_domain_name: Add ability to update `mutual_tls_authentication.truststore_uri` in place ([#30081](https://github.com/hashicorp/terraform-provider-aws/issues/30081)) -* resource/aws_apigatewayv2_domain_name: Add ability to update `mutual_tls_authentication.truststore_uri` in place ([#30081](https://github.com/hashicorp/terraform-provider-aws/issues/30081)) -* resource/aws_appmesh_gateway_route: Use configured `mesh_owner` value when deleting shared gateway route ([#29362](https://github.com/hashicorp/terraform-provider-aws/issues/29362)) -* resource/aws_appmesh_route: Use configured `mesh_owner` value when deleting shared route ([#29362](https://github.com/hashicorp/terraform-provider-aws/issues/29362)) -* resource/aws_appmesh_virtual_gateway: Use configured `mesh_owner` value when deleting shared virtual gateway ([#29362](https://github.com/hashicorp/terraform-provider-aws/issues/29362)) -* resource/aws_appmesh_virtual_node: Use configured `mesh_owner` value when deleting shared virtual node ([#29362](https://github.com/hashicorp/terraform-provider-aws/issues/29362)) -* resource/aws_appmesh_virtual_router: Use configured `mesh_owner` value when deleting shared virtual router ([#29362](https://github.com/hashicorp/terraform-provider-aws/issues/29362)) -* resource/aws_appmesh_virtual_service: Use configured `mesh_owner` value when deleting shared virtual service ([#29362](https://github.com/hashicorp/terraform-provider-aws/issues/29362)) -* resource/aws_cognito_risk_configuration: Add validation to `risk_exception_configuration` and require at least one of `account_takeover_risk_configuration`,
`compromised_credentials_risk_configuration`, or `risk_exception_configuration`. ([#30074](https://github.com/hashicorp/terraform-provider-aws/issues/30074)) -* resource/aws_medialive_channel: Change `TypeSet` to `TypeList` on `video_description` so that plan output shows more precise actions ([#30064](https://github.com/hashicorp/terraform-provider-aws/issues/30064)) -* resource/aws_medialive_channel: Fix type casting for `h264_settings` in `video_descriptions` ([#30063](https://github.com/hashicorp/terraform-provider-aws/issues/30063)) -* resource/aws_medialive_channel: Fix type casting of `program_num`, `segmentation_time` and `fragment_time` for `m2ts_settings` ([#30025](https://github.com/hashicorp/terraform-provider-aws/issues/30025)) -* resource/aws_opsworks_application: Don't return an error like `deleting OpsWorks Application (...): %!s()` after successful Delete ([#30101](https://github.com/hashicorp/terraform-provider-aws/issues/30101)) -* resource/aws_pinpoint_app: Don't return an error like `deleting Pinpoint Application (...): %!s()` after successful Delete ([#30101](https://github.com/hashicorp/terraform-provider-aws/issues/30101)) -* resource/aws_placement_group: Change `spread_level` to Computed ([#28596](https://github.com/hashicorp/terraform-provider-aws/issues/28596)) -* resource/aws_security_group: Better respect the user-configured delete timeout and retry certain errors ([#30114](https://github.com/hashicorp/terraform-provider-aws/issues/30114)) -* resource/aws_transfer_server: Fix error refreshing `protocol_details.as2_transports` value ([#30115](https://github.com/hashicorp/terraform-provider-aws/issues/30115)) - -## 4.59.0 (March 16, 2023) - -NOTES: - -* resource/aws_connect_queue: The `quick_connect_ids_associated` attribute is being deprecated in favor of `quick_connect_ids` ([#26151](https://github.com/hashicorp/terraform-provider-aws/issues/26151)) -* resource/aws_connect_routing_profile: The `queue_configs_associated` attribute is
being deprecated in favor of `queue_configs` ([#26151](https://github.com/hashicorp/terraform-provider-aws/issues/26151)) - -FEATURES: - -* **New Data Source:** `aws_ec2_public_ipv4_pool` ([#28245](https://github.com/hashicorp/terraform-provider-aws/issues/28245)) -* **New Data Source:** `aws_ec2_public_ipv4_pools` ([#28245](https://github.com/hashicorp/terraform-provider-aws/issues/28245)) -* **New Data Source:** `aws_servicecatalog_provisioning_artifacts` ([#25535](https://github.com/hashicorp/terraform-provider-aws/issues/25535)) -* **New Resource:** `aws_codegurureviewer_repository_association` ([#29656](https://github.com/hashicorp/terraform-provider-aws/issues/29656)) -* **New Resource:** `aws_emr_block_public_access_configuration` ([#29968](https://github.com/hashicorp/terraform-provider-aws/issues/29968)) -* **New Resource:** `aws_kms_key_policy` ([#29923](https://github.com/hashicorp/terraform-provider-aws/issues/29923)) -* **New Resource:** `aws_oam_sink` ([#29670](https://github.com/hashicorp/terraform-provider-aws/issues/29670)) -* **New Resource:** `aws_oam_sink_policy` ([#30020](https://github.com/hashicorp/terraform-provider-aws/issues/30020)) - -ENHANCEMENTS: - -* resource/aws_cognito_user_pool_domain: Add ability to update `certificate_arn` in place ([#25275](https://github.com/hashicorp/terraform-provider-aws/issues/25275)) -* data-source/aws_lb: Add `enable_xff_client_port`, `xff_header_processing_mode` and `enable_tls_version_and_cipher_suite_headers` attributes ([#29792](https://github.com/hashicorp/terraform-provider-aws/issues/29792)) -* data-source/aws_ce_cost_category: Add `default_value` attribute ([#29291](https://github.com/hashicorp/terraform-provider-aws/issues/29291)) -* data-source/aws_dynamodb_table: Add `deletion_protection_enabled` attribute ([#29924](https://github.com/hashicorp/terraform-provider-aws/issues/29924)) -* data-source/aws_opensearch_domain: Add `dashboard_endpoint` attribute
([#29867](https://github.com/hashicorp/terraform-provider-aws/issues/29867)) -* resource/aws_amplify_domain_association: Add `enable_auto_sub_domain` argument ([#29814](https://github.com/hashicorp/terraform-provider-aws/issues/29814)) -* resource/aws_appflow_flow: Add attribute `preserve_source_data_typing` to `s3_output_format_config` in `s3` ([#27616](https://github.com/hashicorp/terraform-provider-aws/issues/27616)) -* resource/aws_appsync_datasource: Add `event_bridge_config` argument to support AppSync EventBridge data sources ([#30042](https://github.com/hashicorp/terraform-provider-aws/issues/30042)) -* resource/aws_lb: Add `enable_xff_client_port`, `xff_header_processing_mode` and `enable_tls_version_and_cipher_suite_headers` arguments ([#29792](https://github.com/hashicorp/terraform-provider-aws/issues/29792)) -* resource/aws_batch_compute_environment: Allow a maximum of 2 `compute_resources.ec2_configuration`s ([#27207](https://github.com/hashicorp/terraform-provider-aws/issues/27207)) -* resource/aws_cloudwatch_metric_alarm: Add `period` parameter to `metric_query` ([#29896](https://github.com/hashicorp/terraform-provider-aws/issues/29896)) -* resource/aws_cloudwatch_metric_alarm: Add validation to `period` parameter of `metric_query.metric` ([#29896](https://github.com/hashicorp/terraform-provider-aws/issues/29896)) -* resource/aws_cognito_user_pool_domain: Add `cloudfront_distribution` and `cloudfront_distribution_zone_id` attributes ([#27790](https://github.com/hashicorp/terraform-provider-aws/issues/27790)) -* resource/aws_dynamodb_table: Add `deletion_protection_enabled` argument ([#29924](https://github.com/hashicorp/terraform-provider-aws/issues/29924)) -* resource/aws_ecs_task_definition: Add `arn_without_revision` attribute ([#27351](https://github.com/hashicorp/terraform-provider-aws/issues/27351)) -* resource/aws_elasticache_user: Add `authentication_mode` argument
([#28928](https://github.com/hashicorp/terraform-provider-aws/issues/28928)) -* resource/aws_fms_policy: Add `description` argument ([#29926](https://github.com/hashicorp/terraform-provider-aws/issues/29926)) -* resource/aws_fsx_openzfs_file_system: Add support for `SINGLE_AZ_2` `deployment_type` ([#28583](https://github.com/hashicorp/terraform-provider-aws/issues/28583)) -* resource/aws_glue_crawler: Add `create_native_delta_table` attribute to the `delta_target` configuration block ([#29566](https://github.com/hashicorp/terraform-provider-aws/issues/29566)) -* resource/aws_inspector2_organization_configuration: Add `lambda` attribute to `auto_enable` configuration block ([#28961](https://github.com/hashicorp/terraform-provider-aws/issues/28961)) -* resource/aws_instance: Add ability to update `private_dns_name_options` in place ([#26305](https://github.com/hashicorp/terraform-provider-aws/issues/26305)) -* resource/aws_lb_target_group: Add `load_balancing_cross_zone_enabled` argument ([#29920](https://github.com/hashicorp/terraform-provider-aws/issues/29920)) -* resource/aws_opensearch_domain: Add `dashboard_endpoint` attribute ([#29867](https://github.com/hashicorp/terraform-provider-aws/issues/29867)) -* resource/aws_qldb_ledger: Add configurable timeouts ([#29635](https://github.com/hashicorp/terraform-provider-aws/issues/29635)) -* resource/aws_s3_bucket: Add error handling for `XNotImplemented` errors when reading `acceleration_status`, `request_payer`, `lifecycle_rule`, `logging`, or `replication_configuration` into terraform state. 
([#29632](https://github.com/hashicorp/terraform-provider-aws/issues/29632)) -* resource/aws_securityhub_organization_configuration: Add `auto_enable_standards` attribute ([#29773](https://github.com/hashicorp/terraform-provider-aws/issues/29773)) -* resource/aws_wafv2_web_acl_association: Add configurable timeout for Create ([#30002](https://github.com/hashicorp/terraform-provider-aws/issues/30002)) - -BUG FIXES: - -* data-source/aws_opensearch_domain: Add missing `advanced_security_options.anonymous_auth_enabled` attribute ([#26746](https://github.com/hashicorp/terraform-provider-aws/issues/26746)) -* resource/aws_api_gateway_integration: Fix bug that cleared unchanged `cache_key_parameters` values on Update ([#29991](https://github.com/hashicorp/terraform-provider-aws/issues/29991)) -* resource/aws_apigatewayv2_integration: Retry errors like `ConflictException: Unable to complete operation due to concurrent modification. Please try again later.` ([#29735](https://github.com/hashicorp/terraform-provider-aws/issues/29735)) -* resource/aws_budgets_action: Extend and add configurable timeouts for create and update ([#29976](https://github.com/hashicorp/terraform-provider-aws/issues/29976)) -* resource/aws_cognito_user_pool: Remove [Computed](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#computed) from `lambda_config.custom_email_sender` and `lambda_config.custom_sms_sender` allowing their values to be removed ([#29047](https://github.com/hashicorp/terraform-provider-aws/issues/29047)) -* resource/aws_cognito_user_pool: `account_recovery_setting.recovery_mechanism` is Optional+Computed ([#22302](https://github.com/hashicorp/terraform-provider-aws/issues/22302)) -* resource/aws_ecr_repository: Fix unhandled errors and nil output on read ([#30067](https://github.com/hashicorp/terraform-provider-aws/issues/30067)) -* resource/aws_elasticache_user: Change `user_id` to 
[ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) ([#28928](https://github.com/hashicorp/terraform-provider-aws/issues/28928)) -* resource/aws_elasticsearch_domain: Remove upper bound validation for `ebs_options.throughput` as the 1,000 MB/s limit can be raised ([#27598](https://github.com/hashicorp/terraform-provider-aws/issues/27598)) -* resource/aws_lambda_function: Fix empty environment variable update ([#29839](https://github.com/hashicorp/terraform-provider-aws/issues/29839)) -* resource/aws_lightsail_domain_entry: Allow for the domain entry to begin with an underscore. ([#30056](https://github.com/hashicorp/terraform-provider-aws/issues/30056)) -* resource/aws_lightsail_domain_entry: Moved the error handling of an improperly formatted ID to be before attempting to access the id_parts. This will cause a proper empty resource message instead of a panic when ID is not properly formed. ([#30056](https://github.com/hashicorp/terraform-provider-aws/issues/30056)) -* resource/aws_lightsail_instance: Added a check to ensure that the availability_zone value is within the current region of the provider. ([#30056](https://github.com/hashicorp/terraform-provider-aws/issues/30056)) -* resource/aws_lightsail_instance: Fix `name` validation to allow instances to start with a numeric character ([#29903](https://github.com/hashicorp/terraform-provider-aws/issues/29903)) -* resource/aws_medialive_channel: Fix setting of `bitrate` and `sample_rate` for `aac_settings`. ([#29807](https://github.com/hashicorp/terraform-provider-aws/issues/29807)) -* resource/aws_medialive_channel: Fix setting of `bitrate` for `eac3_settings`. 
([#29809](https://github.com/hashicorp/terraform-provider-aws/issues/29809)) -* resource/aws_medialive_channel: Fix spelling for attribute `audio_only_timecode_control` and correct type for `event_id` in `ms_smooth_group_settings` ([#29917](https://github.com/hashicorp/terraform-provider-aws/issues/29917)) -* resource/aws_medialive_channel: Removed `Compute` flag from `audio_normalization_settings` and `remix_settings` in `audio_descriptions` ([#29859](https://github.com/hashicorp/terraform-provider-aws/issues/29859)) -* resource/aws_medialive_channel: Removed `Computed` flag from `aac_settings`, ´ac3_settings`, `eac3_atmos_settings`, `eac3_settings`, `mp2_settings`, `pass_through_settings` and `wav_settings` in `codec_settings`. ([#29825](https://github.com/hashicorp/terraform-provider-aws/issues/29825)) -* resource/aws_neptune_cluster: Change lower bound validation for `serverless_v2_scaling_configuration.min_capacity` to 1 Neptune Capacity Unit (NCU) ([#29999](https://github.com/hashicorp/terraform-provider-aws/issues/29999)) -* resource/aws_network_acl_association: Add retry to read step, resolving `empty result` error ([#26838](https://github.com/hashicorp/terraform-provider-aws/issues/26838)) -* resource/aws_opensearch_domain: Remove upper bound validation for `ebs_options.throughput` as the 1,000 MB/s limit can be raised ([#27598](https://github.com/hashicorp/terraform-provider-aws/issues/27598)) -* resource/aws_route: Allow `destination_ipv6_cidr_block` to be specified for a `vpc_endpoint_id` target ([#29994](https://github.com/hashicorp/terraform-provider-aws/issues/29994)) -* resource/aws_sagemaker_endpoint_configuration: Fix `variant_name` generation when unset ([#29915](https://github.com/hashicorp/terraform-provider-aws/issues/29915)) - -## 4.58.0 (March 10, 2023) - -FEATURES: - -* **New Data Source:** `aws_ecs_task_execution` ([#29783](https://github.com/hashicorp/terraform-provider-aws/issues/29783)) -* **New Data Source:** 
`aws_licensemanager_grants` ([#29741](https://github.com/hashicorp/terraform-provider-aws/issues/29741)) -* **New Data Source:** `aws_licensemanager_received_license` ([#29741](https://github.com/hashicorp/terraform-provider-aws/issues/29741)) -* **New Data Source:** `aws_licensemanager_received_licenses` ([#29741](https://github.com/hashicorp/terraform-provider-aws/issues/29741)) -* **New Resource:** `aws_licensemanager_grant` ([#29741](https://github.com/hashicorp/terraform-provider-aws/issues/29741)) -* **New Resource:** `aws_licensemanager_grant_accepter` ([#29741](https://github.com/hashicorp/terraform-provider-aws/issues/29741)) - -ENHANCEMENTS: - -* data-source/aws_ec2_transit_gateway_attachment: Add `association_state` and `association_transit_gateway_route_table_id` attributes ([#29648](https://github.com/hashicorp/terraform-provider-aws/issues/29648)) -* data-source/aws_instances: Add `ipv6_addresses` attribute ([#29794](https://github.com/hashicorp/terraform-provider-aws/issues/29794)) -* resource/aws_acm_certificate: Change `options` to `Computed` ([#29763](https://github.com/hashicorp/terraform-provider-aws/issues/29763)) -* resource/aws_amplify_domain_association: Add `enable_auto_sub_domain` argument ([#29814](https://github.com/hashicorp/terraform-provider-aws/issues/29814)) -* resource/aws_cloudhsm_v2_hsm: Enforce `ExactlyOneOf` for `availability_zone` and `subnet_id` arguments ([#20891](https://github.com/hashicorp/terraform-provider-aws/issues/20891)) -* resource/aws_db_instance: Add `listener_endpoint` attribute ([#28434](https://github.com/hashicorp/terraform-provider-aws/issues/28434)) -* resource/aws_db_instance: Add plan time validations for `backup_retention_period`, `monitoring_interval`, and `monitoring_role_arn` ([#28434](https://github.com/hashicorp/terraform-provider-aws/issues/28434)) -* resource/aws_flow_log: Add `deliver_cross_account_role` argument ([#29254](https://github.com/hashicorp/terraform-provider-aws/issues/29254)) -* 
resource/aws_grafana_workspace: Add `network_access_control` argument ([#29793](https://github.com/hashicorp/terraform-provider-aws/issues/29793)) -* resource/aws_sesv2_configuration_set: Add `vdm_options` argument ([#28812](https://github.com/hashicorp/terraform-provider-aws/issues/28812)) -* resource/aws_transfer_server: Add `protocol_details` argument ([#28621](https://github.com/hashicorp/terraform-provider-aws/issues/28621)) -* resource/aws_transfer_workflow: Add `decrypt_step_details` to the `on_exception_steps` and `steps` configuration blocks ([#29692](https://github.com/hashicorp/terraform-provider-aws/issues/29692)) -* resource/db_snapshot: Add `shared_accounts` argument ([#28424](https://github.com/hashicorp/terraform-provider-aws/issues/28424)) - -BUG FIXES: - -* resource/aws_acm_certificate: Update `options.certificate_transparency_logging_preference` in place rather than replacing the resource ([#29763](https://github.com/hashicorp/terraform-provider-aws/issues/29763)) -* resource/aws_batch_job_definition: Prevents perpetual diff when container properties environment variable has empty value. ([#29820](https://github.com/hashicorp/terraform-provider-aws/issues/29820)) -* resource/aws_elastic_beanstalk_configuration_template: Map errors like `InvalidParameterValue: No Platform named '...' 
found.` to `resource.NotFoundError` so `terraform refesh` correctly removes the resource from state ([#29863](https://github.com/hashicorp/terraform-provider-aws/issues/29863)) -* resource/aws_flow_log: Fix IAM eventual consistency errors on resource Create ([#29254](https://github.com/hashicorp/terraform-provider-aws/issues/29254)) -* resource/aws_grafana_workspace: Allow removing `vpc_configuration` ([#29793](https://github.com/hashicorp/terraform-provider-aws/issues/29793)) -* resource/aws_medialive_channel: Fix setting of the `include_fec` attribute in `fec_output_settings` ([#29808](https://github.com/hashicorp/terraform-provider-aws/issues/29808)) -* resource/aws_medialive_channel: Fix setting of the `video_pid` attribute in `m2ts_settings` ([#29824](https://github.com/hashicorp/terraform-provider-aws/issues/29824)) - -## 4.57.1 (March 6, 2023) - -BUG FIXES: - -* resource/aws_lambda_function: Prevent `Provider produced inconsistent final plan` errors produced by null `skip_destroy` attribute value. NOTE: Because the maintainers have been unable to reproduce the reported problem, the fix is best effort and we ask for community support in verifying the fix. ([#29812](https://github.com/hashicorp/terraform-provider-aws/issues/29812)) - -## 4.57.0 (March 3, 2023) - -NOTES: - -* resource/aws_dms_endpoint: The `s3_settings` argument has been deprecated. All configurations using `aws_dms_endpoint.*.s3_settings` should be updated to use the `aws_dms_s3_endpoint` resource instead ([#29728](https://github.com/hashicorp/terraform-provider-aws/issues/29728)) -* resource/aws_networkmanager_core_network: The `base_policy_region` argument is being deprecated in favor of the new `base_policy_regions` argument. 
([#29623](https://github.com/hashicorp/terraform-provider-aws/issues/29623)) - -FEATURES: - -* **New Resource:** `aws_lightsail_bucket_resource_access` ([#29460](https://github.com/hashicorp/terraform-provider-aws/issues/29460)) - -ENHANCEMENTS: - -* data-source/aws_launch_template: Add `instance_requirements.allowed_instance_types` and `instance_requirements.network_bandwidth_gbps` attributes ([#29140](https://github.com/hashicorp/terraform-provider-aws/issues/29140)) -* resource/aws_autoscaling_group: Add `auto_rollback` to the `instance_refresh.preferences` configuration block ([#29513](https://github.com/hashicorp/terraform-provider-aws/issues/29513)) -* resource/aws_autoscaling_group: Add `mixed_instances_policy.launch_template.override.instance_requirements.allowed_instance_types` and `mixed_instances_policy.launch_template.override.instance_requirements.network_bandwidth_gbps` arguments ([#29140](https://github.com/hashicorp/terraform-provider-aws/issues/29140)) -* resource/aws_autoscaling_policy: Add `metrics` to the `target_tracking_configuration.customized_metric_specification` configuration block in support of [metric math](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-target-tracking-metric-math.html) ([#28560](https://github.com/hashicorp/terraform-provider-aws/issues/28560)) -* resource/aws_cloudtrail_event_data_store: Add `kms_key_id` argument ([#29224](https://github.com/hashicorp/terraform-provider-aws/issues/29224)) -* resource/aws_dms_endpoint: Add ability to use AWS Secrets Manager with the `db2` engine ([#29380](https://github.com/hashicorp/terraform-provider-aws/issues/29380)) -* resource/aws_dms_endpoint: Add support for `azure-sql-managed-instance` `engine_name` value ([#28960](https://github.com/hashicorp/terraform-provider-aws/issues/28960)) -* resource/aws_dms_s3_endpoint: Add `detach_target_on_lob_lookup_failure_parquet` argument ([#29772](https://github.com/hashicorp/terraform-provider-aws/issues/29772)) -* 
resource/aws_ec2_fleet: Add `fleet_instance_set`, `fleet_state`, `fulfilled_capacity`, and `fulfilled_on_demand_capacity` attributes ([#29181](https://github.com/hashicorp/terraform-provider-aws/issues/29181)) -* resource/aws_ec2_fleet: Add `launch_template_config.override.instance_requirements.allowed_instance_types` and `launch_template_config.override.instance_requirements.network_bandwidth_gbps` arguments ([#29140](https://github.com/hashicorp/terraform-provider-aws/issues/29140)) -* resource/aws_ec2_fleet: Add `on_demand_options.capacity_reservation_options`,`on_demand_options.max_total_price`, `on_demand_options.min_target_capacity`, `on_demand_options.single_availability_zone` and `on_demand_options.single_instance_type` arguments ([#29181](https://github.com/hashicorp/terraform-provider-aws/issues/29181)) -* resource/aws_ec2_fleet: Add `spot_options.maintenance_strategies.capacity_rebalance.termination_delay` argument ([#29181](https://github.com/hashicorp/terraform-provider-aws/issues/29181)) -* resource/aws_ec2_fleet: Add `valid_from` and `valid_until` arguments ([#29181](https://github.com/hashicorp/terraform-provider-aws/issues/29181)) -* resource/aws_lambda_function: Add `skip_destroy` argument ([#29646](https://github.com/hashicorp/terraform-provider-aws/issues/29646)) -* resource/aws_lambda_function: Add configurable timeout for Delete ([#29646](https://github.com/hashicorp/terraform-provider-aws/issues/29646)) -* resource/aws_lambda_function: Add plan time validators for `memory_size`, `role`, and `timeout` ([#29721](https://github.com/hashicorp/terraform-provider-aws/issues/29721)) -* resource/aws_lambda_function: Retry (up to the configurable timeout) deletion of replicated Lambda@Edge functions ([#29646](https://github.com/hashicorp/terraform-provider-aws/issues/29646)) -* resource/aws_launch_template: Add `instance_requirements.allowed_instance_types` and `instance_requirements.network_bandwidth_gbps` arguments 
([#29140](https://github.com/hashicorp/terraform-provider-aws/issues/29140)) -* resource/aws_networkmanager_core_network: Add `base_policy_regions` argument ([#29623](https://github.com/hashicorp/terraform-provider-aws/issues/29623)) -* resource/aws_spot_fleet_request: Add `launch_template_config.overrides.instance_requirements.allowed_instance_types` and `launch_template_config.overrides.instance_requirements.network_bandwidth_gbps` arguments ([#29140](https://github.com/hashicorp/terraform-provider-aws/issues/29140)) -* resource/aws_transfer_server: Add support for `on_partial_upload` block on the `workflow_details` attribute. ([#27730](https://github.com/hashicorp/terraform-provider-aws/issues/27730)) -* resource/aws_transfer_user: Add configurable timeout for Delete ([#27563](https://github.com/hashicorp/terraform-provider-aws/issues/27563)) - -BUG FIXES: - -* resource/aws_dms_endpoint: Trigger updates based on adding new `extra_connection_attributes` ([#29772](https://github.com/hashicorp/terraform-provider-aws/issues/29772)) -* resource/aws_instance: When encountering `InsufficientInstanceCapacity` errors, do not retry in order to fail faster, as this error is typically not resolvable in the near future ([#21293](https://github.com/hashicorp/terraform-provider-aws/issues/21293)) -* resource/aws_transfer_server: Allow the removal of `workflow_details` attribute. 
([#27730](https://github.com/hashicorp/terraform-provider-aws/issues/27730)) -* resource/aws_transfer_user: Fix bug preventing removal of all `home_directory_mappings` due to empty list validation error ([#27563](https://github.com/hashicorp/terraform-provider-aws/issues/27563)) - -## 4.56.0 (February 24, 2023) - -NOTES: - -* resource/aws_lambda_function: Updated to AWS SDK V2 ([#29615](https://github.com/hashicorp/terraform-provider-aws/issues/29615)) - -FEATURES: - -* **New Data Source:** `aws_vpc_security_group_rule` ([#29484](https://github.com/hashicorp/terraform-provider-aws/issues/29484)) -* **New Data Source:** `aws_vpc_security_group_rules` ([#29484](https://github.com/hashicorp/terraform-provider-aws/issues/29484)) -* **New Resource:** `aws_networkmanager_connect_peer` ([#29296](https://github.com/hashicorp/terraform-provider-aws/issues/29296)) -* **New Resource:** `aws_vpc_security_group_egress_rule` ([#29484](https://github.com/hashicorp/terraform-provider-aws/issues/29484)) -* **New Resource:** `aws_vpc_security_group_ingress_rule` ([#29484](https://github.com/hashicorp/terraform-provider-aws/issues/29484)) - -ENHANCEMENTS: - -* data-source/aws_ecr_image: Add `most_recent` argument to return the most recently pushed image ([#26857](https://github.com/hashicorp/terraform-provider-aws/issues/26857)) -* data-source/aws_ecr_repository: Add `most_recent_image_tags` attribute containing the most recently pushed image tag(s) in an ECR repository ([#26857](https://github.com/hashicorp/terraform-provider-aws/issues/26857)) -* resource/aws_lb_ssl_negotiation_policy: Add `triggers` attribute to force resource updates ([#29482](https://github.com/hashicorp/terraform-provider-aws/issues/29482)) -* resource/aws_load_balancer_listener_policy: Add `triggers` attribute to force resource updates ([#29482](https://github.com/hashicorp/terraform-provider-aws/issues/29482)) -* resource/aws_organizations_policy: Add `skip_destroy` attribute 
([#29382](https://github.com/hashicorp/terraform-provider-aws/issues/29382)) -* resource/aws_organizations_policy_attachment: Add `skip_destroy` attribute ([#29382](https://github.com/hashicorp/terraform-provider-aws/issues/29382)) -* resource/aws_sns_topic: Add `signature_version` and `tracing_config` arguments ([#29462](https://github.com/hashicorp/terraform-provider-aws/issues/29462)) - -BUG FIXES: - -* resource/aws_acmpca_certificate_authority: `revocation_configuration.crl_configuration.expiration_in_days` is Optional ([#29613](https://github.com/hashicorp/terraform-provider-aws/issues/29613)) -* resource/aws_default_vpc: Change `enable_network_address_usage_metrics` to Optional+Computed, matching the `aws_vpc` resource ([#29607](https://github.com/hashicorp/terraform-provider-aws/issues/29607)) -* resource/aws_lambda_function: Fix missing `ValidationException` message body ([#29615](https://github.com/hashicorp/terraform-provider-aws/issues/29615)) -* resource/aws_medialive_channel: Fix setting of `m2ts_settings` `arib_captions_pid` and `arib_captions_pid_control` attributes ([#29467](https://github.com/hashicorp/terraform-provider-aws/issues/29467)) -* resource/aws_resourceexplorer2_view: Fix `Unexpected Planned Resource State on Destroy` errors when using Terraform CLI v1.3 and above ([#29550](https://github.com/hashicorp/terraform-provider-aws/issues/29550)) -* resource/aws_servicecatalog_provisioned_product: Fix to allow `outputs` to be `Computed` when the resource changes ([#29559](https://github.com/hashicorp/terraform-provider-aws/issues/29559)) -* resource/aws_sns_topic_subscription: Fix `filter_policy_scope` update from `MessageAttributes` to `MessageBody` with nested objects in `filter_policy` ([#28572](https://github.com/hashicorp/terraform-provider-aws/issues/28572)) -* resource/aws_wafv2_web_acl: Prevent erroneous diffs and attempts to remove AWS-added rule when applying to CF distribution using AWS Shield to automatically mitigate DDoS 
([#29575](https://github.com/hashicorp/terraform-provider-aws/issues/29575)) - -## 4.55.0 (February 16, 2023) - -FEATURES: - -* **New Data Source:** `aws_organizations_organizational_unit_child_accounts` ([#24350](https://github.com/hashicorp/terraform-provider-aws/issues/24350)) -* **New Data Source:** `aws_organizations_organizational_unit_descendant_accounts` ([#24350](https://github.com/hashicorp/terraform-provider-aws/issues/24350)) -* **New Resource:** `aws_route53_cidr_collection` ([#29407](https://github.com/hashicorp/terraform-provider-aws/issues/29407)) -* **New Resource:** `aws_route53_cidr_location` ([#29407](https://github.com/hashicorp/terraform-provider-aws/issues/29407)) -* **New Resource:** `aws_vpc_ipam_resource_discovery` ([#29216](https://github.com/hashicorp/terraform-provider-aws/issues/29216)) -* **New Resource:** `aws_vpc_ipam_resource_discovery_association` ([#29216](https://github.com/hashicorp/terraform-provider-aws/issues/29216)) - -ENHANCEMENTS: - -* data-source/aws_s3_bucket_object: Expand content types that can be read from S3 to include some human-readable application types (e.g., `application/xml`, `application/atom+xml`) ([#27704](https://github.com/hashicorp/terraform-provider-aws/issues/27704)) -* data-source/aws_s3_object: Expand content types that can be read from S3 to include some human-readable application types (e.g., `application/xml`, `application/atom+xml`) ([#27704](https://github.com/hashicorp/terraform-provider-aws/issues/27704)) -* resource/aws_autoscaling_policy: Make `resource_label` optional in `predefined_load_metric_specification`, `predefined_metric_pair_specification`, and `predefined_scaling_metric_specification` ([#29277](https://github.com/hashicorp/terraform-provider-aws/issues/29277)) -* resource/aws_cloudwatch_log_group: Allow `retention_in_days` attribute to accept a three year retention period (1096 days) ([#29426](https://github.com/hashicorp/terraform-provider-aws/issues/29426)) -* 
resource/aws_db_proxy: Add `auth.client_password_auth_type` attribute ([#28432](https://github.com/hashicorp/terraform-provider-aws/issues/28432)) -* resource/aws_firehose_delivery_stream: Add `ForceNew` to `dynamic_partitioning_configuration` attribute ([#29093](https://github.com/hashicorp/terraform-provider-aws/issues/29093)) -* resource/aws_firehose_delivery_stream: Add configurable timeouts for create, update, and delete ([#28469](https://github.com/hashicorp/terraform-provider-aws/issues/28469)) -* resource/aws_neptune_cluster: Add `neptune_instance_parameter_group_name` argument, used only when upgrading major version ([#28051](https://github.com/hashicorp/terraform-provider-aws/issues/28051)) -* resource/aws_neptune_global_cluster: Increase Update timeout to 120 minutes (per global cluster member) ([#28051](https://github.com/hashicorp/terraform-provider-aws/issues/28051)) -* resource/aws_route53_cidr_location: Add `cidr_routing_policy` argument ([#29407](https://github.com/hashicorp/terraform-provider-aws/issues/29407)) -* resource/aws_s3_bucket: Accept 'NoSuchTagSetError' responses from S3-compatible services ([#28530](https://github.com/hashicorp/terraform-provider-aws/issues/28530)) -* resource/aws_s3_bucket: Add error handling for `NotImplemented` errors when reading `lifecycle_rule` or `replication_configuration` into terraform state. 
([#28790](https://github.com/hashicorp/terraform-provider-aws/issues/28790)) -* resource/aws_s3_object: Accept 'NoSuchTagSetError' responses from S3-compatible services ([#28530](https://github.com/hashicorp/terraform-provider-aws/issues/28530)) - -BUG FIXES: - -* data-source/aws_elb: Fix errors caused by multiple security groups with the same name but different owners ([#29202](https://github.com/hashicorp/terraform-provider-aws/issues/29202)) -* resource/aws_appflow_connector_profile: Fix bug in connector_profile_config.0.connector_profile_properties.0.sapo_data.0.logon_language validation regex ([#28550](https://github.com/hashicorp/terraform-provider-aws/issues/28550)) -* resource/aws_appflow_flow: Fix misspelled `source_connector_properties.0.sapo_data.0.object`, which never worked, to be `object_path` ([#28600](https://github.com/hashicorp/terraform-provider-aws/issues/28600)) -* resource/aws_appmesh_route: Fix RequiredWith setting for `spec.0.grpc_route.0.match.0.method_name` attribute ([#29217](https://github.com/hashicorp/terraform-provider-aws/issues/29217)) -* resource/aws_autoscaling_policy: Fix type of target_value for predictive scaling ([#28444](https://github.com/hashicorp/terraform-provider-aws/issues/28444)) -* resource/aws_cloudfront_response_headers_policy: Allow `server_timing_headers_config.0.sampling_rate` to be `0` ([#27778](https://github.com/hashicorp/terraform-provider-aws/issues/27778)) -* resource/aws_codebuild_project: Fix err check on delete ([#29042](https://github.com/hashicorp/terraform-provider-aws/issues/29042)) -* resource/aws_ecs_service: Allow multiple `service` blocks within `service_connect_configuration` ([#28813](https://github.com/hashicorp/terraform-provider-aws/issues/28813)) -* resource/aws_ecs_service: Mark `service_connect_configuration.service.client_alias` as optional and ensure that only 1 such block can be provided ([#28813](https://github.com/hashicorp/terraform-provider-aws/issues/28813)) -* 
resource/aws_ecs_service: Require `service_connect_configuration.log_configuration.log_driver` to be provided ([#28813](https://github.com/hashicorp/terraform-provider-aws/issues/28813)) -* resource/aws_elb: Fix errors caused by multiple security groups with the same name but different owners ([#29202](https://github.com/hashicorp/terraform-provider-aws/issues/29202)) -* resource/aws_emr_cluster: Fix errors caused by multiple security groups with the same name but different owners ([#29202](https://github.com/hashicorp/terraform-provider-aws/issues/29202)) -* resource/aws_globalaccelerator_endpoint_group: Fix errors caused by multiple security groups with the same name but different owners ([#29202](https://github.com/hashicorp/terraform-provider-aws/issues/29202)) -* resource/aws_kms_key: Increase `policy propagation` eventual consistency timeouts from 5 minutes to 10 minutes ([#28636](https://github.com/hashicorp/terraform-provider-aws/issues/28636)) -* resource/aws_medialive_channel: Fix issue causing `dbv_sub_pids` attribute to be configured incorrectly in `m2ts_settings` ([#29371](https://github.com/hashicorp/terraform-provider-aws/issues/29371)) -* resource/aws_medialive_channel: Fix issue preventing `audio_pids` attribute from being configured in `m2ts_settings` ([#29371](https://github.com/hashicorp/terraform-provider-aws/issues/29371)) -* resource/aws_neptune_cluster: Fix [restore-from-snapshot functionality](https://docs.aws.amazon.com/neptune/latest/userguide/backup-restore-restore-snapshot.html) using the `snapshot_identifier` argument on resource Create ([#28051](https://github.com/hashicorp/terraform-provider-aws/issues/28051)) -* resource/aws_neptune_cluster: Fix major version upgrade ([#28051](https://github.com/hashicorp/terraform-provider-aws/issues/28051)) -* resource/aws_sagemaker_user_profile: Change `user_settings.0.jupyter_server_app_settings.0.default_resource_spec` to be optional 
([#28581](https://github.com/hashicorp/terraform-provider-aws/issues/28581)) - -## 4.54.0 (February 9, 2023) - -NOTES: - -* provider: Resolves provider crashes reporting `Error: Plugin did not respond` and `fatal error: concurrent map writes` with updated upstream package (`terraform-plugin-log`) ([#29269](https://github.com/hashicorp/terraform-provider-aws/issues/29269)) -* resource/aws_networkmanager_core_network: The `policy_document` attribute is being deprecated in favor of the new `aws_networkmanager_core_network_policy_attachment` resource. ([#29097](https://github.com/hashicorp/terraform-provider-aws/issues/29097)) - -FEATURES: - -* **New Resource:** `aws_evidently_launch` ([#28752](https://github.com/hashicorp/terraform-provider-aws/issues/28752)) -* **New Resource:** `aws_lightsail_bucket_access_key` ([#28699](https://github.com/hashicorp/terraform-provider-aws/issues/28699)) -* **New Resource:** `aws_networkmanager_core_network_policy_attachment` ([#29097](https://github.com/hashicorp/terraform-provider-aws/issues/29097)) - -ENHANCEMENTS: - -* data-source/aws_cloudtrail_service_account: Add service account ID for `ap-southeast-4` AWS Region ([#29103](https://github.com/hashicorp/terraform-provider-aws/issues/29103)) -* data-source/aws_elb_hosted_zone_id: Add hosted zone ID for `ap-southeast-4` AWS Region ([#29103](https://github.com/hashicorp/terraform-provider-aws/issues/29103)) -* data-source/aws_lb_hosted_zone_id: Add hosted zone IDs for `ap-southeast-4` AWS Region ([#29103](https://github.com/hashicorp/terraform-provider-aws/issues/29103)) -* data-source/aws_s3_bucket: Add hosted zone ID for `ap-south-2` AWS Region ([#29103](https://github.com/hashicorp/terraform-provider-aws/issues/29103)) -* data-source/aws_s3_bucket: Add hosted zone ID for `ap-southeast-4` AWS Region ([#29103](https://github.com/hashicorp/terraform-provider-aws/issues/29103)) -* provider: Support `ap-southeast-4` as a valid AWS region 
([#29329](https://github.com/hashicorp/terraform-provider-aws/issues/29329)) -* resource/aws_dynamodb_table: Add `arn`, `stream_arn`, and `stream_label` attributes to `replica` to obtain this information for replicas ([#29269](https://github.com/hashicorp/terraform-provider-aws/issues/29269)) -* resource/aws_efs_mount_target: Add configurable timeouts for Create and Delete ([#27991](https://github.com/hashicorp/terraform-provider-aws/issues/27991)) -* resource/aws_lambda_function: Add `replace_security_groups_on_destroy` and `replacement_security_group_ids` attributes ([#29289](https://github.com/hashicorp/terraform-provider-aws/issues/29289)) -* resource/aws_networkfirewall_firewall: Add `ip_address_type` attribute to the `subnet_mapping` configuration block ([#29010](https://github.com/hashicorp/terraform-provider-aws/issues/29010)) -* resource/aws_networkmanager_core_network: Add `base_policy_region` and `create_base_policy` arguments ([#29097](https://github.com/hashicorp/terraform-provider-aws/issues/29097)) - -BUG FIXES: - -* data-source/aws_kms_key: Reinstate support for KMS multi-Region key ID or ARN values for the `key_id` argument ([#29266](https://github.com/hashicorp/terraform-provider-aws/issues/29266)) -* resource/aws_cloudwatch_log_group: Fix IAM eventual consistency error when setting a retention policy ([#29325](https://github.com/hashicorp/terraform-provider-aws/issues/29325)) -* resource/aws_dynamodb_table: Avoid recreating table replicas when enabling PITR on them ([#29269](https://github.com/hashicorp/terraform-provider-aws/issues/29269)) -* resource/aws_ec2_client_vpn_endpoint: Change `authentication_options` from `TypeList` to `TypeSet` as order is not significant ([#29294](https://github.com/hashicorp/terraform-provider-aws/issues/29294)) -* resource/aws_kms_grant: Retries until valid principal ARNs are returned instead of not updating state ([#29245](https://github.com/hashicorp/terraform-provider-aws/issues/29245)) -* 
resource/aws_opsworks_permission: `stack_id` and `user_arn` are both Required and ForceNew ([#27991](https://github.com/hashicorp/terraform-provider-aws/issues/27991)) -* resource/aws_prometheus_workspace: Create a logging configuration on resource update if none existed previously ([#27472](https://github.com/hashicorp/terraform-provider-aws/issues/27472)) -* resource/aws_s3_bucket: Fix crash when `logging` is empty ([#29243](https://github.com/hashicorp/terraform-provider-aws/issues/29243)) -* resource/aws_sns_topic: Fixes potential race condition when reading policy document. ([#29226](https://github.com/hashicorp/terraform-provider-aws/issues/29226)) -* resource/aws_sns_topic_policy: Fixes potential race condition when reading policy document. ([#29226](https://github.com/hashicorp/terraform-provider-aws/issues/29226)) - -## 4.53.0 (February 3, 2023) - -ENHANCEMENTS: - -* provider: Adds structured fields in logging ([#29223](https://github.com/hashicorp/terraform-provider-aws/issues/29223)) -* provider: Masks authentication fields in HTTP header logging ([#29223](https://github.com/hashicorp/terraform-provider-aws/issues/29223)) - -## 4.52.0 (January 27, 2023) - -NOTES: - -* resource/aws_dynamodb_table: In the past, in certain situations, `kms_key_arn` could be populated with the default DynamoDB key `alias/aws/dynamodb`. This was an error because it would then be sent back to AWS and should not be. ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table: In the past, in certain situations, `server_side_encryption.0.kms_key_arn` or `replica.*.kms_key_arn` could be populated with the default DynamoDB key `alias/aws/dynamodb`. This was an error because it would then be sent back to AWS and should not be. 
([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table: Updating `replica.*.kms_key_arn` or `replica.*.point_in_time_recovery`, when the `replica`'s `kms_key_arn` is set, requires recreating the replica. ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table_replica: Updating `kms_key_arn` forces replacement of the replica now as required to re-encrypt the replica ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) - -FEATURES: - -* **New Data Source:** `aws_auditmanager_framework` ([#28989](https://github.com/hashicorp/terraform-provider-aws/issues/28989)) -* **New Resource:** `aws_auditmanager_assessment_delegation` ([#29099](https://github.com/hashicorp/terraform-provider-aws/issues/29099)) -* **New Resource:** `aws_auditmanager_framework_share` ([#29049](https://github.com/hashicorp/terraform-provider-aws/issues/29049)) -* **New Resource:** `aws_auditmanager_organization_admin_account_registration` ([#29018](https://github.com/hashicorp/terraform-provider-aws/issues/29018)) - -ENHANCEMENTS: - -* resource/aws_wafv2_rule_group: Add `oversize_handling` argument to `body` block of the `field_to_match` block ([#29082](https://github.com/hashicorp/terraform-provider-aws/issues/29082)) - -BUG FIXES: - -* resource/aws_api_gateway_integration: Prevent drift of `connection_type` attribute when `aws_api_gateway_deployment` `triggers` are used ([#29016](https://github.com/hashicorp/terraform-provider-aws/issues/29016)) -* resource/aws_dynamodb_table: Fix perpetual diffs when using default AWS-managed keys ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table: Fix to allow updating of `replica.*.kms_key_arn` ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table: Fix to allow updating of `replica.*.point_in_time_recovery` when a `replica` 
has `kms_key_arn` set ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table: Fix unexpected state 'DISABLED' error when waiting for PITR to update ([#29086](https://github.com/hashicorp/terraform-provider-aws/issues/29086)) -* resource/aws_dynamodb_table_replica: Fix to allow creation of the replica without errors when `kms_key_arn` is set ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_dynamodb_table_replica: Fix to allow updating of `kms_key_arn` ([#29102](https://github.com/hashicorp/terraform-provider-aws/issues/29102)) -* resource/aws_medialive_channel: Add missing `rate_control_mode` in `aac_settings` for `audio_descriptions` ([#29051](https://github.com/hashicorp/terraform-provider-aws/issues/29051)) -* resource/aws_medialive_input: Fix eventual consistency error when updating ([#29051](https://github.com/hashicorp/terraform-provider-aws/issues/29051)) -* resource/aws_vpc_ipam_pool_cidr_allocation: Add support for eventual consistency on read operations after create. 
([#29022](https://github.com/hashicorp/terraform-provider-aws/issues/29022)) -* resource/aws_wafv2_web_acl: Fix error when setting `aws_managed_rules_bot_control_rule_set` in `managed_rule_group_configs` ([#28810](https://github.com/hashicorp/terraform-provider-aws/issues/28810)) - -## 4.51.0 (January 19, 2023) - -NOTES: - -* resource/aws_ce_anomaly_subscription: Deprecate `threshold` argument in favour of `threshold_expression` ([#28573](https://github.com/hashicorp/terraform-provider-aws/issues/28573)) - -FEATURES: - -* **New Data Source:** `aws_auditmanager_control` ([#28967](https://github.com/hashicorp/terraform-provider-aws/issues/28967)) -* **New Resource:** `aws_datasync_location_object_storage` ([#23154](https://github.com/hashicorp/terraform-provider-aws/issues/23154)) -* **New Resource:** `aws_rds_export_task` ([#28831](https://github.com/hashicorp/terraform-provider-aws/issues/28831)) -* **New Resource:** `aws_resourceexplorer2_view` ([#28841](https://github.com/hashicorp/terraform-provider-aws/issues/28841)) - -ENHANCEMENTS: - -* resource/aws_appmesh_gateway_route: Add `port` to the `match` attribute for routes ([#27799](https://github.com/hashicorp/terraform-provider-aws/issues/27799)) -* resource/aws_appmesh_route: Add `port` to the `weighted_target` attribute ([#27799](https://github.com/hashicorp/terraform-provider-aws/issues/27799)) -* resource/aws_appmesh_virtual_gateway: Add support for specifying multiple listeners ([#27799](https://github.com/hashicorp/terraform-provider-aws/issues/27799)) -* resource/aws_appmesh_virtual_node: Add support for specifying multiple listeners ([#27799](https://github.com/hashicorp/terraform-provider-aws/issues/27799)) -* resource/aws_appmesh_virtual_router: Add support for specifying multiple listeners ([#27799](https://github.com/hashicorp/terraform-provider-aws/issues/27799)) -* resource/aws_apprunner_service: Add 
`source_configuration.code_repository.code_configuration.runtime_environment_secrets` and `source_configuration.image_repository.image_configuration.runtime_environment_secrets` arguments ([#28871](https://github.com/hashicorp/terraform-provider-aws/issues/28871)) -* resource/aws_ce_anomaly_subscription: Add `threshold_expression` argument ([#28573](https://github.com/hashicorp/terraform-provider-aws/issues/28573)) -* resource/aws_grafana_workspace: Add `configuration` argument ([#28569](https://github.com/hashicorp/terraform-provider-aws/issues/28569)) -* resource/aws_imagebuilder_component: Add `skip_destroy` argument ([#28905](https://github.com/hashicorp/terraform-provider-aws/issues/28905)) -* resource/aws_lambda_event_source_mapping: Add `scaling_config` argument ([#28876](https://github.com/hashicorp/terraform-provider-aws/issues/28876)) -* resource/aws_lambda_function: Add configurable timeout for Update ([#28963](https://github.com/hashicorp/terraform-provider-aws/issues/28963)) -* resource/aws_rum_app_monitor: Add `custom_events` argument ([#28431](https://github.com/hashicorp/terraform-provider-aws/issues/28431)) -* resource/aws_servicecatalog_portfolio_share: Add `share_principals` argument ([#28619](https://github.com/hashicorp/terraform-provider-aws/issues/28619)) - -BUG FIXES: - -* data-source/aws_eks_cluster: Add `outpost_config.control_plane_placement` attribute ([#28924](https://github.com/hashicorp/terraform-provider-aws/issues/28924)) -* data-source/aws_identitystore_group: Restore use of `ListGroups` API when `filter` is specified ([#28937](https://github.com/hashicorp/terraform-provider-aws/issues/28937)) -* data-source/aws_identitystore_user: Restore use of `ListUsers` API when `filter` is specified ([#28937](https://github.com/hashicorp/terraform-provider-aws/issues/28937)) -* data-source/aws_lambda_function: Fix `AccessDeniedException` errors in [AWS Regions where AWS Signer is not 
supported](https://docs.aws.amazon.com/general/latest/gr/signer.html#signer_lambda_region) ([#28963](https://github.com/hashicorp/terraform-provider-aws/issues/28963)) -* data-source/aws_lambda_function: Remove any qualifier from `invoke_arn` ([#28963](https://github.com/hashicorp/terraform-provider-aws/issues/28963)) -* resource/aws_appstream_image_builder: Fix IAM eventual consistency error for optional role ([#26677](https://github.com/hashicorp/terraform-provider-aws/issues/26677)) -* resource/aws_appstream_image_builder: Fix refresh error when `domain_join_info` and `vpc_config` are not empty ([#26677](https://github.com/hashicorp/terraform-provider-aws/issues/26677)) -* resource/aws_elasticsearch_domain: Prevent persistent `iops` diff ([#28901](https://github.com/hashicorp/terraform-provider-aws/issues/28901)) -* resource/aws_grafana_workspace: Fix updating `vpc_configuration` ([#28569](https://github.com/hashicorp/terraform-provider-aws/issues/28569)) -* resource/aws_iam_server_certificate: Avoid errors on delete when no error occurred ([#28968](https://github.com/hashicorp/terraform-provider-aws/issues/28968)) -* resource/aws_lambda_function: Don't persist invalid `filename`, `s3_bucket`, `s3_key` or `s3_object_version` values on resource Update ([#28963](https://github.com/hashicorp/terraform-provider-aws/issues/28963)) -* resource/aws_lambda_function: Retry `ResourceNotFoundException` errors on resource Create ([#28963](https://github.com/hashicorp/terraform-provider-aws/issues/28963)) -* resource/aws_lb_listener_certificate: Show errors in certain cases where they were previously only logged and resource was removed from state ([#28968](https://github.com/hashicorp/terraform-provider-aws/issues/28968)) -* resource/aws_opensearch_domain: Omit `throughput` and `iops` for unsupported volume types ([#28862](https://github.com/hashicorp/terraform-provider-aws/issues/28862)) -* resource/aws_sagemaker_app: Correctly list all apps so as not to lose track in an 
environment where there are many apps ([#28561](https://github.com/hashicorp/terraform-provider-aws/issues/28561)) - -## 4.50.0 (January 13, 2023) - -FEATURES: - -* **New Data Source:** `aws_lbs` ([#27161](https://github.com/hashicorp/terraform-provider-aws/issues/27161)) -* **New Resource:** `aws_sesv2_configuration_set_event_destination` ([#27565](https://github.com/hashicorp/terraform-provider-aws/issues/27565)) - -ENHANCEMENTS: - -* data-source/aws_lb_target_group: Support querying by `tags` ([#27261](https://github.com/hashicorp/terraform-provider-aws/issues/27261)) -* resource/aws_redshiftdata_statement: Add `workgroup_name` argument ([#28751](https://github.com/hashicorp/terraform-provider-aws/issues/28751)) -* resource/aws_service_discovery_service: Add `type` argument ([#28778](https://github.com/hashicorp/terraform-provider-aws/issues/28778)) - -BUG FIXES: - -* resource/aws_acmpca_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28788](https://github.com/hashicorp/terraform-provider-aws/issues/28788)) -* resource/aws_api_gateway_rest_api: Improve refresh to avoid unnecessary diffs in `policy` ([#28789](https://github.com/hashicorp/terraform-provider-aws/issues/28789)) -* resource/aws_api_gateway_rest_api_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28789](https://github.com/hashicorp/terraform-provider-aws/issues/28789)) -* resource/aws_apprunner_service: `observability_configuration_arn` is optional ([#28620](https://github.com/hashicorp/terraform-provider-aws/issues/28620)) -* resource/aws_apprunner_vpc_connector: Fix `default_tags` not handled correctly ([#28736](https://github.com/hashicorp/terraform-provider-aws/issues/28736)) -* resource/aws_appstream_stack: Fix panic on user_settings update ([#28766](https://github.com/hashicorp/terraform-provider-aws/issues/28766)) -* resource/aws_appstream_stack: Prevent unnecessary replacements on update 
([#28766](https://github.com/hashicorp/terraform-provider-aws/issues/28766)) -* resource/aws_backup_vault_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28791](https://github.com/hashicorp/terraform-provider-aws/issues/28791)) -* resource/aws_cloudsearch_domain_service_access_policy: Improve refresh to avoid unnecessary diffs in `access_policy` ([#28792](https://github.com/hashicorp/terraform-provider-aws/issues/28792)) -* resource/aws_cloudwatch_event_bus_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28802](https://github.com/hashicorp/terraform-provider-aws/issues/28802)) -* resource/aws_codeartifact_domain_permissions_policy: Improve refresh to avoid unnecessary diffs in `policy_document` ([#28794](https://github.com/hashicorp/terraform-provider-aws/issues/28794)) -* resource/aws_codeartifact_repository_permissions_policy: Improve refresh to avoid unnecessary diffs in `policy_document` ([#28794](https://github.com/hashicorp/terraform-provider-aws/issues/28794)) -* resource/aws_codebuild_resource_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28796](https://github.com/hashicorp/terraform-provider-aws/issues/28796)) -* resource/aws_dms_replication_subnet_group: Fix error ("Provider produced inconsistent result") when an error is encountered during creation ([#28748](https://github.com/hashicorp/terraform-provider-aws/issues/28748)) -* resource/aws_dms_replication_task: Allow updates to `aws_dms_replication_task` even when `migration_type` and `table_mappings` have not changed ([#28047](https://github.com/hashicorp/terraform-provider-aws/issues/28047)) -* resource/aws_dms_replication_task: Fix error with `cdc_path` when used with `aws_dms_s3_endpoint` ([#28704](https://github.com/hashicorp/terraform-provider-aws/issues/28704)) -* resource/aws_dms_s3_endpoint: Fix error with `cdc_path` when used with `aws_dms_replication_task` ([#28704](https://github.com/hashicorp/terraform-provider-aws/issues/28704)) -* 
resource/aws_ecr_registry_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28799](https://github.com/hashicorp/terraform-provider-aws/issues/28799)) -* resource/aws_ecr_repository_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28799](https://github.com/hashicorp/terraform-provider-aws/issues/28799)) -* resource/aws_ecrpublic_repository_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28799](https://github.com/hashicorp/terraform-provider-aws/issues/28799)) -* resource/aws_efs_file_system_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28800](https://github.com/hashicorp/terraform-provider-aws/issues/28800)) -* resource/aws_elasticsearch_domain: Improve refresh to avoid unnecessary diffs in `access_policies` ([#28801](https://github.com/hashicorp/terraform-provider-aws/issues/28801)) -* resource/aws_elasticsearch_domain_policy: Improve refresh to avoid unnecessary diffs in `access_policies` ([#28801](https://github.com/hashicorp/terraform-provider-aws/issues/28801)) -* resource/aws_glacier_vault: Improve refresh to avoid unnecessary diffs in `access_policy` ([#28804](https://github.com/hashicorp/terraform-provider-aws/issues/28804)) -* resource/aws_glacier_vault_lock: Improve refresh to avoid unnecessary diffs in `policy` ([#28804](https://github.com/hashicorp/terraform-provider-aws/issues/28804)) -* resource/aws_glue_resource_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28807](https://github.com/hashicorp/terraform-provider-aws/issues/28807)) -* resource/aws_iam_group_policy: Fixed issue that could result in "inconsistent final plan" errors ([#28868](https://github.com/hashicorp/terraform-provider-aws/issues/28868)) -* resource/aws_iam_group_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28777](https://github.com/hashicorp/terraform-provider-aws/issues/28777)) -* resource/aws_iam_group_policy: Improve refresh to avoid unnecessary diffs in `policy` 
([#28836](https://github.com/hashicorp/terraform-provider-aws/issues/28836)) -* resource/aws_iam_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28777](https://github.com/hashicorp/terraform-provider-aws/issues/28777)) -* resource/aws_iam_policy: Improve refresh to avoid unnecessary diffs in `policy`, `tags` ([#28836](https://github.com/hashicorp/terraform-provider-aws/issues/28836)) -* resource/aws_iam_role: Fixed issue that could result in "inconsistent final plan" errors ([#28868](https://github.com/hashicorp/terraform-provider-aws/issues/28868)) -* resource/aws_iam_role: Improve refresh to avoid unnecessary diffs in `assume_role_policy` and `inline_policy` `policy` ([#28777](https://github.com/hashicorp/terraform-provider-aws/issues/28777)) -* resource/aws_iam_role: Improve refresh to avoid unnecessary diffs in `inline_policy.*.policy`, `tags` ([#28836](https://github.com/hashicorp/terraform-provider-aws/issues/28836)) -* resource/aws_iam_role_policy: Fixed issue that could result in "inconsistent final plan" errors ([#28868](https://github.com/hashicorp/terraform-provider-aws/issues/28868)) -* resource/aws_iam_role_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28777](https://github.com/hashicorp/terraform-provider-aws/issues/28777)) -* resource/aws_iam_role_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28836](https://github.com/hashicorp/terraform-provider-aws/issues/28836)) -* resource/aws_iam_user_policy: Fixed issue that could result in "inconsistent final plan" errors ([#28868](https://github.com/hashicorp/terraform-provider-aws/issues/28868)) -* resource/aws_iam_user_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28777](https://github.com/hashicorp/terraform-provider-aws/issues/28777)) -* resource/aws_iam_user_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28836](https://github.com/hashicorp/terraform-provider-aws/issues/28836)) -* resource/aws_iot_policy: 
Improve refresh to avoid unnecessary diffs in `policy` ([#28838](https://github.com/hashicorp/terraform-provider-aws/issues/28838)) -* resource/aws_kms_external_key: Improve refresh to avoid unnecessary diffs in `policy` ([#28853](https://github.com/hashicorp/terraform-provider-aws/issues/28853)) -* resource/aws_kms_key: Improve refresh to avoid unnecessary diffs in `policy` ([#28853](https://github.com/hashicorp/terraform-provider-aws/issues/28853)) -* resource/aws_lb_target_group: Change `protocol_version` to [ForceNew](https://developer.hashicorp.com/terraform/plugin/sdkv2/schemas/schema-behaviors#forcenew) ([#17845](https://github.com/hashicorp/terraform-provider-aws/issues/17845)) -* resource/aws_lb_target_group: When creating a new target group, return an error if there is an existing target group with the same name. Use [`terraform import`](https://developer.hashicorp.com/terraform/cli/commands/import) for existing target groups ([#26977](https://github.com/hashicorp/terraform-provider-aws/issues/26977)) -* resource/aws_mq_configuration: Improve refresh to avoid unnecessary diffs in `data` ([#28837](https://github.com/hashicorp/terraform-provider-aws/issues/28837)) -* resource/aws_s3_access_point: Improve refresh to avoid unnecessary diffs in `policy` ([#28866](https://github.com/hashicorp/terraform-provider-aws/issues/28866)) -* resource/aws_s3_bucket: Improve refresh to avoid unnecessary diffs in `policy` ([#28855](https://github.com/hashicorp/terraform-provider-aws/issues/28855)) -* resource/aws_s3_bucket_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28855](https://github.com/hashicorp/terraform-provider-aws/issues/28855)) -* resource/aws_s3control_access_point_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28866](https://github.com/hashicorp/terraform-provider-aws/issues/28866)) -* resource/aws_s3control_bucket_policy: Improve refresh to avoid unnecessary diffs in `policy` 
([#28866](https://github.com/hashicorp/terraform-provider-aws/issues/28866)) -* resource/aws_s3control_multi_region_access_point_policy: Improve refresh to avoid unnecessary diffs in `details` `policy` ([#28866](https://github.com/hashicorp/terraform-provider-aws/issues/28866)) -* resource/aws_s3control_object_lambda_access_point_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28866](https://github.com/hashicorp/terraform-provider-aws/issues/28866)) -* resource/aws_sagemaker_model_package_group_policy: Improve refresh to avoid unnecessary diffs in `resource_policy` ([#28865](https://github.com/hashicorp/terraform-provider-aws/issues/28865)) -* resource/aws_schemas_registry_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28864](https://github.com/hashicorp/terraform-provider-aws/issues/28864)) -* resource/aws_secretsmanager_secret: Improve refresh to avoid unnecessary diffs in `policy` ([#28863](https://github.com/hashicorp/terraform-provider-aws/issues/28863)) -* resource/aws_secretsmanager_secret_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28863](https://github.com/hashicorp/terraform-provider-aws/issues/28863)) -* resource/aws_ses_identity_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28861](https://github.com/hashicorp/terraform-provider-aws/issues/28861)) -* resource/aws_sns_topic: Improve refresh to avoid unnecessary diffs in `policy` ([#28860](https://github.com/hashicorp/terraform-provider-aws/issues/28860)) -* resource/aws_sns_topic_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28860](https://github.com/hashicorp/terraform-provider-aws/issues/28860)) -* resource/aws_sqs_queue: Improve refresh to avoid unnecessary diffs in `policy` ([#28840](https://github.com/hashicorp/terraform-provider-aws/issues/28840)) -* resource/aws_sqs_queue_policy: Improve refresh to avoid unnecessary diffs in `policy` 
([#28840](https://github.com/hashicorp/terraform-provider-aws/issues/28840)) -* resource/aws_transfer_access: Improve refresh to avoid unnecessary diffs in `policy` ([#28859](https://github.com/hashicorp/terraform-provider-aws/issues/28859)) -* resource/aws_transfer_user: Improve refresh to avoid unnecessary diffs in `policy` ([#28859](https://github.com/hashicorp/terraform-provider-aws/issues/28859)) -* resource/aws_vpc_endpoint: Improve refresh to avoid unnecessary diffs in `policy` ([#28798](https://github.com/hashicorp/terraform-provider-aws/issues/28798)) -* resource/aws_vpc_endpoint_policy: Improve refresh to avoid unnecessary diffs in `policy` ([#28798](https://github.com/hashicorp/terraform-provider-aws/issues/28798)) - -## 4.49.0 (January 5, 2023) - -NOTES: - -* resource/aws_dms_endpoint: For `s3_settings` `cdc_min_file_size`, AWS changed the multiplier to kilobytes instead of megabytes. In other words, prior to the change, a value of `32` represented 32 MiB. After the change, a value of `32` represents 32 KB. Change your configuration accordingly. 
([#28578](https://github.com/hashicorp/terraform-provider-aws/issues/28578)) -* resource/aws_fsx_ontap_storage_virtual_machine: The `subtype` attribute is no longer deprecated ([#28567](https://github.com/hashicorp/terraform-provider-aws/issues/28567)) - -FEATURES: - -* **New Data Source:** `aws_s3control_multi_region_access_point` ([#28373](https://github.com/hashicorp/terraform-provider-aws/issues/28373)) -* **New Resource:** `aws_appsync_type` ([#28437](https://github.com/hashicorp/terraform-provider-aws/issues/28437)) -* **New Resource:** `aws_auditmanager_assessment` ([#28643](https://github.com/hashicorp/terraform-provider-aws/issues/28643)) -* **New Resource:** `aws_auditmanager_assessment_report` ([#28663](https://github.com/hashicorp/terraform-provider-aws/issues/28663)) -* **New Resource:** `aws_ec2_instance_state` ([#28639](https://github.com/hashicorp/terraform-provider-aws/issues/28639)) -* **New Resource:** `aws_lightsail_bucket` ([#28585](https://github.com/hashicorp/terraform-provider-aws/issues/28585)) -* **New Resource:** `aws_ssoadmin_instance_access_control_attributes` ([#23317](https://github.com/hashicorp/terraform-provider-aws/issues/23317)) - -ENHANCEMENTS: - -* data-source/aws_autoscaling_group: Add `desired_capacity_type` attribute ([#28658](https://github.com/hashicorp/terraform-provider-aws/issues/28658)) -* data-source/aws_kms_secrets: Add `encryption_algorithm` and `key_id` arguments in support of [asymmetric keys](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) ([#21054](https://github.com/hashicorp/terraform-provider-aws/issues/21054)) -* resource/aws_appflow_connector_profile: Add support for `connector_type` CustomConnector. 
Add `cluster_identifier`, `database_name`, and `data_api_role_arn` attributes for `redshift` `connection_profile_properties` ([#26766](https://github.com/hashicorp/terraform-provider-aws/issues/26766)) -* resource/aws_appsync_resolver: Add `runtime` and `code` arguments ([#28436](https://github.com/hashicorp/terraform-provider-aws/issues/28436)) -* resource/aws_appsync_resolver: Add plan time validation for `caching_config.ttl` ([#28436](https://github.com/hashicorp/terraform-provider-aws/issues/28436)) -* resource/aws_athena_workgroup: Add `configuration.execution_role` argument ([#28420](https://github.com/hashicorp/terraform-provider-aws/issues/28420)) -* resource/aws_autoscaling_group: Add `desired_capacity_type` argument ([#28658](https://github.com/hashicorp/terraform-provider-aws/issues/28658)) -* resource/aws_dms_endpoint: Change `s3_settings` `cdc_min_file_size` default to 32000 in order to align with AWS's change from megabytes to kilobytes for this setting ([#28578](https://github.com/hashicorp/terraform-provider-aws/issues/28578)) -* resource/aws_ecs_service: Add `alarms` argument ([#28521](https://github.com/hashicorp/terraform-provider-aws/issues/28521)) -* resource/aws_lightsail_instance: Add `add_on` configuration block. 
([#28602](https://github.com/hashicorp/terraform-provider-aws/issues/28602)) -* resource/aws_lightsail_instance_public_ports: Add `cidr_list_aliases` argument ([#28376](https://github.com/hashicorp/terraform-provider-aws/issues/28376)) -* resource/aws_s3_access_point: Add `bucket_account_id` argument ([#28564](https://github.com/hashicorp/terraform-provider-aws/issues/28564)) -* resource/aws_s3control_storage_lens_configuration: Add `advanced_cost_optimization_metrics`, `advanced_data_protection_metrics`, and `detailed_status_code_metrics` arguments to the `storage_lens_configuration.account_level` and `storage_lens_configuration.account_level.bucket_level` configuration blocks ([#28564](https://github.com/hashicorp/terraform-provider-aws/issues/28564)) -* resource/aws_wafv2_rule_group: Add `rule.action.captcha` argument ([#28435](https://github.com/hashicorp/terraform-provider-aws/issues/28435)) -* resource/aws_wafv2_web_acl: Add `rule.action.challenge` argument ([#28305](https://github.com/hashicorp/terraform-provider-aws/issues/28305)) -* resource/aws_wafv2_web_acl: Add support for ManagedRuleGroupConfig ([#28594](https://github.com/hashicorp/terraform-provider-aws/issues/28594)) - -BUG FIXES: - -* data-source/aws_cloudwatch_log_group: Restore use of `ListTagsLogGroup` API ([#28492](https://github.com/hashicorp/terraform-provider-aws/issues/28492)) -* resource/aws_cloudwatch_log_group: Restore use of `ListTagsLogGroup`, `TagLogGroup` and `UntagLogGroup` APIs ([#28492](https://github.com/hashicorp/terraform-provider-aws/issues/28492)) -* resource/aws_dms_endpoint: Add s3 setting `ignore_header_rows` and deprecate misspelled `ignore_headers_row`. 
([#28579](https://github.com/hashicorp/terraform-provider-aws/issues/28579)) -* resource/aws_elasticache_user_group_association: Retry on `InvalidUserGroupState` errors to handle concurrent updates ([#28689](https://github.com/hashicorp/terraform-provider-aws/issues/28689)) -* resource/aws_lambda_function_url: Fix removal of `cors` configuration block ([#28439](https://github.com/hashicorp/terraform-provider-aws/issues/28439)) -* resource/aws_lightsail_database: The `availability_zone` attribute is now optional/computed to support HA `bundle_id`s ([#28590](https://github.com/hashicorp/terraform-provider-aws/issues/28590)) -* resource/aws_lightsail_disk_attachment: Fix panic that occurred when an attachment failed and the provider attempted to display the error returned by AWS ([#28593](https://github.com/hashicorp/terraform-provider-aws/issues/28593)) - -## 4.48.0 (December 19, 2022) - -FEATURES: - -* **New Resource:** `aws_dx_macsec_key_association` ([#26274](https://github.com/hashicorp/terraform-provider-aws/issues/26274)) - -ENHANCEMENTS: - -* resource/aws_dx_connection: Add `encryption_mode` and `request_macsec` arguments and `macsec_capable` and `port_encryption_status` attributes in support of [MACsec](https://docs.aws.amazon.com/directconnect/latest/UserGuide/MACsec.html) ([#26274](https://github.com/hashicorp/terraform-provider-aws/issues/26274)) -* resource/aws_dx_connection: Add `skip_destroy` argument ([#26274](https://github.com/hashicorp/terraform-provider-aws/issues/26274)) -* resource/aws_eks_node_group: Add support for `WINDOWS_CORE_2019_x86_64`, `WINDOWS_FULL_2019_x86_64`, `WINDOWS_CORE_2022_x86_64`, and `WINDOWS_FULL_2022_x86_64` `ami_type` values ([#28445](https://github.com/hashicorp/terraform-provider-aws/issues/28445)) -* resource/aws_networkfirewall_rule_group: Add `reference_sets` configuration block ([#28335](https://github.com/hashicorp/terraform-provider-aws/issues/28335)) -* resource/aws_networkmanager_vpc_attachment: Add `options.appliance_mode_support` 
argument ([#28450](https://github.com/hashicorp/terraform-provider-aws/issues/28450)) - -BUG FIXES: - -* resource/aws_networkfirewall_rule_group: Change `rule_group.rules_source.stateful_rule` from `TypeSet` to `TypeList` to preserve rule order ([#27102](https://github.com/hashicorp/terraform-provider-aws/issues/27102)) - -## 4.47.0 (December 15, 2022) - -FEATURES: - -* **New Data Source:** `aws_cloudwatch_log_data_protection_policy_document` ([#28272](https://github.com/hashicorp/terraform-provider-aws/issues/28272)) -* **New Data Source:** `aws_db_instances` ([#28303](https://github.com/hashicorp/terraform-provider-aws/issues/28303)) -* **New Resource:** `aws_auditmanager_account_registration` ([#28314](https://github.com/hashicorp/terraform-provider-aws/issues/28314)) -* **New Resource:** `aws_auditmanager_framework` ([#28257](https://github.com/hashicorp/terraform-provider-aws/issues/28257)) -* **New Resource:** `aws_lambda_functions` ([#28254](https://github.com/hashicorp/terraform-provider-aws/issues/28254)) -* **New Resource:** `aws_sagemaker_space` ([#28154](https://github.com/hashicorp/terraform-provider-aws/issues/28154)) -* **New Resource:** `aws_ssoadmin_permissions_boundary_attachment` ([#28241](https://github.com/hashicorp/terraform-provider-aws/issues/28241)) - -ENHANCEMENTS: - -* data-source/aws_cloudwatch_log_group: Use resource tagging APIs that are not on a path to deprecation ([#28359](https://github.com/hashicorp/terraform-provider-aws/issues/28359)) -* data-source/aws_eks_addon: Add `configuration_values` attribute ([#28295](https://github.com/hashicorp/terraform-provider-aws/issues/28295)) -* resource/aws_appsync_function: Add `runtime` and `code` arguments ([#28057](https://github.com/hashicorp/terraform-provider-aws/issues/28057)) -* resource/aws_appsync_function: Make `request_mapping_template` and `response_mapping_template` Optional ([#28057](https://github.com/hashicorp/terraform-provider-aws/issues/28057)) -* 
resource/aws_cloudwatch_log_destination: Add `tags` argument and `tags_all` attribute to support resource tagging ([#28359](https://github.com/hashicorp/terraform-provider-aws/issues/28359)) -* resource/aws_cloudwatch_log_group: Use resource tagging APIs that are not on a path to deprecation ([#28359](https://github.com/hashicorp/terraform-provider-aws/issues/28359)) -* resource/aws_eks_addon: Add `configuration_values` argument ([#28295](https://github.com/hashicorp/terraform-provider-aws/issues/28295)) -* resource/aws_grafana_workspace: Add `vpc_configuration` argument. ([#28308](https://github.com/hashicorp/terraform-provider-aws/issues/28308)) -* resource/aws_networkmanager_core_network: Increase Create, Update, and Delete timeouts to 30 minutes ([#28363](https://github.com/hashicorp/terraform-provider-aws/issues/28363)) -* resource/aws_sagemaker_app: Add `space_name` argument ([#28154](https://github.com/hashicorp/terraform-provider-aws/issues/28154)) -* resource/aws_sagemaker_app: Make `user_profile_name` optional ([#28154](https://github.com/hashicorp/terraform-provider-aws/issues/28154)) -* resource/aws_sagemaker_domain: Add `default_space_settings` and `default_user_settings.jupyter_server_app_settings.code_repository` arguments ([#28154](https://github.com/hashicorp/terraform-provider-aws/issues/28154)) -* resource/aws_sagemaker_endpoint_configuration: Add `shadow_production_variants`, `production_variants.container_startup_health_check_timeout_in_seconds`, `production_variants.core_dump_config`, `production_variants.model_data_download_timeout_in_seconds`, and `production_variants.volume_size_in_gb` arguments ([#28159](https://github.com/hashicorp/terraform-provider-aws/issues/28159)) -* resource/aws_sagemaker_user_profile: Add `user_settings.jupyter_server_app_settings.code_repository` argument ([#28154](https://github.com/hashicorp/terraform-provider-aws/issues/28154)) - -BUG FIXES: - -* resource/aws_cloudwatch_metric_stream: Correctly update `tags` 
([#28310](https://github.com/hashicorp/terraform-provider-aws/issues/28310)) -* resource/aws_db_instance: Ensure that `apply_immediately` default value is applied ([#25768](https://github.com/hashicorp/terraform-provider-aws/issues/25768)) -* resource/aws_ecs_service: Fix `missing required field, UpdateServiceInput.ServiceConnectConfiguration.Enabled` error when removing `service_connect_configuration` configuration block ([#28338](https://github.com/hashicorp/terraform-provider-aws/issues/28338)) -* resource/aws_ecs_service: Fix `service_connect_configuration.service.ingress_port_override` being set to 0 (`InvalidParameterException: IngressPortOverride cannot use ports <= 1024` error) when not configured ([#28338](https://github.com/hashicorp/terraform-provider-aws/issues/28338)) - -## 4.46.0 (December 8, 2022) - -FEATURES: - -* **New Data Source:** `aws_glue_catalog_table` ([#23256](https://github.com/hashicorp/terraform-provider-aws/issues/23256)) -* **New Resource:** `aws_auditmanager_control` ([#27857](https://github.com/hashicorp/terraform-provider-aws/issues/27857)) -* **New Resource:** `aws_networkmanager_core_network` ([#28155](https://github.com/hashicorp/terraform-provider-aws/issues/28155)) -* **New Resource:** `aws_resourceexplorer2_index` ([#28144](https://github.com/hashicorp/terraform-provider-aws/issues/28144)) -* **New Resource:** `aws_rum_metrics_destination` ([#28143](https://github.com/hashicorp/terraform-provider-aws/issues/28143)) -* **New Resource:** `aws_vpc_network_performance_metric_subscription` ([#28150](https://github.com/hashicorp/terraform-provider-aws/issues/28150)) - -ENHANCEMENTS: - -* resource/aws_glue_crawler: Add `catalog_target.dlq_event_queue_arn`, `catalog_target.event_queue_arn`, `catalog_target.connection_name`, `lake_formation_configuration`, and `jdbc_target.enable_additional_metadata` arguments ([#28156](https://github.com/hashicorp/terraform-provider-aws/issues/28156)) -* resource/aws_glue_crawler: Make 
`delta_target.connection_name` optional ([#28156](https://github.com/hashicorp/terraform-provider-aws/issues/28156)) -* resource/aws_networkfirewall_firewall: Add `encryption_configuration` attribute ([#28242](https://github.com/hashicorp/terraform-provider-aws/issues/28242)) -* resource/aws_networkfirewall_firewall_policy: Add `encryption_configuration` attribute ([#28242](https://github.com/hashicorp/terraform-provider-aws/issues/28242)) -* resource/aws_networkfirewall_rule_group: Add `encryption_configuration` attribute ([#28242](https://github.com/hashicorp/terraform-provider-aws/issues/28242)) - -BUG FIXES: - -* resource/aws_db_instance: Fix error modifying `allocated_storage` when `storage_type` is `"gp3"` ([#28243](https://github.com/hashicorp/terraform-provider-aws/issues/28243)) -* resource/aws_dms_s3_endpoint: Fix disparate handling of endpoint attributes in different regions ([#28220](https://github.com/hashicorp/terraform-provider-aws/issues/28220)) -* resource/aws_evidently_feature: Fix `description` attribute to accept strings between `0` and `160` in length ([#27948](https://github.com/hashicorp/terraform-provider-aws/issues/27948)) -* resource/aws_lb_target_group: Allow `healthy_threshold` and `unhealthy_threshold` to be set to different values for TCP health checks. 
([#28018](https://github.com/hashicorp/terraform-provider-aws/issues/28018)) -* resource/aws_lb_target_group: Allow `interval` to be updated for TCP health checks ([#28018](https://github.com/hashicorp/terraform-provider-aws/issues/28018)) -* resource/aws_lb_target_group: Allow `timeout` to be set for TCP health checks ([#28018](https://github.com/hashicorp/terraform-provider-aws/issues/28018)) -* resource/aws_lb_target_group: Don't force recreation on `health_check` attribute changes ([#28018](https://github.com/hashicorp/terraform-provider-aws/issues/28018)) -* resource/aws_sns_topic_subscription: Fix unsupported `FilterPolicyScope` attribute error in the aws-cn partition ([#28253](https://github.com/hashicorp/terraform-provider-aws/issues/28253)) - -## 4.45.0 (December 2, 2022) - -NOTES: - -* provider: With AWS's retirement of EC2-Classic the `skip_get_ec2_platforms` attribute has been deprecated and will be removed in a future version ([#28084](https://github.com/hashicorp/terraform-provider-aws/issues/28084)) -* resource/aws_fsx_ontap_storage_virtual_machine: The `subtype` attribute has been deprecated and will be removed in a future version ([#28127](https://github.com/hashicorp/terraform-provider-aws/issues/28127)) - -FEATURES: - -* **New Resource:** `aws_dms_s3_endpoint` ([#28130](https://github.com/hashicorp/terraform-provider-aws/issues/28130)) - -ENHANCEMENTS: - -* data-source/aws_db_instance: Add `storage_throughput` attribute ([#27670](https://github.com/hashicorp/terraform-provider-aws/issues/27670)) -* data-source/aws_eks_cluster: Add `cluster_id` attribute ([#28112](https://github.com/hashicorp/terraform-provider-aws/issues/28112)) -* resource/aws_db_instance: Add `storage_throughput` argument ([#27670](https://github.com/hashicorp/terraform-provider-aws/issues/27670)) -* resource/aws_db_instance: Add support for `gp3` `storage_type` value ([#27670](https://github.com/hashicorp/terraform-provider-aws/issues/27670)) -* resource/aws_db_instance: 
Change `iops` to `Computed` ([#27670](https://github.com/hashicorp/terraform-provider-aws/issues/27670)) -* resource/aws_eks_cluster: Add `cluster_id` attribute and `outpost_config.control_plane_placement` argument ([#28112](https://github.com/hashicorp/terraform-provider-aws/issues/28112)) -* resource/aws_redshiftserverless_workgroup: Wait on `MODIFYING` status on resource Delete ([#28114](https://github.com/hashicorp/terraform-provider-aws/issues/28114)) - -BUG FIXES: - -* resource/aws_redshiftserverless_namespace: Fix updating `admin_username` and `admin_user_password` ([#28125](https://github.com/hashicorp/terraform-provider-aws/issues/28125)) - -## 4.44.0 (November 30, 2022) - -NOTES: - -* resource/aws_fsx_ontap_storage_virtual_machine: The `subtype` attribute will always have the value `"DEFAULT"` ([#28085](https://github.com/hashicorp/terraform-provider-aws/issues/28085)) -* resource/aws_wafv2_web_acl: `excluded_rule` on `managed_rule_group_statement` has been deprecated. All configurations using `excluded_rule` should be updated to use the new `rule_action_override` attribute instead ([#27954](https://github.com/hashicorp/terraform-provider-aws/issues/27954)) - -ENHANCEMENTS: - -* resource/aws_api_gateway_deployment: Add import support ([#28030](https://github.com/hashicorp/terraform-provider-aws/issues/28030)) -* resource/aws_kinesisanalyticsv2_application: Add support for `FLINK-1_15` `runtime_environment` value ([#28099](https://github.com/hashicorp/terraform-provider-aws/issues/28099)) -* resource/aws_lambda_function: Add `snap_start` attribute ([#28097](https://github.com/hashicorp/terraform-provider-aws/issues/28097)) -* resource/aws_wafv2_web_acl: Support `rule_action_override` on `managed_rule_group_statement` ([#27954](https://github.com/hashicorp/terraform-provider-aws/issues/27954)) - -BUG FIXES: - -* resource/aws_instance: Change `iam_instance_profile` to `Computed` as the value may be configured via a launch template 
([#27972](https://github.com/hashicorp/terraform-provider-aws/issues/27972)) - -## 4.43.0 (November 29, 2022) - -FEATURES: - -* **New Resource:** `aws_neptune_global_cluster` ([#26133](https://github.com/hashicorp/terraform-provider-aws/issues/26133)) - -ENHANCEMENTS: - -* data-source/aws_ecs_cluster: Add `service_connect_defaults` attribute ([#28052](https://github.com/hashicorp/terraform-provider-aws/issues/28052)) -* resource/aws_ce_cost_category: Allow configuration of `effective_start` value ([#28055](https://github.com/hashicorp/terraform-provider-aws/issues/28055)) -* resource/aws_ecs_cluster: Add `service_connect_defaults` argument ([#28052](https://github.com/hashicorp/terraform-provider-aws/issues/28052)) -* resource/aws_ecs_service: Add `service_connect_configuration` argument in support of [ECS Service Connect](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect.html) ([#28052](https://github.com/hashicorp/terraform-provider-aws/issues/28052)) -* resource/aws_glue_classifier: Add `custom_datatypes` and `custom_datatype_configured` arguments ([#28048](https://github.com/hashicorp/terraform-provider-aws/issues/28048)) -* resource/aws_neptune_cluster: Add `global_cluster_identifier` argument ([#26133](https://github.com/hashicorp/terraform-provider-aws/issues/26133)) - -## 4.42.0 (November 28, 2022) - -FEATURES: - -* **New Data Source:** `aws_redshiftserverless_credentials` ([#28026](https://github.com/hashicorp/terraform-provider-aws/issues/28026)) -* **New Resource:** `aws_cloudwatch_log_data_protection_policy` ([#28049](https://github.com/hashicorp/terraform-provider-aws/issues/28049)) - -ENHANCEMENTS: - -* data-source/aws_memorydb_cluster: Add `data_tiering` attribute ([#28022](https://github.com/hashicorp/terraform-provider-aws/issues/28022)) -* resource/aws_db_instance: Add `blue_green_update` argument in support of [RDS Blue/Green Deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html) 
([#28046](https://github.com/hashicorp/terraform-provider-aws/issues/28046)) -* resource/aws_efs_file_system: Add support for `AFTER_1_DAY` `lifecycle_policy.transition_to_ia` argument ([#28054](https://github.com/hashicorp/terraform-provider-aws/issues/28054)) -* resource/aws_efs_file_system: Add support for `elastic` `throughput_mode` argument ([#28054](https://github.com/hashicorp/terraform-provider-aws/issues/28054)) -* resource/aws_emrserverless_application: Add `architecture` argument ([#28027](https://github.com/hashicorp/terraform-provider-aws/issues/28027)) -* resource/aws_emrserverless_application: Mark `maximum_capacity` and `maximum_capacity.disk` as Computed, preventing spurious resource diffs ([#28027](https://github.com/hashicorp/terraform-provider-aws/issues/28027)) -* resource/aws_memorydb_cluster: Add `data_tiering` attribute ([#28022](https://github.com/hashicorp/terraform-provider-aws/issues/28022)) -* resource/aws_sns_topic_subscription: Add `filter_policy_scope` argument in support of [SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) ([#28004](https://github.com/hashicorp/terraform-provider-aws/issues/28004)) - -BUG FIXES: - -* resource/aws_lambda_function: Don't fail resource Create if AWS Signer service is not available in the configured Region ([#28008](https://github.com/hashicorp/terraform-provider-aws/issues/28008)) -* resource/aws_memorydb_cluster: Allow more than one element in `snapshot_arns` ([#28022](https://github.com/hashicorp/terraform-provider-aws/issues/28022)) -* resource/aws_sagemaker_user_profile: `user_settings.jupyter_server_app_settings`, `user_settings.kernel_gateway_app_settings`, and `user_settings.tensor_board_app_settings` are updateable ([#28025](https://github.com/hashicorp/terraform-provider-aws/issues/28025)) - -## 4.41.0 (November 25, 2022) - -FEATURES: - -* **New Data Source:** `aws_sqs_queues` 
([#27890](https://github.com/hashicorp/terraform-provider-aws/issues/27890)) -* **New Resource:** `aws_ivschat_logging_configuration` ([#27924](https://github.com/hashicorp/terraform-provider-aws/issues/27924)) -* **New Resource:** `aws_ivschat_room` ([#27974](https://github.com/hashicorp/terraform-provider-aws/issues/27974)) -* **New Data Source:** `aws_rds_clusters` ([#27891](https://github.com/hashicorp/terraform-provider-aws/issues/27891)) -* **New Resource:** `aws_redshiftserverless_resource_policy` ([#27920](https://github.com/hashicorp/terraform-provider-aws/issues/27920)) -* **New Resource:** `aws_scheduler_schedule` ([#27975](https://github.com/hashicorp/terraform-provider-aws/issues/27975)) - -ENHANCEMENTS: - -* data-source/aws_cloudtrail_service_account: Add service account ID for `ap-south-2` AWS Region ([#27983](https://github.com/hashicorp/terraform-provider-aws/issues/27983)) -* data-source/aws_elasticache_cluster: Add `cache_nodes.outpost_arn` and `preferred_outpost_arn` attributes ([#27934](https://github.com/hashicorp/terraform-provider-aws/issues/27934)) -* data-source/aws_elasticache_cluster: Add `ip_discovery` and `network_type` attributes ([#27856](https://github.com/hashicorp/terraform-provider-aws/issues/27856)) -* data-source/aws_elb_hosted_zone_id: Add hosted zone ID for `ap-south-2` AWS Region ([#27983](https://github.com/hashicorp/terraform-provider-aws/issues/27983)) -* data-source/aws_lb_hosted_zone_id: Add hosted zone IDs for `ap-south-2` AWS Region ([#27983](https://github.com/hashicorp/terraform-provider-aws/issues/27983)) -* data-source/aws_rds_cluster: Add `engine_mode` attribute ([#27892](https://github.com/hashicorp/terraform-provider-aws/issues/27892)) -* provider: Support `ap-south-2` as a valid AWS Region ([#27950](https://github.com/hashicorp/terraform-provider-aws/issues/27950)) -* resource/aws_amplify_app: Add support for `WEB_COMPUTE` `platform` value in support of [Next.js web
apps](https://docs.aws.amazon.com/amplify/latest/userguide/ssr-Amplify-support.html) ([#27925](https://github.com/hashicorp/terraform-provider-aws/issues/27925)) -* resource/aws_elasticache_cluster: Add `ip_discovery` and `network_type` arguments in support of [IPv6 clusters](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/network-type.html) ([#27856](https://github.com/hashicorp/terraform-provider-aws/issues/27856)) -* resource/aws_elasticache_cluster: Add `outpost_mode` and `preferred_outpost_arn` arguments and `cache_nodes.outpost_arn` attribute. NOTE: Because we cannot easily test this functionality, it is best effort and we ask for community help in testing ([#27934](https://github.com/hashicorp/terraform-provider-aws/issues/27934)) -* resource/aws_lambda_function: Add support for `nodejs18.x` `runtime` value ([#27923](https://github.com/hashicorp/terraform-provider-aws/issues/27923)) -* resource/aws_lambda_layer_version: Add support for `nodejs18.x` `compatible_runtimes` value ([#27923](https://github.com/hashicorp/terraform-provider-aws/issues/27923)) -* resource/aws_medialive_channel: Add `start_channel` attribute ([#27882](https://github.com/hashicorp/terraform-provider-aws/issues/27882)) -* resource/aws_nat_gateway: Update `private_ip` attribute to be configurable ([#27953](https://github.com/hashicorp/terraform-provider-aws/issues/27953)) - -BUG FIXES: - -* resource/aws_cloudcontrolapi_resource: Remove invalid regular expressions from CloudFormation resource schema ([#27935](https://github.com/hashicorp/terraform-provider-aws/issues/27935)) -* resource/aws_dms_endpoint: Add ability to use AWS Secrets Manager with the `sybase` engine ([#27949](https://github.com/hashicorp/terraform-provider-aws/issues/27949)) -* resource/aws_resourcegroups_group: Properly set `configuration.parameters` as optional ([#27985](https://github.com/hashicorp/terraform-provider-aws/issues/27985)) - -## 4.40.0 (November 17, 2022) - -NOTES: - -* 
data-source/aws_identitystore_group: The `filter` argument has been deprecated. Use the `alternate_identifier` argument instead ([#27762](https://github.com/hashicorp/terraform-provider-aws/issues/27762)) - -FEATURES: - -* **New Data Source:** `aws_controltower_controls` ([#26978](https://github.com/hashicorp/terraform-provider-aws/issues/26978)) -* **New Data Source:** `aws_ivs_stream_key` ([#27789](https://github.com/hashicorp/terraform-provider-aws/issues/27789)) -* **New Resource:** `aws_appconfig_extension` ([#27860](https://github.com/hashicorp/terraform-provider-aws/issues/27860)) -* **New Resource:** `aws_appconfig_extension_association` ([#27860](https://github.com/hashicorp/terraform-provider-aws/issues/27860)) -* **New Resource:** `aws_controltower_control` ([#26990](https://github.com/hashicorp/terraform-provider-aws/issues/26990)) -* **New Resource:** `aws_evidently_feature` ([#27395](https://github.com/hashicorp/terraform-provider-aws/issues/27395)) -* **New Resource:** `aws_ivs_channel` ([#27726](https://github.com/hashicorp/terraform-provider-aws/issues/27726)) -* **New Resource:** `aws_networkmanager_connect_attachment` ([#27787](https://github.com/hashicorp/terraform-provider-aws/issues/27787)) -* **New Resource:** `aws_opensearch_inbound_connection_accepter` ([#22988](https://github.com/hashicorp/terraform-provider-aws/issues/22988)) -* **New Resource:** `aws_opensearch_outbound_connection` ([#22988](https://github.com/hashicorp/terraform-provider-aws/issues/22988)) -* **New Resource:** `aws_scheduler_schedule_group` ([#27800](https://github.com/hashicorp/terraform-provider-aws/issues/27800)) -* **New Resource:** `aws_schemas_registry_policy` ([#27705](https://github.com/hashicorp/terraform-provider-aws/issues/27705)) -* **New Resource:** `aws_sesv2_email_identity_mail_from_attributes` ([#27672](https://github.com/hashicorp/terraform-provider-aws/issues/27672)) - -ENHANCEMENTS: - -* data-source/aws_cloudtrail_service_account: Add service account 
ID for `eu-central-2` AWS Region ([#27814](https://github.com/hashicorp/terraform-provider-aws/issues/27814)) -* data-source/aws_cloudtrail_service_account: Add service account ID for `eu-south-2` AWS Region ([#27855](https://github.com/hashicorp/terraform-provider-aws/issues/27855)) -* data-source/aws_connect_instance: Add `multi_party_conference_enabled` attribute ([#27734](https://github.com/hashicorp/terraform-provider-aws/issues/27734)) -* data-source/aws_elb_hosted_zone_id: Add hosted zone ID for `eu-central-2` AWS Region ([#27814](https://github.com/hashicorp/terraform-provider-aws/issues/27814)) -* data-source/aws_elb_hosted_zone_id: Add hosted zone ID for `eu-south-2` AWS Region ([#27855](https://github.com/hashicorp/terraform-provider-aws/issues/27855)) -* data-source/aws_identitystore_group: Add `alternate_identifier` argument and `description` attribute ([#27762](https://github.com/hashicorp/terraform-provider-aws/issues/27762)) -* data-source/aws_lb_hosted_zone_id: Add hosted zone IDs for `eu-central-2` AWS Region ([#27814](https://github.com/hashicorp/terraform-provider-aws/issues/27814)) -* data-source/aws_lb_hosted_zone_id: Add hosted zone IDs for `eu-south-2` AWS Region ([#27855](https://github.com/hashicorp/terraform-provider-aws/issues/27855)) -* data-source/aws_s3_bucket: Add hosted zone ID for `eu-central-2` AWS Region ([#27814](https://github.com/hashicorp/terraform-provider-aws/issues/27814)) -* data-source/aws_s3_bucket: Add hosted zone ID for `eu-south-2` AWS Region ([#27855](https://github.com/hashicorp/terraform-provider-aws/issues/27855)) -* provider: Support `eu-central-2` as a valid AWS Region ([#27812](https://github.com/hashicorp/terraform-provider-aws/issues/27812)) -* provider: Support `eu-south-2` as a valid AWS Region ([#27847](https://github.com/hashicorp/terraform-provider-aws/issues/27847)) -* resource/aws_acm_certificate: Add `key_algorithm` argument in support of [ECDSA TLS 
certificates](https://docs.aws.amazon.com/acm/latest/userguide/acm-certificate.html#algorithms) ([#27781](https://github.com/hashicorp/terraform-provider-aws/issues/27781)) -* resource/aws_autoscaling_group: Add support for `price-capacity-optimized` `spot_allocation_strategy` value ([#27795](https://github.com/hashicorp/terraform-provider-aws/issues/27795)) -* resource/aws_cloudwatch_log_group: Add `skip_destroy` argument ([#26775](https://github.com/hashicorp/terraform-provider-aws/issues/26775)) -* resource/aws_cognito_user_pool: Add `sns_region` attribute to `sms_configuration` block ([#26684](https://github.com/hashicorp/terraform-provider-aws/issues/26684)) -* resource/aws_connect_instance: Add `multi_party_conference_enabled` argument ([#27734](https://github.com/hashicorp/terraform-provider-aws/issues/27734)) -* resource/aws_customer_gateway: Make `ip_address` optional ([#26673](https://github.com/hashicorp/terraform-provider-aws/issues/26673)) -* resource/aws_docdb_cluster_instance: Add `enable_performance_insights` and `performance_insights_kms_key_id` arguments ([#27769](https://github.com/hashicorp/terraform-provider-aws/issues/27769)) -* resource/aws_dynamodb_table_item: Allow the creation of items with the same hash key but different range keys ([#27517](https://github.com/hashicorp/terraform-provider-aws/issues/27517)) -* resource/aws_ec2_fleet: Add support for `price-capacity-optimized` `spot_options.allocation_strategy` value ([#27795](https://github.com/hashicorp/terraform-provider-aws/issues/27795)) -* resource/aws_ecs_service: Add `triggers` argument to enable in-place updates (redeployments) on each apply, when used with `force_new_deployment = true` ([#25840](https://github.com/hashicorp/terraform-provider-aws/issues/25840)) -* resource/aws_medialive_channel: Add support for more `output`, `output_groups`, `audio_descriptions` and `video_descriptions` in `encoder_settings`.
Add support for `input_settings` in `input_attachments` ([#27823](https://github.com/hashicorp/terraform-provider-aws/issues/27823)) -* resource/aws_msk_cluster: Add `storage_mode` argument ([#27546](https://github.com/hashicorp/terraform-provider-aws/issues/27546)) -* resource/aws_neptune_cluster: Add `serverless_v2_scaling_configuration` block in support of [Neptune Serverless](https://docs.aws.amazon.com/neptune/latest/userguide/neptune-serverless.html) ([#27763](https://github.com/hashicorp/terraform-provider-aws/issues/27763)) -* resource/aws_network_interface_sg_attachment: Add import support ([#27785](https://github.com/hashicorp/terraform-provider-aws/issues/27785)) -* resource/aws_security_group_rule: Add `security_group_rule_id` attribute ([#27828](https://github.com/hashicorp/terraform-provider-aws/issues/27828)) -* resource/aws_spot_fleet_request: Add support for `priceCapacityOptimized` `allocation_strategy` value ([#27795](https://github.com/hashicorp/terraform-provider-aws/issues/27795)) - -BUG FIXES: - -* resource/aws_appstream_stack: Fix `redirect_url` max character length ([#27744](https://github.com/hashicorp/terraform-provider-aws/issues/27744)) -* resource/aws_dynamodb_table: Allow changing KMS keys on tables with replicas. 
([#23156](https://github.com/hashicorp/terraform-provider-aws/issues/23156)) -* resource/aws_route53_resolver_endpoint: Fix deduplication with multiple IPs on the same subnet ([#25708](https://github.com/hashicorp/terraform-provider-aws/issues/25708)) -* resource/aws_sesv2_email_identity_feedback_attributes: Fix invalid resource ID in error messages when creating the resource ([#27784](https://github.com/hashicorp/terraform-provider-aws/issues/27784)) - -## 4.39.0 (November 10, 2022) - -BREAKING CHANGES: - -* resource/aws_secretsmanager_secret_rotation: Remove unused `tags` attribute ([#27656](https://github.com/hashicorp/terraform-provider-aws/issues/27656)) - -NOTES: - -* provider: Add OpenBSD to list of OSes which the provider is built on ([#27663](https://github.com/hashicorp/terraform-provider-aws/issues/27663)) - -FEATURES: - -* **New Data Source:** `aws_dynamodb_table_item` ([#27504](https://github.com/hashicorp/terraform-provider-aws/issues/27504)) -* **New Data Source:** `aws_route53_resolver_firewall_config` ([#25496](https://github.com/hashicorp/terraform-provider-aws/issues/25496)) -* **New Data Source:** `aws_route53_resolver_firewall_domain_list` ([#25509](https://github.com/hashicorp/terraform-provider-aws/issues/25509)) -* **New Data Source:** `aws_route53_resolver_firewall_rule_group` ([#25511](https://github.com/hashicorp/terraform-provider-aws/issues/25511)) -* **New Data Source:** `aws_route53_resolver_firewall_rule_group_association` ([#25512](https://github.com/hashicorp/terraform-provider-aws/issues/25512)) -* **New Data Source:** `aws_route53_resolver_firewall_rules` ([#25536](https://github.com/hashicorp/terraform-provider-aws/issues/25536)) -* **New Resource:** `aws_ivs_playback_key_pair` ([#27678](https://github.com/hashicorp/terraform-provider-aws/issues/27678)) -* **New Resource:** `aws_ivs_recording_configuration` ([#27718](https://github.com/hashicorp/terraform-provider-aws/issues/27718)) -* **New Resource:** 
`aws_lightsail_lb_https_redirection_policy` ([#27679](https://github.com/hashicorp/terraform-provider-aws/issues/27679)) -* **New Resource:** `aws_medialive_channel` ([#26810](https://github.com/hashicorp/terraform-provider-aws/issues/26810)) -* **New Resource:** `aws_networkmanager_site_to_site_vpn_attachment` ([#27387](https://github.com/hashicorp/terraform-provider-aws/issues/27387)) -* **New Resource:** `aws_redshift_endpoint_authorization` ([#27654](https://github.com/hashicorp/terraform-provider-aws/issues/27654)) -* **New Resource:** `aws_redshift_partner` ([#27665](https://github.com/hashicorp/terraform-provider-aws/issues/27665)) -* **New Resource:** `aws_redshiftserverless_snapshot` ([#27741](https://github.com/hashicorp/terraform-provider-aws/issues/27741)) - -ENHANCEMENTS: - -* data-source/aws_rds_engine_version: Support `default_only`, `include_all`, and `filter` ([#26923](https://github.com/hashicorp/terraform-provider-aws/issues/26923)) -* resource/aws_lightsail_instance: Add `ip_address_type` argument ([#27699](https://github.com/hashicorp/terraform-provider-aws/issues/27699)) -* resource/aws_security_group: Do not pass `from_port` or `to_port` values to the AWS API if a `rule`'s `protocol` value is `-1` or `all` ([#27642](https://github.com/hashicorp/terraform-provider-aws/issues/27642)) -* resource/aws_wafv2_rule_group: Correct maximum nesting level for `and_statement`, `not_statement`, `or_statement` and `rate_based_statement` ([#27682](https://github.com/hashicorp/terraform-provider-aws/issues/27682)) - -BUG FIXES: - -* resource/aws_cognito_identity_pool: Fix deletion of identity pool on tags-only update ([#27669](https://github.com/hashicorp/terraform-provider-aws/issues/27669)) -* resource/aws_dynamodb_table: Correctly set `stream_arn` as Computed when `stream_enabled` changes ([#27664](https://github.com/hashicorp/terraform-provider-aws/issues/27664)) -* resource/aws_lightsail_instance_public_ports: Resource will now be removed from state 
properly when parent instance is removed ([#27699](https://github.com/hashicorp/terraform-provider-aws/issues/27699)) -* resource/aws_s3_bucket: Attributes `arn` and `hosted_zone_id` were incorrectly settable but ignored ([#27597](https://github.com/hashicorp/terraform-provider-aws/issues/27597)) -* resource/aws_security_group: Return an error if a `rule`'s `protocol` value is `all` and `from_port` or `to_port` are not `0` ([#27642](https://github.com/hashicorp/terraform-provider-aws/issues/27642)) -* resource/aws_vpn_connection: Configuring exactly one of `transit_gateway_id` or `vpn_gateway_id` is not required ([#27693](https://github.com/hashicorp/terraform-provider-aws/issues/27693)) - -## 4.38.0 (November 3, 2022) - -FEATURES: - -* **New Data Source:** `aws_connect_instance_storage_config` ([#27308](https://github.com/hashicorp/terraform-provider-aws/issues/27308)) -* **New Resource:** `aws_apprunner_vpc_ingress_connection` ([#27600](https://github.com/hashicorp/terraform-provider-aws/issues/27600)) -* **New Resource:** `aws_connect_phone_number` ([#26364](https://github.com/hashicorp/terraform-provider-aws/issues/26364)) -* **New Resource:** `aws_evidently_segment` ([#27159](https://github.com/hashicorp/terraform-provider-aws/issues/27159)) -* **New Resource:** `aws_fsx_file_cache` ([#27384](https://github.com/hashicorp/terraform-provider-aws/issues/27384)) -* **New Resource:** `aws_lightsail_disk` ([#27537](https://github.com/hashicorp/terraform-provider-aws/issues/27537)) -* **New Resource:** `aws_lightsail_disk_attachment` ([#27537](https://github.com/hashicorp/terraform-provider-aws/issues/27537)) -* **New Resource:** `aws_lightsail_lb_stickiness_policy` ([#27514](https://github.com/hashicorp/terraform-provider-aws/issues/27514)) -* **New Resource:** `aws_sagemaker_servicecatalog_portfolio_status` ([#27548](https://github.com/hashicorp/terraform-provider-aws/issues/27548)) -* **New Resource:** `aws_sesv2_email_identity_feedback_attributes` 
([#27433](https://github.com/hashicorp/terraform-provider-aws/issues/27433)) -* **New Resource:** `aws_ssm_default_patch_baseline` ([#27610](https://github.com/hashicorp/terraform-provider-aws/issues/27610)) - -ENHANCEMENTS: - -* data-source/aws_networkmanager_core_network_policy_document: Add plan-time validation for `core_network_configuration.edge_locations.asn` ([#27305](https://github.com/hashicorp/terraform-provider-aws/issues/27305)) -* resource/aws_ami_copy: Add `imds_support` attribute ([#27561](https://github.com/hashicorp/terraform-provider-aws/issues/27561)) -* resource/aws_ami_from_instance: Add `imds_support` attribute ([#27561](https://github.com/hashicorp/terraform-provider-aws/issues/27561)) -* resource/aws_apprunner_service: Add `ingress_configuration` argument block. ([#27600](https://github.com/hashicorp/terraform-provider-aws/issues/27600)) -* resource/aws_batch_compute_environment: Add `eks_configuration` configuration block ([#27499](https://github.com/hashicorp/terraform-provider-aws/issues/27499)) -* resource/aws_batch_compute_environment: Allow deletion of AWS Batch compute environments in `INVALID` state ([#26931](https://github.com/hashicorp/terraform-provider-aws/issues/26931)) -* resource/aws_budgets_budget: Add `auto_adjust_data` configuration block ([#27474](https://github.com/hashicorp/terraform-provider-aws/issues/27474)) -* resource/aws_budgets_budget: Add `planned_limit` configuration block ([#25766](https://github.com/hashicorp/terraform-provider-aws/issues/25766)) -* resource/aws_cognito_user_pool: Add `deletion_protection` argument ([#27612](https://github.com/hashicorp/terraform-provider-aws/issues/27612)) -* resource/aws_cognito_user_pool_client: Add `auth_session_validity` argument ([#27620](https://github.com/hashicorp/terraform-provider-aws/issues/27620)) -* resource/aws_lb_target_group: Add support for `target_failover` and `stickiness` attributes for GENEVE protocol target groups 
([#27334](https://github.com/hashicorp/terraform-provider-aws/issues/27334)) -* resource/aws_sagemaker_domain: Add `domain_settings`, `app_security_group_management`, `default_user_settings.r_session_app_settings`, and `default_user_settings.canvas_app_settings` arguments. ([#27542](https://github.com/hashicorp/terraform-provider-aws/issues/27542)) -* resource/aws_sagemaker_user_profile: Add `user_settings.r_session_app_settings` and `user_settings.canvas_app_settings` arguments. ([#27542](https://github.com/hashicorp/terraform-provider-aws/issues/27542)) -* resource/aws_sagemaker_workforce: Add `workforce_vpc_config` argument ([#27538](https://github.com/hashicorp/terraform-provider-aws/issues/27538)) -* resource/aws_sfn_state_machine: Add `name_prefix` argument ([#27574](https://github.com/hashicorp/terraform-provider-aws/issues/27574)) - -BUG FIXES: - -* data-source/aws_ip_ranges: Fix regression causing filtering on `regions` and `services` to become case-sensitive ([#27558](https://github.com/hashicorp/terraform-provider-aws/issues/27558)) -* resource/aws_batch_compute_environment: Update `compute_resources.security_group_ids` to be optional ([#26172](https://github.com/hashicorp/terraform-provider-aws/issues/26172)) -* resource/aws_dynamodb_table: Fix bug causing spurious diffs with and preventing proper updating of `stream_enabled` and `stream_view_type` ([#27566](https://github.com/hashicorp/terraform-provider-aws/issues/27566)) -* resource/aws_instance: Use EC2 API idempotency to ensure that only a single Instance is created ([#27561](https://github.com/hashicorp/terraform-provider-aws/issues/27561)) - -## 4.37.0 (October 27, 2022) - -NOTES: - -* resource/aws_medialive_multiplex_program: The `statemux_settings` argument has been deprecated. 
Use the `statmux_settings` argument instead ([#27223](https://github.com/hashicorp/terraform-provider-aws/issues/27223)) - -FEATURES: - -* **New Data Source:** `aws_dx_router_configuration` ([#27341](https://github.com/hashicorp/terraform-provider-aws/issues/27341)) -* **New Resource:** `aws_inspector2_enabler` ([#27505](https://github.com/hashicorp/terraform-provider-aws/issues/27505)) -* **New Resource:** `aws_lightsail_lb_certificate` ([#27462](https://github.com/hashicorp/terraform-provider-aws/issues/27462)) -* **New Resource:** `aws_lightsail_lb_certificate_attachment` ([#27462](https://github.com/hashicorp/terraform-provider-aws/issues/27462)) -* **New Resource:** `aws_route53_resolver_config` ([#27487](https://github.com/hashicorp/terraform-provider-aws/issues/27487)) -* **New Resource:** `aws_sesv2_dedicated_ip_assignment` ([#27361](https://github.com/hashicorp/terraform-provider-aws/issues/27361)) -* **New Resource:** `aws_sesv2_email_identity` ([#27260](https://github.com/hashicorp/terraform-provider-aws/issues/27260)) - -ENHANCEMENTS: - -* data-source/aws_acmpca_certificate_authority: Add `usage_mode` attribute ([#27496](https://github.com/hashicorp/terraform-provider-aws/issues/27496)) -* data-source/aws_outposts_assets: Add `host_id_filter` and `status_id_filter` arguments ([#27303](https://github.com/hashicorp/terraform-provider-aws/issues/27303)) -* resource/aws_acmpca_certificate_authority: Add `usage_mode` argument to support [short-lived certificates](https://docs.aws.amazon.com/privateca/latest/userguide/short-lived-certificates.html) ([#27496](https://github.com/hashicorp/terraform-provider-aws/issues/27496)) -* resource/aws_apprunner_vpc_connector: Add ability to update `tags` ([#27345](https://github.com/hashicorp/terraform-provider-aws/issues/27345)) -* resource/aws_datasync_task: Add `security_descriptor_copy_flags` to `options` configuration block ([#26992](https://github.com/hashicorp/terraform-provider-aws/issues/26992)) -* 
resource/aws_ec2_capacity_reservation: Add `placement_group_arn` argument ([#27458](https://github.com/hashicorp/terraform-provider-aws/issues/27458)) -* resource/aws_ec2_transit_gateway: Add support for modifying the `amazon_side_asn` argument ([#27306](https://github.com/hashicorp/terraform-provider-aws/issues/27306)) -* resource/aws_elasticache_global_replication_group: Add `global_node_groups` and `num_node_groups` arguments ([#27500](https://github.com/hashicorp/terraform-provider-aws/issues/27500)) -* resource/aws_elasticache_global_replication_group: Add timeouts ([#27500](https://github.com/hashicorp/terraform-provider-aws/issues/27500)) -* resource/aws_evidently_project: Support configurable timeouts for create, update, and delete ([#27336](https://github.com/hashicorp/terraform-provider-aws/issues/27336)) -* resource/aws_flow_log: Add support for Kinesis Data Firehose as a flow log destination ([#27340](https://github.com/hashicorp/terraform-provider-aws/issues/27340)) -* resource/aws_medialive_multiplex_program: Add ability to update `multiplex_program_settings` in place ([#27223](https://github.com/hashicorp/terraform-provider-aws/issues/27223)) -* resource/aws_network_interface_attachment: Add import support ([#27364](https://github.com/hashicorp/terraform-provider-aws/issues/27364)) -* resource/aws_sesv2_dedicated_ip_pool: Add `scaling_mode` attribute ([#27388](https://github.com/hashicorp/terraform-provider-aws/issues/27388)) -* resource/aws_ssm_parameter: Support `aws:ssm:integration` as a valid value for `data_type` ([#27329](https://github.com/hashicorp/terraform-provider-aws/issues/27329)) - -BUG FIXES: - -* data-source/aws_route53_traffic_policy_document: Fix incorrect capitalization of `GeoproximityLocations` ([#27473](https://github.com/hashicorp/terraform-provider-aws/issues/27473)) -* resource/aws_connect_contact_flow: Change `type` to ForceNew
([#27347](https://github.com/hashicorp/terraform-provider-aws/issues/27347)) -* resource/aws_ecs_service: Correctly handle unconfigured `task_definition`, making `EXTERNAL` deployments possible ([#27390](https://github.com/hashicorp/terraform-provider-aws/issues/27390)) -* resource/aws_lb_target_group: Fix import issues on `aws_lb_target_group` when specifying `ip_address_type` of `ipv4` ([#27464](https://github.com/hashicorp/terraform-provider-aws/issues/27464)) -* resource/aws_rds_proxy_endpoint: Respect configured provider `default_tags` value on resource Update ([#27367](https://github.com/hashicorp/terraform-provider-aws/issues/27367)) -* resource/aws_vpc_ipam_pool_cidr: Fix crash when IPAM Pool CIDR not found ([#27512](https://github.com/hashicorp/terraform-provider-aws/issues/27512)) - -## 4.36.1 (October 21, 2022) - -BUG FIXES: - -* data-source/aws_default_tags: Fix regression setting `tags` to `null` instead of an empty map (`{}`) when no `default_tags` are defined ([#27377](https://github.com/hashicorp/terraform-provider-aws/issues/27377)) - -## 4.36.0 (October 20, 2022) - -FEATURES: - -* **New Data Source:** `aws_elasticache_subnet_group` ([#27233](https://github.com/hashicorp/terraform-provider-aws/issues/27233)) -* **New Data Source:** `aws_sesv2_dedicated_ip_pool` ([#27278](https://github.com/hashicorp/terraform-provider-aws/issues/27278)) -* **New Resource:** `aws_lightsail_certificate` ([#25283](https://github.com/hashicorp/terraform-provider-aws/issues/25283)) -* **New Resource:** `aws_lightsail_domain_entry` ([#27309](https://github.com/hashicorp/terraform-provider-aws/issues/27309)) -* **New Resource:** `aws_lightsail_lb` ([#27339](https://github.com/hashicorp/terraform-provider-aws/issues/27339)) -* **New Resource:** `aws_lightsail_lb_attachment` ([#27339](https://github.com/hashicorp/terraform-provider-aws/issues/27339)) -* **New Resource:** `aws_sesv2_dedicated_ip_pool` 
([#27278](https://github.com/hashicorp/terraform-provider-aws/issues/27278)) - -ENHANCEMENTS: - -* data-source/aws_route53_zone: Add `primary_name_server` attribute ([#27293](https://github.com/hashicorp/terraform-provider-aws/issues/27293)) -* resource/aws_appstream_stack: Add validation for `application_settings`. ([#27257](https://github.com/hashicorp/terraform-provider-aws/issues/27257)) -* resource/aws_lightsail_container_service: Add `private_registry_access` argument ([#27236](https://github.com/hashicorp/terraform-provider-aws/issues/27236)) -* resource/aws_mq_broker: Add configurable timeouts ([#27035](https://github.com/hashicorp/terraform-provider-aws/issues/27035)) -* resource/aws_resourcegroups_group: Add `configuration` argument ([#26934](https://github.com/hashicorp/terraform-provider-aws/issues/26934)) -* resource/aws_route53_zone: Add `primary_name_server` attribute ([#27293](https://github.com/hashicorp/terraform-provider-aws/issues/27293)) -* resource/aws_rum_app_monitor: Add `app_monitor_id` attribute ([#26994](https://github.com/hashicorp/terraform-provider-aws/issues/26994)) -* resource/aws_sns_platform_application: Add `apple_platform_bundle_id` and `apple_platform_team_id` arguments. NOTE: Because we cannot easily test this functionality, it is best effort and we ask for community help in testing ([#23147](https://github.com/hashicorp/terraform-provider-aws/issues/23147)) - -BUG FIXES: - -* resource/aws_appstream_stack: Fix panic with `application_settings`. ([#27257](https://github.com/hashicorp/terraform-provider-aws/issues/27257)) -* resource/aws_sqs_queue: Change `sqs_managed_sse_enabled` to `Computed` as newly created SQS queues use [SSE-SQS encryption by default](https://aws.amazon.com/about-aws/whats-new/2022/10/amazon-sqs-announces-server-side-encryption-ssq-managed-sse-sqs-default/). 
This means that Terraform will only perform drift detection of the attribute's value when present in a configuration ([#26843](https://github.com/hashicorp/terraform-provider-aws/issues/26843)) -* resource/aws_sqs_queue: Respect configured `sqs_managed_sse_enabled` value on resource Create. In particular a configured `false` value is sent to the AWS API, which overrides the [new service default value of `true`](https://aws.amazon.com/about-aws/whats-new/2022/10/amazon-sqs-announces-server-side-encryption-ssq-managed-sse-sqs-default/) ([#27335](https://github.com/hashicorp/terraform-provider-aws/issues/27335)) - -## 4.35.0 (October 17, 2022) - -FEATURES: - -* **New Data Source:** `aws_rds_reserved_instance_offering` ([#26025](https://github.com/hashicorp/terraform-provider-aws/issues/26025)) -* **New Data Source:** `aws_vpc_ipam_pools` ([#27101](https://github.com/hashicorp/terraform-provider-aws/issues/27101)) -* **New Resource:** `aws_codepipeline_custom_action_type` ([#8123](https://github.com/hashicorp/terraform-provider-aws/issues/8123)) -* **New Resource:** `aws_comprehend_document_classifier` ([#26951](https://github.com/hashicorp/terraform-provider-aws/issues/26951)) -* **New Resource:** `aws_inspector2_delegated_admin_account` ([#27229](https://github.com/hashicorp/terraform-provider-aws/issues/27229)) -* **New Resource:** `aws_rds_reserved_instance` ([#26025](https://github.com/hashicorp/terraform-provider-aws/issues/26025)) -* **New Resource:** `aws_s3control_storage_lens_configuration` ([#27097](https://github.com/hashicorp/terraform-provider-aws/issues/27097)) -* **New Resource:** `aws_sesv2_configuration_set` ([#27056](https://github.com/hashicorp/terraform-provider-aws/issues/27056)) -* **New Resource:** `aws_transfer_tag` ([#27131](https://github.com/hashicorp/terraform-provider-aws/issues/27131)) - -ENHANCEMENTS: - -* data-source/aws_dx_connection: Add `vlan_id` attribute ([#27148](https://github.com/hashicorp/terraform-provider-aws/issues/27148)) 
-* data-source/aws_vpc: Add `enable_network_address_usage_metrics` attribute ([#27165](https://github.com/hashicorp/terraform-provider-aws/issues/27165)) -* resource/aws_cognito_user_pool: Add `user_attribute_update_settings` attribute ([#27129](https://github.com/hashicorp/terraform-provider-aws/issues/27129)) -* resource/aws_default_vpc: Add `enable_network_address_usage_metrics` argument ([#27165](https://github.com/hashicorp/terraform-provider-aws/issues/27165)) -* resource/aws_dx_connection: Add `vlan_id` attribute ([#27148](https://github.com/hashicorp/terraform-provider-aws/issues/27148)) -* resource/aws_elasticache_global_replication_group: Add support for updating `cache_node_type` and `automatic_failover_enabled`. ([#27134](https://github.com/hashicorp/terraform-provider-aws/issues/27134)) -* resource/aws_globalaccelerator_accelerator: Add `ip_addresses` argument in support of [BYOIP addresses](https://docs.aws.amazon.com/global-accelerator/latest/dg/using-byoip.html) ([#27181](https://github.com/hashicorp/terraform-provider-aws/issues/27181)) -* resource/aws_opsworks_custom_layer: Add `load_based_auto_scaling` argument ([#10962](https://github.com/hashicorp/terraform-provider-aws/issues/10962)) -* resource/aws_prometheus_workspace: Add `logging_configuration` argument ([#27213](https://github.com/hashicorp/terraform-provider-aws/issues/27213)) -* resource/aws_vpc: Add `enable_network_address_usage_metrics` argument ([#27165](https://github.com/hashicorp/terraform-provider-aws/issues/27165)) - -BUG FIXES: - -* data-source/aws_identitystore_user: Change the type of `external_ids` to a string instead of a bool. 
([#27184](https://github.com/hashicorp/terraform-provider-aws/issues/27184)) -* resource/aws_ecs_task_definition: Prevent panic when supplying a `null` value in `container_definitions` ([#27263](https://github.com/hashicorp/terraform-provider-aws/issues/27263)) -* resource/aws_identitystore_user: Change the type of `external_ids` to a string instead of a bool. ([#27184](https://github.com/hashicorp/terraform-provider-aws/issues/27184)) -* resource/aws_organizations_policy_attachment: Handle missing policy when reading policy attachment ([#27238](https://github.com/hashicorp/terraform-provider-aws/issues/27238)) -* resource/aws_ssm_service_setting: Prevent panic during status read ([#27232](https://github.com/hashicorp/terraform-provider-aws/issues/27232)) - -## 4.34.0 (October 6, 2022) - -NOTES: - -* data-source/aws_identitystore_user: The `filter` argument has been deprecated. Use the `alternate_identifier` argument instead ([#27053](https://github.com/hashicorp/terraform-provider-aws/issues/27053)) - -FEATURES: - -* **New Data Source:** `aws_appconfig_configuration_profile` ([#27054](https://github.com/hashicorp/terraform-provider-aws/issues/27054)) -* **New Data Source:** `aws_appconfig_configuration_profiles` ([#27054](https://github.com/hashicorp/terraform-provider-aws/issues/27054)) -* **New Data Source:** `aws_appconfig_environment` ([#27054](https://github.com/hashicorp/terraform-provider-aws/issues/27054)) -* **New Data Source:** `aws_appconfig_environments` ([#27054](https://github.com/hashicorp/terraform-provider-aws/issues/27054)) -* **New Data Source:** `aws_vpc_ipam_pool_cidrs` ([#27051](https://github.com/hashicorp/terraform-provider-aws/issues/27051)) -* **New Resource:** `aws_evidently_project` ([#24263](https://github.com/hashicorp/terraform-provider-aws/issues/24263)) - -ENHANCEMENTS: - -* data-source/aws_ami: Add `imds_support` attribute ([#27084](https://github.com/hashicorp/terraform-provider-aws/issues/27084)) -* 
data-source/aws_identitystore_user: Add `alternate_identifier` argument and `addresses`, `display_name`, `emails`, `external_ids`, `locale`, `name`, `nickname`, `phone_numbers`, `preferred_language`, `profile_url`, `timezone`, `title` and `user_type` attributes ([#27053](https://github.com/hashicorp/terraform-provider-aws/issues/27053)) -* data-source/aws_eks_cluster: Add `service_ipv6_cidr` attribute to `kubernetes_network_config` block ([#26980](https://github.com/hashicorp/terraform-provider-aws/issues/26980)) -* resource/aws_ami: Add `imds_support` argument ([#27084](https://github.com/hashicorp/terraform-provider-aws/issues/27084)) -* resource/aws_ami_copy: Add `imds_support` argument ([#27084](https://github.com/hashicorp/terraform-provider-aws/issues/27084)) -* resource/aws_ami_from_instance: Add `imds_support` argument ([#27084](https://github.com/hashicorp/terraform-provider-aws/issues/27084)) -* resource/aws_cloudwatch_event_target: Add `capacity_provider_strategy` configuration block to the `ecs_target` configuration block ([#27068](https://github.com/hashicorp/terraform-provider-aws/issues/27068)) -* resource/aws_eks_addon: Add `PRESERVE` option to `resolve_conflicts` argument
([#27038](https://github.com/hashicorp/terraform-provider-aws/issues/27038)) -* resource/aws_eks_cluster: Add `service_ipv6_cidr` attribute to `kubernetes_network_config` block ([#26980](https://github.com/hashicorp/terraform-provider-aws/issues/26980)) -* resource/aws_mwaa_environment: Add custom timeouts ([#27031](https://github.com/hashicorp/terraform-provider-aws/issues/27031)) -* resource/aws_networkfirewall_firewall_policy: Add `firewall_policy.stateful_rule_group_reference.override` argument ([#25135](https://github.com/hashicorp/terraform-provider-aws/issues/25135)) -* resource/aws_wafv2_rule_group: Add `headers` attribute to the `field_to_match` block ([#26506](https://github.com/hashicorp/terraform-provider-aws/issues/26506)) -* resource/aws_wafv2_rule_group: Add support for `rate_based_statement` ([#27113](https://github.com/hashicorp/terraform-provider-aws/issues/27113)) -* resource/aws_wafv2_rule_group: Add support for `regex_match_statement` ([#22452](https://github.com/hashicorp/terraform-provider-aws/issues/22452)) -* resource/aws_wafv2_web_acl: Add `headers` attribute to the `field_to_match` block ([#26506](https://github.com/hashicorp/terraform-provider-aws/issues/26506)) -* resource/aws_wafv2_web_acl: Add support for `regex_match_statement` ([#22452](https://github.com/hashicorp/terraform-provider-aws/issues/22452)) - -BUG FIXES: - -* data-source/aws_iam_policy_document: Better handling when invalid JSON is passed to `override_policy_documents` ([#27055](https://github.com/hashicorp/terraform-provider-aws/issues/27055)) -* data-source/aws_ses_active_receipt_rule_set: Prevent crash when no receipt rule set is active ([#27073](https://github.com/hashicorp/terraform-provider-aws/issues/27073)) -* resource/aws_keyspaces_table: Change `schema_definition.clustering_key` and `schema_definition.partition_key` to lists in order to respect configured orderings ([#26812](https://github.com/hashicorp/terraform-provider-aws/issues/26812)) -*
resource/aws_rolesanywhere_profile: Correctly handle updates to `enabled` and `session_policy` ([#26858](https://github.com/hashicorp/terraform-provider-aws/issues/26858)) -* resource/aws_rolesanywhere_trust_anchor: Correctly handle updates to `enabled` ([#26858](https://github.com/hashicorp/terraform-provider-aws/issues/26858)) - -## 4.33.0 (September 29, 2022) - -FEATURES: - -* **New Data Source:** `aws_kms_custom_key_store` ([#24787](https://github.com/hashicorp/terraform-provider-aws/issues/24787)) -* **New Resource:** `aws_identitystore_group` ([#26674](https://github.com/hashicorp/terraform-provider-aws/issues/26674)) -* **New Resource:** `aws_identitystore_group_membership` ([#26944](https://github.com/hashicorp/terraform-provider-aws/issues/26944)) -* **New Resource:** `aws_identitystore_user` ([#26948](https://github.com/hashicorp/terraform-provider-aws/issues/26948)) -* **New Resource:** `aws_inspector2_organization_configuration` ([#27000](https://github.com/hashicorp/terraform-provider-aws/issues/27000)) -* **New Resource:** `aws_kms_custom_key_store` ([#26997](https://github.com/hashicorp/terraform-provider-aws/issues/26997)) - -ENHANCEMENTS: - -* resource/aws_acm_certificate: Add `early_renewal_duration`, `pending_renewal`, `renewal_eligibility`, `renewal_summary` and `type` attributes ([#26784](https://github.com/hashicorp/terraform-provider-aws/issues/26784)) -* resource/aws_appautoscaling_policy: Add `alarm_arns` attribute ([#27011](https://github.com/hashicorp/terraform-provider-aws/issues/27011)) -* resource/aws_dms_endpoint: Add `s3_settings.use_task_start_time_for_full_load_timestamp` argument ([#27004](https://github.com/hashicorp/terraform-provider-aws/issues/27004)) -* resource/aws_ec2_traffic_mirror_target: Add `gateway_load_balancer_endpoint_id` argument ([#26767](https://github.com/hashicorp/terraform-provider-aws/issues/26767)) -* resource/aws_kms_key: Add `custom_key_store_id` attribute 
([#24787](https://github.com/hashicorp/terraform-provider-aws/issues/24787)) - -BUG FIXES: - -* resource/aws_rds_cluster: Support `upgrade` as a valid value in `enabled_cloudwatch_logs_exports` ([#26792](https://github.com/hashicorp/terraform-provider-aws/issues/26792)) -* resource/aws_ssm_parameter: Allow parameter overwrite on create ([#26785](https://github.com/hashicorp/terraform-provider-aws/issues/26785)) - -## 4.32.0 (September 23, 2022) - -ENHANCEMENTS: - -* resource/aws_eks_cluster: Add `outpost_config` argument to support EKS local clusters on Outposts ([#26866](https://github.com/hashicorp/terraform-provider-aws/issues/26866)) - -BUG FIXES: - -* resource/aws_ec2_managed_prefix_list: `max_entries` and `entry` can now be changed in the same apply ([#26845](https://github.com/hashicorp/terraform-provider-aws/issues/26845)) - -## 4.31.0 (September 15, 2022) - -FEATURES: - -* **New Data Source:** `aws_ec2_managed_prefix_lists` ([#26727](https://github.com/hashicorp/terraform-provider-aws/issues/26727)) -* **New Resource:** `aws_sqs_queue_redrive_allow_policy` ([#26733](https://github.com/hashicorp/terraform-provider-aws/issues/26733)) -* **New Resource:** `aws_sqs_queue_redrive_policy` ([#26733](https://github.com/hashicorp/terraform-provider-aws/issues/26733)) - -ENHANCEMENTS: - -* data-source/aws_lambda_function: Add `qualified_invoke_arn` attribute ([#26439](https://github.com/hashicorp/terraform-provider-aws/issues/26439)) -* resource/aws_db_instance: Add `custom_iam_instance_profile` attribute ([#26765](https://github.com/hashicorp/terraform-provider-aws/issues/26765)) -* resource/aws_lambda_function: Add `qualified_invoke_arn` attribute ([#26439](https://github.com/hashicorp/terraform-provider-aws/issues/26439)) - -BUG FIXES: - -* resource/aws_autoscaling_attachment: Retry errors like `ValidationError: Trying to update too many Load Balancers/Target Groups at once.
The limit is 10` when creating or deleting the resource ([#26654](https://github.com/hashicorp/terraform-provider-aws/issues/26654)) -* resource/aws_dynamodb_table: No longer returns an error for an ARCHIVED table ([#26744](https://github.com/hashicorp/terraform-provider-aws/issues/26744)) -* resource/aws_instance: Prevent errors in ISO regions when the `DisableApiStop` attribute is not used ([#26745](https://github.com/hashicorp/terraform-provider-aws/issues/26745)) -* resource/aws_dms_replication_subnet_group: Add retry to create step, resolving `AccessDeniedFault` error ([#26768](https://github.com/hashicorp/terraform-provider-aws/issues/26768)) - -## 4.30.0 (September 9, 2022) - -FEATURES: - -* **New Resource:** `aws_medialive_multiplex` ([#26608](https://github.com/hashicorp/terraform-provider-aws/issues/26608)) -* **New Resource:** `aws_medialive_multiplex_program` ([#26694](https://github.com/hashicorp/terraform-provider-aws/issues/26694)) -* **New Resource:** `aws_redshiftserverless_usage_limit` ([#26636](https://github.com/hashicorp/terraform-provider-aws/issues/26636)) -* **New Resource:** `aws_ssoadmin_customer_managed_policy_attachment` ([#25915](https://github.com/hashicorp/terraform-provider-aws/issues/25915)) - -ENHANCEMENTS: - -* data-source/aws_rds_cluster: Add `network_type` attribute ([#26489](https://github.com/hashicorp/terraform-provider-aws/issues/26489)) -* resource/aws_eks_addon: Support configurable timeouts for addon create, update, and delete ([#26629](https://github.com/hashicorp/terraform-provider-aws/issues/26629)) -* resource/aws_rds_cluster: Add `network_type` argument ([#26489](https://github.com/hashicorp/terraform-provider-aws/issues/26489)) -* resource/aws_rds_cluster_instance: Add `network_type` attribute ([#26489](https://github.com/hashicorp/terraform-provider-aws/issues/26489)) -* resource/aws_s3_bucket_object_lock_configuration: Update `rule` argument to be Optional ([#26520](https://github.com/hashicorp/terraform-provider-aws/issues/26520))
-* resource/aws_vpn_connection: Add `tunnel1_log_options` and `tunnel2_log_options` arguments ([#26637](https://github.com/hashicorp/terraform-provider-aws/issues/26637)) - -BUG FIXES: - -* data-source/aws_ec2_managed_prefix_list: Fix bug where an error is returned for regions with more than 100 managed prefix lists ([#26683](https://github.com/hashicorp/terraform-provider-aws/issues/26683)) -* data-source/aws_iam_policy_document: Correctly handle unquoted Boolean values in `Condition` ([#26657](https://github.com/hashicorp/terraform-provider-aws/issues/26657)) -* data-source/aws_iam_policy_document: Prevent crash when `source_policy_documents` contains empty or invalid JSON documents ([#26640](https://github.com/hashicorp/terraform-provider-aws/issues/26640)) -* resource/aws_eip: Default to the regional default `domain` when `vpc` is not set ([#26716](https://github.com/hashicorp/terraform-provider-aws/issues/26716)) -* resource/aws_instance: No longer fails when setting `metadata_options.instance_metadata_tags` ([#26631](https://github.com/hashicorp/terraform-provider-aws/issues/26631)) -* resource/aws_lambda_function: Update the environment variables if the `kms_key_arn` has changed ([#26696](https://github.com/hashicorp/terraform-provider-aws/issues/26696)) -* resource/aws_opsworks_stack: Default to the default VPC when one is not supplied ([#26711](https://github.com/hashicorp/terraform-provider-aws/issues/26711)) -* resource/aws_security_group: Default to the default VPC when one is not supplied ([#26697](https://github.com/hashicorp/terraform-provider-aws/issues/26697)) - -## 4.29.0 (September 1, 2022) - -NOTES: - -* resource/aws_db_instance: With AWS's retirement of EC2-Classic no new RDS DB Instances can be created referencing RDS DB Security Groups ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_db_security_group: With AWS's retirement of EC2-Classic no new RDS DB Security Groups can be created
([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_default_vpc: With AWS's retirement of EC2-Classic the `enable_classiclink` and `enable_classiclink_dns_support` attributes have been deprecated and will be removed in a future version ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_eip: With AWS's retirement of EC2-Classic no new non-VPC EC2 EIPs can be created ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_elasticache_cluster: With AWS's retirement of EC2-Classic no new ElastiCache Clusters can be created referencing ElastiCache Security Groups ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_elasticache_security_group: With AWS's retirement of EC2-Classic no new ElastiCache Security Groups can be created ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_instance: With the retirement of EC2-Classic, `aws_instance` has been updated to remove support for EC2-Classic ([#26532](https://github.com/hashicorp/terraform-provider-aws/issues/26532)) -* resource/aws_launch_configuration: With AWS's retirement of EC2-Classic no new Auto Scaling Launch Configurations can be created referencing ClassicLink ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_opsworks_stack: With AWS's retirement of EC2-Classic no new OpsWorks Stacks can be created without referencing a VPC ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_redshift_cluster: With AWS's retirement of EC2-Classic no new Redshift Clusters can be created referencing Redshift Security Groups ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_redshift_security_group: With AWS's retirement of EC2-Classic no new Redshift Security Groups can be created
([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_security_group: With AWS's retirement of EC2-Classic no new Security Groups can be created without referencing a VPC ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_vpc: With AWS's retirement of EC2-Classic no new VPCs can be created with ClassicLink enabled ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_vpc_peering_connection: With AWS's retirement of EC2-Classic no new VPC Peering Connections can be created with ClassicLink options enabled ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_vpc_peering_connection_accepter: With AWS's retirement of EC2-Classic no VPC Peering Connections can be accepted with ClassicLink options enabled ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) -* resource/aws_vpc_peering_connection_options: With AWS's retirement of EC2-Classic no new VPC Peering Connection Options can be created with ClassicLink options enabled ([#26525](https://github.com/hashicorp/terraform-provider-aws/issues/26525)) - -FEATURES: - -* **New Data Source:** `aws_location_tracker_associations` ([#26472](https://github.com/hashicorp/terraform-provider-aws/issues/26472)) -* **New Resource:** `aws_cloudfront_origin_access_control` ([#26508](https://github.com/hashicorp/terraform-provider-aws/issues/26508)) -* **New Resource:** `aws_medialive_input` ([#26550](https://github.com/hashicorp/terraform-provider-aws/issues/26550)) -* **New Resource:** `aws_medialive_input_security_group` ([#26550](https://github.com/hashicorp/terraform-provider-aws/issues/26550)) -* **New Resource:** `aws_redshiftserverless_endpoint_access` ([#26555](https://github.com/hashicorp/terraform-provider-aws/issues/26555)) - -ENHANCEMENTS: - -* data-source/aws_cloudtrail_service_account: Add service account ID for `me-central-1` AWS Region 
([#26572](https://github.com/hashicorp/terraform-provider-aws/issues/26572)) -* data-source/aws_eks_node_group: Add `capacity_type` attribute ([#26521](https://github.com/hashicorp/terraform-provider-aws/issues/26521)) -* data-source/aws_elb_hosted_zone_id: Add hosted zone ID for `me-central-1` AWS Region ([#26572](https://github.com/hashicorp/terraform-provider-aws/issues/26572)) -* data-source/aws_instance: Add `host_resource_group_arn` attribute ([#26532](https://github.com/hashicorp/terraform-provider-aws/issues/26532)) -* data-source/aws_lambda_function: Return most recent published version when `qualifier` is not set ([#11195](https://github.com/hashicorp/terraform-provider-aws/issues/11195)) -* data-source/aws_lb_hosted_zone_id: Add hosted zone IDs for `me-central-1` AWS Region ([#26572](https://github.com/hashicorp/terraform-provider-aws/issues/26572)) -* data-source/aws_s3_bucket: Add hosted zone ID for `me-central-1` AWS Region ([#26572](https://github.com/hashicorp/terraform-provider-aws/issues/26572)) -* provider: Support `me-central-1` as a valid AWS Region ([#26590](https://github.com/hashicorp/terraform-provider-aws/issues/26590)) -* provider: Add `source_identity` argument to `assume_role` block ([#25368](https://github.com/hashicorp/terraform-provider-aws/issues/25368)) -* resource/aws_cloudfront_distribution: Add `origin_access_control_id` to the `origin` configuration block ([#26510](https://github.com/hashicorp/terraform-provider-aws/issues/26510)) -* resource/aws_dms_endpoint: Add `redis_settings` configuration block ([#26411](https://github.com/hashicorp/terraform-provider-aws/issues/26411)) -* resource/aws_ec2_fleet: Add `target_capacity_unit_type` attribute to the `target_capacity_specification` configuration block ([#26493](https://github.com/hashicorp/terraform-provider-aws/issues/26493)) -* resource/aws_instance: Add `host_resource_group_arn` attribute; improve compatibility with launching instances in a host resource group using an AMI 
registered with License Manager. NOTE: Because we cannot easily test this functionality, it is best effort and we ask for community help in testing. ([#26532](https://github.com/hashicorp/terraform-provider-aws/issues/26532)) -* resource/aws_lambda_event_source_mapping: Add `amazon_managed_kafka_event_source_config` and `self_managed_kafka_event_source_config` configuration blocks ([#26560](https://github.com/hashicorp/terraform-provider-aws/issues/26560)) -* resource/aws_lambda_function: Add validation for `function_name` attribute ([#25259](https://github.com/hashicorp/terraform-provider-aws/issues/25259)) -* resource/aws_opensearch_domain: Add support for enabling fine-grained access control on existing domains with `advanced_security_options` `anonymous_auth_enabled` ([#26503](https://github.com/hashicorp/terraform-provider-aws/issues/26503)) -* resource/aws_redshiftserverless_endpoint_workgroup: Add `endpoint` attribute ([#26555](https://github.com/hashicorp/terraform-provider-aws/issues/26555)) -* resource/aws_spot_fleet_request: Add `target_capacity_unit_type` argument ([#26493](https://github.com/hashicorp/terraform-provider-aws/issues/26493)) -* resource/aws_wafv2_rule_group: Add `cookies` attribute to the `field_to_match` block ([#25845](https://github.com/hashicorp/terraform-provider-aws/issues/25845)) -* resource/aws_wafv2_rule_group: Add `json_body` attribute to the `field_to_match` block ([#24772](https://github.com/hashicorp/terraform-provider-aws/issues/24772)) -* resource/aws_wafv2_web_acl: Add `cookies` attribute to the `field_to_match` block ([#25845](https://github.com/hashicorp/terraform-provider-aws/issues/25845)) -* resource/aws_wafv2_web_acl: Add `json_body` attribute to the `field_to_match` block ([#24772](https://github.com/hashicorp/terraform-provider-aws/issues/24772)) - -BUG FIXES: - -* provider: No longer silently ignores `assume_role` block when `role_arn` has unknown value. 
([#26590](https://github.com/hashicorp/terraform-provider-aws/issues/26590)) -* resource/aws_security_group: Fix complex dependency violations such as using a security group with an EMR cluster ([#26553](https://github.com/hashicorp/terraform-provider-aws/issues/26553)) - -## 4.28.0 (August 26, 2022) - -NOTES: - -* resource/aws_db_instance: With the retirement of EC2-Classic, the `security_group_names` attribute has been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_db_security_group: With the retirement of EC2-Classic, the `aws_db_security_group` resource has been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_elasticache_cluster: With the retirement of EC2-Classic, the `security_group_names` attribute has been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_elasticache_security_group: With the retirement of EC2-Classic, the `aws_elasticache_security_group` resource has been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_launch_configuration: With the retirement of EC2-Classic, the `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes have been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_redshift_cluster: With the retirement of EC2-Classic, the `cluster_security_groups` attribute has been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_redshift_security_group: With the retirement of EC2-Classic, the `aws_redshift_security_group` resource has been deprecated and will be removed in a
future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_vpc: With the retirement of EC2-Classic, the `enable_classiclink` and `enable_classiclink_dns_support` attributes have been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_vpc_peering_connection: With the retirement of EC2-Classic, the `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` attributes have been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_vpc_peering_connection_accepter: With the retirement of EC2-Classic, the `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` attributes have been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) -* resource/aws_vpc_peering_connection_options: With the retirement of EC2-Classic, the `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` attributes have been deprecated and will be removed in a future version ([#26427](https://github.com/hashicorp/terraform-provider-aws/issues/26427)) - -FEATURES: - -* **New Data Source:** `aws_ec2_network_insights_analysis` ([#23532](https://github.com/hashicorp/terraform-provider-aws/issues/23532)) -* **New Data Source:** `aws_ec2_network_insights_path` ([#23532](https://github.com/hashicorp/terraform-provider-aws/issues/23532)) -* **New Data Source:** `aws_ec2_transit_gateway_attachment` ([#26264](https://github.com/hashicorp/terraform-provider-aws/issues/26264)) -* **New Data Source:** `aws_location_tracker_association` ([#26404](https://github.com/hashicorp/terraform-provider-aws/issues/26404)) -* **New Resource:** `aws_ec2_network_insights_analysis` ([#23532](https://github.com/hashicorp/terraform-provider-aws/issues/23532)) -* **New Resource:**
`aws_ec2_transit_gateway_policy_table` ([#26264](https://github.com/hashicorp/terraform-provider-aws/issues/26264)) -* **New Resource:** `aws_ec2_transit_gateway_policy_table_association` ([#26264](https://github.com/hashicorp/terraform-provider-aws/issues/26264)) -* **New Resource:** `aws_grafana_workspace_api_key` ([#25286](https://github.com/hashicorp/terraform-provider-aws/issues/25286)) -* **New Resource:** `aws_networkmanager_transit_gateway_peering` ([#26264](https://github.com/hashicorp/terraform-provider-aws/issues/26264)) -* **New Resource:** `aws_networkmanager_transit_gateway_route_table_attachment` ([#26264](https://github.com/hashicorp/terraform-provider-aws/issues/26264)) -* **New Resource:** `aws_redshiftserverless_workgroup` ([#26467](https://github.com/hashicorp/terraform-provider-aws/issues/26467)) - -ENHANCEMENTS: - -* data-source/aws_db_instance: Add `network_type` attribute ([#26185](https://github.com/hashicorp/terraform-provider-aws/issues/26185)) -* data-source/aws_db_subnet_group: Add `supported_network_types` attribute ([#26185](https://github.com/hashicorp/terraform-provider-aws/issues/26185)) -* data-source/aws_rds_orderable_db_instance: Add `supported_network_types` attribute ([#26185](https://github.com/hashicorp/terraform-provider-aws/issues/26185)) -* resource/aws_db_instance: Add `network_type` argument ([#26185](https://github.com/hashicorp/terraform-provider-aws/issues/26185)) -* resource/aws_db_subnet_group: Add `supported_network_types` argument ([#26185](https://github.com/hashicorp/terraform-provider-aws/issues/26185)) -* resource/aws_glue_job: Add support for `3.9` as valid `python_version` value ([#26407](https://github.com/hashicorp/terraform-provider-aws/issues/26407)) -* resource/aws_kendra_index: The `document_metadata_configuration_updates` argument can now be updated. Refer to the documentation for more details. 
([#20294](https://github.com/hashicorp/terraform-provider-aws/issues/20294)) - -BUG FIXES: - -* resource/aws_appstream_fleet: Fix crash when providing empty `domain_join_info` (_e.g._, `directory_name = ""`) ([#26454](https://github.com/hashicorp/terraform-provider-aws/issues/26454)) -* resource/aws_eip: Include any provider-level configured `default_tags` on resource Create ([#26308](https://github.com/hashicorp/terraform-provider-aws/issues/26308)) -* resource/aws_kinesis_firehose_delivery_stream: Updating `tags` no longer causes an unnecessary update ([#26451](https://github.com/hashicorp/terraform-provider-aws/issues/26451)) -* resource/aws_organizations_policy: Prevent `InvalidParameter` errors by handling `content` as generic JSON, not an IAM policy ([#26279](https://github.com/hashicorp/terraform-provider-aws/issues/26279)) - -## 4.27.0 (August 19, 2022) - -FEATURES: - -* **New Resource:** `aws_msk_serverless_cluster` ([#25684](https://github.com/hashicorp/terraform-provider-aws/issues/25684)) -* **New Resource:** `aws_networkmanager_attachment_accepter` ([#26227](https://github.com/hashicorp/terraform-provider-aws/issues/26227)) -* **New Resource:** `aws_networkmanager_vpc_attachment` ([#26227](https://github.com/hashicorp/terraform-provider-aws/issues/26227)) - -ENHANCEMENTS: - -* data-source/aws_networkfirewall_firewall: Add `capacity_usage_summary`, `configuration_sync_state_summary`, and `status` attributes to the `firewall_status` block ([#26284](https://github.com/hashicorp/terraform-provider-aws/issues/26284)) -* resource/aws_acm_certificate: Add `not_after` argument ([#26281](https://github.com/hashicorp/terraform-provider-aws/issues/26281)) -* resource/aws_acm_certificate: Add `not_before` argument ([#26281](https://github.com/hashicorp/terraform-provider-aws/issues/26281)) -* resource/aws_chime_voice_connector_logging: Add `enable_media_metric_logs` argument ([#26283](https://github.com/hashicorp/terraform-provider-aws/issues/26283)) -* 
resource/aws_cloudfront_distribution: Support `http3` and `http2and3` as valid values for the `http_version` argument ([#26313](https://github.com/hashicorp/terraform-provider-aws/issues/26313)) -* resource/aws_inspector_assessment_template: Add `event_subscription` configuration block ([#26334](https://github.com/hashicorp/terraform-provider-aws/issues/26334)) -* resource/aws_lb_target_group: Add `ip_address_type` argument ([#26320](https://github.com/hashicorp/terraform-provider-aws/issues/26320)) -* resource/aws_opsworks_stack: Add plan-time validation for `custom_cookbooks_source.type` ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) - -BUG FIXES: - -* resource/aws_appflow_flow: Correctly specify `trigger_config.trigger_properties.scheduled.schedule_start_time` during create and update ([#26289](https://github.com/hashicorp/terraform-provider-aws/issues/26289)) -* resource/aws_db_instance: Prevent `InvalidParameterCombination: No modifications were requested` errors when only `delete_automated_backups`, `final_snapshot_identifier` and/or `skip_final_snapshot` change ([#26286](https://github.com/hashicorp/terraform-provider-aws/issues/26286)) -* resource/aws_opsworks_custom_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_ecs_cluster_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_ganglia_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_haproxy_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region 
([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_java_app_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_memcached_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_mysql_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_nodejs_app_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_php_app_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_rails_app_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_stack: Correctly apply `tags` during create if `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) -* resource/aws_opsworks_static_web_layer: Correctly apply `tags` during create if the stack's `region` is not equal to the configured AWS Region ([#26278](https://github.com/hashicorp/terraform-provider-aws/issues/26278)) - -## 4.26.0 (August 12, 2022) - -FEATURES: - -* **New Data Source:** `aws_fsx_openzfs_snapshot` ([#26184](https://github.com/hashicorp/terraform-provider-aws/issues/26184)) 
-* **New Data Source:** `aws_networkfirewall_firewall` ([#25495](https://github.com/hashicorp/terraform-provider-aws/issues/25495)) -* **New Data Source:** `aws_prometheus_workspace` ([#26120](https://github.com/hashicorp/terraform-provider-aws/issues/26120)) -* **New Resource:** `aws_comprehend_entity_recognizer` ([#26244](https://github.com/hashicorp/terraform-provider-aws/issues/26244)) -* **New Resource:** `aws_connect_instance_storage_config` ([#26152](https://github.com/hashicorp/terraform-provider-aws/issues/26152)) -* **New Resource:** `aws_directory_service_radius_settings` ([#14045](https://github.com/hashicorp/terraform-provider-aws/issues/14045)) -* **New Resource:** `aws_directory_service_region` ([#25755](https://github.com/hashicorp/terraform-provider-aws/issues/25755)) -* **New Resource:** `aws_dynamodb_table_replica` ([#26250](https://github.com/hashicorp/terraform-provider-aws/issues/26250)) -* **New Resource:** `aws_location_tracker_association` ([#26061](https://github.com/hashicorp/terraform-provider-aws/issues/26061)) - -ENHANCEMENTS: - -* data-source/aws_directory_service_directory: Add `radius_settings` attribute ([#14045](https://github.com/hashicorp/terraform-provider-aws/issues/14045)) -* data-source/aws_directory_service_directory: Set `dns_ip_addresses` to the owner directory's DNS IP addresses for SharedMicrosoftAD directories ([#20819](https://github.com/hashicorp/terraform-provider-aws/issues/20819)) -* data-source/aws_elasticsearch_domain: Add `throughput` attribute to the `ebs_options` configuration block ([#26045](https://github.com/hashicorp/terraform-provider-aws/issues/26045)) -* data-source/aws_opensearch_domain: Add `throughput` attribute to the `ebs_options` configuration block ([#26045](https://github.com/hashicorp/terraform-provider-aws/issues/26045)) -* resource/aws_autoscaling_group: Better error handling when attempting to create Auto Scaling groups with incompatible options 
([#25987](https://github.com/hashicorp/terraform-provider-aws/issues/25987)) -* resource/aws_backup_vault: Add `force_destroy` argument ([#26199](https://github.com/hashicorp/terraform-provider-aws/issues/26199)) -* resource/aws_directory_service_directory: Add `desired_number_of_domain_controllers` argument ([#25755](https://github.com/hashicorp/terraform-provider-aws/issues/25755)) -* resource/aws_directory_service_directory: Add configurable timeouts for Create, Update and Delete ([#25755](https://github.com/hashicorp/terraform-provider-aws/issues/25755)) -* resource/aws_directory_service_shared_directory: Add configurable timeouts for Delete ([#25755](https://github.com/hashicorp/terraform-provider-aws/issues/25755)) -* resource/aws_directory_service_shared_directory_accepter: Add configurable timeouts for Create and Delete ([#25755](https://github.com/hashicorp/terraform-provider-aws/issues/25755)) -* resource/aws_elasticsearch_domain: Add `throughput` attribute to the `ebs_options` configuration block ([#26045](https://github.com/hashicorp/terraform-provider-aws/issues/26045)) -* resource/aws_glue_job: Add `execution_class` argument ([#26188](https://github.com/hashicorp/terraform-provider-aws/issues/26188)) -* resource/aws_macie2_classification_job: Add `bucket_criteria` attribute to the `s3_job_definition` configuration block ([#19837](https://github.com/hashicorp/terraform-provider-aws/issues/19837)) -* resource/aws_opensearch_domain: Add `throughput` attribute to the `ebs_options` configuration block ([#26045](https://github.com/hashicorp/terraform-provider-aws/issues/26045)) - -BUG FIXES: - -* resource/aws_appflow_flow: Fix `trigger_properties.scheduled` being set during resource read ([#26240](https://github.com/hashicorp/terraform-provider-aws/issues/26240)) -* resource/aws_db_instance: Add retries (for handling IAM eventual consistency) when creating database replicas that use enhanced monitoring 
([#20926](https://github.com/hashicorp/terraform-provider-aws/issues/20926)) -* resource/aws_db_instance: Apply `monitoring_interval` and `monitoring_role_arn` when creating via `restore_to_point_in_time` ([#20926](https://github.com/hashicorp/terraform-provider-aws/issues/20926)) -* resource/aws_dynamodb_table: Fix `replica.*.propagate_tags` not propagating tags to newly added replicas ([#26257](https://github.com/hashicorp/terraform-provider-aws/issues/26257)) -* resource/aws_emr_instance_group: Handle deleted instance groups during resource read ([#26154](https://github.com/hashicorp/terraform-provider-aws/issues/26154)) -* resource/aws_emr_instance_group: Mark `instance_count` as Computed to prevent diff when autoscaling is active ([#26154](https://github.com/hashicorp/terraform-provider-aws/issues/26154)) -* resource/aws_lb_listener: Fix `ValidationError` when tags are added on `create` ([#26194](https://github.com/hashicorp/terraform-provider-aws/issues/26194)) -* resource/aws_lb_target_group: Fix `ValidationError` when tags are added on `create` ([#26194](https://github.com/hashicorp/terraform-provider-aws/issues/26194)) -* resource/aws_macie2_classification_job: Fix incorrect plan diff for `TagScopeTerm()` when updating resources ([#19837](https://github.com/hashicorp/terraform-provider-aws/issues/19837)) -* resource/aws_security_group_rule: Disallow empty strings in `prefix_list_ids` ([#26220](https://github.com/hashicorp/terraform-provider-aws/issues/26220)) - -## 4.25.0 (August 4, 2022) - -FEATURES: - -* **New Data Source:** `aws_waf_subscribed_rule_group` ([#10563](https://github.com/hashicorp/terraform-provider-aws/issues/10563)) -* **New Data Source:** `aws_wafregional_subscribed_rule_group` ([#10563](https://github.com/hashicorp/terraform-provider-aws/issues/10563)) -* **New Resource:** `aws_kendra_data_source` ([#25686](https://github.com/hashicorp/terraform-provider-aws/issues/25686)) -* **New Resource:** 
`aws_macie2_classification_export_configuration` ([#19856](https://github.com/hashicorp/terraform-provider-aws/issues/19856)) -* **New Resource:** `aws_transcribe_language_model` ([#25698](https://github.com/hashicorp/terraform-provider-aws/issues/25698)) - -ENHANCEMENTS: - -* data-source/aws_alb: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ami: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ami_ids: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_availability_zone: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_availability_zones: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_customer_gateway: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_dx_location: Add `available_macsec_port_speeds` attribute ([#26110](https://github.com/hashicorp/terraform-provider-aws/issues/26110)) -* data-source/aws_ebs_default_kms_key: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ebs_encryption_by_default: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ebs_snapshot: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ebs_snapshot_ids: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ebs_volume: Allow customizable read timeout 
([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ebs_volumes: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_client_vpn_endpoint: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_coip_pool: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_coip_pools: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_host: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_instance_type: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_instance_type_offering: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_instance_type_offerings: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_instance_types: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateway: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateway_route_table: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateway_route_tables: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateway_virtual_interface: Allow customizable read timeout 
([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateway_virtual_interface_group: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateway_virtual_interface_groups: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_local_gateways: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_managed_prefix_list: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_serial_console_access: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_spot_price: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_connect: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_connect_peer: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_dx_gateway_attachment: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_multicast_domain: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_peering_attachment: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* 
data-source/aws_ec2_transit_gateway_route_table: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_route_tables: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_vpc_attachment: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_vpc_attachments: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_ec2_transit_gateway_vpn_attachment: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_eip: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_eips: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_instance: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_instances: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_internet_gateway: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_key_pair: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_launch_template: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_lb: Add `preserve_host_header` attribute ([#26056](https://github.com/hashicorp/terraform-provider-aws/issues/26056)) -* data-source/aws_lb: Allow customizable read timeout 
([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_lb_listener: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_lb_target_group: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_nat_gateway: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_nat_gateways: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_network_acls: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_network_interface: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_network_interfaces: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_prefix_list: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_route: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_route_table: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_route_tables: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_security_group: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_security_groups: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_subnet: Allow customizable read timeout 
([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_subnet_ids: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_subnets: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_dhcp_options: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_endpoint: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_endpoint_service: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_ipam_pool: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_ipam_preview_next_cidr: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_peering_connection: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpc_peering_connections: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpcs: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* data-source/aws_vpn_gateway: Allow customizable read timeout ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* resource/aws_ecrpublic_repository: Add `tags` argument and `tags_all` attribute to support resource tagging ([#26057](https://github.com/hashicorp/terraform-provider-aws/issues/26057)) -* 
resource/aws_fsx_openzfs_file_system: Add `root_volume_configuration.record_size_kib` argument ([#26049](https://github.com/hashicorp/terraform-provider-aws/issues/26049)) -* resource/aws_fsx_openzfs_volume: Add `record_size_kib` argument ([#26049](https://github.com/hashicorp/terraform-provider-aws/issues/26049)) -* resource/aws_globalaccelerator_accelerator: Support `DUAL_STACK` value for `ip_address_type` ([#26055](https://github.com/hashicorp/terraform-provider-aws/issues/26055)) -* resource/aws_iam_role_policy: Add plan time validation to `role` argument ([#26082](https://github.com/hashicorp/terraform-provider-aws/issues/26082)) -* resource/aws_internet_gateway: Allow customizable timeouts ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* resource/aws_internet_gateway_attachment: Allow customizable timeouts ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) -* resource/aws_lb: Add `preserve_host_header` argument ([#26056](https://github.com/hashicorp/terraform-provider-aws/issues/26056)) -* resource/aws_s3_bucket: Allow customizable timeouts ([#26121](https://github.com/hashicorp/terraform-provider-aws/issues/26121)) - -BUG FIXES: - -* resource/aws_api_gateway_rest_api: Add `put_rest_api_mode` argument to address race conditions when importing OpenAPI Specifications ([#26051](https://github.com/hashicorp/terraform-provider-aws/issues/26051)) -* resource/aws_appstream_fleet: Fix IAM `InvalidRoleException` error on creation ([#26060](https://github.com/hashicorp/terraform-provider-aws/issues/26060)) - -## 4.24.0 (July 29, 2022) - -FEATURES: - -* **New Resource:** `aws_acmpca_permission` ([#12485](https://github.com/hashicorp/terraform-provider-aws/issues/12485)) -* **New Resource:** `aws_ssm_service_setting` ([#13018](https://github.com/hashicorp/terraform-provider-aws/issues/13018)) - -ENHANCEMENTS: - -* data-source/aws_ecs_service: Add `tags` attribute 
([#25961](https://github.com/hashicorp/terraform-provider-aws/issues/25961)) -* resource/aws_datasync_task: Add `includes` argument ([#25929](https://github.com/hashicorp/terraform-provider-aws/issues/25929)) -* resource/aws_guardduty_detector: Add `malware_protection` attribute to the `datasources` configuration block ([#25994](https://github.com/hashicorp/terraform-provider-aws/issues/25994)) -* resource/aws_guardduty_organization_configuration: Add `malware_protection` attribute to the `datasources` configuration block ([#25992](https://github.com/hashicorp/terraform-provider-aws/issues/25992)) -* resource/aws_security_group: Additional plan-time validation for `name` and `name_prefix` ([#15011](https://github.com/hashicorp/terraform-provider-aws/issues/15011)) -* resource/aws_security_group_rule: Add configurable Create timeout ([#24340](https://github.com/hashicorp/terraform-provider-aws/issues/24340)) -* resource/aws_ses_configuration_set: Add `tracking_options.0.custom_redirect_domain` argument (NOTE: This enhancement is provided as best effort due to testing limitations, i.e., the requirement of a verified domain) ([#26032](https://github.com/hashicorp/terraform-provider-aws/issues/26032)) - -BUG FIXES: - -* data-source/aws_networkmanager_core_network_policy_document: Fix bug where bool values for `attachment-policy.action.require-acceptance` can only be `true` or omitted ([#26010](https://github.com/hashicorp/terraform-provider-aws/issues/26010)) -* resource/aws_appmesh_gateway_route: Fix crash when only one of hostname rewrite or path rewrite is configured ([#26012](https://github.com/hashicorp/terraform-provider-aws/issues/26012)) -* resource/aws_ce_anomaly_subscription:Fix crash upon adding or removing monitor ARNs to `monitor_arn_list`. 
([#25941](https://github.com/hashicorp/terraform-provider-aws/issues/25941)) -* resource/aws_cognito_identity_pool_provider_principal_tag: Fix read operation when using an OIDC provider ([#25964](https://github.com/hashicorp/terraform-provider-aws/issues/25964)) -* resource/aws_route53_record: Don't ignore `dualstack` prefix in Route 53 Record alias names ([#10672](https://github.com/hashicorp/terraform-provider-aws/issues/10672)) -* resource/aws_s3_bucket: Prevents unexpected import of existing bucket in `us-east-1`. ([#26011](https://github.com/hashicorp/terraform-provider-aws/issues/26011)) -* resource/aws_s3_bucket: Refactored `object_lock_enabled` parameter's default assignment behavior to protect partitions without Object Lock available. ([#25098](https://github.com/hashicorp/terraform-provider-aws/issues/25098)) - -## 4.23.0 (July 22, 2022) - -FEATURES: - -* **New Data Source:** `aws_connect_user_hierarchy_group` ([#24777](https://github.com/hashicorp/terraform-provider-aws/issues/24777)) -* **New Data Source:** `aws_location_geofence_collection` ([#25844](https://github.com/hashicorp/terraform-provider-aws/issues/25844)) -* **New Data Source:** `aws_networkfirewall_firewall_policy` ([#24748](https://github.com/hashicorp/terraform-provider-aws/issues/24748)) -* **New Data Source:** `aws_s3_account_public_access_block` ([#25781](https://github.com/hashicorp/terraform-provider-aws/issues/25781)) -* **New Resource:** `aws_connect_user` ([#24832](https://github.com/hashicorp/terraform-provider-aws/issues/24832)) -* **New Resource:** `aws_connect_vocabulary` ([#24849](https://github.com/hashicorp/terraform-provider-aws/issues/24849)) -* **New Resource:** `aws_location_geofence_collection` ([#25762](https://github.com/hashicorp/terraform-provider-aws/issues/25762)) -* **New Resource:** `aws_redshiftserverless_namespace` ([#25889](https://github.com/hashicorp/terraform-provider-aws/issues/25889)) -* **New Resource:** `aws_rolesanywhere_profile` 
([#25850](https://github.com/hashicorp/terraform-provider-aws/issues/25850)) -* **New Resource:** `aws_rolesanywhere_trust_anchor` ([#25779](https://github.com/hashicorp/terraform-provider-aws/issues/25779)) -* **New Resource:** `aws_transcribe_vocabulary` ([#25863](https://github.com/hashicorp/terraform-provider-aws/issues/25863)) -* **New Resource:** `aws_transcribe_vocabulary_filter` ([#25918](https://github.com/hashicorp/terraform-provider-aws/issues/25918)) - -ENHANCEMENTS: - -* data-source/aws_imagebuilder_container_recipe: Add `throughput` attribute to the `block_device_mapping` configuration block ([#25790](https://github.com/hashicorp/terraform-provider-aws/issues/25790)) -* data-source/aws_imagebuilder_image_recipe: Add `throughput` attribute to the `block_device_mapping` configuration block ([#25790](https://github.com/hashicorp/terraform-provider-aws/issues/25790)) -* data/aws_outposts_asset: Add `rack_elevation` attribute ([#25822](https://github.com/hashicorp/terraform-provider-aws/issues/25822)) -* resource/aws_appmesh_gateway_route: Add `http2_route.action.rewrite`, `http2_route.match.hostname`, `http_route.action.rewrite` and `http_route.match.hostname` arguments ([#25819](https://github.com/hashicorp/terraform-provider-aws/issues/25819)) -* resource/aws_ce_cost_category: Add `tags` argument and `tags_all` attribute to support resource tagging ([#25432](https://github.com/hashicorp/terraform-provider-aws/issues/25432)) -* resource/aws_db_instance_automated_backups_replication: Add support for custom timeouts (create and delete) ([#25796](https://github.com/hashicorp/terraform-provider-aws/issues/25796)) -* resource/aws_dynamodb_table: Add `replica.*.propagate_tags` argument to allow propagating tags to replicas ([#25866](https://github.com/hashicorp/terraform-provider-aws/issues/25866)) -* resource/aws_flow_log: Add `transit_gateway_id` and `transit_gateway_attachment_id` arguments 
([#25913](https://github.com/hashicorp/terraform-provider-aws/issues/25913)) -* resource/aws_fsx_openzfs_file_system: Allow in-place update of `storage_capacity`, `throughput_capacity`, and `disk_iops_configuration`. ([#25841](https://github.com/hashicorp/terraform-provider-aws/issues/25841)) -* resource/aws_guardduty_organization_configuration: Add `kubernetes` attribute to the `datasources` configuration block ([#25131](https://github.com/hashicorp/terraform-provider-aws/issues/25131)) -* resource/aws_imagebuilder_container_recipe: Add `throughput` argument to the `block_device_mapping` configuration block ([#25790](https://github.com/hashicorp/terraform-provider-aws/issues/25790)) -* resource/aws_imagebuilder_image_recipe: Add `throughput` argument to the `block_device_mapping` configuration block ([#25790](https://github.com/hashicorp/terraform-provider-aws/issues/25790)) -* resource/aws_rds_cluster_instance: Allow `performance_insights_retention_period` values that are multiples of `31` ([#25729](https://github.com/hashicorp/terraform-provider-aws/issues/25729)) - -BUG FIXES: - -* data-source/aws_networkmanager_core_network_policy_document: Fix bug where bool values in `segments` blocks weren't being included in json payloads ([#25789](https://github.com/hashicorp/terraform-provider-aws/issues/25789)) -* resource/aws_connect_hours_of_operation: Fix tags not being updated ([#24864](https://github.com/hashicorp/terraform-provider-aws/issues/24864)) -* resource/aws_connect_queue: Fix tags not being updated ([#24864](https://github.com/hashicorp/terraform-provider-aws/issues/24864)) -* resource/aws_connect_quick_connect: Fix tags not being updated ([#24864](https://github.com/hashicorp/terraform-provider-aws/issues/24864)) -* resource/aws_connect_routing_profile: Fix tags not being updated ([#24864](https://github.com/hashicorp/terraform-provider-aws/issues/24864)) -* resource/aws_connect_security_profile: Fix tags not being updated 
([#24864](https://github.com/hashicorp/terraform-provider-aws/issues/24864)) -* resource/aws_connect_user_hierarchy_group: Fix tags not being updated ([#24864](https://github.com/hashicorp/terraform-provider-aws/issues/24864)) -* resource/aws_iam_role: Fix diffs in `assume_role_policy` when there are no semantic changes ([#23060](https://github.com/hashicorp/terraform-provider-aws/issues/23060)) -* resource/aws_iam_role: Fix problem with exclusive management of inline and managed policies when empty (i.e., remove out-of-band policies) ([#23060](https://github.com/hashicorp/terraform-provider-aws/issues/23060)) -* resource/aws_rds_cluster: Prevent failure of AWS RDS Cluster creation when it is in `rebooting` state. ([#25718](https://github.com/hashicorp/terraform-provider-aws/issues/25718)) -* resource/aws_route_table: Retry resource Create for EC2 eventual consistency ([#25793](https://github.com/hashicorp/terraform-provider-aws/issues/25793)) -* resource/aws_storagegateway_gateway: Only manage `average_download_rate_limit_in_bits_per_sec` and `average_upload_rate_limit_in_bits_per_sec` when gateway type supports rate limits ([#25922](https://github.com/hashicorp/terraform-provider-aws/issues/25922)) - -## 4.22.0 (July 8, 2022) - -FEATURES: - -* **New Data Source:** `aws_location_route_calculator` ([#25689](https://github.com/hashicorp/terraform-provider-aws/issues/25689)) -* **New Data Source:** `aws_location_tracker` ([#25639](https://github.com/hashicorp/terraform-provider-aws/issues/25639)) -* **New Data Source:** `aws_secretsmanager_random_password` ([#25704](https://github.com/hashicorp/terraform-provider-aws/issues/25704)) -* **New Resource:** `aws_directory_service_shared_directory` ([#24766](https://github.com/hashicorp/terraform-provider-aws/issues/24766)) -* **New Resource:** `aws_directory_service_shared_directory_accepter` ([#24766](https://github.com/hashicorp/terraform-provider-aws/issues/24766)) -* **New Resource:** `aws_lightsail_database` 
([#18663](https://github.com/hashicorp/terraform-provider-aws/issues/18663)) -* **New Resource:** `aws_location_route_calculator` ([#25656](https://github.com/hashicorp/terraform-provider-aws/issues/25656)) -* **New Resource:** `aws_transcribe_medical_vocabulary` ([#25723](https://github.com/hashicorp/terraform-provider-aws/issues/25723)) - -ENHANCEMENTS: - -* data-source/aws_imagebuilder_distribution_configuration: Add `fast_launch_configuration` attribute to the `distribution` configuration block ([#25671](https://github.com/hashicorp/terraform-provider-aws/issues/25671)) -* resource/aws_acmpca_certificate_authority: Add `revocation_configuration.ocsp_configuration` argument ([#25720](https://github.com/hashicorp/terraform-provider-aws/issues/25720)) -* resource/aws_apprunner_service: Add `observability_configuration` argument configuration block ([#25697](https://github.com/hashicorp/terraform-provider-aws/issues/25697)) -* resource/aws_autoscaling_group: Add `default_instance_warmup` attribute ([#25722](https://github.com/hashicorp/terraform-provider-aws/issues/25722)) -* resource/aws_config_remediation_configuration: Add `parameter.*.static_values` attribute for a list of values ([#25738](https://github.com/hashicorp/terraform-provider-aws/issues/25738)) -* resource/aws_dynamodb_table: Add `replica.*.point_in_time_recovery` argument ([#25659](https://github.com/hashicorp/terraform-provider-aws/issues/25659)) -* resource/aws_ecr_repository: Add `force_delete` parameter. ([#9913](https://github.com/hashicorp/terraform-provider-aws/issues/9913)) -* resource/aws_ecs_service: Add configurable timeouts for Create and Delete. 
([#25641](https://github.com/hashicorp/terraform-provider-aws/issues/25641)) -* resource/aws_emr_cluster: Add `core_instance_group.ebs_config.throughput` and `master_instance_group.ebs_config.throughput` arguments ([#25668](https://github.com/hashicorp/terraform-provider-aws/issues/25668)) -* resource/aws_emr_cluster: Add `gp3` EBS volume support ([#25668](https://github.com/hashicorp/terraform-provider-aws/issues/25668)) -* resource/aws_emr_cluster: Add `sc1` EBS volume support ([#25255](https://github.com/hashicorp/terraform-provider-aws/issues/25255)) -* resource/aws_gamelift_game_session_queue: Add `notification_target` argument ([#25544](https://github.com/hashicorp/terraform-provider-aws/issues/25544)) -* resource/aws_imagebuilder_distribution_configuration: Add `fast_launch_configuration` argument to the `distribution` configuration block ([#25671](https://github.com/hashicorp/terraform-provider-aws/issues/25671)) -* resource/aws_placement_group: Add `spread_level` argument ([#25615](https://github.com/hashicorp/terraform-provider-aws/issues/25615)) -* resource/aws_sagemaker_notebook_instance: Add `accelerator_types` argument ([#10210](https://github.com/hashicorp/terraform-provider-aws/issues/10210)) -* resource/aws_sagemaker_project: Increase SageMaker Project create and delete timeout to 15 minutes ([#25638](https://github.com/hashicorp/terraform-provider-aws/issues/25638)) -* resource/aws_ssm_parameter: Add `insecure_value` argument to enable dynamic use of SSM parameter values ([#25721](https://github.com/hashicorp/terraform-provider-aws/issues/25721)) -* resource/aws_vpc_ipam_pool_cidr: Better error reporting ([#25287](https://github.com/hashicorp/terraform-provider-aws/issues/25287)) - -BUG FIXES: - -* provider: Ensure that the configured `assume_role_with_web_identity` value is used ([#25681](https://github.com/hashicorp/terraform-provider-aws/issues/25681)) -* resource/aws_acmpca_certificate_authority: Fix crash when `revocation_configuration` block 
is empty ([#25695](https://github.com/hashicorp/terraform-provider-aws/issues/25695)) -* resource/aws_cognito_risk_configuration: Increase maximum allowed length of `account_takeover_risk_configuration.notify_configuration.block_email.html_body`, `account_takeover_risk_configuration.notify_configuration.block_email.text_body`, `account_takeover_risk_configuration.notify_configuration.mfa_email.html_body`, `account_takeover_risk_configuration.notify_configuration.mfa_email.text_body`, `account_takeover_risk_configuration.notify_configuration.no_action_email.html_body` and `account_takeover_risk_configuration.notify_configuration.no_action_email.text_body` arguments from `2000` to `20000` ([#25645](https://github.com/hashicorp/terraform-provider-aws/issues/25645)) -* resource/aws_dynamodb_table: Prevent `restore_source_name` from forcing replacement when removed to enable restoring from a PITR backup ([#25659](https://github.com/hashicorp/terraform-provider-aws/issues/25659)) -* resource/aws_dynamodb_table: Respect custom timeouts including when working with replicas ([#25659](https://github.com/hashicorp/terraform-provider-aws/issues/25659)) -* resource/aws_ec2_transit_gateway: Fix MaxItems and subnet size validation in `transit_gateway_cidr_blocks` ([#25673](https://github.com/hashicorp/terraform-provider-aws/issues/25673)) -* resource/aws_ecs_service: Fix "unexpected new value" errors on creation. ([#25641](https://github.com/hashicorp/terraform-provider-aws/issues/25641)) -* resource/aws_ecs_service: Fix error where tags are sometimes not retrieved. 
([#25641](https://github.com/hashicorp/terraform-provider-aws/issues/25641)) -* resource/aws_emr_managed_scaling_policy: Support `maximum_ondemand_capacity_units` value of `0` ([#17134](https://github.com/hashicorp/terraform-provider-aws/issues/17134)) - -## 4.21.0 (June 30, 2022) - -FEATURES: - -* **New Data Source:** `aws_kendra_experience` ([#25601](https://github.com/hashicorp/terraform-provider-aws/issues/25601)) -* **New Data Source:** `aws_kendra_query_suggestions_block_list` ([#25592](https://github.com/hashicorp/terraform-provider-aws/issues/25592)) -* **New Data Source:** `aws_kendra_thesaurus` ([#25555](https://github.com/hashicorp/terraform-provider-aws/issues/25555)) -* **New Data Source:** `aws_service_discovery_http_namespace` ([#25162](https://github.com/hashicorp/terraform-provider-aws/issues/25162)) -* **New Data Source:** `aws_service_discovery_service` ([#25162](https://github.com/hashicorp/terraform-provider-aws/issues/25162)) -* **New Resource:** `aws_accessanalyzer_archive_rule` ([#25514](https://github.com/hashicorp/terraform-provider-aws/issues/25514)) -* **New Resource:** `aws_apprunner_observability_configuration` ([#25591](https://github.com/hashicorp/terraform-provider-aws/issues/25591)) -* **New Resource:** `aws_lakeformation_resource_lf_tags` ([#25565](https://github.com/hashicorp/terraform-provider-aws/issues/25565)) - -ENHANCEMENTS: - -* data-source/aws_ami: Add `include_deprecated` argument ([#25566](https://github.com/hashicorp/terraform-provider-aws/issues/25566)) -* data-source/aws_ami: Make `owners` optional ([#25566](https://github.com/hashicorp/terraform-provider-aws/issues/25566)) -* data-source/aws_service_discovery_dns_namespace: Add `tags` attribute ([#25162](https://github.com/hashicorp/terraform-provider-aws/issues/25162)) -* data/aws_key_pair: New attribute `public_key` populated by setting the new `include_public_key` argument ([#25371](https://github.com/hashicorp/terraform-provider-aws/issues/25371)) -* 
resource/aws_connect_instance: Configurable Create and Delete timeouts ([#24861](https://github.com/hashicorp/terraform-provider-aws/issues/24861)) -* resource/aws_key_pair: Add `key_type` and `create_time` attributes ([#25371](https://github.com/hashicorp/terraform-provider-aws/issues/25371)) -* resource/aws_sagemaker_model: Add `repository_auth_config` arguments in support of [Private Docker Registry](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-containers-inference-private.html) ([#25557](https://github.com/hashicorp/terraform-provider-aws/issues/25557)) -* resource/aws_service_discovery_http_namespace: Add `http_name` attribute ([#25162](https://github.com/hashicorp/terraform-provider-aws/issues/25162)) -* resource/aws_wafv2_web_acl: Add `rule.action.captcha` argument ([#21766](https://github.com/hashicorp/terraform-provider-aws/issues/21766)) - -BUG FIXES: - -* resource/aws_api_gateway_model: Remove length validation from `schema` argument ([#25623](https://github.com/hashicorp/terraform-provider-aws/issues/25623)) -* resource/aws_appstream_fleet_stack_association: Fix association not being found after creation ([#25370](https://github.com/hashicorp/terraform-provider-aws/issues/25370)) -* resource/aws_appstream_stack: Fix crash when setting `embed_host_domains` ([#25372](https://github.com/hashicorp/terraform-provider-aws/issues/25372)) -* resource/aws_route53_record: Successfully allow renaming of `set_identifier` (specified with multiple routing policies) ([#25620](https://github.com/hashicorp/terraform-provider-aws/issues/25620)) - -## 4.20.1 (June 24, 2022) - -BUG FIXES: - -* resource/aws_default_vpc_dhcp_options: Fix `missing expected [` error introduced in [v4.20.0](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#4200-june-23-2022) ([#25562](https://github.com/hashicorp/terraform-provider-aws/issues/25562)) - -## 4.20.0 (June 23, 2022) - -FEATURES: - -* **New Data Source:** `aws_kendra_faq`
([#25523](https://github.com/hashicorp/terraform-provider-aws/issues/25523)) -* **New Data Source:** `aws_kendra_index` ([#25473](https://github.com/hashicorp/terraform-provider-aws/issues/25473)) -* **New Data Source:** `aws_outposts_asset` ([#25476](https://github.com/hashicorp/terraform-provider-aws/issues/25476)) -* **New Data Source:** `aws_outposts_assets` ([#25476](https://github.com/hashicorp/terraform-provider-aws/issues/25476)) -* **New Resource:** `aws_applicationinsights_application` ([#25195](https://github.com/hashicorp/terraform-provider-aws/issues/25195)) -* **New Resource:** `aws_ce_anomaly_monitor` ([#25177](https://github.com/hashicorp/terraform-provider-aws/issues/25177)) -* **New Resource:** `aws_ce_anomaly_subscription` ([#25224](https://github.com/hashicorp/terraform-provider-aws/issues/25224)) -* **New Resource:** `aws_ce_cost_allocation_tag` ([#25272](https://github.com/hashicorp/terraform-provider-aws/issues/25272)) -* **New Resource:** `aws_cloudwatchrum_app_monitor` ([#25180](https://github.com/hashicorp/terraform-provider-aws/issues/25180)) -* **New Resource:** `aws_cognito_risk_configuration` ([#25282](https://github.com/hashicorp/terraform-provider-aws/issues/25282)) -* **New Resource:** `aws_kendra_experience` ([#25315](https://github.com/hashicorp/terraform-provider-aws/issues/25315)) -* **New Resource:** `aws_kendra_faq` ([#25515](https://github.com/hashicorp/terraform-provider-aws/issues/25515)) -* **New Resource:** `aws_kendra_query_suggestions_block_list` ([#25198](https://github.com/hashicorp/terraform-provider-aws/issues/25198)) -* **New Resource:** `aws_kendra_thesaurus` ([#25199](https://github.com/hashicorp/terraform-provider-aws/issues/25199)) -* **New Resource:** `aws_lakeformation_lf_tag` ([#19523](https://github.com/hashicorp/terraform-provider-aws/issues/19523)) -* **New Resource:** `aws_location_tracker` ([#25466](https://github.com/hashicorp/terraform-provider-aws/issues/25466)) - -ENHANCEMENTS: - -* 
data-source/aws_instance: Add `disable_api_stop` attribute ([#25185](https://github.com/hashicorp/terraform-provider-aws/issues/25185)) -* data-source/aws_instance: Add `private_dns_name_options` attribute ([#25161](https://github.com/hashicorp/terraform-provider-aws/issues/25161)) -* data-source/aws_instance: Correctly set `credit_specification` for T4g instances ([#25161](https://github.com/hashicorp/terraform-provider-aws/issues/25161)) -* data-source/aws_launch_template: Add `disable_api_stop` attribute ([#25185](https://github.com/hashicorp/terraform-provider-aws/issues/25185)) -* data-source/aws_launch_template: Correctly set `credit_specification` for T4g instances ([#25161](https://github.com/hashicorp/terraform-provider-aws/issues/25161)) -* data-source/aws_vpc_endpoint: Add `dns_options` and `ip_address_type` attributes ([#25190](https://github.com/hashicorp/terraform-provider-aws/issues/25190)) -* data-source/aws_vpc_endpoint_service: Add `supported_ip_address_types` attribute ([#25189](https://github.com/hashicorp/terraform-provider-aws/issues/25189)) -* resource/aws_cloudwatch_event_api_destination: Remove validation of a maximum value for the `invocation_rate_limit_per_second` argument ([#25277](https://github.com/hashicorp/terraform-provider-aws/issues/25277)) -* resource/aws_datasync_location_efs: Add `access_point_arn`, `file_system_access_role_arn`, and `in_transit_encryption` arguments ([#25182](https://github.com/hashicorp/terraform-provider-aws/issues/25182)) -* resource/aws_datasync_location_efs: Add plan time validations for `ec2_config.security_group_arns` ([#25182](https://github.com/hashicorp/terraform-provider-aws/issues/25182)) -* resource/aws_ec2_host: Add `outpost_arn` argument ([#25464](https://github.com/hashicorp/terraform-provider-aws/issues/25464)) -* resource/aws_instance: Add `disable_api_stop` argument ([#25185](https://github.com/hashicorp/terraform-provider-aws/issues/25185)) -* resource/aws_instance: Add 
`private_dns_name_options` argument ([#25161](https://github.com/hashicorp/terraform-provider-aws/issues/25161)) -* resource/aws_instance: Correctly handle `credit_specification` for T4g instances ([#25161](https://github.com/hashicorp/terraform-provider-aws/issues/25161)) -* resource/aws_launch_template: Add `disable_api_stop` argument ([#25185](https://github.com/hashicorp/terraform-provider-aws/issues/25185)) -* resource/aws_launch_template: Correctly handle `credit_specification` for T4g instances ([#25161](https://github.com/hashicorp/terraform-provider-aws/issues/25161)) -* resource/aws_s3_bucket_metric: Add validation to ensure name is <= 64 characters. ([#25260](https://github.com/hashicorp/terraform-provider-aws/issues/25260)) -* resource/aws_sagemaker_endpoint_configuration: Add `serverless_config` argument ([#25218](https://github.com/hashicorp/terraform-provider-aws/issues/25218)) -* resource/aws_sagemaker_endpoint_configuration: Make `production_variants.initial_instance_count` and `production_variants.instance_type` arguments optional ([#25218](https://github.com/hashicorp/terraform-provider-aws/issues/25218)) -* resource/aws_sagemaker_notebook_instance: Add `instance_metadata_service_configuration` argument ([#25236](https://github.com/hashicorp/terraform-provider-aws/issues/25236)) -* resource/aws_sagemaker_notebook_instance: Support `notebook-al2-v2` value for `platform_identifier` ([#25236](https://github.com/hashicorp/terraform-provider-aws/issues/25236)) -* resource/aws_synthetics_canary: Add `delete_lambda` argument ([#25284](https://github.com/hashicorp/terraform-provider-aws/issues/25284)) -* resource/aws_vpc_endpoint: Add `dns_options` and `ip_address_type` arguments ([#25190](https://github.com/hashicorp/terraform-provider-aws/issues/25190)) -* resource/aws_vpc_endpoint_service: Add `supported_ip_address_types` argument ([#25189](https://github.com/hashicorp/terraform-provider-aws/issues/25189)) -* resource/aws_vpn_connection: Add 
`outside_ip_address_type` and `transport_transit_gateway_attachment_id` arguments in support of [Private IP VPNs](https://docs.aws.amazon.com/vpn/latest/s2svpn/private-ip-dx.html) ([#25529](https://github.com/hashicorp/terraform-provider-aws/issues/25529)) - -BUG FIXES: - -* data-source/aws_ecr_repository: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* data-source/aws_elasticache_cluster: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* data-source/aws_iam_policy: Add validation to prevent setting incompatible parameters. ([#25538](https://github.com/hashicorp/terraform-provider-aws/issues/25538)) -* data-source/aws_iam_policy: Now loads tags. ([#25538](https://github.com/hashicorp/terraform-provider-aws/issues/25538)) -* data-source/aws_lb: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* data-source/aws_lb_listener: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* data-source/aws_lb_target_group: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* data-source/aws_sqs_queue: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_api_gateway_model: Suppress whitespace differences between model schemas ([#25245](https://github.com/hashicorp/terraform-provider-aws/issues/25245)) -* resource/aws_ce_cost_category: Allow duplicate values in `split_charge_rule.parameter.values` argument ([#25488](https://github.com/hashicorp/terraform-provider-aws/issues/25488)) -* 
resource/aws_ce_cost_category: Fix error passing `split_charge_rule.parameter` to the AWS API ([#25488](https://github.com/hashicorp/terraform-provider-aws/issues/25488)) -* resource/aws_cloudwatch_composite_alarm: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_cloudwatch_event_bus: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_cloudwatch_event_rule: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_cloudwatch_metric_alarm: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_cloudwatch_metric_stream: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_cognito_user_pool: Correctly handle missing or empty `account_recovery_setting` attribute ([#25184](https://github.com/hashicorp/terraform-provider-aws/issues/25184)) -* resource/aws_ecr_repository: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_ecs_capacity_provider: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_ecs_cluster: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_ecs_service: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* 
resource/aws_ecs_task_definition: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_ecs_task_set: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_elasticache_cluster: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_elasticache_parameter_group: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_elasticache_replication_group: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_elasticache_subnet_group: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_elasticache_user: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_elasticache_user_group: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_instance_profile: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_openid_connect_provider: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_policy: Prevent ISO-partition tagging precautions from eating legit errors 
([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_role: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_saml_provider: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_server_certificate: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_service_linked_role: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_user: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_iam_virtual_mfa_device: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_keyspaces_table: Relax validation of the `schema_definition.column.type` argument to allow collection types ([#25230](https://github.com/hashicorp/terraform-provider-aws/issues/25230)) -* resource/aws_launch_configuration: Remove default value for `associate_public_ip_address` argument and mark as Computed. 
This fixes a regression introduced in [v4.17.0](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md#4170-june--3-2022) via [#17695](https://github.com/hashicorp/terraform-provider-aws/issues/17695) when no value is configured, whilst honoring any configured value ([#25450](https://github.com/hashicorp/terraform-provider-aws/issues/25450)) -* resource/aws_lb: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_lb_listener: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_lb_listener_rule: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_lb_target_group: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_sns_topic: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) -* resource/aws_sqs_queue: Prevent ISO-partition tagging precautions from eating legit errors ([#25549](https://github.com/hashicorp/terraform-provider-aws/issues/25549)) - -## 4.19.0 (June 17, 2022) - -FEATURES: - -* **New Resource:** `aws_kendra_index` ([#24920](https://github.com/hashicorp/terraform-provider-aws/issues/24920)) -* **New Resource:** `aws_lightsail_container_service` ([#20625](https://github.com/hashicorp/terraform-provider-aws/issues/20625)) -* **New Resource:** `aws_lightsail_container_service_deployment_version` ([#20625](https://github.com/hashicorp/terraform-provider-aws/issues/20625)) - -BUG FIXES: - -* resource/aws_dynamodb_table_item: Fix to remove attribute from table item on update 
([#25326](https://github.com/hashicorp/terraform-provider-aws/issues/25326)) -* resource/aws_ec2_managed_prefix_list_entry: Fix error when attempting to create or delete multiple list entries ([#25046](https://github.com/hashicorp/terraform-provider-aws/issues/25046)) - -## 4.18.0 (June 10, 2022) - -FEATURES: - -* **New Resource:** `aws_ce_anomaly_monitor` ([#25177](https://github.com/hashicorp/terraform-provider-aws/issues/25177)) -* **New Resource:** `aws_emrserverless_application` ([#25144](https://github.com/hashicorp/terraform-provider-aws/issues/25144)) - -ENHANCEMENTS: - -* data-source/aws_cloudwatch_logs_groups: Make `log_group_name_prefix` optional ([#25187](https://github.com/hashicorp/terraform-provider-aws/issues/25187)) -* data-source/aws_cognito_user_pool_client: Add `enable_propagate_additional_user_context_data` argument ([#25181](https://github.com/hashicorp/terraform-provider-aws/issues/25181)) -* data-source/aws_ram_resource_share: Add `resource_share_status` argument. ([#25159](https://github.com/hashicorp/terraform-provider-aws/issues/25159)) -* resource/aws_cognito_user_pool_client: Add `enable_propagate_additional_user_context_data` argument ([#25181](https://github.com/hashicorp/terraform-provider-aws/issues/25181)) -* resource/aws_ebs_snapshot_copy: Add support for `timeouts` configuration block. ([#20912](https://github.com/hashicorp/terraform-provider-aws/issues/20912)) -* resource/aws_ebs_volume: Add `final_snapshot` argument ([#21916](https://github.com/hashicorp/terraform-provider-aws/issues/21916)) -* resource/aws_s3_bucket: Add error handling for `ErrCodeNotImplemented` and `ErrCodeXNotImplemented` errors when reading bucket information.
([#24764](https://github.com/hashicorp/terraform-provider-aws/issues/24764)) -* resource/aws_vpc_ipam_pool_cidr_allocation: Improve internal search mechanism ([#25257](https://github.com/hashicorp/terraform-provider-aws/issues/25257)) - -BUG FIXES: - -* resource/aws_snapshot_create_volume_permission: Error if `account_id` is the snapshot's owner ([#12103](https://github.com/hashicorp/terraform-provider-aws/issues/12103)) -* resource/aws_ssm_parameter: Allow `Intelligent-Tiering` to upgrade to `Advanced` tier as needed. ([#25174](https://github.com/hashicorp/terraform-provider-aws/issues/25174)) - -## 4.17.1 (June 3, 2022) - -BUG FIXES: - -* resource/aws_ram_resource_share: Fix regression in v4.17.0 where `permission_arns` would get clobbered if already set ([#25158](https://github.com/hashicorp/terraform-provider-aws/issues/25158)) - -## 4.17.0 (June 3, 2022) - -FEATURES: - -* **New Data Source:** `aws_redshift_cluster_credentials` ([#25092](https://github.com/hashicorp/terraform-provider-aws/issues/25092)) -* **New Resource:** `aws_acmpca_policy` ([#25109](https://github.com/hashicorp/terraform-provider-aws/issues/25109)) -* **New Resource:** `aws_redshift_cluster_iam_roles` ([#25096](https://github.com/hashicorp/terraform-provider-aws/issues/25096)) -* **New Resource:** `aws_redshift_hsm_configuration` ([#25093](https://github.com/hashicorp/terraform-provider-aws/issues/25093)) -* **New Resource:** `aws_redshiftdata_statement` ([#25104](https://github.com/hashicorp/terraform-provider-aws/issues/25104)) - -ENHANCEMENTS: - -* resource/aws_dms_endpoint: Add `redshift_settings` configuration block ([#21846](https://github.com/hashicorp/terraform-provider-aws/issues/21846)) -* resource/aws_dms_endpoint: Add ability to use AWS Secrets Manager with the `aurora-postgresql` and `mongodb` engines ([#23691](https://github.com/hashicorp/terraform-provider-aws/issues/23691)) -* resource/aws_dms_endpoint: Add ability to use AWS Secrets Manager with the `aurora`, `mariadb` and
`mysql` engines ([#24846](https://github.com/hashicorp/terraform-provider-aws/issues/24846)) -* resource/aws_dms_endpoint: Add ability to use AWS Secrets Manager with the `redshift` engine ([#25080](https://github.com/hashicorp/terraform-provider-aws/issues/25080)) -* resource/aws_dms_endpoint: Add ability to use AWS Secrets Manager with the `sqlserver` engine ([#22646](https://github.com/hashicorp/terraform-provider-aws/issues/22646)) -* resource/aws_guardduty_detector: Add `kubernetes` attribute to the `datasources` configuration block ([#22859](https://github.com/hashicorp/terraform-provider-aws/issues/22859)) -* resource/aws_ram_resource_share: Add `permission_arns` argument. ([#25113](https://github.com/hashicorp/terraform-provider-aws/issues/25113)) -* resource/aws_redshift_cluster: The `default_iam_role_arn` argument is now Computed ([#25096](https://github.com/hashicorp/terraform-provider-aws/issues/25096)) - -BUG FIXES: - -* data-source/aws_launch_configuration: Correct data type for `ebs_block_device.throughput` and `root_block_device.throughput` attributes ([#25097](https://github.com/hashicorp/terraform-provider-aws/issues/25097)) -* resource/aws_db_instance_role_association: Extend timeout to 10 minutes ([#25145](https://github.com/hashicorp/terraform-provider-aws/issues/25145)) -* resource/aws_ebs_volume: Fix to preserve `iops` when changing EBS volume type (`io1`, `io2`, `gp3`) ([#23280](https://github.com/hashicorp/terraform-provider-aws/issues/23280)) -* resource/aws_launch_configuration: Honor `associate_public_ip_address = false` ([#17695](https://github.com/hashicorp/terraform-provider-aws/issues/17695)) -* resource/aws_rds_cluster_role_association: Extend timeout to 10 minutes ([#25145](https://github.com/hashicorp/terraform-provider-aws/issues/25145)) -* resource/aws_servicecatalog_provisioned_product: Correctly handle resources in a `TAINTED` state ([#25130](https://github.com/hashicorp/terraform-provider-aws/issues/25130)) - -## 4.16.0 (May
27, 2022) - -FEATURES: - -* **New Data Source:** `aws_location_place_index` ([#24980](https://github.com/hashicorp/terraform-provider-aws/issues/24980)) -* **New Data Source:** `aws_redshift_subnet_group` ([#25053](https://github.com/hashicorp/terraform-provider-aws/issues/25053)) -* **New Resource:** `aws_efs_replication_configuration` ([#22844](https://github.com/hashicorp/terraform-provider-aws/issues/22844)) -* **New Resource:** `aws_location_place_index` ([#24821](https://github.com/hashicorp/terraform-provider-aws/issues/24821)) -* **New Resource:** `aws_redshift_authentication_profile` ([#24907](https://github.com/hashicorp/terraform-provider-aws/issues/24907)) -* **New Resource:** `aws_redshift_endpoint_access` ([#25073](https://github.com/hashicorp/terraform-provider-aws/issues/25073)) -* **New Resource:** `aws_redshift_hsm_client_certificate` ([#24906](https://github.com/hashicorp/terraform-provider-aws/issues/24906)) -* **New Resource:** `aws_redshift_usage_limit` ([#24916](https://github.com/hashicorp/terraform-provider-aws/issues/24916)) - -ENHANCEMENTS: - -* data-source/aws_ami: Add `tpm_support` attribute ([#25045](https://github.com/hashicorp/terraform-provider-aws/issues/25045)) -* data-source/aws_redshift_cluster: Add `aqua_configuration_status` attribute. ([#24856](https://github.com/hashicorp/terraform-provider-aws/issues/24856)) -* data-source/aws_redshift_cluster: Add `arn`, `cluster_nodes`, `maintenance_track_name`, `manual_snapshot_retention_period`, `log_destination_type`, and `log_exports` attributes.
([#24982](https://github.com/hashicorp/terraform-provider-aws/issues/24982)) -* data-source/aws_cloudfront_response_headers_policy: Add `server_timing_headers_config` attribute ([#24913](https://github.com/hashicorp/terraform-provider-aws/issues/24913)) -* resource/aws_ami: Add `tpm_support` argument ([#25045](https://github.com/hashicorp/terraform-provider-aws/issues/25045)) -* resource/aws_ami_copy: Add `tpm_support` argument ([#25045](https://github.com/hashicorp/terraform-provider-aws/issues/25045)) -* resource/aws_ami_from_instance: Add `tpm_support` argument ([#25045](https://github.com/hashicorp/terraform-provider-aws/issues/25045)) -* resource/aws_autoscaling_group: Add `context` argument ([#24951](https://github.com/hashicorp/terraform-provider-aws/issues/24951)) -* resource/aws_autoscaling_group: Add `mixed_instances_policy.launch_template.override.instance_requirements` argument ([#24795](https://github.com/hashicorp/terraform-provider-aws/issues/24795)) -* resource/aws_cloudfront_response_headers_policy: Add `server_timing_headers_config` argument ([#24913](https://github.com/hashicorp/terraform-provider-aws/issues/24913)) -* resource/aws_cloudsearch_domain: Add `index_field.source_fields` argument ([#24915](https://github.com/hashicorp/terraform-provider-aws/issues/24915)) -* resource/aws_cloudwatch_metric_stream: Add `statistics_configuration` argument ([#24882](https://github.com/hashicorp/terraform-provider-aws/issues/24882)) -* resource/aws_elasticache_global_replication_group: Add support for upgrading `engine_version`. ([#25077](https://github.com/hashicorp/terraform-provider-aws/issues/25077)) -* resource/aws_msk_cluster: Support multiple attribute updates by refreshing `current_version` after each update ([#25062](https://github.com/hashicorp/terraform-provider-aws/issues/25062)) -* resource/aws_redshift_cluster: Add `aqua_configuration_status` and `apply_immediately` arguments. 
([#24856](https://github.com/hashicorp/terraform-provider-aws/issues/24856)) -* resource/aws_redshift_cluster: Add `default_iam_role_arn`, `maintenance_track_name`, and `manual_snapshot_retention_period` arguments. ([#24982](https://github.com/hashicorp/terraform-provider-aws/issues/24982)) -* resource/aws_redshift_cluster: Add `logging.log_destination_type` and `logging.log_exports` arguments. ([#24886](https://github.com/hashicorp/terraform-provider-aws/issues/24886)) -* resource/aws_redshift_cluster: Add plan-time validation for `iam_roles`, `owner_account`, and `port`. ([#24856](https://github.com/hashicorp/terraform-provider-aws/issues/24856)) -* resource/aws_redshift_event_subscription: Add plan time validations for `event_categories`, `source_type`, and `severity`. ([#24909](https://github.com/hashicorp/terraform-provider-aws/issues/24909)) -* resource/aws_transfer_server: Add support for `TransferSecurityPolicy-2022-03` `security_policy_name` value ([#25060](https://github.com/hashicorp/terraform-provider-aws/issues/25060)) - -BUG FIXES: - -* resource/aws_appflow_flow: Amend `task_properties` validation to avoid conflicting type assumption ([#24889](https://github.com/hashicorp/terraform-provider-aws/issues/24889)) -* resource/aws_db_proxy_target: Fix `InvalidDBInstanceState: DB Instance is in an unsupported state - CREATING, needs to be in [AVAILABLE, MODIFYING, BACKING_UP]` error on resource Create ([#24875](https://github.com/hashicorp/terraform-provider-aws/issues/24875)) -* resource/aws_instance: Correctly delete instance on destroy when `disable_api_termination` is `true` ([#19277](https://github.com/hashicorp/terraform-provider-aws/issues/19277)) -* resource/aws_instance: Prevent error `InvalidParameterCombination: The parameter GroupName within placement information cannot be specified when instanceInterruptionBehavior is set to 'STOP'` when using a launch template that sets `instance_interruption_behavior` to `stop` 
([#24695](https://github.com/hashicorp/terraform-provider-aws/issues/24695)) -* resource/aws_msk_cluster: Prevent crash on apply when `client_authentication.tls` is empty ([#25072](https://github.com/hashicorp/terraform-provider-aws/issues/25072)) -* resource/aws_servicecatalog_provisioned_product: Add possible `TAINTED` target state for resource update and remove one of the internal waiters during read ([#24804](https://github.com/hashicorp/terraform-provider-aws/issues/24804)) - -## 4.15.1 (May 20, 2022) - -BUG FIXES: - -* resource/aws_organizations_account: Fix reading account state for existing accounts ([#24899](https://github.com/hashicorp/terraform-provider-aws/issues/24899)) - -## 4.15.0 (May 20, 2022) - -BREAKING CHANGES: - -* resource/aws_msk_cluster: The `ebs_volume_size` argument is deprecated in favor of the `storage_info` block. The `storage_info` block can set `volume_size` and `provisioned_throughput` ([#24767](https://github.com/hashicorp/terraform-provider-aws/issues/24767)) - -FEATURES: - -* **New Data Source:** `aws_lb_hosted_zone_id` ([#24749](https://github.com/hashicorp/terraform-provider-aws/issues/24749)) -* **New Data Source:** `aws_networkmanager_core_network_policy_document` ([#24368](https://github.com/hashicorp/terraform-provider-aws/issues/24368)) -* **New Resource:** `aws_db_snapshot_copy` ([#9886](https://github.com/hashicorp/terraform-provider-aws/issues/9886)) -* **New Resource:** `aws_keyspaces_table` ([#24351](https://github.com/hashicorp/terraform-provider-aws/issues/24351)) - -ENHANCEMENTS: - -* data-source/aws_route53_resolver_rules: Add `name_regex` argument ([#24582](https://github.com/hashicorp/terraform-provider-aws/issues/24582)) -* resource/aws_autoscaling_group: Add `instance_refresh.preferences.skip_matching` argument ([#23059](https://github.com/hashicorp/terraform-provider-aws/issues/23059)) -* resource/aws_autoscaling_policy: Add `enabled` argument
([#12625](https://github.com/hashicorp/terraform-provider-aws/issues/12625)) -* resource/aws_ec2_fleet: Add `arn` attribute ([#24732](https://github.com/hashicorp/terraform-provider-aws/issues/24732)) -* resource/aws_ec2_fleet: Add `launch_template_config.override.instance_requirements` argument ([#24732](https://github.com/hashicorp/terraform-provider-aws/issues/24732)) -* resource/aws_ec2_fleet: Add support for `capacity-optimized` and `capacity-optimized-prioritized` values for `spot_options.allocation_strategy` ([#24732](https://github.com/hashicorp/terraform-provider-aws/issues/24732)) -* resource/aws_lambda_function: Add support for `nodejs16.x` `runtime` value ([#24768](https://github.com/hashicorp/terraform-provider-aws/issues/24768)) -* resource/aws_lambda_layer_version: Add support for `nodejs16.x` `compatible_runtimes` value ([#24768](https://github.com/hashicorp/terraform-provider-aws/issues/24768)) -* resource/aws_organizations_account: Add `create_govcloud` argument and `govcloud_id` attribute ([#24447](https://github.com/hashicorp/terraform-provider-aws/issues/24447)) -* resource/aws_s3_bucket_website_configuration: Add `routing_rules` parameter to be used instead of `routing_rule` to support configurations with empty String values ([#24198](https://github.com/hashicorp/terraform-provider-aws/issues/24198)) - -BUG FIXES: - -* resource/aws_autoscaling_group: Wait for correct number of ELBs when `wait_for_elb_capacity` is configured ([#20806](https://github.com/hashicorp/terraform-provider-aws/issues/20806)) -* resource/aws_elasticache_replication_group: Fix perpetual diff on `auto_minor_version_upgrade` ([#24688](https://github.com/hashicorp/terraform-provider-aws/issues/24688)) - -## 4.14.0 (May 13, 2022) - -FEATURES: - -* **New Data Source:** `aws_connect_routing_profile` ([#23525](https://github.com/hashicorp/terraform-provider-aws/issues/23525)) -* **New Data Source:** `aws_connect_security_profile` 
([#23524](https://github.com/hashicorp/terraform-provider-aws/issues/23524)) -* **New Data Source:** `aws_connect_user_hierarchy_structure` ([#23527](https://github.com/hashicorp/terraform-provider-aws/issues/23527)) -* **New Data Source:** `aws_location_map` ([#24693](https://github.com/hashicorp/terraform-provider-aws/issues/24693)) -* **New Resource:** `aws_appflow_connector_profile` ([#23892](https://github.com/hashicorp/terraform-provider-aws/issues/23892)) -* **New Resource:** `aws_appflow_flow` ([#24017](https://github.com/hashicorp/terraform-provider-aws/issues/24017)) -* **New Resource:** `aws_appintegrations_event_integration` ([#23904](https://github.com/hashicorp/terraform-provider-aws/issues/23904)) -* **New Resource:** `aws_connect_user_hierarchy_group` ([#23531](https://github.com/hashicorp/terraform-provider-aws/issues/23531)) -* **New Resource:** `aws_location_map` ([#24682](https://github.com/hashicorp/terraform-provider-aws/issues/24682)) - -ENHANCEMENTS: - -* data-source/aws_acm_certificate: Add `certificate` and `certificate_chain` attributes ([#24593](https://github.com/hashicorp/terraform-provider-aws/issues/24593)) -* data-source/aws_autoscaling_group: Add `enabled_metrics` attribute ([#24691](https://github.com/hashicorp/terraform-provider-aws/issues/24691)) -* data-source/aws_codestarconnections_connection: Support lookup by `name` ([#19262](https://github.com/hashicorp/terraform-provider-aws/issues/19262)) -* data-source/aws_launch_template: Add `instance_requirements` attribute ([#24543](https://github.com/hashicorp/terraform-provider-aws/issues/24543)) -* resource/aws_ebs_volume: Add support for `multi_attach_enabled` with `io2` volumes ([#19060](https://github.com/hashicorp/terraform-provider-aws/issues/19060)) -* resource/aws_launch_template: Add `instance_requirements` argument ([#24543](https://github.com/hashicorp/terraform-provider-aws/issues/24543)) -* resource/aws_servicecatalog_provisioned_product: Wait for provisioning to 
finish ([#24758](https://github.com/hashicorp/terraform-provider-aws/issues/24758)) -* resource/aws_servicecatalog_provisioned_product: Wait for update to finish ([#24758](https://github.com/hashicorp/terraform-provider-aws/issues/24758)) -* resource/aws_spot_fleet_request: Add `overrides.instance_requirements` argument ([#24448](https://github.com/hashicorp/terraform-provider-aws/issues/24448)) - -BUG FIXES: - -* resource/aws_alb_listener_rule: Don't force recreate listener rule on priority change. ([#23768](https://github.com/hashicorp/terraform-provider-aws/issues/23768)) -* resource/aws_default_subnet: Fix `InvalidSubnet.Conflict` errors when associating IPv6 CIDR blocks ([#24685](https://github.com/hashicorp/terraform-provider-aws/issues/24685)) -* resource/aws_ebs_volume: Add configurable timeouts ([#24745](https://github.com/hashicorp/terraform-provider-aws/issues/24745)) -* resource/aws_imagebuilder_image_recipe: Fix `ResourceDependencyException` errors when a dependency is modified ([#24708](https://github.com/hashicorp/terraform-provider-aws/issues/24708)) -* resource/aws_kms_key: Retry on `MalformedPolicyDocumentException` errors when updating key policy ([#24697](https://github.com/hashicorp/terraform-provider-aws/issues/24697)) -* resource/aws_servicecatalog_provisioned_product: Prevent error when retrieving a provisioned product in a non-available state ([#24758](https://github.com/hashicorp/terraform-provider-aws/issues/24758)) -* resource/aws_subnet: Fix `InvalidSubnet.Conflict` errors when associating IPv6 CIDR blocks ([#24685](https://github.com/hashicorp/terraform-provider-aws/issues/24685)) - -## 4.13.0 (May 5, 2022) - -FEATURES: - -* **New Data Source:** `aws_emrcontainers_virtual_cluster` ([#20003](https://github.com/hashicorp/terraform-provider-aws/issues/20003)) -* **New Data Source:** `aws_iam_instance_profiles` ([#24423](https://github.com/hashicorp/terraform-provider-aws/issues/24423)) -* **New Data Source:** `aws_secretsmanager_secrets` 
([#24514](https://github.com/hashicorp/terraform-provider-aws/issues/24514)) -* **New Resource:** `aws_emrcontainers_virtual_cluster` ([#20003](https://github.com/hashicorp/terraform-provider-aws/issues/20003)) -* **New Resource:** `aws_iot_topic_rule_destination` ([#24395](https://github.com/hashicorp/terraform-provider-aws/issues/24395)) - -ENHANCEMENTS: - -* data-source/aws_ami: Add `deprecation_time` attribute ([#24489](https://github.com/hashicorp/terraform-provider-aws/issues/24489)) -* data-source/aws_msk_cluster: Add `bootstrap_brokers_public_sasl_iam`, `bootstrap_brokers_public_sasl_scram` and `bootstrap_brokers_public_tls` attributes ([#21005](https://github.com/hashicorp/terraform-provider-aws/issues/21005)) -* data-source/aws_ssm_patch_baseline: Add the following attributes: `approved_patches`, `approved_patches_compliance_level`, `approval_rule`, `global_filter`, `rejected_patches`, `rejected_patches_action`, `source` ([#24401](https://github.com/hashicorp/terraform-provider-aws/issues/24401)) -* resource/aws_ami: Add `deprecation_time` argument ([#24489](https://github.com/hashicorp/terraform-provider-aws/issues/24489)) -* resource/aws_ami_copy: Add `deprecation_time` argument ([#24489](https://github.com/hashicorp/terraform-provider-aws/issues/24489)) -* resource/aws_ami_from_instance: Add `deprecation_time` argument ([#24489](https://github.com/hashicorp/terraform-provider-aws/issues/24489)) -* resource/aws_iot_topic_rule: Add `http` and `error_action.http` arguments ([#16087](https://github.com/hashicorp/terraform-provider-aws/issues/16087)) -* resource/aws_iot_topic_rule: Add `kafka` and `error_action.kafka` arguments ([#24395](https://github.com/hashicorp/terraform-provider-aws/issues/24395)) -* resource/aws_iot_topic_rule: Add `s3.canned_acl` and `error_action.s3.canned_acl` arguments ([#19175](https://github.com/hashicorp/terraform-provider-aws/issues/19175)) -* resource/aws_iot_topic_rule: Add `timestream` and `error_action.timestream` 
arguments ([#22337](https://github.com/hashicorp/terraform-provider-aws/issues/22337)) -* resource/aws_lambda_permission: Add `function_url_auth_type` argument ([#24510](https://github.com/hashicorp/terraform-provider-aws/issues/24510)) -* resource/aws_msk_cluster: Add `bootstrap_brokers_public_sasl_iam`, `bootstrap_brokers_public_sasl_scram` and `bootstrap_brokers_public_tls` attributes ([#21005](https://github.com/hashicorp/terraform-provider-aws/issues/21005)) -* resource/aws_msk_cluster: Add `broker_node_group_info.connectivity_info` argument to support [public access](https://docs.aws.amazon.com/msk/latest/developerguide/public-access.html) ([#21005](https://github.com/hashicorp/terraform-provider-aws/issues/21005)) -* resource/aws_msk_cluster: Add `client_authentication.unauthenticated` argument ([#21005](https://github.com/hashicorp/terraform-provider-aws/issues/21005)) -* resource/aws_msk_cluster: Allow in-place update of `client_authentication` and `encryption_info.encryption_in_transit.client_broker` ([#21005](https://github.com/hashicorp/terraform-provider-aws/issues/21005)) - -BUG FIXES: - -* resource/aws_cloudfront_distribution: Fix PreconditionFailed errors when other CloudFront resources are changed before the distribution ([#24537](https://github.com/hashicorp/terraform-provider-aws/issues/24537)) -* resource/aws_ecs_service: Fix retry when using the `wait_for_steady_state` parameter ([#24541](https://github.com/hashicorp/terraform-provider-aws/issues/24541)) -* resource/aws_launch_template: Fix crash when reading `license_specification` ([#24579](https://github.com/hashicorp/terraform-provider-aws/issues/24579)) -* resource/aws_ssm_document: Always include `attachment_sources` when updating SSM documents ([#24530](https://github.com/hashicorp/terraform-provider-aws/issues/24530)) - -## 4.12.1 (April 29, 2022) - -ENHANCEMENTS: - -* resource/aws_kms_key: Add support for HMAC_256 customer master key spec 
([#24450](https://github.com/hashicorp/terraform-provider-aws/issues/24450)) - -BUG FIXES: - -* resource/aws_acm_certificate_validation: Restore certificate issuance timestamp as the resource `id` value, fixing error on existing resource Read ([#24453](https://github.com/hashicorp/terraform-provider-aws/issues/24453)) -* resource/aws_kms_alias: Fix reserved prefix used in `name` and `name_prefix` plan time validation ([#24469](https://github.com/hashicorp/terraform-provider-aws/issues/24469)) - -## 4.12.0 (April 28, 2022) - -FEATURES: - -* **New Data Source:** `aws_ce_cost_category` ([#24402](https://github.com/hashicorp/terraform-provider-aws/issues/24402)) -* **New Data Source:** `aws_ce_tags` ([#24402](https://github.com/hashicorp/terraform-provider-aws/issues/24402)) -* **New Data Source:** `aws_cloudfront_origin_access_identities` ([#24382](https://github.com/hashicorp/terraform-provider-aws/issues/24382)) -* **New Data Source:** `aws_mq_broker_instance_type_offerings` ([#24394](https://github.com/hashicorp/terraform-provider-aws/issues/24394)) -* **New Resource:** `aws_athena_data_catalog` ([#22968](https://github.com/hashicorp/terraform-provider-aws/issues/22968)) -* **New Resource:** `aws_ce_cost_category` ([#24402](https://github.com/hashicorp/terraform-provider-aws/issues/24402)) -* **New Resource:** `aws_docdb_event_subscription` ([#24379](https://github.com/hashicorp/terraform-provider-aws/issues/24379)) - -ENHANCEMENTS: - -* data-source/aws_grafana_workspace: Add `tags` attribute ([#24358](https://github.com/hashicorp/terraform-provider-aws/issues/24358)) -* data-source/aws_instance: Add `maintenance_options` attribute ([#24377](https://github.com/hashicorp/terraform-provider-aws/issues/24377)) -* data-source/aws_launch_template: Add `maintenance_options` attribute ([#24377](https://github.com/hashicorp/terraform-provider-aws/issues/24377)) -* provider: Add support for Assume Role with Web Identity. 
([#24441](https://github.com/hashicorp/terraform-provider-aws/issues/24441)) -* resource/aws_acm_certificate: Add `validation_option` argument ([#3853](https://github.com/hashicorp/terraform-provider-aws/issues/3853)) -* resource/aws_acm_certificate_validation: Increase default resource Create (certificate issuance) timeout to 75 minutes ([#20073](https://github.com/hashicorp/terraform-provider-aws/issues/20073)) -* resource/aws_emr_cluster: Add `list_steps_states` argument ([#20871](https://github.com/hashicorp/terraform-provider-aws/issues/20871)) -* resource/aws_grafana_workspace: Add `tags` argument ([#24358](https://github.com/hashicorp/terraform-provider-aws/issues/24358)) -* resource/aws_instance: Add `maintenance_options` argument ([#24377](https://github.com/hashicorp/terraform-provider-aws/issues/24377)) -* resource/aws_launch_template: Add `maintenance_options` argument ([#24377](https://github.com/hashicorp/terraform-provider-aws/issues/24377)) -* resource/aws_mq_broker: Make `maintenance_window_start_time` updateable without recreation. ([#24385](https://github.com/hashicorp/terraform-provider-aws/issues/24385)) -* resource/aws_rds_cluster: Add `serverlessv2_scaling_configuration` argument to support [Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html) ([#24363](https://github.com/hashicorp/terraform-provider-aws/issues/24363)) -* resource/aws_spot_fleet_request: Add `terminate_instances_on_delete` argument ([#17268](https://github.com/hashicorp/terraform-provider-aws/issues/17268)) - -BUG FIXES: - -* data-source/aws_kms_alias: Fix `name` plan time validation ([#13000](https://github.com/hashicorp/terraform-provider-aws/issues/13000)) -* provider: Setting `skip_metadata_api_check = false` now overrides `AWS_EC2_METADATA_DISABLED` environment variable. 
([#24441](https://github.com/hashicorp/terraform-provider-aws/issues/24441)) -* resource/aws_acm_certificate: Correctly handle SAN entries that match `domain_name` ([#20073](https://github.com/hashicorp/terraform-provider-aws/issues/20073)) -* resource/aws_dms_replication_task: Fix to stop the task before updating, if required ([#24047](https://github.com/hashicorp/terraform-provider-aws/issues/24047)) -* resource/aws_ec2_availability_zone_group: Don't crash if `group_name` is not found ([#24422](https://github.com/hashicorp/terraform-provider-aws/issues/24422)) -* resource/aws_elasticache_cluster: Update regex pattern to target specific Redis V6 versions through the `engine_version` attribute ([#23734](https://github.com/hashicorp/terraform-provider-aws/issues/23734)) -* resource/aws_elasticache_replication_group: Update regex pattern to target specific Redis V6 versions through the `engine_version` attribute ([#23734](https://github.com/hashicorp/terraform-provider-aws/issues/23734)) -* resource/aws_kms_alias: Fix `name` and `name_prefix` plan time validation ([#13000](https://github.com/hashicorp/terraform-provider-aws/issues/13000)) -* resource/aws_lb: Fix bug causing an error on update if tags unsupported in ISO region ([#24334](https://github.com/hashicorp/terraform-provider-aws/issues/24334)) -* resource/aws_s3_bucket_policy: Let resource be removed from tfstate if bucket deleted outside Terraform ([#23510](https://github.com/hashicorp/terraform-provider-aws/issues/23510)) -* resource/aws_s3_bucket_versioning: Let resource be removed from tfstate if bucket deleted outside Terraform ([#23510](https://github.com/hashicorp/terraform-provider-aws/issues/23510)) -* resource/aws_ses_receipt_filter: Allow period character (`.`) in `name` argument ([#24383](https://github.com/hashicorp/terraform-provider-aws/issues/24383)) - -## 4.11.0 (April 22, 2022) - -FEATURES: - -* **New Data Source:** `aws_s3_bucket_policy` 
([#17738](https://github.com/hashicorp/terraform-provider-aws/issues/17738)) -* **New Resource:** `aws_transfer_workflow` ([#24248](https://github.com/hashicorp/terraform-provider-aws/issues/24248)) - -ENHANCEMENTS: - -* data-source/aws_imagebuilder_infrastructure_configuration: Add `instance_metadata_options` attribute ([#24285](https://github.com/hashicorp/terraform-provider-aws/issues/24285)) -* data-source/aws_opensearch_domain: Add `cold_storage_options` attribute to the `cluster_config` configuration block ([#24284](https://github.com/hashicorp/terraform-provider-aws/issues/24284)) -* resource/aws_db_proxy: Add `auth.username` argument ([#24264](https://github.com/hashicorp/terraform-provider-aws/issues/24264)) -* resource/aws_elasticache_user: Add plan-time validation of password argument length ([#24274](https://github.com/hashicorp/terraform-provider-aws/issues/24274)) -* resource/aws_elasticsearch_domain: For Elasticsearch versions 6.7+, allow in-place update of `node_to_node_encryption.0.enabled` and `encrypt_at_rest.0.enabled`. ([#24222](https://github.com/hashicorp/terraform-provider-aws/issues/24222)) -* resource/aws_fsx_ontap_file_system: Add support for `SINGLE_AZ_1` `deployment_type`.
([#24280](https://github.com/hashicorp/terraform-provider-aws/issues/24280)) -* resource/aws_imagebuilder_infrastructure_configuration: Add `instance_metadata_options` argument ([#24285](https://github.com/hashicorp/terraform-provider-aws/issues/24285)) -* resource/aws_instance: Add `capacity_reservation_specification.capacity_reservation_target.capacity_reservation_resource_group_arn` argument ([#24283](https://github.com/hashicorp/terraform-provider-aws/issues/24283)) -* resource/aws_instance: Add `network_interface.network_card_index` argument ([#24283](https://github.com/hashicorp/terraform-provider-aws/issues/24283)) -* resource/aws_opensearch_domain: Add `cold_storage_options` argument to the `cluster_config` configuration block ([#24284](https://github.com/hashicorp/terraform-provider-aws/issues/24284)) -* resource/aws_opensearch_domain: For Elasticsearch versions 6.7+, allow in-place update of `node_to_node_encryption.0.enabled` and `encrypt_at_rest.0.enabled`. ([#24222](https://github.com/hashicorp/terraform-provider-aws/issues/24222)) -* resource/aws_transfer_server: Add `workflow_details` argument ([#24248](https://github.com/hashicorp/terraform-provider-aws/issues/24248)) -* resource/aws_waf_byte_match_set: Additional supported values for `byte_match_tuples.field_to_match.type` argument ([#24286](https://github.com/hashicorp/terraform-provider-aws/issues/24286)) -* resource/aws_wafregional_web_acl: Additional supported values for `logging_configuration.redacted_fields.field_to_match.type` argument ([#24286](https://github.com/hashicorp/terraform-provider-aws/issues/24286)) -* resource/aws_workspaces_workspace: Additional supported values for `workspace_properties.compute_type_name` argument ([#24286](https://github.com/hashicorp/terraform-provider-aws/issues/24286)) - -BUG FIXES: - -* data-source/aws_db_instance: Prevent panic when setting instance connection endpoint values ([#24299](https://github.com/hashicorp/terraform-provider-aws/issues/24299)) -* 
data-source/aws_efs_file_system: Prevent panic when searching by tag returns 0 or multiple results ([#24298](https://github.com/hashicorp/terraform-provider-aws/issues/24298)) -* data-source/aws_elasticache_cluster: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* resource/aws_appstream_user_stack_association: Prevent panic during resource read ([#24303](https://github.com/hashicorp/terraform-provider-aws/issues/24303)) -* resource/aws_cloudformation_stack_set: Prevent `Validation` errors when `operation_preferences.failure_tolerance_count` is zero ([#24250](https://github.com/hashicorp/terraform-provider-aws/issues/24250)) -* resource/aws_elastic_beanstalk_environment: Correctly set `cname_prefix` attribute ([#24278](https://github.com/hashicorp/terraform-provider-aws/issues/24278)) -* resource/aws_elasticache_cluster: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* resource/aws_elasticache_parameter_group: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* resource/aws_elasticache_replication_group: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* resource/aws_elasticache_subnet_group: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* resource/aws_elasticache_user: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* 
resource/aws_elasticache_user_group: Gracefully handle additional tagging error type in non-standard AWS partitions (i.e., ISO) ([#24275](https://github.com/hashicorp/terraform-provider-aws/issues/24275)) -* resource/aws_instance: Fix assumption that the `Placement` and `disableApiTermination` instance attributes exist when managing a Snowball Edge device ([#19256](https://github.com/hashicorp/terraform-provider-aws/issues/19256)) -* resource/aws_kinesis_firehose_delivery_stream: Increase the maximum length of the `processing_configuration.processors.parameters.parameter_value` argument's value to `5120` ([#24312](https://github.com/hashicorp/terraform-provider-aws/issues/24312)) -* resource/aws_macie2_member: Correct type for `invitation_disable_email_notification` parameter ([#24304](https://github.com/hashicorp/terraform-provider-aws/issues/24304)) -* resource/aws_s3_bucket_server_side_encryption_configuration: Retry on `ServerSideEncryptionConfigurationNotFoundError` errors due to eventual consistency ([#24266](https://github.com/hashicorp/terraform-provider-aws/issues/24266)) -* resource/aws_sfn_state_machine: Prevent panic during resource update ([#24302](https://github.com/hashicorp/terraform-provider-aws/issues/24302)) -* resource/aws_shield_protection_group: When updating resource tags, use the `protection_group_arn` parameter instead of `arn`. 
([#24296](https://github.com/hashicorp/terraform-provider-aws/issues/24296)) -* resource/aws_ssm_association: Prevent panic when `wait_for_success_timeout_seconds` is configured ([#24300](https://github.com/hashicorp/terraform-provider-aws/issues/24300)) - -## 4.10.0 (April 14, 2022) - -FEATURES: - -* **New Data Source:** `aws_iam_saml_provider` ([#10498](https://github.com/hashicorp/terraform-provider-aws/issues/10498)) -* **New Data Source:** `aws_nat_gateways` ([#24190](https://github.com/hashicorp/terraform-provider-aws/issues/24190)) -* **New Resource:** `aws_datasync_location_fsx_openzfs_file_system` ([#24200](https://github.com/hashicorp/terraform-provider-aws/issues/24200)) -* **New Resource:** `aws_elasticache_user_group_association` ([#24204](https://github.com/hashicorp/terraform-provider-aws/issues/24204)) -* **New Resource:** `aws_qldb_stream` ([#19297](https://github.com/hashicorp/terraform-provider-aws/issues/19297)) - -ENHANCEMENTS: - -* data-source/aws_qldb_ledger: Add `kms_key` and `tags` attributes ([#19297](https://github.com/hashicorp/terraform-provider-aws/issues/19297)) -* resource/aws_ami_launch_permission: Add `group` argument ([#20677](https://github.com/hashicorp/terraform-provider-aws/issues/20677)) -* resource/aws_ami_launch_permission: Add `organization_arn` and `organizational_unit_arn` arguments ([#21694](https://github.com/hashicorp/terraform-provider-aws/issues/21694)) -* resource/aws_athena_database: Add `properties` argument. ([#24172](https://github.com/hashicorp/terraform-provider-aws/issues/24172)) -* resource/aws_athena_database: Add import support. ([#24172](https://github.com/hashicorp/terraform-provider-aws/issues/24172)) -* resource/aws_config_config_rule: Add `source.custom_policy_details` argument. ([#24057](https://github.com/hashicorp/terraform-provider-aws/issues/24057)) -* resource/aws_config_config_rule: Add plan time validation for `source.source_detail.event_source` and `source.source_detail.message_type`. 
([#24057](https://github.com/hashicorp/terraform-provider-aws/issues/24057)) -* resource/aws_config_config_rule: Make `source.source_identifier` optional. ([#24057](https://github.com/hashicorp/terraform-provider-aws/issues/24057)) -* resource/aws_eks_addon: Add `preserve` argument ([#24218](https://github.com/hashicorp/terraform-provider-aws/issues/24218)) -* resource/aws_grafana_workspace: Add plan time validation for the `authentication_providers` argument. ([#24170](https://github.com/hashicorp/terraform-provider-aws/issues/24170)) -* resource/aws_qldb_ledger: Add `kms_key` argument ([#19297](https://github.com/hashicorp/terraform-provider-aws/issues/19297)) -* resource/aws_vpc_ipam_scope: Add pagination when describing IPAM Scopes ([#24188](https://github.com/hashicorp/terraform-provider-aws/issues/24188)) - -BUG FIXES: - -* resource/aws_athena_database: Add drift detection for `comment`. ([#24172](https://github.com/hashicorp/terraform-provider-aws/issues/24172)) -* resource/aws_cloudformation_stack_set: Prevent `InvalidParameter` errors when updating `operation_preferences` ([#24202](https://github.com/hashicorp/terraform-provider-aws/issues/24202)) -* resource/aws_cloudwatch_event_connection: Add validation to `auth_parameters.api_key.key`, `auth_parameters.api_key.value`, `auth_parameters.basic.username`, `auth_parameters.basic.password`, `auth_parameters.oauth.authorization_endpoint`, `auth_parameters.oauth.client_parameters.client_id` and `auth_parameters.oauth.client_parameters.client_secret` arguments ([#24154](https://github.com/hashicorp/terraform-provider-aws/issues/24154)) -* resource/aws_cloudwatch_log_subscription_filter: Retry resource create and update when a conflicting operation error is returned ([#24148](https://github.com/hashicorp/terraform-provider-aws/issues/24148)) -* resource/aws_ecs_service: Retry when using the `wait_for_steady_state` parameter and `ResourceNotReady` errors are returned 
from the AWS API ([#24223](https://github.com/hashicorp/terraform-provider-aws/issues/24223)) -* resource/aws_ecs_service: Wait for service to reach an active state after create and update operations ([#24223](https://github.com/hashicorp/terraform-provider-aws/issues/24223)) -* resource/aws_emr_cluster: Ignore `UnknownOperationException` errors when reading a cluster's auto-termination policy ([#24237](https://github.com/hashicorp/terraform-provider-aws/issues/24237)) -* resource/aws_lambda_function_url: Ignore `ResourceConflictException` errors caused by existing `FunctionURLAllowPublicAccess` permission statements ([#24220](https://github.com/hashicorp/terraform-provider-aws/issues/24220)) -* resource/aws_vpc_ipam_scope: Prevent panic when describing IPAM Scopes by ID ([#24188](https://github.com/hashicorp/terraform-provider-aws/issues/24188)) - -## 4.9.0 (April 07, 2022) - -NOTES: - -* resource/aws_s3_bucket: The `acceleration_status`, `acl`, `cors_rule`, `grant`, `lifecycle_rule`, `logging`, `object_lock_configuration.rule`, `policy`, `replication_configuration`, `request_payer`, `server_side_encryption_configuration`, `versioning`, and `website` parameters are now Optional. Please refer to the documentation for details on drift detection and potential conflicts when configuring these parameters with the standalone `aws_s3_bucket_*` resources. 
([#23985](https://github.com/hashicorp/terraform-provider-aws/issues/23985)) - -FEATURES: - -* **New Data Source:** `aws_eks_addon_version` ([#23157](https://github.com/hashicorp/terraform-provider-aws/issues/23157)) -* **New Data Source:** `aws_lambda_function_url` ([#24053](https://github.com/hashicorp/terraform-provider-aws/issues/24053)) -* **New Data Source:** `aws_memorydb_acl` ([#23891](https://github.com/hashicorp/terraform-provider-aws/issues/23891)) -* **New Data Source:** `aws_memorydb_cluster` ([#23991](https://github.com/hashicorp/terraform-provider-aws/issues/23991)) -* **New Data Source:** `aws_memorydb_snapshot` ([#23990](https://github.com/hashicorp/terraform-provider-aws/issues/23990)) -* **New Data Source:** `aws_memorydb_user` ([#23890](https://github.com/hashicorp/terraform-provider-aws/issues/23890)) -* **New Data Source:** `aws_opensearch_domain` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) -* **New Data Source:** `aws_ssm_maintenance_windows` ([#24011](https://github.com/hashicorp/terraform-provider-aws/issues/24011)) -* **New Resource:** `aws_db_instance_automated_backups_replication` ([#23759](https://github.com/hashicorp/terraform-provider-aws/issues/23759)) -* **New Resource:** `aws_dynamodb_contributor_insights` ([#23947](https://github.com/hashicorp/terraform-provider-aws/issues/23947)) -* **New Resource:** `aws_iot_indexing_configuration` ([#9929](https://github.com/hashicorp/terraform-provider-aws/issues/9929)) -* **New Resource:** `aws_iot_logging_options` ([#13392](https://github.com/hashicorp/terraform-provider-aws/issues/13392)) -* **New Resource:** `aws_iot_provisioning_template` ([#12108](https://github.com/hashicorp/terraform-provider-aws/issues/12108)) -* **New Resource:** `aws_lambda_function_url` ([#24053](https://github.com/hashicorp/terraform-provider-aws/issues/24053)) -* **New Resource:** `aws_opensearch_domain` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) 
-* **New Resource:** `aws_opensearch_domain_policy` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) -* **New Resource:** `aws_opensearch_domain_saml_options` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) -* **New Resource:** `aws_rds_cluster_activity_stream` ([#22097](https://github.com/hashicorp/terraform-provider-aws/issues/22097)) - -ENHANCEMENTS: - -* data-source/aws_imagebuilder_distribution_configuration: Add `account_id` attribute to the `launch_template_configuration` attribute of the `distribution` configuration block ([#23924](https://github.com/hashicorp/terraform-provider-aws/issues/23924)) -* data-source/aws_route: Add `core_network_arn` argument ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* data-source/aws_route_table: Add `routes.core_network_arn` attribute ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* provider: Add support for reading custom CA bundle setting from shared config files ([#24064](https://github.com/hashicorp/terraform-provider-aws/issues/24064)) -* resource/aws_cloudformation_stack_set: Add `operation_preferences` argument ([#23908](https://github.com/hashicorp/terraform-provider-aws/issues/23908)) -* resource/aws_default_route_table: Add `core_network_arn` argument to the `route` configuration block ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* resource/aws_dlm_lifecycle_policy: Add `policy_details.schedule.create_rule.cron_expression`, `policy_details.schedule.retain_rule.interval`, `policy_details.schedule.retain_rule.interval_unit`, `policy_details.policy_type`, `policy_details.schedule.deprecate_rule`, `policy_details.parameters`, `policy_details.schedule.variable_tags`, `policy_details.schedule.fast_restore_rule`, `policy_details.schedule.share_rule`, `policy_details.resource_locations`, `policy_details.schedule.create_rule.location`, `policy_details.action` and 
`policy_details.event_source` arguments ([#23880](https://github.com/hashicorp/terraform-provider-aws/issues/23880)) -* resource/aws_dlm_lifecycle_policy: Add plan time validations for `policy_details.resource_types` and `description` arguments ([#23880](https://github.com/hashicorp/terraform-provider-aws/issues/23880)) -* resource/aws_dlm_lifecycle_policy: Make `policy_details.resource_types`, `policy_details.schedule`, `policy_details.target_tags`, `policy_details.schedule.retain_rule` and `policy_details.schedule.create_rule.interval` arguments optional ([#23880](https://github.com/hashicorp/terraform-provider-aws/issues/23880)) -* resource/aws_elasticache_cluster: Add `auto_minor_version_upgrade` argument ([#23996](https://github.com/hashicorp/terraform-provider-aws/issues/23996)) -* resource/aws_fms_policy: Retry when `InternalErrorException` errors are returned from the AWS API ([#23952](https://github.com/hashicorp/terraform-provider-aws/issues/23952)) -* resource/aws_fsx_ontap_file_system: Support updating `storage_capacity`, `throughput_capacity`, and `disk_iops_configuration`. ([#24002](https://github.com/hashicorp/terraform-provider-aws/issues/24002)) -* resource/aws_imagebuilder_distribution_configuration: Add `account_id` argument to the `launch_template_configuration` attribute of the `distribution` configuration block ([#23924](https://github.com/hashicorp/terraform-provider-aws/issues/23924)) -* resource/aws_iot_authorizer: Add `enable_caching_for_http` argument ([#23993](https://github.com/hashicorp/terraform-provider-aws/issues/23993)) -* resource/aws_lambda_permission: Add `principal_org_id` argument. 
([#24001](https://github.com/hashicorp/terraform-provider-aws/issues/24001)) -* resource/aws_mq_broker: Add validation to `broker_name` and `security_groups` arguments ([#18088](https://github.com/hashicorp/terraform-provider-aws/issues/18088)) -* resource/aws_organizations_account: Add `close_on_deletion` argument to close account on deletion ([#23930](https://github.com/hashicorp/terraform-provider-aws/issues/23930)) -* resource/aws_route: Add `core_network_arn` argument ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* resource/aws_route_table: Add `core_network_arn` argument to the `route` configuration block ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* resource/aws_s3_bucket: Speed up resource deletion, especially when the S3 buckets contains a large number of objects and `force_destroy` is `true` ([#24020](https://github.com/hashicorp/terraform-provider-aws/issues/24020)) -* resource/aws_s3_bucket: Update `acceleration_status` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_accelerate_configuration` resource. ([#23816](https://github.com/hashicorp/terraform-provider-aws/issues/23816)) -* resource/aws_s3_bucket: Update `acl` and `grant` parameters to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring these parameters with the standalone `aws_s3_bucket_acl` resource. ([#23798](https://github.com/hashicorp/terraform-provider-aws/issues/23798)) -* resource/aws_s3_bucket: Update `cors_rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_cors_configuration` resource. 
([#23817](https://github.com/hashicorp/terraform-provider-aws/issues/23817)) -* resource/aws_s3_bucket: Update `lifecycle_rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_lifecycle_configuration` resource. ([#23818](https://github.com/hashicorp/terraform-provider-aws/issues/23818)) -* resource/aws_s3_bucket: Update `logging` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_logging` resource. ([#23819](https://github.com/hashicorp/terraform-provider-aws/issues/23819)) -* resource/aws_s3_bucket: Update `object_lock_configuration.rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_object_lock_configuration` resource. ([#23984](https://github.com/hashicorp/terraform-provider-aws/issues/23984)) -* resource/aws_s3_bucket: Update `policy` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_policy` resource. ([#23843](https://github.com/hashicorp/terraform-provider-aws/issues/23843)) -* resource/aws_s3_bucket: Update `replication_configuration` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_replication_configuration` resource. ([#23842](https://github.com/hashicorp/terraform-provider-aws/issues/23842)) -* resource/aws_s3_bucket: Update `request_payer` parameter to be configurable. 
Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_request_payment_configuration` resource. ([#23844](https://github.com/hashicorp/terraform-provider-aws/issues/23844)) -* resource/aws_s3_bucket: Update `server_side_encryption_configuration` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_server_side_encryption_configuration` resource. ([#23822](https://github.com/hashicorp/terraform-provider-aws/issues/23822)) -* resource/aws_s3_bucket: Update `versioning` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_versioning` resource. ([#23820](https://github.com/hashicorp/terraform-provider-aws/issues/23820)) -* resource/aws_s3_bucket: Update `website` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_website_configuration` resource. 
([#23821](https://github.com/hashicorp/terraform-provider-aws/issues/23821)) -* resource/aws_storagegateway_gateway: Add `maintenance_start_time` argument ([#15355](https://github.com/hashicorp/terraform-provider-aws/issues/15355)) -* resource/aws_storagegateway_nfs_file_share: Add `bucket_region` and `vpc_endpoint_dns_name` arguments to support PrivateLink endpoints ([#24038](https://github.com/hashicorp/terraform-provider-aws/issues/24038)) -* resource/aws_vpc_ipam: Add `cascade` argument ([#23973](https://github.com/hashicorp/terraform-provider-aws/issues/23973)) -* resource/aws_vpn_connection: Add `core_network_arn` and `core_network_attachment_arn` attributes ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* resource/aws_xray_group: Add `insights_configuration` argument ([#24028](https://github.com/hashicorp/terraform-provider-aws/issues/24028)) - -BUG FIXES: - -* data-source/aws_elasticache_cluster: Allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_backup_report_plan: Wait for asynchronous lifecycle operations to complete ([#23967](https://github.com/hashicorp/terraform-provider-aws/issues/23967)) -* resource/aws_cloudformation_stack_set: Consider `QUEUED` a valid pending state for resource creation ([#22160](https://github.com/hashicorp/terraform-provider-aws/issues/22160)) -* resource/aws_dynamodb_table_item: Allow `item` names to still succeed if they include non-letters ([#14075](https://github.com/hashicorp/terraform-provider-aws/issues/14075)) -* resource/aws_elasticache_cluster: Attempt `tags`-on-create, fallback to tag after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_elasticache_parameter_group: Attempt `tags`-on-create, fallback to tag 
after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_elasticache_replication_group: Allow disabling `auto_minor_version_upgrade` ([#23996](https://github.com/hashicorp/terraform-provider-aws/issues/23996)) -* resource/aws_elasticache_replication_group: Attempt `tags`-on-create, fallback to tag after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_elasticache_replication_group: Wait for available state before updating tags ([#24021](https://github.com/hashicorp/terraform-provider-aws/issues/24021)) -* resource/aws_elasticache_subnet_group: Attempt `tags`-on-create, fallback to tag after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_elasticache_user: Attempt `tags`-on-create, fallback to tag after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_elasticache_user_group: Attempt `tags`-on-create, fallback to tag after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) -* resource/aws_elasticsearch_domain_saml_option: Fix difference caused by `subject_key` default not matching AWS default; old and new defaults are equivalent ([#20892](https://github.com/hashicorp/terraform-provider-aws/issues/20892)) -* resource/aws_lb: Fix `attribute key not recognized` error preventing creation in ISO-B regions 
([#23972](https://github.com/hashicorp/terraform-provider-aws/issues/23972)) -* resource/aws_redshift_cluster: Correctly use `number_of_nodes` argument value when restoring from snapshot ([#13203](https://github.com/hashicorp/terraform-provider-aws/issues/13203)) -* resource/aws_route: Ensure that resource ID is set in case of wait-for-creation time out ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) -* resource/aws_s3_bucket_lifecycle_configuration: Prevent `MalformedXML` errors when handling diffs in `rule.filter` ([#23893](https://github.com/hashicorp/terraform-provider-aws/issues/23893)) - -## 4.8.0 (March 25, 2022) - -FEATURES: - -* **New Data Source:** `aws_mskconnect_connector` ([#23544](https://github.com/hashicorp/terraform-provider-aws/issues/23544)) -* **New Resource:** `aws_mskconnect_connector` ([#23544](https://github.com/hashicorp/terraform-provider-aws/issues/23544)) - -ENHANCEMENTS: - -* data-source/aws_eips: Set `public_ips` for VPC as well as EC2 Classic ([#23859](https://github.com/hashicorp/terraform-provider-aws/issues/23859)) -* data-source/aws_elasticache_cluster: Add `log_delivery_configuration` attribute ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) -* data-source/aws_elasticache_replication_group: Add `log_delivery_configuration` attribute ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) -* data-source/aws_elasticsearch_domain: Add `cold_storage_options` attribute to the `cluster_config` configuration block ([#19713](https://github.com/hashicorp/terraform-provider-aws/issues/19713)) -* data-source/aws_lambda_function: Add `ephemeral_storage` attribute ([#23873](https://github.com/hashicorp/terraform-provider-aws/issues/23873)) -* resource/aws_elasticache_cluster: Add `log_delivery_configuration` argument ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) -* resource/aws_elasticache_replication_group: Add 
`log_delivery_configuration` argument ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) -* resource/aws_elasticsearch_domain: Add `cold_storage_options` argument to the `cluster_config` configuration block ([#19713](https://github.com/hashicorp/terraform-provider-aws/issues/19713)) -* resource/aws_elasticsearch_domain: Add configurable Create and Delete timeouts ([#19713](https://github.com/hashicorp/terraform-provider-aws/issues/19713)) -* resource/aws_lambda_function: Add `ephemeral_storage` argument ([#23873](https://github.com/hashicorp/terraform-provider-aws/issues/23873)) -* resource/aws_lambda_function: Add error handling for `ResourceConflictException` errors on create and update ([#23879](https://github.com/hashicorp/terraform-provider-aws/issues/23879)) -* resource/aws_mskconnect_custom_plugin: Implement resource Delete ([#23544](https://github.com/hashicorp/terraform-provider-aws/issues/23544)) -* resource/aws_mwaa_environment: Add `schedulers` argument ([#21941](https://github.com/hashicorp/terraform-provider-aws/issues/21941)) -* resource/aws_network_firewall_policy: Allow use of managed rule group ARNs for Network Firewall managed rule groups. ([#22355](https://github.com/hashicorp/terraform-provider-aws/issues/22355)) - -BUG FIXES: - -* resource/aws_autoscaling_group: Fix issue where group was not recreated if `initial_lifecycle_hook` changed ([#20708](https://github.com/hashicorp/terraform-provider-aws/issues/20708)) -* resource/aws_cloudfront_distribution: Fix default value of `origin_path` in `origin` block ([#20709](https://github.com/hashicorp/terraform-provider-aws/issues/20709)) -* resource/aws_cloudwatch_event_target: Fix setting `path_parameter_values`. 
([#23862](https://github.com/hashicorp/terraform-provider-aws/issues/23862)) - -## 4.7.0 (March 24, 2022) - -FEATURES: - -* **New Data Source:** `aws_cloudwatch_event_bus` ([#23792](https://github.com/hashicorp/terraform-provider-aws/issues/23792)) -* **New Data Source:** `aws_imagebuilder_image_pipelines` ([#23741](https://github.com/hashicorp/terraform-provider-aws/issues/23741)) -* **New Data Source:** `aws_memorydb_parameter_group` ([#23814](https://github.com/hashicorp/terraform-provider-aws/issues/23814)) -* **New Data Source:** `aws_route53_traffic_policy_document` ([#23602](https://github.com/hashicorp/terraform-provider-aws/issues/23602)) -* **New Resource:** `aws_cognito_user_in_group` ([#23765](https://github.com/hashicorp/terraform-provider-aws/issues/23765)) -* **New Resource:** `aws_keyspaces_keyspace` ([#23770](https://github.com/hashicorp/terraform-provider-aws/issues/23770)) -* **New Resource:** `aws_route53_traffic_policy` ([#23602](https://github.com/hashicorp/terraform-provider-aws/issues/23602)) -* **New Resource:** `aws_route53_traffic_policy_instance` ([#23602](https://github.com/hashicorp/terraform-provider-aws/issues/23602)) - -ENHANCEMENTS: - -* data-source/aws_imagebuilder_distribution_configuration: Add `organization_arns` and `organizational_unit_arns` attributes to the `distribution.ami_distribution_configuration.launch_permission` configuration block ([#22104](https://github.com/hashicorp/terraform-provider-aws/issues/22104)) -* data-source/aws_msk_cluster: Add `zookeeper_connect_string_tls` attribute ([#23804](https://github.com/hashicorp/terraform-provider-aws/issues/23804)) -* data-source/aws_ssm_document: Support `TEXT` `document_format` ([#23757](https://github.com/hashicorp/terraform-provider-aws/issues/23757)) -* resource/aws_api_gateway_stage: Add `canary_settings` argument. 
([#23754](https://github.com/hashicorp/terraform-provider-aws/issues/23754)) -* resource/aws_athena_workgroup: Add `acl_configuration` and `expected_bucket_owner` arguments to the `configuration.result_configuration` block ([#23748](https://github.com/hashicorp/terraform-provider-aws/issues/23748)) -* resource/aws_autoscaling_group: Add `instance_reuse_policy` argument to support [Warm Pool scale-in](https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-ec2-auto-scaling-warm-pools-supports-hibernating-returning-instances-warm-pools-scale-in/) ([#23769](https://github.com/hashicorp/terraform-provider-aws/issues/23769)) -* resource/aws_autoscaling_group: Update documentation to include [Warm Pool hibernation](https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-ec2-auto-scaling-warm-pools-supports-hibernating-returning-instances-warm-pools-scale-in/) ([#23772](https://github.com/hashicorp/terraform-provider-aws/issues/23772)) -* resource/aws_cloudformation_stack_set_instance: Add `operation_preferences` argument ([#23666](https://github.com/hashicorp/terraform-provider-aws/issues/23666)) -* resource/aws_cloudwatch_log_subscription_filter: Add plan time validations for `name`, `destination_arn`, `filter_pattern`, `role_arn`, `distribution`. 
([#23760](https://github.com/hashicorp/terraform-provider-aws/issues/23760)) -* resource/aws_glue_schema: Update documentation to include [Protobuf data format support](https://aws.amazon.com/about-aws/whats-new/2022/02/aws-glue-schema-registry-protocol-buffers) ([#23815](https://github.com/hashicorp/terraform-provider-aws/issues/23815)) -* resource/aws_imagebuilder_distribution_configuration: Add `organization_arns` and `organizational_unit_arns` arguments to the `distribution.ami_distribution_configuration.launch_permission` configuration block ([#22104](https://github.com/hashicorp/terraform-provider-aws/issues/22104)) -* resource/aws_instance: Add `user_data_replace_on_change` attribute ([#23604](https://github.com/hashicorp/terraform-provider-aws/issues/23604)) -* resource/aws_ssm_maintenance_window_task: Add `arn` and `window_task_id` attributes. ([#23756](https://github.com/hashicorp/terraform-provider-aws/issues/23756)) -* resource/aws_ssm_maintenance_window_task: Add `cutoff_behavior` argument. ([#23756](https://github.com/hashicorp/terraform-provider-aws/issues/23756)) - -BUG FIXES: - -* data-source/aws_ssm_document: Don't generate `arn` for AWS-managed documents. ([#23757](https://github.com/hashicorp/terraform-provider-aws/issues/23757)) -* resource/aws_ecs_service: Ensure that `load_balancer` and `service_registries` can be updated in-place ([#23786](https://github.com/hashicorp/terraform-provider-aws/issues/23786)) -* resource/aws_launch_template: Fix `network_interfaces.device_index` and `network_interfaces.network_card_index` of `0` not being set ([#23767](https://github.com/hashicorp/terraform-provider-aws/issues/23767)) -* resource/aws_ssm_maintenance_window_task: Allow creating a window task without targets. 
([#23756](https://github.com/hashicorp/terraform-provider-aws/issues/23756))

## 4.6.0 (March 18, 2022)

FEATURES:

* **New Data Source:** `aws_networkmanager_connection` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_connections` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_device` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_devices` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_global_network` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_global_networks` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_link` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_links` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_site` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Data Source:** `aws_networkmanager_sites` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_gamelift_game_server_group` ([#23606](https://github.com/hashicorp/terraform-provider-aws/issues/23606))
* **New Resource:** `aws_networkmanager_connection` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_customer_gateway_association` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_device` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_global_network` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_link` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_link_association` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_site` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_transit_gateway_connect_peer_association` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_networkmanager_transit_gateway_registration` ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* **New Resource:** `aws_vpc_endpoint_security_group_association` ([#13737](https://github.com/hashicorp/terraform-provider-aws/issues/13737))

ENHANCEMENTS:

* data-source/aws_ec2_transit_gateway_connect_peer: Add `arn` attribute ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* data-source/aws_imagebuilder_image: Add `container_recipe_arn` attribute ([#23647](https://github.com/hashicorp/terraform-provider-aws/issues/23647))
* data-source/aws_launch_template: Add `capacity_reservation_resource_group_arn` attribute to the `capacity_reservation_specification.capacity_reservation_target` configuration block ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* data-source/aws_launch_template: Add `capacity_reservation_specification`, `cpu_options`, `elastic_inference_accelerator` and `license_specification` attributes ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* data-source/aws_launch_template: Add `ipv4_prefixes`, `ipv4_prefix_count`, `ipv6_prefixes` and `ipv6_prefix_count` attributes to the `network_interfaces` configuration block ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* data-source/aws_launch_template: Add `private_dns_name_options` attribute ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* data-source/aws_redshift_cluster: Add `availability_zone_relocation_enabled` attribute ([#20812](https://github.com/hashicorp/terraform-provider-aws/issues/20812))
* resource/aws_appconfig_configuration_profile: Add `type` argument to support [AWS AppConfig Feature Flags](https://aws.amazon.com/blogs/mt/using-aws-appconfig-feature-flags/) ([#23719](https://github.com/hashicorp/terraform-provider-aws/issues/23719))
* resource/aws_athena_database: Add `acl_configuration` and `expected_bucket_owner` arguments ([#23745](https://github.com/hashicorp/terraform-provider-aws/issues/23745))
* resource/aws_athena_database: Add `comment` argument to support database descriptions ([#23745](https://github.com/hashicorp/terraform-provider-aws/issues/23745))
* resource/aws_athena_database: Do not recreate the resource if `bucket` changes ([#23745](https://github.com/hashicorp/terraform-provider-aws/issues/23745))
* resource/aws_cloud9_environment_ec2: Add `connection_type` and `image_id` arguments ([#19195](https://github.com/hashicorp/terraform-provider-aws/issues/19195))
* resource/aws_cloudformation_stack_set_instance: Add `call_as` argument ([#23339](https://github.com/hashicorp/terraform-provider-aws/issues/23339))
* resource/aws_dms_replication_task: Add optional `start_replication_task` and `status` arguments ([#23692](https://github.com/hashicorp/terraform-provider-aws/issues/23692))
* resource/aws_ec2_transit_gateway_connect_peer: Add `arn` attribute ([#13251](https://github.com/hashicorp/terraform-provider-aws/issues/13251))
* resource/aws_ecs_service: `enable_ecs_managed_tags`, `load_balancer`, `propagate_tags` and `service_registries` can now be updated in-place ([#23600](https://github.com/hashicorp/terraform-provider-aws/issues/23600))
* resource/aws_imagebuilder_image: Add `container_recipe_arn` argument ([#23647](https://github.com/hashicorp/terraform-provider-aws/issues/23647))
* resource/aws_iot_certificate: Add `ca_pem` argument, enabling the use of existing IoT certificates ([#23126](https://github.com/hashicorp/terraform-provider-aws/issues/23126))
* resource/aws_iot_topic_rule: Add `cloudwatch_logs` and `error_action.cloudwatch_logs` arguments ([#23440](https://github.com/hashicorp/terraform-provider-aws/issues/23440))
* resource/aws_launch_configuration: Add `ephemeral_block_device.no_device` argument ([#23152](https://github.com/hashicorp/terraform-provider-aws/issues/23152))
* resource/aws_launch_template: Add `capacity_reservation_resource_group_arn` argument to the `capacity_reservation_specification.capacity_reservation_target` configuration block ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* resource/aws_launch_template: Add `ipv4_prefixes`, `ipv4_prefix_count`, `ipv6_prefixes` and `ipv6_prefix_count` arguments to the `network_interfaces` configuration block ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* resource/aws_launch_template: Add `private_dns_name_options` argument ([#23365](https://github.com/hashicorp/terraform-provider-aws/issues/23365))
* resource/aws_msk_configuration: Correctly set `latest_revision` as Computed when `server_properties` changes ([#23662](https://github.com/hashicorp/terraform-provider-aws/issues/23662))
* resource/aws_quicksight_user: Allow custom values for `namespace` ([#23607](https://github.com/hashicorp/terraform-provider-aws/issues/23607))
* resource/aws_rds_cluster: Add `db_cluster_instance_class`, `allocated_storage`, `storage_type`, and `iops` arguments to support [Multi-AZ deployments for MySQL & PostgreSQL](https://aws.amazon.com/blogs/aws/amazon-rds-multi-az-db-cluster/) ([#23684](https://github.com/hashicorp/terraform-provider-aws/issues/23684))
* resource/aws_rds_global_cluster: Add configurable timeouts ([#23560](https://github.com/hashicorp/terraform-provider-aws/issues/23560))
* resource/aws_rds_instance: Add `source_db_instance_automated_backup_arn` option within `restore_to_point_in_time` attribute ([#23086](https://github.com/hashicorp/terraform-provider-aws/issues/23086))
* resource/aws_redshift_cluster: Add `availability_zone_relocation_enabled` attribute and allow `availability_zone` to be changed in-place ([#20812](https://github.com/hashicorp/terraform-provider-aws/issues/20812))
* resource/aws_transfer_server: Add `pre_authentication_login_banner` and `post_authentication_login_banner` arguments ([#23631](https://github.com/hashicorp/terraform-provider-aws/issues/23631))
* resource/aws_vpc_endpoint: The `security_group_ids` attribute can now be empty when the resource is created. In this case the VPC's default security group is associated with the VPC endpoint ([#13737](https://github.com/hashicorp/terraform-provider-aws/issues/13737))

BUG FIXES:

* resource/aws_amplify_app: Allow `repository` to be updated in-place ([#23517](https://github.com/hashicorp/terraform-provider-aws/issues/23517))
* resource/aws_api_gateway_stage: Fix issue where providing `cache_cluster_size` without `cache_cluster_enabled` resulted in a waiter error ([#23091](https://github.com/hashicorp/terraform-provider-aws/issues/23091))
* resource/aws_athena_database: Remove from state on resource Read if deleted outside of Terraform ([#23745](https://github.com/hashicorp/terraform-provider-aws/issues/23745))
* resource/aws_cloudformation_stack_set: Use `call_as` attribute when reading stack sets, fixing an error raised when using a delegated administrator account ([#23339](https://github.com/hashicorp/terraform-provider-aws/issues/23339))
* resource/aws_cloudsearch_domain: Set correct defaults for `index_field.facet`, `index_field.highlight`, `index_field.return`, `index_field.search` and `index_field.sort`, preventing spurious resource diffs ([#23687](https://github.com/hashicorp/terraform-provider-aws/issues/23687))
* resource/aws_db_instance: Fix issues where the configured update timeout was not respected, and where an update would fail if the instance was in the process of being configured ([#23560](https://github.com/hashicorp/terraform-provider-aws/issues/23560))
* resource/aws_rds_event_subscription: Fix issue where `enabled` was sometimes not updated ([#23560](https://github.com/hashicorp/terraform-provider-aws/issues/23560))
* resource/aws_rds_global_cluster: Fix ability to perform cluster version upgrades, including for clusters in distinct Regions, which previously failed with the error "Invalid database cluster identifier" ([#23560](https://github.com/hashicorp/terraform-provider-aws/issues/23560))
* resource/aws_route53domains_registered_domain: Redirect all Route 53 Domains AWS API calls to the `us-east-1` Region ([#23672](https://github.com/hashicorp/terraform-provider-aws/issues/23672))
* resource/aws_s3_bucket_acl: Fix resource import for S3 bucket names consisting of uppercase letters, underscores, and a maximum of 255 characters ([#23678](https://github.com/hashicorp/terraform-provider-aws/issues/23678))
* resource/aws_s3_bucket_lifecycle_configuration: Support empty string filtering (default behavior of the `aws_s3_bucket.lifecycle_rule` parameter in provider versions prior to v4.0) ([#23746](https://github.com/hashicorp/terraform-provider-aws/issues/23746))
* resource/aws_s3_bucket_replication_configuration: Change `rule` configuration block to list instead of set ([#23703](https://github.com/hashicorp/terraform-provider-aws/issues/23703))
* resource/aws_s3_bucket_replication_configuration: Set `rule.id` as Computed to prevent drift when the value is not configured ([#23703](https://github.com/hashicorp/terraform-provider-aws/issues/23703))
* resource/aws_s3_bucket_versioning: Add missing support
for `Disabled` bucket versioning ([#23723](https://github.com/hashicorp/terraform-provider-aws/issues/23723))

## 4.5.0 (March 11, 2022)

ENHANCEMENTS:

* resource/aws_account_alternate_contact: Add configurable timeouts ([#23516](https://github.com/hashicorp/terraform-provider-aws/issues/23516))
* resource/aws_s3_bucket: Add error handling for `NotImplemented` errors when reading `object_lock_enabled` and `object_lock_configuration` into terraform state ([#13366](https://github.com/hashicorp/terraform-provider-aws/issues/13366))
* resource/aws_s3_bucket: Add top-level `object_lock_enabled` parameter ([#23556](https://github.com/hashicorp/terraform-provider-aws/issues/23556))
* resource/aws_s3_bucket_replication_configuration: Add `token` field to specify x-amz-bucket-object-lock-token for enabling replication on object lock enabled buckets or enabling object lock on an existing bucket ([#23624](https://github.com/hashicorp/terraform-provider-aws/issues/23624))
* resource/aws_servicecatalog_budget_resource_association: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_constraint: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_organizations_access: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_portfolio: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_portfolio_share: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_principal_portfolio_association: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_product: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_product_portfolio_association: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_provisioned_product: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_provisioning_artifact: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_service_action: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_tag_option: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_servicecatalog_tag_option_resource_association: Add configurable timeouts ([#23518](https://github.com/hashicorp/terraform-provider-aws/issues/23518))
* resource/aws_synthetics_canary: Add optional `environment_variables` to `run_config` ([#23574](https://github.com/hashicorp/terraform-provider-aws/issues/23574))

BUG FIXES:

* resource/aws_account_alternate_contact: Improve eventual consistency handling to avoid "no resource found" on updates ([#23516](https://github.com/hashicorp/terraform-provider-aws/issues/23516))
* resource/aws_imagebuilder_image_recipe: Fix regression in 4.3.0 whereby Windows-based images wouldn't build because of the newly introduced `systems_manager_agent.uninstall_after_build` argument ([#23580](https://github.com/hashicorp/terraform-provider-aws/issues/23580))
* resource/aws_kms_external_key: Increase `tags` eventual consistency timeout from 5 minutes to 10 minutes ([#23593](https://github.com/hashicorp/terraform-provider-aws/issues/23593))
* resource/aws_kms_key: Increase `description` and `tags` eventual consistency timeouts from 5 minutes to 10 minutes ([#23593](https://github.com/hashicorp/terraform-provider-aws/issues/23593))
* resource/aws_kms_replica_external_key: Increase `tags` eventual consistency timeout from 5 minutes to 10 minutes ([#23593](https://github.com/hashicorp/terraform-provider-aws/issues/23593))
* resource/aws_kms_replica_key: Increase `tags` eventual consistency timeout from 5 minutes to 10 minutes ([#23593](https://github.com/hashicorp/terraform-provider-aws/issues/23593))
* resource/aws_s3_bucket_lifecycle_configuration: Correctly configure `rule.filter.object_size_greater_than` and `rule.filter.object_size_less_than` in API requests and terraform state ([#23441](https://github.com/hashicorp/terraform-provider-aws/issues/23441))
* resource/aws_s3_bucket_lifecycle_configuration: Prevent drift when `rule.noncurrent_version_expiration.newer_noncurrent_versions` or `rule.noncurrent_version_transition.newer_noncurrent_versions` is not specified ([#23441](https://github.com/hashicorp/terraform-provider-aws/issues/23441))
* resource/aws_s3_bucket_replication_configuration: Correctly configure empty `rule.filter` configuration block in API requests ([#23586](https://github.com/hashicorp/terraform-provider-aws/issues/23586))
* resource/aws_s3_bucket_replication_configuration: Ensure both `key` and `value` arguments of the `rule.filter.tag` configuration block are correctly populated in the outgoing API request and terraform state ([#23579](https://github.com/hashicorp/terraform-provider-aws/issues/23579))
* resource/aws_s3_bucket_replication_configuration: Prevent inconsistent final plan when `rule.filter.prefix` is an empty string ([#23586](https://github.com/hashicorp/terraform-provider-aws/issues/23586))

## 4.4.0 (March 04, 2022)

FEATURES:

* **New Data Source:** `aws_connect_queue` ([#22768](https://github.com/hashicorp/terraform-provider-aws/issues/22768))
* **New Data Source:** `aws_ec2_serial_console_access` ([#23443](https://github.com/hashicorp/terraform-provider-aws/issues/23443))
* **New Data Source:** `aws_ec2_transit_gateway_connect` ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* **New Data Source:** `aws_ec2_transit_gateway_connect_peer` ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* **New Resource:** `aws_apprunner_vpc_connector` ([#23173](https://github.com/hashicorp/terraform-provider-aws/issues/23173))
* **New Resource:** `aws_connect_routing_profile` ([#22813](https://github.com/hashicorp/terraform-provider-aws/issues/22813))
* **New Resource:** `aws_connect_user_hierarchy_structure` ([#22836](https://github.com/hashicorp/terraform-provider-aws/issues/22836))
* **New Resource:** `aws_ec2_network_insights_path` ([#23330](https://github.com/hashicorp/terraform-provider-aws/issues/23330))
* **New Resource:** `aws_ec2_serial_console_access` ([#23443](https://github.com/hashicorp/terraform-provider-aws/issues/23443))
* **New Resource:** `aws_ec2_transit_gateway_connect` ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* **New Resource:** `aws_ec2_transit_gateway_connect_peer` ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* **New Resource:** `aws_grafana_license_association` ([#23401](https://github.com/hashicorp/terraform-provider-aws/issues/23401))
* **New Resource:** `aws_route53domains_registered_domain`
([#12711](https://github.com/hashicorp/terraform-provider-aws/issues/12711))

ENHANCEMENTS:

* data-source/aws_ec2_transit_gateway: Add `transit_gateway_cidr_blocks` attribute ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* data-source/aws_eks_node_group: Add `taints` attribute ([#23452](https://github.com/hashicorp/terraform-provider-aws/issues/23452))
* resource/aws_apprunner_service: Add `network_configuration` argument ([#23173](https://github.com/hashicorp/terraform-provider-aws/issues/23173))
* resource/aws_cloudwatch_metric_alarm: Additional allowed values for `extended_statistic` and `metric_query.metric.stat` arguments ([#22942](https://github.com/hashicorp/terraform-provider-aws/issues/22942))
* resource/aws_ec2_transit_gateway: Add [custom `timeouts`](https://www.terraform.io/docs/language/resources/syntax.html#operation-timeouts) block ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* resource/aws_ec2_transit_gateway: Add `transit_gateway_cidr_blocks` argument ([#22181](https://github.com/hashicorp/terraform-provider-aws/issues/22181))
* resource/aws_eks_cluster: Retry when `ResourceInUseException` errors are returned from the AWS API during resource deletion ([#23366](https://github.com/hashicorp/terraform-provider-aws/issues/23366))
* resource/aws_glue_job: Add support for [streaming jobs](https://docs.aws.amazon.com/glue/latest/dg/add-job-streaming.html) by removing the default value for the `timeout` argument and marking it as Computed ([#23275](https://github.com/hashicorp/terraform-provider-aws/issues/23275))
* resource/aws_lambda_function: Add support for `dotnet6` `runtime` value ([#23426](https://github.com/hashicorp/terraform-provider-aws/issues/23426))
* resource/aws_lambda_layer_version: Add support for `dotnet6` `compatible_runtimes` value ([#23426](https://github.com/hashicorp/terraform-provider-aws/issues/23426))
* resource/aws_route: `nat_gateway_id` target no longer conflicts with `destination_ipv6_cidr_block` ([#23427](https://github.com/hashicorp/terraform-provider-aws/issues/23427))

BUG FIXES:

* resource/aws_dms_endpoint: Fix bug where KMS key was ignored for DynamoDB, OpenSearch, Kafka, Kinesis, Oracle, PostgreSQL, and S3 engines ([#23444](https://github.com/hashicorp/terraform-provider-aws/issues/23444))
* resource/aws_networkfirewall_rule_group: Allow any character in `source` and `destination` `rule_group.rules_source.stateful_rule.header` arguments as per the AWS API docs ([#22727](https://github.com/hashicorp/terraform-provider-aws/issues/22727))
* resource/aws_opsworks_application: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_custom_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_ecs_cluster_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_ganglia_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_haproxy_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_instance: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_java_app_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_memcached_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_mysql_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_nodejs_app_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_php_app_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_rails_app_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_rds_db_instance: Correctly remove from state in certain deletion situations ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_stack: Fix error reported on successful deletion, lack of eventual consistency wait ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_static_web_layer: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_opsworks_user_profile: Fix error reported on successful deletion ([#23397](https://github.com/hashicorp/terraform-provider-aws/issues/23397))
* resource/aws_route53_resolver_firewall_domain_list: Remove limit for number of `domains` ([#23485](https://github.com/hashicorp/terraform-provider-aws/issues/23485))
* resource/aws_synthetics_canary: Retry canary creation if it fails because of IAM propagation ([#23394](https://github.com/hashicorp/terraform-provider-aws/issues/23394))

## 4.3.0 (February 28, 2022)

NOTES:

* resource/aws_internet_gateway: Set `vpc_id` as Computed to prevent drift when the `aws_internet_gateway_attachment` resource is used ([#16386](https://github.com/hashicorp/terraform-provider-aws/issues/16386))
* resource/aws_s3_bucket_lifecycle_configuration: The `prefix` argument of the `rule` configuration block has been deprecated.
Use the `filter` configuration block instead. ([#23325](https://github.com/hashicorp/terraform-provider-aws/issues/23325))

FEATURES:

* **New Data Source:** `aws_ec2_transit_gateway_multicast_domain` ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* **New Data Source:** `aws_ec2_transit_gateway_vpc_attachments` ([#12409](https://github.com/hashicorp/terraform-provider-aws/issues/12409))
* **New Resource:** `aws_ec2_transit_gateway_multicast_domain` ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* **New Resource:** `aws_ec2_transit_gateway_multicast_domain_association` ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* **New Resource:** `aws_ec2_transit_gateway_multicast_group_member` ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* **New Resource:** `aws_ec2_transit_gateway_multicast_group_source` ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* **New Resource:** `aws_internet_gateway_attachment` ([#16386](https://github.com/hashicorp/terraform-provider-aws/issues/16386))
* **New Resource:** `aws_opsworks_ecs_cluster_layer` ([#12495](https://github.com/hashicorp/terraform-provider-aws/issues/12495))
* **New Resource:** `aws_vpc_endpoint_policy` ([#17039](https://github.com/hashicorp/terraform-provider-aws/issues/17039))

ENHANCEMENTS:

* data-source/aws_ec2_transit_gateway: Add `multicast_support` attribute ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* provider: Improves error message when `Profile` and static credential environment variables are set ([#23388](https://github.com/hashicorp/terraform-provider-aws/issues/23388))
* provider: Makes `region` an optional parameter to allow sourcing from shared config files and IMDS ([#23384](https://github.com/hashicorp/terraform-provider-aws/issues/23384))
* provider: Retrieves region from IMDS when credentials retrieved from IMDS ([#23388](https://github.com/hashicorp/terraform-provider-aws/issues/23388))
* resource/aws_connect_queue: The `quick_connect_ids` argument can now be updated in-place ([#22821](https://github.com/hashicorp/terraform-provider-aws/issues/22821))
* resource/aws_connect_security_profile: Add `permissions` attribute ([#22761](https://github.com/hashicorp/terraform-provider-aws/issues/22761))
* resource/aws_ec2_fleet: Add `context` argument ([#23304](https://github.com/hashicorp/terraform-provider-aws/issues/23304))
* resource/aws_ec2_transit_gateway: Add `multicast_support` argument ([#22756](https://github.com/hashicorp/terraform-provider-aws/issues/22756))
* resource/aws_imagebuilder_image_pipeline: Add `schedule.timezone` argument ([#23322](https://github.com/hashicorp/terraform-provider-aws/issues/23322))
* resource/aws_imagebuilder_image_recipe: Add `systems_manager_agent.uninstall_after_build` argument ([#23293](https://github.com/hashicorp/terraform-provider-aws/issues/23293))
* resource/aws_instance: Prevent double base64 encoding of `user_data` and `user_data_base64` on update ([#23362](https://github.com/hashicorp/terraform-provider-aws/issues/23362))
* resource/aws_s3_bucket: Add error handling for `NotImplemented` error when reading `logging` into terraform state ([#23398](https://github.com/hashicorp/terraform-provider-aws/issues/23398))
* resource/aws_s3_bucket_object_lock_configuration: Mark `token` argument as sensitive ([#23368](https://github.com/hashicorp/terraform-provider-aws/issues/23368))
* resource/aws_servicecatalog_provisioned_product: Add `outputs` attribute ([#23270](https://github.com/hashicorp/terraform-provider-aws/issues/23270))

BUG FIXES:

* provider: Validates names of named profiles before use ([#23388](https://github.com/hashicorp/terraform-provider-aws/issues/23388))
* resource/aws_dms_replication_task: Allow `cdc_start_position` to be computed ([#23328](https://github.com/hashicorp/terraform-provider-aws/issues/23328))
* resource/aws_ecs_cluster: Fix bug preventing describing clusters in ISO regions ([#23341](https://github.com/hashicorp/terraform-provider-aws/issues/23341))

## 4.2.0 (February 18, 2022)

FEATURES:

* **New Data Source:** `aws_grafana_workspace` ([#22874](https://github.com/hashicorp/terraform-provider-aws/issues/22874))
* **New Data Source:** `aws_iam_openid_connect_provider` ([#23240](https://github.com/hashicorp/terraform-provider-aws/issues/23240))
* **New Data Source:** `aws_ssm_instances` ([#23162](https://github.com/hashicorp/terraform-provider-aws/issues/23162))
* **New Resource:** `aws_cloudtrail_event_data_store` ([#22490](https://github.com/hashicorp/terraform-provider-aws/issues/22490))
* **New Resource:** `aws_grafana_workspace` ([#22874](https://github.com/hashicorp/terraform-provider-aws/issues/22874))

ENHANCEMENTS:

* provider: Add `custom_ca_bundle` argument ([#23279](https://github.com/hashicorp/terraform-provider-aws/issues/23279))
* provider: Add `sts_region` argument ([#23212](https://github.com/hashicorp/terraform-provider-aws/issues/23212))
* provider: Expands environment variables in file paths in provider configuration.
([#23282](https://github.com/hashicorp/terraform-provider-aws/issues/23282))
* provider: Updates list of valid AWS regions ([#23282](https://github.com/hashicorp/terraform-provider-aws/issues/23282))
* resource/aws_dms_endpoint: Add `s3_settings.add_column_name`, `s3_settings.canned_acl_for_objects`, `s3_settings.cdc_inserts_and_updates`, `s3_settings.cdc_inserts_only`, `s3_settings.cdc_max_batch_interval`, `s3_settings.cdc_min_file_size`, `s3_settings.cdc_path`, `s3_settings.csv_no_sup_value`, `s3_settings.csv_null_value`, `s3_settings.data_page_size`, `s3_settings.date_partition_delimiter`, `s3_settings.date_partition_sequence`, `s3_settings.dict_page_size_limit`, `s3_settings.enable_statistics`, `s3_settings.encoding_type`, `s3_settings.ignore_headers_row`, `s3_settings.include_op_for_full_load`, `s3_settings.max_file_size`, `s3_settings.preserve_transactions`, `s3_settings.rfc_4180`, `s3_settings.row_group_length`, `s3_settings.timestamp_column_name`, `s3_settings.use_csv_no_sup_value` arguments ([#20913](https://github.com/hashicorp/terraform-provider-aws/issues/20913))
* resource/aws_elasticache_replication_group: Add plan-time validation to `description` and `replication_group_description` to ensure non-empty strings ([#23254](https://github.com/hashicorp/terraform-provider-aws/issues/23254))
* resource/aws_fms_policy: Add `delete_unused_fm_managed_resources` argument ([#21295](https://github.com/hashicorp/terraform-provider-aws/issues/21295))
* resource/aws_fms_policy: Add `tags` argument and `tags_all` attribute to support resource tagging ([#21299](https://github.com/hashicorp/terraform-provider-aws/issues/21299))
* resource/aws_imagebuilder_image_recipe: Update plan time validation of `block_device_mapping.ebs.kms_key_id`, `block_device_mapping.ebs.snapshot_id`, `block_device_mapping.ebs.volume_type`, `name`, `parent_image` ([#23235](https://github.com/hashicorp/terraform-provider-aws/issues/23235))
* resource/aws_instance: Allow updates to `user_data` and `user_data_base64` without forcing resource replacement ([#18043](https://github.com/hashicorp/terraform-provider-aws/issues/18043))
* resource/aws_s3_bucket: Add error handling for `MethodNotAllowed` and `XNotImplemented` errors when reading `website` into terraform state ([#23278](https://github.com/hashicorp/terraform-provider-aws/issues/23278))
* resource/aws_s3_bucket: Add error handling for `NotImplemented` errors when reading `acceleration_status`, `policy`, or `request_payer` into terraform state ([#23278](https://github.com/hashicorp/terraform-provider-aws/issues/23278))

BUG FIXES:

* provider: Credentials with expiry, such as assuming a role, would not renew ([#23282](https://github.com/hashicorp/terraform-provider-aws/issues/23282))
* provider: Setting a custom CA bundle caused the provider to fail ([#23282](https://github.com/hashicorp/terraform-provider-aws/issues/23282))
* resource/aws_iam_instance_profile: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_iam_openid_connect_provider: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_iam_policy: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_iam_saml_provider: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_iam_server_certificate: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_iam_service_linked_role: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_iam_virtual_mfa_device: Improve tag handling in ISO regions ([#23283](https://github.com/hashicorp/terraform-provider-aws/issues/23283))
* resource/aws_s3_bucket_lifecycle_configuration: Ensure both `key` and `value` arguments of the `filter` `tag` configuration block are correctly populated in the outgoing API request and terraform state ([#23252](https://github.com/hashicorp/terraform-provider-aws/issues/23252))
* resource/aws_s3_bucket_lifecycle_configuration: Prevent non-empty plans when `filter` is an empty configuration block ([#23232](https://github.com/hashicorp/terraform-provider-aws/issues/23232))

## 4.1.0 (February 15, 2022)

FEATURES:

* **New Data Source:** `aws_backup_framework` ([#23193](https://github.com/hashicorp/terraform-provider-aws/issues/23193))
* **New Data Source:** `aws_backup_report_plan` ([#23146](https://github.com/hashicorp/terraform-provider-aws/issues/23146))
* **New Data Source:** `aws_imagebuilder_container_recipe` ([#23040](https://github.com/hashicorp/terraform-provider-aws/issues/23040))
* **New Data Source:** `aws_imagebuilder_container_recipes` ([#23134](https://github.com/hashicorp/terraform-provider-aws/issues/23134))
* **New Data Source:** `aws_service` ([#16640](https://github.com/hashicorp/terraform-provider-aws/issues/16640))
* **New Resource:** `aws_backup_framework` ([#23175](https://github.com/hashicorp/terraform-provider-aws/issues/23175))
* **New Resource:** `aws_backup_report_plan` ([#23098](https://github.com/hashicorp/terraform-provider-aws/issues/23098))
* **New Resource:** `aws_gamelift_script` ([#11560](https://github.com/hashicorp/terraform-provider-aws/issues/11560))
* **New Resource:** `aws_iam_service_specific_credential` ([#16185](https://github.com/hashicorp/terraform-provider-aws/issues/16185))
* **New Resource:** `aws_iam_signing_certificate` ([#23161](https://github.com/hashicorp/terraform-provider-aws/issues/23161))
* **New Resource:** `aws_iam_virtual_mfa_device`
([#23113](https://github.com/hashicorp/terraform-provider-aws/issues/23113)) -* **New Resource:** `aws_imagebuilder_container_recipe` ([#22965](https://github.com/hashicorp/terraform-provider-aws/issues/22965)) - -ENHANCEMENTS: - -* data-source/aws_imagebuilder_image_pipeline: Add `container_recipe_arn` attribute ([#23111](https://github.com/hashicorp/terraform-provider-aws/issues/23111)) -* data-source/aws_kms_public_key: Add `public_key_pem` attribute ([#23130](https://github.com/hashicorp/terraform-provider-aws/issues/23130)) -* resource/aws_api_gateway_authorizer: Add `arn` attribute. ([#23151](https://github.com/hashicorp/terraform-provider-aws/issues/23151)) -* resource/aws_autoscaling_group: Disable scale-in protection before draining instances ([#23187](https://github.com/hashicorp/terraform-provider-aws/issues/23187)) -* resource/aws_cloudformation_stack_set: Add `call_as` argument ([#22440](https://github.com/hashicorp/terraform-provider-aws/issues/22440)) -* resource/aws_elastic_transcoder_preset: Add plan time validations to `audio.audio_packing_mode`, `audio.channels`, -`audio.codec`,`audio.sample_rate`, `audio_codec_options.bit_depth`, `audio_codec_options.bit_order`, -`audio_codec_options.profile`, `audio_codec_options.signed`, `audio_codec_options.signed`, -`container`, `thumbnails.aspect_ratio`, `thumbnails.format`, `thumbnails.padding_policy`, `thumbnails.sizing_policy`, -`type`, `video.aspect_ratio`, `video.codec`, `video.display_aspect_ratio`, `video.fixed_gop`, `video.frame_rate`, `video.max_frame_rate`, `video.padding_policy`, `video.sizing_policy`, `video_watermarks.horizontal_align`, -`video_watermarks.id`, `video_watermarks.sizing_policy`, `video_watermarks.target`, `video_watermarks.vertical_align` ([#13974](https://github.com/hashicorp/terraform-provider-aws/issues/13974)) -* resource/aws_elastic_transcoder_preset: Allow `audio.bit_rate` to be computed. 
([#13974](https://github.com/hashicorp/terraform-provider-aws/issues/13974)) -* resource/aws_gamelift_build: Add `object_version` argument to `storage_location` block. ([#22966](https://github.com/hashicorp/terraform-provider-aws/issues/22966)) -* resource/aws_gamelift_build: Add import support ([#22966](https://github.com/hashicorp/terraform-provider-aws/issues/22966)) -* resource/aws_gamelift_fleet: Add `certificate_configuration` argument ([#22967](https://github.com/hashicorp/terraform-provider-aws/issues/22967)) -* resource/aws_gamelift_fleet: Add import support ([#22967](https://github.com/hashicorp/terraform-provider-aws/issues/22967)) -* resource/aws_gamelift_fleet: Add plan time validation to `ec2_instance_type` ([#22967](https://github.com/hashicorp/terraform-provider-aws/issues/22967)) -* resource/aws_gamelift_fleet: Add `script_arn` attribute. ([#11560](https://github.com/hashicorp/terraform-provider-aws/issues/11560)) -* resource/aws_gamelift_fleet: Add `script_id` argument. ([#11560](https://github.com/hashicorp/terraform-provider-aws/issues/11560)) -* resource/aws_glue_catalog_database: Add support for the `create_table_default_permission` argument ([#22964](https://github.com/hashicorp/terraform-provider-aws/issues/22964)) -* resource/aws_glue_trigger: Add `event_batching_condition` argument.
([#22963](https://github.com/hashicorp/terraform-provider-aws/issues/22963)) -* resource/aws_iam_user_login_profile: Make `pgp_key` optional ([#12384](https://github.com/hashicorp/terraform-provider-aws/issues/12384)) -* resource/aws_imagebuilder_image_pipeline: Add `container_recipe_arn` argument ([#23111](https://github.com/hashicorp/terraform-provider-aws/issues/23111)) -* resource/aws_prometheus_workspace: Add `tags` argument and `tags_all` attribute to support resource tagging ([#23202](https://github.com/hashicorp/terraform-provider-aws/issues/23202)) -* resource/aws_ssm_association: Add `arn` attribute ([#17732](https://github.com/hashicorp/terraform-provider-aws/issues/17732)) -* resource/aws_ssm_association: Add `wait_for_success_timeout_seconds` argument ([#17732](https://github.com/hashicorp/terraform-provider-aws/issues/17732)) -* resource/aws_ssm_association: Add plan time validation to `association_name`, `document_version`, `schedule_expression`, `output_location.s3_bucket_name`, `output_location.s3_key_prefix`, `targets.key`, `targets.values`, `automation_target_parameter_name` ([#17732](https://github.com/hashicorp/terraform-provider-aws/issues/17732)) - -BUG FIXES: - -* data-source/aws_vpc_ipam_pool: Raise an error if no matching pool is found ([#23195](https://github.com/hashicorp/terraform-provider-aws/issues/23195)) -* provider: Support `ap-northeast-3`, `ap-southeast-3` and `us-iso-west-1` as valid AWS Regions ([#23191](https://github.com/hashicorp/terraform-provider-aws/issues/23191)) -* provider: Use AWS HTTP client which allows IMDS authentication in container environments and custom RootCAs in ISO regions ([#23191](https://github.com/hashicorp/terraform-provider-aws/issues/23191)) -* resource/aws_appmesh_route: Handle zero `max_retries` ([#23035](https://github.com/hashicorp/terraform-provider-aws/issues/23035)) -* resource/aws_elastic_transcoder_preset: Allow `video_codec_options` to be empty.
([#13974](https://github.com/hashicorp/terraform-provider-aws/issues/13974)) -* resource/aws_rds_cluster: Fix crash when configured `engine_version` string is shorter than the `EngineVersion` string returned from the AWS API ([#23039](https://github.com/hashicorp/terraform-provider-aws/issues/23039)) -* resource/aws_s3_bucket_lifecycle_configuration: Correctly handle the `days` value of the `rule` `transition` configuration block when set to `0` ([#23120](https://github.com/hashicorp/terraform-provider-aws/issues/23120)) -* resource/aws_s3_bucket_lifecycle_configuration: Fix extraneous diffs especially after import ([#23144](https://github.com/hashicorp/terraform-provider-aws/issues/23144)) -* resource/aws_sagemaker_endpoint_configuration: Check arguments for emptiness and allow omitting `async_inference_config.kms_key_id`. ([#22960](https://github.com/hashicorp/terraform-provider-aws/issues/22960)) -* resource/aws_vpn_connection: Add support for `ipsec.1-aes256` connection type ([#23127](https://github.com/hashicorp/terraform-provider-aws/issues/23127)) - -## 4.0.0 (February 10, 2022) - -BREAKING CHANGES: - -* data-source/aws_connect_hours_of_operation: The hours_of_operation_arn attribute is renamed to arn ([#22375](https://github.com/hashicorp/terraform-provider-aws/issues/22375)) -* resource/aws_batch_compute_environment: No `compute_resources` configuration block can be specified when `type` is `UNMANAGED` ([#22805](https://github.com/hashicorp/terraform-provider-aws/issues/22805)) -* resource/aws_cloudwatch_event_target: The `ecs_target` `launch_type` argument no longer has a default value (previously was `EC2`) ([#22803](https://github.com/hashicorp/terraform-provider-aws/issues/22803)) -* resource/aws_cloudwatch_event_target: `ecs_target.0.launch_type` can no longer be set to `""`; instead, remove or set to `null` ([#22954](https://github.com/hashicorp/terraform-provider-aws/issues/22954)) -* resource/aws_connect_hours_of_operation: The
hours_of_operation_arn attribute is renamed to arn ([#22375](https://github.com/hashicorp/terraform-provider-aws/issues/22375)) -* resource/aws_default_network_acl: These arguments can no longer be set to `""`: `egress.*.cidr_block`, `egress.*.ipv6_cidr_block`, `ingress.*.cidr_block`, or `ingress.*.ipv6_cidr_block` ([#22928](https://github.com/hashicorp/terraform-provider-aws/issues/22928)) -* resource/aws_default_route_table: These arguments can no longer be set to `""`: `route.*.cidr_block`, `route.*.ipv6_cidr_block` ([#22931](https://github.com/hashicorp/terraform-provider-aws/issues/22931)) -* resource/aws_default_vpc: `ipv6_cidr_block` can no longer be set to `""`; remove or set to `null` ([#22948](https://github.com/hashicorp/terraform-provider-aws/issues/22948)) -* resource/aws_efs_mount_target: `ip_address` can no longer be set to `""`; instead, remove or set to `null` ([#22954](https://github.com/hashicorp/terraform-provider-aws/issues/22954)) -* resource/aws_elasticache_cluster: Either `engine` or `replication_group_id` must be specified ([#20482](https://github.com/hashicorp/terraform-provider-aws/issues/20482)) -* resource/aws_elasticsearch_domain: `ebs_options.0.volume_type` can no longer be set to `""`; instead, remove or set to `null` ([#22954](https://github.com/hashicorp/terraform-provider-aws/issues/22954)) -* resource/aws_fsx_ontap_storage_virtual_machine: Remove deprecated `active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguidshed_name`, migrating value to `active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguished_name` ([#22915](https://github.com/hashicorp/terraform-provider-aws/issues/22915)) -* resource/aws_instance: `private_ip` can no longer be set to `""`; remove or set to `null` ([#22948](https://github.com/hashicorp/terraform-provider-aws/issues/22948)) -* resource/aws_lb_target_group: For `protocol = "TCP"`, `stickiness` can 
no longer have `type` set to `lb_cookie` even when `enabled = false`; instead use type `source_ip` ([#22996](https://github.com/hashicorp/terraform-provider-aws/issues/22996)) -* resource/aws_network_acl: These arguments can no longer be set to `""`: `egress.*.cidr_block`, `egress.*.ipv6_cidr_block`, `ingress.*.cidr_block`, or `ingress.*.ipv6_cidr_block` ([#22928](https://github.com/hashicorp/terraform-provider-aws/issues/22928)) -* resource/aws_route: Exactly one of these can be set: `destination_cidr_block`, `destination_ipv6_cidr_block`, `destination_prefix_list_id`. These arguments can no longer be set to `""`: `destination_cidr_block`, `destination_ipv6_cidr_block`. ([#22931](https://github.com/hashicorp/terraform-provider-aws/issues/22931)) -* resource/aws_route_table: These arguments can no longer be set to `""`: `route.*.cidr_block`, `route.*.ipv6_cidr_block` ([#22931](https://github.com/hashicorp/terraform-provider-aws/issues/22931)) -* resource/aws_s3_bucket: The `acceleration_status` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_accelerate_configuration` resource instead. ([#22610](https://github.com/hashicorp/terraform-provider-aws/issues/22610)) -* resource/aws_s3_bucket: The `acl` and `grant` arguments have been deprecated and are now read-only. Use the `aws_s3_bucket_acl` resource instead. ([#22537](https://github.com/hashicorp/terraform-provider-aws/issues/22537)) -* resource/aws_s3_bucket: The `cors_rule` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_cors_configuration` resource instead. ([#22611](https://github.com/hashicorp/terraform-provider-aws/issues/22611)) -* resource/aws_s3_bucket: The `lifecycle_rule` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_lifecycle_configuration` resource instead. ([#22581](https://github.com/hashicorp/terraform-provider-aws/issues/22581)) -* resource/aws_s3_bucket: The `logging` argument has been deprecated and is now read-only.
Use the `aws_s3_bucket_logging` resource instead. ([#22599](https://github.com/hashicorp/terraform-provider-aws/issues/22599)) -* resource/aws_s3_bucket: The `object_lock_configuration` `rule` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_object_lock_configuration` resource instead. ([#22612](https://github.com/hashicorp/terraform-provider-aws/issues/22612)) -* resource/aws_s3_bucket: The `policy` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_policy` resource instead. ([#22538](https://github.com/hashicorp/terraform-provider-aws/issues/22538)) -* resource/aws_s3_bucket: The `replication_configuration` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_replication_configuration` resource instead. ([#22604](https://github.com/hashicorp/terraform-provider-aws/issues/22604)) -* resource/aws_s3_bucket: The `request_payer` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_request_payment_configuration` resource instead. ([#22613](https://github.com/hashicorp/terraform-provider-aws/issues/22613)) -* resource/aws_s3_bucket: The `server_side_encryption_configuration` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_server_side_encryption_configuration` resource instead. ([#22605](https://github.com/hashicorp/terraform-provider-aws/issues/22605)) -* resource/aws_s3_bucket: The `versioning` argument has been deprecated and is now read-only. Use the `aws_s3_bucket_versioning` resource instead. ([#22606](https://github.com/hashicorp/terraform-provider-aws/issues/22606)) -* resource/aws_s3_bucket: The `website`, `website_domain`, and `website_endpoint` arguments have been deprecated and are now read-only. Use the `aws_s3_bucket_website_configuration` resource instead. 
([#22614](https://github.com/hashicorp/terraform-provider-aws/issues/22614)) -* resource/aws_vpc: `ipv6_cidr_block` can no longer be set to `""`; remove or set to `null` ([#22948](https://github.com/hashicorp/terraform-provider-aws/issues/22948)) -* resource/aws_vpc_ipv6_cidr_block_association: `ipv6_cidr_block` can no longer be set to `""`; remove or set to `null` ([#22948](https://github.com/hashicorp/terraform-provider-aws/issues/22948)) - -NOTES: - -* data-source/aws_cognito_user_pools: The type of the `ids` and `arns` attributes has changed from Set to List. If no user pools match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_db_event_categories: The type of the `ids` attribute has changed from Set to List. If no event categories match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ebs_volumes: The type of the `ids` attribute has changed from Set to List. If no volumes match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ec2_coip_pools: The type of the `pool_ids` attribute has changed from Set to List. If no COIP pools match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ec2_local_gateway_route_tables: The type of the `ids` attribute has changed from Set to List.
If no local gateway route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ec2_local_gateway_virtual_interface_groups: The type of the `ids` and `local_gateway_virtual_interface_ids` attributes has changed from Set to List. If no local gateway virtual interface groups match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ec2_local_gateways: The type of the `ids` attribute has changed from Set to List. If no local gateways match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ec2_transit_gateway_route_tables: The type of the `ids` attribute has changed from Set to List. If no transit gateway route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_efs_access_points: The type of the `ids` and `arns` attributes has changed from Set to List. If no access points match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_elasticache_replication_group: The `number_cache_clusters` attribute has been deprecated. All configurations using `number_cache_clusters` should be updated to use the `num_cache_clusters` attribute instead ([#22667](https://github.com/hashicorp/terraform-provider-aws/issues/22667)) -* data-source/aws_elasticache_replication_group: The `replication_group_description` attribute has been deprecated. 
All configurations using `replication_group_description` should be updated to use the `description` attribute instead ([#22667](https://github.com/hashicorp/terraform-provider-aws/issues/22667)) -* data-source/aws_emr_release_labels: The type of the `ids` attribute has changed from Set to List. If no release labels match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_iam_policy_document: The `source_json` and `override_json` attributes have been deprecated. Use the `source_policy_documents` and `override_policy_documents` attributes respectively instead. ([#22890](https://github.com/hashicorp/terraform-provider-aws/issues/22890)) -* data-source/aws_inspector_rules_packages: If no rules packages match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_instances: If no instances match the specified criteria an empty list is returned (previously an error was raised) ([#5055](https://github.com/hashicorp/terraform-provider-aws/issues/5055)) -* data-source/aws_ip_ranges: If no ranges match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_network_acls: The type of the `ids` attribute has changed from Set to List. If no NACLs match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_network_interfaces: The type of the `ids` attribute has changed from Set to List. 
If no network interfaces match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_route_tables: The type of the `ids` attribute has changed from Set to List. If no route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_s3_bucket_object: The data source is deprecated; use `aws_s3_object` instead ([#22877](https://github.com/hashicorp/terraform-provider-aws/issues/22877)) -* data-source/aws_s3_bucket_objects: The data source is deprecated; use `aws_s3_objects` instead ([#22877](https://github.com/hashicorp/terraform-provider-aws/issues/22877)) -* data-source/aws_security_groups: If no security groups match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_ssoadmin_instances: The type of the `identity_store_ids` and `arns` attributes has changed from Set to List. If no instances match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_subnet_ids: The `aws_subnet_ids` data source has been deprecated and will be removed in a future version. Use the `aws_subnets` data source instead ([#22743](https://github.com/hashicorp/terraform-provider-aws/issues/22743)) -* data-source/aws_vpcs: The type of the `ids` attribute has changed from Set to List. If no VPCs match the specified criteria an empty list is returned (previously an error was raised) ([#22253](https://github.com/hashicorp/terraform-provider-aws/issues/22253)) -* provider: The `assume_role.duration_seconds` argument has been deprecated.
All configurations using `assume_role.duration_seconds` should be updated to use the new `assume_role.duration` argument instead. ([#23077](https://github.com/hashicorp/terraform-provider-aws/issues/23077)) -* resource/aws_acmpca_certificate_authority: The `status` attribute has been deprecated. Use the `enabled` attribute instead. ([#22878](https://github.com/hashicorp/terraform-provider-aws/issues/22878)) -* resource/aws_autoscaling_attachment: The `alb_target_group_arn` argument has been deprecated. All configurations using `alb_target_group_arn` should be updated to use the new `lb_target_group_arn` argument instead ([#22662](https://github.com/hashicorp/terraform-provider-aws/issues/22662)) -* resource/aws_autoscaling_group: The `tags` argument has been deprecated. All configurations using `tags` should be updated to use the `tag` argument instead ([#22663](https://github.com/hashicorp/terraform-provider-aws/issues/22663)) -* resource/aws_budgets_budget: The `cost_filters` attribute has been deprecated. Use the `cost_filter` attribute instead. ([#22888](https://github.com/hashicorp/terraform-provider-aws/issues/22888)) -* resource/aws_connect_hours_of_operation: Timeout support has been removed as it is not needed for this resource ([#22375](https://github.com/hashicorp/terraform-provider-aws/issues/22375)) -* resource/aws_customer_gateway: `ip_address` can no longer be set to `""` ([#22926](https://github.com/hashicorp/terraform-provider-aws/issues/22926)) -* resource/aws_db_instance: The `name` argument has been deprecated. All configurations using `name` should be updated to use the `db_name` argument instead ([#22668](https://github.com/hashicorp/terraform-provider-aws/issues/22668)) -* resource/aws_default_subnet: If no default subnet exists in the specified Availability Zone, one is now created. The `force_destroy` argument has been added (defaults to `false`).
Setting this argument to `true` deletes the default subnet on `terraform destroy` ([#22253](https://github.com/hashicorp/terraform-provider-aws/issues/22253)) -* resource/aws_default_vpc: If no default VPC exists in the current AWS Region, one is now created. The `force_destroy` argument has been added (defaults to `false`). Setting this argument to `true` deletes the default VPC on `terraform destroy` ([#22253](https://github.com/hashicorp/terraform-provider-aws/issues/22253)) -* resource/aws_ec2_client_vpn_endpoint: The `status` attribute has been deprecated ([#22887](https://github.com/hashicorp/terraform-provider-aws/issues/22887)) -* resource/aws_ec2_client_vpn_endpoint: The type of the `dns_servers` argument has changed from Set to List ([#22889](https://github.com/hashicorp/terraform-provider-aws/issues/22889)) -* resource/aws_ec2_client_vpn_network_association: The `security_groups` argument has been deprecated. Use the `security_group_ids` argument of the `aws_ec2_client_vpn_endpoint` resource instead ([#22911](https://github.com/hashicorp/terraform-provider-aws/issues/22911)) -* resource/aws_ec2_client_vpn_network_association: The `status` attribute has been deprecated ([#22887](https://github.com/hashicorp/terraform-provider-aws/issues/22887)) -* resource/aws_ec2_client_vpn_route: Add [custom `timeouts`](https://www.terraform.io/docs/language/resources/syntax.html#operation-timeouts) block ([#22911](https://github.com/hashicorp/terraform-provider-aws/issues/22911)) -* resource/aws_ecs_cluster: The `capacity_providers` and `default_capacity_provider_strategy` arguments have been deprecated. Use the `aws_ecs_cluster_capacity_providers` resource instead. ([#22783](https://github.com/hashicorp/terraform-provider-aws/issues/22783)) -* resource/aws_elasticache_replication_group: The `cluster_mode` argument has been deprecated.
All configurations using `cluster_mode` should be updated to use the root-level `num_node_groups` and `replicas_per_node_group` arguments instead ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) -* resource/aws_elasticache_replication_group: The `number_cache_clusters` argument has been deprecated. All configurations using `number_cache_clusters` should be updated to use the `num_cache_clusters` argument instead ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) -* resource/aws_elasticache_replication_group: The `replication_group_description` argument has been deprecated. All configurations using `replication_group_description` should be updated to use the `description` argument instead ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) -* resource/aws_route: The `instance_id` argument has been deprecated. All configurations using `instance_id` should be updated to use the `network_interface_id` argument instead ([#22664](https://github.com/hashicorp/terraform-provider-aws/issues/22664)) -* resource/aws_route_table: The `instance_id` argument of the `route` configuration block has been deprecated. 
All configurations using `route` `instance_id` should be updated to use the `route` `network_interface_id` argument instead ([#22664](https://github.com/hashicorp/terraform-provider-aws/issues/22664)) -* resource/aws_s3_bucket_object: The resource is deprecated; use `aws_s3_object` instead ([#22877](https://github.com/hashicorp/terraform-provider-aws/issues/22877)) - -FEATURES: - -* **New Data Source:** `aws_cloudfront_realtime_log_config` ([#22620](https://github.com/hashicorp/terraform-provider-aws/issues/22620)) -* **New Data Source:** `aws_ec2_client_vpn_endpoint` ([#14218](https://github.com/hashicorp/terraform-provider-aws/issues/14218)) -* **New Data Source:** `aws_eips` ([#7537](https://github.com/hashicorp/terraform-provider-aws/issues/7537)) -* **New Data Source:** `aws_s3_object` ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) -* **New Data Source:** `aws_s3_objects` ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) -* **New Resource:** `aws_cognito_user` ([#19919](https://github.com/hashicorp/terraform-provider-aws/issues/19919)) -* **New Resource:** `aws_dataexchange_revision` ([#22933](https://github.com/hashicorp/terraform-provider-aws/issues/22933)) -* **New Resource:** `aws_network_acl_association` ([#18807](https://github.com/hashicorp/terraform-provider-aws/issues/18807)) -* **New Resource:** `aws_s3_bucket_accelerate_configuration` ([#22617](https://github.com/hashicorp/terraform-provider-aws/issues/22617)) -* **New Resource:** `aws_s3_bucket_acl` ([#22853](https://github.com/hashicorp/terraform-provider-aws/issues/22853)) -* **New Resource:** `aws_s3_bucket_cors_configuration` ([#12141](https://github.com/hashicorp/terraform-provider-aws/issues/12141)) -* **New Resource:** `aws_s3_bucket_lifecycle_configuration` ([#22579](https://github.com/hashicorp/terraform-provider-aws/issues/22579)) -* **New Resource:** `aws_s3_bucket_logging` 
([#22608](https://github.com/hashicorp/terraform-provider-aws/issues/22608)) -* **New Resource:** `aws_s3_bucket_object_lock_configuration` ([#22644](https://github.com/hashicorp/terraform-provider-aws/issues/22644)) -* **New Resource:** `aws_s3_bucket_request_payment_configuration` ([#22649](https://github.com/hashicorp/terraform-provider-aws/issues/22649)) -* **New Resource:** `aws_s3_bucket_server_side_encryption_configuration` ([#22609](https://github.com/hashicorp/terraform-provider-aws/issues/22609)) -* **New Resource:** `aws_s3_bucket_versioning` ([#5132](https://github.com/hashicorp/terraform-provider-aws/issues/5132)) -* **New Resource:** `aws_s3_bucket_website_configuration` ([#22648](https://github.com/hashicorp/terraform-provider-aws/issues/22648)) -* **New Resource:** `aws_s3_object` ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) - -ENHANCEMENTS: - -* data-source/aws_ami: Add `boot_mode` attribute. ([#22939](https://github.com/hashicorp/terraform-provider-aws/issues/22939)) -* data-source/aws_cloudwatch_log_group: Automatically trim `:*` suffix from `arn` attribute ([#22043](https://github.com/hashicorp/terraform-provider-aws/issues/22043)) -* data-source/aws_ec2_client_vpn_endpoint: Add `security_group_ids` and `vpc_id` attributes ([#22911](https://github.com/hashicorp/terraform-provider-aws/issues/22911)) -* data-source/aws_elasticache_replication_group: Add `description`, `num_cache_clusters`, `num_node_groups`, and `replicas_per_node_group` attributes ([#22667](https://github.com/hashicorp/terraform-provider-aws/issues/22667)) -* data-source/aws_imagebuilder_distribution_configuration: Add `container_distribution_configuration` attribute to the `distribution` configuration block ([#22838](https://github.com/hashicorp/terraform-provider-aws/issues/22838)) -* data-source/aws_imagebuilder_distribution_configuration: Add `launch_template_configuration` attribute to the `distribution` configuration block 
([#22884](https://github.com/hashicorp/terraform-provider-aws/issues/22884)) -* data-source/aws_imagebuilder_image_recipe: Add `parameter` attribute to the `component` configuration block ([#22856](https://github.com/hashicorp/terraform-provider-aws/issues/22856)) -* provider: Add `duration` argument to the `assume_role` configuration block ([#23077](https://github.com/hashicorp/terraform-provider-aws/issues/23077)) -* provider: Add `ec2_metadata_service_endpoint`, `ec2_metadata_service_endpoint_mode`, `use_dualstack_endpoint`, `use_fips_endpoint` arguments ([#22804](https://github.com/hashicorp/terraform-provider-aws/issues/22804)) -* provider: Add environment variables `TF_AWS_DYNAMODB_ENDPOINT`, `TF_AWS_IAM_ENDPOINT`, `TF_AWS_S3_ENDPOINT`, and `TF_AWS_STS_ENDPOINT`. ([#23052](https://github.com/hashicorp/terraform-provider-aws/issues/23052)) -* provider: Add support for `shared_config_file` parameter ([#20587](https://github.com/hashicorp/terraform-provider-aws/issues/20587)) -* provider: Add support for `shared_credentials_files` parameter and deprecate `shared_credentials_file` ([#23080](https://github.com/hashicorp/terraform-provider-aws/issues/23080)) -* provider: Add `s3_use_path_style` parameter and deprecate `s3_force_path_style`. ([#23055](https://github.com/hashicorp/terraform-provider-aws/issues/23055)) -* provider: Change `shared_config_file` parameter to `shared_config_files` ([#23080](https://github.com/hashicorp/terraform-provider-aws/issues/23080)) -* provider: Update AWS authentication to use AWS SDK for Go v2 ([#20587](https://github.com/hashicorp/terraform-provider-aws/issues/20587)) -* resource/aws_ami: Add `boot_mode` and `ebs_block_device.outpost_arn` arguments.
([#22939](https://github.com/hashicorp/terraform-provider-aws/issues/22939)) -* resource/aws_ami_copy: Add `boot_mode` and `ebs_block_device.outpost_arn` attributes ([#22972](https://github.com/hashicorp/terraform-provider-aws/issues/22972)) -* resource/aws_ami_from_instance: Add `boot_mode` and `ebs_block_device.outpost_arn` attributes ([#22972](https://github.com/hashicorp/terraform-provider-aws/issues/22972)) -* resource/aws_api_gateway_domain_name: Add `ownership_verification_certificate_arn` argument. ([#21076](https://github.com/hashicorp/terraform-provider-aws/issues/21076)) -* resource/aws_apigatewayv2_domain_name: Add `domain_name_configuration.ownership_verification_certificate_arn` argument. ([#21076](https://github.com/hashicorp/terraform-provider-aws/issues/21076)) -* resource/aws_autoscaling_attachment: Add `lb_target_group_arn` argument ([#22662](https://github.com/hashicorp/terraform-provider-aws/issues/22662)) -* resource/aws_cloudwatch_event_target: Add plan time validation for `input`, `input_path`, `run_command_targets.values`, `http_target.header_parameters`, `http_target.query_string_parameters`, `redshift_target.database`, `redshift_target.db_user`, `redshift_target.secrets_manager_arn`, `redshift_target.sql`, `redshift_target.statement_name`, `retry_policy.maximum_event_age_in_seconds`, `retry_policy.maximum_retry_attempts`. 
([#22946](https://github.com/hashicorp/terraform-provider-aws/issues/22946)) -* resource/aws_db_instance: Add `db_name` argument ([#22668](https://github.com/hashicorp/terraform-provider-aws/issues/22668)) -* resource/aws_ec2_client_vpn_authorization_rule: Configurable Create and Delete timeouts ([#20688](https://github.com/hashicorp/terraform-provider-aws/issues/20688)) -* resource/aws_ec2_client_vpn_endpoint: Add `client_connect_options` argument ([#22793](https://github.com/hashicorp/terraform-provider-aws/issues/22793)) -* resource/aws_ec2_client_vpn_endpoint: Add `client_login_banner_options` argument ([#22793](https://github.com/hashicorp/terraform-provider-aws/issues/22793)) -* resource/aws_ec2_client_vpn_endpoint: Add `security_group_ids` and `vpc_id` arguments ([#22911](https://github.com/hashicorp/terraform-provider-aws/issues/22911)) -* resource/aws_ec2_client_vpn_endpoint: Add `session_timeout_hours` argument ([#22793](https://github.com/hashicorp/terraform-provider-aws/issues/22793)) -* resource/aws_ec2_client_vpn_endpoint: Add `vpn_port` argument ([#22793](https://github.com/hashicorp/terraform-provider-aws/issues/22793)) -* resource/aws_ec2_client_vpn_network_association: Configurable Create and Delete timeouts ([#20689](https://github.com/hashicorp/terraform-provider-aws/issues/20689)) -* resource/aws_elasticache_replication_group: Add `description` argument ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) -* resource/aws_elasticache_replication_group: Add `num_cache_clusters` argument ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) -* resource/aws_elasticache_replication_group: Add `num_node_groups` and `replicas_per_node_group` arguments ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) -* resource/aws_fsx_lustre_file_system: Add `log_configuration` argument. 
([#22935](https://github.com/hashicorp/terraform-provider-aws/issues/22935)) -* resource/aws_fsx_ontap_file_system: Reduce the minimum valid value of the `throughput_capacity` argument to `128` (128 MB/s) ([#22898](https://github.com/hashicorp/terraform-provider-aws/issues/22898)) -* resource/aws_glue_partition_index: Add support for custom timeouts. ([#22941](https://github.com/hashicorp/terraform-provider-aws/issues/22941)) -* resource/aws_imagebuilder_distribution_configuration: Add `launch_template_configuration` argument to the `distribution` configuration block ([#22842](https://github.com/hashicorp/terraform-provider-aws/issues/22842)) -* resource/aws_imagebuilder_image_recipe: Add `parameter` argument to the `component` configuration block ([#22837](https://github.com/hashicorp/terraform-provider-aws/issues/22837)) -* resource/aws_mq_broker: `auto_minor_version_upgrade` and `host_instance_type` can be changed without recreating broker ([#20661](https://github.com/hashicorp/terraform-provider-aws/issues/20661)) -* resource/aws_s3_bucket_cors_configuration: Retry when `NoSuchCORSConfiguration` errors are returned from the AWS API ([#22977](https://github.com/hashicorp/terraform-provider-aws/issues/22977)) -* resource/aws_s3_bucket_versioning: Add eventual consistency handling to help ensure bucket versioning is stabilized. 
([#21076](https://github.com/hashicorp/terraform-provider-aws/issues/21076)) -* resource/aws_vpn_connection: Add the ability to revert changes to unconfigured tunnel options made outside of Terraform to their [documented default values](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html) ([#17031](https://github.com/hashicorp/terraform-provider-aws/issues/17031)) -* resource/aws_vpn_connection: Mark `customer_gateway_configuration` as [`Sensitive`](https://www.terraform.io/plugin/sdkv2/best-practices/sensitive-state#using-the-sensitive-flag) ([#15806](https://github.com/hashicorp/terraform-provider-aws/issues/15806)) -* resource/aws_wafv2_web_acl: Support `version` on `managed_rule_group_statement` ([#21732](https://github.com/hashicorp/terraform-provider-aws/issues/21732)) - -BUG FIXES: - -* data-source/aws_vpc_peering_connections: Return empty array instead of error when no connections found. ([#17382](https://github.com/hashicorp/terraform-provider-aws/issues/17382)) -* resource/aws_cloudformation_stack: Retry resource Create and Update for IAM eventual consistency ([#22840](https://github.com/hashicorp/terraform-provider-aws/issues/22840)) -* resource/aws_cloudwatch_event_target: Preserve order of `http_target.path_parameter_values`. 
([#22946](https://github.com/hashicorp/terraform-provider-aws/issues/22946)) -* resource/aws_db_instance: Fix error with reboot of replica ([#22178](https://github.com/hashicorp/terraform-provider-aws/issues/22178)) -* resource/aws_ec2_client_vpn_authorization_rule: Don't raise an error when `InvalidClientVpnEndpointId.NotFound` is returned during refresh ([#20688](https://github.com/hashicorp/terraform-provider-aws/issues/20688)) -* resource/aws_ec2_client_vpn_endpoint: `connection_log_options.cloudwatch_log_stream` argument is Computed, preventing spurious resource diffs ([#22891](https://github.com/hashicorp/terraform-provider-aws/issues/22891)) -* resource/aws_ecs_capacity_provider: Fix tagging error preventing use in ISO partitions ([#23030](https://github.com/hashicorp/terraform-provider-aws/issues/23030)) -* resource/aws_ecs_cluster: Fix tagging error preventing use in ISO partitions ([#23030](https://github.com/hashicorp/terraform-provider-aws/issues/23030)) -* resource/aws_ecs_service: Fix tagging error preventing use in ISO partitions ([#23030](https://github.com/hashicorp/terraform-provider-aws/issues/23030)) -* resource/aws_ecs_task_definition: Fix tagging error preventing use in ISO partitions ([#23030](https://github.com/hashicorp/terraform-provider-aws/issues/23030)) -* resource/aws_ecs_task_set: Fix tagging error preventing use in ISO partitions ([#23030](https://github.com/hashicorp/terraform-provider-aws/issues/23030)) -* resource/aws_route_table_association: Handle nil 'AssociationState' in ISO regions ([#22806](https://github.com/hashicorp/terraform-provider-aws/issues/22806)) -* resource/aws_route_table_association: Retry resource Read for EC2 eventual consistency ([#22927](https://github.com/hashicorp/terraform-provider-aws/issues/22927)) -* resource/aws_vpc_ipam: Correct update of `description` ([#22863](https://github.com/hashicorp/terraform-provider-aws/issues/22863)) -* resource/aws_waf_rule_group: Prevent panic when expanding the rule 
group's set of `activated_rule` ([#22978](https://github.com/hashicorp/terraform-provider-aws/issues/22978)) -* resource/aws_wafregional_rule_group: Prevent panic when expanding the rule group's set of `activated_rule` ([#22978](https://github.com/hashicorp/terraform-provider-aws/issues/22978)) - ## Previous Releases For information on prior major releases, see their changelogs: +* [4.x](https://github.com/hashicorp/terraform-provider-aws/blob/release/4.x/CHANGELOG.md) * [3.x](https://github.com/hashicorp/terraform-provider-aws/blob/release/3.x/CHANGELOG.md) * [2.x and earlier](https://github.com/hashicorp/terraform-provider-aws/blob/release/2.x/CHANGELOG.md) diff --git a/GNUmakefile b/GNUmakefile index 71e2fd15a98..92e8a761f96 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -81,6 +81,11 @@ build: fmtcheck cleango: @echo "==> Cleaning Go..." @echo "WARNING: This will kill gopls and clean Go caches" + @vscode=`ps -ef | grep Visual\ Studio\ Code | wc -l | xargs` ; \ + if [ $$vscode -gt 1 ] ; then \ + echo "ALERT: vscode is running. Close it and try again." ; \ + exit 1 ; \ + fi @for proc in `pgrep gopls` ; do \ echo "Killing gopls process $$proc" ; \ kill -9 $$proc ; \ @@ -89,6 +94,9 @@ cleango: clean: cleango build tools +copyright: + @copywrite headers + depscheck: @echo "==> Checking source code with go mod tidy..." @$(GO_VER) mod tidy @@ -140,7 +148,7 @@ gen: rm -f internal/conns/*_gen.go rm -f internal/provider/*_gen.go rm -f internal/service/**/*_gen.go - rm -f internal/sweep/sweep_test.go + rm -f internal/sweep/sweep_test.go internal/sweep/service_packages_gen_test.go rm -f names/caps.md rm -f names/*_gen.go rm -f website/docs/guides/custom-service-endpoints.html.md @@ -148,10 +156,11 @@ gen: rm -f .ci/.semgrep-configs.yml rm -f .ci/.semgrep-service-name*.yml $(GO_VER) generate ./... - # Generate service package data last as it may depend on output of earlier generators. 
- rm -f internal/service/**/service_package_gen.go + # Generate service package lists last as they may depend on output of earlier generators. rm -f internal/provider/service_packages_gen.go - $(GO_VER) generate ./internal/generate/servicepackages + $(GO_VER) generate ./internal/provider + rm -f internal/sweep/sweep_test.go internal/sweep/service_packages_gen_test.go + $(GO_VER) generate ./internal/sweep gencheck: @echo "==> Checking generated source code..." @@ -209,7 +218,7 @@ providerlint: sane: @echo "==> Sane Check (48 tests of Top 30 resources)" - @echo "==> Like 'sanity' except full output, stops soon after error" + @echo "==> Like 'sanity' except full output and stops soon after 1st error" @echo "==> NOTE: NOT an exhaustive set of tests! Finds big problems only." @TF_ACC=1 $(GO_VER) test \ ./internal/service/iam/... \ @@ -232,7 +241,7 @@ sane: sanity: @echo "==> Sanity Check (48 tests of Top 30 resources)" - @echo "==> Like 'sane' but little output, runs all tests despite errors" + @echo "==> Like 'sane' but less output and runs all tests despite most errors" @echo "==> NOTE: NOT an exhaustive set of tests! Finds big problems only." @iam=`TF_ACC=1 $(GO_VER) test \ ./internal/service/iam/... \ @@ -302,11 +311,6 @@ semgrep: semgrep-validate @echo "==> Running Semgrep static analysis..." @docker run --rm --volume "${PWD}:/src" returntocorp/semgrep semgrep --config .ci/.semgrep.yml -servicepackages: - rm -f internal/service/**/service_package_gen.go - rm -f internal/provider/service_packages_gen.go - $(GO_VER) generate ./internal/generate/servicepackages - skaff: cd skaff && $(GO_VER) install github.com/hashicorp/terraform-provider-aws/skaff @@ -364,13 +368,14 @@ tfsdk2fw: tools: cd .ci/providerlint && $(GO_VER) install . 
- cd .ci/tools && $(GO_VER) install github.com/bflad/tfproviderdocs + cd .ci/tools && $(GO_VER) install github.com/YakDriver/tfproviderdocs cd .ci/tools && $(GO_VER) install github.com/client9/misspell/cmd/misspell cd .ci/tools && $(GO_VER) install github.com/golangci/golangci-lint/cmd/golangci-lint cd .ci/tools && $(GO_VER) install github.com/katbyte/terrafmt cd .ci/tools && $(GO_VER) install github.com/terraform-linters/tflint cd .ci/tools && $(GO_VER) install github.com/pavius/impi/cmd/impi cd .ci/tools && $(GO_VER) install github.com/hashicorp/go-changelog/cmd/changelog-build + cd .ci/tools && $(GO_VER) install github.com/hashicorp/copywrite cd .ci/tools && $(GO_VER) install github.com/rhysd/actionlint/cmd/actionlint cd .ci/tools && $(GO_VER) install mvdan.cc/gofumpt @@ -429,7 +434,6 @@ yamllint: sanity \ semall \ semgrep \ - servicepackages \ skaff \ sweep \ t \ diff --git a/ROADMAP.md b/ROADMAP.md index c9ac9381eb2..45374ce65b3 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -1,80 +1,92 @@ -# Roadmap: February 2023 - April 2023 +# Roadmap: May 2023 - July 2023 Every few months, the team will highlight areas of focus for our work and upcoming research. We select items for inclusion in the roadmap from the Top Community Issues, [Core Services](https://hashicorp.github.io/terraform-provider-aws/core-services/), and internal priorities. Where community sourced contributions exist we will work with the authors to review and merge their work. Where this does not exist or the original contributors are not available we will create the resources and implementation ourselves. -Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. To view all the items we've prioritized for this quarter, please see the [Roadmap milestone](https://github.com/hashicorp/terraform-provider-aws/milestone/138). 
+Each weekly release will include necessary tasks that lead to the completion of the stated goals as well as community pull requests, enhancements, and features that are not highlighted in the roadmap. This roadmap does not describe all the work that will be included within this timeframe, but it does describe our focus. We will include other work as events occur. -In the period spanning August to October 2022, 808 Pull Requests were opened in the provider and 783 were closed/merged, adding support for the following (among many others): +In the period spanning January to April 2023, the AWS Provider added support for the following (among many others): -- AWS Audit Manager -- Lambda SnapStart -- RDS: Blue/Green Deployments +- AWS VPC Lattice +- AWS Quicksight +- AWS Directory Service “Trust” +- AWS Observability Access Manager -From February - April 2023, we will be prioritizing the following areas of work: +From May - July 2023, we will be prioritizing the following areas of work: ## New Services -### AWS Quicksight +### Amazon OpenSearch Serverless -Issue: [#10990]([https://github.com/hashicorp/terraform-provider-aws/issues/17981](https://github.com/hashicorp/terraform-provider-aws/issues/10990)) +Issue: [#28313](https://github.com/hashicorp/terraform-provider-aws/issues/28313) -[AWS Quicksight](https://aws.amazon.com/quicksight/) has a serverless architecture that automatically scales to hundreds of thousands of users without the need to set up, configure, or manage your own servers. It also ensures that your users don’t have to deal with slow dashboards during peak hours, when multiple business intelligence (BI) users are accessing the same dashboards or datasets. And with pay-per-session pricing, you pay only when your users access the dashboards or reports, which makes it cost effective for deployments with many users. QuickSight is also built with robust security, governance, and global collaboration features for your enterprise workloads.
+[Amazon OpenSearch Serverless](https://aws.amazon.com/opensearch-service/features/serverless/) makes it easy for customers to run large-scale search and analytics workloads without managing clusters. It automatically provisions and scales the underlying resources to deliver fast data ingestion and query responses for even the most demanding and unpredictable workloads, eliminating the need to configure and optimize clusters. -Support for AWS Quicksight may include: +Support for Amazon OpenSearch Serverless may include: New Resource(s): -- `aws_quicksight_iam_policy_assignment` -- `aws_quicksight_data_set` -- `aws_quicksight_ingestion` -- `aws_quicksight_template` -- `aws_quicksight_dashboard` -- `aws_quicksight_template_alias` +- `aws_opensearchserverless_collection` +- `aws_opensearchserverless_access_policy` +- `aws_opensearchserverless_security_config` +- `aws_opensearchserverless_security_policy` +- `aws_opensearchserverless_vpc_endpoint` -### AWS Recycle Bin +### AWS Clean Rooms -Issue: [#23160](https://github.com/hashicorp/terraform-provider-aws/issues/23160) +Issue: [#30024](https://github.com/hashicorp/terraform-provider-aws/issues/30024) -[AWS Recycle Bin](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recycle-bin.html) is a data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs. When using Recycle Bin, if your resources are deleted, they are retained in the Recycle Bin for a time period that you specify before being permanently deleted. +[AWS Clean Rooms](https://aws.amazon.com/clean-rooms/) helps companies and their partners more easily and securely analyze and collaborate on their collective datasets–without sharing or copying one another's underlying data. 
With AWS Clean Rooms, customers can create a secure data clean room in minutes, and collaborate with any other company on the AWS Cloud to generate unique insights about advertising campaigns, investment decisions, and research and development. -Support for AWS Recycle Bin may include: +Support for AWS Clean Rooms may include: New Resource(s): -- `aws_recycle_bin_rule` +- `aws_cleanrooms_collaboration` +- `aws_cleanrooms_configured_table` +- `aws_cleanrooms_configured_table_analysis_rule` +- `aws_cleanrooms_configured_table_association` +- `aws_cleanrooms_membership` -### AWS Directory Service “Trust” +## Enhancements to Existing Services -Issue: [#11901](https://github.com/hashicorp/terraform-provider-aws/issues/11901) +This quarter most of our efforts will be focused on enhancements and stability improvements of our core services, rather than adding brand new services to the provider. The following list comprises the items most important to the community. -Easily integrate AWS Managed Microsoft AD with your existing AD by using AD trust relationships. Using trusts enables you to use your existing Active Directory to control which AD users can access your AWS resources. 
+- [Lack of support for sso-session in .aws/config](https://github.com/hashicorp/terraform-provider-aws/issues/28263) +- [Cognito User Pool: cannot modify or remove schema items](https://github.com/hashicorp/terraform-provider-aws/issues/21654) +- [aws_wafv2_web_acl - Error: Provider produced inconsistent final plan](https://github.com/hashicorp/terraform-provider-aws/issues/23992) +- [aws_lb_target_group_attachment: target_id should be a list](https://github.com/hashicorp/terraform-provider-aws/issues/9901) +- [Extend Secrets Manager Rotation Configuration](https://github.com/hashicorp/terraform-provider-aws/issues/22969) -Support for AWS Director Service "Trust" may include: +## Major Release v5 -New Resource(s): +The release of version 5.0 of the Terraform AWS provider will bring highly anticipated updates to default tags, and introduce a number of other changes and deprecations. -- `aws_directory_service_directory_trust` +### Default Tags -## Enhancements to Existing Services +Default tags in the Terraform AWS provider allow practitioners to define common metadata tags at the provider level. These tags are then applied to all supported resources in the Terraform configuration. Previously, assumptions and restrictions were made to allow this feature to function across as many resources as possible. However, it could be difficult to retrofit existing code, causing frustrating manual intervention. +Thanks to new features available in the [Terraform plugin SDK](https://developer.hashicorp.com/terraform/plugin/sdkv2) and the [Terraform plugin framework](https://developer.hashicorp.com/terraform/plugin/framework), we have removed several limitations which made default tags difficult to integrate with existing resources and modules. -This quarter most of our efforts will be focused on enhancements and stability improvements of our core services, rather than adding brand new services to the provider. The following list comprises the items most important to the community.
+The updates in version 5.0 solve for: -- [Resource Identifiers and Tags for VPC Security Group Rules](https://github.com/hashicorp/terraform-provider-aws/issues/20104) -- [Better Lambda error](https://github.com/hashicorp/terraform-provider-aws/issues/13709) -- [AssumeRoleTokenProviderNotSetError when using assume_role with mfa enabled](https://github.com/hashicorp/terraform-provider-aws/issues/10491) -- [Proposal: Add support Object-level logging in the existing trail for resource 'aws_s3_bucket'](https://github.com/hashicorp/terraform-provider-aws/issues/9459) -- [Proposal: Add support Object-level logging in the existing trail for resource 'aws_s3_bucket'](https://github.com/hashicorp/terraform-provider-aws/issues/9459) -- [Add support for elasticsearch outbound connection and relevant accepter](https://github.com/hashicorp/terraform-provider-aws/pull/22988) -- [Add support for Route 53 IP Based Routing Policy](https://github.com/hashicorp/terraform-provider-aws/issues/25321) -- [Add ability to query ECR repository for most recently pushed image](https://github.com/hashicorp/terraform-provider-aws/issues/12798) +- Inconsistent final plans that cause failures when tags are computed. +- Identical tags in both default tags and resource tags. +- Perpetual diffs within tag configurations. -### Default Tags +### Remove EC2 Classic Functionality + +In 2021 AWS [announced](https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/) the retirement of EC2 Classic Networking functionality. This was scheduled to occur on August 15th, 2022. Support for the functionality was extended until late September when any AWS customers who had qualified for extension finished their migration. At that time those features were marked as deprecated and it is now time to remove them as the functionality is no longer available through AWS. While this is a standard deprecation, it is also a major feature removal.
+ +### Updating RDS Identifiers In-Place + +Allow DB names to be updated in place. This is now supported by AWS, so we should allow its use. Practitioners will now be able to change names without recreating the resource. Details can be tracked in issue [#507](https://github.com/hashicorp/terraform-provider-aws/issues/507). + +### Remove Default Value from Engine Parameters -[#17829](https://github.com/hashicorp/terraform-provider-aws/issues/17829) added the `default_tags` block to allow practitioners to tags at the provider level. This allows configured resources capable of assigning tags to have them inherit those as well as be able to specify them at the resource level. This has proven extremely popular with the community, however it comes with a number of significant caveats ([#18311](https://github.com/hashicorp/terraform-provider-aws/issues/18311), [#19583](https://github.com/hashicorp/terraform-provider-aws/issues/19583), [#19204](https://github.com/hashicorp/terraform-provider-aws/issues/19204)) for use which have resulted from limitations in the provider SDK we use. New functionality in the [terraform-plugin-sdk](https://github.com/hashicorp/terraform-plugin-sdk) and [terraform-plugin-framework](https://github.com/hashicorp/terraform-plugin-framework) should allow us to temper these caveats. This quarter we plan to begin the development of this feature, based on the research completed last quarter by the engineering team. +Removes a default value that has no parallel in AWS and causes unexpected behavior for end users. Practitioners will now have to specify a value. Details can be tracked in issue [#27960](https://github.com/hashicorp/terraform-provider-aws/issues/27960).
## Disclosures diff --git a/docs/add-a-new-datasource.md b/docs/add-a-new-datasource.md index ad7011aef00..e8ac91fedc7 100644 --- a/docs/add-a-new-datasource.md +++ b/docs/add-a-new-datasource.md @@ -34,7 +34,7 @@ These will map the AWS API response to the data source schema. You will also nee ### Register Data Source to the provider -Data Sources use a self registration process that adds them to the provider using the `@SDKDataSource()` annotation in the datasource's comments. Run `make servicepackages` to register the datasource. This will add an entry to the `service_package_gen.go` file located in the service package folder. +Data Sources use a self registration process that adds them to the provider using the `@SDKDataSource()` annotation in the datasource's comments. Run `make gen` to register the datasource. This will add an entry to the `service_package_gen.go` file located in the service package folder. ``` package something diff --git a/docs/add-a-new-resource.md b/docs/add-a-new-resource.md index 73bf5af6c7c..b43fda72e22 100644 --- a/docs/add-a-new-resource.md +++ b/docs/add-a-new-resource.md @@ -36,7 +36,7 @@ These will map planned Terraform state to the AWS API call, or an AWS API respon ### Register Resource to the provider -Resources use a self registration process that adds them to the provider using the `@SDKResource()` annotation in the resource's comments. Run `make servicepackages` to register the resource. This will add an entry to the `service_package_gen.go` file located in the service package folder. +Resources use a self registration process that adds them to the provider using the `@SDKResource()` annotation in the resource's comments. Run `make gen` to register the resource. This will add an entry to the `service_package_gen.go` file located in the service package folder. 
``` package something diff --git a/docs/add-a-new-service.md b/docs/add-a-new-service.md index bd59ad45ca7..cd7bf1688e3 100644 --- a/docs/add-a-new-service.md +++ b/docs/add-a-new-service.md @@ -37,3 +37,73 @@ To add an AWS SDK for Go service client: ``` Once the service client has been added, implement the first [resource](./add-a-new-resource.md) or [data source](./add-a-new-datasource.md) in a separate PR. + +## Adding a Custom Service Client + +If an AWS service must be created in a non-standard way, for example the service API's endpoint must be accessed via a single AWS Region, then: + +1. Add an `x` in the **SkipClientGenerate** column for the service in [`names/names_data.csv`](https://github.com/hashicorp/terraform-provider-aws/blob/main/names/README.md) + +1. Run `make gen` + +1. Add a file `internal//service_package.go` that contains an API client factory function, for example: + +```go +package globalaccelerator + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + globalaccelerator_sdkv1 "github.com/aws/aws-sdk-go/service/globalaccelerator" +) + +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context) (*globalaccelerator_sdkv1.GlobalAccelerator, error) { + sess := p.config["session"].(*session_sdkv1.Session) + config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(p.config["endpoint"].(string))} + + // Force "global" services to correct Regions. + if p.config["partition"].(string) == endpoints_sdkv1.AwsPartitionID { + config.Region = aws_sdkv1.String(endpoints_sdkv1.UsWest2RegionID) + } + + return globalaccelerator_sdkv1.New(sess.Copy(config)), nil +} +``` + +## Customizing a new Service Client + +If an AWS service must be customized after creation, for example retry handling must be changed, then: + +1. 
Add a file `internal//service_package.go` that contains an API client customization function, for example: + +```go +package chime + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + chime_sdkv1 "github.com/aws/aws-sdk-go/service/chime" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *chime_sdkv1.Chime) (*chime_sdkv1.Chime, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + // When calling CreateVoiceConnector across multiple resources, + // the API can randomly return a BadRequestException without explanation + if r.Operation.Name == "CreateVoiceConnector" { + if tfawserr.ErrMessageContains(r.Error, chime_sdkv1.ErrCodeBadRequestException, "Service received a bad request") { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} +``` diff --git a/docs/resource-tagging.md b/docs/resource-tagging.md index 572075e18e8..4fc91c147a4 100644 --- a/docs/resource-tagging.md +++ b/docs/resource-tagging.md @@ -28,7 +28,7 @@ can be found in the [`generate` package documentation](https://github.com/hashic The generator will create several types of tagging-related code. All services that support tagging will generate the function `KeyValueTags`, which converts from service-specific structs returned by the AWS SDK into a common format used by the provider, and the function `Tags`, which converts from the common format back to the service-specific structs. -In addition, many services have separate functions to list or update tags, so the corresponding `ListTags` and `UpdateTags` can be generated. +In addition, many services have separate functions to list or update tags, so the corresponding `listTags` and `updateTags` can be generated. 
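The round trip that these generated conversion functions perform can be sketched with a minimal, self-contained example. Note this is an illustration only: the `ec2Tag` struct and the lowercase helpers below are stand-ins for the per-service structs and generated functions, not the provider's actual generated code.

```go
package main

import "fmt"

// ec2Tag is a hypothetical stand-in for a service-specific tag struct
// (e.g., the key/value tag type an AWS SDK service package defines).
type ec2Tag struct {
	Key   string
	Value string
}

// keyValueTags mirrors what a generated KeyValueTags-style function does:
// convert service-specific tag structs into a common map-like format.
func keyValueTags(tags []ec2Tag) map[string]string {
	m := make(map[string]string, len(tags))
	for _, t := range tags {
		m[t.Key] = t.Value
	}
	return m
}

// tags mirrors what a generated Tags-style function does:
// convert the common format back into service-specific structs.
func tags(m map[string]string) []ec2Tag {
	out := make([]ec2Tag, 0, len(m))
	for k, v := range m {
		out = append(out, ec2Tag{Key: k, Value: v})
	}
	return out
}

func main() {
	svcTags := []ec2Tag{{Key: "Environment", Value: "prod"}}
	common := keyValueTags(svcTags)
	fmt.Println(common["Environment"]) // prod
	fmt.Println(len(tags(common)))     // 1
}
```

Working in a common map-like format is what lets provider-wide features such as `default_tags` merging and ignored-tag filtering operate uniformly, with only the thin conversion layer generated per service.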
Optionally, to retrieve a specific tag, you can generate the `GetTag` function. If the service directory does not contain a `generate.go` file, create one. @@ -204,7 +204,7 @@ func ResourceAnalyzer() *schema.Resource { ``` The `identifierAttribute` argument to the `@Tags` annotation identifies the attribute in the resource's schema whose value is used in tag listing and updating API calls. Common values are `"arn"` and "`id`". -Once the annotation has been added to the resource's code, run `make servicepackages` to register the resource for transparent tagging. This will add an entry to the `service_package_gen.go` file located in the service package folder. +Once the annotation has been added to the resource's code, run `make gen` to register the resource for transparent tagging. This will add an entry to the `service_package_gen.go` file located in the service package folder. #### Resource Create Operation @@ -212,38 +212,38 @@ When creating a resource, some AWS APIs support passing tags in the Create call while others require setting the tags after the initial creation. If the API supports tagging on creation (e.g., the `Input` struct accepts a `Tags` field), -use the `GetTagsIn` function to get any configured tags, e.g., with EKS Clusters: +use the `getTagsIn` function to get any configured tags, e.g., with EKS Clusters: ```go input := &eks.CreateClusterInput{ /* ... other configuration ... 
*/ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } ``` Otherwise, if the API does not support tagging on creation, call `createTags` after the resource has been created, e.g., with Device Farm device pools: ```go -if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { +if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting DeviceFarm Device Pool (%s) tags: %s", d.Id(), err) } ``` #### Resource Read Operation -In the resource `Read` operation, use the `SetTagsOut` function to signal to the transparent tagging mechanism that the resource has tags that should be saved into Terraform state, e.g., with EKS Clusters: +In the resource `Read` operation, use the `setTagsOut` function to signal to the transparent tagging mechanism that the resource has tags that should be saved into Terraform state, e.g., with EKS Clusters: ```go /* ... other d.Set(...) logic ... */ -SetTagsOut(ctx, cluster.Tags) +setTagsOut(ctx, cluster.Tags) ``` -If the service API does not return the tags directly from reading the resource and requires use of the generated `ListTags` function, do nothing and the transparent tagging mechanism will make the `ListTags` call and save any tags into Terraform state. +If the service API does not return the tags directly from reading the resource and requires use of the generated `listTags` function, do nothing and the transparent tagging mechanism will make the `listTags` call and save any tags into Terraform state. #### Resource Update Operation -In the resource `Update` operation, only non-`tags` updates need be done as the transparent tagging mechanism makes the `UpdateTags` call. +In the resource `Update` operation, only non-`tags` updates need be done as the transparent tagging mechanism makes the `updateTags` call. ```go if d.HasChangesExcept("tags", "tags_all") { @@ -312,7 +312,7 @@ tags := defaultTagsConfig.MergeTags(tftags.New(ctx, d.Get("tags").(map[string]in /* ... 
creation steps ... */ if len(tags) > 0 { - if err := UpdateTags(ctx, conn, d.Id(), nil, tags); err != nil { + if err := updateTags(ctx, conn, d.Id(), nil, tags); err != nil { return fmt.Errorf("adding DeviceFarm Device Pool (%s) tags: %w", d.Id(), err) } } @@ -356,7 +356,7 @@ if err := d.Set("tags_all", tags.Map()); err != nil { ``` If the service API does not return the tags directly from reading the resource and requires a separate API call, -use the generated `ListTags` function, e.g., with Athena Workgroups: +use the generated `listTags` function, e.g., with Athena Workgroups: ```go // Typically declared near conn := /* ... */ @@ -365,7 +365,7 @@ ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig /* ... other d.Set(...) logic ... */ -tags, err := ListTags(ctx, conn, arn.String()) +tags, err := listTags(ctx, conn, arn.String()) if err != nil { return fmt.Errorf("listing tags for resource (%s): %w", arn, err) @@ -389,7 +389,7 @@ In the resource `Update` operation, implement the logic to handle tagging update ```go if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(ctx, conn, d.Get("arn").(string), o, n); err != nil { + if err := updateTags(ctx, conn, d.Get("arn").(string), o, n); err != nil { return fmt.Errorf("updating tags: %w", err) } } diff --git a/docs/running-and-writing-acceptance-tests.md b/docs/running-and-writing-acceptance-tests.md index 59a96d8ff6a..fb1667464c4 100644 --- a/docs/running-and-writing-acceptance-tests.md +++ b/docs/running-and-writing-acceptance-tests.md @@ -252,7 +252,7 @@ When executing the test, the following steps are taken for each `TestStep`: return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.GetDashboardInput{ DashboardName: aws.String(rs.Primary.ID), } @@ -289,7 +289,7 @@ When executing the test, the following steps are taken for each 
`TestStep`: ```go func testAccCheckDashboardDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_dashboard" { @@ -628,7 +628,7 @@ func TestAccExampleThing_basic(t *testing.T) { } func testAccPreCheckExample(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ExampleConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ExampleConn(ctx) input := &example.ListThingsInput{} _, err := conn.ListThingsWithContext(ctx, input) if testAccPreCheckSkipError(err) { @@ -1123,13 +1123,13 @@ Then add the actual implementation. Preferably, if a paginated SDK call is avail ```go func sweepThings(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).ExampleConn() + conn := client.ExampleConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -1150,14 +1150,13 @@ func sweepThings(region string) error { // Perform resource specific pre-sweep setup. // For example, you may need to perform one or more of these types of pre-sweep tasks, specific to the resource: // - // err := sweep.ReadResource(ctx, r, d, client) // fill in data + // err := sdk.ReadResource(ctx, r, d, client) // fill in data // d.Set("skip_final_snapshot", true) // set an argument in order to delete // This "if" is only needed if the pre-sweep setup can produce errors. // Otherwise, do not include it. 
if err != nil { err := fmt.Errorf("reading Example Thing (%s): %w", id, err) - log.Printf("[ERROR] %s", err) errs = multierror.Append(errs, err) continue } @@ -1168,35 +1167,36 @@ func sweepThings(region string) error { return !lastPage }) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Example Thing sweep for %s: %s", region, errs) + return nil + } if err != nil { - errs = multierror.Append(errs, fmt.Errorf("listing Example Thing for %s: %w", region, err)) + errs = multierror.Append(errs, fmt.Errorf("listing Example Things for %s: %w", region, err)) } if err := sweep.SweepOrchestrator(sweepResources); err != nil { - errs = multierror.Append(errs, fmt.Errorf("sweeping Example Thing for %s: %w", region, err)) - } - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Example Thing sweep for %s: %s", region, errs) - return nil + errs = multierror.Append(errs, fmt.Errorf("sweeping Example Things for %s: %w", region, err)) } return errs.ErrorOrNil() } ``` -Otherwise, if no paginated SDK call is available: +If no paginated SDK call is available, +consider generating one using the [`listpages` generator](https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/generate/listpages/README.md), +or implement the sweeper as follows: ```go func sweepThings(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).ExampleConn() + conn := client.ExampleConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -1204,6 +1204,14 @@ func sweepThings(region string) error { for { output, err := conn.ListThings(input) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Example Thing sweep for %s: %s", region, errs) + return nil + } + if err != nil { + errs = multierror.Append(errs, fmt.Errorf("listing 
Example Things for %s: %w", region, err)) + return errs.ErrorOrNil() + } for _, thing := range output.Things { r := ResourceThing() @@ -1215,14 +1223,13 @@ func sweepThings(region string) error { // Perform resource specific pre-sweep setup. // For example, you may need to perform one or more of these types of pre-sweep tasks, specific to the resource: // - // err := sweep.ReadResource(ctx, r, d, client) // fill in data + // err := sdk.ReadResource(ctx, r, d, client) // fill in data // d.Set("skip_final_snapshot", true) // set an argument in order to delete // This "if" is only needed if the pre-sweep setup can produce errors. // Otherwise, do not include it. if err != nil { err := fmt.Errorf("reading Example Thing (%s): %w", id, err) - log.Printf("[ERROR] %s", err) errs = multierror.Append(errs, err) continue } @@ -1241,11 +1248,6 @@ func sweepThings(region string) error { errs = multierror.Append(errs, fmt.Errorf("sweeping Example Thing for %s: %w", region, err)) } - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Example Thing sweep for %s: %s", region, errs) - return nil - } - return errs.ErrorOrNil() } ``` diff --git a/docs/service-package-pullrequest-guide.md b/docs/service-package-pullrequest-guide.md deleted file mode 100644 index 9adc4be3ed9..00000000000 --- a/docs/service-package-pullrequest-guide.md +++ /dev/null @@ -1,176 +0,0 @@ -# Service Package Refactor Pull Request Guide - -Pull request -[#21306](https://github.com/hashicorp/terraform-provider-aws/pull/21306) has -significantly refactored the AWS provider codebase. Specifically, the code for -all AWS resources and data sources has been relocated from a single `aws` -directory to a large number of separate directories in `internal/service`, each -corresponding to a particular AWS service. 
In addition to vastly simplifying -the codebase's overall structure, this change has also allowed us to simplify -the names of a number of underlying functions -- without encountering namespace -collisions. Issue -[#20000](https://github.com/hashicorp/terraform-provider-aws/issues/20000) -contains a more complete description of these changes. - -As a result, nearly every pull request opened prior to the refactoring has merge -conflicts; they are attempting to apply changes to files that have since been -relocated. Furthermore, any new files or functions introduced must be brought -into line with the codebase's new conventions. The following steps are intended -to resolve such a conflict -- though it should be noted that this guide is an -active work in progress as additional pull requests are amended. - -These fixes, however, *in no way affect the prioritization* of a particular -pull request. Once a pull request has been selected for review, the necessary -changes will be made by a maintainer -- either directly or in collaboration -with the pull request author. - -## Fixing a Pre-Refactor Pull Request - -1. `git checkout` the branch pertaining to the pull request you wish to amend - -1. Begin a merge of the latest version of `main` branch into your local branch: - `git pull origin main`. Merge conflicts are expected. - -1. 
For any **new file**, rename and move the file to its appropriate service - package directory: - - **Resource Files** - - ``` - git mv aws/resource_aws_{service_name}_{resource_name}.go \ - internal/service/{service_name}/{resource_name}.go - ``` - - **Resource Test Files** - - ``` - git mv aws/resource_aws_{service_name}_{resource_name}_test.go \ - internal/service/{service_name}/{resource_name}_test.go - ``` - - **Data Source Files** - - ``` - git mv aws/data_source_aws_{service_name}_{resource_name}.go \ - internal/service/{service_name}/{resource_name}_data_source.go - ``` - - **Data Source Test Files** - - ``` - git mv aws/data_source_aws_{service_name}_{resource_name}_test.go \ - internal/service/{service_name}/{resource_name}_data_source_test.go - ``` - -1. For any new **function**, rename the function appropriately: - - **Resource Schema Functions** - - ``` - func resourceAws{ResourceName}() => - func Resource{ResourceName}() - ``` - - **Resource Generic Functions** - - ``` - func resourceAws{ServiceName}{ResourceName}{FunctionName}() => - func resource{ResourceName}{FunctionName}() - ``` - - **Resource Acceptance Test Functions** - - ``` - func TestAccAWS{ServiceName}{ResourceName}_{testType}() => - func TestAcc{ResourceName}_{testType}() - ``` - - **Data Source Schema Functions** - - ``` - func dataSourceAws{ResourceName}() => - func DataSource{ResourceName}() - ``` - - **Data Source Generic Functions** - - ``` - func dataSourceAws{ServiceName}{ResourceName}{FunctionName}() => - func dataSource{ResourceName}{FunctionName}() - ``` - - **Data Source Acceptance Test Functions** - - ``` - func TestAccDataSourceAWS{ServiceName}{ResourceName}_{testType}() => - func TestAcc{ResourceName}DataSource_{testType}() - ``` - - **Finder Functions** - - ``` - func finder.{FunctionName}() => - func Find{FunctionName}() - ``` - - **Status Functions** - - ``` - func waiter.{FunctionName}Status() => - func status{FunctionName}() - ``` - - **Waiter Functions** - - ``` - func 
waiter.{FunctionName}() => - func wait{FunctionName}() - ``` - -1. If a file has a package declaration of `package aws`, you will need to change - it to the new package location. For example, if you moved a file to `internal/service/ecs`, - the declaration will now be `package ecs`. - - Any file that imports `"github.com/hashicorp/terraform-provider-aws/internal/acctest"` _must_ - be in the `_test` package. For example, `internal/service/ecs/account_setting_default_test.go` - does import the `acctest` package and must have a package declaration of `package ecs_test`. - -1. If you have made any changes to `aws/provider.go`, you will have to manually - re-enact those changes on the new `internal/provider/provider.go` file. - - Most commonly, these changes involve the addition of an entry to either the - `DataSourcesMap` or `ResourcesMap`. If this is the case for your PR, you will have - to adapt your entry to follow our new code conventions. - - **Resources Map Entries** - - ``` - "{aws_terraform_resource_type}": resourceAws{ServiceName}{ResourceName}(), => - "{aws_terraform_resource_type}": {serviceName}.Resource{ResourceName}(), - ``` - - **Data Source Map Entries** - - ``` - "{aws_terraform_data_source_type}": dataSourceAws{ServiceName}{ResourceName}(), => - "{aws_terraform_data_source_type}": {serviceName}.DataSource{ResourceName}(), - ``` - -1. Some functions, constants, and variables have been moved, removed, or renamed. - This table shows some of the common changes you may need to make to fix compile errors. 
- - | Before | Now | - | --- | --- | - | `isAWSErr(α, β, "")` | `tfawserr.ErrMessageContains(α, β, "")` | - | `isAWSErr(α, β, "")` | `tfawserr.ErrCodeEquals(α, β)` | - | `isResourceNotFoundError(α)` | `tfresource.NotFound(α)` | - | `isResourceTimeoutError(α)` | `tfresource.TimedOut(α)` | - | `testSweepSkipResourceError(α)` | `tfawserr.ErrCodeContains(α, "AccessDenied")` | - | `testAccPreCheck(t)` | `acctest.PreCheck(ctx, t)` | - | `testAccProviders` | `acctest.Providers` | - | `acctest.RandomWithPrefix("tf-acc-test")` | `sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)` | - | `composeConfig(α)` | `acctest.ConfigCompose(α)` | - -1. Use `git status` to report the state of the merge. Review any merge - conflicts -- being sure to adopt the new naming conventions described in the - previous step where relevant. Use `git add` to add any new files to the commit. diff --git a/docs/skaff.md b/docs/skaff.md index fd92b878a97..c0f56e1c20b 100644 --- a/docs/skaff.md +++ b/docs/skaff.md @@ -7,6 +7,7 @@ 1. Figure out what you're trying to do: * Create a resource or a data source? * [AWS Go SDK v1 or v2](aws-go-sdk-versions.md) code? + * [Terraform Plugin Framework or Plugin SDKv2](terraform-plugin-versions.md) based? * [Name](naming.md) of the new resource or data source? 2. Use `skaff` to generate provider code 3. 
Go through the generated code completing code and customizing for the AWS Go SDK API @@ -73,12 +74,13 @@ Usage: skaff datasource [flags] Flags: - -c, --clear-comments Do not include instructional comments in source - -f, --force Force creation, overwriting existing files + -c, --clear-comments do not include instructional comments in source + -f, --force force creation, overwriting existing files -h, --help help for datasource - -n, --name string Name of the entity - -s, --snakename string If skaff doesn't get it right, explicitly give name in snake case (e.g., db_vpc_instance) - -o, --v1 Generate code targeting aws-sdk-go v1 (some existing services) + -n, --name string name of the entity + -p, --plugin-framework generate for Terraform Plugin-Framework + -s, --snakename string if skaff doesn't get it right, explicitly give name in snake case (e.g., db_vpc_instance) + -o, --v1 generate for AWS Go SDK v1 (some existing services) ``` ### Resource @@ -91,10 +93,11 @@ Usage: skaff resource [flags] Flags: - -c, --clear-comments Do not include instructional comments in source - -f, --force Force creation, overwriting existing files + -c, --clear-comments do not include instructional comments in source + -f, --force force creation, overwriting existing files -h, --help help for resource - -n, --name string Name of the entity - -s, --snakename string If skaff doesn't get it right, explicitly give name in snake case (e.g., db_vpc_instance) - -o, --v1 Generate code targeting aws-sdk-go v1 (some existing services) + -n, --name string name of the entity + -p, --plugin-framework generate for Terraform Plugin-Framework + -s, --snakename string if skaff doesn't get it right, explicitly give name in snake case (e.g., db_vpc_instance) + -o, --v1 generate for AWS Go SDK v1 (some existing services) ``` diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css index 5499fe3f073..1f666ea4d75 100644 --- a/docs/stylesheets/extra.css +++ b/docs/stylesheets/extra.css @@ -1,3 +1,8 
@@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + .md-footer__inner:not([hidden]) { display: none } \ No newline at end of file diff --git a/docs/terraform-plugin-versions.md b/docs/terraform-plugin-versions.md new file mode 100644 index 00000000000..c3e7f679024 --- /dev/null +++ b/docs/terraform-plugin-versions.md @@ -0,0 +1,9 @@ +# Terraform Plugin Versions + +The Terraform AWS Provider is constructed with HashiCorp-maintained packages for building plugins. Most existing resources are implemented with [Terraform Plugin SDKv2](https://developer.hashicorp.com/terraform/plugin/sdkv2), while newer resources may use [Terraform Plugin Framework](https://developer.hashicorp.com/terraform/plugin/framework). A thorough comparison of the packages can be found [here](https://developer.hashicorp.com/terraform/plugin/framework-benefits). + +At this time, community contributions in either package will be accepted. The AWS Provider is [muxed](https://developer.hashicorp.com/terraform/plugin/framework/migrating/mux) to allow resources and data sources implemented in both packages. As AWS Provider tooling around Plugin Framework (and the library itself) matures, we will begin requiring all net-new resources be implemented with it. [`skaff`](skaff.md) currently supports generating Plugin Framework-based resources using the optional `-p`/`--plugin-framework` flag. Factors to consider when choosing between packages are: + +1. What other resources in a given service use +2. Level of comfort with the new idioms introduced in Plugin Framework +3. [Advantages](https://developer.hashicorp.com/terraform/plugin/framework-benefits#plugin-framework-benefits) Plugin Framework may afford over Plugin SDKv2 (improved null handling, plan modifications, etc.) diff --git a/examples/alexa/main.tf b/examples/alexa/main.tf index 71ce061b0b5..5c86d4d079c 100644 --- a/examples/alexa/main.tf +++ b/examples/alexa/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/alexa/outputs.tf b/examples/alexa/outputs.tf index 6df79e495fa..4fee8c59ccf 100644 --- a/examples/alexa/outputs.tf +++ b/examples/alexa/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "aws_lambda_function_arn" { value = aws_lambda_function.default.arn } diff --git a/examples/alexa/variables.tf b/examples/alexa/variables.tf index 78938923066..cc7ed572e0e 100644 --- a/examples/alexa/variables.tf +++ b/examples/alexa/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-east-1" diff --git a/examples/api-gateway-rest-api-openapi/domain.tf b/examples/api-gateway-rest-api-openapi/domain.tf index ef56e11d8f6..6c00b2d9b78 100644 --- a/examples/api-gateway-rest-api-openapi/domain.tf +++ b/examples/api-gateway-rest-api-openapi/domain.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # # Domain Setup # diff --git a/examples/api-gateway-rest-api-openapi/main.tf b/examples/api-gateway-rest-api-openapi/main.tf index 1b8857c5dc8..5ce07f02740 100644 --- a/examples/api-gateway-rest-api-openapi/main.tf +++ b/examples/api-gateway-rest-api-openapi/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/api-gateway-rest-api-openapi/outputs.tf b/examples/api-gateway-rest-api-openapi/outputs.tf index a144a5b4c11..eb4751cd5a7 100644 --- a/examples/api-gateway-rest-api-openapi/outputs.tf +++ b/examples/api-gateway-rest-api-openapi/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + # # Outputs # diff --git a/examples/api-gateway-rest-api-openapi/rest-api.tf b/examples/api-gateway-rest-api-openapi/rest-api.tf index 9d73af6e35a..94130fa078a 100644 --- a/examples/api-gateway-rest-api-openapi/rest-api.tf +++ b/examples/api-gateway-rest-api-openapi/rest-api.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + resource "aws_api_gateway_rest_api" "example" { body = jsonencode({ openapi = "3.0.1" diff --git a/examples/api-gateway-rest-api-openapi/stage.tf b/examples/api-gateway-rest-api-openapi/stage.tf index aa5e5a83f57..637ca3b7290 100644 --- a/examples/api-gateway-rest-api-openapi/stage.tf +++ b/examples/api-gateway-rest-api-openapi/stage.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # # Stage and Stage Settings # diff --git a/examples/api-gateway-rest-api-openapi/terraform.template.tfvars b/examples/api-gateway-rest-api-openapi/terraform.template.tfvars index 897df7521d4..6eb16abd87a 100644 --- a/examples/api-gateway-rest-api-openapi/terraform.template.tfvars +++ b/examples/api-gateway-rest-api-openapi/terraform.template.tfvars @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + aws_region = "us-west-2" rest_api_domain_name = "example.com" rest_api_name = "api-gateway-rest-api-openapi-example" diff --git a/examples/api-gateway-rest-api-openapi/tls.tf b/examples/api-gateway-rest-api-openapi/tls.tf index b1e3ee73459..c5dcfbef1e7 100644 --- a/examples/api-gateway-rest-api-openapi/tls.tf +++ b/examples/api-gateway-rest-api-openapi/tls.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + # # Self-Signed TLS Certificate for Testing # diff --git a/examples/api-gateway-rest-api-openapi/variables.tf b/examples/api-gateway-rest-api-openapi/variables.tf index 1a47ff7f712..608e49fee75 100644 --- a/examples/api-gateway-rest-api-openapi/variables.tf +++ b/examples/api-gateway-rest-api-openapi/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { default = "us-west-2" description = "AWS Region to deploy example API Gateway REST API" diff --git a/examples/api-gateway-websocket-chat-app/main.tf b/examples/api-gateway-websocket-chat-app/main.tf index d8bd0904263..bb11bb3d1b4 100644 --- a/examples/api-gateway-websocket-chat-app/main.tf +++ b/examples/api-gateway-websocket-chat-app/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # # Terraform configuration for simple-websockets-chat-app that has the DynamoDB table and # Lambda functions needed to demonstrate the Websocket protocol on API Gateway. diff --git a/examples/api-gateway-websocket-chat-app/terraform.template.tfvars b/examples/api-gateway-websocket-chat-app/terraform.template.tfvars index bb891daf459..a80912583cf 100644 --- a/examples/api-gateway-websocket-chat-app/terraform.template.tfvars +++ b/examples/api-gateway-websocket-chat-app/terraform.template.tfvars @@ -1 +1,4 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + aws_region = "us-east-1" diff --git a/examples/api-gateway-websocket-chat-app/variables.tf b/examples/api-gateway-websocket-chat-app/variables.tf index e4f8705a6ac..d5d8f13b4f1 100644 --- a/examples/api-gateway-websocket-chat-app/variables.tf +++ b/examples/api-gateway-websocket-chat-app/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "AWS region" default = "us-west-2" diff --git a/examples/asg/main.tf b/examples/asg/main.tf index 58db7486c4c..6fa6e49e072 100644 --- a/examples/asg/main.tf +++ b/examples/asg/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/asg/outputs.tf b/examples/asg/outputs.tf index 0f5acfd9d53..2456734d556 100644 --- a/examples/asg/outputs.tf +++ b/examples/asg/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "security_group" { value = aws_security_group.default.id } diff --git a/examples/asg/terraform.template.tfvars b/examples/asg/terraform.template.tfvars index ac1a6a47e57..a251042dc11 100644 --- a/examples/asg/terraform.template.tfvars +++ b/examples/asg/terraform.template.tfvars @@ -1 +1,4 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + key_name = "terraform-aws-provider-example" diff --git a/examples/asg/userdata.sh b/examples/asg/userdata.sh index 77e340b3abf..586a583cbe6 100644 --- a/examples/asg/userdata.sh +++ b/examples/asg/userdata.sh @@ -1,3 +1,6 @@ #!/bin/bash -v +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + apt-get update -y apt-get install -y nginx > /tmp/nginx.log diff --git a/examples/asg/variables.tf b/examples/asg/variables.tf index d45c10d2524..af40e9392bc 100644 --- a/examples/asg/variables.tf +++ b/examples/asg/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-east-1" diff --git a/examples/cleanrooms/main.tf b/examples/cleanrooms/main.tf new file mode 100644 index 00000000000..27547df250e --- /dev/null +++ b/examples/cleanrooms/main.tf @@ -0,0 +1,36 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + +terraform { + required_version = ">= 0.12" +} + +provider "aws" { + region = "us-east-1" +} + +resource "aws_cleanrooms_collaboration" "test_collab" { + name = "terraform-example-collaboration" + creator_member_abilities = ["CAN_QUERY", "CAN_RECEIVE_RESULTS"] + creator_display_name = "Creator " + description = "I made this collaboration with terraform!" + query_log_status = "DISABLED" + + data_encryption_metadata { + allow_clear_text = true + allow_duplicates = true + allow_joins_on_columns_with_different_names = true + preserve_nulls = false + } + + member { + account_id = 123456789012 + display_name = "Other member" + member_abilities = [] + } + + tags = { + Project = "Terraform" + } + +} diff --git a/examples/cloudhsm/main.tf b/examples/cloudhsm/main.tf index 9887b008441..09971d11b23 100644 --- a/examples/cloudhsm/main.tf +++ b/examples/cloudhsm/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/cloudhsm/outputs.tf b/examples/cloudhsm/outputs.tf index a3490dc600d..dd07a1b57e5 100644 --- a/examples/cloudhsm/outputs.tf +++ b/examples/cloudhsm/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "hsm_ip_address" { value = aws_cloudhsm_v2_hsm.cloudhsm_v2_hsm.ip_address } diff --git a/examples/cloudhsm/variables.tf b/examples/cloudhsm/variables.tf index 74e618c20cf..28180c92919 100644 --- a/examples/cloudhsm/variables.tf +++ b/examples/cloudhsm/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "AWS region to launch cloudHSM cluster." 
default = "eu-west-1" diff --git a/examples/cognito-user-pool/main.tf b/examples/cognito-user-pool/main.tf index c03439368cf..c3d6b2bff66 100644 --- a/examples/cognito-user-pool/main.tf +++ b/examples/cognito-user-pool/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/cognito-user-pool/variables.tf b/examples/cognito-user-pool/variables.tf index 8478c23b1a0..58de427606d 100644 --- a/examples/cognito-user-pool/variables.tf +++ b/examples/cognito-user-pool/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { default = "us-west-2" } diff --git a/examples/count/main.tf b/examples/count/main.tf index 7b24636313c..3c6fbcd72c1 100644 --- a/examples/count/main.tf +++ b/examples/count/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/count/outputs.tf b/examples/count/outputs.tf index 8e1fa6cf08b..0561b14057f 100644 --- a/examples/count/outputs.tf +++ b/examples/count/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "address" { value = "Instances: ${element(aws_instance.web[*].id, 0)}" } diff --git a/examples/count/variables.tf b/examples/count/variables.tf index 3cb51b2e1b3..c0503d62e1c 100644 --- a/examples/count/variables.tf +++ b/examples/count/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." 
default = "us-west-2" diff --git a/examples/dx-gateway-cross-account-vgw-association/main.tf b/examples/dx-gateway-cross-account-vgw-association/main.tf index b5b6a84adde..ec1759fa020 100644 --- a/examples/dx-gateway-cross-account-vgw-association/main.tf +++ b/examples/dx-gateway-cross-account-vgw-association/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/dx-gateway-cross-account-vgw-association/terraform.template.tfvars b/examples/dx-gateway-cross-account-vgw-association/terraform.template.tfvars index 7c099c2e978..ce85942a8da 100644 --- a/examples/dx-gateway-cross-account-vgw-association/terraform.template.tfvars +++ b/examples/dx-gateway-cross-account-vgw-association/terraform.template.tfvars @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # First account aws_first_access_key = "AAAAAAAAAAAAAAAAAAA" aws_first_secret_key = "SuperSecretKeyForAccount1" diff --git a/examples/dx-gateway-cross-account-vgw-association/variables.tf b/examples/dx-gateway-cross-account-vgw-association/variables.tf index 5f1601468ae..a3fb1c2ab27 100644 --- a/examples/dx-gateway-cross-account-vgw-association/variables.tf +++ b/examples/dx-gateway-cross-account-vgw-association/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_first_access_key" {} variable "aws_first_secret_key" {} diff --git a/examples/ecs-alb/cloud-config.yml b/examples/ecs-alb/cloud-config.yml index 002295a4f6c..db7411c8cbd 100644 --- a/examples/ecs-alb/cloud-config.yml +++ b/examples/ecs-alb/cloud-config.yml @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + # cloud-config coreos: units: diff --git a/examples/ecs-alb/main.tf b/examples/ecs-alb/main.tf index 18d03b8c7d9..749e655ec6d 100644 --- a/examples/ecs-alb/main.tf +++ b/examples/ecs-alb/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/ecs-alb/outputs.tf b/examples/ecs-alb/outputs.tf index 980e04a4101..5c66510465f 100644 --- a/examples/ecs-alb/outputs.tf +++ b/examples/ecs-alb/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "instance_security_group" { value = aws_security_group.instance_sg.id } diff --git a/examples/ecs-alb/terraform.template.tfvars b/examples/ecs-alb/terraform.template.tfvars index 78c37be0e45..652d82221da 100644 --- a/examples/ecs-alb/terraform.template.tfvars +++ b/examples/ecs-alb/terraform.template.tfvars @@ -1,2 +1,5 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + admin_cidr_ingress = "1.2.3.4/32" key_name = "terraform-aws-provider-example" diff --git a/examples/ecs-alb/variables.tf b/examples/ecs-alb/variables.tf index 92aef28f207..db4d9929e27 100644 --- a/examples/ecs-alb/variables.tf +++ b/examples/ecs-alb/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-west-2" diff --git a/examples/eip/main.tf b/examples/eip/main.tf index e26728b3249..b494c52d72c 100644 --- a/examples/eip/main.tf +++ b/examples/eip/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } @@ -8,7 +11,7 @@ provider "aws" { resource "aws_eip" "default" { instance = aws_instance.web.id - vpc = true + domain = "vpc" } # Our default security group to access diff --git a/examples/eip/outputs.tf b/examples/eip/outputs.tf index d80cbbadd44..dd00e1181de 100644 --- a/examples/eip/outputs.tf +++ b/examples/eip/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "address" { value = aws_instance.web.private_ip } diff --git a/examples/eip/terraform.template.tfvars b/examples/eip/terraform.template.tfvars index ac1a6a47e57..a251042dc11 100644 --- a/examples/eip/terraform.template.tfvars +++ b/examples/eip/terraform.template.tfvars @@ -1 +1,4 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + key_name = "terraform-aws-provider-example" diff --git a/examples/eip/userdata.sh b/examples/eip/userdata.sh index 8a058c78617..670a272dbeb 100644 --- a/examples/eip/userdata.sh +++ b/examples/eip/userdata.sh @@ -1,4 +1,7 @@ #!/bin/bash -v +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + apt-get update -y apt-get install -y nginx > /tmp/nginx.log diff --git a/examples/eip/variables.tf b/examples/eip/variables.tf index 400301d5fc8..2d3131bfa1b 100644 --- a/examples/eip/variables.tf +++ b/examples/eip/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-east-1" diff --git a/examples/eks-getting-started/eks-cluster.tf b/examples/eks-getting-started/eks-cluster.tf index 36ba135b07c..0cb83e6afc1 100644 --- a/examples/eks-getting-started/eks-cluster.tf +++ b/examples/eks-getting-started/eks-cluster.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + # # EKS Cluster Resources # * IAM Role to allow EKS service to manage other AWS services diff --git a/examples/eks-getting-started/eks-worker-nodes.tf b/examples/eks-getting-started/eks-worker-nodes.tf index 7121d198338..c437d0ca652 100644 --- a/examples/eks-getting-started/eks-worker-nodes.tf +++ b/examples/eks-getting-started/eks-worker-nodes.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # # EKS Worker Nodes Resources # * IAM role allowing Kubernetes actions to access other AWS services diff --git a/examples/eks-getting-started/outputs.tf b/examples/eks-getting-started/outputs.tf index f3889268554..ab762dccb3e 100644 --- a/examples/eks-getting-started/outputs.tf +++ b/examples/eks-getting-started/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # # Outputs # diff --git a/examples/eks-getting-started/providers.tf b/examples/eks-getting-started/providers.tf index 5620f4f2eab..24be793dd5d 100644 --- a/examples/eks-getting-started/providers.tf +++ b/examples/eks-getting-started/providers.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/eks-getting-started/variables.tf b/examples/eks-getting-started/variables.tf index 265111500a5..c1a9225265a 100644 --- a/examples/eks-getting-started/variables.tf +++ b/examples/eks-getting-started/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { default = "us-west-2" } diff --git a/examples/eks-getting-started/vpc.tf b/examples/eks-getting-started/vpc.tf index 99e26b7fa47..84a6d094cf2 100644 --- a/examples/eks-getting-started/vpc.tf +++ b/examples/eks-getting-started/vpc.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + # # VPC Resources # * VPC diff --git a/examples/eks-getting-started/workstation-external-ip.tf b/examples/eks-getting-started/workstation-external-ip.tf index 9cf0644ad0a..608e6dfbcd7 100644 --- a/examples/eks-getting-started/workstation-external-ip.tf +++ b/examples/eks-getting-started/workstation-external-ip.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # # Workstation External IP # diff --git a/examples/elasticsearch-domain/main.tf b/examples/elasticsearch-domain/main.tf index b7149638350..d300c74430a 100644 --- a/examples/elasticsearch-domain/main.tf +++ b/examples/elasticsearch-domain/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/elasticsearch-domain/variables.tf b/examples/elasticsearch-domain/variables.tf index c7ec1f2bfab..9c7027c31fa 100644 --- a/examples/elasticsearch-domain/variables.tf +++ b/examples/elasticsearch-domain/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { default = "us-west-2" } diff --git a/examples/elb/main.tf b/examples/elb/main.tf index 37da5121af2..5c4f8e4cd65 100644 --- a/examples/elb/main.tf +++ b/examples/elb/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/elb/outputs.tf b/examples/elb/outputs.tf index 2c535bb7283..b57b1a52a60 100644 --- a/examples/elb/outputs.tf +++ b/examples/elb/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + output "address" { value = aws_elb.web.dns_name } diff --git a/examples/elb/terraform.template.tfvars b/examples/elb/terraform.template.tfvars index ac1a6a47e57..a251042dc11 100644 --- a/examples/elb/terraform.template.tfvars +++ b/examples/elb/terraform.template.tfvars @@ -1 +1,4 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + key_name = "terraform-aws-provider-example" diff --git a/examples/elb/userdata.sh b/examples/elb/userdata.sh index 8a058c78617..670a272dbeb 100644 --- a/examples/elb/userdata.sh +++ b/examples/elb/userdata.sh @@ -1,4 +1,7 @@ #!/bin/bash -v +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + apt-get update -y apt-get install -y nginx > /tmp/nginx.log diff --git a/examples/elb/variables.tf b/examples/elb/variables.tf index 2f9175fc845..6ea74b1599d 100644 --- a/examples/elb/variables.tf +++ b/examples/elb/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "key_name" { description = "Name of the SSH keypair to use in AWS." } diff --git a/examples/events/kinesis/main.tf b/examples/events/kinesis/main.tf index 93b171e5b86..0886745d24e 100644 --- a/examples/events/kinesis/main.tf +++ b/examples/events/kinesis/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/events/kinesis/outputs.tf b/examples/events/kinesis/outputs.tf index 81bcbf4fec3..280a35a919d 100644 --- a/examples/events/kinesis/outputs.tf +++ b/examples/events/kinesis/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + output "rule_arn" { value = aws_cloudwatch_event_rule.foo.arn } diff --git a/examples/events/kinesis/variables.tf b/examples/events/kinesis/variables.tf index 8ec330ad0aa..58efda3c542 100644 --- a/examples/events/kinesis/variables.tf +++ b/examples/events/kinesis/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create resources in." default = "us-east-1" diff --git a/examples/events/sns/main.tf b/examples/events/sns/main.tf index 2d93090b651..a9c8f1003de 100644 --- a/examples/events/sns/main.tf +++ b/examples/events/sns/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/events/sns/outputs.tf b/examples/events/sns/outputs.tf index 7b92f327847..abfa0b78f52 100644 --- a/examples/events/sns/outputs.tf +++ b/examples/events/sns/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "rule_arn" { value = aws_cloudwatch_event_rule.foo.arn } diff --git a/examples/events/sns/variables.tf b/examples/events/sns/variables.tf index 55708304946..5db0ec59bc0 100644 --- a/examples/events/sns/variables.tf +++ b/examples/events/sns/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create resources in." default = "us-east-1" diff --git a/examples/ivs/main.tf b/examples/ivs/main.tf index e345afe8832..99bfae6508a 100644 --- a/examples/ivs/main.tf +++ b/examples/ivs/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/ivs/outputs.tf b/examples/ivs/outputs.tf index 77c08baa0df..41d0f80390f 100644 --- a/examples/ivs/outputs.tf +++ b/examples/ivs/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "ingest_endpoint" { value = aws_ivs_channel.example.ingest_endpoint } diff --git a/examples/ivs/variables.tf b/examples/ivs/variables.tf index 3cb51b2e1b3..c0503d62e1c 100644 --- a/examples/ivs/variables.tf +++ b/examples/ivs/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-west-2" diff --git a/examples/ivschat/index.js b/examples/ivschat/index.js index 424368493cf..199095006ed 100644 --- a/examples/ivschat/index.js +++ b/examples/ivschat/index.js @@ -1,3 +1,8 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + /** IVS Chat message review handler */ exports.handler = async function ({ Content }) { return { diff --git a/examples/ivschat/main.tf b/examples/ivschat/main.tf index 2fb971f39ca..4c84a94b705 100644 --- a/examples/ivschat/main.tf +++ b/examples/ivschat/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/ivschat/outputs.tf b/examples/ivschat/outputs.tf index 46fe6da522f..6e667a8b02b 100644 --- a/examples/ivschat/outputs.tf +++ b/examples/ivschat/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "room" { value = aws_ivschat_room.example.arn } diff --git a/examples/ivschat/variables.tf b/examples/ivschat/variables.tf index 3cb51b2e1b3..c0503d62e1c 100644 --- a/examples/ivschat/variables.tf +++ b/examples/ivschat/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-west-2" diff --git a/examples/lambda-file-systems/hello_lambda.py b/examples/lambda-file-systems/hello_lambda.py index 5c28c83973f..4cbc976a94e 100644 --- a/examples/lambda-file-systems/hello_lambda.py +++ b/examples/lambda-file-systems/hello_lambda.py @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + from datetime import datetime diff --git a/examples/lambda-file-systems/main.tf b/examples/lambda-file-systems/main.tf index cc1f7ee7be1..2bb19f69952 100644 --- a/examples/lambda-file-systems/main.tf +++ b/examples/lambda-file-systems/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/lambda-file-systems/outputs.tf b/examples/lambda-file-systems/outputs.tf index 09bd3465c66..d1f4009e592 100644 --- a/examples/lambda-file-systems/outputs.tf +++ b/examples/lambda-file-systems/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "lambda" { value = aws_lambda_function.example_lambda.qualified_arn } diff --git a/examples/lambda-file-systems/variables.tf b/examples/lambda-file-systems/variables.tf index 78938923066..cc7ed572e0e 100644 --- a/examples/lambda-file-systems/variables.tf +++ b/examples/lambda-file-systems/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-east-1" diff --git a/examples/lambda/hello_lambda.py b/examples/lambda/hello_lambda.py index 3c3b6bf9a23..14c25cb1dc3 100644 --- a/examples/lambda/hello_lambda.py +++ b/examples/lambda/hello_lambda.py @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + import os def lambda_handler(event, context): diff --git a/examples/lambda/main.tf b/examples/lambda/main.tf index fd42442715d..7f71cbba226 100644 --- a/examples/lambda/main.tf +++ b/examples/lambda/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/lambda/outputs.tf b/examples/lambda/outputs.tf index c743dc353c8..be68094c8ae 100644 --- a/examples/lambda/outputs.tf +++ b/examples/lambda/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "lambda" { value = aws_lambda_function.lambda.qualified_arn } diff --git a/examples/lambda/variables.tf b/examples/lambda/variables.tf index 78938923066..cc7ed572e0e 100644 --- a/examples/lambda/variables.tf +++ b/examples/lambda/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { description = "The AWS region to create things in." default = "us-east-1" diff --git a/examples/networking/region/outputs.tf b/examples/networking/region/outputs.tf index 22afed18e26..d84d3f7dd02 100644 --- a/examples/networking/region/outputs.tf +++ b/examples/networking/region/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "vpc_id" { value = aws_vpc.main.id } diff --git a/examples/networking/region/security_group.tf b/examples/networking/region/security_group.tf index 5221798506d..c66092f01cf 100644 --- a/examples/networking/region/security_group.tf +++ b/examples/networking/region/security_group.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + resource "aws_security_group" "region" { name = "region" description = "Open access within this region" diff --git a/examples/networking/region/subnets.tf b/examples/networking/region/subnets.tf index 78a3064d413..e6583c87ac6 100644 --- a/examples/networking/region/subnets.tf +++ b/examples/networking/region/subnets.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + data "aws_availability_zones" "all" {} module "primary_subnet" { diff --git a/examples/networking/region/terraform.template.tfvars b/examples/networking/region/terraform.template.tfvars index 9304a92ab0b..07a1f0f6019 100644 --- a/examples/networking/region/terraform.template.tfvars +++ b/examples/networking/region/terraform.template.tfvars @@ -1,2 +1,5 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + base_cidr_block = "10.0.0.0/16" region = "us-west-2" diff --git a/examples/networking/region/variables.tf b/examples/networking/region/variables.tf index a115c628277..8f7bf6b4609 100644 --- a/examples/networking/region/variables.tf +++ b/examples/networking/region/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "region" { description = "The name of the AWS region to set up a network within" } diff --git a/examples/networking/region/versions.tf b/examples/networking/region/versions.tf index ac97c6ac8e7..83878d7b561 100644 --- a/examples/networking/region/versions.tf +++ b/examples/networking/region/versions.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" diff --git a/examples/networking/region/vpc.tf b/examples/networking/region/vpc.tf index 387263d0b2d..4e5df1a7951 100644 --- a/examples/networking/region/vpc.tf +++ b/examples/networking/region/vpc.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + resource "aws_vpc" "main" { cidr_block = cidrsubnet(var.base_cidr_block, 4, var.region_numbers[var.region]) } diff --git a/examples/networking/regions.tf b/examples/networking/regions.tf index ebfbc5a8b18..e2edcf07a86 100644 --- a/examples/networking/regions.tf +++ b/examples/networking/regions.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + module "us-east-1" { source = "./region" region = "us-east-1" diff --git a/examples/networking/subnet/outputs.tf b/examples/networking/subnet/outputs.tf index 087110ab168..8e11e0b688f 100644 --- a/examples/networking/subnet/outputs.tf +++ b/examples/networking/subnet/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "subnet_id" { value = aws_subnet.main.id } diff --git a/examples/networking/subnet/security_group.tf b/examples/networking/subnet/security_group.tf index c6036f57481..2d895881ae5 100644 --- a/examples/networking/subnet/security_group.tf +++ b/examples/networking/subnet/security_group.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + resource "aws_security_group" "az" { name = "az-${data.aws_availability_zone.target.name}" description = "Open access within the AZ ${data.aws_availability_zone.target.name}" diff --git a/examples/networking/subnet/subnet.tf b/examples/networking/subnet/subnet.tf index 9554c99f954..909711f511c 100644 --- a/examples/networking/subnet/subnet.tf +++ b/examples/networking/subnet/subnet.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + resource "aws_subnet" "main" { cidr_block = cidrsubnet(data.aws_vpc.target.cidr_block, 2, var.az_numbers[data.aws_availability_zone.target.name_suffix]) vpc_id = var.vpc_id diff --git a/examples/networking/subnet/terraform.template.tfvars b/examples/networking/subnet/terraform.template.tfvars index 435e6fc8f1c..feaa64238aa 100644 --- a/examples/networking/subnet/terraform.template.tfvars +++ b/examples/networking/subnet/terraform.template.tfvars @@ -1,2 +1,5 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + availability_zone = "us-west-2a" vpc_id = "vpc-12345678" diff --git a/examples/networking/subnet/variables.tf b/examples/networking/subnet/variables.tf index 406a4310b5c..b92ee99a9ff 100644 --- a/examples/networking/subnet/variables.tf +++ b/examples/networking/subnet/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "vpc_id" { } diff --git a/examples/networking/subnet/versions.tf b/examples/networking/subnet/versions.tf index ac97c6ac8e7..83878d7b561 100644 --- a/examples/networking/subnet/versions.tf +++ b/examples/networking/subnet/versions.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" diff --git a/examples/networking/variables.tf b/examples/networking/variables.tf index 7482b1f4c5a..3bbbc9ebbe8 100644 --- a/examples/networking/variables.tf +++ b/examples/networking/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "base_cidr_block" { default = "10.0.0.0/22" } diff --git a/examples/networking/versions.tf b/examples/networking/versions.tf index ac97c6ac8e7..83878d7b561 100644 --- a/examples/networking/versions.tf +++ b/examples/networking/versions.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" diff --git a/examples/rds/main.tf b/examples/rds/main.tf index 09a5590732f..4fa4415dc2b 100644 --- a/examples/rds/main.tf +++ b/examples/rds/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/rds/outputs.tf b/examples/rds/outputs.tf index e8f41d4bdc8..6b245ee4b36 100644 --- a/examples/rds/outputs.tf +++ b/examples/rds/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "subnet_group" { value = aws_db_subnet_group.default.name } diff --git a/examples/rds/sg-variables.tf b/examples/rds/sg-variables.tf index c6db98abb64..5100c585493 100644 --- a/examples/rds/sg-variables.tf +++ b/examples/rds/sg-variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "cidr_blocks" { default = "0.0.0.0/0" description = "CIDR for sg" diff --git a/examples/rds/sg.tf b/examples/rds/sg.tf index 1ddf27500e2..1bfaad8d5eb 100644 --- a/examples/rds/sg.tf +++ b/examples/rds/sg.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + resource "aws_security_group" "default" { name = "main_rds_sg" description = "Allow all inbound traffic" diff --git a/examples/rds/subnet-variables.tf b/examples/rds/subnet-variables.tf index cf9d9b731cf..8001766adb6 100644 --- a/examples/rds/subnet-variables.tf +++ b/examples/rds/subnet-variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "subnet_1_cidr" { default = "10.0.1.0/24" description = "Your AZ" diff --git a/examples/rds/subnets.tf b/examples/rds/subnets.tf index 27e9e5a6c97..6547be8405f 100644 --- a/examples/rds/subnets.tf +++ b/examples/rds/subnets.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + resource "aws_subnet" "subnet_1" { vpc_id = var.vpc_id cidr_block = var.subnet_1_cidr diff --git a/examples/rds/terraform.template.tfvars b/examples/rds/terraform.template.tfvars index bbc90ba75b8..3e2820d50b1 100644 --- a/examples/rds/terraform.template.tfvars +++ b/examples/rds/terraform.template.tfvars @@ -1,2 +1,5 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + password = "neverstorepasswordsinplaintext" vpc_id = "vpc-12345678" diff --git a/examples/rds/variables.tf b/examples/rds/variables.tf index 8bd47d36a7e..4328e303b70 100644 --- a/examples/rds/variables.tf +++ b/examples/rds/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { default = "us-west-2" } diff --git a/examples/s3-api-gateway-integration/main.tf b/examples/s3-api-gateway-integration/main.tf index 557e0f7b938..d65869496e9 100644 --- a/examples/s3-api-gateway-integration/main.tf +++ b/examples/s3-api-gateway-integration/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/s3-api-gateway-integration/terraform.template.tfvars b/examples/s3-api-gateway-integration/terraform.template.tfvars index d078194646d..d67285cd3fe 100644 --- a/examples/s3-api-gateway-integration/terraform.template.tfvars +++ b/examples/s3-api-gateway-integration/terraform.template.tfvars @@ -1 +1,4 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + aws_region = "us-west-2" diff --git a/examples/s3-api-gateway-integration/variables.tf b/examples/s3-api-gateway-integration/variables.tf index 452d6553d45..67883483574 100644 --- a/examples/s3-api-gateway-integration/variables.tf +++ b/examples/s3-api-gateway-integration/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + variable "aws_region" { default = "us-west-2" } diff --git a/examples/s3-cross-account-access/main.tf b/examples/s3-cross-account-access/main.tf index 79182b8ecda..6193914239c 100644 --- a/examples/s3-cross-account-access/main.tf +++ b/examples/s3-cross-account-access/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/s3-cross-account-access/terraform.template.tfvars b/examples/s3-cross-account-access/terraform.template.tfvars index a4fb4c59b82..30832ef5f23 100644 --- a/examples/s3-cross-account-access/terraform.template.tfvars +++ b/examples/s3-cross-account-access/terraform.template.tfvars @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # prod account prod_access_key = "AAAAAAAAAAAAAAAAAAA" prod_secret_key = "SuperSecretKeyForAccountA" diff --git a/examples/s3-cross-account-access/variables.tf b/examples/s3-cross-account-access/variables.tf index 58c2a1e7cf0..28dccede991 100644 --- a/examples/s3-cross-account-access/variables.tf +++ b/examples/s3-cross-account-access/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "prod_access_key" {} variable "prod_secret_key" {} diff --git a/examples/sagemaker/main.tf b/examples/sagemaker/main.tf index d78e28245bb..3b839569f87 100644 --- a/examples/sagemaker/main.tf +++ b/examples/sagemaker/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/transit-gateway-cross-account-peering-attachment/main.tf b/examples/transit-gateway-cross-account-peering-attachment/main.tf index 0799f85ac0f..6cb3c24a367 100644 --- a/examples/transit-gateway-cross-account-peering-attachment/main.tf +++ b/examples/transit-gateway-cross-account-peering-attachment/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/transit-gateway-cross-account-peering-attachment/terraform.template.tfvars b/examples/transit-gateway-cross-account-peering-attachment/terraform.template.tfvars index 122bfe58357..d06e23228a9 100644 --- a/examples/transit-gateway-cross-account-peering-attachment/terraform.template.tfvars +++ b/examples/transit-gateway-cross-account-peering-attachment/terraform.template.tfvars @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # First account aws_first_access_key = "AAAAAAAAAAAAAAAAAAA" aws_first_secret_key = "SuperSecretKeyForAccount1" diff --git a/examples/transit-gateway-cross-account-peering-attachment/variables.tf b/examples/transit-gateway-cross-account-peering-attachment/variables.tf index 88b369a456e..e7001647aff 100644 --- a/examples/transit-gateway-cross-account-peering-attachment/variables.tf +++ b/examples/transit-gateway-cross-account-peering-attachment/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + variable "aws_first_access_key" {} variable "aws_first_secret_key" {} diff --git a/examples/transit-gateway-cross-account-vpc-attachment/main.tf b/examples/transit-gateway-cross-account-vpc-attachment/main.tf index 31a9c41210b..ea5c5bd5f57 100644 --- a/examples/transit-gateway-cross-account-vpc-attachment/main.tf +++ b/examples/transit-gateway-cross-account-vpc-attachment/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/transit-gateway-cross-account-vpc-attachment/terraform.template.tfvars b/examples/transit-gateway-cross-account-vpc-attachment/terraform.template.tfvars index 813b24302fe..1b2e74891f5 100644 --- a/examples/transit-gateway-cross-account-vpc-attachment/terraform.template.tfvars +++ b/examples/transit-gateway-cross-account-vpc-attachment/terraform.template.tfvars @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # First account aws_first_access_key = "AAAAAAAAAAAAAAAAAAA" aws_first_secret_key = "SuperSecretKeyForAccount1" diff --git a/examples/transit-gateway-cross-account-vpc-attachment/variables.tf b/examples/transit-gateway-cross-account-vpc-attachment/variables.tf index ed1a71f53b1..a1b8e9c6a72 100644 --- a/examples/transit-gateway-cross-account-vpc-attachment/variables.tf +++ b/examples/transit-gateway-cross-account-vpc-attachment/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_first_access_key" {} variable "aws_first_secret_key" {} diff --git a/examples/transit-gateway-intra-region-peering/main.tf b/examples/transit-gateway-intra-region-peering/main.tf index 4a1ad370b44..d78f3acb3a8 100644 --- a/examples/transit-gateway-intra-region-peering/main.tf +++ b/examples/transit-gateway-intra-region-peering/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/transit-gateway-intra-region-peering/terraform.template.tfvars b/examples/transit-gateway-intra-region-peering/terraform.template.tfvars index 290c2a8fa3f..160ec813bf8 100644 --- a/examples/transit-gateway-intra-region-peering/terraform.template.tfvars +++ b/examples/transit-gateway-intra-region-peering/terraform.template.tfvars @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + # AWS Profile (type `aws configure`) aws_profile = "default" diff --git a/examples/transit-gateway-intra-region-peering/variables.tf b/examples/transit-gateway-intra-region-peering/variables.tf index 018322bac48..568c2059459 100644 --- a/examples/transit-gateway-intra-region-peering/variables.tf +++ b/examples/transit-gateway-intra-region-peering/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "aws_profile" {} variable "aws_region" {} diff --git a/examples/two-tier/main.tf b/examples/two-tier/main.tf index d1b8b41701d..da907e9cad3 100644 --- a/examples/two-tier/main.tf +++ b/examples/two-tier/main.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + terraform { required_version = ">= 0.12" } diff --git a/examples/two-tier/outputs.tf b/examples/two-tier/outputs.tf index 2c535bb7283..b57b1a52a60 100644 --- a/examples/two-tier/outputs.tf +++ b/examples/two-tier/outputs.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + output "address" { value = aws_elb.web.dns_name } diff --git a/examples/two-tier/terraform.template.tfvars b/examples/two-tier/terraform.template.tfvars index b19a09391c5..223da16b3eb 100644 --- a/examples/two-tier/terraform.template.tfvars +++ b/examples/two-tier/terraform.template.tfvars @@ -1,2 +1,5 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + key_name = "terraform-provider-aws-example" public_key_path = "~/.ssh/terraform-provider-aws-example.pub" diff --git a/examples/two-tier/variables.tf b/examples/two-tier/variables.tf index d590361880b..ccb4fb1bbb2 100644 --- a/examples/two-tier/variables.tf +++ b/examples/two-tier/variables.tf @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + variable "public_key_path" { description = < value) { - if !ok { - return fmt.Errorf("%s: Attribute %q not found", n, key) - } + if err != nil { + return err + } - return fmt.Errorf("%s: Attribute %q is not greater than %q, got %q", n, key, value, v) + if v <= val { + return fmt.Errorf("got %d, want > %d", v, val) } return nil - } + }) } func CheckResourceAttrGreaterThanOrEqualValue(n, key string, val int) resource.TestCheckFunc { @@ -2263,7 +2293,23 @@ func CheckResourceAttrGreaterThanOrEqualValue(n, key string, val int) resource.T } if v < val { - return fmt.Errorf("%s: Attribute %q is not greater than or equal to %d, got %d", n, key, val, v) + return fmt.Errorf("got %d, want >= %d", v, val) + } + + return nil + }) +} + +func CheckResourceAttrIsJSONString(n, key string) resource.TestCheckFunc { + return resource.TestCheckResourceAttrWith(n, key, func(value string) error { + var m map[string]*json.RawMessage + + if err := json.Unmarshal([]byte(value), &m); err != nil { + return err + } + + if len(m) == 0 { + return errors.New(`empty JSON string`) } return nil diff --git a/internal/acctest/context.go b/internal/acctest/context.go index 2fa367a705b..cd602bcb603 100644 --- a/internal/acctest/context.go +++ b/internal/acctest/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acctest import ( diff --git a/internal/acctest/crypto.go b/internal/acctest/crypto.go index 00c4d8f6d91..7df76c18970 100644 --- a/internal/acctest/crypto.go +++ b/internal/acctest/crypto.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acctest import ( diff --git a/internal/acctest/crypto_test.go b/internal/acctest/crypto_test.go index 7e78f56d9ea..c97ba1acc96 100644 --- a/internal/acctest/crypto_test.go +++ b/internal/acctest/crypto_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acctest_test import ( diff --git a/internal/acctest/exports_test.go b/internal/acctest/exports_test.go index 53e1a1cff14..5e30a95ec23 100644 --- a/internal/acctest/exports_test.go +++ b/internal/acctest/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acctest // Exports for use in tests only. diff --git a/internal/acctest/framework.go b/internal/acctest/framework.go index 92360a9ca21..74ed7e7719c 100644 --- a/internal/acctest/framework.go +++ b/internal/acctest/framework.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acctest import ( diff --git a/internal/acctest/vcr.go b/internal/acctest/vcr.go index dc6b6fbc109..fa0a2ff1050 100644 --- a/internal/acctest/vcr.go +++ b/internal/acctest/vcr.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acctest import ( @@ -270,7 +273,7 @@ func vcrProviderConfigureContextFunc(provider *schema.Provider, configureContext // TODO Need to loop through all API clients to do this. // TODO Use []*client.Client? // TODO AWS SDK for Go v2 API clients. 
- meta.LogsConn().Handlers.AfterRetry.PushFront(func(r *request.Request) { + meta.LogsConn(ctx).Handlers.AfterRetry.PushFront(func(r *request.Request) { // We have to use 'Contains' rather than 'errors.Is' because 'awserr.Error' doesn't implement 'Unwrap'. if errs.Contains(r.Error, cassette.ErrInteractionNotFound.Error()) { r.Retryable = aws.Bool(false) diff --git a/internal/acctest/vcr_test.go b/internal/acctest/vcr_test.go index 5d5bc2824b6..c1589e5f495 100644 --- a/internal/acctest/vcr_test.go +++ b/internal/acctest/vcr_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acctest_test import ( diff --git a/internal/attrmap/attrmap.go b/internal/attrmap/attrmap.go index fb33dbd42f0..db4c59a2186 100644 --- a/internal/attrmap/attrmap.go +++ b/internal/attrmap/attrmap.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package attrmap import ( diff --git a/internal/conns/awsclient.go b/internal/conns/awsclient.go index f1aaf8c8238..80064849270 100644 --- a/internal/conns/awsclient.go +++ b/internal/conns/awsclient.go @@ -1,14 +1,48 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package conns import ( + "context" "fmt" "net/http" + "sync" - "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/apigatewayv2" - "github.com/aws/aws-sdk-go/service/s3" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + apigatewayv2_sdkv1 "github.com/aws/aws-sdk-go/service/apigatewayv2" + mediaconvert_sdkv1 "github.com/aws/aws-sdk-go/service/mediaconvert" + s3_sdkv1 "github.com/aws/aws-sdk-go/service/s3" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/names" ) +type AWSClient struct { + AccountID string + DefaultTagsConfig *tftags.DefaultConfig + DNSSuffix string + IgnoreTagsConfig *tftags.IgnoreConfig + MediaConvertAccountConn *mediaconvert_sdkv1.MediaConvert + Partition string + Region string + ReverseDNSPrefix string + ServicePackages map[string]ServicePackage + Session *session_sdkv1.Session + TerraformVersion string + + awsConfig *aws_sdkv2.Config + clients map[string]any + conns map[string]any + endpoints map[string]string // From provider configuration. + httpClient *http.Client + lock sync.Mutex + s3UsePathStyle bool // From provider configuration. + stsRegion string // From provider configuration. +} + // PartitionHostname returns a hostname with the provider domain suffix for the partition // e.g. PREFIX.amazonaws.com // The prefix should not contain a trailing period. 
@@ -23,8 +57,11 @@ func (client *AWSClient) RegionalHostname(prefix string) string { return fmt.Sprintf("%s.%s.%s", prefix, client.Region, client.DNSSuffix) } -func (client *AWSClient) S3ConnURICleaningDisabled() *s3.S3 { - return client.s3ConnURICleaningDisabled +func (client *AWSClient) S3ConnURICleaningDisabled(ctx context.Context) *s3_sdkv1.S3 { + config := client.S3Conn(ctx).Config + config.DisableRestProtocolURICleaning = aws_sdkv1.Bool(true) + + return s3_sdkv1.New(client.Session.Copy(&config)) } // SetHTTPClient sets the http.Client used for AWS API calls. @@ -50,7 +87,7 @@ func (client *AWSClient) APIGatewayInvokeURL(restAPIID, stageName string) string // See https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-publish.html and // https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-set-up-websocket-deployment.html. func (client *AWSClient) APIGatewayV2InvokeURL(protocolType, apiID, stageName string) string { - if protocolType == apigatewayv2.ProtocolTypeWebsocket { + if protocolType == apigatewayv2_sdkv1.ProtocolTypeWebsocket { return fmt.Sprintf("wss://%s/%s", client.RegionalHostname(fmt.Sprintf("%s.execute-api", apiID)), stageName) } @@ -64,7 +101,7 @@ func (client *AWSClient) APIGatewayV2InvokeURL(protocolType, apiID, stageName st // CloudFrontDistributionHostedZoneID returns the Route 53 hosted zone ID // for Amazon CloudFront distributions in the configured AWS partition. 
func (client *AWSClient) CloudFrontDistributionHostedZoneID() string { - if client.Partition == endpoints.AwsCnPartitionID { + if client.Partition == endpoints_sdkv1.AwsCnPartitionID { return "Z3RFFRIM2A3IF5" // See https://docs.amazonaws.cn/en_us/aws/latest/userguide/route53.html } return "Z2FDTNDATAQYW2" // See https://docs.aws.amazon.com/Route53/latest/APIReference/API_AliasTarget.html#Route53-Type-AliasTarget-HostedZoneId @@ -96,3 +133,111 @@ func (client *AWSClient) DefaultKMSKeyPolicy() string { func (client *AWSClient) GlobalAcceleratorHostedZoneID() string { return "Z2BJ6XQ5FK7U4H" // See https://docs.aws.amazon.com/general/latest/gr/global_accelerator.html#global_accelerator_region } + +// apiClientConfig returns the AWS API client configuration parameters for the specified service. +func (client *AWSClient) apiClientConfig(servicePackageName string) map[string]any { + m := map[string]any{ + "aws_sdkv2_config": client.awsConfig, + "endpoint": client.endpoints[servicePackageName], + "partition": client.Partition, + "session": client.Session, + } + switch servicePackageName { + case names.S3: + m["s3_use_path_style"] = client.s3UsePathStyle + case names.STS: + m["sts_region"] = client.stsRegion + } + + return m +} + +// conn returns the AWS SDK for Go v1 API client for the specified service. 
+func conn[T any](ctx context.Context, c *AWSClient, servicePackageName string) (T, error) { + c.lock.Lock() + defer c.lock.Unlock() + + if raw, ok := c.conns[servicePackageName]; ok { + if conn, ok := raw.(T); ok { + return conn, nil + } else { + var zero T + return zero, fmt.Errorf("AWS SDK v1 API client (%s): %T, want %T", servicePackageName, raw, zero) + } + } + + sp, ok := c.ServicePackages[servicePackageName] + if !ok { + var zero T + return zero, fmt.Errorf("unknown service package: %s", servicePackageName) + } + + v, ok := sp.(interface { + NewConn(context.Context, map[string]any) (T, error) + }) + if !ok { + var zero T + return zero, fmt.Errorf("no AWS SDK v1 API client factory: %s", servicePackageName) + } + + conn, err := v.NewConn(ctx, c.apiClientConfig(servicePackageName)) + if err != nil { + var zero T + return zero, err + } + + if v, ok := sp.(interface { + CustomizeConn(context.Context, T) (T, error) + }); ok { + conn, err = v.CustomizeConn(ctx, conn) + if err != nil { + var zero T + return zero, err + } + } + + c.conns[servicePackageName] = conn + + return conn, nil +} + +// client returns the AWS SDK for Go v2 API client for the specified service. 
+func client[T any](ctx context.Context, c *AWSClient, servicePackageName string) (T, error) { + c.lock.Lock() + defer c.lock.Unlock() + + if raw, ok := c.clients[servicePackageName]; ok { + if client, ok := raw.(T); ok { + return client, nil + } else { + var zero T + return zero, fmt.Errorf("AWS SDK v2 API client (%s): %T, want %T", servicePackageName, raw, zero) + } + } + + sp, ok := c.ServicePackages[servicePackageName] + if !ok { + var zero T + return zero, fmt.Errorf("unknown service package: %s", servicePackageName) + } + + v, ok := sp.(interface { + NewClient(context.Context, map[string]any) (T, error) + }) + if !ok { + var zero T + return zero, fmt.Errorf("no AWS SDK v2 API client factory: %s", servicePackageName) + } + + client, err := v.NewClient(ctx, c.apiClientConfig(servicePackageName)) + if err != nil { + var zero T + return zero, err + } + + // All customization for AWS SDK for Go v2 API clients must be done during construction. + + c.clients[servicePackageName] = client + + return client, nil +} diff --git a/internal/conns/awsclient_gen.go b/internal/conns/awsclient_gen.go index 998007145a2..2849a7e37d1 100644 --- a/internal/conns/awsclient_gen.go +++ b/internal/conns/awsclient_gen.go @@ -2,1966 +2,1633 @@ package conns import ( - "net/http" - - "github.com/aws/aws-sdk-go-v2/service/accessanalyzer" - "github.com/aws/aws-sdk-go-v2/service/account" - "github.com/aws/aws-sdk-go-v2/service/acm" - "github.com/aws/aws-sdk-go-v2/service/auditmanager" - "github.com/aws/aws-sdk-go-v2/service/cleanrooms" - "github.com/aws/aws-sdk-go-v2/service/cloudcontrol" + "context" + + accessanalyzer_sdkv2 "github.com/aws/aws-sdk-go-v2/service/accessanalyzer" + account_sdkv2 "github.com/aws/aws-sdk-go-v2/service/account" + acm_sdkv2 "github.com/aws/aws-sdk-go-v2/service/acm" + appconfig_sdkv2 "github.com/aws/aws-sdk-go-v2/service/appconfig" + auditmanager_sdkv2 "github.com/aws/aws-sdk-go-v2/service/auditmanager" + cleanrooms_sdkv2 
"github.com/aws/aws-sdk-go-v2/service/cleanrooms" + cloudcontrol_sdkv2 "github.com/aws/aws-sdk-go-v2/service/cloudcontrol" cloudwatchlogs_sdkv2 "github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs" - "github.com/aws/aws-sdk-go-v2/service/comprehend" - "github.com/aws/aws-sdk-go-v2/service/computeoptimizer" + comprehend_sdkv2 "github.com/aws/aws-sdk-go-v2/service/comprehend" + computeoptimizer_sdkv2 "github.com/aws/aws-sdk-go-v2/service/computeoptimizer" directoryservice_sdkv2 "github.com/aws/aws-sdk-go-v2/service/directoryservice" - "github.com/aws/aws-sdk-go-v2/service/docdbelastic" + docdbelastic_sdkv2 "github.com/aws/aws-sdk-go-v2/service/docdbelastic" ec2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ec2" - "github.com/aws/aws-sdk-go-v2/service/fis" - "github.com/aws/aws-sdk-go-v2/service/healthlake" - "github.com/aws/aws-sdk-go-v2/service/identitystore" - "github.com/aws/aws-sdk-go-v2/service/inspector2" - "github.com/aws/aws-sdk-go-v2/service/ivschat" - "github.com/aws/aws-sdk-go-v2/service/kendra" + finspace_sdkv2 "github.com/aws/aws-sdk-go-v2/service/finspace" + fis_sdkv2 "github.com/aws/aws-sdk-go-v2/service/fis" + glacier_sdkv2 "github.com/aws/aws-sdk-go-v2/service/glacier" + healthlake_sdkv2 "github.com/aws/aws-sdk-go-v2/service/healthlake" + identitystore_sdkv2 "github.com/aws/aws-sdk-go-v2/service/identitystore" + inspector2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/inspector2" + internetmonitor_sdkv2 "github.com/aws/aws-sdk-go-v2/service/internetmonitor" + ivschat_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ivschat" + kendra_sdkv2 "github.com/aws/aws-sdk-go-v2/service/kendra" + keyspaces_sdkv2 "github.com/aws/aws-sdk-go-v2/service/keyspaces" lambda_sdkv2 "github.com/aws/aws-sdk-go-v2/service/lambda" - "github.com/aws/aws-sdk-go-v2/service/medialive" - "github.com/aws/aws-sdk-go-v2/service/oam" - "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" - "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/aws/aws-sdk-go-v2/service/rbin" + 
lightsail_sdkv2 "github.com/aws/aws-sdk-go-v2/service/lightsail" + medialive_sdkv2 "github.com/aws/aws-sdk-go-v2/service/medialive" + oam_sdkv2 "github.com/aws/aws-sdk-go-v2/service/oam" + opensearchserverless_sdkv2 "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + pipes_sdkv2 "github.com/aws/aws-sdk-go-v2/service/pipes" + pricing_sdkv2 "github.com/aws/aws-sdk-go-v2/service/pricing" + qldb_sdkv2 "github.com/aws/aws-sdk-go-v2/service/qldb" + rbin_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rbin" rds_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rds" - "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2" - "github.com/aws/aws-sdk-go-v2/service/rolesanywhere" - "github.com/aws/aws-sdk-go-v2/service/route53domains" + resourceexplorer2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2" + rolesanywhere_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rolesanywhere" + route53domains_sdkv2 "github.com/aws/aws-sdk-go-v2/service/route53domains" s3control_sdkv2 "github.com/aws/aws-sdk-go-v2/service/s3control" - "github.com/aws/aws-sdk-go-v2/service/scheduler" - "github.com/aws/aws-sdk-go-v2/service/securitylake" - "github.com/aws/aws-sdk-go-v2/service/sesv2" + scheduler_sdkv2 "github.com/aws/aws-sdk-go-v2/service/scheduler" + securitylake_sdkv2 "github.com/aws/aws-sdk-go-v2/service/securitylake" + sesv2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/sesv2" ssm_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssm" - "github.com/aws/aws-sdk-go-v2/service/ssmcontacts" - "github.com/aws/aws-sdk-go-v2/service/ssmincidents" - "github.com/aws/aws-sdk-go-v2/service/transcribe" - "github.com/aws/aws-sdk-go-v2/service/vpclattice" - "github.com/aws/aws-sdk-go-v2/service/xray" - "github.com/aws/aws-sdk-go/aws/session" - "github.com/aws/aws-sdk-go/service/acmpca" - "github.com/aws/aws-sdk-go/service/alexaforbusiness" - "github.com/aws/aws-sdk-go/service/amplify" - "github.com/aws/aws-sdk-go/service/amplifybackend" - "github.com/aws/aws-sdk-go/service/amplifyuibuilder" - 
"github.com/aws/aws-sdk-go/service/apigateway" - "github.com/aws/aws-sdk-go/service/apigatewaymanagementapi" - "github.com/aws/aws-sdk-go/service/apigatewayv2" - "github.com/aws/aws-sdk-go/service/appconfig" - "github.com/aws/aws-sdk-go/service/appconfigdata" - "github.com/aws/aws-sdk-go/service/appflow" - "github.com/aws/aws-sdk-go/service/appintegrationsservice" - "github.com/aws/aws-sdk-go/service/applicationautoscaling" - "github.com/aws/aws-sdk-go/service/applicationcostprofiler" - "github.com/aws/aws-sdk-go/service/applicationdiscoveryservice" - "github.com/aws/aws-sdk-go/service/applicationinsights" - "github.com/aws/aws-sdk-go/service/appmesh" - "github.com/aws/aws-sdk-go/service/appregistry" - "github.com/aws/aws-sdk-go/service/apprunner" - "github.com/aws/aws-sdk-go/service/appstream" - "github.com/aws/aws-sdk-go/service/appsync" - "github.com/aws/aws-sdk-go/service/athena" - "github.com/aws/aws-sdk-go/service/augmentedairuntime" - "github.com/aws/aws-sdk-go/service/autoscaling" - "github.com/aws/aws-sdk-go/service/autoscalingplans" - "github.com/aws/aws-sdk-go/service/backup" - "github.com/aws/aws-sdk-go/service/backupgateway" - "github.com/aws/aws-sdk-go/service/batch" - "github.com/aws/aws-sdk-go/service/billingconductor" - "github.com/aws/aws-sdk-go/service/braket" - "github.com/aws/aws-sdk-go/service/budgets" - "github.com/aws/aws-sdk-go/service/chime" - "github.com/aws/aws-sdk-go/service/chimesdkidentity" - "github.com/aws/aws-sdk-go/service/chimesdkmediapipelines" - "github.com/aws/aws-sdk-go/service/chimesdkmeetings" - "github.com/aws/aws-sdk-go/service/chimesdkmessaging" - "github.com/aws/aws-sdk-go/service/chimesdkvoice" - "github.com/aws/aws-sdk-go/service/cloud9" - "github.com/aws/aws-sdk-go/service/clouddirectory" - "github.com/aws/aws-sdk-go/service/cloudformation" - "github.com/aws/aws-sdk-go/service/cloudfront" - "github.com/aws/aws-sdk-go/service/cloudhsmv2" - "github.com/aws/aws-sdk-go/service/cloudsearch" - 
"github.com/aws/aws-sdk-go/service/cloudsearchdomain" - "github.com/aws/aws-sdk-go/service/cloudtrail" - "github.com/aws/aws-sdk-go/service/cloudwatch" - "github.com/aws/aws-sdk-go/service/cloudwatchevidently" - "github.com/aws/aws-sdk-go/service/cloudwatchlogs" - "github.com/aws/aws-sdk-go/service/cloudwatchrum" - "github.com/aws/aws-sdk-go/service/codeartifact" - "github.com/aws/aws-sdk-go/service/codebuild" - "github.com/aws/aws-sdk-go/service/codecommit" - "github.com/aws/aws-sdk-go/service/codedeploy" - "github.com/aws/aws-sdk-go/service/codeguruprofiler" - "github.com/aws/aws-sdk-go/service/codegurureviewer" - "github.com/aws/aws-sdk-go/service/codepipeline" - "github.com/aws/aws-sdk-go/service/codestar" - "github.com/aws/aws-sdk-go/service/codestarconnections" - "github.com/aws/aws-sdk-go/service/codestarnotifications" - "github.com/aws/aws-sdk-go/service/cognitoidentity" - "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" - "github.com/aws/aws-sdk-go/service/cognitosync" - "github.com/aws/aws-sdk-go/service/comprehendmedical" - "github.com/aws/aws-sdk-go/service/configservice" - "github.com/aws/aws-sdk-go/service/connect" - "github.com/aws/aws-sdk-go/service/connectcontactlens" - "github.com/aws/aws-sdk-go/service/connectparticipant" - "github.com/aws/aws-sdk-go/service/connectwisdomservice" - "github.com/aws/aws-sdk-go/service/controltower" - "github.com/aws/aws-sdk-go/service/costandusagereportservice" - "github.com/aws/aws-sdk-go/service/costexplorer" - "github.com/aws/aws-sdk-go/service/customerprofiles" - "github.com/aws/aws-sdk-go/service/databasemigrationservice" - "github.com/aws/aws-sdk-go/service/dataexchange" - "github.com/aws/aws-sdk-go/service/datapipeline" - "github.com/aws/aws-sdk-go/service/datasync" - "github.com/aws/aws-sdk-go/service/dax" - "github.com/aws/aws-sdk-go/service/detective" - "github.com/aws/aws-sdk-go/service/devicefarm" - "github.com/aws/aws-sdk-go/service/devopsguru" - 
"github.com/aws/aws-sdk-go/service/directconnect" - "github.com/aws/aws-sdk-go/service/directoryservice" - "github.com/aws/aws-sdk-go/service/dlm" - "github.com/aws/aws-sdk-go/service/docdb" - "github.com/aws/aws-sdk-go/service/drs" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/dynamodbstreams" - "github.com/aws/aws-sdk-go/service/ebs" - "github.com/aws/aws-sdk-go/service/ec2" - "github.com/aws/aws-sdk-go/service/ec2instanceconnect" - "github.com/aws/aws-sdk-go/service/ecr" - "github.com/aws/aws-sdk-go/service/ecrpublic" - "github.com/aws/aws-sdk-go/service/ecs" - "github.com/aws/aws-sdk-go/service/efs" - "github.com/aws/aws-sdk-go/service/eks" - "github.com/aws/aws-sdk-go/service/elasticache" - "github.com/aws/aws-sdk-go/service/elasticbeanstalk" - "github.com/aws/aws-sdk-go/service/elasticinference" - "github.com/aws/aws-sdk-go/service/elasticsearchservice" - "github.com/aws/aws-sdk-go/service/elastictranscoder" - "github.com/aws/aws-sdk-go/service/elb" - "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/aws/aws-sdk-go/service/emr" - "github.com/aws/aws-sdk-go/service/emrcontainers" - "github.com/aws/aws-sdk-go/service/emrserverless" - "github.com/aws/aws-sdk-go/service/eventbridge" - "github.com/aws/aws-sdk-go/service/finspace" - "github.com/aws/aws-sdk-go/service/finspacedata" - "github.com/aws/aws-sdk-go/service/firehose" - "github.com/aws/aws-sdk-go/service/fms" - "github.com/aws/aws-sdk-go/service/forecastqueryservice" - "github.com/aws/aws-sdk-go/service/forecastservice" - "github.com/aws/aws-sdk-go/service/frauddetector" - "github.com/aws/aws-sdk-go/service/fsx" - "github.com/aws/aws-sdk-go/service/gamelift" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/aws/aws-sdk-go/service/globalaccelerator" - "github.com/aws/aws-sdk-go/service/glue" - "github.com/aws/aws-sdk-go/service/gluedatabrew" - "github.com/aws/aws-sdk-go/service/greengrass" - "github.com/aws/aws-sdk-go/service/greengrassv2" - 
"github.com/aws/aws-sdk-go/service/groundstation" - "github.com/aws/aws-sdk-go/service/guardduty" - "github.com/aws/aws-sdk-go/service/health" - "github.com/aws/aws-sdk-go/service/honeycode" - "github.com/aws/aws-sdk-go/service/iam" - "github.com/aws/aws-sdk-go/service/imagebuilder" - "github.com/aws/aws-sdk-go/service/inspector" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/aws/aws-sdk-go/service/iot" - "github.com/aws/aws-sdk-go/service/iot1clickdevicesservice" - "github.com/aws/aws-sdk-go/service/iot1clickprojects" - "github.com/aws/aws-sdk-go/service/iotanalytics" - "github.com/aws/aws-sdk-go/service/iotdataplane" - "github.com/aws/aws-sdk-go/service/iotdeviceadvisor" - "github.com/aws/aws-sdk-go/service/iotevents" - "github.com/aws/aws-sdk-go/service/ioteventsdata" - "github.com/aws/aws-sdk-go/service/iotfleethub" - "github.com/aws/aws-sdk-go/service/iotjobsdataplane" - "github.com/aws/aws-sdk-go/service/iotsecuretunneling" - "github.com/aws/aws-sdk-go/service/iotsitewise" - "github.com/aws/aws-sdk-go/service/iotthingsgraph" - "github.com/aws/aws-sdk-go/service/iottwinmaker" - "github.com/aws/aws-sdk-go/service/iotwireless" - "github.com/aws/aws-sdk-go/service/ivs" - "github.com/aws/aws-sdk-go/service/kafka" - "github.com/aws/aws-sdk-go/service/kafkaconnect" - "github.com/aws/aws-sdk-go/service/keyspaces" - "github.com/aws/aws-sdk-go/service/kinesis" - "github.com/aws/aws-sdk-go/service/kinesisanalytics" - "github.com/aws/aws-sdk-go/service/kinesisanalyticsv2" - "github.com/aws/aws-sdk-go/service/kinesisvideo" - "github.com/aws/aws-sdk-go/service/kinesisvideoarchivedmedia" - "github.com/aws/aws-sdk-go/service/kinesisvideomedia" - "github.com/aws/aws-sdk-go/service/kinesisvideosignalingchannels" - "github.com/aws/aws-sdk-go/service/kms" - "github.com/aws/aws-sdk-go/service/lakeformation" - "github.com/aws/aws-sdk-go/service/lambda" - "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice" - 
"github.com/aws/aws-sdk-go/service/lexmodelsv2" - "github.com/aws/aws-sdk-go/service/lexruntimeservice" - "github.com/aws/aws-sdk-go/service/lexruntimev2" - "github.com/aws/aws-sdk-go/service/licensemanager" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/aws/aws-sdk-go/service/locationservice" - "github.com/aws/aws-sdk-go/service/lookoutequipment" - "github.com/aws/aws-sdk-go/service/lookoutforvision" - "github.com/aws/aws-sdk-go/service/lookoutmetrics" - "github.com/aws/aws-sdk-go/service/machinelearning" - "github.com/aws/aws-sdk-go/service/macie" - "github.com/aws/aws-sdk-go/service/macie2" - "github.com/aws/aws-sdk-go/service/managedblockchain" - "github.com/aws/aws-sdk-go/service/managedgrafana" - "github.com/aws/aws-sdk-go/service/marketplacecatalog" - "github.com/aws/aws-sdk-go/service/marketplacecommerceanalytics" - "github.com/aws/aws-sdk-go/service/marketplaceentitlementservice" - "github.com/aws/aws-sdk-go/service/marketplacemetering" - "github.com/aws/aws-sdk-go/service/mediaconnect" - "github.com/aws/aws-sdk-go/service/mediaconvert" - "github.com/aws/aws-sdk-go/service/mediapackage" - "github.com/aws/aws-sdk-go/service/mediapackagevod" - "github.com/aws/aws-sdk-go/service/mediastore" - "github.com/aws/aws-sdk-go/service/mediastoredata" - "github.com/aws/aws-sdk-go/service/mediatailor" - "github.com/aws/aws-sdk-go/service/memorydb" - "github.com/aws/aws-sdk-go/service/mgn" - "github.com/aws/aws-sdk-go/service/migrationhub" - "github.com/aws/aws-sdk-go/service/migrationhubconfig" - "github.com/aws/aws-sdk-go/service/migrationhubrefactorspaces" - "github.com/aws/aws-sdk-go/service/migrationhubstrategyrecommendations" - "github.com/aws/aws-sdk-go/service/mobile" - "github.com/aws/aws-sdk-go/service/mq" - "github.com/aws/aws-sdk-go/service/mturk" - "github.com/aws/aws-sdk-go/service/mwaa" - "github.com/aws/aws-sdk-go/service/neptune" - "github.com/aws/aws-sdk-go/service/networkfirewall" - "github.com/aws/aws-sdk-go/service/networkmanager" - 
"github.com/aws/aws-sdk-go/service/nimblestudio" - "github.com/aws/aws-sdk-go/service/opensearchservice" - "github.com/aws/aws-sdk-go/service/opsworks" - "github.com/aws/aws-sdk-go/service/opsworkscm" - "github.com/aws/aws-sdk-go/service/organizations" - "github.com/aws/aws-sdk-go/service/outposts" - "github.com/aws/aws-sdk-go/service/panorama" - "github.com/aws/aws-sdk-go/service/personalize" - "github.com/aws/aws-sdk-go/service/personalizeevents" - "github.com/aws/aws-sdk-go/service/personalizeruntime" - "github.com/aws/aws-sdk-go/service/pi" - "github.com/aws/aws-sdk-go/service/pinpoint" - "github.com/aws/aws-sdk-go/service/pinpointemail" - "github.com/aws/aws-sdk-go/service/pinpointsmsvoice" - "github.com/aws/aws-sdk-go/service/polly" - "github.com/aws/aws-sdk-go/service/pricing" - "github.com/aws/aws-sdk-go/service/prometheusservice" - "github.com/aws/aws-sdk-go/service/proton" - "github.com/aws/aws-sdk-go/service/qldb" - "github.com/aws/aws-sdk-go/service/qldbsession" - "github.com/aws/aws-sdk-go/service/quicksight" - "github.com/aws/aws-sdk-go/service/ram" - "github.com/aws/aws-sdk-go/service/rds" - "github.com/aws/aws-sdk-go/service/rdsdataservice" - "github.com/aws/aws-sdk-go/service/redshift" - "github.com/aws/aws-sdk-go/service/redshiftdataapiservice" - "github.com/aws/aws-sdk-go/service/redshiftserverless" - "github.com/aws/aws-sdk-go/service/rekognition" - "github.com/aws/aws-sdk-go/service/resiliencehub" - "github.com/aws/aws-sdk-go/service/resourcegroups" - "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi" - "github.com/aws/aws-sdk-go/service/robomaker" - "github.com/aws/aws-sdk-go/service/route53" - "github.com/aws/aws-sdk-go/service/route53recoverycluster" - "github.com/aws/aws-sdk-go/service/route53recoverycontrolconfig" - "github.com/aws/aws-sdk-go/service/route53recoveryreadiness" - "github.com/aws/aws-sdk-go/service/route53resolver" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/aws/aws-sdk-go/service/s3control" - 
"github.com/aws/aws-sdk-go/service/s3outposts" - "github.com/aws/aws-sdk-go/service/sagemaker" - "github.com/aws/aws-sdk-go/service/sagemakeredgemanager" - "github.com/aws/aws-sdk-go/service/sagemakerfeaturestoreruntime" - "github.com/aws/aws-sdk-go/service/sagemakerruntime" - "github.com/aws/aws-sdk-go/service/savingsplans" - "github.com/aws/aws-sdk-go/service/schemas" - "github.com/aws/aws-sdk-go/service/secretsmanager" - "github.com/aws/aws-sdk-go/service/securityhub" - "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository" - "github.com/aws/aws-sdk-go/service/servicecatalog" - "github.com/aws/aws-sdk-go/service/servicediscovery" - "github.com/aws/aws-sdk-go/service/servicequotas" - "github.com/aws/aws-sdk-go/service/ses" - "github.com/aws/aws-sdk-go/service/sfn" - "github.com/aws/aws-sdk-go/service/shield" - "github.com/aws/aws-sdk-go/service/signer" - "github.com/aws/aws-sdk-go/service/simpledb" - "github.com/aws/aws-sdk-go/service/sms" - "github.com/aws/aws-sdk-go/service/snowball" - "github.com/aws/aws-sdk-go/service/snowdevicemanagement" - "github.com/aws/aws-sdk-go/service/sns" - "github.com/aws/aws-sdk-go/service/sqs" - "github.com/aws/aws-sdk-go/service/ssm" - "github.com/aws/aws-sdk-go/service/sso" - "github.com/aws/aws-sdk-go/service/ssoadmin" - "github.com/aws/aws-sdk-go/service/ssooidc" - "github.com/aws/aws-sdk-go/service/storagegateway" - "github.com/aws/aws-sdk-go/service/sts" - "github.com/aws/aws-sdk-go/service/support" - "github.com/aws/aws-sdk-go/service/swf" - "github.com/aws/aws-sdk-go/service/synthetics" - "github.com/aws/aws-sdk-go/service/textract" - "github.com/aws/aws-sdk-go/service/timestreamquery" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/aws/aws-sdk-go/service/transcribestreamingservice" - "github.com/aws/aws-sdk-go/service/transfer" - "github.com/aws/aws-sdk-go/service/translate" - "github.com/aws/aws-sdk-go/service/voiceid" - "github.com/aws/aws-sdk-go/service/waf" - 
"github.com/aws/aws-sdk-go/service/wafregional" - "github.com/aws/aws-sdk-go/service/wafv2" - "github.com/aws/aws-sdk-go/service/wellarchitected" - "github.com/aws/aws-sdk-go/service/workdocs" - "github.com/aws/aws-sdk-go/service/worklink" - "github.com/aws/aws-sdk-go/service/workmail" - "github.com/aws/aws-sdk-go/service/workmailmessageflow" - "github.com/aws/aws-sdk-go/service/workspaces" - "github.com/aws/aws-sdk-go/service/workspacesweb" - tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + ssmcontacts_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssmcontacts" + ssmincidents_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssmincidents" + swf_sdkv2 "github.com/aws/aws-sdk-go-v2/service/swf" + timestreamwrite_sdkv2 "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" + transcribe_sdkv2 "github.com/aws/aws-sdk-go-v2/service/transcribe" + verifiedpermissions_sdkv2 "github.com/aws/aws-sdk-go-v2/service/verifiedpermissions" + vpclattice_sdkv2 "github.com/aws/aws-sdk-go-v2/service/vpclattice" + workspaces_sdkv2 "github.com/aws/aws-sdk-go-v2/service/workspaces" + xray_sdkv2 "github.com/aws/aws-sdk-go-v2/service/xray" + acmpca_sdkv1 "github.com/aws/aws-sdk-go/service/acmpca" + alexaforbusiness_sdkv1 "github.com/aws/aws-sdk-go/service/alexaforbusiness" + amplify_sdkv1 "github.com/aws/aws-sdk-go/service/amplify" + amplifybackend_sdkv1 "github.com/aws/aws-sdk-go/service/amplifybackend" + amplifyuibuilder_sdkv1 "github.com/aws/aws-sdk-go/service/amplifyuibuilder" + apigateway_sdkv1 "github.com/aws/aws-sdk-go/service/apigateway" + apigatewaymanagementapi_sdkv1 "github.com/aws/aws-sdk-go/service/apigatewaymanagementapi" + apigatewayv2_sdkv1 "github.com/aws/aws-sdk-go/service/apigatewayv2" + appconfig_sdkv1 "github.com/aws/aws-sdk-go/service/appconfig" + appconfigdata_sdkv1 "github.com/aws/aws-sdk-go/service/appconfigdata" + appflow_sdkv1 "github.com/aws/aws-sdk-go/service/appflow" + appintegrationsservice_sdkv1 
"github.com/aws/aws-sdk-go/service/appintegrationsservice" + applicationautoscaling_sdkv1 "github.com/aws/aws-sdk-go/service/applicationautoscaling" + applicationcostprofiler_sdkv1 "github.com/aws/aws-sdk-go/service/applicationcostprofiler" + applicationdiscoveryservice_sdkv1 "github.com/aws/aws-sdk-go/service/applicationdiscoveryservice" + applicationinsights_sdkv1 "github.com/aws/aws-sdk-go/service/applicationinsights" + appmesh_sdkv1 "github.com/aws/aws-sdk-go/service/appmesh" + appregistry_sdkv1 "github.com/aws/aws-sdk-go/service/appregistry" + apprunner_sdkv1 "github.com/aws/aws-sdk-go/service/apprunner" + appstream_sdkv1 "github.com/aws/aws-sdk-go/service/appstream" + appsync_sdkv1 "github.com/aws/aws-sdk-go/service/appsync" + athena_sdkv1 "github.com/aws/aws-sdk-go/service/athena" + augmentedairuntime_sdkv1 "github.com/aws/aws-sdk-go/service/augmentedairuntime" + autoscaling_sdkv1 "github.com/aws/aws-sdk-go/service/autoscaling" + autoscalingplans_sdkv1 "github.com/aws/aws-sdk-go/service/autoscalingplans" + backup_sdkv1 "github.com/aws/aws-sdk-go/service/backup" + backupgateway_sdkv1 "github.com/aws/aws-sdk-go/service/backupgateway" + batch_sdkv1 "github.com/aws/aws-sdk-go/service/batch" + billingconductor_sdkv1 "github.com/aws/aws-sdk-go/service/billingconductor" + braket_sdkv1 "github.com/aws/aws-sdk-go/service/braket" + budgets_sdkv1 "github.com/aws/aws-sdk-go/service/budgets" + chime_sdkv1 "github.com/aws/aws-sdk-go/service/chime" + chimesdkidentity_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkidentity" + chimesdkmediapipelines_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkmediapipelines" + chimesdkmeetings_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkmeetings" + chimesdkmessaging_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkmessaging" + chimesdkvoice_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkvoice" + cloud9_sdkv1 "github.com/aws/aws-sdk-go/service/cloud9" + clouddirectory_sdkv1 "github.com/aws/aws-sdk-go/service/clouddirectory" + 
cloudformation_sdkv1 "github.com/aws/aws-sdk-go/service/cloudformation" + cloudfront_sdkv1 "github.com/aws/aws-sdk-go/service/cloudfront" + cloudhsmv2_sdkv1 "github.com/aws/aws-sdk-go/service/cloudhsmv2" + cloudsearch_sdkv1 "github.com/aws/aws-sdk-go/service/cloudsearch" + cloudsearchdomain_sdkv1 "github.com/aws/aws-sdk-go/service/cloudsearchdomain" + cloudtrail_sdkv1 "github.com/aws/aws-sdk-go/service/cloudtrail" + cloudwatch_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatch" + cloudwatchevidently_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatchevidently" + cloudwatchlogs_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatchlogs" + cloudwatchrum_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatchrum" + codeartifact_sdkv1 "github.com/aws/aws-sdk-go/service/codeartifact" + codebuild_sdkv1 "github.com/aws/aws-sdk-go/service/codebuild" + codecommit_sdkv1 "github.com/aws/aws-sdk-go/service/codecommit" + codedeploy_sdkv1 "github.com/aws/aws-sdk-go/service/codedeploy" + codeguruprofiler_sdkv1 "github.com/aws/aws-sdk-go/service/codeguruprofiler" + codegurureviewer_sdkv1 "github.com/aws/aws-sdk-go/service/codegurureviewer" + codepipeline_sdkv1 "github.com/aws/aws-sdk-go/service/codepipeline" + codestar_sdkv1 "github.com/aws/aws-sdk-go/service/codestar" + codestarconnections_sdkv1 "github.com/aws/aws-sdk-go/service/codestarconnections" + codestarnotifications_sdkv1 "github.com/aws/aws-sdk-go/service/codestarnotifications" + cognitoidentity_sdkv1 "github.com/aws/aws-sdk-go/service/cognitoidentity" + cognitoidentityprovider_sdkv1 "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + cognitosync_sdkv1 "github.com/aws/aws-sdk-go/service/cognitosync" + comprehendmedical_sdkv1 "github.com/aws/aws-sdk-go/service/comprehendmedical" + configservice_sdkv1 "github.com/aws/aws-sdk-go/service/configservice" + connect_sdkv1 "github.com/aws/aws-sdk-go/service/connect" + connectcontactlens_sdkv1 "github.com/aws/aws-sdk-go/service/connectcontactlens" + connectparticipant_sdkv1 
"github.com/aws/aws-sdk-go/service/connectparticipant" + connectwisdomservice_sdkv1 "github.com/aws/aws-sdk-go/service/connectwisdomservice" + controltower_sdkv1 "github.com/aws/aws-sdk-go/service/controltower" + costandusagereportservice_sdkv1 "github.com/aws/aws-sdk-go/service/costandusagereportservice" + costexplorer_sdkv1 "github.com/aws/aws-sdk-go/service/costexplorer" + customerprofiles_sdkv1 "github.com/aws/aws-sdk-go/service/customerprofiles" + databasemigrationservice_sdkv1 "github.com/aws/aws-sdk-go/service/databasemigrationservice" + dataexchange_sdkv1 "github.com/aws/aws-sdk-go/service/dataexchange" + datapipeline_sdkv1 "github.com/aws/aws-sdk-go/service/datapipeline" + datasync_sdkv1 "github.com/aws/aws-sdk-go/service/datasync" + dax_sdkv1 "github.com/aws/aws-sdk-go/service/dax" + detective_sdkv1 "github.com/aws/aws-sdk-go/service/detective" + devicefarm_sdkv1 "github.com/aws/aws-sdk-go/service/devicefarm" + devopsguru_sdkv1 "github.com/aws/aws-sdk-go/service/devopsguru" + directconnect_sdkv1 "github.com/aws/aws-sdk-go/service/directconnect" + directoryservice_sdkv1 "github.com/aws/aws-sdk-go/service/directoryservice" + dlm_sdkv1 "github.com/aws/aws-sdk-go/service/dlm" + docdb_sdkv1 "github.com/aws/aws-sdk-go/service/docdb" + drs_sdkv1 "github.com/aws/aws-sdk-go/service/drs" + dynamodb_sdkv1 "github.com/aws/aws-sdk-go/service/dynamodb" + dynamodbstreams_sdkv1 "github.com/aws/aws-sdk-go/service/dynamodbstreams" + ebs_sdkv1 "github.com/aws/aws-sdk-go/service/ebs" + ec2_sdkv1 "github.com/aws/aws-sdk-go/service/ec2" + ec2instanceconnect_sdkv1 "github.com/aws/aws-sdk-go/service/ec2instanceconnect" + ecr_sdkv1 "github.com/aws/aws-sdk-go/service/ecr" + ecrpublic_sdkv1 "github.com/aws/aws-sdk-go/service/ecrpublic" + ecs_sdkv1 "github.com/aws/aws-sdk-go/service/ecs" + efs_sdkv1 "github.com/aws/aws-sdk-go/service/efs" + eks_sdkv1 "github.com/aws/aws-sdk-go/service/eks" + elasticache_sdkv1 "github.com/aws/aws-sdk-go/service/elasticache" + elasticbeanstalk_sdkv1 
"github.com/aws/aws-sdk-go/service/elasticbeanstalk" + elasticinference_sdkv1 "github.com/aws/aws-sdk-go/service/elasticinference" + elasticsearchservice_sdkv1 "github.com/aws/aws-sdk-go/service/elasticsearchservice" + elastictranscoder_sdkv1 "github.com/aws/aws-sdk-go/service/elastictranscoder" + elb_sdkv1 "github.com/aws/aws-sdk-go/service/elb" + elbv2_sdkv1 "github.com/aws/aws-sdk-go/service/elbv2" + emr_sdkv1 "github.com/aws/aws-sdk-go/service/emr" + emrcontainers_sdkv1 "github.com/aws/aws-sdk-go/service/emrcontainers" + emrserverless_sdkv1 "github.com/aws/aws-sdk-go/service/emrserverless" + eventbridge_sdkv1 "github.com/aws/aws-sdk-go/service/eventbridge" + finspacedata_sdkv1 "github.com/aws/aws-sdk-go/service/finspacedata" + firehose_sdkv1 "github.com/aws/aws-sdk-go/service/firehose" + fms_sdkv1 "github.com/aws/aws-sdk-go/service/fms" + forecastqueryservice_sdkv1 "github.com/aws/aws-sdk-go/service/forecastqueryservice" + forecastservice_sdkv1 "github.com/aws/aws-sdk-go/service/forecastservice" + frauddetector_sdkv1 "github.com/aws/aws-sdk-go/service/frauddetector" + fsx_sdkv1 "github.com/aws/aws-sdk-go/service/fsx" + gamelift_sdkv1 "github.com/aws/aws-sdk-go/service/gamelift" + globalaccelerator_sdkv1 "github.com/aws/aws-sdk-go/service/globalaccelerator" + glue_sdkv1 "github.com/aws/aws-sdk-go/service/glue" + gluedatabrew_sdkv1 "github.com/aws/aws-sdk-go/service/gluedatabrew" + greengrass_sdkv1 "github.com/aws/aws-sdk-go/service/greengrass" + greengrassv2_sdkv1 "github.com/aws/aws-sdk-go/service/greengrassv2" + groundstation_sdkv1 "github.com/aws/aws-sdk-go/service/groundstation" + guardduty_sdkv1 "github.com/aws/aws-sdk-go/service/guardduty" + health_sdkv1 "github.com/aws/aws-sdk-go/service/health" + honeycode_sdkv1 "github.com/aws/aws-sdk-go/service/honeycode" + iam_sdkv1 "github.com/aws/aws-sdk-go/service/iam" + imagebuilder_sdkv1 "github.com/aws/aws-sdk-go/service/imagebuilder" + inspector_sdkv1 "github.com/aws/aws-sdk-go/service/inspector" + iot_sdkv1 
"github.com/aws/aws-sdk-go/service/iot" + iot1clickdevicesservice_sdkv1 "github.com/aws/aws-sdk-go/service/iot1clickdevicesservice" + iot1clickprojects_sdkv1 "github.com/aws/aws-sdk-go/service/iot1clickprojects" + iotanalytics_sdkv1 "github.com/aws/aws-sdk-go/service/iotanalytics" + iotdataplane_sdkv1 "github.com/aws/aws-sdk-go/service/iotdataplane" + iotdeviceadvisor_sdkv1 "github.com/aws/aws-sdk-go/service/iotdeviceadvisor" + iotevents_sdkv1 "github.com/aws/aws-sdk-go/service/iotevents" + ioteventsdata_sdkv1 "github.com/aws/aws-sdk-go/service/ioteventsdata" + iotfleethub_sdkv1 "github.com/aws/aws-sdk-go/service/iotfleethub" + iotjobsdataplane_sdkv1 "github.com/aws/aws-sdk-go/service/iotjobsdataplane" + iotsecuretunneling_sdkv1 "github.com/aws/aws-sdk-go/service/iotsecuretunneling" + iotsitewise_sdkv1 "github.com/aws/aws-sdk-go/service/iotsitewise" + iotthingsgraph_sdkv1 "github.com/aws/aws-sdk-go/service/iotthingsgraph" + iottwinmaker_sdkv1 "github.com/aws/aws-sdk-go/service/iottwinmaker" + iotwireless_sdkv1 "github.com/aws/aws-sdk-go/service/iotwireless" + ivs_sdkv1 "github.com/aws/aws-sdk-go/service/ivs" + kafka_sdkv1 "github.com/aws/aws-sdk-go/service/kafka" + kafkaconnect_sdkv1 "github.com/aws/aws-sdk-go/service/kafkaconnect" + kinesis_sdkv1 "github.com/aws/aws-sdk-go/service/kinesis" + kinesisanalytics_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisanalytics" + kinesisanalyticsv2_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisanalyticsv2" + kinesisvideo_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisvideo" + kinesisvideoarchivedmedia_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisvideoarchivedmedia" + kinesisvideomedia_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisvideomedia" + kinesisvideosignalingchannels_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisvideosignalingchannels" + kms_sdkv1 "github.com/aws/aws-sdk-go/service/kms" + lakeformation_sdkv1 "github.com/aws/aws-sdk-go/service/lakeformation" + lambda_sdkv1 
"github.com/aws/aws-sdk-go/service/lambda" + lexmodelbuildingservice_sdkv1 "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice" + lexmodelsv2_sdkv1 "github.com/aws/aws-sdk-go/service/lexmodelsv2" + lexruntimeservice_sdkv1 "github.com/aws/aws-sdk-go/service/lexruntimeservice" + lexruntimev2_sdkv1 "github.com/aws/aws-sdk-go/service/lexruntimev2" + licensemanager_sdkv1 "github.com/aws/aws-sdk-go/service/licensemanager" + locationservice_sdkv1 "github.com/aws/aws-sdk-go/service/locationservice" + lookoutequipment_sdkv1 "github.com/aws/aws-sdk-go/service/lookoutequipment" + lookoutforvision_sdkv1 "github.com/aws/aws-sdk-go/service/lookoutforvision" + lookoutmetrics_sdkv1 "github.com/aws/aws-sdk-go/service/lookoutmetrics" + machinelearning_sdkv1 "github.com/aws/aws-sdk-go/service/machinelearning" + macie_sdkv1 "github.com/aws/aws-sdk-go/service/macie" + macie2_sdkv1 "github.com/aws/aws-sdk-go/service/macie2" + managedblockchain_sdkv1 "github.com/aws/aws-sdk-go/service/managedblockchain" + managedgrafana_sdkv1 "github.com/aws/aws-sdk-go/service/managedgrafana" + marketplacecatalog_sdkv1 "github.com/aws/aws-sdk-go/service/marketplacecatalog" + marketplacecommerceanalytics_sdkv1 "github.com/aws/aws-sdk-go/service/marketplacecommerceanalytics" + marketplaceentitlementservice_sdkv1 "github.com/aws/aws-sdk-go/service/marketplaceentitlementservice" + marketplacemetering_sdkv1 "github.com/aws/aws-sdk-go/service/marketplacemetering" + mediaconnect_sdkv1 "github.com/aws/aws-sdk-go/service/mediaconnect" + mediaconvert_sdkv1 "github.com/aws/aws-sdk-go/service/mediaconvert" + mediapackage_sdkv1 "github.com/aws/aws-sdk-go/service/mediapackage" + mediapackagevod_sdkv1 "github.com/aws/aws-sdk-go/service/mediapackagevod" + mediastore_sdkv1 "github.com/aws/aws-sdk-go/service/mediastore" + mediastoredata_sdkv1 "github.com/aws/aws-sdk-go/service/mediastoredata" + mediatailor_sdkv1 "github.com/aws/aws-sdk-go/service/mediatailor" + memorydb_sdkv1 
"github.com/aws/aws-sdk-go/service/memorydb" + mgn_sdkv1 "github.com/aws/aws-sdk-go/service/mgn" + migrationhub_sdkv1 "github.com/aws/aws-sdk-go/service/migrationhub" + migrationhubconfig_sdkv1 "github.com/aws/aws-sdk-go/service/migrationhubconfig" + migrationhubrefactorspaces_sdkv1 "github.com/aws/aws-sdk-go/service/migrationhubrefactorspaces" + migrationhubstrategyrecommendations_sdkv1 "github.com/aws/aws-sdk-go/service/migrationhubstrategyrecommendations" + mobile_sdkv1 "github.com/aws/aws-sdk-go/service/mobile" + mq_sdkv1 "github.com/aws/aws-sdk-go/service/mq" + mturk_sdkv1 "github.com/aws/aws-sdk-go/service/mturk" + mwaa_sdkv1 "github.com/aws/aws-sdk-go/service/mwaa" + neptune_sdkv1 "github.com/aws/aws-sdk-go/service/neptune" + networkfirewall_sdkv1 "github.com/aws/aws-sdk-go/service/networkfirewall" + networkmanager_sdkv1 "github.com/aws/aws-sdk-go/service/networkmanager" + nimblestudio_sdkv1 "github.com/aws/aws-sdk-go/service/nimblestudio" + opensearchservice_sdkv1 "github.com/aws/aws-sdk-go/service/opensearchservice" + opsworks_sdkv1 "github.com/aws/aws-sdk-go/service/opsworks" + opsworkscm_sdkv1 "github.com/aws/aws-sdk-go/service/opsworkscm" + organizations_sdkv1 "github.com/aws/aws-sdk-go/service/organizations" + outposts_sdkv1 "github.com/aws/aws-sdk-go/service/outposts" + panorama_sdkv1 "github.com/aws/aws-sdk-go/service/panorama" + personalize_sdkv1 "github.com/aws/aws-sdk-go/service/personalize" + personalizeevents_sdkv1 "github.com/aws/aws-sdk-go/service/personalizeevents" + personalizeruntime_sdkv1 "github.com/aws/aws-sdk-go/service/personalizeruntime" + pi_sdkv1 "github.com/aws/aws-sdk-go/service/pi" + pinpoint_sdkv1 "github.com/aws/aws-sdk-go/service/pinpoint" + pinpointemail_sdkv1 "github.com/aws/aws-sdk-go/service/pinpointemail" + pinpointsmsvoice_sdkv1 "github.com/aws/aws-sdk-go/service/pinpointsmsvoice" + polly_sdkv1 "github.com/aws/aws-sdk-go/service/polly" + prometheusservice_sdkv1 "github.com/aws/aws-sdk-go/service/prometheusservice" + 
proton_sdkv1 "github.com/aws/aws-sdk-go/service/proton" + qldbsession_sdkv1 "github.com/aws/aws-sdk-go/service/qldbsession" + quicksight_sdkv1 "github.com/aws/aws-sdk-go/service/quicksight" + ram_sdkv1 "github.com/aws/aws-sdk-go/service/ram" + rds_sdkv1 "github.com/aws/aws-sdk-go/service/rds" + rdsdataservice_sdkv1 "github.com/aws/aws-sdk-go/service/rdsdataservice" + redshift_sdkv1 "github.com/aws/aws-sdk-go/service/redshift" + redshiftdataapiservice_sdkv1 "github.com/aws/aws-sdk-go/service/redshiftdataapiservice" + redshiftserverless_sdkv1 "github.com/aws/aws-sdk-go/service/redshiftserverless" + rekognition_sdkv1 "github.com/aws/aws-sdk-go/service/rekognition" + resiliencehub_sdkv1 "github.com/aws/aws-sdk-go/service/resiliencehub" + resourcegroups_sdkv1 "github.com/aws/aws-sdk-go/service/resourcegroups" + resourcegroupstaggingapi_sdkv1 "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi" + robomaker_sdkv1 "github.com/aws/aws-sdk-go/service/robomaker" + route53_sdkv1 "github.com/aws/aws-sdk-go/service/route53" + route53recoverycluster_sdkv1 "github.com/aws/aws-sdk-go/service/route53recoverycluster" + route53recoverycontrolconfig_sdkv1 "github.com/aws/aws-sdk-go/service/route53recoverycontrolconfig" + route53recoveryreadiness_sdkv1 "github.com/aws/aws-sdk-go/service/route53recoveryreadiness" + route53resolver_sdkv1 "github.com/aws/aws-sdk-go/service/route53resolver" + s3_sdkv1 "github.com/aws/aws-sdk-go/service/s3" + s3control_sdkv1 "github.com/aws/aws-sdk-go/service/s3control" + s3outposts_sdkv1 "github.com/aws/aws-sdk-go/service/s3outposts" + sagemaker_sdkv1 "github.com/aws/aws-sdk-go/service/sagemaker" + sagemakeredgemanager_sdkv1 "github.com/aws/aws-sdk-go/service/sagemakeredgemanager" + sagemakerfeaturestoreruntime_sdkv1 "github.com/aws/aws-sdk-go/service/sagemakerfeaturestoreruntime" + sagemakerruntime_sdkv1 "github.com/aws/aws-sdk-go/service/sagemakerruntime" + savingsplans_sdkv1 "github.com/aws/aws-sdk-go/service/savingsplans" + schemas_sdkv1 
"github.com/aws/aws-sdk-go/service/schemas" + secretsmanager_sdkv1 "github.com/aws/aws-sdk-go/service/secretsmanager" + securityhub_sdkv1 "github.com/aws/aws-sdk-go/service/securityhub" + serverlessapplicationrepository_sdkv1 "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository" + servicecatalog_sdkv1 "github.com/aws/aws-sdk-go/service/servicecatalog" + servicediscovery_sdkv1 "github.com/aws/aws-sdk-go/service/servicediscovery" + servicequotas_sdkv1 "github.com/aws/aws-sdk-go/service/servicequotas" + ses_sdkv1 "github.com/aws/aws-sdk-go/service/ses" + sfn_sdkv1 "github.com/aws/aws-sdk-go/service/sfn" + shield_sdkv1 "github.com/aws/aws-sdk-go/service/shield" + signer_sdkv1 "github.com/aws/aws-sdk-go/service/signer" + simpledb_sdkv1 "github.com/aws/aws-sdk-go/service/simpledb" + sms_sdkv1 "github.com/aws/aws-sdk-go/service/sms" + snowball_sdkv1 "github.com/aws/aws-sdk-go/service/snowball" + snowdevicemanagement_sdkv1 "github.com/aws/aws-sdk-go/service/snowdevicemanagement" + sns_sdkv1 "github.com/aws/aws-sdk-go/service/sns" + sqs_sdkv1 "github.com/aws/aws-sdk-go/service/sqs" + ssm_sdkv1 "github.com/aws/aws-sdk-go/service/ssm" + sso_sdkv1 "github.com/aws/aws-sdk-go/service/sso" + ssoadmin_sdkv1 "github.com/aws/aws-sdk-go/service/ssoadmin" + ssooidc_sdkv1 "github.com/aws/aws-sdk-go/service/ssooidc" + storagegateway_sdkv1 "github.com/aws/aws-sdk-go/service/storagegateway" + sts_sdkv1 "github.com/aws/aws-sdk-go/service/sts" + support_sdkv1 "github.com/aws/aws-sdk-go/service/support" + synthetics_sdkv1 "github.com/aws/aws-sdk-go/service/synthetics" + textract_sdkv1 "github.com/aws/aws-sdk-go/service/textract" + timestreamquery_sdkv1 "github.com/aws/aws-sdk-go/service/timestreamquery" + transcribestreamingservice_sdkv1 "github.com/aws/aws-sdk-go/service/transcribestreamingservice" + transfer_sdkv1 "github.com/aws/aws-sdk-go/service/transfer" + translate_sdkv1 "github.com/aws/aws-sdk-go/service/translate" + voiceid_sdkv1 
"github.com/aws/aws-sdk-go/service/voiceid" + waf_sdkv1 "github.com/aws/aws-sdk-go/service/waf" + wafregional_sdkv1 "github.com/aws/aws-sdk-go/service/wafregional" + wafv2_sdkv1 "github.com/aws/aws-sdk-go/service/wafv2" + wellarchitected_sdkv1 "github.com/aws/aws-sdk-go/service/wellarchitected" + workdocs_sdkv1 "github.com/aws/aws-sdk-go/service/workdocs" + worklink_sdkv1 "github.com/aws/aws-sdk-go/service/worklink" + workmail_sdkv1 "github.com/aws/aws-sdk-go/service/workmail" + workmailmessageflow_sdkv1 "github.com/aws/aws-sdk-go/service/workmailmessageflow" + workspacesweb_sdkv1 "github.com/aws/aws-sdk-go/service/workspacesweb" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + "github.com/hashicorp/terraform-provider-aws/names" ) -type AWSClient struct { - AccountID string - DefaultTagsConfig *tftags.DefaultConfig - DNSSuffix string - IgnoreTagsConfig *tftags.IgnoreConfig - MediaConvertAccountConn *mediaconvert.MediaConvert - Partition string - Region string - ReverseDNSPrefix string - ServicePackages map[string]ServicePackage - Session *session.Session - TerraformVersion string - - httpClient *http.Client - - dsClient lazyClient[*directoryservice_sdkv2.Client] - ec2Client lazyClient[*ec2_sdkv2.Client] - lambdaClient lazyClient[*lambda_sdkv2.Client] - logsClient lazyClient[*cloudwatchlogs_sdkv2.Client] - rdsClient lazyClient[*rds_sdkv2.Client] - s3controlClient lazyClient[*s3control_sdkv2.Client] - ssmClient lazyClient[*ssm_sdkv2.Client] - - acmClient *acm.Client - acmpcaConn *acmpca.ACMPCA - ampConn *prometheusservice.PrometheusService - apigatewayConn *apigateway.APIGateway - apigatewaymanagementapiConn *apigatewaymanagementapi.ApiGatewayManagementApi - apigatewayv2Conn *apigatewayv2.ApiGatewayV2 - accessanalyzerClient *accessanalyzer.Client - accountClient *account.Client - alexaforbusinessConn *alexaforbusiness.AlexaForBusiness - amplifyConn *amplify.Amplify - amplifybackendConn *amplifybackend.AmplifyBackend - amplifyuibuilderConn 
*amplifyuibuilder.AmplifyUIBuilder - applicationautoscalingConn *applicationautoscaling.ApplicationAutoScaling - appconfigConn *appconfig.AppConfig - appconfigdataConn *appconfigdata.AppConfigData - appflowConn *appflow.Appflow - appintegrationsConn *appintegrationsservice.AppIntegrationsService - appmeshConn *appmesh.AppMesh - apprunnerConn *apprunner.AppRunner - appstreamConn *appstream.AppStream - appsyncConn *appsync.AppSync - applicationcostprofilerConn *applicationcostprofiler.ApplicationCostProfiler - applicationinsightsConn *applicationinsights.ApplicationInsights - athenaConn *athena.Athena - auditmanagerClient *auditmanager.Client - autoscalingConn *autoscaling.AutoScaling - autoscalingplansConn *autoscalingplans.AutoScalingPlans - backupConn *backup.Backup - backupgatewayConn *backupgateway.BackupGateway - batchConn *batch.Batch - billingconductorConn *billingconductor.BillingConductor - braketConn *braket.Braket - budgetsConn *budgets.Budgets - ceConn *costexplorer.CostExplorer - curConn *costandusagereportservice.CostandUsageReportService - chimeConn *chime.Chime - chimesdkidentityConn *chimesdkidentity.ChimeSDKIdentity - chimesdkmediapipelinesConn *chimesdkmediapipelines.ChimeSDKMediaPipelines - chimesdkmeetingsConn *chimesdkmeetings.ChimeSDKMeetings - chimesdkmessagingConn *chimesdkmessaging.ChimeSDKMessaging - chimesdkvoiceConn *chimesdkvoice.ChimeSDKVoice - cleanroomsClient *cleanrooms.Client - cloud9Conn *cloud9.Cloud9 - cloudcontrolClient *cloudcontrol.Client - clouddirectoryConn *clouddirectory.CloudDirectory - cloudformationConn *cloudformation.CloudFormation - cloudfrontConn *cloudfront.CloudFront - cloudhsmv2Conn *cloudhsmv2.CloudHSMV2 - cloudsearchConn *cloudsearch.CloudSearch - cloudsearchdomainConn *cloudsearchdomain.CloudSearchDomain - cloudtrailConn *cloudtrail.CloudTrail - cloudwatchConn *cloudwatch.CloudWatch - codeartifactConn *codeartifact.CodeArtifact - codebuildConn *codebuild.CodeBuild - codecommitConn *codecommit.CodeCommit - 
codeguruprofilerConn *codeguruprofiler.CodeGuruProfiler - codegurureviewerConn *codegurureviewer.CodeGuruReviewer - codepipelineConn *codepipeline.CodePipeline - codestarConn *codestar.CodeStar - codestarconnectionsConn *codestarconnections.CodeStarConnections - codestarnotificationsConn *codestarnotifications.CodeStarNotifications - cognitoidpConn *cognitoidentityprovider.CognitoIdentityProvider - cognitoidentityConn *cognitoidentity.CognitoIdentity - cognitosyncConn *cognitosync.CognitoSync - comprehendClient *comprehend.Client - comprehendmedicalConn *comprehendmedical.ComprehendMedical - computeoptimizerClient *computeoptimizer.Client - configserviceConn *configservice.ConfigService - connectConn *connect.Connect - connectcontactlensConn *connectcontactlens.ConnectContactLens - connectparticipantConn *connectparticipant.ConnectParticipant - controltowerConn *controltower.ControlTower - customerprofilesConn *customerprofiles.CustomerProfiles - daxConn *dax.DAX - dlmConn *dlm.DLM - dmsConn *databasemigrationservice.DatabaseMigrationService - drsConn *drs.Drs - dsConn *directoryservice.DirectoryService - databrewConn *gluedatabrew.GlueDataBrew - dataexchangeConn *dataexchange.DataExchange - datapipelineConn *datapipeline.DataPipeline - datasyncConn *datasync.DataSync - deployConn *codedeploy.CodeDeploy - detectiveConn *detective.Detective - devopsguruConn *devopsguru.DevOpsGuru - devicefarmConn *devicefarm.DeviceFarm - directconnectConn *directconnect.DirectConnect - discoveryConn *applicationdiscoveryservice.ApplicationDiscoveryService - docdbConn *docdb.DocDB - docdbelasticClient *docdbelastic.Client - dynamodbConn *dynamodb.DynamoDB - dynamodbstreamsConn *dynamodbstreams.DynamoDBStreams - ebsConn *ebs.EBS - ec2Conn *ec2.EC2 - ec2instanceconnectConn *ec2instanceconnect.EC2InstanceConnect - ecrConn *ecr.ECR - ecrpublicConn *ecrpublic.ECRPublic - ecsConn *ecs.ECS - efsConn *efs.EFS - eksConn *eks.EKS - elbConn *elb.ELB - elbv2Conn *elbv2.ELBV2 - emrConn *emr.EMR - 
emrcontainersConn *emrcontainers.EMRContainers - emrserverlessConn *emrserverless.EMRServerless - elasticacheConn *elasticache.ElastiCache - elasticbeanstalkConn *elasticbeanstalk.ElasticBeanstalk - elasticinferenceConn *elasticinference.ElasticInference - elastictranscoderConn *elastictranscoder.ElasticTranscoder - esConn *elasticsearchservice.ElasticsearchService - eventsConn *eventbridge.EventBridge - evidentlyConn *cloudwatchevidently.CloudWatchEvidently - fisClient *fis.Client - fmsConn *fms.FMS - fsxConn *fsx.FSx - finspaceConn *finspace.Finspace - finspacedataConn *finspacedata.FinSpaceData - firehoseConn *firehose.Firehose - forecastConn *forecastservice.ForecastService - forecastqueryConn *forecastqueryservice.ForecastQueryService - frauddetectorConn *frauddetector.FraudDetector - gameliftConn *gamelift.GameLift - glacierConn *glacier.Glacier - globalacceleratorConn *globalaccelerator.GlobalAccelerator - glueConn *glue.Glue - grafanaConn *managedgrafana.ManagedGrafana - greengrassConn *greengrass.Greengrass - greengrassv2Conn *greengrassv2.GreengrassV2 - groundstationConn *groundstation.GroundStation - guarddutyConn *guardduty.GuardDuty - healthConn *health.Health - healthlakeClient *healthlake.Client - honeycodeConn *honeycode.Honeycode - iamConn *iam.IAM - ivsConn *ivs.IVS - ivschatClient *ivschat.Client - identitystoreClient *identitystore.Client - imagebuilderConn *imagebuilder.Imagebuilder - inspectorConn *inspector.Inspector - inspector2Client *inspector2.Client - internetmonitorConn *internetmonitor.InternetMonitor - iotConn *iot.IoT - iot1clickdevicesConn *iot1clickdevicesservice.IoT1ClickDevicesService - iot1clickprojectsConn *iot1clickprojects.IoT1ClickProjects - iotanalyticsConn *iotanalytics.IoTAnalytics - iotdataConn *iotdataplane.IoTDataPlane - iotdeviceadvisorConn *iotdeviceadvisor.IoTDeviceAdvisor - ioteventsConn *iotevents.IoTEvents - ioteventsdataConn *ioteventsdata.IoTEventsData - iotfleethubConn *iotfleethub.IoTFleetHub - 
iotjobsdataConn *iotjobsdataplane.IoTJobsDataPlane - iotsecuretunnelingConn *iotsecuretunneling.IoTSecureTunneling - iotsitewiseConn *iotsitewise.IoTSiteWise - iotthingsgraphConn *iotthingsgraph.IoTThingsGraph - iottwinmakerConn *iottwinmaker.IoTTwinMaker - iotwirelessConn *iotwireless.IoTWireless - kmsConn *kms.KMS - kafkaConn *kafka.Kafka - kafkaconnectConn *kafkaconnect.KafkaConnect - kendraClient *kendra.Client - keyspacesConn *keyspaces.Keyspaces - kinesisConn *kinesis.Kinesis - kinesisanalyticsConn *kinesisanalytics.KinesisAnalytics - kinesisanalyticsv2Conn *kinesisanalyticsv2.KinesisAnalyticsV2 - kinesisvideoConn *kinesisvideo.KinesisVideo - kinesisvideoarchivedmediaConn *kinesisvideoarchivedmedia.KinesisVideoArchivedMedia - kinesisvideomediaConn *kinesisvideomedia.KinesisVideoMedia - kinesisvideosignalingConn *kinesisvideosignalingchannels.KinesisVideoSignalingChannels - lakeformationConn *lakeformation.LakeFormation - lambdaConn *lambda.Lambda - lexmodelsConn *lexmodelbuildingservice.LexModelBuildingService - lexmodelsv2Conn *lexmodelsv2.LexModelsV2 - lexruntimeConn *lexruntimeservice.LexRuntimeService - lexruntimev2Conn *lexruntimev2.LexRuntimeV2 - licensemanagerConn *licensemanager.LicenseManager - lightsailConn *lightsail.Lightsail - locationConn *locationservice.LocationService - logsConn *cloudwatchlogs.CloudWatchLogs - lookoutequipmentConn *lookoutequipment.LookoutEquipment - lookoutmetricsConn *lookoutmetrics.LookoutMetrics - lookoutvisionConn *lookoutforvision.LookoutForVision - mqConn *mq.MQ - mturkConn *mturk.MTurk - mwaaConn *mwaa.MWAA - machinelearningConn *machinelearning.MachineLearning - macieConn *macie.Macie - macie2Conn *macie2.Macie2 - managedblockchainConn *managedblockchain.ManagedBlockchain - marketplacecatalogConn *marketplacecatalog.MarketplaceCatalog - marketplacecommerceanalyticsConn *marketplacecommerceanalytics.MarketplaceCommerceAnalytics - marketplaceentitlementConn *marketplaceentitlementservice.MarketplaceEntitlementService 
- marketplacemeteringConn *marketplacemetering.MarketplaceMetering - mediaconnectConn *mediaconnect.MediaConnect - mediaconvertConn *mediaconvert.MediaConvert - medialiveClient *medialive.Client - mediapackageConn *mediapackage.MediaPackage - mediapackagevodConn *mediapackagevod.MediaPackageVod - mediastoreConn *mediastore.MediaStore - mediastoredataConn *mediastoredata.MediaStoreData - mediatailorConn *mediatailor.MediaTailor - memorydbConn *memorydb.MemoryDB - mghConn *migrationhub.MigrationHub - mgnConn *mgn.Mgn - migrationhubconfigConn *migrationhubconfig.MigrationHubConfig - migrationhubrefactorspacesConn *migrationhubrefactorspaces.MigrationHubRefactorSpaces - migrationhubstrategyConn *migrationhubstrategyrecommendations.MigrationHubStrategyRecommendations - mobileConn *mobile.Mobile - neptuneConn *neptune.Neptune - networkfirewallConn *networkfirewall.NetworkFirewall - networkmanagerConn *networkmanager.NetworkManager - nimbleConn *nimblestudio.NimbleStudio - oamClient *oam.Client - opensearchConn *opensearchservice.OpenSearchService - opensearchserverlessClient *opensearchserverless.Client - opsworksConn *opsworks.OpsWorks - opsworkscmConn *opsworkscm.OpsWorksCM - organizationsConn *organizations.Organizations - outpostsConn *outposts.Outposts - piConn *pi.PI - panoramaConn *panorama.Panorama - personalizeConn *personalize.Personalize - personalizeeventsConn *personalizeevents.PersonalizeEvents - personalizeruntimeConn *personalizeruntime.PersonalizeRuntime - pinpointConn *pinpoint.Pinpoint - pinpointemailConn *pinpointemail.PinpointEmail - pinpointsmsvoiceConn *pinpointsmsvoice.PinpointSMSVoice - pipesClient *pipes.Client - pollyConn *polly.Polly - pricingConn *pricing.Pricing - protonConn *proton.Proton - qldbConn *qldb.QLDB - qldbsessionConn *qldbsession.QLDBSession - quicksightConn *quicksight.QuickSight - ramConn *ram.RAM - rbinClient *rbin.Client - rdsConn *rds.RDS - rdsdataConn *rdsdataservice.RDSDataService - rumConn *cloudwatchrum.CloudWatchRUM - 
redshiftConn *redshift.Redshift - redshiftdataConn *redshiftdataapiservice.RedshiftDataAPIService - redshiftserverlessConn *redshiftserverless.RedshiftServerless - rekognitionConn *rekognition.Rekognition - resiliencehubConn *resiliencehub.ResilienceHub - resourceexplorer2Client *resourceexplorer2.Client - resourcegroupsConn *resourcegroups.ResourceGroups - resourcegroupstaggingapiConn *resourcegroupstaggingapi.ResourceGroupsTaggingAPI - robomakerConn *robomaker.RoboMaker - rolesanywhereClient *rolesanywhere.Client - route53Conn *route53.Route53 - route53domainsClient *route53domains.Client - route53recoveryclusterConn *route53recoverycluster.Route53RecoveryCluster - route53recoverycontrolconfigConn *route53recoverycontrolconfig.Route53RecoveryControlConfig - route53recoveryreadinessConn *route53recoveryreadiness.Route53RecoveryReadiness - route53resolverConn *route53resolver.Route53Resolver - s3Conn *s3.S3 - s3controlConn *s3control.S3Control - s3outpostsConn *s3outposts.S3Outposts - sesConn *ses.SES - sesv2Client *sesv2.Client - sfnConn *sfn.SFN - smsConn *sms.SMS - snsConn *sns.SNS - sqsConn *sqs.SQS - ssmConn *ssm.SSM - ssmcontactsClient *ssmcontacts.Client - ssmincidentsClient *ssmincidents.Client - ssoConn *sso.SSO - ssoadminConn *ssoadmin.SSOAdmin - ssooidcConn *ssooidc.SSOOIDC - stsConn *sts.STS - swfConn *swf.SWF - sagemakerConn *sagemaker.SageMaker - sagemakera2iruntimeConn *augmentedairuntime.AugmentedAIRuntime - sagemakeredgeConn *sagemakeredgemanager.SagemakerEdgeManager - sagemakerfeaturestoreruntimeConn *sagemakerfeaturestoreruntime.SageMakerFeatureStoreRuntime - sagemakerruntimeConn *sagemakerruntime.SageMakerRuntime - savingsplansConn *savingsplans.SavingsPlans - schedulerClient *scheduler.Client - schemasConn *schemas.Schemas - secretsmanagerConn *secretsmanager.SecretsManager - securityhubConn *securityhub.SecurityHub - securitylakeClient *securitylake.Client - serverlessrepoConn *serverlessapplicationrepository.ServerlessApplicationRepository - 
servicecatalogConn *servicecatalog.ServiceCatalog - servicecatalogappregistryConn *appregistry.AppRegistry - servicediscoveryConn *servicediscovery.ServiceDiscovery - servicequotasConn *servicequotas.ServiceQuotas - shieldConn *shield.Shield - signerConn *signer.Signer - sdbConn *simpledb.SimpleDB - snowdevicemanagementConn *snowdevicemanagement.SnowDeviceManagement - snowballConn *snowball.Snowball - storagegatewayConn *storagegateway.StorageGateway - supportConn *support.Support - syntheticsConn *synthetics.Synthetics - textractConn *textract.Textract - timestreamqueryConn *timestreamquery.TimestreamQuery - timestreamwriteConn *timestreamwrite.TimestreamWrite - transcribeClient *transcribe.Client - transcribestreamingConn *transcribestreamingservice.TranscribeStreamingService - transferConn *transfer.Transfer - translateConn *translate.Translate - vpclatticeClient *vpclattice.Client - voiceidConn *voiceid.VoiceID - wafConn *waf.WAF - wafregionalConn *wafregional.WAFRegional - wafv2Conn *wafv2.WAFV2 - wellarchitectedConn *wellarchitected.WellArchitected - wisdomConn *connectwisdomservice.ConnectWisdomService - workdocsConn *workdocs.WorkDocs - worklinkConn *worklink.WorkLink - workmailConn *workmail.WorkMail - workmailmessageflowConn *workmailmessageflow.WorkMailMessageFlow - workspacesConn *workspaces.WorkSpaces - workspaceswebConn *workspacesweb.WorkSpacesWeb - xrayClient *xray.Client +func (c *AWSClient) ACMClient(ctx context.Context) *acm_sdkv2.Client { + return errs.Must(client[*acm_sdkv2.Client](ctx, c, names.ACM)) +} - s3ConnURICleaningDisabled *s3.S3 +func (c *AWSClient) ACMPCAConn(ctx context.Context) *acmpca_sdkv1.ACMPCA { + return errs.Must(conn[*acmpca_sdkv1.ACMPCA](ctx, c, names.ACMPCA)) } -func (client *AWSClient) ACMClient() *acm.Client { - return client.acmClient +func (c *AWSClient) AMPConn(ctx context.Context) *prometheusservice_sdkv1.PrometheusService { + return errs.Must(conn[*prometheusservice_sdkv1.PrometheusService](ctx, c, names.AMP)) } 
-func (client *AWSClient) ACMPCAConn() *acmpca.ACMPCA { - return client.acmpcaConn +func (c *AWSClient) APIGatewayConn(ctx context.Context) *apigateway_sdkv1.APIGateway { + return errs.Must(conn[*apigateway_sdkv1.APIGateway](ctx, c, names.APIGateway)) } -func (client *AWSClient) AMPConn() *prometheusservice.PrometheusService { - return client.ampConn +func (c *AWSClient) APIGatewayManagementAPIConn(ctx context.Context) *apigatewaymanagementapi_sdkv1.ApiGatewayManagementApi { + return errs.Must(conn[*apigatewaymanagementapi_sdkv1.ApiGatewayManagementApi](ctx, c, names.APIGatewayManagementAPI)) } -func (client *AWSClient) APIGatewayConn() *apigateway.APIGateway { - return client.apigatewayConn +func (c *AWSClient) APIGatewayV2Conn(ctx context.Context) *apigatewayv2_sdkv1.ApiGatewayV2 { + return errs.Must(conn[*apigatewayv2_sdkv1.ApiGatewayV2](ctx, c, names.APIGatewayV2)) } -func (client *AWSClient) APIGatewayManagementAPIConn() *apigatewaymanagementapi.ApiGatewayManagementApi { - return client.apigatewaymanagementapiConn +func (c *AWSClient) AccessAnalyzerClient(ctx context.Context) *accessanalyzer_sdkv2.Client { + return errs.Must(client[*accessanalyzer_sdkv2.Client](ctx, c, names.AccessAnalyzer)) } -func (client *AWSClient) APIGatewayV2Conn() *apigatewayv2.ApiGatewayV2 { - return client.apigatewayv2Conn +func (c *AWSClient) AccountClient(ctx context.Context) *account_sdkv2.Client { + return errs.Must(client[*account_sdkv2.Client](ctx, c, names.Account)) } -func (client *AWSClient) AccessAnalyzerClient() *accessanalyzer.Client { - return client.accessanalyzerClient +func (c *AWSClient) AlexaForBusinessConn(ctx context.Context) *alexaforbusiness_sdkv1.AlexaForBusiness { + return errs.Must(conn[*alexaforbusiness_sdkv1.AlexaForBusiness](ctx, c, names.AlexaForBusiness)) } -func (client *AWSClient) AccountClient() *account.Client { - return client.accountClient +func (c *AWSClient) AmplifyConn(ctx context.Context) *amplify_sdkv1.Amplify { + return 
errs.Must(conn[*amplify_sdkv1.Amplify](ctx, c, names.Amplify)) } -func (client *AWSClient) AlexaForBusinessConn() *alexaforbusiness.AlexaForBusiness { - return client.alexaforbusinessConn +func (c *AWSClient) AmplifyBackendConn(ctx context.Context) *amplifybackend_sdkv1.AmplifyBackend { + return errs.Must(conn[*amplifybackend_sdkv1.AmplifyBackend](ctx, c, names.AmplifyBackend)) } -func (client *AWSClient) AmplifyConn() *amplify.Amplify { - return client.amplifyConn +func (c *AWSClient) AmplifyUIBuilderConn(ctx context.Context) *amplifyuibuilder_sdkv1.AmplifyUIBuilder { + return errs.Must(conn[*amplifyuibuilder_sdkv1.AmplifyUIBuilder](ctx, c, names.AmplifyUIBuilder)) } -func (client *AWSClient) AmplifyBackendConn() *amplifybackend.AmplifyBackend { - return client.amplifybackendConn +func (c *AWSClient) AppAutoScalingConn(ctx context.Context) *applicationautoscaling_sdkv1.ApplicationAutoScaling { + return errs.Must(conn[*applicationautoscaling_sdkv1.ApplicationAutoScaling](ctx, c, names.AppAutoScaling)) } -func (client *AWSClient) AmplifyUIBuilderConn() *amplifyuibuilder.AmplifyUIBuilder { - return client.amplifyuibuilderConn +func (c *AWSClient) AppConfigConn(ctx context.Context) *appconfig_sdkv1.AppConfig { + return errs.Must(conn[*appconfig_sdkv1.AppConfig](ctx, c, names.AppConfig)) } -func (client *AWSClient) AppAutoScalingConn() *applicationautoscaling.ApplicationAutoScaling { - return client.applicationautoscalingConn +func (c *AWSClient) AppConfigClient(ctx context.Context) *appconfig_sdkv2.Client { + return errs.Must(client[*appconfig_sdkv2.Client](ctx, c, names.AppConfig)) } -func (client *AWSClient) AppConfigConn() *appconfig.AppConfig { - return client.appconfigConn +func (c *AWSClient) AppConfigDataConn(ctx context.Context) *appconfigdata_sdkv1.AppConfigData { + return errs.Must(conn[*appconfigdata_sdkv1.AppConfigData](ctx, c, names.AppConfigData)) } -func (client *AWSClient) AppConfigDataConn() *appconfigdata.AppConfigData { - return 
client.appconfigdataConn
+func (c *AWSClient) AppFlowConn(ctx context.Context) *appflow_sdkv1.Appflow {
+	return errs.Must(conn[*appflow_sdkv1.Appflow](ctx, c, names.AppFlow))
 }
-func (client *AWSClient) AppFlowConn() *appflow.Appflow {
-	return client.appflowConn
+func (c *AWSClient) AppIntegrationsConn(ctx context.Context) *appintegrationsservice_sdkv1.AppIntegrationsService {
+	return errs.Must(conn[*appintegrationsservice_sdkv1.AppIntegrationsService](ctx, c, names.AppIntegrations))
 }
-func (client *AWSClient) AppIntegrationsConn() *appintegrationsservice.AppIntegrationsService {
-	return client.appintegrationsConn
+func (c *AWSClient) AppMeshConn(ctx context.Context) *appmesh_sdkv1.AppMesh {
+	return errs.Must(conn[*appmesh_sdkv1.AppMesh](ctx, c, names.AppMesh))
 }
-func (client *AWSClient) AppMeshConn() *appmesh.AppMesh {
-	return client.appmeshConn
+func (c *AWSClient) AppRunnerConn(ctx context.Context) *apprunner_sdkv1.AppRunner {
+	return errs.Must(conn[*apprunner_sdkv1.AppRunner](ctx, c, names.AppRunner))
 }
-func (client *AWSClient) AppRunnerConn() *apprunner.AppRunner {
-	return client.apprunnerConn
+func (c *AWSClient) AppStreamConn(ctx context.Context) *appstream_sdkv1.AppStream {
+	return errs.Must(conn[*appstream_sdkv1.AppStream](ctx, c, names.AppStream))
 }
-func (client *AWSClient) AppStreamConn() *appstream.AppStream {
-	return client.appstreamConn
+func (c *AWSClient) AppSyncConn(ctx context.Context) *appsync_sdkv1.AppSync {
+	return errs.Must(conn[*appsync_sdkv1.AppSync](ctx, c, names.AppSync))
 }
-func (client *AWSClient) AppSyncConn() *appsync.AppSync {
-	return client.appsyncConn
+func (c *AWSClient) ApplicationCostProfilerConn(ctx context.Context) *applicationcostprofiler_sdkv1.ApplicationCostProfiler {
+	return errs.Must(conn[*applicationcostprofiler_sdkv1.ApplicationCostProfiler](ctx, c, names.ApplicationCostProfiler))
 }
-func (client *AWSClient) ApplicationCostProfilerConn() *applicationcostprofiler.ApplicationCostProfiler {
-	return client.applicationcostprofilerConn
+func (c *AWSClient) ApplicationInsightsConn(ctx context.Context) *applicationinsights_sdkv1.ApplicationInsights {
+	return errs.Must(conn[*applicationinsights_sdkv1.ApplicationInsights](ctx, c, names.ApplicationInsights))
 }
-func (client *AWSClient) ApplicationInsightsConn() *applicationinsights.ApplicationInsights {
-	return client.applicationinsightsConn
+func (c *AWSClient) AthenaConn(ctx context.Context) *athena_sdkv1.Athena {
+	return errs.Must(conn[*athena_sdkv1.Athena](ctx, c, names.Athena))
 }
-func (client *AWSClient) AthenaConn() *athena.Athena {
-	return client.athenaConn
+func (c *AWSClient) AuditManagerClient(ctx context.Context) *auditmanager_sdkv2.Client {
+	return errs.Must(client[*auditmanager_sdkv2.Client](ctx, c, names.AuditManager))
 }
-func (client *AWSClient) AuditManagerClient() *auditmanager.Client {
-	return client.auditmanagerClient
+func (c *AWSClient) AutoScalingConn(ctx context.Context) *autoscaling_sdkv1.AutoScaling {
+	return errs.Must(conn[*autoscaling_sdkv1.AutoScaling](ctx, c, names.AutoScaling))
 }
-func (client *AWSClient) AutoScalingConn() *autoscaling.AutoScaling {
-	return client.autoscalingConn
+func (c *AWSClient) AutoScalingPlansConn(ctx context.Context) *autoscalingplans_sdkv1.AutoScalingPlans {
+	return errs.Must(conn[*autoscalingplans_sdkv1.AutoScalingPlans](ctx, c, names.AutoScalingPlans))
 }
-func (client *AWSClient) AutoScalingPlansConn() *autoscalingplans.AutoScalingPlans {
-	return client.autoscalingplansConn
+func (c *AWSClient) BackupConn(ctx context.Context) *backup_sdkv1.Backup {
+	return errs.Must(conn[*backup_sdkv1.Backup](ctx, c, names.Backup))
 }
-func (client *AWSClient) BackupConn() *backup.Backup {
-	return client.backupConn
+func (c *AWSClient) BackupGatewayConn(ctx context.Context) *backupgateway_sdkv1.BackupGateway {
+	return errs.Must(conn[*backupgateway_sdkv1.BackupGateway](ctx, c, names.BackupGateway))
 }
-func (client *AWSClient) BackupGatewayConn() *backupgateway.BackupGateway {
-	return client.backupgatewayConn
+func (c *AWSClient) BatchConn(ctx context.Context) *batch_sdkv1.Batch {
+	return errs.Must(conn[*batch_sdkv1.Batch](ctx, c, names.Batch))
 }
-func (client *AWSClient) BatchConn() *batch.Batch {
-	return client.batchConn
+func (c *AWSClient) BillingConductorConn(ctx context.Context) *billingconductor_sdkv1.BillingConductor {
+	return errs.Must(conn[*billingconductor_sdkv1.BillingConductor](ctx, c, names.BillingConductor))
 }
-func (client *AWSClient) BillingConductorConn() *billingconductor.BillingConductor {
-	return client.billingconductorConn
+func (c *AWSClient) BraketConn(ctx context.Context) *braket_sdkv1.Braket {
+	return errs.Must(conn[*braket_sdkv1.Braket](ctx, c, names.Braket))
 }
-func (client *AWSClient) BraketConn() *braket.Braket {
-	return client.braketConn
+func (c *AWSClient) BudgetsConn(ctx context.Context) *budgets_sdkv1.Budgets {
+	return errs.Must(conn[*budgets_sdkv1.Budgets](ctx, c, names.Budgets))
 }
-func (client *AWSClient) BudgetsConn() *budgets.Budgets {
-	return client.budgetsConn
+func (c *AWSClient) CEConn(ctx context.Context) *costexplorer_sdkv1.CostExplorer {
+	return errs.Must(conn[*costexplorer_sdkv1.CostExplorer](ctx, c, names.CE))
 }
-func (client *AWSClient) CEConn() *costexplorer.CostExplorer {
-	return client.ceConn
+func (c *AWSClient) CURConn(ctx context.Context) *costandusagereportservice_sdkv1.CostandUsageReportService {
+	return errs.Must(conn[*costandusagereportservice_sdkv1.CostandUsageReportService](ctx, c, names.CUR))
 }
-func (client *AWSClient) CURConn() *costandusagereportservice.CostandUsageReportService {
-	return client.curConn
+func (c *AWSClient) ChimeConn(ctx context.Context) *chime_sdkv1.Chime {
+	return errs.Must(conn[*chime_sdkv1.Chime](ctx, c, names.Chime))
 }
-func (client *AWSClient) ChimeConn() *chime.Chime {
-	return client.chimeConn
+func (c *AWSClient) ChimeSDKIdentityConn(ctx context.Context) *chimesdkidentity_sdkv1.ChimeSDKIdentity {
+	return errs.Must(conn[*chimesdkidentity_sdkv1.ChimeSDKIdentity](ctx, c, names.ChimeSDKIdentity))
 }
-func (client *AWSClient) ChimeSDKIdentityConn() *chimesdkidentity.ChimeSDKIdentity {
-	return client.chimesdkidentityConn
+func (c *AWSClient) ChimeSDKMediaPipelinesConn(ctx context.Context) *chimesdkmediapipelines_sdkv1.ChimeSDKMediaPipelines {
+	return errs.Must(conn[*chimesdkmediapipelines_sdkv1.ChimeSDKMediaPipelines](ctx, c, names.ChimeSDKMediaPipelines))
 }
-func (client *AWSClient) ChimeSDKMediaPipelinesConn() *chimesdkmediapipelines.ChimeSDKMediaPipelines {
-	return client.chimesdkmediapipelinesConn
+func (c *AWSClient) ChimeSDKMeetingsConn(ctx context.Context) *chimesdkmeetings_sdkv1.ChimeSDKMeetings {
+	return errs.Must(conn[*chimesdkmeetings_sdkv1.ChimeSDKMeetings](ctx, c, names.ChimeSDKMeetings))
 }
-func (client *AWSClient) ChimeSDKMeetingsConn() *chimesdkmeetings.ChimeSDKMeetings {
-	return client.chimesdkmeetingsConn
+func (c *AWSClient) ChimeSDKMessagingConn(ctx context.Context) *chimesdkmessaging_sdkv1.ChimeSDKMessaging {
+	return errs.Must(conn[*chimesdkmessaging_sdkv1.ChimeSDKMessaging](ctx, c, names.ChimeSDKMessaging))
 }
-func (client *AWSClient) ChimeSDKMessagingConn() *chimesdkmessaging.ChimeSDKMessaging {
-	return client.chimesdkmessagingConn
+func (c *AWSClient) ChimeSDKVoiceConn(ctx context.Context) *chimesdkvoice_sdkv1.ChimeSDKVoice {
+	return errs.Must(conn[*chimesdkvoice_sdkv1.ChimeSDKVoice](ctx, c, names.ChimeSDKVoice))
 }
-func (client *AWSClient) ChimeSDKVoiceConn() *chimesdkvoice.ChimeSDKVoice {
-	return client.chimesdkvoiceConn
+func (c *AWSClient) CleanRoomsClient(ctx context.Context) *cleanrooms_sdkv2.Client {
+	return errs.Must(client[*cleanrooms_sdkv2.Client](ctx, c, names.CleanRooms))
 }
-func (client *AWSClient) CleanRoomsClient() *cleanrooms.Client {
-	return client.cleanroomsClient
+func (c *AWSClient) Cloud9Conn(ctx context.Context) *cloud9_sdkv1.Cloud9 {
+	return errs.Must(conn[*cloud9_sdkv1.Cloud9](ctx, c, names.Cloud9))
 }
-func (client *AWSClient) Cloud9Conn() *cloud9.Cloud9 {
-	return client.cloud9Conn
+func (c *AWSClient) CloudControlClient(ctx context.Context) *cloudcontrol_sdkv2.Client {
+	return errs.Must(client[*cloudcontrol_sdkv2.Client](ctx, c, names.CloudControl))
 }
-func (client *AWSClient) CloudControlClient() *cloudcontrol.Client {
-	return client.cloudcontrolClient
+func (c *AWSClient) CloudDirectoryConn(ctx context.Context) *clouddirectory_sdkv1.CloudDirectory {
+	return errs.Must(conn[*clouddirectory_sdkv1.CloudDirectory](ctx, c, names.CloudDirectory))
 }
-func (client *AWSClient) CloudDirectoryConn() *clouddirectory.CloudDirectory {
-	return client.clouddirectoryConn
+func (c *AWSClient) CloudFormationConn(ctx context.Context) *cloudformation_sdkv1.CloudFormation {
+	return errs.Must(conn[*cloudformation_sdkv1.CloudFormation](ctx, c, names.CloudFormation))
 }
-func (client *AWSClient) CloudFormationConn() *cloudformation.CloudFormation {
-	return client.cloudformationConn
+func (c *AWSClient) CloudFrontConn(ctx context.Context) *cloudfront_sdkv1.CloudFront {
+	return errs.Must(conn[*cloudfront_sdkv1.CloudFront](ctx, c, names.CloudFront))
 }
-func (client *AWSClient) CloudFrontConn() *cloudfront.CloudFront {
-	return client.cloudfrontConn
+func (c *AWSClient) CloudHSMV2Conn(ctx context.Context) *cloudhsmv2_sdkv1.CloudHSMV2 {
+	return errs.Must(conn[*cloudhsmv2_sdkv1.CloudHSMV2](ctx, c, names.CloudHSMV2))
 }
-func (client *AWSClient) CloudHSMV2Conn() *cloudhsmv2.CloudHSMV2 {
-	return client.cloudhsmv2Conn
+func (c *AWSClient) CloudSearchConn(ctx context.Context) *cloudsearch_sdkv1.CloudSearch {
+	return errs.Must(conn[*cloudsearch_sdkv1.CloudSearch](ctx, c, names.CloudSearch))
 }
-func (client *AWSClient) CloudSearchConn() *cloudsearch.CloudSearch {
-	return client.cloudsearchConn
+func (c *AWSClient) CloudSearchDomainConn(ctx context.Context) *cloudsearchdomain_sdkv1.CloudSearchDomain {
+	return errs.Must(conn[*cloudsearchdomain_sdkv1.CloudSearchDomain](ctx, c, names.CloudSearchDomain))
 }
-func (client *AWSClient) CloudSearchDomainConn() *cloudsearchdomain.CloudSearchDomain {
-	return client.cloudsearchdomainConn
+func (c *AWSClient) CloudTrailConn(ctx context.Context) *cloudtrail_sdkv1.CloudTrail {
+	return errs.Must(conn[*cloudtrail_sdkv1.CloudTrail](ctx, c, names.CloudTrail))
 }
-func (client *AWSClient) CloudTrailConn() *cloudtrail.CloudTrail {
-	return client.cloudtrailConn
+func (c *AWSClient) CloudWatchConn(ctx context.Context) *cloudwatch_sdkv1.CloudWatch {
+	return errs.Must(conn[*cloudwatch_sdkv1.CloudWatch](ctx, c, names.CloudWatch))
 }
-func (client *AWSClient) CloudWatchConn() *cloudwatch.CloudWatch {
-	return client.cloudwatchConn
+func (c *AWSClient) CodeArtifactConn(ctx context.Context) *codeartifact_sdkv1.CodeArtifact {
+	return errs.Must(conn[*codeartifact_sdkv1.CodeArtifact](ctx, c, names.CodeArtifact))
 }
-func (client *AWSClient) CodeArtifactConn() *codeartifact.CodeArtifact {
-	return client.codeartifactConn
+func (c *AWSClient) CodeBuildConn(ctx context.Context) *codebuild_sdkv1.CodeBuild {
+	return errs.Must(conn[*codebuild_sdkv1.CodeBuild](ctx, c, names.CodeBuild))
 }
-func (client *AWSClient) CodeBuildConn() *codebuild.CodeBuild {
-	return client.codebuildConn
+func (c *AWSClient) CodeCommitConn(ctx context.Context) *codecommit_sdkv1.CodeCommit {
+	return errs.Must(conn[*codecommit_sdkv1.CodeCommit](ctx, c, names.CodeCommit))
 }
-func (client *AWSClient) CodeCommitConn() *codecommit.CodeCommit {
-	return client.codecommitConn
+func (c *AWSClient) CodeGuruProfilerConn(ctx context.Context) *codeguruprofiler_sdkv1.CodeGuruProfiler {
+	return errs.Must(conn[*codeguruprofiler_sdkv1.CodeGuruProfiler](ctx, c, names.CodeGuruProfiler))
 }
-func (client *AWSClient) CodeGuruProfilerConn() *codeguruprofiler.CodeGuruProfiler {
-	return client.codeguruprofilerConn
+func (c *AWSClient) CodeGuruReviewerConn(ctx context.Context) *codegurureviewer_sdkv1.CodeGuruReviewer {
+	return errs.Must(conn[*codegurureviewer_sdkv1.CodeGuruReviewer](ctx, c, names.CodeGuruReviewer))
 }
-func (client *AWSClient) CodeGuruReviewerConn() *codegurureviewer.CodeGuruReviewer {
-	return client.codegurureviewerConn
+func (c *AWSClient) CodePipelineConn(ctx context.Context) *codepipeline_sdkv1.CodePipeline {
+	return errs.Must(conn[*codepipeline_sdkv1.CodePipeline](ctx, c, names.CodePipeline))
 }
-func (client *AWSClient) CodePipelineConn() *codepipeline.CodePipeline {
-	return client.codepipelineConn
+func (c *AWSClient) CodeStarConn(ctx context.Context) *codestar_sdkv1.CodeStar {
+	return errs.Must(conn[*codestar_sdkv1.CodeStar](ctx, c, names.CodeStar))
 }
-func (client *AWSClient) CodeStarConn() *codestar.CodeStar {
-	return client.codestarConn
+func (c *AWSClient) CodeStarConnectionsConn(ctx context.Context) *codestarconnections_sdkv1.CodeStarConnections {
+	return errs.Must(conn[*codestarconnections_sdkv1.CodeStarConnections](ctx, c, names.CodeStarConnections))
 }
-func (client *AWSClient) CodeStarConnectionsConn() *codestarconnections.CodeStarConnections {
-	return client.codestarconnectionsConn
+func (c *AWSClient) CodeStarNotificationsConn(ctx context.Context) *codestarnotifications_sdkv1.CodeStarNotifications {
+	return errs.Must(conn[*codestarnotifications_sdkv1.CodeStarNotifications](ctx, c, names.CodeStarNotifications))
 }
-func (client *AWSClient) CodeStarNotificationsConn() *codestarnotifications.CodeStarNotifications {
-	return client.codestarnotificationsConn
+func (c *AWSClient) CognitoIDPConn(ctx context.Context) *cognitoidentityprovider_sdkv1.CognitoIdentityProvider {
+	return errs.Must(conn[*cognitoidentityprovider_sdkv1.CognitoIdentityProvider](ctx, c, names.CognitoIDP))
 }
-func (client *AWSClient) CognitoIDPConn() *cognitoidentityprovider.CognitoIdentityProvider {
-	return client.cognitoidpConn
+func (c *AWSClient) CognitoIdentityConn(ctx context.Context) *cognitoidentity_sdkv1.CognitoIdentity {
+	return errs.Must(conn[*cognitoidentity_sdkv1.CognitoIdentity](ctx, c, names.CognitoIdentity))
 }
-func (client *AWSClient) CognitoIdentityConn() *cognitoidentity.CognitoIdentity {
-	return client.cognitoidentityConn
+func (c *AWSClient) CognitoSyncConn(ctx context.Context) *cognitosync_sdkv1.CognitoSync {
+	return errs.Must(conn[*cognitosync_sdkv1.CognitoSync](ctx, c, names.CognitoSync))
 }
-func (client *AWSClient) CognitoSyncConn() *cognitosync.CognitoSync {
-	return client.cognitosyncConn
+func (c *AWSClient) ComprehendClient(ctx context.Context) *comprehend_sdkv2.Client {
+	return errs.Must(client[*comprehend_sdkv2.Client](ctx, c, names.Comprehend))
 }
-func (client *AWSClient) ComprehendClient() *comprehend.Client {
-	return client.comprehendClient
+func (c *AWSClient) ComprehendMedicalConn(ctx context.Context) *comprehendmedical_sdkv1.ComprehendMedical {
+	return errs.Must(conn[*comprehendmedical_sdkv1.ComprehendMedical](ctx, c, names.ComprehendMedical))
 }
-func (client *AWSClient) ComprehendMedicalConn() *comprehendmedical.ComprehendMedical {
-	return client.comprehendmedicalConn
+func (c *AWSClient) ComputeOptimizerClient(ctx context.Context) *computeoptimizer_sdkv2.Client {
+	return errs.Must(client[*computeoptimizer_sdkv2.Client](ctx, c, names.ComputeOptimizer))
 }
-func (client *AWSClient) ComputeOptimizerClient() *computeoptimizer.Client {
-	return client.computeoptimizerClient
+func (c *AWSClient) ConfigServiceConn(ctx context.Context) *configservice_sdkv1.ConfigService {
+	return errs.Must(conn[*configservice_sdkv1.ConfigService](ctx, c, names.ConfigService))
 }
-func (client *AWSClient) ConfigServiceConn() *configservice.ConfigService {
-	return client.configserviceConn
+func (c *AWSClient) ConnectConn(ctx context.Context) *connect_sdkv1.Connect {
+	return errs.Must(conn[*connect_sdkv1.Connect](ctx, c, names.Connect))
 }
-func (client *AWSClient) ConnectConn() *connect.Connect {
-	return client.connectConn
+func (c *AWSClient) ConnectContactLensConn(ctx context.Context) *connectcontactlens_sdkv1.ConnectContactLens {
+	return errs.Must(conn[*connectcontactlens_sdkv1.ConnectContactLens](ctx, c, names.ConnectContactLens))
 }
-func (client *AWSClient) ConnectContactLensConn() *connectcontactlens.ConnectContactLens {
-	return client.connectcontactlensConn
+func (c *AWSClient) ConnectParticipantConn(ctx context.Context) *connectparticipant_sdkv1.ConnectParticipant {
+	return errs.Must(conn[*connectparticipant_sdkv1.ConnectParticipant](ctx, c, names.ConnectParticipant))
 }
-func (client *AWSClient) ConnectParticipantConn() *connectparticipant.ConnectParticipant {
-	return client.connectparticipantConn
+func (c *AWSClient) ControlTowerConn(ctx context.Context) *controltower_sdkv1.ControlTower {
+	return errs.Must(conn[*controltower_sdkv1.ControlTower](ctx, c, names.ControlTower))
 }
-func (client *AWSClient) ControlTowerConn() *controltower.ControlTower {
-	return client.controltowerConn
+func (c *AWSClient) CustomerProfilesConn(ctx context.Context) *customerprofiles_sdkv1.CustomerProfiles {
+	return errs.Must(conn[*customerprofiles_sdkv1.CustomerProfiles](ctx, c, names.CustomerProfiles))
 }
-func (client *AWSClient) CustomerProfilesConn() *customerprofiles.CustomerProfiles {
-	return client.customerprofilesConn
+func (c *AWSClient) DAXConn(ctx context.Context) *dax_sdkv1.DAX {
+	return errs.Must(conn[*dax_sdkv1.DAX](ctx, c, names.DAX))
 }
-func (client *AWSClient) DAXConn() *dax.DAX {
-	return client.daxConn
+func (c *AWSClient) DLMConn(ctx context.Context) *dlm_sdkv1.DLM {
+	return errs.Must(conn[*dlm_sdkv1.DLM](ctx, c, names.DLM))
 }
-func (client *AWSClient) DLMConn() *dlm.DLM {
-	return client.dlmConn
+func (c *AWSClient) DMSConn(ctx context.Context) *databasemigrationservice_sdkv1.DatabaseMigrationService {
+	return errs.Must(conn[*databasemigrationservice_sdkv1.DatabaseMigrationService](ctx, c, names.DMS))
 }
-func (client *AWSClient) DMSConn() *databasemigrationservice.DatabaseMigrationService {
-	return client.dmsConn
+func (c *AWSClient) DRSConn(ctx context.Context) *drs_sdkv1.Drs {
+	return errs.Must(conn[*drs_sdkv1.Drs](ctx, c, names.DRS))
 }
-func (client *AWSClient) DRSConn() *drs.Drs {
-	return client.drsConn
+func (c *AWSClient) DSConn(ctx context.Context) *directoryservice_sdkv1.DirectoryService {
+	return errs.Must(conn[*directoryservice_sdkv1.DirectoryService](ctx, c, names.DS))
 }
-func (client *AWSClient) DSConn() *directoryservice.DirectoryService {
-	return client.dsConn
+func (c *AWSClient) DSClient(ctx context.Context) *directoryservice_sdkv2.Client {
+	return errs.Must(client[*directoryservice_sdkv2.Client](ctx, c, names.DS))
 }
-func (client *AWSClient) DSClient() *directoryservice_sdkv2.Client {
-	return client.dsClient.Client()
+func (c *AWSClient) DataBrewConn(ctx context.Context) *gluedatabrew_sdkv1.GlueDataBrew {
+	return errs.Must(conn[*gluedatabrew_sdkv1.GlueDataBrew](ctx, c, names.DataBrew))
 }
-func (client *AWSClient) DataBrewConn() *gluedatabrew.GlueDataBrew {
-	return client.databrewConn
+func (c *AWSClient) DataExchangeConn(ctx context.Context) *dataexchange_sdkv1.DataExchange {
+	return errs.Must(conn[*dataexchange_sdkv1.DataExchange](ctx, c, names.DataExchange))
 }
-func (client *AWSClient) DataExchangeConn() *dataexchange.DataExchange {
-	return client.dataexchangeConn
+func (c *AWSClient) DataPipelineConn(ctx context.Context) *datapipeline_sdkv1.DataPipeline {
+	return errs.Must(conn[*datapipeline_sdkv1.DataPipeline](ctx, c, names.DataPipeline))
 }
-func (client *AWSClient) DataPipelineConn() *datapipeline.DataPipeline {
-	return client.datapipelineConn
+func (c *AWSClient) DataSyncConn(ctx context.Context) *datasync_sdkv1.DataSync {
+	return errs.Must(conn[*datasync_sdkv1.DataSync](ctx, c, names.DataSync))
 }
-func (client *AWSClient) DataSyncConn() *datasync.DataSync {
-	return client.datasyncConn
+func (c *AWSClient) DeployConn(ctx context.Context) *codedeploy_sdkv1.CodeDeploy {
+	return errs.Must(conn[*codedeploy_sdkv1.CodeDeploy](ctx, c, names.Deploy))
 }
-func (client *AWSClient) DeployConn() *codedeploy.CodeDeploy {
-	return client.deployConn
+func (c *AWSClient) DetectiveConn(ctx context.Context) *detective_sdkv1.Detective {
+	return errs.Must(conn[*detective_sdkv1.Detective](ctx, c, names.Detective))
 }
-func (client *AWSClient) DetectiveConn() *detective.Detective {
-	return client.detectiveConn
+func (c *AWSClient) DevOpsGuruConn(ctx context.Context) *devopsguru_sdkv1.DevOpsGuru {
+	return errs.Must(conn[*devopsguru_sdkv1.DevOpsGuru](ctx, c, names.DevOpsGuru))
 }
-func (client *AWSClient) DevOpsGuruConn() *devopsguru.DevOpsGuru {
-	return client.devopsguruConn
+func (c *AWSClient) DeviceFarmConn(ctx context.Context) *devicefarm_sdkv1.DeviceFarm {
+	return errs.Must(conn[*devicefarm_sdkv1.DeviceFarm](ctx, c, names.DeviceFarm))
 }
-func (client *AWSClient) DeviceFarmConn() *devicefarm.DeviceFarm {
-	return client.devicefarmConn
+func (c *AWSClient) DirectConnectConn(ctx context.Context) *directconnect_sdkv1.DirectConnect {
+	return errs.Must(conn[*directconnect_sdkv1.DirectConnect](ctx, c, names.DirectConnect))
 }
-func (client *AWSClient) DirectConnectConn() *directconnect.DirectConnect {
-	return client.directconnectConn
+func (c *AWSClient) DiscoveryConn(ctx context.Context) *applicationdiscoveryservice_sdkv1.ApplicationDiscoveryService {
+	return errs.Must(conn[*applicationdiscoveryservice_sdkv1.ApplicationDiscoveryService](ctx, c, names.Discovery))
 }
-func (client *AWSClient) DiscoveryConn() *applicationdiscoveryservice.ApplicationDiscoveryService {
-	return client.discoveryConn
+func (c *AWSClient) DocDBConn(ctx context.Context) *docdb_sdkv1.DocDB {
+	return errs.Must(conn[*docdb_sdkv1.DocDB](ctx, c, names.DocDB))
 }
-func (client *AWSClient) DocDBConn() *docdb.DocDB {
-	return client.docdbConn
+func (c *AWSClient) DocDBElasticClient(ctx context.Context) *docdbelastic_sdkv2.Client {
+	return errs.Must(client[*docdbelastic_sdkv2.Client](ctx, c, names.DocDBElastic))
 }
-func (client *AWSClient) DocDBElasticClient() *docdbelastic.Client {
-	return client.docdbelasticClient
+func (c *AWSClient) DynamoDBConn(ctx context.Context) *dynamodb_sdkv1.DynamoDB {
+	return errs.Must(conn[*dynamodb_sdkv1.DynamoDB](ctx, c, names.DynamoDB))
 }
-func (client *AWSClient) DynamoDBConn() *dynamodb.DynamoDB {
-	return client.dynamodbConn
+func (c *AWSClient) DynamoDBStreamsConn(ctx context.Context) *dynamodbstreams_sdkv1.DynamoDBStreams {
+	return errs.Must(conn[*dynamodbstreams_sdkv1.DynamoDBStreams](ctx, c, names.DynamoDBStreams))
 }
-func (client *AWSClient) DynamoDBStreamsConn() *dynamodbstreams.DynamoDBStreams {
-	return client.dynamodbstreamsConn
+func (c *AWSClient) EBSConn(ctx context.Context) *ebs_sdkv1.EBS {
+	return errs.Must(conn[*ebs_sdkv1.EBS](ctx, c, names.EBS))
 }
-func (client *AWSClient) EBSConn() *ebs.EBS {
-	return client.ebsConn
+func (c *AWSClient) EC2Conn(ctx context.Context) *ec2_sdkv1.EC2 {
+	return errs.Must(conn[*ec2_sdkv1.EC2](ctx, c, names.EC2))
 }
-func (client *AWSClient) EC2Conn() *ec2.EC2 {
-	return client.ec2Conn
+func (c *AWSClient) EC2Client(ctx context.Context) *ec2_sdkv2.Client {
+	return errs.Must(client[*ec2_sdkv2.Client](ctx, c, names.EC2))
 }
-func (client *AWSClient) EC2Client() *ec2_sdkv2.Client {
-	return client.ec2Client.Client()
+func (c *AWSClient) EC2InstanceConnectConn(ctx context.Context) *ec2instanceconnect_sdkv1.EC2InstanceConnect {
+	return errs.Must(conn[*ec2instanceconnect_sdkv1.EC2InstanceConnect](ctx, c, names.EC2InstanceConnect))
 }
-func (client *AWSClient) EC2InstanceConnectConn() *ec2instanceconnect.EC2InstanceConnect {
-	return client.ec2instanceconnectConn
+func (c *AWSClient) ECRConn(ctx context.Context) *ecr_sdkv1.ECR {
+	return errs.Must(conn[*ecr_sdkv1.ECR](ctx, c, names.ECR))
 }
-func (client *AWSClient) ECRConn() *ecr.ECR {
-	return client.ecrConn
+func (c *AWSClient) ECRPublicConn(ctx context.Context) *ecrpublic_sdkv1.ECRPublic {
+	return errs.Must(conn[*ecrpublic_sdkv1.ECRPublic](ctx, c, names.ECRPublic))
 }
-func (client *AWSClient) ECRPublicConn() *ecrpublic.ECRPublic {
-	return client.ecrpublicConn
+func (c *AWSClient) ECSConn(ctx context.Context) *ecs_sdkv1.ECS {
+	return errs.Must(conn[*ecs_sdkv1.ECS](ctx, c, names.ECS))
 }
-func (client *AWSClient) ECSConn() *ecs.ECS {
-	return client.ecsConn
+func (c *AWSClient) EFSConn(ctx context.Context) *efs_sdkv1.EFS {
+	return errs.Must(conn[*efs_sdkv1.EFS](ctx, c, names.EFS))
 }
-func (client *AWSClient) EFSConn() *efs.EFS {
-	return client.efsConn
+func (c *AWSClient) EKSConn(ctx context.Context) *eks_sdkv1.EKS {
+	return errs.Must(conn[*eks_sdkv1.EKS](ctx, c, names.EKS))
 }
-func (client *AWSClient) EKSConn() *eks.EKS {
-	return client.eksConn
+func (c *AWSClient) ELBConn(ctx context.Context) *elb_sdkv1.ELB {
+	return errs.Must(conn[*elb_sdkv1.ELB](ctx, c, names.ELB))
 }
-func (client *AWSClient) ELBConn() *elb.ELB {
-	return client.elbConn
+func (c *AWSClient) ELBV2Conn(ctx context.Context) *elbv2_sdkv1.ELBV2 {
+	return errs.Must(conn[*elbv2_sdkv1.ELBV2](ctx, c, names.ELBV2))
 }
-func (client *AWSClient) ELBV2Conn() *elbv2.ELBV2 {
-	return client.elbv2Conn
+func (c *AWSClient) EMRConn(ctx context.Context) *emr_sdkv1.EMR {
+	return errs.Must(conn[*emr_sdkv1.EMR](ctx, c, names.EMR))
 }
-func (client *AWSClient) EMRConn() *emr.EMR {
-	return client.emrConn
+func (c *AWSClient) EMRContainersConn(ctx context.Context) *emrcontainers_sdkv1.EMRContainers {
+	return errs.Must(conn[*emrcontainers_sdkv1.EMRContainers](ctx, c, names.EMRContainers))
 }
-func (client *AWSClient) EMRContainersConn() *emrcontainers.EMRContainers {
-	return client.emrcontainersConn
+func (c *AWSClient) EMRServerlessConn(ctx context.Context) *emrserverless_sdkv1.EMRServerless {
+	return errs.Must(conn[*emrserverless_sdkv1.EMRServerless](ctx, c, names.EMRServerless))
 }
-func (client *AWSClient) EMRServerlessConn() *emrserverless.EMRServerless {
-	return client.emrserverlessConn
+func (c *AWSClient) ElastiCacheConn(ctx context.Context) *elasticache_sdkv1.ElastiCache {
+	return errs.Must(conn[*elasticache_sdkv1.ElastiCache](ctx, c, names.ElastiCache))
 }
-func (client *AWSClient) ElastiCacheConn() *elasticache.ElastiCache {
-	return client.elasticacheConn
+func (c *AWSClient) ElasticBeanstalkConn(ctx context.Context) *elasticbeanstalk_sdkv1.ElasticBeanstalk {
+	return errs.Must(conn[*elasticbeanstalk_sdkv1.ElasticBeanstalk](ctx, c, names.ElasticBeanstalk))
 }
-func (client *AWSClient) ElasticBeanstalkConn() *elasticbeanstalk.ElasticBeanstalk {
-	return client.elasticbeanstalkConn
+func (c *AWSClient) ElasticInferenceConn(ctx context.Context) *elasticinference_sdkv1.ElasticInference {
+	return errs.Must(conn[*elasticinference_sdkv1.ElasticInference](ctx, c, names.ElasticInference))
 }
-func (client *AWSClient) ElasticInferenceConn() *elasticinference.ElasticInference {
-	return client.elasticinferenceConn
+func (c *AWSClient) ElasticTranscoderConn(ctx context.Context) *elastictranscoder_sdkv1.ElasticTranscoder {
+	return errs.Must(conn[*elastictranscoder_sdkv1.ElasticTranscoder](ctx, c, names.ElasticTranscoder))
 }
-func (client *AWSClient) ElasticTranscoderConn() *elastictranscoder.ElasticTranscoder {
-	return client.elastictranscoderConn
+func (c *AWSClient) ElasticsearchConn(ctx context.Context) *elasticsearchservice_sdkv1.ElasticsearchService {
+	return errs.Must(conn[*elasticsearchservice_sdkv1.ElasticsearchService](ctx, c, names.Elasticsearch))
 }
-func (client *AWSClient) ElasticsearchConn() *elasticsearchservice.ElasticsearchService {
-	return client.esConn
+func (c *AWSClient) EventsConn(ctx context.Context) *eventbridge_sdkv1.EventBridge {
+	return errs.Must(conn[*eventbridge_sdkv1.EventBridge](ctx, c, names.Events))
 }
-func (client *AWSClient) EventsConn() *eventbridge.EventBridge {
-	return client.eventsConn
+func (c *AWSClient) EvidentlyConn(ctx context.Context) *cloudwatchevidently_sdkv1.CloudWatchEvidently {
+	return errs.Must(conn[*cloudwatchevidently_sdkv1.CloudWatchEvidently](ctx, c, names.Evidently))
 }
-func (client *AWSClient) EvidentlyConn() *cloudwatchevidently.CloudWatchEvidently {
-	return client.evidentlyConn
+func (c *AWSClient) FISClient(ctx context.Context) *fis_sdkv2.Client {
+	return errs.Must(client[*fis_sdkv2.Client](ctx, c, names.FIS))
 }
-func (client *AWSClient) FISClient() *fis.Client {
-	return client.fisClient
+func (c *AWSClient) FMSConn(ctx context.Context) *fms_sdkv1.FMS {
+	return errs.Must(conn[*fms_sdkv1.FMS](ctx, c, names.FMS))
 }
-func (client *AWSClient) FMSConn() *fms.FMS {
-	return client.fmsConn
+func (c *AWSClient) FSxConn(ctx context.Context) *fsx_sdkv1.FSx {
+	return errs.Must(conn[*fsx_sdkv1.FSx](ctx, c, names.FSx))
 }
-func (client *AWSClient) FSxConn() *fsx.FSx {
-	return client.fsxConn
+func (c *AWSClient) FinSpaceClient(ctx context.Context) *finspace_sdkv2.Client {
+	return errs.Must(client[*finspace_sdkv2.Client](ctx, c, names.FinSpace))
 }
-func (client *AWSClient) FinSpaceConn() *finspace.Finspace {
-	return client.finspaceConn
+func (c *AWSClient) FinSpaceDataConn(ctx context.Context) *finspacedata_sdkv1.FinSpaceData {
+	return errs.Must(conn[*finspacedata_sdkv1.FinSpaceData](ctx, c, names.FinSpaceData))
 }
-func (client *AWSClient) FinSpaceDataConn() *finspacedata.FinSpaceData {
-	return client.finspacedataConn
+func (c *AWSClient) FirehoseConn(ctx context.Context) *firehose_sdkv1.Firehose {
+	return errs.Must(conn[*firehose_sdkv1.Firehose](ctx, c, names.Firehose))
 }
-func (client *AWSClient) FirehoseConn() *firehose.Firehose {
-	return client.firehoseConn
+func (c *AWSClient) ForecastConn(ctx context.Context) *forecastservice_sdkv1.ForecastService {
+	return errs.Must(conn[*forecastservice_sdkv1.ForecastService](ctx, c, names.Forecast))
 }
-func (client *AWSClient) ForecastConn() *forecastservice.ForecastService {
-	return client.forecastConn
+func (c *AWSClient) ForecastQueryConn(ctx context.Context) *forecastqueryservice_sdkv1.ForecastQueryService {
+	return errs.Must(conn[*forecastqueryservice_sdkv1.ForecastQueryService](ctx, c, names.ForecastQuery))
 }
-func (client *AWSClient) ForecastQueryConn() *forecastqueryservice.ForecastQueryService {
-	return client.forecastqueryConn
+func (c *AWSClient) FraudDetectorConn(ctx context.Context) *frauddetector_sdkv1.FraudDetector {
+	return errs.Must(conn[*frauddetector_sdkv1.FraudDetector](ctx, c, names.FraudDetector))
 }
-func (client *AWSClient) FraudDetectorConn() *frauddetector.FraudDetector {
-	return client.frauddetectorConn
+func (c *AWSClient) GameLiftConn(ctx context.Context) *gamelift_sdkv1.GameLift {
+	return errs.Must(conn[*gamelift_sdkv1.GameLift](ctx, c, names.GameLift))
 }
-func (client *AWSClient) GameLiftConn() *gamelift.GameLift {
-	return client.gameliftConn
+func (c *AWSClient) GlacierClient(ctx context.Context) *glacier_sdkv2.Client {
+	return errs.Must(client[*glacier_sdkv2.Client](ctx, c, names.Glacier))
 }
-func (client *AWSClient) GlacierConn() *glacier.Glacier {
-	return client.glacierConn
+func (c *AWSClient) GlobalAcceleratorConn(ctx context.Context) *globalaccelerator_sdkv1.GlobalAccelerator {
+	return errs.Must(conn[*globalaccelerator_sdkv1.GlobalAccelerator](ctx, c, names.GlobalAccelerator))
 }
-func (client *AWSClient) GlobalAcceleratorConn() *globalaccelerator.GlobalAccelerator {
-	return client.globalacceleratorConn
+func (c *AWSClient) GlueConn(ctx context.Context) *glue_sdkv1.Glue {
+	return errs.Must(conn[*glue_sdkv1.Glue](ctx, c, names.Glue))
 }
-func (client *AWSClient) GlueConn() *glue.Glue {
-	return client.glueConn
+func (c *AWSClient) GrafanaConn(ctx context.Context) *managedgrafana_sdkv1.ManagedGrafana {
+	return errs.Must(conn[*managedgrafana_sdkv1.ManagedGrafana](ctx, c, names.Grafana))
 }
-func (client *AWSClient) GrafanaConn() *managedgrafana.ManagedGrafana {
-	return client.grafanaConn
+func (c *AWSClient) GreengrassConn(ctx context.Context) *greengrass_sdkv1.Greengrass {
+	return errs.Must(conn[*greengrass_sdkv1.Greengrass](ctx, c, names.Greengrass))
 }
-func (client *AWSClient) GreengrassConn() *greengrass.Greengrass {
-	return client.greengrassConn
+func (c *AWSClient) GreengrassV2Conn(ctx context.Context) *greengrassv2_sdkv1.GreengrassV2 {
+	return errs.Must(conn[*greengrassv2_sdkv1.GreengrassV2](ctx, c, names.GreengrassV2))
 }
-func (client *AWSClient) GreengrassV2Conn() *greengrassv2.GreengrassV2 {
-	return client.greengrassv2Conn
+func (c *AWSClient) GroundStationConn(ctx context.Context) *groundstation_sdkv1.GroundStation {
+	return errs.Must(conn[*groundstation_sdkv1.GroundStation](ctx, c, names.GroundStation))
 }
-func (client *AWSClient) GroundStationConn() *groundstation.GroundStation {
-	return client.groundstationConn
+func (c *AWSClient) GuardDutyConn(ctx context.Context) *guardduty_sdkv1.GuardDuty {
+	return errs.Must(conn[*guardduty_sdkv1.GuardDuty](ctx, c, names.GuardDuty))
 }
-func (client *AWSClient) GuardDutyConn() *guardduty.GuardDuty {
-	return client.guarddutyConn
+func (c *AWSClient) HealthConn(ctx context.Context) *health_sdkv1.Health {
+	return errs.Must(conn[*health_sdkv1.Health](ctx, c, names.Health))
 }
-func (client *AWSClient) HealthConn() *health.Health {
-	return client.healthConn
+func (c *AWSClient) HealthLakeClient(ctx context.Context) *healthlake_sdkv2.Client {
+	return errs.Must(client[*healthlake_sdkv2.Client](ctx, c, names.HealthLake))
 }
-func (client *AWSClient) HealthLakeClient() *healthlake.Client {
-	return client.healthlakeClient
+func (c *AWSClient) HoneycodeConn(ctx context.Context) *honeycode_sdkv1.Honeycode {
+	return errs.Must(conn[*honeycode_sdkv1.Honeycode](ctx, c, names.Honeycode))
 }
-func (client *AWSClient) HoneycodeConn() *honeycode.Honeycode {
-	return client.honeycodeConn
+func (c *AWSClient) IAMConn(ctx context.Context) *iam_sdkv1.IAM {
+	return errs.Must(conn[*iam_sdkv1.IAM](ctx, c, names.IAM))
 }
-func (client *AWSClient) IAMConn() *iam.IAM {
-	return client.iamConn
+func (c *AWSClient) IVSConn(ctx context.Context) *ivs_sdkv1.IVS {
+	return errs.Must(conn[*ivs_sdkv1.IVS](ctx, c, names.IVS))
 }
-func (client *AWSClient) IVSConn() *ivs.IVS {
-	return client.ivsConn
+func (c *AWSClient) IVSChatClient(ctx context.Context) *ivschat_sdkv2.Client {
+	return errs.Must(client[*ivschat_sdkv2.Client](ctx, c, names.IVSChat))
 }
-func (client *AWSClient) IVSChatClient() *ivschat.Client {
-	return client.ivschatClient
+func (c *AWSClient) IdentityStoreClient(ctx context.Context) *identitystore_sdkv2.Client {
+	return errs.Must(client[*identitystore_sdkv2.Client](ctx, c, names.IdentityStore))
 }
-func (client *AWSClient) IdentityStoreClient() *identitystore.Client {
-	return client.identitystoreClient
+func (c *AWSClient) ImageBuilderConn(ctx context.Context) *imagebuilder_sdkv1.Imagebuilder {
+	return errs.Must(conn[*imagebuilder_sdkv1.Imagebuilder](ctx, c, names.ImageBuilder))
 }
-func (client *AWSClient) ImageBuilderConn() *imagebuilder.Imagebuilder {
-	return client.imagebuilderConn
+func (c *AWSClient) InspectorConn(ctx context.Context) *inspector_sdkv1.Inspector {
+	return errs.Must(conn[*inspector_sdkv1.Inspector](ctx, c, names.Inspector))
 }
-func (client *AWSClient) InspectorConn() *inspector.Inspector {
-	return client.inspectorConn
+func (c *AWSClient) Inspector2Client(ctx context.Context) *inspector2_sdkv2.Client {
+	return errs.Must(client[*inspector2_sdkv2.Client](ctx, c, names.Inspector2))
 }
-func (client *AWSClient) Inspector2Client() *inspector2.Client {
-	return client.inspector2Client
+func (c *AWSClient) InternetMonitorClient(ctx context.Context) *internetmonitor_sdkv2.Client {
+	return errs.Must(client[*internetmonitor_sdkv2.Client](ctx, c, names.InternetMonitor))
 }
-func (client *AWSClient) InternetMonitorConn() *internetmonitor.InternetMonitor {
-	return client.internetmonitorConn
+func (c *AWSClient) IoTConn(ctx context.Context) *iot_sdkv1.IoT {
+	return errs.Must(conn[*iot_sdkv1.IoT](ctx, c, names.IoT))
 }
-func (client *AWSClient) IoTConn() *iot.IoT {
-	return client.iotConn
+func (c *AWSClient) IoT1ClickDevicesConn(ctx context.Context) *iot1clickdevicesservice_sdkv1.IoT1ClickDevicesService {
+	return errs.Must(conn[*iot1clickdevicesservice_sdkv1.IoT1ClickDevicesService](ctx, c, names.IoT1ClickDevices))
 }
-func (client *AWSClient) IoT1ClickDevicesConn() *iot1clickdevicesservice.IoT1ClickDevicesService {
-	return client.iot1clickdevicesConn
+func (c *AWSClient) IoT1ClickProjectsConn(ctx context.Context) *iot1clickprojects_sdkv1.IoT1ClickProjects {
+	return errs.Must(conn[*iot1clickprojects_sdkv1.IoT1ClickProjects](ctx, c, names.IoT1ClickProjects))
 }
-func (client *AWSClient) IoT1ClickProjectsConn() *iot1clickprojects.IoT1ClickProjects {
-	return client.iot1clickprojectsConn
+func (c *AWSClient) IoTAnalyticsConn(ctx context.Context) *iotanalytics_sdkv1.IoTAnalytics {
+	return errs.Must(conn[*iotanalytics_sdkv1.IoTAnalytics](ctx, c, names.IoTAnalytics))
 }
-func (client *AWSClient) IoTAnalyticsConn() *iotanalytics.IoTAnalytics {
-	return client.iotanalyticsConn
+func (c *AWSClient) IoTDataConn(ctx context.Context) *iotdataplane_sdkv1.IoTDataPlane {
+	return errs.Must(conn[*iotdataplane_sdkv1.IoTDataPlane](ctx, c, names.IoTData))
 }
-func (client *AWSClient) IoTDataConn() *iotdataplane.IoTDataPlane {
-	return client.iotdataConn
+func (c *AWSClient) IoTDeviceAdvisorConn(ctx context.Context) *iotdeviceadvisor_sdkv1.IoTDeviceAdvisor {
+	return errs.Must(conn[*iotdeviceadvisor_sdkv1.IoTDeviceAdvisor](ctx, c, names.IoTDeviceAdvisor))
 }
-func (client *AWSClient) IoTDeviceAdvisorConn() *iotdeviceadvisor.IoTDeviceAdvisor {
-	return client.iotdeviceadvisorConn
+func (c *AWSClient) IoTEventsConn(ctx context.Context) *iotevents_sdkv1.IoTEvents {
+	return errs.Must(conn[*iotevents_sdkv1.IoTEvents](ctx, c, names.IoTEvents))
 }
-func (client *AWSClient) IoTEventsConn() *iotevents.IoTEvents {
-	return client.ioteventsConn
+func (c *AWSClient) IoTEventsDataConn(ctx context.Context) *ioteventsdata_sdkv1.IoTEventsData {
+	return errs.Must(conn[*ioteventsdata_sdkv1.IoTEventsData](ctx, c, names.IoTEventsData))
 }
-func (client *AWSClient) IoTEventsDataConn() *ioteventsdata.IoTEventsData {
-	return client.ioteventsdataConn
+func (c *AWSClient) IoTFleetHubConn(ctx context.Context) *iotfleethub_sdkv1.IoTFleetHub {
+	return errs.Must(conn[*iotfleethub_sdkv1.IoTFleetHub](ctx, c, names.IoTFleetHub))
 }
-func (client *AWSClient) IoTFleetHubConn() *iotfleethub.IoTFleetHub {
-	return client.iotfleethubConn
+func (c *AWSClient) IoTJobsDataConn(ctx context.Context) *iotjobsdataplane_sdkv1.IoTJobsDataPlane {
+	return errs.Must(conn[*iotjobsdataplane_sdkv1.IoTJobsDataPlane](ctx, c, names.IoTJobsData))
 }
-func (client *AWSClient) IoTJobsDataConn() *iotjobsdataplane.IoTJobsDataPlane {
-	return client.iotjobsdataConn
+func (c *AWSClient) IoTSecureTunnelingConn(ctx context.Context) *iotsecuretunneling_sdkv1.IoTSecureTunneling {
+	return errs.Must(conn[*iotsecuretunneling_sdkv1.IoTSecureTunneling](ctx, c, names.IoTSecureTunneling))
 }
-func (client *AWSClient) IoTSecureTunnelingConn() *iotsecuretunneling.IoTSecureTunneling {
-	return client.iotsecuretunnelingConn
+func (c *AWSClient) IoTSiteWiseConn(ctx context.Context) *iotsitewise_sdkv1.IoTSiteWise {
+	return errs.Must(conn[*iotsitewise_sdkv1.IoTSiteWise](ctx, c, names.IoTSiteWise))
 }
-func (client *AWSClient) IoTSiteWiseConn() *iotsitewise.IoTSiteWise {
-	return client.iotsitewiseConn
+func (c *AWSClient) IoTThingsGraphConn(ctx context.Context) *iotthingsgraph_sdkv1.IoTThingsGraph {
+	return errs.Must(conn[*iotthingsgraph_sdkv1.IoTThingsGraph](ctx, c, names.IoTThingsGraph))
 }
-func (client *AWSClient) IoTThingsGraphConn() *iotthingsgraph.IoTThingsGraph {
-	return client.iotthingsgraphConn
+func (c *AWSClient) IoTTwinMakerConn(ctx context.Context) *iottwinmaker_sdkv1.IoTTwinMaker {
+	return errs.Must(conn[*iottwinmaker_sdkv1.IoTTwinMaker](ctx, c, names.IoTTwinMaker))
 }
-func (client *AWSClient) IoTTwinMakerConn() *iottwinmaker.IoTTwinMaker {
-	return client.iottwinmakerConn
+func (c *AWSClient) IoTWirelessConn(ctx context.Context) *iotwireless_sdkv1.IoTWireless {
+	return errs.Must(conn[*iotwireless_sdkv1.IoTWireless](ctx, c, names.IoTWireless))
 }
-func (client *AWSClient) IoTWirelessConn() *iotwireless.IoTWireless {
-	return client.iotwirelessConn
+func (c *AWSClient) KMSConn(ctx context.Context) *kms_sdkv1.KMS {
+	return errs.Must(conn[*kms_sdkv1.KMS](ctx, c, names.KMS))
 }
-func (client *AWSClient) KMSConn() *kms.KMS {
-	return client.kmsConn
+func (c *AWSClient) KafkaConn(ctx context.Context) *kafka_sdkv1.Kafka {
+	return errs.Must(conn[*kafka_sdkv1.Kafka](ctx, c, names.Kafka))
 }
-func (client *AWSClient) KafkaConn() *kafka.Kafka {
-	return client.kafkaConn
+func (c *AWSClient) KafkaConnectConn(ctx context.Context) *kafkaconnect_sdkv1.KafkaConnect {
+	return errs.Must(conn[*kafkaconnect_sdkv1.KafkaConnect](ctx, c, names.KafkaConnect))
 }
-func (client *AWSClient) KafkaConnectConn() *kafkaconnect.KafkaConnect {
-	return client.kafkaconnectConn
+func (c *AWSClient) KendraClient(ctx context.Context) *kendra_sdkv2.Client {
+	return errs.Must(client[*kendra_sdkv2.Client](ctx, c, names.Kendra))
 }
-func (client *AWSClient) KendraClient() *kendra.Client {
-	return client.kendraClient
+func (c *AWSClient) KeyspacesClient(ctx context.Context) *keyspaces_sdkv2.Client {
+	return errs.Must(client[*keyspaces_sdkv2.Client](ctx, c, names.Keyspaces))
 }
-func (client *AWSClient) KeyspacesConn() *keyspaces.Keyspaces {
-	return client.keyspacesConn
+func (c *AWSClient) KinesisConn(ctx context.Context) *kinesis_sdkv1.Kinesis {
+	return errs.Must(conn[*kinesis_sdkv1.Kinesis](ctx, c, names.Kinesis))
 }
-func (client *AWSClient) KinesisConn() *kinesis.Kinesis {
-	return client.kinesisConn
+func (c *AWSClient) KinesisAnalyticsConn(ctx context.Context) *kinesisanalytics_sdkv1.KinesisAnalytics {
+	return errs.Must(conn[*kinesisanalytics_sdkv1.KinesisAnalytics](ctx, c, names.KinesisAnalytics))
 }
-func (client *AWSClient) 
KinesisAnalyticsConn() *kinesisanalytics.KinesisAnalytics { - return client.kinesisanalyticsConn +func (c *AWSClient) KinesisAnalyticsV2Conn(ctx context.Context) *kinesisanalyticsv2_sdkv1.KinesisAnalyticsV2 { + return errs.Must(conn[*kinesisanalyticsv2_sdkv1.KinesisAnalyticsV2](ctx, c, names.KinesisAnalyticsV2)) } -func (client *AWSClient) KinesisAnalyticsV2Conn() *kinesisanalyticsv2.KinesisAnalyticsV2 { - return client.kinesisanalyticsv2Conn +func (c *AWSClient) KinesisVideoConn(ctx context.Context) *kinesisvideo_sdkv1.KinesisVideo { + return errs.Must(conn[*kinesisvideo_sdkv1.KinesisVideo](ctx, c, names.KinesisVideo)) } -func (client *AWSClient) KinesisVideoConn() *kinesisvideo.KinesisVideo { - return client.kinesisvideoConn +func (c *AWSClient) KinesisVideoArchivedMediaConn(ctx context.Context) *kinesisvideoarchivedmedia_sdkv1.KinesisVideoArchivedMedia { + return errs.Must(conn[*kinesisvideoarchivedmedia_sdkv1.KinesisVideoArchivedMedia](ctx, c, names.KinesisVideoArchivedMedia)) } -func (client *AWSClient) KinesisVideoArchivedMediaConn() *kinesisvideoarchivedmedia.KinesisVideoArchivedMedia { - return client.kinesisvideoarchivedmediaConn +func (c *AWSClient) KinesisVideoMediaConn(ctx context.Context) *kinesisvideomedia_sdkv1.KinesisVideoMedia { + return errs.Must(conn[*kinesisvideomedia_sdkv1.KinesisVideoMedia](ctx, c, names.KinesisVideoMedia)) } -func (client *AWSClient) KinesisVideoMediaConn() *kinesisvideomedia.KinesisVideoMedia { - return client.kinesisvideomediaConn +func (c *AWSClient) KinesisVideoSignalingConn(ctx context.Context) *kinesisvideosignalingchannels_sdkv1.KinesisVideoSignalingChannels { + return errs.Must(conn[*kinesisvideosignalingchannels_sdkv1.KinesisVideoSignalingChannels](ctx, c, names.KinesisVideoSignaling)) } -func (client *AWSClient) KinesisVideoSignalingConn() *kinesisvideosignalingchannels.KinesisVideoSignalingChannels { - return client.kinesisvideosignalingConn +func (c *AWSClient) LakeFormationConn(ctx context.Context) 
*lakeformation_sdkv1.LakeFormation { + return errs.Must(conn[*lakeformation_sdkv1.LakeFormation](ctx, c, names.LakeFormation)) } -func (client *AWSClient) LakeFormationConn() *lakeformation.LakeFormation { - return client.lakeformationConn +func (c *AWSClient) LambdaConn(ctx context.Context) *lambda_sdkv1.Lambda { + return errs.Must(conn[*lambda_sdkv1.Lambda](ctx, c, names.Lambda)) } -func (client *AWSClient) LambdaConn() *lambda.Lambda { - return client.lambdaConn +func (c *AWSClient) LambdaClient(ctx context.Context) *lambda_sdkv2.Client { + return errs.Must(client[*lambda_sdkv2.Client](ctx, c, names.Lambda)) } -func (client *AWSClient) LambdaClient() *lambda_sdkv2.Client { - return client.lambdaClient.Client() +func (c *AWSClient) LexModelsConn(ctx context.Context) *lexmodelbuildingservice_sdkv1.LexModelBuildingService { + return errs.Must(conn[*lexmodelbuildingservice_sdkv1.LexModelBuildingService](ctx, c, names.LexModels)) } -func (client *AWSClient) LexModelsConn() *lexmodelbuildingservice.LexModelBuildingService { - return client.lexmodelsConn +func (c *AWSClient) LexModelsV2Conn(ctx context.Context) *lexmodelsv2_sdkv1.LexModelsV2 { + return errs.Must(conn[*lexmodelsv2_sdkv1.LexModelsV2](ctx, c, names.LexModelsV2)) } -func (client *AWSClient) LexModelsV2Conn() *lexmodelsv2.LexModelsV2 { - return client.lexmodelsv2Conn +func (c *AWSClient) LexRuntimeConn(ctx context.Context) *lexruntimeservice_sdkv1.LexRuntimeService { + return errs.Must(conn[*lexruntimeservice_sdkv1.LexRuntimeService](ctx, c, names.LexRuntime)) } -func (client *AWSClient) LexRuntimeConn() *lexruntimeservice.LexRuntimeService { - return client.lexruntimeConn +func (c *AWSClient) LexRuntimeV2Conn(ctx context.Context) *lexruntimev2_sdkv1.LexRuntimeV2 { + return errs.Must(conn[*lexruntimev2_sdkv1.LexRuntimeV2](ctx, c, names.LexRuntimeV2)) } -func (client *AWSClient) LexRuntimeV2Conn() *lexruntimev2.LexRuntimeV2 { - return client.lexruntimev2Conn +func (c *AWSClient) LicenseManagerConn(ctx 
context.Context) *licensemanager_sdkv1.LicenseManager { + return errs.Must(conn[*licensemanager_sdkv1.LicenseManager](ctx, c, names.LicenseManager)) } -func (client *AWSClient) LicenseManagerConn() *licensemanager.LicenseManager { - return client.licensemanagerConn +func (c *AWSClient) LightsailClient(ctx context.Context) *lightsail_sdkv2.Client { + return errs.Must(client[*lightsail_sdkv2.Client](ctx, c, names.Lightsail)) } -func (client *AWSClient) LightsailConn() *lightsail.Lightsail { - return client.lightsailConn +func (c *AWSClient) LocationConn(ctx context.Context) *locationservice_sdkv1.LocationService { + return errs.Must(conn[*locationservice_sdkv1.LocationService](ctx, c, names.Location)) } -func (client *AWSClient) LocationConn() *locationservice.LocationService { - return client.locationConn +func (c *AWSClient) LogsConn(ctx context.Context) *cloudwatchlogs_sdkv1.CloudWatchLogs { + return errs.Must(conn[*cloudwatchlogs_sdkv1.CloudWatchLogs](ctx, c, names.Logs)) } -func (client *AWSClient) LogsConn() *cloudwatchlogs.CloudWatchLogs { - return client.logsConn +func (c *AWSClient) LogsClient(ctx context.Context) *cloudwatchlogs_sdkv2.Client { + return errs.Must(client[*cloudwatchlogs_sdkv2.Client](ctx, c, names.Logs)) } -func (client *AWSClient) LogsClient() *cloudwatchlogs_sdkv2.Client { - return client.logsClient.Client() +func (c *AWSClient) LookoutEquipmentConn(ctx context.Context) *lookoutequipment_sdkv1.LookoutEquipment { + return errs.Must(conn[*lookoutequipment_sdkv1.LookoutEquipment](ctx, c, names.LookoutEquipment)) } -func (client *AWSClient) LookoutEquipmentConn() *lookoutequipment.LookoutEquipment { - return client.lookoutequipmentConn +func (c *AWSClient) LookoutMetricsConn(ctx context.Context) *lookoutmetrics_sdkv1.LookoutMetrics { + return errs.Must(conn[*lookoutmetrics_sdkv1.LookoutMetrics](ctx, c, names.LookoutMetrics)) } -func (client *AWSClient) LookoutMetricsConn() *lookoutmetrics.LookoutMetrics { - return client.lookoutmetricsConn 
+func (c *AWSClient) LookoutVisionConn(ctx context.Context) *lookoutforvision_sdkv1.LookoutForVision { + return errs.Must(conn[*lookoutforvision_sdkv1.LookoutForVision](ctx, c, names.LookoutVision)) } -func (client *AWSClient) LookoutVisionConn() *lookoutforvision.LookoutForVision { - return client.lookoutvisionConn +func (c *AWSClient) MQConn(ctx context.Context) *mq_sdkv1.MQ { + return errs.Must(conn[*mq_sdkv1.MQ](ctx, c, names.MQ)) } -func (client *AWSClient) MQConn() *mq.MQ { - return client.mqConn +func (c *AWSClient) MTurkConn(ctx context.Context) *mturk_sdkv1.MTurk { + return errs.Must(conn[*mturk_sdkv1.MTurk](ctx, c, names.MTurk)) } -func (client *AWSClient) MTurkConn() *mturk.MTurk { - return client.mturkConn +func (c *AWSClient) MWAAConn(ctx context.Context) *mwaa_sdkv1.MWAA { + return errs.Must(conn[*mwaa_sdkv1.MWAA](ctx, c, names.MWAA)) } -func (client *AWSClient) MWAAConn() *mwaa.MWAA { - return client.mwaaConn +func (c *AWSClient) MachineLearningConn(ctx context.Context) *machinelearning_sdkv1.MachineLearning { + return errs.Must(conn[*machinelearning_sdkv1.MachineLearning](ctx, c, names.MachineLearning)) } -func (client *AWSClient) MachineLearningConn() *machinelearning.MachineLearning { - return client.machinelearningConn +func (c *AWSClient) MacieConn(ctx context.Context) *macie_sdkv1.Macie { + return errs.Must(conn[*macie_sdkv1.Macie](ctx, c, names.Macie)) } -func (client *AWSClient) MacieConn() *macie.Macie { - return client.macieConn +func (c *AWSClient) Macie2Conn(ctx context.Context) *macie2_sdkv1.Macie2 { + return errs.Must(conn[*macie2_sdkv1.Macie2](ctx, c, names.Macie2)) } -func (client *AWSClient) Macie2Conn() *macie2.Macie2 { - return client.macie2Conn +func (c *AWSClient) ManagedBlockchainConn(ctx context.Context) *managedblockchain_sdkv1.ManagedBlockchain { + return errs.Must(conn[*managedblockchain_sdkv1.ManagedBlockchain](ctx, c, names.ManagedBlockchain)) } -func (client *AWSClient) ManagedBlockchainConn() 
*managedblockchain.ManagedBlockchain { - return client.managedblockchainConn +func (c *AWSClient) MarketplaceCatalogConn(ctx context.Context) *marketplacecatalog_sdkv1.MarketplaceCatalog { + return errs.Must(conn[*marketplacecatalog_sdkv1.MarketplaceCatalog](ctx, c, names.MarketplaceCatalog)) } -func (client *AWSClient) MarketplaceCatalogConn() *marketplacecatalog.MarketplaceCatalog { - return client.marketplacecatalogConn +func (c *AWSClient) MarketplaceCommerceAnalyticsConn(ctx context.Context) *marketplacecommerceanalytics_sdkv1.MarketplaceCommerceAnalytics { + return errs.Must(conn[*marketplacecommerceanalytics_sdkv1.MarketplaceCommerceAnalytics](ctx, c, names.MarketplaceCommerceAnalytics)) } -func (client *AWSClient) MarketplaceCommerceAnalyticsConn() *marketplacecommerceanalytics.MarketplaceCommerceAnalytics { - return client.marketplacecommerceanalyticsConn +func (c *AWSClient) MarketplaceEntitlementConn(ctx context.Context) *marketplaceentitlementservice_sdkv1.MarketplaceEntitlementService { + return errs.Must(conn[*marketplaceentitlementservice_sdkv1.MarketplaceEntitlementService](ctx, c, names.MarketplaceEntitlement)) } -func (client *AWSClient) MarketplaceEntitlementConn() *marketplaceentitlementservice.MarketplaceEntitlementService { - return client.marketplaceentitlementConn +func (c *AWSClient) MarketplaceMeteringConn(ctx context.Context) *marketplacemetering_sdkv1.MarketplaceMetering { + return errs.Must(conn[*marketplacemetering_sdkv1.MarketplaceMetering](ctx, c, names.MarketplaceMetering)) } -func (client *AWSClient) MarketplaceMeteringConn() *marketplacemetering.MarketplaceMetering { - return client.marketplacemeteringConn +func (c *AWSClient) MediaConnectConn(ctx context.Context) *mediaconnect_sdkv1.MediaConnect { + return errs.Must(conn[*mediaconnect_sdkv1.MediaConnect](ctx, c, names.MediaConnect)) } -func (client *AWSClient) MediaConnectConn() *mediaconnect.MediaConnect { - return client.mediaconnectConn +func (c *AWSClient) 
MediaConvertConn(ctx context.Context) *mediaconvert_sdkv1.MediaConvert { + return errs.Must(conn[*mediaconvert_sdkv1.MediaConvert](ctx, c, names.MediaConvert)) } -func (client *AWSClient) MediaConvertConn() *mediaconvert.MediaConvert { - return client.mediaconvertConn +func (c *AWSClient) MediaLiveClient(ctx context.Context) *medialive_sdkv2.Client { + return errs.Must(client[*medialive_sdkv2.Client](ctx, c, names.MediaLive)) } -func (client *AWSClient) MediaLiveClient() *medialive.Client { - return client.medialiveClient +func (c *AWSClient) MediaPackageConn(ctx context.Context) *mediapackage_sdkv1.MediaPackage { + return errs.Must(conn[*mediapackage_sdkv1.MediaPackage](ctx, c, names.MediaPackage)) } -func (client *AWSClient) MediaPackageConn() *mediapackage.MediaPackage { - return client.mediapackageConn +func (c *AWSClient) MediaPackageVODConn(ctx context.Context) *mediapackagevod_sdkv1.MediaPackageVod { + return errs.Must(conn[*mediapackagevod_sdkv1.MediaPackageVod](ctx, c, names.MediaPackageVOD)) } -func (client *AWSClient) MediaPackageVODConn() *mediapackagevod.MediaPackageVod { - return client.mediapackagevodConn +func (c *AWSClient) MediaStoreConn(ctx context.Context) *mediastore_sdkv1.MediaStore { + return errs.Must(conn[*mediastore_sdkv1.MediaStore](ctx, c, names.MediaStore)) } -func (client *AWSClient) MediaStoreConn() *mediastore.MediaStore { - return client.mediastoreConn +func (c *AWSClient) MediaStoreDataConn(ctx context.Context) *mediastoredata_sdkv1.MediaStoreData { + return errs.Must(conn[*mediastoredata_sdkv1.MediaStoreData](ctx, c, names.MediaStoreData)) } -func (client *AWSClient) MediaStoreDataConn() *mediastoredata.MediaStoreData { - return client.mediastoredataConn +func (c *AWSClient) MediaTailorConn(ctx context.Context) *mediatailor_sdkv1.MediaTailor { + return errs.Must(conn[*mediatailor_sdkv1.MediaTailor](ctx, c, names.MediaTailor)) } -func (client *AWSClient) MediaTailorConn() *mediatailor.MediaTailor { - return client.mediatailorConn 
+func (c *AWSClient) MemoryDBConn(ctx context.Context) *memorydb_sdkv1.MemoryDB { + return errs.Must(conn[*memorydb_sdkv1.MemoryDB](ctx, c, names.MemoryDB)) } -func (client *AWSClient) MemoryDBConn() *memorydb.MemoryDB { - return client.memorydbConn +func (c *AWSClient) MgHConn(ctx context.Context) *migrationhub_sdkv1.MigrationHub { + return errs.Must(conn[*migrationhub_sdkv1.MigrationHub](ctx, c, names.MgH)) } -func (client *AWSClient) MgHConn() *migrationhub.MigrationHub { - return client.mghConn +func (c *AWSClient) MgnConn(ctx context.Context) *mgn_sdkv1.Mgn { + return errs.Must(conn[*mgn_sdkv1.Mgn](ctx, c, names.Mgn)) } -func (client *AWSClient) MgnConn() *mgn.Mgn { - return client.mgnConn +func (c *AWSClient) MigrationHubConfigConn(ctx context.Context) *migrationhubconfig_sdkv1.MigrationHubConfig { + return errs.Must(conn[*migrationhubconfig_sdkv1.MigrationHubConfig](ctx, c, names.MigrationHubConfig)) } -func (client *AWSClient) MigrationHubConfigConn() *migrationhubconfig.MigrationHubConfig { - return client.migrationhubconfigConn +func (c *AWSClient) MigrationHubRefactorSpacesConn(ctx context.Context) *migrationhubrefactorspaces_sdkv1.MigrationHubRefactorSpaces { + return errs.Must(conn[*migrationhubrefactorspaces_sdkv1.MigrationHubRefactorSpaces](ctx, c, names.MigrationHubRefactorSpaces)) } -func (client *AWSClient) MigrationHubRefactorSpacesConn() *migrationhubrefactorspaces.MigrationHubRefactorSpaces { - return client.migrationhubrefactorspacesConn +func (c *AWSClient) MigrationHubStrategyConn(ctx context.Context) *migrationhubstrategyrecommendations_sdkv1.MigrationHubStrategyRecommendations { + return errs.Must(conn[*migrationhubstrategyrecommendations_sdkv1.MigrationHubStrategyRecommendations](ctx, c, names.MigrationHubStrategy)) } -func (client *AWSClient) MigrationHubStrategyConn() *migrationhubstrategyrecommendations.MigrationHubStrategyRecommendations { - return client.migrationhubstrategyConn +func (c *AWSClient) MobileConn(ctx context.Context) 
*mobile_sdkv1.Mobile { + return errs.Must(conn[*mobile_sdkv1.Mobile](ctx, c, names.Mobile)) } -func (client *AWSClient) MobileConn() *mobile.Mobile { - return client.mobileConn +func (c *AWSClient) NeptuneConn(ctx context.Context) *neptune_sdkv1.Neptune { + return errs.Must(conn[*neptune_sdkv1.Neptune](ctx, c, names.Neptune)) } -func (client *AWSClient) NeptuneConn() *neptune.Neptune { - return client.neptuneConn +func (c *AWSClient) NetworkFirewallConn(ctx context.Context) *networkfirewall_sdkv1.NetworkFirewall { + return errs.Must(conn[*networkfirewall_sdkv1.NetworkFirewall](ctx, c, names.NetworkFirewall)) } -func (client *AWSClient) NetworkFirewallConn() *networkfirewall.NetworkFirewall { - return client.networkfirewallConn +func (c *AWSClient) NetworkManagerConn(ctx context.Context) *networkmanager_sdkv1.NetworkManager { + return errs.Must(conn[*networkmanager_sdkv1.NetworkManager](ctx, c, names.NetworkManager)) } -func (client *AWSClient) NetworkManagerConn() *networkmanager.NetworkManager { - return client.networkmanagerConn +func (c *AWSClient) NimbleConn(ctx context.Context) *nimblestudio_sdkv1.NimbleStudio { + return errs.Must(conn[*nimblestudio_sdkv1.NimbleStudio](ctx, c, names.Nimble)) } -func (client *AWSClient) NimbleConn() *nimblestudio.NimbleStudio { - return client.nimbleConn +func (c *AWSClient) ObservabilityAccessManagerClient(ctx context.Context) *oam_sdkv2.Client { + return errs.Must(client[*oam_sdkv2.Client](ctx, c, names.ObservabilityAccessManager)) } -func (client *AWSClient) ObservabilityAccessManagerClient() *oam.Client { - return client.oamClient +func (c *AWSClient) OpenSearchConn(ctx context.Context) *opensearchservice_sdkv1.OpenSearchService { + return errs.Must(conn[*opensearchservice_sdkv1.OpenSearchService](ctx, c, names.OpenSearch)) } -func (client *AWSClient) OpenSearchConn() *opensearchservice.OpenSearchService { - return client.opensearchConn +func (c *AWSClient) OpenSearchServerlessClient(ctx context.Context) 
*opensearchserverless_sdkv2.Client { + return errs.Must(client[*opensearchserverless_sdkv2.Client](ctx, c, names.OpenSearchServerless)) } -func (client *AWSClient) OpenSearchServerlessClient() *opensearchserverless.Client { - return client.opensearchserverlessClient +func (c *AWSClient) OpsWorksConn(ctx context.Context) *opsworks_sdkv1.OpsWorks { + return errs.Must(conn[*opsworks_sdkv1.OpsWorks](ctx, c, names.OpsWorks)) } -func (client *AWSClient) OpsWorksConn() *opsworks.OpsWorks { - return client.opsworksConn +func (c *AWSClient) OpsWorksCMConn(ctx context.Context) *opsworkscm_sdkv1.OpsWorksCM { + return errs.Must(conn[*opsworkscm_sdkv1.OpsWorksCM](ctx, c, names.OpsWorksCM)) } -func (client *AWSClient) OpsWorksCMConn() *opsworkscm.OpsWorksCM { - return client.opsworkscmConn +func (c *AWSClient) OrganizationsConn(ctx context.Context) *organizations_sdkv1.Organizations { + return errs.Must(conn[*organizations_sdkv1.Organizations](ctx, c, names.Organizations)) } -func (client *AWSClient) OrganizationsConn() *organizations.Organizations { - return client.organizationsConn +func (c *AWSClient) OutpostsConn(ctx context.Context) *outposts_sdkv1.Outposts { + return errs.Must(conn[*outposts_sdkv1.Outposts](ctx, c, names.Outposts)) } -func (client *AWSClient) OutpostsConn() *outposts.Outposts { - return client.outpostsConn +func (c *AWSClient) PIConn(ctx context.Context) *pi_sdkv1.PI { + return errs.Must(conn[*pi_sdkv1.PI](ctx, c, names.PI)) } -func (client *AWSClient) PIConn() *pi.PI { - return client.piConn +func (c *AWSClient) PanoramaConn(ctx context.Context) *panorama_sdkv1.Panorama { + return errs.Must(conn[*panorama_sdkv1.Panorama](ctx, c, names.Panorama)) } -func (client *AWSClient) PanoramaConn() *panorama.Panorama { - return client.panoramaConn +func (c *AWSClient) PersonalizeConn(ctx context.Context) *personalize_sdkv1.Personalize { + return errs.Must(conn[*personalize_sdkv1.Personalize](ctx, c, names.Personalize)) } -func (client *AWSClient) PersonalizeConn() 
*personalize.Personalize { - return client.personalizeConn +func (c *AWSClient) PersonalizeEventsConn(ctx context.Context) *personalizeevents_sdkv1.PersonalizeEvents { + return errs.Must(conn[*personalizeevents_sdkv1.PersonalizeEvents](ctx, c, names.PersonalizeEvents)) } -func (client *AWSClient) PersonalizeEventsConn() *personalizeevents.PersonalizeEvents { - return client.personalizeeventsConn +func (c *AWSClient) PersonalizeRuntimeConn(ctx context.Context) *personalizeruntime_sdkv1.PersonalizeRuntime { + return errs.Must(conn[*personalizeruntime_sdkv1.PersonalizeRuntime](ctx, c, names.PersonalizeRuntime)) } -func (client *AWSClient) PersonalizeRuntimeConn() *personalizeruntime.PersonalizeRuntime { - return client.personalizeruntimeConn +func (c *AWSClient) PinpointConn(ctx context.Context) *pinpoint_sdkv1.Pinpoint { + return errs.Must(conn[*pinpoint_sdkv1.Pinpoint](ctx, c, names.Pinpoint)) } -func (client *AWSClient) PinpointConn() *pinpoint.Pinpoint { - return client.pinpointConn +func (c *AWSClient) PinpointEmailConn(ctx context.Context) *pinpointemail_sdkv1.PinpointEmail { + return errs.Must(conn[*pinpointemail_sdkv1.PinpointEmail](ctx, c, names.PinpointEmail)) } -func (client *AWSClient) PinpointEmailConn() *pinpointemail.PinpointEmail { - return client.pinpointemailConn +func (c *AWSClient) PinpointSMSVoiceConn(ctx context.Context) *pinpointsmsvoice_sdkv1.PinpointSMSVoice { + return errs.Must(conn[*pinpointsmsvoice_sdkv1.PinpointSMSVoice](ctx, c, names.PinpointSMSVoice)) } -func (client *AWSClient) PinpointSMSVoiceConn() *pinpointsmsvoice.PinpointSMSVoice { - return client.pinpointsmsvoiceConn +func (c *AWSClient) PipesClient(ctx context.Context) *pipes_sdkv2.Client { + return errs.Must(client[*pipes_sdkv2.Client](ctx, c, names.Pipes)) } -func (client *AWSClient) PipesClient() *pipes.Client { - return client.pipesClient +func (c *AWSClient) PollyConn(ctx context.Context) *polly_sdkv1.Polly { + return errs.Must(conn[*polly_sdkv1.Polly](ctx, c, names.Polly)) 
} -func (client *AWSClient) PollyConn() *polly.Polly { - return client.pollyConn +func (c *AWSClient) PricingClient(ctx context.Context) *pricing_sdkv2.Client { + return errs.Must(client[*pricing_sdkv2.Client](ctx, c, names.Pricing)) } -func (client *AWSClient) PricingConn() *pricing.Pricing { - return client.pricingConn +func (c *AWSClient) ProtonConn(ctx context.Context) *proton_sdkv1.Proton { + return errs.Must(conn[*proton_sdkv1.Proton](ctx, c, names.Proton)) } -func (client *AWSClient) ProtonConn() *proton.Proton { - return client.protonConn +func (c *AWSClient) QLDBClient(ctx context.Context) *qldb_sdkv2.Client { + return errs.Must(client[*qldb_sdkv2.Client](ctx, c, names.QLDB)) } -func (client *AWSClient) QLDBConn() *qldb.QLDB { - return client.qldbConn +func (c *AWSClient) QLDBSessionConn(ctx context.Context) *qldbsession_sdkv1.QLDBSession { + return errs.Must(conn[*qldbsession_sdkv1.QLDBSession](ctx, c, names.QLDBSession)) } -func (client *AWSClient) QLDBSessionConn() *qldbsession.QLDBSession { - return client.qldbsessionConn +func (c *AWSClient) QuickSightConn(ctx context.Context) *quicksight_sdkv1.QuickSight { + return errs.Must(conn[*quicksight_sdkv1.QuickSight](ctx, c, names.QuickSight)) } -func (client *AWSClient) QuickSightConn() *quicksight.QuickSight { - return client.quicksightConn +func (c *AWSClient) RAMConn(ctx context.Context) *ram_sdkv1.RAM { + return errs.Must(conn[*ram_sdkv1.RAM](ctx, c, names.RAM)) } -func (client *AWSClient) RAMConn() *ram.RAM { - return client.ramConn +func (c *AWSClient) RBinClient(ctx context.Context) *rbin_sdkv2.Client { + return errs.Must(client[*rbin_sdkv2.Client](ctx, c, names.RBin)) } -func (client *AWSClient) RBinClient() *rbin.Client { - return client.rbinClient +func (c *AWSClient) RDSConn(ctx context.Context) *rds_sdkv1.RDS { + return errs.Must(conn[*rds_sdkv1.RDS](ctx, c, names.RDS)) } -func (client *AWSClient) RDSConn() *rds.RDS { - return client.rdsConn +func (c *AWSClient) RDSClient(ctx context.Context) 
*rds_sdkv2.Client { + return errs.Must(client[*rds_sdkv2.Client](ctx, c, names.RDS)) } -func (client *AWSClient) RDSClient() *rds_sdkv2.Client { - return client.rdsClient.Client() +func (c *AWSClient) RDSDataConn(ctx context.Context) *rdsdataservice_sdkv1.RDSDataService { + return errs.Must(conn[*rdsdataservice_sdkv1.RDSDataService](ctx, c, names.RDSData)) } -func (client *AWSClient) RDSDataConn() *rdsdataservice.RDSDataService { - return client.rdsdataConn +func (c *AWSClient) RUMConn(ctx context.Context) *cloudwatchrum_sdkv1.CloudWatchRUM { + return errs.Must(conn[*cloudwatchrum_sdkv1.CloudWatchRUM](ctx, c, names.RUM)) } -func (client *AWSClient) RUMConn() *cloudwatchrum.CloudWatchRUM { - return client.rumConn +func (c *AWSClient) RedshiftConn(ctx context.Context) *redshift_sdkv1.Redshift { + return errs.Must(conn[*redshift_sdkv1.Redshift](ctx, c, names.Redshift)) } -func (client *AWSClient) RedshiftConn() *redshift.Redshift { - return client.redshiftConn +func (c *AWSClient) RedshiftDataConn(ctx context.Context) *redshiftdataapiservice_sdkv1.RedshiftDataAPIService { + return errs.Must(conn[*redshiftdataapiservice_sdkv1.RedshiftDataAPIService](ctx, c, names.RedshiftData)) } -func (client *AWSClient) RedshiftDataConn() *redshiftdataapiservice.RedshiftDataAPIService { - return client.redshiftdataConn +func (c *AWSClient) RedshiftServerlessConn(ctx context.Context) *redshiftserverless_sdkv1.RedshiftServerless { + return errs.Must(conn[*redshiftserverless_sdkv1.RedshiftServerless](ctx, c, names.RedshiftServerless)) } -func (client *AWSClient) RedshiftServerlessConn() *redshiftserverless.RedshiftServerless { - return client.redshiftserverlessConn +func (c *AWSClient) RekognitionConn(ctx context.Context) *rekognition_sdkv1.Rekognition { + return errs.Must(conn[*rekognition_sdkv1.Rekognition](ctx, c, names.Rekognition)) } -func (client *AWSClient) RekognitionConn() *rekognition.Rekognition { - return client.rekognitionConn +func (c *AWSClient) ResilienceHubConn(ctx 
context.Context) *resiliencehub_sdkv1.ResilienceHub { + return errs.Must(conn[*resiliencehub_sdkv1.ResilienceHub](ctx, c, names.ResilienceHub)) } -func (client *AWSClient) ResilienceHubConn() *resiliencehub.ResilienceHub { - return client.resiliencehubConn +func (c *AWSClient) ResourceExplorer2Client(ctx context.Context) *resourceexplorer2_sdkv2.Client { + return errs.Must(client[*resourceexplorer2_sdkv2.Client](ctx, c, names.ResourceExplorer2)) } -func (client *AWSClient) ResourceExplorer2Client() *resourceexplorer2.Client { - return client.resourceexplorer2Client +func (c *AWSClient) ResourceGroupsConn(ctx context.Context) *resourcegroups_sdkv1.ResourceGroups { + return errs.Must(conn[*resourcegroups_sdkv1.ResourceGroups](ctx, c, names.ResourceGroups)) } -func (client *AWSClient) ResourceGroupsConn() *resourcegroups.ResourceGroups { - return client.resourcegroupsConn +func (c *AWSClient) ResourceGroupsTaggingAPIConn(ctx context.Context) *resourcegroupstaggingapi_sdkv1.ResourceGroupsTaggingAPI { + return errs.Must(conn[*resourcegroupstaggingapi_sdkv1.ResourceGroupsTaggingAPI](ctx, c, names.ResourceGroupsTaggingAPI)) } -func (client *AWSClient) ResourceGroupsTaggingAPIConn() *resourcegroupstaggingapi.ResourceGroupsTaggingAPI { - return client.resourcegroupstaggingapiConn +func (c *AWSClient) RoboMakerConn(ctx context.Context) *robomaker_sdkv1.RoboMaker { + return errs.Must(conn[*robomaker_sdkv1.RoboMaker](ctx, c, names.RoboMaker)) } -func (client *AWSClient) RoboMakerConn() *robomaker.RoboMaker { - return client.robomakerConn +func (c *AWSClient) RolesAnywhereClient(ctx context.Context) *rolesanywhere_sdkv2.Client { + return errs.Must(client[*rolesanywhere_sdkv2.Client](ctx, c, names.RolesAnywhere)) } -func (client *AWSClient) RolesAnywhereClient() *rolesanywhere.Client { - return client.rolesanywhereClient +func (c *AWSClient) Route53Conn(ctx context.Context) *route53_sdkv1.Route53 { + return errs.Must(conn[*route53_sdkv1.Route53](ctx, c, names.Route53)) } -func 
(client *AWSClient) Route53Conn() *route53.Route53 {
-	return client.route53Conn
+func (c *AWSClient) Route53DomainsClient(ctx context.Context) *route53domains_sdkv2.Client {
+	return errs.Must(client[*route53domains_sdkv2.Client](ctx, c, names.Route53Domains))
 }
-func (client *AWSClient) Route53DomainsClient() *route53domains.Client {
-	return client.route53domainsClient
+func (c *AWSClient) Route53RecoveryClusterConn(ctx context.Context) *route53recoverycluster_sdkv1.Route53RecoveryCluster {
+	return errs.Must(conn[*route53recoverycluster_sdkv1.Route53RecoveryCluster](ctx, c, names.Route53RecoveryCluster))
 }
-func (client *AWSClient) Route53RecoveryClusterConn() *route53recoverycluster.Route53RecoveryCluster {
-	return client.route53recoveryclusterConn
+func (c *AWSClient) Route53RecoveryControlConfigConn(ctx context.Context) *route53recoverycontrolconfig_sdkv1.Route53RecoveryControlConfig {
+	return errs.Must(conn[*route53recoverycontrolconfig_sdkv1.Route53RecoveryControlConfig](ctx, c, names.Route53RecoveryControlConfig))
 }
-func (client *AWSClient) Route53RecoveryControlConfigConn() *route53recoverycontrolconfig.Route53RecoveryControlConfig {
-	return client.route53recoverycontrolconfigConn
+func (c *AWSClient) Route53RecoveryReadinessConn(ctx context.Context) *route53recoveryreadiness_sdkv1.Route53RecoveryReadiness {
+	return errs.Must(conn[*route53recoveryreadiness_sdkv1.Route53RecoveryReadiness](ctx, c, names.Route53RecoveryReadiness))
 }
-func (client *AWSClient) Route53RecoveryReadinessConn() *route53recoveryreadiness.Route53RecoveryReadiness {
-	return client.route53recoveryreadinessConn
+func (c *AWSClient) Route53ResolverConn(ctx context.Context) *route53resolver_sdkv1.Route53Resolver {
+	return errs.Must(conn[*route53resolver_sdkv1.Route53Resolver](ctx, c, names.Route53Resolver))
 }
-func (client *AWSClient) Route53ResolverConn() *route53resolver.Route53Resolver {
-	return client.route53resolverConn
+func (c *AWSClient) S3Conn(ctx context.Context) *s3_sdkv1.S3 {
+	return errs.Must(conn[*s3_sdkv1.S3](ctx, c, names.S3))
 }
-func (client *AWSClient) S3Conn() *s3.S3 {
-	return client.s3Conn
+func (c *AWSClient) S3ControlConn(ctx context.Context) *s3control_sdkv1.S3Control {
+	return errs.Must(conn[*s3control_sdkv1.S3Control](ctx, c, names.S3Control))
 }
-func (client *AWSClient) S3ControlConn() *s3control.S3Control {
-	return client.s3controlConn
+func (c *AWSClient) S3ControlClient(ctx context.Context) *s3control_sdkv2.Client {
+	return errs.Must(client[*s3control_sdkv2.Client](ctx, c, names.S3Control))
 }
-func (client *AWSClient) S3ControlClient() *s3control_sdkv2.Client {
-	return client.s3controlClient.Client()
+func (c *AWSClient) S3OutpostsConn(ctx context.Context) *s3outposts_sdkv1.S3Outposts {
+	return errs.Must(conn[*s3outposts_sdkv1.S3Outposts](ctx, c, names.S3Outposts))
 }
-func (client *AWSClient) S3OutpostsConn() *s3outposts.S3Outposts {
-	return client.s3outpostsConn
+func (c *AWSClient) SESConn(ctx context.Context) *ses_sdkv1.SES {
+	return errs.Must(conn[*ses_sdkv1.SES](ctx, c, names.SES))
 }
-func (client *AWSClient) SESConn() *ses.SES {
-	return client.sesConn
+func (c *AWSClient) SESV2Client(ctx context.Context) *sesv2_sdkv2.Client {
+	return errs.Must(client[*sesv2_sdkv2.Client](ctx, c, names.SESV2))
 }
-func (client *AWSClient) SESV2Client() *sesv2.Client {
-	return client.sesv2Client
+func (c *AWSClient) SFNConn(ctx context.Context) *sfn_sdkv1.SFN {
+	return errs.Must(conn[*sfn_sdkv1.SFN](ctx, c, names.SFN))
 }
-func (client *AWSClient) SFNConn() *sfn.SFN {
-	return client.sfnConn
+func (c *AWSClient) SMSConn(ctx context.Context) *sms_sdkv1.SMS {
+	return errs.Must(conn[*sms_sdkv1.SMS](ctx, c, names.SMS))
 }
-func (client *AWSClient) SMSConn() *sms.SMS {
-	return client.smsConn
+func (c *AWSClient) SNSConn(ctx context.Context) *sns_sdkv1.SNS {
+	return errs.Must(conn[*sns_sdkv1.SNS](ctx, c, names.SNS))
 }
-func (client *AWSClient) SNSConn() *sns.SNS {
-	return client.snsConn
+func (c *AWSClient) SQSConn(ctx context.Context) *sqs_sdkv1.SQS {
+	return errs.Must(conn[*sqs_sdkv1.SQS](ctx, c, names.SQS))
 }
-func (client *AWSClient) SQSConn() *sqs.SQS {
-	return client.sqsConn
+func (c *AWSClient) SSMConn(ctx context.Context) *ssm_sdkv1.SSM {
+	return errs.Must(conn[*ssm_sdkv1.SSM](ctx, c, names.SSM))
 }
-func (client *AWSClient) SSMConn() *ssm.SSM {
-	return client.ssmConn
+func (c *AWSClient) SSMClient(ctx context.Context) *ssm_sdkv2.Client {
+	return errs.Must(client[*ssm_sdkv2.Client](ctx, c, names.SSM))
 }
-func (client *AWSClient) SSMClient() *ssm_sdkv2.Client {
-	return client.ssmClient.Client()
+func (c *AWSClient) SSMContactsClient(ctx context.Context) *ssmcontacts_sdkv2.Client {
+	return errs.Must(client[*ssmcontacts_sdkv2.Client](ctx, c, names.SSMContacts))
 }
-func (client *AWSClient) SSMContactsClient() *ssmcontacts.Client {
-	return client.ssmcontactsClient
+func (c *AWSClient) SSMIncidentsClient(ctx context.Context) *ssmincidents_sdkv2.Client {
+	return errs.Must(client[*ssmincidents_sdkv2.Client](ctx, c, names.SSMIncidents))
 }
-func (client *AWSClient) SSMIncidentsClient() *ssmincidents.Client {
-	return client.ssmincidentsClient
+func (c *AWSClient) SSOConn(ctx context.Context) *sso_sdkv1.SSO {
+	return errs.Must(conn[*sso_sdkv1.SSO](ctx, c, names.SSO))
 }
-func (client *AWSClient) SSOConn() *sso.SSO {
-	return client.ssoConn
+func (c *AWSClient) SSOAdminConn(ctx context.Context) *ssoadmin_sdkv1.SSOAdmin {
+	return errs.Must(conn[*ssoadmin_sdkv1.SSOAdmin](ctx, c, names.SSOAdmin))
 }
-func (client *AWSClient) SSOAdminConn() *ssoadmin.SSOAdmin {
-	return client.ssoadminConn
+func (c *AWSClient) SSOOIDCConn(ctx context.Context) *ssooidc_sdkv1.SSOOIDC {
+	return errs.Must(conn[*ssooidc_sdkv1.SSOOIDC](ctx, c, names.SSOOIDC))
 }
-func (client *AWSClient) SSOOIDCConn() *ssooidc.SSOOIDC {
-	return client.ssooidcConn
+func (c *AWSClient) STSConn(ctx context.Context) *sts_sdkv1.STS {
+	return errs.Must(conn[*sts_sdkv1.STS](ctx, c, names.STS))
 }
-func (client *AWSClient) STSConn() *sts.STS {
-	return client.stsConn
+func (c *AWSClient) SWFClient(ctx context.Context) *swf_sdkv2.Client {
+	return errs.Must(client[*swf_sdkv2.Client](ctx, c, names.SWF))
 }
-func (client *AWSClient) SWFConn() *swf.SWF {
-	return client.swfConn
+func (c *AWSClient) SageMakerConn(ctx context.Context) *sagemaker_sdkv1.SageMaker {
+	return errs.Must(conn[*sagemaker_sdkv1.SageMaker](ctx, c, names.SageMaker))
 }
-func (client *AWSClient) SageMakerConn() *sagemaker.SageMaker {
-	return client.sagemakerConn
+func (c *AWSClient) SageMakerA2IRuntimeConn(ctx context.Context) *augmentedairuntime_sdkv1.AugmentedAIRuntime {
+	return errs.Must(conn[*augmentedairuntime_sdkv1.AugmentedAIRuntime](ctx, c, names.SageMakerA2IRuntime))
 }
-func (client *AWSClient) SageMakerA2IRuntimeConn() *augmentedairuntime.AugmentedAIRuntime {
-	return client.sagemakera2iruntimeConn
+func (c *AWSClient) SageMakerEdgeConn(ctx context.Context) *sagemakeredgemanager_sdkv1.SagemakerEdgeManager {
+	return errs.Must(conn[*sagemakeredgemanager_sdkv1.SagemakerEdgeManager](ctx, c, names.SageMakerEdge))
 }
-func (client *AWSClient) SageMakerEdgeConn() *sagemakeredgemanager.SagemakerEdgeManager {
-	return client.sagemakeredgeConn
+func (c *AWSClient) SageMakerFeatureStoreRuntimeConn(ctx context.Context) *sagemakerfeaturestoreruntime_sdkv1.SageMakerFeatureStoreRuntime {
+	return errs.Must(conn[*sagemakerfeaturestoreruntime_sdkv1.SageMakerFeatureStoreRuntime](ctx, c, names.SageMakerFeatureStoreRuntime))
 }
-func (client *AWSClient) SageMakerFeatureStoreRuntimeConn() *sagemakerfeaturestoreruntime.SageMakerFeatureStoreRuntime {
-	return client.sagemakerfeaturestoreruntimeConn
+func (c *AWSClient) SageMakerRuntimeConn(ctx context.Context) *sagemakerruntime_sdkv1.SageMakerRuntime {
+	return errs.Must(conn[*sagemakerruntime_sdkv1.SageMakerRuntime](ctx, c, names.SageMakerRuntime))
 }
-func (client *AWSClient) SageMakerRuntimeConn() *sagemakerruntime.SageMakerRuntime {
-	return client.sagemakerruntimeConn
+func (c *AWSClient) SavingsPlansConn(ctx context.Context) *savingsplans_sdkv1.SavingsPlans {
+	return errs.Must(conn[*savingsplans_sdkv1.SavingsPlans](ctx, c, names.SavingsPlans))
 }
-func (client *AWSClient) SavingsPlansConn() *savingsplans.SavingsPlans {
-	return client.savingsplansConn
+func (c *AWSClient) SchedulerClient(ctx context.Context) *scheduler_sdkv2.Client {
+	return errs.Must(client[*scheduler_sdkv2.Client](ctx, c, names.Scheduler))
 }
-func (client *AWSClient) SchedulerClient() *scheduler.Client {
-	return client.schedulerClient
+func (c *AWSClient) SchemasConn(ctx context.Context) *schemas_sdkv1.Schemas {
+	return errs.Must(conn[*schemas_sdkv1.Schemas](ctx, c, names.Schemas))
 }
-func (client *AWSClient) SchemasConn() *schemas.Schemas {
-	return client.schemasConn
+func (c *AWSClient) SecretsManagerConn(ctx context.Context) *secretsmanager_sdkv1.SecretsManager {
+	return errs.Must(conn[*secretsmanager_sdkv1.SecretsManager](ctx, c, names.SecretsManager))
 }
-func (client *AWSClient) SecretsManagerConn() *secretsmanager.SecretsManager {
-	return client.secretsmanagerConn
+func (c *AWSClient) SecurityHubConn(ctx context.Context) *securityhub_sdkv1.SecurityHub {
+	return errs.Must(conn[*securityhub_sdkv1.SecurityHub](ctx, c, names.SecurityHub))
 }
-func (client *AWSClient) SecurityHubConn() *securityhub.SecurityHub {
-	return client.securityhubConn
+func (c *AWSClient) SecurityLakeClient(ctx context.Context) *securitylake_sdkv2.Client {
+	return errs.Must(client[*securitylake_sdkv2.Client](ctx, c, names.SecurityLake))
 }
-func (client *AWSClient) SecurityLakeClient() *securitylake.Client {
-	return client.securitylakeClient
+func (c *AWSClient) ServerlessRepoConn(ctx context.Context) *serverlessapplicationrepository_sdkv1.ServerlessApplicationRepository {
+	return errs.Must(conn[*serverlessapplicationrepository_sdkv1.ServerlessApplicationRepository](ctx, c, names.ServerlessRepo))
 }
-func (client *AWSClient) ServerlessRepoConn() *serverlessapplicationrepository.ServerlessApplicationRepository {
-	return client.serverlessrepoConn
+func (c *AWSClient) ServiceCatalogConn(ctx context.Context) *servicecatalog_sdkv1.ServiceCatalog {
+	return errs.Must(conn[*servicecatalog_sdkv1.ServiceCatalog](ctx, c, names.ServiceCatalog))
 }
-func (client *AWSClient) ServiceCatalogConn() *servicecatalog.ServiceCatalog {
-	return client.servicecatalogConn
+func (c *AWSClient) ServiceCatalogAppRegistryConn(ctx context.Context) *appregistry_sdkv1.AppRegistry {
+	return errs.Must(conn[*appregistry_sdkv1.AppRegistry](ctx, c, names.ServiceCatalogAppRegistry))
 }
-func (client *AWSClient) ServiceCatalogAppRegistryConn() *appregistry.AppRegistry {
-	return client.servicecatalogappregistryConn
+func (c *AWSClient) ServiceDiscoveryConn(ctx context.Context) *servicediscovery_sdkv1.ServiceDiscovery {
+	return errs.Must(conn[*servicediscovery_sdkv1.ServiceDiscovery](ctx, c, names.ServiceDiscovery))
 }
-func (client *AWSClient) ServiceDiscoveryConn() *servicediscovery.ServiceDiscovery {
-	return client.servicediscoveryConn
+func (c *AWSClient) ServiceQuotasConn(ctx context.Context) *servicequotas_sdkv1.ServiceQuotas {
+	return errs.Must(conn[*servicequotas_sdkv1.ServiceQuotas](ctx, c, names.ServiceQuotas))
 }
-func (client *AWSClient) ServiceQuotasConn() *servicequotas.ServiceQuotas {
-	return client.servicequotasConn
+func (c *AWSClient) ShieldConn(ctx context.Context) *shield_sdkv1.Shield {
+	return errs.Must(conn[*shield_sdkv1.Shield](ctx, c, names.Shield))
 }
-func (client *AWSClient) ShieldConn() *shield.Shield {
-	return client.shieldConn
+func (c *AWSClient) SignerConn(ctx context.Context) *signer_sdkv1.Signer {
+	return errs.Must(conn[*signer_sdkv1.Signer](ctx, c, names.Signer))
 }
-func (client *AWSClient) SignerConn() *signer.Signer {
-	return client.signerConn
+func (c *AWSClient) SimpleDBConn(ctx context.Context) *simpledb_sdkv1.SimpleDB {
+	return errs.Must(conn[*simpledb_sdkv1.SimpleDB](ctx, c, names.SimpleDB))
 }
-func (client *AWSClient) SimpleDBConn() *simpledb.SimpleDB {
-	return client.sdbConn
+func (c *AWSClient) SnowDeviceManagementConn(ctx context.Context) *snowdevicemanagement_sdkv1.SnowDeviceManagement {
+	return errs.Must(conn[*snowdevicemanagement_sdkv1.SnowDeviceManagement](ctx, c, names.SnowDeviceManagement))
 }
-func (client *AWSClient) SnowDeviceManagementConn() *snowdevicemanagement.SnowDeviceManagement {
-	return client.snowdevicemanagementConn
+func (c *AWSClient) SnowballConn(ctx context.Context) *snowball_sdkv1.Snowball {
+	return errs.Must(conn[*snowball_sdkv1.Snowball](ctx, c, names.Snowball))
 }
-func (client *AWSClient) SnowballConn() *snowball.Snowball {
-	return client.snowballConn
+func (c *AWSClient) StorageGatewayConn(ctx context.Context) *storagegateway_sdkv1.StorageGateway {
+	return errs.Must(conn[*storagegateway_sdkv1.StorageGateway](ctx, c, names.StorageGateway))
 }
-func (client *AWSClient) StorageGatewayConn() *storagegateway.StorageGateway {
-	return client.storagegatewayConn
+func (c *AWSClient) SupportConn(ctx context.Context) *support_sdkv1.Support {
+	return errs.Must(conn[*support_sdkv1.Support](ctx, c, names.Support))
 }
-func (client *AWSClient) SupportConn() *support.Support {
-	return client.supportConn
+func (c *AWSClient) SyntheticsConn(ctx context.Context) *synthetics_sdkv1.Synthetics {
+	return errs.Must(conn[*synthetics_sdkv1.Synthetics](ctx, c, names.Synthetics))
 }
-func (client *AWSClient) SyntheticsConn() *synthetics.Synthetics {
-	return client.syntheticsConn
+func (c *AWSClient) TextractConn(ctx context.Context) *textract_sdkv1.Textract {
+	return errs.Must(conn[*textract_sdkv1.Textract](ctx, c, names.Textract))
 }
-func (client *AWSClient) TextractConn() *textract.Textract {
-	return client.textractConn
+func (c *AWSClient) TimestreamQueryConn(ctx context.Context) *timestreamquery_sdkv1.TimestreamQuery {
+	return errs.Must(conn[*timestreamquery_sdkv1.TimestreamQuery](ctx, c, names.TimestreamQuery))
 }
-func (client *AWSClient) TimestreamQueryConn() *timestreamquery.TimestreamQuery {
-	return client.timestreamqueryConn
+func (c *AWSClient) TimestreamWriteClient(ctx context.Context) *timestreamwrite_sdkv2.Client {
+	return errs.Must(client[*timestreamwrite_sdkv2.Client](ctx, c, names.TimestreamWrite))
 }
-func (client *AWSClient) TimestreamWriteConn() *timestreamwrite.TimestreamWrite {
-	return client.timestreamwriteConn
+func (c *AWSClient) TranscribeClient(ctx context.Context) *transcribe_sdkv2.Client {
+	return errs.Must(client[*transcribe_sdkv2.Client](ctx, c, names.Transcribe))
 }
-func (client *AWSClient) TranscribeClient() *transcribe.Client {
-	return client.transcribeClient
+func (c *AWSClient) TranscribeStreamingConn(ctx context.Context) *transcribestreamingservice_sdkv1.TranscribeStreamingService {
+	return errs.Must(conn[*transcribestreamingservice_sdkv1.TranscribeStreamingService](ctx, c, names.TranscribeStreaming))
 }
-func (client *AWSClient) TranscribeStreamingConn() *transcribestreamingservice.TranscribeStreamingService {
-	return client.transcribestreamingConn
+func (c *AWSClient) TransferConn(ctx context.Context) *transfer_sdkv1.Transfer {
+	return errs.Must(conn[*transfer_sdkv1.Transfer](ctx, c, names.Transfer))
 }
-func (client *AWSClient) TransferConn() *transfer.Transfer {
-	return client.transferConn
+func (c *AWSClient) TranslateConn(ctx context.Context) *translate_sdkv1.Translate {
+	return errs.Must(conn[*translate_sdkv1.Translate](ctx, c, names.Translate))
 }
-func (client *AWSClient) TranslateConn() *translate.Translate {
-	return client.translateConn
+func (c *AWSClient) VPCLatticeClient(ctx context.Context) *vpclattice_sdkv2.Client {
+	return errs.Must(client[*vpclattice_sdkv2.Client](ctx, c, names.VPCLattice))
 }
-func (client *AWSClient) VPCLatticeClient() *vpclattice.Client {
-	return client.vpclatticeClient
+func (c *AWSClient) VerifiedPermissionsClient(ctx context.Context) *verifiedpermissions_sdkv2.Client {
+	return errs.Must(client[*verifiedpermissions_sdkv2.Client](ctx, c, names.VerifiedPermissions))
 }
-func (client *AWSClient) VoiceIDConn() *voiceid.VoiceID {
-	return client.voiceidConn
+func (c *AWSClient) VoiceIDConn(ctx context.Context) *voiceid_sdkv1.VoiceID {
+	return errs.Must(conn[*voiceid_sdkv1.VoiceID](ctx, c, names.VoiceID))
 }
-func (client *AWSClient) WAFConn() *waf.WAF {
-	return client.wafConn
+func (c *AWSClient) WAFConn(ctx context.Context) *waf_sdkv1.WAF {
+	return errs.Must(conn[*waf_sdkv1.WAF](ctx, c, names.WAF))
 }
-func (client *AWSClient) WAFRegionalConn() *wafregional.WAFRegional {
-	return client.wafregionalConn
+func (c *AWSClient) WAFRegionalConn(ctx context.Context) *wafregional_sdkv1.WAFRegional {
+	return errs.Must(conn[*wafregional_sdkv1.WAFRegional](ctx, c, names.WAFRegional))
 }
-func (client *AWSClient) WAFV2Conn() *wafv2.WAFV2 {
-	return client.wafv2Conn
+func (c *AWSClient) WAFV2Conn(ctx context.Context) *wafv2_sdkv1.WAFV2 {
+	return errs.Must(conn[*wafv2_sdkv1.WAFV2](ctx, c, names.WAFV2))
 }
-func (client *AWSClient) WellArchitectedConn() *wellarchitected.WellArchitected {
-	return client.wellarchitectedConn
+func (c *AWSClient) WellArchitectedConn(ctx context.Context) *wellarchitected_sdkv1.WellArchitected {
+	return errs.Must(conn[*wellarchitected_sdkv1.WellArchitected](ctx, c, names.WellArchitected))
 }
-func (client *AWSClient) WisdomConn() *connectwisdomservice.ConnectWisdomService {
-	return client.wisdomConn
+func (c *AWSClient) WisdomConn(ctx context.Context) *connectwisdomservice_sdkv1.ConnectWisdomService {
+	return errs.Must(conn[*connectwisdomservice_sdkv1.ConnectWisdomService](ctx, c, names.Wisdom))
 }
-func (client *AWSClient) WorkDocsConn() *workdocs.WorkDocs {
-	return client.workdocsConn
+func (c *AWSClient) WorkDocsConn(ctx context.Context) *workdocs_sdkv1.WorkDocs {
+	return errs.Must(conn[*workdocs_sdkv1.WorkDocs](ctx, c, names.WorkDocs))
 }
-func (client *AWSClient) WorkLinkConn() *worklink.WorkLink {
-	return client.worklinkConn
+func (c *AWSClient) WorkLinkConn(ctx context.Context) *worklink_sdkv1.WorkLink {
+	return errs.Must(conn[*worklink_sdkv1.WorkLink](ctx, c, names.WorkLink))
 }
-func (client *AWSClient) WorkMailConn() *workmail.WorkMail {
-	return client.workmailConn
+func (c *AWSClient) WorkMailConn(ctx context.Context) *workmail_sdkv1.WorkMail {
+	return errs.Must(conn[*workmail_sdkv1.WorkMail](ctx, c, names.WorkMail))
 }
-func (client *AWSClient) WorkMailMessageFlowConn() *workmailmessageflow.WorkMailMessageFlow {
-	return client.workmailmessageflowConn
+func (c *AWSClient) WorkMailMessageFlowConn(ctx context.Context) *workmailmessageflow_sdkv1.WorkMailMessageFlow {
+	return errs.Must(conn[*workmailmessageflow_sdkv1.WorkMailMessageFlow](ctx, c, names.WorkMailMessageFlow))
 }
-func (client *AWSClient) WorkSpacesConn() *workspaces.WorkSpaces {
-	return client.workspacesConn
+func (c *AWSClient) WorkSpacesClient(ctx context.Context) *workspaces_sdkv2.Client {
+	return errs.Must(client[*workspaces_sdkv2.Client](ctx, c, names.WorkSpaces))
 }
-func (client *AWSClient) WorkSpacesWebConn() *workspacesweb.WorkSpacesWeb {
-	return client.workspaceswebConn
+func (c *AWSClient) WorkSpacesWebConn(ctx context.Context) *workspacesweb_sdkv1.WorkSpacesWeb {
+	return errs.Must(conn[*workspacesweb_sdkv1.WorkSpacesWeb](ctx, c, names.WorkSpacesWeb))
 }
-func (client *AWSClient) XRayClient() *xray.Client {
-	return client.xrayClient
+func (c *AWSClient) XRayClient(ctx context.Context) *xray_sdkv2.Client {
+	return errs.Must(client[*xray_sdkv2.Client](ctx, c, names.XRay))
 }
diff --git a/internal/conns/awsclient_test.go b/internal/conns/awsclient_test.go
index ea549342d50..176166ba2b3 100644
--- a/internal/conns/awsclient_test.go
+++ b/internal/conns/awsclient_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package conns import ( diff --git a/internal/conns/config.go b/internal/conns/config.go index 2f45f62fc34..7b137df8786 100644 --- a/internal/conns/config.go +++ b/internal/conns/config.go @@ -1,44 +1,17 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package conns import ( "context" "log" - "strings" - "github.com/aws/aws-sdk-go-v2/feature/ec2/imds" - "github.com/aws/aws-sdk-go-v2/service/route53domains" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/aws/request" - "github.com/aws/aws-sdk-go/service/apigateway" - "github.com/aws/aws-sdk-go/service/apigatewayv2" - "github.com/aws/aws-sdk-go/service/appconfig" - "github.com/aws/aws-sdk-go/service/applicationautoscaling" - "github.com/aws/aws-sdk-go/service/appsync" - "github.com/aws/aws-sdk-go/service/chime" - "github.com/aws/aws-sdk-go/service/cloudformation" - "github.com/aws/aws-sdk-go/service/cloudhsmv2" - "github.com/aws/aws-sdk-go/service/configservice" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/fms" - "github.com/aws/aws-sdk-go/service/globalaccelerator" - "github.com/aws/aws-sdk-go/service/kafka" - "github.com/aws/aws-sdk-go/service/kinesis" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/aws/aws-sdk-go/service/organizations" - "github.com/aws/aws-sdk-go/service/route53" - "github.com/aws/aws-sdk-go/service/route53recoverycontrolconfig" - "github.com/aws/aws-sdk-go/service/route53recoveryreadiness" - "github.com/aws/aws-sdk-go/service/s3" - "github.com/aws/aws-sdk-go/service/securityhub" - "github.com/aws/aws-sdk-go/service/shield" - "github.com/aws/aws-sdk-go/service/ssoadmin" - "github.com/aws/aws-sdk-go/service/storagegateway" - "github.com/aws/aws-sdk-go/service/sts" - "github.com/aws/aws-sdk-go/service/wafv2" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + imds_sdkv2 "github.com/aws/aws-sdk-go-v2/feature/ec2/imds" + 
endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" awsbase "github.com/hashicorp/aws-sdk-go-base/v2" awsbasev1 "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-log/tflog" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -52,7 +25,7 @@ type Config struct { AssumeRoleWithWebIdentity *awsbase.AssumeRoleWithWebIdentity CustomCABundle string DefaultTagsConfig *tftags.DefaultConfig - EC2MetadataServiceEnableState imds.ClientEnableState + EC2MetadataServiceEnableState imds_sdkv2.ClientEnableState EC2MetadataServiceEndpoint string EC2MetadataServiceEndpointMode string Endpoints map[string]string @@ -63,6 +36,7 @@ type Config struct { MaxRetries int Profile string Region string + RetryMode aws_sdkv2.RetryMode S3UsePathStyle bool SecretKey string SharedConfigFiles []string @@ -94,6 +68,7 @@ func (c *Config) ConfigureProvider(ctx context.Context, client *AWSClient) (*AWS MaxRetries: c.MaxRetries, Profile: c.Profile, Region: c.Region, + RetryMode: c.RetryMode, SecretKey: c.SecretKey, SkipCredsValidation: c.SkipCredsValidation, SkipRequestingAccountId: c.SkipRequestingAccountId, @@ -160,7 +135,7 @@ func (c *Config) ConfigureProvider(ctx context.Context, client *AWSClient) (*AWS } if len(c.ForbiddenAccountIds) > 0 { - for _, forbiddenAccountID := range c.AllowedAccountIds { + for _, forbiddenAccountID := range c.ForbiddenAccountIds { if accountID == forbiddenAccountID { return nil, diag.Errorf("AWS account ID not allowed: %s", accountID) } @@ -180,7 +155,7 @@ func (c *Config) ConfigureProvider(ctx context.Context, client *AWSClient) (*AWS } DNSSuffix := "amazonaws.com" - if p, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), c.Region); ok { + if p, ok := endpoints_sdkv1.PartitionForRegion(endpoints_sdkv1.DefaultPartitions(), c.Region); ok { DNSSuffix = p.DNSSuffix() } @@ 
-195,356 +170,13 @@ func (c *Config) ConfigureProvider(ctx context.Context, client *AWSClient) (*AWS client.Session = sess client.TerraformVersion = c.TerraformVersion - // API clients (generated). - c.sdkv1Conns(client, sess) - c.sdkv2Conns(client, cfg) - c.sdkv2LazyConns(client, cfg) - - // AWS SDK for Go v1 custom API clients. - - // STS. - stsConfig := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.STS]), - } - if c.STSRegion != "" { - stsConfig.Region = aws.String(c.STSRegion) - } - client.stsConn = sts.New(sess.Copy(stsConfig)) - - // Services that require multiple client configurations. - s3Config := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.S3]), - S3ForcePathStyle: aws.Bool(c.S3UsePathStyle), - } - client.s3Conn = s3.New(sess.Copy(s3Config)) - - s3Config.DisableRestProtocolURICleaning = aws.Bool(true) - client.s3ConnURICleaningDisabled = s3.New(sess.Copy(s3Config)) - - // "Global" services that require customizations. - globalAcceleratorConfig := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.GlobalAccelerator]), - } - route53Config := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.Route53]), - } - route53RecoveryControlConfigConfig := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.Route53RecoveryControlConfig]), - } - route53RecoveryReadinessConfig := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.Route53RecoveryReadiness]), - } - shieldConfig := &aws.Config{ - Endpoint: aws.String(c.Endpoints[names.Shield]), - } - - // Force "global" services to correct Regions. 
- switch partition { - case endpoints.AwsPartitionID: - globalAcceleratorConfig.Region = aws.String(endpoints.UsWest2RegionID) - route53Config.Region = aws.String(endpoints.UsEast1RegionID) - route53RecoveryControlConfigConfig.Region = aws.String(endpoints.UsWest2RegionID) - route53RecoveryReadinessConfig.Region = aws.String(endpoints.UsWest2RegionID) - shieldConfig.Region = aws.String(endpoints.UsEast1RegionID) - case endpoints.AwsCnPartitionID: - // The AWS Go SDK is missing endpoint information for Route 53 in the AWS China partition. - // This can likely be removed in the future. - if aws.StringValue(route53Config.Endpoint) == "" { - route53Config.Endpoint = aws.String("https://api.route53.cn") - } - route53Config.Region = aws.String(endpoints.CnNorthwest1RegionID) - case endpoints.AwsUsGovPartitionID: - route53Config.Region = aws.String(endpoints.UsGovWest1RegionID) - } - - client.globalacceleratorConn = globalaccelerator.New(sess.Copy(globalAcceleratorConfig)) - client.route53Conn = route53.New(sess.Copy(route53Config)) - client.route53recoverycontrolconfigConn = route53recoverycontrolconfig.New(sess.Copy(route53RecoveryControlConfigConfig)) - client.route53recoveryreadinessConn = route53recoveryreadiness.New(sess.Copy(route53RecoveryReadinessConfig)) - client.shieldConn = shield.New(sess.Copy(shieldConfig)) - - client.apigatewayConn.Handlers.Retry.PushBack(func(r *request.Request) { - // Many operations can return an error such as: - // ConflictException: Unable to complete operation due to concurrent modification. Please try again later. - // Handle them all globally for the service client. - if tfawserr.ErrMessageContains(r.Error, apigateway.ErrCodeConflictException, "try again later") { - r.Retryable = aws.Bool(true) - } - }) - - client.apigatewayv2Conn.Handlers.Retry.PushBack(func(r *request.Request) { - // Many operations can return an error such as: - // ConflictException: Unable to complete operation due to concurrent modification. 
Please try again later. - // Handle them all globally for the service client. - if tfawserr.ErrMessageContains(r.Error, apigatewayv2.ErrCodeConflictException, "try again later") { - r.Retryable = aws.Bool(true) - } - }) - - // Workaround for https://github.com/aws/aws-sdk-go/issues/1472 - client.applicationautoscalingConn.Handlers.Retry.PushBack(func(r *request.Request) { - if !strings.HasPrefix(r.Operation.Name, "Describe") && !strings.HasPrefix(r.Operation.Name, "List") { - return - } - if tfawserr.ErrCodeEquals(r.Error, applicationautoscaling.ErrCodeFailedResourceAccessException) { - r.Retryable = aws.Bool(true) - } - }) - - // StartDeployment operations can return a ConflictException - // if ongoing deployments are in-progress, thus we handle them - // here for the service client. - client.appconfigConn.Handlers.Retry.PushBack(func(r *request.Request) { - if r.Operation.Name == "StartDeployment" { - if tfawserr.ErrCodeEquals(r.Error, appconfig.ErrCodeConflictException) { - r.Retryable = aws.Bool(true) - } - } - }) - - client.appsyncConn.Handlers.Retry.PushBack(func(r *request.Request) { - if r.Operation.Name == "CreateGraphqlApi" { - if tfawserr.ErrMessageContains(r.Error, appsync.ErrCodeConcurrentModificationException, "a GraphQL API creation is already in progress") { - r.Retryable = aws.Bool(true) - } - } - }) - - client.chimeConn.Handlers.Retry.PushBack(func(r *request.Request) { - // When calling CreateVoiceConnector across multiple resources, - // the API can randomly return a BadRequestException without explanation - if r.Operation.Name == "CreateVoiceConnector" { - if tfawserr.ErrMessageContains(r.Error, chime.ErrCodeBadRequestException, "Service received a bad request") { - r.Retryable = aws.Bool(true) - } - } - }) - - client.cloudhsmv2Conn.Handlers.Retry.PushBack(func(r *request.Request) { - if tfawserr.ErrMessageContains(r.Error, cloudhsmv2.ErrCodeCloudHsmInternalFailureException, "request was rejected because of an AWS CloudHSM internal failure") { 
- r.Retryable = aws.Bool(true) - } - }) - - client.configserviceConn.Handlers.Retry.PushBack(func(r *request.Request) { - // When calling Config Organization Rules API actions immediately - // after Organization creation, the API can randomly return the - // OrganizationAccessDeniedException error for a few minutes, even - // after succeeding a few requests. - switch r.Operation.Name { - case "DeleteOrganizationConfigRule", "DescribeOrganizationConfigRules", "DescribeOrganizationConfigRuleStatuses", "PutOrganizationConfigRule": - if !tfawserr.ErrMessageContains(r.Error, configservice.ErrCodeOrganizationAccessDeniedException, "This action can be only made by AWS Organization's master account.") { - return - } - - // We only want to retry briefly as the default max retry count would - // excessively retry when the error could be legitimate. - // We currently depend on the DefaultRetryer exponential backoff here. - // ~10 retries gives a fair backoff of a few seconds. - if r.RetryCount < 9 { - r.Retryable = aws.Bool(true) - } else { - r.Retryable = aws.Bool(false) - } - case "DeleteOrganizationConformancePack", "DescribeOrganizationConformancePacks", "DescribeOrganizationConformancePackStatuses", "PutOrganizationConformancePack": - if !tfawserr.ErrCodeEquals(r.Error, configservice.ErrCodeOrganizationAccessDeniedException) { - if r.Operation.Name == "DeleteOrganizationConformancePack" && tfawserr.ErrCodeEquals(err, configservice.ErrCodeResourceInUseException) { - r.Retryable = aws.Bool(true) - } - return - } - - // We only want to retry briefly as the default max retry count would - // excessively retry when the error could be legitimate. - // We currently depend on the DefaultRetryer exponential backoff here. - // ~10 retries gives a fair backoff of a few seconds. 
- if r.RetryCount < 9 { - r.Retryable = aws.Bool(true) - } else { - r.Retryable = aws.Bool(false) - } - } - }) - - client.cloudformationConn.Handlers.Retry.PushBack(func(r *request.Request) { - if tfawserr.ErrMessageContains(r.Error, cloudformation.ErrCodeOperationInProgressException, "Another Operation on StackSet") { - r.Retryable = aws.Bool(true) - } - }) - - // See https://github.com/aws/aws-sdk-go/pull/1276 - client.dynamodbConn.Handlers.Retry.PushBack(func(r *request.Request) { - if r.Operation.Name != "PutItem" && r.Operation.Name != "UpdateItem" && r.Operation.Name != "DeleteItem" { - return - } - if tfawserr.ErrMessageContains(r.Error, dynamodb.ErrCodeLimitExceededException, "Subscriber limit exceeded:") { - r.Retryable = aws.Bool(true) - } - }) - - client.ec2Conn.Handlers.Retry.PushBack(func(r *request.Request) { - switch err := r.Error; r.Operation.Name { - case "AttachVpnGateway", "DetachVpnGateway": - if tfawserr.ErrMessageContains(err, "InvalidParameterValue", "This call cannot be completed because there are pending VPNs or Virtual Interfaces") { - r.Retryable = aws.Bool(true) - } - - case "CreateClientVpnEndpoint": - if tfawserr.ErrMessageContains(err, "OperationNotPermitted", "Endpoint cannot be created while another endpoint is being created") { - r.Retryable = aws.Bool(true) - } - - case "CreateClientVpnRoute", "DeleteClientVpnRoute": - if tfawserr.ErrMessageContains(err, "ConcurrentMutationLimitExceeded", "Cannot initiate another change for this endpoint at this time") { - r.Retryable = aws.Bool(true) - } - - case "CreateVpnConnection": - if tfawserr.ErrMessageContains(err, "VpnConnectionLimitExceeded", "maximum number of mutating objects has been reached") { - r.Retryable = aws.Bool(true) - } - - case "CreateVpnGateway": - if tfawserr.ErrMessageContains(err, "VpnGatewayLimitExceeded", "maximum number of mutating objects has been reached") { - r.Retryable = aws.Bool(true) - } - - case "RunInstances": - // `InsufficientInstanceCapacity` error has 
status code 500 and AWS SDK try retry this error by default. - if tfawserr.ErrCodeEquals(err, "InsufficientInstanceCapacity") { - r.Retryable = aws.Bool(false) - } - } - }) - - client.fmsConn.Handlers.Retry.PushBack(func(r *request.Request) { - // Acceptance testing creates and deletes resources in quick succession. - // The FMS onboarding process into Organizations is opaque to consumers. - // Since we cannot reasonably check this status before receiving the error, - // set the operation as retryable. - switch r.Operation.Name { - case "AssociateAdminAccount": - if tfawserr.ErrMessageContains(r.Error, fms.ErrCodeInvalidOperationException, "Your AWS Organization is currently offboarding with AWS Firewall Manager. Please submit onboard request after offboarded.") { - r.Retryable = aws.Bool(true) - } - case "DisassociateAdminAccount": - if tfawserr.ErrMessageContains(r.Error, fms.ErrCodeInvalidOperationException, "Your AWS Organization is currently onboarding with AWS Firewall Manager and cannot be offboarded.") { - r.Retryable = aws.Bool(true) - } - // System problems can arise during FMS policy updates (maybe also creation), - // so we set the following operation as retryable. 
- // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/23946 - case "PutPolicy": - if tfawserr.ErrCodeEquals(r.Error, fms.ErrCodeInternalErrorException) { - r.Retryable = aws.Bool(true) - } - } - }) - - client.kafkaConn.Handlers.Retry.PushBack(func(r *request.Request) { - if tfawserr.ErrMessageContains(r.Error, kafka.ErrCodeTooManyRequestsException, "Too Many Requests") { - r.Retryable = aws.Bool(true) - } - }) - - client.kinesisConn.Handlers.Retry.PushBack(func(r *request.Request) { - if r.Operation.Name == "CreateStream" { - if tfawserr.ErrMessageContains(r.Error, kinesis.ErrCodeLimitExceededException, "simultaneously be in CREATING or DELETING") { - r.Retryable = aws.Bool(true) - } - } - if r.Operation.Name == "CreateStream" || r.Operation.Name == "DeleteStream" { - if tfawserr.ErrMessageContains(r.Error, kinesis.ErrCodeLimitExceededException, "Rate exceeded for stream") { - r.Retryable = aws.Bool(true) - } - } - }) - - client.lightsailConn.Handlers.Retry.PushBack(func(r *request.Request) { - switch r.Operation.Name { - case "CreateContainerService", "UpdateContainerService", "CreateContainerServiceDeployment": - if tfawserr.ErrMessageContains(r.Error, lightsail.ErrCodeInvalidInputException, "Please try again in a few minutes") { - r.Retryable = aws.Bool(true) - } - case "DeleteContainerService": - if tfawserr.ErrMessageContains(r.Error, lightsail.ErrCodeInvalidInputException, "Please try again in a few minutes") || - tfawserr.ErrMessageContains(r.Error, lightsail.ErrCodeInvalidInputException, "Please wait for it to complete before trying again") { - r.Retryable = aws.Bool(true) - } - } - }) - - client.organizationsConn.Handlers.Retry.PushBack(func(r *request.Request) { - // Retry on the following error: - // ConcurrentModificationException: AWS Organizations can't complete your request because it conflicts with another attempt to modify the same entity. Try again later. 
- if tfawserr.ErrMessageContains(r.Error, organizations.ErrCodeConcurrentModificationException, "Try again later") { - r.Retryable = aws.Bool(true) - } - }) - - client.s3Conn.Handlers.Retry.PushBack(func(r *request.Request) { - if tfawserr.ErrMessageContains(r.Error, "OperationAborted", "A conflicting conditional operation is currently in progress against this resource. Please try again.") { - r.Retryable = aws.Bool(true) - } - }) - - // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/17996 - client.securityhubConn.Handlers.Retry.PushBack(func(r *request.Request) { - switch r.Operation.Name { - case "EnableOrganizationAdminAccount": - if tfawserr.ErrCodeEquals(r.Error, securityhub.ErrCodeResourceConflictException) { - r.Retryable = aws.Bool(true) - } - } - }) - - // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/19215 - client.ssoadminConn.Handlers.Retry.PushBack(func(r *request.Request) { - if r.Operation.Name == "AttachManagedPolicyToPermissionSet" || r.Operation.Name == "DetachManagedPolicyFromPermissionSet" { - if tfawserr.ErrCodeEquals(r.Error, ssoadmin.ErrCodeConflictException) { - r.Retryable = aws.Bool(true) - } - } - }) - - client.storagegatewayConn.Handlers.Retry.PushBack(func(r *request.Request) { - // InvalidGatewayRequestException: The specified gateway proxy network connection is busy. 
- if tfawserr.ErrMessageContains(r.Error, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified gateway proxy network connection is busy") { - r.Retryable = aws.Bool(true) - } - }) - - client.wafv2Conn.Handlers.Retry.PushBack(func(r *request.Request) { - if tfawserr.ErrMessageContains(r.Error, wafv2.ErrCodeWAFInternalErrorException, "Retry your request") { - r.Retryable = aws.Bool(true) - } - - if tfawserr.ErrMessageContains(r.Error, wafv2.ErrCodeWAFServiceLinkedRoleErrorException, "Retry") { - r.Retryable = aws.Bool(true) - } - - if r.Operation.Name == "CreateIPSet" || r.Operation.Name == "CreateRegexPatternSet" || - r.Operation.Name == "CreateRuleGroup" || r.Operation.Name == "CreateWebACL" { - // WAFv2 supports tagging on create, which can result in the error codes below according to the documentation - if tfawserr.ErrMessageContains(r.Error, wafv2.ErrCodeWAFTagOperationException, "Retry your request") { - r.Retryable = aws.Bool(true) - } - if tfawserr.ErrMessageContains(r.Error, wafv2.ErrCodeWAFTagOperationInternalErrorException, "Retry your request") { - r.Retryable = aws.Bool(true) - } - } - }) - - // AWS SDK for Go v2 custom API clients. - - client.route53domainsClient = route53domains.NewFromConfig(cfg, func(o *route53domains.Options) { - if endpoint := c.Endpoints[names.Route53Domains]; endpoint != "" { - o.EndpointResolver = route53domains.EndpointResolverFromURL(endpoint) - } else if partition == endpoints.AwsPartitionID { - // Route 53 Domains is only available in AWS Commercial us-east-1 Region. - o.Region = endpoints.UsEast1RegionID - } - }) + // Used for lazy-loading AWS API clients. 
+ client.awsConfig = &cfg + client.clients = make(map[string]any, 0) + client.conns = make(map[string]any, 0) + client.endpoints = c.Endpoints + client.s3UsePathStyle = c.S3UsePathStyle + client.stsRegion = c.STSRegion return client, nil } diff --git a/internal/conns/config_gen.go b/internal/conns/config_gen.go deleted file mode 100644 index 216f54cbe15..00000000000 --- a/internal/conns/config_gen.go +++ /dev/null @@ -1,813 +0,0 @@ -// Code generated by internal/generate/clientconfig/main.go; DO NOT EDIT. -package conns - -import ( - aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" - "github.com/aws/aws-sdk-go-v2/service/accessanalyzer" - "github.com/aws/aws-sdk-go-v2/service/account" - "github.com/aws/aws-sdk-go-v2/service/acm" - "github.com/aws/aws-sdk-go-v2/service/auditmanager" - "github.com/aws/aws-sdk-go-v2/service/cleanrooms" - "github.com/aws/aws-sdk-go-v2/service/cloudcontrol" - cloudwatchlogs_sdkv2 "github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs" - "github.com/aws/aws-sdk-go-v2/service/comprehend" - "github.com/aws/aws-sdk-go-v2/service/computeoptimizer" - directoryservice_sdkv2 "github.com/aws/aws-sdk-go-v2/service/directoryservice" - "github.com/aws/aws-sdk-go-v2/service/docdbelastic" - ec2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ec2" - "github.com/aws/aws-sdk-go-v2/service/fis" - "github.com/aws/aws-sdk-go-v2/service/healthlake" - "github.com/aws/aws-sdk-go-v2/service/identitystore" - "github.com/aws/aws-sdk-go-v2/service/inspector2" - "github.com/aws/aws-sdk-go-v2/service/ivschat" - "github.com/aws/aws-sdk-go-v2/service/kendra" - lambda_sdkv2 "github.com/aws/aws-sdk-go-v2/service/lambda" - "github.com/aws/aws-sdk-go-v2/service/medialive" - "github.com/aws/aws-sdk-go-v2/service/oam" - "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" - "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/aws/aws-sdk-go-v2/service/rbin" - rds_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rds" - "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2" 
- "github.com/aws/aws-sdk-go-v2/service/rolesanywhere" - s3control_sdkv2 "github.com/aws/aws-sdk-go-v2/service/s3control" - "github.com/aws/aws-sdk-go-v2/service/scheduler" - "github.com/aws/aws-sdk-go-v2/service/securitylake" - "github.com/aws/aws-sdk-go-v2/service/sesv2" - ssm_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssm" - "github.com/aws/aws-sdk-go-v2/service/ssmcontacts" - "github.com/aws/aws-sdk-go-v2/service/ssmincidents" - "github.com/aws/aws-sdk-go-v2/service/transcribe" - "github.com/aws/aws-sdk-go-v2/service/vpclattice" - "github.com/aws/aws-sdk-go-v2/service/xray" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/session" - "github.com/aws/aws-sdk-go/service/acmpca" - "github.com/aws/aws-sdk-go/service/alexaforbusiness" - "github.com/aws/aws-sdk-go/service/amplify" - "github.com/aws/aws-sdk-go/service/amplifybackend" - "github.com/aws/aws-sdk-go/service/amplifyuibuilder" - "github.com/aws/aws-sdk-go/service/apigateway" - "github.com/aws/aws-sdk-go/service/apigatewaymanagementapi" - "github.com/aws/aws-sdk-go/service/apigatewayv2" - "github.com/aws/aws-sdk-go/service/appconfig" - "github.com/aws/aws-sdk-go/service/appconfigdata" - "github.com/aws/aws-sdk-go/service/appflow" - "github.com/aws/aws-sdk-go/service/appintegrationsservice" - "github.com/aws/aws-sdk-go/service/applicationautoscaling" - "github.com/aws/aws-sdk-go/service/applicationcostprofiler" - "github.com/aws/aws-sdk-go/service/applicationdiscoveryservice" - "github.com/aws/aws-sdk-go/service/applicationinsights" - "github.com/aws/aws-sdk-go/service/appmesh" - "github.com/aws/aws-sdk-go/service/appregistry" - "github.com/aws/aws-sdk-go/service/apprunner" - "github.com/aws/aws-sdk-go/service/appstream" - "github.com/aws/aws-sdk-go/service/appsync" - "github.com/aws/aws-sdk-go/service/athena" - "github.com/aws/aws-sdk-go/service/augmentedairuntime" - "github.com/aws/aws-sdk-go/service/autoscaling" - "github.com/aws/aws-sdk-go/service/autoscalingplans" - 
"github.com/aws/aws-sdk-go/service/backup" - "github.com/aws/aws-sdk-go/service/backupgateway" - "github.com/aws/aws-sdk-go/service/batch" - "github.com/aws/aws-sdk-go/service/billingconductor" - "github.com/aws/aws-sdk-go/service/braket" - "github.com/aws/aws-sdk-go/service/budgets" - "github.com/aws/aws-sdk-go/service/chime" - "github.com/aws/aws-sdk-go/service/chimesdkidentity" - "github.com/aws/aws-sdk-go/service/chimesdkmediapipelines" - "github.com/aws/aws-sdk-go/service/chimesdkmeetings" - "github.com/aws/aws-sdk-go/service/chimesdkmessaging" - "github.com/aws/aws-sdk-go/service/chimesdkvoice" - "github.com/aws/aws-sdk-go/service/cloud9" - "github.com/aws/aws-sdk-go/service/clouddirectory" - "github.com/aws/aws-sdk-go/service/cloudformation" - "github.com/aws/aws-sdk-go/service/cloudfront" - "github.com/aws/aws-sdk-go/service/cloudhsmv2" - "github.com/aws/aws-sdk-go/service/cloudsearch" - "github.com/aws/aws-sdk-go/service/cloudsearchdomain" - "github.com/aws/aws-sdk-go/service/cloudtrail" - "github.com/aws/aws-sdk-go/service/cloudwatch" - "github.com/aws/aws-sdk-go/service/cloudwatchevidently" - "github.com/aws/aws-sdk-go/service/cloudwatchlogs" - "github.com/aws/aws-sdk-go/service/cloudwatchrum" - "github.com/aws/aws-sdk-go/service/codeartifact" - "github.com/aws/aws-sdk-go/service/codebuild" - "github.com/aws/aws-sdk-go/service/codecommit" - "github.com/aws/aws-sdk-go/service/codedeploy" - "github.com/aws/aws-sdk-go/service/codeguruprofiler" - "github.com/aws/aws-sdk-go/service/codegurureviewer" - "github.com/aws/aws-sdk-go/service/codepipeline" - "github.com/aws/aws-sdk-go/service/codestar" - "github.com/aws/aws-sdk-go/service/codestarconnections" - "github.com/aws/aws-sdk-go/service/codestarnotifications" - "github.com/aws/aws-sdk-go/service/cognitoidentity" - "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" - "github.com/aws/aws-sdk-go/service/cognitosync" - "github.com/aws/aws-sdk-go/service/comprehendmedical" - 
"github.com/aws/aws-sdk-go/service/configservice" - "github.com/aws/aws-sdk-go/service/connect" - "github.com/aws/aws-sdk-go/service/connectcontactlens" - "github.com/aws/aws-sdk-go/service/connectparticipant" - "github.com/aws/aws-sdk-go/service/connectwisdomservice" - "github.com/aws/aws-sdk-go/service/controltower" - "github.com/aws/aws-sdk-go/service/costandusagereportservice" - "github.com/aws/aws-sdk-go/service/costexplorer" - "github.com/aws/aws-sdk-go/service/customerprofiles" - "github.com/aws/aws-sdk-go/service/databasemigrationservice" - "github.com/aws/aws-sdk-go/service/dataexchange" - "github.com/aws/aws-sdk-go/service/datapipeline" - "github.com/aws/aws-sdk-go/service/datasync" - "github.com/aws/aws-sdk-go/service/dax" - "github.com/aws/aws-sdk-go/service/detective" - "github.com/aws/aws-sdk-go/service/devicefarm" - "github.com/aws/aws-sdk-go/service/devopsguru" - "github.com/aws/aws-sdk-go/service/directconnect" - "github.com/aws/aws-sdk-go/service/directoryservice" - "github.com/aws/aws-sdk-go/service/dlm" - "github.com/aws/aws-sdk-go/service/docdb" - "github.com/aws/aws-sdk-go/service/drs" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/dynamodbstreams" - "github.com/aws/aws-sdk-go/service/ebs" - "github.com/aws/aws-sdk-go/service/ec2" - "github.com/aws/aws-sdk-go/service/ec2instanceconnect" - "github.com/aws/aws-sdk-go/service/ecr" - "github.com/aws/aws-sdk-go/service/ecrpublic" - "github.com/aws/aws-sdk-go/service/ecs" - "github.com/aws/aws-sdk-go/service/efs" - "github.com/aws/aws-sdk-go/service/eks" - "github.com/aws/aws-sdk-go/service/elasticache" - "github.com/aws/aws-sdk-go/service/elasticbeanstalk" - "github.com/aws/aws-sdk-go/service/elasticinference" - "github.com/aws/aws-sdk-go/service/elasticsearchservice" - "github.com/aws/aws-sdk-go/service/elastictranscoder" - "github.com/aws/aws-sdk-go/service/elb" - "github.com/aws/aws-sdk-go/service/elbv2" - "github.com/aws/aws-sdk-go/service/emr" - 
"github.com/aws/aws-sdk-go/service/emrcontainers" - "github.com/aws/aws-sdk-go/service/emrserverless" - "github.com/aws/aws-sdk-go/service/eventbridge" - "github.com/aws/aws-sdk-go/service/finspace" - "github.com/aws/aws-sdk-go/service/finspacedata" - "github.com/aws/aws-sdk-go/service/firehose" - "github.com/aws/aws-sdk-go/service/fms" - "github.com/aws/aws-sdk-go/service/forecastqueryservice" - "github.com/aws/aws-sdk-go/service/forecastservice" - "github.com/aws/aws-sdk-go/service/frauddetector" - "github.com/aws/aws-sdk-go/service/fsx" - "github.com/aws/aws-sdk-go/service/gamelift" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/aws/aws-sdk-go/service/glue" - "github.com/aws/aws-sdk-go/service/gluedatabrew" - "github.com/aws/aws-sdk-go/service/greengrass" - "github.com/aws/aws-sdk-go/service/greengrassv2" - "github.com/aws/aws-sdk-go/service/groundstation" - "github.com/aws/aws-sdk-go/service/guardduty" - "github.com/aws/aws-sdk-go/service/health" - "github.com/aws/aws-sdk-go/service/honeycode" - "github.com/aws/aws-sdk-go/service/iam" - "github.com/aws/aws-sdk-go/service/imagebuilder" - "github.com/aws/aws-sdk-go/service/inspector" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/aws/aws-sdk-go/service/iot" - "github.com/aws/aws-sdk-go/service/iot1clickdevicesservice" - "github.com/aws/aws-sdk-go/service/iot1clickprojects" - "github.com/aws/aws-sdk-go/service/iotanalytics" - "github.com/aws/aws-sdk-go/service/iotdataplane" - "github.com/aws/aws-sdk-go/service/iotdeviceadvisor" - "github.com/aws/aws-sdk-go/service/iotevents" - "github.com/aws/aws-sdk-go/service/ioteventsdata" - "github.com/aws/aws-sdk-go/service/iotfleethub" - "github.com/aws/aws-sdk-go/service/iotjobsdataplane" - "github.com/aws/aws-sdk-go/service/iotsecuretunneling" - "github.com/aws/aws-sdk-go/service/iotsitewise" - "github.com/aws/aws-sdk-go/service/iotthingsgraph" - "github.com/aws/aws-sdk-go/service/iottwinmaker" - 
"github.com/aws/aws-sdk-go/service/iotwireless" - "github.com/aws/aws-sdk-go/service/ivs" - "github.com/aws/aws-sdk-go/service/kafka" - "github.com/aws/aws-sdk-go/service/kafkaconnect" - "github.com/aws/aws-sdk-go/service/keyspaces" - "github.com/aws/aws-sdk-go/service/kinesis" - "github.com/aws/aws-sdk-go/service/kinesisanalytics" - "github.com/aws/aws-sdk-go/service/kinesisanalyticsv2" - "github.com/aws/aws-sdk-go/service/kinesisvideo" - "github.com/aws/aws-sdk-go/service/kinesisvideoarchivedmedia" - "github.com/aws/aws-sdk-go/service/kinesisvideomedia" - "github.com/aws/aws-sdk-go/service/kinesisvideosignalingchannels" - "github.com/aws/aws-sdk-go/service/kms" - "github.com/aws/aws-sdk-go/service/lakeformation" - "github.com/aws/aws-sdk-go/service/lambda" - "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice" - "github.com/aws/aws-sdk-go/service/lexmodelsv2" - "github.com/aws/aws-sdk-go/service/lexruntimeservice" - "github.com/aws/aws-sdk-go/service/lexruntimev2" - "github.com/aws/aws-sdk-go/service/licensemanager" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/aws/aws-sdk-go/service/locationservice" - "github.com/aws/aws-sdk-go/service/lookoutequipment" - "github.com/aws/aws-sdk-go/service/lookoutforvision" - "github.com/aws/aws-sdk-go/service/lookoutmetrics" - "github.com/aws/aws-sdk-go/service/machinelearning" - "github.com/aws/aws-sdk-go/service/macie" - "github.com/aws/aws-sdk-go/service/macie2" - "github.com/aws/aws-sdk-go/service/managedblockchain" - "github.com/aws/aws-sdk-go/service/managedgrafana" - "github.com/aws/aws-sdk-go/service/marketplacecatalog" - "github.com/aws/aws-sdk-go/service/marketplacecommerceanalytics" - "github.com/aws/aws-sdk-go/service/marketplaceentitlementservice" - "github.com/aws/aws-sdk-go/service/marketplacemetering" - "github.com/aws/aws-sdk-go/service/mediaconnect" - "github.com/aws/aws-sdk-go/service/mediaconvert" - "github.com/aws/aws-sdk-go/service/mediapackage" - 
"github.com/aws/aws-sdk-go/service/mediapackagevod" - "github.com/aws/aws-sdk-go/service/mediastore" - "github.com/aws/aws-sdk-go/service/mediastoredata" - "github.com/aws/aws-sdk-go/service/mediatailor" - "github.com/aws/aws-sdk-go/service/memorydb" - "github.com/aws/aws-sdk-go/service/mgn" - "github.com/aws/aws-sdk-go/service/migrationhub" - "github.com/aws/aws-sdk-go/service/migrationhubconfig" - "github.com/aws/aws-sdk-go/service/migrationhubrefactorspaces" - "github.com/aws/aws-sdk-go/service/migrationhubstrategyrecommendations" - "github.com/aws/aws-sdk-go/service/mobile" - "github.com/aws/aws-sdk-go/service/mq" - "github.com/aws/aws-sdk-go/service/mturk" - "github.com/aws/aws-sdk-go/service/mwaa" - "github.com/aws/aws-sdk-go/service/neptune" - "github.com/aws/aws-sdk-go/service/networkfirewall" - "github.com/aws/aws-sdk-go/service/networkmanager" - "github.com/aws/aws-sdk-go/service/nimblestudio" - "github.com/aws/aws-sdk-go/service/opensearchservice" - "github.com/aws/aws-sdk-go/service/opsworks" - "github.com/aws/aws-sdk-go/service/opsworkscm" - "github.com/aws/aws-sdk-go/service/organizations" - "github.com/aws/aws-sdk-go/service/outposts" - "github.com/aws/aws-sdk-go/service/panorama" - "github.com/aws/aws-sdk-go/service/personalize" - "github.com/aws/aws-sdk-go/service/personalizeevents" - "github.com/aws/aws-sdk-go/service/personalizeruntime" - "github.com/aws/aws-sdk-go/service/pi" - "github.com/aws/aws-sdk-go/service/pinpoint" - "github.com/aws/aws-sdk-go/service/pinpointemail" - "github.com/aws/aws-sdk-go/service/pinpointsmsvoice" - "github.com/aws/aws-sdk-go/service/polly" - "github.com/aws/aws-sdk-go/service/pricing" - "github.com/aws/aws-sdk-go/service/prometheusservice" - "github.com/aws/aws-sdk-go/service/proton" - "github.com/aws/aws-sdk-go/service/qldb" - "github.com/aws/aws-sdk-go/service/qldbsession" - "github.com/aws/aws-sdk-go/service/quicksight" - "github.com/aws/aws-sdk-go/service/ram" - "github.com/aws/aws-sdk-go/service/rds" - 
"github.com/aws/aws-sdk-go/service/rdsdataservice" - "github.com/aws/aws-sdk-go/service/redshift" - "github.com/aws/aws-sdk-go/service/redshiftdataapiservice" - "github.com/aws/aws-sdk-go/service/redshiftserverless" - "github.com/aws/aws-sdk-go/service/rekognition" - "github.com/aws/aws-sdk-go/service/resiliencehub" - "github.com/aws/aws-sdk-go/service/resourcegroups" - "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi" - "github.com/aws/aws-sdk-go/service/robomaker" - "github.com/aws/aws-sdk-go/service/route53recoverycluster" - "github.com/aws/aws-sdk-go/service/route53resolver" - "github.com/aws/aws-sdk-go/service/s3control" - "github.com/aws/aws-sdk-go/service/s3outposts" - "github.com/aws/aws-sdk-go/service/sagemaker" - "github.com/aws/aws-sdk-go/service/sagemakeredgemanager" - "github.com/aws/aws-sdk-go/service/sagemakerfeaturestoreruntime" - "github.com/aws/aws-sdk-go/service/sagemakerruntime" - "github.com/aws/aws-sdk-go/service/savingsplans" - "github.com/aws/aws-sdk-go/service/schemas" - "github.com/aws/aws-sdk-go/service/secretsmanager" - "github.com/aws/aws-sdk-go/service/securityhub" - "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository" - "github.com/aws/aws-sdk-go/service/servicecatalog" - "github.com/aws/aws-sdk-go/service/servicediscovery" - "github.com/aws/aws-sdk-go/service/servicequotas" - "github.com/aws/aws-sdk-go/service/ses" - "github.com/aws/aws-sdk-go/service/sfn" - "github.com/aws/aws-sdk-go/service/signer" - "github.com/aws/aws-sdk-go/service/simpledb" - "github.com/aws/aws-sdk-go/service/sms" - "github.com/aws/aws-sdk-go/service/snowball" - "github.com/aws/aws-sdk-go/service/snowdevicemanagement" - "github.com/aws/aws-sdk-go/service/sns" - "github.com/aws/aws-sdk-go/service/sqs" - "github.com/aws/aws-sdk-go/service/ssm" - "github.com/aws/aws-sdk-go/service/sso" - "github.com/aws/aws-sdk-go/service/ssoadmin" - "github.com/aws/aws-sdk-go/service/ssooidc" - "github.com/aws/aws-sdk-go/service/storagegateway" - 
"github.com/aws/aws-sdk-go/service/support" - "github.com/aws/aws-sdk-go/service/swf" - "github.com/aws/aws-sdk-go/service/synthetics" - "github.com/aws/aws-sdk-go/service/textract" - "github.com/aws/aws-sdk-go/service/timestreamquery" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/aws/aws-sdk-go/service/transcribestreamingservice" - "github.com/aws/aws-sdk-go/service/transfer" - "github.com/aws/aws-sdk-go/service/translate" - "github.com/aws/aws-sdk-go/service/voiceid" - "github.com/aws/aws-sdk-go/service/waf" - "github.com/aws/aws-sdk-go/service/wafregional" - "github.com/aws/aws-sdk-go/service/wafv2" - "github.com/aws/aws-sdk-go/service/wellarchitected" - "github.com/aws/aws-sdk-go/service/workdocs" - "github.com/aws/aws-sdk-go/service/worklink" - "github.com/aws/aws-sdk-go/service/workmail" - "github.com/aws/aws-sdk-go/service/workmailmessageflow" - "github.com/aws/aws-sdk-go/service/workspaces" - "github.com/aws/aws-sdk-go/service/workspacesweb" - "github.com/hashicorp/terraform-provider-aws/names" -) - -// sdkv1Conns initializes AWS SDK for Go v1 clients. 
-func (c *Config) sdkv1Conns(client *AWSClient, sess *session.Session) { - client.acmpcaConn = acmpca.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ACMPCA])})) - client.ampConn = prometheusservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AMP])})) - client.apigatewayConn = apigateway.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.APIGateway])})) - client.apigatewaymanagementapiConn = apigatewaymanagementapi.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.APIGatewayManagementAPI])})) - client.apigatewayv2Conn = apigatewayv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.APIGatewayV2])})) - client.alexaforbusinessConn = alexaforbusiness.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AlexaForBusiness])})) - client.amplifyConn = amplify.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Amplify])})) - client.amplifybackendConn = amplifybackend.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AmplifyBackend])})) - client.amplifyuibuilderConn = amplifyuibuilder.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AmplifyUIBuilder])})) - client.applicationautoscalingConn = applicationautoscaling.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppAutoScaling])})) - client.appconfigConn = appconfig.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppConfig])})) - client.appconfigdataConn = appconfigdata.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppConfigData])})) - client.appflowConn = appflow.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppFlow])})) - client.appintegrationsConn = appintegrationsservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppIntegrations])})) - client.appmeshConn = appmesh.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppMesh])})) - client.apprunnerConn = 
apprunner.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppRunner])})) - client.appstreamConn = appstream.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppStream])})) - client.appsyncConn = appsync.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AppSync])})) - client.applicationcostprofilerConn = applicationcostprofiler.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ApplicationCostProfiler])})) - client.applicationinsightsConn = applicationinsights.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ApplicationInsights])})) - client.athenaConn = athena.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Athena])})) - client.autoscalingConn = autoscaling.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AutoScaling])})) - client.autoscalingplansConn = autoscalingplans.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.AutoScalingPlans])})) - client.backupConn = backup.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Backup])})) - client.backupgatewayConn = backupgateway.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.BackupGateway])})) - client.batchConn = batch.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Batch])})) - client.billingconductorConn = billingconductor.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.BillingConductor])})) - client.braketConn = braket.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Braket])})) - client.budgetsConn = budgets.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Budgets])})) - client.ceConn = costexplorer.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CE])})) - client.curConn = costandusagereportservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CUR])})) - client.chimeConn = chime.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Chime])})) - 
client.chimesdkidentityConn = chimesdkidentity.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ChimeSDKIdentity])})) - client.chimesdkmediapipelinesConn = chimesdkmediapipelines.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ChimeSDKMediaPipelines])})) - client.chimesdkmeetingsConn = chimesdkmeetings.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ChimeSDKMeetings])})) - client.chimesdkmessagingConn = chimesdkmessaging.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ChimeSDKMessaging])})) - client.chimesdkvoiceConn = chimesdkvoice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ChimeSDKVoice])})) - client.cloud9Conn = cloud9.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Cloud9])})) - client.clouddirectoryConn = clouddirectory.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudDirectory])})) - client.cloudformationConn = cloudformation.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudFormation])})) - client.cloudfrontConn = cloudfront.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudFront])})) - client.cloudhsmv2Conn = cloudhsmv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudHSMV2])})) - client.cloudsearchConn = cloudsearch.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudSearch])})) - client.cloudsearchdomainConn = cloudsearchdomain.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudSearchDomain])})) - client.cloudtrailConn = cloudtrail.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudTrail])})) - client.cloudwatchConn = cloudwatch.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CloudWatch])})) - client.codeartifactConn = codeartifact.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeArtifact])})) - client.codebuildConn = codebuild.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.CodeBuild])})) - client.codecommitConn = codecommit.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeCommit])})) - client.codeguruprofilerConn = codeguruprofiler.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeGuruProfiler])})) - client.codegurureviewerConn = codegurureviewer.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeGuruReviewer])})) - client.codepipelineConn = codepipeline.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodePipeline])})) - client.codestarConn = codestar.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeStar])})) - client.codestarconnectionsConn = codestarconnections.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeStarConnections])})) - client.codestarnotificationsConn = codestarnotifications.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CodeStarNotifications])})) - client.cognitoidpConn = cognitoidentityprovider.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CognitoIDP])})) - client.cognitoidentityConn = cognitoidentity.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CognitoIdentity])})) - client.cognitosyncConn = cognitosync.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CognitoSync])})) - client.comprehendmedicalConn = comprehendmedical.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ComprehendMedical])})) - client.configserviceConn = configservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ConfigService])})) - client.connectConn = connect.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Connect])})) - client.connectcontactlensConn = connectcontactlens.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ConnectContactLens])})) - client.connectparticipantConn = connectparticipant.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.ConnectParticipant])})) - client.controltowerConn = controltower.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ControlTower])})) - client.customerprofilesConn = customerprofiles.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.CustomerProfiles])})) - client.daxConn = dax.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DAX])})) - client.dlmConn = dlm.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DLM])})) - client.dmsConn = databasemigrationservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DMS])})) - client.drsConn = drs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DRS])})) - client.dsConn = directoryservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DS])})) - client.databrewConn = gluedatabrew.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DataBrew])})) - client.dataexchangeConn = dataexchange.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DataExchange])})) - client.datapipelineConn = datapipeline.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DataPipeline])})) - client.datasyncConn = datasync.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DataSync])})) - client.deployConn = codedeploy.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Deploy])})) - client.detectiveConn = detective.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Detective])})) - client.devopsguruConn = devopsguru.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DevOpsGuru])})) - client.devicefarmConn = devicefarm.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DeviceFarm])})) - client.directconnectConn = directconnect.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DirectConnect])})) - client.discoveryConn = applicationdiscoveryservice.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.Discovery])})) - client.docdbConn = docdb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DocDB])})) - client.dynamodbConn = dynamodb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DynamoDB])})) - client.dynamodbstreamsConn = dynamodbstreams.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.DynamoDBStreams])})) - client.ebsConn = ebs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EBS])})) - client.ec2Conn = ec2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EC2])})) - client.ec2instanceconnectConn = ec2instanceconnect.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EC2InstanceConnect])})) - client.ecrConn = ecr.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ECR])})) - client.ecrpublicConn = ecrpublic.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ECRPublic])})) - client.ecsConn = ecs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ECS])})) - client.efsConn = efs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EFS])})) - client.eksConn = eks.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EKS])})) - client.elbConn = elb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ELB])})) - client.elbv2Conn = elbv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ELBV2])})) - client.emrConn = emr.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EMR])})) - client.emrcontainersConn = emrcontainers.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EMRContainers])})) - client.emrserverlessConn = emrserverless.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.EMRServerless])})) - client.elasticacheConn = elasticache.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ElastiCache])})) - client.elasticbeanstalkConn = elasticbeanstalk.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.ElasticBeanstalk])})) - client.elasticinferenceConn = elasticinference.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ElasticInference])})) - client.elastictranscoderConn = elastictranscoder.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ElasticTranscoder])})) - client.esConn = elasticsearchservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Elasticsearch])})) - client.eventsConn = eventbridge.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Events])})) - client.evidentlyConn = cloudwatchevidently.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Evidently])})) - client.fmsConn = fms.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.FMS])})) - client.fsxConn = fsx.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.FSx])})) - client.finspaceConn = finspace.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.FinSpace])})) - client.finspacedataConn = finspacedata.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.FinSpaceData])})) - client.firehoseConn = firehose.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Firehose])})) - client.forecastConn = forecastservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Forecast])})) - client.forecastqueryConn = forecastqueryservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ForecastQuery])})) - client.frauddetectorConn = frauddetector.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.FraudDetector])})) - client.gameliftConn = gamelift.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.GameLift])})) - client.glacierConn = glacier.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Glacier])})) - client.glueConn = glue.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Glue])})) - client.grafanaConn = 
managedgrafana.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Grafana])})) - client.greengrassConn = greengrass.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Greengrass])})) - client.greengrassv2Conn = greengrassv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.GreengrassV2])})) - client.groundstationConn = groundstation.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.GroundStation])})) - client.guarddutyConn = guardduty.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.GuardDuty])})) - client.healthConn = health.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Health])})) - client.honeycodeConn = honeycode.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Honeycode])})) - client.iamConn = iam.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IAM])})) - client.ivsConn = ivs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IVS])})) - client.imagebuilderConn = imagebuilder.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ImageBuilder])})) - client.inspectorConn = inspector.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Inspector])})) - client.internetmonitorConn = internetmonitor.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.InternetMonitor])})) - client.iotConn = iot.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoT])})) - client.iot1clickdevicesConn = iot1clickdevicesservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoT1ClickDevices])})) - client.iot1clickprojectsConn = iot1clickprojects.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoT1ClickProjects])})) - client.iotanalyticsConn = iotanalytics.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTAnalytics])})) - client.iotdataConn = iotdataplane.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTData])})) - 
client.iotdeviceadvisorConn = iotdeviceadvisor.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTDeviceAdvisor])})) - client.ioteventsConn = iotevents.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTEvents])})) - client.ioteventsdataConn = ioteventsdata.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTEventsData])})) - client.iotfleethubConn = iotfleethub.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTFleetHub])})) - client.iotjobsdataConn = iotjobsdataplane.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTJobsData])})) - client.iotsecuretunnelingConn = iotsecuretunneling.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTSecureTunneling])})) - client.iotsitewiseConn = iotsitewise.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTSiteWise])})) - client.iotthingsgraphConn = iotthingsgraph.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTThingsGraph])})) - client.iottwinmakerConn = iottwinmaker.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTTwinMaker])})) - client.iotwirelessConn = iotwireless.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.IoTWireless])})) - client.kmsConn = kms.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KMS])})) - client.kafkaConn = kafka.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Kafka])})) - client.kafkaconnectConn = kafkaconnect.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KafkaConnect])})) - client.keyspacesConn = keyspaces.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Keyspaces])})) - client.kinesisConn = kinesis.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Kinesis])})) - client.kinesisanalyticsConn = kinesisanalytics.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KinesisAnalytics])})) - client.kinesisanalyticsv2Conn = 
kinesisanalyticsv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KinesisAnalyticsV2])})) - client.kinesisvideoConn = kinesisvideo.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KinesisVideo])})) - client.kinesisvideoarchivedmediaConn = kinesisvideoarchivedmedia.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KinesisVideoArchivedMedia])})) - client.kinesisvideomediaConn = kinesisvideomedia.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KinesisVideoMedia])})) - client.kinesisvideosignalingConn = kinesisvideosignalingchannels.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.KinesisVideoSignaling])})) - client.lakeformationConn = lakeformation.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LakeFormation])})) - client.lambdaConn = lambda.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Lambda])})) - client.lexmodelsConn = lexmodelbuildingservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LexModels])})) - client.lexmodelsv2Conn = lexmodelsv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LexModelsV2])})) - client.lexruntimeConn = lexruntimeservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LexRuntime])})) - client.lexruntimev2Conn = lexruntimev2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LexRuntimeV2])})) - client.licensemanagerConn = licensemanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LicenseManager])})) - client.lightsailConn = lightsail.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Lightsail])})) - client.locationConn = locationservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Location])})) - client.logsConn = cloudwatchlogs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Logs])})) - client.lookoutequipmentConn = lookoutequipment.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.LookoutEquipment])})) - client.lookoutmetricsConn = lookoutmetrics.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LookoutMetrics])})) - client.lookoutvisionConn = lookoutforvision.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.LookoutVision])})) - client.mqConn = mq.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MQ])})) - client.mturkConn = mturk.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MTurk])})) - client.mwaaConn = mwaa.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MWAA])})) - client.machinelearningConn = machinelearning.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MachineLearning])})) - client.macieConn = macie.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Macie])})) - client.macie2Conn = macie2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Macie2])})) - client.managedblockchainConn = managedblockchain.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ManagedBlockchain])})) - client.marketplacecatalogConn = marketplacecatalog.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MarketplaceCatalog])})) - client.marketplacecommerceanalyticsConn = marketplacecommerceanalytics.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MarketplaceCommerceAnalytics])})) - client.marketplaceentitlementConn = marketplaceentitlementservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MarketplaceEntitlement])})) - client.marketplacemeteringConn = marketplacemetering.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MarketplaceMetering])})) - client.mediaconnectConn = mediaconnect.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaConnect])})) - client.mediaconvertConn = mediaconvert.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaConvert])})) - client.mediapackageConn = 
mediapackage.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaPackage])})) - client.mediapackagevodConn = mediapackagevod.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaPackageVOD])})) - client.mediastoreConn = mediastore.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaStore])})) - client.mediastoredataConn = mediastoredata.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaStoreData])})) - client.mediatailorConn = mediatailor.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MediaTailor])})) - client.memorydbConn = memorydb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MemoryDB])})) - client.mghConn = migrationhub.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MgH])})) - client.mgnConn = mgn.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Mgn])})) - client.migrationhubconfigConn = migrationhubconfig.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MigrationHubConfig])})) - client.migrationhubrefactorspacesConn = migrationhubrefactorspaces.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MigrationHubRefactorSpaces])})) - client.migrationhubstrategyConn = migrationhubstrategyrecommendations.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.MigrationHubStrategy])})) - client.mobileConn = mobile.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Mobile])})) - client.neptuneConn = neptune.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Neptune])})) - client.networkfirewallConn = networkfirewall.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.NetworkFirewall])})) - client.networkmanagerConn = networkmanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.NetworkManager])})) - client.nimbleConn = nimblestudio.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Nimble])})) - client.opensearchConn = 
opensearchservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.OpenSearch])})) - client.opsworksConn = opsworks.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.OpsWorks])})) - client.opsworkscmConn = opsworkscm.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.OpsWorksCM])})) - client.organizationsConn = organizations.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Organizations])})) - client.outpostsConn = outposts.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Outposts])})) - client.piConn = pi.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.PI])})) - client.panoramaConn = panorama.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Panorama])})) - client.personalizeConn = personalize.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Personalize])})) - client.personalizeeventsConn = personalizeevents.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.PersonalizeEvents])})) - client.personalizeruntimeConn = personalizeruntime.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.PersonalizeRuntime])})) - client.pinpointConn = pinpoint.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Pinpoint])})) - client.pinpointemailConn = pinpointemail.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.PinpointEmail])})) - client.pinpointsmsvoiceConn = pinpointsmsvoice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.PinpointSMSVoice])})) - client.pollyConn = polly.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Polly])})) - client.pricingConn = pricing.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Pricing])})) - client.protonConn = proton.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Proton])})) - client.qldbConn = qldb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.QLDB])})) - client.qldbsessionConn = 
qldbsession.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.QLDBSession])})) - client.quicksightConn = quicksight.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.QuickSight])})) - client.ramConn = ram.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RAM])})) - client.rdsConn = rds.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RDS])})) - client.rdsdataConn = rdsdataservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RDSData])})) - client.rumConn = cloudwatchrum.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RUM])})) - client.redshiftConn = redshift.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Redshift])})) - client.redshiftdataConn = redshiftdataapiservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RedshiftData])})) - client.redshiftserverlessConn = redshiftserverless.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RedshiftServerless])})) - client.rekognitionConn = rekognition.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Rekognition])})) - client.resiliencehubConn = resiliencehub.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ResilienceHub])})) - client.resourcegroupsConn = resourcegroups.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ResourceGroups])})) - client.resourcegroupstaggingapiConn = resourcegroupstaggingapi.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ResourceGroupsTaggingAPI])})) - client.robomakerConn = robomaker.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.RoboMaker])})) - client.route53recoveryclusterConn = route53recoverycluster.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Route53RecoveryCluster])})) - client.route53resolverConn = route53resolver.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Route53Resolver])})) - client.s3controlConn = 
s3control.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.S3Control])})) - client.s3outpostsConn = s3outposts.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.S3Outposts])})) - client.sesConn = ses.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SES])})) - client.sfnConn = sfn.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SFN])})) - client.smsConn = sms.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SMS])})) - client.snsConn = sns.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SNS])})) - client.sqsConn = sqs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SQS])})) - client.ssmConn = ssm.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SSM])})) - client.ssoConn = sso.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SSO])})) - client.ssoadminConn = ssoadmin.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SSOAdmin])})) - client.ssooidcConn = ssooidc.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SSOOIDC])})) - client.swfConn = swf.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SWF])})) - client.sagemakerConn = sagemaker.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SageMaker])})) - client.sagemakera2iruntimeConn = augmentedairuntime.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SageMakerA2IRuntime])})) - client.sagemakeredgeConn = sagemakeredgemanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SageMakerEdge])})) - client.sagemakerfeaturestoreruntimeConn = sagemakerfeaturestoreruntime.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SageMakerFeatureStoreRuntime])})) - client.sagemakerruntimeConn = sagemakerruntime.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SageMakerRuntime])})) - client.savingsplansConn = savingsplans.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.SavingsPlans])})) - client.schemasConn = schemas.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Schemas])})) - client.secretsmanagerConn = secretsmanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SecretsManager])})) - client.securityhubConn = securityhub.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SecurityHub])})) - client.serverlessrepoConn = serverlessapplicationrepository.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ServerlessRepo])})) - client.servicecatalogConn = servicecatalog.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ServiceCatalog])})) - client.servicecatalogappregistryConn = appregistry.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ServiceCatalogAppRegistry])})) - client.servicediscoveryConn = servicediscovery.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ServiceDiscovery])})) - client.servicequotasConn = servicequotas.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.ServiceQuotas])})) - client.signerConn = signer.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Signer])})) - client.sdbConn = simpledb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SimpleDB])})) - client.snowdevicemanagementConn = snowdevicemanagement.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.SnowDeviceManagement])})) - client.snowballConn = snowball.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Snowball])})) - client.storagegatewayConn = storagegateway.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.StorageGateway])})) - client.supportConn = support.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Support])})) - client.syntheticsConn = synthetics.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Synthetics])})) - client.textractConn = textract.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints[names.Textract])})) - client.timestreamqueryConn = timestreamquery.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.TimestreamQuery])})) - client.timestreamwriteConn = timestreamwrite.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.TimestreamWrite])})) - client.transcribestreamingConn = transcribestreamingservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.TranscribeStreaming])})) - client.transferConn = transfer.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Transfer])})) - client.translateConn = translate.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Translate])})) - client.voiceidConn = voiceid.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.VoiceID])})) - client.wafConn = waf.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WAF])})) - client.wafregionalConn = wafregional.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WAFRegional])})) - client.wafv2Conn = wafv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WAFV2])})) - client.wellarchitectedConn = wellarchitected.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WellArchitected])})) - client.wisdomConn = connectwisdomservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Wisdom])})) - client.workdocsConn = workdocs.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WorkDocs])})) - client.worklinkConn = worklink.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WorkLink])})) - client.workmailConn = workmail.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WorkMail])})) - client.workmailmessageflowConn = workmailmessageflow.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WorkMailMessageFlow])})) - client.workspacesConn = workspaces.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WorkSpaces])})) - client.workspaceswebConn = 
workspacesweb.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.WorkSpacesWeb])})) -} - -// sdkv2Conns initializes AWS SDK for Go v2 clients. -func (c *Config) sdkv2Conns(client *AWSClient, cfg aws_sdkv2.Config) { - client.acmClient = acm.NewFromConfig(cfg, func(o *acm.Options) { - if endpoint := c.Endpoints[names.ACM]; endpoint != "" { - o.EndpointResolver = acm.EndpointResolverFromURL(endpoint) - } - }) - client.accessanalyzerClient = accessanalyzer.NewFromConfig(cfg, func(o *accessanalyzer.Options) { - if endpoint := c.Endpoints[names.AccessAnalyzer]; endpoint != "" { - o.EndpointResolver = accessanalyzer.EndpointResolverFromURL(endpoint) - } - }) - client.accountClient = account.NewFromConfig(cfg, func(o *account.Options) { - if endpoint := c.Endpoints[names.Account]; endpoint != "" { - o.EndpointResolver = account.EndpointResolverFromURL(endpoint) - } - }) - client.auditmanagerClient = auditmanager.NewFromConfig(cfg, func(o *auditmanager.Options) { - if endpoint := c.Endpoints[names.AuditManager]; endpoint != "" { - o.EndpointResolver = auditmanager.EndpointResolverFromURL(endpoint) - } - }) - client.cleanroomsClient = cleanrooms.NewFromConfig(cfg, func(o *cleanrooms.Options) { - if endpoint := c.Endpoints[names.CleanRooms]; endpoint != "" { - o.EndpointResolver = cleanrooms.EndpointResolverFromURL(endpoint) - } - }) - client.cloudcontrolClient = cloudcontrol.NewFromConfig(cfg, func(o *cloudcontrol.Options) { - if endpoint := c.Endpoints[names.CloudControl]; endpoint != "" { - o.EndpointResolver = cloudcontrol.EndpointResolverFromURL(endpoint) - } - }) - client.comprehendClient = comprehend.NewFromConfig(cfg, func(o *comprehend.Options) { - if endpoint := c.Endpoints[names.Comprehend]; endpoint != "" { - o.EndpointResolver = comprehend.EndpointResolverFromURL(endpoint) - } - }) - client.computeoptimizerClient = computeoptimizer.NewFromConfig(cfg, func(o *computeoptimizer.Options) { - if endpoint := c.Endpoints[names.ComputeOptimizer]; endpoint 
!= "" { - o.EndpointResolver = computeoptimizer.EndpointResolverFromURL(endpoint) - } - }) - client.docdbelasticClient = docdbelastic.NewFromConfig(cfg, func(o *docdbelastic.Options) { - if endpoint := c.Endpoints[names.DocDBElastic]; endpoint != "" { - o.EndpointResolver = docdbelastic.EndpointResolverFromURL(endpoint) - } - }) - client.fisClient = fis.NewFromConfig(cfg, func(o *fis.Options) { - if endpoint := c.Endpoints[names.FIS]; endpoint != "" { - o.EndpointResolver = fis.EndpointResolverFromURL(endpoint) - } - }) - client.healthlakeClient = healthlake.NewFromConfig(cfg, func(o *healthlake.Options) { - if endpoint := c.Endpoints[names.HealthLake]; endpoint != "" { - o.EndpointResolver = healthlake.EndpointResolverFromURL(endpoint) - } - }) - client.ivschatClient = ivschat.NewFromConfig(cfg, func(o *ivschat.Options) { - if endpoint := c.Endpoints[names.IVSChat]; endpoint != "" { - o.EndpointResolver = ivschat.EndpointResolverFromURL(endpoint) - } - }) - client.identitystoreClient = identitystore.NewFromConfig(cfg, func(o *identitystore.Options) { - if endpoint := c.Endpoints[names.IdentityStore]; endpoint != "" { - o.EndpointResolver = identitystore.EndpointResolverFromURL(endpoint) - } - }) - client.inspector2Client = inspector2.NewFromConfig(cfg, func(o *inspector2.Options) { - if endpoint := c.Endpoints[names.Inspector2]; endpoint != "" { - o.EndpointResolver = inspector2.EndpointResolverFromURL(endpoint) - } - }) - client.kendraClient = kendra.NewFromConfig(cfg, func(o *kendra.Options) { - if endpoint := c.Endpoints[names.Kendra]; endpoint != "" { - o.EndpointResolver = kendra.EndpointResolverFromURL(endpoint) - } - }) - client.medialiveClient = medialive.NewFromConfig(cfg, func(o *medialive.Options) { - if endpoint := c.Endpoints[names.MediaLive]; endpoint != "" { - o.EndpointResolver = medialive.EndpointResolverFromURL(endpoint) - } - }) - client.oamClient = oam.NewFromConfig(cfg, func(o *oam.Options) { - if endpoint := 
c.Endpoints[names.ObservabilityAccessManager]; endpoint != "" { - o.EndpointResolver = oam.EndpointResolverFromURL(endpoint) - } - }) - client.opensearchserverlessClient = opensearchserverless.NewFromConfig(cfg, func(o *opensearchserverless.Options) { - if endpoint := c.Endpoints[names.OpenSearchServerless]; endpoint != "" { - o.EndpointResolver = opensearchserverless.EndpointResolverFromURL(endpoint) - } - }) - client.pipesClient = pipes.NewFromConfig(cfg, func(o *pipes.Options) { - if endpoint := c.Endpoints[names.Pipes]; endpoint != "" { - o.EndpointResolver = pipes.EndpointResolverFromURL(endpoint) - } - }) - client.rbinClient = rbin.NewFromConfig(cfg, func(o *rbin.Options) { - if endpoint := c.Endpoints[names.RBin]; endpoint != "" { - o.EndpointResolver = rbin.EndpointResolverFromURL(endpoint) - } - }) - client.resourceexplorer2Client = resourceexplorer2.NewFromConfig(cfg, func(o *resourceexplorer2.Options) { - if endpoint := c.Endpoints[names.ResourceExplorer2]; endpoint != "" { - o.EndpointResolver = resourceexplorer2.EndpointResolverFromURL(endpoint) - } - }) - client.rolesanywhereClient = rolesanywhere.NewFromConfig(cfg, func(o *rolesanywhere.Options) { - if endpoint := c.Endpoints[names.RolesAnywhere]; endpoint != "" { - o.EndpointResolver = rolesanywhere.EndpointResolverFromURL(endpoint) - } - }) - client.sesv2Client = sesv2.NewFromConfig(cfg, func(o *sesv2.Options) { - if endpoint := c.Endpoints[names.SESV2]; endpoint != "" { - o.EndpointResolver = sesv2.EndpointResolverFromURL(endpoint) - } - }) - client.ssmcontactsClient = ssmcontacts.NewFromConfig(cfg, func(o *ssmcontacts.Options) { - if endpoint := c.Endpoints[names.SSMContacts]; endpoint != "" { - o.EndpointResolver = ssmcontacts.EndpointResolverFromURL(endpoint) - } - }) - client.ssmincidentsClient = ssmincidents.NewFromConfig(cfg, func(o *ssmincidents.Options) { - if endpoint := c.Endpoints[names.SSMIncidents]; endpoint != "" { - o.EndpointResolver = ssmincidents.EndpointResolverFromURL(endpoint) 
- } - }) - client.schedulerClient = scheduler.NewFromConfig(cfg, func(o *scheduler.Options) { - if endpoint := c.Endpoints[names.Scheduler]; endpoint != "" { - o.EndpointResolver = scheduler.EndpointResolverFromURL(endpoint) - } - }) - client.securitylakeClient = securitylake.NewFromConfig(cfg, func(o *securitylake.Options) { - if endpoint := c.Endpoints[names.SecurityLake]; endpoint != "" { - o.EndpointResolver = securitylake.EndpointResolverFromURL(endpoint) - } - }) - client.transcribeClient = transcribe.NewFromConfig(cfg, func(o *transcribe.Options) { - if endpoint := c.Endpoints[names.Transcribe]; endpoint != "" { - o.EndpointResolver = transcribe.EndpointResolverFromURL(endpoint) - } - }) - client.vpclatticeClient = vpclattice.NewFromConfig(cfg, func(o *vpclattice.Options) { - if endpoint := c.Endpoints[names.VPCLattice]; endpoint != "" { - o.EndpointResolver = vpclattice.EndpointResolverFromURL(endpoint) - } - }) - client.xrayClient = xray.NewFromConfig(cfg, func(o *xray.Options) { - if endpoint := c.Endpoints[names.XRay]; endpoint != "" { - o.EndpointResolver = xray.EndpointResolverFromURL(endpoint) - } - }) -} - -// sdkv2LazyConns initializes AWS SDK for Go v2 lazy-load clients. 
-func (c *Config) sdkv2LazyConns(client *AWSClient, cfg aws_sdkv2.Config) { - client.dsClient.init(&cfg, func() *directoryservice_sdkv2.Client { - return directoryservice_sdkv2.NewFromConfig(cfg, func(o *directoryservice_sdkv2.Options) { - if endpoint := c.Endpoints[names.DS]; endpoint != "" { - o.EndpointResolver = directoryservice_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) - client.ec2Client.init(&cfg, func() *ec2_sdkv2.Client { - return ec2_sdkv2.NewFromConfig(cfg, func(o *ec2_sdkv2.Options) { - if endpoint := c.Endpoints[names.EC2]; endpoint != "" { - o.EndpointResolver = ec2_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) - client.lambdaClient.init(&cfg, func() *lambda_sdkv2.Client { - return lambda_sdkv2.NewFromConfig(cfg, func(o *lambda_sdkv2.Options) { - if endpoint := c.Endpoints[names.Lambda]; endpoint != "" { - o.EndpointResolver = lambda_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) - client.logsClient.init(&cfg, func() *cloudwatchlogs_sdkv2.Client { - return cloudwatchlogs_sdkv2.NewFromConfig(cfg, func(o *cloudwatchlogs_sdkv2.Options) { - if endpoint := c.Endpoints[names.Logs]; endpoint != "" { - o.EndpointResolver = cloudwatchlogs_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) - client.rdsClient.init(&cfg, func() *rds_sdkv2.Client { - return rds_sdkv2.NewFromConfig(cfg, func(o *rds_sdkv2.Options) { - if endpoint := c.Endpoints[names.RDS]; endpoint != "" { - o.EndpointResolver = rds_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) - client.s3controlClient.init(&cfg, func() *s3control_sdkv2.Client { - return s3control_sdkv2.NewFromConfig(cfg, func(o *s3control_sdkv2.Options) { - if endpoint := c.Endpoints[names.S3Control]; endpoint != "" { - o.EndpointResolver = s3control_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) - client.ssmClient.init(&cfg, func() *ssm_sdkv2.Client { - return ssm_sdkv2.NewFromConfig(cfg, func(o *ssm_sdkv2.Options) { - if endpoint := c.Endpoints[names.SSM]; endpoint != "" { - 
o.EndpointResolver = ssm_sdkv2.EndpointResolverFromURL(endpoint) - } - }) - }) -} diff --git a/internal/conns/conns.go b/internal/conns/conns.go index 3283a23e503..954266ba7a2 100644 --- a/internal/conns/conns.go +++ b/internal/conns/conns.go @@ -1,11 +1,14 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package conns import ( "context" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/session" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" awsbase "github.com/hashicorp/aws-sdk-go-base/v2" awsbasev1 "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2" "github.com/hashicorp/terraform-provider-aws/internal/types" @@ -61,8 +64,8 @@ func FromContext(ctx context.Context) (*InContext, bool) { return v, ok } -func NewSessionForRegion(cfg *aws.Config, region, terraformVersion string) (*session.Session, error) { - session, err := session.NewSession(cfg) +func NewSessionForRegion(cfg *aws_sdkv1.Config, region, terraformVersion string) (*session_sdkv1.Session, error) { + session, err := session_sdkv1.NewSession(cfg) if err != nil { return nil, err @@ -72,7 +75,7 @@ func NewSessionForRegion(cfg *aws.Config, region, terraformVersion string) (*ses awsbasev1.SetSessionUserAgent(session, apnInfo, awsbase.UserAgentProducts{}) - return session.Copy(&aws.Config{Region: aws.String(region)}), nil + return session.Copy(&aws_sdkv1.Config{Region: aws_sdkv1.String(region)}), nil } func StdUserAgentProducts(terraformVersion string) *awsbase.APNInfo { diff --git a/internal/conns/conns_test.go b/internal/conns/conns_test.go index 737bdbd1449..17c962cc58f 100644 --- a/internal/conns/conns_test.go +++ b/internal/conns/conns_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package conns import ( diff --git a/internal/conns/generate.go b/internal/conns/generate.go index dfbd9c478ae..201771e6fcc 100644 --- a/internal/conns/generate.go +++ b/internal/conns/generate.go @@ -1,5 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../generate/awsclient/main.go -//go:generate go run ../generate/clientconfig/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package conns diff --git a/internal/conns/lazy.go b/internal/conns/lazy.go deleted file mode 100644 index c0ef723554d..00000000000 --- a/internal/conns/lazy.go +++ /dev/null @@ -1,27 +0,0 @@ -package conns - -import ( - "sync" - - "github.com/aws/aws-sdk-go-v2/aws" -) - -type clientInitFunc[T any] func() T - -type lazyClient[T any] struct { - initf clientInitFunc[T] - - once sync.Once - client T -} - -func (l *lazyClient[T]) init(config *aws.Config, f clientInitFunc[T]) { - l.initf = f -} - -func (l *lazyClient[T]) Client() T { - l.once.Do(func() { - l.client = l.initf() - }) - return l.client -} diff --git a/internal/conns/mutexkv.go b/internal/conns/mutexkv.go index c44e909d6b9..2c34d01ba72 100644 --- a/internal/conns/mutexkv.go +++ b/internal/conns/mutexkv.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package conns import ( diff --git a/internal/conns/mutexkv_test.go b/internal/conns/mutexkv_test.go index 986f476224a..6ce02b4b762 100644 --- a/internal/conns/mutexkv_test.go +++ b/internal/conns/mutexkv_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package conns import ( diff --git a/internal/create/errors.go b/internal/create/errors.go index 001a907638a..805dbd5cd0e 100644 --- a/internal/create/errors.go +++ b/internal/create/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package create import ( @@ -49,14 +52,24 @@ func Error(service, action, resource, id string, gotError error) error { return errors.New(ProblemStandardMessage(service, action, resource, id, gotError)) } +// AddError returns diag.Diagnostics with an additional diag.Diagnostic containing +// an error using a standardized problem message +func AddError(diags diag.Diagnostics, service, action, resource, id string, gotError error) diag.Diagnostics { + return append(diags, newError(service, action, resource, id, gotError)) +} + // DiagError returns a 1-length diag.Diagnostics with a diag.Error-level diag.Diagnostic // with a standardized error message func DiagError(service, action, resource, id string, gotError error) diag.Diagnostics { return diag.Diagnostics{ - diag.Diagnostic{ - Severity: diag.Error, - Summary: ProblemStandardMessage(service, action, resource, id, gotError), - }, + newError(service, action, resource, id, gotError), + } +} + +func newError(service, action, resource, id string, gotError error) diag.Diagnostic { + return diag.Diagnostic{ + Severity: diag.Error, + Summary: ProblemStandardMessage(service, action, resource, id, gotError), } } @@ -99,6 +112,15 @@ func AddWarning(diags diag.Diagnostics, service, action, resource, id string, go ) } +func AddWarningMessage(diags diag.Diagnostics, service, action, resource, id, message string) diag.Diagnostics { + return append(diags, + diag.Diagnostic{ + Severity: diag.Warning, + Summary: ProblemStandardMessage(service, action, resource, id, fmt.Errorf(message)), + }, + ) +} + // AddWarningNotFoundRemoveState returns diag.Diagnostics with an additional diag.Diagnostic containing // a warning using a standardized problem message func AddWarningNotFoundRemoveState(service, action, resource, id string) diag.Diagnostics { diff --git a/internal/create/hashcode.go b/internal/create/hashcode.go index d4f77ac136c..d2d7a4f00c9 100644 --- a/internal/create/hashcode.go +++ 
b/internal/create/hashcode.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package create import ( diff --git a/internal/create/hashcode_test.go b/internal/create/hashcode_test.go index eee0bbd9ca9..f73bc61bace 100644 --- a/internal/create/hashcode_test.go +++ b/internal/create/hashcode_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package create import ( diff --git a/internal/create/naming.go b/internal/create/naming.go index 007c4ac6e13..1a564543613 100644 --- a/internal/create/naming.go +++ b/internal/create/naming.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package create import ( diff --git a/internal/create/naming_test.go b/internal/create/naming_test.go index 95c0c8eef8e..44f9b90b9b7 100644 --- a/internal/create/naming_test.go +++ b/internal/create/naming_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package create import ( diff --git a/internal/enum/validate.go b/internal/enum/validate.go index 4becca3521e..3005198c9fb 100644 --- a/internal/enum/validate.go +++ b/internal/enum/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package enum import ( diff --git a/internal/enum/values.go b/internal/enum/values.go index af8047be1d5..27f8c8dd0a8 100644 --- a/internal/enum/values.go +++ b/internal/enum/values.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package enum type valueser[T ~string] interface { diff --git a/internal/envvar/envvar.go b/internal/envvar/envvar.go index 52bdff09016..3eec0181d5a 100644 --- a/internal/envvar/envvar.go +++ b/internal/envvar/envvar.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package envvar import ( diff --git a/internal/envvar/envvar_test.go b/internal/envvar/envvar_test.go index 1f7387625eb..2b31d8469c6 100644 --- a/internal/envvar/envvar_test.go +++ b/internal/envvar/envvar_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package envvar import ( diff --git a/internal/errs/diag.go b/internal/errs/diag.go index 98b7c85766e..7faf6b596c5 100644 --- a/internal/errs/diag.go +++ b/internal/errs/diag.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package errs import ( diff --git a/internal/errs/errs.go b/internal/errs/errs.go index 439817e634d..ffdc39a798e 100644 --- a/internal/errs/errs.go +++ b/internal/errs/errs.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package errs import ( diff --git a/internal/errs/errs_test.go b/internal/errs/errs_test.go index cef5969f82a..6b7d35ef3ed 100644 --- a/internal/errs/errs_test.go +++ b/internal/errs/errs_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package errs_test import ( diff --git a/internal/errs/fwdiag/diags.go b/internal/errs/fwdiag/diags.go index 9c6ed9f4f9b..0fea976668a 100644 --- a/internal/errs/fwdiag/diags.go +++ b/internal/errs/fwdiag/diags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fwdiag import ( diff --git a/internal/errs/fwdiag/must.go b/internal/errs/fwdiag/must.go new file mode 100644 index 00000000000..65d5b743692 --- /dev/null +++ b/internal/errs/fwdiag/must.go @@ -0,0 +1,18 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package fwdiag + +import ( + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-provider-aws/internal/errs" +) + +// Must is a generic implementation of the Go Must idiom [1, 2]. 
It panics if +// the provided Diagnostics has errors and returns x otherwise. +// +// [1]: https://pkg.go.dev/text/template#Must +// [2]: https://pkg.go.dev/regexp#MustCompile +func Must[T any](x T, diags diag.Diagnostics) T { + return errs.Must(x, DiagnosticsError(diags)) +} diff --git a/internal/errs/must.go b/internal/errs/must.go new file mode 100644 index 00000000000..d5b92d4e9ad --- /dev/null +++ b/internal/errs/must.go @@ -0,0 +1,16 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package errs + +// Must is a generic implementation of the Go Must idiom [1, 2]. It panics if +// the provided error is non-nil and returns x otherwise. +// +// [1]: https://pkg.go.dev/text/template#Must +// [2]: https://pkg.go.dev/regexp#MustCompile +func Must[T any](x T, err error) T { + if err != nil { + panic(err) + } + return x +} diff --git a/internal/errs/sdkdiag/append.go b/internal/errs/sdkdiag/append.go index c2f77996a9b..fc14d8d8af1 100644 --- a/internal/errs/sdkdiag/append.go +++ b/internal/errs/sdkdiag/append.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sdkdiag import ( diff --git a/internal/errs/sdkdiag/diags.go b/internal/errs/sdkdiag/diags.go index ebb673c331f..483266e7448 100644 --- a/internal/errs/sdkdiag/diags.go +++ b/internal/errs/sdkdiag/diags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sdkdiag import ( @@ -28,16 +31,27 @@ func severityFilter(s diag.Severity) tfslices.FilterFunc[diag.Diagnostic] { } // DiagnosticsError returns an error containing all Diagnostic with SeverityError -func DiagnosticsError(diags diag.Diagnostics) (errs error) { +func DiagnosticsError(diags diag.Diagnostics) error { if !diags.HasError() { - return + return nil + } + + errDiags := Errors(diags) + + if len(errDiags) == 1 { + return diagnosticError(errDiags[0]) } - for _, d := range Errors(diags) { - errs = multierror.Append(errs, errors.New(DiagnosticString(d))) + var errs error + for _, d := range errDiags { + errs = multierror.Append(errs, diagnosticError(d)) } - return + return errs +} + +func diagnosticError(diag diag.Diagnostic) error { + return errors.New(DiagnosticString(diag)) } // DiagnosticString formats a Diagnostic diff --git a/internal/errs/sdkdiag/must.go b/internal/errs/sdkdiag/must.go new file mode 100644 index 00000000000..6f780d2f410 --- /dev/null +++ b/internal/errs/sdkdiag/must.go @@ -0,0 +1,18 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package sdkdiag + +import ( + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-provider-aws/internal/errs" +) + +// Must is a generic implementation of the Go Must idiom [1, 2]. It panics if +// the provided Diagnostics has errors and returns x otherwise. +// +// [1]: https://pkg.go.dev/text/template#Must +// [2]: https://pkg.go.dev/regexp#MustCompile +func Must[T any](x T, diags diag.Diagnostics) T { + return errs.Must(x, DiagnosticsError(diags)) +} diff --git a/internal/errs/unsupported.go b/internal/errs/unsupported.go index d5eab1b4066..f8f8823211d 100644 --- a/internal/errs/unsupported.go +++ b/internal/errs/unsupported.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package errs import ( diff --git a/internal/experimental/depgraph/dependency_graph.go b/internal/experimental/depgraph/dependency_graph.go index 8f52bb56f1d..b4a5bfd291b 100644 --- a/internal/experimental/depgraph/dependency_graph.go +++ b/internal/experimental/depgraph/dependency_graph.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package depgraph import ( diff --git a/internal/experimental/depgraph/dependency_graph_test.go b/internal/experimental/depgraph/dependency_graph_test.go index 148c483eb37..51543f8a9df 100644 --- a/internal/experimental/depgraph/dependency_graph_test.go +++ b/internal/experimental/depgraph/dependency_graph_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package depgraph import ( diff --git a/internal/experimental/depgraph/stack.go b/internal/experimental/depgraph/stack.go index 1f414b08f0b..05095542460 100644 --- a/internal/experimental/depgraph/stack.go +++ b/internal/experimental/depgraph/stack.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package depgraph type stack struct { diff --git a/internal/experimental/depgraph/stack_test.go b/internal/experimental/depgraph/stack_test.go index 655e217bc96..7bf653e9db4 100644 --- a/internal/experimental/depgraph/stack_test.go +++ b/internal/experimental/depgraph/stack_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package depgraph import ( diff --git a/internal/experimental/nullable/testing.go b/internal/experimental/nullable/testing.go deleted file mode 100644 index 239c4dffb02..00000000000 --- a/internal/experimental/nullable/testing.go +++ /dev/null @@ -1,45 +0,0 @@ -package nullable - -import ( - "regexp" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" -) - -type testCase struct { - val interface{} - f schema.SchemaValidateFunc - expectedErr *regexp.Regexp -} - -func runValidationTestCases(t *testing.T, cases []testCase) { - t.Helper() - - matchErr := func(errs []error, r *regexp.Regexp) bool { - // err must match one provided - for _, err := range errs { - if r.MatchString(err.Error()) { - return true - } - } - - return false - } - - for i, tc := range cases { - _, errs := tc.f(tc.val, "test_property") - - if len(errs) == 0 && tc.expectedErr == nil { - continue - } - - if len(errs) != 0 && tc.expectedErr == nil { - t.Fatalf("expected test case %d to produce no errors, got %v", i, errs) - } - - if !matchErr(errs, tc.expectedErr) { - t.Fatalf("expected test case %d to produce error matching \"%s\", got %v", i, tc.expectedErr, errs) - } - } -} diff --git a/internal/experimental/sync/sync.go b/internal/experimental/sync/sync.go index a50df8d9ece..6f4eb1e79d5 100644 --- a/internal/experimental/sync/sync.go +++ b/internal/experimental/sync/sync.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sync import ( diff --git a/internal/flex/flex.go b/internal/flex/flex.go index 8fb5da702d0..fb13d15d583 100644 --- a/internal/flex/flex.go +++ b/internal/flex/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package flex import ( @@ -286,3 +289,21 @@ func ResourceIdPartCount(id string) int { idParts := strings.Split(id, ResourceIdSeparator) return len(idParts) } + +type Set[T comparable] []T + +// Difference finds the elements in s that are not present in ns. +func (s Set[T]) Difference(ns Set[T]) Set[T] { + m := make(map[T]struct{}) + for _, v := range ns { + m[v] = struct{}{} + } + + var result []T + for _, v := range s { + if _, ok := m[v]; !ok { + result = append(result, v) + } + } + return result +} diff --git a/internal/flex/flex_test.go b/internal/flex/flex_test.go index 0ff5bf65826..8c76a17871d 100644 --- a/internal/flex/flex_test.go +++ b/internal/flex/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package flex import ( diff --git a/internal/flex/framework.go b/internal/flex/framework.go deleted file mode 100644 index 192d17e71d6..00000000000 --- a/internal/flex/framework.go +++ /dev/null @@ -1,362 +0,0 @@ -package flex - -import ( - "context" - - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/hashicorp/terraform-plugin-framework/attr" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -// Terraform Plugin Framework variants of standard flatteners and expanders. 
- -func ExpandFrameworkStringList(ctx context.Context, list types.List) []*string { - if list.IsNull() || list.IsUnknown() { - return nil - } - - var vl []*string - - if list.ElementsAs(ctx, &vl, false).HasError() { - return nil - } - - return vl -} - -func ExpandFrameworkStringValueList(ctx context.Context, list types.List) []string { - if list.IsNull() || list.IsUnknown() { - return nil - } - - var vl []string - - if list.ElementsAs(ctx, &vl, false).HasError() { - return nil - } - - return vl -} - -func ExpandFrameworkStringSet(ctx context.Context, set types.Set) []*string { - if set.IsNull() || set.IsUnknown() { - return nil - } - - var vs []*string - - if set.ElementsAs(ctx, &vs, false).HasError() { - return nil - } - - return vs -} - -func ExpandFrameworkStringValueSet(ctx context.Context, set types.Set) Set[string] { - if set.IsNull() || set.IsUnknown() { - return nil - } - - var vs []string - - if set.ElementsAs(ctx, &vs, false).HasError() { - return nil - } - - return vs -} - -func ExpandFrameworkStringValueMap(ctx context.Context, set types.Map) map[string]string { - if set.IsNull() || set.IsUnknown() { - return nil - } - - var m map[string]string - - if set.ElementsAs(ctx, &m, false).HasError() { - return nil - } - - return m -} - -// FlattenFrameworkStringList converts a slice of string pointers to a framework List value. -// -// A nil slice is converted to a null List. -// An empty slice is converted to a null List. -func FlattenFrameworkStringList(_ context.Context, vs []*string) types.List { - if len(vs) == 0 { - return types.ListNull(types.StringType) - } - - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(aws.ToString(v)) - } - - return types.ListValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringListLegacy is the Plugin Framework variant of FlattenStringList. -// A nil slice is converted to an empty (non-null) List. 
-func FlattenFrameworkStringListLegacy(_ context.Context, vs []*string) types.List { - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(aws.ToString(v)) - } - - return types.ListValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringValueList converts a slice of string values to a framework List value. -// -// A nil slice is converted to a null List. -// An empty slice is converted to a null List. -func FlattenFrameworkStringValueList(_ context.Context, vs []string) types.List { - if len(vs) == 0 { - return types.ListNull(types.StringType) - } - - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(v) - } - - return types.ListValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringValueListLegacy is the Plugin Framework variant of FlattenStringValueList. -// A nil slice is converted to an empty (non-null) List. -func FlattenFrameworkStringValueListLegacy(_ context.Context, vs []string) types.List { - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(v) - } - - return types.ListValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringSet converts a slice of string pointers to a framework Set value. -// -// A nil slice is converted to a null Set. -// An empty slice is converted to a null Set. -func FlattenFrameworkStringSet(_ context.Context, vs []*string) types.Set { - if len(vs) == 0 { - return types.SetNull(types.StringType) - } - - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(aws.ToString(v)) - } - - return types.SetValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringSetLegacy converts a slice of string pointers to a framework Set value. -// -// A nil slice is converted to an empty (non-null) Set. 
-func FlattenFrameworkStringSetLegacy(_ context.Context, vs []*string) types.Set { - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(aws.ToString(v)) - } - - return types.SetValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringValueSet converts a slice of string values to a framework Set value. -// -// A nil slice is converted to a null Set. -// An empty slice is converted to a null Set. -func FlattenFrameworkStringValueSet(_ context.Context, vs []string) types.Set { - if len(vs) == 0 { - return types.SetNull(types.StringType) - } - - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(v) - } - - return types.SetValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringValueSetLegacy is the Plugin Framework variant of FlattenStringValueSet. -// A nil slice is converted to an empty (non-null) Set. -func FlattenFrameworkStringValueSetLegacy(_ context.Context, vs []string) types.Set { - elems := make([]attr.Value, len(vs)) - - for i, v := range vs { - elems[i] = types.StringValue(v) - } - - return types.SetValueMust(types.StringType, elems) -} - -// FlattenFrameworkStringValueMapLegacy has no Plugin SDK equivalent as schema.ResourceData.Set can be passed string value maps directly. -// A nil map is converted to an empty (non-null) Map. -func FlattenFrameworkStringValueMapLegacy(_ context.Context, m map[string]string) types.Map { - elems := make(map[string]attr.Value, len(m)) - - for k, v := range m { - elems[k] = types.StringValue(v) - } - - return types.MapValueMust(types.StringType, elems) -} - -// BoolFromFramework converts a Framework Bool value to a bool pointer. -// A null Bool is converted to a nil bool pointer. -func BoolFromFramework(_ context.Context, v types.Bool) *bool { - if v.IsNull() || v.IsUnknown() { - return nil - } - - return aws.Bool(v.ValueBool()) -} - -// Int64FromFramework converts a Framework Int64 value to an int64 pointer. 
-// A null Int64 is converted to a nil int64 pointer. -func Int64FromFramework(_ context.Context, v types.Int64) *int64 { - if v.IsNull() || v.IsUnknown() { - return nil - } - - return aws.Int64(v.ValueInt64()) -} - -// StringFromFramework converts a Framework String value to a string pointer. -// A null String is converted to a nil string pointer. -func StringFromFramework(_ context.Context, v types.String) *string { - if v.IsNull() || v.IsUnknown() { - return nil - } - - return aws.String(v.ValueString()) -} - -// StringFromFramework converts a single Framework String value to a string pointer slice. -// A null String is converted to a nil slice. -func StringSliceFromFramework(_ context.Context, v types.String) []*string { - if v.IsNull() || v.IsUnknown() { - return nil - } - - return aws.StringSlice([]string{v.ValueString()}) -} - -// BoolToFramework converts a bool pointer to a Framework Bool value. -// A nil bool pointer is converted to a null Bool. -func BoolToFramework(_ context.Context, v *bool) types.Bool { - if v == nil { - return types.BoolNull() - } - - return types.BoolValue(aws.ToBool(v)) -} - -// BoolToFrameworkLegacy converts a bool pointer to a Framework Bool value. -// A nil bool pointer is converted to a false Bool. -func BoolToFrameworkLegacy(_ context.Context, v *bool) types.Bool { - return types.BoolValue(aws.ToBool(v)) -} - -// StringValueToFramework converts a string value to a Framework String value. -// An empty string is converted to a null String. -func StringValueToFramework[T ~string](_ context.Context, v T) types.String { - if v == "" { - return types.StringNull() - } - return types.StringValue(string(v)) -} - -// StringValueToFrameworkLegacy converts a string value to a Framework String value. -// An empty string is left as an empty String. 
-func StringValueToFrameworkLegacy[T ~string](_ context.Context, v T) types.String { - return types.StringValue(string(v)) -} - -// Int64ToFramework converts an int64 pointer to a Framework Int64 value. -// A nil int64 pointer is converted to a null Int64. -func Int64ToFramework(_ context.Context, v *int64) types.Int64 { - if v == nil { - return types.Int64Null() - } - - return types.Int64Value(aws.ToInt64(v)) -} - -// Int64ToFrameworkLegacy converts an int64 pointer to a Framework Int64 value. -// A nil int64 pointer is converted to a zero Int64. -func Int64ToFrameworkLegacy(_ context.Context, v *int64) types.Int64 { - return types.Int64Value(aws.ToInt64(v)) -} - -// StringToFramework converts a string pointer to a Framework String value. -// A nil string pointer is converted to a null String. -func StringToFramework(_ context.Context, v *string) types.String { - if v == nil { - return types.StringNull() - } - - return types.StringValue(aws.ToString(v)) -} - -// StringToFrameworkLegacy converts a string pointer to a Framework String value. -// A nil string pointer is converted to an empty String. -func StringToFrameworkLegacy(_ context.Context, v *string) types.String { - return types.StringValue(aws.ToString(v)) -} - -// StringToFrameworkWithTransform converts a string pointer to a Framework String value. -// A nil string pointer is converted to a null String. -// A non-nil string pointer has its value transformed by `f`. -func StringToFrameworkWithTransform(_ context.Context, v *string, f func(string) string) types.String { - if v == nil { - return types.StringNull() - } - - return types.StringValue(f(aws.ToString(v))) -} - -// Float64ToFramework converts a float64 pointer to a Framework Float64 value. -// A nil float64 pointer is converted to a null Float64. 
-func Float64ToFramework(_ context.Context, v *float64) types.Float64 { - if v == nil { - return types.Float64Null() - } - - return types.Float64Value(aws.ToFloat64(v)) -} - -// Float64ToFrameworkLegacy converts a float64 pointer to a Framework Float64 value. -// A nil float64 pointer is converted to a zero float64. -func Float64ToFrameworkLegacy(_ context.Context, v *float64) types.Float64 { - return types.Float64Value(aws.ToFloat64(v)) -} - -type Set[T comparable] []T - -// Difference find the elements in two sets that are not similar. -func (s Set[T]) Difference(ns Set[T]) Set[T] { - m := make(map[T]struct{}) - for _, v := range ns { - m[v] = struct{}{} - } - - var result []T - for _, v := range s { - if _, ok := m[v]; !ok { - result = append(result, v) - } - } - return result -} diff --git a/internal/flex/framework_test.go b/internal/flex/framework_test.go deleted file mode 100644 index 2ee82184b54..00000000000 --- a/internal/flex/framework_test.go +++ /dev/null @@ -1,1087 +0,0 @@ -package flex - -import ( - "context" - "strings" - "testing" - - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/attr" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -func TestExpandFrameworkStringList(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.List - expected []*string - } - tests := map[string]testCase{ - "null": { - input: types.ListNull(types.StringType), - expected: nil, - }, - "unknown": { - input: types.ListUnknown(types.StringType), - expected: nil, - }, - "two elements": { - input: types.ListValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - expected: []*string{aws.String("GET"), aws.String("HEAD")}, - }, - "zero elements": { - input: types.ListValueMust(types.StringType, []attr.Value{}), - expected: []*string{}, - }, - "invalid element type": { - input: types.ListValueMust(types.Int64Type, []attr.Value{ - 
types.Int64Value(42), - }), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := ExpandFrameworkStringList(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestExpandFrameworkStringValueList(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.List - expected []string - } - tests := map[string]testCase{ - "null": { - input: types.ListNull(types.StringType), - expected: nil, - }, - "unknown": { - input: types.ListUnknown(types.StringType), - expected: nil, - }, - "two elements": { - input: types.ListValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - expected: []string{"GET", "HEAD"}, - }, - "zero elements": { - input: types.ListValueMust(types.StringType, []attr.Value{}), - expected: []string{}, - }, - "invalid element type": { - input: types.ListValueMust(types.Int64Type, []attr.Value{ - types.Int64Value(42), - }), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := ExpandFrameworkStringValueList(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestExpandFrameworkStringSet(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.Set - expected []*string - } - tests := map[string]testCase{ - "null": { - input: types.SetNull(types.StringType), - expected: nil, - }, - "unknown": { - input: types.SetUnknown(types.StringType), - expected: nil, - }, - "two elements": { - input: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - expected: []*string{aws.String("GET"), aws.String("HEAD")}, 
- }, - "zero elements": { - input: types.SetValueMust(types.StringType, []attr.Value{}), - expected: []*string{}, - }, - "invalid element type": { - input: types.SetValueMust(types.Int64Type, []attr.Value{ - types.Int64Value(42), - }), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := ExpandFrameworkStringSet(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestExpandFrameworkStringValueSet(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.Set - expected Set[string] - } - tests := map[string]testCase{ - "null": { - input: types.SetNull(types.StringType), - expected: nil, - }, - "unknown": { - input: types.SetUnknown(types.StringType), - expected: nil, - }, - "two elements": { - input: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - expected: []string{"GET", "HEAD"}, - }, - "zero elements": { - input: types.SetValueMust(types.StringType, []attr.Value{}), - expected: []string{}, - }, - "invalid element type": { - input: types.SetValueMust(types.Int64Type, []attr.Value{ - types.Int64Value(42), - }), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := ExpandFrameworkStringValueSet(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestExpandFrameworkStringValueMap(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.Map - expected map[string]string - } - tests := map[string]testCase{ - "null": { - input: types.MapNull(types.StringType), - expected: nil, - }, - "unknown": { - input: types.MapUnknown(types.StringType), - expected: 
nil, - }, - "two elements": { - input: types.MapValueMust(types.StringType, map[string]attr.Value{ - "one": types.StringValue("GET"), - "two": types.StringValue("HEAD"), - }), - expected: map[string]string{ - "one": "GET", - "two": "HEAD", - }, - }, - "zero elements": { - input: types.MapValueMust(types.StringType, map[string]attr.Value{}), - expected: map[string]string{}, - }, - "invalid element type": { - input: types.MapValueMust(types.BoolType, map[string]attr.Value{ - "one": types.BoolValue(true), - }), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := ExpandFrameworkStringValueMap(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringList(t *testing.T) { - t.Parallel() - - type testCase struct { - input []*string - expected types.List - } - tests := map[string]testCase{ - "two elements": { - input: []*string{aws.String("GET"), aws.String("HEAD")}, - expected: types.ListValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: []*string{}, - expected: types.ListNull(types.StringType), - }, - "nil array": { - input: nil, - expected: types.ListNull(types.StringType), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := FlattenFrameworkStringList(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringListLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input []*string - expected types.List - } - tests := map[string]testCase{ - "two elements": { - input: []*string{aws.String("GET"), aws.String("HEAD")}, - 
expected: types.ListValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: []*string{}, - expected: types.ListValueMust(types.StringType, []attr.Value{}), - }, - "nil array": { - input: nil, - expected: types.ListValueMust(types.StringType, []attr.Value{}), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := FlattenFrameworkStringListLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringValueList(t *testing.T) { - t.Parallel() - - type testCase struct { - input []string - expected types.List - } - tests := map[string]testCase{ - "two elements": { - input: []string{"GET", "HEAD"}, - expected: types.ListValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: []string{}, - expected: types.ListNull(types.StringType), - }, - "nil array": { - input: nil, - expected: types.ListNull(types.StringType), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := FlattenFrameworkStringValueList(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringValueListLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input []string - expected types.List - } - tests := map[string]testCase{ - "two elements": { - input: []string{"GET", "HEAD"}, - expected: types.ListValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: []string{}, - expected: types.ListValueMust(types.StringType, 
[]attr.Value{}), - }, - "nil array": { - input: nil, - expected: types.ListValueMust(types.StringType, []attr.Value{}), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := FlattenFrameworkStringValueListLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringValueSet(t *testing.T) { - t.Parallel() - - type testCase struct { - input []string - expected types.Set - } - tests := map[string]testCase{ - "two elements": { - input: []string{"GET", "HEAD"}, - expected: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: []string{}, - expected: types.SetNull(types.StringType), - }, - "nil array": { - input: nil, - expected: types.SetNull(types.StringType), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := FlattenFrameworkStringValueSet(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringValueSetLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input []string - expected types.Set - } - tests := map[string]testCase{ - "two elements": { - input: []string{"GET", "HEAD"}, - expected: types.SetValueMust(types.StringType, []attr.Value{ - types.StringValue("GET"), - types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: []string{}, - expected: types.SetValueMust(types.StringType, []attr.Value{}), - }, - "nil array": { - input: nil, - expected: types.SetValueMust(types.StringType, []attr.Value{}), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - 
t.Parallel() - - got := FlattenFrameworkStringValueSetLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFlattenFrameworkStringValueMapLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input map[string]string - expected types.Map - } - tests := map[string]testCase{ - "two elements": { - input: map[string]string{ - "one": "GET", - "two": "HEAD", - }, - expected: types.MapValueMust(types.StringType, map[string]attr.Value{ - "one": types.StringValue("GET"), - "two": types.StringValue("HEAD"), - }), - }, - "zero elements": { - input: map[string]string{}, - expected: types.MapValueMust(types.StringType, map[string]attr.Value{}), - }, - "nil map": { - input: nil, - expected: types.MapValueMust(types.StringType, map[string]attr.Value{}), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := FlattenFrameworkStringValueMapLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestBoolFromFramework(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.Bool - expected *bool - } - tests := map[string]testCase{ - "valid bool": { - input: types.BoolValue(true), - expected: aws.Bool(true), - }, - "null bool": { - input: types.BoolNull(), - expected: nil, - }, - "unknown bool": { - input: types.BoolUnknown(), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := BoolFromFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestInt64FromFramework(t *testing.T) { - t.Parallel() - - type testCase 
struct { - input types.Int64 - expected *int64 - } - tests := map[string]testCase{ - "valid int64": { - input: types.Int64Value(42), - expected: aws.Int64(42), - }, - "zero int64": { - input: types.Int64Value(0), - expected: aws.Int64(0), - }, - "null int64": { - input: types.Int64Null(), - expected: nil, - }, - "unknown int64": { - input: types.Int64Unknown(), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := Int64FromFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestStringFromFramework(t *testing.T) { - t.Parallel() - - type testCase struct { - input types.String - expected *string - } - tests := map[string]testCase{ - "valid string": { - input: types.StringValue("TEST"), - expected: aws.String("TEST"), - }, - "empty string": { - input: types.StringValue(""), - expected: aws.String(""), - }, - "null string": { - input: types.StringNull(), - expected: nil, - }, - "unknown string": { - input: types.StringUnknown(), - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := StringFromFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestBoolToFramework(t *testing.T) { - t.Parallel() - - type testCase struct { - input *bool - expected types.Bool - } - tests := map[string]testCase{ - "valid bool": { - input: aws.Bool(true), - expected: types.BoolValue(true), - }, - "nil bool": { - input: nil, - expected: types.BoolNull(), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := BoolToFramework(context.Background(), test.input) - - if diff := 
cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestBoolToFrameworkLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input *bool - expected types.Bool - } - tests := map[string]testCase{ - "valid bool": { - input: aws.Bool(true), - expected: types.BoolValue(true), - }, - "nil bool": { - input: nil, - expected: types.BoolValue(false), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := BoolToFrameworkLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestInt64ToFramework(t *testing.T) { - t.Parallel() - - type testCase struct { - input *int64 - expected types.Int64 - } - tests := map[string]testCase{ - "valid int64": { - input: aws.Int64(42), - expected: types.Int64Value(42), - }, - "zero int64": { - input: aws.Int64(0), - expected: types.Int64Value(0), - }, - "nil int64": { - input: nil, - expected: types.Int64Null(), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := Int64ToFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestInt64ToFrameworkLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input *int64 - expected types.Int64 - } - tests := map[string]testCase{ - "valid int64": { - input: aws.Int64(42), - expected: types.Int64Value(42), - }, - "zero int64": { - input: aws.Int64(0), - expected: types.Int64Value(0), - }, - "nil int64": { - input: nil, - expected: types.Int64Value(0), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := 
Int64ToFrameworkLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestStringToFramework(t *testing.T) { - t.Parallel() - - type testCase struct { - input *string - expected types.String - } - tests := map[string]testCase{ - "valid string": { - input: aws.String("TEST"), - expected: types.StringValue("TEST"), - }, - "empty string": { - input: aws.String(""), - expected: types.StringValue(""), - }, - "nil string": { - input: nil, - expected: types.StringNull(), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := StringToFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestStringToFrameworkLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input *string - expected types.String - } - tests := map[string]testCase{ - "valid string": { - input: aws.String("TEST"), - expected: types.StringValue("TEST"), - }, - "empty string": { - input: aws.String(""), - expected: types.StringValue(""), - }, - "nil string": { - input: nil, - expected: types.StringValue(""), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := StringToFrameworkLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestStringToFrameworkWithTransform(t *testing.T) { - t.Parallel() - - type testCase struct { - input *string - expected types.String - } - tests := map[string]testCase{ - "valid string": { - input: aws.String("TEST"), - expected: types.StringValue("test"), - }, - "empty string": { - input: aws.String(""), - expected: types.StringValue(""), - }, 
- "nil string": { - input: nil, - expected: types.StringNull(), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := StringToFrameworkWithTransform(context.Background(), test.input, strings.ToLower) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestStringValueToFramework(t *testing.T) { - t.Parallel() - - // AWS enums use custom types with an underlying string type - type custom string - - type testCase struct { - input custom - expected types.String - } - tests := map[string]testCase{ - "valid": { - input: "TEST", - expected: types.StringValue("TEST"), - }, - "empty": { - input: "", - expected: types.StringNull(), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := StringValueToFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestStringValueToFrameworkLegacy(t *testing.T) { - t.Parallel() - - // AWS enums use custom types with an underlying string type - type custom string - - type testCase struct { - input custom - expected types.String - } - tests := map[string]testCase{ - "valid": { - input: "TEST", - expected: types.StringValue("TEST"), - }, - "empty": { - input: "", - expected: types.StringValue(""), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := StringValueToFrameworkLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFloat64ToFramework(t *testing.T) { - t.Parallel() - - type testCase struct { - input *float64 - expected types.Float64 - } - tests := 
map[string]testCase{ - "valid float64": { - input: aws.Float64(42.1), - expected: types.Float64Value(42.1), - }, - "zero float64": { - input: aws.Float64(0), - expected: types.Float64Value(0), - }, - "nil float64": { - input: nil, - expected: types.Float64Null(), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := Float64ToFramework(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestFloat64ToFrameworkLegacy(t *testing.T) { - t.Parallel() - - type testCase struct { - input *float64 - expected types.Float64 - } - tests := map[string]testCase{ - "valid int64": { - input: aws.Float64(42.1), - expected: types.Float64Value(42.1), - }, - "zero int64": { - input: aws.Float64(0), - expected: types.Float64Value(0), - }, - "nil int64": { - input: nil, - expected: types.Float64Value(0), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := Float64ToFrameworkLegacy(context.Background(), test.input) - - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} - -func TestSet_Difference_strings(t *testing.T) { - t.Parallel() - - type testCase struct { - original Set[string] - new Set[string] - expected Set[string] - } - tests := map[string]testCase{ - "nil": { - original: nil, - new: nil, - expected: nil, - }, - "equal": { - original: Set[string]{"one"}, - new: Set[string]{"one"}, - expected: nil, - }, - "difference": { - original: Set[string]{"one", "two", "four"}, - new: Set[string]{"one", "two", "three"}, - expected: Set[string]{"four"}, - }, - "difference_remove": { - original: Set[string]{"one", "two"}, - new: Set[string]{"one"}, - expected: Set[string]{"two"}, - }, - "difference_add": { - original: Set[string]{"one"}, - new: 
Set[string]{"one", "two"}, - expected: nil, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - got := test.original.Difference(test.new) - if diff := cmp.Diff(got, test.expected); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} diff --git a/internal/framework/base.go b/internal/framework/base.go index 908e6bfcd74..6b899364aaa 100644 --- a/internal/framework/base.go +++ b/internal/framework/base.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package framework import ( @@ -11,8 +14,9 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-log/tflog" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/names" ) type withMeta struct { @@ -51,26 +55,6 @@ func (r *ResourceWithConfigure) Configure(_ context.Context, request resource.Co } } -// ExpandTags returns the API tags for the specified "tags" value. -func (r *ResourceWithConfigure) ExpandTags(ctx context.Context, tags types.Map) tftags.KeyValueTags { - return r.Meta().DefaultTagsConfig.MergeTags(tftags.New(ctx, tags)) -} - -// FlattenTags returns the "tags" value from the specified API tags. -func (r *ResourceWithConfigure) FlattenTags(ctx context.Context, apiTags tftags.KeyValueTags) types.Map { - // AWS APIs often return empty lists of tags when none have been configured. 
- if v := apiTags.IgnoreAWS().IgnoreConfig(r.Meta().IgnoreTagsConfig).RemoveDefaultConfig(r.Meta().DefaultTagsConfig).Map(); len(v) == 0 { - return tftags.Null - } else { - return flex.FlattenFrameworkStringValueMapLegacy(ctx, v) - } -} - -// FlattenTagsAll returns the "tags_all" value from the specified API tags. -func (r *ResourceWithConfigure) FlattenTagsAll(ctx context.Context, apiTags tftags.KeyValueTags) types.Map { - return flex.FlattenFrameworkStringValueMapLegacy(ctx, apiTags.IgnoreAWS().IgnoreConfig(r.Meta().IgnoreTagsConfig).Map()) -} - // SetTagsAll calculates the new value for the `tags_all` attribute. func (r *ResourceWithConfigure) SetTagsAll(ctx context.Context, request resource.ModifyPlanRequest, response *resource.ModifyPlanResponse) { // If the entire plan is null, the resource is planned for destruction. @@ -83,7 +67,7 @@ func (r *ResourceWithConfigure) SetTagsAll(ctx context.Context, request resource var planTags types.Map - response.Diagnostics.Append(request.Plan.GetAttribute(ctx, path.Root("tags"), &planTags)...) + response.Diagnostics.Append(request.Plan.GetAttribute(ctx, path.Root(names.AttrTags), &planTags)...) if response.Diagnostics.HasError() { return @@ -92,18 +76,25 @@ func (r *ResourceWithConfigure) SetTagsAll(ctx context.Context, request resource if !planTags.IsUnknown() { if !mapHasUnknownElements(planTags) { resourceTags := tftags.New(ctx, planTags) - allTags := defaultTagsConfig.MergeTags(resourceTags).IgnoreConfig(ignoreTagsConfig) - response.Diagnostics.Append(response.Plan.SetAttribute(ctx, path.Root("tags_all"), flex.FlattenFrameworkStringValueMapLegacy(ctx, allTags.Map()))...) + response.Diagnostics.Append(response.Plan.SetAttribute(ctx, path.Root(names.AttrTagsAll), flex.FlattenFrameworkStringValueMapLegacy(ctx, allTags.Map()))...) } else { - response.Diagnostics.Append(response.Plan.SetAttribute(ctx, path.Root("tags_all"), tftags.Unknown)...) 
+ response.Diagnostics.Append(response.Plan.SetAttribute(ctx, path.Root(names.AttrTagsAll), tftags.Unknown)...) } } else { - response.Diagnostics.Append(response.Plan.SetAttribute(ctx, path.Root("tags_all"), tftags.Unknown)...) + response.Diagnostics.Append(response.Plan.SetAttribute(ctx, path.Root(names.AttrTagsAll), tftags.Unknown)...) } } +// WithImportByID is intended to be embedded in resources which import state via the "id" attribute. +// See https://developer.hashicorp.com/terraform/plugin/framework/resources/import. +type WithImportByID struct{} + +func (w *WithImportByID) ImportState(ctx context.Context, request resource.ImportStateRequest, response *resource.ImportStateResponse) { + resource.ImportStatePassthroughID(ctx, path.Root(names.AttrID), request, response) +} + // DataSourceWithConfigure is a structure to be embedded within a DataSource that implements the DataSourceWithConfigure interface. type DataSourceWithConfigure struct { withMeta @@ -119,7 +110,7 @@ func (d *DataSourceWithConfigure) Configure(_ context.Context, request datasourc } } -// WithTimeouts is intended to be embedded in resource which use the special "timeouts" nested block. +// WithTimeouts is intended to be embedded in resources which use the special "timeouts" nested block. // See https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts. 
type WithTimeouts struct { defaultCreateTimeout, defaultReadTimeout, defaultUpdateTimeout, defaultDeleteTimeout time.Duration diff --git a/internal/framework/boolplanmodifier/default_value.go b/internal/framework/boolplanmodifier/default_value.go deleted file mode 100644 index 3f98c62e60d..00000000000 --- a/internal/framework/boolplanmodifier/default_value.go +++ /dev/null @@ -1,42 +0,0 @@ -package boolplanmodifier - -import ( - "context" - "fmt" - - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -type defaultValue struct { - val bool -} - -// DefaultValue return a bool plan modifier that sets the specified value if the planned value is Null. -func DefaultValue(b bool) planmodifier.Bool { - return defaultValue{ - val: b, - } -} - -func (m defaultValue) Description(context.Context) string { - return fmt.Sprintf("If value is not configured, defaults to %t", m.val) -} - -func (m defaultValue) MarkdownDescription(ctx context.Context) string { - return m.Description(ctx) -} - -func (m defaultValue) PlanModifyBool(ctx context.Context, req planmodifier.BoolRequest, resp *planmodifier.BoolResponse) { - if !req.ConfigValue.IsNull() { - return - } - - // If the attribute plan is "known" and "not null", then a previous plan modifier in the sequence - // has already been applied, and we don't want to interfere. 
- if !req.PlanValue.IsUnknown() && !req.PlanValue.IsNull() { - return - } - - resp.PlanValue = types.BoolValue(m.val) -} diff --git a/internal/framework/boolplanmodifier/default_value_test.go b/internal/framework/boolplanmodifier/default_value_test.go deleted file mode 100644 index 06554f68e0f..00000000000 --- a/internal/framework/boolplanmodifier/default_value_test.go +++ /dev/null @@ -1,67 +0,0 @@ -package boolplanmodifier - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -func TestDefaultValue(t *testing.T) { - t.Parallel() - - type testCase struct { - plannedValue types.Bool - currentValue types.Bool - defaultValue bool - expectedValue types.Bool - expectError bool - } - tests := map[string]testCase{ - "default bool": { - plannedValue: types.BoolNull(), - currentValue: types.BoolValue(true), - defaultValue: true, - expectedValue: types.BoolValue(true), - }, - "default bool on create": { - plannedValue: types.BoolNull(), - currentValue: types.BoolNull(), - defaultValue: true, - expectedValue: types.BoolValue(true), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - ctx := context.Background() - request := planmodifier.BoolRequest{ - Path: path.Root("test"), - PlanValue: test.plannedValue, - StateValue: test.currentValue, - } - response := planmodifier.BoolResponse{ - PlanValue: request.PlanValue, - } - DefaultValue(test.defaultValue).PlanModifyBool(ctx, request, &response) - - if !response.Diagnostics.HasError() && test.expectError { - t.Fatal("expected error, got no error") - } - - if response.Diagnostics.HasError() && !test.expectError { - t.Fatalf("got unexpected error: %s", response.Diagnostics) - } - - if diff := cmp.Diff(response.PlanValue, test.expectedValue); 
diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} diff --git a/internal/framework/attrtypes.go b/internal/framework/flex/attrtypes.go similarity index 85% rename from internal/framework/attrtypes.go rename to internal/framework/flex/attrtypes.go index 6c93eedb479..f028db4929f 100644 --- a/internal/framework/attrtypes.go +++ b/internal/framework/flex/attrtypes.go @@ -1,4 +1,7 @@ -package framework +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex import ( "context" @@ -6,6 +9,7 @@ import ( "reflect" "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-provider-aws/internal/errs" ) // AttributeTypes returns a map of attribute types for the specified type T. @@ -42,9 +46,5 @@ func AttributeTypes[T any](ctx context.Context) (map[string]attr.Type, error) { } func AttributeTypesMust[T any](ctx context.Context) map[string]attr.Type { - types, err := AttributeTypes[T](ctx) - if err != nil { - panic(fmt.Sprintf("AttributeTypesMust[%T] received error: %s", *new(T), err)) - } - return types + return errs.Must(AttributeTypes[T](ctx)) } diff --git a/internal/framework/attrtypes_test.go b/internal/framework/flex/attrtypes_test.go similarity index 82% rename from internal/framework/attrtypes_test.go rename to internal/framework/flex/attrtypes_test.go index 6244bf769fd..763d10b72cb 100644 --- a/internal/framework/attrtypes_test.go +++ b/internal/framework/flex/attrtypes_test.go @@ -1,4 +1,7 @@ -package framework_test +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex_test import ( "context" @@ -7,7 +10,7 @@ import ( "github.com/google/go-cmp/cmp" "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" ) func TestAttributeTypes(t *testing.T) { @@ -21,25 +24,20 @@ func TestAttributeTypes(t *testing.T) { } ctx := context.Background() - got, err := framework.AttributeTypes[struct1](ctx) - - if err != nil { - t.Fatalf("unexpected error") - } - + got := flex.AttributeTypesMust[struct1](ctx) wanted := map[string]attr.Type{} if diff := cmp.Diff(got, wanted); diff != "" { t.Errorf("unexpected diff (+wanted, -got): %s", diff) } - _, err = framework.AttributeTypes[int](ctx) + _, err := flex.AttributeTypes[int](ctx) if err == nil { t.Fatalf("expected error") } - got, err = framework.AttributeTypes[struct2](ctx) + got, err = flex.AttributeTypes[struct2](ctx) if err != nil { t.Fatalf("unexpected error") diff --git a/internal/framework/flex/bool.go b/internal/framework/flex/bool.go new file mode 100644 index 00000000000..fbdc9bf25ec --- /dev/null +++ b/internal/framework/flex/bool.go @@ -0,0 +1,37 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +// BoolFromFramework converts a Framework Bool value to a bool pointer. +// A null Bool is converted to a nil bool pointer. +func BoolFromFramework(_ context.Context, v types.Bool) *bool { + if v.IsNull() || v.IsUnknown() { + return nil + } + + return aws.Bool(v.ValueBool()) +} + +// BoolToFramework converts a bool pointer to a Framework Bool value. +// A nil bool pointer is converted to a null Bool. 
+func BoolToFramework(_ context.Context, v *bool) types.Bool { + if v == nil { + return types.BoolNull() + } + + return types.BoolValue(aws.ToBool(v)) +} + +// BoolToFrameworkLegacy converts a bool pointer to a Framework Bool value. +// A nil bool pointer is converted to a false Bool. +func BoolToFrameworkLegacy(_ context.Context, v *bool) types.Bool { + return types.BoolValue(aws.ToBool(v)) +} diff --git a/internal/framework/flex/bool_test.go b/internal/framework/flex/bool_test.go new file mode 100644 index 00000000000..a0ca89faf7f --- /dev/null +++ b/internal/framework/flex/bool_test.go @@ -0,0 +1,114 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestBoolFromFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.Bool + expected *bool + } + tests := map[string]testCase{ + "valid bool": { + input: types.BoolValue(true), + expected: aws.Bool(true), + }, + "null bool": { + input: types.BoolNull(), + expected: nil, + }, + "unknown bool": { + input: types.BoolUnknown(), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.BoolFromFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestBoolToFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input *bool + expected types.Bool + } + tests := map[string]testCase{ + "valid bool": { + input: aws.Bool(true), + expected: types.BoolValue(true), + }, + "nil bool": { + input: nil, + expected: types.BoolNull(), + }, + } + + for name, test := range tests { + name, 
test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.BoolToFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestBoolToFrameworkLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input *bool + expected types.Bool + } + tests := map[string]testCase{ + "valid bool": { + input: aws.Bool(true), + expected: types.BoolValue(true), + }, + "nil bool": { + input: nil, + expected: types.BoolValue(false), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.BoolToFrameworkLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/expand.go b/internal/framework/flex/expand.go new file mode 100644 index 00000000000..69fe2276e82 --- /dev/null +++ b/internal/framework/flex/expand.go @@ -0,0 +1,196 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + "fmt" + "reflect" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +// Expand "expands" a resource's "business logic" data structure, +// implemented using Terraform Plugin Framework data types, into +// an AWS SDK for Go v2 API data structure. +// The resource's data structure is walked and exported fields that +// have a corresponding field in the API data structure (and a suitable +// target data type) are copied. 
+func Expand(ctx context.Context, tfObject, apiObject any) error { + if err := walkStructFields(ctx, tfObject, apiObject, expandVisitor{}); err != nil { + return fmt.Errorf("Expand[%T, %T]: %w", tfObject, apiObject, err) + } + + return nil +} + +// walkStructFields traverses `from` calling `visitor` for each exported field. +func walkStructFields(ctx context.Context, from any, to any, visitor fieldVisitor) error { + valFrom, valTo := reflect.ValueOf(from), reflect.ValueOf(to) + + if kind := valFrom.Kind(); kind == reflect.Ptr { + valFrom = valFrom.Elem() + } + if kind := valTo.Kind(); kind != reflect.Ptr { + return fmt.Errorf("target (%T): %s, want pointer", to, kind) + } + valTo = valTo.Elem() + + typFrom, typTo := valFrom.Type(), valTo.Type() + + if typFrom.Kind() != reflect.Struct { + return fmt.Errorf("source: %s, want struct", typFrom) + } + if typTo.Kind() != reflect.Struct { + return fmt.Errorf("target: %s, want struct", typTo) + } + + for i := 0; i < typFrom.NumField(); i++ { + field := typFrom.Field(i) + if field.PkgPath != "" { + continue // Skip unexported fields. + } + fieldName := field.Name + if fieldName == "Tags" { + continue // Resource tags are handled separately. + } + toFieldVal := valTo.FieldByName(fieldName) + if !toFieldVal.IsValid() { + continue // Corresponding field not found in to. + } + if !toFieldVal.CanSet() { + continue // Corresponding field value can't be changed. 
+ } + if err := visitor.visit(ctx, fieldName, valFrom.Field(i), toFieldVal); err != nil { + return fmt.Errorf("visit (%s): %w", fieldName, err) + } + } + + return nil +} + +type fieldVisitor interface { + visit(context.Context, string, reflect.Value, reflect.Value) error +} + +type expandVisitor struct{} + +func (v expandVisitor) visit(ctx context.Context, fieldName string, valFrom, valTo reflect.Value) error { + vFrom, ok := valFrom.Interface().(attr.Value) + if !ok { + return fmt.Errorf("does not implement attr.Value: %s", valFrom.Kind()) + } + + // No need to set the target value if there's no source value. + if vFrom.IsNull() || vFrom.IsUnknown() { + return nil + } + + tFrom, kTo := vFrom.Type(ctx), valTo.Kind() + switch { + // Simple types. + case tFrom.Equal(types.BoolType): + vFrom := vFrom.(types.Bool).ValueBool() + switch kTo { + case reflect.Bool: + valTo.SetBool(vFrom) + return nil + case reflect.Ptr: + switch valTo.Type().Elem().Kind() { + case reflect.Bool: + valTo.Set(reflect.ValueOf(aws.Bool(vFrom))) + return nil + } + } + + case tFrom.Equal(types.Float64Type): + vFrom := vFrom.(types.Float64).ValueFloat64() + switch kTo { + case reflect.Float32, reflect.Float64: + valTo.SetFloat(vFrom) + return nil + case reflect.Ptr: + switch valTo.Type().Elem().Kind() { + case reflect.Float32: + valTo.Set(reflect.ValueOf(aws.Float32(float32(vFrom)))) + return nil + case reflect.Float64: + valTo.Set(reflect.ValueOf(aws.Float64(vFrom))) + return nil + } + } + + case tFrom.Equal(types.Int64Type): + vFrom := vFrom.(types.Int64).ValueInt64() + switch kTo { + case reflect.Int32, reflect.Int64: + valTo.SetInt(vFrom) + return nil + case reflect.Ptr: + switch valTo.Type().Elem().Kind() { + case reflect.Int32: + valTo.Set(reflect.ValueOf(aws.Int32(int32(vFrom)))) + return nil + case reflect.Int64: + valTo.Set(reflect.ValueOf(aws.Int64(vFrom))) + return nil + } + } + + case tFrom.Equal(types.StringType): + vFrom := vFrom.(types.String).ValueString() + switch kTo { + case 
reflect.String: + valTo.SetString(vFrom) + return nil + case reflect.Ptr: + switch valTo.Type().Elem().Kind() { + case reflect.String: + valTo.Set(reflect.ValueOf(aws.String(vFrom))) + return nil + } + } + + // Aggregate types. + case tFrom.Equal(types.ListType{ElemType: types.StringType}): + vFrom := vFrom.(types.List) + switch kTo { + case reflect.Slice: + switch tSliceElem := valTo.Type().Elem(); tSliceElem.Kind() { + case reflect.String: + valTo.Set(reflect.ValueOf(ExpandFrameworkStringValueList(ctx, vFrom))) + return nil + + case reflect.Ptr: + switch tSliceElem.Elem().Kind() { + case reflect.String: + valTo.Set(reflect.ValueOf(ExpandFrameworkStringList(ctx, vFrom))) + return nil + } + } + } + + case tFrom.Equal(types.SetType{ElemType: types.StringType}): + vFrom := vFrom.(types.Set) + switch kTo { + case reflect.Slice: + switch tSliceElem := valTo.Type().Elem(); tSliceElem.Kind() { + case reflect.String: + valTo.Set(reflect.ValueOf(ExpandFrameworkStringValueSet(ctx, vFrom))) + return nil + + case reflect.Ptr: + switch tSliceElem.Elem().Kind() { + case reflect.String: + valTo.Set(reflect.ValueOf(ExpandFrameworkStringSet(ctx, vFrom))) + return nil + } + } + } + } + + return fmt.Errorf("incompatible (%s): %s", tFrom, kTo) +} diff --git a/internal/framework/flex/expand_test.go b/internal/framework/flex/expand_test.go new file mode 100644 index 00000000000..0cff7fa1b27 --- /dev/null +++ b/internal/framework/flex/expand_test.go @@ -0,0 +1,293 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +type ATestExpand struct{} + +type BTestExpand struct { + Name types.String +} + +type CTestExpand struct { + Name string +} + +type DTestExpand struct { + Name *string +} + +type ETestExpand struct { + Name types.Int64 +} + +type FTestExpand struct { + Name int64 +} + +type GTestExpand struct { + Name *int64 +} + +type HTestExpand struct { + Name int32 +} + +type ITestExpand struct { + Name *int32 +} + +type JTestExpand struct { + Name types.Float64 +} + +type KTestExpand struct { + Name float64 +} + +type LTestExpand struct { + Name *float64 +} + +type MTestExpand struct { + Name float32 +} + +type NTestExpand struct { + Name *float32 +} + +type OTestExpand struct { + Name types.Bool +} + +type PTestExpand struct { + Name bool +} + +type QTestExpand struct { + Name *bool +} + +type RTestExpand struct { + Names types.Set +} + +type STestExpand struct { + Names []string +} + +type TTestExpand struct { + Names []*string +} + +type UTestExpand struct { + Names types.List +} + +func TestGenericExpand(t *testing.T) { + t.Parallel() + + ctx := context.Background() + testString := "test" + testCases := []struct { + TestName string + Source any + Target any + WantErr bool + WantTarget any + }{ + { + TestName: "nil Source and Target", + WantErr: true, + }, + { + TestName: "non-pointer Target", + Source: ATestExpand{}, + Target: 0, + WantErr: true, + }, + { + TestName: "non-struct Source", + Source: testString, + Target: &ATestExpand{}, + WantErr: true, + }, + { + TestName: "non-struct Target", + Source: ATestExpand{}, + Target: &testString, + WantErr: true, + }, + { + TestName: "empty struct Source and Target", + Source: ATestExpand{}, + Target: &ATestExpand{}, + WantTarget: &ATestExpand{}, + }, + 
{ + TestName: "empty struct pointer Source and Target", + Source: &ATestExpand{}, + Target: &ATestExpand{}, + WantTarget: &ATestExpand{}, + }, + { + TestName: "single string struct pointer Source and empty Target", + Source: &BTestExpand{Name: types.StringValue("a")}, + Target: &ATestExpand{}, + WantTarget: &ATestExpand{}, + }, + { + TestName: "does not implement attr.Value Source", + Source: &CTestExpand{Name: "a"}, + Target: &CTestExpand{}, + WantErr: true, + }, + { + TestName: "single string Source and single string Target", + Source: &BTestExpand{Name: types.StringValue("a")}, + Target: &CTestExpand{}, + WantTarget: &CTestExpand{Name: "a"}, + }, + { + TestName: "single string Source and single *string Target", + Source: &BTestExpand{Name: types.StringValue("a")}, + Target: &DTestExpand{}, + WantTarget: &DTestExpand{Name: aws.String("a")}, + }, + { + TestName: "single string Source and single int64 Target", + Source: &BTestExpand{Name: types.StringValue("a")}, + Target: &FTestExpand{}, + WantErr: true, + }, + { + TestName: "single int64 Source and single int64 Target", + Source: &ETestExpand{Name: types.Int64Value(42)}, + Target: &FTestExpand{}, + WantTarget: &FTestExpand{Name: 42}, + }, + { + TestName: "single int64 Source and single *int64 Target", + Source: &ETestExpand{Name: types.Int64Value(42)}, + Target: &GTestExpand{}, + WantTarget: &GTestExpand{Name: aws.Int64(42)}, + }, + { + TestName: "single int64 Source and single int32 Target", + Source: &ETestExpand{Name: types.Int64Value(42)}, + Target: &HTestExpand{}, + WantTarget: &HTestExpand{Name: 42}, + }, + { + TestName: "single int64 Source and single *int32 Target", + Source: &ETestExpand{Name: types.Int64Value(42)}, + Target: &ITestExpand{}, + WantTarget: &ITestExpand{Name: aws.Int32(42)}, + }, + { + TestName: "single int64 Source and single float64 Target", + Source: &ETestExpand{Name: types.Int64Value(42)}, + Target: &KTestExpand{}, + WantErr: true, + }, + { + TestName: "single float64 Source and single
float64 Target", + Source: &JTestExpand{Name: types.Float64Value(4.2)}, + Target: &KTestExpand{}, + WantTarget: &KTestExpand{Name: 4.2}, + }, + { + TestName: "single float64 Source and single *float64 Target", + Source: &JTestExpand{Name: types.Float64Value(4.2)}, + Target: &LTestExpand{}, + WantTarget: &LTestExpand{Name: aws.Float64(4.2)}, + }, + { + TestName: "single float64 Source and single float32 Target", + Source: &JTestExpand{Name: types.Float64Value(4.2)}, + Target: &MTestExpand{}, + WantTarget: &MTestExpand{Name: 4.2}, + }, + { + TestName: "single float64 Source and single *float32 Target", + Source: &JTestExpand{Name: types.Float64Value(4.2)}, + Target: &NTestExpand{}, + WantTarget: &NTestExpand{Name: aws.Float32(4.2)}, + }, + { + TestName: "single float64 Source and single bool Target", + Source: &JTestExpand{Name: types.Float64Value(4.2)}, + Target: &PTestExpand{}, + WantErr: true, + }, + { + TestName: "single bool Source and single bool Target", + Source: &OTestExpand{Name: types.BoolValue(true)}, + Target: &PTestExpand{}, + WantTarget: &PTestExpand{Name: true}, + }, + { + TestName: "single bool Source and single *bool Target", + Source: &OTestExpand{Name: types.BoolValue(true)}, + Target: &QTestExpand{}, + WantTarget: &QTestExpand{Name: aws.Bool(true)}, + }, + { + TestName: "single set Source and single string slice Target", + Source: &RTestExpand{Names: types.SetValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + Target: &STestExpand{}, + WantTarget: &STestExpand{Names: []string{"a"}}, + }, + { + TestName: "single set Source and single *string slice Target", + Source: &RTestExpand{Names: types.SetValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + Target: &TTestExpand{}, + WantTarget: &TTestExpand{Names: aws.StringSlice([]string{"a"})}, + }, + { + TestName: "single list Source and single string slice Target", + Source: &UTestExpand{Names: types.ListValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, +
Target: &STestExpand{}, + WantTarget: &STestExpand{Names: []string{"a"}}, + }, + { + TestName: "single list Source and single *string slice Target", + Source: &UTestExpand{Names: types.ListValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + Target: &TTestExpand{}, + WantTarget: &TTestExpand{Names: aws.StringSlice([]string{"a"})}, + }, + } + + for _, testCase := range testCases { + testCase := testCase + t.Run(testCase.TestName, func(t *testing.T) { + t.Parallel() + + err := Expand(ctx, testCase.Source, testCase.Target) + gotErr := err != nil + + if gotErr != testCase.WantErr { + t.Errorf("gotErr = %v, wantErr = %v", gotErr, testCase.WantErr) + } + + if gotErr { + if !testCase.WantErr { + t.Errorf("err = %q", err) + } + } else if diff := cmp.Diff(testCase.Target, testCase.WantTarget); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/flatten.go b/internal/framework/flex/flatten.go new file mode 100644 index 00000000000..db6ce3fee1c --- /dev/null +++ b/internal/framework/flex/flatten.go @@ -0,0 +1,167 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + "fmt" + "reflect" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-go/tftypes" +) + +// Flatten "flattens" an AWS SDK for Go v2 API data structure into +// a resource's "business logic" data structure, implemented using +// Terraform Plugin Framework data types. +// The API data structure's fields are walked and exported fields that +// have a corresponding field in the resource's data structure (and a +// suitable target data type) are copied. 
+func Flatten(ctx context.Context, apiObject, tfObject any) error { + if err := walkStructFields(ctx, apiObject, tfObject, flattenVisitor{}); err != nil { + return fmt.Errorf("Flatten[%T, %T]: %w", apiObject, tfObject, err) + } + + return nil +} + +type flattenVisitor struct{} + +func (v flattenVisitor) visit(ctx context.Context, fieldName string, valFrom, valTo reflect.Value) error { + vTo, ok := valTo.Interface().(attr.Value) + if !ok { + return fmt.Errorf("does not implement attr.Value: %s", valTo.Kind()) + } + + kFrom, tTo := valFrom.Kind(), vTo.Type(ctx) + switch kFrom { + case reflect.Bool: + vFrom := valFrom.Bool() + switch { + case tTo.Equal(types.BoolType): + valTo.Set(reflect.ValueOf(types.BoolValue(vFrom))) + return nil + } + + case reflect.Float32, reflect.Float64: + vFrom := valFrom.Float() + switch { + case tTo.Equal(types.Float64Type): + valTo.Set(reflect.ValueOf(types.Float64Value(vFrom))) + return nil + } + + case reflect.Int32, reflect.Int64: + vFrom := valFrom.Int() + switch { + case tTo.Equal(types.Int64Type): + valTo.Set(reflect.ValueOf(types.Int64Value(vFrom))) + return nil + } + + case reflect.String: + vFrom := valFrom.String() + switch { + case tTo.Equal(types.StringType): + valTo.Set(reflect.ValueOf(types.StringValue(vFrom))) + return nil + } + + case reflect.Ptr: + vFrom := valFrom.Elem() + switch valFrom.Type().Elem().Kind() { + case reflect.Bool: + switch { + case tTo.Equal(types.BoolType): + if vFrom.IsValid() { + valTo.Set(reflect.ValueOf(types.BoolValue(vFrom.Bool()))) + } else { + valTo.Set(reflect.ValueOf(types.BoolNull())) + } + return nil + } + + case reflect.Float32, reflect.Float64: + switch { + case tTo.Equal(types.Float64Type): + if vFrom.IsValid() { + valTo.Set(reflect.ValueOf(types.Float64Value(vFrom.Float()))) + } else { + valTo.Set(reflect.ValueOf(types.Float64Null())) + } + return nil + } + + case reflect.Int32, reflect.Int64: + switch { + case tTo.Equal(types.Int64Type): + if vFrom.IsValid() { + 
valTo.Set(reflect.ValueOf(types.Int64Value(vFrom.Int()))) + } else { + valTo.Set(reflect.ValueOf(types.Int64Null())) + } + return nil + } + + case reflect.String: + switch { + case tTo.Equal(types.StringType): + if vFrom.IsValid() { + valTo.Set(reflect.ValueOf(types.StringValue(vFrom.String()))) + } else { + valTo.Set(reflect.ValueOf(types.StringNull())) + } + return nil + } + } + + case reflect.Slice: + vFrom := valFrom.Interface() + switch tSliceElem := valFrom.Type().Elem(); tSliceElem.Kind() { + case reflect.String: + switch { + case tTo.TerraformType(ctx).Is(tftypes.List{}): + if vFrom != nil { + valTo.Set(reflect.ValueOf(FlattenFrameworkStringValueList(ctx, vFrom.([]string)))) + } else { + valTo.Set(reflect.ValueOf(types.ListNull(types.StringType))) + } + return nil + + case tTo.TerraformType(ctx).Is(tftypes.Set{}): + if vFrom != nil { + valTo.Set(reflect.ValueOf(FlattenFrameworkStringValueSet(ctx, vFrom.([]string)))) + } else { + valTo.Set(reflect.ValueOf(types.SetNull(types.StringType))) + } + return nil + } + + case reflect.Ptr: + switch tSliceElem.Elem().Kind() { + case reflect.String: + switch { + case tTo.TerraformType(ctx).Is(tftypes.List{}): + if vFrom != nil { + valTo.Set(reflect.ValueOf(FlattenFrameworkStringList(ctx, vFrom.([]*string)))) + } else { + valTo.Set(reflect.ValueOf(types.ListNull(types.StringType))) + } + return nil + + case tTo.TerraformType(ctx).Is(tftypes.Set{}): + if vFrom != nil { + valTo.Set(reflect.ValueOf(FlattenFrameworkStringSet(ctx, vFrom.([]*string)))) + } else { + valTo.Set(reflect.ValueOf(types.SetNull(types.StringType))) + } + return nil + } + } + } + } + + return fmt.Errorf("incompatible (%s): %s", kFrom, tTo) +} diff --git a/internal/framework/flex/flatten_test.go b/internal/framework/flex/flatten_test.go new file mode 100644 index 00000000000..63e22b6d5c4 --- /dev/null +++ b/internal/framework/flex/flatten_test.go @@ -0,0 +1,311 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +type ATestFlatten struct{} + +type BTestFlatten struct { + Name string +} + +type CTestFlatten struct { + Name *string +} + +type DTestFlatten struct { + Name types.String +} + +type ETestFlatten struct { + Name int64 +} + +type FTestFlatten struct { + Name *int64 +} + +type GTestFlatten struct { + Name types.Int64 +} + +type HTestFlatten struct { + Name int32 +} + +type ITestFlatten struct { + Name *int32 +} + +type JTestFlatten struct { + Name float64 +} + +type KTestFlatten struct { + Name *float64 +} + +type LTestFlatten struct { + Name types.Float64 +} + +type MTestFlatten struct { + Name float32 +} + +type NTestFlatten struct { + Name *float32 +} + +type OTestFlatten struct { + Name bool +} + +type PTestFlatten struct { + Name *bool +} + +type QTestFlatten struct { + Name types.Bool +} + +type RTestFlatten struct { + Names []string +} + +type STestFlatten struct { + Names []*string +} + +type TTestFlatten struct { + Names types.Set +} + +type UTestFlatten struct { + Names types.List +} + +func TestGenericFlatten(t *testing.T) { + t.Parallel() + + ctx := context.Background() + testString := "test" + testCases := []struct { + TestName string + Source any + Target any + WantErr bool + WantTarget any + }{ + { + TestName: "nil Source and Target", + WantErr: true, + }, + { + TestName: "non-pointer Target", + Source: ATestFlatten{}, + Target: 0, + WantErr: true, + }, + { + TestName: "non-struct Source", + Source: testString, + Target: &ATestFlatten{}, + WantErr: true, + }, + { + TestName: "non-struct Target", + Source: ATestFlatten{}, + Target: &testString, + WantErr: true, + }, + { + TestName: "empty struct Source and Target", + Source: ATestFlatten{}, + Target: &ATestFlatten{}, + 
WantTarget: &ATestFlatten{}, + }, + { + TestName: "empty struct pointer Source and Target", + Source: &ATestFlatten{}, + Target: &ATestFlatten{}, + WantTarget: &ATestFlatten{}, + }, + { + TestName: "single string struct pointer Source and empty Target", + Source: &BTestFlatten{Name: "a"}, + Target: &ATestFlatten{}, + WantTarget: &ATestFlatten{}, + }, + { + TestName: "does not implement attr.Value Target", + Source: &BTestFlatten{Name: "a"}, + Target: &CTestFlatten{}, + WantErr: true, + }, + { + TestName: "single empty string Source and single string Target", + Source: &BTestFlatten{}, + Target: &DTestFlatten{}, + WantTarget: &DTestFlatten{Name: types.StringValue("")}, + }, + { + TestName: "single string Source and single string Target", + Source: &BTestFlatten{Name: "a"}, + Target: &DTestFlatten{}, + WantTarget: &DTestFlatten{Name: types.StringValue("a")}, + }, + { + TestName: "single nil *string Source and single string Target", + Source: &CTestFlatten{}, + Target: &DTestFlatten{}, + WantTarget: &DTestFlatten{Name: types.StringNull()}, + }, + { + TestName: "single *string Source and single string Target", + Source: &CTestFlatten{Name: aws.String("a")}, + Target: &DTestFlatten{}, + WantTarget: &DTestFlatten{Name: types.StringValue("a")}, + }, + { + TestName: "single string Source and single int64 Target", + Source: &BTestFlatten{Name: "a"}, + Target: &GTestFlatten{}, + WantErr: true, + }, + { + TestName: "single int64 Source and single int64 Target", + Source: &ETestFlatten{Name: 42}, + Target: &GTestFlatten{}, + WantTarget: &GTestFlatten{Name: types.Int64Value(42)}, + }, + { + TestName: "single *int64 Source and single int64 Target", + Source: &FTestFlatten{Name: aws.Int64(42)}, + Target: &GTestFlatten{}, + WantTarget: &GTestFlatten{Name: types.Int64Value(42)}, + }, + { + TestName: "single int32 Source and single int64 Target", + Source: &HTestFlatten{Name: 42}, + Target: &GTestFlatten{}, + WantTarget: &GTestFlatten{Name: types.Int64Value(42)}, + }, + { + TestName: "single
*int32 Source and single int64 Target", + Source: &ITestFlatten{Name: aws.Int32(42)}, + Target: &GTestFlatten{}, + WantTarget: &GTestFlatten{Name: types.Int64Value(42)}, + }, + { + TestName: "single *int32 Source and single string Target", + Source: &ITestFlatten{Name: aws.Int32(42)}, + Target: &DTestFlatten{}, + WantErr: true, + }, + { + TestName: "single float64 Source and single float64 Target", + Source: &JTestFlatten{Name: 4.2}, + Target: &LTestFlatten{}, + WantTarget: &LTestFlatten{Name: types.Float64Value(4.2)}, + }, + { + TestName: "single *float64 Source and single float64 Target", + Source: &KTestFlatten{Name: aws.Float64(4.2)}, + Target: &LTestFlatten{}, + WantTarget: &LTestFlatten{Name: types.Float64Value(4.2)}, + }, + { + TestName: "single float32 Source and single float64 Target", + Source: &MTestFlatten{Name: 4}, + Target: &LTestFlatten{}, + WantTarget: &LTestFlatten{Name: types.Float64Value(4)}, + }, + { + TestName: "single *float32 Source and single float64 Target", + Source: &NTestFlatten{Name: aws.Float32(4)}, + Target: &LTestFlatten{}, + WantTarget: &LTestFlatten{Name: types.Float64Value(4)}, + }, + { + TestName: "single nil *float32 Source and single float64 Target", + Source: &NTestFlatten{}, + Target: &LTestFlatten{}, + WantTarget: &LTestFlatten{Name: types.Float64Null()}, + }, + { + TestName: "single bool Source and single bool Target", + Source: &OTestFlatten{Name: true}, + Target: &QTestFlatten{}, + WantTarget: &QTestFlatten{Name: types.BoolValue(true)}, + }, + { + TestName: "single *bool Source and single bool Target", + Source: &PTestFlatten{Name: aws.Bool(true)}, + Target: &QTestFlatten{}, + WantTarget: &QTestFlatten{Name: types.BoolValue(true)}, + }, + { + TestName: "single string slice Source and single set Target", + Source: &RTestFlatten{Names: []string{"a"}}, + Target: &TTestFlatten{}, + WantTarget: &TTestFlatten{Names: types.SetValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + }, + { + TestName: "single nil string slice Source and
single set Target", + Source: &RTestFlatten{}, + Target: &TTestFlatten{}, + WantTarget: &TTestFlatten{Names: types.SetNull(types.StringType)}, + }, + { + TestName: "single *string slice Source and single set Target", + Source: &STestFlatten{Names: aws.StringSlice([]string{"a"})}, + Target: &TTestFlatten{}, + WantTarget: &TTestFlatten{Names: types.SetValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + }, + { + TestName: "single string slice Source and single list Target", + Source: &RTestFlatten{Names: []string{"a"}}, + Target: &UTestFlatten{}, + WantTarget: &UTestFlatten{Names: types.ListValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + }, + { + TestName: "single *string slice Source and single list Target", + Source: &STestFlatten{Names: aws.StringSlice([]string{"a"})}, + Target: &UTestFlatten{}, + WantTarget: &UTestFlatten{Names: types.ListValueMust(types.StringType, []attr.Value{types.StringValue("a")})}, + }, + } + + for _, testCase := range testCases { + testCase := testCase + t.Run(testCase.TestName, func(t *testing.T) { + t.Parallel() + + err := Flatten(ctx, testCase.Source, testCase.Target) + gotErr := err != nil + + if gotErr != testCase.WantErr { + t.Errorf("gotErr = %v, wantErr = %v", gotErr, testCase.WantErr) + } + + if gotErr { + if !testCase.WantErr { + t.Errorf("err = %q", err) + } + } else if diff := cmp.Diff(testCase.Target, testCase.WantTarget); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/flex.go b/internal/framework/flex/flex.go index 8b79c265388..c15a5e76b93 100644 --- a/internal/framework/flex/flex.go +++ b/internal/framework/flex/flex.go @@ -1,80 +1,22 @@ -package flex - -import ( - "context" - "fmt" - - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/aws/aws-sdk-go-v2/aws/arn" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/types" - 
"github.com/hashicorp/terraform-provider-aws/internal/flex" - fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" -) - -// Breaking the cycles without changing all files -var ( - BoolFromFramework = flex.BoolFromFramework - ExpandFrameworkStringSet = flex.ExpandFrameworkStringSet - Int64FromFramework = flex.Int64FromFramework - StringFromFramework = flex.StringFromFramework -) - -var ( - BoolToFramework = flex.BoolToFramework - FlattenFrameworkStringSetLegacy = flex.FlattenFrameworkStringSetLegacy - Int64ToFramework = flex.Int64ToFramework - Int64ToFrameworkLegacy = flex.Int64ToFrameworkLegacy - StringToFramework = flex.StringToFramework - StringToFrameworkLegacy = flex.StringToFrameworkLegacy -) - -func ARNStringFromFramework(_ context.Context, v fwtypes.ARN) *string { - if v.IsNull() || v.IsUnknown() { - return nil - } - - return aws.String(v.ValueARN().String()) -} - -func StringToFrameworkARN(ctx context.Context, v *string, diags *diag.Diagnostics) fwtypes.ARN { - if v == nil { - return fwtypes.ARNNull() - } +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 - a, err := arn.Parse(aws.ToString(v)) - if err != nil { - diags.AddError( - "Parsing Error", - fmt.Sprintf("String %s cannot be parsed as an ARN.", aws.ToString(v)), - ) - } - - return fwtypes.ARNValue(a) -} - -func Int64FromFrameworkLegacy(_ context.Context, v types.Int64) *int64 { - if v.IsNull() || v.IsUnknown() { - return nil - } +package flex - i := v.ValueInt64() - if i == 0 { - return nil - } +type Set[T comparable] []T - return aws.Int64(i) -} - -func StringFromFrameworkLegacy(_ context.Context, v types.String) *string { - if v.IsNull() || v.IsUnknown() { - return nil +// Difference returns the elements of s that are not present in ns.
+func (s Set[T]) Difference(ns Set[T]) Set[T] { + m := make(map[T]struct{}) + for _, v := range ns { + m[v] = struct{}{} } - s := v.ValueString() - if s == "" { - return nil + var result []T + for _, v := range s { + if _, ok := m[v]; !ok { + result = append(result, v) + } } - - return aws.String(s) + return result } diff --git a/internal/framework/flex/flex_test.go b/internal/framework/flex/flex_test.go new file mode 100644 index 00000000000..c2143bf8efc --- /dev/null +++ b/internal/framework/flex/flex_test.go @@ -0,0 +1,59 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "testing" + + "github.com/google/go-cmp/cmp" +) + +func TestSet_Difference_strings(t *testing.T) { + t.Parallel() + + type testCase struct { + original Set[string] + new Set[string] + expected Set[string] + } + tests := map[string]testCase{ + "nil": { + original: nil, + new: nil, + expected: nil, + }, + "equal": { + original: Set[string]{"one"}, + new: Set[string]{"one"}, + expected: nil, + }, + "difference": { + original: Set[string]{"one", "two", "four"}, + new: Set[string]{"one", "two", "three"}, + expected: Set[string]{"four"}, + }, + "difference_remove": { + original: Set[string]{"one", "two"}, + new: Set[string]{"one"}, + expected: Set[string]{"two"}, + }, + "difference_add": { + original: Set[string]{"one"}, + new: Set[string]{"one", "two"}, + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := test.original.Difference(test.new) + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/float.go b/internal/framework/flex/float.go new file mode 100644 index 00000000000..988f8966dbc --- /dev/null +++ b/internal/framework/flex/float.go @@ -0,0 +1,27 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +// Float64ToFramework converts a float64 pointer to a Framework Float64 value. +// A nil float64 pointer is converted to a null Float64. +func Float64ToFramework(_ context.Context, v *float64) types.Float64 { + if v == nil { + return types.Float64Null() + } + + return types.Float64Value(aws.ToFloat64(v)) +} + +// Float64ToFrameworkLegacy converts a float64 pointer to a Framework Float64 value. +// A nil float64 pointer is converted to a zero float64. +func Float64ToFrameworkLegacy(_ context.Context, v *float64) types.Float64 { + return types.Float64Value(aws.ToFloat64(v)) +} diff --git a/internal/framework/flex/float_test.go b/internal/framework/flex/float_test.go new file mode 100644 index 00000000000..e053b046b6d --- /dev/null +++ b/internal/framework/flex/float_test.go @@ -0,0 +1,86 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestFloat64ToFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input *float64 + expected types.Float64 + } + tests := map[string]testCase{ + "valid float64": { + input: aws.Float64(42.1), + expected: types.Float64Value(42.1), + }, + "zero float64": { + input: aws.Float64(0), + expected: types.Float64Value(0), + }, + "nil float64": { + input: nil, + expected: types.Float64Null(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.Float64ToFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFloat64ToFrameworkLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input *float64 + expected types.Float64 + } + tests := map[string]testCase{ + "valid float64": { + input: aws.Float64(42.1), + expected: types.Float64Value(42.1), + }, + "zero float64": { + input: aws.Float64(0), + expected: types.Float64Value(0), + }, + "nil float64": { + input: nil, + expected: types.Float64Value(0), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.Float64ToFrameworkLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/int.go b/internal/framework/flex/int.go new file mode 100644 index 00000000000..51b8464622b --- /dev/null +++ b/internal/framework/flex/int.go @@ -0,0 +1,50 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +// Int64FromFramework converts a Framework Int64 value to an int64 pointer. +// A null Int64 is converted to a nil int64 pointer. +func Int64FromFramework(_ context.Context, v types.Int64) *int64 { + if v.IsNull() || v.IsUnknown() { + return nil + } + + return aws.Int64(v.ValueInt64()) +} + +// Int64ToFramework converts an int64 pointer to a Framework Int64 value. +// A nil int64 pointer is converted to a null Int64. +func Int64ToFramework(_ context.Context, v *int64) types.Int64 { + if v == nil { + return types.Int64Null() + } + + return types.Int64Value(aws.ToInt64(v)) +} + +// Int64ToFrameworkLegacy converts an int64 pointer to a Framework Int64 value. +// A nil int64 pointer is converted to a zero Int64. +func Int64ToFrameworkLegacy(_ context.Context, v *int64) types.Int64 { + return types.Int64Value(aws.ToInt64(v)) +} + +func Int64FromFrameworkLegacy(_ context.Context, v types.Int64) *int64 { + if v.IsNull() || v.IsUnknown() { + return nil + } + + i := v.ValueInt64() + if i == 0 { + return nil + } + + return aws.Int64(i) +} diff --git a/internal/framework/flex/int_test.go b/internal/framework/flex/int_test.go new file mode 100644 index 00000000000..a0a032098f6 --- /dev/null +++ b/internal/framework/flex/int_test.go @@ -0,0 +1,126 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestInt64FromFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.Int64 + expected *int64 + } + tests := map[string]testCase{ + "valid int64": { + input: types.Int64Value(42), + expected: aws.Int64(42), + }, + "zero int64": { + input: types.Int64Value(0), + expected: aws.Int64(0), + }, + "null int64": { + input: types.Int64Null(), + expected: nil, + }, + "unknown int64": { + input: types.Int64Unknown(), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.Int64FromFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestInt64ToFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input *int64 + expected types.Int64 + } + tests := map[string]testCase{ + "valid int64": { + input: aws.Int64(42), + expected: types.Int64Value(42), + }, + "zero int64": { + input: aws.Int64(0), + expected: types.Int64Value(0), + }, + "nil int64": { + input: nil, + expected: types.Int64Null(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.Int64ToFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestInt64ToFrameworkLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input *int64 + expected types.Int64 + } + tests := map[string]testCase{ + "valid int64": { + input: aws.Int64(42), + 
expected: types.Int64Value(42), + }, + "zero int64": { + input: aws.Int64(0), + expected: types.Int64Value(0), + }, + "nil int64": { + input: nil, + expected: types.Int64Value(0), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.Int64ToFrameworkLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/list.go b/internal/framework/flex/list.go new file mode 100644 index 00000000000..b9542377138 --- /dev/null +++ b/internal/framework/flex/list.go @@ -0,0 +1,151 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/slices" +) + +func ExpandFrameworkStringList(ctx context.Context, list types.List) []*string { + if list.IsNull() || list.IsUnknown() { + return nil + } + + var vl []*string + + if list.ElementsAs(ctx, &vl, false).HasError() { + return nil + } + + return vl +} + +func ExpandFrameworkStringValueList(ctx context.Context, list types.List) []string { + if list.IsNull() || list.IsUnknown() { + return nil + } + + var vl []string + + if list.ElementsAs(ctx, &vl, false).HasError() { + return nil + } + + return vl +} + +// FlattenFrameworkStringList converts a slice of string pointers to a framework List value. +// +// A nil slice is converted to a null List. +// An empty slice is converted to a null List. 
+func FlattenFrameworkStringList(_ context.Context, vs []*string) types.List { + if len(vs) == 0 { + return types.ListNull(types.StringType) + } + + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(aws.ToString(v)) + } + + return types.ListValueMust(types.StringType, elems) +} + +// FlattenFrameworkStringListLegacy is the Plugin Framework variant of FlattenStringList. +// A nil slice is converted to an empty (non-null) List. +func FlattenFrameworkStringListLegacy(_ context.Context, vs []*string) types.List { + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(aws.ToString(v)) + } + + return types.ListValueMust(types.StringType, elems) +} + +// FlattenFrameworkStringValueList converts a slice of string values to a framework List value. +// +// A nil slice is converted to a null List. +// An empty slice is converted to a null List. +func FlattenFrameworkStringValueList(_ context.Context, vs []string) types.List { + if len(vs) == 0 { + return types.ListNull(types.StringType) + } + + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(v) + } + + return types.ListValueMust(types.StringType, elems) +} + +// FlattenFrameworkStringValueListLegacy is the Plugin Framework variant of FlattenStringValueList. +// A nil slice is converted to an empty (non-null) List. 
+func FlattenFrameworkStringValueListLegacy(_ context.Context, vs []string) types.List { + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(v) + } + + return types.ListValueMust(types.StringType, elems) +} + +type FrameworkElementExpanderFunc[T any, U any] func(context.Context, T) U + +func ExpandFrameworkListNestedBlock[T any, U any](ctx context.Context, tfList types.List, f FrameworkElementExpanderFunc[T, U]) []U { + if tfList.IsNull() || tfList.IsUnknown() { + return nil + } + + var data []T + + _ = fwdiag.Must(0, tfList.ElementsAs(ctx, &data, false)) + + return slices.ApplyToAll(data, func(t T) U { + return f(ctx, t) + }) +} + +func ExpandFrameworkListNestedBlockPtr[T any, U any](ctx context.Context, tfList types.List, f FrameworkElementExpanderFunc[T, *U]) *U { + if tfList.IsNull() || tfList.IsUnknown() { + return nil + } + + var data []T + + _ = fwdiag.Must(0, tfList.ElementsAs(ctx, &data, false)) + + if len(data) == 0 { + return nil + } + + return f(ctx, data[0]) +} + +type FrameworkElementFlattenerFunc[T any, U any] func(context.Context, U) T + +func FlattenFrameworkListNestedBlock[T any, U any](ctx context.Context, apiObjects []U, f FrameworkElementFlattenerFunc[T, U]) types.List { + attributeTypes := AttributeTypesMust[T](ctx) + elementType := types.ObjectType{AttrTypes: attributeTypes} + + if len(apiObjects) == 0 { + return types.ListNull(elementType) + } + + data := slices.ApplyToAll(apiObjects, func(apiObject U) T { + return f(ctx, apiObject) + }) + + return fwdiag.Must(types.ListValueFrom(ctx, elementType, data)) +} diff --git a/internal/framework/flex/list_test.go b/internal/framework/flex/list_test.go new file mode 100644 index 00000000000..22e5550eb5b --- /dev/null +++ b/internal/framework/flex/list_test.go @@ -0,0 +1,269 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestExpandFrameworkStringList(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.List + expected []*string + } + tests := map[string]testCase{ + "null": { + input: types.ListNull(types.StringType), + expected: nil, + }, + "unknown": { + input: types.ListUnknown(types.StringType), + expected: nil, + }, + "two elements": { + input: types.ListValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + expected: []*string{aws.String("GET"), aws.String("HEAD")}, + }, + "zero elements": { + input: types.ListValueMust(types.StringType, []attr.Value{}), + expected: []*string{}, + }, + "invalid element type": { + input: types.ListValueMust(types.Int64Type, []attr.Value{ + types.Int64Value(42), + }), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.ExpandFrameworkStringList(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestExpandFrameworkStringValueList(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.List + expected []string + } + tests := map[string]testCase{ + "null": { + input: types.ListNull(types.StringType), + expected: nil, + }, + "unknown": { + input: types.ListUnknown(types.StringType), + expected: nil, + }, + "two elements": { + input: types.ListValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + expected: []string{"GET", "HEAD"}, + }, 
+ "zero elements": { + input: types.ListValueMust(types.StringType, []attr.Value{}), + expected: []string{}, + }, + "invalid element type": { + input: types.ListValueMust(types.Int64Type, []attr.Value{ + types.Int64Value(42), + }), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.ExpandFrameworkStringValueList(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringList(t *testing.T) { + t.Parallel() + + type testCase struct { + input []*string + expected types.List + } + tests := map[string]testCase{ + "two elements": { + input: []*string{aws.String("GET"), aws.String("HEAD")}, + expected: types.ListValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: []*string{}, + expected: types.ListNull(types.StringType), + }, + "nil array": { + input: nil, + expected: types.ListNull(types.StringType), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.FlattenFrameworkStringList(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringListLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input []*string + expected types.List + } + tests := map[string]testCase{ + "two elements": { + input: []*string{aws.String("GET"), aws.String("HEAD")}, + expected: types.ListValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: []*string{}, + expected: types.ListValueMust(types.StringType, []attr.Value{}), + }, + "nil array": { + input: 
nil, + expected: types.ListValueMust(types.StringType, []attr.Value{}), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.FlattenFrameworkStringListLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringValueList(t *testing.T) { + t.Parallel() + + type testCase struct { + input []string + expected types.List + } + tests := map[string]testCase{ + "two elements": { + input: []string{"GET", "HEAD"}, + expected: types.ListValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: []string{}, + expected: types.ListNull(types.StringType), + }, + "nil array": { + input: nil, + expected: types.ListNull(types.StringType), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.FlattenFrameworkStringValueList(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringValueListLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input []string + expected types.List + } + tests := map[string]testCase{ + "two elements": { + input: []string{"GET", "HEAD"}, + expected: types.ListValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: []string{}, + expected: types.ListValueMust(types.StringType, []attr.Value{}), + }, + "nil array": { + input: nil, + expected: types.ListValueMust(types.StringType, []attr.Value{}), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := 
flex.FlattenFrameworkStringValueListLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/map.go b/internal/framework/flex/map.go new file mode 100644 index 00000000000..c54cfd59375 --- /dev/null +++ b/internal/framework/flex/map.go @@ -0,0 +1,23 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +// FlattenFrameworkStringValueMapLegacy has no Plugin SDK equivalent as schema.ResourceData.Set can be passed string value maps directly. +// A nil map is converted to an empty (non-null) Map. +func FlattenFrameworkStringValueMapLegacy(_ context.Context, m map[string]string) types.Map { + elems := make(map[string]attr.Value, len(m)) + + for k, v := range m { + elems[k] = types.StringValue(v) + } + + return types.MapValueMust(types.StringType, elems) +} diff --git a/internal/framework/flex/map_test.go b/internal/framework/flex/map_test.go new file mode 100644 index 00000000000..dfe032aa476 --- /dev/null +++ b/internal/framework/flex/map_test.go @@ -0,0 +1,108 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestExpandFrameworkStringValueMap(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.Map + expected map[string]string + } + tests := map[string]testCase{ + "null": { + input: types.MapNull(types.StringType), + expected: nil, + }, + "unknown": { + input: types.MapUnknown(types.StringType), + expected: nil, + }, + "two elements": { + input: types.MapValueMust(types.StringType, map[string]attr.Value{ + "one": types.StringValue("GET"), + "two": types.StringValue("HEAD"), + }), + expected: map[string]string{ + "one": "GET", + "two": "HEAD", + }, + }, + "zero elements": { + input: types.MapValueMust(types.StringType, map[string]attr.Value{}), + expected: map[string]string{}, + }, + "invalid element type": { + input: types.MapValueMust(types.BoolType, map[string]attr.Value{ + "one": types.BoolValue(true), + }), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.ExpandFrameworkStringValueMap(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringValueMapLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input map[string]string + expected types.Map + } + tests := map[string]testCase{ + "two elements": { + input: map[string]string{ + "one": "GET", + "two": "HEAD", + }, + expected: types.MapValueMust(types.StringType, map[string]attr.Value{ + "one": types.StringValue("GET"), + "two": types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: map[string]string{}, + 
expected: types.MapValueMust(types.StringType, map[string]attr.Value{}), + }, + "nil map": { + input: nil, + expected: types.MapValueMust(types.StringType, map[string]attr.Value{}), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.FlattenFrameworkStringValueMapLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/set.go b/internal/framework/flex/set.go new file mode 100644 index 00000000000..7823bc6117a --- /dev/null +++ b/internal/framework/flex/set.go @@ -0,0 +1,115 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +func ExpandFrameworkStringSet(ctx context.Context, set types.Set) []*string { + if set.IsNull() || set.IsUnknown() { + return nil + } + + var vs []*string + + if set.ElementsAs(ctx, &vs, false).HasError() { + return nil + } + + return vs +} + +func ExpandFrameworkStringValueSet(ctx context.Context, set types.Set) Set[string] { + if set.IsNull() || set.IsUnknown() { + return nil + } + + var vs []string + + if set.ElementsAs(ctx, &vs, false).HasError() { + return nil + } + + return vs +} + +func ExpandFrameworkStringValueMap(ctx context.Context, set types.Map) map[string]string { + if set.IsNull() || set.IsUnknown() { + return nil + } + + var m map[string]string + + if set.ElementsAs(ctx, &m, false).HasError() { + return nil + } + + return m +} + +// FlattenFrameworkStringSet converts a slice of string pointers to a framework Set value. +// +// A nil slice is converted to a null Set. +// An empty slice is converted to a null Set. 
+func FlattenFrameworkStringSet(_ context.Context, vs []*string) types.Set { + if len(vs) == 0 { + return types.SetNull(types.StringType) + } + + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(aws.ToString(v)) + } + + return types.SetValueMust(types.StringType, elems) +} + +// FlattenFrameworkStringSetLegacy converts a slice of string pointers to a framework Set value. +// +// A nil slice is converted to an empty (non-null) Set. +func FlattenFrameworkStringSetLegacy(_ context.Context, vs []*string) types.Set { + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(aws.ToString(v)) + } + + return types.SetValueMust(types.StringType, elems) +} + +// FlattenFrameworkStringValueSet converts a slice of string values to a framework Set value. +// +// A nil slice is converted to a null Set. +// An empty slice is converted to a null Set. +func FlattenFrameworkStringValueSet(_ context.Context, vs []string) types.Set { + if len(vs) == 0 { + return types.SetNull(types.StringType) + } + + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(v) + } + + return types.SetValueMust(types.StringType, elems) +} + +// FlattenFrameworkStringValueSetLegacy is the Plugin Framework variant of FlattenStringValueSet. +// A nil slice is converted to an empty (non-null) Set. +func FlattenFrameworkStringValueSetLegacy(_ context.Context, vs []string) types.Set { + elems := make([]attr.Value, len(vs)) + + for i, v := range vs { + elems[i] = types.StringValue(v) + } + + return types.SetValueMust(types.StringType, elems) +} diff --git a/internal/framework/flex/set_test.go b/internal/framework/flex/set_test.go new file mode 100644 index 00000000000..6f6effd332e --- /dev/null +++ b/internal/framework/flex/set_test.go @@ -0,0 +1,191 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestExpandFrameworkStringSet(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.Set + expected []*string + } + tests := map[string]testCase{ + "null": { + input: types.SetNull(types.StringType), + expected: nil, + }, + "unknown": { + input: types.SetUnknown(types.StringType), + expected: nil, + }, + "two elements": { + input: types.SetValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + expected: []*string{aws.String("GET"), aws.String("HEAD")}, + }, + "zero elements": { + input: types.SetValueMust(types.StringType, []attr.Value{}), + expected: []*string{}, + }, + "invalid element type": { + input: types.SetValueMust(types.Int64Type, []attr.Value{ + types.Int64Value(42), + }), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.ExpandFrameworkStringSet(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestExpandFrameworkStringValueSet(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.Set + expected flex.Set[string] + } + tests := map[string]testCase{ + "null": { + input: types.SetNull(types.StringType), + expected: nil, + }, + "unknown": { + input: types.SetUnknown(types.StringType), + expected: nil, + }, + "two elements": { + input: types.SetValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + expected: []string{"GET", "HEAD"}, + }, + 
"zero elements": { + input: types.SetValueMust(types.StringType, []attr.Value{}), + expected: []string{}, + }, + "invalid element type": { + input: types.SetValueMust(types.Int64Type, []attr.Value{ + types.Int64Value(42), + }), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.ExpandFrameworkStringValueSet(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringValueSet(t *testing.T) { + t.Parallel() + + type testCase struct { + input []string + expected types.Set + } + tests := map[string]testCase{ + "two elements": { + input: []string{"GET", "HEAD"}, + expected: types.SetValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: []string{}, + expected: types.SetNull(types.StringType), + }, + "nil array": { + input: nil, + expected: types.SetNull(types.StringType), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.FlattenFrameworkStringValueSet(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestFlattenFrameworkStringValueSetLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input []string + expected types.Set + } + tests := map[string]testCase{ + "two elements": { + input: []string{"GET", "HEAD"}, + expected: types.SetValueMust(types.StringType, []attr.Value{ + types.StringValue("GET"), + types.StringValue("HEAD"), + }), + }, + "zero elements": { + input: []string{}, + expected: types.SetValueMust(types.StringType, []attr.Value{}), + }, + "nil array": { + input: nil, + expected: types.SetValueMust(types.StringType, 
[]attr.Value{}), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.FlattenFrameworkStringValueSetLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/flex/string.go b/internal/framework/flex/string.go new file mode 100644 index 00000000000..07970a487f5 --- /dev/null +++ b/internal/framework/flex/string.go @@ -0,0 +1,114 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex + +import ( + "context" + "fmt" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/aws/arn" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/types" + fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" +) + +// StringFromFramework converts a Framework String value to a string pointer. +// A null String is converted to a nil string pointer. +func StringFromFramework(_ context.Context, v types.String) *string { + if v.IsNull() || v.IsUnknown() { + return nil + } + + return aws.String(v.ValueString()) +} + +// StringSliceFromFramework converts a single Framework String value to a string pointer slice. +// A null String is converted to a nil slice. +func StringSliceFromFramework(_ context.Context, v types.String) []*string { + if v.IsNull() || v.IsUnknown() { + return nil + } + + return aws.StringSlice([]string{v.ValueString()}) +} + +// StringValueToFramework converts a string value to a Framework String value. +// An empty string is converted to a null String. +func StringValueToFramework[T ~string](_ context.Context, v T) types.String { + if v == "" { + return types.StringNull() + } + return types.StringValue(string(v)) +} + +// StringValueToFrameworkLegacy converts a string value to a Framework String value. 
+// An empty string is left as an empty String. +func StringValueToFrameworkLegacy[T ~string](_ context.Context, v T) types.String { + return types.StringValue(string(v)) +} + +// StringToFramework converts a string pointer to a Framework String value. +// A nil string pointer is converted to a null String. +func StringToFramework(_ context.Context, v *string) types.String { + if v == nil { + return types.StringNull() + } + + return types.StringValue(aws.ToString(v)) +} + +// StringToFrameworkLegacy converts a string pointer to a Framework String value. +// A nil string pointer is converted to an empty String. +func StringToFrameworkLegacy(_ context.Context, v *string) types.String { + return types.StringValue(aws.ToString(v)) +} + +// StringToFrameworkWithTransform converts a string pointer to a Framework String value. +// A nil string pointer is converted to a null String. +// A non-nil string pointer has its value transformed by `f`. +func StringToFrameworkWithTransform(_ context.Context, v *string, f func(string) string) types.String { + if v == nil { + return types.StringNull() + } + + return types.StringValue(f(aws.ToString(v))) +} + +func ARNStringFromFramework(_ context.Context, v fwtypes.ARN) *string { + if v.IsNull() || v.IsUnknown() { + return nil + } + + return aws.String(v.ValueARN().String()) +} + +func StringToFrameworkARN(ctx context.Context, v *string, diags *diag.Diagnostics) fwtypes.ARN { + if v == nil { + return fwtypes.ARNNull() + } + + a, err := arn.Parse(aws.ToString(v)) + if err != nil { + diags.AddError( + "Parsing Error", + fmt.Sprintf("String %s cannot be parsed as an ARN.", aws.ToString(v)), + ) + } + + return fwtypes.ARNValue(a) +} + +func StringFromFrameworkLegacy(_ context.Context, v types.String) *string { + if v.IsNull() || v.IsUnknown() { + return nil + } + + s := v.ValueString() + if s == "" { + return nil + } + + return aws.String(s) +} diff --git a/internal/framework/flex/string_test.go b/internal/framework/flex/string_test.go 
new file mode 100644 index 00000000000..eee2afdee71 --- /dev/null +++ b/internal/framework/flex/string_test.go @@ -0,0 +1,233 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package flex_test + +import ( + "context" + "strings" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" +) + +func TestStringFromFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input types.String + expected *string + } + tests := map[string]testCase{ + "valid string": { + input: types.StringValue("TEST"), + expected: aws.String("TEST"), + }, + "empty string": { + input: types.StringValue(""), + expected: aws.String(""), + }, + "null string": { + input: types.StringNull(), + expected: nil, + }, + "unknown string": { + input: types.StringUnknown(), + expected: nil, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.StringFromFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestStringToFramework(t *testing.T) { + t.Parallel() + + type testCase struct { + input *string + expected types.String + } + tests := map[string]testCase{ + "valid string": { + input: aws.String("TEST"), + expected: types.StringValue("TEST"), + }, + "empty string": { + input: aws.String(""), + expected: types.StringValue(""), + }, + "nil string": { + input: nil, + expected: types.StringNull(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.StringToFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) 
+ } +} + +func TestStringToFrameworkLegacy(t *testing.T) { + t.Parallel() + + type testCase struct { + input *string + expected types.String + } + tests := map[string]testCase{ + "valid string": { + input: aws.String("TEST"), + expected: types.StringValue("TEST"), + }, + "empty string": { + input: aws.String(""), + expected: types.StringValue(""), + }, + "nil string": { + input: nil, + expected: types.StringValue(""), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.StringToFrameworkLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestStringToFrameworkWithTransform(t *testing.T) { + t.Parallel() + + type testCase struct { + input *string + expected types.String + } + tests := map[string]testCase{ + "valid string": { + input: aws.String("TEST"), + expected: types.StringValue("test"), + }, + "empty string": { + input: aws.String(""), + expected: types.StringValue(""), + }, + "nil string": { + input: nil, + expected: types.StringNull(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.StringToFrameworkWithTransform(context.Background(), test.input, strings.ToLower) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestStringValueToFramework(t *testing.T) { + t.Parallel() + + // AWS enums use custom types with an underlying string type + type custom string + + type testCase struct { + input custom + expected types.String + } + tests := map[string]testCase{ + "valid": { + input: "TEST", + expected: types.StringValue("TEST"), + }, + "empty": { + input: "", + expected: types.StringNull(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t 
*testing.T) { + t.Parallel() + + got := flex.StringValueToFramework(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + +func TestStringValueToFrameworkLegacy(t *testing.T) { + t.Parallel() + + // AWS enums use custom types with an underlying string type + type custom string + + type testCase struct { + input custom + expected types.String + } + tests := map[string]testCase{ + "valid": { + input: "TEST", + expected: types.StringValue("TEST"), + }, + "empty": { + input: "", + expected: types.StringValue(""), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := flex.StringValueToFrameworkLegacy(context.Background(), test.input) + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} diff --git a/internal/framework/int64planmodifier/default_value.go b/internal/framework/int64planmodifier/default_value.go deleted file mode 100644 index 7ff39b433b8..00000000000 --- a/internal/framework/int64planmodifier/default_value.go +++ /dev/null @@ -1,42 +0,0 @@ -package int64planmodifier - -import ( - "context" - "fmt" - - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -type defaultValue struct { - val int64 -} - -// DefaultValue return a bool plan modifier that sets the specified value if the planned value is Null. 
-func DefaultValue(i int64) planmodifier.Int64 { - return defaultValue{ - val: i, - } -} - -func (m defaultValue) Description(context.Context) string { - return fmt.Sprintf("If value is not configured, defaults to %d", m.val) -} - -func (m defaultValue) MarkdownDescription(ctx context.Context) string { - return m.Description(ctx) -} - -func (m defaultValue) PlanModifyInt64(ctx context.Context, req planmodifier.Int64Request, resp *planmodifier.Int64Response) { - if !req.ConfigValue.IsNull() { - return - } - - // If the attribute plan is "known" and "not null", then a previous plan modifier in the sequence - // has already been applied, and we don't want to interfere. - if !req.PlanValue.IsUnknown() && !req.PlanValue.IsNull() { - return - } - - resp.PlanValue = types.Int64Value(m.val) -} diff --git a/internal/framework/int64planmodifier/default_value_test.go b/internal/framework/int64planmodifier/default_value_test.go deleted file mode 100644 index 011f878ced7..00000000000 --- a/internal/framework/int64planmodifier/default_value_test.go +++ /dev/null @@ -1,67 +0,0 @@ -package int64planmodifier - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -func TestDefaultValue(t *testing.T) { - t.Parallel() - - type testCase struct { - plannedValue types.Int64 - currentValue types.Int64 - defaultValue int64 - expectedValue types.Int64 - expectError bool - } - tests := map[string]testCase{ - "default int64": { - plannedValue: types.Int64Null(), - currentValue: types.Int64Value(1), - defaultValue: 1, - expectedValue: types.Int64Value(1), - }, - "default int64 on create": { - plannedValue: types.Int64Null(), - currentValue: types.Int64Null(), - defaultValue: 1, - expectedValue: types.Int64Value(1), - }, - } - - for name, test := range tests { - name, test := name, 
test - t.Run(name, func(t *testing.T) { - t.Parallel() - - ctx := context.Background() - request := planmodifier.Int64Request{ - Path: path.Root("test"), - PlanValue: test.plannedValue, - StateValue: test.currentValue, - } - response := planmodifier.Int64Response{ - PlanValue: request.PlanValue, - } - DefaultValue(test.defaultValue).PlanModifyInt64(ctx, request, &response) - - if !response.Diagnostics.HasError() && test.expectError { - t.Fatal("expected error, got no error") - } - - if response.Diagnostics.HasError() && !test.expectError { - t.Fatalf("got unexpected error: %s", response.Diagnostics) - } - - if diff := cmp.Diff(response.PlanValue, test.expectedValue); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} diff --git a/internal/framework/stdattributes.go b/internal/framework/stdattributes.go index f975e5a0541..4ec86062a8f 100644 --- a/internal/framework/stdattributes.go +++ b/internal/framework/stdattributes.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package framework import ( diff --git a/internal/framework/stringplanmodifier/README.md b/internal/framework/stringplanmodifier/README.md deleted file mode 100644 index 7f6bf3c0209..00000000000 --- a/internal/framework/stringplanmodifier/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Terraform Plugin Framework String Plan Modifiers - -This package contains Terraform Plugin Framework [string plan modifiers](https://developer.hashicorp.com/terraform/plugin/framework/resources/plan-modification). 
diff --git a/internal/framework/stringplanmodifier/default_value.go b/internal/framework/stringplanmodifier/default_value.go deleted file mode 100644 index 33771e45082..00000000000 --- a/internal/framework/stringplanmodifier/default_value.go +++ /dev/null @@ -1,43 +0,0 @@ -package stringplanmodifier - -import ( - "context" - "fmt" - - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -type defaultValue struct { - val string -} - -// DefaultValue return a string plan modifier that sets the specified value if the planned value is Null. -func DefaultValue(s string) planmodifier.String { - return defaultValue{ - val: s, - } -} - -func (m defaultValue) Description(context.Context) string { - return fmt.Sprintf("If value is not configured, defaults to %s", m.val) -} - -func (m defaultValue) MarkdownDescription(ctx context.Context) string { - return m.Description(ctx) -} - -func (m defaultValue) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) { - // If the attribute configuration is not null, we are done here - if !req.ConfigValue.IsNull() { - return - } - - // If the attribute plan is "known" and "not null", then a previous plan modifier in the sequence - // has already been applied, and we don't want to interfere. 
- if !req.PlanValue.IsUnknown() && !req.PlanValue.IsNull() { - return - } - - resp.PlanValue = types.StringValue(m.val) -} diff --git a/internal/framework/stringplanmodifier/default_value_test.go b/internal/framework/stringplanmodifier/default_value_test.go deleted file mode 100644 index 03f9a675df5..00000000000 --- a/internal/framework/stringplanmodifier/default_value_test.go +++ /dev/null @@ -1,85 +0,0 @@ -package stringplanmodifier - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" - "github.com/hashicorp/terraform-plugin-framework/types" -) - -func TestDefaultValue(t *testing.T) { - t.Parallel() - - type testCase struct { - plannedValue types.String - currentValue types.String - defaultValue string - expectedValue types.String - expectError bool - } - tests := map[string]testCase{ - "non-default non-Null string": { - plannedValue: types.StringValue("gamma"), - currentValue: types.StringValue("beta"), - defaultValue: "alpha", - expectedValue: types.StringValue("gamma"), - }, - "non-default non-Null string, current Null": { - plannedValue: types.StringValue("gamma"), - currentValue: types.StringNull(), - defaultValue: "alpha", - expectedValue: types.StringValue("gamma"), - }, - "non-default Null string, current Null": { - plannedValue: types.StringNull(), - currentValue: types.StringValue("beta"), - defaultValue: "alpha", - expectedValue: types.StringValue("alpha"), - }, - "default string": { - plannedValue: types.StringNull(), - currentValue: types.StringValue("alpha"), - defaultValue: "alpha", - expectedValue: types.StringValue("alpha"), - }, - "default string on create": { - plannedValue: types.StringNull(), - currentValue: types.StringNull(), - defaultValue: "alpha", - expectedValue: types.StringValue("alpha"), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t 
*testing.T) { - t.Parallel() - - ctx := context.Background() - request := planmodifier.StringRequest{ - Path: path.Root("test"), - PlanValue: test.plannedValue, - StateValue: test.currentValue, - } - response := planmodifier.StringResponse{ - PlanValue: request.PlanValue, - } - DefaultValue(test.defaultValue).PlanModifyString(ctx, request, &response) - - if !response.Diagnostics.HasError() && test.expectError { - t.Fatal("expected error, got no error") - } - - if response.Diagnostics.HasError() && !test.expectError { - t.Fatalf("got unexpected error: %s", response.Diagnostics) - } - - if diff := cmp.Diff(response.PlanValue, test.expectedValue); diff != "" { - t.Errorf("unexpected diff (+wanted, -got): %s", diff) - } - }) - } -} diff --git a/internal/framework/types/arn.go b/internal/framework/types/arn.go index bae74fc7c1b..5dd3cd035a5 100644 --- a/internal/framework/types/arn.go +++ b/internal/framework/types/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types import ( diff --git a/internal/framework/types/arn_test.go b/internal/framework/types/arn_test.go index f21492a1359..7fb4774a801 100644 --- a/internal/framework/types/arn_test.go +++ b/internal/framework/types/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types_test import ( diff --git a/internal/framework/types/cidr_block.go b/internal/framework/types/cidr_block.go index 4678b8f65b8..8e387d01132 100644 --- a/internal/framework/types/cidr_block.go +++ b/internal/framework/types/cidr_block.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package types import ( @@ -11,7 +14,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-framework/types/basetypes" "github.com/hashicorp/terraform-plugin-go/tftypes" - "github.com/hashicorp/terraform-provider-aws/internal/verify" + itypes "github.com/hashicorp/terraform-provider-aws/internal/types" ) type cidrBlockType uint8 @@ -54,7 +57,7 @@ func (t cidrBlockType) ValueFromTerraform(_ context.Context, in tftypes.Value) ( return nil, err } - if err := verify.ValidateCIDRBlock(s); err != nil { + if err := itypes.ValidateCIDRBlock(s); err != nil { return CIDRBlockUnknown(), nil //nolint: nilerr // Must not return validation errors } @@ -107,7 +110,7 @@ func (t cidrBlockType) Validate(ctx context.Context, in tftypes.Value, path path return diags } - if err := verify.ValidateCIDRBlock(value); err != nil { + if err := itypes.ValidateCIDRBlock(value); err != nil { diags.AddAttributeError( path, "CIDRBlock Type Validation Error", diff --git a/internal/framework/types/cidr_block_test.go b/internal/framework/types/cidr_block_test.go index bb15fa6f873..a14af011959 100644 --- a/internal/framework/types/cidr_block_test.go +++ b/internal/framework/types/cidr_block_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types_test import ( diff --git a/internal/framework/types/duration.go b/internal/framework/types/duration.go index 3c8b41d981a..b9c1ad2553f 100644 --- a/internal/framework/types/duration.go +++ b/internal/framework/types/duration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types import ( diff --git a/internal/framework/types/duration_test.go b/internal/framework/types/duration_test.go index 0381e6d29a8..c345bf8ed46 100644 --- a/internal/framework/types/duration_test.go +++ b/internal/framework/types/duration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package types_test import ( diff --git a/internal/framework/types/regexp.go b/internal/framework/types/regexp.go index 96464f1418a..7f5b4a5ce3f 100644 --- a/internal/framework/types/regexp.go +++ b/internal/framework/types/regexp.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types import ( diff --git a/internal/framework/types/regexp_test.go b/internal/framework/types/regexp_test.go index bf8677115fd..d7d93753784 100644 --- a/internal/framework/types/regexp_test.go +++ b/internal/framework/types/regexp_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types_test import ( diff --git a/internal/framework/types/timestamp_type.go b/internal/framework/types/timestamp_type.go new file mode 100644 index 00000000000..a607eef6934 --- /dev/null +++ b/internal/framework/types/timestamp_type.go @@ -0,0 +1,128 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package types + +import ( + "context" + "fmt" + "time" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/attr/xattr" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/types/basetypes" + "github.com/hashicorp/terraform-plugin-go/tftypes" +) + +type TimestampType struct { + basetypes.StringType +} + +var ( + _ basetypes.StringTypable = TimestampType{} + _ xattr.TypeWithValidate = TimestampType{} +) + +func (typ TimestampType) ValueFromString(_ context.Context, in basetypes.StringValue) (basetypes.StringValuable, diag.Diagnostics) { + if in.IsUnknown() { + return NewTimestampUnknown(), nil + } + + if in.IsNull() { + return NewTimestampNull(), nil + } + + s := in.ValueString() + + var diags diag.Diagnostics + t, err := time.Parse(time.RFC3339, s) + if err != nil { + diags.AddError( + 
"Timestamp Type Validation Error", + fmt.Sprintf("Value %q cannot be parsed as a Timestamp.", s), + ) + return nil, diags + } + + return newTimestampValue(s, t), nil +} + +func (typ TimestampType) ValueFromTerraform(_ context.Context, in tftypes.Value) (attr.Value, error) { + if !in.IsKnown() { + return NewTimestampUnknown(), nil + } + + if in.IsNull() { + return NewTimestampNull(), nil + } + + var s string + err := in.As(&s) + if err != nil { + return nil, err + } + + t, err := time.Parse(time.RFC3339, s) + if err != nil { + return NewTimestampUnknown(), nil //nolint: nilerr // Must not return validation errors + } + + return newTimestampValue(s, t), nil +} + +func (typ TimestampType) ValueType(context.Context) attr.Value { + return TimestampValue{} +} + +func (typ TimestampType) Equal(o attr.Type) bool { + other, ok := o.(TimestampType) + if !ok { + return false + } + + return typ.StringType.Equal(other.StringType) +} + +// String returns a human-friendly description of the TimestampType. +func (typ TimestampType) String() string { + return "types.TimestampType" +} + +func (typ TimestampType) Validate(ctx context.Context, in tftypes.Value, path path.Path) diag.Diagnostics { + var diags diag.Diagnostics + + if !in.IsKnown() || in.IsNull() { + return diags + } + + var s string + err := in.As(&s) + if err != nil { + diags.AddAttributeError( + path, + "Invalid Terraform Value", + "An unexpected error occurred while attempting to convert a Terraform value to a string. "+ + "This is generally an issue with the provider schema implementation. 
"+ + "Please report the following to the provider developer:\n\n"+ + "Path: "+path.String()+"\n"+ + "Error: "+err.Error(), + ) + return diags + } + + _, err = time.Parse(time.RFC3339, s) + if err != nil { + diags.AddAttributeError( + path, + "Invalid Timestamp Value", + fmt.Sprintf("Value %q cannot be parsed as an RFC 3339 Timestamp.\n\n"+ + "Path: %s\n"+ + "Error: %s", s, path, err), + ) + return diags + } + + return diags +} diff --git a/internal/framework/types/timestamp_type_test.go b/internal/framework/types/timestamp_type_test.go new file mode 100644 index 00000000000..57aaaa9f819 --- /dev/null +++ b/internal/framework/types/timestamp_type_test.go @@ -0,0 +1,138 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package types_test + +import ( + "context" + "testing" + "time" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-go/tftypes" + fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" +) + +func TestTimestampTypeValueFromTerraform(t *testing.T) { + t.Parallel() + + tests := map[string]struct { + val tftypes.Value + expected attr.Value + }{ + "null value": { + val: tftypes.NewValue(tftypes.String, nil), + expected: fwtypes.NewTimestampNull(), + }, + "unknown value": { + val: tftypes.NewValue(tftypes.String, tftypes.UnknownValue), + expected: fwtypes.NewTimestampUnknown(), + }, + "valid timestamp UTC": { + val: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34Z"), + expected: fwtypes.NewTimestampValue(time.Date(2023, time.June, 7, 15, 11, 34, 0, time.UTC)), + }, + "valid timestamp zone": { + val: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34-06:00"), + expected: fwtypes.NewTimestampValue(time.Date(2023, time.June, 7, 15, 11, 34, 0, locationFromString(t, "America/Regina"))), // No DST + }, + "invalid value": { + val: tftypes.NewValue(tftypes.String, "not ok"), + expected: 
fwtypes.NewTimestampUnknown(), + }, + "invalid no zone": { + val: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34"), + expected: fwtypes.NewTimestampUnknown(), + }, + "invalid date only": { + val: tftypes.NewValue(tftypes.String, "2023-06-07Z"), + expected: fwtypes.NewTimestampUnknown(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + ctx := context.Background() + val, err := fwtypes.TimestampType{}.ValueFromTerraform(ctx, test.val) + + if err != nil { + t.Fatalf("got unexpected error: %s", err) + } + + if !test.expected.Equal(val) { + t.Errorf("unexpected diff\nwanted: %s\ngot: %s", test.expected, val) + } + }) + } +} + +func TestTimestampTypeValidate(t *testing.T) { + t.Parallel() + + type testCase struct { + val tftypes.Value + expectError bool + } + tests := map[string]testCase{ + "not a string": { + val: tftypes.NewValue(tftypes.Bool, true), + expectError: true, + }, + "unknown string": { + val: tftypes.NewValue(tftypes.String, tftypes.UnknownValue), + }, + "null string": { + val: tftypes.NewValue(tftypes.String, nil), + }, + "valid timestamp UTC": { + val: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34Z"), + }, + "valid timestamp zone": { + val: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34-06:00"), + }, + "invalid string": { + val: tftypes.NewValue(tftypes.String, "not ok"), + expectError: true, + }, + "invalid no zone": { + val: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34"), + expectError: true, + }, + "invalid date only": { + val: tftypes.NewValue(tftypes.String, "2023-06-07Z"), + expectError: true, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + ctx := context.Background() + + diags := fwtypes.TimestampType{}.Validate(ctx, test.val, path.Root("test")) + + if !diags.HasError() && test.expectError { + t.Fatal("expected error, got no error") + } + + if diags.HasError() && 
!test.expectError { + t.Fatalf("got unexpected error: %#v", diags) + } + }) + } +} + +func locationFromString(t *testing.T, s string) *time.Location { + location, err := time.LoadLocation(s) + if err != nil { + t.Fatalf("loading time.Location %q: %s", s, err) + } + + return location +} diff --git a/internal/framework/types/timestamp_value.go b/internal/framework/types/timestamp_value.go new file mode 100644 index 00000000000..73260f6261e --- /dev/null +++ b/internal/framework/types/timestamp_value.go @@ -0,0 +1,83 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package types + +import ( + "context" + "time" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-framework/types/basetypes" +) + +func NewTimestampNull() TimestampValue { + return TimestampValue{ + StringValue: types.StringNull(), + } +} + +func NewTimestampUnknown() TimestampValue { + return TimestampValue{ + StringValue: types.StringUnknown(), + } +} + +func NewTimestampValue(t time.Time) TimestampValue { + return newTimestampValue(t.Format(time.RFC3339), t) +} + +func NewTimestampValueString(s string) (TimestampValue, error) { + t, err := time.Parse(time.RFC3339, s) + if err != nil { + return TimestampValue{}, err + } + return newTimestampValue(s, t), nil +} + +func newTimestampValue(s string, t time.Time) TimestampValue { + return TimestampValue{ + StringValue: types.StringValue(s), + value: t, + } +} + +var ( + _ basetypes.StringValuable = TimestampValue{} +) + +type TimestampValue struct { + basetypes.StringValue + + // value contains the parsed value, if not Null or Unknown. 
+ value time.Time +} + +func (val TimestampValue) Type(_ context.Context) attr.Type { + return TimestampType{} +} + +func (val TimestampValue) Equal(other attr.Value) bool { + o, ok := other.(TimestampValue) + + if !ok { + return false + } + + if val.StringValue.IsUnknown() { + return o.StringValue.IsUnknown() + } + + if val.StringValue.IsNull() { + return o.StringValue.IsNull() + } + + return val.value.Equal(o.value) +} + +// ValueTimestamp returns the known time.Time value. If Timestamp is null or unknown, returns 0. +// To get the value as a string, use ValueString. +func (val TimestampValue) ValueTimestamp() time.Time { + return val.value +} diff --git a/internal/framework/types/timestamp_value_test.go b/internal/framework/types/timestamp_value_test.go new file mode 100644 index 00000000000..8d512da5680 --- /dev/null +++ b/internal/framework/types/timestamp_value_test.go @@ -0,0 +1,348 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package types_test + +import ( + "context" + "testing" + "time" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-go/tftypes" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" +) + +func TestTimestampValueToTerraformValue(t *testing.T) { + t.Parallel() + + tests := map[string]struct { + timestamp fwtypes.TimestampValue + expected tftypes.Value + }{ + "value": { + timestamp: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: tftypes.NewValue(tftypes.String, "2023-06-07T15:11:34Z"), + }, + "null": { + timestamp: fwtypes.NewTimestampNull(), + expected: tftypes.NewValue(tftypes.String, nil), + }, + "unknown": { + timestamp: fwtypes.NewTimestampUnknown(), + expected: tftypes.NewValue(tftypes.String, tftypes.UnknownValue), + }, + } + + for name, test := range tests { + name, test := 
name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + ctx := context.Background() + + got, err := test.timestamp.ToTerraformValue(ctx) + if err != nil { + t.Fatalf("Unexpected error: %s", err) + } + + if !test.expected.Equal(got) { + t.Fatalf("expected %#v to equal %#v", got, test.expected) + } + }) + } +} + +func TestTimestampValueToStringValue(t *testing.T) { + t.Parallel() + + tests := map[string]struct { + timestamp fwtypes.TimestampValue + expected types.String + }{ + "value": { + timestamp: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: types.StringValue("2023-06-07T15:11:34Z"), + }, + "null": { + timestamp: fwtypes.NewTimestampNull(), + expected: types.StringNull(), + }, + "unknown": { + timestamp: fwtypes.NewTimestampUnknown(), + expected: types.StringUnknown(), + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + ctx := context.Background() + + s, _ := test.timestamp.ToStringValue(ctx) + + if !test.expected.Equal(s) { + t.Fatalf("expected %#v to equal %#v", s, test.expected) + } + }) + } +} + +func TestTimestampValueEqual(t *testing.T) { + t.Parallel() + + tests := map[string]struct { + input fwtypes.TimestampValue + candidate attr.Value + expected bool + }{ + "known-known-same": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + candidate: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: true, + }, + "known-known-diff": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + candidate: errs.Must(fwtypes.NewTimestampValueString("1999-06-07T15:11:34Z")), + expected: false, + }, + "known-unknown": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + candidate: fwtypes.NewTimestampUnknown(), + expected: false, + }, + "known-null": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + candidate: 
fwtypes.NewTimestampNull(), + expected: false, + }, + "unknown-known": { + input: fwtypes.NewTimestampUnknown(), + candidate: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: false, + }, + "unknown-unknown": { + input: fwtypes.NewTimestampUnknown(), + candidate: fwtypes.NewTimestampUnknown(), + expected: true, + }, + "unknown-null": { + input: fwtypes.NewTimestampUnknown(), + candidate: fwtypes.NewTimestampNull(), + expected: false, + }, + "null-known": { + input: fwtypes.NewTimestampNull(), + candidate: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: false, + }, + "null-unknown": { + input: fwtypes.NewTimestampNull(), + candidate: fwtypes.NewTimestampUnknown(), + expected: false, + }, + "null-null": { + input: fwtypes.NewTimestampNull(), + candidate: fwtypes.NewTimestampNull(), + expected: true, + }, + } + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := test.input.Equal(test.candidate) + if got != test.expected { + t.Errorf("expected %t, got %t", test.expected, got) + } + }) + } +} + +func TestTimestampValueIsNull(t *testing.T) { + t.Parallel() + + testCases := map[string]struct { + input fwtypes.TimestampValue + expected bool + }{ + "known": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: false, + }, + "null": { + input: fwtypes.NewTimestampNull(), + expected: true, + }, + "unknown": { + input: fwtypes.NewTimestampUnknown(), + expected: false, + }, + } + + for name, testCase := range testCases { + name, testCase := name, testCase + + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := testCase.input.IsNull() + + if got != testCase.expected { + t.Error("expected Null") + } + }) + } +} + +func TestTimestampValueIsUnknown(t *testing.T) { + t.Parallel() + + testCases := map[string]struct { + input fwtypes.TimestampValue + expected bool + }{ + "known": { + input: 
errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: false, + }, + "null": { + input: fwtypes.NewTimestampNull(), + expected: false, + }, + "unknown": { + input: fwtypes.NewTimestampUnknown(), + expected: true, + }, + } + + for name, testCase := range testCases { + name, testCase := name, testCase + + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := testCase.input.IsUnknown() + + if got != testCase.expected { + t.Error("expected Unknown") + } + }) + } +} + +func TestTimestampValueString(t *testing.T) { + t.Parallel() + + type testCase struct { + input fwtypes.TimestampValue + expected string + } + tests := map[string]testCase{ + "known-non-empty": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: `"2023-06-07T15:11:34Z"`, + }, + "unknown": { + input: fwtypes.NewTimestampUnknown(), + expected: "", + }, + "null": { + input: fwtypes.NewTimestampNull(), + expected: "", + }, + "zero-value": { + input: fwtypes.TimestampValue{}, + expected: ``, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := test.input.String() + if got != test.expected { + t.Errorf("Expected %q, got %q", test.expected, got) + } + }) + } +} + +func TestTimestampValueValueString(t *testing.T) { + t.Parallel() + + testCases := map[string]struct { + input fwtypes.TimestampValue + expected string + }{ + "known": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: "2023-06-07T15:11:34Z", + }, + "null": { + input: fwtypes.NewTimestampNull(), + expected: "", + }, + "unknown": { + input: fwtypes.NewTimestampUnknown(), + expected: "", + }, + } + + for name, testCase := range testCases { + name, testCase := name, testCase + + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := testCase.input.ValueString() + + if got != testCase.expected { + t.Errorf("Expected %q, got %q", testCase.expected, got) + } + }) + } 
+} + +func TestTimestampValueValueTimestamp(t *testing.T) { + t.Parallel() + + testCases := map[string]struct { + input fwtypes.TimestampValue + expected time.Time + }{ + "known": { + input: errs.Must(fwtypes.NewTimestampValueString("2023-06-07T15:11:34Z")), + expected: errs.Must(time.Parse(time.RFC3339, "2023-06-07T15:11:34Z")), + }, + "null": { + input: fwtypes.NewTimestampNull(), + expected: time.Time{}, + }, + "unknown": { + input: fwtypes.NewTimestampUnknown(), + expected: time.Time{}, + }, + } + + for name, testCase := range testCases { + name, testCase := name, testCase + + t.Run(name, func(t *testing.T) { + t.Parallel() + + got := testCase.input.ValueTimestamp() + + if !testCase.expected.Equal(got) { + t.Errorf("Expected %q, got %q", testCase.expected, got) + } + }) + } +} diff --git a/internal/framework/validators/arn.go b/internal/framework/validators/arn.go deleted file mode 100644 index 49c5bdeeb8a..00000000000 --- a/internal/framework/validators/arn.go +++ /dev/null @@ -1,38 +0,0 @@ -package validators - -import ( - "context" - - "github.com/aws/aws-sdk-go-v2/aws/arn" - "github.com/hashicorp/terraform-plugin-framework-validators/helpers/validatordiag" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" -) - -type arnValidator struct{} - -func (validator arnValidator) Description(_ context.Context) string { - return "value must be a valid Amazon Resource Name" -} - -func (validator arnValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator arnValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - if !arn.IsARN(request.ConfigValue.ValueString()) { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx), - request.ConfigValue.ValueString(), - )) - return - } -} - 
-func ARN() validator.String { - return arnValidator{} -} diff --git a/internal/framework/validators/arn_test.go b/internal/framework/validators/arn_test.go deleted file mode 100644 index d0ca67c82f2..00000000000 --- a/internal/framework/validators/arn_test.go +++ /dev/null @@ -1,63 +0,0 @@ -package validators_test - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" - "github.com/hashicorp/terraform-plugin-framework/types" - fwvalidators "github.com/hashicorp/terraform-provider-aws/internal/framework/validators" -) - -func TestARNValidator(t *testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid arn": { - val: types.StringValue("arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess"), - }, - "invalid_arn": { - val: types.StringValue("arn"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Amazon Resource Name, got: arn`, - ), - }, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - request := validator.StringRequest{ - Path: path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.ARN().ValidateString(context.Background(), request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} diff --git a/internal/framework/validators/cidr.go 
b/internal/framework/validators/cidr.go index 5b4b188aa3c..1ba2493a18b 100644 --- a/internal/framework/validators/cidr.go +++ b/internal/framework/validators/cidr.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validators import ( diff --git a/internal/framework/validators/cidr_test.go b/internal/framework/validators/cidr_test.go index 023cc0c2cad..c04c5c34f5d 100644 --- a/internal/framework/validators/cidr_test.go +++ b/internal/framework/validators/cidr_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validators_test import ( diff --git a/internal/framework/validators/cluster.go b/internal/framework/validators/cluster.go deleted file mode 100644 index 00a0cf8a1e1..00000000000 --- a/internal/framework/validators/cluster.go +++ /dev/null @@ -1,187 +0,0 @@ -package validators - -import ( - "context" - "errors" - "regexp" - - "github.com/hashicorp/terraform-plugin-framework-validators/helpers/validatordiag" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" -) - -type clusterIdentifierValidator struct{} - -func (validator clusterIdentifierValidator) Description(_ context.Context) string { - return "value must be a valid Cluster Identifier" -} - -func (validator clusterIdentifierValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator clusterIdentifierValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - if err := validateClusterIdentifier(request.ConfigValue.ValueString()); err != nil { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx)+": "+err.Error(), - request.ConfigValue.ValueString(), - )) - return - } -} - -func ClusterIdentifier() validator.String { - 
return clusterIdentifierValidator{} -} - -type clusterIdentifierPrefixValidator struct{} - -func (validator clusterIdentifierPrefixValidator) Description(_ context.Context) string { - return "value must be a valid Cluster Identifier Prefix" -} - -func (validator clusterIdentifierPrefixValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator clusterIdentifierPrefixValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - if err := validateClusterIdentifierPrefix(request.ConfigValue.ValueString()); err != nil { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx)+": "+err.Error(), - request.ConfigValue.ValueString(), - )) - return - } -} - -func ClusterIdentifierPrefix() validator.String { - return clusterIdentifierPrefixValidator{} -} - -type clusterFinalSnapshotIdentifierValidator struct{} - -func (validator clusterFinalSnapshotIdentifierValidator) Description(_ context.Context) string { - return "value must be a valid Final Snapshot Identifier" -} - -func (validator clusterFinalSnapshotIdentifierValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator clusterFinalSnapshotIdentifierValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - if err := validateFinalSnapshotIdentifier(request.ConfigValue.ValueString()); err != nil { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx)+": "+err.Error(), - request.ConfigValue.ValueString(), - )) - return - } -} - -func ClusterFinalSnapshotIdentifier() validator.String { - 
return clusterFinalSnapshotIdentifierValidator{} -} - -type evaluate struct { - regex func(string) bool - message string - isMatch bool -} - -func (e evaluate) match(value string) error { - if e.regex(value) == e.isMatch { - return errors.New(e.message) - } - - return nil -} - -var ( - firstCharacterIsLetter = evaluate{ - regex: regexp.MustCompile(`^[a-zA-Z]`).MatchString, - message: "first character must be a letter", - isMatch: false, - } - onlyAlphanumeric = evaluate{ - regex: regexp.MustCompile(`^[0-9A-Za-z-]+$`).MatchString, - message: "must contain only alphanumeric characters and hyphens", - isMatch: false, - } - onlyLowercaseAlphanumeric = evaluate{ - regex: regexp.MustCompile(`^[0-9a-z-]+$`).MatchString, - message: "must contain only lowercase alphanumeric characters and hyphens", - isMatch: false, - } - noConsecutiveHyphens = evaluate{ - regex: regexp.MustCompile(`--`).MatchString, - message: "cannot contain two consecutive hyphens", - isMatch: true, - } - cannotEndWithHyphen = evaluate{ - regex: regexp.MustCompile(`-$`).MatchString, - message: "cannot end with a hyphen", - isMatch: true, - } -) - -func validateClusterIdentifier(value string) error { - err := onlyLowercaseAlphanumeric.match(value) - if err != nil { - return err - } - err = noConsecutiveHyphens.match(value) - if err != nil { - return err - } - err = cannotEndWithHyphen.match(value) - if err != nil { - return err - } - - return firstCharacterIsLetter.match(value) -} - -func validateClusterIdentifierPrefix(value string) error { - err := onlyLowercaseAlphanumeric.match(value) - if err != nil { - return err - } - err = noConsecutiveHyphens.match(value) - if err != nil { - return err - } - - return firstCharacterIsLetter.match(value) -} - -func validateFinalSnapshotIdentifier(value string) error { - err := onlyAlphanumeric.match(value) - if err != nil { - return err - } - err = cannotEndWithHyphen.match(value) - if err != nil { - return err - } - - err = noConsecutiveHyphens.match(value) - if 
err != nil { - return err - } - - return firstCharacterIsLetter.match(value) -} diff --git a/internal/framework/validators/cluster_test.go b/internal/framework/validators/cluster_test.go deleted file mode 100644 index 13a031cb425..00000000000 --- a/internal/framework/validators/cluster_test.go +++ /dev/null @@ -1,271 +0,0 @@ -package validators_test - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" - "github.com/hashicorp/terraform-plugin-framework/types" - fwvalidators "github.com/hashicorp/terraform-provider-aws/internal/framework/validators" -) - -func TestClusterIdentifierValidator(t *testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid identifier": { - val: types.StringValue("valid-cluster-identifier"), - }, - "begins with number": { - val: types.StringValue("11-not-valid"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier: first character must be a letter, got: 11-not-valid`, - ), - }, - }, - "contains special character": { - val: types.StringValue("not-valid!-identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier: must contain only lowercase alphanumeric characters and hyphens, got: not-valid!-identifier`, - ), - }, - }, - "contains uppercase": { - val: types.StringValue("Invalid-identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - 
path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier: must contain only lowercase alphanumeric characters and hyphens, got: Invalid-identifier`, - ), - }, - }, - "contains consecutive hyphens": { - val: types.StringValue("invalid--identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier: cannot contain two consecutive hyphens, got: invalid--identifier`, - ), - }, - }, - "ends with hyphen": { - val: types.StringValue("invalid-identifier-"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier: cannot end with a hyphen, got: invalid-identifier-`, - ), - }, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - ctx := context.Background() - - request := validator.StringRequest{ - Path: path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.ClusterIdentifier().ValidateString(ctx, request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} - -func TestClusterIdentifierPrefixValidator(t *testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid identifier": { - val: types.StringValue("valid-cluster-identifier"), - }, - "begins with number": { - val: types.StringValue("11-not-valid"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - 
path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier Prefix: first character must be a letter, got: 11-not-valid`, - ), - }, - }, - "contains special character": { - val: types.StringValue("not-valid!-identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier Prefix: must contain only lowercase alphanumeric characters and hyphens, got: not-valid!-identifier`, - ), - }, - }, - "contains uppercase": { - val: types.StringValue("InValid-identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier Prefix: must contain only lowercase alphanumeric characters and hyphens, got: InValid-identifier`, - ), - }, - }, - "contains consecutive hyphens": { - val: types.StringValue("invalid--identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Cluster Identifier Prefix: cannot contain two consecutive hyphens, got: invalid--identifier`, - ), - }, - }, - "ends with hyphen": { - val: types.StringValue("valid-identifier-"), - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - request := validator.StringRequest{ - Path: path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.ClusterIdentifierPrefix().ValidateString(context.Background(), request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} - -func TestClusterFinalSnapshotIdentifierValidator(t 
*testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid identifier": { - val: types.StringValue("valid-cluster-identifier"), - }, - "begins with number": { - val: types.StringValue("11-not-valid"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Final Snapshot Identifier: first character must be a letter, got: 11-not-valid`, - ), - }, - }, - "contains special character": { - val: types.StringValue("not-valid!-identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Final Snapshot Identifier: must contain only alphanumeric characters and hyphens, got: not-valid!-identifier`, - ), - }, - }, - "contains uppercase": { - val: types.StringValue("Valid-identifier"), - }, - "contains consecutive hyphens": { - val: types.StringValue("invalid--identifier"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Final Snapshot Identifier: cannot contain two consecutive hyphens, got: invalid--identifier`, - ), - }, - }, - "ends with hyphens": { - val: types.StringValue("invalid-identifier-"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid Final Snapshot Identifier: cannot end with a hyphen, got: invalid-identifier-`, - ), - }, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - request := validator.StringRequest{ - Path: 
path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.ClusterFinalSnapshotIdentifier().ValidateString(context.Background(), request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} diff --git a/internal/framework/validators/ip.go b/internal/framework/validators/ip.go index 138bc2b2b27..7cebf5ad562 100644 --- a/internal/framework/validators/ip.go +++ b/internal/framework/validators/ip.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validators import ( diff --git a/internal/framework/validators/ip_test.go b/internal/framework/validators/ip_test.go index 573b388c252..a724ef7be23 100644 --- a/internal/framework/validators/ip_test.go +++ b/internal/framework/validators/ip_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package validators_test import ( diff --git a/internal/framework/validators/timestamp.go b/internal/framework/validators/timestamp.go deleted file mode 100644 index 7b3be4ddeac..00000000000 --- a/internal/framework/validators/timestamp.go +++ /dev/null @@ -1,99 +0,0 @@ -package validators - -import ( - "context" - - "github.com/hashicorp/terraform-plugin-framework-validators/helpers/validatordiag" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" - "github.com/hashicorp/terraform-provider-aws/internal/types/timestamp" -) - -type utcTimestampValidator struct{} - -func (validator utcTimestampValidator) Description(_ context.Context) string { - return "value must be a valid UTC Timestamp" -} - -func (validator utcTimestampValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator utcTimestampValidator) ValidateString(ctx context.Context, request validator.StringRequest, 
response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - t := timestamp.New(request.ConfigValue.ValueString()) - if err := t.ValidateUTCFormat(); err != nil { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx), - request.ConfigValue.ValueString(), - )) - return - } -} - -func UTCTimestamp() validator.String { - return utcTimestampValidator{} -} - -type onceADayWindowFormatValidator struct{} - -func (validator onceADayWindowFormatValidator) Description(_ context.Context) string { - return `value must satisfy the format of "hh24:mi-hh24:mi"` -} - -func (validator onceADayWindowFormatValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator onceADayWindowFormatValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - t := timestamp.New(request.ConfigValue.ValueString()) - if err := t.ValidateOnceADayWindowFormat(); err != nil { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx), - request.ConfigValue.ValueString(), - )) - return - } -} - -func OnceADayWindowFormat() validator.String { - return onceADayWindowFormatValidator{} -} - -type onceAWeekWindowFormatValidator struct{} - -func (validator onceAWeekWindowFormatValidator) Description(_ context.Context) string { - return `value must satisfy the format of "ddd:hh24:mi-ddd:hh24:mi"` -} - -func (validator onceAWeekWindowFormatValidator) MarkdownDescription(ctx context.Context) string { - return validator.Description(ctx) -} - -func (validator onceAWeekWindowFormatValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if 
request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { - return - } - - t := timestamp.New(request.ConfigValue.ValueString()) - if err := t.ValidateOnceAWeekWindowFormat(); err != nil { - response.Diagnostics.Append(validatordiag.InvalidAttributeValueDiagnostic( - request.Path, - validator.Description(ctx), - request.ConfigValue.ValueString(), - )) - return - } -} - -func OnceAWeekWindowFormat() validator.String { - return onceAWeekWindowFormatValidator{} -} diff --git a/internal/framework/validators/timestamp_test.go b/internal/framework/validators/timestamp_test.go deleted file mode 100644 index ac92c05032b..00000000000 --- a/internal/framework/validators/timestamp_test.go +++ /dev/null @@ -1,165 +0,0 @@ -package validators_test - -import ( - "context" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform-plugin-framework/diag" - "github.com/hashicorp/terraform-plugin-framework/path" - "github.com/hashicorp/terraform-plugin-framework/schema/validator" - "github.com/hashicorp/terraform-plugin-framework/types" - fwvalidators "github.com/hashicorp/terraform-provider-aws/internal/framework/validators" -) - -func TestUTCTimestampValidator(t *testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid timestamp": { - val: types.StringValue("2023-02-28T14:04:05Z"), - }, - "invalid timestamp": { - val: types.StringValue("02 Jan 06 15:04 -0700"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must be a valid UTC Timestamp, got: 02 Jan 06 15:04 -0700`, - ), - }, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - ctx := context.Background() - - request := 
validator.StringRequest{ - Path: path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.UTCTimestamp().ValidateString(ctx, request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} - -func TestOnceADayWindowFormatValidator(t *testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid format": { - val: types.StringValue("04:00-05:00"), - }, - "invalid format": { - val: types.StringValue("24:00-25:00"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must satisfy the format of "hh24:mi-hh24:mi", got: 24:00-25:00`, - ), - }, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - request := validator.StringRequest{ - Path: path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.OnceADayWindowFormat().ValidateString(context.Background(), request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} - -func TestOnceAWeekWindowFormatValidator(t *testing.T) { - t.Parallel() - - type testCase struct { - val types.String - expectedDiagnostics diag.Diagnostics - } - - tests := map[string]testCase{ - "unknown String": { - val: types.StringUnknown(), - }, - "null String": { - val: types.StringNull(), - }, - "valid format": { - val: types.StringValue("sun:04:00-sun:05:00"), - }, - "invalid 
format": { - val: types.StringValue("sun:04:00-sun:04:60"), - expectedDiagnostics: diag.Diagnostics{ - diag.NewAttributeErrorDiagnostic( - path.Root("test"), - "Invalid Attribute Value", - `Attribute test value must satisfy the format of "ddd:hh24:mi-ddd:hh24:mi", got: sun:04:00-sun:04:60`, - ), - }, - }, - } - - for name, test := range tests { - name, test := name, test - t.Run(name, func(t *testing.T) { - t.Parallel() - - request := validator.StringRequest{ - Path: path.Root("test"), - PathExpression: path.MatchRoot("test"), - ConfigValue: test.val, - } - response := validator.StringResponse{} - fwvalidators.OnceAWeekWindowFormat().ValidateString(context.Background(), request, &response) - - if diff := cmp.Diff(response.Diagnostics, test.expectedDiagnostics); diff != "" { - t.Errorf("unexpected diagnostics difference: %s", diff) - } - }) - } -} diff --git a/internal/generate/allowsubcats/generate.go b/internal/generate/allowsubcats/generate.go index 8685829d7a6..c366a262d88 100644 --- a/internal/generate/allowsubcats/generate.go +++ b/internal/generate/allowsubcats/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/allowsubcats/main.go b/internal/generate/allowsubcats/main.go index cdae5bb1d81..9b59653598d 100644 --- a/internal/generate/allowsubcats/main.go +++ b/internal/generate/allowsubcats/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/awsclient/file.tmpl b/internal/generate/awsclient/file.tmpl index c6c270ac557..b93c0f591b5 100644 --- a/internal/generate/awsclient/file.tmpl +++ b/internal/generate/awsclient/file.tmpl @@ -2,65 +2,30 @@ package conns import ( - "net/http" + "context" {{ range .Services }} - {{- if eq .SDKVersion "1" }} - "github.com/aws/aws-sdk-go/service/{{ .GoV1Package }}" - {{- else if eq .SDKVersion "2" }} - "github.com/aws/aws-sdk-go-v2/service/{{ .GoV2Package }}" - {{- else if eq .SDKVersion "1,2" }} - "github.com/aws/aws-sdk-go/service/{{ .GoV1Package }}" - {{ .GoV2PackageOverride }} "github.com/aws/aws-sdk-go-v2/service/{{ .GoV2Package }}" + {{- if eq .SDKVersion "1" "1,2" }} + {{ .GoV1Package }}_sdkv1 "github.com/aws/aws-sdk-go/service/{{ .GoV1Package }}" + {{- end }} + {{- if eq .SDKVersion "2" "1,2" }} + {{ .GoV2Package }}_sdkv2 "github.com/aws/aws-sdk-go-v2/service/{{ .GoV2Package }}" {{- end }} {{- end }} - "github.com/aws/aws-sdk-go/aws/session" - tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + "github.com/hashicorp/terraform-provider-aws/names" ) -type AWSClient struct { - AccountID string - DefaultTagsConfig *tftags.DefaultConfig - DNSSuffix string - IgnoreTagsConfig *tftags.IgnoreConfig - MediaConvertAccountConn *mediaconvert.MediaConvert - Partition string - Region string - ReverseDNSPrefix string - ServicePackages map[string]ServicePackage - Session *session.Session - TerraformVersion string - - httpClient *http.Client - {{ range .Services }} - {{- if ne .SDKVersion "1,2" }}{{continue}}{{- end }} - {{ .ProviderPackage }}Client lazyClient[*{{ .GoV2PackageOverride }}.{{ .ClientTypeName }}] -{{- end }} - -{{ range .Services }} - {{- if eq .SDKVersion "1" }} - {{ .ProviderPackage }}Conn *{{ .GoV1Package }}.{{ .ClientTypeName }} - {{- else if eq .SDKVersion "2" }} - {{ 
.ProviderPackage }}Client *{{ .GoV2Package }}.{{ .ClientTypeName }} - {{- end }} -{{- end }} - - s3ConnURICleaningDisabled *s3.S3 + {{if eq .SDKVersion "1" "1,2" }} +func (c *AWSClient) {{ .ProviderNameUpper }}Conn(ctx context.Context) *{{ .GoV1Package }}_sdkv1.{{ .GoV1ClientTypeName }} { + return errs.Must(conn[*{{ .GoV1Package }}_sdkv1.{{ .GoV1ClientTypeName }}](ctx, c, names.{{ .ProviderNameUpper }})) } + {{- end }} -{{ range .Services }} - {{- if eq .SDKVersion "1" }} -func (client *AWSClient) {{ .ProviderNameUpper }}Conn() *{{ .GoV1Package }}.{{ .ClientTypeName }} { - return client.{{ .ProviderPackage }}Conn -} - {{- else if eq .SDKVersion "2" }} -func (client *AWSClient) {{ .ProviderNameUpper }}Client() *{{ .GoV2Package }}.{{ .ClientTypeName }} { - return client.{{ .ProviderPackage }}Client -} - {{- else if eq .SDKVersion "1,2" }} -func (client *AWSClient) {{ .ProviderNameUpper }}Client() *{{ .GoV2PackageOverride }}.{{ .ClientTypeName }} { - return client.{{ .ProviderPackage }}Client.Client() + {{if eq .SDKVersion "2" "1,2" }} +func (c *AWSClient) {{ .ProviderNameUpper }}Client(ctx context.Context) *{{ .GoV2Package }}_sdkv2.Client { + return errs.Must(client[*{{ .GoV2Package }}_sdkv2.Client](ctx, c, names.{{ .ProviderNameUpper }})) } {{- end }} {{ end }} diff --git a/internal/generate/awsclient/main.go b/internal/generate/awsclient/main.go index 491d17992b8..74e6df5b9ea 100644 --- a/internal/generate/awsclient/main.go +++ b/internal/generate/awsclient/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate @@ -5,7 +8,6 @@ package main import ( _ "embed" - "fmt" "sort" "github.com/hashicorp/terraform-provider-aws/internal/generate/common" @@ -13,13 +15,11 @@ import ( ) type ServiceDatum struct { - SDKVersion string - GoV1Package string - GoV2Package string - GoV2PackageOverride string - ProviderNameUpper string - ClientTypeName string - ProviderPackage string + SDKVersion string + GoV1Package string + GoV1ClientTypeName string + GoV2Package string + ProviderNameUpper string } type TemplateData struct { @@ -56,32 +56,25 @@ func main() { continue } + s := ServiceDatum{ + ProviderNameUpper: l[names.ColProviderNameUpper], + GoV1Package: l[names.ColGoV1Package], + GoV2Package: l[names.ColGoV2Package], + } + if l[names.ColClientSDKV1] != "" { - td.Services = append(td.Services, ServiceDatum{ - ProviderNameUpper: l[names.ColProviderNameUpper], - SDKVersion: "1", - GoV1Package: l[names.ColGoV1Package], - GoV2Package: l[names.ColGoV2Package], - ClientTypeName: l[names.ColGoV1ClientTypeName], - ProviderPackage: l[names.ColProviderPackageCorrect], - }) + s.SDKVersion = "1" + s.GoV1ClientTypeName = l[names.ColGoV1ClientTypeName] } if l[names.ColClientSDKV2] != "" { - sd := ServiceDatum{ - ProviderNameUpper: l[names.ColProviderNameUpper], - SDKVersion: "2", - GoV1Package: l[names.ColGoV1Package], - GoV2Package: l[names.ColGoV2Package], - ClientTypeName: "Client", - ProviderPackage: l[names.ColProviderPackageCorrect], - } if l[names.ColClientSDKV1] != "" { - // Use `sdkv2` instead of `v2` to prevent collisions with e.g., `elbv2`. 
- sd.GoV2PackageOverride = fmt.Sprintf("%s_sdkv2", l[names.ColGoV2Package]) - sd.SDKVersion = "1,2" + s.SDKVersion = "1,2" + } else { + s.SDKVersion = "2" } - td.Services = append(td.Services, sd) } + + td.Services = append(td.Services, s) } sort.SliceStable(td.Services, func(i, j int) bool { diff --git a/internal/generate/checknames/generate.go b/internal/generate/checknames/generate.go index cd1bb152773..d43e761010e 100644 --- a/internal/generate/checknames/generate.go +++ b/internal/generate/checknames/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/checknames/main.go b/internal/generate/checknames/main.go index d9ea6f472bb..a069eeff741 100644 --- a/internal/generate/checknames/main.go +++ b/internal/generate/checknames/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/clientconfig/file.tmpl b/internal/generate/clientconfig/file.tmpl deleted file mode 100644 index 10ff80b7988..00000000000 --- a/internal/generate/clientconfig/file.tmpl +++ /dev/null @@ -1,56 +0,0 @@ -// Code generated by internal/generate/clientconfig/main.go; DO NOT EDIT. 
-package conns - -import ( -{{ range .Services }} - {{- if eq .SDKVersion "1" }} - "github.com/aws/aws-sdk-go/service/{{ .GoV1Package }}" - {{- else if eq .SDKVersion "2" }} - "github.com/aws/aws-sdk-go-v2/service/{{ .GoV2Package }}" - {{- else if eq .SDKVersion "1,2" }} - "github.com/aws/aws-sdk-go/service/{{ .GoV1Package }}" - {{ .GoV2PackageOverride }} "github.com/aws/aws-sdk-go-v2/service/{{ .GoV2Package }}" - {{- end }} -{{- end }} - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/session" - aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" - "github.com/hashicorp/terraform-provider-aws/names" -) - -// sdkv1Conns initializes AWS SDK for Go v1 clients. -func (c *Config) sdkv1Conns(client *AWSClient, sess *session.Session) { -{{- range .Services }} - {{- if eq .SDKVersion "1" }} - client.{{ .ProviderPackage }}Conn = {{ .GoV1Package }}.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.{{ .ProviderNameUpper }}])})) - {{- end }} -{{- end }} -} - -// sdkv2Conns initializes AWS SDK for Go v2 clients. -func (c *Config) sdkv2Conns(client *AWSClient, cfg aws_sdkv2.Config) { -{{- range .Services }} - {{- if eq .SDKVersion "2" }} - client.{{ .ProviderPackage }}Client = {{ .GoV2Package }}.NewFromConfig(cfg, func(o *{{ .GoV2Package }}.Options) { - if endpoint := c.Endpoints[names.{{ .ProviderNameUpper }}]; endpoint != "" { - o.EndpointResolver = {{ .GoV2Package }}.EndpointResolverFromURL(endpoint) - } - }) - {{- end }} -{{- end }} -} - -// sdkv2LazyConns initializes AWS SDK for Go v2 lazy-load clients. 
-func (c *Config) sdkv2LazyConns(client *AWSClient, cfg aws_sdkv2.Config) { -{{- range .Services }} - {{- if eq .SDKVersion "1,2" }} - client.{{ .ProviderPackage }}Client.init(&cfg, func() *{{ .GoV2PackageOverride }}.{{ .ClientTypeName }} { - return {{ .GoV2PackageOverride }}.NewFromConfig(cfg, func(o *{{ .GoV2PackageOverride }}.Options) { - if endpoint := c.Endpoints[names.{{ .ProviderNameUpper }}]; endpoint != "" { - o.EndpointResolver = {{ .GoV2PackageOverride }}.EndpointResolverFromURL(endpoint) - } - }) - }) - {{- end }} -{{- end }} -} \ No newline at end of file diff --git a/internal/generate/clientconfig/main.go b/internal/generate/clientconfig/main.go deleted file mode 100644 index e623943fe23..00000000000 --- a/internal/generate/clientconfig/main.go +++ /dev/null @@ -1,103 +0,0 @@ -//go:build generate -// +build generate - -package main - -import ( - _ "embed" - "fmt" - "sort" - - "github.com/hashicorp/terraform-provider-aws/internal/generate/common" - "github.com/hashicorp/terraform-provider-aws/names" -) - -type ServiceDatum struct { - SDKVersion string - GoV1Package string - GoV2Package string - GoV2PackageOverride string - ProviderNameUpper string - ClientTypeName string - ProviderPackage string -} - -type TemplateData struct { - Services []ServiceDatum -} - -func main() { - const ( - filename = `config_gen.go` - namesDataFile = "../../names/names_data.csv" - ) - g := common.NewGenerator() - - g.Infof("Generating internal/conns/%s", filename) - - data, err := common.ReadAllCSVData(namesDataFile) - - if err != nil { - g.Fatalf("error reading %s: %s", namesDataFile, err) - } - - td := TemplateData{} - - for i, l := range data { - if i < 1 { // no header - continue - } - - if l[names.ColExclude] != "" || l[names.ColSkipClientGenerate] != "" { - continue - } - - if l[names.ColProviderPackageActual] == "" && l[names.ColProviderPackageCorrect] == "" { - continue - } - - if l[names.ColClientSDKV1] != "" { - td.Services = append(td.Services, ServiceDatum{ - 
ProviderNameUpper: l[names.ColProviderNameUpper], - SDKVersion: "1", - GoV1Package: l[names.ColGoV1Package], - GoV2Package: l[names.ColGoV2Package], - ClientTypeName: l[names.ColGoV1ClientTypeName], - ProviderPackage: l[names.ColProviderPackageCorrect], - }) - } - if l[names.ColClientSDKV2] != "" { - sd := ServiceDatum{ - ProviderNameUpper: l[names.ColProviderNameUpper], - SDKVersion: "2", - GoV1Package: l[names.ColGoV1Package], - GoV2Package: l[names.ColGoV2Package], - ClientTypeName: "Client", - ProviderPackage: l[names.ColProviderPackageCorrect], - } - if l[names.ColClientSDKV1] != "" { - // Use `sdkv2` instead of `v2` to prevent collisions with e.g., `elbv2`. - sd.GoV2PackageOverride = fmt.Sprintf("%s_sdkv2", l[names.ColGoV2Package]) - sd.SDKVersion = "1,2" - } - td.Services = append(td.Services, sd) - } - } - - sort.SliceStable(td.Services, func(i, j int) bool { - return td.Services[i].ProviderNameUpper < td.Services[j].ProviderNameUpper - }) - - d := g.NewGoFileDestination(filename) - - if err := d.WriteTemplate("config", tmpl, td); err != nil { - g.Fatalf("generating file (%s): %s", filename, err) - } - - if err := d.Write(); err != nil { - g.Fatalf("generating file (%s): %s", filename, err) - } -} - -//go:embed file.tmpl -var tmpl string diff --git a/internal/generate/common/args.go b/internal/generate/common/args.go index dfc2f502b2e..72ea5434af2 100644 --- a/internal/generate/common/args.go +++ b/internal/generate/common/args.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package common import ( diff --git a/internal/generate/common/args_test.go b/internal/generate/common/args_test.go index 0f089575d9f..f0fa5bc4360 100644 --- a/internal/generate/common/args_test.go +++ b/internal/generate/common/args_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package common import ( diff --git a/internal/generate/common/csvdata.go b/internal/generate/common/csvdata.go index a71208c9ba5..2eef1b6b7e8 100644 --- a/internal/generate/common/csvdata.go +++ b/internal/generate/common/csvdata.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package common import ( diff --git a/internal/generate/common/generator.go b/internal/generate/common/generator.go index 7002d62794d..2704cef7fac 100644 --- a/internal/generate/common/generator.go +++ b/internal/generate/common/generator.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package common import ( @@ -99,7 +102,10 @@ func (d *fileDestination) WriteTemplate(templateName, templateBody string, templ } func parseTemplate(templateName, templateBody string, templateData any) ([]byte, error) { - tmpl, err := template.New(templateName).Parse(templateBody) + funcMap := template.FuncMap{ + "Title": strings.Title, + } + tmpl, err := template.New(templateName).Funcs(funcMap).Parse(templateBody) if err != nil { return nil, fmt.Errorf("parsing function template: %w", err) diff --git a/internal/generate/customends/generate.go b/internal/generate/customends/generate.go index fd7cfbaacdf..7ffaa3ecf92 100644 --- a/internal/generate/customends/generate.go +++ b/internal/generate/customends/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/customends/main.go b/internal/generate/customends/main.go index b24edb03ba5..6e523e0d4d9 100644 --- a/internal/generate/customends/main.go +++ b/internal/generate/customends/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/issuelabels/generate.go b/internal/generate/issuelabels/generate.go index 45dfaa1466c..a280a1825d4 100644 --- a/internal/generate/issuelabels/generate.go +++ b/internal/generate/issuelabels/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/issuelabels/main.go b/internal/generate/issuelabels/main.go index 33e1950ac56..50b1f7ef258 100644 --- a/internal/generate/issuelabels/main.go +++ b/internal/generate/issuelabels/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/listpages/main.go b/internal/generate/listpages/main.go index bd4c9d314ce..e7f5d49ce35 100644 --- a/internal/generate/listpages/main.go +++ b/internal/generate/listpages/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/namescapslist/generate.go b/internal/generate/namescapslist/generate.go index ab2d50fb3cd..9159f740495 100644 --- a/internal/generate/namescapslist/generate.go +++ b/internal/generate/namescapslist/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/namescapslist/main.go b/internal/generate/namescapslist/main.go index dc740b2a2b8..780eba34e97 100644 --- a/internal/generate/namescapslist/main.go +++ b/internal/generate/namescapslist/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/namesconsts/main.go b/internal/generate/namesconsts/main.go index f04075dd844..6cb2bce44e2 100644 --- a/internal/generate/namesconsts/main.go +++ b/internal/generate/namesconsts/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/namevaluesfilters/ec2_filters.go b/internal/generate/namevaluesfilters/ec2_filters.go index 290e2cbdbd1..5cbccf1bee0 100644 --- a/internal/generate/namevaluesfilters/ec2_filters.go +++ b/internal/generate/namevaluesfilters/ec2_filters.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build !generate // +build !generate diff --git a/internal/generate/namevaluesfilters/ec2_filters_test.go b/internal/generate/namevaluesfilters/ec2_filters_test.go index b8e8743ec5b..c272358b0c6 100644 --- a/internal/generate/namevaluesfilters/ec2_filters_test.go +++ b/internal/generate/namevaluesfilters/ec2_filters_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package namevaluesfilters_test import ( diff --git a/internal/generate/namevaluesfilters/generators/servicefilters/main.go b/internal/generate/namevaluesfilters/generators/servicefilters/main.go index 467d1ec6da3..6334e0421f0 100644 --- a/internal/generate/namevaluesfilters/generators/servicefilters/main.go +++ b/internal/generate/namevaluesfilters/generators/servicefilters/main.go @@ -93,7 +93,7 @@ var templateBody = ` package namevaluesfilters -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "github.com/aws/aws-sdk-go/aws" {{- range .SliceServiceNames }} {{- if eq . (. 
| FilterPackage) }} diff --git a/internal/generate/namevaluesfilters/name_values_filters.go b/internal/generate/namevaluesfilters/name_values_filters.go index d28b38dc0e2..76abcb4715e 100644 --- a/internal/generate/namevaluesfilters/name_values_filters.go +++ b/internal/generate/namevaluesfilters/name_values_filters.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package namevaluesfilters import ( diff --git a/internal/generate/namevaluesfilters/name_values_filters_test.go b/internal/generate/namevaluesfilters/name_values_filters_test.go index d5b300578bc..07a377e852b 100644 --- a/internal/generate/namevaluesfilters/name_values_filters_test.go +++ b/internal/generate/namevaluesfilters/name_values_filters_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package namevaluesfilters_test import ( diff --git a/internal/generate/namevaluesfilters/service_filters_gen.go b/internal/generate/namevaluesfilters/service_filters_gen.go index e1ed32215c2..ef29039b458 100644 --- a/internal/generate/namevaluesfilters/service_filters_gen.go +++ b/internal/generate/namevaluesfilters/service_filters_gen.go @@ -2,7 +2,7 @@ package namevaluesfilters -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/autoscaling" "github.com/aws/aws-sdk-go/service/databasemigrationservice" diff --git a/internal/generate/namevaluesfilters/service_generation_customizations.go b/internal/generate/namevaluesfilters/service_generation_customizations.go index afab63d9847..0f57d9385e2 100644 --- a/internal/generate/namevaluesfilters/service_generation_customizations.go +++ b/internal/generate/namevaluesfilters/service_generation_customizations.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + // This file contains code generation customizations for each AWS Go SDK service. package namevaluesfilters diff --git a/internal/generate/prlabels/generate.go b/internal/generate/prlabels/generate.go index a31bb6ee17e..56cd7823b0a 100644 --- a/internal/generate/prlabels/generate.go +++ b/internal/generate/prlabels/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/prlabels/main.go b/internal/generate/prlabels/main.go index b3a9a505049..6f3fd55f3a7 100644 --- a/internal/generate/prlabels/main.go +++ b/internal/generate/prlabels/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/servicelabels/generate.go b/internal/generate/servicelabels/generate.go index cd212f3dd91..09e6ab833d4 100644 --- a/internal/generate/servicelabels/generate.go +++ b/internal/generate/servicelabels/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/servicelabels/main.go b/internal/generate/servicelabels/main.go index 78b909caf66..981f75d475f 100644 --- a/internal/generate/servicelabels/main.go +++ b/internal/generate/servicelabels/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/servicepackages/spd.tmpl b/internal/generate/servicepackage/file.tmpl similarity index 65% rename from internal/generate/servicepackages/spd.tmpl rename to internal/generate/servicepackage/file.tmpl index f0d9c76e4b9..948264e32e5 100644 --- a/internal/generate/servicepackages/spd.tmpl +++ b/internal/generate/servicepackage/file.tmpl @@ -5,6 +5,18 @@ package {{ .ProviderPackage }} import ( "context" +{{if not .SkipClientGenerate }} + {{- if eq .SDKVersion "1" "1,2" }} + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + {{ .GoV1Package }}_sdkv1 "github.com/aws/aws-sdk-go/service/{{ .GoV1Package }}" + {{- end }} + {{- if eq .SDKVersion "2" "1,2" }} + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + {{ .GoV2Package }}_sdkv2 "github.com/aws/aws-sdk-go-v2/service/{{ .GoV2Package }}" + {{- end }} +{{- end }} + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" {{- if ne .ProviderPackage "meta" }} "github.com/hashicorp/terraform-provider-aws/names" @@ -115,4 +127,30 @@ func (p *servicePackage) ServicePackageName() string { {{- end }} } -var ServicePackage = &servicePackage{} +{{- if not .SkipClientGenerate }} + {{if eq .SDKVersion "1" "1,2" }} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*{{ .GoV1Package }}_sdkv1.{{ .GoV1ClientTypeName }}, error) { + sess := config["session"].(*session_sdkv1.Session) + + return {{ .GoV1Package }}_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + {{- end }} + + {{if eq .SDKVersion "2" "1,2" }} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*{{ .GoV2Package }}_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return {{ .GoV2Package }}_sdkv2.NewFromConfig(cfg, func(o *{{ .GoV2Package }}_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = {{ .GoV2Package }}_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + {{- end }} +{{- end }} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/generate/servicepackage/main.go b/internal/generate/servicepackage/main.go new file mode 100644 index 00000000000..a0bf3e034a0 --- /dev/null +++ b/internal/generate/servicepackage/main.go @@ -0,0 +1,303 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:build generate +// +build generate + +package main + +import ( + _ "embed" + "fmt" + "go/ast" + "go/parser" + "go/token" + "os" + "regexp" + "sort" + "strings" + + multierror "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-provider-aws/internal/generate/common" + "github.com/hashicorp/terraform-provider-aws/names" + "golang.org/x/exp/slices" +) + +func main() { + const ( + filename = `service_package_gen.go` + namesDataFile = `../../../names/names_data.csv` + ) + g := common.NewGenerator() + + data, err := common.ReadAllCSVData(namesDataFile) + + if err != nil { + g.Fatalf("error reading %s: %s", namesDataFile, err) + } + + servicePackage := os.Getenv("GOPACKAGE") + + g.Infof("Generating internal/service/%s/%s", servicePackage, filename) + + for i, l := range data { + if i < 1 { // no header + continue + } + + if l[names.ColProviderPackageActual] == "" && l[names.ColProviderPackageCorrect] == "" { + continue + } + + // See internal/generate/namesconsts/main.go. 
+ p := l[names.ColProviderPackageCorrect] + + if l[names.ColProviderPackageActual] != "" { + p = l[names.ColProviderPackageActual] + } + + if p != servicePackage { + continue + } + + // Look for Terraform Plugin Framework and SDK resource and data source annotations. + // These annotations are implemented as comments on factory functions. + v := &visitor{ + g: g, + + frameworkDataSources: make([]ResourceDatum, 0), + frameworkResources: make([]ResourceDatum, 0), + sdkDataSources: make(map[string]ResourceDatum), + sdkResources: make(map[string]ResourceDatum), + } + + v.processDir(".") + + if err := v.err.ErrorOrNil(); err != nil { + g.Fatalf("%s", err.Error()) + } + + s := ServiceDatum{ + SkipClientGenerate: l[names.ColSkipClientGenerate] != "", + GoV1Package: l[names.ColGoV1Package], + GoV2Package: l[names.ColGoV2Package], + ProviderPackage: p, + ProviderNameUpper: l[names.ColProviderNameUpper], + FrameworkDataSources: v.frameworkDataSources, + FrameworkResources: v.frameworkResources, + SDKDataSources: v.sdkDataSources, + SDKResources: v.sdkResources, + } + + if l[names.ColClientSDKV1] != "" { + s.SDKVersion = "1" + s.GoV1ClientTypeName = l[names.ColGoV1ClientTypeName] + } + if l[names.ColClientSDKV2] != "" { + if l[names.ColClientSDKV1] != "" { + s.SDKVersion = "1,2" + } else { + s.SDKVersion = "2" + } + } + + sort.SliceStable(s.FrameworkDataSources, func(i, j int) bool { + return s.FrameworkDataSources[i].FactoryName < s.FrameworkDataSources[j].FactoryName + }) + sort.SliceStable(s.FrameworkResources, func(i, j int) bool { + return s.FrameworkResources[i].FactoryName < s.FrameworkResources[j].FactoryName + }) + + d := g.NewGoFileDestination(filename) + + if err := d.WriteTemplate("servicepackagedata", tmpl, s); err != nil { + g.Fatalf("error generating %s service package data: %s", p, err) + } + + if err := d.Write(); err != nil { + g.Fatalf("generating file (%s): %s", filename, err) + } + + break + } +} + +type ResourceDatum struct { + FactoryName string + Name 
string // Friendly name (without service name), e.g. "Topic", not "SNS Topic" + TransparentTagging bool + TagsIdentifierAttribute string + TagsResourceType string +} + +type ServiceDatum struct { + SkipClientGenerate bool + SDKVersion string // AWS SDK for Go version ("1", "2" or "1,2") + GoV1Package string // AWS SDK for Go v1 package name + GoV1ClientTypeName string // AWS SDK for Go v1 client type name + GoV2Package string // AWS SDK for Go v2 package name + ProviderPackage string + ProviderNameUpper string + FrameworkDataSources []ResourceDatum + FrameworkResources []ResourceDatum + SDKDataSources map[string]ResourceDatum + SDKResources map[string]ResourceDatum +} + +//go:embed file.tmpl +var tmpl string + +// Annotation processing. +var ( + annotation = regexp.MustCompile(`^//\s*@([a-zA-Z0-9]+)(\(([^)]*)\))?\s*$`) +) + +type visitor struct { + err *multierror.Error + g *common.Generator + + fileName string + functionName string + packageName string + + frameworkDataSources []ResourceDatum + frameworkResources []ResourceDatum + sdkDataSources map[string]ResourceDatum + sdkResources map[string]ResourceDatum +} + +// processDir scans a single service package directory and processes contained Go source files. +func (v *visitor) processDir(path string) { + fileSet := token.NewFileSet() + packageMap, err := parser.ParseDir(fileSet, path, func(fi os.FileInfo) bool { + // Skip tests. + return !strings.HasSuffix(fi.Name(), "_test.go") + }, parser.ParseComments) + + if err != nil { + v.err = multierror.Append(v.err, fmt.Errorf("parsing (%s): %w", path, err)) + + return + } + + for name, pkg := range packageMap { + v.packageName = name + + for name, file := range pkg.Files { + v.fileName = name + + v.processFile(file) + + v.fileName = "" + } + + v.packageName = "" + } +} + +// processFile processes a single Go source file. +func (v *visitor) processFile(file *ast.File) { + ast.Walk(v, file) +} + +// processFuncDecl processes a single Go function.
+// The function's comments are scanned for annotations indicating a Plugin Framework or SDK resource or data source. +func (v *visitor) processFuncDecl(funcDecl *ast.FuncDecl) { + v.functionName = funcDecl.Name.Name + + // Look first for tagging annotations. + d := ResourceDatum{} + + for _, line := range funcDecl.Doc.List { + line := line.Text + + if m := annotation.FindStringSubmatch(line); len(m) > 0 && m[1] == "Tags" { + args := common.ParseArgs(m[3]) + + d.TransparentTagging = true + + if attr, ok := args.Keyword["identifierAttribute"]; ok { + if d.TagsIdentifierAttribute != "" { + v.err = multierror.Append(v.err, fmt.Errorf("multiple Tags annotations: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + } + + d.TagsIdentifierAttribute = attr + } + + if attr, ok := args.Keyword["resourceType"]; ok { + d.TagsResourceType = attr + } + } + } + + for _, line := range funcDecl.Doc.List { + line := line.Text + + if m := annotation.FindStringSubmatch(line); len(m) > 0 { + d.FactoryName = v.functionName + + args := common.ParseArgs(m[3]) + + if attr, ok := args.Keyword["name"]; ok { + d.Name = attr + } + + switch annotationName := m[1]; annotationName { + case "FrameworkDataSource": + if slices.ContainsFunc(v.frameworkDataSources, func(d ResourceDatum) bool { return d.FactoryName == v.functionName }) { + v.err = multierror.Append(v.err, fmt.Errorf("duplicate Framework Data Source: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + } else { + v.frameworkDataSources = append(v.frameworkDataSources, d) + } + case "FrameworkResource": + if slices.ContainsFunc(v.frameworkResources, func(d ResourceDatum) bool { return d.FactoryName == v.functionName }) { + v.err = multierror.Append(v.err, fmt.Errorf("duplicate Framework Resource: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + } else { + v.frameworkResources = append(v.frameworkResources, d) + } + case "SDKDataSource": + if len(args.Positional) == 0 { + v.err = multierror.Append(v.err, 
fmt.Errorf("no type name: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + continue + } + + typeName := args.Positional[0] + + if _, ok := v.sdkDataSources[typeName]; ok { + v.err = multierror.Append(v.err, fmt.Errorf("duplicate SDK Data Source (%s): %s", typeName, fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + } else { + v.sdkDataSources[typeName] = d + } + case "SDKResource": + if len(args.Positional) == 0 { + v.err = multierror.Append(v.err, fmt.Errorf("no type name: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + continue + } + + typeName := args.Positional[0] + + if _, ok := v.sdkResources[typeName]; ok { + v.err = multierror.Append(v.err, fmt.Errorf("duplicate SDK Resource (%s): %s", typeName, fmt.Sprintf("%s.%s", v.packageName, v.functionName))) + } else { + v.sdkResources[typeName] = d + } + case "Tags": + // Handled above. + default: + v.g.Warnf("unknown annotation: %s", annotationName) + } + } + } + + v.functionName = "" +} + +// Visit is called for each node visited by ast.Walk. +func (v *visitor) Visit(node ast.Node) ast.Visitor { + // Look at functions (not methods) with comments. + if funcDecl, ok := node.(*ast.FuncDecl); ok && funcDecl.Recv == nil && funcDecl.Doc != nil { + v.processFuncDecl(funcDecl) + } + + return v +} diff --git a/internal/generate/servicepackages/sps.tmpl b/internal/generate/servicepackages/file.tmpl similarity index 74% rename from internal/generate/servicepackages/sps.tmpl rename to internal/generate/servicepackages/file.tmpl index 02145f45b1c..cc7f14aee80 100644 --- a/internal/generate/servicepackages/sps.tmpl +++ b/internal/generate/servicepackages/file.tmpl @@ -1,6 +1,6 @@ // Code generated by internal/generate/servicepackages/main.go; DO NOT EDIT. 
-package provider +package {{ .PackageName }} import ( "context" @@ -12,10 +12,10 @@ import ( "golang.org/x/exp/slices" ) -func servicePackages(context.Context) []conns.ServicePackage { +func servicePackages(ctx context.Context) []conns.ServicePackage { v := []conns.ServicePackage{ {{- range .Services }} - {{ .ProviderPackage }}.ServicePackage, + {{ .ProviderPackage }}.ServicePackage(ctx), {{- end }} } diff --git a/internal/generate/servicepackages/generate.go b/internal/generate/servicepackages/generate.go deleted file mode 100644 index ff9551a308f..00000000000 --- a/internal/generate/servicepackages/generate.go +++ /dev/null @@ -1,4 +0,0 @@ -//go:generate go run main.go -// ONLY generate directives and package declaration! Do not add anything else to this file. - -package servicepackages diff --git a/internal/generate/servicepackages/main.go b/internal/generate/servicepackages/main.go index b45d264b31e..e5a61f98d6c 100644 --- a/internal/generate/servicepackages/main.go +++ b/internal/generate/servicepackages/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate @@ -5,50 +8,48 @@ package main import ( _ "embed" + "flag" "fmt" - "go/ast" - "go/parser" - "go/token" "os" - "path/filepath" - "regexp" "sort" - "strings" - multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-provider-aws/internal/generate/common" "github.com/hashicorp/terraform-provider-aws/names" - "golang.org/x/exp/slices" ) func main() { const ( - spdFile = `service_package_gen.go` - spsFile = `../../provider/service_packages_gen.go` - namesDataFile = `../../../names/names_data.csv` + namesDataFile = `../../names/names_data.csv` ) + filename := `service_packages_gen.go` + + flag.Parse() + args := flag.Args() + if len(args) > 0 { + filename = args[0] + } + g := common.NewGenerator() + packageName := os.Getenv("GOPACKAGE") + + g.Infof("Generating %s/%s", packageName, filename) + data, err := common.ReadAllCSVData(namesDataFile) if err != nil { g.Fatalf("error reading %s: %s", namesDataFile, err) } - g.Infof("Generating per-service %s", filepath.Base(spdFile)) - - td := TemplateData{} + td := TemplateData{ + PackageName: packageName, + } for i, l := range data { if i < 1 { // no header continue } - // Don't skip excluded packages, instead handle missing values in the template. - // if l[names.ColExclude] != "" { - // continue - // } - if l[names.ColProviderPackageActual] == "" && l[names.ColProviderPackageCorrect] == "" { continue } @@ -60,54 +61,14 @@ func main() { p = l[names.ColProviderPackageActual] } - dir := fmt.Sprintf("../../service/%s", p) + spdFile := fmt.Sprintf("../service/%s/service_package_gen.go", p) - if _, err := os.Stat(dir); err != nil { + if _, err := os.Stat(spdFile); err != nil { continue } - // Look for Terraform Plugin Framework and SDK resource and data source annotations. - // These annotations are implemented as comments on factory functions. 
- v := &visitor{ - g: g, - - frameworkDataSources: make([]ResourceDatum, 0), - frameworkResources: make([]ResourceDatum, 0), - sdkDataSources: make(map[string]ResourceDatum), - sdkResources: make(map[string]ResourceDatum), - } - - v.processDir(dir) - - if err := v.err.ErrorOrNil(); err != nil { - g.Fatalf("%s", err.Error()) - } - s := ServiceDatum{ - ProviderPackage: p, - ProviderNameUpper: l[names.ColProviderNameUpper], - FrameworkDataSources: v.frameworkDataSources, - FrameworkResources: v.frameworkResources, - SDKDataSources: v.sdkDataSources, - SDKResources: v.sdkResources, - } - - sort.SliceStable(s.FrameworkDataSources, func(i, j int) bool { - return s.FrameworkDataSources[i].FactoryName < s.FrameworkDataSources[j].FactoryName - }) - sort.SliceStable(s.FrameworkResources, func(i, j int) bool { - return s.FrameworkResources[i].FactoryName < s.FrameworkResources[j].FactoryName - }) - - filename := fmt.Sprintf("../../service/%s/%s", p, spdFile) - d := g.NewGoFileDestination(filename) - - if err := d.WriteTemplate("servicepackagedata", spdTmpl, s); err != nil { - g.Fatalf("error generating %s service package data: %s", p, err) - } - - if err := d.Write(); err != nil { - g.Fatalf("generating file (%s): %s", filename, err) + ProviderPackage: p, } td.Services = append(td.Services, s) @@ -117,197 +78,25 @@ func main() { return td.Services[i].ProviderPackage < td.Services[j].ProviderPackage }) - g.Infof("Generating %s", filepath.Base(spsFile)) - - d := g.NewGoFileDestination(spsFile) + d := g.NewGoFileDestination(filename) - if err := d.WriteTemplate("servicepackages", spsTmpl, td); err != nil { + if err := d.WriteTemplate("servicepackages", tmpl, td); err != nil { g.Fatalf("error generating service packages list: %s", err) } if err := d.Write(); err != nil { - g.Fatalf("generating file (%s): %s", spsFile, err) + g.Fatalf("generating file (%s): %s", filename, err) } } -type ResourceDatum struct { - FactoryName string - Name string // Friendly name (without service 
name), e.g. "Topic", not "SNS Topic" - TransparentTagging bool - TagsIdentifierAttribute string - TagsResourceType string -} - type ServiceDatum struct { - ProviderPackage string - ProviderNameUpper string - FrameworkDataSources []ResourceDatum - FrameworkResources []ResourceDatum - SDKDataSources map[string]ResourceDatum - SDKResources map[string]ResourceDatum + ProviderPackage string } type TemplateData struct { - Services []ServiceDatum -} - -//go:embed spd.tmpl -var spdTmpl string - -//go:embed sps.tmpl -var spsTmpl string - -// Annotation processing. -var ( - annotation = regexp.MustCompile(`^//\s*@([a-zA-Z0-9]+)(\(([^)]*)\))?\s*$`) -) - -type visitor struct { - err *multierror.Error - g *common.Generator - - fileName string - functionName string - packageName string - - frameworkDataSources []ResourceDatum - frameworkResources []ResourceDatum - sdkDataSources map[string]ResourceDatum - sdkResources map[string]ResourceDatum + PackageName string + Services []ServiceDatum } -// processDir scans a single service package directory and processes contained Go sources files. -func (v *visitor) processDir(path string) { - fileSet := token.NewFileSet() - packageMap, err := parser.ParseDir(fileSet, path, func(fi os.FileInfo) bool { - // Skip tests. - return !strings.HasSuffix(fi.Name(), "_test.go") - }, parser.ParseComments) - - if err != nil { - v.err = multierror.Append(v.err, fmt.Errorf("parsing (%s): %w", path, err)) - - return - } - - for name, pkg := range packageMap { - v.packageName = name - - for name, file := range pkg.Files { - v.fileName = name - - v.processFile(file) - - v.fileName = "" - } - - v.packageName = "" - } -} - -// processFile processes a single Go source file. -func (v *visitor) processFile(file *ast.File) { - ast.Walk(v, file) -} - -// processFuncDecl processes a single Go function. -// The function's comments are scanned for annotations indicating a Plugin Framework or SDK resource or data source. 
-func (v *visitor) processFuncDecl(funcDecl *ast.FuncDecl) { - v.functionName = funcDecl.Name.Name - - // Look first for tagging annotations. - d := ResourceDatum{} - - for _, line := range funcDecl.Doc.List { - line := line.Text - - if m := annotation.FindStringSubmatch(line); len(m) > 0 && m[1] == "Tags" { - args := common.ParseArgs(m[3]) - - d.TransparentTagging = true - - if attr, ok := args.Keyword["identifierAttribute"]; ok { - if d.TagsIdentifierAttribute != "" { - v.err = multierror.Append(v.err, fmt.Errorf("multiple Tags annotations: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - } - - d.TagsIdentifierAttribute = attr - } - - if attr, ok := args.Keyword["resourceType"]; ok { - d.TagsResourceType = attr - } - } - } - - for _, line := range funcDecl.Doc.List { - line := line.Text - - if m := annotation.FindStringSubmatch(line); len(m) > 0 { - d.FactoryName = v.functionName - - args := common.ParseArgs(m[3]) - - if attr, ok := args.Keyword["name"]; ok { - d.Name = attr - } - - switch annotationName := m[1]; annotationName { - case "FrameworkDataSource": - if slices.ContainsFunc(v.frameworkDataSources, func(d ResourceDatum) bool { return d.FactoryName == v.functionName }) { - v.err = multierror.Append(v.err, fmt.Errorf("duplicate Framework Data Source: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - } else { - v.frameworkDataSources = append(v.frameworkDataSources, d) - } - case "FrameworkResource": - if slices.ContainsFunc(v.frameworkResources, func(d ResourceDatum) bool { return d.FactoryName == v.functionName }) { - v.err = multierror.Append(v.err, fmt.Errorf("duplicate Framework Resource: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - } else { - v.frameworkResources = append(v.frameworkResources, d) - } - case "SDKDataSource": - if len(args.Positional) == 0 { - v.err = multierror.Append(v.err, fmt.Errorf("no type name: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - continue - } - - typeName := 
args.Positional[0] - - if _, ok := v.sdkDataSources[typeName]; ok { - v.err = multierror.Append(v.err, fmt.Errorf("duplicate SDK Data Source (%s): %s", typeName, fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - } else { - v.sdkDataSources[typeName] = d - } - case "SDKResource": - if len(args.Positional) == 0 { - v.err = multierror.Append(v.err, fmt.Errorf("no type name: %s", fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - continue - } - - typeName := args.Positional[0] - - if _, ok := v.sdkResources[typeName]; ok { - v.err = multierror.Append(v.err, fmt.Errorf("duplicate SDK Resource (%s): %s", typeName, fmt.Sprintf("%s.%s", v.packageName, v.functionName))) - } else { - v.sdkResources[typeName] = d - } - case "Tags": - // Handled above. - default: - v.g.Warnf("unknown annotation: %s", annotationName) - } - } - } - - v.functionName = "" -} - -// Visit is called for each node visited by ast.Walk. -func (v *visitor) Visit(node ast.Node) ast.Visitor { - // Look at functions (not methods) with comments. - if funcDecl, ok := node.(*ast.FuncDecl); ok && funcDecl.Recv == nil && funcDecl.Doc != nil { - v.processFuncDecl(funcDecl) - } - - return v -} +//go:embed file.tmpl +var tmpl string diff --git a/internal/generate/servicesemgrep/generate.go b/internal/generate/servicesemgrep/generate.go index 069d8f7309f..4280fb002ed 100644 --- a/internal/generate/servicesemgrep/generate.go +++ b/internal/generate/servicesemgrep/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run main.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/servicesemgrep/main.go b/internal/generate/servicesemgrep/main.go index 724c733b76c..ba796367f95 100644 --- a/internal/generate/servicesemgrep/main.go +++ b/internal/generate/servicesemgrep/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/sweepimp/file.tmpl b/internal/generate/sweepimp/file.tmpl index 9404497a081..8c153f067bc 100644 --- a/internal/generate/sweepimp/file.tmpl +++ b/internal/generate/sweepimp/file.tmpl @@ -1,8 +1,9 @@ // Code generated by internal/generate/sweepimp/main.go; DO NOT EDIT. -package sweep_test +package {{ .PackageName }} import ( + "context" "testing" "github.com/hashicorp/terraform-plugin-testing/helper/resource" @@ -13,6 +14,6 @@ import ( ) func TestMain(m *testing.M) { - sweep.SweeperClients = make(map[string]interface{}) + sweep.ServicePackages = servicePackages(context.Background()) resource.TestMain(m) } \ No newline at end of file diff --git a/internal/generate/sweepimp/generate.go b/internal/generate/sweepimp/generate.go deleted file mode 100644 index c5711059425..00000000000 --- a/internal/generate/sweepimp/generate.go +++ /dev/null @@ -1,4 +0,0 @@ -//go:generate go run main.go -// ONLY generate directives and package declaration! Do not add anything else to this file. - -package sweepimp diff --git a/internal/generate/sweepimp/main.go b/internal/generate/sweepimp/main.go index 9738afa341b..87aa288020f 100644 --- a/internal/generate/sweepimp/main.go +++ b/internal/generate/sweepimp/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate @@ -10,7 +13,6 @@ import ( "io/fs" "os" "sort" - "strings" "github.com/hashicorp/terraform-provider-aws/internal/generate/common" "github.com/hashicorp/terraform-provider-aws/names" @@ -21,17 +23,20 @@ type ServiceDatum struct { } type TemplateData struct { - Services []ServiceDatum + PackageName string + Services []ServiceDatum } func main() { const ( - filename = `../../sweep/sweep_test.go` - namesDataFile = "../../../names/names_data.csv" + filename = `sweep_test.go` + namesDataFile = "../../names/names_data.csv" ) g := common.NewGenerator() - g.Infof("Generating %s", strings.TrimPrefix(filename, "../../")) + packageName := os.Getenv("GOPACKAGE") + + g.Infof("Generating %s/%s", packageName, filename) data, err := common.ReadAllCSVData(namesDataFile) @@ -39,7 +44,9 @@ func main() { g.Fatalf("error reading %s: %s", namesDataFile, err) } - td := TemplateData{} + td := TemplateData{ + PackageName: packageName, + } for i, l := range data { if i < 1 { // no header @@ -60,11 +67,11 @@ func main() { p = l[names.ColProviderPackageActual] } - if _, err := os.Stat(fmt.Sprintf("../../service/%s", p)); err != nil || errors.Is(err, fs.ErrNotExist) { + if _, err := os.Stat(fmt.Sprintf("../service/%s", p)); err != nil || errors.Is(err, fs.ErrNotExist) { continue } - if _, err := os.Stat(fmt.Sprintf("../../service/%s/sweep.go", p)); err != nil || errors.Is(err, fs.ErrNotExist) { + if _, err := os.Stat(fmt.Sprintf("../service/%s/sweep.go", p)); err != nil || errors.Is(err, fs.ErrNotExist) { continue } diff --git a/internal/generate/tagresource/main.go b/internal/generate/tagresource/main.go index 229a47d3461..2868ef43814 100644 --- a/internal/generate/tagresource/main.go +++ b/internal/generate/tagresource/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate @@ -17,7 +20,7 @@ import ( var ( getTagFunc = flag.String("GetTagFunc", "GetTag", "getTagFunc") idAttribName = flag.String("IDAttribName", "resource_arn", "idAttribName") - updateTagsFunc = flag.String("UpdateTagsFunc", "UpdateTags", "updateTagsFunc") + updateTagsFunc = flag.String("UpdateTagsFunc", "updateTags", "updateTagsFunc") ) func usage() { diff --git a/internal/generate/tagresource/resource.tmpl b/internal/generate/tagresource/resource.tmpl index 1fa93e6172a..395363fa1da 100644 --- a/internal/generate/tagresource/resource.tmpl +++ b/internal/generate/tagresource/resource.tmpl @@ -46,7 +46,7 @@ func ResourceTag() *schema.Resource { } func resourceTagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { {{ if not (eq .ServicePackage "ec2") }}// nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create{{- end }} - conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn() + conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn(ctx) identifier := d.Get("{{ .IDAttribName }}").(string) key := d.Get("key").(string) @@ -66,7 +66,7 @@ func resourceTagCreate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn() + conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { @@ -93,7 +93,7 @@ func resourceTagRead(ctx context.Context, d *schema.ResourceData, meta interface } func resourceTagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn() + conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { @@ -108,7 +108,7 @@ func 
resourceTagUpdate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn() + conn := meta.(*conns.AWSClient).{{ .AWSServiceUpper }}Conn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { diff --git a/internal/generate/tagresource/tests.tmpl b/internal/generate/tagresource/tests.tmpl index 2afe93c389e..0d295e5ae1d 100644 --- a/internal/generate/tagresource/tests.tmpl +++ b/internal/generate/tagresource/tests.tmpl @@ -18,7 +18,7 @@ import ( func testAccCheckTagDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .AWSServiceUpper }}Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .AWSServiceUpper }}Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_{{ .ServicePackage }}_tag" { @@ -64,7 +64,7 @@ func testAccCheckTagExists(ctx context.Context, resourceName string) resource.Te return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .AWSServiceUpper }}Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .AWSServiceUpper }}Conn(ctx) _, err = tf{{ .ServicePackage }}.{{ .GetTagFunc }}{{ if .WithContext }}WithContext{{ end }}(ctx, conn, identifier, key) diff --git a/internal/generate/tags/main.go b/internal/generate/tags/main.go index 27f778f36ee..1595a4b3d14 100644 --- a/internal/generate/tags/main.go +++ b/internal/generate/tags/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate @@ -9,6 +12,7 @@ import ( "os" "regexp" "strings" + "time" "github.com/hashicorp/terraform-provider-aws/internal/generate/common" v1 "github.com/hashicorp/terraform-provider-aws/internal/generate/tags/templates/v1" @@ -21,6 +25,12 @@ const ( sdkV2 = 2 ) +const ( + defaultListTagsFunc = "listTags" + defaultUpdateTagsFunc = "updateTags" + defaultWaitTagsPropagatedFunc = "waitTagsPropagated" +) + var ( createTags = flag.Bool("CreateTags", false, "whether to generate CreateTags") getTag = flag.Bool("GetTag", false, "whether to generate GetTag") @@ -30,42 +40,55 @@ var ( untagInNeedTagType = flag.Bool("UntagInNeedTagType", false, "whether Untag input needs tag type") updateTags = flag.Bool("UpdateTags", false, "whether to generate UpdateTags") updateTagsNoIgnoreSystem = flag.Bool("UpdateTagsNoIgnoreSystem", false, "whether to not ignore system tags in UpdateTags") - - createTagsFunc = flag.String("CreateTagsFunc", "createTags", "createTagsFunc") - getTagFunc = flag.String("GetTagFunc", "GetTag", "getTagFunc") - listTagsFunc = flag.String("ListTagsFunc", "ListTags", "listTagsFunc") - listTagsInFiltIDName = flag.String("ListTagsInFiltIDName", "", "listTagsInFiltIDName") - listTagsInIDElem = flag.String("ListTagsInIDElem", "ResourceArn", "listTagsInIDElem") - listTagsInIDNeedSlice = flag.String("ListTagsInIDNeedSlice", "", "listTagsInIDNeedSlice") - listTagsOp = flag.String("ListTagsOp", "ListTagsForResource", "listTagsOp") - listTagsOutTagsElem = flag.String("ListTagsOutTagsElem", "Tags", "listTagsOutTagsElem") - tagInCustomVal = flag.String("TagInCustomVal", "", "tagInCustomVal") - tagInIDElem = flag.String("TagInIDElem", "ResourceArn", "tagInIDElem") - tagInIDNeedSlice = flag.String("TagInIDNeedSlice", "", "tagInIDNeedSlice") - tagInTagsElem = flag.String("TagInTagsElem", "Tags", "tagInTagsElem") - tagKeyType = flag.String("TagKeyType", "", "tagKeyType") - tagOp = flag.String("TagOp", 
"TagResource", "tagOp") - tagOpBatchSize = flag.String("TagOpBatchSize", "", "tagOpBatchSize") - tagResTypeElem = flag.String("TagResTypeElem", "", "tagResTypeElem") - tagType = flag.String("TagType", "Tag", "tagType") - tagType2 = flag.String("TagType2", "", "tagType") - tagTypeAddBoolElem = flag.String("TagTypeAddBoolElem", "", "TagTypeAddBoolElem") - tagTypeIDElem = flag.String("TagTypeIDElem", "", "tagTypeIDElem") - tagTypeKeyElem = flag.String("TagTypeKeyElem", "Key", "tagTypeKeyElem") - tagTypeValElem = flag.String("TagTypeValElem", "Value", "tagTypeValElem") - untagInCustomVal = flag.String("UntagInCustomVal", "", "untagInCustomVal") - untagInNeedTagKeyType = flag.String("UntagInNeedTagKeyType", "", "untagInNeedTagKeyType") - untagInTagsElem = flag.String("UntagInTagsElem", "TagKeys", "untagInTagsElem") - untagOp = flag.String("UntagOp", "UntagResource", "untagOp") - updateTagsFunc = flag.String("UpdateTagsFunc", "UpdateTags", "updateTagsFunc") + waitForPropagation = flag.Bool("Wait", false, "whether to generate WaitTagsPropagated") + + createTagsFunc = flag.String("CreateTagsFunc", "createTags", "createTagsFunc") + getTagFunc = flag.String("GetTagFunc", "GetTag", "getTagFunc") + getTagsInFunc = flag.String("GetTagsInFunc", "getTagsIn", "getTagsInFunc") + keyValueTagsFunc = flag.String("KeyValueTagsFunc", "KeyValueTags", "keyValueTagsFunc") + listTagsFunc = flag.String("ListTagsFunc", defaultListTagsFunc, "listTagsFunc") + listTagsInFiltIDName = flag.String("ListTagsInFiltIDName", "", "listTagsInFiltIDName") + listTagsInIDElem = flag.String("ListTagsInIDElem", "ResourceArn", "listTagsInIDElem") + listTagsInIDNeedSlice = flag.String("ListTagsInIDNeedSlice", "", "listTagsInIDNeedSlice") + listTagsOp = flag.String("ListTagsOp", "ListTagsForResource", "listTagsOp") + listTagsOutTagsElem = flag.String("ListTagsOutTagsElem", "Tags", "listTagsOutTagsElem") + setTagsOutFunc = flag.String("SetTagsOutFunc", "setTagsOut", "setTagsOutFunc") + tagInCustomVal = 
flag.String("TagInCustomVal", "", "tagInCustomVal") + tagInIDElem = flag.String("TagInIDElem", "ResourceArn", "tagInIDElem") + tagInIDNeedSlice = flag.String("TagInIDNeedSlice", "", "tagInIDNeedSlice") + tagInTagsElem = flag.String("TagInTagsElem", "Tags", "tagInTagsElem") + tagKeyType = flag.String("TagKeyType", "", "tagKeyType") + tagOp = flag.String("TagOp", "TagResource", "tagOp") + tagOpBatchSize = flag.String("TagOpBatchSize", "", "tagOpBatchSize") + tagResTypeElem = flag.String("TagResTypeElem", "", "tagResTypeElem") + tagType = flag.String("TagType", "Tag", "tagType") + tagType2 = flag.String("TagType2", "", "tagType") + tagTypeAddBoolElem = flag.String("TagTypeAddBoolElem", "", "TagTypeAddBoolElem") + tagTypeIDElem = flag.String("TagTypeIDElem", "", "tagTypeIDElem") + tagTypeKeyElem = flag.String("TagTypeKeyElem", "Key", "tagTypeKeyElem") + tagTypeValElem = flag.String("TagTypeValElem", "Value", "tagTypeValElem") + tagsFunc = flag.String("TagsFunc", "Tags", "tagsFunc") + untagInCustomVal = flag.String("UntagInCustomVal", "", "untagInCustomVal") + untagInNeedTagKeyType = flag.String("UntagInNeedTagKeyType", "", "untagInNeedTagKeyType") + untagInTagsElem = flag.String("UntagInTagsElem", "TagKeys", "untagInTagsElem") + untagOp = flag.String("UntagOp", "UntagResource", "untagOp") + updateTagsFunc = flag.String("UpdateTagsFunc", defaultUpdateTagsFunc, "updateTagsFunc") + waitTagsPropagatedFunc = flag.String("WaitFunc", defaultWaitTagsPropagatedFunc, "waitFunc") + waitContinuousOccurence = flag.Int("WaitContinuousOccurence", 0, "ContinuousTargetOccurence for Wait function") + waitDelay = flag.Duration("WaitDelay", 0, "Delay for Wait function") + waitMinTimeout = flag.Duration("WaitMinTimeout", 0, `"MinTimeout" (minimum poll interval) for Wait function`) + waitPollInterval = flag.Duration("WaitPollInterval", 0, "PollInterval for Wait function") + waitTimeout = flag.Duration("WaitTimeout", 0, "Timeout for Wait function") parentNotFoundErrCode = 
flag.String("ParentNotFoundErrCode", "", "Parent 'NotFound' Error Code") parentNotFoundErrMsg = flag.String("ParentNotFoundErrMsg", "", "Parent 'NotFound' Error Message") - sdkVersion = flag.Int("AWSSDKVersion", sdkV1, "Version of the AWS SDK Go to use i.e. 1 or 2") - kvtValues = flag.Bool("KVTValues", false, "Whether KVT string map is of string pointers") - skipNamesImp = flag.Bool("SkipNamesImp", false, "Whether to skip importing names") - skipTypesImp = flag.Bool("SkipTypesImp", false, "Whether to skip importing types") + sdkServicePackage = flag.String("AWSSDKServicePackage", "", "AWS Go SDK package to use. Defaults to the provider service package name.") + sdkVersion = flag.Int("AWSSDKVersion", sdkV1, "Version of the AWS SDK Go to use i.e. 1 or 2") + kvtValues = flag.Bool("KVTValues", false, "Whether KVT string map is of string pointers") + skipServiceImp = flag.Bool("SkipAWSServiceImp", false, "Whether to skip importing the AWS service package") + skipNamesImp = flag.Bool("SkipNamesImp", false, "Whether to skip importing names") + skipTypesImp = flag.Bool("SkipTypesImp", false, "Whether to skip importing types") ) func usage() { @@ -76,12 +99,13 @@ func usage() { } type TemplateBody struct { - getTag string - header string - listTags string - serviceTagsMap string - serviceTagsSlice string - updateTags string + getTag string + header string + listTags string + serviceTagsMap string + serviceTagsSlice string + updateTags string + waitTagsPropagated string } func newTemplateBody(version int, kvtValues bool) *TemplateBody { @@ -94,6 +118,7 @@ func newTemplateBody(version int, kvtValues bool) *TemplateBody { "\n" + v1.ServiceTagsMapBody, "\n" + v1.ServiceTagsSliceBody, "\n" + v1.UpdateTagsBody, + "\n" + v1.WaitTagsPropagatedBody, } case sdkV2: if kvtValues { @@ -104,6 +129,7 @@ func newTemplateBody(version int, kvtValues bool) *TemplateBody { "\n" + v2.ServiceTagsValueMapBody, "\n" + v2.ServiceTagsSliceBody, "\n" + v2.UpdateTagsBody, + "\n" + 
v2.WaitTagsPropagatedBody, } } return &TemplateBody{ @@ -113,6 +139,7 @@ func newTemplateBody(version int, kvtValues bool) *TemplateBody { "\n" + v2.ServiceTagsMapBody, "\n" + v2.ServiceTagsSliceBody, "\n" + v2.UpdateTagsBody, + "\n" + v2.WaitTagsPropagatedBody, } default: return nil @@ -128,6 +155,8 @@ type TemplateData struct { CreateTagsFunc string GetTagFunc string + GetTagsInFunc string + KeyValueTagsFunc string ListTagsFunc string ListTagsInFiltIDName string ListTagsInIDElem string @@ -137,6 +166,7 @@ type TemplateData struct { ParentNotFoundErrCode string ParentNotFoundErrMsg string RetryCreateOnNotFound string + SetTagsOutFunc string TagInCustomVal string TagInIDElem string TagInIDNeedSlice string @@ -153,6 +183,7 @@ type TemplateData struct { TagTypeIDElem string TagTypeKeyElem string TagTypeValElem string + TagsFunc string UntagInCustomVal string UntagInNeedTagKeyType string UntagInNeedTagType bool @@ -160,6 +191,13 @@ type TemplateData struct { UntagOp string UpdateTagsFunc string UpdateTagsIgnoreSystem bool + WaitForPropagation bool + WaitTagsPropagatedFunc string + WaitContinuousOccurence int + WaitDelay string + WaitMinTimeout string + WaitPollInterval string + WaitTimeout string // The following are specific to writing import paths in the `headerBody`; // to include the package, set the corresponding field's value to true @@ -168,8 +206,13 @@ type TemplateData struct { HelperSchemaPkg bool InternalTypesPkg bool NamesPkg bool + SkipServiceImp bool SkipTypesImp bool TfResourcePkg bool + TimePkg bool + + IsDefaultListTags bool + IsDefaultUpdateTags bool } func main() { @@ -188,8 +231,13 @@ func main() { } servicePackage := os.Getenv("GOPACKAGE") - awsPkg, err := names.AWSGoPackage(servicePackage, *sdkVersion) + if *sdkServicePackage == "" { + sdkServicePackage = &servicePackage + } + + g.Infof("Generating internal/service/%s/%s", servicePackage, filename) + awsPkg, err := names.AWSGoPackage(*sdkServicePackage, *sdkVersion) if err != nil { 
g.Fatalf("encountered: %s", err) } @@ -199,13 +247,13 @@ func main() { awsIntfPkg = fmt.Sprintf("%[1]s/%[1]siface", awsPkg) } - clientTypeName, err := names.AWSGoClientTypeName(servicePackage, *sdkVersion) + clientTypeName, err := names.AWSGoClientTypeName(*sdkServicePackage, *sdkVersion) if err != nil { g.Fatalf("encountered: %s", err) } - providerNameUpper, err := names.ProviderNameUpper(servicePackage) + providerNameUpper, err := names.ProviderNameUpper(*sdkServicePackage) if err != nil { g.Fatalf("encountered: %s", err) @@ -242,16 +290,20 @@ func main() { ProviderNameUpper: providerNameUpper, ServicePackage: servicePackage, - ConnsPkg: *listTags || *updateTags, + ConnsPkg: (*listTags && *listTagsFunc == defaultListTagsFunc) || (*updateTags && *updateTagsFunc == defaultUpdateTagsFunc), FmtPkg: *updateTags, HelperSchemaPkg: awsPkg == "autoscaling", - InternalTypesPkg: *listTags || *serviceTagsMap || *serviceTagsSlice, + InternalTypesPkg: (*listTags && *listTagsFunc == defaultListTagsFunc) || *serviceTagsMap || *serviceTagsSlice, NamesPkg: *updateTags && !*skipNamesImp, + SkipServiceImp: *skipServiceImp, SkipTypesImp: *skipTypesImp, - TfResourcePkg: *getTag, + TfResourcePkg: (*getTag || *waitForPropagation), + TimePkg: *waitForPropagation, CreateTagsFunc: createTagsFunc, GetTagFunc: *getTagFunc, + GetTagsInFunc: *getTagsInFunc, + KeyValueTagsFunc: *keyValueTagsFunc, ListTagsFunc: *listTagsFunc, ListTagsInFiltIDName: *listTagsInFiltIDName, ListTagsInIDElem: *listTagsInIDElem, @@ -260,6 +312,7 @@ func main() { ListTagsOutTagsElem: *listTagsOutTagsElem, ParentNotFoundErrCode: *parentNotFoundErrCode, ParentNotFoundErrMsg: *parentNotFoundErrMsg, + SetTagsOutFunc: *setTagsOutFunc, TagInCustomVal: *tagInCustomVal, TagInIDElem: *tagInIDElem, TagInIDNeedSlice: *tagInIDNeedSlice, @@ -276,6 +329,7 @@ func main() { TagTypeIDElem: *tagTypeIDElem, TagTypeKeyElem: *tagTypeKeyElem, TagTypeValElem: *tagTypeValElem, + TagsFunc: *tagsFunc, UntagInCustomVal: *untagInCustomVal, 
UntagInNeedTagKeyType: *untagInNeedTagKeyType, UntagInNeedTagType: *untagInNeedTagType, @@ -283,6 +337,16 @@ func main() { UntagOp: *untagOp, UpdateTagsFunc: *updateTagsFunc, UpdateTagsIgnoreSystem: !*updateTagsNoIgnoreSystem, + WaitForPropagation: *waitForPropagation, + WaitTagsPropagatedFunc: *waitTagsPropagatedFunc, + WaitContinuousOccurence: *waitContinuousOccurence, + WaitDelay: formatDuration(*waitDelay), + WaitMinTimeout: formatDuration(*waitMinTimeout), + WaitPollInterval: formatDuration(*waitPollInterval), + WaitTimeout: formatDuration(*waitTimeout), + + IsDefaultListTags: *listTagsFunc == defaultListTagsFunc, + IsDefaultUpdateTags: *updateTagsFunc == defaultUpdateTagsFunc, } templateBody := newTemplateBody(*sdkVersion, *kvtValues) @@ -331,6 +395,12 @@ func main() { } } + if *waitForPropagation { + if err := d.WriteTemplate("waittagspropagated", templateBody.waitTagsPropagated, templateData); err != nil { + g.Fatalf("generating file (%s): %s", filename, err) + } + } + if err := d.Write(); err != nil { g.Fatalf("generating file (%s): %s", filename, err) } @@ -341,3 +411,29 @@ func toSnakeCase(str string) string { result = regexp.MustCompile("([a-z0-9])([A-Z])").ReplaceAllString(result, "${1}_${2}") return strings.ToLower(result) } + +func formatDuration(d time.Duration) string { + if d == 0 { + return "" + } + + var buf []string + if h := d.Hours(); h >= 1 { + buf = append(buf, fmt.Sprintf("%d * time.Hour", int64(h))) + d = d - time.Duration(int64(h)*int64(time.Hour)) + } + if m := d.Minutes(); m >= 1 { + buf = append(buf, fmt.Sprintf("%d * time.Minute", int64(m))) + d = d - time.Duration(int64(m)*int64(time.Minute)) + } + if s := d.Seconds(); s >= 1 { + buf = append(buf, fmt.Sprintf("%d * time.Second", int64(s))) + d = d - time.Duration(int64(s)*int64(time.Second)) + } + if ms := d.Milliseconds(); ms >= 1 { + buf = append(buf, fmt.Sprintf("%d * time.Millisecond", int64(ms))) + } + // Ignoring anything below milliseconds + + return strings.Join(buf, " + ") 
+} diff --git a/internal/generate/tags/templates/v1/get_tag_body.tmpl b/internal/generate/tags/templates/v1/get_tag_body.tmpl index 1d23c9f4249..f7dc90fe26a 100644 --- a/internal/generate/tags/templates/v1/get_tag_body.tmpl +++ b/internal/generate/tags/templates/v1/get_tag_body.tmpl @@ -29,7 +29,7 @@ func {{ .GetTagFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ return nil, err } - listTags := KeyValueTags(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) + listTags := {{ .KeyValueTagsFunc }}(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) {{- else }} listTags, err := {{ .ListTagsFunc }}(ctx, conn, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) diff --git a/internal/generate/tags/templates/v1/header_body.tmpl b/internal/generate/tags/templates/v1/header_body.tmpl index 96235cae303..1a51de1a5f6 100644 --- a/internal/generate/tags/templates/v1/header_body.tmpl +++ b/internal/generate/tags/templates/v1/header_body.tmpl @@ -6,11 +6,16 @@ import ( {{- if .FmtPkg }} "fmt" {{- end }} + {{- if .TimePkg }} + "time" + {{- end }} "github.com/aws/aws-sdk-go/aws" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" {{- if .AWSService }} + {{- if not .SkipServiceImp }} "github.com/aws/aws-sdk-go/service/{{ .AWSService }}" + {{- end }} {{- end }} {{- if .AWSServiceIfacePackage }} "github.com/aws/aws-sdk-go/service/{{ .AWSServiceIfacePackage }}" diff --git a/internal/generate/tags/templates/v1/list_tags_body.tmpl b/internal/generate/tags/templates/v1/list_tags_body.tmpl index 01635d41638..7849f71450b 100644 --- a/internal/generate/tags/templates/v1/list_tags_body.tmpl +++ b/internal/generate/tags/templates/v1/list_tags_body.tmpl @@ -44,13 +44,14 @@ func {{ .ListTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier return tftags.New(ctx, nil), err } - return 
KeyValueTags(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}), nil + return {{ .KeyValueTagsFunc }}(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}), nil } -// {{ .ListTagsFunc }} lists {{ .ServicePackage }} service tags and set them in Context. +{{- if .IsDefaultListTags }} +// {{ .ListTagsFunc | Title }} lists {{ .ServicePackage }} service tags and set them in Context. // It is called from outside this package. -func (p *servicePackage) {{ .ListTagsFunc }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string) error { - tags, err := {{ .ListTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Conn(), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) +func (p *servicePackage) {{ .ListTagsFunc | Title }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string) error { + tags, err := {{ .ListTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Conn(ctx), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) if err != nil { return err @@ -62,3 +63,4 @@ func (p *servicePackage) {{ .ListTagsFunc }}(ctx context.Context, meta any, iden return nil } +{{- end }} diff --git a/internal/generate/tags/templates/v1/service_tags_map_body.tmpl b/internal/generate/tags/templates/v1/service_tags_map_body.tmpl index d6097f32e60..8f72a4d3f09 100644 --- a/internal/generate/tags/templates/v1/service_tags_map_body.tmpl +++ b/internal/generate/tags/templates/v1/service_tags_map_body.tmpl @@ -1,20 +1,20 @@ // map[string]*string handling -// Tags returns {{ .ServicePackage }} service tags. -func Tags(tags tftags.KeyValueTags) map[string]*string { +// {{ .TagsFunc }} returns {{ .ServicePackage }} service tags. 
+func {{ .TagsFunc }}(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from {{ .ServicePackage }} service tags. -func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { +// {{ .KeyValueTagsFunc }} creates tftags.KeyValueTags from {{ .ServicePackage }} service tags. +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns {{ .ServicePackage }} service tags from Context. +// {{ .GetTagsInFunc }} returns {{ .ServicePackage }} service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func {{ .GetTagsInFunc }}(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { - if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + if tags := {{ .TagsFunc }}(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags } } @@ -22,10 +22,10 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets {{ .ServicePackage }} service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// {{ .SetTagsOutFunc }} sets {{ .ServicePackage }} service tags in Context. 
+func {{ .SetTagsOutFunc }}(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags)) } } diff --git a/internal/generate/tags/templates/v1/service_tags_slice_body.tmpl b/internal/generate/tags/templates/v1/service_tags_slice_body.tmpl index b1666346d0e..4f46528a0f3 100644 --- a/internal/generate/tags/templates/v1/service_tags_slice_body.tmpl +++ b/internal/generate/tags/templates/v1/service_tags_slice_body.tmpl @@ -44,8 +44,8 @@ func TagKeys(tags tftags.KeyValueTags) []*{{ .TagPackage }}.{{ .TagKeyType }} { } {{- end }} -// Tags returns {{ .ServicePackage }} service tags. -func Tags(tags tftags.KeyValueTags) []*{{ .TagPackage }}.{{ .TagType }} { +// {{ .TagsFunc }} returns {{ .ServicePackage }} service tags. +func {{ .TagsFunc }}(tags tftags.KeyValueTags) []*{{ .TagPackage }}.{{ .TagType }} { {{- if or ( .TagTypeIDElem ) ( .TagTypeAddBoolElem) }} var result []*{{ .TagPackage }}.{{ .TagType }} @@ -82,7 +82,7 @@ func Tags(tags tftags.KeyValueTags) []*{{ .TagPackage }}.{{ .TagType }} { return result } -// KeyValueTags creates tftags.KeyValueTags from {{ .AWSService }} service tags. +// {{ .KeyValueTagsFunc }} creates tftags.KeyValueTags from {{ .AWSService }} service tags. 
{{- if or ( .TagType2 ) ( .TagTypeAddBoolElem ) }} // // Accepts the following types: @@ -94,7 +94,7 @@ func Tags(tags tftags.KeyValueTags) []*{{ .TagPackage }}.{{ .TagType }} { // - []any (Terraform TypeList configuration block compatible) // - *schema.Set (Terraform TypeSet configuration block compatible) {{- end }} -func KeyValueTags(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) tftags.KeyValueTags { +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) tftags.KeyValueTags { switch tags := tags.(type) { case []*{{ .TagPackage }}.{{ .TagType }}: {{- if or ( .TagTypeIDElem ) ( .TagTypeAddBoolElem) }} @@ -164,7 +164,7 @@ func KeyValueTags(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifi return tftags.New(ctx, m) {{- if .TagTypeAddBoolElem }} case *schema.Set: - return KeyValueTags(ctx, tags.List(){{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) + return {{ .KeyValueTagsFunc }}(ctx, tags.List(){{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) case []any: result := make(map[string]*tftags.TagData) @@ -214,7 +214,7 @@ func KeyValueTags(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifi } } {{- else }} -func KeyValueTags(ctx context.Context, tags []*{{ .TagPackage }}.{{ .TagType }}) tftags.KeyValueTags { +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags []*{{ .TagPackage }}.{{ .TagType }}) tftags.KeyValueTags { m := make(map[string]*string, len(tags)) for _, tag := range tags { @@ -225,11 +225,11 @@ func KeyValueTags(ctx context.Context, tags []*{{ .TagPackage }}.{{ .TagType }}) } {{- end }} -// GetTagsIn returns {{ .ServicePackage }} service tags from Context. +// {{ .GetTagsInFunc }} returns {{ .ServicePackage }} service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*{{ .TagPackage }}.{{ .TagType }} { +func {{ .GetTagsInFunc }}(ctx context.Context) []*{{ .TagPackage }}.{{ .TagType }} { if inContext, ok := tftags.FromContext(ctx); ok { - if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + if tags := {{ .TagsFunc }}(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags } } @@ -237,17 +237,17 @@ func GetTagsIn(ctx context.Context) []*{{ .TagPackage }}.{{ .TagType }} { return nil } -// SetTagsOut sets {{ .ServicePackage }} service tags in Context. +// {{ .SetTagsOutFunc }} sets {{ .ServicePackage }} service tags in Context. {{- if or ( .TagType2 ) ( .TagTypeAddBoolElem ) }} -func SetTagsOut(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) { +func {{ .SetTagsOutFunc }}(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }})) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }})) } } {{- else }} -func SetTagsOut(ctx context.Context, tags []*{{ .TagPackage }}.{{ .TagType }}) { +func {{ .SetTagsOutFunc }}(ctx context.Context, tags []*{{ .TagPackage }}.{{ .TagType }}) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags)) } } {{- end }} @@ -259,6 +259,6 @@ func {{ .CreateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi return nil } - return {{ .UpdateTagsFunc }}(ctx, conn, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, 
nil, KeyValueTags(ctx, tags)) + return {{ .UpdateTagsFunc }}(ctx, conn, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, nil, {{ .KeyValueTagsFunc }}(ctx, tags)) } {{- end }} diff --git a/internal/generate/tags/templates/v1/update_tags_body.tmpl b/internal/generate/tags/templates/v1/update_tags_body.tmpl index 8e18b46dc57..7ff616acdc8 100644 --- a/internal/generate/tags/templates/v1/update_tags_body.tmpl +++ b/internal/generate/tags/templates/v1/update_tags_body.tmpl @@ -3,8 +3,8 @@ // it may also be a different identifier depending on the service. {{if .TagTypeAddBoolElem -}} func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTagsSet, newTagsSet any) error { - oldTags := KeyValueTags(ctx, oldTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) - newTags := KeyValueTags(ctx, newTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) + oldTags := {{ .KeyValueTagsFunc }}(ctx, oldTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) + newTags := {{ .KeyValueTagsFunc }}(ctx, newTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) {{- else -}} func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) @@ -39,12 +39,12 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi } if len(updatedTags) > 0 { - input.{{ .TagInTagsElem }} = Tags(updatedTags) + input.{{ .TagInTagsElem }} = {{ .TagsFunc }}(updatedTags) } if len(removedTags) > 0 { {{- if .UntagInNeedTagType }} - input.{{ .UntagInTagsElem }} = Tags(removedTags) + input.{{ .UntagInTagsElem }} = {{ .TagsFunc }}(removedTags) {{- else if .UntagInNeedTagKeyType }} input.{{ .UntagInTagsElem }} = TagKeys(removedTags) {{- else if .UntagInCustomVal }} @@ -82,7 +82,7 @@ func {{ .UpdateTagsFunc 
}}(ctx context.Context, conn {{ .ClientType }}, identifi {{- end }} {{- end }} {{- if .UntagInNeedTagType }} - {{ .UntagInTagsElem }}: Tags(removedTags), + {{ .UntagInTagsElem }}: {{ .TagsFunc }}(removedTags), {{- else if .UntagInNeedTagKeyType }} {{ .UntagInTagsElem }}: TagKeys(removedTags), {{- else if .UntagInCustomVal }} @@ -124,7 +124,7 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi {{- if .TagInCustomVal }} {{ .TagInTagsElem }}: {{ .TagInCustomVal }}, {{- else }} - {{ .TagInTagsElem }}: Tags(updatedTags), + {{ .TagInTagsElem }}: {{ .TagsFunc }}(updatedTags), {{- end }} } @@ -140,11 +140,21 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi {{- end }} + {{ if .WaitForPropagation }} + if len(removedTags) > 0 || len(updatedTags) > 0 { + if err := {{ .WaitTagsPropagatedFunc }}(ctx, conn, identifier, newTags); err != nil { + return fmt.Errorf("waiting for resource (%s) tag propagation: %w", identifier, err) + } + } + {{- end }} + return nil } -// {{ .UpdateTagsFunc }} updates {{ .ServicePackage }} service tags. +{{- if .IsDefaultUpdateTags }} +// {{ .UpdateTagsFunc | Title }} updates {{ .ServicePackage }} service tags. // It is called from outside this package. 
-func (p *servicePackage) {{ .UpdateTagsFunc }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTags, newTags any) error { - return {{ .UpdateTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Conn(), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, oldTags, newTags) +func (p *servicePackage) {{ .UpdateTagsFunc | Title }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTags, newTags any) error { + return {{ .UpdateTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Conn(ctx), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, oldTags, newTags) } +{{- end }} diff --git a/internal/generate/tags/templates/v1/v1.go b/internal/generate/tags/templates/v1/v1.go index 4b308bfccb6..7e933e491ed 100644 --- a/internal/generate/tags/templates/v1/v1.go +++ b/internal/generate/tags/templates/v1/v1.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package v1 import ( @@ -21,3 +24,6 @@ var ServiceTagsSliceBody string //go:embed update_tags_body.tmpl var UpdateTagsBody string + +//go:embed wait_tags_propagated_body.tmpl +var WaitTagsPropagatedBody string diff --git a/internal/generate/tags/templates/v1/wait_tags_propagated_body.tmpl b/internal/generate/tags/templates/v1/wait_tags_propagated_body.tmpl new file mode 100644 index 00000000000..52e8d005ec5 --- /dev/null +++ b/internal/generate/tags/templates/v1/wait_tags_propagated_body.tmpl @@ -0,0 +1,34 @@ +// {{ .WaitTagsPropagatedFunc }} waits for {{ .ServicePackage }} service tags to be propagated. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. 
+func {{ .WaitTagsPropagatedFunc }}(ctx context.Context, conn {{ .ClientType }}, id string, tags tftags.KeyValueTags) error { + checkFunc := func() (bool, error) { + output, err := listTags(ctx, conn, id) + + if tfresource.NotFound(err) { + return false, nil + } + + if err != nil { + return false, err + } + + return output.Equal(tags), nil + } + opts := tfresource.WaitOpts{ + {{- if ne .WaitContinuousOccurence 0 }} + ContinuousTargetOccurence: {{ .WaitContinuousOccurence }}, + {{- end }} + {{- if ne .WaitDelay "" }} + Delay: {{ .WaitDelay }}, + {{- end }} + {{- if ne .WaitMinTimeout "" }} + MinTimeout: {{ .WaitMinTimeout }}, + {{- end }} + {{- if ne .WaitPollInterval "" }} + PollInterval: {{ .WaitPollInterval }}, + {{- end }} + } + + return tfresource.WaitUntil(ctx, {{ .WaitTimeout }}, checkFunc, opts) +} diff --git a/internal/generate/tags/templates/v2/get_tag_body.tmpl b/internal/generate/tags/templates/v2/get_tag_body.tmpl index d2f9042c98f..914bd410381 100644 --- a/internal/generate/tags/templates/v2/get_tag_body.tmpl +++ b/internal/generate/tags/templates/v2/get_tag_body.tmpl @@ -28,7 +28,7 @@ func {{ .GetTagFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ return nil, err } - listTags := KeyValueTags(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) + listTags := {{ .KeyValueTagsFunc }}(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) {{- else }} listTags, err := {{ .ListTagsFunc }}(ctx, conn, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) diff --git a/internal/generate/tags/templates/v2/header_body.tmpl b/internal/generate/tags/templates/v2/header_body.tmpl index 59dabbd4089..3ea1639da38 100644 --- a/internal/generate/tags/templates/v2/header_body.tmpl +++ b/internal/generate/tags/templates/v2/header_body.tmpl @@ -6,14 +6,19 @@ import ( {{- if .FmtPkg }} "fmt" {{- end }} 
+ {{- if .TimePkg }} + "time" + {{- end }} "github.com/aws/aws-sdk-go-v2/aws" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" {{- if .AWSService }} + {{- if not .SkipServiceImp }} "github.com/aws/aws-sdk-go-v2/service/{{ .AWSService }}" - {{- if not .SkipTypesImp }} + {{- end }} + {{- if not .SkipTypesImp }} awstypes "github.com/aws/aws-sdk-go-v2/service/{{ .AWSService }}/types" - {{- end }} + {{- end }} {{- end }} {{- if .HelperSchemaPkg }} "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" diff --git a/internal/generate/tags/templates/v2/list_tags_body.tmpl b/internal/generate/tags/templates/v2/list_tags_body.tmpl index 0903ad7d0e7..a64ecc2db02 100644 --- a/internal/generate/tags/templates/v2/list_tags_body.tmpl +++ b/internal/generate/tags/templates/v2/list_tags_body.tmpl @@ -44,13 +44,14 @@ func {{ .ListTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier return tftags.New(ctx, nil), err } - return KeyValueTags(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}), nil + return {{ .KeyValueTagsFunc }}(ctx, output.{{ .ListTagsOutTagsElem }}{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}), nil } -// {{ .ListTagsFunc }} lists {{ .ServicePackage }} service tags and set them in Context. +{{- if .IsDefaultListTags }} +// {{ .ListTagsFunc | Title }} lists {{ .ServicePackage }} service tags and set them in Context. // It is called from outside this package. 
-func (p *servicePackage) {{ .ListTagsFunc }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string) error { - tags, err := {{ .ListTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Client(), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) +func (p *servicePackage) {{ .ListTagsFunc | Title }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string) error { + tags, err := {{ .ListTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Client(ctx), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) if err != nil { return err @@ -62,3 +63,4 @@ func (p *servicePackage) {{ .ListTagsFunc }}(ctx context.Context, meta any, iden return nil } +{{- end }} diff --git a/internal/generate/tags/templates/v2/service_tags_map_body.tmpl b/internal/generate/tags/templates/v2/service_tags_map_body.tmpl index d6097f32e60..8f72a4d3f09 100644 --- a/internal/generate/tags/templates/v2/service_tags_map_body.tmpl +++ b/internal/generate/tags/templates/v2/service_tags_map_body.tmpl @@ -1,20 +1,20 @@ // map[string]*string handling -// Tags returns {{ .ServicePackage }} service tags. -func Tags(tags tftags.KeyValueTags) map[string]*string { +// {{ .TagsFunc }} returns {{ .ServicePackage }} service tags. +func {{ .TagsFunc }}(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from {{ .ServicePackage }} service tags. -func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { +// {{ .KeyValueTagsFunc }} creates tftags.KeyValueTags from {{ .ServicePackage }} service tags. +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns {{ .ServicePackage }} service tags from Context. +// {{ .GetTagsInFunc }} returns {{ .ServicePackage }} service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func {{ .GetTagsInFunc }}(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { - if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + if tags := {{ .TagsFunc }}(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags } } @@ -22,10 +22,10 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets {{ .ServicePackage }} service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// {{ .SetTagsOutFunc }} sets {{ .ServicePackage }} service tags in Context. +func {{ .SetTagsOutFunc }}(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags)) } } diff --git a/internal/generate/tags/templates/v2/service_tags_slice_body.tmpl b/internal/generate/tags/templates/v2/service_tags_slice_body.tmpl index c701a32b4b2..4812d87bbd4 100644 --- a/internal/generate/tags/templates/v2/service_tags_slice_body.tmpl +++ b/internal/generate/tags/templates/v2/service_tags_slice_body.tmpl @@ -44,8 +44,8 @@ func TagKeys(tags tftags.KeyValueTags) []*{{ .AWSService }}.{{ .TagKeyType }} { } {{- end }} -// Tags returns {{ .ServicePackage }} service tags. -func Tags(tags tftags.KeyValueTags) []awstypes.{{ .TagType }} { +// {{ .TagsFunc }} returns {{ .ServicePackage }} service tags. +func {{ .TagsFunc }}(tags tftags.KeyValueTags) []awstypes.{{ .TagType }} { {{- if or ( .TagTypeIDElem ) ( .TagTypeAddBoolElem) }} var result []awstypes.{{ .TagType }} @@ -82,7 +82,7 @@ func Tags(tags tftags.KeyValueTags) []awstypes.{{ .TagType }} { return result } -// KeyValueTags creates tftags.KeyValueTags from {{ .AWSService }} service tags. 
+// {{ .KeyValueTagsFunc }} creates tftags.KeyValueTags from {{ .AWSService }} service tags. {{- if or ( .TagType2 ) ( .TagTypeAddBoolElem ) }} // // Accepts the following types: @@ -94,7 +94,7 @@ func Tags(tags tftags.KeyValueTags) []awstypes.{{ .TagType }} { // - []any (Terraform TypeList configuration block compatible) // - *schema.Set (Terraform TypeSet configuration block compatible) {{- end }} -func KeyValueTags(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) tftags.KeyValueTags { +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) tftags.KeyValueTags { switch tags := tags.(type) { case []awstypes.{{ .TagType }}: {{- if or ( .TagTypeIDElem ) ( .TagTypeAddBoolElem) }} @@ -164,7 +164,7 @@ func KeyValueTags(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifi return tftags.New(ctx, m) {{- if .TagTypeAddBoolElem }} case *schema.Set: - return KeyValueTags(tags.List(){{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) + return {{ .KeyValueTagsFunc }}(tags.List(){{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }}) case []any: result := make(map[string]*tftags.TagData) @@ -214,7 +214,7 @@ func KeyValueTags(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifi } } {{- else }} -func KeyValueTags(ctx context.Context, tags []awstypes.{{ .TagType }}) tftags.KeyValueTags { +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags []awstypes.{{ .TagType }}) tftags.KeyValueTags { m := make(map[string]*string, len(tags)) for _, tag := range tags { @@ -225,11 +225,11 @@ func KeyValueTags(ctx context.Context, tags []awstypes.{{ .TagType }}) tftags.Ke } {{- end }} -// GetTagsIn returns {{ .ServicePackage }} service tags from Context. 
+// {{ .GetTagsInFunc }} returns {{ .ServicePackage }} service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.{{ .TagType }} { +func {{ .GetTagsInFunc }}(ctx context.Context) []awstypes.{{ .TagType }} { if inContext, ok := tftags.FromContext(ctx); ok { - if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + if tags := {{ .TagsFunc }}(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags } } @@ -237,17 +237,17 @@ func GetTagsIn(ctx context.Context) []awstypes.{{ .TagType }} { return nil } -// SetTagsOut sets {{ .ServicePackage }} service tags in Context. +// {{ .SetTagsOutFunc }} sets {{ .ServicePackage }} service tags in Context. {{- if or ( .TagType2 ) ( .TagTypeAddBoolElem ) }} -func SetTagsOut(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) { +func {{ .SetTagsOutFunc }}(ctx context.Context, tags any{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string{{ end }}) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }})) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags{{ if .TagTypeIDElem }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}{{ end }})) } } {{- else }} -func SetTagsOut(ctx context.Context, tags []awstypes.{{ .TagType }}) { +func {{ .SetTagsOutFunc }}(ctx context.Context, tags []awstypes.{{ .TagType }}) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags)) } } {{- end }} @@ -259,6 +259,6 @@ func {{ .CreateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi return nil } - return {{ .UpdateTagsFunc }}(ctx, conn, identifier{{ if 
.TagResTypeElem }}, resourceType{{ end }}, nil, KeyValueTags(ctx, tags)) + return {{ .UpdateTagsFunc }}(ctx, conn, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, nil, {{ .KeyValueTagsFunc }}(ctx, tags)) } {{- end }} diff --git a/internal/generate/tags/templates/v2/service_tags_value_map_body.tmpl b/internal/generate/tags/templates/v2/service_tags_value_map_body.tmpl index 18513aac18e..b3918861970 100644 --- a/internal/generate/tags/templates/v2/service_tags_value_map_body.tmpl +++ b/internal/generate/tags/templates/v2/service_tags_value_map_body.tmpl @@ -1,20 +1,20 @@ // map[string]string handling -// Tags returns {{ .ServicePackage }} service tags. -func Tags(tags tftags.KeyValueTags) map[string]string { +// {{ .TagsFunc }} returns {{ .ServicePackage }} service tags. +func {{ .TagsFunc }}(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from {{ .ServicePackage }} service tags. -func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { +// {{ .KeyValueTagsFunc }} creates tftags.KeyValueTags from {{ .ServicePackage }} service tags. +func {{ .KeyValueTagsFunc }}(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns {{ .ServicePackage }} service tags from Context. +// {{ .GetTagsInFunc }} returns {{ .ServicePackage }} service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func {{ .GetTagsInFunc }}(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { - if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + if tags := {{ .TagsFunc }}(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags } } @@ -22,9 +22,20 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets {{ .ServicePackage }} service tags in Context. 
-func SetTagsOut(ctx context.Context, tags map[string]string) { +// {{ .SetTagsOutFunc }} sets {{ .ServicePackage }} service tags in Context. +func {{ .SetTagsOutFunc }}(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { - inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + inContext.TagsOut = types.Some({{ .KeyValueTagsFunc }}(ctx, tags)) } } + +{{- if ne .CreateTagsFunc "" }} +// {{ .CreateTagsFunc }} creates {{ .ServicePackage }} service tags for new resources. +func {{ .CreateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, tags map[string]string) error { + if len(tags) == 0 { + return nil + } + + return {{ .UpdateTagsFunc }}(ctx, conn, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, nil, tags) +} +{{- end }} diff --git a/internal/generate/tags/templates/v2/update_tags_body.tmpl b/internal/generate/tags/templates/v2/update_tags_body.tmpl index 06cd64b21f4..109af549783 100644 --- a/internal/generate/tags/templates/v2/update_tags_body.tmpl +++ b/internal/generate/tags/templates/v2/update_tags_body.tmpl @@ -3,8 +3,8 @@ // it may also be a different identifier depending on the service. 
{{if .TagTypeAddBoolElem -}} func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTagsSet, newTagsSet any) error { - oldTags := KeyValueTags(ctx, oldTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) - newTags := KeyValueTags(ctx, newTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) + oldTags := {{ .KeyValueTagsFunc }}(ctx, oldTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) + newTags := {{ .KeyValueTagsFunc }}(ctx, newTagsSet, identifier{{ if .TagResTypeElem }}, resourceType{{ end }}) {{- else -}} func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) @@ -39,12 +39,12 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi } if len(updatedTags) > 0 { - input.{{ .TagInTagsElem }} = Tags(updatedTags) + input.{{ .TagInTagsElem }} = {{ .TagsFunc }}(updatedTags) } if len(removedTags) > 0 { {{- if .UntagInNeedTagType }} - input.{{ .UntagInTagsElem }} = Tags(removedTags) + input.{{ .UntagInTagsElem }} = {{ .TagsFunc }}(removedTags) {{- else if .UntagInNeedTagKeyType }} input.{{ .UntagInTagsElem }} = TagKeys(removedTags) {{- else if .UntagInCustomVal }} @@ -82,7 +82,7 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi {{- end }} {{- end }} {{- if .UntagInNeedTagType }} - {{ .UntagInTagsElem }}: Tags(removedTags), + {{ .UntagInTagsElem }}: {{ .TagsFunc }}(removedTags), {{- else if .UntagInNeedTagKeyType }} {{ .UntagInTagsElem }}: TagKeys(removedTags), {{- else if .UntagInCustomVal }} @@ -124,7 +124,7 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi {{- if .TagInCustomVal }} {{ .TagInTagsElem }}: {{ .TagInCustomVal }}, {{- else }} - {{ .TagInTagsElem }}: Tags(updatedTags), + {{ 
.TagInTagsElem }}: {{ .TagsFunc }}(updatedTags), {{- end }} } @@ -140,11 +140,21 @@ func {{ .UpdateTagsFunc }}(ctx context.Context, conn {{ .ClientType }}, identifi {{- end }} + {{ if .WaitForPropagation }} + if len(removedTags) > 0 || len(updatedTags) > 0 { + if err := {{ .WaitTagsPropagatedFunc }}(ctx, conn, identifier, newTags); err != nil { + return fmt.Errorf("waiting for resource (%s) tag propagation: %w", identifier, err) + } + } + {{- end }} + return nil } -// {{ .UpdateTagsFunc }} updates {{ .ServicePackage }} service tags. +{{- if .IsDefaultUpdateTags }} +// {{ .UpdateTagsFunc | Title }} updates {{ .ServicePackage }} service tags. // It is called from outside this package. -func (p *servicePackage) {{ .UpdateTagsFunc }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTags, newTags any) error { - return {{ .UpdateTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Client(), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, oldTags, newTags) +func (p *servicePackage) {{ .UpdateTagsFunc | Title }}(ctx context.Context, meta any, identifier{{ if .TagResTypeElem }}, resourceType{{ end }} string, oldTags, newTags any) error { + return {{ .UpdateTagsFunc }}(ctx, meta.(*conns.AWSClient).{{ .ProviderNameUpper }}Client(ctx), identifier{{ if .TagResTypeElem }}, resourceType{{ end }}, oldTags, newTags) } +{{- end }} diff --git a/internal/generate/tags/templates/v2/v2.go b/internal/generate/tags/templates/v2/v2.go index c70913785cd..c148caaaf66 100644 --- a/internal/generate/tags/templates/v2/v2.go +++ b/internal/generate/tags/templates/v2/v2.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package v2 import ( @@ -27,3 +30,6 @@ var ServiceTagsSliceBody string //go:embed update_tags_body.tmpl var UpdateTagsBody string + +//go:embed wait_tags_propagated_body.tmpl +var WaitTagsPropagatedBody string diff --git a/internal/generate/tags/templates/v2/wait_tags_propagated_body.tmpl b/internal/generate/tags/templates/v2/wait_tags_propagated_body.tmpl new file mode 100644 index 00000000000..52e8d005ec5 --- /dev/null +++ b/internal/generate/tags/templates/v2/wait_tags_propagated_body.tmpl @@ -0,0 +1,34 @@ +// {{ .WaitTagsPropagatedFunc }} waits for {{ .ServicePackage }} service tags to be propagated. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func {{ .WaitTagsPropagatedFunc }}(ctx context.Context, conn {{ .ClientType }}, id string, tags tftags.KeyValueTags) error { + checkFunc := func() (bool, error) { + output, err := listTags(ctx, conn, id) + + if tfresource.NotFound(err) { + return false, nil + } + + if err != nil { + return false, err + } + + return output.Equal(tags), nil + } + opts := tfresource.WaitOpts{ + {{- if ne .WaitContinuousOccurence 0 }} + ContinuousTargetOccurence: {{ .WaitContinuousOccurence }}, + {{- end }} + {{- if ne .WaitDelay "" }} + Delay: {{ .WaitDelay }}, + {{- end }} + {{- if ne .WaitMinTimeout "" }} + MinTimeout: {{ .WaitMinTimeout }}, + {{- end }} + {{- if ne .WaitPollInterval "" }} + PollInterval: {{ .WaitPollInterval }}, + {{- end }} + } + + return tfresource.WaitUntil(ctx, {{ .WaitTimeout }}, checkFunc, opts) +} diff --git a/internal/generate/teamcity/acctest_services.hcl b/internal/generate/teamcity/acctest_services.hcl index ef2462fb3f5..2d227404f8b 100644 --- a/internal/generate/teamcity/acctest_services.hcl +++ b/internal/generate/teamcity/acctest_services.hcl @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + service "appautoscaling" { vpc_lock = true } diff --git a/internal/generate/teamcity/generate.go b/internal/generate/teamcity/generate.go index 23f36495932..3b2ffe1b263 100644 --- a/internal/generate/teamcity/generate.go +++ b/internal/generate/teamcity/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run services.go //go:generate go run provider_tests.go // ONLY generate directives and package declaration! Do not add anything else to this file. diff --git a/internal/generate/teamcity/provider_tests.go b/internal/generate/teamcity/provider_tests.go index 8b64370ba2b..0535b8846eb 100644 --- a/internal/generate/teamcity/provider_tests.go +++ b/internal/generate/teamcity/provider_tests.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/generate/teamcity/services.go b/internal/generate/teamcity/services.go index 01ba3c75641..01f786900ee 100644 --- a/internal/generate/teamcity/services.go +++ b/internal/generate/teamcity/services.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/maps/maps.go b/internal/maps/maps.go index caa1cc63353..81cc2c6a959 100644 --- a/internal/maps/maps.go +++ b/internal/maps/maps.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package maps // ApplyToAll returns a new map containing the results of applying the function `f` to each element of the original map `m`. diff --git a/internal/maps/maps_test.go b/internal/maps/maps_test.go index 39463729aa3..bb599f860b1 100644 --- a/internal/maps/maps_test.go +++ b/internal/maps/maps_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package maps import ( diff --git a/internal/provider/factory.go b/internal/provider/factory.go index d9e592c0396..07482f9b46f 100644 --- a/internal/provider/factory.go +++ b/internal/provider/factory.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider import ( diff --git a/internal/provider/factory_test.go b/internal/provider/factory_test.go index 4d4e22e9d0b..20b6b70d720 100644 --- a/internal/provider/factory_test.go +++ b/internal/provider/factory_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider_test import ( diff --git a/internal/provider/fwprovider/intercept.go b/internal/provider/fwprovider/intercept.go index df936bfc97e..8c786f7f04a 100644 --- a/internal/provider/fwprovider/intercept.go +++ b/internal/provider/fwprovider/intercept.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fwprovider import ( @@ -11,7 +14,7 @@ import ( fwtypes "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs" - "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/slices" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" diff --git a/internal/provider/fwprovider/provider.go b/internal/provider/fwprovider/provider.go index 96d85524215..7abd6f4276c 100644 --- a/internal/provider/fwprovider/provider.go +++ b/internal/provider/fwprovider/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fwprovider import ( @@ -84,6 +87,10 @@ func (p *fwprovider) Schema(ctx context.Context, req provider.SchemaRequest, res Optional: true, Description: "The region where AWS operations will take place. Examples\nare us-east-1, us-west-2, etc.", // lintignore:AWSAT003 }, + "retry_mode": schema.StringAttribute{ + Optional: true, + Description: "Specifies how retries are attempted. Valid values are `standard` and `adaptive`. Can also be configured using the `AWS_RETRY_MODE` environment variable.", + }, "s3_use_path_style": schema.BoolAttribute{ Optional: true, Description: "Set this to true to enable the request to use path-style addressing,\ni.e., https://s3.amazonaws.com/BUCKET/KEY. By default, the S3 client will\nuse virtual hosted bucket addressing when possible\n(https://BUCKET.s3.amazonaws.com/KEY). Specific to the Amazon S3 service.", diff --git a/internal/provider/generate.go b/internal/provider/generate.go new file mode 100644 index 00000000000..2590226a3ce --- /dev/null +++ b/internal/provider/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../generate/servicepackages/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package provider diff --git a/internal/provider/intercept.go b/internal/provider/intercept.go index ecda289d974..8d29f232fac 100644 --- a/internal/provider/intercept.go +++ b/internal/provider/intercept.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package provider import ( @@ -359,7 +362,7 @@ func (r tagsInterceptor) run(ctx context.Context, d schemaResourceData, meta any case Finally: switch why { case Update: - if !d.GetRawPlan().GetAttr(names.AttrTagsAll).IsWhollyKnown() { + if r.tags.IdentifierAttribute != "" && !d.GetRawPlan().GetAttr(names.AttrTagsAll).IsWhollyKnown() { ctx, diags = r.updateFunc(ctx, d, sp, r.tags, serviceName, resourceName, meta, diags) ctx, diags = r.readFunc(ctx, d, sp, r.tags, serviceName, resourceName, meta, diags) } diff --git a/internal/provider/intercept_test.go b/internal/provider/intercept_test.go index 64748c5107b..c9df335b6b4 100644 --- a/internal/provider/intercept_test.go +++ b/internal/provider/intercept_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider import ( diff --git a/internal/provider/provider.go b/internal/provider/provider.go index d06e2e26ea5..aefa030db52 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package provider import ( @@ -8,6 +11,7 @@ import ( "regexp" "time" + "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/feature/ec2/imds" awsbase "github.com/hashicorp/aws-sdk-go-base/v2" multierror "github.com/hashicorp/go-multierror" @@ -16,9 +20,9 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -41,7 +45,6 @@ func New(ctx context.Context) (*schema.Provider, error) { Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, ConflictsWith: []string{"forbidden_account_ids"}, - Set: schema.HashString, }, "assume_role": assumeRoleSchema(), "assume_role_with_web_identity": assumeRoleWithWebIdentitySchema(), @@ -86,7 +89,6 @@ func New(ctx context.Context) (*schema.Provider, error) { Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, ConflictsWith: []string{"allowed_account_ids"}, - Set: schema.HashString, }, "http_proxy": { Type: schema.TypeString, @@ -105,14 +107,12 @@ func New(ctx context.Context) (*schema.Provider, error) { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, Description: "Resource tag keys to ignore across all resources.", }, "key_prefixes": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, Description: "Resource tag key prefixes to ignore across all resources.", }, }, @@ -143,6 +143,12 @@ func New(ctx context.Context) 
(*schema.Provider, error) { Description: "The region where AWS operations will take place. Examples\n" + "are us-east-1, us-west-2, etc.", // lintignore:AWSAT003, }, + "retry_mode": { + Type: schema.TypeString, + Optional: true, + Description: "Specifies how retries are attempted. Valid values are `standard` and `adaptive`. " + + "Can also be configured using the `AWS_RETRY_MODE` environment variable.", + }, "s3_use_path_style": { Type: schema.TypeBool, Optional: true, @@ -316,9 +322,11 @@ func New(ctx context.Context) (*schema.Provider, error) { interceptors := interceptorItems{} if v.Tags != nil { + schema := r.SchemaMap() + // The resource has opted in to transparent tagging. // Ensure that the schema look OK. - if v, ok := r.Schema[names.AttrTags]; ok { + if v, ok := schema[names.AttrTags]; ok { if v.Computed { errs = multierror.Append(errs, fmt.Errorf("`%s` attribute cannot be Computed: %s", names.AttrTags, typeName)) continue @@ -327,7 +335,7 @@ func New(ctx context.Context) (*schema.Provider, error) { errs = multierror.Append(errs, fmt.Errorf("no `%s` attribute defined in schema: %s", names.AttrTags, typeName)) continue } - if v, ok := r.Schema[names.AttrTagsAll]; ok { + if v, ok := schema[names.AttrTagsAll]; ok { if !v.Computed { errs = multierror.Append(errs, fmt.Errorf("`%s` attribute must be Computed: %s", names.AttrTags, typeName)) continue @@ -434,6 +442,14 @@ func configure(ctx context.Context, provider *schema.Provider, d *schema.Resourc UseFIPSEndpoint: d.Get("use_fips_endpoint").(bool), } + if v, ok := d.Get("retry_mode").(string); ok && v != "" { + mode, err := aws.ParseRetryMode(v) + if err != nil { + return nil, diag.FromErr(err) + } + config.RetryMode = mode + } + if v, ok := d.GetOk("allowed_account_ids"); ok && v.(*schema.Set).Len() > 0 { config.AllowedAccountIds = flex.ExpandStringValueSet(v.(*schema.Set)) } diff --git a/internal/provider/provider_acc_test.go b/internal/provider/provider_acc_test.go index 658b0539bf4..34cb8088a24 100644 --- 
a/internal/provider/provider_acc_test.go +++ b/internal/provider/provider_acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider_test import ( @@ -27,13 +30,13 @@ func TestAccProvider_DefaultTags_emptyBlock(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_defaultTagsEmptyConfigurationBlock(), Check: resource.ComposeTestCheckFunc( - testAccCheckProviderDefaultTags_Tags(t, &provider, map[string]string{}), + testAccCheckProviderDefaultTags_Tags(ctx, t, &provider, map[string]string{}), ), }, }, @@ -47,13 +50,13 @@ func TestAccProvider_DefaultTagsTags_none(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { // nosemgrep:ci.test-config-funcs-correct-form Config: acctest.ConfigDefaultTags_Tags0(), Check: resource.ComposeTestCheckFunc( - testAccCheckProviderDefaultTags_Tags(t, &provider, map[string]string{}), + testAccCheckProviderDefaultTags_Tags(ctx, t, &provider, map[string]string{}), ), }, }, @@ -67,13 +70,13 @@ func TestAccProvider_DefaultTagsTags_one(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), 
CheckDestroy: nil, Steps: []resource.TestStep{ { // nosemgrep:ci.test-config-funcs-correct-form Config: acctest.ConfigDefaultTags_Tags1("test", "value"), Check: resource.ComposeTestCheckFunc( - testAccCheckProviderDefaultTags_Tags(t, &provider, map[string]string{"test": "value"}), + testAccCheckProviderDefaultTags_Tags(ctx, t, &provider, map[string]string{"test": "value"}), ), }, }, @@ -87,13 +90,13 @@ func TestAccProvider_DefaultTagsTags_multiple(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { // nosemgrep:ci.test-config-funcs-correct-form Config: acctest.ConfigDefaultTags_Tags2("test1", "value1", "test2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckProviderDefaultTags_Tags(t, &provider, map[string]string{ + testAccCheckProviderDefaultTags_Tags(ctx, t, &provider, map[string]string{ "test1": "value1", "test2": "value2", }), @@ -110,15 +113,15 @@ func TestAccProvider_DefaultAndIgnoreTags_emptyBlocks(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_defaultAndIgnoreTagsEmptyConfigurationBlock(), Check: resource.ComposeTestCheckFunc( - testAccCheckProviderDefaultTags_Tags(t, &provider, map[string]string{}), - testAccCheckIgnoreTagsKeys(t, &provider, []string{}), - testAccCheckIgnoreTagsKeyPrefixes(t, &provider, []string{}), + testAccCheckProviderDefaultTags_Tags(ctx, t, &provider, map[string]string{}), + 
testAccCheckIgnoreTagsKeys(ctx, t, &provider, []string{}), + testAccCheckIgnoreTagsKeyPrefixes(ctx, t, &provider, []string{}), ), }, }, @@ -138,13 +141,13 @@ func TestAccProvider_endpoints(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_endpoints(endpoints.String()), Check: resource.ComposeTestCheckFunc( - testAccCheckEndpoints(&provider), + testAccCheckEndpoints(ctx, &provider), ), }, }, @@ -188,15 +191,15 @@ func TestAccProvider_unusualEndpoints(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_unusualEndpoints(unusual1, unusual2, unusual3), Check: resource.ComposeTestCheckFunc( - testAccCheckUnusualEndpoints(&provider, unusual1), - testAccCheckUnusualEndpoints(&provider, unusual2), - testAccCheckUnusualEndpoints(&provider, unusual3), + testAccCheckUnusualEndpoints(ctx, &provider, unusual1), + testAccCheckUnusualEndpoints(ctx, &provider, unusual2), + testAccCheckUnusualEndpoints(ctx, &provider, unusual3), ), }, }, @@ -210,14 +213,14 @@ func TestAccProvider_IgnoreTags_emptyBlock(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), 
CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsEmptyConfigurationBlock(), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeys(t, &provider, []string{}), - testAccCheckIgnoreTagsKeyPrefixes(t, &provider, []string{}), + testAccCheckIgnoreTagsKeys(ctx, t, &provider, []string{}), + testAccCheckIgnoreTagsKeyPrefixes(ctx, t, &provider, []string{}), ), }, }, @@ -231,13 +234,13 @@ func TestAccProvider_IgnoreTagsKeyPrefixes_none(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsKeyPrefixes0(), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeyPrefixes(t, &provider, []string{}), + testAccCheckIgnoreTagsKeyPrefixes(ctx, t, &provider, []string{}), ), }, }, @@ -251,13 +254,13 @@ func TestAccProvider_IgnoreTagsKeyPrefixes_one(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsKeyPrefixes3("test"), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeyPrefixes(t, &provider, []string{"test"}), + testAccCheckIgnoreTagsKeyPrefixes(ctx, t, &provider, []string{"test"}), ), }, }, @@ -271,13 +274,13 @@ func TestAccProvider_IgnoreTagsKeyPrefixes_multiple(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - 
ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsKeyPrefixes2("test1", "test2"), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeyPrefixes(t, &provider, []string{"test1", "test2"}), + testAccCheckIgnoreTagsKeyPrefixes(ctx, t, &provider, []string{"test1", "test2"}), ), }, }, @@ -291,13 +294,13 @@ func TestAccProvider_IgnoreTagsKeys_none(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsKeys0(), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeys(t, &provider, []string{}), + testAccCheckIgnoreTagsKeys(ctx, t, &provider, []string{}), ), }, }, @@ -311,13 +314,13 @@ func TestAccProvider_IgnoreTagsKeys_one(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsKeys1("test"), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeys(t, &provider, []string{"test"}), + testAccCheckIgnoreTagsKeys(ctx, t, &provider, []string{"test"}), ), }, }, @@ -331,13 +334,13 @@ func TestAccProvider_IgnoreTagsKeys_multiple(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: 
acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_ignoreTagsKeys2("test1", "test2"), Check: resource.ComposeTestCheckFunc( - testAccCheckIgnoreTagsKeys(t, &provider, []string{"test1", "test2"}), + testAccCheckIgnoreTagsKeys(ctx, t, &provider, []string{"test1", "test2"}), ), }, }, @@ -351,15 +354,15 @@ func TestAccProvider_Region_c2s(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_region(endpoints.UsIsoEast1RegionID), Check: resource.ComposeTestCheckFunc( - testAccCheckDNSSuffix(t, &provider, "c2s.ic.gov"), - testAccCheckPartition(t, &provider, endpoints.AwsIsoPartitionID), - testAccCheckReverseDNSPrefix(t, &provider, "gov.ic.c2s"), + testAccCheckDNSSuffix(ctx, t, &provider, "c2s.ic.gov"), + testAccCheckPartition(ctx, t, &provider, endpoints.AwsIsoPartitionID), + testAccCheckReverseDNSPrefix(ctx, t, &provider, "gov.ic.c2s"), ), PlanOnly: true, }, @@ -374,15 +377,15 @@ func TestAccProvider_Region_china(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_region(endpoints.CnNorthwest1RegionID), Check: resource.ComposeTestCheckFunc( - testAccCheckDNSSuffix(t, &provider, 
"amazonaws.com.cn"), - testAccCheckPartition(t, &provider, endpoints.AwsCnPartitionID), - testAccCheckReverseDNSPrefix(t, &provider, "cn.com.amazonaws"), + testAccCheckDNSSuffix(ctx, t, &provider, "amazonaws.com.cn"), + testAccCheckPartition(ctx, t, &provider, endpoints.AwsCnPartitionID), + testAccCheckReverseDNSPrefix(ctx, t, &provider, "cn.com.amazonaws"), ), PlanOnly: true, }, @@ -397,15 +400,15 @@ func TestAccProvider_Region_commercial(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_region(endpoints.UsWest2RegionID), Check: resource.ComposeTestCheckFunc( - testAccCheckDNSSuffix(t, &provider, "amazonaws.com"), - testAccCheckPartition(t, &provider, endpoints.AwsPartitionID), - testAccCheckReverseDNSPrefix(t, &provider, "com.amazonaws"), + testAccCheckDNSSuffix(ctx, t, &provider, "amazonaws.com"), + testAccCheckPartition(ctx, t, &provider, endpoints.AwsPartitionID), + testAccCheckReverseDNSPrefix(ctx, t, &provider, "com.amazonaws"), ), PlanOnly: true, }, @@ -420,15 +423,15 @@ func TestAccProvider_Region_govCloud(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_region(endpoints.UsGovWest1RegionID), Check: resource.ComposeTestCheckFunc( - testAccCheckDNSSuffix(t, &provider, "amazonaws.com"), - testAccCheckPartition(t, &provider, endpoints.AwsUsGovPartitionID), - 
testAccCheckReverseDNSPrefix(t, &provider, "com.amazonaws"), + testAccCheckDNSSuffix(ctx, t, &provider, "amazonaws.com"), + testAccCheckPartition(ctx, t, &provider, endpoints.AwsUsGovPartitionID), + testAccCheckReverseDNSPrefix(ctx, t, &provider, "com.amazonaws"), ), PlanOnly: true, }, @@ -443,15 +446,15 @@ func TestAccProvider_Region_sc2s(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_region(endpoints.UsIsobEast1RegionID), Check: resource.ComposeTestCheckFunc( - testAccCheckDNSSuffix(t, &provider, "sc2s.sgov.gov"), - testAccCheckPartition(t, &provider, endpoints.AwsIsoBPartitionID), - testAccCheckReverseDNSPrefix(t, &provider, "gov.sgov.sc2s"), + testAccCheckDNSSuffix(ctx, t, &provider, "sc2s.sgov.gov"), + testAccCheckPartition(ctx, t, &provider, endpoints.AwsIsoBPartitionID), + testAccCheckReverseDNSPrefix(ctx, t, &provider, "gov.sgov.sc2s"), ), PlanOnly: true, }, @@ -466,14 +469,14 @@ func TestAccProvider_Region_stsRegion(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t), - ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(t, &provider), + ProtoV5ProviderFactories: testAccProtoV5ProviderFactoriesInternal(ctx, t, &provider), CheckDestroy: nil, Steps: []resource.TestStep{ { Config: testAccProviderConfig_stsRegion(endpoints.UsEast1RegionID, endpoints.UsWest2RegionID), Check: resource.ComposeTestCheckFunc( - testAccCheckRegion(t, &provider, endpoints.UsEast1RegionID), - testAccCheckSTSRegion(t, &provider, endpoints.UsWest2RegionID), + testAccCheckRegion(ctx, t, &provider, endpoints.UsEast1RegionID), + 
testAccCheckSTSRegion(ctx, t, &provider, endpoints.UsWest2RegionID), ), PlanOnly: true, }, @@ -499,8 +502,8 @@ func TestAccProvider_AssumeRole_empty(t *testing.T) { }) } -func testAccProtoV5ProviderFactoriesInternal(t *testing.T, v **schema.Provider) map[string]func() (tfprotov5.ProviderServer, error) { - providerServerFactory, p, err := provider.ProtoV5ProviderServerFactory(context.Background()) +func testAccProtoV5ProviderFactoriesInternal(ctx context.Context, t *testing.T, v **schema.Provider) map[string]func() (tfprotov5.ProviderServer, error) { + providerServerFactory, p, err := provider.ProtoV5ProviderServerFactory(ctx) if err != nil { t.Fatal(err) @@ -516,7 +519,7 @@ func testAccProtoV5ProviderFactoriesInternal(t *testing.T, v **schema.Provider) } } -func testAccCheckPartition(t *testing.T, p **schema.Provider, expectedPartition string) resource.TestCheckFunc { //nolint:unparam +func testAccCheckPartition(ctx context.Context, t *testing.T, p **schema.Provider, expectedPartition string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -532,7 +535,7 @@ func testAccCheckPartition(t *testing.T, p **schema.Provider, expectedPartition } } -func testAccCheckDNSSuffix(t *testing.T, p **schema.Provider, expectedDnsSuffix string) resource.TestCheckFunc { //nolint:unparam +func testAccCheckDNSSuffix(ctx context.Context, t *testing.T, p **schema.Provider, expectedDnsSuffix string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -548,7 +551,7 @@ func testAccCheckDNSSuffix(t *testing.T, p **schema.Provider, expectedDnsSuffix } } -func testAccCheckRegion(t *testing.T, p **schema.Provider, expectedRegion string) resource.TestCheckFunc 
{ //nolint:unparam +func testAccCheckRegion(ctx context.Context, t *testing.T, p **schema.Provider, expectedRegion string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -562,13 +565,13 @@ func testAccCheckRegion(t *testing.T, p **schema.Provider, expectedRegion string } } -func testAccCheckSTSRegion(t *testing.T, p **schema.Provider, expectedRegion string) resource.TestCheckFunc { //nolint:unparam +func testAccCheckSTSRegion(ctx context.Context, t *testing.T, p **schema.Provider, expectedRegion string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") } - stsRegion := aws.StringValue((*p).Meta().(*conns.AWSClient).STSConn().Config.Region) + stsRegion := aws.StringValue((*p).Meta().(*conns.AWSClient).STSConn(ctx).Config.Region) if stsRegion != expectedRegion { return fmt.Errorf("expected STS Region (%s), got: %s", expectedRegion, stsRegion) @@ -578,7 +581,7 @@ func testAccCheckSTSRegion(t *testing.T, p **schema.Provider, expectedRegion str } } -func testAccCheckReverseDNSPrefix(t *testing.T, p **schema.Provider, expectedReverseDnsPrefix string) resource.TestCheckFunc { //nolint:unparam +func testAccCheckReverseDNSPrefix(ctx context.Context, t *testing.T, p **schema.Provider, expectedReverseDnsPrefix string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -593,7 +596,7 @@ func testAccCheckReverseDNSPrefix(t *testing.T, p **schema.Provider, expectedRev } } -func testAccCheckIgnoreTagsKeyPrefixes(t *testing.T, p **schema.Provider, expectedKeyPrefixes []string) 
resource.TestCheckFunc { //nolint:unparam +func testAccCheckIgnoreTagsKeyPrefixes(ctx context.Context, t *testing.T, p **schema.Provider, expectedKeyPrefixes []string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -650,7 +653,7 @@ func testAccCheckIgnoreTagsKeyPrefixes(t *testing.T, p **schema.Provider, expect } } -func testAccCheckIgnoreTagsKeys(t *testing.T, p **schema.Provider, expectedKeys []string) resource.TestCheckFunc { //nolint:unparam +func testAccCheckIgnoreTagsKeys(ctx context.Context, t *testing.T, p **schema.Provider, expectedKeys []string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -707,7 +710,7 @@ func testAccCheckIgnoreTagsKeys(t *testing.T, p **schema.Provider, expectedKeys } } -func testAccCheckProviderDefaultTags_Tags(t *testing.T, p **schema.Provider, expectedTags map[string]string) resource.TestCheckFunc { //nolint:unparam +func testAccCheckProviderDefaultTags_Tags(ctx context.Context, t *testing.T, p **schema.Provider, expectedTags map[string]string) resource.TestCheckFunc { //nolint:unparam return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") @@ -764,7 +767,7 @@ func testAccCheckProviderDefaultTags_Tags(t *testing.T, p **schema.Provider, exp } } -func testAccCheckEndpoints(p **schema.Provider) resource.TestCheckFunc { +func testAccCheckEndpoints(_ context.Context, p **schema.Provider) resource.TestCheckFunc { return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not 
initialized") @@ -803,7 +806,7 @@ func testAccCheckEndpoints(p **schema.Provider) resource.TestCheckFunc { } } -func testAccCheckUnusualEndpoints(p **schema.Provider, unusual unusualEndpoint) resource.TestCheckFunc { +func testAccCheckUnusualEndpoints(_ context.Context, p **schema.Provider, unusual unusualEndpoint) resource.TestCheckFunc { return func(s *terraform.State) error { if p == nil || *p == nil || (*p).Meta() == nil || (*p).Meta().(*conns.AWSClient) == nil { return fmt.Errorf("provider not initialized") diff --git a/internal/provider/provider_test.go b/internal/provider/provider_test.go index 638780b0121..d1e02e09bba 100644 --- a/internal/provider/provider_test.go +++ b/internal/provider/provider_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider import ( diff --git a/internal/provider/service_packages_gen.go b/internal/provider/service_packages_gen.go index 523a637b3be..3877334b344 100644 --- a/internal/provider/service_packages_gen.go +++ b/internal/provider/service_packages_gen.go @@ -89,6 +89,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/service/emrserverless" "github.com/hashicorp/terraform-provider-aws/internal/service/events" "github.com/hashicorp/terraform-provider-aws/internal/service/evidently" + "github.com/hashicorp/terraform-provider-aws/internal/service/finspace" "github.com/hashicorp/terraform-provider-aws/internal/service/firehose" "github.com/hashicorp/terraform-provider-aws/internal/service/fis" "github.com/hashicorp/terraform-provider-aws/internal/service/fms" @@ -200,6 +201,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/service/timestreamwrite" "github.com/hashicorp/terraform-provider-aws/internal/service/transcribe" "github.com/hashicorp/terraform-provider-aws/internal/service/transfer" + "github.com/hashicorp/terraform-provider-aws/internal/service/verifiedpermissions" 
"github.com/hashicorp/terraform-provider-aws/internal/service/vpclattice" "github.com/hashicorp/terraform-provider-aws/internal/service/waf" "github.com/hashicorp/terraform-provider-aws/internal/service/wafregional" @@ -210,209 +212,211 @@ import ( "golang.org/x/exp/slices" ) -func servicePackages(context.Context) []conns.ServicePackage { +func servicePackages(ctx context.Context) []conns.ServicePackage { v := []conns.ServicePackage{ - accessanalyzer.ServicePackage, - account.ServicePackage, - acm.ServicePackage, - acmpca.ServicePackage, - amp.ServicePackage, - amplify.ServicePackage, - apigateway.ServicePackage, - apigatewayv2.ServicePackage, - appautoscaling.ServicePackage, - appconfig.ServicePackage, - appflow.ServicePackage, - appintegrations.ServicePackage, - applicationinsights.ServicePackage, - appmesh.ServicePackage, - apprunner.ServicePackage, - appstream.ServicePackage, - appsync.ServicePackage, - athena.ServicePackage, - auditmanager.ServicePackage, - autoscaling.ServicePackage, - autoscalingplans.ServicePackage, - backup.ServicePackage, - batch.ServicePackage, - budgets.ServicePackage, - ce.ServicePackage, - chime.ServicePackage, - chimesdkmediapipelines.ServicePackage, - chimesdkvoice.ServicePackage, - cleanrooms.ServicePackage, - cloud9.ServicePackage, - cloudcontrol.ServicePackage, - cloudformation.ServicePackage, - cloudfront.ServicePackage, - cloudhsmv2.ServicePackage, - cloudsearch.ServicePackage, - cloudtrail.ServicePackage, - cloudwatch.ServicePackage, - codeartifact.ServicePackage, - codebuild.ServicePackage, - codecommit.ServicePackage, - codegurureviewer.ServicePackage, - codepipeline.ServicePackage, - codestarconnections.ServicePackage, - codestarnotifications.ServicePackage, - cognitoidentity.ServicePackage, - cognitoidp.ServicePackage, - comprehend.ServicePackage, - computeoptimizer.ServicePackage, - configservice.ServicePackage, - connect.ServicePackage, - controltower.ServicePackage, - cur.ServicePackage, - dataexchange.ServicePackage, - 
datapipeline.ServicePackage, - datasync.ServicePackage, - dax.ServicePackage, - deploy.ServicePackage, - detective.ServicePackage, - devicefarm.ServicePackage, - directconnect.ServicePackage, - dlm.ServicePackage, - dms.ServicePackage, - docdb.ServicePackage, - docdbelastic.ServicePackage, - ds.ServicePackage, - dynamodb.ServicePackage, - ec2.ServicePackage, - ecr.ServicePackage, - ecrpublic.ServicePackage, - ecs.ServicePackage, - efs.ServicePackage, - eks.ServicePackage, - elasticache.ServicePackage, - elasticbeanstalk.ServicePackage, - elasticsearch.ServicePackage, - elastictranscoder.ServicePackage, - elb.ServicePackage, - elbv2.ServicePackage, - emr.ServicePackage, - emrcontainers.ServicePackage, - emrserverless.ServicePackage, - events.ServicePackage, - evidently.ServicePackage, - firehose.ServicePackage, - fis.ServicePackage, - fms.ServicePackage, - fsx.ServicePackage, - gamelift.ServicePackage, - glacier.ServicePackage, - globalaccelerator.ServicePackage, - glue.ServicePackage, - grafana.ServicePackage, - greengrass.ServicePackage, - guardduty.ServicePackage, - healthlake.ServicePackage, - iam.ServicePackage, - identitystore.ServicePackage, - imagebuilder.ServicePackage, - inspector.ServicePackage, - inspector2.ServicePackage, - internetmonitor.ServicePackage, - iot.ServicePackage, - iotanalytics.ServicePackage, - iotevents.ServicePackage, - ivs.ServicePackage, - ivschat.ServicePackage, - kafka.ServicePackage, - kafkaconnect.ServicePackage, - kendra.ServicePackage, - keyspaces.ServicePackage, - kinesis.ServicePackage, - kinesisanalytics.ServicePackage, - kinesisanalyticsv2.ServicePackage, - kinesisvideo.ServicePackage, - kms.ServicePackage, - lakeformation.ServicePackage, - lambda.ServicePackage, - lexmodels.ServicePackage, - licensemanager.ServicePackage, - lightsail.ServicePackage, - location.ServicePackage, - logs.ServicePackage, - macie2.ServicePackage, - mediaconnect.ServicePackage, - mediaconvert.ServicePackage, - medialive.ServicePackage, - 
mediapackage.ServicePackage, - mediastore.ServicePackage, - memorydb.ServicePackage, - meta.ServicePackage, - mq.ServicePackage, - mwaa.ServicePackage, - neptune.ServicePackage, - networkfirewall.ServicePackage, - networkmanager.ServicePackage, - oam.ServicePackage, - opensearch.ServicePackage, - opensearchserverless.ServicePackage, - opsworks.ServicePackage, - organizations.ServicePackage, - outposts.ServicePackage, - pinpoint.ServicePackage, - pipes.ServicePackage, - pricing.ServicePackage, - qldb.ServicePackage, - quicksight.ServicePackage, - ram.ServicePackage, - rbin.ServicePackage, - rds.ServicePackage, - redshift.ServicePackage, - redshiftdata.ServicePackage, - redshiftserverless.ServicePackage, - resourceexplorer2.ServicePackage, - resourcegroups.ServicePackage, - resourcegroupstaggingapi.ServicePackage, - rolesanywhere.ServicePackage, - route53.ServicePackage, - route53domains.ServicePackage, - route53recoverycontrolconfig.ServicePackage, - route53recoveryreadiness.ServicePackage, - route53resolver.ServicePackage, - rum.ServicePackage, - s3.ServicePackage, - s3control.ServicePackage, - s3outposts.ServicePackage, - sagemaker.ServicePackage, - scheduler.ServicePackage, - schemas.ServicePackage, - secretsmanager.ServicePackage, - securityhub.ServicePackage, - securitylake.ServicePackage, - serverlessrepo.ServicePackage, - servicecatalog.ServicePackage, - servicediscovery.ServicePackage, - servicequotas.ServicePackage, - ses.ServicePackage, - sesv2.ServicePackage, - sfn.ServicePackage, - shield.ServicePackage, - signer.ServicePackage, - simpledb.ServicePackage, - sns.ServicePackage, - sqs.ServicePackage, - ssm.ServicePackage, - ssmcontacts.ServicePackage, - ssmincidents.ServicePackage, - ssoadmin.ServicePackage, - storagegateway.ServicePackage, - sts.ServicePackage, - swf.ServicePackage, - synthetics.ServicePackage, - timestreamwrite.ServicePackage, - transcribe.ServicePackage, - transfer.ServicePackage, - vpclattice.ServicePackage, - waf.ServicePackage, - 
wafregional.ServicePackage, - wafv2.ServicePackage, - worklink.ServicePackage, - workspaces.ServicePackage, - xray.ServicePackage, + accessanalyzer.ServicePackage(ctx), + account.ServicePackage(ctx), + acm.ServicePackage(ctx), + acmpca.ServicePackage(ctx), + amp.ServicePackage(ctx), + amplify.ServicePackage(ctx), + apigateway.ServicePackage(ctx), + apigatewayv2.ServicePackage(ctx), + appautoscaling.ServicePackage(ctx), + appconfig.ServicePackage(ctx), + appflow.ServicePackage(ctx), + appintegrations.ServicePackage(ctx), + applicationinsights.ServicePackage(ctx), + appmesh.ServicePackage(ctx), + apprunner.ServicePackage(ctx), + appstream.ServicePackage(ctx), + appsync.ServicePackage(ctx), + athena.ServicePackage(ctx), + auditmanager.ServicePackage(ctx), + autoscaling.ServicePackage(ctx), + autoscalingplans.ServicePackage(ctx), + backup.ServicePackage(ctx), + batch.ServicePackage(ctx), + budgets.ServicePackage(ctx), + ce.ServicePackage(ctx), + chime.ServicePackage(ctx), + chimesdkmediapipelines.ServicePackage(ctx), + chimesdkvoice.ServicePackage(ctx), + cleanrooms.ServicePackage(ctx), + cloud9.ServicePackage(ctx), + cloudcontrol.ServicePackage(ctx), + cloudformation.ServicePackage(ctx), + cloudfront.ServicePackage(ctx), + cloudhsmv2.ServicePackage(ctx), + cloudsearch.ServicePackage(ctx), + cloudtrail.ServicePackage(ctx), + cloudwatch.ServicePackage(ctx), + codeartifact.ServicePackage(ctx), + codebuild.ServicePackage(ctx), + codecommit.ServicePackage(ctx), + codegurureviewer.ServicePackage(ctx), + codepipeline.ServicePackage(ctx), + codestarconnections.ServicePackage(ctx), + codestarnotifications.ServicePackage(ctx), + cognitoidentity.ServicePackage(ctx), + cognitoidp.ServicePackage(ctx), + comprehend.ServicePackage(ctx), + computeoptimizer.ServicePackage(ctx), + configservice.ServicePackage(ctx), + connect.ServicePackage(ctx), + controltower.ServicePackage(ctx), + cur.ServicePackage(ctx), + dataexchange.ServicePackage(ctx), + datapipeline.ServicePackage(ctx), + 
datasync.ServicePackage(ctx), + dax.ServicePackage(ctx), + deploy.ServicePackage(ctx), + detective.ServicePackage(ctx), + devicefarm.ServicePackage(ctx), + directconnect.ServicePackage(ctx), + dlm.ServicePackage(ctx), + dms.ServicePackage(ctx), + docdb.ServicePackage(ctx), + docdbelastic.ServicePackage(ctx), + ds.ServicePackage(ctx), + dynamodb.ServicePackage(ctx), + ec2.ServicePackage(ctx), + ecr.ServicePackage(ctx), + ecrpublic.ServicePackage(ctx), + ecs.ServicePackage(ctx), + efs.ServicePackage(ctx), + eks.ServicePackage(ctx), + elasticache.ServicePackage(ctx), + elasticbeanstalk.ServicePackage(ctx), + elasticsearch.ServicePackage(ctx), + elastictranscoder.ServicePackage(ctx), + elb.ServicePackage(ctx), + elbv2.ServicePackage(ctx), + emr.ServicePackage(ctx), + emrcontainers.ServicePackage(ctx), + emrserverless.ServicePackage(ctx), + events.ServicePackage(ctx), + evidently.ServicePackage(ctx), + finspace.ServicePackage(ctx), + firehose.ServicePackage(ctx), + fis.ServicePackage(ctx), + fms.ServicePackage(ctx), + fsx.ServicePackage(ctx), + gamelift.ServicePackage(ctx), + glacier.ServicePackage(ctx), + globalaccelerator.ServicePackage(ctx), + glue.ServicePackage(ctx), + grafana.ServicePackage(ctx), + greengrass.ServicePackage(ctx), + guardduty.ServicePackage(ctx), + healthlake.ServicePackage(ctx), + iam.ServicePackage(ctx), + identitystore.ServicePackage(ctx), + imagebuilder.ServicePackage(ctx), + inspector.ServicePackage(ctx), + inspector2.ServicePackage(ctx), + internetmonitor.ServicePackage(ctx), + iot.ServicePackage(ctx), + iotanalytics.ServicePackage(ctx), + iotevents.ServicePackage(ctx), + ivs.ServicePackage(ctx), + ivschat.ServicePackage(ctx), + kafka.ServicePackage(ctx), + kafkaconnect.ServicePackage(ctx), + kendra.ServicePackage(ctx), + keyspaces.ServicePackage(ctx), + kinesis.ServicePackage(ctx), + kinesisanalytics.ServicePackage(ctx), + kinesisanalyticsv2.ServicePackage(ctx), + kinesisvideo.ServicePackage(ctx), + kms.ServicePackage(ctx), + 
lakeformation.ServicePackage(ctx), + lambda.ServicePackage(ctx), + lexmodels.ServicePackage(ctx), + licensemanager.ServicePackage(ctx), + lightsail.ServicePackage(ctx), + location.ServicePackage(ctx), + logs.ServicePackage(ctx), + macie2.ServicePackage(ctx), + mediaconnect.ServicePackage(ctx), + mediaconvert.ServicePackage(ctx), + medialive.ServicePackage(ctx), + mediapackage.ServicePackage(ctx), + mediastore.ServicePackage(ctx), + memorydb.ServicePackage(ctx), + meta.ServicePackage(ctx), + mq.ServicePackage(ctx), + mwaa.ServicePackage(ctx), + neptune.ServicePackage(ctx), + networkfirewall.ServicePackage(ctx), + networkmanager.ServicePackage(ctx), + oam.ServicePackage(ctx), + opensearch.ServicePackage(ctx), + opensearchserverless.ServicePackage(ctx), + opsworks.ServicePackage(ctx), + organizations.ServicePackage(ctx), + outposts.ServicePackage(ctx), + pinpoint.ServicePackage(ctx), + pipes.ServicePackage(ctx), + pricing.ServicePackage(ctx), + qldb.ServicePackage(ctx), + quicksight.ServicePackage(ctx), + ram.ServicePackage(ctx), + rbin.ServicePackage(ctx), + rds.ServicePackage(ctx), + redshift.ServicePackage(ctx), + redshiftdata.ServicePackage(ctx), + redshiftserverless.ServicePackage(ctx), + resourceexplorer2.ServicePackage(ctx), + resourcegroups.ServicePackage(ctx), + resourcegroupstaggingapi.ServicePackage(ctx), + rolesanywhere.ServicePackage(ctx), + route53.ServicePackage(ctx), + route53domains.ServicePackage(ctx), + route53recoverycontrolconfig.ServicePackage(ctx), + route53recoveryreadiness.ServicePackage(ctx), + route53resolver.ServicePackage(ctx), + rum.ServicePackage(ctx), + s3.ServicePackage(ctx), + s3control.ServicePackage(ctx), + s3outposts.ServicePackage(ctx), + sagemaker.ServicePackage(ctx), + scheduler.ServicePackage(ctx), + schemas.ServicePackage(ctx), + secretsmanager.ServicePackage(ctx), + securityhub.ServicePackage(ctx), + securitylake.ServicePackage(ctx), + serverlessrepo.ServicePackage(ctx), + servicecatalog.ServicePackage(ctx), + 
servicediscovery.ServicePackage(ctx), + servicequotas.ServicePackage(ctx), + ses.ServicePackage(ctx), + sesv2.ServicePackage(ctx), + sfn.ServicePackage(ctx), + shield.ServicePackage(ctx), + signer.ServicePackage(ctx), + simpledb.ServicePackage(ctx), + sns.ServicePackage(ctx), + sqs.ServicePackage(ctx), + ssm.ServicePackage(ctx), + ssmcontacts.ServicePackage(ctx), + ssmincidents.ServicePackage(ctx), + ssoadmin.ServicePackage(ctx), + storagegateway.ServicePackage(ctx), + sts.ServicePackage(ctx), + swf.ServicePackage(ctx), + synthetics.ServicePackage(ctx), + timestreamwrite.ServicePackage(ctx), + transcribe.ServicePackage(ctx), + transfer.ServicePackage(ctx), + verifiedpermissions.ServicePackage(ctx), + vpclattice.ServicePackage(ctx), + waf.ServicePackage(ctx), + wafregional.ServicePackage(ctx), + wafv2.ServicePackage(ctx), + worklink.ServicePackage(ctx), + workspaces.ServicePackage(ctx), + xray.ServicePackage(ctx), } return slices.Clone(v) diff --git a/internal/provider/tags_interceptor.go b/internal/provider/tags_interceptor.go index 896d77e23f0..a669d1aa562 100644 --- a/internal/provider/tags_interceptor.go +++ b/internal/provider/tags_interceptor.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider import ( @@ -43,7 +46,9 @@ func tagsUpdateFunc(ctx context.Context, d schemaResourceData, sp conns.ServiceP c := config.GetAttr(names.AttrTags) if !c.IsNull() { for k, v := range c.AsValueMap() { - configTags[k] = v.AsString() + if !v.IsNull() { + configTags[k] = v.AsString() + } } } } diff --git a/internal/provider/tags_interceptor_test.go b/internal/provider/tags_interceptor_test.go index 8eb3449db4d..08846f5c4f2 100644 --- a/internal/provider/tags_interceptor_test.go +++ b/internal/provider/tags_interceptor_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package provider import ( diff --git a/internal/provider/validate.go b/internal/provider/validate.go index 46d221f85a5..1bc35f38f43 100644 --- a/internal/provider/validate.go +++ b/internal/provider/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider import ( diff --git a/internal/provider/validate_test.go b/internal/provider/validate_test.go index 9b068e00e52..4a51a4051ef 100644 --- a/internal/provider/validate_test.go +++ b/internal/provider/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package provider import ( diff --git a/internal/sdktypes/duration.go b/internal/sdktypes/duration.go index 38b135a7ab4..01db216661a 100644 --- a/internal/sdktypes/duration.go +++ b/internal/sdktypes/duration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sdktypes import ( diff --git a/internal/sdktypes/duration_test.go b/internal/sdktypes/duration_test.go index 538787c3a21..ed56dcd42cc 100644 --- a/internal/sdktypes/duration_test.go +++ b/internal/sdktypes/duration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sdktypes import ( diff --git a/internal/sdktypes/rfc3339_duration.go b/internal/sdktypes/rfc3339_duration.go index d270b682fca..a0d397a4026 100644 --- a/internal/sdktypes/rfc3339_duration.go +++ b/internal/sdktypes/rfc3339_duration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sdktypes import ( diff --git a/internal/sdktypes/testing_test.go b/internal/sdktypes/testing_test.go index df642b4b3a8..c01810f20bc 100644 --- a/internal/sdktypes/testing_test.go +++ b/internal/sdktypes/testing_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sdktypes import ( diff --git a/internal/service/accessanalyzer/accessanalyzer_test.go b/internal/service/accessanalyzer/accessanalyzer_test.go index 0108228aa88..113263f09cb 100644 --- a/internal/service/accessanalyzer/accessanalyzer_test.go +++ b/internal/service/accessanalyzer/accessanalyzer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package accessanalyzer_test import ( @@ -31,7 +34,7 @@ func TestAccAccessAnalyzer_serial(t *testing.T) { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient(ctx) input := &accessanalyzer.ListAnalyzersInput{} diff --git a/internal/service/accessanalyzer/analyzer.go b/internal/service/accessanalyzer/analyzer.go index 63f638693a2..5aa38fb057e 100644 --- a/internal/service/accessanalyzer/analyzer.go +++ b/internal/service/accessanalyzer/analyzer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package accessanalyzer import ( @@ -76,13 +79,13 @@ func resourceAnalyzer() *schema.Resource { func resourceAnalyzerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) analyzerName := d.Get("analyzer_name").(string) input := &accessanalyzer.CreateAnalyzerInput{ AnalyzerName: aws.String(analyzerName), ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: types.Type(d.Get("type").(string)), } @@ -111,7 +114,7 @@ func resourceAnalyzerCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceAnalyzerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) analyzer, err := findAnalyzerByName(ctx, conn, d.Id()) @@ -129,7 +132,7 @@ func resourceAnalyzerRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("arn", analyzer.Arn) d.Set("type", analyzer.Type) - SetTagsOut(ctx, analyzer.Tags) + setTagsOut(ctx, analyzer.Tags) return diags } @@ -144,7 +147,7 @@ func resourceAnalyzerUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceAnalyzerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) log.Printf("[DEBUG] Deleting IAM Access Analyzer Analyzer: %s", d.Id()) _, err := conn.DeleteAnalyzer(ctx, &accessanalyzer.DeleteAnalyzerInput{ diff --git a/internal/service/accessanalyzer/analyzer_test.go b/internal/service/accessanalyzer/analyzer_test.go index 4ed418b1a81..450ae360ec5 100644 --- 
a/internal/service/accessanalyzer/analyzer_test.go +++ b/internal/service/accessanalyzer/analyzer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package accessanalyzer_test import ( @@ -155,7 +158,7 @@ func testAccAnalyzer_Type_Organization(t *testing.T) { func testAccCheckAnalyzerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_accessanalyzer_analyzer" { @@ -190,7 +193,7 @@ func testAccCheckAnalyzerExists(ctx context.Context, n string, v *types.Analyzer return fmt.Errorf("No IAM Access Analyzer Analyzer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient(ctx) output, err := tfaccessanalyzer.FindAnalyzerByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/accessanalyzer/archive_rule.go b/internal/service/accessanalyzer/archive_rule.go index a3371259d51..f86f1903060 100644 --- a/internal/service/accessanalyzer/archive_rule.go +++ b/internal/service/accessanalyzer/archive_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package accessanalyzer import ( @@ -16,9 +19,9 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" ) // @SDKResource("aws_accessanalyzer_archive_rule") @@ -85,7 +88,7 @@ func resourceArchiveRule() *schema.Resource { } func resourceArchiveRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) analyzerName := d.Get("analyzer_name").(string) ruleName := d.Get("rule_name").(string) @@ -112,7 +115,7 @@ func resourceArchiveRuleCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceArchiveRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) analyzerName, ruleName, err := archiveRuleParseResourceID(d.Id()) @@ -140,7 +143,7 @@ func resourceArchiveRuleRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceArchiveRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) analyzerName, ruleName, err := archiveRuleParseResourceID(d.Id()) @@ -168,7 +171,7 @@ func resourceArchiveRuleUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceArchiveRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccessAnalyzerClient() + conn := meta.(*conns.AWSClient).AccessAnalyzerClient(ctx) analyzerName, ruleName, err := archiveRuleParseResourceID(d.Id()) diff --git a/internal/service/accessanalyzer/archive_rule_test.go b/internal/service/accessanalyzer/archive_rule_test.go index b232a95d446..82b20b889ff 100644 --- a/internal/service/accessanalyzer/archive_rule_test.go +++ b/internal/service/accessanalyzer/archive_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package accessanalyzer_test import ( @@ -149,7 +152,7 @@ func testAccAnalyzerArchiveRule_disappears(t *testing.T) { func testAccCheckArchiveRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_accessanalyzer_archive_rule" { @@ -196,7 +199,7 @@ func testAccCheckArchiveRuleExists(ctx context.Context, n string, v *types.Archi return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccessAnalyzerClient(ctx) output, err := tfaccessanalyzer.FindArchiveRuleByTwoPartKey(ctx, conn, analyzerName, ruleName) diff --git a/internal/service/accessanalyzer/exports_test.go b/internal/service/accessanalyzer/exports_test.go index 84ae619ccd0..4d87a2c580b 100644 --- a/internal/service/accessanalyzer/exports_test.go +++ b/internal/service/accessanalyzer/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package accessanalyzer // Exports for use in tests only. 
diff --git a/internal/service/accessanalyzer/generate.go b/internal/service/accessanalyzer/generate.go index a3bba754f30..947fe563087 100644 --- a/internal/service/accessanalyzer/generate.go +++ b/internal/service/accessanalyzer/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsMap -UpdateTags -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package accessanalyzer diff --git a/internal/service/accessanalyzer/service_package_gen.go b/internal/service/accessanalyzer/service_package_gen.go index 42fae0fdad2..2f7eff23109 100644 --- a/internal/service/accessanalyzer/service_package_gen.go +++ b/internal/service/accessanalyzer/service_package_gen.go @@ -5,6 +5,9 @@ package accessanalyzer import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + accessanalyzer_sdkv2 "github.com/aws/aws-sdk-go-v2/service/accessanalyzer" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +47,17 @@ func (p *servicePackage) ServicePackageName() string { return names.AccessAnalyzer } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*accessanalyzer_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return accessanalyzer_sdkv2.NewFromConfig(cfg, func(o *accessanalyzer_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = accessanalyzer_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/accessanalyzer/sweep.go b/internal/service/accessanalyzer/sweep.go index 5db64034507..66f844a6be9 100644 --- a/internal/service/accessanalyzer/sweep.go +++ b/internal/service/accessanalyzer/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/accessanalyzer" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepAnalyzers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).AccessAnalyzerClient() + conn := client.AccessAnalyzerClient(ctx) input := &accessanalyzer.ListAnalyzersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -53,7 +55,7 @@ func sweepAnalyzers(region string) error { } } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("sweeping IAM Access Analyzer Analyzers (%s): %w", region, err) diff --git a/internal/service/accessanalyzer/tags_gen.go 
b/internal/service/accessanalyzer/tags_gen.go index 35611370499..cf43eee813b 100644 --- a/internal/service/accessanalyzer/tags_gen.go +++ b/internal/service/accessanalyzer/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists accessanalyzer service tags. +// listTags lists accessanalyzer service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *accessanalyzer.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *accessanalyzer.Client, identifier string) (tftags.KeyValueTags, error) { input := &accessanalyzer.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *accessanalyzer.Client, identifier strin // ListTags lists accessanalyzer service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AccessAnalyzerClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AccessAnalyzerClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from accessanalyzer service tags. +// KeyValueTags creates tftags.KeyValueTags from accessanalyzer service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns accessanalyzer service tags from Context. +// getTagsIn returns accessanalyzer service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets accessanalyzer service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets accessanalyzer service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates accessanalyzer service tags. +// updateTags updates accessanalyzer service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *accessanalyzer.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *accessanalyzer.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *accessanalyzer.Client, identifier str // UpdateTags updates accessanalyzer service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AccessAnalyzerClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AccessAnalyzerClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/account/account_test.go b/internal/service/account/account_test.go index d4664cd9fb5..0f6b6166303 100644 --- a/internal/service/account/account_test.go +++ b/internal/service/account/account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package account_test import ( diff --git a/internal/service/account/alternate_contact.go b/internal/service/account/alternate_contact.go index 7732c1b7fde..cd05eab8a38 100644 --- a/internal/service/account/alternate_contact.go +++ b/internal/service/account/alternate_contact.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package account import ( @@ -79,7 +82,7 @@ func resourceAlternateContact() *schema.Resource { } func resourceAlternateContactCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccountClient() + conn := meta.(*conns.AWSClient).AccountClient(ctx) accountID := d.Get("account_id").(string) contactType := d.Get("alternate_contact_type").(string) @@ -119,7 +122,7 @@ func resourceAlternateContactCreate(ctx context.Context, d *schema.ResourceData, } func resourceAlternateContactRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccountClient() + conn := meta.(*conns.AWSClient).AccountClient(ctx) accountID, contactType, err := alternateContactParseResourceID(d.Id()) @@ -150,7 +153,7 @@ func resourceAlternateContactRead(ctx context.Context, d *schema.ResourceData, m } func resourceAlternateContactUpdate(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccountClient() + conn := meta.(*conns.AWSClient).AccountClient(ctx) accountID, contactType, err := alternateContactParseResourceID(d.Id()) @@ -201,7 +204,7 @@ func resourceAlternateContactUpdate(ctx context.Context, d *schema.ResourceData, } func resourceAlternateContactDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccountClient() + conn := meta.(*conns.AWSClient).AccountClient(ctx) accountID, contactType, err := alternateContactParseResourceID(d.Id()) diff --git a/internal/service/account/alternate_contact_test.go b/internal/service/account/alternate_contact_test.go index 520a63d8485..d490eafaebd 100644 --- a/internal/service/account/alternate_contact_test.go +++ b/internal/service/account/alternate_contact_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package account_test import ( @@ -144,7 +147,7 @@ func testAccAlternateContact_accountID(t *testing.T) { // nosemgrep:ci.account-i func testAccCheckAlternateContactDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_account_alternate_contact" { @@ -191,7 +194,7 @@ func testAccCheckAlternateContactExists(ctx context.Context, n string) resource. 
return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient(ctx) _, err = tfaccount.FindAlternateContactByTwoPartKey(ctx, conn, accountID, contactType) @@ -231,7 +234,7 @@ resource "aws_account_alternate_contact" "test" { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient(ctx) _, err := tfaccount.FindAlternateContactByTwoPartKey(ctx, conn, "", string(types.AlternateContactTypeOperations)) diff --git a/internal/service/account/exports_test.go b/internal/service/account/exports_test.go index ea47fd58d54..1ce71eced7a 100644 --- a/internal/service/account/exports_test.go +++ b/internal/service/account/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package account // Exports for use in tests only. diff --git a/internal/service/account/generate.go b/internal/service/account/generate.go new file mode 100644 index 00000000000..c2b1d858819 --- /dev/null +++ b/internal/service/account/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package account diff --git a/internal/service/account/primary_contact.go b/internal/service/account/primary_contact.go index 1af29f89de3..db7431be240 100644 --- a/internal/service/account/primary_contact.go +++ b/internal/service/account/primary_contact.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package account import ( @@ -92,7 +95,7 @@ func resourcePrimaryContact() *schema.Resource { } func resourcePrimaryContactPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccountClient() + conn := meta.(*conns.AWSClient).AccountClient(ctx) id := "default" input := &account.PutContactInformationInput{ @@ -149,7 +152,7 @@ func resourcePrimaryContactPut(ctx context.Context, d *schema.ResourceData, meta } func resourcePrimaryContactRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AccountClient() + conn := meta.(*conns.AWSClient).AccountClient(ctx) contactInformation, err := findContactInformation(ctx, conn, d.Get("account_id").(string)) diff --git a/internal/service/account/primary_contact_test.go b/internal/service/account/primary_contact_test.go index 947b3920099..58457032d85 100644 --- a/internal/service/account/primary_contact_test.go +++ b/internal/service/account/primary_contact_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package account_test import ( @@ -80,7 +83,7 @@ func testAccCheckPrimaryContactExists(ctx context.Context, n string) resource.Te return fmt.Errorf("No Account Primary Contact ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AccountClient(ctx) _, err := tfaccount.FindContactInformation(ctx, conn, rs.Primary.Attributes["account_id"]) diff --git a/internal/service/account/retry/retry.go b/internal/service/account/retry/retry.go index 6e5f94ac18c..85684bf3f24 100644 --- a/internal/service/account/retry/retry.go +++ b/internal/service/account/retry/retry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package retry import ( diff --git a/internal/service/account/retry/retry_test.go b/internal/service/account/retry/retry_test.go index 241f45c3e29..e4904031830 100644 --- a/internal/service/account/retry/retry_test.go +++ b/internal/service/account/retry/retry_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package retry import ( diff --git a/internal/service/account/retry/wrappers.go b/internal/service/account/retry/wrappers.go index 710f618276d..421f824d668 100644 --- a/internal/service/account/retry/wrappers.go +++ b/internal/service/account/retry/wrappers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package retry import ( diff --git a/internal/service/account/service_package_gen.go b/internal/service/account/service_package_gen.go index 127d7fa5a08..8cb1145b3c3 100644 --- a/internal/service/account/service_package_gen.go +++ b/internal/service/account/service_package_gen.go @@ -5,6 +5,9 @@ package account import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + account_sdkv2 "github.com/aws/aws-sdk-go-v2/service/account" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +43,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Account } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*account_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return account_sdkv2.NewFromConfig(cfg, func(o *account_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = account_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/acm/certificate.go b/internal/service/acm/certificate.go index d914ae4bcee..80c06f6c55a 100644 --- a/internal/service/acm/certificate.go +++ b/internal/service/acm/certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acm import ( @@ -325,7 +328,7 @@ func resourceCertificate() *schema.Resource { } func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) if _, ok := d.GetOk("domain_name"); ok { _, v1 := d.GetOk("certificate_authority_arn") @@ -339,7 +342,7 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta input := &acm.RequestCertificateInput{ DomainName: aws.String(domainName), IdempotencyToken: aws.String(id.PrefixedUniqueId("tf")), // 32 character limit - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("certificate_authority_arn"); ok { @@ -377,7 +380,7 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta input := &acm.ImportCertificateInput{ Certificate: []byte(d.Get("certificate_body").(string)), PrivateKey: []byte(d.Get("private_key").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("certificate_chain"); ok { @@ -401,7 +404,7 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta } func 
resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) certificate, err := findCertificateByARN(ctx, conn, d.Id()) @@ -470,7 +473,7 @@ func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) if d.HasChanges("private_key", "certificate_body", "certificate_chain") { oCBRaw, nCBRaw := d.GetChange("certificate_body") @@ -526,7 +529,7 @@ func resourceCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) log.Printf("[INFO] Deleting ACM Certificate: %s", d.Id()) _, err := tfresource.RetryWhenIsA[*types.ResourceInUseException](ctx, certificateCrossServicePropagationTimeout, diff --git a/internal/service/acm/certificate_data_source.go b/internal/service/acm/certificate_data_source.go index 1d781d1226f..a3599f63848 100644 --- a/internal/service/acm/certificate_data_source.go +++ b/internal/service/acm/certificate_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acm import ( @@ -70,7 +73,7 @@ func dataSourceCertificate() *schema.Resource { } func dataSourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig domain := d.Get("domain") @@ -211,7 +214,7 @@ func dataSourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta d.Set("arn", matchedCertificate.CertificateArn) d.Set("status", matchedCertificate.Status) - tags, err := ListTags(ctx, conn, aws.ToString(matchedCertificate.CertificateArn)) + tags, err := listTags(ctx, conn, aws.ToString(matchedCertificate.CertificateArn)) if err != nil { return diag.Errorf("listing tags for ACM Certificate (%s): %s", d.Id(), err) diff --git a/internal/service/acm/certificate_data_source_test.go b/internal/service/acm/certificate_data_source_test.go index 725d889fbce..03a673b4033 100644 --- a/internal/service/acm/certificate_data_source_test.go +++ b/internal/service/acm/certificate_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acm_test import ( diff --git a/internal/service/acm/certificate_test.go b/internal/service/acm/certificate_test.go index 80b6c0cd483..9f010f73339 100644 --- a/internal/service/acm/certificate_test.go +++ b/internal/service/acm/certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acm_test import ( @@ -51,7 +54,7 @@ func TestAccACMCertificate_emailValidation(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "type", string(types.CertificateTypeAmazonIssued)), resource.TestCheckResourceAttr(resourceName, "renewal_eligibility", string(types.RenewalEligibilityIneligible)), resource.TestCheckResourceAttr(resourceName, "renewal_summary.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "validation_emails.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "validation_emails.#", 0), resource.TestMatchResourceAttr(resourceName, "validation_emails.0", regexp.MustCompile(`^[^@]+@.+$`)), resource.TestCheckResourceAttr(resourceName, "validation_method", string(types.ValidationMethodEmail)), resource.TestCheckResourceAttr(resourceName, "validation_option.#", "0"), @@ -177,7 +180,7 @@ func TestAccACMCertificate_validationOptions(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "status", string(types.CertificateStatusPendingValidation)), resource.TestCheckResourceAttr(resourceName, "subject_alternative_names.#", "1"), resource.TestCheckTypeSetElemAttr(resourceName, "subject_alternative_names.*", domain), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "validation_emails.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "validation_emails.#", 0), resource.TestMatchResourceAttr(resourceName, "validation_emails.0", regexp.MustCompile(`^[^@]+@.+$`)), resource.TestCheckResourceAttr(resourceName, "validation_method", string(types.ValidationMethodEmail)), resource.TestCheckResourceAttr(resourceName, "validation_option.#", "1"), @@ -237,7 +240,7 @@ func TestAccACMCertificate_privateCertificate_renewable(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, 
&acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -265,7 +268,7 @@ func TestAccACMCertificate_privateCertificate_renewable(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.RenewCertificate(ctx, &acm.RenewCertificateInput{ CertificateArn: v1.CertificateArn, @@ -289,7 +292,7 @@ func TestAccACMCertificate_privateCertificate_renewable(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := tfacm.WaitCertificateRenewed(ctx, conn, aws.ToString(v1.CertificateArn), tfacm.CertificateRenewalTimeout) if err != nil { @@ -363,7 +366,7 @@ func TestAccACMCertificate_privateCertificate_noRenewalPermission(t *testing.T) }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -390,7 +393,7 @@ func TestAccACMCertificate_privateCertificate_noRenewalPermission(t *testing.T) }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.RenewCertificate(ctx, &acm.RenewCertificateInput{ CertificateArn: v1.CertificateArn, @@ -473,7 +476,7 @@ func TestAccACMCertificate_privateCertificate_pendingRenewalGoDuration(t *testin }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -547,7 +550,7 @@ func TestAccACMCertificate_privateCertificate_pendingRenewalRFC3339Duration(t *t }, { PreConfig: func() 
{ - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -621,7 +624,7 @@ func TestAccACMCertificate_privateCertificate_addEarlyRenewalPast(t *testing.T) }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -762,7 +765,7 @@ func TestAccACMCertificate_privateCertificate_addEarlyRenewalFuture(t *testing.T }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -848,7 +851,7 @@ func TestAccACMCertificate_privateCertificate_updateEarlyRenewalFuture(t *testin }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -918,7 +921,7 @@ func TestAccACMCertificate_privateCertificate_removeEarlyRenewal(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := conn.ExportCertificate(ctx, &acm.ExportCertificateInput{ CertificateArn: v1.CertificateArn, @@ -1691,7 +1694,7 @@ func testAccCheckCertificateExists(ctx context.Context, n string, v *types.Certi return fmt.Errorf("no ACM Certificate ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) output, err := 
tfacm.FindCertificateByARN(ctx, conn, rs.Primary.ID) @@ -1707,7 +1710,7 @@ func testAccCheckCertificateExists(ctx context.Context, n string, v *types.Certi func testAccCheckCertificateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_acm_certificate" { diff --git a/internal/service/acm/certificate_validation.go b/internal/service/acm/certificate_validation.go index 8a9acb20612..9998c500fcc 100644 --- a/internal/service/acm/certificate_validation.go +++ b/internal/service/acm/certificate_validation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acm import ( @@ -48,7 +51,7 @@ func resourceCertificateValidation() *schema.Resource { } func resourceCertificateValidationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) arn := d.Get("certificate_arn").(string) certificate, err := findCertificateByARN(ctx, conn, arn) @@ -101,7 +104,7 @@ func resourceCertificateValidationCreate(ctx context.Context, d *schema.Resource } func resourceCertificateValidationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ACMClient() + conn := meta.(*conns.AWSClient).ACMClient(ctx) arn := d.Get("certificate_arn").(string) certificate, err := findCertificateValidationByARN(ctx, conn, arn) diff --git a/internal/service/acm/certificate_validation_test.go b/internal/service/acm/certificate_validation_test.go index 0a2c6a81bc3..89b6c0b26e6 100644 --- a/internal/service/acm/certificate_validation_test.go +++ b/internal/service/acm/certificate_validation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acm_test import ( @@ -241,7 +244,7 @@ func testAccCheckCertificateValidationExists(ctx context.Context, n string) reso return fmt.Errorf("no ACM Certificate Validation ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx) _, err := tfacm.FindCertificateValidationByARN(ctx, conn, rs.Primary.Attributes["certificate_arn"]) diff --git a/internal/service/acm/exports_test.go b/internal/service/acm/exports_test.go index 9d04bbc662c..12ad519f0e8 100644 --- a/internal/service/acm/exports_test.go +++ b/internal/service/acm/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acm // Exports for use in tests only. diff --git a/internal/service/acm/generate.go b/internal/service/acm/generate.go index 34bb9461e2d..0615a7197b1 100644 --- a/internal/service/acm/generate.go +++ b/internal/service/acm/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsOp=ListTagsForCertificate -ListTagsInIDElem=CertificateArn -ServiceTagsSlice -TagOp=AddTagsToCertificate -TagInIDElem=CertificateArn -UntagOp=RemoveTagsFromCertificate -UntagInNeedTagType -UntagInTagsElem=Tags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package acm diff --git a/internal/service/acm/service_package_gen.go b/internal/service/acm/service_package_gen.go index d05d7702b3b..ebc0ad8b532 100644 --- a/internal/service/acm/service_package_gen.go +++ b/internal/service/acm/service_package_gen.go @@ -5,6 +5,9 @@ package acm import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + acm_sdkv2 "github.com/aws/aws-sdk-go-v2/service/acm" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -49,4 +52,17 @@ func (p *servicePackage) ServicePackageName() string { return names.ACM } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*acm_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return acm_sdkv2.NewFromConfig(cfg, func(o *acm_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = acm_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/acm/sweep.go b/internal/service/acm/sweep.go index bd91cb297ec..7d8e5bc14db 100644 --- a/internal/service/acm/sweep.go +++ b/internal/service/acm/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/acm" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -40,11 +42,11 @@ func init() { func sweepCertificates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ACMClient() + conn := client.ACMClient(ctx) input := &acm.ListCertificatesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -97,7 +99,7 @@ func sweepCertificates(region string) error { } } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping ACM Certificates (%s): %w", region, err) diff --git a/internal/service/acm/tags_gen.go b/internal/service/acm/tags_gen.go index ebd3a936f08..264c8687642 100644 --- a/internal/service/acm/tags_gen.go +++ b/internal/service/acm/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists acm service tags. +// listTags lists acm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *acm.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *acm.Client, identifier string) (tftags.KeyValueTags, error) { input := &acm.ListTagsForCertificateInput{ CertificateArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *acm.Client, identifier string) (tftags. 
// ListTags lists acm service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ACMClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ACMClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns acm service tags from Context. +// getTagsIn returns acm service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets acm service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets acm service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates acm service tags. +// updateTags updates acm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *acm.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *acm.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *acm.Client, identifier string, oldTag // UpdateTags updates acm service tags. 
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ACMClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ACMClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/acmpca/certificate.go b/internal/service/acmpca/certificate.go index 7b658007d53..bd2992dfb6b 100644 --- a/internal/service/acmpca/certificate.go +++ b/internal/service/acmpca/certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -130,7 +133,7 @@ func ResourceCertificate() *schema.Resource { func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) certificateAuthorityARN := d.Get("certificate_authority_arn").(string) input := &acmpca.IssueCertificateInput{ @@ -194,7 +197,7 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) getCertificateInput := &acmpca.GetCertificateInput{ CertificateArn: aws.String(d.Id()), @@ -228,7 +231,7 @@ func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta i func resourceCertificateRevoke(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) block, _ := pem.Decode([]byte(d.Get("certificate").(string))) if block == nil { @@ -341,7 +344,7 @@ func expandValidity(l []interface{}) 
(*acmpca.Validity, error) { i, err := ExpandValidityValue(valueType, m["value"].(string)) if err != nil { - return nil, fmt.Errorf("error parsing value %q: %w", m["value"].(string), err) + return nil, fmt.Errorf("parsing value %q: %w", m["value"].(string), err) } result.Value = aws.Int64(i) diff --git a/internal/service/acmpca/certificate_authority.go b/internal/service/acmpca/certificate_authority.go index 870f7b9d697..f0d3ee76a77 100644 --- a/internal/service/acmpca/certificate_authority.go +++ b/internal/service/acmpca/certificate_authority.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -350,14 +353,14 @@ func ResourceCertificateAuthority() *schema.Resource { func resourceCertificateAuthorityCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) input := &acmpca.CreateCertificateAuthorityInput{ CertificateAuthorityConfiguration: expandCertificateAuthorityConfiguration(d.Get("certificate_authority_configuration").([]interface{})), CertificateAuthorityType: aws.String(d.Get("type").(string)), IdempotencyToken: aws.String(id.UniqueId()), RevocationConfiguration: expandRevocationConfiguration(d.Get("revocation_configuration").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("key_storage_security_standard"); ok { @@ -388,7 +391,7 @@ func resourceCertificateAuthorityCreate(ctx context.Context, d *schema.ResourceD func resourceCertificateAuthorityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) certificateAuthority, err := FindCertificateAuthorityByARN(ctx, conn, d.Id()) @@ -472,7 +475,7 @@ func resourceCertificateAuthorityRead(ctx 
context.Context, d *schema.ResourceDat func resourceCertificateAuthorityUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &acmpca.UpdateCertificateAuthorityInput{ @@ -502,7 +505,7 @@ func resourceCertificateAuthorityUpdate(ctx context.Context, d *schema.ResourceD func resourceCertificateAuthorityDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) // The Certificate Authority must be in PENDING_CERTIFICATE or DISABLED state before deleting. updateInput := &acmpca.UpdateCertificateAuthorityInput{ diff --git a/internal/service/acmpca/certificate_authority_certificate.go b/internal/service/acmpca/certificate_authority_certificate.go index aecc68a63b8..f72a06b04cd 100644 --- a/internal/service/acmpca/certificate_authority_certificate.go +++ b/internal/service/acmpca/certificate_authority_certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -51,7 +54,7 @@ func ResourceCertificateAuthorityCertificate() *schema.Resource { func resourceCertificateAuthorityCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) certificateAuthorityARN := d.Get("certificate_authority_arn").(string) @@ -75,7 +78,7 @@ func resourceCertificateAuthorityCertificateCreate(ctx context.Context, d *schem func resourceCertificateAuthorityCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) output, err := FindCertificateAuthorityCertificateByARN(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { diff --git a/internal/service/acmpca/certificate_authority_certificate_test.go b/internal/service/acmpca/certificate_authority_certificate_test.go index fee399956a7..a5c885bec66 100644 --- a/internal/service/acmpca/certificate_authority_certificate_test.go +++ b/internal/service/acmpca/certificate_authority_certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( @@ -119,7 +122,7 @@ func testAccCheckCertificateAuthorityCertificateExists(ctx context.Context, reso return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) output, err := tfacmpca.FindCertificateAuthorityCertificateByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/acmpca/certificate_authority_data_source.go b/internal/service/acmpca/certificate_authority_data_source.go index d65093c058e..243e939d4c5 100644 --- a/internal/service/acmpca/certificate_authority_data_source.go +++ b/internal/service/acmpca/certificate_authority_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -127,7 +130,7 @@ func DataSourceCertificateAuthority() *schema.Resource { func dataSourceCertificateAuthorityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig certificateAuthorityARN := d.Get("arn").(string) @@ -197,7 +200,7 @@ func dataSourceCertificateAuthorityRead(ctx context.Context, d *schema.ResourceD d.Set("certificate_signing_request", getCertificateAuthorityCsrOutput.Csr) } - tags, err := ListTags(ctx, conn, certificateAuthorityARN) + tags, err := listTags(ctx, conn, certificateAuthorityARN) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for ACM PCA Certificate Authority (%s): %s", certificateAuthorityARN, err) diff --git a/internal/service/acmpca/certificate_authority_data_source_test.go b/internal/service/acmpca/certificate_authority_data_source_test.go index 724d9b13066..3dd57230009 100644 --- 
a/internal/service/acmpca/certificate_authority_data_source_test.go +++ b/internal/service/acmpca/certificate_authority_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( diff --git a/internal/service/acmpca/certificate_authority_migrate.go b/internal/service/acmpca/certificate_authority_migrate.go index 9add1528216..1b57a46db84 100644 --- a/internal/service/acmpca/certificate_authority_migrate.go +++ b/internal/service/acmpca/certificate_authority_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( diff --git a/internal/service/acmpca/certificate_authority_migrate_test.go b/internal/service/acmpca/certificate_authority_migrate_test.go index df3c311895b..35588222e5e 100644 --- a/internal/service/acmpca/certificate_authority_migrate_test.go +++ b/internal/service/acmpca/certificate_authority_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( diff --git a/internal/service/acmpca/certificate_authority_test.go b/internal/service/acmpca/certificate_authority_test.go index 4acf891310b..29284861a94 100644 --- a/internal/service/acmpca/certificate_authority_test.go +++ b/internal/service/acmpca/certificate_authority_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( @@ -752,7 +755,7 @@ func TestAccACMPCACertificateAuthority_tags(t *testing.T) { func testAccCheckCertificateAuthorityDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_acmpca_certificate_authority" { diff --git a/internal/service/acmpca/certificate_data_source.go b/internal/service/acmpca/certificate_data_source.go index b02927e7f9b..578f06a27a0 100644 --- a/internal/service/acmpca/certificate_data_source.go +++ b/internal/service/acmpca/certificate_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -43,7 +46,7 @@ func DataSourceCertificate() *schema.Resource { func dataSourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) certificateARN := d.Get("arn").(string) getCertificateInput := &acmpca.GetCertificateInput{ diff --git a/internal/service/acmpca/certificate_data_source_test.go b/internal/service/acmpca/certificate_data_source_test.go index 73d379424d3..c77649df55d 100644 --- a/internal/service/acmpca/certificate_data_source_test.go +++ b/internal/service/acmpca/certificate_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( diff --git a/internal/service/acmpca/certificate_test.go b/internal/service/acmpca/certificate_test.go index 404d55915a0..1a2a3400b94 100644 --- a/internal/service/acmpca/certificate_test.go +++ b/internal/service/acmpca/certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( @@ -291,7 +294,7 @@ func TestAccACMPCACertificate_Validity_absolute(t *testing.T) { func testAccCheckCertificateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_acmpca_certificate" { @@ -331,7 +334,7 @@ func testAccCheckCertificateExists(ctx context.Context, resourceName string) res return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) input := &acmpca.GetCertificateInput{ CertificateArn: aws.String(rs.Primary.ID), CertificateAuthorityArn: aws.String(rs.Primary.Attributes["certificate_authority_arn"]), diff --git a/internal/service/acmpca/find.go b/internal/service/acmpca/find.go index 637142abdb0..125eec74adb 100644 --- a/internal/service/acmpca/find.go +++ b/internal/service/acmpca/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( diff --git a/internal/service/acmpca/generate.go b/internal/service/acmpca/generate.go index 69a3c6eccd3..40c66b6f7b9 100644 --- a/internal/service/acmpca/generate.go +++ b/internal/service/acmpca/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsInIDElem=CertificateAuthorityArn -ServiceTagsSlice -TagOp=TagCertificateAuthority -TagInIDElem=CertificateAuthorityArn -UntagOp=UntagCertificateAuthority -UntagInNeedTagType -UntagInTagsElem=Tags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! 
Do not add anything else to this file. package acmpca diff --git a/internal/service/acmpca/permission.go b/internal/service/acmpca/permission.go index 74ec17e5c4b..b1dbb20f4fe 100644 --- a/internal/service/acmpca/permission.go +++ b/internal/service/acmpca/permission.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -66,7 +69,7 @@ func ResourcePermission() *schema.Resource { func resourcePermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) caARN := d.Get("certificate_authority_arn").(string) principal := d.Get("principal").(string) @@ -96,7 +99,7 @@ func resourcePermissionCreate(ctx context.Context, d *schema.ResourceData, meta func resourcePermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) caARN, principal, sourceAccount, err := PermissionParseResourceID(d.Id()) @@ -127,7 +130,7 @@ func resourcePermissionRead(ctx context.Context, d *schema.ResourceData, meta in func resourcePermissionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) caARN, principal, sourceAccount, err := PermissionParseResourceID(d.Id()) diff --git a/internal/service/acmpca/permission_test.go b/internal/service/acmpca/permission_test.go index afa23c453cd..aaa3fd9761b 100644 --- a/internal/service/acmpca/permission_test.go +++ b/internal/service/acmpca/permission_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( @@ -92,7 +95,7 @@ func TestAccACMPCAPermission_sourceAccount(t *testing.T) { func testAccCheckPermissionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_acmpca_permission" { @@ -139,7 +142,7 @@ func testAccCheckPermissionExists(ctx context.Context, n string, v *acmpca.Permi return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) output, err := tfacmpca.FindPermission(ctx, conn, caARN, principal, sourceAccount) diff --git a/internal/service/acmpca/policy.go b/internal/service/acmpca/policy.go index cbf5dacde74..2633fc3ae09 100644 --- a/internal/service/acmpca/policy.go +++ b/internal/service/acmpca/policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca import ( @@ -52,7 +55,7 @@ func ResourcePolicy() *schema.Resource { func resourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -80,7 +83,7 @@ func resourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta interfa func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) policy, err := FindPolicyByARN(ctx, conn, d.Id()) @@ -102,7 +105,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ACMPCAConn() + conn := meta.(*conns.AWSClient).ACMPCAConn(ctx) log.Printf("[DEBUG] Deleting ACM PCA Policy: %s", d.Id()) _, err := conn.DeletePolicyWithContext(ctx, &acmpca.DeletePolicyInput{ diff --git a/internal/service/acmpca/policy_test.go b/internal/service/acmpca/policy_test.go index 0eda787fbd0..a36069ffeb0 100644 --- a/internal/service/acmpca/policy_test.go +++ b/internal/service/acmpca/policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package acmpca_test import ( @@ -42,7 +45,7 @@ func TestAccACMPCAPolicy_basic(t *testing.T) { func testAccCheckPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_acmpca_policy" { @@ -77,7 +80,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("No ACM PCA Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) _, err := tfacmpca.FindPolicyByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/acmpca/service_package_gen.go b/internal/service/acmpca/service_package_gen.go index 198796758be..04a2de126a2 100644 --- a/internal/service/acmpca/service_package_gen.go +++ b/internal/service/acmpca/service_package_gen.go @@ -5,6 +5,10 @@ package acmpca import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + acmpca_sdkv1 "github.com/aws/aws-sdk-go/service/acmpca" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ACMPCA } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*acmpca_sdkv1.ACMPCA, error) { + sess := config["session"].(*session_sdkv1.Session) + + return acmpca_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/acmpca/sweep.go b/internal/service/acmpca/sweep.go index 28ad2b84195..94df18ce451 100644 --- a/internal/service/acmpca/sweep.go +++ b/internal/service/acmpca/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/acmpca" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,12 +26,12 @@ func init() { func sweepCertificateAuthorities(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ACMPCAConn() + conn := client.ACMPCAConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -66,7 +68,7 @@ func sweepCertificateAuthorities(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing ACM PCA Certificate Authorities: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping ACM PCA Certificate Authorities for %s: %w", region, err)) } diff --git a/internal/service/acmpca/tags_gen.go b/internal/service/acmpca/tags_gen.go index 
7bfbc3089af..9a6ed910bff 100644 --- a/internal/service/acmpca/tags_gen.go +++ b/internal/service/acmpca/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists acmpca service tags. +// listTags lists acmpca service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn acmpcaiface.ACMPCAAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn acmpcaiface.ACMPCAAPI, identifier string) (tftags.KeyValueTags, error) { input := &acmpca.ListTagsInput{ CertificateAuthorityArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn acmpcaiface.ACMPCAAPI, identifier string // ListTags lists acmpca service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ACMPCAConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ACMPCAConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*acmpca.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns acmpca service tags from Context. +// getTagsIn returns acmpca service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*acmpca.Tag { +func getTagsIn(ctx context.Context) []*acmpca.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*acmpca.Tag { return nil } -// SetTagsOut sets acmpca service tags in Context. -func SetTagsOut(ctx context.Context, tags []*acmpca.Tag) { +// setTagsOut sets acmpca service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*acmpca.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates acmpca service tags. +// updateTags updates acmpca service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn acmpcaiface.ACMPCAAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn acmpcaiface.ACMPCAAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn acmpcaiface.ACMPCAAPI, identifier stri // UpdateTags updates acmpca service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ACMPCAConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ACMPCAConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/amp/alert_manager_definition.go b/internal/service/amp/alert_manager_definition.go index 098fceec112..a853030fcc5 100644 --- a/internal/service/amp/alert_manager_definition.go +++ b/internal/service/amp/alert_manager_definition.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp import ( @@ -40,7 +43,7 @@ func ResourceAlertManagerDefinition() *schema.Resource { } func resourceAlertManagerDefinitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) workspaceID := d.Get("workspace_id").(string) input := &prometheusservice.CreateAlertManagerDefinitionInput{ @@ -64,7 +67,7 @@ func resourceAlertManagerDefinitionCreate(ctx context.Context, d *schema.Resourc } func resourceAlertManagerDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) amd, err := FindAlertManagerDefinitionByID(ctx, conn, d.Id()) @@ -85,7 +88,7 @@ func resourceAlertManagerDefinitionRead(ctx context.Context, d *schema.ResourceD } func resourceAlertManagerDefinitionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) input := &prometheusservice.PutAlertManagerDefinitionInput{ Data: []byte(d.Get("definition").(string)), @@ -106,7 +109,7 @@ func resourceAlertManagerDefinitionUpdate(ctx context.Context, d *schema.Resourc } func resourceAlertManagerDefinitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) log.Printf("[DEBUG] Deleting Prometheus Alert Manager Definition: (%s)", d.Id()) _, err := conn.DeleteAlertManagerDefinitionWithContext(ctx, &prometheusservice.DeleteAlertManagerDefinitionInput{ diff --git a/internal/service/amp/alert_manager_definition_test.go b/internal/service/amp/alert_manager_definition_test.go index 8b2421efe03..c454a80b27f 100644 --- a/internal/service/amp/alert_manager_definition_test.go +++ 
b/internal/service/amp/alert_manager_definition_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amp_test import ( @@ -92,7 +95,7 @@ func testAccCheckAlertManagerDefinitionExists(ctx context.Context, n string) res return fmt.Errorf("No Prometheus Alert Manager Definition ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn(ctx) _, err := tfamp.FindAlertManagerDefinitionByID(ctx, conn, rs.Primary.ID) @@ -102,7 +105,7 @@ func testAccCheckAlertManagerDefinitionExists(ctx context.Context, n string) res func testAccCheckAlertManagerDefinitionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_prometheus_alert_manager_definition" { diff --git a/internal/service/amp/find.go b/internal/service/amp/find.go index 38fb205307b..7cab24a8ed5 100644 --- a/internal/service/amp/find.go +++ b/internal/service/amp/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp import ( @@ -40,7 +43,7 @@ func FindAlertManagerDefinitionByID(ctx context.Context, conn *prometheusservice func nameAndWorkspaceIDFromRuleGroupNamespaceARN(arn string) (string, string, error) { parts := strings.Split(arn, "/") if len(parts) != 3 { - return "", "", fmt.Errorf("error reading Prometheus Rule Group Namespace expected the arn to be like: arn:PARTITION:aps:REGION:ACCOUNT:rulegroupsnamespace/IDstring/namespace_name but got: %s", arn) + return "", "", fmt.Errorf("reading Prometheus Rule Group Namespace expected the arn to be like: arn:PARTITION:aps:REGION:ACCOUNT:rulegroupsnamespace/IDstring/namespace_name but got: %s", arn) } return parts[2], parts[1], nil } diff --git a/internal/service/amp/generate.go b/internal/service/amp/generate.go index 7dc2d58b8ce..0bd610285cb 100644 --- a/internal/service/amp/generate.go +++ b/internal/service/amp/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceArn -ServiceTagsMap -TagInIDElem=ResourceArn -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package amp diff --git a/internal/service/amp/rule_group_namespace.go b/internal/service/amp/rule_group_namespace.go index 3d0f882c1a7..1892af73414 100644 --- a/internal/service/amp/rule_group_namespace.go +++ b/internal/service/amp/rule_group_namespace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp import ( @@ -45,7 +48,7 @@ func ResourceRuleGroupNamespace() *schema.Resource { } func resourceRuleGroupNamespaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) workspaceID := d.Get("workspace_id").(string) name := d.Get("name").(string) @@ -71,7 +74,7 @@ func resourceRuleGroupNamespaceCreate(ctx context.Context, d *schema.ResourceDat } func resourceRuleGroupNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) rgn, err := FindRuleGroupNamespaceByARN(ctx, conn, d.Id()) @@ -97,7 +100,7 @@ func resourceRuleGroupNamespaceRead(ctx context.Context, d *schema.ResourceData, } func resourceRuleGroupNamespaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) input := &prometheusservice.PutRuleGroupsNamespaceInput{ Data: []byte(d.Get("data").(string)), @@ -119,7 +122,7 @@ func resourceRuleGroupNamespaceUpdate(ctx context.Context, d *schema.ResourceDat } func resourceRuleGroupNamespaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) log.Printf("[DEBUG] Deleting Prometheus Rule Group Namespace: (%s)", d.Id()) _, err := conn.DeleteRuleGroupsNamespaceWithContext(ctx, &prometheusservice.DeleteRuleGroupsNamespaceInput{ diff --git a/internal/service/amp/rule_group_namespace_test.go b/internal/service/amp/rule_group_namespace_test.go index 04f771db5df..41d2c6bb65d 100644 --- a/internal/service/amp/rule_group_namespace_test.go +++ b/internal/service/amp/rule_group_namespace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amp_test import ( @@ -92,7 +95,7 @@ func testAccCheckRuleGroupNamespaceExists(ctx context.Context, n string) resourc return fmt.Errorf("No Prometheus Rule Group namespace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn(ctx) _, err := tfamp.FindRuleGroupNamespaceByARN(ctx, conn, rs.Primary.ID) @@ -102,7 +105,7 @@ func testAccCheckRuleGroupNamespaceExists(ctx context.Context, n string) resourc func testAccCheckRuleGroupNamespaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_prometheus_rule_group_namespace" { diff --git a/internal/service/amp/service_package_gen.go b/internal/service/amp/service_package_gen.go index e3494e7b938..cdd2b57d67b 100644 --- a/internal/service/amp/service_package_gen.go +++ b/internal/service/amp/service_package_gen.go @@ -5,6 +5,10 @@ package amp import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + prometheusservice_sdkv1 "github.com/aws/aws-sdk-go/service/prometheusservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -57,4 +61,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AMP } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*prometheusservice_sdkv1.PrometheusService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return prometheusservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/amp/status.go b/internal/service/amp/status.go index 07c4473d1fc..383b1331bb9 100644 --- a/internal/service/amp/status.go +++ b/internal/service/amp/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amp import ( diff --git a/internal/service/amp/tags_gen.go b/internal/service/amp/tags_gen.go index 8dedf31901d..6cc655d5292 100644 --- a/internal/service/amp/tags_gen.go +++ b/internal/service/amp/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists amp service tags. +// listTags lists amp service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn prometheusserviceiface.PrometheusServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn prometheusserviceiface.PrometheusServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &prometheusservice.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn prometheusserviceiface.PrometheusService // ListTags lists amp service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AMPConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AMPConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from amp service tags. +// KeyValueTags creates tftags.KeyValueTags from amp service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns amp service tags from Context. +// getTagsIn returns amp service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets amp service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets amp service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates amp service tags. +// updateTags updates amp service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn prometheusserviceiface.PrometheusServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn prometheusserviceiface.PrometheusServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn prometheusserviceiface.PrometheusServi // UpdateTags updates amp service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AMPConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AMPConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/amp/wait.go b/internal/service/amp/wait.go index 5747607a800..5a467da543f 100644 --- a/internal/service/amp/wait.go +++ b/internal/service/amp/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amp import ( diff --git a/internal/service/amp/workspace.go b/internal/service/amp/workspace.go index aebcbb17d3b..665ccd6260b 100644 --- a/internal/service/amp/workspace.go +++ b/internal/service/amp/workspace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp import ( @@ -72,10 +75,10 @@ func ResourceWorkspace() *schema.Resource { } func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) input := &prometheusservice.CreateWorkspaceInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("alias"); ok { @@ -116,7 +119,7 @@ func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) ws, err := FindWorkspaceByID(ctx, conn, d.Id()) @@ -151,7 +154,7 @@ func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceWorkspaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) if d.HasChange("alias") { input := &prometheusservice.UpdateWorkspaceAliasInput{ @@ -220,7 +223,7 @@ func resourceWorkspaceUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceWorkspaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) log.Printf("[INFO] Deleting Prometheus Workspace: %s", d.Id()) _, err := conn.DeleteWorkspaceWithContext(ctx, &prometheusservice.DeleteWorkspaceInput{ diff --git a/internal/service/amp/workspace_data_source.go b/internal/service/amp/workspace_data_source.go index f37b012eafe..2fc26485208 100644 --- a/internal/service/amp/workspace_data_source.go +++ b/internal/service/amp/workspace_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp import ( @@ -46,7 +49,7 @@ func DataSourceWorkspace() *schema.Resource { } func dataSourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig workspaceID := d.Get("workspace_id").(string) diff --git a/internal/service/amp/workspace_data_source_test.go b/internal/service/amp/workspace_data_source_test.go index d7770800bce..1f9312b6b3f 100644 --- a/internal/service/amp/workspace_data_source_test.go +++ b/internal/service/amp/workspace_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amp_test import ( diff --git a/internal/service/amp/workspace_test.go b/internal/service/amp/workspace_test.go index 2ea3f88cd1b..d7f32539f00 100644 --- a/internal/service/amp/workspace_test.go +++ b/internal/service/amp/workspace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp_test import ( @@ -240,7 +243,7 @@ func testAccCheckWorkspaceExists(ctx context.Context, n string, v *prometheusser return fmt.Errorf("No Prometheus Workspace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn(ctx) output, err := tfamp.FindWorkspaceByID(ctx, conn, rs.Primary.ID) @@ -256,7 +259,7 @@ func testAccCheckWorkspaceExists(ctx context.Context, n string, v *prometheusser func testAccCheckWorkspaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AMPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_prometheus_workspace" { diff --git a/internal/service/amp/workspaces_data_source.go b/internal/service/amp/workspaces_data_source.go index 5ce2e03ac5a..c7d079c2ca6 100644 --- a/internal/service/amp/workspaces_data_source.go +++ b/internal/service/amp/workspaces_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amp import ( @@ -39,7 +42,7 @@ func DataSourceWorkspaces() *schema.Resource { // nosemgrep:ci.caps0-in-func-nam } func dataSourceWorkspacesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.caps0-in-func-name - conn := meta.(*conns.AWSClient).AMPConn() + conn := meta.(*conns.AWSClient).AMPConn(ctx) alias_prefix := d.Get("alias_prefix").(string) workspaces, err := FindWorkspaces(ctx, conn, alias_prefix) diff --git a/internal/service/amp/workspaces_data_source_test.go b/internal/service/amp/workspaces_data_source_test.go index 5dea5a1f070..33394fc4ee9 100644 --- a/internal/service/amp/workspaces_data_source_test.go +++ b/internal/service/amp/workspaces_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amp_test import ( diff --git a/internal/service/amplify/amplify_test.go b/internal/service/amplify/amplify_test.go index 4b6a3dc2fb8..d988d84398f 100644 --- a/internal/service/amplify/amplify_test.go +++ b/internal/service/amplify/amplify_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( diff --git a/internal/service/amplify/app.go b/internal/service/amplify/app.go index 5790f1833fb..9fb96cdc324 100644 --- a/internal/service/amplify/app.go +++ b/internal/service/amplify/app.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify import ( @@ -321,13 +324,13 @@ func ResourceApp() *schema.Resource { func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) name := d.Get("name").(string) input := &amplify.CreateAppInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("access_token"); ok { @@ -408,7 +411,7 @@ func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) app, err := FindAppByID(ctx, conn, d.Id()) @@ -455,14 +458,14 @@ func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface } d.Set("repository", app.Repository) - SetTagsOut(ctx, app.Tags) + setTagsOut(ctx, app.Tags) return diags } func resourceAppUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := 
meta.(*conns.AWSClient).AmplifyConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &amplify.UpdateAppInput{ @@ -563,7 +566,7 @@ func resourceAppUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) log.Printf("[DEBUG] Deleting Amplify App: %s", d.Id()) _, err := conn.DeleteAppWithContext(ctx, &amplify.DeleteAppInput{ diff --git a/internal/service/amplify/app_test.go b/internal/service/amplify/app_test.go index ec5d1f0b8b8..af906d4187a 100644 --- a/internal/service/amplify/app_test.go +++ b/internal/service/amplify/app_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( @@ -602,7 +605,7 @@ func testAccCheckAppExists(ctx context.Context, n string, v *amplify.App) resour return fmt.Errorf("No Amplify App ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) output, err := tfamplify.FindAppByID(ctx, conn, rs.Primary.ID) @@ -618,7 +621,7 @@ func testAccCheckAppExists(ctx context.Context, n string, v *amplify.App) resour func testAccCheckAppDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_amplify_app" { diff --git a/internal/service/amplify/backend_environment.go b/internal/service/amplify/backend_environment.go index 97cec3b59fa..c1489cb35b2 100644 --- a/internal/service/amplify/backend_environment.go +++ b/internal/service/amplify/backend_environment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amplify import ( @@ -66,7 +69,7 @@ func ResourceBackendEnvironment() *schema.Resource { func resourceBackendEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID := d.Get("app_id").(string) environmentName := d.Get("environment_name").(string) @@ -99,7 +102,7 @@ func resourceBackendEnvironmentCreate(ctx context.Context, d *schema.ResourceDat func resourceBackendEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, environmentName, err := BackendEnvironmentParseResourceID(d.Id()) @@ -130,7 +133,7 @@ func resourceBackendEnvironmentRead(ctx context.Context, d *schema.ResourceData, func resourceBackendEnvironmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, environmentName, err := BackendEnvironmentParseResourceID(d.Id()) diff --git a/internal/service/amplify/backend_environment_test.go b/internal/service/amplify/backend_environment_test.go index 0acab0a043d..8e5cd80bf5c 100644 --- a/internal/service/amplify/backend_environment_test.go +++ b/internal/service/amplify/backend_environment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( @@ -125,7 +128,7 @@ func testAccCheckBackendEnvironmentExists(ctx context.Context, resourceName stri return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) backendEnvironment, err := tfamplify.FindBackendEnvironmentByAppIDAndEnvironmentName(ctx, conn, appID, environmentName) @@ -141,7 +144,7 @@ func testAccCheckBackendEnvironmentExists(ctx context.Context, resourceName stri func testAccCheckBackendEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_amplify_backend_environment" { diff --git a/internal/service/amplify/branch.go b/internal/service/amplify/branch.go index 2a00bc97338..2fe476521d0 100644 --- a/internal/service/amplify/branch.go +++ b/internal/service/amplify/branch.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amplify import ( @@ -189,7 +192,7 @@ func ResourceBranch() *schema.Resource { func resourceBranchCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID := d.Get("app_id").(string) branchName := d.Get("branch_name").(string) @@ -199,7 +202,7 @@ func resourceBranchCreate(ctx context.Context, d *schema.ResourceData, meta inte AppId: aws.String(appID), BranchName: aws.String(branchName), EnableAutoBuild: aws.Bool(d.Get("enable_auto_build").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("backend_environment_arn"); ok { @@ -268,7 +271,7 @@ func resourceBranchCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceBranchRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, branchName, err := BranchParseResourceID(d.Id()) @@ -310,14 +313,14 @@ func resourceBranchRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("stage", branch.Stage) d.Set("ttl", branch.Ttl) - SetTagsOut(ctx, branch.Tags) + setTagsOut(ctx, branch.Tags) return diags } func resourceBranchUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) if d.HasChangesExcept("tags", "tags_all") { appID, branchName, err := BranchParseResourceID(d.Id()) @@ -403,7 +406,7 @@ func resourceBranchUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceBranchDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, branchName, err := BranchParseResourceID(d.Id()) diff --git a/internal/service/amplify/branch_test.go b/internal/service/amplify/branch_test.go index 9685738013b..d1184b9dfc9 100644 --- a/internal/service/amplify/branch_test.go +++ b/internal/service/amplify/branch_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( @@ -304,7 +307,7 @@ func testAccCheckBranchExists(ctx context.Context, resourceName string, v *ampli return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) branch, err := tfamplify.FindBranchByAppIDAndBranchName(ctx, conn, appID, branchName) @@ -320,7 +323,7 @@ func testAccCheckBranchExists(ctx context.Context, resourceName string, v *ampli func testAccCheckBranchDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_amplify_branch" { diff --git a/internal/service/amplify/consts.go b/internal/service/amplify/consts.go index 3049f2bcfc2..95f4ee52d63 100644 --- a/internal/service/amplify/consts.go +++ b/internal/service/amplify/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify const ( diff --git a/internal/service/amplify/domain_association.go b/internal/service/amplify/domain_association.go index 5a1636449e4..433d858b542 100644 --- a/internal/service/amplify/domain_association.go +++ b/internal/service/amplify/domain_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amplify import ( @@ -89,7 +92,7 @@ func ResourceDomainAssociation() *schema.Resource { func resourceDomainAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID := d.Get("app_id").(string) domainName := d.Get("domain_name").(string) @@ -124,7 +127,7 @@ func resourceDomainAssociationCreate(ctx context.Context, d *schema.ResourceData func resourceDomainAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, domainName, err := DomainAssociationParseResourceID(d.Id()) @@ -158,7 +161,7 @@ func resourceDomainAssociationRead(ctx context.Context, d *schema.ResourceData, func resourceDomainAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, domainName, err := DomainAssociationParseResourceID(d.Id()) @@ -198,7 +201,7 @@ func resourceDomainAssociationUpdate(ctx context.Context, d *schema.ResourceData func resourceDomainAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) appID, domainName, err := DomainAssociationParseResourceID(d.Id()) diff --git a/internal/service/amplify/domain_association_test.go b/internal/service/amplify/domain_association_test.go index 30dbb79a374..1b95bbcd174 100644 --- a/internal/service/amplify/domain_association_test.go +++ b/internal/service/amplify/domain_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( @@ -169,7 +172,7 @@ func testAccCheckDomainAssociationExists(ctx context.Context, resourceName strin return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) domainAssociation, err := tfamplify.FindDomainAssociationByAppIDAndDomainName(ctx, conn, appID, domainName) @@ -185,7 +188,7 @@ func testAccCheckDomainAssociationExists(ctx context.Context, resourceName strin func testAccCheckDomainAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_amplify_domain_association" { diff --git a/internal/service/amplify/find.go b/internal/service/amplify/find.go index 202455139f2..d3794d3970a 100644 --- a/internal/service/amplify/find.go +++ b/internal/service/amplify/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify import ( diff --git a/internal/service/amplify/generate.go b/internal/service/amplify/generate.go index f23bea07cbc..040ef2e3a1b 100644 --- a/internal/service/amplify/generate.go +++ b/internal/service/amplify/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=ListApps //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package amplify diff --git a/internal/service/amplify/id.go b/internal/service/amplify/id.go index aeda71e1b48..23d5af0020f 100644 --- a/internal/service/amplify/id.go +++ b/internal/service/amplify/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify import ( diff --git a/internal/service/amplify/id_test.go b/internal/service/amplify/id_test.go index e9dcfdddce0..9a712c1fcff 100644 --- a/internal/service/amplify/id_test.go +++ b/internal/service/amplify/id_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( diff --git a/internal/service/amplify/service_package_gen.go b/internal/service/amplify/service_package_gen.go index a44a7c06db9..aba2fab07a2 100644 --- a/internal/service/amplify/service_package_gen.go +++ b/internal/service/amplify/service_package_gen.go @@ -5,6 +5,10 @@ package amplify import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + amplify_sdkv1 "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -60,4 +64,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Amplify } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*amplify_sdkv1.Amplify, error) { + sess := config["session"].(*session_sdkv1.Session) + + return amplify_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/amplify/status.go b/internal/service/amplify/status.go index 44b1acef251..d507dc5a387 100644 --- a/internal/service/amplify/status.go +++ b/internal/service/amplify/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify import ( diff --git a/internal/service/amplify/sweep.go b/internal/service/amplify/sweep.go index be032bb47db..e0645367f8f 100644 --- a/internal/service/amplify/sweep.go +++ b/internal/service/amplify/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/amplify" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepApps(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AmplifyConn() + conn := client.AmplifyConn(ctx) sweepResources := make([]sweep.Sweepable, 0) input := &amplify.ListAppsInput{} @@ -55,7 +57,7 @@ func sweepApps(region string) error { return fmt.Errorf("error listing Amplify Apps: %w", err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, 
sweepResources) if err != nil { return fmt.Errorf("error sweeping Amplify Apps (%s): %w", region, err) diff --git a/internal/service/amplify/tags_gen.go b/internal/service/amplify/tags_gen.go index b9ce28621f1..f18dce141f4 100644 --- a/internal/service/amplify/tags_gen.go +++ b/internal/service/amplify/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists amplify service tags. +// listTags lists amplify service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn amplifyiface.AmplifyAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn amplifyiface.AmplifyAPI, identifier string) (tftags.KeyValueTags, error) { input := &amplify.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn amplifyiface.AmplifyAPI, identifier stri // ListTags lists amplify service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AmplifyConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AmplifyConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from amplify service tags. +// KeyValueTags creates tftags.KeyValueTags from amplify service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns amplify service tags from Context. +// getTagsIn returns amplify service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets amplify service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets amplify service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates amplify service tags. +// updateTags updates amplify service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn amplifyiface.AmplifyAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn amplifyiface.AmplifyAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn amplifyiface.AmplifyAPI, identifier st // UpdateTags updates amplify service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AmplifyConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AmplifyConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/amplify/wait.go b/internal/service/amplify/wait.go index a18dd3f0a63..b37d3229898 100644 --- a/internal/service/amplify/wait.go +++ b/internal/service/amplify/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package amplify import ( diff --git a/internal/service/amplify/webhook.go b/internal/service/amplify/webhook.go index 5fff1b4768e..e9571efc594 100644 --- a/internal/service/amplify/webhook.go +++ b/internal/service/amplify/webhook.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify import ( @@ -62,7 +65,7 @@ func ResourceWebhook() *schema.Resource { func resourceWebhookCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) input := &amplify.CreateWebhookInput{ AppId: aws.String(d.Get("app_id").(string)), @@ -87,7 +90,7 @@ func resourceWebhookCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceWebhookRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) webhook, err := FindWebhookByID(ctx, conn, d.Id()) @@ -126,7 +129,7 @@ func resourceWebhookRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) input := &amplify.UpdateWebhookInput{ WebhookId: aws.String(d.Id()), @@ -152,7 +155,7 @@ func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceWebhookDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AmplifyConn() + conn := meta.(*conns.AWSClient).AmplifyConn(ctx) log.Printf("[DEBUG] Deleting Amplify Webhook: %s", d.Id()) _, err := conn.DeleteWebhookWithContext(ctx, 
&amplify.DeleteWebhookInput{ diff --git a/internal/service/amplify/webhook_test.go b/internal/service/amplify/webhook_test.go index da0b0beada1..00485011483 100644 --- a/internal/service/amplify/webhook_test.go +++ b/internal/service/amplify/webhook_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package amplify_test import ( @@ -119,7 +122,7 @@ func testAccCheckWebhookExists(ctx context.Context, resourceName string, v *ampl return fmt.Errorf("No Amplify Webhook ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) webhook, err := tfamplify.FindWebhookByID(ctx, conn, rs.Primary.ID) @@ -135,7 +138,7 @@ func testAccCheckWebhookExists(ctx context.Context, resourceName string, v *ampl func testAccCheckWebhookDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AmplifyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_amplify_webhook" { diff --git a/internal/service/apigateway/account.go b/internal/service/apigateway/account.go index c8d08b9da4c..f4cd784ca69 100644 --- a/internal/service/apigateway/account.go +++ b/internal/service/apigateway/account.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -52,7 +55,7 @@ func ResourceAccount() *schema.Resource { func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.UpdateAccountInput{} @@ -99,7 +102,7 @@ func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) account, err := conn.GetAccountWithContext(ctx, &apigateway.GetAccountInput{}) diff --git a/internal/service/apigateway/account_test.go b/internal/service/apigateway/account_test.go index 15838671002..6214f12ed4c 100644 --- a/internal/service/apigateway/account_test.go +++ b/internal/service/apigateway/account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( diff --git a/internal/service/apigateway/api_key.go b/internal/service/apigateway/api_key.go index 91e801ffabf..8ec40c0373f 100644 --- a/internal/service/apigateway/api_key.go +++ b/internal/service/apigateway/api_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -85,14 +88,14 @@ func ResourceAPIKey() *schema.Resource { func resourceAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) name := d.Get("name").(string) input := &apigateway.CreateApiKeyInput{ Description: aws.String(d.Get("description").(string)), Enabled: aws.Bool(d.Get("enabled").(bool)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Value: aws.String(d.Get("value").(string)), } @@ -109,7 +112,7 @@ func resourceAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) apiKey, err := FindAPIKeyByID(ctx, conn, d.Id()) @@ -123,7 +126,7 @@ func resourceAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "reading API Gateway API Key (%s): %s", d.Id(), err) } - SetTagsOut(ctx, apiKey.Tags) + setTagsOut(ctx, apiKey.Tags) arn := arn.ARN{ Partition: meta.(*conns.AWSClient).Partition, @@ -176,7 +179,7 @@ func resourceAPIKeyUpdateOperations(d *schema.ResourceData) []*apigateway.PatchO func resourceAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) if d.HasChangesExcept("tags", "tags_all") { _, err := conn.UpdateApiKeyWithContext(ctx, &apigateway.UpdateApiKeyInput{ @@ -194,7 +197,7 @@ func resourceAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway API Key: %s", d.Id()) _, err := conn.DeleteApiKeyWithContext(ctx, &apigateway.DeleteApiKeyInput{ diff --git a/internal/service/apigateway/api_key_data_source.go b/internal/service/apigateway/api_key_data_source.go index 8ad9f52e588..7063a55c044 100644 --- a/internal/service/apigateway/api_key_data_source.go +++ b/internal/service/apigateway/api_key_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -53,7 +56,7 @@ func DataSourceAPIKey() *schema.Resource { func dataSourceAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig id := d.Get("id").(string) diff --git a/internal/service/apigateway/api_key_data_source_test.go b/internal/service/apigateway/api_key_data_source_test.go index be815070588..f63bf8c1c65 100644 --- a/internal/service/apigateway/api_key_data_source_test.go +++ b/internal/service/apigateway/api_key_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( diff --git a/internal/service/apigateway/api_key_test.go b/internal/service/apigateway/api_key_test.go index 9716f4c1964..f5ba4e899e1 100644 --- a/internal/service/apigateway/api_key_test.go +++ b/internal/service/apigateway/api_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -232,7 +235,7 @@ func testAccCheckAPIKeyExists(ctx context.Context, n string, v *apigateway.ApiKe return fmt.Errorf("No API Gateway API Key ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindAPIKeyByID(ctx, conn, rs.Primary.ID) @@ -248,7 +251,7 @@ func testAccCheckAPIKeyExists(ctx context.Context, n string, v *apigateway.ApiKe func testAccCheckAPIKeyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_api_key" { diff --git a/internal/service/apigateway/authorizer.go b/internal/service/apigateway/authorizer.go index 7a007c5c724..497a97f1ee7 100644 --- a/internal/service/apigateway/authorizer.go +++ b/internal/service/apigateway/authorizer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -104,7 +107,7 @@ func ResourceAuthorizer() *schema.Resource { func resourceAuthorizerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) if err := validateAuthorizerType(d); err != nil { return sdkdiag.AppendErrorf(diags, "creating API Gateway Authorizer: %s", err) @@ -175,7 +178,7 @@ func resourceAuthorizerCreate(ctx context.Context, d *schema.ResourceData, meta func resourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) apiID := d.Get("rest_api_id").(string) authorizer, err := FindAuthorizerByTwoPartKey(ctx, conn, d.Id(), apiID) @@ -209,7 +212,7 @@ func resourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta in func resourceAuthorizerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) operations := make([]*apigateway.PatchOperation, 0) @@ -302,7 +305,7 @@ func resourceAuthorizerUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceAuthorizerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[INFO] Deleting API Gateway Authorizer: %s", d.Id()) _, err := conn.DeleteAuthorizerWithContext(ctx, &apigateway.DeleteAuthorizerInput{ diff --git a/internal/service/apigateway/authorizer_data_source.go b/internal/service/apigateway/authorizer_data_source.go index 
073109251cb..70e6780fb58 100644 --- a/internal/service/apigateway/authorizer_data_source.go +++ b/internal/service/apigateway/authorizer_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -67,7 +70,7 @@ func DataSourceAuthorizer() *schema.Resource { func dataSourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) authorizerID := d.Get("authorizer_id").(string) apiID := d.Get("rest_api_id").(string) diff --git a/internal/service/apigateway/authorizer_data_source_test.go b/internal/service/apigateway/authorizer_data_source_test.go index 48b0e122989..a8b295538f5 100644 --- a/internal/service/apigateway/authorizer_data_source_test.go +++ b/internal/service/apigateway/authorizer_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( diff --git a/internal/service/apigateway/authorizer_test.go b/internal/service/apigateway/authorizer_test.go index aac4b52b266..1a328a5c5e2 100644 --- a/internal/service/apigateway/authorizer_test.go +++ b/internal/service/apigateway/authorizer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -326,7 +329,7 @@ func testAccCheckAuthorizerExists(ctx context.Context, n string, v *apigateway.A return fmt.Errorf("No API Gateway Authorizer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindAuthorizerByTwoPartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["rest_api_id"]) @@ -342,7 +345,7 @@ func testAccCheckAuthorizerExists(ctx context.Context, n string, v *apigateway.A func testAccCheckAuthorizerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_authorizer" { diff --git a/internal/service/apigateway/authorizers_data_source.go b/internal/service/apigateway/authorizers_data_source.go index ff8cf55f8a0..5e9afe580bd 100644 --- a/internal/service/apigateway/authorizers_data_source.go +++ b/internal/service/apigateway/authorizers_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -32,7 +35,7 @@ func DataSourceAuthorizers() *schema.Resource {
 func dataSourceAuthorizersRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	apiID := d.Get("rest_api_id").(string)
 	input := &apigateway.GetAuthorizersInput{
diff --git a/internal/service/apigateway/authorizers_data_source_test.go b/internal/service/apigateway/authorizers_data_source_test.go
index a0bfa62f45e..3ae286e089b 100644
--- a/internal/service/apigateway/authorizers_data_source_test.go
+++ b/internal/service/apigateway/authorizers_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
diff --git a/internal/service/apigateway/base_path_mapping.go b/internal/service/apigateway/base_path_mapping.go
index a114ba7d4c2..7f8f69544ce 100644
--- a/internal/service/apigateway/base_path_mapping.go
+++ b/internal/service/apigateway/base_path_mapping.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -55,7 +58,7 @@ func ResourceBasePathMapping() *schema.Resource {
 func resourceBasePathMappingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.CreateBasePathMappingInput{
 		RestApiId:  aws.String(d.Get("api_id").(string)),
 		DomainName: aws.String(d.Get("domain_name").(string)),
@@ -93,7 +96,7 @@ func resourceBasePathMappingCreate(ctx context.Context, d *schema.ResourceData,
 func resourceBasePathMappingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	operations := make([]*apigateway.PatchOperation, 0)
@@ -152,7 +155,7 @@ func resourceBasePathMappingUpdate(ctx context.Context, d *schema.ResourceData,
 func resourceBasePathMappingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	domainName, basePath, err := DecodeBasePathMappingID(d.Id())
 	if err != nil {
@@ -189,7 +192,7 @@ func resourceBasePathMappingRead(ctx context.Context, d *schema.ResourceData, me
 func resourceBasePathMappingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	domainName, basePath, err := DecodeBasePathMappingID(d.Id())
 	if err != nil {
diff --git a/internal/service/apigateway/base_path_mapping_test.go b/internal/service/apigateway/base_path_mapping_test.go
index 76984f86c1e..63df946f5f1 100644
--- a/internal/service/apigateway/base_path_mapping_test.go
+++ b/internal/service/apigateway/base_path_mapping_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -220,7 +223,7 @@ func testAccCheckBasePathExists(ctx context.Context, n string, res *apigateway.B
 			return fmt.Errorf("No API Gateway ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		domainName, basePath, err := tfapigateway.DecodeBasePathMappingID(rs.Primary.ID)
 		if err != nil {
@@ -244,7 +247,7 @@ func testAccCheckBasePathExists(ctx context.Context, n string, res *apigateway.B
 func testAccCheckBasePathDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_base_path_mapping" {
diff --git a/internal/service/apigateway/client_certificate.go b/internal/service/apigateway/client_certificate.go
index ae1349e345a..51ef7d6d841 100644
--- a/internal/service/apigateway/client_certificate.go
+++ b/internal/service/apigateway/client_certificate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -64,10 +67,10 @@ func ResourceClientCertificate() *schema.Resource {
 func resourceClientCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.GenerateClientCertificateInput{
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("description"); ok {
@@ -87,7 +90,7 @@ func resourceClientCertificateCreate(ctx context.Context, d *schema.ResourceData
 func resourceClientCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	cert, err := FindClientCertificateByID(ctx, conn, d.Id())
@@ -113,14 +116,14 @@ func resourceClientCertificateRead(ctx context.Context, d *schema.ResourceData,
 	d.Set("expiration_date", cert.ExpirationDate.String())
 	d.Set("pem_encoded_certificate", cert.PemEncodedCertificate)

-	SetTagsOut(ctx, cert.Tags)
+	setTagsOut(ctx, cert.Tags)

 	return diags
 }

 func resourceClientCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &apigateway.UpdateClientCertificateInput{
@@ -146,7 +149,7 @@ func resourceClientCertificateUpdate(ctx context.Context, d *schema.ResourceData
 func resourceClientCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Client Certificate: %s", d.Id())
 	_, err := conn.DeleteClientCertificateWithContext(ctx, &apigateway.DeleteClientCertificateInput{
diff --git a/internal/service/apigateway/client_certificate_test.go b/internal/service/apigateway/client_certificate_test.go
index 127f2aa2351..837effea558 100644
--- a/internal/service/apigateway/client_certificate_test.go
+++ b/internal/service/apigateway/client_certificate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -130,7 +133,7 @@ func testAccCheckClientCertificateExists(ctx context.Context, n string, v *apiga
 			return fmt.Errorf("No API Gateway Client Certificate ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		output, err := tfapigateway.FindClientCertificateByID(ctx, conn, rs.Primary.ID)
@@ -146,7 +149,7 @@ func testAccCheckClientCertificateExists(ctx context.Context, n string, v *apiga
 func testAccCheckClientCertificateDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_client_certificate" {
diff --git a/internal/service/apigateway/consts.go b/internal/service/apigateway/consts.go
index a9bfc7564c3..8e04daf1efc 100644
--- a/internal/service/apigateway/consts.go
+++ b/internal/service/apigateway/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
diff --git a/internal/service/apigateway/deployment.go b/internal/service/apigateway/deployment.go
index 9d04c2252c7..38c51689214 100644
--- a/internal/service/apigateway/deployment.go
+++ b/internal/service/apigateway/deployment.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -82,7 +85,7 @@ func ResourceDeployment() *schema.Resource {
 func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.CreateDeploymentInput{
 		Description: aws.String(d.Get("description").(string)),
@@ -105,7 +108,7 @@ func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDeploymentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	restAPIID := d.Get("rest_api_id").(string)
 	deployment, err := FindDeploymentByTwoPartKey(ctx, conn, restAPIID, d.Id())
@@ -138,7 +141,7 @@ func resourceDeploymentRead(ctx context.Context, d *schema.ResourceData, meta in
 func resourceDeploymentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	operations := make([]*apigateway.PatchOperation, 0)
@@ -167,7 +170,7 @@ func resourceDeploymentUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDeploymentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	// If the stage has been updated to point at a different deployment, then
 	// the stage should not be removed when this deployment is deleted.
diff --git a/internal/service/apigateway/deployment_test.go b/internal/service/apigateway/deployment_test.go
index d8e4c448688..00c2f891b96 100644
--- a/internal/service/apigateway/deployment_test.go
+++ b/internal/service/apigateway/deployment_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -314,7 +317,7 @@ func testAccCheckDeploymentExists(ctx context.Context, n string, v *apigateway.D
 			return fmt.Errorf("No API Gateway Deployment ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		output, err := tfapigateway.FindDeploymentByTwoPartKey(ctx, conn, rs.Primary.Attributes["rest_api_id"], rs.Primary.ID)
@@ -330,7 +333,7 @@ func testAccCheckDeploymentExists(ctx context.Context, n string, v *apigateway.D
 func testAccCheckDeploymentDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_deployment" {
diff --git a/internal/service/apigateway/documentation_part.go b/internal/service/apigateway/documentation_part.go
index 7d718bd51ad..7a9fb5eb16a 100644
--- a/internal/service/apigateway/documentation_part.go
+++ b/internal/service/apigateway/documentation_part.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -78,7 +81,7 @@ func ResourceDocumentationPart() *schema.Resource {
 func resourceDocumentationPartCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	apiId := d.Get("rest_api_id").(string)
 	out, err := conn.CreateDocumentationPartWithContext(ctx, &apigateway.CreateDocumentationPartInput{
@@ -96,7 +99,7 @@ func resourceDocumentationPartCreate(ctx context.Context, d *schema.ResourceData
 func resourceDocumentationPartRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[INFO] Reading API Gateway Documentation Part %s", d.Id())
@@ -127,7 +130,7 @@ func resourceDocumentationPartRead(ctx context.Context, d *schema.ResourceData,
 func resourceDocumentationPartUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	apiId, id, err := DecodeDocumentationPartID(d.Id())
 	if err != nil {
@@ -160,7 +163,7 @@ func resourceDocumentationPartUpdate(ctx context.Context, d *schema.ResourceData
 func resourceDocumentationPartDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	apiId, id, err := DecodeDocumentationPartID(d.Id())
 	if err != nil {
diff --git a/internal/service/apigateway/documentation_part_test.go b/internal/service/apigateway/documentation_part_test.go
index 7dca01d3fec..8cc0e51e3bd 100644
--- a/internal/service/apigateway/documentation_part_test.go
+++ b/internal/service/apigateway/documentation_part_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -206,7 +209,7 @@ func testAccCheckDocumentationPartExists(ctx context.Context, n string, res *api
 			return fmt.Errorf("No API Gateway Documentation Part ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		apiId, id, err := tfapigateway.DecodeDocumentationPartID(rs.Primary.ID)
 		if err != nil {
@@ -230,7 +233,7 @@ func testAccCheckDocumentationPartExists(ctx context.Context, n string, res *api
 func testAccCheckDocumentationPartDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_documentation_part" {
diff --git a/internal/service/apigateway/documentation_version.go b/internal/service/apigateway/documentation_version.go
index 2911e46f35c..25dc7c06892 100644
--- a/internal/service/apigateway/documentation_version.go
+++ b/internal/service/apigateway/documentation_version.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -48,7 +51,7 @@ func ResourceDocumentationVersion() *schema.Resource {
 func resourceDocumentationVersionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	restApiId := d.Get("rest_api_id").(string)
@@ -74,7 +77,7 @@ func resourceDocumentationVersionCreate(ctx context.Context, d *schema.ResourceD
 func resourceDocumentationVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Reading API Gateway Documentation Version %s", d.Id())

 	apiId, docVersion, err := DecodeDocumentationVersionID(d.Id())
@@ -104,7 +107,7 @@ func resourceDocumentationVersionRead(ctx context.Context, d *schema.ResourceDat
 func resourceDocumentationVersionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Updating API Gateway Documentation Version %s", d.Id())

 	_, err := conn.UpdateDocumentationVersionWithContext(ctx, &apigateway.UpdateDocumentationVersionInput{
@@ -128,7 +131,7 @@ func resourceDocumentationVersionUpdate(ctx context.Context, d *schema.ResourceD
 func resourceDocumentationVersionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Documentation Version: %s", d.Id())
 	_, err := conn.DeleteDocumentationVersionWithContext(ctx, &apigateway.DeleteDocumentationVersionInput{
diff --git a/internal/service/apigateway/documentation_version_test.go b/internal/service/apigateway/documentation_version_test.go
index ace1b5381eb..e244ce0aeeb 100644
--- a/internal/service/apigateway/documentation_version_test.go
+++ b/internal/service/apigateway/documentation_version_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -134,7 +137,7 @@ func testAccCheckDocumentationVersionExists(ctx context.Context, n string, res *
 			return fmt.Errorf("No API Gateway Documentation Version ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		apiId, version, err := tfapigateway.DecodeDocumentationVersionID(rs.Primary.ID)
 		if err != nil {
@@ -158,7 +161,7 @@ func testAccCheckDocumentationVersionExists(ctx context.Context, n string, res *
 func testAccCheckDocumentationVersionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_documentation_version" {
diff --git a/internal/service/apigateway/domain_name.go b/internal/service/apigateway/domain_name.go
index 3e3653b4673..7b4eb0bab5a 100644
--- a/internal/service/apigateway/domain_name.go
+++ b/internal/service/apigateway/domain_name.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -172,13 +175,13 @@ func ResourceDomainName() *schema.Resource {
 func resourceDomainNameCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	domainName := d.Get("domain_name").(string)
 	input := &apigateway.CreateDomainNameInput{
 		DomainName:              aws.String(domainName),
 		MutualTlsAuthentication: expandMutualTLSAuthentication(d.Get("mutual_tls_authentication").([]interface{})),
-		Tags:                    GetTagsIn(ctx),
+		Tags:                    getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("certificate_arn"); ok {
@@ -234,7 +237,7 @@ func resourceDomainNameCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	domainName, err := FindDomainName(ctx, conn, d.Id())
@@ -278,14 +281,14 @@ func resourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta in
 	d.Set("regional_zone_id", domainName.RegionalHostedZoneId)
 	d.Set("security_policy", domainName.SecurityPolicy)

-	SetTagsOut(ctx, domainName.Tags)
+	setTagsOut(ctx, domainName.Tags)

 	return diags
 }

 func resourceDomainNameUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		var operations []*apigateway.PatchOperation
@@ -392,7 +395,7 @@ func resourceDomainNameUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDomainNameDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Domain Name: %s", d.Id())
 	_, err := conn.DeleteDomainNameWithContext(ctx, &apigateway.DeleteDomainNameInput{
diff --git a/internal/service/apigateway/domain_name_data_source.go b/internal/service/apigateway/domain_name_data_source.go
index 10ec4ef08a0..864c73368be 100644
--- a/internal/service/apigateway/domain_name_data_source.go
+++ b/internal/service/apigateway/domain_name_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -88,7 +91,7 @@ func DataSourceDomainName() *schema.Resource {
 func dataSourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig

 	domainName := d.Get("domain_name").(string)
diff --git a/internal/service/apigateway/domain_name_data_source_test.go b/internal/service/apigateway/domain_name_data_source_test.go
index a41eb79f17e..ff4035aebaf 100644
--- a/internal/service/apigateway/domain_name_data_source_test.go
+++ b/internal/service/apigateway/domain_name_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
diff --git a/internal/service/apigateway/domain_name_test.go b/internal/service/apigateway/domain_name_test.go
index 70ad2b8149f..feb633fb7b2 100644
--- a/internal/service/apigateway/domain_name_test.go
+++ b/internal/service/apigateway/domain_name_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -404,7 +407,7 @@ func testAccCheckDomainNameExists(ctx context.Context, n string, v *apigateway.D
 			return fmt.Errorf("No API Gateway Domain Name ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		output, err := tfapigateway.FindDomainName(ctx, conn, rs.Primary.ID)
@@ -420,7 +423,7 @@ func testAccCheckDomainNameExists(ctx context.Context, n string, v *apigateway.D
 func testAccCheckDomainNameDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_domain_name" {
diff --git a/internal/service/apigateway/errorcheck_test.go b/internal/service/apigateway/errorcheck_test.go
index b03b31e3b91..dc83e0a3fa4 100644
--- a/internal/service/apigateway/errorcheck_test.go
+++ b/internal/service/apigateway/errorcheck_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
diff --git a/internal/service/apigateway/export_data_source.go b/internal/service/apigateway/export_data_source.go
index 1005b2ac05c..87208885206 100644
--- a/internal/service/apigateway/export_data_source.go
+++ b/internal/service/apigateway/export_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -60,7 +63,7 @@ func DataSourceExport() *schema.Resource {
 func dataSourceExportRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	restApiId := d.Get("rest_api_id").(string)
 	stageName := d.Get("stage_name").(string)
diff --git a/internal/service/apigateway/export_data_source_test.go b/internal/service/apigateway/export_data_source_test.go
index 3594cf9f56b..75a6f31e530 100644
--- a/internal/service/apigateway/export_data_source_test.go
+++ b/internal/service/apigateway/export_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
diff --git a/internal/service/apigateway/flex.go b/internal/service/apigateway/flex.go
index 7c142de3efc..24824931833 100644
--- a/internal/service/apigateway/flex.go
+++ b/internal/service/apigateway/flex.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
diff --git a/internal/service/apigateway/forge.go b/internal/service/apigateway/forge.go
index 9804e60543b..92328ed584c 100644
--- a/internal/service/apigateway/forge.go
+++ b/internal/service/apigateway/forge.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
diff --git a/internal/service/apigateway/gateway_response.go b/internal/service/apigateway/gateway_response.go
index 13b366b3ed4..9d52355476c 100644
--- a/internal/service/apigateway/gateway_response.go
+++ b/internal/service/apigateway/gateway_response.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -72,7 +75,7 @@ func ResourceGatewayResponse() *schema.Resource {
 func resourceGatewayResponsePut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.PutGatewayResponseInput{
 		ResponseType: aws.String(d.Get("response_type").(string)),
@@ -106,7 +109,7 @@ func resourceGatewayResponsePut(ctx context.Context, d *schema.ResourceData, met
 func resourceGatewayResponseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	gatewayResponse, err := FindGatewayResponseByTwoPartKey(ctx, conn, d.Get("response_type").(string), d.Get("rest_api_id").(string))
@@ -130,7 +133,7 @@ func resourceGatewayResponseRead(ctx context.Context, d *schema.ResourceData, me
 func resourceGatewayResponseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Gateway Response: %s", d.Id())
 	_, err := conn.DeleteGatewayResponseWithContext(ctx, &apigateway.DeleteGatewayResponseInput{
diff --git a/internal/service/apigateway/gateway_response_test.go b/internal/service/apigateway/gateway_response_test.go
index c1ab6046498..3eb03dbf147 100644
--- a/internal/service/apigateway/gateway_response_test.go
+++ b/internal/service/apigateway/gateway_response_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -92,7 +95,7 @@ func testAccCheckGatewayResponseExists(ctx context.Context, n string, v *apigate
 			return fmt.Errorf("No API Gateway Gateway Response ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		output, err := tfapigateway.FindGatewayResponseByTwoPartKey(ctx, conn, rs.Primary.Attributes["response_type"], rs.Primary.Attributes["rest_api_id"])
@@ -108,7 +111,7 @@ func testAccCheckGatewayResponseExists(ctx context.Context, n string, v *apigate
 func testAccCheckGatewayResponseDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_gateway_response" {
diff --git a/internal/service/apigateway/generate.go b/internal/service/apigateway/generate.go
index 69c14e7cdd2..9762c336307 100644
--- a/internal/service/apigateway/generate.go
+++ b/internal/service/apigateway/generate.go
@@ -1,5 +1,9 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/listpages/main.go -ListOps=GetAuthorizers -Paginator=Position
 //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package apigateway
diff --git a/internal/service/apigateway/integration.go b/internal/service/apigateway/integration.go
index 9fd2f8743cc..3a6a5f07686 100644
--- a/internal/service/apigateway/integration.go
+++ b/internal/service/apigateway/integration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -154,7 +157,7 @@ func ResourceIntegration() *schema.Resource {
 func resourceIntegrationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.PutIntegrationInput{
 		HttpMethod: aws.String(d.Get("http_method").(string)),
@@ -230,7 +233,7 @@ func resourceIntegrationCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceIntegrationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	integration, err := FindIntegrationByThreePartKey(ctx, conn, d.Get("http_method").(string), d.Get("resource_id").(string), d.Get("rest_api_id").(string))
@@ -275,7 +278,7 @@ func resourceIntegrationRead(ctx context.Context, d *schema.ResourceData, meta i
 func resourceIntegrationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	operations := make([]*apigateway.PatchOperation, 0)
@@ -463,7 +466,7 @@ func resourceIntegrationUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceIntegrationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Integration: %s", d.Id())
 	_, err := conn.DeleteIntegrationWithContext(ctx, &apigateway.DeleteIntegrationInput{
diff --git a/internal/service/apigateway/integration_response.go b/internal/service/apigateway/integration_response.go
index e5c588f56e6..ed91d8fd1cd 100644
--- a/internal/service/apigateway/integration_response.go
+++ b/internal/service/apigateway/integration_response.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -91,7 +94,7 @@ func ResourceIntegrationResponse() *schema.Resource {
 func resourceIntegrationResponsePut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.PutIntegrationResponseInput{
 		HttpMethod: aws.String(d.Get("http_method").(string)),
@@ -131,7 +134,7 @@ func resourceIntegrationResponsePut(ctx context.Context, d *schema.ResourceData,
 func resourceIntegrationResponseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	integrationResponse, err := FindIntegrationResponseByFourPartKey(ctx, conn, d.Get("http_method").(string), d.Get("resource_id").(string), d.Get("rest_api_id").(string), d.Get("status_code").(string))
@@ -160,7 +163,7 @@ func resourceIntegrationResponseRead(ctx context.Context, d *schema.ResourceData
 func resourceIntegrationResponseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Integration Response: %s", d.Id())
 	_, err := conn.DeleteIntegrationResponseWithContext(ctx, &apigateway.DeleteIntegrationResponseInput{
diff --git a/internal/service/apigateway/integration_response_test.go b/internal/service/apigateway/integration_response_test.go
index da02845abce..02cb020e58a 100644
--- a/internal/service/apigateway/integration_response_test.go
+++ b/internal/service/apigateway/integration_response_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -99,7 +102,7 @@ func testAccCheckIntegrationResponseExists(ctx context.Context, n string, v *api
 			return fmt.Errorf("No API Gateway Integration Response ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		output, err := tfapigateway.FindIntegrationResponseByFourPartKey(ctx, conn, rs.Primary.Attributes["http_method"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["status_code"])
@@ -115,7 +118,7 @@ func testAccCheckIntegrationResponseExists(ctx context.Context, n string, v *api
 func testAccCheckIntegrationResponseDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_integration_response" {
diff --git a/internal/service/apigateway/integration_test.go b/internal/service/apigateway/integration_test.go
index ec7f9f724f4..891a2c08d7d 100644
--- a/internal/service/apigateway/integration_test.go
+++ b/internal/service/apigateway/integration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test

 import (
@@ -436,7 +439,7 @@ func testAccCheckIntegrationExists(ctx context.Context, n string, v *apigateway.
 			return fmt.Errorf("No API Gateway Integration ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		output, err := tfapigateway.FindIntegrationByThreePartKey(ctx, conn, rs.Primary.Attributes["http_method"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["rest_api_id"])
@@ -452,7 +455,7 @@ func testAccCheckIntegrationExists(ctx context.Context, n string, v *apigateway.
 func testAccCheckIntegrationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_api_gateway_integration" {
diff --git a/internal/service/apigateway/method.go b/internal/service/apigateway/method.go
index 8ce8331d69e..eb2199eb909 100644
--- a/internal/service/apigateway/method.go
+++ b/internal/service/apigateway/method.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway

 import (
@@ -102,7 +105,7 @@ func ResourceMethod() *schema.Resource {
 func resourceMethodCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	input := &apigateway.PutMethodInput{
 		ApiKeyRequired: aws.Bool(d.Get("api_key_required").(bool)),
@@ -149,7 +152,7 @@ func resourceMethodCreate(ctx context.Context, d *schema.ResourceData, meta inte
 func resourceMethodRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	method, err := FindMethodByThreePartKey(ctx, conn, d.Get("http_method").(string), d.Get("resource_id").(string), d.Get("rest_api_id").(string))
@@ -177,7 +180,7 @@ func resourceMethodRead(ctx context.Context, d *schema.ResourceData, meta interf
 func resourceMethodUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	operations := make([]*apigateway.PatchOperation, 0)
@@ -338,7 +341,7 @@ func resourceMethodUpdate(ctx context.Context, d *schema.ResourceData, meta inte
 func resourceMethodDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)

 	log.Printf("[DEBUG] Deleting API Gateway Method: %s", d.Id())
 	_, err := conn.DeleteMethodWithContext(ctx, &apigateway.DeleteMethodInput{
diff --git a/internal/service/apigateway/method_response.go b/internal/service/apigateway/method_response.go
index d95dee9c831..1b0668d474d 100644
---
a/internal/service/apigateway/method_response.go +++ b/internal/service/apigateway/method_response.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -86,7 +89,7 @@ func ResourceMethodResponse() *schema.Resource { func resourceMethodResponseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.PutMethodResponseInput{ HttpMethod: aws.String(d.Get("http_method").(string)), @@ -121,7 +124,7 @@ func resourceMethodResponseCreate(ctx context.Context, d *schema.ResourceData, m func resourceMethodResponseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) methodResponse, err := FindMethodResponseByFourPartKey(ctx, conn, d.Get("http_method").(string), d.Get("resource_id").(string), d.Get("rest_api_id").(string), d.Get("status_code").(string)) @@ -150,7 +153,7 @@ func resourceMethodResponseRead(ctx context.Context, d *schema.ResourceData, met func resourceMethodResponseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) operations := make([]*apigateway.PatchOperation, 0) @@ -181,7 +184,7 @@ func resourceMethodResponseUpdate(ctx context.Context, d *schema.ResourceData, m func resourceMethodResponseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway Method Response: %s", d.Id()) _, err 
:= conn.DeleteMethodResponseWithContext(ctx, &apigateway.DeleteMethodResponseInput{ diff --git a/internal/service/apigateway/method_response_test.go b/internal/service/apigateway/method_response_test.go index f199b8f715a..611cb3b4364 100644 --- a/internal/service/apigateway/method_response_test.go +++ b/internal/service/apigateway/method_response_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -96,7 +99,7 @@ func testAccCheckMethodResponseExists(ctx context.Context, n string, v *apigatew return fmt.Errorf("No API Gateway Method Response ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindMethodResponseByFourPartKey(ctx, conn, rs.Primary.Attributes["http_method"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["status_code"]) @@ -112,7 +115,7 @@ func testAccCheckMethodResponseExists(ctx context.Context, n string, v *apigatew func testAccCheckMethodResponseDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_method_response" { diff --git a/internal/service/apigateway/method_settings.go b/internal/service/apigateway/method_settings.go index 3b9062055e8..311e39d9a64 100644 --- a/internal/service/apigateway/method_settings.go +++ b/internal/service/apigateway/method_settings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -137,7 +140,7 @@ func flattenMethodSettings(settings *apigateway.MethodSetting) []interface{} { func resourceMethodSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) stage, err := FindStageByTwoPartKey(ctx, conn, d.Get("rest_api_id").(string), d.Get("stage_name").(string)) @@ -169,7 +172,7 @@ func resourceMethodSettingsRead(ctx context.Context, d *schema.ResourceData, met func resourceMethodSettingsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) methodPath := d.Get("method_path").(string) prefix := fmt.Sprintf("/%s/", methodPath) @@ -271,7 +274,7 @@ func resourceMethodSettingsUpdate(ctx context.Context, d *schema.ResourceData, m func resourceMethodSettingsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.UpdateStageInput{ RestApiId: aws.String(d.Get("rest_api_id").(string)), diff --git a/internal/service/apigateway/method_settings_test.go b/internal/service/apigateway/method_settings_test.go index 7f864b602ff..0119ce0c5ad 100644 --- a/internal/service/apigateway/method_settings_test.go +++ b/internal/service/apigateway/method_settings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -619,7 +622,7 @@ func TestAccAPIGatewayMethodSettings_disappears(t *testing.T) { func testAccCheckMethodSettingsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_method_settings" { diff --git a/internal/service/apigateway/method_test.go b/internal/service/apigateway/method_test.go index 7ebd5b8e06b..8e33a66fcb3 100644 --- a/internal/service/apigateway/method_test.go +++ b/internal/service/apigateway/method_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -250,7 +253,7 @@ func testAccCheckMethodExists(ctx context.Context, n string, v *apigateway.Metho return fmt.Errorf("No API Gateway Method ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindMethodByThreePartKey(ctx, conn, rs.Primary.Attributes["http_method"], rs.Primary.Attributes["resource_id"], rs.Primary.Attributes["rest_api_id"]) @@ -266,7 +269,7 @@ func testAccCheckMethodExists(ctx context.Context, n string, v *apigateway.Metho func testAccCheckMethodDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_method" { diff --git a/internal/service/apigateway/model.go b/internal/service/apigateway/model.go index 3a5709afc60..fb1979578b6 100644 --- a/internal/service/apigateway/model.go +++ 
b/internal/service/apigateway/model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -39,7 +42,7 @@ func ResourceModel() *schema.Resource { d.Set("name", name) d.Set("rest_api_id", restApiID) - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) output, err := conn.GetModelWithContext(ctx, &apigateway.GetModelInput{ ModelName: aws.String(name), @@ -92,7 +95,7 @@ func ResourceModel() *schema.Resource { func resourceModelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) name := d.Get("name").(string) input := &apigateway.CreateModelInput{ @@ -122,7 +125,7 @@ func resourceModelCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceModelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) model, err := FindModelByTwoPartKey(ctx, conn, d.Get("name").(string), d.Get("rest_api_id").(string)) @@ -145,7 +148,7 @@ func resourceModelRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceModelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) operations := make([]*apigateway.PatchOperation, 0) @@ -182,7 +185,7 @@ func resourceModelUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceModelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := 
meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway Model: %s", d.Id()) _, err := conn.DeleteModelWithContext(ctx, &apigateway.DeleteModelInput{ diff --git a/internal/service/apigateway/model_test.go b/internal/service/apigateway/model_test.go index 6ecfa7d6c41..d09ab158eed 100644 --- a/internal/service/apigateway/model_test.go +++ b/internal/service/apigateway/model_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -83,7 +86,7 @@ func testAccCheckModelExists(ctx context.Context, n string, v *apigateway.Model) return fmt.Errorf("No API Gateway Model ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindModelByTwoPartKey(ctx, conn, rs.Primary.Attributes["name"], rs.Primary.Attributes["rest_api_id"]) @@ -99,7 +102,7 @@ func testAccCheckModelExists(ctx context.Context, n string, v *apigateway.Model) func testAccCheckModelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_model" { diff --git a/internal/service/apigateway/request_validator.go b/internal/service/apigateway/request_validator.go index 1cd8363da7d..6d61379d050 100644 --- a/internal/service/apigateway/request_validator.go +++ b/internal/service/apigateway/request_validator.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -65,7 +68,7 @@ func ResourceRequestValidator() *schema.Resource { func resourceRequestValidatorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) name := d.Get("name").(string) input := &apigateway.CreateRequestValidatorInput{ @@ -88,7 +91,7 @@ func resourceRequestValidatorCreate(ctx context.Context, d *schema.ResourceData, func resourceRequestValidatorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) output, err := FindRequestValidatorByTwoPartKey(ctx, conn, d.Id(), d.Get("rest_api_id").(string)) @@ -111,7 +114,7 @@ func resourceRequestValidatorRead(ctx context.Context, d *schema.ResourceData, m func resourceRequestValidatorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) operations := make([]*apigateway.PatchOperation, 0) @@ -156,7 +159,7 @@ func resourceRequestValidatorUpdate(ctx context.Context, d *schema.ResourceData, func resourceRequestValidatorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway Request Validator: %s", d.Id()) _, err := conn.DeleteRequestValidatorWithContext(ctx, &apigateway.DeleteRequestValidatorInput{ diff --git a/internal/service/apigateway/request_validator_test.go b/internal/service/apigateway/request_validator_test.go index a545d17e43f..6ada6d9c823 100644 --- 
a/internal/service/apigateway/request_validator_test.go +++ b/internal/service/apigateway/request_validator_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -90,7 +93,7 @@ func testAccCheckRequestValidatorExists(ctx context.Context, n string, v *apigat return fmt.Errorf("No API Gateway Request Validator ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindRequestValidatorByTwoPartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["rest_api_id"]) @@ -106,7 +109,7 @@ func testAccCheckRequestValidatorExists(ctx context.Context, n string, v *apigat func testAccCheckRequestValidatorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_request_validator" { diff --git a/internal/service/apigateway/resource.go b/internal/service/apigateway/resource.go index 7d5a674d6c7..a7b07582a5c 100644 --- a/internal/service/apigateway/resource.go +++ b/internal/service/apigateway/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -63,7 +66,7 @@ func ResourceResource() *schema.Resource { func resourceResourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.CreateResourceInput{ ParentId: aws.String(d.Get("parent_id").(string)), @@ -84,7 +87,7 @@ func resourceResourceCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) resource, err := FindResourceByTwoPartKey(ctx, conn, d.Id(), d.Get("rest_api_id").(string)) @@ -127,7 +130,7 @@ func resourceResourceUpdateOperations(d *schema.ResourceData) []*apigateway.Patc func resourceResourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.UpdateResourceInput{ ResourceId: aws.String(d.Id()), @@ -146,7 +149,7 @@ func resourceResourceUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceResourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway Resource: %s", d.Id()) _, err := conn.DeleteResourceWithContext(ctx, &apigateway.DeleteResourceInput{ diff --git a/internal/service/apigateway/resource_data_source.go b/internal/service/apigateway/resource_data_source.go index 3de97fd65a2..bf36dc30605 100644 --- 
a/internal/service/apigateway/resource_data_source.go +++ b/internal/service/apigateway/resource_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -39,7 +42,7 @@ func DataSourceResource() *schema.Resource { func dataSourceResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) restApiId := d.Get("rest_api_id").(string) target := d.Get("path").(string) diff --git a/internal/service/apigateway/resource_data_source_test.go b/internal/service/apigateway/resource_data_source_test.go index 48f20dfbf53..e03e0760e98 100644 --- a/internal/service/apigateway/resource_data_source_test.go +++ b/internal/service/apigateway/resource_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( diff --git a/internal/service/apigateway/resource_test.go b/internal/service/apigateway/resource_test.go index 232252ba3b2..9bfa6d1890e 100644 --- a/internal/service/apigateway/resource_test.go +++ b/internal/service/apigateway/resource_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -118,7 +121,7 @@ func testAccCheckResourceExists(ctx context.Context, n string, v *apigateway.Res return fmt.Errorf("No API Gateway Resource ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindResourceByTwoPartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["rest_api_id"]) @@ -134,7 +137,7 @@ func testAccCheckResourceExists(ctx context.Context, n string, v *apigateway.Res func testAccCheckResourceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_resource" { diff --git a/internal/service/apigateway/rest_api.go b/internal/service/apigateway/rest_api.go index ef768397920..9c53b7f3441 100644 --- a/internal/service/apigateway/rest_api.go +++ b/internal/service/apigateway/rest_api.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -19,10 +22,10 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -173,12 +176,12 @@ func ResourceRestAPI() *schema.Resource { func resourceRestAPICreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) name := d.Get("name").(string) input := &apigateway.CreateRestApiInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("api_key_source"); ok { @@ -274,7 +277,7 @@ func resourceRestAPICreate(ctx context.Context, d *schema.ResourceData, meta int func resourceRestAPIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) api, err := FindRESTAPIByID(ctx, conn, d.Id()) @@ -365,14 +368,14 @@ func resourceRestAPIRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("policy", policyToSet) - SetTagsOut(ctx, api.Tags) + setTagsOut(ctx, api.Tags) return diags } func resourceRestAPIUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) if d.HasChangesExcept("tags", "tags_all") { operations := make([]*apigateway.PatchOperation, 0) @@ -395,19 +398,23 @@ func resourceRestAPIUpdate(ctx context.Context, d *schema.ResourceData, meta int // Remove every binary media types. Simpler to remove and add new ones, // since there are no replacings. for _, v := range old { - operations = append(operations, &apigateway.PatchOperation{ - Op: aws.String(apigateway.OpRemove), - Path: aws.String(fmt.Sprintf("/%s/%s", prefix, escapeJSONPointer(v.(string)))), - }) + if e, ok := v.(string); ok { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String(apigateway.OpRemove), + Path: aws.String(fmt.Sprintf("/%s/%s", prefix, escapeJSONPointer(e))), + }) + } } // Handle additions if len(new) > 0 { for _, v := range new { - operations = append(operations, &apigateway.PatchOperation{ - Op: aws.String(apigateway.OpAdd), - Path: aws.String(fmt.Sprintf("/%s/%s", prefix, escapeJSONPointer(v.(string)))), - }) + if e, ok := v.(string); ok { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String(apigateway.OpAdd), + Path: aws.String(fmt.Sprintf("/%s/%s", prefix, escapeJSONPointer(e))), + }) + } } } } @@ -571,7 +578,7 @@ func resourceRestAPIUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceRestAPIDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway REST API: %s", d.Id()) _, err := conn.DeleteRestApiWithContext(ctx, &apigateway.DeleteRestApiInput{ @@ -626,18 +633,22 @@ func resourceRestAPIWithBodyUpdateOperations(d *schema.ResourceData, output *api } if v, ok := d.GetOk("binary_media_types"); ok && len(v.([]interface{})) > 0 { - for _, elem := range 
aws.StringValueSlice(output.BinaryMediaTypes) { - operations = append(operations, &apigateway.PatchOperation{ - Op: aws.String(apigateway.OpRemove), - Path: aws.String("/binaryMediaTypes/" + escapeJSONPointer(elem)), - }) + if len(output.BinaryMediaTypes) > 0 { + for _, elem := range aws.StringValueSlice(output.BinaryMediaTypes) { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String(apigateway.OpRemove), + Path: aws.String("/binaryMediaTypes/" + escapeJSONPointer(elem)), + }) + } } for _, elem := range v.([]interface{}) { - operations = append(operations, &apigateway.PatchOperation{ - Op: aws.String(apigateway.OpAdd), - Path: aws.String("/binaryMediaTypes/" + escapeJSONPointer(elem.(string))), - }) + if el, ok := elem.(string); ok { + operations = append(operations, &apigateway.PatchOperation{ + Op: aws.String(apigateway.OpAdd), + Path: aws.String("/binaryMediaTypes/" + escapeJSONPointer(el)), + }) + } } } diff --git a/internal/service/apigateway/rest_api_data_source.go b/internal/service/apigateway/rest_api_data_source.go index 66a99c8bd4d..37f0a500e20 100644 --- a/internal/service/apigateway/rest_api_data_source.go +++ b/internal/service/apigateway/rest_api_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -13,8 +16,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" ) // @SDKDataSource("aws_api_gateway_rest_api") @@ -84,7 +87,7 @@ func DataSourceRestAPI() *schema.Resource { func dataSourceRestAPIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig params := &apigateway.GetRestApisInput{} diff --git a/internal/service/apigateway/rest_api_data_source_test.go b/internal/service/apigateway/rest_api_data_source_test.go index 41bc6322377..b519da7b239 100644 --- a/internal/service/apigateway/rest_api_data_source_test.go +++ b/internal/service/apigateway/rest_api_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( diff --git a/internal/service/apigateway/rest_api_policy.go b/internal/service/apigateway/rest_api_policy.go index 702d15dda33..756d289157f 100644 --- a/internal/service/apigateway/rest_api_policy.go +++ b/internal/service/apigateway/rest_api_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -52,7 +55,7 @@ func ResourceRestAPIPolicy() *schema.Resource { func resourceRestAPIPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) restApiId := d.Get("rest_api_id").(string) log.Printf("[DEBUG] Setting API Gateway REST API Policy: %s", restApiId) @@ -89,7 +92,7 @@ func resourceRestAPIPolicyPut(ctx context.Context, d *schema.ResourceData, meta func resourceRestAPIPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Reading API Gateway REST API Policy %s", d.Id()) @@ -130,7 +133,7 @@ func resourceRestAPIPolicyRead(ctx context.Context, d *schema.ResourceData, meta func resourceRestAPIPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) restApiId := d.Get("rest_api_id").(string) log.Printf("[DEBUG] Deleting API Gateway REST API Policy: %s", restApiId) diff --git a/internal/service/apigateway/rest_api_policy_test.go b/internal/service/apigateway/rest_api_policy_test.go index 359af377eee..06497856e15 100644 --- a/internal/service/apigateway/rest_api_policy_test.go +++ b/internal/service/apigateway/rest_api_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -112,7 +115,7 @@ func testAccCheckRestAPIPolicyExists(ctx context.Context, n string, res *apigate return fmt.Errorf("No API Gateway ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) req := &apigateway.GetRestApiInput{ RestApiId: aws.String(rs.Primary.ID), @@ -144,7 +147,7 @@ func testAccCheckRestAPIPolicyExists(ctx context.Context, n string, res *apigate func testAccCheckRestAPIPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_rest_api_policy" { diff --git a/internal/service/apigateway/rest_api_test.go b/internal/service/apigateway/rest_api_test.go index d6dbf6dbcef..9f0a0f07ed1 100644 --- a/internal/service/apigateway/rest_api_test.go +++ b/internal/service/apigateway/rest_api_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -179,7 +182,7 @@ func TestAccAPIGatewayRestAPI_endpoint(t *testing.T) { // This can eventually be moved to a PreCheck function // If the region does not support EDGE endpoint type, this test will either show // SKIP (if REGIONAL passed) or FAIL (if REGIONAL failed) - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := conn.CreateRestApiWithContext(ctx, &apigateway.CreateRestApiInput{ Name: aws.String(sdkacctest.RandomWithPrefix("tf-acc-test-edge-endpoint-precheck")), EndpointConfiguration: &apigateway.EndpointConfiguration{ @@ -229,7 +232,7 @@ func TestAccAPIGatewayRestAPI_Endpoint_private(t *testing.T) { PreConfig: func() { // Ensure region supports PRIVATE endpoint // This can eventually be moved to a PreCheck function - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := conn.CreateRestApiWithContext(ctx, &apigateway.CreateRestApiInput{ Name: aws.String(sdkacctest.RandomWithPrefix("tf-acc-test-private-endpoint-precheck")), EndpointConfiguration: &apigateway.EndpointConfiguration{ @@ -1407,7 +1410,7 @@ func TestAccAPIGatewayRestAPI_Policy_setByBody(t *testing.T) { func testAccCheckRestAPIRoutes(ctx context.Context, conf *apigateway.RestApi, routes []string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) resp, err := conn.GetResourcesWithContext(ctx, &apigateway.GetResourcesInput{ RestApiId: conf.Id, @@ -1438,7 +1441,7 @@ func testAccCheckRestAPIRoutes(ctx context.Context, conf *apigateway.RestApi, ro func testAccCheckRestAPIEndpointsCount(ctx context.Context, conf *apigateway.RestApi, count int) resource.TestCheckFunc { return 
func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) resp, err := conn.GetRestApiWithContext(ctx, &apigateway.GetRestApiInput{ RestApiId: conf.Id, @@ -1471,7 +1474,7 @@ func testAccCheckRestAPIExists(ctx context.Context, n string, v *apigateway.Rest return fmt.Errorf("No API Gateway ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindRESTAPIByID(ctx, conn, rs.Primary.ID) @@ -1487,7 +1490,7 @@ func testAccCheckRestAPIExists(ctx context.Context, n string, v *apigateway.Rest func testAccCheckRestAPIDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_rest_api" { diff --git a/internal/service/apigateway/sdk_data_source.go b/internal/service/apigateway/sdk_data_source.go index 9799af25a75..d36b88635de 100644 --- a/internal/service/apigateway/sdk_data_source.go +++ b/internal/service/apigateway/sdk_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway
 
 import (
@@ -55,7 +58,7 @@ func DataSourceSdk() *schema.Resource {
 
 func dataSourceSdkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).APIGatewayConn()
+	conn := meta.(*conns.AWSClient).APIGatewayConn(ctx)
 
 	restApiId := d.Get("rest_api_id").(string)
 	stageName := d.Get("stage_name").(string)
diff --git a/internal/service/apigateway/sdk_data_source_test.go b/internal/service/apigateway/sdk_data_source_test.go
index 985230fb677..9ffe7899f66 100644
--- a/internal/service/apigateway/sdk_data_source_test.go
+++ b/internal/service/apigateway/sdk_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test
 
 import (
diff --git a/internal/service/apigateway/service_package.go b/internal/service/apigateway/service_package.go
new file mode 100644
index 00000000000..4e53f749961
--- /dev/null
+++ b/internal/service/apigateway/service_package.go
@@ -0,0 +1,27 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+package apigateway
+
+import (
+	"context"
+
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	request_sdkv1 "github.com/aws/aws-sdk-go/aws/request"
+	apigateway_sdkv1 "github.com/aws/aws-sdk-go/service/apigateway"
+	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+)
+
+// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *apigateway_sdkv1.APIGateway) (*apigateway_sdkv1.APIGateway, error) {
+	conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) {
+		// Many operations can return an error such as:
+		// ConflictException: Unable to complete operation due to concurrent modification. Please try again later.
+		// Handle them all globally for the service client.
+		if tfawserr.ErrMessageContains(r.Error, apigateway_sdkv1.ErrCodeConflictException, "try again later") {
+			r.Retryable = aws_sdkv1.Bool(true)
+		}
+	})
+
+	return conn, nil
+}
diff --git a/internal/service/apigateway/service_package_gen.go b/internal/service/apigateway/service_package_gen.go
index 38d9b3da07f..d72af166fc4 100644
--- a/internal/service/apigateway/service_package_gen.go
+++ b/internal/service/apigateway/service_package_gen.go
@@ -5,6 +5,10 @@ package apigateway
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	apigateway_sdkv1 "github.com/aws/aws-sdk-go/service/apigateway"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -193,4 +197,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.APIGateway
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*apigateway_sdkv1.APIGateway, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return apigateway_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/apigateway/stage.go b/internal/service/apigateway/stage.go
index 0214dbba2a3..6b3c611e28d 100644
--- a/internal/service/apigateway/stage.go
+++ b/internal/service/apigateway/stage.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -92,7 +95,7 @@ func ResourceStage() *schema.Resource { }, "stage_variable_overrides": { Type: schema.TypeMap, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, }, "use_stage_cache": { @@ -159,7 +162,7 @@ func ResourceStage() *schema.Resource { func resourceStageCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) restAPIID := d.Get("rest_api_id").(string) stageName := d.Get("stage_name").(string) @@ -168,7 +171,7 @@ func resourceStageCreate(ctx context.Context, d *schema.ResourceData, meta inter RestApiId: aws.String(restAPIID), StageName: aws.String(stageName), DeploymentId: aws.String(deploymentID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } waitForCache := false @@ -228,7 +231,7 @@ func resourceStageCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceStageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) restAPIID := d.Get("rest_api_id").(string) stageName := d.Get("stage_name").(string) @@ -288,14 +291,14 @@ func resourceStageRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("web_acl_arn", stage.WebAclArn) d.Set("xray_tracing_enabled", stage.TracingEnabled) - SetTagsOut(ctx, stage.Tags) + setTagsOut(ctx, stage.Tags) return diags } func resourceStageUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) restAPIID := d.Get("rest_api_id").(string) stageName := d.Get("stage_name").(string) @@ -421,7 +424,7 @@ 
func resourceStageUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceStageDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway Stage: %s", d.Id()) _, err := conn.DeleteStageWithContext(ctx, &apigateway.DeleteStageInput{ diff --git a/internal/service/apigateway/stage_test.go b/internal/service/apigateway/stage_test.go index 2663cfd0064..9a32837f990 100644 --- a/internal/service/apigateway/stage_test.go +++ b/internal/service/apigateway/stage_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -522,7 +525,7 @@ func testAccCheckStageExists(ctx context.Context, n string, v *apigateway.Stage) return fmt.Errorf("No API Gateway Stage ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindStageByTwoPartKey(ctx, conn, rs.Primary.Attributes["rest_api_id"], rs.Primary.Attributes["stage_name"]) @@ -538,7 +541,7 @@ func testAccCheckStageExists(ctx context.Context, n string, v *apigateway.Stage) func testAccCheckStageDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_stage" { diff --git a/internal/service/apigateway/status.go b/internal/service/apigateway/status.go index b3a05e35bc8..878c36d0163 100644 --- a/internal/service/apigateway/status.go +++ b/internal/service/apigateway/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( diff --git a/internal/service/apigateway/sweep.go b/internal/service/apigateway/sweep.go index 028df01978a..f842bdaaeac 100644 --- a/internal/service/apigateway/sweep.go +++ b/internal/service/apigateway/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -14,7 +17,6 @@ import ( "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -55,11 +57,11 @@ func init() { func sweepRestAPIs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).APIGatewayConn() + conn := client.APIGatewayConn(ctx) err = conn.GetRestApisPagesWithContext(ctx, &apigateway.GetRestApisInput{}, func(page *apigateway.GetRestApisOutput, lastPage bool) bool { for _, item := range page.Items { @@ -98,11 +100,11 @@ func sweepRestAPIs(region string) error { func sweepVPCLinks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).APIGatewayConn() + conn := client.APIGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -127,7 +129,7 @@ func sweepVPCLinks(region string) error { return fmt.Errorf("retrieving API Gateway VPC Links: %w", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := 
sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping API Gateway VPC Links: %w", err)) } @@ -136,12 +138,12 @@ func sweepVPCLinks(region string) error { func sweepClientCertificates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).APIGatewayConn() + conn := client.APIGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -165,7 +167,7 @@ func sweepClientCertificates(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing API Gateway Client Certificates for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping API Gateway Client Certificates for %s: %w", region, err)) } @@ -179,14 +181,14 @@ func sweepClientCertificates(region string) error { func sweepUsagePlans(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } log.Printf("[INFO] Sweeping API Gateway Usage Plans for %s", region) - conn := client.(*conns.AWSClient).APIGatewayConn() + conn := client.APIGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -213,7 +215,7 @@ func sweepUsagePlans(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing API Gateway Usage Plans for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = 
multierror.Append(errs, fmt.Errorf("sweeping API Gateway Usage Plans for %s: %w", region, err)) } @@ -227,14 +229,14 @@ func sweepUsagePlans(region string) error { func sweepAPIKeys(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } log.Printf("[INFO] Sweeping API Gateway API Keys for %s", region) - conn := client.(*conns.AWSClient).APIGatewayConn() + conn := client.APIGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -260,7 +262,7 @@ func sweepAPIKeys(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing API Gateway API Keys for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping API Gateway API Keys for %s: %w", region, err)) } @@ -274,14 +276,14 @@ func sweepAPIKeys(region string) error { func sweepDomainNames(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } log.Printf("[INFO] Sweeping API Gateway Domain Names for %s", region) - conn := client.(*conns.AWSClient).APIGatewayConn() + conn := client.APIGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -307,7 +309,7 @@ func sweepDomainNames(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing API Gateway Domain Names for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping API 
Gateway Domain Names for %s: %w", region, err)) } diff --git a/internal/service/apigateway/tags_gen.go b/internal/service/apigateway/tags_gen.go index dd0b404e95a..f52875cfca8 100644 --- a/internal/service/apigateway/tags_gen.go +++ b/internal/service/apigateway/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from apigateway service tags. +// KeyValueTags creates tftags.KeyValueTags from apigateway service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns apigateway service tags from Context. +// getTagsIn returns apigateway service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets apigateway service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets apigateway service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates apigateway service tags. +// updateTags updates apigateway service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn apigatewayiface.APIGatewayAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn apigatewayiface.APIGatewayAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn apigatewayiface.APIGatewayAPI, identif
 // UpdateTags updates apigateway service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).APIGatewayConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).APIGatewayConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/apigateway/usage_plan.go b/internal/service/apigateway/usage_plan.go
index 91c89631fdf..fb0c7ee0583 100644
--- a/internal/service/apigateway/usage_plan.go
+++ b/internal/service/apigateway/usage_plan.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -144,12 +147,12 @@ func ResourceUsagePlan() *schema.Resource { func resourceUsagePlanCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) name := d.Get("name").(string) input := &apigateway.CreateUsagePlanInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("api_stages"); ok && v.(*schema.Set).Len() > 0 { @@ -213,7 +216,7 @@ func resourceUsagePlanCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceUsagePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) up, err := FindUsagePlanByID(ctx, conn, d.Id()) @@ -253,14 +256,14 @@ func resourceUsagePlanRead(ctx context.Context, d *schema.ResourceData, meta int } } - SetTagsOut(ctx, up.Tags) + setTagsOut(ctx, up.Tags) return diags } func resourceUsagePlanUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) if d.HasChangesExcept("tags", "tags_all") { operations := make([]*apigateway.PatchOperation, 0) @@ -462,7 +465,7 @@ func resourceUsagePlanUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceUsagePlanDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) // Removing existing api stages associated if apistages, ok := d.GetOk("api_stages"); ok { diff --git a/internal/service/apigateway/usage_plan_key.go 
b/internal/service/apigateway/usage_plan_key.go index 091033f2713..9e543b6e755 100644 --- a/internal/service/apigateway/usage_plan_key.go +++ b/internal/service/apigateway/usage_plan_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -69,7 +72,7 @@ func ResourceUsagePlanKey() *schema.Resource { func resourceUsagePlanKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.CreateUsagePlanKeyInput{ KeyId: aws.String(d.Get("key_id").(string)), @@ -90,7 +93,7 @@ func resourceUsagePlanKeyCreate(ctx context.Context, d *schema.ResourceData, met func resourceUsagePlanKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) upk, err := FindUsagePlanKeyByTwoPartKey(ctx, conn, d.Get("usage_plan_id").(string), d.Get("key_id").(string)) @@ -113,7 +116,7 @@ func resourceUsagePlanKeyRead(ctx context.Context, d *schema.ResourceData, meta func resourceUsagePlanKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) log.Printf("[DEBUG] Deleting API Gateway Usage Plan Key: %s", d.Id()) _, err := conn.DeleteUsagePlanKeyWithContext(ctx, &apigateway.DeleteUsagePlanKeyInput{ diff --git a/internal/service/apigateway/usage_plan_key_test.go b/internal/service/apigateway/usage_plan_key_test.go index bd8bfe89ac9..12d008a602d 100644 --- a/internal/service/apigateway/usage_plan_key_test.go +++ b/internal/service/apigateway/usage_plan_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -115,7 +118,7 @@ func testAccCheckUsagePlanKeyExists(ctx context.Context, n string, v *apigateway return fmt.Errorf("No API Gateway Usage Plan Key ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindUsagePlanKeyByTwoPartKey(ctx, conn, rs.Primary.Attributes["usage_plan_id"], rs.Primary.Attributes["key_id"]) @@ -131,7 +134,7 @@ func testAccCheckUsagePlanKeyExists(ctx context.Context, n string, v *apigateway func testAccCheckUsagePlanKeyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_usage_plan_key" { diff --git a/internal/service/apigateway/usage_plan_test.go b/internal/service/apigateway/usage_plan_test.go index 9d57dc0a81d..3a0e4b14ed4 100644 --- a/internal/service/apigateway/usage_plan_test.go +++ b/internal/service/apigateway/usage_plan_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( @@ -565,7 +568,7 @@ func testAccCheckUsagePlanExists(ctx context.Context, n string, v *apigateway.Us return fmt.Errorf("No API Gateway Usage Plan ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := tfapigateway.FindUsagePlanByID(ctx, conn, rs.Primary.ID) @@ -581,7 +584,7 @@ func testAccCheckUsagePlanExists(ctx context.Context, n string, v *apigateway.Us func testAccCheckUsagePlanDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_api_gateway_usage_plan" { diff --git a/internal/service/apigateway/validate.go b/internal/service/apigateway/validate.go index 8b61c3f9c98..aacd02b7ac5 100644 --- a/internal/service/apigateway/validate.go +++ b/internal/service/apigateway/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( diff --git a/internal/service/apigateway/validate_test.go b/internal/service/apigateway/validate_test.go index 5a06420a01a..9d91d2a2820 100644 --- a/internal/service/apigateway/validate_test.go +++ b/internal/service/apigateway/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( diff --git a/internal/service/apigateway/vpc_link.go b/internal/service/apigateway/vpc_link.go index 20c050fbc6a..c870a1ed301 100644 --- a/internal/service/apigateway/vpc_link.go +++ b/internal/service/apigateway/vpc_link.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -62,12 +65,12 @@ func ResourceVPCLink() *schema.Resource { func resourceVPCLinkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.CreateVpcLinkInput{ Name: aws.String(d.Get("name").(string)), TargetArns: flex.ExpandStringList(d.Get("target_arns").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { input.Description = aws.String(v.(string)) @@ -89,7 +92,7 @@ func resourceVPCLinkCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceVPCLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.GetVpcLinkInput{ VpcLinkId: aws.String(d.Id()), @@ -105,7 +108,7 @@ func resourceVPCLinkRead(ctx context.Context, d *schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "reading API Gateway VPC Link (%s): %s", d.Id(), err) } - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) arn := arn.ARN{ Partition: meta.(*conns.AWSClient).Partition, @@ -126,7 +129,7 @@ func resourceVPCLinkRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceVPCLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) operations := make([]*apigateway.PatchOperation, 0) @@ -165,7 +168,7 @@ func resourceVPCLinkUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceVPCLinkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) input := &apigateway.DeleteVpcLinkInput{ VpcLinkId: aws.String(d.Id()), diff --git a/internal/service/apigateway/vpc_link_data_source.go b/internal/service/apigateway/vpc_link_data_source.go index 7dbd07cd3eb..0acf4550251 100644 --- a/internal/service/apigateway/vpc_link_data_source.go +++ b/internal/service/apigateway/vpc_link_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway import ( @@ -53,7 +56,7 @@ func DataSourceVPCLink() *schema.Resource { func dataSourceVPCLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayConn() + conn := meta.(*conns.AWSClient).APIGatewayConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig params := &apigateway.GetVpcLinksInput{} diff --git a/internal/service/apigateway/vpc_link_data_source_test.go b/internal/service/apigateway/vpc_link_data_source_test.go index ec0d676ed74..0f7ee3e49cc 100644 --- a/internal/service/apigateway/vpc_link_data_source_test.go +++ b/internal/service/apigateway/vpc_link_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigateway_test import ( diff --git a/internal/service/apigateway/vpc_link_test.go b/internal/service/apigateway/vpc_link_test.go index 7f114e7b2bf..044a448c2b8 100644 --- a/internal/service/apigateway/vpc_link_test.go +++ b/internal/service/apigateway/vpc_link_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway_test
 import (
@@ -140,7 +143,7 @@ func TestAccAPIGatewayVPCLink_disappears(t *testing.T) {
 func testAccCheckVPCLinkDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_api_gateway_vpc_link" {
@@ -173,7 +176,7 @@ func testAccCheckVPCLinkExists(ctx context.Context, name string) resource.TestCh
             return fmt.Errorf("Not found: %s", name)
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx)
         input := &apigateway.GetVpcLinkInput{
             VpcLinkId: aws.String(rs.Primary.ID),
diff --git a/internal/service/apigateway/wait.go b/internal/service/apigateway/wait.go
index 8a7937795cb..01f06c848fa 100644
--- a/internal/service/apigateway/wait.go
+++ b/internal/service/apigateway/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigateway
 import (
diff --git a/internal/service/apigatewayv2/api.go b/internal/service/apigatewayv2/api.go
index 95a9ac70f38..74e69669511 100644
--- a/internal/service/apigatewayv2/api.go
+++ b/internal/service/apigatewayv2/api.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -164,13 +167,13 @@ func ResourceAPI() *schema.Resource {
 func resourceAPICreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     name := d.Get("name").(string)
     input := &apigatewayv2.CreateApiInput{
         Name: aws.String(name),
         ProtocolType: aws.String(d.Get("protocol_type").(string)),
-        Tags: GetTagsIn(ctx),
+        Tags: getTagsIn(ctx),
     }
     if v, ok := d.GetOk("api_key_selection_expression"); ok {
@@ -228,7 +231,7 @@ func resourceAPICreate(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceAPIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     output, err := FindAPIByID(ctx, conn, d.Id())
@@ -268,7 +271,7 @@ func resourceAPIRead(ctx context.Context, d *schema.ResourceData, meta interface
     d.Set("protocol_type", output.ProtocolType)
     d.Set("route_selection_expression", output.RouteSelectionExpression)
-    SetTagsOut(ctx, output.Tags)
+    setTagsOut(ctx, output.Tags)
     d.Set("version", output.Version)
@@ -277,7 +280,7 @@ func resourceAPIRead(ctx context.Context, d *schema.ResourceData, meta interface
 func resourceAPIUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     corsConfigurationDeleted := false
     if d.HasChange("cors_configuration") {
@@ -348,7 +351,7 @@ func resourceAPIUpdate(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceAPIDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 API: %s", d.Id())
     _, err := conn.DeleteApiWithContext(ctx, &apigatewayv2.DeleteApiInput{
@@ -367,7 +370,7 @@ func resourceAPIDelete(ctx context.Context, d *schema.ResourceData, meta interfa
 }
 func reimportOpenAPIDefinition(ctx context.Context, d *schema.ResourceData, meta interface{}) error {
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     if body, ok := d.GetOk("body"); ok {
         inputR := &apigatewayv2.ReimportApiInput{
@@ -412,7 +415,7 @@ func reimportOpenAPIDefinition(ctx context.Context, d *schema.ResourceData, meta
         }
     }
-    if err := UpdateTags(ctx, conn, d.Get("arn").(string), d.Get("tags_all"), KeyValueTags(ctx, GetTagsIn(ctx))); err != nil {
+    if err := updateTags(ctx, conn, d.Get("arn").(string), d.Get("tags_all"), KeyValueTags(ctx, getTagsIn(ctx))); err != nil {
         return fmt.Errorf("updating API Gateway v2 API (%s) tags: %w", d.Id(), err)
     }
diff --git a/internal/service/apigatewayv2/api_data_source.go b/internal/service/apigatewayv2/api_data_source.go
index fecc3b618f2..b14fe59819d 100644
--- a/internal/service/apigatewayv2/api_data_source.go
+++ b/internal/service/apigatewayv2/api_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -110,7 +113,7 @@ func DataSourceAPI() *schema.Resource {
 func dataSourceAPIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
     apiID := d.Get("api_id").(string)
diff --git a/internal/service/apigatewayv2/api_data_source_test.go b/internal/service/apigatewayv2/api_data_source_test.go
index afc81678da3..643cadfa3e3 100644
--- a/internal/service/apigatewayv2/api_data_source_test.go
+++ b/internal/service/apigatewayv2/api_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
diff --git a/internal/service/apigatewayv2/api_mapping.go b/internal/service/apigatewayv2/api_mapping.go
index ca49edcde08..f8072f177aa 100644
--- a/internal/service/apigatewayv2/api_mapping.go
+++ b/internal/service/apigatewayv2/api_mapping.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -54,7 +57,7 @@ func ResourceAPIMapping() *schema.Resource {
 func resourceAPIMappingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     input := &apigatewayv2.CreateApiMappingInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -79,7 +82,7 @@ func resourceAPIMappingCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceAPIMappingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     output, err := FindAPIMappingByTwoPartKey(ctx, conn, d.Id(), d.Get("domain_name").(string))
@@ -102,7 +105,7 @@ func resourceAPIMappingRead(ctx context.Context, d *schema.ResourceData, meta in
 func resourceAPIMappingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     input := &apigatewayv2.UpdateApiMappingInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -129,7 +132,7 @@ func resourceAPIMappingUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceAPIMappingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 API Mapping (%s)", d.Id())
     _, err := conn.DeleteApiMappingWithContext(ctx, &apigatewayv2.DeleteApiMappingInput{
diff --git a/internal/service/apigatewayv2/api_mapping_test.go b/internal/service/apigatewayv2/api_mapping_test.go
index 5cc0a4b884c..e545f089ce2 100644
--- a/internal/service/apigatewayv2/api_mapping_test.go
+++ b/internal/service/apigatewayv2/api_mapping_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -168,7 +171,7 @@ func testAccCheckAPIMappingCreateCertificate(ctx context.Context, t *testing.T,
     privateKey := acctest.TLSRSAPrivateKeyPEM(t, 2048)
     certificate := acctest.TLSRSAX509SelfSignedCertificatePEM(t, privateKey, fmt.Sprintf("%s.example.com", rName))
-    conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient()
+    conn := acctest.Provider.Meta().(*conns.AWSClient).ACMClient(ctx)
     output, err := conn.ImportCertificate(ctx, &acm.ImportCertificateInput{
         Certificate: []byte(certificate),
@@ -190,7 +193,7 @@ func testAccCheckAPIMappingCreateCertificate(ctx context.Context, t *testing.T,
 func testAccCheckAPIMappingDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_api_mapping" {
@@ -225,7 +228,7 @@ func testAccCheckAPIMappingExists(ctx context.Context, n string, vDomainName *st
             return fmt.Errorf("No API Gateway v2 API Mapping ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         domainName := rs.Primary.Attributes["domain_name"]
         output, err := tfapigatewayv2.FindAPIMappingByTwoPartKey(ctx, conn, rs.Primary.ID, domainName)
diff --git a/internal/service/apigatewayv2/api_test.go b/internal/service/apigatewayv2/api_test.go
index f7a2974c5f1..8e6068e55ce 100644
--- a/internal/service/apigatewayv2/api_test.go
+++ b/internal/service/apigatewayv2/api_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -551,7 +554,7 @@ func TestAccAPIGatewayV2API_OpenAPI_failOnWarnings(t *testing.T) {
 func testAccCheckAPIRoutes(ctx context.Context, v *apigatewayv2.GetApiOutput, routes []string) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         resp, err := conn.GetRoutesWithContext(ctx, &apigatewayv2.GetRoutesInput{
             ApiId: v.ApiId,
@@ -775,7 +778,7 @@ func TestAccAPIGatewayV2API_quickCreate(t *testing.T) {
 func testAccCheckAPIDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_api" {
@@ -810,7 +813,7 @@ func testAccCheckAPIExists(ctx context.Context, n string, v *apigatewayv2.GetApi
             return fmt.Errorf("No API Gateway v2 API ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         output, err := tfapigatewayv2.FindAPIByID(ctx, conn, rs.Primary.ID)
@@ -835,7 +838,7 @@ func testAccCheckAPIQuickCreateIntegration(ctx context.Context, n, expectedType,
             return fmt.Errorf("No API Gateway v2 API ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         resp, err := conn.GetIntegrationsWithContext(ctx, &apigatewayv2.GetIntegrationsInput{
             ApiId: aws.String(rs.Primary.ID),
@@ -870,7 +873,7 @@ func testAccCheckAPIQuickCreateRoute(ctx context.Context, n, expectedRouteKey st
             return fmt.Errorf("No API Gateway v2 API ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         resp, err := conn.GetRoutesWithContext(ctx, &apigatewayv2.GetRoutesInput{
             ApiId: aws.String(rs.Primary.ID),
@@ -902,7 +905,7 @@ func testAccCheckAPIQuickCreateStage(ctx context.Context, n, expectedName string
             return fmt.Errorf("No API Gateway v2 API ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         resp, err := conn.GetStagesWithContext(ctx, &apigatewayv2.GetStagesInput{
             ApiId: aws.String(rs.Primary.ID),
diff --git a/internal/service/apigatewayv2/apis_data_source.go b/internal/service/apigatewayv2/apis_data_source.go
index 7b3a754d18f..a0daa655fcf 100644
--- a/internal/service/apigatewayv2/apis_data_source.go
+++ b/internal/service/apigatewayv2/apis_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -40,7 +43,7 @@ func DataSourceAPIs() *schema.Resource {
 func dataSourceAPIsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
     tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig)
diff --git a/internal/service/apigatewayv2/apis_data_source_test.go b/internal/service/apigatewayv2/apis_data_source_test.go
index 7b760fec648..89da50998bd 100644
--- a/internal/service/apigatewayv2/apis_data_source_test.go
+++ b/internal/service/apigatewayv2/apis_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
diff --git a/internal/service/apigatewayv2/authorizer.go b/internal/service/apigatewayv2/authorizer.go
index 6bd8a872af2..3f4e9f2c762 100644
--- a/internal/service/apigatewayv2/authorizer.go
+++ b/internal/service/apigatewayv2/authorizer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -100,7 +103,7 @@ func ResourceAuthorizer() *schema.Resource {
 func resourceAuthorizerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     apiId := d.Get("api_id").(string)
     authorizerType := d.Get("authorizer_type").(string)
@@ -155,7 +158,7 @@ func resourceAuthorizerCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     resp, err := conn.GetAuthorizerWithContext(ctx, &apigatewayv2.GetAuthorizerInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -189,7 +192,7 @@ func resourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta in
 func resourceAuthorizerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.UpdateAuthorizerInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -234,7 +237,7 @@ func resourceAuthorizerUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceAuthorizerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 authorizer (%s)", d.Id())
     _, err := conn.DeleteAuthorizerWithContext(ctx, &apigatewayv2.DeleteAuthorizerInput{
diff --git a/internal/service/apigatewayv2/authorizer_test.go b/internal/service/apigatewayv2/authorizer_test.go
index b34ad2ba9d0..658826fb75b 100644
--- a/internal/service/apigatewayv2/authorizer_test.go
+++ b/internal/service/apigatewayv2/authorizer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -347,7 +350,7 @@ func TestAccAPIGatewayV2Authorizer_HTTPAPILambdaRequestAuthorizer_initialZeroCac
 func testAccCheckAuthorizerDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_authorizer" {
@@ -383,7 +386,7 @@ func testAccCheckAuthorizerExists(ctx context.Context, n string, vApiId *string,
             return fmt.Errorf("No API Gateway v2 authorizer ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         apiId := aws.String(rs.Primary.Attributes["api_id"])
         resp, err := conn.GetAuthorizerWithContext(ctx, &apigatewayv2.GetAuthorizerInput{
diff --git a/internal/service/apigatewayv2/deployment.go b/internal/service/apigatewayv2/deployment.go
index c134f24ca89..dbc782bf9fa 100644
--- a/internal/service/apigatewayv2/deployment.go
+++ b/internal/service/apigatewayv2/deployment.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -54,7 +57,7 @@ func ResourceDeployment() *schema.Resource {
 func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.CreateDeploymentInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -80,7 +83,7 @@ func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDeploymentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     outputRaw, _, err := StatusDeployment(ctx, conn, d.Get("api_id").(string), d.Id())()
     if tfawserr.ErrCodeEquals(err, apigatewayv2.ErrCodeNotFoundException) && !d.IsNewResource() {
@@ -101,7 +104,7 @@ func resourceDeploymentRead(ctx context.Context, d *schema.ResourceData, meta in
 func resourceDeploymentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.UpdateDeploymentInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -126,7 +129,7 @@ func resourceDeploymentUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDeploymentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 deployment (%s)", d.Id())
     _, err := conn.DeleteDeploymentWithContext(ctx, &apigatewayv2.DeleteDeploymentInput{
diff --git a/internal/service/apigatewayv2/deployment_test.go b/internal/service/apigatewayv2/deployment_test.go
index 89e5196f5bf..376231c6580 100644
--- a/internal/service/apigatewayv2/deployment_test.go
+++ b/internal/service/apigatewayv2/deployment_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -135,7 +138,7 @@ func TestAccAPIGatewayV2Deployment_triggers(t *testing.T) {
 func testAccCheckDeploymentDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_deployment" {
@@ -162,7 +165,7 @@ func testAccCheckDeploymentDestroy(ctx context.Context) resource.TestCheckFunc {
 func testAccCheckDeploymentDisappears(ctx context.Context, apiId *string, v *apigatewayv2.GetDeploymentOutput) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         _, err := conn.DeleteDeploymentWithContext(ctx, &apigatewayv2.DeleteDeploymentInput{
             ApiId: apiId,
@@ -184,7 +187,7 @@ func testAccCheckDeploymentExists(ctx context.Context, n string, vApiId *string,
             return fmt.Errorf("No API Gateway v2 deployment ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         apiId := aws.String(rs.Primary.Attributes["api_id"])
         resp, err := conn.GetDeploymentWithContext(ctx, &apigatewayv2.GetDeploymentInput{
diff --git a/internal/service/apigatewayv2/domain_name.go b/internal/service/apigatewayv2/domain_name.go
index fb9d004250d..e9a9f5cfb71 100644
--- a/internal/service/apigatewayv2/domain_name.go
+++ b/internal/service/apigatewayv2/domain_name.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -126,14 +129,14 @@ func ResourceDomainName() *schema.Resource {
 func resourceDomainNameCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     domainName := d.Get("domain_name").(string)
     input := &apigatewayv2.CreateDomainNameInput{
         DomainName: aws.String(domainName),
         DomainNameConfigurations: expandDomainNameConfigurations(d.Get("domain_name_configuration").([]interface{})),
         MutualTlsAuthentication: expandMutualTLSAuthentication(d.Get("mutual_tls_authentication").([]interface{})),
-        Tags: GetTagsIn(ctx),
+        Tags: getTagsIn(ctx),
     }
     output, err := conn.CreateDomainNameWithContext(ctx, input)
@@ -153,7 +156,7 @@ func resourceDomainNameCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     output, err := FindDomainName(ctx, conn, d.Id())
@@ -183,14 +186,14 @@ func resourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta in
         return sdkdiag.AppendErrorf(diags, "setting mutual_tls_authentication: %s", err)
     }
-    SetTagsOut(ctx, output.Tags)
+    setTagsOut(ctx, output.Tags)
     return diags
 }
 func resourceDomainNameUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     if d.HasChanges("domain_name_configuration", "mutual_tls_authentication") {
         input := &apigatewayv2.UpdateDomainNameInput{
@@ -235,7 +238,7 @@ func resourceDomainNameUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceDomainNameDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 Domain Name: %s", d.Id())
     _, err := conn.DeleteDomainNameWithContext(ctx, &apigatewayv2.DeleteDomainNameInput{
diff --git a/internal/service/apigatewayv2/domain_name_test.go b/internal/service/apigatewayv2/domain_name_test.go
index fd5b6fbe491..de3fbd9ff13 100644
--- a/internal/service/apigatewayv2/domain_name_test.go
+++ b/internal/service/apigatewayv2/domain_name_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -401,7 +404,7 @@ func TestAccAPIGatewayV2DomainName_MutualTLSAuthentication_ownership(t *testing.
 func testAccCheckDomainNameDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_domain_name" {
@@ -436,7 +439,7 @@ func testAccCheckDomainNameExists(ctx context.Context, n string, v *apigatewayv2
             return fmt.Errorf("No API Gateway v2 Domain Name ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         output, err := tfapigatewayv2.FindDomainName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/apigatewayv2/errorcheck_test.go b/internal/service/apigatewayv2/errorcheck_test.go
index a161bc8fe6a..28633b81b17 100644
--- a/internal/service/apigatewayv2/errorcheck_test.go
+++ b/internal/service/apigatewayv2/errorcheck_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
diff --git a/internal/service/apigatewayv2/export_data_source.go b/internal/service/apigatewayv2/export_data_source.go
index 35a81f02afc..c9139ab99a3 100644
--- a/internal/service/apigatewayv2/export_data_source.go
+++ b/internal/service/apigatewayv2/export_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -56,7 +59,7 @@ func DataSourceExport() *schema.Resource {
 func dataSourceExportRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     apiId := d.Get("api_id").(string)
diff --git a/internal/service/apigatewayv2/export_data_source_test.go b/internal/service/apigatewayv2/export_data_source_test.go
index c888376d1e1..0ea180d2515 100644
--- a/internal/service/apigatewayv2/export_data_source_test.go
+++ b/internal/service/apigatewayv2/export_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
diff --git a/internal/service/apigatewayv2/flex.go b/internal/service/apigatewayv2/flex.go
index c0c0a9fb537..3ce515047e8 100644
--- a/internal/service/apigatewayv2/flex.go
+++ b/internal/service/apigatewayv2/flex.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
diff --git a/internal/service/apigatewayv2/forge.go b/internal/service/apigatewayv2/forge.go
index 7c0d5273402..5be4c1a6a35 100644
--- a/internal/service/apigatewayv2/forge.go
+++ b/internal/service/apigatewayv2/forge.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
diff --git a/internal/service/apigatewayv2/generate.go b/internal/service/apigatewayv2/generate.go
index 6a1bad6e059..bd1186bc0ec 100644
--- a/internal/service/apigatewayv2/generate.go
+++ b/internal/service/apigatewayv2/generate.go
@@ -1,5 +1,9 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/listpages/main.go -ListOps=GetApis,GetApiMappings,GetDomainNames,GetVpcLinks
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=GetTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 package apigatewayv2
diff --git a/internal/service/apigatewayv2/integration.go b/internal/service/apigatewayv2/integration.go
index 5c36b6d20b9..67a91d5b2d9 100644
--- a/internal/service/apigatewayv2/integration.go
+++ b/internal/service/apigatewayv2/integration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -175,7 +178,7 @@ func ResourceIntegration() *schema.Resource {
 func resourceIntegrationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.CreateIntegrationInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -243,7 +246,7 @@ func resourceIntegrationCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceIntegrationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     resp, err := conn.GetIntegrationWithContext(ctx, &apigatewayv2.GetIntegrationInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -293,7 +296,7 @@ func resourceIntegrationRead(ctx context.Context, d *schema.ResourceData, meta i
 func resourceIntegrationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.UpdateIntegrationInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -400,7 +403,7 @@ func resourceIntegrationUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceIntegrationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 integration (%s)", d.Id())
     _, err := conn.DeleteIntegrationWithContext(ctx, &apigatewayv2.DeleteIntegrationInput{
@@ -426,7 +429,7 @@ func resourceIntegrationImport(ctx context.Context, d *schema.ResourceData, meta
     apiId := parts[0]
     integrationId := parts[1]
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     resp, err := conn.GetIntegrationWithContext(ctx, &apigatewayv2.GetIntegrationInput{
         ApiId: aws.String(apiId),
diff --git a/internal/service/apigatewayv2/integration_response.go b/internal/service/apigatewayv2/integration_response.go
index d112709ec86..b31c0f42eca 100644
--- a/internal/service/apigatewayv2/integration_response.go
+++ b/internal/service/apigatewayv2/integration_response.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -66,7 +69,7 @@ func ResourceIntegrationResponse() *schema.Resource {
 func resourceIntegrationResponseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.CreateIntegrationResponseInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -96,7 +99,7 @@ func resourceIntegrationResponseCreate(ctx context.Context, d *schema.ResourceDa
 func resourceIntegrationResponseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     resp, err := conn.GetIntegrationResponseWithContext(ctx, &apigatewayv2.GetIntegrationResponseInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -125,7 +128,7 @@ func resourceIntegrationResponseRead(ctx context.Context, d *schema.ResourceData
 func resourceIntegrationResponseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.UpdateIntegrationResponseInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -156,7 +159,7 @@ func resourceIntegrationResponseUpdate(ctx context.Context, d *schema.ResourceDa
 func resourceIntegrationResponseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     log.Printf("[DEBUG] Deleting API Gateway v2 integration response (%s)", d.Id())
     _, err := conn.DeleteIntegrationResponseWithContext(ctx, &apigatewayv2.DeleteIntegrationResponseInput{
diff --git a/internal/service/apigatewayv2/integration_response_test.go b/internal/service/apigatewayv2/integration_response_test.go
index 92e58ce30b0..b7770562192 100644
--- a/internal/service/apigatewayv2/integration_response_test.go
+++ b/internal/service/apigatewayv2/integration_response_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -126,7 +129,7 @@ func TestAccAPIGatewayV2IntegrationResponse_allAttributes(t *testing.T) {
 func testAccCheckIntegrationResponseDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_integration_response" {
@@ -154,7 +157,7 @@ func testAccCheckIntegrationResponseDestroy(ctx context.Context) resource.TestCh
 func testAccCheckIntegrationResponseDisappears(ctx context.Context, apiId, integrationId *string, v *apigatewayv2.GetIntegrationResponseOutput) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         _, err := conn.DeleteIntegrationResponseWithContext(ctx, &apigatewayv2.DeleteIntegrationResponseInput{
             ApiId: apiId,
@@ -177,7 +180,7 @@ func testAccCheckIntegrationResponseExists(ctx context.Context, n string, vApiId
             return fmt.Errorf("No API Gateway v2 integration response ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         apiId := aws.String(rs.Primary.Attributes["api_id"])
         integrationId := aws.String(rs.Primary.Attributes["integration_id"])
diff --git a/internal/service/apigatewayv2/integration_test.go b/internal/service/apigatewayv2/integration_test.go
index ddd441850b5..b3ea3526d8e 100644
--- a/internal/service/apigatewayv2/integration_test.go
+++ b/internal/service/apigatewayv2/integration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2_test
 import (
@@ -606,7 +609,7 @@ func TestAccAPIGatewayV2Integration_serviceIntegration(t *testing.T) {
 func testAccCheckIntegrationDestroy(ctx context.Context) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         for _, rs := range s.RootModule().Resources {
             if rs.Type != "aws_apigatewayv2_integration" {
@@ -633,7 +636,7 @@ func testAccCheckIntegrationDestroy(ctx context.Context) resource.TestCheckFunc
 func testAccCheckIntegrationDisappears(ctx context.Context, apiId *string, v *apigatewayv2.GetIntegrationOutput) resource.TestCheckFunc {
     return func(s *terraform.State) error {
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         _, err := conn.DeleteIntegrationWithContext(ctx, &apigatewayv2.DeleteIntegrationInput{
             ApiId: apiId,
@@ -655,7 +658,7 @@ func testAccCheckIntegrationExists(ctx context.Context, n string, vApiId *string
             return fmt.Errorf("No API Gateway v2 integration ID is set")
         }
-        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn()
+        conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx)
         apiId := aws.String(rs.Primary.Attributes["api_id"])
         resp, err := conn.GetIntegrationWithContext(ctx, &apigatewayv2.GetIntegrationInput{
diff --git a/internal/service/apigatewayv2/model.go b/internal/service/apigatewayv2/model.go
index e60cfca0bc1..7d901aef716 100644
--- a/internal/service/apigatewayv2/model.go
+++ b/internal/service/apigatewayv2/model.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apigatewayv2
 import (
@@ -73,7 +76,7 @@ func ResourceModel() *schema.Resource {
 func resourceModelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.CreateModelInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -98,7 +101,7 @@ func resourceModelCreate(ctx context.Context, d *schema.ResourceData, meta inter
 func resourceModelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     resp, err := conn.GetModelWithContext(ctx, &apigatewayv2.GetModelInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -123,7 +126,7 @@ func resourceModelRead(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceModelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
     var diags diag.Diagnostics
-    conn := meta.(*conns.AWSClient).APIGatewayV2Conn()
+    conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx)
     req := &apigatewayv2.UpdateModelInput{
         ApiId: aws.String(d.Get("api_id").(string)),
@@ -153,7 +156,7 @@ func resourceModelUpdate(ctx context.Context, d
*schema.ResourceData, meta inter func resourceModelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) log.Printf("[DEBUG] Deleting API Gateway v2 model (%s)", d.Id()) _, err := conn.DeleteModelWithContext(ctx, &apigatewayv2.DeleteModelInput{ diff --git a/internal/service/apigatewayv2/model_test.go b/internal/service/apigatewayv2/model_test.go index a9f3aca972f..02f8747a7ba 100644 --- a/internal/service/apigatewayv2/model_test.go +++ b/internal/service/apigatewayv2/model_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2_test import ( @@ -183,7 +186,7 @@ func TestAccAPIGatewayV2Model_allAttributes(t *testing.T) { func testAccCheckModelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_apigatewayv2_model" { @@ -210,7 +213,7 @@ func testAccCheckModelDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckModelDisappears(ctx context.Context, apiId *string, v *apigatewayv2.GetModelOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) _, err := conn.DeleteModelWithContext(ctx, &apigatewayv2.DeleteModelInput{ ApiId: apiId, @@ -232,7 +235,7 @@ func testAccCheckModelExists(ctx context.Context, n string, vApiId *string, v *a return fmt.Errorf("No API Gateway v2 model ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) apiId := aws.String(rs.Primary.Attributes["api_id"]) resp, err := conn.GetModelWithContext(ctx, &apigatewayv2.GetModelInput{ diff --git a/internal/service/apigatewayv2/route.go b/internal/service/apigatewayv2/route.go index 3c82cd4c420..a0c0ed3c5d4 100644 --- a/internal/service/apigatewayv2/route.go +++ b/internal/service/apigatewayv2/route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( @@ -104,7 +107,7 @@ func ResourceRoute() *schema.Resource { func resourceRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) req := &apigatewayv2.CreateRouteInput{ ApiId: aws.String(d.Get("api_id").(string)), @@ -150,7 +153,7 @@ func resourceRouteCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) resp, err := conn.GetRouteWithContext(ctx, &apigatewayv2.GetRouteInput{ ApiId: aws.String(d.Get("api_id").(string)), @@ -190,7 +193,7 @@ func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceRouteUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) var requestParameters map[string]*apigatewayv2.ParameterConstraints @@ -279,7 +282,7 @@ func resourceRouteUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var 
diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) log.Printf("[DEBUG] Deleting API Gateway v2 route (%s)", d.Id()) _, err := conn.DeleteRouteWithContext(ctx, &apigatewayv2.DeleteRouteInput{ @@ -307,7 +310,7 @@ func resourceRouteImport(ctx context.Context, d *schema.ResourceData, meta inter apiId := parts[0] routeId := parts[1] - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) resp, err := conn.GetRouteWithContext(ctx, &apigatewayv2.GetRouteInput{ ApiId: aws.String(apiId), diff --git a/internal/service/apigatewayv2/route_response.go b/internal/service/apigatewayv2/route_response.go index 0a1a44d8fec..5873231be7d 100644 --- a/internal/service/apigatewayv2/route_response.go +++ b/internal/service/apigatewayv2/route_response.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( @@ -57,7 +60,7 @@ func ResourceRouteResponse() *schema.Resource { func resourceRouteResponseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) req := &apigatewayv2.CreateRouteResponseInput{ ApiId: aws.String(d.Get("api_id").(string)), @@ -84,7 +87,7 @@ func resourceRouteResponseCreate(ctx context.Context, d *schema.ResourceData, me func resourceRouteResponseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) resp, err := conn.GetRouteResponseWithContext(ctx, &apigatewayv2.GetRouteResponseInput{ ApiId: aws.String(d.Get("api_id").(string)), @@ -111,7 +114,7 @@ func resourceRouteResponseRead(ctx context.Context, d *schema.ResourceData, meta func 
resourceRouteResponseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) req := &apigatewayv2.UpdateRouteResponseInput{ ApiId: aws.String(d.Get("api_id").(string)), @@ -139,7 +142,7 @@ func resourceRouteResponseUpdate(ctx context.Context, d *schema.ResourceData, me func resourceRouteResponseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) log.Printf("[DEBUG] Deleting API Gateway v2 route response (%s)", d.Id()) _, err := conn.DeleteRouteResponseWithContext(ctx, &apigatewayv2.DeleteRouteResponseInput{ diff --git a/internal/service/apigatewayv2/route_response_test.go b/internal/service/apigatewayv2/route_response_test.go index c65c191de56..ad50257d018 100644 --- a/internal/service/apigatewayv2/route_response_test.go +++ b/internal/service/apigatewayv2/route_response_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2_test import ( @@ -114,7 +117,7 @@ func TestAccAPIGatewayV2RouteResponse_model(t *testing.T) { func testAccCheckRouteResponseDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_apigatewayv2_route_response" { @@ -142,7 +145,7 @@ func testAccCheckRouteResponseDestroy(ctx context.Context) resource.TestCheckFun func testAccCheckRouteResponseDisappears(ctx context.Context, apiId, routeId *string, v *apigatewayv2.GetRouteResponseOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) _, err := conn.DeleteRouteResponseWithContext(ctx, &apigatewayv2.DeleteRouteResponseInput{ ApiId: apiId, @@ -165,7 +168,7 @@ func testAccCheckRouteResponseExists(ctx context.Context, n string, vApiId, vRou return fmt.Errorf("No API Gateway v2 route response ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) apiId := aws.String(rs.Primary.Attributes["api_id"]) routeId := aws.String(rs.Primary.Attributes["route_id"]) diff --git a/internal/service/apigatewayv2/route_test.go b/internal/service/apigatewayv2/route_test.go index abe8bbcfa0b..35939fbe211 100644 --- a/internal/service/apigatewayv2/route_test.go +++ b/internal/service/apigatewayv2/route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2_test import ( @@ -490,7 +493,7 @@ func TestAccAPIGatewayV2Route_updateRouteKey(t *testing.T) { func testAccCheckRouteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_apigatewayv2_route" { @@ -526,7 +529,7 @@ func testAccCheckRouteExists(ctx context.Context, n string, vApiId *string, v *a return fmt.Errorf("No API Gateway v2 route ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) apiId := aws.String(rs.Primary.Attributes["api_id"]) resp, err := conn.GetRouteWithContext(ctx, &apigatewayv2.GetRouteInput{ diff --git a/internal/service/apigatewayv2/service_package.go b/internal/service/apigatewayv2/service_package.go new file mode 100644 index 00000000000..42d7b0377a0 --- /dev/null +++ b/internal/service/apigatewayv2/service_package.go @@ -0,0 +1,27 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package apigatewayv2 + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + apigatewayv2_sdkv1 "github.com/aws/aws-sdk-go/service/apigatewayv2" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *apigatewayv2_sdkv1.ApiGatewayV2) (*apigatewayv2_sdkv1.ApiGatewayV2, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + // Many operations can return an error such as: + // ConflictException: Unable to complete operation due to concurrent modification. 
Please try again later. + // Handle them all globally for the service client. + if tfawserr.ErrMessageContains(r.Error, apigatewayv2_sdkv1.ErrCodeConflictException, "try again later") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/apigatewayv2/service_package_gen.go b/internal/service/apigatewayv2/service_package_gen.go index 972dbda4244..b5b05c23a73 100644 --- a/internal/service/apigatewayv2/service_package_gen.go +++ b/internal/service/apigatewayv2/service_package_gen.go @@ -5,6 +5,10 @@ package apigatewayv2 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + apigatewayv2_sdkv1 "github.com/aws/aws-sdk-go/service/apigatewayv2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -109,4 +113,13 @@ func (p *servicePackage) ServicePackageName() string { return names.APIGatewayV2 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*apigatewayv2_sdkv1.ApiGatewayV2, error) { + sess := config["session"].(*session_sdkv1.Session) + + return apigatewayv2_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/apigatewayv2/stage.go b/internal/service/apigatewayv2/stage.go index 6f27da09921..cde6ab467d6 100644 --- a/internal/service/apigatewayv2/stage.go +++ b/internal/service/apigatewayv2/stage.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( @@ -194,7 +197,7 @@ func ResourceStage() *schema.Resource { func resourceStageCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) apiId := d.Get("api_id").(string) @@ -211,7 +214,7 @@ func resourceStageCreate(ctx context.Context, d *schema.ResourceData, meta inter ApiId: aws.String(apiId), AutoDeploy: aws.Bool(d.Get("auto_deploy").(bool)), StageName: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("access_log_settings"); ok { req.AccessLogSettings = expandAccessLogSettings(v.([]interface{})) @@ -248,7 +251,7 @@ func resourceStageCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceStageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) apiId := d.Get("api_id").(string) resp, err := conn.GetStageWithContext(ctx, &apigatewayv2.GetStageInput{ @@ -303,7 +306,7 @@ func resourceStageRead(ctx context.Context, d *schema.ResourceData, meta interfa return sdkdiag.AppendErrorf(diags, "setting stage_variables: %s", err) } - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) apiOutput, err := conn.GetApiWithContext(ctx, &apigatewayv2.GetApiInput{ ApiId: aws.String(apiId), @@ -328,7 +331,7 @@ func resourceStageRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceStageUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) if d.HasChanges("access_log_settings", "auto_deploy", "client_certificate_id", 
"default_route_settings", "deployment_id", "description", @@ -416,7 +419,7 @@ func resourceStageUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceStageDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) log.Printf("[DEBUG] Deleting API Gateway v2 stage (%s)", d.Id()) _, err := conn.DeleteStageWithContext(ctx, &apigatewayv2.DeleteStageInput{ @@ -442,7 +445,7 @@ func resourceStageImport(ctx context.Context, d *schema.ResourceData, meta inter apiId := parts[0] stageName := parts[1] - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) resp, err := conn.GetStageWithContext(ctx, &apigatewayv2.GetStageInput{ ApiId: aws.String(apiId), diff --git a/internal/service/apigatewayv2/stage_test.go b/internal/service/apigatewayv2/stage_test.go index 8c82469784c..71d13a781ed 100644 --- a/internal/service/apigatewayv2/stage_test.go +++ b/internal/service/apigatewayv2/stage_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2_test import ( @@ -1096,7 +1099,7 @@ func TestAccAPIGatewayV2Stage_tags(t *testing.T) { func testAccCheckStageDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_apigatewayv2_stage" { @@ -1132,7 +1135,7 @@ func testAccCheckStageExists(ctx context.Context, n string, vApiId *string, v *a return fmt.Errorf("No API Gateway v2 stage ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) apiId := aws.String(rs.Primary.Attributes["api_id"]) resp, err := conn.GetStageWithContext(ctx, &apigatewayv2.GetStageInput{ @@ -1537,7 +1540,7 @@ resource "aws_apigatewayv2_stage" "test" { // testAccPreCheckAPIGatewayAccountCloudWatchRoleARN checks whether a CloudWatch role ARN has been configured in the current AWS region. func testAccPreCheckAPIGatewayAccountCloudWatchRoleARN(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayConn(ctx) output, err := conn.GetAccountWithContext(ctx, &apigateway.GetAccountInput{}) diff --git a/internal/service/apigatewayv2/status.go b/internal/service/apigatewayv2/status.go index 624f48c1e5b..77b00eb809c 100644 --- a/internal/service/apigatewayv2/status.go +++ b/internal/service/apigatewayv2/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( diff --git a/internal/service/apigatewayv2/sweep.go b/internal/service/apigatewayv2/sweep.go index 9592fccdfc4..fa2ec38ad88 100644 --- a/internal/service/apigatewayv2/sweep.go +++ b/internal/service/apigatewayv2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/apigatewayv2" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -45,11 +47,11 @@ func init() { func sweepAPIs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).APIGatewayV2Conn() + conn := client.APIGatewayV2Conn(ctx) input := &apigatewayv2.GetApisInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -78,7 +80,7 @@ func sweepAPIs(region string) error { return fmt.Errorf("error listing API Gateway v2 APIs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping API Gateway v2 APIs (%s): %w", region, err) @@ -89,11 +91,11 @@ func sweepAPIs(region string) error { func sweepAPIMappings(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).APIGatewayV2Conn() + conn := client.APIGatewayV2Conn(ctx) var sweeperErrs *multierror.Error 
sweepResources := make([]sweep.Sweepable, 0) @@ -147,7 +149,7 @@ func sweepAPIMappings(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing API Gateway v2 Domain Names (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping API Gateway v2 API Mappings (%s): %w", region, err)) @@ -158,11 +160,11 @@ func sweepAPIMappings(region string) error { func sweepDomainNames(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).APIGatewayV2Conn() + conn := client.APIGatewayV2Conn(ctx) input := &apigatewayv2.GetDomainNamesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -191,7 +193,7 @@ func sweepDomainNames(region string) error { return fmt.Errorf("error listing API Gateway v2 Domain Names (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping API Gateway v2 Domain Names (%s): %w", region, err) @@ -202,11 +204,11 @@ func sweepDomainNames(region string) error { func sweepVPCLinks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).APIGatewayV2Conn() + conn := client.APIGatewayV2Conn(ctx) input := &apigatewayv2.GetVpcLinksInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -235,7 +237,7 @@ func sweepVPCLinks(region string) error { return fmt.Errorf("error listing API Gateway v2 VPC 
Links (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping API Gateway v2 VPC Links (%s): %w", region, err) diff --git a/internal/service/apigatewayv2/tags_gen.go b/internal/service/apigatewayv2/tags_gen.go index a499f453e9c..ca7a3b92aeb 100644 --- a/internal/service/apigatewayv2/tags_gen.go +++ b/internal/service/apigatewayv2/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists apigatewayv2 service tags. +// listTags lists apigatewayv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn apigatewayv2iface.ApiGatewayV2API, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn apigatewayv2iface.ApiGatewayV2API, identifier string) (tftags.KeyValueTags, error) { input := &apigatewayv2.GetTagsInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn apigatewayv2iface.ApiGatewayV2API, ident // ListTags lists apigatewayv2 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).APIGatewayV2Conn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).APIGatewayV2Conn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from apigatewayv2 service tags. +// KeyValueTags creates tftags.KeyValueTags from apigatewayv2 service tags. 
func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns apigatewayv2 service tags from Context. +// getTagsIn returns apigatewayv2 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets apigatewayv2 service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets apigatewayv2 service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates apigatewayv2 service tags. +// updateTags updates apigatewayv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn apigatewayv2iface.ApiGatewayV2API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn apigatewayv2iface.ApiGatewayV2API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn apigatewayv2iface.ApiGatewayV2API, ide // UpdateTags updates apigatewayv2 service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).APIGatewayV2Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).APIGatewayV2Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/apigatewayv2/validate.go b/internal/service/apigatewayv2/validate.go index 8de2253e750..3fb07674a5d 100644 --- a/internal/service/apigatewayv2/validate.go +++ b/internal/service/apigatewayv2/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( diff --git a/internal/service/apigatewayv2/vpc_link.go b/internal/service/apigatewayv2/vpc_link.go index 0d9c5f74f98..7da56df9f70 100644 --- a/internal/service/apigatewayv2/vpc_link.go +++ b/internal/service/apigatewayv2/vpc_link.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( @@ -64,13 +67,13 @@ func ResourceVPCLink() *schema.Resource { func resourceVPCLinkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) input := &apigatewayv2.CreateVpcLinkInput{ Name: aws.String(d.Get("name").(string)), SecurityGroupIds: flex.ExpandStringSet(d.Get("security_group_ids").(*schema.Set)), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating API Gateway v2 VPC Link: %s", input) @@ -90,7 +93,7 @@ func resourceVPCLinkCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceVPCLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := 
meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) outputRaw, _, err := StatusVPCLink(ctx, conn, d.Id())() if tfawserr.ErrCodeEquals(err, apigatewayv2.ErrCodeNotFoundException) && !d.IsNewResource() { @@ -118,14 +121,14 @@ func resourceVPCLinkRead(ctx context.Context, d *schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "setting subnet_ids: %s", err) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceVPCLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) if d.HasChange("name") { req := &apigatewayv2.UpdateVpcLinkInput{ @@ -145,7 +148,7 @@ func resourceVPCLinkUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceVPCLinkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).APIGatewayV2Conn() + conn := meta.(*conns.AWSClient).APIGatewayV2Conn(ctx) log.Printf("[DEBUG] Deleting API Gateway v2 VPC Link: %s", d.Id()) _, err := conn.DeleteVpcLinkWithContext(ctx, &apigatewayv2.DeleteVpcLinkInput{ diff --git a/internal/service/apigatewayv2/vpc_link_test.go b/internal/service/apigatewayv2/vpc_link_test.go index e53b0351caf..3457dba0f80 100644 --- a/internal/service/apigatewayv2/vpc_link_test.go +++ b/internal/service/apigatewayv2/vpc_link_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2_test import ( @@ -129,7 +132,7 @@ func TestAccAPIGatewayV2VPCLink_tags(t *testing.T) { func testAccCheckVPCLinkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_apigatewayv2_vpc_link" { @@ -155,7 +158,7 @@ func testAccCheckVPCLinkDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckVPCLinkDisappears(ctx context.Context, v *apigatewayv2.GetVpcLinkOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) if _, err := conn.DeleteVpcLinkWithContext(ctx, &apigatewayv2.DeleteVpcLinkInput{ VpcLinkId: v.VpcLinkId, @@ -186,7 +189,7 @@ func testAccCheckVPCLinkExists(ctx context.Context, n string, v *apigatewayv2.Ge return fmt.Errorf("No API Gateway v2 VPC Link ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).APIGatewayV2Conn(ctx) resp, err := conn.GetVpcLinkWithContext(ctx, &apigatewayv2.GetVpcLinkInput{ VpcLinkId: aws.String(rs.Primary.ID), diff --git a/internal/service/apigatewayv2/wait.go b/internal/service/apigatewayv2/wait.go index 247cb328238..b1f54062897 100644 --- a/internal/service/apigatewayv2/wait.go +++ b/internal/service/apigatewayv2/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apigatewayv2 import ( @@ -55,7 +58,7 @@ func WaitVPCLinkAvailable(ctx context.Context, conn *apigatewayv2.ApiGatewayV2, return nil, err } -// WaitVPCLinkAvailable waits for a VPC Link to return Deleted +// WaitVPCLinkDeleted waits for a VPC Link to return Deleted func WaitVPCLinkDeleted(ctx context.Context, conn *apigatewayv2.ApiGatewayV2, vpcLinkId string) (*apigatewayv2.GetVpcLinkOutput, error) { stateConf := &retry.StateChangeConf{ Pending: []string{apigatewayv2.VpcLinkStatusDeleting}, diff --git a/internal/service/appautoscaling/consts.go b/internal/service/appautoscaling/consts.go index 8d0acf2b8a4..e4ddfad0fa8 100644 --- a/internal/service/appautoscaling/consts.go +++ b/internal/service/appautoscaling/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling import ( diff --git a/internal/service/appautoscaling/diff.go b/internal/service/appautoscaling/diff.go index 5a23ca40695..b0060d5bf52 100644 --- a/internal/service/appautoscaling/diff.go +++ b/internal/service/appautoscaling/diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling import ( diff --git a/internal/service/appautoscaling/find.go b/internal/service/appautoscaling/find.go index cf78be347c7..bbc317915c6 100644 --- a/internal/service/appautoscaling/find.go +++ b/internal/service/appautoscaling/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling import ( diff --git a/internal/service/appautoscaling/generate.go b/internal/service/appautoscaling/generate.go index 3c9c9b1651b..e74ba40f1ab 100644 --- a/internal/service/appautoscaling/generate.go +++ b/internal/service/appautoscaling/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsMap -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package appautoscaling diff --git a/internal/service/appautoscaling/policy.go b/internal/service/appautoscaling/policy.go index 438a8fb6ecb..66650215ea1 100644 --- a/internal/service/appautoscaling/policy.go +++ b/internal/service/appautoscaling/policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling import ( @@ -301,7 +304,7 @@ func ResourcePolicy() *schema.Resource { func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) params := getPutScalingPolicyInput(d) @@ -398,7 +401,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) params := getPutScalingPolicyInput(d) @@ -427,7 +430,7 @@ func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) p, err := getPolicy(ctx, d, meta) if err != nil { return create.DiagError(names.AppAutoScaling, create.ErrActionDeleting, ResNamePolicy, d.Id(), err) @@ -741,7 +744,7 @@ func getPutScalingPolicyInput(d 
*schema.ResourceData) applicationautoscaling.Put } func getPolicy(ctx context.Context, d *schema.ResourceData, meta interface{}) (*applicationautoscaling.ScalingPolicy, error) { - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) params := applicationautoscaling.DescribeScalingPoliciesInput{ PolicyNames: []*string{aws.String(d.Get("name").(string))}, diff --git a/internal/service/appautoscaling/policy_test.go b/internal/service/appautoscaling/policy_test.go index c954950e175..06563cf43d1 100644 --- a/internal/service/appautoscaling/policy_test.go +++ b/internal/service/appautoscaling/policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling_test import ( @@ -548,7 +551,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string, policy *application return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn(ctx) params := &applicationautoscaling.DescribeScalingPoliciesInput{ PolicyNames: []*string{aws.String(rs.Primary.ID)}, ResourceId: aws.String(rs.Primary.Attributes["resource_id"]), @@ -571,7 +574,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string, policy *application func testAccCheckPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { params := applicationautoscaling.DescribeScalingPoliciesInput{ diff --git a/internal/service/appautoscaling/scheduled_action.go b/internal/service/appautoscaling/scheduled_action.go index e5966886938..73bfed9f874 100644 --- a/internal/service/appautoscaling/scheduled_action.go +++ 
b/internal/service/appautoscaling/scheduled_action.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling import ( @@ -15,8 +18,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" ) // @SDKResource("aws_appautoscaling_scheduled_action") @@ -108,7 +111,7 @@ func ResourceScheduledAction() *schema.Resource { func resourceScheduledActionPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) input := &applicationautoscaling.PutScheduledActionInput{ ScheduledActionName: aws.String(d.Get("name").(string)), @@ -203,7 +206,7 @@ func scheduledActionPopulateInputForUpdate(input *applicationautoscaling.PutSche func resourceScheduledActionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) scheduledAction, err := FindScheduledAction(ctx, conn, d.Get("name").(string), d.Get("service_namespace").(string), d.Get("resource_id").(string)) if tfresource.NotFound(err) && !d.IsNewResource() { @@ -234,7 +237,7 @@ func resourceScheduledActionRead(ctx context.Context, d *schema.ResourceData, me func resourceScheduledActionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := 
meta.(*conns.AWSClient).AppAutoScalingConn(ctx) input := &applicationautoscaling.DeleteScheduledActionInput{ ScheduledActionName: aws.String(d.Get("name").(string)), diff --git a/internal/service/appautoscaling/scheduled_action_test.go b/internal/service/appautoscaling/scheduled_action_test.go index 12622259cd8..ed35cd5c4bf 100644 --- a/internal/service/appautoscaling/scheduled_action_test.go +++ b/internal/service/appautoscaling/scheduled_action_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling_test import ( @@ -566,7 +569,7 @@ func TestAccAppAutoScalingScheduledAction_maxCapacity(t *testing.T) { func testAccCheckScheduledActionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appautoscaling_scheduled_action" { @@ -601,7 +604,7 @@ func testAccCheckScheduledActionExists(ctx context.Context, name string, obj *ap return fmt.Errorf("Application Autoscaling scheduled action (%s) ID not set", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn(ctx) sa, err := tfappautoscaling.FindScheduledAction(ctx, conn, rs.Primary.Attributes["name"], rs.Primary.Attributes["service_namespace"], rs.Primary.Attributes["resource_id"]) if err != nil { diff --git a/internal/service/appautoscaling/service_package.go b/internal/service/appautoscaling/service_package.go new file mode 100644 index 00000000000..c03d1a61306 --- /dev/null +++ b/internal/service/appautoscaling/service_package.go @@ -0,0 +1,29 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package appautoscaling + +import ( + "context" + "strings" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + applicationautoscaling_sdkv1 "github.com/aws/aws-sdk-go/service/applicationautoscaling" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *applicationautoscaling_sdkv1.ApplicationAutoScaling) (*applicationautoscaling_sdkv1.ApplicationAutoScaling, error) { + // Workaround for https://github.com/aws/aws-sdk-go/issues/1472. + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if !strings.HasPrefix(r.Operation.Name, "Describe") && !strings.HasPrefix(r.Operation.Name, "List") { + return + } + if tfawserr.ErrCodeEquals(r.Error, applicationautoscaling_sdkv1.ErrCodeFailedResourceAccessException) { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/appautoscaling/service_package_gen.go b/internal/service/appautoscaling/service_package_gen.go index c971c8dab9b..b3afc60b206 100644 --- a/internal/service/appautoscaling/service_package_gen.go +++ b/internal/service/appautoscaling/service_package_gen.go @@ -5,6 +5,10 @@ package appautoscaling import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + applicationautoscaling_sdkv1 "github.com/aws/aws-sdk-go/service/applicationautoscaling" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +52,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AppAutoScaling } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's 
AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*applicationautoscaling_sdkv1.ApplicationAutoScaling, error) { + sess := config["session"].(*session_sdkv1.Session) + + return applicationautoscaling_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/appautoscaling/tags_gen.go b/internal/service/appautoscaling/tags_gen.go index 638d6d6f712..f7d11b116b2 100644 --- a/internal/service/appautoscaling/tags_gen.go +++ b/internal/service/appautoscaling/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists appautoscaling service tags. +// listTags lists appautoscaling service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn applicationautoscalingiface.ApplicationAutoScalingAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn applicationautoscalingiface.ApplicationAutoScalingAPI, identifier string) (tftags.KeyValueTags, error) { input := &applicationautoscaling.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn applicationautoscalingiface.ApplicationA // ListTags lists appautoscaling service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AppAutoScalingConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AppAutoScalingConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from appautoscaling service tags. +// KeyValueTags creates tftags.KeyValueTags from appautoscaling service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns appautoscaling service tags from Context. +// getTagsIn returns appautoscaling service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets appautoscaling service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets appautoscaling service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates appautoscaling service tags. +// updateTags updates appautoscaling service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn applicationautoscalingiface.ApplicationAutoScalingAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn applicationautoscalingiface.ApplicationAutoScalingAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn applicationautoscalingiface.Applicatio // UpdateTags updates appautoscaling service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppAutoScalingConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppAutoScalingConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/appautoscaling/target.go b/internal/service/appautoscaling/target.go index 2925be57529..5fa630c3e3b 100644 --- a/internal/service/appautoscaling/target.go +++ b/internal/service/appautoscaling/target.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appautoscaling import ( @@ -77,7 +80,7 @@ func ResourceTarget() *schema.Resource { func resourceTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) resourceID := d.Get("resource_id").(string) input := &applicationautoscaling.RegisterScalableTargetInput{ @@ -86,7 +89,7 @@ func resourceTargetCreate(ctx context.Context, d *schema.ResourceData, meta inte ResourceId: aws.String(resourceID), ScalableDimension: aws.String(d.Get("scalable_dimension").(string)), ServiceNamespace: aws.String(d.Get("service_namespace").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("role_arn"); ok { @@ -106,7 +109,7 @@ func resourceTargetCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, 2*time.Minute, func() (interface{}, error) { @@ -140,7 +143,7 @@ func resourceTargetRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceTargetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &applicationautoscaling.RegisterScalableTargetInput{ @@ -167,7 +170,7 @@ func resourceTargetUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).AppAutoScalingConn() + conn := meta.(*conns.AWSClient).AppAutoScalingConn(ctx) input := &applicationautoscaling.DeregisterScalableTargetInput{ ResourceId: aws.String(d.Id()), diff --git a/internal/service/appautoscaling/target_test.go b/internal/service/appautoscaling/target_test.go index 537ab5a107b..2a60a2f153d 100644 --- a/internal/service/appautoscaling/target_test.go +++ b/internal/service/appautoscaling/target_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appautoscaling_test import ( @@ -253,7 +256,7 @@ func TestAccAppAutoScalingTarget_optionalRoleARN(t *testing.T) { func testAccCheckTargetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appautoscaling_target" { @@ -288,7 +291,7 @@ func testAccCheckTargetExists(ctx context.Context, n string, v *applicationautos return fmt.Errorf("No Application AutoScaling Target ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppAutoScalingConn(ctx) output, err := tfappautoscaling.FindTargetByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["service_namespace"], rs.Primary.Attributes["scalable_dimension"]) diff --git a/internal/service/appconfig/application.go b/internal/service/appconfig/application.go index 1a4d36e1669..f63b50fffc3 100644 --- a/internal/service/appconfig/application.go +++ b/internal/service/appconfig/application.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -55,12 +58,12 @@ func ResourceApplication() *schema.Resource { func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) applicationName := d.Get("name").(string) input := &appconfig.CreateApplicationInput{ Name: aws.String(applicationName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -84,7 +87,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.GetApplicationInput{ ApplicationId: aws.String(d.Id()), @@ -123,7 +126,7 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) if d.HasChangesExcept("tags", "tags_all") { updateInput := &appconfig.UpdateApplicationInput{ @@ -150,7 +153,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.DeleteApplicationInput{ ApplicationId: aws.String(d.Id()), diff --git a/internal/service/appconfig/application_test.go b/internal/service/appconfig/application_test.go index 
96ff9828ac6..8ff932409fc 100644 --- a/internal/service/appconfig/application_test.go +++ b/internal/service/appconfig/application_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -197,7 +200,7 @@ func TestAccAppConfigApplication_tags(t *testing.T) { func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_application" { @@ -238,7 +241,7 @@ func testAccCheckApplicationExists(ctx context.Context, resourceName string) res return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.GetApplicationInput{ ApplicationId: aws.String(rs.Primary.ID), diff --git a/internal/service/appconfig/configuration_profile.go b/internal/service/appconfig/configuration_profile.go index da0c1614b1b..042db097e33 100644 --- a/internal/service/appconfig/configuration_profile.go +++ b/internal/service/appconfig/configuration_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -109,7 +112,7 @@ func ResourceConfigurationProfile() *schema.Resource { func resourceConfigurationProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appId := d.Get("application_id").(string) name := d.Get("name").(string) @@ -117,7 +120,7 @@ func resourceConfigurationProfileCreate(ctx context.Context, d *schema.ResourceD ApplicationId: aws.String(appId), LocationUri: aws.String(d.Get("location_uri").(string)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -153,7 +156,7 @@ func resourceConfigurationProfileCreate(ctx context.Context, d *schema.ResourceD func resourceConfigurationProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) confProfID, appID, err := ConfigurationProfileParseID(d.Id()) @@ -208,7 +211,7 @@ func resourceConfigurationProfileRead(ctx context.Context, d *schema.ResourceDat func resourceConfigurationProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) if d.HasChangesExcept("tags", "tags_all") { confProfID, appID, err := ConfigurationProfileParseID(d.Id()) @@ -250,7 +253,7 @@ func resourceConfigurationProfileUpdate(ctx context.Context, d *schema.ResourceD func resourceConfigurationProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) 
confProfID, appID, err := ConfigurationProfileParseID(d.Id()) diff --git a/internal/service/appconfig/configuration_profile_data_source.go b/internal/service/appconfig/configuration_profile_data_source.go index 548232b40dc..dab6e859f8d 100644 --- a/internal/service/appconfig/configuration_profile_data_source.go +++ b/internal/service/appconfig/configuration_profile_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -82,7 +85,7 @@ const ( ) func dataSourceConfigurationProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appId := d.Get("application_id").(string) profileId := d.Get("configuration_profile_id").(string) @@ -117,7 +120,7 @@ func dataSourceConfigurationProfileRead(ctx context.Context, d *schema.ResourceD return create.DiagError(names.AppConfig, create.ErrActionSetting, DSNameConfigurationProfile, ID, err) } - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return create.DiagError(names.AppConfig, create.ErrActionReading, DSNameConfigurationProfile, ID, err) } diff --git a/internal/service/appconfig/configuration_profile_data_source_test.go b/internal/service/appconfig/configuration_profile_data_source_test.go index dc9ffd8f85b..d6c1497930e 100644 --- a/internal/service/appconfig/configuration_profile_data_source_test.go +++ b/internal/service/appconfig/configuration_profile_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( diff --git a/internal/service/appconfig/configuration_profile_test.go b/internal/service/appconfig/configuration_profile_test.go index 63e1b6d2ee1..140ff64cc47 100644 --- a/internal/service/appconfig/configuration_profile_test.go +++ b/internal/service/appconfig/configuration_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -325,7 +328,7 @@ func TestAccAppConfigConfigurationProfile_tags(t *testing.T) { func testAccCheckConfigurationProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_configuration_profile" { @@ -379,7 +382,7 @@ func testAccCheckConfigurationProfileExists(ctx context.Context, resourceName st return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) output, err := conn.GetConfigurationProfileWithContext(ctx, &appconfig.GetConfigurationProfileInput{ ApplicationId: aws.String(appID), diff --git a/internal/service/appconfig/configuration_profiles_data_source.go b/internal/service/appconfig/configuration_profiles_data_source.go index 94da79f1534..cc9712030e9 100644 --- a/internal/service/appconfig/configuration_profiles_data_source.go +++ b/internal/service/appconfig/configuration_profiles_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -35,7 +38,7 @@ const ( ) func dataSourceConfigurationProfilesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appId := d.Get("application_id").(string) out, err := findConfigurationProfileSummariesByApplication(ctx, conn, appId) diff --git a/internal/service/appconfig/configuration_profiles_data_source_test.go b/internal/service/appconfig/configuration_profiles_data_source_test.go index 8eac714b577..5cca863720e 100644 --- a/internal/service/appconfig/configuration_profiles_data_source_test.go +++ b/internal/service/appconfig/configuration_profiles_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( diff --git a/internal/service/appconfig/deployment.go b/internal/service/appconfig/deployment.go index 14763968887..b8ec4ec408e 100644 --- a/internal/service/appconfig/deployment.go +++ b/internal/service/appconfig/deployment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -92,7 +95,7 @@ func ResourceDeployment() *schema.Resource { func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.StartDeploymentInput{ ApplicationId: aws.String(d.Get("application_id").(string)), @@ -101,7 +104,7 @@ func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta ConfigurationVersion: aws.String(d.Get("configuration_version").(string)), DeploymentStrategyId: aws.String(d.Get("deployment_strategy_id").(string)), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.StartDeploymentWithContext(ctx, input) @@ -125,7 +128,7 @@ func resourceDeploymentCreate(ctx context.Context, d *schema.ResourceData, meta func resourceDeploymentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appID, envID, deploymentNum, err := DeploymentParseID(d.Id()) @@ -199,7 +202,7 @@ func DeploymentParseID(id string) (string, string, int, error) { num, err := strconv.Atoi(parts[2]) if err != nil { - return "", "", 0, fmt.Errorf("error parsing AppConfig Deployment resource ID deployment_number: %w", err) + return "", "", 0, fmt.Errorf("parsing AppConfig Deployment resource ID deployment_number: %w", err) } return parts[0], parts[1], num, nil diff --git a/internal/service/appconfig/deployment_strategy.go b/internal/service/appconfig/deployment_strategy.go index 3fb5571f361..b481d78c3c9 100644 --- a/internal/service/appconfig/deployment_strategy.go +++ b/internal/service/appconfig/deployment_strategy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -83,7 +86,7 @@ func ResourceDeploymentStrategy() *schema.Resource { func resourceDeploymentStrategyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) name := d.Get("name").(string) input := &appconfig.CreateDeploymentStrategyInput{ @@ -92,7 +95,7 @@ func resourceDeploymentStrategyCreate(ctx context.Context, d *schema.ResourceDat GrowthType: aws.String(d.Get("growth_type").(string)), Name: aws.String(name), ReplicateTo: aws.String(d.Get("replicate_to").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -116,7 +119,7 @@ func resourceDeploymentStrategyCreate(ctx context.Context, d *schema.ResourceDat func resourceDeploymentStrategyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.GetDeploymentStrategyInput{ DeploymentStrategyId: aws.String(d.Id()), @@ -160,7 +163,7 @@ func resourceDeploymentStrategyRead(ctx context.Context, d *schema.ResourceData, func resourceDeploymentStrategyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) if d.HasChangesExcept("tags", "tags_all") { updateInput := &appconfig.UpdateDeploymentStrategyInput{ @@ -199,7 +202,7 @@ func resourceDeploymentStrategyUpdate(ctx context.Context, d *schema.ResourceDat func resourceDeploymentStrategyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := 
meta.(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.DeleteDeploymentStrategyInput{ DeploymentStrategyId: aws.String(d.Id()), diff --git a/internal/service/appconfig/deployment_strategy_test.go b/internal/service/appconfig/deployment_strategy_test.go index 1787a4768ac..7be682e075f 100644 --- a/internal/service/appconfig/deployment_strategy_test.go +++ b/internal/service/appconfig/deployment_strategy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -200,7 +203,7 @@ func TestAccAppConfigDeploymentStrategy_tags(t *testing.T) { func testAccCheckDeploymentStrategyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_deployment_strategy" { @@ -241,7 +244,7 @@ func testAccCheckDeploymentStrategyExists(ctx context.Context, resourceName stri return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.GetDeploymentStrategyInput{ DeploymentStrategyId: aws.String(rs.Primary.ID), diff --git a/internal/service/appconfig/deployment_test.go b/internal/service/appconfig/deployment_test.go index e34d2243abb..fc4cab0e98e 100644 --- a/internal/service/appconfig/deployment_test.go +++ b/internal/service/appconfig/deployment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -156,7 +159,7 @@ func testAccCheckDeploymentExists(ctx context.Context, resourceName string) reso return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.GetDeploymentInput{ ApplicationId: aws.String(appID), diff --git a/internal/service/appconfig/enum.go b/internal/service/appconfig/enum.go index 7206bdac7a6..2f6f243a3ab 100644 --- a/internal/service/appconfig/enum.go +++ b/internal/service/appconfig/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig const ( diff --git a/internal/service/appconfig/environment.go b/internal/service/appconfig/environment.go index fc577e38936..3f9631733db 100644 --- a/internal/service/appconfig/environment.go +++ b/internal/service/appconfig/environment.go @@ -1,335 +1,485 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( "context" "fmt" - "log" "regexp" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/arn" - "github.com/aws/aws-sdk-go/service/appconfig" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/diag" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/aws/arn" + "github.com/aws/aws-sdk-go-v2/service/appconfig" + awstypes "github.com/aws/aws-sdk-go-v2/service/appconfig/types" + "github.com/hashicorp/terraform-plugin-framework-validators/setvalidator" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringdefault" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-log/tflog" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + fwtypes 
"github.com/hashicorp/terraform-provider-aws/internal/framework/types" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" - "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) -// @SDKResource("aws_appconfig_environment", name="Environment") +// @FrameworkResource // @Tags(identifierAttribute="arn") -func ResourceEnvironment() *schema.Resource { - return &schema.Resource{ - CreateWithoutTimeout: resourceEnvironmentCreate, - ReadWithoutTimeout: resourceEnvironmentRead, - UpdateWithoutTimeout: resourceEnvironmentUpdate, - DeleteWithoutTimeout: resourceEnvironmentDelete, - Importer: &schema.ResourceImporter{ - StateContext: schema.ImportStatePassthroughContext, - }, +func newResourceEnvironment(context.Context) (resource.ResourceWithConfigure, error) { + r := &resourceEnvironment{} + r.SetMigratedFromPluginSDK(true) + + return r, nil +} + +type resourceEnvironment struct { + framework.ResourceWithConfigure +} + +func (r *resourceEnvironment) Metadata(_ context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_appconfig_environment" +} - Schema: map[string]*schema.Schema{ - "application_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringMatch(regexp.MustCompile(`[a-z0-9]{4,7}`), ""), +func (r *resourceEnvironment) Schema(ctx context.Context, request resource.SchemaRequest, response *resource.SchemaResponse) { + s := schema.Schema{ + Attributes: map[string]schema.Attribute{ + "application_id": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + Validators: []validator.String{ + stringvalidator.RegexMatches( + regexp.MustCompile(`^[a-z0-9]{4,7}$`), + "value must contain 4-7 lowercase letters or numbers", + ), + }, }, - "environment_id": { - Type: schema.TypeString, + "arn": schema.StringAttribute{ + 
Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "description": schema.StringAttribute{ + Optional: true, Computed: true, + Default: stringdefault.StaticString(""), // Needed for backwards compatibility with SDK resource }, - "arn": { - Type: schema.TypeString, + "environment_id": schema.StringAttribute{ Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, }, - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 64), + "id": schema.StringAttribute{ + Computed: true, + DeprecationMessage: "This attribute is unused and will be removed in a future version of the provider", + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, }, - "description": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(0, 1024), + "name": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 64), + }, }, - "monitor": { - Type: schema.TypeSet, - Optional: true, - MaxItems: 5, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "alarm_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 2048), - verify.ValidARN, - ), + "state": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + names.AttrTags: tftags.TagsAttribute(), + names.AttrTagsAll: tftags.TagsAttributeComputedOnly(), + }, + Blocks: map[string]schema.Block{ + "monitor": schema.SetNestedBlock{ + Validators: []validator.Set{ + setvalidator.SizeAtMost(5), + }, + NestedObject: schema.NestedBlockObject{ + Attributes: map[string]schema.Attribute{ + "alarm_arn": schema.StringAttribute{ + CustomType: fwtypes.ARNType, + Required: true, + Validators: []validator.String{ + 
stringvalidator.LengthBetween(1, 2048), + }, }, - "alarm_role_arn": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidARN, + "alarm_role_arn": schema.StringAttribute{ + CustomType: fwtypes.ARNType, + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(20, 2048), + }, }, }, }, }, - "state": { - Type: schema.TypeString, - Computed: true, - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), }, - CustomizeDiff: verify.SetTagsDiff, } + + response.Schema = s } -func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() +func (r *resourceEnvironment) Create(ctx context.Context, request resource.CreateRequest, response *resource.CreateResponse) { + conn := r.Meta().AppConfigClient(ctx) - appId := d.Get("application_id").(string) - input := &appconfig.CreateEnvironmentInput{ - Name: aws.String(d.Get("name").(string)), - ApplicationId: aws.String(appId), - Tags: GetTagsIn(ctx), + var plan resourceEnvironmentData + response.Diagnostics.Append(request.Plan.Get(ctx, &plan)...) + if response.Diagnostics.HasError() { + return } - if v, ok := d.GetOk("description"); ok { - input.Description = aws.String(v.(string)) + appId := plan.ApplicationID.ValueString() + + var monitors []monitorData + response.Diagnostics.Append(plan.Monitors.ElementsAs(ctx, &monitors, false)...) 
+ if response.Diagnostics.HasError() { + return } - if v, ok := d.GetOk("monitor"); ok && v.(*schema.Set).Len() > 0 { - input.Monitors = expandEnvironmentMonitors(v.(*schema.Set).List()) + input := &appconfig.CreateEnvironmentInput{ + Name: aws.String(plan.Name.ValueString()), + ApplicationId: aws.String(appId), + Tags: aws.ToStringMap(getTagsIn(ctx)), + Monitors: expandMonitors(monitors), } - environment, err := conn.CreateEnvironmentWithContext(ctx, input) + if !(plan.Description.IsNull() || plan.Description.IsUnknown()) { + input.Description = aws.String(plan.Description.ValueString()) + } + environment, err := conn.CreateEnvironment(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating AppConfig Environment for Application (%s): %s", appId, err) + response.Diagnostics.AddError( + fmt.Sprintf("creating AppConfig Environment for Application (%s)", appId), + err.Error(), + ) } - if environment == nil { - return sdkdiag.AppendErrorf(diags, "creating AppConfig Environment for Application (%s): empty response", appId) + if environment == nil { + response.Diagnostics.AddError( + fmt.Sprintf("creating AppConfig Environment for Application (%s)", appId), + "empty response", + ) }
+ if response.Diagnostics.HasError() { + return } - input := &appconfig.GetEnvironmentInput{ - ApplicationId: aws.String(appID), - EnvironmentId: aws.String(envID), - } + response.Diagnostics.Append(response.State.Set(ctx, &state)...) +} - output, err := conn.GetEnvironmentWithContext(ctx, input) +func (r *resourceEnvironment) Read(ctx context.Context, request resource.ReadRequest, response *resource.ReadResponse) { + conn := r.Meta().AppConfigClient(ctx) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, appconfig.ErrCodeResourceNotFoundException) { - log.Printf("[WARN] Appconfig Environment (%s) for Application (%s) not found, removing from state", envID, appID) - d.SetId("") - return diags + var state resourceEnvironmentData + response.Diagnostics.Append(request.State.Get(ctx, &state)...) + if response.Diagnostics.HasError() { + return } + output, err := conn.GetEnvironment(ctx, state.getEnvironmentInput()) + if errs.IsA[*awstypes.ResourceNotFoundException](err) { + response.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + response.State.RemoveResource(ctx) + return + } if err != nil { - return sdkdiag.AppendErrorf(diags, "reading AppConfig Environment (%s) for Application (%s): %s", envID, appID, err) + response.Diagnostics.AddError( + fmt.Sprintf("reading AppConfig Environment (%s) for Application (%s)", state.EnvironmentID.ValueString(), state.ApplicationID.ValueString()), + err.Error(), + ) } - if output == nil { - return sdkdiag.AppendErrorf(diags, "reading AppConfig Environment (%s) for Application (%s): empty response", envID, appID) + response.Diagnostics.Append(state.refreshFromGetOutput(ctx, r.Meta(), output)...) 
+ if response.Diagnostics.HasError() { + return } - d.Set("application_id", output.ApplicationId) - d.Set("environment_id", output.Id) - d.Set("description", output.Description) - d.Set("name", output.Name) - d.Set("state", output.State) - - if err := d.Set("monitor", flattenEnvironmentMonitors(output.Monitors)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting monitor: %s", err) - } - - arn := arn.ARN{ - AccountID: meta.(*conns.AWSClient).AccountID, - Partition: meta.(*conns.AWSClient).Partition, - Region: meta.(*conns.AWSClient).Region, - Resource: fmt.Sprintf("application/%s/environment/%s", appID, envID), - Service: "appconfig", - }.String() - - d.Set("arn", arn) - - return diags + response.Diagnostics.Append(response.State.Set(ctx, &state)...) } -func resourceEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() +func (r *resourceEnvironment) Update(ctx context.Context, request resource.UpdateRequest, response *resource.UpdateResponse) { + conn := r.Meta().AppConfigClient(ctx) - if d.HasChangesExcept("tags", "tags_all") { - envID, appID, err := EnvironmentParseID(d.Id()) + var state resourceEnvironmentData + response.Diagnostics.Append(request.State.Get(ctx, &state)...) + if response.Diagnostics.HasError() { + return + } - if err != nil { - return sdkdiag.AppendErrorf(diags, "updating AppConfig Environment (%s): %s", d.Id(), err) - } + var plan resourceEnvironmentData + response.Diagnostics.Append(request.Plan.Get(ctx, &plan)...) 
+ if response.Diagnostics.HasError() { + return + } - updateInput := &appconfig.UpdateEnvironmentInput{ - EnvironmentId: aws.String(envID), - ApplicationId: aws.String(appID), - } + if !plan.Description.Equal(state.Description) || + !plan.Name.Equal(state.Name) || + !plan.Monitors.Equal(state.Monitors) { + updateInput := plan.updateEnvironmentInput() - if d.HasChange("description") { - updateInput.Description = aws.String(d.Get("description").(string)) + if !plan.Description.Equal(state.Description) { + updateInput.Description = aws.String(plan.Description.ValueString()) } - if d.HasChange("name") { - updateInput.Name = aws.String(d.Get("name").(string)) + if !plan.Name.Equal(state.Name) { + updateInput.Name = aws.String(plan.Name.ValueString()) } - if d.HasChange("monitor") { - updateInput.Monitors = expandEnvironmentMonitors(d.Get("monitor").(*schema.Set).List()) + if !plan.Monitors.Equal(state.Monitors) { + var monitors []monitorData + response.Diagnostics.Append(plan.Monitors.ElementsAs(ctx, &monitors, false)...) + if response.Diagnostics.HasError() { + return + } + updateInput.Monitors = expandMonitors(monitors) } - _, err = conn.UpdateEnvironmentWithContext(ctx, updateInput) - + output, err := conn.UpdateEnvironment(ctx, updateInput) if err != nil { - return sdkdiag.AppendErrorf(diags, "updating AppConfig Environment (%s) for Application (%s): %s", envID, appID, err) + response.Diagnostics.AddError( + fmt.Sprintf("updating AppConfig Environment (%s) for Application (%s)", state.EnvironmentID.ValueString(), state.ApplicationID.ValueString()), + err.Error(), + ) + } + + response.Diagnostics.Append(plan.refreshFromUpdateOutput(ctx, r.Meta(), output)...) + if response.Diagnostics.HasError() { + return } } - return append(diags, resourceEnvironmentRead(ctx, d, meta)...) + response.Diagnostics.Append(response.State.Set(ctx, &plan)...) 
} -func resourceEnvironmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() +func (r *resourceEnvironment) Delete(ctx context.Context, request resource.DeleteRequest, response *resource.DeleteResponse) { + conn := r.Meta().AppConfigClient(ctx) + + var state resourceEnvironmentData + response.Diagnostics.Append(request.State.Get(ctx, &state)...) + if response.Diagnostics.HasError() { + return + } - envID, appID, err := EnvironmentParseID(d.Id()) + tflog.Debug(ctx, "Deleting AppConfig Environment", map[string]any{ + "application_id": state.ApplicationID.ValueString(), + "environment_id": state.EnvironmentID.ValueString(), + }) + _, err := conn.DeleteEnvironment(ctx, state.deleteEnvironmentInput()) + if errs.IsA[*awstypes.ResourceNotFoundException](err) { + return + } if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting AppConfig Environment (%s): %s", d.Id(), err) + response.Diagnostics.AddError( + fmt.Sprintf("deleting AppConfig Environment (%s) for Application (%s)", state.EnvironmentID.ValueString(), state.ApplicationID.ValueString()), + err.Error(), + ) } +} - input := &appconfig.DeleteEnvironmentInput{ - EnvironmentId: aws.String(envID), - ApplicationId: aws.String(appID), +func (r *resourceEnvironment) ImportState(ctx context.Context, request resource.ImportStateRequest, response *resource.ImportStateResponse) { + parts := strings.Split(request.ID, ":") + if len(parts) != 2 { + response.Diagnostics.AddError("Resource Import Invalid ID", fmt.Sprintf(`Unexpected format for import ID (%s), use: "EnvironmentID:ApplicationID"`, request.ID)) + return } - _, err = conn.DeleteEnvironmentWithContext(ctx, input) + response.Diagnostics.Append(response.State.SetAttribute(ctx, path.Root("environment_id"), parts[0])...) + response.Diagnostics.Append(response.State.SetAttribute(ctx, path.Root("application_id"), parts[1])...) 
+} + +func (r *resourceEnvironment) ModifyPlan(ctx context.Context, request resource.ModifyPlanRequest, response *resource.ModifyPlanResponse) { + r.SetTagsAll(ctx, request, response) +} + +type resourceEnvironmentData struct { + ApplicationID types.String `tfsdk:"application_id"` + ARN types.String `tfsdk:"arn"` + Description types.String `tfsdk:"description"` + EnvironmentID types.String `tfsdk:"environment_id"` + ID types.String `tfsdk:"id"` + Monitors types.Set `tfsdk:"monitor"` + Name types.String `tfsdk:"name"` + State types.String `tfsdk:"state"` + Tags types.Map `tfsdk:"tags"` + TagsAll types.Map `tfsdk:"tags_all"` +} + +func (d *resourceEnvironmentData) refreshFromCreateOutput(ctx context.Context, meta *conns.AWSClient, out *appconfig.CreateEnvironmentOutput) diag.Diagnostics { + var diags diag.Diagnostics - if tfawserr.ErrCodeEquals(err, appconfig.ErrCodeResourceNotFoundException) { + if out == nil { return diags } - if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting Appconfig Environment (%s) for Application (%s): %s", envID, appID, err) - } + appID := aws.ToString(out.ApplicationId) + envID := aws.ToString(out.Id) + + d.ApplicationID = types.StringValue(appID) + d.ARN = types.StringValue(environmentARN(meta, aws.ToString(out.ApplicationId), aws.ToString(out.Id)).String()) + d.Description = flex.StringToFrameworkLegacy(ctx, out.Description) + d.EnvironmentID = types.StringValue(envID) + d.ID = types.StringValue(fmt.Sprintf("%s:%s", envID, appID)) + d.Monitors = flattenMonitors(ctx, out.Monitors, &diags) + d.Name = flex.StringToFramework(ctx, out.Name) + d.State = flex.StringValueToFramework(ctx, out.State) return diags } -func EnvironmentParseID(id string) (string, string, error) { - parts := strings.Split(id, ":") +func (d *resourceEnvironmentData) refreshFromGetOutput(ctx context.Context, meta *conns.AWSClient, out *appconfig.GetEnvironmentOutput) diag.Diagnostics { + var diags diag.Diagnostics - if len(parts) != 2 || parts[0] == "" || 
parts[1] == "" { - return "", "", fmt.Errorf("unexpected format of ID (%q), expected EnvironmentID:ApplicationID", id) + if out == nil { + return diags } - return parts[0], parts[1], nil -} + appID := aws.ToString(out.ApplicationId) + envID := aws.ToString(out.Id) -func expandEnvironmentMonitor(tfMap map[string]interface{}) *appconfig.Monitor { - if tfMap == nil { - return nil - } + d.ApplicationID = types.StringValue(appID) + d.ARN = types.StringValue(environmentARN(meta, aws.ToString(out.ApplicationId), aws.ToString(out.Id)).String()) + d.Description = flex.StringToFrameworkLegacy(ctx, out.Description) + d.EnvironmentID = types.StringValue(envID) + d.ID = types.StringValue(fmt.Sprintf("%s:%s", envID, appID)) + d.Monitors = flattenMonitors(ctx, out.Monitors, &diags) + d.Name = flex.StringToFramework(ctx, out.Name) + d.State = flex.StringValueToFramework(ctx, out.State) - monitor := &appconfig.Monitor{} + return diags +} - if v, ok := tfMap["alarm_arn"].(string); ok && v != "" { - monitor.AlarmArn = aws.String(v) - } +func (d *resourceEnvironmentData) refreshFromUpdateOutput(ctx context.Context, meta *conns.AWSClient, out *appconfig.UpdateEnvironmentOutput) diag.Diagnostics { + var diags diag.Diagnostics - if v, ok := tfMap["alarm_role_arn"].(string); ok && v != "" { - monitor.AlarmRoleArn = aws.String(v) + if out == nil { + return diags } - return monitor -} - -func expandEnvironmentMonitors(tfList []interface{}) []*appconfig.Monitor { - // AppConfig API requires a 0 length slice instead of a nil value - // when updating from N monitors to 0/nil monitors - monitors := make([]*appconfig.Monitor, 0) + appID := aws.ToString(out.ApplicationId) + envID := aws.ToString(out.Id) - for _, tfMapRaw := range tfList { - tfMap, ok := tfMapRaw.(map[string]interface{}) + d.ApplicationID = types.StringValue(appID) + d.ARN = types.StringValue(environmentARN(meta, aws.ToString(out.ApplicationId), aws.ToString(out.Id)).String()) + d.Description = flex.StringToFrameworkLegacy(ctx, 
out.Description) + d.EnvironmentID = types.StringValue(envID) + d.ID = types.StringValue(fmt.Sprintf("%s:%s", envID, appID)) + d.Monitors = flattenMonitors(ctx, out.Monitors, &diags) + d.Name = flex.StringToFramework(ctx, out.Name) + d.State = flex.StringValueToFramework(ctx, out.State) - if !ok { - continue - } + return diags +} - monitor := expandEnvironmentMonitor(tfMap) +func (d *resourceEnvironmentData) getEnvironmentInput() *appconfig.GetEnvironmentInput { + return &appconfig.GetEnvironmentInput{ + ApplicationId: aws.String(d.ApplicationID.ValueString()), + EnvironmentId: aws.String(d.EnvironmentID.ValueString()), + } +} - if monitor == nil { - continue - } +func (d *resourceEnvironmentData) updateEnvironmentInput() *appconfig.UpdateEnvironmentInput { + return &appconfig.UpdateEnvironmentInput{ + ApplicationId: aws.String(d.ApplicationID.ValueString()), + EnvironmentId: aws.String(d.EnvironmentID.ValueString()), + } +} - monitors = append(monitors, monitor) +func (d *resourceEnvironmentData) deleteEnvironmentInput() *appconfig.DeleteEnvironmentInput { + return &appconfig.DeleteEnvironmentInput{ + ApplicationId: aws.String(d.ApplicationID.ValueString()), + EnvironmentId: aws.String(d.EnvironmentID.ValueString()), } +} - return monitors +func environmentARN(meta *conns.AWSClient, appID, envID string) arn.ARN { + return arn.ARN{ + AccountID: meta.AccountID, + Partition: meta.Partition, + Region: meta.Region, + Resource: fmt.Sprintf("application/%s/environment/%s", appID, envID), + Service: "appconfig", + } } -func flattenEnvironmentMonitor(monitor *appconfig.Monitor) map[string]interface{} { - if monitor == nil { - return nil +func expandMonitors(l []monitorData) []awstypes.Monitor { + monitors := make([]awstypes.Monitor, len(l)) + for i, item := range l { + monitors[i] = item.expand() } + return monitors +} - tfMap := map[string]interface{}{} +func flattenMonitors(ctx context.Context, apiObjects []awstypes.Monitor, diags *diag.Diagnostics) types.Set { + 
monitorDataTypes := flex.AttributeTypesMust[monitorData](ctx) + elemType := types.ObjectType{AttrTypes: monitorDataTypes} - if v := monitor.AlarmArn; v != nil { - tfMap["alarm_arn"] = aws.StringValue(v) + if len(apiObjects) == 0 { + return types.SetValueMust(elemType, []attr.Value{}) } - if v := monitor.AlarmRoleArn; v != nil { - tfMap["alarm_role_arn"] = aws.StringValue(v) + values := make([]attr.Value, len(apiObjects)) + for i, o := range apiObjects { + values[i] = flattenMonitorData(ctx, o, diags).value(ctx, diags) } - return tfMap + result, d := types.SetValueFrom(ctx, elemType, values) + diags.Append(d...) + + return result +} + +type monitorData struct { + AlarmARN fwtypes.ARN `tfsdk:"alarm_arn"` + AlarmRoleARN fwtypes.ARN `tfsdk:"alarm_role_arn"` } -func flattenEnvironmentMonitors(monitors []*appconfig.Monitor) []interface{} { - if len(monitors) == 0 { - return nil +func (m monitorData) expand() awstypes.Monitor { + result := awstypes.Monitor{ + AlarmArn: aws.String(m.AlarmARN.ValueARN().String()), } - var tfList []interface{} + if !m.AlarmRoleARN.IsNull() { + result.AlarmRoleArn = aws.String(m.AlarmRoleARN.ValueARN().String()) + } - for _, monitor := range monitors { - if monitor == nil { - continue - } + return result +} - tfList = append(tfList, flattenEnvironmentMonitor(monitor)) +func flattenMonitorData(ctx context.Context, apiObject awstypes.Monitor, diags *diag.Diagnostics) monitorData { + return monitorData{ + AlarmARN: flex.StringToFrameworkARN(ctx, apiObject.AlarmArn, diags), + AlarmRoleARN: flex.StringToFrameworkARN(ctx, apiObject.AlarmRoleArn, diags), } +} + +func (m monitorData) value(ctx context.Context, diags *diag.Diagnostics) types.Object { + monitorDataTypes := flex.AttributeTypesMust[monitorData](ctx) + + obj, d := types.ObjectValueFrom(ctx, monitorDataTypes, m) + diags.Append(d...) 
- return tfList + return obj } diff --git a/internal/service/appconfig/environment_data_source.go b/internal/service/appconfig/environment_data_source.go index c79222c0252..f70a73cf045 100644 --- a/internal/service/appconfig/environment_data_source.go +++ b/internal/service/appconfig/environment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -5,8 +8,7 @@ import ( "fmt" "regexp" - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/appconfig" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -74,7 +76,7 @@ const ( ) func dataSourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appID := d.Get("application_id").(string) envID := d.Get("environment_id").(string) @@ -97,17 +99,11 @@ func dataSourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta return create.DiagError(names.AppConfig, create.ErrActionReading, DSNameEnvironment, ID, err) } - arn := arn.ARN{ - AccountID: meta.(*conns.AWSClient).AccountID, - Partition: meta.(*conns.AWSClient).Partition, - Region: meta.(*conns.AWSClient).Region, - Resource: fmt.Sprintf("application/%s/environment/%s", appID, envID), - Service: "appconfig", - }.String() + arn := environmentARN(meta.(*conns.AWSClient), appID, envID).String() d.Set("arn", arn) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return create.DiagError(names.AppConfig, create.ErrActionReading, DSNameEnvironment, ID, err) @@ -136,3 +132,39 @@ func findEnvironmentByApplicationAndEnvironment(ctx context.Context, conn *appco return res, nil } + +func flattenEnvironmentMonitors(monitors []*appconfig.Monitor) 
[]interface{} { + if len(monitors) == 0 { + return nil + } + + var tfList []interface{} + + for _, monitor := range monitors { + if monitor == nil { + continue + } + + tfList = append(tfList, flattenEnvironmentMonitor(monitor)) + } + + return tfList +} + +func flattenEnvironmentMonitor(monitor *appconfig.Monitor) map[string]interface{} { + if monitor == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := monitor.AlarmArn; v != nil { + tfMap["alarm_arn"] = aws.StringValue(v) + } + + if v := monitor.AlarmRoleArn; v != nil { + tfMap["alarm_role_arn"] = aws.StringValue(v) + } + + return tfMap +} diff --git a/internal/service/appconfig/environment_data_source_test.go b/internal/service/appconfig/environment_data_source_test.go index 6fdbee7d121..13587685436 100644 --- a/internal/service/appconfig/environment_data_source_test.go +++ b/internal/service/appconfig/environment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( diff --git a/internal/service/appconfig/environment_test.go b/internal/service/appconfig/environment_test.go index 34d42e2ae9f..ae965f01f36 100644 --- a/internal/service/appconfig/environment_test.go +++ b/internal/service/appconfig/environment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -35,6 +38,7 @@ func TestAccAppConfigEnvironment_basic(t *testing.T) { testAccCheckEnvironmentExists(ctx, resourceName), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "appconfig", regexp.MustCompile(`application/[a-z0-9]{4,7}/environment/[a-z0-9]{4,7}`)), resource.TestCheckResourceAttrPair(resourceName, "application_id", appResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "monitor.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttrSet(resourceName, "state"), @@ -65,7 +69,7 @@ func TestAccAppConfigEnvironment_disappears(t *testing.T) { Config: testAccEnvironmentConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckEnvironmentExists(ctx, resourceName), - acctest.CheckResourceDisappears(ctx, acctest.Provider, tfappconfig.ResourceEnvironment(), resourceName), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfappconfig.ResourceEnvironmentFW, resourceName), ), ExpectNonEmptyPlan: true, }, @@ -148,6 +152,7 @@ func TestAccAppConfigEnvironment_updateDescription(t *testing.T) { Config: testAccEnvironmentConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckEnvironmentExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "description", ""), ), }, }, @@ -300,20 +305,80 @@ func TestAccAppConfigEnvironment_tags(t *testing.T) { }) } +func TestAccAppConfigEnvironment_frameworkMigration_basic(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_appconfig_environment.test" + description := "Description" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, appconfig.EndpointsID), + CheckDestroy: testAccCheckEnvironmentDestroy(ctx), + Steps: 
[]resource.TestStep{ + { + ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "5.3.0", + }, + }, + Config: testAccEnvironmentConfig_description(rName, description), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckEnvironmentExists(ctx, resourceName), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Config: testAccEnvironmentConfig_description(rName, description), + PlanOnly: true, + }, + }, + }) +} + +func TestAccAppConfigEnvironment_frameworkMigration_monitors(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_appconfig_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, appconfig.EndpointsID), + CheckDestroy: testAccCheckEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "5.3.0", + }, + }, + Config: testAccEnvironmentConfig_monitors(rName, 2), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckEnvironmentExists(ctx, resourceName), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Config: testAccEnvironmentConfig_monitors(rName, 2), + PlanOnly: true, + }, + }, + }) +} + func testAccCheckEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_environment" { continue } - envID, appID, err := tfappconfig.EnvironmentParseID(rs.Primary.ID) - - if err != nil { - return err - } + appID := rs.Primary.Attributes["application_id"] + envID := 
rs.Primary.Attributes["environment_id"] input := &appconfig.GetEnvironmentInput{ ApplicationId: aws.String(appID), @@ -327,7 +392,7 @@ func testAccCheckEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc } if err != nil { - return fmt.Errorf("error reading AppConfig Environment (%s) for Application (%s): %w", envID, appID, err) + return fmt.Errorf("reading AppConfig Environment (%s) for Application (%s): %w", envID, appID, err) } if output != nil { @@ -350,13 +415,10 @@ func testAccCheckEnvironmentExists(ctx context.Context, resourceName string) res return fmt.Errorf("Resource (%s) ID not set", resourceName) } - envID, appID, err := tfappconfig.EnvironmentParseID(rs.Primary.ID) - - if err != nil { - return err - } + appID := rs.Primary.Attributes["application_id"] + envID := rs.Primary.Attributes["environment_id"] - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) input := &appconfig.GetEnvironmentInput{ ApplicationId: aws.String(appID), @@ -366,7 +428,7 @@ func testAccCheckEnvironmentExists(ctx context.Context, resourceName string) res output, err := conn.GetEnvironmentWithContext(ctx, input) if err != nil { - return fmt.Errorf("error reading AppConfig Environment (%s) for Application (%s): %w", envID, appID, err) + return fmt.Errorf("reading AppConfig Environment (%s) for Application (%s): %w", envID, appID, err) } if output == nil { diff --git a/internal/service/appconfig/environments_data_source.go b/internal/service/appconfig/environments_data_source.go index 40d3c433bce..76aa55e0065 100644 --- a/internal/service/appconfig/environments_data_source.go +++ b/internal/service/appconfig/environments_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -38,7 +41,7 @@ const ( ) func dataSourceEnvironmentsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appID := d.Get("application_id").(string) out, err := findEnvironmentsByApplication(ctx, conn, appID) diff --git a/internal/service/appconfig/environments_data_source_test.go b/internal/service/appconfig/environments_data_source_test.go index a5c53d288e5..e9c896a0d84 100644 --- a/internal/service/appconfig/environments_data_source_test.go +++ b/internal/service/appconfig/environments_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( diff --git a/internal/service/appconfig/exports_test.go b/internal/service/appconfig/exports_test.go new file mode 100644 index 00000000000..ebddbafc599 --- /dev/null +++ b/internal/service/appconfig/exports_test.go @@ -0,0 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package appconfig + +// Exports for use in tests only. +var ( + ResourceEnvironmentFW = newResourceEnvironment +) diff --git a/internal/service/appconfig/extension.go b/internal/service/appconfig/extension.go index efcf0a6cfe6..7393c5c1d0e 100644 --- a/internal/service/appconfig/extension.go +++ b/internal/service/appconfig/extension.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -121,12 +124,12 @@ func ResourceExtension() *schema.Resource { } func resourceExtensionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) in := appconfig.CreateExtensionInput{ Actions: expandExtensionActionPoints(d.Get("action_point").(*schema.Set).List()), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -153,7 +156,7 @@ func resourceExtensionCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceExtensionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) out, err := FindExtensionById(ctx, conn, d.Id()) @@ -178,7 +181,7 @@ func resourceExtensionRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceExtensionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) requestUpdate := false in := &appconfig.UpdateExtensionInput{ @@ -216,7 +219,7 @@ func resourceExtensionUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceExtensionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) _, err := conn.DeleteExtensionWithContext(ctx, &appconfig.DeleteExtensionInput{ ExtensionIdentifier: aws.String(d.Id()), diff --git a/internal/service/appconfig/extension_association.go b/internal/service/appconfig/extension_association.go index 83c4c5a350c..b2359c6da0b 100644 --- a/internal/service/appconfig/extension_association.go 
+++ b/internal/service/appconfig/extension_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -60,7 +63,7 @@ func ResourceExtensionAssociation() *schema.Resource { } func resourceExtensionAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) in := appconfig.CreateExtensionAssociationInput{ ExtensionIdentifier: aws.String(d.Get("extension_arn").(string)), @@ -87,7 +90,7 @@ func resourceExtensionAssociationCreate(ctx context.Context, d *schema.ResourceD } func resourceExtensionAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) out, err := FindExtensionAssociationById(ctx, conn, d.Id()) @@ -111,7 +114,7 @@ func resourceExtensionAssociationRead(ctx context.Context, d *schema.ResourceDat } func resourceExtensionAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) requestUpdate := false in := &appconfig.UpdateExtensionAssociationInput{ @@ -139,7 +142,7 @@ func resourceExtensionAssociationUpdate(ctx context.Context, d *schema.ResourceD } func resourceExtensionAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) _, err := conn.DeleteExtensionAssociationWithContext(ctx, &appconfig.DeleteExtensionAssociationInput{ ExtensionAssociationId: aws.String(d.Id()), diff --git a/internal/service/appconfig/extension_test.go b/internal/service/appconfig/extension_test.go index f5058dc011a..325dce4c214 100644 --- 
a/internal/service/appconfig/extension_test.go +++ b/internal/service/appconfig/extension_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -322,7 +325,7 @@ func TestAccAppConfigExtension_disappears(t *testing.T) { func testAccCheckExtensionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_environment" { @@ -363,7 +366,7 @@ func testAccCheckExtensionExists(ctx context.Context, resourceName string) resou return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) in := &appconfig.GetExtensionInput{ ExtensionIdentifier: aws.String(rs.Primary.ID), diff --git a/internal/service/appconfig/extenstion_association_test.go b/internal/service/appconfig/extenstion_association_test.go index 4c950454b85..00920b3eec5 100644 --- a/internal/service/appconfig/extenstion_association_test.go +++ b/internal/service/appconfig/extenstion_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -130,7 +133,7 @@ func TestAccAppConfigExtensionAssociation_disappears(t *testing.T) { func testAccCheckExtensionAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_environment" { @@ -171,7 +174,7 @@ func testAccCheckExtensionAssociationExists(ctx context.Context, resourceName st return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) in := &appconfig.GetExtensionAssociationInput{ ExtensionAssociationId: aws.String(rs.Primary.ID), diff --git a/internal/service/appconfig/find.go b/internal/service/appconfig/find.go index e62965b29ea..baf17375f81 100644 --- a/internal/service/appconfig/find.go +++ b/internal/service/appconfig/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( diff --git a/internal/service/appconfig/generate.go b/internal/service/appconfig/generate.go index cca6d87f6b0..ced958f80c8 100644 --- a/internal/service/appconfig/generate.go +++ b/internal/service/appconfig/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package appconfig diff --git a/internal/service/appconfig/hosted_configuration_version.go b/internal/service/appconfig/hosted_configuration_version.go index 18b0c8ce178..c22058215d9 100644 --- a/internal/service/appconfig/hosted_configuration_version.go +++ b/internal/service/appconfig/hosted_configuration_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig import ( @@ -74,7 +77,7 @@ func ResourceHostedConfigurationVersion() *schema.Resource { func resourceHostedConfigurationVersionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appID := d.Get("application_id").(string) profileID := d.Get("configuration_profile_id").(string) @@ -103,7 +106,7 @@ func resourceHostedConfigurationVersionCreate(ctx context.Context, d *schema.Res func resourceHostedConfigurationVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appID, confProfID, versionNumber, err := HostedConfigurationVersionParseID(d.Id()) @@ -155,7 +158,7 @@ func resourceHostedConfigurationVersionRead(ctx context.Context, d *schema.Resou func resourceHostedConfigurationVersionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppConfigConn() + conn := meta.(*conns.AWSClient).AppConfigConn(ctx) appID, confProfID, versionNumber, err := HostedConfigurationVersionParseID(d.Id()) @@ -191,7 +194,7 @@ func HostedConfigurationVersionParseID(id string) (string, string, int, error) { version, err := strconv.Atoi(parts[2]) if err != nil { - return "", "", 0, fmt.Errorf("error parsing Hosted Configuration Version version_number: 
%w", err) + return "", "", 0, fmt.Errorf("parsing Hosted Configuration Version version_number: %w", err) } return parts[0], parts[1], version, nil diff --git a/internal/service/appconfig/hosted_configuration_version_test.go b/internal/service/appconfig/hosted_configuration_version_test.go index 4295dc00017..8eaef10128a 100644 --- a/internal/service/appconfig/hosted_configuration_version_test.go +++ b/internal/service/appconfig/hosted_configuration_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appconfig_test import ( @@ -75,7 +78,7 @@ func TestAccAppConfigHostedConfigurationVersion_disappears(t *testing.T) { func testAccCheckHostedConfigurationVersionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appconfig_hosted_configuration_version" { @@ -130,7 +133,7 @@ func testAccCheckHostedConfigurationVersionExists(ctx context.Context, resourceN return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppConfigConn(ctx) output, err := conn.GetHostedConfigurationVersionWithContext(ctx, &appconfig.GetHostedConfigurationVersionInput{ ApplicationId: aws.String(appID), diff --git a/internal/service/appconfig/service_package.go b/internal/service/appconfig/service_package.go new file mode 100644 index 00000000000..528eec7bd42 --- /dev/null +++ b/internal/service/appconfig/service_package.go @@ -0,0 +1,29 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package appconfig + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + appconfig_sdkv1 "github.com/aws/aws-sdk-go/service/appconfig" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *appconfig_sdkv1.AppConfig) (*appconfig_sdkv1.AppConfig, error) { + // StartDeployment operations can return a ConflictException + // if ongoing deployments are in-progress, thus we handle them + // here for the service client. + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if r.Operation.Name == "StartDeployment" { + if tfawserr.ErrCodeEquals(r.Error, appconfig_sdkv1.ErrCodeConflictException) { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} diff --git a/internal/service/appconfig/service_package_gen.go b/internal/service/appconfig/service_package_gen.go index 18479cf8d3e..89f4cec7046 100644 --- a/internal/service/appconfig/service_package_gen.go +++ b/internal/service/appconfig/service_package_gen.go @@ -5,6 +5,12 @@ package appconfig import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + appconfig_sdkv2 "github.com/aws/aws-sdk-go-v2/service/appconfig" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + appconfig_sdkv1 "github.com/aws/aws-sdk-go/service/appconfig" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -16,7 +22,14 @@ func (p *servicePackage) FrameworkDataSources(ctx context.Context) []*types.Serv } func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.ServicePackageFrameworkResource { - return 
[]*types.ServicePackageFrameworkResource{} + return []*types.ServicePackageFrameworkResource{ + { + Factory: newResourceEnvironment, + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + } } func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource { @@ -74,14 +87,6 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka IdentifierAttribute: "arn", }, }, - { - Factory: ResourceEnvironment, - TypeName: "aws_appconfig_environment", - Name: "Environment", - Tags: &types.ServicePackageResourceTags{ - IdentifierAttribute: "arn", - }, - }, { Factory: ResourceExtension, TypeName: "aws_appconfig_extension", @@ -105,4 +110,24 @@ func (p *servicePackage) ServicePackageName() string { return names.AppConfig } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*appconfig_sdkv1.AppConfig, error) { + sess := config["session"].(*session_sdkv1.Session) + + return appconfig_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*appconfig_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return appconfig_sdkv2.NewFromConfig(cfg, func(o *appconfig_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = appconfig_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/appconfig/sweep.go b/internal/service/appconfig/sweep.go index eae09cd0f68..9cdab69000d 100644 --- a/internal/service/appconfig/sweep.go +++ b/internal/service/appconfig/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,8 +15,8 @@ import ( "github.com/aws/aws-sdk-go/service/appconfig" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/framework" ) func init() { @@ -52,13 +55,13 @@ func init() { func sweepApplications(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppConfigConn() + conn := client.AppConfigConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -91,7 +94,7 @@ func sweepApplications(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing AppConfig Applications: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = 
multierror.Append(errs, fmt.Errorf("error sweeping AppConfig Applications for %s: %w", region, err)) } @@ -105,13 +108,13 @@ func sweepApplications(region string) error { func sweepConfigurationProfiles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppConfigConn() + conn := client.AppConfigConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -168,7 +171,7 @@ func sweepConfigurationProfiles(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing AppConfig Applications: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AppConfig Configuration Profiles for %s: %w", region, err)) } @@ -182,13 +185,13 @@ func sweepConfigurationProfiles(region string) error { func sweepDeploymentStrategies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppConfigConn() + conn := client.AppConfigConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -227,7 +230,7 @@ func sweepDeploymentStrategies(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing AppConfig Deployment Strategies: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AppConfig Deployment Strategies for %s: %w", region, err)) } 
@@ -241,13 +244,13 @@ func sweepDeploymentStrategies(region string) error { func sweepEnvironments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppConfigConn() + conn := client.AppConfigConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -279,14 +282,10 @@ func sweepEnvironments(region string) error { continue } - id := fmt.Sprintf("%s:%s", aws.StringValue(item.Id), appId) - - log.Printf("[INFO] Deleting AppConfig Environment (%s)", id) - r := ResourceEnvironment() - d := r.Data(nil) - d.SetId(id) - - sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceEnvironment, client, + framework.NewAttribute("application_id", aws.StringValue(item.ApplicationId)), + framework.NewAttribute("environment_id", aws.StringValue(item.Id)), + )) } return !lastPage @@ -304,7 +303,7 @@ func sweepEnvironments(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing AppConfig Applications: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AppConfig Environments for %s: %w", region, err)) } @@ -318,13 +317,13 @@ func sweepEnvironments(region string) error { func sweepHostedConfigurationVersions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppConfigConn() + conn := client.AppConfigConn(ctx) sweepResources := 
make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -406,7 +405,7 @@ func sweepHostedConfigurationVersions(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing AppConfig Applications: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AppConfig Hosted Configuration Versions for %s: %w", region, err)) } diff --git a/internal/service/appconfig/tags_gen.go b/internal/service/appconfig/tags_gen.go index 28ee324f524..1122cea6a66 100644 --- a/internal/service/appconfig/tags_gen.go +++ b/internal/service/appconfig/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists appconfig service tags. +// listTags lists appconfig service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn appconfigiface.AppConfigAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn appconfigiface.AppConfigAPI, identifier string) (tftags.KeyValueTags, error) { input := &appconfig.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn appconfigiface.AppConfigAPI, identifier // ListTags lists appconfig service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AppConfigConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AppConfigConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from appconfig service tags. +// KeyValueTags creates tftags.KeyValueTags from appconfig service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns appconfig service tags from Context. +// getTagsIn returns appconfig service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets appconfig service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets appconfig service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates appconfig service tags. +// updateTags updates appconfig service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn appconfigiface.AppConfigAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn appconfigiface.AppConfigAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn appconfigiface.AppConfigAPI, identifie // UpdateTags updates appconfig service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppConfigConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppConfigConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/appflow/connector_profile.go b/internal/service/appflow/connector_profile.go index 9c0d73c7284..c618a983346 100644 --- a/internal/service/appflow/connector_profile.go +++ b/internal/service/appflow/connector_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appflow import ( @@ -1409,7 +1412,7 @@ func ResourceConnectorProfile() *schema.Resource { } func resourceConnectorProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) name := d.Get("name").(string) createConnectorProfileInput := appflow.CreateConnectorProfileInput{ @@ -1443,7 +1446,7 @@ func resourceConnectorProfileCreate(ctx context.Context, d *schema.ResourceData, } func resourceConnectorProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) connectorProfile, err := FindConnectorProfileByARN(ctx, conn, d.Id()) @@ -1477,7 +1480,7 @@ func resourceConnectorProfileRead(ctx context.Context, d *schema.ResourceData, m } func resourceConnectorProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) name := d.Get("name").(string) updateConnectorProfileInput := appflow.UpdateConnectorProfileInput{ @@ -1496,7 +1499,7 @@ func resourceConnectorProfileUpdate(ctx context.Context, d *schema.ResourceData, } func resourceConnectorProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) out, _ := FindConnectorProfileByARN(ctx, conn, d.Id()) diff --git a/internal/service/appflow/connector_profile_test.go b/internal/service/appflow/connector_profile_test.go index 69198ec3999..cb1479bd337 100644 --- a/internal/service/appflow/connector_profile_test.go +++ b/internal/service/appflow/connector_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appflow_test import ( @@ -122,7 +125,7 @@ func TestAccAppFlowConnectorProfile_disappears(t *testing.T) { func testAccCheckConnectorProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appflow_connector_profile" { @@ -148,7 +151,7 @@ func testAccCheckConnectorProfileDestroy(ctx context.Context) resource.TestCheck func testAccCheckConnectorProfileExists(ctx context.Context, n string, res *appflow.DescribeConnectorProfilesOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/appflow/find.go b/internal/service/appflow/find.go index ad86bb21248..bc54b6a2e8c 100644 --- a/internal/service/appflow/find.go +++ b/internal/service/appflow/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appflow import ( diff --git a/internal/service/appflow/flow.go b/internal/service/appflow/flow.go index dcef8ffd408..2e12666ca0b 100644 --- a/internal/service/appflow/flow.go +++ b/internal/service/appflow/flow.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appflow import ( @@ -1216,13 +1219,13 @@ func ResourceFlow() *schema.Resource { } func resourceFlowCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) in := &appflow.CreateFlowInput{ FlowName: aws.String(d.Get(names.AttrName).(string)), DestinationFlowConfigList: expandDestinationFlowConfigs(d.Get("destination_flow_config").(*schema.Set).List()), SourceFlowConfig: expandSourceFlowConfig(d.Get("source_flow_config").([]interface{})[0].(map[string]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Tasks: expandTasks(d.Get("task").(*schema.Set).List()), TriggerConfig: expandTriggerConfig(d.Get("trigger_config").([]interface{})[0].(map[string]interface{})), } @@ -1251,7 +1254,7 @@ func resourceFlowCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceFlowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) out, err := FindFlowByARN(ctx, conn, d.Id()) @@ -1280,26 +1283,26 @@ func resourceFlowRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set(names.AttrDescription, out2.Description) if err := d.Set("destination_flow_config", flattenDestinationFlowConfigs(out2.DestinationFlowConfigList)); err != nil { - return diag.Errorf("error setting destination_flow_config: %s", err) + return diag.Errorf("setting destination_flow_config: %s", err) } d.Set("kms_arn", out2.KmsArn) if out2.SourceFlowConfig != nil { if err := d.Set("source_flow_config", []interface{}{flattenSourceFlowConfig(out2.SourceFlowConfig)}); err != nil { - return diag.Errorf("error setting source_flow_config: %s", err) + return diag.Errorf("setting source_flow_config: %s", err) } } else { d.Set("source_flow_config", nil) } if err := d.Set("task", 
flattenTasks(out2.Tasks)); err != nil { - return diag.Errorf("error setting task: %s", err) + return diag.Errorf("setting task: %s", err) } if out2.TriggerConfig != nil { if err := d.Set("trigger_config", []interface{}{flattenTriggerConfig(out2.TriggerConfig)}); err != nil { - return diag.Errorf("error setting trigger_config: %s", err) + return diag.Errorf("setting trigger_config: %s", err) } } else { d.Set("trigger_config", nil) @@ -1309,7 +1312,7 @@ func resourceFlowRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceFlowUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &appflow.UpdateFlowInput{ @@ -1336,7 +1339,7 @@ func resourceFlowUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceFlowDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppFlowConn() + conn := meta.(*conns.AWSClient).AppFlowConn(ctx) out, _ := FindFlowByARN(ctx, conn, d.Id()) diff --git a/internal/service/appflow/flow_test.go b/internal/service/appflow/flow_test.go index 54efac0bb89..1ef2f00a4c3 100644 --- a/internal/service/appflow/flow_test.go +++ b/internal/service/appflow/flow_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appflow_test import ( @@ -690,7 +693,7 @@ func testAccCheckFlowExists(ctx context.Context, resourceName string, flow *appf return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn(ctx) resp, err := tfappflow.FindFlowByARN(ctx, conn, rs.Primary.ID) if err != nil { @@ -709,7 +712,7 @@ func testAccCheckFlowExists(ctx context.Context, resourceName string, flow *appf func testAccCheckFlowDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppFlowConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appflow_flow" { diff --git a/internal/service/appflow/generate.go b/internal/service/appflow/generate.go index 5b5206f9b0d..487625d8e06 100644 --- a/internal/service/appflow/generate.go +++ b/internal/service/appflow/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -ListTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package appflow diff --git a/internal/service/appflow/service_package_gen.go b/internal/service/appflow/service_package_gen.go index e076b3d97ed..ee3fd7b58fa 100644 --- a/internal/service/appflow/service_package_gen.go +++ b/internal/service/appflow/service_package_gen.go @@ -5,6 +5,10 @@ package appflow import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + appflow_sdkv1 "github.com/aws/aws-sdk-go/service/appflow" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +48,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AppFlow } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*appflow_sdkv1.Appflow, error) { + sess := config["session"].(*session_sdkv1.Session) + + return appflow_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/appflow/status.go b/internal/service/appflow/status.go index 1151f015dcf..0b12ee5a32f 100644 --- a/internal/service/appflow/status.go +++ b/internal/service/appflow/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appflow import ( diff --git a/internal/service/appflow/tags_gen.go b/internal/service/appflow/tags_gen.go index 076caec1d53..7e21a906633 100644 --- a/internal/service/appflow/tags_gen.go +++ b/internal/service/appflow/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists appflow service tags. +// listTags lists appflow service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn appflowiface.AppflowAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn appflowiface.AppflowAPI, identifier string) (tftags.KeyValueTags, error) { input := &appflow.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn appflowiface.AppflowAPI, identifier stri // ListTags lists appflow service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AppFlowConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AppFlowConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from appflow service tags. +// KeyValueTags creates tftags.KeyValueTags from appflow service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns appflow service tags from Context. +// getTagsIn returns appflow service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets appflow service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets appflow service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates appflow service tags. +// updateTags updates appflow service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn appflowiface.AppflowAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn appflowiface.AppflowAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn appflowiface.AppflowAPI, identifier st // UpdateTags updates appflow service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppFlowConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppFlowConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/appflow/wait.go b/internal/service/appflow/wait.go index 677369d7280..ba7fbd8eef0 100644 --- a/internal/service/appflow/wait.go +++ b/internal/service/appflow/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appflow import ( diff --git a/internal/service/appintegrations/data_integration.go b/internal/service/appintegrations/data_integration.go index 3f20cc52073..8222d2dcc6a 100644 --- a/internal/service/appintegrations/data_integration.go +++ b/internal/service/appintegrations/data_integration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appintegrations import ( @@ -104,7 +107,7 @@ func ResourceDataIntegration() *schema.Resource { } func resourceDataIntegrationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) name := d.Get("name").(string) input := &appintegrationsservice.CreateDataIntegrationInput{ @@ -113,7 +116,7 @@ func resourceDataIntegrationCreate(ctx context.Context, d *schema.ResourceData, Name: aws.String(name), ScheduleConfig: expandScheduleConfig(d.Get("schedule_config").([]interface{})), SourceURI: aws.String(d.Get("source_uri").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -132,7 +135,7 @@ func resourceDataIntegrationCreate(ctx context.Context, d *schema.ResourceData, } func resourceDataIntegrationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) output, err := conn.GetDataIntegrationWithContext(ctx, &appintegrationsservice.GetDataIntegrationInput{ Identifier: aws.String(d.Id()), @@ -157,13 +160,13 @@ func resourceDataIntegrationRead(ctx context.Context, d *schema.ResourceData, me } d.Set("source_uri", output.SourceURI) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return nil } func resourceDataIntegrationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) if d.HasChanges("description", "name") { _, err := conn.UpdateDataIntegrationWithContext(ctx, &appintegrationsservice.UpdateDataIntegrationInput{ @@ -181,7 +184,7 @@ func resourceDataIntegrationUpdate(ctx context.Context, d *schema.ResourceData, } func 
resourceDataIntegrationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) _, err := conn.DeleteDataIntegrationWithContext(ctx, &appintegrationsservice.DeleteDataIntegrationInput{ DataIntegrationIdentifier: aws.String(d.Id()), diff --git a/internal/service/appintegrations/data_integration_test.go b/internal/service/appintegrations/data_integration_test.go index 96a03909cda..880c7a76dca 100644 --- a/internal/service/appintegrations/data_integration_test.go +++ b/internal/service/appintegrations/data_integration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appintegrations_test import ( @@ -223,7 +226,7 @@ func TestAccAppIntegrationsDataIntegration_updateTags(t *testing.T) { func testAccCheckDataIntegrationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appintegrations_data_integration" { @@ -255,7 +258,7 @@ func testAccCheckDataIntegrationExists(ctx context.Context, name string, dataInt return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn(ctx) input := &appintegrationsservice.GetDataIntegrationInput{ Identifier: aws.String(rs.Primary.ID), } diff --git a/internal/service/appintegrations/event_integration.go b/internal/service/appintegrations/event_integration.go index 9e6f47a58e5..63f01364b8c 100644 --- a/internal/service/appintegrations/event_integration.go +++ b/internal/service/appintegrations/event_integration.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package appintegrations import ( "context" - "fmt" "log" "regexp" @@ -76,7 +78,7 @@ func ResourceEventIntegration() *schema.Resource { } func resourceEventIntegrationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) name := d.Get("name").(string) input := &appintegrationsservice.CreateEventIntegrationInput{ @@ -84,7 +86,7 @@ func resourceEventIntegrationCreate(ctx context.Context, d *schema.ResourceData, EventBridgeBus: aws.String(d.Get("eventbridge_bus").(string)), EventFilter: expandEventFilter(d.Get("event_filter").([]interface{})), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -95,11 +97,11 @@ func resourceEventIntegrationCreate(ctx context.Context, d *schema.ResourceData, output, err := conn.CreateEventIntegrationWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating AppIntegrations Event Integration (%s): %w", name, err)) + return diag.Errorf("creating AppIntegrations Event Integration (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating AppIntegrations Event Integration (%s): empty output", name)) + return diag.Errorf("creating AppIntegrations Event Integration (%s): empty output", name) } // Name is unique @@ -109,7 +111,7 @@ func resourceEventIntegrationCreate(ctx context.Context, d *schema.ResourceData, } func resourceEventIntegrationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) name := d.Id() @@ -124,11 +126,11 @@ func resourceEventIntegrationRead(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.FromErr(fmt.Errorf("error getting 
AppIntegrations Event Integration (%s): %w", d.Id(), err)) + return diag.Errorf("getting AppIntegrations Event Integration (%s): %s", d.Id(), err) } if resp == nil { - return diag.FromErr(fmt.Errorf("error getting AppIntegrations Event Integration (%s): empty response", d.Id())) + return diag.Errorf("getting AppIntegrations Event Integration (%s): empty response", d.Id()) } d.Set("arn", resp.EventIntegrationArn) @@ -137,16 +139,16 @@ func resourceEventIntegrationRead(ctx context.Context, d *schema.ResourceData, m d.Set("name", resp.Name) if err := d.Set("event_filter", flattenEventFilter(resp.EventFilter)); err != nil { - return diag.FromErr(fmt.Errorf("error setting event_filter: %w", err)) + return diag.Errorf("setting event_filter: %s", err) } - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) return nil } func resourceEventIntegrationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) name := d.Id() @@ -157,7 +159,7 @@ func resourceEventIntegrationUpdate(ctx context.Context, d *schema.ResourceData, }) if err != nil { - return diag.FromErr(fmt.Errorf("updating EventIntegration (%s): %w", d.Id(), err)) + return diag.Errorf("updating EventIntegration (%s): %s", d.Id(), err) } } @@ -165,7 +167,7 @@ func resourceEventIntegrationUpdate(ctx context.Context, d *schema.ResourceData, } func resourceEventIntegrationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) name := d.Id() @@ -174,7 +176,7 @@ func resourceEventIntegrationDelete(ctx context.Context, d *schema.ResourceData, }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting EventIntegration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting EventIntegration (%s): %s", d.Id(), err) } return 
nil diff --git a/internal/service/appintegrations/event_integration_data_source.go b/internal/service/appintegrations/event_integration_data_source.go index bdd0175a0a1..0343bb3ad15 100644 --- a/internal/service/appintegrations/event_integration_data_source.go +++ b/internal/service/appintegrations/event_integration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appintegrations import ( @@ -51,7 +54,7 @@ func DataSourceEventIntegration() *schema.Resource { } func dataSourceEventIntegrationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppIntegrationsConn() + conn := meta.(*conns.AWSClient).AppIntegrationsConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) diff --git a/internal/service/appintegrations/event_integration_data_source_test.go b/internal/service/appintegrations/event_integration_data_source_test.go index d1d653c64cb..0581e21d713 100644 --- a/internal/service/appintegrations/event_integration_data_source_test.go +++ b/internal/service/appintegrations/event_integration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appintegrations_test import ( diff --git a/internal/service/appintegrations/event_integration_test.go b/internal/service/appintegrations/event_integration_test.go index 92f745fb19a..4031900fe58 100644 --- a/internal/service/appintegrations/event_integration_test.go +++ b/internal/service/appintegrations/event_integration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appintegrations_test import ( @@ -194,7 +197,7 @@ func TestAccAppIntegrationsEventIntegration_disappears(t *testing.T) { func testAccCheckEventIntegrationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appintegrations_event_integration" { continue @@ -225,7 +228,7 @@ func testAccCheckEventIntegrationExists(ctx context.Context, name string, eventI return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppIntegrationsConn(ctx) input := &appintegrationsservice.GetEventIntegrationInput{ Name: aws.String(rs.Primary.ID), } diff --git a/internal/service/appintegrations/generate.go b/internal/service/appintegrations/generate.go index 9950430a4bb..6fd29e6c380 100644 --- a/internal/service/appintegrations/generate.go +++ b/internal/service/appintegrations/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package appintegrations diff --git a/internal/service/appintegrations/service_package_gen.go b/internal/service/appintegrations/service_package_gen.go index a03d6cd5551..0ad4dcc869a 100644 --- a/internal/service/appintegrations/service_package_gen.go +++ b/internal/service/appintegrations/service_package_gen.go @@ -5,6 +5,10 @@ package appintegrations import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + appintegrationsservice_sdkv1 "github.com/aws/aws-sdk-go/service/appintegrationsservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -54,4 +58,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AppIntegrations } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*appintegrationsservice_sdkv1.AppIntegrationsService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return appintegrationsservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/appintegrations/tags_gen.go b/internal/service/appintegrations/tags_gen.go index 8a23e264a90..27e3b12daa7 100644 --- a/internal/service/appintegrations/tags_gen.go +++ b/internal/service/appintegrations/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from appintegrations service tags. +// KeyValueTags creates tftags.KeyValueTags from appintegrations service tags. 
func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns appintegrations service tags from Context. +// getTagsIn returns appintegrations service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets appintegrations service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets appintegrations service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates appintegrations service tags. +// updateTags updates appintegrations service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn appintegrationsserviceiface.AppIntegrationsServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn appintegrationsserviceiface.AppIntegrationsServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn appintegrationsserviceiface.AppIntegra // UpdateTags updates appintegrations service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppIntegrationsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppIntegrationsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/applicationinsights/application.go b/internal/service/applicationinsights/application.go index 40c44e8134e..8cfb90485da 100644 --- a/internal/service/applicationinsights/application.go +++ b/internal/service/applicationinsights/application.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package applicationinsights import ( @@ -78,7 +81,7 @@ func ResourceApplication() *schema.Resource { func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ApplicationInsightsConn() + conn := meta.(*conns.AWSClient).ApplicationInsightsConn(ctx) input := &applicationinsights.CreateApplicationInput{ AutoConfigEnabled: aws.Bool(d.Get("auto_config_enabled").(bool)), @@ -86,7 +89,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta CWEMonitorEnabled: aws.Bool(d.Get("cwe_monitor_enabled").(bool)), OpsCenterEnabled: aws.Bool(d.Get("ops_center_enabled").(bool)), ResourceGroupName: aws.String(d.Get("resource_group_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("grouping_type"); ok { @@ -99,7 +102,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta out, err := conn.CreateApplicationWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating ApplicationInsights Application: %s", err) + return sdkdiag.AppendErrorf(diags, "creating ApplicationInsights Application: %s", err) } d.SetId(aws.StringValue(out.ApplicationInfo.ResourceGroupName)) @@ -113,7 
+116,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ApplicationInsightsConn() + conn := meta.(*conns.AWSClient).ApplicationInsightsConn(ctx) application, err := FindApplicationByName(ctx, conn, d.Id()) @@ -147,7 +150,7 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ApplicationInsightsConn() + conn := meta.(*conns.AWSClient).ApplicationInsightsConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &applicationinsights.UpdateApplicationInput{ @@ -178,7 +181,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta log.Printf("[DEBUG] Updating ApplicationInsights Application: %s", d.Id()) _, err := conn.UpdateApplicationWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error Updating ApplicationInsights Application: %s", err) + return sdkdiag.AppendErrorf(diags, "updating ApplicationInsights Application (%s): %s", d.Id(), err) } } @@ -187,7 +190,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ApplicationInsightsConn() + conn := meta.(*conns.AWSClient).ApplicationInsightsConn(ctx) input := &applicationinsights.DeleteApplicationInput{ ResourceGroupName: aws.String(d.Id()), @@ -199,7 +202,7 @@ func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta if tfawserr.ErrCodeEquals(err, applicationinsights.ErrCodeResourceNotFoundException) { return diags } - return 
sdkdiag.AppendErrorf(diags, "Error deleting ApplicationInsights Application: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting ApplicationInsights Application: %s", err) } if _, err := waitApplicationTerminated(ctx, conn, d.Id()); err != nil { diff --git a/internal/service/applicationinsights/application_test.go b/internal/service/applicationinsights/application_test.go index 6fcac3aead8..e98b019611d 100644 --- a/internal/service/applicationinsights/application_test.go +++ b/internal/service/applicationinsights/application_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package applicationinsights_test import ( @@ -167,7 +170,7 @@ func TestAccApplicationInsightsApplication_disappears(t *testing.T) { func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ApplicationInsightsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ApplicationInsightsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_applicationinsights_application" { @@ -203,7 +206,7 @@ func testAccCheckApplicationExists(ctx context.Context, n string, app *applicati return fmt.Errorf("No applicationinsights Application ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ApplicationInsightsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ApplicationInsightsConn(ctx) resp, err := tfapplicationinsights.FindApplicationByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/applicationinsights/find.go b/internal/service/applicationinsights/find.go index 5e3f8e751fc..e84bb449386 100644 --- a/internal/service/applicationinsights/find.go +++ b/internal/service/applicationinsights/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package applicationinsights import ( diff --git a/internal/service/applicationinsights/generate.go b/internal/service/applicationinsights/generate.go index 725def6c061..d6606cc11b0 100644 --- a/internal/service/applicationinsights/generate.go +++ b/internal/service/applicationinsights/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package applicationinsights diff --git a/internal/service/applicationinsights/service_package_gen.go b/internal/service/applicationinsights/service_package_gen.go index 60cc1681deb..d00b8577f36 100644 --- a/internal/service/applicationinsights/service_package_gen.go +++ b/internal/service/applicationinsights/service_package_gen.go @@ -5,6 +5,10 @@ package applicationinsights import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + applicationinsights_sdkv1 "github.com/aws/aws-sdk-go/service/applicationinsights" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ApplicationInsights } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*applicationinsights_sdkv1.ApplicationInsights, error) { + sess := config["session"].(*session_sdkv1.Session) + + return applicationinsights_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/applicationinsights/status.go b/internal/service/applicationinsights/status.go index ffd3c9e270a..71de4604040 100644 --- a/internal/service/applicationinsights/status.go +++ b/internal/service/applicationinsights/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package applicationinsights import ( diff --git a/internal/service/applicationinsights/sweep.go b/internal/service/applicationinsights/sweep.go index 64f5320fd38..035ae7e3688 100644 --- a/internal/service/applicationinsights/sweep.go +++ b/internal/service/applicationinsights/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/applicationinsights" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,13 +26,13 @@ func init() { func sweepApplications(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ApplicationInsightsConn() + conn := client.ApplicationInsightsConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -56,7 +58,7 @@ func sweepApplications(region string) error { // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping ApplicationInsights Applications for %s: %w", region, err)) } diff --git a/internal/service/applicationinsights/tags_gen.go b/internal/service/applicationinsights/tags_gen.go index 568dec1cf4f..facf77f754d 100644 --- a/internal/service/applicationinsights/tags_gen.go +++ b/internal/service/applicationinsights/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists applicationinsights service tags. +// listTags lists applicationinsights service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn applicationinsightsiface.ApplicationInsightsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn applicationinsightsiface.ApplicationInsightsAPI, identifier string) (tftags.KeyValueTags, error) { input := &applicationinsights.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn applicationinsightsiface.ApplicationInsi // ListTags lists applicationinsights service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ApplicationInsightsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ApplicationInsightsConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*applicationinsights.Tag) tftags.K return tftags.New(ctx, m) } -// GetTagsIn returns applicationinsights service tags from Context. +// getTagsIn returns applicationinsights service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*applicationinsights.Tag { +func getTagsIn(ctx context.Context) []*applicationinsights.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*applicationinsights.Tag { return nil } -// SetTagsOut sets applicationinsights service tags in Context. -func SetTagsOut(ctx context.Context, tags []*applicationinsights.Tag) { +// setTagsOut sets applicationinsights service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*applicationinsights.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates applicationinsights service tags. +// updateTags updates applicationinsights service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn applicationinsightsiface.ApplicationInsightsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn applicationinsightsiface.ApplicationInsightsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn applicationinsightsiface.ApplicationIn // UpdateTags updates applicationinsights service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ApplicationInsightsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ApplicationInsightsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/applicationinsights/wait.go b/internal/service/applicationinsights/wait.go index bfe4bda875a..10a90e17c41 100644 --- a/internal/service/applicationinsights/wait.go +++ b/internal/service/applicationinsights/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package applicationinsights import ( diff --git a/internal/service/appmesh/appmesh_test.go b/internal/service/appmesh/appmesh_test.go index d5a461b50c5..4f6bf3d3407 100644 --- a/internal/service/appmesh/appmesh_test.go +++ b/internal/service/appmesh/appmesh_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( diff --git a/internal/service/appmesh/flex.go b/internal/service/appmesh/flex.go index 6c6c4560082..1745247d538 100644 --- a/internal/service/appmesh/flex.go +++ b/internal/service/appmesh/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( diff --git a/internal/service/appmesh/gateway_route.go b/internal/service/appmesh/gateway_route.go index 42e7f6add41..ebe69ae139f 100644 --- a/internal/service/appmesh/gateway_route.go +++ b/internal/service/appmesh/gateway_route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( @@ -480,14 +483,14 @@ func resourceGatewayRouteSpecSchema() *schema.Schema { func resourceGatewayRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) name := d.Get("name").(string) input := &appmesh.CreateGatewayRouteInput{ GatewayRouteName: aws.String(name), MeshName: aws.String(d.Get("mesh_name").(string)), Spec: expandGatewayRouteSpec(d.Get("spec").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VirtualGatewayName: aws.String(d.Get("virtual_gateway_name").(string)), } @@ -508,7 +511,7 @@ func resourceGatewayRouteCreate(ctx context.Context, d *schema.ResourceData, met func resourceGatewayRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindGatewayRouteByFourPartKey(ctx, conn, d.Get("mesh_name").(string), d.Get("mesh_owner").(string), d.Get("virtual_gateway_name").(string), 
d.Get("name").(string)) @@ -544,7 +547,7 @@ func resourceGatewayRouteRead(ctx context.Context, d *schema.ResourceData, meta func resourceGatewayRouteUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) if d.HasChange("spec") { input := &appmesh.UpdateGatewayRouteInput{ @@ -570,7 +573,7 @@ func resourceGatewayRouteUpdate(ctx context.Context, d *schema.ResourceData, met func resourceGatewayRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) log.Printf("[DEBUG] Deleting App Mesh Gateway Route: %s", d.Id()) input := &appmesh.DeleteGatewayRouteInput{ @@ -602,7 +605,7 @@ func resourceGatewayRouteImport(ctx context.Context, d *schema.ResourceData, met return []*schema.ResourceData{}, fmt.Errorf("wrong format of import ID (%s), use: 'mesh-name/virtual-gateway-name/gateway-route-name'", d.Id()) } - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) meshName := parts[0] virtualGatewayName := parts[1] name := parts[2] diff --git a/internal/service/appmesh/gateway_route_data_source.go b/internal/service/appmesh/gateway_route_data_source.go index f1490031b61..77b66b00fc0 100644 --- a/internal/service/appmesh/gateway_route_data_source.go +++ b/internal/service/appmesh/gateway_route_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( @@ -60,7 +63,7 @@ func DataSourceGatewayRoute() *schema.Resource { func dataSourceGatewayRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig gatewayRouteName := d.Get("name").(string) @@ -91,7 +94,7 @@ func dataSourceGatewayRouteRead(ctx context.Context, d *schema.ResourceData, met var tags tftags.KeyValueTags if meshOwner == meta.(*conns.AWSClient).AccountID { - tags, err = ListTags(ctx, conn, arn) + tags, err = listTags(ctx, conn, arn) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Gateway Route (%s): %s", arn, err) diff --git a/internal/service/appmesh/gateway_route_data_source_test.go b/internal/service/appmesh/gateway_route_data_source_test.go index 4b2da858f38..08a07516793 100644 --- a/internal/service/appmesh/gateway_route_data_source_test.go +++ b/internal/service/appmesh/gateway_route_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( diff --git a/internal/service/appmesh/gateway_route_test.go b/internal/service/appmesh/gateway_route_test.go index 567841db610..0db3878936d 100644 --- a/internal/service/appmesh/gateway_route_test.go +++ b/internal/service/appmesh/gateway_route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( @@ -1233,7 +1236,7 @@ func testAccGatewayRouteImportStateIdFunc(resourceName string) resource.ImportSt func testAccCheckGatewayRouteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appmesh_gateway_route" { @@ -1259,7 +1262,7 @@ func testAccCheckGatewayRouteDestroy(ctx context.Context) resource.TestCheckFunc func testAccCheckGatewayRouteExists(ctx context.Context, n string, v *appmesh.GatewayRouteData) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/appmesh/generate.go b/internal/service/appmesh/generate.go index 0f8eff46078..4793092a6aa 100644 --- a/internal/service/appmesh/generate.go +++ b/internal/service/appmesh/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -TagType=TagRef -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package appmesh diff --git a/internal/service/appmesh/mesh.go b/internal/service/appmesh/mesh.go index a2d3f4de548..043d4cfedb2 100644 --- a/internal/service/appmesh/mesh.go +++ b/internal/service/appmesh/mesh.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( @@ -101,13 +104,13 @@ func resourceMeshSpecSchema() *schema.Schema { func resourceMeshCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) name := d.Get("name").(string) input := &appmesh.CreateMeshInput{ MeshName: aws.String(name), Spec: expandMeshSpec(d.Get("spec").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.CreateMeshWithContext(ctx, input) @@ -123,7 +126,7 @@ func resourceMeshCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceMeshRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindMeshByTwoPartKey(ctx, conn, d.Id(), d.Get("mesh_owner").(string)) @@ -156,7 +159,7 @@ func resourceMeshRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceMeshUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) if d.HasChange("spec") { input := &appmesh.UpdateMeshInput{ @@ -176,7 +179,7 @@ func resourceMeshUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceMeshDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) log.Printf("[DEBUG] Deleting App Mesh Service Mesh: %s", d.Id()) _, err := conn.DeleteMeshWithContext(ctx, &appmesh.DeleteMeshInput{ diff 
--git a/internal/service/appmesh/mesh_data_source.go b/internal/service/appmesh/mesh_data_source.go index dcc637259e7..ccd8bb95f23 100644 --- a/internal/service/appmesh/mesh_data_source.go +++ b/internal/service/appmesh/mesh_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( @@ -52,7 +55,7 @@ func DataSourceMesh() *schema.Resource { func dataSourceMeshRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig meshName := d.Get("name").(string) @@ -80,7 +83,7 @@ func dataSourceMeshRead(ctx context.Context, d *schema.ResourceData, meta interf var tags tftags.KeyValueTags if meshOwner == meta.(*conns.AWSClient).AccountID { - tags, err = ListTags(ctx, conn, arn) + tags, err = listTags(ctx, conn, arn) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Service Mesh (%s): %s", arn, err) diff --git a/internal/service/appmesh/mesh_data_source_test.go b/internal/service/appmesh/mesh_data_source_test.go index 954315add1c..8ab21f423f1 100644 --- a/internal/service/appmesh/mesh_data_source_test.go +++ b/internal/service/appmesh/mesh_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( diff --git a/internal/service/appmesh/mesh_test.go b/internal/service/appmesh/mesh_test.go index afffabb80c6..65626b19cc2 100644 --- a/internal/service/appmesh/mesh_test.go +++ b/internal/service/appmesh/mesh_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( @@ -164,7 +167,7 @@ func testAccMesh_tags(t *testing.T) { func testAccCheckMeshDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appmesh_mesh" { @@ -190,7 +193,7 @@ func testAccCheckMeshDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckMeshExists(ctx context.Context, n string, v *appmesh.MeshData) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/appmesh/route.go b/internal/service/appmesh/route.go index 15492588b84..d0d56046a60 100644 --- a/internal/service/appmesh/route.go +++ b/internal/service/appmesh/route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( @@ -737,14 +740,14 @@ func resourceRouteSpecSchema() *schema.Schema { func resourceRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) name := d.Get("name").(string) input := &appmesh.CreateRouteInput{ MeshName: aws.String(d.Get("mesh_name").(string)), RouteName: aws.String(name), Spec: expandRouteSpec(d.Get("spec").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VirtualRouterName: aws.String(d.Get("virtual_router_name").(string)), } @@ -765,7 +768,7 @@ func resourceRouteCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindRouteByFourPartKey(ctx, conn, d.Get("mesh_name").(string), d.Get("mesh_owner").(string), d.Get("virtual_router_name").(string), d.Get("name").(string)) @@ -800,7 +803,7 @@ func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceRouteUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) if d.HasChange("spec") { input := &appmesh.UpdateRouteInput{ @@ -826,7 +829,7 @@ func resourceRouteUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := 
meta.(*conns.AWSClient).AppMeshConn(ctx) log.Printf("[DEBUG] Deleting App Mesh Route: %s", d.Id()) input := &appmesh.DeleteRouteInput{ @@ -862,7 +865,7 @@ func resourceRouteImport(ctx context.Context, d *schema.ResourceData, meta inter virtualRouterName := parts[1] name := parts[2] - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) route, err := FindRouteByFourPartKey(ctx, conn, meshName, "", virtualRouterName, name) diff --git a/internal/service/appmesh/route_data_source.go b/internal/service/appmesh/route_data_source.go index 5bd357fee0d..f8eee5ef00a 100644 --- a/internal/service/appmesh/route_data_source.go +++ b/internal/service/appmesh/route_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh import ( @@ -60,7 +63,7 @@ func DataSourceRoute() *schema.Resource { func dataSourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppMeshConn() + conn := meta.(*conns.AWSClient).AppMeshConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig routeName := d.Get("name").(string) @@ -91,7 +94,7 @@ func dataSourceRouteRead(ctx context.Context, d *schema.ResourceData, meta inter var tags tftags.KeyValueTags if meshOwner == meta.(*conns.AWSClient).AccountID { - tags, err = ListTags(ctx, conn, arn) + tags, err = listTags(ctx, conn, arn) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Route (%s): %s", arn, err) diff --git a/internal/service/appmesh/route_data_source_test.go b/internal/service/appmesh/route_data_source_test.go index c7fe4d7f7e9..c4d8201bdb6 100644 --- a/internal/service/appmesh/route_data_source_test.go +++ b/internal/service/appmesh/route_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( diff --git a/internal/service/appmesh/route_test.go b/internal/service/appmesh/route_test.go index f3efe38f4f9..e4e8f2aa069 100644 --- a/internal/service/appmesh/route_test.go +++ b/internal/service/appmesh/route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appmesh_test import ( @@ -2100,7 +2103,7 @@ func testAccRouteImportStateIdFunc(resourceName string) resource.ImportStateIdFu func testAccCheckRouteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appmesh_route" { @@ -2126,7 +2129,7 @@ func testAccCheckRouteDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckRouteExists(ctx context.Context, n string, v *appmesh.RouteData) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/appmesh/service_package_gen.go b/internal/service/appmesh/service_package_gen.go index bf00c8c79aa..a09059d9d56 100644 --- a/internal/service/appmesh/service_package_gen.go +++ b/internal/service/appmesh/service_package_gen.go @@ -5,6 +5,10 @@ package appmesh import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + appmesh_sdkv1 "github.com/aws/aws-sdk-go/service/appmesh" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -117,4 +121,13 @@ func (p *servicePackage) ServicePackageName() string { 
return names.AppMesh } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*appmesh_sdkv1.AppMesh, error) { + sess := config["session"].(*session_sdkv1.Session) + + return appmesh_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/appmesh/sweep.go b/internal/service/appmesh/sweep.go index d353f49a4d2..5c822c79778 100644 --- a/internal/service/appmesh/sweep.go +++ b/internal/service/appmesh/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/appmesh" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -66,11 +68,11 @@ func init() { func sweepMeshes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AppMeshConn() + conn := client.AppMeshConn(ctx) input := &appmesh.ListMeshesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -99,7 +101,7 @@ func sweepMeshes(region string) error { return fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping App Mesh Service Meshes (%s): %w", region, err) @@ -110,11 +112,11 
@@ func sweepMeshes(region string) error { func sweepVirtualGateways(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppMeshConn() + conn := client.AppMeshConn(ctx) var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -170,7 +172,7 @@ func sweepVirtualGateways(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping App Mesh Virtual Gateways (%s): %w", region, err)) @@ -181,11 +183,11 @@ func sweepVirtualGateways(region string) error { func sweepVirtualNodes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppMeshConn() + conn := client.AppMeshConn(ctx) input := &appmesh.ListMeshesInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -241,7 +243,7 @@ func sweepVirtualNodes(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping App Mesh Virtual Nodes (%s): %w", region, err)) @@ -252,11 +254,11 @@ func sweepVirtualNodes(region string) error { func sweepVirtualRouters(region 
 string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).AppMeshConn()
+	conn := client.AppMeshConn(ctx)
 	input := &appmesh.ListMeshesInput{}
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -312,7 +314,7 @@ func sweepVirtualRouters(region string) error {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err))
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping App Mesh Virtual Routers (%s): %w", region, err))
@@ -323,11 +325,11 @@ func sweepVirtualRouters(region string) error {
 
 func sweepVirtualServices(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).AppMeshConn()
+	conn := client.AppMeshConn(ctx)
 	input := &appmesh.ListMeshesInput{}
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -383,7 +385,7 @@ func sweepVirtualServices(region string) error {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err))
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping App Mesh Virtual Services (%s): %w", region, err))
@@ -394,11 +396,11 @@ func sweepVirtualServices(region string) error {
 
 func sweepGatewayRoutes(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).AppMeshConn()
+	conn := client.AppMeshConn(ctx)
 	input := &appmesh.ListMeshesInput{}
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -479,7 +481,7 @@ func sweepGatewayRoutes(region string) error {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err))
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping App Mesh Gateway Routes (%s): %w", region, err))
@@ -490,11 +492,11 @@ func sweepGatewayRoutes(region string) error {
 
 func sweepRoutes(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).AppMeshConn()
+	conn := client.AppMeshConn(ctx)
 	input := &appmesh.ListMeshesInput{}
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -575,7 +577,7 @@ func sweepRoutes(region string) error {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing App Mesh Service Meshes (%s): %w", region, err))
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping App Mesh Routes (%s): %w", region, err))
diff --git a/internal/service/appmesh/tags_gen.go b/internal/service/appmesh/tags_gen.go
index a628ff78574..dd4fc3d9dd3 100644
--- a/internal/service/appmesh/tags_gen.go
+++ b/internal/service/appmesh/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists appmesh service tags.
+// listTags lists appmesh service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn appmeshiface.AppMeshAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn appmeshiface.AppMeshAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &appmesh.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn appmeshiface.AppMeshAPI, identifier stri
 // ListTags lists appmesh service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).AppMeshConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).AppMeshConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*appmesh.TagRef) tftags.KeyValueTa
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns appmesh service tags from Context.
+// getTagsIn returns appmesh service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*appmesh.TagRef {
+func getTagsIn(ctx context.Context) []*appmesh.TagRef {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*appmesh.TagRef {
 	return nil
 }
 
-// SetTagsOut sets appmesh service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*appmesh.TagRef) {
+// setTagsOut sets appmesh service tags in Context.
+func setTagsOut(ctx context.Context, tags []*appmesh.TagRef) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates appmesh service tags.
+// updateTags updates appmesh service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn appmeshiface.AppMeshAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn appmeshiface.AppMeshAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn appmeshiface.AppMeshAPI, identifier st
 // UpdateTags updates appmesh service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).AppMeshConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).AppMeshConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/appmesh/virtual_gateway.go b/internal/service/appmesh/virtual_gateway.go
index 3cf57b21427..ac37f5de06d 100644
--- a/internal/service/appmesh/virtual_gateway.go
+++ b/internal/service/appmesh/virtual_gateway.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -650,13 +653,13 @@ func resourceVirtualGatewaySpecSchema() *schema.Schema {
 func resourceVirtualGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &appmesh.CreateVirtualGatewayInput{
 		MeshName:           aws.String(d.Get("mesh_name").(string)),
 		Spec:               expandVirtualGatewaySpec(d.Get("spec").([]interface{})),
-		Tags:               GetTagsIn(ctx),
+		Tags:               getTagsIn(ctx),
 		VirtualGatewayName: aws.String(name),
 	}
@@ -677,7 +680,7 @@ func resourceVirtualGatewayCreate(ctx context.Context, d *schema.ResourceData, m
 func resourceVirtualGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
 		return FindVirtualGatewayByThreePartKey(ctx, conn, d.Get("mesh_name").(string), d.Get("mesh_owner").(string), d.Get("name").(string))
@@ -712,7 +715,7 @@ func resourceVirtualGatewayRead(ctx context.Context, d *schema.ResourceData, met
 func resourceVirtualGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	if d.HasChange("spec") {
 		input := &appmesh.UpdateVirtualGatewayInput{
@@ -737,7 +740,7 @@ func resourceVirtualGatewayUpdate(ctx context.Context, d *schema.ResourceData, m
 func resourceVirtualGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	log.Printf("[DEBUG] Deleting App Mesh Virtual Gateway: %s", d.Id())
 	input := &appmesh.DeleteVirtualGatewayInput{
@@ -771,7 +774,7 @@ func resourceVirtualGatewayImport(ctx context.Context, d *schema.ResourceData, m
 	meshName := parts[0]
 	name := parts[1]
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	virtualGateway, err := FindVirtualGatewayByThreePartKey(ctx, conn, meshName, "", name)
diff --git a/internal/service/appmesh/virtual_gateway_data_source.go b/internal/service/appmesh/virtual_gateway_data_source.go
index 6c8f548ece1..4480731d5f4 100644
--- a/internal/service/appmesh/virtual_gateway_data_source.go
+++ b/internal/service/appmesh/virtual_gateway_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -55,7 +58,7 @@ func DataSourceVirtualGateway() *schema.Resource {
 func dataSourceVirtualGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	virtualGatewayName := d.Get("name").(string)
@@ -85,7 +88,7 @@ func dataSourceVirtualGatewayRead(ctx context.Context, d *schema.ResourceData, m
 	var tags tftags.KeyValueTags
 
 	if meshOwner == meta.(*conns.AWSClient).AccountID {
-		tags, err = ListTags(ctx, conn, arn)
+		tags, err = listTags(ctx, conn, arn)
 
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Virtual Gateway (%s): %s", arn, err)
diff --git a/internal/service/appmesh/virtual_gateway_data_source_test.go b/internal/service/appmesh/virtual_gateway_data_source_test.go
index ffe066155e5..3a04e1c1b48 100644
--- a/internal/service/appmesh/virtual_gateway_data_source_test.go
+++ b/internal/service/appmesh/virtual_gateway_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
diff --git a/internal/service/appmesh/virtual_gateway_test.go b/internal/service/appmesh/virtual_gateway_test.go
index 1c5044ab53b..2c567f3f9b1 100644
--- a/internal/service/appmesh/virtual_gateway_test.go
+++ b/internal/service/appmesh/virtual_gateway_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
@@ -919,7 +922,7 @@ func testAccVirtualGateway_Tags(t *testing.T) {
 
 func testAccCheckVirtualGatewayDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appmesh_virtual_gateway" {
@@ -945,7 +948,7 @@ func testAccCheckVirtualGatewayDestroy(ctx context.Context) resource.TestCheckFu
 
 func testAccCheckVirtualGatewayExists(ctx context.Context, n string, v *appmesh.VirtualGatewayData) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
diff --git a/internal/service/appmesh/virtual_node.go b/internal/service/appmesh/virtual_node.go
index e8991d8faa5..dfaeaa7ff70 100644
--- a/internal/service/appmesh/virtual_node.go
+++ b/internal/service/appmesh/virtual_node.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -968,13 +971,13 @@ func resourceVirtualNodeSpecSchema() *schema.Schema {
 func resourceVirtualNodeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &appmesh.CreateVirtualNodeInput{
 		MeshName:        aws.String(d.Get("mesh_name").(string)),
 		Spec:            expandVirtualNodeSpec(d.Get("spec").([]interface{})),
-		Tags:            GetTagsIn(ctx),
+		Tags:            getTagsIn(ctx),
 		VirtualNodeName: aws.String(name),
 	}
@@ -995,7 +998,7 @@ func resourceVirtualNodeCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceVirtualNodeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
 		return FindVirtualNodeByThreePartKey(ctx, conn, d.Get("mesh_name").(string), d.Get("mesh_owner").(string), d.Get("name").(string))
@@ -1030,7 +1033,7 @@ func resourceVirtualNodeRead(ctx context.Context, d *schema.ResourceData, meta i
 func resourceVirtualNodeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	if d.HasChange("spec") {
 		input := &appmesh.UpdateVirtualNodeInput{
@@ -1055,7 +1058,7 @@ func resourceVirtualNodeUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceVirtualNodeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	log.Printf("[DEBUG] Deleting App Mesh Virtual Node: %s", d.Id())
 	input := &appmesh.DeleteVirtualNodeInput{
@@ -1086,7 +1089,7 @@ func resourceVirtualNodeImport(ctx context.Context, d *schema.ResourceData, meta
 		return []*schema.ResourceData{}, fmt.Errorf("wrong format of import ID (%s), use: 'mesh-name/virtual-node-name'", d.Id())
 	}
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	meshName := parts[0]
 	name := parts[1]
diff --git a/internal/service/appmesh/virtual_node_data_source.go b/internal/service/appmesh/virtual_node_data_source.go
index 9201c9041c3..e9b439e79bc 100644
--- a/internal/service/appmesh/virtual_node_data_source.go
+++ b/internal/service/appmesh/virtual_node_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -56,7 +59,7 @@ func DataSourceVirtualNode() *schema.Resource {
 func dataSourceVirtualNodeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	virtualNodeName := d.Get("name").(string)
@@ -86,7 +89,7 @@ func dataSourceVirtualNodeRead(ctx context.Context, d *schema.ResourceData, meta
 	var tags tftags.KeyValueTags
 
 	if meshOwner == meta.(*conns.AWSClient).AccountID {
-		tags, err = ListTags(ctx, conn, arn)
+		tags, err = listTags(ctx, conn, arn)
 
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Virtual Node (%s): %s", arn, err)
diff --git a/internal/service/appmesh/virtual_node_data_source_test.go b/internal/service/appmesh/virtual_node_data_source_test.go
index d48e082a06d..c23487fcab9 100644
--- a/internal/service/appmesh/virtual_node_data_source_test.go
+++ b/internal/service/appmesh/virtual_node_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
diff --git a/internal/service/appmesh/virtual_node_migrate.go b/internal/service/appmesh/virtual_node_migrate.go
index b553ea311c1..0bbede1c97c 100644
--- a/internal/service/appmesh/virtual_node_migrate.go
+++ b/internal/service/appmesh/virtual_node_migrate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
diff --git a/internal/service/appmesh/virtual_node_migrate_test.go b/internal/service/appmesh/virtual_node_migrate_test.go
index c89fa0567c9..b0ec1bd380f 100644
--- a/internal/service/appmesh/virtual_node_migrate_test.go
+++ b/internal/service/appmesh/virtual_node_migrate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
diff --git a/internal/service/appmesh/virtual_node_test.go b/internal/service/appmesh/virtual_node_test.go
index cdcd059cebd..6e3a7bd5d0f 100644
--- a/internal/service/appmesh/virtual_node_test.go
+++ b/internal/service/appmesh/virtual_node_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
@@ -1460,7 +1463,7 @@ func testAccVirtualNode_tags(t *testing.T) {
 
 func testAccCheckVirtualNodeDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appmesh_virtual_node" {
@@ -1486,7 +1489,7 @@ func testAccCheckVirtualNodeDestroy(ctx context.Context) resource.TestCheckFunc
 
 func testAccCheckVirtualNodeExists(ctx context.Context, n string, v *appmesh.VirtualNodeData) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
diff --git a/internal/service/appmesh/virtual_router.go b/internal/service/appmesh/virtual_router.go
index 55ca4ef1736..186281f0c62 100644
--- a/internal/service/appmesh/virtual_router.go
+++ b/internal/service/appmesh/virtual_router.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -128,13 +131,13 @@ func resourceVirtualRouterSpecSchema() *schema.Schema {
 func resourceVirtualRouterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &appmesh.CreateVirtualRouterInput{
 		MeshName:          aws.String(d.Get("mesh_name").(string)),
 		Spec:              expandVirtualRouterSpec(d.Get("spec").([]interface{})),
-		Tags:              GetTagsIn(ctx),
+		Tags:              getTagsIn(ctx),
 		VirtualRouterName: aws.String(name),
 	}
@@ -155,7 +158,7 @@ func resourceVirtualRouterCreate(ctx context.Context, d *schema.ResourceData, me
 func resourceVirtualRouterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
 		return FindVirtualRouterByThreePartKey(ctx, conn, d.Get("mesh_name").(string), d.Get("mesh_owner").(string), d.Get("name").(string))
@@ -190,7 +193,7 @@ func resourceVirtualRouterRead(ctx context.Context, d *schema.ResourceData, meta
 func resourceVirtualRouterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	if d.HasChange("spec") {
 		input := &appmesh.UpdateVirtualRouterInput{
@@ -215,7 +218,7 @@ func resourceVirtualRouterUpdate(ctx context.Context, d *schema.ResourceData, me
 func resourceVirtualRouterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	log.Printf("[DEBUG] Deleting App Mesh Virtual Router: %s", d.Id())
 	input := &appmesh.DeleteVirtualRouterInput{
@@ -246,7 +249,7 @@ func resourceVirtualRouterImport(ctx context.Context, d *schema.ResourceData, me
 		return []*schema.ResourceData{}, fmt.Errorf("wrong format of import ID (%s), use: 'mesh-name/virtual-router-name'", d.Id())
 	}
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	meshName := parts[0]
 	name := parts[1]
diff --git a/internal/service/appmesh/virtual_router_data_source.go b/internal/service/appmesh/virtual_router_data_source.go
index 3f45574aeef..9bc091b45b3 100644
--- a/internal/service/appmesh/virtual_router_data_source.go
+++ b/internal/service/appmesh/virtual_router_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -56,7 +59,7 @@ func DataSourceVirtualRouter() *schema.Resource {
 func dataSourceVirtualRouterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	virtualRouterName := d.Get("name").(string)
@@ -86,7 +89,7 @@ func dataSourceVirtualRouterRead(ctx context.Context, d *schema.ResourceData, me
 	var tags tftags.KeyValueTags
 
 	if meshOwner == meta.(*conns.AWSClient).AccountID {
-		tags, err = ListTags(ctx, conn, arn)
+		tags, err = listTags(ctx, conn, arn)
 
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Virtual Router (%s): %s", arn, err)
diff --git a/internal/service/appmesh/virtual_router_data_source_test.go b/internal/service/appmesh/virtual_router_data_source_test.go
index 9d220c64ef4..6329c8d2f80 100644
--- a/internal/service/appmesh/virtual_router_data_source_test.go
+++ b/internal/service/appmesh/virtual_router_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
diff --git a/internal/service/appmesh/virtual_router_migrate.go b/internal/service/appmesh/virtual_router_migrate.go
index a6b8a7675bd..f92e7c3b867 100644
--- a/internal/service/appmesh/virtual_router_migrate.go
+++ b/internal/service/appmesh/virtual_router_migrate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
diff --git a/internal/service/appmesh/virtual_router_migrate_test.go b/internal/service/appmesh/virtual_router_migrate_test.go
index 42a92d3f926..6c1d94eaa99 100644
--- a/internal/service/appmesh/virtual_router_migrate_test.go
+++ b/internal/service/appmesh/virtual_router_migrate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
diff --git a/internal/service/appmesh/virtual_router_test.go b/internal/service/appmesh/virtual_router_test.go
index 04db985830c..a84a503b814 100644
--- a/internal/service/appmesh/virtual_router_test.go
+++ b/internal/service/appmesh/virtual_router_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
@@ -213,7 +216,7 @@ func testAccVirtualRouter_disappears(t *testing.T) {
 
 func testAccCheckVirtualRouterDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appmesh_virtual_router" {
@@ -239,7 +242,7 @@ func testAccCheckVirtualRouterDestroy(ctx context.Context) resource.TestCheckFun
 
 func testAccCheckVirtualRouterExists(ctx context.Context, n string, v *appmesh.VirtualRouterData) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
diff --git a/internal/service/appmesh/virtual_service.go b/internal/service/appmesh/virtual_service.go
index 2eb70a8b782..894f8c36f5d 100644
--- a/internal/service/appmesh/virtual_service.go
+++ b/internal/service/appmesh/virtual_service.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -137,13 +140,13 @@ func resourceVirtualServiceSpecSchema() *schema.Schema {
 func resourceVirtualServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &appmesh.CreateVirtualServiceInput{
 		MeshName:           aws.String(d.Get("mesh_name").(string)),
 		Spec:               expandVirtualServiceSpec(d.Get("spec").([]interface{})),
-		Tags:               GetTagsIn(ctx),
+		Tags:               getTagsIn(ctx),
 		VirtualServiceName: aws.String(name),
 	}
@@ -164,7 +167,7 @@ func resourceVirtualServiceCreate(ctx context.Context, d *schema.ResourceData, m
 func resourceVirtualServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
 		return FindVirtualServiceByThreePartKey(ctx, conn, d.Get("mesh_name").(string), d.Get("mesh_owner").(string), d.Get("name").(string))
@@ -199,7 +202,7 @@ func resourceVirtualServiceRead(ctx context.Context, d *schema.ResourceData, met
 func resourceVirtualServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	if d.HasChange("spec") {
 		input := &appmesh.UpdateVirtualServiceInput{
@@ -224,7 +227,7 @@ func resourceVirtualServiceUpdate(ctx context.Context, d *schema.ResourceData, m
 func resourceVirtualServiceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	log.Printf("[DEBUG] Deleting App Mesh Virtual Service: %s", d.Id())
 	input := &appmesh.DeleteVirtualServiceInput{
@@ -255,7 +258,7 @@ func resourceVirtualServiceImport(ctx context.Context, d *schema.ResourceData, m
 		return []*schema.ResourceData{}, fmt.Errorf("wrong format of import ID (%s), use: 'mesh-name/virtual-service-name'", d.Id())
 	}
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 
 	meshName := parts[0]
 	name := parts[1]
diff --git a/internal/service/appmesh/virtual_service_data_source.go b/internal/service/appmesh/virtual_service_data_source.go
index 658e50b5dfc..54916c0ac05 100644
--- a/internal/service/appmesh/virtual_service_data_source.go
+++ b/internal/service/appmesh/virtual_service_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
@@ -56,7 +59,7 @@ func DataSourceVirtualService() *schema.Resource {
 func dataSourceVirtualServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 
-	conn := meta.(*conns.AWSClient).AppMeshConn()
+	conn := meta.(*conns.AWSClient).AppMeshConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	virtualServiceName := d.Get("name").(string)
@@ -86,7 +89,7 @@ func dataSourceVirtualServiceRead(ctx context.Context, d *schema.ResourceData, m
 	var tags tftags.KeyValueTags
 
 	if meshOwner == meta.(*conns.AWSClient).AccountID {
-		tags, err = ListTags(ctx, conn, arn)
+		tags, err = listTags(ctx, conn, arn)
 
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, "listing tags for App Mesh Virtual Service (%s): %s", arn, err)
diff --git a/internal/service/appmesh/virtual_service_data_source_test.go b/internal/service/appmesh/virtual_service_data_source_test.go
index 3c50e8f8348..8a46869002a 100644
--- a/internal/service/appmesh/virtual_service_data_source_test.go
+++ b/internal/service/appmesh/virtual_service_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
diff --git a/internal/service/appmesh/virtual_service_test.go b/internal/service/appmesh/virtual_service_test.go
index 896de0b0785..6c68fc7a3ae 100644
--- a/internal/service/appmesh/virtual_service_test.go
+++ b/internal/service/appmesh/virtual_service_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh_test
 
 import (
@@ -197,7 +200,7 @@ func testAccVirtualService_disappears(t *testing.T) {
 
 func testAccCheckVirtualServiceDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appmesh_virtual_service" {
@@ -223,7 +226,7 @@ func testAccCheckVirtualServiceDestroy(ctx context.Context) resource.TestCheckFu
 
 func testAccCheckVirtualServiceExists(ctx context.Context, n string, v *appmesh.VirtualServiceData) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppMeshConn(ctx)
 
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
diff --git a/internal/service/appmesh/wait.go b/internal/service/appmesh/wait.go
index 9be690a2d0c..0dbb7547ee3 100644
--- a/internal/service/appmesh/wait.go
+++ b/internal/service/appmesh/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appmesh
 
 import (
diff --git a/internal/service/apprunner/auto_scaling_configuration_version.go b/internal/service/apprunner/auto_scaling_configuration_version.go
index 1ea2c7f24f1..bebb07b0d42 100644
--- a/internal/service/apprunner/auto_scaling_configuration_version.go
+++ b/internal/service/apprunner/auto_scaling_configuration_version.go
@@ -1,8 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apprunner
 
 import (
 	"context"
-	"fmt"
 	"log"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -82,12 +84,12 @@ func ResourceAutoScalingConfigurationVersion() *schema.Resource {
 }
 
 func resourceAutoScalingConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).AppRunnerConn()
+	conn := meta.(*conns.AWSClient).AppRunnerConn(ctx)
 
 	name := d.Get("auto_scaling_configuration_name").(string)
 	input := &apprunner.CreateAutoScalingConfigurationInput{
 		AutoScalingConfigurationName: aws.String(name),
-		Tags:                         GetTagsIn(ctx),
+		Tags:                         getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("max_concurrency"); ok {
@@ -105,24 +107,24 @@ func resourceAutoScalingConfigurationCreate(ctx context.Context, d *schema.Resou
 	output, err := conn.CreateAutoScalingConfigurationWithContext(ctx, input)
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error creating App Runner AutoScaling Configuration Version (%s): %w", name, err))
+		return diag.Errorf("creating App Runner AutoScaling Configuration Version (%s): %s", name, err)
 	}
 
 	if output == nil || output.AutoScalingConfiguration == nil {
-		return diag.FromErr(fmt.Errorf("error creating App Runner AutoScaling Configuration Version (%s): empty output", name))
+		return diag.Errorf("creating App Runner AutoScaling Configuration Version (%s): empty output", name)
 	}
 
 	d.SetId(aws.StringValue(output.AutoScalingConfiguration.AutoScalingConfigurationArn))
 
 	if err := WaitAutoScalingConfigurationActive(ctx, conn, d.Id()); err != nil {
-		return diag.FromErr(fmt.Errorf("error waiting for AutoScaling Configuration Version (%s) creation: %w", d.Id(), err))
+		return diag.Errorf("waiting for AutoScaling Configuration Version (%s) creation: %s", d.Id(), err)
 	}
 
 	return resourceAutoScalingConfigurationRead(ctx, d, meta)
 }
 
 func resourceAutoScalingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).AppRunnerConn()
+	conn := meta.(*conns.AWSClient).AppRunnerConn(ctx)
 
 	input := &apprunner.DescribeAutoScalingConfigurationInput{
 		AutoScalingConfigurationArn: aws.String(d.Id()),
@@ -137,16 +139,16 @@ func resourceAutoScalingConfigurationRead(ctx context.Context, d *schema.Resourc
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error reading App Runner AutoScaling Configuration Version (%s): %w", d.Id(), err))
+		return diag.Errorf("reading App Runner AutoScaling Configuration Version (%s): %s", d.Id(), err)
 	}
 
 	if output == nil || output.AutoScalingConfiguration == nil {
-		return diag.FromErr(fmt.Errorf("error reading App Runner AutoScaling Configuration Version (%s): empty output", d.Id()))
+		return diag.Errorf("reading App Runner AutoScaling Configuration Version (%s): empty output", d.Id())
 	}
 
 	if aws.StringValue(output.AutoScalingConfiguration.Status) == AutoScalingConfigurationStatusInactive {
 		if d.IsNewResource() {
-			return diag.FromErr(fmt.Errorf("error reading App Runner AutoScaling Configuration Version (%s): %s after creation", d.Id(), aws.StringValue(output.AutoScalingConfiguration.Status)))
+			return diag.Errorf("reading App Runner AutoScaling Configuration Version (%s): %s after creation", d.Id(), aws.StringValue(output.AutoScalingConfiguration.Status))
 		}
 		log.Printf("[WARN] App Runner AutoScaling Configuration Version (%s) not found, removing from state", d.Id())
 		d.SetId("")
@@ -174,7 +176,7 @@ func resourceAutoScalingConfigurationUpdate(ctx context.Context, d *schema.Resou
 }
 
 func resourceAutoScalingConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).AppRunnerConn()
+	conn := meta.(*conns.AWSClient).AppRunnerConn(ctx)
 
 	input := &apprunner.DeleteAutoScalingConfigurationInput{
 		AutoScalingConfigurationArn: aws.String(d.Id()),
@@ -187,14 +189,14 @@ func resourceAutoScalingConfigurationDelete(ctx context.Context, d *schema.Resou
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error deleting App Runner AutoScaling Configuration Version (%s): %w", d.Id(), err))
+		return diag.Errorf("deleting App Runner AutoScaling Configuration Version (%s): %s", d.Id(), err)
 	}
 
 	if err := WaitAutoScalingConfigurationInactive(ctx, conn, d.Id()); err != nil {
 		if tfawserr.ErrCodeEquals(err, apprunner.ErrCodeResourceNotFoundException) {
 			return nil
 		}
-		return diag.FromErr(fmt.Errorf("error waiting for AutoScaling Configuration Version (%s) deletion: %w", d.Id(), err))
+		return diag.Errorf("waiting for AutoScaling Configuration Version (%s) deletion: %s", d.Id(), err)
 	}
 
 	return nil
diff --git a/internal/service/apprunner/auto_scaling_configuration_version_test.go b/internal/service/apprunner/auto_scaling_configuration_version_test.go
index 5993a2fb1f9..37911926b24 100644
--- a/internal/service/apprunner/auto_scaling_configuration_version_test.go
+++ b/internal/service/apprunner/auto_scaling_configuration_version_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apprunner_test
 
 import (
@@ -310,7 +313,7 @@ func testAccCheckAutoScalingConfigurationVersionDestroy(ctx context.Context) res
 			continue
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx)
 
 		input := &apprunner.DescribeAutoScalingConfigurationInput{
 			AutoScalingConfigurationArn: aws.String(rs.Primary.ID),
@@ -346,7 +349,7 @@ func testAccCheckAutoScalingConfigurationVersionExists(ctx context.Context, n st
 			return fmt.Errorf("No App Runner Service ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx)
 
 		input := &apprunner.DescribeAutoScalingConfigurationInput{
 			AutoScalingConfigurationArn: aws.String(rs.Primary.ID),
diff --git a/internal/service/apprunner/connection.go b/internal/service/apprunner/connection.go
index c588a2da0d1..6dcee52f5dc 100644
--- a/internal/service/apprunner/connection.go
+++ b/internal/service/apprunner/connection.go
@@ -1,8 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package apprunner
 
 import (
 	"context"
-	"fmt"
 	"log"
 
 	"github.com/aws/aws-sdk-go/aws"
@@ -63,23 +65,23 @@ func ResourceConnection() *schema.Resource {
 }
 
 func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).AppRunnerConn()
+	conn := meta.(*conns.AWSClient).AppRunnerConn(ctx)
 
 	name := d.Get("connection_name").(string)
 	input := &apprunner.CreateConnectionInput{
 		ConnectionName: aws.String(name),
 		ProviderType:   aws.String(d.Get("provider_type").(string)),
-		Tags:           GetTagsIn(ctx),
+		Tags:           getTagsIn(ctx),
 	}
 
 	output, err := conn.CreateConnectionWithContext(ctx, input)
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error creating App Runner Connection (%s): %w", name, err))
+		return diag.Errorf("creating App Runner Connection (%s): %s", name, err)
 	}
 
 	if output == nil || output.Connection == nil {
-		return diag.FromErr(fmt.Errorf("error creating App Runner Connection (%s): empty output", name))
+		return diag.Errorf("creating App Runner Connection (%s): empty output", name)
 	}
 
 	d.SetId(aws.StringValue(output.Connection.ConnectionName))
@@ -88,7 +90,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta
 }
 
 func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).AppRunnerConn()
+	conn := meta.(*conns.AWSClient).AppRunnerConn(ctx)
 
 	c, err := FindConnectionSummaryByName(ctx, conn, d.Id())
 
@@ -99,12 +101,12 @@ func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta in
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error reading App Runner Connection (%s): %w", d.Id(), err))
+		return diag.Errorf("reading App Runner Connection (%s): %s", d.Id(), err)
 	}
 
 	if c == nil {
 		if d.IsNewResource() {
-			return diag.FromErr(fmt.Errorf("error reading App Runner Connection (%s): empty output after creation", d.Id()))
+			return
diag.Errorf("reading App Runner Connection (%s): empty output after creation", d.Id()) } log.Printf("[WARN] App Runner Connection (%s) not found, removing from state", d.Id()) d.SetId("") @@ -127,7 +129,7 @@ func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DeleteConnectionInput{ ConnectionArn: aws.String(d.Get("arn").(string)), @@ -139,14 +141,14 @@ func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta if tfawserr.ErrCodeEquals(err, apprunner.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error deleting App Runner Connection (%s): %w", d.Id(), err)) + return diag.Errorf("deleting App Runner Connection (%s): %s", d.Id(), err) } if err := WaitConnectionDeleted(ctx, conn, d.Id()); err != nil { if tfawserr.ErrCodeEquals(err, apprunner.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error waiting for App Runner Connection (%s) deletion: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Connection (%s) deletion: %s", d.Id(), err) } return nil diff --git a/internal/service/apprunner/connection_test.go b/internal/service/apprunner/connection_test.go index c0647ae3c95..0d30882472f 100644 --- a/internal/service/apprunner/connection_test.go +++ b/internal/service/apprunner/connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( @@ -129,7 +132,7 @@ func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) connection, err := tfapprunner.FindConnectionSummaryByName(ctx, conn, rs.Primary.ID) @@ -161,7 +164,7 @@ func testAccCheckConnectionExists(ctx context.Context, n string) resource.TestCh return fmt.Errorf("No App Runner Connection ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) connection, err := tfapprunner.FindConnectionSummaryByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/apprunner/consts.go b/internal/service/apprunner/consts.go index 59eb6f17648..bcb5ff0621f 100644 --- a/internal/service/apprunner/consts.go +++ b/internal/service/apprunner/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( diff --git a/internal/service/apprunner/custom_domain_association.go b/internal/service/apprunner/custom_domain_association.go index 1d0d523f133..80b20b3c5f4 100644 --- a/internal/service/apprunner/custom_domain_association.go +++ b/internal/service/apprunner/custom_domain_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( @@ -82,7 +85,7 @@ func ResourceCustomDomainAssociation() *schema.Resource { } func resourceCustomDomainAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) domainName := d.Get("domain_name").(string) serviceArn := d.Get("service_arn").(string) @@ -96,25 +99,25 @@ func resourceCustomDomainAssociationCreate(ctx context.Context, d *schema.Resour output, err := conn.AssociateCustomDomainWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error associating App Runner Custom Domain (%s) for Service (%s): %w", domainName, serviceArn, err)) + return diag.Errorf("associating App Runner Custom Domain (%s) for Service (%s): %s", domainName, serviceArn, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error associating App Runner Custom Domain (%s) for Service (%s): empty output", domainName, serviceArn)) + return diag.Errorf("associating App Runner Custom Domain (%s) for Service (%s): empty output", domainName, serviceArn) } d.SetId(fmt.Sprintf("%s,%s", aws.StringValue(output.CustomDomain.DomainName), aws.StringValue(output.ServiceArn))) d.Set("dns_target", output.DNSTarget) if err := WaitCustomDomainAssociationCreated(ctx, conn, domainName, serviceArn); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for App Runner Custom Domain Association (%s) creation: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Custom Domain Association (%s) creation: %s", d.Id(), err) } return resourceCustomDomainAssociationRead(ctx, d, meta) } func resourceCustomDomainAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) domainName, serviceArn, err := 
CustomDomainAssociationParseID(d.Id()) @@ -132,7 +135,7 @@ func resourceCustomDomainAssociationRead(ctx context.Context, d *schema.Resource if customDomain == nil { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading App Runner Custom Domain Association (%s): empty output after creation", d.Id())) + return diag.Errorf("reading App Runner Custom Domain Association (%s): empty output after creation", d.Id()) } log.Printf("[WARN] App Runner Custom Domain Association (%s) not found, removing from state", d.Id()) d.SetId("") @@ -140,7 +143,7 @@ func resourceCustomDomainAssociationRead(ctx context.Context, d *schema.Resource } if err := d.Set("certificate_validation_records", flattenCustomDomainCertificateValidationRecords(customDomain.CertificateValidationRecords)); err != nil { - return diag.FromErr(fmt.Errorf("error setting certificate_validation_records: %w", err)) + return diag.Errorf("setting certificate_validation_records: %s", err) } d.Set("domain_name", customDomain.DomainName) @@ -152,7 +155,7 @@ func resourceCustomDomainAssociationRead(ctx context.Context, d *schema.Resource } func resourceCustomDomainAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) domainName, serviceArn, err := CustomDomainAssociationParseID(d.Id()) @@ -172,7 +175,7 @@ func resourceCustomDomainAssociationDelete(ctx context.Context, d *schema.Resour } if err != nil { - return diag.FromErr(fmt.Errorf("error disassociating App Runner Custom Domain (%s) for Service (%s): %w", domainName, serviceArn, err)) + return diag.Errorf("disassociating App Runner Custom Domain (%s) for Service (%s): %s", domainName, serviceArn, err) } if err := WaitCustomDomainAssociationDeleted(ctx, conn, domainName, serviceArn); err != nil { @@ -180,7 +183,7 @@ func resourceCustomDomainAssociationDelete(ctx context.Context, d *schema.Resour return nil 
} - return diag.FromErr(fmt.Errorf("error waiting for App Runner Custom Domain Association (%s) deletion: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Custom Domain Association (%s) deletion: %s", d.Id(), err) } return nil diff --git a/internal/service/apprunner/custom_domain_association_test.go b/internal/service/apprunner/custom_domain_association_test.go index ec53414e3a9..242630c8b3f 100644 --- a/internal/service/apprunner/custom_domain_association_test.go +++ b/internal/service/apprunner/custom_domain_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( @@ -90,7 +93,7 @@ func testAccCheckCustomDomainAssociationDestroy(ctx context.Context) resource.Te continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) domainName, serviceArn, err := tfapprunner.CustomDomainAssociationParseID(rs.Primary.ID) @@ -134,7 +137,7 @@ func testAccCheckCustomDomainAssociationExists(ctx context.Context, n string) re return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) customDomain, err := tfapprunner.FindCustomDomain(ctx, conn, domainName, serviceArn) diff --git a/internal/service/apprunner/find.go b/internal/service/apprunner/find.go index e58743f8133..6c9d59a8094 100644 --- a/internal/service/apprunner/find.go +++ b/internal/service/apprunner/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( diff --git a/internal/service/apprunner/generate.go b/internal/service/apprunner/generate.go index 1eaa611a0c3..60033f1570c 100644 --- a/internal/service/apprunner/generate.go +++ b/internal/service/apprunner/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package apprunner diff --git a/internal/service/apprunner/id.go b/internal/service/apprunner/id.go index 1674c942501..dfe437e27de 100644 --- a/internal/service/apprunner/id.go +++ b/internal/service/apprunner/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( diff --git a/internal/service/apprunner/id_test.go b/internal/service/apprunner/id_test.go index 9f2b1702ae6..9782b792f1f 100644 --- a/internal/service/apprunner/id_test.go +++ b/internal/service/apprunner/id_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( diff --git a/internal/service/apprunner/observability_configuration.go b/internal/service/apprunner/observability_configuration.go index f00c2802960..33b0d2599cc 100644 --- a/internal/service/apprunner/observability_configuration.go +++ b/internal/service/apprunner/observability_configuration.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( "context" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -75,12 +77,12 @@ func ResourceObservabilityConfiguration() *schema.Resource { } func resourceObservabilityConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) name := d.Get("observability_configuration_name").(string) input := &apprunner.CreateObservabilityConfigurationInput{ ObservabilityConfigurationName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("trace_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -90,24 +92,24 @@ func resourceObservabilityConfigurationCreate(ctx context.Context, d *schema.Res output, err := conn.CreateObservabilityConfigurationWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating App Runner Observability Configuration (%s): %w", name, err)) + return diag.Errorf("creating App Runner Observability Configuration (%s): %s", name, err) } if output == nil || output.ObservabilityConfiguration == nil { - return diag.FromErr(fmt.Errorf("error creating App Runner Observability Configuration (%s): empty output", name)) + return diag.Errorf("creating App Runner Observability Configuration (%s): empty output", name) } d.SetId(aws.StringValue(output.ObservabilityConfiguration.ObservabilityConfigurationArn)) if err := WaitObservabilityConfigurationActive(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for App Runner Observability Configuration (%s) creation: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Observability Configuration (%s) creation: %s", d.Id(), err) } return resourceObservabilityConfigurationRead(ctx, d, meta) } func resourceObservabilityConfigurationRead(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeObservabilityConfigurationInput{ ObservabilityConfigurationArn: aws.String(d.Id()), @@ -122,16 +124,16 @@ func resourceObservabilityConfigurationRead(ctx context.Context, d *schema.Resou } if err != nil { - return diag.FromErr(fmt.Errorf("error reading App Runner Observability Configuration (%s): %w", d.Id(), err)) + return diag.Errorf("reading App Runner Observability Configuration (%s): %s", d.Id(), err) } if output == nil || output.ObservabilityConfiguration == nil { - return diag.FromErr(fmt.Errorf("error reading App Runner Observability Configuration (%s): empty output", d.Id())) + return diag.Errorf("reading App Runner Observability Configuration (%s): empty output", d.Id()) } if aws.StringValue(output.ObservabilityConfiguration.Status) == ObservabilityConfigurationStatusInactive { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading App Runner Observability Configuration (%s): %s after creation", d.Id(), aws.StringValue(output.ObservabilityConfiguration.Status))) + return diag.Errorf("reading App Runner Observability Configuration (%s): %s after creation", d.Id(), aws.StringValue(output.ObservabilityConfiguration.Status)) } log.Printf("[WARN] App Runner Observability Configuration (%s) not found, removing from state", d.Id()) d.SetId("") @@ -148,7 +150,7 @@ func resourceObservabilityConfigurationRead(ctx context.Context, d *schema.Resou d.Set("status", config.Status) if err := d.Set("trace_configuration", flattenTraceConfiguration(config.TraceConfiguration)); err != nil { - return diag.Errorf("error setting trace_configuration: %s", err) + return diag.Errorf("setting trace_configuration: %s", err) } return nil @@ -160,7 +162,7 @@ func resourceObservabilityConfigurationUpdate(ctx context.Context, d *schema.Res } func resourceObservabilityConfigurationDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DeleteObservabilityConfigurationInput{ ObservabilityConfigurationArn: aws.String(d.Id()), @@ -173,14 +175,14 @@ func resourceObservabilityConfigurationDelete(ctx context.Context, d *schema.Res } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting App Runner Observability Configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting App Runner Observability Configuration (%s): %s", d.Id(), err) } if err := WaitObservabilityConfigurationInactive(ctx, conn, d.Id()); err != nil { if tfawserr.ErrCodeEquals(err, apprunner.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error waiting for App Runner Observability Configuration (%s) deletion: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Observability Configuration (%s) deletion: %s", d.Id(), err) } return nil diff --git a/internal/service/apprunner/observability_configuration_version_test.go b/internal/service/apprunner/observability_configuration_version_test.go index dea20a331ea..f2671931543 100644 --- a/internal/service/apprunner/observability_configuration_version_test.go +++ b/internal/service/apprunner/observability_configuration_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( @@ -154,7 +157,7 @@ func testAccCheckObservabilityConfigurationDestroy(ctx context.Context) resource continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeObservabilityConfigurationInput{ ObservabilityConfigurationArn: aws.String(rs.Primary.ID), @@ -190,7 +193,7 @@ func testAccCheckObservabilityConfigurationExists(ctx context.Context, n string) return fmt.Errorf("No App Runner Service ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeObservabilityConfigurationInput{ ObservabilityConfigurationArn: aws.String(rs.Primary.ID), diff --git a/internal/service/apprunner/service.go b/internal/service/apprunner/service.go index b93bb7396d9..00c4f10a8b6 100644 --- a/internal/service/apprunner/service.go +++ b/internal/service/apprunner/service.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( "context" - "fmt" "log" "regexp" @@ -433,13 +435,13 @@ func ResourceService() *schema.Resource { } func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) serviceName := d.Get("service_name").(string) input := &apprunner.CreateServiceInput{ ServiceName: aws.String(serviceName), SourceConfiguration: expandServiceSourceConfiguration(d.Get("source_configuration").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("auto_scaling_configuration_arn"); ok { @@ -488,24 +490,24 @@ func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return diag.FromErr(fmt.Errorf("error creating App Runner Service (%s): %w", serviceName, err)) + return diag.Errorf("creating App Runner Service (%s): %s", serviceName, err) } if output == nil || output.Service == nil { - return diag.FromErr(fmt.Errorf("error creating App Runner Service (%s): empty output", serviceName)) + return diag.Errorf("creating App Runner Service (%s): empty output", serviceName) } d.SetId(aws.StringValue(output.Service.ServiceArn)) if err := WaitServiceCreated(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for App Runner Service (%s) creation: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Service (%s) creation: %s", d.Id(), err) } return resourceServiceRead(ctx, d, meta) } func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeServiceInput{ ServiceArn: aws.String(d.Id()), @@ -520,16 +522,16 @@ func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta inter } if err != nil 
{ - return diag.FromErr(fmt.Errorf("error reading App Runner Service (%s): %w", d.Id(), err)) + return diag.Errorf("reading App Runner Service (%s): %s", d.Id(), err) } if output == nil || output.Service == nil { - return diag.FromErr(fmt.Errorf("error reading App Runner Service (%s): empty output", d.Id())) + return diag.Errorf("reading App Runner Service (%s): empty output", d.Id()) } if aws.StringValue(output.Service.Status) == apprunner.ServiceStatusDeleted { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading App Runner Service (%s): %s after creation", d.Id(), aws.StringValue(output.Service.Status))) + return diag.Errorf("reading App Runner Service (%s): %s after creation", d.Id(), aws.StringValue(output.Service.Status)) } log.Printf("[WARN] App Runner Service (%s) not found, removing from state", d.Id()) d.SetId("") @@ -551,34 +553,34 @@ func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("service_url", service.ServiceUrl) d.Set("status", service.Status) if err := d.Set("encryption_configuration", flattenServiceEncryptionConfiguration(service.EncryptionConfiguration)); err != nil { - return diag.FromErr(fmt.Errorf("error setting encryption_configuration: %w", err)) + return diag.Errorf("setting encryption_configuration: %s", err) } if err := d.Set("health_check_configuration", flattenServiceHealthCheckConfiguration(service.HealthCheckConfiguration)); err != nil { - return diag.FromErr(fmt.Errorf("error setting health_check_configuration: %w", err)) + return diag.Errorf("setting health_check_configuration: %s", err) } if err := d.Set("instance_configuration", flattenServiceInstanceConfiguration(service.InstanceConfiguration)); err != nil { - return diag.FromErr(fmt.Errorf("error setting instance_configuration: %w", err)) + return diag.Errorf("setting instance_configuration: %s", err) } if err := d.Set("network_configuration", flattenNetworkConfiguration(service.NetworkConfiguration)); err != nil { - return 
diag.FromErr(fmt.Errorf("error setting network_configuration: %w", err)) + return diag.Errorf("setting network_configuration: %s", err) } if err := d.Set("observability_configuration", flattenServiceObservabilityConfiguration(service.ObservabilityConfiguration)); err != nil { - return diag.FromErr(fmt.Errorf("error setting observability_configuration: %w", err)) + return diag.Errorf("setting observability_configuration: %s", err) } if err := d.Set("source_configuration", flattenServiceSourceConfiguration(service.SourceConfiguration)); err != nil { - return diag.FromErr(fmt.Errorf("error setting source_configuration: %w", err)) + return diag.Errorf("setting source_configuration: %s", err) } return nil } func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) if d.HasChanges( "auto_scaling_configuration_arn", @@ -614,11 +616,11 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int _, err := conn.UpdateServiceWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating App Runner Service (%s): %w", d.Id(), err)) + return diag.Errorf("updating App Runner Service (%s): %s", d.Id(), err) } if err := WaitServiceUpdated(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for App Runner Service (%s) to update: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Service (%s) to update: %s", d.Id(), err) } } @@ -626,7 +628,7 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceServiceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DeleteServiceInput{ ServiceArn: aws.String(d.Id()), @@ -639,7 +641,7 @@ func 
resourceServiceDelete(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting App Runner Service (%s): %w", d.Id(), err)) + return diag.Errorf("deleting App Runner Service (%s): %s", d.Id(), err) } if err := WaitServiceDeleted(ctx, conn, d.Id()); err != nil { @@ -647,7 +649,7 @@ func resourceServiceDelete(ctx context.Context, d *schema.ResourceData, meta int return nil } - return diag.FromErr(fmt.Errorf("error waiting for App Runner Service (%s) deletion: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner Service (%s) deletion: %s", d.Id(), err) } return nil diff --git a/internal/service/apprunner/service_package_gen.go b/internal/service/apprunner/service_package_gen.go index e7e132acfeb..01609e9fab0 100644 --- a/internal/service/apprunner/service_package_gen.go +++ b/internal/service/apprunner/service_package_gen.go @@ -5,6 +5,10 @@ package apprunner import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + apprunner_sdkv1 "github.com/aws/aws-sdk-go/service/apprunner" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -84,4 +88,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AppRunner } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*apprunner_sdkv1.AppRunner, error) { + sess := config["session"].(*session_sdkv1.Session) + + return apprunner_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/apprunner/service_test.go b/internal/service/apprunner/service_test.go index 5a05db345ba..aaa5aec56f9 100644 --- a/internal/service/apprunner/service_test.go +++ b/internal/service/apprunner/service_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( @@ -555,7 +558,7 @@ func testAccCheckServiceDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeServiceInput{ ServiceArn: aws.String(rs.Primary.ID), @@ -591,7 +594,7 @@ func testAccCheckServiceExists(ctx context.Context, n string) resource.TestCheck return fmt.Errorf("No App Runner Service ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeServiceInput{ ServiceArn: aws.String(rs.Primary.ID), @@ -612,7 +615,7 @@ func testAccCheckServiceExists(ctx context.Context, n string) resource.TestCheck } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.ListServicesInput{} diff --git a/internal/service/apprunner/status.go b/internal/service/apprunner/status.go index f857460969c..3278a77e468 100644 --- a/internal/service/apprunner/status.go +++ 
b/internal/service/apprunner/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( diff --git a/internal/service/apprunner/sweep.go b/internal/service/apprunner/sweep.go index 832046965c1..866f443e752 100644 --- a/internal/service/apprunner/sweep.go +++ b/internal/service/apprunner/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/apprunner" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -36,13 +38,13 @@ func init() { func sweepAutoScalingConfigurationVersions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppRunnerConn() + conn := client.AppRunnerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -88,7 +90,7 @@ func sweepAutoScalingConfigurationVersions(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing App Runner AutoScaling Configuration Versions: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping App Runner AutoScaling Configuration Version for %s: %w", region, err)) } @@ -102,13 +104,13 @@ func sweepAutoScalingConfigurationVersions(region string) error { func sweepConnections(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AppRunnerConn() + conn := client.AppRunnerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -149,7 +151,7 @@ func sweepConnections(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing App Runner Connections: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping App Runner Connections for %s: %w", region, err)) } @@ -163,13 +165,13 @@ func sweepConnections(region string) error { func sweepServices(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AppRunnerConn() + conn := client.AppRunnerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -209,7 +211,7 @@ func sweepServices(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing App Runner Services: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping App Runner Services for %s: %w", region, err)) } diff --git a/internal/service/apprunner/tags_gen.go b/internal/service/apprunner/tags_gen.go index 1d58f54ff4f..1417de0b460 100644 --- a/internal/service/apprunner/tags_gen.go +++ b/internal/service/apprunner/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists apprunner service tags. +// listTags lists apprunner service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn apprunneriface.AppRunnerAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn apprunneriface.AppRunnerAPI, identifier string) (tftags.KeyValueTags, error) { input := &apprunner.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn apprunneriface.AppRunnerAPI, identifier // ListTags lists apprunner service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AppRunnerConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AppRunnerConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*apprunner.Tag) tftags.KeyValueTag return tftags.New(ctx, m) } -// GetTagsIn returns apprunner service tags from Context. +// getTagsIn returns apprunner service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*apprunner.Tag { +func getTagsIn(ctx context.Context) []*apprunner.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*apprunner.Tag { return nil } -// SetTagsOut sets apprunner service tags in Context. -func SetTagsOut(ctx context.Context, tags []*apprunner.Tag) { +// setTagsOut sets apprunner service tags in Context. +func setTagsOut(ctx context.Context, tags []*apprunner.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates apprunner service tags. 
+// updateTags updates apprunner service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn apprunneriface.AppRunnerAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn apprunneriface.AppRunnerAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn apprunneriface.AppRunnerAPI, identifie // UpdateTags updates apprunner service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppRunnerConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppRunnerConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/apprunner/vpc_connector.go b/internal/service/apprunner/vpc_connector.go index 4d74f3b559d..93144b3598b 100644 --- a/internal/service/apprunner/vpc_connector.go +++ b/internal/service/apprunner/vpc_connector.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( @@ -72,13 +75,13 @@ func ResourceVPCConnector() *schema.Resource { } func resourceVPCConnectorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) vpcConnectorName := d.Get("vpc_connector_name").(string) input := &apprunner.CreateVpcConnectorInput{ SecurityGroups: flex.ExpandStringSet(d.Get("security_groups").(*schema.Set)), Subnets: flex.ExpandStringSet(d.Get("subnets").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpcConnectorName: aws.String(vpcConnectorName), } @@ -98,7 +101,7 @@ func resourceVPCConnectorCreate(ctx context.Context, d *schema.ResourceData, met } func resourceVPCConnectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) vpcConnector, err := FindVPCConnectorByARN(ctx, conn, d.Id()) @@ -128,7 +131,7 @@ func resourceVPCConnectorUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceVPCConnectorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) log.Printf("[DEBUG] Deleting App Runner VPC Connector: %s", d.Id()) _, err := conn.DeleteVpcConnectorWithContext(ctx, &apprunner.DeleteVpcConnectorInput{ diff --git a/internal/service/apprunner/vpc_connector_test.go b/internal/service/apprunner/vpc_connector_test.go index f00418aee8e..b24237baf33 100644 --- a/internal/service/apprunner/vpc_connector_test.go +++ b/internal/service/apprunner/vpc_connector_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( @@ -167,7 +170,7 @@ func testAccCheckVPCConnectorDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) _, err := tfapprunner.FindVPCConnectorByARN(ctx, conn, rs.Primary.ID) @@ -197,7 +200,7 @@ func testAccCheckVPCConnectorExists(ctx context.Context, n string) resource.Test return fmt.Errorf("No App Runner VPC Connector ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) _, err := tfapprunner.FindVPCConnectorByARN(ctx, conn, rs.Primary.ID) @@ -258,7 +261,7 @@ resource "aws_apprunner_vpc_connector" "test" { } func testAccPreCheckVPCConnector(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.ListVpcConnectorsInput{} diff --git a/internal/service/apprunner/vpc_ingress_connection.go b/internal/service/apprunner/vpc_ingress_connection.go index 333009fe128..f2592d0f9f2 100644 --- a/internal/service/apprunner/vpc_ingress_connection.go +++ b/internal/service/apprunner/vpc_ingress_connection.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( "context" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -77,12 +79,12 @@ func ResourceVPCIngressConnection() *schema.Resource { } func resourceVPCIngressConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) name := d.Get("name").(string) input := &apprunner.CreateVpcIngressConnectionInput{ ServiceArn: aws.String(d.Get("service_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpcIngressConnectionName: aws.String(name), } @@ -93,24 +95,24 @@ func resourceVPCIngressConnectionCreate(ctx context.Context, d *schema.ResourceD output, err := conn.CreateVpcIngressConnectionWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating App Runner VPC Ingress Configuration (%s): %w", name, err)) + return diag.Errorf("creating App Runner VPC Ingress Configuration (%s): %s", name, err) } if output == nil || output.VpcIngressConnection == nil { - return diag.FromErr(fmt.Errorf("error creating App Runner VPC Ingress Configuration (%s): empty output", name)) + return diag.Errorf("creating App Runner VPC Ingress Configuration (%s): empty output", name) } d.SetId(aws.StringValue(output.VpcIngressConnection.VpcIngressConnectionArn)) if err := WaitVPCIngressConnectionActive(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for App Runner VPC Ingress Configuration (%s) creation: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner VPC Ingress Configuration (%s) creation: %s", d.Id(), err) } return resourceVPCIngressConnectionRead(ctx, d, meta) } func resourceVPCIngressConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := 
&apprunner.DescribeVpcIngressConnectionInput{ VpcIngressConnectionArn: aws.String(d.Id()), @@ -125,16 +127,16 @@ func resourceVPCIngressConnectionRead(ctx context.Context, d *schema.ResourceDat } if err != nil { - return diag.FromErr(fmt.Errorf("error reading App Runner VPC Ingress Configuration (%s): %w", d.Id(), err)) + return diag.Errorf("reading App Runner VPC Ingress Configuration (%s): %s", d.Id(), err) } if output == nil || output.VpcIngressConnection == nil { - return diag.FromErr(fmt.Errorf("error reading App Runner VPC Ingress Configuration (%s): empty output", d.Id())) + return diag.Errorf("reading App Runner VPC Ingress Configuration (%s): empty output", d.Id()) } if aws.StringValue(output.VpcIngressConnection.Status) == VPCIngressConnectionStatusDeleted { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading App Runner VPC Ingress Configuration (%s): %s after creation", d.Id(), aws.StringValue(output.VpcIngressConnection.Status))) + return diag.Errorf("reading App Runner VPC Ingress Configuration (%s): %s after creation", d.Id(), aws.StringValue(output.VpcIngressConnection.Status)) } log.Printf("[WARN] App Runner VPC Ingress Configuration (%s) not found, removing from state", d.Id()) d.SetId("") @@ -151,7 +153,7 @@ func resourceVPCIngressConnectionRead(ctx context.Context, d *schema.ResourceDat d.Set("domain_name", config.DomainName) if err := d.Set("ingress_vpc_configuration", flattenIngressVPCConfiguration(config.IngressVpcConfiguration)); err != nil { - return diag.Errorf("error setting ingress_vpc_configuration: %s", err) + return diag.Errorf("setting ingress_vpc_configuration: %s", err) } return nil @@ -163,7 +165,7 @@ func resourceVPCIngressConnectionUpdate(ctx context.Context, d *schema.ResourceD } func resourceVPCIngressConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppRunnerConn() + conn := meta.(*conns.AWSClient).AppRunnerConn(ctx) input := 
&apprunner.DeleteVpcIngressConnectionInput{ VpcIngressConnectionArn: aws.String(d.Id()), @@ -176,14 +178,14 @@ func resourceVPCIngressConnectionDelete(ctx context.Context, d *schema.ResourceD } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting App Runner VPC Ingress Configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting App Runner VPC Ingress Configuration (%s): %s", d.Id(), err) } if err := WaitVPCIngressConnectionDeleted(ctx, conn, d.Id()); err != nil { if tfawserr.ErrCodeEquals(err, apprunner.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error waiting for App Runner VPC Ingress Configuration (%s) deletion: %w", d.Id(), err)) + return diag.Errorf("waiting for App Runner VPC Ingress Configuration (%s) deletion: %s", d.Id(), err) } return nil diff --git a/internal/service/apprunner/vpc_ingress_connection_test.go b/internal/service/apprunner/vpc_ingress_connection_test.go index 4144e5eccd3..1356c41c2ca 100644 --- a/internal/service/apprunner/vpc_ingress_connection_test.go +++ b/internal/service/apprunner/vpc_ingress_connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package apprunner_test import ( @@ -128,7 +131,7 @@ func testAccCheckVPCIngressConnectionDestroy(ctx context.Context) resource.TestC continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeVpcIngressConnectionInput{ VpcIngressConnectionArn: aws.String(rs.Primary.ID), @@ -164,7 +167,7 @@ func testAccCheckVPCIngressConnectionExists(ctx context.Context, n string) resou return fmt.Errorf("No App Runner Service ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppRunnerConn(ctx) input := &apprunner.DescribeVpcIngressConnectionInput{ VpcIngressConnectionArn: aws.String(rs.Primary.ID), diff --git a/internal/service/apprunner/wait.go b/internal/service/apprunner/wait.go index 47e5b10c5af..6387d374bf4 100644 --- a/internal/service/apprunner/wait.go +++ b/internal/service/apprunner/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package apprunner import ( diff --git a/internal/service/appstream/directory_config.go b/internal/service/appstream/directory_config.go index ff4a26d19f6..168a7588d22 100644 --- a/internal/service/appstream/directory_config.go +++ b/internal/service/appstream/directory_config.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream import ( "context" - "fmt" "log" "time" @@ -68,7 +70,7 @@ func ResourceDirectoryConfig() *schema.Resource { } func resourceDirectoryConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) directoryName := d.Get("directory_name").(string) input := &appstream.CreateDirectoryConfigInput{ @@ -79,11 +81,11 @@ func resourceDirectoryConfigCreate(ctx context.Context, d *schema.ResourceData, output, err := conn.CreateDirectoryConfigWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating AppStream Directory Config (%s): %w", directoryName, err)) + return diag.Errorf("creating AppStream Directory Config (%s): %s", directoryName, err) } if output == nil || output.DirectoryConfig == nil { - return diag.Errorf("error creating AppStream Directory Config (%s): empty response", directoryName) + return diag.Errorf("creating AppStream Directory Config (%s): empty response", directoryName) } d.SetId(aws.StringValue(output.DirectoryConfig.DirectoryName)) @@ -92,7 +94,7 @@ func resourceDirectoryConfigCreate(ctx context.Context, d *schema.ResourceData, } func resourceDirectoryConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) resp, err := conn.DescribeDirectoryConfigsWithContext(ctx, &appstream.DescribeDirectoryConfigsInput{DirectoryNames: []*string{aws.String(d.Id())}}) @@ -103,15 +105,15 @@ func resourceDirectoryConfigRead(ctx context.Context, d *schema.ResourceData, me } if err != nil { - return diag.FromErr(fmt.Errorf("error reading AppStream Directory Config (%s): %w", d.Id(), err)) + return diag.Errorf("reading AppStream Directory Config (%s): %s", d.Id(), err) } if len(resp.DirectoryConfigs) == 0 { - return 
diag.FromErr(fmt.Errorf("error reading AppStream Directory Config (%s): %s", d.Id(), "empty response")) + return diag.Errorf("reading AppStream Directory Config (%s): %s", d.Id(), "empty response") } if len(resp.DirectoryConfigs) > 1 { - return diag.FromErr(fmt.Errorf("error reading AppStream Directory Config (%s): %s", d.Id(), "multiple Directory Configs found")) + return diag.Errorf("reading AppStream Directory Config (%s): %s", d.Id(), "multiple Directory Configs found") } directoryConfig := resp.DirectoryConfigs[0] @@ -121,14 +123,14 @@ func resourceDirectoryConfigRead(ctx context.Context, d *schema.ResourceData, me d.Set("organizational_unit_distinguished_names", flex.FlattenStringSet(directoryConfig.OrganizationalUnitDistinguishedNames)) if err = d.Set("service_account_credentials", flattenServiceAccountCredentials(directoryConfig.ServiceAccountCredentials, d)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for AppStream Directory Config (%s): %w", "service_account_credentials", d.Id(), err)) + return diag.Errorf("setting `%s` for AppStream Directory Config (%s): %s", "service_account_credentials", d.Id(), err) } return nil } func resourceDirectoryConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) input := &appstream.UpdateDirectoryConfigInput{ DirectoryName: aws.String(d.Id()), } @@ -143,14 +145,14 @@ func resourceDirectoryConfigUpdate(ctx context.Context, d *schema.ResourceData, _, err := conn.UpdateDirectoryConfigWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating AppStream Directory Config (%s): %w", d.Id(), err)) + return diag.Errorf("updating AppStream Directory Config (%s): %s", d.Id(), err) } return resourceDirectoryConfigRead(ctx, d, meta) } func resourceDirectoryConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) log.Printf("[DEBUG] Deleting AppStream Directory Config: (%s)", d.Id()) _, err := conn.DeleteDirectoryConfigWithContext(ctx, &appstream.DeleteDirectoryConfigInput{ @@ -162,7 +164,7 @@ func resourceDirectoryConfigDelete(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting AppStream Directory Config (%s): %w", d.Id(), err)) + return diag.Errorf("deleting AppStream Directory Config (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/appstream/directory_config_test.go b/internal/service/appstream/directory_config_test.go index 75c8e007988..d82fba8388a 100644 --- a/internal/service/appstream/directory_config_test.go +++ b/internal/service/appstream/directory_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -156,7 +159,7 @@ func testAccCheckDirectoryConfigExists(ctx context.Context, resourceName string, return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) resp, err := conn.DescribeDirectoryConfigsWithContext(ctx, &appstream.DescribeDirectoryConfigsInput{DirectoryNames: []*string{aws.String(rs.Primary.ID)}}) if err != nil { @@ -175,7 +178,7 @@ func testAccCheckDirectoryConfigExists(ctx context.Context, resourceName string, func testAccCheckDirectoryConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_directory_config" { diff --git a/internal/service/appstream/find.go 
b/internal/service/appstream/find.go index 67aaedecb70..8fe26a96c63 100644 --- a/internal/service/appstream/find.go +++ b/internal/service/appstream/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream import ( diff --git a/internal/service/appstream/fleet.go b/internal/service/appstream/fleet.go index 98e59dd3668..b54236fd7b4 100644 --- a/internal/service/appstream/fleet.go +++ b/internal/service/appstream/fleet.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream import ( "context" - "fmt" "log" "reflect" "time" @@ -202,12 +204,12 @@ func ResourceFleet() *schema.Resource { } func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) input := &appstream.CreateFleetInput{ Name: aws.String(d.Get("name").(string)), InstanceType: aws.String(d.Get("instance_type").(string)), ComputeCapacity: expandComputeCapacity(d.Get("compute_capacity").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -286,7 +288,7 @@ func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta inter output, err = conn.CreateFleetWithContext(ctx, input) } if err != nil { - return diag.FromErr(fmt.Errorf("error creating Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("creating Appstream Fleet (%s): %s", d.Id(), err) } d.SetId(aws.StringValue(output.Fleet.Name)) @@ -296,18 +298,18 @@ func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta inter Name: aws.String(d.Id()), }) if err != nil { - return diag.FromErr(fmt.Errorf("error starting Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("starting Appstream Fleet (%s): %s", d.Id(), err) } if _, err = waitFleetStateRunning(ctx, conn, d.Id()); err != nil { - return 
diag.FromErr(fmt.Errorf("error waiting for Appstream Fleet (%s) to be running: %w", d.Id(), err)) + return diag.Errorf("waiting for Appstream Fleet (%s) to be running: %s", d.Id(), err) } return resourceFleetRead(ctx, d, meta) } func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) resp, err := conn.DescribeFleetsWithContext(ctx, &appstream.DescribeFleetsInput{Names: []*string{aws.String(d.Id())}}) @@ -318,15 +320,15 @@ func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interfa } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("reading Appstream Fleet (%s): %s", d.Id(), err) } if len(resp.Fleets) == 0 { - return diag.FromErr(fmt.Errorf("error reading Appstream Fleet (%s): %s", d.Id(), "empty response")) + return diag.Errorf("reading Appstream Fleet (%s): %s", d.Id(), "empty response") } if len(resp.Fleets) > 1 { - return diag.FromErr(fmt.Errorf("error reading Appstream Fleet (%s): %s", d.Id(), "multiple fleets found")) + return diag.Errorf("reading Appstream Fleet (%s): %s", d.Id(), "multiple fleets found") } fleet := resp.Fleets[0] @@ -378,7 +380,7 @@ func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) input := &appstream.UpdateFleetInput{ Name: aws.String(d.Id()), } @@ -394,10 +396,10 @@ func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta inter Name: aws.String(d.Id()), }) if err != nil { - return diag.FromErr(fmt.Errorf("error stopping Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("stopping Appstream Fleet (%s): %s", d.Id(), err) } if _, 
err = waitFleetStateStopped(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for Appstream Fleet (%s) to be stopped: %w", d.Id(), err)) + return diag.Errorf("waiting for Appstream Fleet (%s) to be stopped: %s", d.Id(), err) } } @@ -459,7 +461,7 @@ func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta inter _, err := conn.UpdateFleetWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("updating Appstream Fleet (%s): %s", d.Id(), err) } // Start fleet workflow if stopped @@ -468,11 +470,11 @@ func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta inter Name: aws.String(d.Id()), }) if err != nil { - return diag.FromErr(fmt.Errorf("error starting Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("starting Appstream Fleet (%s): %s", d.Id(), err) } if _, err = waitFleetStateRunning(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for Appstream Fleet (%s) to be running: %w", d.Id(), err)) + return diag.Errorf("waiting for Appstream Fleet (%s) to be running: %s", d.Id(), err) } } @@ -480,7 +482,7 @@ func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceFleetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) // Stop fleet workflow log.Printf("[DEBUG] Stopping AppStream Fleet: (%s)", d.Id()) @@ -488,11 +490,11 @@ func resourceFleetDelete(ctx context.Context, d *schema.ResourceData, meta inter Name: aws.String(d.Id()), }) if err != nil { - return diag.FromErr(fmt.Errorf("error stopping Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("stopping Appstream Fleet (%s): %s", d.Id(), err) } if _, err = waitFleetStateStopped(ctx, conn, d.Id()); err != nil { - return 
diag.FromErr(fmt.Errorf("error waiting for Appstream Fleet (%s) to be stopped: %w", d.Id(), err)) + return diag.Errorf("waiting for Appstream Fleet (%s) to be stopped: %s", d.Id(), err) } log.Printf("[DEBUG] Deleting AppStream Fleet: (%s)", d.Id()) @@ -505,7 +507,7 @@ func resourceFleetDelete(ctx context.Context, d *schema.ResourceData, meta inter } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Appstream Fleet (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Appstream Fleet (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/appstream/fleet_stack_association.go b/internal/service/appstream/fleet_stack_association.go index 1031c6508fd..8e91b8dac6b 100644 --- a/internal/service/appstream/fleet_stack_association.go +++ b/internal/service/appstream/fleet_stack_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream import ( @@ -41,7 +44,7 @@ func ResourceFleetStackAssociation() *schema.Resource { } func resourceFleetStackAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) input := &appstream.AssociateFleetInput{ FleetName: aws.String(d.Get("fleet_name").(string)), StackName: aws.String(d.Get("stack_name").(string)), @@ -64,7 +67,7 @@ func resourceFleetStackAssociationCreate(ctx context.Context, d *schema.Resource _, err = conn.AssociateFleetWithContext(ctx, input) } if err != nil { - return diag.FromErr(fmt.Errorf("error creating AppStream Fleet Stack Association (%s): %w", d.Id(), err)) + return diag.Errorf("creating AppStream Fleet Stack Association (%s): %s", d.Id(), err) } d.SetId(EncodeStackFleetID(d.Get("fleet_name").(string), d.Get("stack_name").(string))) @@ -73,11 +76,11 @@ func resourceFleetStackAssociationCreate(ctx context.Context, d *schema.Resource } func resourceFleetStackAssociationRead(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) fleetName, stackName, err := DecodeStackFleetID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream Fleet Stack Association ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream Fleet Stack Association ID (%s): %s", d.Id(), err) } err = FindFleetStackAssociation(ctx, conn, fleetName, stackName) @@ -89,7 +92,7 @@ func resourceFleetStackAssociationRead(ctx context.Context, d *schema.ResourceDa } if err != nil { - return diag.FromErr(fmt.Errorf("error reading AppStream Fleet Stack Association (%s): %w", d.Id(), err)) + return diag.Errorf("reading AppStream Fleet Stack Association (%s): %s", d.Id(), err) } d.Set("fleet_name", fleetName) @@ -99,11 +102,11 @@ func resourceFleetStackAssociationRead(ctx context.Context, d *schema.ResourceDa } func resourceFleetStackAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) fleetName, stackName, err := DecodeStackFleetID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream Fleet Stack Association ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream Fleet Stack Association ID (%s): %s", d.Id(), err) } _, err = conn.DisassociateFleetWithContext(ctx, &appstream.DisassociateFleetInput{ @@ -115,7 +118,7 @@ func resourceFleetStackAssociationDelete(ctx context.Context, d *schema.Resource if tfawserr.ErrCodeEquals(err, appstream.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error deleting AppStream Fleet Stack Association (%s): %w", d.Id(), err)) + return diag.Errorf("deleting AppStream Fleet Stack Association (%s): %s", d.Id(), err) } return nil } diff --git 
a/internal/service/appstream/fleet_stack_association_test.go b/internal/service/appstream/fleet_stack_association_test.go index fb703d58346..18898ff1c21 100644 --- a/internal/service/appstream/fleet_stack_association_test.go +++ b/internal/service/appstream/fleet_stack_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -79,7 +82,7 @@ func testAccCheckFleetStackAssociationExists(ctx context.Context, resourceName s return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) fleetName, stackName, err := tfappstream.DecodeStackFleetID(rs.Primary.ID) if err != nil { @@ -98,7 +101,7 @@ func testAccCheckFleetStackAssociationExists(ctx context.Context, resourceName s func testAccCheckFleetStackAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_fleet_stack_association" { diff --git a/internal/service/appstream/fleet_test.go b/internal/service/appstream/fleet_test.go index 4aeae96d117..e4d54e27f10 100644 --- a/internal/service/appstream/fleet_test.go +++ b/internal/service/appstream/fleet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -312,7 +315,7 @@ func testAccCheckFleetExists(ctx context.Context, resourceName string, appStream return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) resp, err := conn.DescribeFleetsWithContext(ctx, &appstream.DescribeFleetsInput{Names: []*string{aws.String(rs.Primary.ID)}}) if err != nil { @@ -331,7 +334,7 @@ func testAccCheckFleetExists(ctx context.Context, resourceName string, appStream func testAccCheckFleetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_fleet" { diff --git a/internal/service/appstream/generate.go b/internal/service/appstream/generate.go index edcee22bc01..18f30d88779 100644 --- a/internal/service/appstream/generate.go +++ b/internal/service/appstream/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeDirectoryConfigs,DescribeFleets,DescribeImageBuilders,DescribeStacks,DescribeUsers,ListAssociatedStacks //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package appstream diff --git a/internal/service/appstream/image_builder.go b/internal/service/appstream/image_builder.go index 14753e72e40..7b3abd67c49 100644 --- a/internal/service/appstream/image_builder.go +++ b/internal/service/appstream/image_builder.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream import ( @@ -176,13 +179,13 @@ func ResourceImageBuilder() *schema.Resource { } func resourceImageBuilderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) name := d.Get("name").(string) input := &appstream.CreateImageBuilderInput{ InstanceType: aws.String(d.Get("instance_type").(string)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("access_endpoint"); ok && v.(*schema.Set).Len() > 0 { @@ -243,7 +246,7 @@ func resourceImageBuilderCreate(ctx context.Context, d *schema.ResourceData, met } func resourceImageBuilderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) imageBuilder, err := FindImageBuilderByName(ctx, conn, d.Id()) @@ -296,7 +299,7 @@ func resourceImageBuilderUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceImageBuilderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) log.Printf("[DEBUG] Deleting AppStream ImageBuilder: %s", d.Id()) _, err := conn.DeleteImageBuilderWithContext(ctx, &appstream.DeleteImageBuilderInput{ diff --git a/internal/service/appstream/image_builder_test.go b/internal/service/appstream/image_builder_test.go index 23eff0e49ae..f6c98062477 100644 --- a/internal/service/appstream/image_builder_test.go +++ b/internal/service/appstream/image_builder_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -236,7 +239,7 @@ func testAccCheckImageBuilderExists(ctx context.Context, n string) resource.Test return fmt.Errorf("No AppStream ImageBuilder ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) _, err := tfappstream.FindImageBuilderByName(ctx, conn, rs.Primary.ID) @@ -246,7 +249,7 @@ func testAccCheckImageBuilderExists(ctx context.Context, n string) resource.Test func testAccCheckImageBuilderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_image_builder" { diff --git a/internal/service/appstream/service_package_gen.go b/internal/service/appstream/service_package_gen.go index 89f2f94cad9..e3855c290f3 100644 --- a/internal/service/appstream/service_package_gen.go +++ b/internal/service/appstream/service_package_gen.go @@ -5,6 +5,10 @@ package appstream import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + appstream_sdkv1 "github.com/aws/aws-sdk-go/service/appstream" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -72,4 +76,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AppStream } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*appstream_sdkv1.AppStream, error) { + sess := config["session"].(*session_sdkv1.Session) + + return appstream_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/appstream/stack.go b/internal/service/appstream/stack.go index 42f3ecf81af..0929727c929 100644 --- a/internal/service/appstream/stack.go +++ b/internal/service/appstream/stack.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream import ( @@ -249,12 +252,12 @@ func ResourceStack() *schema.Resource { } func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) name := d.Get("name").(string) input := &appstream.CreateStackInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("access_endpoints"); ok { @@ -311,7 +314,7 @@ func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) stack, err := FindStackByName(ctx, conn, d.Id()) @@ -355,7 +358,7 @@ func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &appstream.UpdateStackInput{ @@ -405,7 +408,7 @@ func resourceStackUpdate(ctx 
context.Context, d *schema.ResourceData, meta inter } func resourceStackDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) log.Printf("[DEBUG] Deleting AppStream Stack: (%s)", d.Id()) _, err := conn.DeleteStackWithContext(ctx, &appstream.DeleteStackInput{ diff --git a/internal/service/appstream/stack_test.go b/internal/service/appstream/stack_test.go index 7ab989d6945..dc0f3d9f906 100644 --- a/internal/service/appstream/stack_test.go +++ b/internal/service/appstream/stack_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -358,7 +361,7 @@ func testAccCheckStackExists(ctx context.Context, n string, v *appstream.Stack) return fmt.Errorf("No Appstream Stack ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) output, err := tfappstream.FindStackByName(ctx, conn, rs.Primary.ID) @@ -374,7 +377,7 @@ func testAccCheckStackExists(ctx context.Context, n string, v *appstream.Stack) func testAccCheckStackDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_stack" { diff --git a/internal/service/appstream/status.go b/internal/service/appstream/status.go index 8a8eedfe366..7fdec319781 100644 --- a/internal/service/appstream/status.go +++ b/internal/service/appstream/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream import ( diff --git a/internal/service/appstream/sweep.go b/internal/service/appstream/sweep.go index 83070ca2f48..da9f4a2d1dc 100644 --- a/internal/service/appstream/sweep.go +++ b/internal/service/appstream/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/appstream" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -38,11 +40,11 @@ func init() { func sweepDirectoryConfigs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppStreamConn() + conn := client.AppStreamConn(ctx) input := &appstream.DescribeDirectoryConfigsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -71,7 +73,7 @@ func sweepDirectoryConfigs(region string) error { return fmt.Errorf("error listing AppStream Directory Configs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping AppStream Directory Configs (%s): %w", region, err) @@ -82,11 +84,11 @@ func sweepDirectoryConfigs(region string) error { func sweepFleets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppStreamConn() + conn := client.AppStreamConn(ctx) input := 
&appstream.DescribeFleetsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -115,7 +117,7 @@ func sweepFleets(region string) error { return fmt.Errorf("error listing AppStream Fleets (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping AppStream Fleets (%s): %w", region, err) @@ -126,11 +128,11 @@ func sweepFleets(region string) error { func sweepImageBuilders(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppStreamConn() + conn := client.AppStreamConn(ctx) input := &appstream.DescribeImageBuildersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -159,7 +161,7 @@ func sweepImageBuilders(region string) error { return fmt.Errorf("error listing AppStream Image Builders (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping AppStream Image Builders (%s): %w", region, err) @@ -170,11 +172,11 @@ func sweepImageBuilders(region string) error { func sweepStacks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).AppStreamConn() + conn := client.AppStreamConn(ctx) input := &appstream.DescribeStacksInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -203,7 +205,7 @@ func sweepStacks(region string) error { return fmt.Errorf("error listing AppStream Stacks (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + 
err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping AppStream Stacks (%s): %w", region, err) diff --git a/internal/service/appstream/tags_gen.go b/internal/service/appstream/tags_gen.go index 596edb8500e..a7e8b6f68fb 100644 --- a/internal/service/appstream/tags_gen.go +++ b/internal/service/appstream/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists appstream service tags. +// listTags lists appstream service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn appstreamiface.AppStreamAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn appstreamiface.AppStreamAPI, identifier string) (tftags.KeyValueTags, error) { input := &appstream.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn appstreamiface.AppStreamAPI, identifier // ListTags lists appstream service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AppStreamConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AppStreamConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from appstream service tags. +// KeyValueTags creates tftags.KeyValueTags from appstream service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns appstream service tags from Context. +// getTagsIn returns appstream service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets appstream service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets appstream service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates appstream service tags. +// updateTags updates appstream service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn appstreamiface.AppStreamAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn appstreamiface.AppStreamAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn appstreamiface.AppStreamAPI, identifie // UpdateTags updates appstream service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppStreamConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppStreamConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/appstream/user.go b/internal/service/appstream/user.go index 3aac806870d..44102302a81 100644 --- a/internal/service/appstream/user.go +++ b/internal/service/appstream/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream import ( @@ -75,7 +78,7 @@ func ResourceUser() *schema.Resource { } func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) userName := d.Get("user_name").(string) authType := d.Get("authentication_type").(string) @@ -102,11 +105,11 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf id := EncodeUserID(userName, authType) if err != nil { - return diag.FromErr(fmt.Errorf("error creating AppStream User (%s): %w", id, err)) + return diag.Errorf("creating AppStream User (%s): %s", id, err) } if _, err = waitUserAvailable(ctx, conn, userName, authType); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for AppStream User (%s) to be available: %w", id, err)) + return diag.Errorf("waiting for AppStream User (%s) to be available: %s", id, err) } // Enabling/disabling workflow @@ -118,7 +121,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.DisableUserWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error disabling AppStream User (%s): %w", id, err)) + return diag.Errorf("disabling AppStream User (%s): %s", id, err) } } @@ -128,11 +131,11 @@ func resourceUserCreate(ctx context.Context, d 
*schema.ResourceData, meta interf } func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) userName, authType, err := DecodeUserID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream User ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream User ID (%s): %s", d.Id(), err) } user, err := FindUserByUserNameAndAuthType(ctx, conn, userName, authType) @@ -142,7 +145,7 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac return nil } if err != nil { - return diag.FromErr(fmt.Errorf("error reading AppStream User (%s): %w", d.Id(), err)) + return diag.Errorf("reading AppStream User (%s): %s", d.Id(), err) } d.Set("arn", user.Arn) @@ -158,11 +161,11 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) userName, authType, err := DecodeUserID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream User ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream User ID (%s): %s", d.Id(), err) } if d.HasChange("enabled") { @@ -174,7 +177,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.EnableUserWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error enabling AppStream User (%s): %w", d.Id(), err)) + return diag.Errorf("enabling AppStream User (%s): %s", d.Id(), err) } } else { input := &appstream.DisableUserInput{ @@ -184,7 +187,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.DisableUserWithContext(ctx, input) if err != nil { - return 
diag.FromErr(fmt.Errorf("error disabling AppStream User (%s): %w", d.Id(), err)) + return diag.Errorf("disabling AppStream User (%s): %s", d.Id(), err) } } } @@ -193,11 +196,11 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) userName, authType, err := DecodeUserID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream User ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream User ID (%s): %s", d.Id(), err) } _, err = conn.DeleteUserWithContext(ctx, &appstream.DeleteUserInput{ @@ -209,7 +212,7 @@ func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interf if tfawserr.ErrCodeEquals(err, appstream.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error deleting AppStream User (%s): %w", d.Id(), err)) + return diag.Errorf("deleting AppStream User (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/appstream/user_stack_association.go b/internal/service/appstream/user_stack_association.go index bcd12d9b24b..22902195d15 100644 --- a/internal/service/appstream/user_stack_association.go +++ b/internal/service/appstream/user_stack_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream import ( @@ -53,7 +56,7 @@ func ResourceUserStackAssociation() *schema.Resource { } func resourceUserStackAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) input := &appstream.UserStackAssociation{ AuthenticationType: aws.String(d.Get("authentication_type").(string)), @@ -72,7 +75,7 @@ func resourceUserStackAssociationCreate(ctx context.Context, d *schema.ResourceD }) if err != nil { - return diag.FromErr(fmt.Errorf("error creating AppStream User Stack Association (%s): %w", id, err)) + return diag.Errorf("creating AppStream User Stack Association (%s): %s", id, err) } if len(output.Errors) > 0 { var errs *multierror.Error @@ -80,7 +83,7 @@ func resourceUserStackAssociationCreate(ctx context.Context, d *schema.ResourceD for _, err := range output.Errors { errs = multierror.Append(errs, fmt.Errorf("%s: %s", aws.StringValue(err.ErrorCode), aws.StringValue(err.ErrorMessage))) } - return diag.FromErr(fmt.Errorf("error creating AppStream User Stack Association (%s): %w", id, errs)) + return diag.Errorf("creating AppStream User Stack Association (%s): %s", id, errs) } d.SetId(id) @@ -89,11 +92,11 @@ func resourceUserStackAssociationCreate(ctx context.Context, d *schema.ResourceD } func resourceUserStackAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) userName, authType, stackName, err := DecodeUserStackAssociationID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream User Stack Association ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream User Stack Association ID (%s): %s", d.Id(), err) } resp, err := conn.DescribeUserStackAssociationsWithContext(ctx, @@ -110,12 
+113,12 @@ func resourceUserStackAssociationRead(ctx context.Context, d *schema.ResourceDat } if err != nil { - return diag.FromErr(fmt.Errorf("error reading AppStream User Stack Association (%s): %w", d.Id(), err)) + return diag.Errorf("reading AppStream User Stack Association (%s): %s", d.Id(), err) } if resp == nil || len(resp.UserStackAssociations) == 0 || resp.UserStackAssociations[0] == nil { if d.IsNewResource() { - return diag.Errorf("error reading AppStream User Stack Association (%s): empty output after creation", d.Id()) + return diag.Errorf("reading AppStream User Stack Association (%s): empty output after creation", d.Id()) } log.Printf("[WARN] AppStream User Stack Association (%s) not found, removing from state", d.Id()) d.SetId("") @@ -132,11 +135,11 @@ func resourceUserStackAssociationRead(ctx context.Context, d *schema.ResourceDat } func resourceUserStackAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AppStreamConn() + conn := meta.(*conns.AWSClient).AppStreamConn(ctx) userName, authType, stackName, err := DecodeUserStackAssociationID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding AppStream User Stack Association ID (%s): %w", d.Id(), err)) + return diag.Errorf("decoding AppStream User Stack Association ID (%s): %s", d.Id(), err) } input := &appstream.UserStackAssociation{ @@ -153,7 +156,7 @@ func resourceUserStackAssociationDelete(ctx context.Context, d *schema.ResourceD if tfawserr.ErrCodeEquals(err, appstream.ErrCodeResourceNotFoundException) { return nil } - return diag.FromErr(fmt.Errorf("error deleting AppStream User Stack Association (%s): %w", d.Id(), err)) + return diag.Errorf("deleting AppStream User Stack Association (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/appstream/user_stack_association_test.go b/internal/service/appstream/user_stack_association_test.go index 41b44e6bfd3..4e599931614 100644 --- 
a/internal/service/appstream/user_stack_association_test.go +++ b/internal/service/appstream/user_stack_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -123,7 +126,7 @@ func testAccCheckUserStackAssociationExists(ctx context.Context, resourceName st return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) userName, authType, stackName, err := tfappstream.DecodeUserStackAssociationID(rs.Primary.ID) if err != nil { @@ -150,7 +153,7 @@ func testAccCheckUserStackAssociationExists(ctx context.Context, resourceName st func testAccCheckUserStackAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_user_stack_association" { diff --git a/internal/service/appstream/user_test.go b/internal/service/appstream/user_test.go index 7c0599f75a1..fc97fb97a32 100644 --- a/internal/service/appstream/user_test.go +++ b/internal/service/appstream/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appstream_test import ( @@ -145,7 +148,7 @@ func testAccCheckUserExists(ctx context.Context, resourceName string, appStreamU return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) userName, authType, err := tfappstream.DecodeUserID(rs.Primary.ID) if err != nil { @@ -168,7 +171,7 @@ func testAccCheckUserExists(ctx context.Context, resourceName string, appStreamU func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppStreamConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appstream_user" { diff --git a/internal/service/appstream/wait.go b/internal/service/appstream/wait.go index a5b1d780aef..bd806d4dc06 100644 --- a/internal/service/appstream/wait.go +++ b/internal/service/appstream/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appstream import ( diff --git a/internal/service/appsync/api_cache.go b/internal/service/appsync/api_cache.go index fcf2444c6c0..f7b4c5d6db4 100644 --- a/internal/service/appsync/api_cache.go +++ b/internal/service/appsync/api_cache.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appsync import ( @@ -61,7 +64,7 @@ func ResourceAPICache() *schema.Resource { func resourceAPICacheCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID := d.Get("api_id").(string) @@ -96,7 +99,7 @@ func resourceAPICacheCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceAPICacheRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) cache, err := FindAPICacheByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -121,7 +124,7 @@ func resourceAPICacheRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceAPICacheUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) params := &appsync.UpdateApiCacheInput{ ApiId: aws.String(d.Id()), @@ -153,7 +156,7 @@ func resourceAPICacheUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceAPICacheDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) input := &appsync.DeleteApiCacheInput{ ApiId: aws.String(d.Id()), diff --git a/internal/service/appsync/api_cache_test.go b/internal/service/appsync/api_cache_test.go index aae4e6baa0d..7aaff9bb7dd 100644 --- a/internal/service/appsync/api_cache_test.go +++ b/internal/service/appsync/api_cache_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appsync_test import ( @@ -71,7 +74,7 @@ func testAccAPICache_disappears(t *testing.T) { func testAccCheckAPICacheDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appsync_api_cache" { continue @@ -98,7 +101,7 @@ func testAccCheckAPICacheExists(ctx context.Context, resourceName string, apiCac return fmt.Errorf("Appsync Api Cache Not found in state: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) cache, err := tfappsync.FindAPICacheByID(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/appsync/api_key.go b/internal/service/appsync/api_key.go index bb9540d0181..154306cde7f 100644 --- a/internal/service/appsync/api_key.go +++ b/internal/service/appsync/api_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appsync import ( @@ -61,7 +64,7 @@ func ResourceAPIKey() *schema.Resource { func resourceAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID := d.Get("api_id").(string) @@ -84,7 +87,7 @@ func resourceAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID, keyID, err := DecodeAPIKeyID(d.Id()) if err != nil { @@ -110,7 +113,7 @@ func resourceAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID, keyID, err := DecodeAPIKeyID(d.Id()) if err != nil { @@ -139,7 +142,7 @@ func resourceAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID, keyID, err := DecodeAPIKeyID(d.Id()) if err != nil { diff --git a/internal/service/appsync/api_key_test.go b/internal/service/appsync/api_key_test.go index 4c107be7a69..d849f18b970 100644 --- a/internal/service/appsync/api_key_test.go +++ b/internal/service/appsync/api_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync_test
 
 import (
@@ -123,7 +126,7 @@ func testAccAPIKey_expires(t *testing.T) {
 
 func testAccCheckAPIKeyDestroy(ctx context.Context) resource.TestCheckFunc {
	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appsync_api_key" {
 				continue
@@ -164,7 +167,7 @@ func testAccCheckAPIKeyExists(ctx context.Context, resourceName string, apiKey *
 			return err
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 		key, err := tfappsync.GetAPIKey(ctx, apiID, keyID, conn)
 		if err != nil {
 			return err
diff --git a/internal/service/appsync/appsync_test.go b/internal/service/appsync/appsync_test.go
index 409c2aec4e1..b81b5ba04a3 100644
--- a/internal/service/appsync/appsync_test.go
+++ b/internal/service/appsync/appsync_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync_test
 
 import (
@@ -63,6 +66,7 @@ func TestAccAppSync_serial(t *testing.T) {
 			"AdditionalAuthentication_awsLambda": testAccGraphQLAPI_AdditionalAuthentication_lambda,
 			"AdditionalAuthentication_multiple":  testAccGraphQLAPI_AdditionalAuthentication_multiple,
 			"xrayEnabled":                        testAccGraphQLAPI_xrayEnabled,
+			"visibility":                         testAccGraphQLAPI_visibility,
 		},
 		"Function": {
 			"basic": testAccFunction_basic,
diff --git a/internal/service/appsync/datasource.go b/internal/service/appsync/datasource.go
index 05628f7d84f..de883a6c5d4 100644
--- a/internal/service/appsync/datasource.go
+++ b/internal/service/appsync/datasource.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync
 
 import (
@@ -278,7 +281,7 @@ func ResourceDataSource() *schema.Resource {
 
 func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 	region := meta.(*conns.AWSClient).Region
 
 	name := d.Get("name").(string)
@@ -337,7 +340,7 @@ func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	apiID, name, err := DecodeID(d.Id())
 
@@ -390,7 +393,7 @@ func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 	region := meta.(*conns.AWSClient).Region
 
 	apiID, name, err := DecodeID(d.Id())
@@ -448,7 +451,7 @@ func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceDataSourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	apiID, name, err := DecodeID(d.Id())
diff --git a/internal/service/appsync/datasource_test.go b/internal/service/appsync/datasource_test.go
index 5e6fb55ebf0..12186bb3747 100644
--- a/internal/service/appsync/datasource_test.go
+++ b/internal/service/appsync/datasource_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync_test
 
 import (
@@ -615,7 +618,7 @@ func testAccDataSource_Type_none(t *testing.T) {
 
 func testAccCheckDataSourceDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appsync_datasource" {
 				continue
@@ -661,7 +664,7 @@ func testAccCheckExistsDataSource(ctx context.Context, name string) resource.Tes
 			return err
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 
 		_, err = tfappsync.FindDataSourceByTwoPartKey(ctx, conn, apiID, name)
diff --git a/internal/service/appsync/domain_name.go b/internal/service/appsync/domain_name.go
index 394c7979654..89c6e67b1f4 100644
--- a/internal/service/appsync/domain_name.go
+++ b/internal/service/appsync/domain_name.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync
 
 import (
@@ -59,7 +62,7 @@ func ResourceDomainName() *schema.Resource {
 
 func resourceDomainNameCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	params := &appsync.CreateDomainNameInput{
 		CertificateArn: aws.String(d.Get("certificate_arn").(string)),
@@ -79,7 +82,7 @@ func resourceDomainNameCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	domainName, err := FindDomainNameByID(ctx, conn, d.Id())
 	if domainName == nil && !d.IsNewResource() {
@@ -103,7 +106,7 @@ func resourceDomainNameRead(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceDomainNameUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	params := &appsync.UpdateDomainNameInput{
 		DomainName: aws.String(d.Id()),
@@ -123,7 +126,7 @@ func resourceDomainNameUpdate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceDomainNameDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	input := &appsync.DeleteDomainNameInput{
 		DomainName: aws.String(d.Id()),
diff --git a/internal/service/appsync/domain_name_api_association.go b/internal/service/appsync/domain_name_api_association.go
index 8cdb3998f5a..1bff3ff4717 100644
--- a/internal/service/appsync/domain_name_api_association.go
+++ b/internal/service/appsync/domain_name_api_association.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync
 
 import (
@@ -40,7 +43,7 @@ func ResourceDomainNameAPIAssociation() *schema.Resource {
 
 func resourceDomainNameAPIAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	params := &appsync.AssociateApiInput{
 		ApiId:      aws.String(d.Get("api_id").(string)),
@@ -63,7 +66,7 @@ func resourceDomainNameAPIAssociationCreate(ctx context.Context, d *schema.Resou
 
 func resourceDomainNameAPIAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	association, err := FindDomainNameAPIAssociationByID(ctx, conn, d.Id())
 	if association == nil && !d.IsNewResource() {
@@ -84,7 +87,7 @@ func resourceDomainNameAPIAssociationRead(ctx context.Context, d *schema.Resourc
 
 func resourceDomainNameAPIAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	params := &appsync.AssociateApiInput{
 		ApiId:      aws.String(d.Get("api_id").(string)),
@@ -105,7 +108,7 @@ func resourceDomainNameAPIAssociationUpdate(ctx context.Context, d *schema.Resou
 
 func resourceDomainNameAPIAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	input := &appsync.DisassociateApiInput{
 		DomainName: aws.String(d.Id()),
diff --git a/internal/service/appsync/domain_name_api_association_test.go b/internal/service/appsync/domain_name_api_association_test.go
index 4230317b265..97e6b1f7d15 100644
--- a/internal/service/appsync/domain_name_api_association_test.go
+++ b/internal/service/appsync/domain_name_api_association_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync_test
 
 import (
@@ -83,7 +86,7 @@ func testAccDomainNameAPIAssociation_disappears(t *testing.T) {
 
 func testAccCheckDomainNameAPIAssociationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appsync_domain_name" {
 				continue
@@ -113,7 +116,7 @@ func testAccCheckDomainNameAPIAssociationExists(ctx context.Context, resourceNam
 		if !ok {
 			return fmt.Errorf("Appsync Domain Name Not found in state: %s", resourceName)
 		}
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 
 		association, err := tfappsync.FindDomainNameAPIAssociationByID(ctx, conn, rs.Primary.ID)
 		if err != nil {
diff --git a/internal/service/appsync/domain_name_test.go b/internal/service/appsync/domain_name_test.go
index c479f816cc5..3fdd909bef4 100644
--- a/internal/service/appsync/domain_name_test.go
+++ b/internal/service/appsync/domain_name_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync_test
 
 import (
@@ -113,7 +116,7 @@ func testAccDomainName_disappears(t *testing.T) {
 
 func testAccCheckDomainNameDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appsync_domain_name" {
 				continue
@@ -143,7 +146,7 @@ func testAccCheckDomainNameExists(ctx context.Context, resourceName string, doma
 		if !ok {
 			return fmt.Errorf("Appsync Domain Name Not found in state: %s", resourceName)
 		}
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 
 		domain, err := tfappsync.FindDomainNameByID(ctx, conn, rs.Primary.ID)
 		if err != nil {
diff --git a/internal/service/appsync/find.go b/internal/service/appsync/find.go
index 926c787a776..05906df4040 100644
--- a/internal/service/appsync/find.go
+++ b/internal/service/appsync/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync
 
 import (
@@ -82,13 +85,14 @@ func FindDomainNameAPIAssociationByID(ctx context.Context, conn *appsync.AppSync
 	return out.ApiAssociation, nil
 }
 
-func FindTypeByID(ctx context.Context, conn *appsync.AppSync, apiID, format, name string) (*appsync.Type, error) {
+func FindTypeByThreePartKey(ctx context.Context, conn *appsync.AppSync, apiID, format, name string) (*appsync.Type, error) {
 	input := &appsync.GetTypeInput{
 		ApiId:    aws.String(apiID),
 		Format:   aws.String(format),
 		TypeName: aws.String(name),
 	}
-	out, err := conn.GetTypeWithContext(ctx, input)
+
+	output, err := conn.GetTypeWithContext(ctx, input)
 
 	if tfawserr.ErrCodeEquals(err, appsync.ErrCodeNotFoundException) {
 		return nil, &retry.NotFoundError{
@@ -101,9 +105,9 @@ func FindTypeByID(ctx context.Context, conn *appsync.AppSync, apiID, format, nam
 		return nil, err
 	}
 
-	if out == nil {
+	if output == nil || output.Type == nil {
 		return nil, tfresource.NewEmptyResultError(input)
 	}
 
-	return out.Type, nil
+	return output.Type, nil
 }
diff --git a/internal/service/appsync/function.go b/internal/service/appsync/function.go
index 25ae1649cef..323664c2af8 100644
--- a/internal/service/appsync/function.go
+++ b/internal/service/appsync/function.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync
 
 import (
@@ -142,7 +145,7 @@ func ResourceFunction() *schema.Resource {
 
 func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	apiID := d.Get("api_id").(string)
 
@@ -194,7 +197,7 @@ func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	apiID, functionID, err := DecodeFunctionID(d.Id())
 	if err != nil {
@@ -242,7 +245,7 @@ func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta inte
 
 func resourceFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	apiID, functionID, err := DecodeFunctionID(d.Id())
 	if err != nil {
@@ -295,7 +298,7 @@ func resourceFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceFunctionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
 	apiID, functionID, err := DecodeFunctionID(d.Id())
 	if err != nil {
diff --git a/internal/service/appsync/function_test.go b/internal/service/appsync/function_test.go
index 42688d6c390..e1eb5413d1a 100644
--- a/internal/service/appsync/function_test.go
+++ b/internal/service/appsync/function_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync_test
 
 import (
@@ -226,7 +229,7 @@ func testAccFunction_disappears(t *testing.T) {
 
 func testAccCheckFunctionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_appsync_function" {
 				continue
@@ -261,7 +264,7 @@ func testAccCheckFunctionExists(ctx context.Context, name string, config *appsyn
 			return fmt.Errorf("Not found: %s", name)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx)
 
 		apiID, functionID, err := tfappsync.DecodeFunctionID(rs.Primary.ID)
 		if err != nil {
diff --git a/internal/service/appsync/generate.go b/internal/service/appsync/generate.go
index 3749a923f5c..3efb706569b 100644
--- a/internal/service/appsync/generate.go
+++ b/internal/service/appsync/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package appsync
diff --git a/internal/service/appsync/graphql_api.go b/internal/service/appsync/graphql_api.go
index b73aaaf2760..63c08e76120 100644
--- a/internal/service/appsync/graphql_api.go
+++ b/internal/service/appsync/graphql_api.go
@@ -1,10 +1,15 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package appsync
 
 import (
 	"context"
+	"errors"
 	"fmt"
 	"log"
 	"regexp"
+	"time"
 
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/appsync"
@@ -16,6 +21,7 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
+	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/internal/verify"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -48,6 +54,29 @@ func ResourceGraphQLAPI() *schema.Resource {
 							Required:     true,
 							ValidateFunc: validation.StringInSlice(appsync.AuthenticationType_Values(), false),
 						},
+						"lambda_authorizer_config": {
+							Type:     schema.TypeList,
+							Optional: true,
+							MaxItems: 1,
+							Elem: &schema.Resource{
+								Schema: map[string]*schema.Schema{
+									"authorizer_result_ttl_in_seconds": {
+										Type:         schema.TypeInt,
+										Optional:     true,
+										Default:      DefaultAuthorizerResultTTLInSeconds,
+										ValidateFunc: validateAuthorizerResultTTLInSeconds,
+									},
+									"authorizer_uri": {
+										Type:     schema.TypeString,
+										Required: true,
+									},
+									"identity_validation_expression": {
+										Type:     schema.TypeString,
+										Optional: true,
+									},
+								},
+							},
+						},
 						"openid_connect_config": {
 							Type:     schema.TypeList,
 							Optional: true,
@@ -95,50 +124,39 @@
 							},
 						},
 					},
-					"lambda_authorizer_config": {
-						Type:     schema.TypeList,
-						Optional: true,
-						MaxItems: 1,
-						Elem: &schema.Resource{
-							Schema: map[string]*schema.Schema{
-								"authorizer_result_ttl_in_seconds": {
-									Type:         schema.TypeInt,
-									Optional:     true,
-									Default:      DefaultAuthorizerResultTTLInSeconds,
-									ValidateFunc: validateAuthorizerResultTTLInSeconds,
-								},
-								"authorizer_uri": {
-									Type:     schema.TypeString,
-									Required: true,
-								},
-								"identity_validation_expression": {
-									Type:     schema.TypeString,
-									Optional: true,
-								},
-							},
-						},
-					},
 				},
 			},
 		},
+		"arn": {
+			Type:     schema.TypeString,
+			Computed: true,
+		},
 		"authentication_type": {
 			Type:         schema.TypeString,
 			Required:     true,
 			ValidateFunc: validation.StringInSlice(appsync.AuthenticationType_Values(), false),
 		},
-		"schema": {
-			Type:     schema.TypeString,
+		"lambda_authorizer_config": {
+			Type:     schema.TypeList,
 			Optional: true,
-		},
-		"name": {
-			Type:     schema.TypeString,
-			Required: true,
-			ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
-				value := v.(string)
-				if !regexp.MustCompile(`[_A-Za-z][_0-9A-Za-z]*`).MatchString(value) {
-					errors = append(errors, fmt.Errorf("%q must match [_A-Za-z][_0-9A-Za-z]*", k))
-				}
-				return
+			MaxItems: 1,
+			Elem: &schema.Resource{
+				Schema: map[string]*schema.Schema{
+					"authorizer_result_ttl_in_seconds": {
+						Type:         schema.TypeInt,
+						Optional:     true,
+						Default:      DefaultAuthorizerResultTTLInSeconds,
+						ValidateFunc: validateAuthorizerResultTTLInSeconds,
+					},
+					"authorizer_uri": {
+						Type:     schema.TypeString,
+						Required: true,
+					},
+					"identity_validation_expression": {
+						Type:     schema.TypeString,
+						Optional: true,
+					},
+				},
 			},
 		},
 		"log_config": {
@@ -148,26 +166,34 @@
 			Elem: &schema.Resource{
 				Schema: map[string]*schema.Schema{
 					"cloudwatch_logs_role_arn": {
-						Type:     schema.TypeString,
-						Required: true,
-					},
-					"field_log_level": {
-						Type:     schema.TypeString,
-						Required: true,
-						ValidateFunc: validation.StringInSlice([]string{
-							appsync.FieldLogLevelAll,
-							appsync.FieldLogLevelError,
-							appsync.FieldLogLevelNone,
-						}, false),
+						Type:         schema.TypeString,
+						Required:     true,
+						ValidateFunc: verify.ValidARN,
 					},
 					"exclude_verbose_content": {
 						Type:     schema.TypeBool,
 						Optional: true,
 						Default:  false,
 					},
+					"field_log_level": {
+						Type:         schema.TypeString,
+						Required:     true,
+						ValidateFunc: validation.StringInSlice(appsync.FieldLogLevel_Values(), false),
+					},
 				},
 			},
 		},
+		"name": {
+			Type:     schema.TypeString,
+			Required: true,
+			ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) {
+				value := v.(string)
+				if !regexp.MustCompile(`[_A-Za-z][_0-9A-Za-z]*`).MatchString(value) {
+					errors = append(errors, fmt.Errorf("%q must match [_A-Za-z][_0-9A-Za-z]*", k))
+				}
+				return
+			},
+		},
 		"openid_connect_config": {
 			Type:     schema.TypeList,
 			Optional: true,
@@ -193,6 +219,17 @@
 				},
 			},
 		},
+		"schema": {
+			Type:     schema.TypeString,
+			Optional: true,
+		},
+		names.AttrTags:    tftags.TagsSchema(),
+		names.AttrTagsAll: tftags.TagsSchemaComputed(),
+		"uris": {
+			Type:     schema.TypeMap,
+			Computed: true,
+			Elem:     &schema.Schema{Type: schema.TypeString},
+		},
 		"user_pool_config": {
 			Type:     schema.TypeList,
 			Optional: true,
@@ -209,12 +246,9 @@
 					Computed: true,
 				},
 				"default_action": {
-					Type:     schema.TypeString,
-					Required: true,
-					ValidateFunc: validation.StringInSlice([]string{
-						appsync.DefaultActionAllow,
-						appsync.DefaultActionDeny,
-					}, false),
+					Type:         schema.TypeString,
+					Required:     true,
+					ValidateFunc: validation.StringInSlice(appsync.DefaultAction_Values(), false),
 				},
 				"user_pool_id": {
 					Type:     schema.TypeString,
@@ -223,40 +257,13 @@
 				},
 			},
 		},
-		"lambda_authorizer_config": {
-			Type:     schema.TypeList,
-			Optional: true,
-			MaxItems: 1,
-			Elem: &schema.Resource{
-				Schema: map[string]*schema.Schema{
-					"authorizer_result_ttl_in_seconds": {
-						Type:         schema.TypeInt,
-						Optional:     true,
-						Default:      DefaultAuthorizerResultTTLInSeconds,
-						ValidateFunc: validateAuthorizerResultTTLInSeconds,
-					},
-					"authorizer_uri": {
-						Type:     schema.TypeString,
-						Required: true,
-					},
-					"identity_validation_expression": {
-						Type:     schema.TypeString,
-						Optional: true,
-					},
-				},
-			},
-		},
-		"arn": {
-			Type:     schema.TypeString,
-			Computed: true,
-		},
-		"uris": {
-			Type:     schema.TypeMap,
-			Computed: true,
-			Elem:     &schema.Schema{Type: schema.TypeString},
+		"visibility": {
+			Type:         schema.TypeString,
+			Optional:     true,
+			ForceNew:     true,
+			Default:      appsync.GraphQLApiVisibilityGlobal,
+			ValidateFunc: validation.StringInSlice(appsync.GraphQLApiVisibility_Values(), false),
 		},
-		names.AttrTags:    tftags.TagsSchema(),
-		names.AttrTagsAll: tftags.TagsSchemaComputed(),
 		"xray_enabled": {
 			Type:     schema.TypeBool,
 			Optional: true,
@@ -269,12 +276,21 @@
 
 func resourceGraphQLAPICreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
+	name := d.Get("name").(string)
 	input := &appsync.CreateGraphqlApiInput{
 		AuthenticationType: aws.String(d.Get("authentication_type").(string)),
-		Name:               aws.String(d.Get("name").(string)),
-		Tags:               GetTagsIn(ctx),
+		Name:               aws.String(name),
+		Tags:               getTagsIn(ctx),
+	}
+
+	if v, ok := d.GetOk("additional_authentication_provider"); ok {
+		input.AdditionalAuthenticationProviders = expandGraphQLAPIAdditionalAuthProviders(v.([]interface{}), meta.(*conns.AWSClient).Region)
+	}
+
+	if v, ok := d.GetOk("lambda_authorizer_config"); ok {
+		input.LambdaAuthorizerConfig = expandGraphQLAPILambdaAuthorizerConfig(v.([]interface{}))
 	}
 
 	if v, ok := d.GetOk("log_config"); ok {
@@ -289,27 +305,26 @@
 		input.UserPoolConfig = expandGraphQLAPIUserPoolConfig(v.([]interface{}), meta.(*conns.AWSClient).Region)
 	}
 
-	if v, ok := d.GetOk("lambda_authorizer_config"); ok {
-		input.LambdaAuthorizerConfig = expandGraphQLAPILambdaAuthorizerConfig(v.([]interface{}))
+	if v, ok := d.GetOk("xray_enabled"); ok {
+		input.XrayEnabled = aws.Bool(v.(bool))
 	}
 
-	if v, ok := d.GetOk("additional_authentication_provider"); ok {
-		input.AdditionalAuthenticationProviders = expandGraphQLAPIAdditionalAuthProviders(v.([]interface{}), meta.(*conns.AWSClient).Region)
+	if v, ok := d.GetOk("visibility"); ok {
+		input.Visibility = aws.String(v.(string))
 	}
 
-	if v, ok := d.GetOk("xray_enabled"); ok {
-		input.XrayEnabled = aws.Bool(v.(bool))
-	}
+	output, err := conn.CreateGraphqlApiWithContext(ctx, input)
 
-	resp, err := conn.CreateGraphqlApiWithContext(ctx, input)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "creating AppSync GraphQL API: %s", err)
+		return sdkdiag.AppendErrorf(diags, "creating AppSync GraphQL API (%s): %s", name, err)
 	}
 
-	d.SetId(aws.StringValue(resp.GraphqlApi.ApiId))
+	d.SetId(aws.StringValue(output.GraphqlApi.ApiId))
 
-	if err := resourceSchemaPut(ctx, d, meta); err != nil {
-		return sdkdiag.AppendErrorf(diags, "creating AppSync GraphQL API (%s) Schema: %s", d.Id(), err)
+	if v, ok := d.GetOk("schema"); ok {
+		if err := putSchema(ctx, conn, d.Id(), v.(string), d.Timeout(schema.TimeoutCreate)); err != nil {
+			return sdkdiag.AppendFromErr(diags, err)
+		}
 	}
 
 	return append(diags, resourceGraphQLAPIRead(ctx, d, meta)...)
@@ -317,127 +332,222 @@
 
 func resourceGraphQLAPIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
-
-	input := &appsync.GetGraphqlApiInput{
-		ApiId: aws.String(d.Id()),
-	}
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
-	resp, err := conn.GetGraphqlApiWithContext(ctx, input)
+	api, err := FindGraphQLAPIByID(ctx, conn, d.Id())
 
-	if tfawserr.ErrCodeEquals(err, appsync.ErrCodeNotFoundException) && !d.IsNewResource() {
+	if !d.IsNewResource() && tfresource.NotFound(err) {
 		log.Printf("[WARN] AppSync GraphQL API (%s) not found, removing from state", d.Id())
 		d.SetId("")
-		return diags
+		return nil
 	}
 
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "getting AppSync GraphQL API (%s): %s", d.Id(), err)
+		return diag.Errorf("reading AppSync GraphQL API (%s): %s", d.Id(), err)
 	}
 
-	d.Set("arn", resp.GraphqlApi.Arn)
-	d.Set("authentication_type", resp.GraphqlApi.AuthenticationType)
-	d.Set("name", resp.GraphqlApi.Name)
-
-	if err := d.Set("log_config", flattenGraphQLAPILogConfig(resp.GraphqlApi.LogConfig)); err != nil {
+	if err := d.Set("additional_authentication_provider", flattenGraphQLAPIAdditionalAuthenticationProviders(api.AdditionalAuthenticationProviders)); err != nil {
+		return sdkdiag.AppendErrorf(diags, "setting additional_authentication_provider: %s", err)
+	}
+	d.Set("arn", api.Arn)
+	d.Set("authentication_type", api.AuthenticationType)
+	if err := d.Set("lambda_authorizer_config", flattenGraphQLAPILambdaAuthorizerConfig(api.LambdaAuthorizerConfig)); err != nil {
+		return sdkdiag.AppendErrorf(diags, "setting lambda_authorizer_config: %s", err)
+	}
+	if err := d.Set("log_config", flattenGraphQLAPILogConfig(api.LogConfig)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting log_config: %s", err)
 	}
-
-	if err := d.Set("openid_connect_config", flattenGraphQLAPIOpenIDConnectConfig(resp.GraphqlApi.OpenIDConnectConfig)); err != nil {
+	if err := d.Set("openid_connect_config", flattenGraphQLAPIOpenIDConnectConfig(api.OpenIDConnectConfig)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting openid_connect_config: %s", err)
 	}
-
-	if err := d.Set("user_pool_config", flattenGraphQLAPIUserPoolConfig(resp.GraphqlApi.UserPoolConfig)); err != nil {
+	d.Set("name", api.Name)
+	d.Set("uris", aws.StringValueMap(api.Uris))
+	if err := d.Set("user_pool_config", flattenGraphQLAPIUserPoolConfig(api.UserPoolConfig)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting user_pool_config: %s", err)
 	}
-
-	if err := d.Set("lambda_authorizer_config", flattenGraphQLAPILambdaAuthorizerConfig(resp.GraphqlApi.LambdaAuthorizerConfig)); err != nil {
-		return sdkdiag.AppendErrorf(diags, "setting lambda_authorizer_config: %s", err)
+	d.Set("visibility", api.Visibility)
+	if err := d.Set("xray_enabled", api.XrayEnabled); err != nil {
+		return sdkdiag.AppendErrorf(diags, "setting xray_enabled: %s", err)
 	}
 
-	if err := d.Set("additional_authentication_provider", flattenGraphQLAPIAdditionalAuthenticationProviders(resp.GraphqlApi.AdditionalAuthenticationProviders)); err != nil {
-		return sdkdiag.AppendErrorf(diags, "setting additional_authentication_provider: %s", err)
-	}
+	setTagsOut(ctx, api.Tags)
 
-	if err := d.Set("uris", aws.StringValueMap(resp.GraphqlApi.Uris)); err != nil {
-		return sdkdiag.AppendErrorf(diags, "setting uris: %s", err)
-	}
+	return diags
+}
 
-	SetTagsOut(ctx, resp.GraphqlApi.Tags)
+func resourceGraphQLAPIUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	var diags diag.Diagnostics
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
-	if err := d.Set("xray_enabled", resp.GraphqlApi.XrayEnabled); err != nil {
-		return sdkdiag.AppendErrorf(diags, "setting xray_enabled: %s", err)
+	if d.HasChangesExcept("tags", "tags_all") {
+		input := &appsync.UpdateGraphqlApiInput{
+			ApiId:              aws.String(d.Id()),
+			AuthenticationType: aws.String(d.Get("authentication_type").(string)),
+			Name:               aws.String(d.Get("name").(string)),
+		}
+
+		if v, ok := d.GetOk("additional_authentication_provider"); ok {
+			input.AdditionalAuthenticationProviders = expandGraphQLAPIAdditionalAuthProviders(v.([]interface{}), meta.(*conns.AWSClient).Region)
+		}
+
+		if v, ok := d.GetOk("lambda_authorizer_config"); ok {
+			input.LambdaAuthorizerConfig = expandGraphQLAPILambdaAuthorizerConfig(v.([]interface{}))
+		}
+
+		if v, ok := d.GetOk("log_config"); ok {
+			input.LogConfig = expandGraphQLAPILogConfig(v.([]interface{}))
+		}
+
+		if v, ok := d.GetOk("openid_connect_config"); ok {
+			input.OpenIDConnectConfig = expandGraphQLAPIOpenIDConnectConfig(v.([]interface{}))
+		}
+
+		if v, ok := d.GetOk("user_pool_config"); ok {
+			input.UserPoolConfig = expandGraphQLAPIUserPoolConfig(v.([]interface{}), meta.(*conns.AWSClient).Region)
+		}
+
+		if v, ok := d.GetOk("xray_enabled"); ok {
+			input.XrayEnabled = aws.Bool(v.(bool))
+		}
+
+		_, err := conn.UpdateGraphqlApiWithContext(ctx, input)
+
+		if err != nil {
+			return sdkdiag.AppendErrorf(diags, "updating AppSync GraphQL API (%s): %s", d.Id(), err)
+		}
+
+		if d.HasChange("schema") {
+			if v, ok := d.GetOk("schema"); ok {
+				if err := putSchema(ctx, conn, d.Id(), v.(string), d.Timeout(schema.TimeoutCreate)); err != nil {
+					return sdkdiag.AppendFromErr(diags, err)
+				}
+			}
+		}
 	}
 
-	return diags
+	return append(diags, resourceGraphQLAPIRead(ctx, d, meta)...)
 }
 
-func resourceGraphQLAPIUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+func resourceGraphQLAPIDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
+	conn := meta.(*conns.AWSClient).AppSyncConn(ctx)
 
-	input := &appsync.UpdateGraphqlApiInput{
-		ApiId:              aws.String(d.Id()),
-		AuthenticationType: aws.String(d.Get("authentication_type").(string)),
-		Name:               aws.String(d.Get("name").(string)),
+	log.Printf("[DEBUG] Deleting AppSync GraphQL API: %s", d.Id())
+	_, err := conn.DeleteGraphqlApiWithContext(ctx, &appsync.DeleteGraphqlApiInput{
+		ApiId: aws.String(d.Id()),
+	})
+
+	if tfawserr.ErrCodeEquals(err, appsync.ErrCodeNotFoundException) {
+		return diags
 	}
 
-	if v, ok := d.GetOk("log_config"); ok {
-		input.LogConfig = expandGraphQLAPILogConfig(v.([]interface{}))
+	if err != nil {
+		return sdkdiag.AppendErrorf(diags, "deleting AppSync GraphQL API (%s): %s", d.Id(), err)
 	}
 
-	if v, ok := d.GetOk("openid_connect_config"); ok {
-		input.OpenIDConnectConfig = expandGraphQLAPIOpenIDConnectConfig(v.([]interface{}))
+	return diags
+}
+
+func putSchema(ctx context.Context, conn *appsync.AppSync, apiID, definition string, timeout time.Duration) error {
+	input := &appsync.StartSchemaCreationInput{
+		ApiId:      aws.String(apiID),
+		Definition: ([]byte)(definition),
 	}
 
-	if v, ok := d.GetOk("user_pool_config"); ok {
-		input.UserPoolConfig = expandGraphQLAPIUserPoolConfig(v.([]interface{}), meta.(*conns.AWSClient).Region)
+	_, err := conn.StartSchemaCreationWithContext(ctx, input)
+
+	if err != nil {
+		return fmt.Errorf("creating AppSync GraphQL API (%s) schema: %w", apiID, err)
 	}
 
-	if v, ok := d.GetOk("lambda_authorizer_config"); ok {
-		input.LambdaAuthorizerConfig = expandGraphQLAPILambdaAuthorizerConfig(v.([]interface{}))
+	if err := waitSchemaCreated(ctx, conn, apiID, timeout); err != nil {
+		return fmt.Errorf("waiting for AppSync GraphQL API (%s) schema create: %w", apiID, err)
 	}
 
-	if v, ok := d.GetOk("additional_authentication_provider"); ok {
-		input.AdditionalAuthenticationProviders = expandGraphQLAPIAdditionalAuthProviders(v.([]interface{}), meta.(*conns.AWSClient).Region)
+	return nil
+}
+
+func FindGraphQLAPIByID(ctx context.Context, conn *appsync.AppSync, id string) (*appsync.GraphqlApi, error) {
+	input := &appsync.GetGraphqlApiInput{
+		ApiId: aws.String(id),
 	}
 
-	if v, ok := d.GetOk("xray_enabled"); ok {
-		input.XrayEnabled = aws.Bool(v.(bool))
+	output, err := conn.GetGraphqlApiWithContext(ctx, input)
 
+	if tfawserr.ErrCodeEquals(err, appsync.ErrCodeNotFoundException) {
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: input,
+		}
 	}
 
-	_, err := conn.UpdateGraphqlApiWithContext(ctx, input)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "updating AppSync GraphQL API (%s): %s", d.Id(), err)
+		return nil, err
 	}
 
-	if d.HasChange("schema") {
-		if err := resourceSchemaPut(ctx, d, meta); err != nil {
-			return sdkdiag.AppendErrorf(diags, "updating AppSync GraphQL API (%s) Schema: %s", d.Id(), err)
-		}
+	if output == nil || output.GraphqlApi == nil {
+		return nil, tfresource.NewEmptyResultError(input)
 	}
 
-	return append(diags, resourceGraphQLAPIRead(ctx, d, meta)...)
+	return output.GraphqlApi, nil
 }
 
-func resourceGraphQLAPIDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).AppSyncConn()
-
-	input := &appsync.DeleteGraphqlApiInput{
-		ApiId: aws.String(d.Id()),
+func findSchemaCreationStatusByID(ctx context.Context, conn *appsync.AppSync, id string) (*appsync.GetSchemaCreationStatusOutput, error) {
+	input := &appsync.GetSchemaCreationStatusInput{
+		ApiId: aws.String(id),
 	}
-	_, err := conn.DeleteGraphqlApiWithContext(ctx, input)
+
+	output, err := conn.GetSchemaCreationStatusWithContext(ctx, input)
 
 	if tfawserr.ErrCodeEquals(err, appsync.ErrCodeNotFoundException) {
-		return diags
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: input,
+		}
 	}
 
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "deleting AppSync GraphQL API (%s): %s", d.Id(), err)
+		return nil, err
 	}
 
-	return diags
+	if output == nil {
+		return nil, tfresource.NewEmptyResultError(input)
+	}
+
+	return output, nil
+}
+
+func statusSchemaCreation(ctx context.Context, conn *appsync.AppSync, id string) retry.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		output, err := findSchemaCreationStatusByID(ctx, conn, id)
+
+		if tfresource.NotFound(err) {
+			return nil, "", nil
+		}
+
+		if err != nil {
+			return nil, "", err
+		}
+
+		return output, aws.StringValue(output.Status), nil
+	}
+}
+
+func waitSchemaCreated(ctx context.Context, conn *appsync.AppSync, id string, timeout time.Duration) error {
+	stateConf := &retry.StateChangeConf{
+		Pending: []string{appsync.SchemaStatusProcessing},
+		Target:  []string{appsync.SchemaStatusActive, appsync.SchemaStatusSuccess},
+		Refresh: statusSchemaCreation(ctx, conn, id),
+		Timeout: timeout,
+	}
+
+	outputRaw, err := stateConf.WaitForStateContext(ctx)
+
+	if output, ok := outputRaw.(*appsync.GetSchemaCreationStatusOutput); ok {
+		tfresource.SetLastError(err,
errors.New(aws.StringValue(output.Details))) + } + + return err } func expandGraphQLAPILogConfig(l []interface{}) *appsync.LogConfig { @@ -685,38 +795,3 @@ func flattenGraphQLAPICognitoUserPoolConfig(userPoolConfig *appsync.CognitoUserP return []interface{}{m} } - -func resourceSchemaPut(ctx context.Context, d *schema.ResourceData, meta interface{}) error { - conn := meta.(*conns.AWSClient).AppSyncConn() - - if v, ok := d.GetOk("schema"); ok { - input := &appsync.StartSchemaCreationInput{ - ApiId: aws.String(d.Id()), - Definition: ([]byte)(v.(string)), - } - if _, err := conn.StartSchemaCreationWithContext(ctx, input); err != nil { - return err - } - - activeSchemaConfig := &retry.StateChangeConf{ - Pending: []string{appsync.SchemaStatusProcessing}, - Target: []string{"SUCCESS", appsync.SchemaStatusActive}, // should be only appsync.SchemaStatusActive . I think this is a problem in documentation: https://docs.aws.amazon.com/appsync/latest/APIReference/API_GetSchemaCreationStatus.html - Refresh: func() (interface{}, string, error) { - result, err := conn.GetSchemaCreationStatusWithContext(ctx, &appsync.GetSchemaCreationStatusInput{ - ApiId: aws.String(d.Id()), - }) - if err != nil { - return 0, "", err - } - return result, *result.Status, nil - }, - Timeout: d.Timeout(schema.TimeoutCreate), - } - - if _, err := activeSchemaConfig.WaitForStateContext(ctx); err != nil { - return fmt.Errorf("waiting for completion: %s", err) - } - } - - return nil -} diff --git a/internal/service/appsync/graphql_api_test.go b/internal/service/appsync/graphql_api_test.go index 87ace70d9fd..4598d020d68 100644 --- a/internal/service/appsync/graphql_api_test.go +++ b/internal/service/appsync/graphql_api_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appsync_test import ( @@ -7,15 +10,14 @@ import ( "strconv" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/appsync" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfappsync "github.com/hashicorp/terraform-provider-aws/internal/service/appsync" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func testAccGraphQLAPI_basic(t *testing.T) { @@ -46,6 +48,7 @@ func testAccGraphQLAPI_basic(t *testing.T) { resource.TestCheckNoResourceAttr(resourceName, "tags"), resource.TestCheckResourceAttr(resourceName, "additional_authentication_provider.#", "0"), resource.TestCheckResourceAttr(resourceName, "xray_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "visibility", "GLOBAL"), ), }, { @@ -108,7 +111,7 @@ func testAccGraphQLAPI_schema(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "schema"), resource.TestCheckResourceAttrSet(resourceName, "uris.%"), resource.TestCheckResourceAttrSet(resourceName, "uris.GRAPHQL"), - testAccCheckTypeExists(ctx, resourceName, "Post"), + testAccCheckGraphQLAPITypeExists(ctx, resourceName, "Post"), ), }, { @@ -121,7 +124,7 @@ func testAccGraphQLAPI_schema(t *testing.T) { Config: testAccGraphQLAPIConfig_schemaUpdate(rName), Check: resource.ComposeTestCheckFunc( testAccCheckGraphQLAPIExists(ctx, resourceName, &api2), - testAccCheckTypeExists(ctx, resourceName, "PostV2"), + testAccCheckGraphQLAPITypeExists(ctx, resourceName, "PostV2"), ), }, }, @@ -879,14 +882,11 @@ func testAccGraphQLAPI_tags(t *testing.T) { CheckDestroy: testAccCheckGraphQLAPIDestroy(ctx), Steps: []resource.TestStep{ { 
- Config: testAccGraphQLAPIConfig_tags(rName), + Config: testAccGraphQLAPIConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheckGraphQLAPIExists(ctx, resourceName, &api1), - resource.TestCheckResourceAttr(resourceName, "authentication_type", "API_KEY"), - resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), - resource.TestCheckResourceAttr(resourceName, "tags.Key1", "Value One"), - resource.TestCheckResourceAttr(resourceName, "tags.Description", "Very interesting"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), }, { @@ -895,13 +895,20 @@ func testAccGraphQLAPI_tags(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccGraphQLAPIConfig_tagsModified(rName), + Config: testAccGraphQLAPIConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckGraphQLAPIExists(ctx, resourceName, &api1), - resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), - resource.TestCheckResourceAttr(resourceName, "tags.Key1", "Value One Changed"), - resource.TestCheckResourceAttr(resourceName, "tags.Key2", "Value Two"), - resource.TestCheckResourceAttr(resourceName, "tags.Key3", "Value Three"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccGraphQLAPIConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckGraphQLAPIExists(ctx, resourceName, &api1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, }, @@ -1172,71 +1179,94 @@ func testAccGraphQLAPI_xrayEnabled(t *testing.T) { }) } +func testAccGraphQLAPI_visibility(t 
*testing.T) { + ctx := acctest.Context(t) + var api1 appsync.GraphqlApi + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_appsync_graphql_api.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, appsync.EndpointsID) }, + ErrorCheck: acctest.ErrorCheck(t, appsync.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGraphQLAPIDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGraphQLAPIConfig_visibility(rName, "PRIVATE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckGraphQLAPIExists(ctx, resourceName, &api1), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "visibility", "PRIVATE"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckGraphQLAPIDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appsync_graphql_api" { continue } - input := &appsync.GetGraphqlApiInput{ - ApiId: aws.String(rs.Primary.ID), + _, err := tfappsync.FindGraphQLAPIByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue } - _, err := conn.GetGraphqlApiWithContext(ctx, input) if err != nil { - if tfawserr.ErrCodeEquals(err, appsync.ErrCodeNotFoundException) { - return nil - } return err } + + return fmt.Errorf("AppSync GraphQL API %s still exists", rs.Primary.ID) } return nil } } -func testAccCheckGraphQLAPIExists(ctx context.Context, name string, api *appsync.GraphqlApi) resource.TestCheckFunc { +func testAccCheckGraphQLAPIExists(ctx context.Context, n string, v *appsync.GraphqlApi) resource.TestCheckFunc { return 
func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", name) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() - - input := &appsync.GetGraphqlApiInput{ - ApiId: aws.String(rs.Primary.ID), + if rs.Primary.ID == "" { + return fmt.Errorf("No AppSync GraphQL API ID is set") } - output, err := conn.GetGraphqlApiWithContext(ctx, input) + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) + + output, err := tfappsync.FindGraphQLAPIByID(ctx, conn, rs.Primary.ID) if err != nil { return err } - *api = *output.GraphqlApi + *v = *output return nil } } -func testAccCheckTypeExists(ctx context.Context, name, typeName string) resource.TestCheckFunc { +func testAccCheckGraphQLAPITypeExists(ctx context.Context, n, typeName string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", name) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() - - input := &appsync.GetTypeInput{ - ApiId: aws.String(rs.Primary.ID), - TypeName: aws.String(typeName), - Format: aws.String(appsync.OutputTypeSdl), - } + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) - _, err := conn.GetTypeWithContext(ctx, input) + _, err := tfappsync.FindTypeByThreePartKey(ctx, conn, rs.Primary.ID, appsync.OutputTypeSdl, typeName) return err } @@ -1245,18 +1275,28 @@ func testAccCheckTypeExists(ctx context.Context, name, typeName string) resource func testAccGraphQLAPIConfig_authenticationType(rName, authenticationType string) string { return fmt.Sprintf(` resource "aws_appsync_graphql_api" "test" { - authentication_type = %q - name = %q + authentication_type = %[1]q + name = %[2]q } `, authenticationType, rName) } +func 
testAccGraphQLAPIConfig_visibility(rName, visibility string) string { + return fmt.Sprintf(` +resource "aws_appsync_graphql_api" "test" { + authentication_type = "API_KEY" + name = %[1]q + visibility = %[2]q +} +`, rName, visibility) +} + func testAccGraphQLAPIConfig_logFieldLogLevel(rName, fieldLogLevel string) string { return fmt.Sprintf(` data "aws_partition" "current" {} resource "aws_iam_role" "test" { - name = %q + name = %[1]q assume_role_policy = < 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets appsync service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets appsync service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates appsync service tags. +// updateTags updates appsync service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn appsynciface.AppSyncAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn appsynciface.AppSyncAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn appsynciface.AppSyncAPI, identifier st // UpdateTags updates appsync service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AppSyncConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AppSyncConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/appsync/test-fixtures/test-code-updated.js b/internal/service/appsync/test-fixtures/test-code-updated.js index cccf930150f..ed9ec31eff2 100644 --- a/internal/service/appsync/test-fixtures/test-code-updated.js +++ b/internal/service/appsync/test-fixtures/test-code-updated.js @@ -1,3 +1,8 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + import { util } from '@aws-appsync/utils'; /** diff --git a/internal/service/appsync/test-fixtures/test-code.js b/internal/service/appsync/test-fixtures/test-code.js index b9da7287553..5b4a1936f5b 100644 --- a/internal/service/appsync/test-fixtures/test-code.js +++ b/internal/service/appsync/test-fixtures/test-code.js @@ -1,3 +1,8 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + import { util } from '@aws-appsync/utils'; /** diff --git a/internal/service/appsync/type.go b/internal/service/appsync/type.go index 68f45560322..84b817a3b1b 100644 --- a/internal/service/appsync/type.go +++ b/internal/service/appsync/type.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package appsync import ( @@ -61,7 +64,7 @@ func ResourceType() *schema.Resource { func resourceTypeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID := d.Get("api_id").(string) @@ -83,14 +86,14 @@ func resourceTypeCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) apiID, format, name, err := DecodeTypeID(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "reading Appsync Type %q: %s", d.Id(), err) } - resp, err := FindTypeByID(ctx, conn, apiID, format, name) + resp, err := FindTypeByThreePartKey(ctx, conn, apiID, format, name) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] AppSync Type (%s) not found, removing from state", d.Id()) d.SetId("") @@ -113,7 +116,7 @@ func resourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceTypeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) params := &appsync.UpdateTypeInput{ ApiId: aws.String(d.Get("api_id").(string)), @@ -132,7 +135,7 @@ func resourceTypeUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceTypeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AppSyncConn() + conn := meta.(*conns.AWSClient).AppSyncConn(ctx) input := &appsync.DeleteTypeInput{ ApiId: aws.String(d.Get("api_id").(string)), diff --git 
a/internal/service/appsync/type_test.go b/internal/service/appsync/type_test.go index 490db332149..534f12bc401 100644 --- a/internal/service/appsync/type_test.go +++ b/internal/service/appsync/type_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appsync_test import ( @@ -31,7 +34,7 @@ func testAccType_basic(t *testing.T) { { Config: testAccTypeConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTypeResourceExists(ctx, resourceName, &typ), + testAccCheckTypeExists(ctx, resourceName, &typ), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "appsync", regexp.MustCompile("apis/.+/types/.+")), resource.TestCheckResourceAttrPair(resourceName, "api_id", "aws_appsync_graphql_api.test", "id"), resource.TestCheckResourceAttr(resourceName, "format", "SDL"), @@ -62,7 +65,7 @@ func testAccType_disappears(t *testing.T) { { Config: testAccTypeConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTypeResourceExists(ctx, resourceName, &typ), + testAccCheckTypeExists(ctx, resourceName, &typ), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfappsync.ResourceType(), resourceName), ), ExpectNonEmptyPlan: true, @@ -73,7 +76,7 @@ func testAccType_disappears(t *testing.T) { func testAccCheckTypeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_appsync_type" { @@ -85,7 +88,7 @@ func testAccCheckTypeDestroy(ctx context.Context) resource.TestCheckFunc { return err } - _, err = tfappsync.FindTypeByID(ctx, conn, apiID, format, name) + _, err = tfappsync.FindTypeByThreePartKey(ctx, conn, apiID, format, name) if err == nil { if tfresource.NotFound(err) { return nil @@ -99,7 +102,7 @@ func testAccCheckTypeDestroy(ctx context.Context) 
resource.TestCheckFunc { } } -func testAccCheckTypeResourceExists(ctx context.Context, resourceName string, typ *appsync.Type) resource.TestCheckFunc { +func testAccCheckTypeExists(ctx context.Context, resourceName string, typ *appsync.Type) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -111,8 +114,8 @@ func testAccCheckTypeResourceExists(ctx context.Context, resourceName string, ty return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn() - out, err := tfappsync.FindTypeByID(ctx, conn, apiID, format, name) + conn := acctest.Provider.Meta().(*conns.AWSClient).AppSyncConn(ctx) + out, err := tfappsync.FindTypeByThreePartKey(ctx, conn, apiID, format, name) if err != nil { return err } diff --git a/internal/service/appsync/wait.go b/internal/service/appsync/wait.go index 76aa4a7a147..59bd7cb9c48 100644 --- a/internal/service/appsync/wait.go +++ b/internal/service/appsync/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package appsync import ( diff --git a/internal/service/athena/data_catalog.go b/internal/service/athena/data_catalog.go index 95723c123b1..8d9e2ea63ac 100644 --- a/internal/service/athena/data_catalog.go +++ b/internal/service/athena/data_catalog.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package athena import ( @@ -75,13 +78,13 @@ func ResourceDataCatalog() *schema.Resource { } func resourceDataCatalogCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) name := d.Get("name").(string) input := &athena.CreateDataCatalogInput{ Name: aws.String(name), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(d.Get("type").(string)), } @@ -102,7 +105,7 @@ func resourceDataCatalogCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceDataCatalogRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetDataCatalogInput{ Name: aws.String(d.Id()), @@ -155,7 +158,7 @@ func resourceDataCatalogRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceDataCatalogUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &athena.UpdateDataCatalogInput{ @@ -182,7 +185,7 @@ func resourceDataCatalogUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceDataCatalogDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) log.Printf("[DEBUG] Deleting Athena Data Catalog: (%s)", d.Id()) _, err := conn.DeleteDataCatalogWithContext(ctx, &athena.DeleteDataCatalogInput{ diff --git a/internal/service/athena/data_catalog_test.go b/internal/service/athena/data_catalog_test.go index 26370c77414..6ba63bbe50b 100644 --- 
a/internal/service/athena/data_catalog_test.go +++ b/internal/service/athena/data_catalog_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package athena_test import ( @@ -262,7 +265,7 @@ func testAccCheckDataCatalogExists(ctx context.Context, n string) resource.TestC return fmt.Errorf("No Athena Data Catalog name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetDataCatalogInput{ Name: aws.String(rs.Primary.ID), @@ -276,7 +279,7 @@ func testAccCheckDataCatalogExists(ctx context.Context, n string) resource.TestC func testAccCheckDataCatalogDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_athena_data_catalog" { diff --git a/internal/service/athena/database.go b/internal/service/athena/database.go index 35d76631652..8f7b9074e1e 100644 --- a/internal/service/athena/database.go +++ b/internal/service/athena/database.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package athena import ( @@ -106,7 +109,7 @@ func ResourceDatabase() *schema.Resource { func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) name := d.Get("name").(string) var queryString bytes.Buffer @@ -154,7 +157,7 @@ func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceDatabaseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetDatabaseInput{ DatabaseName: aws.String(d.Id()), @@ -183,7 +186,7 @@ func resourceDatabaseRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceDatabaseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) queryString := fmt.Sprintf("drop database `%s`", d.Id()) if d.Get("force_destroy").(bool) { diff --git a/internal/service/athena/database_test.go b/internal/service/athena/database_test.go index 8aa64a25536..9d4a141a7d7 100644 --- a/internal/service/athena/database_test.go +++ b/internal/service/athena/database_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package athena_test import ( @@ -322,7 +325,7 @@ func TestAccAthenaDatabase_disppears(t *testing.T) { func testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_athena_database" { continue @@ -370,7 +373,7 @@ func testAccDatabaseCreateTables(ctx context.Context, dbName string) resource.Te return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.StartQueryExecutionInput{ QueryExecutionContext: &athena.QueryExecutionContext{ @@ -400,7 +403,7 @@ func testAccDatabaseDestroyTables(ctx context.Context, dbName string) resource.T return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.StartQueryExecutionInput{ QueryExecutionContext: &athena.QueryExecutionContext{ @@ -429,7 +432,7 @@ func testAccCheckDatabaseDropFails(ctx context.Context, dbName string) resource. return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.StartQueryExecutionInput{ QueryExecutionContext: &athena.QueryExecutionContext{ diff --git a/internal/service/athena/generate.go b/internal/service/athena/generate.go index dfedcdea387..d0f2ed305c1 100644 --- a/internal/service/athena/generate.go +++ b/internal/service/athena/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package athena diff --git a/internal/service/athena/named_query.go b/internal/service/athena/named_query.go index 9c96afdffa2..8512ae51d28 100644 --- a/internal/service/athena/named_query.go +++ b/internal/service/athena/named_query.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package athena import ( @@ -57,7 +60,7 @@ func ResourceNamedQuery() *schema.Resource { func resourceNamedQueryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.CreateNamedQueryInput{ Database: aws.String(d.Get("database").(string)), @@ -81,7 +84,7 @@ func resourceNamedQueryCreate(ctx context.Context, d *schema.ResourceData, meta func resourceNamedQueryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetNamedQueryInput{ NamedQueryId: aws.String(d.Id()), @@ -107,7 +110,7 @@ func resourceNamedQueryRead(ctx context.Context, d *schema.ResourceData, meta in func resourceNamedQueryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.DeleteNamedQueryInput{ NamedQueryId: aws.String(d.Id()), diff --git a/internal/service/athena/named_query_test.go b/internal/service/athena/named_query_test.go index 
5d0f4e92ae1..a42d34d8c80 100644 --- a/internal/service/athena/named_query_test.go +++ b/internal/service/athena/named_query_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package athena_test import ( @@ -67,7 +70,7 @@ func TestAccAthenaNamedQuery_withWorkGroup(t *testing.T) { func testAccCheckNamedQueryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_athena_named_query" { continue @@ -99,7 +102,7 @@ func testAccCheckNamedQueryExists(ctx context.Context, name string) resource.Tes return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetNamedQueryInput{ NamedQueryId: aws.String(rs.Primary.ID), diff --git a/internal/service/athena/service_package_gen.go b/internal/service/athena/service_package_gen.go index adcce9c85dc..27c10fdde17 100644 --- a/internal/service/athena/service_package_gen.go +++ b/internal/service/athena/service_package_gen.go @@ -5,6 +5,10 @@ package athena import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + athena_sdkv1 "github.com/aws/aws-sdk-go/service/athena" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -56,4 +60,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Athena } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*athena_sdkv1.Athena, error) { + sess := config["session"].(*session_sdkv1.Session) + + return athena_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/athena/sweep.go b/internal/service/athena/sweep.go index 18a22d40d7e..b47c9e4d335 100644 --- a/internal/service/athena/sweep.go +++ b/internal/service/athena/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/athena" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,11 +26,11 @@ func init() { func sweepDatabases(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AthenaConn() + conn := client.AthenaConn(ctx) input := &athena.ListDatabasesInput{ CatalogName: aws.String("AwsDataCatalog"), } @@ -69,7 +71,7 @@ func sweepDatabases(region string) error { return nil } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Athena Databases (%s): %w", region, err)) diff --git a/internal/service/athena/tags_gen.go b/internal/service/athena/tags_gen.go index 1de9de477f3..dd0d7781686 100644 --- a/internal/service/athena/tags_gen.go +++ b/internal/service/athena/tags_gen.go @@ -14,10 +14,10 @@ import 
( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists athena service tags. +// listTags lists athena service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn athenaiface.AthenaAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn athenaiface.AthenaAPI, identifier string) (tftags.KeyValueTags, error) { input := &athena.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn athenaiface.AthenaAPI, identifier string // ListTags lists athena service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AthenaConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AthenaConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*athena.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns athena service tags from Context. +// getTagsIn returns athena service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*athena.Tag { +func getTagsIn(ctx context.Context) []*athena.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*athena.Tag { return nil } -// SetTagsOut sets athena service tags in Context. -func SetTagsOut(ctx context.Context, tags []*athena.Tag) { +// setTagsOut sets athena service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*athena.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates athena service tags. +// updateTags updates athena service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn athenaiface.AthenaAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn athenaiface.AthenaAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn athenaiface.AthenaAPI, identifier stri // UpdateTags updates athena service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AthenaConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AthenaConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/athena/workgroup.go b/internal/service/athena/workgroup.go index ed534938ad6..4e97500ffed 100644 --- a/internal/service/athena/workgroup.go +++ b/internal/service/athena/workgroup.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package athena import ( @@ -178,13 +181,13 @@ func ResourceWorkGroup() *schema.Resource { func resourceWorkGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) name := d.Get("name").(string) input := &athena.CreateWorkGroupInput{ Configuration: expandWorkGroupConfiguration(d.Get("configuration").([]interface{})), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -215,7 +218,7 @@ func resourceWorkGroupCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetWorkGroupInput{ WorkGroup: aws.String(d.Id()), @@ -262,7 +265,7 @@ func resourceWorkGroupRead(ctx context.Context, d *schema.ResourceData, meta int func resourceWorkGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) input := &athena.DeleteWorkGroupInput{ WorkGroup: aws.String(d.Id()), @@ -282,7 +285,7 @@ func resourceWorkGroupDelete(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AthenaConn() + conn := meta.(*conns.AWSClient).AthenaConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &athena.UpdateWorkGroupInput{ diff --git a/internal/service/athena/workgroup_test.go b/internal/service/athena/workgroup_test.go index 0699fa4227b..a051d66811e 100644 
--- a/internal/service/athena/workgroup_test.go +++ b/internal/service/athena/workgroup_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package athena_test import ( @@ -682,7 +685,7 @@ func TestAccAthenaWorkGroup_tags(t *testing.T) { func testAccCheckCreateNamedQuery(ctx context.Context, workGroup *athena.WorkGroup, databaseName, queryName, query string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.CreateNamedQueryInput{ Name: aws.String(queryName), @@ -702,7 +705,7 @@ func testAccCheckCreateNamedQuery(ctx context.Context, workGroup *athena.WorkGro func testAccCheckWorkGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_athena_workgroup" { continue @@ -737,7 +740,7 @@ func testAccCheckWorkGroupExists(ctx context.Context, name string, workgroup *at return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AthenaConn(ctx) input := &athena.GetWorkGroupInput{ WorkGroup: aws.String(rs.Primary.ID), diff --git a/internal/service/auditmanager/account_registration.go b/internal/service/auditmanager/account_registration.go index 5ef60cf9921..2f655b86b53 100644 --- a/internal/service/auditmanager/account_registration.go +++ b/internal/service/auditmanager/account_registration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -11,8 +14,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/resource/schema" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -54,7 +57,7 @@ func (r *resourceAccountRegistration) Schema(ctx context.Context, req resource.S } func (r *resourceAccountRegistration) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) // Registration is applied per region, so use this as the ID id := r.Meta().Region @@ -87,7 +90,7 @@ func (r *resourceAccountRegistration) Create(ctx context.Context, req resource.C } func (r *resourceAccountRegistration) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAccountRegistrationData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -116,7 +119,7 @@ func (r *resourceAccountRegistration) Read(ctx context.Context, req resource.Rea } func (r *resourceAccountRegistration) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan, state resourceAccountRegistrationData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
@@ -156,7 +159,7 @@ func (r *resourceAccountRegistration) Update(ctx context.Context, req resource.U } func (r *resourceAccountRegistration) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAccountRegistrationData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/auditmanager/account_registration_test.go b/internal/service/auditmanager/account_registration_test.go index 88f0ab969f6..7fc7e464237 100644 --- a/internal/service/auditmanager/account_registration_test.go +++ b/internal/service/auditmanager/account_registration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -137,7 +140,7 @@ func testAccAccountRegistration_optionalKMSKey(t *testing.T) { // registration is inactive, simply that the status check returns a valid response. 
func testAccCheckAccountRegistrationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_account_registration" { @@ -166,7 +169,7 @@ func testAccCheckAccountRegisterationIsActive(ctx context.Context, name string) return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAccountRegistration, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) out, err := conn.GetAccountStatus(ctx, &auditmanager.GetAccountStatusInput{}) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAccountRegistration, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/assessment.go b/internal/service/auditmanager/assessment.go index fc74c9cf7eb..0a27df1500d 100644 --- a/internal/service/auditmanager/assessment.go +++ b/internal/service/auditmanager/assessment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -24,8 +27,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/enum" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -153,7 +156,7 @@ func (r *resourceAssessment) Schema(ctx context.Context, req resource.SchemaRequ } func (r *resourceAssessment) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceAssessmentData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -189,7 +192,7 @@ func (r *resourceAssessment) Create(ctx context.Context, req resource.CreateRequ Name: aws.String(plan.Name.ValueString()), Roles: expandAssessmentRoles(roles), Scope: scopeInput, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if !plan.Description.IsNull() { @@ -236,7 +239,7 @@ func (r *resourceAssessment) Create(ctx context.Context, req resource.CreateRequ } func (r *resourceAssessment) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAssessmentData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -266,7 +269,7 @@ func (r *resourceAssessment) Read(ctx context.Context, req resource.ReadRequest, } func (r *resourceAssessment) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan, state resourceAssessmentData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -339,7 +342,7 @@ func (r *resourceAssessment) Update(ctx context.Context, req resource.UpdateRequ } func (r *resourceAssessment) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAssessmentData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -485,7 +488,7 @@ func (rd *resourceAssessmentData) refreshFromOutput(ctx context.Context, out *aw diags.Append(d...) rd.Scope = scope - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return diags } diff --git a/internal/service/auditmanager/assessment_delegation.go b/internal/service/auditmanager/assessment_delegation.go index 294cc71ea2f..0d73659b304 100644 --- a/internal/service/auditmanager/assessment_delegation.go +++ b/internal/service/auditmanager/assessment_delegation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -18,8 +21,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/enum" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -99,7 +102,7 @@ func (r *resourceAssessmentDelegation) Schema(ctx context.Context, req resource. } func (r *resourceAssessmentDelegation) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceAssessmentDelegationData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -182,7 +185,7 @@ func (r *resourceAssessmentDelegation) Create(ctx context.Context, req resource. } func (r *resourceAssessmentDelegation) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAssessmentDelegationData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -212,7 +215,7 @@ func (r *resourceAssessmentDelegation) Update(ctx context.Context, req resource. } func (r *resourceAssessmentDelegation) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAssessmentDelegationData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
diff --git a/internal/service/auditmanager/assessment_delegation_test.go b/internal/service/auditmanager/assessment_delegation_test.go index dcabf55fa0a..af838c7f1fd 100644 --- a/internal/service/auditmanager/assessment_delegation_test.go +++ b/internal/service/auditmanager/assessment_delegation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -187,7 +190,7 @@ func TestAccAuditManagerAssessmentDelegation_multiple(t *testing.T) { func testAccCheckAssessmentDelegationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_assessment_delegation" { @@ -221,7 +224,7 @@ func testAccCheckAssessmentDelegationExists(ctx context.Context, name string, de return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAssessmentDelegation, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) resp, err := tfauditmanager.FindAssessmentDelegationByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAssessmentDelegation, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/assessment_report.go b/internal/service/auditmanager/assessment_report.go index 75fc90a501c..8b5ad533448 100644 --- a/internal/service/auditmanager/assessment_report.go +++ b/internal/service/auditmanager/assessment_report.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -17,8 +20,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -75,7 +78,7 @@ func (r *resourceAssessmentReport) Schema(ctx context.Context, req resource.Sche } func (r *resourceAssessmentReport) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceAssessmentReportData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -113,7 +116,7 @@ func (r *resourceAssessmentReport) Create(ctx context.Context, req resource.Crea } func (r *resourceAssessmentReport) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAssessmentReportData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -147,7 +150,7 @@ func (r *resourceAssessmentReport) Update(ctx context.Context, req resource.Upda } func (r *resourceAssessmentReport) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceAssessmentReportData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
diff --git a/internal/service/auditmanager/assessment_report_test.go b/internal/service/auditmanager/assessment_report_test.go index fa227e74990..0accd0940c4 100644 --- a/internal/service/auditmanager/assessment_report_test.go +++ b/internal/service/auditmanager/assessment_report_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -131,7 +134,7 @@ func TestAccAuditManagerAssessmentReport_optional(t *testing.T) { func testAccCheckAssessmentReportDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_assessment_report" { @@ -165,7 +168,7 @@ func testAccCheckAssessmentReportExists(ctx context.Context, name string, assess return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAssessmentReport, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) resp, err := tfauditmanager.FindAssessmentReportByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAssessmentReport, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/assessment_test.go b/internal/service/auditmanager/assessment_test.go index 514e8b7382e..fa4d5f33df4 100644 --- a/internal/service/auditmanager/assessment_test.go +++ b/internal/service/auditmanager/assessment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -180,7 +183,7 @@ func TestAccAuditManagerAssessment_optional(t *testing.T) { func testAccCheckAssessmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_assessment" { @@ -214,7 +217,7 @@ func testAccCheckAssessmentExists(ctx context.Context, name string, assessment * return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAssessment, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) resp, err := tfauditmanager.FindAssessmentByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameAssessment, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/control.go b/internal/service/auditmanager/control.go index cb419cebff4..01631e3e969 100644 --- a/internal/service/auditmanager/control.go +++ b/internal/service/auditmanager/control.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -21,8 +24,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -126,7 +129,7 @@ func (r *resourceControl) Schema(ctx context.Context, req resource.SchemaRequest } func (r *resourceControl) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceControlData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -149,7 +152,7 @@ func (r *resourceControl) Create(ctx context.Context, req resource.CreateRequest in := auditmanager.CreateControlInput{ Name: aws.String(plan.Name.ValueString()), ControlMappingSources: cmsInput, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if !plan.ActionPlanInstructions.IsNull() { in.ActionPlanInstructions = aws.String(plan.ActionPlanInstructions.ValueString()) @@ -186,7 +189,7 @@ func (r *resourceControl) Create(ctx context.Context, req resource.CreateRequest } func (r *resourceControl) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceControlData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -216,7 +219,7 @@ func (r *resourceControl) Read(ctx context.Context, req resource.ReadRequest, re } func (r *resourceControl) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan, state resourceControlData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -283,7 +286,7 @@ func (r *resourceControl) Update(ctx context.Context, req resource.UpdateRequest } func (r *resourceControl) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceControlData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -431,7 +434,7 @@ func (rd *resourceControlData) refreshFromOutput(ctx context.Context, out *awsty rd.ARN = flex.StringToFramework(ctx, out.Arn) rd.Type = types.StringValue(string(out.Type)) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return diags } diff --git a/internal/service/auditmanager/control_data_source.go b/internal/service/auditmanager/control_data_source.go index fbce333cb34..8fde891e458 100644 --- a/internal/service/auditmanager/control_data_source.go +++ b/internal/service/auditmanager/control_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -16,8 +19,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/enum" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) @@ -113,7 +116,7 @@ func (d *dataSourceControl) Schema(ctx context.Context, req datasource.SchemaReq } func (d *dataSourceControl) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { - conn := d.Meta().AuditManagerClient() + conn := d.Meta().AuditManagerClient(ctx) var data dataSourceControlData resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) diff --git a/internal/service/auditmanager/control_data_source_test.go b/internal/service/auditmanager/control_data_source_test.go index 847669ee50b..27058940715 100644 --- a/internal/service/auditmanager/control_data_source_test.go +++ b/internal/service/auditmanager/control_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( diff --git a/internal/service/auditmanager/control_test.go b/internal/service/auditmanager/control_test.go index 60a9178ead6..0779a2696e8 100644 --- a/internal/service/auditmanager/control_test.go +++ b/internal/service/auditmanager/control_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -255,7 +258,7 @@ func TestAccAuditManagerControl_optionalSources(t *testing.T) { func testAccCheckControlDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_control" { @@ -289,7 +292,7 @@ func testAccCheckControlExists(ctx context.Context, name string, control *types. return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameControl, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) resp, err := tfauditmanager.FindControlByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameControl, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/exports_test.go b/internal/service/auditmanager/exports_test.go index 2ae72094478..86d78aba5dd 100644 --- a/internal/service/auditmanager/exports_test.go +++ b/internal/service/auditmanager/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager // Exports for use in tests only. diff --git a/internal/service/auditmanager/framework.go b/internal/service/auditmanager/framework.go index 49e353cc909..942630acaec 100644 --- a/internal/service/auditmanager/framework.go +++ b/internal/service/auditmanager/framework.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -20,8 +23,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -101,7 +104,7 @@ func (r *resourceFramework) Schema(ctx context.Context, req resource.SchemaReque } func (r *resourceFramework) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceFrameworkData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -124,7 +127,7 @@ func (r *resourceFramework) Create(ctx context.Context, req resource.CreateReque in := auditmanager.CreateAssessmentFrameworkInput{ Name: aws.String(plan.Name.ValueString()), ControlSets: csInput, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if !plan.ComplianceType.IsNull() { @@ -156,7 +159,7 @@ func (r *resourceFramework) Create(ctx context.Context, req resource.CreateReque } func (r *resourceFramework) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceFrameworkData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -186,7 +189,7 @@ func (r *resourceFramework) Read(ctx context.Context, req resource.ReadRequest, } func (r *resourceFramework) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan, state resourceFrameworkData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -246,7 +249,7 @@ func (r *resourceFramework) Update(ctx context.Context, req resource.UpdateReque } func (r *resourceFramework) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceFrameworkData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -378,7 +381,7 @@ func (rd *resourceFrameworkData) refreshFromOutput(ctx context.Context, out *aws rd.FrameworkType = flex.StringValueToFramework(ctx, out.Type) rd.ARN = flex.StringToFramework(ctx, out.Arn) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return diags } diff --git a/internal/service/auditmanager/framework_data_source.go b/internal/service/auditmanager/framework_data_source.go index ed8da339e4c..6abd6a48e8f 100644 --- a/internal/service/auditmanager/framework_data_source.go +++ b/internal/service/auditmanager/framework_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -14,8 +17,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/enum" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) @@ -81,7 +84,7 @@ func (d *dataSourceFramework) Schema(ctx context.Context, req datasource.SchemaR } func (d *dataSourceFramework) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { - conn := d.Meta().AuditManagerClient() + conn := d.Meta().AuditManagerClient(ctx) var data dataSourceFrameworkData resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) diff --git a/internal/service/auditmanager/framework_data_source_test.go b/internal/service/auditmanager/framework_data_source_test.go index 808c97ff1c4..b8d5cd31cd0 100644 --- a/internal/service/auditmanager/framework_data_source_test.go +++ b/internal/service/auditmanager/framework_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( diff --git a/internal/service/auditmanager/framework_share.go b/internal/service/auditmanager/framework_share.go index 9be145c9058..fee1c941476 100644 --- a/internal/service/auditmanager/framework_share.go +++ b/internal/service/auditmanager/framework_share.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -16,8 +19,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/enum" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -78,7 +81,7 @@ func (r *resourceFrameworkShare) Schema(ctx context.Context, req resource.Schema } func (r *resourceFrameworkShare) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceFrameworkShareData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -116,7 +119,7 @@ func (r *resourceFrameworkShare) Create(ctx context.Context, req resource.Create } func (r *resourceFrameworkShare) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceFrameworkShareData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -147,7 +150,7 @@ func (r *resourceFrameworkShare) Update(ctx context.Context, req resource.Update } func (r *resourceFrameworkShare) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceFrameworkShareData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
diff --git a/internal/service/auditmanager/framework_share_test.go b/internal/service/auditmanager/framework_share_test.go index 094c0d2cdfc..841cf03bcd8 100644 --- a/internal/service/auditmanager/framework_share_test.go +++ b/internal/service/auditmanager/framework_share_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -168,7 +171,7 @@ func TestAccAuditManagerFrameworkShare_optional(t *testing.T) { func testAccCheckFrameworkShareDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_framework_share" { @@ -202,7 +205,7 @@ func testAccCheckFrameworkShareExists(ctx context.Context, name string, framewor return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameFrameworkShare, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) resp, err := tfauditmanager.FindFrameworkShareByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameFrameworkShare, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/framework_test.go b/internal/service/auditmanager/framework_test.go index 77f587fd0e6..685b68e064a 100644 --- a/internal/service/auditmanager/framework_test.go +++ b/internal/service/auditmanager/framework_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -176,7 +179,7 @@ func TestAccAuditManagerFramework_optional(t *testing.T) { func testAccCheckFrameworkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_framework" { @@ -210,7 +213,7 @@ func testAccCheckFrameworkExists(ctx context.Context, name string, framework *ty return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameFramework, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) resp, err := tfauditmanager.FindFrameworkByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameFramework, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/generate.go b/internal/service/auditmanager/generate.go index 400ea055a5c..48cabfcce65 100644 --- a/internal/service/auditmanager/generate.go +++ b/internal/service/auditmanager/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ServiceTagsMap -KVTValues -SkipTypesImp -ListTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package auditmanager diff --git a/internal/service/auditmanager/organization_admin_account_registration.go b/internal/service/auditmanager/organization_admin_account_registration.go index 167f817229c..b91385676a0 100644 --- a/internal/service/auditmanager/organization_admin_account_registration.go +++ b/internal/service/auditmanager/organization_admin_account_registration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager import ( @@ -12,8 +15,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -52,7 +55,7 @@ func (r *resourceOrganizationAdminAccountRegistration) Schema(ctx context.Contex } func (r *resourceOrganizationAdminAccountRegistration) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var plan resourceOrganizationAdminAccountRegistrationData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -80,7 +83,7 @@ func (r *resourceOrganizationAdminAccountRegistration) Create(ctx context.Contex } func (r *resourceOrganizationAdminAccountRegistration) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceOrganizationAdminAccountRegistrationData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -113,7 +116,7 @@ func (r *resourceOrganizationAdminAccountRegistration) Update(ctx context.Contex } func (r *resourceOrganizationAdminAccountRegistration) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().AuditManagerClient() + conn := r.Meta().AuditManagerClient(ctx) var state resourceOrganizationAdminAccountRegistrationData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/auditmanager/organization_admin_account_registration_test.go b/internal/service/auditmanager/organization_admin_account_registration_test.go index 65ef0999c4f..18ae53c0dcf 100644 --- a/internal/service/auditmanager/organization_admin_account_registration_test.go +++ b/internal/service/auditmanager/organization_admin_account_registration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package auditmanager_test import ( @@ -94,7 +97,7 @@ func testAccOrganizationAdminAccountRegistration_disappears(t *testing.T) { func testAccCheckOrganizationAdminAccountRegistrationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_auditmanager_organization_admin_account_registration" { @@ -125,7 +128,7 @@ func testAccCheckOrganizationAdminAccountRegistrationExists(ctx context.Context, return create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameOrganizationAdminAccountRegistration, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).AuditManagerClient(ctx) out, err := conn.GetOrganizationAdminAccount(ctx, &auditmanager.GetOrganizationAdminAccountInput{}) if err != nil { return 
create.Error(names.AuditManager, create.ErrActionCheckingExistence, tfauditmanager.ResNameOrganizationAdminAccountRegistration, rs.Primary.ID, err) diff --git a/internal/service/auditmanager/service_package_gen.go b/internal/service/auditmanager/service_package_gen.go index b73f77a3ec0..dab9768f826 100644 --- a/internal/service/auditmanager/service_package_gen.go +++ b/internal/service/auditmanager/service_package_gen.go @@ -5,6 +5,9 @@ package auditmanager import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + auditmanager_sdkv2 "github.com/aws/aws-sdk-go-v2/service/auditmanager" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -75,4 +78,17 @@ func (p *servicePackage) ServicePackageName() string { return names.AuditManager } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*auditmanager_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return auditmanager_sdkv2.NewFromConfig(cfg, func(o *auditmanager_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = auditmanager_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/auditmanager/sweep.go b/internal/service/auditmanager/sweep.go index 521b9281996..372255e1e1e 100644 --- a/internal/service/auditmanager/sweep.go +++ b/internal/service/auditmanager/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,8 +16,8 @@ import ( "github.com/aws/aws-sdk-go-v2/service/auditmanager/types" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/framework" ) func init() { @@ -65,12 +68,12 @@ func isCompleteSetupError(err error) bool { func sweepAssessments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AuditManagerClient() + conn := client.AuditManagerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &auditmanager.ListAssessmentsInput{} var errs *multierror.Error @@ -91,11 +94,13 @@ func sweepAssessments(region string) error { id := aws.ToString(assessment.Id) log.Printf("[INFO] Deleting AuditManager Assessment: %s", id) - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceAssessment, id, client)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceAssessment, client, + framework.NewAttribute("id", id), + )) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AuditManager Assessments for %s: %w", region, err)) } if sweep.SkipSweepError(err) { @@ -108,12 +113,12 @@ func sweepAssessments(region string) error { func sweepAssessmentDelegations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil 
{ fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AuditManagerClient() + conn := client.AuditManagerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &auditmanager.GetDelegationsInput{} var errs *multierror.Error @@ -131,25 +136,15 @@ func sweepAssessmentDelegations(region string) error { } for _, d := range page.Delegations { - id := "" // ID is a combination of attributes for this resource, but not used for deletion - - // assessment ID is required for delete operations - assessmentIDAttr := sweep.FrameworkSupplementalAttribute{ - Path: "assessment_id", - Value: aws.ToString(d.AssessmentId), - } - // delegation ID is required for delete operations - delegationIDAttr := sweep.FrameworkSupplementalAttribute{ - Path: "delegation_id", - Value: aws.ToString(d.Id), - } - - log.Printf("[INFO] Deleting AuditManager Assessment Delegation: %s", delegationIDAttr.Value) - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceAssessmentDelegation, id, client, assessmentIDAttr, delegationIDAttr)) + log.Printf("[INFO] Deleting AuditManager Assessment Delegation: %s", aws.ToString(d.Id)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceAssessmentDelegation, client, + framework.NewAttribute("assessment_id", aws.ToString(d.AssessmentId)), + framework.NewAttribute("delegation_id", aws.ToString(d.Id)), + )) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AuditManager Assessment Delegations for %s: %w", region, err)) } if sweep.SkipSweepError(err) { @@ -162,12 +157,12 @@ func sweepAssessmentDelegations(region string) error { func sweepAssessmentReports(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, 
region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AuditManagerClient() + conn := client.AuditManagerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &auditmanager.ListAssessmentReportsInput{} var errs *multierror.Error @@ -186,18 +181,16 @@ func sweepAssessmentReports(region string) error { for _, report := range page.AssessmentReports { id := aws.ToString(report.Id) - // assessment ID is required for delete operations - assessmentIDAttr := sweep.FrameworkSupplementalAttribute{ - Path: "assessment_id", - Value: aws.ToString(report.AssessmentId), - } log.Printf("[INFO] Deleting AuditManager Assessment Report: %s", id) - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceAssessmentReport, id, client, assessmentIDAttr)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceAssessmentReport, client, + framework.NewAttribute("id", id), + framework.NewAttribute("assessment_id", aws.ToString(report.AssessmentId)), + )) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AuditManager Assessment Reports for %s: %w", region, err)) } if sweep.SkipSweepError(err) { @@ -210,12 +203,12 @@ func sweepAssessmentReports(region string) error { func sweepControls(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AuditManagerClient() + conn := client.AuditManagerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &auditmanager.ListControlsInput{ControlType: types.ControlTypeCustom} var errs *multierror.Error @@ -236,11 +229,13 @@ func sweepControls(region string) error { id 
:= aws.ToString(control.Id) log.Printf("[INFO] Deleting AuditManager Control: %s", id) - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceControl, id, client)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceControl, client, + framework.NewAttribute("id", id), + )) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AuditManager Controls for %s: %w", region, err)) } if sweep.SkipSweepError(err) { @@ -253,12 +248,12 @@ func sweepControls(region string) error { func sweepFrameworks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AuditManagerClient() + conn := client.AuditManagerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &auditmanager.ListAssessmentFrameworksInput{FrameworkType: types.FrameworkTypeCustom} var errs *multierror.Error @@ -275,15 +270,17 @@ func sweepFrameworks(region string) error { return fmt.Errorf("error retrieving AuditManager Frameworks: %w", err) } - for _, framework := range page.FrameworkMetadataList { - id := aws.ToString(framework.Id) + for _, f := range page.FrameworkMetadataList { + id := aws.ToString(f.Id) log.Printf("[INFO] Deleting AuditManager Framework: %s", id) - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceFramework, id, client)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceFramework, client, + framework.NewAttribute("id", id), + )) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = 
multierror.Append(errs, fmt.Errorf("error sweeping AuditManager Frameworks for %s: %w", region, err)) } if sweep.SkipSweepError(err) { @@ -296,12 +293,12 @@ func sweepFrameworks(region string) error { func sweepFrameworkShares(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AuditManagerClient() + conn := client.AuditManagerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &auditmanager.ListAssessmentFrameworkShareRequestsInput{RequestType: types.ShareRequestTypeSent} var errs *multierror.Error @@ -322,11 +319,13 @@ func sweepFrameworkShares(region string) error { id := aws.ToString(share.Id) log.Printf("[INFO] Deleting AuditManager Framework Share: %s", id) - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceFrameworkShare, id, client)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceFrameworkShare, client, + framework.NewAttribute("id", id), + )) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping AuditManager Framework Shares for %s: %w", region, err)) } if sweep.SkipSweepError(err) { diff --git a/internal/service/auditmanager/tags_gen.go b/internal/service/auditmanager/tags_gen.go index 5c363991b2b..bd111d2f181 100644 --- a/internal/service/auditmanager/tags_gen.go +++ b/internal/service/auditmanager/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists auditmanager service tags. +// listTags lists auditmanager service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *auditmanager.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *auditmanager.Client, identifier string) (tftags.KeyValueTags, error) { input := &auditmanager.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *auditmanager.Client, identifier string) // ListTags lists auditmanager service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AuditManagerClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AuditManagerClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from auditmanager service tags. +// KeyValueTags creates tftags.KeyValueTags from auditmanager service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns auditmanager service tags from Context. +// getTagsIn returns auditmanager service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets auditmanager service tags in Context. 
-func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets auditmanager service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates auditmanager service tags. +// updateTags updates auditmanager service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *auditmanager.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *auditmanager.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *auditmanager.Client, identifier strin // UpdateTags updates auditmanager service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AuditManagerClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AuditManagerClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/autoscaling/attachment.go b/internal/service/autoscaling/attachment.go index 66ef21e1172..3fbd96a1ff8 100644 --- a/internal/service/autoscaling/attachment.go +++ b/internal/service/autoscaling/attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -47,7 +50,7 @@ func ResourceAttachment() *schema.Resource { func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) asgName := d.Get("autoscaling_group_name").(string) if v, ok := d.GetOk("elb"); ok { @@ -92,7 +95,7 @@ func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta func resourceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) asgName := d.Get("autoscaling_group_name").(string) var err error @@ -118,7 +121,7 @@ func resourceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta in func resourceAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) asgName := d.Get("autoscaling_group_name").(string) if v, ok := d.GetOk("elb"); ok { diff --git a/internal/service/autoscaling/attachment_test.go b/internal/service/autoscaling/attachment_test.go index 4c79f721f80..b646ae3ec69 100644 --- a/internal/service/autoscaling/attachment_test.go +++ b/internal/service/autoscaling/attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -119,7 +122,7 @@ func TestAccAutoScalingAttachment_multipleALBTargetGroups(t *testing.T) { func testAccCheckAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscaling_attachment" { @@ -156,7 +159,7 @@ func testAccCheckAttachmentByLoadBalancerNameExists(ctx context.Context, n strin return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) return tfautoscaling.FindAttachmentByLoadBalancerName(ctx, conn, rs.Primary.Attributes["autoscaling_group_name"], rs.Primary.Attributes["elb"]) } @@ -169,7 +172,7 @@ func testAccCheckAttachmentByTargetGroupARNExists(ctx context.Context, n string) return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) return tfautoscaling.FindAttachmentByTargetGroupARN(ctx, conn, rs.Primary.Attributes["autoscaling_group_name"], rs.Primary.Attributes["lb_target_group_arn"]) } @@ -207,10 +210,6 @@ resource "aws_autoscaling_group" "test" { value = %[1]q propagate_at_launch = true } - - lifecycle { - ignore_changes = [load_balancers] - } } `, rName, elbCount)) } @@ -251,10 +250,6 @@ resource "aws_autoscaling_group" "test" { value = %[1]q propagate_at_launch = true } - - lifecycle { - ignore_changes = [target_group_arns] - } } `, rName, targetGroupCount)) } diff --git a/internal/service/autoscaling/consts.go b/internal/service/autoscaling/consts.go index 78484c10ae3..84b7d8a2dff 100644 --- a/internal/service/autoscaling/consts.go +++ 
b/internal/service/autoscaling/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -74,3 +77,11 @@ func PolicyType_Values() []string { PolicyTypeTargetTrackingScaling, } } + +const ( + TrafficSourceStateAdding = "Adding" + TrafficSourceStateAdded = "Added" + TrafficSourceStateInService = "InService" + TrafficSourceStateRemoving = "Removing" + TrafficSourceStateRemoved = "Removed" +) diff --git a/internal/service/autoscaling/errors.go b/internal/service/autoscaling/errors.go index c1b17bdd180..6f1ca00713f 100644 --- a/internal/service/autoscaling/errors.go +++ b/internal/service/autoscaling/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling const ( diff --git a/internal/service/autoscaling/flex.go b/internal/service/autoscaling/flex.go index 4cf62ffdd54..28c41aab07f 100644 --- a/internal/service/autoscaling/flex.go +++ b/internal/service/autoscaling/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( diff --git a/internal/service/autoscaling/flex_test.go b/internal/service/autoscaling/flex_test.go index fd5f244a041..598de484cc1 100644 --- a/internal/service/autoscaling/flex_test.go +++ b/internal/service/autoscaling/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( diff --git a/internal/service/autoscaling/generate.go b/internal/service/autoscaling/generate.go index 85658864007..c29c3890830 100644 --- a/internal/service/autoscaling/generate.go +++ b/internal/service/autoscaling/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -GetTag -ListTags -ListTagsOp=DescribeTags -ListTagsInFiltIDName=auto-scaling-group -ServiceTagsSlice -TagOp=CreateOrUpdateTags -TagResTypeElem=ResourceType -TagType2=TagDescription -TagTypeAddBoolElem=PropagateAtLaunch -TagTypeIDElem=ResourceId -UntagOp=DeleteTags -UntagInNeedTagType -UntagInTagsElem=Tags -UpdateTags //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeInstanceRefreshes,DescribeLoadBalancers,DescribeLoadBalancerTargetGroups,DescribeWarmPool +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package autoscaling diff --git a/internal/service/autoscaling/group.go b/internal/service/autoscaling/group.go index 9effe0a26cd..b087c3ec50f 100644 --- a/internal/service/autoscaling/group.go +++ b/internal/service/autoscaling/group.go @@ -1,7 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports - "bytes" +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "log" @@ -26,10 +28,11 @@ import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tfelb "github.com/hashicorp/terraform-provider-aws/internal/service/elb" + "github.com/hashicorp/terraform-provider-aws/internal/slices" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -250,9 +253,11 @@ func ResourceGroup() *schema.Resource { ExactlyOneOf: []string{"launch_configuration", "launch_template", "mixed_instances_policy"}, }, "load_balancers": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ConflictsWith: []string{"traffic_source"}, }, "max_instance_lifetime": { Type: schema.TypeInt, @@ -738,27 +743,39 @@ func ResourceGroup() *schema.Resource { }, }, }, - // This should be removable, but wait until other tags work is being done. 
- Set: func(v interface{}) int { - var buf bytes.Buffer - m := v.(map[string]interface{}) - buf.WriteString(fmt.Sprintf("%s-", m["key"].(string))) - buf.WriteString(fmt.Sprintf("%s-", m["value"].(string))) - buf.WriteString(fmt.Sprintf("%t-", m["propagate_at_launch"].(bool))) - - return create.StringHashcode(buf.String()) - }, }, "target_group_arns": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ConflictsWith: []string{"traffic_source"}, }, "termination_policies": { Type: schema.TypeList, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "traffic_source": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identifier": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + }, + }, + ConflictsWith: []string{"load_balancers", "target_group_arns"}, + }, "vpc_zone_identifier": { Type: schema.TypeSet, Optional: true, @@ -834,7 +851,9 @@ func ResourceGroup() *schema.Resource { func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + + startTime := time.Now() asgName := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) createInput := &autoscaling.CreateAutoScalingGroupInput{ @@ -947,6 +966,10 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter createInput.TerminationPolicies = flex.ExpandStringList(v.([]interface{})) } + if v, ok := d.GetOk("traffic_source"); ok && v.(*schema.Set).Len() > 0 { + createInput.TrafficSources = 
expandTrafficSourceIdentifiers(v.(*schema.Set).List()) + } + if v, ok := d.GetOk("vpc_zone_identifier"); ok && v.(*schema.Set).Len() > 0 { createInput.VPCZoneIdentifier = expandVPCZoneIdentifiers(v.(*schema.Set).List()) } @@ -985,7 +1008,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter } if v, ok := d.GetOk("wait_for_capacity_timeout"); ok { - if v, _ := time.ParseDuration(v.(string)); v > 0 { + if timeout, _ := time.ParseDuration(v.(string)); timeout > 0 { // On creation all targets are minimums. f := func(nASG, nELB int) error { minSize := minSize @@ -1009,7 +1032,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter return nil } - if err := waitGroupCapacitySatisfied(ctx, conn, meta.(*conns.AWSClient).ELBConn(), meta.(*conns.AWSClient).ELBV2Conn(), d.Id(), f, v); err != nil { + if err := waitGroupCapacitySatisfied(ctx, conn, meta.(*conns.AWSClient).ELBConn(ctx), meta.(*conns.AWSClient).ELBV2Conn(ctx), d.Id(), f, startTime, timeout); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) capacity satisfied: %s", d.Id(), err) } } @@ -1055,7 +1078,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig g, err := FindGroupByName(ctx, conn, d.Id()) @@ -1078,7 +1101,6 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("default_instance_warmup", g.DefaultInstanceWarmup) d.Set("desired_capacity", g.DesiredCapacity) d.Set("desired_capacity_type", g.DesiredCapacityType) - if len(g.EnabledMetrics) > 0 { d.Set("enabled_metrics", flattenEnabledMetrics(g.EnabledMetrics)) d.Set("metrics_granularity", 
g.EnabledMetrics[0].Granularity) @@ -1088,7 +1110,6 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa } d.Set("health_check_grace_period", g.HealthCheckGracePeriod) d.Set("health_check_type", g.HealthCheckType) - d.Set("load_balancers", aws.StringValueSlice(g.LoadBalancerNames)) d.Set("launch_configuration", g.LaunchConfigurationName) if g.LaunchTemplate != nil { if err := d.Set("launch_template", []interface{}{flattenLaunchTemplateSpecification(g.LaunchTemplate)}); err != nil { @@ -1097,6 +1118,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa } else { d.Set("launch_template", nil) } + d.Set("load_balancers", aws.StringValueSlice(g.LoadBalancerNames)) d.Set("max_instance_lifetime", g.MaxInstanceLifetime) d.Set("max_size", g.MaxSize) d.Set("min_size", g.MinSize) @@ -1114,6 +1136,9 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("protect_from_scale_in", g.NewInstancesProtectedFromScaleIn) d.Set("service_linked_role_arn", g.ServiceLinkedRoleARN) d.Set("suspended_processes", flattenSuspendedProcesses(g.SuspendedProcesses)) + if err := d.Set("traffic_source", flattenTrafficSourceIdentifiers(g.TrafficSources)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting traffic_source: %s", err) + } d.Set("target_group_arns", aws.StringValueSlice(g.TargetGroupARNs)) // If no termination polices are explicitly configured and the upstream state // is only using the "Default" policy, clear the state to make it consistent @@ -1146,7 +1171,9 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + + startTime := time.Now() var shouldWaitForCapacity bool var shouldRefreshInstances bool @@ -1157,6 
+1184,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter "suspended_processes", "tag", "target_group_arns", + "traffic_source", "warm_pool", ) { input := &autoscaling.UpdateAutoScalingGroupInput{ @@ -1267,7 +1295,6 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter input.VPCZoneIdentifier = expandVPCZoneIdentifiers(d.Get("vpc_zone_identifier").(*schema.Set).List()) } - log.Printf("[DEBUG] Updating Auto Scaling Group: %s", input) _, err := conn.UpdateAutoScalingGroupWithContext(ctx, input) if err != nil { @@ -1281,73 +1308,96 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter oldTags := Tags(KeyValueTags(ctx, oTagRaw, d.Id(), TagResourceTypeGroup)) newTags := Tags(KeyValueTags(ctx, nTagRaw, d.Id(), TagResourceTypeGroup)) - if err := UpdateTags(ctx, conn, d.Id(), TagResourceTypeGroup, oldTags, newTags); err != nil { + if err := updateTags(ctx, conn, d.Id(), TagResourceTypeGroup, oldTags, newTags); err != nil { return sdkdiag.AppendErrorf(diags, "updating tags for Auto Scaling Group (%s): %s", d.Id(), err) } } - if d.HasChange("load_balancers") { - o, n := d.GetChange("load_balancers") + if d.HasChange("traffic_source") { + o, n := d.GetChange("traffic_source") if o == nil { o = new(schema.Set) } if n == nil { n = new(schema.Set) } + os := o.(*schema.Set) ns := n.(*schema.Set) - if remove := flex.ExpandStringSet(os.Difference(ns)); len(remove) > 0 { - // API only supports removing 10 at a time. - batchSize := 10 + // API only supports adding or removing 10 at a time. 
+ batchSize := 10 + for _, chunk := range slices.Chunks(expandTrafficSourceIdentifiers(os.Difference(ns).List()), batchSize) { + _, err := conn.DetachTrafficSourcesWithContext(ctx, &autoscaling.DetachTrafficSourcesInput{ + AutoScalingGroupName: aws.String(d.Id()), + TrafficSources: chunk, + }) - var batches [][]*string + if err != nil { + return sdkdiag.AppendErrorf(diags, "detaching Auto Scaling Group (%s) traffic sources: %s", d.Id(), err) + } - for batchSize < len(remove) { - remove, batches = remove[batchSize:], append(batches, remove[0:batchSize:batchSize]) + if _, err := waitTrafficSourcesDeleted(ctx, conn, d.Id(), "", d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) traffic sources removed: %s", d.Id(), err) } - batches = append(batches, remove) + } - for _, batch := range batches { - _, err := conn.DetachLoadBalancersWithContext(ctx, &autoscaling.DetachLoadBalancersInput{ - AutoScalingGroupName: aws.String(d.Id()), - LoadBalancerNames: batch, - }) + for _, chunk := range slices.Chunks(expandTrafficSourceIdentifiers(ns.Difference(os).List()), batchSize) { + _, err := conn.AttachTrafficSourcesWithContext(ctx, &autoscaling.AttachTrafficSourcesInput{ + AutoScalingGroupName: aws.String(d.Id()), + TrafficSources: chunk, + }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "detaching Auto Scaling Group (%s) load balancers: %s", d.Id(), err) - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "attaching Auto Scaling Group (%s) traffic sources: %s", d.Id(), err) + } - if _, err := waitLoadBalancersRemoved(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) load balancers removed: %s", d.Id(), err) - } + if _, err := waitTrafficSourcesCreated(ctx, conn, d.Id(), "", d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) traffic 
sources added: %s", d.Id(), err) } } + } + + if d.HasChange("load_balancers") { + o, n := d.GetChange("load_balancers") + if o == nil { + o = new(schema.Set) + } + if n == nil { + n = new(schema.Set) + } + os := o.(*schema.Set) + ns := n.(*schema.Set) - if add := flex.ExpandStringSet(ns.Difference(os)); len(add) > 0 { - // API only supports adding 10 at a time. - batchSize := 10 + // API only supports adding or removing 10 at a time. + batchSize := 10 + for _, chunk := range slices.Chunks(flex.ExpandStringSet(os.Difference(ns)), batchSize) { + _, err := conn.DetachLoadBalancersWithContext(ctx, &autoscaling.DetachLoadBalancersInput{ + AutoScalingGroupName: aws.String(d.Id()), + LoadBalancerNames: chunk, + }) - var batches [][]*string + if err != nil { + return sdkdiag.AppendErrorf(diags, "detaching Auto Scaling Group (%s) load balancers: %s", d.Id(), err) + } - for batchSize < len(add) { - add, batches = add[batchSize:], append(batches, add[0:batchSize:batchSize]) + if _, err := waitLoadBalancersRemoved(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) load balancers removed: %s", d.Id(), err) } - batches = append(batches, add) + } - for _, batch := range batches { - _, err := conn.AttachLoadBalancersWithContext(ctx, &autoscaling.AttachLoadBalancersInput{ - AutoScalingGroupName: aws.String(d.Id()), - LoadBalancerNames: batch, - }) + for _, chunk := range slices.Chunks(flex.ExpandStringSet(ns.Difference(os)), batchSize) { + _, err := conn.AttachLoadBalancersWithContext(ctx, &autoscaling.AttachLoadBalancersInput{ + AutoScalingGroupName: aws.String(d.Id()), + LoadBalancerNames: chunk, + }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "attaching Auto Scaling Group (%s) load balancers: %s", d.Id(), err) - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "attaching Auto Scaling Group (%s) load balancers: %s", d.Id(), err) + } - if _, err := 
waitLoadBalancersAdded(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) load balancers added: %s", d.Id(), err) - } + if _, err := waitLoadBalancersAdded(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) load balancers added: %s", d.Id(), err) } } } @@ -1363,57 +1413,35 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter os := o.(*schema.Set) ns := n.(*schema.Set) - if remove := flex.ExpandStringSet(os.Difference(ns)); len(remove) > 0 { - // AWS API only supports adding/removing 10 at a time. - batchSize := 10 - - var batches [][]*string + // API only supports adding or removing 10 at a time. + batchSize := 10 + for _, chunk := range slices.Chunks(flex.ExpandStringSet(os.Difference(ns)), batchSize) { + _, err := conn.DetachLoadBalancerTargetGroupsWithContext(ctx, &autoscaling.DetachLoadBalancerTargetGroupsInput{ + AutoScalingGroupName: aws.String(d.Id()), + TargetGroupARNs: chunk, + }) - for batchSize < len(remove) { - remove, batches = remove[batchSize:], append(batches, remove[0:batchSize:batchSize]) + if err != nil { + return sdkdiag.AppendErrorf(diags, "detaching Auto Scaling Group (%s) target groups: %s", d.Id(), err) } - batches = append(batches, remove) - - for _, batch := range batches { - _, err := conn.DetachLoadBalancerTargetGroupsWithContext(ctx, &autoscaling.DetachLoadBalancerTargetGroupsInput{ - AutoScalingGroupName: aws.String(d.Id()), - TargetGroupARNs: batch, - }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "detaching Auto Scaling Group (%s) target groups: %s", d.Id(), err) - } - - if _, err := waitLoadBalancerTargetGroupsRemoved(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) target groups removed: %s", d.Id(), err) - } + if _, err := 
waitLoadBalancerTargetGroupsRemoved(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) target groups removed: %s", d.Id(), err) } } - if add := flex.ExpandStringSet(ns.Difference(os)); len(add) > 0 { - // AWS API only supports adding/removing 10 at a time. - batchSize := 10 - - var batches [][]*string + for _, chunk := range slices.Chunks(flex.ExpandStringSet(ns.Difference(os)), batchSize) { + _, err := conn.AttachLoadBalancerTargetGroupsWithContext(ctx, &autoscaling.AttachLoadBalancerTargetGroupsInput{ + AutoScalingGroupName: aws.String(d.Id()), + TargetGroupARNs: chunk, + }) - for batchSize < len(add) { - add, batches = add[batchSize:], append(batches, add[0:batchSize:batchSize]) + if err != nil { + return sdkdiag.AppendErrorf(diags, "attaching Auto Scaling Group (%s) target groups: %s", d.Id(), err) } - batches = append(batches, add) - - for _, batch := range batches { - _, err := conn.AttachLoadBalancerTargetGroupsWithContext(ctx, &autoscaling.AttachLoadBalancerTargetGroupsInput{ - AutoScalingGroupName: aws.String(d.Id()), - TargetGroupARNs: batch, - }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "attaching Auto Scaling Group (%s) target groups: %s", d.Id(), err) - } - - if _, err := waitLoadBalancerTargetGroupsAdded(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) target groups added: %s", d.Id(), err) - } + if _, err := waitLoadBalancerTargetGroupsAdded(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) target groups added: %s", d.Id(), err) } } } @@ -1436,7 +1464,19 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter } if shouldRefreshInstances { - if err := startInstanceRefresh(ctx, conn, expandStartInstanceRefreshInput(d.Id(), tfMap)); err != nil { + 
var launchTemplate *autoscaling.LaunchTemplateSpecification + + if v, ok := d.GetOk("launch_template"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + launchTemplate = expandLaunchTemplateSpecification(v.([]interface{})[0].(map[string]interface{})) + } + + var mixedInstancesPolicy *autoscaling.MixedInstancesPolicy + + if v, ok := d.GetOk("mixed_instances_policy"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + mixedInstancesPolicy = expandMixedInstancesPolicy(v.([]interface{})[0].(map[string]interface{})) + } + + if err := startInstanceRefresh(ctx, conn, expandStartInstanceRefreshInput(d.Id(), tfMap, launchTemplate, mixedInstancesPolicy)); err != nil { return sdkdiag.AppendFromErr(diags, err) } } @@ -1463,7 +1503,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter if shouldWaitForCapacity { if v, ok := d.GetOk("wait_for_capacity_timeout"); ok { - if v, _ := time.ParseDuration(v.(string)); v > 0 { + if timeout, _ := time.ParseDuration(v.(string)); timeout > 0 { // On update all targets are specific. 
f := func(nASG, nELB int) error { minSize := d.Get("min_size").(int) @@ -1484,7 +1524,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter return nil } - if err := waitGroupCapacitySatisfied(ctx, conn, meta.(*conns.AWSClient).ELBConn(), meta.(*conns.AWSClient).ELBV2Conn(), d.Id(), f, v); err != nil { + if err := waitGroupCapacitySatisfied(ctx, conn, meta.(*conns.AWSClient).ELBConn(ctx), meta.(*conns.AWSClient).ELBV2Conn(ctx), d.Id(), f, startTime, timeout); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Group (%s) capacity satisfied: %s", d.Id(), err) } } @@ -1573,7 +1613,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) forceDeleteGroup := d.Get("force_delete").(bool) forceDeleteWarmPool := forceDeleteGroup || d.Get("force_delete_warm_pool").(bool) @@ -1997,7 +2037,7 @@ func findLoadBalancerTargetGroupStates(ctx context.Context, conn *autoscaling.Au return output, nil } -func findScalingActivities(ctx context.Context, conn *autoscaling.AutoScaling, input *autoscaling.DescribeScalingActivitiesInput) ([]*autoscaling.Activity, error) { +func findScalingActivities(ctx context.Context, conn *autoscaling.AutoScaling, input *autoscaling.DescribeScalingActivitiesInput, startTime time.Time) ([]*autoscaling.Activity, error) { var output []*autoscaling.Activity err := conn.DescribeScalingActivitiesPagesWithContext(ctx, input, func(page *autoscaling.DescribeScalingActivitiesOutput, lastPage bool) bool { @@ -2005,12 +2045,14 @@ func findScalingActivities(ctx context.Context, conn *autoscaling.AutoScaling, i return !lastPage } - for _, v := range page.Activities { - if v == nil { + for _, activity := range page.Activities { + if activity == nil { 
continue } - output = append(output, v) + if startTime.Before(aws.TimeValue(activity.StartTime)) { + output = append(output, activity) + } } return !lastPage @@ -2030,12 +2072,56 @@ func findScalingActivities(ctx context.Context, conn *autoscaling.AutoScaling, i return output, nil } -func findScalingActivitiesByName(ctx context.Context, conn *autoscaling.AutoScaling, name string) ([]*autoscaling.Activity, error) { +func findScalingActivitiesByName(ctx context.Context, conn *autoscaling.AutoScaling, name string, startTime time.Time) ([]*autoscaling.Activity, error) { input := &autoscaling.DescribeScalingActivitiesInput{ AutoScalingGroupName: aws.String(name), } - return findScalingActivities(ctx, conn, input) + return findScalingActivities(ctx, conn, input, startTime) +} + +func findTrafficSourceStates(ctx context.Context, conn *autoscaling.AutoScaling, input *autoscaling.DescribeTrafficSourcesInput) ([]*autoscaling.TrafficSourceState, error) { + var output []*autoscaling.TrafficSourceState + + err := conn.DescribeTrafficSourcesPagesWithContext(ctx, input, func(page *autoscaling.DescribeTrafficSourcesOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.TrafficSources { + if v == nil { + continue + } + + output = append(output, v) + } + + return !lastPage + }) + + if tfawserr.ErrMessageContains(err, ErrCodeValidationError, "not found") { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + return output, nil +} + +func findTrafficSourceStatesByTwoPartKey(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType string) ([]*autoscaling.TrafficSourceState, error) { + input := &autoscaling.DescribeTrafficSourcesInput{ + AutoScalingGroupName: aws.String(asgName), + } + if trafficSourceType != "" { + input.TrafficSourceType = aws.String(trafficSourceType) + } + + return findTrafficSourceStates(ctx, conn, input) } func 
findWarmPool(ctx context.Context, conn *autoscaling.AutoScaling, name string) (*autoscaling.DescribeWarmPoolOutput, error) { @@ -2076,10 +2162,10 @@ func findWarmPool(ctx context.Context, conn *autoscaling.AutoScaling, name strin return output, nil } -func statusGroupCapacity(ctx context.Context, conn *autoscaling.AutoScaling, elbconn *elb.ELB, elbv2conn *elbv2.ELBV2, name string, cb func(int, int) error) retry.StateRefreshFunc { +func statusGroupCapacity(ctx context.Context, conn *autoscaling.AutoScaling, elbconn *elb.ELB, elbv2conn *elbv2.ELBV2, name string, startTime time.Time, cb func(int, int) error) retry.StateRefreshFunc { return func() (interface{}, string, error) { // Check for fatal error in activity logs. - scalingActivities, err := findScalingActivitiesByName(ctx, conn, name) + scalingActivities, err := findScalingActivitiesByName(ctx, conn, name, startTime) if err != nil { return nil, "", fmt.Errorf("reading scaling activities: %w", err) @@ -2089,6 +2175,10 @@ func statusGroupCapacity(ctx context.Context, conn *autoscaling.AutoScaling, elb for _, v := range scalingActivities { if statusCode := aws.StringValue(v.StatusCode); statusCode == autoscaling.ScalingActivityStatusCodeFailed && aws.Int64Value(v.Progress) == 100 { + if strings.Contains(aws.StringValue(v.StatusMessage), "Invalid IAM Instance Profile") { + // the activity will likely be retried + continue + } errors = multierror.Append(errors, fmt.Errorf("Scaling activity (%s): %s: %s", aws.StringValue(v.ActivityId), statusCode, aws.StringValue(v.StatusMessage))) } } @@ -2266,6 +2356,33 @@ func statusLoadBalancerTargetGroupInStateCount(ctx context.Context, conn *autosc } } +func statusTrafficSourcesInStateCount(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType string, states ...string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := findTrafficSourceStatesByTwoPartKey(ctx, conn, asgName, trafficSourceType) + + if 
tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + var count int + + for _, v := range output { + for _, state := range states { + if aws.StringValue(v.State) == state { + count++ + break + } + } + } + + return output, strconv.Itoa(count), nil + } +} + func statusWarmPool(ctx context.Context, conn *autoscaling.AutoScaling, name string) retry.StateRefreshFunc { return func() (interface{}, string, error) { output, err := findWarmPool(ctx, conn, name) @@ -2298,10 +2415,10 @@ func statusWarmPoolInstanceCount(ctx context.Context, conn *autoscaling.AutoScal } } -func waitGroupCapacitySatisfied(ctx context.Context, conn *autoscaling.AutoScaling, elbconn *elb.ELB, elbv2conn *elbv2.ELBV2, name string, cb func(int, int) error, timeout time.Duration) error { +func waitGroupCapacitySatisfied(ctx context.Context, conn *autoscaling.AutoScaling, elbconn *elb.ELB, elbv2conn *elbv2.ELBV2, name string, cb func(int, int) error, startTime time.Time, timeout time.Duration) error { stateConf := &retry.StateChangeConf{ Target: []string{"ok"}, - Refresh: statusGroupCapacity(ctx, conn, elbconn, elbv2conn, name, cb), + Refresh: statusGroupCapacity(ctx, conn, elbconn, elbv2conn, name, startTime, cb), Timeout: timeout, } @@ -2394,6 +2511,38 @@ func waitLoadBalancerTargetGroupsRemoved(ctx context.Context, conn *autoscaling. 
return nil, err } +func waitTrafficSourcesCreated(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType string, timeout time.Duration) ([]*autoscaling.TrafficSourceState, error) { + stateConf := &retry.StateChangeConf{ + Target: []string{"0"}, + Refresh: statusTrafficSourcesInStateCount(ctx, conn, asgName, trafficSourceType, TrafficSourceStateAdding), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.([]*autoscaling.TrafficSourceState); ok { + return output, err + } + + return nil, err +} + +func waitTrafficSourcesDeleted(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType string, timeout time.Duration) ([]*autoscaling.TrafficSourceState, error) { + stateConf := &retry.StateChangeConf{ + Target: []string{"0"}, + Refresh: statusTrafficSourcesInStateCount(ctx, conn, asgName, trafficSourceType, TrafficSourceStateRemoving), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.([]*autoscaling.TrafficSourceState); ok { + return output, err + } + + return nil, err +} + const ( // Maximum amount of time to wait for an InstanceRefresh to be started // Must be at least as long as instanceRefreshCancelledTimeout, since we try to cancel any @@ -3019,7 +3168,7 @@ func expandInstanceReusePolicy(tfMap map[string]interface{}) *autoscaling.Instan return apiObject } -func expandStartInstanceRefreshInput(name string, tfMap map[string]interface{}) *autoscaling.StartInstanceRefreshInput { +func expandStartInstanceRefreshInput(name string, tfMap map[string]interface{}, launchTemplate *autoscaling.LaunchTemplateSpecification, mixedInstancesPolicy *autoscaling.MixedInstancesPolicy) *autoscaling.StartInstanceRefreshInput { if tfMap == nil { return nil } @@ -3030,6 +3179,14 @@ func expandStartInstanceRefreshInput(name string, tfMap map[string]interface{}) if v, ok := tfMap["preferences"].([]interface{}); ok && len(v) > 0 
{ apiObject.Preferences = expandRefreshPreferences(v[0].(map[string]interface{})) + + // "The AutoRollback parameter cannot be set to true when the DesiredConfiguration parameter is empty". + if aws.BoolValue(apiObject.Preferences.AutoRollback) { + apiObject.DesiredConfiguration = &autoscaling.DesiredConfiguration{ + LaunchTemplate: launchTemplate, + MixedInstancesPolicy: mixedInstancesPolicy, + } + } } if v, ok := tfMap["strategy"].(string); ok && v != "" { @@ -3087,6 +3244,50 @@ func expandVPCZoneIdentifiers(tfList []interface{}) *string { return aws.String(strings.Join(vpcZoneIDs, ",")) } +func expandTrafficSourceIdentifier(tfMap map[string]interface{}) *autoscaling.TrafficSourceIdentifier { + if tfMap == nil { + return nil + } + + apiObject := &autoscaling.TrafficSourceIdentifier{} + + if v, ok := tfMap["identifier"].(string); ok && v != "" { + apiObject.Identifier = aws.String(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = aws.String(v) + } + + return apiObject +} + +func expandTrafficSourceIdentifiers(tfList []interface{}) []*autoscaling.TrafficSourceIdentifier { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*autoscaling.TrafficSourceIdentifier + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandTrafficSourceIdentifier(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + func flattenEnabledMetrics(apiObjects []*autoscaling.EnabledMetric) []string { var tfList []string @@ -3507,6 +3708,42 @@ func flattentTotalLocalStorageGB(apiObject *autoscaling.TotalLocalStorageGBReque return tfMap } +func flattenTrafficSourceIdentifier(apiObject *autoscaling.TrafficSourceIdentifier) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Identifier; v != nil { + tfMap["identifier"] = 
aws.StringValue(v) + } + + if v := apiObject.Type; v != nil { + tfMap["type"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenTrafficSourceIdentifiers(apiObjects []*autoscaling.TrafficSourceIdentifier) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenTrafficSourceIdentifier(apiObject)) + } + + return tfList +} + func flattenVCPUCount(apiObject *autoscaling.VCpuCountRequest) map[string]interface{} { if apiObject == nil { return nil @@ -3648,7 +3885,7 @@ func validateGroupInstanceRefreshTriggerFields(i interface{}, path cty.Path) dia } } - schema := ResourceGroup().Schema + schema := ResourceGroup().SchemaMap() for attr, attrSchema := range schema { if v == attr { if attrSchema.Computed && !attrSchema.Optional { diff --git a/internal/service/autoscaling/group_data_source.go b/internal/service/autoscaling/group_data_source.go index 359e534794a..968df4b694a 100644 --- a/internal/service/autoscaling/group_data_source.go +++ b/internal/service/autoscaling/group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -493,6 +496,22 @@ func DataSourceGroup() *schema.Resource { Type: schema.TypeString, }, }, + "traffic_source": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identifier": { + Type: schema.TypeString, + Computed: true, + }, + "type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, "vpc_zone_identifier": { Type: schema.TypeString, Computed: true, @@ -539,7 +558,7 @@ func DataSourceGroup() *schema.Resource { func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig groupName := d.Get("name").(string) @@ -589,6 +608,9 @@ func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta inter } d.Set("target_group_arns", aws.StringValueSlice(group.TargetGroupARNs)) d.Set("termination_policies", aws.StringValueSlice(group.TerminationPolicies)) + if err := d.Set("traffic_source", flattenTrafficSourceIdentifiers(group.TrafficSources)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting traffic_source: %s", err) + } d.Set("vpc_zone_identifier", group.VPCZoneIdentifier) if group.WarmPoolConfiguration != nil { if err := d.Set("warm_pool", []interface{}{flattenWarmPoolConfiguration(group.WarmPoolConfiguration)}); err != nil { diff --git a/internal/service/autoscaling/group_data_source_test.go b/internal/service/autoscaling/group_data_source_test.go index 56b327e3a60..6454972bf8a 100644 --- a/internal/service/autoscaling/group_data_source_test.go +++ b/internal/service/autoscaling/group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -49,7 +52,8 @@ func TestAccAutoScalingGroupDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrPair(datasourceName, "tag.#", resourceName, "tag.#"), resource.TestCheckResourceAttrPair(datasourceName, "target_group_arns.#", resourceName, "target_group_arns.#"), resource.TestCheckResourceAttr(datasourceName, "termination_policies.#", "1"), // Not set in resource. - resource.TestCheckResourceAttr(datasourceName, "vpc_zone_identifier", ""), // Not set in resource. + resource.TestCheckResourceAttrPair(datasourceName, "traffic_source.#", resourceName, "traffic_source.#"), + resource.TestCheckResourceAttr(datasourceName, "vpc_zone_identifier", ""), // Not set in resource. resource.TestCheckResourceAttrPair(datasourceName, "warm_pool.#", resourceName, "warm_pool.#"), resource.TestCheckResourceAttrPair(datasourceName, "warm_pool_size", resourceName, "warm_pool_size"), ), diff --git a/internal/service/autoscaling/group_tag.go b/internal/service/autoscaling/group_tag.go index 963e5497fe6..e2967504f6c 100644 --- a/internal/service/autoscaling/group_tag.go +++ b/internal/service/autoscaling/group_tag.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -57,13 +60,13 @@ func ResourceGroupTag() *schema.Resource { func resourceGroupTagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) identifier := d.Get("autoscaling_group_name").(string) tags := d.Get("tag").([]interface{}) key := tags[0].(map[string]interface{})["key"].(string) - if err := UpdateTags(ctx, conn, identifier, TagResourceTypeGroup, nil, tags); err != nil { + if err := updateTags(ctx, conn, identifier, TagResourceTypeGroup, nil, tags); err != nil { return sdkdiag.AppendErrorf(diags, "creating AutoScaling Group (%s) tag (%s): %s", identifier, key, err) } @@ -74,7 +77,7 @@ func resourceGroupTagCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceGroupTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { @@ -108,14 +111,14 @@ func resourceGroupTagRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceGroupTagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "updating AutoScaling Group Tag (%s): %s", d.Id(), err) } - if err := UpdateTags(ctx, conn, identifier, TagResourceTypeGroup, nil, d.Get("tag")); err != nil { + if err := updateTags(ctx, conn, identifier, TagResourceTypeGroup, nil, d.Get("tag")); err != 
nil { return sdkdiag.AppendErrorf(diags, "updating AutoScaling Group (%s) tag (%s): %s", identifier, key, err) } @@ -124,14 +127,14 @@ func resourceGroupTagUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceGroupTagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "deleting AutoScaling Group Tag (%s): %s", d.Id(), err) } - if err := UpdateTags(ctx, conn, identifier, TagResourceTypeGroup, d.Get("tag"), nil); err != nil { + if err := updateTags(ctx, conn, identifier, TagResourceTypeGroup, d.Get("tag"), nil); err != nil { return sdkdiag.AppendErrorf(diags, "deleting AutoScaling Group (%s) tag (%s): %s", identifier, key, err) } diff --git a/internal/service/autoscaling/group_tag_test.go b/internal/service/autoscaling/group_tag_test.go index 1823deb8b1f..010f4521818 100644 --- a/internal/service/autoscaling/group_tag_test.go +++ b/internal/service/autoscaling/group_tag_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -101,7 +104,7 @@ func TestAccAutoScalingGroupTag_value(t *testing.T) { func testAccCheckGroupTagDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscaling_group_tag" { @@ -148,7 +151,7 @@ func testAccCheckGroupTagExists(ctx context.Context, n string) resource.TestChec return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) _, err = tfautoscaling.GetTag(ctx, conn, identifier, tfautoscaling.TagResourceTypeGroup, key) diff --git a/internal/service/autoscaling/group_test.go b/internal/service/autoscaling/group_test.go index f27520a537d..771737a99b2 100644 --- a/internal/service/autoscaling/group_test.go +++ b/internal/service/autoscaling/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -93,11 +96,11 @@ func TestAccAutoScalingGroup_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "predicted_capacity", "0"), resource.TestCheckResourceAttr(resourceName, "protect_from_scale_in", "false"), acctest.CheckResourceAttrGlobalARN(resourceName, "service_linked_role_arn", "iam", "role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"), - resource.TestCheckResourceAttr(resourceName, "suspended_processes.#", "0"), - resource.TestCheckNoResourceAttr(resourceName, "tag.#"), - resource.TestCheckResourceAttr(resourceName, "tags.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tag.#", "0"), + resource.TestCheckNoResourceAttr(resourceName, "tags.#"), // "tags" removed at v5.0.0. 
resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "0"), resource.TestCheckResourceAttr(resourceName, "termination_policies.#", "0"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "0"), resource.TestCheckResourceAttr(resourceName, "vpc_zone_identifier.#", "0"), resource.TestCheckResourceAttr(resourceName, "wait_for_capacity_timeout", "10m"), resource.TestCheckNoResourceAttr(resourceName, "wait_for_elb_capacity"), @@ -237,7 +240,6 @@ func TestAccAutoScalingGroup_tags(t *testing.T) { "value": "value1", "propagate_at_launch": "true", }), - resource.TestCheckResourceAttr(resourceName, "tags.#", "0"), ), }, testAccGroupImportStep(resourceName), @@ -256,7 +258,6 @@ func TestAccAutoScalingGroup_tags(t *testing.T) { "value": "value2", "propagate_at_launch": "false", }), - resource.TestCheckResourceAttr(resourceName, "tags.#", "0"), ), }, { @@ -269,7 +270,6 @@ func TestAccAutoScalingGroup_tags(t *testing.T) { "value": "value2", "propagate_at_launch": "true", }), - resource.TestCheckResourceAttr(resourceName, "tags.#", "0"), ), }, }, @@ -327,11 +327,11 @@ func TestAccAutoScalingGroup_simple(t *testing.T) { "value": rName, "propagate_at_launch": "true", }), - resource.TestCheckNoResourceAttr(resourceName, "tags.#"), resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "0"), resource.TestCheckResourceAttr(resourceName, "termination_policies.#", "2"), resource.TestCheckResourceAttr(resourceName, "termination_policies.0", "OldestInstance"), resource.TestCheckResourceAttr(resourceName, "termination_policies.1", "ClosestToNextInstanceHour"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "0"), resource.TestCheckResourceAttr(resourceName, "vpc_zone_identifier.#", "0"), resource.TestCheckResourceAttr(resourceName, "wait_for_capacity_timeout", "10m"), resource.TestCheckNoResourceAttr(resourceName, "wait_for_elb_capacity"), @@ -378,10 +378,10 @@ func TestAccAutoScalingGroup_simple(t *testing.T) { "value": rName, 
"propagate_at_launch": "true", }), - resource.TestCheckNoResourceAttr(resourceName, "tags.#"), resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "0"), resource.TestCheckResourceAttr(resourceName, "termination_policies.#", "1"), resource.TestCheckResourceAttr(resourceName, "termination_policies.0", "ClosestToNextInstanceHour"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "0"), resource.TestCheckResourceAttr(resourceName, "vpc_zone_identifier.#", "0"), resource.TestCheckResourceAttr(resourceName, "wait_for_capacity_timeout", "10m"), resource.TestCheckNoResourceAttr(resourceName, "wait_for_elb_capacity"), @@ -494,6 +494,7 @@ func TestAccAutoScalingGroup_withLoadBalancer(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "load_balancers.#", "1"), resource.TestCheckTypeSetElemAttrPair(resourceName, "load_balancers.*", "aws_elb.test", "name"), resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "0"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), resource.TestCheckResourceAttr(resourceName, "vpc_zone_identifier.#", "1"), resource.TestCheckResourceAttr(resourceName, "wait_for_elb_capacity", "2"), ), @@ -521,6 +522,7 @@ func TestAccAutoScalingGroup_WithLoadBalancer_toTargetGroup(t *testing.T) { testAccCheckGroupExists(ctx, resourceName, &group), resource.TestCheckResourceAttr(resourceName, "load_balancers.#", "1"), resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "0"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), ), }, { @@ -529,6 +531,7 @@ func TestAccAutoScalingGroup_WithLoadBalancer_toTargetGroup(t *testing.T) { testAccCheckGroupExists(ctx, resourceName, &group), resource.TestCheckResourceAttr(resourceName, "load_balancers.#", "0"), resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "1"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), ), }, testAccGroupImportStep(resourceName), @@ -538,12 
+541,209 @@ func TestAccAutoScalingGroup_WithLoadBalancer_toTargetGroup(t *testing.T) { testAccCheckGroupExists(ctx, resourceName, &group), resource.TestCheckResourceAttr(resourceName, "load_balancers.#", "1"), resource.TestCheckResourceAttr(resourceName, "target_group_arns.#", "0"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + ), + }, + }, + }) +} + +func TestAccAutoScalingGroup_withTrafficSourceELB(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_trafficSourceELB(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "force_delete", "true"), + resource.TestCheckResourceAttr(resourceName, "health_check_grace_period", "300"), + resource.TestCheckResourceAttr(resourceName, "health_check_type", "ELB"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.0.%", "2"), + resource.TestCheckTypeSetElemAttrPair(resourceName, "traffic_source.0.identifier", "aws_elb.test", "name"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.0.type", "elb"), + resource.TestCheckResourceAttr(resourceName, "vpc_zone_identifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "wait_for_elb_capacity", "2"), + ), + }, + testAccGroupImportStep(resourceName), + }, + }) +} + +func TestAccAutoScalingGroup_withTrafficSourcesELBs(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + 
resourceName := "aws_autoscaling_group.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_trafficSourceELBs(rName, 10), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + ), + }, + testAccGroupImportStep(resourceName), + { + Config: testAccGroupConfig_trafficSourceELBs(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + ), + }, + { + Config: testAccGroupConfig_trafficSourceELBs(rName, 10), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), ), }, }, }) } +func TestAccAutoScalingGroup_withTrafficSourceELB_toTargetGroup(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_trafficSourceELB(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + ), + }, + { + Config: testAccGroupConfig_trafficSourceELBtoELBv2(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + ), + }, + 
testAccGroupImportStep(resourceName), + { + Config: testAccGroupConfig_trafficSourceELB(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + ), + }, + }, + }) +} + +func TestAccAutoScalingGroup_withTrafficSourceELBV2(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + resourceName := "aws_autoscaling_group.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_trafficSourceELBv2(rName, 0), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "0"), + ), + }, + { + Config: testAccGroupConfig_trafficSourceELBv2(rName, 10), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "10"), + ), + }, + testAccGroupImportStep(resourceName), + { + Config: testAccGroupConfig_trafficSourceELBv2(rName, 1), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + ), + }, + }, + }) +} + +func TestAccAutoScalingGroup_withTrafficSourceVPCLatticeTargetGroup(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: 
acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_trafficSourceVPCLatticeTargetGroup(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "force_delete", "true"), + resource.TestCheckResourceAttr(resourceName, "health_check_grace_period", "300"), + resource.TestCheckResourceAttr(resourceName, "health_check_type", "ELB"), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "1"), + resource.TestCheckResourceAttr(resourceName, "vpc_zone_identifier.#", "1"), + resource.TestCheckResourceAttr(resourceName, "wait_for_elb_capacity", "2"), + ), + }, + testAccGroupImportStep(resourceName), + }, + }) +} + +func TestAccAutoScalingGroup_withTrafficSourceVPCLatticeTargetGroups(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_trafficSourceVPCLatticeTargetGroups(rName, 5), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "traffic_source.#", "5"), + ), + }, + testAccGroupImportStep(resourceName), + }, + }) +} + func TestAccAutoScalingGroup_withPlacementGroup(t *testing.T) { ctx := acctest.Context(t) var group autoscaling.Group @@ -597,14 +797,14 @@ func TestAccAutoScalingGroup_withScalingActivityErrorIncorrectInstanceArchitectu CheckDestroy: 
testAccCheckGroupDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccGroupConfig_withPotentialScalingActivityError(rName, "t4g.micro"), + Config: testAccGroupConfig_withPotentialScalingActivityError(rName, "t4g.micro", 1), ExpectError: regexp.MustCompile(`The architecture 'arm64' of the specified instance type does not match the architecture 'x86_64' of the specified AMI`), }, }, }) } -func TestAccAutoScalingGroup_withNoScalingActivityErrorCorrectInstanceArchitecture(t *testing.T) { +func TestAccAutoScalingGroup_withScalingActivityErrorIncorrectInstanceArchitecture_Recovers(t *testing.T) { ctx := acctest.Context(t) var group autoscaling.Group rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -617,8 +817,18 @@ func TestAccAutoScalingGroup_withNoScalingActivityErrorCorrectInstanceArchitectu CheckDestroy: testAccCheckGroupDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccGroupConfig_withPotentialScalingActivityError(rName, "t2.micro"), - Check: resource.ComposeTestCheckFunc( + Config: testAccGroupConfig_withPotentialScalingActivityError(rName, "t2.micro", 1), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + ), + }, + { + Config: testAccGroupConfig_withPotentialScalingActivityError(rName, "t4g.micro", 2), + ExpectError: regexp.MustCompile(`The architecture 'arm64' of the specified instance type does not match the architecture 'x86_64' of the specified AMI`), + }, + { + Config: testAccGroupConfig_withPotentialScalingActivityError(rName, "t2.micro", 3), + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGroupExists(ctx, resourceName, &group), ), }, @@ -1014,22 +1224,6 @@ func TestAccAutoScalingGroup_InstanceRefresh_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.triggers.#", "0"), ), }, - { - Config: testAccGroupConfig_instanceRefreshAutoRollback(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckGroupExists(ctx, 
resourceName, &group), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.#", "1"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.#", "1"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.auto_rollback", "true"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.checkpoint_delay", ""), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.checkpoint_percentages.#", "0"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.instance_warmup", ""), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.min_healthy_percentage", "0"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.skip_matching", "false"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.strategy", "Rolling"), - resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.triggers.#", "0"), - ), - }, { Config: testAccGroupConfig_instanceRefreshFull(rName), Check: resource.ComposeTestCheckFunc( @@ -1146,6 +1340,54 @@ func TestAccAutoScalingGroup_InstanceRefresh_triggers(t *testing.T) { }) } +func TestAccAutoScalingGroup_InstanceRefresh_autoRollback(t *testing.T) { + ctx := acctest.Context(t) + var group autoscaling.Group + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_instanceRefreshAutoRollback(rName, "t2.micro"), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.#", 
"1"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.#", "1"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.auto_rollback", "true"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.checkpoint_delay", ""), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.checkpoint_percentages.#", "0"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.instance_warmup", ""), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.min_healthy_percentage", "0"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.skip_matching", "false"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.strategy", "Rolling"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.triggers.#", "0"), + ), + }, + { + Config: testAccGroupConfig_instanceRefreshAutoRollback(rName, "t3.micro"), + Check: resource.ComposeTestCheckFunc( + testAccCheckGroupExists(ctx, resourceName, &group), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.#", "1"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.#", "1"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.auto_rollback", "true"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.checkpoint_delay", ""), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.checkpoint_percentages.#", "0"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.instance_warmup", ""), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.min_healthy_percentage", "0"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.preferences.0.skip_matching", "false"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.strategy", 
"Rolling"), + resource.TestCheckResourceAttr(resourceName, "instance_refresh.0.triggers.#", "0"), + ), + }, + }, + }) +} + // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/256 func TestAccAutoScalingGroup_loadBalancers(t *testing.T) { ctx := acctest.Context(t) @@ -3442,7 +3684,7 @@ func testAccCheckGroupExists(ctx context.Context, n string, v *autoscaling.Group return fmt.Errorf("No Auto Scaling Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) output, err := tfautoscaling.FindGroupByName(ctx, conn, rs.Primary.ID) @@ -3458,7 +3700,7 @@ func testAccCheckGroupExists(ctx context.Context, n string, v *autoscaling.Group func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscaling_group" { @@ -3502,7 +3744,7 @@ func testAccCheckGroupHealthyInstanceCount(v *autoscaling.Group, expected int) r func testAccCheckInstanceRefreshCount(ctx context.Context, v *autoscaling.Group, expected int) resource.TestCheckFunc { return func(state *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) output, err := tfautoscaling.FindInstanceRefreshes(ctx, conn, &autoscaling.DescribeInstanceRefreshesInput{ AutoScalingGroupName: v.AutoScalingGroupName, @@ -3522,7 +3764,7 @@ func testAccCheckInstanceRefreshCount(ctx context.Context, v *autoscaling.Group, func testAccCheckInstanceRefreshStatus(ctx context.Context, v *autoscaling.Group, index int, expected ...string) resource.TestCheckFunc { return func(state *terraform.State) error { - conn := 
acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) output, err := tfautoscaling.FindInstanceRefreshes(ctx, conn, &autoscaling.DescribeInstanceRefreshesInput{ AutoScalingGroupName: v.AutoScalingGroupName, @@ -3559,7 +3801,7 @@ func testAccCheckLBTargetGroupExists(ctx context.Context, n string, v *elbv2.Tar return errors.New("No ELBv2 Target Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) output, err := tfelbv2.FindTargetGroupByARN(ctx, conn, rs.Primary.ID) @@ -3577,7 +3819,7 @@ func testAccCheckLBTargetGroupExists(ctx context.Context, n string, v *elbv2.Tar // sure that all instances in it are healthy. func testAccCheckALBTargetGroupHealthy(ctx context.Context, v *elbv2.TargetGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) output, err := conn.DescribeTargetHealthWithContext(ctx, &elbv2.DescribeTargetHealthInput{ TargetGroupArn: v.TargetGroupArn, @@ -3603,9 +3845,13 @@ func testAccGroupConfig_launchConfigurationBase(rName, instanceType string) stri acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), fmt.Sprintf(` resource "aws_launch_configuration" "test" { - name = %[1]q + name_prefix = %[1]q image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id instance_type = %[2]q + + lifecycle { + create_before_destroy = true + } } `, rName, instanceType)) } @@ -3930,6 +4176,7 @@ resource "aws_autoscaling_group" "test" { wait_for_elb_capacity = 2 force_delete = true load_balancers = [aws_elb.test.name] + target_group_arns = [] tag { key = "Name" @@ -3953,6 +4200,7 @@ resource "aws_autoscaling_group" "test" { health_check_type = "ELB" wait_for_elb_capacity = 2 force_delete = true + load_balancers = [] target_group_arns = 
[aws_lb_target_group.test.arn] tag { @@ -3964,6 +4212,267 @@ resource "aws_autoscaling_group" "test" { `, rName)) } +func testAccGroupConfig_vpcLatticeBase(rName string, targetGroupCount int) string { + return acctest.ConfigCompose( + acctest.ConfigVPCWithSubnets(rName, 1), + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + fmt.Sprintf(` +resource "aws_vpclattice_target_group" "test" { + count = %[2]d + name = %[1]q + type = "INSTANCE" + + config { + port = 80 + protocol = "HTTP" + vpc_identifier = aws_vpc.test.id + } +} + +resource "aws_launch_configuration" "test" { + name = %[1]q + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t2.micro" +} +`, rName, targetGroupCount)) +} + +func testAccGroupConfig_trafficSourceELB(rName string) string { + return acctest.ConfigCompose(testAccGroupConfig_elbBase(rName), fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 2 + min_size = 2 + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + health_check_grace_period = 300 + health_check_type = "ELB" + force_delete = true + wait_for_elb_capacity = 2 + + traffic_source { + identifier = aws_elb.test.name + type = "elb" + } + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName)) +} + +func testAccGroupConfig_trafficSourceELBs(rName string, elbCount int) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + acctest.ConfigVPCWithSubnets(rName, 1), + fmt.Sprintf(` +resource "aws_internet_gateway" "test" { + vpc_id = aws_vpc.test.id + + tags = { + Name = %[1]q + } +} + +resource "aws_launch_template" "test" { + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t3.micro" + name = %[1]q +} + +resource "aws_elb" "test" { + count = %[2]d + + # "name" cannot be longer than 32 characters. 
+ name = format("%%s-%%d", substr(%[1]q, 0, 28), count.index) + subnets = aws_subnet.test[*].id + + listener { + instance_port = 80 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } + + depends_on = [aws_internet_gateway.test] +} + +resource "aws_autoscaling_group" "test" { + name = %[1]q + force_delete = true + max_size = 0 + min_size = 0 + + dynamic "traffic_source" { + for_each = aws_elb.test[*] + content { + identifier = traffic_source.value.name + type = "elb" + } + } + + vpc_zone_identifier = aws_subnet.test[*].id + + launch_template { + id = aws_launch_template.test.id + } +} +`, rName, elbCount)) +} + +func testAccGroupConfig_trafficSourceELBtoELBv2(rName string) string { + return acctest.ConfigCompose(testAccGroupConfig_elbBase(rName), fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 2 + min_size = 2 + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + health_check_grace_period = 300 + health_check_type = "ELB" + wait_for_elb_capacity = 2 + force_delete = true + + traffic_source { + identifier = aws_lb_target_group.test.arn + type = "elbv2" + } + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName)) +} + +func testAccGroupConfig_trafficSourceELBv2(rName string, targetGroupCount int) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + acctest.ConfigVPCWithSubnets(rName, 2), + fmt.Sprintf(` +resource "aws_launch_configuration" "test" { + name = %[1]q + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t2.micro" + + enable_monitoring = false +} + +resource "aws_lb_target_group" "test" { + count = %[2]d + + # "name" cannot be longer than 32 characters. 
+ name = format("%%s-%%d", substr(%[1]q, 0, 28), count.index) + port = 80 + protocol = "HTTP" + vpc_id = aws_vpc.test.id +} + +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 0 + min_size = 0 + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + dynamic "traffic_source" { + for_each = aws_lb_target_group.test[*] + content { + identifier = traffic_source.value.arn + type = "elbv2" + } + } +} +`, rName, targetGroupCount)) +} + +func testAccGroupConfig_trafficSourceVPCLatticeTargetGroup(rName string) string { + return acctest.ConfigCompose(testAccGroupConfig_vpcLatticeBase(rName, 1), fmt.Sprintf(` +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 2 + min_size = 2 + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + health_check_grace_period = 300 + health_check_type = "ELB" + wait_for_elb_capacity = 2 + force_delete = true + + traffic_source { + identifier = aws_vpclattice_target_group.test[0].arn + type = "vpc-lattice" + } + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName)) +} + +func testAccGroupConfig_trafficSourceVPCLatticeTargetGroups(rName string, targetGroupCount int) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + acctest.ConfigVPCWithSubnets(rName, 2), fmt.Sprintf(` +resource "aws_vpclattice_target_group" "test" { + count = %[2]d + + name = "%[1]s-${count.index}" + type = "INSTANCE" + + config { + port = 80 + protocol = "HTTP" + vpc_identifier = aws_vpc.test.id + } +} + +resource "aws_launch_configuration" "test" { + name = %[1]q + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 2 + min_size = 2 + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + 
health_check_grace_period = 300 + health_check_type = "ELB" + wait_for_elb_capacity = 2 + force_delete = true + + dynamic "traffic_source" { + for_each = aws_vpclattice_target_group.test[*] + content { + identifier = traffic_source.value.arn + type = "vpc-lattice" + } + } + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName, targetGroupCount)) +} + func testAccGroupConfig_placement(rName string) string { return acctest.ConfigCompose(testAccGroupConfig_launchConfigurationBase(rName, "c3.large"), fmt.Sprintf(` resource "aws_placement_group" "test" { @@ -4120,28 +4629,32 @@ resource "aws_autoscaling_group" "test" { `, rName)) } -func testAccGroupConfig_withPotentialScalingActivityError(rName, instanceType string) string { +func testAccGroupConfig_withPotentialScalingActivityError(rName, instanceType string, instanceCount int) string { return acctest.ConfigCompose(testAccGroupConfig_launchConfigurationBase(rName, instanceType), fmt.Sprintf(` resource "aws_autoscaling_group" "test" { availability_zones = [data.aws_availability_zones.available.names[0]] name = %[1]q - max_size = 1 + max_size = %[2]d min_size = 1 health_check_grace_period = 300 health_check_type = "ELB" - desired_capacity = 1 + desired_capacity = %[2]d force_delete = true termination_policies = ["OldestInstance", "ClosestToNextInstanceHour"] wait_for_capacity_timeout = "2m" launch_configuration = aws_launch_configuration.test.name + instance_refresh { + strategy = "Rolling" + } + tag { key = "Name" value = %[1]q propagate_at_launch = true } } -`, rName)) +`, rName, instanceCount)) } func testAccGroupConfig_serviceLinkedRoleARN(rName string) string { @@ -4349,15 +4862,19 @@ resource "aws_autoscaling_group" "test" { `, rName)) } -func testAccGroupConfig_instanceRefreshAutoRollback(rName string) string { - return acctest.ConfigCompose(testAccGroupConfig_launchConfigurationBase(rName, "t3.nano"), fmt.Sprintf(` +func testAccGroupConfig_instanceRefreshAutoRollback(rName, 
instanceType string) string { + return acctest.ConfigCompose(testAccGroupConfig_launchTemplateBase(rName, instanceType), fmt.Sprintf(` resource "aws_autoscaling_group" "test" { - availability_zones = [data.aws_availability_zones.available.names[0]] - name = %[1]q - max_size = 2 - min_size = 1 - desired_capacity = 1 - launch_configuration = aws_launch_configuration.test.name + availability_zones = [data.aws_availability_zones.available.names[0]] + name = %[1]q + max_size = 2 + min_size = 1 + desired_capacity = 1 + + launch_template { + id = aws_launch_template.test.id + version = aws_launch_template.test.default_version + } instance_refresh { strategy = "Rolling" diff --git a/internal/service/autoscaling/groups_data_source.go b/internal/service/autoscaling/groups_data_source.go index e2962ddf353..e0d7cd1545b 100644 --- a/internal/service/autoscaling/groups_data_source.go +++ b/internal/service/autoscaling/groups_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -80,7 +83,7 @@ func buildFiltersDataSource(set *schema.Set) []*autoscaling.Filter { func dataSourceGroupsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) input := &autoscaling.DescribeAutoScalingGroupsInput{} diff --git a/internal/service/autoscaling/groups_data_source_test.go b/internal/service/autoscaling/groups_data_source_test.go index 82d7220e1cb..c338c35df6e 100644 --- a/internal/service/autoscaling/groups_data_source_test.go +++ b/internal/service/autoscaling/groups_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( diff --git a/internal/service/autoscaling/launch_configuration.go b/internal/service/autoscaling/launch_configuration.go index 567ffca83ec..0cd504e232d 100644 --- a/internal/service/autoscaling/launch_configuration.go +++ b/internal/service/autoscaling/launch_configuration.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "crypto/sha1" "encoding/base64" @@ -315,8 +318,8 @@ func ResourceLaunchConfiguration() *schema.Resource { func resourceLaunchConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - autoscalingconn := meta.(*conns.AWSClient).AutoScalingConn() - ec2conn := meta.(*conns.AWSClient).EC2Conn() + autoscalingconn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + ec2conn := meta.(*conns.AWSClient).EC2Conn(ctx) lcName := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := autoscaling.CreateLaunchConfigurationInput{ @@ -430,8 +433,8 @@ func resourceLaunchConfigurationCreate(ctx context.Context, d *schema.ResourceDa func resourceLaunchConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - autoscalingconn := meta.(*conns.AWSClient).AutoScalingConn() - ec2conn := meta.(*conns.AWSClient).EC2Conn() + autoscalingconn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + ec2conn := meta.(*conns.AWSClient).EC2Conn(ctx) lc, err := FindLaunchConfigurationByName(ctx, autoscalingconn, d.Id()) @@ -517,7 +520,7 @@ func resourceLaunchConfigurationRead(ctx context.Context, d *schema.ResourceData func resourceLaunchConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - 
conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) log.Printf("[DEBUG] Deleting Auto Scaling Launch Configuration: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, propagationTimeout, diff --git a/internal/service/autoscaling/launch_configuration_data_source.go b/internal/service/autoscaling/launch_configuration_data_source.go index 054f66eb68e..14e3def9785 100644 --- a/internal/service/autoscaling/launch_configuration_data_source.go +++ b/internal/service/autoscaling/launch_configuration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -187,8 +190,8 @@ func DataSourceLaunchConfiguration() *schema.Resource { func dataSourceLaunchConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - autoscalingconn := meta.(*conns.AWSClient).AutoScalingConn() - ec2conn := meta.(*conns.AWSClient).EC2Conn() + autoscalingconn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + ec2conn := meta.(*conns.AWSClient).EC2Conn(ctx) name := d.Get("name").(string) lc, err := FindLaunchConfigurationByName(ctx, autoscalingconn, name) diff --git a/internal/service/autoscaling/launch_configuration_data_source_test.go b/internal/service/autoscaling/launch_configuration_data_source_test.go index c4dbe62ef4c..6a711d57ad6 100644 --- a/internal/service/autoscaling/launch_configuration_data_source_test.go +++ b/internal/service/autoscaling/launch_configuration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( diff --git a/internal/service/autoscaling/launch_configuration_test.go b/internal/service/autoscaling/launch_configuration_test.go index d0838f5be02..7210ac39f22 100644 --- a/internal/service/autoscaling/launch_configuration_test.go +++ b/internal/service/autoscaling/launch_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -766,7 +769,7 @@ func TestAccAutoScalingLaunchConfiguration_AssociatePublicIPAddress_subnetTrueCo func testAccCheckLaunchConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_launch_configuration" { @@ -801,7 +804,7 @@ func testAccCheckLaunchConfigurationExists(ctx context.Context, n string, v *aut return fmt.Errorf("No Auto Scaling Launch Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) output, err := tfautoscaling.FindLaunchConfigurationByName(ctx, conn, rs.Primary.ID) @@ -826,7 +829,7 @@ func testAccCheckAMIExists(ctx context.Context, n string, v *ec2.Image) resource return fmt.Errorf("No EC2 AMI ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindImageByID(ctx, conn, rs.Primary.ID) @@ -842,7 +845,7 @@ func testAccCheckAMIExists(ctx context.Context, n string, v *ec2.Image) resource func testAccCheckInstanceHasPublicIPAddress(ctx context.Context, group *autoscaling.Group, idx int, expected bool) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := 
acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) instanceID := aws.StringValue(group.Instances[idx].InstanceId) instance, err := tfec2.FindInstanceByID(ctx, conn, instanceID) diff --git a/internal/service/autoscaling/lifecycle_hook.go b/internal/service/autoscaling/lifecycle_hook.go index 1d4ec88e293..b6a80a68bad 100644 --- a/internal/service/autoscaling/lifecycle_hook.go +++ b/internal/service/autoscaling/lifecycle_hook.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -71,7 +74,7 @@ func ResourceLifecycleHook() *schema.Resource { func resourceLifecycleHookPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) input := getPutLifecycleHookInput(d) name := d.Get("name").(string) @@ -94,7 +97,7 @@ func resourceLifecycleHookPut(ctx context.Context, d *schema.ResourceData, meta func resourceLifecycleHookRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) p, err := FindLifecycleHook(ctx, conn, d.Get("autoscaling_group_name").(string), d.Id()) @@ -121,7 +124,7 @@ func resourceLifecycleHookRead(ctx context.Context, d *schema.ResourceData, meta func resourceLifecycleHookDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) log.Printf("[INFO] Deleting Auto Scaling Lifecycle Hook: %s", d.Id()) _, err := conn.DeleteLifecycleHookWithContext(ctx, &autoscaling.DeleteLifecycleHookInput{ diff --git 
a/internal/service/autoscaling/lifecycle_hook_test.go b/internal/service/autoscaling/lifecycle_hook_test.go index 3ef892b2ba6..5189c3f2fd7 100644 --- a/internal/service/autoscaling/lifecycle_hook_test.go +++ b/internal/service/autoscaling/lifecycle_hook_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -102,7 +105,7 @@ func testAccCheckLifecycleHookExists(ctx context.Context, n string) resource.Tes return fmt.Errorf("No Auto Scaling Lifecycle Hook ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) _, err := tfautoscaling.FindLifecycleHook(ctx, conn, rs.Primary.Attributes["autoscaling_group_name"], rs.Primary.ID) @@ -112,7 +115,7 @@ func testAccCheckLifecycleHookExists(ctx context.Context, n string) resource.Tes func testAccCheckLifecycleHookDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscaling_lifecycle_hook" { diff --git a/internal/service/autoscaling/notification.go b/internal/service/autoscaling/notification.go index 6d1901c9f74..618629834f0 100644 --- a/internal/service/autoscaling/notification.go +++ b/internal/service/autoscaling/notification.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -44,7 +47,7 @@ func ResourceNotification() *schema.Resource { func resourceNotificationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) gl := flex.ExpandStringSet(d.Get("group_names").(*schema.Set)) nl := flex.ExpandStringSet(d.Get("notifications").(*schema.Set)) @@ -61,7 +64,7 @@ func resourceNotificationCreate(ctx context.Context, d *schema.ResourceData, met func resourceNotificationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) gl := flex.ExpandStringSet(d.Get("group_names").(*schema.Set)) opts := &autoscaling.DescribeNotificationConfigurationsInput{ @@ -125,7 +128,7 @@ func resourceNotificationRead(ctx context.Context, d *schema.ResourceData, meta func resourceNotificationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) // Notifications API call is a PUT, so we don't need to diff the list, just // push whatever it is and AWS sorts it out @@ -197,7 +200,7 @@ func removeNotificationConfigToGroupsWithTopic(ctx context.Context, conn *autosc func resourceNotificationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) gl := flex.ExpandStringSet(d.Get("group_names").(*schema.Set)) diff --git a/internal/service/autoscaling/notification_test.go b/internal/service/autoscaling/notification_test.go index 
591cec1f276..df37c9c889b 100644 --- a/internal/service/autoscaling/notification_test.go +++ b/internal/service/autoscaling/notification_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -125,7 +128,7 @@ func testAccCheckASGNotificationExists(ctx context.Context, n string, groups []s return fmt.Errorf("No ASG Notification ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) opts := &autoscaling.DescribeNotificationConfigurationsInput{ AutoScalingGroupNames: aws.StringSlice(groups), MaxRecords: aws.Int64(100), @@ -150,7 +153,7 @@ func testAccCheckASGNDestroy(ctx context.Context) resource.TestCheckFunc { } groups := []*string{aws.String("foobar1-terraform-test")} - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) opts := &autoscaling.DescribeNotificationConfigurationsInput{ AutoScalingGroupNames: groups, } diff --git a/internal/service/autoscaling/policy.go b/internal/service/autoscaling/policy.go index 325b60cba7c..c8ad504dd69 100644 --- a/internal/service/autoscaling/policy.go +++ b/internal/service/autoscaling/policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -18,8 +21,8 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" ) // @SDKResource("aws_autoscaling_policy") @@ -503,7 +506,7 @@ func customizedMetricDataQuerySchema() *schema.Schema { func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) name := d.Get("name").(string) input, err := getPutScalingPolicyInput(d) @@ -526,7 +529,7 @@ func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) p, err := FindScalingPolicy(ctx, conn, d.Get("autoscaling_group_name").(string), d.Id()) @@ -567,7 +570,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) input, err := getPutScalingPolicyInput(d) @@ -587,7 +590,7 @@ func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) log.Printf("[INFO] Deleting Auto Scaling Policy: %s", d.Id()) _, err := conn.DeletePolicyWithContext(ctx, &autoscaling.DeletePolicyInput{ diff --git a/internal/service/autoscaling/policy_test.go b/internal/service/autoscaling/policy_test.go index fc021df693e..dd5bee5dfe4 100644 --- a/internal/service/autoscaling/policy_test.go +++ b/internal/service/autoscaling/policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -548,7 +551,7 @@ func testAccCheckScalingPolicyExists(ctx context.Context, n string, v *autoscali return fmt.Errorf("No Auto Scaling Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) output, err := tfautoscaling.FindScalingPolicy(ctx, conn, rs.Primary.Attributes["autoscaling_group_name"], rs.Primary.ID) @@ -564,7 +567,7 @@ func testAccCheckScalingPolicyExists(ctx context.Context, n string, v *autoscali func testAccCheckPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscaling_policy" { diff --git a/internal/service/autoscaling/schedule.go b/internal/service/autoscaling/schedule.go index a3d66f0ceb6..041ebfa2139 100644 --- a/internal/service/autoscaling/schedule.go +++ b/internal/service/autoscaling/schedule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling import ( @@ -90,7 +93,7 @@ func ResourceSchedule() *schema.Resource { func resourceSchedulePut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) name := d.Get("scheduled_action_name").(string) input := &autoscaling.PutScheduledUpdateGroupActionInput{ @@ -151,7 +154,7 @@ func resourceSchedulePut(ctx context.Context, d *schema.ResourceData, meta inter func resourceScheduleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) sa, err := FindScheduledUpdateGroupAction(ctx, conn, d.Get("autoscaling_group_name").(string), d.Id()) @@ -196,7 +199,7 @@ func resourceScheduleRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceScheduleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) log.Printf("[INFO] Deleting Auto Scaling Scheduled Action: %s", d.Id()) _, err := conn.DeleteScheduledActionWithContext(ctx, &autoscaling.DeleteScheduledActionInput{ diff --git a/internal/service/autoscaling/schedule_test.go b/internal/service/autoscaling/schedule_test.go index 52fbd9fbd49..be15466c5da 100644 --- a/internal/service/autoscaling/schedule_test.go +++ b/internal/service/autoscaling/schedule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscaling_test import ( @@ -194,7 +197,7 @@ func testAccCheckScalingScheduleExists(ctx context.Context, n string, v *autosca return fmt.Errorf("No Auto Scaling Scheduled Action ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) output, err := tfautoscaling.FindScheduledUpdateGroupAction(ctx, conn, rs.Primary.Attributes["autoscaling_group_name"], rs.Primary.ID) @@ -210,7 +213,7 @@ func testAccCheckScalingScheduleExists(ctx context.Context, n string, v *autosca func testAccCheckScheduleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscaling_schedule" { diff --git a/internal/service/autoscaling/service_package_gen.go b/internal/service/autoscaling/service_package_gen.go index ae396498ea4..8abf7846587 100644 --- a/internal/service/autoscaling/service_package_gen.go +++ b/internal/service/autoscaling/service_package_gen.go @@ -5,6 +5,10 @@ package autoscaling import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + autoscaling_sdkv1 "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -66,6 +70,10 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka Factory: ResourceSchedule, TypeName: "aws_autoscaling_schedule", }, + { + Factory: ResourceTrafficSourceAttachment, + TypeName: "aws_autoscaling_traffic_source_attachment", + }, { Factory: ResourceLaunchConfiguration, TypeName: 
"aws_launch_configuration", @@ -77,4 +85,13 @@ func (p *servicePackage) ServicePackageName() string { return names.AutoScaling } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*autoscaling_sdkv1.AutoScaling, error) { + sess := config["session"].(*session_sdkv1.Session) + + return autoscaling_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/autoscaling/sweep.go b/internal/service/autoscaling/sweep.go index b3014430a36..67a180ccb4f 100644 --- a/internal/service/autoscaling/sweep.go +++ b/internal/service/autoscaling/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/autoscaling" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,11 +31,11 @@ func init() { func sweepGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AutoScalingConn() + conn := client.AutoScalingConn(ctx) input := &autoscaling.DescribeAutoScalingGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -63,7 +65,7 @@ func sweepGroups(region string) error { return fmt.Errorf("error listing Auto Scaling Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = 
sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Auto Scaling Groups (%s): %w", region, err) @@ -74,11 +76,11 @@ func sweepGroups(region string) error { func sweepLaunchConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AutoScalingConn() + conn := client.AutoScalingConn(ctx) input := &autoscaling.DescribeLaunchConfigurationsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -107,7 +109,7 @@ func sweepLaunchConfigurations(region string) error { return fmt.Errorf("error listing Auto Scaling Launch Configurations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Auto Scaling Launch Configurations (%s): %w", region, err) diff --git a/internal/service/autoscaling/tags_gen.go b/internal/service/autoscaling/tags_gen.go index 96b4b2983f2..d351488670d 100644 --- a/internal/service/autoscaling/tags_gen.go +++ b/internal/service/autoscaling/tags_gen.go @@ -18,7 +18,7 @@ import ( // GetTag fetches an individual autoscaling service tag for a resource. // Returns whether the key value and any errors. A NotFoundError is used to signal that no value was found. -// This function will optimise the handling over ListTags, if possible. +// This function will optimise the handling over listTags, if possible. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
func GetTag(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identifier, resourceType, key string) (*tftags.TagData, error) { @@ -50,10 +50,10 @@ func GetTag(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identifie return listTags.KeyTagData(key), nil } -// ListTags lists autoscaling service tags. +// listTags lists autoscaling service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identifier, resourceType string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identifier, resourceType string) (tftags.KeyValueTags, error) { input := &autoscaling.DescribeTagsInput{ Filters: []*autoscaling.Filter{ { @@ -75,7 +75,7 @@ func ListTags(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identif // ListTags lists autoscaling service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier, resourceType string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).AutoScalingConn(), identifier, resourceType) + tags, err := listTags(ctx, meta.(*conns.AWSClient).AutoScalingConn(ctx), identifier, resourceType) if err != nil { return err @@ -219,9 +219,9 @@ func KeyValueTags(ctx context.Context, tags any, identifier, resourceType string } } -// GetTagsIn returns autoscaling service tags from Context. +// getTagsIn returns autoscaling service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*autoscaling.Tag { +func getTagsIn(ctx context.Context) []*autoscaling.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -231,17 +231,17 @@ func GetTagsIn(ctx context.Context) []*autoscaling.Tag { return nil } -// SetTagsOut sets autoscaling service tags in Context. -func SetTagsOut(ctx context.Context, tags any, identifier, resourceType string) { +// setTagsOut sets autoscaling service tags in Context. +func setTagsOut(ctx context.Context, tags any, identifier, resourceType string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags, identifier, resourceType)) } } -// UpdateTags updates autoscaling service tags. +// updateTags updates autoscaling service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identifier, resourceType string, oldTagsSet, newTagsSet any) error { +func updateTags(ctx context.Context, conn autoscalingiface.AutoScalingAPI, identifier, resourceType string, oldTagsSet, newTagsSet any) error { oldTags := KeyValueTags(ctx, oldTagsSet, identifier, resourceType) newTags := KeyValueTags(ctx, newTagsSet, identifier, resourceType) @@ -279,5 +279,5 @@ func UpdateTags(ctx context.Context, conn autoscalingiface.AutoScalingAPI, ident // UpdateTags updates autoscaling service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier, resourceType string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).AutoScalingConn(), identifier, resourceType, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).AutoScalingConn(ctx), identifier, resourceType, oldTags, newTags) } diff --git a/internal/service/autoscaling/traffic_source_attachment.go b/internal/service/autoscaling/traffic_source_attachment.go new file mode 100644 index 00000000000..ffb2ffd1bb8 --- /dev/null +++ b/internal/service/autoscaling/traffic_source_attachment.go @@ -0,0 +1,236 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package autoscaling + +import ( + "context" + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/autoscaling" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/slices" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +// @SDKResource("aws_autoscaling_traffic_source_attachment") +func ResourceTrafficSourceAttachment() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceTrafficSourceAttachmentCreate, + ReadWithoutTimeout: resourceTrafficSourceAttachmentRead, + DeleteWithoutTimeout: resourceTrafficSourceAttachmentDelete, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Delete: schema.DefaultTimeout(30 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "autoscaling_group_name": { + Type: schema.TypeString, + ForceNew: true, + Required: 
true, + }, + "traffic_source": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + }, + }, + }, + }, + } +} + +func resourceTrafficSourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + + asgName := d.Get("autoscaling_group_name").(string) + trafficSource := expandTrafficSourceIdentifier(d.Get("traffic_source").([]interface{})[0].(map[string]interface{})) + trafficSourceID := aws.StringValue(trafficSource.Identifier) + trafficSourceType := aws.StringValue(trafficSource.Type) + id := trafficSourceAttachmentCreateResourceID(asgName, trafficSourceType, trafficSourceID) + input := &autoscaling.AttachTrafficSourcesInput{ + AutoScalingGroupName: aws.String(asgName), + TrafficSources: []*autoscaling.TrafficSourceIdentifier{trafficSource}, + } + + _, err := conn.AttachTrafficSourcesWithContext(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "creating Auto Scaling Traffic Source Attachment (%s): %s", id, err) + } + + d.SetId(id) + + if _, err := waitTrafficSourceAttachmentCreated(ctx, conn, asgName, trafficSourceType, trafficSourceID, d.Timeout(schema.TimeoutCreate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Traffic Source Attachment (%s) create: %s", id, err) + } + + return resourceTrafficSourceAttachmentRead(ctx, d, meta) +} + +func resourceTrafficSourceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := 
meta.(*conns.AWSClient).AutoScalingConn(ctx) + + asgName, trafficSourceType, trafficSourceID, err := TrafficSourceAttachmentParseResourceID(d.Id()) + + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + _, err = FindTrafficSourceAttachmentByThreePartKey(ctx, conn, asgName, trafficSourceType, trafficSourceID) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Auto Scaling Traffic Source Attachment (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading Auto Scaling Traffic Source Attachment (%s): %s", d.Id(), err) + } + + return diags +} + +func resourceTrafficSourceAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).AutoScalingConn(ctx) + + asgName, trafficSourceType, trafficSourceID, err := TrafficSourceAttachmentParseResourceID(d.Id()) + + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + log.Printf("[INFO] Deleting Auto Scaling Traffic Source Attachment: %s", d.Id()) + _, err = conn.DetachTrafficSourcesWithContext(ctx, &autoscaling.DetachTrafficSourcesInput{ + AutoScalingGroupName: aws.String(asgName), + TrafficSources: []*autoscaling.TrafficSourceIdentifier{expandTrafficSourceIdentifier(d.Get("traffic_source").([]interface{})[0].(map[string]interface{}))}, + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Auto Scaling Traffic Source Attachment (%s): %s", d.Id(), err) + } + + if _, err := waitTrafficSourceAttachmentDeleted(ctx, conn, asgName, trafficSourceType, trafficSourceID, d.Timeout(schema.TimeoutDelete)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Auto Scaling Traffic Source Attachment (%s) delete: %s", d.Id(), err) + } + + return nil +} + +const trafficSourceAttachmentIDSeparator = "," + +func trafficSourceAttachmentCreateResourceID(asgName, trafficSourceType, 
trafficSourceID string) string { + parts := []string{asgName, trafficSourceType, trafficSourceID} + id := strings.Join(parts, trafficSourceAttachmentIDSeparator) + + return id +} + +func TrafficSourceAttachmentParseResourceID(id string) (string, string, string, error) { + parts := strings.Split(id, trafficSourceAttachmentIDSeparator) + + if len(parts) == 3 && parts[0] != "" && parts[1] != "" && parts[2] != "" { + return parts[0], parts[1], parts[2], nil + } + + return "", "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected asg-name%[2]straffic-source-type%[2]straffic-source-id", id, trafficSourceAttachmentIDSeparator) +} + +func FindTrafficSourceAttachmentByThreePartKey(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType, trafficSourceID string) (*autoscaling.TrafficSourceState, error) { + input := &autoscaling.DescribeTrafficSourcesInput{ + AutoScalingGroupName: aws.String(asgName), + TrafficSourceType: aws.String(trafficSourceType), + } + + output, err := findTrafficSourceStates(ctx, conn, input) + + if err != nil { + return nil, err + } + + output = slices.Filter(output, func(v *autoscaling.TrafficSourceState) bool { + return aws.StringValue(v.Identifier) == trafficSourceID + }) + + return tfresource.AssertSinglePtrResult(output) +} + +func statusTrafficSourceAttachment(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType, trafficSourceID string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindTrafficSourceAttachmentByThreePartKey(ctx, conn, asgName, trafficSourceType, trafficSourceID) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.State), nil + } +} + +func waitTrafficSourceAttachmentCreated(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType, trafficSourceID string, timeout time.Duration) (*autoscaling.TrafficSourceState, 
error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{TrafficSourceStateAdding}, + Target: []string{TrafficSourceStateAdded, TrafficSourceStateInService}, + Refresh: statusTrafficSourceAttachment(ctx, conn, asgName, trafficSourceType, trafficSourceID), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*autoscaling.TrafficSourceState); ok { + return output, err + } + + return nil, err +} + +func waitTrafficSourceAttachmentDeleted(ctx context.Context, conn *autoscaling.AutoScaling, asgName, trafficSourceType, trafficSourceID string, timeout time.Duration) (*autoscaling.TrafficSourceState, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{TrafficSourceStateRemoving, TrafficSourceStateRemoved}, + Target: []string{}, + Refresh: statusTrafficSourceAttachment(ctx, conn, asgName, trafficSourceType, trafficSourceID), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*autoscaling.TrafficSourceState); ok { + return output, err + } + + return nil, err +} diff --git a/internal/service/autoscaling/traffic_source_attachment_test.go b/internal/service/autoscaling/traffic_source_attachment_test.go new file mode 100644 index 00000000000..c6c4974fc0e --- /dev/null +++ b/internal/service/autoscaling/traffic_source_attachment_test.go @@ -0,0 +1,427 @@ +// Copyright (c) HashiCorp, Inc. 
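The new resource encodes its state ID as the Auto Scaling group name, traffic source type, and traffic source identifier joined by a comma. As a rough standalone sketch of that round trip (mirroring `trafficSourceAttachmentCreateResourceID` and `TrafficSourceAttachmentParseResourceID` above, with made-up sample values — not a copy of the provider's internal package):

```go
package main

import (
	"fmt"
	"strings"
)

const separator = ","

// createID mirrors trafficSourceAttachmentCreateResourceID: join the Auto
// Scaling group name, traffic source type, and traffic source identifier.
func createID(asgName, sourceType, sourceID string) string {
	return strings.Join([]string{asgName, sourceType, sourceID}, separator)
}

// parseID mirrors TrafficSourceAttachmentParseResourceID: split the ID and
// require exactly three non-empty parts.
func parseID(id string) (asgName, sourceType, sourceID string, err error) {
	parts := strings.Split(id, separator)
	if len(parts) == 3 && parts[0] != "" && parts[1] != "" && parts[2] != "" {
		return parts[0], parts[1], parts[2], nil
	}
	return "", "", "", fmt.Errorf("unexpected format for ID (%s), expected asg-name,traffic-source-type,traffic-source-id", id)
}

func main() {
	id := createID("my-asg", "elb", "my-load-balancer")
	fmt.Println(id) // my-asg,elb,my-load-balancer

	asg, typ, src, err := parseID(id)
	fmt.Println(asg, typ, src, err) // my-asg elb my-load-balancer <nil>

	_, _, _, err = parseID("not-a-valid-id")
	fmt.Println(err != nil) // true
}
```

Note the scheme relies on the separator never appearing inside the joined parts; commas do not occur in ARNs, which is presumably why `,` was chosen here over the `/` used by some other multi-part IDs.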
+// SPDX-License-Identifier: MPL-2.0 + +package autoscaling_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/autoscaling" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfautoscaling "github.com/hashicorp/terraform-provider-aws/internal/service/autoscaling" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccAutoScalingTrafficSourceAttachment_elb(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_traffic_source_attachment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTrafficSourceAttachmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccTrafficSourceAttachmentConfig_elb(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckTrafficSourceAttachmentExists(ctx, resourceName), + ), + }, + }, + }) +} + +func TestAccAutoScalingTrafficSourceAttachment_albTargetGroup(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_traffic_source_attachment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTrafficSourceAttachmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
testAccTrafficSourceAttachmentConfig_targetGroup(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckTrafficSourceAttachmentExists(ctx, resourceName), + ), + }, + }, + }) +} + +func TestAccAutoScalingTrafficSourceAttachment_vpcLatticeTargetGroup(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_autoscaling_traffic_source_attachment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTrafficSourceAttachmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccTrafficSourceAttachmentConfig_vpcLatticeTargetGrpoup(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckTrafficSourceAttachmentExists(ctx, resourceName), + ), + }, + }, + }) +} + +func TestAccAutoScalingTrafficSourceAttachment_multipleELBs(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource1Name := "aws_autoscaling_traffic_source_attachment.test.0" + resource5Name := "aws_autoscaling_traffic_source_attachment.test.4" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTrafficSourceAttachmentDestroy(ctx), + Steps: []resource.TestStep{ + // Create all the ELBs first. 
+ { + Config: testAccTrafficSourceAttachmentConfig_elbBase(rName, 5), + }, + { + Config: testAccTrafficSourceAttachmentConfig_multipleELBs(rName, 5), + Check: resource.ComposeTestCheckFunc( + testAccCheckTrafficSourceAttachmentExists(ctx, resource1Name), + testAccCheckTrafficSourceAttachmentExists(ctx, resource5Name), + ), + }, + { + Config: testAccTrafficSourceAttachmentConfig_elbBase(rName, 5), + }, + }, + }) +} + +func TestAccAutoScalingTrafficSourceAttachment_multipleVPCLatticeTargetGroups(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource1Name := "aws_autoscaling_traffic_source_attachment.test.0" + resource4Name := "aws_autoscaling_traffic_source_attachment.test.4" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTrafficSourceAttachmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccTrafficSourceAttachmentConfig_vpcLatticeBase(rName, 5), + }, + { + Config: testAccTrafficSourceAttachmentConfig_multipleVPCLatticeTargetGroups(rName, 5), + Check: resource.ComposeTestCheckFunc( + testAccCheckTrafficSourceAttachmentExists(ctx, resource1Name), + testAccCheckTrafficSourceAttachmentExists(ctx, resource4Name), + ), + }, + { + Config: testAccTrafficSourceAttachmentConfig_vpcLatticeBase(rName, 5), + }, + }, + }) +} + +func TestAccAutoScalingTrafficSourceAttachment_multipleALBTargetGroups(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource1Name := "aws_autoscaling_traffic_source_attachment.test.0" + resource5Name := "aws_autoscaling_traffic_source_attachment.test.4" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, autoscaling.EndpointsID), + 
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTrafficSourceAttachmentDestroy(ctx), + Steps: []resource.TestStep{ + // Create all the target groups first. + { + Config: testAccTrafficSourceAttachmentConfig_targetGroupBase(rName, 5), + }, + { + Config: testAccTrafficSourceAttachmentConfig_multipleTargetGroups(rName, 5), + Check: resource.ComposeTestCheckFunc( + testAccCheckTrafficSourceAttachmentExists(ctx, resource1Name), + testAccCheckTrafficSourceAttachmentExists(ctx, resource5Name), + ), + }, + { + Config: testAccTrafficSourceAttachmentConfig_targetGroupBase(rName, 5), + }, + }, + }) +} + +func testAccCheckTrafficSourceAttachmentExists(ctx context.Context, n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + asgName, trafficSourceType, trafficSourceID, err := tfautoscaling.TrafficSourceAttachmentParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) + + _, err = tfautoscaling.FindTrafficSourceAttachmentByThreePartKey(ctx, conn, asgName, trafficSourceType, trafficSourceID) + + return err + } +} + +func testAccCheckTrafficSourceAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_autoscaling_traffic_source_attachment" { + continue + } + + asgName, trafficSourceType, trafficSourceID, err := tfautoscaling.TrafficSourceAttachmentParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + _, err = tfautoscaling.FindTrafficSourceAttachmentByThreePartKey(ctx, conn, asgName, trafficSourceType, trafficSourceID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return 
fmt.Errorf("Auto Scaling Group Traffic Source Attachment %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccTrafficSourceAttachmentConfig_elbBase(rName string, elbCount int) string { + return acctest.ConfigCompose(testAccGroupConfig_launchConfigurationBase(rName, "t2.micro"), fmt.Sprintf(` +resource "aws_elb" "test" { + count = %[2]d + + # "name" cannot be longer than 32 characters. + name = format("%%s-%%d", substr(%[1]q, 0, 28), count.index) + availability_zones = data.aws_availability_zones.available.names + + listener { + instance_port = 8000 + instance_protocol = "http" + lb_port = 80 + lb_protocol = "http" + } +} + +resource "aws_autoscaling_group" "test" { + availability_zones = data.aws_availability_zones.available.names + max_size = 1 + min_size = 0 + desired_capacity = 0 + health_check_grace_period = 300 + force_delete = true + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName, elbCount)) +} + +func testAccTrafficSourceAttachmentConfig_targetGroupBase(rName string, targetGroupCount int) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + acctest.ConfigVPCWithSubnets(rName, 1), + fmt.Sprintf(` +resource "aws_lb_target_group" "test" { + count = %[2]d + + # "name" cannot be longer than 32 characters. 
+ name = format("%%s-%%d", substr(%[1]q, 0, 28), count.index) + port = 80 + protocol = "HTTP" + vpc_id = aws_vpc.test.id +} + +resource "aws_launch_configuration" "test" { + name = %[1]q + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 1 + min_size = 0 + desired_capacity = 0 + health_check_grace_period = 300 + force_delete = true + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName, targetGroupCount)) +} + +func testAccTrafficSourceAttachmentConfig_vpcLatticeBase(rName string, targetGroupCount int) string { + return acctest.ConfigCompose( + acctest.ConfigVPCWithSubnets(rName, 1), + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + fmt.Sprintf(` +resource "aws_vpclattice_target_group" "test" { + count = %[2]d + + name = "%[1]s-${count.index}" + type = "INSTANCE" + + config { + port = 80 + protocol = "HTTP" + vpc_identifier = aws_vpc.test.id + } +} + +resource "aws_launch_configuration" "test" { + name = %[1]q + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t2.micro" +} + +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = aws_subnet.test[*].id + max_size = 1 + min_size = 0 + desired_capacity = 0 + health_check_grace_period = 300 + force_delete = true + name = %[1]q + launch_configuration = aws_launch_configuration.test.name + + tag { + key = "Name" + value = %[1]q + propagate_at_launch = true + } +} +`, rName, targetGroupCount)) +} + +func testAccTrafficSourceAttachmentConfig_elb(rName string) string { + return acctest.ConfigCompose(testAccTrafficSourceAttachmentConfig_elbBase(rName, 1), ` +resource "aws_autoscaling_traffic_source_attachment" "test" { + autoscaling_group_name = aws_autoscaling_group.test.id + + traffic_source { + identifier = aws_elb.test[0].id + type = "elb" + } +} 
+`) +} + +func testAccTrafficSourceAttachmentConfig_multipleELBs(rName string, n int) string { + return acctest.ConfigCompose(testAccTrafficSourceAttachmentConfig_elbBase(rName, n), fmt.Sprintf(` +resource "aws_autoscaling_traffic_source_attachment" "test" { + count = %[1]d + + autoscaling_group_name = aws_autoscaling_group.test.id + + traffic_source { + identifier = aws_elb.test[count.index].id + type = "elb" + } +} +`, n)) +} + +func testAccTrafficSourceAttachmentConfig_targetGroup(rName string) string { + return acctest.ConfigCompose(testAccTrafficSourceAttachmentConfig_targetGroupBase(rName, 1), ` +resource "aws_autoscaling_traffic_source_attachment" "test" { + autoscaling_group_name = aws_autoscaling_group.test.id + + traffic_source { + identifier = aws_lb_target_group.test[0].arn + type = "elbv2" + } +} +`) +} + +func testAccTrafficSourceAttachmentConfig_multipleTargetGroups(rName string, n int) string { + return acctest.ConfigCompose(testAccTrafficSourceAttachmentConfig_targetGroupBase(rName, n), fmt.Sprintf(` +resource "aws_autoscaling_traffic_source_attachment" "test" { + count = %[1]d + + autoscaling_group_name = aws_autoscaling_group.test.id + + traffic_source { + identifier = aws_lb_target_group.test[0].arn + type = "elbv2" + } +} +`, n)) +} + +func testAccTrafficSourceAttachmentConfig_vpcLatticeTargetGrpoup(rName string) string { + return acctest.ConfigCompose(testAccTrafficSourceAttachmentConfig_vpcLatticeBase(rName, 1), ` +resource "aws_autoscaling_traffic_source_attachment" "test" { + autoscaling_group_name = aws_autoscaling_group.test.id + + traffic_source { + identifier = aws_vpclattice_target_group.test[0].arn + type = "vpc-lattice" + } +} +`) +} + +func testAccTrafficSourceAttachmentConfig_multipleVPCLatticeTargetGroups(rName string, n int) string { + return acctest.ConfigCompose(testAccTrafficSourceAttachmentConfig_vpcLatticeBase(rName, n), fmt.Sprintf(` +resource "aws_autoscaling_traffic_source_attachment" "test" { + count = %[1]d + + 
autoscaling_group_name = aws_autoscaling_group.test.id + + traffic_source { + identifier = aws_vpclattice_target_group.test[count.index].arn + type = "vpc-lattice" + } +} +`, n)) +} diff --git a/internal/service/autoscalingplans/find.go b/internal/service/autoscalingplans/find.go index a90d23e7435..348a761376b 100644 --- a/internal/service/autoscalingplans/find.go +++ b/internal/service/autoscalingplans/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscalingplans import ( diff --git a/internal/service/autoscalingplans/generate.go b/internal/service/autoscalingplans/generate.go index ef4c3ad43c9..2391e0a87d5 100644 --- a/internal/service/autoscalingplans/generate.go +++ b/internal/service/autoscalingplans/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeScalingPlans +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package autoscalingplans diff --git a/internal/service/autoscalingplans/id.go b/internal/service/autoscalingplans/id.go index e5ec990c364..fbf9eb9b308 100644 --- a/internal/service/autoscalingplans/id.go +++ b/internal/service/autoscalingplans/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscalingplans import ( diff --git a/internal/service/autoscalingplans/scaling_plan.go b/internal/service/autoscalingplans/scaling_plan.go index 2c151e1e5cb..78b691c2b64 100644 --- a/internal/service/autoscalingplans/scaling_plan.go +++ b/internal/service/autoscalingplans/scaling_plan.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package autoscalingplans import ( @@ -323,7 +326,7 @@ func ResourceScalingPlan() *schema.Resource { func resourceScalingPlanCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingPlansConn() + conn := meta.(*conns.AWSClient).AutoScalingPlansConn(ctx) scalingPlanName := d.Get("name").(string) input := &autoscalingplans.CreateScalingPlanInput{ @@ -354,7 +357,7 @@ func resourceScalingPlanCreate(ctx context.Context, d *schema.ResourceData, meta func resourceScalingPlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingPlansConn() + conn := meta.(*conns.AWSClient).AutoScalingPlansConn(ctx) scalingPlanName, scalingPlanVersion, err := scalingPlanParseResourceID(d.Id()) @@ -390,7 +393,7 @@ func resourceScalingPlanRead(ctx context.Context, d *schema.ResourceData, meta i func resourceScalingPlanUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingPlansConn() + conn := meta.(*conns.AWSClient).AutoScalingPlansConn(ctx) scalingPlanName, scalingPlanVersion, err := scalingPlanParseResourceID(d.Id()) @@ -423,7 +426,7 @@ func resourceScalingPlanUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceScalingPlanDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).AutoScalingPlansConn() + conn := meta.(*conns.AWSClient).AutoScalingPlansConn(ctx) scalingPlanName, scalingPlanVersion, err := scalingPlanParseResourceID(d.Id()) diff --git a/internal/service/autoscalingplans/scaling_plan_test.go b/internal/service/autoscalingplans/scaling_plan_test.go index 6ff314a9586..48a1b62025f 100644 --- 
a/internal/service/autoscalingplans/scaling_plan_test.go +++ b/internal/service/autoscalingplans/scaling_plan_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscalingplans_test import ( @@ -364,7 +367,7 @@ func TestAccAutoScalingPlansScalingPlan_DynamicScaling_customizedScalingMetricSp func testAccCheckScalingPlanDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingPlansConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingPlansConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_autoscalingplans_scaling_plan" { @@ -392,7 +395,7 @@ func testAccCheckScalingPlanDestroy(ctx context.Context) resource.TestCheckFunc func testAccCheckScalingPlanExists(ctx context.Context, name string, v *autoscalingplans.ScalingPlan) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingPlansConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).AutoScalingPlansConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { diff --git a/internal/service/autoscalingplans/service_package_gen.go b/internal/service/autoscalingplans/service_package_gen.go index 65ee51ce3b3..c428380f24a 100644 --- a/internal/service/autoscalingplans/service_package_gen.go +++ b/internal/service/autoscalingplans/service_package_gen.go @@ -5,6 +5,10 @@ package autoscalingplans import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + autoscalingplans_sdkv1 "github.com/aws/aws-sdk-go/service/autoscalingplans" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -36,4 +40,13 @@ func (p *servicePackage) ServicePackageName() string { return 
names.AutoScalingPlans } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*autoscalingplans_sdkv1.AutoScalingPlans, error) { + sess := config["session"].(*session_sdkv1.Session) + + return autoscalingplans_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/autoscalingplans/status.go b/internal/service/autoscalingplans/status.go index 30dd408affc..709dd744327 100644 --- a/internal/service/autoscalingplans/status.go +++ b/internal/service/autoscalingplans/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscalingplans import ( diff --git a/internal/service/autoscalingplans/sweep.go b/internal/service/autoscalingplans/sweep.go index 42da9b2a448..4319585d157 100644 --- a/internal/service/autoscalingplans/sweep.go +++ b/internal/service/autoscalingplans/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/autoscalingplans" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepScalingPlans(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).AutoScalingPlansConn() + conn := client.AutoScalingPlansConn(ctx) input := &autoscalingplans.DescribeScalingPlansInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -61,7 +63,7 @@ func sweepScalingPlans(region string) error { return fmt.Errorf("error listing Auto Scaling Scaling Plans (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Auto Scaling Scaling Plans (%s): %w", region, err) diff --git a/internal/service/autoscalingplans/wait.go b/internal/service/autoscalingplans/wait.go index 1331ec643e7..8f10d0ff810 100644 --- a/internal/service/autoscalingplans/wait.go +++ b/internal/service/autoscalingplans/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package autoscalingplans import ( diff --git a/internal/service/backup/consts.go b/internal/service/backup/consts.go index dc43465190f..ea4860cd65f 100644 --- a/internal/service/backup/consts.go +++ b/internal/service/backup/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup const ( diff --git a/internal/service/backup/errors.go b/internal/service/backup/errors.go index 43f019afb26..b8dff1cad97 100644 --- a/internal/service/backup/errors.go +++ b/internal/service/backup/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup const ( diff --git a/internal/service/backup/find.go b/internal/service/backup/find.go index 16639c7b933..81a2aa0304a 100644 --- a/internal/service/backup/find.go +++ b/internal/service/backup/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( diff --git a/internal/service/backup/framework.go b/internal/service/backup/framework.go index 7ec591e152e..dd7c4456a44 100644 --- a/internal/service/backup/framework.go +++ b/internal/service/backup/framework.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -136,14 +139,14 @@ func ResourceFramework() *schema.Resource { func resourceFrameworkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) name := d.Get("name").(string) input := &backup.CreateFrameworkInput{ IdempotencyToken: aws.String(id.UniqueId()), FrameworkControls: expandFrameworkControls(ctx, d.Get("control").(*schema.Set).List()), FrameworkName: aws.String(name), - FrameworkTags: GetTagsIn(ctx), + FrameworkTags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -168,7 +171,7 @@ func resourceFrameworkCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceFrameworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) resp, err := 
conn.DescribeFrameworkWithContext(ctx, &backup.DescribeFrameworkInput{ FrameworkName: aws.String(d.Id()), @@ -202,7 +205,7 @@ func resourceFrameworkRead(ctx context.Context, d *schema.ResourceData, meta int func resourceFrameworkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) if d.HasChanges("description", "control") { input := &backup.UpdateFrameworkInput{ @@ -232,7 +235,7 @@ func resourceFrameworkUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceFrameworkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.DeleteFrameworkInput{ FrameworkName: aws.String(d.Id()), diff --git a/internal/service/backup/framework_data_source.go b/internal/service/backup/framework_data_source.go index bb6e3ac06a9..846e580add7 100644 --- a/internal/service/backup/framework_data_source.go +++ b/internal/service/backup/framework_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -101,7 +104,7 @@ func DataSourceFramework() *schema.Resource { func dataSourceFrameworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -110,7 +113,7 @@ func dataSourceFrameworkRead(ctx context.Context, d *schema.ResourceData, meta i FrameworkName: aws.String(name), }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error getting Backup Framework: %s", err) + return sdkdiag.AppendErrorf(diags, "getting Backup Framework: %s", err) } d.SetId(aws.StringValue(resp.FrameworkName)) @@ -129,7 +132,7 @@ func dataSourceFrameworkRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "setting control: %s", err) } - tags, err := ListTags(ctx, conn, aws.StringValue(resp.FrameworkArn)) + tags, err := listTags(ctx, conn, aws.StringValue(resp.FrameworkArn)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Backup Framework (%s): %s", d.Id(), err) diff --git a/internal/service/backup/framework_data_source_test.go b/internal/service/backup/framework_data_source_test.go index 7241967788b..fd9e9d495de 100644 --- a/internal/service/backup/framework_data_source_test.go +++ b/internal/service/backup/framework_data_source_test.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/backup" @@ -22,10 +24,6 @@ func testAccFrameworkDataSource_basic(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, backup.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ - { - Config: testAccFrameworkDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`Error getting Backup Framework`), - }, { Config: testAccFrameworkDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( @@ -101,12 +99,6 @@ func testAccFrameworkDataSource_controlScopeTag(t *testing.T) { }) } -const testAccFrameworkDataSourceConfig_nonExistent = ` -data "aws_backup_framework" "test" { - name = "tf_acc_test_does_not_exist" -} -` - func testAccFrameworkDataSourceConfig_basic(rName string) string { return fmt.Sprintf(` data "aws_availability_zones" "available" { diff --git a/internal/service/backup/framework_test.go b/internal/service/backup/framework_test.go index 6a636a4859a..e13e82334e0 100644 --- a/internal/service/backup/framework_test.go +++ b/internal/service/backup/framework_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -488,7 +491,7 @@ func testAccFramework_disappears(t *testing.T) { } func testAccFrameworkPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) _, err := conn.ListFrameworksWithContext(ctx, &backup.ListFrameworksInput{}) @@ -503,7 +506,7 @@ func testAccFrameworkPreCheck(ctx context.Context, t *testing.T) { func testAccCheckFrameworkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_framework" { continue @@ -534,7 +537,7 @@ func testAccCheckFrameworkExists(ctx context.Context, name string, framework *ba return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) input := &backup.DescribeFrameworkInput{ FrameworkName: aws.String(rs.Primary.ID), } diff --git a/internal/service/backup/generate.go b/internal/service/backup/generate.go index 052e5dbc946..27f6f21aefd 100644 --- a/internal/service/backup/generate.go +++ b/internal/service/backup/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ServiceTagsMap -UntagInTagsElem=TagKeyList -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package backup diff --git a/internal/service/backup/global_settings.go b/internal/service/backup/global_settings.go index 1226983dc0a..4b62e526d86 100644 --- a/internal/service/backup/global_settings.go +++ b/internal/service/backup/global_settings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -35,7 +38,7 @@ func ResourceGlobalSettings() *schema.Resource { func resourceGlobalSettingsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.UpdateGlobalSettingsInput{ GlobalSettings: flex.ExpandStringMap(d.Get("global_settings").(map[string]interface{})), @@ -53,7 +56,7 @@ func resourceGlobalSettingsUpdate(ctx context.Context, d *schema.ResourceData, m func resourceGlobalSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) resp, err := conn.DescribeGlobalSettingsWithContext(ctx, &backup.DescribeGlobalSettingsInput{}) if err != nil { diff --git a/internal/service/backup/global_settings_test.go b/internal/service/backup/global_settings_test.go index 3d543d7f294..0ed512498c3 100644 --- a/internal/service/backup/global_settings_test.go +++ b/internal/service/backup/global_settings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -62,7 +65,7 @@ func TestAccBackupGlobalSettings_basic(t *testing.T) { func testAccCheckGlobalSettingsExists(ctx context.Context, settings *backup.DescribeGlobalSettingsOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) resp, err := conn.DescribeGlobalSettingsWithContext(ctx, &backup.DescribeGlobalSettingsInput{}) if err != nil { return err diff --git a/internal/service/backup/plan.go b/internal/service/backup/plan.go index f4c4e285ca4..6f8b872de96 100644 --- a/internal/service/backup/plan.go +++ b/internal/service/backup/plan.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -173,7 +176,7 @@ func ResourcePlan() *schema.Resource { func resourcePlanCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.CreateBackupPlanInput{ BackupPlan: &backup.PlanInput{ @@ -181,7 +184,7 @@ func resourcePlanCreate(ctx context.Context, d *schema.ResourceData, meta interf Rules: expandPlanRules(ctx, d.Get("rule").(*schema.Set)), AdvancedBackupSettings: expandPlanAdvancedSettings(d.Get("advanced_backup_setting").(*schema.Set)), }, - BackupPlanTags: GetTagsIn(ctx), + BackupPlanTags: getTagsIn(ctx), } resp, err := conn.CreateBackupPlanWithContext(ctx, input) @@ -196,7 +199,7 @@ func resourcePlanCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) resp, err := 
conn.GetBackupPlanWithContext(ctx, &backup.GetBackupPlanInput{ BackupPlanId: aws.String(d.Id()), @@ -229,7 +232,7 @@ func resourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourcePlanUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) if d.HasChanges("rule", "advanced_backup_setting") { input := &backup.UpdateBackupPlanInput{ @@ -253,7 +256,7 @@ func resourcePlanUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourcePlanDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.DeleteBackupPlanInput{ BackupPlanId: aws.String(d.Id()), diff --git a/internal/service/backup/plan_data_source.go b/internal/service/backup/plan_data_source.go index 9adff4ec8ee..27a73b45d1f 100644 --- a/internal/service/backup/plan_data_source.go +++ b/internal/service/backup/plan_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -41,7 +44,7 @@ func DataSourcePlan() *schema.Resource { func dataSourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig id := d.Get("plan_id").(string) @@ -50,7 +53,7 @@ func dataSourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interf BackupPlanId: aws.String(id), }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error getting Backup Plan: %s", err) + return sdkdiag.AppendErrorf(diags, "getting Backup Plan: %s", err) } d.SetId(aws.StringValue(resp.BackupPlanId)) @@ -58,7 +61,7 @@ func dataSourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("name", resp.BackupPlan.BackupPlanName) d.Set("version", resp.VersionId) - tags, err := ListTags(ctx, conn, aws.StringValue(resp.BackupPlanArn)) + tags, err := listTags(ctx, conn, aws.StringValue(resp.BackupPlanArn)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Backup Plan (%s): %s", id, err) } diff --git a/internal/service/backup/plan_data_source_test.go b/internal/service/backup/plan_data_source_test.go index 87d84a78208..f68ae0f05d8 100644 --- a/internal/service/backup/plan_data_source_test.go +++ b/internal/service/backup/plan_data_source_test.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/backup" @@ -22,10 +24,6 @@ func TestAccBackupPlanDataSource_basic(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, backup.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ - { - Config: testAccPlanDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`Error getting Backup Plan`), - }, { Config: testAccPlanDataSourceConfig_basic(rInt), Check: resource.ComposeTestCheckFunc( @@ -39,12 +37,6 @@ func TestAccBackupPlanDataSource_basic(t *testing.T) { }) } -const testAccPlanDataSourceConfig_nonExistent = ` -data "aws_backup_plan" "test" { - plan_id = "tf-acc-test-does-not-exist" -} -` - func testAccPlanDataSourceConfig_basic(rInt int) string { return fmt.Sprintf(` resource "aws_backup_vault" "test" { diff --git a/internal/service/backup/plan_test.go b/internal/service/backup/plan_test.go index 5f13d3cd83d..fde0ca38531 100644 --- a/internal/service/backup/plan_test.go +++ b/internal/service/backup/plan_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -658,7 +661,7 @@ func TestAccBackupPlan_disappears(t *testing.T) { func testAccCheckPlanDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_plan" { continue @@ -683,7 +686,7 @@ func testAccCheckPlanDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckPlanExists(ctx context.Context, name string, plan *backup.GetBackupPlanOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { diff --git a/internal/service/backup/region_settings.go b/internal/service/backup/region_settings.go index 98b37169234..2fd8549896c 100644 --- a/internal/service/backup/region_settings.go +++ b/internal/service/backup/region_settings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -41,7 +44,7 @@ func ResourceRegionSettings() *schema.Resource { func resourceRegionSettingsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.UpdateRegionSettingsInput{} @@ -66,7 +69,7 @@ func resourceRegionSettingsUpdate(ctx context.Context, d *schema.ResourceData, m func resourceRegionSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) output, err := conn.DescribeRegionSettingsWithContext(ctx, &backup.DescribeRegionSettingsInput{}) diff --git a/internal/service/backup/region_settings_test.go b/internal/service/backup/region_settings_test.go index 00aa30529fc..fec2eb3b5cf 100644 --- a/internal/service/backup/region_settings_test.go +++ b/internal/service/backup/region_settings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -104,7 +107,7 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { func testAccCheckRegionSettingsExists(ctx context.Context, v *backup.DescribeRegionSettingsOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) output, err := conn.DescribeRegionSettingsWithContext(ctx, &backup.DescribeRegionSettingsInput{}) diff --git a/internal/service/backup/report_plan.go b/internal/service/backup/report_plan.go index eb03c12d6a3..7d38d7298f1 100644 --- a/internal/service/backup/report_plan.go +++ b/internal/service/backup/report_plan.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -142,14 +145,14 @@ func ResourceReportPlan() *schema.Resource { func resourceReportPlanCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) name := d.Get("name").(string) input := &backup.CreateReportPlanInput{ IdempotencyToken: aws.String(id.UniqueId()), ReportDeliveryChannel: expandReportDeliveryChannel(d.Get("report_delivery_channel").([]interface{})), ReportPlanName: aws.String(name), - ReportPlanTags: GetTagsIn(ctx), + ReportPlanTags: getTagsIn(ctx), ReportSetting: expandReportSetting(d.Get("report_setting").([]interface{})), } @@ -175,7 +178,7 @@ func resourceReportPlanCreate(ctx context.Context, d *schema.ResourceData, meta func resourceReportPlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) reportPlan, err := FindReportPlanByName(ctx, conn, d.Id()) @@ -208,7 +211,7 @@ func resourceReportPlanRead(ctx context.Context, d *schema.ResourceData, meta in func resourceReportPlanUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &backup.UpdateReportPlanInput{ @@ -236,7 +239,7 @@ func resourceReportPlanUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceReportPlanDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) log.Printf("[DEBUG] Deleting Backup Report Plan: %s", d.Id()) _, err := 
conn.DeleteReportPlanWithContext(ctx, &backup.DeleteReportPlanInput{ diff --git a/internal/service/backup/report_plan_data_source.go b/internal/service/backup/report_plan_data_source.go index 2f76c672c73..82c084dd179 100644 --- a/internal/service/backup/report_plan_data_source.go +++ b/internal/service/backup/report_plan_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -112,7 +115,7 @@ func DataSourceReportPlan() *schema.Resource { func dataSourceReportPlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -138,7 +141,7 @@ func dataSourceReportPlanRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "setting report_setting: %s", err) } - tags, err := ListTags(ctx, conn, aws.StringValue(reportPlan.ReportPlanArn)) + tags, err := listTags(ctx, conn, aws.StringValue(reportPlan.ReportPlanArn)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Backup Report Plan (%s): %s", d.Id(), err) diff --git a/internal/service/backup/report_plan_data_source_test.go b/internal/service/backup/report_plan_data_source_test.go index bb6d6165881..a6be0fefb28 100644 --- a/internal/service/backup/report_plan_data_source_test.go +++ b/internal/service/backup/report_plan_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( diff --git a/internal/service/backup/report_plan_test.go b/internal/service/backup/report_plan_test.go index 1049545754c..9df823f4033 100644 --- a/internal/service/backup/report_plan_test.go +++ b/internal/service/backup/report_plan_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -312,7 +315,7 @@ func TestAccBackupReportPlan_disappears(t *testing.T) { } func testAccReportPlanPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) _, err := conn.ListReportPlansWithContext(ctx, &backup.ListReportPlansInput{}) @@ -327,7 +330,7 @@ func testAccReportPlanPreCheck(ctx context.Context, t *testing.T) { func testAccCheckReportPlanDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_report_plan" { @@ -362,7 +365,7 @@ func testAccCheckReportPlanExists(ctx context.Context, n string, v *backup.Repor return fmt.Errorf("No Backup Report Plan ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) output, err := tfbackup.FindReportPlanByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/backup/selection.go b/internal/service/backup/selection.go index 7a7a89523d0..3cf2a914849 100644 --- a/internal/service/backup/selection.go +++ b/internal/service/backup/selection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -186,7 +189,7 @@ func ResourceSelection() *schema.Resource { func resourceSelectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) selection := &backup.Selection{ Conditions: expandConditions(d.Get("condition").(*schema.Set).List()), @@ -244,7 +247,7 @@ func resourceSelectionCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceSelectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.GetBackupSelectionInput{ BackupPlanId: aws.String(d.Get("plan_id").(string)), @@ -342,7 +345,7 @@ func resourceSelectionRead(ctx context.Context, d *schema.ResourceData, meta int func resourceSelectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.DeleteBackupSelectionInput{ BackupPlanId: aws.String(d.Get("plan_id").(string)), diff --git a/internal/service/backup/selection_data_source.go b/internal/service/backup/selection_data_source.go index 67c4ed0a52c..2e9511251b6 100644 --- a/internal/service/backup/selection_data_source.go +++ b/internal/service/backup/selection_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -44,7 +47,7 @@ func DataSourceSelection() *schema.Resource { func dataSourceSelectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.GetBackupSelectionInput{ BackupPlanId: aws.String(d.Get("plan_id").(string)), @@ -53,7 +56,7 @@ func dataSourceSelectionRead(ctx context.Context, d *schema.ResourceData, meta i resp, err := conn.GetBackupSelectionWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error getting Backup Selection: %s", err) + return sdkdiag.AppendErrorf(diags, "getting Backup Selection: %s", err) } d.SetId(aws.StringValue(resp.SelectionId)) diff --git a/internal/service/backup/selection_data_source_test.go b/internal/service/backup/selection_data_source_test.go index 09f77df5823..67fadec48e6 100644 --- a/internal/service/backup/selection_data_source_test.go +++ b/internal/service/backup/selection_data_source_test.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/backup" @@ -22,10 +24,6 @@ func TestAccBackupSelectionDataSource_basic(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, backup.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ - { - Config: testAccSelectionDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`Error getting Backup Selection`), - }, { Config: testAccSelectionDataSourceConfig_basic(rInt), Check: resource.ComposeTestCheckFunc( @@ -38,13 +36,6 @@ func TestAccBackupSelectionDataSource_basic(t *testing.T) { }) } -const testAccSelectionDataSourceConfig_nonExistent = ` -data "aws_backup_selection" "test" { - plan_id = "tf-acc-test-does-not-exist" - selection_id = "tf-acc-test-dne" -} -` - func testAccSelectionDataSourceConfig_basic(rInt int) string { return fmt.Sprintf(` data "aws_caller_identity" "current" {} diff --git a/internal/service/backup/selection_test.go b/internal/service/backup/selection_test.go index 70a94e5ec40..ff09aab6106 100644 --- a/internal/service/backup/selection_test.go +++ b/internal/service/backup/selection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -250,7 +253,7 @@ func TestAccBackupSelection_updateTag(t *testing.T) { func testAccCheckSelectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_selection" { continue @@ -281,7 +284,7 @@ func testAccCheckSelectionExists(ctx context.Context, name string, selection *ba return fmt.Errorf("not found: %s, %v", name, s.RootModule().Resources) } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) input := &backup.GetBackupSelectionInput{ BackupPlanId: aws.String(rs.Primary.Attributes["plan_id"]), diff --git a/internal/service/backup/service_package_gen.go b/internal/service/backup/service_package_gen.go index 7e2713c873c..0e8ed95b223 100644 --- a/internal/service/backup/service_package_gen.go +++ b/internal/service/backup/service_package_gen.go @@ -5,6 +5,10 @@ package backup import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + backup_sdkv1 "github.com/aws/aws-sdk-go/service/backup" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -109,4 +113,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Backup } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*backup_sdkv1.Backup, error) { + sess := config["session"].(*session_sdkv1.Session) + + return backup_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/backup/status.go b/internal/service/backup/status.go index d6fa3a2dcb5..8fb4acdec96 100644 --- a/internal/service/backup/status.go +++ b/internal/service/backup/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( diff --git a/internal/service/backup/sweep.go b/internal/service/backup/sweep.go index c4660f7404e..b0cc820813e 100644 --- a/internal/service/backup/sweep.go +++ b/internal/service/backup/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/backup" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -54,11 +56,11 @@ func init() { func sweepFramework(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).BackupConn() + conn := client.BackupConn(ctx) input := &backup.ListFrameworksInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -88,7 +90,7 @@ func sweepFramework(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Backup Frameworks for %s: %w", region, err)) } - if err := 
sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Backup Frameworks for %s: %w", region, err)) } @@ -97,11 +99,11 @@ func sweepFramework(region string) error { func sweepReportPlan(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).BackupConn() + conn := client.BackupConn(ctx) input := &backup.ListReportPlansInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -131,7 +133,7 @@ func sweepReportPlan(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Backup Report Plans for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Backup Report Plans for %s: %w", region, err)) } @@ -140,13 +142,13 @@ func sweepReportPlan(region string) error { func sweepVaultLockConfiguration(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).BackupConn() + conn := client.BackupConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -176,7 +178,7 @@ func sweepVaultLockConfiguration(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Backup Vaults for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if 
err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Backup Vault Lock Configuration for %s: %w", region, err)) } @@ -190,13 +192,13 @@ func sweepVaultLockConfiguration(region string) error { func sweepVaultNotifications(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).BackupConn() + conn := client.BackupConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -226,7 +228,7 @@ func sweepVaultNotifications(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Backup Vaults for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Backup Vault Notifications for %s: %w", region, err)) } @@ -240,11 +242,11 @@ func sweepVaultNotifications(region string) error { func sweepVaultPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).BackupConn() + conn := client.BackupConn(ctx) input := &backup.ListBackupVaultsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -274,7 +276,7 @@ func sweepVaultPolicies(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Backup Vaults for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { 
sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Backup Vault Policies for %s: %w", region, err)) } @@ -283,12 +285,12 @@ func sweepVaultPolicies(region string) error { func sweepVaults(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).BackupConn() + conn := client.BackupConn(ctx) input := &backup.ListBackupVaultsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -327,7 +329,7 @@ func sweepVaults(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Backup Vaults for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Backup Vaults for %s: %w", region, err)) } diff --git a/internal/service/backup/tags_gen.go b/internal/service/backup/tags_gen.go index 09878c3eeb8..3dfe6ab3bf6 100644 --- a/internal/service/backup/tags_gen.go +++ b/internal/service/backup/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists backup service tags. +// listTags lists backup service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn backupiface.BackupAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn backupiface.BackupAPI, identifier string) (tftags.KeyValueTags, error) { input := &backup.ListTagsInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn backupiface.BackupAPI, identifier string // ListTags lists backup service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).BackupConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).BackupConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from backup service tags. +// KeyValueTags creates tftags.KeyValueTags from backup service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns backup service tags from Context. +// getTagsIn returns backup service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets backup service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets backup service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates backup service tags. +// updateTags updates backup service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn backupiface.BackupAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn backupiface.BackupAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn backupiface.BackupAPI, identifier stri // UpdateTags updates backup service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).BackupConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).BackupConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/backup/validate.go b/internal/service/backup/validate.go index 86497de70b4..3d6986dbba9 100644 --- a/internal/service/backup/validate.go +++ b/internal/service/backup/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( diff --git a/internal/service/backup/validate_test.go b/internal/service/backup/validate_test.go index 6ad37453e5b..9631e583d1e 100644 --- a/internal/service/backup/validate_test.go +++ b/internal/service/backup/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( diff --git a/internal/service/backup/vault.go b/internal/service/backup/vault.go index d287d8986cd..2f2facb414c 100644 --- a/internal/service/backup/vault.go +++ b/internal/service/backup/vault.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -79,12 +82,12 @@ func ResourceVault() *schema.Resource { func resourceVaultCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) name := d.Get("name").(string) input := &backup.CreateBackupVaultInput{ BackupVaultName: aws.String(name), - BackupVaultTags: GetTagsIn(ctx), + BackupVaultTags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_arn"); ok { @@ -104,7 +107,7 @@ func resourceVaultCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceVaultRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) output, err := FindVaultByName(ctx, conn, d.Id()) @@ -136,7 +139,7 @@ func resourceVaultUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceVaultDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) if d.Get("force_destroy").(bool) { input := &backup.ListRecoveryPointsByBackupVaultInput{ diff --git a/internal/service/backup/vault_data_source.go b/internal/service/backup/vault_data_source.go index 795938dc850..0817d1108d3 100644 --- a/internal/service/backup/vault_data_source.go +++ b/internal/service/backup/vault_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -41,7 +44,7 @@ func DataSourceVault() *schema.Resource { func dataSourceVaultRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -51,7 +54,7 @@ func dataSourceVaultRead(ctx context.Context, d *schema.ResourceData, meta inter resp, err := conn.DescribeBackupVaultWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error getting Backup Vault: %s", err) + return sdkdiag.AppendErrorf(diags, "getting Backup Vault: %s", err) } d.SetId(aws.StringValue(resp.BackupVaultName)) @@ -60,7 +63,7 @@ func dataSourceVaultRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("name", resp.BackupVaultName) d.Set("recovery_points", resp.NumberOfRecoveryPoints) - tags, err := ListTags(ctx, conn, aws.StringValue(resp.BackupVaultArn)) + tags, err := listTags(ctx, conn, aws.StringValue(resp.BackupVaultArn)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Backup Vault (%s): %s", name, err) } diff --git a/internal/service/backup/vault_data_source_test.go b/internal/service/backup/vault_data_source_test.go index 8cbc5ddbc41..11663e4f1fe 100644 --- a/internal/service/backup/vault_data_source_test.go +++ b/internal/service/backup/vault_data_source_test.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/backup" @@ -22,10 +24,6 @@ func TestAccBackupVaultDataSource_basic(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, backup.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ - { - Config: testAccVaultDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`Error getting Backup Vault`), - }, { Config: testAccVaultDataSourceConfig_basic(rInt), Check: resource.ComposeTestCheckFunc( @@ -40,12 +38,6 @@ func TestAccBackupVaultDataSource_basic(t *testing.T) { }) } -const testAccVaultDataSourceConfig_nonExistent = ` -data "aws_backup_vault" "test" { - name = "tf-acc-test-does-not-exist" -} -` - func testAccVaultDataSourceConfig_basic(rInt int) string { return fmt.Sprintf(` resource "aws_backup_vault" "test" { diff --git a/internal/service/backup/vault_lock_configuration.go b/internal/service/backup/vault_lock_configuration.go index 81e6a35de8b..8e6c4275607 100644 --- a/internal/service/backup/vault_lock_configuration.go +++ b/internal/service/backup/vault_lock_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -59,7 +62,7 @@ func ResourceVaultLockConfiguration() *schema.Resource { func resourceVaultLockConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) name := d.Get("backup_vault_name").(string) input := &backup.PutBackupVaultLockConfigurationInput{ @@ -91,7 +94,7 @@ func resourceVaultLockConfigurationCreate(ctx context.Context, d *schema.Resourc func resourceVaultLockConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) output, err := FindVaultByName(ctx, conn, d.Id()) @@ -115,7 +118,7 @@ func resourceVaultLockConfigurationRead(ctx context.Context, d *schema.ResourceD func resourceVaultLockConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) log.Printf("[DEBUG] Deleting Backup Vault Lock Configuration: %s", d.Id()) _, err := conn.DeleteBackupVaultLockConfigurationWithContext(ctx, &backup.DeleteBackupVaultLockConfigurationInput{ diff --git a/internal/service/backup/vault_lock_configuration_test.go b/internal/service/backup/vault_lock_configuration_test.go index f3a458bc3c0..2f59204988b 100644 --- a/internal/service/backup/vault_lock_configuration_test.go +++ b/internal/service/backup/vault_lock_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -73,7 +76,7 @@ func TestAccBackupVaultLockConfiguration_disappears(t *testing.T) { func testAccCheckVaultLockConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_vault_lock_configuration" { continue @@ -107,7 +110,7 @@ func testAccCheckVaultLockConfigurationExists(ctx context.Context, name string, return fmt.Errorf("No Backup Vault Lock Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) output, err := tfbackup.FindVaultByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/backup/vault_notifications.go b/internal/service/backup/vault_notifications.go index 12c809a24cf..7065cc8bd8e 100644 --- a/internal/service/backup/vault_notifications.go +++ b/internal/service/backup/vault_notifications.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -59,7 +62,7 @@ func ResourceVaultNotifications() *schema.Resource { func resourceVaultNotificationsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.PutBackupVaultNotificationsInput{ BackupVaultName: aws.String(d.Get("backup_vault_name").(string)), @@ -79,7 +82,7 @@ func resourceVaultNotificationsCreate(ctx context.Context, d *schema.ResourceDat func resourceVaultNotificationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.GetBackupVaultNotificationsInput{ BackupVaultName: aws.String(d.Id()), @@ -107,7 +110,7 @@ func resourceVaultNotificationsRead(ctx context.Context, d *schema.ResourceData, func resourceVaultNotificationsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) input := &backup.DeleteBackupVaultNotificationsInput{ BackupVaultName: aws.String(d.Id()), diff --git a/internal/service/backup/vault_notifications_test.go b/internal/service/backup/vault_notifications_test.go index 33cd6568dee..1aa60920e6b 100644 --- a/internal/service/backup/vault_notifications_test.go +++ b/internal/service/backup/vault_notifications_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -69,7 +72,7 @@ func TestAccBackupVaultNotification_disappears(t *testing.T) { func testAccCheckVaultNotificationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_vault_notifications" { continue @@ -99,7 +102,7 @@ func testAccCheckVaultNotificationExists(ctx context.Context, name string, vault return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) params := &backup.GetBackupVaultNotificationsInput{ BackupVaultName: aws.String(rs.Primary.ID), } diff --git a/internal/service/backup/vault_policy.go b/internal/service/backup/vault_policy.go index 3cef1ae4fd2..7a3240a7e04 100644 --- a/internal/service/backup/vault_policy.go +++ b/internal/service/backup/vault_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( @@ -55,7 +58,7 @@ func ResourceVaultPolicy() *schema.Resource { func resourceVaultPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -82,7 +85,7 @@ func resourceVaultPolicyPut(ctx context.Context, d *schema.ResourceData, meta in func resourceVaultPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) output, err := FindVaultAccessPolicyByName(ctx, conn, d.Id()) @@ -118,7 +121,7 @@ func resourceVaultPolicyRead(ctx context.Context, d *schema.ResourceData, meta i func resourceVaultPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BackupConn() + conn := meta.(*conns.AWSClient).BackupConn(ctx) log.Printf("[DEBUG] Deleting Backup Vault Policy (%s)", d.Id()) _, err := conn.DeleteBackupVaultAccessPolicyWithContext(ctx, &backup.DeleteBackupVaultAccessPolicyInput{ diff --git a/internal/service/backup/vault_policy_test.go b/internal/service/backup/vault_policy_test.go index 7280bb15138..4607098cd0c 100644 --- a/internal/service/backup/vault_policy_test.go +++ b/internal/service/backup/vault_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -128,7 +131,7 @@ func TestAccBackupVaultPolicy_ignoreEquivalent(t *testing.T) { func testAccCheckVaultPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_vault_policy" { @@ -163,7 +166,7 @@ func testAccCheckVaultPolicyExists(ctx context.Context, name string, vault *back return fmt.Errorf("No Backup Vault Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) output, err := tfbackup.FindVaultAccessPolicyByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/backup/vault_test.go b/internal/service/backup/vault_test.go index ccc6d7e19e0..e6d1804e5ca 100644 --- a/internal/service/backup/vault_test.go +++ b/internal/service/backup/vault_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup_test import ( @@ -219,7 +222,7 @@ func TestAccBackupVault_forceDestroyWithRecoveryPoint(t *testing.T) { func testAccCheckVaultDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_backup_vault" { continue @@ -253,7 +256,7 @@ func testAccCheckVaultExists(ctx context.Context, name string, v *backup.Describ return fmt.Errorf("No Backup Vault ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) output, err := tfbackup.FindVaultByName(ctx, conn, rs.Primary.ID) @@ -270,7 +273,7 @@ func testAccCheckVaultExists(ctx context.Context, name string, v *backup.Describ func testAccCheckRunDynamoDBTableBackupJob(ctx context.Context, rName string) resource.TestCheckFunc { // nosemgrep:ci.backup-in-func-name return func(s *terraform.State) error { client := acctest.Provider.Meta().(*conns.AWSClient) - conn := client.BackupConn() + conn := client.BackupConn(ctx) iamRoleARN := arn.ARN{ Partition: client.Partition, @@ -308,7 +311,7 @@ func testAccCheckRunDynamoDBTableBackupJob(ctx context.Context, rName string) re } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn(ctx) input := &backup.ListBackupVaultsInput{} diff --git a/internal/service/backup/wait.go b/internal/service/backup/wait.go index a66330a77e7..4a98129440d 100644 --- a/internal/service/backup/wait.go +++ b/internal/service/backup/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package backup import ( diff --git a/internal/service/batch/compute_environment.go b/internal/service/batch/compute_environment.go index c1f53316e86..953efbd473e 100644 --- a/internal/service/batch/compute_environment.go +++ b/internal/service/batch/compute_environment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package batch import ( @@ -172,6 +175,11 @@ func ResourceComputeEnvironment() *schema.Resource { Type: schema.TypeInt, Optional: true, }, + "placement_group": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, "security_group_ids": { Type: schema.TypeSet, Optional: true, @@ -267,14 +275,14 @@ func ResourceComputeEnvironment() *schema.Resource { func resourceComputeEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BatchConn() + conn := meta.(*conns.AWSClient).BatchConn(ctx) computeEnvironmentName := create.Name(d.Get("compute_environment_name").(string), d.Get("compute_environment_name_prefix").(string)) computeEnvironmentType := d.Get("type").(string) input := &batch.CreateComputeEnvironmentInput{ ComputeEnvironmentName: aws.String(computeEnvironmentName), ServiceRole: aws.String(d.Get("service_role").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(computeEnvironmentType), } @@ -307,7 +315,7 @@ func resourceComputeEnvironmentCreate(ctx context.Context, d *schema.ResourceDat func resourceComputeEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BatchConn() + conn := meta.(*conns.AWSClient).BatchConn(ctx) computeEnvironment, err := FindComputeEnvironmentDetailByName(ctx, conn, d.Id()) @@ -349,14 +357,14 @@ func resourceComputeEnvironmentRead(ctx context.Context, d *schema.ResourceData, d.Set("eks_configuration", 
nil) } - SetTagsOut(ctx, computeEnvironment.Tags) + setTagsOut(ctx, computeEnvironment.Tags) return diags } func resourceComputeEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BatchConn() + conn := meta.(*conns.AWSClient).BatchConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &batch.UpdateComputeEnvironmentInput{ @@ -411,7 +419,7 @@ func resourceComputeEnvironmentUpdate(ctx context.Context, d *schema.ResourceDat func resourceComputeEnvironmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BatchConn() + conn := meta.(*conns.AWSClient).BatchConn(ctx) log.Printf("[DEBUG] Disabling Batch Compute Environment: %s", d.Id()) { @@ -674,6 +682,10 @@ func expandComputeResource(ctx context.Context, tfMap map[string]interface{}) *b apiObject.MinvCpus = aws.Int64(0) } + if v, ok := tfMap["placement_group"].(string); ok && v != "" { + apiObject.PlacementGroup = aws.String(v) + } + if v, ok := tfMap["security_group_ids"].(*schema.Set); ok && v.Len() > 0 { apiObject.SecurityGroupIds = flex.ExpandStringSet(v) } @@ -832,6 +844,10 @@ func flattenComputeResource(ctx context.Context, apiObject *batch.ComputeResourc tfMap["min_vcpus"] = aws.Int64Value(v) } + if v := apiObject.PlacementGroup; v != nil { + tfMap["placement_group"] = aws.StringValue(v) + } + if v := apiObject.SecurityGroupIds; v != nil { tfMap["security_group_ids"] = aws.StringValueSlice(v) } diff --git a/internal/service/batch/compute_environment_data_source.go b/internal/service/batch/compute_environment_data_source.go index 07a6996dac5..7163667f07c 100644 --- a/internal/service/batch/compute_environment_data_source.go +++ b/internal/service/batch/compute_environment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package batch import ( @@ -65,7 +68,7 @@ func DataSourceComputeEnvironment() *schema.Resource { func dataSourceComputeEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).BatchConn() + conn := meta.(*conns.AWSClient).BatchConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig params := &batch.DescribeComputeEnvironmentsInput{ diff --git a/internal/service/batch/compute_environment_data_source_test.go b/internal/service/batch/compute_environment_data_source_test.go index 26b6c42f0f2..6f35d54e2d5 100644 --- a/internal/service/batch/compute_environment_data_source_test.go +++ b/internal/service/batch/compute_environment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package batch_test import ( diff --git a/internal/service/batch/compute_environment_test.go b/internal/service/batch/compute_environment_test.go index be78e2d1069..e00fed8a2f2 100644 --- a/internal/service/batch/compute_environment_test.go +++ b/internal/service/batch/compute_environment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package batch_test import ( @@ -202,6 +205,7 @@ func TestAccBatchComputeEnvironment_createEC2(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "compute_resources.0.launch_template.#", "0"), resource.TestCheckResourceAttr(resourceName, "compute_resources.0.max_vcpus", "16"), resource.TestCheckResourceAttr(resourceName, "compute_resources.0.min_vcpus", "0"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.placement_group", ""), resource.TestCheckResourceAttr(resourceName, "compute_resources.0.security_group_ids.#", "1"), resource.TestCheckTypeSetElemAttrPair(resourceName, "compute_resources.0.security_group_ids.*", securityGroupResourceName, "id"), resource.TestCheckResourceAttr(resourceName, "compute_resources.0.spot_iam_fleet_role", ""), @@ -1068,6 +1072,72 @@ func TestAccBatchComputeEnvironment_ec2Configuration(t *testing.T) { }) } +func TestAccBatchComputeEnvironment_ec2ConfigurationPlacementGroup(t *testing.T) { + ctx := acctest.Context(t) + var ce batch.ComputeEnvironmentDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_batch_compute_environment.test" + instanceProfileResourceName := "aws_iam_instance_profile.ecs_instance" + securityGroupResourceName := "aws_security_group.test" + serviceRoleResourceName := "aws_iam_role.batch_service" + spotFleetRoleResourceName := "aws_iam_role.ec2_spot_fleet" + subnetResourceName := "aws_subnet.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, batch.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckComputeEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccComputeEnvironmentConfig_ec2ConfigurationPlacementGroup(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckComputeEnvironmentExists(ctx, resourceName, 
&ce), + acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "batch", fmt.Sprintf("compute-environment/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "compute_environment_name", rName), + resource.TestCheckResourceAttr(resourceName, "compute_environment_name_prefix", ""), + resource.TestCheckResourceAttr(resourceName, "compute_resources.#", "1"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.allocation_strategy", ""), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.bid_percentage", "0"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.desired_vcpus", "0"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.ec2_key_pair", ""), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.image_id", ""), + resource.TestCheckResourceAttrPair(resourceName, "compute_resources.0.instance_role", instanceProfileResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.instance_type.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, "compute_resources.0.instance_type.*", "optimal"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.ec2_configuration.#", "2"), + resource.TestCheckResourceAttrSet(resourceName, "compute_resources.0.ec2_configuration.0.image_id_override"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.ec2_configuration.0.image_type", "ECS_AL2"), + resource.TestCheckResourceAttrSet(resourceName, "compute_resources.0.ec2_configuration.1.image_id_override"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.ec2_configuration.1.image_type", "ECS_AL2_NVIDIA"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.max_vcpus", "16"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.min_vcpus", "0"), + resource.TestCheckResourceAttr(resourceName, "compute_resources.0.placement_group", rName), + 
+					resource.TestCheckResourceAttr(resourceName, "compute_resources.0.security_group_ids.#", "1"),
+					resource.TestCheckTypeSetElemAttrPair(resourceName, "compute_resources.0.security_group_ids.*", securityGroupResourceName, "id"),
+					resource.TestCheckResourceAttrPair(resourceName, "compute_resources.0.spot_iam_fleet_role", spotFleetRoleResourceName, "arn"),
+					resource.TestCheckResourceAttr(resourceName, "compute_resources.0.subnets.#", "1"),
+					resource.TestCheckTypeSetElemAttrPair(resourceName, "compute_resources.0.subnets.*", subnetResourceName, "id"),
+					resource.TestCheckResourceAttr(resourceName, "compute_resources.0.tags.%", "0"),
+					resource.TestCheckResourceAttr(resourceName, "compute_resources.0.type", "SPOT"),
+					resource.TestCheckResourceAttrSet(resourceName, "ecs_cluster_arn"),
+					resource.TestCheckResourceAttrPair(resourceName, "service_role", serviceRoleResourceName, "arn"),
+					resource.TestCheckResourceAttr(resourceName, "state", "ENABLED"),
+					resource.TestCheckResourceAttrSet(resourceName, "status"),
+					resource.TestCheckResourceAttrSet(resourceName, "status_reason"),
+					resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),
+					resource.TestCheckResourceAttr(resourceName, "type", "MANAGED"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
 func TestAccBatchComputeEnvironment_launchTemplate(t *testing.T) {
 	ctx := acctest.Context(t)
 	var ce batch.ComputeEnvironmentDetail
@@ -1434,7 +1504,7 @@ func TestAccBatchComputeEnvironment_createSpotWithoutIAMFleetRole(t *testing.T)
 func testAccCheckComputeEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_batch_compute_environment" {
@@ -1468,7 +1538,7 @@ func testAccCheckComputeEnvironmentExists(ctx context.Context, n string, v *batc
 			return fmt.Errorf("No Batch Compute Environment ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 		computeEnvironment, err := tfbatch.FindComputeEnvironmentDetailByName(ctx, conn, rs.Primary.ID)
@@ -1483,7 +1553,7 @@ func testAccCheckComputeEnvironmentExists(ctx context.Context, n string, v *batc
 }
 
 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 	input := &batch.DescribeComputeEnvironmentsInput{}
@@ -2493,3 +2563,49 @@ resource "aws_batch_compute_environment" "test" {
 }
 `, rName))
 }
+
+func testAccComputeEnvironmentConfig_ec2ConfigurationPlacementGroup(rName string) string {
+	return acctest.ConfigCompose(testAccComputeEnvironmentConfig_base(rName), acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), fmt.Sprintf(`
+resource "aws_placement_group" "test" {
+  name     = %[1]q
+  strategy = "cluster"
+}
+
+resource "aws_batch_compute_environment" "test" {
+  compute_environment_name = %[1]q
+
+  compute_resources {
+    instance_role = aws_iam_instance_profile.ecs_instance.arn
+    instance_type = ["optimal"]
+
+    ec2_configuration {
+      image_id_override = data.aws_ami.amzn-ami-minimal-hvm-ebs.id
+      image_type        = "ECS_AL2"
+    }
+
+    ec2_configuration {
+      image_id_override = data.aws_ami.amzn-ami-minimal-hvm-ebs.id
+      image_type        = "ECS_AL2_NVIDIA"
+    }
+
+    max_vcpus = 16
+    min_vcpus = 0
+
+    placement_group = aws_placement_group.test.name
+
+    security_group_ids = [
+      aws_security_group.test.id
+    ]
+    spot_iam_fleet_role = aws_iam_role.ec2_spot_fleet.arn
+    subnets = [
+      aws_subnet.test.id
+    ]
+    type = "SPOT"
+  }
+
+  service_role = aws_iam_role.batch_service.arn
+  type         = "MANAGED"
+  depends_on   = [aws_iam_role_policy_attachment.batch_service]
+}
+`, rName))
+}
diff --git a/internal/service/batch/container_properties.go b/internal/service/batch/container_properties.go
index 98e0eb83b29..49c355a8aba 100644
--- a/internal/service/batch/container_properties.go
+++ b/internal/service/batch/container_properties.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
diff --git a/internal/service/batch/container_properties_test.go b/internal/service/batch/container_properties_test.go
index 556f5d4209d..4038a34e081 100644
--- a/internal/service/batch/container_properties_test.go
+++ b/internal/service/batch/container_properties_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch_test
 
 import (
diff --git a/internal/service/batch/generate.go b/internal/service/batch/generate.go
index 7ada0b8bb89..0d04a2ef789 100644
--- a/internal/service/batch/generate.go
+++ b/internal/service/batch/generate.go
@@ -1,4 +1,8 @@
-//go:generate go run ../../generate/tags/main.go -GetTag -ListTags -ServiceTagsMap -UpdateTags
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+//go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package batch
diff --git a/internal/service/batch/job_definition.go b/internal/service/batch/job_definition.go
index b021f18bfa7..c177d77b26b 100644
--- a/internal/service/batch/job_definition.go
+++ b/internal/service/batch/job_definition.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
@@ -185,13 +188,13 @@ func ResourceJobDefinition() *schema.Resource {
 
 func resourceJobDefinitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &batch.RegisterJobDefinitionInput{
 		JobDefinitionName: aws.String(name),
 		PropagateTags:     aws.Bool(d.Get("propagate_tags").(bool)),
-		Tags:              GetTagsIn(ctx),
+		Tags:              getTagsIn(ctx),
 		Type:              aws.String(d.Get("type").(string)),
 	}
@@ -243,7 +246,7 @@ func resourceJobDefinitionCreate(ctx context.Context, d *schema.ResourceData, me
 
 func resourceJobDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	jobDefinition, err := FindJobDefinitionByARN(ctx, conn, d.Id())
@@ -282,7 +285,7 @@ func resourceJobDefinitionRead(ctx context.Context, d *schema.ResourceData, meta
 		d.Set("retry_strategy", nil)
 	}
 
-	SetTagsOut(ctx, jobDefinition.Tags)
+	setTagsOut(ctx, jobDefinition.Tags)
 
 	if jobDefinition.Timeout != nil {
 		if err := d.Set("timeout", []interface{}{flattenJobTimeout(jobDefinition.Timeout)}); err != nil {
@@ -308,7 +311,7 @@ func resourceJobDefinitionUpdate(ctx context.Context, d *schema.ResourceData, me
 
 func resourceJobDefinitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Batch Job Definition: %s", d.Id())
 	_, err := conn.DeregisterJobDefinitionWithContext(ctx, &batch.DeregisterJobDefinitionInput{
diff --git a/internal/service/batch/job_definition_test.go b/internal/service/batch/job_definition_test.go
index 7fee0e88cbb..17ea8b6aa2a 100644
--- a/internal/service/batch/job_definition_test.go
+++ b/internal/service/batch/job_definition_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch_test
 
 import (
@@ -466,7 +469,7 @@ func testAccCheckJobDefinitionExists(ctx context.Context, n string, jd *batch.Jo
 			return fmt.Errorf("No Batch Job Queue ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 		jobDefinition, err := tfbatch.FindJobDefinitionByARN(ctx, conn, rs.Primary.ID)
@@ -519,7 +522,7 @@ func testAccCheckJobDefinitionRecreated(t *testing.T, before, after *batch.JobDe
 
 func testAccCheckJobDefinitionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_batch_job_definition" {
diff --git a/internal/service/batch/job_queue.go b/internal/service/batch/job_queue.go
index e5d8ebd8f50..7d3ceda7ae3 100644
--- a/internal/service/batch/job_queue.go
+++ b/internal/service/batch/job_queue.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
@@ -76,14 +79,14 @@ func ResourceJobQueue() *schema.Resource {
 
 func resourceJobQueueCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	input := batch.CreateJobQueueInput{
 		ComputeEnvironmentOrder: createComputeEnvironmentOrder(d.Get("compute_environments").([]interface{})),
 		JobQueueName:            aws.String(d.Get("name").(string)),
 		Priority:                aws.Int64(int64(d.Get("priority").(int))),
 		State:                   aws.String(d.Get("state").(string)),
-		Tags:                    GetTagsIn(ctx),
+		Tags:                    getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("scheduling_policy_arn"); ok {
@@ -107,7 +110,7 @@ func resourceJobQueueCreate(ctx context.Context, d *schema.ResourceData, meta in
 	_, err = stateConf.WaitForStateContext(ctx)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error waiting for JobQueue state to be \"VALID\": %s", err)
+		return sdkdiag.AppendErrorf(diags, "waiting for JobQueue state to be \"VALID\": %s", err)
 	}
 
 	arn := aws.StringValue(out.JobQueueArn)
@@ -119,7 +122,7 @@ func resourceJobQueueCreate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceJobQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	jq, err := GetJobQueue(ctx, conn, d.Id())
 	if err != nil {
@@ -152,14 +155,14 @@ func resourceJobQueueRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("scheduling_policy_arn", jq.SchedulingPolicyArn)
 	d.Set("state", jq.State)
 
-	SetTagsOut(ctx, jq.Tags)
+	setTagsOut(ctx, jq.Tags)
 
 	return diags
 }
 
 func resourceJobQueueUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	if d.HasChanges("compute_environments", "priority", "scheduling_policy_arn", "state") {
 		name := d.Get("name").(string)
@@ -209,7 +212,7 @@ func resourceJobQueueUpdate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceJobQueueDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	name := d.Get("name").(string)
 	log.Printf("[DEBUG] Disabling Batch Job Queue: %s", name)
diff --git a/internal/service/batch/job_queue_data_source.go b/internal/service/batch/job_queue_data_source.go
index 47396500e9e..3d2f1d3dfc6 100644
--- a/internal/service/batch/job_queue_data_source.go
+++ b/internal/service/batch/job_queue_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
@@ -78,7 +81,7 @@ func DataSourceJobQueue() *schema.Resource {
 
 func dataSourceJobQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	params := &batch.DescribeJobQueuesInput{
diff --git a/internal/service/batch/job_queue_data_source_test.go b/internal/service/batch/job_queue_data_source_test.go
index d8ed204dfed..f8089c048ef 100644
--- a/internal/service/batch/job_queue_data_source_test.go
+++ b/internal/service/batch/job_queue_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch_test
 
 import (
diff --git a/internal/service/batch/job_queue_test.go b/internal/service/batch/job_queue_test.go
index 48da70fa3a6..d82cd7a48ee 100644
--- a/internal/service/batch/job_queue_test.go
+++ b/internal/service/batch/job_queue_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch_test
 
 import (
@@ -271,7 +274,7 @@ func testAccCheckJobQueueExists(ctx context.Context, n string, jq *batch.JobQueu
 			return fmt.Errorf("No Batch Job Queue ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 		name := rs.Primary.Attributes["name"]
 		queue, err := tfbatch.GetJobQueue(ctx, conn, name)
 		if err != nil {
@@ -292,7 +295,7 @@ func testAccCheckJobQueueDestroy(ctx context.Context) resource.TestCheckFunc {
 			if rs.Type != "aws_batch_job_queue" {
 				continue
 			}
-			conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 			jq, err := tfbatch.GetJobQueue(ctx, conn, rs.Primary.Attributes["name"])
 			if err == nil {
 				if jq != nil {
@@ -307,7 +310,7 @@ func testAccCheckJobQueueDestroy(ctx context.Context) resource.TestCheckFunc {
 
 func testAccCheckJobQueueDisappears(ctx context.Context, jobQueue *batch.JobQueueDetail) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 		name := aws.StringValue(jobQueue.JobQueueName)
 
 		err := tfbatch.DisableJobQueue(ctx, name, conn)
@@ -325,7 +328,7 @@ func testAccCheckJobQueueDisappears(ctx context.Context, jobQueue *batch.JobQueu
 // For example, Terraform may set a single Compute Environment with Order 0, but the console updates it to 1.
 func testAccCheckJobQueueComputeEnvironmentOrderUpdate(ctx context.Context, jobQueue *batch.JobQueueDetail) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 		input := &batch.UpdateJobQueueInput{
 			ComputeEnvironmentOrder: jobQueue.ComputeEnvironmentOrder,
@@ -565,7 +568,7 @@ resource "aws_batch_job_queue" "test" {
 
 func testAccCheckLaunchTemplateDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_launch_template" {
diff --git a/internal/service/batch/scheduling_policy.go b/internal/service/batch/scheduling_policy.go
index fa0764d9b81..53bb4dd27a8 100644
--- a/internal/service/batch/scheduling_policy.go
+++ b/internal/service/batch/scheduling_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
@@ -98,13 +101,13 @@ func ResourceSchedulingPolicy() *schema.Resource {
 }
 
 func resourceSchedulingPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &batch.CreateSchedulingPolicyInput{
 		FairsharePolicy: expandFairsharePolicy(d.Get("fair_share_policy").([]interface{})),
 		Name:            aws.String(name),
-		Tags:            GetTagsIn(ctx),
+		Tags:            getTagsIn(ctx),
 	}
 
 	output, err := conn.CreateSchedulingPolicyWithContext(ctx, input)
@@ -120,7 +123,7 @@ func resourceSchedulingPolicyCreate(ctx context.Context, d *schema.ResourceData,
 
 func resourceSchedulingPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	sp, err := FindSchedulingPolicyByARN(ctx, conn, d.Id())
@@ -140,13 +143,13 @@ func resourceSchedulingPolicyRead(ctx context.Context, d *schema.ResourceData, m
 	}
 	d.Set("name", sp.Name)
 
-	SetTagsOut(ctx, sp.Tags)
+	setTagsOut(ctx, sp.Tags)
 
 	return diags
 }
 
 func resourceSchedulingPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	if d.HasChange("fair_share_policy") {
 		input := &batch.UpdateSchedulingPolicyInput{
@@ -165,7 +168,7 @@ func resourceSchedulingPolicyUpdate(ctx context.Context, d *schema.ResourceData,
 }
 
 func resourceSchedulingPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Batch Scheduling Policy: %s", d.Id())
 	_, err := conn.DeleteSchedulingPolicyWithContext(ctx, &batch.DeleteSchedulingPolicyInput{
diff --git a/internal/service/batch/scheduling_policy_data_source.go b/internal/service/batch/scheduling_policy_data_source.go
index ea2b56a9d62..1059e7cdea3 100644
--- a/internal/service/batch/scheduling_policy_data_source.go
+++ b/internal/service/batch/scheduling_policy_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
@@ -76,7 +79,7 @@ func DataSourceSchedulingPolicy() *schema.Resource {
 
 func dataSourceSchedulingPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).BatchConn()
+	conn := meta.(*conns.AWSClient).BatchConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	schedulingPolicy, err := FindSchedulingPolicyByARN(ctx, conn, d.Get("arn").(string))
diff --git a/internal/service/batch/scheduling_policy_data_source_test.go b/internal/service/batch/scheduling_policy_data_source_test.go
index 1e23eb0dd71..37832894991 100644
--- a/internal/service/batch/scheduling_policy_data_source_test.go
+++ b/internal/service/batch/scheduling_policy_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch_test
 
 import (
diff --git a/internal/service/batch/scheduling_policy_test.go b/internal/service/batch/scheduling_policy_test.go
index b6b2422ff9c..27d266e5221 100644
--- a/internal/service/batch/scheduling_policy_test.go
+++ b/internal/service/batch/scheduling_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch_test
 
 import (
@@ -98,7 +101,7 @@ func testAccCheckSchedulingPolicyExists(ctx context.Context, n string, v *batch.
 			return fmt.Errorf("No Batch Scheduling Policy ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 		output, err := tfbatch.FindSchedulingPolicyByARN(ctx, conn, rs.Primary.ID)
@@ -118,7 +121,7 @@ func testAccCheckSchedulingPolicyDestroy(ctx context.Context) resource.TestCheck
 			if rs.Type != "aws_batch_scheduling_policy" {
 				continue
 			}
-			conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).BatchConn(ctx)
 
 			_, err := tfbatch.FindSchedulingPolicyByARN(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/batch/service_package_gen.go b/internal/service/batch/service_package_gen.go
index 6f17fe31797..1d1ddd2209f 100644
--- a/internal/service/batch/service_package_gen.go
+++ b/internal/service/batch/service_package_gen.go
@@ -5,6 +5,10 @@ package batch
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	batch_sdkv1 "github.com/aws/aws-sdk-go/service/batch"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -77,4 +81,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.Batch
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*batch_sdkv1.Batch, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return batch_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/batch/sweep.go b/internal/service/batch/sweep.go
index 3d025ab5885..b2cfd726a18 100644
--- a/internal/service/batch/sweep.go
+++ b/internal/service/batch/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -14,8 +17,8 @@ import (
 	"github.com/aws/aws-sdk-go/service/iam"
 	multierror "github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
+	"github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk"
 )
 
 func init() {
@@ -51,12 +54,12 @@ func init() {
 
 func sweepComputeEnvironments(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).BatchConn()
-	iamconn := client.(*conns.AWSClient).IAMConn()
+	conn := client.BatchConn(ctx)
+	iamconn := client.IAMConn(ctx)
 
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -139,7 +142,7 @@ func sweepComputeEnvironments(region string) error {
 			d := r.Data(nil)
 			d.SetId(name)
 
-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 
 		return !lastPage
@@ -147,36 +150,32 @@ func sweepComputeEnvironments(region string) error {
 	if sweep.SkipSweepError(err) {
 		log.Printf("[WARN] Skipping Batch Compute Environment sweep for %s: %s", region, err)
-		return sweeperErrs.ErrorOrNil() // In case we have completed some pages, but had errors
+		return nil
 	}
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Batch Compute Environments (%s): %w", region, err))
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Batch Compute Environments (%s): %w", region, err))
 	}
 
-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
-		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Batch Compute Environments: %w", err))
-	}
-
 	return sweeperErrs.ErrorOrNil()
 }
 
 func sweepJobDefinitions(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 	input := &batch.DescribeJobDefinitionsInput{
 		Status: aws.String("ACTIVE"),
 	}
-	conn := client.(*conns.AWSClient).BatchConn()
+	conn := client.BatchConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.DescribeJobDefinitionsPagesWithContext(ctx, input, func(page *batch.DescribeJobDefinitionsOutput, lastPage bool) bool {
@@ -185,11 +184,11 @@ func sweepJobDefinitions(region string) error {
 		}
 
 		for _, v := range page.JobDefinitions {
-			r := ResourceSchedulingPolicy()
+			r := ResourceJobDefinition()
 			d := r.Data(nil)
 			d.SetId(aws.StringValue(v.JobDefinitionArn))
 
-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 
 		return !lastPage
@@ -204,7 +203,7 @@ func sweepJobDefinitions(region string) error {
 		return fmt.Errorf("error listing Batch Job Definitions (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping Batch Job Definitions (%s): %w", region, err)
@@ -215,12 +214,12 @@ func sweepJobDefinitions(region string) error {
 
 func sweepJobQueues(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
 	input := &batch.DescribeJobQueuesInput{}
-	conn := client.(*conns.AWSClient).BatchConn()
+	conn := client.BatchConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.DescribeJobQueuesPagesWithContext(ctx, input, func(page *batch.DescribeJobQueuesOutput, lastPage bool) bool {
@@ -234,7 +233,7 @@ func sweepJobQueues(region string) error {
 			d.SetId(aws.StringValue(v.JobQueueArn))
 			d.Set("name", v.JobQueueName)
 
-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 
 		return !lastPage
@@ -249,7 +248,7 @@ func sweepJobQueues(region string) error {
 		return fmt.Errorf("error listing Batch Job Queues (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping Batch Job Queues (%s): %w", region, err)
@@ -260,12 +259,12 @@ func sweepJobQueues(region string) error {
 
 func sweepSchedulingPolicies(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
 	input := &batch.ListSchedulingPoliciesInput{}
-	conn := client.(*conns.AWSClient).BatchConn()
+	conn := client.BatchConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.ListSchedulingPoliciesPagesWithContext(ctx, input, func(page *batch.ListSchedulingPoliciesOutput, lastPage bool) bool {
@@ -278,7 +277,7 @@ func sweepSchedulingPolicies(region string) error {
 			d := r.Data(nil)
 			d.SetId(aws.StringValue(v.Arn))
 
-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 
 		return !lastPage
@@ -293,7 +292,7 @@ func sweepSchedulingPolicies(region string) error {
 		return fmt.Errorf("error listing Batch Scheduling Policies (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping Batch Scheduling Policies (%s): %w", region, err)
diff --git a/internal/service/batch/tags_gen.go b/internal/service/batch/tags_gen.go
index 06fe3cae3d6..217aa4f54d7 100644
--- a/internal/service/batch/tags_gen.go
+++ b/internal/service/batch/tags_gen.go
@@ -10,34 +10,14 @@ import (
 	"github.com/aws/aws-sdk-go/service/batch/batchiface"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
-	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// GetTag fetches an individual batch service tag for a resource.
-// Returns whether the key value and any errors. A NotFoundError is used to signal that no value was found.
-// This function will optimise the handling over ListTags, if possible.
+// listTags lists batch service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func GetTag(ctx context.Context, conn batchiface.BatchAPI, identifier, key string) (*string, error) {
-	listTags, err := ListTags(ctx, conn, identifier)
-
-	if err != nil {
-		return nil, err
-	}
-
-	if !listTags.KeyExists(key) {
-		return nil, tfresource.NewEmptyResultError(nil)
-	}
-
-	return listTags.KeyValue(key), nil
-}
-
-// ListTags lists batch service tags.
-// The identifier is typically the Amazon Resource Name (ARN), although
-// it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn batchiface.BatchAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn batchiface.BatchAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &batch.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -54,7 +34,7 @@ func ListTags(ctx context.Context, conn batchiface.BatchAPI, identifier string)
 // ListTags lists batch service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).BatchConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).BatchConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -74,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from batch service tags.
+// KeyValueTags creates tftags.KeyValueTags from batch service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns batch service tags from Context.
+// getTagsIn returns batch service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -91,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets batch service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets batch service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates batch service tags.
+// updateTags updates batch service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn batchiface.BatchAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn batchiface.BatchAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -141,5 +121,5 @@ func UpdateTags(ctx context.Context, conn batchiface.BatchAPI, identifier string
 // UpdateTags updates batch service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).BatchConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).BatchConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/batch/validate.go b/internal/service/batch/validate.go
index 4d9920448fa..3a8af808e5a 100644
--- a/internal/service/batch/validate.go
+++ b/internal/service/batch/validate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
diff --git a/internal/service/batch/validate_test.go b/internal/service/batch/validate_test.go
index 4b0d839608c..2054340deb4 100644
--- a/internal/service/batch/validate_test.go
+++ b/internal/service/batch/validate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package batch
 
 import (
diff --git a/internal/service/budgets/budget.go b/internal/service/budgets/budget.go
index add1eddbf2f..9cef689f3f3 100644
--- a/internal/service/budgets/budget.go
+++ b/internal/service/budgets/budget.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package budgets
 
 import (
@@ -286,7 +289,7 @@ func ResourceBudget() *schema.Resource {
 }
 
 func resourceBudgetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	budget, err := expandBudgetUnmarshal(d)
@@ -326,7 +329,7 @@ func resourceBudgetCreate(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourceBudgetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID, budgetName, err := BudgetParseResourceID(d.Id())
@@ -448,7 +451,7 @@ func resourceBudgetRead(ctx context.Context, d *schema.ResourceData, meta interf
 }
 
 func resourceBudgetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID, _, err := BudgetParseResourceID(d.Id())
@@ -481,7 +484,7 @@ func resourceBudgetUpdate(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourceBudgetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID, budgetName, err := BudgetParseResourceID(d.Id())
@@ -706,7 +709,7 @@ func flattenAutoAdjustData(autoAdjustData *budgets.AutoAdjustData) []map[string]
 		"last_auto_adjust_time": aws.TimeValue(autoAdjustData.LastAutoAdjustTime).Format(time.RFC3339),
 	}
 
-	if *autoAdjustData.HistoricalOptions != (budgets.HistoricalOptions{}) { // nosemgrep: ci.prefer-aws-go-sdk-pointer-conversion-conditional
+	if *autoAdjustData.HistoricalOptions != (budgets.HistoricalOptions{}) { // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-conditional
 		attrs["historical_options"] = flattenHistoricalOptions(autoAdjustData.HistoricalOptions)
 	}
diff --git a/internal/service/budgets/budget_action.go b/internal/service/budgets/budget_action.go
index eb5dc75b576..b97eedd7815 100644
--- a/internal/service/budgets/budget_action.go
+++ b/internal/service/budgets/budget_action.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package budgets
 
 import (
@@ -218,7 +221,7 @@ func ResourceBudgetAction() *schema.Resource {
 }
 
 func resourceBudgetActionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID := d.Get("account_id").(string)
 	if accountID == "" {
@@ -258,7 +261,7 @@ func resourceBudgetActionCreate(ctx context.Context, d *schema.ResourceData, met
 }
 
 func resourceBudgetActionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID, actionID, budgetName, err := BudgetActionParseResourceID(d.Id())
@@ -307,7 +310,7 @@ func resourceBudgetActionRead(ctx context.Context, d *schema.ResourceData, meta
 }
 
 func resourceBudgetActionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID, actionID, budgetName, err := BudgetActionParseResourceID(d.Id())
@@ -359,7 +362,7 @@ func resourceBudgetActionUpdate(ctx context.Context, d *schema.ResourceData, met
 }
 
 func resourceBudgetActionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).BudgetsConn()
+	conn := meta.(*conns.AWSClient).BudgetsConn(ctx)
 
 	accountID, actionID, budgetName, err := BudgetActionParseResourceID(d.Id())
diff --git a/internal/service/budgets/budget_action_test.go b/internal/service/budgets/budget_action_test.go
index 3421715852b..0a8911d978e 100644
--- a/internal/service/budgets/budget_action_test.go
+++ b/internal/service/budgets/budget_action_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package budgets_test
 
 import (
@@ -92,7 +95,7 @@ func testAccBudgetActionExists(ctx context.Context, resourceName string, config
 			return fmt.Errorf("No Budget Action ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn(ctx)
 
 		accountID, actionID, budgetName, err := tfbudgets.BudgetActionParseResourceID(rs.Primary.ID)
@@ -114,7 +117,7 @@ func testAccBudgetActionExists(ctx context.Context, resourceName string, config
 
 func testAccCheckBudgetActionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_budgets_budget_action" {
diff --git a/internal/service/budgets/budget_data_source.go b/internal/service/budgets/budget_data_source.go
new file mode 100644
index 00000000000..e268c71b5e4
--- /dev/null
+++ b/internal/service/budgets/budget_data_source.go
@@ -0,0 +1,355 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+package budgets
+
+import (
+	"context"
+	"fmt"
+	"strconv"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/arn"
+	"github.com/aws/aws-sdk-go/service/budgets"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
+	"github.com/hashicorp/terraform-provider-aws/internal/create"
+	"github.com/hashicorp/terraform-provider-aws/names"
+)
+
+// @SDKDataSource("aws_budgets_budget")
+func DataSourceBudget() *schema.Resource {
+	return &schema.Resource{
+		ReadWithoutTimeout: dataSourceBudgetRead,
+
+		Schema: map[string]*schema.Schema{
+			"account_id": {
+				Type:     schema.TypeString,
+				Optional: true,
+				Computed: true,
+			},
+			"arn": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"auto_adjust_data": {
+				Type:     schema.TypeList,
+				Computed: true,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"auto_adjust_type": {
+							Type:     schema.TypeString,
+							Computed: true,
+						},
+						"historical_options": {
+							Type:     schema.TypeList,
+							Computed: true,
+							Elem: &schema.Resource{
+								Schema: map[string]*schema.Schema{
+									"budget_adjustment_period": {
+										Type:     schema.TypeInt,
+										Computed: true,
+									},
+									"lookback_available_periods": {
+										Type:     schema.TypeInt,
+										Computed: true,
+									},
+								},
+							},
+						},
+						"last_auto_adjust_time": {
+							Type:     schema.TypeString,
+							Computed: true,
+						},
+					},
+				},
+			},
+			"budget_type": {
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"budget_limit": {
+				Type:     schema.TypeList,
+				Computed: true,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"amount": {
+							Type:     schema.TypeString,
+							Computed: true,
+						},
+						"unit": {
+							Type:     schema.TypeString,
+							Computed: true,
+						},
+					},
+				},
+			},
+			"calculated_spend": {
+				Type:     schema.TypeList,
+				Computed: true,
+				Elem: &schema.Resource{
+					Schema: map[string]*schema.Schema{
+						"actual_spend": {
+							Type:     schema.TypeList,
+							Computed: true,
+ Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "amount": { + Type: schema.TypeString, + Computed: true, + }, + "unit": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "cost_filter": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Computed: true, + }, + "values": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + "cost_types": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "include_credit": { + Type: schema.TypeBool, + Computed: true, + }, + "include_discount": { + Type: schema.TypeBool, + Computed: true, + }, + "include_other_subscription": { + Type: schema.TypeBool, + Computed: true, + }, + "include_recurring": { + Type: schema.TypeBool, + Computed: true, + }, + "include_refund": { + Type: schema.TypeBool, + Computed: true, + }, + "include_subscription": { + Type: schema.TypeBool, + Computed: true, + }, + "include_support": { + Type: schema.TypeBool, + Computed: true, + }, + "include_tax": { + Type: schema.TypeBool, + Computed: true, + }, + "include_upfront": { + Type: schema.TypeBool, + Computed: true, + }, + "use_amortized": { + Type: schema.TypeBool, + Computed: true, + }, + "use_blended": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "name_prefix": { + Type: schema.TypeString, + Optional: true, + }, + "notification": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "comparison_operator": { + Type: schema.TypeString, + Computed: true, + }, + "notification_type": { + Type: schema.TypeString, + Computed: true, + }, + "subscriber_email_addresses": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + 
}, + "subscriber_sns_topic_arns": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "threshold": { + Type: schema.TypeFloat, + Computed: true, + }, + "threshold_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "planned_limit": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "amount": { + Type: schema.TypeString, + Computed: true, + }, + "start_time": { + Type: schema.TypeString, + Computed: true, + }, + "unit": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "time_period_end": { + Type: schema.TypeString, + Computed: true, + }, + "time_period_start": { + Type: schema.TypeString, + Computed: true, + }, + "time_unit": { + Type: schema.TypeString, + Computed: true, + }, + "budget_exceeded": { + Type: schema.TypeBool, + Computed: true, + }, + }, + } +} + +const ( + DSNameBudget = "Budget Data Source" +) + +func dataSourceBudgetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).BudgetsConn(ctx) + + budgetName := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) + + accountID := d.Get("account_id").(string) + if accountID == "" { + accountID = meta.(*conns.AWSClient).AccountID + } + d.Set("account_id", accountID) + + budget, err := FindBudgetByTwoPartKey(ctx, conn, accountID, budgetName) + if err != nil { + return create.DiagError(names.Budgets, create.ErrActionReading, DSNameBudget, d.Id(), err) + } + + d.SetId(fmt.Sprintf("%s:%s", accountID, budgetName)) + + arn := arn.ARN{ + Partition: meta.(*conns.AWSClient).Partition, + Service: "budgets", + AccountID: accountID, + Resource: fmt.Sprintf("budget/%s", budgetName), + } + d.Set("arn", arn.String()) + + d.Set("budget_type", budget.BudgetType) + + if err := d.Set("budget_limit", flattenSpend(budget.BudgetLimit)); err != nil { + return diag.Errorf("setting budget_spend: %s", 
err) + } + + if err := d.Set("calculated_spend", flattenCalculatedSpend(budget.CalculatedSpend)); err != nil { + return diag.Errorf("setting calculated_spend: %s", err) + } + + d.Set("budget_exceeded", false) + if budget.CalculatedSpend != nil && budget.CalculatedSpend.ActualSpend != nil { + if aws.StringValue(budget.BudgetLimit.Unit) == aws.StringValue(budget.CalculatedSpend.ActualSpend.Unit) { + bLimit, err := strconv.ParseFloat(aws.StringValue(budget.BudgetLimit.Amount), 64) + if err != nil { + return create.DiagError(names.Budgets, create.ErrActionReading, DSNameBudget, d.Id(), err) + } + bSpend, err := strconv.ParseFloat(aws.StringValue(budget.CalculatedSpend.ActualSpend.Amount), 64) + if err != nil { + return create.DiagError(names.Budgets, create.ErrActionReading, DSNameBudget, d.Id(), err) + } + + if bLimit < bSpend { + d.Set("budget_exceeded", true) + } else { + d.Set("budget_exceeded", false) + } + } + } + + d.Set("name", budget.BudgetName) + d.Set("name_prefix", create.NamePrefixFromName(aws.StringValue(budget.BudgetName))) + + return nil +} + +func flattenCalculatedSpend(apiObject *budgets.CalculatedSpend) []interface{} { + if apiObject == nil { + return nil + } + + attrs := map[string]interface{}{ + "actual_spend": flattenSpend(apiObject.ActualSpend), + } + return []interface{}{attrs} +} + +func flattenSpend(apiObject *budgets.Spend) []interface{} { + if apiObject == nil { + return nil + } + + attrs := map[string]interface{}{ + "amount": aws.StringValue(apiObject.Amount), + "unit": aws.StringValue(apiObject.Unit), + } + + return []interface{}{attrs} +} diff --git a/internal/service/budgets/budget_data_source_test.go b/internal/service/budgets/budget_data_source_test.go new file mode 100644 index 00000000000..5fe6c1c1c5c --- /dev/null +++ b/internal/service/budgets/budget_data_source_test.go @@ -0,0 +1,66 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package budgets_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/budgets" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccBudgetsBudgetDataSource_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var budget budgets.Budget + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_budgets_budget.test" + dataSourceName := "data.aws_budgets_budget.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, budgets.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckBudgetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccBudgetDataSourceConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccBudgetExists(ctx, resourceName, &budget), + acctest.CheckResourceAttrAccountID(dataSourceName, "account_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrSet(dataSourceName, "calculated_spend.#"), + resource.TestCheckResourceAttrSet(dataSourceName, "budget_limit.#"), + ), + }, + }, + }) +} + +func testAccBudgetDataSourceConfig_basic(rName string) string { + return fmt.Sprintf(` +resource "aws_budgets_budget" "test" { + name = %[1]q + budget_type = "RI_UTILIZATION" + limit_amount = "100.0" + limit_unit = "PERCENTAGE" + time_unit = "QUARTERLY" + + cost_filter { + name = "Service" + values = ["Amazon Redshift"] + } +} + +data "aws_budgets_budget" "test" { + name = aws_budgets_budget.test.name +} +`, rName) +} diff --git a/internal/service/budgets/budget_test.go b/internal/service/budgets/budget_test.go index 
ac32e89d6df..5ee29c870f5 100644 --- a/internal/service/budgets/budget_test.go +++ b/internal/service/budgets/budget_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package budgets_test import ( @@ -212,7 +215,7 @@ func TestAccBudgetsBudget_autoAdjustDataForecast(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "auto_adjust_data.0.auto_adjust_type", "FORECAST"), resource.TestCheckResourceAttr(resourceName, "cost_filter.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "limit_amount", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "limit_amount", 0), resource.TestCheckResourceAttr(resourceName, "limit_unit", "USD"), ), }, @@ -249,7 +252,7 @@ func TestAccBudgetsBudget_autoAdjustDataHistorical(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "cost_filter.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "time_unit", "MONTHLY"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "limit_amount", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "limit_amount", 0), resource.TestCheckResourceAttr(resourceName, "limit_unit", "USD"), ), }, @@ -270,7 +273,7 @@ func TestAccBudgetsBudget_autoAdjustDataHistorical(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "cost_filter.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "time_unit", "MONTHLY"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "limit_amount", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "limit_amount", 0), resource.TestCheckResourceAttr(resourceName, "limit_unit", "USD"), ), }, @@ -521,7 +524,7 @@ func testAccBudgetExists(ctx context.Context, resourceName string, v *budgets.Bu return fmt.Errorf("No Budget ID is set") } - conn := 
acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn(ctx) accountID, budgetName, err := tfbudgets.BudgetParseResourceID(rs.Primary.ID) @@ -543,7 +546,7 @@ func testAccBudgetExists(ctx context.Context, resourceName string, v *budgets.Bu func testAccCheckBudgetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).BudgetsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_budgets_budget" { diff --git a/internal/service/budgets/generate.go b/internal/service/budgets/generate.go new file mode 100644 index 00000000000..923b283c6e4 --- /dev/null +++ b/internal/service/budgets/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package budgets diff --git a/internal/service/budgets/service_package_gen.go b/internal/service/budgets/service_package_gen.go index b2b83fdfee7..a8aaaebfde5 100644 --- a/internal/service/budgets/service_package_gen.go +++ b/internal/service/budgets/service_package_gen.go @@ -5,6 +5,10 @@ package budgets import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + budgets_sdkv1 "github.com/aws/aws-sdk-go/service/budgets" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -20,7 +24,12 @@ func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.Servic } func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource { - return []*types.ServicePackageSDKDataSource{} + return []*types.ServicePackageSDKDataSource{ + { + Factory: DataSourceBudget, + TypeName: "aws_budgets_budget", + }, + } } func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { @@ -40,4 +49,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Budgets } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*budgets_sdkv1.Budgets, error) { + sess := config["session"].(*session_sdkv1.Session) + + return budgets_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/budgets/sweep.go b/internal/service/budgets/sweep.go index 9009d5e936e..8799102be08 100644 --- a/internal/service/budgets/sweep.go +++ b/internal/service/budgets/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/budgets" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -32,12 +34,12 @@ func init() { func sweepBudgetActions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).BudgetsConn() - accountID := client.(*conns.AWSClient).AccountID + conn := client.BudgetsConn(ctx) + accountID := client.AccountID input := &budgets.DescribeBudgetActionsForAccountInput{ AccountId: aws.String(accountID), } @@ -68,7 +70,7 @@ func sweepBudgetActions(region string) error { return fmt.Errorf("error listing Budget Actions (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Budget Actions (%s): %w", region, err) @@ -79,12 +81,12 @@ func sweepBudgetActions(region string) error { func sweepBudgets(region string) error { // nosemgrep:ci.budgets-in-func-name ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).BudgetsConn() - accountID := client.(*conns.AWSClient).AccountID + conn := client.BudgetsConn(ctx) + accountID := client.AccountID input := &budgets.DescribeBudgetsInput{ AccountId: aws.String(accountID), } @@ -121,7 +123,7 @@ func sweepBudgets(region string) error { // nosemgrep:ci.budgets-in-func-name return fmt.Errorf("error 
listing Budgets (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Budgets (%s): %w", region, err) diff --git a/internal/service/ce/anomaly_monitor.go b/internal/service/ce/anomaly_monitor.go index a5d47100695..5f507e3cc55 100644 --- a/internal/service/ce/anomaly_monitor.go +++ b/internal/service/ce/anomaly_monitor.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce import ( @@ -72,14 +75,14 @@ func ResourceAnomalyMonitor() *schema.Resource { } func resourceAnomalyMonitorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) input := &costexplorer.CreateAnomalyMonitorInput{ AnomalyMonitor: &costexplorer.AnomalyMonitor{ MonitorName: aws.String(d.Get("name").(string)), MonitorType: aws.String(d.Get("monitor_type").(string)), }, - ResourceTags: GetTagsIn(ctx), + ResourceTags: getTagsIn(ctx), } switch d.Get("monitor_type").(string) { case costexplorer.MonitorTypeDimensional: @@ -93,7 +96,7 @@ func resourceAnomalyMonitorCreate(ctx context.Context, d *schema.ResourceData, m expression := costexplorer.Expression{} if err := json.Unmarshal([]byte(v.(string)), &expression); err != nil { - return diag.Errorf("Error parsing specification: %s", err) + return diag.Errorf("parsing specification: %s", err) } input.AnomalyMonitor.MonitorSpecification = &expression @@ -105,7 +108,7 @@ func resourceAnomalyMonitorCreate(ctx context.Context, d *schema.ResourceData, m resp, err := conn.CreateAnomalyMonitorWithContext(ctx, input) if err != nil { - return diag.Errorf("Error creating Anomaly Monitor: %s", err) + return diag.Errorf("creating Anomaly Monitor: %s", err) } if resp == nil || resp.MonitorArn == nil { @@ -118,7 +121,7 @@ func resourceAnomalyMonitorCreate(ctx 
context.Context, d *schema.ResourceData, m } func resourceAnomalyMonitorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) monitor, err := FindAnomalyMonitorByARN(ctx, conn, d.Id()) @@ -135,7 +138,7 @@ func resourceAnomalyMonitorRead(ctx context.Context, d *schema.ResourceData, met if monitor.MonitorSpecification != nil { specificationToJson, err := json.Marshal(monitor.MonitorSpecification) if err != nil { - return diag.Errorf("Error parsing specification response: %s", err) + return diag.Errorf("parsing specification response: %s", err) } specificationToSet, err := structure.NormalizeJsonString(string(specificationToJson)) @@ -155,7 +158,7 @@ func resourceAnomalyMonitorRead(ctx context.Context, d *schema.ResourceData, met } func resourceAnomalyMonitorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) requestUpdate := false input := &costexplorer.UpdateAnomalyMonitorInput{ @@ -179,7 +182,7 @@ func resourceAnomalyMonitorUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceAnomalyMonitorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) _, err := conn.DeleteAnomalyMonitorWithContext(ctx, &costexplorer.DeleteAnomalyMonitorInput{MonitorArn: aws.String(d.Id())}) diff --git a/internal/service/ce/anomaly_monitor_test.go b/internal/service/ce/anomaly_monitor_test.go index 051f3032cce..b58df270a9b 100644 --- a/internal/service/ce/anomaly_monitor_test.go +++ b/internal/service/ce/anomaly_monitor_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ce_test import ( @@ -191,7 +194,7 @@ func TestAccCEAnomalyMonitor_Dimensional(t *testing.T) { func testAccCheckAnomalyMonitorExists(ctx context.Context, n string, anomalyMonitor *costexplorer.AnomalyMonitor) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -220,7 +223,7 @@ func testAccCheckAnomalyMonitorExists(ctx context.Context, n string, anomalyMoni func testAccCheckAnomalyMonitorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ce_anomaly_monitor" { diff --git a/internal/service/ce/anomaly_subscription.go b/internal/service/ce/anomaly_subscription.go index 7c9cec3166f..7b1109877fc 100644 --- a/internal/service/ce/anomaly_subscription.go +++ b/internal/service/ce/anomaly_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ce import ( @@ -94,7 +97,7 @@ func ResourceAnomalySubscription() *schema.Resource { } func resourceAnomalySubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) input := &costexplorer.CreateAnomalySubscriptionInput{ AnomalySubscription: &costexplorer.AnomalySubscription{ @@ -103,7 +106,7 @@ func resourceAnomalySubscriptionCreate(ctx context.Context, d *schema.ResourceDa MonitorArnList: aws.StringSlice(expandAnomalySubscriptionMonitorARNList(d.Get("monitor_arn_list").([]interface{}))), Subscribers: expandAnomalySubscriptionSubscribers(d.Get("subscriber").(*schema.Set).List()), }, - ResourceTags: GetTagsIn(ctx), + ResourceTags: getTagsIn(ctx), } if v, ok := d.GetOk("account_id"); ok { @@ -130,7 +133,7 @@ func resourceAnomalySubscriptionCreate(ctx context.Context, d *schema.ResourceDa } func resourceAnomalySubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) subscription, err := FindAnomalySubscriptionByARN(ctx, conn, d.Id()) @@ -159,7 +162,7 @@ func resourceAnomalySubscriptionRead(ctx context.Context, d *schema.ResourceData } func resourceAnomalySubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) if d.HasChangesExcept("tags", "tags_All") { input := &costexplorer.UpdateAnomalySubscriptionInput{ @@ -193,7 +196,7 @@ func resourceAnomalySubscriptionUpdate(ctx context.Context, d *schema.ResourceDa } func resourceAnomalySubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) _, err := 
conn.DeleteAnomalySubscriptionWithContext(ctx, &costexplorer.DeleteAnomalySubscriptionInput{SubscriptionArn: aws.String(d.Id())}) diff --git a/internal/service/ce/anomaly_subscription_test.go b/internal/service/ce/anomaly_subscription_test.go index 53335e7e663..ffd6828af89 100644 --- a/internal/service/ce/anomaly_subscription_test.go +++ b/internal/service/ce/anomaly_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce_test import ( @@ -275,7 +278,7 @@ func TestAccCEAnomalySubscription_Tags(t *testing.T) { func testAccCheckAnomalySubscriptionExists(ctx context.Context, n string, anomalySubscription *costexplorer.AnomalySubscription) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -304,7 +307,7 @@ func testAccCheckAnomalySubscriptionExists(ctx context.Context, n string, anomal func testAccCheckAnomalySubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ce_anomaly_subscription" { diff --git a/internal/service/ce/consts.go b/internal/service/ce/consts.go index a2b079fa3a1..bd3dee92d93 100644 --- a/internal/service/ce/consts.go +++ b/internal/service/ce/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce const ( diff --git a/internal/service/ce/cost_allocation_tag.go b/internal/service/ce/cost_allocation_tag.go index 0db6c360845..a0fbe183349 100644 --- a/internal/service/ce/cost_allocation_tag.go +++ b/internal/service/ce/cost_allocation_tag.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ce import ( @@ -44,7 +47,7 @@ func ResourceCostAllocationTag() *schema.Resource { } func resourceCostAllocationTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) costAllocTag, err := FindCostAllocationTagByKey(ctx, conn, d.Id()) @@ -80,7 +83,7 @@ func resourceCostAllocationTagDelete(ctx context.Context, d *schema.ResourceData } func updateTagStatus(ctx context.Context, d *schema.ResourceData, meta interface{}, delete bool) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) key := d.Get("tag_key").(string) tagStatus := &costexplorer.CostAllocationTagStatusEntry{ diff --git a/internal/service/ce/cost_allocation_tag_test.go b/internal/service/ce/cost_allocation_tag_test.go index a818c157033..aef3ecbd781 100644 --- a/internal/service/ce/cost_allocation_tag_test.go +++ b/internal/service/ce/cost_allocation_tag_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce_test import ( @@ -70,7 +73,7 @@ func testAccCheckCostAllocationTagExists(ctx context.Context, resourceName strin return create.Error(names.CE, create.ErrActionCheckingExistence, tfce.ResNameCostAllocationTag, resourceName, errors.New("not found in state")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) costAllocTag, err := tfce.FindCostAllocationTagByKey(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/ce/cost_category.go b/internal/service/ce/cost_category.go index febf5ba1212..b7358e49c22 100644 --- a/internal/service/ce/cost_category.go +++ b/internal/service/ce/cost_category.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ce import ( @@ -374,11 +377,11 @@ func schemaCostCategoryRuleExpression() *schema.Resource { } func resourceCostCategoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) input := &costexplorer.CreateCostCategoryDefinitionInput{ Name: aws.String(d.Get("name").(string)), - ResourceTags: GetTagsIn(ctx), + ResourceTags: getTagsIn(ctx), Rules: expandCostCategoryRules(d.Get("rule").(*schema.Set).List()), RuleVersion: aws.String(d.Get("rule_version").(string)), } @@ -411,7 +414,7 @@ func resourceCostCategoryCreate(ctx context.Context, d *schema.ResourceData, met } func resourceCostCategoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) costCategory, err := FindCostCategoryByARN(ctx, conn, d.Id()) @@ -441,7 +444,7 @@ func resourceCostCategoryRead(ctx context.Context, d *schema.ResourceData, meta } func resourceCostCategoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &costexplorer.UpdateCostCategoryDefinitionInput{ @@ -470,7 +473,7 @@ func resourceCostCategoryUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceCostCategoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) _, err := conn.DeleteCostCategoryDefinitionWithContext(ctx, &costexplorer.DeleteCostCategoryDefinitionInput{ CostCategoryArn: aws.String(d.Id()), diff --git a/internal/service/ce/cost_category_data_source.go b/internal/service/ce/cost_category_data_source.go index 
06c4ece6f80..4a047448026 100644 --- a/internal/service/ce/cost_category_data_source.go +++ b/internal/service/ce/cost_category_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce import ( @@ -314,7 +317,7 @@ func DataSourceCostCategory() *schema.Resource { } func dataSourceCostCategoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig costCategory, err := FindCostCategoryByARN(ctx, conn, d.Get("cost_category_arn").(string)) @@ -337,7 +340,7 @@ func dataSourceCostCategoryRead(ctx context.Context, d *schema.ResourceData, met d.SetId(aws.StringValue(costCategory.CostCategoryArn)) - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return create.DiagError(names.CE, "listing tags", ResNameCostCategory, d.Id(), err) diff --git a/internal/service/ce/cost_category_data_source_test.go b/internal/service/ce/cost_category_data_source_test.go index 0736d5a2ded..47925dd8b40 100644 --- a/internal/service/ce/cost_category_data_source_test.go +++ b/internal/service/ce/cost_category_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce_test import ( diff --git a/internal/service/ce/cost_category_test.go b/internal/service/ce/cost_category_test.go index fe55c756c8c..39caac69f69 100644 --- a/internal/service/ce/cost_category_test.go +++ b/internal/service/ce/cost_category_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ce_test import ( @@ -234,7 +237,7 @@ func testAccCheckCostCategoryExists(ctx context.Context, n string, v *costexplor return fmt.Errorf("No CE Cost Category ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) output, err := tfce.FindCostCategoryByARN(ctx, conn, rs.Primary.ID) @@ -250,7 +253,7 @@ func testAccCheckCostCategoryExists(ctx context.Context, n string, v *costexplor func testAccCheckCostCategoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CEConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ce_cost_category" { diff --git a/internal/service/ce/find.go b/internal/service/ce/find.go index 9b2af70e19f..d6553ae01e6 100644 --- a/internal/service/ce/find.go +++ b/internal/service/ce/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce import ( diff --git a/internal/service/ce/generate.go b/internal/service/ce/generate.go index 3e98b006e0a..4336a942c2c 100644 --- a/internal/service/ce/generate.go +++ b/internal/service/ce/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOutTagsElem=ResourceTags -ServiceTagsSlice -TagInTagsElem=ResourceTags -UpdateTags -UntagInTagsElem=ResourceTagKeys -TagType=ResourceTag +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file.
package ce diff --git a/internal/service/ce/service_package_gen.go b/internal/service/ce/service_package_gen.go index b89fc28f901..2dde43272d5 100644 --- a/internal/service/ce/service_package_gen.go +++ b/internal/service/ce/service_package_gen.go @@ -5,6 +5,10 @@ package ce import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + costexplorer_sdkv1 "github.com/aws/aws-sdk-go/service/costexplorer" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -69,4 +73,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CE } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*costexplorer_sdkv1.CostExplorer, error) { + sess := config["session"].(*session_sdkv1.Session) + + return costexplorer_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ce/tags_data_source.go b/internal/service/ce/tags_data_source.go index 69295dc6f83..b749d445783 100644 --- a/internal/service/ce/tags_data_source.go +++ b/internal/service/ce/tags_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ce import ( @@ -89,7 +92,7 @@ func DataSourceTags() *schema.Resource { } func dataSourceTagsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CEConn() + conn := meta.(*conns.AWSClient).CEConn(ctx) input := &costexplorer.GetTagsInput{ TimePeriod: expandTagsTimePeriod(d.Get("time_period").([]interface{})[0].(map[string]interface{})), diff --git a/internal/service/ce/tags_data_source_test.go b/internal/service/ce/tags_data_source_test.go index dbbccbcaefc..37db90e30de 100644 --- a/internal/service/ce/tags_data_source_test.go +++ b/internal/service/ce/tags_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ce_test import ( diff --git a/internal/service/ce/tags_gen.go b/internal/service/ce/tags_gen.go index f9b1f17690e..a5fb62d6663 100644 --- a/internal/service/ce/tags_gen.go +++ b/internal/service/ce/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ce service tags. +// listTags lists ce service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn costexploreriface.CostExplorerAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn costexploreriface.CostExplorerAPI, identifier string) (tftags.KeyValueTags, error) { input := &costexplorer.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn costexploreriface.CostExplorerAPI, ident // ListTags lists ce service tags and sets them in Context. // It is called from outside this package.
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CEConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CEConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*costexplorer.ResourceTag) tftags. return tftags.New(ctx, m) } -// GetTagsIn returns ce service tags from Context. +// getTagsIn returns ce service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*costexplorer.ResourceTag { +func getTagsIn(ctx context.Context) []*costexplorer.ResourceTag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*costexplorer.ResourceTag { return nil } -// SetTagsOut sets ce service tags in Context. -func SetTagsOut(ctx context.Context, tags []*costexplorer.ResourceTag) { +// setTagsOut sets ce service tags in Context. +func setTagsOut(ctx context.Context, tags []*costexplorer.ResourceTag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ce service tags. +// updateTags updates ce service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn costexploreriface.CostExplorerAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn costexploreriface.CostExplorerAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn costexploreriface.CostExplorerAPI, ide // UpdateTags updates ce service tags. 
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CEConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CEConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/chime/generate.go b/internal/service/chime/generate.go new file mode 100644 index 00000000000..7bbc5209df2 --- /dev/null +++ b/internal/service/chime/generate.go @@ -0,0 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -CreateTags -UpdateTags -AWSSDKServicePackage=chimesdkvoice +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package chime diff --git a/internal/service/chime/service_package.go b/internal/service/chime/service_package.go new file mode 100644 index 00000000000..2ddfe0505ea --- /dev/null +++ b/internal/service/chime/service_package.go @@ -0,0 +1,28 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package chime + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + chime_sdkv1 "github.com/aws/aws-sdk-go/service/chime" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *chime_sdkv1.Chime) (*chime_sdkv1.Chime, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + // When calling CreateVoiceConnector across multiple resources, + // the API can randomly return a BadRequestException without explanation + if r.Operation.Name == "CreateVoiceConnector" { + if tfawserr.ErrMessageContains(r.Error, chime_sdkv1.ErrCodeBadRequestException, "Service received a bad request") { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} diff --git a/internal/service/chime/service_package_gen.go b/internal/service/chime/service_package_gen.go index 56af19cd21f..e875959ddac 100644 --- a/internal/service/chime/service_package_gen.go +++ b/internal/service/chime/service_package_gen.go @@ -5,6 +5,10 @@ package chime import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + chime_sdkv1 "github.com/aws/aws-sdk-go/service/chime" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -28,6 +32,10 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka { Factory: ResourceVoiceConnector, TypeName: "aws_chime_voice_connector", + Name: "Voice Connector", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, }, { Factory: ResourceVoiceConnectorGroup, @@ -60,4 +68,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Chime } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*chime_sdkv1.Chime, error) { + sess := config["session"].(*session_sdkv1.Session) + + return chime_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/chime/tags_gen.go b/internal/service/chime/tags_gen.go new file mode 100644 index 00000000000..81d20a7bbee --- /dev/null +++ b/internal/service/chime/tags_gen.go @@ -0,0 +1,151 @@ +// Code generated by internal/generate/tags/main.go; DO NOT EDIT. +package chime + +import ( + "context" + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/chimesdkvoice" + "github.com/aws/aws-sdk-go/service/chimesdkvoice/chimesdkvoiceiface" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// listTags lists chime service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func listTags(ctx context.Context, conn chimesdkvoiceiface.ChimeSDKVoiceAPI, identifier string) (tftags.KeyValueTags, error) { + input := &chimesdkvoice.ListTagsForResourceInput{ + ResourceARN: aws.String(identifier), + } + + output, err := conn.ListTagsForResourceWithContext(ctx, input) + + if err != nil { + return tftags.New(ctx, nil), err + } + + return KeyValueTags(ctx, output.Tags), nil +} + +// ListTags lists chime service tags and sets them in Context. +// It is called from outside this package.
+func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { + tags, err := listTags(ctx, meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx), identifier) + + if err != nil { + return err + } + + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(tags) + } + + return nil +} + +// []*SERVICE.Tag handling + +// Tags returns chime service tags. +func Tags(tags tftags.KeyValueTags) []*chimesdkvoice.Tag { + result := make([]*chimesdkvoice.Tag, 0, len(tags)) + + for k, v := range tags.Map() { + tag := &chimesdkvoice.Tag{ + Key: aws.String(k), + Value: aws.String(v), + } + + result = append(result, tag) + } + + return result +} + +// KeyValueTags creates tftags.KeyValueTags from chimesdkvoice service tags. +func KeyValueTags(ctx context.Context, tags []*chimesdkvoice.Tag) tftags.KeyValueTags { + m := make(map[string]*string, len(tags)) + + for _, tag := range tags { + m[aws.StringValue(tag.Key)] = tag.Value + } + + return tftags.New(ctx, m) +} + +// getTagsIn returns chime service tags from Context. +// nil is returned if there are no input tags. +func getTagsIn(ctx context.Context) []*chimesdkvoice.Tag { + if inContext, ok := tftags.FromContext(ctx); ok { + if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + return tags + } + } + + return nil +} + +// setTagsOut sets chime service tags in Context. +func setTagsOut(ctx context.Context, tags []*chimesdkvoice.Tag) { + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + } +} + +// createTags creates chime service tags for new resources. +func createTags(ctx context.Context, conn chimesdkvoiceiface.ChimeSDKVoiceAPI, identifier string, tags []*chimesdkvoice.Tag) error { + if len(tags) == 0 { + return nil + } + + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) +} + +// updateTags updates chime service tags. 
+// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func updateTags(ctx context.Context, conn chimesdkvoiceiface.ChimeSDKVoiceAPI, identifier string, oldTagsMap, newTagsMap any) error { + oldTags := tftags.New(ctx, oldTagsMap) + newTags := tftags.New(ctx, newTagsMap) + + removedTags := oldTags.Removed(newTags) + removedTags = removedTags.IgnoreSystem(names.ChimeSDKVoice) + if len(removedTags) > 0 { + input := &chimesdkvoice.UntagResourceInput{ + ResourceARN: aws.String(identifier), + TagKeys: aws.StringSlice(removedTags.Keys()), + } + + _, err := conn.UntagResourceWithContext(ctx, input) + + if err != nil { + return fmt.Errorf("untagging resource (%s): %w", identifier, err) + } + } + + updatedTags := oldTags.Updated(newTags) + updatedTags = updatedTags.IgnoreSystem(names.ChimeSDKVoice) + if len(updatedTags) > 0 { + input := &chimesdkvoice.TagResourceInput{ + ResourceARN: aws.String(identifier), + Tags: Tags(updatedTags), + } + + _, err := conn.TagResourceWithContext(ctx, input) + + if err != nil { + return fmt.Errorf("tagging resource (%s): %w", identifier, err) + } + } + + return nil +} + +// UpdateTags updates chime service tags. +// It is called from outside this package. +func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { + return updateTags(ctx, meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx), identifier, oldTags, newTags) +} diff --git a/internal/service/chime/voice_connector.go b/internal/service/chime/voice_connector.go index 0a1e40c7801..da241dd7f60 100644 --- a/internal/service/chime/voice_connector.go +++ b/internal/service/chime/voice_connector.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -11,9 +14,14 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" ) -// @SDKResource("aws_chime_voice_connector") +// @SDKResource("aws_chime_voice_connector", name="Voice Connector") +// @Tags(identifierAttribute="arn") func ResourceVoiceConnector() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceVoiceConnectorCreate, @@ -26,6 +34,10 @@ func ResourceVoiceConnector() *schema.Resource { }, Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, "aws_region": { Type: schema.TypeString, ForceNew: true, @@ -46,12 +58,17 @@ func ResourceVoiceConnector() *schema.Resource { Type: schema.TypeBool, Required: true, }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), }, + + CustomizeDiff: verify.SetTagsDiff, } } func resourceVoiceConnectorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeConn(ctx) createInput := &chime.CreateVoiceConnectorInput{ Name: aws.String(d.Get("name").(string)), @@ -64,16 +81,22 @@ func resourceVoiceConnectorCreate(ctx context.Context, d *schema.ResourceData, m resp, err := conn.CreateVoiceConnectorWithContext(ctx, createInput) if err != nil || resp.VoiceConnector == nil { - return diag.Errorf("Error creating Chime Voice connector: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Chime Voice connector: %s", err) } 
d.SetId(aws.StringValue(resp.VoiceConnector.VoiceConnectorId)) - return resourceVoiceConnectorRead(ctx, d, meta) + tagsConn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + if err := createTags(ctx, tagsConn, aws.StringValue(resp.VoiceConnector.VoiceConnectorArn), getTagsIn(ctx)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting Chime Voice Connector (%s) tags: %s", d.Id(), err) + } + + return append(diags, resourceVoiceConnectorRead(ctx, d, meta)...) } func resourceVoiceConnectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeConn(ctx) getInput := &chime.GetVoiceConnectorInput{ VoiceConnectorId: aws.String(d.Id()), @@ -83,23 +106,25 @@ func resourceVoiceConnectorRead(ctx context.Context, d *schema.ResourceData, met if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, chime.ErrCodeNotFoundException) { log.Printf("[WARN] Chime Voice connector %s not found", d.Id()) d.SetId("") - return nil + return diags } if err != nil || resp.VoiceConnector == nil { - return diag.Errorf("Error getting Voice connector (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "getting Voice connector (%s): %s", d.Id(), err) } + d.Set("arn", resp.VoiceConnector.VoiceConnectorArn) d.Set("aws_region", resp.VoiceConnector.AwsRegion) d.Set("outbound_host_name", resp.VoiceConnector.OutboundHostName) d.Set("require_encryption", resp.VoiceConnector.RequireEncryption) d.Set("name", resp.VoiceConnector.Name) - return nil + return diags } func resourceVoiceConnectorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeConn(ctx) if d.HasChanges("name", "require_encryption") { updateInput := &chime.UpdateVoiceConnectorInput{ @@ -109,14 +134,16 @@ func resourceVoiceConnectorUpdate(ctx 
context.Context, d *schema.ResourceData, m } if _, err := conn.UpdateVoiceConnectorWithContext(ctx, updateInput); err != nil { - return diag.Errorf("Error updating Voice connector (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating Voice connector (%s): %s", d.Id(), err) } } - return resourceVoiceConnectorRead(ctx, d, meta) + + return append(diags, resourceVoiceConnectorRead(ctx, d, meta)...) } func resourceVoiceConnectorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.DeleteVoiceConnectorInput{ VoiceConnectorId: aws.String(d.Id()), @@ -125,9 +152,10 @@ func resourceVoiceConnectorDelete(ctx context.Context, d *schema.ResourceData, m if _, err := conn.DeleteVoiceConnectorWithContext(ctx, input); err != nil { if tfawserr.ErrCodeEquals(err, chime.ErrCodeNotFoundException) { log.Printf("[WARN] Chime Voice connector %s not found", d.Id()) - return nil + return diags } - return diag.Errorf("Error deleting Voice connector (%s)", d.Id()) + return sdkdiag.AppendErrorf(diags, "deleting Voice connector (%s)", d.Id()) } - return nil + + return diags } diff --git a/internal/service/chime/voice_connector_group.go b/internal/service/chime/voice_connector_group.go index f8173b478f2..b82ce50c422 100644 --- a/internal/service/chime/voice_connector_group.go +++ b/internal/service/chime/voice_connector_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -55,7 +58,7 @@ func ResourceVoiceConnectorGroup() *schema.Resource { } func resourceVoiceConnectorGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.CreateVoiceConnectorGroupInput{ Name: aws.String(d.Get("name").(string)), @@ -67,7 +70,7 @@ func resourceVoiceConnectorGroupCreate(ctx context.Context, d *schema.ResourceDa resp, err := conn.CreateVoiceConnectorGroupWithContext(ctx, input) if err != nil || resp.VoiceConnectorGroup == nil { - return diag.Errorf("error creating Chime Voice Connector group: %s", err) + return diag.Errorf("creating Chime Voice Connector group: %s", err) } d.SetId(aws.StringValue(resp.VoiceConnectorGroup.VoiceConnectorGroupId)) @@ -76,7 +79,7 @@ func resourceVoiceConnectorGroupCreate(ctx context.Context, d *schema.ResourceDa } func resourceVoiceConnectorGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) getInput := &chime.GetVoiceConnectorGroupInput{ VoiceConnectorGroupId: aws.String(d.Id()), @@ -89,19 +92,19 @@ func resourceVoiceConnectorGroupRead(ctx context.Context, d *schema.ResourceData return nil } if err != nil || resp.VoiceConnectorGroup == nil { - return diag.Errorf("error getting Chime Voice Connector group (%s): %s", d.Id(), err) + return diag.Errorf("getting Chime Voice Connector group (%s): %s", d.Id(), err) } d.Set("name", resp.VoiceConnectorGroup.Name) if err := d.Set("connector", flattenVoiceConnectorItems(resp.VoiceConnectorGroup.VoiceConnectorItems)); err != nil { - return diag.Errorf("error setting Chime Voice Connector group items (%s): %s", d.Id(), err) + return diag.Errorf("setting Chime Voice Connector group items (%s): %s", d.Id(), err) } return nil } func 
resourceVoiceConnectorGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.UpdateVoiceConnectorGroupInput{ Name: aws.String(d.Get("name").(string)), @@ -117,14 +120,14 @@ func resourceVoiceConnectorGroupUpdate(ctx context.Context, d *schema.ResourceDa } if _, err := conn.UpdateVoiceConnectorGroupWithContext(ctx, input); err != nil { - return diag.Errorf("error updating Chime Voice Connector group (%s): %s", d.Id(), err) + return diag.Errorf("updating Chime Voice Connector group (%s): %s", d.Id(), err) } return resourceVoiceConnectorGroupRead(ctx, d, meta) } func resourceVoiceConnectorGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) if v, ok := d.GetOk("connector"); ok && v.(*schema.Set).Len() > 0 { if err := resourceVoiceConnectorGroupUpdate(ctx, d, meta); err != nil { @@ -141,7 +144,7 @@ func resourceVoiceConnectorGroupDelete(ctx context.Context, d *schema.ResourceDa log.Printf("[WARN] Chime Voice connector group %s not found", d.Id()) return nil } - return diag.Errorf("error deleting Chime Voice Connector group (%s): %s", d.Id(), err) + return diag.Errorf("deleting Chime Voice Connector group (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/chime/voice_connector_group_test.go b/internal/service/chime/voice_connector_group_test.go index 43b4050dfde..581442870c5 100644 --- a/internal/service/chime/voice_connector_group_test.go +++ b/internal/service/chime/voice_connector_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -155,7 +158,7 @@ func testAccCheckVoiceConnectorGroupExists(ctx context.Context, name string, vc return fmt.Errorf("no Chime voice connector group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorGroupInput{ VoiceConnectorGroupId: aws.String(rs.Primary.ID), } @@ -176,7 +179,7 @@ func testAccCheckVoiceConnectorGroupDestroy(ctx context.Context) resource.TestCh if rs.Type != "aws_chime_voice_connector" { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorGroupInput{ VoiceConnectorGroupId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chime/voice_connector_logging.go b/internal/service/chime/voice_connector_logging.go index 072ac5d8b0c..3c61de84a21 100644 --- a/internal/service/chime/voice_connector_logging.go +++ b/internal/service/chime/voice_connector_logging.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -45,7 +48,7 @@ func ResourceVoiceConnectorLogging() *schema.Resource { } func resourceVoiceConnectorLoggingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) vcId := d.Get("voice_connector_id").(string) input := &chime.PutVoiceConnectorLoggingConfigurationInput{ @@ -57,7 +60,7 @@ func resourceVoiceConnectorLoggingCreate(ctx context.Context, d *schema.Resource } if _, err := conn.PutVoiceConnectorLoggingConfigurationWithContext(ctx, input); err != nil { - return diag.Errorf("error creating Chime Voice Connector (%s) logging configuration: %s", vcId, err) + return diag.Errorf("creating Chime Voice Connector (%s) logging configuration: %s", vcId, err) } d.SetId(vcId) @@ -65,7 +68,7 @@ func resourceVoiceConnectorLoggingCreate(ctx context.Context, d *schema.Resource } func resourceVoiceConnectorLoggingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorLoggingConfigurationInput{ VoiceConnectorId: aws.String(d.Id()), @@ -79,7 +82,7 @@ func resourceVoiceConnectorLoggingRead(ctx context.Context, d *schema.ResourceDa } if err != nil || resp.LoggingConfiguration == nil { - return diag.Errorf("error getting Chime Voice Connector (%s) logging configuration: %s", d.Id(), err) + return diag.Errorf("getting Chime Voice Connector (%s) logging configuration: %s", d.Id(), err) } d.Set("enable_media_metric_logs", resp.LoggingConfiguration.EnableMediaMetricLogs) d.Set("enable_sip_logs", resp.LoggingConfiguration.EnableSIPLogs) @@ -89,7 +92,7 @@ func resourceVoiceConnectorLoggingRead(ctx context.Context, d *schema.ResourceDa } func resourceVoiceConnectorLoggingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) if d.HasChanges("enable_sip_logs", "enable_media_metric_logs") { input := &chime.PutVoiceConnectorLoggingConfigurationInput{ @@ -101,7 +104,7 @@ func resourceVoiceConnectorLoggingUpdate(ctx context.Context, d *schema.Resource } if _, err := conn.PutVoiceConnectorLoggingConfigurationWithContext(ctx, input); err != nil { - return diag.Errorf("error updating Chime Voice Connector (%s) logging configuration: %s", d.Id(), err) + return diag.Errorf("updating Chime Voice Connector (%s) logging configuration: %s", d.Id(), err) } } @@ -109,7 +112,7 @@ func resourceVoiceConnectorLoggingUpdate(ctx context.Context, d *schema.Resource } func resourceVoiceConnectorLoggingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.PutVoiceConnectorLoggingConfigurationInput{ VoiceConnectorId: aws.String(d.Id()), @@ -126,7 +129,7 @@ func resourceVoiceConnectorLoggingDelete(ctx context.Context, d *schema.Resource } if err != nil { - return diag.Errorf("error deleting Chime Voice Connector (%s) logging configuration: %s", d.Id(), err) + return diag.Errorf("deleting Chime Voice Connector (%s) logging configuration: %s", d.Id(), err) } return nil diff --git a/internal/service/chime/voice_connector_logging_test.go b/internal/service/chime/voice_connector_logging_test.go index 908f6161b2f..925b80bef44 100644 --- a/internal/service/chime/voice_connector_logging_test.go +++ b/internal/service/chime/voice_connector_logging_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -141,7 +144,7 @@ func testAccCheckVoiceConnectorLoggingExists(ctx context.Context, name string) r return fmt.Errorf("no Chime Voice Connector logging ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorLoggingConfigurationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chime/voice_connector_origination.go b/internal/service/chime/voice_connector_origination.go index a8f21bd0b41..3e1b02f6ee2 100644 --- a/internal/service/chime/voice_connector_origination.go +++ b/internal/service/chime/voice_connector_origination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -76,7 +79,7 @@ func ResourceVoiceConnectorOrigination() *schema.Resource { } func resourceVoiceConnectorOriginationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) vcId := d.Get("voice_connector_id").(string) @@ -92,7 +95,7 @@ func resourceVoiceConnectorOriginationCreate(ctx context.Context, d *schema.Reso } if _, err := conn.PutVoiceConnectorOriginationWithContext(ctx, input); err != nil { - return diag.Errorf("error creating Chime Voice Connector (%s) origination: %s", vcId, err) + return diag.Errorf("creating Chime Voice Connector (%s) origination: %s", vcId, err) } d.SetId(vcId) @@ -101,7 +104,7 @@ func resourceVoiceConnectorOriginationCreate(ctx context.Context, d *schema.Reso } func resourceVoiceConnectorOriginationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorOriginationInput{ VoiceConnectorId: 
aws.String(d.Id()), @@ -116,25 +119,25 @@ func resourceVoiceConnectorOriginationRead(ctx context.Context, d *schema.Resour } if err != nil { - return diag.Errorf("error getting Chime Voice Connector (%s) origination: %s", d.Id(), err) + return diag.Errorf("getting Chime Voice Connector (%s) origination: %s", d.Id(), err) } if resp == nil || resp.Origination == nil { - return diag.Errorf("error getting Chime Voice Connector (%s) origination: empty response", d.Id()) + return diag.Errorf("getting Chime Voice Connector (%s) origination: empty response", d.Id()) } d.Set("disabled", resp.Origination.Disabled) d.Set("voice_connector_id", d.Id()) if err := d.Set("route", flattenOriginationRoutes(resp.Origination.Routes)); err != nil { - return diag.Errorf("error setting Chime Voice Connector (%s) origination routes: %s", d.Id(), err) + return diag.Errorf("setting Chime Voice Connector (%s) origination routes: %s", d.Id(), err) } return nil } func resourceVoiceConnectorOriginationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) if d.HasChanges("route", "disabled") { input := &chime.PutVoiceConnectorOriginationInput{ @@ -151,7 +154,7 @@ func resourceVoiceConnectorOriginationUpdate(ctx context.Context, d *schema.Reso _, err := conn.PutVoiceConnectorOriginationWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Chime Voice Connector (%s) origination: %s", d.Id(), err) + return diag.Errorf("updating Chime Voice Connector (%s) origination: %s", d.Id(), err) } } @@ -159,7 +162,7 @@ func resourceVoiceConnectorOriginationUpdate(ctx context.Context, d *schema.Reso } func resourceVoiceConnectorOriginationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := 
&chime.DeleteVoiceConnectorOriginationInput{ VoiceConnectorId: aws.String(d.Id()), @@ -172,7 +175,7 @@ func resourceVoiceConnectorOriginationDelete(ctx context.Context, d *schema.Reso } if err != nil { - return diag.Errorf("error deleting Chime Voice Connector (%s) origination: %s", d.Id(), err) + return diag.Errorf("deleting Chime Voice Connector (%s) origination: %s", d.Id(), err) } return nil diff --git a/internal/service/chime/voice_connector_origination_test.go b/internal/service/chime/voice_connector_origination_test.go index 781c93e99b7..890ac9dbe32 100644 --- a/internal/service/chime/voice_connector_origination_test.go +++ b/internal/service/chime/voice_connector_origination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -124,7 +127,7 @@ func testAccCheckVoiceConnectorOriginationExists(ctx context.Context, name strin return fmt.Errorf("no Chime voice connector origination ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorOriginationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } @@ -148,7 +151,7 @@ func testAccCheckVoiceConnectorOriginationDestroy(ctx context.Context) resource. if rs.Type != "aws_chime_voice_connector_origination" { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorOriginationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chime/voice_connector_streaming.go b/internal/service/chime/voice_connector_streaming.go index 8eb16d3b981..f71196256df 100644 --- a/internal/service/chime/voice_connector_streaming.go +++ b/internal/service/chime/voice_connector_streaming.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -76,7 +79,7 @@ func ResourceVoiceConnectorStreaming() *schema.Resource { } func resourceVoiceConnectorStreamingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) vcId := d.Get("voice_connector_id").(string) input := &chimesdkvoice.PutVoiceConnectorStreamingConfigurationInput{ @@ -99,7 +102,7 @@ func resourceVoiceConnectorStreamingCreate(ctx context.Context, d *schema.Resour input.StreamingConfiguration = config if _, err := conn.PutVoiceConnectorStreamingConfigurationWithContext(ctx, input); err != nil { - return diag.Errorf("error creating Chime Voice Connector (%s) streaming configuration: %s", vcId, err) + return diag.Errorf("creating Chime Voice Connector (%s) streaming configuration: %s", vcId, err) } d.SetId(vcId) @@ -108,7 +111,7 @@ func resourceVoiceConnectorStreamingCreate(ctx context.Context, d *schema.Resour } func resourceVoiceConnectorStreamingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) input := &chimesdkvoice.GetVoiceConnectorStreamingConfigurationInput{ VoiceConnectorId: aws.String(d.Id()), @@ -122,11 +125,11 @@ func resourceVoiceConnectorStreamingRead(ctx context.Context, d *schema.Resource } if err != nil { - return diag.Errorf("error getting Chime Voice Connector (%s) streaming: %s", d.Id(), err) + return diag.Errorf("getting Chime Voice Connector (%s) streaming: %s", d.Id(), err) } if resp == nil || resp.StreamingConfiguration == nil { - return diag.Errorf("error getting Chime Voice Connector (%s) streaming: empty response", d.Id()) + return diag.Errorf("getting Chime Voice Connector (%s) streaming: empty response", d.Id()) } d.Set("disabled", resp.StreamingConfiguration.Disabled) @@ 
-134,18 +137,18 @@ func resourceVoiceConnectorStreamingRead(ctx context.Context, d *schema.Resource d.Set("voice_connector_id", d.Id()) if err := d.Set("streaming_notification_targets", flattenStreamingNotificationTargets(resp.StreamingConfiguration.StreamingNotificationTargets)); err != nil { - return diag.Errorf("error setting Chime Voice Connector streaming configuration targets (%s): %s", d.Id(), err) + return diag.Errorf("setting Chime Voice Connector streaming configuration targets (%s): %s", d.Id(), err) } if err := d.Set("media_insights_configuration", flattenMediaInsightsConfiguration(resp.StreamingConfiguration.MediaInsightsConfiguration)); err != nil { - return diag.Errorf("error setting Chime Voice Connector streaming configuration media insights configuration (%s): %s", d.Id(), err) + return diag.Errorf("setting Chime Voice Connector streaming configuration media insights configuration (%s): %s", d.Id(), err) } return nil } func resourceVoiceConnectorStreamingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) vcId := d.Get("voice_connector_id").(string) @@ -170,7 +173,7 @@ func resourceVoiceConnectorStreamingUpdate(ctx context.Context, d *schema.Resour input.StreamingConfiguration = config if _, err := conn.PutVoiceConnectorStreamingConfigurationWithContext(ctx, input); err != nil { - return diag.Errorf("error updating Chime Voice Connector (%s) streaming configuration: %s", d.Id(), err) + return diag.Errorf("updating Chime Voice Connector (%s) streaming configuration: %s", d.Id(), err) } } @@ -178,7 +181,7 @@ func resourceVoiceConnectorStreamingUpdate(ctx context.Context, d *schema.Resour } func resourceVoiceConnectorStreamingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := 
meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) input := &chimesdkvoice.DeleteVoiceConnectorStreamingConfigurationInput{ VoiceConnectorId: aws.String(d.Id()), @@ -191,7 +194,7 @@ func resourceVoiceConnectorStreamingDelete(ctx context.Context, d *schema.Resour } if err != nil { - return diag.Errorf("error deleting Chime Voice Connector (%s) streaming configuration: %s", d.Id(), err) + return diag.Errorf("deleting Chime Voice Connector (%s) streaming configuration: %s", d.Id(), err) } return nil diff --git a/internal/service/chime/voice_connector_streaming_test.go b/internal/service/chime/voice_connector_streaming_test.go index 2336bc5aa47..7948f05fef4 100644 --- a/internal/service/chime/voice_connector_streaming_test.go +++ b/internal/service/chime/voice_connector_streaming_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -201,7 +204,7 @@ func testAccCheckVoiceConnectorStreamingExists(ctx context.Context, name string) return fmt.Errorf("no Chime Voice Connector streaming configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) input := &chimesdkvoice.GetVoiceConnectorStreamingConfigurationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } @@ -225,7 +228,7 @@ func testAccCheckVoiceConnectorStreamingDestroy(ctx context.Context) resource.Te if rs.Type != "aws_chime_voice_connector_termination" { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) input := &chimesdkvoice.GetVoiceConnectorStreamingConfigurationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chime/voice_connector_termination.go b/internal/service/chime/voice_connector_termination.go index 714596bd0b3..e02edb0d504 100644 --- 
a/internal/service/chime/voice_connector_termination.go +++ b/internal/service/chime/voice_connector_termination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -71,7 +74,7 @@ func ResourceVoiceConnectorTermination() *schema.Resource { } func resourceVoiceConnectorTerminationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) vcId := d.Get("voice_connector_id").(string) @@ -99,7 +102,7 @@ func resourceVoiceConnectorTerminationCreate(ctx context.Context, d *schema.Reso input.Termination = termination if _, err := conn.PutVoiceConnectorTerminationWithContext(ctx, input); err != nil { - return diag.Errorf("error creating Chime Voice Connector (%s) termination: %s", vcId, err) + return diag.Errorf("creating Chime Voice Connector (%s) termination: %s", vcId, err) } d.SetId(vcId) @@ -108,7 +111,7 @@ func resourceVoiceConnectorTerminationCreate(ctx context.Context, d *schema.Reso } func resourceVoiceConnectorTerminationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorTerminationInput{ VoiceConnectorId: aws.String(d.Id()), @@ -123,11 +126,11 @@ func resourceVoiceConnectorTerminationRead(ctx context.Context, d *schema.Resour } if err != nil { - return diag.Errorf("error getting Chime Voice Connector (%s) termination: %s", d.Id(), err) + return diag.Errorf("getting Chime Voice Connector (%s) termination: %s", d.Id(), err) } if resp == nil || resp.Termination == nil { - return diag.Errorf("error getting Chime Voice Connector (%s) termination: empty response", d.Id()) + return diag.Errorf("getting Chime Voice Connector (%s) termination: empty response", d.Id()) } d.Set("cps_limit", resp.Termination.CpsLimit) @@ 
-135,10 +138,10 @@ func resourceVoiceConnectorTerminationRead(ctx context.Context, d *schema.Resour d.Set("default_phone_number", resp.Termination.DefaultPhoneNumber) if err := d.Set("calling_regions", flex.FlattenStringList(resp.Termination.CallingRegions)); err != nil { - return diag.Errorf("error setting termination calling regions (%s): %s", d.Id(), err) + return diag.Errorf("setting termination calling regions (%s): %s", d.Id(), err) } if err := d.Set("cidr_allow_list", flex.FlattenStringList(resp.Termination.CidrAllowedList)); err != nil { - return diag.Errorf("error setting termination cidr allow list (%s): %s", d.Id(), err) + return diag.Errorf("setting termination cidr allow list (%s): %s", d.Id(), err) } d.Set("voice_connector_id", d.Id()) @@ -147,7 +150,7 @@ func resourceVoiceConnectorTerminationRead(ctx context.Context, d *schema.Resour } func resourceVoiceConnectorTerminationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) if d.HasChanges("calling_regions", "cidr_allow_list", "disabled", "cps_limit", "default_phone_number") { termination := &chime.Termination{ @@ -172,7 +175,7 @@ func resourceVoiceConnectorTerminationUpdate(ctx context.Context, d *schema.Reso _, err := conn.PutVoiceConnectorTerminationWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Chime Voice Connector (%s) termination: %s", d.Id(), err) + return diag.Errorf("updating Chime Voice Connector (%s) termination: %s", d.Id(), err) } } @@ -180,7 +183,7 @@ func resourceVoiceConnectorTerminationUpdate(ctx context.Context, d *schema.Reso } func resourceVoiceConnectorTerminationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.DeleteVoiceConnectorTerminationInput{ VoiceConnectorId: 
aws.String(d.Id()), @@ -193,7 +196,7 @@ func resourceVoiceConnectorTerminationDelete(ctx context.Context, d *schema.Reso } if err != nil { - return diag.Errorf("error deleting Chime Voice Connector termination (%s): %s", d.Id(), err) + return diag.Errorf("deleting Chime Voice Connector termination (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/chime/voice_connector_termination_credentials.go b/internal/service/chime/voice_connector_termination_credentials.go index 493a8202e4e..409a3a2a1ee 100644 --- a/internal/service/chime/voice_connector_termination_credentials.go +++ b/internal/service/chime/voice_connector_termination_credentials.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chime import ( @@ -57,7 +60,7 @@ func ResourceVoiceConnectorTerminationCredentials() *schema.Resource { } func resourceVoiceConnectorTerminationCredentialsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) vcId := d.Get("voice_connector_id").(string) @@ -67,7 +70,7 @@ func resourceVoiceConnectorTerminationCredentialsCreate(ctx context.Context, d * } if _, err := conn.PutVoiceConnectorTerminationCredentialsWithContext(ctx, input); err != nil { - return diag.Errorf("error creating Chime Voice Connector (%s) termination credentials: %s", vcId, err) + return diag.Errorf("creating Chime Voice Connector (%s) termination credentials: %s", vcId, err) } d.SetId(vcId) @@ -76,7 +79,7 @@ func resourceVoiceConnectorTerminationCredentialsCreate(ctx context.Context, d * } func resourceVoiceConnectorTerminationCredentialsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.ListVoiceConnectorTerminationCredentialsInput{ VoiceConnectorId: aws.String(d.Id()), @@ 
-90,7 +93,7 @@ func resourceVoiceConnectorTerminationCredentialsRead(ctx context.Context, d *sc } if err != nil { - return diag.Errorf("error getting Chime Voice Connector (%s) termination credentials: %s", d.Id(), err) + return diag.Errorf("getting Chime Voice Connector (%s) termination credentials: %s", d.Id(), err) } d.Set("voice_connector_id", d.Id()) @@ -99,7 +102,7 @@ func resourceVoiceConnectorTerminationCredentialsRead(ctx context.Context, d *sc } func resourceVoiceConnectorTerminationCredentialsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) if d.HasChanges("credentials") { input := &chime.PutVoiceConnectorTerminationCredentialsInput{ @@ -110,7 +113,7 @@ func resourceVoiceConnectorTerminationCredentialsUpdate(ctx context.Context, d * _, err := conn.PutVoiceConnectorTerminationCredentialsWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Chime Voice Connector (%s) termination credentials: %s", d.Id(), err) + return diag.Errorf("updating Chime Voice Connector (%s) termination credentials: %s", d.Id(), err) } } @@ -118,7 +121,7 @@ func resourceVoiceConnectorTerminationCredentialsUpdate(ctx context.Context, d * } func resourceVoiceConnectorTerminationCredentialsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeConn() + conn := meta.(*conns.AWSClient).ChimeConn(ctx) input := &chime.DeleteVoiceConnectorTerminationCredentialsInput{ VoiceConnectorId: aws.String(d.Id()), @@ -132,7 +135,7 @@ func resourceVoiceConnectorTerminationCredentialsDelete(ctx context.Context, d * } if err != nil { - return diag.Errorf("error deleting Chime Voice Connector (%s) termination credentials: %s", d.Id(), err) + return diag.Errorf("deleting Chime Voice Connector (%s) termination credentials: %s", d.Id(), err) } return nil diff --git 
a/internal/service/chime/voice_connector_termination_credentials_test.go b/internal/service/chime/voice_connector_termination_credentials_test.go index 883e8fdca3b..c75b4def599 100644 --- a/internal/service/chime/voice_connector_termination_credentials_test.go +++ b/internal/service/chime/voice_connector_termination_credentials_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -107,7 +110,7 @@ func testAccCheckVoiceConnectorTerminationCredentialsExists(ctx context.Context, return fmt.Errorf("no Chime Voice Connector termination credentials ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.ListVoiceConnectorTerminationCredentialsInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } @@ -131,7 +134,7 @@ func testAccCheckVoiceConnectorTerminationCredentialsDestroy(ctx context.Context if rs.Type != "aws_chime_voice_connector_termination_credentials" { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.ListVoiceConnectorTerminationCredentialsInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chime/voice_connector_termination_test.go b/internal/service/chime/voice_connector_termination_test.go index 7a2320758a9..b21cd7ebb0b 100644 --- a/internal/service/chime/voice_connector_termination_test.go +++ b/internal/service/chime/voice_connector_termination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -149,7 +152,7 @@ func testAccCheckVoiceConnectorTerminationExists(ctx context.Context, name strin return fmt.Errorf("no Chime Voice Connector termination ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorTerminationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } @@ -173,7 +176,7 @@ func testAccCheckVoiceConnectorTerminationDestroy(ctx context.Context) resource. if rs.Type != "aws_chime_voice_connector_termination" { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorTerminationInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chime/voice_connector_test.go b/internal/service/chime/voice_connector_test.go index 2403124931b..ee2519c0c5d 100644 --- a/internal/service/chime/voice_connector_test.go +++ b/internal/service/chime/voice_connector_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chime_test import ( @@ -6,6 +9,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/service/chime" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" @@ -23,7 +27,10 @@ func TestAccChimeVoiceConnector_basic(t *testing.T) { resourceName := "aws_chime_voice_connector.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, ErrorCheck: acctest.ErrorCheck(t, chime.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVoiceConnectorDestroy(ctx), @@ -54,7 +61,10 @@ func TestAccChimeVoiceConnector_disappears(t *testing.T) { resourceName := "aws_chime_voice_connector.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, ErrorCheck: acctest.ErrorCheck(t, chime.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVoiceConnectorDestroy(ctx), @@ -79,7 +89,10 @@ func TestAccChimeVoiceConnector_update(t *testing.T) { resourceName := "aws_chime_voice_connector.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, ErrorCheck: acctest.ErrorCheck(t, chime.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVoiceConnectorDestroy(ctx), @@ -108,6 +121,61 @@ func TestAccChimeVoiceConnector_update(t *testing.T) { }) } +func TestAccChimeVoiceConnector_tags(t *testing.T) { + ctx := acctest.Context(t) 
+ var voiceConnector *chime.VoiceConnector + + vcName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chime_voice_connector.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + // Legacy chime resources are always created in us-east-1, and the ListTags operation + // can behave unexpectedly when configured with a different region. + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, + ErrorCheck: acctest.ErrorCheck(t, chime.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVoiceConnectorDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVoiceConnectorConfig_tags1(vcName, "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVoiceConnectorExists(ctx, resourceName, voiceConnector), + resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("vc-%s", vcName)), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccVoiceConnectorConfig_tags2(vcName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVoiceConnectorExists(ctx, resourceName, voiceConnector), + resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("vc-%s", vcName)), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccVoiceConnectorConfig_tags1(vcName, "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVoiceConnectorExists(ctx, resourceName, voiceConnector), + resource.TestCheckResourceAttr(resourceName, "name", fmt.Sprintf("vc-%s", vcName)), 
+ resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + func testAccVoiceConnectorConfig_basic(name string) string { return fmt.Sprintf(` resource "aws_chime_voice_connector" "test" { @@ -126,6 +194,33 @@ resource "aws_chime_voice_connector" "test" { `, name) } +func testAccVoiceConnectorConfig_tags1(name, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_chime_voice_connector" "test" { + name = "vc-%s" + require_encryption = true + + tags = { + %[2]q = %[3]q + } +} +`, name, tagKey1, tagValue1) +} + +func testAccVoiceConnectorConfig_tags2(name, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_chime_voice_connector" "test" { + name = "vc-%s" + require_encryption = true + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, name, tagKey1, tagValue1, tagKey2, tagValue2) +} + func testAccCheckVoiceConnectorExists(ctx context.Context, name string, vc *chime.VoiceConnector) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] @@ -137,7 +232,7 @@ func testAccCheckVoiceConnectorExists(ctx context.Context, name string, vc *chim return fmt.Errorf("no Chime voice connector ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } @@ -158,7 +253,7 @@ func testAccCheckVoiceConnectorDestroy(ctx context.Context) resource.TestCheckFu if rs.Type != "aws_chime_voice_connector" { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeConn(ctx) input := &chime.GetVoiceConnectorInput{ VoiceConnectorId: aws.String(rs.Primary.ID), } diff --git a/internal/service/chimesdkmediapipelines/generate.go 
b/internal/service/chimesdkmediapipelines/generate.go index 251698fb753..7ad21965177 100644 --- a/internal/service/chimesdkmediapipelines/generate.go +++ b/internal/service/chimesdkmediapipelines/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package chimesdkmediapipelines diff --git a/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration.go b/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration.go index 00d0e2a5820..41f2c933d23 100644 --- a/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration.go +++ b/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chimesdkmediapipelines import ( @@ -466,7 +469,7 @@ func RealTimeAlertConfigurationSchema() *schema.Schema { } func resourceMediaInsightsPipelineConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) elements, err := expandElements(d.Get("elements").([]interface{})) if err != nil { @@ -478,7 +481,7 @@ func resourceMediaInsightsPipelineConfigurationCreate(ctx context.Context, d *sc MediaInsightsPipelineConfigurationName: aws.String(d.Get("name").(string)), ResourceAccessRoleArn: aws.String(d.Get("resource_access_role_arn").(string)), Elements: elements, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if realTimeAlertConfiguration, ok := d.GetOk("real_time_alert_configuration"); ok && len(realTimeAlertConfiguration.([]interface{})) > 0 { @@ -519,7 +522,7 @@ func resourceMediaInsightsPipelineConfigurationCreate(ctx context.Context, d *sc } func resourceMediaInsightsPipelineConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) out, err := FindMediaInsightsPipelineConfigurationByID(ctx, conn, d.Id()) @@ -550,7 +553,7 @@ func resourceMediaInsightsPipelineConfigurationRead(ctx context.Context, d *sche } func resourceMediaInsightsPipelineConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) if d.HasChangesExcept(names.AttrTags, names.AttrTagsAll) { elements, err := expandElements(d.Get("elements").([]interface{})) @@ -594,7 +597,7 @@ func resourceMediaInsightsPipelineConfigurationUpdate(ctx context.Context, 
d *sc } func resourceMediaInsightsPipelineConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) log.Printf("[INFO] Deleting ChimeSDKMediaPipelines MediaInsightsPipelineConfiguration %s", d.Id()) diff --git a/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration_test.go b/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration_test.go index 04ea77aa140..657b6d05e72 100644 --- a/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration_test.go +++ b/internal/service/chimesdkmediapipelines/media_insights_pipeline_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chimesdkmediapipelines_test import ( @@ -251,7 +254,7 @@ func TestAccChimeSDKMediaPipelinesMediaInsightsPipelineConfiguration_tags(t *tes func testAccCheckMediaInsightsPipelineConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_chimesdkmediapipelines_media_insights_pipeline_configuration" { @@ -287,7 +290,7 @@ func testAccCheckMediaInsightsPipelineConfigurationExists(ctx context.Context, n tfchimesdkmediapipelines.ResNameMediaInsightsPipelineConfiguration, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) resp, err := tfchimesdkmediapipelines.FindMediaInsightsPipelineConfigurationByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.ChimeSDKMediaPipelines, 
create.ErrActionCheckingExistence, @@ -301,7 +304,7 @@ func testAccCheckMediaInsightsPipelineConfigurationExists(ctx context.Context, n } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKMediaPipelinesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx) input := &chimesdkmediapipelines.ListMediaInsightsPipelineConfigurationsInput{} _, err := conn.ListMediaInsightsPipelineConfigurationsWithContext(ctx, input) diff --git a/internal/service/chimesdkmediapipelines/service_package_gen.go b/internal/service/chimesdkmediapipelines/service_package_gen.go index 2989f99fc1e..cd672f57a57 100644 --- a/internal/service/chimesdkmediapipelines/service_package_gen.go +++ b/internal/service/chimesdkmediapipelines/service_package_gen.go @@ -5,6 +5,10 @@ package chimesdkmediapipelines import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + chimesdkmediapipelines_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkmediapipelines" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ChimeSDKMediaPipelines } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*chimesdkmediapipelines_sdkv1.ChimeSDKMediaPipelines, error) { + sess := config["session"].(*session_sdkv1.Session) + + return chimesdkmediapipelines_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/chimesdkmediapipelines/tags_gen.go b/internal/service/chimesdkmediapipelines/tags_gen.go index 9525c258b9f..87f7211aae5 100644 --- a/internal/service/chimesdkmediapipelines/tags_gen.go +++ b/internal/service/chimesdkmediapipelines/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists chimesdkmediapipelines service tags. +// listTags lists chimesdkmediapipelines service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn chimesdkmediapipelinesiface.ChimeSDKMediaPipelinesAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn chimesdkmediapipelinesiface.ChimeSDKMediaPipelinesAPI, identifier string) (tftags.KeyValueTags, error) { input := &chimesdkmediapipelines.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn chimesdkmediapipelinesiface.ChimeSDKMedi // ListTags lists chimesdkmediapipelines service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*chimesdkmediapipelines.Tag) tftag return tftags.New(ctx, m) } -// GetTagsIn returns chimesdkmediapipelines service tags from Context. +// getTagsIn returns chimesdkmediapipelines service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*chimesdkmediapipelines.Tag { +func getTagsIn(ctx context.Context) []*chimesdkmediapipelines.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*chimesdkmediapipelines.Tag { return nil } -// SetTagsOut sets chimesdkmediapipelines service tags in Context. -func SetTagsOut(ctx context.Context, tags []*chimesdkmediapipelines.Tag) { +// setTagsOut sets chimesdkmediapipelines service tags in Context. +func setTagsOut(ctx context.Context, tags []*chimesdkmediapipelines.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates chimesdkmediapipelines service tags. +// updateTags updates chimesdkmediapipelines service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn chimesdkmediapipelinesiface.ChimeSDKMediaPipelinesAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn chimesdkmediapipelinesiface.ChimeSDKMediaPipelinesAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn chimesdkmediapipelinesiface.ChimeSDKMe // UpdateTags updates chimesdkmediapipelines service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ChimeSDKMediaPipelinesConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/chimesdkvoice/generate.go b/internal/service/chimesdkvoice/generate.go index 270b8187d14..2be2e798c78 100644 --- a/internal/service/chimesdkvoice/generate.go +++ b/internal/service/chimesdkvoice/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package chimesdkvoice diff --git a/internal/service/chimesdkvoice/global_settings.go b/internal/service/chimesdkvoice/global_settings.go new file mode 100644 index 00000000000..fe75232113e --- /dev/null +++ b/internal/service/chimesdkvoice/global_settings.go @@ -0,0 +1,148 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package chimesdkvoice + +import ( + "context" + "errors" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/chimesdkvoice" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +const ( + ResNameGlobalSettings = "Global Settings" + globalSettingsPropagationTimeout = time.Second * 10 +) + +// @SDKResource("aws_chimesdkvoice_global_settings") +func ResourceGlobalSettings() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceGlobalSettingsUpdate, + ReadWithoutTimeout: resourceGlobalSettingsRead, + UpdateWithoutTimeout: resourceGlobalSettingsUpdate, + DeleteWithoutTimeout: resourceGlobalSettingsDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Schema: map[string]*schema.Schema{ + "voice_connector": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cdr_bucket": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + }, + } +} + +func resourceGlobalSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + // Include retry handling to allow for propagation of the Global Settings + // logging bucket configuration + var out *chimesdkvoice.GetGlobalSettingsOutput + err := tfresource.Retry(ctx, globalSettingsPropagationTimeout, func() *retry.RetryError { + var getErr error + out, getErr = conn.GetGlobalSettingsWithContext(ctx, 
&chimesdkvoice.GetGlobalSettingsInput{}) + if getErr != nil { + return retry.NonRetryableError(getErr) + } + if out.VoiceConnector == nil { + return retry.RetryableError(tfresource.NewEmptyResultError(&chimesdkvoice.GetGlobalSettingsInput{})) + } + + return nil + }) + + var ere *tfresource.EmptyResultError + if !d.IsNewResource() && errors.As(err, &ere) { + log.Printf("[WARN] ChimeSDKVoice GlobalSettings (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return append(diags, create.DiagError(names.ChimeSDKVoice, create.ErrActionReading, ResNameGlobalSettings, d.Id(), err)...) + } + + d.SetId(meta.(*conns.AWSClient).AccountID) + d.Set("voice_connector", flattenVoiceConnectorSettings(out.VoiceConnector)) + + return diags +} + +func resourceGlobalSettingsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + if d.HasChange("voice_connector") { + input := &chimesdkvoice.UpdateGlobalSettingsInput{ + VoiceConnector: expandVoiceConnectorSettings(d.Get("voice_connector").([]interface{})), + } + + _, err := conn.UpdateGlobalSettingsWithContext(ctx, input) + if err != nil { + return append(diags, create.DiagError(names.ChimeSDKVoice, create.ErrActionUpdating, ResNameGlobalSettings, d.Id(), err)...) + } + } + d.SetId(meta.(*conns.AWSClient).AccountID) + + return append(diags, resourceGlobalSettingsRead(ctx, d, meta)...) 
+} + +func resourceGlobalSettingsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + _, err := conn.UpdateGlobalSettingsWithContext(ctx, &chimesdkvoice.UpdateGlobalSettingsInput{ + VoiceConnector: &chimesdkvoice.VoiceConnectorSettings{}, + }) + if err != nil { + return append(diags, create.DiagError(names.ChimeSDKVoice, create.ErrActionDeleting, ResNameGlobalSettings, d.Id(), err)...) + } + + return diags +} + +func expandVoiceConnectorSettings(tfList []interface{}) *chimesdkvoice.VoiceConnectorSettings { + if len(tfList) == 0 { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + return &chimesdkvoice.VoiceConnectorSettings{ + CdrBucket: aws.String(tfMap["cdr_bucket"].(string)), + } +} + +func flattenVoiceConnectorSettings(apiObject *chimesdkvoice.VoiceConnectorSettings) []interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{ + "cdr_bucket": aws.StringValue(apiObject.CdrBucket), + } + return []interface{}{m} +} diff --git a/internal/service/chimesdkvoice/global_settings_test.go b/internal/service/chimesdkvoice/global_settings_test.go new file mode 100644 index 00000000000..4823a94f7a4 --- /dev/null +++ b/internal/service/chimesdkvoice/global_settings_test.go @@ -0,0 +1,159 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + +package chimesdkvoice_test + +import ( + "context" + "errors" + "fmt" + "testing" + "time" + + "github.com/aws/aws-sdk-go/service/chimesdkvoice" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfchimesdkvoice "github.com/hashicorp/terraform-provider-aws/internal/service/chimesdkvoice" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccChimeSDKVoiceGlobalSettings_serial(t *testing.T) { + t.Parallel() + + testCases := map[string]func(t *testing.T){ + "basic": testAccGlobalSettings_basic, + "disappears": testAccGlobalSettings_disappears, + "update": testAccGlobalSettings_update, + } + + acctest.RunSerialTests1Level(t, testCases, 0) +} + +func testAccGlobalSettings_basic(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chimesdkvoice_global_settings.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, chimesdkvoice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGlobalSettingsDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGlobalSettingsConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "voice_connector.0.cdr_bucket", bucketResourceName, "id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccGlobalSettings_disappears(t *testing.T) { + ctx := 
acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chimesdkvoice_global_settings.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, chimesdkvoice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGlobalSettingsDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGlobalSettingsConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfchimesdkvoice.ResourceGlobalSettings(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccGlobalSettings_update(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chimesdkvoice_global_settings.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, chimesdkvoice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckGlobalSettingsDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGlobalSettingsConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "voice_connector.0.cdr_bucket", bucketResourceName, "id"), + ), + }, + // Note: due to eventual consistency, the read after update can occasionally + // return the previous cdr_bucket name, causing this test to fail. Running + // in us-east-1 produces the most consistent results. 
+ { + Config: testAccGlobalSettingsConfig_basic(rNameUpdated), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "voice_connector.0.cdr_bucket", bucketResourceName, "id"), + ), + }, + }, + }) +} + +func testAccCheckGlobalSettingsDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_chimesdkvoice_global_settings" { + continue + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + input := &chimesdkvoice.GetGlobalSettingsInput{} + + const retryTimeout = 10 * time.Second + response := &chimesdkvoice.GetGlobalSettingsOutput{} + + err := tfresource.Retry(ctx, retryTimeout, func() *retry.RetryError { + var err error + response, err = conn.GetGlobalSettingsWithContext(ctx, input) + if err == nil && response.VoiceConnector != nil && response.VoiceConnector.CdrBucket != nil { + return retry.RetryableError(errors.New("error Chime Voice Connector Global settings still exists")) + } + return nil + }) + + if err != nil { + return fmt.Errorf("error Chime Voice Connector Global settings still exists: %w", err) + } + } + return nil + } +} + +func testAccGlobalSettingsConfig_basic(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_chimesdkvoice_global_settings" "test" { + voice_connector { + cdr_bucket = aws_s3_bucket.test.id + } +} +`, rName) +} diff --git a/internal/service/chimesdkvoice/service_package_gen.go b/internal/service/chimesdkvoice/service_package_gen.go index 98fd34b527d..3fd5a5cfe6a 100644 --- a/internal/service/chimesdkvoice/service_package_gen.go +++ b/internal/service/chimesdkvoice/service_package_gen.go @@ -5,6 +5,10 @@ package chimesdkvoice import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + chimesdkvoice_sdkv1 "github.com/aws/aws-sdk-go/service/chimesdkvoice" +
"github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -25,6 +29,23 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ + { + Factory: ResourceGlobalSettings, + TypeName: "aws_chimesdkvoice_global_settings", + }, + { + Factory: ResourceSipMediaApplication, + TypeName: "aws_chimesdkvoice_sip_media_application", + Name: "Sip Media Application", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: ResourceSipRule, + TypeName: "aws_chimesdkvoice_sip_rule", + Name: "Sip Rule", + }, { Factory: ResourceVoiceProfileDomain, TypeName: "aws_chimesdkvoice_voice_profile_domain", @@ -40,4 +61,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ChimeSDKVoice } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*chimesdkvoice_sdkv1.ChimeSDKVoice, error) { + sess := config["session"].(*session_sdkv1.Session) + + return chimesdkvoice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/chimesdkvoice/sip_media_application.go b/internal/service/chimesdkvoice/sip_media_application.go new file mode 100644 index 00000000000..08a4b461afc --- /dev/null +++ b/internal/service/chimesdkvoice/sip_media_application.go @@ -0,0 +1,180 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package chimesdkvoice + +import ( + "context" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/chimesdkvoice" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_chimesdkvoice_sip_media_application", name="Sip Media Application") +// @Tags(identifierAttribute="arn") +func ResourceSipMediaApplication() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceSipMediaApplicationCreate, + ReadWithoutTimeout: resourceSipMediaApplicationRead, + UpdateWithoutTimeout: resourceSipMediaApplicationUpdate, + DeleteWithoutTimeout: resourceSipMediaApplicationDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_region": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + "endpoints": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "lambda_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + }, + CustomizeDiff: verify.SetTagsDiff, + } +} + +func 
resourceSipMediaApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + createInput := &chimesdkvoice.CreateSipMediaApplicationInput{ + AwsRegion: aws.String(d.Get("aws_region").(string)), + Name: aws.String(d.Get("name").(string)), + Endpoints: expandSipMediaApplicationEndpoints(d.Get("endpoints").([]interface{})), + Tags: getTagsIn(ctx), + } + + resp, err := conn.CreateSipMediaApplicationWithContext(ctx, createInput) + if err != nil || resp.SipMediaApplication == nil { + return sdkdiag.AppendErrorf(diags, "creating Chime Sip Media Application: %s", err) + } + + d.SetId(aws.StringValue(resp.SipMediaApplication.SipMediaApplicationId)) + return append(diags, resourceSipMediaApplicationRead(ctx, d, meta)...) +} + +func resourceSipMediaApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + getInput := &chimesdkvoice.GetSipMediaApplicationInput{ + SipMediaApplicationId: aws.String(d.Id()), + } + + resp, err := conn.GetSipMediaApplicationWithContext(ctx, getInput) + if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, chimesdkvoice.ErrCodeNotFoundException) { + log.Printf("[WARN] Chime Sip Media Application %s not found", d.Id()) + d.SetId("") + return diags + } + + if err != nil || resp.SipMediaApplication == nil { + return sdkdiag.AppendErrorf(diags, "getting Sip Media Application (%s): %s", d.Id(), err) + } + + d.Set("arn", resp.SipMediaApplication.SipMediaApplicationArn) + d.Set("aws_region", resp.SipMediaApplication.AwsRegion) + d.Set("name", resp.SipMediaApplication.Name) + d.Set("endpoints", flattenSipMediaApplicationEndpoints(resp.SipMediaApplication.Endpoints)) + + return diags +} + +func resourceSipMediaApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + if d.HasChanges("name", "endpoints") { + updateInput := &chimesdkvoice.UpdateSipMediaApplicationInput{ + SipMediaApplicationId: aws.String(d.Id()), + Name: aws.String(d.Get("name").(string)), + Endpoints: expandSipMediaApplicationEndpoints(d.Get("endpoints").([]interface{})), + } + + if _, err := conn.UpdateSipMediaApplicationWithContext(ctx, updateInput); err != nil { + return sdkdiag.AppendErrorf(diags, "updating Sip Media Application (%s): %s", d.Id(), err) + } + } + + return append(diags, resourceSipMediaApplicationRead(ctx, d, meta)...) +} + +func resourceSipMediaApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + + input := &chimesdkvoice.DeleteSipMediaApplicationInput{ + SipMediaApplicationId: aws.String(d.Id()), + } + + if _, err := conn.DeleteSipMediaApplicationWithContext(ctx, input); err != nil { + if tfawserr.ErrCodeEquals(err, chimesdkvoice.ErrCodeNotFoundException) { + log.Printf("[WARN] Chime Sip Media Application %s not found", d.Id()) + return diags + } + return sdkdiag.AppendErrorf(diags, "deleting Sip Media Application (%s): %s", d.Id(), err) + } + + return diags +} + +func expandSipMediaApplicationEndpoints(data []interface{}) []*chimesdkvoice.SipMediaApplicationEndpoint { + var sipMediaApplicationEndpoint []*chimesdkvoice.SipMediaApplicationEndpoint + + if len(data) == 0 { + return nil + } + + tfMap, ok := data[0].(map[string]interface{}) + if !ok { + return nil + } + + sipMediaApplicationEndpoint = append(sipMediaApplicationEndpoint, &chimesdkvoice.SipMediaApplicationEndpoint{ + LambdaArn: aws.String(tfMap["lambda_arn"].(string))}) + return sipMediaApplicationEndpoint +} + +func flattenSipMediaApplicationEndpoints(apiObject []*chimesdkvoice.SipMediaApplicationEndpoint) []interface{} { + var rawSipMediaApplicationEndpoints []interface{} + + for _, e := 
range apiObject { + rawEndpoint := map[string]interface{}{ + "lambda_arn": aws.StringValue(e.LambdaArn), + } + rawSipMediaApplicationEndpoints = append(rawSipMediaApplicationEndpoints, rawEndpoint) + } + return rawSipMediaApplicationEndpoints +} diff --git a/internal/service/chimesdkvoice/sip_media_application_test.go b/internal/service/chimesdkvoice/sip_media_application_test.go new file mode 100644 index 00000000000..36948ac10bd --- /dev/null +++ b/internal/service/chimesdkvoice/sip_media_application_test.go @@ -0,0 +1,319 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package chimesdkvoice_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/endpoints" + "github.com/aws/aws-sdk-go/service/chime" + "github.com/aws/aws-sdk-go/service/chimesdkvoice" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfchimesdkvoice "github.com/hashicorp/terraform-provider-aws/internal/service/chimesdkvoice" +) + +func TestAccChimeSDKVoiceSipMediaApplication_basic(t *testing.T) { + ctx := acctest.Context(t) + var chimeSipMediaApplication *chimesdkvoice.SipMediaApplication + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chimesdkvoice_sip_media_application.test" + lambdaFunctionResourceName := "aws_lambda_function.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, + ErrorCheck: acctest.ErrorCheck(t, chimesdkvoice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSipMediaApplicationDestroy(ctx), + Steps: 
[]resource.TestStep{ + { + Config: testAccSipMediaApplicationConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, chimeSipMediaApplication), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "aws_region", endpoints.UsEast1RegionID), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "endpoints.0.lambda_arn", lambdaFunctionResourceName, "arn"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccChimeSDKVoiceSipMediaApplication_disappears(t *testing.T) { + ctx := acctest.Context(t) + var chimeSipMediaApplication *chimesdkvoice.SipMediaApplication + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chimesdkvoice_sip_media_application.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, + ErrorCheck: acctest.ErrorCheck(t, chime.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSipMediaApplicationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSipMediaApplicationConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, chimeSipMediaApplication), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfchimesdkvoice.ResourceSipMediaApplication(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccChimeSDKVoiceSipMediaApplication_update(t *testing.T) { + ctx := acctest.Context(t) + var chimeSipMediaApplication *chimesdkvoice.SipMediaApplication + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := 
"aws_chimesdkvoice_sip_media_application.test" + lambdaFunctionResourceName := "aws_lambda_function.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, + ErrorCheck: acctest.ErrorCheck(t, chimesdkvoice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSipMediaApplicationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSipMediaApplicationConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, chimeSipMediaApplication), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "aws_region", endpoints.UsEast1RegionID), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "endpoints.0.lambda_arn", lambdaFunctionResourceName, "arn"), + ), + }, + { + Config: testAccSipMediaApplicationConfig_basic(rNameUpdated), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, chimeSipMediaApplication), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "aws_region", endpoints.UsEast1RegionID), + resource.TestCheckResourceAttr(resourceName, "name", rNameUpdated), + resource.TestCheckResourceAttrPair(resourceName, "endpoints.0.lambda_arn", lambdaFunctionResourceName, "arn"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccChimeSDKVoiceSipMediaApplication_tags(t *testing.T) { + ctx := acctest.Context(t) + var sipMediaApplication *chimesdkvoice.SipMediaApplication + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_chimesdkvoice_sip_media_application.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: 
func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + }, + ErrorCheck: acctest.ErrorCheck(t, chimesdkvoice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSipMediaApplicationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSipMediaApplicationConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, sipMediaApplication), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccSipMediaApplicationConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, sipMediaApplication), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccSipMediaApplicationConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSipMediaApplicationExists(ctx, resourceName, sipMediaApplication), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckSipMediaApplicationExists(ctx context.Context, name string, vc *chimesdkvoice.SipMediaApplication) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if 
!ok { + return fmt.Errorf("not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("no ChimeSdkVoice Sip Media Application ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + input := &chimesdkvoice.GetSipMediaApplicationInput{ + SipMediaApplicationId: aws.String(rs.Primary.ID), + } + resp, err := conn.GetSipMediaApplicationWithContext(ctx, input) + if err != nil { + return err + } + + vc = resp.SipMediaApplication + + return nil + } +} + +func testAccCheckSipMediaApplicationDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_chimesdkvoice_sip_media_application" { + continue + } + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) + input := &chimesdkvoice.GetSipMediaApplicationInput{ + SipMediaApplicationId: aws.String(rs.Primary.ID), + } + resp, err := conn.GetSipMediaApplicationWithContext(ctx, input) + if err == nil { + if resp.SipMediaApplication != nil && aws.StringValue(resp.SipMediaApplication.Name) != "" { + return fmt.Errorf("error ChimeSdkVoice Sip Media Application still exists") + } + } + return nil + } + return nil + } +} + +func testAccSipMediaApplicationConfigBase(rName string) string { + return fmt.Sprintf(` +data "aws_region" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = < 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*chimesdkvoice.Tag { return nil } -// SetTagsOut sets chimesdkvoice service tags in Context. -func SetTagsOut(ctx context.Context, tags []*chimesdkvoice.Tag) { +// setTagsOut sets chimesdkvoice service tags in Context. +func setTagsOut(ctx context.Context, tags []*chimesdkvoice.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates chimesdkvoice service tags. 
+// updateTags updates chimesdkvoice service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn chimesdkvoiceiface.ChimeSDKVoiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn chimesdkvoiceiface.ChimeSDKVoiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn chimesdkvoiceiface.ChimeSDKVoiceAPI, i // UpdateTags updates chimesdkvoice service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ChimeSDKVoiceConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/chimesdkvoice/test-fixtures/lambdatest.zip b/internal/service/chimesdkvoice/test-fixtures/lambdatest.zip new file mode 100644 index 00000000000..5c636e955b2 Binary files /dev/null and b/internal/service/chimesdkvoice/test-fixtures/lambdatest.zip differ diff --git a/internal/service/chimesdkvoice/voice_profile_domain.go b/internal/service/chimesdkvoice/voice_profile_domain.go index 67ac0160a18..b709441028b 100644 --- a/internal/service/chimesdkvoice/voice_profile_domain.go +++ b/internal/service/chimesdkvoice/voice_profile_domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package chimesdkvoice import ( @@ -91,12 +94,12 @@ func ResourceVoiceProfileDomain() *schema.Resource { } func resourceVoiceProfileDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) in := &chimesdkvoice.CreateVoiceProfileDomainInput{ Name: aws.String(d.Get(names.AttrName).(string)), ServerSideEncryptionConfiguration: expandServerSideEncryptionConfiguration(d.Get("server_side_encryption_configuration").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk(names.AttrDescription); ok { @@ -118,7 +121,7 @@ func resourceVoiceProfileDomainCreate(ctx context.Context, d *schema.ResourceDat } func resourceVoiceProfileDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) out, err := FindVoiceProfileDomainByID(ctx, conn, d.Id()) @@ -145,7 +148,7 @@ func resourceVoiceProfileDomainRead(ctx context.Context, d *schema.ResourceData, } func resourceVoiceProfileDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) if d.HasChanges(names.AttrName, names.AttrDescription) { in := &chimesdkvoice.UpdateVoiceProfileDomainInput{ @@ -169,7 +172,7 @@ func resourceVoiceProfileDomainUpdate(ctx context.Context, d *schema.ResourceDat } func resourceVoiceProfileDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn() + conn := meta.(*conns.AWSClient).ChimeSDKVoiceConn(ctx) log.Printf("[INFO] Deleting ChimeSDKVoice VoiceProfileDomain %s", d.Id()) diff --git 
a/internal/service/chimesdkvoice/voice_profile_domain_test.go b/internal/service/chimesdkvoice/voice_profile_domain_test.go index bdc534281eb..f2e15ce1a93 100644 --- a/internal/service/chimesdkvoice/voice_profile_domain_test.go +++ b/internal/service/chimesdkvoice/voice_profile_domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package chimesdkvoice_test import ( @@ -192,7 +195,7 @@ func testAccVoiceProfileDomain_tags(t *testing.T) { func testAccCheckVoiceProfileDomainDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_chimesdkvoice_voice_profile_domain" { @@ -225,7 +228,7 @@ func testAccCheckVoiceProfileDomainExists(ctx context.Context, name string, voic return create.Error(names.ChimeSDKVoice, create.ErrActionCheckingExistence, tfchimesdkvoice.ResNameVoiceProfileDomain, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) resp, err := tfchimesdkvoice.FindVoiceProfileDomainByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.ChimeSDKVoice, create.ErrActionCheckingExistence, tfchimesdkvoice.ResNameVoiceProfileDomain, rs.Primary.ID, err) @@ -238,7 +241,7 @@ func testAccCheckVoiceProfileDomainExists(ctx context.Context, name string, voic } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ChimeSDKVoiceConn(ctx) input := &chimesdkvoice.ListVoiceProfileDomainsInput{} _, err := conn.ListVoiceProfileDomainsWithContext(ctx, input) diff --git a/internal/service/cleanrooms/collaboration.go 
b/internal/service/cleanrooms/collaboration.go new file mode 100644 index 00000000000..1e457a94be0 --- /dev/null +++ b/internal/service/cleanrooms/collaboration.go @@ -0,0 +1,427 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package cleanrooms + +import ( + "context" + "errors" + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/cleanrooms" + "github.com/aws/aws-sdk-go-v2/service/cleanrooms/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_cleanrooms_collaboration") +// @Tags(identifierAttribute="arn") +func ResourceCollaboration() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceCollaborationCreate, + ReadWithoutTimeout: resourceCollaborationRead, + UpdateWithoutTimeout: resourceCollaborationUpdate, + DeleteWithoutTimeout: resourceCollaborationDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(1 * time.Minute), + Update: schema.DefaultTimeout(1 * time.Minute), + Delete: schema.DefaultTimeout(1 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + names.AttrARN: { + Type: schema.TypeString, + Computed: true, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + }, + "creator_display_name": { + Type: 
schema.TypeString, + ForceNew: true, + Required: true, + }, + "creator_member_abilities": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + names.AttrDescription: { + Type: schema.TypeString, + Required: true, + }, + "data_encryption_metadata": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allow_clear_text": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "allow_duplicates": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "allow_joins_on_columns_with_different_names": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "preserve_nulls": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + }, + }, + }, + names.AttrID: { + Type: schema.TypeString, + Computed: true, + }, + "member": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "account_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "display_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "member_abilities": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + names.AttrName: { + Type: schema.TypeString, + Required: true, + }, + "query_log_status": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "update_time": { + Type: schema.TypeString, + Computed: true, + }, + }, + CustomizeDiff: verify.SetTagsDiff, + } +} + +const ( + ResNameCollaboration = "Collaboration" +) + +func resourceCollaborationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + 
conn := meta.(*conns.AWSClient).CleanRoomsClient(ctx) + + creatorAbilities := d.Get("creator_member_abilities").([]interface{}) + + input := &cleanrooms.CreateCollaborationInput{ + Name: aws.String(d.Get(names.AttrName).(string)), + CreatorDisplayName: aws.String(d.Get("creator_display_name").(string)), + CreatorMemberAbilities: expandMemberAbilities(creatorAbilities), + Members: *expandMembers(d.Get("member").(*schema.Set).List()), + Tags: getTagsIn(ctx), + } + + queryLogStatus, err := expandQueryLogStatus(d.Get("query_log_status").(string)) + if err != nil { + return create.DiagError(names.CleanRooms, create.ErrActionCreating, ResNameCollaboration, d.Get("name").(string), err) + } + input.QueryLogStatus = queryLogStatus + + if v, ok := d.GetOk("data_encryption_metadata"); ok { + input.DataEncryptionMetadata = expandDataEncryptionMetadata(v.([]interface{})) + } + + if v, ok := d.GetOk(names.AttrDescription); ok { + input.Description = aws.String(v.(string)) + } + + out, err := conn.CreateCollaboration(ctx, input) + if err != nil { + return create.DiagError(names.CleanRooms, create.ErrActionCreating, ResNameCollaboration, d.Get("name").(string), err) + } + + if out == nil || out.Collaboration == nil { + return create.DiagError(names.CleanRooms, create.ErrActionCreating, ResNameCollaboration, d.Get("name").(string), errors.New("empty output")) + } + d.SetId(aws.ToString(out.Collaboration.Id)) + + return resourceCollaborationRead(ctx, d, meta) +} + +func resourceCollaborationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).CleanRoomsClient(ctx) + + out, err := findCollaborationByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] CleanRooms Collaboration (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return create.DiagError(names.CleanRooms, create.ErrActionReading, ResNameCollaboration, d.Id(), 
err) + } + + collaboration := out.Collaboration + d.Set(names.AttrARN, collaboration.Arn) + d.Set(names.AttrName, collaboration.Name) + d.Set(names.AttrDescription, collaboration.Description) + d.Set("creator_display_name", collaboration.CreatorDisplayName) + d.Set("create_time", collaboration.CreateTime.String()) + d.Set("update_time", collaboration.UpdateTime.String()) + d.Set("query_log_status", collaboration.QueryLogStatus) + if err := d.Set("data_encryption_metadata", flattenDataEncryptionMetadata(collaboration.DataEncryptionMetadata)); err != nil { + return diag.Errorf("setting data_encryption_metadata: %s", err) + } + + membersOut, err := findMembersByCollaborationId(ctx, conn, d.Id()) + if err != nil { + return create.DiagError(names.CleanRooms, create.ErrActionSetting, ResNameCollaboration, d.Id(), err) + } + + if err := d.Set("member", flattenMembers(membersOut.MemberSummaries, collaboration.CreatorAccountId)); err != nil { + return diag.Errorf("setting member: %s", err) + } + if err := d.Set("creator_member_abilities", flattenCreatorAbilities(membersOut.MemberSummaries, collaboration.CreatorAccountId)); err != nil { + return diag.Errorf("setting creator_member_abilities: %s", err) + } + + return nil +} + +func resourceCollaborationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).CleanRoomsClient(ctx) + + if d.HasChangesExcept("tags", "tags_all") { + input := &cleanrooms.UpdateCollaborationInput{ + CollaborationIdentifier: aws.String(d.Id()), + } + + if d.HasChanges(names.AttrDescription) { + input.Description = aws.String(d.Get(names.AttrDescription).(string)) + } + + if d.HasChanges(names.AttrName) { + input.Name = aws.String(d.Get(names.AttrName).(string)) + } + + _, err := conn.UpdateCollaboration(ctx, input) + if err != nil { + return create.DiagError(names.CleanRooms, create.ErrActionUpdating, ResNameCollaboration, d.Id(), err) + } + } + + return 
append(diags, resourceCollaborationRead(ctx, d, meta)...) +} + +func resourceCollaborationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).CleanRoomsClient(ctx) + + log.Printf("[INFO] Deleting CleanRooms Collaboration %s", d.Id()) + _, err := conn.DeleteCollaboration(ctx, &cleanrooms.DeleteCollaborationInput{ + CollaborationIdentifier: aws.String(d.Id()), + }) + + if errs.IsA[*types.AccessDeniedException](err) { + return nil + } + + if err != nil { + return create.DiagError(names.CleanRooms, create.ErrActionDeleting, ResNameCollaboration, d.Id(), err) + } + + return nil +} + +func findCollaborationByID(ctx context.Context, conn *cleanrooms.Client, id string) (*cleanrooms.GetCollaborationOutput, error) { + in := &cleanrooms.GetCollaborationInput{ + CollaborationIdentifier: aws.String(id), + } + out, err := conn.GetCollaboration(ctx, in) + + if errs.IsA[*types.AccessDeniedException](err) { + //We throw Access Denied for NFE in Cleanrooms for collaborations since they are cross account + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.Collaboration == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} + +func findMembersByCollaborationId(ctx context.Context, conn *cleanrooms.Client, id string) (*cleanrooms.ListMembersOutput, error) { + in := &cleanrooms.ListMembersInput{ + CollaborationIdentifier: aws.String(id), + } + out, err := conn.ListMembers(ctx, in) + + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.MemberSummaries == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} + +func expandMemberAbilities(data []interface{}) []types.MemberAbility { + mappedAbilities := 
[]types.MemberAbility{} + for _, v := range data { + switch v.(string) { + case "CAN_QUERY": + mappedAbilities = append(mappedAbilities, types.MemberAbilityCanQuery) + case "CAN_RECEIVE_RESULTS": + mappedAbilities = append(mappedAbilities, types.MemberAbilityCanReceiveResults) + } + } + return mappedAbilities +} + +func expandQueryLogStatus(status string) (types.CollaborationQueryLogStatus, error) { + switch status { + case "ENABLED": + return types.CollaborationQueryLogStatusEnabled, nil + case "DISABLED": + return types.CollaborationQueryLogStatusDisabled, nil + default: + return types.CollaborationQueryLogStatusDisabled, fmt.Errorf("Invalid query log status type: %s", status) + } +} + +func expandDataEncryptionMetadata(data []interface{}) *types.DataEncryptionMetadata { + dataEncryptionMetadata := &types.DataEncryptionMetadata{} + if len(data) > 0 { + metadata := data[0].(map[string]interface{}) + dataEncryptionMetadata.PreserveNulls = aws.Bool(metadata["preserve_nulls"].(bool)) + dataEncryptionMetadata.AllowCleartext = aws.Bool(metadata["allow_clear_text"].(bool)) + dataEncryptionMetadata.AllowJoinsOnColumnsWithDifferentNames = aws.Bool(metadata["allow_joins_on_columns_with_different_names"].(bool)) + dataEncryptionMetadata.AllowDuplicates = aws.Bool(metadata["allow_duplicates"].(bool)) + } + return dataEncryptionMetadata +} + +func expandMembers(data []interface{}) *[]types.MemberSpecification { + members := []types.MemberSpecification{} + for _, member := range data { + memberMap := member.(map[string]interface{}) + member := &types.MemberSpecification{ + AccountId: aws.String(memberMap["account_id"].(string)), + MemberAbilities: expandMemberAbilities(memberMap["member_abilities"].([]interface{})), + DisplayName: aws.String(memberMap["display_name"].(string)), + } + members = append(members, *member) + } + return &members +} + +func flattenDataEncryptionMetadata(dataEncryptionMetadata *types.DataEncryptionMetadata) []interface{} { + if dataEncryptionMetadata 
== nil { + return nil + } + m := map[string]interface{}{} + m["preserve_nulls"] = aws.ToBool(dataEncryptionMetadata.PreserveNulls) + m["allow_clear_text"] = aws.ToBool(dataEncryptionMetadata.AllowCleartext) + m["allow_joins_on_columns_with_different_names"] = aws.ToBool(dataEncryptionMetadata.AllowJoinsOnColumnsWithDifferentNames) + m["allow_duplicates"] = aws.ToBool(dataEncryptionMetadata.AllowDuplicates) + return []interface{}{m} +} + +func flattenMembers(members []types.MemberSummary, ownerAccount *string) []interface{} { + flattenedMembers := []interface{}{} + for _, member := range members { + if aws.ToString(member.AccountId) != aws.ToString(ownerAccount) { + memberMap := map[string]interface{}{} + memberMap["status"] = string(member.Status) + memberMap["account_id"] = aws.ToString(member.AccountId) + memberMap["display_name"] = aws.ToString(member.DisplayName) + memberMap["member_abilities"] = flattenMemberAbilities(member.Abilities) + flattenedMembers = append(flattenedMembers, memberMap) + } + } + return flattenedMembers +} + +func flattenCreatorAbilities(members []types.MemberSummary, ownerAccount *string) []string { + flattenedAbilities := []string{} + for _, member := range members { + if aws.ToString(member.AccountId) == aws.ToString(ownerAccount) { + return flattenMemberAbilities(member.Abilities) + } + } + return flattenedAbilities +} + +func flattenMemberAbilities(abilities []types.MemberAbility) []string { + flattenedAbilities := []string{} + for _, ability := range abilities { + flattenedAbilities = append(flattenedAbilities, string(ability)) + } + return flattenedAbilities +} diff --git a/internal/service/cleanrooms/collaboration_test.go b/internal/service/cleanrooms/collaboration_test.go new file mode 100644 index 00000000000..eeeec2ef9fc --- /dev/null +++ b/internal/service/cleanrooms/collaboration_test.go @@ -0,0 +1,539 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + +package cleanrooms_test + +import ( + "context" + "errors" + "fmt" + "reflect" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/cleanrooms" + "github.com/aws/aws-sdk-go-v2/service/cleanrooms/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfcleanrooms "github.com/hashicorp/terraform-provider-aws/internal/service/cleanrooms" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccCleanRoomsCollaboration_basic(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_basic(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "name", TEST_NAME), + resource.TestCheckResourceAttr(resourceName, "description", TEST_DESCRIPTION), + resource.TestCheckResourceAttr(resourceName, "query_log_status", TEST_QUERY_LOG_STATUS), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "data_encryption_metadata.*", map[string]string{ + "allow_clear_text": "true", + "allow_duplicates": "true", + 
"allow_joins_on_columns_with_different_names": "true", + "preserve_nulls": "false", + }), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "cleanrooms", regexp.MustCompile(`collaboration:*`)), + testCheckCreatorMember(ctx, resourceName), + testAccCollaborationTags(ctx, resourceName, map[string]string{ + "Project": TEST_TAG, + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "user"}, + }, + }, + }) +} + +func TestAccCleanRoomsCollaboration_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_basic(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfcleanrooms.ResourceCollaboration(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccCleanRoomsCollaboration_mutableProperties(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + updatedName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), 
+ Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_basic(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + ), + }, + { + Config: testAccCollaborationConfig_basic(updatedName, "updated Description", "Not Terraform"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationIsTheSame(resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "name", updatedName), + resource.TestCheckResourceAttr(resourceName, "description", "updated Description"), + testAccCollaborationTags(ctx, resourceName, map[string]string{ + "Project": "Not Terraform", + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "user"}, + }, + }, + }) +} + +func TestAccCleanRoomsCollaboration_updateCreatorDisplayName(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_basic(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + ), + }, + { + Config: testAccCollaborationConfig_creatorDisplayName(TEST_NAME, TEST_DESCRIPTION, TEST_TAG, "updatedName"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationRecreated(resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "creator_display_name", "updatedName"), + ), + }, + { + ResourceName: resourceName, + 
ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "user"}, + }, + }, + }) +} +func TestAccCleanRoomsCollaboration_updateQueryLogStatus(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_basic(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + ), + }, + { + Config: testAccCollaborationConfig_queryLogStatus(TEST_NAME, TEST_DESCRIPTION, TEST_TAG, "ENABLED"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationRecreated(resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "query_log_status", "ENABLED"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "user"}, + }, + }, + }) +} +func TestAccCleanRoomsCollaboration_dataEncryptionSettings(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_basic(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + ), + }, + { + Config: testAccCollaborationConfig_updatedDataEncryptionSettings(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationRecreated(resourceName, &collaboration), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "data_encryption_metadata.*", map[string]string{ + "allow_clear_text": "true", + "allow_duplicates": "true", + "allow_joins_on_columns_with_different_names": "true", + "preserve_nulls": "true", + }), + ), + }, + { + Config: testAccCollaborationConfig_noDataEncryptionSettings(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationRecreated(resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "data_encryption_metadata.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "user"}, + }, + }, + }) +} + +func TestAccCleanRoomsCollaboration_updateMemberAbilities(t *testing.T) { + ctx := acctest.Context(t) + + var collaboration cleanrooms.GetCollaborationOutput + resourceName := "aws_cleanrooms_collaboration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.CleanRoomsEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollaborationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollaborationConfig_additionalMember(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationExists(ctx, resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "member.0.account_id", "123456789012"), + resource.TestCheckResourceAttr(resourceName, "member.0.display_name", 
"OtherMember"), + resource.TestCheckResourceAttr(resourceName, "member.0.status", "INVITED"), + resource.TestCheckResourceAttr(resourceName, "member.0.member_abilities.#", "0"), + ), + }, + { + Config: testAccCollaborationConfig_swapMemberAbilities(TEST_NAME, TEST_DESCRIPTION, TEST_TAG), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollaborationRecreated(resourceName, &collaboration), + resource.TestCheckResourceAttr(resourceName, "creator_member_abilities.#", "0"), + resource.TestCheckResourceAttr(resourceName, "member.0.member_abilities.#", "2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately", "user"}, + }, + }, + }) +} + +func testAccCheckCollaborationDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).CleanRoomsClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cleanrooms_collaboration" { + continue + } + + _, err := conn.GetCollaboration(ctx, &cleanrooms.GetCollaborationInput{ + CollaborationIdentifier: aws.String(rs.Primary.ID), + }) + if err != nil { + // We throw access denied exceptions for Not Found Collaboration since they are cross account resources + var nfe *types.AccessDeniedException + if errors.As(err, &nfe) { + return nil + } + return err + } + + return create.Error(names.CleanRooms, create.ErrActionCheckingDestroyed, tfcleanrooms.ResNameCollaboration, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckCollaborationExists(ctx context.Context, name string, collaboration *cleanrooms.GetCollaborationOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.CleanRooms, create.ErrActionCheckingExistence, tfcleanrooms.ResNameCollaboration, name, errors.New("not found")) + } + + if 
rs.Primary.ID == "" { + return create.Error(names.CleanRooms, create.ErrActionCheckingExistence, tfcleanrooms.ResNameCollaboration, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).CleanRoomsClient(ctx) + resp, err := conn.GetCollaboration(ctx, &cleanrooms.GetCollaborationInput{ + CollaborationIdentifier: aws.String(rs.Primary.ID), + }) + + if err != nil { + return create.Error(names.CleanRooms, create.ErrActionCheckingExistence, tfcleanrooms.ResNameCollaboration, rs.Primary.ID, err) + } + + *collaboration = *resp + + return nil + } +} + +func testAccCheckCollaborationIsTheSame(name string, collaboration *cleanrooms.GetCollaborationOutput) resource.TestCheckFunc { + return func(state *terraform.State) error { + return checkCollaborationIsSame(name, collaboration, state) + } +} + +func testAccCheckCollaborationRecreated(name string, collaboration *cleanrooms.GetCollaborationOutput) resource.TestCheckFunc { + return func(state *terraform.State) error { + err := checkCollaborationIsSame(name, collaboration, state) + if err == nil { + return fmt.Errorf("Collaboration was expected to be recreated but was updated") + } + return nil + } +} + +func checkCollaborationIsSame(name string, collaboration *cleanrooms.GetCollaborationOutput, s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.CleanRooms, create.ErrActionCheckingExistence, tfcleanrooms.ResNameCollaboration, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.CleanRooms, create.ErrActionCheckingExistence, tfcleanrooms.ResNameCollaboration, name, errors.New("not set")) + } + if rs.Primary.ID != *collaboration.Collaboration.Id { + return fmt.Errorf("New collaboration: %s created instead of updating: %s", rs.Primary.ID, *collaboration.Collaboration.Id) + } + return nil +} + +func testAccPreCheck(ctx context.Context, t *testing.T) { + conn := 
acctest.Provider.Meta().(*conns.AWSClient).CleanRoomsClient(ctx) + + input := &cleanrooms.ListCollaborationsInput{} + _, err := conn.ListCollaborations(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testCheckCreatorMember(ctx context.Context, name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).CleanRoomsClient(ctx) + collaboration, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Collaboration: %s not found in resources", name) + } + membersOut, err := conn.ListMembers(ctx, &cleanrooms.ListMembersInput{ + CollaborationIdentifier: &collaboration.Primary.ID, + }) + if err != nil { + return err + } + if len(membersOut.MemberSummaries) != 1 { + return fmt.Errorf("Expected 1 member but found %d", len(membersOut.MemberSummaries)) + } + member := membersOut.MemberSummaries[0] + if *member.AccountId != acctest.AccountID() { + return fmt.Errorf("Member account id: %s does not match expected value: %s", *member.AccountId, acctest.AccountID()) + } + if member.Status != types.MemberStatusInvited { + return fmt.Errorf("Member status: %s does not match expected value", member.Status) + } + if *member.DisplayName != "creator" { + return fmt.Errorf("Member display name: %s does not match expected value: creator", *member.DisplayName) + } + expectedAbilities := []types.MemberAbility{types.MemberAbilityCanQuery, types.MemberAbilityCanReceiveResults} + if !reflect.DeepEqual(member.Abilities, expectedAbilities) { + return fmt.Errorf("Member abilities: %s do not match expected values: %s", member.Abilities, expectedAbilities) + } + + return nil + } +} + +func testAccCollaborationTags(ctx context.Context, name string, expectedTags map[string]string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).CleanRoomsClient(ctx) + collaboration, ok := s.RootModule().Resources[name] + 
if !ok { + return fmt.Errorf("Collaboration: %s not found in resources", name) + } + tagsOut, err := conn.ListTagsForResource(ctx, &cleanrooms.ListTagsForResourceInput{ + ResourceArn: aws.String(collaboration.Primary.Attributes["arn"]), + }) + if err != nil { + return err + } + if !reflect.DeepEqual(expectedTags, tagsOut.Tags) { + return fmt.Errorf("Actual tags do not match expected") + } + return nil + } +} + +const TEST_NAME = "name" +const TEST_DESCRIPTION = "description" +const TEST_TAG = "Terraform" +const TEST_MEMBER_ABILITIES = "[\"CAN_QUERY\", \"CAN_RECEIVE_RESULTS\"]" +const TEST_CREATOR_DISPLAY_NAME = "creator" +const TEST_QUERY_LOG_STATUS = "DISABLED" +const TEST_DATA_ENCRYPTION_SETTINGS = ` + data_encryption_metadata { + allow_clear_text = true + allow_duplicates = true + allow_joins_on_columns_with_different_names = true + preserve_nulls = false + } +` +const TEST_ADDITIONAL_MEMBER = ` +member { + account_id = 123456789012 + display_name = "OtherMember" + member_abilities = [] + } +` + +func testAccCollaborationConfig_basic(rName string, description string, tagValue string) string { + return testAccCollaboration_configurable(rName, description, tagValue, TEST_MEMBER_ABILITIES, + TEST_CREATOR_DISPLAY_NAME, TEST_QUERY_LOG_STATUS, TEST_DATA_ENCRYPTION_SETTINGS, "") +} + +func testAccCollaborationConfig_additionalMember(rName string, description string, tagValue string) string { + return testAccCollaboration_configurable(rName, description, tagValue, TEST_MEMBER_ABILITIES, + TEST_CREATOR_DISPLAY_NAME, TEST_QUERY_LOG_STATUS, TEST_DATA_ENCRYPTION_SETTINGS, TEST_ADDITIONAL_MEMBER) +} + +func testAccCollaborationConfig_swapMemberAbilities(rName string, description string, tagValue string) string { + additionalMember := ` + member { + account_id = 123456789012 + display_name = "OtherMember" + member_abilities = ["CAN_QUERY", "CAN_RECEIVE_RESULTS"] + } + ` + + return testAccCollaboration_configurable(rName, description, tagValue, "[]", + 
TEST_CREATOR_DISPLAY_NAME, TEST_QUERY_LOG_STATUS, TEST_DATA_ENCRYPTION_SETTINGS, additionalMember) +} + +func testAccCollaborationConfig_creatorDisplayName(name string, description string, tagValue string, creatorDisplayName string) string { + return testAccCollaboration_configurable(name, description, tagValue, TEST_MEMBER_ABILITIES, + creatorDisplayName, TEST_QUERY_LOG_STATUS, TEST_DATA_ENCRYPTION_SETTINGS, "") +} + +func testAccCollaborationConfig_queryLogStatus(rName string, description string, tagValue string, queryLogStatus string) string { + return testAccCollaboration_configurable(rName, description, tagValue, TEST_MEMBER_ABILITIES, + TEST_CREATOR_DISPLAY_NAME, queryLogStatus, TEST_DATA_ENCRYPTION_SETTINGS, "") +} + +func testAccCollaborationConfig_updatedDataEncryptionSettings(name string, description string, tagValue string) string { + encryptionSettings := ` + data_encryption_metadata { + allow_clear_text = true + allow_duplicates = true + allow_joins_on_columns_with_different_names = true + preserve_nulls = true + } + ` + return testAccCollaboration_configurable(name, description, tagValue, TEST_MEMBER_ABILITIES, + TEST_CREATOR_DISPLAY_NAME, TEST_QUERY_LOG_STATUS, encryptionSettings, "") +} + +func testAccCollaborationConfig_noDataEncryptionSettings(name string, description string, tagValue string) string { + return testAccCollaboration_configurable(name, description, tagValue, TEST_MEMBER_ABILITIES, + TEST_CREATOR_DISPLAY_NAME, TEST_QUERY_LOG_STATUS, "", "") +} + +func testAccCollaboration_configurable(name string, description string, tagValue string, + creatorMemberAbilities string, creatorDisplayName string, queryLogStatus string, + dataEncryptionMetadata string, additionalMember string) string { + return fmt.Sprintf(` +resource "aws_cleanrooms_collaboration" "test" { + name = %[1]q + creator_member_abilities = %[4]s + creator_display_name = %[5]q + description = %[2]q + query_log_status = %[6]q + + %[7]s + + %[8]s + + tags = { + Project = %[3]q + } 
+} + + + `, name, description, tagValue, creatorMemberAbilities, creatorDisplayName, queryLogStatus, + dataEncryptionMetadata, additionalMember) +} diff --git a/internal/service/cleanrooms/generate.go b/internal/service/cleanrooms/generate.go new file mode 100644 index 00000000000..a71b8f0cd26 --- /dev/null +++ b/internal/service/cleanrooms/generate.go @@ -0,0 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsMap -UpdateTags -TagTypeKeyElem=key -TagTypeValElem=value -KVTValues -SkipTypesImp +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package cleanrooms diff --git a/internal/service/cleanrooms/service_package_gen.go b/internal/service/cleanrooms/service_package_gen.go index 81ea466a8c1..db30827902e 100644 --- a/internal/service/cleanrooms/service_package_gen.go +++ b/internal/service/cleanrooms/service_package_gen.go @@ -5,6 +5,9 @@ package cleanrooms import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + cleanrooms_sdkv2 "github.com/aws/aws-sdk-go-v2/service/cleanrooms" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -24,11 +27,32 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac } func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { - return []*types.ServicePackageSDKResource{} + return []*types.ServicePackageSDKResource{ + { + Factory: ResourceCollaboration, + TypeName: "aws_cleanrooms_collaboration", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + } } func (p *servicePackage) ServicePackageName() string { return names.CleanRooms } -var ServicePackage = &servicePackage{} +// 
NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*cleanrooms_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return cleanrooms_sdkv2.NewFromConfig(cfg, func(o *cleanrooms_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = cleanrooms_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cleanrooms/tags_gen.go b/internal/service/cleanrooms/tags_gen.go new file mode 100644 index 00000000000..8d6b3862815 --- /dev/null +++ b/internal/service/cleanrooms/tags_gen.go @@ -0,0 +1,124 @@ +// Code generated by internal/generate/tags/main.go; DO NOT EDIT. +package cleanrooms + +import ( + "context" + "fmt" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/cleanrooms" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// listTags lists cleanrooms service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func listTags(ctx context.Context, conn *cleanrooms.Client, identifier string) (tftags.KeyValueTags, error) { + input := &cleanrooms.ListTagsForResourceInput{ + ResourceArn: aws.String(identifier), + } + + output, err := conn.ListTagsForResource(ctx, input) + + if err != nil { + return tftags.New(ctx, nil), err + } + + return KeyValueTags(ctx, output.Tags), nil +} + +// ListTags lists cleanrooms service tags and set them in Context. +// It is called from outside this package. 
+func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { + tags, err := listTags(ctx, meta.(*conns.AWSClient).CleanRoomsClient(ctx), identifier) + + if err != nil { + return err + } + + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(tags) + } + + return nil +} + +// map[string]string handling + +// Tags returns cleanrooms service tags. +func Tags(tags tftags.KeyValueTags) map[string]string { + return tags.Map() +} + +// KeyValueTags creates tftags.KeyValueTags from cleanrooms service tags. +func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { + return tftags.New(ctx, tags) +} + +// getTagsIn returns cleanrooms service tags from Context. +// nil is returned if there are no input tags. +func getTagsIn(ctx context.Context) map[string]string { + if inContext, ok := tftags.FromContext(ctx); ok { + if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + return tags + } + } + + return nil +} + +// setTagsOut sets cleanrooms service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + } +} + +// updateTags updates cleanrooms service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. 
+func updateTags(ctx context.Context, conn *cleanrooms.Client, identifier string, oldTagsMap, newTagsMap any) error { + oldTags := tftags.New(ctx, oldTagsMap) + newTags := tftags.New(ctx, newTagsMap) + + removedTags := oldTags.Removed(newTags) + removedTags = removedTags.IgnoreSystem(names.CleanRooms) + if len(removedTags) > 0 { + input := &cleanrooms.UntagResourceInput{ + ResourceArn: aws.String(identifier), + TagKeys: removedTags.Keys(), + } + + _, err := conn.UntagResource(ctx, input) + + if err != nil { + return fmt.Errorf("untagging resource (%s): %w", identifier, err) + } + } + + updatedTags := oldTags.Updated(newTags) + updatedTags = updatedTags.IgnoreSystem(names.CleanRooms) + if len(updatedTags) > 0 { + input := &cleanrooms.TagResourceInput{ + ResourceArn: aws.String(identifier), + Tags: Tags(updatedTags), + } + + _, err := conn.TagResource(ctx, input) + + if err != nil { + return fmt.Errorf("tagging resource (%s): %w", identifier, err) + } + } + + return nil +} + +// UpdateTags updates cleanrooms service tags. +// It is called from outside this package. +func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { + return updateTags(ctx, meta.(*conns.AWSClient).CleanRoomsClient(ctx), identifier, oldTags, newTags) +} diff --git a/internal/service/cloud9/consts.go b/internal/service/cloud9/consts.go index b697a543a48..e69b8ed2222 100644 --- a/internal/service/cloud9/consts.go +++ b/internal/service/cloud9/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloud9 import ( diff --git a/internal/service/cloud9/environment_ec2.go b/internal/service/cloud9/environment_ec2.go index bdc3aed03b9..882ab28940e 100644 --- a/internal/service/cloud9/environment_ec2.go +++ b/internal/service/cloud9/environment_ec2.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloud9 import ( @@ -105,7 +108,7 @@ func ResourceEnvironmentEC2() *schema.Resource { func resourceEnvironmentEC2Create(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) name := d.Get("name").(string) input := &cloud9.CreateEnvironmentEC2Input{ @@ -113,7 +116,7 @@ func resourceEnvironmentEC2Create(ctx context.Context, d *schema.ResourceData, m ConnectionType: aws.String(d.Get("connection_type").(string)), InstanceType: aws.String(d.Get("instance_type").(string)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("automatic_stop_time_minutes"); ok { @@ -171,7 +174,7 @@ func resourceEnvironmentEC2Create(ctx context.Context, d *schema.ResourceData, m func resourceEnvironmentEC2Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) env, err := FindEnvironmentByID(ctx, conn, d.Id()) @@ -198,7 +201,7 @@ func resourceEnvironmentEC2Read(ctx context.Context, d *schema.ResourceData, met func resourceEnvironmentEC2Update(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := cloud9.UpdateEnvironmentInput{ @@ -220,7 +223,7 @@ func resourceEnvironmentEC2Update(ctx context.Context, d *schema.ResourceData, m func resourceEnvironmentEC2Delete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) log.Printf("[INFO] Deleting Cloud9 EC2 
Environment: %s", d.Id()) _, err := conn.DeleteEnvironmentWithContext(ctx, &cloud9.DeleteEnvironmentInput{ diff --git a/internal/service/cloud9/environment_ec2_test.go b/internal/service/cloud9/environment_ec2_test.go index 478a2e8a307..c90bd2de0fc 100644 --- a/internal/service/cloud9/environment_ec2_test.go +++ b/internal/service/cloud9/environment_ec2_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloud9_test import ( @@ -186,7 +189,7 @@ func testAccCheckEnvironmentEC2Exists(ctx context.Context, n string, v *cloud9.E return fmt.Errorf("No Cloud9 Environment EC2 ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn(ctx) output, err := tfcloud9.FindEnvironmentByID(ctx, conn, rs.Primary.ID) @@ -202,7 +205,7 @@ func testAccCheckEnvironmentEC2Exists(ctx context.Context, n string, v *cloud9.E func testAccCheckEnvironmentEC2Destroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloud9_environment_ec2" { diff --git a/internal/service/cloud9/environment_membership.go b/internal/service/cloud9/environment_membership.go index 2b15bdd8b9c..db373ee8a2b 100644 --- a/internal/service/cloud9/environment_membership.go +++ b/internal/service/cloud9/environment_membership.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloud9 import ( @@ -57,7 +60,7 @@ func ResourceEnvironmentMembership() *schema.Resource { func resourceEnvironmentMembershipCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) envId := d.Get("environment_id").(string) userArn := d.Get("user_arn").(string) @@ -80,7 +83,7 @@ func resourceEnvironmentMembershipCreate(ctx context.Context, d *schema.Resource func resourceEnvironmentMembershipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) envId, userArn, err := DecodeEnviornmentMemberId(d.Id()) if err != nil { @@ -109,7 +112,7 @@ func resourceEnvironmentMembershipRead(ctx context.Context, d *schema.ResourceDa func resourceEnvironmentMembershipUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) input := cloud9.UpdateEnvironmentMembershipInput{ EnvironmentId: aws.String(d.Get("environment_id").(string)), @@ -129,7 +132,7 @@ func resourceEnvironmentMembershipUpdate(ctx context.Context, d *schema.Resource func resourceEnvironmentMembershipDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Cloud9Conn() + conn := meta.(*conns.AWSClient).Cloud9Conn(ctx) _, err := conn.DeleteEnvironmentMembershipWithContext(ctx, &cloud9.DeleteEnvironmentMembershipInput{ EnvironmentId: aws.String(d.Get("environment_id").(string)), diff --git a/internal/service/cloud9/environment_membership_test.go b/internal/service/cloud9/environment_membership_test.go index 
f6642d56228..da54ab2fde3 100644 --- a/internal/service/cloud9/environment_membership_test.go +++ b/internal/service/cloud9/environment_membership_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloud9_test import ( @@ -118,7 +121,7 @@ func testAccCheckEnvironmentMemberExists(ctx context.Context, n string, res *clo return fmt.Errorf("No Cloud9 Environment Member ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn(ctx) envId, userArn, err := tfcloud9.DecodeEnviornmentMemberId(rs.Primary.ID) if err != nil { @@ -138,7 +141,7 @@ func testAccCheckEnvironmentMemberExists(ctx context.Context, n string, res *clo func testAccCheckEnvironmentMemberDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Cloud9Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloud9_environment_membership" { diff --git a/internal/service/cloud9/find.go b/internal/service/cloud9/find.go index d8c65262208..5a13c3f95d2 100644 --- a/internal/service/cloud9/find.go +++ b/internal/service/cloud9/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloud9 import ( diff --git a/internal/service/cloud9/generate.go b/internal/service/cloud9/generate.go index f0fe60221ad..6c2e0a708f7 100644 --- a/internal/service/cloud9/generate.go +++ b/internal/service/cloud9/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! 
Do not add anything else to this file. package cloud9 diff --git a/internal/service/cloud9/service_package_gen.go b/internal/service/cloud9/service_package_gen.go index 12409d44c53..a399df9b48a 100644 --- a/internal/service/cloud9/service_package_gen.go +++ b/internal/service/cloud9/service_package_gen.go @@ -5,6 +5,10 @@ package cloud9 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloud9_sdkv1 "github.com/aws/aws-sdk-go/service/cloud9" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +48,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Cloud9 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloud9_sdkv1.Cloud9, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloud9_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloud9/status.go b/internal/service/cloud9/status.go index 0187562b8c0..ceef6094ce8 100644 --- a/internal/service/cloud9/status.go +++ b/internal/service/cloud9/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloud9 import ( diff --git a/internal/service/cloud9/sweep.go b/internal/service/cloud9/sweep.go index 3705252f984..6ba1782ec87 100644 --- a/internal/service/cloud9/sweep.go +++ b/internal/service/cloud9/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cloud9" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepEnvironmentEC2s(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Cloud9Conn() + conn := client.Cloud9Conn(ctx) input := &cloud9.ListEnvironmentsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -56,7 +58,7 @@ func sweepEnvironmentEC2s(region string) error { return fmt.Errorf("error listing Cloud9 EC2 Environments (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Cloud9 EC2 Environments (%s): %w", region, err) diff --git a/internal/service/cloud9/tags_gen.go b/internal/service/cloud9/tags_gen.go index 45e8acb061f..795a361cb24 100644 --- a/internal/service/cloud9/tags_gen.go +++ b/internal/service/cloud9/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists cloud9 service tags. +// listTags lists cloud9 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn cloud9iface.Cloud9API, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn cloud9iface.Cloud9API, identifier string) (tftags.KeyValueTags, error) { input := &cloud9.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cloud9iface.Cloud9API, identifier string // ListTags lists cloud9 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).Cloud9Conn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).Cloud9Conn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*cloud9.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns cloud9 service tags from Context. +// getTagsIn returns cloud9 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*cloud9.Tag { +func getTagsIn(ctx context.Context) []*cloud9.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*cloud9.Tag { return nil } -// SetTagsOut sets cloud9 service tags in Context. -func SetTagsOut(ctx context.Context, tags []*cloud9.Tag) { +// setTagsOut sets cloud9 service tags in Context. +func setTagsOut(ctx context.Context, tags []*cloud9.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates cloud9 service tags. +// updateTags updates cloud9 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn cloud9iface.Cloud9API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloud9iface.Cloud9API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn cloud9iface.Cloud9API, identifier stri // UpdateTags updates cloud9 service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).Cloud9Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).Cloud9Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cloud9/wait.go b/internal/service/cloud9/wait.go index 94008508411..634191871c0 100644 --- a/internal/service/cloud9/wait.go +++ b/internal/service/cloud9/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloud9 import ( diff --git a/internal/service/cloudcontrol/generate.go b/internal/service/cloudcontrol/generate.go new file mode 100644 index 00000000000..8cf40119c93 --- /dev/null +++ b/internal/service/cloudcontrol/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package cloudcontrol diff --git a/internal/service/cloudcontrol/resource.go b/internal/service/cloudcontrol/resource.go index 97e181a6338..bb64db598ac 100644 --- a/internal/service/cloudcontrol/resource.go +++ b/internal/service/cloudcontrol/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudcontrol import ( @@ -82,7 +85,7 @@ func ResourceResource() *schema.Resource { } func resourceResourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudControlClient() + conn := meta.(*conns.AWSClient).CloudControlClient(ctx) typeName := d.Get("type_name").(string) input := &cloudcontrol.CreateResourceInput{ @@ -123,7 +126,7 @@ func resourceResourceCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudControlClient() + conn := meta.(*conns.AWSClient).CloudControlClient(ctx) typeName := d.Get("type_name").(string) resourceDescription, err := FindResource(ctx, conn, @@ -149,7 +152,7 @@ func resourceResourceRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceResourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudControlClient() + conn := meta.(*conns.AWSClient).CloudControlClient(ctx) if d.HasChange("desired_state") { oldRaw, newRaw := d.GetChange("desired_state") @@ -191,7 +194,7 @@ func resourceResourceUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceResourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudControlClient() + conn := meta.(*conns.AWSClient).CloudControlClient(ctx) typeName := d.Get("type_name").(string) input := &cloudcontrol.DeleteResourceInput{ @@ -229,7 +232,7 @@ func resourceResourceDelete(ctx context.Context, d *schema.ResourceData, meta in } func resourceResourceCustomizeDiffGetSchema(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) error { - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := 
meta.(*conns.AWSClient).CloudFormationConn(ctx) resourceSchema := diff.Get("schema").(string) diff --git a/internal/service/cloudcontrol/resource_data_source.go b/internal/service/cloudcontrol/resource_data_source.go index 40c9e3a2242..23d8f659ff4 100644 --- a/internal/service/cloudcontrol/resource_data_source.go +++ b/internal/service/cloudcontrol/resource_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudcontrol import ( @@ -43,7 +46,7 @@ func DataSourceResource() *schema.Resource { } func dataSourceResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudControlClient() + conn := meta.(*conns.AWSClient).CloudControlClient(ctx) identifier := d.Get("identifier").(string) typeName := d.Get("type_name").(string) diff --git a/internal/service/cloudcontrol/resource_data_source_test.go b/internal/service/cloudcontrol/resource_data_source_test.go index 1828b2af7d8..f83839c9b57 100644 --- a/internal/service/cloudcontrol/resource_data_source_test.go +++ b/internal/service/cloudcontrol/resource_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudcontrol_test import ( diff --git a/internal/service/cloudcontrol/resource_test.go b/internal/service/cloudcontrol/resource_test.go index 8f4813b5a31..a76d40cf791 100644 --- a/internal/service/cloudcontrol/resource_test.go +++ b/internal/service/cloudcontrol/resource_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudcontrol_test import ( @@ -519,7 +522,7 @@ func TestAccCloudControlResource_lambdaFunction(t *testing.T) { func testAccCheckResourceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudControlClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudControlClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudcontrolapi_resource" { diff --git a/internal/service/cloudcontrol/service_package_gen.go b/internal/service/cloudcontrol/service_package_gen.go index 455d962b616..21b97cfdf01 100644 --- a/internal/service/cloudcontrol/service_package_gen.go +++ b/internal/service/cloudcontrol/service_package_gen.go @@ -5,6 +5,9 @@ package cloudcontrol import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + cloudcontrol_sdkv2 "github.com/aws/aws-sdk-go-v2/service/cloudcontrol" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -41,4 +44,17 @@ func (p *servicePackage) ServicePackageName() string { return names.CloudControl } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*cloudcontrol_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return cloudcontrol_sdkv2.NewFromConfig(cfg, func(o *cloudcontrol_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = cloudcontrol_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloudformation/arn.go b/internal/service/cloudformation/arn.go index 55057b4809f..b44847dd1ed 100644 --- a/internal/service/cloudformation/arn.go +++ b/internal/service/cloudformation/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( @@ -22,7 +25,7 @@ func TypeVersionARNToTypeARNAndVersionID(inputARN string) (string, string, error parsedARN, err := arn.Parse(inputARN) if err != nil { - return "", "", fmt.Errorf("error parsing ARN (%s): %w", inputARN, err) + return "", "", fmt.Errorf("parsing ARN (%s): %w", inputARN, err) } if actual, expected := parsedARN.Service, ARNService; actual != expected { diff --git a/internal/service/cloudformation/arn_test.go b/internal/service/cloudformation/arn_test.go index 906d6448381..f40c0c76d7c 100644 --- a/internal/service/cloudformation/arn_test.go +++ b/internal/service/cloudformation/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( @@ -20,12 +23,12 @@ func TestTypeVersionARNToTypeARNAndVersionID(t *testing.T) { { TestName: "empty ARN", InputARN: "", - ExpectedError: regexp.MustCompile(`error parsing ARN`), + ExpectedError: regexp.MustCompile(`parsing ARN`), }, { TestName: "unparsable ARN", InputARN: "test", - ExpectedError: regexp.MustCompile(`error parsing ARN`), + ExpectedError: regexp.MustCompile(`parsing ARN`), }, { TestName: "invalid ARN service", diff --git a/internal/service/cloudformation/consts.go b/internal/service/cloudformation/consts.go index 75a88edf52d..f9ac918a043 100644 --- a/internal/service/cloudformation/consts.go +++ b/internal/service/cloudformation/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( diff --git a/internal/service/cloudformation/errors.go b/internal/service/cloudformation/errors.go index 28e5473ed14..0a5fe1ee634 100644 --- a/internal/service/cloudformation/errors.go +++ b/internal/service/cloudformation/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( diff --git a/internal/service/cloudformation/export_data_source.go b/internal/service/cloudformation/export_data_source.go index cdfec0d3efb..80703ca1e38 100644 --- a/internal/service/cloudformation/export_data_source.go +++ b/internal/service/cloudformation/export_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( @@ -36,7 +39,7 @@ func DataSourceExport() *schema.Resource { func dataSourceExportRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) var value string name := d.Get("name").(string) region := meta.(*conns.AWSClient).Region diff --git a/internal/service/cloudformation/export_data_source_test.go b/internal/service/cloudformation/export_data_source_test.go index 00431019b30..6f2e870eff0 100644 --- a/internal/service/cloudformation/export_data_source_test.go +++ b/internal/service/cloudformation/export_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( diff --git a/internal/service/cloudformation/find.go b/internal/service/cloudformation/find.go index 56fbc7ca251..78e680353bc 100644 --- a/internal/service/cloudformation/find.go +++ b/internal/service/cloudformation/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( diff --git a/internal/service/cloudformation/flex.go b/internal/service/cloudformation/flex.go index a45239b03d1..eb20d5718d4 100644 --- a/internal/service/cloudformation/flex.go +++ b/internal/service/cloudformation/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( diff --git a/internal/service/cloudformation/generate.go b/internal/service/cloudformation/generate.go index 981274605a0..86b5e981d71 100644 --- a/internal/service/cloudformation/generate.go +++ b/internal/service/cloudformation/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package cloudformation diff --git a/internal/service/cloudformation/id.go b/internal/service/cloudformation/id.go index 74f254beda9..c5f1b5b7f30 100644 --- a/internal/service/cloudformation/id.go +++ b/internal/service/cloudformation/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( diff --git a/internal/service/cloudformation/list.go b/internal/service/cloudformation/list.go index 1c1f685ebcd..3876786a501 100644 --- a/internal/service/cloudformation/list.go +++ b/internal/service/cloudformation/list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( diff --git a/internal/service/cloudformation/service_package.go b/internal/service/cloudformation/service_package.go new file mode 100644 index 00000000000..3dd9f3640bd --- /dev/null +++ b/internal/service/cloudformation/service_package.go @@ -0,0 +1,24 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package cloudformation + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + cloudformation_sdkv1 "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *cloudformation_sdkv1.CloudFormation) (*cloudformation_sdkv1.CloudFormation, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if tfawserr.ErrMessageContains(r.Error, cloudformation_sdkv1.ErrCodeOperationInProgressException, "Another Operation on StackSet") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/cloudformation/service_package_gen.go b/internal/service/cloudformation/service_package_gen.go index 527b49d3e55..e046d9e5475 100644 --- a/internal/service/cloudformation/service_package_gen.go +++ b/internal/service/cloudformation/service_package_gen.go @@ -5,6 +5,10 @@ package cloudformation import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudformation_sdkv1 "github.com/aws/aws-sdk-go/service/cloudformation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CloudFormation } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudformation_sdkv1.CloudFormation, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudformation_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloudformation/stack.go b/internal/service/cloudformation/stack.go index 4fed0a0561b..ee72f6ec505 100644 --- a/internal/service/cloudformation/stack.go +++ b/internal/service/cloudformation/stack.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( @@ -132,14 +135,14 @@ func ResourceStack() *schema.Resource { func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) requestToken := id.UniqueId() name := d.Get("name").(string) input := &cloudformation.CreateStackInput{ ClientRequestToken: aws.String(requestToken), StackName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("template_body"); ok { @@ -214,7 +217,7 @@ func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) input := &cloudformation.DescribeStacksInput{ StackName: aws.String(d.Id()), @@ -295,7 +298,7 @@ func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interfa return sdkdiag.AppendErrorf(diags, "reading CloudFormation Stack (%s): %s", d.Id(), err) } - SetTagsOut(ctx, stack.Tags) + 
setTagsOut(ctx, stack.Tags) err = d.Set("outputs", flattenOutputs(stack.Outputs)) if err != nil { @@ -314,7 +317,7 @@ func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) requestToken := id.UniqueId() input := &cloudformation.UpdateStackInput{ @@ -349,7 +352,7 @@ func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta inter input.Parameters = expandParameters(v.(map[string]interface{})) } - if tags := GetTagsIn(ctx); len(tags) > 0 { + if tags := getTagsIn(ctx); len(tags) > 0 { input.Tags = tags } @@ -397,7 +400,7 @@ func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceStackDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) requestToken := id.UniqueId() input := &cloudformation.DeleteStackInput{ diff --git a/internal/service/cloudformation/stack_data_source.go b/internal/service/cloudformation/stack_data_source.go index 3d1655bb86b..ae0f92892f9 100644 --- a/internal/service/cloudformation/stack_data_source.go +++ b/internal/service/cloudformation/stack_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( @@ -74,7 +77,7 @@ func DataSourceStack() *schema.Resource { func dataSourceStackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) diff --git a/internal/service/cloudformation/stack_data_source_test.go b/internal/service/cloudformation/stack_data_source_test.go index 159170a70c7..cb4774eca6a 100644 --- a/internal/service/cloudformation/stack_data_source_test.go +++ b/internal/service/cloudformation/stack_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( diff --git a/internal/service/cloudformation/stack_set.go b/internal/service/cloudformation/stack_set.go index c368f17b594..f7484aa037a 100644 --- a/internal/service/cloudformation/stack_set.go +++ b/internal/service/cloudformation/stack_set.go @@ -1,9 +1,13 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( "context" "log" "regexp" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cloudformation" @@ -35,7 +39,7 @@ func ResourceStackSet() *schema.Resource { }, Timeouts: &schema.ResourceTimeout{ - Update: schema.DefaultTimeout(StackSetUpdatedDefaultTimeout), + Update: schema.DefaultTimeout(30 * time.Minute), }, Schema: map[string]*schema.Schema{ @@ -97,6 +101,21 @@ func ResourceStackSet() *schema.Resource { Computed: true, ConflictsWith: []string{"auto_deployment"}, }, + "managed_execution": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "active": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + }, + }, + DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + }, "name": { Type: schema.TypeString, Required: true, @@ -192,13 +211,13 @@ func ResourceStackSet() *schema.Resource { func resourceStackSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) name := d.Get("name").(string) input := &cloudformation.CreateStackSetInput{ ClientRequestToken: aws.String(id.UniqueId()), StackSetName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("administration_role_arn"); ok { @@ -221,6 +240,10 @@ func resourceStackSetCreate(ctx context.Context, d *schema.ResourceData, meta in input.ExecutionRoleName = aws.String(v.(string)) } + if v, ok := d.GetOk("managed_execution"); ok { + input.ManagedExecution = expandManagedExecution(v.([]interface{})) + } + if v, ok := d.GetOk("parameters"); ok { input.Parameters = expandParameters(v.(map[string]interface{})) } @@ -255,7 +278,7 @@ func resourceStackSetCreate(ctx context.Context, d *schema.ResourceData, meta in func 
resourceStackSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) callAs := d.Get("call_as").(string) stackSet, err := FindStackSetByName(ctx, conn, d.Id(), callAs) @@ -283,6 +306,11 @@ func resourceStackSetRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("description", stackSet.Description) d.Set("execution_role_name", stackSet.ExecutionRoleName) + + if err := d.Set("managed_execution", flattenStackSetManagedExecution(stackSet.ManagedExecution)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting managed_execution: %s", err) + } + d.Set("name", stackSet.StackSetName) d.Set("permission_model", stackSet.PermissionModel) @@ -292,7 +320,7 @@ func resourceStackSetRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("stack_set_id", stackSet.StackSetId) - SetTagsOut(ctx, stackSet.Tags) + setTagsOut(ctx, stackSet.Tags) d.Set("template_body", stackSet.TemplateBody) @@ -301,7 +329,7 @@ func resourceStackSetRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceStackSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) input := &cloudformation.UpdateStackSetInput{ OperationId: aws.String(id.UniqueId()), @@ -326,6 +354,10 @@ func resourceStackSetUpdate(ctx context.Context, d *schema.ResourceData, meta in input.ExecutionRoleName = aws.String(v.(string)) } + if v, ok := d.GetOk("managed_execution"); ok { + input.ManagedExecution = expandManagedExecution(v.([]interface{})) + } + if v, ok := d.GetOk("operation_preferences"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { input.OperationPreferences = 
expandOperationPreferences(v.([]interface{})[0].(map[string]interface{})) } @@ -343,7 +375,7 @@ func resourceStackSetUpdate(ctx context.Context, d *schema.ResourceData, meta in input.CallAs = aws.String(v.(string)) } - if tags := GetTagsIn(ctx); len(tags) > 0 { + if tags := getTagsIn(ctx); len(tags) > 0 { input.Tags = tags } @@ -379,7 +411,7 @@ func resourceStackSetUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceStackSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) input := &cloudformation.DeleteStackSetInput{ StackSetName: aws.String(d.Id()), @@ -418,6 +450,20 @@ func expandAutoDeployment(l []interface{}) *cloudformation.AutoDeployment { return autoDeployment } +func expandManagedExecution(l []interface{}) *cloudformation.ManagedExecution { + if len(l) == 0 { + return nil + } + + m := l[0].(map[string]interface{}) + + managedExecution := &cloudformation.ManagedExecution{ + Active: aws.Bool(m["active"].(bool)), + } + + return managedExecution +} + func flattenStackSetAutoDeploymentResponse(autoDeployment *cloudformation.AutoDeployment) []map[string]interface{} { if autoDeployment == nil { return []map[string]interface{}{} @@ -430,3 +476,15 @@ func flattenStackSetAutoDeploymentResponse(autoDeployment *cloudformation.AutoDe return []map[string]interface{}{m} } + +func flattenStackSetManagedExecution(managedExecution *cloudformation.ManagedExecution) []map[string]interface{} { + if managedExecution == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "active": aws.BoolValue(managedExecution.Active), + } + + return []map[string]interface{}{m} +} diff --git a/internal/service/cloudformation/stack_set_instance.go b/internal/service/cloudformation/stack_set_instance.go index f5120c329e8..0278898c2e1 100644 --- 
a/internal/service/cloudformation/stack_set_instance.go +++ b/internal/service/cloudformation/stack_set_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( @@ -156,7 +159,7 @@ func ResourceStackSetInstance() *schema.Resource { func resourceStackSetInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) region := meta.(*conns.AWSClient).Region if v, ok := d.GetOk("region"); ok { @@ -251,7 +254,7 @@ func resourceStackSetInstanceCreate(ctx context.Context, d *schema.ResourceData, return true, err } - return false, fmt.Errorf("error waiting for CloudFormation StackSet Instance (%s) creation: %w", d.Id(), err) + return false, fmt.Errorf("waiting for CloudFormation StackSet Instance (%s) creation: %w", d.Id(), err) }, ) @@ -264,7 +267,7 @@ func resourceStackSetInstanceCreate(ctx context.Context, d *schema.ResourceData, func resourceStackSetInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) stackSetName, accountID, region, err := StackSetInstanceParseResourceID(d.Id()) @@ -315,7 +318,7 @@ func resourceStackSetInstanceRead(ctx context.Context, d *schema.ResourceData, m func resourceStackSetInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) if d.HasChanges("deployment_targets", "parameter_overrides", "operation_preferences") { stackSetName, accountID, region, err := StackSetInstanceParseResourceID(d.Id()) @@ -368,7 +371,7 @@ func 
resourceStackSetInstanceUpdate(ctx context.Context, d *schema.ResourceData, func resourceStackSetInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) stackSetName, accountID, region, err := StackSetInstanceParseResourceID(d.Id()) diff --git a/internal/service/cloudformation/stack_set_instance_test.go b/internal/service/cloudformation/stack_set_instance_test.go index 7fd74c3f992..aa75120eea7 100644 --- a/internal/service/cloudformation/stack_set_instance_test.go +++ b/internal/service/cloudformation/stack_set_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( @@ -325,7 +328,7 @@ func testAccCheckStackSetInstanceExists(ctx context.Context, resourceName string callAs := rs.Primary.Attributes["call_as"] - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) stackSetName, accountID, region, err := tfcloudformation.StackSetInstanceParseResourceID(rs.Primary.ID) @@ -347,7 +350,7 @@ func testAccCheckStackSetInstanceExists(ctx context.Context, resourceName string func testAccCheckStackSetInstanceStackExists(ctx context.Context, stackInstance *cloudformation.StackInstance, v *cloudformation.Stack) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) output, err := tfcloudformation.FindStackByID(ctx, conn, aws.StringValue(stackInstance.StackId)) @@ -363,7 +366,7 @@ func testAccCheckStackSetInstanceStackExists(ctx context.Context, stackInstance func testAccCheckStackSetInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s 
*terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudformation_stack_set_instance" { diff --git a/internal/service/cloudformation/stack_set_test.go b/internal/service/cloudformation/stack_set_test.go index e48ddcdab96..0fe6d5ec27a 100644 --- a/internal/service/cloudformation/stack_set_test.go +++ b/internal/service/cloudformation/stack_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( @@ -22,7 +25,7 @@ func TestAccCloudFormationStackSet_basic(t *testing.T) { ctx := acctest.Context(t) var stackSet1 cloudformation.StackSet rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - iamRoleResourceName := "aws_iam_role.test" + iamRoleResourceName := "aws_iam_role.test.0" resourceName := "aws_cloudformation_stack_set.test" resource.ParallelTest(t, resource.TestCase{ @@ -41,6 +44,8 @@ func TestAccCloudFormationStackSet_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "call_as", "SELF"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "execution_role_name", "AWSCloudFormationStackSetExecutionRole"), + resource.TestCheckResourceAttr(resourceName, "managed_execution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "managed_execution.0.active", "false"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "operation_preferences.#", "0"), resource.TestCheckResourceAttr(resourceName, "parameters.%", "0"), @@ -92,8 +97,8 @@ func TestAccCloudFormationStackSet_administrationRoleARN(t *testing.T) { ctx := acctest.Context(t) var stackSet1, stackSet2 cloudformation.StackSet rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - iamRoleResourceName1 := 
"aws_iam_role.test1" - iamRoleResourceName2 := "aws_iam_role.test2" + iamRole1ResourceName := "aws_iam_role.test.0" + iamRole2ResourceName := "aws_iam_role.test.1" resourceName := "aws_cloudformation_stack_set.test" resource.ParallelTest(t, resource.TestCase{ @@ -106,7 +111,7 @@ func TestAccCloudFormationStackSet_administrationRoleARN(t *testing.T) { Config: testAccStackSetConfig_administrationRoleARN1(rName), Check: resource.ComposeTestCheckFunc( testAccCheckStackSetExists(ctx, resourceName, &stackSet1), - resource.TestCheckResourceAttrPair(resourceName, "administration_role_arn", iamRoleResourceName1, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "administration_role_arn", iamRole1ResourceName, "arn"), ), }, { @@ -123,7 +128,7 @@ func TestAccCloudFormationStackSet_administrationRoleARN(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckStackSetExists(ctx, resourceName, &stackSet2), testAccCheckStackSetNotRecreated(&stackSet1, &stackSet2), - resource.TestCheckResourceAttrPair(resourceName, "administration_role_arn", iamRoleResourceName2, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "administration_role_arn", iamRole2ResourceName, "arn"), ), }, }, @@ -210,6 +215,39 @@ func TestAccCloudFormationStackSet_executionRoleName(t *testing.T) { }) } +func TestAccCloudFormationStackSet_managedExecution(t *testing.T) { + ctx := acctest.Context(t) + var stackSet1 cloudformation.StackSet + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_cloudformation_stack_set.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheckStackSet(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, cloudformation.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckStackSetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccStackSetConfig_managedExecution(rName), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckStackSetExists(ctx, resourceName, &stackSet1), + resource.TestCheckResourceAttr(resourceName, "managed_execution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "managed_execution.0.active", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "call_as", + "template_url", + }, + }, + }, + }) +} + func TestAccCloudFormationStackSet_name(t *testing.T) { ctx := acctest.Context(t) var stackSet1, stackSet2 cloudformation.StackSet @@ -579,11 +617,11 @@ func TestAccCloudFormationStackSet_tags(t *testing.T) { CheckDestroy: testAccCheckStackSetDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccStackSetConfig_tags1(rName, "value1"), + Config: testAccStackSetConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheckStackSetExists(ctx, resourceName, &stackSet1), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.Key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), }, { @@ -596,28 +634,21 @@ func TestAccCloudFormationStackSet_tags(t *testing.T) { }, }, { - Config: testAccStackSetConfig_tags2(rName, "value1updated", "value2"), + Config: testAccStackSetConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckStackSetExists(ctx, resourceName, &stackSet2), testAccCheckStackSetNotRecreated(&stackSet1, &stackSet2), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), - resource.TestCheckResourceAttr(resourceName, "tags.Key1", "value1updated"), - resource.TestCheckResourceAttr(resourceName, "tags.Key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, { - Config: testAccStackSetConfig_tags1(rName, "value1updated"), + Config: testAccStackSetConfig_tags1(rName, 
"key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckStackSetExists(ctx, resourceName, &stackSet1), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.Key1", "value1updated"), - ), - }, - { - Config: testAccStackSetConfig_name(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckStackSetExists(ctx, resourceName, &stackSet1), - resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, }, @@ -715,7 +746,7 @@ func testAccCheckStackSetExists(ctx context.Context, resourceName string, v *clo callAs := rs.Primary.Attributes["call_as"] - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) output, err := tfcloudformation.FindStackSetByName(ctx, conn, rs.Primary.ID, callAs) @@ -731,7 +762,7 @@ func testAccCheckStackSetExists(ctx context.Context, resourceName string, v *clo func testAccCheckStackSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudformation_stack_set" { @@ -778,7 +809,7 @@ func testAccCheckStackSetRecreated(i, j *cloudformation.StackSet) resource.TestC } func testAccPreCheckStackSet(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) input := &cloudformation.ListStackSetsInput{} _, err := conn.ListStackSetsWithContext(ctx, input) @@ -908,32 +939,11 @@ Outputs: `, rName) } -func testAccStackSetConfig_administrationRoleARN1(rName string) string { +func testAccStackSetConfig_baseAdministrationRoleARNs(rName 
string, count int) string { return fmt.Sprintf(` -resource "aws_iam_role" "test1" { - assume_role_policy = < 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*cloudformation.Tag { return nil } -// SetTagsOut sets cloudformation service tags in Context. -func SetTagsOut(ctx context.Context, tags []*cloudformation.Tag) { +// setTagsOut sets cloudformation service tags in Context. +func setTagsOut(ctx context.Context, tags []*cloudformation.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/cloudformation/test-fixtures/cloudformation-template.yaml b/internal/service/cloudformation/test-fixtures/cloudformation-template.yaml index c84e3fcdd6b..717e8ce48c9 100644 --- a/internal/service/cloudformation/test-fixtures/cloudformation-template.yaml +++ b/internal/service/cloudformation/test-fixtures/cloudformation-template.yaml @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + Parameters: VpcCIDR: Description: CIDR to be used for the VPC diff --git a/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/cmd/resource/resource.go b/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/cmd/resource/resource.go index 8a98effbe07..f0e512cae04 100644 --- a/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/cmd/resource/resource.go +++ b/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/cmd/resource/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/resource-role.yaml b/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/resource-role.yaml index f4726f6b780..0d8fb7da4d2 100644 --- a/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/resource-role.yaml +++ b/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/resource-role.yaml @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + AWSTemplateFormatVersion: "2010-09-09" Description: > This CloudFormation template creates a role assumed by CloudFormation diff --git a/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/template.yml b/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/template.yml index 27db91b33c3..1b42556fa57 100644 --- a/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/template.yml +++ b/internal/service/cloudformation/testdata/examplecompany-exampleservice-exampleresource/template.yml @@ -1,3 +1,6 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + AWSTemplateFormatVersion: "2010-09-09" Transform: AWS::Serverless-2016-10-31 Description: AWS SAM template for the ExampleCompany::ExampleService::ExampleResource resource type diff --git a/internal/service/cloudformation/type.go b/internal/service/cloudformation/type.go index a56ff3ac1cc..2401cf7ded6 100644 --- a/internal/service/cloudformation/type.go +++ b/internal/service/cloudformation/type.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( "context" - "fmt" "log" "regexp" @@ -133,7 +135,7 @@ func ResourceType() *schema.Resource { } func resourceTypeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) typeName := d.Get("type_name").(string) input := &cloudformation.RegisterTypeInput{ @@ -157,17 +159,17 @@ func resourceTypeCreate(ctx context.Context, d *schema.ResourceData, meta interf output, err := conn.RegisterTypeWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error registering CloudFormation Type (%s): %w", typeName, err)) + return diag.Errorf("registering CloudFormation Type (%s): %s", typeName, err) } if output == nil || output.RegistrationToken == nil { - return diag.FromErr(fmt.Errorf("error registering CloudFormation Type (%s): empty result", typeName)) + return diag.Errorf("registering CloudFormation Type (%s): empty result", typeName) } registrationOutput, err := WaitTypeRegistrationProgressStatusComplete(ctx, conn, aws.StringValue(output.RegistrationToken)) if err != nil { - return diag.FromErr(fmt.Errorf("error waiting for CloudFormation Type (%s) register: %w", typeName, err)) + return diag.Errorf("waiting for CloudFormation Type (%s) register: %s", typeName, err) } // Type Version ARN is not available until after registration is complete @@ -177,7 +179,7 @@ func resourceTypeCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) output, err := FindTypeByARN(ctx, conn, d.Id()) @@ -188,13 +190,13 @@ func resourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interfac } if err != nil { - return 
diag.FromErr(fmt.Errorf("error reading CloudFormation Type (%s): %w", d.Id(), err)) + return diag.Errorf("reading CloudFormation Type (%s): %s", d.Id(), err) } typeARN, versionID, err := TypeVersionARNToTypeARNAndVersionID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error parsing CloudFormation Type (%s) ARN: %w", d.Id(), err)) + return diag.Errorf("parsing CloudFormation Type (%s) ARN: %s", d.Id(), err) } d.Set("arn", output.Arn) @@ -206,7 +208,7 @@ func resourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("is_default_version", output.IsDefaultVersion) if output.LoggingConfig != nil { if err := d.Set("logging_config", []interface{}{flattenLoggingConfig(output.LoggingConfig)}); err != nil { - return diag.FromErr(fmt.Errorf("error setting logging_config: %w", err)) + return diag.Errorf("setting logging_config: %s", err) } } else { d.Set("logging_config", nil) @@ -224,7 +226,7 @@ func resourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceTypeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) input := &cloudformation.DeregisterTypeInput{ Arn: aws.String(d.Id()), @@ -238,7 +240,7 @@ func resourceTypeDelete(ctx context.Context, d *schema.ResourceData, meta interf typeARN, _, err := TypeVersionARNToTypeARNAndVersionID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error parsing CloudFormation Type (%s) ARN: %w", d.Id(), err)) + return diag.Errorf("parsing CloudFormation Type (%s) ARN: %s", d.Id(), err) } input := &cloudformation.ListTypeVersionsInput{ @@ -263,7 +265,7 @@ func resourceTypeDelete(ctx context.Context, d *schema.ResourceData, meta interf }) if err != nil { - return diag.FromErr(fmt.Errorf("error listing CloudFormation Type (%s) Versions: %w", d.Id(), err)) + return diag.Errorf("listing CloudFormation Type (%s) Versions: 
%s", d.Id(), err) } if len(typeVersionSummaries) <= 1 { @@ -278,7 +280,7 @@ func resourceTypeDelete(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.FromErr(fmt.Errorf("error deregistering CloudFormation Type (%s): %w", d.Id(), err)) + return diag.Errorf("deregistering CloudFormation Type (%s): %s", d.Id(), err) } return nil @@ -290,7 +292,7 @@ func resourceTypeDelete(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.FromErr(fmt.Errorf("error deregistering CloudFormation Type (%s): %w", d.Id(), err)) + return diag.Errorf("deregistering CloudFormation Type (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/cloudformation/type_data_source.go b/internal/service/cloudformation/type_data_source.go index eb4f4e5f8f8..bdcb3199000 100644 --- a/internal/service/cloudformation/type_data_source.go +++ b/internal/service/cloudformation/type_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( "context" - "fmt" "regexp" "github.com/aws/aws-sdk-go/aws" @@ -108,7 +110,7 @@ func DataSourceType() *schema.Resource { } func dataSourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudFormationConn() + conn := meta.(*conns.AWSClient).CloudFormationConn(ctx) input := &cloudformation.DescribeTypeInput{} @@ -131,11 +133,11 @@ func dataSourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interf output, err := conn.DescribeTypeWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error reading CloudFormation Type: %w", err)) + return diag.Errorf("reading CloudFormation Type: %s", err) } if output == nil { - return diag.FromErr(fmt.Errorf("error reading CloudFormation Type: empty response")) + return diag.Errorf("reading CloudFormation Type: empty response") } d.SetId(aws.StringValue(output.Arn)) @@ -149,7 +151,7 @@ func dataSourceTypeRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("is_default_version", output.IsDefaultVersion) if output.LoggingConfig != nil { if err := d.Set("logging_config", []interface{}{flattenLoggingConfig(output.LoggingConfig)}); err != nil { - return diag.FromErr(fmt.Errorf("error setting logging_config: %w", err)) + return diag.Errorf("setting logging_config: %s", err) } } else { d.Set("logging_config", nil) diff --git a/internal/service/cloudformation/type_data_source_test.go b/internal/service/cloudformation/type_data_source_test.go index dba6de99e97..d4bc0a1ec1f 100644 --- a/internal/service/cloudformation/type_data_source_test.go +++ b/internal/service/cloudformation/type_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( diff --git a/internal/service/cloudformation/type_test.go b/internal/service/cloudformation/type_test.go index 87565e64687..20c0f4e86a2 100644 --- a/internal/service/cloudformation/type_test.go +++ b/internal/service/cloudformation/type_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudformation_test import ( @@ -153,7 +156,7 @@ func testAccCheckTypeExists(ctx context.Context, resourceName string) resource.T return fmt.Errorf("No CloudFormation Type ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) _, err := tfcloudformation.FindTypeByARN(ctx, conn, rs.Primary.ID) @@ -163,7 +166,7 @@ func testAccCheckTypeExists(ctx context.Context, resourceName string) resource.T func testAccCheckTypeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudformation_stack_set" { diff --git a/internal/service/cloudformation/wait.go b/internal/service/cloudformation/wait.go index 073bf2e41b8..1f14b49f0e4 100644 --- a/internal/service/cloudformation/wait.go +++ b/internal/service/cloudformation/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudformation import ( @@ -51,11 +54,6 @@ const ( stackSetOperationDelay = 5 * time.Second ) -const ( - // Default maximum amount of time to wait for a StackSet to be Updated - StackSetUpdatedDefaultTimeout = 30 * time.Minute -) - func WaitStackSetOperationSucceeded(ctx context.Context, conn *cloudformation.CloudFormation, stackSetName, operationID, callAs string, timeout time.Duration) (*cloudformation.StackSetOperation, error) { stateConf := &retry.StateChangeConf{ Pending: []string{cloudformation.StackSetOperationStatusRunning, cloudformation.StackSetOperationStatusQueued}, diff --git a/internal/service/cloudfront/cache_policy.go b/internal/service/cloudfront/cache_policy.go index 7a2434e892e..e6d9c9d2995 100644 --- a/internal/service/cloudfront/cache_policy.go +++ b/internal/service/cloudfront/cache_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -161,7 +164,7 @@ func ResourceCachePolicy() *schema.Resource { func resourceCachePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) name := d.Get("name").(string) apiObject := &cloudfront.CachePolicyConfig{ @@ -197,7 +200,7 @@ func resourceCachePolicyCreate(ctx context.Context, d *schema.ResourceData, meta func resourceCachePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) output, err := FindCachePolicyByID(ctx, conn, d.Id()) @@ -231,7 +234,7 @@ func resourceCachePolicyRead(ctx context.Context, d *schema.ResourceData, meta i func resourceCachePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) // // https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateCachePolicy.html: @@ -270,7 +273,7 @@ func resourceCachePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceCachePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) log.Printf("[DEBUG] Deleting CloudFront Cache Policy: (%s)", d.Id()) _, err := conn.DeleteCachePolicyWithContext(ctx, &cloudfront.DeleteCachePolicyInput{ diff --git a/internal/service/cloudfront/cache_policy_data_source.go b/internal/service/cloudfront/cache_policy_data_source.go index d0793270e1a..100b9913e7a 100644 --- a/internal/service/cloudfront/cache_policy_data_source.go +++ b/internal/service/cloudfront/cache_policy_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -143,7 +146,7 @@ func DataSourceCachePolicy() *schema.Resource { } func dataSourceCachePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) var cachePolicyID string diff --git a/internal/service/cloudfront/cache_policy_data_source_test.go b/internal/service/cloudfront/cache_policy_data_source_test.go index ef5ee39f7b8..cc43e5b118f 100644 --- a/internal/service/cloudfront/cache_policy_data_source_test.go +++ b/internal/service/cloudfront/cache_policy_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( diff --git a/internal/service/cloudfront/cache_policy_test.go b/internal/service/cloudfront/cache_policy_test.go index c0dc785f2c3..2da5c089f80 100644 --- a/internal/service/cloudfront/cache_policy_test.go +++ b/internal/service/cloudfront/cache_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( @@ -210,7 +213,7 @@ func TestAccCloudFrontCachePolicy_ZeroTTLs(t *testing.T) { func testAccCheckCachePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudfront_cache_policy" { @@ -245,7 +248,7 @@ func testAccCheckCachePolicyExists(ctx context.Context, n string) resource.TestC return fmt.Errorf("No CloudFront Cache Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) _, err := tfcloudfront.FindCachePolicyByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/cloudfront/cloudfront_test.go b/internal/service/cloudfront/cloudfront_test.go index 45c31c33d98..447a5b25aef 100644 --- a/internal/service/cloudfront/cloudfront_test.go +++ b/internal/service/cloudfront/cloudfront_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( diff --git a/internal/service/cloudfront/consts.go b/internal/service/cloudfront/consts.go index a7e17b5dff1..679a26f9aa1 100644 --- a/internal/service/cloudfront/consts.go +++ b/internal/service/cloudfront/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudfront const ( diff --git a/internal/service/cloudfront/distribution.go b/internal/service/cloudfront/distribution.go index 8df20d239b3..65ada8fd542 100644 --- a/internal/service/cloudfront/distribution.go +++ b/internal/service/cloudfront/distribution.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -545,7 +548,7 @@ func ResourceDistribution() *schema.Resource { Type: schema.TypeInt, Optional: true, Default: 5, - ValidateFunc: validation.IntBetween(1, 180), + ValidateFunc: validation.IntAtLeast(1), }, "origin_read_timeout": { Type: schema.TypeInt, @@ -830,7 +833,7 @@ func ResourceDistribution() *schema.Resource { func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) input := &cloudfront.CreateDistributionWithTagsInput{ DistributionConfigWithTags: &cloudfront.DistributionConfigWithTags{ @@ -839,7 +842,7 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met }, } - if tags := GetTagsIn(ctx); len(tags) > 0 { + if tags := getTagsIn(ctx); len(tags) > 0 { input.DistributionConfigWithTags.Tags.Items = tags } @@ -885,7 +888,7 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met func resourceDistributionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) output, err := FindDistributionByID(ctx, conn, d.Id()) @@ -925,7 +928,7 @@ func resourceDistributionRead(ctx context.Context, d *schema.ResourceData, meta func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) params := &cloudfront.UpdateDistributionInput{ Id: aws.String(d.Id()), DistributionConfig: expandDistributionConfig(d), @@ -993,7 +996,7 @@ func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, met func resourceDistributionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) if d.Get("retain_on_delete").(bool) { // Check if we need to disable first. @@ -1150,7 +1153,7 @@ func DistributionWaitUntilDeployed(ctx context.Context, id string, meta interfac // The refresh function for resourceAwsCloudFrontWebDistributionWaitUntilDeployed. func resourceWebDistributionStateRefreshFunc(ctx context.Context, id string, meta interface{}) retry.StateRefreshFunc { return func() (interface{}, string, error) { - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) params := &cloudfront.GetDistributionInput{ Id: aws.String(id), } diff --git a/internal/service/cloudfront/distribution_configuration_structure.go b/internal/service/cloudfront/distribution_configuration_structure.go index 2658322e904..7a35d7f1dd6 100644 --- a/internal/service/cloudfront/distribution_configuration_structure.go +++ b/internal/service/cloudfront/distribution_configuration_structure.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // CloudFront DistributionConfig structure helpers. 
// // These functions assist in pulling in data from Terraform resource diff --git a/internal/service/cloudfront/distribution_configuration_structure_test.go b/internal/service/cloudfront/distribution_configuration_structure_test.go index 586100fc245..e00abe7e0f1 100644 --- a/internal/service/cloudfront/distribution_configuration_structure_test.go +++ b/internal/service/cloudfront/distribution_configuration_structure_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( diff --git a/internal/service/cloudfront/distribution_data_source.go b/internal/service/cloudfront/distribution_data_source.go index a8f088c3e43..cef40042311 100644 --- a/internal/service/cloudfront/distribution_data_source.go +++ b/internal/service/cloudfront/distribution_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -66,7 +69,7 @@ func DataSourceDistribution() *schema.Resource { func dataSourceDistributionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig output, err := FindDistributionByID(ctx, conn, d.Get("id").(string)) @@ -91,7 +94,7 @@ func dataSourceDistributionRead(ctx context.Context, d *schema.ResourceData, met } } } - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for CloudFront Distribution (%s): %s", d.Id(), err) } diff --git a/internal/service/cloudfront/distribution_data_source_test.go b/internal/service/cloudfront/distribution_data_source_test.go index e8ecf7b3d49..3d29ffbcd5f 100644 --- a/internal/service/cloudfront/distribution_data_source_test.go +++ 
b/internal/service/cloudfront/distribution_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( diff --git a/internal/service/cloudfront/distribution_migrate.go b/internal/service/cloudfront/distribution_migrate.go index 9aff2a78974..5f95a74fb95 100644 --- a/internal/service/cloudfront/distribution_migrate.go +++ b/internal/service/cloudfront/distribution_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( diff --git a/internal/service/cloudfront/distribution_migrate_test.go b/internal/service/cloudfront/distribution_migrate_test.go index 21bc925e7d3..0d90b2fb455 100644 --- a/internal/service/cloudfront/distribution_migrate_test.go +++ b/internal/service/cloudfront/distribution_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( diff --git a/internal/service/cloudfront/distribution_test.go b/internal/service/cloudfront/distribution_test.go index 8b954f4b5b7..2d65c2b3010 100644 --- a/internal/service/cloudfront/distribution_test.go +++ b/internal/service/cloudfront/distribution_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( @@ -1458,7 +1461,7 @@ func TestAccCloudFrontDistribution_preconditionFailed(t *testing.T) { func testAccCheckDistributionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudfront_distribution" { @@ -1504,7 +1507,7 @@ func testAccCheckDistributionExists(ctx context.Context, resourceName string, di return fmt.Errorf("Resource ID not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) input := &cloudfront.GetDistributionInput{ Id: aws.String(rs.Primary.ID), @@ -1524,7 +1527,7 @@ func testAccCheckDistributionExists(ctx context.Context, resourceName string, di func testAccCheckDistributionExistsAPIOnly(ctx context.Context, distribution *cloudfront.Distribution) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) input := &cloudfront.GetDistributionInput{ Id: distribution.Id, @@ -1588,7 +1591,7 @@ func testAccCheckDistributionDisabled(distribution *cloudfront.Distribution) res // This requires the CloudFront Distribution to previously be disabled and fetches latest ETag automatically. 
func testAccCheckDistributionDisappears(ctx context.Context, distribution *cloudfront.Distribution) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) getDistributionInput := &cloudfront.GetDistributionInput{ Id: distribution.Id, diff --git a/internal/service/cloudfront/field_level_encryption_config.go b/internal/service/cloudfront/field_level_encryption_config.go index 90a2ce99ac5..3ad35c46e38 100644 --- a/internal/service/cloudfront/field_level_encryption_config.go +++ b/internal/service/cloudfront/field_level_encryption_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -127,7 +130,7 @@ func ResourceFieldLevelEncryptionConfig() *schema.Resource { func resourceFieldLevelEncryptionConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) apiObject := &cloudfront.FieldLevelEncryptionConfig{ CallerReference: aws.String(id.UniqueId()), @@ -163,7 +166,7 @@ func resourceFieldLevelEncryptionConfigCreate(ctx context.Context, d *schema.Res func resourceFieldLevelEncryptionConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) output, err := FindFieldLevelEncryptionConfigByID(ctx, conn, d.Id()) @@ -201,7 +204,7 @@ func resourceFieldLevelEncryptionConfigRead(ctx context.Context, d *schema.Resou func resourceFieldLevelEncryptionConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := 
meta.(*conns.AWSClient).CloudFrontConn(ctx) apiObject := &cloudfront.FieldLevelEncryptionConfig{ CallerReference: aws.String(d.Get("caller_reference").(string)), @@ -237,7 +240,7 @@ func resourceFieldLevelEncryptionConfigUpdate(ctx context.Context, d *schema.Res func resourceFieldLevelEncryptionConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) log.Printf("[DEBUG] Deleting CloudFront Field-level Encryption Config: (%s)", d.Id()) _, err := conn.DeleteFieldLevelEncryptionConfigWithContext(ctx, &cloudfront.DeleteFieldLevelEncryptionConfigInput{ diff --git a/internal/service/cloudfront/field_level_encryption_config_test.go b/internal/service/cloudfront/field_level_encryption_config_test.go index 7639e852b3f..8b99d4e3a37 100644 --- a/internal/service/cloudfront/field_level_encryption_config_test.go +++ b/internal/service/cloudfront/field_level_encryption_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( @@ -107,7 +110,7 @@ func TestAccCloudFrontFieldLevelEncryptionConfig_disappears(t *testing.T) { func testAccCheckFieldLevelEncryptionConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudfront_field_level_encryption_config" { @@ -142,7 +145,7 @@ func testAccCheckFieldLevelEncryptionConfigExists(ctx context.Context, r string, return fmt.Errorf("No CloudFront Field-level Encryption Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) output, err := tfcloudfront.FindFieldLevelEncryptionConfigByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/cloudfront/field_level_encryption_profile.go b/internal/service/cloudfront/field_level_encryption_profile.go index 968d6854f54..0a42d3d5a44 100644 --- a/internal/service/cloudfront/field_level_encryption_profile.go +++ b/internal/service/cloudfront/field_level_encryption_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -89,7 +92,7 @@ func ResourceFieldLevelEncryptionProfile() *schema.Resource {

 func resourceFieldLevelEncryptionProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	apiObject := &cloudfront.FieldLevelEncryptionProfileConfig{
 		CallerReference: aws.String(id.UniqueId()),
@@ -122,7 +125,7 @@ func resourceFieldLevelEncryptionProfileCreate(ctx context.Context, d *schema.Re

 func resourceFieldLevelEncryptionProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	output, err := FindFieldLevelEncryptionProfileByID(ctx, conn, d.Id())
@@ -154,7 +157,7 @@ func resourceFieldLevelEncryptionProfileRead(ctx context.Context, d *schema.Reso

 func resourceFieldLevelEncryptionProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	apiObject := &cloudfront.FieldLevelEncryptionProfileConfig{
 		CallerReference: aws.String(d.Get("caller_reference").(string)),
@@ -187,7 +190,7 @@ func resourceFieldLevelEncryptionProfileUpdate(ctx context.Context, d *schema.Re

 func resourceFieldLevelEncryptionProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	log.Printf("[DEBUG] Deleting CloudFront Field-level Encryption Profile: (%s)", d.Id())
 	_, err := conn.DeleteFieldLevelEncryptionProfileWithContext(ctx, &cloudfront.DeleteFieldLevelEncryptionProfileInput{
diff --git a/internal/service/cloudfront/field_level_encryption_profile_test.go b/internal/service/cloudfront/field_level_encryption_profile_test.go
index 4891fc97eb5..1d64a5cced6 100644
--- a/internal/service/cloudfront/field_level_encryption_profile_test.go
+++ b/internal/service/cloudfront/field_level_encryption_profile_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -98,7 +101,7 @@ func TestAccCloudFrontFieldLevelEncryptionProfile_disappears(t *testing.T) {

 func testAccCheckFieldLevelEncryptionProfileDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_field_level_encryption_profile" {
@@ -133,7 +136,7 @@ func testAccCheckFieldLevelEncryptionProfileExists(ctx context.Context, r string
 			return fmt.Errorf("No CloudFront Field-level Encryption Profile ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		output, err := tfcloudfront.FindFieldLevelEncryptionProfileByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/cloudfront/find.go b/internal/service/cloudfront/find.go
index 4a1223733a7..5d6b4858ffd 100644
--- a/internal/service/cloudfront/find.go
+++ b/internal/service/cloudfront/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
diff --git a/internal/service/cloudfront/forge.go b/internal/service/cloudfront/forge.go
index 50229c0a269..4343b4143bb 100644
--- a/internal/service/cloudfront/forge.go
+++ b/internal/service/cloudfront/forge.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
diff --git a/internal/service/cloudfront/function.go b/internal/service/cloudfront/function.go
index 87bdb99f18d..c9ec3475dfb 100644
--- a/internal/service/cloudfront/function.go
+++ b/internal/service/cloudfront/function.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -72,7 +75,7 @@ func ResourceFunction() *schema.Resource {

 func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	functionName := d.Get("name").(string)
 	input := &cloudfront.CreateFunctionInput{
@@ -112,7 +115,7 @@ func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta in

 func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	describeFunctionOutput, err := FindFunctionByNameAndStage(ctx, conn, d.Id(), cloudfront.FunctionStageDevelopment)
@@ -159,7 +162,7 @@ func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta inte

 func resourceFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	etag := d.Get("etag").(string)
 	if d.HasChanges("code", "comment", "runtime") {
@@ -202,7 +205,7 @@ func resourceFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta in

 func resourceFunctionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	log.Printf("[INFO] Deleting CloudFront Function: %s", d.Id())
 	_, err := conn.DeleteFunctionWithContext(ctx, &cloudfront.DeleteFunctionInput{
diff --git a/internal/service/cloudfront/function_data_source.go b/internal/service/cloudfront/function_data_source.go
index b257a7fb1af..5cfa12a638f 100644
--- a/internal/service/cloudfront/function_data_source.go
+++ b/internal/service/cloudfront/function_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -70,7 +73,7 @@ func DataSourceFunction() *schema.Resource {

 func dataSourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	name := d.Get("name").(string)
 	stage := d.Get("stage").(string)
diff --git a/internal/service/cloudfront/function_data_source_test.go b/internal/service/cloudfront/function_data_source_test.go
index 262a2a2f702..62a2e2083fb 100644
--- a/internal/service/cloudfront/function_data_source_test.go
+++ b/internal/service/cloudfront/function_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
diff --git a/internal/service/cloudfront/function_test.go b/internal/service/cloudfront/function_test.go
index a0a76a8d6c3..1bf85fc8bd4 100644
--- a/internal/service/cloudfront/function_test.go
+++ b/internal/service/cloudfront/function_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -288,7 +291,7 @@ func TestAccCloudFrontFunction_Update_comment(t *testing.T) {

 func testAccCheckFunctionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_function" {
@@ -323,7 +326,7 @@ func testAccCheckFunctionExists(ctx context.Context, n string, v *cloudfront.Des
 			return fmt.Errorf("CloudFront Function ID not set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		output, err := tfcloudfront.FindFunctionByNameAndStage(ctx, conn, rs.Primary.ID, cloudfront.FunctionStageDevelopment)
diff --git a/internal/service/cloudfront/generate.go b/internal/service/cloudfront/generate.go
index 0895c367696..f3ba5081dd0 100644
--- a/internal/service/cloudfront/generate.go
+++ b/internal/service/cloudfront/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=Resource -ListTagsOutTagsElem=Tags.Items -ServiceTagsSlice "-TagInCustomVal=&cloudfront.Tags{Items: Tags(updatedTags)}" -TagInIDElem=Resource "-UntagInCustomVal=&cloudfront.TagKeys{Items: aws.StringSlice(removedTags.Keys())}" -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package cloudfront
diff --git a/internal/service/cloudfront/key_group.go b/internal/service/cloudfront/key_group.go
index 7c03d32ddef..cd049cd9c53 100644
--- a/internal/service/cloudfront/key_group.go
+++ b/internal/service/cloudfront/key_group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -50,7 +53,7 @@ func ResourceKeyGroup() *schema.Resource {

 func resourceKeyGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	input := &cloudfront.CreateKeyGroupInput{
 		KeyGroupConfig: expandKeyGroupConfig(d),
@@ -73,7 +76,7 @@ func resourceKeyGroupCreate(ctx context.Context, d *schema.ResourceData, meta in

 func resourceKeyGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	input := &cloudfront.GetKeyGroupInput{
 		Id: aws.String(d.Id()),
 	}
@@ -104,7 +107,7 @@ func resourceKeyGroupRead(ctx context.Context, d *schema.ResourceData, meta inte

 func resourceKeyGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	input := &cloudfront.UpdateKeyGroupInput{
 		Id: aws.String(d.Id()),
@@ -122,7 +125,7 @@ func resourceKeyGroupUpdate(ctx context.Context, d *schema.ResourceData, meta in

 func resourceKeyGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	input := &cloudfront.DeleteKeyGroupInput{
 		Id: aws.String(d.Id()),
diff --git a/internal/service/cloudfront/key_group_test.go b/internal/service/cloudfront/key_group_test.go
index ca78e067e83..a3a5e08603c 100644
--- a/internal/service/cloudfront/key_group_test.go
+++ b/internal/service/cloudfront/key_group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -151,7 +154,7 @@ func testAccCheckKeyGroupExistence(ctx context.Context, r string) resource.TestC
 			return fmt.Errorf("no Id is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		input := &cloudfront.GetKeyGroupInput{
 			Id: aws.String(rs.Primary.ID),
@@ -167,7 +170,7 @@ func testAccCheckKeyGroupExistence(ctx context.Context, r string) resource.TestC

 func testAccCheckKeyGroupDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_key_group" {
diff --git a/internal/service/cloudfront/list.go b/internal/service/cloudfront/list.go
index 763bfadc51f..c77c15b4a6e 100644
--- a/internal/service/cloudfront/list.go
+++ b/internal/service/cloudfront/list.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
diff --git a/internal/service/cloudfront/log_delivery_canonical_user_id_data_source.go b/internal/service/cloudfront/log_delivery_canonical_user_id_data_source.go
index 09a0b23cad4..0c58ff94790 100644
--- a/internal/service/cloudfront/log_delivery_canonical_user_id_data_source.go
+++ b/internal/service/cloudfront/log_delivery_canonical_user_id_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
diff --git a/internal/service/cloudfront/log_delivery_canonical_user_id_data_source_test.go b/internal/service/cloudfront/log_delivery_canonical_user_id_data_source_test.go
index 1a729e6c3f4..3f8c26f6e58 100644
--- a/internal/service/cloudfront/log_delivery_canonical_user_id_data_source_test.go
+++ b/internal/service/cloudfront/log_delivery_canonical_user_id_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
diff --git a/internal/service/cloudfront/monitoring_subscription.go b/internal/service/cloudfront/monitoring_subscription.go
index 5dc6dc056d1..f41647df718 100644
--- a/internal/service/cloudfront/monitoring_subscription.go
+++ b/internal/service/cloudfront/monitoring_subscription.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -63,7 +66,7 @@ func ResourceMonitoringSubscription() *schema.Resource {

 func resourceMonitoringSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	id := d.Get("distribution_id").(string)
 	input := &cloudfront.CreateMonitoringSubscriptionInput{
@@ -88,7 +91,7 @@ func resourceMonitoringSubscriptionCreate(ctx context.Context, d *schema.Resourc

 func resourceMonitoringSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	output, err := FindMonitoringSubscriptionByDistributionID(ctx, conn, d.Id())
@@ -115,7 +118,7 @@ func resourceMonitoringSubscriptionRead(ctx context.Context, d *schema.ResourceD

 func resourceMonitoringSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	log.Printf("[DEBUG] Deleting CloudFront Monitoring Subscription (%s)", d.Id())
 	_, err := conn.DeleteMonitoringSubscriptionWithContext(ctx, &cloudfront.DeleteMonitoringSubscriptionInput{
diff --git a/internal/service/cloudfront/monitoring_subscription_test.go b/internal/service/cloudfront/monitoring_subscription_test.go
index f651a949c7b..eeaf20e39e2 100644
--- a/internal/service/cloudfront/monitoring_subscription_test.go
+++ b/internal/service/cloudfront/monitoring_subscription_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -109,7 +112,7 @@ func TestAccCloudFrontMonitoringSubscription_update(t *testing.T) {

 func testAccCheckMonitoringSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_monitoring_subscription" {
@@ -144,7 +147,7 @@ func testAccCheckMonitoringSubscriptionExists(ctx context.Context, n string, v *
 			return fmt.Errorf("No CloudFront Monitoring Subscription ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		output, err := tfcloudfront.FindMonitoringSubscriptionByDistributionID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/cloudfront/origin_access_control.go b/internal/service/cloudfront/origin_access_control.go
index edf50628767..68125930870 100644
--- a/internal/service/cloudfront/origin_access_control.go
+++ b/internal/service/cloudfront/origin_access_control.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -70,7 +73,7 @@ const (
 )

 func resourceOriginAccessControlCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	in := &cloudfront.CreateOriginAccessControlInput{
 		OriginAccessControlConfig: &cloudfront.OriginAccessControlConfig{
@@ -97,7 +100,7 @@ func resourceOriginAccessControlCreate(ctx context.Context, d *schema.ResourceDa
 }

 func resourceOriginAccessControlRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	out, err := findOriginAccessControlByID(ctx, conn, d.Id())
@@ -128,7 +131,7 @@ func resourceOriginAccessControlRead(ctx context.Context, d *schema.ResourceData
 }

 func resourceOriginAccessControlUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	in := &cloudfront.UpdateOriginAccessControlInput{
 		Id: aws.String(d.Id()),
@@ -152,7 +155,7 @@ func resourceOriginAccessControlUpdate(ctx context.Context, d *schema.ResourceDa
 }

 func resourceOriginAccessControlDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	log.Printf("[INFO] Deleting CloudFront Origin Access Control %s", d.Id())
diff --git a/internal/service/cloudfront/origin_access_control_test.go b/internal/service/cloudfront/origin_access_control_test.go
index 70c34153e0f..0ecacd37eea 100644
--- a/internal/service/cloudfront/origin_access_control_test.go
+++ b/internal/service/cloudfront/origin_access_control_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -217,7 +220,7 @@ func TestAccCloudFrontOriginAccessControl_SigningBehavior(t *testing.T) {

 func testAccCheckOriginAccessControlDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_origin_access_control" {
@@ -252,7 +255,7 @@ func testAccCheckOriginAccessControlExists(ctx context.Context, name string, ori
 			return create.Error(names.CloudFront, create.ErrActionCheckingExistence, tfcloudfront.ResNameOriginAccessControl, name, errors.New("not set"))
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		resp, err := conn.GetOriginAccessControlWithContext(ctx, &cloudfront.GetOriginAccessControlInput{
 			Id: aws.String(rs.Primary.ID),
@@ -269,7 +272,7 @@ func testAccCheckOriginAccessControlExists(ctx context.Context, name string, ori
 }

 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 	input := &cloudfront.ListOriginAccessControlsInput{}
 	_, err := conn.ListOriginAccessControlsWithContext(ctx, input)
diff --git a/internal/service/cloudfront/origin_access_identities_data_source.go b/internal/service/cloudfront/origin_access_identities_data_source.go
index 1c64bc4d5f6..024f0cf0c29 100644
--- a/internal/service/cloudfront/origin_access_identities_data_source.go
+++ b/internal/service/cloudfront/origin_access_identities_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -46,7 +49,7 @@ func DataSourceOriginAccessIdentities() *schema.Resource {

 func dataSourceOriginAccessIdentitiesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	var comments []interface{}
diff --git a/internal/service/cloudfront/origin_access_identities_data_source_test.go b/internal/service/cloudfront/origin_access_identities_data_source_test.go
index 284309c2b1e..0c06fe8934c 100644
--- a/internal/service/cloudfront/origin_access_identities_data_source_test.go
+++ b/internal/service/cloudfront/origin_access_identities_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -51,9 +54,9 @@ func TestAccCloudFrontOriginAccessIdentitiesDataSource_all(t *testing.T) {
 			{
 				Config: testAccOriginAccessIdentitiesDataSourceConfig_noComments(rName),
 				Check: resource.ComposeTestCheckFunc(
-					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "iam_arns.#", "1"),
-					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "1"),
-					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "s3_canonical_user_ids.#", "1"),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "iam_arns.#", 1),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 1),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "s3_canonical_user_ids.#", 1),
				),
			},
		},
diff --git a/internal/service/cloudfront/origin_access_identity.go b/internal/service/cloudfront/origin_access_identity.go
index 9dc1b6c5a03..2ba2812ee78 100644
--- a/internal/service/cloudfront/origin_access_identity.go
+++ b/internal/service/cloudfront/origin_access_identity.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -59,7 +62,7 @@ func ResourceOriginAccessIdentity() *schema.Resource {

 func resourceOriginAccessIdentityCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)
 	params := &cloudfront.CreateCloudFrontOriginAccessIdentityInput{
 		CloudFrontOriginAccessIdentityConfig: expandOriginAccessIdentityConfig(d),
 	}
@@ -74,7 +77,7 @@ func resourceOriginAccessIdentityCreate(ctx context.Context, d *schema.ResourceD

 func resourceOriginAccessIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)
 	params := &cloudfront.GetCloudFrontOriginAccessIdentityInput{
 		Id: aws.String(d.Id()),
 	}
@@ -109,7 +112,7 @@ func resourceOriginAccessIdentityRead(ctx context.Context, d *schema.ResourceDat

 func resourceOriginAccessIdentityUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)
 	params := &cloudfront.UpdateCloudFrontOriginAccessIdentityInput{
 		Id: aws.String(d.Id()),
 		CloudFrontOriginAccessIdentityConfig: expandOriginAccessIdentityConfig(d),
@@ -125,7 +128,7 @@ func resourceOriginAccessIdentityUpdate(ctx context.Context, d *schema.ResourceD

 func resourceOriginAccessIdentityDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)
 	params := &cloudfront.DeleteCloudFrontOriginAccessIdentityInput{
 		Id: aws.String(d.Id()),
 		IfMatch: aws.String(d.Get("etag").(string)),
diff --git a/internal/service/cloudfront/origin_access_identity_data_source.go b/internal/service/cloudfront/origin_access_identity_data_source.go
index 20c75c7b661..10e3bedc0e1 100644
--- a/internal/service/cloudfront/origin_access_identity_data_source.go
+++ b/internal/service/cloudfront/origin_access_identity_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -54,7 +57,7 @@ func DataSourceOriginAccessIdentity() *schema.Resource {

 func dataSourceOriginAccessIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)
 	id := d.Get("id").(string)
 	params := &cloudfront.GetCloudFrontOriginAccessIdentityInput{
 		Id: aws.String(id),
diff --git a/internal/service/cloudfront/origin_access_identity_data_source_test.go b/internal/service/cloudfront/origin_access_identity_data_source_test.go
index da6d138fe23..685694e7006 100644
--- a/internal/service/cloudfront/origin_access_identity_data_source_test.go
+++ b/internal/service/cloudfront/origin_access_identity_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
diff --git a/internal/service/cloudfront/origin_access_identity_test.go b/internal/service/cloudfront/origin_access_identity_test.go
index dfa6b90f478..1109472b7be 100644
--- a/internal/service/cloudfront/origin_access_identity_test.go
+++ b/internal/service/cloudfront/origin_access_identity_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -104,7 +107,7 @@ func TestAccCloudFrontOriginAccessIdentity_disappears(t *testing.T) {

 func testAccCheckOriginAccessIdentityDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_origin_access_identity" {
@@ -135,7 +138,7 @@ func testAccCheckOriginAccessIdentityExistence(ctx context.Context, r string, or
 			return fmt.Errorf("No Id is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		params := &cloudfront.GetCloudFrontOriginAccessIdentityInput{
 			Id: aws.String(rs.Primary.ID),
diff --git a/internal/service/cloudfront/origin_request_policy.go b/internal/service/cloudfront/origin_request_policy.go
index 3f9f79fccf0..8df610c51a5 100644
--- a/internal/service/cloudfront/origin_request_policy.go
+++ b/internal/service/cloudfront/origin_request_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -129,7 +132,7 @@ func ResourceOriginRequestPolicy() *schema.Resource {

 func resourceOriginRequestPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	name := d.Get("name").(string)
 	apiObject := &cloudfront.OriginRequestPolicyConfig{
@@ -170,7 +173,7 @@ func resourceOriginRequestPolicyCreate(ctx context.Context, d *schema.ResourceDa

 func resourceOriginRequestPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	output, err := FindOriginRequestPolicyByID(ctx, conn, d.Id())
@@ -215,7 +218,7 @@ func resourceOriginRequestPolicyRead(ctx context.Context, d *schema.ResourceData

 func resourceOriginRequestPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	//
 	// https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateOriginRequestPolicy.html:
@@ -259,7 +262,7 @@ func resourceOriginRequestPolicyUpdate(ctx context.Context, d *schema.ResourceDa

 func resourceOriginRequestPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	log.Printf("[DEBUG] Deleting CloudFront Origin Request Policy: (%s)", d.Id())
 	_, err := conn.DeleteOriginRequestPolicyWithContext(ctx, &cloudfront.DeleteOriginRequestPolicyInput{
diff --git a/internal/service/cloudfront/origin_request_policy_data_source.go b/internal/service/cloudfront/origin_request_policy_data_source.go
index 1d1604b5599..d78ba84c7af 100644
--- a/internal/service/cloudfront/origin_request_policy_data_source.go
+++ b/internal/service/cloudfront/origin_request_policy_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -116,7 +119,7 @@ func DataSourceOriginRequestPolicy() *schema.Resource {

 func dataSourceOriginRequestPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	var originRequestPolicyID string
diff --git a/internal/service/cloudfront/origin_request_policy_data_source_test.go b/internal/service/cloudfront/origin_request_policy_data_source_test.go
index b8746b4fe08..d259bde33da 100644
--- a/internal/service/cloudfront/origin_request_policy_data_source_test.go
+++ b/internal/service/cloudfront/origin_request_policy_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
diff --git a/internal/service/cloudfront/origin_request_policy_test.go b/internal/service/cloudfront/origin_request_policy_test.go
index 651516f5350..e3fe8103a45 100644
--- a/internal/service/cloudfront/origin_request_policy_test.go
+++ b/internal/service/cloudfront/origin_request_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -143,7 +146,7 @@ func TestAccCloudFrontOriginRequestPolicy_Items(t *testing.T) {

 func testAccCheckOriginRequestPolicyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_origin_request_policy" {
@@ -178,7 +181,7 @@ func testAccCheckOriginRequestPolicyExists(ctx context.Context, n string) resour
 			return fmt.Errorf("No CloudFront Origin Request Policy ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		_, err := tfcloudfront.FindOriginRequestPolicyByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/cloudfront/public_key.go b/internal/service/cloudfront/public_key.go
index 3c7bad924aa..6f359bdb708 100644
--- a/internal/service/cloudfront/public_key.go
+++ b/internal/service/cloudfront/public_key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -68,7 +71,7 @@ func ResourcePublicKey() *schema.Resource {

 func resourcePublicKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	if v, ok := d.GetOk("name"); ok {
 		d.Set("name", v.(string))
@@ -95,7 +98,7 @@ func resourcePublicKeyCreate(ctx context.Context, d *schema.ResourceData, meta i

 func resourcePublicKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)
 	request := &cloudfront.GetPublicKeyInput{
 		Id: aws.String(d.Id()),
 	}
@@ -133,7 +136,7 @@ func resourcePublicKeyRead(ctx context.Context, d *schema.ResourceData, meta int

 func resourcePublicKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	request := &cloudfront.UpdatePublicKeyInput{
 		Id: aws.String(d.Id()),
@@ -151,7 +154,7 @@ func resourcePublicKeyUpdate(ctx context.Context, d *schema.ResourceData, meta i

 func resourcePublicKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	request := &cloudfront.DeletePublicKeyInput{
 		Id: aws.String(d.Id()),
diff --git a/internal/service/cloudfront/public_key_test.go b/internal/service/cloudfront/public_key_test.go
index cf6b9db5fa4..05e1b1474f8 100644
--- a/internal/service/cloudfront/public_key_test.go
+++ b/internal/service/cloudfront/public_key_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -144,7 +147,7 @@ func testAccCheckPublicKeyExistence(ctx context.Context, r string) resource.Test
 			return fmt.Errorf("No Id is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		params := &cloudfront.GetPublicKeyInput{
 			Id: aws.String(rs.Primary.ID),
@@ -160,7 +163,7 @@ func testAccCheckPublicKeyExistence(ctx context.Context, r string) resource.Test

 func testAccCheckPublicKeyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_public_key" {
diff --git a/internal/service/cloudfront/realtime_log_config.go b/internal/service/cloudfront/realtime_log_config.go
index 661a3119e01..1544e1de06b 100644
--- a/internal/service/cloudfront/realtime_log_config.go
+++ b/internal/service/cloudfront/realtime_log_config.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -89,7 +92,7 @@ func ResourceRealtimeLogConfig() *schema.Resource {

 func resourceRealtimeLogConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	name := d.Get("name").(string)
 	input := &cloudfront.CreateRealtimeLogConfigInput{
@@ -122,7 +125,7 @@ func resourceRealtimeLogConfigCreate(ctx context.Context, d *schema.ResourceData

 func resourceRealtimeLogConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	logConfig, err := FindRealtimeLogConfigByARN(ctx, conn, d.Id())
@@ -149,7 +152,7 @@ func resourceRealtimeLogConfigRead(ctx context.Context, d *schema.ResourceData,

 func resourceRealtimeLogConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	//
 	// https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateRealtimeLogConfig.html:
@@ -183,7 +186,7 @@ func resourceRealtimeLogConfigUpdate(ctx context.Context, d *schema.ResourceData

 func resourceRealtimeLogConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	log.Printf("[DEBUG] Deleting CloudFront Real-time Log Config (%s)", d.Id())
 	_, err := conn.DeleteRealtimeLogConfigWithContext(ctx, &cloudfront.DeleteRealtimeLogConfigInput{
diff --git a/internal/service/cloudfront/realtime_log_config_data_source.go b/internal/service/cloudfront/realtime_log_config_data_source.go
index bbeccb1f943..b7aa0fc1128 100644
--- a/internal/service/cloudfront/realtime_log_config_data_source.go
+++ b/internal/service/cloudfront/realtime_log_config_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront

 import (
@@ -67,7 +70,7 @@ func DataSourceRealtimeLogConfig() *schema.Resource {

 func dataSourceRealtimeLogConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CloudFrontConn()
+	conn := meta.(*conns.AWSClient).CloudFrontConn(ctx)

 	name := d.Get("name").(string)
 	logConfig, err := FindRealtimeLogConfigByName(ctx, conn, name)
diff --git a/internal/service/cloudfront/realtime_log_config_data_source_test.go b/internal/service/cloudfront/realtime_log_config_data_source_test.go
index badb9e07c4e..cc578b57fb9 100644
--- a/internal/service/cloudfront/realtime_log_config_data_source_test.go
+++ b/internal/service/cloudfront/realtime_log_config_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
diff --git a/internal/service/cloudfront/realtime_log_config_test.go b/internal/service/cloudfront/realtime_log_config_test.go
index 66428152e11..acb00047cd7 100644
--- a/internal/service/cloudfront/realtime_log_config_test.go
+++ b/internal/service/cloudfront/realtime_log_config_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cloudfront_test

 import (
@@ -146,7 +149,7 @@ func TestAccCloudFrontRealtimeLogConfig_updates(t *testing.T) {

 func testAccCheckRealtimeLogConfigDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudfront_realtime_log_config" {
@@ -181,7 +184,7 @@ func testAccCheckRealtimeLogConfigExists(ctx context.Context, n string, v *cloud
 			return fmt.Errorf("No CloudFront Real-time Log Config ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx)

 		output, err := tfcloudfront.FindRealtimeLogConfigByARN(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/cloudfront/response_headers_policy.go b/internal/service/cloudfront/response_headers_policy.go
index 880e919cd15..90a330f2680 100644
--- a/internal/service/cloudfront/response_headers_policy.go
+++ b/internal/service/cloudfront/response_headers_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -323,7 +326,7 @@ func ResourceResponseHeadersPolicy() *schema.Resource { func resourceResponseHeadersPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) name := d.Get("name").(string) apiObject := &cloudfront.ResponseHeadersPolicyConfig{ @@ -371,7 +374,7 @@ func resourceResponseHeadersPolicyCreate(ctx context.Context, d *schema.Resource func resourceResponseHeadersPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) output, err := FindResponseHeadersPolicyByID(ctx, conn, d.Id()) @@ -431,7 +434,7 @@ func resourceResponseHeadersPolicyRead(ctx context.Context, d *schema.ResourceDa func resourceResponseHeadersPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) // // https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateResponseHeadersPolicy.html: @@ -482,7 +485,7 @@ func resourceResponseHeadersPolicyUpdate(ctx context.Context, d *schema.Resource func resourceResponseHeadersPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) log.Printf("[DEBUG] Deleting CloudFront Response Headers Policy: (%s)", d.Id()) _, err := conn.DeleteResponseHeadersPolicyWithContext(ctx, &cloudfront.DeleteResponseHeadersPolicyInput{ diff --git a/internal/service/cloudfront/response_headers_policy_data_source.go 
b/internal/service/cloudfront/response_headers_policy_data_source.go index 971f3fbaa3c..2013edde403 100644 --- a/internal/service/cloudfront/response_headers_policy_data_source.go +++ b/internal/service/cloudfront/response_headers_policy_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( @@ -294,7 +297,7 @@ func DataSourceResponseHeadersPolicy() *schema.Resource { func dataSourceResponseHeadersPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudFrontConn() + conn := meta.(*conns.AWSClient).CloudFrontConn(ctx) var responseHeadersPolicyID string diff --git a/internal/service/cloudfront/response_headers_policy_data_source_test.go b/internal/service/cloudfront/response_headers_policy_data_source_test.go index 2cacecabff2..ab02ba31482 100644 --- a/internal/service/cloudfront/response_headers_policy_data_source_test.go +++ b/internal/service/cloudfront/response_headers_policy_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( diff --git a/internal/service/cloudfront/response_headers_policy_test.go b/internal/service/cloudfront/response_headers_policy_test.go index 74608af7c43..4758708c971 100644 --- a/internal/service/cloudfront/response_headers_policy_test.go +++ b/internal/service/cloudfront/response_headers_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudfront_test import ( @@ -385,7 +388,7 @@ func TestAccCloudFrontResponseHeadersPolicy_disappears(t *testing.T) { func testAccCheckResponseHeadersPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudfront_response_headers_policy" { @@ -420,7 +423,7 @@ func testAccCheckResponseHeadersPolicyExists(ctx context.Context, n string) reso return fmt.Errorf("No CloudFront Response Headers Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFrontConn(ctx) _, err := tfcloudfront.FindResponseHeadersPolicyByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/cloudfront/service_package_gen.go b/internal/service/cloudfront/service_package_gen.go index 19130ed2da4..267e772027c 100644 --- a/internal/service/cloudfront/service_package_gen.go +++ b/internal/service/cloudfront/service_package_gen.go @@ -5,6 +5,10 @@ package cloudfront import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudfront_sdkv1 "github.com/aws/aws-sdk-go/service/cloudfront" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -125,4 +129,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CloudFront } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudfront_sdkv1.CloudFront, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudfront_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloudfront/sweep.go b/internal/service/cloudfront/sweep.go index 5492e6e58aa..ce762d67b3a 100644 --- a/internal/service/cloudfront/sweep.go +++ b/internal/service/cloudfront/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/cloudfront" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -90,11 +92,11 @@ func init() { func sweepCachePolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) input := &cloudfront.ListCachePoliciesInput{ Type: aws.String(cloudfront.ResponseHeadersPolicyTypeCustom), } @@ -139,7 +141,7 @@ func sweepCachePolicies(region string) error { return fmt.Errorf("error listing CloudFront Cache Policies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Cache Policies (%s): %w", region, err) @@ -150,11 +152,11 @@ func 
sweepCachePolicies(region string) error { func sweepDistributions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) input := &cloudfront.ListDistributionsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -197,7 +199,7 @@ func sweepDistributions(region string) error { return fmt.Errorf("error listing CloudFront Distributions (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Distributions (%s): %w", region, err) @@ -208,11 +210,11 @@ func sweepDistributions(region string) error { func sweepFunctions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -257,7 +259,7 @@ func sweepFunctions(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing CloudFront Functions: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping CloudFront Functions: %w", err)) } @@ -266,11 +268,11 @@ func sweepFunctions(region string) error { func sweepKeyGroup(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) var sweeperErrs *multierror.Error input := &cloudfront.ListKeyGroupsInput{} @@ -326,11 +328,11 @@ func sweepKeyGroup(region string) error { func sweepMonitoringSubscriptions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) var sweeperErrs *multierror.Error distributionSummaries := make([]*cloudfront.DistributionSummary, 0) @@ -375,11 +377,11 @@ func sweepMonitoringSubscriptions(region string) error { func sweepRealtimeLogsConfig(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -414,7 +416,7 @@ func sweepRealtimeLogsConfig(region string) error { input.Marker = output.RealtimeLogConfigs.NextMarker } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping CloudFront Real-time Log Configs: %w", err)) } @@ -423,11 +425,11 @@ func sweepRealtimeLogsConfig(region string) error { func sweepFieldLevelEncryptionConfigs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, 
region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) input := &cloudfront.ListFieldLevelEncryptionConfigsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -470,7 +472,7 @@ func sweepFieldLevelEncryptionConfigs(region string) error { return fmt.Errorf("error listing CloudFront Field-level Encryption Configs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Field-level Encryption Configs (%s): %w", region, err) @@ -481,11 +483,11 @@ func sweepFieldLevelEncryptionConfigs(region string) error { func sweepFieldLevelEncryptionProfiles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) input := &cloudfront.ListFieldLevelEncryptionProfilesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -528,7 +530,7 @@ func sweepFieldLevelEncryptionProfiles(region string) error { return fmt.Errorf("error listing CloudFront Field-level Encryption Profiles (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Field-level Encryption Profiles (%s): %w", region, err) @@ -539,11 +541,11 @@ func sweepFieldLevelEncryptionProfiles(region string) error { func sweepOriginRequestPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting 
client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) input := &cloudfront.ListOriginRequestPoliciesInput{ Type: aws.String(cloudfront.ResponseHeadersPolicyTypeCustom), } @@ -588,7 +590,7 @@ func sweepOriginRequestPolicies(region string) error { return fmt.Errorf("error listing CloudFront Origin Request Policies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Origin Request Policies (%s): %w", region, err) @@ -599,11 +601,11 @@ func sweepOriginRequestPolicies(region string) error { func sweepResponseHeadersPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := client.CloudFrontConn(ctx) input := &cloudfront.ListResponseHeadersPoliciesInput{ Type: aws.String(cloudfront.ResponseHeadersPolicyTypeCustom), } @@ -648,7 +650,7 @@ func sweepResponseHeadersPolicies(region string) error { return fmt.Errorf("error listing CloudFront Response Headers Policies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Response Headers Policies (%s): %w", region, err) @@ -659,11 +661,11 @@ func sweepResponseHeadersPolicies(region string) error { func sweepOriginAccessControls(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudFrontConn() + conn := 
client.CloudFrontConn(ctx) input := &cloudfront.ListOriginAccessControlsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -706,7 +708,7 @@ func sweepOriginAccessControls(region string) error { return fmt.Errorf("error listing CloudFront Origin Access Controls (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudFront Origin Access Controls (%s): %w", region, err) diff --git a/internal/service/cloudfront/tags_gen.go b/internal/service/cloudfront/tags_gen.go index 53c64291dfb..186e47f0991 100644 --- a/internal/service/cloudfront/tags_gen.go +++ b/internal/service/cloudfront/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists cloudfront service tags. +// listTags lists cloudfront service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn cloudfrontiface.CloudFrontAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn cloudfrontiface.CloudFrontAPI, identifier string) (tftags.KeyValueTags, error) { input := &cloudfront.ListTagsForResourceInput{ Resource: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cloudfrontiface.CloudFrontAPI, identifie // ListTags lists cloudfront service tags and sets them in Context. // It is called from outside this package.
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CloudFrontConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CloudFrontConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*cloudfront.Tag) tftags.KeyValueTa return tftags.New(ctx, m) } -// GetTagsIn returns cloudfront service tags from Context. +// getTagsIn returns cloudfront service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*cloudfront.Tag { +func getTagsIn(ctx context.Context) []*cloudfront.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*cloudfront.Tag { return nil } -// SetTagsOut sets cloudfront service tags in Context. -func SetTagsOut(ctx context.Context, tags []*cloudfront.Tag) { +// setTagsOut sets cloudfront service tags in Context. +func setTagsOut(ctx context.Context, tags []*cloudfront.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates cloudfront service tags. +// updateTags updates cloudfront service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudfrontiface.CloudFrontAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudfrontiface.CloudFrontAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn cloudfrontiface.CloudFrontAPI, identif // UpdateTags updates cloudfront service tags. 
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CloudFrontConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CloudFrontConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cloudfront/validate.go b/internal/service/cloudfront/validate.go index fa4acc36a0e..f23b9a36fe5 100644 --- a/internal/service/cloudfront/validate.go +++ b/internal/service/cloudfront/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( diff --git a/internal/service/cloudfront/validate_test.go b/internal/service/cloudfront/validate_test.go index 6493517af08..115edc2aa9d 100644 --- a/internal/service/cloudfront/validate_test.go +++ b/internal/service/cloudfront/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudfront import ( diff --git a/internal/service/cloudhsmv2/cloudhsmv2_test.go b/internal/service/cloudhsmv2/cloudhsmv2_test.go index 9bbe9bcc518..6f801d57252 100644 --- a/internal/service/cloudhsmv2/cloudhsmv2_test.go +++ b/internal/service/cloudhsmv2/cloudhsmv2_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2_test import ( diff --git a/internal/service/cloudhsmv2/cluster.go b/internal/service/cloudhsmv2/cluster.go index f56f5df61bc..f7e068a2e78 100644 --- a/internal/service/cloudhsmv2/cluster.go +++ b/internal/service/cloudhsmv2/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2 import ( @@ -111,12 +114,12 @@ func ResourceCluster() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) input := &cloudhsmv2.CreateClusterInput{ HsmType: aws.String(d.Get("hsm_type").(string)), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - TagList: GetTagsIn(ctx), + TagList: getTagsIn(ctx), } if v, ok := d.GetOk("source_backup_identifier"); ok { @@ -145,7 +148,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) cluster, err := FindClusterByID(ctx, conn, d.Id()) @@ -174,7 +177,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("subnet_ids", subnetIDs) d.Set("vpc_id", cluster.VpcId) - SetTagsOut(ctx, cluster.TagList) + setTagsOut(ctx, cluster.TagList) return diags } @@ -189,7 +192,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) log.Printf("[INFO] Deleting CloudHSMv2 Cluster: %s", d.Id()) _, err := conn.DeleteClusterWithContext(ctx, &cloudhsmv2.DeleteClusterInput{ diff --git a/internal/service/cloudhsmv2/cluster_data_source.go b/internal/service/cloudhsmv2/cluster_data_source.go index 87138024691..7d9e6844d97 100644 --- a/internal/service/cloudhsmv2/cluster_data_source.go +++ 
b/internal/service/cloudhsmv2/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2 import ( @@ -73,7 +76,7 @@ func DataSourceCluster() *schema.Resource { func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) clusterID := d.Get("cluster_id").(string) input := &cloudhsmv2.DescribeClustersInput{ diff --git a/internal/service/cloudhsmv2/cluster_data_source_test.go b/internal/service/cloudhsmv2/cluster_data_source_test.go index 5bdf3ba4b82..806a28a0bf2 100644 --- a/internal/service/cloudhsmv2/cluster_data_source_test.go +++ b/internal/service/cloudhsmv2/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2_test import ( diff --git a/internal/service/cloudhsmv2/cluster_test.go b/internal/service/cloudhsmv2/cluster_test.go index e8264ec55a6..5018d4fdc08 100644 --- a/internal/service/cloudhsmv2/cluster_test.go +++ b/internal/service/cloudhsmv2/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2_test import ( @@ -124,7 +127,7 @@ func testAccCluster_tags(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudhsm_v2_cluster" { @@ -150,7 +153,7 @@ func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckClusterExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) diff --git a/internal/service/cloudhsmv2/find.go b/internal/service/cloudhsmv2/find.go index 1df551977c2..551b4d684b9 100644 --- a/internal/service/cloudhsmv2/find.go +++ b/internal/service/cloudhsmv2/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2 import ( diff --git a/internal/service/cloudhsmv2/generate.go b/internal/service/cloudhsmv2/generate.go index 0faa1bf2ac0..ee5756a0659 100644 --- a/internal/service/cloudhsmv2/generate.go +++ b/internal/service/cloudhsmv2/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsInIDElem=ResourceId -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagInIDElem=ResourceId -TagInTagsElem=TagList -UntagInTagsElem=TagKeyList -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package cloudhsmv2 diff --git a/internal/service/cloudhsmv2/hsm.go b/internal/service/cloudhsmv2/hsm.go index 384f4bf42bd..a6962c8d269 100644 --- a/internal/service/cloudhsmv2/hsm.go +++ b/internal/service/cloudhsmv2/hsm.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2 import ( @@ -74,7 +77,7 @@ func ResourceHSM() *schema.Resource { func resourceHSMCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) clusterID := d.Get("cluster_id").(string) input := &cloudhsmv2.CreateHsmInput{ @@ -119,7 +122,7 @@ func resourceHSMCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceHSMRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) hsm, err := FindHSMByTwoPartKey(ctx, conn, d.Id(), d.Get("hsm_eni_id").(string)) @@ -151,7 +154,7 @@ func resourceHSMRead(ctx context.Context, d *schema.ResourceData, meta interface func resourceHSMDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudHSMV2Conn() + conn := meta.(*conns.AWSClient).CloudHSMV2Conn(ctx) log.Printf("[INFO] Deleting CloudHSMv2 HSM: %s", d.Id()) _, err := conn.DeleteHsmWithContext(ctx, &cloudhsmv2.DeleteHsmInput{ diff --git a/internal/service/cloudhsmv2/hsm_test.go b/internal/service/cloudhsmv2/hsm_test.go index 29ff8c75ac4..9be2b8e1c65 100644 --- a/internal/service/cloudhsmv2/hsm_test.go +++ b/internal/service/cloudhsmv2/hsm_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2_test import ( @@ -128,7 +131,7 @@ func testAccHSM_IPAddress(t *testing.T) { func testAccCheckHSMDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudhsm_v2_hsm" { @@ -154,7 +157,7 @@ func testAccCheckHSMDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckHSMExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudHSMV2Conn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) diff --git a/internal/service/cloudhsmv2/service_package.go b/internal/service/cloudhsmv2/service_package.go new file mode 100644 index 00000000000..d7e5f62dd52 --- /dev/null +++ b/internal/service/cloudhsmv2/service_package.go @@ -0,0 +1,24 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package cloudhsmv2 + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + cloudhsmv2_sdkv1 "github.com/aws/aws-sdk-go/service/cloudhsmv2" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *cloudhsmv2_sdkv1.CloudHSMV2) (*cloudhsmv2_sdkv1.CloudHSMV2, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if tfawserr.ErrMessageContains(r.Error, cloudhsmv2_sdkv1.ErrCodeCloudHsmInternalFailureException, "request was rejected because of an AWS CloudHSM internal failure") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/cloudhsmv2/service_package_gen.go b/internal/service/cloudhsmv2/service_package_gen.go index 147e2e18335..06f30cc5c0e 100644 --- a/internal/service/cloudhsmv2/service_package_gen.go +++ b/internal/service/cloudhsmv2/service_package_gen.go @@ -5,6 +5,10 @@ package cloudhsmv2 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudhsmv2_sdkv1 "github.com/aws/aws-sdk-go/service/cloudhsmv2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -49,4 +53,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CloudHSMV2 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudhsmv2_sdkv1.CloudHSMV2, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudhsmv2_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloudhsmv2/status.go b/internal/service/cloudhsmv2/status.go index cbbdb045000..85196f1a072 100644 --- a/internal/service/cloudhsmv2/status.go +++ b/internal/service/cloudhsmv2/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2 import ( diff --git a/internal/service/cloudhsmv2/sweep.go b/internal/service/cloudhsmv2/sweep.go index e8595ed4347..2a397c8f2d6 100644 --- a/internal/service/cloudhsmv2/sweep.go +++ b/internal/service/cloudhsmv2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cloudhsmv2" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,11 +31,11 @@ func init() { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudHSMV2Conn() + conn := client.CloudHSMV2Conn(ctx) input := &cloudhsmv2.DescribeClustersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -65,7 +67,7 @@ func sweepClusters(region string) error { return fmt.Errorf("error listing CloudHSMv2 Clusters (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudHSMv2 Clusters (%s): %w", region, err) @@ -76,11 +78,11 @@ func sweepClusters(region string) error { func sweepHSMs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudHSMV2Conn() + conn := client.CloudHSMV2Conn(ctx) input := &cloudhsmv2.DescribeClustersInput{} sweepResources := 
make([]sweep.Sweepable, 0) @@ -115,7 +117,7 @@ func sweepHSMs(region string) error { return fmt.Errorf("error listing CloudHSMv2 HSMs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudHSMv2 HSMs (%s): %w", region, err) diff --git a/internal/service/cloudhsmv2/tags_gen.go b/internal/service/cloudhsmv2/tags_gen.go index 702f1bd8f48..456af59b999 100644 --- a/internal/service/cloudhsmv2/tags_gen.go +++ b/internal/service/cloudhsmv2/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists cloudhsmv2 service tags. +// listTags lists cloudhsmv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn cloudhsmv2iface.CloudHSMV2API, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn cloudhsmv2iface.CloudHSMV2API, identifier string) (tftags.KeyValueTags, error) { input := &cloudhsmv2.ListTagsInput{ ResourceId: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cloudhsmv2iface.CloudHSMV2API, identifie // ListTags lists cloudhsmv2 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CloudHSMV2Conn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CloudHSMV2Conn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*cloudhsmv2.Tag) tftags.KeyValueTa return tftags.New(ctx, m) } -// GetTagsIn returns cloudhsmv2 service tags from Context. +// getTagsIn returns cloudhsmv2 service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*cloudhsmv2.Tag { +func getTagsIn(ctx context.Context) []*cloudhsmv2.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*cloudhsmv2.Tag { return nil } -// SetTagsOut sets cloudhsmv2 service tags in Context. -func SetTagsOut(ctx context.Context, tags []*cloudhsmv2.Tag) { +// setTagsOut sets cloudhsmv2 service tags in Context. +func setTagsOut(ctx context.Context, tags []*cloudhsmv2.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates cloudhsmv2 service tags. +// updateTags updates cloudhsmv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudhsmv2iface.CloudHSMV2API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudhsmv2iface.CloudHSMV2API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn cloudhsmv2iface.CloudHSMV2API, identif // UpdateTags updates cloudhsmv2 service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CloudHSMV2Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CloudHSMV2Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cloudhsmv2/wait.go b/internal/service/cloudhsmv2/wait.go index 622e1aa4aee..4bd900b6a1c 100644 --- a/internal/service/cloudhsmv2/wait.go +++ b/internal/service/cloudhsmv2/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudhsmv2 import ( diff --git a/internal/service/cloudsearch/domain.go b/internal/service/cloudsearch/domain.go index 92c6a58e432..833f0d335ad 100644 --- a/internal/service/cloudsearch/domain.go +++ b/internal/service/cloudsearch/domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudsearch import ( @@ -175,7 +178,7 @@ func ResourceDomain() *schema.Resource { func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) name := d.Get("name").(string) input := cloudsearch.CreateDomainInput{ @@ -263,7 +266,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) domainStatus, err := FindDomainStatusByName(ctx, conn, d.Id()) @@ -339,7 +342,7 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - 
conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) requiresIndexDocuments := false if d.HasChange("scaling_parameters") { @@ -470,7 +473,7 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) log.Printf("[DEBUG] Deleting CloudSearch Domain: %s", d.Id()) _, err := conn.DeleteDomainWithContext(ctx, &cloudsearch.DeleteDomainInput{ @@ -542,7 +545,7 @@ func defineIndexFields(ctx context.Context, conn *cloudsearch.CloudSearch, domai _, err = conn.DefineIndexFieldWithContext(ctx, input) if err != nil { - return fmt.Errorf("error defining CloudSearch Domain (%s) index field (%s): %w", domainName, aws.StringValue(apiObject.IndexFieldName), err) + return fmt.Errorf("defining CloudSearch Domain (%s) index field (%s): %w", domainName, aws.StringValue(apiObject.IndexFieldName), err) } } } diff --git a/internal/service/cloudsearch/domain_service_access_policy.go b/internal/service/cloudsearch/domain_service_access_policy.go index 3fb289cf113..aa94b0c0979 100644 --- a/internal/service/cloudsearch/domain_service_access_policy.go +++ b/internal/service/cloudsearch/domain_service_access_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudsearch import ( @@ -57,7 +60,7 @@ func ResourceDomainServiceAccessPolicy() *schema.Resource { func resourceDomainServiceAccessPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) domainName := d.Get("domain_name").(string) input := &cloudsearch.UpdateServiceAccessPoliciesInput{ @@ -93,7 +96,7 @@ func resourceDomainServiceAccessPolicyPut(ctx context.Context, d *schema.Resourc func resourceDomainServiceAccessPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) accessPolicy, err := FindAccessPolicyByName(ctx, conn, d.Id()) @@ -121,7 +124,7 @@ func resourceDomainServiceAccessPolicyRead(ctx context.Context, d *schema.Resour func resourceDomainServiceAccessPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudSearchConn() + conn := meta.(*conns.AWSClient).CloudSearchConn(ctx) input := &cloudsearch.UpdateServiceAccessPoliciesInput{ AccessPolicies: aws.String(""), diff --git a/internal/service/cloudsearch/domain_service_access_policy_test.go b/internal/service/cloudsearch/domain_service_access_policy_test.go index eab083baf35..562dc3b1a8f 100644 --- a/internal/service/cloudsearch/domain_service_access_policy_test.go +++ b/internal/service/cloudsearch/domain_service_access_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudsearch_test import ( @@ -87,7 +90,7 @@ func testAccDomainServiceAccessPolicyExists(ctx context.Context, n string) resou return fmt.Errorf("No CloudSearch Domain Service Access Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn(ctx) _, err := tfcloudsearch.FindAccessPolicyByName(ctx, conn, rs.Primary.ID) @@ -102,7 +105,7 @@ func testAccCheckDomainServiceAccessPolicyDestroy(ctx context.Context) resource. continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn(ctx) _, err := tfcloudsearch.FindAccessPolicyByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/cloudsearch/domain_test.go b/internal/service/cloudsearch/domain_test.go index 52917b4e6eb..469650fa9be 100644 --- a/internal/service/cloudsearch/domain_test.go +++ b/internal/service/cloudsearch/domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudsearch_test import ( @@ -300,7 +303,7 @@ func testAccDomainExists(ctx context.Context, n string, v *cloudsearch.DomainSta return fmt.Errorf("No CloudSearch Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn(ctx) output, err := tfcloudsearch.FindDomainStatusByName(ctx, conn, rs.Primary.ID) @@ -321,7 +324,7 @@ func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudSearchConn(ctx) _, err := tfcloudsearch.FindDomainStatusByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/cloudsearch/generate.go b/internal/service/cloudsearch/generate.go new file mode 100644 index 00000000000..eae13231dfb --- /dev/null +++ b/internal/service/cloudsearch/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package cloudsearch diff --git a/internal/service/cloudsearch/service_package_gen.go b/internal/service/cloudsearch/service_package_gen.go index 3f08f502c51..0d18b71f5a8 100644 --- a/internal/service/cloudsearch/service_package_gen.go +++ b/internal/service/cloudsearch/service_package_gen.go @@ -5,6 +5,10 @@ package cloudsearch import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudsearch_sdkv1 "github.com/aws/aws-sdk-go/service/cloudsearch" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CloudSearch } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudsearch_sdkv1.CloudSearch, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudsearch_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloudsearch/sweep.go b/internal/service/cloudsearch/sweep.go index 4f1fb2d8846..8d690e4a16a 100644 --- a/internal/service/cloudsearch/sweep.go +++ b/internal/service/cloudsearch/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cloudsearch" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepDomains(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CloudSearchConn() + conn := client.CloudSearchConn(ctx) input := &cloudsearch.DescribeDomainsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -54,7 +56,7 @@ func sweepDomains(region string) error { return fmt.Errorf("error listing CloudSearch Domains (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudSearch Domains (%s): %w", region, err) diff --git a/internal/service/cloudtrail/cloudtrail.go b/internal/service/cloudtrail/cloudtrail.go index a294cbcd22f..ca25e1c8c3c 100644 --- a/internal/service/cloudtrail/cloudtrail.go +++ b/internal/service/cloudtrail/cloudtrail.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudtrail import ( @@ -254,12 +257,12 @@ func ResourceCloudTrail() *schema.Resource { // nosemgrep:ci.cloudtrail-in-func- func resourceCloudTrailCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.cloudtrail-in-func-name var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudTrailConn() + conn := meta.(*conns.AWSClient).CloudTrailConn(ctx) input := cloudtrail.CreateTrailInput{ Name: aws.String(d.Get("name").(string)), S3BucketName: aws.String(d.Get("s3_bucket_name").(string)), - TagsList: GetTagsIn(ctx), + TagsList: getTagsIn(ctx), } if v, ok := d.GetOk("cloud_watch_logs_group_arn"); ok { @@ -348,7 +351,7 @@ func resourceCloudTrailCreate(ctx context.Context, d *schema.ResourceData, meta func resourceCloudTrailRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.cloudtrail-in-func-name var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudTrailConn() + conn := meta.(*conns.AWSClient).CloudTrailConn(ctx) input := cloudtrail.DescribeTrailsInput{ TrailNameList: []*string{ @@ -447,7 +450,7 @@ func resourceCloudTrailRead(ctx context.Context, d *schema.ResourceData, meta in func resourceCloudTrailUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.cloudtrail-in-func-name var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudTrailConn() + conn := meta.(*conns.AWSClient).CloudTrailConn(ctx) if d.HasChangesExcept("tags", "tags_all", "insight_selector", "advanced_event_selector", "event_selector", "enable_logging") { input := cloudtrail.UpdateTrailInput{ @@ -542,7 +545,7 @@ func resourceCloudTrailUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceCloudTrailDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.cloudtrail-in-func-name var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).CloudTrailConn() + conn := meta.(*conns.AWSClient).CloudTrailConn(ctx) log.Printf("[DEBUG] Deleting CloudTrail: %q", d.Id()) _, err := conn.DeleteTrailWithContext(ctx, &cloudtrail.DeleteTrailInput{ diff --git a/internal/service/cloudtrail/cloudtrail_test.go b/internal/service/cloudtrail/cloudtrail_test.go index 37c370cebb2..dcb1623e085 100644 --- a/internal/service/cloudtrail/cloudtrail_test.go +++ b/internal/service/cloudtrail/cloudtrail_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudtrail_test import ( @@ -773,7 +776,7 @@ func testAccCheckExists(ctx context.Context, n string, trail *cloudtrail.Trail) return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn(ctx) params := cloudtrail.DescribeTrailsInput{ TrailNameList: []*string{aws.String(rs.Primary.ID)}, } @@ -797,7 +800,7 @@ func testAccCheckLoggingEnabled(ctx context.Context, n string, desired bool) res return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn(ctx) params := cloudtrail.GetTrailStatusInput{ Name: aws.String(rs.Primary.ID), } @@ -849,7 +852,7 @@ func testAccCheckLogValidationEnabled(n string, desired bool, trail *cloudtrail. 
func testAccCheckTrailDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudtrail" { @@ -876,7 +879,7 @@ func testAccCheckTrailDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckLoadTags(ctx context.Context, trail *cloudtrail.Trail, tags *[]*cloudtrail.Tag) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn(ctx) input := cloudtrail.ListTagsInput{ ResourceIdList: []*string{trail.TrailARN}, } @@ -895,8 +898,12 @@ func testAccCheckLoadTags(ctx context.Context, trail *cloudtrail.Trail, tags *[] func testAccBaseConfig(rName string) string { return fmt.Sprintf(` +data "aws_caller_identity" "current" {} + data "aws_partition" "current" {} +data "aws_region" "current" {} + resource "aws_s3_bucket" "test" { bucket = %[1]q force_destroy = true @@ -908,21 +915,31 @@ resource "aws_s3_bucket_policy" "test" { Version = "2012-10-17" Statement = [ { - Sid = "AWSCloudTrailAclCheck" - Effect = "Allow" - Principal = "*" - Action = "s3:GetBucketAcl" - Resource = "arn:${data.aws_partition.current.partition}:s3:::%[1]s" + Sid = "AWSCloudTrailAclCheck" + Effect = "Allow" + Principal = { + Service = "cloudtrail.amazonaws.com" + } + Action = "s3:GetBucketAcl" + Resource = "arn:${data.aws_partition.current.partition}:s3:::%[1]s" + Condition = { + StringEquals = { + "aws:SourceArn" = "arn:${data.aws_partition.current.partition}:cloudtrail:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:trail/%[1]s" + } + } }, { - Sid = "AWSCloudTrailWrite" - Effect = "Allow" - Principal = "*" - Action = "s3:PutObject" - Resource = 
"arn:${data.aws_partition.current.partition}:s3:::%[1]s/*" + Sid = "AWSCloudTrailWrite" + Effect = "Allow" + Principal = { + Service = "cloudtrail.amazonaws.com" + } + Action = "s3:PutObject" + Resource = "arn:${data.aws_partition.current.partition}:s3:::%[1]s/*" Condition = { StringEquals = { - "s3:x-amz-acl" = "bucket-owner-full-control" + "s3:x-amz-acl" = "bucket-owner-full-control" + "aws:SourceArn" = "arn:${data.aws_partition.current.partition}:cloudtrail:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:trail/%[1]s" } } } @@ -1497,13 +1514,13 @@ resource "aws_cloudtrail" "test" { } func testAccCloudTrailConfig_advancedEventSelector(rName string) string { - return fmt.Sprintf(` + return acctest.ConfigCompose(testAccBaseConfig(rName), fmt.Sprintf(` resource "aws_cloudtrail" "test" { # Must have bucket policy attached first depends_on = [aws_s3_bucket_policy.test] name = %[1]q - s3_bucket_name = aws_s3_bucket.test1.id + s3_bucket_name = aws_s3_bucket.test.id advanced_event_selector { name = "s3Custom" @@ -1593,46 +1610,9 @@ resource "aws_cloudtrail" "test" { } } -data "aws_partition" "current" {} - -resource "aws_s3_bucket" "test1" { - bucket = "%[1]s-1" - force_destroy = true -} - -resource "aws_s3_bucket_policy" "test" { - bucket = aws_s3_bucket.test1.id - policy = < 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*cloudtrail.Tag { return nil } -// SetTagsOut sets cloudtrail service tags in Context. -func SetTagsOut(ctx context.Context, tags []*cloudtrail.Tag) { +// setTagsOut sets cloudtrail service tags in Context. +func setTagsOut(ctx context.Context, tags []*cloudtrail.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates cloudtrail service tags. +// updateTags updates cloudtrail service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudtrailiface.CloudTrailAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudtrailiface.CloudTrailAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn cloudtrailiface.CloudTrailAPI, identif // UpdateTags updates cloudtrail service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CloudTrailConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CloudTrailConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cloudwatch/composite_alarm.go b/internal/service/cloudwatch/composite_alarm.go index 095e411eb48..2730202a3e2 100644 --- a/internal/service/cloudwatch/composite_alarm.go +++ b/internal/service/cloudwatch/composite_alarm.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( @@ -99,7 +102,7 @@ func ResourceCompositeAlarm() *schema.Resource { } func resourceCompositeAlarmCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) name := d.Get("alarm_name").(string) input := expandPutCompositeAlarmInput(ctx, d) @@ -120,7 +123,7 @@ func resourceCompositeAlarmCreate(ctx context.Context, d *schema.ResourceData, m d.SetId(name) // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { alarm, err := FindCompositeAlarmByName(ctx, conn, d.Id()) if err != nil { @@ -143,7 +146,7 @@ func resourceCompositeAlarmCreate(ctx context.Context, d *schema.ResourceData, m } func resourceCompositeAlarmRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) alarm, err := FindCompositeAlarmByName(ctx, conn, d.Id()) @@ -170,7 +173,7 @@ func resourceCompositeAlarmRead(ctx context.Context, d *schema.ResourceData, met } func resourceCompositeAlarmUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := expandPutCompositeAlarmInput(ctx, d) @@ -186,7 +189,7 @@ func resourceCompositeAlarmUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceCompositeAlarmDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) log.Printf("[INFO] Deleting CloudWatch Composite Alarm: %s", d.Id()) _, err := conn.DeleteAlarmsWithContext(ctx, &cloudwatch.DeleteAlarmsInput{ @@ -237,7 +240,7 @@ func FindCompositeAlarmByName(ctx context.Context, conn *cloudwatch.CloudWatch, func expandPutCompositeAlarmInput(ctx context.Context, d *schema.ResourceData) *cloudwatch.PutCompositeAlarmInput { apiObject := &cloudwatch.PutCompositeAlarmInput{ ActionsEnabled: aws.Bool(d.Get("actions_enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("alarm_actions"); ok { diff --git a/internal/service/cloudwatch/composite_alarm_test.go 
b/internal/service/cloudwatch/composite_alarm_test.go index 73ab0700658..4e9737a0df9 100644 --- a/internal/service/cloudwatch/composite_alarm_test.go +++ b/internal/service/cloudwatch/composite_alarm_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch_test import ( @@ -388,7 +391,7 @@ func TestAccCloudWatchCompositeAlarm_allActions(t *testing.T) { func testAccCheckCompositeAlarmDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_composite_alarm" { @@ -423,7 +426,7 @@ func testAccCheckCompositeAlarmExists(ctx context.Context, n string) resource.Te return fmt.Errorf("No CloudWatch Composite Alarm ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) _, err := tfcloudwatch.FindCompositeAlarmByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/cloudwatch/consts.go b/internal/service/cloudwatch/consts.go index 70ac0cdd239..a2ffad3a878 100644 --- a/internal/service/cloudwatch/consts.go +++ b/internal/service/cloudwatch/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch const ( diff --git a/internal/service/cloudwatch/dashboard.go b/internal/service/cloudwatch/dashboard.go index 1a1effafe80..688ea38eb12 100644 --- a/internal/service/cloudwatch/dashboard.go +++ b/internal/service/cloudwatch/dashboard.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( @@ -63,7 +66,7 @@ func resourceDashboardRead(ctx context.Context, d *schema.ResourceData, meta int var diags diag.Diagnostics dashboardName := d.Get("dashboard_name").(string) log.Printf("[DEBUG] Reading CloudWatch Dashboard: %s", dashboardName) - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.GetDashboardInput{ DashboardName: aws.String(d.Id()), @@ -88,7 +91,7 @@ func resourceDashboardRead(ctx context.Context, d *schema.ResourceData, meta int func resourceDashboardPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.PutDashboardInput{ DashboardBody: aws.String(d.Get("dashboard_body").(string)), DashboardName: aws.String(d.Get("dashboard_name").(string)), @@ -109,7 +112,7 @@ func resourceDashboardPut(ctx context.Context, d *schema.ResourceData, meta inte func resourceDashboardDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics log.Printf("[INFO] Deleting CloudWatch Dashboard %s", d.Id()) - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.DeleteDashboardsInput{ DashboardNames: []*string{aws.String(d.Id())}, } @@ -118,7 +121,7 @@ func resourceDashboardDelete(ctx context.Context, d *schema.ResourceData, meta i if IsDashboardNotFoundErr(err) { return diags } - return sdkdiag.AppendErrorf(diags, "Error deleting CloudWatch Dashboard: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting CloudWatch Dashboard: %s", err) } log.Printf("[INFO] CloudWatch Dashboard %s deleted", d.Id()) diff --git a/internal/service/cloudwatch/dashboard_test.go b/internal/service/cloudwatch/dashboard_test.go index 
6cb630e0d8b..f450a278362 100644 --- a/internal/service/cloudwatch/dashboard_test.go +++ b/internal/service/cloudwatch/dashboard_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch_test import ( @@ -123,7 +126,7 @@ func testAccCheckDashboardExists(ctx context.Context, n string, dashboard *cloud return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.GetDashboardInput{ DashboardName: aws.String(rs.Primary.ID), } @@ -141,7 +144,7 @@ func testAccCheckDashboardExists(ctx context.Context, n string, dashboard *cloud func testAccCheckDashboardDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_dashboard" { @@ -167,7 +170,7 @@ func testAccCheckDashboardDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckDashboardDestroyPrevious(ctx context.Context, dashboardName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.GetDashboardInput{ DashboardName: aws.String(dashboardName), @@ -254,7 +257,7 @@ func testAccCheckDashboardBodyIsExpected(ctx context.Context, resourceName, expe return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) params := cloudwatch.GetDashboardInput{ DashboardName: aws.String(rs.Primary.ID), } diff --git a/internal/service/cloudwatch/generate.go 
b/internal/service/cloudwatch/generate.go index 83a8d949a38..5e45b181003 100644 --- a/internal/service/cloudwatch/generate.go +++ b/internal/service/cloudwatch/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package cloudwatch diff --git a/internal/service/cloudwatch/metric_alarm.go b/internal/service/cloudwatch/metric_alarm.go index f2bf525f11c..d0cc39c2ebf 100644 --- a/internal/service/cloudwatch/metric_alarm.go +++ b/internal/service/cloudwatch/metric_alarm.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( @@ -313,7 +316,7 @@ func validMetricAlarm(d *schema.ResourceData) error { func resourceMetricAlarmCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) err := validMetricAlarm(d) if err != nil { @@ -339,7 +342,7 @@ func resourceMetricAlarmCreate(ctx context.Context, d *schema.ResourceData, meta d.SetId(name) // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { alarm, err := FindMetricAlarmByName(ctx, conn, d.Id()) if err != nil { @@ -363,7 +366,7 @@ func resourceMetricAlarmCreate(ctx context.Context, d *schema.ResourceData, meta func resourceMetricAlarmRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) alarm, err := FindMetricAlarmByName(ctx, conn, d.Id()) @@ -415,7 +418,7 @@ func resourceMetricAlarmRead(ctx context.Context, d *schema.ResourceData, meta i func resourceMetricAlarmUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := expandPutMetricAlarmInput(ctx, d) @@ -432,7 +435,7 @@ func resourceMetricAlarmUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceMetricAlarmDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) log.Printf("[INFO] Deleting CloudWatch Metric Alarm: %s", d.Id()) _, err := conn.DeleteAlarmsWithContext(ctx, &cloudwatch.DeleteAlarmsInput{ @@ -485,7 +488,7 @@ func expandPutMetricAlarmInput(ctx context.Context, d *schema.ResourceData) *clo AlarmName: aws.String(d.Get("alarm_name").(string)), ComparisonOperator: aws.String(d.Get("comparison_operator").(string)), EvaluationPeriods: aws.Int64(int64(d.Get("evaluation_periods").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TreatMissingData: aws.String(d.Get("treat_missing_data").(string)), } diff --git 
a/internal/service/cloudwatch/metric_alarm_migrate.go b/internal/service/cloudwatch/metric_alarm_migrate.go index 0c8f0415028..48098a12349 100644 --- a/internal/service/cloudwatch/metric_alarm_migrate.go +++ b/internal/service/cloudwatch/metric_alarm_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( diff --git a/internal/service/cloudwatch/metric_alarm_migrate_test.go b/internal/service/cloudwatch/metric_alarm_migrate_test.go index 20d6c8fa645..8dd2268625d 100644 --- a/internal/service/cloudwatch/metric_alarm_migrate_test.go +++ b/internal/service/cloudwatch/metric_alarm_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch_test import ( diff --git a/internal/service/cloudwatch/metric_alarm_test.go b/internal/service/cloudwatch/metric_alarm_test.go index 2340e8428ab..804a806be2c 100755 --- a/internal/service/cloudwatch/metric_alarm_test.go +++ b/internal/service/cloudwatch/metric_alarm_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch_test import ( @@ -651,7 +654,7 @@ func testAccCheckMetricAlarmExists(ctx context.Context, n string, v *cloudwatch. return fmt.Errorf("No CloudWatch Metric Alarm ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) output, err := tfcloudwatch.FindMetricAlarmByName(ctx, conn, rs.Primary.ID) @@ -667,7 +670,7 @@ func testAccCheckMetricAlarmExists(ctx context.Context, n string, v *cloudwatch. 
func testAccCheckMetricAlarmDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_metric_alarm" { diff --git a/internal/service/cloudwatch/metric_stream.go b/internal/service/cloudwatch/metric_stream.go index 8af96a8941d..69edee37c04 100644 --- a/internal/service/cloudwatch/metric_stream.go +++ b/internal/service/cloudwatch/metric_stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( @@ -196,7 +199,7 @@ func ResourceMetricStream() *schema.Resource { } func resourceMetricStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &cloudwatch.PutMetricStreamInput{ @@ -205,7 +208,7 @@ func resourceMetricStreamCreate(ctx context.Context, d *schema.ResourceData, met Name: aws.String(name), OutputFormat: aws.String(d.Get("output_format").(string)), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("exclude_filter"); ok && v.(*schema.Set).Len() > 0 { @@ -240,7 +243,7 @@ func resourceMetricStreamCreate(ctx context.Context, d *schema.ResourceData, met } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.Arn), tags) // If default tags only, continue. Otherwise, error. 
@@ -257,7 +260,7 @@ func resourceMetricStreamCreate(ctx context.Context, d *schema.ResourceData, met } func resourceMetricStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) output, err := FindMetricStreamByName(ctx, conn, d.Id()) @@ -304,7 +307,7 @@ func resourceMetricStreamRead(ctx context.Context, d *schema.ResourceData, meta } func resourceMetricStreamUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &cloudwatch.PutMetricStreamInput{ @@ -342,7 +345,7 @@ func resourceMetricStreamUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceMetricStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).CloudWatchConn() + conn := meta.(*conns.AWSClient).CloudWatchConn(ctx) log.Printf("[INFO] Deleting CloudWatch Metric Stream: %s", d.Id()) _, err := conn.DeleteMetricStreamWithContext(ctx, &cloudwatch.DeleteMetricStreamInput{ diff --git a/internal/service/cloudwatch/metric_stream_test.go b/internal/service/cloudwatch/metric_stream_test.go index ed05a063be5..b5cf9904a3b 100644 --- a/internal/service/cloudwatch/metric_stream_test.go +++ b/internal/service/cloudwatch/metric_stream_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cloudwatch_test import ( @@ -492,7 +495,7 @@ func testAccCheckMetricStreamExists(ctx context.Context, n string) resource.Test return fmt.Errorf("No CloudWatch Metric Stream ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) _, err := tfcloudwatch.FindMetricStreamByName(ctx, conn, rs.Primary.ID) @@ -502,7 +505,7 @@ func testAccCheckMetricStreamExists(ctx context.Context, n string) resource.Test func testAccCheckMetricStreamDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudWatchConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_metric_stream" { @@ -623,9 +626,9 @@ EOF resource "aws_kinesis_firehose_delivery_stream" "s3_stream" { name = %[1]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose_to_s3.arn bucket_arn = aws_s3_bucket.bucket.arn } diff --git a/internal/service/cloudwatch/service_package_gen.go b/internal/service/cloudwatch/service_package_gen.go index ac025275abf..608f2fb673a 100644 --- a/internal/service/cloudwatch/service_package_gen.go +++ b/internal/service/cloudwatch/service_package_gen.go @@ -5,6 +5,10 @@ package cloudwatch import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudwatch_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatch" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -60,4 +64,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CloudWatch } -var ServicePackage = 
&servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudwatch_sdkv1.CloudWatch, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudwatch_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cloudwatch/sweep.go b/internal/service/cloudwatch/sweep.go index f4619bc40da..c782d348e78 100644 --- a/internal/service/cloudwatch/sweep.go +++ b/internal/service/cloudwatch/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cloudwatch" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepCompositeAlarms(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CloudWatchConn() + conn := client.CloudWatchConn(ctx) input := &cloudwatch.DescribeAlarmsInput{ AlarmTypes: aws.StringSlice([]string{cloudwatch.AlarmTypeCompositeAlarm}), } @@ -58,7 +60,7 @@ func sweepCompositeAlarms(region string) error { return fmt.Errorf("error listing CloudWatch Composite Alarms (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudWatch Composite Alarms (%s): %w", 
region, err) diff --git a/internal/service/cloudwatch/tags_gen.go b/internal/service/cloudwatch/tags_gen.go index 582da482285..c919ebbb8db 100644 --- a/internal/service/cloudwatch/tags_gen.go +++ b/internal/service/cloudwatch/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists cloudwatch service tags. +// listTags lists cloudwatch service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identifier string) (tftags.KeyValueTags, error) { input := &cloudwatch.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identifie // ListTags lists cloudwatch service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CloudWatchConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CloudWatchConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*cloudwatch.Tag) tftags.KeyValueTa return tftags.New(ctx, m) } -// GetTagsIn returns cloudwatch service tags from Context. +// getTagsIn returns cloudwatch service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*cloudwatch.Tag { +func getTagsIn(ctx context.Context) []*cloudwatch.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*cloudwatch.Tag { return nil } -// SetTagsOut sets cloudwatch service tags in Context. -func SetTagsOut(ctx context.Context, tags []*cloudwatch.Tag) { +// setTagsOut sets cloudwatch service tags in Context. +func setTagsOut(ctx context.Context, tags []*cloudwatch.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identif return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates cloudwatch service tags. +// updateTags updates cloudwatch service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn cloudwatchiface.CloudWatchAPI, identif // UpdateTags updates cloudwatch service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CloudWatchConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CloudWatchConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cloudwatch/validate.go b/internal/service/cloudwatch/validate.go index d489392c5c2..9a3efe14a3c 100644 --- a/internal/service/cloudwatch/validate.go +++ b/internal/service/cloudwatch/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( diff --git a/internal/service/cloudwatch/validate_test.go b/internal/service/cloudwatch/validate_test.go index 9759440cd5e..33a356b78a9 100644 --- a/internal/service/cloudwatch/validate_test.go +++ b/internal/service/cloudwatch/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cloudwatch import ( diff --git a/internal/service/codeartifact/authorization_token_data_source.go b/internal/service/codeartifact/authorization_token_data_source.go index c94ec5c6d58..879d641464f 100644 --- a/internal/service/codeartifact/authorization_token_data_source.go +++ b/internal/service/codeartifact/authorization_token_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact import ( @@ -54,7 +57,7 @@ func DataSourceAuthorizationToken() *schema.Resource { func dataSourceAuthorizationTokenRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) domain := d.Get("domain").(string) domainOwner := meta.(*conns.AWSClient).AccountID params := &codeartifact.GetAuthorizationTokenInput{ diff --git a/internal/service/codeartifact/authorization_token_data_source_test.go b/internal/service/codeartifact/authorization_token_data_source_test.go index d1924de8a8e..af72537207b 100644 --- a/internal/service/codeartifact/authorization_token_data_source_test.go +++ b/internal/service/codeartifact/authorization_token_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( diff --git a/internal/service/codeartifact/codeartifact_test.go b/internal/service/codeartifact/codeartifact_test.go index a301ff29f30..553247608aa 100644 --- a/internal/service/codeartifact/codeartifact_test.go +++ b/internal/service/codeartifact/codeartifact_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( diff --git a/internal/service/codeartifact/consts.go b/internal/service/codeartifact/consts.go index 0fb508f82d4..dc97c23cbeb 100644 --- a/internal/service/codeartifact/consts.go +++ b/internal/service/codeartifact/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact const ( diff --git a/internal/service/codeartifact/domain.go b/internal/service/codeartifact/domain.go index a3ce3464d5d..5c54049ad9f 100644 --- a/internal/service/codeartifact/domain.go +++ b/internal/service/codeartifact/domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact import ( @@ -76,11 +79,11 @@ func ResourceDomain() *schema.Resource { func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) input := &codeartifact.CreateDomainInput{ Domain: aws.String(d.Get("domain").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("encryption_key"); ok { @@ -102,7 +105,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, err := DecodeDomainID(d.Id()) if err != nil { @@ -145,7 +148,7 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Printf("[DEBUG] Deleting CodeArtifact Domain: %s", d.Id()) domainOwner, domainName, err := DecodeDomainID(d.Id()) diff --git a/internal/service/codeartifact/domain_permissions_policy.go b/internal/service/codeartifact/domain_permissions_policy.go index 0db8e3bf73c..51be09fdba4 100644 --- 
a/internal/service/codeartifact/domain_permissions_policy.go +++ b/internal/service/codeartifact/domain_permissions_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact import ( @@ -67,7 +70,7 @@ func ResourceDomainPermissionsPolicy() *schema.Resource { func resourceDomainPermissionsPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Print("[DEBUG] Creating CodeArtifact Domain Permissions Policy") policy, err := structure.NormalizeJsonString(d.Get("policy_document").(string)) @@ -101,7 +104,7 @@ func resourceDomainPermissionsPolicyPut(ctx context.Context, d *schema.ResourceD func resourceDomainPermissionsPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Printf("[DEBUG] Reading CodeArtifact Domain Permissions Policy: %s", d.Id()) domainOwner, domainName, err := DecodeDomainID(d.Id()) @@ -147,7 +150,7 @@ func resourceDomainPermissionsPolicyRead(ctx context.Context, d *schema.Resource func resourceDomainPermissionsPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Printf("[DEBUG] Deleting CodeArtifact Domain Permissions Policy: %s", d.Id()) domainOwner, domainName, err := DecodeDomainID(d.Id()) diff --git a/internal/service/codeartifact/domain_permissions_policy_test.go b/internal/service/codeartifact/domain_permissions_policy_test.go index 4fb87b25aee..0ec38bd2f87 100644 --- a/internal/service/codeartifact/domain_permissions_policy_test.go +++ 
b/internal/service/codeartifact/domain_permissions_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( @@ -175,7 +178,7 @@ func testAccCheckDomainPermissionsExists(ctx context.Context, n string) resource return fmt.Errorf("no CodeArtifact domain set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, err := tfcodeartifact.DecodeDomainID(rs.Primary.ID) if err != nil { @@ -198,7 +201,7 @@ func testAccCheckDomainPermissionsDestroy(ctx context.Context) resource.TestChec continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, err := tfcodeartifact.DecodeDomainID(rs.Primary.ID) if err != nil { diff --git a/internal/service/codeartifact/domain_test.go b/internal/service/codeartifact/domain_test.go index 9b28ecc718d..fef6516e373 100644 --- a/internal/service/codeartifact/domain_test.go +++ b/internal/service/codeartifact/domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( @@ -162,7 +165,7 @@ func testAccCheckDomainExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("no CodeArtifact domain set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, err := tfcodeartifact.DecodeDomainID(rs.Primary.ID) if err != nil { @@ -185,7 +188,7 @@ func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, err := tfcodeartifact.DecodeDomainID(rs.Primary.ID) if err != nil { diff --git a/internal/service/codeartifact/generate.go b/internal/service/codeartifact/generate.go index 52e0874dc43..3d3b4379763 100644 --- a/internal/service/codeartifact/generate.go +++ b/internal/service/codeartifact/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package codeartifact diff --git a/internal/service/codeartifact/repository.go b/internal/service/codeartifact/repository.go index ec8372bdbd9..d438821f13b 100644 --- a/internal/service/codeartifact/repository.go +++ b/internal/service/codeartifact/repository.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact import ( @@ -106,12 +109,12 @@ func ResourceRepository() *schema.Resource { func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) input := &codeartifact.CreateRepositoryInput{ Repository: aws.String(d.Get("repository").(string)), Domain: aws.String(d.Get("domain").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -154,7 +157,7 @@ func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) owner, domain, repo, err := DecodeRepositoryID(d.Id()) if err != nil { @@ -200,7 +203,7 @@ func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta in func resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Print("[DEBUG] Updating CodeArtifact Repository") needsUpdate := false @@ -267,7 +270,7 @@ func resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Printf("[DEBUG] Deleting CodeArtifact Repository: %s", d.Id()) owner, domain, repo, err := DecodeRepositoryID(d.Id()) diff --git 
a/internal/service/codeartifact/repository_endpoint_data_source.go b/internal/service/codeartifact/repository_endpoint_data_source.go index 4805826bf07..107bd6a6a36 100644 --- a/internal/service/codeartifact/repository_endpoint_data_source.go +++ b/internal/service/codeartifact/repository_endpoint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact import ( @@ -50,7 +53,7 @@ func DataSourceRepositoryEndpoint() *schema.Resource { func dataSourceRepositoryEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner := meta.(*conns.AWSClient).AccountID domain := d.Get("domain").(string) repo := d.Get("repository").(string) diff --git a/internal/service/codeartifact/repository_endpoint_data_source_test.go b/internal/service/codeartifact/repository_endpoint_data_source_test.go index 375cc03ad3b..b86058b56d2 100644 --- a/internal/service/codeartifact/repository_endpoint_data_source_test.go +++ b/internal/service/codeartifact/repository_endpoint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( diff --git a/internal/service/codeartifact/repository_permissions_policy.go b/internal/service/codeartifact/repository_permissions_policy.go index 4c01339c695..aba3e91b510 100644 --- a/internal/service/codeartifact/repository_permissions_policy.go +++ b/internal/service/codeartifact/repository_permissions_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact import ( @@ -72,7 +75,7 @@ func ResourceRepositoryPermissionsPolicy() *schema.Resource { func resourceRepositoryPermissionsPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Print("[DEBUG] Creating CodeArtifact Repository Permissions Policy") policy, err := structure.NormalizeJsonString(d.Get("policy_document").(string)) @@ -107,7 +110,7 @@ func resourceRepositoryPermissionsPolicyPut(ctx context.Context, d *schema.Resou func resourceRepositoryPermissionsPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Printf("[DEBUG] Reading CodeArtifact Repository Permissions Policy: %s", d.Id()) domainOwner, domainName, repoName, err := DecodeRepositoryID(d.Id()) @@ -155,7 +158,7 @@ func resourceRepositoryPermissionsPolicyRead(ctx context.Context, d *schema.Reso func resourceRepositoryPermissionsPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeArtifactConn() + conn := meta.(*conns.AWSClient).CodeArtifactConn(ctx) log.Printf("[DEBUG] Deleting CodeArtifact Repository Permissions Policy: %s", d.Id()) domainOwner, domainName, repoName, err := DecodeRepositoryID(d.Id()) diff --git a/internal/service/codeartifact/repository_permissions_policy_test.go b/internal/service/codeartifact/repository_permissions_policy_test.go index 309e42cb686..e54e0a4cec3 100644 --- a/internal/service/codeartifact/repository_permissions_policy_test.go +++ b/internal/service/codeartifact/repository_permissions_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( @@ -174,7 +177,7 @@ func testAccCheckRepositoryPermissionsExists(ctx context.Context, n string) reso return fmt.Errorf("no CodeArtifact domain set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, repoName, err := tfcodeartifact.DecodeRepositoryID(rs.Primary.ID) if err != nil { @@ -198,7 +201,7 @@ func testAccCheckRepositoryPermissionsDestroy(ctx context.Context) resource.Test continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) domainOwner, domainName, repoName, err := tfcodeartifact.DecodeRepositoryID(rs.Primary.ID) if err != nil { diff --git a/internal/service/codeartifact/repository_test.go b/internal/service/codeartifact/repository_test.go index 0f7517c025a..265202cb612 100644 --- a/internal/service/codeartifact/repository_test.go +++ b/internal/service/codeartifact/repository_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codeartifact_test import ( @@ -290,7 +293,7 @@ func testAccCheckRepositoryExists(ctx context.Context, n string) resource.TestCh return fmt.Errorf("no CodeArtifact repository set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) owner, domain, repo, err := tfcodeartifact.DecodeRepositoryID(rs.Primary.ID) if err != nil { return err @@ -316,7 +319,7 @@ func testAccCheckRepositoryDestroy(ctx context.Context) resource.TestCheckFunc { if err != nil { return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeArtifactConn(ctx) resp, err := conn.DescribeRepositoryWithContext(ctx, &codeartifact.DescribeRepositoryInput{ Repository: aws.String(repo), Domain: aws.String(domain), diff --git a/internal/service/codeartifact/service_package_gen.go b/internal/service/codeartifact/service_package_gen.go index 14ff8112948..2a18ec34189 100644 --- a/internal/service/codeartifact/service_package_gen.go +++ b/internal/service/codeartifact/service_package_gen.go @@ -5,6 +5,10 @@ package codeartifact import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + codeartifact_sdkv1 "github.com/aws/aws-sdk-go/service/codeartifact" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CodeArtifact } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codeartifact_sdkv1.CodeArtifact, error) { + sess := config["session"].(*session_sdkv1.Session) + + return codeartifact_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/codeartifact/sweep.go b/internal/service/codeartifact/sweep.go index b5d135503be..9d67b48b092 100644 --- a/internal/service/codeartifact/sweep.go +++ b/internal/service/codeartifact/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/codeartifact" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,11 +31,11 @@ func init() { func sweepDomains(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CodeArtifactConn() + conn := client.CodeArtifactConn(ctx) input := &codeartifact.ListDomainsInput{} var sweeperErrs *multierror.Error @@ -76,11 +78,11 @@ func sweepDomains(region string) error { func sweepRepositories(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CodeArtifactConn() + conn := client.CodeArtifactConn(ctx) input := &codeartifact.ListRepositoriesInput{} var sweeperErrs 
*multierror.Error diff --git a/internal/service/codeartifact/tags_gen.go b/internal/service/codeartifact/tags_gen.go index 384fc794766..f631876458a 100644 --- a/internal/service/codeartifact/tags_gen.go +++ b/internal/service/codeartifact/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists codeartifact service tags. +// listTags lists codeartifact service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn codeartifactiface.CodeArtifactAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn codeartifactiface.CodeArtifactAPI, identifier string) (tftags.KeyValueTags, error) { input := &codeartifact.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codeartifactiface.CodeArtifactAPI, ident // ListTags lists codeartifact service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CodeArtifactConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CodeArtifactConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*codeartifact.Tag) tftags.KeyValue return tftags.New(ctx, m) } -// GetTagsIn returns codeartifact service tags from Context. +// getTagsIn returns codeartifact service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*codeartifact.Tag { +func getTagsIn(ctx context.Context) []*codeartifact.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*codeartifact.Tag { return nil } -// SetTagsOut sets codeartifact service tags in Context. -func SetTagsOut(ctx context.Context, tags []*codeartifact.Tag) { +// setTagsOut sets codeartifact service tags in Context. +func setTagsOut(ctx context.Context, tags []*codeartifact.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates codeartifact service tags. +// updateTags updates codeartifact service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn codeartifactiface.CodeArtifactAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn codeartifactiface.CodeArtifactAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn codeartifactiface.CodeArtifactAPI, ide // UpdateTags updates codeartifact service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CodeArtifactConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CodeArtifactConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/codebuild/consts.go b/internal/service/codebuild/consts.go index c6ec7f3ed09..00a7ff7167f 100644 --- a/internal/service/codebuild/consts.go +++ b/internal/service/codebuild/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( diff --git a/internal/service/codebuild/find.go b/internal/service/codebuild/find.go index 1b6b4c03cf1..0dae36d7569 100644 --- a/internal/service/codebuild/find.go +++ b/internal/service/codebuild/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( diff --git a/internal/service/codebuild/generate.go b/internal/service/codebuild/generate.go index da9441c12d6..38b80319dc1 100644 --- a/internal/service/codebuild/generate.go +++ b/internal/service/codebuild/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package codebuild diff --git a/internal/service/codebuild/project.go b/internal/service/codebuild/project.go index 7ba4e8edd51..37fae5f333d 100644 --- a/internal/service/codebuild/project.go +++ b/internal/service/codebuild/project.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( @@ -452,28 +455,6 @@ func ResourceProject() *schema.Resource { MaxItems: 12, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "auth": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "resource": { - Type: schema.TypeString, - Sensitive: true, - Optional: true, - Deprecated: "Use the aws_codebuild_source_credential resource instead", - }, - "type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(codebuild.SourceAuthType_Values(), false), - Deprecated: "Use the aws_codebuild_source_credential resource instead", - }, - }, - }, - Deprecated: "Use the aws_codebuild_source_credential resource instead", - }, "buildspec": { Type: schema.TypeString, Optional: true, @@ -565,28 +546,6 @@ func ResourceProject() *schema.Resource { Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "auth": { - Type: schema.TypeList, - MaxItems: 1, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "resource": { - Type: schema.TypeString, - Sensitive: true, - Optional: true, - Deprecated: "Use the aws_codebuild_source_credential resource instead", - }, - "type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(codebuild.SourceAuthType_Values(), false), - Deprecated: "Use the aws_codebuild_source_credential resource instead", - }, - }, - }, - Deprecated: "Use the aws_codebuild_source_credential resource instead", - }, "buildspec": { Type: schema.TypeString, Optional: true, @@ -738,7 +697,7 @@ func ResourceProject() *schema.Resource { func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) projectEnv := expandProjectEnvironment(d) 
projectSource := expandProjectSource(d) @@ -769,7 +728,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int LogsConfig: projectLogsConfig, BuildBatchConfig: projectBatchConfig, FileSystemLocations: projectFileSystemLocations, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("cache"); ok { @@ -839,7 +798,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int resp, err = conn.CreateProjectWithContext(ctx, input) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating CodeBuild project: %s", err) + return sdkdiag.AppendErrorf(diags, "creating CodeBuild project: %s", err) } d.SetId(aws.StringValue(resp.Project.Arn)) @@ -856,7 +815,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int _, err = conn.UpdateProjectVisibilityWithContext(ctx, visInput) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error updating CodeBuild project (%s) visibility: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating CodeBuild project (%s) visibility: %s", d.Id(), err) } } return append(diags, resourceProjectRead(ctx, d, meta)...) @@ -1292,17 +1251,6 @@ func expandProjectSourceData(data map[string]interface{}) codebuild.ProjectSourc projectSource.ReportBuildStatus = aws.Bool(data["report_build_status"].(bool)) } - // Probe data for auth details (max of 1 auth per ProjectSource object) - if v, ok := data["auth"]; ok && len(v.([]interface{})) > 0 { - if auths := v.([]interface{}); auths[0] != nil { - auth := auths[0].(map[string]interface{}) - projectSource.Auth = &codebuild.SourceAuth{ - Type: aws.String(auth["type"].(string)), - Resource: aws.String(auth["resource"].(string)), - } - } - } - // Only valid for CODECOMMIT, GITHUB, GITHUB_ENTERPRISE, BITBUCKET source types. 
if sourceType == codebuild.SourceTypeCodecommit || sourceType == codebuild.SourceTypeGithub || sourceType == codebuild.SourceTypeGithubEnterprise || sourceType == codebuild.SourceTypeBitbucket { if v, ok := data["git_submodules_config"]; ok && len(v.([]interface{})) > 0 { @@ -1341,7 +1289,7 @@ func expandProjectSourceData(data map[string]interface{}) codebuild.ProjectSourc func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) project, err := FindProjectByARN(ctx, conn, d.Id()) @@ -1419,14 +1367,14 @@ func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("badge_url", "") } - SetTagsOut(ctx, project.Tags) + setTagsOut(ctx, project.Tags) return diags } func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) if d.HasChanges("project_visibility", "resource_access_role") { visInput := &codebuild.UpdateProjectVisibilityInput{ @@ -1440,7 +1388,7 @@ func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta int _, err := conn.UpdateProjectVisibilityWithContext(ctx, visInput) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error updating CodeBuild project (%s) visibility: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating CodeBuild project (%s) visibility: %s", d.Id(), err) } } @@ -1560,7 +1508,7 @@ func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta int // The documentation clearly says "The replacement set of tags for this build project." // But its a slice of pointers so if not set for every update, they get removed. 
- input.Tags = GetTagsIn(ctx) + input.Tags = getTagsIn(ctx) // Handle IAM eventual consistency err := retry.RetryContext(ctx, propagationTimeout, func() *retry.RetryError { @@ -1591,7 +1539,7 @@ func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceProjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) _, err := conn.DeleteProjectWithContext(ctx, &codebuild.DeleteProjectInput{ Name: aws.String(d.Id()), @@ -1833,9 +1781,6 @@ func flattenProjectSourceData(source *codebuild.ProjectSource) interface{} { m["build_status_config"] = flattenProjectBuildStatusConfig(source.BuildStatusConfig) - if source.Auth != nil { - m["auth"] = []interface{}{sourceAuthToMap(source.Auth)} - } if source.SourceIdentifier != nil { m["source_identifier"] = aws.StringValue(source.SourceIdentifier) } @@ -2001,17 +1946,6 @@ func environmentVariablesToMap(environmentVariables []*codebuild.EnvironmentVari return envVariables } -func sourceAuthToMap(sourceAuth *codebuild.SourceAuth) map[string]interface{} { - auth := map[string]interface{}{} - auth["type"] = aws.StringValue(sourceAuth.Type) - - if sourceAuth.Resource != nil { - auth["resource"] = aws.StringValue(sourceAuth.Resource) - } - - return auth -} - func ValidProjectName(v interface{}, k string) (ws []string, errors []error) { value := v.(string) if !regexp.MustCompile(`^[A-Za-z0-9]`).MatchString(value) { diff --git a/internal/service/codebuild/project_test.go b/internal/service/codebuild/project_test.go index 8bdf09776f2..df9295b6e1a 100644 --- a/internal/service/codebuild/project_test.go +++ b/internal/service/codebuild/project_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codebuild_test import ( @@ -91,7 +94,6 @@ func TestAccCodeBuildProject_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "project_visibility", "PRIVATE"), resource.TestCheckResourceAttrPair(resourceName, "service_role", roleResourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "source.#", "1"), - resource.TestCheckResourceAttr(resourceName, "source.0.auth.#", "0"), resource.TestCheckResourceAttr(resourceName, "source.0.git_clone_depth", "0"), resource.TestCheckResourceAttr(resourceName, "source.0.insecure_ssl", "false"), resource.TestCheckResourceAttr(resourceName, "source.0.location", "https://github.com/hashibot-test/aws-test.git"), @@ -2631,7 +2633,7 @@ func testAccCheckProjectExists(ctx context.Context, n string, project *codebuild return fmt.Errorf("No CodeBuild Project ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) output, err := tfcodebuild.FindProjectByARN(ctx, conn, rs.Primary.ID) if err != nil { @@ -2650,7 +2652,7 @@ func testAccCheckProjectExists(ctx context.Context, n string, project *codebuild func testAccCheckProjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codebuild_project" { @@ -2684,7 +2686,7 @@ func testAccCheckProjectCertificate(project *codebuild.Project, expectedCertific } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) _, err := tfcodebuild.FindProjectByARN(ctx, conn, "tf-acc-test-precheck") @@ -5183,7 +5185,7 @@ resource "aws_subnet" "private" { } resource 
"aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/codebuild/report_group.go b/internal/service/codebuild/report_group.go index b5352082182..821f838d51c 100644 --- a/internal/service/codebuild/report_group.go +++ b/internal/service/codebuild/report_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( @@ -114,13 +117,13 @@ func ResourceReportGroup() *schema.Resource { func resourceReportGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) input := &codebuild.CreateReportGroupInput{ Name: aws.String(d.Get("name").(string)), Type: aws.String(d.Get("type").(string)), ExportConfig: expandReportGroupExportConfig(d.Get("export_config").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } resp, err := conn.CreateReportGroupWithContext(ctx, input) @@ -135,7 +138,7 @@ func resourceReportGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceReportGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) reportGroup, err := FindReportGroupByARN(ctx, conn, d.Id()) if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, codebuild.ErrCodeResourceNotFoundException) { @@ -170,14 +173,14 @@ func resourceReportGroupRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "setting export config: %s", err) } - SetTagsOut(ctx, reportGroup.Tags) + setTagsOut(ctx, reportGroup.Tags) return diags } func resourceReportGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) input := &codebuild.UpdateReportGroupInput{ Arn: aws.String(d.Id()), @@ -188,7 +191,7 @@ func resourceReportGroupUpdate(ctx context.Context, d *schema.ResourceData, meta } if d.HasChange("tags_all") { - input.Tags = GetTagsIn(ctx) + input.Tags = getTagsIn(ctx) } _, err := conn.UpdateReportGroupWithContext(ctx, input) @@ -201,7 +204,7 @@ func resourceReportGroupUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceReportGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) deleteOpts := &codebuild.DeleteReportGroupInput{ Arn: aws.String(d.Id()), diff --git a/internal/service/codebuild/report_group_test.go b/internal/service/codebuild/report_group_test.go index 4b98f3cb3f1..49b50bb7cf0 100644 --- a/internal/service/codebuild/report_group_test.go +++ b/internal/service/codebuild/report_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codebuild_test import ( @@ -199,7 +202,7 @@ func TestAccCodeBuildReportGroup_disappears(t *testing.T) { } func testAccPreCheckReportGroup(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) input := &codebuild.ListReportGroupsInput{} @@ -216,7 +219,7 @@ func testAccPreCheckReportGroup(ctx context.Context, t *testing.T) { func testAccCheckReportGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codebuild_report_group" { @@ -243,7 +246,7 @@ func testAccCheckReportGroupExists(ctx context.Context, name string, reportGroup return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) resp, err := tfcodebuild.FindReportGroupByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/codebuild/resource_policy.go b/internal/service/codebuild/resource_policy.go index 610d793e815..8844f2a5a61 100644 --- a/internal/service/codebuild/resource_policy.go +++ b/internal/service/codebuild/resource_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( @@ -53,7 +56,7 @@ func ResourceResourcePolicy() *schema.Resource { func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -78,7 +81,7 @@ func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) output, err := FindResourcePolicyByARN(ctx, conn, d.Id()) @@ -112,7 +115,7 @@ func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, met func resourceResourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) deleteOpts := &codebuild.DeleteResourcePolicyInput{ ResourceArn: aws.String(d.Id()), diff --git a/internal/service/codebuild/resource_policy_test.go b/internal/service/codebuild/resource_policy_test.go index 584f052da43..32077ec9396 100644 --- a/internal/service/codebuild/resource_policy_test.go +++ b/internal/service/codebuild/resource_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codebuild_test import ( @@ -96,7 +99,7 @@ func TestAccCodeBuildResourcePolicy_disappears_resource(t *testing.T) { func testAccCheckResourcePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codebuild_resource_policy" { @@ -127,7 +130,7 @@ func testAccCheckResourcePolicyExists(ctx context.Context, name string, policy * return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) resp, err := tfcodebuild.FindResourcePolicyByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/codebuild/service_package_gen.go b/internal/service/codebuild/service_package_gen.go index 50faa001f7a..7f2fef33260 100644 --- a/internal/service/codebuild/service_package_gen.go +++ b/internal/service/codebuild/service_package_gen.go @@ -5,6 +5,10 @@ package codebuild import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + codebuild_sdkv1 "github.com/aws/aws-sdk-go/service/codebuild" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -56,4 +60,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CodeBuild } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codebuild_sdkv1.CodeBuild, error) { + sess := config["session"].(*session_sdkv1.Session) + + return codebuild_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/codebuild/source_credential.go b/internal/service/codebuild/source_credential.go index 1d06d0e8e9b..7d29ab77638 100644 --- a/internal/service/codebuild/source_credential.go +++ b/internal/service/codebuild/source_credential.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( @@ -60,7 +63,7 @@ func ResourceSourceCredential() *schema.Resource { func resourceSourceCredentialCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) authType := d.Get("auth_type").(string) @@ -76,7 +79,7 @@ func resourceSourceCredentialCreate(ctx context.Context, d *schema.ResourceData, resp, err := conn.ImportSourceCredentialsWithContext(ctx, createOpts) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error importing source credentials: %s", err) + return sdkdiag.AppendErrorf(diags, "importing source credentials: %s", err) } d.SetId(aws.StringValue(resp.Arn)) @@ -86,7 +89,7 @@ func resourceSourceCredentialCreate(ctx context.Context, d *schema.ResourceData, func resourceSourceCredentialRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) resp, err := FindSourceCredentialByARN(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -108,7 +111,7 @@ func 
resourceSourceCredentialRead(ctx context.Context, d *schema.ResourceData, m func resourceSourceCredentialDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) deleteOpts := &codebuild.DeleteSourceCredentialsInput{ Arn: aws.String(d.Id()), @@ -118,7 +121,7 @@ func resourceSourceCredentialDelete(ctx context.Context, d *schema.ResourceData, if tfawserr.ErrCodeEquals(err, codebuild.ErrCodeResourceNotFoundException) { return diags } - return sdkdiag.AppendErrorf(diags, "Error deleting CodeBuild Source Credentials(%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting CodeBuild Source Credentials(%s): %s", d.Id(), err) } return diags diff --git a/internal/service/codebuild/source_credential_test.go b/internal/service/codebuild/source_credential_test.go index 83e2cdeedc5..7fe25be1b2e 100644 --- a/internal/service/codebuild/source_credential_test.go +++ b/internal/service/codebuild/source_credential_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codebuild_test import ( @@ -122,7 +125,7 @@ func TestAccCodeBuildSourceCredential_disappears(t *testing.T) { func testAccCheckSourceCredentialDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codebuild_source_credential" { @@ -152,7 +155,7 @@ func testAccCheckSourceCredentialExists(ctx context.Context, name string, source return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) output, err := tfcodebuild.FindSourceCredentialByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/codebuild/status.go b/internal/service/codebuild/status.go index f0554c726fe..19c237d16f3 100644 --- a/internal/service/codebuild/status.go +++ b/internal/service/codebuild/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( diff --git a/internal/service/codebuild/sweep.go b/internal/service/codebuild/sweep.go index 0e8f6fd482a..d659ca0fad5 100644 --- a/internal/service/codebuild/sweep.go +++ b/internal/service/codebuild/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/codebuild" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -34,12 +36,12 @@ func init() { func sweepReportGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CodeBuildConn() + conn := client.CodeBuildConn(ctx) sweepResources := make([]sweep.Sweepable, 0) input := &codebuild.ListReportGroupsInput{} @@ -70,7 +72,7 @@ func sweepReportGroups(region string) error { return fmt.Errorf("error retrieving CodeBuild ReportGroups: %w", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { return fmt.Errorf("error sweeping CodeBuild ReportGroups: %w", err) } @@ -79,12 +81,12 @@ func sweepReportGroups(region string) error { func sweepProjects(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CodeBuildConn() + conn := client.CodeBuildConn(ctx) sweepResources := make([]sweep.Sweepable, 0) input := &codebuild.ListProjectsInput{} @@ -113,7 +115,7 @@ func sweepProjects(region string) error { return fmt.Errorf("error retrieving CodeBuild Projects: %w", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { return 
fmt.Errorf("error sweeping CodeBuild Projects: %w", err) } @@ -122,12 +124,12 @@ func sweepProjects(region string) error { func sweepSourceCredentials(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CodeBuildConn() + conn := client.CodeBuildConn(ctx) var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -151,7 +153,7 @@ func sweepSourceCredentials(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error retrieving CodeBuild Source Credentials: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping CodeBuild Source Credentials: %w", err)) } diff --git a/internal/service/codebuild/tags_gen.go b/internal/service/codebuild/tags_gen.go index 12510d165c5..f96f2056d22 100644 --- a/internal/service/codebuild/tags_gen.go +++ b/internal/service/codebuild/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*codebuild.Tag) tftags.KeyValueTag return tftags.New(ctx, m) } -// GetTagsIn returns codebuild service tags from Context. +// getTagsIn returns codebuild service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*codebuild.Tag { +func getTagsIn(ctx context.Context) []*codebuild.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*codebuild.Tag { return nil } -// SetTagsOut sets codebuild service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*codebuild.Tag) { +// setTagsOut sets codebuild service tags in Context. +func setTagsOut(ctx context.Context, tags []*codebuild.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/codebuild/wait.go b/internal/service/codebuild/wait.go index 594017293c3..207e940faa4 100644 --- a/internal/service/codebuild/wait.go +++ b/internal/service/codebuild/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( diff --git a/internal/service/codebuild/webhook.go b/internal/service/codebuild/webhook.go index e4e6b40822d..0bc4bf27a73 100644 --- a/internal/service/codebuild/webhook.go +++ b/internal/service/codebuild/webhook.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild import ( @@ -97,7 +100,7 @@ func ResourceWebhook() *schema.Resource { func resourceWebhookCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) input := &codebuild.CreateWebhookInput{ ProjectName: aws.String(d.Get("project_name").(string)), @@ -163,7 +166,7 @@ func expandWebhookFilterData(data map[string]interface{}) []*codebuild.WebhookFi func resourceWebhookRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) resp, err := conn.BatchGetProjectsWithContext(ctx, &codebuild.BatchGetProjectsInput{ Names: []*string{ @@ -216,7 +219,7 @@ func resourceWebhookRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var 
diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) var err error filterGroups := expandWebhookFilterGroups(d) @@ -251,7 +254,7 @@ func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceWebhookDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeBuildConn() + conn := meta.(*conns.AWSClient).CodeBuildConn(ctx) _, err := conn.DeleteWebhookWithContext(ctx, &codebuild.DeleteWebhookInput{ ProjectName: aws.String(d.Id()), diff --git a/internal/service/codebuild/webhook_test.go b/internal/service/codebuild/webhook_test.go index ccd0af560fd..01b1d9aab0d 100644 --- a/internal/service/codebuild/webhook_test.go +++ b/internal/service/codebuild/webhook_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codebuild_test import ( @@ -278,7 +281,7 @@ func testAccCheckWebhookFilter(webhook *codebuild.Webhook, expectedFilters [][]* func testAccCheckWebhookDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codebuild_webhook" { @@ -315,7 +318,7 @@ func testAccCheckWebhookExists(ctx context.Context, name string, webhook *codebu return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeBuildConn(ctx) resp, err := conn.BatchGetProjectsWithContext(ctx, &codebuild.BatchGetProjectsInput{ Names: []*string{ diff --git a/internal/service/codecommit/approval_rule_template.go b/internal/service/codecommit/approval_rule_template.go index 8240508022d..85c90e6809a 100644 --- 
a/internal/service/codecommit/approval_rule_template.go +++ b/internal/service/codecommit/approval_rule_template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( @@ -78,7 +81,7 @@ func ResourceApprovalRuleTemplate() *schema.Resource { func resourceApprovalRuleTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) name := d.Get("name").(string) @@ -103,7 +106,7 @@ func resourceApprovalRuleTemplateCreate(ctx context.Context, d *schema.ResourceD func resourceApprovalRuleTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) input := &codecommit.GetApprovalRuleTemplateInput{ ApprovalRuleTemplateName: aws.String(d.Id()), @@ -141,7 +144,7 @@ func resourceApprovalRuleTemplateRead(ctx context.Context, d *schema.ResourceDat func resourceApprovalRuleTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) if d.HasChange("description") { input := &codecommit.UpdateApprovalRuleTemplateDescriptionInput{ @@ -192,7 +195,7 @@ func resourceApprovalRuleTemplateUpdate(ctx context.Context, d *schema.ResourceD func resourceApprovalRuleTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) input := &codecommit.DeleteApprovalRuleTemplateInput{ ApprovalRuleTemplateName: aws.String(d.Id()), diff --git 
a/internal/service/codecommit/approval_rule_template_association.go b/internal/service/codecommit/approval_rule_template_association.go index 0171e6805d2..f258205a0f3 100644 --- a/internal/service/codecommit/approval_rule_template_association.go +++ b/internal/service/codecommit/approval_rule_template_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( @@ -50,7 +53,7 @@ func ResourceApprovalRuleTemplateAssociation() *schema.Resource { func resourceApprovalRuleTemplateAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) approvalRuleTemplateName := d.Get("approval_rule_template_name").(string) repositoryName := d.Get("repository_name").(string) @@ -73,7 +76,7 @@ func resourceApprovalRuleTemplateAssociationCreate(ctx context.Context, d *schem func resourceApprovalRuleTemplateAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) approvalRuleTemplateName, repositoryName, err := ApprovalRuleTemplateAssociationParseID(d.Id()) @@ -101,7 +104,7 @@ func resourceApprovalRuleTemplateAssociationRead(ctx context.Context, d *schema. 
func resourceApprovalRuleTemplateAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) approvalRuleTemplateName, repositoryName, err := ApprovalRuleTemplateAssociationParseID(d.Id()) diff --git a/internal/service/codecommit/approval_rule_template_association_test.go b/internal/service/codecommit/approval_rule_template_association_test.go index cab6e8741da..dad29db8600 100644 --- a/internal/service/codecommit/approval_rule_template_association_test.go +++ b/internal/service/codecommit/approval_rule_template_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit_test import ( @@ -103,7 +106,7 @@ func testAccCheckApprovalRuleTemplateAssociationExists(ctx context.Context, name return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) approvalTemplateName, repositoryName, err := tfcodecommit.ApprovalRuleTemplateAssociationParseID(rs.Primary.ID) @@ -117,7 +120,7 @@ func testAccCheckApprovalRuleTemplateAssociationExists(ctx context.Context, name func testAccCheckApprovalRuleTemplateAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codecommit_approval_rule_template_association" { diff --git a/internal/service/codecommit/approval_rule_template_data_source.go b/internal/service/codecommit/approval_rule_template_data_source.go index d4a3b939e84..ee4f1a419fe 100644 --- a/internal/service/codecommit/approval_rule_template_data_source.go +++ 
b/internal/service/codecommit/approval_rule_template_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( @@ -58,7 +61,7 @@ func DataSourceApprovalRuleTemplate() *schema.Resource { func dataSourceApprovalRuleTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) templateName := d.Get("name").(string) input := &codecommit.GetApprovalRuleTemplateInput{ diff --git a/internal/service/codecommit/approval_rule_template_data_source_test.go b/internal/service/codecommit/approval_rule_template_data_source_test.go index af906d26b6a..30d9d19aa63 100644 --- a/internal/service/codecommit/approval_rule_template_data_source_test.go +++ b/internal/service/codecommit/approval_rule_template_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit_test import ( diff --git a/internal/service/codecommit/approval_rule_template_test.go b/internal/service/codecommit/approval_rule_template_test.go index f734a11a645..7e9b76682e8 100644 --- a/internal/service/codecommit/approval_rule_template_test.go +++ b/internal/service/codecommit/approval_rule_template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codecommit_test import ( @@ -163,7 +166,7 @@ func testAccCheckApprovalRuleTemplateExists(ctx context.Context, name string) re return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) _, err := conn.GetApprovalRuleTemplateWithContext(ctx, &codecommit.GetApprovalRuleTemplateInput{ ApprovalRuleTemplateName: aws.String(rs.Primary.ID), @@ -175,7 +178,7 @@ func testAccCheckApprovalRuleTemplateExists(ctx context.Context, name string) re func testAccCheckApprovalRuleTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codecommit_approval_rule_template" { diff --git a/internal/service/codecommit/consts.go b/internal/service/codecommit/consts.go index 3385290d4e2..04e56aa61a8 100644 --- a/internal/service/codecommit/consts.go +++ b/internal/service/codecommit/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit const ( diff --git a/internal/service/codecommit/find.go b/internal/service/codecommit/find.go index 19bf93e5f31..3f5f68ab4a9 100644 --- a/internal/service/codecommit/find.go +++ b/internal/service/codecommit/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( diff --git a/internal/service/codecommit/generate.go b/internal/service/codecommit/generate.go index e173b1fdf64..e9349829fa0 100644 --- a/internal/service/codecommit/generate.go +++ b/internal/service/codecommit/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package codecommit diff --git a/internal/service/codecommit/repository.go b/internal/service/codecommit/repository.go index 7ada656a77c..fce802bca87 100644 --- a/internal/service/codecommit/repository.go +++ b/internal/service/codecommit/repository.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( @@ -79,17 +82,17 @@ func ResourceRepository() *schema.Resource { func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) input := &codecommit.CreateRepositoryInput{ RepositoryName: aws.String(d.Get("repository_name").(string)), RepositoryDescription: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateRepositoryWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating CodeCommit Repository: %s", err) + return sdkdiag.AppendErrorf(diags, "creating CodeCommit Repository: %s", err) } d.SetId(d.Get("repository_name").(string)) @@ -109,7 +112,7 @@ func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) input := &codecommit.GetRepositoryInput{ RepositoryName: aws.String(d.Id()), @@ -142,7 +145,7 @@ func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta in func 
resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) if d.HasChange("default_branch") { if err := resourceUpdateDefaultBranch(ctx, conn, d); err != nil { @@ -161,14 +164,14 @@ func resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) log.Printf("[DEBUG] CodeCommit Delete Repository: %s", d.Id()) _, err := conn.DeleteRepositoryWithContext(ctx, &codecommit.DeleteRepositoryInput{ RepositoryName: aws.String(d.Id()), }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error deleting CodeCommit Repository: %s", err.Error()) + return sdkdiag.AppendErrorf(diags, "deleting CodeCommit Repository: %s", err.Error()) } return diags @@ -182,7 +185,7 @@ func resourceUpdateDescription(ctx context.Context, conn *codecommit.CodeCommit, _, err := conn.UpdateRepositoryDescriptionWithContext(ctx, branchInput) if err != nil { - return fmt.Errorf("Error Updating Repository Description for CodeCommit Repository: %s", err.Error()) + return fmt.Errorf("Updating Repository Description for CodeCommit Repository: %s", err.Error()) } return nil @@ -195,7 +198,7 @@ func resourceUpdateDefaultBranch(ctx context.Context, conn *codecommit.CodeCommi out, err := conn.ListBranchesWithContext(ctx, input) if err != nil { - return fmt.Errorf("Error reading CodeCommit Repository branches: %s", err.Error()) + return fmt.Errorf("reading CodeCommit Repository branches: %s", err.Error()) } if len(out.Branches) == 0 { @@ -210,7 +213,7 @@ func resourceUpdateDefaultBranch(ctx context.Context, conn *codecommit.CodeCommi _, err = 
conn.UpdateDefaultBranchWithContext(ctx, branchInput) if err != nil { - return fmt.Errorf("Error Updating Default Branch for CodeCommit Repository: %s", err.Error()) + return fmt.Errorf("Updating Default Branch for CodeCommit Repository: %s", err.Error()) } return nil diff --git a/internal/service/codecommit/repository_data_source.go b/internal/service/codecommit/repository_data_source.go index 45c1f5bc567..d569bab4696 100644 --- a/internal/service/codecommit/repository_data_source.go +++ b/internal/service/codecommit/repository_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( @@ -51,7 +54,7 @@ func DataSourceRepository() *schema.Resource { func dataSourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) repositoryName := d.Get("repository_name").(string) input := &codecommit.GetRepositoryInput{ @@ -65,7 +68,7 @@ func dataSourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta d.SetId("") return sdkdiag.AppendErrorf(diags, "Resource codecommit repository not found for %s", repositoryName) } else { - return sdkdiag.AppendErrorf(diags, "Error reading CodeCommit Repository: %s", err) + return sdkdiag.AppendErrorf(diags, "reading CodeCommit Repository: %s", err) } } diff --git a/internal/service/codecommit/repository_data_source_test.go b/internal/service/codecommit/repository_data_source_test.go index 4f5e19386e8..6aebe2d562d 100644 --- a/internal/service/codecommit/repository_data_source_test.go +++ b/internal/service/codecommit/repository_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codecommit_test import ( diff --git a/internal/service/codecommit/repository_test.go b/internal/service/codecommit/repository_test.go index 8d6fe8a5400..1195f852909 100644 --- a/internal/service/codecommit/repository_test.go +++ b/internal/service/codecommit/repository_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit_test import ( @@ -198,7 +201,7 @@ func testAccCheckRepositoryExists(ctx context.Context, name string) resource.Tes return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) out, err := conn.GetRepositoryWithContext(ctx, &codecommit.GetRepositoryInput{ RepositoryName: aws.String(rs.Primary.ID), }) @@ -222,7 +225,7 @@ func testAccCheckRepositoryExists(ctx context.Context, name string) resource.Tes func testAccCheckRepositoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codecommit_repository" { diff --git a/internal/service/codecommit/service_package_gen.go b/internal/service/codecommit/service_package_gen.go index 59f00740832..e63566ac80e 100644 --- a/internal/service/codecommit/service_package_gen.go +++ b/internal/service/codecommit/service_package_gen.go @@ -5,6 +5,10 @@ package codecommit import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + codecommit_sdkv1 "github.com/aws/aws-sdk-go/service/codecommit" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -61,4 +65,13 @@ func 
(p *servicePackage) ServicePackageName() string { return names.CodeCommit } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codecommit_sdkv1.CodeCommit, error) { + sess := config["session"].(*session_sdkv1.Session) + + return codecommit_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/codecommit/tags_gen.go b/internal/service/codecommit/tags_gen.go index d8ab405f749..ba69736c276 100644 --- a/internal/service/codecommit/tags_gen.go +++ b/internal/service/codecommit/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists codecommit service tags. +// listTags lists codecommit service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn codecommitiface.CodeCommitAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn codecommitiface.CodeCommitAPI, identifier string) (tftags.KeyValueTags, error) { input := &codecommit.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codecommitiface.CodeCommitAPI, identifie // ListTags lists codecommit service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CodeCommitConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CodeCommitConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from codecommit service tags. +// KeyValueTags creates tftags.KeyValueTags from codecommit service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns codecommit service tags from Context. +// getTagsIn returns codecommit service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets codecommit service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets codecommit service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates codecommit service tags. +// updateTags updates codecommit service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn codecommitiface.CodeCommitAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn codecommitiface.CodeCommitAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn codecommitiface.CodeCommitAPI, identif // UpdateTags updates codecommit service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CodeCommitConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CodeCommitConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/codecommit/trigger.go b/internal/service/codecommit/trigger.go index 935e8b744e7..a0af93e7123 100644 --- a/internal/service/codecommit/trigger.go +++ b/internal/service/codecommit/trigger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codecommit import ( @@ -76,7 +79,7 @@ func ResourceTrigger() *schema.Resource { func resourceTriggerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) // Expand the "trigger" set to aws-sdk-go compat []*codecommit.RepositoryTrigger triggers := expandTriggers(d.Get("trigger").(*schema.Set).List()) @@ -88,7 +91,7 @@ func resourceTriggerCreate(ctx context.Context, d *schema.ResourceData, meta int resp, err := conn.PutRepositoryTriggersWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating CodeCommit Trigger: %s", err) + return sdkdiag.AppendErrorf(diags, "creating CodeCommit Trigger: %s", err) } log.Printf("[INFO] Code Commit Trigger Created %s input %s", resp, input) @@ -101,7 +104,7 @@ func resourceTriggerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceTriggerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) input := &codecommit.GetRepositoryTriggersInput{ RepositoryName: aws.String(d.Id()), @@ -109,7 +112,7 @@ func resourceTriggerRead(ctx context.Context, d *schema.ResourceData, meta inter resp, err := conn.GetRepositoryTriggersWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error reading CodeCommit Trigger: %s", err.Error()) + return sdkdiag.AppendErrorf(diags, "reading CodeCommit Trigger: %s", err.Error()) } log.Printf("[DEBUG] CodeCommit Trigger: %s", resp) @@ -119,7 +122,7 @@ func resourceTriggerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceTriggerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - 
conn := meta.(*conns.AWSClient).CodeCommitConn() + conn := meta.(*conns.AWSClient).CodeCommitConn(ctx) log.Printf("[DEBUG] Deleting Trigger: %q", d.Id()) diff --git a/internal/service/codecommit/trigger_test.go b/internal/service/codecommit/trigger_test.go index 84ecd61e9de..fbcdd7cb026 100644 --- a/internal/service/codecommit/trigger_test.go +++ b/internal/service/codecommit/trigger_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codecommit_test import ( @@ -39,7 +42,7 @@ func TestAccCodeCommitTrigger_basic(t *testing.T) { func testAccCheckTriggerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codecommit_trigger" { @@ -75,7 +78,7 @@ func testAccCheckTriggerExists(ctx context.Context, name string) resource.TestCh return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeCommitConn(ctx) out, err := conn.GetRepositoryTriggersWithContext(ctx, &codecommit.GetRepositoryTriggersInput{ RepositoryName: aws.String(rs.Primary.ID), }) diff --git a/internal/service/codegurureviewer/generate.go b/internal/service/codegurureviewer/generate.go index 488d44f4153..de601265188 100644 --- a/internal/service/codegurureviewer/generate.go +++ b/internal/service/codegurureviewer/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
 package codegurureviewer
diff --git a/internal/service/codegurureviewer/repository_association.go b/internal/service/codegurureviewer/repository_association.go
index 88f8213313a..552b80064e8 100644
--- a/internal/service/codegurureviewer/repository_association.go
+++ b/internal/service/codegurureviewer/repository_association.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codegurureviewer
 
 import (
@@ -266,10 +269,10 @@ const (
 )
 
 func resourceRepositoryAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodeGuruReviewerConn()
+	conn := meta.(*conns.AWSClient).CodeGuruReviewerConn(ctx)
 
 	in := &codegurureviewer.AssociateRepositoryInput{
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}
 
 	in.KMSKeyDetails = expandKMSKeyDetails(d.Get("kms_key_details").([]interface{}))
@@ -298,7 +301,7 @@ func resourceRepositoryAssociationCreate(ctx context.Context, d *schema.Resource
 }
 
 func resourceRepositoryAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodeGuruReviewerConn()
+	conn := meta.(*conns.AWSClient).CodeGuruReviewerConn(ctx)
 
 	out, err := findRepositoryAssociationByID(ctx, conn, d.Id())
@@ -342,7 +345,7 @@ func resourceRepositoryAssociationUpdate(ctx context.Context, d *schema.Resource
 }
 
 func resourceRepositoryAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodeGuruReviewerConn()
+	conn := meta.(*conns.AWSClient).CodeGuruReviewerConn(ctx)
 
 	log.Printf("[INFO] Deleting CodeGuruReviewer RepositoryAssociation %s", d.Id())
diff --git a/internal/service/codegurureviewer/repository_association_test.go b/internal/service/codegurureviewer/repository_association_test.go
index 0f4d474269b..7a23e647317 100644
--- a/internal/service/codegurureviewer/repository_association_test.go
+++ b/internal/service/codegurureviewer/repository_association_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codegurureviewer_test
 
 import (
@@ -202,7 +205,7 @@ func TestAccCodeGuruReviewerRepositoryAssociation_disappears(t *testing.T) {
 func testAccCheckRepositoryAssociationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodeGuruReviewerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodeGuruReviewerConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_codegurureviewer_repository_association" {
@@ -238,7 +241,7 @@ func testAccCheckRepositoryAssociationExists(ctx context.Context, name string, r
 			return create.Error(names.CodeGuruReviewer, create.ErrActionCheckingExistence, tfcodegurureviewer.ResNameRepositoryAssociation, name, errors.New("not set"))
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodeGuruReviewerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodeGuruReviewerConn(ctx)
 		resp, err := conn.DescribeRepositoryAssociationWithContext(ctx, &codegurureviewer.DescribeRepositoryAssociationInput{
 			AssociationArn: aws.String(rs.Primary.ID),
 		})
@@ -254,7 +257,7 @@ func testAccCheckRepositoryAssociationExists(ctx context.Context, name string, r
 }
 
 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).CodeGuruReviewerConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).CodeGuruReviewerConn(ctx)
 
 	input := &codegurureviewer.ListRepositoryAssociationsInput{}
 	_, err := conn.ListRepositoryAssociationsWithContext(ctx, input)
diff --git a/internal/service/codegurureviewer/service_package_gen.go b/internal/service/codegurureviewer/service_package_gen.go
index c85c50805cb..422603b711d 100644
--- a/internal/service/codegurureviewer/service_package_gen.go
+++ b/internal/service/codegurureviewer/service_package_gen.go
@@ -5,6 +5,10 @@ package codegurureviewer
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	codegurureviewer_sdkv1 "github.com/aws/aws-sdk-go/service/codegurureviewer"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.CodeGuruReviewer
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codegurureviewer_sdkv1.CodeGuruReviewer, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return codegurureviewer_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/codegurureviewer/sweep.go b/internal/service/codegurureviewer/sweep.go
index dbc5a7b9466..3493e3bdac5 100644
--- a/internal/service/codegurureviewer/sweep.go
+++ b/internal/service/codegurureviewer/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/codegurureviewer"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
@@ -23,12 +25,12 @@ func init() {
 func sweepAssociations(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 	input := &codegurureviewer.ListRepositoryAssociationsInput{}
-	conn := client.(*conns.AWSClient).CodeGuruReviewerConn()
+	conn := client.CodeGuruReviewerConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -58,7 +60,7 @@ func sweepAssociations(region string) error {
 		return fmt.Errorf("error listing CodeGuruReviewer Associations (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping CodeGuruReviewer Associations (%s): %w", region, err)
diff --git a/internal/service/codegurureviewer/tags_gen.go b/internal/service/codegurureviewer/tags_gen.go
index f93e487e2f3..058f2fcc655 100644
--- a/internal/service/codegurureviewer/tags_gen.go
+++ b/internal/service/codegurureviewer/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists codegurureviewer service tags.
+// listTags lists codegurureviewer service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn codegururevieweriface.CodeGuruReviewerAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn codegururevieweriface.CodeGuruReviewerAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &codegurureviewer.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codegururevieweriface.CodeGuruReviewerAP
 // ListTags lists codegurureviewer service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).CodeGuruReviewerConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).CodeGuruReviewerConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from codegurureviewer service tags.
+// KeyValueTags creates tftags.KeyValueTags from codegurureviewer service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns codegurureviewer service tags from Context.
+// getTagsIn returns codegurureviewer service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets codegurureviewer service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets codegurureviewer service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates codegurureviewer service tags.
+// updateTags updates codegurureviewer service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn codegururevieweriface.CodeGuruReviewerAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn codegururevieweriface.CodeGuruReviewerAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn codegururevieweriface.CodeGuruReviewer
 // UpdateTags updates codegurureviewer service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).CodeGuruReviewerConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).CodeGuruReviewerConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/codepipeline/codepipeline.go b/internal/service/codepipeline/codepipeline.go
index b0fea07572e..f701fa0025c 100644
--- a/internal/service/codepipeline/codepipeline.go
+++ b/internal/service/codepipeline/codepipeline.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline
 
 import (
@@ -213,7 +216,7 @@ func ResourcePipeline() *schema.Resource {
 }
 
 func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	pipeline, err := expandPipelineDeclaration(d)
@@ -224,7 +227,7 @@ func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta in
 	name := d.Get("name").(string)
 	input := &codepipeline.CreatePipelineInput{
 		Pipeline: pipeline,
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}
 
 	outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, propagationTimeout, func() (interface{}, error) {
@@ -241,7 +244,7 @@ func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta in
 }
 
 func resourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	output, err := FindPipelineByName(ctx, conn, d.Id())
@@ -281,7 +284,7 @@ func resourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourcePipelineUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	if d.HasChangesExcept("tags", "tags_all") {
 		pipeline, err := expandPipelineDeclaration(d)
@@ -303,7 +306,7 @@ func resourcePipelineUpdate(ctx context.Context, d *schema.ResourceData, meta in
 }
 
 func resourcePipelineDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	log.Printf("[INFO] Deleting CodePipeline: %s", d.Id())
 	_, err := conn.DeletePipelineWithContext(ctx, &codepipeline.DeletePipelineInput{
diff --git a/internal/service/codepipeline/codepipeline_test.go b/internal/service/codepipeline/codepipeline_test.go
index 6f89104b326..d9bf4bcc567 100644
--- a/internal/service/codepipeline/codepipeline_test.go
+++ b/internal/service/codepipeline/codepipeline_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline_test
 
 import (
@@ -621,7 +624,7 @@ func testAccCheckPipelineExists(ctx context.Context, n string, v *codepipeline.P
 			return fmt.Errorf("No CodePipeline ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn(ctx)
 
 		output, err := tfcodepipeline.FindPipelineByName(ctx, conn, rs.Primary.ID)
@@ -637,7 +640,7 @@ func testAccCheckPipelineExists(ctx context.Context, n string, v *codepipeline.P
 func testAccCheckPipelineDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_codepipeline" {
@@ -672,7 +675,7 @@ func testAccPreCheckSupported(ctx context.Context, t *testing.T, regions ...stri
 	if diags.HasError() {
 		t.Fatalf("error getting AWS client for region %s", region)
 	}
-	conn := client.CodePipelineConn()
+	conn := client.CodePipelineConn(ctx)
 
 	input := &codepipeline.ListPipelinesInput{}
 	_, err := conn.ListPipelinesWithContext(ctx, input)
diff --git a/internal/service/codepipeline/consts.go b/internal/service/codepipeline/consts.go
index 5174ba73333..b45da1f0464 100644
--- a/internal/service/codepipeline/consts.go
+++ b/internal/service/codepipeline/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline
 
 import (
diff --git a/internal/service/codepipeline/custom_action_type.go b/internal/service/codepipeline/custom_action_type.go
index 3c00312f114..171629f0812 100644
--- a/internal/service/codepipeline/custom_action_type.go
+++ b/internal/service/codepipeline/custom_action_type.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline
 
 import (
@@ -178,7 +181,7 @@ func ResourceCustomActionType() *schema.Resource {
 }
 
 func resourceCustomActionTypeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	category := d.Get("category").(string)
 	provider := d.Get("provider_name").(string)
@@ -187,7 +190,7 @@ func resourceCustomActionTypeCreate(ctx context.Context, d *schema.ResourceData,
 	input := &codepipeline.CreateCustomActionTypeInput{
 		Category: aws.String(category),
 		Provider: aws.String(provider),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 		Version:  aws.String(version),
 	}
@@ -219,7 +222,7 @@ func resourceCustomActionTypeCreate(ctx context.Context, d *schema.ResourceData,
 }
 
 func resourceCustomActionTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	category, provider, version, err := CustomActionTypeParseResourceID(d.Id())
@@ -287,7 +290,7 @@ func resourceCustomActionTypeUpdate(ctx context.Context, d *schema.ResourceData,
 }
 
 func resourceCustomActionTypeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	category, provider, version, err := CustomActionTypeParseResourceID(d.Id())
diff --git a/internal/service/codepipeline/custom_action_type_test.go b/internal/service/codepipeline/custom_action_type_test.go
index a6fba1249f1..d4b71374040 100644
--- a/internal/service/codepipeline/custom_action_type_test.go
+++ b/internal/service/codepipeline/custom_action_type_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline_test
 
 import (
@@ -219,7 +222,7 @@ func testAccCheckCustomActionTypeExists(ctx context.Context, n string, v *codepi
 			return err
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn(ctx)
 
 		output, err := tfcodepipeline.FindCustomActionTypeByThreePartKey(ctx, conn, category, provider, version)
@@ -235,7 +238,7 @@ func testAccCheckCustomActionTypeExists(ctx context.Context, n string, v *codepi
 func testAccCheckCustomActionTypeDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_codepipeline_custom_action_type" {
diff --git a/internal/service/codepipeline/generate.go b/internal/service/codepipeline/generate.go
index 7848faaf5e3..13364bd8346 100644
--- a/internal/service/codepipeline/generate.go
+++ b/internal/service/codepipeline/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 package codepipeline
diff --git a/internal/service/codepipeline/service_package_gen.go b/internal/service/codepipeline/service_package_gen.go
index 3ac32c5ebbd..7906ed8b3e3 100644
--- a/internal/service/codepipeline/service_package_gen.go
+++ b/internal/service/codepipeline/service_package_gen.go
@@ -5,6 +5,10 @@ package codepipeline
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	codepipeline_sdkv1 "github.com/aws/aws-sdk-go/service/codepipeline"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -56,4 +60,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.CodePipeline
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codepipeline_sdkv1.CodePipeline, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return codepipeline_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/codepipeline/sweep.go b/internal/service/codepipeline/sweep.go
index 71b79ef3acd..e097acd500f 100644
--- a/internal/service/codepipeline/sweep.go
+++ b/internal/service/codepipeline/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/codepipeline"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
@@ -23,12 +25,12 @@ func init() {
 func sweepPipelines(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 	input := &codepipeline.ListPipelinesInput{}
-	conn := client.(*conns.AWSClient).CodePipelineConn()
+	conn := client.CodePipelineConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.ListPipelinesPagesWithContext(ctx, input, func(page *codepipeline.ListPipelinesOutput, lastPage bool) bool {
@@ -57,7 +59,7 @@ func sweepPipelines(region string) error {
 		return fmt.Errorf("error listing Codepipeline Pipelines (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping Codepipeline Pipelines (%s): %w", region, err)
diff --git a/internal/service/codepipeline/tags_gen.go b/internal/service/codepipeline/tags_gen.go
index 25e62f36402..714ed1f99f3 100644
--- a/internal/service/codepipeline/tags_gen.go
+++ b/internal/service/codepipeline/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists codepipeline service tags.
+// listTags lists codepipeline service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn codepipelineiface.CodePipelineAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn codepipelineiface.CodePipelineAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &codepipeline.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codepipelineiface.CodePipelineAPI, ident
 // ListTags lists codepipeline service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).CodePipelineConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).CodePipelineConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*codepipeline.Tag) tftags.KeyValue
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns codepipeline service tags from Context.
+// getTagsIn returns codepipeline service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*codepipeline.Tag {
+func getTagsIn(ctx context.Context) []*codepipeline.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*codepipeline.Tag {
 	return nil
 }
 
-// SetTagsOut sets codepipeline service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*codepipeline.Tag) {
+// setTagsOut sets codepipeline service tags in Context.
+func setTagsOut(ctx context.Context, tags []*codepipeline.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates codepipeline service tags.
+// updateTags updates codepipeline service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn codepipelineiface.CodePipelineAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn codepipelineiface.CodePipelineAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn codepipelineiface.CodePipelineAPI, ide
 // UpdateTags updates codepipeline service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).CodePipelineConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).CodePipelineConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/codepipeline/webhook.go b/internal/service/codepipeline/webhook.go
index fe80c22543e..9df81dca0fe 100644
--- a/internal/service/codepipeline/webhook.go
+++ b/internal/service/codepipeline/webhook.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline
 
 import (
@@ -121,7 +124,7 @@ func ResourceWebhook() *schema.Resource {
 func resourceWebhookCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	authType := d.Get("authentication").(string)
 	var authConfig map[string]interface{}
@@ -138,12 +141,12 @@ func resourceWebhookCreate(ctx context.Context, d *schema.ResourceData, meta int
 			TargetPipeline:              aws.String(d.Get("target_pipeline").(string)),
 			AuthenticationConfiguration: extractWebhookAuthConfig(authType, authConfig),
 		},
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}
 
 	webhook, err := conn.PutWebhookWithContext(ctx, request)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error creating webhook: %s", err)
+		return sdkdiag.AppendErrorf(diags, "creating webhook: %s", err)
 	}
 
 	d.SetId(aws.StringValue(webhook.Webhook.Arn))
@@ -153,7 +156,7 @@ func resourceWebhookCreate(ctx context.Context, d *schema.ResourceData, meta int
 func resourceWebhookRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	arn := d.Id()
 	webhook, err := GetWebhook(ctx, conn, arn)
@@ -190,14 +193,14 @@ func resourceWebhookRead(ctx context.Context, d *schema.ResourceData, meta inter
 		return sdkdiag.AppendErrorf(diags, "setting filter: %s", err)
 	}
 
-	SetTagsOut(ctx, webhook.Tags)
+	setTagsOut(ctx, webhook.Tags)
 
 	return diags
 }
 
 func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	if d.HasChangesExcept("tags_all", "tags", "register_with_third_party") {
 		authType := d.Get("authentication").(string)
@@ -220,7 +223,7 @@ func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta int
 		_, err := conn.PutWebhookWithContext(ctx, request)
 		if err != nil {
-			return sdkdiag.AppendErrorf(diags, "Error updating webhook: %s", err)
+			return sdkdiag.AppendErrorf(diags, "updating webhook: %s", err)
 		}
 	}
@@ -229,7 +232,7 @@ func resourceWebhookUpdate(ctx context.Context, d *schema.ResourceData, meta int
 func resourceWebhookDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodePipelineConn()
+	conn := meta.(*conns.AWSClient).CodePipelineConn(ctx)
 
 	name := d.Get("name").(string)
 	input := codepipeline.DeleteWebhookInput{
diff --git a/internal/service/codepipeline/webhook_test.go b/internal/service/codepipeline/webhook_test.go
index c65f2ce6156..6692622417b 100644
--- a/internal/service/codepipeline/webhook_test.go
+++ b/internal/service/codepipeline/webhook_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codepipeline_test
 
 import (
@@ -313,7 +316,7 @@ func testAccCheckWebhookExists(ctx context.Context, n string, webhook *codepipel
 			return fmt.Errorf("No webhook ARN is set as ID")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CodePipelineConn(ctx)
 
 		resp, err := tfcodepipeline.GetWebhook(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/codestarconnections/connection.go b/internal/service/codestarconnections/connection.go
index 1be82df10a4..0a7873bafcf 100644
--- a/internal/service/codestarconnections/connection.go
+++ b/internal/service/codestarconnections/connection.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codestarconnections
 
 import (
@@ -71,12 +74,12 @@ func ResourceConnection() *schema.Resource {
 func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn()
+	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &codestarconnections.CreateConnectionInput{
 		ConnectionName: aws.String(name),
-		Tags:           GetTagsIn(ctx),
+		Tags:           getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("host_arn"); ok {
@@ -101,7 +104,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn()
+	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx)
 
 	connection, err := FindConnectionByARN(ctx, conn, d.Id())
@@ -136,7 +139,7 @@ func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn()
+	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx)
 
 	log.Printf("[DEBUG] Deleting CodeStar Connections Connection: %s", d.Id())
 	_, err := conn.DeleteConnectionWithContext(ctx, &codestarconnections.DeleteConnectionInput{
diff --git a/internal/service/codestarconnections/connection_data_source.go b/internal/service/codestarconnections/connection_data_source.go
index 2346e9c8a96..d95ded33831 100644
--- a/internal/service/codestarconnections/connection_data_source.go
+++ b/internal/service/codestarconnections/connection_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codestarconnections
 
 import (
@@ -51,7 +54,7 @@ func DataSourceConnection() *schema.Resource {
 func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn()
+	conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	var connection *codestarconnections.Connection
@@ -100,7 +103,7 @@ func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta
 	d.Set("name", connection.ConnectionName)
 	d.Set("provider_type", connection.ProviderType)
 
-	tags, err := ListTags(ctx, conn, arn)
+	tags, err := listTags(ctx, conn, arn)
 
 	if err != nil {
 		return sdkdiag.AppendErrorf(diags, "listing tags for CodeStar Connections Connection (%s): %s", arn, err)
diff --git a/internal/service/codestarconnections/connection_data_source_test.go b/internal/service/codestarconnections/connection_data_source_test.go
index 37033b456ec..60b20a15204 100644
--- a/internal/service/codestarconnections/connection_data_source_test.go
+++ b/internal/service/codestarconnections/connection_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package codestarconnections_test
 
 import (
diff --git a/internal/service/codestarconnections/connection_test.go b/internal/service/codestarconnections/connection_test.go
index 7634a0c598e..069fb9ff492 100644
--- a/internal/service/codestarconnections/connection_test.go
+++ b/internal/service/codestarconnections/connection_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package codestarconnections_test import ( @@ -175,7 +178,7 @@ func testAccCheckConnectionExists(ctx context.Context, n string, v *codestarconn return errors.New("No CodeStar Connections Connection ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn(ctx) output, err := tfcodestarconnections.FindConnectionByARN(ctx, conn, rs.Primary.ID) @@ -191,7 +194,7 @@ func testAccCheckConnectionExists(ctx context.Context, n string, v *codestarconn func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codestarconnections_connection" { diff --git a/internal/service/codestarconnections/consts.go b/internal/service/codestarconnections/consts.go index 66715de861d..c5b8de7d839 100644 --- a/internal/service/codestarconnections/consts.go +++ b/internal/service/codestarconnections/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codestarconnections const ( diff --git a/internal/service/codestarconnections/find.go b/internal/service/codestarconnections/find.go index 230d442a95b..ff1d68e44d6 100644 --- a/internal/service/codestarconnections/find.go +++ b/internal/service/codestarconnections/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codestarconnections import ( diff --git a/internal/service/codestarconnections/generate.go b/internal/service/codestarconnections/generate.go index afe3014339b..48307bde4cf 100644 --- a/internal/service/codestarconnections/generate.go +++ b/internal/service/codestarconnections/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package codestarconnections diff --git a/internal/service/codestarconnections/host.go b/internal/service/codestarconnections/host.go index b119a36bdaa..2116e3e4135 100644 --- a/internal/service/codestarconnections/host.go +++ b/internal/service/codestarconnections/host.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package codestarconnections import ( @@ -96,7 +99,7 @@ func ResourceHost() *schema.Resource { func resourceHostCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarConnectionsConn() + conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx) name := d.Get("name").(string) input := &codestarconnections.CreateHostInput{ @@ -124,7 +127,7 @@ func resourceHostCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceHostRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarConnectionsConn() + conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx) output, err := FindHostByARN(ctx, conn, d.Id()) @@ -150,7 +153,7 @@ func resourceHostRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceHostUpdate(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarConnectionsConn() + conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx) if d.HasChanges("provider_endpoint", "vpc_configuration") { input := &codestarconnections.UpdateHostInput{ @@ -175,7 +178,7 @@ func resourceHostUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceHostDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarConnectionsConn() + conn := meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx) log.Printf("[DEBUG] Deleting CodeStar Connections Host: %s", d.Id()) _, err := conn.DeleteHostWithContext(ctx, &codestarconnections.DeleteHostInput{ diff --git a/internal/service/codestarconnections/host_test.go b/internal/service/codestarconnections/host_test.go index 5b2dcac4048..e460c179421 100644 --- a/internal/service/codestarconnections/host_test.go +++ b/internal/service/codestarconnections/host_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codestarconnections_test import ( @@ -130,7 +133,7 @@ func testAccCheckHostExists(ctx context.Context, n string, v *codestarconnection return errors.New("No CodeStar Connections Host ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn(ctx) output, err := tfcodestarconnections.FindHostByARN(ctx, conn, rs.Primary.ID) @@ -146,7 +149,7 @@ func testAccCheckHostExists(ctx context.Context, n string, v *codestarconnection func testAccCheckHostDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarConnectionsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codestarconnections_host" { diff --git a/internal/service/codestarconnections/service_package_gen.go b/internal/service/codestarconnections/service_package_gen.go index a88d9e932a9..425b7c4d75f 100644 --- a/internal/service/codestarconnections/service_package_gen.go +++ b/internal/service/codestarconnections/service_package_gen.go @@ -5,6 +5,10 @@ package codestarconnections import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + codestarconnections_sdkv1 "github.com/aws/aws-sdk-go/service/codestarconnections" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -49,4 +53,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CodeStarConnections } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codestarconnections_sdkv1.CodeStarConnections, error) { + sess := config["session"].(*session_sdkv1.Session) + + return codestarconnections_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/codestarconnections/sweep.go b/internal/service/codestarconnections/sweep.go index 15ca1dd51bc..2edf5ec11ed 100644 --- a/internal/service/codestarconnections/sweep.go +++ b/internal/service/codestarconnections/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/codestarconnections" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -31,11 +33,11 @@ func init() { func sweepConnections(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CodeStarConnectionsConn() + conn := client.CodeStarConnectionsConn(ctx) input := &codestarconnections.ListConnectionsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -64,7 +66,7 @@ func sweepConnections(region string) error { return fmt.Errorf("error listing CodeStar Connections Connections (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CodeStar Connections Connections (%s): %w", region, err) @@ -75,11 +77,11 @@ 
func sweepConnections(region string) error { func sweepHosts(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).CodeStarConnectionsConn() + conn := client.CodeStarConnectionsConn(ctx) input := &codestarconnections.ListHostsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -108,7 +110,7 @@ func sweepHosts(region string) error { return fmt.Errorf("error listing CodeStar Connections Hosts (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CodeStar Connections Hosts (%s): %w", region, err) diff --git a/internal/service/codestarconnections/tags_gen.go b/internal/service/codestarconnections/tags_gen.go index 218f2aa03c1..bd31a0e5d23 100644 --- a/internal/service/codestarconnections/tags_gen.go +++ b/internal/service/codestarconnections/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists codestarconnections service tags. +// listTags lists codestarconnections service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn codestarconnectionsiface.CodeStarConnectionsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn codestarconnectionsiface.CodeStarConnectionsAPI, identifier string) (tftags.KeyValueTags, error) { input := &codestarconnections.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codestarconnectionsiface.CodeStarConnect // ListTags lists codestarconnections service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CodeStarConnectionsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*codestarconnections.Tag) tftags.K return tftags.New(ctx, m) } -// GetTagsIn returns codestarconnections service tags from Context. +// getTagsIn returns codestarconnections service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*codestarconnections.Tag { +func getTagsIn(ctx context.Context) []*codestarconnections.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*codestarconnections.Tag { return nil } -// SetTagsOut sets codestarconnections service tags in Context. -func SetTagsOut(ctx context.Context, tags []*codestarconnections.Tag) { +// setTagsOut sets codestarconnections service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*codestarconnections.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates codestarconnections service tags. +// updateTags updates codestarconnections service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn codestarconnectionsiface.CodeStarConnectionsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn codestarconnectionsiface.CodeStarConnectionsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn codestarconnectionsiface.CodeStarConne // UpdateTags updates codestarconnections service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CodeStarConnectionsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CodeStarConnectionsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/codestarnotifications/consts.go b/internal/service/codestarnotifications/consts.go index 00a48ab6f3f..b5f158297d0 100644 --- a/internal/service/codestarnotifications/consts.go +++ b/internal/service/codestarnotifications/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codestarnotifications const ( diff --git a/internal/service/codestarnotifications/generate.go b/internal/service/codestarnotifications/generate.go index cf164dd2ecb..78755f2772d 100644 --- a/internal/service/codestarnotifications/generate.go +++ b/internal/service/codestarnotifications/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=Arn -ServiceTagsMap -TagInIDElem=Arn -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package codestarnotifications diff --git a/internal/service/codestarnotifications/notification_rule.go b/internal/service/codestarnotifications/notification_rule.go index 1ec6cbc53e1..961c41f357b 100644 --- a/internal/service/codestarnotifications/notification_rule.go +++ b/internal/service/codestarnotifications/notification_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codestarnotifications import ( @@ -134,7 +137,7 @@ func expandNotificationRuleTargets(targetsData []interface{}) []*codestarnotific func resourceNotificationRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarNotificationsConn() + conn := meta.(*conns.AWSClient).CodeStarNotificationsConn(ctx) input := &codestarnotifications.CreateNotificationRuleInput{ DetailType: aws.String(d.Get("detail_type").(string)), @@ -142,7 +145,7 @@ func resourceNotificationRuleCreate(ctx context.Context, d *schema.ResourceData, Name: aws.String(d.Get("name").(string)), Resource: aws.String(d.Get("resource").(string)), Status: aws.String(d.Get("status").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Targets: expandNotificationRuleTargets(d.Get("target").(*schema.Set).List()), } @@ -158,7 +161,7 @@ func resourceNotificationRuleCreate(ctx context.Context, d *schema.ResourceData, func resourceNotificationRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarNotificationsConn() + conn := meta.(*conns.AWSClient).CodeStarNotificationsConn(ctx) rule, err := conn.DescribeNotificationRuleWithContext(ctx, &codestarnotifications.DescribeNotificationRuleInput{ Arn: aws.String(d.Id()), @@ -187,7 +190,7 @@ func resourceNotificationRuleRead(ctx context.Context, d *schema.ResourceData, m d.Set("status", rule.Status) d.Set("resource", rule.Resource) - SetTagsOut(ctx, rule.Tags) + setTagsOut(ctx, rule.Tags) targets := make([]map[string]interface{}, 0, len(rule.Targets)) for _, t := range rule.Targets { @@ -260,7 +263,7 @@ func cleanupNotificationRuleTargets(ctx context.Context, conn *codestarnotificat func resourceNotificationRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarNotificationsConn() + conn := meta.(*conns.AWSClient).CodeStarNotificationsConn(ctx) input := &codestarnotifications.UpdateNotificationRuleInput{ Arn: aws.String(d.Id()), @@ -287,7 +290,7 @@ func resourceNotificationRuleUpdate(ctx context.Context, d *schema.ResourceData, func resourceNotificationRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CodeStarNotificationsConn() + conn := meta.(*conns.AWSClient).CodeStarNotificationsConn(ctx) _, err := conn.DeleteNotificationRuleWithContext(ctx, &codestarnotifications.DeleteNotificationRuleInput{ Arn: aws.String(d.Id()), diff --git a/internal/service/codestarnotifications/notification_rule_test.go b/internal/service/codestarnotifications/notification_rule_test.go index 753421d6a37..e9bca987f5c 100644 --- a/internal/service/codestarnotifications/notification_rule_test.go +++ b/internal/service/codestarnotifications/notification_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package codestarnotifications_test import ( @@ -212,7 +215,7 @@ func TestAccCodeStarNotificationsNotificationRule_eventTypeIDs(t *testing.T) { func testAccCheckNotificationRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarNotificationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarNotificationsConn(ctx) for _, rs := range s.RootModule().Resources { switch rs.Type { @@ -252,7 +255,7 @@ func testAccCheckNotificationRuleDestroy(ctx context.Context) resource.TestCheck } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarNotificationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CodeStarNotificationsConn(ctx) input := &codestarnotifications.ListTargetsInput{ MaxResults: aws.Int64(1), diff --git a/internal/service/codestarnotifications/service_package_gen.go b/internal/service/codestarnotifications/service_package_gen.go index ae4657b0cba..2987a26aaae 100644 --- a/internal/service/codestarnotifications/service_package_gen.go +++ b/internal/service/codestarnotifications/service_package_gen.go @@ -5,6 +5,10 @@ package codestarnotifications import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + codestarnotifications_sdkv1 "github.com/aws/aws-sdk-go/service/codestarnotifications" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CodeStarNotifications } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codestarnotifications_sdkv1.CodeStarNotifications, error) { + sess := config["session"].(*session_sdkv1.Session) + + return codestarnotifications_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/codestarnotifications/tags_gen.go b/internal/service/codestarnotifications/tags_gen.go index 13fd5c4897d..b4e4ffd6cf9 100644 --- a/internal/service/codestarnotifications/tags_gen.go +++ b/internal/service/codestarnotifications/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists codestarnotifications service tags. +// listTags lists codestarnotifications service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn codestarnotificationsiface.CodeStarNotificationsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn codestarnotificationsiface.CodeStarNotificationsAPI, identifier string) (tftags.KeyValueTags, error) { input := &codestarnotifications.ListTagsForResourceInput{ Arn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codestarnotificationsiface.CodeStarNotif // ListTags lists codestarnotifications service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CodeStarNotificationsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CodeStarNotificationsConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from codestarnotifications service tags. +// KeyValueTags creates tftags.KeyValueTags from codestarnotifications service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns codestarnotifications service tags from Context. +// getTagsIn returns codestarnotifications service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets codestarnotifications service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets codestarnotifications service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates codestarnotifications service tags. +// updateTags updates codestarnotifications service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn codestarnotificationsiface.CodeStarNotificationsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn codestarnotificationsiface.CodeStarNotificationsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn codestarnotificationsiface.CodeStarNot // UpdateTags updates codestarnotifications service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CodeStarNotificationsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CodeStarNotificationsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cognitoidentity/consts.go b/internal/service/cognitoidentity/consts.go index 78c941d7fce..ec582580bd1 100644 --- a/internal/service/cognitoidentity/consts.go +++ b/internal/service/cognitoidentity/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity const ( diff --git a/internal/service/cognitoidentity/flex.go b/internal/service/cognitoidentity/flex.go index 0226be8a6d6..54e16fffea8 100644 --- a/internal/service/cognitoidentity/flex.go +++ b/internal/service/cognitoidentity/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity import ( diff --git a/internal/service/cognitoidentity/generate.go b/internal/service/cognitoidentity/generate.go index c0b177d4efd..044a634a357 100644 --- a/internal/service/cognitoidentity/generate.go +++ b/internal/service/cognitoidentity/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package cognitoidentity diff --git a/internal/service/cognitoidentity/pool.go b/internal/service/cognitoidentity/pool.go index 489fc5c7659..c17b5a2b63b 100644 --- a/internal/service/cognitoidentity/pool.go +++ b/internal/service/cognitoidentity/pool.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity import ( @@ -125,13 +128,13 @@ func ResourcePool() *schema.Resource { func resourcePoolCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) input := &cognitoidentity.CreateIdentityPoolInput{ IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)), AllowUnauthenticatedIdentities: aws.Bool(d.Get("allow_unauthenticated_identities").(bool)), AllowClassicFlow: aws.Bool(d.Get("allow_classic_flow").(bool)), - IdentityPoolTags: GetTagsIn(ctx), + IdentityPoolTags: getTagsIn(ctx), } if v, ok := d.GetOk("developer_provider_name"); ok { @@ -156,7 +159,7 @@ func resourcePoolCreate(ctx context.Context, d *schema.ResourceData, meta interf entity, err := conn.CreateIdentityPoolWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating Cognito Identity Pool: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Cognito Identity Pool: %s", err) } d.SetId(aws.StringValue(entity.IdentityPoolId)) @@ -166,7 +169,7 @@ func resourcePoolCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourcePoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn 
:= meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) ip, err := conn.DescribeIdentityPoolWithContext(ctx, &cognitoidentity.DescribeIdentityPoolInput{ IdentityPoolId: aws.String(d.Id()), @@ -194,22 +197,22 @@ func resourcePoolRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("allow_classic_flow", ip.AllowClassicFlow) d.Set("developer_provider_name", ip.DeveloperProviderName) - SetTagsOut(ctx, ip.IdentityPoolTags) + setTagsOut(ctx, ip.IdentityPoolTags) if err := d.Set("cognito_identity_providers", flattenIdentityProviders(ip.CognitoIdentityProviders)); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting cognito_identity_providers error: %s", err) + return sdkdiag.AppendErrorf(diags, "setting cognito_identity_providers error: %s", err) } if err := d.Set("openid_connect_provider_arns", flex.FlattenStringList(ip.OpenIdConnectProviderARNs)); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting openid_connect_provider_arns error: %s", err) + return sdkdiag.AppendErrorf(diags, "setting openid_connect_provider_arns error: %s", err) } if err := d.Set("saml_provider_arns", flex.FlattenStringList(ip.SamlProviderARNs)); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting saml_provider_arns error: %s", err) + return sdkdiag.AppendErrorf(diags, "setting saml_provider_arns error: %s", err) } if err := d.Set("supported_login_providers", aws.StringValueMap(ip.SupportedLoginProviders)); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting supported_login_providers error: %s", err) + return sdkdiag.AppendErrorf(diags, "setting supported_login_providers error: %s", err) } return diags @@ -217,7 +220,7 @@ func resourcePoolRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourcePoolUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Print("[DEBUG] Updating Cognito Identity Pool") if d.HasChangesExcept("tags_all", "tags") { @@ -243,7 +246,7 @@ func resourcePoolUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourcePoolDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Printf("[DEBUG] Deleting Cognito Identity Pool: %s", d.Id()) _, err := conn.DeleteIdentityPoolWithContext(ctx, &cognitoidentity.DeleteIdentityPoolInput{ @@ -251,7 +254,7 @@ func resourcePoolDelete(ctx context.Context, d *schema.ResourceData, meta interf }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error deleting Cognito identity pool (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting Cognito identity pool (%s): %s", d.Id(), err) } return diags } diff --git a/internal/service/cognitoidentity/pool_provider_principal_tag.go b/internal/service/cognitoidentity/pool_provider_principal_tag.go index 2b5de76321a..56add86ea9f 100644 --- a/internal/service/cognitoidentity/pool_provider_principal_tag.go +++ b/internal/service/cognitoidentity/pool_provider_principal_tag.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity import ( @@ -62,7 +65,7 @@ func ResourcePoolProviderPrincipalTag() *schema.Resource { func resourcePoolProviderPrincipalTagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Print("[DEBUG] Creating Cognito Identity Provider Principal Tags") providerName := d.Get("identity_provider_name").(string) @@ -93,7 +96,7 @@ func resourcePoolProviderPrincipalTagCreate(ctx context.Context, d *schema.Resou func resourcePoolProviderPrincipalTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Printf("[DEBUG] Reading Cognito Identity Provider Principal Tags: %s", d.Id()) poolId, providerName, err := DecodePoolProviderPrincipalTagsID(d.Id()) @@ -130,7 +133,7 @@ func resourcePoolProviderPrincipalTagRead(ctx context.Context, d *schema.Resourc func resourcePoolProviderPrincipalTagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Print("[DEBUG] Updating Cognito Identity Provider Principal Tags") poolId, providerName, err := DecodePoolProviderPrincipalTagsID(d.Id()) @@ -158,7 +161,7 @@ func resourcePoolProviderPrincipalTagUpdate(ctx context.Context, d *schema.Resou func resourcePoolProviderPrincipalTagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Printf("[DEBUG] Deleting Cognito Identity Provider 
Principal Tags: %s", d.Id()) poolId, providerName, err := DecodePoolProviderPrincipalTagsID(d.Id()) diff --git a/internal/service/cognitoidentity/pool_provider_principal_tag_test.go b/internal/service/cognitoidentity/pool_provider_principal_tag_test.go index 9ef54e5ae99..83ba67f0319 100644 --- a/internal/service/cognitoidentity/pool_provider_principal_tag_test.go +++ b/internal/service/cognitoidentity/pool_provider_principal_tag_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity_test import ( @@ -134,7 +137,7 @@ func testAccCheckPoolProviderPrincipalTagsExists(ctx context.Context, n string) return errors.New("No Cognito Identity Princpal Tags is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) _, err := conn.GetPrincipalTagAttributeMapWithContext(ctx, &cognitoidentity.GetPrincipalTagAttributeMapInput{ IdentityPoolId: aws.String(rs.Primary.Attributes["identity_pool_id"]), @@ -147,7 +150,7 @@ func testAccCheckPoolProviderPrincipalTagsExists(ctx context.Context, n string) func testAccCheckPoolProviderPrincipalTagsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_identity_pool_provider_principal_tag" { diff --git a/internal/service/cognitoidentity/pool_roles_attachment.go b/internal/service/cognitoidentity/pool_roles_attachment.go index a4f468a4fd6..b660333e674 100644 --- a/internal/service/cognitoidentity/pool_roles_attachment.go +++ b/internal/service/cognitoidentity/pool_roles_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity import ( @@ -65,14 +68,9 @@ func ResourcePoolRolesAttachment() *schema.Resource { ValidateFunc: validRoleMappingsRulesClaim, }, "match_type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice([]string{ - cognitoidentity.MappingRuleMatchTypeEquals, - cognitoidentity.MappingRuleMatchTypeContains, - cognitoidentity.MappingRuleMatchTypeStartsWith, - cognitoidentity.MappingRuleMatchTypeNotEqual, - }, false), + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(cognitoidentity.MappingRuleMatchType_Values(), false), }, "role_arn": { Type: schema.TypeString, @@ -114,12 +112,12 @@ func ResourcePoolRolesAttachment() *schema.Resource { func resourcePoolRolesAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) // Validates role keys to be either authenticated or unauthenticated, // since ValidateFunc validates only the value not the key. 
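The `match_type` hunk above swaps a hand-maintained list of `cognitoidentity.MappingRuleMatchType*` constants for the SDK's generated `MappingRuleMatchType_Values()` helper, so the schema validator stays in sync if AWS adds enum values. A minimal stdlib-only sketch of the idea — `stringInSlice` is a hypothetical stand-in for the SDK v2 `validation.StringInSlice` helper, and the enum values are copied from the AWS SDK for Go v1 constants:

```go
package main

import (
	"fmt"
	"strings"
)

// MappingRuleMatchType_Values mirrors the AWS SDK for Go v1 convention of a
// generated EnumName_Values() function returning every known enum constant.
func MappingRuleMatchType_Values() []string {
	return []string{"Equals", "Contains", "StartsWith", "NotEqual"}
}

// stringInSlice is a simplified stand-in for the plugin SDK's
// validation.StringInSlice(valid, ignoreCase) ValidateFunc factory.
func stringInSlice(valid []string, ignoreCase bool) func(string) error {
	return func(v string) error {
		for _, s := range valid {
			if s == v || (ignoreCase && strings.EqualFold(s, v)) {
				return nil
			}
		}
		return fmt.Errorf("expected one of %v, got %q", valid, v)
	}
}

func main() {
	validate := stringInSlice(MappingRuleMatchType_Values(), false)
	fmt.Println(validate("Equals"))        // <nil>
	fmt.Println(validate("Matches") != nil) // true
}
```

Validating against the generated slice means a new enum value shipped in an SDK upgrade is accepted without touching the resource schema.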
if errors := validRoles(d.Get("roles").(map[string]interface{})); len(errors) > 0 { - return sdkdiag.AppendErrorf(diags, "Error validating Roles: %v", errors) + return sdkdiag.AppendErrorf(diags, "validating Roles: %v", errors) } params := &cognitoidentity.SetIdentityPoolRolesInput{ @@ -131,7 +129,7 @@ func resourcePoolRolesAttachmentCreate(ctx context.Context, d *schema.ResourceDa errors := validateRoleMappings(v.(*schema.Set).List()) if len(errors) > 0 { - return sdkdiag.AppendErrorf(diags, "Error validating ambiguous role resolution: %v", errors) + return sdkdiag.AppendErrorf(diags, "validating ambiguous role resolution: %v", errors) } params.RoleMappings = expandIdentityPoolRoleMappingsAttachment(v.(*schema.Set).List()) @@ -140,7 +138,7 @@ func resourcePoolRolesAttachmentCreate(ctx context.Context, d *schema.ResourceDa log.Printf("[DEBUG] Creating Cognito Identity Pool Roles Association: %#v", params) _, err := conn.SetIdentityPoolRolesWithContext(ctx, params) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating Cognito Identity Pool Roles Association: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Cognito Identity Pool Roles Association: %s", err) } d.SetId(d.Get("identity_pool_id").(string)) @@ -150,7 +148,7 @@ func resourcePoolRolesAttachmentCreate(ctx context.Context, d *schema.ResourceDa func resourcePoolRolesAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Printf("[DEBUG] Reading Cognito Identity Pool Roles Association: %s", d.Id()) ip, err := conn.GetIdentityPoolRolesWithContext(ctx, &cognitoidentity.GetIdentityPoolRolesInput{ @@ -169,11 +167,11 @@ func resourcePoolRolesAttachmentRead(ctx context.Context, d *schema.ResourceData d.Set("identity_pool_id", ip.IdentityPoolId) if err := d.Set("roles", aws.StringValueMap(ip.Roles)); err != nil 
{ - return sdkdiag.AppendErrorf(diags, "Error setting roles error: %#v", err) + return sdkdiag.AppendErrorf(diags, "setting roles error: %#v", err) } if err := d.Set("role_mapping", flattenIdentityPoolRoleMappingsAttachment(ip.RoleMappings)); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting role mappings error: %#v", err) + return sdkdiag.AppendErrorf(diags, "setting role mappings error: %#v", err) } return diags @@ -181,12 +179,12 @@ func resourcePoolRolesAttachmentRead(ctx context.Context, d *schema.ResourceData func resourcePoolRolesAttachmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) // Validates role keys to be either authenticated or unauthenticated, // since ValidateFunc validates only the value not the key. if errors := validRoles(d.Get("roles").(map[string]interface{})); len(errors) > 0 { - return sdkdiag.AppendErrorf(diags, "Error validating Roles: %v", errors) + return sdkdiag.AppendErrorf(diags, "validating Roles: %v", errors) } params := &cognitoidentity.SetIdentityPoolRolesInput{ @@ -202,7 +200,7 @@ func resourcePoolRolesAttachmentUpdate(ctx context.Context, d *schema.ResourceDa errors := validateRoleMappings(v.(*schema.Set).List()) if len(errors) > 0 { - return sdkdiag.AppendErrorf(diags, "Error validating ambiguous role resolution: %v", errors) + return sdkdiag.AppendErrorf(diags, "validating ambiguous role resolution: %v", errors) } mappings = v.(*schema.Set).List() } else { @@ -215,7 +213,7 @@ func resourcePoolRolesAttachmentUpdate(ctx context.Context, d *schema.ResourceDa log.Printf("[DEBUG] Updating Cognito Identity Pool Roles Association: %#v", params) _, err := conn.SetIdentityPoolRolesWithContext(ctx, params) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error updating Cognito Identity Pool Roles Association: %s", err) + return 
sdkdiag.AppendErrorf(diags, "updating Cognito Identity Pool Roles Association: %s", err) } d.SetId(d.Get("identity_pool_id").(string)) @@ -225,7 +223,7 @@ func resourcePoolRolesAttachmentUpdate(ctx context.Context, d *schema.ResourceDa func resourcePoolRolesAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIdentityConn() + conn := meta.(*conns.AWSClient).CognitoIdentityConn(ctx) log.Printf("[DEBUG] Deleting Cognito Identity Pool Roles Association: %s", d.Id()) _, err := conn.SetIdentityPoolRolesWithContext(ctx, &cognitoidentity.SetIdentityPoolRolesInput{ @@ -235,7 +233,7 @@ func resourcePoolRolesAttachmentDelete(ctx context.Context, d *schema.ResourceDa }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error deleting Cognito identity pool roles association: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting Cognito identity pool roles association: %s", err) } return diags diff --git a/internal/service/cognitoidentity/pool_roles_attachment_test.go b/internal/service/cognitoidentity/pool_roles_attachment_test.go index d139a741d98..a992b9df443 100644 --- a/internal/service/cognitoidentity/pool_roles_attachment_test.go +++ b/internal/service/cognitoidentity/pool_roles_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity_test import ( @@ -146,7 +149,7 @@ func TestAccCognitoIdentityPoolRolesAttachment_roleMappingsWithAmbiguousRoleReso Steps: []resource.TestStep{ { Config: testAccPoolRolesAttachmentConfig_roleMappingsWithAmbiguousRoleResolutionError(name), - ExpectError: regexp.MustCompile(`Error validating ambiguous role resolution`), + ExpectError: regexp.MustCompile(`validating ambiguous role resolution`), }, }, }) @@ -199,7 +202,7 @@ func testAccCheckPoolRolesAttachmentExists(ctx context.Context, n string) resour return errors.New("No Cognito Identity Pool Roles Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) _, err := conn.GetIdentityPoolRolesWithContext(ctx, &cognitoidentity.GetIdentityPoolRolesInput{ IdentityPoolId: aws.String(rs.Primary.Attributes["identity_pool_id"]), @@ -211,7 +214,7 @@ func testAccCheckPoolRolesAttachmentExists(ctx context.Context, n string) resour func testAccCheckPoolRolesAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_identity_pool_roles_attachment" { diff --git a/internal/service/cognitoidentity/pool_test.go b/internal/service/cognitoidentity/pool_test.go index 2fb1c9cff3a..6f4f4940a46 100644 --- a/internal/service/cognitoidentity/pool_test.go +++ b/internal/service/cognitoidentity/pool_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity_test import ( @@ -440,7 +443,7 @@ func testAccCheckPoolExists(ctx context.Context, n string, identityPool *cognito return errors.New("No Cognito Identity Pool ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) result, err := conn.DescribeIdentityPoolWithContext(ctx, &cognitoidentity.DescribeIdentityPoolInput{ IdentityPoolId: aws.String(rs.Primary.ID), @@ -461,7 +464,7 @@ func testAccCheckPoolExists(ctx context.Context, n string, identityPool *cognito func testAccCheckPoolDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_identity_pool" { @@ -486,7 +489,7 @@ func testAccCheckPoolDestroy(ctx context.Context) resource.TestCheckFunc { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIdentityConn(ctx) input := &cognitoidentity.ListIdentityPoolsInput{ MaxResults: aws.Int64(1), diff --git a/internal/service/cognitoidentity/service_package_gen.go b/internal/service/cognitoidentity/service_package_gen.go index 21af8126809..4c4bf00d555 100644 --- a/internal/service/cognitoidentity/service_package_gen.go +++ b/internal/service/cognitoidentity/service_package_gen.go @@ -5,6 +5,10 @@ package cognitoidentity import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cognitoidentity_sdkv1 "github.com/aws/aws-sdk-go/service/cognitoidentity" + "github.com/hashicorp/terraform-provider-aws/internal/conns" 
"github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +52,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CognitoIdentity } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cognitoidentity_sdkv1.CognitoIdentity, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cognitoidentity_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cognitoidentity/tags_gen.go b/internal/service/cognitoidentity/tags_gen.go index 0bbdcb2173d..ea1ac65b56a 100644 --- a/internal/service/cognitoidentity/tags_gen.go +++ b/internal/service/cognitoidentity/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists cognitoidentity service tags. +// listTags lists cognitoidentity service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn cognitoidentityiface.CognitoIdentityAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn cognitoidentityiface.CognitoIdentityAPI, identifier string) (tftags.KeyValueTags, error) { input := &cognitoidentity.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cognitoidentityiface.CognitoIdentityAPI, // ListTags lists cognitoidentity service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).CognitoIdentityConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).CognitoIdentityConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from cognitoidentity service tags. +// KeyValueTags creates tftags.KeyValueTags from cognitoidentity service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns cognitoidentity service tags from Context. +// getTagsIn returns cognitoidentity service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets cognitoidentity service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets cognitoidentity service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates cognitoidentity service tags. +// updateTags updates cognitoidentity service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn cognitoidentityiface.CognitoIdentityAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cognitoidentityiface.CognitoIdentityAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn cognitoidentityiface.CognitoIdentityAP // UpdateTags updates cognitoidentity service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).CognitoIdentityConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).CognitoIdentityConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/cognitoidentity/validate.go b/internal/service/cognitoidentity/validate.go index 8180335ca1e..e9bbfff5554 100644 --- a/internal/service/cognitoidentity/validate.go +++ b/internal/service/cognitoidentity/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity import ( diff --git a/internal/service/cognitoidentity/validate_test.go b/internal/service/cognitoidentity/validate_test.go index d13b5e3e49c..00059c5b6bf 100644 --- a/internal/service/cognitoidentity/validate_test.go +++ b/internal/service/cognitoidentity/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidentity import ( diff --git a/internal/service/cognitoidp/consts.go b/internal/service/cognitoidp/consts.go index 78797fcf3c2..355dd1d7418 100644 --- a/internal/service/cognitoidp/consts.go +++ b/internal/service/cognitoidp/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( diff --git a/internal/service/cognitoidp/exports_test.go b/internal/service/cognitoidp/exports_test.go index 5e18c9ce128..7746cfae13c 100644 --- a/internal/service/cognitoidp/exports_test.go +++ b/internal/service/cognitoidp/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp // Exports for use in tests only. diff --git a/internal/service/cognitoidp/find.go b/internal/service/cognitoidp/find.go index 26bcf297aa1..9c9c5e8e092 100644 --- a/internal/service/cognitoidp/find.go +++ b/internal/service/cognitoidp/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( @@ -73,7 +76,7 @@ func FindCognitoUserInGroup(ctx context.Context, conn *cognitoidentityprovider.C }) if err != nil { - return false, fmt.Errorf("error reading groups for user: %w", err) + return false, fmt.Errorf("reading groups for user: %w", err) } return found, nil diff --git a/internal/service/cognitoidp/flex.go b/internal/service/cognitoidp/flex.go index 5d671d9c447..184027b31cb 100644 --- a/internal/service/cognitoidp/flex.go +++ b/internal/service/cognitoidp/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( diff --git a/internal/service/cognitoidp/flex_test.go b/internal/service/cognitoidp/flex_test.go index f900b9557fe..042f13da0bb 100644 --- a/internal/service/cognitoidp/flex_test.go +++ b/internal/service/cognitoidp/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( diff --git a/internal/service/cognitoidp/generate.go b/internal/service/cognitoidp/generate.go index b761d36e0c6..11c3168f128 100644 --- a/internal/service/cognitoidp/generate.go +++ b/internal/service/cognitoidp/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package cognitoidp diff --git a/internal/service/cognitoidp/identity_provider.go b/internal/service/cognitoidp/identity_provider.go index 220ff01848a..c1d7be8b9c0 100644 --- a/internal/service/cognitoidp/identity_provider.go +++ b/internal/service/cognitoidp/identity_provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( @@ -90,7 +93,7 @@ func ResourceIdentityProvider() *schema.Resource { func resourceIdentityProviderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) log.Print("[DEBUG] Creating Cognito Identity Provider") providerName := d.Get("provider_name").(string) @@ -115,7 +118,7 @@ func resourceIdentityProviderCreate(ctx context.Context, d *schema.ResourceData, _, err := conn.CreateIdentityProviderWithContext(ctx, params) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating Cognito Identity Provider: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Cognito Identity Provider: %s", err) } d.SetId(fmt.Sprintf("%s:%s", userPoolID, providerName)) @@ -125,7 +128,7 @@ func resourceIdentityProviderCreate(ctx context.Context, d *schema.ResourceData, func resourceIdentityProviderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) log.Printf("[DEBUG] Reading Cognito Identity Provider: %s", d.Id()) userPoolID, providerName, err := DecodeIdentityProviderID(d.Id()) @@ 
-180,7 +183,7 @@ func resourceIdentityProviderRead(ctx context.Context, d *schema.ResourceData, m func resourceIdentityProviderUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) log.Print("[DEBUG] Updating Cognito Identity Provider") userPoolID, providerName, err := DecodeIdentityProviderID(d.Id()) @@ -215,7 +218,7 @@ func resourceIdentityProviderUpdate(ctx context.Context, d *schema.ResourceData, func resourceIdentityProviderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) log.Printf("[DEBUG] Deleting Cognito Identity Provider: %s", d.Id()) userPoolID, providerName, err := DecodeIdentityProviderID(d.Id()) diff --git a/internal/service/cognitoidp/identity_provider_test.go b/internal/service/cognitoidp/identity_provider_test.go index 95e7b4ad817..1de01c60915 100644 --- a/internal/service/cognitoidp/identity_provider_test.go +++ b/internal/service/cognitoidp/identity_provider_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( @@ -165,7 +168,7 @@ func TestAccCognitoIDPIdentityProvider_Disappears_userPool(t *testing.T) { func testAccCheckIdentityProviderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_identity_provider" { @@ -206,7 +209,7 @@ func testAccCheckIdentityProviderExists(ctx context.Context, resourceName string return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) input := &cognitoidentityprovider.DescribeIdentityProviderInput{ ProviderName: aws.String(providerName), diff --git a/internal/service/cognitoidp/managed_user_pool_client.go b/internal/service/cognitoidp/managed_user_pool_client.go index e29cdd9663b..6e9c86fa748 100644 --- a/internal/service/cognitoidp/managed_user_pool_client.go +++ b/internal/service/cognitoidp/managed_user_pool_client.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( @@ -21,13 +24,13 @@ import ( "github.com/hashicorp/terraform-plugin-framework/resource/schema/int64planmodifier" "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" "github.com/hashicorp/terraform-plugin-framework/resource/schema/setplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringdefault" "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/framework" "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" - fwstringplanmodifier "github.com/hashicorp/terraform-provider-aws/internal/framework/stringplanmodifier" fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -305,9 +308,7 @@ func (r *resourceManagedUserPoolClient) Schema(ctx context.Context, request reso "access_token": schema.StringAttribute{ Optional: true, Computed: true, - PlanModifiers: []planmodifier.String{ - fwstringplanmodifier.DefaultValue(cognitoidentityprovider.TimeUnitsTypeHours), - }, + Default: stringdefault.StaticString(cognitoidentityprovider.TimeUnitsTypeHours), Validators: []validator.String{ stringvalidator.OneOf(cognitoidentityprovider.TimeUnitsType_Values()...), }, @@ -315,9 +316,7 @@ func (r *resourceManagedUserPoolClient) Schema(ctx context.Context, request reso "id_token": schema.StringAttribute{ Optional: true, Computed: true, - PlanModifiers: []planmodifier.String{ - fwstringplanmodifier.DefaultValue(cognitoidentityprovider.TimeUnitsTypeHours), - }, + Default: 
stringdefault.StaticString(cognitoidentityprovider.TimeUnitsTypeHours), Validators: []validator.String{ stringvalidator.OneOf(cognitoidentityprovider.TimeUnitsType_Values()...), }, @@ -325,9 +324,7 @@ func (r *resourceManagedUserPoolClient) Schema(ctx context.Context, request reso "refresh_token": schema.StringAttribute{ Optional: true, Computed: true, - PlanModifiers: []planmodifier.String{ - fwstringplanmodifier.DefaultValue(cognitoidentityprovider.TimeUnitsTypeDays), - }, + Default: stringdefault.StaticString(cognitoidentityprovider.TimeUnitsTypeDays), Validators: []validator.String{ stringvalidator.OneOf(cognitoidentityprovider.TimeUnitsType_Values()...), }, @@ -342,7 +339,7 @@ func (r *resourceManagedUserPoolClient) Schema(ctx context.Context, request reso } func (r *resourceManagedUserPoolClient) Create(ctx context.Context, request resource.CreateRequest, response *resource.CreateResponse) { - conn := r.Meta().CognitoIDPConn() + conn := r.Meta().CognitoIDPConn(ctx) var config resourceManagedUserPoolClientData response.Diagnostics.Append(request.Config.Get(ctx, &config)...) 
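The `token_validity_units` hunks above replace the provider's custom `fwstringplanmodifier.DefaultValue` plan modifiers with terraform-plugin-framework's built-in `Default: stringdefault.StaticString(...)`. A minimal stdlib-only sketch of what a static string default does, with hypothetical `defaulter`/`resolve` names (the real framework interface is `defaults.String` and resolution happens during plan modification):

```go
package main

import "fmt"

// defaulter mirrors the role of the framework's defaults.String:
// it supplies a value only when the practitioner left the attribute unset.
type defaulter interface{ DefaultString() string }

// staticString is a minimal stand-in for stringdefault.StaticString.
type staticString string

func (s staticString) DefaultString() string { return string(s) }

// resolve applies the default exactly once, when the configured
// value is null (modeled here as a nil pointer).
func resolve(configured *string, d defaulter) string {
	if configured != nil {
		return *configured
	}
	return d.DefaultString()
}

func main() {
	def := staticString("hours") // e.g. cognitoidentityprovider.TimeUnitsTypeHours
	configured := "days"
	fmt.Println(resolve(nil, def))         // hours
	fmt.Println(resolve(&configured, def)) // days
}
```

Using the framework's built-in default removes a copy of the same logic from the provider's own plan-modifier package, which is why the `fwstringplanmodifier` import is dropped in this file.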
@@ -544,7 +541,7 @@ func (r *resourceManagedUserPoolClient) Read(ctx context.Context, request resour return } - conn := r.Meta().CognitoIDPConn() + conn := r.Meta().CognitoIDPConn(ctx) poolClient, err := FindCognitoUserPoolClientByID(ctx, conn, state.UserPoolID.ValueString(), state.ID.ValueString()) if tfresource.NotFound(err) { @@ -578,7 +575,7 @@ func (r *resourceManagedUserPoolClient) Read(ctx context.Context, request resour state.RefreshTokenValidity = flex.Int64ToFramework(ctx, poolClient.RefreshTokenValidity) state.SupportedIdentityProviders = flex.FlattenFrameworkStringSetLegacy(ctx, poolClient.SupportedIdentityProviders) if state.TokenValidityUnits.IsNull() && isDefaultTokenValidityUnits(poolClient.TokenValidityUnits) { - attributeTypes := framework.AttributeTypesMust[tokenValidityUnits](ctx) + attributeTypes := flex.AttributeTypesMust[tokenValidityUnits](ctx) elemType := types.ObjectType{AttrTypes: attributeTypes} state.TokenValidityUnits = types.ListNull(elemType) } else { @@ -613,7 +610,7 @@ func (r *resourceManagedUserPoolClient) Update(ctx context.Context, request reso return } - conn := r.Meta().CognitoIDPConn() + conn := r.Meta().CognitoIDPConn(ctx) params := plan.updateInput(ctx, &response.Diagnostics) if response.Diagnostics.HasError() { @@ -660,7 +657,7 @@ func (r *resourceManagedUserPoolClient) Update(ctx context.Context, request reso config.RefreshTokenValidity = flex.Int64ToFramework(ctx, poolClient.RefreshTokenValidity) config.SupportedIdentityProviders = flex.FlattenFrameworkStringSetLegacy(ctx, poolClient.SupportedIdentityProviders) if !state.TokenValidityUnits.IsNull() && plan.TokenValidityUnits.IsNull() && isDefaultTokenValidityUnits(poolClient.TokenValidityUnits) { - attributeTypes := framework.AttributeTypesMust[tokenValidityUnits](ctx) + attributeTypes := flex.AttributeTypesMust[tokenValidityUnits](ctx) elemType := types.ObjectType{AttrTypes: attributeTypes} config.TokenValidityUnits = types.ListNull(elemType) } else { diff --git 
a/internal/service/cognitoidp/managed_user_pool_client_test.go b/internal/service/cognitoidp/managed_user_pool_client_test.go
index 6f893f20e43..19e7057455d 100644
--- a/internal/service/cognitoidp/managed_user_pool_client_test.go
+++ b/internal/service/cognitoidp/managed_user_pool_client_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
@@ -1045,8 +1048,6 @@ resource "aws_cognito_identity_pool" "test" {
 resource "aws_opensearch_domain" "test" {
   domain_name    = %[1]q
-  engine_version = "OpenSearch_1.1"
-
   cognito_options {
     enabled      = true
     user_pool_id = aws_cognito_user_pool.test.id
diff --git a/internal/service/cognitoidp/resource_server.go b/internal/service/cognitoidp/resource_server.go
index dfa5c0dcdde..7d5d22d217b 100644
--- a/internal/service/cognitoidp/resource_server.go
+++ b/internal/service/cognitoidp/resource_server.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -80,7 +83,7 @@ func ResourceResourceServer() *schema.Resource {
 
 func resourceResourceServerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	identifier := d.Get("identifier").(string)
 	userPoolID := d.Get("user_pool_id").(string)
@@ -101,7 +104,7 @@ func resourceResourceServerCreate(ctx context.Context, d *schema.ResourceData, m
 	_, err := conn.CreateResourceServerWithContext(ctx, params)
 
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error creating Cognito Resource Server: %s", err)
+		return sdkdiag.AppendErrorf(diags, "creating Cognito Resource Server: %s", err)
 	}
 
 	d.SetId(fmt.Sprintf("%s|%s", userPoolID, identifier))
@@ -111,7 +114,7 @@ func resourceResourceServerCreate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceResourceServerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolID, identifier, err := DecodeResourceServerID(d.Id())
 	if err != nil {
@@ -169,7 +172,7 @@ func resourceResourceServerRead(ctx context.Context, d *schema.ResourceData, met
 
 func resourceResourceServerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolID, identifier, err := DecodeResourceServerID(d.Id())
 	if err != nil {
@@ -195,7 +198,7 @@ func resourceResourceServerUpdate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceResourceServerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolID, identifier, err := DecodeResourceServerID(d.Id())
 	if err != nil {
diff --git a/internal/service/cognitoidp/resource_server_test.go b/internal/service/cognitoidp/resource_server_test.go
index 6ea023cc790..5ffeca000f1 100644
--- a/internal/service/cognitoidp/resource_server_test.go
+++ b/internal/service/cognitoidp/resource_server_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
@@ -120,7 +123,7 @@ func testAccCheckResourceServerExists(ctx context.Context, n string, resourceSer
 			return errors.New("No Cognito Resource Server ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		userPoolID, identifier, err := tfcognitoidp.DecodeResourceServerID(rs.Primary.ID)
 		if err != nil {
@@ -148,7 +151,7 @@ func testAccCheckResourceServerExists(ctx context.Context, n string, resourceSer
 
 func testAccCheckResourceServerDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cognito_resource_server" {
diff --git a/internal/service/cognitoidp/risk_configuration.go b/internal/service/cognitoidp/risk_configuration.go
index b3b68ff644b..51f6dc7f172 100644
--- a/internal/service/cognitoidp/risk_configuration.go
+++ b/internal/service/cognitoidp/risk_configuration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -287,7 +290,7 @@ func ResourceRiskConfiguration() *schema.Resource {
 
 func resourceRiskConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolId := d.Get("user_pool_id").(string)
 	id := userPoolId
@@ -325,7 +328,7 @@ func resourceRiskConfigurationPut(ctx context.Context, d *schema.ResourceData, m
 
 func resourceRiskConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolId, clientId, err := RiskConfigurationParseID(d.Id())
 	if err != nil {
@@ -368,7 +371,7 @@ func resourceRiskConfigurationRead(ctx context.Context, d *schema.ResourceData,
 
 func resourceRiskConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolId, clientId, err := RiskConfigurationParseID(d.Id())
 	if err != nil {
diff --git a/internal/service/cognitoidp/risk_configuration_test.go b/internal/service/cognitoidp/risk_configuration_test.go
index 3f84b0866b7..f19c22648b4 100644
--- a/internal/service/cognitoidp/risk_configuration_test.go
+++ b/internal/service/cognitoidp/risk_configuration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
@@ -243,7 +246,7 @@ func TestAccCognitoIDPRiskConfiguration_emptyRiskException(t *testing.T) {
 
 func testAccCheckRiskConfigurationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cognito_risk_configuration" {
@@ -276,7 +279,7 @@ func testAccCheckRiskConfigurationExists(ctx context.Context, name string) resou
 			return errors.New("No Cognito Risk Configuration ID set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		_, err := tfcognitoidp.FindRiskConfigurationById(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/cognitoidp/service_package_gen.go b/internal/service/cognitoidp/service_package_gen.go
index 920f4f4435d..f3f2acbbb89 100644
--- a/internal/service/cognitoidp/service_package_gen.go
+++ b/internal/service/cognitoidp/service_package_gen.go
@@ -5,6 +5,10 @@ package cognitoidp
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	cognitoidentityprovider_sdkv1 "github.com/aws/aws-sdk-go/service/cognitoidentityprovider"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -94,4 +98,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.CognitoIDP
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cognitoidentityprovider_sdkv1.CognitoIdentityProvider, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return cognitoidentityprovider_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/cognitoidp/sweep.go b/internal/service/cognitoidp/sweep.go
index bbe4a2762d5..498dd6e20fe 100644
--- a/internal/service/cognitoidp/sweep.go
+++ b/internal/service/cognitoidp/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/cognitoidentityprovider"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
 
@@ -31,11 +33,11 @@ func init() {
 
 func sweepUserPoolDomains(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("Error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).CognitoIDPConn()
+	conn := client.CognitoIDPConn(ctx)
 
 	input := &cognitoidentityprovider.ListUserPoolsInput{
 		MaxResults: aws.Int64(50),
@@ -84,11 +86,11 @@ func sweepUserPoolDomains(region string) error {
 
 func sweepUserPools(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).CognitoIDPConn()
+	conn := client.CognitoIDPConn(ctx)
 
 	input := &cognitoidentityprovider.ListUserPoolsInput{
 		MaxResults: aws.Int64(50),
diff --git a/internal/service/cognitoidp/tags_gen.go b/internal/service/cognitoidp/tags_gen.go
index efa1eb0564c..19525cbeb1e 100644
--- a/internal/service/cognitoidp/tags_gen.go
+++ b/internal/service/cognitoidp/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists cognitoidp service tags.
+// listTags lists cognitoidp service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn cognitoidentityprovideriface.CognitoIdentityProviderAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn cognitoidentityprovideriface.CognitoIdentityProviderAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &cognitoidentityprovider.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cognitoidentityprovideriface.CognitoIden
 // ListTags lists cognitoidp service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).CognitoIDPConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).CognitoIDPConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from cognitoidp service tags.
+// KeyValueTags creates tftags.KeyValueTags from cognitoidp service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns cognitoidp service tags from Context.
+// getTagsIn returns cognitoidp service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets cognitoidp service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets cognitoidp service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates cognitoidp service tags.
+// updateTags updates cognitoidp service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn cognitoidentityprovideriface.CognitoIdentityProviderAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn cognitoidentityprovideriface.CognitoIdentityProviderAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn cognitoidentityprovideriface.CognitoId
 // UpdateTags updates cognitoidp service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).CognitoIDPConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).CognitoIDPConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/cognitoidp/test-fixtures/saml-metadata.xml b/internal/service/cognitoidp/test-fixtures/saml-metadata.xml
index fb42fca70f1..f961397cf52 100644
--- a/internal/service/cognitoidp/test-fixtures/saml-metadata.xml
+++ b/internal/service/cognitoidp/test-fixtures/saml-metadata.xml
@@ -1,4 +1,9 @@
+
+
diff --git a/internal/service/cognitoidp/user.go b/internal/service/cognitoidp/user.go
index 92fa22a463f..fbe9171b0a2 100644
--- a/internal/service/cognitoidp/user.go
+++ b/internal/service/cognitoidp/user.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -142,7 +145,7 @@ func ResourceUser() *schema.Resource {
 
 func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	username := d.Get("username").(string)
 	userPoolId := d.Get("user_pool_id").(string)
@@ -226,7 +229,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf
 
 func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	user, err := FindUserByTwoPartKey(ctx, conn, d.Get("user_pool_id").(string), d.Get("username").(string))
@@ -260,7 +263,7 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac
 
 func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	log.Println("[DEBUG] Updating Cognito User")
@@ -368,7 +371,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf
 
 func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Cognito User: %s", d.Id())
 	_, err := conn.AdminDeleteUserWithContext(ctx, &cognitoidentityprovider.AdminDeleteUserInput{
diff --git a/internal/service/cognitoidp/user_group.go b/internal/service/cognitoidp/user_group.go
index c78d542cf92..1c0e89abb1d 100644
--- a/internal/service/cognitoidp/user_group.go
+++ b/internal/service/cognitoidp/user_group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -66,7 +69,7 @@ func ResourceUserGroup() *schema.Resource {
 
 func resourceUserGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	params := &cognitoidentityprovider.CreateGroupInput{
 		GroupName: aws.String(d.Get("name").(string)),
@@ -89,7 +92,7 @@ func resourceUserGroupCreate(ctx context.Context, d *schema.ResourceData, meta i
 	resp, err := conn.CreateGroupWithContext(ctx, params)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error creating Cognito User Group: %s", err)
+		return sdkdiag.AppendErrorf(diags, "creating Cognito User Group: %s", err)
 	}
 
 	d.SetId(fmt.Sprintf("%s/%s", *resp.Group.UserPoolId, *resp.Group.GroupName))
@@ -99,7 +102,7 @@ func resourceUserGroupCreate(ctx context.Context, d *schema.ResourceData, meta i
 
 func resourceUserGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	params := &cognitoidentityprovider.GetGroupInput{
 		GroupName: aws.String(d.Get("name").(string)),
@@ -128,7 +131,7 @@ func resourceUserGroupRead(ctx context.Context, d *schema.ResourceData, meta int
 
 func resourceUserGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	params := &cognitoidentityprovider.UpdateGroupInput{
 		GroupName: aws.String(d.Get("name").(string)),
@@ -151,7 +154,7 @@ func resourceUserGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i
 	_, err := conn.UpdateGroupWithContext(ctx, params)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error updating Cognito User Group: %s", err)
+		return sdkdiag.AppendErrorf(diags, "updating Cognito User Group: %s", err)
 	}
 
 	return append(diags, resourceUserGroupRead(ctx, d, meta)...)
@@ -159,7 +162,7 @@ func resourceUserGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i
 
 func resourceUserGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	params := &cognitoidentityprovider.DeleteGroupInput{
 		GroupName: aws.String(d.Get("name").(string)),
@@ -170,7 +173,7 @@ func resourceUserGroupDelete(ctx context.Context, d *schema.ResourceData, meta i
 	_, err := conn.DeleteGroupWithContext(ctx, params)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error deleting Cognito User Group: %s", err)
+		return sdkdiag.AppendErrorf(diags, "deleting Cognito User Group: %s", err)
 	}
 
 	return diags
diff --git a/internal/service/cognitoidp/user_group_test.go b/internal/service/cognitoidp/user_group_test.go
index 505135e8d0a..09f8aba1b73 100644
--- a/internal/service/cognitoidp/user_group_test.go
+++ b/internal/service/cognitoidp/user_group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
@@ -151,7 +154,7 @@ func testAccCheckUserGroupExists(ctx context.Context, name string) resource.Test
 			return fmt.Errorf(fmt.Sprintf("ID should be user_pool_id/name. ID was %s. name was %s, user_pool_id was %s", id, name, userPoolId))
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		params := &cognitoidentityprovider.GetGroupInput{
 			GroupName: aws.String(rs.Primary.Attributes["name"]),
@@ -165,7 +168,7 @@ func testAccCheckUserGroupExists(ctx context.Context, name string) resource.Test
 
 func testAccCheckUserGroupDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cognito_user_group" {
diff --git a/internal/service/cognitoidp/user_in_group.go b/internal/service/cognitoidp/user_in_group.go
index 1abe77cf816..764a19db415 100644
--- a/internal/service/cognitoidp/user_in_group.go
+++ b/internal/service/cognitoidp/user_in_group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -44,7 +47,7 @@ func ResourceUserInGroup() *schema.Resource {
 
 func resourceUserInGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	input := &cognitoidentityprovider.AdminAddUserToGroupInput{}
@@ -74,7 +77,7 @@ func resourceUserInGroupCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceUserInGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	groupName := d.Get("group_name").(string)
 	userPoolId := d.Get("user_pool_id").(string)
@@ -95,7 +98,7 @@ func resourceUserInGroupRead(ctx context.Context, d *schema.ResourceData, meta i
 
 func resourceUserInGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	groupName := d.Get("group_name").(string)
 	userPoolID := d.Get("user_pool_id").(string)
diff --git a/internal/service/cognitoidp/user_in_group_test.go b/internal/service/cognitoidp/user_in_group_test.go
index 74446959ed5..1b69360a3f0 100644
--- a/internal/service/cognitoidp/user_in_group_test.go
+++ b/internal/service/cognitoidp/user_in_group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
@@ -104,7 +107,7 @@ func testAccCheckUserInGroupExists(ctx context.Context, resourceName string) res
 			return fmt.Errorf("resource not found: %s", resourceName)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		groupName := rs.Primary.Attributes["group_name"]
 		userPoolId := rs.Primary.Attributes["user_pool_id"]
@@ -126,7 +129,7 @@ func testAccCheckUserInGroupExists(ctx context.Context, resourceName string) res
 
 func testAccCheckUserInGroupDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cognito_user_in_group" {
diff --git a/internal/service/cognitoidp/user_pool.go b/internal/service/cognitoidp/user_pool.go
index 308c5759eb4..52653e57ba7 100644
--- a/internal/service/cognitoidp/user_pool.go
+++ b/internal/service/cognitoidp/user_pool.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -617,11 +620,11 @@ func ResourceUserPool() *schema.Resource {
 
 func resourceUserPoolCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	input := &cognitoidentityprovider.CreateUserPoolInput{
 		PoolName:     aws.String(d.Get("name").(string)),
-		UserPoolTags: GetTagsIn(ctx),
+		UserPoolTags: getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("admin_create_user_config"); ok {
@@ -844,7 +847,7 @@ func resourceUserPoolCreate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceUserPoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	params := &cognitoidentityprovider.DescribeUserPoolInput{
 		UserPoolId: aws.String(d.Id()),
@@ -937,7 +940,7 @@ func resourceUserPoolRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("last_modified_date", userPool.LastModifiedDate.Format(time.RFC3339))
 	d.Set("name", userPool.Name)
 
-	SetTagsOut(ctx, userPool.UserPoolTags)
+	setTagsOut(ctx, userPool.UserPoolTags)
 
 	input := &cognitoidentityprovider.GetUserPoolMfaConfigInput{
 		UserPoolId: aws.String(d.Id()),
@@ -966,7 +969,7 @@ func resourceUserPoolRead(ctx context.Context, d *schema.ResourceData, meta inte
 
 func resourceUserPoolUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	// Multi-Factor Authentication updates
 	if d.HasChanges(
@@ -1049,7 +1052,7 @@ func resourceUserPoolUpdate(ctx context.Context, d *schema.ResourceData, meta in
 	) {
 		input := &cognitoidentityprovider.UpdateUserPoolInput{
 			UserPoolId:   aws.String(d.Id()),
-			UserPoolTags: GetTagsIn(ctx),
+			UserPoolTags: getTagsIn(ctx),
 		}
 
 		if v, ok := d.GetOk("admin_create_user_config"); ok {
@@ -1232,7 +1235,7 @@ func resourceUserPoolUpdate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceUserPoolDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	params := &cognitoidentityprovider.DeleteUserPoolInput{
 		UserPoolId: aws.String(d.Id()),
diff --git a/internal/service/cognitoidp/user_pool_client.go b/internal/service/cognitoidp/user_pool_client.go
index e3231f4db0a..f1487ed8ac1 100644
--- a/internal/service/cognitoidp/user_pool_client.go
+++ b/internal/service/cognitoidp/user_pool_client.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -23,6 +26,7 @@ import (
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema/int64planmodifier"
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema/setplanmodifier"
+	"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringdefault"
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
 	"github.com/hashicorp/terraform-plugin-framework/schema/validator"
 	"github.com/hashicorp/terraform-plugin-framework/types"
@@ -30,7 +34,6 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/internal/create"
 	"github.com/hashicorp/terraform-provider-aws/internal/framework"
 	"github.com/hashicorp/terraform-provider-aws/internal/framework/flex"
-	fwstringplanmodifier "github.com/hashicorp/terraform-provider-aws/internal/framework/stringplanmodifier"
 	fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/names"
@@ -300,9 +303,7 @@ func (r *resourceUserPoolClient) Schema(ctx context.Context, request resource.Sc
 						"access_token": schema.StringAttribute{
 							Optional: true,
 							Computed: true,
-							PlanModifiers: []planmodifier.String{
-								fwstringplanmodifier.DefaultValue(cognitoidentityprovider.TimeUnitsTypeHours),
-							},
+							Default:  stringdefault.StaticString(cognitoidentityprovider.TimeUnitsTypeHours),
 							Validators: []validator.String{
 								stringvalidator.OneOf(cognitoidentityprovider.TimeUnitsType_Values()...),
 							},
@@ -310,9 +311,7 @@ func (r *resourceUserPoolClient) Schema(ctx context.Context, request resource.Sc
 						"id_token": schema.StringAttribute{
 							Optional: true,
 							Computed: true,
-							PlanModifiers: []planmodifier.String{
-								fwstringplanmodifier.DefaultValue(cognitoidentityprovider.TimeUnitsTypeHours),
-							},
+							Default:  stringdefault.StaticString(cognitoidentityprovider.TimeUnitsTypeHours),
 							Validators: []validator.String{
 								stringvalidator.OneOf(cognitoidentityprovider.TimeUnitsType_Values()...),
 							},
@@ -320,9 +319,7 @@ func (r *resourceUserPoolClient) Schema(ctx context.Context, request resource.Sc
 						"refresh_token": schema.StringAttribute{
 							Optional: true,
 							Computed: true,
-							PlanModifiers: []planmodifier.String{
-								fwstringplanmodifier.DefaultValue(cognitoidentityprovider.TimeUnitsTypeDays),
-							},
+							Default:  stringdefault.StaticString(cognitoidentityprovider.TimeUnitsTypeDays),
 							Validators: []validator.String{
 								stringvalidator.OneOf(cognitoidentityprovider.TimeUnitsType_Values()...),
 							},
@@ -337,7 +334,7 @@ func (r *resourceUserPoolClient) Schema(ctx context.Context, request resource.Sc
 }
 
 func (r *resourceUserPoolClient) Create(ctx context.Context, request resource.CreateRequest, response *resource.CreateResponse) {
-	conn := r.Meta().CognitoIDPConn()
+	conn := r.Meta().CognitoIDPConn(ctx)
 
 	var config resourceUserPoolClientData
 	response.Diagnostics.Append(request.Config.Get(ctx, &config)...)
@@ -405,7 +402,7 @@ func (r *resourceUserPoolClient) Read(ctx context.Context, request resource.Read
 		return
 	}
 
-	conn := r.Meta().CognitoIDPConn()
+	conn := r.Meta().CognitoIDPConn(ctx)
 
 	poolClient, err := FindCognitoUserPoolClientByID(ctx, conn, state.UserPoolID.ValueString(), state.ID.ValueString())
 	if tfresource.NotFound(err) {
@@ -439,7 +436,7 @@ func (r *resourceUserPoolClient) Read(ctx context.Context, request resource.Read
 	state.RefreshTokenValidity = flex.Int64ToFramework(ctx, poolClient.RefreshTokenValidity)
 	state.SupportedIdentityProviders = flex.FlattenFrameworkStringSetLegacy(ctx, poolClient.SupportedIdentityProviders)
 	if state.TokenValidityUnits.IsNull() && isDefaultTokenValidityUnits(poolClient.TokenValidityUnits) {
-		attributeTypes := framework.AttributeTypesMust[tokenValidityUnits](ctx)
+		attributeTypes := flex.AttributeTypesMust[tokenValidityUnits](ctx)
 		elemType := types.ObjectType{AttrTypes: attributeTypes}
 		state.TokenValidityUnits = types.ListNull(elemType)
 	} else {
@@ -474,7 +471,7 @@ func (r *resourceUserPoolClient) Update(ctx context.Context, request resource.Up
 		return
 	}
 
-	conn := r.Meta().CognitoIDPConn()
+	conn := r.Meta().CognitoIDPConn(ctx)
 
 	params := plan.updateInput(ctx, &response.Diagnostics)
 	if response.Diagnostics.HasError() {
@@ -521,7 +518,7 @@ func (r *resourceUserPoolClient) Update(ctx context.Context, request resource.Up
 	config.RefreshTokenValidity = flex.Int64ToFramework(ctx, poolClient.RefreshTokenValidity)
 	config.SupportedIdentityProviders = flex.FlattenFrameworkStringSetLegacy(ctx, poolClient.SupportedIdentityProviders)
 	if !state.TokenValidityUnits.IsNull() && plan.TokenValidityUnits.IsNull() && isDefaultTokenValidityUnits(poolClient.TokenValidityUnits) {
-		attributeTypes := framework.AttributeTypesMust[tokenValidityUnits](ctx)
+		attributeTypes := flex.AttributeTypesMust[tokenValidityUnits](ctx)
 		elemType := types.ObjectType{AttrTypes: attributeTypes}
 		config.TokenValidityUnits = types.ListNull(elemType)
 	} else {
@@ -551,7 +548,7 @@ func (r *resourceUserPoolClient) Delete(ctx context.Context, request resource.De
 		"user_pool_id": state.UserPoolID.ValueString(),
 	})
 
-	conn := r.Meta().CognitoIDPConn()
+	conn := r.Meta().CognitoIDPConn(ctx)
 
 	_, err := conn.DeleteUserPoolClientWithContext(ctx, params)
 	if tfawserr.ErrCodeEquals(err, cognitoidentityprovider.ErrCodeResourceNotFoundException) {
@@ -733,7 +730,7 @@ func expandAnaylticsConfiguration(ctx context.Context, list types.List, diags *d
 }
 
 func flattenAnaylticsConfiguration(ctx context.Context, ac *cognitoidentityprovider.AnalyticsConfigurationType, diags *diag.Diagnostics) types.List {
-	attributeTypes := framework.AttributeTypesMust[analyticsConfiguration](ctx)
+	attributeTypes := flex.AttributeTypesMust[analyticsConfiguration](ctx)
 	elemType := types.ObjectType{AttrTypes: attributeTypes}
 
 	if ac == nil {
@@ -799,7 +796,7 @@ func expandTokenValidityUnits(ctx context.Context, list types.List, diags *diag.
 }
 
 func flattenTokenValidityUnits(ctx context.Context, tvu *cognitoidentityprovider.TokenValidityUnitsType) types.List {
-	attributeTypes := framework.AttributeTypesMust[tokenValidityUnits](ctx)
+	attributeTypes := flex.AttributeTypesMust[tokenValidityUnits](ctx)
 	elemType := types.ObjectType{AttrTypes: attributeTypes}
 
 	if tvu == nil || (tvu.AccessToken == nil && tvu.IdToken == nil && tvu.RefreshToken == nil) {
diff --git a/internal/service/cognitoidp/user_pool_client_data_source.go b/internal/service/cognitoidp/user_pool_client_data_source.go
index 3a5b6edc5be..329b3aba191 100644
--- a/internal/service/cognitoidp/user_pool_client_data_source.go
+++ b/internal/service/cognitoidp/user_pool_client_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -181,7 +184,7 @@ func DataSourceUserPoolClient() *schema.Resource {
 
 func dataSourceUserPoolClientRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	clientId := d.Get("client_id").(string)
 	d.SetId(clientId)
diff --git a/internal/service/cognitoidp/user_pool_client_data_source_test.go b/internal/service/cognitoidp/user_pool_client_data_source_test.go
index c18a7871aa2..dbc39772e41 100644
--- a/internal/service/cognitoidp/user_pool_client_data_source_test.go
+++ b/internal/service/cognitoidp/user_pool_client_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
diff --git a/internal/service/cognitoidp/user_pool_client_test.go b/internal/service/cognitoidp/user_pool_client_test.go
index 3b2123225d4..a2c0c63265a 100644
--- a/internal/service/cognitoidp/user_pool_client_test.go
+++ b/internal/service/cognitoidp/user_pool_client_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
@@ -1082,7 +1085,7 @@ func testAccUserPoolClientImportStateIDFunc(ctx context.Context, resourceName st
 			return "", errors.New("No Cognito User Pool Client ID set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		userPoolId := rs.Primary.Attributes["user_pool_id"]
 		clientId := rs.Primary.ID
@@ -1098,7 +1101,7 @@ func testAccUserPoolClientImportStateIDFunc(ctx context.Context, resourceName st
 
 func testAccCheckUserPoolClientDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cognito_user_pool_client" {
@@ -1130,7 +1133,7 @@ func testAccCheckUserPoolClientExists(ctx context.Context, name string, client *
 			return errors.New("No Cognito User Pool Client ID set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx)
 
 		resp, err := tfcognitoidp.FindCognitoUserPoolClientByID(ctx, conn, rs.Primary.Attributes["user_pool_id"], rs.Primary.ID)
 		if err != nil {
@@ -1539,7 +1542,7 @@ resource "aws_cognito_user_pool_client" "test" {
 }
 
 func testAccPreCheckPinpointApp(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx)
 
 	input := &pinpoint.GetAppsInput{}
diff --git a/internal/service/cognitoidp/user_pool_clients_data_source.go b/internal/service/cognitoidp/user_pool_clients_data_source.go
index d3cfc43a44b..e4d20ccda87 100644
--- a/internal/service/cognitoidp/user_pool_clients_data_source.go
+++ b/internal/service/cognitoidp/user_pool_clients_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -40,7 +43,7 @@ func DataSourceUserPoolClients() *schema.Resource {
 
 func dataSourceuserPoolClientsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	userPoolID := d.Get("user_pool_id").(string)
 	input := &cognitoidentityprovider.ListUserPoolClientsInput{
@@ -67,7 +70,7 @@ func dataSourceuserPoolClientsRead(ctx context.Context, d *schema.ResourceData,
 	})
 
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error getting user pool clients: %s", err)
+		return sdkdiag.AppendErrorf(diags, "getting user pool clients: %s", err)
 	}
 
 	d.SetId(userPoolID)
diff --git a/internal/service/cognitoidp/user_pool_clients_data_source_test.go b/internal/service/cognitoidp/user_pool_clients_data_source_test.go
index 2a888e3c298..be7c4d78dee 100644
--- a/internal/service/cognitoidp/user_pool_clients_data_source_test.go
+++ b/internal/service/cognitoidp/user_pool_clients_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp_test
 
 import (
diff --git a/internal/service/cognitoidp/user_pool_domain.go b/internal/service/cognitoidp/user_pool_domain.go
index 39a810401bc..834b47cf51b 100644
--- a/internal/service/cognitoidp/user_pool_domain.go
+++ b/internal/service/cognitoidp/user_pool_domain.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package cognitoidp
 
 import (
@@ -85,7 +88,7 @@ func ResourceUserPoolDomain() *schema.Resource {
 
 func resourceUserPoolDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	domain := d.Get("domain").(string)
 	timeout := 1 * time.Minute
@@ -118,7 +121,7 @@ func resourceUserPoolDomainCreate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceUserPoolDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	desc, err := FindUserPoolDomain(ctx, conn, d.Id())
@@ -150,7 +153,7 @@ func resourceUserPoolDomainRead(ctx context.Context, d *schema.ResourceData, met
 
 func resourceUserPoolDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	input := &cognitoidentityprovider.UpdateUserPoolDomainInput{
 		CustomDomainConfig: &cognitoidentityprovider.CustomDomainConfigType{
@@ -178,7 +181,7 @@ func resourceUserPoolDomainUpdate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceUserPoolDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).CognitoIDPConn()
+	conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Cognito User Pool Domain: %s", d.Id())
 	_, err := conn.DeleteUserPoolDomainWithContext(ctx, &cognitoidentityprovider.DeleteUserPoolDomainInput{
diff --git a/internal/service/cognitoidp/user_pool_domain_test.go b/internal/service/cognitoidp/user_pool_domain_test.go
index 8d5e880b1ae..c5038a35057
100644 --- a/internal/service/cognitoidp/user_pool_domain_test.go +++ b/internal/service/cognitoidp/user_pool_domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( @@ -165,7 +168,7 @@ func testAccCheckUserPoolDomainExists(ctx context.Context, n string) resource.Te return errors.New("No Cognito User Pool Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) _, err := tfcognitoidp.FindUserPoolDomain(ctx, conn, rs.Primary.ID) @@ -175,7 +178,7 @@ func testAccCheckUserPoolDomainExists(ctx context.Context, n string) resource.Te func testAccCheckUserPoolDomainDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_user_pool_domain" { @@ -219,7 +222,7 @@ func testAccCheckUserPoolDomainCertMatches(ctx context.Context, cognitoResourceN return errors.New("No ACM Certificate ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) domain, err := tfcognitoidp.FindUserPoolDomain(ctx, conn, cognitoResource.Primary.ID) diff --git a/internal/service/cognitoidp/user_pool_signing_certificate_data_source.go b/internal/service/cognitoidp/user_pool_signing_certificate_data_source.go index 6045d182f72..48df4e17f98 100644 --- a/internal/service/cognitoidp/user_pool_signing_certificate_data_source.go +++ b/internal/service/cognitoidp/user_pool_signing_certificate_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( @@ -31,7 +34,7 @@ func DataSourceUserPoolSigningCertificate() *schema.Resource { func dataSourceUserPoolSigningCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) userPoolID := d.Get("user_pool_id").(string) input := &cognitoidentityprovider.GetSigningCertificateInput{ diff --git a/internal/service/cognitoidp/user_pool_signing_certificate_data_source_test.go b/internal/service/cognitoidp/user_pool_signing_certificate_data_source_test.go index e808b8a1dfd..bb1642008d2 100644 --- a/internal/service/cognitoidp/user_pool_signing_certificate_data_source_test.go +++ b/internal/service/cognitoidp/user_pool_signing_certificate_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( diff --git a/internal/service/cognitoidp/user_pool_test.go b/internal/service/cognitoidp/user_pool_test.go index 2a5c54fe9a8..849e2d41ee2 100644 --- a/internal/service/cognitoidp/user_pool_test.go +++ b/internal/service/cognitoidp/user_pool_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( @@ -1588,7 +1591,7 @@ func TestAccCognitoIDPUserPool_withUserAttributeUpdateSettings(t *testing.T) { func testAccCheckUserPoolDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_user_pool" { @@ -1624,7 +1627,7 @@ func testAccCheckUserPoolExists(ctx context.Context, name string, pool *cognitoi return errors.New("No Cognito User Pool ID set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) params := &cognitoidentityprovider.DescribeUserPoolInput{ UserPoolId: aws.String(rs.Primary.ID), @@ -1644,7 +1647,7 @@ func testAccCheckUserPoolExists(ctx context.Context, name string, pool *cognitoi } func testAccPreCheckIdentityProvider(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) input := &cognitoidentityprovider.ListUserPoolsInput{ MaxResults: aws.Int64(1), diff --git a/internal/service/cognitoidp/user_pool_ui_customization.go b/internal/service/cognitoidp/user_pool_ui_customization.go index ebda5eaa43f..4f81a42b7bb 100644 --- a/internal/service/cognitoidp/user_pool_ui_customization.go +++ b/internal/service/cognitoidp/user_pool_ui_customization.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( @@ -78,7 +81,7 @@ func ResourceUserPoolUICustomization() *schema.Resource { func resourceUserPoolUICustomizationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) clientId := d.Get("client_id").(string) userPoolId := d.Get("user_pool_id").(string) @@ -114,7 +117,7 @@ func resourceUserPoolUICustomizationPut(ctx context.Context, d *schema.ResourceD func resourceUserPoolUICustomizationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) userPoolId, clientId, err := ParseUserPoolUICustomizationID(d.Id()) @@ -157,7 +160,7 @@ func resourceUserPoolUICustomizationRead(ctx context.Context, d *schema.Resource func resourceUserPoolUICustomizationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) userPoolId, clientId, err := ParseUserPoolUICustomizationID(d.Id()) diff --git a/internal/service/cognitoidp/user_pool_ui_customization_test.go b/internal/service/cognitoidp/user_pool_ui_customization_test.go index f6df2abaa5d..7b35c890a58 100644 --- a/internal/service/cognitoidp/user_pool_ui_customization_test.go +++ b/internal/service/cognitoidp/user_pool_ui_customization_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( @@ -506,7 +509,7 @@ func TestAccCognitoIDPUserPoolUICustomization_UpdateAllToClient_cSS(t *testing.T func testAccCheckUserPoolUICustomizationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_user_pool_ui_customization" { @@ -561,7 +564,7 @@ func testAccCheckUserPoolUICustomizationExists(ctx context.Context, name string) return fmt.Errorf("error parsing Cognito User Pool UI customization ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) output, err := tfcognitoidp.FindCognitoUserPoolUICustomization(ctx, conn, userPoolId, clientId) diff --git a/internal/service/cognitoidp/user_pools_data_source.go b/internal/service/cognitoidp/user_pools_data_source.go index 90519f69c29..828dacee1c5 100644 --- a/internal/service/cognitoidp/user_pools_data_source.go +++ b/internal/service/cognitoidp/user_pools_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( @@ -39,7 +42,7 @@ func DataSourceUserPools() *schema.Resource { func dataSourceUserPoolsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CognitoIDPConn() + conn := meta.(*conns.AWSClient).CognitoIDPConn(ctx) output, err := findUserPoolDescriptionTypes(ctx, conn) diff --git a/internal/service/cognitoidp/user_pools_data_source_test.go b/internal/service/cognitoidp/user_pools_data_source_test.go index b4a104c4b9e..630ab135f98 100644 --- a/internal/service/cognitoidp/user_pools_data_source_test.go +++ b/internal/service/cognitoidp/user_pools_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( diff --git a/internal/service/cognitoidp/user_test.go b/internal/service/cognitoidp/user_test.go index 36eb401ea56..f93d62b8f3b 100644 --- a/internal/service/cognitoidp/user_test.go +++ b/internal/service/cognitoidp/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cognitoidp_test import ( @@ -300,7 +303,7 @@ func testAccCheckUserExists(ctx context.Context, n string) resource.TestCheckFun return fmt.Errorf("No Cognito User ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) _, err := tfcognitoidp.FindUserByTwoPartKey(ctx, conn, rs.Primary.Attributes["user_pool_id"], rs.Primary.Attributes["username"]) @@ -310,7 +313,7 @@ func testAccCheckUserExists(ctx context.Context, n string) resource.TestCheckFun func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cognito_user" { @@ -350,7 +353,7 @@ func testAccUserTemporaryPassword(ctx context.Context, userResName string, clien userPassword := userRs.Primary.Attributes["temporary_password"] clientId := clientRs.Primary.Attributes["id"] - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) params := &cognitoidentityprovider.InitiateAuthInput{ AuthFlow: aws.String(cognitoidentityprovider.AuthFlowTypeUserPasswordAuth), @@ -390,7 +393,7 @@ func testAccUserPassword(ctx context.Context, userResName string, clientResName userPassword := userRs.Primary.Attributes["password"] clientId := clientRs.Primary.Attributes["id"] - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) params := &cognitoidentityprovider.InitiateAuthInput{ AuthFlow: aws.String(cognitoidentityprovider.AuthFlowTypeUserPasswordAuth), diff --git a/internal/service/cognitoidp/validate.go b/internal/service/cognitoidp/validate.go index 
3ccba82c56e..10900c3c3d5 100644 --- a/internal/service/cognitoidp/validate.go +++ b/internal/service/cognitoidp/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( diff --git a/internal/service/cognitoidp/validate_test.go b/internal/service/cognitoidp/validate_test.go index fea7541fa9f..00b5d87a9d2 100644 --- a/internal/service/cognitoidp/validate_test.go +++ b/internal/service/cognitoidp/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cognitoidp import ( diff --git a/internal/service/comprehend/acc_test.go b/internal/service/comprehend/acc_test.go index a04ca5c20bc..75f6fe46038 100644 --- a/internal/service/comprehend/acc_test.go +++ b/internal/service/comprehend/acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package comprehend_test import ( @@ -12,7 +15,7 @@ import ( ) func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) input := &comprehend.ListEntityRecognizersInput{} diff --git a/internal/service/comprehend/common_model.go b/internal/service/comprehend/common_model.go index 62c6ee1db88..d35e4d6969c 100644 --- a/internal/service/comprehend/common_model.go +++ b/internal/service/comprehend/common_model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package comprehend import ( diff --git a/internal/service/comprehend/consts.go b/internal/service/comprehend/consts.go index e57884a12d2..c500235f3cb 100644 --- a/internal/service/comprehend/consts.go +++ b/internal/service/comprehend/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package comprehend import ( diff --git a/internal/service/comprehend/document_classifier.go b/internal/service/comprehend/document_classifier.go index e3262942d1d..2975e19a141 100644 --- a/internal/service/comprehend/document_classifier.go +++ b/internal/service/comprehend/document_classifier.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package comprehend import ( @@ -269,7 +272,7 @@ func ResourceDocumentClassifier() *schema.Resource { func resourceDocumentClassifierCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { awsClient := meta.(*conns.AWSClient) - conn := awsClient.ComprehendClient() + conn := awsClient.ComprehendClient(ctx) var versionName *string raw := d.GetRawConfig().GetAttr("version_name") @@ -288,7 +291,7 @@ func resourceDocumentClassifierCreate(ctx context.Context, d *schema.ResourceDat } func resourceDocumentClassifierRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ComprehendClient() + conn := meta.(*conns.AWSClient).ComprehendClient(ctx) out, err := FindDocumentClassifierByID(ctx, conn, d.Id()) @@ -335,7 +338,7 @@ func resourceDocumentClassifierRead(ctx context.Context, d *schema.ResourceData, func resourceDocumentClassifierUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { awsClient := meta.(*conns.AWSClient) - conn := awsClient.ComprehendClient() + conn := awsClient.ComprehendClient(ctx) var diags diag.Diagnostics @@ -357,7 +360,7 @@ func resourceDocumentClassifierUpdate(ctx context.Context, d *schema.ResourceDat } func resourceDocumentClassifierDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ComprehendClient() + conn := meta.(*conns.AWSClient).ComprehendClient(ctx) log.Printf("[INFO] Stopping Comprehend Document Classifier (%s)", d.Id()) @@ 
-412,7 +415,7 @@ func resourceDocumentClassifierDelete(ctx context.Context, d *schema.ResourceDat return fmt.Errorf("waiting for version (%s) to be deleted: %s", aws.ToString(v.VersionName), err) } - ec2Conn := meta.(*conns.AWSClient).EC2Conn() + ec2Conn := meta.(*conns.AWSClient).EC2Conn(ctx) networkInterfaces, err := tfec2.FindNetworkInterfaces(ctx, ec2Conn, &ec2.DescribeNetworkInterfacesInput{ Filters: []*ec2.Filter{ tfec2.NewFilter(fmt.Sprintf("tag:%s", documentClassifierTagKey), []string{aws.ToString(v.DocumentClassifierArn)}), @@ -466,7 +469,7 @@ func documentClassifierPublishVersion(ctx context.Context, conn *comprehend.Clie VersionName: versionName, VpcConfig: expandVPCConfig(d.Get("vpc_config").([]interface{})), ClientRequestToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.Get("model_kms_key_id").(string); ok && v != "" { @@ -539,7 +542,7 @@ func documentClassifierPublishVersion(ctx context.Context, conn *comprehend.Clie if in.VpcConfig != nil { g.Go(func() error { - ec2Conn := awsClient.EC2Conn() + ec2Conn := awsClient.EC2Conn(ctx) enis, err := findNetworkInterfaces(waitCtx, ec2Conn, in.VpcConfig.SecurityGroupIds, in.VpcConfig.Subnets) if err != nil { diags = sdkdiag.AppendWarningf(diags, "waiting for Amazon Comprehend Document Classifier (%s) %s: %s", d.Id(), tobe, err) diff --git a/internal/service/comprehend/document_classifier_test.go b/internal/service/comprehend/document_classifier_test.go index 99cbe105dd6..6a12e8f74f1 100644 --- a/internal/service/comprehend/document_classifier_test.go +++ b/internal/service/comprehend/document_classifier_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package comprehend_test import ( @@ -1376,7 +1379,7 @@ func TestAccComprehendDocumentClassifier_DefaultTags_providerOnly(t *testing.T) func testAccCheckDocumentClassifierDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_comprehend_document_classifier" { @@ -1424,7 +1427,7 @@ func testAccCheckDocumentClassifierExists(ctx context.Context, name string, docu return fmt.Errorf("No Comprehend Document Classifier is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) resp, err := tfcomprehend.FindDocumentClassifierByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -1472,7 +1475,7 @@ func testAccCheckDocumentClassifierPublishedVersions(ctx context.Context, name s return fmt.Errorf("No Comprehend Document Classifier is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) name, err := tfcomprehend.DocumentClassifierParseARN(rs.Primary.ID) if err != nil { diff --git a/internal/service/comprehend/entity_recognizer.go b/internal/service/comprehend/entity_recognizer.go index e1ff9762bfd..ceff0d8aec0 100644 --- a/internal/service/comprehend/entity_recognizer.go +++ b/internal/service/comprehend/entity_recognizer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package comprehend import ( @@ -304,7 +307,7 @@ func ResourceEntityRecognizer() *schema.Resource { func resourceEntityRecognizerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { awsClient := meta.(*conns.AWSClient) - conn := awsClient.ComprehendClient() + conn := awsClient.ComprehendClient(ctx) var versionName *string raw := d.GetRawConfig().GetAttr("version_name") @@ -323,7 +326,7 @@ func resourceEntityRecognizerCreate(ctx context.Context, d *schema.ResourceData, } func resourceEntityRecognizerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ComprehendClient() + conn := meta.(*conns.AWSClient).ComprehendClient(ctx) out, err := FindEntityRecognizerByID(ctx, conn, d.Id()) @@ -365,7 +368,7 @@ func resourceEntityRecognizerRead(ctx context.Context, d *schema.ResourceData, m func resourceEntityRecognizerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { awsClient := meta.(*conns.AWSClient) - conn := awsClient.ComprehendClient() + conn := awsClient.ComprehendClient(ctx) var diags diag.Diagnostics @@ -387,7 +390,7 @@ func resourceEntityRecognizerUpdate(ctx context.Context, d *schema.ResourceData, } func resourceEntityRecognizerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ComprehendClient() + conn := meta.(*conns.AWSClient).ComprehendClient(ctx) log.Printf("[INFO] Stopping Comprehend Entity Recognizer (%s)", d.Id()) @@ -442,7 +445,7 @@ func resourceEntityRecognizerDelete(ctx context.Context, d *schema.ResourceData, return fmt.Errorf("waiting for version (%s) to be deleted: %s", aws.ToString(v.VersionName), err) } - ec2Conn := meta.(*conns.AWSClient).EC2Conn() + ec2Conn := meta.(*conns.AWSClient).EC2Conn(ctx) networkInterfaces, err := tfec2.FindNetworkInterfaces(ctx, ec2Conn, &ec2.DescribeNetworkInterfacesInput{ 
Filters: []*ec2.Filter{ tfec2.NewFilter(fmt.Sprintf("tag:%s", entityRecognizerTagKey), []string{aws.ToString(v.EntityRecognizerArn)}), @@ -494,7 +497,7 @@ func entityRecognizerPublishVersion(ctx context.Context, conn *comprehend.Client VersionName: versionName, VpcConfig: expandVPCConfig(d.Get("vpc_config").([]interface{})), ClientRequestToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.Get("model_kms_key_id").(string); ok && v != "" { @@ -567,7 +570,7 @@ func entityRecognizerPublishVersion(ctx context.Context, conn *comprehend.Client if in.VpcConfig != nil { g.Go(func() error { - ec2Conn := awsClient.EC2Conn() + ec2Conn := awsClient.EC2Conn(ctx) enis, err := findNetworkInterfaces(waitCtx, ec2Conn, in.VpcConfig.SecurityGroupIds, in.VpcConfig.Subnets) if err != nil { diags = sdkdiag.AppendWarningf(diags, "waiting for Amazon Comprehend Entity Recognizer (%s) %s: %s", d.Id(), tobe, err) diff --git a/internal/service/comprehend/entity_recognizer_test.go b/internal/service/comprehend/entity_recognizer_test.go index 2c55b82ffd2..a35d62b307d 100644 --- a/internal/service/comprehend/entity_recognizer_test.go +++ b/internal/service/comprehend/entity_recognizer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package comprehend_test import ( @@ -914,7 +917,7 @@ func TestAccComprehendEntityRecognizer_DefaultTags_providerOnly(t *testing.T) { func testAccCheckEntityRecognizerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_comprehend_entity_recognizer" { @@ -962,7 +965,7 @@ func testAccCheckEntityRecognizerExists(ctx context.Context, name string, entity return fmt.Errorf("No Comprehend Entity Recognizer is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) resp, err := tfcomprehend.FindEntityRecognizerByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -1010,7 +1013,7 @@ func testAccCheckEntityRecognizerPublishedVersions(ctx context.Context, name str return fmt.Errorf("No Comprehend Entity Recognizer is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ComprehendClient(ctx) name, err := tfcomprehend.EntityRecognizerParseARN(rs.Primary.ID) if err != nil { diff --git a/internal/service/comprehend/generate.go b/internal/service/comprehend/generate.go index c6ba1b3e578..f2a3adadcbb 100644 --- a/internal/service/comprehend/generate.go +++ b/internal/service/comprehend/generate.go @@ -1,6 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice -ListTags -UpdateTags -AWSSDKVersion=2 //go:generate go run ./test-fixtures/generate/document_classifier/main.go //go:generate go run ./test-fixtures/generate/entity_recognizer/main.go +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package comprehend diff --git a/internal/service/comprehend/service_package_gen.go b/internal/service/comprehend/service_package_gen.go index 39ccecee1b5..9afb3f16b58 100644 --- a/internal/service/comprehend/service_package_gen.go +++ b/internal/service/comprehend/service_package_gen.go @@ -5,6 +5,9 @@ package comprehend import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + comprehend_sdkv2 "github.com/aws/aws-sdk-go-v2/service/comprehend" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +51,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Comprehend } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*comprehend_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return comprehend_sdkv2.NewFromConfig(cfg, func(o *comprehend_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = comprehend_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/comprehend/tags_gen.go b/internal/service/comprehend/tags_gen.go index a9862391876..a2d21dd35ad 100644 --- a/internal/service/comprehend/tags_gen.go +++ b/internal/service/comprehend/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists comprehend service tags. +// listTags lists comprehend service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *comprehend.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *comprehend.Client, identifier string) (tftags.KeyValueTags, error) { input := &comprehend.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *comprehend.Client, identifier string) ( // ListTags lists comprehend service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ComprehendClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ComprehendClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns comprehend service tags from Context. +// getTagsIn returns comprehend service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets comprehend service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets comprehend service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates comprehend service tags. +// updateTags updates comprehend service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *comprehend.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *comprehend.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *comprehend.Client, identifier string, // UpdateTags updates comprehend service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ComprehendClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ComprehendClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/comprehend/test-fixtures/generate/document_classifier/main.go b/internal/service/comprehend/test-fixtures/generate/document_classifier/main.go index 76bf105df6f..3805e9ad1c8 100644 --- a/internal/service/comprehend/test-fixtures/generate/document_classifier/main.go +++ b/internal/service/comprehend/test-fixtures/generate/document_classifier/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/service/comprehend/test-fixtures/generate/document_classifier_multilabel/main.go b/internal/service/comprehend/test-fixtures/generate/document_classifier_multilabel/main.go index 2c28768fa6d..4264d30090c 100644 --- a/internal/service/comprehend/test-fixtures/generate/document_classifier_multilabel/main.go +++ b/internal/service/comprehend/test-fixtures/generate/document_classifier_multilabel/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/service/comprehend/test-fixtures/generate/entity_recognizer/main.go b/internal/service/comprehend/test-fixtures/generate/entity_recognizer/main.go index 3247cfdeedf..01678618724 100644 --- a/internal/service/comprehend/test-fixtures/generate/entity_recognizer/main.go +++ b/internal/service/comprehend/test-fixtures/generate/entity_recognizer/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build generate // +build generate diff --git a/internal/service/comprehend/validate.go b/internal/service/comprehend/validate.go index ab1c4c522e1..64830e93805 100644 --- a/internal/service/comprehend/validate.go +++ b/internal/service/comprehend/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package comprehend import ( diff --git a/internal/service/computeoptimizer/generate.go b/internal/service/computeoptimizer/generate.go new file mode 100644 index 00000000000..01b6d8383f5 --- /dev/null +++ b/internal/service/computeoptimizer/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package computeoptimizer diff --git a/internal/service/computeoptimizer/service_package_gen.go b/internal/service/computeoptimizer/service_package_gen.go index c3eca435750..39403ff3ba9 100644 --- a/internal/service/computeoptimizer/service_package_gen.go +++ b/internal/service/computeoptimizer/service_package_gen.go @@ -5,6 +5,9 @@ package computeoptimizer import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + computeoptimizer_sdkv2 "github.com/aws/aws-sdk-go-v2/service/computeoptimizer" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,4 +34,17 @@ func (p *servicePackage) ServicePackageName() string { return names.ComputeOptimizer } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*computeoptimizer_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return computeoptimizer_sdkv2.NewFromConfig(cfg, func(o *computeoptimizer_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = computeoptimizer_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/configservice/aggregate_authorization.go b/internal/service/configservice/aggregate_authorization.go index ac80412dd53..746aa239611 100644 --- a/internal/service/configservice/aggregate_authorization.go +++ b/internal/service/configservice/aggregate_authorization.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -58,19 +61,19 @@ func ResourceAggregateAuthorization() *schema.Resource { func resourceAggregateAuthorizationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) accountId := d.Get("account_id").(string) region := d.Get("region").(string) input := &configservice.PutAggregationAuthorizationInput{ AuthorizedAccountId: aws.String(accountId), AuthorizedAwsRegion: aws.String(region), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.PutAggregationAuthorizationWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating aggregate authorization: %s", err) + return sdkdiag.AppendErrorf(diags, "creating aggregate authorization: %s", err) } d.SetId(fmt.Sprintf("%s:%s", accountId, region)) @@ -80,7 +83,7 @@ func resourceAggregateAuthorizationPut(ctx context.Context, d *schema.ResourceDa func 
resourceAggregateAuthorizationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) accountId, region, err := AggregateAuthorizationParseID(d.Id()) if err != nil { @@ -134,7 +137,7 @@ func resourceAggregateAuthorizationUpdate(ctx context.Context, d *schema.Resourc func resourceAggregateAuthorizationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) accountId, region, err := AggregateAuthorizationParseID(d.Id()) if err != nil { diff --git a/internal/service/configservice/aggregate_authorization_test.go b/internal/service/configservice/aggregate_authorization_test.go index 2d48f50a5bd..56465425648 100644 --- a/internal/service/configservice/aggregate_authorization_test.go +++ b/internal/service/configservice/aggregate_authorization_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice_test // func TestAccConfigServiceAggregateAuthorization_basic(t *testing.T) { @@ -72,7 +75,7 @@ package configservice_test // } // func testAccCheckAggregateAuthorizationDestroy(s *terraform.State) error { -// conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() +// conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) // for _, rs := range s.RootModule().Resources { // if rs.Type != "aws_config_aggregate_authorization" { diff --git a/internal/service/configservice/config_rule.go b/internal/service/configservice/config_rule.go index 6c085e98523..9d9356bdf15 100644 --- a/internal/service/configservice/config_rule.go +++ b/internal/service/configservice/config_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -188,7 +191,7 @@ func ResourceConfigRule() *schema.Resource { func resourceRulePutConfig(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) ruleInput := configservice.ConfigRule{ @@ -209,7 +212,7 @@ func resourceRulePutConfig(ctx context.Context, d *schema.ResourceData, meta int input := configservice.PutConfigRuleInput{ ConfigRule: &ruleInput, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating AWSConfig config rule: %s", input) err := retry.RetryContext(ctx, propagationTimeout, func() *retry.RetryError { @@ -229,7 +232,7 @@ func resourceRulePutConfig(ctx context.Context, d *schema.ResourceData, meta int _, err = conn.PutConfigRuleWithContext(ctx, &input) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating AWSConfig rule: %s", err) + return sdkdiag.AppendErrorf(diags, "creating AWSConfig rule: %s", err) } d.SetId(name) @@ -239,7 +242,7 @@ func resourceRulePutConfig(ctx context.Context, d *schema.ResourceData, meta int func resourceConfigRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) rule, err := FindConfigRule(ctx, conn, d.Id()) @@ -268,7 +271,7 @@ func resourceConfigRuleRead(ctx context.Context, d *schema.ResourceData, meta in func resourceConfigRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) diff --git a/internal/service/configservice/config_rule_test.go 
b/internal/service/configservice/config_rule_test.go index 522e1e42b4c..a4bb2047ba1 100644 --- a/internal/service/configservice/config_rule_test.go +++ b/internal/service/configservice/config_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -352,7 +355,7 @@ func testAccCheckConfigRuleExists(ctx context.Context, n string, obj *configserv return fmt.Errorf("No config rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) rule, err := tfconfig.FindConfigRule(ctx, conn, rs.Primary.ID) if err != nil { @@ -366,7 +369,7 @@ func testAccCheckConfigRuleExists(ctx context.Context, n string, obj *configserv func testAccCheckConfigRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_config_rule" { diff --git a/internal/service/configservice/configservice.go b/internal/service/configservice/configservice.go index 846a05db493..5370844c917 100644 --- a/internal/service/configservice/configservice.go +++ b/internal/service/configservice/configservice.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( diff --git a/internal/service/configservice/configservice_test.go b/internal/service/configservice/configservice_test.go index e8fbd59e1e7..bb7fedb380b 100644 --- a/internal/service/configservice/configservice_test.go +++ b/internal/service/configservice/configservice_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -27,9 +30,10 @@ func TestAccConfigService_serial(t *testing.T) { "importBasic": testAccConfigurationRecorderStatus_importBasic, }, "ConfigurationRecorder": { - "basic": testAccConfigurationRecorder_basic, - "allParams": testAccConfigurationRecorder_allParams, - "importBasic": testAccConfigurationRecorder_importBasic, + "basic": testAccConfigurationRecorder_basic, + "allParams": testAccConfigurationRecorder_allParams, + "recordStrategy": testAccConfigurationRecorder_recordStrategy, + "disappears": testAccConfigurationRecorder_disappears, }, "ConformancePack": { "basic": testAccConformancePack_basic, diff --git a/internal/service/configservice/configuration_aggregator.go b/internal/service/configservice/configuration_aggregator.go index 28ae98c1896..67871159936 100644 --- a/internal/service/configservice/configuration_aggregator.go +++ b/internal/service/configservice/configuration_aggregator.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -124,11 +127,11 @@ func ResourceConfigurationAggregator() *schema.Resource { func resourceConfigurationAggregatorPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := &configservice.PutConfigurationAggregatorInput{ ConfigurationAggregatorName: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("account_aggregation_source"); ok && len(v.([]interface{})) > 0 { @@ -152,7 +155,7 @@ func resourceConfigurationAggregatorPut(ctx context.Context, d *schema.ResourceD func resourceConfigurationAggregatorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) req := &configservice.DescribeConfigurationAggregatorsInput{ ConfigurationAggregatorNames: []*string{aws.String(d.Id())}, @@ -197,7 +200,7 @@ func resourceConfigurationAggregatorRead(ctx context.Context, d *schema.Resource func resourceConfigurationAggregatorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) req := &configservice.DeleteConfigurationAggregatorInput{ ConfigurationAggregatorName: aws.String(d.Id()), diff --git a/internal/service/configservice/configuration_aggregator_test.go b/internal/service/configservice/configuration_aggregator_test.go index 2b0b6cb7ef6..afc45a8a852 100644 --- a/internal/service/configservice/configuration_aggregator_test.go +++ b/internal/service/configservice/configuration_aggregator_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -211,7 +214,7 @@ func testAccCheckConfigurationAggregatorExists(ctx context.Context, n string, ob return fmt.Errorf("No config configuration aggregator ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) out, err := conn.DescribeConfigurationAggregatorsWithContext(ctx, &configservice.DescribeConfigurationAggregatorsInput{ ConfigurationAggregatorNames: []*string{aws.String(rs.Primary.Attributes["name"])}, }) @@ -231,7 +234,7 @@ func testAccCheckConfigurationAggregatorExists(ctx context.Context, n string, ob func testAccCheckConfigurationAggregatorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_configuration_aggregator" { diff --git a/internal/service/configservice/configuration_recorder.go b/internal/service/configservice/configuration_recorder.go index 75c6ab52f01..7985d95b180 100644 --- a/internal/service/configservice/configuration_recorder.go +++ b/internal/service/configservice/configuration_recorder.go @@ -1,20 +1,26 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( "context" "errors" + "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/configservice" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" - "github.com/hashicorp/terraform-provider-aws/names" ) // @SDKResource("aws_config_configuration_recorder") @@ -29,6 +35,10 @@ func ResourceConfigurationRecorder() *schema.Resource { StateContext: schema.ImportStatePassthroughContext, }, + CustomizeDiff: customdiff.All( + resourceConfigCustomizeDiff, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -37,11 +47,6 @@ func ResourceConfigurationRecorder() *schema.Resource { Default: "default", ValidateFunc: validation.StringLenBetween(0, 256), }, - "role_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, - }, "recording_group": { Type: schema.TypeList, Optional: true, @@ -54,94 +59,106 @@ func ResourceConfigurationRecorder() *schema.Resource { Optional: true, Default: true, }, + "exclusion_by_resource_types": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "resource_types": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: 
schema.TypeString}, + }, + }, + }, + }, "include_global_resource_types": { Type: schema.TypeBool, Optional: true, }, + "recording_strategy": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "use_only": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(configservice.RecordingStrategyType_Values(), false), + }, + }, + }, + }, "resource_types": { Type: schema.TypeSet, - Set: schema.HashString, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, }, + "role_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, }, } } func resourceConfigurationRecorderPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) - recorder := configservice.ConfigurationRecorder{ - Name: aws.String(name), - RoleARN: aws.String(d.Get("role_arn").(string)), + input := &configservice.PutConfigurationRecorderInput{ + ConfigurationRecorder: &configservice.ConfigurationRecorder{ + Name: aws.String(name), + RoleARN: aws.String(d.Get("role_arn").(string)), + }, } - if g, ok := d.GetOk("recording_group"); ok { - recorder.RecordingGroup = expandRecordingGroup(g.([]interface{})) + if v, ok := d.GetOk("recording_group"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.ConfigurationRecorder.RecordingGroup = expandRecordingGroup(v.([]interface{})[0].(map[string]interface{})) } - input := configservice.PutConfigurationRecorderInput{ - ConfigurationRecorder: &recorder, - } - _, err := conn.PutConfigurationRecorderWithContext(ctx, &input) + _, err := conn.PutConfigurationRecorderWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Creating Configuration Recorder failed: %s", err) + return 
sdkdiag.AppendErrorf(diags, "putting ConfigService Configuration Recorder (%s): %s", name, err) } - d.SetId(name) + if d.IsNewResource() { + d.SetId(name) + } return append(diags, resourceConfigurationRecorderRead(ctx, d, meta)...) } func resourceConfigurationRecorderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) - input := configservice.DescribeConfigurationRecordersInput{ - ConfigurationRecorderNames: []*string{aws.String(d.Id())}, - } - out, err := conn.DescribeConfigurationRecordersWithContext(ctx, &input) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, configservice.ErrCodeNoSuchConfigurationRecorderException) { - create.LogNotFoundRemoveState(names.ConfigService, create.ErrActionReading, ResNameConfigurationRecorder, d.Id()) - d.SetId("") - return diags - } + recorder, err := FindConfigurationRecorderByName(ctx, conn, d.Id()) - if err != nil { - return create.DiagError(names.ConfigService, create.ErrActionReading, ResNameConfigurationRecorder, d.Id(), err) - } - - numberOfRecorders := len(out.ConfigurationRecorders) - if !d.IsNewResource() && numberOfRecorders < 1 { - create.LogNotFoundRemoveState(names.ConfigService, create.ErrActionReading, ResNameConfigurationRecorder, d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] ConfigService Configuration Recorder (%s) not found, removing from state", d.Id()) d.SetId("") return diags } - if d.IsNewResource() && numberOfRecorders < 1 { - return create.DiagError(names.ConfigService, create.ErrActionReading, ResNameConfigurationRecorder, d.Id(), errors.New("none found")) - } - - if numberOfRecorders > 1 { - return sdkdiag.AppendErrorf(diags, "Expected exactly 1 Configuration Recorder, received %d: %#v", - numberOfRecorders, out.ConfigurationRecorders) + if err != nil { + return sdkdiag.AppendErrorf(diags, 
"reading ConfigService Configuration Recorder (%s): %s", d.Id(), err) } - recorder := out.ConfigurationRecorders[0] - d.Set("name", recorder.Name) d.Set("role_arn", recorder.RoleARN) if recorder.RecordingGroup != nil { - flattened := flattenRecordingGroup(recorder.RecordingGroup) - err = d.Set("recording_group", flattened) - if err != nil { - return sdkdiag.AppendErrorf(diags, "Failed to set recording_group: %s", err) + if err := d.Set("recording_group", flattenRecordingGroup(recorder.RecordingGroup)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting recording_group: %s", err) } } @@ -150,15 +167,100 @@ func resourceConfigurationRecorderRead(ctx context.Context, d *schema.ResourceDa func resourceConfigurationRecorderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() - input := configservice.DeleteConfigurationRecorderInput{ + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) + + log.Printf("[DEBUG] Deleting ConfigService Configuration Recorder: %s", d.Id()) + _, err := conn.DeleteConfigurationRecorderWithContext(ctx, &configservice.DeleteConfigurationRecorderInput{ ConfigurationRecorderName: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, configservice.ErrCodeNoSuchConfigurationRecorderException) { + return diags } - _, err := conn.DeleteConfigurationRecorderWithContext(ctx, &input) + if err != nil { - if !tfawserr.ErrCodeEquals(err, configservice.ErrCodeNoSuchConfigurationRecorderException) { - return sdkdiag.AppendErrorf(diags, "Deleting Configuration Recorder failed: %s", err) - } + return sdkdiag.AppendErrorf(diags, "deleting ConfigService Configuration Recorder (%s): %s", d.Id(), err) } + return diags } + +func FindConfigurationRecorderByName(ctx context.Context, conn *configservice.ConfigService, name string) (*configservice.ConfigurationRecorder, error) { + input := 
&configservice.DescribeConfigurationRecordersInput{ + ConfigurationRecorderNames: aws.StringSlice([]string{name}), + } + + output, err := conn.DescribeConfigurationRecordersWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, configservice.ErrCodeNoSuchConfigurationRecorderException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return tfresource.AssertSinglePtrResult(output.ConfigurationRecorders) +} + +func resourceConfigCustomizeDiff(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { + if diff.Id() == "" { // New resource. + if g, ok := diff.GetOk("recording_group"); ok { + group := g.([]interface{})[0].(map[string]interface{}) + + if h, ok := group["all_supported"]; ok { + if i, ok := group["recording_strategy"]; ok && len(i.([]interface{})) > 0 && i.([]interface{})[0] != nil { + strategy := i.([]interface{})[0].(map[string]interface{}) + + if j, ok := strategy["use_only"].(string); ok { + if h.(bool) && j != configservice.RecordingStrategyTypeAllSupportedResourceTypes { + return errors.New(`invalid recording_group: recording_strategy use_only must be set to ALL_SUPPORTED_RESOURCE_TYPES when all_supported is true`) + } + + if k, ok := group["exclusion_by_resource_types"]; ok && len(k.([]interface{})) > 0 && k.([]interface{})[0] != nil { + if h.(bool) { + return errors.New(`invalid recording_group: all_supported must be set to false when exclusion_by_resource_types is set`) + } + + if j != configservice.RecordingStrategyTypeExclusionByResourceTypes { + return errors.New(`invalid recording_group: recording_strategy use_only must be set to EXCLUSION_BY_RESOURCE_TYPES when exclusion_by_resource_types is set`) + } + + if l, ok := group["resource_types"]; ok { + resourceTypes := flex.ExpandStringSet(l.(*schema.Set)) + if len(resourceTypes) > 0 { + return errors.New(`invalid recording_group: resource_types must not be set when exclusion_by_resource_types is set`) + } + } + } + + if l, ok
:= group["resource_types"]; ok { + resourceTypes := flex.ExpandStringSet(l.(*schema.Set)) + if len(resourceTypes) > 0 { + if h.(bool) { + return errors.New(`invalid recording_group: all_supported must be set to false when resource_types is set`) + } + + if j != configservice.RecordingStrategyTypeInclusionByResourceTypes { + return errors.New(`invalid recording_group: recording_strategy use_only must be set to INCLUSION_BY_RESOURCE_TYPES when resource_types is set`) + } + + if m, ok := group["exclusion_by_resource_types"]; ok && len(m.([]interface{})) > 0 && m.([]interface{})[0] != nil { + return errors.New(`invalid recording_group: exclusion_by_resource_types must not be set when resource_types is set`) + } + } + } + } + } + } + } + return nil +} diff --git a/internal/service/configservice/configuration_recorder_status.go b/internal/service/configservice/configuration_recorder_status.go index 2140705cbd6..3dd7910d21e 100644 --- a/internal/service/configservice/configuration_recorder_status.go +++ b/internal/service/configservice/configuration_recorder_status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -46,7 +49,7 @@ func ResourceConfigurationRecorderStatus() *schema.Resource { func resourceConfigurationRecorderStatusPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) d.SetId(name) @@ -79,7 +82,7 @@ func resourceConfigurationRecorderStatusPut(ctx context.Context, d *schema.Resou func resourceConfigurationRecorderStatusRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Id() statusInput := configservice.DescribeConfigurationRecorderStatusInput{ @@ -119,7 +122,7 @@ func resourceConfigurationRecorderStatusRead(ctx context.Context, d *schema.Reso func resourceConfigurationRecorderStatusDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := configservice.StopConfigurationRecorderInput{ ConfigurationRecorderName: aws.String(d.Get("name").(string)), } diff --git a/internal/service/configservice/configuration_recorder_status_test.go b/internal/service/configservice/configuration_recorder_status_test.go index 23de7751095..6e987b02cbb 100644 --- a/internal/service/configservice/configuration_recorder_status_test.go +++ b/internal/service/configservice/configuration_recorder_status_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -119,7 +122,7 @@ func testAccCheckConfigurationRecorderStatusExists(ctx context.Context, n string return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) out, err := conn.DescribeConfigurationRecorderStatusWithContext(ctx, &configservice.DescribeConfigurationRecorderStatusInput{ ConfigurationRecorderNames: []*string{aws.String(rs.Primary.Attributes["name"])}, }) @@ -155,7 +158,7 @@ func testAccCheckConfigurationRecorderStatus(n string, desired bool, obj *config func testAccCheckConfigurationRecorderStatusDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_configuration_recorder_status" { diff --git a/internal/service/configservice/configuration_recorder_test.go b/internal/service/configservice/configuration_recorder_test.go index 61363619597..d9a36202734 100644 --- a/internal/service/configservice/configuration_recorder_test.go +++ b/internal/service/configservice/configuration_recorder_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -5,23 +8,50 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/configservice" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfconfig "github.com/hashicorp/terraform-provider-aws/internal/service/configservice" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func testAccConfigurationRecorder_basic(t *testing.T) { ctx := acctest.Context(t) var cr configservice.ConfigurationRecorder - rInt := sdkacctest.RandInt() - expectedName := fmt.Sprintf("tf-acc-test-%d", rInt) - expectedRoleName := fmt.Sprintf("tf-acc-test-awsconfig-%d", rInt) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_config_configuration_recorder.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, configservice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckConfigurationRecorderDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccConfigurationRecorderConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConfigurationRecorderExists(ctx, resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "name", rName), + acctest.CheckResourceAttrGlobalARN(resourceName, "role_arn", "iam", fmt.Sprintf("role/%s", rName)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} - resourceName := "aws_config_configuration_recorder.foo" +func testAccConfigurationRecorder_disappears(t *testing.T) { + ctx := acctest.Context(t) + var cr 
configservice.ConfigurationRecorder + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_config_configuration_recorder.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -30,13 +60,12 @@ func testAccConfigurationRecorder_basic(t *testing.T) { CheckDestroy: testAccCheckConfigurationRecorderDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccConfigurationRecorderConfig_basic(rInt), + Config: testAccConfigurationRecorderConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckConfigurationRecorderExists(ctx, resourceName, &cr), - testAccCheckConfigurationRecorderName(resourceName, expectedName, &cr), - acctest.CheckResourceAttrGlobalARN(resourceName, "role_arn", "iam", fmt.Sprintf("role/%s", expectedRoleName)), - resource.TestCheckResourceAttr(resourceName, "name", expectedName), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfconfig.ResourceConfigurationRecorder(), resourceName), ), + ExpectNonEmptyPlan: true, }, }, }) @@ -45,11 +74,8 @@ func testAccConfigurationRecorder_basic(t *testing.T) { func testAccConfigurationRecorder_allParams(t *testing.T) { ctx := acctest.Context(t) var cr configservice.ConfigurationRecorder - rInt := sdkacctest.RandInt() - expectedName := fmt.Sprintf("tf-acc-test-%d", rInt) - expectedRoleName := fmt.Sprintf("tf-acc-test-awsconfig-%d", rInt) - - resourceName := "aws_config_configuration_recorder.foo" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_config_configuration_recorder.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -58,26 +84,26 @@ func testAccConfigurationRecorder_allParams(t *testing.T) { CheckDestroy: testAccCheckConfigurationRecorderDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccConfigurationRecorderConfig_allParams(rInt), + Config: testAccConfigurationRecorderConfig_allParams(rName), Check: resource.ComposeTestCheckFunc( 
testAccCheckConfigurationRecorderExists(ctx, resourceName, &cr), - testAccCheckConfigurationRecorderName(resourceName, expectedName, &cr), - acctest.CheckResourceAttrGlobalARN(resourceName, "role_arn", "iam", fmt.Sprintf("role/%s", expectedRoleName)), - resource.TestCheckResourceAttr(resourceName, "name", expectedName), + resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "recording_group.#", "1"), resource.TestCheckResourceAttr(resourceName, "recording_group.0.all_supported", "false"), resource.TestCheckResourceAttr(resourceName, "recording_group.0.include_global_resource_types", "false"), resource.TestCheckResourceAttr(resourceName, "recording_group.0.resource_types.#", "2"), + acctest.CheckResourceAttrGlobalARN(resourceName, "role_arn", "iam", fmt.Sprintf("role/%s", rName)), ), }, }, }) } -func testAccConfigurationRecorder_importBasic(t *testing.T) { +func testAccConfigurationRecorder_recordStrategy(t *testing.T) { ctx := acctest.Context(t) - resourceName := "aws_config_configuration_recorder.foo" - rInt := sdkacctest.RandInt() + var cr configservice.ConfigurationRecorder + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_config_configuration_recorder.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -86,58 +112,41 @@ func testAccConfigurationRecorder_importBasic(t *testing.T) { CheckDestroy: testAccCheckConfigurationRecorderDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccConfigurationRecorderConfig_basic(rInt), - }, - - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + Config: testAccConfigurationRecorderConfig_recordStrategy(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckConfigurationRecorderExists(ctx, resourceName, &cr), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "recording_group.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "recording_group.0.all_supported", "false"), + resource.TestCheckResourceAttr(resourceName, "recording_group.0.exclusion_by_resource_types.0.resource_types.#", "2"), + resource.TestCheckResourceAttr(resourceName, "recording_group.0.recording_strategy.0.use_only", "EXCLUSION_BY_RESOURCE_TYPES"), + acctest.CheckResourceAttrGlobalARN(resourceName, "role_arn", "iam", fmt.Sprintf("role/%s", rName)), + ), }, }, }) } -func testAccCheckConfigurationRecorderName(n string, desired string, obj *configservice.ConfigurationRecorder) resource.TestCheckFunc { +func testAccCheckConfigurationRecorderExists(ctx context.Context, n string, v *configservice.ConfigurationRecorder) resource.TestCheckFunc { return func(s *terraform.State) error { - _, ok := s.RootModule().Resources[n] + rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - if *obj.Name != desired { - return fmt.Errorf("Expected configuration recorder %q name to be %q, given: %q", - n, desired, *obj.Name) + if rs.Primary.ID == "" { + return fmt.Errorf("No ConfigService Configuration Recorder ID is set") } - return nil - } -} + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) -func testAccCheckConfigurationRecorderExists(ctx context.Context, n string, obj *configservice.ConfigurationRecorder) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not Found: %s", n) - } + output, err := tfconfig.FindConfigurationRecorderByName(ctx, conn, rs.Primary.ID) - if rs.Primary.ID == "" { - return fmt.Errorf("No configuration recorder ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() - out, err := conn.DescribeConfigurationRecordersWithContext(ctx, &configservice.DescribeConfigurationRecordersInput{ - ConfigurationRecorderNames: []*string{aws.String(rs.Primary.Attributes["name"])}, - }) if err != nil { - 
return fmt.Errorf("Failed to describe configuration recorder: %s", err) - } - if len(out.ConfigurationRecorders) < 1 { - return fmt.Errorf("No configuration recorder found when describing %q", rs.Primary.Attributes["name"]) + return err } - cr := out.ConfigurationRecorders[0] - *obj = *cr + *v = *output return nil } @@ -145,38 +154,39 @@ func testAccCheckConfigurationRecorderExists(ctx context.Context, n string, obj func testAccCheckConfigurationRecorderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_configuration_recorder_status" { continue } - resp, err := conn.DescribeConfigurationRecordersWithContext(ctx, &configservice.DescribeConfigurationRecordersInput{ - ConfigurationRecorderNames: []*string{aws.String(rs.Primary.Attributes["name"])}, - }) + _, err := tfconfig.FindConfigurationRecorderByName(ctx, conn, rs.Primary.ID) - if err == nil { - if len(resp.ConfigurationRecorders) != 0 && - *resp.ConfigurationRecorders[0].Name == rs.Primary.Attributes["name"] { - return fmt.Errorf("Configuration recorder still exists: %s", rs.Primary.Attributes["name"]) - } + if tfresource.NotFound(err) { + continue } + + if err != nil { + return err + } + + return fmt.Errorf("ConfigService Configuration Recorder %s still exists", rs.Primary.ID) } return nil } } -func testAccConfigurationRecorderConfig_basic(randInt int) string { +func testAccConfigurationRecorderConfig_basic(rName string) string { return fmt.Sprintf(` -resource "aws_config_configuration_recorder" "foo" { - name = "tf-acc-test-%d" - role_arn = aws_iam_role.r.arn +resource "aws_config_configuration_recorder" "test" { + name = %[1]q + role_arn = aws_iam_role.test.arn } -resource "aws_iam_role" "r" { - name = "tf-acc-test-awsconfig-%d" +resource 
"aws_iam_role" "test" { + name = %[1]q assume_role_policy = < 0 { + recordingGroup.ExclusionByResourceTypes = expandRecordingGroupExclusionByResourceTypes(v.([]interface{})) + } + } + if v, ok := group["include_global_resource_types"]; ok { recordingGroup.IncludeGlobalResourceTypes = aws.Bool(v.(bool)) } + if v, ok := group["recording_strategy"]; ok { + if len(v.([]interface{})) > 0 { + recordingGroup.RecordingStrategy = expandRecordingGroupRecordingStrategy(v.([]interface{})) + } + } + if v, ok := group["resource_types"]; ok { recordingGroup.ResourceTypes = flex.ExpandStringSet(v.(*schema.Set)) } return &recordingGroup } +func expandRecordingGroupExclusionByResourceTypes(configured []interface{}) *configservice.ExclusionByResourceTypes { + exclusionByResourceTypes := configservice.ExclusionByResourceTypes{} + exclusion := configured[0].(map[string]interface{}) + if v, ok := exclusion["resource_types"]; ok { + exclusionByResourceTypes.ResourceTypes = flex.ExpandStringSet(v.(*schema.Set)) + } + return &exclusionByResourceTypes +} + +func expandRecordingGroupRecordingStrategy(configured []interface{}) *configservice.RecordingStrategy { + recordingStrategy := configservice.RecordingStrategy{} + strategy := configured[0].(map[string]interface{}) + if v, ok := strategy["use_only"].(string); ok { + recordingStrategy.UseOnly = aws.String(v) + } + return &recordingStrategy +} + func expandRuleScope(l []interface{}) *configservice.Scope { if len(l) == 0 || l[0] == nil { return nil @@ -187,17 +219,47 @@ func flattenRecordingGroup(g *configservice.RecordingGroup) []map[string]interfa m["all_supported"] = aws.BoolValue(g.AllSupported) } + if g.ExclusionByResourceTypes != nil { + m["exclusion_by_resource_types"] = flattenExclusionByResourceTypes(g.ExclusionByResourceTypes) + } + if g.IncludeGlobalResourceTypes != nil { m["include_global_resource_types"] = aws.BoolValue(g.IncludeGlobalResourceTypes) } + if g.RecordingStrategy != nil { + m["recording_strategy"] = 
flattenRecordingGroupRecordingStrategy(g.RecordingStrategy) + } + if g.ResourceTypes != nil && len(g.ResourceTypes) > 0 { m["resource_types"] = flex.FlattenStringSet(g.ResourceTypes) } return []map[string]interface{}{m} } +func flattenExclusionByResourceTypes(exclusionByResourceTypes *configservice.ExclusionByResourceTypes) []interface{} { + if exclusionByResourceTypes == nil { + return nil + } + m := make(map[string]interface{}) + if exclusionByResourceTypes.ResourceTypes != nil { + m["resource_types"] = flex.FlattenStringSet(exclusionByResourceTypes.ResourceTypes) + } + + return []interface{}{m} +} +func flattenRecordingGroupRecordingStrategy(recordingStrategy *configservice.RecordingStrategy) []interface{} { + if recordingStrategy == nil { + return nil + } + m := make(map[string]interface{}) + if recordingStrategy.UseOnly != nil { + m["use_only"] = aws.StringValue(recordingStrategy.UseOnly) + } + + return []interface{}{m} +} func flattenRuleScope(scope *configservice.Scope) []interface{} { var items []interface{} diff --git a/internal/service/configservice/generate.go b/internal/service/configservice/generate.go index 71b3492b3ae..74dc65f9061 100644 --- a/internal/service/configservice/generate.go +++ b/internal/service/configservice/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package configservice diff --git a/internal/service/configservice/organization_conformance_pack.go b/internal/service/configservice/organization_conformance_pack.go index c0429d63c96..96bf5bbcafb 100644 --- a/internal/service/configservice/organization_conformance_pack.go +++ b/internal/service/configservice/organization_conformance_pack.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -115,7 +118,7 @@ func ResourceOrganizationConformancePack() *schema.Resource { func resourceOrganizationConformancePackCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) @@ -164,7 +167,7 @@ func resourceOrganizationConformancePackCreate(ctx context.Context, d *schema.Re func resourceOrganizationConformancePackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) pack, err := DescribeOrganizationConformancePack(ctx, conn, d.Id()) @@ -206,7 +209,7 @@ func resourceOrganizationConformancePackRead(ctx context.Context, d *schema.Reso func resourceOrganizationConformancePackUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := &configservice.PutOrganizationConformancePackInput{ OrganizationConformancePackName: aws.String(d.Id()), @@ -251,7 +254,7 @@ func resourceOrganizationConformancePackUpdate(ctx context.Context, d *schema.Re func resourceOrganizationConformancePackDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := &configservice.DeleteOrganizationConformancePackInput{ OrganizationConformancePackName: aws.String(d.Id()), diff --git a/internal/service/configservice/organization_conformance_pack_test.go 
b/internal/service/configservice/organization_conformance_pack_test.go index 36445abde32..a805cb2188d 100644 --- a/internal/service/configservice/organization_conformance_pack_test.go +++ b/internal/service/configservice/organization_conformance_pack_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -427,7 +430,7 @@ func testAccOrganizationConformancePack_updateTemplateBody(t *testing.T) { func testAccCheckOrganizationConformancePackDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_organization_conformance_pack" { @@ -466,7 +469,7 @@ func testAccCheckOrganizationConformancePackExists(ctx context.Context, resource return fmt.Errorf("Not Found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) pack, err := tfconfig.DescribeOrganizationConformancePack(ctx, conn, rs.Primary.ID) @@ -648,10 +651,6 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} `, rName, bName)) } @@ -670,11 +669,6 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = %[2]q diff --git a/internal/service/configservice/organization_custom_policy_rule.go b/internal/service/configservice/organization_custom_policy_rule.go index c67d314b2d9..6152993d422 100644 --- a/internal/service/configservice/organization_custom_policy_rule.go +++ 
b/internal/service/configservice/organization_custom_policy_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -76,15 +79,9 @@ func ResourceOrganizationCustomPolicyRule() *schema.Resource { ), }, "maximum_execution_frequency": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - configservice.MaximumExecutionFrequencyOneHour, - configservice.MaximumExecutionFrequencyThreeHours, - configservice.MaximumExecutionFrequencySixHours, - configservice.MaximumExecutionFrequencyTwelveHours, - configservice.MaximumExecutionFrequencyTwentyFourHours, - }, false), + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(configservice.MaximumExecutionFrequency_Values(), false), }, "name": { Type: schema.TypeString, @@ -148,7 +145,7 @@ const ( ) func resourceOrganizationCustomPolicyRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) in := &configservice.PutOrganizationConfigRuleInput{ @@ -216,7 +213,7 @@ func resourceOrganizationCustomPolicyRuleCreate(ctx context.Context, d *schema.R } func resourceOrganizationCustomPolicyRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) rule, err := FindOrganizationConfigRule(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -280,7 +277,7 @@ func resourceOrganizationCustomPolicyRuleRead(ctx context.Context, d *schema.Res } func resourceOrganizationCustomPolicyRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := 
meta.(*conns.AWSClient).ConfigServiceConn(ctx) in := &configservice.PutOrganizationConfigRuleInput{ OrganizationConfigRuleName: aws.String(d.Id()), @@ -342,7 +339,7 @@ func resourceOrganizationCustomPolicyRuleUpdate(ctx context.Context, d *schema.R } func resourceOrganizationCustomPolicyRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) log.Printf("[INFO] Deleting ConfigService %s %s", ResNameOrganizationCustomPolicyRule, d.Id()) diff --git a/internal/service/configservice/organization_custom_policy_rule_test.go b/internal/service/configservice/organization_custom_policy_rule_test.go index ecf22d09401..9b624df83cf 100644 --- a/internal/service/configservice/organization_custom_policy_rule_test.go +++ b/internal/service/configservice/organization_custom_policy_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -143,7 +146,7 @@ func TestAccConfigServiceOrganizationCustomPolicyRule_PolicyText(t *testing.T) { func testAccCheckOrganizationCustomPolicyRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_organization_custom_policy_rule" { @@ -178,7 +181,7 @@ func testAccCheckOrganizationCustomPolicyRuleExists(ctx context.Context, name st return create.Error(names.ConfigService, create.ErrActionCheckingExistence, tfconfigservice.ResNameOrganizationCustomPolicyRule, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) resp, err := 
tfconfigservice.FindOrganizationConfigRule(ctx, conn, rs.Primary.ID) diff --git a/internal/service/configservice/organization_custom_rule.go b/internal/service/configservice/organization_custom_rule.go index c06bdd6d6d5..f4b77e25bf8 100644 --- a/internal/service/configservice/organization_custom_rule.go +++ b/internal/service/configservice/organization_custom_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -71,15 +74,9 @@ func ResourceOrganizationCustomRule() *schema.Resource { ValidateFunc: validation.StringLenBetween(1, 256), }, "maximum_execution_frequency": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - configservice.MaximumExecutionFrequencyOneHour, - configservice.MaximumExecutionFrequencyThreeHours, - configservice.MaximumExecutionFrequencySixHours, - configservice.MaximumExecutionFrequencyTwelveHours, - configservice.MaximumExecutionFrequencyTwentyFourHours, - }, false), + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(configservice.MaximumExecutionFrequency_Values(), false), }, "name": { Type: schema.TypeString, @@ -131,7 +128,7 @@ func ResourceOrganizationCustomRule() *schema.Resource { func resourceOrganizationCustomRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) input := &configservice.PutOrganizationConfigRuleInput{ @@ -191,7 +188,7 @@ func resourceOrganizationCustomRuleCreate(ctx context.Context, d *schema.Resourc func resourceOrganizationCustomRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) 
rule, err := DescribeOrganizationConfigRule(ctx, conn, d.Id()) @@ -256,7 +253,7 @@ func resourceOrganizationCustomRuleRead(ctx context.Context, d *schema.ResourceD func resourceOrganizationCustomRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := &configservice.PutOrganizationConfigRuleInput{ OrganizationConfigRuleName: aws.String(d.Id()), @@ -313,7 +310,7 @@ func resourceOrganizationCustomRuleUpdate(ctx context.Context, d *schema.Resourc func resourceOrganizationCustomRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := &configservice.DeleteOrganizationConfigRuleInput{ OrganizationConfigRuleName: aws.String(d.Id()), diff --git a/internal/service/configservice/organization_custom_rule_test.go b/internal/service/configservice/organization_custom_rule_test.go index 45042df5e86..8ed55c3d55d 100644 --- a/internal/service/configservice/organization_custom_rule_test.go +++ b/internal/service/configservice/organization_custom_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -463,7 +466,7 @@ func testAccCheckOrganizationCustomRuleExists(ctx context.Context, resourceName return create.Error(names.ConfigService, create.ErrActionCheckingExistence, tfconfigservice.ResNameOrganizationCustomRule, resourceName, errors.New("not found")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) rule, err := tfconfigservice.DescribeOrganizationConfigRule(ctx, conn, rs.Primary.ID) @@ -483,7 +486,7 @@ func testAccCheckOrganizationCustomRuleExists(ctx context.Context, resourceName func testAccCheckOrganizationCustomRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_organization_custom_rule" { diff --git a/internal/service/configservice/organization_managed_rule.go b/internal/service/configservice/organization_managed_rule.go index a37b93f7ae3..6dc8de5cca0 100644 --- a/internal/service/configservice/organization_managed_rule.go +++ b/internal/service/configservice/organization_managed_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -67,15 +70,9 @@ func ResourceOrganizationManagedRule() *schema.Resource { ), }, "maximum_execution_frequency": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - configservice.MaximumExecutionFrequencyOneHour, - configservice.MaximumExecutionFrequencyThreeHours, - configservice.MaximumExecutionFrequencySixHours, - configservice.MaximumExecutionFrequencyTwelveHours, - configservice.MaximumExecutionFrequencyTwentyFourHours, - }, false), + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(configservice.MaximumExecutionFrequency_Values(), false), }, "name": { Type: schema.TypeString, @@ -118,7 +115,7 @@ func ResourceOrganizationManagedRule() *schema.Resource { func resourceOrganizationManagedRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("name").(string) input := &configservice.PutOrganizationConfigRuleInput{ @@ -177,7 +174,7 @@ func resourceOrganizationManagedRuleCreate(ctx context.Context, d *schema.Resour func resourceOrganizationManagedRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) rule, err := DescribeOrganizationConfigRule(ctx, conn, d.Id()) @@ -238,7 +235,7 @@ func resourceOrganizationManagedRuleRead(ctx context.Context, d *schema.Resource func resourceOrganizationManagedRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := 
&configservice.PutOrganizationConfigRuleInput{ OrganizationConfigRuleName: aws.String(d.Id()), @@ -294,7 +291,7 @@ func resourceOrganizationManagedRuleUpdate(ctx context.Context, d *schema.Resour func resourceOrganizationManagedRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) input := &configservice.DeleteOrganizationConfigRuleInput{ OrganizationConfigRuleName: aws.String(d.Id()), diff --git a/internal/service/configservice/organization_managed_rule_test.go b/internal/service/configservice/organization_managed_rule_test.go index 43baab30f5e..03859ae38d7 100644 --- a/internal/service/configservice/organization_managed_rule_test.go +++ b/internal/service/configservice/organization_managed_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -421,7 +424,7 @@ func testAccCheckOrganizationManagedRuleExists(ctx context.Context, resourceName return fmt.Errorf("Not Found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) rule, err := tfconfig.DescribeOrganizationConfigRule(ctx, conn, rs.Primary.ID) @@ -441,7 +444,7 @@ func testAccCheckOrganizationManagedRuleExists(ctx context.Context, resourceName func testAccCheckOrganizationManagedRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_organization_managed_rule" { diff --git a/internal/service/configservice/remediation_configuration.go 
b/internal/service/configservice/remediation_configuration.go index 7b51215d0b6..d9d3d6722ae 100644 --- a/internal/service/configservice/remediation_configuration.go +++ b/internal/service/configservice/remediation_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -144,7 +147,7 @@ func ResourceRemediationConfiguration() *schema.Resource { func resourceRemediationConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("config_rule_name").(string) input := configservice.RemediationConfiguration{ @@ -198,7 +201,7 @@ func resourceRemediationConfigurationPut(ctx context.Context, d *schema.Resource func resourceRemediationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) out, err := conn.DescribeRemediationConfigurationsWithContext(ctx, &configservice.DescribeRemediationConfigurationsInput{ ConfigRuleNames: []*string{aws.String(d.Id())}, }) @@ -251,7 +254,7 @@ func resourceRemediationConfigurationRead(ctx context.Context, d *schema.Resourc func resourceRemediationConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ConfigServiceConn() + conn := meta.(*conns.AWSClient).ConfigServiceConn(ctx) name := d.Get("config_rule_name").(string) diff --git a/internal/service/configservice/remediation_configuration_test.go b/internal/service/configservice/remediation_configuration_test.go index b883a930c9a..e708d2d3c93 100644 --- a/internal/service/configservice/remediation_configuration_test.go +++ 
b/internal/service/configservice/remediation_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice_test import ( @@ -330,7 +333,7 @@ func testAccCheckRemediationConfigurationExists(ctx context.Context, n string, o return create.Error(names.ConfigService, create.ErrActionCheckingExistence, tfconfigservice.ResNameRemediationConfiguration, n, errors.New("ID not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) out, err := conn.DescribeRemediationConfigurationsWithContext(ctx, &configservice.DescribeRemediationConfigurationsInput{ ConfigRuleNames: []*string{aws.String(rs.Primary.Attributes["config_rule_name"])}, }) @@ -350,7 +353,7 @@ func testAccCheckRemediationConfigurationExists(ctx context.Context, n string, o func testAccCheckRemediationConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConfigServiceConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_config_remediation_configuration" { diff --git a/internal/service/configservice/service_package.go b/internal/service/configservice/service_package.go new file mode 100644 index 00000000000..49ed104ba3f --- /dev/null +++ b/internal/service/configservice/service_package.go @@ -0,0 +1,58 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package configservice + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + configservice_sdkv1 "github.com/aws/aws-sdk-go/service/configservice" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *configservice_sdkv1.ConfigService) (*configservice_sdkv1.ConfigService, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + // When calling Config Organization Rules API actions immediately + // after Organization creation, the API can randomly return the + // OrganizationAccessDeniedException error for a few minutes, even + // after succeeding a few requests. + switch r.Operation.Name { + case "DeleteOrganizationConfigRule", "DescribeOrganizationConfigRules", "DescribeOrganizationConfigRuleStatuses", "PutOrganizationConfigRule": + if !tfawserr.ErrMessageContains(r.Error, configservice_sdkv1.ErrCodeOrganizationAccessDeniedException, "This action can be only made by AWS Organization's master account.") { + return + } + + // We only want to retry briefly as the default max retry count would + // excessively retry when the error could be legitimate. + // We currently depend on the DefaultRetryer exponential backoff here. + // ~10 retries gives a fair backoff of a few seconds. 
+ if r.RetryCount < 9 { + r.Retryable = aws_sdkv1.Bool(true) + } else { + r.Retryable = aws_sdkv1.Bool(false) + } + case "DeleteOrganizationConformancePack", "DescribeOrganizationConformancePacks", "DescribeOrganizationConformancePackStatuses", "PutOrganizationConformancePack": + if !tfawserr.ErrCodeEquals(r.Error, configservice_sdkv1.ErrCodeOrganizationAccessDeniedException) { + if r.Operation.Name == "DeleteOrganizationConformancePack" && tfawserr.ErrCodeEquals(r.Error, configservice_sdkv1.ErrCodeResourceInUseException) { + r.Retryable = aws_sdkv1.Bool(true) + } + return + } + + // We only want to retry briefly as the default max retry count would + // excessively retry when the error could be legitimate. + // We currently depend on the DefaultRetryer exponential backoff here. + // ~10 retries gives a fair backoff of a few seconds. + if r.RetryCount < 9 { + r.Retryable = aws_sdkv1.Bool(true) + } else { + r.Retryable = aws_sdkv1.Bool(false) + } + } + }) + + return conn, nil +} diff --git a/internal/service/configservice/service_package_gen.go b/internal/service/configservice/service_package_gen.go index 2e1ad961348..42f7adae36b 100644 --- a/internal/service/configservice/service_package_gen.go +++ b/internal/service/configservice/service_package_gen.go @@ -5,6 +5,10 @@ package configservice import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + configservice_sdkv1 "github.com/aws/aws-sdk-go/service/configservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -92,4 +96,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ConfigService } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*configservice_sdkv1.ConfigService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return configservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/configservice/status.go b/internal/service/configservice/status.go index 647b29e7b73..270b4823277 100644 --- a/internal/service/configservice/status.go +++ b/internal/service/configservice/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( diff --git a/internal/service/configservice/sweep.go b/internal/service/configservice/sweep.go index 212747d5c7c..82211cb62f1 100644 --- a/internal/service/configservice/sweep.go +++ b/internal/service/configservice/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,7 +16,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -44,11 +46,11 @@ func init() { func sweepAggregateAuthorizations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %s", err) } - conn := client.(*conns.AWSClient).ConfigServiceConn() + conn := client.ConfigServiceConn(ctx) aggregateAuthorizations, err := DescribeAggregateAuthorizations(ctx, conn) if err != nil { @@ -82,11 +84,11 @@ func sweepAggregateAuthorizations(region string) error { func sweepConfigurationAggregators(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %s", err) } - conn := client.(*conns.AWSClient).ConfigServiceConn() + conn := client.ConfigServiceConn(ctx) resp, err := conn.DescribeConfigurationAggregatorsWithContext(ctx, &configservice.DescribeConfigurationAggregatorsInput{}) if err != nil { @@ -121,11 +123,11 @@ func sweepConfigurationAggregators(region string) error { func sweepConfigurationRecorder(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ConfigServiceConn() + conn := client.ConfigServiceConn(ctx) req := 
&configservice.DescribeConfigurationRecordersInput{} resp, err := conn.DescribeConfigurationRecordersWithContext(ctx, req) @@ -165,11 +167,11 @@ func sweepConfigurationRecorder(region string) error { func sweepDeliveryChannels(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ConfigServiceConn() + conn := client.ConfigServiceConn(ctx) req := &configservice.DescribeDeliveryChannelsInput{} var resp *configservice.DescribeDeliveryChannelsOutput diff --git a/internal/service/configservice/tags_gen.go b/internal/service/configservice/tags_gen.go index 38a5debd2ce..3799baf24ce 100644 --- a/internal/service/configservice/tags_gen.go +++ b/internal/service/configservice/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists configservice service tags. +// listTags lists configservice service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn configserviceiface.ConfigServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn configserviceiface.ConfigServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &configservice.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn configserviceiface.ConfigServiceAPI, ide // ListTags lists configservice service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ConfigServiceConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ConfigServiceConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*configservice.Tag) tftags.KeyValu return tftags.New(ctx, m) } -// GetTagsIn returns configservice service tags from Context. +// getTagsIn returns configservice service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*configservice.Tag { +func getTagsIn(ctx context.Context) []*configservice.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*configservice.Tag { return nil } -// SetTagsOut sets configservice service tags in Context. -func SetTagsOut(ctx context.Context, tags []*configservice.Tag) { +// setTagsOut sets configservice service tags in Context. +func setTagsOut(ctx context.Context, tags []*configservice.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates configservice service tags. +// updateTags updates configservice service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn configserviceiface.ConfigServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn configserviceiface.ConfigServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn configserviceiface.ConfigServiceAPI, i // UpdateTags updates configservice service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ConfigServiceConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ConfigServiceConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/configservice/validate.go b/internal/service/configservice/validate.go index cc31526d07d..f677244bbf8 100644 --- a/internal/service/configservice/validate.go +++ b/internal/service/configservice/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package configservice import ( @@ -7,11 +10,5 @@ import ( ) func validExecutionFrequency() schema.SchemaValidateFunc { - return validation.StringInSlice([]string{ - configservice.MaximumExecutionFrequencyOneHour, - configservice.MaximumExecutionFrequencyThreeHours, - configservice.MaximumExecutionFrequencySixHours, - configservice.MaximumExecutionFrequencyTwelveHours, - configservice.MaximumExecutionFrequencyTwentyFourHours, - }, false) + return validation.StringInSlice(configservice.MaximumExecutionFrequency_Values(), false) } diff --git a/internal/service/configservice/wait.go b/internal/service/configservice/wait.go index 6215ceaa035..e4dfabac5a1 100644 --- a/internal/service/configservice/wait.go +++ b/internal/service/configservice/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package configservice import ( diff --git a/internal/service/connect/bot_association.go b/internal/service/connect/bot_association.go index b72234f17f2..197797c6b2c 100644 --- a/internal/service/connect/bot_association.go +++ b/internal/service/connect/bot_association.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( "context" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -76,7 +78,7 @@ func ResourceBotAssociation() *schema.Resource { } func resourceBotAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceId := d.Get("instance_id").(string) @@ -114,7 +116,7 @@ func resourceBotAssociationCreate(ctx context.Context, d *schema.ResourceData, m lbaId := BotV1AssociationCreateResourceID(instanceId, aws.StringValue(input.LexBot.Name), aws.StringValue(input.LexBot.LexRegion)) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Bot Association (%s): %w", lbaId, err)) + return diag.Errorf("creating Connect Bot Association (%s): %s", lbaId, err) } d.SetId(lbaId) @@ -123,7 +125,7 @@ func resourceBotAssociationCreate(ctx context.Context, d *schema.ResourceData, m } func resourceBotAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceId, name, region, err := BotV1AssociationParseResourceID(d.Id()) @@ -140,23 +142,23 @@ func resourceBotAssociationRead(ctx context.Context, d *schema.ResourceData, met } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Connect Bot Association (%s): %w", d.Id(), err)) + return diag.Errorf("reading Connect Bot Association (%s): %s", d.Id(), err) } if lexBot == nil { - return diag.FromErr(fmt.Errorf("error 
reading Connect Bot Association (%s): empty output", d.Id())) + return diag.Errorf("reading Connect Bot Association (%s): empty output", d.Id()) } d.Set("instance_id", instanceId) if err := d.Set("lex_bot", flattenLexBot(lexBot)); err != nil { - return diag.FromErr(fmt.Errorf("error setting lex_bot: %w", err)) + return diag.Errorf("setting lex_bot: %s", err) } return nil } func resourceBotAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, name, region, err := BotV1AssociationParseResourceID(d.Id()) @@ -181,7 +183,7 @@ func resourceBotAssociationDelete(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Connect Bot Association (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Connect Bot Association (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/bot_association_data_source.go b/internal/service/connect/bot_association_data_source.go index e073fc98d87..c2c3eb5813a 100644 --- a/internal/service/connect/bot_association_data_source.go +++ b/internal/service/connect/bot_association_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( "context" - "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" @@ -44,7 +46,7 @@ func DataSourceBotAssociation() *schema.Resource { } func dataSourceBotAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) @@ -57,18 +59,18 @@ func dataSourceBotAssociationRead(ctx context.Context, d *schema.ResourceData, m lexBot, err := FindBotAssociationV1ByNameAndRegionWithContext(ctx, conn, instanceID, name, region) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Bot Association (%s,%s): %w", instanceID, name, err)) + return diag.Errorf("finding Connect Bot Association (%s,%s): %s", instanceID, name, err) } if lexBot == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Bot Association (%s,%s) : not found", instanceID, name)) + return diag.Errorf("finding Connect Bot Association (%s,%s) : not found", instanceID, name) } d.SetId(meta.(*conns.AWSClient).Region) d.Set("instance_id", instanceID) if err := d.Set("lex_bot", flattenLexBot(lexBot)); err != nil { - return diag.FromErr(fmt.Errorf("error setting lex_bot: %w", err)) + return diag.Errorf("setting lex_bot: %s", err) } return nil diff --git a/internal/service/connect/bot_association_data_source_test.go b/internal/service/connect/bot_association_data_source_test.go index b09e0edfb75..b6ff81f459b 100644 --- a/internal/service/connect/bot_association_data_source_test.go +++ b/internal/service/connect/bot_association_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/bot_association_test.go b/internal/service/connect/bot_association_test.go index 13da007c90d..f15bc4da688 100644 --- a/internal/service/connect/bot_association_test.go +++ b/internal/service/connect/bot_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -87,7 +90,7 @@ func testAccCheckBotAssociationExists(ctx context.Context, resourceName string) return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) lexBot, err := tfconnect.FindBotAssociationV1ByNameAndRegionWithContext(ctx, conn, instanceID, name, region) @@ -119,7 +122,7 @@ func testAccCheckBotAssociationDestroy(ctx context.Context) resource.TestCheckFu return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) lexBot, err := tfconnect.FindBotAssociationV1ByNameAndRegionWithContext(ctx, conn, instanceID, name, region) diff --git a/internal/service/connect/connect_test.go b/internal/service/connect/connect_test.go index 0dbbfa307bb..b103c4c68e9 100644 --- a/internal/service/connect/connect_test.go +++ b/internal/service/connect/connect_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/contact_flow.go b/internal/service/connect/contact_flow.go index b6152ef364e..ad96e3897c8 100644 --- a/internal/service/connect/contact_flow.go +++ b/internal/service/connect/contact_flow.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -91,7 +94,7 @@ func ResourceContactFlow() *schema.Resource { } func resourceContactFlowCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) @@ -99,7 +102,7 @@ func resourceContactFlowCreate(ctx context.Context, d *schema.ResourceData, meta input := &connect.CreateContactFlowInput{ Name: aws.String(name), InstanceId: aws.String(instanceID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(d.Get("type").(string)), } @@ -116,7 +119,7 @@ func resourceContactFlowCreate(ctx context.Context, d *schema.ResourceData, meta defer conns.GlobalMutexKV.Unlock(contactFlowMutexKey) file, err := resourceContactFlowLoadFileContent(filename) if err != nil { - return diag.FromErr(fmt.Errorf("unable to load %q: %w", filename, err)) + return diag.Errorf("unable to load %q: %s", filename, err) } input.Content = aws.String(file) } else if v, ok := d.GetOk("content"); ok { @@ -126,11 +129,11 @@ func resourceContactFlowCreate(ctx context.Context, d *schema.ResourceData, meta output, err := conn.CreateContactFlowWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Contact Flow (%s): %w", name, err)) + return diag.Errorf("creating Connect Contact Flow (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Contact Flow (%s): empty output", name)) + return diag.Errorf("creating Connect Contact Flow (%s): empty output", name) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.ContactFlowId))) @@ -139,7 +142,7 @@ func resourceContactFlowCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - 
conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, contactFlowID, err := ContactFlowParseID(d.Id()) @@ -159,11 +162,11 @@ func resourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta i } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect Contact Flow (%s): %s", d.Id(), err) } if resp == nil || resp.ContactFlow == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Contact Flow (%s): empty response", d.Id()) } d.Set("arn", resp.ContactFlow.Arn) @@ -174,13 +177,13 @@ func resourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("type", resp.ContactFlow.Type) d.Set("content", resp.ContactFlow.Content) - SetTagsOut(ctx, resp.ContactFlow.Tags) + setTagsOut(ctx, resp.ContactFlow.Tags) return nil } func resourceContactFlowUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, contactFlowID, err := ContactFlowParseID(d.Id()) @@ -199,7 +202,7 @@ func resourceContactFlowUpdate(ctx context.Context, d *schema.ResourceData, meta _, updateMetadataInputErr := conn.UpdateContactFlowNameWithContext(ctx, updateMetadataInput) if updateMetadataInputErr != nil { - return diag.FromErr(fmt.Errorf("error updating Connect Contact Flow (%s): %w", d.Id(), updateMetadataInputErr)) + return diag.Errorf("updating Connect Contact Flow (%s): %s", d.Id(), updateMetadataInputErr) } } @@ -218,7 +221,7 @@ func resourceContactFlowUpdate(ctx context.Context, d *schema.ResourceData, meta defer conns.GlobalMutexKV.Unlock(contactFlowMutexKey) file, err := resourceContactFlowLoadFileContent(filename) if err != nil { - return diag.FromErr(fmt.Errorf("unable to load %q: %w", 
filename, err)) + return diag.Errorf("unable to load %q: %s", filename, err) } updateContentInput.Content = aws.String(file) } else if v, ok := d.GetOk("content"); ok { @@ -228,7 +231,7 @@ func resourceContactFlowUpdate(ctx context.Context, d *schema.ResourceData, meta _, updateContentInputErr := conn.UpdateContactFlowContentWithContext(ctx, updateContentInput) if updateContentInputErr != nil { - return diag.FromErr(fmt.Errorf("error updating Connect Contact Flow content (%s): %w", d.Id(), updateContentInputErr)) + return diag.Errorf("updating Connect Contact Flow content (%s): %s", d.Id(), updateContentInputErr) } } @@ -236,7 +239,7 @@ func resourceContactFlowUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceContactFlowDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, contactFlowID, err := ContactFlowParseID(d.Id()) @@ -254,7 +257,7 @@ func resourceContactFlowDelete(ctx context.Context, d *schema.ResourceData, meta _, deleteContactFlowErr := conn.DeleteContactFlowWithContext(ctx, input) if deleteContactFlowErr != nil { - return diag.FromErr(fmt.Errorf("error deleting Connect Contact Flow (%s): %w", d.Id(), deleteContactFlowErr)) + return diag.Errorf("deleting Connect Contact Flow (%s): %s", d.Id(), deleteContactFlowErr) } return nil diff --git a/internal/service/connect/contact_flow_data_source.go b/internal/service/connect/contact_flow_data_source.go index 4acb0b76541..b4a69a91442 100644 --- a/internal/service/connect/contact_flow_data_source.go +++ b/internal/service/connect/contact_flow_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -55,7 +58,7 @@ func DataSourceContactFlow() *schema.Resource { } func dataSourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) @@ -71,11 +74,11 @@ func dataSourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta contactFlowSummary, err := dataSourceGetContactFlowSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Contact Flow Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Contact Flow Summary by name (%s): %s", name, err) } if contactFlowSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Contact Flow Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Contact Flow Summary by name (%s): not found", name) } input.ContactFlowId = contactFlowSummary.Id @@ -84,11 +87,11 @@ func dataSourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta resp, err := conn.DescribeContactFlowWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow: %w", err)) + return diag.Errorf("getting Connect Contact Flow: %s", err) } if resp == nil || resp.ContactFlow == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow: empty response")) + return diag.Errorf("getting Connect Contact Flow: empty response") } contactFlow := resp.ContactFlow @@ -102,7 +105,7 @@ func dataSourceContactFlowRead(ctx context.Context, d *schema.ResourceData, meta d.Set("type", contactFlow.Type) if err := d.Set("tags", KeyValueTags(ctx, contactFlow.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting 
tags: %s", err)) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(contactFlow.Id))) diff --git a/internal/service/connect/contact_flow_data_source_test.go b/internal/service/connect/contact_flow_data_source_test.go index c0292eb9e52..d3a6f1400a8 100644 --- a/internal/service/connect/contact_flow_data_source_test.go +++ b/internal/service/connect/contact_flow_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/contact_flow_module.go b/internal/service/connect/contact_flow_module.go index a4000885060..fab5a67df35 100644 --- a/internal/service/connect/contact_flow_module.go +++ b/internal/service/connect/contact_flow_module.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -86,7 +89,7 @@ func ResourceContactFlowModule() *schema.Resource { } func resourceContactFlowModuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) @@ -94,7 +97,7 @@ func resourceContactFlowModuleCreate(ctx context.Context, d *schema.ResourceData input := &connect.CreateContactFlowModuleInput{ Name: aws.String(name), InstanceId: aws.String(instanceID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -110,7 +113,7 @@ func resourceContactFlowModuleCreate(ctx context.Context, d *schema.ResourceData defer conns.GlobalMutexKV.Unlock(contactFlowModuleMutexKey) file, err := resourceContactFlowModuleLoadFileContent(filename) if err != nil { - return diag.FromErr(fmt.Errorf("unable to load %q: %w", filename, err)) + return diag.Errorf("unable to load %q: %s", filename, err) } input.Content = 
aws.String(file) } else if v, ok := d.GetOk("content"); ok { @@ -120,11 +123,11 @@ func resourceContactFlowModuleCreate(ctx context.Context, d *schema.ResourceData output, err := conn.CreateContactFlowModuleWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Contact Flow Module (%s): %w", name, err)) + return diag.Errorf("creating Connect Contact Flow Module (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Contact Flow Module (%s): empty output", name)) + return diag.Errorf("creating Connect Contact Flow Module (%s): empty output", name) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.Id))) @@ -133,7 +136,7 @@ func resourceContactFlowModuleCreate(ctx context.Context, d *schema.ResourceData } func resourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, contactFlowModuleID, err := ContactFlowModuleParseID(d.Id()) @@ -153,11 +156,11 @@ func resourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow Module (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect Contact Flow Module (%s): %s", d.Id(), err) } if resp == nil || resp.ContactFlowModule == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow Module (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Contact Flow Module (%s): empty response", d.Id()) } d.Set("arn", resp.ContactFlowModule.Arn) @@ -167,13 +170,13 @@ func resourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData, d.Set("description", resp.ContactFlowModule.Description) d.Set("content", resp.ContactFlowModule.Content) - SetTagsOut(ctx, resp.ContactFlowModule.Tags) + setTagsOut(ctx, 
resp.ContactFlowModule.Tags) return nil } func resourceContactFlowModuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, contactFlowModuleID, err := ContactFlowModuleParseID(d.Id()) @@ -192,7 +195,7 @@ func resourceContactFlowModuleUpdate(ctx context.Context, d *schema.ResourceData _, updateMetadataInputErr := conn.UpdateContactFlowModuleMetadataWithContext(ctx, updateMetadataInput) if updateMetadataInputErr != nil { - return diag.FromErr(fmt.Errorf("error updating Connect Contact Flow Module (%s): %w", d.Id(), updateMetadataInputErr)) + return diag.Errorf("updating Connect Contact Flow Module (%s): %s", d.Id(), updateMetadataInputErr) } } @@ -211,7 +214,7 @@ func resourceContactFlowModuleUpdate(ctx context.Context, d *schema.ResourceData defer conns.GlobalMutexKV.Unlock(contactFlowModuleMutexKey) file, err := resourceContactFlowModuleLoadFileContent(filename) if err != nil { - return diag.FromErr(fmt.Errorf("unable to load %q: %w", filename, err)) + return diag.Errorf("unable to load %q: %s", filename, err) } updateContentInput.Content = aws.String(file) } else if v, ok := d.GetOk("content"); ok { @@ -221,7 +224,7 @@ func resourceContactFlowModuleUpdate(ctx context.Context, d *schema.ResourceData _, updateContentInputErr := conn.UpdateContactFlowModuleContentWithContext(ctx, updateContentInput) if updateContentInputErr != nil { - return diag.FromErr(fmt.Errorf("error updating Connect Contact Flow Module content (%s): %w", d.Id(), updateContentInputErr)) + return diag.Errorf("updating Connect Contact Flow Module content (%s): %s", d.Id(), updateContentInputErr) } } @@ -229,7 +232,7 @@ func resourceContactFlowModuleUpdate(ctx context.Context, d *schema.ResourceData } func resourceContactFlowModuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := 
meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, contactFlowModuleID, err := ContactFlowModuleParseID(d.Id()) if err != nil { @@ -243,7 +246,7 @@ func resourceContactFlowModuleDelete(ctx context.Context, d *schema.ResourceData _, deleteContactFlowModuleErr := conn.DeleteContactFlowModuleWithContext(ctx, input) if deleteContactFlowModuleErr != nil { - return diag.FromErr(fmt.Errorf("error deleting Connect Contact Flow Module (%s): %w", d.Id(), deleteContactFlowModuleErr)) + return diag.Errorf("deleting Connect Contact Flow Module (%s): %s", d.Id(), deleteContactFlowModuleErr) } return nil } diff --git a/internal/service/connect/contact_flow_module_data_source.go b/internal/service/connect/contact_flow_module_data_source.go index e006919e012..b5fb79145cd 100644 --- a/internal/service/connect/contact_flow_module_data_source.go +++ b/internal/service/connect/contact_flow_module_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
@@ -59,7 +62,7 @@ func DataSourceContactFlowModule() *schema.Resource {
 }

 func dataSourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig

 	instanceID := d.Get("instance_id").(string)
@@ -75,11 +78,11 @@ func dataSourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData
 		contactFlowModuleSummary, err := dataSourceGetContactFlowModuleSummaryByName(ctx, conn, instanceID, name)

 		if err != nil {
-			return diag.FromErr(fmt.Errorf("error finding Connect Contact Flow Module Summary by name (%s): %w", name, err))
+			return diag.Errorf("finding Connect Contact Flow Module Summary by name (%s): %s", name, err)
 		}

 		if contactFlowModuleSummary == nil {
-			return diag.FromErr(fmt.Errorf("error finding Connect Contact Flow Module Summary by name (%s): not found", name))
+			return diag.Errorf("finding Connect Contact Flow Module Summary by name (%s): not found", name)
 		}

 		input.ContactFlowModuleId = contactFlowModuleSummary.Id
@@ -88,11 +91,11 @@ func dataSourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData
 	resp, err := conn.DescribeContactFlowModuleWithContext(ctx, input)

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow Module: %w", err))
+		return diag.Errorf("getting Connect Contact Flow Module: %s", err)
 	}

 	if resp == nil || resp.ContactFlowModule == nil {
-		return diag.FromErr(fmt.Errorf("error getting Connect Contact Flow Module: empty response"))
+		return diag.Errorf("getting Connect Contact Flow Module: empty response")
 	}

 	contactFlowModule := resp.ContactFlowModule
@@ -106,7 +109,7 @@ func dataSourceContactFlowModuleRead(ctx context.Context, d *schema.ResourceData
 	d.Set("status", contactFlowModule.Status)

 	if err := d.Set("tags", KeyValueTags(ctx, contactFlowModule.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil {
-		return diag.FromErr(fmt.Errorf("error setting tags: %s", err))
+		return diag.Errorf("setting tags: %s", err)
 	}

 	d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(contactFlowModule.Id)))
diff --git a/internal/service/connect/contact_flow_module_data_source_test.go b/internal/service/connect/contact_flow_module_data_source_test.go
index 82f8c22087b..b3e325990cd 100644
--- a/internal/service/connect/contact_flow_module_data_source_test.go
+++ b/internal/service/connect/contact_flow_module_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect_test

 import (
diff --git a/internal/service/connect/contact_flow_module_test.go b/internal/service/connect/contact_flow_module_test.go
index 66f18f9f912..bb809ed1e1b 100644
--- a/internal/service/connect/contact_flow_module_test.go
+++ b/internal/service/connect/contact_flow_module_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect_test

 import (
@@ -155,7 +158,7 @@ func testAccCheckContactFlowModuleExists(ctx context.Context, resourceName strin
 			return err
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx)

 		params := &connect.DescribeContactFlowModuleInput{
 			ContactFlowModuleId: aws.String(contactFlowModuleID),
@@ -180,7 +183,7 @@ func testAccCheckContactFlowModuleDestroy(ctx context.Context) resource.TestChec
 				continue
 			}

-			conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx)

 			instanceID, contactFlowModuleID, err := tfconnect.ContactFlowModuleParseID(rs.Primary.ID)
diff --git a/internal/service/connect/contact_flow_test.go b/internal/service/connect/contact_flow_test.go
index 7a97383516d..46b2d01370a 100644
--- a/internal/service/connect/contact_flow_test.go
+++ b/internal/service/connect/contact_flow_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect_test

 import (
@@ -160,7 +163,7 @@ func testAccCheckContactFlowExists(ctx context.Context, resourceName string, fun
 			return err
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx)

 		params := &connect.DescribeContactFlowInput{
 			ContactFlowId: aws.String(contactFlowID),
@@ -185,7 +188,7 @@ func testAccCheckContactFlowDestroy(ctx context.Context) resource.TestCheckFunc
 				continue
 			}

-			conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx)

 			instanceID, contactFlowID, err := tfconnect.ContactFlowParseID(rs.Primary.ID)
diff --git a/internal/service/connect/enum.go b/internal/service/connect/enum.go
index 90f58bd1cc3..08f9820b2c3 100644
--- a/internal/service/connect/enum.go
+++ b/internal/service/connect/enum.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
diff --git a/internal/service/connect/errors.go b/internal/service/connect/errors.go
index 7a30fcc2ce8..f8ea4c3d83f 100644
--- a/internal/service/connect/errors.go
+++ b/internal/service/connect/errors.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 // Error code constants missing from AWS Go SDK:
diff --git a/internal/service/connect/find.go b/internal/service/connect/find.go
index e0c71b90574..5cda2ca33ef 100644
--- a/internal/service/connect/find.go
+++ b/internal/service/connect/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
diff --git a/internal/service/connect/generate.go b/internal/service/connect/generate.go
index 3cc44ce983d..c81edd5d94e 100644
--- a/internal/service/connect/generate.go
+++ b/internal/service/connect/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package connect
diff --git a/internal/service/connect/hours_of_operation.go b/internal/service/connect/hours_of_operation.go
index 6c5d6cd291e..433230c9d26 100644
--- a/internal/service/connect/hours_of_operation.go
+++ b/internal/service/connect/hours_of_operation.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
@@ -28,10 +31,13 @@ func ResourceHoursOfOperation() *schema.Resource {
 		ReadWithoutTimeout:   resourceHoursOfOperationRead,
 		UpdateWithoutTimeout: resourceHoursOfOperationUpdate,
 		DeleteWithoutTimeout: resourceHoursOfOperationDelete,
+
 		Importer: &schema.ResourceImporter{
 			StateContext: schema.ImportStatePassthroughContext,
 		},
+		CustomizeDiff: verify.SetTagsDiff,
+
 		Schema: map[string]*schema.Schema{
 			"arn": {
 				Type:     schema.TypeString,
@@ -98,11 +104,6 @@ func ResourceHoursOfOperation() *schema.Resource {
 				Optional:     true,
 				ValidateFunc: validation.StringLenBetween(1, 250),
 			},
-			"hours_of_operation_arn": {
-				Type:       schema.TypeString,
-				Computed:   true,
-				Deprecated: "use 'arn' attribute instead",
-			},
 			"hours_of_operation_id": {
 				Type:     schema.TypeString,
 				Computed: true,
@@ -127,7 +128,7 @@ func ResourceHoursOfOperation() *schema.Resource {
 }

 func resourceHoursOfOperationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceID := d.Get("instance_id").(string)
 	name := d.Get("name").(string)
@@ -136,7 +137,7 @@ func resourceHoursOfOperationCreate(ctx context.Context, d *schema.ResourceData,
 		Config:     config,
 		InstanceId: aws.String(instanceID),
 		Name:       aws.String(name),
-		Tags:       GetTagsIn(ctx),
+		Tags:       getTagsIn(ctx),
 		TimeZone:   aws.String(d.Get("time_zone").(string)),
 	}
@@ -148,11 +149,11 @@ func resourceHoursOfOperationCreate(ctx context.Context, d *schema.ResourceData,
 	output, err := conn.CreateHoursOfOperationWithContext(ctx, input)

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error creating Connect Hours of Operation (%s): %w", name, err))
+		return diag.Errorf("creating Connect Hours of Operation (%s): %s", name, err)
 	}

 	if output == nil {
-		return diag.FromErr(fmt.Errorf("error creating Connect Hours of Operation (%s): empty output", name))
+		return diag.Errorf("creating Connect Hours of Operation (%s): empty output", name)
 	}

 	d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.HoursOfOperationId)))
@@ -161,7 +162,7 @@ func resourceHoursOfOperationCreate(ctx context.Context, d *schema.ResourceData,
 }

 func resourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceID, hoursOfOperationID, err := HoursOfOperationParseID(d.Id())
@@ -181,11 +182,11 @@ func resourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData, m
 	}

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error getting Connect Hours of Operation (%s): %w", d.Id(), err))
+		return diag.Errorf("getting Connect Hours of Operation (%s): %s", d.Id(), err)
 	}

 	if resp == nil || resp.HoursOfOperation == nil {
-		return diag.FromErr(fmt.Errorf("error getting Connect Hours of Operation (%s): empty response", d.Id()))
+		return diag.Errorf("getting Connect Hours of Operation (%s): empty response", d.Id())
 	}

 	if err := d.Set("config", flattenConfigs(resp.HoursOfOperation.Config)); err != nil {
@@ -193,20 +194,19 @@ func resourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData, m
 	}

 	d.Set("arn", resp.HoursOfOperation.HoursOfOperationArn)
-	d.Set("hours_of_operation_arn", resp.HoursOfOperation.HoursOfOperationArn) // Deprecated
 	d.Set("hours_of_operation_id", resp.HoursOfOperation.HoursOfOperationId)
 	d.Set("instance_id", instanceID)
 	d.Set("description", resp.HoursOfOperation.Description)
 	d.Set("name", resp.HoursOfOperation.Name)
 	d.Set("time_zone", resp.HoursOfOperation.TimeZone)

-	SetTagsOut(ctx, resp.HoursOfOperation.Tags)
+	setTagsOut(ctx, resp.HoursOfOperation.Tags)

 	return nil
 }

 func resourceHoursOfOperationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceID, hoursOfOperationID, err := HoursOfOperationParseID(d.Id())
@@ -224,7 +224,7 @@ func resourceHoursOfOperationUpdate(ctx context.Context, d *schema.ResourceData,
 			TimeZone:   aws.String(d.Get("time_zone").(string)),
 		})
 		if err != nil {
-			return diag.FromErr(fmt.Errorf("updating HoursOfOperation (%s): %w", d.Id(), err))
+			return diag.Errorf("updating HoursOfOperation (%s): %s", d.Id(), err)
 		}
 	}
@@ -232,7 +232,7 @@ func resourceHoursOfOperationUpdate(ctx context.Context, d *schema.ResourceData,
 }

 func resourceHoursOfOperationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceID, hoursOfOperationID, err := HoursOfOperationParseID(d.Id())
@@ -246,7 +246,7 @@ func resourceHoursOfOperationDelete(ctx context.Context, d *schema.ResourceData,
 	})

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error deleting HoursOfOperation (%s): %w", d.Id(), err))
+		return diag.Errorf("deleting HoursOfOperation (%s): %s", d.Id(), err)
 	}

 	return nil
diff --git a/internal/service/connect/hours_of_operation_data_source.go b/internal/service/connect/hours_of_operation_data_source.go
index d602c97cd2e..050ddcd820a 100644
--- a/internal/service/connect/hours_of_operation_data_source.go
+++ b/internal/service/connect/hours_of_operation_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
@@ -79,11 +82,6 @@ func DataSourceHoursOfOperation() *schema.Resource {
 				Type:     schema.TypeString,
 				Computed: true,
 			},
-			"hours_of_operation_arn": {
-				Type:       schema.TypeString,
-				Computed:   true,
-				Deprecated: "use 'arn' attribute instead",
-			},
 			"hours_of_operation_id": {
 				Type:     schema.TypeString,
 				Optional: true,
@@ -110,7 +108,7 @@ func DataSourceHoursOfOperation() *schema.Resource {
 }

 func dataSourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig

 	instanceID := d.Get("instance_id").(string)
@@ -126,11 +124,11 @@ func dataSourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData,
 		hoursOfOperationSummary, err := dataSourceGetHoursOfOperationSummaryByName(ctx, conn, instanceID, name)

 		if err != nil {
-			return diag.FromErr(fmt.Errorf("error finding Connect Hours of Operation Summary by name (%s): %w", name, err))
+			return diag.Errorf("finding Connect Hours of Operation Summary by name (%s): %s", name, err)
 		}

 		if hoursOfOperationSummary == nil {
-			return diag.FromErr(fmt.Errorf("error finding Connect Hours of Operation Summary by name (%s): not found", name))
+			return diag.Errorf("finding Connect Hours of Operation Summary by name (%s): not found", name)
 		}

 		input.HoursOfOperationId = hoursOfOperationSummary.Id
@@ -139,17 +137,16 @@ func dataSourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData,
 	resp, err := conn.DescribeHoursOfOperationWithContext(ctx, input)

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error getting Connect Hours of Operation: %w", err))
+		return diag.Errorf("getting Connect Hours of Operation: %s", err)
 	}

 	if resp == nil || resp.HoursOfOperation == nil {
-		return diag.FromErr(fmt.Errorf("error getting Connect Hours of Operation: empty response"))
+		return diag.Errorf("getting Connect Hours of Operation: empty response")
 	}

 	hoursOfOperation := resp.HoursOfOperation

 	d.Set("arn", hoursOfOperation.HoursOfOperationArn)
-	d.Set("hours_of_operation_arn", hoursOfOperation.HoursOfOperationArn) // Deprecated
 	d.Set("hours_of_operation_id", hoursOfOperation.HoursOfOperationId)
 	d.Set("instance_id", instanceID)
 	d.Set("description", hoursOfOperation.Description)
@@ -157,11 +154,11 @@ func dataSourceHoursOfOperationRead(ctx context.Context, d *schema.ResourceData,
 	d.Set("time_zone", hoursOfOperation.TimeZone)

 	if err := d.Set("config", flattenConfigs(hoursOfOperation.Config)); err != nil {
-		return diag.FromErr(fmt.Errorf("error setting config: %s", err))
+		return diag.Errorf("setting config: %s", err)
 	}

 	if err := d.Set("tags", KeyValueTags(ctx, hoursOfOperation.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil {
-		return diag.FromErr(fmt.Errorf("error setting tags: %s", err))
+		return diag.Errorf("setting tags: %s", err)
 	}

 	d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(hoursOfOperation.HoursOfOperationId)))
diff --git a/internal/service/connect/hours_of_operation_data_source_test.go b/internal/service/connect/hours_of_operation_data_source_test.go
index c275f0044f5..3a002213e7b 100644
--- a/internal/service/connect/hours_of_operation_data_source_test.go
+++ b/internal/service/connect/hours_of_operation_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect_test

 import (
@@ -25,7 +28,6 @@ func testAccHoursOfOperationDataSource_hoursOfOperationID(t *testing.T) {
 				Config: testAccHoursOfOperationDataSourceConfig_id(rName, resourceName),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					resource.TestCheckResourceAttrPair(datasourceName, "arn", resourceName, "arn"),
-					resource.TestCheckResourceAttrPair(datasourceName, "hours_of_operation_arn", resourceName, "hours_of_operation_arn"), // Deprecated
 					resource.TestCheckResourceAttrPair(datasourceName, "hours_of_operation_id", resourceName, "hours_of_operation_id"),
 					resource.TestCheckResourceAttrPair(datasourceName, "instance_id", resourceName, "instance_id"),
 					resource.TestCheckResourceAttrPair(datasourceName, "name", resourceName, "name"),
@@ -55,7 +57,6 @@ func testAccHoursOfOperationDataSource_name(t *testing.T) {
 				Config: testAccHoursOfOperationDataSourceConfig_name(rName, rName2),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					resource.TestCheckResourceAttrPair(datasourceName, "arn", resourceName, "arn"),
-					resource.TestCheckResourceAttrPair(datasourceName, "hours_of_operation_arn", resourceName, "hours_of_operation_arn"), // Deprecated
 					resource.TestCheckResourceAttrPair(datasourceName, "hours_of_operation_id", resourceName, "hours_of_operation_id"),
 					resource.TestCheckResourceAttrPair(datasourceName, "instance_id", resourceName, "instance_id"),
 					resource.TestCheckResourceAttrPair(datasourceName, "name", resourceName, "name"),
diff --git a/internal/service/connect/hours_of_operation_test.go b/internal/service/connect/hours_of_operation_test.go
index e33b1a40beb..1884b750d50 100644
--- a/internal/service/connect/hours_of_operation_test.go
+++ b/internal/service/connect/hours_of_operation_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect_test

 import (
@@ -48,7 +51,6 @@ func testAccHoursOfOperation_basic(t *testing.T) {
 						"start_time.0.minutes": "0",
 					}),
 					resource.TestCheckResourceAttr(resourceName, "description", originalDescription),
-					resource.TestCheckResourceAttrSet(resourceName, "hours_of_operation_arn"), // Deprecated
 					resource.TestCheckResourceAttrSet(resourceName, "hours_of_operation_id"),
 					resource.TestCheckResourceAttrPair(resourceName, "instance_id", "aws_connect_instance.test", "id"),
 					resource.TestCheckResourceAttr(resourceName, "name", rName2),
@@ -78,7 +80,6 @@ func testAccHoursOfOperation_basic(t *testing.T) {
 						"start_time.0.minutes": "0",
 					}),
 					resource.TestCheckResourceAttr(resourceName, "description", updatedDescription),
-					resource.TestCheckResourceAttrSet(resourceName, "hours_of_operation_arn"), // Deprecated
 					resource.TestCheckResourceAttrSet(resourceName, "hours_of_operation_id"),
 					resource.TestCheckResourceAttrPair(resourceName, "instance_id", "aws_connect_instance.test", "id"),
 					resource.TestCheckResourceAttr(resourceName, "name", rName2),
@@ -248,7 +249,7 @@ func testAccCheckHoursOfOperationExists(ctx context.Context, resourceName string
 			return err
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx)

 		params := &connect.DescribeHoursOfOperationInput{
 			HoursOfOperationId: aws.String(hoursOfOperationID),
@@ -273,7 +274,7 @@ func testAccCheckHoursOfOperationDestroy(ctx context.Context) resource.TestCheck
 				continue
 			}

-			conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx)

 			instanceID, hoursOfOperationID, err := tfconnect.HoursOfOperationParseID(rs.Primary.ID)
diff --git a/internal/service/connect/id.go b/internal/service/connect/id.go
index ee440fc2a29..6e578d8cb50 100644
--- a/internal/service/connect/id.go
+++ b/internal/service/connect/id.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
diff --git a/internal/service/connect/instance.go b/internal/service/connect/instance.go
index 342d678c7e1..a1ffe121aaa 100644
--- a/internal/service/connect/instance.go
+++ b/internal/service/connect/instance.go
@@ -1,7 +1,11 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
 	"context"
+	"errors"
 	"fmt"
 	"log"
 	"regexp"
@@ -13,9 +17,11 @@ import (
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
+	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 )

 // @SDKResource("aws_connect_instance")
@@ -25,13 +31,16 @@ func ResourceInstance() *schema.Resource {
 		ReadWithoutTimeout:   resourceInstanceRead,
 		UpdateWithoutTimeout: resourceInstanceUpdate,
 		DeleteWithoutTimeout: resourceInstanceDelete,
+
 		Importer: &schema.ResourceImporter{
 			StateContext: schema.ImportStatePassthroughContext,
 		},
+
 		Timeouts: &schema.ResourceTimeout{
-			Create: schema.DefaultTimeout(instanceCreatedTimeout),
-			Delete: schema.DefaultTimeout(instanceDeletedTimeout),
+			Create: schema.DefaultTimeout(5 * time.Minute),
+			Delete: schema.DefaultTimeout(5 * time.Minute),
 		},
+
 		Schema: map[string]*schema.Schema{
 			"arn": {
 				Type:     schema.TypeString,
@@ -117,7 +126,7 @@ func ResourceInstance() *schema.Resource {
 }

 func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	input := &connect.CreateInstanceInput{
 		ClientToken: aws.String(id.UniqueId()),
@@ -129,81 +138,52 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in
 	if v, ok := d.GetOk("directory_id"); ok {
 		input.DirectoryId = aws.String(v.(string))
 	}
+
 	if v, ok := d.GetOk("instance_alias"); ok {
 		input.InstanceAlias = aws.String(v.(string))
 	}

-	log.Printf("[DEBUG] Creating Connect Instance %s", input)
 	output, err := conn.CreateInstanceWithContext(ctx, input)

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error creating Connect Instance (%s): %w", d.Id(), err))
+		return diag.Errorf("creating Connect Instance: %s", err)
 	}

 	d.SetId(aws.StringValue(output.Id))

-	if _, err := waitInstanceCreated(ctx, conn, d.Timeout(schema.TimeoutCreate), d.Id()); err != nil {
-		return diag.FromErr(fmt.Errorf("error waiting for Connect instance creation (%s): %w", d.Id(), err))
+	if _, err := waitInstanceCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
+		return diag.Errorf("waiting for Connect Instance (%s) create: %s", d.Id(), err)
 	}

-	for att := range InstanceAttributeMapping() {
-		rKey := InstanceAttributeMapping()[att]
-		err := resourceInstanceUpdateAttribute(ctx, conn, d.Id(), att, strconv.FormatBool(d.Get(rKey).(bool)))
-		//Pre-release attribute, user/account/instance now allow-listed
-		if err != nil && tfawserr.ErrCodeEquals(err, ErrCodeAccessDeniedException) || tfawserr.ErrMessageContains(err, ErrCodeAccessDeniedException, "not authorized to update") {
-			log.Printf("[WARN] error setting Connect instance (%s) attribute (%s): %s", d.Id(), att, err)
-		} else if err != nil {
-			return diag.FromErr(fmt.Errorf("error setting Connect instance (%s) attribute (%s): %w", d.Id(), att, err))
+	for attributeType, key := range InstanceAttributeMapping() {
+		if err := updateInstanceAttribute(ctx, conn, d.Id(), attributeType, strconv.FormatBool(d.Get(key).(bool))); err != nil {
+			return diag.FromErr(err)
 		}
 	}

 	return resourceInstanceRead(ctx, d, meta)
 }

-func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
-
-	for att := range InstanceAttributeMapping() {
-		rKey := InstanceAttributeMapping()[att]
-		if d.HasChange(rKey) {
-			_, n := d.GetChange(rKey)
-			err := resourceInstanceUpdateAttribute(ctx, conn, d.Id(), att, strconv.FormatBool(n.(bool)))
-			//Pre-release attribute, user/account/instance now allow-listed
-			if err != nil && tfawserr.ErrCodeEquals(err, ErrCodeAccessDeniedException) || tfawserr.ErrMessageContains(err, ErrCodeAccessDeniedException, "not authorized to update") {
-				log.Printf("[WARN] error setting Connect instance (%s) attribute (%s): %s", d.Id(), att, err)
-			} else if err != nil {
-				return diag.FromErr(fmt.Errorf("error setting Connect instance (%s) attribute (%s): %s", d.Id(), att, err))
-			}
-		}
-	}
-
-	return nil
-}

 func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

-	input := connect.DescribeInstanceInput{
-		InstanceId: aws.String(d.Id()),
-	}
+	instance, err := FindInstanceByID(ctx, conn, d.Id())

-	log.Printf("[DEBUG] Reading Connect Instance %s", d.Id())
-	output, err := conn.DescribeInstanceWithContext(ctx, &input)
-
-	if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, connect.ErrCodeResourceNotFoundException) {
+	if !d.IsNewResource() && tfresource.NotFound(err) {
 		log.Printf("[WARN] Connect Instance (%s) not found, removing from state", d.Id())
 		d.SetId("")
 		return nil
 	}

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error reading Connect Instance (%s): %s", d.Id(), err))
+		return diag.Errorf("reading Connect Instance (%s): %s", d.Id(), err)
 	}

-	instance := output.Instance
-	d.SetId(aws.StringValue(instance.Id))
 	d.Set("arn", instance.Arn)
-	d.Set("created_time", instance.CreatedTime.Format(time.RFC3339))
+	if instance.CreatedTime != nil {
+		d.Set("created_time", instance.CreatedTime.Format(time.RFC3339))
+	}
 	d.Set("identity_management_type", instance.IdentityManagementType)
 	d.Set("inbound_calls_enabled", instance.InboundCallsEnabled)
 	d.Set("instance_alias", instance.InstanceAlias)
@@ -211,64 +191,163 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("service_role", instance.ServiceRole)
 	d.Set("status", instance.InstanceStatus)

-	for att := range InstanceAttributeMapping() {
-		value, err := resourceInstanceReadAttribute(ctx, conn, d.Id(), att)
+	for attributeType, key := range InstanceAttributeMapping() {
+		input := &connect.DescribeInstanceAttributeInput{
+			AttributeType: aws.String(attributeType),
+			InstanceId:    aws.String(d.Id()),
+		}
+
+		output, err := conn.DescribeInstanceAttributeWithContext(ctx, input)
+
+		if err != nil {
+			return diag.Errorf("reading Connect Instance (%s) attribute (%s): %s", d.Id(), attributeType, err)
+		}
+
+		v, err := strconv.ParseBool(aws.StringValue(output.Attribute.Value))
+
 		if err != nil {
-			return diag.FromErr(fmt.Errorf("error reading Connect instance (%s) attribute (%s): %s", d.Id(), att, err))
+			return diag.FromErr(err)
 		}
-		d.Set(InstanceAttributeMapping()[att], value)
+
+		d.Set(key, v)
 	}

 	return nil
 }

-func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

-	input := &connect.DeleteInstanceInput{
-		InstanceId: aws.String(d.Id()),
+	for attributeType, key := range InstanceAttributeMapping() {
+		if !d.HasChange(key) {
+			continue
+		}
+
+		if err := updateInstanceAttribute(ctx, conn, d.Id(), attributeType, strconv.FormatBool(d.Get(key).(bool))); err != nil {
+			return diag.FromErr(err)
+		}
 	}

-	log.Printf("[DEBUG] Deleting Connect Instance %s", d.Id())
+	return nil
+}

-	_, err := conn.DeleteInstanceWithContext(ctx, input)
+func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)
+
+	log.Printf("[DEBUG] Deleting Connect Instance: %s", d.Id())
+	_, err := conn.DeleteInstanceWithContext(ctx, &connect.DeleteInstanceInput{
+		InstanceId: aws.String(d.Id()),
+	})

 	if tfawserr.ErrCodeEquals(err, connect.ErrCodeResourceNotFoundException) {
 		return nil
 	}

 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error deleting Connect Instance (%s): %s", d.Id(), err))
+		return diag.Errorf("deleting Connect Instance (%s): %s", d.Id(), err)
 	}

-	if _, err := waitInstanceDeleted(ctx, conn, d.Timeout(schema.TimeoutCreate), d.Id()); err != nil {
-		return diag.FromErr(fmt.Errorf("error waiting for Connect Instance deletion (%s): %s", d.Id(), err))
+	if _, err := waitInstanceDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
+		return diag.Errorf("waiting for Connect Instance (%s) delete: %s", d.Id(), err)
 	}
+
 	return nil
 }

-func resourceInstanceUpdateAttribute(ctx context.Context, conn *connect.Connect, instanceID string, attributeType string, value string) error {
+func updateInstanceAttribute(ctx context.Context, conn *connect.Connect, instanceID, attributeType, value string) error {
 	input := &connect.UpdateInstanceAttributeInput{
-		InstanceId:    aws.String(instanceID),
 		AttributeType: aws.String(attributeType),
+		InstanceId:    aws.String(instanceID),
 		Value:         aws.String(value),
 	}

 	_, err := conn.UpdateInstanceAttributeWithContext(ctx, input)

-	return err
+	if tfawserr.ErrCodeEquals(err, ErrCodeAccessDeniedException) || tfawserr.ErrMessageContains(err, ErrCodeAccessDeniedException, "not authorized to update") {
+		return nil
+	}
+
+	if err != nil {
+		return fmt.Errorf("updating Connect Instance (%s) attribute (%s): %w", instanceID, attributeType, err)
+	}
+
+	return nil
 }

-func resourceInstanceReadAttribute(ctx context.Context, conn *connect.Connect, instanceID string, attributeType string) (bool, error) {
-	input := &connect.DescribeInstanceAttributeInput{
-		InstanceId:    aws.String(instanceID),
-		AttributeType: aws.String(attributeType),
+func FindInstanceByID(ctx context.Context, conn *connect.Connect, id string) (*connect.Instance, error) {
+	input := &connect.DescribeInstanceInput{
+		InstanceId: aws.String(id),
+	}
+
+	output, err := conn.DescribeInstanceWithContext(ctx, input)
+
+	if tfawserr.ErrCodeEquals(err, connect.ErrCodeResourceNotFoundException) {
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: input,
+		}
 	}

-	output, err := conn.DescribeInstanceAttributeWithContext(ctx, input)
 	if err != nil {
-		return false, err
+		return nil, err
 	}
-	result, parseerr := strconv.ParseBool(*output.Attribute.Value)
-	return result, parseerr
+
+	if output == nil || output.Instance == nil {
+		return nil, tfresource.NewEmptyResultError(input)
+	}
+
+	return output.Instance, nil
+}
+
+func statusInstance(ctx context.Context, conn *connect.Connect, id string) retry.StateRefreshFunc {
+	return func() (interface{}, string, error) {
+		output, err := FindInstanceByID(ctx, conn, id)
+
+		if tfresource.NotFound(err) {
+			return nil, "", nil
+		}
+
+		if err != nil {
+			return nil, "", err
+		}
+
+		return output, aws.StringValue(output.InstanceStatus), nil
+	}
+}
+
+func waitInstanceCreated(ctx context.Context, conn *connect.Connect, id string, timeout time.Duration) (*connect.Instance, error) {
+	stateConf := &retry.StateChangeConf{
+		Pending: []string{connect.InstanceStatusCreationInProgress},
+		Target:  []string{connect.InstanceStatusActive},
+		Refresh: statusInstance(ctx, conn, id),
+		Timeout: timeout,
+	}
+
+	outputRaw, err := stateConf.WaitForStateContext(ctx)
+
+	if output, ok := outputRaw.(*connect.Instance); ok {
+		if output.StatusReason != nil {
+			tfresource.SetLastError(err, errors.New(aws.StringValue(output.StatusReason.Message)))
+		}
+
+		return output, err
+	}
+
+	return nil, err
+}
+
+func waitInstanceDeleted(ctx context.Context, conn *connect.Connect, id string, timeout time.Duration) (*connect.Instance, error) {
+	stateConf := &retry.StateChangeConf{
+		Pending: []string{connect.InstanceStatusActive},
+		Target:  []string{},
+		Refresh: statusInstance(ctx, conn, id),
+		Timeout: timeout,
+	}
+
+	outputRaw, err := stateConf.WaitForStateContext(ctx)
+
+	if output, ok := outputRaw.(*connect.Instance); ok {
+		return output, err
+	}
+
+	return nil, err
 }
diff --git a/internal/service/connect/instance_data_source.go b/internal/service/connect/instance_data_source.go
index a47d900e3a3..a6086929149 100644
--- a/internal/service/connect/instance_data_source.go
+++ b/internal/service/connect/instance_data_source.go
@@ -1,9 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
 	"context"
-	"fmt"
-	"log"
 	"strconv"
 	"time"
@@ -88,41 +89,30 @@ func DataSourceInstance() *schema.Resource {
 }

 func dataSourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	var matchedInstance *connect.Instance

 	if v, ok := d.GetOk("instance_id"); ok {
-		instanceId := v.(string)
-
-		input := connect.DescribeInstanceInput{
-			InstanceId: aws.String(instanceId),
-		}
-
-		log.Printf("[DEBUG] Reading Connect Instance by instance_id: %s", input)
-
-		output, err := conn.DescribeInstanceWithContext(ctx, &input)
+		instanceID := v.(string)
+		instance, err := FindInstanceByID(ctx, conn, instanceID)

 		if err != nil {
-			return diag.FromErr(fmt.Errorf("error getting Connect Instance by instance_id (%s): %w", instanceId, err))
+			return diag.Errorf("reading Connect Instance (%s): %s", instanceID, err)
 		}

-		if output == nil {
-			return diag.FromErr(fmt.Errorf("error getting Connect Instance by instance_id (%s): empty output", instanceId))
-		}
-
-		matchedInstance = output.Instance
+		matchedInstance = instance
 	} else if v, ok := d.GetOk("instance_alias"); ok {
 		instanceAlias := v.(string)

 		instanceSummary, err := dataSourceGetInstanceSummaryByInstanceAlias(ctx, conn, instanceAlias)

 		if err != nil {
-			return diag.FromErr(fmt.Errorf("error finding Connect Instance Summary by instance_alias (%s): %w", instanceAlias, err))
+			return diag.Errorf("finding Connect Instance Summary by instance_alias (%s): %s", instanceAlias, err)
 		}

 		if instanceSummary == nil {
-			return diag.FromErr(fmt.Errorf("error finding Connect Instance Summary by instance_alias (%s): not found", instanceAlias))
+			return diag.Errorf("finding Connect Instance Summary by instance_alias (%s): not found", instanceAlias)
 		}

 		matchedInstance = &connect.Instance{
@@ -139,13 +129,14 @@ func dataSourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta in
 	}

 	if matchedInstance == nil {
-		return diag.FromErr(fmt.Errorf("no Connect Instance found for query, try adjusting your search criteria"))
+		return diag.Errorf("no Connect Instance found for query, try adjusting your search criteria")
 	}

 	d.SetId(aws.StringValue(matchedInstance.Id))
-
 	d.Set("arn", matchedInstance.Arn)
-	d.Set("created_time", matchedInstance.CreatedTime.Format(time.RFC3339))
+	if matchedInstance.CreatedTime != nil {
+		d.Set("created_time", matchedInstance.CreatedTime.Format(time.RFC3339))
+	}
 	d.Set("identity_management_type", matchedInstance.IdentityManagementType)
 	d.Set("inbound_calls_enabled", matchedInstance.InboundCallsEnabled)
 	d.Set("instance_alias", matchedInstance.InstanceAlias)
@@ -156,7 +147,7 @@ func dataSourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta in
 	for att := range InstanceAttributeMapping() {
 		value, err := dataSourceInstanceReadAttribute(ctx, conn, d.Id(), att)
 		if err != nil {
-			return diag.FromErr(fmt.Errorf("error reading Connect Instance (%s) attribute (%s): %w", d.Id(), att, err))
+			return diag.Errorf("reading Connect Instance (%s) attribute (%s): %s", d.Id(), att, err)
 		}
 		d.Set(InstanceAttributeMapping()[att], value)
 	}
diff --git a/internal/service/connect/instance_data_source_test.go b/internal/service/connect/instance_data_source_test.go
index ee1e838e8df..58daeffff2e 100644
--- a/internal/service/connect/instance_data_source_test.go
+++ b/internal/service/connect/instance_data_source_test.go
@@ -1,8 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect_test

 import (
 	"fmt"
-	"regexp"
 	"testing"

 	"github.com/aws/aws-sdk-go/service/connect"
@@ -21,14 +23,6 @@ func testAccInstanceDataSource_basic(t *testing.T) {
 		ErrorCheck:               acctest.ErrorCheck(t, connect.EndpointsID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		Steps: []resource.TestStep{
-			{
-				Config:      testAccInstanceDataSourceConfig_nonExistentID,
-				ExpectError: regexp.MustCompile(`error getting Connect Instance by instance_id`),
-			},
-			{
-				Config:      testAccInstanceDataSourceConfig_nonExistentAlias,
-				ExpectError: regexp.MustCompile(`error finding Connect Instance Summary by instance_alias`),
-			},
 			{
 				Config: testAccInstanceDataSourceConfig_basic(rName),
 				Check: resource.ComposeAggregateTestCheckFunc(
@@ -69,18 +63,6 @@ func testAccInstanceDataSource_basic(t *testing.T) {
 	})
 }

-const testAccInstanceDataSourceConfig_nonExistentID = `
-data "aws_connect_instance" "test" {
-  instance_id = "97afc98d-101a-ba98-ab97-ae114fc115ec"
-}
-`
-
-const testAccInstanceDataSourceConfig_nonExistentAlias = `
-data "aws_connect_instance" "test" {
-  instance_alias = "tf-acc-test-does-not-exist"
-}
-`
-
 func testAccInstanceDataSourceConfig_basic(rName string) string {
 	return fmt.Sprintf(`
 resource "aws_connect_instance" "test" {
diff --git a/internal/service/connect/instance_storage_config.go b/internal/service/connect/instance_storage_config.go
index d4e4bf34464..121aeb44c8b 100644
--- a/internal/service/connect/instance_storage_config.go
+++ b/internal/service/connect/instance_storage_config.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package connect

 import (
@@ -174,7 +177,7 @@ func ResourceInstanceStorageConfig() *schema.Resource {
 }

 func resourceInstanceStorageConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceId := d.Get("instance_id").(string)
 	resourceType := d.Get("resource_type").(string)
@@ -202,7 +205,7 @@ func resourceInstanceStorageConfigCreate(ctx context.Context, d *schema.Resource
 }

 func resourceInstanceStorageConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceId, associationId, resourceType, err := InstanceStorageConfigParseId(d.Id())
@@ -244,7 +247,7 @@ func resourceInstanceStorageConfigRead(ctx context.Context, d *schema.ResourceDa
 }

 func resourceInstanceStorageConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceId, associationId, resourceType, err := InstanceStorageConfigParseId(d.Id())
@@ -272,7 +275,7 @@ func resourceInstanceStorageConfigUpdate(ctx context.Context, d *schema.Resource
 }

 func resourceInstanceStorageConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).ConnectConn()
+	conn := meta.(*conns.AWSClient).ConnectConn(ctx)

 	instanceId, associationId, resourceType, err := InstanceStorageConfigParseId(d.Id())
diff --git a/internal/service/connect/instance_storage_config_data_source.go b/internal/service/connect/instance_storage_config_data_source.go
index 429261f6cc4..a9ea6e907d6 100644
--- a/internal/service/connect/instance_storage_config_data_source.go
+++ 
b/internal/service/connect/instance_storage_config_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -137,7 +140,7 @@ func DataSourceInstanceStorageConfig() *schema.Resource { } func dataSourceInstanceStorageConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) associationId := d.Get("association_id").(string) instanceId := d.Get("instance_id").(string) diff --git a/internal/service/connect/instance_storage_config_data_source_test.go b/internal/service/connect/instance_storage_config_data_source_test.go index 6c2df116535..49d749fab3a 100644 --- a/internal/service/connect/instance_storage_config_data_source_test.go +++ b/internal/service/connect/instance_storage_config_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -175,11 +178,6 @@ resource "aws_s3_bucket" "bucket" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.bucket.id - acl = "private" -} - resource "aws_iam_role_policy" "firehose" { name = %[1]q role = aws_iam_role.firehose.id @@ -232,9 +230,9 @@ EOF resource "aws_kinesis_firehose_delivery_stream" "test" { depends_on = [aws_iam_role_policy.firehose] name = %[1]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose.arn bucket_arn = aws_s3_bucket.bucket.arn } diff --git a/internal/service/connect/instance_storage_config_test.go b/internal/service/connect/instance_storage_config_test.go index 4bf6bc8490d..f7751fcd090 100644 --- a/internal/service/connect/instance_storage_config_test.go +++ b/internal/service/connect/instance_storage_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -486,7 +489,7 @@ func testAccCheckInstanceStorageConfigExists(ctx context.Context, resourceName s return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeInstanceStorageConfigInput{ AssociationId: aws.String(associationId), @@ -512,7 +515,7 @@ func testAccCheckInstanceStorageConfigDestroy(ctx context.Context) resource.Test continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceId, associationId, resourceType, err := tfconnect.InstanceStorageConfigParseId(rs.Primary.ID) @@ -610,11 +613,6 @@ resource "aws_s3_bucket" "bucket" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.bucket.id - acl = "private" -} - resource "aws_iam_role_policy" "firehose" { name = %[1]q role = aws_iam_role.firehose.id @@ -678,9 +676,9 @@ locals { resource "aws_kinesis_firehose_delivery_stream" "test" { depends_on = [aws_iam_role_policy.firehose] name = %[1]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose.arn bucket_arn = aws_s3_bucket.bucket.arn } @@ -689,9 +687,9 @@ resource "aws_kinesis_firehose_delivery_stream" "test" { resource "aws_kinesis_firehose_delivery_stream" "test2" { depends_on = [aws_iam_role_policy.firehose] name = %[2]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose.arn bucket_arn = aws_s3_bucket.bucket.arn } diff --git a/internal/service/connect/instance_test.go b/internal/service/connect/instance_test.go index f8af578a359..52ed81bb96d 100644 --- a/internal/service/connect/instance_test.go +++ b/internal/service/connect/instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -6,19 +9,19 @@ import ( "regexp" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/connect" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfconnect "github.com/hashicorp/terraform-provider-aws/internal/service/connect" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func testAccInstance_basic(t *testing.T) { ctx := acctest.Context(t) - var v connect.DescribeInstanceOutput + var v connect.Instance rName := sdkacctest.RandomWithPrefix("resource-test-terraform") resourceName := "aws_connect_instance.test" @@ -75,7 +78,7 @@ func testAccInstance_basic(t *testing.T) { func testAccInstance_directory(t *testing.T) { ctx := acctest.Context(t) - var v connect.DescribeInstanceOutput + var v connect.Instance rName := sdkacctest.RandomWithPrefix("resource-test-terraform") resourceName := "aws_connect_instance.test" @@ -107,7 +110,7 @@ func testAccInstance_directory(t *testing.T) { func testAccInstance_saml(t *testing.T) { ctx := acctest.Context(t) - var v connect.DescribeInstanceOutput + var v connect.Instance rName := sdkacctest.RandomWithPrefix("resource-test-terraform") resourceName := "aws_connect_instance.test" @@ -133,32 +136,26 @@ func testAccInstance_saml(t *testing.T) { }) } -func testAccCheckInstanceExists(ctx context.Context, resourceName string, instance *connect.DescribeInstanceOutput) resource.TestCheckFunc { +func testAccCheckInstanceExists(ctx context.Context, n string, v *connect.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := 
s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Connect instance not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } if rs.Primary.ID == "" { - return fmt.Errorf("Connect instance ID not set") + return fmt.Errorf("No Connect Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) - input := &connect.DescribeInstanceInput{ - InstanceId: aws.String(rs.Primary.ID), - } + output, err := tfconnect.FindInstanceByID(ctx, conn, rs.Primary.ID) - output, err := conn.DescribeInstanceWithContext(ctx, input) if err != nil { return err } - if output == nil { - return fmt.Errorf("Connect instance %q does not exist", rs.Primary.ID) - } - *instance = *output + *v = *output return nil } @@ -171,23 +168,19 @@ func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) - instanceID := rs.Primary.ID + _, err := tfconnect.FindInstanceByID(ctx, conn, rs.Primary.ID) - input := &connect.DescribeInstanceInput{ - InstanceId: aws.String(instanceID), - } - - _, err := conn.DescribeInstanceWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, connect.ErrCodeResourceNotFoundException) { + if tfresource.NotFound(err) { continue } if err != nil { return err } + + return fmt.Errorf("Connect Instance %s still exists", rs.Primary.ID) } return nil diff --git a/internal/service/connect/lambda_function_association.go b/internal/service/connect/lambda_function_association.go index a01fe0f2e00..939eb1e87eb 100644 --- a/internal/service/connect/lambda_function_association.go +++ b/internal/service/connect/lambda_function_association.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( "context" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -41,7 +43,7 @@ func ResourceLambdaFunctionAssociation() *schema.Resource { } func resourceLambdaFunctionAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceId := d.Get("instance_id").(string) functionArn := d.Get("function_arn").(string) @@ -53,7 +55,7 @@ func resourceLambdaFunctionAssociationCreate(ctx context.Context, d *schema.Reso _, err := conn.AssociateLambdaFunctionWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Lambda Function Association (%s,%s): %s", instanceId, functionArn, err)) + return diag.Errorf("creating Connect Lambda Function Association (%s,%s): %s", instanceId, functionArn, err) } d.SetId(LambdaFunctionAssociationCreateResourceID(instanceId, functionArn)) @@ -62,7 +64,7 @@ func resourceLambdaFunctionAssociationCreate(ctx context.Context, d *schema.Reso } func resourceLambdaFunctionAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, functionArn, err := LambdaFunctionAssociationParseResourceID(d.Id()) @@ -79,7 +81,7 @@ func resourceLambdaFunctionAssociationRead(ctx context.Context, d *schema.Resour } if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Lambda Function Association by Function ARN (%s): %w", functionArn, err)) + return diag.Errorf("finding Connect Lambda Function Association by Function ARN (%s): %s", functionArn, err) } d.Set("function_arn", lfaArn) @@ -89,7 +91,7 @@ func resourceLambdaFunctionAssociationRead(ctx context.Context, d *schema.Resour } func resourceLambdaFunctionAssociationDelete(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, functionArn, err := LambdaFunctionAssociationParseResourceID(d.Id()) if err != nil { @@ -108,7 +110,7 @@ func resourceLambdaFunctionAssociationDelete(ctx context.Context, d *schema.Reso } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Connect Lambda Function Association (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Connect Lambda Function Association (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/lambda_function_association_data_source.go b/internal/service/connect/lambda_function_association_data_source.go index a5ad4933d39..fa5f68f8bb0 100644 --- a/internal/service/connect/lambda_function_association_data_source.go +++ b/internal/service/connect/lambda_function_association_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( "context" - "fmt" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -29,17 +31,17 @@ func DataSourceLambdaFunctionAssociation() *schema.Resource { } func dataSourceLambdaFunctionAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) functionArn := d.Get("function_arn") instanceID := d.Get("instance_id") lfaArn, err := FindLambdaFunctionAssociationByARNWithContext(ctx, conn, instanceID.(string), functionArn.(string)) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Lambda Function Association by ARN (%s): %w", functionArn, err)) + return diag.Errorf("finding Connect Lambda Function Association by ARN (%s): %s", functionArn, err) } if lfaArn == "" { - return diag.FromErr(fmt.Errorf("error finding Connect Lambda Function Association by ARN 
(%s): not found", functionArn)) + return diag.Errorf("finding Connect Lambda Function Association by ARN (%s): not found", functionArn) } d.SetId(meta.(*conns.AWSClient).Region) diff --git a/internal/service/connect/lambda_function_association_data_source_test.go b/internal/service/connect/lambda_function_association_data_source_test.go index f70070218c1..92121a41336 100644 --- a/internal/service/connect/lambda_function_association_data_source_test.go +++ b/internal/service/connect/lambda_function_association_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/lambda_function_association_test.go b/internal/service/connect/lambda_function_association_test.go index 5a73709599b..9c2b23b4240 100644 --- a/internal/service/connect/lambda_function_association_test.go +++ b/internal/service/connect/lambda_function_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -70,7 +73,7 @@ func testAccLambdaFunctionAssociation_disappears(t *testing.T) { func testAccCheckLambdaFunctionAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_connect_lambda_function_association" { @@ -117,7 +120,7 @@ func testAccCheckLambdaFunctionAssociationExists(ctx context.Context, resourceNa return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) lfaArn, err := tfconnect.FindLambdaFunctionAssociationByARNWithContext(ctx, conn, instanceID, functionArn) diff --git a/internal/service/connect/phone_number.go b/internal/service/connect/phone_number.go index 2ec3edccd11..6a072e1be70 100644 --- a/internal/service/connect/phone_number.go +++ b/internal/service/connect/phone_number.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -95,7 +98,7 @@ func ResourcePhoneNumber() *schema.Resource { } func resourcePhoneNumberCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) targetArn := d.Get("target_arn").(string) phoneNumberType := d.Get("type").(string) @@ -131,7 +134,7 @@ func resourcePhoneNumberCreate(ctx context.Context, d *schema.ResourceData, meta input2 := &connect.ClaimPhoneNumberInput{ ClientToken: aws.String(uuid), // can't use aws.String(id.UniqueId()), because it's not a valid uuid PhoneNumber: phoneNumber, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TargetArn: aws.String(targetArn), } @@ -161,7 +164,7 @@ func resourcePhoneNumberCreate(ctx context.Context, d *schema.ResourceData, meta } func resourcePhoneNumberRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) phoneNumberId := d.Id() @@ -196,13 +199,13 @@ func resourcePhoneNumberRead(ctx context.Context, d *schema.ResourceData, meta i return diag.Errorf("setting status: %s", err) } - SetTagsOut(ctx, resp.ClaimedPhoneNumberSummary.Tags) + setTagsOut(ctx, resp.ClaimedPhoneNumberSummary.Tags) return nil } func resourcePhoneNumberUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) phoneNumberId := d.Id() @@ -231,7 +234,7 @@ func resourcePhoneNumberUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourcePhoneNumberDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) phoneNumberId := d.Id() diff --git 
a/internal/service/connect/phone_number_test.go b/internal/service/connect/phone_number_test.go index 68f361e0e8a..d80251b8dab 100644 --- a/internal/service/connect/phone_number_test.go +++ b/internal/service/connect/phone_number_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -229,7 +232,7 @@ func testAccCheckPhoneNumberExists(ctx context.Context, resourceName string, fun return fmt.Errorf("Connect Phone Number ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribePhoneNumberInput{ PhoneNumberId: aws.String(rs.Primary.ID), @@ -253,7 +256,7 @@ func testAccCheckPhoneNumberDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribePhoneNumberInput{ PhoneNumberId: aws.String(rs.Primary.ID), diff --git a/internal/service/connect/prompt_data_source.go b/internal/service/connect/prompt_data_source.go index 37c0185001e..ad4e72b12b5 100644 --- a/internal/service/connect/prompt_data_source.go +++ b/internal/service/connect/prompt_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -37,7 +40,7 @@ func DataSourcePrompt() *schema.Resource { } func dataSourcePromptRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) @@ -45,11 +48,11 @@ func dataSourcePromptRead(ctx context.Context, d *schema.ResourceData, meta inte promptSummary, err := dataSourceGetPromptSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Prompt Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Prompt Summary by name (%s): %s", name, err) } if promptSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Prompt Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Prompt Summary by name (%s): not found", name) } d.Set("arn", promptSummary.Arn) diff --git a/internal/service/connect/prompt_data_source_test.go b/internal/service/connect/prompt_data_source_test.go index 3efad2f1560..c7d4bc6851c 100644 --- a/internal/service/connect/prompt_data_source_test.go +++ b/internal/service/connect/prompt_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/queue.go b/internal/service/connect/queue.go index 55a2dba0de2..016f294b3b7 100644 --- a/internal/service/connect/queue.go +++ b/internal/service/connect/queue.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -112,14 +115,14 @@ func ResourceQueue() *schema.Resource { } func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) input := &connect.CreateQueueInput{ InstanceId: aws.String(instanceID), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -146,11 +149,11 @@ func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta inter output, err := conn.CreateQueueWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Queue (%s): %w", name, err)) + return diag.Errorf("creating Connect Queue (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Queue (%s): empty output", name)) + return diag.Errorf("creating Connect Queue (%s): empty output", name) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.QueueId))) @@ -159,7 +162,7 @@ func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, queueID, err := QueueParseID(d.Id()) @@ -179,11 +182,11 @@ func resourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interfa } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Queue (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect Queue (%s): %s", d.Id(), err) } if resp == nil || resp.Queue == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Queue (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Queue (%s): 
empty response", d.Id()) } if err := d.Set("outbound_caller_config", flattenOutboundCallerConfig(resp.Queue.OutboundCallerConfig)); err != nil { @@ -203,18 +206,18 @@ func resourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interfa quickConnectIds, err := getQueueQuickConnectIDs(ctx, conn, instanceID, queueID) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Queue Quick Connect ID for Queue (%s): %w", queueID, err)) + return diag.Errorf("finding Connect Queue Quick Connect ID for Queue (%s): %s", queueID, err) } d.Set("quick_connect_ids", aws.StringValueSlice(quickConnectIds)) - SetTagsOut(ctx, resp.Queue.Tags) + setTagsOut(ctx, resp.Queue.Tags) return nil } func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, queueID, err := QueueParseID(d.Id()) @@ -240,7 +243,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter _, err = conn.UpdateQueueHoursOfOperationWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating Queue Hours of Operation (%s): %w", d.Id(), err)) + return diag.Errorf("updating Queue Hours of Operation (%s): %s", d.Id(), err) } } @@ -254,7 +257,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter _, err = conn.UpdateQueueMaxContactsWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating Queue Max Contacts (%s): %w", d.Id(), err)) + return diag.Errorf("updating Queue Max Contacts (%s): %s", d.Id(), err) } } @@ -269,7 +272,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter _, err = conn.UpdateQueueNameWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating Queue Name and/or Description (%s): %w", d.Id(), err)) + return diag.Errorf("updating Queue Name and/or Description (%s): 
%s", d.Id(), err) } } @@ -283,7 +286,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter _, err = conn.UpdateQueueOutboundCallerConfigWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating Queue Outbound Caller Config (%s): %w", d.Id(), err)) + return diag.Errorf("updating Queue Outbound Caller Config (%s): %s", d.Id(), err) } } @@ -297,7 +300,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter _, err = conn.UpdateQueueStatusWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating Queue Status (%s): %w", d.Id(), err)) + return diag.Errorf("updating Queue Status (%s): %s", d.Id(), err) } } diff --git a/internal/service/connect/queue_data_source.go b/internal/service/connect/queue_data_source.go index e826dcdebc6..fe1010cad39 100644 --- a/internal/service/connect/queue_data_source.go +++ b/internal/service/connect/queue_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -81,7 +84,7 @@ func DataSourceQueue() *schema.Resource { } func dataSourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) @@ -97,11 +100,11 @@ func dataSourceQueueRead(ctx context.Context, d *schema.ResourceData, meta inter queueSummary, err := dataSourceGetQueueSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Queue Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Queue Summary by name (%s): %s", name, err) } if queueSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Queue Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Queue Summary by name (%s): not found", name) } input.QueueId = queueSummary.Id @@ -110,11 +113,11 @@ func dataSourceQueueRead(ctx context.Context, d *schema.ResourceData, meta inter resp, err := conn.DescribeQueueWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Queue: %w", err)) + return diag.Errorf("getting Connect Queue: %s", err) } if resp == nil || resp.Queue == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Queue: empty response")) + return diag.Errorf("getting Connect Queue: empty response") } queue := resp.Queue @@ -128,11 +131,11 @@ func dataSourceQueueRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("status", queue.Status) if err := d.Set("outbound_caller_config", flattenOutboundCallerConfig(queue.OutboundCallerConfig)); err != nil { - return diag.FromErr(fmt.Errorf("error setting outbound_caller_config: %s", err)) + return diag.Errorf("setting outbound_caller_config: %s", err) } if err := d.Set("tags", 
KeyValueTags(ctx, queue.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting tags: %s", err)) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(queue.QueueId))) diff --git a/internal/service/connect/queue_data_source_test.go b/internal/service/connect/queue_data_source_test.go index e2eb1347758..f1132bc84e4 100644 --- a/internal/service/connect/queue_data_source_test.go +++ b/internal/service/connect/queue_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/queue_test.go b/internal/service/connect/queue_test.go index e54f3f477eb..4692f3bfe65 100644 --- a/internal/service/connect/queue_test.go +++ b/internal/service/connect/queue_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -509,7 +512,7 @@ func testAccCheckQueueExists(ctx context.Context, resourceName string, function return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeQueueInput{ QueueId: aws.String(queueID), @@ -534,7 +537,7 @@ func testAccCheckQueueDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, queueID, err := tfconnect.QueueParseID(rs.Primary.ID) diff --git a/internal/service/connect/quick_connect.go b/internal/service/connect/quick_connect.go index b03d3da2c1c..a2e71656a22 100644 --- a/internal/service/connect/quick_connect.go +++ b/internal/service/connect/quick_connect.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -136,7 +139,7 @@ func ResourceQuickConnect() *schema.Resource { } func resourceQuickConnectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) @@ -145,7 +148,7 @@ func resourceQuickConnectCreate(ctx context.Context, d *schema.ResourceData, met QuickConnectConfig: quickConnectConfig, InstanceId: aws.String(instanceID), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -156,11 +159,11 @@ func resourceQuickConnectCreate(ctx context.Context, d *schema.ResourceData, met output, err := conn.CreateQuickConnectWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Quick Connect (%s): %w", name, err)) + return diag.Errorf("creating Connect Quick Connect (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Quick Connect (%s): empty output", name)) + return diag.Errorf("creating Connect Quick Connect (%s): empty output", name) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.QuickConnectId))) @@ -169,7 +172,7 @@ func resourceQuickConnectCreate(ctx context.Context, d *schema.ResourceData, met } func resourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, quickConnectID, err := QuickConnectParseID(d.Id()) @@ -189,11 +192,11 @@ func resourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Quick Connect (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect Quick Connect (%s): 
%s", d.Id(), err) } if resp == nil || resp.QuickConnect == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Quick Connect (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Quick Connect (%s): empty response", d.Id()) } if err := d.Set("quick_connect_config", flattenQuickConnectConfig(resp.QuickConnect.QuickConnectConfig)); err != nil { @@ -206,13 +209,13 @@ func resourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, meta d.Set("arn", resp.QuickConnect.QuickConnectARN) d.Set("quick_connect_id", resp.QuickConnect.QuickConnectId) - SetTagsOut(ctx, resp.QuickConnect.Tags) + setTagsOut(ctx, resp.QuickConnect.Tags) return nil } func resourceQuickConnectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, quickConnectID, err := QuickConnectParseID(d.Id()) @@ -237,7 +240,7 @@ func resourceQuickConnectUpdate(ctx context.Context, d *schema.ResourceData, met _, err = conn.UpdateQuickConnectNameWithContext(ctx, inputNameDesc) if err != nil { - return diag.FromErr(fmt.Errorf("updating QuickConnect Name (%s): %w", d.Id(), err)) + return diag.Errorf("updating QuickConnect Name (%s): %s", d.Id(), err) } } @@ -253,7 +256,7 @@ func resourceQuickConnectUpdate(ctx context.Context, d *schema.ResourceData, met inputConfig.QuickConnectConfig = quickConnectConfig _, err = conn.UpdateQuickConnectConfigWithContext(ctx, inputConfig) if err != nil { - return diag.FromErr(fmt.Errorf("updating QuickConnect (%s): %w", d.Id(), err)) + return diag.Errorf("updating QuickConnect (%s): %s", d.Id(), err) } } @@ -261,7 +264,7 @@ func resourceQuickConnectUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceQuickConnectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := 
meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, quickConnectID, err := QuickConnectParseID(d.Id()) @@ -275,7 +278,7 @@ func resourceQuickConnectDelete(ctx context.Context, d *schema.ResourceData, met }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting QuickConnect (%s): %w", d.Id(), err)) + return diag.Errorf("deleting QuickConnect (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/quick_connect_data_source.go b/internal/service/connect/quick_connect_data_source.go index 04b321d6177..3c983149722 100644 --- a/internal/service/connect/quick_connect_data_source.go +++ b/internal/service/connect/quick_connect_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -105,7 +108,7 @@ func DataSourceQuickConnect() *schema.Resource { } func dataSourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) @@ -121,11 +124,11 @@ func dataSourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, met quickConnectSummary, err := dataSourceGetQuickConnectSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Quick Connect Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Quick Connect Summary by name (%s): %s", name, err) } if quickConnectSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Quick Connect Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Quick Connect Summary by name (%s): not found", name) } input.QuickConnectId = quickConnectSummary.Id @@ -134,11 +137,11 @@ func dataSourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, met resp, err := 
conn.DescribeQuickConnectWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Quick Connect: %w", err)) + return diag.Errorf("getting Connect Quick Connect: %s", err) } if resp == nil || resp.QuickConnect == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Quick Connect: empty response")) + return diag.Errorf("getting Connect Quick Connect: empty response") } quickConnect := resp.QuickConnect @@ -149,11 +152,11 @@ func dataSourceQuickConnectRead(ctx context.Context, d *schema.ResourceData, met d.Set("quick_connect_id", quickConnect.QuickConnectId) if err := d.Set("quick_connect_config", flattenQuickConnectConfig(quickConnect.QuickConnectConfig)); err != nil { - return diag.FromErr(fmt.Errorf("error setting quick_connect_config: %s", err)) + return diag.Errorf("setting quick_connect_config: %s", err) } if err := d.Set("tags", KeyValueTags(ctx, quickConnect.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting tags: %s", err)) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(quickConnect.QuickConnectId))) diff --git a/internal/service/connect/quick_connect_data_source_test.go b/internal/service/connect/quick_connect_data_source_test.go index 22e10283600..14c72cecd77 100644 --- a/internal/service/connect/quick_connect_data_source_test.go +++ b/internal/service/connect/quick_connect_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/quick_connect_test.go b/internal/service/connect/quick_connect_test.go index 4eadddf4e1f..0ddd5f96a0b 100644 --- a/internal/service/connect/quick_connect_test.go +++ b/internal/service/connect/quick_connect_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -189,7 +192,7 @@ func testAccCheckQuickConnectExists(ctx context.Context, resourceName string, fu return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeQuickConnectInput{ QuickConnectId: aws.String(quickConnectID), @@ -214,7 +217,7 @@ func testAccCheckQuickConnectDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, quickConnectID, err := tfconnect.QuickConnectParseID(rs.Primary.ID) diff --git a/internal/service/connect/routing_profile.go b/internal/service/connect/routing_profile.go index a049041af34..beb723eb142 100644 --- a/internal/service/connect/routing_profile.go +++ b/internal/service/connect/routing_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -131,7 +134,7 @@ func ResourceRoutingProfile() *schema.Resource { } func resourceRoutingProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) @@ -141,7 +144,7 @@ func resourceRoutingProfileCreate(ctx context.Context, d *schema.ResourceData, m InstanceId: aws.String(instanceID), MediaConcurrencies: expandRoutingProfileMediaConcurrencies(d.Get("media_concurrencies").(*schema.Set).List()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("queue_configs"); ok && v.(*schema.Set).Len() > 0 && v.(*schema.Set).Len() <= CreateRoutingProfileQueuesMaxItems { @@ -152,11 +155,11 @@ func resourceRoutingProfileCreate(ctx context.Context, d *schema.ResourceData, m output, err := conn.CreateRoutingProfileWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Routing Profile (%s): %w", name, err)) + return diag.Errorf("creating Connect Routing Profile (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Routing Profile (%s): empty output", name)) + return diag.Errorf("creating Connect Routing Profile (%s): empty output", name) } // call the batched association API if the number of queues to associate with the routing profile is > CreateRoutingProfileQueuesMaxItems @@ -175,7 +178,7 @@ func resourceRoutingProfileCreate(ctx context.Context, d *schema.ResourceData, m } func resourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, routingProfileID, err := RoutingProfileParseID(d.Id()) @@ -195,11 +198,11 @@ func 
resourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, met } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Routing Profile (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect Routing Profile (%s): %s", d.Id(), err) } if resp == nil || resp.RoutingProfile == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Routing Profile (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Routing Profile (%s): empty response", d.Id()) } routingProfile := resp.RoutingProfile @@ -220,18 +223,18 @@ func resourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, met queueConfigs, err := getRoutingProfileQueueConfigs(ctx, conn, instanceID, routingProfileID) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Routing Profile Queue Configs Summary by Routing Profile ID (%s): %w", routingProfileID, err)) + return diag.Errorf("finding Connect Routing Profile Queue Configs Summary by Routing Profile ID (%s): %s", routingProfileID, err) } d.Set("queue_configs", queueConfigs) - SetTagsOut(ctx, resp.RoutingProfile.Tags) + setTagsOut(ctx, resp.RoutingProfile.Tags) return nil } func resourceRoutingProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, routingProfileID, err := RoutingProfileParseID(d.Id()) @@ -256,7 +259,7 @@ func resourceRoutingProfileUpdate(ctx context.Context, d *schema.ResourceData, m inputConcurrency.MediaConcurrencies = mediaConcurrencies _, err = conn.UpdateRoutingProfileConcurrencyWithContext(ctx, inputConcurrency) if err != nil { - return diag.FromErr(fmt.Errorf("updating RoutingProfile Media Concurrency (%s): %w", d.Id(), err)) + return diag.Errorf("updating RoutingProfile Media Concurrency (%s): %s", d.Id(), err) } } @@ -271,7 +274,7 @@ func resourceRoutingProfileUpdate(ctx context.Context, d 
*schema.ResourceData, m _, err = conn.UpdateRoutingProfileDefaultOutboundQueueWithContext(ctx, inputDefaultOutboundQueue) if err != nil { - return diag.FromErr(fmt.Errorf("updating RoutingProfile Default Outbound Queue ID (%s): %w", d.Id(), err)) + return diag.Errorf("updating RoutingProfile Default Outbound Queue ID (%s): %s", d.Id(), err) } } @@ -287,7 +290,7 @@ func resourceRoutingProfileUpdate(ctx context.Context, d *schema.ResourceData, m _, err = conn.UpdateRoutingProfileNameWithContext(ctx, inputNameDesc) if err != nil { - return diag.FromErr(fmt.Errorf("updating RoutingProfile Name (%s): %w", d.Id(), err)) + return diag.Errorf("updating RoutingProfile Name (%s): %s", d.Id(), err) } } @@ -371,7 +374,7 @@ func updateQueueConfigs(ctx context.Context, conn *connect.Connect, instanceID, } // func resourceRoutingProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { -// conn := meta.(*conns.AWSClient).ConnectConn() +// conn := meta.(*conns.AWSClient).ConnectConn(ctx) // instanceID, routingProfileID, err := RoutingProfileParseID(d.Id()) @@ -385,7 +388,7 @@ func updateQueueConfigs(ctx context.Context, conn *connect.Connect, instanceID, // }) // if err != nil { -// return diag.FromErr(fmt.Errorf("error deleting RoutingProfile (%s): %w", d.Id(), err)) +// return diag.Errorf("deleting RoutingProfile (%s): %s", d.Id(), err) // } // return nil diff --git a/internal/service/connect/routing_profile_data_source.go b/internal/service/connect/routing_profile_data_source.go index 5bee01d5af4..7675519582d 100644 --- a/internal/service/connect/routing_profile_data_source.go +++ b/internal/service/connect/routing_profile_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -101,7 +104,7 @@ func DataSourceRoutingProfile() *schema.Resource { } func dataSourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) @@ -117,11 +120,11 @@ func dataSourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, m routingProfileSummary, err := dataSourceGetRoutingProfileSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Routing Profile Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Routing Profile Summary by name (%s): %s", name, err) } if routingProfileSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Routing Profile Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Routing Profile Summary by name (%s): not found", name) } input.RoutingProfileId = routingProfileSummary.Id @@ -130,11 +133,11 @@ func dataSourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, m resp, err := conn.DescribeRoutingProfileWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Routing Profile: %w", err)) + return diag.Errorf("getting Connect Routing Profile: %s", err) } if resp == nil || resp.RoutingProfile == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Routing Profile: empty response")) + return diag.Errorf("getting Connect Routing Profile: empty response") } routingProfile := resp.RoutingProfile @@ -154,13 +157,13 @@ func dataSourceRoutingProfileRead(ctx context.Context, d *schema.ResourceData, m queueConfigs, err := getRoutingProfileQueueConfigs(ctx, conn, instanceID, *routingProfile.RoutingProfileId) if err != nil { - return 
diag.FromErr(fmt.Errorf("error finding Connect Routing Profile Queue Configs Summary by Routing Profile ID (%s): %w", *routingProfile.RoutingProfileId, err)) + return diag.Errorf("finding Connect Routing Profile Queue Configs Summary by Routing Profile ID (%s): %s", *routingProfile.RoutingProfileId, err) } d.Set("queue_configs", queueConfigs) if err := d.Set("tags", KeyValueTags(ctx, routingProfile.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting tags: %s", err)) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(routingProfile.RoutingProfileId))) diff --git a/internal/service/connect/routing_profile_data_source_test.go b/internal/service/connect/routing_profile_data_source_test.go index 309ac12603d..918742705dc 100644 --- a/internal/service/connect/routing_profile_data_source_test.go +++ b/internal/service/connect/routing_profile_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/routing_profile_test.go b/internal/service/connect/routing_profile_test.go index 0357a80ed38..de33b6f52b4 100644 --- a/internal/service/connect/routing_profile_test.go +++ b/internal/service/connect/routing_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -671,7 +674,7 @@ func testAccCheckRoutingProfileExists(ctx context.Context, resourceName string, return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeRoutingProfileInput{ InstanceId: aws.String(instanceID), @@ -696,7 +699,7 @@ func testAccCheckRoutingProfileDestroy(ctx context.Context) resource.TestCheckFu continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, routingProfileID, err := tfconnect.RoutingProfileParseID(rs.Primary.ID) diff --git a/internal/service/connect/security_profile.go b/internal/service/connect/security_profile.go index 2321ecbddb9..d3d339aace5 100644 --- a/internal/service/connect/security_profile.go +++ b/internal/service/connect/security_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -75,14 +78,14 @@ func ResourceSecurityProfile() *schema.Resource { } func resourceSecurityProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) securityProfileName := d.Get("name").(string) input := &connect.CreateSecurityProfileInput{ InstanceId: aws.String(instanceID), SecurityProfileName: aws.String(securityProfileName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -97,11 +100,11 @@ func resourceSecurityProfileCreate(ctx context.Context, d *schema.ResourceData, output, err := conn.CreateSecurityProfileWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Security Profile (%s): %w", securityProfileName, err)) + return diag.Errorf("creating Connect Security Profile (%s): %s", securityProfileName, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Security Profile (%s): empty output", securityProfileName)) + return diag.Errorf("creating Connect Security Profile (%s): empty output", securityProfileName) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.SecurityProfileId))) @@ -110,7 +113,7 @@ func resourceSecurityProfileCreate(ctx context.Context, d *schema.ResourceData, } func resourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, securityProfileID, err := SecurityProfileParseID(d.Id()) @@ -130,11 +133,11 @@ func resourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, me } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Security Profile (%s): %w", d.Id(), err)) + return 
diag.Errorf("getting Connect Security Profile (%s): %s", d.Id(), err) } if resp == nil || resp.SecurityProfile == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Security Profile (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Security Profile (%s): empty response", d.Id()) } d.Set("arn", resp.SecurityProfile.Arn) @@ -148,20 +151,20 @@ func resourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, me permissions, err := getSecurityProfilePermissions(ctx, conn, instanceID, securityProfileID) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Security Profile Permissions for Security Profile (%s): %w", securityProfileID, err)) + return diag.Errorf("finding Connect Security Profile Permissions for Security Profile (%s): %s", securityProfileID, err) } if permissions != nil { d.Set("permissions", flex.FlattenStringSet(permissions)) } - SetTagsOut(ctx, resp.SecurityProfile.AllowedAccessControlTags) + setTagsOut(ctx, resp.SecurityProfile.Tags) return nil } func resourceSecurityProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, securityProfileID, err := SecurityProfileParseID(d.Id()) @@ -185,14 +188,14 @@ func resourceSecurityProfileUpdate(ctx context.Context, d *schema.ResourceData, _, err = conn.UpdateSecurityProfileWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating SecurityProfile (%s): %w", d.Id(), err)) + return diag.Errorf("updating SecurityProfile (%s): %s", d.Id(), err) } return resourceSecurityProfileRead(ctx, d, meta) } func resourceSecurityProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, securityProfileID, err := SecurityProfileParseID(d.Id()) @@ 
-206,7 +209,7 @@ func resourceSecurityProfileDelete(ctx context.Context, d *schema.ResourceData, }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting SecurityProfile (%s): %w", d.Id(), err)) + return diag.Errorf("deleting SecurityProfile (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/security_profile_data_source.go b/internal/service/connect/security_profile_data_source.go index d17bd99644e..caf63971bfd 100644 --- a/internal/service/connect/security_profile_data_source.go +++ b/internal/service/connect/security_profile_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -61,7 +64,7 @@ func DataSourceSecurityProfile() *schema.Resource { } func dataSourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) @@ -77,11 +80,11 @@ func dataSourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, securityProfileSummary, err := dataSourceGetSecurityProfileSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Security Profile Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Security Profile Summary by name (%s): %s", name, err) } if securityProfileSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Security Profile Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Security Profile Summary by name (%s): not found", name) } input.SecurityProfileId = securityProfileSummary.Id @@ -90,11 +93,11 @@ func dataSourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, resp, err := conn.DescribeSecurityProfileWithContext(ctx, input) if err != nil { - 
return diag.FromErr(fmt.Errorf("error getting Connect Security Profile: %w", err)) + return diag.Errorf("getting Connect Security Profile: %s", err) } if resp == nil || resp.SecurityProfile == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Security Profile: empty response")) + return diag.Errorf("getting Connect Security Profile: empty response") } securityProfile := resp.SecurityProfile @@ -110,7 +113,7 @@ func dataSourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, permissions, err := getSecurityProfilePermissions(ctx, conn, instanceID, *resp.SecurityProfile.Id) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Security Profile Permissions for Security Profile (%s): %w", *resp.SecurityProfile.Id, err)) + return diag.Errorf("finding Connect Security Profile Permissions for Security Profile (%s): %s", *resp.SecurityProfile.Id, err) } if permissions != nil { @@ -118,7 +121,7 @@ func dataSourceSecurityProfileRead(ctx context.Context, d *schema.ResourceData, } if err := d.Set("tags", KeyValueTags(ctx, securityProfile.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting tags: %s", err)) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(resp.SecurityProfile.Id))) diff --git a/internal/service/connect/security_profile_data_source_test.go b/internal/service/connect/security_profile_data_source_test.go index 4c28629ca19..f510410092c 100644 --- a/internal/service/connect/security_profile_data_source_test.go +++ b/internal/service/connect/security_profile_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/security_profile_test.go b/internal/service/connect/security_profile_test.go index 76204aeade2..c97b371cb9d 100644 --- a/internal/service/connect/security_profile_test.go +++ b/internal/service/connect/security_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -209,7 +212,7 @@ func testAccCheckSecurityProfileExists(ctx context.Context, resourceName string, return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeSecurityProfileInput{ InstanceId: aws.String(instanceID), @@ -234,7 +237,7 @@ func testAccCheckSecurityProfileDestroy(ctx context.Context) resource.TestCheckF continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, securityProfileID, err := tfconnect.SecurityProfileParseID(rs.Primary.ID) diff --git a/internal/service/connect/service_package_gen.go b/internal/service/connect/service_package_gen.go index 27106f48f49..459fcde327f 100644 --- a/internal/service/connect/service_package_gen.go +++ b/internal/service/connect/service_package_gen.go @@ -5,6 +5,10 @@ package connect import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + connect_sdkv1 "github.com/aws/aws-sdk-go/service/connect" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -205,4 +209,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Connect } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*connect_sdkv1.Connect, error) { + sess := config["session"].(*session_sdkv1.Session) + + return connect_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/connect/status.go b/internal/service/connect/status.go index ea4f25fcefb..37fceec327f 100644 --- a/internal/service/connect/status.go +++ b/internal/service/connect/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -9,26 +12,6 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" ) -func statusInstance(ctx context.Context, conn *connect.Connect, instanceId string) retry.StateRefreshFunc { - return func() (interface{}, string, error) { - input := &connect.DescribeInstanceInput{ - InstanceId: aws.String(instanceId), - } - - output, err := conn.DescribeInstanceWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, connect.ErrCodeResourceNotFoundException) { - return output, connect.ErrCodeResourceNotFoundException, nil - } - - if err != nil { - return nil, "", err - } - - return output, aws.StringValue(output.Instance.InstanceStatus), nil - } -} - func statusPhoneNumber(ctx context.Context, conn *connect.Connect, phoneNumberId string) retry.StateRefreshFunc { return func() (interface{}, string, error) { input := &connect.DescribePhoneNumberInput{ diff --git a/internal/service/connect/sweep.go b/internal/service/connect/sweep.go index 9ce6a7c055e..99b05feb04f 100644 --- a/internal/service/connect/sweep.go +++ b/internal/service/connect/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/connect" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,12 +26,12 @@ func init() { func sweepInstance(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ConnectConn() + conn := client.ConnectConn(ctx) var errs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -64,7 +66,7 @@ func sweepInstance(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Connect Instances: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Connect Instances for %s: %w", region, err)) } diff --git a/internal/service/connect/tags_gen.go b/internal/service/connect/tags_gen.go index 418d7d82cb1..e95229a045b 100644 --- a/internal/service/connect/tags_gen.go +++ b/internal/service/connect/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from connect service tags. +// KeyValueTags creates tftags.KeyValueTags from connect service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns connect service tags from Context. +// getTagsIn returns connect service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets connect service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets connect service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates connect service tags. +// updateTags updates connect service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn connectiface.ConnectAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn connectiface.ConnectAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn connectiface.ConnectAPI, identifier st // UpdateTags updates connect service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ConnectConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ConnectConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/connect/user.go b/internal/service/connect/user.go index 147a14b9328..22def900b02 100644 --- a/internal/service/connect/user.go +++ b/internal/service/connect/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -144,7 +147,7 @@ func ResourceUser() *schema.Resource { } func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) name := d.Get("name").(string) @@ -153,7 +156,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf PhoneConfig: expandPhoneConfig(d.Get("phone_config").([]interface{})), RoutingProfileId: aws.String(d.Get("routing_profile_id").(string)), SecurityProfileIds: flex.ExpandStringSet(d.Get("security_profile_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Username: aws.String(name), } @@ -176,11 +179,11 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf output, err := conn.CreateUserWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect User (%s): %w", name, err)) + return diag.Errorf("creating Connect User (%s): %s", name, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect User (%s): empty output", name)) + return diag.Errorf("creating Connect User (%s): empty output", name) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.UserId))) @@ -189,7 +192,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, userID, err := UserParseID(d.Id()) @@ -209,11 +212,11 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect User (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect User 
(%s): %s", d.Id(), err) } if resp == nil || resp.User == nil { - return diag.FromErr(fmt.Errorf("error getting Connect User (%s): empty response", d.Id())) + return diag.Errorf("getting Connect User (%s): empty response", d.Id()) } user := resp.User @@ -228,20 +231,20 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("user_id", user.Id) if err := d.Set("identity_info", flattenIdentityInfo(user.IdentityInfo)); err != nil { - return diag.FromErr(fmt.Errorf("error setting identity_info: %w", err)) + return diag.Errorf("setting identity_info: %s", err) } if err := d.Set("phone_config", flattenPhoneConfig(user.PhoneConfig)); err != nil { - return diag.FromErr(fmt.Errorf("error setting phone_config: %w", err)) + return diag.Errorf("setting phone_config: %s", err) } - SetTagsOut(ctx, resp.User.Tags) + setTagsOut(ctx, resp.User.Tags) return nil } func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, userID, err := UserParseID(d.Id()) @@ -270,7 +273,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.UpdateUserHierarchyWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating User hierarchy_group_id (%s): %w", d.Id(), err)) + return diag.Errorf("updating User hierarchy_group_id (%s): %s", d.Id(), err) } } @@ -285,7 +288,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.UpdateUserIdentityInfoWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating User identity_info (%s): %w", d.Id(), err)) + return diag.Errorf("updating User identity_info (%s): %s", d.Id(), err) } } @@ -300,7 +303,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.UpdateUserPhoneConfigWithContext(ctx, input) if 
err != nil { - return diag.FromErr(fmt.Errorf("updating User phone_config (%s): %w", d.Id(), err)) + return diag.Errorf("updating User phone_config (%s): %s", d.Id(), err) } } @@ -315,7 +318,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.UpdateUserRoutingProfileWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating User routing_profile_id (%s): %w", d.Id(), err)) + return diag.Errorf("updating User routing_profile_id (%s): %s", d.Id(), err) } } @@ -330,7 +333,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err = conn.UpdateUserSecurityProfilesWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("updating User security_profile_ids (%s): %w", d.Id(), err)) + return diag.Errorf("updating User security_profile_ids (%s): %s", d.Id(), err) } } @@ -338,7 +341,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, userID, err := UserParseID(d.Id()) @@ -352,7 +355,7 @@ func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interf }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting User (%s): %w", d.Id(), err)) + return diag.Errorf("deleting User (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/user_data_source.go b/internal/service/connect/user_data_source.go index af30b4ae51d..6ce0e1d31a9 100644 --- a/internal/service/connect/user_data_source.go +++ b/internal/service/connect/user_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -110,7 +113,7 @@ func DataSourceUser() *schema.Resource { } func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) diff --git a/internal/service/connect/user_data_source_test.go b/internal/service/connect/user_data_source_test.go index 4c8d291fede..6a940e23bf4 100644 --- a/internal/service/connect/user_data_source_test.go +++ b/internal/service/connect/user_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/user_hierarchy_group.go b/internal/service/connect/user_hierarchy_group.go index c67aac765c5..bc6d14d4491 100644 --- a/internal/service/connect/user_hierarchy_group.go +++ b/internal/service/connect/user_hierarchy_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -117,14 +120,14 @@ func userHierarchyPathLevelSchema() *schema.Schema { } func resourceUserHierarchyGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) userHierarchyGroupName := d.Get("name").(string) input := &connect.CreateUserHierarchyGroupInput{ InstanceId: aws.String(instanceID), Name: aws.String(userHierarchyGroupName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("parent_group_id"); ok { @@ -135,11 +138,11 @@ func resourceUserHierarchyGroupCreate(ctx context.Context, d *schema.ResourceDat output, err := conn.CreateUserHierarchyGroupWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect User Hierarchy Group (%s): %w", userHierarchyGroupName, err)) + return diag.Errorf("creating Connect User Hierarchy Group (%s): %s", userHierarchyGroupName, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect User Hierarchy Group (%s): empty output", userHierarchyGroupName)) + return diag.Errorf("creating Connect User Hierarchy Group (%s): empty output", userHierarchyGroupName) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(output.HierarchyGroupId))) @@ -148,7 +151,7 @@ func resourceUserHierarchyGroupCreate(ctx context.Context, d *schema.ResourceDat } func resourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, userHierarchyGroupID, err := UserHierarchyGroupParseID(d.Id()) @@ -168,11 +171,11 @@ func resourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect User 
Hierarchy Group (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect User Hierarchy Group (%s): %s", d.Id(), err) } if resp == nil || resp.HierarchyGroup == nil { - return diag.FromErr(fmt.Errorf("error getting Connect User Hierarchy Group (%s): empty response", d.Id())) + return diag.Errorf("getting Connect User Hierarchy Group (%s): empty response", d.Id()) } d.Set("arn", resp.HierarchyGroup.Arn) @@ -182,16 +185,16 @@ func resourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceData, d.Set("name", resp.HierarchyGroup.Name) if err := d.Set("hierarchy_path", flattenUserHierarchyPath(resp.HierarchyGroup.HierarchyPath)); err != nil { - return diag.FromErr(fmt.Errorf("error setting Connect User Hierarchy Group hierarchy_path (%s): %w", d.Id(), err)) + return diag.Errorf("setting Connect User Hierarchy Group hierarchy_path (%s): %s", d.Id(), err) } - SetTagsOut(ctx, resp.HierarchyGroup.Tags) + setTagsOut(ctx, resp.HierarchyGroup.Tags) return nil } func resourceUserHierarchyGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, userHierarchyGroupID, err := UserHierarchyGroupParseID(d.Id()) @@ -206,7 +209,7 @@ func resourceUserHierarchyGroupUpdate(ctx context.Context, d *schema.ResourceDat Name: aws.String(d.Get("name").(string)), }) if err != nil { - return diag.FromErr(fmt.Errorf("updating User Hierarchy Group (%s): %w", d.Id(), err)) + return diag.Errorf("updating User Hierarchy Group (%s): %s", d.Id(), err) } } @@ -214,7 +217,7 @@ func resourceUserHierarchyGroupUpdate(ctx context.Context, d *schema.ResourceDat } func resourceUserHierarchyGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, userHierarchyGroupID, err := 
UserHierarchyGroupParseID(d.Id()) @@ -228,7 +231,7 @@ func resourceUserHierarchyGroupDelete(ctx context.Context, d *schema.ResourceDat }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting User Hierarchy Group (%s): %w", d.Id(), err)) + return diag.Errorf("deleting User Hierarchy Group (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/user_hierarchy_group_data_source.go b/internal/service/connect/user_hierarchy_group_data_source.go index 2faa6bf6a6f..9d28648ca94 100644 --- a/internal/service/connect/user_hierarchy_group_data_source.go +++ b/internal/service/connect/user_hierarchy_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -82,7 +85,7 @@ func DataSourceUserHierarchyGroup() *schema.Resource { } func dataSourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) @@ -98,11 +101,11 @@ func dataSourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceDat hierarchyGroupSummary, err := userHierarchyGroupSummaryByName(ctx, conn, instanceID, name) if err != nil { - return diag.FromErr(fmt.Errorf("error finding Connect Hierarchy Group Summary by name (%s): %w", name, err)) + return diag.Errorf("finding Connect Hierarchy Group Summary by name (%s): %s", name, err) } if hierarchyGroupSummary == nil { - return diag.FromErr(fmt.Errorf("error finding Connect Hierarchy Group Summary by name (%s): not found", name)) + return diag.Errorf("finding Connect Hierarchy Group Summary by name (%s): not found", name) } input.HierarchyGroupId = hierarchyGroupSummary.Id @@ -111,11 +114,11 @@ func dataSourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceDat resp, err := 
conn.DescribeUserHierarchyGroupWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Hierarchy Group: %w", err)) + return diag.Errorf("getting Connect Hierarchy Group: %s", err) } if resp == nil || resp.HierarchyGroup == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Hierarchy Group: empty response")) + return diag.Errorf("getting Connect Hierarchy Group: empty response") } hierarchyGroup := resp.HierarchyGroup @@ -127,11 +130,11 @@ func dataSourceUserHierarchyGroupRead(ctx context.Context, d *schema.ResourceDat d.Set("name", hierarchyGroup.Name) if err := d.Set("hierarchy_path", flattenUserHierarchyPath(hierarchyGroup.HierarchyPath)); err != nil { - return diag.FromErr(fmt.Errorf("error setting Connect User Hierarchy Group hierarchy_path (%s): %w", d.Id(), err)) + return diag.Errorf("setting Connect User Hierarchy Group hierarchy_path (%s): %s", d.Id(), err) } if err := d.Set("tags", KeyValueTags(ctx, hierarchyGroup.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting tags: %s", err)) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s:%s", instanceID, aws.StringValue(hierarchyGroup.Id))) diff --git a/internal/service/connect/user_hierarchy_group_data_source_test.go b/internal/service/connect/user_hierarchy_group_data_source_test.go index b360f113651..98e5b98484e 100644 --- a/internal/service/connect/user_hierarchy_group_data_source_test.go +++ b/internal/service/connect/user_hierarchy_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/user_hierarchy_group_test.go b/internal/service/connect/user_hierarchy_group_test.go index c45d91b4c3a..ed8312cc40f 100644 --- a/internal/service/connect/user_hierarchy_group_test.go +++ b/internal/service/connect/user_hierarchy_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -203,7 +206,7 @@ func testAccCheckUserHierarchyGroupExists(ctx context.Context, resourceName stri return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeUserHierarchyGroupInput{ HierarchyGroupId: aws.String(userHierarchyGroupID), @@ -228,7 +231,7 @@ func testAccCheckUserHierarchyGroupDestroy(ctx context.Context) resource.TestChe continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, userHierarchyGroupID, err := tfconnect.UserHierarchyGroupParseID(rs.Primary.ID) diff --git a/internal/service/connect/user_hierarchy_structure.go b/internal/service/connect/user_hierarchy_structure.go index a27771ffe38..0b20d011da1 100644 --- a/internal/service/connect/user_hierarchy_structure.go +++ b/internal/service/connect/user_hierarchy_structure.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( "context" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -91,7 +93,7 @@ func userHierarchyLevelSchema() *schema.Schema { } func resourceUserHierarchyStructureCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) @@ -104,7 +106,7 @@ func resourceUserHierarchyStructureCreate(ctx context.Context, d *schema.Resourc _, err := conn.UpdateUserHierarchyStructureWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect User Hierarchy Structure for Connect Instance (%s): %w", instanceID, err)) + return diag.Errorf("creating Connect User Hierarchy Structure for Connect Instance (%s): %s", instanceID, err) } d.SetId(instanceID) @@ -113,7 +115,7 @@ func resourceUserHierarchyStructureCreate(ctx context.Context, d *schema.Resourc } func resourceUserHierarchyStructureRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Id() @@ -128,15 +130,15 @@ func resourceUserHierarchyStructureRead(ctx context.Context, d *schema.ResourceD } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect User Hierarchy Structure (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect User Hierarchy Structure (%s): %s", d.Id(), err) } if resp == nil || resp.HierarchyStructure == nil { - return diag.FromErr(fmt.Errorf("error getting Connect User Hierarchy Structure (%s): empty response", d.Id())) + return diag.Errorf("getting Connect User Hierarchy Structure (%s): empty response", d.Id()) } if err := d.Set("hierarchy_structure", flattenUserHierarchyStructure(resp.HierarchyStructure)); err != nil { - return diag.FromErr(fmt.Errorf("error setting Connect User 
Hierarchy Structure hierarchy_structure for Connect instance: (%s)", d.Id())) + return diag.Errorf("setting Connect User Hierarchy Structure hierarchy_structure for Connect instance: (%s)", d.Id()) } d.Set("instance_id", instanceID) @@ -145,7 +147,7 @@ func resourceUserHierarchyStructureRead(ctx context.Context, d *schema.ResourceD } func resourceUserHierarchyStructureUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Id() @@ -156,7 +158,7 @@ func resourceUserHierarchyStructureUpdate(ctx context.Context, d *schema.Resourc }) if err != nil { - return diag.FromErr(fmt.Errorf("error updating UserHierarchyStructure Name (%s): %w", d.Id(), err)) + return diag.Errorf("updating UserHierarchyStructure Name (%s): %s", d.Id(), err) } } @@ -164,7 +166,7 @@ func resourceUserHierarchyStructureUpdate(ctx context.Context, d *schema.Resourc } func resourceUserHierarchyStructureDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Id() @@ -174,7 +176,7 @@ func resourceUserHierarchyStructureDelete(ctx context.Context, d *schema.Resourc }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting UserHierarchyStructure (%s): %w", d.Id(), err)) + return diag.Errorf("deleting UserHierarchyStructure (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/connect/user_hierarchy_structure_data_source.go b/internal/service/connect/user_hierarchy_structure_data_source.go index 09ed771707b..3094da8f794 100644 --- a/internal/service/connect/user_hierarchy_structure_data_source.go +++ b/internal/service/connect/user_hierarchy_structure_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( "context" - "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/connect" @@ -79,7 +81,7 @@ func userHierarchyLevelDataSourceSchema() *schema.Schema { } func dataSourceUserHierarchyStructureRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) @@ -88,15 +90,15 @@ func dataSourceUserHierarchyStructureRead(ctx context.Context, d *schema.Resourc }) if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect User Hierarchy Structure for Connect Instance (%s): %w", instanceID, err)) + return diag.Errorf("getting Connect User Hierarchy Structure for Connect Instance (%s): %s", instanceID, err) } if resp == nil || resp.HierarchyStructure == nil { - return diag.FromErr(fmt.Errorf("error getting Connect User Hierarchy Structure for Connect Instance (%s): empty response", instanceID)) + return diag.Errorf("getting Connect User Hierarchy Structure for Connect Instance (%s): empty response", instanceID) } if err := d.Set("hierarchy_structure", flattenUserHierarchyStructure(resp.HierarchyStructure)); err != nil { - return diag.FromErr(fmt.Errorf("error setting Connect User Hierarchy Structure for Connect Instance: (%s)", instanceID)) + return diag.Errorf("setting Connect User Hierarchy Structure for Connect Instance: (%s)", instanceID) } d.SetId(instanceID) diff --git a/internal/service/connect/user_hierarchy_structure_data_source_test.go b/internal/service/connect/user_hierarchy_structure_data_source_test.go index 95d16acc1ff..15db6ae9cb3 100644 --- a/internal/service/connect/user_hierarchy_structure_data_source_test.go +++ b/internal/service/connect/user_hierarchy_structure_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/user_hierarchy_structure_test.go b/internal/service/connect/user_hierarchy_structure_test.go index a53145f9d50..b1b2282da5d 100644 --- a/internal/service/connect/user_hierarchy_structure_test.go +++ b/internal/service/connect/user_hierarchy_structure_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -218,7 +221,7 @@ func testAccCheckUserHierarchyStructureExists(ctx context.Context, resourceName } instanceID := rs.Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeUserHierarchyStructureInput{ InstanceId: aws.String(instanceID), @@ -242,7 +245,7 @@ func testAccCheckUserHierarchyStructureDestroy(ctx context.Context) resource.Tes continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID := rs.Primary.ID diff --git a/internal/service/connect/user_test.go b/internal/service/connect/user_test.go index 5eccb60ecb0..a41e700c5d9 100644 --- a/internal/service/connect/user_test.go +++ b/internal/service/connect/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -425,7 +428,7 @@ func testAccCheckUserExists(ctx context.Context, resourceName string, function * return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeUserInput{ UserId: aws.String(userID), @@ -450,7 +453,7 @@ func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, userID, err := tfconnect.UserParseID(rs.Primary.ID) diff --git a/internal/service/connect/validate.go b/internal/service/connect/validate.go index 55403966d3c..bb786d87cf5 100644 --- a/internal/service/connect/validate.go +++ b/internal/service/connect/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( diff --git a/internal/service/connect/validate_test.go b/internal/service/connect/validate_test.go index eca5876e506..59c3113d65c 100644 --- a/internal/service/connect/validate_test.go +++ b/internal/service/connect/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( diff --git a/internal/service/connect/vocabulary.go b/internal/service/connect/vocabulary.go index d93c383e6dd..151a7ad1d7c 100644 --- a/internal/service/connect/vocabulary.go +++ b/internal/service/connect/vocabulary.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -96,7 +99,7 @@ func ResourceVocabulary() *schema.Resource { } func resourceVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID := d.Get("instance_id").(string) vocabularyName := d.Get("name").(string) @@ -105,7 +108,7 @@ func resourceVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta InstanceId: aws.String(instanceID), Content: aws.String(d.Get("content").(string)), LanguageCode: aws.String(d.Get("language_code").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VocabularyName: aws.String(vocabularyName), } @@ -113,11 +116,11 @@ func resourceVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta output, err := conn.CreateVocabularyWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Connect Vocabulary (%s): %w", vocabularyName, err)) + return diag.Errorf("creating Connect Vocabulary (%s): %s", vocabularyName, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error creating Connect Vocabulary (%s): empty output", vocabularyName)) + return diag.Errorf("creating Connect Vocabulary (%s): empty output", vocabularyName) } vocabularyID := aws.StringValue(output.VocabularyId) @@ -126,14 +129,14 @@ func resourceVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta // waiter since the status changes from CREATION_IN_PROGRESS to either ACTIVE or CREATION_FAILED if _, err := waitVocabularyCreated(ctx, conn, d.Timeout(schema.TimeoutCreate), instanceID, vocabularyID); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for Vocabulary (%s) creation: %w", d.Id(), err)) + return diag.Errorf("waiting for Vocabulary (%s) creation: %s", d.Id(), err) } return resourceVocabularyRead(ctx, d, meta) } func resourceVocabularyRead(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, vocabularyID, err := VocabularyParseID(d.Id()) @@ -153,11 +156,11 @@ func resourceVocabularyRead(ctx context.Context, d *schema.ResourceData, meta in } if err != nil { - return diag.FromErr(fmt.Errorf("error getting Connect Vocabulary (%s): %w", d.Id(), err)) + return diag.Errorf("getting Connect Vocabulary (%s): %s", d.Id(), err) } if resp == nil || resp.Vocabulary == nil { - return diag.FromErr(fmt.Errorf("error getting Connect Vocabulary (%s): empty response", d.Id())) + return diag.Errorf("getting Connect Vocabulary (%s): empty response", d.Id()) } vocabulary := resp.Vocabulary @@ -172,7 +175,7 @@ func resourceVocabularyRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("state", vocabulary.State) d.Set("vocabulary_id", vocabulary.Id) - SetTagsOut(ctx, resp.Vocabulary.Tags) + setTagsOut(ctx, resp.Vocabulary.Tags) return nil } @@ -183,7 +186,7 @@ func resourceVocabularyUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceVocabularyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) instanceID, vocabularyID, err := VocabularyParseID(d.Id()) @@ -197,11 +200,11 @@ func resourceVocabularyDelete(ctx context.Context, d *schema.ResourceData, meta }) if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Vocabulary (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Vocabulary (%s): %s", d.Id(), err) } if _, err := waitVocabularyDeleted(ctx, conn, d.Timeout(schema.TimeoutDelete), instanceID, vocabularyID); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for Vocabulary (%s) deletion: %w", d.Id(), err)) + return diag.Errorf("waiting for Vocabulary (%s) deletion: %s", d.Id(), err) } return nil diff --git 
a/internal/service/connect/vocabulary_data_source.go b/internal/service/connect/vocabulary_data_source.go index 38ac7b55c69..3ba42d9024d 100644 --- a/internal/service/connect/vocabulary_data_source.go +++ b/internal/service/connect/vocabulary_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -67,7 +70,7 @@ func DataSourceVocabulary() *schema.Resource { } func dataSourceVocabularyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ConnectConn() + conn := meta.(*conns.AWSClient).ConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceID := d.Get("instance_id").(string) diff --git a/internal/service/connect/vocabulary_data_source_test.go b/internal/service/connect/vocabulary_data_source_test.go index 9dbee75dcc4..03ec6290b00 100644 --- a/internal/service/connect/vocabulary_data_source_test.go +++ b/internal/service/connect/vocabulary_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( diff --git a/internal/service/connect/vocabulary_test.go b/internal/service/connect/vocabulary_test.go index 5147ddb535a..30facae9e1f 100644 --- a/internal/service/connect/vocabulary_test.go +++ b/internal/service/connect/vocabulary_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package connect_test import ( @@ -165,7 +168,7 @@ func testAccCheckVocabularyExists(ctx context.Context, resourceName string, func return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) params := &connect.DescribeVocabularyInput{ InstanceId: aws.String(instanceID), @@ -190,7 +193,7 @@ func testAccCheckVocabularyDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ConnectConn(ctx) instanceID, vocabularyID, err := tfconnect.VocabularyParseID(rs.Primary.ID) diff --git a/internal/service/connect/wait.go b/internal/service/connect/wait.go index a1fb02a473f..d06d8c8c4dd 100644 --- a/internal/service/connect/wait.go +++ b/internal/service/connect/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package connect import ( @@ -12,10 +15,6 @@ import ( ) const ( - // ConnectInstanceCreateTimeout Timeout for connect instance creation - instanceCreatedTimeout = 5 * time.Minute - instanceDeletedTimeout = 5 * time.Minute - botAssociationCreateTimeout = 5 * time.Minute phoneNumberCreatedTimeout = 2 * time.Minute @@ -28,43 +27,6 @@ const ( vocabularyDeletedTimeout = 100 * time.Minute ) -func waitInstanceCreated(ctx context.Context, conn *connect.Connect, timeout time.Duration, instanceId string) (*connect.DescribeInstanceOutput, error) { - stateConf := &retry.StateChangeConf{ - Pending: []string{connect.InstanceStatusCreationInProgress}, - Target: []string{connect.InstanceStatusActive}, - Refresh: statusInstance(ctx, conn, instanceId), - Timeout: timeout, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - - if v, ok := outputRaw.(*connect.DescribeInstanceOutput); ok { - return v, err - } - - return nil, err -} - -// We don't have a PENDING_DELETION or DELETED for 
the Connect instance. -// If the Connect Instance has an associated EXISTING DIRECTORY, removing the connect instance -// will cause an error because it still has authorized applications. -func waitInstanceDeleted(ctx context.Context, conn *connect.Connect, timeout time.Duration, instanceId string) (*connect.DescribeInstanceOutput, error) { - stateConf := &retry.StateChangeConf{ - Pending: []string{connect.InstanceStatusActive}, - Target: []string{connect.ErrCodeResourceNotFoundException}, - Refresh: statusInstance(ctx, conn, instanceId), - Timeout: timeout, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - - if v, ok := outputRaw.(*connect.DescribeInstanceOutput); ok { - return v, err - } - - return nil, err -} - func waitPhoneNumberCreated(ctx context.Context, conn *connect.Connect, timeout time.Duration, phoneNumberId string) (*connect.DescribePhoneNumberOutput, error) { stateConf := &retry.StateChangeConf{ Pending: []string{connect.PhoneNumberWorkflowStatusInProgress}, diff --git a/internal/service/controltower/control.go b/internal/service/controltower/control.go index c09f65e332a..aa331ba4b36 100644 --- a/internal/service/controltower/control.go +++ b/internal/service/controltower/control.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package controltower import ( @@ -53,7 +56,7 @@ func ResourceControl() *schema.Resource { } func resourceControlCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ControlTowerConn() + conn := meta.(*conns.AWSClient).ControlTowerConn(ctx) controlIdentifier := d.Get("control_identifier").(string) targetIdentifier := d.Get("target_identifier").(string) @@ -72,14 +75,14 @@ func resourceControlCreate(ctx context.Context, d *schema.ResourceData, meta int d.SetId(id) if _, err := waitOperationSucceeded(ctx, conn, aws.StringValue(output.OperationIdentifier), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.FromErr(fmt.Errorf("waiting for ControlTower Control (%s) create: %w", d.Id(), err)) + return diag.Errorf("waiting for ControlTower Control (%s) create: %s", d.Id(), err) } return resourceControlRead(ctx, d, meta) } func resourceControlRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ControlTowerConn() + conn := meta.(*conns.AWSClient).ControlTowerConn(ctx) targetIdentifier, controlIdentifier, err := ControlParseResourceID(d.Id()) @@ -106,7 +109,7 @@ func resourceControlRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceControlDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ControlTowerConn() + conn := meta.(*conns.AWSClient).ControlTowerConn(ctx) targetIdentifier, controlIdentifier, err := ControlParseResourceID(d.Id()) diff --git a/internal/service/controltower/control_test.go b/internal/service/controltower/control_test.go index dcf6636357d..0ac8c1fcde3 100644 --- a/internal/service/controltower/control_test.go +++ b/internal/service/controltower/control_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package controltower_test import ( @@ -100,7 +103,7 @@ func testAccCheckControlExists(ctx context.Context, n string, v *controltower.En return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ControlTowerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ControlTowerConn(ctx) output, err := tfcontroltower.FindEnabledControlByTwoPartKey(ctx, conn, targetIdentifier, controlIdentifier) @@ -116,7 +119,7 @@ func testAccCheckControlExists(ctx context.Context, n string, v *controltower.En func testAccCheckControlDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ControlTowerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ControlTowerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_controltower_control" { diff --git a/internal/service/controltower/controls_data_source.go b/internal/service/controltower/controls_data_source.go index f15462005ba..225bbaf93e6 100644 --- a/internal/service/controltower/controls_data_source.go +++ b/internal/service/controltower/controls_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package controltower import ( @@ -32,7 +35,7 @@ func DataSourceControls() *schema.Resource { } func DataSourceControlsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ControlTowerConn() + conn := meta.(*conns.AWSClient).ControlTowerConn(ctx) targetIdentifier := d.Get("target_identifier").(string) input := &controltower.ListEnabledControlsInput{ diff --git a/internal/service/controltower/controls_data_source_test.go b/internal/service/controltower/controls_data_source_test.go index 9879e244ef3..3c0a7aa8ac2 100644 --- a/internal/service/controltower/controls_data_source_test.go +++ b/internal/service/controltower/controls_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package controltower_test import ( @@ -39,7 +42,7 @@ func TestAccControlTowerControlsDataSource_basic(t *testing.T) { func testAccPreCheck(ctx context.Context, t *testing.T) { // leverage the control tower created "aws-controltower-BaselineCloudTrail" to confirm control tower is deployed var trails []string - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudTrailConn(ctx) input := &cloudtrail.ListTrailsInput{} err := conn.ListTrailsPagesWithContext(ctx, input, func(page *cloudtrail.ListTrailsOutput, lastPage bool) bool { diff --git a/internal/service/controltower/generate.go b/internal/service/controltower/generate.go new file mode 100644 index 00000000000..5bed2ff12f6 --- /dev/null +++ b/internal/service/controltower/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package controltower diff --git a/internal/service/controltower/service_package_gen.go b/internal/service/controltower/service_package_gen.go index 9838b167fe2..02e9a984e42 100644 --- a/internal/service/controltower/service_package_gen.go +++ b/internal/service/controltower/service_package_gen.go @@ -5,6 +5,10 @@ package controltower import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + controltower_sdkv1 "github.com/aws/aws-sdk-go/service/controltower" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -41,4 +45,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ControlTower } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*controltower_sdkv1.ControlTower, error) { + sess := config["session"].(*session_sdkv1.Session) + + return controltower_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cur/cur_test.go b/internal/service/cur/cur_test.go index f50deb843eb..dd06b365e5e 100644 --- a/internal/service/cur/cur_test.go +++ b/internal/service/cur/cur_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cur_test import ( @@ -13,13 +16,15 @@ func TestAccCUR_serial(t *testing.T) { testCases := map[string]map[string]func(t *testing.T){ "ReportDefinition": { - "basic": testAccReportDefinition_basic, - "disappears": testAccReportDefinition_disappears, - "textOrCsv": testAccReportDefinition_textOrCSV, - "parquet": testAccReportDefinition_parquet, - "athena": testAccReportDefinition_athena, - "refresh": testAccReportDefinition_refresh, - "overwrite": testAccReportDefinition_overwrite, + "basic": testAccReportDefinition_basic, + "disappears": testAccReportDefinition_disappears, + "textOrCsv": testAccReportDefinition_textOrCSV, + "parquet": testAccReportDefinition_parquet, + "athena": testAccReportDefinition_athena, + "refresh": testAccReportDefinition_refresh, + "overwrite": testAccReportDefinition_overwrite, + "DataSource_basic": testAccReportDefinitionDataSource_basic, + "DataSource_additional": testAccReportDefinitionDataSource_additional, }, } diff --git a/internal/service/cur/find.go b/internal/service/cur/find.go index bce99647601..df4774444ec 100644 --- a/internal/service/cur/find.go +++ b/internal/service/cur/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cur import ( diff --git a/internal/service/cur/generate.go b/internal/service/cur/generate.go new file mode 100644 index 00000000000..30d172b6f9e --- /dev/null +++ b/internal/service/cur/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package cur diff --git a/internal/service/cur/report_definition.go b/internal/service/cur/report_definition.go index d137d69a996..7623a60c1ad 100644 --- a/internal/service/cur/report_definition.go +++ b/internal/service/cur/report_definition.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cur import ( @@ -106,7 +109,7 @@ func ResourceReportDefinition() *schema.Resource { func resourceReportDefinitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CURConn() + conn := meta.(*conns.AWSClient).CURConn(ctx) reportName := d.Get("report_name").(string) additionalArtifacts := flex.ExpandStringSet(d.Get("additional_artifacts").(*schema.Set)) @@ -163,7 +166,7 @@ func resourceReportDefinitionCreate(ctx context.Context, d *schema.ResourceData, func resourceReportDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CURConn() + conn := meta.(*conns.AWSClient).CURConn(ctx) reportDefinition, err := FindReportDefinitionByName(ctx, conn, d.Id()) @@ -209,7 +212,7 @@ func resourceReportDefinitionRead(ctx context.Context, d *schema.ResourceData, m func resourceReportDefinitionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CURConn() + conn := meta.(*conns.AWSClient).CURConn(ctx) additionalArtifacts := flex.ExpandStringSet(d.Get("additional_artifacts").(*schema.Set)) compression := d.Get("compression").(string) @@ -266,7 +269,7 @@ func resourceReportDefinitionUpdate(ctx context.Context, d *schema.ResourceData, func resourceReportDefinitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CURConn() + conn := 
meta.(*conns.AWSClient).CURConn(ctx) log.Printf("[DEBUG] Deleting Cost And Usage Report Definition: %s", d.Id()) _, err := conn.DeleteReportDefinitionWithContext(ctx, &cur.DeleteReportDefinitionInput{ diff --git a/internal/service/cur/report_definition_data_source.go b/internal/service/cur/report_definition_data_source.go index 6f398ba3d80..c4af34555c2 100644 --- a/internal/service/cur/report_definition_data_source.go +++ b/internal/service/cur/report_definition_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cur import ( @@ -70,7 +73,7 @@ func DataSourceReportDefinition() *schema.Resource { func dataSourceReportDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).CURConn() + conn := meta.(*conns.AWSClient).CURConn(ctx) reportName := d.Get("report_name").(string) diff --git a/internal/service/cur/report_definition_data_source_test.go b/internal/service/cur/report_definition_data_source_test.go index e3052c42c2b..78f7a34bc0f 100644 --- a/internal/service/cur/report_definition_data_source_test.go +++ b/internal/service/cur/report_definition_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cur_test import ( @@ -11,7 +14,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) -func TestAccCURReportDefinitionDataSource_basic(t *testing.T) { +func testAccReportDefinitionDataSource_basic(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_cur_report_definition.test" datasourceName := "data.aws_cur_report_definition.test" @@ -19,7 +22,7 @@ func TestAccCURReportDefinitionDataSource_basic(t *testing.T) { reportName := sdkacctest.RandomWithPrefix("tf_acc_test") bucketName := fmt.Sprintf("tf-test-bucket-%d", sdkacctest.RandInt()) - resource.ParallelTest(t, resource.TestCase{ + resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, ErrorCheck: acctest.ErrorCheck(t, cur.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, @@ -42,7 +45,7 @@ func TestAccCURReportDefinitionDataSource_basic(t *testing.T) { }) } -func TestAccCURReportDefinitionDataSource_additional(t *testing.T) { +func testAccReportDefinitionDataSource_additional(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_cur_report_definition.test" datasourceName := "data.aws_cur_report_definition.test" @@ -50,7 +53,7 @@ func TestAccCURReportDefinitionDataSource_additional(t *testing.T) { reportName := sdkacctest.RandomWithPrefix("tf_acc_test") bucketName := fmt.Sprintf("tf-test-bucket-%d", sdkacctest.RandInt()) - resource.ParallelTest(t, resource.TestCase{ + resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, ErrorCheck: acctest.ErrorCheck(t, cur.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, @@ -82,15 +85,10 @@ data "aws_billing_service_account" "test" {} data "aws_partition" "current" {} resource "aws_s3_bucket" "test" { - bucket = "%[2]s" + bucket = %[2]q force_destroy = true } -resource 
"aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_s3_bucket_policy" "test" { bucket = aws_s3_bucket.test.id @@ -128,11 +126,11 @@ POLICY resource "aws_cur_report_definition" "test" { depends_on = [aws_s3_bucket_policy.test] # needed to avoid "ValidationException: Failed to verify customer bucket permission." - report_name = "%[1]s" + report_name = %[1]q time_unit = "DAILY" format = "textORcsv" compression = "GZIP" - additional_schema_elements = ["RESOURCES"] + additional_schema_elements = ["RESOURCES", "SPLIT_COST_ALLOCATION_DATA"] s3_bucket = aws_s3_bucket.test.id s3_prefix = "" s3_region = aws_s3_bucket.test.region @@ -152,15 +150,10 @@ data "aws_billing_service_account" "test" {} data "aws_partition" "current" {} resource "aws_s3_bucket" "test" { - bucket = "%[2]s" + bucket = %[2]q force_destroy = true } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_s3_bucket_policy" "test" { bucket = aws_s3_bucket.test.id @@ -198,11 +191,11 @@ POLICY resource "aws_cur_report_definition" "test" { depends_on = [aws_s3_bucket_policy.test] # needed to avoid "ValidationException: Failed to verify customer bucket permission." - report_name = "%[1]s" + report_name = %[1]q time_unit = "DAILY" format = "textORcsv" compression = "GZIP" - additional_schema_elements = ["RESOURCES"] + additional_schema_elements = ["RESOURCES", "SPLIT_COST_ALLOCATION_DATA"] s3_bucket = aws_s3_bucket.test.id s3_prefix = "" s3_region = aws_s3_bucket.test.region diff --git a/internal/service/cur/report_definition_test.go b/internal/service/cur/report_definition_test.go index 36e63bab9b5..983333f0947 100644 --- a/internal/service/cur/report_definition_test.go +++ b/internal/service/cur/report_definition_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package cur_test import ( @@ -38,7 +41,7 @@ func testAccReportDefinition_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "report_name", reportName), resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "compression", "GZIP"), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, "s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, "s3_prefix", ""), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -59,7 +62,7 @@ func testAccReportDefinition_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "report_name", reportName), resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "compression", "GZIP"), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, "s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, "s3_prefix", "test"), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -97,7 +100,7 @@ func testAccReportDefinition_textOrCSV(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "format", format), resource.TestCheckResourceAttr(resourceName, "compression", compression), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, "s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, 
"s3_prefix", bucketPrefix), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -142,7 +145,7 @@ func testAccReportDefinition_parquet(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "format", format), resource.TestCheckResourceAttr(resourceName, "compression", compression), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, "s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, "s3_prefix", bucketPrefix), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -186,7 +189,7 @@ func testAccReportDefinition_athena(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "format", format), resource.TestCheckResourceAttr(resourceName, "compression", compression), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, "s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, "s3_prefix", bucketPrefix), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -231,7 +234,7 @@ func testAccReportDefinition_refresh(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "format", format), resource.TestCheckResourceAttr(resourceName, "compression", compression), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, 
"s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, "s3_prefix", bucketPrefix), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -276,7 +279,7 @@ func testAccReportDefinition_overwrite(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "time_unit", "DAILY"), resource.TestCheckResourceAttr(resourceName, "format", format), resource.TestCheckResourceAttr(resourceName, "compression", compression), - resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "1"), + resource.TestCheckResourceAttr(resourceName, "additional_schema_elements.#", "2"), resource.TestCheckResourceAttr(resourceName, "s3_bucket", bucketName), resource.TestCheckResourceAttr(resourceName, "s3_prefix", ""), resource.TestCheckResourceAttrPair(resourceName, "s3_region", s3BucketResourceName, "region"), @@ -320,7 +323,7 @@ func testAccReportDefinition_disappears(t *testing.T) { func testAccCheckReportDefinitionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CURConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CURConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cur_report_definition" { @@ -344,7 +347,7 @@ func testAccCheckReportDefinitionDestroy(ctx context.Context) resource.TestCheck func testAccCheckReportDefinitionExists(ctx context.Context, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CURConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CURConn(ctx) rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -371,11 +374,6 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - data "aws_partition" "current" {} resource "aws_s3_bucket_policy" "test" { @@ -419,7 +417,7 @@ 
resource "aws_cur_report_definition" "test" { time_unit = "DAILY" format = "textORcsv" compression = "GZIP" - additional_schema_elements = ["RESOURCES"] + additional_schema_elements = ["RESOURCES", "SPLIT_COST_ALLOCATION_DATA"] s3_bucket = aws_s3_bucket.test.id s3_prefix = %[3]q s3_region = aws_s3_bucket.test.region @@ -439,15 +437,10 @@ func testAccReportDefinitionConfig_additional(reportName string, bucketName stri return fmt.Sprintf(` resource "aws_s3_bucket" "test" { - bucket = "%[2]s" + bucket = %[2]q force_destroy = true } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - data "aws_partition" "current" {} resource "aws_s3_bucket_policy" "test" { @@ -487,17 +480,17 @@ POLICY resource "aws_cur_report_definition" "test" { depends_on = [aws_s3_bucket_policy.test] # needed to avoid "ValidationException: Failed to verify customer bucket permission." - report_name = "%[1]s" + report_name = %[1]q time_unit = "DAILY" - format = "%[4]s" - compression = "%[5]s" - additional_schema_elements = ["RESOURCES"] + format = %[4]q + compression = %[5]q + additional_schema_elements = ["RESOURCES", "SPLIT_COST_ALLOCATION_DATA"] s3_bucket = aws_s3_bucket.test.id - s3_prefix = "%[3]s" + s3_prefix = %[3]q s3_region = aws_s3_bucket.test.region %[6]s refresh_closed_reports = %[7]t - report_versioning = "%[8]s" + report_versioning = %[8]q } `, reportName, bucketName, bucketPrefix, format, compression, artifactsStr, refreshClosedReports, reportVersioning) } diff --git a/internal/service/cur/service_package_gen.go b/internal/service/cur/service_package_gen.go index dd00b41a3dd..581ffce2872 100644 --- a/internal/service/cur/service_package_gen.go +++ b/internal/service/cur/service_package_gen.go @@ -5,6 +5,10 @@ package cur import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + costandusagereportservice_sdkv1 "github.com/aws/aws-sdk-go/service/costandusagereportservice" + 
"github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -41,4 +45,13 @@ func (p *servicePackage) ServicePackageName() string { return names.CUR } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*costandusagereportservice_sdkv1.CostandUsageReportService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return costandusagereportservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/cur/sweep.go b/internal/service/cur/sweep.go index 9b6b02e3d0e..07655842ed6 100644 --- a/internal/service/cur/sweep.go +++ b/internal/service/cur/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" cur "github.com/aws/aws-sdk-go/service/costandusagereportservice" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepReportDefinitions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).CURConn() + conn := client.CURConn(ctx) input := &cur.DescribeReportDefinitionsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -55,7 +57,7 @@ func sweepReportDefinitions(region string) error { return fmt.Errorf("error listing Cost And Usage Report Definitions (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Cost And Usage Report Definitions (%s): %w", region, err) diff --git a/internal/service/dataexchange/data_set.go b/internal/service/dataexchange/data_set.go index c7f8ed5fe6d..1433cb7d915 100644 --- a/internal/service/dataexchange/data_set.go +++ b/internal/service/dataexchange/data_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dataexchange import ( @@ -59,18 +62,18 @@ func ResourceDataSet() *schema.Resource { func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataExchangeConn() + conn := meta.(*conns.AWSClient).DataExchangeConn(ctx) input := &dataexchange.CreateDataSetInput{ Name: aws.String(d.Get("name").(string)), AssetType: aws.String(d.Get("asset_type").(string)), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateDataSetWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating DataExchange DataSet: %s", err) + return sdkdiag.AppendErrorf(diags, "creating DataExchange DataSet: %s", err) } d.SetId(aws.StringValue(out.Id)) @@ -80,7 +83,7 @@ func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataExchangeConn() + conn := meta.(*conns.AWSClient).DataExchangeConn(ctx) dataSet, err := FindDataSetById(ctx, conn, d.Id()) @@ -99,14 +102,14 @@ func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("description", dataSet.Description) d.Set("arn", dataSet.Arn) - SetTagsOut(ctx, dataSet.Tags) + setTagsOut(ctx, dataSet.Tags) return diags } func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataExchangeConn() + conn := meta.(*conns.AWSClient).DataExchangeConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &dataexchange.UpdateDataSetInput{ @@ -124,7 +127,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int log.Printf("[DEBUG] Updating 
DataExchange DataSet: %s", d.Id())
 		_, err := conn.UpdateDataSetWithContext(ctx, input)
 		if err != nil {
-			return sdkdiag.AppendErrorf(diags, "Error Updating DataExchange DataSet: %s", err)
+			return sdkdiag.AppendErrorf(diags, "updating DataExchange DataSet (%s): %s", d.Id(), err)
 		}
 	}
@@ -133,7 +136,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int
 
 func resourceDataSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataExchangeConn()
+	conn := meta.(*conns.AWSClient).DataExchangeConn(ctx)
 
 	input := &dataexchange.DeleteDataSetInput{
 		DataSetId: aws.String(d.Id()),
@@ -145,7 +148,7 @@ func resourceDataSetDelete(ctx context.Context, d *schema.ResourceData, meta int
 		if tfawserr.ErrCodeEquals(err, dataexchange.ErrCodeResourceNotFoundException) {
 			return diags
 		}
-		return sdkdiag.AppendErrorf(diags, "Error deleting DataExchange DataSet: %s", err)
+		return sdkdiag.AppendErrorf(diags, "deleting DataExchange DataSet: %s", err)
 	}
 
 	return diags
diff --git a/internal/service/dataexchange/data_set_test.go b/internal/service/dataexchange/data_set_test.go
index ae9e9222803..4a715f26661 100644
--- a/internal/service/dataexchange/data_set_test.go
+++ b/internal/service/dataexchange/data_set_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package dataexchange_test
 
 import (
@@ -139,7 +142,7 @@ func testAccCheckDataSetExists(ctx context.Context, n string, v *dataexchange.Ge
 			return fmt.Errorf("No ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn(ctx)
 
 		resp, err := tfdataexchange.FindDataSetById(ctx, conn, rs.Primary.ID)
 		if err != nil {
 			return err
@@ -156,7 +159,7 @@ func testAccCheckDataSetExists(ctx context.Context, n string, v *dataexchange.Ge
 
 func testAccCheckDataSetDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dataexchange_data_set" {
diff --git a/internal/service/dataexchange/find.go b/internal/service/dataexchange/find.go
index 19dfc2d0e10..f79453bc560 100644
--- a/internal/service/dataexchange/find.go
+++ b/internal/service/dataexchange/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package dataexchange
 
 import (
diff --git a/internal/service/dataexchange/generate.go b/internal/service/dataexchange/generate.go
index c4136904627..3bc2017f691 100644
--- a/internal/service/dataexchange/generate.go
+++ b/internal/service/dataexchange/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package dataexchange
diff --git a/internal/service/dataexchange/revision.go b/internal/service/dataexchange/revision.go
index ced2b95ec7e..d0a919ce11b 100644
--- a/internal/service/dataexchange/revision.go
+++ b/internal/service/dataexchange/revision.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package dataexchange
 
 import (
@@ -60,17 +63,17 @@ func ResourceRevision() *schema.Resource {
 
 func resourceRevisionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataExchangeConn()
+	conn := meta.(*conns.AWSClient).DataExchangeConn(ctx)
 
 	input := &dataexchange.CreateRevisionInput{
 		DataSetId: aws.String(d.Get("data_set_id").(string)),
 		Comment:   aws.String(d.Get("comment").(string)),
-		Tags:      GetTagsIn(ctx),
+		Tags:      getTagsIn(ctx),
 	}
 
 	out, err := conn.CreateRevisionWithContext(ctx, input)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error creating DataExchange Revision: %s", err)
+		return sdkdiag.AppendErrorf(diags, "creating DataExchange Revision: %s", err)
 	}
 
 	d.SetId(fmt.Sprintf("%s:%s", aws.StringValue(out.DataSetId), aws.StringValue(out.Id)))
@@ -80,7 +83,7 @@ func resourceRevisionCreate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceRevisionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataExchangeConn()
+	conn := meta.(*conns.AWSClient).DataExchangeConn(ctx)
 
 	dataSetId, revisionId, err := RevisionParseResourceID(d.Id())
 	if err != nil {
@@ -104,14 +107,14 @@ func resourceRevisionRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("arn", revision.Arn)
 	d.Set("revision_id", revision.Id)
 
-	SetTagsOut(ctx, revision.Tags)
+	setTagsOut(ctx, revision.Tags)
 
 	return diags
 }
 
 func resourceRevisionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataExchangeConn()
+	conn := meta.(*conns.AWSClient).DataExchangeConn(ctx)
 
 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &dataexchange.UpdateRevisionInput{
@@ -126,7 +129,7 @@ func resourceRevisionUpdate(ctx context.Context, d *schema.ResourceData, meta in
 		log.Printf("[DEBUG] Updating DataExchange Revision: %s", d.Id())
 		_, err := conn.UpdateRevisionWithContext(ctx, input)
 		if err != nil {
-			return sdkdiag.AppendErrorf(diags, "Error Updating DataExchange Revision: %s", err)
+			return sdkdiag.AppendErrorf(diags, "updating DataExchange Revision (%s): %s", d.Id(), err)
 		}
 	}
@@ -135,7 +138,7 @@ func resourceRevisionUpdate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceRevisionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataExchangeConn()
+	conn := meta.(*conns.AWSClient).DataExchangeConn(ctx)
 
 	input := &dataexchange.DeleteRevisionInput{
 		RevisionId: aws.String(d.Get("revision_id").(string)),
@@ -148,7 +151,7 @@ func resourceRevisionDelete(ctx context.Context, d *schema.ResourceData, meta in
 		if tfawserr.ErrCodeEquals(err, dataexchange.ErrCodeResourceNotFoundException) {
 			return diags
 		}
-		return sdkdiag.AppendErrorf(diags, "Error deleting DataExchange Revision: %s", err)
+		return sdkdiag.AppendErrorf(diags, "deleting DataExchange Revision: %s", err)
 	}
 
 	return diags
diff --git a/internal/service/dataexchange/revision_test.go b/internal/service/dataexchange/revision_test.go
index 888538b2f9a..5bae36816f4 100644
--- a/internal/service/dataexchange/revision_test.go
+++ b/internal/service/dataexchange/revision_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package dataexchange_test
 
 import (
@@ -153,7 +156,7 @@ func testAccCheckRevisionExists(ctx context.Context, n string, v *dataexchange.G
 			return fmt.Errorf("No ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn(ctx)
 
 		dataSetId, revisionId, err := tfdataexchange.RevisionParseResourceID(rs.Primary.ID)
 		if err != nil {
@@ -176,7 +179,7 @@ func testAccCheckRevisionExists(ctx context.Context, n string, v *dataexchange.G
 
 func testAccCheckRevisionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataExchangeConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dataexchange_revision" {
diff --git a/internal/service/dataexchange/service_package_gen.go b/internal/service/dataexchange/service_package_gen.go
index 791fe62b309..dac6b0d8820 100644
--- a/internal/service/dataexchange/service_package_gen.go
+++ b/internal/service/dataexchange/service_package_gen.go
@@ -5,6 +5,10 @@ package dataexchange
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	dataexchange_sdkv1 "github.com/aws/aws-sdk-go/service/dataexchange"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -48,4 +52,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.DataExchange
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*dataexchange_sdkv1.DataExchange, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return dataexchange_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/dataexchange/sweep.go b/internal/service/dataexchange/sweep.go
index eb0ac8d1528..d6278b6d23d 100644
--- a/internal/service/dataexchange/sweep.go
+++ b/internal/service/dataexchange/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -11,7 +14,6 @@ import (
 	"github.com/aws/aws-sdk-go/service/dataexchange"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
 
@@ -24,13 +26,13 @@ func init() {
 
 func sweepDataSets(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DataExchangeConn()
+	conn := client.DataExchangeConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	var errs *multierror.Error
 
@@ -57,7 +59,7 @@ func sweepDataSets(region string) error {
 		errs = multierror.Append(errs, fmt.Errorf("error listing DataExchange DataSet for %s: %w", region, err))
 	}
 
-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping DataExchange DataSet for %s: %w", region, err))
 	}
 
diff --git a/internal/service/dataexchange/tags_gen.go b/internal/service/dataexchange/tags_gen.go
index 93943764761..e9bef3a1b70 100644
--- a/internal/service/dataexchange/tags_gen.go
+++ b/internal/service/dataexchange/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists dataexchange service tags.
+// listTags lists dataexchange service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn dataexchangeiface.DataExchangeAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn dataexchangeiface.DataExchangeAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &dataexchange.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn dataexchangeiface.DataExchangeAPI, ident
 // ListTags lists dataexchange service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).DataExchangeConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).DataExchangeConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from dataexchange service tags.
+// KeyValueTags creates tftags.KeyValueTags from dataexchange service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns dataexchange service tags from Context.
+// getTagsIn returns dataexchange service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets dataexchange service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets dataexchange service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates dataexchange service tags.
+// updateTags updates dataexchange service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn dataexchangeiface.DataExchangeAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn dataexchangeiface.DataExchangeAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn dataexchangeiface.DataExchangeAPI, ide
 // UpdateTags updates dataexchange service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).DataExchangeConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).DataExchangeConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/datapipeline/generate.go b/internal/service/datapipeline/generate.go
index dad5a27a8eb..fcee5d2990e 100644
--- a/internal/service/datapipeline/generate.go
+++ b/internal/service/datapipeline/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTagsInIDElem=PipelineId -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=PipelineId -UntagOp=RemoveTags -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package datapipeline
diff --git a/internal/service/datapipeline/pipeline.go b/internal/service/datapipeline/pipeline.go
index e086c070db5..e8c583fd539 100644
--- a/internal/service/datapipeline/pipeline.go
+++ b/internal/service/datapipeline/pipeline.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline
 
 import (
@@ -55,13 +58,13 @@ func ResourcePipeline() *schema.Resource {
 
 func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 
 	uniqueID := id.UniqueId()
 	input := datapipeline.CreatePipelineInput{
 		Name:     aws.String(d.Get("name").(string)),
 		UniqueId: aws.String(uniqueID),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("description"); ok {
@@ -71,7 +74,7 @@ func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta in
 
 	resp, err := conn.CreatePipelineWithContext(ctx, &input)
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error creating datapipeline: %s", err)
+		return sdkdiag.AppendErrorf(diags, "creating datapipeline: %s", err)
 	}
 
 	d.SetId(aws.StringValue(resp.PipelineId))
@@ -81,7 +84,7 @@ func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 
 	v, err := PipelineRetrieve(ctx, d.Id(), conn)
 	if tfawserr.ErrCodeEquals(err, datapipeline.ErrCodePipelineNotFoundException) || tfawserr.ErrCodeEquals(err, datapipeline.ErrCodePipelineDeletedException) || v == nil {
@@ -90,13 +93,13 @@ func resourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta inte
 		return diags
 	}
 	if err != nil {
-		return sdkdiag.AppendErrorf(diags, "Error describing DataPipeline (%s): %s", d.Id(), err)
+		return sdkdiag.AppendErrorf(diags, "describing DataPipeline (%s): %s", d.Id(), err)
 	}
 
 	d.Set("name", v.Name)
 	d.Set("description", v.Description)
 
-	SetTagsOut(ctx, v.Tags)
+	setTagsOut(ctx, v.Tags)
 
 	return diags
 }
@@ -111,7 +114,7 @@ func resourcePipelineUpdate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourcePipelineDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 
 	opts := datapipeline.DeletePipelineInput{
 		PipelineId: aws.String(d.Id()),
diff --git a/internal/service/datapipeline/pipeline_data_source.go b/internal/service/datapipeline/pipeline_data_source.go
index 47c120a2304..1d06d40c0c4 100644
--- a/internal/service/datapipeline/pipeline_data_source.go
+++ b/internal/service/datapipeline/pipeline_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline
 
 import (
@@ -33,7 +36,7 @@ func DataSourcePipeline() *schema.Resource {
 }
 
 func dataSourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 	defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
@@ -41,7 +44,7 @@ func dataSourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta in
 
 	v, err := PipelineRetrieve(ctx, pipelineId, conn)
 	if err != nil {
-		return diag.Errorf("Error describing DataPipeline Pipeline (%s): %s", pipelineId, err)
+		return diag.Errorf("describing DataPipeline Pipeline (%s): %s", pipelineId, err)
 	}
 
 	d.Set("name", v.Name)
@@ -50,7 +53,7 @@ func dataSourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta in
 	tags := KeyValueTags(ctx, v.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig)
 
 	if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil {
-		return diag.Errorf("error setting tags: %s", err)
+		return diag.Errorf("setting tags: %s", err)
 	}
 
 	d.SetId(pipelineId)
diff --git a/internal/service/datapipeline/pipeline_data_source_test.go b/internal/service/datapipeline/pipeline_data_source_test.go
index 167187d7c9b..c66dba8ef6c 100644
--- a/internal/service/datapipeline/pipeline_data_source_test.go
+++ b/internal/service/datapipeline/pipeline_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline_test
 
 import (
diff --git a/internal/service/datapipeline/pipeline_definition.go b/internal/service/datapipeline/pipeline_definition.go
index 32d5c0466b9..3d174dc2ec6 100644
--- a/internal/service/datapipeline/pipeline_definition.go
+++ b/internal/service/datapipeline/pipeline_definition.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline
 
 import (
@@ -130,7 +133,7 @@ func ResourcePipelineDefinition() *schema.Resource {
 }
 
 func resourcePipelineDefinitionPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 
 	pipelineID := d.Get("pipeline_id").(string)
 	input := &datapipeline.PutPipelineDefinitionInput{
@@ -160,7 +163,7 @@ func resourcePipelineDefinitionPut(ctx context.Context, d *schema.ResourceData,
 		if aws.BoolValue(output.Errored) {
 			errors := getValidationError(output.ValidationErrors)
 			if strings.Contains(errors.Error(), "role") {
-				return retry.RetryableError(fmt.Errorf("error validating after creation DataPipeline Pipeline Definition (%s): %w", pipelineID, errors))
+				return retry.RetryableError(fmt.Errorf("validating after creation DataPipeline Pipeline Definition (%s): %w", pipelineID, errors))
 			}
 		}
 
@@ -172,11 +175,11 @@ func resourcePipelineDefinitionPut(ctx context.Context, d *schema.ResourceData,
 	}
 
 	if err != nil {
-		return diag.Errorf("error creating DataPipeline Pipeline Definition (%s): %s", pipelineID, err)
+		return diag.Errorf("creating DataPipeline Pipeline Definition (%s): %s", pipelineID, err)
 	}
 
 	if aws.BoolValue(output.Errored) {
-		return diag.Errorf("error validating after creation DataPipeline Pipeline Definition (%s): %s", pipelineID, getValidationError(output.ValidationErrors))
+		return diag.Errorf("validating after creation DataPipeline Pipeline Definition (%s): %s", pipelineID, getValidationError(output.ValidationErrors))
 	}
 
 	// Activate pipeline if enabled
@@ -186,7 +189,7 @@ func resourcePipelineDefinitionPut(ctx context.Context, d *schema.ResourceData,
 
 	_, err = conn.ActivatePipelineWithContext(ctx, input2)
 	if err != nil {
-		return diag.Errorf("error activating DataPipeline Pipeline Definition (%s): %s", pipelineID, err)
+		return diag.Errorf("activating DataPipeline Pipeline Definition (%s): %s", pipelineID, err)
 	}
 
 	d.SetId(pipelineID)
@@ -195,7 +198,7 @@ func resourcePipelineDefinitionPut(ctx context.Context, d *schema.ResourceData,
 
 func resourcePipelineDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 	input := &datapipeline.GetPipelineDefinitionInput{
 		PipelineId: aws.String(d.Id()),
 	}
@@ -210,17 +213,17 @@ func resourcePipelineDefinitionRead(ctx context.Context, d *schema.ResourceData,
 	}
 
 	if err != nil {
-		return diag.Errorf("error reading DataPipeline Pipeline Definition (%s): %s", d.Id(), err)
+		return diag.Errorf("reading DataPipeline Pipeline Definition (%s): %s", d.Id(), err)
 	}
 
 	if err = d.Set("parameter_object", flattenPipelineDefinitionParameterObjects(resp.ParameterObjects)); err != nil {
-		return diag.Errorf("error setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", d.Id(), err)
+		return diag.Errorf("setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", d.Id(), err)
 	}
 	if err = d.Set("parameter_value", flattenPipelineDefinitionParameterValues(resp.ParameterValues)); err != nil {
-		return diag.Errorf("error setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", d.Id(), err)
+		return diag.Errorf("setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", d.Id(), err)
 	}
 	if err = d.Set("pipeline_object", flattenPipelineDefinitionObjects(resp.PipelineObjects)); err != nil {
-		return diag.Errorf("error setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", d.Id(), err)
+		return diag.Errorf("setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", d.Id(), err)
 	}
 	d.Set("pipeline_id", d.Id())
diff --git a/internal/service/datapipeline/pipeline_definition_data_source.go b/internal/service/datapipeline/pipeline_definition_data_source.go
index 9ccd5df06b9..3ea6987aa29 100644
--- a/internal/service/datapipeline/pipeline_definition_data_source.go
+++ b/internal/service/datapipeline/pipeline_definition_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline
 
 import (
@@ -109,7 +112,7 @@ func DataSourcePipelineDefinition() *schema.Resource {
 }
 
 func dataSourcePipelineDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DataPipelineConn()
+	conn := meta.(*conns.AWSClient).DataPipelineConn(ctx)
 
 	pipelineID := d.Get("pipeline_id").(string)
 	input := &datapipeline.GetPipelineDefinitionInput{
@@ -119,17 +122,17 @@ func dataSourcePipelineDefinitionRead(ctx context.Context, d *schema.ResourceDat
 
 	resp, err := conn.GetPipelineDefinitionWithContext(ctx, input)
 	if err != nil {
-		return diag.Errorf("error getting DataPipeline Definition (%s): %s", pipelineID, err)
+		return diag.Errorf("getting DataPipeline Definition (%s): %s", pipelineID, err)
 	}
 
 	if err = d.Set("parameter_object", flattenPipelineDefinitionParameterObjects(resp.ParameterObjects)); err != nil {
-		return diag.Errorf("error setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", pipelineID, err)
+		return diag.Errorf("setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", pipelineID, err)
 	}
 	if err = d.Set("parameter_value", flattenPipelineDefinitionParameterValues(resp.ParameterValues)); err != nil {
-		return diag.Errorf("error setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", pipelineID, err)
+		return diag.Errorf("setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", pipelineID, err)
 	}
 	if err = d.Set("pipeline_object", flattenPipelineDefinitionObjects(resp.PipelineObjects)); err != nil {
-		return diag.Errorf("error setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", pipelineID, err)
+		return diag.Errorf("setting `%s` for DataPipeline Pipeline Definition (%s): %s", "parameter_object", pipelineID, err)
 	}
 	d.SetId(pipelineID)
diff --git a/internal/service/datapipeline/pipeline_definition_data_source_test.go b/internal/service/datapipeline/pipeline_definition_data_source_test.go
index 9bc5c3f4a44..8e187c50eee 100644
--- a/internal/service/datapipeline/pipeline_definition_data_source_test.go
+++ b/internal/service/datapipeline/pipeline_definition_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline_test
 
 import (
diff --git a/internal/service/datapipeline/pipeline_definition_test.go b/internal/service/datapipeline/pipeline_definition_test.go
index 7408d6b3a3f..c0f3cbffb52 100644
--- a/internal/service/datapipeline/pipeline_definition_test.go
+++ b/internal/service/datapipeline/pipeline_definition_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline_test
 
 import (
@@ -112,7 +115,7 @@ func testAccCheckPipelineDefinitionExists(ctx context.Context, resourceName stri
 			return fmt.Errorf("not found: %s", resourceName)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn(ctx)
 		resp, err := conn.GetPipelineDefinitionWithContext(ctx, &datapipeline.GetPipelineDefinitionInput{PipelineId: aws.String(rs.Primary.ID)})
 		if err != nil {
 			return fmt.Errorf("problem checking for DataPipeline Pipeline Definition existence: %w", err)
@@ -130,7 +133,7 @@ func testAccCheckPipelineDefinitionExists(ctx context.Context, resourceName stri
 
 func testAccCheckPipelineDefinitionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_datapipeline_pipeline_definition" {
diff --git a/internal/service/datapipeline/pipeline_test.go b/internal/service/datapipeline/pipeline_test.go
index 38d0ba44850..965bc490556 100644
--- a/internal/service/datapipeline/pipeline_test.go
+++ b/internal/service/datapipeline/pipeline_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datapipeline_test
 
 import (
@@ -165,7 +168,7 @@ func TestAccDataPipelinePipeline_tags(t *testing.T) {
 
 func testAccCheckPipelineDisappears(ctx context.Context, conf *datapipeline.PipelineDescription) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn(ctx)
 		params := &datapipeline.DeletePipelineInput{
 			PipelineId: conf.PipelineId,
 		}
@@ -180,7 +183,7 @@ func testAccCheckPipelineDisappears(ctx context.Context, conf *datapipeline.Pipe
 
 func testAccCheckPipelineDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_datapipeline_pipeline" {
@@ -217,7 +220,7 @@ func testAccCheckPipelineExists(ctx context.Context, n string, v *datapipeline.P
 			return fmt.Errorf("No DataPipeline ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn(ctx)
 
 		pipelineDescription, err := tfdatapipeline.PipelineRetrieve(ctx, rs.Primary.ID, conn)
 
@@ -234,7 +237,7 @@ func testAccCheckPipelineExists(ctx context.Context, n string, v *datapipeline.P
 }
 
 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).DataPipelineConn(ctx)
 
 	input := &datapipeline.ListPipelinesInput{}
diff --git a/internal/service/datapipeline/service_package_gen.go b/internal/service/datapipeline/service_package_gen.go
index abf49bb5660..e82b85ef161 100644
--- a/internal/service/datapipeline/service_package_gen.go
+++ b/internal/service/datapipeline/service_package_gen.go
@@ -5,6 +5,10 @@ package datapipeline
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	datapipeline_sdkv1 "github.com/aws/aws-sdk-go/service/datapipeline"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -53,4 +57,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.DataPipeline
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*datapipeline_sdkv1.DataPipeline, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return datapipeline_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/datapipeline/tags_gen.go b/internal/service/datapipeline/tags_gen.go
index 7d06df244d9..6740f823ba1 100644
--- a/internal/service/datapipeline/tags_gen.go
+++ b/internal/service/datapipeline/tags_gen.go
@@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*datapipeline.Tag) tftags.KeyValue
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns datapipeline service tags from Context.
+// getTagsIn returns datapipeline service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*datapipeline.Tag {
+func getTagsIn(ctx context.Context) []*datapipeline.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*datapipeline.Tag {
 	return nil
 }
 
-// SetTagsOut sets datapipeline service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*datapipeline.Tag) {
+// setTagsOut sets datapipeline service tags in Context.
+func setTagsOut(ctx context.Context, tags []*datapipeline.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates datapipeline service tags.
+// updateTags updates datapipeline service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn datapipelineiface.DataPipelineAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn datapipelineiface.DataPipelineAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn datapipelineiface.DataPipelineAPI, ide
 // UpdateTags updates datapipeline service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).DataPipelineConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).DataPipelineConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/datasync/agent.go b/internal/service/datasync/agent.go
index 89c7fdf2733..a4b1081baea 100644
--- a/internal/service/datasync/agent.go
+++ b/internal/service/datasync/agent.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datasync
 
 import (
@@ -96,7 +99,7 @@ func ResourceAgent() *schema.Resource {
 
 func resourceAgentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataSyncConn()
+	conn := meta.(*conns.AWSClient).DataSyncConn(ctx)
 
 	activationKey := d.Get("activation_key").(string)
 	agentIpAddress := d.Get("ip_address").(string)
@@ -174,7 +177,7 @@ func resourceAgentCreate(ctx context.Context, d *schema.ResourceData, meta inter
 
 	input := &datasync.CreateAgentInput{
 		ActivationKey: aws.String(activationKey),
-		Tags:          GetTagsIn(ctx),
+		Tags:          getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("name"); ok {
@@ -213,7 +216,7 @@ func resourceAgentCreate(ctx context.Context, d *schema.ResourceData, meta inter
 
 func resourceAgentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataSyncConn()
+	conn := meta.(*conns.AWSClient).DataSyncConn(ctx)
 
 	output, err := FindAgentByARN(ctx, conn, d.Id())
 
@@ -246,7 +249,7 @@ func resourceAgentRead(ctx context.Context, d *schema.ResourceData, meta interfa
 
 func resourceAgentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataSyncConn()
+	conn := meta.(*conns.AWSClient).DataSyncConn(ctx)
 
 	if d.HasChange("name") {
 		input := &datasync.UpdateAgentInput{
@@ -266,7 +269,7 @@ func resourceAgentUpdate(ctx context.Context, d *schema.ResourceData, meta inter
 
 func resourceAgentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DataSyncConn()
+	conn := meta.(*conns.AWSClient).DataSyncConn(ctx)
 
 	log.Printf("[DEBUG] Deleting DataSync Agent: %s", d.Id())
 	_, err := conn.DeleteAgentWithContext(ctx, &datasync.DeleteAgentInput{
diff --git a/internal/service/datasync/agent_test.go b/internal/service/datasync/agent_test.go
index a9b00c59eef..b9ce999adc5 100644
--- a/internal/service/datasync/agent_test.go
+++ b/internal/service/datasync/agent_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datasync_test
 
 import (
@@ -201,7 +204,7 @@ func TestAccDataSyncAgent_vpcEndpointID(t *testing.T) {
 
 func testAccCheckAgentDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_datasync_agent" {
@@ -232,7 +235,7 @@ func testAccCheckAgentExists(ctx context.Context, resourceName string, agent *da
 			return fmt.Errorf("Not found: %s", resourceName)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx)
 
 		output, err := tfdatasync.FindAgentByARN(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/datasync/consts.go b/internal/service/datasync/consts.go
index f05e7a6b33b..2bd15873f8b 100644
--- a/internal/service/datasync/consts.go
+++ b/internal/service/datasync/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datasync
 
 import (
diff --git a/internal/service/datasync/find.go b/internal/service/datasync/find.go
index e9492ad5f7e..10098164ead 100644
--- a/internal/service/datasync/find.go
+++ b/internal/service/datasync/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package datasync
 
 import (
diff --git a/internal/service/datasync/generate.go b/internal/service/datasync/generate.go
index 4bcda4359f3..da13cbae9e4 100644
--- a/internal/service/datasync/generate.go
+++ b/internal/service/datasync/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -TagType=TagListEntry -UntagInTagsElem=Keys -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package datasync
diff --git a/internal/service/datasync/location_efs.go b/internal/service/datasync/location_efs.go
index 2743a3984ad..c25050ccfd6 100644
--- a/internal/service/datasync/location_efs.go
+++ b/internal/service/datasync/location_efs.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -115,13 +118,13 @@ func ResourceLocationEFS() *schema.Resource { func resourceLocationEFSCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateLocationEfsInput{ Ec2Config: expandEC2Config(d.Get("ec2_config").([]interface{})), EfsFilesystemArn: aws.String(d.Get("efs_file_system_arn").(string)), Subdirectory: aws.String(d.Get("subdirectory").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("access_point_arn"); ok { @@ -149,7 +152,7 @@ func resourceLocationEFSCreate(ctx context.Context, d *schema.ResourceData, meta func resourceLocationEFSRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationEfsInput{ LocationArn: aws.String(d.Id()), @@ -168,7 +171,7 @@ func resourceLocationEFSRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "reading DataSync Location EFS (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location EFS (%s): %s", d.Id(), err) @@ -199,7 +202,7 @@ func resourceLocationEFSUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceLocationEFSDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: 
aws.String(d.Id()), diff --git a/internal/service/datasync/location_efs_test.go b/internal/service/datasync/location_efs_test.go index 7ae40f99cb8..bc12189f3b6 100644 --- a/internal/service/datasync/location_efs_test.go +++ b/internal/service/datasync/location_efs_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -191,7 +194,7 @@ func TestAccDataSyncLocationEFS_tags(t *testing.T) { func testAccCheckLocationEFSDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_efs" { @@ -224,7 +227,7 @@ func testAccCheckLocationEFSExists(ctx context.Context, resourceName string, loc return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationEfsInput{ LocationArn: aws.String(rs.Primary.ID), } diff --git a/internal/service/datasync/location_fsx_lustre_file_system.go b/internal/service/datasync/location_fsx_lustre_file_system.go index b4056e5d7d2..36e09fb9d2a 100644 --- a/internal/service/datasync/location_fsx_lustre_file_system.go +++ b/internal/service/datasync/location_fsx_lustre_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -94,13 +97,13 @@ func ResourceLocationFSxLustreFileSystem() *schema.Resource { func resourceLocationFSxLustreFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) fsxArn := d.Get("fsx_filesystem_arn").(string) input := &datasync.CreateLocationFsxLustreInput{ FsxFilesystemArn: aws.String(fsxArn), SecurityGroupArns: flex.ExpandStringSet(d.Get("security_group_arns").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("subdirectory"); ok { @@ -120,7 +123,7 @@ func resourceLocationFSxLustreFileSystemCreate(ctx context.Context, d *schema.Re func resourceLocationFSxLustreFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) output, err := FindFSxLustreLocationByARN(ctx, conn, d.Id()) @@ -134,7 +137,7 @@ func resourceLocationFSxLustreFileSystemRead(ctx context.Context, d *schema.Reso return sdkdiag.AppendErrorf(diags, "reading DataSync Location Fsx Lustre (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location Fsx Lustre (%s): %s", d.Id(), err) @@ -165,7 +168,7 @@ func resourceLocationFSxLustreFileSystemUpdate(ctx context.Context, d *schema.Re func resourceLocationFSxLustreFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := 
&datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git a/internal/service/datasync/location_fsx_lustre_file_system_test.go b/internal/service/datasync/location_fsx_lustre_file_system_test.go index 7bdf913f4a3..70df8893f86 100644 --- a/internal/service/datasync/location_fsx_lustre_file_system_test.go +++ b/internal/service/datasync/location_fsx_lustre_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -166,7 +169,7 @@ func TestAccDataSyncLocationFSxLustreFileSystem_tags(t *testing.T) { func testAccCheckLocationFSxLustreDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_fsx_lustre_file_system" { @@ -197,7 +200,7 @@ func testAccCheckLocationFSxLustreExists(ctx context.Context, resourceName strin return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) output, err := tfdatasync.FindFSxLustreLocationByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/datasync/location_fsx_openzfs_file_system.go b/internal/service/datasync/location_fsx_openzfs_file_system.go index e16abf57a58..7bd78085de2 100644 --- a/internal/service/datasync/location_fsx_openzfs_file_system.go +++ b/internal/service/datasync/location_fsx_openzfs_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -131,14 +134,14 @@ func ResourceLocationFSxOpenZFSFileSystem() *schema.Resource { func resourceLocationFSxOpenZFSFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) fsxArn := d.Get("fsx_filesystem_arn").(string) input := &datasync.CreateLocationFsxOpenZfsInput{ FsxFilesystemArn: aws.String(fsxArn), Protocol: expandProtocol(d.Get("protocol").([]interface{})), SecurityGroupArns: flex.ExpandStringSet(d.Get("security_group_arns").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("subdirectory"); ok { @@ -158,7 +161,7 @@ func resourceLocationFSxOpenZFSFileSystemCreate(ctx context.Context, d *schema.R func resourceLocationFSxOpenZFSFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) output, err := FindFSxOpenZFSLocationByARN(ctx, conn, d.Id()) @@ -172,7 +175,7 @@ func resourceLocationFSxOpenZFSFileSystemRead(ctx context.Context, d *schema.Res return sdkdiag.AppendErrorf(diags, "reading DataSync Location Fsx OpenZfs (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location Fsx OpenZfs (%s): %s", d.Id(), err) @@ -207,7 +210,7 @@ func resourceLocationFSxOpenZFSFileSystemUpdate(ctx context.Context, d *schema.R func resourceLocationFSxOpenZFSFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + 
conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git a/internal/service/datasync/location_fsx_openzfs_file_system_test.go b/internal/service/datasync/location_fsx_openzfs_file_system_test.go index f97916d093d..ac177b078f2 100644 --- a/internal/service/datasync/location_fsx_openzfs_file_system_test.go +++ b/internal/service/datasync/location_fsx_openzfs_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -166,7 +169,7 @@ func TestAccDataSyncLocationFSxOpenZFSFileSystem_tags(t *testing.T) { func testAccCheckLocationFSxOpenZFSDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_fsx_openzfs_file_system" { @@ -197,7 +200,7 @@ func testAccCheckLocationFSxOpenZFSExists(ctx context.Context, resourceName stri return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) output, err := tfdatasync.FindFSxOpenZFSLocationByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/datasync/location_fsx_windows_file_system.go b/internal/service/datasync/location_fsx_windows_file_system.go index 7d76a1a0bf0..0508d5fedeb 100644 --- a/internal/service/datasync/location_fsx_windows_file_system.go +++ b/internal/service/datasync/location_fsx_windows_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -112,7 +115,7 @@ func ResourceLocationFSxWindowsFileSystem() *schema.Resource { func resourceLocationFSxWindowsFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) fsxArn := d.Get("fsx_filesystem_arn").(string) input := &datasync.CreateLocationFsxWindowsInput{ @@ -120,7 +123,7 @@ func resourceLocationFSxWindowsFileSystemCreate(ctx context.Context, d *schema.R User: aws.String(d.Get("user").(string)), Password: aws.String(d.Get("password").(string)), SecurityGroupArns: flex.ExpandStringSet(d.Get("security_group_arns").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("subdirectory"); ok { @@ -143,7 +146,7 @@ func resourceLocationFSxWindowsFileSystemCreate(ctx context.Context, d *schema.R func resourceLocationFSxWindowsFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationFsxWindowsInput{ LocationArn: aws.String(d.Id()), @@ -162,7 +165,7 @@ func resourceLocationFSxWindowsFileSystemRead(ctx context.Context, d *schema.Res return sdkdiag.AppendErrorf(diags, "reading DataSync Location Fsx Windows (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location Fsx Windows (%s): %s", d.Id(), err) @@ -195,7 +198,7 @@ func resourceLocationFSxWindowsFileSystemUpdate(ctx context.Context, d *schema.R func resourceLocationFSxWindowsFileSystemDelete(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git a/internal/service/datasync/location_fsx_windows_file_system_test.go b/internal/service/datasync/location_fsx_windows_file_system_test.go index e3e12db6ece..57075f66542 100644 --- a/internal/service/datasync/location_fsx_windows_file_system_test.go +++ b/internal/service/datasync/location_fsx_windows_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -178,7 +181,7 @@ func TestAccDataSyncLocationFSxWindowsFileSystem_tags(t *testing.T) { func testAccCheckLocationFSxWindowsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_fsx_windows_file_system" { @@ -211,7 +214,7 @@ func testAccCheckLocationFSxWindowsExists(ctx context.Context, resourceName stri return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationFsxWindowsInput{ LocationArn: aws.String(rs.Primary.ID), } diff --git a/internal/service/datasync/location_hdfs.go b/internal/service/datasync/location_hdfs.go index 6e870dc3c09..7c163611372 100644 --- a/internal/service/datasync/location_hdfs.go +++ b/internal/service/datasync/location_hdfs.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -155,14 +158,14 @@ func ResourceLocationHDFS() *schema.Resource { func resourceLocationHDFSCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateLocationHdfsInput{ AgentArns: flex.ExpandStringSet(d.Get("agent_arns").(*schema.Set)), NameNodes: expandHDFSNameNodes(d.Get("name_node").(*schema.Set)), AuthenticationType: aws.String(d.Get("authentication_type").(string)), Subdirectory: aws.String(d.Get("subdirectory").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("simple_user"); ok { @@ -210,7 +213,7 @@ func resourceLocationHDFSCreate(ctx context.Context, d *schema.ResourceData, met func resourceLocationHDFSRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) output, err := FindLocationHDFSByARN(ctx, conn, d.Id()) @@ -224,7 +227,7 @@ func resourceLocationHDFSRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "reading DataSync Location HDFS (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location HDFS (%s): %s", d.Id(), err) @@ -254,7 +257,7 @@ func resourceLocationHDFSRead(ctx context.Context, d *schema.ResourceData, meta func resourceLocationHDFSUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) if 
d.HasChangesExcept("tags_all", "tags") { input := &datasync.UpdateLocationHdfsInput{ @@ -320,7 +323,7 @@ func resourceLocationHDFSUpdate(ctx context.Context, d *schema.ResourceData, met func resourceLocationHDFSDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git a/internal/service/datasync/location_hdfs_test.go b/internal/service/datasync/location_hdfs_test.go index 10ca92e3c45..4eed82e4a34 100644 --- a/internal/service/datasync/location_hdfs_test.go +++ b/internal/service/datasync/location_hdfs_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -133,7 +136,7 @@ func TestAccDataSyncLocationHDFS_tags(t *testing.T) { func testAccCheckLocationHDFSDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_hdfs" { @@ -164,7 +167,7 @@ func testAccCheckLocationHDFSExists(ctx context.Context, resourceName string, lo return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) output, err := tfdatasync.FindLocationHDFSByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/datasync/location_nfs.go b/internal/service/datasync/location_nfs.go index bfa4bfed14f..ef8f34e4726 100644 --- a/internal/service/datasync/location_nfs.go +++ b/internal/service/datasync/location_nfs.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -106,13 +109,13 @@ func ResourceLocationNFS() *schema.Resource { func resourceLocationNFSCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateLocationNfsInput{ OnPremConfig: expandOnPremConfig(d.Get("on_prem_config").([]interface{})), ServerHostname: aws.String(d.Get("server_hostname").(string)), Subdirectory: aws.String(d.Get("subdirectory").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("mount_options"); ok { @@ -132,7 +135,7 @@ func resourceLocationNFSCreate(ctx context.Context, d *schema.ResourceData, meta func resourceLocationNFSRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationNfsInput{ LocationArn: aws.String(d.Id()), @@ -151,7 +154,7 @@ func resourceLocationNFSRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "reading DataSync Location NFS (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location NFS (%s): %s", d.Id(), err) @@ -175,7 +178,7 @@ func resourceLocationNFSRead(ctx context.Context, d *schema.ResourceData, meta i func resourceLocationNFSUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := 
&datasync.UpdateLocationNfsInput{ @@ -199,7 +202,7 @@ func resourceLocationNFSUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceLocationNFSDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git a/internal/service/datasync/location_nfs_test.go b/internal/service/datasync/location_nfs_test.go index 391d88766f7..8d89bbf9755 100644 --- a/internal/service/datasync/location_nfs_test.go +++ b/internal/service/datasync/location_nfs_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -231,7 +234,7 @@ func TestAccDataSyncLocationNFS_tags(t *testing.T) { func testAccCheckLocationNFSDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_nfs" { @@ -264,7 +267,7 @@ func testAccCheckLocationNFSExists(ctx context.Context, resourceName string, loc return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationNfsInput{ LocationArn: aws.String(rs.Primary.ID), } @@ -287,7 +290,7 @@ func testAccCheckLocationNFSExists(ctx context.Context, resourceName string, loc func testAccCheckLocationNFSDisappears(ctx context.Context, location *datasync.DescribeLocationNfsOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: location.LocationArn, diff --git a/internal/service/datasync/location_object_storage.go b/internal/service/datasync/location_object_storage.go index 106866fa819..5bdcc248930 100644 --- a/internal/service/datasync/location_object_storage.go +++ b/internal/service/datasync/location_object_storage.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -106,14 +109,14 @@ func ResourceLocationObjectStorage() *schema.Resource { func resourceLocationObjectStorageCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateLocationObjectStorageInput{ AgentArns: flex.ExpandStringSet(d.Get("agent_arns").(*schema.Set)), Subdirectory: aws.String(d.Get("subdirectory").(string)), BucketName: aws.String(d.Get("bucket_name").(string)), ServerHostname: aws.String(d.Get("server_hostname").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("access_key"); ok { @@ -132,7 +135,7 @@ func resourceLocationObjectStorageCreate(ctx context.Context, d *schema.Resource input.SecretKey = aws.String(v.(string)) } - if v, ok := d.GetOk("server_certficate"); ok { + if v, ok := d.GetOk("server_certificate"); ok { input.ServerCertificate = []byte(v.(string)) } @@ -149,7 +152,7 @@ func resourceLocationObjectStorageCreate(ctx context.Context, d *schema.Resource func resourceLocationObjectStorageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) output, err := FindLocationObjectStorageByARN(ctx, conn, d.Id()) @@ -163,31 +166,22 @@ func 
resourceLocationObjectStorageRead(ctx context.Context, d *schema.ResourceDa return sdkdiag.AppendErrorf(diags, "reading DataSync Location Object Storage (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + uri := aws.StringValue(output.LocationUri) + hostname, bucketName, subdirectory, err := decodeObjectStorageURI(uri) if err != nil { - return sdkdiag.AppendErrorf(diags, "parsing DataSync Location Object Storage (%s) location URI: %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } - d.Set("agent_arns", flex.FlattenStringSet(output.AgentArns)) - d.Set("arn", output.LocationArn) - d.Set("server_protocol", output.ServerProtocol) - d.Set("subdirectory", subdirectory) d.Set("access_key", output.AccessKey) - d.Set("server_port", output.ServerPort) + d.Set("agent_arns", aws.StringValueSlice(output.AgentArns)) + d.Set("arn", output.LocationArn) + d.Set("bucket_name", bucketName) d.Set("server_certificate", string(output.ServerCertificate)) - - uri := aws.StringValue(output.LocationUri) - - hostname, bucketName, err := decodeObjectStorageURI(uri) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "parsing DataSync Location Object Storage (%s) object-storage URI: %s", d.Id(), err) - } - d.Set("server_hostname", hostname) - d.Set("bucket_name", bucketName) - + d.Set("server_port", output.ServerPort) + d.Set("server_protocol", output.ServerProtocol) + d.Set("subdirectory", subdirectory) d.Set("uri", uri) return diags @@ -195,21 +189,13 @@ func resourceLocationObjectStorageRead(ctx context.Context, d *schema.ResourceDa func resourceLocationObjectStorageUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) - if d.HasChangesExcept("tags_all", "tags") { + if d.HasChangesExcept("tags", "tags_all") { input := 
&datasync.UpdateLocationObjectStorageInput{ LocationArn: aws.String(d.Id()), } - if d.HasChange("server_protocol") { - input.ServerProtocol = aws.String(d.Get("server_protocol").(string)) - } - - if d.HasChange("server_port") { - input.ServerPort = aws.Int64(int64(d.Get("server_port").(int))) - } - if d.HasChange("access_key") { input.AccessKey = aws.String(d.Get("access_key").(string)) } @@ -218,12 +204,20 @@ func resourceLocationObjectStorageUpdate(ctx context.Context, d *schema.Resource input.SecretKey = aws.String(d.Get("secret_key").(string)) } - if d.HasChange("subdirectory") { - input.Subdirectory = aws.String(d.Get("subdirectory").(string)) + if d.HasChange("server_certificate") { + input.ServerCertificate = []byte(d.Get("server_certificate").(string)) } - if d.HasChange("server_certficate") { - input.ServerCertificate = []byte(d.Get("server_certficate").(string)) + if d.HasChange("server_port") { + input.ServerPort = aws.Int64(int64(d.Get("server_port").(int))) + } + + if d.HasChange("server_protocol") { + input.ServerProtocol = aws.String(d.Get("server_protocol").(string)) + } + + if d.HasChange("subdirectory") { + input.Subdirectory = aws.String(d.Get("subdirectory").(string)) } _, err := conn.UpdateLocationObjectStorageWithContext(ctx, input) @@ -238,7 +232,7 @@ func resourceLocationObjectStorageUpdate(ctx context.Context, d *schema.Resource func resourceLocationObjectStorageDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), @@ -258,17 +252,17 @@ func resourceLocationObjectStorageDelete(ctx context.Context, d *schema.Resource return diags } -func decodeObjectStorageURI(uri string) (string, string, error) { +func decodeObjectStorageURI(uri string) (string, string, string, error) { prefix := "object-storage://" if 
!strings.HasPrefix(uri, prefix) { - return "", "", fmt.Errorf("incorrect uri format needs to start with %s", prefix) + return "", "", "", fmt.Errorf("incorrect uri format needs to start with %s", prefix) } trimmedUri := strings.TrimPrefix(uri, prefix) uriParts := strings.Split(trimmedUri, "/") if len(uri) < 2 { - return "", "", fmt.Errorf("incorrect uri format needs to start with %sSERVER-NAME/BUCKET-NAME/SUBDIRECTORY", prefix) + return "", "", "", fmt.Errorf("incorrect uri format needs to start with %sSERVER-NAME/BUCKET-NAME/SUBDIRECTORY", prefix) } - return uriParts[0], uriParts[1], nil + return uriParts[0], uriParts[1], "/" + strings.Join(uriParts[2:], "/"), nil } diff --git a/internal/service/datasync/location_object_storage_test.go b/internal/service/datasync/location_object_storage_test.go index 17177fcc9bd..eb0a4959e5f 100644 --- a/internal/service/datasync/location_object_storage_test.go +++ b/internal/service/datasync/location_object_storage_test.go @@ -1,13 +1,14 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( "context" - "errors" "fmt" "regexp" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/datasync" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" @@ -20,10 +21,10 @@ import ( func TestAccDataSyncLocationObjectStorage_basic(t *testing.T) { ctx := acctest.Context(t) - var locationObjectStorage1 datasync.DescribeLocationObjectStorageOutput - + var locationObjectStorage datasync.DescribeLocationObjectStorageOutput resourceName := "aws_datasync_location_object_storage.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + domain := acctest.RandomDomainName() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, @@ -32,15 +33,21 @@ func TestAccDataSyncLocationObjectStorage_basic(t *testing.T) { CheckDestroy: testAccCheckLocationObjectStorageDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLocationObjectStorageConfig_basic(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage1), - acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "datasync", regexp.MustCompile(`location/loc-.+`)), + Config: testAccLocationObjectStorageConfig_basic(rName, domain), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage), + resource.TestCheckResourceAttr(resourceName, "access_key", ""), resource.TestCheckResourceAttr(resourceName, "agent_arns.#", "1"), - resource.TestCheckResourceAttr(resourceName, "server_hostname", rName), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "datasync", regexp.MustCompile(`location/loc-.+`)), resource.TestCheckResourceAttr(resourceName, "bucket_name", rName), + resource.TestCheckNoResourceAttr(resourceName, 
"secret_key"), + resource.TestCheckResourceAttr(resourceName, "server_certificate", ""), + resource.TestCheckResourceAttr(resourceName, "server_hostname", domain), + resource.TestCheckResourceAttr(resourceName, "server_port", "8080"), + resource.TestCheckResourceAttr(resourceName, "server_protocol", "HTTP"), + resource.TestCheckResourceAttr(resourceName, "subdirectory", "/"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), - resource.TestMatchResourceAttr(resourceName, "uri", regexp.MustCompile(`^object-storage://.+/`)), + resource.TestCheckResourceAttr(resourceName, "uri", fmt.Sprintf("object-storage://%s/%s/", domain, rName)), ), }, { @@ -54,9 +61,10 @@ func TestAccDataSyncLocationObjectStorage_basic(t *testing.T) { func TestAccDataSyncLocationObjectStorage_disappears(t *testing.T) { ctx := acctest.Context(t) - var locationObjectStorage1 datasync.DescribeLocationObjectStorageOutput + var locationObjectStorage datasync.DescribeLocationObjectStorageOutput resourceName := "aws_datasync_location_object_storage.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + domain := acctest.RandomDomainName() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, @@ -65,10 +73,9 @@ func TestAccDataSyncLocationObjectStorage_disappears(t *testing.T) { CheckDestroy: testAccCheckLocationObjectStorageDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLocationObjectStorageConfig_basic(rName), + Config: testAccLocationObjectStorageConfig_basic(rName, domain), Check: resource.ComposeTestCheckFunc( - testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage1), - acctest.CheckResourceDisappears(ctx, acctest.Provider, tfdatasync.ResourceLocationObjectStorage(), resourceName), + testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfdatasync.ResourceLocationObjectStorage(), 
resourceName), ), ExpectNonEmptyPlan: true, @@ -79,9 +86,10 @@ func TestAccDataSyncLocationObjectStorage_disappears(t *testing.T) { func TestAccDataSyncLocationObjectStorage_tags(t *testing.T) { ctx := acctest.Context(t) - var locationObjectStorage1, locationObjectStorage2, locationObjectStorage3 datasync.DescribeLocationObjectStorageOutput + var locationObjectStorage datasync.DescribeLocationObjectStorageOutput resourceName := "aws_datasync_location_object_storage.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + domain := acctest.RandomDomainName() resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, @@ -90,9 +98,9 @@ func TestAccDataSyncLocationObjectStorage_tags(t *testing.T) { CheckDestroy: testAccCheckLocationObjectStorageDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLocationObjectStorageConfig_tags1(rName, "key1", "value1"), + Config: testAccLocationObjectStorageConfig_tags1(rName, domain, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage1), + testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -103,20 +111,18 @@ func TestAccDataSyncLocationObjectStorage_tags(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccLocationObjectStorageConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Config: testAccLocationObjectStorageConfig_tags2(rName, domain, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage2), - testAccCheckLocationObjectStorageNotRecreated(&locationObjectStorage1, &locationObjectStorage2), + testAccCheckLocationObjectStorageExists(ctx, resourceName, 
&locationObjectStorage), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, { - Config: testAccLocationObjectStorageConfig_tags1(rName, "key1", "value1"), + Config: testAccLocationObjectStorageConfig_tags1(rName, domain, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage3), - testAccCheckLocationObjectStorageNotRecreated(&locationObjectStorage2, &locationObjectStorage3), + testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -125,9 +131,46 @@ func TestAccDataSyncLocationObjectStorage_tags(t *testing.T) { }) } +func TestAccDataSyncLocationObjectStorage_serverCertificate(t *testing.T) { + ctx := acctest.Context(t) + var locationObjectStorage datasync.DescribeLocationObjectStorageOutput + resourceName := "aws_datasync_location_object_storage.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + domain := acctest.RandomDomainName() + caKey := acctest.TLSRSAPrivateKeyPEM(t, 2048) + caCertificate := acctest.TLSRSAX509SelfSignedCACertificatePEM(t, caKey) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, datasync.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckLocationObjectStorageDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccLocationObjectStorageConfig_serverCertificate(rName, domain, caCertificate), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckLocationObjectStorageExists(ctx, resourceName, &locationObjectStorage), + 
resource.TestCheckResourceAttr(resourceName, "bucket_name", rName), + resource.TestCheckResourceAttr(resourceName, "server_certificate", caCertificate), + resource.TestCheckResourceAttr(resourceName, "server_hostname", domain), + resource.TestCheckResourceAttr(resourceName, "server_port", "443"), + resource.TestCheckResourceAttr(resourceName, "server_protocol", "HTTPS"), + resource.TestCheckResourceAttr(resourceName, "subdirectory", "/test/"), + resource.TestCheckResourceAttr(resourceName, "uri", fmt.Sprintf("object-storage://%s/%s/test/", domain, rName)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckLocationObjectStorageDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_object_storage" { @@ -144,48 +187,35 @@ func testAccCheckLocationObjectStorageDestroy(ctx context.Context) resource.Test return err } - return fmt.Errorf("DataSync Task %s still exists", rs.Primary.ID) + return fmt.Errorf("DataSync Location Object Storage %s still exists", rs.Primary.ID) } return nil } } -func testAccCheckLocationObjectStorageExists(ctx context.Context, resourceName string, locationObjectStorage *datasync.DescribeLocationObjectStorageOutput) resource.TestCheckFunc { +func testAccCheckLocationObjectStorageExists(ctx context.Context, n string, v *datasync.DescribeLocationObjectStorageOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) + output, err := tfdatasync.FindLocationObjectStorageByARN(ctx, conn, rs.Primary.ID) if err != nil { return err } - if output == nil { - return fmt.Errorf("Location %q does not exist", rs.Primary.ID) - } - - *locationObjectStorage = *output + *v = *output return nil } } -func testAccCheckLocationObjectStorageNotRecreated(i, j *datasync.DescribeLocationObjectStorageOutput) resource.TestCheckFunc { - return func(s *terraform.State) error { - if !aws.TimeValue(i.CreationTime).Equal(aws.TimeValue(j.CreationTime)) { - return errors.New("DataSync Location Object Storage was recreated") - } - - return nil - } -} - -func testAccLocationObjectStorageBaseConfig(rName string) string { +func testAccLocationObjectStorageConfig_base(rName string) string { return acctest.ConfigCompose( acctest.ConfigVPCWithSubnets(rName, 1), // Reference: https://docs.aws.amazon.com/datasync/latest/userguide/agent-requirements.html @@ -218,6 +248,7 @@ resource "aws_default_route_table" "test" { } resource "aws_security_group" "test" { + name = %[1]q vpc_id = aws_vpc.test.id egress { @@ -260,41 +291,56 @@ resource "aws_datasync_agent" "test" { `, rName)) } -func testAccLocationObjectStorageConfig_basic(rName string) string { - return acctest.ConfigCompose(testAccLocationObjectStorageBaseConfig(rName), fmt.Sprintf(` +func testAccLocationObjectStorageConfig_basic(rName, domain string) string { + return acctest.ConfigCompose(testAccLocationObjectStorageConfig_base(rName), fmt.Sprintf(` resource "aws_datasync_location_object_storage" "test" { agent_arns = [aws_datasync_agent.test.arn] - server_hostname = %[1]q + server_hostname = %[2]q bucket_name = %[1]q + server_protocol = "HTTP" + server_port = 8080 } -`, rName)) +`, rName, domain)) } -func testAccLocationObjectStorageConfig_tags1(rName, key1, value1 string) string { - return acctest.ConfigCompose(testAccLocationObjectStorageBaseConfig(rName), fmt.Sprintf(` +func 
testAccLocationObjectStorageConfig_tags1(rName, domain, key1, value1 string) string { + return acctest.ConfigCompose(testAccLocationObjectStorageConfig_base(rName), fmt.Sprintf(` resource "aws_datasync_location_object_storage" "test" { agent_arns = [aws_datasync_agent.test.arn] - server_hostname = %[1]q + server_hostname = %[2]q bucket_name = %[1]q tags = { - %[2]q = %[3]q + %[3]q = %[4]q } } -`, rName, key1, value1)) +`, rName, domain, key1, value1)) } -func testAccLocationObjectStorageConfig_tags2(rName, key1, value1, key2, value2 string) string { - return acctest.ConfigCompose(testAccLocationObjectStorageBaseConfig(rName), fmt.Sprintf(` +func testAccLocationObjectStorageConfig_tags2(rName, domain, key1, value1, key2, value2 string) string { + return acctest.ConfigCompose(testAccLocationObjectStorageConfig_base(rName), fmt.Sprintf(` resource "aws_datasync_location_object_storage" "test" { agent_arns = [aws_datasync_agent.test.arn] - server_hostname = %[1]q + server_hostname = %[2]q bucket_name = %[1]q tags = { - %[2]q = %[3]q - %[4]q = %[5]q + %[3]q = %[4]q + %[5]q = %[6]q } } -`, rName, key1, value1, key2, value2)) +`, rName, domain, key1, value1, key2, value2)) +} + +func testAccLocationObjectStorageConfig_serverCertificate(rName, domain, certificate string) string { + return acctest.ConfigCompose(testAccLocationObjectStorageConfig_base(rName), fmt.Sprintf(` +resource "aws_datasync_location_object_storage" "test" { + agent_arns = [aws_datasync_agent.test.arn] + server_hostname = %[2]q + bucket_name = %[1]q + subdirectory = "/test/" + + server_certificate = "%[3]s" +} +`, rName, domain, acctest.TLSPEMEscapeNewlines(certificate))) } diff --git a/internal/service/datasync/location_s3.go b/internal/service/datasync/location_s3.go index c15991693be..1ce0e703abb 100644 --- a/internal/service/datasync/location_s3.go +++ b/internal/service/datasync/location_s3.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -105,13 +108,13 @@ func ResourceLocationS3() *schema.Resource { func resourceLocationS3Create(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateLocationS3Input{ S3BucketArn: aws.String(d.Get("s3_bucket_arn").(string)), S3Config: expandS3Config(d.Get("s3_config").([]interface{})), Subdirectory: aws.String(d.Get("subdirectory").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("agent_arns"); ok { @@ -163,7 +166,7 @@ func resourceLocationS3Create(ctx context.Context, d *schema.ResourceData, meta func resourceLocationS3Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationS3Input{ LocationArn: aws.String(d.Id()), @@ -182,7 +185,7 @@ func resourceLocationS3Read(ctx context.Context, d *schema.ResourceData, meta in return sdkdiag.AppendErrorf(diags, "reading DataSync Location S3 (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location S3 (%s): %s", d.Id(), err) @@ -210,7 +213,7 @@ func resourceLocationS3Update(ctx context.Context, d *schema.ResourceData, meta func resourceLocationS3Delete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git 
a/internal/service/datasync/location_s3_test.go b/internal/service/datasync/location_s3_test.go index 38089aadd8b..bbe4c0eb2e5 100644 --- a/internal/service/datasync/location_s3_test.go +++ b/internal/service/datasync/location_s3_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -169,7 +172,7 @@ func TestAccDataSyncLocationS3_tags(t *testing.T) { func testAccCheckLocationS3Destroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_s3" { @@ -202,7 +205,7 @@ func testAccCheckLocationS3Exists(ctx context.Context, resourceName string, loca return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationS3Input{ LocationArn: aws.String(rs.Primary.ID), } diff --git a/internal/service/datasync/location_smb.go b/internal/service/datasync/location_smb.go index 347037e4de1..bac49fa19f2 100644 --- a/internal/service/datasync/location_smb.go +++ b/internal/service/datasync/location_smb.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -112,7 +115,7 @@ func ResourceLocationSMB() *schema.Resource { func resourceLocationSMBCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateLocationSmbInput{ AgentArns: flex.ExpandStringSet(d.Get("agent_arns").(*schema.Set)), @@ -120,7 +123,7 @@ func resourceLocationSMBCreate(ctx context.Context, d *schema.ResourceData, meta Password: aws.String(d.Get("password").(string)), ServerHostname: aws.String(d.Get("server_hostname").(string)), Subdirectory: aws.String(d.Get("subdirectory").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), User: aws.String(d.Get("user").(string)), } @@ -140,7 +143,7 @@ func resourceLocationSMBCreate(ctx context.Context, d *schema.ResourceData, meta func resourceLocationSMBRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationSmbInput{ LocationArn: aws.String(d.Id()), @@ -159,7 +162,7 @@ func resourceLocationSMBRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "reading DataSync Location SMB (%s): %s", d.Id(), err) } - subdirectory, err := SubdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) + subdirectory, err := subdirectoryFromLocationURI(aws.StringValue(output.LocationUri)) if err != nil { return sdkdiag.AppendErrorf(diags, "reading DataSync Location SMB (%s) tags: %s", d.Id(), err) @@ -184,7 +187,7 @@ func resourceLocationSMBRead(ctx context.Context, d *schema.ResourceData, meta i func resourceLocationSMBUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &datasync.UpdateLocationSmbInput{ @@ -211,7 +214,7 @@ func resourceLocationSMBUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceLocationSMBDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: aws.String(d.Id()), diff --git a/internal/service/datasync/location_smb_test.go b/internal/service/datasync/location_smb_test.go index 5101164a749..db1d79a14d9 100644 --- a/internal/service/datasync/location_smb_test.go +++ b/internal/service/datasync/location_smb_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -141,7 +144,7 @@ func TestAccDataSyncLocationSMB_tags(t *testing.T) { func testAccCheckLocationSMBDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_location_smb" { @@ -174,7 +177,7 @@ func testAccCheckLocationSMBExists(ctx context.Context, resourceName string, loc return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DescribeLocationSmbInput{ LocationArn: aws.String(rs.Primary.ID), } @@ -197,7 +200,7 @@ func testAccCheckLocationSMBExists(ctx context.Context, resourceName string, loc func testAccCheckLocationSMBDisappears(ctx context.Context, location *datasync.DescribeLocationSmbOutput) resource.TestCheckFunc { 
return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteLocationInput{ LocationArn: location.LocationArn, diff --git a/internal/service/datasync/service_package_gen.go b/internal/service/datasync/service_package_gen.go index 386c6492883..543a1c3a17d 100644 --- a/internal/service/datasync/service_package_gen.go +++ b/internal/service/datasync/service_package_gen.go @@ -5,6 +5,10 @@ package datasync import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + datasync_sdkv1 "github.com/aws/aws-sdk-go/service/datasync" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -120,4 +124,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DataSync } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*datasync_sdkv1.DataSync, error) { + sess := config["session"].(*session_sdkv1.Session) + + return datasync_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/datasync/sweep.go b/internal/service/datasync/sweep.go index dd1912e048d..42d77748ee4 100644 --- a/internal/service/datasync/sweep.go +++ b/internal/service/datasync/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,7 +16,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -71,11 +73,11 @@ func init() { func sweepAgents(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) input := &datasync.ListAgentsInput{} for { @@ -126,11 +128,11 @@ func sweepAgents(region string) error { func sweepLocationEFSs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) input := &datasync.ListLocationsInput{} for { @@ -184,11 +186,11 @@ func sweepLocationEFSs(region string) error { func sweepLocationFSxWindows(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) input := &datasync.ListLocationsInput{} for { @@ -242,11 +244,11 @@ func sweepLocationFSxWindows(region string) error { func sweepLocationFSxLustres(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -290,7 +292,7 @@ func sweepLocationFSxLustres(region string) error { input.NextToken = output.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping DataSync Location FSX Lustre File Systems: %w", err)) } @@ -299,11 +301,11 @@ func sweepLocationFSxLustres(region string) error { func sweepLocationNFSs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -348,7 +350,7 @@ func sweepLocationNFSs(region string) error { input.NextToken = output.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping DataSync Location Nfs: %w", err)) } @@ -357,11 +359,11 @@ func sweepLocationNFSs(region string) error { func sweepLocationS3s(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) input := &datasync.ListLocationsInput{} for { @@ 
-415,11 +417,11 @@ func sweepLocationS3s(region string) error { func sweepLocationSMBs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -464,7 +466,7 @@ func sweepLocationSMBs(region string) error { input.NextToken = output.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping DataSync Location SMB: %w", err)) } @@ -473,11 +475,11 @@ func sweepLocationSMBs(region string) error { func sweepLocationHDFSs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -522,7 +524,7 @@ func sweepLocationHDFSs(region string) error { input.NextToken = output.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping DataSync Location HDFS: %w", err)) } @@ -531,11 +533,11 @@ func sweepLocationHDFSs(region string) error { func sweepLocationObjectStorages(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err 
!= nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) input := &datasync.ListLocationsInput{} for { @@ -589,11 +591,11 @@ func sweepLocationObjectStorages(region string) error { func sweepTasks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DataSyncConn() + conn := client.DataSyncConn(ctx) input := &datasync.ListTasksInput{} for { diff --git a/internal/service/datasync/tags_gen.go b/internal/service/datasync/tags_gen.go index 5bcd5e8eb23..8ad08e0749a 100644 --- a/internal/service/datasync/tags_gen.go +++ b/internal/service/datasync/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists datasync service tags. +// listTags lists datasync service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn datasynciface.DataSyncAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn datasynciface.DataSyncAPI, identifier string) (tftags.KeyValueTags, error) { input := &datasync.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn datasynciface.DataSyncAPI, identifier st // ListTags lists datasync service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DataSyncConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DataSyncConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*datasync.TagListEntry) tftags.Key return tftags.New(ctx, m) } -// GetTagsIn returns datasync service tags from Context. +// getTagsIn returns datasync service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*datasync.TagListEntry { +func getTagsIn(ctx context.Context) []*datasync.TagListEntry { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*datasync.TagListEntry { return nil } -// SetTagsOut sets datasync service tags in Context. -func SetTagsOut(ctx context.Context, tags []*datasync.TagListEntry) { +// setTagsOut sets datasync service tags in Context. +func setTagsOut(ctx context.Context, tags []*datasync.TagListEntry) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates datasync service tags. +// updateTags updates datasync service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn datasynciface.DataSyncAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn datasynciface.DataSyncAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn datasynciface.DataSyncAPI, identifier // UpdateTags updates datasync service tags. 
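The renamed `updateTags` above diffs the old and new tag maps before calling the tagging APIs. A standalone sketch of that bookkeeping (generic maps standing in for the provider's `tftags.KeyValueTags`, which this sketch does not reproduce): keys absent from the new map are untagged, and keys that are new or changed are (re)tagged.

```go
package main

import "fmt"

// tagChanges computes which keys must be removed and which must be created or
// updated when moving from oldTags to newTags — the core diff an updateTags
// helper performs before issuing UntagResource/TagResource calls.
func tagChanges(oldTags, newTags map[string]string) (remove []string, upsert map[string]string) {
	upsert = map[string]string{}
	for k := range oldTags {
		if _, ok := newTags[k]; !ok {
			remove = append(remove, k) // key dropped entirely
		}
	}
	for k, v := range newTags {
		if old, ok := oldTags[k]; !ok || old != v {
			upsert[k] = v // new key, or value changed
		}
	}
	return remove, upsert
}

func main() {
	remove, upsert := tagChanges(
		map[string]string{"key1": "value1"},
		map[string]string{"key1": "value1updated", "key2": "value2"},
	)
	fmt.Println(len(remove), upsert["key1"], upsert["key2"])
}
```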
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DataSyncConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DataSyncConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/datasync/task.go b/internal/service/datasync/task.go index 6cd26de96b4..d302e932470 100644 --- a/internal/service/datasync/task.go +++ b/internal/service/datasync/task.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -130,6 +133,12 @@ func ResourceTask() *schema.Resource { Default: datasync.MtimePreserve, ValidateFunc: validation.StringInSlice(datasync.Mtime_Values(), false), }, + "object_tags": { + Type: schema.TypeString, + Optional: true, + Default: datasync.ObjectTagsPreserve, + ValidateFunc: validation.StringInSlice(datasync.ObjectTags_Values(), false), + }, "overwrite_mode": { Type: schema.TypeString, Optional: true, @@ -221,13 +230,13 @@ func ResourceTask() *schema.Resource { func resourceTaskCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.CreateTaskInput{ DestinationLocationArn: aws.String(d.Get("destination_location_arn").(string)), Options: expandOptions(d.Get("options").([]interface{})), SourceLocationArn: aws.String(d.Get("source_location_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("cloudwatch_log_group_arn"); ok { @@ -267,7 +276,7 @@ func resourceTaskCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceTaskRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) output, err := FindTaskByARN(ctx, conn, d.Id()) @@ -304,7 +313,7 @@ func resourceTaskRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceTaskUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &datasync.UpdateTaskInput{ @@ -345,7 +354,7 @@ func resourceTaskUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceTaskDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DataSyncConn() + conn := meta.(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.DeleteTaskInput{ TaskArn: aws.String(d.Id()), @@ -413,6 +422,7 @@ func flattenOptions(options *datasync.Options) []interface{} { "gid": aws.StringValue(options.Gid), "log_level": aws.StringValue(options.LogLevel), "mtime": aws.StringValue(options.Mtime), + "object_tags": aws.StringValue(options.ObjectTags), "overwrite_mode": aws.StringValue(options.OverwriteMode), "posix_permissions": aws.StringValue(options.PosixPermissions), "preserve_deleted_files": aws.StringValue(options.PreserveDeletedFiles), @@ -439,6 +449,7 @@ func expandOptions(l []interface{}) *datasync.Options { Gid: aws.String(m["gid"].(string)), LogLevel: aws.String(m["log_level"].(string)), Mtime: aws.String(m["mtime"].(string)), + ObjectTags: aws.String(m["object_tags"].(string)), OverwriteMode: aws.String(m["overwrite_mode"].(string)), PreserveDeletedFiles: aws.String(m["preserve_deleted_files"].(string)), PreserveDevices: aws.String(m["preserve_devices"].(string)), diff --git a/internal/service/datasync/task_test.go b/internal/service/datasync/task_test.go index 7bc00939cd3..ce1486c8d20 100644 --- 
a/internal/service/datasync/task_test.go +++ b/internal/service/datasync/task_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync_test import ( @@ -48,6 +51,7 @@ func TestAccDataSyncTask_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "options.0.gid", "INT_VALUE"), resource.TestCheckResourceAttr(resourceName, "options.0.log_level", "OFF"), resource.TestCheckResourceAttr(resourceName, "options.0.mtime", "PRESERVE"), + resource.TestCheckResourceAttr(resourceName, "options.0.object_tags", "PRESERVE"), resource.TestCheckResourceAttr(resourceName, "options.0.overwrite_mode", "ALWAYS"), resource.TestCheckResourceAttr(resourceName, "options.0.posix_permissions", "PRESERVE"), resource.TestCheckResourceAttr(resourceName, "options.0.preserve_deleted_files", "PRESERVE"), @@ -398,6 +402,44 @@ func TestAccDataSyncTask_DefaultSyncOptions_logLevel(t *testing.T) { }) } +func TestAccDataSyncTask_DefaultSyncOptions_objectTags(t *testing.T) { + ctx := acctest.Context(t) + var task1, task2 datasync.DescribeTaskOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_datasync_task.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, datasync.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTaskDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccTaskConfig_defaultSyncOptionsObjectTags(rName, "NONE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckTaskExists(ctx, resourceName, &task1), + resource.TestCheckResourceAttr(resourceName, "options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "options.0.object_tags", "NONE"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: 
testAccTaskConfig_defaultSyncOptionsObjectTags(rName, "PRESERVE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckTaskExists(ctx, resourceName, &task2), + testAccCheckTaskNotRecreated(&task1, &task2), + resource.TestCheckResourceAttr(resourceName, "options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "options.0.object_tags", "PRESERVE"), + ), + }, + }, + }) +} + func TestAccDataSyncTask_DefaultSyncOptions_overwriteMode(t *testing.T) { ctx := acctest.Context(t) var task1, task2 datasync.DescribeTaskOutput @@ -806,7 +848,7 @@ func TestAccDataSyncTask_tags(t *testing.T) { func testAccCheckTaskDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_datasync_task" { @@ -837,7 +879,7 @@ func testAccCheckTaskExists(ctx context.Context, resourceName string, task *data return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) output, err := tfdatasync.FindTaskByARN(ctx, conn, rs.Primary.ID) @@ -867,7 +909,7 @@ func testAccCheckTaskNotRecreated(i, j *datasync.DescribeTaskOutput) resource.Te } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DataSyncConn(ctx) input := &datasync.ListTasksInput{ MaxResults: aws.Int64(1), @@ -1234,6 +1276,23 @@ resource "aws_datasync_task" "test" { `, rName, logLevel)) } +func testAccTaskConfig_defaultSyncOptionsObjectTags(rName, objectTags string) string { + return acctest.ConfigCompose( + testAccTaskConfig_baseLocationS3(rName), + testAccTaskConfig_baseLocationNFS(rName), + fmt.Sprintf(` +resource "aws_datasync_task" "test" { + 
destination_location_arn = aws_datasync_location_s3.test.arn + name = %[1]q + source_location_arn = aws_datasync_location_nfs.test.arn + + options { + object_tags = %[2]q + } +} +`, rName, objectTags)) +} + func testAccTaskConfig_defaultSyncOptionsOverwriteMode(rName, overwriteMode string) string { return acctest.ConfigCompose( testAccTaskConfig_baseLocationS3(rName), diff --git a/internal/service/datasync/uri.go b/internal/service/datasync/uri.go index 182a02f1975..ae45afa8983 100644 --- a/internal/service/datasync/uri.go +++ b/internal/service/datasync/uri.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasync import ( @@ -8,14 +11,14 @@ import ( ) var ( - locationURIPattern = regexp.MustCompile(`^(efs|hdfs|nfs|s3|smb|fsx[a-z0-9]+|object-storage)://(.+)$`) + locationURIPattern = regexp.MustCompile(`^(efs|hdfs|nfs|s3|smb|fsx[a-z0-9]+)://(.+)$`) locationURIGlobalIDAndSubdirPattern = regexp.MustCompile(`^([a-zA-Z0-9.\-]+)(?::\d{0,5})?(/.*)$`) s3OutpostsAccessPointARNResourcePattern = regexp.MustCompile(`^outpost/.*/accesspoint/.*?(/.*)$`) ) -// SubdirectoryFromLocationURI extracts the subdirectory from a location URI. +// subdirectoryFromLocationURI extracts the subdirectory from a location URI. // https://docs.aws.amazon.com/datasync/latest/userguide/API_LocationListEntry.html#DataSync-Type-LocationListEntry-LocationUri -func SubdirectoryFromLocationURI(uri string) (string, error) { +func subdirectoryFromLocationURI(uri string) (string, error) { submatches := locationURIPattern.FindStringSubmatch(uri) if len(submatches) != 3 { diff --git a/internal/service/datasync/uri_test.go b/internal/service/datasync/uri_test.go index 3ec60b20e80..ebcb648424e 100644 --- a/internal/service/datasync/uri_test.go +++ b/internal/service/datasync/uri_test.go @@ -1,10 +1,9 @@ -package datasync_test +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 -import ( - "testing" +package datasync - tfdatasync "github.com/hashicorp/terraform-provider-aws/internal/service/datasync" -) +import "testing" func TestSubdirectoryFromLocationURI(t *testing.T) { t.Parallel() @@ -156,9 +155,9 @@ func TestSubdirectoryFromLocationURI(t *testing.T) { ExpectedSubdirectory: "/my-folder-1/my-folder-2", }, { - TestName: "Object storage one level", - InputURI: "object-storage://tf-acc-test-5815577519131245007/tf-acc-test-5815577519131245008/", - ExpectedSubdirectory: "/tf-acc-test-5815577519131245008/", + TestName: "Object storage two levels", + InputURI: "object-storage://192.168.1.1/tf-acc-test-5815577519131245007/tf-acc-test-5815577519131245008/", + ExpectedError: true, }, } @@ -167,7 +166,7 @@ func TestSubdirectoryFromLocationURI(t *testing.T) { t.Run(testCase.TestName, func(t *testing.T) { t.Parallel() - got, err := tfdatasync.SubdirectoryFromLocationURI(testCase.InputURI) + got, err := subdirectoryFromLocationURI(testCase.InputURI) if err == nil && testCase.ExpectedError { t.Fatalf("expected error") @@ -183,3 +182,77 @@ func TestSubdirectoryFromLocationURI(t *testing.T) { }) } } + +func TestDecodeObjectStorageURI(t *testing.T) { + t.Parallel() + + testCases := []struct { + TestName string + InputURI string + ExpectedError bool + ExpectedHostname string + ExpectedBucketName string + ExpectedSubdirectory string + }{ + { + TestName: "empty URI", + InputURI: "", + ExpectedError: true, + }, + { + TestName: "S3 bucket URI top level", + InputURI: "s3://bucket/", + ExpectedError: true, + }, + { + TestName: "Object storage top level", + InputURI: "object-storage://tawn19fp.test/tf-acc-test-6405856757419817388/", + ExpectedHostname: "tawn19fp.test", + ExpectedBucketName: "tf-acc-test-6405856757419817388", + ExpectedSubdirectory: "/", + }, + { + TestName: "Object storage one level", + InputURI: "object-storage://tawn19fp.test/tf-acc-test-6405856757419817388/test", + ExpectedHostname: "tawn19fp.test", + 
ExpectedBucketName: "tf-acc-test-6405856757419817388", + ExpectedSubdirectory: "/test", + }, + { + TestName: "Object storage two levels", + InputURI: "object-storage://192.168.1.1/tf-acc-test-5815577519131245007/tf-acc-test-5815577519131245008/tf-acc-test-5815577519131245009/", + ExpectedHostname: "192.168.1.1", + ExpectedBucketName: "tf-acc-test-5815577519131245007", + ExpectedSubdirectory: "/tf-acc-test-5815577519131245008/tf-acc-test-5815577519131245009/", + }, + } + + for _, testCase := range testCases { + testCase := testCase + t.Run(testCase.TestName, func(t *testing.T) { + t.Parallel() + + gotHostname, gotBucketName, gotSubdirectory, err := decodeObjectStorageURI(testCase.InputURI) + + if err == nil && testCase.ExpectedError { + t.Fatalf("expected error") + } + + if err != nil && !testCase.ExpectedError { + t.Fatalf("unexpected error: %s", err) + } + + if gotHostname != testCase.ExpectedHostname { + t.Errorf("hostname %s, expected %s", gotHostname, testCase.ExpectedHostname) + } + + if gotBucketName != testCase.ExpectedBucketName { + t.Errorf("bucketName %s, expected %s", gotBucketName, testCase.ExpectedBucketName) + } + + if gotSubdirectory != testCase.ExpectedSubdirectory { + t.Errorf("subdirectory %s, expected %s", gotSubdirectory, testCase.ExpectedSubdirectory) + } + }) + } +} diff --git a/internal/service/dax/cluster.go b/internal/service/dax/cluster.go index 03772f6d545..1af7af66499 100644 --- a/internal/service/dax/cluster.go +++ b/internal/service/dax/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dax import ( @@ -201,7 +204,7 @@ func ResourceCluster() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) clusterName := d.Get("cluster_name").(string) iamRoleArn := d.Get("iam_role_arn").(string) @@ -217,7 +220,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int ReplicationFactor: aws.Int64(numNodes), SecurityGroupIds: securityIds, SubnetGroupName: aws.String(subnetGroupName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } // optionals can be defaulted by AWS @@ -270,7 +273,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int resp, err = conn.CreateClusterWithContext(ctx, input) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating DAX cluster: %s", err) + return sdkdiag.AppendErrorf(diags, "creating DAX cluster: %s", err) } // Assign the cluster id as the resource ID @@ -292,7 +295,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int log.Printf("[DEBUG] Waiting for state to become available: %v", d.Id()) _, sterr := stateConf.WaitForStateContext(ctx) if sterr != nil { - return sdkdiag.AppendErrorf(diags, "Error waiting for DAX cluster (%s) to be created: %s", d.Id(), sterr) + return sdkdiag.AppendErrorf(diags, "waiting for DAX cluster (%s) to be created: %s", d.Id(), sterr) } return append(diags, resourceClusterRead(ctx, d, meta)...) 
@@ -300,7 +303,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) req := &dax.DescribeClustersInput{ ClusterNames: []*string{aws.String(d.Id())}, @@ -366,7 +369,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) req := &dax.UpdateClusterInput{ ClusterName: aws.String(d.Id()), @@ -410,7 +413,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int log.Printf("[DEBUG] Modifying DAX Cluster (%s), opts:\n%s", d.Id(), req) _, err := conn.UpdateClusterWithContext(ctx, req) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error updating DAX cluster (%s), error: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating DAX cluster (%s), error: %s", d.Id(), err) } awaitUpdate = true } @@ -426,7 +429,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int NewReplicationFactor: aws.Int64(int64(nraw.(int))), }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error increasing nodes in DAX cluster %s, error: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "increasing nodes in DAX cluster %s, error: %s", d.Id(), err) } awaitUpdate = true } @@ -437,7 +440,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int NewReplicationFactor: aws.Int64(int64(nraw.(int))), }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error increasing nodes in DAX cluster %s, error: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "increasing nodes in DAX cluster %s, 
error: %s", d.Id(), err) } awaitUpdate = true } @@ -457,7 +460,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int _, sterr := stateConf.WaitForStateContext(ctx) if sterr != nil { - return sdkdiag.AppendErrorf(diags, "Error waiting for DAX (%s) to update: %s", d.Id(), sterr) + return sdkdiag.AppendErrorf(diags, "waiting for DAX (%s) to update: %s", d.Id(), sterr) } } @@ -497,7 +500,7 @@ func (b byNodeId) Less(i, j int) bool { func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) req := &dax.DeleteClusterInput{ ClusterName: aws.String(d.Id()), @@ -516,7 +519,7 @@ func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta int _, err = conn.DeleteClusterWithContext(ctx, req) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error deleting DAX cluster: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting DAX cluster: %s", err) } log.Printf("[DEBUG] Waiting for deletion: %v", d.Id()) @@ -531,7 +534,7 @@ func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta int _, sterr := stateConf.WaitForStateContext(ctx) if sterr != nil { - return sdkdiag.AppendErrorf(diags, "Error waiting for DAX (%s) to delete: %s", d.Id(), sterr) + return sdkdiag.AppendErrorf(diags, "waiting for DAX (%s) to delete: %s", d.Id(), sterr) } return diags diff --git a/internal/service/dax/cluster_test.go b/internal/service/dax/cluster_test.go index 0d8b63e5c2a..7ad605d21e1 100644 --- a/internal/service/dax/cluster_test.go +++ b/internal/service/dax/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dax_test import ( @@ -261,7 +264,7 @@ func TestAccDAXCluster_EndpointEncryption_enabled(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dax_cluster" { @@ -296,7 +299,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *dax.Cluster) re return fmt.Errorf("No DAX cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) resp, err := conn.DescribeClustersWithContext(ctx, &dax.DescribeClustersInput{ ClusterNames: []*string{aws.String(rs.Primary.ID)}, }) @@ -315,7 +318,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *dax.Cluster) re } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) input := &dax.DescribeClustersInput{} diff --git a/internal/service/dax/consts.go b/internal/service/dax/consts.go index 5eb0f6620b4..9771d39acc1 100644 --- a/internal/service/dax/consts.go +++ b/internal/service/dax/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dax import ( diff --git a/internal/service/dax/flex.go b/internal/service/dax/flex.go index e3597e36c9d..224ab855365 100644 --- a/internal/service/dax/flex.go +++ b/internal/service/dax/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dax import ( diff --git a/internal/service/dax/generate.go b/internal/service/dax/generate.go index 04f9cbb39b5..ea98d23b767 100644 --- a/internal/service/dax/generate.go +++ b/internal/service/dax/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsInIDElem=ResourceName -ServiceTagsSlice -TagInIDElem=ResourceName -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package dax diff --git a/internal/service/dax/parameter_group.go b/internal/service/dax/parameter_group.go index c9b4e9e31d0..9e8ac6418e3 100644 --- a/internal/service/dax/parameter_group.go +++ b/internal/service/dax/parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dax import ( @@ -59,7 +62,7 @@ func ResourceParameterGroup() *schema.Resource { func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) input := &dax.CreateParameterGroupInput{ ParameterGroupName: aws.String(d.Get("name").(string)), @@ -83,7 +86,7 @@ func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, m func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) resp, err := conn.DescribeParameterGroupsWithContext(ctx, &dax.DescribeParameterGroupsInput{ ParameterGroupNames: []*string{aws.String(d.Id())}, @@ -130,7 +133,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met func 
resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) input := &dax.UpdateParameterGroupInput{ ParameterGroupName: aws.String(d.Id()), @@ -152,7 +155,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) input := &dax.DeleteParameterGroupInput{ ParameterGroupName: aws.String(d.Id()), diff --git a/internal/service/dax/parameter_group_test.go b/internal/service/dax/parameter_group_test.go index 69439f9970d..7824d29eea1 100644 --- a/internal/service/dax/parameter_group_test.go +++ b/internal/service/dax/parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dax_test import ( @@ -51,7 +54,7 @@ func TestAccDAXParameterGroup_basic(t *testing.T) { func testAccCheckParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dax_parameter_group" { @@ -79,7 +82,7 @@ func testAccCheckParameterGroupExists(ctx context.Context, name string) resource return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) _, err := conn.DescribeParameterGroupsWithContext(ctx, &dax.DescribeParameterGroupsInput{ ParameterGroupNames: []*string{aws.String(rs.Primary.ID)}, diff --git a/internal/service/dax/service_package_gen.go b/internal/service/dax/service_package_gen.go index 115e80e5e40..b6d0267c154 100644 --- a/internal/service/dax/service_package_gen.go +++ b/internal/service/dax/service_package_gen.go @@ -5,6 +5,10 @@ package dax import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + dax_sdkv1 "github.com/aws/aws-sdk-go/service/dax" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +52,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DAX } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*dax_sdkv1.DAX, error) { + sess := config["session"].(*session_sdkv1.Session) + + return dax_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/dax/subnet_group.go b/internal/service/dax/subnet_group.go index 0860070ee7e..945b3b75edd 100644 --- a/internal/service/dax/subnet_group.go +++ b/internal/service/dax/subnet_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dax import ( @@ -52,7 +55,7 @@ func ResourceSubnetGroup() *schema.Resource { func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) input := &dax.CreateSubnetGroupInput{ SubnetGroupName: aws.String(d.Get("name").(string)), @@ -73,7 +76,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) resp, err := conn.DescribeSubnetGroupsWithContext(ctx, &dax.DescribeSubnetGroupsInput{ SubnetGroupNames: []*string{aws.String(d.Id())}, @@ -101,7 +104,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) input := &dax.UpdateSubnetGroupInput{ SubnetGroupName: aws.String(d.Id()), @@ -125,7 +128,7 @@ func 
resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DAXConn() + conn := meta.(*conns.AWSClient).DAXConn(ctx) input := &dax.DeleteSubnetGroupInput{ SubnetGroupName: aws.String(d.Id()), diff --git a/internal/service/dax/subnet_group_test.go b/internal/service/dax/subnet_group_test.go index 626e0b5eed5..3380698a3fe 100644 --- a/internal/service/dax/subnet_group_test.go +++ b/internal/service/dax/subnet_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dax_test import ( @@ -54,7 +57,7 @@ func TestAccDAXSubnetGroup_basic(t *testing.T) { func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dax_subnet_group" { @@ -82,7 +85,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, name string) resource.Te return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DAXConn(ctx) _, err := conn.DescribeSubnetGroupsWithContext(ctx, &dax.DescribeSubnetGroupsInput{ SubnetGroupNames: []*string{aws.String(rs.Primary.ID)}, diff --git a/internal/service/dax/sweep.go b/internal/service/dax/sweep.go index 69837841942..3ed529f6d0a 100644 --- a/internal/service/dax/sweep.go +++ b/internal/service/dax/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/service/dax" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %s", err) } - conn := client.(*conns.AWSClient).DAXConn() + conn := client.DAXConn(ctx) resp, err := conn.DescribeClustersWithContext(ctx, &dax.DescribeClustersInput{}) if err != nil { diff --git a/internal/service/dax/tags_gen.go b/internal/service/dax/tags_gen.go index 758fed2f97a..df9907f562f 100644 --- a/internal/service/dax/tags_gen.go +++ b/internal/service/dax/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists dax service tags. +// listTags lists dax service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn daxiface.DAXAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn daxiface.DAXAPI, identifier string) (tftags.KeyValueTags, error) { input := &dax.ListTagsInput{ ResourceName: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn daxiface.DAXAPI, identifier string) (tft // ListTags lists dax service tags and sets them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DAXConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DAXConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*dax.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns dax service tags from Context. +// getTagsIn returns dax service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*dax.Tag { +func getTagsIn(ctx context.Context) []*dax.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*dax.Tag { return nil } -// SetTagsOut sets dax service tags in Context. -func SetTagsOut(ctx context.Context, tags []*dax.Tag) { +// setTagsOut sets dax service tags in Context. +func setTagsOut(ctx context.Context, tags []*dax.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates dax service tags. +// updateTags updates dax service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn daxiface.DAXAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn daxiface.DAXAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn daxiface.DAXAPI, identifier string, ol // UpdateTags updates dax service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DAXConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DAXConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/deploy/app.go b/internal/service/deploy/app.go index dbf29440ec0..e3ba6f64bf6 100644 --- a/internal/service/deploy/app.go +++ b/internal/service/deploy/app.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package deploy import ( @@ -37,7 +40,7 @@ func ResourceApp() *schema.Resource { } applicationName := d.Id() - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) input := &codedeploy.GetApplicationInput{ ApplicationName: aws.String(applicationName), @@ -51,7 +54,7 @@ func ResourceApp() *schema.Resource { } if output == nil || output.Application == nil { - return []*schema.ResourceData{}, fmt.Errorf("error reading CodeDeploy Application (%s): empty response", applicationName) + return []*schema.ResourceData{}, fmt.Errorf("reading CodeDeploy Application (%s): empty response", applicationName) } d.SetId(fmt.Sprintf("%s:%s", aws.StringValue(output.Application.ApplicationId), applicationName)) @@ -101,14 +104,14 @@ func ResourceApp() *schema.Resource { func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) application := d.Get("name").(string) computePlatform := d.Get("compute_platform").(string) resp, err := conn.CreateApplicationWithContext(ctx, &codedeploy.CreateApplicationInput{ ApplicationName: aws.String(application), ComputePlatform: aws.String(computePlatform), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), }) if err != nil { return sdkdiag.AppendErrorf(diags, "creating CodeDeploy 
Application (%s): %s", application, err) @@ -126,7 +129,7 @@ func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) application := resourceAppParseID(d.Id()) name := d.Get("name").(string) @@ -175,7 +178,7 @@ func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) if d.HasChange("name") { o, n := d.GetChange("name") @@ -195,7 +198,7 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{ func resourceAppDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) _, err := conn.DeleteApplicationWithContext(ctx, &codedeploy.DeleteApplicationInput{ ApplicationName: aws.String(d.Get("name").(string)), diff --git a/internal/service/deploy/app_test.go b/internal/service/deploy/app_test.go index 39f4f3c3e4e..bb3da7828f3 100644 --- a/internal/service/deploy/app_test.go +++ b/internal/service/deploy/app_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package deploy_test import ( @@ -253,7 +256,7 @@ func TestAccDeployApp_disappears(t *testing.T) { func testAccCheckAppDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codedeploy_app" { @@ -286,7 +289,7 @@ func testAccCheckAppExists(ctx context.Context, name string, application *codede return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn(ctx) input := &codedeploy.GetApplicationInput{ ApplicationName: aws.String(rs.Primary.Attributes["name"]), diff --git a/internal/service/deploy/deployment_config.go b/internal/service/deploy/deployment_config.go index 7439df22aff..882374f44ba 100644 --- a/internal/service/deploy/deployment_config.go +++ b/internal/service/deploy/deployment_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package deploy import ( @@ -144,7 +147,7 @@ func ResourceDeploymentConfig() *schema.Resource { func resourceDeploymentConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) input := &codedeploy.CreateDeploymentConfigInput{ DeploymentConfigName: aws.String(d.Get("deployment_config_name").(string)), @@ -165,7 +168,7 @@ func resourceDeploymentConfigCreate(ctx context.Context, d *schema.ResourceData, func resourceDeploymentConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) input := &codedeploy.GetDeploymentConfigInput{ DeploymentConfigName: aws.String(d.Id()), @@ -204,7 +207,7 @@ func resourceDeploymentConfigRead(ctx context.Context, d *schema.ResourceData, m func resourceDeploymentConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) input := &codedeploy.DeleteDeploymentConfigInput{ DeploymentConfigName: aws.String(d.Id()), diff --git a/internal/service/deploy/deployment_config_test.go b/internal/service/deploy/deployment_config_test.go index 95a5cdca72e..9dde224c66f 100644 --- a/internal/service/deploy/deployment_config_test.go +++ b/internal/service/deploy/deployment_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package deploy_test import ( @@ -236,7 +239,7 @@ func TestAccDeployDeploymentConfig_trafficLinear(t *testing.T) { func testAccCheckDeploymentConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codedeploy_deployment_config" { @@ -271,7 +274,7 @@ func testAccCheckDeploymentConfigExists(ctx context.Context, name string, config return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn(ctx) resp, err := conn.GetDeploymentConfigWithContext(ctx, &codedeploy.GetDeploymentConfigInput{ DeploymentConfigName: aws.String(rs.Primary.ID), diff --git a/internal/service/deploy/deployment_group.go b/internal/service/deploy/deployment_group.go index dc463fbefef..1e66697c404 100644 --- a/internal/service/deploy/deployment_group.go +++ b/internal/service/deploy/deployment_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package deploy import ( @@ -45,7 +48,7 @@ func ResourceDeploymentGroup() *schema.Resource { applicationName := idParts[0] deploymentGroupName := idParts[1] - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) input := &codedeploy.GetDeploymentGroupInput{ ApplicationName: aws.String(applicationName), @@ -60,7 +63,7 @@ func ResourceDeploymentGroup() *schema.Resource { } if output == nil || output.DeploymentGroupInfo == nil { - return []*schema.ResourceData{}, fmt.Errorf("error reading CodeDeploy Application (%s): empty response", d.Id()) + return []*schema.ResourceData{}, fmt.Errorf("reading CodeDeploy Application (%s): empty response", d.Id()) } d.SetId(aws.StringValue(output.DeploymentGroupInfo.DeploymentGroupId)) @@ -492,7 +495,7 @@ func ResourceDeploymentGroup() *schema.Resource { func resourceDeploymentGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) applicationName := d.Get("app_name").(string) deploymentGroupName := d.Get("deployment_group_name").(string) @@ -501,7 +504,7 @@ func resourceDeploymentGroupCreate(ctx context.Context, d *schema.ResourceData, ApplicationName: aws.String(applicationName), DeploymentGroupName: aws.String(deploymentGroupName), ServiceRoleArn: aws.String(serviceRoleArn), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if attr, ok := d.GetOk("deployment_style"); ok { @@ -580,7 +583,7 @@ func resourceDeploymentGroupCreate(ctx context.Context, d *schema.ResourceData, resp, err = conn.CreateDeploymentGroupWithContext(ctx, &input) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating CodeDeploy deployment group: %s", err) + return sdkdiag.AppendErrorf(diags, "creating CodeDeploy deployment group: %s", err) } d.SetId(aws.StringValue(resp.DeploymentGroupId)) @@ -590,7 +593,7 @@ 
func resourceDeploymentGroupCreate(ctx context.Context, d *schema.ResourceData, func resourceDeploymentGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) deploymentGroupName := d.Get("deployment_group_name").(string) resp, err := conn.GetDeploymentGroupWithContext(ctx, &codedeploy.GetDeploymentGroupInput{ @@ -680,7 +683,7 @@ func resourceDeploymentGroupRead(ctx context.Context, d *schema.ResourceData, me func resourceDeploymentGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) if d.HasChangesExcept("tags", "tags_all") { // required fields @@ -799,7 +802,7 @@ func resourceDeploymentGroupUpdate(ctx context.Context, d *schema.ResourceData, func resourceDeploymentGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeployConn() + conn := meta.(*conns.AWSClient).DeployConn(ctx) log.Printf("[DEBUG] Deleting CodeDeploy DeploymentGroup %s", d.Id()) _, err := conn.DeleteDeploymentGroupWithContext(ctx, &codedeploy.DeleteDeploymentGroupInput{ diff --git a/internal/service/deploy/deployment_group_test.go b/internal/service/deploy/deployment_group_test.go index 1934c99cc50..a5827273825 100644 --- a/internal/service/deploy/deployment_group_test.go +++ b/internal/service/deploy/deployment_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package deploy_test import ( @@ -2337,7 +2340,7 @@ func testAccCheckDeploymentGroupTriggerTargetARN(group *codedeploy.DeploymentGro func testAccCheckDeploymentGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_codedeploy_deployment_group" { @@ -2373,7 +2376,7 @@ func testAccCheckDeploymentGroupExists(ctx context.Context, name string, group * return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeployConn(ctx) resp, err := conn.GetDeploymentGroupWithContext(ctx, &codedeploy.GetDeploymentGroupInput{ ApplicationName: aws.String(rs.Primary.Attributes["app_name"]), diff --git a/internal/service/deploy/generate.go b/internal/service/deploy/generate.go index 0aa9b2a98bc..b31e992c86d 100644 --- a/internal/service/deploy/generate.go +++ b/internal/service/deploy/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package deploy diff --git a/internal/service/deploy/service_package_gen.go b/internal/service/deploy/service_package_gen.go index 616d8e73f6b..d6b5f03cecb 100644 --- a/internal/service/deploy/service_package_gen.go +++ b/internal/service/deploy/service_package_gen.go @@ -5,6 +5,10 @@ package deploy import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + codedeploy_sdkv1 "github.com/aws/aws-sdk-go/service/codedeploy" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -52,4 +56,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Deploy } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*codedeploy_sdkv1.CodeDeploy, error) { + sess := config["session"].(*session_sdkv1.Session) + + return codedeploy_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/deploy/sweep.go b/internal/service/deploy/sweep.go index 5a01f74b72e..b6182d9e223 100644 --- a/internal/service/deploy/sweep.go +++ b/internal/service/deploy/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/codedeploy" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,13 +26,13 @@ func init() { func sweepApps(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DeployConn() + conn := client.DeployConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -58,7 +60,7 @@ func sweepApps(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing CodeDeploy Applications for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping CodeDeploy Applications for %s: %w", region, err)) } diff --git a/internal/service/deploy/tags_gen.go b/internal/service/deploy/tags_gen.go index 1c38a8203c5..35c987045e7 100644 --- a/internal/service/deploy/tags_gen.go +++ b/internal/service/deploy/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists deploy service tags. +// listTags lists deploy service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn codedeployiface.CodeDeployAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn codedeployiface.CodeDeployAPI, identifier string) (tftags.KeyValueTags, error) { input := &codedeploy.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn codedeployiface.CodeDeployAPI, identifie // ListTags lists deploy service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DeployConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DeployConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*codedeploy.Tag) tftags.KeyValueTa return tftags.New(ctx, m) } -// GetTagsIn returns deploy service tags from Context. +// getTagsIn returns deploy service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*codedeploy.Tag { +func getTagsIn(ctx context.Context) []*codedeploy.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*codedeploy.Tag { return nil } -// SetTagsOut sets deploy service tags in Context. -func SetTagsOut(ctx context.Context, tags []*codedeploy.Tag) { +// setTagsOut sets deploy service tags in Context. +func setTagsOut(ctx context.Context, tags []*codedeploy.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates deploy service tags. +// updateTags updates deploy service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn codedeployiface.CodeDeployAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn codedeployiface.CodeDeployAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn codedeployiface.CodeDeployAPI, identif // UpdateTags updates deploy service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DeployConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DeployConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/deploy/validate.go b/internal/service/deploy/validate.go index e403f0af15d..3337bf2f83e 100644 --- a/internal/service/deploy/validate.go +++ b/internal/service/deploy/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package deploy import ( diff --git a/internal/service/detective/detective_test.go b/internal/service/detective/detective_test.go index e59ff06b2a4..1ddfbca9fb8 100644 --- a/internal/service/detective/detective_test.go +++ b/internal/service/detective/detective_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package detective_test import ( diff --git a/internal/service/detective/find.go b/internal/service/detective/find.go index 7324ee02f2c..54c6ce5a4ff 100644 --- a/internal/service/detective/find.go +++ b/internal/service/detective/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package detective import ( diff --git a/internal/service/detective/generate.go b/internal/service/detective/generate.go index 9284016ebb4..06ef30ccf6e 100644 --- a/internal/service/detective/generate.go +++ b/internal/service/detective/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -ListTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package detective diff --git a/internal/service/detective/graph.go b/internal/service/detective/graph.go index bc0dddb2c33..bc3505089aa 100644 --- a/internal/service/detective/graph.go +++ b/internal/service/detective/graph.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package detective import ( @@ -45,10 +48,10 @@ func ResourceGraph() *schema.Resource { } func resourceGraphCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) input := &detective.CreateGraphInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } var output *detective.CreateGraphOutput @@ -71,7 +74,7 @@ func resourceGraphCreate(ctx context.Context, d *schema.ResourceData, meta inter } if err != nil { - return diag.Errorf("error creating detective Graph: %s", err) + return diag.Errorf("creating detective Graph: %s", err) } d.SetId(aws.StringValue(output.GraphArn)) @@ -80,7 +83,7 @@ func resourceGraphCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceGraphRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) resp, err := FindGraphByARN(ctx, conn, d.Id()) @@ 
-89,7 +92,7 @@ func resourceGraphRead(ctx context.Context, d *schema.ResourceData, meta interfa return nil } if err != nil { - return diag.Errorf("error reading detective Graph (%s): %s", d.Id(), err) + return diag.Errorf("reading detective Graph (%s): %s", d.Id(), err) } d.Set("created_time", aws.TimeValue(resp.CreatedTime).Format(time.RFC3339)) @@ -104,7 +107,7 @@ func resourceGraphUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceGraphDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) input := &detective.DeleteGraphInput{ GraphArn: aws.String(d.Id()), @@ -115,7 +118,7 @@ func resourceGraphDelete(ctx context.Context, d *schema.ResourceData, meta inter if tfawserr.ErrCodeEquals(err, detective.ErrCodeResourceNotFoundException) { return nil } - return diag.Errorf("error deleting detective Graph (%s): %s", d.Id(), err) + return diag.Errorf("deleting detective Graph (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/detective/graph_test.go b/internal/service/detective/graph_test.go index 3325f3a8b4b..283b2022e89 100644 --- a/internal/service/detective/graph_test.go +++ b/internal/service/detective/graph_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package detective_test import ( @@ -124,7 +127,7 @@ func testAccGraph_disappears(t *testing.T) { func testAccCheckGraphDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_detective_graph" { @@ -157,7 +160,7 @@ func testAccCheckGraphExists(ctx context.Context, resourceName string, graph *de return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn(ctx) resp, err := tfdetective.FindGraphByARN(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/detective/invitation_accepter.go b/internal/service/detective/invitation_accepter.go index 83f7cf42053..befccf8ecc5 100644 --- a/internal/service/detective/invitation_accepter.go +++ b/internal/service/detective/invitation_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package detective import ( @@ -34,7 +37,7 @@ func ResourceInvitationAccepter() *schema.Resource { } func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) graphArn := d.Get("graph_arn").(string) @@ -45,7 +48,7 @@ func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceDat _, err := conn.AcceptInvitationWithContext(ctx, acceptInvitationInput) if err != nil { - return diag.Errorf("error accepting Detective InvitationAccepter (%s): %s", d.Id(), err) + return diag.Errorf("accepting Detective InvitationAccepter (%s): %s", d.Id(), err) } d.SetId(graphArn) @@ -54,7 +57,7 @@ func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceDat } func resourceInvitationAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) graphArn, err := FindInvitationByGraphARN(ctx, conn, d.Id()) @@ -65,7 +68,7 @@ func resourceInvitationAccepterRead(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.Errorf("error listing Detective InvitationAccepter (%s): %s", d.Id(), err) + return diag.Errorf("listing Detective InvitationAccepter (%s): %s", d.Id(), err) } d.Set("graph_arn", graphArn) @@ -73,7 +76,7 @@ func resourceInvitationAccepterRead(ctx context.Context, d *schema.ResourceData, } func resourceInvitationAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) input := &detective.DisassociateMembershipInput{ GraphArn: aws.String(d.Id()), @@ -84,7 +87,7 @@ func resourceInvitationAccepterDelete(ctx context.Context, d *schema.ResourceDat if 
tfawserr.ErrCodeEquals(err, detective.ErrCodeResourceNotFoundException) { return nil } - return diag.Errorf("error disassociating Detective InvitationAccepter (%s): %s", d.Id(), err) + return diag.Errorf("disassociating Detective InvitationAccepter (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/detective/invitation_accepter_test.go b/internal/service/detective/invitation_accepter_test.go index af57541355a..dddd940516c 100644 --- a/internal/service/detective/invitation_accepter_test.go +++ b/internal/service/detective/invitation_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package detective_test import ( @@ -56,7 +59,7 @@ func testAccCheckInvitationAccepterExists(ctx context.Context, resourceName stri return fmt.Errorf("resource (%s) has empty ID", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn(ctx) result, err := tfdetective.FindInvitationByGraphARN(ctx, conn, rs.Primary.ID) @@ -74,7 +77,7 @@ func testAccCheckInvitationAccepterExists(ctx context.Context, resourceName stri func testAccCheckInvitationAccepterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_detective_invitation_accepter" { diff --git a/internal/service/detective/member.go b/internal/service/detective/member.go index 31cede63a9a..593eb4061a3 100644 --- a/internal/service/detective/member.go +++ b/internal/service/detective/member.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package detective import ( @@ -84,7 +87,7 @@ func ResourceMember() *schema.Resource { } func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) accountId := d.Get("account_id").(string) graphArn := d.Get("graph_arn").(string) @@ -127,11 +130,11 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte } if err != nil { - return diag.Errorf("error creating Detective Member: %s", err) + return diag.Errorf("creating Detective Member: %s", err) } if _, err = MemberStatusUpdated(ctx, conn, graphArn, accountId, detective.MemberStatusInvited); err != nil { - return diag.Errorf("error waiting for Detective Member (%s) to be invited: %s", d.Id(), err) + return diag.Errorf("waiting for Detective Member (%s) to be invited: %s", d.Id(), err) } d.SetId(EncodeMemberID(graphArn, accountId)) @@ -140,11 +143,11 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) graphArn, accountId, err := DecodeMemberID(d.Id()) if err != nil { - return diag.Errorf("error decoding ID Detective Member (%s): %s", d.Id(), err) + return diag.Errorf("decoding ID Detective Member (%s): %s", d.Id(), err) } resp, err := FindMemberByGraphARNAndAccountID(ctx, conn, graphArn, accountId) @@ -156,7 +159,7 @@ func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interf d.SetId("") return nil } - return diag.Errorf("error reading Detective Member (%s): %s", d.Id(), err) + return diag.Errorf("reading Detective Member (%s): %s", d.Id(), err) } if !d.IsNewResource() && resp == nil { @@ -178,11 +181,11 @@ func resourceMemberRead(ctx 
context.Context, d *schema.ResourceData, meta interf return nil } func resourceMemberDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DetectiveConn() + conn := meta.(*conns.AWSClient).DetectiveConn(ctx) graphArn, accountId, err := DecodeMemberID(d.Id()) if err != nil { - return diag.Errorf("error decoding ID Detective Member (%s): %s", d.Id(), err) + return diag.Errorf("decoding ID Detective Member (%s): %s", d.Id(), err) } input := &detective.DeleteMembersInput{ @@ -195,7 +198,7 @@ func resourceMemberDelete(ctx context.Context, d *schema.ResourceData, meta inte if tfawserr.ErrCodeEquals(err, detective.ErrCodeResourceNotFoundException) { return nil } - return diag.Errorf("error deleting Detective Member (%s): %s", d.Id(), err) + return diag.Errorf("deleting Detective Member (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/detective/member_test.go b/internal/service/detective/member_test.go index d8a8015bd29..99baaf6a57a 100644 --- a/internal/service/detective/member_test.go +++ b/internal/service/detective/member_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package detective_test import ( @@ -135,7 +138,7 @@ func testAccCheckMemberExists(ctx context.Context, resourceName string, detectiv return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn(ctx) graphArn, accountId, err := tfdetective.DecodeMemberID(rs.Primary.ID) if err != nil { @@ -159,7 +162,7 @@ func testAccCheckMemberExists(ctx context.Context, resourceName string, detectiv func testAccCheckMemberDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DetectiveConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_detective_member" { diff --git a/internal/service/detective/service_package_gen.go b/internal/service/detective/service_package_gen.go index f92832db39e..bfe36c55e9a 100644 --- a/internal/service/detective/service_package_gen.go +++ b/internal/service/detective/service_package_gen.go @@ -5,6 +5,10 @@ package detective import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + detective_sdkv1 "github.com/aws/aws-sdk-go/service/detective" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +52,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Detective } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*detective_sdkv1.Detective, error) { + sess := config["session"].(*session_sdkv1.Session) + + return detective_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/detective/status.go b/internal/service/detective/status.go index 15ec791e75e..8318551652a 100644 --- a/internal/service/detective/status.go +++ b/internal/service/detective/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package detective import ( diff --git a/internal/service/detective/tags_gen.go b/internal/service/detective/tags_gen.go index b319e19484d..cbbb123adb3 100644 --- a/internal/service/detective/tags_gen.go +++ b/internal/service/detective/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists detective service tags. +// listTags lists detective service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn detectiveiface.DetectiveAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn detectiveiface.DetectiveAPI, identifier string) (tftags.KeyValueTags, error) { input := &detective.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn detectiveiface.DetectiveAPI, identifier // ListTags lists detective service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DetectiveConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DetectiveConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from detective service tags. +// KeyValueTags creates tftags.KeyValueTags from detective service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns detective service tags from Context. +// getTagsIn returns detective service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets detective service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets detective service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates detective service tags. +// updateTags updates detective service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn detectiveiface.DetectiveAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn detectiveiface.DetectiveAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn detectiveiface.DetectiveAPI, identifie // UpdateTags updates detective service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DetectiveConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DetectiveConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/detective/wait.go b/internal/service/detective/wait.go index 391ef23b1ba..9b4876e1253 100644 --- a/internal/service/detective/wait.go +++ b/internal/service/detective/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package detective import ( diff --git a/internal/service/devicefarm/device_pool.go b/internal/service/devicefarm/device_pool.go index 3e297a3d54a..08d43759535 100644 --- a/internal/service/devicefarm/device_pool.go +++ b/internal/service/devicefarm/device_pool.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( @@ -93,7 +96,7 @@ func ResourceDevicePool() *schema.Resource { func resourceDevicePoolCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) name := d.Get("name").(string) input := &devicefarm.CreateDevicePoolInput{ @@ -118,7 +121,7 @@ func resourceDevicePoolCreate(ctx context.Context, d *schema.ResourceData, meta d.SetId(aws.StringValue(output.DevicePool.Arn)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting DeviceFarm Device Pool (%s) tags: %s", d.Id(), err) } @@ -127,7 +130,7 @@ func resourceDevicePoolCreate(ctx context.Context, d *schema.ResourceData, meta func resourceDevicePoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) devicePool, err := FindDevicePoolByARN(ctx, conn, d.Id()) @@ -163,7 +166,7 @@ func resourceDevicePoolRead(ctx context.Context, d *schema.ResourceData, meta in func resourceDevicePoolUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &devicefarm.UpdateDevicePoolInput{ @@ -202,7 +205,7 @@ func resourceDevicePoolUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceDevicePoolDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := 
meta.(*conns.AWSClient).DeviceFarmConn(ctx) log.Printf("[DEBUG] Deleting DeviceFarm Device Pool: %s", d.Id()) _, err := conn.DeleteDevicePoolWithContext(ctx, &devicefarm.DeleteDevicePoolInput{ @@ -273,7 +276,7 @@ func flattenDevicePoolRules(list []*devicefarm.Rule) []map[string]interface{} { func decodeProjectARN(id, typ string, meta interface{}) (string, error) { poolArn, err := arn.Parse(id) if err != nil { - return "", fmt.Errorf("Error parsing '%s': %w", id, err) + return "", fmt.Errorf("parsing '%s': %w", id, err) } poolArnResouce := poolArn.Resource diff --git a/internal/service/devicefarm/device_pool_test.go b/internal/service/devicefarm/device_pool_test.go index 0d75d01c2ec..c6b9710b131 100644 --- a/internal/service/devicefarm/device_pool_test.go +++ b/internal/service/devicefarm/device_pool_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package devicefarm_test import ( @@ -189,7 +192,7 @@ func testAccCheckDevicePoolExists(ctx context.Context, n string, v *devicefarm.D return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) resp, err := tfdevicefarm.FindDevicePoolByARN(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -206,7 +209,7 @@ func testAccCheckDevicePoolExists(ctx context.Context, n string, v *devicefarm.D func testAccCheckDevicePoolDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_devicefarm_device_pool" { diff --git a/internal/service/devicefarm/find.go b/internal/service/devicefarm/find.go index b71fab7c017..7e774e95880 100644 --- a/internal/service/devicefarm/find.go +++ b/internal/service/devicefarm/find.go @@ -1,3 +1,6 
@@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( diff --git a/internal/service/devicefarm/generate.go b/internal/service/devicefarm/generate.go index 6831112e7f6..3d7ddc520d5 100644 --- a/internal/service/devicefarm/generate.go +++ b/internal/service/devicefarm/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package devicefarm diff --git a/internal/service/devicefarm/instance_profile.go b/internal/service/devicefarm/instance_profile.go index 137270cfb81..fb2cee90f88 100644 --- a/internal/service/devicefarm/instance_profile.go +++ b/internal/service/devicefarm/instance_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( @@ -70,7 +73,7 @@ func ResourceInstanceProfile() *schema.Resource { func resourceInstanceProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) name := d.Get("name").(string) input := &devicefarm.CreateInstanceProfileInput{ @@ -101,7 +104,7 @@ func resourceInstanceProfileCreate(ctx context.Context, d *schema.ResourceData, d.SetId(aws.StringValue(output.InstanceProfile.Arn)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting DeviceFarm Instance Profile (%s) tags: %s", d.Id(), err) } @@ -110,7 +113,7 @@ func resourceInstanceProfileCreate(ctx context.Context, d *schema.ResourceData, func resourceInstanceProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) instaceProf, err := FindInstanceProfileByARN(ctx, conn, d.Id()) @@ -137,7 +140,7 @@ func resourceInstanceProfileRead(ctx context.Context, d *schema.ResourceData, me func resourceInstanceProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &devicefarm.UpdateInstanceProfileInput{ @@ -176,7 +179,7 @@ func resourceInstanceProfileUpdate(ctx context.Context, d *schema.ResourceData, func resourceInstanceProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) log.Printf("[DEBUG] Deleting DeviceFarm Instance Profile: %s", d.Id()) _, err := conn.DeleteInstanceProfileWithContext(ctx, &devicefarm.DeleteInstanceProfileInput{ diff --git a/internal/service/devicefarm/instance_profile_test.go b/internal/service/devicefarm/instance_profile_test.go index 2463b47e39b..b1843836079 100644 --- a/internal/service/devicefarm/instance_profile_test.go +++ b/internal/service/devicefarm/instance_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package devicefarm_test import ( @@ -159,7 +162,7 @@ func testAccCheckInstanceProfileExists(ctx context.Context, n string, v *devicef return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) resp, err := tfdevicefarm.FindInstanceProfileByARN(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -176,7 +179,7 @@ func testAccCheckInstanceProfileExists(ctx context.Context, n string, v *devicef func testAccCheckInstanceProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_devicefarm_instance_profile" { diff --git a/internal/service/devicefarm/network_profile.go b/internal/service/devicefarm/network_profile.go index e84b74b0893..5f113348c65 100644 --- a/internal/service/devicefarm/network_profile.go +++ b/internal/service/devicefarm/network_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( @@ -108,7 +111,7 @@ func ResourceNetworkProfile() *schema.Resource { func resourceNetworkProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) name := d.Get("name").(string) input := &devicefarm.CreateNetworkProfileInput{ @@ -164,7 +167,7 @@ func resourceNetworkProfileCreate(ctx context.Context, d *schema.ResourceData, m d.SetId(aws.StringValue(output.NetworkProfile.Arn)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting DeviceFarm Network Profile (%s) tags: %s", d.Id(), err) } @@ -173,7 +176,7 @@ func resourceNetworkProfileCreate(ctx context.Context, d *schema.ResourceData, m func resourceNetworkProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) project, err := FindNetworkProfileByARN(ctx, conn, d.Id()) @@ -213,7 +216,7 @@ func resourceNetworkProfileRead(ctx context.Context, d *schema.ResourceData, met func resourceNetworkProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &devicefarm.UpdateNetworkProfileInput{ @@ -276,7 +279,7 @@ func resourceNetworkProfileUpdate(ctx context.Context, d *schema.ResourceData, m func resourceNetworkProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) log.Printf("[DEBUG] Deleting DeviceFarm Network Profile: %s", d.Id()) _, err := conn.DeleteNetworkProfileWithContext(ctx, &devicefarm.DeleteNetworkProfileInput{ diff --git a/internal/service/devicefarm/network_profile_test.go b/internal/service/devicefarm/network_profile_test.go index 63825397f5c..a4980680720 100644 --- a/internal/service/devicefarm/network_profile_test.go +++ b/internal/service/devicefarm/network_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package devicefarm_test import ( @@ -196,7 +199,7 @@ func testAccCheckNetworkProfileExists(ctx context.Context, n string, v *devicefa return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) resp, err := tfdevicefarm.FindNetworkProfileByARN(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -213,7 +216,7 @@ func testAccCheckNetworkProfileExists(ctx context.Context, n string, v *devicefa func testAccCheckNetworkProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_devicefarm_network_profile" { diff --git a/internal/service/devicefarm/project.go b/internal/service/devicefarm/project.go index 4596676fb99..158731c403e 100644 --- a/internal/service/devicefarm/project.go +++ b/internal/service/devicefarm/project.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( @@ -55,7 +58,7 @@ func ResourceProject() *schema.Resource { func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) name := d.Get("name").(string) input := &devicefarm.CreateProjectInput{ @@ -74,7 +77,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int d.SetId(aws.StringValue(output.Project.Arn)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting DeviceFarm Project (%s) tags: %s", d.Id(), err) } @@ -83,7 +86,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) project, err := FindProjectByARN(ctx, conn, d.Id()) @@ -107,7 +110,7 @@ func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &devicefarm.UpdateProjectInput{ @@ -134,7 +137,7 @@ func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceProjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) 
log.Printf("[DEBUG] Deleting DeviceFarm Project: %s", d.Id()) _, err := conn.DeleteProjectWithContext(ctx, &devicefarm.DeleteProjectInput{ diff --git a/internal/service/devicefarm/project_test.go b/internal/service/devicefarm/project_test.go index 1375cbe5760..3810798872c 100644 --- a/internal/service/devicefarm/project_test.go +++ b/internal/service/devicefarm/project_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package devicefarm_test import ( @@ -199,7 +202,7 @@ func testAccCheckProjectExists(ctx context.Context, n string, v *devicefarm.Proj return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) resp, err := tfdevicefarm.FindProjectByARN(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -216,7 +219,7 @@ func testAccCheckProjectExists(ctx context.Context, n string, v *devicefarm.Proj func testAccCheckProjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_devicefarm_project" { diff --git a/internal/service/devicefarm/service_package_gen.go b/internal/service/devicefarm/service_package_gen.go index e370bef7ca4..073777ea9ac 100644 --- a/internal/service/devicefarm/service_package_gen.go +++ b/internal/service/devicefarm/service_package_gen.go @@ -5,6 +5,10 @@ package devicefarm import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + devicefarm_sdkv1 "github.com/aws/aws-sdk-go/service/devicefarm" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ 
-76,4 +80,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DeviceFarm } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*devicefarm_sdkv1.DeviceFarm, error) { + sess := config["session"].(*session_sdkv1.Session) + + return devicefarm_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/devicefarm/sweep.go b/internal/service/devicefarm/sweep.go index 45f675e7ebd..eca15ef2dc3 100644 --- a/internal/service/devicefarm/sweep.go +++ b/internal/service/devicefarm/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/devicefarm" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,13 +31,13 @@ func init() { func sweepProjects(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DeviceFarmConn() + conn := client.DeviceFarmConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -70,7 +72,7 @@ func sweepProjects(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing DeviceFarm Project for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, 
sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DeviceFarm Project for %s: %w", region, err)) } @@ -84,13 +86,13 @@ func sweepProjects(region string) error { func sweepTestGridProjects(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DeviceFarmConn() + conn := client.DeviceFarmConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -125,7 +127,7 @@ func sweepTestGridProjects(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing DeviceFarm Test Grid Project for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DeviceFarm Test Grid Project for %s: %w", region, err)) } diff --git a/internal/service/devicefarm/tags_gen.go b/internal/service/devicefarm/tags_gen.go index 7011f0b035e..5878f73a397 100644 --- a/internal/service/devicefarm/tags_gen.go +++ b/internal/service/devicefarm/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists devicefarm service tags. +// listTags lists devicefarm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identifier string) (tftags.KeyValueTags, error) { input := &devicefarm.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identifie // ListTags lists devicefarm service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DeviceFarmConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DeviceFarmConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*devicefarm.Tag) tftags.KeyValueTa return tftags.New(ctx, m) } -// GetTagsIn returns devicefarm service tags from Context. +// getTagsIn returns devicefarm service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*devicefarm.Tag { +func getTagsIn(ctx context.Context) []*devicefarm.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*devicefarm.Tag { return nil } -// SetTagsOut sets devicefarm service tags in Context. -func SetTagsOut(ctx context.Context, tags []*devicefarm.Tag) { +// setTagsOut sets devicefarm service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*devicefarm.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identif return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates devicefarm service tags. +// updateTags updates devicefarm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn devicefarmiface.DeviceFarmAPI, identif // UpdateTags updates devicefarm service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DeviceFarmConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DeviceFarmConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/devicefarm/test_grid_project.go b/internal/service/devicefarm/test_grid_project.go index ba21cb05de1..df6f57a53e3 100644 --- a/internal/service/devicefarm/test_grid_project.go +++ b/internal/service/devicefarm/test_grid_project.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( @@ -80,7 +83,7 @@ func ResourceTestGridProject() *schema.Resource { func resourceTestGridProjectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) name := d.Get("name").(string) input := &devicefarm.CreateTestGridProjectInput{ @@ -103,7 +106,7 @@ func resourceTestGridProjectCreate(ctx context.Context, d *schema.ResourceData, d.SetId(aws.StringValue(output.TestGridProject.Arn)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting DeviceFarm Test Grid Project (%s) tags: %s", d.Id(), err) } @@ -112,7 +115,7 @@ func resourceTestGridProjectCreate(ctx context.Context, d *schema.ResourceData, func resourceTestGridProjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) project, err := FindTestGridProjectByARN(ctx, conn, d.Id()) @@ -139,7 +142,7 @@ func resourceTestGridProjectRead(ctx context.Context, d *schema.ResourceData, me func resourceTestGridProjectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &devicefarm.UpdateTestGridProjectInput{ @@ -166,7 +169,7 @@ func resourceTestGridProjectUpdate(ctx context.Context, d *schema.ResourceData, func resourceTestGridProjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) log.Printf("[DEBUG] Deleting DeviceFarm Test Grid Project: %s", d.Id()) _, err := conn.DeleteTestGridProjectWithContext(ctx, &devicefarm.DeleteTestGridProjectInput{ diff --git a/internal/service/devicefarm/test_grid_project_test.go b/internal/service/devicefarm/test_grid_project_test.go index 0e526cd0820..f2d40065acb 100644 --- a/internal/service/devicefarm/test_grid_project_test.go +++ b/internal/service/devicefarm/test_grid_project_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package devicefarm_test import ( @@ -193,7 +196,7 @@ func testAccCheckProjectTestGridProjectExists(ctx context.Context, n string, v * return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) resp, err := tfdevicefarm.FindTestGridProjectByARN(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -210,7 +213,7 @@ func testAccCheckProjectTestGridProjectExists(ctx context.Context, n string, v * func testAccCheckProjectTestGridProjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_devicefarm_test_grid_project" { diff --git a/internal/service/devicefarm/upload.go b/internal/service/devicefarm/upload.go index 6a66bb5f071..53b67c7673a 100644 --- a/internal/service/devicefarm/upload.go +++ b/internal/service/devicefarm/upload.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm import ( @@ -72,7 +75,7 @@ func ResourceUpload() *schema.Resource { func resourceUploadCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) input := &devicefarm.CreateUploadInput{ Name: aws.String(d.Get("name").(string)), @@ -86,7 +89,7 @@ func resourceUploadCreate(ctx context.Context, d *schema.ResourceData, meta inte out, err := conn.CreateUploadWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating DeviceFarm Upload: %s", err) + return sdkdiag.AppendErrorf(diags, "creating DeviceFarm Upload: %s", err) } arn := aws.StringValue(out.Upload.Arn) @@ -98,7 +101,7 @@ func resourceUploadCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceUploadRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) upload, err := FindUploadByARN(ctx, conn, d.Id()) @@ -133,7 +136,7 @@ func resourceUploadRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceUploadUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) input := &devicefarm.UpdateUploadInput{ Arn: aws.String(d.Id()), @@ -150,7 +153,7 @@ func resourceUploadUpdate(ctx context.Context, d *schema.ResourceData, meta inte log.Printf("[DEBUG] Updating DeviceFarm Upload: %s", d.Id()) _, err := conn.UpdateUploadWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error Updating DeviceFarm Upload: %s", err) + return sdkdiag.AppendErrorf(diags, "updating DeviceFarm Upload (%s): %s", 
d.Id(), err) } return append(diags, resourceUploadRead(ctx, d, meta)...) @@ -158,7 +161,7 @@ func resourceUploadUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceUploadDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DeviceFarmConn() + conn := meta.(*conns.AWSClient).DeviceFarmConn(ctx) input := &devicefarm.DeleteUploadInput{ Arn: aws.String(d.Id()), @@ -170,7 +173,7 @@ func resourceUploadDelete(ctx context.Context, d *schema.ResourceData, meta inte if tfawserr.ErrCodeEquals(err, devicefarm.ErrCodeNotFoundException) { return diags } - return sdkdiag.AppendErrorf(diags, "Error deleting DeviceFarm Upload: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting DeviceFarm Upload: %s", err) } return diags diff --git a/internal/service/devicefarm/upload_test.go b/internal/service/devicefarm/upload_test.go index f08e88f08d0..29a8adc8741 100644 --- a/internal/service/devicefarm/upload_test.go +++ b/internal/service/devicefarm/upload_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package devicefarm_test import ( @@ -141,7 +144,7 @@ func testAccCheckUploadExists(ctx context.Context, n string, v *devicefarm.Uploa return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) resp, err := tfdevicefarm.FindUploadByARN(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -158,7 +161,7 @@ func testAccCheckUploadExists(ctx context.Context, n string, v *devicefarm.Uploa func testAccCheckUploadDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DeviceFarmConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_devicefarm_upload" { diff --git a/internal/service/directconnect/acc_test.go b/internal/service/directconnect/acc_test.go index cfafa793e01..0886a423d3b 100644 --- a/internal/service/directconnect/acc_test.go +++ b/internal/service/directconnect/acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( @@ -15,7 +18,7 @@ import ( func testAccCheckVirtualInterfaceExists(ctx context.Context, name string, vif *directconnect.VirtualInterface) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { @@ -45,7 +48,7 @@ func testAccCheckVirtualInterfaceExists(ctx context.Context, name string, vif *d } func testAccCheckVirtualInterfaceDestroy(ctx context.Context, s *terraform.State, t string) error { // nosemgrep:ci.semgrep.acctest.naming.destroy-check-signature - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != t { diff --git a/internal/service/directconnect/bgp_peer.go b/internal/service/directconnect/bgp_peer.go index f0baa0cab12..4d48c9cb8bd 100644 --- a/internal/service/directconnect/bgp_peer.go +++ b/internal/service/directconnect/bgp_peer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -82,7 +85,7 @@ func ResourceBGPPeer() *schema.Resource { func resourceBGPPeerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vifId := d.Get("virtual_interface_id").(string) addrFamily := d.Get("address_family").(string) @@ -108,7 +111,7 @@ func resourceBGPPeerCreate(ctx context.Context, d *schema.ResourceData, meta int log.Printf("[DEBUG] Creating Direct Connect BGP peer: %#v", req) _, err := conn.CreateBGPPeerWithContext(ctx, req) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error creating Direct Connect BGP peer: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Direct Connect BGP peer: %s", err) } d.SetId(fmt.Sprintf("%s-%s-%d", vifId, addrFamily, asn)) @@ -128,7 +131,7 @@ func resourceBGPPeerCreate(ctx context.Context, d *schema.ResourceData, meta int } _, err = stateConf.WaitForStateContext(ctx) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error waiting for Direct Connect BGP peer (%s) to be available: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "waiting for Direct Connect BGP peer (%s) to be available: %s", d.Id(), err) } return append(diags, resourceBGPPeerRead(ctx, d, meta)...) 
@@ -136,7 +139,7 @@ func resourceBGPPeerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceBGPPeerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vifId := d.Get("virtual_interface_id").(string) addrFamily := d.Get("address_family").(string) @@ -144,7 +147,7 @@ func resourceBGPPeerRead(ctx context.Context, d *schema.ResourceData, meta inter bgpPeerRaw, state, err := bgpPeerStateRefresh(ctx, conn, vifId, addrFamily, asn)() if err != nil { - return sdkdiag.AppendErrorf(diags, "Error reading Direct Connect BGP peer: %s", err) + return sdkdiag.AppendErrorf(diags, "reading Direct Connect BGP peer: %s", err) } if state == directconnect.BGPPeerStateDeleted { log.Printf("[WARN] Direct Connect BGP peer (%s) not found, removing from state", d.Id()) @@ -165,7 +168,7 @@ func resourceBGPPeerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceBGPPeerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vifId := d.Get("virtual_interface_id").(string) addrFamily := d.Get("address_family").(string) diff --git a/internal/service/directconnect/bgp_peer_test.go b/internal/service/directconnect/bgp_peer_test.go index 88df1c72a2b..ba3a6e17d33 100644 --- a/internal/service/directconnect/bgp_peer_test.go +++ b/internal/service/directconnect/bgp_peer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( @@ -44,7 +47,7 @@ func TestAccDirectConnectBGPPeer_basic(t *testing.T) { func testAccCheckBGPPeerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dx_bgp_peer" { diff --git a/internal/service/directconnect/connection.go b/internal/service/directconnect/connection.go index d7dce8a9900..a8f8693a164 100644 --- a/internal/service/directconnect/connection.go +++ b/internal/service/directconnect/connection.go @@ -1,9 +1,13 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( "context" "fmt" "log" + "strconv" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/arn" @@ -23,15 +27,128 @@ import ( // @SDKResource("aws_dx_connection", name="Connection") // @Tags(identifierAttribute="arn") func ResourceConnection() *schema.Resource { + // Resource with v0 schema (provider v5.0.1). + resourceV0 := &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_device": { + Type: schema.TypeString, + Computed: true, + }, + "bandwidth": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validConnectionBandWidth(), + }, + // The MAC Security (MACsec) connection encryption mode. 
+ "encryption_mode": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"no_encrypt", "should_encrypt", "must_encrypt"}, false), + }, + "has_logical_redundancy": { + Type: schema.TypeString, + Computed: true, + }, + "jumbo_frame_capable": { + Type: schema.TypeBool, + Computed: true, + }, + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + // Indicates whether the connection supports MAC Security (MACsec). + "macsec_capable": { + Type: schema.TypeBool, + Computed: true, + }, + // Enable or disable MAC Security (MACsec) on this connection. + "request_macsec": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "owner_account_id": { + Type: schema.TypeString, + Computed: true, + }, + "partner_name": { + Type: schema.TypeString, + Computed: true, + }, + "port_encryption_status": { + Type: schema.TypeString, + Computed: true, + }, + "provider_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + "skip_destroy": { + Type: schema.TypeBool, + Default: false, + Optional: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "vlan_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } + return &schema.Resource{ CreateWithoutTimeout: resourceConnectionCreate, ReadWithoutTimeout: resourceConnectionRead, UpdateWithoutTimeout: resourceConnectionUpdate, DeleteWithoutTimeout: resourceConnectionDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceV0.CoreConfigSchema().ImpliedType(), + Upgrade: func(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + // Convert vlan_id from string to int. 
+ if v, ok := rawState["vlan_id"]; ok { + if v, ok := v.(string); ok { + if v == "" { + rawState["vlan_id"] = 0 + } else { + if v, err := strconv.Atoi(v); err == nil { + rawState["vlan_id"] = v + } else { + return nil, err + } + } + } + } + + return rawState, nil + }, + Version: 0, + }, + }, + Schema: map[string]*schema.Schema{ "arn": { Type: schema.TypeString, @@ -110,7 +227,7 @@ func ResourceConnection() *schema.Resource { names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), "vlan_id": { - Type: schema.TypeString, + Type: schema.TypeInt, Computed: true, }, }, @@ -121,7 +238,7 @@ func ResourceConnection() *schema.Resource { func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) name := d.Get("name").(string) input := &directconnect.CreateConnectionInput{ @@ -129,7 +246,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta ConnectionName: aws.String(name), Location: aws.String(d.Get("location").(string)), RequestMACSec: aws.Bool(d.Get("request_macsec").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("provider_name"); ok { @@ -149,7 +266,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) connection, err := FindConnectionByID(ctx, conn, d.Id()) @@ -196,7 +313,7 @@ func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta in func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) if d.HasChange("encryption_mode") { input := &directconnect.UpdateConnectionInput{ @@ -220,7 +337,7 @@ func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) if _, ok := d.GetOk("skip_destroy"); ok { return diags diff --git a/internal/service/directconnect/connection_association.go b/internal/service/directconnect/connection_association.go index 062f928dee5..384a6b79f4c 100644 --- a/internal/service/directconnect/connection_association.go +++ b/internal/service/directconnect/connection_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -39,7 +42,7 @@ func ResourceConnectionAssociation() *schema.Resource { func resourceConnectionAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) connectionID := d.Get("connection_id").(string) lagID := d.Get("lag_id").(string) @@ -62,7 +65,7 @@ func resourceConnectionAssociationCreate(ctx context.Context, d *schema.Resource func resourceConnectionAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) lagID := d.Get("lag_id").(string) err := FindConnectionAssociationExists(ctx, conn, d.Id(), lagID) @@ -82,7 +85,7 @@ func resourceConnectionAssociationRead(ctx context.Context, d *schema.ResourceDa func 
resourceConnectionAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) if err := deleteConnectionLAGAssociation(ctx, conn, d.Id(), d.Get("lag_id").(string)); err != nil { return sdkdiag.AppendFromErr(diags, err) diff --git a/internal/service/directconnect/connection_association_test.go b/internal/service/directconnect/connection_association_test.go index 5eab320d80f..8191316b8ec 100644 --- a/internal/service/directconnect/connection_association_test.go +++ b/internal/service/directconnect/connection_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( @@ -82,7 +85,7 @@ func TestAccDirectConnectConnectionAssociation_multiple(t *testing.T) { func testAccCheckConnectionAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dx_connection_association" { @@ -108,7 +111,7 @@ func testAccCheckConnectionAssociationDestroy(ctx context.Context) resource.Test func testAccCheckConnectionAssociationExists(ctx context.Context, name string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { diff --git a/internal/service/directconnect/connection_confirmation.go b/internal/service/directconnect/connection_confirmation.go index 283e785c381..7dd9f4e8fb2 100644 --- a/internal/service/directconnect/connection_confirmation.go +++ 
b/internal/service/directconnect/connection_confirmation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -32,7 +35,7 @@ func ResourceConnectionConfirmation() *schema.Resource { func resourceConnectionConfirmationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) connectionID := d.Get("connection_id").(string) input := &directconnect.ConfirmConnectionInput{ @@ -57,7 +60,7 @@ func resourceConnectionConfirmationCreate(ctx context.Context, d *schema.Resourc func resourceConnectionConfirmationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) _, err := FindConnectionByID(ctx, conn, d.Id()) diff --git a/internal/service/directconnect/connection_confirmation_test.go b/internal/service/directconnect/connection_confirmation_test.go index f16e38c2a9c..a6372418345 100644 --- a/internal/service/directconnect/connection_confirmation_test.go +++ b/internal/service/directconnect/connection_confirmation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( @@ -60,7 +63,7 @@ func testAccCheckConnectionConfirmationExists(ctx context.Context, name string, } provider := providerFunc() - conn := provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) connection, err := tfdirectconnect.FindConnectionByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/directconnect/connection_data_source.go b/internal/service/directconnect/connection_data_source.go index 21a3a87f166..fe7333b5395 100644 --- a/internal/service/directconnect/connection_data_source.go +++ b/internal/service/directconnect/connection_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -54,7 +57,7 @@ func DataSourceConnection() *schema.Resource { }, "tags": tftags.TagsSchemaComputed(), "vlan_id": { - Type: schema.TypeString, + Type: schema.TypeInt, Computed: true, }, }, @@ -63,7 +66,7 @@ func DataSourceConnection() *schema.Resource { func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig var connections []*directconnect.Connection @@ -112,7 +115,7 @@ func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta d.Set("provider_name", connection.ProviderName) d.Set("vlan_id", connection.Vlan) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Direct Connect Connection (%s): %s", arn, err) diff --git a/internal/service/directconnect/connection_data_source_test.go b/internal/service/directconnect/connection_data_source_test.go index 6d78c53330b..8eb88a07034 100644 --- 
a/internal/service/directconnect/connection_data_source_test.go +++ b/internal/service/directconnect/connection_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( diff --git a/internal/service/directconnect/connection_test.go b/internal/service/directconnect/connection_test.go index 24278239a0e..a98678af058 100644 --- a/internal/service/directconnect/connection_test.go +++ b/internal/service/directconnect/connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( @@ -41,7 +44,7 @@ func TestAccDirectConnectConnection_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "partner_name", ""), resource.TestCheckResourceAttr(resourceName, "provider_name", ""), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), - resource.TestCheckResourceAttr(resourceName, "vlan_id", ""), + resource.TestCheckResourceAttr(resourceName, "vlan_id", "0"), ), }, // Test import. @@ -283,9 +286,81 @@ func TestAccDirectConnectConnection_tags(t *testing.T) { }) } +// https://github.com/hashicorp/terraform-provider-aws/issues/31732. +func TestAccDirectConnectConnection_vlanIDMigration501(t *testing.T) { + ctx := acctest.Context(t) + var connection directconnect.Connection + resourceName := "aws_dx_connection.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, directconnect.EndpointsID), + CheckDestroy: testAccCheckConnectionDestroy(ctx), + Steps: []resource.TestStep{ + { + // At v5.0.1 the resource's schema is v0 and vlan_id is TypeString. 
+ ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "5.0.1", + }, + }, + Config: testAccConnectionConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectionExists(ctx, resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "vlan_id", ""), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Config: testAccConnectionConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectionExists(ctx, resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "vlan_id", "0"), + ), + }, + }, + }) +} + +func TestAccDirectConnectConnection_vlanIDMigration510(t *testing.T) { + ctx := acctest.Context(t) + var connection directconnect.Connection + resourceName := "aws_dx_connection.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, directconnect.EndpointsID), + CheckDestroy: testAccCheckConnectionDestroy(ctx), + Steps: []resource.TestStep{ + { + // At v5.1.0 the resource's schema is v0 and vlan_id is TypeInt. 
+ ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "5.1.0", + }, + }, + Config: testAccConnectionConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectionExists(ctx, resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, "vlan_id", "0"), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Config: testAccConnectionConfig_basic(rName), + PlanOnly: true, + }, + }, + }) +} + func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dx_connection" { @@ -311,7 +386,7 @@ func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckConnectionExists(ctx context.Context, name string, v *directconnect.Connection) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { @@ -336,7 +411,7 @@ func testAccCheckConnectionExists(ctx context.Context, name string, v *directcon func testAccCheckConnectionNoDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dx_connection" { diff --git a/internal/service/directconnect/find.go b/internal/service/directconnect/find.go index e28d753b167..a9bd6ce7a18 100644 --- a/internal/service/directconnect/find.go +++ 
b/internal/service/directconnect/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( diff --git a/internal/service/directconnect/gateway.go b/internal/service/directconnect/gateway.go index adf17071f69..f8ccdcf5702 100644 --- a/internal/service/directconnect/gateway.go +++ b/internal/service/directconnect/gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -55,7 +58,7 @@ func ResourceGateway() *schema.Resource { func resourceGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) name := d.Get("name").(string) input := &directconnect.CreateDirectConnectGatewayInput{ @@ -84,7 +87,7 @@ func resourceGatewayCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) output, err := FindGatewayByID(ctx, conn, d.Id()) @@ -107,7 +110,7 @@ func resourceGatewayRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) if d.HasChange("name") { input := &directconnect.UpdateDirectConnectGatewayInput{ @@ -127,7 +130,7 @@ func resourceGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) log.Printf("[DEBUG] Deleting Direct Connect Gateway: %s", d.Id()) _, err := conn.DeleteDirectConnectGatewayWithContext(ctx, &directconnect.DeleteDirectConnectGatewayInput{ diff --git a/internal/service/directconnect/gateway_association.go b/internal/service/directconnect/gateway_association.go index f8839c61cbb..5f2b9012615 100644 --- a/internal/service/directconnect/gateway_association.go +++ b/internal/service/directconnect/gateway_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -114,7 +117,7 @@ func ResourceGatewayAssociation() *schema.Resource { func resourceGatewayAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) var associationID string directConnectGatewayID := d.Get("dx_gateway_id").(string) @@ -175,7 +178,7 @@ func resourceGatewayAssociationCreate(ctx context.Context, d *schema.ResourceDat func resourceGatewayAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) associationID := d.Get("dx_gateway_association_id").(string) @@ -207,7 +210,7 @@ func resourceGatewayAssociationRead(ctx context.Context, d *schema.ResourceData, func resourceGatewayAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) associationID := d.Get("dx_gateway_association_id").(string) input := &directconnect.UpdateDirectConnectGatewayAssociationInput{ @@ 
-241,7 +244,7 @@ func resourceGatewayAssociationUpdate(ctx context.Context, d *schema.ResourceDat

 func resourceGatewayAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	associationID := d.Get("dx_gateway_association_id").(string)
@@ -266,7 +269,7 @@ func resourceGatewayAssociationDelete(ctx context.Context, d *schema.ResourceDat
 }

 func resourceGatewayAssociationImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	parts := strings.Split(d.Id(), "/")
diff --git a/internal/service/directconnect/gateway_association_migrate.go b/internal/service/directconnect/gateway_association_migrate.go
index 6bb9854c8e5..6b655bb2816 100644
--- a/internal/service/directconnect/gateway_association_migrate.go
+++ b/internal/service/directconnect/gateway_association_migrate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -76,7 +79,7 @@ func resourceGatewayAssociationResourceV0() *schema.Resource {
 }

 func GatewayAssociationStateUpgradeV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	log.Println("[INFO] Found Direct Connect Gateway Association state v0; migrating to v1")
diff --git a/internal/service/directconnect/gateway_association_proposal.go b/internal/service/directconnect/gateway_association_proposal.go
index 3608f0c5bf8..f64345ececd 100644
--- a/internal/service/directconnect/gateway_association_proposal.go
+++ b/internal/service/directconnect/gateway_association_proposal.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -33,7 +36,7 @@ func ResourceGatewayAssociationProposal() *schema.Resource {
 		// Accepting the proposal with overridden prefixes changes the returned RequestedAllowedPrefixesToDirectConnectGateway value (allowed_prefixes attribute).
 		// We only want to force a new resource if this value changes and the current proposal state is "requested".
 		customdiff.ForceNewIf("allowed_prefixes", func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) bool {
-			conn := meta.(*conns.AWSClient).DirectConnectConn()
+			conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 			log.Printf("[DEBUG] CustomizeDiff for Direct Connect Gateway Association Proposal (%s) allowed_prefixes", d.Id())
@@ -95,7 +98,7 @@ func ResourceGatewayAssociationProposal() *schema.Resource {

 func resourceGatewayAssociationProposalCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	directConnectGatewayID := d.Get("dx_gateway_id").(string)
 	associatedGatewayID := d.Get("associated_gateway_id").(string)
@@ -123,7 +126,7 @@ func resourceGatewayAssociationProposalCreate(ctx context.Context, d *schema.Res

 func resourceGatewayAssociationProposalRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	// First attempt to find by proposal ID.
 	output, err := FindGatewayAssociationProposalByID(ctx, conn, d.Id())
@@ -178,7 +181,7 @@ func resourceGatewayAssociationProposalRead(ctx context.Context, d *schema.Resou

 func resourceGatewayAssociationProposalDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	log.Printf("[DEBUG] Deleting Direct Connect Gateway Association Proposal: %s", d.Id())
 	_, err := conn.DeleteDirectConnectGatewayAssociationProposalWithContext(ctx, &directconnect.DeleteDirectConnectGatewayAssociationProposalInput{
diff --git a/internal/service/directconnect/gateway_association_proposal_test.go b/internal/service/directconnect/gateway_association_proposal_test.go
index c27810054d3..5d366d179a7 100644
--- a/internal/service/directconnect/gateway_association_proposal_test.go
+++ b/internal/service/directconnect/gateway_association_proposal_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
@@ -230,7 +233,7 @@ func TestAccDirectConnectGatewayAssociationProposal_allowedPrefixes(t *testing.T

 func testAccCheckGatewayAssociationProposalDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dx_gateway_association_proposal" {
@@ -265,7 +268,7 @@ func testAccCheckGatewayAssociationProposalExists(ctx context.Context, resourceN
 			return fmt.Errorf("No ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		output, err := tfdirectconnect.FindGatewayAssociationProposalByID(ctx, conn, rs.Primary.ID)
@@ -296,7 +299,7 @@ func testAccCheckGatewayAssociationProposalAccepted(ctx context.Context, resourc
 			return fmt.Errorf("Not found: %s", resourceName)
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		output, err := tfdirectconnect.FindGatewayAssociationProposalByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/directconnect/gateway_association_test.go b/internal/service/directconnect/gateway_association_test.go
index 47bf79744c3..187ae16ba64 100644
--- a/internal/service/directconnect/gateway_association_test.go
+++ b/internal/service/directconnect/gateway_association_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
@@ -362,7 +365,7 @@ func testAccGatewayAssociationImportStateIdFunc(resourceName string) resource.Im

 func testAccCheckGatewayAssociationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dx_gateway_association" {
@@ -396,7 +399,7 @@ func testAccCheckGatewayAssociationExists(ctx context.Context, name string, ga *
 			return fmt.Errorf("No Direct Connect Gateway Association ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		output, err := tfdirectconnect.FindGatewayAssociationByID(ctx, conn, rs.Primary.Attributes["dx_gateway_association_id"])
diff --git a/internal/service/directconnect/gateway_data_source.go b/internal/service/directconnect/gateway_data_source.go
index 48582daa740..0765282ae5f 100644
--- a/internal/service/directconnect/gateway_data_source.go
+++ b/internal/service/directconnect/gateway_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -36,7 +39,7 @@ func DataSourceGateway() *schema.Resource {

 func dataSourceGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	name := d.Get("name").(string)
 	gateways := make([]*directconnect.Gateway, 0)
diff --git a/internal/service/directconnect/gateway_data_source_test.go b/internal/service/directconnect/gateway_data_source_test.go
index d168c9773d1..57cc4e584c7 100644
--- a/internal/service/directconnect/gateway_data_source_test.go
+++ b/internal/service/directconnect/gateway_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/gateway_test.go b/internal/service/directconnect/gateway_test.go
index 18fb9ff3099..be7e8bab52d 100644
--- a/internal/service/directconnect/gateway_test.go
+++ b/internal/service/directconnect/gateway_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
@@ -132,7 +135,7 @@ func TestAccDirectConnectGateway_update(t *testing.T) {

 func testAccCheckGatewayDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dx_gateway" {
@@ -166,7 +169,7 @@ func testAccCheckGatewayExists(ctx context.Context, name string, v *directconnec
 			return fmt.Errorf("No ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		output, err := tfdirectconnect.FindGatewayByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/directconnect/generate.go b/internal/service/directconnect/generate.go
index 96c7a9ad81d..13a0f277c61 100644
--- a/internal/service/directconnect/generate.go
+++ b/internal/service/directconnect/generate.go
@@ -1,5 +1,9 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeDirectConnectGateways,DescribeDirectConnectGatewayAssociations,DescribeDirectConnectGatewayAssociationProposals
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=DescribeTags -ListTagsInIDElem=ResourceArns -ListTagsInIDNeedSlice=yes -ListTagsOutTagsElem=ResourceTags[0].Tags -ServiceTagsSlice -UpdateTags -CreateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package directconnect
diff --git a/internal/service/directconnect/hosted_connection.go b/internal/service/directconnect/hosted_connection.go
index bb73981468c..c0601520e31 100644
--- a/internal/service/directconnect/hosted_connection.go
+++ b/internal/service/directconnect/hosted_connection.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -98,7 +101,7 @@ func ResourceHostedConnection() *schema.Resource {

 func resourceHostedConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	name := d.Get("name").(string)
 	input := &directconnect.AllocateHostedConnectionInput{
@@ -123,7 +126,7 @@ func resourceHostedConnectionCreate(ctx context.Context, d *schema.ResourceData,

 func resourceHostedConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	connection, err := FindHostedConnectionByID(ctx, conn, d.Id())
@@ -159,7 +162,7 @@ func resourceHostedConnectionRead(ctx context.Context, d *schema.ResourceData, m

 func resourceHostedConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	if err := deleteConnection(ctx, conn, d.Id(), waitHostedConnectionDeleted); err != nil {
 		return sdkdiag.AppendFromErr(diags, err)
diff --git a/internal/service/directconnect/hosted_connection_test.go b/internal/service/directconnect/hosted_connection_test.go
index d6f4111e312..5bd3d79b3cb 100644
--- a/internal/service/directconnect/hosted_connection_test.go
+++ b/internal/service/directconnect/hosted_connection_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
@@ -70,7 +73,7 @@ func testAccCheckHostedConnectionEnv() (*testAccDxHostedConnectionEnv, error) {
 func testAccCheckHostedConnectionDestroy(ctx context.Context, providerFunc func() *schema.Provider) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		provider := providerFunc()
-		conn := provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dx_hosted_connection" {
@@ -96,7 +99,7 @@ func testAccCheckHostedConnectionDestroy(ctx context.Context, providerFunc func(

 func testAccCheckHostedConnectionExists(ctx context.Context, name string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		rs, ok := s.RootModule().Resources[name]
 		if !ok {
diff --git a/internal/service/directconnect/hosted_private_virtual_interface.go b/internal/service/directconnect/hosted_private_virtual_interface.go
index 451036397d2..50af122d5f2 100644
--- a/internal/service/directconnect/hosted_private_virtual_interface.go
+++ b/internal/service/directconnect/hosted_private_virtual_interface.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -117,7 +120,7 @@ func ResourceHostedPrivateVirtualInterface() *schema.Resource {

 func resourceHostedPrivateVirtualInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	req := &directconnect.AllocatePrivateVirtualInterfaceInput{
 		ConnectionId: aws.String(d.Get("connection_id").(string)),
@@ -160,7 +163,7 @@ func resourceHostedPrivateVirtualInterfaceCreate(ctx context.Context, d *schema.

 func resourceHostedPrivateVirtualInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -202,7 +205,7 @@ func resourceHostedPrivateVirtualInterfaceDelete(ctx context.Context, d *schema.
 }

 func resourceHostedPrivateVirtualInterfaceImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/hosted_private_virtual_interface_accepter.go b/internal/service/directconnect/hosted_private_virtual_interface_accepter.go
index 6794e3ea7d9..3744491ffd9 100644
--- a/internal/service/directconnect/hosted_private_virtual_interface_accepter.go
+++ b/internal/service/directconnect/hosted_private_virtual_interface_accepter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -67,7 +70,7 @@ func ResourceHostedPrivateVirtualInterfaceAccepter() *schema.Resource {

 func resourceHostedPrivateVirtualInterfaceAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vgwIdRaw, vgwOk := d.GetOk("vpn_gateway_id")
 	dxgwIdRaw, dxgwOk := d.GetOk("dx_gateway_id")
@@ -106,7 +109,7 @@ func resourceHostedPrivateVirtualInterfaceAccepterCreate(ctx context.Context, d
 		return sdkdiag.AppendFromErr(diags, err)
 	}

-	if err := createTags(ctx, conn, arn, GetTagsIn(ctx)); err != nil {
+	if err := createTags(ctx, conn, arn, getTagsIn(ctx)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting Direct Connect hosted private virtual interface (%s) tags: %s", arn, err)
 	}

@@ -115,7 +118,7 @@ func resourceHostedPrivateVirtualInterfaceAccepterCreate(ctx context.Context, d

 func resourceHostedPrivateVirtualInterfaceAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -159,7 +162,7 @@ func resourceHostedPrivateVirtualInterfaceAccepterDelete(ctx context.Context, d
 }

 func resourceHostedPrivateVirtualInterfaceAccepterImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/hosted_private_virtual_interface_test.go b/internal/service/directconnect/hosted_private_virtual_interface_test.go
index c1b339bf00b..c63002abdcb 100644
--- a/internal/service/directconnect/hosted_private_virtual_interface_test.go
+++ b/internal/service/directconnect/hosted_private_virtual_interface_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/hosted_public_virtual_interface.go b/internal/service/directconnect/hosted_public_virtual_interface.go
index cb4dd4fd7b5..4f679676055 100644
--- a/internal/service/directconnect/hosted_public_virtual_interface.go
+++ b/internal/service/directconnect/hosted_public_virtual_interface.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -114,7 +117,7 @@ func ResourceHostedPublicVirtualInterface() *schema.Resource {

 func resourceHostedPublicVirtualInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	req := &directconnect.AllocatePublicVirtualInterfaceInput{
 		ConnectionId: aws.String(d.Get("connection_id").(string)),
@@ -156,7 +159,7 @@ func resourceHostedPublicVirtualInterfaceCreate(ctx context.Context, d *schema.R

 func resourceHostedPublicVirtualInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -199,7 +202,7 @@ func resourceHostedPublicVirtualInterfaceDelete(ctx context.Context, d *schema.R
 }

 func resourceHostedPublicVirtualInterfaceImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/hosted_public_virtual_interface_accepter.go b/internal/service/directconnect/hosted_public_virtual_interface_accepter.go
index 062c6c23f1e..b864da38d75 100644
--- a/internal/service/directconnect/hosted_public_virtual_interface_accepter.go
+++ b/internal/service/directconnect/hosted_public_virtual_interface_accepter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -55,7 +58,7 @@ func ResourceHostedPublicVirtualInterfaceAccepter() *schema.Resource {

 func resourceHostedPublicVirtualInterfaceAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vifId := d.Get("virtual_interface_id").(string)
 	req := &directconnect.ConfirmPublicVirtualInterfaceInput{
@@ -82,7 +85,7 @@ func resourceHostedPublicVirtualInterfaceAccepterCreate(ctx context.Context, d *
 		return sdkdiag.AppendFromErr(diags, err)
 	}

-	if err := createTags(ctx, conn, arn, GetTagsIn(ctx)); err != nil {
+	if err := createTags(ctx, conn, arn, getTagsIn(ctx)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting Direct Connect hosted public virtual interface (%s) tags: %s", arn, err)
 	}

@@ -91,7 +94,7 @@ func resourceHostedPublicVirtualInterfaceAccepterCreate(ctx context.Context, d *

 func resourceHostedPublicVirtualInterfaceAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -134,7 +137,7 @@ func resourceHostedPublicVirtualInterfaceAccepterDelete(ctx context.Context, d *
 }

 func resourceHostedPublicVirtualInterfaceAccepterImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/hosted_public_virtual_interface_test.go b/internal/service/directconnect/hosted_public_virtual_interface_test.go
index 28182612cce..c6ba3cf2cf0 100644
--- a/internal/service/directconnect/hosted_public_virtual_interface_test.go
+++ b/internal/service/directconnect/hosted_public_virtual_interface_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/hosted_transit_virtual_interface.go b/internal/service/directconnect/hosted_transit_virtual_interface.go
index 12d7eadffe2..8dce83bbf10 100644
--- a/internal/service/directconnect/hosted_transit_virtual_interface.go
+++ b/internal/service/directconnect/hosted_transit_virtual_interface.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -117,7 +120,7 @@ func ResourceHostedTransitVirtualInterface() *schema.Resource {

 func resourceHostedTransitVirtualInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	req := &directconnect.AllocateTransitVirtualInterfaceInput{
 		ConnectionId: aws.String(d.Get("connection_id").(string)),
@@ -157,7 +160,7 @@ func resourceHostedTransitVirtualInterfaceCreate(ctx context.Context, d *schema.

 func resourceHostedTransitVirtualInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -199,7 +202,7 @@ func resourceHostedTransitVirtualInterfaceDelete(ctx context.Context, d *schema.
 }

 func resourceHostedTransitVirtualInterfaceImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/hosted_transit_virtual_interface_accepter.go b/internal/service/directconnect/hosted_transit_virtual_interface_accepter.go
index f8a469af0b0..dc0184894b4 100644
--- a/internal/service/directconnect/hosted_transit_virtual_interface_accepter.go
+++ b/internal/service/directconnect/hosted_transit_virtual_interface_accepter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -60,7 +63,7 @@ func ResourceHostedTransitVirtualInterfaceAccepter() *schema.Resource {

 func resourceHostedTransitVirtualInterfaceAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vifId := d.Get("virtual_interface_id").(string)
 	req := &directconnect.ConfirmTransitVirtualInterfaceInput{
@@ -88,7 +91,7 @@ func resourceHostedTransitVirtualInterfaceAccepterCreate(ctx context.Context, d
 		return sdkdiag.AppendFromErr(diags, err)
 	}

-	if err := createTags(ctx, conn, arn, GetTagsIn(ctx)); err != nil {
+	if err := createTags(ctx, conn, arn, getTagsIn(ctx)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting Direct Connect hosted transit virtual interface (%s) tags: %s", arn, err)
 	}

@@ -97,7 +100,7 @@ func resourceHostedTransitVirtualInterfaceAccepterCreate(ctx context.Context, d

 func resourceHostedTransitVirtualInterfaceAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -139,7 +142,7 @@ func resourceHostedTransitVirtualInterfaceAccepterDelete(ctx context.Context, d
 }

 func resourceHostedTransitVirtualInterfaceAccepterImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/hosted_transit_virtual_interface_test.go b/internal/service/directconnect/hosted_transit_virtual_interface_test.go
index f389ad45d7b..bde61aa33b7 100644
--- a/internal/service/directconnect/hosted_transit_virtual_interface_test.go
+++ b/internal/service/directconnect/hosted_transit_virtual_interface_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/id.go b/internal/service/directconnect/id.go
index d1cfd0dd061..9bef8c3a844 100644
--- a/internal/service/directconnect/id.go
+++ b/internal/service/directconnect/id.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
diff --git a/internal/service/directconnect/lag.go b/internal/service/directconnect/lag.go
index a0c6dce3644..3fe7f9a3585 100644
--- a/internal/service/directconnect/lag.go
+++ b/internal/service/directconnect/lag.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -89,14 +92,14 @@ func ResourceLag() *schema.Resource {

 func resourceLagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	name := d.Get("name").(string)
 	input := &directconnect.CreateLagInput{
 		ConnectionsBandwidth: aws.String(d.Get("connections_bandwidth").(string)),
 		LagName:              aws.String(name),
 		Location:             aws.String(d.Get("location").(string)),
-		Tags:                 GetTagsIn(ctx),
+		Tags:                 getTagsIn(ctx),
 	}

 	var connectionIDSpecified bool
@@ -133,7 +136,7 @@ func resourceLagCreate(ctx context.Context, d *schema.ResourceData, meta interfa

 func resourceLagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	lag, err := FindLagByID(ctx, conn, d.Id())
@@ -168,7 +171,7 @@ func resourceLagRead(ctx context.Context, d *schema.ResourceData, meta interface

 func resourceLagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	if d.HasChange("name") {
 		input := &directconnect.UpdateLagInput{
@@ -189,7 +192,7 @@ func resourceLagUpdate(ctx context.Context, d *schema.ResourceData, meta interfa

 func resourceLagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	if d.Get("force_destroy").(bool) {
 		lag, err := FindLagByID(ctx, conn, d.Id())
diff --git a/internal/service/directconnect/lag_test.go b/internal/service/directconnect/lag_test.go
index 32f029b8346..50a3c131a4b 100644
--- a/internal/service/directconnect/lag_test.go
+++ b/internal/service/directconnect/lag_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
@@ -228,7 +231,7 @@ func TestAccDirectConnectLag_tags(t *testing.T) {

 func testAccCheckLagDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_dx_lag" {
@@ -254,7 +257,7 @@ func testAccCheckLagDestroy(ctx context.Context) resource.TestCheckFunc {

 func testAccCheckLagExists(ctx context.Context, name string, v *directconnect.Lag) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DirectConnectConn(ctx)

 		rs, ok := s.RootModule().Resources[name]
 		if !ok {
diff --git a/internal/service/directconnect/location_data_source.go b/internal/service/directconnect/location_data_source.go
index 36204f3b88d..d865335e036 100644
--- a/internal/service/directconnect/location_data_source.go
+++ b/internal/service/directconnect/location_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -50,7 +53,7 @@ func DataSourceLocation() *schema.Resource {

 func dataSourceLocationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	locationCode := d.Get("location_code").(string)
 	location, err := FindLocationByCode(ctx, conn, locationCode)
diff --git a/internal/service/directconnect/location_data_source_test.go b/internal/service/directconnect/location_data_source_test.go
index 6d810b39c0e..371556d77fd 100644
--- a/internal/service/directconnect/location_data_source_test.go
+++ b/internal/service/directconnect/location_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/locations_data_source.go b/internal/service/directconnect/locations_data_source.go
index 3f4bb0aa29b..78bfc6437c6 100644
--- a/internal/service/directconnect/locations_data_source.go
+++ b/internal/service/directconnect/locations_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -28,7 +31,7 @@ func DataSourceLocations() *schema.Resource {

 func dataSourceLocationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	locations, err := FindLocations(ctx, conn, &directconnect.DescribeLocationsInput{})
diff --git a/internal/service/directconnect/macsec_key.go b/internal/service/directconnect/macsec_key.go
index d9f2ee15793..7d9cc862ef2 100644
--- a/internal/service/directconnect/macsec_key.go
+++ b/internal/service/directconnect/macsec_key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -71,7 +74,7 @@ func ResourceMacSecKeyAssociation() *schema.Resource {

 func resourceMacSecKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	input := &directconnect.AssociateMacSecKeyInput{
 		ConnectionId: aws.String(d.Get("connection_id").(string)),
@@ -105,7 +108,7 @@ func resourceMacSecKeyCreate(ctx context.Context, d *schema.ResourceData, meta i

 func resourceMacSecKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	secretArn, connId, err := MacSecKeyParseID(d.Id())
 	if err != nil {
@@ -136,7 +139,7 @@ func resourceMacSecKeyRead(ctx context.Context, d *schema.ResourceData, meta int

 func resourceMacSecKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	input := &directconnect.DisassociateMacSecKeyInput{
 		ConnectionId: aws.String(d.Get("connection_id").(string)),
diff --git a/internal/service/directconnect/macsec_key_test.go b/internal/service/directconnect/macsec_key_test.go
index c5adedc98c5..43505c83a8b 100644
--- a/internal/service/directconnect/macsec_key_test.go
+++ b/internal/service/directconnect/macsec_key_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/private_virtual_interface.go b/internal/service/directconnect/private_virtual_interface.go
index ae09c894aae..35389d199e5 100644
--- a/internal/service/directconnect/private_virtual_interface.go
+++ b/internal/service/directconnect/private_virtual_interface.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect

 import (
@@ -135,7 +138,7 @@ func ResourcePrivateVirtualInterface() *schema.Resource {

 func resourcePrivateVirtualInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vgwIdRaw, vgwOk := d.GetOk("vpn_gateway_id")
 	dxgwIdRaw, dxgwOk := d.GetOk("dx_gateway_id")
@@ -150,7 +153,7 @@ func resourcePrivateVirtualInterfaceCreate(ctx context.Context, d *schema.Resour
 			Asn:                  aws.Int64(int64(d.Get("bgp_asn").(int))),
 			EnableSiteLink:       aws.Bool(d.Get("sitelink_enabled").(bool)),
 			Mtu:                  aws.Int64(int64(d.Get("mtu").(int))),
-			Tags:                 GetTagsIn(ctx),
+			Tags:                 getTagsIn(ctx),
 			VirtualInterfaceName: aws.String(d.Get("name").(string)),
 			Vlan:                 aws.Int64(int64(d.Get("vlan").(int))),
 		},
@@ -188,7 +191,7 @@ func resourcePrivateVirtualInterfaceCreate(ctx context.Context, d *schema.Resour

 func resourcePrivateVirtualInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
@@ -235,7 +238,7 @@ func resourcePrivateVirtualInterfaceUpdate(ctx context.Context, d *schema.Resour
 		return diags
 	}

-	if err := privateVirtualInterfaceWaitUntilAvailable(ctx, meta.(*conns.AWSClient).DirectConnectConn(), d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil {
+	if err := privateVirtualInterfaceWaitUntilAvailable(ctx, meta.(*conns.AWSClient).DirectConnectConn(ctx), d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil {
 		return sdkdiag.AppendFromErr(diags, err)
 	}

@@ -247,7 +250,7 @@ func resourcePrivateVirtualInterfaceDelete(ctx context.Context, d *schema.Resour
 }

 func resourcePrivateVirtualInterfaceImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-	conn := meta.(*conns.AWSClient).DirectConnectConn()
+	conn := meta.(*conns.AWSClient).DirectConnectConn(ctx)

 	vif, err := virtualInterfaceRead(ctx, d.Id(), conn)
 	if err != nil {
diff --git a/internal/service/directconnect/private_virtual_interface_test.go b/internal/service/directconnect/private_virtual_interface_test.go
index 7541eb45187..7274b8bc0d9 100644
--- a/internal/service/directconnect/private_virtual_interface_test.go
+++ b/internal/service/directconnect/private_virtual_interface_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package directconnect_test

 import (
diff --git a/internal/service/directconnect/public_virtual_interface.go b/internal/service/directconnect/public_virtual_interface.go
index dcb782a4e45..d1580d7de5f 100644
--- a/internal/service/directconnect/public_virtual_interface.go
+++ b/internal/service/directconnect/public_virtual_interface.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -118,14 +121,14 @@ func ResourcePublicVirtualInterface() *schema.Resource { func resourcePublicVirtualInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) req := &directconnect.CreatePublicVirtualInterfaceInput{ ConnectionId: aws.String(d.Get("connection_id").(string)), NewPublicVirtualInterface: &directconnect.NewPublicVirtualInterface{ AddressFamily: aws.String(d.Get("address_family").(string)), Asn: aws.Int64(int64(d.Get("bgp_asn").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VirtualInterfaceName: aws.String(d.Get("name").(string)), Vlan: aws.Int64(int64(d.Get("vlan").(int))), }, @@ -160,7 +163,7 @@ func resourcePublicVirtualInterfaceCreate(ctx context.Context, d *schema.Resourc func resourcePublicVirtualInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vif, err := virtualInterfaceRead(ctx, d.Id(), conn) if err != nil { @@ -213,7 +216,7 @@ func resourcePublicVirtualInterfaceDelete(ctx context.Context, d *schema.Resourc } func resourcePublicVirtualInterfaceImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vif, err := virtualInterfaceRead(ctx, d.Id(), conn) if err != nil { diff --git a/internal/service/directconnect/public_virtual_interface_test.go b/internal/service/directconnect/public_virtual_interface_test.go index 5d461876baa..d0f1ba87f01 100644 --- a/internal/service/directconnect/public_virtual_interface_test.go +++ 
b/internal/service/directconnect/public_virtual_interface_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( diff --git a/internal/service/directconnect/router_configuration_data_source.go b/internal/service/directconnect/router_configuration_data_source.go index 255d2d9d147..02b3bd6edc0 100644 --- a/internal/service/directconnect/router_configuration_data_source.go +++ b/internal/service/directconnect/router_configuration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -78,7 +81,7 @@ const ( ) func dataSourceRouterConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) routerTypeIdentifier := d.Get("router_type_identifier").(string) virtualInterfaceId := d.Get("virtual_interface_id").(string) diff --git a/internal/service/directconnect/router_configuration_data_source_test.go b/internal/service/directconnect/router_configuration_data_source_test.go index 68cd61cc2f8..1114549124f 100644 --- a/internal/service/directconnect/router_configuration_data_source_test.go +++ b/internal/service/directconnect/router_configuration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( diff --git a/internal/service/directconnect/service_package_gen.go b/internal/service/directconnect/service_package_gen.go index c9801bbb68f..df3792aa21c 100644 --- a/internal/service/directconnect/service_package_gen.go +++ b/internal/service/directconnect/service_package_gen.go @@ -5,6 +5,10 @@ package directconnect import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + directconnect_sdkv1 "github.com/aws/aws-sdk-go/service/directconnect" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -161,4 +165,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DirectConnect } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*directconnect_sdkv1.DirectConnect, error) { + sess := config["session"].(*session_sdkv1.Session) + + return directconnect_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/directconnect/status.go b/internal/service/directconnect/status.go index 8d3fb26aae3..45c3e9b06f4 100644 --- a/internal/service/directconnect/status.go +++ b/internal/service/directconnect/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( diff --git a/internal/service/directconnect/sweep.go b/internal/service/directconnect/sweep.go index 727f58cc035..3236f0e9261 100644 --- a/internal/service/directconnect/sweep.go +++ b/internal/service/directconnect/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,7 +16,6 @@ import ( "github.com/aws/aws-sdk-go/service/secretsmanager" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -59,12 +61,12 @@ func init() { func sweepConnections(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DirectConnectConn() + conn := client.DirectConnectConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -105,7 +107,7 @@ func sweepConnections(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Direct Connect Connection: %w", err)) } @@ -114,11 +116,11 @@ func sweepConnections(region string) error { func sweepGatewayAssociationProposals(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := 
client.(*conns.AWSClient).DirectConnectConn() + conn := client.DirectConnectConn(ctx) input := &directconnect.DescribeDirectConnectGatewayAssociationProposalsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -160,7 +162,7 @@ func sweepGatewayAssociationProposals(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Direct Connect Gateway Association Proposals (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Direct Connect Gateway Association Proposals (%s): %w", region, err)) @@ -171,11 +173,11 @@ func sweepGatewayAssociationProposals(region string) error { func sweepGatewayAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DirectConnectConn() + conn := client.DirectConnectConn(ctx) input := &directconnect.DescribeDirectConnectGatewaysInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -243,7 +245,7 @@ func sweepGatewayAssociations(region string) error { // these within the service itself so they can only be found // via AssociatedGatewayId of the EC2 Transit Gateway since the // DirectConnectGatewayId lives in the other account. 
- ec2conn := client.(*conns.AWSClient).EC2Conn() + ec2conn := client.EC2Conn(ctx) err = ec2conn.DescribeTransitGatewaysPagesWithContext(ctx, &ec2.DescribeTransitGatewaysInput{}, func(page *ec2.DescribeTransitGatewaysOutput, lastPage bool) bool { if page == nil { @@ -297,7 +299,7 @@ func sweepGatewayAssociations(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EC2 Transit Gateways (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Direct Connect Gateway Associations (%s): %w", region, err)) @@ -308,11 +310,11 @@ func sweepGatewayAssociations(region string) error { func sweepGateways(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DirectConnectConn() + conn := client.DirectConnectConn(ctx) input := &directconnect.DescribeDirectConnectGatewaysInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -382,7 +384,7 @@ func sweepGateways(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Direct Connect Gateways (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Direct Connect Gateways (%s): %w", region, err)) @@ -393,12 +395,12 @@ func sweepGateways(region string) error { func sweepLags(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { 
return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DirectConnectConn() + conn := client.DirectConnectConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -439,7 +441,7 @@ func sweepLags(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Direct Connect LAG: %w", err)) } @@ -448,19 +450,19 @@ func sweepLags(region string) error { func sweepMacSecKeys(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - dxConn := client.(*conns.AWSClient).DirectConnectConn() + dxConn := client.DirectConnectConn(ctx) // Clean up leaked Secrets Manager resources created by Direct Connect. // Direct Connect does not remove the corresponding Secrets Manager // key when deleting the MACsec key association. The only option to // clean up the dangling resource is to use Secrets Manager to delete // the MACsec key secret. - smConn := client.(*conns.AWSClient).SecretsManagerConn() + smConn := client.SecretsManagerConn(ctx) dxInput := &directconnect.DescribeConnectionsInput{} var sweeperErrs *multierror.Error diff --git a/internal/service/directconnect/tags_gen.go b/internal/service/directconnect/tags_gen.go index 4b22379cd31..5a4124f3919 100644 --- a/internal/service/directconnect/tags_gen.go +++ b/internal/service/directconnect/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists directconnect service tags. +// listTags lists directconnect service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, identifier string) (tftags.KeyValueTags, error) { input := &directconnect.DescribeTagsInput{ ResourceArns: aws.StringSlice([]string{identifier}), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, ide // ListTags lists directconnect service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DirectConnectConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DirectConnectConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*directconnect.Tag) tftags.KeyValu return tftags.New(ctx, m) } -// GetTagsIn returns directconnect service tags from Context. +// getTagsIn returns directconnect service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*directconnect.Tag { +func getTagsIn(ctx context.Context) []*directconnect.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*directconnect.Tag { return nil } -// SetTagsOut sets directconnect service tags in Context. -func SetTagsOut(ctx context.Context, tags []*directconnect.Tag) { +// setTagsOut sets directconnect service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*directconnect.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, i return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates directconnect service tags. +// updateTags updates directconnect service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn directconnectiface.DirectConnectAPI, i // UpdateTags updates directconnect service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DirectConnectConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DirectConnectConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/directconnect/transit_virtual_interface.go b/internal/service/directconnect/transit_virtual_interface.go index a23428fa3d6..314bcbe26ed 100644 --- a/internal/service/directconnect/transit_virtual_interface.go +++ b/internal/service/directconnect/transit_virtual_interface.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -128,7 +131,7 @@ func ResourceTransitVirtualInterface() *schema.Resource { func resourceTransitVirtualInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) req := &directconnect.CreateTransitVirtualInterfaceInput{ ConnectionId: aws.String(d.Get("connection_id").(string)), @@ -138,7 +141,7 @@ func resourceTransitVirtualInterfaceCreate(ctx context.Context, d *schema.Resour DirectConnectGatewayId: aws.String(d.Get("dx_gateway_id").(string)), EnableSiteLink: aws.Bool(d.Get("sitelink_enabled").(bool)), Mtu: aws.Int64(int64(d.Get("mtu").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VirtualInterfaceName: aws.String(d.Get("name").(string)), Vlan: aws.Int64(int64(d.Get("vlan").(int))), }, @@ -170,7 +173,7 @@ func resourceTransitVirtualInterfaceCreate(ctx context.Context, d *schema.Resour func resourceTransitVirtualInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vif, err := virtualInterfaceRead(ctx, d.Id(), conn) if err != nil { @@ -216,7 +219,7 @@ func resourceTransitVirtualInterfaceUpdate(ctx context.Context, d *schema.Resour return diags } - if err := transitVirtualInterfaceWaitUntilAvailable(ctx, meta.(*conns.AWSClient).DirectConnectConn(), d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + if err := transitVirtualInterfaceWaitUntilAvailable(ctx, meta.(*conns.AWSClient).DirectConnectConn(ctx), d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { return sdkdiag.AppendFromErr(diags, err) } @@ -228,7 +231,7 @@ func resourceTransitVirtualInterfaceDelete(ctx context.Context, d *schema.Resour } func 
resourceTransitVirtualInterfaceImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) vif, err := virtualInterfaceRead(ctx, d.Id(), conn) if err != nil { diff --git a/internal/service/directconnect/transit_virtual_interface_test.go b/internal/service/directconnect/transit_virtual_interface_test.go index 868a6933cbd..d340e793b15 100644 --- a/internal/service/directconnect/transit_virtual_interface_test.go +++ b/internal/service/directconnect/transit_virtual_interface_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect_test import ( diff --git a/internal/service/directconnect/validate.go b/internal/service/directconnect/validate.go index e54a5fb1f9c..10b599378cf 100644 --- a/internal/service/directconnect/validate.go +++ b/internal/service/directconnect/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( diff --git a/internal/service/directconnect/validate_test.go b/internal/service/directconnect/validate_test.go index 21acbafe432..9533d73f650 100644 --- a/internal/service/directconnect/validate_test.go +++ b/internal/service/directconnect/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( diff --git a/internal/service/directconnect/vif.go b/internal/service/directconnect/vif.go index e1951081fc4..6c1648a64d1 100644 --- a/internal/service/directconnect/vif.go +++ b/internal/service/directconnect/vif.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( @@ -30,7 +33,7 @@ func virtualInterfaceRead(ctx context.Context, id string, conn *directconnect.Di func virtualInterfaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) if d.HasChange("mtu") { req := &directconnect.UpdateVirtualInterfaceAttributesInput{ @@ -60,7 +63,7 @@ func virtualInterfaceUpdate(ctx context.Context, d *schema.ResourceData, meta in func virtualInterfaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DirectConnectConn() + conn := meta.(*conns.AWSClient).DirectConnectConn(ctx) log.Printf("[DEBUG] Deleting Direct Connect virtual interface: %s", d.Id()) _, err := conn.DeleteVirtualInterfaceWithContext(ctx, &directconnect.DeleteVirtualInterfaceInput{ diff --git a/internal/service/directconnect/wait.go b/internal/service/directconnect/wait.go index 5bb124b09ef..98f6d84dc58 100644 --- a/internal/service/directconnect/wait.go +++ b/internal/service/directconnect/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package directconnect import ( diff --git a/internal/service/dlm/generate.go b/internal/service/dlm/generate.go index aa2f6350445..80993444437 100644 --- a/internal/service/dlm/generate.go +++ b/internal/service/dlm/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package dlm diff --git a/internal/service/dlm/lifecycle_policy.go b/internal/service/dlm/lifecycle_policy.go index 580f6afd9ba..8e657bff37a 100644 --- a/internal/service/dlm/lifecycle_policy.go +++ b/internal/service/dlm/lifecycle_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dlm import ( @@ -499,14 +502,14 @@ func ResourceLifecyclePolicy() *schema.Resource { func resourceLifecyclePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DLMConn() + conn := meta.(*conns.AWSClient).DLMConn(ctx) input := dlm.CreateLifecyclePolicyInput{ Description: aws.String(d.Get("description").(string)), ExecutionRoleArn: aws.String(d.Get("execution_role_arn").(string)), PolicyDetails: expandPolicyDetails(d.Get("policy_details").([]interface{})), State: aws.String(d.Get("state").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[INFO] Creating DLM lifecycle policy: %s", input) @@ -525,7 +528,7 @@ func resourceLifecyclePolicyCreate(ctx context.Context, d *schema.ResourceData, func resourceLifecyclePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DLMConn() + conn := meta.(*conns.AWSClient).DLMConn(ctx) log.Printf("[INFO] Reading DLM lifecycle policy: %s", d.Id()) out, err := conn.GetLifecyclePolicyWithContext(ctx, &dlm.GetLifecyclePolicyInput{ @@ -550,14 +553,14 @@ func resourceLifecyclePolicyRead(ctx context.Context, d *schema.ResourceData, me return sdkdiag.AppendErrorf(diags, "setting policy details %s", err) } - SetTagsOut(ctx, out.Policy.Tags) + setTagsOut(ctx, out.Policy.Tags) return diags } func resourceLifecyclePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DLMConn() + conn := 
meta.(*conns.AWSClient).DLMConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := dlm.UpdateLifecyclePolicyInput{ @@ -589,7 +592,7 @@ func resourceLifecyclePolicyUpdate(ctx context.Context, d *schema.ResourceData, func resourceLifecyclePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DLMConn() + conn := meta.(*conns.AWSClient).DLMConn(ctx) log.Printf("[INFO] Deleting DLM lifecycle policy: %s", d.Id()) _, err := conn.DeleteLifecyclePolicyWithContext(ctx, &dlm.DeleteLifecyclePolicyInput{ diff --git a/internal/service/dlm/lifecycle_policy_test.go b/internal/service/dlm/lifecycle_policy_test.go index e220bf2a080..c7e5c532933 100644 --- a/internal/service/dlm/lifecycle_policy_test.go +++ b/internal/service/dlm/lifecycle_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dlm_test import ( @@ -522,7 +525,7 @@ func TestAccDLMLifecyclePolicy_disappears(t *testing.T) { func testAccCheckLifecyclePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DLMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DLMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dlm_lifecycle_policy" { @@ -559,7 +562,7 @@ func checkLifecyclePolicyExists(ctx context.Context, name string) resource.TestC return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DLMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DLMConn(ctx) input := dlm.GetLifecyclePolicyInput{ PolicyId: aws.String(rs.Primary.ID), @@ -576,7 +579,7 @@ func checkLifecyclePolicyExists(ctx context.Context, name string) resource.TestC } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).DLMConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).DLMConn(ctx) input := &dlm.GetLifecyclePoliciesInput{} diff --git a/internal/service/dlm/service_package_gen.go b/internal/service/dlm/service_package_gen.go index 74415ef323c..9ebd35cd101 100644 --- a/internal/service/dlm/service_package_gen.go +++ b/internal/service/dlm/service_package_gen.go @@ -5,6 +5,10 @@ package dlm import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + dlm_sdkv1 "github.com/aws/aws-sdk-go/service/dlm" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DLM } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*dlm_sdkv1.DLM, error) { + sess := config["session"].(*session_sdkv1.Session) + + return dlm_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/dlm/sweep.go b/internal/service/dlm/sweep.go index 55824d25156..26430d3c590 100644 --- a/internal/service/dlm/sweep.go +++ b/internal/service/dlm/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/dlm" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -25,13 +27,13 @@ func init() { func sweepLifecyclePolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DLMConn() + conn := client.DLMConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -58,7 +60,7 @@ func sweepLifecyclePolicies(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DLM Lifecycle Policy for %s: %w", region, err)) } diff --git a/internal/service/dlm/tags_gen.go b/internal/service/dlm/tags_gen.go index db2ab2482b7..cd55a3c4533 100644 --- a/internal/service/dlm/tags_gen.go +++ b/internal/service/dlm/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists dlm service tags. +// listTags lists dlm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn dlmiface.DLMAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn dlmiface.DLMAPI, identifier string) (tftags.KeyValueTags, error) { input := &dlm.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn dlmiface.DLMAPI, identifier string) (tft // ListTags lists dlm service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DLMConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DLMConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from dlm service tags. +// KeyValueTags creates tftags.KeyValueTags from dlm service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns dlm service tags from Context. +// getTagsIn returns dlm service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets dlm service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets dlm service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates dlm service tags. 
+// updateTags updates dlm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn dlmiface.DLMAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn dlmiface.DLMAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn dlmiface.DLMAPI, identifier string, ol // UpdateTags updates dlm service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DLMConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DLMConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/dms/certificate.go b/internal/service/dms/certificate.go index 97b0ce711bc..7f0d960df07 100644 --- a/internal/service/dms/certificate.go +++ b/internal/service/dms/certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -70,12 +73,12 @@ func ResourceCertificate() *schema.Resource { func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) certificateID := d.Get("certificate_id").(string) request := &dms.ImportCertificateInput{ CertificateIdentifier: aws.String(certificateID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } pem, pemSet := d.GetOk("certificate_pem") @@ -110,7 +113,7 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) response, err := conn.DescribeCertificatesWithContext(ctx, &dms.DescribeCertificatesInput{ Filters: []*dms.Filter{ @@ -155,7 +158,7 @@ func resourceCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.DeleteCertificateInput{ CertificateArn: aws.String(d.Get("certificate_arn").(string)), diff --git a/internal/service/dms/certificate_data_source.go b/internal/service/dms/certificate_data_source.go index f1587c60170..ee9f5f00692 100644 --- a/internal/service/dms/certificate_data_source.go +++ b/internal/service/dms/certificate_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -81,7 +84,7 @@ const ( ) func dataSourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -111,7 +114,7 @@ func dataSourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta to_date := out.ValidToDate.String() d.Set("valid_to_date", to_date) - tags, err := ListTags(ctx, conn, aws.StringValue(out.CertificateArn)) + tags, err := listTags(ctx, conn, aws.StringValue(out.CertificateArn)) if err != nil { return create.DiagError(names.DMS, create.ErrActionReading, DSNameCertificate, d.Id(), err) diff --git a/internal/service/dms/certificate_data_source_test.go b/internal/service/dms/certificate_data_source_test.go index d92e005ae84..b94c03a207b 100644 --- a/internal/service/dms/certificate_data_source_test.go +++ b/internal/service/dms/certificate_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( diff --git a/internal/service/dms/certificate_test.go b/internal/service/dms/certificate_test.go index 9f3af0042ea..ed05912c081 100644 --- a/internal/service/dms/certificate_test.go +++ b/internal/service/dms/certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( @@ -145,7 +148,7 @@ func testAccCheckCertificateDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) output, err := conn.DescribeCertificatesWithContext(ctx, &dms.DescribeCertificatesInput{ Filters: []*dms.Filter{ @@ -184,7 +187,7 @@ func testAccCertificateExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) output, err := conn.DescribeCertificatesWithContext(ctx, &dms.DescribeCertificatesInput{ Filters: []*dms.Filter{ diff --git a/internal/service/dms/consts.go b/internal/service/dms/consts.go index 3caf64af684..ad7c868f476 100644 --- a/internal/service/dms/consts.go +++ b/internal/service/dms/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms const ( diff --git a/internal/service/dms/endpoint.go b/internal/service/dms/endpoint.go index f77a7900bc4..66a8f4257b1 100644 --- a/internal/service/dms/endpoint.go +++ b/internal/service/dms/endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -691,14 +694,14 @@ func ResourceEndpoint() *schema.Resource { func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) endpointID := d.Get("endpoint_id").(string) input := &dms.CreateEndpointInput{ EndpointIdentifier: aws.String(endpointID), EndpointType: aws.String(d.Get("endpoint_type").(string)), EngineName: aws.String(d.Get("engine_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("certificate_arn"); ok { @@ -942,7 +945,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) endpoint, err := FindEndpointByID(ctx, conn, d.Id()) @@ -967,7 +970,7 @@ func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &dms.ModifyEndpointInput{ @@ -1309,7 +1312,7 @@ func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) log.Printf("[DEBUG] Deleting DMS Endpoint: (%s)", d.Id()) _, err := conn.DeleteEndpointWithContext(ctx, &dms.DeleteEndpointInput{ @@ -1559,7 +1562,7 @@ func resourceEndpointSetState(d 
*schema.ResourceData, endpoint *dms.Endpoint) er } case engineNameS3: if err := d.Set("s3_settings", flattenS3Settings(endpoint.S3Settings)); err != nil { - return fmt.Errorf("Error setting s3_settings for DMS: %s", err) + return fmt.Errorf("setting s3_settings for DMS: %s", err) } default: d.Set("database_name", endpoint.DatabaseName) diff --git a/internal/service/dms/endpoint_data_source.go b/internal/service/dms/endpoint_data_source.go index f9fe7a6d1a7..70344cea43b 100644 --- a/internal/service/dms/endpoint_data_source.go +++ b/internal/service/dms/endpoint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -496,7 +499,7 @@ const ( ) func dataSourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -524,7 +527,7 @@ func dataSourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta in create.DiagError(names.DMS, create.ErrActionReading, DSNameEndpoint, d.Id(), err) } - tags, err := ListTags(ctx, conn, aws.StringValue(out.EndpointArn)) + tags, err := listTags(ctx, conn, aws.StringValue(out.EndpointArn)) if err != nil { return create.DiagError(names.DMS, create.ErrActionReading, DSNameEndpoint, d.Id(), err) } diff --git a/internal/service/dms/endpoint_data_source_test.go b/internal/service/dms/endpoint_data_source_test.go index 7dc5b1fd063..9e56fc88445 100644 --- a/internal/service/dms/endpoint_data_source_test.go +++ b/internal/service/dms/endpoint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( diff --git a/internal/service/dms/endpoint_test.go b/internal/service/dms/endpoint_test.go index e22c2c7ab3e..4c045fb1781 100644 --- a/internal/service/dms/endpoint_test.go +++ b/internal/service/dms/endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( @@ -2076,7 +2079,7 @@ func testAccCheckResourceAttrRegionalHostname(resourceName, attributeName, servi func testAccCheckEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dms_endpoint" { @@ -2111,7 +2114,7 @@ func testAccCheckEndpointExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("No DMS Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) _, err := tfdms.FindEndpointByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/dms/event_subscription.go b/internal/service/dms/event_subscription.go index e4da3fae399..6e8226ba0ba 100644 --- a/internal/service/dms/event_subscription.go +++ b/internal/service/dms/event_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -94,14 +97,14 @@ func ResourceEventSubscription() *schema.Resource { func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.CreateEventSubscriptionInput{ Enabled: aws.Bool(d.Get("enabled").(bool)), SnsTopicArn: aws.String(d.Get("sns_topic_arn").(string)), SubscriptionName: aws.String(d.Get("name").(string)), SourceType: aws.String(d.Get("source_type").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("event_categories"); ok { @@ -139,7 +142,7 @@ func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.DescribeEventSubscriptionsInput{ SubscriptionName: aws.String(d.Id()), @@ -154,7 +157,7 @@ func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error reading DMS event subscription: %s", err) + return sdkdiag.AppendErrorf(diags, "reading DMS event subscription: %s", err) } if response == nil || len(response.EventSubscriptionsList) == 0 || response.EventSubscriptionsList[0] == nil { @@ -186,7 +189,7 @@ func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) if d.HasChanges("enabled", "event_categories", "sns_topic_arn", "source_type") { request := 
&dms.ModifyEventSubscriptionInput{ @@ -226,7 +229,7 @@ func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.DeleteEventSubscriptionInput{ SubscriptionName: aws.String(d.Id()), diff --git a/internal/service/dms/event_subscription_test.go b/internal/service/dms/event_subscription_test.go index 1fdd8650d08..c6665a4c870 100644 --- a/internal/service/dms/event_subscription_test.go +++ b/internal/service/dms/event_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( @@ -211,7 +214,7 @@ func testAccCheckEventSubscriptionDestroy(ctx context.Context) resource.TestChec continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) resp, err := conn.DescribeEventSubscriptionsWithContext(ctx, &dms.DescribeEventSubscriptionsInput{ SubscriptionName: aws.String(rs.Primary.ID), @@ -245,7 +248,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, eventSub return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) resp, err := conn.DescribeEventSubscriptionsWithContext(ctx, &dms.DescribeEventSubscriptionsInput{ SubscriptionName: aws.String(rs.Primary.ID), }) @@ -386,7 +389,7 @@ resource "aws_dms_event_subscription" "test" { } func testAccPreCheckEKS(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) input := &eks.ListClustersInput{} diff --git a/internal/service/dms/find.go b/internal/service/dms/find.go index 
26541737a86..fe01de0b028 100644 --- a/internal/service/dms/find.go +++ b/internal/service/dms/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( diff --git a/internal/service/dms/generate.go b/internal/service/dms/generate.go index 4764686ce63..a9da99dfa6b 100644 --- a/internal/service/dms/generate.go +++ b/internal/service/dms/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTagsToResource -UntagOp=RemoveTagsFromResource -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package dms diff --git a/internal/service/dms/replication_instance.go b/internal/service/dms/replication_instance.go index a615a148209..6a7e9ff102a 100644 --- a/internal/service/dms/replication_instance.go +++ b/internal/service/dms/replication_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -143,7 +146,7 @@ func ResourceReplicationInstance() *schema.Resource { func resourceReplicationInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.CreateReplicationInstanceInput{ AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), @@ -151,7 +154,7 @@ func resourceReplicationInstanceCreate(ctx context.Context, d *schema.ResourceDa MultiAZ: aws.Bool(d.Get("multi_az").(bool)), ReplicationInstanceClass: aws.String(d.Get("replication_instance_class").(string)), ReplicationInstanceIdentifier: aws.String(d.Get("replication_instance_id").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } // WARNING: GetOk returns the zero value for the type if the key is omitted in config. This means for optional @@ -209,7 +212,7 @@ func resourceReplicationInstanceCreate(ctx context.Context, d *schema.ResourceDa func resourceReplicationInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) response, err := conn.DescribeReplicationInstancesWithContext(ctx, &dms.DescribeReplicationInstancesInput{ Filters: []*dms.Filter{ @@ -274,7 +277,7 @@ func resourceReplicationInstanceRead(ctx context.Context, d *schema.ResourceData func resourceReplicationInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.ModifyReplicationInstanceInput{ ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), @@ -358,7 +361,7 @@ func resourceReplicationInstanceUpdate(ctx context.Context, d *schema.ResourceDa 
func resourceReplicationInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.DeleteReplicationInstanceInput{ ReplicationInstanceArn: aws.String(d.Get("replication_instance_arn").(string)), diff --git a/internal/service/dms/replication_instance_data_source.go b/internal/service/dms/replication_instance_data_source.go index 3b60d3cc60a..47519c7b512 100644 --- a/internal/service/dms/replication_instance_data_source.go +++ b/internal/service/dms/replication_instance_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -102,7 +105,7 @@ const ( ) func dataSourceReplicationInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -127,7 +130,7 @@ func dataSourceReplicationInstanceRead(ctx context.Context, d *schema.ResourceDa d.Set("replication_instance_class", instance.ReplicationInstanceClass) d.Set("replication_instance_id", instance.ReplicationInstanceIdentifier) - tags, err := ListTags(ctx, conn, aws.StringValue(instance.ReplicationInstanceArn)) + tags, err := listTags(ctx, conn, aws.StringValue(instance.ReplicationInstanceArn)) if err != nil { return create.DiagError(names.DMS, create.ErrActionReading, DSNameReplicationInstance, d.Id(), err) diff --git a/internal/service/dms/replication_instance_data_source_test.go b/internal/service/dms/replication_instance_data_source_test.go index f29302e4a46..96d7578e98c 100644 --- a/internal/service/dms/replication_instance_data_source_test.go +++ b/internal/service/dms/replication_instance_data_source_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( diff --git a/internal/service/dms/replication_instance_test.go b/internal/service/dms/replication_instance_test.go index ccb10525caf..8ec626dba7c 100644 --- a/internal/service/dms/replication_instance_test.go +++ b/internal/service/dms/replication_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( @@ -508,7 +511,7 @@ func testAccCheckReplicationInstanceExists(ctx context.Context, n string) resour if rs.Primary.ID == "" { return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) resp, err := conn.DescribeReplicationInstancesWithContext(ctx, &dms.DescribeReplicationInstancesInput{ Filters: []*dms.Filter{ { @@ -536,7 +539,7 @@ func testAccCheckReplicationInstanceDestroy(ctx context.Context) resource.TestCh continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) resp, err := conn.DescribeReplicationInstancesWithContext(ctx, &dms.DescribeReplicationInstancesInput{ Filters: []*dms.Filter{ @@ -573,7 +576,7 @@ func testAccCheckReplicationInstanceDestroy(ctx context.Context) resource.TestCh // Ensure at least two engine versions of the replication instance class are available -func testAccReplicationInstanceEngineVersionsPreCheck(t *testing.T) []string { -	conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() +func testAccReplicationInstanceEngineVersionsPreCheck(ctx context.Context, t *testing.T) []string { +	conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx)
Not currently available as an input diff --git a/internal/service/dms/replication_subnet_group.go b/internal/service/dms/replication_subnet_group.go index a02f70fc727..f5c18d26b58 100644 --- a/internal/service/dms/replication_subnet_group.go +++ b/internal/service/dms/replication_subnet_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -81,13 +84,13 @@ const ( func resourceReplicationSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.CreateReplicationSubnetGroupInput{ ReplicationSubnetGroupIdentifier: aws.String(d.Get("replication_subnet_group_id").(string)), ReplicationSubnetGroupDescription: aws.String(d.Get("replication_subnet_group_description").(string)), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Println("[DEBUG] DMS create replication subnet group:", request) @@ -124,7 +127,7 @@ func resourceReplicationSubnetGroupCreate(ctx context.Context, d *schema.Resourc func resourceReplicationSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) response, err := conn.DescribeReplicationSubnetGroupsWithContext(ctx, &dms.DescribeReplicationSubnetGroupsInput{ Filters: []*dms.Filter{ @@ -180,7 +183,7 @@ func resourceReplicationSubnetGroupRead(ctx context.Context, d *schema.ResourceD func resourceReplicationSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) // Updates to subnet groups are only valid when sending SubnetIds 
even if there are no // changes to SubnetIds. @@ -205,7 +208,7 @@ func resourceReplicationSubnetGroupUpdate(ctx context.Context, d *schema.Resourc func resourceReplicationSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) request := &dms.DeleteReplicationSubnetGroupInput{ ReplicationSubnetGroupIdentifier: aws.String(d.Get("replication_subnet_group_id").(string)), diff --git a/internal/service/dms/replication_subnet_group_data_source.go b/internal/service/dms/replication_subnet_group_data_source.go index a9812c1b364..d6dad6bcb37 100644 --- a/internal/service/dms/replication_subnet_group_data_source.go +++ b/internal/service/dms/replication_subnet_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -57,7 +60,7 @@ const ( ) func dataSourceReplicationSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -89,7 +92,7 @@ func dataSourceReplicationSubnetGroupRead(ctx context.Context, d *schema.Resourc } d.Set("subnet_ids", subnetIDs) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return create.DiagError(names.DMS, create.ErrActionReading, DSNameReplicationSubnetGroup, d.Id(), err) } diff --git a/internal/service/dms/replication_subnet_group_data_source_test.go b/internal/service/dms/replication_subnet_group_data_source_test.go index 4b761aba401..581abe76255 100644 --- a/internal/service/dms/replication_subnet_group_data_source_test.go +++ b/internal/service/dms/replication_subnet_group_data_source_test.go @@ -1,3 +1,6 @@ 
+// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( diff --git a/internal/service/dms/replication_subnet_group_test.go b/internal/service/dms/replication_subnet_group_test.go index 3ec940c1bec..58a7bca8fad 100644 --- a/internal/service/dms/replication_subnet_group_test.go +++ b/internal/service/dms/replication_subnet_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( @@ -69,7 +72,7 @@ func checkReplicationSubnetGroupExistsProviders(ctx context.Context, n string, p continue } - conn := provider.Meta().(*conns.AWSClient).DMSConn() + conn := provider.Meta().(*conns.AWSClient).DMSConn(ctx) _, err := conn.DescribeReplicationSubnetGroupsWithContext(ctx, &dms.DescribeReplicationSubnetGroupsInput{ Filters: []*dms.Filter{ { diff --git a/internal/service/dms/replication_task.go b/internal/service/dms/replication_task.go index 81740fae5ea..fc051eef743 100644 --- a/internal/service/dms/replication_task.go +++ b/internal/service/dms/replication_task.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -116,7 +119,7 @@ func ResourceReplicationTask() *schema.Resource { func resourceReplicationTaskCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) taskId := d.Get("replication_task_id").(string) @@ -126,7 +129,7 @@ func resourceReplicationTaskCreate(ctx context.Context, d *schema.ResourceData, ReplicationTaskIdentifier: aws.String(taskId), SourceEndpointArn: aws.String(d.Get("source_endpoint_arn").(string)), TableMappings: aws.String(d.Get("table_mappings").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TargetEndpointArn: aws.String(d.Get("target_endpoint_arn").(string)), } @@ -170,7 +173,7 @@ func resourceReplicationTaskCreate(ctx context.Context, d *schema.ResourceData, func resourceReplicationTaskRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) task, err := FindReplicationTaskByID(ctx, conn, d.Id()) @@ -210,7 +213,7 @@ func resourceReplicationTaskRead(ctx context.Context, d *schema.ResourceData, me func resourceReplicationTaskUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) if d.HasChangesExcept("tags", "tags_all", "start_replication_task") { input := &dms.ModifyReplicationTaskInput{ @@ -283,7 +286,7 @@ func resourceReplicationTaskUpdate(ctx context.Context, d *schema.ResourceData, func resourceReplicationTaskDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) if status := 
d.Get("status").(string); status == replicationTaskStatusRunning { if err := stopReplicationTask(ctx, d.Id(), conn); err != nil { diff --git a/internal/service/dms/replication_task_data_source.go b/internal/service/dms/replication_task_data_source.go index 6d750319c59..f69f06cd2ee 100644 --- a/internal/service/dms/replication_task_data_source.go +++ b/internal/service/dms/replication_task_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -76,7 +79,7 @@ const ( ) func dataSourceReplicationTaskRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -105,7 +108,7 @@ func dataSourceReplicationTaskRead(ctx context.Context, d *schema.ResourceData, d.Set("replication_task_settings", settings) - tags, err := ListTags(ctx, conn, aws.StringValue(task.ReplicationTaskArn)) + tags, err := listTags(ctx, conn, aws.StringValue(task.ReplicationTaskArn)) if err != nil { return create.DiagError(names.DMS, create.ErrActionReading, DSNameReplicationTask, d.Id(), err) } diff --git a/internal/service/dms/replication_task_data_source_test.go b/internal/service/dms/replication_task_data_source_test.go index 8cd2658aaeb..0fcde0c5746 100644 --- a/internal/service/dms/replication_task_data_source_test.go +++ b/internal/service/dms/replication_task_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( diff --git a/internal/service/dms/replication_task_test.go b/internal/service/dms/replication_task_test.go index 141884dd57c..f5cc32d4fc6 100644 --- a/internal/service/dms/replication_task_test.go +++ b/internal/service/dms/replication_task_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( @@ -292,7 +295,7 @@ func testAccCheckReplicationTaskExists(ctx context.Context, n string) resource.T return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) _, err := tfdms.FindReplicationTaskByID(ctx, conn, rs.Primary.ID) @@ -307,7 +310,7 @@ func testAccCheckReplicationTaskDestroy(ctx context.Context) resource.TestCheckF continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DMSConn(ctx) _, err := tfdms.FindReplicationTaskByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/dms/s3_endpoint.go b/internal/service/dms/s3_endpoint.go index 65857ef32fa..f60d0bf6bed 100644 --- a/internal/service/dms/s3_endpoint.go +++ b/internal/service/dms/s3_endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( @@ -319,13 +322,13 @@ const ( func resourceS3EndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) input := &dms.CreateEndpointInput{ EndpointIdentifier: aws.String(d.Get("endpoint_id").(string)), EndpointType: aws.String(d.Get("endpoint_type").(string)), EngineName: aws.String("s3"), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("certificate_arn"); ok { @@ -387,7 +390,7 @@ func resourceS3EndpointCreate(ctx context.Context, d *schema.ResourceData, meta func resourceS3EndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) endpoint, err := FindEndpointByID(ctx, conn, d.Id()) @@ -476,7 +479,7 @@ func resourceS3EndpointRead(ctx context.Context, d *schema.ResourceData, meta in func resourceS3EndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &dms.ModifyEndpointInput{ @@ -538,7 +541,7 @@ func resourceS3EndpointUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceS3EndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DMSConn() + conn := meta.(*conns.AWSClient).DMSConn(ctx) log.Printf("[DEBUG] Deleting DMS Endpoint: (%s)", d.Id()) _, err := conn.DeleteEndpointWithContext(ctx, &dms.DeleteEndpointInput{ diff --git a/internal/service/dms/s3_endpoint_test.go b/internal/service/dms/s3_endpoint_test.go index d234512f4c1..8b23b104e02 100644 --- 
a/internal/service/dms/s3_endpoint_test.go +++ b/internal/service/dms/s3_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms_test import ( diff --git a/internal/service/dms/service_package_gen.go b/internal/service/dms/service_package_gen.go index 250b23ed954..3df3cff8dea 100644 --- a/internal/service/dms/service_package_gen.go +++ b/internal/service/dms/service_package_gen.go @@ -5,6 +5,10 @@ package dms import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + databasemigrationservice_sdkv1 "github.com/aws/aws-sdk-go/service/databasemigrationservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -109,4 +113,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DMS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*databasemigrationservice_sdkv1.DatabaseMigrationService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return databasemigrationservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/dms/status.go b/internal/service/dms/status.go index f6874d3b043..884db7c08fc 100644 --- a/internal/service/dms/status.go +++ b/internal/service/dms/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( diff --git a/internal/service/dms/sweep.go b/internal/service/dms/sweep.go index e1d608798fd..ddcc9db0434 100644 --- a/internal/service/dms/sweep.go +++ b/internal/service/dms/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( dms "github.com/aws/aws-sdk-go/service/databasemigrationservice" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -37,13 +39,13 @@ func init() { func sweepReplicationInstances(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DMSConn() + conn := client.DMSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -64,7 +66,7 @@ func sweepReplicationInstances(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing DMS Replication Instances: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DMS Replication Instances for %s: %w", region, err)) } @@ -78,13 +80,13 @@ func sweepReplicationInstances(region string) error { func sweepReplicationTasks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DMSConn() + conn := client.DMSConn(ctx) 
sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -108,7 +110,7 @@ func sweepReplicationTasks(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing DMS Replication Tasks: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DMS Replication Tasks for %s: %w", region, err)) } @@ -122,13 +124,13 @@ func sweepReplicationTasks(region string) error { func sweepEndpoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DMSConn() + conn := client.DMSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -149,7 +151,7 @@ func sweepEndpoints(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing DMS Endpoints: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DMS Endpoints for %s: %w", region, err)) } diff --git a/internal/service/dms/tags_gen.go b/internal/service/dms/tags_gen.go index 7b8099e59a0..ecedfb12efb 100644 --- a/internal/service/dms/tags_gen.go +++ b/internal/service/dms/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists dms service tags. +// listTags lists dms service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn databasemigrationserviceiface.DatabaseMigrationServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn databasemigrationserviceiface.DatabaseMigrationServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &databasemigrationservice.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn databasemigrationserviceiface.DatabaseMi // ListTags lists dms service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DMSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DMSConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*databasemigrationservice.Tag) tft return tftags.New(ctx, m) } -// GetTagsIn returns dms service tags from Context. +// getTagsIn returns dms service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*databasemigrationservice.Tag { +func getTagsIn(ctx context.Context) []*databasemigrationservice.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*databasemigrationservice.Tag { return nil } -// SetTagsOut sets dms service tags in Context. -func SetTagsOut(ctx context.Context, tags []*databasemigrationservice.Tag) { +// setTagsOut sets dms service tags in Context. +func setTagsOut(ctx context.Context, tags []*databasemigrationservice.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates dms service tags. +// updateTags updates dms service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn databasemigrationserviceiface.DatabaseMigrationServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn databasemigrationserviceiface.DatabaseMigrationServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn databasemigrationserviceiface.Database // UpdateTags updates dms service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DMSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DMSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/dms/validate.go b/internal/service/dms/validate.go index b5c146fc24b..470122a31e3 100644 --- a/internal/service/dms/validate.go +++ b/internal/service/dms/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( diff --git a/internal/service/dms/validate_test.go b/internal/service/dms/validate_test.go index 23f7c8d1832..e16967deeaa 100644 --- a/internal/service/dms/validate_test.go +++ b/internal/service/dms/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dms import ( diff --git a/internal/service/dms/wait.go b/internal/service/dms/wait.go index 05b1b739ac8..c5bde1a019c 100644 --- a/internal/service/dms/wait.go +++ b/internal/service/dms/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dms import ( diff --git a/internal/service/docdb/cluster.go b/internal/service/docdb/cluster.go index 42c9d3bd32a..ed0f080dc96 100644 --- a/internal/service/docdb/cluster.go +++ b/internal/service/docdb/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package docdb import ( @@ -283,7 +286,7 @@ func resourceClusterImport(ctx context.Context, func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) // Some API calls (e.g. RestoreDBClusterFromSnapshot do not support all // parameters to correctly apply all settings in one pass. For missing @@ -310,7 +313,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int Engine: aws.String(d.Get("engine").(string)), SnapshotIdentifier: aws.String(d.Get("snapshot_identifier").(string)), DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if attr := d.Get("availability_zones").(*schema.Set); attr.Len() > 0 { @@ -398,7 +401,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int MasterUserPassword: aws.String(d.Get("master_password").(string)), MasterUsername: aws.String(d.Get("master_username").(string)), DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if attr, ok := d.GetOk("global_cluster_identifier"); ok { @@ -515,7 +518,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) input := &docdb.DescribeDBClustersInput{ 
DBClusterIdentifier: aws.String(d.Id()), @@ -616,7 +619,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) requestUpdate := false req := &docdb.ModifyDBClusterInput{ @@ -737,7 +740,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) log.Printf("[DEBUG] Destroying DocumentDB Cluster (%s)", d.Id()) // Automatically remove from global cluster to bypass this error on deletion: diff --git a/internal/service/docdb/cluster_instance.go b/internal/service/docdb/cluster_instance.go index 2b9679fc88b..c5dbc870e81 100644 --- a/internal/service/docdb/cluster_instance.go +++ b/internal/service/docdb/cluster_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package docdb import ( @@ -178,7 +181,7 @@ func ResourceClusterInstance() *schema.Resource { func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) input := &docdb.CreateDBInstanceInput{ DBInstanceClass: aws.String(d.Get("instance_class").(string)), @@ -186,7 +189,7 @@ func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, Engine: aws.String(d.Get("engine").(string)), PromotionTier: aws.Int64(int64(d.Get("promotion_tier").(int))), AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if attr, ok := d.GetOk("availability_zone"); ok { @@ -257,7 +260,7 @@ func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) db, err := resourceInstanceRetrieve(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -333,7 +336,7 @@ func resourceClusterInstanceRead(ctx context.Context, d *schema.ResourceData, me func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) requestUpdate := false req := &docdb.ModifyDBInstanceInput{ @@ -391,7 +394,7 @@ func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, _, err = conn.ModifyDBInstanceWithContext(ctx, req) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error modifying DB Instance %s: %s", d.Id(), err) + return 
sdkdiag.AppendErrorf(diags, "modifying DB Instance %s: %s", d.Id(), err) } // reuse db_instance refresh func @@ -416,7 +419,7 @@ func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, func resourceClusterInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) opts := docdb.DeleteDBInstanceInput{DBInstanceIdentifier: aws.String(d.Id())} diff --git a/internal/service/docdb/cluster_instance_test.go b/internal/service/docdb/cluster_instance_test.go index 679f15c4cfe..22ba8310512 100644 --- a/internal/service/docdb/cluster_instance_test.go +++ b/internal/service/docdb/cluster_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package docdb_test import ( @@ -286,7 +289,7 @@ func testAccCheckClusterInstanceAttributes(v *docdb.DBInstance) resource.TestChe func testAccClusterInstanceDisappears(ctx context.Context, v *docdb.DBInstance) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) opts := &docdb.DeleteDBInstanceInput{ DBInstanceIdentifier: v.DBInstanceIdentifier, } @@ -322,7 +325,7 @@ func testAccCheckClusterInstanceExists(ctx context.Context, n string, v *docdb.D return fmt.Errorf("No DB Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) resp, err := conn.DescribeDBInstancesWithContext(ctx, &docdb.DescribeDBInstancesInput{ DBInstanceIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/docdb/cluster_parameter_group.go b/internal/service/docdb/cluster_parameter_group.go index 9409aee8511..27a0b9ab0dc 100644 --- a/internal/service/docdb/cluster_parameter_group.go +++ 
b/internal/service/docdb/cluster_parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package docdb import ( @@ -103,7 +106,7 @@ func ResourceClusterParameterGroup() *schema.Resource { func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) var groupName string if v, ok := d.GetOk("name"); ok { @@ -118,7 +121,7 @@ func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.Resource DBClusterParameterGroupName: aws.String(groupName), DBParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } resp, err := conn.CreateDBClusterParameterGroupWithContext(ctx, &input) @@ -135,7 +138,7 @@ func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.Resource func resourceClusterParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) describeOpts := &docdb.DescribeDBClusterParameterGroupsInput{ DBClusterParameterGroupName: aws.String(d.Id()), @@ -180,7 +183,7 @@ func resourceClusterParameterGroupRead(ctx context.Context, d *schema.ResourceDa func resourceClusterParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -224,7 +227,7 @@ func resourceClusterParameterGroupUpdate(ctx context.Context, d *schema.Resource func resourceClusterParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) deleteOpts := &docdb.DeleteDBClusterParameterGroupInput{ DBClusterParameterGroupName: aws.String(d.Id()), diff --git a/internal/service/docdb/cluster_parameter_group_test.go b/internal/service/docdb/cluster_parameter_group_test.go index 27b1c0568a0..804b2b68c8a 100644 --- a/internal/service/docdb/cluster_parameter_group_test.go +++ b/internal/service/docdb/cluster_parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package docdb_test import ( @@ -295,7 +298,7 @@ func TestAccDocDBClusterParameterGroup_tags(t *testing.T) { func testAccCheckClusterParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_docdb_cluster_parameter_group" { @@ -327,7 +330,7 @@ func testAccCheckClusterParameterGroupDestroy(ctx context.Context) resource.Test func testAccCheckClusterParameterGroupDisappears(ctx context.Context, group *docdb.DBClusterParameterGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) params := &docdb.DeleteDBClusterParameterGroupInput{ DBClusterParameterGroupName: group.DBClusterParameterGroupName, @@ -367,7 +370,7 @@ func testAccCheckClusterParameterGroupExists(ctx context.Context, n string, v *d return errors.New("No DocumentDB Cluster Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) opts := docdb.DescribeDBClusterParameterGroupsInput{ 
DBClusterParameterGroupName: aws.String(rs.Primary.ID), diff --git a/internal/service/docdb/cluster_snapshot.go b/internal/service/docdb/cluster_snapshot.go index f8ea1e2438a..a101ab6e18e 100644 --- a/internal/service/docdb/cluster_snapshot.go +++ b/internal/service/docdb/cluster_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package docdb import ( @@ -96,7 +99,7 @@ func ResourceClusterSnapshot() *schema.Resource { func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) params := &docdb.CreateDBClusterSnapshotInput{ DBClusterIdentifier: aws.String(d.Get("db_cluster_identifier").(string)), @@ -129,7 +132,7 @@ func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) params := &docdb.DescribeDBClusterSnapshotsInput{ DBClusterSnapshotIdentifier: aws.String(d.Id()), @@ -173,7 +176,7 @@ func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, me func resourceClusterSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DocDBConn() + conn := meta.(*conns.AWSClient).DocDBConn(ctx) params := &docdb.DeleteDBClusterSnapshotInput{ DBClusterSnapshotIdentifier: aws.String(d.Id()), diff --git a/internal/service/docdb/cluster_snapshot_test.go b/internal/service/docdb/cluster_snapshot_test.go index 61281653650..436ebf97fb2 100644 --- a/internal/service/docdb/cluster_snapshot_test.go +++ b/internal/service/docdb/cluster_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package docdb_test import ( @@ -56,7 +59,7 @@ func TestAccDocDBClusterSnapshot_basic(t *testing.T) { func testAccCheckClusterSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_docdb_cluster_snapshot" { @@ -95,7 +98,7 @@ func testAccCheckClusterSnapshotExists(ctx context.Context, resourceName string, return fmt.Errorf("No ID is set for %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx) request := &docdb.DescribeDBClusterSnapshotsInput{ DBClusterSnapshotIdentifier: aws.String(rs.Primary.ID), diff --git a/internal/service/docdb/cluster_test.go b/internal/service/docdb/cluster_test.go index d993f835f73..4319f7e6ef0 100644 --- a/internal/service/docdb/cluster_test.go +++ b/internal/service/docdb/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package docdb_test import ( @@ -839,7 +842,7 @@ func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckClusterDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).DocDBConn() + conn := provider.Meta().(*conns.AWSClient).DocDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_docdb_cluster" { @@ -886,7 +889,7 @@ func testAccCheckClusterExistsProvider(ctx context.Context, n string, v *docdb.D } provider := providerF() - conn := provider.Meta().(*conns.AWSClient).DocDBConn() + conn := provider.Meta().(*conns.AWSClient).DocDBConn(ctx) resp, err := conn.DescribeDBClustersWithContext(ctx, &docdb.DescribeDBClustersInput{ DBClusterIdentifier: aws.String(rs.Primary.ID), }) @@ -927,7 +930,7 @@ func testAccCheckClusterSnapshot(ctx context.Context, rInt int) resource.TestChe snapshot_identifier := fmt.Sprintf("tf-acctest-docdbcluster-snapshot-%d", rInt) awsClient := acctest.Provider.Meta().(*conns.AWSClient) - conn := awsClient.DocDBConn() + conn := awsClient.DocDBConn(ctx) log.Printf("[INFO] Deleting the Snapshot %s", snapshot_identifier) _, snapDeleteErr := conn.DeleteDBClusterSnapshotWithContext(ctx, &docdb.DeleteDBClusterSnapshotInput{ diff --git a/internal/service/docdb/consts.go b/internal/service/docdb/consts.go index fc573cd750f..0cf8edbbf85 100644 --- a/internal/service/docdb/consts.go +++ b/internal/service/docdb/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdb/engine_version_data_source.go b/internal/service/docdb/engine_version_data_source.go
index 348fcbd5896..56bbb430501 100644
--- a/internal/service/docdb/engine_version_data_source.go
+++ b/internal/service/docdb/engine_version_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
@@ -77,7 +80,7 @@ func DataSourceEngineVersion() *schema.Resource {
 
 func dataSourceEngineVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.DescribeDBEngineVersionsInput{}
 
diff --git a/internal/service/docdb/engine_version_data_source_test.go b/internal/service/docdb/engine_version_data_source_test.go
index 5358faba352..ee9a165bf26 100644
--- a/internal/service/docdb/engine_version_data_source_test.go
+++ b/internal/service/docdb/engine_version_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb_test
 
 import (
@@ -84,7 +87,7 @@ func TestAccDocDBEngineVersionDataSource_defaultOnly(t *testing.T) {
 }
 
 func testAccEngineVersionPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.DescribeDBEngineVersionsInput{
 		Engine: aws.String("docdb"),
diff --git a/internal/service/docdb/event_subscription.go b/internal/service/docdb/event_subscription.go
index ed3a492b6c4..98e009e5205 100644
--- a/internal/service/docdb/event_subscription.go
+++ b/internal/service/docdb/event_subscription.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
@@ -96,14 +99,14 @@ func ResourceEventSubscription() *schema.Resource {
 }
 
 func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string))
 	input := &docdb.CreateEventSubscriptionInput{
 		Enabled:          aws.Bool(d.Get("enabled").(bool)),
 		SnsTopicArn:      aws.String(d.Get("sns_topic_arn").(string)),
 		SubscriptionName: aws.String(name),
-		Tags:             GetTagsIn(ctx),
+		Tags:             getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("event_categories"); ok && v.(*schema.Set).Len() > 0 {
@@ -134,7 +137,7 @@ func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData
 }
 
 func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	output, err := FindEventSubscriptionByID(ctx, conn, d.Id())
 
@@ -162,7 +165,7 @@ func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData,
 }
 
 func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	if d.HasChangesExcept("tags", "tags_all", "source_ids") {
 		input := &docdb.ModifyEventSubscriptionInput{
@@ -242,7 +245,7 @@ func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData
 }
 
 func resourceEventSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	log.Printf("[DEBUG] Deleting DocumentDB Event Subscription: %s", d.Id())
 	_, err := conn.DeleteEventSubscriptionWithContext(ctx, &docdb.DeleteEventSubscriptionInput{
diff --git a/internal/service/docdb/event_subscription_test.go b/internal/service/docdb/event_subscription_test.go
index 49e4ef54a66..a015dc93c85 100644
--- a/internal/service/docdb/event_subscription_test.go
+++ b/internal/service/docdb/event_subscription_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb_test
 
 import (
@@ -240,7 +243,7 @@ func testAccCheckEventSubscriptionDestroy(ctx context.Context) resource.TestChec
 				continue
 			}
 
-			conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 			_, err := tfdocdb.FindEventSubscriptionByID(ctx, conn, rs.Primary.ID)
 
@@ -270,7 +273,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, eventSub
 			return fmt.Errorf("No DocumentDB Event Subscription ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		res, err := tfdocdb.FindEventSubscriptionByID(ctx, conn, rs.Primary.ID)
 
diff --git a/internal/service/docdb/find.go b/internal/service/docdb/find.go
index ff186887534..eb66e779ade 100644
--- a/internal/service/docdb/find.go
+++ b/internal/service/docdb/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdb/flex.go b/internal/service/docdb/flex.go
index ed9a624b0ef..2a361355aff 100644
--- a/internal/service/docdb/flex.go
+++ b/internal/service/docdb/flex.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdb/generate.go b/internal/service/docdb/generate.go
index df6327f6df0..6820d934fa4 100644
--- a/internal/service/docdb/generate.go
+++ b/internal/service/docdb/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceName -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceName -UntagOp=RemoveTagsFromResource -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package docdb
diff --git a/internal/service/docdb/global_cluster.go b/internal/service/docdb/global_cluster.go
index 10f809b9623..7bd43218feb 100644
--- a/internal/service/docdb/global_cluster.go
+++ b/internal/service/docdb/global_cluster.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
@@ -113,7 +116,7 @@ func ResourceGlobalCluster() *schema.Resource {
 }
 
 func resourceGlobalClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.CreateGlobalClusterInput{
 		GlobalClusterIdentifier: aws.String(d.Get("global_cluster_identifier").(string)),
@@ -145,20 +148,20 @@ func resourceGlobalClusterCreate(ctx context.Context, d *schema.ResourceData, me
 	output, err := conn.CreateGlobalClusterWithContext(ctx, input)
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("creating DocumentDB Global Cluster: %w", err))
+		return diag.Errorf("creating DocumentDB Global Cluster: %s", err)
 	}
 
 	d.SetId(aws.StringValue(output.GlobalCluster.GlobalClusterIdentifier))
 
 	if err := waitForGlobalClusterCreation(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
-		return diag.FromErr(fmt.Errorf("waiting for DocumentDB Global Cluster (%s) availability: %w", d.Id(), err))
+		return diag.Errorf("waiting for DocumentDB Global Cluster (%s) availability: %s", d.Id(), err)
 	}
 
 	return resourceGlobalClusterRead(ctx, d, meta)
 }
 
 func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	globalCluster, err := FindGlobalClusterById(ctx, conn, d.Id())
 
@@ -169,7 +172,7 @@ func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("reading DocumentDB Global Cluster: %w", err))
+		return diag.Errorf("reading DocumentDB Global Cluster: %s", err)
 	}
 
 	if !d.IsNewResource() && globalCluster == nil {
@@ -192,7 +195,7 @@ func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta
 	d.Set("global_cluster_identifier", globalCluster.GlobalClusterIdentifier)
 
 	if err := d.Set("global_cluster_members", flattenGlobalClusterMembers(globalCluster.GlobalClusterMembers)); err != nil {
-		return diag.FromErr(fmt.Errorf("error setting global_cluster_members: %w", err))
+		return diag.Errorf("setting global_cluster_members: %s", err)
 	}
 
 	d.Set("global_cluster_resource_id", globalCluster.GlobalClusterResourceId)
@@ -202,7 +205,7 @@ func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta
 }
 
 func resourceGlobalClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.ModifyGlobalClusterInput{
 		DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)),
@@ -224,18 +227,18 @@ func resourceGlobalClusterUpdate(ctx context.Context, d *schema.ResourceData, me
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("updating DocumentDB Global Cluster: %w", err))
+		return diag.Errorf("updating DocumentDB Global Cluster: %s", err)
 	}
 
 	if err := waitForGlobalClusterUpdate(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil {
-		return diag.FromErr(fmt.Errorf("waiting for DocumentDB Global Cluster (%s) update: %w", d.Id(), err))
+		return diag.Errorf("waiting for DocumentDB Global Cluster (%s) update: %s", d.Id(), err)
 	}
 
 	return resourceGlobalClusterRead(ctx, d, meta)
 }
 
 func resourceGlobalClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	for _, globalClusterMemberRaw := range d.Get("global_cluster_members").(*schema.Set).List() {
 		globalClusterMember, ok := globalClusterMemberRaw.(map[string]interface{})
@@ -258,11 +261,11 @@ func resourceGlobalClusterDelete(ctx context.Context, d *schema.ResourceData, me
 			continue
 		}
 		if err != nil {
-			return diag.FromErr(fmt.Errorf("removing DocumentDB Cluster (%s) from Global Cluster (%s): %w", dbClusterArn, d.Id(), err))
+			return diag.Errorf("removing DocumentDB Cluster (%s) from Global Cluster (%s): %s", dbClusterArn, d.Id(), err)
 		}
 
 		if err := waitForGlobalClusterRemoval(ctx, conn, dbClusterArn, d.Timeout(schema.TimeoutDelete)); err != nil {
-			return diag.FromErr(fmt.Errorf("waiting for DocumentDB Cluster (%s) removal from DocumentDB Global Cluster (%s): %w", dbClusterArn, d.Id(), err))
+			return diag.Errorf("waiting for DocumentDB Cluster (%s) removal from DocumentDB Global Cluster (%s): %s", dbClusterArn, d.Id(), err)
 		}
 	}
 
@@ -294,11 +297,11 @@ func resourceGlobalClusterDelete(ctx context.Context, d *schema.ResourceData, me
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("deleting DocumentDB Global Cluster: %w", err))
+		return diag.Errorf("deleting DocumentDB Global Cluster: %s", err)
 	}
 
 	if err := WaitForGlobalClusterDeletion(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil {
-		return diag.FromErr(fmt.Errorf("waiting for DocumentDB Global Cluster (%s) deletion: %w", d.Id(), err))
+		return diag.Errorf("waiting for DocumentDB Global Cluster (%s) deletion: %s", d.Id(), err)
 	}
 
 	return nil
diff --git a/internal/service/docdb/global_cluster_test.go b/internal/service/docdb/global_cluster_test.go
index 34c83ef31cc..1719be4f81b 100644
--- a/internal/service/docdb/global_cluster_test.go
+++ b/internal/service/docdb/global_cluster_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb_test
 
 import (
@@ -314,7 +317,7 @@ func testAccCheckGlobalClusterExists(ctx context.Context, resourceName string, g
 			return fmt.Errorf("no DocumentDB Global Cluster ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		cluster, err := tfdocdb.FindGlobalClusterById(ctx, conn, rs.Primary.ID)
 		if err != nil {
@@ -337,7 +340,7 @@ func testAccCheckGlobalClusterExists(ctx context.Context, resourceName string, g
 
 func testAccCheckGlobalClusterDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_docdb_global_cluster" {
@@ -367,7 +370,7 @@ func testAccCheckGlobalClusterDestroy(ctx context.Context) resource.TestCheckFun
 
 func testAccCheckGlobalClusterDisappears(ctx context.Context, globalCluster *docdb.GlobalCluster) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		input := &docdb.DeleteGlobalClusterInput{
 			GlobalClusterIdentifier: globalCluster.GlobalClusterIdentifier,
@@ -404,7 +407,7 @@ func testAccCheckGlobalClusterRecreated(i, j *docdb.GlobalCluster) resource.Test
 }
 
 func testAccPreCheckGlobalCluster(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.DescribeGlobalClustersInput{}
 
diff --git a/internal/service/docdb/orderable_db_instance_data_source.go b/internal/service/docdb/orderable_db_instance_data_source.go
index a729324a30f..91395002cd7 100644
--- a/internal/service/docdb/orderable_db_instance_data_source.go
+++ b/internal/service/docdb/orderable_db_instance_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
@@ -65,7 +68,7 @@ func DataSourceOrderableDBInstance() *schema.Resource {
 
 func dataSourceOrderableDBInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.DescribeOrderableDBInstanceOptionsInput{}
 
diff --git a/internal/service/docdb/orderable_db_instance_data_source_test.go b/internal/service/docdb/orderable_db_instance_data_source_test.go
index a728e8142af..f5e4088871a 100644
--- a/internal/service/docdb/orderable_db_instance_data_source_test.go
+++ b/internal/service/docdb/orderable_db_instance_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb_test
 
 import (
@@ -67,7 +70,7 @@ func TestAccDocDBOrderableDBInstanceDataSource_preferred(t *testing.T) {
 }
 
 func testAccPreCheckOrderableDBInstance(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 	input := &docdb.DescribeOrderableDBInstanceOptionsInput{
 		Engine: aws.String("docdb"),
diff --git a/internal/service/docdb/service_package_gen.go b/internal/service/docdb/service_package_gen.go
index 3f2203dfb9e..b7f3ec3f39a 100644
--- a/internal/service/docdb/service_package_gen.go
+++ b/internal/service/docdb/service_package_gen.go
@@ -5,6 +5,10 @@ package docdb
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	docdb_sdkv1 "github.com/aws/aws-sdk-go/service/docdb"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -89,4 +93,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.DocDB
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*docdb_sdkv1.DocDB, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return docdb_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/docdb/status.go b/internal/service/docdb/status.go
index 4339116d333..90e8e229151 100644
--- a/internal/service/docdb/status.go
+++ b/internal/service/docdb/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdb/subnet_group.go b/internal/service/docdb/subnet_group.go
index d9c7a8b5113..bf1d7bba974 100644
--- a/internal/service/docdb/subnet_group.go
+++ b/internal/service/docdb/subnet_group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
@@ -81,7 +84,7 @@ func ResourceSubnetGroup() *schema.Resource {
 
 func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	subnetIds := flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set))
 
@@ -98,7 +101,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta
 		DBSubnetGroupName:        aws.String(groupName),
 		DBSubnetGroupDescription: aws.String(d.Get("description").(string)),
 		SubnetIds:                subnetIds,
-		Tags:                     GetTagsIn(ctx),
+		Tags:                     getTagsIn(ctx),
 	}
 
 	_, err := conn.CreateDBSubnetGroupWithContext(ctx, &input)
@@ -113,7 +116,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	describeOpts := docdb.DescribeDBSubnetGroupsInput{
 		DBSubnetGroupName: aws.String(d.Id()),
@@ -156,7 +159,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i
 
 func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	if d.HasChanges("subnet_ids", "description") {
 		_, n := d.GetChange("subnet_ids")
@@ -181,7 +184,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DocDBConn()
+	conn := meta.(*conns.AWSClient).DocDBConn(ctx)
 
 	delOpts := docdb.DeleteDBSubnetGroupInput{
 		DBSubnetGroupName: aws.String(d.Id()),
diff --git a/internal/service/docdb/subnet_group_test.go b/internal/service/docdb/subnet_group_test.go
index cba6758cd58..4f9cba607a6 100644
--- a/internal/service/docdb/subnet_group_test.go
+++ b/internal/service/docdb/subnet_group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb_test
 
 import (
@@ -165,7 +168,7 @@ func TestAccDocDBSubnetGroup_updateDescription(t *testing.T) {
 
 func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_docdb_subnet_group" {
@@ -196,7 +199,7 @@ func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc
 
 func testAccCheckSubnetGroupDisappears(ctx context.Context, group *docdb.DBSubnetGroup) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		params := &docdb.DeleteDBSubnetGroupInput{
 			DBSubnetGroupName: group.DBSubnetGroupName,
@@ -222,7 +225,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, n string, v *docdb.DBSub
 			return fmt.Errorf("No ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DocDBConn(ctx)
 
 		resp, err := conn.DescribeDBSubnetGroupsWithContext(ctx, &docdb.DescribeDBSubnetGroupsInput{DBSubnetGroupName: aws.String(rs.Primary.ID)})
 		if err != nil {
 			return err
diff --git a/internal/service/docdb/sweep.go b/internal/service/docdb/sweep.go
index e39133addd2..aee3affa6a2 100644
--- a/internal/service/docdb/sweep.go
+++ b/internal/service/docdb/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -12,7 +15,6 @@ import (
 	"github.com/aws/aws-sdk-go/service/docdb"
 	multierror "github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
 
@@ -71,13 +73,13 @@ func init() {
 
 func sweepDBClusters(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	input := &docdb.DescribeDBClustersInput{}
 
 	err = conn.DescribeDBClustersPagesWithContext(ctx, input, func(out *docdb.DescribeDBClustersOutput, lastPage bool) bool {
@@ -118,13 +120,13 @@ func sweepDBClusters(region string) error {
 
 func sweepDBClusterSnapshots(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	input := &docdb.DescribeDBClusterSnapshotsInput{}
 
 	err = conn.DescribeDBClusterSnapshotsPagesWithContext(ctx, input, func(out *docdb.DescribeDBClusterSnapshotsOutput, lastPage bool) bool {
@@ -164,13 +166,13 @@ func sweepDBClusterSnapshots(region string) error {
 
 func sweepDBClusterParameterGroups(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	input := &docdb.DescribeDBClusterParameterGroupsInput{}
 
 	err = conn.DescribeDBClusterParameterGroupsPagesWithContext(ctx, input, func(out *docdb.DescribeDBClusterParameterGroupsOutput, lastPage bool) bool {
@@ -211,13 +213,13 @@ func sweepDBClusterParameterGroups(region string) error {
 
 func sweepDBInstances(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	var errs *multierror.Error
 	input := &docdb.DescribeDBInstancesInput{}
@@ -242,7 +244,7 @@ func sweepDBInstances(region string) error {
 		errs = multierror.Append(errs, fmt.Errorf("listing DocumentDB Instances for %s: %w", region, err))
 	}
 
-	if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("sweeping DocumentDB Instances for %s: %w", region, err))
 	}
 
@@ -256,13 +258,13 @@ func sweepDBInstances(region string) error {
 
 func sweepGlobalClusters(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	input := &docdb.DescribeGlobalClustersInput{}
 
 	err = conn.DescribeGlobalClustersPagesWithContext(ctx, input, func(out *docdb.DescribeGlobalClustersOutput, lastPage bool) bool {
@@ -302,13 +304,13 @@ func sweepGlobalClusters(region string) error {
 
 func sweepDBSubnetGroups(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	input := &docdb.DescribeDBSubnetGroupsInput{}
 
 	err = conn.DescribeDBSubnetGroupsPagesWithContext(ctx, input, func(out *docdb.DescribeDBSubnetGroupsOutput, lastPage bool) bool {
@@ -348,13 +350,13 @@ func sweepDBSubnetGroups(region string) error {
 
 func sweepEventSubscriptions(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).DocDBConn()
+	conn := client.DocDBConn(ctx)
 	input := &docdb.DescribeEventSubscriptionsInput{}
 
 	err = conn.DescribeEventSubscriptionsPagesWithContext(ctx, input, func(out *docdb.DescribeEventSubscriptionsOutput, lastPage bool) bool {
diff --git a/internal/service/docdb/tags_gen.go b/internal/service/docdb/tags_gen.go
index 812ed060c6c..84c55f7d51b 100644
--- a/internal/service/docdb/tags_gen.go
+++ b/internal/service/docdb/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists docdb service tags.
+// listTags lists docdb service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn docdbiface.DocDBAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn docdbiface.DocDBAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &docdb.ListTagsForResourceInput{
 		ResourceName: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn docdbiface.DocDBAPI, identifier string)
 // ListTags lists docdb service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).DocDBConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).DocDBConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*docdb.Tag) tftags.KeyValueTags {
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns docdb service tags from Context.
+// getTagsIn returns docdb service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*docdb.Tag {
+func getTagsIn(ctx context.Context) []*docdb.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*docdb.Tag {
 	return nil
 }
 
-// SetTagsOut sets docdb service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*docdb.Tag) {
+// setTagsOut sets docdb service tags in Context.
+func setTagsOut(ctx context.Context, tags []*docdb.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates docdb service tags.
+// updateTags updates docdb service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn docdbiface.DocDBAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn docdbiface.DocDBAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn docdbiface.DocDBAPI, identifier string
 // UpdateTags updates docdb service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).DocDBConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).DocDBConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/docdb/validate.go b/internal/service/docdb/validate.go
index 6f8f275e995..8f6882e2f40 100644
--- a/internal/service/docdb/validate.go
+++ b/internal/service/docdb/validate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdb/validate_test.go b/internal/service/docdb/validate_test.go
index a05e0bf0955..29920e12ce9 100644
--- a/internal/service/docdb/validate_test.go
+++ b/internal/service/docdb/validate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdb/wait.go b/internal/service/docdb/wait.go
index a7d02417958..205321d9ae2 100644
--- a/internal/service/docdb/wait.go
+++ b/internal/service/docdb/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package docdb
 
 import (
diff --git a/internal/service/docdbelastic/generate.go b/internal/service/docdbelastic/generate.go
new file mode 100644
index 00000000000..4f56829e80e
--- /dev/null
+++ b/internal/service/docdbelastic/generate.go
@@ -0,0 +1,7 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+//go:generate go run ../../generate/servicepackage/main.go
+// ONLY generate directives and package declaration! Do not add anything else to this file.
+
+package docdbelastic
diff --git a/internal/service/docdbelastic/service_package_gen.go b/internal/service/docdbelastic/service_package_gen.go
index 620ec14bc80..b19fb04264b 100644
--- a/internal/service/docdbelastic/service_package_gen.go
+++ b/internal/service/docdbelastic/service_package_gen.go
@@ -5,6 +5,9 @@ package docdbelastic
 import (
 	"context"
 
+	aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws"
+	docdbelastic_sdkv2 "github.com/aws/aws-sdk-go-v2/service/docdbelastic"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -31,4 +34,17 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.DocDBElastic
 }
 
-var ServicePackage = &servicePackage{}
+// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API.
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*docdbelastic_sdkv2.Client, error) {
+	cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config))
+
+	return docdbelastic_sdkv2.NewFromConfig(cfg, func(o *docdbelastic_sdkv2.Options) {
+		if endpoint := config["endpoint"].(string); endpoint != "" {
+			o.EndpointResolver = docdbelastic_sdkv2.EndpointResolverFromURL(endpoint)
+		}
+	}), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/ds/conditional_forwarder.go b/internal/service/ds/conditional_forwarder.go
index 8b33d9908b9..3694fc4e84e 100644
--- a/internal/service/ds/conditional_forwarder.go
+++ b/internal/service/ds/conditional_forwarder.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ds
 
 import (
@@ -57,7 +60,7 @@ func ResourceConditionalForwarder() *schema.Resource {
 
 func resourceConditionalForwarderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	dnsIps := flex.ExpandStringList(d.Get("dns_ips").([]interface{}))
 
@@ -81,7 +84,7 @@ func resourceConditionalForwarderCreate(ctx context.Context, d *schema.ResourceD
 
 func resourceConditionalForwarderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	directoryId, domainName, err := ParseConditionalForwarderID(d.Id())
 	if err != nil {
@@ -119,7 +122,7 @@ func resourceConditionalForwarderRead(ctx context.Context, d *schema.ResourceDat
 
 func resourceConditionalForwarderUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	directoryId, domainName, err := ParseConditionalForwarderID(d.Id())
 	if err != nil {
@@ -143,7 +146,7 @@ func resourceConditionalForwarderUpdate(ctx context.Context, d *schema.ResourceD
 
 func resourceConditionalForwarderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	directoryId, domainName, err := ParseConditionalForwarderID(d.Id())
 	if err != nil {
diff --git a/internal/service/ds/conditional_forwarder_test.go b/internal/service/ds/conditional_forwarder_test.go
index 7ff9b77b861..4987490509f 100644
--- a/internal/service/ds/conditional_forwarder_test.go
+++ b/internal/service/ds/conditional_forwarder_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ds_test
 
 import (
@@ -59,7 +62,7 @@ func TestAccDSConditionalForwarder_Condition_basic(t *testing.T) {
 
 func testAccCheckConditionalForwarderDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_directory_service_conditional_forwarder" {
@@ -109,7 +112,7 @@ func testAccCheckConditionalForwarderExists(ctx context.Context, name string, dn
 			return err
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx)
 
 		res, err := conn.DescribeConditionalForwardersWithContext(ctx, &directoryservice.DescribeConditionalForwardersInput{
 			DirectoryId: aws.String(directoryId),
diff --git a/internal/service/ds/directory.go b/internal/service/ds/directory.go
index 3214d46c9f8..33252ddb7db 100644
--- a/internal/service/ds/directory.go
+++ b/internal/service/ds/directory.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ds
 
 import (
@@ -204,7 +207,7 @@ func ResourceDirectory() *schema.Resource {
 
 func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	name := d.Get("name").(string)
 	var creator directoryCreator
@@ -279,7 +282,7 @@ func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta i
 
 func resourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	dir, err := FindDirectoryByID(ctx, conn, d.Id())
 
@@ -333,7 +336,7 @@ func resourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta int
 
 func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	if d.HasChange("desired_number_of_domain_controllers") {
 		if err := updateNumberOfDomainControllers(ctx, conn, d.Id(), d.Get("desired_number_of_domain_controllers").(int), d.Timeout(schema.TimeoutUpdate)); err != nil {
@@ -358,7 +361,7 @@ func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta i
 
 func resourceDirectoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Directory Service Directory: %s", d.Id())
 	_, err := tfresource.RetryWhenAWSErrMessageContains(ctx, directoryApplicationDeauthorizedPropagationTimeout, func() (interface{}, error) {
@@ -397,7 +400,7 @@ func (c adConnectorCreator) Create(ctx context.Context, conn *directoryservice.D
 	input := &directoryservice.ConnectDirectoryInput{
 		Name:     aws.String(name),
 		Password: aws.String(d.Get("password").(string)),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("connect_settings"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
@@ -440,7 +443,7 @@ func (c microsoftADCreator) Create(ctx context.Context, conn *directoryservice.D
 	input := &directoryservice.CreateMicrosoftADInput{
 		Name:     aws.String(name),
 		Password: aws.String(d.Get("password").(string)),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("description"); ok {
@@ -480,7 +483,7 @@ func (c simpleADCreator) Create(ctx context.Context, conn *directoryservice.Dire
 	input := &directoryservice.CreateDirectoryInput{
 		Name:     aws.String(name),
 		Password: aws.String(d.Get("password").(string)),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("description"); ok {
diff --git a/internal/service/ds/directory_data_source.go b/internal/service/ds/directory_data_source.go
index 089b35f52e1..a2aa792c433 100644
--- a/internal/service/ds/directory_data_source.go
+++ b/internal/service/ds/directory_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ds
 
 import (
@@ -170,7 +173,7 @@ func DataSourceDirectory() *schema.Resource {
 
 func dataSourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).DSConn()
+	conn := meta.(*conns.AWSClient).DSConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	dir, err := FindDirectoryByID(ctx, conn, d.Get("directory_id").(string))
@@ -225,7 +228,7 @@ func dataSourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta i
 		d.Set("vpc_settings", nil)
 	}
 
-	tags, err := ListTags(ctx, conn, d.Id())
+	tags, err := listTags(ctx, conn, d.Id())
 
 	if err != nil {
 		return sdkdiag.AppendErrorf(diags, "listing tags for Directory Service Directory (%s): %s", d.Id(), err)
diff --git a/internal/service/ds/directory_data_source_test.go b/internal/service/ds/directory_data_source_test.go
index 61e5dd542e0..80a590278cf 100644
--- a/internal/service/ds/directory_data_source_test.go
+++ b/internal/service/ds/directory_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ds_test
 
 import (
diff --git a/internal/service/ds/directory_test.go b/internal/service/ds/directory_test.go
index 7a3c2214603..a4e4ecfe8b5 100644
--- a/internal/service/ds/directory_test.go
+++ b/internal/service/ds/directory_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -41,7 +44,7 @@ func TestAccDSDirectory_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "connect_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "desired_number_of_domain_controllers", "0"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "edition", ""), resource.TestCheckResourceAttr(resourceName, "enable_sso", "false"), resource.TestCheckResourceAttr(resourceName, "name", domainName), @@ -172,7 +175,7 @@ func TestAccDSDirectory_microsoft(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "connect_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "desired_number_of_domain_controllers", "2"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "edition", "Enterprise"), resource.TestCheckResourceAttr(resourceName, "enable_sso", "false"), resource.TestCheckResourceAttr(resourceName, "name", domainName), @@ -220,7 +223,7 @@ func TestAccDSDirectory_microsoftStandard(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "connect_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "desired_number_of_domain_controllers", "2"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "edition", "Standard"), resource.TestCheckResourceAttr(resourceName, 
"enable_sso", "false"), resource.TestCheckResourceAttr(resourceName, "name", domainName), @@ -270,12 +273,12 @@ func TestAccDSDirectory_connector(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "access_url"), resource.TestCheckResourceAttrSet(resourceName, "alias"), resource.TestCheckResourceAttr(resourceName, "connect_settings.#", "1"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "connect_settings.0.customer_dns_ips.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "connect_settings.0.customer_dns_ips.#", 0), resource.TestCheckResourceAttr(resourceName, "connect_settings.0.customer_username", "Administrator"), resource.TestCheckResourceAttr(resourceName, "connect_settings.0.subnet_ids.#", "2"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "desired_number_of_domain_controllers", "0"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "edition", ""), resource.TestCheckResourceAttr(resourceName, "enable_sso", "false"), resource.TestCheckResourceAttr(resourceName, "name", domainName), @@ -327,7 +330,7 @@ func TestAccDSDirectory_withAliasAndSSO(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "connect_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "desired_number_of_domain_controllers", "0"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "edition", ""), resource.TestCheckResourceAttr(resourceName, "enable_sso", "false"), resource.TestCheckResourceAttr(resourceName, "name", domainName), @@ -389,7 +392,7 @@ func 
TestAccDSDirectory_desiredNumberOfDomainControllers(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "connect_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "desired_number_of_domain_controllers", "2"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "dns_ip_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "edition", "Enterprise"), resource.TestCheckResourceAttr(resourceName, "enable_sso", "false"), resource.TestCheckResourceAttr(resourceName, "name", domainName), @@ -431,7 +434,7 @@ func TestAccDSDirectory_desiredNumberOfDomainControllers(t *testing.T) { func testAccCheckDirectoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_directory_service_directory" { @@ -466,7 +469,7 @@ func testAccCheckDirectoryExists(ctx context.Context, n string, v *directoryserv return fmt.Errorf("No Directory Service Directory ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) output, err := tfds.FindDirectoryByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ds/find.go b/internal/service/ds/find.go index 05bf929d4d6..2073bc00d6f 100644 --- a/internal/service/ds/find.go +++ b/internal/service/ds/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds import ( diff --git a/internal/service/ds/generate.go b/internal/service/ds/generate.go index 21fbe308472..2b4a67038d3 100644 --- a/internal/service/ds/generate.go +++ b/internal/service/ds/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeDirectories,DescribeRegions //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceId -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceId -UntagOp=RemoveTagsFromResource -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ds diff --git a/internal/service/ds/log_subscription.go b/internal/service/ds/log_subscription.go index 63d6b3e16f6..169714e5356 100644 --- a/internal/service/ds/log_subscription.go +++ b/internal/service/ds/log_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds import ( @@ -39,7 +42,7 @@ func ResourceLogSubscription() *schema.Resource { func resourceLogSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) directoryId := d.Get("directory_id") logGroupName := d.Get("log_group_name") @@ -61,7 +64,7 @@ func resourceLogSubscriptionCreate(ctx context.Context, d *schema.ResourceData, func resourceLogSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) directoryId := d.Id() @@ -89,7 +92,7 @@ func resourceLogSubscriptionRead(ctx context.Context, d *schema.ResourceData, me func resourceLogSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) directoryId := d.Id() diff --git a/internal/service/ds/log_subscription_test.go 
b/internal/service/ds/log_subscription_test.go index 080d4a4a2fc..db944465bdc 100644 --- a/internal/service/ds/log_subscription_test.go +++ b/internal/service/ds/log_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -49,7 +52,7 @@ func TestAccDSLogSubscription_basic(t *testing.T) { func testAccCheckLogSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_directory_service_log_subscription" { @@ -88,7 +91,7 @@ func testAccCheckLogSubscriptionExists(ctx context.Context, name string, logGrou return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) res, err := conn.ListLogSubscriptionsWithContext(ctx, &directoryservice.ListLogSubscriptionsInput{ DirectoryId: aws.String(rs.Primary.ID), diff --git a/internal/service/ds/radius_settings.go b/internal/service/ds/radius_settings.go index 32698e7a924..a507c0c77cb 100644 --- a/internal/service/ds/radius_settings.go +++ b/internal/service/ds/radius_settings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds import ( @@ -87,7 +90,7 @@ func ResourceRadiusSettings() *schema.Resource { } func resourceRadiusSettingsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) directoryID := d.Get("directory_id").(string) input := &directoryservice.EnableRadiusInput{ @@ -121,7 +124,7 @@ func resourceRadiusSettingsCreate(ctx context.Context, d *schema.ResourceData, m } func resourceRadiusSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) output, err := FindRadiusSettings(ctx, conn, d.Id()) @@ -149,7 +152,7 @@ func resourceRadiusSettingsRead(ctx context.Context, d *schema.ResourceData, met } func resourceRadiusSettingsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) input := &directoryservice.UpdateRadiusInput{ DirectoryId: aws.String(d.Id()), @@ -180,7 +183,7 @@ func resourceRadiusSettingsUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceRadiusSettingsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) _, err := conn.DisableRadiusWithContext(ctx, &directoryservice.DisableRadiusInput{ DirectoryId: aws.String(d.Id()), diff --git a/internal/service/ds/radius_settings_test.go b/internal/service/ds/radius_settings_test.go index 8305c36a619..54197a50e7e 100644 --- a/internal/service/ds/radius_settings_test.go +++ b/internal/service/ds/radius_settings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -100,7 +103,7 @@ func TestAccDSRadiusSettings_disappears(t *testing.T) { func testAccCheckRadiusSettingsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_directory_service_radius_settings" { @@ -135,7 +138,7 @@ func testAccCheckRadiusSettingsExists(ctx context.Context, n string, v *director return fmt.Errorf("No Directory Service RADIUS Settings ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) output, err := tfds.FindRadiusSettings(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ds/region.go b/internal/service/ds/region.go index 9df6d12fdfe..419dfb543b7 100644 --- a/internal/service/ds/region.go +++ b/internal/service/ds/region.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds import ( @@ -87,7 +90,7 @@ func ResourceRegion() *schema.Resource { } func resourceRegionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) directoryID := d.Get("directory_id").(string) regionName := d.Get("region_name").(string) @@ -113,13 +116,13 @@ func resourceRegionCreate(ctx context.Context, d *schema.ResourceData, meta inte return diag.Errorf("waiting for Directory Service Region (%s) create: %s", d.Id(), err) } - regionConn, err := regionalConn(meta.(*conns.AWSClient), regionName) + regionConn, err := regionalConn(ctx, meta.(*conns.AWSClient), regionName) if err != nil { return diag.FromErr(err) } - if tags := GetTagsIn(ctx); len(tags) > 0 { + if tags := getTagsIn(ctx); len(tags) > 0 { if err := createTags(ctx, regionConn, directoryID, tags); err != nil { return diag.Errorf("setting Directory Service Directory (%s) tags: %s", directoryID, err) } @@ -135,7 +138,7 @@ func resourceRegionCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceRegionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) directoryID, regionName, err := RegionParseResourceID(d.Id()) @@ -166,19 +169,19 @@ func resourceRegionRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("vpc_settings", nil) } - regionConn, err := regionalConn(meta.(*conns.AWSClient), regionName) + regionConn, err := regionalConn(ctx, meta.(*conns.AWSClient), regionName) if err != nil { return diag.FromErr(err) } - tags, err := ListTags(ctx, regionConn, directoryID) + tags, err := listTags(ctx, regionConn, directoryID) if err != nil { return diag.Errorf("listing tags for Directory Service Directory (%s): %s", directoryID, err) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, 
Tags(tags)) return nil } @@ -190,7 +193,7 @@ func resourceRegionUpdate(ctx context.Context, d *schema.ResourceData, meta inte return diag.FromErr(err) } - conn, err := regionalConn(meta.(*conns.AWSClient), regionName) + conn, err := regionalConn(ctx, meta.(*conns.AWSClient), regionName) if err != nil { return diag.FromErr(err) @@ -205,7 +208,7 @@ func resourceRegionUpdate(ctx context.Context, d *schema.ResourceData, meta inte if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(ctx, conn, directoryID, o, n); err != nil { + if err := updateTags(ctx, conn, directoryID, o, n); err != nil { return diag.Errorf("updating Directory Service Directory (%s) tags: %s", directoryID, err) } } @@ -221,7 +224,7 @@ func resourceRegionDelete(ctx context.Context, d *schema.ResourceData, meta inte } // The Region must be removed using a client in the region. - conn, err := regionalConn(meta.(*conns.AWSClient), regionName) + conn, err := regionalConn(ctx, meta.(*conns.AWSClient), regionName) if err != nil { return diag.FromErr(err) @@ -246,8 +249,8 @@ func resourceRegionDelete(ctx context.Context, d *schema.ResourceData, meta inte return nil } -func regionalConn(client *conns.AWSClient, regionName string) (*directoryservice.DirectoryService, error) { - sess, err := conns.NewSessionForRegion(&client.DSConn().Config, regionName, client.TerraformVersion) +func regionalConn(ctx context.Context, client *conns.AWSClient, regionName string) (*directoryservice.DirectoryService, error) { + sess, err := conns.NewSessionForRegion(&client.DSConn(ctx).Config, regionName, client.TerraformVersion) if err != nil { return nil, fmt.Errorf("creating AWS session (%s): %w", regionName, err) diff --git a/internal/service/ds/region_test.go b/internal/service/ds/region_test.go index 536c1c22041..2b3066b55ac 100644 --- a/internal/service/ds/region_test.go +++ b/internal/service/ds/region_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -172,7 +175,7 @@ func TestAccDSRegion_desiredNumberOfDomainControllers(t *testing.T) { func testAccCheckRegionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_directory_service_region" { @@ -219,7 +222,7 @@ func testAccCheckRegionExists(ctx context.Context, n string, v *directoryservice return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) output, err := tfds.FindRegion(ctx, conn, directoryID, regionName) diff --git a/internal/service/ds/service_package_gen.go b/internal/service/ds/service_package_gen.go index 22568232a09..4d82399a36c 100644 --- a/internal/service/ds/service_package_gen.go +++ b/internal/service/ds/service_package_gen.go @@ -5,6 +5,12 @@ package ds import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + directoryservice_sdkv2 "github.com/aws/aws-sdk-go-v2/service/directoryservice" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + directoryservice_sdkv1 "github.com/aws/aws-sdk-go/service/directoryservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -75,4 +81,24 @@ func (p *servicePackage) ServicePackageName() string { return names.DS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*directoryservice_sdkv1.DirectoryService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return directoryservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*directoryservice_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return directoryservice_sdkv2.NewFromConfig(cfg, func(o *directoryservice_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = directoryservice_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ds/shared_directory.go b/internal/service/ds/shared_directory.go index ab19dd460b4..0494ffa9bb2 100644 --- a/internal/service/ds/shared_directory.go +++ b/internal/service/ds/shared_directory.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds import ( @@ -86,7 +89,7 @@ func ResourceSharedDirectory() *schema.Resource { } func resourceSharedDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) dirId := d.Get("directory_id").(string) input := directoryservice.ShareDirectoryInput{ @@ -114,7 +117,7 @@ func resourceSharedDirectoryCreate(ctx context.Context, d *schema.ResourceData, } func resourceSharedDirectoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) ownerDirID, sharedDirID, err := parseSharedDirectoryID(d.Id()) @@ -153,7 +156,7 @@ func resourceSharedDirectoryRead(ctx context.Context, d *schema.ResourceData, me } func resourceSharedDirectoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) dirId := d.Get("directory_id").(string) sharedId := d.Get("shared_directory_id").(string) diff --git a/internal/service/ds/shared_directory_accepter.go b/internal/service/ds/shared_directory_accepter.go index 0ee9390decd..636bbbe1edf 100644 --- a/internal/service/ds/shared_directory_accepter.go +++ b/internal/service/ds/shared_directory_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds import ( @@ -64,7 +67,7 @@ func ResourceSharedDirectoryAccepter() *schema.Resource { } func resourceSharedDirectoryAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) input := directoryservice.AcceptSharedDirectoryInput{ SharedDirectoryId: aws.String(d.Get("shared_directory_id").(string)), @@ -96,7 +99,7 @@ func resourceSharedDirectoryAccepterCreate(ctx context.Context, d *schema.Resour } func resourceSharedDirectoryAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) dir, err := FindDirectoryByID(ctx, conn, d.Id()) @@ -113,7 +116,7 @@ func resourceSharedDirectoryAccepterRead(ctx context.Context, d *schema.Resource } func resourceSharedDirectoryAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DSConn() + conn := meta.(*conns.AWSClient).DSConn(ctx) log.Printf("[DEBUG] Deleting Directory Service Directory: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, directoryApplicationDeauthorizedPropagationTimeout, func() (interface{}, error) { diff --git a/internal/service/ds/shared_directory_accepter_test.go b/internal/service/ds/shared_directory_accepter_test.go index 1a4e1ea2e41..290e721d3f4 100644 --- a/internal/service/ds/shared_directory_accepter_test.go +++ b/internal/service/ds/shared_directory_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -65,7 +68,7 @@ func testAccCheckSharedDirectoryAccepterExists(ctx context.Context, n string) re return create.Error(names.DS, create.ErrActionCheckingExistence, tfds.ResNameSharedDirectoryAccepter, n, errors.New("no ID is set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) _, err := tfds.FindSharedDirectory(ctx, conn, rs.Primary.Attributes["owner_directory_id"], rs.Primary.Attributes["shared_directory_id"]) diff --git a/internal/service/ds/shared_directory_test.go b/internal/service/ds/shared_directory_test.go index 5ec24a5ba67..286e08fc4bf 100644 --- a/internal/service/ds/shared_directory_test.go +++ b/internal/service/ds/shared_directory_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -60,7 +63,7 @@ func testAccCheckSharedDirectoryExists(ctx context.Context, n string, v *directo return fmt.Errorf("No Directory Service Shared Directory ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) output, err := tfds.FindSharedDirectory(ctx, conn, rs.Primary.Attributes["directory_id"], rs.Primary.Attributes["shared_directory_id"]) @@ -76,7 +79,7 @@ func testAccCheckSharedDirectoryExists(ctx context.Context, n string, v *directo func testAccCheckSharedDirectoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_directory_service_shared_directory" { diff --git a/internal/service/ds/status.go b/internal/service/ds/status.go index 797cd9069fc..b4af681dd01 100644 --- a/internal/service/ds/status.go +++ b/internal/service/ds/status.go @@ 
-1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds import ( diff --git a/internal/service/ds/sweep.go b/internal/service/ds/sweep.go index 23b6a9fffcd..b2957180e7e 100644 --- a/internal/service/ds/sweep.go +++ b/internal/service/ds/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -41,11 +43,11 @@ func init() { func sweepDirectories(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DSConn() + conn := client.DSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) @@ -75,7 +77,7 @@ func sweepDirectories(region string) error { return fmt.Errorf("listing Directory Service Directories (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("sweeping Directory Service Directories (%s): %w", region, err) @@ -86,12 +88,12 @@ func sweepDirectories(region string) error { func sweepRegions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).DSConn() + conn := client.DSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -147,7 +149,7 @@ func 
sweepRegions(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing Directory Service Directories for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Directory Service Regions for %s: %w", region, err)) } diff --git a/internal/service/ds/tags_gen.go b/internal/service/ds/tags_gen.go index f7637e840bb..0f21c9d1d06 100644 --- a/internal/service/ds/tags_gen.go +++ b/internal/service/ds/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ds service tags. +// listTags lists ds service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn directoryserviceiface.DirectoryServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn directoryserviceiface.DirectoryServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &directoryservice.ListTagsForResourceInput{ ResourceId: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn directoryserviceiface.DirectoryServiceAP // ListTags lists ds service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DSConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*directoryservice.Tag) tftags.KeyV return tftags.New(ctx, m) } -// GetTagsIn returns ds service tags from Context. +// getTagsIn returns ds service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*directoryservice.Tag { +func getTagsIn(ctx context.Context) []*directoryservice.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*directoryservice.Tag { return nil } -// SetTagsOut sets ds service tags in Context. -func SetTagsOut(ctx context.Context, tags []*directoryservice.Tag) { +// setTagsOut sets ds service tags in Context. +func setTagsOut(ctx context.Context, tags []*directoryservice.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn directoryserviceiface.DirectoryService return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates ds service tags. +// updateTags updates ds service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn directoryserviceiface.DirectoryServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn directoryserviceiface.DirectoryServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn directoryserviceiface.DirectoryService // UpdateTags updates ds service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ds/trust.go b/internal/service/ds/trust.go index 6548bca53a2..e847b718c9d 100644 --- a/internal/service/ds/trust.go +++ b/internal/service/ds/trust.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds import ( @@ -15,8 +18,10 @@ import ( "github.com/hashicorp/terraform-plugin-framework/path" "github.com/hashicorp/terraform-plugin-framework/resource" "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault" "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" "github.com/hashicorp/terraform-plugin-framework/resource/schema/setplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringdefault" "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-framework/types" @@ -25,10 +30,8 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" - "github.com/hashicorp/terraform-provider-aws/internal/framework/boolplanmodifier" - fwstringplanmodifier "github.com/hashicorp/terraform-provider-aws/internal/framework/stringplanmodifier" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" fwvalidators 
"github.com/hashicorp/terraform-provider-aws/internal/framework/validators" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -79,9 +82,7 @@ func (r *resourceTrust) Schema(ctx context.Context, req resource.SchemaRequest, "delete_associated_conditional_forwarder": schema.BoolAttribute{ Computed: true, Optional: true, - PlanModifiers: []planmodifier.Bool{ - boolplanmodifier.DefaultValue(false), - }, + Default: booldefault.StaticBool(false), }, "directory_id": schema.StringAttribute{ Required: true, @@ -144,19 +145,17 @@ func (r *resourceTrust) Schema(ctx context.Context, req resource.SchemaRequest, "trust_type": schema.StringAttribute{ Optional: true, Computed: true, + Default: stringdefault.StaticString(string(awstypes.TrustTypeForest)), Validators: []validator.String{ enum.FrameworkValidate[awstypes.TrustType](), }, - PlanModifiers: []planmodifier.String{ - fwstringplanmodifier.DefaultValue(string(awstypes.TrustTypeForest)), - }, }, }, } } func (r *resourceTrust) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().DSClient() + conn := r.Meta().DSClient(ctx) var plan resourceTrustData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -198,7 +197,7 @@ func (r *resourceTrust) Create(ctx context.Context, req resource.CreateRequest, } func (r *resourceTrust) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().DSClient() + conn := r.Meta().DSClient(ctx) var data resourceTrustData resp.Diagnostics.Append(req.State.Get(ctx, &data)...) 
@@ -248,7 +247,7 @@ func (r *resourceTrust) Update(ctx context.Context, req resource.UpdateRequest, return } - conn := r.Meta().DSClient() + conn := r.Meta().DSClient(ctx) if !plan.SelectiveAuth.IsUnknown() && !state.SelectiveAuth.Equal(plan.SelectiveAuth) { params := plan.updateInput(ctx) @@ -311,7 +310,7 @@ func (r *resourceTrust) Delete(ctx context.Context, req resource.DeleteRequest, params := state.deleteInput(ctx) - conn := r.Meta().DSClient() + conn := r.Meta().DSClient(ctx) _, err := conn.DeleteTrust(ctx, params) if isTrustNotFoundErr(err) { @@ -344,7 +343,7 @@ func (r *resourceTrust) ImportState(ctx context.Context, req resource.ImportStat directoryID := parts[0] domain := parts[1] - trust, err := findTrustByDomain(ctx, r.Meta().DSClient(), directoryID, domain) + trust, err := findTrustByDomain(ctx, r.Meta().DSClient(ctx), directoryID, domain) if err != nil { resp.Diagnostics.AddError( "Importing Resource", diff --git a/internal/service/ds/trust_test.go b/internal/service/ds/trust_test.go index abea132f044..92ffdcdcb7d 100644 --- a/internal/service/ds/trust_test.go +++ b/internal/service/ds/trust_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds_test import ( @@ -485,7 +488,7 @@ func testAccCheckTrustExists(ctx context.Context, n string, v *awstypes.Trust) r return fmt.Errorf("No Directory Service Trust ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DSClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSClient(ctx) output, err := tfds.FindTrustByID(ctx, conn, rs.Primary.Attributes["directory_id"], rs.Primary.ID) @@ -501,7 +504,7 @@ func testAccCheckTrustExists(ctx context.Context, n string, v *awstypes.Trust) r func testAccCheckTrustDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DSClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).DSClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_directory_service_trust" { diff --git a/internal/service/ds/validate.go b/internal/service/ds/validate.go index 186038c559e..a2a7fcf40c6 100644 --- a/internal/service/ds/validate.go +++ b/internal/service/ds/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds import ( diff --git a/internal/service/ds/validate_test.go b/internal/service/ds/validate_test.go index 964c3aa4bed..bf44b10396f 100644 --- a/internal/service/ds/validate_test.go +++ b/internal/service/ds/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ds import ( diff --git a/internal/service/ds/wait.go b/internal/service/ds/wait.go index 64946f1fcbd..fbba0196e2e 100644 --- a/internal/service/ds/wait.go +++ b/internal/service/ds/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ds import ( diff --git a/internal/service/dynamodb/arn.go b/internal/service/dynamodb/arn.go index 8aedf611f50..5d895c5b854 100644 --- a/internal/service/dynamodb/arn.go +++ b/internal/service/dynamodb/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/arn_test.go b/internal/service/dynamodb/arn_test.go index bf8de6db4bd..afc91f394bb 100644 --- a/internal/service/dynamodb/arn_test.go +++ b/internal/service/dynamodb/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( diff --git a/internal/service/dynamodb/backup.go b/internal/service/dynamodb/backup.go index 793f42bc782..1d17c124f95 100644 --- a/internal/service/dynamodb/backup.go +++ b/internal/service/dynamodb/backup.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/contributor_insights.go b/internal/service/dynamodb/contributor_insights.go index 151d967d1ab..aac81d4b25a 100644 --- a/internal/service/dynamodb/contributor_insights.go +++ b/internal/service/dynamodb/contributor_insights.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -47,7 +50,7 @@ func ResourceContributorInsights() *schema.Resource { } func resourceContributorInsightsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) input := &dynamodb.UpdateContributorInsightsInput{ ContributorInsightsAction: aws.String(dynamodb.ContributorInsightsActionEnable), @@ -79,7 +82,7 @@ func resourceContributorInsightsCreate(ctx context.Context, d *schema.ResourceDa } func resourceContributorInsightsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName, indexName, err := DecodeContributorInsightsID(d.Id()) if err != nil { @@ -105,7 +108,7 @@ func resourceContributorInsightsRead(ctx context.Context, d *schema.ResourceData } func resourceContributorInsightsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) log.Printf("[INFO] Deleting DynamoDB ContributorInsights %s", d.Id()) diff --git a/internal/service/dynamodb/contributor_insights_test.go b/internal/service/dynamodb/contributor_insights_test.go index 4397379679e..30a2fc5297b 100644 --- a/internal/service/dynamodb/contributor_insights_test.go +++ b/internal/service/dynamodb/contributor_insights_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( @@ -121,7 +124,7 @@ func testAccCheckContributorInsightsExists(ctx context.Context, n string, ci *dy return fmt.Errorf("no DynamodDB Contributor Insights ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) tableName, indexName, err := tfdynamodb.DecodeContributorInsightsID(rs.Primary.ID) if err != nil { @@ -141,7 +144,7 @@ func testAccCheckContributorInsightsExists(ctx context.Context, n string, ci *dy func testAccCheckContributorInsightsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dynamodb_contributor_insights" { diff --git a/internal/service/dynamodb/exports_test.go b/internal/service/dynamodb/exports_test.go new file mode 100644 index 00000000000..b27e345cdb2 --- /dev/null +++ b/internal/service/dynamodb/exports_test.go @@ -0,0 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package dynamodb + +// Exports for use in tests only. +var ( + ListTags = listTags +) diff --git a/internal/service/dynamodb/find.go b/internal/service/dynamodb/find.go index 786ae94b40f..0a8f99d13a7 100644 --- a/internal/service/dynamodb/find.go +++ b/internal/service/dynamodb/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/flex.go b/internal/service/dynamodb/flex.go index e0f22593aa4..a95e472decc 100644 --- a/internal/service/dynamodb/flex.go +++ b/internal/service/dynamodb/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/flex_test.go b/internal/service/dynamodb/flex_test.go index 3db08693a5a..ea6c67fad94 100644 --- a/internal/service/dynamodb/flex_test.go +++ b/internal/service/dynamodb/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/forge.go b/internal/service/dynamodb/forge.go index 005fe9f9de0..a5371b5d96f 100644 --- a/internal/service/dynamodb/forge.go +++ b/internal/service/dynamodb/forge.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/generate.go b/internal/service/dynamodb/generate.go index a6fe32205d1..d03931c957b 100644 --- a/internal/service/dynamodb/generate.go +++ b/internal/service/dynamodb/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tagresource/main.go //go:generate go run ../../generate/tags/main.go -GetTag -ListTags -ListTagsOp=ListTagsOfResource -ServiceTagsSlice -UpdateTags -ParentNotFoundErrCode=ResourceNotFoundException +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package dynamodb diff --git a/internal/service/dynamodb/global_table.go b/internal/service/dynamodb/global_table.go index 2100c7e9942..baeee3e13b2 100644 --- a/internal/service/dynamodb/global_table.go +++ b/internal/service/dynamodb/global_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -64,7 +67,7 @@ func ResourceGlobalTable() *schema.Resource { func resourceGlobalTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) name := d.Get(names.AttrName).(string) input := &dynamodb.CreateGlobalTableInput{ @@ -89,7 +92,7 @@ func resourceGlobalTableCreate(ctx context.Context, d *schema.ResourceData, meta func resourceGlobalTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) globalTableDescription, err := FindGlobalTableByName(ctx, conn, d.Id()) @@ -114,7 +117,7 @@ func resourceGlobalTableRead(ctx context.Context, d *schema.ResourceData, meta i func resourceGlobalTableUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) o, n := d.GetChange("replica") if o == nil { @@ -154,7 +157,7 @@ func resourceGlobalTableUpdate(ctx context.Context, d *schema.ResourceData, meta // Deleting a DynamoDB Global Table is represented by removing all replicas. 
func resourceGlobalTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) log.Printf("[DEBUG] Deleting DynamoDB Global Table: %s", d.Id()) _, err := conn.UpdateGlobalTableWithContext(ctx, &dynamodb.UpdateGlobalTableInput{ diff --git a/internal/service/dynamodb/global_table_test.go b/internal/service/dynamodb/global_table_test.go index 7e5b724761e..d1694b8c3bb 100644 --- a/internal/service/dynamodb/global_table_test.go +++ b/internal/service/dynamodb/global_table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( @@ -112,7 +115,7 @@ func TestAccDynamoDBGlobalTable_multipleRegions(t *testing.T) { func testAccCheckGlobalTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dynamodb_global_table" { @@ -147,7 +150,7 @@ func testAccCheckGlobalTableExists(ctx context.Context, n string) resource.TestC return fmt.Errorf("No DynamoDB Global Table ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) _, err := tfdynamodb.FindGlobalTableByName(ctx, conn, rs.Primary.ID) @@ -172,7 +175,7 @@ func testAccPreCheckGlobalTable(ctx context.Context, t *testing.T) { } acctest.PreCheckRegion(t, supportedRegions...) 
- conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) input := &dynamodb.ListGlobalTablesInput{} diff --git a/internal/service/dynamodb/kinesis_streaming_destination.go b/internal/service/dynamodb/kinesis_streaming_destination.go index 87c6f2413e1..d2e10e34e81 100644 --- a/internal/service/dynamodb/kinesis_streaming_destination.go +++ b/internal/service/dynamodb/kinesis_streaming_destination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -43,7 +46,7 @@ func ResourceKinesisStreamingDestination() *schema.Resource { } func resourceKinesisStreamingDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) streamArn := d.Get("stream_arn").(string) tableName := d.Get("table_name").(string) @@ -56,15 +59,15 @@ func resourceKinesisStreamingDestinationCreate(ctx context.Context, d *schema.Re output, err := conn.EnableKinesisStreamingDestinationWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error enabling DynamoDB Kinesis streaming destination (stream: %s, table: %s): %w", streamArn, tableName, err)) + return diag.Errorf("enabling DynamoDB Kinesis streaming destination (stream: %s, table: %s): %s", streamArn, tableName, err) } if output == nil { - return diag.FromErr(fmt.Errorf("error enabling DynamoDB Kinesis streaming destination (stream: %s, table: %s): empty output", streamArn, tableName)) + return diag.Errorf("enabling DynamoDB Kinesis streaming destination (stream: %s, table: %s): empty output", streamArn, tableName) } if err := waitKinesisStreamingDestinationActive(ctx, conn, streamArn, tableName); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for DynamoDB Kinesis streaming destination (stream: %s, table: %s) to be active: %w", 
streamArn, tableName, err)) + return diag.Errorf("waiting for DynamoDB Kinesis streaming destination (stream: %s, table: %s) to be active: %s", streamArn, tableName, err) } d.SetId(fmt.Sprintf("%s,%s", aws.StringValue(output.TableName), aws.StringValue(output.StreamArn))) @@ -73,7 +76,7 @@ func resourceKinesisStreamingDestinationCreate(ctx context.Context, d *schema.Re } func resourceKinesisStreamingDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName, streamArn, err := KinesisStreamingDestinationParseID(d.Id()) @@ -90,12 +93,12 @@ func resourceKinesisStreamingDestinationRead(ctx context.Context, d *schema.Reso } if err != nil { - return diag.FromErr(fmt.Errorf("error retrieving DynamoDB Kinesis streaming destination (stream: %s, table: %s): %w", streamArn, tableName, err)) + return diag.Errorf("retrieving DynamoDB Kinesis streaming destination (stream: %s, table: %s): %s", streamArn, tableName, err) } if output == nil || aws.StringValue(output.DestinationStatus) == dynamodb.DestinationStatusDisabled { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error retrieving DynamoDB Kinesis streaming destination (stream: %s, table: %s): empty output after creation", streamArn, tableName)) + return diag.Errorf("retrieving DynamoDB Kinesis streaming destination (stream: %s, table: %s): empty output after creation", streamArn, tableName) } log.Printf("[WARN] DynamoDB Kinesis Streaming Destination (stream: %s, table: %s) not found, removing from state", streamArn, tableName) d.SetId("") @@ -109,7 +112,7 @@ func resourceKinesisStreamingDestinationRead(ctx context.Context, d *schema.Reso } func resourceKinesisStreamingDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) 
tableName, streamArn, err := KinesisStreamingDestinationParseID(d.Id()) @@ -125,11 +128,11 @@ func resourceKinesisStreamingDestinationDelete(ctx context.Context, d *schema.Re _, err = conn.DisableKinesisStreamingDestinationWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error disabling DynamoDB Kinesis streaming destination (stream: %s, table: %s): %w", streamArn, tableName, err)) + return diag.Errorf("disabling DynamoDB Kinesis streaming destination (stream: %s, table: %s): %s", streamArn, tableName, err) } if err := waitKinesisStreamingDestinationDisabled(ctx, conn, streamArn, tableName); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for DynamoDB Kinesis streaming destination (stream: %s, table: %s) to be disabled: %w", streamArn, tableName, err)) + return diag.Errorf("waiting for DynamoDB Kinesis streaming destination (stream: %s, table: %s) to be disabled: %s", streamArn, tableName, err) } return nil diff --git a/internal/service/dynamodb/kinesis_streaming_destination_test.go b/internal/service/dynamodb/kinesis_streaming_destination_test.go index 95a4c644f49..ed52d862caf 100644 --- a/internal/service/dynamodb/kinesis_streaming_destination_test.go +++ b/internal/service/dynamodb/kinesis_streaming_destination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( @@ -135,7 +138,7 @@ func testAccCheckKinesisStreamingDestinationExists(ctx context.Context, resource return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) output, err := tfdynamodb.FindKinesisDataStreamDestination(ctx, conn, streamArn, tableName) @@ -153,7 +156,7 @@ func testAccCheckKinesisStreamingDestinationExists(ctx context.Context, resource func testAccCheckKinesisStreamingDestinationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dynamodb_kinesis_streaming_destination" { diff --git a/internal/service/dynamodb/service_package.go b/internal/service/dynamodb/service_package.go new file mode 100644 index 00000000000..8face023d55 --- /dev/null +++ b/internal/service/dynamodb/service_package.go @@ -0,0 +1,28 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package dynamodb + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + dynamodb_sdkv1 "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *dynamodb_sdkv1.DynamoDB) (*dynamodb_sdkv1.DynamoDB, error) { + // See https://github.com/aws/aws-sdk-go/pull/1276. 
+ conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if r.Operation.Name != "PutItem" && r.Operation.Name != "UpdateItem" && r.Operation.Name != "DeleteItem" { + return + } + if tfawserr.ErrMessageContains(r.Error, dynamodb_sdkv1.ErrCodeLimitExceededException, "Subscriber limit exceeded:") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/dynamodb/service_package_gen.go b/internal/service/dynamodb/service_package_gen.go index ffe5361634f..59299d79838 100644 --- a/internal/service/dynamodb/service_package_gen.go +++ b/internal/service/dynamodb/service_package_gen.go @@ -5,6 +5,10 @@ package dynamodb import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + dynamodb_sdkv1 "github.com/aws/aws-sdk-go/service/dynamodb" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -75,4 +79,13 @@ func (p *servicePackage) ServicePackageName() string { return names.DynamoDB } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*dynamodb_sdkv1.DynamoDB, error) { + sess := config["session"].(*session_sdkv1.Session) + + return dynamodb_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/dynamodb/status.go b/internal/service/dynamodb/status.go index 90ad1e095e0..4eb294b3641 100644 --- a/internal/service/dynamodb/status.go +++ b/internal/service/dynamodb/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/sweep.go b/internal/service/dynamodb/sweep.go index 7e370158021..ce77f719b0e 100644 --- a/internal/service/dynamodb/sweep.go +++ b/internal/service/dynamodb/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -16,8 +19,8 @@ import ( "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -35,13 +38,13 @@ func init() { func sweepTables(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DynamoDBConn() + conn := client.DynamoDBConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -62,7 +65,7 @@ func sweepTables(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in `replica` attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { return err @@ -92,7 +95,7 @@ func sweepTables(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading DynamoDB Tables: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping DynamoDB Tables for %s: %w", 
region, err)) } @@ -106,13 +109,13 @@ func sweepTables(region string) error { func sweepBackups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).DynamoDBConn() + conn := client.DynamoDBConn(ctx) sweepables := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -148,7 +151,7 @@ func sweepBackups(region string) error { errs = multierror.Append(errs, fmt.Errorf("reading DynamoDB Backups: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepables); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepables); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping DynamoDB Backups for %s: %w", region, err)) } diff --git a/internal/service/dynamodb/table.go b/internal/service/dynamodb/table.go index 22a94e0ad6d..f476961eed3 100644 --- a/internal/service/dynamodb/table.go +++ b/internal/service/dynamodb/table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -82,7 +85,7 @@ func ResourceTable() *schema.Resource { func(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { if diff.Id() != "" && diff.HasChange("stream_enabled") { if err := diff.SetNewComputed("stream_arn"); err != nil { - return fmt.Errorf("error setting stream_arn to computed: %s", err) + return fmt.Errorf("setting stream_arn to computed: %s", err) } } return nil @@ -402,7 +405,7 @@ func ResourceTable() *schema.Resource { func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName := d.Get(names.AttrName).(string) keySchemaMap := map[string]interface{}{ @@ -488,7 +491,7 @@ func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta inter BillingMode: aws.String(d.Get("billing_mode").(string)), KeySchema: expandKeySchema(keySchemaMap), TableName: aws.String(tableName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } billingMode := d.Get("billing_mode").(string) @@ -603,7 +606,7 @@ func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta inter return create.DiagError(names.DynamoDB, create.ErrActionCreating, ResNameTable, d.Id(), fmt.Errorf("replicas: %w", err)) } - if err := updateReplicaTags(ctx, conn, aws.StringValue(output.TableArn), v.List(), KeyValueTags(ctx, GetTagsIn(ctx)), meta.(*conns.AWSClient).TerraformVersion); err != nil { + if err := updateReplicaTags(ctx, conn, aws.StringValue(output.TableArn), v.List(), KeyValueTags(ctx, getTagsIn(ctx)), meta.(*conns.AWSClient).TerraformVersion); err != nil { return create.DiagError(names.DynamoDB, create.ErrActionCreating, ResNameTable, d.Id(), fmt.Errorf("replica tags: %w", err)) } } @@ -613,7 +616,7 @@ func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta inter func 
resourceTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) table, err := FindTableByName(ctx, conn, d.Id()) @@ -735,7 +738,7 @@ func resourceTableRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) o, n := d.GetChange("billing_mode") billingMode := n.(string) oldBillingMode := o.(string) @@ -1008,7 +1011,7 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) if replicas := d.Get("replica").(*schema.Set).List(); len(replicas) > 0 { log.Printf("[DEBUG] Deleting DynamoDB Table replicas: %s", d.Id()) @@ -1218,12 +1221,12 @@ func updateReplicaTags(ctx context.Context, conn *dynamodb.DynamoDB, rn string, return fmt.Errorf("per region ARN for replica (%s): %w", region, err) } - oldTags, err := ListTags(ctx, conn, repARN) + oldTags, err := listTags(ctx, conn, repARN) if err != nil { return fmt.Errorf("listing tags (%s): %w", repARN, err) } - if err := UpdateTags(ctx, conn, repARN, oldTags, newTags); err != nil { + if err := updateTags(ctx, conn, repARN, oldTags, newTags); err != nil { return fmt.Errorf("updating tags: %w", err) } } @@ -1403,7 +1406,7 @@ func updateReplica(ctx context.Context, d *schema.ResourceData, conn *dynamodb.D return nil } -func UpdateDiffGSI(oldGsi, newGsi []interface{}, billingMode string) (ops []*dynamodb.GlobalSecondaryIndexUpdate, e error) { +func UpdateDiffGSI(oldGsi, newGsi 
[]interface{}, billingMode string) ([]*dynamodb.GlobalSecondaryIndexUpdate, error) { // Transform slices into maps oldGsis := make(map[string]interface{}) for _, gsidata := range oldGsi { @@ -1414,12 +1417,14 @@ func UpdateDiffGSI(oldGsi, newGsi []interface{}, billingMode string) (ops []*dyn for _, gsidata := range newGsi { m := gsidata.(map[string]interface{}) // validate throughput input early, to avoid unnecessary processing - if e = validateGSIProvisionedThroughput(m, billingMode); e != nil { - return + if err := validateGSIProvisionedThroughput(m, billingMode); err != nil { + return nil, err } newGsis[m[names.AttrName].(string)] = m } + var ops []*dynamodb.GlobalSecondaryIndexUpdate + for _, data := range newGsi { newMap := data.(map[string]interface{}) newName := newMap[names.AttrName].(string) diff --git a/internal/service/dynamodb/table_data_source.go b/internal/service/dynamodb/table_data_source.go index 7e00e8fd88f..6267ade462b 100644 --- a/internal/service/dynamodb/table_data_source.go +++ b/internal/service/dynamodb/table_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -234,7 +237,7 @@ func DataSourceTable() *schema.Resource { func dataSourceTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get(names.AttrName).(string) @@ -341,7 +344,7 @@ func dataSourceTableRead(ctx context.Context, d *schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "setting ttl: %s", err) } - tags, err := ListTags(ctx, conn, d.Get(names.AttrARN).(string)) + tags, err := listTags(ctx, conn, d.Get(names.AttrARN).(string)) // When a Table is `ARCHIVED`, ListTags returns `ResourceNotFoundException` if err != nil && !(tfawserr.ErrMessageContains(err, "UnknownOperationException", "Tagging is not currently supported in DynamoDB Local.") || tfresource.NotFound(err)) { return sdkdiag.AppendErrorf(diags, "listing tags for DynamoDB Table (%s): %s", d.Get(names.AttrARN).(string), err) diff --git a/internal/service/dynamodb/table_data_source_test.go b/internal/service/dynamodb/table_data_source_test.go index b8f20b5bdb0..0fcb8c294c7 100644 --- a/internal/service/dynamodb/table_data_source_test.go +++ b/internal/service/dynamodb/table_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( diff --git a/internal/service/dynamodb/table_item.go b/internal/service/dynamodb/table_item.go index bd02523f04b..13dd35d6a4c 100644 --- a/internal/service/dynamodb/table_item.go +++ b/internal/service/dynamodb/table_item.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -65,7 +68,7 @@ func validateTableItem(v interface{}, k string) (ws []string, errors []error) { func resourceTableItemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName := d.Get("table_name").(string) hashKey := d.Get("hash_key").(string) @@ -99,7 +102,7 @@ func resourceTableItemCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceTableItemUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics log.Printf("[DEBUG] Updating DynamoDB table %s", d.Id()) - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) if d.HasChange("item") { tableName := d.Get("table_name").(string) @@ -175,7 +178,7 @@ func resourceTableItemUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceTableItemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) log.Printf("[DEBUG] Loading data for DynamoDB table item '%s'", d.Id()) @@ -216,7 +219,7 @@ func resourceTableItemRead(ctx context.Context, d *schema.ResourceData, meta int func resourceTableItemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) attributes, err := ExpandTableItemAttributes(d.Get("item").(string)) if err != nil { diff --git a/internal/service/dynamodb/table_item_data_source.go b/internal/service/dynamodb/table_item_data_source.go index 4f6540abe37..bfdb0695144 100644 --- a/internal/service/dynamodb/table_item_data_source.go +++ 
b/internal/service/dynamodb/table_item_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -52,7 +55,7 @@ const ( ) func dataSourceTableItemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName := d.Get("table_name").(string) key, err := ExpandTableItemAttributes(d.Get("key").(string)) diff --git a/internal/service/dynamodb/table_item_data_source_test.go b/internal/service/dynamodb/table_item_data_source_test.go index 71cd5fbc79d..32e42667992 100644 --- a/internal/service/dynamodb/table_item_data_source_test.go +++ b/internal/service/dynamodb/table_item_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( diff --git a/internal/service/dynamodb/table_item_test.go b/internal/service/dynamodb/table_item_test.go index df4ef6728cc..8192e0306f6 100644 --- a/internal/service/dynamodb/table_item_test.go +++ b/internal/service/dynamodb/table_item_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( @@ -435,7 +438,7 @@ func TestAccDynamoDBTableItem_mapOutOfBandUpdate(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) attributes, err := tfdynamodb.ExpandTableItemAttributes(newItem) if err != nil { @@ -472,7 +475,7 @@ func TestAccDynamoDBTableItem_mapOutOfBandUpdate(t *testing.T) { func testAccCheckTableItemDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dynamodb_table_item" { @@ -515,7 +518,7 @@ func testAccCheckTableItemExists(ctx context.Context, n string, item *dynamodb.G return fmt.Errorf("No DynamoDB table item ID specified!") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) attrs := rs.Primary.Attributes attributes, err := tfdynamodb.ExpandTableItemAttributes(attrs["item"]) @@ -539,7 +542,7 @@ func testAccCheckTableItemExists(ctx context.Context, n string, item *dynamodb.G func testAccCheckTableItemCount(ctx context.Context, tableName string, count int64) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) out, err := conn.ScanWithContext(ctx, &dynamodb.ScanInput{ ConsistentRead: aws.Bool(true), TableName: aws.String(tableName), diff --git a/internal/service/dynamodb/table_migrate.go b/internal/service/dynamodb/table_migrate.go index 81be642d661..64654ae16e9 100644 --- a/internal/service/dynamodb/table_migrate.go +++ b/internal/service/dynamodb/table_migrate.go @@ -1,3 +1,6 
@@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/table_replica.go b/internal/service/dynamodb/table_replica.go index f60b42f656d..425c72a52e8 100644 --- a/internal/service/dynamodb/table_replica.go +++ b/internal/service/dynamodb/table_replica.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -93,7 +96,7 @@ func ResourceTableReplica() *schema.Resource { func resourceTableReplicaCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) replicaRegion := aws.StringValue(conn.Config.Region) @@ -188,7 +191,7 @@ func resourceTableReplicaRead(ctx context.Context, d *schema.ResourceData, meta // * table_class_override var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) replicaRegion := aws.StringValue(conn.Config.Region) @@ -280,7 +283,7 @@ func resourceTableReplicaReadReplica(ctx context.Context, d *schema.ResourceData // * tags var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName, _, err := TableReplicaParseID(d.Id()) if err != nil { @@ -326,13 +329,13 @@ func resourceTableReplicaReadReplica(ctx context.Context, d *schema.ResourceData d.Set("point_in_time_recovery", false) } - tags, err := ListTags(ctx, conn, d.Get(names.AttrARN).(string)) + tags, err := listTags(ctx, conn, d.Get(names.AttrARN).(string)) // When a Table is `ARCHIVED`, ListTags returns `ResourceNotFoundException` if err != nil && !(tfawserr.ErrMessageContains(err, "UnknownOperationException", "Tagging is not currently supported in DynamoDB Local.") || tfresource.NotFound(err)) { return create.DiagError(names.DynamoDB, 
create.ErrActionReading, ResNameTableReplica, d.Id(), fmt.Errorf("tags: %w", err)) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } @@ -346,7 +349,7 @@ func resourceTableReplicaUpdate(ctx context.Context, d *schema.ResourceData, met // * table_class_override var diags diag.Diagnostics - repConn := meta.(*conns.AWSClient).DynamoDBConn() + repConn := meta.(*conns.AWSClient).DynamoDBConn(ctx) tableName, mainRegion, err := TableReplicaParseID(d.Id()) if err != nil { @@ -428,7 +431,7 @@ func resourceTableReplicaUpdate(ctx context.Context, d *schema.ResourceData, met if d.HasChanges("point_in_time_recovery", names.AttrTagsAll) { if d.HasChange(names.AttrTagsAll) { o, n := d.GetChange(names.AttrTagsAll) - if err := UpdateTags(ctx, repConn, d.Get(names.AttrARN).(string), o, n); err != nil { + if err := updateTags(ctx, repConn, d.Get(names.AttrARN).(string), o, n); err != nil { return create.DiagError(names.DynamoDB, create.ErrActionUpdating, ResNameTableReplica, d.Id(), err) } } @@ -456,7 +459,7 @@ func resourceTableReplicaDelete(ctx context.Context, d *schema.ResourceData, met return create.DiagError(names.DynamoDB, create.ErrActionDeleting, ResNameTableReplica, d.Id(), err) } - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) replicaRegion := aws.StringValue(conn.Config.Region) diff --git a/internal/service/dynamodb/table_replica_test.go b/internal/service/dynamodb/table_replica_test.go index ddd88542a2c..964566656db 100644 --- a/internal/service/dynamodb/table_replica_test.go +++ b/internal/service/dynamodb/table_replica_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( @@ -345,7 +348,7 @@ func TestAccDynamoDBTableReplica_keys(t *testing.T) { func testAccCheckTableReplicaDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) replicaRegion := aws.StringValue(conn.Config.Region) for _, rs := range s.RootModule().Resources { @@ -412,7 +415,7 @@ func testAccCheckTableReplicaExists(ctx context.Context, n string) resource.Test return create.Error(names.DynamoDB, create.ErrActionCheckingExistence, tfdynamodb.ResNameTableReplica, rs.Primary.ID, errors.New("no ID")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) tableName, mainRegion, err := tfdynamodb.TableReplicaParseID(rs.Primary.ID) if err != nil { diff --git a/internal/service/dynamodb/table_test.go b/internal/service/dynamodb/table_test.go index 60f4d62ee08..66437741f63 100644 --- a/internal/service/dynamodb/table_test.go +++ b/internal/service/dynamodb/table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( @@ -2545,7 +2548,7 @@ func TestAccDynamoDBTable_backup_overrideEncryption(t *testing.T) { func testAccCheckTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dynamodb_table" { @@ -2580,7 +2583,7 @@ func testAccCheckInitialTableExists(ctx context.Context, n string, v *dynamodb.T return fmt.Errorf("No DynamoDB Table ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) output, err := tfdynamodb.FindTableByName(ctx, conn, rs.Primary.ID) @@ -2615,7 +2618,7 @@ func testAccCheckReplicaExists(ctx context.Context, n string, region string, v * return fmt.Errorf("no DynamoDB table name specified!") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) terraformVersion := acctest.Provider.Meta().(*conns.AWSClient).TerraformVersion if aws.StringValue(conn.Config.Region) != region { @@ -2650,7 +2653,7 @@ func testAccCheckReplicaHasTags(ctx context.Context, n string, region string, co return fmt.Errorf("no DynamoDB table name specified!") } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) terraformVersion := acctest.Provider.Meta().(*conns.AWSClient).TerraformVersion if aws.StringValue(conn.Config.Region) != region { diff --git a/internal/service/dynamodb/tag_gen.go b/internal/service/dynamodb/tag_gen.go index d338112829c..0b7dbb6ebd0 100644 --- a/internal/service/dynamodb/tag_gen.go +++ b/internal/service/dynamodb/tag_gen.go @@ -46,13 +46,13 @@ func ResourceTag() *schema.Resource { } func 
resourceTagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) identifier := d.Get("resource_arn").(string) key := d.Get("key").(string) value := d.Get("value").(string) - if err := UpdateTags(ctx, conn, identifier, nil, map[string]string{key: value}); err != nil { + if err := updateTags(ctx, conn, identifier, nil, map[string]string{key: value}); err != nil { return diag.Errorf("creating %s resource (%s) tag (%s): %s", dynamodb.ServiceID, identifier, key, err) } @@ -62,7 +62,7 @@ func resourceTagCreate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { @@ -89,14 +89,14 @@ func resourceTagRead(ctx context.Context, d *schema.ResourceData, meta interface } func resourceTagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - if err := UpdateTags(ctx, conn, identifier, nil, map[string]string{key: d.Get("value").(string)}); err != nil { + if err := updateTags(ctx, conn, identifier, nil, map[string]string{key: d.Get("value").(string)}); err != nil { return diag.Errorf("updating %s resource (%s) tag (%s): %s", dynamodb.ServiceID, identifier, key, err) } @@ -104,14 +104,14 @@ func resourceTagUpdate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - 
conn := meta.(*conns.AWSClient).DynamoDBConn() + conn := meta.(*conns.AWSClient).DynamoDBConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - if err := UpdateTags(ctx, conn, identifier, map[string]string{key: d.Get("value").(string)}, nil); err != nil { + if err := updateTags(ctx, conn, identifier, map[string]string{key: d.Get("value").(string)}, nil); err != nil { return diag.Errorf("deleting %s resource (%s) tag (%s): %s", dynamodb.ServiceID, identifier, key, err) } diff --git a/internal/service/dynamodb/tag_gen_test.go b/internal/service/dynamodb/tag_gen_test.go index 145ace7fc77..76967c74e51 100644 --- a/internal/service/dynamodb/tag_gen_test.go +++ b/internal/service/dynamodb/tag_gen_test.go @@ -18,7 +18,7 @@ import ( func testAccCheckTagDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_dynamodb_tag" { @@ -64,7 +64,7 @@ func testAccCheckTagExists(ctx context.Context, resourceName string) resource.Te return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).DynamoDBConn(ctx) _, err = tfdynamodb.GetTag(ctx, conn, identifier, key) diff --git a/internal/service/dynamodb/tag_test.go b/internal/service/dynamodb/tag_test.go index 044235f52db..865363ba838 100644 --- a/internal/service/dynamodb/tag_test.go +++ b/internal/service/dynamodb/tag_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package dynamodb_test import ( diff --git a/internal/service/dynamodb/tags_gen.go b/internal/service/dynamodb/tags_gen.go index 553e427fbea..465fb194522 100644 --- a/internal/service/dynamodb/tags_gen.go +++ b/internal/service/dynamodb/tags_gen.go @@ -19,11 +19,11 @@ import ( // GetTag fetches an individual dynamodb service tag for a resource. // Returns whether the key value and any errors. A NotFoundError is used to signal that no value was found. -// This function will optimise the handling over ListTags, if possible. +// This function will optimise the handling over listTags, if possible. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. func GetTag(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier, key string) (*string, error) { - listTags, err := ListTags(ctx, conn, identifier) + listTags, err := listTags(ctx, conn, identifier) if err != nil { return nil, err @@ -36,10 +36,10 @@ func GetTag(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier, key return listTags.KeyValue(key), nil } -// ListTags lists dynamodb service tags. +// listTags lists dynamodb service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier string) (tftags.KeyValueTags, error) { input := &dynamodb.ListTagsOfResourceInput{ ResourceArn: aws.String(identifier), } @@ -63,7 +63,7 @@ func ListTags(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier st // ListTags lists dynamodb service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).DynamoDBConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).DynamoDBConn(ctx), identifier) if err != nil { return err @@ -105,9 +105,9 @@ func KeyValueTags(ctx context.Context, tags []*dynamodb.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns dynamodb service tags from Context. +// getTagsIn returns dynamodb service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*dynamodb.Tag { +func getTagsIn(ctx context.Context) []*dynamodb.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -117,17 +117,17 @@ func GetTagsIn(ctx context.Context) []*dynamodb.Tag { return nil } -// SetTagsOut sets dynamodb service tags in Context. -func SetTagsOut(ctx context.Context, tags []*dynamodb.Tag) { +// setTagsOut sets dynamodb service tags in Context. +func setTagsOut(ctx context.Context, tags []*dynamodb.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates dynamodb service tags. +// updateTags updates dynamodb service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -167,5 +167,5 @@ func UpdateTags(ctx context.Context, conn dynamodbiface.DynamoDBAPI, identifier // UpdateTags updates dynamodb service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).DynamoDBConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).DynamoDBConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/dynamodb/validate.go b/internal/service/dynamodb/validate.go index 9a17fbf5b98..938971342f1 100644 --- a/internal/service/dynamodb/validate.go +++ b/internal/service/dynamodb/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( diff --git a/internal/service/dynamodb/wait.go b/internal/service/dynamodb/wait.go index a1111fedc6d..ab26a11dae6 100644 --- a/internal/service/dynamodb/wait.go +++ b/internal/service/dynamodb/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package dynamodb import ( @@ -229,7 +232,7 @@ func waitSSEUpdated(ctx context.Context, conn *dynamodb.DynamoDB, tableName stri } func waitReplicaSSEUpdated(ctx context.Context, client *conns.AWSClient, region string, tableName string, timeout time.Duration) (*dynamodb.TableDescription, error) { - sess, err := conns.NewSessionForRegion(&client.DynamoDBConn().Config, region, client.TerraformVersion) + sess, err := conns.NewSessionForRegion(&client.DynamoDBConn(ctx).Config, region, client.TerraformVersion) if err != nil { return nil, fmt.Errorf("creating session for region %q: %w", region, err) } diff --git a/internal/service/ec2/arn.go b/internal/service/ec2/arn.go index 26c3159f264..79929c08f99 100644 --- a/internal/service/ec2/arn.go +++ b/internal/service/ec2/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -19,7 +22,7 @@ func InstanceProfileARNToName(inputARN string) (string, error) { parsedARN, err := arn.Parse(inputARN) if err != nil { - return "", fmt.Errorf("error parsing ARN (%s): %w", inputARN, err) + return "", fmt.Errorf("parsing ARN (%s): %w", inputARN, err) } if actual, expected := parsedARN.Service, ARNService; actual != expected { diff --git a/internal/service/ec2/arn_test.go b/internal/service/ec2/arn_test.go index 1ea6d824a13..b425e74fbce 100644 --- a/internal/service/ec2/arn_test.go +++ b/internal/service/ec2/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -19,12 +22,12 @@ func TestInstanceProfileARNToName(t *testing.T) { { TestName: "empty ARN", InputARN: "", - ExpectedError: regexp.MustCompile(`error parsing ARN`), + ExpectedError: regexp.MustCompile(`parsing ARN`), }, { TestName: "unparsable ARN", InputARN: "test", - ExpectedError: regexp.MustCompile(`error parsing ARN`), + ExpectedError: regexp.MustCompile(`parsing ARN`), }, { TestName: "invalid ARN service", diff --git a/internal/service/ec2/common_schema_data_source.go b/internal/service/ec2/common_schema_data_source.go index 6c71048d437..7201e806db7 100644 --- a/internal/service/ec2/common_schema_data_source.go +++ b/internal/service/ec2/common_schema_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/consts.go b/internal/service/ec2/consts.go index fed490d0d03..787d400d004 100644 --- a/internal/service/ec2/consts.go +++ b/internal/service/ec2/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -42,6 +45,13 @@ func SpotAllocationStrategy_Values() []string { ) } +const ( + // https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-request-status.html#spot-instance-request-status-understand + spotInstanceRequestStatusCodeFulfilled = "fulfilled" + spotInstanceRequestStatusCodePendingEvaluation = "pending-evaluation" + spotInstanceRequestStatusCodePendingFulfillment = "pending-fulfillment" +) + const ( // https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#vpce-interface-lifecycle vpcEndpointStateAvailable = "available" diff --git a/internal/service/ec2/diff.go b/internal/service/ec2/diff.go index 4112198db9f..a55acdb51d8 100644 --- a/internal/service/ec2/diff.go +++ b/internal/service/ec2/diff.go @@ -1,12 +1,15 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/internal/types" ) // suppressEqualCIDRBlockDiffs provides custom difference suppression for CIDR blocks // that have different string values but represent the same CIDR. func suppressEqualCIDRBlockDiffs(k, old, new string, d *schema.ResourceData) bool { - return verify.CIDRBlocksEqual(old, new) + return types.CIDRBlocksEqual(old, new) } diff --git a/internal/service/ec2/ebs_default_kms_key.go b/internal/service/ec2/ebs_default_kms_key.go index 0c1cadb903d..f785001e232 100644 --- a/internal/service/ec2/ebs_default_kms_key.go +++ b/internal/service/ec2/ebs_default_kms_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -35,7 +38,7 @@ func ResourceEBSDefaultKMSKey() *schema.Resource { func resourceEBSDefaultKMSKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) resp, err := conn.ModifyEbsDefaultKmsKeyIdWithContext(ctx, &ec2.ModifyEbsDefaultKmsKeyIdInput{ KmsKeyId: aws.String(d.Get("key_arn").(string)), @@ -51,7 +54,7 @@ func resourceEBSDefaultKMSKeyCreate(ctx context.Context, d *schema.ResourceData, func resourceEBSDefaultKMSKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) resp, err := conn.GetEbsDefaultKmsKeyIdWithContext(ctx, &ec2.GetEbsDefaultKmsKeyIdInput{}) if err != nil { @@ -65,7 +68,7 @@ func resourceEBSDefaultKMSKeyRead(ctx context.Context, d *schema.ResourceData, m func resourceEBSDefaultKMSKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) _, err := conn.ResetEbsDefaultKmsKeyIdWithContext(ctx, &ec2.ResetEbsDefaultKmsKeyIdInput{}) if err != nil { diff --git a/internal/service/ec2/ebs_default_kms_key_data_source.go b/internal/service/ec2/ebs_default_kms_key_data_source.go index 2a318112fec..4a402abe039 100644 --- a/internal/service/ec2/ebs_default_kms_key_data_source.go +++ b/internal/service/ec2/ebs_default_kms_key_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -30,11 +33,11 @@ func DataSourceEBSDefaultKMSKey() *schema.Resource { } func dataSourceEBSDefaultKMSKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) res, err := conn.GetEbsDefaultKmsKeyIdWithContext(ctx, &ec2.GetEbsDefaultKmsKeyIdInput{}) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error reading EBS default KMS key: %s", err) + return sdkdiag.AppendErrorf(diags, "reading EBS default KMS key: %s", err) } d.SetId(meta.(*conns.AWSClient).Region) diff --git a/internal/service/ec2/ebs_default_kms_key_data_source_test.go b/internal/service/ec2/ebs_default_kms_key_data_source_test.go index 4df502a1de9..6deb4b5d5d2 100644 --- a/internal/service/ec2/ebs_default_kms_key_data_source_test.go +++ b/internal/service/ec2/ebs_default_kms_key_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ebs_default_kms_key_test.go b/internal/service/ec2/ebs_default_kms_key_test.go index d8775a38bd4..60250c0e3b4 100644 --- a/internal/service/ec2/ebs_default_kms_key_test.go +++ b/internal/service/ec2/ebs_default_kms_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -49,7 +52,7 @@ func testAccCheckEBSDefaultKMSKeyDestroy(ctx context.Context) resource.TestCheck return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) resp, err := conn.GetEbsDefaultKmsKeyIdWithContext(ctx, &ec2.GetEbsDefaultKmsKeyIdInput{}) if err != nil { @@ -81,7 +84,7 @@ func testAccCheckEBSDefaultKMSKey(ctx context.Context, name string) resource.Tes return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) resp, err := conn.GetEbsDefaultKmsKeyIdWithContext(ctx, &ec2.GetEbsDefaultKmsKeyIdInput{}) if err != nil { @@ -99,7 +102,7 @@ func testAccCheckEBSDefaultKMSKey(ctx context.Context, name string) resource.Tes // testAccEBSManagedDefaultKey returns the account's AWS-managed default CMK. func testAccEBSManagedDefaultKey(ctx context.Context) (*arn.ARN, error) { - conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx) alias, err := tfkms.FindAliasByName(ctx, conn, "alias/aws/ebs") if err != nil { diff --git a/internal/service/ec2/ebs_encryption_by_default.go b/internal/service/ec2/ebs_encryption_by_default.go index 12a6f5c0b2a..a796c7719f8 100644 --- a/internal/service/ec2/ebs_encryption_by_default.go +++ b/internal/service/ec2/ebs_encryption_by_default.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -33,7 +36,7 @@ func ResourceEBSEncryptionByDefault() *schema.Resource { func resourceEBSEncryptionByDefaultCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) enabled := d.Get("enabled").(bool) if err := setEBSEncryptionByDefault(ctx, conn, enabled); err != nil { @@ -48,7 +51,7 @@ func resourceEBSEncryptionByDefaultCreate(ctx context.Context, d *schema.Resourc func resourceEBSEncryptionByDefaultRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) resp, err := conn.GetEbsEncryptionByDefaultWithContext(ctx, &ec2.GetEbsEncryptionByDefaultInput{}) if err != nil { @@ -62,7 +65,7 @@ func resourceEBSEncryptionByDefaultRead(ctx context.Context, d *schema.ResourceD func resourceEBSEncryptionByDefaultUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) enabled := d.Get("enabled").(bool) if err := setEBSEncryptionByDefault(ctx, conn, enabled); err != nil { @@ -74,7 +77,7 @@ func resourceEBSEncryptionByDefaultUpdate(ctx context.Context, d *schema.Resourc func resourceEBSEncryptionByDefaultDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // Removing the resource disables default encryption. 
if err := setEBSEncryptionByDefault(ctx, conn, false); err != nil { diff --git a/internal/service/ec2/ebs_encryption_by_default_data_source.go b/internal/service/ec2/ebs_encryption_by_default_data_source.go index 9c569adc973..6007282e918 100644 --- a/internal/service/ec2/ebs_encryption_by_default_data_source.go +++ b/internal/service/ec2/ebs_encryption_by_default_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -30,11 +33,11 @@ func DataSourceEBSEncryptionByDefault() *schema.Resource { } func dataSourceEBSEncryptionByDefaultRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) res, err := conn.GetEbsEncryptionByDefaultWithContext(ctx, &ec2.GetEbsEncryptionByDefaultInput{}) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error reading default EBS encryption toggle: %s", err) + return sdkdiag.AppendErrorf(diags, "reading default EBS encryption toggle: %s", err) } d.SetId(meta.(*conns.AWSClient).Region) diff --git a/internal/service/ec2/ebs_encryption_by_default_data_source_test.go b/internal/service/ec2/ebs_encryption_by_default_data_source_test.go index 4a4673ce974..6ba6860542e 100644 --- a/internal/service/ec2/ebs_encryption_by_default_data_source_test.go +++ b/internal/service/ec2/ebs_encryption_by_default_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -33,7 +36,7 @@ func TestAccEC2EBSEncryptionByDefaultDataSource_basic(t *testing.T) { func testAccCheckEBSEncryptionByDefaultDataSource(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/ec2/ebs_encryption_by_default_test.go b/internal/service/ec2/ebs_encryption_by_default_test.go index 58954600f0d..ef8f801497a 100644 --- a/internal/service/ec2/ebs_encryption_by_default_test.go +++ b/internal/service/ec2/ebs_encryption_by_default_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -48,7 +51,7 @@ func TestAccEC2EBSEncryptionByDefault_basic(t *testing.T) { func testAccCheckEncryptionByDefaultDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) response, err := conn.GetEbsEncryptionByDefaultWithContext(ctx, &ec2.GetEbsEncryptionByDefaultInput{}) if err != nil { @@ -74,7 +77,7 @@ func testAccCheckEBSEncryptionByDefault(ctx context.Context, n string, enabled b return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) response, err := conn.GetEbsEncryptionByDefaultWithContext(ctx, &ec2.GetEbsEncryptionByDefaultInput{}) if err != nil { diff --git a/internal/service/ec2/ebs_snapshot.go b/internal/service/ec2/ebs_snapshot.go index b73673af6da..daa55f8c27b 100644 --- a/internal/service/ec2/ebs_snapshot.go +++ b/internal/service/ec2/ebs_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -108,7 +111,7 @@ func ResourceEBSSnapshot() *schema.Resource { func resourceEBSSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) volumeID := d.Get("volume_id").(string) input := &ec2.CreateSnapshotInput{ @@ -169,7 +172,7 @@ func resourceEBSSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta func resourceEBSSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) snapshot, err := FindSnapshotByID(ctx, conn, d.Id()) @@ -201,14 +204,14 @@ func resourceEBSSnapshotRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("volume_id", snapshot.VolumeId) d.Set("volume_size", snapshot.VolumeSize) - SetTagsOut(ctx, snapshot.Tags) + setTagsOut(ctx, snapshot.Tags) return diags } func resourceEBSSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("storage_tier") { if tier := d.Get("storage_tier").(string); tier == ec2.TargetStorageTierArchive { @@ -251,7 +254,7 @@ func resourceEBSSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceEBSSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EBS Snapshot: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, d.Timeout(schema.TimeoutDelete), func() (interface{}, error) { diff --git a/internal/service/ec2/ebs_snapshot_copy.go 
b/internal/service/ec2/ebs_snapshot_copy.go index c686a1428d8..f56c76d8af9 100644 --- a/internal/service/ec2/ebs_snapshot_copy.go +++ b/internal/service/ec2/ebs_snapshot_copy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -109,7 +112,7 @@ func ResourceEBSSnapshotCopy() *schema.Resource { func resourceEBSSnapshotCopyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CopySnapshotInput{ SourceRegion: aws.String(d.Get("source_region").(string)), diff --git a/internal/service/ec2/ebs_snapshot_copy_test.go b/internal/service/ec2/ebs_snapshot_copy_test.go index 98b02559ef6..0a56e5305cc 100644 --- a/internal/service/ec2/ebs_snapshot_copy_test.go +++ b/internal/service/ec2/ebs_snapshot_copy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ebs_snapshot_create_volume_permission.go b/internal/service/ec2/ebs_snapshot_create_volume_permission.go index 11cf4c37cee..a2698888885 100644 --- a/internal/service/ec2/ebs_snapshot_create_volume_permission.go +++ b/internal/service/ec2/ebs_snapshot_create_volume_permission.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -48,7 +51,7 @@ func ResourceSnapshotCreateVolumePermission() *schema.Resource { func resourceSnapshotCreateVolumePermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) snapshotID := d.Get("snapshot_id").(string) accountID := d.Get("account_id").(string) @@ -85,7 +88,7 @@ func resourceSnapshotCreateVolumePermissionCreate(ctx context.Context, d *schema func resourceSnapshotCreateVolumePermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) snapshotID, accountID, err := EBSSnapshotCreateVolumePermissionParseResourceID(d.Id()) @@ -110,7 +113,7 @@ func resourceSnapshotCreateVolumePermissionRead(ctx context.Context, d *schema.R func resourceSnapshotCreateVolumePermissionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) snapshotID, accountID, err := EBSSnapshotCreateVolumePermissionParseResourceID(d.Id()) @@ -151,7 +154,7 @@ func resourceSnapshotCreateVolumePermissionDelete(ctx context.Context, d *schema func resourceSnapshotCreateVolumePermissionCustomizeDiff(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) error { if diff.Id() == "" { if snapshotID := diff.Get("snapshot_id").(string); snapshotID != "" { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) snapshot, err := FindSnapshotByID(ctx, conn, snapshotID) diff --git a/internal/service/ec2/ebs_snapshot_create_volume_permission_test.go b/internal/service/ec2/ebs_snapshot_create_volume_permission_test.go index 
d2411a29da5..db08132e493 100644 --- a/internal/service/ec2/ebs_snapshot_create_volume_permission_test.go +++ b/internal/service/ec2/ebs_snapshot_create_volume_permission_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -88,7 +91,7 @@ func TestAccEC2EBSSnapshotCreateVolumePermission_snapshotOwnerExpectError(t *tes func testAccCheckSnapshotCreateVolumePermissionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_snapshot_create_volume_permission" { @@ -135,7 +138,7 @@ func testAccSnapshotCreateVolumePermissionExists(ctx context.Context, n string) return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err = tfec2.FindCreateSnapshotCreateVolumePermissionByTwoPartKey(ctx, conn, snapshotID, accountID) diff --git a/internal/service/ec2/ebs_snapshot_data_source.go b/internal/service/ec2/ebs_snapshot_data_source.go index bb2f7554c73..a7f4f80a92d 100644 --- a/internal/service/ec2/ebs_snapshot_data_source.go +++ b/internal/service/ec2/ebs_snapshot_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -107,7 +110,7 @@ func DataSourceEBSSnapshot() *schema.Resource { func dataSourceEBSSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeSnapshotsInput{} diff --git a/internal/service/ec2/ebs_snapshot_data_source_test.go b/internal/service/ec2/ebs_snapshot_data_source_test.go index 904493826ab..024e9866b5f 100644 --- a/internal/service/ec2/ebs_snapshot_data_source_test.go +++ b/internal/service/ec2/ebs_snapshot_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ebs_snapshot_ids_data_source.go b/internal/service/ec2/ebs_snapshot_ids_data_source.go index d04b380241c..245cfd9041c 100644 --- a/internal/service/ec2/ebs_snapshot_ids_data_source.go +++ b/internal/service/ec2/ebs_snapshot_ids_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -46,7 +49,7 @@ func DataSourceEBSSnapshotIDs() *schema.Resource { func dataSourceEBSSnapshotIDsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSnapshotsInput{} diff --git a/internal/service/ec2/ebs_snapshot_ids_data_source_test.go b/internal/service/ec2/ebs_snapshot_ids_data_source_test.go index b299f8b77c5..272e43e5c53 100644 --- a/internal/service/ec2/ebs_snapshot_ids_data_source_test.go +++ b/internal/service/ec2/ebs_snapshot_ids_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -23,7 +26,7 @@ func TestAccEC2EBSSnapshotIDsDataSource_basic(t *testing.T) { { Config: testAccEBSSnapshotIdsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 0), resource.TestCheckTypeSetElemAttrPair(dataSourceName, "ids.*", "aws_ebs_snapshot.test", "id"), ), }, diff --git a/internal/service/ec2/ebs_snapshot_import.go b/internal/service/ec2/ebs_snapshot_import.go index ba9b2b15ef5..e25ceb77dce 100644 --- a/internal/service/ec2/ebs_snapshot_import.go +++ b/internal/service/ec2/ebs_snapshot_import.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -190,7 +193,7 @@ func ResourceEBSSnapshotImport() *schema.Resource { func resourceEBSSnapshotImportCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.ImportSnapshotInput{ ClientToken: aws.String(id.UniqueId()), @@ -221,7 +224,7 @@ func resourceEBSSnapshotImportCreate(ctx context.Context, d *schema.ResourceData input.RoleName = aws.String(v.(string)) } - outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, propagationTimeout, + outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, iamPropagationTimeout, func() (interface{}, error) { return conn.ImportSnapshotWithContext(ctx, input) }, @@ -240,7 +243,7 @@ func resourceEBSSnapshotImportCreate(ctx context.Context, d *schema.ResourceData d.SetId(aws.StringValue(output.SnapshotId)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EBS Snapshot Import (%s) tags: 
%s", d.Id(), err) } @@ -266,7 +269,7 @@ func resourceEBSSnapshotImportCreate(ctx context.Context, d *schema.ResourceData func resourceEBSSnapshotImportRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) snapshot, err := FindSnapshotByID(ctx, conn, d.Id()) @@ -296,7 +299,7 @@ func resourceEBSSnapshotImportRead(ctx context.Context, d *schema.ResourceData, d.Set("storage_tier", snapshot.StorageTier) d.Set("volume_size", snapshot.VolumeSize) - SetTagsOut(ctx, snapshot.Tags) + setTagsOut(ctx, snapshot.Tags) return diags } diff --git a/internal/service/ec2/ebs_snapshot_import_test.go b/internal/service/ec2/ebs_snapshot_import_test.go index 188ea31e708..a548cdd272a 100644 --- a/internal/service/ec2/ebs_snapshot_import_test.go +++ b/internal/service/ec2/ebs_snapshot_import_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ebs_snapshot_test.go b/internal/service/ec2/ebs_snapshot_test.go index f446f42e0bf..5255f078eee 100644 --- a/internal/service/ec2/ebs_snapshot_test.go +++ b/internal/service/ec2/ebs_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -244,7 +247,7 @@ func testAccCheckSnapshotExists(ctx context.Context, n string, v *ec2.Snapshot) return fmt.Errorf("No EBS Snapshot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSnapshotByID(ctx, conn, rs.Primary.ID) @@ -260,7 +263,7 @@ func testAccCheckSnapshotExists(ctx context.Context, n string, v *ec2.Snapshot) func testAccCheckEBSSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ebs_snapshot" { diff --git a/internal/service/ec2/ebs_volume.go b/internal/service/ec2/ebs_volume.go index 4bc44c3dd63..311db606076 100644 --- a/internal/service/ec2/ebs_volume.go +++ b/internal/service/ec2/ebs_volume.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -123,7 +126,7 @@ func ResourceEBSVolume() *schema.Resource { func resourceEBSVolumeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateVolumeInput{ AvailabilityZone: aws.String(d.Get("availability_zone").(string)), @@ -184,7 +187,7 @@ func resourceEBSVolumeCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceEBSVolumeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) volume, err := FindEBSVolumeByID(ctx, conn, d.Id()) @@ -217,14 +220,14 @@ func resourceEBSVolumeRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("throughput", volume.Throughput) d.Set("type", volume.VolumeType) - SetTagsOut(ctx, volume.Tags) + setTagsOut(ctx, volume.Tags) return diags } func resourceEBSVolumeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifyVolumeInput{ @@ -274,7 +277,7 @@ func resourceEBSVolumeUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceEBSVolumeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.Get("final_snapshot").(bool) { input := &ec2.CreateSnapshotInput{ diff --git a/internal/service/ec2/ebs_volume_attachment.go b/internal/service/ec2/ebs_volume_attachment.go index d6544e778fa..734798f3fb1 100644 --- a/internal/service/ec2/ebs_volume_attachment.go +++ 
b/internal/service/ec2/ebs_volume_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -85,7 +88,7 @@ func ResourceVolumeAttachment() *schema.Resource { func resourceVolumeAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) deviceName := d.Get("device_name").(string) instanceID := d.Get("instance_id").(string) volumeID := d.Get("volume_id").(string) @@ -127,7 +130,7 @@ func resourceVolumeAttachmentCreate(ctx context.Context, d *schema.ResourceData, func resourceVolumeAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) deviceName := d.Get("device_name").(string) instanceID := d.Get("instance_id").(string) volumeID := d.Get("volume_id").(string) @@ -149,7 +152,7 @@ func resourceVolumeAttachmentRead(ctx context.Context, d *schema.ResourceData, m func resourceVolumeAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if _, ok := d.GetOk("skip_destroy"); ok { return diags diff --git a/internal/service/ec2/ebs_volume_attachment_test.go b/internal/service/ec2/ebs_volume_attachment_test.go index e0b35628f3f..833c5735535 100644 --- a/internal/service/ec2/ebs_volume_attachment_test.go +++ b/internal/service/ec2/ebs_volume_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -83,7 +86,7 @@ func TestAccEC2EBSVolumeAttachment_attachStopped(t *testing.T) { rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) stopInstance := func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) err := tfec2.StopInstance(ctx, conn, aws.StringValue(i.InstanceId), 10*time.Minute) @@ -240,7 +243,7 @@ func testAccCheckVolumeAttachmentExists(ctx context.Context, n string) resource. return fmt.Errorf("No EBS Volume Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := tfec2.FindEBSVolumeAttachment(ctx, conn, rs.Primary.Attributes["volume_id"], rs.Primary.Attributes["instance_id"], rs.Primary.Attributes["device_name"]) @@ -250,7 +253,7 @@ func testAccCheckVolumeAttachmentExists(ctx context.Context, n string) resource. func testAccCheckVolumeAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_volume_attachment" { diff --git a/internal/service/ec2/ebs_volume_data_source.go b/internal/service/ec2/ebs_volume_data_source.go index 627a07ad378..1432d7be1e9 100644 --- a/internal/service/ec2/ebs_volume_data_source.go +++ b/internal/service/ec2/ebs_volume_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -87,7 +90,7 @@ func DataSourceEBSVolume() *schema.Resource { func dataSourceEBSVolumeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeVolumesInput{} diff --git a/internal/service/ec2/ebs_volume_data_source_test.go b/internal/service/ec2/ebs_volume_data_source_test.go index 80db98eaee3..43243c71beb 100644 --- a/internal/service/ec2/ebs_volume_data_source_test.go +++ b/internal/service/ec2/ebs_volume_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ebs_volume_test.go b/internal/service/ec2/ebs_volume_test.go index 6c8fe4dfbae..2a7c4b20308 100644 --- a/internal/service/ec2/ebs_volume_test.go +++ b/internal/service/ec2/ebs_volume_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -906,7 +909,7 @@ func TestAccEC2EBSVolume_finalSnapshot(t *testing.T) { func testAccCheckVolumeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ebs_volume" { @@ -941,7 +944,7 @@ func testAccCheckVolumeExists(ctx context.Context, n string, v *ec2.Volume) reso return fmt.Errorf("No EBS Volume ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindEBSVolumeByID(ctx, conn, rs.Primary.ID) @@ -957,7 +960,7 @@ func testAccCheckVolumeExists(ctx context.Context, n string, v *ec2.Volume) reso func testAccCheckVolumeFinalSnapshotExists(ctx context.Context, v *ec2.Volume) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSnapshotsInput{ Filters: tfec2.BuildAttributeFilterList(map[string]string{ diff --git a/internal/service/ec2/ebs_volumes_data_source.go b/internal/service/ec2/ebs_volumes_data_source.go index c7c3502ad9b..c8b731f605c 100644 --- a/internal/service/ec2/ebs_volumes_data_source.go +++ b/internal/service/ec2/ebs_volumes_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceEBSVolumes() *schema.Resource { func dataSourceEBSVolumesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeVolumesInput{} diff --git a/internal/service/ec2/ebs_volumes_data_source_test.go b/internal/service/ec2/ebs_volumes_data_source_test.go index 9d0dfad2b51..8e3ee3bcb99 100644 --- a/internal/service/ec2/ebs_volumes_data_source_test.go +++ b/internal/service/ec2/ebs_volumes_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_ami.go b/internal/service/ec2/ec2_ami.go index 084b1da792c..bf77c0a5750 100644 --- a/internal/service/ec2/ec2_ami.go +++ b/internal/service/ec2/ec2_ami.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -290,7 +293,7 @@ func ResourceAMI() *schema.Resource { func resourceAMICreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) name := d.Get("name").(string) input := &ec2.RegisterImageInput{ @@ -364,7 +367,7 @@ func resourceAMICreate(ctx context.Context, d *schema.ResourceData, meta interfa d.SetId(aws.StringValue(output.ImageId)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EC2 AMI (%s) tags: %s", d.Id(), err) } @@ -383,9 +386,9 @@ func resourceAMICreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAMIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindImageByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -453,14 +456,14 @@ func resourceAMIRead(ctx context.Context, d *schema.ResourceData, meta interface return sdkdiag.AppendErrorf(diags, "setting ephemeral_block_device: %s", err) } - SetTagsOut(ctx, image.Tags) + setTagsOut(ctx, image.Tags) return diags } func resourceAMIUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.Get("description").(string) != "" { _, err := conn.ModifyImageAttributeWithContext(ctx, &ec2.ModifyImageAttributeInput{ 
@@ -486,7 +489,7 @@ func resourceAMIUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAMIDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 AMI: %s", d.Id()) _, err := conn.DeregisterImageWithContext(ctx, &ec2.DeregisterImageInput{ diff --git a/internal/service/ec2/ec2_ami_copy.go b/internal/service/ec2/ec2_ami_copy.go index 49181001c64..cc753688a0c 100644 --- a/internal/service/ec2/ec2_ami_copy.go +++ b/internal/service/ec2/ec2_ami_copy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -268,7 +271,7 @@ func ResourceAMICopy() *schema.Resource { func resourceAMICopyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) name := d.Get("name").(string) sourceImageID := d.Get("source_ami_id").(string) @@ -298,7 +301,7 @@ func resourceAMICopyCreate(ctx context.Context, d *schema.ResourceData, meta int d.SetId(aws.StringValue(output.ImageId)) d.Set("manage_ebs_snapshots", true) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EC2 AMI (%s) tags: %s", d.Id(), err) } diff --git a/internal/service/ec2/ec2_ami_copy_test.go b/internal/service/ec2/ec2_ami_copy_test.go index 6dd77c5e9ad..db94076c5fb 100644 --- a/internal/service/ec2/ec2_ami_copy_test.go +++ b/internal/service/ec2/ec2_ami_copy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_ami_data_source.go b/internal/service/ec2/ec2_ami_data_source.go index 6fa584c1710..854d9d09c81 100644 --- a/internal/service/ec2/ec2_ami_data_source.go +++ b/internal/service/ec2/ec2_ami_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -229,7 +232,7 @@ func DataSourceAMI() *schema.Resource { func dataSourceAMIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeImagesInput{ diff --git a/internal/service/ec2/ec2_ami_data_source_test.go b/internal/service/ec2/ec2_ami_data_source_test.go index dc3b00ce2f6..b5d00aedfc0 100644 --- a/internal/service/ec2/ec2_ami_data_source_test.go +++ b/internal/service/ec2/ec2_ami_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_ami_from_instance.go b/internal/service/ec2/ec2_ami_from_instance.go index 5cb2aed01e9..b15c02e9c02 100644 --- a/internal/service/ec2/ec2_ami_from_instance.go +++ b/internal/service/ec2/ec2_ami_from_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -249,7 +252,7 @@ func ResourceAMIFromInstance() *schema.Resource { func resourceAMIFromInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instanceID := d.Get("source_instance_id").(string) name := d.Get("name").(string) diff --git a/internal/service/ec2/ec2_ami_from_instance_test.go b/internal/service/ec2/ec2_ami_from_instance_test.go index 1de81d672c3..b99f06109a0 100644 --- a/internal/service/ec2/ec2_ami_from_instance_test.go +++ b/internal/service/ec2/ec2_ami_from_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_ami_ids_data_source.go b/internal/service/ec2/ec2_ami_ids_data_source.go index d45042f3f68..6d9eb8c74a6 100644 --- a/internal/service/ec2/ec2_ami_ids_data_source.go +++ b/internal/service/ec2/ec2_ami_ids_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -69,7 +72,7 @@ func DataSourceAMIIDs() *schema.Resource { func dataSourceAMIIDsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeImagesInput{ IncludeDeprecated: aws.Bool(d.Get("include_deprecated").(bool)), diff --git a/internal/service/ec2/ec2_ami_ids_data_source_test.go b/internal/service/ec2/ec2_ami_ids_data_source_test.go index ba4339d2c79..90647256381 100644 --- a/internal/service/ec2/ec2_ami_ids_data_source_test.go +++ b/internal/service/ec2/ec2_ami_ids_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -21,7 +24,7 @@ func TestAccEC2AMIIDsDataSource_basic(t *testing.T) { { Config: testAccAMIIDsDataSourceConfig_basic, Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(datasourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(datasourceName, "ids.#", 0), ), }, }, @@ -69,7 +72,7 @@ func TestAccEC2AMIIDsDataSource_includeDeprecated(t *testing.T) { { Config: testAccAMIIDsDataSourceConfig_includeDeprecated(true), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(datasourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(datasourceName, "ids.#", 0), ), }, }, diff --git a/internal/service/ec2/ec2_ami_launch_permission.go b/internal/service/ec2/ec2_ami_launch_permission.go index 8aafe068980..4b120bd86bf 100644 --- a/internal/service/ec2/ec2_ami_launch_permission.go +++ b/internal/service/ec2/ec2_ami_launch_permission.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -68,7 +71,7 @@ func ResourceAMILaunchPermission() *schema.Resource { } func resourceAMILaunchPermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) imageID := d.Get("image_id").(string) accountID := d.Get("account_id").(string) @@ -97,7 +100,7 @@ func resourceAMILaunchPermissionCreate(ctx context.Context, d *schema.ResourceDa } func resourceAMILaunchPermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) imageID, accountID, group, organizationARN, organizationalUnitARN, err := AMILaunchPermissionParseResourceID(d.Id()) @@ -127,7 +130,7 @@ func resourceAMILaunchPermissionRead(ctx context.Context, d *schema.ResourceData } func resourceAMILaunchPermissionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) imageID, accountID, group, organizationARN, organizationalUnitARN, err := AMILaunchPermissionParseResourceID(d.Id()) diff --git a/internal/service/ec2/ec2_ami_launch_permission_test.go b/internal/service/ec2/ec2_ami_launch_permission_test.go index 1d14b888ebc..d8d8a427c9a 100644 --- a/internal/service/ec2/ec2_ami_launch_permission_test.go +++ b/internal/service/ec2/ec2_ami_launch_permission_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -223,7 +226,7 @@ func testAccCheckAMILaunchPermissionExists(ctx context.Context, n string) resour return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err = tfec2.FindImageLaunchPermission(ctx, conn, imageID, accountID, group, organizationARN, organizationalUnitARN) @@ -233,7 +236,7 @@ func testAccCheckAMILaunchPermissionExists(ctx context.Context, n string) resour func testAccCheckAMILaunchPermissionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ami_launch_permission" { diff --git a/internal/service/ec2/ec2_ami_test.go b/internal/service/ec2/ec2_ami_test.go index da3111bb1a3..1f96b1bcc3a 100644 --- a/internal/service/ec2/ec2_ami_test.go +++ b/internal/service/ec2/ec2_ami_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -594,7 +597,7 @@ func TestAccEC2AMI_imdsSupport(t *testing.T) { func testAccCheckAMIDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for n, rs := range s.RootModule().Resources { // The configuration may contain aws_ami data sources. 
@@ -636,7 +639,7 @@ func testAccCheckAMIExists(ctx context.Context, n string, v *ec2.Image) resource return fmt.Errorf("No EC2 AMI ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindImageByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/ec2_availability_zone_data_source.go b/internal/service/ec2/ec2_availability_zone_data_source.go index 8b154568593..56a8a3a445b 100644 --- a/internal/service/ec2/ec2_availability_zone_data_source.go +++ b/internal/service/ec2/ec2_availability_zone_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -82,7 +85,7 @@ func DataSourceAvailabilityZone() *schema.Resource { func dataSourceAvailabilityZoneRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeAvailabilityZonesInput{} diff --git a/internal/service/ec2/ec2_availability_zone_data_source_test.go b/internal/service/ec2/ec2_availability_zone_data_source_test.go index 764eb16cc7f..b34da36fee8 100644 --- a/internal/service/ec2/ec2_availability_zone_data_source_test.go +++ b/internal/service/ec2/ec2_availability_zone_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -189,7 +192,7 @@ func TestAccEC2AvailabilityZoneDataSource_zoneID(t *testing.T) { } func testAccPreCheckLocalZoneAvailable(ctx context.Context, t *testing.T, groupNames ...string) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeAvailabilityZonesInput{ Filters: tfec2.BuildAttributeFilterList(map[string]string{ diff --git a/internal/service/ec2/ec2_availability_zone_group.go b/internal/service/ec2/ec2_availability_zone_group.go index 03d85e51e7d..2e824e76673 100644 --- a/internal/service/ec2/ec2_availability_zone_group.go +++ b/internal/service/ec2/ec2_availability_zone_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -45,7 +48,7 @@ func ResourceAvailabilityZoneGroup() *schema.Resource { func resourceAvailabilityZoneGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) groupName := d.Get("group_name").(string) availabilityZone, err := FindAvailabilityZoneGroupByName(ctx, conn, groupName) @@ -67,7 +70,7 @@ func resourceAvailabilityZoneGroupCreate(ctx context.Context, d *schema.Resource func resourceAvailabilityZoneGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) availabilityZone, err := FindAvailabilityZoneGroupByName(ctx, conn, d.Id()) @@ -87,7 +90,7 @@ func resourceAvailabilityZoneGroupRead(ctx context.Context, d *schema.ResourceDa func resourceAvailabilityZoneGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if err := modifyAvailabilityZoneOptInStatus(ctx, conn, d.Id(), d.Get("opt_in_status").(string)); err != nil { return sdkdiag.AppendErrorf(diags, "updating EC2 Availability Zone Group (%s): %s", d.Id(), err) diff --git a/internal/service/ec2/ec2_availability_zone_group_test.go b/internal/service/ec2/ec2_availability_zone_group_test.go index 490e824cf15..1792dfed26d 100644 --- a/internal/service/ec2/ec2_availability_zone_group_test.go +++ b/internal/service/ec2/ec2_availability_zone_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_availability_zones_data_source.go b/internal/service/ec2/ec2_availability_zones_data_source.go index 50e93d798d1..525a4d0cdd0 100644 --- a/internal/service/ec2/ec2_availability_zones_data_source.go +++ b/internal/service/ec2/ec2_availability_zones_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -51,14 +54,9 @@ func DataSourceAvailabilityZones() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, }, "state": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - ec2.AvailabilityZoneStateAvailable, - ec2.AvailabilityZoneStateInformation, - ec2.AvailabilityZoneStateImpaired, - ec2.AvailabilityZoneStateUnavailable, - }, false), + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(ec2.AvailabilityZoneState_Values(), false), }, "zone_ids": { Type: schema.TypeList, @@ -71,7 +69,7 @@ func DataSourceAvailabilityZones() *schema.Resource { func dataSourceAvailabilityZonesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Reading Availability Zones.") @@ -104,7 +102,7 @@ func dataSourceAvailabilityZonesRead(ctx context.Context, d *schema.ResourceData log.Printf("[DEBUG] Reading Availability Zones: %s", request) resp, err := conn.DescribeAvailabilityZonesWithContext(ctx, request) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error fetching Availability Zones: %s", err) + return sdkdiag.AppendErrorf(diags, "fetching Availability Zones: %s", err) } sort.Slice(resp.AvailabilityZones, func(i, j int) bool { @@ -144,10 +142,10 @@ func dataSourceAvailabilityZonesRead(ctx context.Context, d *schema.ResourceData return sdkdiag.AppendErrorf(diags, "setting group_names: %s", err) } if err := d.Set("names", names); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting Availability Zone names: %s", err) + return sdkdiag.AppendErrorf(diags, "setting Availability Zone names: %s", err) } if err := d.Set("zone_ids", zoneIds); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting Availability Zone IDs: %s", err) + return 
sdkdiag.AppendErrorf(diags, "setting Availability Zone IDs: %s", err) } return diags diff --git a/internal/service/ec2/ec2_availability_zones_data_source_test.go b/internal/service/ec2/ec2_availability_zones_data_source_test.go index 3313502a389..d93aee9b02f 100644 --- a/internal/service/ec2/ec2_availability_zones_data_source_test.go +++ b/internal/service/ec2/ec2_availability_zones_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_capacity_reservation.go b/internal/service/ec2/ec2_capacity_reservation.go index 9f1b7203ab3..04dfd3464aa 100644 --- a/internal/service/ec2/ec2_capacity_reservation.go +++ b/internal/service/ec2/ec2_capacity_reservation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -117,7 +120,7 @@ func ResourceCapacityReservation() *schema.Resource { func resourceCapacityReservationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateCapacityReservationInput{ AvailabilityZone: aws.String(d.Get("availability_zone").(string)), @@ -176,7 +179,7 @@ func resourceCapacityReservationCreate(ctx context.Context, d *schema.ResourceDa func resourceCapacityReservationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) reservation, err := FindCapacityReservationByID(ctx, conn, d.Id()) @@ -209,14 +212,14 @@ func resourceCapacityReservationRead(ctx context.Context, d *schema.ResourceData d.Set("placement_group_arn", reservation.PlacementGroupArn) d.Set("tenancy", reservation.Tenancy) - SetTagsOut(ctx, reservation.Tags) + setTagsOut(ctx, reservation.Tags) 
return diags } func resourceCapacityReservationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifyCapacityReservationInput{ @@ -248,7 +251,7 @@ func resourceCapacityReservationUpdate(ctx context.Context, d *schema.ResourceDa func resourceCapacityReservationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Capacity Reservation: %s", d.Id()) _, err := conn.CancelCapacityReservationWithContext(ctx, &ec2.CancelCapacityReservationInput{ diff --git a/internal/service/ec2/ec2_capacity_reservation_test.go b/internal/service/ec2/ec2_capacity_reservation_test.go index 1f3e04c5f59..1b33cd34d19 100644 --- a/internal/service/ec2/ec2_capacity_reservation_test.go +++ b/internal/service/ec2/ec2_capacity_reservation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -404,7 +407,7 @@ func testAccCheckCapacityReservationExists(ctx context.Context, n string, v *ec2 return fmt.Errorf("No EC2 Capacity Reservation ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindCapacityReservationByID(ctx, conn, rs.Primary.ID) @@ -420,7 +423,7 @@ func testAccCheckCapacityReservationExists(ctx context.Context, n string, v *ec2 func testAccCheckCapacityReservationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_capacity_reservation" { @@ -445,7 +448,7 @@ func testAccCheckCapacityReservationDestroy(ctx context.Context) resource.TestCh } func testAccPreCheckCapacityReservation(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeCapacityReservationsInput{ MaxResults: aws.Int64(1), diff --git a/internal/service/ec2/ec2_eip.go b/internal/service/ec2/ec2_eip.go index 0251b90e8d2..d51b1432afa 100644 --- a/internal/service/ec2/ec2_eip.go +++ b/internal/service/ec2/ec2_eip.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -13,6 +16,7 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -72,8 +76,12 @@ func ResourceEIP() *schema.Resource { Optional: true, }, "domain": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(ec2.DomainType_Values(), false), + ConflictsWith: []string{"vpc"}, }, "instance": { Type: schema.TypeString, @@ -116,10 +124,12 @@ func ResourceEIP() *schema.Resource { names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), "vpc": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Computed: true, + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Computed: true, + Deprecated: "use domain attribute instead", + ConflictsWith: []string{"domain"}, }, }, } @@ -127,7 +137,7 @@ func ResourceEIP() *schema.Resource { func resourceEIPCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.AllocateAddressInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeElasticIp), @@ -141,6 +151,10 @@ func resourceEIPCreate(ctx context.Context, d *schema.ResourceData, meta interfa input.CustomerOwnedIpv4Pool = aws.String(v.(string)) } + if v := d.Get("domain"); v != nil && v.(string) != "" { + input.Domain = aws.String(v.(string)) + } + if v := d.Get("vpc"); v != nil && v.(bool) { 
input.Domain = aws.String(ec2.DomainTypeVpc) } @@ -185,13 +199,15 @@ func resourceEIPCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceEIPRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if !eipID(d.Id()).IsVPC() { return sdkdiag.AppendErrorf(diags, `with the retirement of EC2-Classic %s domain EC2 EIPs are no longer supported`, ec2.DomainTypeStandard) } - address, err := FindEIPByAllocationID(ctx, conn, d.Id()) + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { + return FindEIPByAllocationID(ctx, conn, d.Id()) + }, d.IsNewResource()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] EC2 EIP (%s) not found, removing from state", d.Id()) @@ -203,6 +219,7 @@ func resourceEIPRead(ctx context.Context, d *schema.ResourceData, meta interface return sdkdiag.AppendErrorf(diags, "reading EC2 EIP (%s): %s", d.Id(), err) } + address := outputRaw.(*ec2.Address) d.Set("allocation_id", address.AllocationId) d.Set("association_id", address.AssociationId) d.Set("carrier_ip", address.CarrierIp) @@ -213,17 +230,15 @@ func resourceEIPRead(ctx context.Context, d *schema.ResourceData, meta interface d.Set("network_border_group", address.NetworkBorderGroup) d.Set("network_interface", address.NetworkInterfaceId) d.Set("public_ipv4_pool", address.PublicIpv4Pool) - d.Set("vpc", aws.StringValue(address.Domain) == ec2.DomainTypeVpc) - d.Set("private_ip", address.PrivateIpAddress) if v := aws.StringValue(address.PrivateIpAddress); v != "" { d.Set("private_dns", PrivateDNSNameForIP(meta.(*conns.AWSClient), v)) } - d.Set("public_ip", address.PublicIp) if v := aws.StringValue(address.PublicIp); v != "" { d.Set("public_dns", PublicDNSNameForIP(meta.(*conns.AWSClient), v)) } + d.Set("vpc", aws.StringValue(address.Domain) == 
ec2.DomainTypeVpc) // Force ID to be an Allocation ID if we're on a VPC. // This allows users to import the EIP based on the IP if they are in a VPC. @@ -231,14 +246,14 @@ func resourceEIPRead(ctx context.Context, d *schema.ResourceData, meta interface d.SetId(aws.StringValue(address.AllocationId)) } - SetTagsOut(ctx, address.Tags) + setTagsOut(ctx, address.Tags) return diags } func resourceEIPUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChanges("associate_with_private_ip", "instance", "network_interface") { o, n := d.GetChange("instance") @@ -262,7 +277,7 @@ func resourceEIPUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceEIPDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if !eipID(d.Id()).IsVPC() { return sdkdiag.AppendErrorf(diags, `with the retirement of EC2-Classic %s domain EC2 EIPs are no longer supported`, ec2.DomainTypeStandard) @@ -327,7 +342,7 @@ func associateEIP(ctx context.Context, conn *ec2.EC2, allocationID, instanceID, return fmt.Errorf("associating EC2 EIP (%s): %w", allocationID, err) } - _, err = tfresource.RetryWhen(ctx, propagationTimeout, + _, err = tfresource.RetryWhen(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindEIPByAssociationID(ctx, conn, aws.StringValue(output.AssociationId)) }, diff --git a/internal/service/ec2/ec2_eip_association.go b/internal/service/ec2/ec2_eip_association.go index 2cd762a0d95..f2ea615c3b2 100644 --- a/internal/service/ec2/ec2_eip_association.go +++ b/internal/service/ec2/ec2_eip_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -68,7 +71,7 @@ func ResourceEIPAssociation() *schema.Resource { func resourceEIPAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.AssociateAddressInput{} @@ -104,7 +107,7 @@ func resourceEIPAssociationCreate(ctx context.Context, d *schema.ResourceData, m d.SetId(aws.StringValue(output.AssociationId)) - _, err = tfresource.RetryWhen(ctx, propagationTimeout, + _, err = tfresource.RetryWhen(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindEIPByAssociationID(ctx, conn, d.Id()) }, @@ -131,7 +134,7 @@ func resourceEIPAssociationCreate(ctx context.Context, d *schema.ResourceData, m func resourceEIPAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if !eipAssociationID(d.Id()).IsVPC() { return sdkdiag.AppendErrorf(diags, `with the retirement of EC2-Classic %s domain EC2 EIPs are no longer supported`, ec2.DomainTypeStandard) @@ -160,7 +163,7 @@ func resourceEIPAssociationRead(ctx context.Context, d *schema.ResourceData, met func resourceEIPAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if !eipAssociationID(d.Id()).IsVPC() { return sdkdiag.AppendErrorf(diags, `with the retirement of EC2-Classic %s domain EC2 EIPs are no longer supported`, ec2.DomainTypeStandard) diff --git a/internal/service/ec2/ec2_eip_association_test.go b/internal/service/ec2/ec2_eip_association_test.go index 59eabb3a3d5..c88ee0dd886 100644 --- a/internal/service/ec2/ec2_eip_association_test.go +++ 
b/internal/service/ec2/ec2_eip_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -161,7 +164,7 @@ func testAccCheckEIPAssociationExists(ctx context.Context, n string, v *ec2.Addr return fmt.Errorf("No EC2 EIP Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindEIPByAssociationID(ctx, conn, rs.Primary.ID) @@ -177,7 +180,7 @@ func testAccCheckEIPAssociationExists(ctx context.Context, n string, v *ec2.Addr func testAccCheckEIPAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eip_association" { @@ -226,7 +229,7 @@ resource "aws_instance" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -269,7 +272,7 @@ resource "aws_instance" "test" { resource "aws_eip" "test" { count = 2 - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -307,7 +310,7 @@ resource "aws_network_interface" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -364,7 +367,7 @@ resource "aws_ec2_tag" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/ec2/ec2_eip_data_source.go b/internal/service/ec2/ec2_eip_data_source.go index 0b3c6f34609..47e5d9e96c5 100644 --- a/internal/service/ec2/ec2_eip_data_source.go +++ b/internal/service/ec2/ec2_eip_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -90,7 +93,7 @@ func DataSourceEIP() *schema.Resource { func dataSourceEIPRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeAddressesInput{} diff --git a/internal/service/ec2/ec2_eip_data_source_test.go b/internal/service/ec2/ec2_eip_data_source_test.go index 001fb4b6a38..a47709ee60b 100644 --- a/internal/service/ec2/ec2_eip_data_source_test.go +++ b/internal/service/ec2/ec2_eip_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -198,7 +201,7 @@ func TestAccEC2EIPDataSource_customerOwnedIPv4Pool(t *testing.T) { func testAccEIPDataSourceConfig_filter(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -217,7 +220,7 @@ data "aws_eip" "test" { func testAccEIPDataSourceConfig_id(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -233,7 +236,7 @@ data "aws_eip" "test" { func testAccEIPDataSourceConfig_publicIP(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -249,7 +252,7 @@ data "aws_eip" "test" { func testAccEIPDataSourceConfig_tags(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/ec2/ec2_eip_test.go b/internal/service/ec2/ec2_eip_test.go index b570f6bade5..c5911174d53 100644 --- a/internal/service/ec2/ec2_eip_test.go +++ b/internal/service/ec2/ec2_eip_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -69,6 +72,40 @@ func TestAccEC2EIP_disappears(t *testing.T) { }) } +func TestAccEC2EIP_migrateVPCToDomain(t *testing.T) { + ctx := acctest.Context(t) + var conf ec2.Address + resourceName := "aws_eip.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + CheckDestroy: testAccCheckEIPDestroy(ctx), + Steps: []resource.TestStep{ + { + ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "4.67.0", + }, + }, + Config: testAccEIPConfig_vpc, + Check: resource.ComposeTestCheckFunc( + testAccCheckEIPExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "domain", "vpc"), + resource.TestCheckResourceAttrSet(resourceName, "public_ip"), + testAccCheckEIPPublicDNS(resourceName), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Config: testAccEIPConfig_basic, + PlanOnly: true, + }, + }, + }) +} + func TestAccEC2EIP_noVPC(t *testing.T) { ctx := acctest.Context(t) var conf ec2.Address @@ -651,7 +688,7 @@ func testAccCheckEIPExists(ctx context.Context, n string, v *ec2.Address) resour return fmt.Errorf("No EC2 EIP ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindEIPByAllocationID(ctx, conn, rs.Primary.ID) @@ -667,7 +704,7 @@ func testAccCheckEIPExists(ctx context.Context, n string, v *ec2.Address) resour func testAccCheckEIPDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eip" { @@ -737,6 +774,12 @@ func testAccCheckEIPPublicDNS(resourceName 
string) resource.TestCheckFunc { } const testAccEIPConfig_basic = ` +resource "aws_eip" "test" { + domain = "vpc" +} +` + +const testAccEIPConfig_vpc = ` resource "aws_eip" "test" { vpc = true } @@ -762,7 +805,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_tags2(tagKey1, tagValue1, tagKey2, tagValue2 string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { %[1]q = %[2]q @@ -802,7 +845,7 @@ func testAccEIPConfig_instance(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseInstance(rName), fmt.Sprintf(` resource "aws_eip" "test" { instance = aws_instance.test.id - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -819,7 +862,7 @@ func testAccEIPConfig_instanceReassociate(rName string) string { fmt.Sprintf(` resource "aws_eip" "test" { instance = aws_instance.test.id - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -924,7 +967,7 @@ resource "aws_instance" "test" { func testAccEIPConfig_instanceAssociated(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseInstanceAssociated(rName), fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" instance = aws_instance.test[1].id associate_with_private_ip = aws_instance.test[1].private_ip @@ -939,7 +982,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_instanceAssociatedSwitch(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseInstanceAssociated(rName), fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" instance = aws_instance.test[0].id associate_with_private_ip = aws_instance.test[0].private_ip @@ -954,7 +997,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_instanceAssociateNotAssociated(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseInstance(rName), fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -984,7 +1027,7 @@ resource "aws_network_interface" "test" { } resource "aws_eip" 
"test" { - vpc = "true" + domain = "vpc" network_interface = aws_network_interface.test.id tags = { @@ -1019,7 +1062,7 @@ resource "aws_network_interface" "test" { resource "aws_eip" "test" { count = 2 - vpc = "true" + domain = "vpc" network_interface = aws_network_interface.test.id associate_with_private_ip = "10.0.0.1${count.index}" @@ -1051,7 +1094,7 @@ resource "aws_network_interface" "test" { func testAccEIPConfig_associationNone(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseAssociation(rName), fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -1065,7 +1108,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_associationENI(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseAssociation(rName), fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -1079,7 +1122,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_associationInstance(rName string) string { return acctest.ConfigCompose(testAccEIPConfig_baseAssociation(rName), fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -1095,7 +1138,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_publicIPv4PoolDefault(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -1107,7 +1150,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_publicIPv4PoolCustom(rName, poolName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" public_ipv4_pool = %[2]q tags = { @@ -1123,7 +1166,7 @@ data "aws_ec2_coip_pools" "test" {} resource "aws_eip" "test" { customer_owned_ipv4_pool = tolist(data.aws_ec2_coip_pools.test.pool_ids)[0] - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -1137,7 +1180,7 @@ func testAccEIPConfig_networkBorderGroup(rName string) string { data "aws_region" current {} resource "aws_eip" "test" { - vpc 
= true + domain = "vpc" network_border_group = data.aws_region.current.name tags = { @@ -1156,7 +1199,7 @@ data "aws_availability_zone" "available" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" network_border_group = data.aws_availability_zone.available.network_border_group tags = { @@ -1169,7 +1212,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_byoipAddressCustomDefault(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -1181,7 +1224,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_byoipAddressCustom(rName, address string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" address = %[2]q tags = { @@ -1194,7 +1237,7 @@ resource "aws_eip" "test" { func testAccEIPConfig_byoipAddressCustomPublicIPv4Pool(rName, address, poolName string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" address = %[2]q public_ipv4_pool = %[3]q diff --git a/internal/service/ec2/ec2_eips_data_source.go b/internal/service/ec2/ec2_eips_data_source.go index d14e563dd23..739a7f4b956 100644 --- a/internal/service/ec2/ec2_eips_data_source.go +++ b/internal/service/ec2/ec2_eips_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -41,7 +44,7 @@ func DataSourceEIPs() *schema.Resource { func dataSourceEIPsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeAddressesInput{} diff --git a/internal/service/ec2/ec2_eips_data_source_test.go b/internal/service/ec2/ec2_eips_data_source_test.go index 06a1ff54fa4..4299f353803 100644 --- a/internal/service/ec2/ec2_eips_data_source_test.go +++ b/internal/service/ec2/ec2_eips_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -22,7 +25,7 @@ func TestAccEC2EIPsDataSource_basic(t *testing.T) { { Config: testAccEIPsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue("data.aws_eips.all", "allocation_ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_eips.all", "allocation_ids.#", 1), resource.TestCheckResourceAttr("data.aws_eips.by_tags", "allocation_ids.#", "1"), resource.TestCheckResourceAttr("data.aws_eips.by_tags", "public_ips.#", "1"), resource.TestCheckResourceAttr("data.aws_eips.none", "allocation_ids.#", "0"), @@ -36,7 +39,7 @@ func TestAccEC2EIPsDataSource_basic(t *testing.T) { func testAccEIPsDataSourceConfig_basic(rName string) string { return fmt.Sprintf(` resource "aws_eip" "test1" { - vpc = true + domain = "vpc" tags = { Name = "%[1]s-1" @@ -44,7 +47,7 @@ resource "aws_eip" "test1" { } resource "aws_eip" "test2" { - vpc = true + domain = "vpc" tags = { Name = "%[1]s-2" diff --git a/internal/service/ec2/ec2_fleet.go b/internal/service/ec2/ec2_fleet.go index 22c2d872dea..9510c5fe366 100644 --- a/internal/service/ec2/ec2_fleet.go +++ b/internal/service/ec2/ec2_fleet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -699,7 +702,7 @@ func ResourceFleet() *schema.Resource { func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) fleetType := d.Get("type").(string) input := &ec2.CreateFleetInput{ @@ -776,7 +779,7 @@ func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) fleet, err := FindFleetByID(ctx, conn, d.Id()) @@ -842,14 +845,14 @@ func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("valid_until", aws.TimeValue(fleet.ValidUntil).Format(time.RFC3339)) } - SetTagsOut(ctx, fleet.Tags) + setTagsOut(ctx, fleet.Tags) return diags } func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifyFleetInput{ @@ -889,7 +892,7 @@ func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceFleetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Fleet: %s", d.Id()) output, err := conn.DeleteFleetsWithContext(ctx, &ec2.DeleteFleetsInput{ diff --git a/internal/service/ec2/ec2_fleet_test.go b/internal/service/ec2/ec2_fleet_test.go index 7f2bf2a15f1..21d012ddbb3 100644 --- a/internal/service/ec2/ec2_fleet_test.go +++ 
b/internal/service/ec2/ec2_fleet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -3147,7 +3150,7 @@ func testAccCheckFleetHistory(ctx context.Context, resourceName string, errorMsg return fmt.Errorf("No EC2 Fleet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeFleetHistoryInput{ FleetId: aws.String(rs.Primary.ID), @@ -3192,7 +3195,7 @@ func testAccCheckFleetExists(ctx context.Context, n string, v *ec2.FleetData) re return fmt.Errorf("No EC2 Fleet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindFleetByID(ctx, conn, rs.Primary.ID) @@ -3208,7 +3211,7 @@ func testAccCheckFleetExists(ctx context.Context, n string, v *ec2.FleetData) re func testAccCheckFleetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_fleet" { @@ -3253,7 +3256,7 @@ func testAccCheckFleetRecreated(i, j *ec2.FleetData) resource.TestCheckFunc { } func testAccPreCheckFleet(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeFleetsInput{ MaxResults: aws.Int64(1), diff --git a/internal/service/ec2/ec2_host.go b/internal/service/ec2/ec2_host.go index 5c38956d062..376bf493fc9 100644 --- a/internal/service/ec2/ec2_host.go +++ b/internal/service/ec2/ec2_host.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -85,7 +88,7 @@ func ResourceHost() *schema.Resource { func resourceHostCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.AllocateHostsInput{ AutoPlacement: aws.String(d.Get("auto_placement").(string)), @@ -125,7 +128,7 @@ func resourceHostCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceHostRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) host, err := FindHostByID(ctx, conn, d.Id()) @@ -155,14 +158,14 @@ func resourceHostRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("outpost_arn", host.OutpostArn) d.Set("owner_id", host.OwnerId) - SetTagsOut(ctx, host.Tags) + setTagsOut(ctx, host.Tags) return diags } func resourceHostUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifyHostsInput{ @@ -205,7 +208,7 @@ func resourceHostUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceHostDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 Host: %s", d.Id()) output, err := conn.ReleaseHostsWithContext(ctx, &ec2.ReleaseHostsInput{ diff --git a/internal/service/ec2/ec2_host_data_source.go b/internal/service/ec2/ec2_host_data_source.go index 2670f3a1719..eb2fd8beedc 100644 --- a/internal/service/ec2/ec2_host_data_source.go +++ 
b/internal/service/ec2/ec2_host_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -83,7 +86,7 @@ func DataSourceHost() *schema.Resource { func dataSourceHostRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeHostsInput{ diff --git a/internal/service/ec2/ec2_host_data_source_test.go b/internal/service/ec2/ec2_host_data_source_test.go index d0328e270d1..d7d29bc3aa8 100644 --- a/internal/service/ec2/ec2_host_data_source_test.go +++ b/internal/service/ec2/ec2_host_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_host_test.go b/internal/service/ec2/ec2_host_test.go index 5af07e0fa7a..6ed0ac332f8 100644 --- a/internal/service/ec2/ec2_host_test.go +++ b/internal/service/ec2/ec2_host_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -207,7 +210,7 @@ func testAccCheckHostExists(ctx context.Context, n string, v *ec2.Host) resource return fmt.Errorf("No EC2 Host ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindHostByID(ctx, conn, rs.Primary.ID) @@ -223,7 +226,7 @@ func testAccCheckHostExists(ctx context.Context, n string, v *ec2.Host) resource func testAccCheckHostDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_host" { diff --git a/internal/service/ec2/ec2_instance.go b/internal/service/ec2/ec2_instance.go index ffbde4ae180..172dda9f050 100644 --- a/internal/service/ec2/ec2_instance.go +++ b/internal/service/ec2/ec2_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -381,6 +384,72 @@ func ResourceInstance() *schema.Resource { Optional: true, Computed: true, }, + "instance_lifecycle": { + Type: schema.TypeString, + Computed: true, + }, + "instance_market_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "market_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(ec2.MarketType_Values(), false), + }, + "spot_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "instance_interruption_behavior": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(ec2.InstanceInterruptionBehavior_Values(), false), + }, + "max_price": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + DiffSuppressFunc: func(k, oldValue, newValue string, d *schema.ResourceData) bool { + if (oldValue != "" && newValue == "") || (strings.TrimRight(oldValue, "0") == strings.TrimRight(newValue, "0")) { + return true + } + return false + }, + }, + "spot_instance_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(ec2.SpotInstanceType_Values(), false), + }, + "valid_until": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidUTCTimestamp, + }, + }, + }, + }, + }, + }, + }, "instance_state": { Type: schema.TypeString, Computed: true, @@ -474,7 +543,7 @@ func ResourceInstance() *schema.Resource { "http_endpoint": { Type: schema.TypeString, Optional: true, - Computed: true, + Default: ec2.InstanceMetadataEndpointStateEnabled, ValidateFunc: 
validation.StringInSlice(ec2.InstanceMetadataEndpointState_Values(), false), }, "http_put_response_hop_limit": { @@ -694,6 +763,10 @@ func ResourceInstance() *schema.Resource { return ok }, }, + "spot_instance_request_id": { + Type: schema.TypeString, + Computed: true, + }, "subnet_id": { Type: schema.TypeString, Optional: true, @@ -769,7 +842,7 @@ func ResourceInstance() *schema.Resource { _, ok := diff.GetOk("launch_template") if diff.Id() != "" && diff.HasChange("launch_template.0.version") && ok { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) stateVersion := diff.Get("launch_template.0.version") @@ -848,7 +921,7 @@ func throughputDiffSuppressFunc(k, old, new string, d *schema.ResourceData) bool func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instanceOpts, err := buildInstanceOpts(ctx, d, meta) if err != nil { @@ -870,6 +943,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in IamInstanceProfile: instanceOpts.IAMInstanceProfile, ImageId: instanceOpts.ImageID, InstanceInitiatedShutdownBehavior: instanceOpts.InstanceInitiatedShutdownBehavior, + InstanceMarketOptions: instanceOpts.InstanceMarketOptions, InstanceType: instanceOpts.InstanceType, Ipv6AddressCount: instanceOpts.Ipv6AddressCount, Ipv6Addresses: instanceOpts.Ipv6Addresses, @@ -896,7 +970,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in } log.Printf("[DEBUG] Creating EC2 Instance: %s", input) - outputRaw, err := tfresource.RetryWhen(ctx, propagationTimeout, + outputRaw, err := tfresource.RetryWhen(ctx, iamPropagationTimeout, func() (interface{}, error) { return conn.RunInstancesWithContext(ctx, input) }, @@ -982,7 +1056,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in func 
resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instance, err := FindInstanceByID(ctx, conn, d.Id()) @@ -1132,7 +1206,7 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte networkInterfaces = append(networkInterfaces, ni) } if err := d.Set("network_interface", networkInterfaces); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting network_interfaces: %v", err) + return sdkdiag.AppendErrorf(diags, "setting network_interfaces: %v", err) } // Set primary network interface details @@ -1170,7 +1244,7 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte } if err := d.Set("secondary_private_ips", secondaryPrivateIPs); err != nil { - return sdkdiag.AppendErrorf(diags, "Error setting private_ips for AWS Instance (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "setting private_ips for AWS Instance (%s): %s", d.Id(), err) } if err := d.Set("ipv6_addresses", ipv6Addresses); err != nil { @@ -1187,7 +1261,7 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("monitoring", monitoringState == ec2.MonitoringStateEnabled || monitoringState == ec2.MonitoringStatePending) } - SetTagsOut(ctx, instance.Tags) + setTagsOut(ctx, instance.Tags) if _, ok := d.GetOk("volume_tags"); ok && !blockDeviceTagsDefined(d) { volumeTags, err := readVolumeTags(ctx, conn, d.Id()) @@ -1321,12 +1395,56 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("capacity_reservation_specification", nil) } + if spotInstanceRequestID := aws.StringValue(instance.SpotInstanceRequestId); spotInstanceRequestID != "" && instance.InstanceLifecycle != nil { + d.Set("instance_lifecycle", instance.InstanceLifecycle) + d.Set("spot_instance_request_id", spotInstanceRequestID) + + input := 
&ec2.DescribeSpotInstanceRequestsInput{ + SpotInstanceRequestIds: aws.StringSlice([]string{spotInstanceRequestID}), + } + + apiObject, err := FindSpotInstanceRequest(ctx, conn, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading EC2 Spot Instance Request (%s): %s", spotInstanceRequestID, err) + } + + tfMap := map[string]interface{}{} + + if v := apiObject.InstanceInterruptionBehavior; v != nil { + tfMap["instance_interruption_behavior"] = aws.StringValue(v) + } + + if v := apiObject.SpotPrice; v != nil { + tfMap["max_price"] = aws.StringValue(v) + } + + if v := apiObject.Type; v != nil { + tfMap["spot_instance_type"] = aws.StringValue(v) + } + + if v := apiObject.ValidUntil; v != nil { + tfMap["valid_until"] = aws.TimeValue(v).Format(time.RFC3339) + } + + if err := d.Set("instance_market_options", []interface{}{map[string]interface{}{ + "market_type": ec2.MarketTypeSpot, + "spot_options": []interface{}{tfMap}, + }}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting instance_market_options: %s", err) + } + } else { + d.Set("instance_lifecycle", nil) + d.Set("instance_market_options", nil) + d.Set("spot_instance_request_id", nil) + } + return diags } func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("volume_tags") && !d.IsNewResource() { volumeIds, err := getInstanceVolumeIDs(ctx, conn, d.Id()) @@ -1337,7 +1455,7 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in o, n := d.GetChange("volume_tags") for _, volumeId := range volumeIds { - if err := UpdateTags(ctx, conn, volumeId, o, n); err != nil { + if err := updateTags(ctx, conn, volumeId, o, n); err != nil { return sdkdiag.AppendErrorf(diags, "updating volume_tags (%s): %s", volumeId, err) } } @@ -1389,7 +1507,7 @@ func resourceInstanceUpdate(ctx context.Context, 
d *schema.ResourceData, meta in return sdkdiag.AppendErrorf(diags, "updating EC2 Instance (%s): %s", d.Id(), err) } } else { - err := retry.RetryContext(ctx, propagationTimeout, func() *retry.RetryError { + err := retry.RetryContext(ctx, iamPropagationTimeout, func() *retry.RetryError { _, err := conn.ReplaceIamInstanceProfileAssociationWithContext(ctx, input) if err != nil { if tfawserr.ErrMessageContains(err, "InvalidParameterValue", "Invalid IAM Instance Profile") { @@ -1664,7 +1782,7 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in }) } if mErr != nil { - return sdkdiag.AppendErrorf(diags, "Error updating Instance monitoring: %s", mErr) + return sdkdiag.AppendErrorf(diags, "updating Instance monitoring: %s", mErr) } } @@ -1807,7 +1925,7 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in if d.HasChange("root_block_device.0.tags") { o, n := d.GetChange("root_block_device.0.tags") - if err := UpdateTags(ctx, conn, volumeID, o, n); err != nil { + if err := updateTags(ctx, conn, volumeID, o, n); err != nil { return sdkdiag.AppendErrorf(diags, "updating tags for volume (%s): %s", volumeID, err) } } @@ -1873,7 +1991,7 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if err := disableInstanceAPITermination(ctx, conn, d.Id(), false); err != nil { log.Printf("[WARN] attempting to terminate EC2 Instance (%s) despite error disabling API termination: %s", d.Id(), err) @@ -1885,6 +2003,17 @@ func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta in } } + if v, ok := d.GetOk("instance_lifecycle"); ok && v == ec2.InstanceLifecycleSpot { + spotInstanceRequestID := d.Get("spot_instance_request_id").(string) + _, err := 
conn.CancelSpotInstanceRequestsWithContext(ctx, &ec2.CancelSpotInstanceRequestsInput{ + SpotInstanceRequestIds: aws.StringSlice([]string{spotInstanceRequestID}), + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "cancelling EC2 Spot Instance Request (%s): %s", spotInstanceRequestID, err) + } + } + + if err := terminateInstance(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { + return sdkdiag.AppendFromErr(diags, err) + } @@ -1948,7 +2077,7 @@ func modifyInstanceAttributeWithStopStart(ctx context.Context, conn *ec2.EC2, in } // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/16433. - _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, propagationTimeout, + _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, ec2PropagationTimeout, func() (interface{}, error) { return conn.StartInstancesWithContext(ctx, &ec2.StartInstancesInput{ InstanceIds: aws.StringSlice([]string{id}), @@ -2027,7 +2156,7 @@ func associateInstanceProfile(ctx context.Context, d *schema.ResourceData, conn Name: aws.String(d.Get("iam_instance_profile").(string)), }, } - err := retry.RetryContext(ctx, propagationTimeout, func() *retry.RetryError { + err := retry.RetryContext(ctx, iamPropagationTimeout, func() *retry.RetryError { _, err := conn.AssociateIamInstanceProfileWithContext(ctx, input) if err != nil { if tfawserr.ErrMessageContains(err, "InvalidParameterValue", "Invalid IAM Instance Profile") { @@ -2189,7 +2318,7 @@ func FetchRootDeviceName(ctx context.Context, conn *ec2.EC2, amiID string) (*str } if rootDeviceName == nil { - return nil, fmt.Errorf("Error finding Root Device Name for AMI (%s)", amiID) + return nil, fmt.Errorf("finding Root Device Name for AMI (%s)", amiID) } return rootDeviceName, nil @@ -2305,7 +2434,7 @@ func readBlockDeviceMappingsFromConfig(ctx context.Context, d *schema.ResourceDa } else { // Enforce IOPs usage with a valid volume type // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/12667 - return
nil, fmt.Errorf("error creating resource: iops attribute not supported for ebs_block_device with volume_type %s", v) + return nil, fmt.Errorf("creating resource: iops attribute not supported for ebs_block_device with volume_type %s", v) } } if throughput, ok := bd["throughput"].(int); ok && throughput > 0 { @@ -2313,7 +2442,7 @@ func readBlockDeviceMappingsFromConfig(ctx context.Context, d *schema.ResourceDa if ec2.VolumeTypeGp3 == strings.ToLower(v) { ebs.Throughput = aws.Int64(int64(throughput)) } else { - return nil, fmt.Errorf("error creating resource: throughput attribute not supported for ebs_block_device with volume_type %s", v) + return nil, fmt.Errorf("creating resource: throughput attribute not supported for ebs_block_device with volume_type %s", v) } } } @@ -2380,7 +2509,7 @@ func readBlockDeviceMappingsFromConfig(ctx context.Context, d *schema.ResourceDa } else { // Enforce IOPs usage with a valid volume type // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/12667 - return nil, fmt.Errorf("error creating resource: iops attribute not supported for root_block_device with volume_type %s", v) + return nil, fmt.Errorf("creating resource: iops attribute not supported for root_block_device with volume_type %s", v) } } if throughput, ok := bd["throughput"].(int); ok && throughput > 0 { @@ -2389,7 +2518,7 @@ func readBlockDeviceMappingsFromConfig(ctx context.Context, d *schema.ResourceDa ebs.Throughput = aws.Int64(int64(throughput)) } else { // Enforce throughput usage with a valid volume type - return nil, fmt.Errorf("error creating resource: throughput attribute not supported for root_block_device with volume_type %s", v) + return nil, fmt.Errorf("creating resource: throughput attribute not supported for root_block_device with volume_type %s", v) } } } @@ -2579,6 +2708,7 @@ type instanceOpts struct { IAMInstanceProfile *ec2.IamInstanceProfileSpecification ImageID *string InstanceInitiatedShutdownBehavior *string + InstanceMarketOptions 
*ec2.InstanceMarketOptionsRequest InstanceType *string Ipv6AddressCount *int64 Ipv6Addresses []*ec2.InstanceIpv6Address @@ -2599,7 +2729,7 @@ type instanceOpts struct { } func buildInstanceOpts(ctx context.Context, d *schema.ResourceData, meta interface{}) (*instanceOpts, error) { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) opts := &instanceOpts{ DisableAPITermination: aws.Bool(d.Get("disable_api_termination").(bool)), @@ -2826,6 +2956,10 @@ func buildInstanceOpts(ctx context.Context, d *schema.ResourceData, meta interfa opts.PrivateDNSNameOptions = expandPrivateDNSNameOptionsRequest(v.([]interface{})[0].(map[string]interface{})) } + + if v, ok := d.GetOk("instance_market_options"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + opts.InstanceMarketOptions = expandInstanceMarketOptionsRequest(v.([]interface{})[0].(map[string]interface{})) + } + + return opts, nil } @@ -3310,6 +3444,44 @@ func flattenPrivateDNSNameOptionsResponse(apiObject *ec2.PrivateDnsNameOptionsRe return tfMap } +func expandInstanceMarketOptionsRequest(tfMap map[string]interface{}) *ec2.InstanceMarketOptionsRequest { + apiObject := &ec2.InstanceMarketOptionsRequest{} + + if v, ok := tfMap["market_type"].(string); ok && v != "" { + apiObject.MarketType = aws.String(v) + } + + if v, ok := tfMap["spot_options"]; ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + apiObject.SpotOptions = expandSpotMarketOptions(v.([]interface{})[0].(map[string]interface{})) + } + + return apiObject +} + +func expandSpotMarketOptions(tfMap map[string]interface{}) *ec2.SpotMarketOptions { + apiObject := &ec2.SpotMarketOptions{} + + if v, ok := tfMap["instance_interruption_behavior"].(string); ok && v != "" { + apiObject.InstanceInterruptionBehavior = aws.String(v) + } + + if v, ok := tfMap["max_price"].(string); ok && v != "" { + apiObject.MaxPrice = aws.String(v) + } + + if v, ok :=
tfMap["spot_instance_type"].(string); ok && v != "" { + apiObject.SpotInstanceType = aws.String(v) + } + + if v, ok := tfMap["valid_until"].(string); ok && v != "" { + v, _ := time.Parse(time.RFC3339, v) + + apiObject.ValidUntil = aws.Time(v) + } + + return apiObject +} + func expandLaunchTemplateSpecification(tfMap map[string]interface{}) *ec2.LaunchTemplateSpecification { if tfMap == nil { return nil diff --git a/internal/service/ec2/ec2_instance_connect_endpoint.go b/internal/service/ec2/ec2_instance_connect_endpoint.go new file mode 100644 index 00000000000..82ddcd5eb95 --- /dev/null +++ b/internal/service/ec2/ec2_instance_connect_endpoint.go @@ -0,0 +1,280 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package ec2 + +import ( + "context" + "fmt" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/ec2" + awstypes "github.com/aws/aws-sdk-go-v2/service/ec2/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/boolplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/listplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/setplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + 
"github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkResource(name="Instance Connect Endpoint") +// @Tags(identifierAttribute="id") +func newResourceInstanceConnectEndpoint(context.Context) (resource.ResourceWithConfigure, error) { + r := &resourceInstanceConnectEndpoint{} + r.SetDefaultCreateTimeout(10 * time.Minute) + r.SetDefaultDeleteTimeout(10 * time.Minute) + + return r, nil +} + +type resourceInstanceConnectEndpoint struct { + framework.ResourceWithConfigure + framework.WithImportByID + framework.WithTimeouts +} + +func (r *resourceInstanceConnectEndpoint) Metadata(_ context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_ec2_instance_connect_endpoint" +} + +func (r *resourceInstanceConnectEndpoint) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "arn": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "availability_zone": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "dns_name": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "fips_dns_name": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + names.AttrID: framework.IDAttribute(), + "network_interface_ids": schema.ListAttribute{ + Computed: true, + ElementType: types.StringType, + PlanModifiers: []planmodifier.List{ + listplanmodifier.UseStateForUnknown(), + }, + 
}, + "owner_id": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "preserve_client_ip": schema.BoolAttribute{ + Optional: true, + Computed: true, + Default: booldefault.StaticBool(true), + PlanModifiers: []planmodifier.Bool{ + boolplanmodifier.RequiresReplace(), + }, + }, + "security_group_ids": schema.SetAttribute{ + Optional: true, + Computed: true, + ElementType: types.StringType, + PlanModifiers: []planmodifier.Set{ + setplanmodifier.RequiresReplace(), + }, + }, + "subnet_id": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + }, + names.AttrTags: tftags.TagsAttribute(), + names.AttrTagsAll: tftags.TagsAttributeComputedOnly(), + "vpc_id": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + }, + Blocks: map[string]schema.Block{ + names.AttrTimeouts: timeouts.Block(ctx, timeouts.Opts{ + Create: true, + Delete: true, + }), + }, + } +} + +func (r *resourceInstanceConnectEndpoint) Create(ctx context.Context, request resource.CreateRequest, response *resource.CreateResponse) { + var data resourceInstanceConnectEndpointData + + response.Diagnostics.Append(request.Plan.Get(ctx, &data)...) 
+ + if response.Diagnostics.HasError() { + return + } + + conn := r.Meta().EC2Client(ctx) + + input := &ec2.CreateInstanceConnectEndpointInput{ + ClientToken: aws.String(id.UniqueId()), + PreserveClientIp: aws.Bool(data.PreserveClientIp.ValueBool()), + SecurityGroupIds: flex.ExpandFrameworkStringValueSet(ctx, data.SecurityGroupIds), + SubnetId: aws.String(data.SubnetId.ValueString()), + TagSpecifications: getTagSpecificationsInV2(ctx, awstypes.ResourceTypeInstanceConnectEndpoint), + } + + output, err := conn.CreateInstanceConnectEndpoint(ctx, input) + + if err != nil { + response.Diagnostics.AddError("creating EC2 Instance Connect Endpoint", err.Error()) + + return + } + + data.InstanceConnectEndpointId = types.StringPointerValue(output.InstanceConnectEndpoint.InstanceConnectEndpointId) + id := data.InstanceConnectEndpointId.ValueString() + + createTimeout := r.CreateTimeout(ctx, data.Timeouts) + instanceConnectEndpoint, err := WaitInstanceConnectEndpointCreated(ctx, conn, id, createTimeout) + if err != nil { + response.Diagnostics.AddError(fmt.Sprintf("waiting for EC2 Instance Connect Endpoint (%s) create", id), err.Error()) + + return + } + + // Set values for unknowns. + if err := flex.Flatten(ctx, instanceConnectEndpoint, &data); err != nil { + response.Diagnostics.AddError("flattening data", err.Error()) + + return + } + + response.Diagnostics.Append(response.State.Set(ctx, &data)...) +} + +func (r *resourceInstanceConnectEndpoint) Read(ctx context.Context, request resource.ReadRequest, response *resource.ReadResponse) { + var data resourceInstanceConnectEndpointData + + response.Diagnostics.Append(request.State.Get(ctx, &data)...) 
+ + if response.Diagnostics.HasError() { + return + } + + conn := r.Meta().EC2Client(ctx) + + id := data.InstanceConnectEndpointId.ValueString() + instanceConnectEndpoint, err := FindInstanceConnectEndpointByID(ctx, conn, id) + + if tfresource.NotFound(err) { + response.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + response.State.RemoveResource(ctx) + + return + } + + if err != nil { + response.Diagnostics.AddError(fmt.Sprintf("reading EC2 Instance Connect Endpoint (%s)", id), err.Error()) + + return + } + + if err := flex.Flatten(ctx, instanceConnectEndpoint, &data); err != nil { + response.Diagnostics.AddError("flattening data", err.Error()) + + return + } + + setTagsOutV2(ctx, instanceConnectEndpoint.Tags) + + response.Diagnostics.Append(response.State.Set(ctx, &data)...) +} + +func (r *resourceInstanceConnectEndpoint) Update(ctx context.Context, request resource.UpdateRequest, response *resource.UpdateResponse) { + // Tags only. +} + +func (r *resourceInstanceConnectEndpoint) Delete(ctx context.Context, request resource.DeleteRequest, response *resource.DeleteResponse) { + var data resourceInstanceConnectEndpointData + + response.Diagnostics.Append(request.State.Get(ctx, &data)...) 
+ + if response.Diagnostics.HasError() { + return + } + + conn := r.Meta().EC2Client(ctx) + + _, err := conn.DeleteInstanceConnectEndpoint(ctx, &ec2.DeleteInstanceConnectEndpointInput{ + InstanceConnectEndpointId: flex.StringFromFramework(ctx, data.InstanceConnectEndpointId), + }) + + if tfawserr.ErrCodeEquals(err, errCodeInvalidInstanceConnectEndpointIdNotFound) { + return + } + + id := data.InstanceConnectEndpointId.ValueString() + + if err != nil { + response.Diagnostics.AddError(fmt.Sprintf("deleting EC2 Instance Connect Endpoint (%s)", id), err.Error()) + + return + } + + deleteTimeout := r.DeleteTimeout(ctx, data.Timeouts) + if _, err := WaitInstanceConnectEndpointDeleted(ctx, conn, id, deleteTimeout); err != nil { + response.Diagnostics.AddError(fmt.Sprintf("waiting for EC2 Instance Connect Endpoint (%s) delete", id), err.Error()) + + return + } +} + +func (r *resourceInstanceConnectEndpoint) ModifyPlan(ctx context.Context, request resource.ModifyPlanRequest, response *resource.ModifyPlanResponse) { + r.SetTagsAll(ctx, request, response) +} + +// See https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Ec2InstanceConnectEndpoint.html. 
+type resourceInstanceConnectEndpointData struct { + InstanceConnectEndpointArn types.String `tfsdk:"arn"` + AvailabilityZone types.String `tfsdk:"availability_zone"` + DnsName types.String `tfsdk:"dns_name"` + FipsDnsName types.String `tfsdk:"fips_dns_name"` + InstanceConnectEndpointId types.String `tfsdk:"id"` + NetworkInterfaceIds types.List `tfsdk:"network_interface_ids"` + OwnerId types.String `tfsdk:"owner_id"` + PreserveClientIp types.Bool `tfsdk:"preserve_client_ip"` + SecurityGroupIds types.Set `tfsdk:"security_group_ids"` + SubnetId types.String `tfsdk:"subnet_id"` + Tags types.Map `tfsdk:"tags"` + TagsAll types.Map `tfsdk:"tags_all"` + Timeouts timeouts.Value `tfsdk:"timeouts"` + VpcId types.String `tfsdk:"vpc_id"` +} diff --git a/internal/service/ec2/ec2_instance_connect_endpoint_test.go b/internal/service/ec2/ec2_instance_connect_endpoint_test.go new file mode 100644 index 00000000000..46376e4631b --- /dev/null +++ b/internal/service/ec2/ec2_instance_connect_endpoint_test.go @@ -0,0 +1,272 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package ec2_test + +import ( + "context" + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/service/ec2" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfec2 "github.com/hashicorp/terraform-provider-aws/internal/service/ec2" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccEC2InstanceConnectEndpoint_basic(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_ec2_instance_connect_endpoint.test" + subnetResourceName := "aws_subnet.test.0" + vpcResourceName := "aws_vpc.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckInstanceConnectEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccInstanceConnectEndpointConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckInstanceConnectEndpointExists(ctx, resourceName), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "ec2", regexp.MustCompile(`instance-connect-endpoint/.+`)), + resource.TestCheckResourceAttrSet(resourceName, "availability_zone"), + resource.TestCheckResourceAttrSet(resourceName, "dns_name"), + resource.TestCheckResourceAttrSet(resourceName, "fips_dns_name"), + acctest.CheckResourceAttrGreaterThanOrEqualValue(resourceName, "network_interface_ids.#", 1), + acctest.CheckResourceAttrAccountID(resourceName, "owner_id"), + resource.TestCheckResourceAttr(resourceName, "preserve_client_ip", "true"), + 
resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "subnet_id", subnetResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "vpc_id", vpcResourceName, "id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccEC2InstanceConnectEndpoint_disappears(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_ec2_instance_connect_endpoint.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckInstanceConnectEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccInstanceConnectEndpointConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceConnectEndpointExists(ctx, resourceName), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfec2.ResourceInstanceConnectEndpoint, resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccEC2InstanceConnectEndpoint_tags(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_ec2_instance_connect_endpoint.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckInstanceConnectEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccInstanceConnectEndpointConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceConnectEndpointExists(ctx, resourceName), 
+ resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccInstanceConnectEndpointConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceConnectEndpointExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccInstanceConnectEndpointConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceConnectEndpointExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func TestAccEC2InstanceConnectEndpoint_securityGroupIDs(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_ec2_instance_connect_endpoint.test" + securityGroup1ResourceName := "aws_security_group.test.0" + securityGroup2ResourceName := "aws_security_group.test.1" + subnetResourceName := "aws_subnet.test.0" + vpcResourceName := "aws_vpc.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckInstanceConnectEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccInstanceConnectEndpointConfig_securityGroupIDs(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckInstanceConnectEndpointExists(ctx, resourceName), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "ec2", 
regexp.MustCompile(`instance-connect-endpoint/.+`)), + resource.TestCheckResourceAttrSet(resourceName, "availability_zone"), + resource.TestCheckResourceAttrSet(resourceName, "dns_name"), + resource.TestCheckResourceAttrSet(resourceName, "fips_dns_name"), + acctest.CheckResourceAttrGreaterThanOrEqualValue(resourceName, "network_interface_ids.#", 1), + resource.TestCheckResourceAttr(resourceName, "preserve_client_ip", "false"), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "2"), + resource.TestCheckTypeSetElemAttrPair(resourceName, "security_group_ids.*", securityGroup1ResourceName, "id"), + resource.TestCheckTypeSetElemAttrPair(resourceName, "security_group_ids.*", securityGroup2ResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName, "subnet_id", subnetResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.Name", rName), + resource.TestCheckResourceAttrPair(resourceName, "vpc_id", vpcResourceName, "id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckInstanceConnectEndpointExists(ctx context.Context, n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.ID == "" { + return fmt.Errorf("No EC2 Instance Connect Endpoint ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Client(ctx) + + _, err := tfec2.FindInstanceConnectEndpointByID(ctx, conn, rs.Primary.ID) + + return err + } +} + +func testAccCheckInstanceConnectEndpointDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Client(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_ec2_instance_connect_endpoint" { + continue + } + + _, err 
:= tfec2.FindInstanceConnectEndpointByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("EC2 Instance Connect Endpoint %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccInstanceConnectEndpointConfig_basic(rName string) string { + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 1), ` +resource "aws_ec2_instance_connect_endpoint" "test" { + subnet_id = aws_subnet.test[0].id +} +`) +} + +func testAccInstanceConnectEndpointConfig_tags1(rName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 1), fmt.Sprintf(` +resource "aws_ec2_instance_connect_endpoint" "test" { + subnet_id = aws_subnet.test[0].id + + tags = { + %[1]q = %[2]q + } +} +`, tagKey1, tagValue1)) +} + +func testAccInstanceConnectEndpointConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 1), fmt.Sprintf(` +resource "aws_ec2_instance_connect_endpoint" "test" { + subnet_id = aws_subnet.test[0].id + + tags = { + %[1]q = %[2]q + %[3]q = %[4]q + } +} +`, tagKey1, tagValue1, tagKey2, tagValue2)) +} + +func testAccInstanceConnectEndpointConfig_securityGroupIDs(rName string) string { + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 1), fmt.Sprintf(` +resource "aws_security_group" "test" { + count = 2 + + name = "%[1]s-${count.index}" + vpc_id = aws_vpc.test.id + + tags = { + Name = %[1]q + } +} + +resource "aws_ec2_instance_connect_endpoint" "test" { + preserve_client_ip = false + subnet_id = aws_subnet.test[0].id + security_group_ids = aws_security_group.test[*].id + + tags = { + Name = %[1]q + } +} +`, rName)) +} diff --git a/internal/service/ec2/ec2_instance_data_source.go b/internal/service/ec2/ec2_instance_data_source.go index 0c00885119a..73e54d824aa 100644 --- a/internal/service/ec2/ec2_instance_data_source.go +++ 
b/internal/service/ec2/ec2_instance_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -392,7 +395,7 @@ func DataSourceInstance() *schema.Resource { // dataSourceInstanceRead performs the instanceID lookup func dataSourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig // Build up search parameters @@ -481,7 +484,7 @@ func instanceDescriptionAttributes(ctx context.Context, d *schema.ResourceData, name, err := InstanceProfileARNToName(aws.StringValue(instance.IamInstanceProfile.Arn)) if err != nil { - return fmt.Errorf("error setting iam_instance_profile: %w", err) + return fmt.Errorf("setting iam_instance_profile: %w", err) } d.Set("iam_instance_profile", name) @@ -504,7 +507,7 @@ func instanceDescriptionAttributes(ctx context.Context, d *schema.ResourceData, } } if err := d.Set("secondary_private_ips", secondaryIPs); err != nil { - return fmt.Errorf("error setting secondary_private_ips: %w", err) + return fmt.Errorf("setting secondary_private_ips: %w", err) } ipV6Addresses := make([]string, 0, len(ni.Ipv6Addresses)) @@ -512,7 +515,7 @@ func instanceDescriptionAttributes(ctx context.Context, d *schema.ResourceData, ipV6Addresses = append(ipV6Addresses, aws.StringValue(ip.Ipv6Address)) } if err := d.Set("ipv6_addresses", ipV6Addresses); err != nil { - return fmt.Errorf("error setting ipv6_addresses: %w", err) + return fmt.Errorf("setting ipv6_addresses: %w", err) } } } @@ -532,7 +535,7 @@ func instanceDescriptionAttributes(ctx context.Context, d *schema.ResourceData, } if err := d.Set("tags", KeyValueTags(ctx, instance.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return fmt.Errorf("error setting tags: %w", err) + return fmt.Errorf("setting 
tags: %w", err) } // Security Groups @@ -602,7 +605,7 @@ func instanceDescriptionAttributes(ctx context.Context, d *schema.ResourceData, if instanceCreditSpecification != nil { if err := d.Set("credit_specification", []interface{}{flattenInstanceCreditSpecification(instanceCreditSpecification)}); err != nil { - return fmt.Errorf("error setting credit_specification: %w", err) + return fmt.Errorf("setting credit_specification: %w", err) } } else { d.Set("credit_specification", nil) @@ -612,24 +615,24 @@ func instanceDescriptionAttributes(ctx context.Context, d *schema.ResourceData, } if err := d.Set("enclave_options", flattenEnclaveOptions(instance.EnclaveOptions)); err != nil { - return fmt.Errorf("error setting enclave_options: %w", err) + return fmt.Errorf("setting enclave_options: %w", err) } if instance.MaintenanceOptions != nil { if err := d.Set("maintenance_options", []interface{}{flattenInstanceMaintenanceOptions(instance.MaintenanceOptions)}); err != nil { - return fmt.Errorf("error setting maintenance_options: %w", err) + return fmt.Errorf("setting maintenance_options: %w", err) } } else { d.Set("maintenance_options", nil) } if err := d.Set("metadata_options", flattenInstanceMetadataOptions(instance.MetadataOptions)); err != nil { - return fmt.Errorf("error setting metadata_options: %w", err) + return fmt.Errorf("setting metadata_options: %w", err) } if instance.PrivateDnsNameOptions != nil { if err := d.Set("private_dns_name_options", []interface{}{flattenPrivateDNSNameOptionsResponse(instance.PrivateDnsNameOptions)}); err != nil { - return fmt.Errorf("error setting private_dns_name_options: %w", err) + return fmt.Errorf("setting private_dns_name_options: %w", err) } } else { d.Set("private_dns_name_options", nil) diff --git a/internal/service/ec2/ec2_instance_data_source_test.go b/internal/service/ec2/ec2_instance_data_source_test.go index d7daef3dec8..74b89de3e4c 100644 --- a/internal/service/ec2/ec2_instance_data_source_test.go +++ 
b/internal/service/ec2/ec2_instance_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_instance_migrate.go b/internal/service/ec2/ec2_instance_migrate.go index 53480d7db82..6ba6ce8518d 100644 --- a/internal/service/ec2/ec2_instance_migrate.go +++ b/internal/service/ec2/ec2_instance_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/ec2_instance_migrate_test.go b/internal/service/ec2/ec2_instance_migrate_test.go index 9bc1c90487a..6c846a1da69 100644 --- a/internal/service/ec2/ec2_instance_migrate_test.go +++ b/internal/service/ec2/ec2_instance_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_instance_state.go b/internal/service/ec2/ec2_instance_state.go index d75295a13d7..2b812406564 100644 --- a/internal/service/ec2/ec2_instance_state.go +++ b/internal/service/ec2/ec2_instance_state.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -55,7 +58,7 @@ func ResourceInstanceState() *schema.Resource { } func resourceInstanceStateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instanceId := d.Get("instance_id").(string) instance, instanceErr := WaitInstanceReadyWithContext(ctx, conn, instanceId, d.Timeout(schema.TimeoutCreate)) @@ -76,7 +79,7 @@ func resourceInstanceStateCreate(ctx context.Context, d *schema.ResourceData, me } func resourceInstanceStateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) state, err := FindInstanceStateByID(ctx, conn, d.Id()) @@ -98,7 +101,7 @@ func resourceInstanceStateRead(ctx context.Context, d *schema.ResourceData, meta } func resourceInstanceStateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instance, instanceErr := WaitInstanceReadyWithContext(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) diff --git a/internal/service/ec2/ec2_instance_state_test.go b/internal/service/ec2/ec2_instance_state_test.go index e9831f287c4..9f00b933986 100644 --- a/internal/service/ec2/ec2_instance_state_test.go +++ b/internal/service/ec2/ec2_instance_state_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -112,7 +115,7 @@ func testAccCheckInstanceStateExists(ctx context.Context, n string) resource.Tes return errors.New("No EC2InstanceState ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) out, err := tfec2.FindInstanceStateByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/ec2_instance_test.go b/internal/service/ec2/ec2_instance_test.go index a874a11e32d..5c9da246688 100644 --- a/internal/service/ec2/ec2_instance_test.go +++ b/internal/service/ec2/ec2_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -430,7 +433,7 @@ func TestAccEC2Instance_EBSBlockDevice_invalidIopsForVolumeType(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccInstanceConfig_ebsBlockDeviceInvalidIOPS, - ExpectError: regexp.MustCompile(`error creating resource: iops attribute not supported for ebs_block_device with volume_type gp2`), + ExpectError: regexp.MustCompile(`creating resource: iops attribute not supported for ebs_block_device with volume_type gp2`), }, }, }) @@ -446,7 +449,7 @@ func TestAccEC2Instance_EBSBlockDevice_invalidThroughputForVolumeType(t *testing Steps: []resource.TestStep{ { Config: testAccInstanceConfig_ebsBlockDeviceInvalidThroughput, - ExpectError: regexp.MustCompile(`error creating resource: throughput attribute not supported for ebs_block_device with volume_type gp2`), + ExpectError: regexp.MustCompile(`creating resource: throughput attribute not supported for ebs_block_device with volume_type gp2`), }, }, }) @@ -713,7 +716,7 @@ func TestAccEC2Instance_gp2WithIopsValue(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccInstanceConfig_gp2IOPSValue(rName), - ExpectError: regexp.MustCompile(`error creating resource: iops attribute not supported for root_block_device with volume_type gp2`), + 
ExpectError: regexp.MustCompile(`creating resource: iops attribute not supported for root_block_device with volume_type gp2`), }, }, }) @@ -4999,7 +5002,18 @@ func TestAccEC2Instance_metadataOptions(t *testing.T) { CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccInstanceConfig_metadataOptions(rName), + Config: testAccInstanceConfig_metadataOptionsDefaults(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckInstanceExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "metadata_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_endpoint", "enabled"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_tokens", "optional"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_put_response_hop_limit", "1"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.instance_metadata_tags", "disabled"), + ), + }, + { + Config: testAccInstanceConfig_metadataOptionsDisabled(rName), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckInstanceExists(ctx, resourceName, &v), resource.TestCheckResourceAttr(resourceName, "metadata_options.#", "1"), @@ -5320,7 +5334,7 @@ func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckInstanceDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).EC2Conn() + conn := provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_instance" { @@ -5359,7 +5373,7 @@ func testAccCheckInstanceExistsWithProvider(ctx context.Context, n string, v *ec return fmt.Errorf("No EC2 Instance ID is set") } - conn := providerF().Meta().(*conns.AWSClient).EC2Conn() + conn := providerF().Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := 
tfec2.FindInstanceByID(ctx, conn, rs.Primary.ID) @@ -5375,7 +5389,7 @@ func testAccCheckInstanceExistsWithProvider(ctx context.Context, n string, v *ec func testAccCheckStopInstance(ctx context.Context, v *ec2.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) return tfec2.StopInstance(ctx, conn, aws.StringValue(v.InstanceId), 10*time.Minute) } @@ -5383,7 +5397,7 @@ func testAccCheckStopInstance(ctx context.Context, v *ec2.Instance) resource.Tes func testAccCheckDetachVolumes(ctx context.Context, instance *ec2.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, v := range instance.BlockDeviceMappings { if v.Ebs != nil && v.Ebs.VolumeId != nil { @@ -5415,7 +5429,7 @@ func testAccCheckDetachVolumes(ctx context.Context, instance *ec2.Instance) reso func TestInstanceHostIDSchema(t *testing.T) { t.Parallel() - actualSchema := tfec2.ResourceInstance().Schema["host_id"] + actualSchema := tfec2.ResourceInstance().SchemaMap()["host_id"] expectedSchema := &schema.Schema{ Type: schema.TypeString, Optional: true, @@ -5433,7 +5447,7 @@ func TestInstanceHostIDSchema(t *testing.T) { func TestInstanceCPUCoreCountSchema(t *testing.T) { t.Parallel() - actualSchema := tfec2.ResourceInstance().Schema["cpu_core_count"] + actualSchema := tfec2.ResourceInstance().SchemaMap()["cpu_core_count"] expectedSchema := &schema.Schema{ Type: schema.TypeInt, Optional: true, @@ -5453,7 +5467,7 @@ func TestInstanceCPUCoreCountSchema(t *testing.T) { func TestInstanceCPUThreadsPerCoreSchema(t *testing.T) { t.Parallel() - actualSchema := tfec2.ResourceInstance().Schema["cpu_threads_per_core"] + actualSchema := tfec2.ResourceInstance().SchemaMap()["cpu_threads_per_core"] expectedSchema := 
&schema.Schema{ Type: schema.TypeInt, Optional: true, @@ -5472,7 +5486,7 @@ func TestInstanceCPUThreadsPerCoreSchema(t *testing.T) { func driftTags(ctx context.Context, instance *ec2.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := conn.CreateTagsWithContext(ctx, &ec2.CreateTagsInput{ Resources: []*string{instance.InstanceId}, Tags: []*ec2.Tag{ @@ -5499,7 +5513,7 @@ func testAccPreCheckHasDefaultVPCDefaultSubnets(ctx context.Context, t *testing. // defaultVPC returns the ID of the default VPC for the current AWS Region, or "" if none exists. func defaultVPC(ctx context.Context, t *testing.T) string { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := conn.DescribeAccountAttributesWithContext(ctx, &ec2.DescribeAccountAttributesInput{ AttributeNames: aws.StringSlice([]string{ec2.AccountAttributeNameDefaultVpc}), @@ -5528,7 +5542,7 @@ func hasDefaultVPC(ctx context.Context, t *testing.T) bool { // defaultSubnetCount returns the number of default subnets in the current region's default VPC. func defaultSubnetCount(ctx context.Context, t *testing.T) int { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSubnetsInput{ Filters: tfec2.BuildAttributeFilterList( @@ -5551,6 +5565,46 @@ func defaultSubnetCount(ctx context.Context, t *testing.T) int { return len(subnets) } +func TestAccEC2Instance_basicWithSpot(t *testing.T) { + ctx := acctest.Context(t) + var v ec2.Instance + resourceName := "aws_instance.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + // No subnet_id specified requires default VPC with default subnets. 
+ PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckInstanceDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccInstanceConfig_basicWithSpot(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceExists(ctx, resourceName, &v), + resource.TestCheckResourceAttrSet(resourceName, "spot_instance_request_id"), + resource.TestCheckResourceAttr(resourceName, "instance_lifecycle", "spot"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.%", "2"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.market_type", "spot"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.spot_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.spot_options.0.%", "4"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.spot_options.0.instance_interruption_behavior", "terminate"), + resource.TestCheckResourceAttrSet(resourceName, "instance_market_options.0.spot_options.0.max_price"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.spot_options.0.spot_instance_type", "one-time"), + resource.TestCheckResourceAttr(resourceName, "instance_market_options.0.spot_options.0.valid_until", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"user_data_replace_on_change"}, + }, + }, + }) +} + // testAccLatestAmazonLinuxPVEBSAMIConfig returns the configuration for a data source that // describes the latest Amazon Linux AMI using PV virtualization and an EBS root device. // The data source is named 'amzn-ami-minimal-pv-ebs'. 
@@ -7324,7 +7378,7 @@ resource "aws_instance" "test" { resource "aws_eip" "test" { instance = aws_instance.test.id - vpc = true + domain = "vpc" depends_on = [aws_internet_gateway.test] tags = { @@ -7354,7 +7408,7 @@ resource "aws_instance" "test" { resource "aws_eip" "test" { instance = aws_instance.test.id - vpc = true + domain = "vpc" depends_on = [aws_internet_gateway.test] tags = { @@ -7384,7 +7438,7 @@ resource "aws_instance" "test" { resource "aws_eip" "test" { instance = aws_instance.test.id - vpc = true + domain = "vpc" depends_on = [aws_internet_gateway.test] tags = { @@ -8429,7 +8483,27 @@ resource "aws_instance" "test" { `, rName, hibernation)) } -func testAccInstanceConfig_metadataOptions(rName string) string { +func testAccInstanceConfig_metadataOptionsDefaults(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + testAccInstanceVPCConfig(rName, false, 0), + acctest.AvailableEC2InstanceTypeForRegion("t3.micro", "t2.micro"), + fmt.Sprintf(` +resource "aws_instance" "test" { + ami = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = data.aws_ec2_instance_type_offering.available.instance_type + subnet_id = aws_subnet.test.id + + tags = { + Name = %[1]q + } + + metadata_options {} +} +`, rName)) +} + +func testAccInstanceConfig_metadataOptionsDisabled(rName string) string { return acctest.ConfigCompose( acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), testAccInstanceVPCConfig(rName, false, 0), @@ -8849,3 +8923,24 @@ resource "aws_instance" "test" { } `, rName)) } + +func testAccInstanceConfig_basicWithSpot(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinux2HVMEBSARM64AMI(), + acctest.AvailableEC2InstanceTypeForRegion("t4g.nano"), + fmt.Sprintf(` +resource "aws_instance" "test" { + ami = data.aws_ami.amzn2-ami-minimal-hvm-ebs-arm64.id + + instance_market_options { + market_type = "spot" + } + + instance_type = 
data.aws_ec2_instance_type_offering.available.instance_type + + tags = { + Name = %[1]q + } +} +`, rName)) +} diff --git a/internal/service/ec2/ec2_instance_type_data_source.go b/internal/service/ec2/ec2_instance_type_data_source.go index c2cf13f1717..547f4f66395 100644 --- a/internal/service/ec2/ec2_instance_type_data_source.go +++ b/internal/service/ec2/ec2_instance_type_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -291,7 +294,7 @@ func DataSourceInstanceType() *schema.Resource { func dataSourceInstanceTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) v, err := FindInstanceTypeByName(ctx, conn, d.Get("instance_type").(string)) diff --git a/internal/service/ec2/ec2_instance_type_data_source_test.go b/internal/service/ec2/ec2_instance_type_data_source_test.go index c46a9e57428..135745f5f9e 100644 --- a/internal/service/ec2/ec2_instance_type_data_source_test.go +++ b/internal/service/ec2/ec2_instance_type_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_instance_type_offering_data_source.go b/internal/service/ec2/ec2_instance_type_offering_data_source.go index 4661540be42..85fe5d4e279 100644 --- a/internal/service/ec2/ec2_instance_type_offering_data_source.go +++ b/internal/service/ec2/ec2_instance_type_offering_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -44,7 +47,7 @@ func DataSourceInstanceTypeOffering() *schema.Resource { func dataSourceInstanceTypeOfferingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeInstanceTypeOfferingsInput{} diff --git a/internal/service/ec2/ec2_instance_type_offering_data_source_test.go b/internal/service/ec2/ec2_instance_type_offering_data_source_test.go index d142281472b..0a6af63897b 100644 --- a/internal/service/ec2/ec2_instance_type_offering_data_source_test.go +++ b/internal/service/ec2/ec2_instance_type_offering_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_instance_type_offerings_data_source.go b/internal/service/ec2/ec2_instance_type_offerings_data_source.go index 3426c30cb34..0d98861ab05 100644 --- a/internal/service/ec2/ec2_instance_type_offerings_data_source.go +++ b/internal/service/ec2/ec2_instance_type_offerings_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -50,7 +53,7 @@ func DataSourceInstanceTypeOfferings() *schema.Resource { func dataSourceInstanceTypeOfferingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeInstanceTypeOfferingsInput{} diff --git a/internal/service/ec2/ec2_instance_type_offerings_data_source_test.go b/internal/service/ec2/ec2_instance_type_offerings_data_source_test.go index b604a354776..1673a00e8e6 100644 --- a/internal/service/ec2/ec2_instance_type_offerings_data_source_test.go +++ b/internal/service/ec2/ec2_instance_type_offerings_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -24,9 +27,9 @@ func TestAccEC2InstanceTypeOfferingsDataSource_filter(t *testing.T) { { Config: testAccInstanceTypeOfferingsDataSourceConfig_filter(), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "locations.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "location_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", 0), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "locations.#", 0), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "location_types.#", 0), ), }, }, @@ -46,9 +49,9 @@ func TestAccEC2InstanceTypeOfferingsDataSource_locationType(t *testing.T) { { Config: testAccInstanceTypeOfferingsDataSourceConfig_location(), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "locations.#", "0"), - 
acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "location_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", 0), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "locations.#", 0), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "location_types.#", 0), ), }, }, @@ -56,7 +59,7 @@ func TestAccEC2InstanceTypeOfferingsDataSource_locationType(t *testing.T) { } func testAccPreCheckInstanceTypeOfferings(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeInstanceTypeOfferingsInput{ MaxResults: aws.Int64(5), diff --git a/internal/service/ec2/ec2_instance_types_data_source.go b/internal/service/ec2/ec2_instance_types_data_source.go index aa5541489c5..783ad53fb44 100644 --- a/internal/service/ec2/ec2_instance_types_data_source.go +++ b/internal/service/ec2/ec2_instance_types_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -34,7 +37,7 @@ func DataSourceInstanceTypes() *schema.Resource { func dataSourceInstanceTypesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeInstanceTypesInput{} diff --git a/internal/service/ec2/ec2_instance_types_data_source_test.go b/internal/service/ec2/ec2_instance_types_data_source_test.go index 93c6af35f36..e62f45021fb 100644 --- a/internal/service/ec2/ec2_instance_types_data_source_test.go +++ b/internal/service/ec2/ec2_instance_types_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -23,7 +26,7 @@ func TestAccEC2InstanceTypesDataSource_basic(t *testing.T) { { Config: testAccInstanceTypesDataSourceConfig_basic(), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", 0), ), }, }, @@ -43,7 +46,7 @@ func TestAccEC2InstanceTypesDataSource_filter(t *testing.T) { { Config: testAccInstanceTypesDataSourceConfig_filter(), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", 0), ), }, }, @@ -51,7 +54,7 @@ func TestAccEC2InstanceTypesDataSource_filter(t *testing.T) { } func testAccPreCheckInstanceTypes(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeInstanceTypesInput{} diff --git a/internal/service/ec2/ec2_instances_data_source.go b/internal/service/ec2/ec2_instances_data_source.go index 391a57b726e..dda6d745634 100644 --- a/internal/service/ec2/ec2_instances_data_source.go +++ b/internal/service/ec2/ec2_instances_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -61,7 +64,7 @@ func DataSourceInstances() *schema.Resource { func dataSourceInstancesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeInstancesInput{} diff --git a/internal/service/ec2/ec2_instances_data_source_test.go b/internal/service/ec2/ec2_instances_data_source_test.go index 8dea633ef42..74a87ff554c 100644 --- a/internal/service/ec2/ec2_instances_data_source_test.go +++ b/internal/service/ec2/ec2_instances_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_key_pair.go b/internal/service/ec2/ec2_key_pair.go index 24962c268aa..c1748b90bfe 100644 --- a/internal/service/ec2/ec2_key_pair.go +++ b/internal/service/ec2/ec2_key_pair.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -96,7 +99,7 @@ func ResourceKeyPair() *schema.Resource { func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) keyName := create.Name(d.Get("key_name").(string), d.Get("key_name_prefix").(string)) input := &ec2.ImportKeyPairInput{ @@ -118,7 +121,7 @@ func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceKeyPairRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) keyPair, err := FindKeyPairByName(ctx, conn, d.Id()) @@ -146,7 +149,7 @@ func resourceKeyPairRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("key_type", keyPair.KeyType) d.Set("key_pair_id", keyPair.KeyPairId) - SetTagsOut(ctx, keyPair.Tags) + setTagsOut(ctx, keyPair.Tags) return diags } @@ -161,7 +164,7 @@ func resourceKeyPairUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceKeyPairDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Key Pair: %s", d.Id()) _, err := conn.DeleteKeyPairWithContext(ctx, &ec2.DeleteKeyPairInput{ diff --git a/internal/service/ec2/ec2_key_pair_data_source.go b/internal/service/ec2/ec2_key_pair_data_source.go index 90d9e323f6b..f81154a788f 100644 --- a/internal/service/ec2/ec2_key_pair_data_source.go +++ b/internal/service/ec2/ec2_key_pair_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -67,7 +70,7 @@ func DataSourceKeyPair() *schema.Resource { func dataSourceKeyPairRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeKeyPairsInput{} diff --git a/internal/service/ec2/ec2_key_pair_data_source_test.go b/internal/service/ec2/ec2_key_pair_data_source_test.go index 13debd181b1..d6f725510bc 100644 --- a/internal/service/ec2/ec2_key_pair_data_source_test.go +++ b/internal/service/ec2/ec2_key_pair_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_key_pair_migrate.go b/internal/service/ec2/ec2_key_pair_migrate.go index 2ef9315abf0..136a5641dab 100644 --- a/internal/service/ec2/ec2_key_pair_migrate.go +++ b/internal/service/ec2/ec2_key_pair_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/ec2_key_pair_migrate_test.go b/internal/service/ec2/ec2_key_pair_migrate_test.go index 7e6bd7b1eea..c1bc6693ce8 100644 --- a/internal/service/ec2/ec2_key_pair_migrate_test.go +++ b/internal/service/ec2/ec2_key_pair_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_key_pair_test.go b/internal/service/ec2/ec2_key_pair_test.go index f6a781c56b2..e76816798ab 100644 --- a/internal/service/ec2/ec2_key_pair_test.go +++ b/internal/service/ec2/ec2_key_pair_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -205,7 +208,7 @@ func TestAccEC2KeyPair_disappears(t *testing.T) { func testAccCheckKeyPairDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_key_pair" { @@ -240,7 +243,7 @@ func testAccCheckKeyPairExists(ctx context.Context, n string, v *ec2.KeyPairInfo return fmt.Errorf("No EC2 Key Pair ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindKeyPairByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/ec2_launch_template.go b/internal/service/ec2/ec2_launch_template.go index 0b11c7873b4..9042f646df2 100644 --- a/internal/service/ec2/ec2_launch_template.go +++ b/internal/service/ec2/ec2_launch_template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -19,10 +22,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -672,7 +675,7 @@ func ResourceLaunchTemplate() *schema.Resource { "http_protocol_ipv6": { Type: schema.TypeString, Optional: true, - Default: ec2.LaunchTemplateInstanceMetadataProtocolIpv6Disabled, + Computed: true, ValidateFunc: validation.StringInSlice(ec2.LaunchTemplateInstanceMetadataProtocolIpv6_Values(), false), }, "http_put_response_hop_limit": { @@ -690,7 +693,7 @@ func ResourceLaunchTemplate() *schema.Resource { "instance_metadata_tags": { Type: schema.TypeString, Optional: true, - Default: ec2.LaunchTemplateInstanceMetadataTagsStateDisabled, + Computed: true, ValidateFunc: validation.StringInSlice(ec2.LaunchTemplateInstanceMetadataTagsState_Values(), false), }, }, @@ -975,7 +978,7 @@ func ResourceLaunchTemplate() *schema.Resource { func resourceLaunchTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &ec2.CreateLaunchTemplateInput{ @@ -1007,7 +1010,7 @@ func resourceLaunchTemplateCreate(ctx context.Context, d *schema.ResourceData, m func resourceLaunchTemplateRead(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) lt, err := FindLaunchTemplateByID(ctx, conn, d.Id()) @@ -1046,14 +1049,14 @@ func resourceLaunchTemplateRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendFromErr(diags, err) } - SetTagsOut(ctx, lt.Tags) + setTagsOut(ctx, lt.Tags) return diags } func resourceLaunchTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) updateKeys := []string{ "block_device_mappings", @@ -1138,7 +1141,7 @@ func resourceLaunchTemplateUpdate(ctx context.Context, d *schema.ResourceData, m func resourceLaunchTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Launch Template: %s", d.Id()) _, err := conn.DeleteLaunchTemplateWithContext(ctx, &ec2.DeleteLaunchTemplateInput{ diff --git a/internal/service/ec2/ec2_launch_template_data_source.go b/internal/service/ec2/ec2_launch_template_data_source.go index 0a109641acf..9b45ad2723a 100644 --- a/internal/service/ec2/ec2_launch_template_data_source.go +++ b/internal/service/ec2/ec2_launch_template_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -777,7 +780,7 @@ func DataSourceLaunchTemplate() *schema.Resource { func dataSourceLaunchTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeLaunchTemplatesInput{} diff --git a/internal/service/ec2/ec2_launch_template_data_source_test.go b/internal/service/ec2/ec2_launch_template_data_source_test.go index c857670d55d..8d3f8ddac4a 100644 --- a/internal/service/ec2/ec2_launch_template_data_source_test.go +++ b/internal/service/ec2/ec2_launch_template_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_launch_template_test.go b/internal/service/ec2/ec2_launch_template_test.go index 2db034581bb..349d94c2bd2 100644 --- a/internal/service/ec2/ec2_launch_template_test.go +++ b/internal/service/ec2/ec2_launch_template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -2930,8 +2933,8 @@ func TestAccEC2LaunchTemplate_metadataOptions(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_endpoint", "enabled"), resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_tokens", "required"), resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_put_response_hop_limit", "2"), - resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_protocol_ipv6", "disabled"), - resource.TestCheckResourceAttr(resourceName, "metadata_options.0.instance_metadata_tags", "disabled"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_protocol_ipv6", ""), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.instance_metadata_tags", ""), ), }, { @@ -2948,7 +2951,7 @@ func TestAccEC2LaunchTemplate_metadataOptions(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_tokens", "required"), resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_put_response_hop_limit", "2"), resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_protocol_ipv6", "enabled"), - resource.TestCheckResourceAttr(resourceName, "metadata_options.0.instance_metadata_tags", "disabled"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.instance_metadata_tags", ""), ), }, { @@ -2981,7 +2984,7 @@ func TestAccEC2LaunchTemplate_metadataOptions(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_endpoint", "enabled"), //Setting any of the values in metadata options will set the http_endpoint to enabled, you will not see it via the Console, but will in the API for any instance made from the template resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_tokens", "required"), resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_put_response_hop_limit", "2"), - 
resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_protocol_ipv6", "disabled"), + resource.TestCheckResourceAttr(resourceName, "metadata_options.0.http_protocol_ipv6", "enabled"), resource.TestCheckResourceAttr(resourceName, "metadata_options.0.instance_metadata_tags", "enabled"), ), }, @@ -3190,7 +3193,7 @@ func testAccCheckLaunchTemplateExists(ctx context.Context, n string, v *ec2.Laun return fmt.Errorf("No EC2 Launch Template ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindLaunchTemplateByID(ctx, conn, rs.Primary.ID) @@ -3206,7 +3209,7 @@ func testAccCheckLaunchTemplateExists(ctx context.Context, n string, v *ec2.Laun func testAccCheckLaunchTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_launch_template" { diff --git a/internal/service/ec2/ec2_placement_group.go b/internal/service/ec2/ec2_placement_group.go index c5c16c309bd..7a169a0ddf7 100644 --- a/internal/service/ec2/ec2_placement_group.go +++ b/internal/service/ec2/ec2_placement_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -82,7 +85,7 @@ func ResourcePlacementGroup() *schema.Resource { func resourcePlacementGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) name := d.Get("name").(string) input := &ec2.CreatePlacementGroupInput{ @@ -118,7 +121,7 @@ func resourcePlacementGroupCreate(ctx context.Context, d *schema.ResourceData, m func resourcePlacementGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) pg, err := FindPlacementGroupByName(ctx, conn, d.Id()) @@ -146,7 +149,7 @@ func resourcePlacementGroupRead(ctx context.Context, d *schema.ResourceData, met d.Set("spread_level", pg.SpreadLevel) d.Set("strategy", pg.Strategy) - SetTagsOut(ctx, pg.Tags) + setTagsOut(ctx, pg.Tags) return diags } @@ -161,7 +164,7 @@ func resourcePlacementGroupUpdate(ctx context.Context, d *schema.ResourceData, m func resourcePlacementGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Placement Group: %s", d.Id()) _, err := conn.DeletePlacementGroupWithContext(ctx, &ec2.DeletePlacementGroupInput{ diff --git a/internal/service/ec2/ec2_placement_group_test.go b/internal/service/ec2/ec2_placement_group_test.go index 99c0c04bb12..fae3073b3af 100644 --- a/internal/service/ec2/ec2_placement_group_test.go +++ b/internal/service/ec2/ec2_placement_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -207,7 +210,7 @@ func TestAccEC2PlacementGroup_spreadLevel(t *testing.T) { func testAccCheckPlacementGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_placement_group" { @@ -242,7 +245,7 @@ func testAccCheckPlacementGroupExists(ctx context.Context, n string, v *ec2.Plac return fmt.Errorf("No EC2 Placement Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindPlacementGroupByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/ec2_public_ipv4_pool_data_source.go b/internal/service/ec2/ec2_public_ipv4_pool_data_source.go index 4099b783dd8..0cb7b2c65db 100644 --- a/internal/service/ec2/ec2_public_ipv4_pool_data_source.go +++ b/internal/service/ec2/ec2_public_ipv4_pool_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -69,7 +72,7 @@ func DataSourcePublicIPv4Pool() *schema.Resource { func dataSourcePublicIPv4PoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig poolID := d.Get("pool_id").(string) diff --git a/internal/service/ec2/ec2_public_ipv4_pool_data_source_test.go b/internal/service/ec2/ec2_public_ipv4_pool_data_source_test.go index b3dd166332d..ddcc6304143 100644 --- a/internal/service/ec2/ec2_public_ipv4_pool_data_source_test.go +++ b/internal/service/ec2/ec2_public_ipv4_pool_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -32,7 +35,7 @@ func TestAccEC2PublicIPv4PoolDataSource_basic(t *testing.T) { } func testAccPreCheckPublicIPv4Pools(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindPublicIPv4Pools(ctx, conn, &ec2.DescribePublicIpv4PoolsInput{}) diff --git a/internal/service/ec2/ec2_public_ipv4_pools_data_source.go b/internal/service/ec2/ec2_public_ipv4_pools_data_source.go index 712f6adb2d3..219760e4105 100644 --- a/internal/service/ec2/ec2_public_ipv4_pools_data_source.go +++ b/internal/service/ec2/ec2_public_ipv4_pools_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -31,7 +34,7 @@ func DataSourcePublicIPv4Pools() *schema.Resource { func dataSourcePublicIPv4PoolsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribePublicIpv4PoolsInput{} diff --git a/internal/service/ec2/ec2_public_ipv4_pools_data_source_test.go b/internal/service/ec2/ec2_public_ipv4_pools_data_source_test.go index 5d9e095fc85..22aa467b241 100644 --- a/internal/service/ec2/ec2_public_ipv4_pools_data_source_test.go +++ b/internal/service/ec2/ec2_public_ipv4_pools_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_serial_console_access.go b/internal/service/ec2/ec2_serial_console_access.go index b4775dcc7ea..7ae26cabe08 100644 --- a/internal/service/ec2/ec2_serial_console_access.go +++ b/internal/service/ec2/ec2_serial_console_access.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -32,11 +35,11 @@ func ResourceSerialConsoleAccess() *schema.Resource { } func resourceSerialConsoleAccessCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) enabled := d.Get("enabled").(bool) if err := setSerialConsoleAccess(ctx, conn, enabled); err != nil { - return diag.Errorf("error setting EC2 Serial Console Access (%t): %s", enabled, err) + return diag.Errorf("setting EC2 Serial Console Access (%t): %s", enabled, err) } d.SetId(meta.(*conns.AWSClient).Region) @@ -45,12 +48,12 @@ func resourceSerialConsoleAccessCreate(ctx context.Context, d *schema.ResourceDa } func resourceSerialConsoleAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) output, err := conn.GetSerialConsoleAccessStatusWithContext(ctx, &ec2.GetSerialConsoleAccessStatusInput{}) if err != nil { - return diag.Errorf("error reading EC2 Serial Console Access: %s", err) + return diag.Errorf("reading EC2 Serial Console Access: %s", err) } d.Set("enabled", output.SerialConsoleAccessEnabled) @@ -59,22 +62,22 @@ func resourceSerialConsoleAccessRead(ctx context.Context, d *schema.ResourceData } func resourceSerialConsoleAccessUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) enabled := d.Get("enabled").(bool) if err := setSerialConsoleAccess(ctx, conn, enabled); err != nil { - return diag.Errorf("error updating EC2 Serial Console Access (%t): %s", enabled, err) + return diag.Errorf("updating EC2 Serial Console Access (%t): %s", enabled, err) } return resourceSerialConsoleAccessRead(ctx, d, meta) } func resourceSerialConsoleAccessDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // Removing the resource disables serial console access. if err := setSerialConsoleAccess(ctx, conn, false); err != nil { - return diag.Errorf("error disabling EC2 Serial Console Access: %s", err) + return diag.Errorf("disabling EC2 Serial Console Access: %s", err) } return nil diff --git a/internal/service/ec2/ec2_serial_console_access_data_source.go b/internal/service/ec2/ec2_serial_console_access_data_source.go index 9bf1d4389ac..68be0c89e9f 100644 --- a/internal/service/ec2/ec2_serial_console_access_data_source.go +++ b/internal/service/ec2/ec2_serial_console_access_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -28,12 +31,12 @@ func DataSourceSerialConsoleAccess() *schema.Resource { } } func dataSourceSerialConsoleAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) output, err := conn.GetSerialConsoleAccessStatusWithContext(ctx, &ec2.GetSerialConsoleAccessStatusInput{}) if err != nil { - return diag.Errorf("error reading EC2 Serial Console Access: %s", err) + return diag.Errorf("reading EC2 Serial Console Access: %s", err) } d.SetId(meta.(*conns.AWSClient).Region) diff --git a/internal/service/ec2/ec2_serial_console_access_data_source_test.go b/internal/service/ec2/ec2_serial_console_access_data_source_test.go index 8cf072933b4..14c28173142 100644 --- a/internal/service/ec2/ec2_serial_console_access_data_source_test.go +++ b/internal/service/ec2/ec2_serial_console_access_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -33,7 +36,7 @@ func TestAccEC2SerialConsoleAccessDataSource_basic(t *testing.T) { func testAccCheckSerialConsoleAccessDataSource(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/ec2/ec2_serial_console_access_test.go b/internal/service/ec2/ec2_serial_console_access_test.go index dcb8d2f4706..d3724e15753 100644 --- a/internal/service/ec2/ec2_serial_console_access_test.go +++ b/internal/service/ec2/ec2_serial_console_access_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -48,7 +51,7 @@ func TestAccEC2SerialConsoleAccess_basic(t *testing.T) { func testAccCheckSerialConsoleAccessDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) response, err := conn.GetSerialConsoleAccessStatusWithContext(ctx, &ec2.GetSerialConsoleAccessStatusInput{}) if err != nil { @@ -74,7 +77,7 @@ func testAccCheckSerialConsoleAccess(ctx context.Context, n string, enabled bool return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) response, err := conn.GetSerialConsoleAccessStatusWithContext(ctx, &ec2.GetSerialConsoleAccessStatusInput{}) if err != nil { diff --git a/internal/service/ec2/ec2_spot_datafeed_subscription.go b/internal/service/ec2/ec2_spot_datafeed_subscription.go index 96af984fee4..f0070bf1ea8 100644 --- a/internal/service/ec2/ec2_spot_datafeed_subscription.go +++ 
b/internal/service/ec2/ec2_spot_datafeed_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -41,7 +44,7 @@ func ResourceSpotDataFeedSubscription() *schema.Resource { func resourceSpotDataFeedSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateSpotDatafeedSubscriptionInput{ Bucket: aws.String(d.Get("bucket").(string)), @@ -64,7 +67,7 @@ func resourceSpotDataFeedSubscriptionCreate(ctx context.Context, d *schema.Resou func resourceSpotDataFeedSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) subscription, err := FindSpotDatafeedSubscription(ctx, conn) @@ -86,7 +89,7 @@ func resourceSpotDataFeedSubscriptionRead(ctx context.Context, d *schema.Resourc func resourceSpotDataFeedSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 Spot Datafeed Subscription: %s", d.Id()) _, err := conn.DeleteSpotDatafeedSubscriptionWithContext(ctx, &ec2.DeleteSpotDatafeedSubscriptionInput{}) diff --git a/internal/service/ec2/ec2_spot_datafeed_subscription_test.go b/internal/service/ec2/ec2_spot_datafeed_subscription_test.go index ddc1d07f52c..6139d30726a 100644 --- a/internal/service/ec2/ec2_spot_datafeed_subscription_test.go +++ b/internal/service/ec2/ec2_spot_datafeed_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -89,7 +92,7 @@ func testAccCheckSpotDatafeedSubscriptionExists(ctx context.Context, n string, v return fmt.Errorf("No EC2 Spot Datafeed Subscription ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSpotDatafeedSubscription(ctx, conn) @@ -105,7 +108,7 @@ func testAccCheckSpotDatafeedSubscriptionExists(ctx context.Context, n string, v func testAccCheckSpotDatafeedSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_spot_datafeed_subscription" { @@ -130,7 +133,7 @@ func testAccCheckSpotDatafeedSubscriptionDestroy(ctx context.Context) resource.T } func testAccPreCheckSpotDatafeedSubscription(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := conn.DescribeSpotDatafeedSubscriptionWithContext(ctx, &ec2.DescribeSpotDatafeedSubscriptionInput{}) diff --git a/internal/service/ec2/ec2_spot_fleet_request.go b/internal/service/ec2/ec2_spot_fleet_request.go index 144fa1f343b..67f6254ccc3 100644 --- a/internal/service/ec2/ec2_spot_fleet_request.go +++ b/internal/service/ec2/ec2_spot_fleet_request.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -18,10 +21,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -869,7 +872,7 @@ func ResourceSpotFleetRequest() *schema.Resource { func resourceSpotFleetRequestCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) _, launchSpecificationOk := d.GetOk("launch_specification") @@ -993,7 +996,7 @@ func resourceSpotFleetRequestCreate(ctx context.Context, d *schema.ResourceData, } log.Printf("[DEBUG] Creating EC2 Spot Fleet Request: %s", input) - outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, propagationTimeout, + outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, iamPropagationTimeout, func() (interface{}, error) { return conn.RequestSpotFleetWithContext(ctx, input) }, @@ -1021,7 +1024,7 @@ func resourceSpotFleetRequestCreate(ctx context.Context, d *schema.ResourceData, func resourceSpotFleetRequestRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) output, err := FindSpotFleetRequestByID(ctx, conn, d.Id()) @@ -1068,7 +1071,7 @@ func resourceSpotFleetRequestRead(ctx 
context.Context, d *schema.ResourceData, m d.Set("fleet_type", config.Type) d.Set("launch_specification", launchSpec) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) if err := d.Set("launch_template_config", flattenLaunchTemplateConfigs(config.LaunchTemplateConfigs)); err != nil { return sdkdiag.AppendErrorf(diags, "setting launch_template_config: %s", err) @@ -1108,7 +1111,7 @@ func resourceSpotFleetRequestRead(ctx context.Context, d *schema.ResourceData, m func resourceSpotFleetRequestUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifySpotFleetRequestInput{ @@ -1144,7 +1147,7 @@ func resourceSpotFleetRequestUpdate(ctx context.Context, d *schema.ResourceData, func resourceSpotFleetRequestDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) terminateInstances := d.Get("terminate_instances_with_expiration").(bool) // If terminate_instances_on_delete is not null, its value is used. 
@@ -1200,7 +1203,7 @@ func resourceSpotFleetRequestDelete(ctx context.Context, d *schema.ResourceData, } func buildSpotFleetLaunchSpecification(ctx context.Context, d map[string]interface{}, meta interface{}) (*ec2.SpotFleetLaunchSpecification, error) { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) opts := &ec2.SpotFleetLaunchSpecification{ ImageId: aws.String(d["ami"].(string)), diff --git a/internal/service/ec2/ec2_spot_fleet_request_migrate.go b/internal/service/ec2/ec2_spot_fleet_request_migrate.go index 77d298458dc..e186eb3eedd 100644 --- a/internal/service/ec2/ec2_spot_fleet_request_migrate.go +++ b/internal/service/ec2/ec2_spot_fleet_request_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/ec2_spot_fleet_request_migrate_test.go b/internal/service/ec2/ec2_spot_fleet_request_migrate_test.go index 8098f4228c6..5621a0f4d2b 100644 --- a/internal/service/ec2/ec2_spot_fleet_request_migrate_test.go +++ b/internal/service/ec2/ec2_spot_fleet_request_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/ec2_spot_fleet_request_test.go b/internal/service/ec2/ec2_spot_fleet_request_test.go index d8adc3845c5..ea557083523 100644 --- a/internal/service/ec2/ec2_spot_fleet_request_test.go +++ b/internal/service/ec2/ec2_spot_fleet_request_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -1791,7 +1794,7 @@ func testAccCheckSpotFleetRequestExists(ctx context.Context, n string, v *ec2.Sp return errors.New("No EC2 Spot Fleet Request ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSpotFleetRequestByID(ctx, conn, rs.Primary.ID) @@ -1807,7 +1810,7 @@ func testAccCheckSpotFleetRequestExists(ctx context.Context, n string, v *ec2.Sp func testAccCheckSpotFleetRequestDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_spot_fleet_request" { @@ -1881,7 +1884,7 @@ func testAccCheckSpotFleetRequest_PlacementAttributes( } func testAccPreCheckSpotFleetRequest(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := tfec2.FindSpotFleetRequests(ctx, conn, &ec2.DescribeSpotFleetRequestsInput{}) diff --git a/internal/service/ec2/ec2_spot_instance_request.go b/internal/service/ec2/ec2_spot_instance_request.go index bcb0cb83eb2..db39e66c415 100644 --- a/internal/service/ec2/ec2_spot_instance_request.go +++ b/internal/service/ec2/ec2_spot_instance_request.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -13,7 +16,6 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -43,44 +45,37 @@ func ResourceSpotInstanceRequest() *schema.Resource { Schema: func() map[string]*schema.Schema { // The Spot Instance Request Schema is based on the AWS Instance schema. - s := ResourceInstance().Schema + s := ResourceInstance().SchemaMap() - // Everything on a spot instance is ForceNew except tags + // Everything on a spot instance is ForceNew (except tags/tags_all). for k, v := range s { + if v.Computed && !v.Optional { + continue + } + // tags_all is Optional+Computed. if k == names.AttrTags || k == names.AttrTagsAll { continue } v.ForceNew = true } - s["volume_tags"] = &schema.Schema{ - Type: schema.TypeMap, - Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, - } - - s["spot_price"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - oldFloat, _ := strconv.ParseFloat(old, 64) - newFloat, _ := strconv.ParseFloat(new, 64) + // Remove attributes added for spot instances. 
+ delete(s, "instance_lifecycle") + delete(s, "instance_market_options") + delete(s, "spot_instance_request_id") - return big.NewFloat(oldFloat).Cmp(big.NewFloat(newFloat)) == 0 - }, + s["block_duration_minutes"] = &schema.Schema{ + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: validation.IntDivisibleBy(60), } - s["spot_type"] = &schema.Schema{ + s["instance_interruption_behavior"] = &schema.Schema{ Type: schema.TypeString, Optional: true, - Default: ec2.SpotInstanceTypePersistent, - ValidateFunc: validation.StringInSlice(ec2.SpotInstanceType_Values(), false), - } - s["wait_for_fulfillment"] = &schema.Schema{ - Type: schema.TypeBool, - Optional: true, - Default: false, + Default: ec2.InstanceInterruptionBehaviorTerminate, + ForceNew: true, + ValidateFunc: validation.StringInSlice(ec2.InstanceInterruptionBehavior_Values(), false), } s["launch_group"] = &schema.Schema{ Type: schema.TypeString, @@ -91,26 +86,31 @@ func ResourceSpotInstanceRequest() *schema.Resource { Type: schema.TypeString, Computed: true, } - s["spot_request_state"] = &schema.Schema{ + s["spot_instance_id"] = &schema.Schema{ Type: schema.TypeString, Computed: true, } - s["spot_instance_id"] = &schema.Schema{ + s["spot_price"] = &schema.Schema{ Type: schema.TypeString, + Optional: true, Computed: true, + ForceNew: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + oldFloat, _ := strconv.ParseFloat(old, 64) + newFloat, _ := strconv.ParseFloat(new, 64) + + return big.NewFloat(oldFloat).Cmp(big.NewFloat(newFloat)) == 0 + }, } - s["block_duration_minutes"] = &schema.Schema{ - Type: schema.TypeInt, - Optional: true, - ForceNew: true, - ValidateFunc: validation.IntDivisibleBy(60), + s["spot_request_state"] = &schema.Schema{ + Type: schema.TypeString, + Computed: true, } - s["instance_interruption_behavior"] = &schema.Schema{ + s["spot_type"] = &schema.Schema{ Type: schema.TypeString, Optional: true, - Default: 
ec2.InstanceInterruptionBehaviorTerminate, - ForceNew: true, - ValidateFunc: validation.StringInSlice(ec2.InstanceInterruptionBehavior_Values(), false), + Default: ec2.SpotInstanceTypePersistent, + ValidateFunc: validation.StringInSlice(ec2.SpotInstanceType_Values(), false), } s["valid_from"] = &schema.Schema{ Type: schema.TypeString, @@ -126,6 +126,17 @@ func ResourceSpotInstanceRequest() *schema.Resource { ValidateFunc: validation.IsRFC3339Time, Computed: true, } + s["volume_tags"] = &schema.Schema{ + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + } + s["wait_for_fulfillment"] = &schema.Schema{ + Type: schema.TypeBool, + Optional: true, + Default: false, + } + return s }(), @@ -137,14 +148,14 @@ func ResourceSpotInstanceRequest() *schema.Resource { func resourceSpotInstanceRequestCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instanceOpts, err := buildInstanceOpts(ctx, d, meta) if err != nil { - return sdkdiag.AppendErrorf(diags, "requesting EC2 Spot Instances: %s", err) + return sdkdiag.AppendFromErr(diags, err) } - spotOpts := &ec2.RequestSpotInstancesInput{ + input := &ec2.RequestSpotInstancesInput{ ClientToken: aws.String(id.UniqueId()), // Though the AWS API supports creating spot instance requests for multiple // instances, for TF purposes we fix this to one instance per request. 
@@ -171,97 +182,68 @@ func resourceSpotInstanceRequestCreate(ctx context.Context, d *schema.ResourceDa } if v, ok := d.GetOk("block_duration_minutes"); ok { - spotOpts.BlockDurationMinutes = aws.Int64(int64(v.(int))) + input.BlockDurationMinutes = aws.Int64(int64(v.(int))) } if v, ok := d.GetOk("launch_group"); ok { - spotOpts.LaunchGroup = aws.String(v.(string)) + input.LaunchGroup = aws.String(v.(string)) } if v, ok := d.GetOk("valid_from"); ok { - validFrom, err := time.Parse(time.RFC3339, v.(string)) - if err != nil { - return sdkdiag.AppendErrorf(diags, "requesting EC2 Spot Instances: %s", err) - } - spotOpts.ValidFrom = aws.Time(validFrom) + v, _ := time.Parse(time.RFC3339, v.(string)) + input.ValidFrom = aws.Time(v) } if v, ok := d.GetOk("valid_until"); ok { - validUntil, err := time.Parse(time.RFC3339, v.(string)) - if err != nil { - return sdkdiag.AppendErrorf(diags, "requesting EC2 Spot Instances: %s", err) - } - spotOpts.ValidUntil = aws.Time(validUntil) + v, _ := time.Parse(time.RFC3339, v.(string)) + input.ValidUntil = aws.Time(v) } // Placement GroupName can only be specified when instanceInterruptionBehavior is not set or set to 'terminate' if v, exists := d.GetOkExists("instance_interruption_behavior"); v.(string) == ec2.InstanceInterruptionBehaviorTerminate || !exists { - spotOpts.LaunchSpecification.Placement = instanceOpts.SpotPlacement + input.LaunchSpecification.Placement = instanceOpts.SpotPlacement } - // Make the spot instance request - var resp *ec2.RequestSpotInstancesOutput - err = retry.RetryContext(ctx, propagationTimeout, func() *retry.RetryError { - resp, err = conn.RequestSpotInstancesWithContext(ctx, spotOpts) - // IAM instance profiles can take ~10 seconds to propagate in AWS: - // http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role-console - if tfawserr.ErrMessageContains(err, "InvalidParameterValue", "Invalid IAM Instance Profile") { - log.Printf("[DEBUG] Invalid IAM Instance 
Profile referenced, retrying...") - return retry.RetryableError(err) - } - // IAM roles can also take time to propagate in AWS: - if tfawserr.ErrMessageContains(err, "InvalidParameterValue", " has no associated IAM Roles") { - log.Printf("[DEBUG] IAM Instance Profile appears to have no IAM roles, retrying...") - return retry.RetryableError(err) - } - if err != nil { - return retry.NonRetryableError(err) - } - return nil - }) + outputRaw, err := tfresource.RetryWhen(ctx, iamPropagationTimeout, + func() (interface{}, error) { + return conn.RequestSpotInstancesWithContext(ctx, input) + }, + func(err error) (bool, error) { + // IAM instance profiles can take ~10 seconds to propagate in AWS: + // http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role-console + if tfawserr.ErrMessageContains(err, errCodeInvalidParameterValue, "Invalid IAM Instance Profile") { + return true, err + } - if tfresource.TimedOut(err) { - resp, err = conn.RequestSpotInstancesWithContext(ctx, spotOpts) - } + // IAM roles can also take time to propagate in AWS: + if tfawserr.ErrMessageContains(err, errCodeInvalidParameterValue, " has no associated IAM Roles") { + return true, err + } + + return false, err + }, + ) if err != nil { - return sdkdiag.AppendErrorf(diags, "requesting EC2 Spot Instances: %s", err) - } - if len(resp.SpotInstanceRequests) != 1 { - return sdkdiag.AppendErrorf(diags, "Expected response with length 1, got: %s", resp) + return sdkdiag.AppendErrorf(diags, "requesting EC2 Spot Instance: %s", err) } - sir := resp.SpotInstanceRequests[0] - d.SetId(aws.StringValue(sir.SpotInstanceRequestId)) + d.SetId(aws.StringValue(outputRaw.(*ec2.RequestSpotInstancesOutput).SpotInstanceRequests[0].SpotInstanceRequestId)) if d.Get("wait_for_fulfillment").(bool) { - spotStateConf := &retry.StateChangeConf{ - // http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html - Pending: []string{"start", "pending-evaluation", 
"pending-fulfillment"}, - Target: []string{"fulfilled"}, - Refresh: SpotInstanceStateRefreshFunc(ctx, conn, sir), - Timeout: d.Timeout(schema.TimeoutCreate), - Delay: 10 * time.Second, - MinTimeout: 3 * time.Second, - } - - log.Printf("[DEBUG] waiting for spot bid to resolve... this may take several minutes.") - _, err = spotStateConf.WaitForStateContext(ctx) - - if err != nil { - return sdkdiag.AppendErrorf(diags, "Error while waiting for spot request (%s) to resolve: %s", sir, err) + if _, err := WaitSpotInstanceRequestFulfilled(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for EC2 Spot Instance Request (%s) to be fulfilled: %s", d.Id(), err) } } return append(diags, resourceSpotInstanceRequestRead(ctx, d, meta)...) } -// Update spot state, etc func resourceSpotInstanceRequestRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindSpotInstanceRequestByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -292,7 +274,7 @@ func resourceSpotInstanceRequestRead(ctx context.Context, d *schema.ResourceData d.Set("launch_group", request.LaunchGroup) d.Set("block_duration_minutes", request.BlockDurationMinutes) - SetTagsOut(ctx, request.Tags) + setTagsOut(ctx, request.Tags) d.Set("instance_interruption_behavior", request.InstanceInterruptionBehavior) d.Set("valid_from", aws.TimeValue(request.ValidFrom).Format(time.RFC3339)) @@ -308,7 +290,7 @@ func resourceSpotInstanceRequestRead(ctx context.Context, d *schema.ResourceData func readInstance(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var 
diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) instance, err := FindInstanceByID(ctx, conn, d.Get("spot_instance_id").(string)) @@ -389,51 +371,26 @@ func resourceSpotInstanceRequestUpdate(ctx context.Context, d *schema.ResourceDa func resourceSpotInstanceRequestDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - log.Printf("[INFO] Cancelling spot request: %s", d.Id()) + log.Printf("[INFO] Cancelling EC2 Spot Instance Request: %s", d.Id()) _, err := conn.CancelSpotInstanceRequestsWithContext(ctx, &ec2.CancelSpotInstanceRequestsInput{ SpotInstanceRequestIds: []*string{aws.String(d.Id())}, }) + if tfawserr.ErrCodeEquals(err, errCodeInvalidSpotInstanceRequestIDNotFound) { + return diags + } + if err != nil { - return sdkdiag.AppendErrorf(diags, "Error cancelling spot request (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "cancelling EC2 Spot Instance Request (%s): %s", d.Id(), err) } - if instanceId := d.Get("spot_instance_id").(string); instanceId != "" { - if err := terminateInstance(ctx, conn, instanceId, d.Timeout(schema.TimeoutDelete)); err != nil { + if instanceID := d.Get("spot_instance_id").(string); instanceID != "" { + if err := terminateInstance(ctx, conn, instanceID, d.Timeout(schema.TimeoutDelete)); err != nil { return sdkdiag.AppendFromErr(diags, err) } } return diags } - -// SpotInstanceStateRefreshFunc returns a retry.StateRefreshFunc that is used to watch -// an EC2 spot instance request -func SpotInstanceStateRefreshFunc(ctx context.Context, conn *ec2.EC2, sir *ec2.SpotInstanceRequest) retry.StateRefreshFunc { - return func() (interface{}, string, error) { - resp, err := conn.DescribeSpotInstanceRequestsWithContext(ctx, &ec2.DescribeSpotInstanceRequestsInput{ - SpotInstanceRequestIds: 
[]*string{sir.SpotInstanceRequestId}, - }) - - if err != nil { - if tfawserr.ErrCodeEquals(err, "InvalidSpotInstanceRequestID.NotFound") { - // Set this to nil as if we didn't find anything. - resp = nil - } else { - log.Printf("Error on StateRefresh: %s", err) - return nil, "", err - } - } - - if resp == nil || len(resp.SpotInstanceRequests) == 0 { - // Sometimes AWS just has consistency issues and doesn't see - // our request yet. Return an empty state. - return nil, "", nil - } - - req := resp.SpotInstanceRequests[0] - return req, *req.Status.Code, nil - } -} diff --git a/internal/service/ec2/ec2_spot_instance_request_test.go b/internal/service/ec2/ec2_spot_instance_request_test.go index 1f3f3802257..54adf851251 100644 --- a/internal/service/ec2/ec2_spot_instance_request_test.go +++ b/internal/service/ec2/ec2_spot_instance_request_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -76,7 +79,7 @@ func TestAccEC2SpotInstanceRequest_disappears(t *testing.T) { func TestAccEC2SpotInstanceRequest_tags(t *testing.T) { ctx := acctest.Context(t) - var sir ec2.SpotInstanceRequest + var sir1, sir2, sir3 ec2.SpotInstanceRequest resourceName := "aws_spot_instance_request.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -89,7 +92,7 @@ func TestAccEC2SpotInstanceRequest_tags(t *testing.T) { { Config: testAccSpotInstanceRequestConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir), + testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir1), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -103,7 +106,8 @@ func TestAccEC2SpotInstanceRequest_tags(t *testing.T) { { Config: testAccSpotInstanceRequestConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: 
resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir), + testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir2), + testAccCheckSpotInstanceRequestIDsEqual(&sir2, &sir1), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -112,7 +116,8 @@ func TestAccEC2SpotInstanceRequest_tags(t *testing.T) { { Config: testAccSpotInstanceRequestConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir), + testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir3), + testAccCheckSpotInstanceRequestIDsEqual(&sir3, &sir2), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -496,7 +501,7 @@ func TestAccEC2SpotInstanceRequest_interruptUpdate(t *testing.T) { Config: testAccSpotInstanceRequestConfig_interrupt(rName, "terminate"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir2), - testAccCheckSpotInstanceRequestRecreated(&sir1, &sir2), + testAccCheckSpotInstanceRequestIDsNotEqual(&sir1, &sir2), resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "terminate"), ), }, @@ -504,6 +509,31 @@ func TestAccEC2SpotInstanceRequest_interruptUpdate(t *testing.T) { }) } +func TestAccEC2SpotInstanceRequest_withInstanceProfile(t *testing.T) { + ctx := acctest.Context(t) + var sir ec2.SpotInstanceRequest + resourceName := "aws_spot_instance_request.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, 
+ CheckDestroy: testAccCheckSpotInstanceRequestDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSpotInstanceRequestConfig_withInstanceProfile(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckSpotInstanceRequestExists(ctx, resourceName, &sir), + resource.TestCheckResourceAttrSet(resourceName, "iam_instance_profile"), + resource.TestCheckResourceAttr(resourceName, "spot_bid_status", "fulfilled"), + resource.TestCheckResourceAttr(resourceName, "spot_request_state", "active"), + ), + }, + }, + }) +} + func testAccSpotInstanceRequestValidUntil(t *testing.T) string { return testAccSpotInstanceRequestTime(t, "12h") } @@ -519,7 +549,7 @@ func testAccSpotInstanceRequestTime(t *testing.T, duration string) string { func testAccCheckSpotInstanceRequestDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_spot_instance_request" { @@ -566,7 +596,7 @@ func testAccCheckSpotInstanceRequestExists(ctx context.Context, n string, v *ec2 return fmt.Errorf("No EC2 Spot Instance Request ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSpotInstanceRequestByID(ctx, conn, rs.Primary.ID) @@ -621,7 +651,7 @@ func testAccCheckSpotInstanceRequestAttributesCheckSIRWithoutSpot( func testAccCheckSpotInstanceRequest_InstanceAttributes(ctx context.Context, v *ec2.SpotInstanceRequest, sgName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) instance, err := tfec2.FindInstanceByID(ctx, conn, aws.StringValue(v.InstanceId)) @@ -668,10 +698,20 @@ func 
testAccCheckSpotInstanceRequestAttributesVPC( } } -func testAccCheckSpotInstanceRequestRecreated(before, after *ec2.SpotInstanceRequest) resource.TestCheckFunc { +func testAccCheckSpotInstanceRequestIDsEqual(sir1, sir2 *ec2.SpotInstanceRequest) resource.TestCheckFunc { return func(s *terraform.State) error { - if before, after := aws.StringValue(before.InstanceId), aws.StringValue(after.InstanceId); before == after { - return fmt.Errorf("Spot Instance (%s) not recreated", before) + if aws.StringValue(sir1.SpotInstanceRequestId) != aws.StringValue(sir2.SpotInstanceRequestId) { + return fmt.Errorf("Spot Instance Request IDs are not equal") + } + + return nil + } +} + +func testAccCheckSpotInstanceRequestIDsNotEqual(sir1, sir2 *ec2.SpotInstanceRequest) resource.TestCheckFunc { + return func(s *terraform.State) error { + if aws.StringValue(sir1.SpotInstanceRequestId) == aws.StringValue(sir2.SpotInstanceRequestId) { + return fmt.Errorf("Spot Instance Request IDs are equal") } return nil @@ -1038,3 +1078,48 @@ resource "aws_ec2_tag" "test" { } `, rName, interruptionBehavior)) } + +func testAccSpotInstanceRequestConfig_withInstanceProfile(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigLatestAmazonLinuxHVMEBSAMI(), + acctest.AvailableEC2InstanceTypeForRegion("t3.micro", "t2.micro"), + fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = < 0 { return tags @@ -145,17 +145,17 @@ func GetTagsIn(ctx context.Context) []*ec2.Tag { return nil } -// SetTagsOut sets ec2 service tags in Context. -func SetTagsOut(ctx context.Context, tags any) { +// setTagsOut sets ec2 service tags in Context. +func setTagsOut(ctx context.Context, tags any) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ec2 service tags. +// updateTags updates ec2 service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ec2iface.EC2API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ec2iface.EC2API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -195,5 +195,5 @@ func UpdateTags(ctx context.Context, conn ec2iface.EC2API, identifier string, ol // UpdateTags updates ec2 service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).EC2Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).EC2Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ec2/tagsv2_gen.go b/internal/service/ec2/tagsv2_gen.go new file mode 100644 index 00000000000..a2547c97692 --- /dev/null +++ b/internal/service/ec2/tagsv2_gen.go @@ -0,0 +1,59 @@ +// Code generated by internal/generate/tags/main.go; DO NOT EDIT. +package ec2 + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + awstypes "github.com/aws/aws-sdk-go-v2/service/ec2/types" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types" +) + +// []*SERVICE.Tag handling + +// TagsV2 returns ec2 service tags. +func TagsV2(tags tftags.KeyValueTags) []awstypes.Tag { + result := make([]awstypes.Tag, 0, len(tags)) + + for k, v := range tags.Map() { + tag := awstypes.Tag{ + Key: aws.String(k), + Value: aws.String(v), + } + + result = append(result, tag) + } + + return result +} + +// keyValueTagsV2 creates tftags.KeyValueTags from ec2 service tags. 
+func keyValueTagsV2(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags { + m := make(map[string]*string, len(tags)) + + for _, tag := range tags { + m[aws.ToString(tag.Key)] = tag.Value + } + + return tftags.New(ctx, m) +} + +// getTagsInV2 returns ec2 service tags from Context. +// nil is returned if there are no input tags. +func getTagsInV2(ctx context.Context) []awstypes.Tag { + if inContext, ok := tftags.FromContext(ctx); ok { + if tags := TagsV2(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + return tags + } + } + + return nil +} + +// setTagsOutV2 sets ec2 service tags in Context. +func setTagsOutV2(ctx context.Context, tags []awstypes.Tag) { + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(keyValueTagsV2(ctx, tags)) + } +} diff --git a/internal/service/ec2/test-fixtures/userdata-test.sh b/internal/service/ec2/test-fixtures/userdata-test.sh index 8a058c78617..670a272dbeb 100644 --- a/internal/service/ec2/test-fixtures/userdata-test.sh +++ b/internal/service/ec2/test-fixtures/userdata-test.sh @@ -1,4 +1,7 @@ #!/bin/bash -v +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + apt-get update -y apt-get install -y nginx > /tmp/nginx.log diff --git a/internal/service/ec2/transitgateway_.go b/internal/service/ec2/transitgateway_.go index 99798424278..3c63c977b11 100644 --- a/internal/service/ec2/transitgateway_.go +++ b/internal/service/ec2/transitgateway_.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -139,7 +142,7 @@ func ResourceTransitGateway() *schema.Resource { func resourceTransitGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTransitGatewayInput{ Options: &ec2.TransitGatewayRequestOptions{ @@ -183,7 +186,7 @@ func resourceTransitGatewayCreate(ctx context.Context, d *schema.ResourceData, m func resourceTransitGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGateway, err := FindTransitGatewayByID(ctx, conn, d.Id()) @@ -211,14 +214,14 @@ func resourceTransitGatewayRead(ctx context.Context, d *schema.ResourceData, met d.Set("transit_gateway_cidr_blocks", aws.StringValueSlice(transitGateway.Options.TransitGatewayCidrBlocks)) d.Set("vpn_ecmp_support", transitGateway.Options.VpnEcmpSupport) - SetTagsOut(ctx, transitGateway.Tags) + setTagsOut(ctx, transitGateway.Tags) return diags } func resourceTransitGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifyTransitGatewayInput{ @@ -281,7 +284,7 @@ func resourceTransitGatewayUpdate(ctx context.Context, d *schema.ResourceData, m func resourceTransitGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 
TransitGatewayIncorrectStateTimeout, func() (interface{}, error) { diff --git a/internal/service/ec2/transitgateway_attachment_data_source.go b/internal/service/ec2/transitgateway_attachment_data_source.go index eb66fc44e95..4adfdb38180 100644 --- a/internal/service/ec2/transitgateway_attachment_data_source.go +++ b/internal/service/ec2/transitgateway_attachment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -70,7 +73,7 @@ func DataSourceTransitGatewayAttachment() *schema.Resource { func dataSourceTransitGatewayAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayAttachmentsInput{} diff --git a/internal/service/ec2/transitgateway_attachment_data_source_test.go b/internal/service/ec2/transitgateway_attachment_data_source_test.go index 25ed3b449ff..d54d1f28fbe 100644 --- a/internal/service/ec2/transitgateway_attachment_data_source_test.go +++ b/internal/service/ec2/transitgateway_attachment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_attachments_data_source.go b/internal/service/ec2/transitgateway_attachments_data_source.go index 022faf332ba..0e10d440f7a 100644 --- a/internal/service/ec2/transitgateway_attachments_data_source.go +++ b/internal/service/ec2/transitgateway_attachments_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceTransitGatewayAttachments() *schema.Resource { func dataSourceTransitGatewayAttachmentsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeTransitGatewayAttachmentsInput{} diff --git a/internal/service/ec2/transitgateway_attachments_data_source_test.go b/internal/service/ec2/transitgateway_attachments_data_source_test.go index 0973bbe6ce5..69b385556ef 100644 --- a/internal/service/ec2/transitgateway_attachments_data_source_test.go +++ b/internal/service/ec2/transitgateway_attachments_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_connect.go b/internal/service/ec2/transitgateway_connect.go index 65fd3e61e83..b6a605c3c3b 100644 --- a/internal/service/ec2/transitgateway_connect.go +++ b/internal/service/ec2/transitgateway_connect.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -76,7 +79,7 @@ func ResourceTransitGatewayConnect() *schema.Resource { } func resourceTransitGatewayConnectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transportAttachmentID := d.Get("transport_attachment_id").(string) input := &ec2.CreateTransitGatewayConnectInput{ @@ -128,7 +131,7 @@ func resourceTransitGatewayConnectCreate(ctx context.Context, d *schema.Resource } func resourceTransitGatewayConnectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayConnect, err := FindTransitGatewayConnectByID(ctx, conn, d.Id()) @@ -191,13 +194,13 @@ func resourceTransitGatewayConnectRead(ctx context.Context, d *schema.ResourceDa d.Set("transit_gateway_id", transitGatewayConnect.TransitGatewayId) d.Set("transport_attachment_id", transitGatewayConnect.TransportTransitGatewayAttachmentId) - SetTagsOut(ctx, transitGatewayConnect.Tags) + setTagsOut(ctx, transitGatewayConnect.Tags) return nil } func resourceTransitGatewayConnectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChanges("transit_gateway_default_route_table_association", "transit_gateway_default_route_table_propagation") { transitGatewayID := d.Get("transit_gateway_id").(string) @@ -224,7 +227,7 @@ func resourceTransitGatewayConnectUpdate(ctx context.Context, d *schema.Resource } func resourceTransitGatewayConnectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway Connect: %s", 
d.Id()) _, err := conn.DeleteTransitGatewayConnectWithContext(ctx, &ec2.DeleteTransitGatewayConnectInput{ diff --git a/internal/service/ec2/transitgateway_connect_data_source.go b/internal/service/ec2/transitgateway_connect_data_source.go index d72b71dbbbd..7e76993568a 100644 --- a/internal/service/ec2/transitgateway_connect_data_source.go +++ b/internal/service/ec2/transitgateway_connect_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -47,7 +50,7 @@ func DataSourceTransitGatewayConnect() *schema.Resource { } func dataSourceTransitGatewayConnectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayConnectsInput{} diff --git a/internal/service/ec2/transitgateway_connect_data_source_test.go b/internal/service/ec2/transitgateway_connect_data_source_test.go index b55ae375b97..ee05f2c00f9 100644 --- a/internal/service/ec2/transitgateway_connect_data_source_test.go +++ b/internal/service/ec2/transitgateway_connect_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_connect_peer.go b/internal/service/ec2/transitgateway_connect_peer.go index 139c5d2d9bf..4c832004dc5 100644 --- a/internal/service/ec2/transitgateway_connect_peer.go +++ b/internal/service/ec2/transitgateway_connect_peer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -17,6 +20,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/slices" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -55,6 +59,15 @@ func ResourceTransitGatewayConnectPeer() *schema.Resource { ForceNew: true, ValidateFunc: verify.Valid4ByteASN, }, + "bgp_peer_address": { + Type: schema.TypeString, + Computed: true, + }, + "bgp_transit_gateway_addresses": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "inside_cidr_blocks": { Type: schema.TypeSet, Required: true, @@ -101,7 +114,7 @@ func ResourceTransitGatewayConnectPeer() *schema.Resource { } func resourceTransitGatewayConnectPeerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTransitGatewayConnectPeerInput{ InsideCidrBlocks: flex.ExpandStringSet(d.Get("inside_cidr_blocks").(*schema.Set)), @@ -143,7 +156,7 @@ func resourceTransitGatewayConnectPeerCreate(ctx context.Context, d *schema.Reso } func resourceTransitGatewayConnectPeerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayConnectPeer, err := FindTransitGatewayConnectPeerByID(ctx, conn, d.Id()) @@ -164,14 +177,19 @@ func resourceTransitGatewayConnectPeerRead(ctx context.Context, d *schema.Resour AccountID: meta.(*conns.AWSClient).AccountID, Resource: fmt.Sprintf("transit-gateway-connect-peer/%s", d.Id()), 
}.String() + bgpConfigurations := transitGatewayConnectPeer.ConnectPeerConfiguration.BgpConfigurations d.Set("arn", arn) - d.Set("bgp_asn", strconv.FormatInt(aws.Int64Value(transitGatewayConnectPeer.ConnectPeerConfiguration.BgpConfigurations[0].PeerAsn), 10)) + d.Set("bgp_asn", strconv.FormatInt(aws.Int64Value(bgpConfigurations[0].PeerAsn), 10)) + d.Set("bgp_peer_address", bgpConfigurations[0].PeerAddress) + d.Set("bgp_transit_gateway_addresses", slices.ApplyToAll(bgpConfigurations, func(v *ec2.TransitGatewayAttachmentBgpConfiguration) string { + return aws.StringValue(v.TransitGatewayAddress) + })) d.Set("inside_cidr_blocks", aws.StringValueSlice(transitGatewayConnectPeer.ConnectPeerConfiguration.InsideCidrBlocks)) d.Set("peer_address", transitGatewayConnectPeer.ConnectPeerConfiguration.PeerAddress) d.Set("transit_gateway_address", transitGatewayConnectPeer.ConnectPeerConfiguration.TransitGatewayAddress) d.Set("transit_gateway_attachment_id", transitGatewayConnectPeer.TransitGatewayAttachmentId) - SetTagsOut(ctx, transitGatewayConnectPeer.Tags) + setTagsOut(ctx, transitGatewayConnectPeer.Tags) return nil } @@ -182,7 +200,7 @@ func resourceTransitGatewayConnectPeerUpdate(ctx context.Context, d *schema.Reso } func resourceTransitGatewayConnectPeerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway Connect Peer: %s", d.Id()) _, err := conn.DeleteTransitGatewayConnectPeerWithContext(ctx, &ec2.DeleteTransitGatewayConnectPeerInput{ diff --git a/internal/service/ec2/transitgateway_connect_peer_data_source.go b/internal/service/ec2/transitgateway_connect_peer_data_source.go index 2db99f317bd..c7a751959f6 100644 --- a/internal/service/ec2/transitgateway_connect_peer_data_source.go +++ b/internal/service/ec2/transitgateway_connect_peer_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -12,6 +15,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/slices" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -34,6 +38,15 @@ func DataSourceTransitGatewayConnectPeer() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "bgp_peer_address": { + Type: schema.TypeString, + Computed: true, + }, + "bgp_transit_gateway_addresses": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "filter": DataSourceFiltersSchema(), "inside_cidr_blocks": { Type: schema.TypeList, @@ -63,7 +76,7 @@ func DataSourceTransitGatewayConnectPeer() *schema.Resource { } func dataSourceTransitGatewayConnectPeerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayConnectPeersInput{} @@ -91,8 +104,13 @@ func dataSourceTransitGatewayConnectPeerRead(ctx context.Context, d *schema.Reso AccountID: meta.(*conns.AWSClient).AccountID, Resource: fmt.Sprintf("transit-gateway-connect-peer/%s", d.Id()), }.String() + bgpConfigurations := transitGatewayConnectPeer.ConnectPeerConfiguration.BgpConfigurations d.Set("arn", arn) - d.Set("bgp_asn", strconv.FormatInt(aws.Int64Value(transitGatewayConnectPeer.ConnectPeerConfiguration.BgpConfigurations[0].PeerAsn), 10)) + d.Set("bgp_asn", strconv.FormatInt(aws.Int64Value(bgpConfigurations[0].PeerAsn), 10)) + d.Set("bgp_peer_address", bgpConfigurations[0].PeerAddress) + d.Set("bgp_transit_gateway_addresses", slices.ApplyToAll(bgpConfigurations, 
func(v *ec2.TransitGatewayAttachmentBgpConfiguration) string { + return aws.StringValue(v.TransitGatewayAddress) + })) d.Set("inside_cidr_blocks", aws.StringValueSlice(transitGatewayConnectPeer.ConnectPeerConfiguration.InsideCidrBlocks)) d.Set("peer_address", transitGatewayConnectPeer.ConnectPeerConfiguration.PeerAddress) d.Set("transit_gateway_address", transitGatewayConnectPeer.ConnectPeerConfiguration.TransitGatewayAddress) diff --git a/internal/service/ec2/transitgateway_connect_peer_data_source_test.go b/internal/service/ec2/transitgateway_connect_peer_data_source_test.go index 20c7261712d..ca94a00f9d1 100644 --- a/internal/service/ec2/transitgateway_connect_peer_data_source_test.go +++ b/internal/service/ec2/transitgateway_connect_peer_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -27,6 +30,8 @@ func testAccTransitGatewayConnectPeerDataSource_Filter(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), resource.TestCheckResourceAttrPair(dataSourceName, "bgp_asn", resourceName, "bgp_asn"), + resource.TestCheckResourceAttrPair(dataSourceName, "bgp_peer_address", resourceName, "bgp_peer_address"), + resource.TestCheckResourceAttrPair(dataSourceName, "bgp_transit_gateway_addresses.#", resourceName, "bgp_transit_gateway_addresses.#"), resource.TestCheckResourceAttrPair(dataSourceName, "inside_cidr_blocks.#", resourceName, "inside_cidr_blocks.#"), resource.TestCheckResourceAttrPair(dataSourceName, "peer_address", resourceName, "peer_address"), resource.TestCheckResourceAttrPair(dataSourceName, "tags.%", resourceName, "tags.%"), @@ -56,6 +61,8 @@ func testAccTransitGatewayConnectPeerDataSource_ID(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), resource.TestCheckResourceAttrPair(dataSourceName, 
"bgp_asn", resourceName, "bgp_asn"), + resource.TestCheckResourceAttrPair(dataSourceName, "bgp_peer_address", resourceName, "bgp_peer_address"), + resource.TestCheckResourceAttrPair(dataSourceName, "bgp_transit_gateway_addresses.#", resourceName, "bgp_transit_gateway_addresses.#"), resource.TestCheckResourceAttrPair(dataSourceName, "inside_cidr_blocks.#", resourceName, "inside_cidr_blocks.#"), resource.TestCheckResourceAttrPair(dataSourceName, "peer_address", resourceName, "peer_address"), resource.TestCheckResourceAttrPair(dataSourceName, "tags.%", resourceName, "tags.%"), diff --git a/internal/service/ec2/transitgateway_connect_peer_test.go b/internal/service/ec2/transitgateway_connect_peer_test.go index 2cf732b24a4..c2ec39fd744 100644 --- a/internal/service/ec2/transitgateway_connect_peer_test.go +++ b/internal/service/ec2/transitgateway_connect_peer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -33,6 +36,8 @@ func testAccTransitGatewayConnectPeer_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckTransitGatewayConnectPeerExists(ctx, resourceName, &v), resource.TestCheckResourceAttr(resourceName, "bgp_asn", "64512"), + resource.TestCheckResourceAttrSet(resourceName, "bgp_peer_address"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "bgp_transit_gateway_addresses.#", 0), resource.TestCheckResourceAttr(resourceName, "inside_cidr_blocks.#", "1"), resource.TestCheckResourceAttr(resourceName, "peer_address", "1.1.1.1"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), @@ -206,7 +211,7 @@ func testAccCheckTransitGatewayConnectPeerExists(ctx context.Context, n string, return fmt.Errorf("No EC2 Transit Gateway Connect Peer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayConnectPeerByID(ctx, conn, rs.Primary.ID) @@ 
-222,7 +227,7 @@ func testAccCheckTransitGatewayConnectPeerExists(ctx context.Context, n string, func testAccCheckTransitGatewayConnectPeerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_connect_peer" { diff --git a/internal/service/ec2/transitgateway_connect_test.go b/internal/service/ec2/transitgateway_connect_test.go index 1e272f19c72..99f879709f5 100644 --- a/internal/service/ec2/transitgateway_connect_test.go +++ b/internal/service/ec2/transitgateway_connect_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -271,7 +274,7 @@ func testAccCheckTransitGatewayConnectExists(ctx context.Context, n string, v *e return fmt.Errorf("No EC2 Transit Gateway Connect ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayConnectByID(ctx, conn, rs.Primary.ID) @@ -287,7 +290,7 @@ func testAccCheckTransitGatewayConnectExists(ctx context.Context, n string, v *e func testAccCheckTransitGatewayConnectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_connect" { diff --git a/internal/service/ec2/transitgateway_data_source.go b/internal/service/ec2/transitgateway_data_source.go index 4f34696fff4..0bc7dccb4e1 100644 --- a/internal/service/ec2/transitgateway_data_source.go +++ b/internal/service/ec2/transitgateway_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -90,7 +93,7 @@ func DataSourceTransitGateway() *schema.Resource { func dataSourceTransitGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewaysInput{} diff --git a/internal/service/ec2/transitgateway_data_source_test.go b/internal/service/ec2/transitgateway_data_source_test.go index d25beccbae7..0f835d1d31d 100644 --- a/internal/service/ec2/transitgateway_data_source_test.go +++ b/internal/service/ec2/transitgateway_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source.go b/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source.go index 7eed5d09cd5..486182c407a 100644 --- a/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source.go +++ b/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -40,7 +43,7 @@ func DataSourceTransitGatewayDxGatewayAttachment() *schema.Resource { func dataSourceTransitGatewayDxGatewayAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayAttachmentsInput{ diff --git a/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source_test.go b/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source_test.go index 3ba0680e345..3e755018b3c 100644 --- a/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source_test.go +++ b/internal/service/ec2/transitgateway_dx_gateway_attachment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_multicast_domain.go b/internal/service/ec2/transitgateway_multicast_domain.go index d9a416796df..2c98148262a 100644 --- a/internal/service/ec2/transitgateway_multicast_domain.go +++ b/internal/service/ec2/transitgateway_multicast_domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -80,7 +83,7 @@ func ResourceTransitGatewayMulticastDomain() *schema.Resource { } func resourceTransitGatewayMulticastDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTransitGatewayMulticastDomainInput{ Options: &ec2.CreateTransitGatewayMulticastDomainRequestOptions{ @@ -109,7 +112,7 @@ func resourceTransitGatewayMulticastDomainCreate(ctx context.Context, d *schema. 
} func resourceTransitGatewayMulticastDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomain, err := FindTransitGatewayMulticastDomainByID(ctx, conn, d.Id()) @@ -130,7 +133,7 @@ func resourceTransitGatewayMulticastDomainRead(ctx context.Context, d *schema.Re d.Set("static_sources_support", multicastDomain.Options.StaticSourcesSupport) d.Set("transit_gateway_id", multicastDomain.TransitGatewayId) - SetTagsOut(ctx, multicastDomain.Tags) + setTagsOut(ctx, multicastDomain.Tags) return nil } @@ -141,7 +144,7 @@ func resourceTransitGatewayMulticastDomainUpdate(ctx context.Context, d *schema. } func resourceTransitGatewayMulticastDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) groups, err := FindTransitGatewayMulticastGroups(ctx, conn, &ec2.SearchTransitGatewayMulticastGroupsInput{ TransitGatewayMulticastDomainId: aws.String(d.Id()), diff --git a/internal/service/ec2/transitgateway_multicast_domain_association.go b/internal/service/ec2/transitgateway_multicast_domain_association.go index cbedda0d998..cfcc93a9d78 100644 --- a/internal/service/ec2/transitgateway_multicast_domain_association.go +++ b/internal/service/ec2/transitgateway_multicast_domain_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -49,7 +52,7 @@ func ResourceTransitGatewayMulticastDomainAssociation() *schema.Resource { } func resourceTransitGatewayMulticastDomainAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID := d.Get("transit_gateway_multicast_domain_id").(string) attachmentID := d.Get("transit_gateway_attachment_id").(string) @@ -78,7 +81,7 @@ func resourceTransitGatewayMulticastDomainAssociationCreate(ctx context.Context, } func resourceTransitGatewayMulticastDomainAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID, attachmentID, subnetID, err := TransitGatewayMulticastDomainAssociationParseResourceID(d.Id()) @@ -106,7 +109,7 @@ func resourceTransitGatewayMulticastDomainAssociationRead(ctx context.Context, d } func resourceTransitGatewayMulticastDomainAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID, attachmentID, subnetID, err := TransitGatewayMulticastDomainAssociationParseResourceID(d.Id()) diff --git a/internal/service/ec2/transitgateway_multicast_domain_association_test.go b/internal/service/ec2/transitgateway_multicast_domain_association_test.go index b8ebbd47368..37c6851addf 100644 --- a/internal/service/ec2/transitgateway_multicast_domain_association_test.go +++ b/internal/service/ec2/transitgateway_multicast_domain_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -127,7 +130,7 @@ func testAccCheckTransitGatewayMulticastDomainAssociationExists(ctx context.Cont return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayMulticastDomainAssociationByThreePartKey(ctx, conn, multicastDomainID, attachmentID, subnetID) @@ -143,7 +146,7 @@ func testAccCheckTransitGatewayMulticastDomainAssociationExists(ctx context.Cont func testAccCheckTransitGatewayMulticastDomainAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_multicast_domain_association" { diff --git a/internal/service/ec2/transitgateway_multicast_domain_data_source.go b/internal/service/ec2/transitgateway_multicast_domain_data_source.go index f9183c0bb4f..8619c2e943e 100644 --- a/internal/service/ec2/transitgateway_multicast_domain_data_source.go +++ b/internal/service/ec2/transitgateway_multicast_domain_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -115,7 +118,7 @@ func DataSourceTransitGatewayMulticastDomain() *schema.Resource { } func dataSourceTransitGatewayMulticastDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayMulticastDomainsInput{} diff --git a/internal/service/ec2/transitgateway_multicast_domain_data_source_test.go b/internal/service/ec2/transitgateway_multicast_domain_data_source_test.go index 60c9e39ebf7..3631a6f3848 100644 --- a/internal/service/ec2/transitgateway_multicast_domain_data_source_test.go +++ b/internal/service/ec2/transitgateway_multicast_domain_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_multicast_domain_test.go b/internal/service/ec2/transitgateway_multicast_domain_test.go index 2e6524ba412..f210402608a 100644 --- a/internal/service/ec2/transitgateway_multicast_domain_test.go +++ b/internal/service/ec2/transitgateway_multicast_domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -159,7 +162,7 @@ func testAccCheckTransitGatewayMulticastDomainExists(ctx context.Context, n stri return fmt.Errorf("No EC2 Transit Gateway Multicast Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayMulticastDomainByID(ctx, conn, rs.Primary.ID) @@ -175,7 +178,7 @@ func testAccCheckTransitGatewayMulticastDomainExists(ctx context.Context, n stri func testAccCheckTransitGatewayMulticastDomainDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_multicast_domain" { diff --git a/internal/service/ec2/transitgateway_multicast_group_member.go b/internal/service/ec2/transitgateway_multicast_group_member.go index 4e799fb1cbd..807d09dc6cc 100644 --- a/internal/service/ec2/transitgateway_multicast_group_member.go +++ b/internal/service/ec2/transitgateway_multicast_group_member.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -45,7 +48,7 @@ func ResourceTransitGatewayMulticastGroupMember() *schema.Resource { } func resourceTransitGatewayMulticastGroupMemberCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID := d.Get("transit_gateway_multicast_domain_id").(string) groupIPAddress := d.Get("group_ip_address").(string) @@ -70,7 +73,7 @@ func resourceTransitGatewayMulticastGroupMemberCreate(ctx context.Context, d *sc } func resourceTransitGatewayMulticastGroupMemberRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID, groupIPAddress, eniID, err := TransitGatewayMulticastGroupMemberParseResourceID(d.Id()) @@ -78,7 +81,7 @@ func resourceTransitGatewayMulticastGroupMemberRead(ctx context.Context, d *sche return diag.FromErr(err) } - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindTransitGatewayMulticastGroupMemberByThreePartKey(ctx, conn, multicastDomainID, groupIPAddress, eniID) }, d.IsNewResource()) @@ -102,7 +105,7 @@ func resourceTransitGatewayMulticastGroupMemberRead(ctx context.Context, d *sche } func resourceTransitGatewayMulticastGroupMemberDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID, groupIPAddress, eniID, err := TransitGatewayMulticastGroupMemberParseResourceID(d.Id()) @@ -137,7 +140,7 @@ func deregisterTransitGatewayMulticastGroupMember(ctx context.Context, conn *ec2 return fmt.Errorf("deleting 
EC2 Transit Gateway Multicast Group Member (%s): %w", id, err) } - _, err = tfresource.RetryUntilNotFound(ctx, propagationTimeout, func() (interface{}, error) { + _, err = tfresource.RetryUntilNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindTransitGatewayMulticastGroupMemberByThreePartKey(ctx, conn, multicastDomainID, groupIPAddress, eniID) }) diff --git a/internal/service/ec2/transitgateway_multicast_group_member_test.go b/internal/service/ec2/transitgateway_multicast_group_member_test.go index 96a60bbc5fb..2c4ff07ec86 100644 --- a/internal/service/ec2/transitgateway_multicast_group_member_test.go +++ b/internal/service/ec2/transitgateway_multicast_group_member_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -127,7 +130,7 @@ func testAccCheckTransitGatewayMulticastGroupMemberExists(ctx context.Context, n return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayMulticastGroupMemberByThreePartKey(ctx, conn, multicastDomainID, groupIPAddress, eniID) @@ -143,7 +146,7 @@ func testAccCheckTransitGatewayMulticastGroupMemberExists(ctx context.Context, n func testAccCheckTransitGatewayMulticastGroupMemberDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_multicast_group_member" { diff --git a/internal/service/ec2/transitgateway_multicast_group_source.go b/internal/service/ec2/transitgateway_multicast_group_source.go index ffaec50b53b..9704fb3a9f3 100644 --- a/internal/service/ec2/transitgateway_multicast_group_source.go +++ b/internal/service/ec2/transitgateway_multicast_group_source.go @@ 
-1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -45,7 +48,7 @@ func ResourceTransitGatewayMulticastGroupSource() *schema.Resource { } func resourceTransitGatewayMulticastGroupSourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID := d.Get("transit_gateway_multicast_domain_id").(string) groupIPAddress := d.Get("group_ip_address").(string) @@ -70,7 +73,7 @@ func resourceTransitGatewayMulticastGroupSourceCreate(ctx context.Context, d *sc } func resourceTransitGatewayMulticastGroupSourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID, groupIPAddress, eniID, err := TransitGatewayMulticastGroupSourceParseResourceID(d.Id()) @@ -78,7 +81,7 @@ func resourceTransitGatewayMulticastGroupSourceRead(ctx context.Context, d *sche return diag.FromErr(err) } - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindTransitGatewayMulticastGroupSourceByThreePartKey(ctx, conn, multicastDomainID, groupIPAddress, eniID) }, d.IsNewResource()) @@ -102,7 +105,7 @@ func resourceTransitGatewayMulticastGroupSourceRead(ctx context.Context, d *sche } func resourceTransitGatewayMulticastGroupSourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) multicastDomainID, groupIPAddress, eniID, err := TransitGatewayMulticastGroupSourceParseResourceID(d.Id()) @@ -137,7 +140,7 @@ func deregisterTransitGatewayMulticastGroupSource(ctx 
context.Context, conn *ec2 return fmt.Errorf("deleting EC2 Transit Gateway Multicast Group Source (%s): %w", id, err) } - _, err = tfresource.RetryUntilNotFound(ctx, propagationTimeout, func() (interface{}, error) { + _, err = tfresource.RetryUntilNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindTransitGatewayMulticastGroupSourceByThreePartKey(ctx, conn, multicastDomainID, groupIPAddress, eniID) }) diff --git a/internal/service/ec2/transitgateway_multicast_group_source_test.go b/internal/service/ec2/transitgateway_multicast_group_source_test.go index f82db5fdc89..e8e14f0af8a 100644 --- a/internal/service/ec2/transitgateway_multicast_group_source_test.go +++ b/internal/service/ec2/transitgateway_multicast_group_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -103,7 +106,7 @@ func testAccCheckTransitGatewayMulticastGroupSourceExists(ctx context.Context, n return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayMulticastGroupSourceByThreePartKey(ctx, conn, multicastDomainID, groupIPAddress, eniID) @@ -119,7 +122,7 @@ func testAccCheckTransitGatewayMulticastGroupSourceExists(ctx context.Context, n func testAccCheckTransitGatewayMulticastGroupSourceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_multicast_group_source" { diff --git a/internal/service/ec2/transitgateway_peering_attachment.go b/internal/service/ec2/transitgateway_peering_attachment.go index 8f4ccd18d21..a8b61c00576 100644 --- a/internal/service/ec2/transitgateway_peering_attachment.go +++ 
b/internal/service/ec2/transitgateway_peering_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -63,7 +66,7 @@ func ResourceTransitGatewayPeeringAttachment() *schema.Resource { func resourceTransitGatewayPeeringAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) peerAccountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("peer_account_id"); ok { @@ -95,7 +98,7 @@ func resourceTransitGatewayPeeringAttachmentCreate(ctx context.Context, d *schem func resourceTransitGatewayPeeringAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayPeeringAttachment, err := FindTransitGatewayPeeringAttachmentByID(ctx, conn, d.Id()) @@ -114,7 +117,7 @@ func resourceTransitGatewayPeeringAttachmentRead(ctx context.Context, d *schema. 
d.Set("peer_transit_gateway_id", transitGatewayPeeringAttachment.AccepterTgwInfo.TransitGatewayId) d.Set("transit_gateway_id", transitGatewayPeeringAttachment.RequesterTgwInfo.TransitGatewayId) - SetTagsOut(ctx, transitGatewayPeeringAttachment.Tags) + setTagsOut(ctx, transitGatewayPeeringAttachment.Tags) return diags } @@ -129,7 +132,7 @@ func resourceTransitGatewayPeeringAttachmentUpdate(ctx context.Context, d *schem func resourceTransitGatewayPeeringAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway Peering Attachment: %s", d.Id()) _, err := conn.DeleteTransitGatewayPeeringAttachmentWithContext(ctx, &ec2.DeleteTransitGatewayPeeringAttachmentInput{ diff --git a/internal/service/ec2/transitgateway_peering_attachment_accepter.go b/internal/service/ec2/transitgateway_peering_attachment_accepter.go index b2b4f105f37..b31c87690e6 100644 --- a/internal/service/ec2/transitgateway_peering_attachment_accepter.go +++ b/internal/service/ec2/transitgateway_peering_attachment_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -62,7 +65,7 @@ func ResourceTransitGatewayPeeringAttachmentAccepter() *schema.Resource { func resourceTransitGatewayPeeringAttachmentAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayAttachmentID := d.Get("transit_gateway_attachment_id").(string) input := &ec2.AcceptTransitGatewayPeeringAttachmentInput{ @@ -82,7 +85,7 @@ func resourceTransitGatewayPeeringAttachmentAccepterCreate(ctx context.Context, return sdkdiag.AppendErrorf(diags, "waiting for EC2 Transit Gateway Peering Attachment (%s) update: %s", d.Id(), err) } - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EC2 Transit Gateway Peering Attachment (%s) tags: %s", d.Id(), err) } @@ -91,7 +94,7 @@ func resourceTransitGatewayPeeringAttachmentAccepterCreate(ctx context.Context, func resourceTransitGatewayPeeringAttachmentAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayPeeringAttachment, err := FindTransitGatewayPeeringAttachmentByID(ctx, conn, d.Id()) @@ -118,7 +121,7 @@ func resourceTransitGatewayPeeringAttachmentAccepterRead(ctx context.Context, d d.Set("transit_gateway_attachment_id", transitGatewayPeeringAttachment.TransitGatewayAttachmentId) d.Set("transit_gateway_id", transitGatewayPeeringAttachment.AccepterTgwInfo.TransitGatewayId) - SetTagsOut(ctx, transitGatewayPeeringAttachment.Tags) + setTagsOut(ctx, transitGatewayPeeringAttachment.Tags) return diags } @@ -133,7 +136,7 @@ func resourceTransitGatewayPeeringAttachmentAccepterUpdate(ctx context.Context, 
func resourceTransitGatewayPeeringAttachmentAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway Peering Attachment: %s", d.Id()) _, err := conn.DeleteTransitGatewayPeeringAttachmentWithContext(ctx, &ec2.DeleteTransitGatewayPeeringAttachmentInput{ diff --git a/internal/service/ec2/transitgateway_peering_attachment_accepter_test.go b/internal/service/ec2/transitgateway_peering_attachment_accepter_test.go index 60e3ea62a95..7b5f4d1294e 100644 --- a/internal/service/ec2/transitgateway_peering_attachment_accepter_test.go +++ b/internal/service/ec2/transitgateway_peering_attachment_accepter_test.go @@ -1,15 +1,16 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( "fmt" - "os" "testing" "github.com/aws/aws-sdk-go/service/ec2" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" - "github.com/hashicorp/terraform-provider-aws/internal/envvar" ) func testAccTransitGatewayPeeringAttachmentAccepter_basic(t *testing.T) { @@ -146,18 +147,6 @@ func testAccTransitGatewayPeeringAttachmentAccepter_differentAccount(t *testing. 
}) } -func testAccAlternateAccountAlternateRegionProviderConfig() string { - //lintignore:AT004 - return fmt.Sprintf(` -provider %[1]q { - access_key = %[2]q - profile = %[3]q - region = %[4]q - secret_key = %[5]q -} -`, acctest.ProviderNameAlternate, os.Getenv(envvar.AlternateAccessKeyId), os.Getenv(envvar.AlternateProfile), acctest.AlternateRegion(), os.Getenv(envvar.AlternateSecretAccessKey)) -} - func testAccTransitGatewayPeeringAttachmentAccepterConfig_base(rName string) string { return fmt.Sprintf(` data "aws_region" "current" {} @@ -235,7 +224,7 @@ resource "aws_ec2_transit_gateway_peering_attachment_accepter" "test" { func testAccTransitGatewayPeeringAttachmentAccepterConfig_differentAccount(rName string) string { return acctest.ConfigCompose( - testAccAlternateAccountAlternateRegionProviderConfig(), + acctest.ConfigAlternateAccountAlternateRegionProvider(), testAccTransitGatewayPeeringAttachmentAccepterConfig_base(rName), fmt.Sprintf(` resource "aws_ec2_transit_gateway_peering_attachment_accepter" "test" { diff --git a/internal/service/ec2/transitgateway_peering_attachment_data_source.go b/internal/service/ec2/transitgateway_peering_attachment_data_source.go index af3a166653e..f37a8881011 100644 --- a/internal/service/ec2/transitgateway_peering_attachment_data_source.go +++ b/internal/service/ec2/transitgateway_peering_attachment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -53,7 +56,7 @@ func DataSourceTransitGatewayPeeringAttachment() *schema.Resource { func dataSourceTransitGatewayPeeringAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayPeeringAttachmentsInput{} diff --git a/internal/service/ec2/transitgateway_peering_attachment_data_source_test.go b/internal/service/ec2/transitgateway_peering_attachment_data_source_test.go index 2c8a0eb17e2..d191d902395 100644 --- a/internal/service/ec2/transitgateway_peering_attachment_data_source_test.go +++ b/internal/service/ec2/transitgateway_peering_attachment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_peering_attachment_test.go b/internal/service/ec2/transitgateway_peering_attachment_test.go index 38394fbf32d..dbdb7c6e48e 100644 --- a/internal/service/ec2/transitgateway_peering_attachment_test.go +++ b/internal/service/ec2/transitgateway_peering_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -191,7 +194,7 @@ func testAccCheckTransitGatewayPeeringAttachmentExists(ctx context.Context, n st return fmt.Errorf("No EC2 Transit Gateway Peering Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayPeeringAttachmentByID(ctx, conn, rs.Primary.ID) @@ -207,7 +210,7 @@ func testAccCheckTransitGatewayPeeringAttachmentExists(ctx context.Context, n st func testAccCheckTransitGatewayPeeringAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_peering_attachment" { @@ -250,11 +253,17 @@ resource "aws_ec2_transit_gateway" "peer" { } func testAccTransitGatewayPeeringAttachmentConfig_sameAccount_base(rName string) string { - return acctest.ConfigCompose(acctest.ConfigAlternateRegionProvider(), testAccTransitGatewayPeeringAttachmentConfig_base(rName)) + return acctest.ConfigCompose( + acctest.ConfigAlternateRegionProvider(), + testAccTransitGatewayPeeringAttachmentConfig_base(rName), + ) } func testAccTransitGatewayPeeringAttachmentConfig_differentAccount_base(rName string) string { - return acctest.ConfigCompose(testAccAlternateAccountAlternateRegionProviderConfig(), testAccTransitGatewayPeeringAttachmentConfig_base(rName)) + return acctest.ConfigCompose( + acctest.ConfigAlternateAccountAlternateRegionProvider(), + testAccTransitGatewayPeeringAttachmentConfig_base(rName), + ) } func testAccTransitGatewayPeeringAttachmentConfig_sameAccount(rName string) string { diff --git a/internal/service/ec2/transitgateway_policy_table.go b/internal/service/ec2/transitgateway_policy_table.go index 461155da4aa..319eb8ff778 
100644 --- a/internal/service/ec2/transitgateway_policy_table.go +++ b/internal/service/ec2/transitgateway_policy_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -58,7 +61,7 @@ func ResourceTransitGatewayPolicyTable() *schema.Resource { func resourceTransitGatewayPolicyTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayID := d.Get("transit_gateway_id").(string) input := &ec2.CreateTransitGatewayPolicyTableInput{ @@ -84,7 +87,7 @@ func resourceTransitGatewayPolicyTableCreate(ctx context.Context, d *schema.Reso func resourceTransitGatewayPolicyTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayPolicyTable, err := FindTransitGatewayPolicyTableByID(ctx, conn, d.Id()) @@ -109,7 +112,7 @@ func resourceTransitGatewayPolicyTableRead(ctx context.Context, d *schema.Resour d.Set("state", transitGatewayPolicyTable.State) d.Set("transit_gateway_id", transitGatewayPolicyTable.TransitGatewayId) - SetTagsOut(ctx, transitGatewayPolicyTable.Tags) + setTagsOut(ctx, transitGatewayPolicyTable.Tags) return diags } @@ -124,7 +127,7 @@ func resourceTransitGatewayPolicyTableUpdate(ctx context.Context, d *schema.Reso func resourceTransitGatewayPolicyTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway Policy Table: %s", d.Id()) _, err := conn.DeleteTransitGatewayPolicyTableWithContext(ctx, &ec2.DeleteTransitGatewayPolicyTableInput{ diff --git 
a/internal/service/ec2/transitgateway_policy_table_association.go b/internal/service/ec2/transitgateway_policy_table_association.go index b24f292d040..3ddbf41e6b0 100644 --- a/internal/service/ec2/transitgateway_policy_table_association.go +++ b/internal/service/ec2/transitgateway_policy_table_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -55,7 +58,7 @@ func ResourceTransitGatewayPolicyTableAssociation() *schema.Resource { func resourceTransitGatewayPolicyTableAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // If the TGW attachment is already associated with a TGW route table, disassociate it to prevent errors like // "IncorrectState: Cannot have both PolicyTableAssociation and RouteTableAssociation on the same TransitGateway Attachment". @@ -108,7 +111,7 @@ func resourceTransitGatewayPolicyTableAssociationCreate(ctx context.Context, d * func resourceTransitGatewayPolicyTableAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayPolicyTableID, transitGatewayAttachmentID, err := TransitGatewayPolicyTableAssociationParseResourceID(d.Id()) @@ -138,7 +141,7 @@ func resourceTransitGatewayPolicyTableAssociationRead(ctx context.Context, d *sc func resourceTransitGatewayPolicyTableAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayPolicyTableID, transitGatewayAttachmentID, err := TransitGatewayPolicyTableAssociationParseResourceID(d.Id()) diff --git 
a/internal/service/ec2/transitgateway_policy_table_association_test.go b/internal/service/ec2/transitgateway_policy_table_association_test.go index 1a647ed2333..60a261eec53 100644 --- a/internal/service/ec2/transitgateway_policy_table_association_test.go +++ b/internal/service/ec2/transitgateway_policy_table_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -89,7 +92,7 @@ func testAccCheckTransitGatewayPolicyTableAssociationExists(ctx context.Context, return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayPolicyTableAssociationByTwoPartKey(ctx, conn, transitGatewayPolicyTableID, transitGatewayAttachmentID) @@ -105,7 +108,7 @@ func testAccCheckTransitGatewayPolicyTableAssociationExists(ctx context.Context, func testAccCheckTransitGatewayPolicyTableAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_policy_table_association" { diff --git a/internal/service/ec2/transitgateway_policy_table_test.go b/internal/service/ec2/transitgateway_policy_table_test.go index 6870bacf252..8b52d5ffbc7 100644 --- a/internal/service/ec2/transitgateway_policy_table_test.go +++ b/internal/service/ec2/transitgateway_policy_table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -160,7 +163,7 @@ func testAccCheckTransitGatewayPolicyTableExists(ctx context.Context, n string, return fmt.Errorf("No EC2 Transit Gateway Policy Table ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayPolicyTableByID(ctx, conn, rs.Primary.ID) @@ -176,7 +179,7 @@ func testAccCheckTransitGatewayPolicyTableExists(ctx context.Context, n string, func testAccCheckTransitGatewayPolicyTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_policy_table" { diff --git a/internal/service/ec2/transitgateway_prefix_list_reference.go b/internal/service/ec2/transitgateway_prefix_list_reference.go index f8871490410..59603221d71 100644 --- a/internal/service/ec2/transitgateway_prefix_list_reference.go +++ b/internal/service/ec2/transitgateway_prefix_list_reference.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -60,7 +63,7 @@ func ResourceTransitGatewayPrefixListReference() *schema.Resource { func resourceTransitGatewayPrefixListReferenceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTransitGatewayPrefixListReferenceInput{} @@ -98,7 +101,7 @@ func resourceTransitGatewayPrefixListReferenceCreate(ctx context.Context, d *sch func resourceTransitGatewayPrefixListReferenceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, prefixListID, err := TransitGatewayPrefixListReferenceParseResourceID(d.Id()) @@ -133,7 +136,7 @@ func resourceTransitGatewayPrefixListReferenceRead(ctx context.Context, d *schem func resourceTransitGatewayPrefixListReferenceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.ModifyTransitGatewayPrefixListReferenceInput{} @@ -168,7 +171,7 @@ func resourceTransitGatewayPrefixListReferenceUpdate(ctx context.Context, d *sch func resourceTransitGatewayPrefixListReferenceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, prefixListID, err := TransitGatewayPrefixListReferenceParseResourceID(d.Id()) diff --git a/internal/service/ec2/transitgateway_prefix_list_reference_test.go b/internal/service/ec2/transitgateway_prefix_list_reference_test.go index eac01fa33f0..e738d9faa6d 100644 --- 
a/internal/service/ec2/transitgateway_prefix_list_reference_test.go +++ b/internal/service/ec2/transitgateway_prefix_list_reference_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -151,7 +154,7 @@ func testAccTransitGatewayPrefixListReference_TransitGatewayAttachmentID(t *test func testAccCheckTransitGatewayPrefixListReferenceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_prefix_list_reference" { @@ -198,7 +201,7 @@ func testAccTransitGatewayPrefixListReferenceExists(ctx context.Context, n strin return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err = tfec2.FindTransitGatewayPrefixListReferenceByTwoPartKey(ctx, conn, transitGatewayRouteTableID, prefixListID) diff --git a/internal/service/ec2/transitgateway_route.go b/internal/service/ec2/transitgateway_route.go index 87fba2267a0..2c17af15ad3 100644 --- a/internal/service/ec2/transitgateway_route.go +++ b/internal/service/ec2/transitgateway_route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -60,7 +63,7 @@ func ResourceTransitGatewayRoute() *schema.Resource { func resourceTransitGatewayRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) destination := d.Get("destination_cidr_block").(string) transitGatewayRouteTableID := d.Get("transit_gateway_route_table_id").(string) @@ -90,7 +93,7 @@ func resourceTransitGatewayRouteCreate(ctx context.Context, d *schema.ResourceDa func resourceTransitGatewayRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, destination, err := TransitGatewayRouteParseResourceID(d.Id()) @@ -98,7 +101,7 @@ func resourceTransitGatewayRouteRead(ctx context.Context, d *schema.ResourceData return sdkdiag.AppendErrorf(diags, "reading EC2 Transit Gateway Route (%s): %s", d.Id(), err) } - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindTransitGatewayRoute(ctx, conn, transitGatewayRouteTableID, destination) }, d.IsNewResource()) @@ -129,7 +132,7 @@ func resourceTransitGatewayRouteRead(ctx context.Context, d *schema.ResourceData func resourceTransitGatewayRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, destination, err := TransitGatewayRouteParseResourceID(d.Id()) diff --git a/internal/service/ec2/transitgateway_route_table.go 
b/internal/service/ec2/transitgateway_route_table.go index 3030e3fcf50..28e1c7de650 100644 --- a/internal/service/ec2/transitgateway_route_table.go +++ b/internal/service/ec2/transitgateway_route_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -62,7 +65,7 @@ func ResourceTransitGatewayRouteTable() *schema.Resource { func resourceTransitGatewayRouteTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTransitGatewayRouteTableInput{ TransitGatewayId: aws.String(d.Get("transit_gateway_id").(string)), @@ -87,7 +90,7 @@ func resourceTransitGatewayRouteTableCreate(ctx context.Context, d *schema.Resou func resourceTransitGatewayRouteTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTable, err := FindTransitGatewayRouteTableByID(ctx, conn, d.Id()) @@ -113,7 +116,7 @@ func resourceTransitGatewayRouteTableRead(ctx context.Context, d *schema.Resourc d.Set("default_propagation_route_table", transitGatewayRouteTable.DefaultPropagationRouteTable) d.Set("transit_gateway_id", transitGatewayRouteTable.TransitGatewayId) - SetTagsOut(ctx, transitGatewayRouteTable.Tags) + setTagsOut(ctx, transitGatewayRouteTable.Tags) return diags } @@ -128,7 +131,7 @@ func resourceTransitGatewayRouteTableUpdate(ctx context.Context, d *schema.Resou func resourceTransitGatewayRouteTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway Route Table: %s", d.Id()) _, err := 
conn.DeleteTransitGatewayRouteTableWithContext(ctx, &ec2.DeleteTransitGatewayRouteTableInput{ diff --git a/internal/service/ec2/transitgateway_route_table_association.go b/internal/service/ec2/transitgateway_route_table_association.go index ab54bed27a6..d815a669044 100644 --- a/internal/service/ec2/transitgateway_route_table_association.go +++ b/internal/service/ec2/transitgateway_route_table_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -22,6 +25,7 @@ func ResourceTransitGatewayRouteTableAssociation() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceTransitGatewayRouteTableAssociationCreate, ReadWithoutTimeout: resourceTransitGatewayRouteTableAssociationRead, + UpdateWithoutTimeout: schema.NoopContext, DeleteWithoutTimeout: resourceTransitGatewayRouteTableAssociationDelete, Importer: &schema.ResourceImporter{ @@ -29,6 +33,11 @@ func ResourceTransitGatewayRouteTableAssociation() *schema.Resource { }, Schema: map[string]*schema.Schema{ + "replace_existing_association": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, "resource_id": { Type: schema.TypeString, Computed: true, @@ -55,11 +64,33 @@ func ResourceTransitGatewayRouteTableAssociation() *schema.Resource { func resourceTransitGatewayRouteTableAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayAttachmentID := d.Get("transit_gateway_attachment_id").(string) transitGatewayRouteTableID := d.Get("transit_gateway_route_table_id").(string) id := TransitGatewayRouteTableAssociationCreateResourceID(transitGatewayRouteTableID, transitGatewayAttachmentID) + + if d.Get("replace_existing_association").(bool) { + transitGatewayAttachment, err := FindTransitGatewayAttachmentByID(ctx, conn, transitGatewayAttachmentID) + + if 
err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + // If no Association is present, the Transit Gateway Attachment is not linked to a Route Table and there is nothing to replace. + if transitGatewayAttachment.Association != nil { + transitGatewayRouteTableID := aws.StringValue(transitGatewayAttachment.Association.TransitGatewayRouteTableId) + + if state := aws.StringValue(transitGatewayAttachment.Association.State); state != ec2.AssociationStatusCodeAssociated { + return sdkdiag.AppendErrorf(diags, "existing EC2 Transit Gateway Route Table (%s) Association (%s) in unexpected state: %s", transitGatewayRouteTableID, transitGatewayAttachmentID, state) + } + + if err := disassociateTransitGatewayRouteTable(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID); err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + } + + input := &ec2.AssociateTransitGatewayRouteTableInput{ TransitGatewayAttachmentId: aws.String(transitGatewayAttachmentID), TransitGatewayRouteTableId: aws.String(transitGatewayRouteTableID), @@ -82,7 +113,7 @@ func resourceTransitGatewayRouteTableAssociationCreate(ctx context.Context, d *s func resourceTransitGatewayRouteTableAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, transitGatewayAttachmentID, err := TransitGatewayRouteTableAssociationParseResourceID(d.Id()) @@ -112,7 +143,7 @@ func resourceTransitGatewayRouteTableAssociationRead(ctx context.Context, d *sch func resourceTransitGatewayRouteTableAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, transitGatewayAttachmentID, err := TransitGatewayRouteTableAssociationParseResourceID(d.Id()) @@ -121,21 +152,8 @@ func
resourceTransitGatewayRouteTableAssociationDelete(ctx context.Context, d *s } log.Printf("[DEBUG] Deleting EC2 Transit Gateway Route Table Association: %s", d.Id()) - _, err = conn.DisassociateTransitGatewayRouteTableWithContext(ctx, &ec2.DisassociateTransitGatewayRouteTableInput{ - TransitGatewayAttachmentId: aws.String(transitGatewayAttachmentID), - TransitGatewayRouteTableId: aws.String(transitGatewayRouteTableID), - }) - - if tfawserr.ErrCodeEquals(err, errCodeInvalidRouteTableIDNotFound) { - return diags - } - - if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting EC2 Transit Gateway Route Table Association (%s): %s", d.Id(), err) - } - - if _, err := WaitTransitGatewayRouteTableAssociationDeleted(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for EC2 Transit Gateway Route Table Association (%s) delete: %s", d.Id(), err) + if err := disassociateTransitGatewayRouteTable(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID); err != nil { + return sdkdiag.AppendFromErr(diags, err) } return diags @@ -183,18 +201,32 @@ func transitGatewayRouteTableAssociationUpdate(ctx context.Context, conn *ec2.EC return fmt.Errorf("waiting for EC2 Transit Gateway Route Table Association (%s) create: %w", id, err) } - input := &ec2.DisassociateTransitGatewayRouteTableInput{ - TransitGatewayAttachmentId: aws.String(transitGatewayAttachmentID), - TransitGatewayRouteTableId: aws.String(transitGatewayRouteTableID), + if err := disassociateTransitGatewayRouteTable(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID); err != nil { + return err } + } - if _, err := conn.DisassociateTransitGatewayRouteTableWithContext(ctx, input); err != nil { - return fmt.Errorf("deleting EC2 Transit Gateway Route Table Association (%s): %w", id, err) - } + return nil +} - if _, err := WaitTransitGatewayRouteTableAssociationDeleted(ctx, conn, transitGatewayRouteTableID, 
transitGatewayAttachmentID); err != nil { - return fmt.Errorf("waiting for EC2 Transit Gateway Route Table Association (%s) delete: %w", id, err) - } +func disassociateTransitGatewayRouteTable(ctx context.Context, conn *ec2.EC2, transitGatewayRouteTableID, transitGatewayAttachmentID string) error { + input := &ec2.DisassociateTransitGatewayRouteTableInput{ + TransitGatewayAttachmentId: aws.String(transitGatewayAttachmentID), + TransitGatewayRouteTableId: aws.String(transitGatewayRouteTableID), + } + + _, err := conn.DisassociateTransitGatewayRouteTableWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, errCodeInvalidRouteTableIDNotFound) { + return nil + } + + if err != nil { + return fmt.Errorf("deleting EC2 Transit Gateway Route Table (%s) Association (%s): %w", transitGatewayRouteTableID, transitGatewayAttachmentID, err) + } + + if _, err := WaitTransitGatewayRouteTableAssociationDeleted(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID); err != nil { + return fmt.Errorf("waiting for EC2 Transit Gateway Route Table (%s) Association (%s) delete: %w", transitGatewayRouteTableID, transitGatewayAttachmentID, err) } return nil diff --git a/internal/service/ec2/transitgateway_route_table_association_test.go b/internal/service/ec2/transitgateway_route_table_association_test.go index 1e756c14a3f..e1ddac46cbb 100644 --- a/internal/service/ec2/transitgateway_route_table_association_test.go +++ b/internal/service/ec2/transitgateway_route_table_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -33,6 +36,7 @@ func testAccTransitGatewayRouteTableAssociation_basic(t *testing.T) { Config: testAccTransitGatewayRouteTableAssociationConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckTransitGatewayRouteTableAssociationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "replace_existing_association", "false"), resource.TestCheckResourceAttrSet(resourceName, "resource_id"), resource.TestCheckResourceAttrSet(resourceName, "resource_type"), resource.TestCheckResourceAttrPair(resourceName, "transit_gateway_attachment_id", transitGatewayVpcAttachmentResourceName, "id"), @@ -40,9 +44,10 @@ func testAccTransitGatewayRouteTableAssociation_basic(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"replace_existing_association"}, }, }, }) @@ -72,6 +77,43 @@ func testAccTransitGatewayRouteTableAssociation_disappears(t *testing.T) { }) } +func testAccTransitGatewayRouteTableAssociation_replaceExistingAssociation(t *testing.T) { + ctx := acctest.Context(t) + var v ec2.TransitGatewayRouteTableAssociation + resourceName := "aws_ec2_transit_gateway_route_table_association.test" + transitGatewayRouteTableResourceName := "aws_ec2_transit_gateway_route_table.test" + transitGatewayVpcAttachmentResourceName := "aws_ec2_transit_gateway_vpc_attachment.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheckTransitGateway(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTransitGatewayRouteTableAssociationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
testAccTransitGatewayRouteTableAssociationConfig_replaceExistingAssociation(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckTransitGatewayRouteTableAssociationExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "replace_existing_association", "true"), + resource.TestCheckResourceAttrSet(resourceName, "resource_id"), + resource.TestCheckResourceAttrSet(resourceName, "resource_type"), + resource.TestCheckResourceAttrPair(resourceName, "transit_gateway_attachment_id", transitGatewayVpcAttachmentResourceName, "id"), + resource.TestCheckResourceAttrPair(resourceName, "transit_gateway_route_table_id", transitGatewayRouteTableResourceName, "id"), + ), + // aws_ec2_transit_gateway_vpc_attachment.test.transit_gateway_default_route_table_association shows diff: + ExpectNonEmptyPlan: true, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"replace_existing_association"}, + }, + }, + }) +} + func testAccCheckTransitGatewayRouteTableAssociationExists(ctx context.Context, n string, v *ec2.TransitGatewayRouteTableAssociation) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -89,7 +131,7 @@ func testAccCheckTransitGatewayRouteTableAssociationExists(ctx context.Context, return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayRouteTableAssociationByTwoPartKey(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID) @@ -105,7 +147,7 @@ func testAccCheckTransitGatewayRouteTableAssociationExists(ctx context.Context, func testAccCheckTransitGatewayRouteTableAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) 
for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_route_table_association" { @@ -136,24 +178,41 @@ func testAccCheckTransitGatewayRouteTableAssociationDestroy(ctx context.Context) } func testAccTransitGatewayRouteTableAssociationConfig_basic(rName string) string { - return fmt.Sprintf(` -resource "aws_vpc" "test" { - cidr_block = "10.0.0.0/16" + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 1), fmt.Sprintf(` +resource "aws_ec2_transit_gateway" "test" { + tags = { + Name = %[1]q + } +} + +resource "aws_ec2_transit_gateway_vpc_attachment" "test" { + subnet_ids = aws_subnet.test[*].id + transit_gateway_default_route_table_association = false + transit_gateway_id = aws_ec2_transit_gateway.test.id + vpc_id = aws_vpc.test.id tags = { Name = %[1]q } } -resource "aws_subnet" "test" { - cidr_block = "10.0.0.0/24" - vpc_id = aws_vpc.test.id +resource "aws_ec2_transit_gateway_route_table" "test" { + transit_gateway_id = aws_ec2_transit_gateway.test.id tags = { Name = %[1]q } } +resource "aws_ec2_transit_gateway_route_table_association" "test" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.test.id + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.test.id +} +`, rName)) +} + +func testAccTransitGatewayRouteTableAssociationConfig_replaceExistingAssociation(rName string) string { + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 1), fmt.Sprintf(` resource "aws_ec2_transit_gateway" "test" { tags = { Name = %[1]q @@ -161,8 +220,8 @@ resource "aws_ec2_transit_gateway" "test" { } resource "aws_ec2_transit_gateway_vpc_attachment" "test" { - subnet_ids = [aws_subnet.test.id] - transit_gateway_default_route_table_association = false + subnet_ids = aws_subnet.test[*].id + transit_gateway_default_route_table_association = true transit_gateway_id = aws_ec2_transit_gateway.test.id vpc_id = aws_vpc.test.id @@ -182,6 +241,8 @@ resource "aws_ec2_transit_gateway_route_table" 
"test" { resource "aws_ec2_transit_gateway_route_table_association" "test" { transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.test.id transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.test.id + + replace_existing_association = true } -`, rName) +`, rName)) } diff --git a/internal/service/ec2/transitgateway_route_table_associations_data_source.go b/internal/service/ec2/transitgateway_route_table_associations_data_source.go index 6fd16fe83b0..687bbf5b26f 100644 --- a/internal/service/ec2/transitgateway_route_table_associations_data_source.go +++ b/internal/service/ec2/transitgateway_route_table_associations_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -40,7 +43,7 @@ func DataSourceTransitGatewayRouteTableAssociations() *schema.Resource { func dataSourceTransitGatewayRouteTableAssociationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.GetTransitGatewayRouteTableAssociationsInput{} diff --git a/internal/service/ec2/transitgateway_route_table_associations_data_source_test.go b/internal/service/ec2/transitgateway_route_table_associations_data_source_test.go index abd4caf8cf8..efa8c46d6a0 100644 --- a/internal/service/ec2/transitgateway_route_table_associations_data_source_test.go +++ b/internal/service/ec2/transitgateway_route_table_associations_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
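The association resource's state ID is a two-part key built from the route table ID and the attachment ID (`TransitGatewayRouteTableAssociationCreateResourceID` / `...ParseResourceID` in the hunks above). A minimal sketch of that convention, assuming a `"_"` separator (an assumption here; the helper names and error wording below are illustrative, not copied from the provider):

```go
package main

import (
	"fmt"
	"strings"
)

// idSeparator is assumed to be "_", as used by the provider's two-part
// Transit Gateway resource IDs.
const idSeparator = "_"

// createResourceID joins the route table ID and attachment ID into the
// single ID stored in Terraform state.
func createResourceID(routeTableID, attachmentID string) string {
	return strings.Join([]string{routeTableID, attachmentID}, idSeparator)
}

// parseResourceID splits a state ID back into its two parts, rejecting
// malformed input so Read/Delete can fail fast on a bad import ID.
func parseResourceID(id string) (string, string, error) {
	parts := strings.Split(id, idSeparator)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unexpected format for ID (%s), expected ROUTE-TABLE-ID%sATTACHMENT-ID", id, idSeparator)
	}
	return parts[0], parts[1], nil
}

func main() {
	id := createResourceID("tgw-rtb-0123", "tgw-attach-4567")
	fmt.Println(id)
	rtb, att, err := parseResourceID(id)
	fmt.Println(rtb, att, err)
}
```

This is also why `replace_existing_association` must appear in `ImportStateVerifyIgnore` in the tests: it is a create-time behavior flag, not part of the ID, so it cannot be recovered on import.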
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -23,7 +26,7 @@ func testAccTransitGatewayRouteTableAssociationsDataSource_basic(t *testing.T) { { Config: testAccTransitGatewayRouteTableAssociationsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 0), ), }, }, diff --git a/internal/service/ec2/transitgateway_route_table_data_source.go b/internal/service/ec2/transitgateway_route_table_data_source.go index 149fd80128e..420abbff9a5 100644 --- a/internal/service/ec2/transitgateway_route_table_data_source.go +++ b/internal/service/ec2/transitgateway_route_table_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -55,7 +58,7 @@ func DataSourceTransitGatewayRouteTable() *schema.Resource { func dataSourceTransitGatewayRouteTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayRouteTablesInput{} diff --git a/internal/service/ec2/transitgateway_route_table_data_source_test.go b/internal/service/ec2/transitgateway_route_table_data_source_test.go index ef48a8d3052..d9ca2e7ba82 100644 --- a/internal/service/ec2/transitgateway_route_table_data_source_test.go +++ b/internal/service/ec2/transitgateway_route_table_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_route_table_propagation.go b/internal/service/ec2/transitgateway_route_table_propagation.go index b7b62dbc427..3d6a3a15d0d 100644 --- a/internal/service/ec2/transitgateway_route_table_propagation.go +++ b/internal/service/ec2/transitgateway_route_table_propagation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -55,7 +58,7 @@ func ResourceTransitGatewayRouteTablePropagation() *schema.Resource { func resourceTransitGatewayRouteTablePropagationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayAttachmentID := d.Get("transit_gateway_attachment_id").(string) transitGatewayRouteTableID := d.Get("transit_gateway_route_table_id").(string) @@ -82,7 +85,7 @@ func resourceTransitGatewayRouteTablePropagationCreate(ctx context.Context, d *s func resourceTransitGatewayRouteTablePropagationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, transitGatewayAttachmentID, err := TransitGatewayRouteTablePropagationParseResourceID(d.Id()) @@ -112,7 +115,7 @@ func resourceTransitGatewayRouteTablePropagationRead(ctx context.Context, d *sch func resourceTransitGatewayRouteTablePropagationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayRouteTableID, transitGatewayAttachmentID, err := TransitGatewayRouteTablePropagationParseResourceID(d.Id()) diff --git 
a/internal/service/ec2/transitgateway_route_table_propagation_test.go b/internal/service/ec2/transitgateway_route_table_propagation_test.go index 68597c55371..d76473d86c0 100644 --- a/internal/service/ec2/transitgateway_route_table_propagation_test.go +++ b/internal/service/ec2/transitgateway_route_table_propagation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -89,7 +92,7 @@ func testAccCheckTransitGatewayRouteTablePropagationExists(ctx context.Context, return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayRouteTablePropagationByTwoPartKey(ctx, conn, transitGatewayRouteTableID, transitGatewayAttachmentID) @@ -105,7 +108,7 @@ func testAccCheckTransitGatewayRouteTablePropagationExists(ctx context.Context, func testAccCheckTransitGatewayRouteTablePropagationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_route_table_propagation" { diff --git a/internal/service/ec2/transitgateway_route_table_propagations_data_source.go b/internal/service/ec2/transitgateway_route_table_propagations_data_source.go index d3a9a6081d3..80183b2a24c 100644 --- a/internal/service/ec2/transitgateway_route_table_propagations_data_source.go +++ b/internal/service/ec2/transitgateway_route_table_propagations_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -40,7 +43,7 @@ func DataSourceTransitGatewayRouteTablePropagations() *schema.Resource { func dataSourceTransitGatewayRouteTablePropagationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.GetTransitGatewayRouteTablePropagationsInput{} diff --git a/internal/service/ec2/transitgateway_route_table_propagations_data_source_test.go b/internal/service/ec2/transitgateway_route_table_propagations_data_source_test.go index afb612dc855..57346ab4b0a 100644 --- a/internal/service/ec2/transitgateway_route_table_propagations_data_source_test.go +++ b/internal/service/ec2/transitgateway_route_table_propagations_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -23,7 +26,7 @@ func testAccTransitGatewayRouteTablePropagationsDataSource_basic(t *testing.T) { { Config: testAccTransitGatewayRouteTablePropagationsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 0), ), }, }, diff --git a/internal/service/ec2/transitgateway_route_table_test.go b/internal/service/ec2/transitgateway_route_table_test.go index 7f928559884..04a7155eb2b 100644 --- a/internal/service/ec2/transitgateway_route_table_test.go +++ b/internal/service/ec2/transitgateway_route_table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -161,7 +164,7 @@ func testAccCheckTransitGatewayRouteTableExists(ctx context.Context, n string, v return fmt.Errorf("No EC2 Transit Gateway Route Table ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayRouteTableByID(ctx, conn, rs.Primary.ID) @@ -177,7 +180,7 @@ func testAccCheckTransitGatewayRouteTableExists(ctx context.Context, n string, v func testAccCheckTransitGatewayRouteTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_route_table" { diff --git a/internal/service/ec2/transitgateway_route_tables_data_source.go b/internal/service/ec2/transitgateway_route_tables_data_source.go index 8ff91305ae9..4066d1eb32b 100644 --- a/internal/service/ec2/transitgateway_route_tables_data_source.go +++ b/internal/service/ec2/transitgateway_route_tables_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceTransitGatewayRouteTables() *schema.Resource { func dataSourceTransitGatewayRouteTablesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeTransitGatewayRouteTablesInput{} diff --git a/internal/service/ec2/transitgateway_route_tables_data_source_test.go b/internal/service/ec2/transitgateway_route_tables_data_source_test.go index d4cf704336c..e940ab200b3 100644 --- a/internal/service/ec2/transitgateway_route_tables_data_source_test.go +++ b/internal/service/ec2/transitgateway_route_tables_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -23,7 +26,7 @@ func testAccTransitGatewayRouteTablesDataSource_basic(t *testing.T) { { Config: testAccTransitGatewayRouteTablesDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 0), ), }, }, diff --git a/internal/service/ec2/transitgateway_route_test.go b/internal/service/ec2/transitgateway_route_test.go index 545e023426f..ad7ee084ca7 100644 --- a/internal/service/ec2/transitgateway_route_test.go +++ b/internal/service/ec2/transitgateway_route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -179,7 +182,7 @@ func testAccCheckTransitGatewayRouteExists(ctx context.Context, n string, v *ec2 return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayRoute(ctx, conn, transitGatewayRouteTableID, destination) @@ -195,7 +198,7 @@ func testAccCheckTransitGatewayRouteExists(ctx context.Context, n string, v *ec2 func testAccCheckTransitGatewayRouteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_route" { diff --git a/internal/service/ec2/transitgateway_test.go b/internal/service/ec2/transitgateway_test.go index e3417054be1..b12a07f1042 100644 --- a/internal/service/ec2/transitgateway_test.go +++ b/internal/service/ec2/transitgateway_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -117,8 +120,9 @@ func TestAccTransitGateway_serial(t *testing.T) { "tags": testAccTransitGatewayRouteTable_tags, }, "RouteTableAssociation": { - "basic": testAccTransitGatewayRouteTableAssociation_basic, - "disappears": testAccTransitGatewayRouteTableAssociation_disappears, + "basic": testAccTransitGatewayRouteTableAssociation_basic, + "disappears": testAccTransitGatewayRouteTableAssociation_disappears, + "ReplaceExistingAssociation": testAccTransitGatewayRouteTableAssociation_replaceExistingAssociation, }, "RouteTablePropagation": { "basic": testAccTransitGatewayRouteTablePropagation_basic, @@ -613,7 +617,7 @@ func testAccCheckTransitGatewayExists(ctx context.Context, n string, v *ec2.Tran return fmt.Errorf("No EC2 Transit Gateway ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayByID(ctx, conn, rs.Primary.ID) @@ -629,7 +633,7 @@ func testAccCheckTransitGatewayExists(ctx context.Context, n string, v *ec2.Tran func testAccCheckTransitGatewayDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway" { @@ -681,7 +685,7 @@ func testAccCheckTransitGatewayAssociationDefaultRouteTableAttachmentAssociated( return errors.New("EC2 Transit Gateway Association Default Route Table empty") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) var transitGatewayAttachmentID string switch transitGatewayAttachment := transitGatewayAttachment.(type) { @@ -709,7 +713,7 @@ func testAccCheckTransitGatewayAssociationDefaultRouteTableAttachmentNotAssociat return nil } - 
conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) var transitGatewayAttachmentID string switch transitGatewayAttachment := transitGatewayAttachment.(type) { @@ -741,7 +745,7 @@ func testAccCheckTransitGatewayPropagationDefaultRouteTableAttachmentPropagated( return errors.New("EC2 Transit Gateway Propagation Default Route Table empty") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) var transitGatewayAttachmentID string switch transitGatewayAttachment := transitGatewayAttachment.(type) { @@ -769,7 +773,7 @@ func testAccCheckTransitGatewayPropagationDefaultRouteTableAttachmentNotPropagat return nil } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) var transitGatewayAttachmentID string switch transitGatewayAttachment := transitGatewayAttachment.(type) { @@ -794,7 +798,7 @@ func testAccCheckTransitGatewayPropagationDefaultRouteTableAttachmentNotPropagat } func testAccPreCheckTransitGateway(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeTransitGatewaysInput{ MaxResults: aws.Int64(5), @@ -812,7 +816,7 @@ func testAccPreCheckTransitGateway(ctx context.Context, t *testing.T) { } func testAccPreCheckTransitGatewayConnect(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeTransitGatewayConnectsInput{ MaxResults: aws.Int64(5), @@ -830,7 +834,7 @@ func testAccPreCheckTransitGatewayConnect(ctx context.Context, t *testing.T) { } func testAccPreCheckTransitGatewayVPCAttachment(ctx context.Context, t *testing.T) { - conn := 
acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeTransitGatewayVpcAttachmentsInput{ MaxResults: aws.Int64(5), diff --git a/internal/service/ec2/transitgateway_vpc_attachment.go b/internal/service/ec2/transitgateway_vpc_attachment.go index cffde817b76..b1d0a085403 100644 --- a/internal/service/ec2/transitgateway_vpc_attachment.go +++ b/internal/service/ec2/transitgateway_vpc_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -93,7 +96,7 @@ func ResourceTransitGatewayVPCAttachment() *schema.Resource { func resourceTransitGatewayVPCAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayID := d.Get("transit_gateway_id").(string) input := &ec2.CreateTransitGatewayVpcAttachmentInput{ @@ -143,7 +146,7 @@ func resourceTransitGatewayVPCAttachmentCreate(ctx context.Context, d *schema.Re func resourceTransitGatewayVPCAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayVPCAttachment, err := FindTransitGatewayVPCAttachmentByID(ctx, conn, d.Id()) @@ -204,14 +207,14 @@ func resourceTransitGatewayVPCAttachmentRead(ctx context.Context, d *schema.Reso d.Set("vpc_id", transitGatewayVPCAttachment.VpcId) d.Set("vpc_owner_id", transitGatewayVPCAttachment.VpcOwnerId) - SetTagsOut(ctx, transitGatewayVPCAttachment.Tags) + setTagsOut(ctx, transitGatewayVPCAttachment.Tags) return diags } func resourceTransitGatewayVPCAttachmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChanges("appliance_mode_support", "dns_support", "ipv6_support", "subnet_ids") { input := &ec2.ModifyTransitGatewayVpcAttachmentInput{ @@ -270,7 +273,7 @@ func resourceTransitGatewayVPCAttachmentUpdate(ctx context.Context, d *schema.Re func resourceTransitGatewayVPCAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway VPC Attachment: %s", d.Id()) _, err := conn.DeleteTransitGatewayVpcAttachmentWithContext(ctx, &ec2.DeleteTransitGatewayVpcAttachmentInput{ diff --git a/internal/service/ec2/transitgateway_vpc_attachment_accepter.go b/internal/service/ec2/transitgateway_vpc_attachment_accepter.go index f90b99cfb1c..601fd793150 100644 --- a/internal/service/ec2/transitgateway_vpc_attachment_accepter.go +++ b/internal/service/ec2/transitgateway_vpc_attachment_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -85,7 +88,7 @@ func ResourceTransitGatewayVPCAttachmentAccepter() *schema.Resource { func resourceTransitGatewayVPCAttachmentAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayAttachmentID := d.Get("transit_gateway_attachment_id").(string) input := &ec2.AcceptTransitGatewayVpcAttachmentInput{ @@ -106,7 +109,7 @@ func resourceTransitGatewayVPCAttachmentAccepterCreate(ctx context.Context, d *s return sdkdiag.AppendErrorf(diags, "accepting EC2 Transit Gateway VPC Attachment (%s): waiting for completion: %s", transitGatewayAttachmentID, err) } - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EC2 Transit Gateway VPC Attachment (%s) tags: %s", d.Id(), err) } @@ -129,7 +132,7 @@ func resourceTransitGatewayVPCAttachmentAccepterCreate(ctx context.Context, d *s func resourceTransitGatewayVPCAttachmentAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) transitGatewayVPCAttachment, err := FindTransitGatewayVPCAttachmentByID(ctx, conn, d.Id()) @@ -188,14 +191,14 @@ func resourceTransitGatewayVPCAttachmentAccepterRead(ctx context.Context, d *sch d.Set("vpc_id", transitGatewayVPCAttachment.VpcId) d.Set("vpc_owner_id", transitGatewayVPCAttachment.VpcOwnerId) - SetTagsOut(ctx, transitGatewayVPCAttachment.Tags) + setTagsOut(ctx, transitGatewayVPCAttachment.Tags) return diags } func resourceTransitGatewayVPCAttachmentAccepterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChanges("transit_gateway_default_route_table_association", "transit_gateway_default_route_table_propagation") { transitGatewayID := d.Get("transit_gateway_id").(string) @@ -223,7 +226,7 @@ func resourceTransitGatewayVPCAttachmentAccepterUpdate(ctx context.Context, d *s func resourceTransitGatewayVPCAttachmentAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Transit Gateway VPC Attachment: %s", d.Id()) _, err := conn.DeleteTransitGatewayVpcAttachmentWithContext(ctx, &ec2.DeleteTransitGatewayVpcAttachmentInput{ diff --git a/internal/service/ec2/transitgateway_vpc_attachment_accepter_test.go b/internal/service/ec2/transitgateway_vpc_attachment_accepter_test.go index 32a4c8c93f8..fae7e9ca933 100644 --- a/internal/service/ec2/transitgateway_vpc_attachment_accepter_test.go +++ b/internal/service/ec2/transitgateway_vpc_attachment_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_vpc_attachment_data_source.go b/internal/service/ec2/transitgateway_vpc_attachment_data_source.go index 3a9f2bbff7d..43d05f70ef1 100644 --- a/internal/service/ec2/transitgateway_vpc_attachment_data_source.go +++ b/internal/service/ec2/transitgateway_vpc_attachment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -66,7 +69,7 @@ func DataSourceTransitGatewayVPCAttachment() *schema.Resource { func dataSourceTransitGatewayVPCAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayVpcAttachmentsInput{} diff --git a/internal/service/ec2/transitgateway_vpc_attachment_data_source_test.go b/internal/service/ec2/transitgateway_vpc_attachment_data_source_test.go index a8586e1e8d7..ac68d1879c6 100644 --- a/internal/service/ec2/transitgateway_vpc_attachment_data_source_test.go +++ b/internal/service/ec2/transitgateway_vpc_attachment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_vpc_attachment_test.go b/internal/service/ec2/transitgateway_vpc_attachment_test.go index b8d74a117e2..62911fc1dc6 100644 --- a/internal/service/ec2/transitgateway_vpc_attachment_test.go +++ b/internal/service/ec2/transitgateway_vpc_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -460,7 +463,7 @@ func testAccCheckTransitGatewayVPCAttachmentExists(ctx context.Context, n string return fmt.Errorf("No EC2 Transit Gateway VPC Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTransitGatewayVPCAttachmentByID(ctx, conn, rs.Primary.ID) @@ -476,7 +479,7 @@ func testAccCheckTransitGatewayVPCAttachmentExists(ctx context.Context, n string func testAccCheckTransitGatewayVPCAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_transit_gateway_vpc_attachment" { diff --git a/internal/service/ec2/transitgateway_vpc_attachments_data_source.go b/internal/service/ec2/transitgateway_vpc_attachments_data_source.go index a5297f7615c..eef222500d2 100644 --- a/internal/service/ec2/transitgateway_vpc_attachments_data_source.go +++ b/internal/service/ec2/transitgateway_vpc_attachments_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -32,7 +35,7 @@ func DataSourceTransitGatewayVPCAttachments() *schema.Resource { } func dataSourceTransitGatewayVPCAttachmentsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeTransitGatewayVpcAttachmentsInput{} diff --git a/internal/service/ec2/transitgateway_vpc_attachments_data_source_test.go b/internal/service/ec2/transitgateway_vpc_attachments_data_source_test.go index f90b7a91619..1b666f004b2 100644 --- a/internal/service/ec2/transitgateway_vpc_attachments_data_source_test.go +++ b/internal/service/ec2/transitgateway_vpc_attachments_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/transitgateway_vpn_attachment_data_source.go b/internal/service/ec2/transitgateway_vpn_attachment_data_source.go index f7d4e373f65..eaba84041a2 100644 --- a/internal/service/ec2/transitgateway_vpn_attachment_data_source.go +++ b/internal/service/ec2/transitgateway_vpn_attachment_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -40,7 +43,7 @@ func DataSourceTransitGatewayVPNAttachment() *schema.Resource { func dataSourceTransitGatewayVPNAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeTransitGatewayAttachmentsInput{ diff --git a/internal/service/ec2/transitgateway_vpn_attachment_data_source_test.go b/internal/service/ec2/transitgateway_vpn_attachment_data_source_test.go index dd1e7057749..3ab566e1bd5 100644 --- a/internal/service/ec2/transitgateway_vpn_attachment_data_source_test.go +++ b/internal/service/ec2/transitgateway_vpn_attachment_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/validate.go b/internal/service/ec2/validate.go index 830a1eb1132..50f81f12fa5 100644 --- a/internal/service/ec2/validate.go +++ b/internal/service/ec2/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/validate_test.go b/internal/service/ec2/validate_test.go index 121c7323a26..85a7e6f48e4 100644 --- a/internal/service/ec2/validate_test.go +++ b/internal/service/ec2/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/vpc_.go b/internal/service/ec2/vpc_.go index d346abd5e9c..0ef134971bb 100644 --- a/internal/service/ec2/vpc_.go +++ b/internal/service/ec2/vpc_.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -172,7 +175,7 @@ func ResourceVPC() *schema.Resource { func resourceVPCCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateVpcInput{ AmazonProvidedIpv6CidrBlock: aws.Bool(d.Get("assign_generated_ipv6_cidr_block").(bool)), @@ -208,7 +211,7 @@ func resourceVPCCreate(ctx context.Context, d *schema.ResourceData, meta interfa input.Ipv6NetmaskLength = aws.Int64(int64(v.(int))) } - outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, ec2PropagationTimeout, func() (interface{}, error) { return conn.CreateVpcWithContext(ctx, input) // "UnsupportedOperation: The operation AllocateIpamPoolCidr is not supported. Account 123456789012 is not monitored by IPAM ipam-07b079e3392782a55." 
}, errCodeUnsupportedOperation, "is not monitored by IPAM") @@ -253,9 +256,9 @@ func resourceVPCCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceVPCRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindVPCByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -285,19 +288,25 @@ func resourceVPCRead(ctx context.Context, d *schema.ResourceData, meta interface d.Set("instance_tenancy", vpc.InstanceTenancy) d.Set("owner_id", ownerID) - if v, err := FindVPCAttribute(ctx, conn, d.Id(), ec2.VpcAttributeNameEnableDnsHostnames); err != nil { + if v, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { + return FindVPCAttribute(ctx, conn, d.Id(), ec2.VpcAttributeNameEnableDnsHostnames) + }, d.IsNewResource()); err != nil { return sdkdiag.AppendErrorf(diags, "reading EC2 VPC (%s) Attribute (%s): %s", d.Id(), ec2.VpcAttributeNameEnableDnsHostnames, err) } else { d.Set("enable_dns_hostnames", v) } - if v, err := FindVPCAttribute(ctx, conn, d.Id(), ec2.VpcAttributeNameEnableDnsSupport); err != nil { + if v, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { + return FindVPCAttribute(ctx, conn, d.Id(), ec2.VpcAttributeNameEnableDnsSupport) + }, d.IsNewResource()); err != nil { return sdkdiag.AppendErrorf(diags, "reading EC2 VPC (%s) Attribute (%s): %s", d.Id(), ec2.VpcAttributeNameEnableDnsSupport, err) } else { d.Set("enable_dns_support", v) } - if v, err := FindVPCAttribute(ctx, conn, d.Id(), ec2.VpcAttributeNameEnableNetworkAddressUsageMetrics); err != nil { + if 
v, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { + return FindVPCAttribute(ctx, conn, d.Id(), ec2.VpcAttributeNameEnableNetworkAddressUsageMetrics) + }, d.IsNewResource()); err != nil { return sdkdiag.AppendErrorf(diags, "reading EC2 VPC (%s) Attribute (%s): %s", d.Id(), ec2.VpcAttributeNameEnableNetworkAddressUsageMetrics, err) } else { d.Set("enable_network_address_usage_metrics", v) @@ -364,14 +373,14 @@ func resourceVPCRead(ctx context.Context, d *schema.ResourceData, meta interface } } - SetTagsOut(ctx, vpc.Tags) + setTagsOut(ctx, vpc.Tags) return diags } func resourceVPCUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("enable_dns_hostnames") { if err := modifyVPCDNSHostnames(ctx, conn, d.Id(), d.Get("enable_dns_hostnames").(bool)); err != nil { @@ -434,7 +443,7 @@ func resourceVPCUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceVPCDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DeleteVpcInput{ VpcId: aws.String(d.Id()), @@ -495,10 +504,10 @@ func resourceVPCImport(ctx context.Context, d *schema.ResourceData, meta interfa func resourceVPCCustomizeDiff(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { if diff.HasChange("assign_generated_ipv6_cidr_block") { if err := diff.SetNewComputed("ipv6_association_id"); err != nil { - return fmt.Errorf("error setting ipv6_association_id to computed: %s", err) + return fmt.Errorf("setting ipv6_association_id to computed: %s", err) } if err := diff.SetNewComputed("ipv6_cidr_block"); err != nil { - return fmt.Errorf("error setting ipv6_cidr_block to computed: %s", err) + return 
fmt.Errorf("setting ipv6_cidr_block to computed: %s", err) } } diff --git a/internal/service/ec2/vpc_data_source.go b/internal/service/ec2/vpc_data_source.go index ee43032d68c..0e76e035b0e 100644 --- a/internal/service/ec2/vpc_data_source.go +++ b/internal/service/ec2/vpc_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -116,7 +119,7 @@ func DataSourceVPC() *schema.Resource { func dataSourceVPCRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig // We specify "default" as boolean, but EC2 filters want diff --git a/internal/service/ec2/vpc_data_source_test.go b/internal/service/ec2/vpc_data_source_test.go index 8816d391d97..d68c074d4f5 100644 --- a/internal/service/ec2/vpc_data_source_test.go +++ b/internal/service/ec2/vpc_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_default_network_acl.go b/internal/service/ec2/vpc_default_network_acl.go index e8b12e7ce65..328d83fbf5e 100644 --- a/internal/service/ec2/vpc_default_network_acl.go +++ b/internal/service/ec2/vpc_default_network_acl.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -95,7 +98,7 @@ func ResourceDefaultNetworkACL() *schema.Resource { func resourceDefaultNetworkACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) naclID := d.Get("default_network_acl_id").(string) nacl, err := FindNetworkACLByID(ctx, conn, naclID) @@ -121,11 +124,11 @@ func resourceDefaultNetworkACLCreate(ctx context.Context, d *schema.ResourceData // Configure tags. ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - newTags := KeyValueTags(ctx, GetTagsIn(ctx)) + newTags := KeyValueTags(ctx, getTagsIn(ctx)) oldTags := KeyValueTags(ctx, nacl.Tags).IgnoreSystem(names.EC2).IgnoreConfig(ignoreTagsConfig) if !oldTags.Equal(newTags) { - if err := UpdateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { + if err := updateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { return sdkdiag.AppendErrorf(diags, "updating EC2 Default Network ACL (%s) tags: %s", d.Id(), err) } } @@ -135,7 +138,7 @@ func resourceDefaultNetworkACLCreate(ctx context.Context, d *schema.ResourceData func resourceDefaultNetworkACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // Subnets *must* belong to a Network ACL. Subnets are not "removed" from // Network ACLs, instead their association is replaced. In a normal diff --git a/internal/service/ec2/vpc_default_network_acl_test.go b/internal/service/ec2/vpc_default_network_acl_test.go index dc65912c54f..e3d79e3d25d 100644 --- a/internal/service/ec2/vpc_default_network_acl_test.go +++ b/internal/service/ec2/vpc_default_network_acl_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -315,7 +318,7 @@ func testAccCheckDefaultNetworkACLExists(ctx context.Context, n string, v *ec2.N return fmt.Errorf("No EC2 Default Network ACL ID is set: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindNetworkACLByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_default_route_table.go b/internal/service/ec2/vpc_default_route_table.go index 466bd579248..1ef761ad7ed 100644 --- a/internal/service/ec2/vpc_default_route_table.go +++ b/internal/service/ec2/vpc_default_route_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -140,7 +143,7 @@ func ResourceDefaultRouteTable() *schema.Resource { func resourceDefaultRouteTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTableID := d.Get("default_route_table_id").(string) @@ -233,7 +236,7 @@ func resourceDefaultRouteTableCreate(ctx context.Context, d *schema.ResourceData } } - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EC2 Default Route Table (%s) tags: %s", d.Id(), err) } @@ -249,7 +252,7 @@ func resourceDefaultRouteTableRead(ctx context.Context, d *schema.ResourceData, } func resourceDefaultRouteTableImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTable, err := FindMainRouteTableByVPCID(ctx, conn, d.Id()) diff --git a/internal/service/ec2/vpc_default_route_table_test.go 
b/internal/service/ec2/vpc_default_route_table_test.go index ac8469a52c3..1d58533866a 100644 --- a/internal/service/ec2/vpc_default_route_table_test.go +++ b/internal/service/ec2/vpc_default_route_table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -530,7 +533,7 @@ func TestAccVPCDefaultRouteTable_revokeExistingRules(t *testing.T) { func testAccCheckDefaultRouteTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_default_route_table" { @@ -1233,7 +1236,7 @@ resource "aws_default_route_table" "test" { } func testAccPreCheckELBv2GatewayLoadBalancer(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) input := &elbv2.DescribeAccountLimitsInput{} diff --git a/internal/service/ec2/vpc_default_security_group.go b/internal/service/ec2/vpc_default_security_group.go index ad6f224189b..2e414e7e749 100644 --- a/internal/service/ec2/vpc_default_security_group.go +++ b/internal/service/ec2/vpc_default_security_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -78,7 +81,7 @@ func ResourceDefaultSecurityGroup() *schema.Resource { } func resourceDefaultSecurityGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSecurityGroupsInput{ Filters: BuildAttributeFilterList( @@ -111,11 +114,11 @@ func resourceDefaultSecurityGroupCreate(ctx context.Context, d *schema.ResourceD d.SetId(aws.StringValue(sg.GroupId)) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - newTags := KeyValueTags(ctx, GetTagsIn(ctx)) + newTags := KeyValueTags(ctx, getTagsIn(ctx)) oldTags := KeyValueTags(ctx, sg.Tags).IgnoreSystem(names.EC2).IgnoreConfig(ignoreTagsConfig) if !newTags.Equal(oldTags) { - if err := UpdateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { + if err := updateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { return diag.Errorf("updating Default Security Group (%s) tags: %s", d.Id(), err) } } diff --git a/internal/service/ec2/vpc_default_security_group_test.go b/internal/service/ec2/vpc_default_security_group_test.go index a28115dd121..bf551bb9b3d 100644 --- a/internal/service/ec2/vpc_default_security_group_test.go +++ b/internal/service/ec2/vpc_default_security_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_default_subnet.go b/internal/service/ec2/vpc_default_subnet.go index 371bd8a9ca4..1efae8f241b 100644 --- a/internal/service/ec2/vpc_default_subnet.go +++ b/internal/service/ec2/vpc_default_subnet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -162,7 +165,7 @@ func ResourceDefaultSubnet() *schema.Resource { func resourceDefaultSubnetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) availabilityZone := d.Get("availability_zone").(string) input := &ec2.DescribeSubnetsInput{ @@ -238,11 +241,11 @@ func resourceDefaultSubnetCreate(ctx context.Context, d *schema.ResourceData, me // Configure tags. ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - newTags := KeyValueTags(ctx, GetTagsIn(ctx)) + newTags := KeyValueTags(ctx, getTagsIn(ctx)) oldTags := KeyValueTags(ctx, subnet.Tags).IgnoreSystem(names.EC2).IgnoreConfig(ignoreTagsConfig) if !oldTags.Equal(newTags) { - if err := UpdateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { + if err := updateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { return sdkdiag.AppendErrorf(diags, "updating EC2 Default Subnet (%s) tags: %s", d.Id(), err) } } diff --git a/internal/service/ec2/vpc_default_subnet_test.go b/internal/service/ec2/vpc_default_subnet_test.go index 09586d3e5cf..59ef4baf16a 100644 --- a/internal/service/ec2/vpc_default_subnet_test.go +++ b/internal/service/ec2/vpc_default_subnet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -19,7 +22,7 @@ import ( ) func testAccPreCheckDefaultSubnetExists(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSubnetsInput{ Filters: tfec2.BuildAttributeFilterList( @@ -41,7 +44,7 @@ func testAccPreCheckDefaultSubnetExists(ctx context.Context, t *testing.T) { } func testAccPreCheckDefaultSubnetNotFound(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSubnetsInput{ Filters: tfec2.BuildAttributeFilterList( @@ -338,7 +341,7 @@ func testAccDefaultSubnet_NotFound_ipv6Native(t *testing.T) { // Any missing default subnets are then created. func testAccCheckDefaultSubnetDestroyExists(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_subnet" { @@ -361,7 +364,7 @@ func testAccCheckDefaultSubnetDestroyExists(ctx context.Context) resource.TestCh // Any missing default subnets are then created. 
func testAccCheckDefaultSubnetDestroyNotFound(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_subnet" { @@ -386,7 +389,7 @@ func testAccCheckDefaultSubnetDestroyNotFound(ctx context.Context) resource.Test } func testAccCreateMissingDefaultSubnets(ctx context.Context) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := conn.DescribeAvailabilityZonesWithContext(ctx, &ec2.DescribeAvailabilityZonesInput{ Filters: tfec2.BuildAttributeFilterList( diff --git a/internal/service/ec2/vpc_default_vpc.go b/internal/service/ec2/vpc_default_vpc.go index 5231b132050..4abae394a82 100644 --- a/internal/service/ec2/vpc_default_vpc.go +++ b/internal/service/ec2/vpc_default_vpc.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -151,7 +154,7 @@ func ResourceDefaultVPC() *schema.Resource { func resourceDefaultVPCCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeVpcsInput{ Filters: BuildAttributeFilterList( @@ -269,11 +272,11 @@ func resourceDefaultVPCCreate(ctx context.Context, d *schema.ResourceData, meta // Configure tags. 
ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - newTags := KeyValueTags(ctx, GetTagsIn(ctx)) + newTags := KeyValueTags(ctx, getTagsIn(ctx)) oldTags := KeyValueTags(ctx, vpc.Tags).IgnoreSystem(names.EC2).IgnoreConfig(ignoreTagsConfig) if !oldTags.Equal(newTags) { - if err := UpdateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { + if err := updateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { return sdkdiag.AppendErrorf(diags, "updating EC2 Default VPC (%s) tags: %s", d.Id(), err) } } diff --git a/internal/service/ec2/vpc_default_vpc_dhcp_options.go b/internal/service/ec2/vpc_default_vpc_dhcp_options.go index 956b10d2f23..bcce8535d15 100644 --- a/internal/service/ec2/vpc_default_vpc_dhcp_options.go +++ b/internal/service/ec2/vpc_default_vpc_dhcp_options.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -77,7 +80,7 @@ func ResourceDefaultVPCDHCPOptions() *schema.Resource { func resourceDefaultVPCDHCPOptionsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeDhcpOptionsInput{} diff --git a/internal/service/ec2/vpc_default_vpc_dhcp_options_test.go b/internal/service/ec2/vpc_default_vpc_dhcp_options_test.go index f1cc0027c93..729407dd213 100644 --- a/internal/service/ec2/vpc_default_vpc_dhcp_options_test.go +++ b/internal/service/ec2/vpc_default_vpc_dhcp_options_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_default_vpc_test.go b/internal/service/ec2/vpc_default_vpc_test.go index b257f91bd5c..1aa4e765f59 100644 --- a/internal/service/ec2/vpc_default_vpc_test.go +++ b/internal/service/ec2/vpc_default_vpc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -410,7 +413,7 @@ func testAccDefaultVPC_NotFound_assignGeneratedIPv6CIDRBlockAdoption(t *testing. // It verifies that the default VPC still exists. func testAccCheckDefaultVPCDestroyExists(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_default_vpc" { @@ -433,7 +436,7 @@ func testAccCheckDefaultVPCDestroyExists(ctx context.Context) resource.TestCheck // A new default VPC is then created. func testAccCheckDefaultVPCDestroyNotFound(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_default_vpc" { @@ -472,7 +475,7 @@ func testAccCheckDefaultVPCEmpty(ctx context.Context, v *ec2.Vpc) resource.TestC // testAccEmptyDefaultVPC empties a default VPC so that it can be deleted. func testAccEmptyDefaultVPC(ctx context.Context, vpcID string) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) // Delete the default IGW. igw, err := tfec2.FindInternetGateway(ctx, conn, &ec2.DescribeInternetGatewaysInput{ diff --git a/internal/service/ec2/vpc_dhcp_options.go b/internal/service/ec2/vpc_dhcp_options.go index 34eae02815e..c96fa61a779 100644 --- a/internal/service/ec2/vpc_dhcp_options.go +++ b/internal/service/ec2/vpc_dhcp_options.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -96,7 +99,7 @@ var ( func resourceVPCDHCPOptionsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) dhcpConfigurations, err := optionsMap.resourceDataToDHCPConfigurations(d) @@ -122,9 +125,9 @@ func resourceVPCDHCPOptionsCreate(ctx context.Context, d *schema.ResourceData, m func resourceVPCDHCPOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindDHCPOptionsByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -157,7 +160,7 @@ func resourceVPCDHCPOptionsRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "reading EC2 DHCP Options (%s): %s", d.Id(), err) } - SetTagsOut(ctx, opts.Tags) + setTagsOut(ctx, opts.Tags) return diags } @@ -172,7 +175,7 @@ func resourceVPCDHCPOptionsUpdate(ctx context.Context, d *schema.ResourceData, m func resourceVPCDHCPOptionsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcs, err := FindVPCs(ctx, conn, &ec2.DescribeVpcsInput{ Filters: BuildAttributeFilterList(map[string]string{ @@ -270,7 +273,7 @@ func (m *dhcpOptionsMap) dhcpConfigurationsToResourceData(dhcpConfigurations []* return nil } -// resourceDataToNewDhcpConfigurations returns a list of AWS API DHCP configurations from Terraform ResourceData. 
+// resourceDataToDHCPConfigurations returns a list of AWS API DHCP configurations from Terraform ResourceData. func (m *dhcpOptionsMap) resourceDataToDHCPConfigurations(d *schema.ResourceData) ([]*ec2.NewDhcpConfiguration, error) { var output []*ec2.NewDhcpConfiguration diff --git a/internal/service/ec2/vpc_dhcp_options_association.go b/internal/service/ec2/vpc_dhcp_options_association.go index 9032c03de5c..ab44b4e8d41 100644 --- a/internal/service/ec2/vpc_dhcp_options_association.go +++ b/internal/service/ec2/vpc_dhcp_options_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -44,7 +47,7 @@ func ResourceVPCDHCPOptionsAssociation() *schema.Resource { func resourceVPCDHCPOptionsAssociationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) dhcpOptionsID := d.Get("dhcp_options_id").(string) vpcID := d.Get("vpc_id").(string) @@ -68,7 +71,7 @@ func resourceVPCDHCPOptionsAssociationPut(ctx context.Context, d *schema.Resourc func resourceVPCDHCPOptionsAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) dhcpOptionsID, vpcID, err := VPCDHCPOptionsAssociationParseResourceID(d.Id()) @@ -76,7 +79,7 @@ func resourceVPCDHCPOptionsAssociationRead(ctx context.Context, d *schema.Resour return sdkdiag.AppendErrorf(diags, "reading EC2 VPC DHCP Options Set Association (%s): %s", d.Id(), err) } - _, err = tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + _, err = tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return nil, FindVPCDHCPOptionsAssociation(ctx, conn, vpcID, dhcpOptionsID) }, 
d.IsNewResource()) @@ -98,7 +101,7 @@ func resourceVPCDHCPOptionsAssociationRead(ctx context.Context, d *schema.Resour func resourceVPCDHCPOptionsAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) dhcpOptionsID, vpcID, err := VPCDHCPOptionsAssociationParseResourceID(d.Id()) @@ -131,12 +134,12 @@ func resourceVPCDHCPOptionsAssociationDelete(ctx context.Context, d *schema.Reso } func resourceVPCDHCPOptionsAssociationImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpc, err := FindVPCByID(ctx, conn, d.Id()) if err != nil { - return nil, fmt.Errorf("error reading EC2 VPC (%s): %w", d.Id(), err) + return nil, fmt.Errorf("reading EC2 VPC (%s): %w", d.Id(), err) } dhcpOptionsID := aws.StringValue(vpc.DhcpOptionsId) diff --git a/internal/service/ec2/vpc_dhcp_options_association_test.go b/internal/service/ec2/vpc_dhcp_options_association_test.go index 9e428f041e2..98f723b74e9 100644 --- a/internal/service/ec2/vpc_dhcp_options_association_test.go +++ b/internal/service/ec2/vpc_dhcp_options_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -151,7 +154,7 @@ func testAccVPCDHCPOptionsAssociationVPCImportIdFunc(resourceName string) resour func testAccCheckVPCDHCPOptionsAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_dhcp_options_association" { @@ -198,7 +201,7 @@ func testAccCheckVPCDHCPOptionsAssociationExist(ctx context.Context, n string) r return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) return tfec2.FindVPCDHCPOptionsAssociation(ctx, conn, vpcID, dhcpOptionsID) } diff --git a/internal/service/ec2/vpc_dhcp_options_data_source.go b/internal/service/ec2/vpc_dhcp_options_data_source.go index a971e5f8187..358f97d6850 100644 --- a/internal/service/ec2/vpc_dhcp_options_data_source.go +++ b/internal/service/ec2/vpc_dhcp_options_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -70,7 +73,7 @@ func DataSourceVPCDHCPOptions() *schema.Resource { func dataSourceVPCDHCPOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeDhcpOptionsInput{} diff --git a/internal/service/ec2/vpc_dhcp_options_data_source_test.go b/internal/service/ec2/vpc_dhcp_options_data_source_test.go index eb803743744..4c5044e3e80 100644 --- a/internal/service/ec2/vpc_dhcp_options_data_source_test.go +++ b/internal/service/ec2/vpc_dhcp_options_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_dhcp_options_test.go b/internal/service/ec2/vpc_dhcp_options_test.go index 2ad7351c176..06cd182fe04 100644 --- a/internal/service/ec2/vpc_dhcp_options_test.go +++ b/internal/service/ec2/vpc_dhcp_options_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -161,7 +164,7 @@ func TestAccVPCDHCPOptions_disappears(t *testing.T) { func testAccCheckDHCPOptionsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_dhcp_options" { @@ -196,7 +199,7 @@ func testAccCheckDHCPOptionsExists(ctx context.Context, n string, v *ec2.DhcpOpt return fmt.Errorf("No EC2 DHCP Options Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindDHCPOptionsByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_egress_only_internet_gateway.go b/internal/service/ec2/vpc_egress_only_internet_gateway.go index 9a6faed2c54..7cc526b131c 100644 --- a/internal/service/ec2/vpc_egress_only_internet_gateway.go +++ b/internal/service/ec2/vpc_egress_only_internet_gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -47,7 +50,7 @@ func ResourceEgressOnlyInternetGateway() *schema.Resource { func resourceEgressOnlyInternetGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateEgressOnlyInternetGatewayInput{ ClientToken: aws.String(id.UniqueId()), @@ -68,9 +71,9 @@ func resourceEgressOnlyInternetGatewayCreate(ctx context.Context, d *schema.Reso func resourceEgressOnlyInternetGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindEgressOnlyInternetGatewayByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -92,7 +95,7 @@ func resourceEgressOnlyInternetGatewayRead(ctx context.Context, d *schema.Resour d.Set("vpc_id", nil) } - SetTagsOut(ctx, ig.Tags) + setTagsOut(ctx, ig.Tags) return diags } @@ -107,7 +110,7 @@ func resourceEgressOnlyInternetGatewayUpdate(ctx context.Context, d *schema.Reso func resourceEgressOnlyInternetGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 Egress-only Internet Gateway: %s", d.Id()) _, err := conn.DeleteEgressOnlyInternetGatewayWithContext(ctx, &ec2.DeleteEgressOnlyInternetGatewayInput{ diff --git a/internal/service/ec2/vpc_egress_only_internet_gateway_test.go b/internal/service/ec2/vpc_egress_only_internet_gateway_test.go index 
aa19bfef2e9..3ee03c1965f 100644 --- a/internal/service/ec2/vpc_egress_only_internet_gateway_test.go +++ b/internal/service/ec2/vpc_egress_only_internet_gateway_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -91,7 +94,7 @@ func TestAccVPCEgressOnlyInternetGateway_tags(t *testing.T) { func testAccCheckEgressOnlyInternetGatewayDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_egress_only_internet_gateway" { @@ -126,7 +129,7 @@ func testAccCheckEgressOnlyInternetGatewayExists(ctx context.Context, n string, return fmt.Errorf("No EC2 Egress-only Internet Gateway ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindEgressOnlyInternetGatewayByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_endpoint.go b/internal/service/ec2/vpc_endpoint.go index af589794b62..8498dfd618d 100644 --- a/internal/service/ec2/vpc_endpoint.go +++ b/internal/service/ec2/vpc_endpoint.go @@ -1,9 +1,13 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( "context" "fmt" "log" + "regexp" "time" "github.com/aws/aws-sdk-go/aws" @@ -17,6 +21,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -86,6 +91,10 @@ func ResourceVPCEndpoint() *schema.Resource { Optional: true, ValidateFunc: validation.StringInSlice(ec2.DnsRecordIpType_Values(), false), }, + "private_dns_only_for_inbound_resolver_endpoint": { + Type: schema.TypeBool, + Optional: true, + }, }, }, }, @@ -184,7 +193,7 @@ func ResourceVPCEndpoint() *schema.Resource { func resourceVPCEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceName := d.Get("service_name").(string) input := &ec2.CreateVpcEndpointInput{ @@ -197,7 +206,13 @@ func resourceVPCEndpointCreate(ctx context.Context, d *schema.ResourceData, meta } if v, ok := d.GetOk("dns_options"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.DnsOptions = expandDNSOptionsSpecification(v.([]interface{})[0].(map[string]interface{})) + // PrivateDnsOnlyForInboundResolverEndpoint is only supported for services + // that support both gateway and interface endpoints, i.e. S3. 
+ if isAmazonS3VPCEndpoint(serviceName) { + input.DnsOptions = expandDNSOptionsSpecificationWithPrivateDNSOnly(v.([]interface{})[0].(map[string]interface{})) + } else { + input.DnsOptions = expandDNSOptionsSpecification(v.([]interface{})[0].(map[string]interface{})) + } } if v, ok := d.GetOk("ip_address_type"); ok { @@ -208,7 +223,7 @@ func resourceVPCEndpointCreate(ctx context.Context, d *schema.ResourceData, meta policy, err := structure.NormalizeJsonString(v) if err != nil { - return sdkdiag.AppendErrorf(diags, "policy contains invalid JSON: %s", err) + return sdkdiag.AppendFromErr(diags, err) } input.PolicyDocument = aws.String(policy) @@ -228,6 +243,12 @@ func resourceVPCEndpointCreate(ctx context.Context, d *schema.ResourceData, meta output, err := conn.CreateVpcEndpointWithContext(ctx, input) + // Some partitions (e.g. ISO) may not support tag-on-create. + if input.TagSpecifications != nil && errs.IsUnsupportedOperationInPartitionError(conn.PartitionID, err) { + input.TagSpecifications = nil + output, err = conn.CreateVpcEndpointWithContext(ctx, input) + } + if err != nil { return sdkdiag.AppendErrorf(diags, "creating EC2 VPC Endpoint (%s): %s", serviceName, err) } @@ -237,12 +258,26 @@ func resourceVPCEndpointCreate(ctx context.Context, d *schema.ResourceData, meta if d.Get("auto_accept").(bool) && aws.StringValue(vpce.State) == vpcEndpointStatePendingAcceptance { if err := vpcEndpointAccept(ctx, conn, d.Id(), aws.StringValue(vpce.ServiceName), d.Timeout(schema.TimeoutCreate)); err != nil { - return sdkdiag.AppendErrorf(diags, "creating EC2 VPC Endpoint (%s): %s", serviceName, err) + return sdkdiag.AppendFromErr(diags, err) } } if _, err = WaitVPCEndpointAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return sdkdiag.AppendErrorf(diags, "creating EC2 VPC Endpoint (%s): waiting for completion: %s", serviceName, err) + return sdkdiag.AppendErrorf(diags, "waiting for EC2 VPC Endpoint (%s) create: %s", serviceName, err) + } + + // 
For partitions not supporting tag-on-create, attempt tag after create. + if tags := getTagsIn(ctx); input.TagSpecifications == nil && len(tags) > 0 { + err := createTags(ctx, conn, d.Id(), tags) + + // If default tags only, continue. Otherwise, error. + if v, ok := d.GetOk(names.AttrTags); (!ok || len(v.(map[string]interface{})) == 0) && errs.IsUnsupportedOperationInPartitionError(conn.PartitionID, err) { + return append(diags, resourceVPCEndpointRead(ctx, d, meta)...) + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "setting EC2 VPC Endpoint (%s) tags: %s", serviceName, err) + } } return append(diags, resourceVPCEndpointRead(ctx, d, meta)...) @@ -250,7 +285,7 @@ func resourceVPCEndpointCreate(ctx context.Context, d *schema.ResourceData, meta func resourceVPCEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpce, err := FindVPCEndpointByID(ctx, conn, d.Id()) @@ -272,7 +307,6 @@ func resourceVPCEndpointRead(ctx context.Context, d *schema.ResourceData, meta i Resource: fmt.Sprintf("vpc-endpoint/%s", d.Id()), }.String() serviceName := aws.StringValue(vpce.ServiceName) - d.Set("arn", arn) if err := d.Set("dns_entry", flattenDNSEntries(vpce.DnsEntries)); err != nil { return sdkdiag.AppendErrorf(diags, "setting dns_entry: %s", err) @@ -316,40 +350,48 @@ func resourceVPCEndpointRead(ctx context.Context, d *schema.ResourceData, meta i policyToSet, err := verify.SecondJSONUnlessEquivalent(d.Get("policy").(string), aws.StringValue(vpce.PolicyDocument)) if err != nil { - return sdkdiag.AppendErrorf(diags, "while setting policy (%s), encountered: %s", policyToSet, err) + return sdkdiag.AppendFromErr(diags, err) } policyToSet, err = structure.NormalizeJsonString(policyToSet) if err != nil { - return sdkdiag.AppendErrorf(diags, "policy (%s) is invalid JSON: %s", policyToSet, err) + return 
sdkdiag.AppendFromErr(diags, err) } d.Set("policy", policyToSet) - SetTagsOut(ctx, vpce.Tags) + setTagsOut(ctx, vpce.Tags) return diags } func resourceVPCEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("auto_accept") && d.Get("auto_accept").(bool) && d.Get("state").(string) == vpcEndpointStatePendingAcceptance { if err := vpcEndpointAccept(ctx, conn, d.Id(), d.Get("service_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { - return sdkdiag.AppendErrorf(diags, "updating EC2 VPC Endpoint (%s): %s", d.Get("service_name").(string), err) + return sdkdiag.AppendFromErr(diags, err) } } if d.HasChanges("dns_options", "ip_address_type", "policy", "private_dns_enabled", "security_group_ids", "route_table_ids", "subnet_ids") { + privateDNSEnabled := d.Get("private_dns_enabled").(bool) input := &ec2.ModifyVpcEndpointInput{ VpcEndpointId: aws.String(d.Id()), } if d.HasChange("dns_options") { if v, ok := d.GetOk("dns_options"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.DnsOptions = expandDNSOptionsSpecification(v.([]interface{})[0].(map[string]interface{})) + tfMap := v.([]interface{})[0].(map[string]interface{}) + // PrivateDnsOnlyForInboundResolverEndpoint is only supported for services + // that support both gateway and interface endpoints, i.e. S3. 
+ if isAmazonS3VPCEndpoint(d.Get("service_name").(string)) { + input.DnsOptions = expandDNSOptionsSpecificationWithPrivateDNSOnly(tfMap) + } else { + input.DnsOptions = expandDNSOptionsSpecification(tfMap) + } } } @@ -358,7 +400,7 @@ func resourceVPCEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta } if d.HasChange("private_dns_enabled") { - input.PrivateDnsEnabled = aws.Bool(d.Get("private_dns_enabled").(bool)) + input.PrivateDnsEnabled = aws.Bool(privateDNSEnabled) } input.AddRouteTableIds, input.RemoveRouteTableIds = flattenAddAndRemoveStringLists(d, "route_table_ids") @@ -372,7 +414,7 @@ func resourceVPCEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta policy, err := structure.NormalizeJsonString(d.Get("policy")) if err != nil { - return sdkdiag.AppendErrorf(diags, "policy contains invalid JSON: %s", err) + return sdkdiag.AppendFromErr(diags, err) } if policy == "" { @@ -399,7 +441,7 @@ func resourceVPCEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceVPCEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 VPC Endpoint: %s", d.Id()) output, err := conn.DeleteVpcEndpointsWithContext(ctx, &ec2.DeleteVpcEndpointsInput{ @@ -450,6 +492,11 @@ func vpcEndpointAccept(ctx context.Context, conn *ec2.EC2, vpceID, serviceName s return nil } +func isAmazonS3VPCEndpoint(serviceName string) bool { + ok, _ := regexp.MatchString("com\\.amazonaws\\.([a-z]+\\-[a-z]+\\-[0-9])\\.s3", serviceName) + return ok +} + func expandDNSOptionsSpecification(tfMap map[string]interface{}) *ec2.DnsOptionsSpecification { if tfMap == nil { return nil @@ -464,6 +511,24 @@ func expandDNSOptionsSpecification(tfMap map[string]interface{}) *ec2.DnsOptions return apiObject } +func expandDNSOptionsSpecificationWithPrivateDNSOnly(tfMap 
map[string]interface{}) *ec2.DnsOptionsSpecification { + if tfMap == nil { + return nil + } + + apiObject := &ec2.DnsOptionsSpecification{} + + if v, ok := tfMap["dns_record_ip_type"].(string); ok && v != "" { + apiObject.DnsRecordIpType = aws.String(v) + } + + if v, ok := tfMap["private_dns_only_for_inbound_resolver_endpoint"].(bool); ok { + apiObject.PrivateDnsOnlyForInboundResolverEndpoint = aws.Bool(v) + } + + return apiObject +} + func flattenDNSEntry(apiObject *ec2.DnsEntry) map[string]interface{} { if apiObject == nil { return nil @@ -511,6 +576,10 @@ func flattenDNSOptions(apiObject *ec2.DnsOptions) map[string]interface{} { tfMap["dns_record_ip_type"] = aws.StringValue(v) } + if v := apiObject.PrivateDnsOnlyForInboundResolverEndpoint; v != nil { + tfMap["private_dns_only_for_inbound_resolver_endpoint"] = aws.BoolValue(v) + } + return tfMap } diff --git a/internal/service/ec2/vpc_endpoint_connection_accepter.go b/internal/service/ec2/vpc_endpoint_connection_accepter.go index 6f57ba5a6b9..ae00d264ce6 100644 --- a/internal/service/ec2/vpc_endpoint_connection_accepter.go +++ b/internal/service/ec2/vpc_endpoint_connection_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -48,7 +51,7 @@ func ResourceVPCEndpointConnectionAccepter() *schema.Resource { func resourceVPCEndpointConnectionAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceID := d.Get("vpc_endpoint_service_id").(string) vpcEndpointID := d.Get("vpc_endpoint_id").(string) @@ -78,7 +81,7 @@ func resourceVPCEndpointConnectionAccepterCreate(ctx context.Context, d *schema. 
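The `isAmazonS3VPCEndpoint` helper introduced above gates the new `private_dns_only_for_inbound_resolver_endpoint` option on the endpoint's service name. A minimal standalone sketch of that check (the regex pattern is copied from the diff; the sample service names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// isAmazonS3VPCEndpoint reports whether a VPC endpoint service name is the
// regional Amazon S3 service, e.g. "com.amazonaws.us-west-2.s3". The pattern
// matches "com.amazonaws.<region>.s3" where <region> looks like "us-west-2".
func isAmazonS3VPCEndpoint(serviceName string) bool {
	ok, _ := regexp.MatchString(`com\.amazonaws\.([a-z]+\-[a-z]+\-[0-9])\.s3`, serviceName)
	return ok
}

func main() {
	for _, name := range []string{
		"com.amazonaws.us-west-2.s3",    // S3: the inbound-resolver option applies
		"com.amazonaws.us-west-2.ec2",   // not S3
		"com.amazonaws.eu-central-1.s3", // S3 in another region
	} {
		fmt.Printf("%s -> %t\n", name, isAmazonS3VPCEndpoint(name))
	}
}
```

Note that `regexp.MatchString` is unanchored, so any service name containing the `com.amazonaws.<region>.s3` substring would match; the diff relies on real service names being well-formed.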
func resourceVPCEndpointConnectionAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceID, vpcEndpointID, err := VPCEndpointConnectionAccepterParseResourceID(d.Id()) @@ -107,7 +110,7 @@ func resourceVPCEndpointConnectionAccepterRead(ctx context.Context, d *schema.Re func resourceVPCEndpointConnectionAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceID, vpcEndpointID, err := VPCEndpointConnectionAccepterParseResourceID(d.Id()) diff --git a/internal/service/ec2/vpc_endpoint_connection_accepter_test.go b/internal/service/ec2/vpc_endpoint_connection_accepter_test.go index 26abbc39e07..449c8572566 100644 --- a/internal/service/ec2/vpc_endpoint_connection_accepter_test.go +++ b/internal/service/ec2/vpc_endpoint_connection_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -47,7 +50,7 @@ func TestAccVPCEndpointConnectionAccepter_crossAccount(t *testing.T) { func testAccCheckVPCEndpointConnectionAccepterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_connection_accepter" { diff --git a/internal/service/ec2/vpc_endpoint_connection_notification.go b/internal/service/ec2/vpc_endpoint_connection_notification.go index f2802ec58b8..48c65a93940 100644 --- a/internal/service/ec2/vpc_endpoint_connection_notification.go +++ b/internal/service/ec2/vpc_endpoint_connection_notification.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -66,7 +69,7 @@ func ResourceVPCEndpointConnectionNotification() *schema.Resource { func resourceVPCEndpointConnectionNotificationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateVpcEndpointConnectionNotificationInput{ ConnectionEvents: flex.ExpandStringSet(d.Get("connection_events").(*schema.Set)), @@ -94,7 +97,7 @@ func resourceVPCEndpointConnectionNotificationCreate(ctx context.Context, d *sch func resourceVPCEndpointConnectionNotificationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) cn, err := FindVPCConnectionNotificationByID(ctx, conn, d.Id()) @@ -120,7 +123,7 @@ func resourceVPCEndpointConnectionNotificationRead(ctx context.Context, d *schem func resourceVPCEndpointConnectionNotificationUpdate(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.ModifyVpcEndpointConnectionNotificationInput{ ConnectionNotificationId: aws.String(d.Id()), @@ -145,7 +148,7 @@ func resourceVPCEndpointConnectionNotificationUpdate(ctx context.Context, d *sch func resourceVPCEndpointConnectionNotificationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 VPC Endpoint Connection Notification: %s", d.Id()) _, err := conn.DeleteVpcEndpointConnectionNotificationsWithContext(ctx, &ec2.DeleteVpcEndpointConnectionNotificationsInput{ diff --git a/internal/service/ec2/vpc_endpoint_connection_notification_test.go b/internal/service/ec2/vpc_endpoint_connection_notification_test.go index 2c66d6f6dae..f31632f3591 100644 --- a/internal/service/ec2/vpc_endpoint_connection_notification_test.go +++ b/internal/service/ec2/vpc_endpoint_connection_notification_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -55,7 +58,7 @@ func TestAccVPCEndpointConnectionNotification_basic(t *testing.T) { func testAccCheckVPCEndpointConnectionNotificationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_connection_notification" { @@ -90,7 +93,7 @@ func testAccCheckVPCEndpointConnectionNotificationExists(ctx context.Context, n return fmt.Errorf("No EC2 VPC Endpoint Connection Notification ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := tfec2.FindVPCConnectionNotificationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_endpoint_data_source.go b/internal/service/ec2/vpc_endpoint_data_source.go index b9d49c9b7e2..88fbb02a798 100644 --- a/internal/service/ec2/vpc_endpoint_data_source.go +++ b/internal/service/ec2/vpc_endpoint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -140,7 +143,7 @@ func DataSourceVPCEndpoint() *schema.Resource { func dataSourceVPCEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeVpcEndpointsInput{ diff --git a/internal/service/ec2/vpc_endpoint_data_source_test.go b/internal/service/ec2/vpc_endpoint_data_source_test.go index af8b7d2ab93..5a4391c88a1 100644 --- a/internal/service/ec2/vpc_endpoint_data_source_test.go +++ b/internal/service/ec2/vpc_endpoint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_endpoint_policy.go b/internal/service/ec2/vpc_endpoint_policy.go index 1cba93aeda3..f1cdaea7b3e 100644 --- a/internal/service/ec2/vpc_endpoint_policy.go +++ b/internal/service/ec2/vpc_endpoint_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -57,7 +60,7 @@ func ResourceVPCEndpointPolicy() *schema.Resource { func resourceVPCEndpointPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) req := &ec2.ModifyVpcEndpointInput{ @@ -77,7 +80,7 @@ func resourceVPCEndpointPolicyPut(ctx context.Context, d *schema.ResourceData, m log.Printf("[DEBUG] Updating VPC Endpoint Policy: %#v", req) if _, err := conn.ModifyVpcEndpointWithContext(ctx, req); err != nil { - return sdkdiag.AppendErrorf(diags, "Error updating VPC Endpoint Policy: %s", err) + return sdkdiag.AppendErrorf(diags, "updating VPC Endpoint Policy: %s", err) } d.SetId(endpointID) @@ -92,7 +95,7 @@ func resourceVPCEndpointPolicyPut(ctx context.Context, d *schema.ResourceData, m func resourceVPCEndpointPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpce, err := FindVPCEndpointByID(ctx, conn, d.Id()) @@ -126,7 +129,7 @@ func resourceVPCEndpointPolicyRead(ctx context.Context, d *schema.ResourceData, func resourceVPCEndpointPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) req := &ec2.ModifyVpcEndpointInput{ VpcEndpointId: aws.String(d.Id()), @@ -135,7 +138,7 @@ func resourceVPCEndpointPolicyDelete(ctx context.Context, d *schema.ResourceData log.Printf("[DEBUG] Resetting VPC Endpoint Policy: %#v", req) if _, err := conn.ModifyVpcEndpointWithContext(ctx, req); err != nil { - return sdkdiag.AppendErrorf(diags, "Error Resetting VPC Endpoint Policy: %s", err) + return 
sdkdiag.AppendErrorf(diags, "Resetting VPC Endpoint Policy: %s", err) } _, err := WaitVPCEndpointAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)) diff --git a/internal/service/ec2/vpc_endpoint_policy_test.go b/internal/service/ec2/vpc_endpoint_policy_test.go index 2c31a75fd3c..8c3a78ecb46 100644 --- a/internal/service/ec2/vpc_endpoint_policy_test.go +++ b/internal/service/ec2/vpc_endpoint_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_endpoint_route_table_association.go b/internal/service/ec2/vpc_endpoint_route_table_association.go index 03dc85761e8..46a60d78323 100644 --- a/internal/service/ec2/vpc_endpoint_route_table_association.go +++ b/internal/service/ec2/vpc_endpoint_route_table_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -43,7 +46,7 @@ func ResourceVPCEndpointRouteTableAssociation() *schema.Resource { func resourceVPCEndpointRouteTableAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) routeTableID := d.Get("route_table_id").(string) @@ -74,14 +77,14 @@ func resourceVPCEndpointRouteTableAssociationCreate(ctx context.Context, d *sche func resourceVPCEndpointRouteTableAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) routeTableID := d.Get("route_table_id").(string) // Human friendly ID for error messages since d.Id() is non-descriptive id := fmt.Sprintf("%s/%s", endpointID, routeTableID) - _, err := 
tfresource.RetryWhenNewResourceNotFound(ctx, RouteTableAssociationPropagationTimeout, func() (interface{}, error) { + _, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return nil, FindVPCEndpointRouteTableAssociationExists(ctx, conn, endpointID, routeTableID) }, d.IsNewResource()) @@ -100,7 +103,7 @@ func resourceVPCEndpointRouteTableAssociationRead(ctx context.Context, d *schema func resourceVPCEndpointRouteTableAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) routeTableID := d.Get("route_table_id").(string) diff --git a/internal/service/ec2/vpc_endpoint_route_table_association_test.go b/internal/service/ec2/vpc_endpoint_route_table_association_test.go index 99dc4d29be6..7d1f9d8abcc 100644 --- a/internal/service/ec2/vpc_endpoint_route_table_association_test.go +++ b/internal/service/ec2/vpc_endpoint_route_table_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -67,7 +70,7 @@ func TestAccVPCEndpointRouteTableAssociation_disappears(t *testing.T) { func testAccCheckVPCEndpointRouteTableAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_route_table_association" { @@ -102,7 +105,7 @@ func testAccCheckVPCEndpointRouteTableAssociationExists(ctx context.Context, n s return fmt.Errorf("No VPC Endpoint Route Table Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) return tfec2.FindVPCEndpointRouteTableAssociationExists(ctx, conn, rs.Primary.Attributes["vpc_endpoint_id"], rs.Primary.Attributes["route_table_id"]) } diff --git a/internal/service/ec2/vpc_endpoint_security_group_association.go b/internal/service/ec2/vpc_endpoint_security_group_association.go index b456542c5e5..f4592491616 100644 --- a/internal/service/ec2/vpc_endpoint_security_group_association.go +++ b/internal/service/ec2/vpc_endpoint_security_group_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -45,7 +48,7 @@ func ResourceVPCEndpointSecurityGroupAssociation() *schema.Resource { func resourceVPCEndpointSecurityGroupAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcEndpointID := d.Get("vpc_endpoint_id").(string) securityGroupID := d.Get("security_group_id").(string) @@ -107,7 +110,7 @@ func resourceVPCEndpointSecurityGroupAssociationCreate(ctx context.Context, d *s func resourceVPCEndpointSecurityGroupAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcEndpointID := d.Get("vpc_endpoint_id").(string) securityGroupID := d.Get("security_group_id").(string) @@ -131,7 +134,7 @@ func resourceVPCEndpointSecurityGroupAssociationRead(ctx context.Context, d *sch func resourceVPCEndpointSecurityGroupAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcEndpointID := d.Get("vpc_endpoint_id").(string) securityGroupID := d.Get("security_group_id").(string) diff --git a/internal/service/ec2/vpc_endpoint_security_group_association_test.go b/internal/service/ec2/vpc_endpoint_security_group_association_test.go index 6ba2f019420..1d008861d32 100644 --- a/internal/service/ec2/vpc_endpoint_security_group_association_test.go +++ b/internal/service/ec2/vpc_endpoint_security_group_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -114,7 +117,7 @@ func TestAccVPCEndpointSecurityGroupAssociation_replaceDefaultAssociation(t *tes func testAccCheckVPCEndpointSecurityGroupAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_security_group_association" { @@ -149,7 +152,7 @@ func testAccCheckVPCEndpointSecurityGroupAssociationExists(ctx context.Context, return fmt.Errorf("No VPC Endpoint Security Group Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindVPCEndpointByID(ctx, conn, rs.Primary.Attributes["vpc_endpoint_id"]) diff --git a/internal/service/ec2/vpc_endpoint_service.go b/internal/service/ec2/vpc_endpoint_service.go index ab199246182..39123da011b 100644 --- a/internal/service/ec2/vpc_endpoint_service.go +++ b/internal/service/ec2/vpc_endpoint_service.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -148,7 +151,7 @@ func ResourceVPCEndpointService() *schema.Resource { func resourceVPCEndpointServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateVpcEndpointServiceConfigurationInput{ AcceptanceRequired: aws.Bool(d.Get("acceptance_required").(bool)), @@ -200,7 +203,7 @@ func resourceVPCEndpointServiceCreate(ctx context.Context, d *schema.ResourceDat func resourceVPCEndpointServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) svcCfg, err := FindVPCEndpointServiceConfigurationByID(ctx, conn, d.Id()) @@ -246,7 +249,7 @@ func resourceVPCEndpointServiceRead(ctx context.Context, d *schema.ResourceData, d.Set("state", svcCfg.ServiceState) d.Set("supported_ip_address_types", aws.StringValueSlice(svcCfg.SupportedIpAddressTypes)) - SetTagsOut(ctx, svcCfg.Tags) + setTagsOut(ctx, svcCfg.Tags) allowedPrincipals, err := FindVPCEndpointServicePermissionsByServiceID(ctx, conn, d.Id()) @@ -261,7 +264,7 @@ func resourceVPCEndpointServiceRead(ctx context.Context, d *schema.ResourceData, func resourceVPCEndpointServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChanges("acceptance_required", "gateway_load_balancer_arns", "network_load_balancer_arns", "private_dns_name", "supported_ip_address_types") { input := &ec2.ModifyVpcEndpointServiceConfigurationInput{ @@ -310,7 +313,7 @@ func resourceVPCEndpointServiceUpdate(ctx context.Context, d *schema.ResourceDat func resourceVPCEndpointServiceDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 VPC Endpoint Service: %s", d.Id()) output, err := conn.DeleteVpcEndpointServiceConfigurationsWithContext(ctx, &ec2.DeleteVpcEndpointServiceConfigurationsInput{ diff --git a/internal/service/ec2/vpc_endpoint_service_allowed_principal.go b/internal/service/ec2/vpc_endpoint_service_allowed_principal.go index 48949722b45..2ae534a2be2 100644 --- a/internal/service/ec2/vpc_endpoint_service_allowed_principal.go +++ b/internal/service/ec2/vpc_endpoint_service_allowed_principal.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -38,7 +41,7 @@ func ResourceVPCEndpointServiceAllowedPrincipal() *schema.Resource { func resourceVPCEndpointServiceAllowedPrincipalCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceID := d.Get("vpc_endpoint_service_id").(string) principalARN := d.Get("principal_arn").(string) @@ -63,7 +66,7 @@ func resourceVPCEndpointServiceAllowedPrincipalCreate(ctx context.Context, d *sc func resourceVPCEndpointServiceAllowedPrincipalRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceID := d.Get("vpc_endpoint_service_id").(string) principalARN := d.Get("principal_arn").(string) @@ -87,7 +90,7 @@ func resourceVPCEndpointServiceAllowedPrincipalRead(ctx context.Context, d *sche func resourceVPCEndpointServiceAllowedPrincipalDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() 
+ conn := meta.(*conns.AWSClient).EC2Conn(ctx) serviceID := d.Get("vpc_endpoint_service_id").(string) principalARN := d.Get("principal_arn").(string) diff --git a/internal/service/ec2/vpc_endpoint_service_allowed_principal_test.go b/internal/service/ec2/vpc_endpoint_service_allowed_principal_test.go index 2aa40c650f8..3c6a09120e5 100644 --- a/internal/service/ec2/vpc_endpoint_service_allowed_principal_test.go +++ b/internal/service/ec2/vpc_endpoint_service_allowed_principal_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -168,7 +171,7 @@ func TestAccVPCEndpointServiceAllowedPrincipal_migrateAndTag(t *testing.T) { func testAccCheckVPCEndpointServiceAllowedPrincipalDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_service_allowed_principal" { @@ -203,7 +206,7 @@ func testAccCheckVPCEndpointServiceAllowedPrincipalExists(ctx context.Context, n return fmt.Errorf("No EC2 VPC Endpoint Service Allowed Principal ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := tfec2.FindVPCEndpointServicePermission(ctx, conn, rs.Primary.Attributes["vpc_endpoint_service_id"], rs.Primary.Attributes["principal_arn"]) diff --git a/internal/service/ec2/vpc_endpoint_service_data_source.go b/internal/service/ec2/vpc_endpoint_service_data_source.go index ac31d05af51..46741d21159 100644 --- a/internal/service/ec2/vpc_endpoint_service_data_source.go +++ b/internal/service/ec2/vpc_endpoint_service_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -96,7 +99,7 @@ func DataSourceVPCEndpointService() *schema.Resource { func dataSourceVPCEndpointServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeVpcEndpointServicesInput{ diff --git a/internal/service/ec2/vpc_endpoint_service_data_source_test.go b/internal/service/ec2/vpc_endpoint_service_data_source_test.go index fdb84bdb094..19d9b1854e0 100644 --- a/internal/service/ec2/vpc_endpoint_service_data_source_test.go +++ b/internal/service/ec2/vpc_endpoint_service_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -26,14 +29,14 @@ func TestAccVPCEndpointServiceDataSource_gateway(t *testing.T) { Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr(datasourceName, "acceptance_required", "false"), acctest.MatchResourceAttrRegionalARN(datasourceName, "arn", "ec2", regexp.MustCompile(`vpc-endpoint-service/vpce-svc-.+`)), - acctest.CheckResourceAttrGreaterThanValue(datasourceName, "availability_zones.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(datasourceName, "availability_zones.#", 0), resource.TestCheckResourceAttr(datasourceName, "base_endpoint_dns_names.#", "1"), resource.TestCheckResourceAttr(datasourceName, "manages_vpc_endpoints", "false"), resource.TestCheckResourceAttr(datasourceName, "owner", "amazon"), resource.TestCheckResourceAttr(datasourceName, "private_dns_name", ""), testAccCheckResourceAttrRegionalReverseDNSService(datasourceName, "service_name", "dynamodb"), resource.TestCheckResourceAttr(datasourceName, "service_type", "Gateway"), - acctest.CheckResourceAttrGreaterThanValue(datasourceName, "supported_ip_address_types.#", "0"), + 
acctest.CheckResourceAttrGreaterThanValue(datasourceName, "supported_ip_address_types.#", 0), resource.TestCheckResourceAttr(datasourceName, "tags.%", "0"), resource.TestCheckResourceAttr(datasourceName, "vpc_endpoint_policy_supported", "true"), ), @@ -56,14 +59,14 @@ func TestAccVPCEndpointServiceDataSource_interface(t *testing.T) { Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr(datasourceName, "acceptance_required", "false"), acctest.MatchResourceAttrRegionalARN(datasourceName, "arn", "ec2", regexp.MustCompile(`vpc-endpoint-service/vpce-svc-.+`)), - acctest.CheckResourceAttrGreaterThanValue(datasourceName, "availability_zones.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(datasourceName, "availability_zones.#", 0), resource.TestCheckResourceAttr(datasourceName, "base_endpoint_dns_names.#", "1"), resource.TestCheckResourceAttr(datasourceName, "manages_vpc_endpoints", "false"), resource.TestCheckResourceAttr(datasourceName, "owner", "amazon"), acctest.CheckResourceAttrRegionalHostnameService(datasourceName, "private_dns_name", "ec2"), testAccCheckResourceAttrRegionalReverseDNSService(datasourceName, "service_name", "ec2"), resource.TestCheckResourceAttr(datasourceName, "service_type", "Interface"), - acctest.CheckResourceAttrGreaterThanValue(datasourceName, "supported_ip_address_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(datasourceName, "supported_ip_address_types.#", 0), resource.TestCheckResourceAttr(datasourceName, "tags.%", "0"), resource.TestCheckResourceAttr(datasourceName, "vpc_endpoint_policy_supported", "true"), ), diff --git a/internal/service/ec2/vpc_endpoint_service_test.go b/internal/service/ec2/vpc_endpoint_service_test.go index 4a6256575a4..9c34d8858df 100644 --- a/internal/service/ec2/vpc_endpoint_service_test.go +++ b/internal/service/ec2/vpc_endpoint_service_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -35,8 +38,8 @@ func TestAccVPCEndpointService_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "acceptance_required", "false"), resource.TestCheckResourceAttr(resourceName, "allowed_principals.#", "0"), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "ec2", regexp.MustCompile(`vpc-endpoint-service/vpce-svc-.+`)), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "availability_zones.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "base_endpoint_dns_names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "availability_zones.#", 0), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "base_endpoint_dns_names.#", 0), resource.TestCheckResourceAttr(resourceName, "gateway_load_balancer_arns.#", "0"), resource.TestCheckResourceAttr(resourceName, "manages_vpc_endpoints", "false"), resource.TestCheckResourceAttr(resourceName, "network_load_balancer_arns.#", "1"), @@ -321,7 +324,7 @@ func TestAccVPCEndpointService_privateDNSName(t *testing.T) { func testAccCheckVPCEndpointServiceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_service" { @@ -356,7 +359,7 @@ func testAccCheckVPCEndpointServiceExists(ctx context.Context, n string, v *ec2. 
return fmt.Errorf("No EC2 VPC Endpoint Service ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindVPCEndpointServiceConfigurationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_endpoint_subnet_association.go b/internal/service/ec2/vpc_endpoint_subnet_association.go index 7fcf2999467..1a2b190e8fb 100644 --- a/internal/service/ec2/vpc_endpoint_subnet_association.go +++ b/internal/service/ec2/vpc_endpoint_subnet_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -50,7 +53,7 @@ func ResourceVPCEndpointSubnetAssociation() *schema.Resource { func resourceVPCEndpointSubnetAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) subnetID := d.Get("subnet_id").(string) @@ -99,7 +102,7 @@ func resourceVPCEndpointSubnetAssociationCreate(ctx context.Context, d *schema.R func resourceVPCEndpointSubnetAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) subnetID := d.Get("subnet_id").(string) @@ -123,7 +126,7 @@ func resourceVPCEndpointSubnetAssociationRead(ctx context.Context, d *schema.Res func resourceVPCEndpointSubnetAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("vpc_endpoint_id").(string) subnetID := d.Get("subnet_id").(string) diff --git 
a/internal/service/ec2/vpc_endpoint_subnet_association_test.go b/internal/service/ec2/vpc_endpoint_subnet_association_test.go index 9deac9879ec..7b4d748b2dd 100644 --- a/internal/service/ec2/vpc_endpoint_subnet_association_test.go +++ b/internal/service/ec2/vpc_endpoint_subnet_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -95,7 +98,7 @@ func TestAccVPCEndpointSubnetAssociation_multiple(t *testing.T) { func testAccCheckVPCEndpointSubnetAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint_subnet_association" { @@ -130,7 +133,7 @@ func testAccCheckVPCEndpointSubnetAssociationExists(ctx context.Context, n strin return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) out, err := tfec2.FindVPCEndpointByID(ctx, conn, rs.Primary.Attributes["vpc_endpoint_id"]) diff --git a/internal/service/ec2/vpc_endpoint_test.go b/internal/service/ec2/vpc_endpoint_test.go index ecf6e70e93f..2fdf098a153 100644 --- a/internal/service/ec2/vpc_endpoint_test.go +++ b/internal/service/ec2/vpc_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -33,7 +36,7 @@ func TestAccVPCEndpoint_gatewayBasic(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckVPCEndpointExists(ctx, resourceName, &endpoint), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "ec2", regexp.MustCompile(`vpc-endpoint/vpce-.+`)), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "cidr_blocks.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "cidr_blocks.#", 0), resource.TestCheckResourceAttr(resourceName, "dns_entry.#", "0"), resource.TestCheckResourceAttr(resourceName, "dns_options.#", "0"), resource.TestCheckResourceAttr(resourceName, "ip_address_type", ""), @@ -80,6 +83,7 @@ func TestAccVPCEndpoint_interfaceBasic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "dns_entry.#", "0"), resource.TestCheckResourceAttr(resourceName, "dns_options.#", "1"), resource.TestCheckResourceAttr(resourceName, "dns_options.0.dns_record_ip_type", "ipv4"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.private_dns_only_for_inbound_resolver_endpoint", "false"), resource.TestCheckResourceAttr(resourceName, "ip_address_type", "ipv4"), resource.TestCheckResourceAttr(resourceName, "network_interface_ids.#", "0"), acctest.CheckResourceAttrAccountID(resourceName, "owner_id"), @@ -103,6 +107,84 @@ func TestAccVPCEndpoint_interfaceBasic(t *testing.T) { }) } +func TestAccVPCEndpoint_interfacePrivateDNS(t *testing.T) { + ctx := acctest.Context(t) + var endpoint ec2.VpcEndpoint + resourceName := "aws_vpc_endpoint.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
testAccVPCEndpointConfig_interfacePrivateDNS(rName, true), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &endpoint), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "cidr_blocks.#", 0), + resource.TestCheckResourceAttr(resourceName, "dns_entry.#", "0"), + resource.TestCheckResourceAttr(resourceName, "dns_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.dns_record_ip_type", "ipv4"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.private_dns_only_for_inbound_resolver_endpoint", "true"), + resource.TestCheckResourceAttr(resourceName, "private_dns_enabled", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccVPCEndpointConfig_interfacePrivateDNS(rName, false), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &endpoint), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "cidr_blocks.#", 0), + resource.TestCheckResourceAttr(resourceName, "dns_entry.#", "0"), + resource.TestCheckResourceAttr(resourceName, "dns_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.dns_record_ip_type", "ipv4"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.private_dns_only_for_inbound_resolver_endpoint", "false"), + resource.TestCheckResourceAttr(resourceName, "private_dns_enabled", "true"), + ), + }, + }, + }) +} + +func TestAccVPCEndpoint_interfacePrivateDNSNoGateway(t *testing.T) { + ctx := acctest.Context(t) + var endpoint ec2.VpcEndpoint + resourceName := "aws_vpc_endpoint.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + 
Steps: []resource.TestStep{ + { + Config: testAccVPCEndpointConfig_interfacePrivateDNSNoGateway(rName, false), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &endpoint), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "cidr_blocks.#", 0), + resource.TestCheckResourceAttr(resourceName, "dns_entry.#", "0"), + resource.TestCheckResourceAttr(resourceName, "dns_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.dns_record_ip_type", "ipv4"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.private_dns_only_for_inbound_resolver_endpoint", "false"), + resource.TestCheckResourceAttr(resourceName, "private_dns_enabled", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccVPCEndpoint_disappears(t *testing.T) { ctx := acctest.Context(t) var endpoint ec2.VpcEndpoint @@ -323,6 +405,7 @@ func TestAccVPCEndpoint_ipAddressType(t *testing.T) { testAccCheckVPCEndpointExists(ctx, resourceName, &endpoint), resource.TestCheckResourceAttr(resourceName, "dns_options.#", "1"), resource.TestCheckResourceAttr(resourceName, "dns_options.0.dns_record_ip_type", "ipv4"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.private_dns_only_for_inbound_resolver_endpoint", "false"), resource.TestCheckResourceAttr(resourceName, "ip_address_type", "ipv4"), ), }, @@ -338,6 +421,7 @@ func TestAccVPCEndpoint_ipAddressType(t *testing.T) { testAccCheckVPCEndpointExists(ctx, resourceName, &endpoint), resource.TestCheckResourceAttr(resourceName, "dns_options.#", "1"), resource.TestCheckResourceAttr(resourceName, "dns_options.0.dns_record_ip_type", "dualstack"), + resource.TestCheckResourceAttr(resourceName, "dns_options.0.private_dns_only_for_inbound_resolver_endpoint", "false"), resource.TestCheckResourceAttr(resourceName, "ip_address_type", "dualstack"), ), }, @@ -480,7 +564,7 @@ func 
TestAccVPCEndpoint_VPCEndpointType_gatewayLoadBalancer(t *testing.T) { func testAccCheckVPCEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_endpoint" { @@ -513,7 +597,7 @@ func testAccCheckVPCEndpointExists(ctx context.Context, n string, v *ec2.VpcEndp return fmt.Errorf("No EC2 VPC Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindVPCEndpointByID(ctx, conn, rs.Primary.ID) @@ -694,6 +778,84 @@ resource "aws_vpc_endpoint" "test" { `, rName) } +func testAccVPCEndpointConfig_interfacePrivateDNS(rName string, privateDNSOnlyForInboundResolverEndpoint bool) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + enable_dns_support = true + enable_dns_hostnames = true + + tags = { + Name = %[1]q + } +} + +data "aws_region" "current" {} + +resource "aws_vpc_endpoint" "gateway" { + vpc_id = aws_vpc.test.id + service_name = "com.amazonaws.${data.aws_region.current.name}.s3" + + tags = { + Name = %[1]q + } +} + +resource "aws_vpc_endpoint" "test" { + vpc_id = aws_vpc.test.id + service_name = "com.amazonaws.${data.aws_region.current.name}.s3" + private_dns_enabled = true + vpc_endpoint_type = "Interface" + ip_address_type = "ipv4" + + dns_options { + dns_record_ip_type = "ipv4" + private_dns_only_for_inbound_resolver_endpoint = %[2]t + } + + tags = { + Name = %[1]q + } + + # To set PrivateDnsOnlyForInboundResolverEndpoint to true, the VPC vpc-abcd1234 must have a Gateway endpoint for the service. 
+ depends_on = [aws_vpc_endpoint.gateway] +} +`, rName, privateDNSOnlyForInboundResolverEndpoint) +} + +func testAccVPCEndpointConfig_interfacePrivateDNSNoGateway(rName string, privateDNSOnlyForInboundResolverEndpoint bool) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + enable_dns_support = true + enable_dns_hostnames = true + + tags = { + Name = %[1]q + } +} + +data "aws_region" "current" {} + +resource "aws_vpc_endpoint" "test" { + vpc_id = aws_vpc.test.id + service_name = "com.amazonaws.${data.aws_region.current.name}.s3" + private_dns_enabled = true + vpc_endpoint_type = "Interface" + ip_address_type = "ipv4" + + dns_options { + dns_record_ip_type = "ipv4" + private_dns_only_for_inbound_resolver_endpoint = %[2]t + } + + tags = { + Name = %[1]q + } +} +`, rName, privateDNSOnlyForInboundResolverEndpoint) +} + func testAccVPCEndpointConfig_ipAddressType(rName, addressType string) string { return acctest.ConfigCompose(testAccVPCEndpointServiceConfig_baseSupportedIPAddressTypes(rName), fmt.Sprintf(` resource "aws_vpc_endpoint_service" "test" { @@ -717,6 +879,10 @@ resource "aws_vpc_endpoint" "test" { dns_options { dns_record_ip_type = %[2]q } + + tags = { + Name = %[1]q + } } `, rName, addressType)) } diff --git a/internal/service/ec2/vpc_flow_log.go b/internal/service/ec2/vpc_flow_log.go index cd518302dd5..11c3511b5b4 100644 --- a/internal/service/ec2/vpc_flow_log.go +++ b/internal/service/ec2/vpc_flow_log.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -164,7 +167,7 @@ func ResourceFlowLog() *schema.Resource { func resourceLogFlowCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) var resourceID string var resourceType string @@ -242,7 +245,7 @@ func resourceLogFlowCreate(ctx context.Context, d *schema.ResourceData, meta int input.MaxAggregationInterval = aws.Int64(int64(v.(int))) } - outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenAWSErrMessageContains(ctx, iamPropagationTimeout, func() (interface{}, error) { return conn.CreateFlowLogsWithContext(ctx, input) }, errCodeInvalidParameter, "Unable to assume given IAM role") @@ -261,7 +264,7 @@ func resourceLogFlowCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceLogFlowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) fl, err := FindFlowLogByID(ctx, conn, d.Id()) @@ -315,7 +318,7 @@ func resourceLogFlowRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("traffic_type", fl.TrafficType) } - SetTagsOut(ctx, fl.Tags) + setTagsOut(ctx, fl.Tags) return diags } @@ -330,7 +333,7 @@ func resourceLogFlowUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceLogFlowDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting Flow Log: %s", d.Id()) output, err := conn.DeleteFlowLogsWithContext(ctx, &ec2.DeleteFlowLogsInput{ diff --git a/internal/service/ec2/vpc_flow_log_test.go 
b/internal/service/ec2/vpc_flow_log_test.go index c26f36db716..0127a41250c 100644 --- a/internal/service/ec2/vpc_flow_log_test.go +++ b/internal/service/ec2/vpc_flow_log_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -582,7 +585,7 @@ func testAccCheckFlowLogExists(ctx context.Context, n string, v *ec2.FlowLog) re return fmt.Errorf("No Flow Log ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindFlowLogByID(ctx, conn, rs.Primary.ID) @@ -598,7 +601,7 @@ func testAccCheckFlowLogExists(ctx context.Context, n string, v *ec2.FlowLog) re func testAccCheckFlowLogDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_flow_log" { diff --git a/internal/service/ec2/vpc_internet_gateway.go b/internal/service/ec2/vpc_internet_gateway.go index f9ae2c6f73b..9a46a52631a 100644 --- a/internal/service/ec2/vpc_internet_gateway.go +++ b/internal/service/ec2/vpc_internet_gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -63,7 +66,7 @@ func ResourceInternetGateway() *schema.Resource { func resourceInternetGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateInternetGatewayInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeInternetGateway), @@ -89,9 +92,9 @@ func resourceInternetGatewayCreate(ctx context.Context, d *schema.ResourceData, func resourceInternetGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindInternetGatewayByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -124,14 +127,14 @@ func resourceInternetGatewayRead(ctx context.Context, d *schema.ResourceData, me d.Set("vpc_id", ig.Attachments[0].VpcId) } - SetTagsOut(ctx, ig.Tags) + setTagsOut(ctx, ig.Tags) return diags } func resourceInternetGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("vpc_id") { o, n := d.GetChange("vpc_id") @@ -154,7 +157,7 @@ func resourceInternetGatewayUpdate(ctx context.Context, d *schema.ResourceData, func resourceInternetGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // Detach if it is attached. 
if v, ok := d.GetOk("vpc_id"); ok { @@ -195,13 +198,13 @@ func attachInternetGateway(ctx context.Context, conn *ec2.EC2, internetGatewayID }, errCodeInvalidInternetGatewayIDNotFound) if err != nil { - return fmt.Errorf("error attaching EC2 Internet Gateway (%s) to VPC (%s): %w", internetGatewayID, vpcID, err) + return fmt.Errorf("attaching EC2 Internet Gateway (%s) to VPC (%s): %w", internetGatewayID, vpcID, err) } _, err = WaitInternetGatewayAttached(ctx, conn, internetGatewayID, vpcID, timeout) if err != nil { - return fmt.Errorf("error waiting for EC2 Internet Gateway (%s) to attach to VPC (%s): %w", internetGatewayID, vpcID, err) + return fmt.Errorf("waiting for EC2 Internet Gateway (%s) to attach to VPC (%s): %w", internetGatewayID, vpcID, err) } return nil @@ -223,13 +226,13 @@ func detachInternetGateway(ctx context.Context, conn *ec2.EC2, internetGatewayID } if err != nil { - return fmt.Errorf("error detaching EC2 Internet Gateway (%s) from VPC (%s): %w", internetGatewayID, vpcID, err) + return fmt.Errorf("detaching EC2 Internet Gateway (%s) from VPC (%s): %w", internetGatewayID, vpcID, err) } _, err = WaitInternetGatewayDetached(ctx, conn, internetGatewayID, vpcID, timeout) if err != nil { - return fmt.Errorf("error waiting for EC2 Internet Gateway (%s) to detach from VPC (%s): %w", internetGatewayID, vpcID, err) + return fmt.Errorf("waiting for EC2 Internet Gateway (%s) to detach from VPC (%s): %w", internetGatewayID, vpcID, err) } return nil diff --git a/internal/service/ec2/vpc_internet_gateway_attachment.go b/internal/service/ec2/vpc_internet_gateway_attachment.go index 1fa2706197a..539d62945a3 100644 --- a/internal/service/ec2/vpc_internet_gateway_attachment.go +++ b/internal/service/ec2/vpc_internet_gateway_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -48,7 +51,7 @@ func ResourceInternetGatewayAttachment() *schema.Resource { func resourceInternetGatewayAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) igwID := d.Get("internet_gateway_id").(string) vpcID := d.Get("vpc_id").(string) @@ -64,7 +67,7 @@ func resourceInternetGatewayAttachmentCreate(ctx context.Context, d *schema.Reso func resourceInternetGatewayAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) igwID, vpcID, err := InternetGatewayAttachmentParseResourceID(d.Id()) @@ -72,7 +75,7 @@ func resourceInternetGatewayAttachmentRead(ctx context.Context, d *schema.Resour return sdkdiag.AppendErrorf(diags, "reading EC2 Internet Gateway Attachment (%s): %s", d.Id(), err) } - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindInternetGatewayAttachment(ctx, conn, igwID, vpcID) }, d.IsNewResource()) @@ -96,7 +99,7 @@ func resourceInternetGatewayAttachmentRead(ctx context.Context, d *schema.Resour func resourceInternetGatewayAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) igwID, vpcID, err := InternetGatewayAttachmentParseResourceID(d.Id()) if err != nil { diff --git a/internal/service/ec2/vpc_internet_gateway_attachment_test.go b/internal/service/ec2/vpc_internet_gateway_attachment_test.go index 15f18946c5f..83ceb47730f 100644 
--- a/internal/service/ec2/vpc_internet_gateway_attachment_test.go +++ b/internal/service/ec2/vpc_internet_gateway_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -72,7 +75,7 @@ func TestAccVPCInternetGatewayAttachment_disappears(t *testing.T) { func testAccCheckInternetGatewayAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_internet_gateway_attachment" { @@ -113,7 +116,7 @@ func testAccCheckInternetGatewayAttachmentExists(ctx context.Context, n string, return fmt.Errorf("No EC2 Internet Gateway Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) igwID, vpcID, err := tfec2.InternetGatewayAttachmentParseResourceID(rs.Primary.ID) diff --git a/internal/service/ec2/vpc_internet_gateway_data_source.go b/internal/service/ec2/vpc_internet_gateway_data_source.go index 110ee282e1c..2bf9cc97969 100644 --- a/internal/service/ec2/vpc_internet_gateway_data_source.go +++ b/internal/service/ec2/vpc_internet_gateway_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -63,7 +66,7 @@ func DataSourceInternetGateway() *schema.Resource { func dataSourceInternetGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig internetGatewayId, internetGatewayIdOk := d.GetOk("internet_gateway_id") diff --git a/internal/service/ec2/vpc_internet_gateway_data_source_test.go b/internal/service/ec2/vpc_internet_gateway_data_source_test.go index 3621f7da911..213e6e9df54 100644 --- a/internal/service/ec2/vpc_internet_gateway_data_source_test.go +++ b/internal/service/ec2/vpc_internet_gateway_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_internet_gateway_test.go b/internal/service/ec2/vpc_internet_gateway_test.go index c4209f6f984..ba72e3e3d0c 100644 --- a/internal/service/ec2/vpc_internet_gateway_test.go +++ b/internal/service/ec2/vpc_internet_gateway_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -154,7 +157,7 @@ func TestAccVPCInternetGateway_Tags(t *testing.T) { func testAccCheckInternetGatewayDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_internet_gateway" { @@ -189,7 +192,7 @@ func testAccCheckInternetGatewayExists(ctx context.Context, n string, v *ec2.Int return fmt.Errorf("No EC2 Internet Gateway ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindInternetGatewayByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_ipv4_cidr_block_association.go b/internal/service/ec2/vpc_ipv4_cidr_block_association.go index a5dceea12f7..d414a28b6df 100644 --- a/internal/service/ec2/vpc_ipv4_cidr_block_association.go +++ b/internal/service/ec2/vpc_ipv4_cidr_block_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -74,7 +77,7 @@ func ResourceVPCIPv4CIDRBlockAssociation() *schema.Resource { func resourceVPCIPv4CIDRBlockAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcID := d.Get("vpc_id").(string) input := &ec2.AssociateVpcCidrBlockInput{ @@ -113,7 +116,7 @@ func resourceVPCIPv4CIDRBlockAssociationCreate(ctx context.Context, d *schema.Re func resourceVPCIPv4CIDRBlockAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcCidrBlockAssociation, vpc, err := FindVPCCIDRBlockAssociationByID(ctx, conn, d.Id()) @@ -135,7 +138,7 @@ func resourceVPCIPv4CIDRBlockAssociationRead(ctx context.Context, d *schema.Reso func resourceVPCIPv4CIDRBlockAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 VPC IPv4 CIDR Block Association: %s", d.Id()) _, err := conn.DisassociateVpcCidrBlockWithContext(ctx, &ec2.DisassociateVpcCidrBlockInput{ diff --git a/internal/service/ec2/vpc_ipv4_cidr_block_association_test.go b/internal/service/ec2/vpc_ipv4_cidr_block_association_test.go index fd8418d20b4..754a1d0e99a 100644 --- a/internal/service/ec2/vpc_ipv4_cidr_block_association_test.go +++ b/internal/service/ec2/vpc_ipv4_cidr_block_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -154,7 +157,7 @@ func testAccCheckVPCAssociationCIDRPrefix(association *ec2.VpcCidrBlockAssociati func testAccCheckVPCIPv4CIDRBlockAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_ipv4_cidr_block_association" { @@ -189,7 +192,7 @@ func testAccCheckVPCIPv4CIDRBlockAssociationExists(ctx context.Context, n string return fmt.Errorf("No EC2 VPC IPv4 CIDR Block Association is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, _, err := tfec2.FindVPCCIDRBlockAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_ipv6_cidr_block_association.go b/internal/service/ec2/vpc_ipv6_cidr_block_association.go index 8b6ff4e7b2a..1ff8413f9d6 100644 --- a/internal/service/ec2/vpc_ipv6_cidr_block_association.go +++ b/internal/service/ec2/vpc_ipv6_cidr_block_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -81,7 +84,7 @@ func ResourceVPCIPv6CIDRBlockAssociation() *schema.Resource { func resourceVPCIPv6CIDRBlockAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcID := d.Get("vpc_id").(string) input := &ec2.AssociateVpcCidrBlockInput{ @@ -120,7 +123,7 @@ func resourceVPCIPv6CIDRBlockAssociationCreate(ctx context.Context, d *schema.Re func resourceVPCIPv6CIDRBlockAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcIpv6CidrBlockAssociation, vpc, err := FindVPCIPv6CIDRBlockAssociationByID(ctx, conn, d.Id()) @@ -142,7 +145,7 @@ func resourceVPCIPv6CIDRBlockAssociationRead(ctx context.Context, d *schema.Reso func resourceVPCIPv6CIDRBlockAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting VPC IPv6 CIDR Block Association: %s", d.Id()) _, err := conn.DisassociateVpcCidrBlockWithContext(ctx, &ec2.DisassociateVpcCidrBlockInput{ diff --git a/internal/service/ec2/vpc_ipv6_cidr_block_association_test.go b/internal/service/ec2/vpc_ipv6_cidr_block_association_test.go index bf0261a822e..2b0642b7b6d 100644 --- a/internal/service/ec2/vpc_ipv6_cidr_block_association_test.go +++ b/internal/service/ec2/vpc_ipv6_cidr_block_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -17,7 +20,7 @@ import ( func testAccCheckVPCIPv6CIDRBlockAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_ipv6_cidr_block_association" { @@ -52,7 +55,7 @@ func testAccCheckVPCIPv6CIDRBlockAssociationExists(ctx context.Context, n string return fmt.Errorf("No EC2 VPC IPv6 CIDR Block Association is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, _, err := tfec2.FindVPCIPv6CIDRBlockAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_main_route_table_association.go b/internal/service/ec2/vpc_main_route_table_association.go index 9446458a3b1..5c274c91041 100644 --- a/internal/service/ec2/vpc_main_route_table_association.go +++ b/internal/service/ec2/vpc_main_route_table_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -53,7 +56,7 @@ func ResourceMainRouteTableAssociation() *schema.Resource { func resourceMainRouteTableAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcID := d.Get("vpc_id").(string) @@ -90,7 +93,7 @@ func resourceMainRouteTableAssociationCreate(ctx context.Context, d *schema.Reso func resourceMainRouteTableAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) _, err := FindMainRouteTableAssociationByID(ctx, conn, d.Id()) @@ -109,7 +112,7 @@ func resourceMainRouteTableAssociationRead(ctx context.Context, d *schema.Resour func resourceMainRouteTableAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTableID := d.Get("route_table_id").(string) input := &ec2.ReplaceRouteTableAssociationInput{ @@ -138,7 +141,7 @@ func resourceMainRouteTableAssociationUpdate(ctx context.Context, d *schema.Reso func resourceMainRouteTableAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.ReplaceRouteTableAssociationInput{ AssociationId: aws.String(d.Id()), diff --git a/internal/service/ec2/vpc_main_route_table_association_test.go b/internal/service/ec2/vpc_main_route_table_association_test.go index db9499bdb19..cb56f516c32 100644 --- a/internal/service/ec2/vpc_main_route_table_association_test.go +++ 
b/internal/service/ec2/vpc_main_route_table_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -45,7 +48,7 @@ func TestAccVPCMainRouteTableAssociation_basic(t *testing.T) { func testAccCheckMainRouteTableAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_main_route_table_association" { @@ -80,7 +83,7 @@ func testAccCheckMainRouteTableAssociationExists(ctx context.Context, n string, return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) association, err := tfec2.FindMainRouteTableAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_managed_prefix_list.go b/internal/service/ec2/vpc_managed_prefix_list.go index 864cba57999..cbc3433d9e6 100644 --- a/internal/service/ec2/vpc_managed_prefix_list.go +++ b/internal/service/ec2/vpc_managed_prefix_list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -94,7 +97,7 @@ func ResourceManagedPrefixList() *schema.Resource { } func resourceManagedPrefixListCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateManagedPrefixListInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypePrefixList), @@ -133,7 +136,7 @@ func resourceManagedPrefixListCreate(ctx context.Context, d *schema.ResourceData } func resourceManagedPrefixListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) pl, err := FindManagedPrefixListByID(ctx, conn, d.Id()) @@ -163,13 +166,13 @@ func resourceManagedPrefixListRead(ctx context.Context, d *schema.ResourceData, d.Set("owner_id", pl.OwnerId) d.Set("version", pl.Version) - SetTagsOut(ctx, pl.Tags) + setTagsOut(ctx, pl.Tags) return nil } func resourceManagedPrefixListUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // MaxEntries & Entry cannot change in the same API call. 
// If MaxEntry is increasing, complete before updating entry(s) @@ -298,7 +301,7 @@ func resourceManagedPrefixListUpdate(ctx context.Context, d *schema.ResourceData } func resourceManagedPrefixListDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 Managed Prefix List: %s", d.Id()) _, err := conn.DeleteManagedPrefixListWithContext(ctx, &ec2.DeleteManagedPrefixListInput{ @@ -327,7 +330,7 @@ func updateMaxEntry(ctx context.Context, conn *ec2.EC2, id string, maxEntries in }) if err != nil { - return fmt.Errorf("error updating MaxEntries for EC2 Managed Prefix List (%s): %s", id, err) + return fmt.Errorf("updating MaxEntries for EC2 Managed Prefix List (%s): %s", id, err) } _, err = WaitManagedPrefixListModified(ctx, conn, id) diff --git a/internal/service/ec2/vpc_managed_prefix_list_data_source.go b/internal/service/ec2/vpc_managed_prefix_list_data_source.go index a8c4bdaad8e..2e4133bee2b 100644 --- a/internal/service/ec2/vpc_managed_prefix_list_data_source.go +++ b/internal/service/ec2/vpc_managed_prefix_list_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -76,7 +79,7 @@ func DataSourceManagedPrefixList() *schema.Resource { } func dataSourceManagedPrefixListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeManagedPrefixListsInput{ diff --git a/internal/service/ec2/vpc_managed_prefix_list_data_source_test.go b/internal/service/ec2/vpc_managed_prefix_list_data_source_test.go index a7c8669044c..a93ce22174b 100644 --- a/internal/service/ec2/vpc_managed_prefix_list_data_source_test.go +++ b/internal/service/ec2/vpc_managed_prefix_list_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -16,7 +19,7 @@ import ( func testAccManagedPrefixListGetIdByNameDataSource(ctx context.Context, name string, id *string, arn *string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := conn.DescribeManagedPrefixListsWithContext(ctx, &ec2.DescribeManagedPrefixListsInput{ Filters: []*ec2.Filter{ diff --git a/internal/service/ec2/vpc_managed_prefix_list_entry.go b/internal/service/ec2/vpc_managed_prefix_list_entry.go index aab507ae0cc..dca9474bcb3 100644 --- a/internal/service/ec2/vpc_managed_prefix_list_entry.go +++ b/internal/service/ec2/vpc_managed_prefix_list_entry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -50,7 +53,7 @@ func ResourceManagedPrefixListEntry() *schema.Resource { } func resourceManagedPrefixListEntryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) cidr := d.Get("cidr").(string) plID := d.Get("prefix_list_id").(string) @@ -96,7 +99,7 @@ func resourceManagedPrefixListEntryCreate(ctx context.Context, d *schema.Resourc } func resourceManagedPrefixListEntryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) plID, cidr, err := ManagedPrefixListEntryParseResourceID(d.Id()) @@ -127,7 +130,7 @@ func resourceManagedPrefixListEntryRead(ctx context.Context, d *schema.ResourceD } func resourceManagedPrefixListEntryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) plID, cidr, err := ManagedPrefixListEntryParseResourceID(d.Id()) diff --git a/internal/service/ec2/vpc_managed_prefix_list_entry_test.go b/internal/service/ec2/vpc_managed_prefix_list_entry_test.go index b7bc76a6eb1..aa72191edcf 100644 --- a/internal/service/ec2/vpc_managed_prefix_list_entry_test.go +++ b/internal/service/ec2/vpc_managed_prefix_list_entry_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -207,7 +210,7 @@ func TestAccVPCManagedPrefixListEntry_disappears(t *testing.T) { func testAccCheckManagedPrefixListEntryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_managed_prefix_list_entry" { @@ -248,7 +251,7 @@ func testAccCheckManagedPrefixListEntryExists(ctx context.Context, n string, v * return fmt.Errorf("No EC2 Managed Prefix List Entry ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) plID, cidr, err := tfec2.ManagedPrefixListEntryParseResourceID(rs.Primary.ID) diff --git a/internal/service/ec2/vpc_managed_prefix_list_test.go b/internal/service/ec2/vpc_managed_prefix_list_test.go index d600c837c9c..cb2fa372d8a 100644 --- a/internal/service/ec2/vpc_managed_prefix_list_test.go +++ b/internal/service/ec2/vpc_managed_prefix_list_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -356,7 +359,7 @@ func TestAccVPCManagedPrefixList_tags(t *testing.T) { func testAccCheckManagedPrefixListDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_managed_prefix_list" { @@ -391,7 +394,7 @@ func testAccManagedPrefixListExists(ctx context.Context, resourceName string) re return fmt.Errorf("No EC2 Managed Prefix List ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := tfec2.FindManagedPrefixListByID(ctx, conn, rs.Primary.ID) @@ -400,7 +403,7 @@ func testAccManagedPrefixListExists(ctx context.Context, resourceName string) re } func testAccPreCheckManagedPrefixList(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeManagedPrefixListsInput{} diff --git a/internal/service/ec2/vpc_managed_prefix_lists_data_source.go b/internal/service/ec2/vpc_managed_prefix_lists_data_source.go index e00d3b377b0..63ff1d82a7f 100644 --- a/internal/service/ec2/vpc_managed_prefix_lists_data_source.go +++ b/internal/service/ec2/vpc_managed_prefix_lists_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -29,7 +32,7 @@ func DataSourceManagedPrefixLists() *schema.Resource { } func dataSourceManagedPrefixListsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeManagedPrefixListsInput{} diff --git a/internal/service/ec2/vpc_managed_prefix_lists_data_source_test.go b/internal/service/ec2/vpc_managed_prefix_lists_data_source_test.go index 5f3ddabdb7a..585b5921506 100644 --- a/internal/service/ec2/vpc_managed_prefix_lists_data_source_test.go +++ b/internal/service/ec2/vpc_managed_prefix_lists_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -20,7 +23,7 @@ func TestAccVPCManagedPrefixListsDataSource_basic(t *testing.T) { { Config: testAccVPCManagedPrefixListsDataSourceConfig_basic, Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue("data.aws_ec2_managed_prefix_lists.test", "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_ec2_managed_prefix_lists.test", "ids.#", 0), ), }, }, diff --git a/internal/service/ec2/vpc_migrate.go b/internal/service/ec2/vpc_migrate.go index 0196c2c5547..6ce1ced9176 100644 --- a/internal/service/ec2/vpc_migrate.go +++ b/internal/service/ec2/vpc_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/vpc_migrate_test.go b/internal/service/ec2/vpc_migrate_test.go index 23743ee0813..ef4d8ad2107 100644 --- a/internal/service/ec2/vpc_migrate_test.go +++ b/internal/service/ec2/vpc_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_nat_gateway.go b/internal/service/ec2/vpc_nat_gateway.go index 99f6b61774a..d728e921331 100644 --- a/internal/service/ec2/vpc_nat_gateway.go +++ b/internal/service/ec2/vpc_nat_gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -77,7 +80,7 @@ func ResourceNATGateway() *schema.Resource { } func resourceNATGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateNatGatewayInput{ ClientToken: aws.String(id.UniqueId()), @@ -116,7 +119,7 @@ func resourceNATGatewayCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceNATGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ng, err := FindNATGatewayByID(ctx, conn, d.Id()) @@ -145,7 +148,7 @@ func resourceNATGatewayRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("connectivity_type", ng.ConnectivityType) d.Set("subnet_id", ng.SubnetId) - SetTagsOut(ctx, ng.Tags) + setTagsOut(ctx, ng.Tags) return nil } @@ -156,7 +159,7 @@ func resourceNATGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceNATGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 NAT Gateway: %s", d.Id()) _, err := conn.DeleteNatGatewayWithContext(ctx, &ec2.DeleteNatGatewayInput{ diff --git a/internal/service/ec2/vpc_nat_gateway_data_source.go b/internal/service/ec2/vpc_nat_gateway_data_source.go index d34f547672b..93b52d06e1c 100644 --- a/internal/service/ec2/vpc_nat_gateway_data_source.go 
+++ b/internal/service/ec2/vpc_nat_gateway_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -74,7 +77,7 @@ func DataSourceNATGateway() *schema.Resource {
 }
 
 func dataSourceNATGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &ec2.DescribeNatGatewaysInput{
@@ -130,7 +133,7 @@ func dataSourceNATGatewayRead(ctx context.Context, d *schema.ResourceData, meta
 	}
 
 	if err := d.Set("tags", KeyValueTags(ctx, ngw.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil {
-		return diag.Errorf("error setting tags: %s", err)
+		return diag.Errorf("setting tags: %s", err)
 	}
 
 	return nil
diff --git a/internal/service/ec2/vpc_nat_gateway_data_source_test.go b/internal/service/ec2/vpc_nat_gateway_data_source_test.go
index 41b9bfbfbd8..80dd617fdf5 100644
--- a/internal/service/ec2/vpc_nat_gateway_data_source_test.go
+++ b/internal/service/ec2/vpc_nat_gateway_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -65,7 +68,7 @@ resource "aws_subnet" "test" {
 }
 
 resource "aws_eip" "test" {
-  vpc = true
+  domain = "vpc"
 
  tags = {
    Name = %[1]q
diff --git a/internal/service/ec2/vpc_nat_gateway_test.go b/internal/service/ec2/vpc_nat_gateway_test.go
index e4dce97a54e..f15dd4ccfa2 100644
--- a/internal/service/ec2/vpc_nat_gateway_test.go
+++ b/internal/service/ec2/vpc_nat_gateway_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -189,7 +192,7 @@ func TestAccVPCNATGateway_tags(t *testing.T) {
 
 func testAccCheckNATGatewayDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_nat_gateway" {
@@ -224,7 +227,7 @@ func testAccCheckNATGatewayExists(ctx context.Context, n string, v *ec2.NatGatew
 			return fmt.Errorf("No EC2 NAT Gateway ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		output, err := tfec2.FindNATGatewayByID(ctx, conn, rs.Primary.ID)
@@ -277,7 +280,7 @@ resource "aws_internet_gateway" "test" {
 }
 
 resource "aws_eip" "test" {
-  vpc = true
+  domain = "vpc"
 
  tags = {
    Name = %[1]q
diff --git a/internal/service/ec2/vpc_nat_gateways_data_source.go b/internal/service/ec2/vpc_nat_gateways_data_source.go
index 57ab60a1414..4f5e58c83b3 100644
--- a/internal/service/ec2/vpc_nat_gateways_data_source.go
+++ b/internal/service/ec2/vpc_nat_gateways_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -38,7 +41,7 @@ func DataSourceNATGateways() *schema.Resource {
 }
 
 func dataSourceNATGatewaysRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.DescribeNatGatewaysInput{}
@@ -67,7 +70,7 @@ func dataSourceNATGatewaysRead(ctx context.Context, d *schema.ResourceData, meta
 	output, err := FindNATGateways(ctx, conn, input)
 
 	if err != nil {
-		return diag.Errorf("error reading EC2 NAT Gateways: %s", err)
+		return diag.Errorf("reading EC2 NAT Gateways: %s", err)
 	}
 
 	var natGatewayIDs []string
diff --git a/internal/service/ec2/vpc_nat_gateways_data_source_test.go b/internal/service/ec2/vpc_nat_gateways_data_source_test.go
index 04930e5dfe1..cbdc1688767 100644
--- a/internal/service/ec2/vpc_nat_gateways_data_source_test.go
+++ b/internal/service/ec2/vpc_nat_gateways_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -81,7 +84,7 @@ resource "aws_subnet" "test3" {
 }
 
 resource "aws_eip" "test1" {
-  vpc = true
+  domain = "vpc"
 
  tags = {
    Name = %[1]q
@@ -89,7 +92,7 @@ resource "aws_eip" "test1" {
 }
 
 resource "aws_eip" "test2" {
-  vpc = true
+  domain = "vpc"
 
  tags = {
    Name = %[1]q
@@ -97,7 +100,7 @@ resource "aws_eip" "test2" {
 }
 
 resource "aws_eip" "test3" {
-  vpc = true
+  domain = "vpc"
 
  tags = {
    Name = %[1]q
diff --git a/internal/service/ec2/vpc_network_acl.go b/internal/service/ec2/vpc_network_acl.go
index acd31807310..537dfcd4362 100644
--- a/internal/service/ec2/vpc_network_acl.go
+++ b/internal/service/ec2/vpc_network_acl.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -44,7 +47,7 @@ func ResourceNetworkACL() *schema.Resource {
 
 		Importer: &schema.ResourceImporter{
 			StateContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-				conn := meta.(*conns.AWSClient).EC2Conn()
+				conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 				nacl, err := FindNetworkACLByID(ctx, conn, d.Id())
 
@@ -155,7 +158,7 @@ var networkACLRuleNestedBlock = &schema.Resource{
 
 func resourceNetworkACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.CreateNetworkAclInput{
 		TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeNetworkAcl),
@@ -180,9 +183,9 @@ func resourceNetworkACLCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceNetworkACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
-	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
+	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return FindNetworkACLByID(ctx, conn, d.Id())
 	}, d.IsNewResource())
 
@@ -239,14 +242,14 @@ func resourceNetworkACLRead(ctx context.Context, d *schema.ResourceData, meta in
 		return sdkdiag.AppendErrorf(diags, "setting ingress: %s", err)
 	}
 
-	SetTagsOut(ctx, nacl.Tags)
+	setTagsOut(ctx, nacl.Tags)
 
 	return diags
 }
 
 func resourceNetworkACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	if err := modifyNetworkACLAttributesOnUpdate(ctx, conn, d, true); err != nil {
 		return sdkdiag.AppendErrorf(diags, "updating EC2 Network ACL (%s): %s", d.Id(), err)
@@ -257,7 +260,7 @@ func resourceNetworkACLDelete(ctx context.Context, d *schema.ResourceData, meta
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	// Delete all NACL/Subnet associations, even if they are managed via aws_network_acl_association resources.
 	nacl, err := FindNetworkACLByID(ctx, conn, d.Id())
@@ -281,7 +284,7 @@ func resourceNetworkACLDelete(ctx context.Context, d *schema.ResourceData, meta
 	}
 
 	log.Printf("[INFO] Deleting EC2 Network ACL: %s", d.Id())
-	_, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, propagationTimeout, func() (interface{}, error) {
+	_, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return conn.DeleteNetworkAclWithContext(ctx, input)
 	}, errCodeDependencyViolation)
diff --git a/internal/service/ec2/vpc_network_acl_association.go b/internal/service/ec2/vpc_network_acl_association.go
index c5541d63be3..cad52b58b9e 100644
--- a/internal/service/ec2/vpc_network_acl_association.go
+++ b/internal/service/ec2/vpc_network_acl_association.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -42,7 +45,7 @@ func ResourceNetworkACLAssociation() *schema.Resource {
 
 func resourceNetworkACLAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	associationID, err := networkACLAssociationCreate(ctx, conn, d.Get("network_acl_id").(string), d.Get("subnet_id").(string))
 
@@ -57,9 +60,9 @@ func resourceNetworkACLAssociationCreate(ctx context.Context, d *schema.Resource
 
 func resourceNetworkACLAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
-	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
+	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return FindNetworkACLAssociationByID(ctx, conn, d.Id())
 	}, d.IsNewResource())
 
@@ -83,7 +86,7 @@ func resourceNetworkACLAssociationRead(ctx context.Context, d *schema.ResourceDa
 
 func resourceNetworkACLAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.DescribeNetworkAclsInput{
 		Filters: BuildAttributeFilterList(map[string]string{
@@ -126,7 +129,7 @@ func networkACLAssociationCreate(ctx context.Context, conn *ec2.EC2, naclID, sub
 	}
 
 	log.Printf("[DEBUG] Creating EC2 Network ACL Association: %s", input)
-	outputRaw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, propagationTimeout, func() (interface{}, error) {
+	outputRaw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return conn.ReplaceNetworkAclAssociationWithContext(ctx, input)
 	}, errCodeInvalidAssociationIDNotFound)
@@ -178,7 +181,7 @@ func networkACLAssociationDelete(ctx context.Context, conn *ec2.EC2, association
 	return nil
 }
 
-// networkACLAssociationDelete deletes the specified NACL associations for the specified subnets.
+// networkACLAssociationsDelete deletes the specified NACL associations for the specified subnets.
 // Each subnet's current association is replaced by an association with the specified VPC's default NACL.
 func networkACLAssociationsDelete(ctx context.Context, conn *ec2.EC2, vpcID string, subnetIDs []interface{}) error {
 	defaultNACL, err := FindVPCDefaultNetworkACL(ctx, conn, vpcID)
diff --git a/internal/service/ec2/vpc_network_acl_association_test.go b/internal/service/ec2/vpc_network_acl_association_test.go
index aaeef877d87..11445050d01 100644
--- a/internal/service/ec2/vpc_network_acl_association_test.go
+++ b/internal/service/ec2/vpc_network_acl_association_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -192,7 +195,7 @@ func TestAccVPCNetworkACLAssociation_associateWithDefaultNACL(t *testing.T) {
 
 func testAccCheckNetworkACLAssociationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_network_acl_association" {
@@ -227,7 +230,7 @@ func testAccCheckNetworkACLAssociationExists(ctx context.Context, n string, v *e
 			return fmt.Errorf("No EC2 Network ACL Association ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		output, err := tfec2.FindNetworkACLAssociationByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/ec2/vpc_network_acl_rule.go b/internal/service/ec2/vpc_network_acl_rule.go
index e7c2f9dab52..a347754edba 100644
--- a/internal/service/ec2/vpc_network_acl_rule.go
+++ b/internal/service/ec2/vpc_network_acl_rule.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -119,7 +122,7 @@ func ResourceNetworkACLRule() *schema.Resource {
 
 func resourceNetworkACLRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	protocol := d.Get("protocol").(string)
 	protocolNumber, err := networkACLProtocolNumber(protocol)
@@ -175,13 +178,13 @@ func resourceNetworkACLRuleCreate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceNetworkACLRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	egress := d.Get("egress").(bool)
 	naclID := d.Get("network_acl_id").(string)
 	ruleNumber := d.Get("rule_number").(int)
 
-	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
+	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return FindNetworkACLEntryByThreePartKey(ctx, conn, naclID, egress, ruleNumber)
 	}, d.IsNewResource())
 
@@ -230,7 +233,7 @@ func resourceNetworkACLRuleRead(ctx context.Context, d *schema.ResourceData, met
 
 func resourceNetworkACLRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	log.Printf("[INFO] Deleting EC2 Network ACL Rule: %s", d.Id())
 	_, err := conn.DeleteNetworkAclEntryWithContext(ctx, &ec2.DeleteNetworkAclEntryInput{
diff --git a/internal/service/ec2/vpc_network_acl_rule_test.go b/internal/service/ec2/vpc_network_acl_rule_test.go
index b2279f01fe6..d652516be60 100644
--- a/internal/service/ec2/vpc_network_acl_rule_test.go
+++ b/internal/service/ec2/vpc_network_acl_rule_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -336,7 +339,7 @@ func TestAccVPCNetworkACLRule_tcpProtocol(t *testing.T) {
 
 func testAccCheckNetworkACLRuleDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_network_acl_rule" {
@@ -376,7 +379,7 @@ func testAccCheckNetworkACLRuleDestroy(ctx context.Context) resource.TestCheckFu
 
 func testAccCheckNetworkACLRuleExists(ctx context.Context, n string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
 			return fmt.Errorf("Not found: %s", n)
diff --git a/internal/service/ec2/vpc_network_acl_test.go b/internal/service/ec2/vpc_network_acl_test.go
index 310fb996158..56763ec3460 100644
--- a/internal/service/ec2/vpc_network_acl_test.go
+++ b/internal/service/ec2/vpc_network_acl_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -646,7 +649,7 @@ func TestAccVPCNetworkACL_espProtocol(t *testing.T) {
 
 func testAccCheckNetworkACLDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_network_acl" {
@@ -681,7 +684,7 @@ func testAccCheckNetworkACLExists(ctx context.Context, n string, v *ec2.NetworkA
 			return fmt.Errorf("No EC2 Network ACL ID is set: %s", n)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		output, err := tfec2.FindNetworkACLByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/ec2/vpc_network_acls_data_source.go b/internal/service/ec2/vpc_network_acls_data_source.go
index 9f06fdc3c7b..fb16ddbd7a0 100644
--- a/internal/service/ec2/vpc_network_acls_data_source.go
+++ b/internal/service/ec2/vpc_network_acls_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -40,7 +43,7 @@ func DataSourceNetworkACLs() *schema.Resource {
 
 func dataSourceNetworkACLsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.DescribeNetworkAclsInput{}
diff --git a/internal/service/ec2/vpc_network_acls_data_source_test.go b/internal/service/ec2/vpc_network_acls_data_source_test.go
index 292bf1bcc75..9a5374f2ced 100644
--- a/internal/service/ec2/vpc_network_acls_data_source_test.go
+++ b/internal/service/ec2/vpc_network_acls_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -24,7 +27,7 @@ func TestAccVPCNetworkACLsDataSource_basic(t *testing.T) {
 			{
 				Config: testAccVPCNetworkACLsDataSourceConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
-					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "1"),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 1),
 				),
 			},
 		},
diff --git a/internal/service/ec2/vpc_network_insights_analysis.go b/internal/service/ec2/vpc_network_insights_analysis.go
index 288c2fdb591..8a1b28cb443 100644
--- a/internal/service/ec2/vpc_network_insights_analysis.go
+++ b/internal/service/ec2/vpc_network_insights_analysis.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -1401,7 +1404,7 @@ var networkInsightsAnalysisExplanationsSchema = &schema.Schema{
 }
 
 func resourceNetworkInsightsAnalysisCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.StartNetworkInsightsAnalysisInput{
 		NetworkInsightsPathId: aws.String(d.Get("network_insights_path_id").(string)),
@@ -1416,14 +1419,14 @@ func resourceNetworkInsightsAnalysisCreate(ctx context.Context, d *schema.Resour
 	output, err := conn.StartNetworkInsightsAnalysisWithContext(ctx, input)
 
 	if err != nil {
-		return diag.Errorf("error creating EC2 Network Insights Analysis: %s", err)
+		return diag.Errorf("creating EC2 Network Insights Analysis: %s", err)
 	}
 
 	d.SetId(aws.StringValue(output.NetworkInsightsAnalysis.NetworkInsightsAnalysisId))
 
 	if d.Get("wait_for_completion").(bool) {
 		if _, err := WaitNetworkInsightsAnalysisCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
-			return diag.Errorf("error waiting for EC2 Network Insights Analysis (%s) create: %s", d.Id(), err)
+			return diag.Errorf("waiting for EC2 Network Insights Analysis (%s) create: %s", d.Id(), err)
 		}
 	}
 
@@ -1431,7 +1434,7 @@ func resourceNetworkInsightsAnalysisCreate(ctx context.Context, d *schema.Resour
 }
 
 func resourceNetworkInsightsAnalysisRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	output, err := FindNetworkInsightsAnalysisByID(ctx, conn, d.Id())
 
@@ -1466,7 +1469,7 @@ func resourceNetworkInsightsAnalysisRead(ctx context.Context, d *schema.Resource
 	d.Set("status_message", output.StatusMessage)
 	d.Set("warning_message", output.WarningMessage)
 
-	SetTagsOut(ctx, output.Tags)
+	setTagsOut(ctx, output.Tags)
 
 	return nil
 }
@@ -1477,7 +1480,7 @@ func resourceNetworkInsightsAnalysisUpdate(ctx context.Context, d *schema.Resour
 }
 
 func resourceNetworkInsightsAnalysisDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	log.Printf("[DEBUG] Deleting EC2 Network Insights Analysis: %s", d.Id())
 	_, err := conn.DeleteNetworkInsightsAnalysisWithContext(ctx, &ec2.DeleteNetworkInsightsAnalysisInput{
diff --git a/internal/service/ec2/vpc_network_insights_analysis_data_source.go b/internal/service/ec2/vpc_network_insights_analysis_data_source.go
index 7f3e15b361f..5e36d838752 100644
--- a/internal/service/ec2/vpc_network_insights_analysis_data_source.go
+++ b/internal/service/ec2/vpc_network_insights_analysis_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -83,7 +86,7 @@ func DataSourceNetworkInsightsAnalysis() *schema.Resource {
 }
 
 func dataSourceNetworkInsightsAnalysisRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &ec2.DescribeNetworkInsightsAnalysesInput{}
diff --git a/internal/service/ec2/vpc_network_insights_analysis_data_source_test.go b/internal/service/ec2/vpc_network_insights_analysis_data_source_test.go
index 16146ca8761..0325a2b624b 100644
--- a/internal/service/ec2/vpc_network_insights_analysis_data_source_test.go
+++ b/internal/service/ec2/vpc_network_insights_analysis_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
diff --git a/internal/service/ec2/vpc_network_insights_analysis_test.go b/internal/service/ec2/vpc_network_insights_analysis_test.go
index 3037782933e..90730eb9ca6 100644
--- a/internal/service/ec2/vpc_network_insights_analysis_test.go
+++ b/internal/service/ec2/vpc_network_insights_analysis_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -202,7 +205,7 @@ func testAccCheckNetworkInsightsAnalysisExists(ctx context.Context, n string) re
 			return fmt.Errorf("No EC2 Network Insights Analysis ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		_, err := tfec2.FindNetworkInsightsAnalysisByID(ctx, conn, rs.Primary.ID)
 
@@ -212,7 +215,7 @@ func testAccCheckNetworkInsightsAnalysisExists(ctx context.Context, n string) re
 
 func testAccCheckNetworkInsightsAnalysisDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ec2_network_insights_analysis" {
diff --git a/internal/service/ec2/vpc_network_insights_path.go b/internal/service/ec2/vpc_network_insights_path.go
index 3e5b2649429..f6a5130dda0 100644
--- a/internal/service/ec2/vpc_network_insights_path.go
+++ b/internal/service/ec2/vpc_network_insights_path.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -75,7 +78,7 @@ func ResourceNetworkInsightsPath() *schema.Resource {
 }
 
 func resourceNetworkInsightsPathCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.CreateNetworkInsightsPathInput{
 		Destination: aws.String(d.Get("destination").(string)),
@@ -109,7 +112,7 @@ func resourceNetworkInsightsPathCreate(ctx context.Context, d *schema.ResourceDa
 }
 
 func resourceNetworkInsightsPathRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	nip, err := FindNetworkInsightsPathByID(ctx, conn, d.Id())
 
@@ -131,7 +134,7 @@ func resourceNetworkInsightsPathRead(ctx context.Context, d *schema.ResourceData
 	d.Set("source", nip.Source)
 	d.Set("source_ip", nip.SourceIp)
 
-	SetTagsOut(ctx, nip.Tags)
+	setTagsOut(ctx, nip.Tags)
 
 	return nil
 }
@@ -142,7 +145,7 @@ func resourceNetworkInsightsPathUpdate(ctx context.Context, d *schema.ResourceDa
 }
 
 func resourceNetworkInsightsPathDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	log.Printf("[DEBUG] Deleting EC2 Network Insights Path: %s", d.Id())
 	_, err := conn.DeleteNetworkInsightsPathWithContext(ctx, &ec2.DeleteNetworkInsightsPathInput{
diff --git a/internal/service/ec2/vpc_network_insights_path_data_source.go b/internal/service/ec2/vpc_network_insights_path_data_source.go
index ff3aa6dc2f5..f7c8e35d2ba 100644
--- a/internal/service/ec2/vpc_network_insights_path_data_source.go
+++ b/internal/service/ec2/vpc_network_insights_path_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -58,7 +61,7 @@ func DataSourceNetworkInsightsPath() *schema.Resource {
 }
 
 func dataSourceNetworkInsightsPathRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &ec2.DescribeNetworkInsightsPathsInput{}
diff --git a/internal/service/ec2/vpc_network_insights_path_data_source_test.go b/internal/service/ec2/vpc_network_insights_path_data_source_test.go
index baa9895b7d3..64ef05cacfe 100644
--- a/internal/service/ec2/vpc_network_insights_path_data_source_test.go
+++ b/internal/service/ec2/vpc_network_insights_path_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
diff --git a/internal/service/ec2/vpc_network_insights_path_test.go b/internal/service/ec2/vpc_network_insights_path_test.go
index 81ca1724037..f396b08372f 100644
--- a/internal/service/ec2/vpc_network_insights_path_test.go
+++ b/internal/service/ec2/vpc_network_insights_path_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -231,7 +234,7 @@ func testAccCheckNetworkInsightsPathExists(ctx context.Context, n string) resour
 			return fmt.Errorf("No EC2 Network Insights Path ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		_, err := tfec2.FindNetworkInsightsPathByID(ctx, conn, rs.Primary.ID)
 
@@ -241,7 +244,7 @@ func testAccCheckNetworkInsightsPathExists(ctx context.Context, n string) resour
 
 func testAccCheckNetworkInsightsPathDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ec2_network_insights_path" {
diff --git a/internal/service/ec2/vpc_network_interface.go b/internal/service/ec2/vpc_network_interface.go
index 4550d14b888..a05e3a41a4e 100644
--- a/internal/service/ec2/vpc_network_interface.go
+++ b/internal/service/ec2/vpc_network_interface.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -330,7 +333,7 @@ func ResourceNetworkInterface() *schema.Resource {
 
 func resourceNetworkInterfaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	ipv4PrefixesSpecified := false
 	ipv6PrefixesSpecified := false
@@ -450,7 +453,7 @@ func resourceNetworkInterfaceCreate(ctx context.Context, d *schema.ResourceData,
 	}
 
 	if ipv4PrefixesSpecified || ipv6PrefixesSpecified {
-		if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil {
+		if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil {
 			return sdkdiag.AppendErrorf(diags, "setting EC2 Network Interface (%s) tags: %s", d.Id(), err)
 		}
 	}
@@ -484,9 +487,9 @@ func resourceNetworkInterfaceCreate(ctx context.Context, d *schema.ResourceData,
 
 func resourceNetworkInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
-	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
+	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return FindNetworkInterfaceByID(ctx, conn, d.Id())
 	}, d.IsNewResource())
 
@@ -553,14 +556,14 @@ func resourceNetworkInterfaceRead(ctx context.Context, d *schema.ResourceData, m
 	d.Set("source_dest_check", eni.SourceDestCheck)
 	d.Set("subnet_id", eni.SubnetId)
 
-	SetTagsOut(ctx, eni.TagSet)
+	setTagsOut(ctx, eni.TagSet)
 
 	return diags
 }
 
 func resourceNetworkInterfaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	privateIPsNetChange := 0
 
 	if d.HasChange("attachment") {
@@ -1035,7 +1038,7 @@ func resourceNetworkInterfaceUpdate(ctx context.Context, d *schema.ResourceData,
 
 func resourceNetworkInterfaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	if v, ok := d.GetOk("attachment"); ok && v.(*schema.Set).Len() > 0 {
 		attachment := v.(*schema.Set).List()[0].(map[string]interface{})
diff --git a/internal/service/ec2/vpc_network_interface_attachment.go b/internal/service/ec2/vpc_network_interface_attachment.go
index ba5069c34e6..faec365b891 100644
--- a/internal/service/ec2/vpc_network_interface_attachment.go
+++ b/internal/service/ec2/vpc_network_interface_attachment.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -51,7 +54,7 @@ func ResourceNetworkInterfaceAttachment() *schema.Resource {
 
 func resourceNetworkInterfaceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	attachmentID, err := attachNetworkInterface(ctx, conn,
 		d.Get("network_interface_id").(string),
@@ -73,7 +76,7 @@ func resourceNetworkInterfaceAttachmentCreate(ctx context.Context, d *schema.Res
 
 func resourceNetworkInterfaceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	network_interface, err := FindNetworkInterfaceByAttachmentID(ctx, conn, d.Id())
 
@@ -98,7 +101,7 @@ func resourceNetworkInterfaceAttachmentRead(ctx context.Context, d *schema.Resou
 
 func resourceNetworkInterfaceAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	if err := DetachNetworkInterface(ctx, conn, d.Get("network_interface_id").(string), d.Id(), NetworkInterfaceDetachedTimeout); err != nil {
 		return sdkdiag.AppendFromErr(diags, err)
diff --git a/internal/service/ec2/vpc_network_interface_attachment_test.go b/internal/service/ec2/vpc_network_interface_attachment_test.go
index dc65d681155..f7d501db614 100644
--- a/internal/service/ec2/vpc_network_interface_attachment_test.go
+++ b/internal/service/ec2/vpc_network_interface_attachment_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
diff --git a/internal/service/ec2/vpc_network_interface_data_source.go b/internal/service/ec2/vpc_network_interface_data_source.go
index dd814bf1bdf..87ec562d593 100644
--- a/internal/service/ec2/vpc_network_interface_data_source.go
+++ b/internal/service/ec2/vpc_network_interface_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -161,7 +164,7 @@ func DataSourceNetworkInterface() *schema.Resource {
 
 func dataSourceNetworkInterfaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &ec2.DescribeNetworkInterfacesInput{}
diff --git a/internal/service/ec2/vpc_network_interface_data_source_test.go b/internal/service/ec2/vpc_network_interface_data_source_test.go
index 506a7c767a9..7eaf401df9e 100644
--- a/internal/service/ec2/vpc_network_interface_data_source_test.go
+++ b/internal/service/ec2/vpc_network_interface_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -309,7 +312,7 @@ data "aws_availability_zone" "available" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" network_border_group = data.aws_availability_zone.available.network_border_group tags = { @@ -341,7 +344,7 @@ resource "aws_internet_gateway" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/ec2/vpc_network_interface_sg_attachment.go b/internal/service/ec2/vpc_network_interface_sg_attachment.go index 10e18183c08..3e95eec051a 100644 --- a/internal/service/ec2/vpc_network_interface_sg_attachment.go +++ b/internal/service/ec2/vpc_network_interface_sg_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -43,7 +46,7 @@ func ResourceNetworkInterfaceSGAttachment() *schema.Resource { func resourceNetworkInterfaceSGAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) networkInterfaceID := d.Get("network_interface_id").(string) sgID := d.Get("security_group_id").(string) @@ -92,11 +95,11 @@ func resourceNetworkInterfaceSGAttachmentCreate(ctx context.Context, d *schema.R func resourceNetworkInterfaceSGAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) networkInterfaceID := d.Get("network_interface_id").(string) sgID := d.Get("security_group_id").(string) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return 
FindNetworkInterfaceSecurityGroup(ctx, conn, networkInterfaceID, sgID) }, d.IsNewResource()) @@ -120,7 +123,7 @@ func resourceNetworkInterfaceSGAttachmentRead(ctx context.Context, d *schema.Res func resourceNetworkInterfaceSGAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) networkInterfaceID := d.Get("network_interface_id").(string) sgID := d.Get("security_group_id").(string) @@ -184,7 +187,7 @@ func resourceNetworkInterfaceSGAttachmentImport(ctx context.Context, d *schema.R log.Printf("[DEBUG] Importing network interface security group association, Interface: %s, Security Group: %s", networkInterfaceID, securityGroupID) - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) networkInterface, err := FindNetworkInterfaceByID(ctx, conn, networkInterfaceID) diff --git a/internal/service/ec2/vpc_network_interface_sg_attachment_test.go b/internal/service/ec2/vpc_network_interface_sg_attachment_test.go index 3fa30781571..c5a0fe3a4e3 100644 --- a/internal/service/ec2/vpc_network_interface_sg_attachment_test.go +++ b/internal/service/ec2/vpc_network_interface_sg_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -145,7 +148,7 @@ func testAccCheckNetworkInterfaceSGAttachmentExists(ctx context.Context, resourc return fmt.Errorf("No EC2 Network Interface Security Group Attachment ID is set: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := tfec2.FindNetworkInterfaceSecurityGroup(ctx, conn, rs.Primary.Attributes["network_interface_id"], rs.Primary.Attributes["security_group_id"]) @@ -155,7 +158,7 @@ func testAccCheckNetworkInterfaceSGAttachmentExists(ctx context.Context, resourc func testAccCheckNetworkInterfaceSGAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_network_interface_sg_attachment" { diff --git a/internal/service/ec2/vpc_network_interface_test.go b/internal/service/ec2/vpc_network_interface_test.go index 22ca6d4b8fe..f033e7d76f0 100644 --- a/internal/service/ec2/vpc_network_interface_test.go +++ b/internal/service/ec2/vpc_network_interface_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -381,7 +384,7 @@ func TestAccVPCNetworkInterface_ignoreExternalAttachment(t *testing.T) { Config: testAccVPCNetworkInterfaceConfig_externalAttachment(rName), Check: resource.ComposeTestCheckFunc( testAccCheckENIExists(ctx, resourceName, &conf), - testAccCheckENIMakeExternalAttachment("aws_instance.test", &conf), + testAccCheckENIMakeExternalAttachment(ctx, "aws_instance.test", &conf), ), }, { @@ -1002,7 +1005,7 @@ func testAccCheckENIExists(ctx context.Context, n string, v *ec2.NetworkInterfac return fmt.Errorf("No EC2 Network Interface ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindNetworkInterfaceByID(ctx, conn, rs.Primary.ID) @@ -1018,7 +1021,7 @@ func testAccCheckENIExists(ctx context.Context, n string, v *ec2.NetworkInterfac func testAccCheckENIDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_network_interface" { @@ -1042,7 +1045,7 @@ func testAccCheckENIDestroy(ctx context.Context) resource.TestCheckFunc { } } -func testAccCheckENIMakeExternalAttachment(n string, conf *ec2.NetworkInterface) resource.TestCheckFunc { +func testAccCheckENIMakeExternalAttachment(ctx context.Context, n string, networkInterface *ec2.NetworkInterface) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok || rs.Primary.ID == "" { @@ -1052,10 +1055,10 @@ func testAccCheckENIMakeExternalAttachment(n string, conf *ec2.NetworkInterface) input := &ec2.AttachNetworkInterfaceInput{ DeviceIndex: aws.Int64(1), InstanceId: aws.String(rs.Primary.ID), - NetworkInterfaceId: conf.NetworkInterfaceId, + NetworkInterfaceId: 
networkInterface.NetworkInterfaceId, } - _, err := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn().AttachNetworkInterface(input) + _, err := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx).AttachNetworkInterfaceWithContext(ctx, input) if err != nil { return fmt.Errorf("error attaching ENI: %w", err) diff --git a/internal/service/ec2/vpc_network_interfaces_data_source.go b/internal/service/ec2/vpc_network_interfaces_data_source.go index 94327a11054..2de97502b17 100644 --- a/internal/service/ec2/vpc_network_interfaces_data_source.go +++ b/internal/service/ec2/vpc_network_interfaces_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceNetworkInterfaces() *schema.Resource { func dataSourceNetworkInterfacesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeNetworkInterfacesInput{} diff --git a/internal/service/ec2/vpc_network_interfaces_data_source_test.go b/internal/service/ec2/vpc_network_interfaces_data_source_test.go index 733d53434e1..52d6f2f5421 100644 --- a/internal/service/ec2/vpc_network_interfaces_data_source_test.go +++ b/internal/service/ec2/vpc_network_interfaces_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_network_performance_metric_subscription.go b/internal/service/ec2/vpc_network_performance_metric_subscription.go index 703b5712173..cac7c6e82c2 100644 --- a/internal/service/ec2/vpc_network_performance_metric_subscription.go +++ b/internal/service/ec2/vpc_network_performance_metric_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -57,7 +60,7 @@ func ResourceNetworkPerformanceMetricSubscription() *schema.Resource { } func resourceNetworkPerformanceMetricSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Client() + conn := meta.(*conns.AWSClient).EC2Client(ctx) source := d.Get("source").(string) destination := d.Get("destination").(string) @@ -83,7 +86,7 @@ func resourceNetworkPerformanceMetricSubscriptionCreate(ctx context.Context, d * } func resourceNetworkPerformanceMetricSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Client() + conn := meta.(*conns.AWSClient).EC2Client(ctx) source, destination, metric, statistic, err := NetworkPerformanceMetricSubscriptionResourceID(d.Id()) @@ -113,7 +116,7 @@ func resourceNetworkPerformanceMetricSubscriptionRead(ctx context.Context, d *sc } func resourceNetworkPerformanceMetricSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Client() + conn := meta.(*conns.AWSClient).EC2Client(ctx) source, destination, metric, statistic, err := NetworkPerformanceMetricSubscriptionResourceID(d.Id()) diff --git a/internal/service/ec2/vpc_network_performance_metric_subscription_test.go b/internal/service/ec2/vpc_network_performance_metric_subscription_test.go index 6fcc8dc8117..349afc11a1d 100644 --- a/internal/service/ec2/vpc_network_performance_metric_subscription_test.go +++ b/internal/service/ec2/vpc_network_performance_metric_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -92,7 +95,7 @@ func testAccCheckNetworkPerformanceMetricSubscriptionExists(ctx context.Context, return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Client(ctx) _, err = tfec2.FindNetworkPerformanceMetricSubscriptionByFourPartKey(ctx, conn, source, destination, metric, statistic) @@ -102,7 +105,7 @@ func testAccCheckNetworkPerformanceMetricSubscriptionExists(ctx context.Context, func testAccCheckNetworkPerformanceMetricSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_network_performance_metric_subscription" { diff --git a/internal/service/ec2/vpc_peering_connection.go b/internal/service/ec2/vpc_peering_connection.go index 1b3a2117b8a..3f1e4201467 100644 --- a/internal/service/ec2/vpc_peering_connection.go +++ b/internal/service/ec2/vpc_peering_connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -100,7 +103,7 @@ var vpcPeeringConnectionOptionsSchema = &schema.Schema{ func resourceVPCPeeringConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateVpcPeeringConnectionInput{ PeerVpcId: aws.String(d.Get("peer_vpc_id").(string)), @@ -152,7 +155,7 @@ func resourceVPCPeeringConnectionCreate(ctx context.Context, d *schema.ResourceD func resourceVPCPeeringConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, d.Id()) @@ -197,14 +200,14 @@ func resourceVPCPeeringConnectionRead(ctx context.Context, d *schema.ResourceDat d.Set("requester", nil) } - SetTagsOut(ctx, vpcPeeringConnection.Tags) + setTagsOut(ctx, vpcPeeringConnection.Tags) return diags } func resourceVPCPeeringConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, d.Id()) @@ -231,7 +234,7 @@ func resourceVPCPeeringConnectionUpdate(ctx context.Context, d *schema.ResourceD func resourceVPCPeeringConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 VPC Peering Connection: %s", d.Id()) _, err := conn.DeleteVpcPeeringConnectionWithContext(ctx, &ec2.DeleteVpcPeeringConnectionInput{ @@ -321,7 +324,7 @@ func 
modifyVPCPeeringConnectionOptions(ctx context.Context, conn *ec2.EC2, d *sc // Retry reading back the modified options to deal with eventual consistency. // Often this is to do with a delay transitioning from pending-acceptance to active. - err := retry.RetryContext(ctx, VPCPeeringConnectionOptionsPropagationTimeout, func() *retry.RetryError { // nosemgrep:ci.helper-schema-retry-RetryContext-without-TimeoutError-check + err := retry.RetryContext(ctx, ec2PropagationTimeout, func() *retry.RetryError { // nosemgrep:ci.helper-schema-retry-RetryContext-without-TimeoutError-check vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, d.Id()) if err != nil { diff --git a/internal/service/ec2/vpc_peering_connection_accepter.go b/internal/service/ec2/vpc_peering_connection_accepter.go index d3e9623fcaf..0ebe4d396c0 100644 --- a/internal/service/ec2/vpc_peering_connection_accepter.go +++ b/internal/service/ec2/vpc_peering_connection_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -86,7 +89,7 @@ func ResourceVPCPeeringConnectionAccepter() *schema.Resource { func resourceVPCPeeringAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcPeeringConnectionID := d.Get("vpc_peering_connection_id").(string) vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, vpcPeeringConnectionID) @@ -109,7 +112,7 @@ func resourceVPCPeeringAccepterCreate(ctx context.Context, d *schema.ResourceDat return sdkdiag.AppendFromErr(diags, err) } - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting EC2 VPC Peering Connection (%s) tags: %s", d.Id(), err) } diff --git a/internal/service/ec2/vpc_peering_connection_accepter_test.go b/internal/service/ec2/vpc_peering_connection_accepter_test.go index cdd8400a374..bd22870fc74 100644 --- a/internal/service/ec2/vpc_peering_connection_accepter_test.go +++ b/internal/service/ec2/vpc_peering_connection_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -331,7 +334,9 @@ resource "aws_vpc_peering_connection_accepter" "peer" { } func testAccVPCPeeringConnectionAccepterConfig_differentRegionDifferentAccount(rName string) string { - return acctest.ConfigCompose(testAccAlternateAccountAlternateRegionProviderConfig(), fmt.Sprintf(` + return acctest.ConfigCompose( + acctest.ConfigAlternateAccountAlternateRegionProvider(), + fmt.Sprintf(` resource "aws_vpc" "main" { cidr_block = "10.0.0.0/16" diff --git a/internal/service/ec2/vpc_peering_connection_data_source.go b/internal/service/ec2/vpc_peering_connection_data_source.go index 7c4c6ce5063..a67236c9317 100644 --- a/internal/service/ec2/vpc_peering_connection_data_source.go +++ b/internal/service/ec2/vpc_peering_connection_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -116,7 +119,7 @@ func DataSourceVPCPeeringConnection() *schema.Resource { func dataSourceVPCPeeringConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeVpcPeeringConnectionsInput{} diff --git a/internal/service/ec2/vpc_peering_connection_data_source_test.go b/internal/service/ec2/vpc_peering_connection_data_source_test.go index cbf9d7db847..a764503454c 100644 --- a/internal/service/ec2/vpc_peering_connection_data_source_test.go +++ b/internal/service/ec2/vpc_peering_connection_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_peering_connection_options.go b/internal/service/ec2/vpc_peering_connection_options.go index 8050b045ca3..cb02993cecc 100644 --- a/internal/service/ec2/vpc_peering_connection_options.go +++ b/internal/service/ec2/vpc_peering_connection_options.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func ResourceVPCPeeringConnectionOptions() *schema.Resource { func resourceVPCPeeringConnectionOptionsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcPeeringConnectionID := d.Get("vpc_peering_connection_id").(string) vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, vpcPeeringConnectionID) @@ -56,7 +59,7 @@ func resourceVPCPeeringConnectionOptionsCreate(ctx context.Context, d *schema.Re func resourceVPCPeeringConnectionOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, d.Id()) @@ -93,7 +96,7 @@ func resourceVPCPeeringConnectionOptionsRead(ctx context.Context, d *schema.Reso func resourceVPCPeeringConnectionOptionsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcPeeringConnection, err := FindVPCPeeringConnectionByID(ctx, conn, d.Id()) diff --git a/internal/service/ec2/vpc_peering_connection_options_test.go b/internal/service/ec2/vpc_peering_connection_options_test.go index 305dab6e6b5..a4634ea1452 100644 --- 
a/internal/service/ec2/vpc_peering_connection_options_test.go +++ b/internal/service/ec2/vpc_peering_connection_options_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -229,7 +232,7 @@ func testAccCheckVPCPeeringConnectionOptionsWithProvider(ctx context.Context, n, return fmt.Errorf("No EC2 VPC Peering Connection ID is set.") } - conn := providerF().Meta().(*conns.AWSClient).EC2Conn() + conn := providerF().Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindVPCPeeringConnectionByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_peering_connection_test.go b/internal/service/ec2/vpc_peering_connection_test.go index d1b8a5102a0..385c038f397 100644 --- a/internal/service/ec2/vpc_peering_connection_test.go +++ b/internal/service/ec2/vpc_peering_connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -130,7 +133,7 @@ func TestAccVPCPeeringConnection_options(t *testing.T) { resourceName := "aws_vpc_peering_connection.test" testAccepterChange := func(*terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Test change to the VPC Peering Connection Options.") _, err := conn.ModifyVpcPeeringConnectionOptionsWithContext(ctx, &ec2.ModifyVpcPeeringConnectionOptionsInput{ @@ -380,7 +383,7 @@ func TestAccVPCPeeringConnection_optionsNoAutoAccept(t *testing.T) { func testAccCheckVPCPeeringConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_peering_connection" { @@ -419,7 +422,7 @@ func 
testAccCheckVPCPeeringConnectionExistsWithProvider(ctx context.Context, n s return fmt.Errorf("No EC2 VPC Peering Connection ID is set.") } - conn := providerF().Meta().(*conns.AWSClient).EC2Conn() + conn := providerF().Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindVPCPeeringConnectionByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_peering_connections_data_source.go b/internal/service/ec2/vpc_peering_connections_data_source.go index 37d5f14d437..ca023bb6f5d 100644 --- a/internal/service/ec2/vpc_peering_connections_data_source.go +++ b/internal/service/ec2/vpc_peering_connections_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceVPCPeeringConnections() *schema.Resource { func dataSourceVPCPeeringConnectionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeVpcPeeringConnectionsInput{} diff --git a/internal/service/ec2/vpc_peering_connections_data_source_test.go b/internal/service/ec2/vpc_peering_connections_data_source_test.go index fdde68f31f5..e12d1ce2ff2 100644 --- a/internal/service/ec2/vpc_peering_connections_data_source_test.go +++ b/internal/service/ec2/vpc_peering_connections_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_prefix_list_data_source.go b/internal/service/ec2/vpc_prefix_list_data_source.go index 7d3214a0884..3a9e78809ce 100644 --- a/internal/service/ec2/vpc_prefix_list_data_source.go +++ b/internal/service/ec2/vpc_prefix_list_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -44,7 +47,7 @@ func DataSourcePrefixList() *schema.Resource { func dataSourcePrefixListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribePrefixListsInput{} diff --git a/internal/service/ec2/vpc_prefix_list_data_source_test.go b/internal/service/ec2/vpc_prefix_list_data_source_test.go index 51c840cac9a..60afaf646bb 100644 --- a/internal/service/ec2/vpc_prefix_list_data_source_test.go +++ b/internal/service/ec2/vpc_prefix_list_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -22,9 +25,9 @@ func TestAccVPCPrefixListDataSource_basic(t *testing.T) { { Config: testAccVPCPrefixListDataSourceConfig_basic, Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(ds1Name, "cidr_blocks.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(ds1Name, "cidr_blocks.#", 0), resource.TestCheckResourceAttrSet(ds1Name, "name"), - acctest.CheckResourceAttrGreaterThanValue(ds2Name, "cidr_blocks.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(ds2Name, "cidr_blocks.#", 0), resource.TestCheckResourceAttrSet(ds2Name, "name"), ), }, @@ -45,9 +48,9 @@ func TestAccVPCPrefixListDataSource_filter(t *testing.T) { { Config: testAccVPCPrefixListDataSourceConfig_filter, Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(ds1Name, "cidr_blocks.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(ds1Name, "cidr_blocks.#", 0), resource.TestCheckResourceAttrSet(ds1Name, "name"), - acctest.CheckResourceAttrGreaterThanValue(ds2Name, "cidr_blocks.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(ds2Name, "cidr_blocks.#", 0), resource.TestCheckResourceAttrSet(ds2Name, "name"), ), }, diff --git 
a/internal/service/ec2/vpc_route.go b/internal/service/ec2/vpc_route.go index b26f93e3adb..9ad717ca1f1 100644 --- a/internal/service/ec2/vpc_route.go +++ b/internal/service/ec2/vpc_route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -178,18 +181,18 @@ func ResourceRoute() *schema.Resource { func resourceRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) destinationAttributeKey, destination, err := routeDestinationAttribute(d) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating Route: %s", err) + return sdkdiag.AppendFromErr(diags, err) } targetAttributeKey, target, err := routeTargetAttribute(d) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating Route: %s", err) + return sdkdiag.AppendFromErr(diags, err) } routeTableID := d.Get("route_table_id").(string) @@ -266,16 +269,15 @@ func resourceRouteCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) destinationAttributeKey, destination, err := routeDestinationAttribute(d) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading Route: %s", err) + return sdkdiag.AppendFromErr(diags, err) } var routeFinder RouteFinder - switch destinationAttributeKey { case routeDestinationCIDRBlock: routeFinder = FindRouteByIPv4Destination @@ -288,8 +290,9 @@ func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interfa } routeTableID := d.Get("route_table_id").(string) - - route, err := routeFinder(ctx, conn, routeTableID, destination) + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() 
(interface{}, error) { + return routeFinder(ctx, conn, routeTableID, destination) + }, d.IsNewResource()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Route in Route Table (%s) with destination (%s) not found, removing from state", routeTableID, destination) @@ -301,6 +304,7 @@ func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interfa return sdkdiag.AppendErrorf(diags, "reading Route in Route Table (%s) with destination (%s): %s", routeTableID, destination, err) } + route := outputRaw.(*ec2.Route) d.Set("carrier_gateway_id", route.CarrierGatewayId) d.Set("core_network_arn", route.CoreNetworkArn) d.Set(routeDestinationCIDRBlock, route.DestinationCidrBlock) @@ -330,18 +334,18 @@ func resourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceRouteUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) destinationAttributeKey, destination, err := routeDestinationAttribute(d) if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Route: %s", err) + return sdkdiag.AppendFromErr(diags, err) } targetAttributeKey, target, err := routeTargetAttribute(d) if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Route: %s", err) + return sdkdiag.AppendFromErr(diags, err) } routeTableID := d.Get("route_table_id").(string) @@ -411,12 +415,12 @@ func resourceRouteUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) destinationAttributeKey, destination, err := routeDestinationAttribute(d) if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting Route: %s", err) + return sdkdiag.AppendFromErr(diags, err) } 
routeTableID := d.Get("route_table_id").(string) diff --git a/internal/service/ec2/vpc_route_data_source.go b/internal/service/ec2/vpc_route_data_source.go index c19081f418d..500cbf5a598 100644 --- a/internal/service/ec2/vpc_route_data_source.go +++ b/internal/service/ec2/vpc_route_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -106,7 +109,7 @@ func DataSourceRoute() *schema.Resource { func dataSourceRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTableID := d.Get("route_table_id").(string) diff --git a/internal/service/ec2/vpc_route_data_source_test.go b/internal/service/ec2/vpc_route_data_source_test.go index 08666b3abb3..fe19f41bb34 100644 --- a/internal/service/ec2/vpc_route_data_source_test.go +++ b/internal/service/ec2/vpc_route_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -510,7 +513,7 @@ resource "aws_internet_gateway" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/ec2/vpc_route_table.go b/internal/service/ec2/vpc_route_table.go index b1432e8f8be..b848dc5e83c 100644 --- a/internal/service/ec2/vpc_route_table.go +++ b/internal/service/ec2/vpc_route_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -19,6 +22,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + itypes "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -161,7 +165,7 @@ func ResourceRouteTable() *schema.Resource { func resourceRouteTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateRouteTableInput{ VpcId: aws.String(d.Get("vpc_id").(string)), @@ -205,9 +209,11 @@ func resourceRouteTableCreate(ctx context.Context, d *schema.ResourceData, meta func resourceRouteTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - routeTable, err := FindRouteTableByID(ctx, conn, d.Id()) + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { + return FindRouteTableByID(ctx, conn, d.Id()) + }, d.IsNewResource()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Route Table (%s) not found, removing from state", d.Id()) @@ -219,6 +225,7 @@ func resourceRouteTableRead(ctx context.Context, d *schema.ResourceData, meta in return sdkdiag.AppendErrorf(diags, "reading Route Table (%s): %s", d.Id(), err) } + routeTable := outputRaw.(*ec2.RouteTable) ownerID := aws.StringValue(routeTable.OwnerId) arn := arn.ARN{ Partition: meta.(*conns.AWSClient).Partition, @@ -242,14 +249,14 @@ func resourceRouteTableRead(ctx context.Context, d *schema.ResourceData, meta in 
d.Set("vpc_id", routeTable.VpcId) // Ignore the AmazonFSx service tag in addition to standard ignores. - SetTagsOut(ctx, Tags(KeyValueTags(ctx, routeTable.Tags).Ignore(tftags.New(ctx, []string{"AmazonFSx"})))) + setTagsOut(ctx, Tags(KeyValueTags(ctx, routeTable.Tags).Ignore(tftags.New(ctx, []string{"AmazonFSx"})))) return diags } func resourceRouteTableUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("propagating_vgws") { o, n := d.GetChange("propagating_vgws") @@ -340,7 +347,7 @@ func resourceRouteTableUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceRouteTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTable, err := FindRouteTableByID(ctx, conn, d.Id()) @@ -385,7 +392,7 @@ func resourceRouteTableHash(v interface{}) int { } if v, ok := m["ipv6_cidr_block"]; ok { - buf.WriteString(fmt.Sprintf("%s-", verify.CanonicalCIDRBlock(v.(string)))) + buf.WriteString(fmt.Sprintf("%s-", itypes.CanonicalCIDRBlock(v.(string)))) } if v, ok := m["cidr_block"]; ok { diff --git a/internal/service/ec2/vpc_route_table_association.go b/internal/service/ec2/vpc_route_table_association.go index a31767f2b83..5fce2f8e86c 100644 --- a/internal/service/ec2/vpc_route_table_association.go +++ b/internal/service/ec2/vpc_route_table_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -57,7 +60,7 @@ func ResourceRouteTableAssociation() *schema.Resource { func resourceRouteTableAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTableID := d.Get("route_table_id").(string) input := &ec2.AssociateRouteTableInput{ @@ -72,8 +75,7 @@ func resourceRouteTableAssociationCreate(ctx context.Context, d *schema.Resource input.SubnetId = aws.String(v.(string)) } - log.Printf("[DEBUG] Creating Route Table Association: %s", input) - output, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, RouteTableAssociationPropagationTimeout, + output, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, ec2PropagationTimeout, func() (interface{}, error) { return conn.AssociateRouteTableWithContext(ctx, input) }, @@ -86,7 +88,6 @@ func resourceRouteTableAssociationCreate(ctx context.Context, d *schema.Resource d.SetId(aws.StringValue(output.(*ec2.AssociateRouteTableOutput).AssociationId)) - log.Printf("[DEBUG] Waiting for Route Table Association (%s) creation", d.Id()) if _, err := WaitRouteTableAssociationCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for Route Table Association (%s) create: %s", d.Id(), err) } @@ -96,9 +97,9 @@ func resourceRouteTableAssociationCreate(ctx context.Context, d *schema.Resource func resourceRouteTableAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return 
FindRouteTableAssociationByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -123,7 +124,7 @@ func resourceRouteTableAssociationRead(ctx context.Context, d *schema.ResourceDa func resourceRouteTableAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.ReplaceRouteTableAssociationInput{ AssociationId: aws.String(d.Id()), @@ -150,7 +151,6 @@ func resourceRouteTableAssociationUpdate(ctx context.Context, d *schema.Resource d.SetId(aws.StringValue(output.NewAssociationId)) - log.Printf("[DEBUG] Waiting for Route Table Association (%s) update", d.Id()) if _, err := WaitRouteTableAssociationUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for Route Table Association (%s) update: %s", d.Id(), err) } @@ -160,7 +160,7 @@ func resourceRouteTableAssociationUpdate(ctx context.Context, d *schema.Resource func resourceRouteTableAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if err := routeTableAssociationDelete(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { return sdkdiag.AppendFromErr(diags, err) @@ -179,7 +179,7 @@ func resourceRouteTableAssociationImport(ctx context.Context, d *schema.Resource log.Printf("[DEBUG] Importing route table association, target: %s, route table: %s", targetID, routeTableID) - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) routeTable, err := FindRouteTableByID(ctx, conn, routeTableID) @@ -230,7 +230,6 @@ func routeTableAssociationDelete(ctx context.Context, conn *ec2.EC2, association return fmt.Errorf("deleting Route Table Association (%s): %w", associationID, err) } - 
log.Printf("[DEBUG] Waiting for Route Table Association (%s) deletion", associationID) if _, err := WaitRouteTableAssociationDeleted(ctx, conn, associationID, timeout); err != nil { return fmt.Errorf("deleting Route Table Association (%s): waiting for completion: %w", associationID, err) } diff --git a/internal/service/ec2/vpc_route_table_association_test.go b/internal/service/ec2/vpc_route_table_association_test.go index 49db9c6834d..997095bb939 100644 --- a/internal/service/ec2/vpc_route_table_association_test.go +++ b/internal/service/ec2/vpc_route_table_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -175,7 +178,7 @@ func TestAccVPCRouteTableAssociation_disappears(t *testing.T) { func testAccCheckRouteTableAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route_table_association" { @@ -210,7 +213,7 @@ func testAccCheckRouteTableAssociationExists(ctx context.Context, n string, v *e return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) association, err := tfec2.FindRouteTableAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_route_table_data_source.go b/internal/service/ec2/vpc_route_table_data_source.go index 3ac28d073a0..4874fee9d5f 100644 --- a/internal/service/ec2/vpc_route_table_data_source.go +++ b/internal/service/ec2/vpc_route_table_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -181,7 +184,7 @@ func DataSourceRouteTable() *schema.Resource { func dataSourceRouteTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig req := &ec2.DescribeRouteTablesInput{} diff --git a/internal/service/ec2/vpc_route_table_data_source_test.go b/internal/service/ec2/vpc_route_table_data_source_test.go index f6f907793cd..32c45275275 100644 --- a/internal/service/ec2/vpc_route_table_data_source_test.go +++ b/internal/service/ec2/vpc_route_table_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_route_table_test.go b/internal/service/ec2/vpc_route_table_test.go index c35184c418b..33bbfb496c6 100644 --- a/internal/service/ec2/vpc_route_table_test.go +++ b/internal/service/ec2/vpc_route_table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -1041,7 +1044,7 @@ func testAccCheckRouteTableExists(ctx context.Context, n string, v *ec2.RouteTab return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) routeTable, err := tfec2.FindRouteTableByID(ctx, conn, rs.Primary.ID) @@ -1057,7 +1060,7 @@ func testAccCheckRouteTableExists(ctx context.Context, n string, v *ec2.RouteTab func testAccCheckRouteTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route_table" { @@ -1143,7 +1146,7 @@ func testAccCheckRouteTablePrefixListRoute(resourceName, prefixListResourceName, // a route to the specified VPC endpoint's prefix list to appear in the specified route table. func testAccCheckRouteTableWaitForVPCEndpointRoute(ctx context.Context, routeTable *ec2.RouteTable, vpce *ec2.VpcEndpoint) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) resp, err := conn.DescribePrefixListsWithContext(ctx, &ec2.DescribePrefixListsInput{ Filters: tfec2.BuildAttributeFilterList(map[string]string{ @@ -1891,7 +1894,7 @@ resource "aws_internet_gateway" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/ec2/vpc_route_tables_data_source.go b/internal/service/ec2/vpc_route_tables_data_source.go index 294f975e920..37366547d0c 100644 --- a/internal/service/ec2/vpc_route_tables_data_source.go +++ b/internal/service/ec2/vpc_route_tables_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -40,7 +43,7 @@ func DataSourceRouteTables() *schema.Resource { func dataSourceRouteTablesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeRouteTablesInput{} diff --git a/internal/service/ec2/vpc_route_tables_data_source_test.go b/internal/service/ec2/vpc_route_tables_data_source_test.go index 8dbf6a99c79..db93c5fe53c 100644 --- a/internal/service/ec2/vpc_route_tables_data_source_test.go +++ b/internal/service/ec2/vpc_route_tables_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_route_test.go b/internal/service/ec2/vpc_route_test.go index a8af4cdada3..12923400853 100644 --- a/internal/service/ec2/vpc_route_test.go +++ b/internal/service/ec2/vpc_route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -2212,7 +2215,7 @@ func testAccCheckRouteExists(ctx context.Context, n string, v *ec2.Route) resour return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) var route *ec2.Route var err error @@ -2241,7 +2244,7 @@ func testAccCheckRouteDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) var err error if v := rs.Primary.Attributes["destination_cidr_block"]; v != "" { @@ -3155,7 +3158,7 @@ resource "aws_internet_gateway" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -3547,7 +3550,7 @@ resource "aws_vpc_peering_connection" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -4164,7 +4167,7 @@ resource "aws_internet_gateway" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/ec2/vpc_security_group.go b/internal/service/ec2/vpc_security_group.go index 2d7d00102ec..0adcac865df 100644 --- a/internal/service/ec2/vpc_security_group.go +++ b/internal/service/ec2/vpc_security_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -179,7 +182,7 @@ var ( ) func resourceSecurityGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) inputC := &ec2.CreateSecurityGroupInput{ @@ -257,7 +260,7 @@ func resourceSecurityGroupCreate(ctx context.Context, d *schema.ResourceData, me } func resourceSecurityGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) sg, err := FindSecurityGroupByID(ctx, conn, d.Id()) @@ -305,13 +308,13 @@ func resourceSecurityGroupRead(ctx context.Context, d *schema.ResourceData, meta return diag.Errorf("setting egress: %s", err) } - SetTagsOut(ctx, sg.Tags) + setTagsOut(ctx, sg.Tags) return nil } func resourceSecurityGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) group, err := FindSecurityGroupByID(ctx, conn, d.Id()) @@ -335,7 +338,7 @@ func resourceSecurityGroupUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceSecurityGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if err := deleteLingeringENIs(ctx, conn, "group-id", d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { return diag.Errorf("deleting ENIs using Security Group (%s): %s", d.Id(), err) @@ -397,7 +400,7 @@ func resourceSecurityGroupDelete(ctx context.Context, d *schema.ResourceData, me return diag.Errorf("deleting Security Group (%s): %s", d.Id(), err) } - _, err = tfresource.RetryUntilNotFound(ctx, propagationTimeout, func() 
(interface{}, error) { + _, err = tfresource.RetryUntilNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindSecurityGroupByID(ctx, conn, d.Id()) }) diff --git a/internal/service/ec2/vpc_security_group_data_source.go b/internal/service/ec2/vpc_security_group_data_source.go index b9215fbdde6..29e1d0f63d2 100644 --- a/internal/service/ec2/vpc_security_group_data_source.go +++ b/internal/service/ec2/vpc_security_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -55,7 +58,7 @@ func DataSourceSecurityGroup() *schema.Resource { } func dataSourceSecurityGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeSecurityGroupsInput{ diff --git a/internal/service/ec2/vpc_security_group_data_source_test.go b/internal/service/ec2/vpc_security_group_data_source_test.go index 6c7e6c3e0a6..50b184802e7 100644 --- a/internal/service/ec2/vpc_security_group_data_source_test.go +++ b/internal/service/ec2/vpc_security_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_egress_rule.go b/internal/service/ec2/vpc_security_group_egress_rule.go index 49b346feb03..3bd4bf886f6 100644 --- a/internal/service/ec2/vpc_security_group_egress_rule.go +++ b/internal/service/ec2/vpc_security_group_egress_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -6,7 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform-plugin-framework/resource" - "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" ) // @FrameworkResource(name="Security Group Egress Rule") @@ -29,7 +32,7 @@ func (r *resourceSecurityGroupEgressRule) Metadata(_ context.Context, request re } func (r *resourceSecurityGroupEgressRule) createSecurityGroupRule(ctx context.Context, data *resourceSecurityGroupRuleData) (string, error) { - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) input := &ec2.AuthorizeSecurityGroupEgressInput{ GroupId: flex.StringFromFramework(ctx, data.SecurityGroupID), @@ -46,7 +49,7 @@ func (r *resourceSecurityGroupEgressRule) createSecurityGroupRule(ctx context.Co } func (r *resourceSecurityGroupEgressRule) deleteSecurityGroupRule(ctx context.Context, data *resourceSecurityGroupRuleData) error { - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) _, err := conn.RevokeSecurityGroupEgressWithContext(ctx, &ec2.RevokeSecurityGroupEgressInput{ GroupId: flex.StringFromFramework(ctx, data.SecurityGroupID), @@ -57,7 +60,7 @@ func (r *resourceSecurityGroupEgressRule) deleteSecurityGroupRule(ctx context.Co } func (r *resourceSecurityGroupEgressRule) findSecurityGroupRuleByID(ctx context.Context, id string) (*ec2.SecurityGroupRule, error) { - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) return FindSecurityGroupEgressRuleByID(ctx, conn, id) } diff --git a/internal/service/ec2/vpc_security_group_egress_rule_test.go b/internal/service/ec2/vpc_security_group_egress_rule_test.go index 8eb0ca39976..1666628e3df 100644 --- a/internal/service/ec2/vpc_security_group_egress_rule_test.go +++ b/internal/service/ec2/vpc_security_group_egress_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -79,7 +82,7 @@ func TestAccVPCSecurityGroupEgressRule_disappears(t *testing.T) { func testAccCheckSecurityGroupEgressRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_security_group_egress_rule" { @@ -114,7 +117,7 @@ func testAccCheckSecurityGroupEgressRuleExists(ctx context.Context, n string, v return fmt.Errorf("No VPC Security Group Egress Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSecurityGroupEgressRuleByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_security_group_ingress_rule.go b/internal/service/ec2/vpc_security_group_ingress_rule.go index cd206aa8547..0b867c6616a 100644 --- a/internal/service/ec2/vpc_security_group_ingress_rule.go +++ b/internal/service/ec2/vpc_security_group_ingress_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -19,8 +22,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-log/tflog" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" fwvalidators "github.com/hashicorp/terraform-provider-aws/internal/framework/validators" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -47,7 +50,7 @@ func (r *resourceSecurityGroupIngressRule) Metadata(_ context.Context, request r } func (r *resourceSecurityGroupIngressRule) createSecurityGroupRule(ctx context.Context, data *resourceSecurityGroupRuleData) (string, error) { - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) input := &ec2.AuthorizeSecurityGroupIngressInput{ GroupId: flex.StringFromFramework(ctx, data.SecurityGroupID), @@ -64,7 +67,7 @@ func (r *resourceSecurityGroupIngressRule) createSecurityGroupRule(ctx context.C } func (r *resourceSecurityGroupIngressRule) deleteSecurityGroupRule(ctx context.Context, data *resourceSecurityGroupRuleData) error { - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) _, err := conn.RevokeSecurityGroupIngressWithContext(ctx, &ec2.RevokeSecurityGroupIngressInput{ GroupId: flex.StringFromFramework(ctx, data.SecurityGroupID), @@ -75,7 +78,7 @@ func (r *resourceSecurityGroupIngressRule) deleteSecurityGroupRule(ctx context.C } func (r *resourceSecurityGroupIngressRule) findSecurityGroupRuleByID(ctx context.Context, id string) (*ec2.SecurityGroupRule, error) { - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) return FindSecurityGroupIngressRuleByID(ctx, conn, id) } @@ -134,7 +137,7 @@ func (r *resourceSecurityGroupRule) Schema(ctx context.Context, req 
resource.Sch Optional: true, }, "security_group_id": schema.StringAttribute{ - Optional: true, + Required: true, PlanModifiers: []planmodifier.String{ stringplanmodifier.RequiresReplace(), }, @@ -176,9 +179,9 @@ func (r *resourceSecurityGroupRule) Create(ctx context.Context, request resource data.ID = types.StringValue(securityGroupRuleID) - conn := r.Meta().EC2Conn() - if err := UpdateTags(ctx, conn, data.ID.ValueString(), nil, KeyValueTags(ctx, GetTagsIn(ctx))); err != nil { - response.Diagnostics.AddError(fmt.Sprintf("adding VPC Security Group Rule (%s) tags", data.ID.ValueString()), err.Error()) + conn := r.Meta().EC2Conn(ctx) + if err := createTags(ctx, conn, data.ID.ValueString(), getTagsIn(ctx)); err != nil { + response.Diagnostics.AddError(fmt.Sprintf("setting VPC Security Group Rule (%s) tags", data.ID.ValueString()), err.Error()) return } @@ -238,7 +241,7 @@ func (r *resourceSecurityGroupRule) Read(ctx context.Context, request resource.R data.ToPort = flex.Int64ToFramework(ctx, output.ToPort) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) response.Diagnostics.Append(response.State.Set(ctx, &data)...) } @@ -258,7 +261,7 @@ func (r *resourceSecurityGroupRule) Update(ctx context.Context, request resource return } - conn := r.Meta().EC2Conn() + conn := r.Meta().EC2Conn(ctx) if !new.CIDRIPv4.Equal(old.CIDRIPv4) || !new.CIDRIPv6.Equal(old.CIDRIPv6) || diff --git a/internal/service/ec2/vpc_security_group_ingress_rule_test.go b/internal/service/ec2/vpc_security_group_ingress_rule_test.go index 841cfd8bd95..96fda4824d8 100644 --- a/internal/service/ec2/vpc_security_group_ingress_rule_test.go +++ b/internal/service/ec2/vpc_security_group_ingress_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -1005,7 +1008,7 @@ func testAccCheckSecurityGroupRuleRecreated(i, j *ec2.SecurityGroupRule) resourc func testAccCheckSecurityGroupIngressRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc_security_group_ingress_rule" { @@ -1040,7 +1043,7 @@ func testAccCheckSecurityGroupIngressRuleExists(ctx context.Context, n string, v return fmt.Errorf("No VPC Security Group Ingress Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSecurityGroupIngressRuleByID(ctx, conn, rs.Primary.ID) @@ -1056,7 +1059,7 @@ func testAccCheckSecurityGroupIngressRuleExists(ctx context.Context, n string, v func testAccCheckSecurityGroupIngressRuleUpdateTags(ctx context.Context, v *ec2.SecurityGroupRule, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) return tfec2.UpdateTags(ctx, conn, aws.StringValue(v.SecurityGroupRuleId), oldTags, newTags) } @@ -1116,7 +1119,7 @@ resource "aws_vpc_security_group_ingress_rule" "test" { func testAccVPCSecurityGroupIngressRuleConfig_computed(rName string) string { return acctest.ConfigCompose(testAccVPCSecurityGroupRuleConfig_base(rName), ` resource "aws_eip" "test" { - vpc = true + domain = "vpc" } resource "aws_vpc_security_group_ingress_rule" "test" { diff --git a/internal/service/ec2/vpc_security_group_migrate.go b/internal/service/ec2/vpc_security_group_migrate.go index a83fe431349..385baec959e 100644 --- 
a/internal/service/ec2/vpc_security_group_migrate.go +++ b/internal/service/ec2/vpc_security_group_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/vpc_security_group_migrate_test.go b/internal/service/ec2/vpc_security_group_migrate_test.go index 51f789f405e..8d54ce6de9c 100644 --- a/internal/service/ec2/vpc_security_group_migrate_test.go +++ b/internal/service/ec2/vpc_security_group_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_rule.go b/internal/service/ec2/vpc_security_group_rule.go index bdd14abe4bc..30f2264632d 100644 --- a/internal/service/ec2/vpc_security_group_rule.go +++ b/internal/service/ec2/vpc_security_group_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -149,7 +152,7 @@ func ResourceSecurityGroupRule() *schema.Resource { } func resourceSecurityGroupRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) securityGroupID := d.Get("security_group_id").(string) conns.GlobalMutexKV.Lock(securityGroupID) @@ -247,7 +250,7 @@ information and instructions for recovery. 
Error: %s`, securityGroupID, err) } func resourceSecurityGroupRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) securityGroupID := d.Get("security_group_id").(string) ruleType := d.Get("type").(string) @@ -309,7 +312,7 @@ func resourceSecurityGroupRuleRead(ctx context.Context, d *schema.ResourceData, } func resourceSecurityGroupRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("description") { securityGroupID := d.Get("security_group_id").(string) @@ -353,7 +356,7 @@ func resourceSecurityGroupRuleUpdate(ctx context.Context, d *schema.ResourceData } func resourceSecurityGroupRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) securityGroupID := d.Get("security_group_id").(string) conns.GlobalMutexKV.Lock(securityGroupID) diff --git a/internal/service/ec2/vpc_security_group_rule_data_source.go b/internal/service/ec2/vpc_security_group_rule_data_source.go index 82515feef67..ffa261acaf6 100644 --- a/internal/service/ec2/vpc_security_group_rule_data_source.go +++ b/internal/service/ec2/vpc_security_group_rule_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -11,8 +14,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -88,7 +91,7 @@ func (d *dataSourceSecurityGroupRule) Read(ctx context.Context, request datasour return } - conn := d.Meta().EC2Conn() + conn := d.Meta().EC2Conn(ctx) ignoreTagsConfig := d.Meta().IgnoreTagsConfig input := &ec2.DescribeSecurityGroupRulesInput{ diff --git a/internal/service/ec2/vpc_security_group_rule_data_source_test.go b/internal/service/ec2/vpc_security_group_rule_data_source_test.go index 4e46a4e7a3c..7907d0dadb2 100644 --- a/internal/service/ec2/vpc_security_group_rule_data_source_test.go +++ b/internal/service/ec2/vpc_security_group_rule_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_rule_migrate.go b/internal/service/ec2/vpc_security_group_rule_migrate.go index 5e06fb82b62..f3b48fc5b06 100644 --- a/internal/service/ec2/vpc_security_group_rule_migrate.go +++ b/internal/service/ec2/vpc_security_group_rule_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -37,7 +40,7 @@ func migrateSGRuleStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceS perm, err := migrateExpandIPPerm(is.Attributes) if err != nil { - return nil, fmt.Errorf("Error making new IP Permission in Security Group migration") + return nil, fmt.Errorf("making new IP Permission in Security Group migration") } log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) @@ -52,12 +55,12 @@ func migrateExpandIPPerm(attrs map[string]string) (*ec2.IpPermission, error) { var perm ec2.IpPermission tp, err := strconv.Atoi(attrs["to_port"]) if err != nil { - return nil, fmt.Errorf("Error converting to_port in Security Group migration") + return nil, fmt.Errorf("converting to_port in Security Group migration") } fp, err := strconv.Atoi(attrs["from_port"]) if err != nil { - return nil, fmt.Errorf("Error converting from_port in Security Group migration") + return nil, fmt.Errorf("converting from_port in Security Group migration") } perm.ToPort = aws.Int64(int64(tp)) diff --git a/internal/service/ec2/vpc_security_group_rule_migrate_test.go b/internal/service/ec2/vpc_security_group_rule_migrate_test.go index 36ee22af7a5..f175474ebc7 100644 --- a/internal/service/ec2/vpc_security_group_rule_migrate_test.go +++ b/internal/service/ec2/vpc_security_group_rule_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_rule_test.go b/internal/service/ec2/vpc_security_group_rule_test.go index 35d2ff86577..357770b9cba 100644 --- a/internal/service/ec2/vpc_security_group_rule_test.go +++ b/internal/service/ec2/vpc_security_group_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_rules_data_source.go b/internal/service/ec2/vpc_security_group_rules_data_source.go index 0ff144d374f..8c66dcc1fe3 100644 --- a/internal/service/ec2/vpc_security_group_rules_data_source.go +++ b/internal/service/ec2/vpc_security_group_rules_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -8,8 +11,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) @@ -51,7 +54,7 @@ func (d *dataSourceSecurityGroupRules) Read(ctx context.Context, request datasou return } - conn := d.Meta().EC2Conn() + conn := d.Meta().EC2Conn(ctx) input := &ec2.DescribeSecurityGroupRulesInput{ Filters: append(BuildCustomFilters(ctx, data.Filters), BuildTagFilterList(Tags(tftags.New(ctx, data.Tags)))...), diff --git a/internal/service/ec2/vpc_security_group_rules_data_source_test.go b/internal/service/ec2/vpc_security_group_rules_data_source_test.go index 6e5132a8db7..8b1ffacfeef 100644 --- a/internal/service/ec2/vpc_security_group_rules_data_source_test.go +++ b/internal/service/ec2/vpc_security_group_rules_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_rules_matching_test.go b/internal/service/ec2/vpc_security_group_rules_matching_test.go index cdf88f52f89..2bb30d811c1 100644 --- a/internal/service/ec2/vpc_security_group_rules_matching_test.go +++ b/internal/service/ec2/vpc_security_group_rules_matching_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_security_group_test.go b/internal/service/ec2/vpc_security_group_test.go index ceec0ead0b3..65f5b32b55d 100644 --- a/internal/service/ec2/vpc_security_group_test.go +++ b/internal/service/ec2/vpc_security_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -2527,7 +2530,7 @@ func TestAccVPCSecurityGroup_RuleLimit_cidrBlockExceededAppend(t *testing.T) { id := aws.StringValue(group.GroupId) - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) match, err := tfec2.FindSecurityGroupByID(ctx, conn, id) if tfresource.NotFound(err) { @@ -2748,7 +2751,7 @@ func testAddRuleCycle(ctx context.Context, primary, secondary *ec2.SecurityGroup return fmt.Errorf("Secondary SG not set for TestAccAWSSecurityGroup_forceRevokeRules_should_fail") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) // cycle from primary to secondary perm1 := cycleIPPermForGroup(aws.StringValue(secondary.GroupId)) @@ -2788,7 +2791,7 @@ func testRemoveRuleCycle(ctx context.Context, primary, secondary *ec2.SecurityGr return fmt.Errorf("Secondary SG not set for TestAccAWSSecurityGroup_forceRevokeRules_should_fail") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, sg 
:= range []*ec2.SecurityGroup{primary, secondary} { var err error if sg.IpPermissions != nil { @@ -2819,7 +2822,7 @@ func testRemoveRuleCycle(ctx context.Context, primary, secondary *ec2.SecurityGr func testAccCheckSecurityGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_security_group" { @@ -2854,7 +2857,7 @@ func testAccCheckSecurityGroupExists(ctx context.Context, n string, v *ec2.Secur return fmt.Errorf("No VPC Security Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSecurityGroupByID(ctx, conn, rs.Primary.ID) @@ -2898,7 +2901,7 @@ func testAccCheckSecurityGroupRuleCount(ctx context.Context, group *ec2.Security } func testSecurityGroupRuleCount(ctx context.Context, id string, expectedIngressCount, expectedEgressCount int) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) group, err := tfec2.FindSecurityGroupByID(ctx, conn, id) if tfresource.NotFound(err) { @@ -4767,7 +4770,7 @@ resource "aws_internet_gateway" "gw" { # elastic ip for NAT gateway resource "aws_eip" "nat" { - vpc = true + domain = "vpc" tags = { Name = %[1]q } diff --git a/internal/service/ec2/vpc_security_groups_data_source.go b/internal/service/ec2/vpc_security_groups_data_source.go index fac761fa442..50382404ef6 100644 --- a/internal/service/ec2/vpc_security_groups_data_source.go +++ b/internal/service/ec2/vpc_security_groups_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -46,7 +49,7 @@ func DataSourceSecurityGroups() *schema.Resource { } func dataSourceSecurityGroupsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSecurityGroupsInput{} diff --git a/internal/service/ec2/vpc_security_groups_data_source_test.go b/internal/service/ec2/vpc_security_groups_data_source_test.go index 7bfbad0ea62..422ab006676 100644 --- a/internal/service/ec2/vpc_security_groups_data_source_test.go +++ b/internal/service/ec2/vpc_security_groups_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_subnet.go b/internal/service/ec2/vpc_subnet.go index 374891de6d7..f32b5301820 100644 --- a/internal/service/ec2/vpc_subnet.go +++ b/internal/service/ec2/vpc_subnet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -154,7 +157,7 @@ func ResourceSubnet() *schema.Resource { func resourceSubnetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateSubnetInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeSubnet), @@ -223,9 +226,9 @@ func resourceSubnetCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSubnetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) - outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, SubnetPropagationTimeout, func() (interface{}, error) { + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) { return FindSubnetByID(ctx, conn, d.Id()) }, d.IsNewResource()) @@ -278,14 +281,14 @@ func resourceSubnetRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("private_dns_hostname_type_on_launch", nil) } - SetTagsOut(ctx, subnet.Tags) + setTagsOut(ctx, subnet.Tags) return diags } func resourceSubnetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) // You cannot modify multiple subnet attributes in the same request, // except CustomerOwnedIpv4Pool and MapCustomerOwnedIpOnLaunch. 
@@ -358,7 +361,7 @@ func resourceSubnetUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSubnetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 Subnet: %s", d.Id()) diff --git a/internal/service/ec2/vpc_subnet_cidr_reservation.go b/internal/service/ec2/vpc_subnet_cidr_reservation.go index 0c71fb53b8b..6a3c06368f8 100644 --- a/internal/service/ec2/vpc_subnet_cidr_reservation.go +++ b/internal/service/ec2/vpc_subnet_cidr_reservation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -73,7 +76,7 @@ func ResourceSubnetCIDRReservation() *schema.Resource { func resourceSubnetCIDRReservationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateSubnetCidrReservationInput{ Cidr: aws.String(d.Get("cidr_block").(string)), @@ -99,7 +102,7 @@ func resourceSubnetCIDRReservationCreate(ctx context.Context, d *schema.Resource func resourceSubnetCIDRReservationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) output, err := FindSubnetCIDRReservationBySubnetIDAndReservationID(ctx, conn, d.Get("subnet_id").(string), d.Id()) @@ -124,7 +127,7 @@ func resourceSubnetCIDRReservationRead(ctx context.Context, d *schema.ResourceDa func resourceSubnetCIDRReservationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting 
EC2 Subnet CIDR Reservation: %s", d.Id()) _, err := conn.DeleteSubnetCidrReservationWithContext(ctx, &ec2.DeleteSubnetCidrReservationInput{ diff --git a/internal/service/ec2/vpc_subnet_cidr_reservation_test.go b/internal/service/ec2/vpc_subnet_cidr_reservation_test.go index 6c9c6b35c0c..eeafc378eb6 100644 --- a/internal/service/ec2/vpc_subnet_cidr_reservation_test.go +++ b/internal/service/ec2/vpc_subnet_cidr_reservation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -112,7 +115,7 @@ func testAccCheckSubnetCIDRReservationExists(ctx context.Context, n string, v *e return fmt.Errorf("No EC2 Subnet CIDR Reservation ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSubnetCIDRReservationBySubnetIDAndReservationID(ctx, conn, rs.Primary.Attributes["subnet_id"], rs.Primary.ID) @@ -128,7 +131,7 @@ func testAccCheckSubnetCIDRReservationExists(ctx context.Context, n string, v *e func testAccCheckSubnetCIDRReservationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_subnet_cidr_reservation" { diff --git a/internal/service/ec2/vpc_subnet_data_source.go b/internal/service/ec2/vpc_subnet_data_source.go index b8615136e3b..4f87e826a42 100644 --- a/internal/service/ec2/vpc_subnet_data_source.go +++ b/internal/service/ec2/vpc_subnet_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -132,7 +135,7 @@ func DataSourceSubnet() *schema.Resource { func dataSourceSubnetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeSubnetsInput{} diff --git a/internal/service/ec2/vpc_subnet_data_source_test.go b/internal/service/ec2/vpc_subnet_data_source_test.go index c0f2d1776d3..767513f3154 100644 --- a/internal/service/ec2/vpc_subnet_data_source_test.go +++ b/internal/service/ec2/vpc_subnet_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_subnet_migrate.go b/internal/service/ec2/vpc_subnet_migrate.go index 554dee657be..c5b5575cd21 100644 --- a/internal/service/ec2/vpc_subnet_migrate.go +++ b/internal/service/ec2/vpc_subnet_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( diff --git a/internal/service/ec2/vpc_subnet_migrate_test.go b/internal/service/ec2/vpc_subnet_migrate_test.go index 1be26e60b41..087ec548eb5 100644 --- a/internal/service/ec2/vpc_subnet_migrate_test.go +++ b/internal/service/ec2/vpc_subnet_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpc_subnet_test.go b/internal/service/ec2/vpc_subnet_test.go index f44723645b3..94950ce25bd 100644 --- a/internal/service/ec2/vpc_subnet_test.go +++ b/internal/service/ec2/vpc_subnet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -952,7 +955,7 @@ func testAccCheckSubnetNotRecreated(t *testing.T, before, after *ec2.Subnet) res func testAccCheckSubnetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_subnet" { @@ -987,7 +990,7 @@ func testAccCheckSubnetExists(ctx context.Context, n string, v *ec2.Subnet) reso return fmt.Errorf("No EC2 Subnet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindSubnetByID(ctx, conn, rs.Primary.ID) @@ -1003,7 +1006,7 @@ func testAccCheckSubnetExists(ctx context.Context, n string, v *ec2.Subnet) reso func testAccCheckSubnetUpdateTags(ctx context.Context, subnet *ec2.Subnet, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) return tfec2.UpdateTags(ctx, conn, aws.StringValue(subnet.SubnetId), oldTags, newTags) } diff --git a/internal/service/ec2/vpc_subnets_data_source.go b/internal/service/ec2/vpc_subnets_data_source.go index d3d27d22512..1a277f7732a 100644 --- a/internal/service/ec2/vpc_subnets_data_source.go +++ b/internal/service/ec2/vpc_subnets_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceSubnets() *schema.Resource { func dataSourceSubnetsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeSubnetsInput{} diff --git a/internal/service/ec2/vpc_subnets_data_source_test.go b/internal/service/ec2/vpc_subnets_data_source_test.go index fead1db346a..cb6a1155c68 100644 --- a/internal/service/ec2/vpc_subnets_data_source_test.go +++ b/internal/service/ec2/vpc_subnets_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -27,7 +30,7 @@ func TestAccVPCSubnetsDataSource_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr("data.aws_subnets.selected", "ids.#", "4"), resource.TestCheckResourceAttr("data.aws_subnets.private", "ids.#", "2"), - acctest.CheckResourceAttrGreaterThanValue("data.aws_subnets.all", "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_subnets.all", "ids.#", 0), resource.TestCheckResourceAttr("data.aws_subnets.none", "ids.#", "0"), ), }, diff --git a/internal/service/ec2/vpc_test.go b/internal/service/ec2/vpc_test.go index 22f7762588f..ae5c1e31e0f 100644 --- a/internal/service/ec2/vpc_test.go +++ b/internal/service/ec2/vpc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -91,7 +94,7 @@ func TestAccVPC_disappears(t *testing.T) { func TestAccVPC_tags(t *testing.T) { ctx := acctest.Context(t) - var vpc ec2.Vpc + var vpc1, vpc2, vpc3 ec2.Vpc resourceName := "aws_vpc.test" resource.ParallelTest(t, resource.TestCase{ @@ -103,7 +106,7 @@ func TestAccVPC_tags(t *testing.T) { { Config: testAccVPCConfig_tags1("key1", "value1"), Check: resource.ComposeTestCheckFunc( - acctest.CheckVPCExists(ctx, resourceName, &vpc), + acctest.CheckVPCExists(ctx, resourceName, &vpc1), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -116,7 +119,8 @@ func TestAccVPC_tags(t *testing.T) { { Config: testAccVPCConfig_tags2("key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - acctest.CheckVPCExists(ctx, resourceName, &vpc), + acctest.CheckVPCExists(ctx, resourceName, &vpc2), + testAccCheckVPCIDsEqual(&vpc2, &vpc1), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -125,7 +129,8 @@ func TestAccVPC_tags(t *testing.T) { { Config: testAccVPCConfig_tags1("key2", "value2"), Check: resource.ComposeTestCheckFunc( - acctest.CheckVPCExists(ctx, resourceName, &vpc), + acctest.CheckVPCExists(ctx, resourceName, &vpc3), + testAccCheckVPCIDsEqual(&vpc3, &vpc2), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -157,6 +162,28 @@ func TestAccVPC_tags_computed(t *testing.T) { }) } +func TestAccVPC_tags_null(t *testing.T) { + ctx := acctest.Context(t) + var vpc ec2.Vpc + resourceName := "aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + 
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCConfig_tags_null, + Check: resource.ComposeTestCheckFunc( + acctest.CheckVPCExists(ctx, resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + }, + }) +} + func TestAccVPC_DefaultTags_zeroValue(t *testing.T) { ctx := acctest.Context(t) var vpc ec2.Vpc @@ -501,6 +528,52 @@ func TestAccVPC_DefaultTagsProviderAndResource_duplicateTag(t *testing.T) { }) } +func TestAccVPC_DefaultTagsProviderAndResource_moveDuplicateTags(t *testing.T) { + ctx := acctest.Context(t) + var vpc ec2.Vpc + resourceName := "aws_vpc.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: nil, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccVPCConfig_tags1("overlapkey", "overlapvalue"), + ), + Check: resource.ComposeTestCheckFunc( + acctest.CheckVPCExists(ctx, resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + ), + }, + { + Config: acctest.ConfigCompose( + testAccVPCConfig_basic, + acctest.ConfigDefaultTags_Tags1("overlapkey", "overlapvalue"), + ), + Check: resource.ComposeTestCheckFunc( + acctest.CheckVPCExists(ctx, resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + ), + }, + { + Config: acctest.ConfigCompose( + testAccVPCConfig_tags1("overlapkey", "overlapvalue"), + ), + Check: resource.ComposeTestCheckFunc( + acctest.CheckVPCExists(ctx, resourceName, &vpc), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + ), + 
}, + }, + }) +} + // TestAccVPC_DynamicResourceTagsMergedWithLocals_ignoreChanges ensures computed "tags_all" // attributes are correctly determined when the provider-level default_tags block // is left unused and resource tags (merged with local.tags) are only known at apply time, @@ -1032,7 +1105,7 @@ func TestAccVPC_IPAMIPv6(t *testing.T) { func testAccCheckVPCDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpc" { @@ -1058,7 +1131,7 @@ func testAccCheckVPCDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckVPCUpdateTags(ctx context.Context, vpc *ec2.Vpc, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) return tfec2.UpdateTags(ctx, conn, aws.StringValue(vpc.VpcId), oldTags, newTags) } @@ -1127,7 +1200,7 @@ resource "aws_vpc" "test" { const testAccVPCConfig_tags_computed = ` resource "aws_eip" "test" { - vpc = true + domain = "vpc" } resource "aws_vpc" "test" { @@ -1139,6 +1212,16 @@ resource "aws_vpc" "test" { } ` +const testAccVPCConfig_tags_null = ` +resource "aws_vpc" "test" { + cidr_block = "10.1.0.0/16" + + tags = { + Name = null + } +} +` + func testAccVPCConfig_ignoreChangesDynamicTagsMergedLocals(localTagKey1, localTagValue1 string) string { return fmt.Sprintf(` locals { diff --git a/internal/service/ec2/vpc_traffic_mirror_filter.go b/internal/service/ec2/vpc_traffic_mirror_filter.go index 7514477c8ea..fb2cc5e88b4 100644 --- a/internal/service/ec2/vpc_traffic_mirror_filter.go +++ b/internal/service/ec2/vpc_traffic_mirror_filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -61,7 +64,7 @@ func ResourceTrafficMirrorFilter() *schema.Resource { func resourceTrafficMirrorFilterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTrafficMirrorFilterInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeTrafficMirrorFilter), @@ -97,7 +100,7 @@ func resourceTrafficMirrorFilterCreate(ctx context.Context, d *schema.ResourceDa func resourceTrafficMirrorFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) trafficMirrorFilter, err := FindTrafficMirrorFilterByID(ctx, conn, d.Id()) @@ -122,14 +125,14 @@ func resourceTrafficMirrorFilterRead(ctx context.Context, d *schema.ResourceData d.Set("description", trafficMirrorFilter.Description) d.Set("network_services", aws.StringValueSlice(trafficMirrorFilter.NetworkServices)) - SetTagsOut(ctx, trafficMirrorFilter.Tags) + setTagsOut(ctx, trafficMirrorFilter.Tags) return diags } func resourceTrafficMirrorFilterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChange("network_services") { input := &ec2.ModifyTrafficMirrorFilterNetworkServicesInput{ @@ -158,7 +161,7 @@ func resourceTrafficMirrorFilterUpdate(ctx context.Context, d *schema.ResourceDa func resourceTrafficMirrorFilterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Traffic Mirror Filter: %s", 
d.Id()) _, err := conn.DeleteTrafficMirrorFilterWithContext(ctx, &ec2.DeleteTrafficMirrorFilterInput{ diff --git a/internal/service/ec2/vpc_traffic_mirror_filter_rule.go b/internal/service/ec2/vpc_traffic_mirror_filter_rule.go index 044f7e629d8..794c3153364 100644 --- a/internal/service/ec2/vpc_traffic_mirror_filter_rule.go +++ b/internal/service/ec2/vpc_traffic_mirror_filter_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -116,7 +119,7 @@ func ResourceTrafficMirrorFilterRule() *schema.Resource { func resourceTrafficMirrorFilterRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTrafficMirrorFilterRuleInput{ DestinationCidrBlock: aws.String(d.Get("destination_cidr_block").(string)), @@ -156,7 +159,7 @@ func resourceTrafficMirrorFilterRuleCreate(ctx context.Context, d *schema.Resour func resourceTrafficMirrorFilterRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) rule, err := FindTrafficMirrorFilterRuleByTwoPartKey(ctx, conn, d.Get("traffic_mirror_filter_id").(string), d.Id()) @@ -206,7 +209,7 @@ func resourceTrafficMirrorFilterRuleRead(ctx context.Context, d *schema.Resource func resourceTrafficMirrorFilterRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.ModifyTrafficMirrorFilterRuleInput{ TrafficMirrorFilterRuleId: aws.String(d.Id()), @@ -285,7 +288,7 @@ func resourceTrafficMirrorFilterRuleUpdate(ctx context.Context, d *schema.Resour func resourceTrafficMirrorFilterRuleDelete(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Traffic Mirror Filter Rule: %s", d.Id()) _, err := conn.DeleteTrafficMirrorFilterRuleWithContext(ctx, &ec2.DeleteTrafficMirrorFilterRuleInput{ diff --git a/internal/service/ec2/vpc_traffic_mirror_filter_rule_test.go b/internal/service/ec2/vpc_traffic_mirror_filter_rule_test.go index ea6c4487639..e3d3a831dba 100644 --- a/internal/service/ec2/vpc_traffic_mirror_filter_rule_test.go +++ b/internal/service/ec2/vpc_traffic_mirror_filter_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -137,7 +140,7 @@ func TestAccVPCTrafficMirrorFilterRule_disappears(t *testing.T) { } func testAccPreCheckTrafficMirrorFilterRule(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := conn.DescribeTrafficMirrorFiltersWithContext(ctx, &ec2.DescribeTrafficMirrorFiltersInput{}) @@ -152,7 +155,7 @@ func testAccPreCheckTrafficMirrorFilterRule(ctx context.Context, t *testing.T) { func testAccCheckTrafficMirrorFilterRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_traffic_mirror_filter_rule" { @@ -198,7 +201,7 @@ func testAccCheckTrafficMirrorFilterRuleExists(ctx context.Context, n string) re return fmt.Errorf("No EC2 Traffic Mirror Filter Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := 
tfec2.FindTrafficMirrorFilterRuleByTwoPartKey(ctx, conn, rs.Primary.Attributes["traffic_mirror_filter_id"], rs.Primary.ID) diff --git a/internal/service/ec2/vpc_traffic_mirror_filter_test.go b/internal/service/ec2/vpc_traffic_mirror_filter_test.go index 61688fba043..cef28f67797 100644 --- a/internal/service/ec2/vpc_traffic_mirror_filter_test.go +++ b/internal/service/ec2/vpc_traffic_mirror_filter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -143,7 +146,7 @@ func TestAccVPCTrafficMirrorFilter_disappears(t *testing.T) { } func testAccPreCheckTrafficMirrorFilter(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := conn.DescribeTrafficMirrorFiltersWithContext(ctx, &ec2.DescribeTrafficMirrorFiltersInput{}) @@ -158,7 +161,7 @@ func testAccPreCheckTrafficMirrorFilter(ctx context.Context, t *testing.T) { func testAccCheckTrafficMirrorFilterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_traffic_mirror_filter" { @@ -193,7 +196,7 @@ func testAccCheckTrafficMirrorFilterExists(ctx context.Context, n string, v *ec2 return fmt.Errorf("No EC2 Traffic Mirror Filter ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTrafficMirrorFilterByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_traffic_mirror_session.go b/internal/service/ec2/vpc_traffic_mirror_session.go index 8788b3d7575..b7b39074748 100644 --- a/internal/service/ec2/vpc_traffic_mirror_session.go +++ 
b/internal/service/ec2/vpc_traffic_mirror_session.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -82,7 +85,7 @@ func ResourceTrafficMirrorSession() *schema.Resource { func resourceTrafficMirrorSessionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTrafficMirrorSessionInput{ NetworkInterfaceId: aws.String(d.Get("network_interface_id").(string)), @@ -120,7 +123,7 @@ func resourceTrafficMirrorSessionCreate(ctx context.Context, d *schema.ResourceD func resourceTrafficMirrorSessionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) session, err := FindTrafficMirrorSessionByID(ctx, conn, d.Id()) @@ -152,14 +155,14 @@ func resourceTrafficMirrorSessionRead(ctx context.Context, d *schema.ResourceDat d.Set("traffic_mirror_target_id", session.TrafficMirrorTargetId) d.Set("virtual_network_id", session.VirtualNetworkId) - SetTagsOut(ctx, session.Tags) + setTagsOut(ctx, session.Tags) return diags } func resourceTrafficMirrorSessionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ec2.ModifyTrafficMirrorSessionInput{ @@ -220,7 +223,7 @@ func resourceTrafficMirrorSessionUpdate(ctx context.Context, d *schema.ResourceD func resourceTrafficMirrorSessionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting 
EC2 Traffic Mirror Session: %s", d.Id()) _, err := conn.DeleteTrafficMirrorSessionWithContext(ctx, &ec2.DeleteTrafficMirrorSessionInput{ diff --git a/internal/service/ec2/vpc_traffic_mirror_session_test.go b/internal/service/ec2/vpc_traffic_mirror_session_test.go index 5092d816bbd..c34a854c055 100644 --- a/internal/service/ec2/vpc_traffic_mirror_session_test.go +++ b/internal/service/ec2/vpc_traffic_mirror_session_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -195,7 +198,7 @@ func TestAccVPCTrafficMirrorSession_updateTrafficMirrorTarget(t *testing.T) { } func testAccPreCheckTrafficMirrorSession(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := conn.DescribeTrafficMirrorSessionsWithContext(ctx, &ec2.DescribeTrafficMirrorSessionsInput{}) @@ -220,7 +223,7 @@ func testAccCheckTrafficMirrorSessionNotRecreated(t *testing.T, before, after *e func testAccCheckTrafficMirrorSessionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_traffic_mirror_session" { @@ -255,7 +258,7 @@ func testAccCheckTrafficMirrorSessionExists(ctx context.Context, n string, v *ec return fmt.Errorf("No EC2 Traffic Mirror Session ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTrafficMirrorSessionByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpc_traffic_mirror_target.go b/internal/service/ec2/vpc_traffic_mirror_target.go index e70521fc593..80597922c8c 100644 --- a/internal/service/ec2/vpc_traffic_mirror_target.go +++ 
b/internal/service/ec2/vpc_traffic_mirror_target.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -86,7 +89,7 @@ func ResourceTrafficMirrorTarget() *schema.Resource { func resourceTrafficMirrorTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateTrafficMirrorTargetInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeTrafficMirrorTarget), @@ -121,7 +124,7 @@ func resourceTrafficMirrorTargetCreate(ctx context.Context, d *schema.ResourceDa func resourceTrafficMirrorTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) target, err := FindTrafficMirrorTargetByID(ctx, conn, d.Id()) @@ -150,7 +153,7 @@ func resourceTrafficMirrorTargetRead(ctx context.Context, d *schema.ResourceData d.Set("network_load_balancer_arn", target.NetworkLoadBalancerArn) d.Set("owner_id", ownerID) - SetTagsOut(ctx, target.Tags) + setTagsOut(ctx, target.Tags) return diags } @@ -165,7 +168,7 @@ func resourceTrafficMirrorTargetUpdate(ctx context.Context, d *schema.ResourceDa func resourceTrafficMirrorTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Traffic Mirror Target: %s", d.Id()) _, err := conn.DeleteTrafficMirrorTargetWithContext(ctx, &ec2.DeleteTrafficMirrorTargetInput{ diff --git a/internal/service/ec2/vpc_traffic_mirror_target_test.go b/internal/service/ec2/vpc_traffic_mirror_target_test.go index 9b42c50a0ab..1d4130521a2 100644 --- 
a/internal/service/ec2/vpc_traffic_mirror_target_test.go +++ b/internal/service/ec2/vpc_traffic_mirror_target_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -197,7 +200,7 @@ func TestAccVPCTrafficMirrorTarget_gwlb(t *testing.T) { func testAccCheckTrafficMirrorTargetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_traffic_mirror_target" { @@ -232,7 +235,7 @@ func testAccCheckTrafficMirrorTargetExists(ctx context.Context, n string, v *ec2 return fmt.Errorf("No EC2 Traffic Mirror Target ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindTrafficMirrorTargetByID(ctx, conn, rs.Primary.ID) @@ -343,7 +346,7 @@ resource "aws_ec2_traffic_mirror_target" "test" { } func testAccPreCheckTrafficMirrorTarget(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err := conn.DescribeTrafficMirrorTargetsWithContext(ctx, &ec2.DescribeTrafficMirrorTargetsInput{}) diff --git a/internal/service/ec2/vpc_vpcs_data_source.go b/internal/service/ec2/vpc_vpcs_data_source.go index 5c8ed34bf79..54f3b18c9e3 100644 --- a/internal/service/ec2/vpc_vpcs_data_source.go +++ b/internal/service/ec2/vpc_vpcs_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -36,7 +39,7 @@ func DataSourceVPCs() *schema.Resource { func dataSourceVPCsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.DescribeVpcsInput{} diff --git a/internal/service/ec2/vpc_vpcs_data_source_test.go b/internal/service/ec2/vpc_vpcs_data_source_test.go index fa99093c189..241ae2a8dc4 100644 --- a/internal/service/ec2/vpc_vpcs_data_source_test.go +++ b/internal/service/ec2/vpc_vpcs_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -22,7 +25,7 @@ func TestAccVPCsDataSource_basic(t *testing.T) { { Config: testAccVPCVPCsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue("data.aws_vpcs.test", "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_vpcs.test", "ids.#", 0), ), }, }, @@ -60,7 +63,7 @@ func TestAccVPCsDataSource_filters(t *testing.T) { { Config: testAccVPCVPCsDataSourceConfig_filters(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue("data.aws_vpcs.test", "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_vpcs.test", "ids.#", 0), ), }, }, diff --git a/internal/service/ec2/vpnclient_authorization_rule.go b/internal/service/ec2/vpnclient_authorization_rule.go index 1ff5f3afa37..9800c2ac3b0 100644 --- a/internal/service/ec2/vpnclient_authorization_rule.go +++ b/internal/service/ec2/vpnclient_authorization_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -69,7 +72,7 @@ func ResourceClientVPNAuthorizationRule() *schema.Resource { func resourceClientVPNAuthorizationRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("client_vpn_endpoint_id").(string) targetNetworkCIDR := d.Get("target_network_cidr").(string) @@ -113,7 +116,7 @@ func resourceClientVPNAuthorizationRuleCreate(ctx context.Context, d *schema.Res func resourceClientVPNAuthorizationRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID, targetNetworkCIDR, accessGroupID, err := ClientVPNAuthorizationRuleParseResourceID(d.Id()) @@ -144,7 +147,7 @@ func resourceClientVPNAuthorizationRuleRead(ctx context.Context, d *schema.Resou func resourceClientVPNAuthorizationRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID, targetNetworkCIDR, accessGroupID, err := ClientVPNAuthorizationRuleParseResourceID(d.Id()) diff --git a/internal/service/ec2/vpnclient_authorization_rule_test.go b/internal/service/ec2/vpnclient_authorization_rule_test.go index b417afa51b7..7655f46047b 100644 --- a/internal/service/ec2/vpnclient_authorization_rule_test.go +++ b/internal/service/ec2/vpnclient_authorization_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -229,7 +232,7 @@ func testAccClientVPNAuthorizationRule_subnets(t *testing.T) { func testAccCheckClientVPNAuthorizationRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_client_vpn_authorization_rule" { @@ -276,7 +279,7 @@ func testAccCheckClientVPNAuthorizationRuleExists(ctx context.Context, name stri return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindClientVPNAuthorizationRuleByThreePartKey(ctx, conn, endpointID, targetNetworkCIDR, accessGroupID) diff --git a/internal/service/ec2/vpnclient_endpoint.go b/internal/service/ec2/vpnclient_endpoint.go index 898277af8bc..7007a577bde 100644 --- a/internal/service/ec2/vpnclient_endpoint.go +++ b/internal/service/ec2/vpnclient_endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -221,7 +224,7 @@ func ResourceClientVPNEndpoint() *schema.Resource { func resourceClientVPNEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateClientVpnEndpointInput{ ClientCidrBlock: aws.String(d.Get("client_cidr_block").(string)), @@ -286,7 +289,7 @@ func resourceClientVPNEndpointCreate(ctx context.Context, d *schema.ResourceData func resourceClientVPNEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ep, err := FindClientVPNEndpointByID(ctx, conn, d.Id()) @@ -349,14 +352,14 @@ func resourceClientVPNEndpointRead(ctx context.Context, d *schema.ResourceData, d.Set("vpc_id", ep.VpcId) d.Set("vpn_port", ep.VpnPort) - SetTagsOut(ctx, ep.Tags) + setTagsOut(ctx, ep.Tags) return diags } func resourceClientVPNEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { var waitForClientConnectResponseOptionsUpdate bool @@ -446,7 +449,7 @@ func resourceClientVPNEndpointUpdate(ctx context.Context, d *schema.ResourceData func resourceClientVPNEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[DEBUG] Deleting EC2 Client VPN Endpoint: %s", d.Id()) _, err := conn.DeleteClientVpnEndpointWithContext(ctx, &ec2.DeleteClientVpnEndpointInput{ diff --git a/internal/service/ec2/vpnclient_endpoint_data_source.go 
b/internal/service/ec2/vpnclient_endpoint_data_source.go index 1d687d583ed..99860409971 100644 --- a/internal/service/ec2/vpnclient_endpoint_data_source.go +++ b/internal/service/ec2/vpnclient_endpoint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -173,7 +176,7 @@ func DataSourceClientVPNEndpoint() *schema.Resource { func dataSourceClientVPNEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeClientVpnEndpointsInput{} diff --git a/internal/service/ec2/vpnclient_endpoint_data_source_test.go b/internal/service/ec2/vpnclient_endpoint_data_source_test.go index 1a57186887d..9da4b2e0a3f 100644 --- a/internal/service/ec2/vpnclient_endpoint_data_source_test.go +++ b/internal/service/ec2/vpnclient_endpoint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpnclient_endpoint_test.go b/internal/service/ec2/vpnclient_endpoint_test.go index 845b233874a..cfb6dc76b45 100644 --- a/internal/service/ec2/vpnclient_endpoint_test.go +++ b/internal/service/ec2/vpnclient_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -722,7 +725,7 @@ func testAccPreCheckClientVPNSyncronize(t *testing.T) { func testAccCheckClientVPNEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_client_vpn_endpoint" { @@ -756,7 +759,7 @@ func testAccCheckClientVPNEndpointExists(ctx context.Context, name string, v *ec return fmt.Errorf("No EC2 Client VPN Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindClientVPNEndpointByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpnclient_network_association.go b/internal/service/ec2/vpnclient_network_association.go index cbd8aba1f4a..e2755136501 100644 --- a/internal/service/ec2/vpnclient_network_association.go +++ b/internal/service/ec2/vpnclient_network_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -57,7 +60,7 @@ func ResourceClientVPNNetworkAssociation() *schema.Resource { func resourceClientVPNNetworkAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("client_vpn_endpoint_id").(string) input := &ec2.AssociateClientVpnTargetNetworkInput{ @@ -84,7 +87,7 @@ func resourceClientVPNNetworkAssociationCreate(ctx context.Context, d *schema.Re func resourceClientVPNNetworkAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("client_vpn_endpoint_id").(string) network, err := FindClientVPNNetworkAssociationByIDs(ctx, conn, d.Id(), endpointID) @@ -109,7 +112,7 @@ func resourceClientVPNNetworkAssociationRead(ctx context.Context, d *schema.Reso func resourceClientVPNNetworkAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("client_vpn_endpoint_id").(string) diff --git a/internal/service/ec2/vpnclient_network_association_test.go b/internal/service/ec2/vpnclient_network_association_test.go index 101aa1da657..f996ea86fc3 100644 --- a/internal/service/ec2/vpnclient_network_association_test.go +++ b/internal/service/ec2/vpnclient_network_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -110,7 +113,7 @@ func testAccClientVPNNetworkAssociation_disappears(t *testing.T) { func testAccCheckClientVPNNetworkAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_client_vpn_network_association" { @@ -145,7 +148,7 @@ func testAccCheckClientVPNNetworkAssociationExists(ctx context.Context, name str return fmt.Errorf("No EC2 Client VPN Network Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindClientVPNNetworkAssociationByIDs(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["client_vpn_endpoint_id"]) diff --git a/internal/service/ec2/vpnclient_route.go b/internal/service/ec2/vpnclient_route.go index 23a3120216c..ffd3c8d1b3d 100644 --- a/internal/service/ec2/vpnclient_route.go +++ b/internal/service/ec2/vpnclient_route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -69,7 +72,7 @@ func ResourceClientVPNRoute() *schema.Resource { func resourceClientVPNRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID := d.Get("client_vpn_endpoint_id").(string) targetSubnetID := d.Get("target_vpc_subnet_id").(string) @@ -85,8 +88,7 @@ func resourceClientVPNRouteCreate(ctx context.Context, d *schema.ResourceData, m input.Description = aws.String(v.(string)) } - log.Printf("[DEBUG] Creating EC2 Client VPN Route: %s", input) - _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, propagationTimeout, func() (interface{}, error) { + _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, ec2PropagationTimeout, func() (interface{}, error) { return conn.CreateClientVpnRouteWithContext(ctx, input) }, errCodeInvalidClientVPNActiveAssociationNotFound) @@ -105,7 +107,7 @@ func resourceClientVPNRouteCreate(ctx context.Context, d *schema.ResourceData, m func resourceClientVPNRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID, targetSubnetID, destinationCIDR, err := ClientVPNRouteParseResourceID(d.Id()) @@ -137,7 +139,7 @@ func resourceClientVPNRouteRead(ctx context.Context, d *schema.ResourceData, met func resourceClientVPNRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) endpointID, targetSubnetID, destinationCIDR, err := ClientVPNRouteParseResourceID(d.Id()) diff --git a/internal/service/ec2/vpnclient_route_test.go b/internal/service/ec2/vpnclient_route_test.go index 8b5c92fb3a2..0f25bafce9d 100644 --- 
a/internal/service/ec2/vpnclient_route_test.go +++ b/internal/service/ec2/vpnclient_route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -104,7 +107,7 @@ func testAccClientVPNRoute_description(t *testing.T) { func testAccCheckClientVPNRouteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ec2_client_vpn_route" { @@ -151,7 +154,7 @@ func testAccCheckClientVPNRouteExists(ctx context.Context, name string, v *ec2.C return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindClientVPNRouteByThreePartKey(ctx, conn, endpointID, targetSubnetID, destinationCIDR) diff --git a/internal/service/ec2/vpnsite_connection.go b/internal/service/ec2/vpnsite_connection.go index fd1223f47bf..c7a8d4daa5c 100644 --- a/internal/service/ec2/vpnsite_connection.go +++ b/internal/service/ec2/vpnsite_connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -676,7 +679,7 @@ var ( func resourceVPNConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateVpnConnectionInput{ CustomerGatewayId: aws.String(d.Get("customer_gateway_id").(string)), @@ -712,7 +715,7 @@ func resourceVPNConnectionCreate(ctx context.Context, d *schema.ResourceData, me func resourceVPNConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpnConnection, err := FindVPNConnectionByID(ctx, conn, d.Id()) @@ -770,7 +773,7 @@ func resourceVPNConnectionRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "setting vgw_telemetry: %s", err) } - SetTagsOut(ctx, vpnConnection.Tags) + setTagsOut(ctx, vpnConnection.Tags) if v := vpnConnection.Options; v != nil { d.Set("enable_acceleration", v.EnableAcceleration) @@ -849,7 +852,7 @@ func resourceVPNConnectionRead(ctx context.Context, d *schema.ResourceData, meta func resourceVPNConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) if d.HasChanges("customer_gateway_id", "transit_gateway_id", "vpn_gateway_id") { input := &ec2.ModifyVpnConnectionInput{ @@ -937,7 +940,7 @@ func resourceVPNConnectionUpdate(ctx context.Context, d *schema.ResourceData, me func resourceVPNConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 VPN 
Connection: %s", d.Id()) _, err := conn.DeleteVpnConnectionWithContext(ctx, &ec2.DeleteVpnConnectionInput{ diff --git a/internal/service/ec2/vpnsite_connection_route.go b/internal/service/ec2/vpnsite_connection_route.go index d6eaf4f1160..a7801a118d3 100644 --- a/internal/service/ec2/vpnsite_connection_route.go +++ b/internal/service/ec2/vpnsite_connection_route.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -40,7 +43,7 @@ func ResourceVPNConnectionRoute() *schema.Resource { func resourceVPNConnectionRouteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) cidrBlock := d.Get("destination_cidr_block").(string) vpnConnectionID := d.Get("vpn_connection_id").(string) @@ -68,7 +71,7 @@ func resourceVPNConnectionRouteCreate(ctx context.Context, d *schema.ResourceDat func resourceVPNConnectionRouteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) cidrBlock, vpnConnectionID, err := VPNConnectionRouteParseResourceID(d.Id()) @@ -96,7 +99,7 @@ func resourceVPNConnectionRouteRead(ctx context.Context, d *schema.ResourceData, func resourceVPNConnectionRouteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) cidrBlock, vpnConnectionID, err := VPNConnectionRouteParseResourceID(d.Id()) diff --git a/internal/service/ec2/vpnsite_connection_route_test.go b/internal/service/ec2/vpnsite_connection_route_test.go index 4a19b19a8b7..6138af1dcc5 100644 --- a/internal/service/ec2/vpnsite_connection_route_test.go +++ 
b/internal/service/ec2/vpnsite_connection_route_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -63,7 +66,7 @@ func TestAccSiteVPNConnectionRoute_disappears(t *testing.T) { func testAccCheckVPNConnectionRouteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpn_connection_route" { @@ -110,7 +113,7 @@ func testAccVPNConnectionRouteExists(ctx context.Context, n string) resource.Tes return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) _, err = tfec2.FindVPNConnectionRouteByVPNConnectionIDAndCIDR(ctx, conn, vpnConnectionID, cidrBlock) diff --git a/internal/service/ec2/vpnsite_connection_test.go b/internal/service/ec2/vpnsite_connection_test.go index 6750075d904..d4d8f7e0416 100644 --- a/internal/service/ec2/vpnsite_connection_test.go +++ b/internal/service/ec2/vpnsite_connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -1678,7 +1681,7 @@ func TestAccSiteVPNConnection_transitGatewayIDToVPNGatewayID(t *testing.T) { func testAccCheckVPNConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpn_connection" { @@ -1713,7 +1716,7 @@ func testAccVPNConnectionExists(ctx context.Context, n string, v *ec2.VpnConnect return fmt.Errorf("No EC2 VPN Connection ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindVPNConnectionByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpnsite_customer_gateway.go b/internal/service/ec2/vpnsite_customer_gateway.go index 5ee346b66d4..e4d922c8430 100644 --- a/internal/service/ec2/vpnsite_customer_gateway.go +++ b/internal/service/ec2/vpnsite_customer_gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -77,7 +80,7 @@ func ResourceCustomerGateway() *schema.Resource { } func resourceCustomerGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) input := &ec2.CreateCustomerGatewayInput{ TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeCustomerGateway), @@ -122,7 +125,7 @@ func resourceCustomerGatewayCreate(ctx context.Context, d *schema.ResourceData, } func resourceCustomerGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) customerGateway, err := FindCustomerGatewayByID(ctx, conn, d.Id()) @@ -150,7 +153,7 @@ func resourceCustomerGatewayRead(ctx context.Context, d *schema.ResourceData, me d.Set("ip_address", customerGateway.IpAddress) d.Set("type", customerGateway.Type) - SetTagsOut(ctx, customerGateway.Tags) + setTagsOut(ctx, customerGateway.Tags) return nil } @@ -161,7 +164,7 @@ func resourceCustomerGatewayUpdate(ctx context.Context, d *schema.ResourceData, } func resourceCustomerGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) log.Printf("[INFO] Deleting EC2 Customer Gateway: %s", d.Id()) _, err := conn.DeleteCustomerGatewayWithContext(ctx, &ec2.DeleteCustomerGatewayInput{ diff --git a/internal/service/ec2/vpnsite_customer_gateway_data_source.go b/internal/service/ec2/vpnsite_customer_gateway_data_source.go index 8de81751600..5e2c331c07b 100644 --- a/internal/service/ec2/vpnsite_customer_gateway_data_source.go +++ b/internal/service/ec2/vpnsite_customer_gateway_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2 import ( @@ -62,7 +65,7 @@ func DataSourceCustomerGateway() *schema.Resource { } func dataSourceCustomerGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &ec2.DescribeCustomerGatewaysInput{} diff --git a/internal/service/ec2/vpnsite_customer_gateway_data_source_test.go b/internal/service/ec2/vpnsite_customer_gateway_data_source_test.go index 36ea0257a3f..e5cb04ffcd3 100644 --- a/internal/service/ec2/vpnsite_customer_gateway_data_source_test.go +++ b/internal/service/ec2/vpnsite_customer_gateway_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( diff --git a/internal/service/ec2/vpnsite_customer_gateway_test.go b/internal/service/ec2/vpnsite_customer_gateway_test.go index 9dd38beb653..04bfcf2d275 100644 --- a/internal/service/ec2/vpnsite_customer_gateway_test.go +++ b/internal/service/ec2/vpnsite_customer_gateway_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ec2_test import ( @@ -236,7 +239,7 @@ func TestAccSiteVPNCustomerGateway_certificate(t *testing.T) { func testAccCheckCustomerGatewayDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_customer_gateway" { @@ -271,7 +274,7 @@ func testAccCheckCustomerGatewayExists(ctx context.Context, n string, v *ec2.Cus return fmt.Errorf("No EC2 Customer Gateway ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) output, err := tfec2.FindCustomerGatewayByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ec2/vpnsite_gateway.go b/internal/service/ec2/vpnsite_gateway.go index b8770a1c2fd..41b7c27ddf2 100644 --- a/internal/service/ec2/vpnsite_gateway.go +++ b/internal/service/ec2/vpnsite_gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -65,7 +68,7 @@ func ResourceVPNGateway() *schema.Resource {
 
 func resourceVPNGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.CreateVpnGatewayInput{
 		AvailabilityZone: aws.String(d.Get("availability_zone").(string)),
@@ -83,7 +86,6 @@ func resourceVPNGatewayCreate(ctx context.Context, d *schema.ResourceData, meta
 		input.AmazonSideAsn = aws.Int64(v)
 	}
 
-	log.Printf("[DEBUG] Creating EC2 VPN Gateway: %s", input)
 	output, err := conn.CreateVpnGatewayWithContext(ctx, input)
 
 	if err != nil {
@@ -103,9 +105,9 @@ func resourceVPNGatewayCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceVPNGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
-	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) {
+	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return FindVPNGatewayByID(ctx, conn, d.Id())
 	}, d.IsNewResource())
@@ -141,14 +143,14 @@ func resourceVPNGatewayRead(ctx context.Context, d *schema.ResourceData, meta in
 		}
 	}
 
-	SetTagsOut(ctx, vpnGateway.Tags)
+	setTagsOut(ctx, vpnGateway.Tags)
 
 	return diags
 }
 
 func resourceVPNGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	if d.HasChange("vpc_id") {
 		o, n := d.GetChange("vpc_id")
@@ -171,7 +173,7 @@ func resourceVPNGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceVPNGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	if v, ok := d.GetOk("vpc_id"); ok {
 		if err := detachVPNGatewayFromVPC(ctx, conn, d.Id(), v.(string)); err != nil {
@@ -203,19 +205,16 @@ func attachVPNGatewayToVPC(ctx context.Context, conn *ec2.EC2, vpnGatewayID, vpc
 		VpnGatewayId: aws.String(vpnGatewayID),
 	}
 
-	log.Printf("[INFO] Attaching EC2 VPN Gateway (%s) to VPC (%s)", vpnGatewayID, vpcID)
-	_, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, propagationTimeout, func() (interface{}, error) {
+	_, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, ec2PropagationTimeout, func() (interface{}, error) {
 		return conn.AttachVpnGatewayWithContext(ctx, input)
 	}, errCodeInvalidVPNGatewayIDNotFound)
 
 	if err != nil {
-		return fmt.Errorf("attaching to VPC (%s): %w", vpcID, err)
+		return fmt.Errorf("attaching EC2 VPN Gateway (%s) to VPC (%s): %w", vpnGatewayID, vpcID, err)
 	}
 
-	_, err = WaitVPNGatewayVPCAttachmentAttached(ctx, conn, vpnGatewayID, vpcID)
-
-	if err != nil {
-		return fmt.Errorf("attaching to VPC (%s): waiting for completion: %w", vpcID, err)
+	if _, err := WaitVPNGatewayVPCAttachmentAttached(ctx, conn, vpnGatewayID, vpcID); err != nil {
+		return fmt.Errorf("waiting for EC2 VPN Gateway (%s) to VPC (%s) attachment create: %w", vpnGatewayID, vpcID, err)
 	}
 
 	return nil
@@ -227,7 +226,6 @@ func detachVPNGatewayFromVPC(ctx context.Context, conn *ec2.EC2, vpnGatewayID, v
 		VpnGatewayId: aws.String(vpnGatewayID),
 	}
 
-	log.Printf("[INFO] Detaching EC2 VPN Gateway (%s) from VPC (%s)", vpnGatewayID, vpcID)
 	_, err := conn.DetachVpnGatewayWithContext(ctx, input)
 
 	if tfawserr.ErrCodeEquals(err, errCodeInvalidVPNGatewayAttachmentNotFound, errCodeInvalidVPNGatewayIDNotFound) {
@@ -235,13 +233,11 @@ func detachVPNGatewayFromVPC(ctx context.Context, conn *ec2.EC2, vpnGatewayID, v
 	}
 
 	if err != nil {
-		return fmt.Errorf("detaching from VPC (%s): %w", vpcID, err)
+		return fmt.Errorf("detaching EC2 VPN Gateway (%s) from VPC (%s): %w", vpnGatewayID, vpcID, err)
 	}
 
-	_, err = WaitVPNGatewayVPCAttachmentDetached(ctx, conn, vpnGatewayID, vpcID)
-
-	if err != nil {
-		return fmt.Errorf("detaching from VPC (%s): waiting for completion: %w", vpcID, err)
+	if _, err := WaitVPNGatewayVPCAttachmentDetached(ctx, conn, vpnGatewayID, vpcID); err != nil {
+		return fmt.Errorf("waiting for EC2 VPN Gateway (%s) to VPC (%s) attachment delete: %w", vpnGatewayID, vpcID, err)
 	}
 
 	return nil
diff --git a/internal/service/ec2/vpnsite_gateway_attachment.go b/internal/service/ec2/vpnsite_gateway_attachment.go
index 85dc895c464..2e37b32d4ae 100644
--- a/internal/service/ec2/vpnsite_gateway_attachment.go
+++ b/internal/service/ec2/vpnsite_gateway_attachment.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -38,7 +41,7 @@ func ResourceVPNGatewayAttachment() *schema.Resource {
 
 func resourceVPNGatewayAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	vpcID := d.Get("vpc_id").(string)
 	vpnGatewayID := d.Get("vpn_gateway_id").(string)
@@ -67,7 +70,7 @@ func resourceVPNGatewayAttachmentCreate(ctx context.Context, d *schema.ResourceD
 
 func resourceVPNGatewayAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	vpcID := d.Get("vpc_id").(string)
 	vpnGatewayID := d.Get("vpn_gateway_id").(string)
@@ -89,7 +92,7 @@ func resourceVPNGatewayAttachmentRead(ctx context.Context, d *schema.ResourceDat
 
 func resourceVPNGatewayAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	vpcID := d.Get("vpc_id").(string)
 	vpnGatewayID := d.Get("vpn_gateway_id").(string)
diff --git a/internal/service/ec2/vpnsite_gateway_attachment_test.go b/internal/service/ec2/vpnsite_gateway_attachment_test.go
index 4b4d13b7456..ec2162cc5d8 100644
--- a/internal/service/ec2/vpnsite_gateway_attachment_test.go
+++ b/internal/service/ec2/vpnsite_gateway_attachment_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -72,7 +75,7 @@ func testAccCheckVPNGatewayAttachmentExists(ctx context.Context, n string, v *ec
 			return fmt.Errorf("No EC2 VPN Gateway Attachment ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		output, err := tfec2.FindVPNGatewayVPCAttachment(ctx, conn, rs.Primary.Attributes["vpn_gateway_id"], rs.Primary.Attributes["vpc_id"])
@@ -88,7 +91,7 @@ func testAccCheckVPNGatewayAttachmentExists(ctx context.Context, n string, v *ec
 
 func testAccCheckVPNGatewayAttachmentDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_vpn_gateway_attachment" {
diff --git a/internal/service/ec2/vpnsite_gateway_data_source.go b/internal/service/ec2/vpnsite_gateway_data_source.go
index 1aa62d09e00..6d94b5970df 100644
--- a/internal/service/ec2/vpnsite_gateway_data_source.go
+++ b/internal/service/ec2/vpnsite_gateway_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -64,7 +67,7 @@ func DataSourceVPNGateway() *schema.Resource {
 
 func dataSourceVPNGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &ec2.DescribeVpnGatewaysInput{}
diff --git a/internal/service/ec2/vpnsite_gateway_data_source_test.go b/internal/service/ec2/vpnsite_gateway_data_source_test.go
index cdc9972ad66..4eeaa3fedf2 100644
--- a/internal/service/ec2/vpnsite_gateway_data_source_test.go
+++ b/internal/service/ec2/vpnsite_gateway_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
diff --git a/internal/service/ec2/vpnsite_gateway_route_propagation.go b/internal/service/ec2/vpnsite_gateway_route_propagation.go
index 975e4665426..644ec71f2dc 100644
--- a/internal/service/ec2/vpnsite_gateway_route_propagation.go
+++ b/internal/service/ec2/vpnsite_gateway_route_propagation.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -42,7 +45,7 @@ func ResourceVPNGatewayRoutePropagation() *schema.Resource {
 
 func resourceVPNGatewayRoutePropagationEnable(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	gatewayID := d.Get("vpn_gateway_id").(string)
 	routeTableID := d.Get("route_table_id").(string)
@@ -59,7 +62,7 @@ func resourceVPNGatewayRoutePropagationEnable(ctx context.Context, d *schema.Res
 
 func resourceVPNGatewayRoutePropagationDisable(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	routeTableID, gatewayID, err := VPNGatewayRoutePropagationParseID(d.Id())
@@ -78,7 +81,7 @@ func resourceVPNGatewayRoutePropagationDisable(ctx context.Context, d *schema.Re
 
 func resourceVPNGatewayRoutePropagationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	routeTableID, gatewayID, err := VPNGatewayRoutePropagationParseID(d.Id())
diff --git a/internal/service/ec2/vpnsite_gateway_route_propagation_test.go b/internal/service/ec2/vpnsite_gateway_route_propagation_test.go
index 417b969694a..a0b098a9d4d 100644
--- a/internal/service/ec2/vpnsite_gateway_route_propagation_test.go
+++ b/internal/service/ec2/vpnsite_gateway_route_propagation_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -76,7 +79,7 @@ func testAccCheckVPNGatewayRoutePropagationExists(ctx context.Context, n string)
 			return err
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		return tfec2.FindVPNGatewayRoutePropagationExists(ctx, conn, routeTableID, gatewayID)
 	}
@@ -95,7 +98,7 @@ func testAccCheckVPNGatewayRoutePropagationDestroy(ctx context.Context) resource
 				return err
 			}
 
-			conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+			conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 			err = tfec2.FindVPNGatewayRoutePropagationExists(ctx, conn, routeTableID, gatewayID)
diff --git a/internal/service/ec2/vpnsite_gateway_test.go b/internal/service/ec2/vpnsite_gateway_test.go
index 893b1eb06ec..c20dfe1b01a 100644
--- a/internal/service/ec2/vpnsite_gateway_test.go
+++ b/internal/service/ec2/vpnsite_gateway_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -292,7 +295,7 @@ func TestAccSiteVPNGateway_tags(t *testing.T) {
 
 func testAccCheckVPNGatewayDestroy(ctx context.Context) resource.TestCheckFunc {
	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_vpn_gateway" {
@@ -327,7 +330,7 @@ func testAccCheckVPNGatewayExists(ctx context.Context, n string, v *ec2.VpnGatew
 			return fmt.Errorf("No EC2 VPN Gateway ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		output, err := tfec2.FindVPNGatewayByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/ec2/wait.go b/internal/service/ec2/wait.go
index 4a95c07339c..0218607109f 100644
--- a/internal/service/ec2/wait.go
+++ b/internal/service/ec2/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -7,11 +10,15 @@ import (
 	"strconv"
 	"time"
 
+	aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws"
+	ec2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ec2"
+	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/ec2"
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	multierror "github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
+	"github.com/hashicorp/terraform-provider-aws/internal/enum"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 )
@@ -20,9 +27,14 @@ const (
 	InstanceStartTimeout = 10 * time.Minute
 	InstanceStopTimeout  = 10 * time.Minute
 
-	// General timeout for EC2 resource creations to propagate.
+	// General timeout for IAM resource change to propagate.
+	// See https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency.
+	// We have settled on 2 minutes as the best timeout value.
+	iamPropagationTimeout = 2 * time.Minute
+
+	// General timeout for EC2 resource changes to propagate.
 	// See https://docs.aws.amazon.com/AWSEC2/latest/APIReference/query-api-troubleshooting.html#eventual-consistency.
-	propagationTimeout = 2 * time.Minute
+	ec2PropagationTimeout = 5 * time.Minute // nosemgrep:ci.ec2-in-const-name, ci.ec2-in-var-name
 
 	RouteNotFoundChecks      = 1000 // Should exceed any reasonable custom timeout value.
 	RouteTableNotFoundChecks = 1000 // Should exceed any reasonable custom timeout value.
@@ -436,7 +448,7 @@ func WaitInstanceIAMInstanceProfileUpdated(ctx context.Context, conn *ec2.EC2, i
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{expectedValue},
 		Refresh:    StatusInstanceIAMInstanceProfile(ctx, conn, instanceID),
-		Timeout:    propagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -580,7 +592,7 @@ func WaitInstanceCapacityReservationSpecificationUpdated(ctx context.Context, co
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(true)},
 		Refresh:    StatusInstanceCapacityReservationSpecificationEquals(ctx, conn, instanceID, expectedValue),
-		Timeout:    propagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -689,8 +701,6 @@ func WaitRouteReady(ctx context.Context, conn *ec2.EC2, routeFinder RouteFinder,
 }
 
 const (
-	RouteTableAssociationPropagationTimeout = 5 * time.Minute
-
 	RouteTableAssociationCreatedTimeout = 5 * time.Minute
 	RouteTableAssociationUpdatedTimeout = 5 * time.Minute
 	RouteTableAssociationDeletedTimeout = 5 * time.Minute
@@ -817,8 +827,6 @@ func WaitSecurityGroupCreated(ctx context.Context, conn *ec2.EC2, id string, tim
 }
 
 const (
-	SubnetPropagationTimeout = 2 * time.Minute
-	SubnetAttributePropagationTimeout            = 5 * time.Minute
 	SubnetIPv6CIDRBlockAssociationCreatedTimeout = 3 * time.Minute
 	SubnetIPv6CIDRBlockAssociationDeletedTimeout = 3 * time.Minute
 )
@@ -886,7 +894,7 @@ func waitSubnetAssignIPv6AddressOnCreationUpdated(ctx context.Context, conn *ec2
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusSubnetAssignIPv6AddressOnCreation(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -904,7 +912,7 @@ func waitSubnetEnableLniAtDeviceIndexUpdated(ctx context.Context, conn *ec2.EC2,
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatInt(expectedValue, 10)},
 		Refresh:    StatusSubnetEnableLniAtDeviceIndex(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -922,7 +930,7 @@ func waitSubnetEnableDNS64Updated(ctx context.Context, conn *ec2.EC2, subnetID s
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusSubnetEnableDNS64(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -940,7 +948,7 @@ func waitSubnetEnableResourceNameDNSAAAARecordOnLaunchUpdated(ctx context.Contex
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusSubnetEnableResourceNameDNSAAAARecordOnLaunch(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -958,7 +966,7 @@ func waitSubnetEnableResourceNameDNSARecordOnLaunchUpdated(ctx context.Context,
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusSubnetEnableResourceNameDNSARecordOnLaunch(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -976,7 +984,7 @@ func WaitSubnetMapCustomerOwnedIPOnLaunchUpdated(ctx context.Context, conn *ec2.
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusSubnetMapCustomerOwnedIPOnLaunch(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -994,7 +1002,7 @@ func WaitSubnetMapPublicIPOnLaunchUpdated(ctx context.Context, conn *ec2.EC2, su
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusSubnetMapPublicIPOnLaunch(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -1012,7 +1020,7 @@ func WaitSubnetPrivateDNSHostnameTypeOnLaunchUpdated(ctx context.Context, conn *
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{expectedValue},
 		Refresh:    StatusSubnetPrivateDNSHostnameTypeOnLaunch(ctx, conn, subnetID),
-		Timeout:    SubnetAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -1781,9 +1789,8 @@ func WaitVolumeModificationComplete(ctx context.Context, conn *ec2.EC2, id strin
 }
 
 const (
-	vpcAttributePropagationTimeout = 5 * time.Minute
-	vpcCreatedTimeout              = 10 * time.Minute
-	vpcDeletedTimeout              = 5 * time.Minute
+	vpcCreatedTimeout = 10 * time.Minute
+	vpcDeletedTimeout = 5 * time.Minute
 )
 
 func WaitVPCCreated(ctx context.Context, conn *ec2.EC2, id string) (*ec2.Vpc, error) {
@@ -1807,7 +1814,7 @@ func WaitVPCAttributeUpdated(ctx context.Context, conn *ec2.EC2, vpcID string, a
 	stateConf := &retry.StateChangeConf{
 		Target:     []string{strconv.FormatBool(expectedValue)},
 		Refresh:    StatusVPCAttributeValue(ctx, conn, vpcID, attribute),
-		Timeout:    vpcAttributePropagationTimeout,
+		Timeout:    ec2PropagationTimeout,
 		Delay:      10 * time.Second,
 		MinTimeout: 3 * time.Second,
 	}
@@ -1918,10 +1925,6 @@ func WaitVPCIPv6CIDRBlockAssociationDeleted(ctx context.Context, conn *ec2.EC2,
 	return nil, err
 }
 
-const (
-	VPCPeeringConnectionOptionsPropagationTimeout = 3 * time.Minute
-)
-
 func WaitVPCPeeringConnectionActive(ctx context.Context, conn *ec2.EC2, id string, timeout time.Duration) (*ec2.VpcPeeringConnection, error) {
 	stateConf := &retry.StateChangeConf{
 		Pending: []string{ec2.VpcPeeringConnectionStateReasonCodeInitiatingRequest, ec2.VpcPeeringConnectionStateReasonCodeProvisioning},
@@ -2576,6 +2579,32 @@ func WaitSpotFleetRequestUpdated(ctx context.Context, conn *ec2.EC2, id string,
 	return nil, err
 }
 
+func WaitSpotInstanceRequestFulfilled(ctx context.Context, conn *ec2.EC2, id string, timeout time.Duration) (*ec2.SpotInstanceRequest, error) {
+	stateConf := &retry.StateChangeConf{
+		Pending:    []string{spotInstanceRequestStatusCodePendingEvaluation, spotInstanceRequestStatusCodePendingFulfillment},
+		Target:     []string{spotInstanceRequestStatusCodeFulfilled},
+		Refresh:    StatusSpotInstanceRequest(ctx, conn, id),
+		Timeout:    timeout,
+		Delay:      10 * time.Second,
+		MinTimeout: 3 * time.Second,
+	}
+
+	outputRaw, err := stateConf.WaitForStateContext(ctx)
+
+	if output, ok := outputRaw.(*ec2.SpotInstanceRequest); ok {
+		if fault := output.Fault; fault != nil {
+			errFault := fmt.Errorf("%s: %s", aws.StringValue(fault.Code), aws.StringValue(fault.Message))
+			tfresource.SetLastError(err, fmt.Errorf("%s %w", aws.StringValue(output.Status.Message), errFault))
+		} else {
+			tfresource.SetLastError(err, errors.New(aws.StringValue(output.Status.Message)))
+		}
+
+		return output, err
+	}
+
+	return nil, err
+}
+
 func WaitVPCEndpointAccepted(ctx context.Context, conn *ec2.EC2, vpcEndpointID string, timeout time.Duration) (*ec2.VpcEndpoint, error) {
 	stateConf := &retry.StateChangeConf{
 		Pending: []string{vpcEndpointStatePendingAcceptance},
@@ -2684,7 +2713,7 @@ func WaitVPCEndpointRouteTableAssociationDeleted(ctx context.Context, conn *ec2.
 		Pending:                   []string{VPCEndpointRouteTableAssociationStatusReady},
 		Target:                    []string{},
 		Refresh:                   StatusVPCEndpointRouteTableAssociation(ctx, conn, vpcEndpointID, routeTableID),
-		Timeout:                   propagationTimeout,
+		Timeout:                   ec2PropagationTimeout,
 		ContinuousTargetOccurence: 2,
 	}
@@ -2698,7 +2727,7 @@ func WaitVPCEndpointRouteTableAssociationReady(ctx context.Context, conn *ec2.EC
 		Pending:                   []string{},
 		Target:                    []string{VPCEndpointRouteTableAssociationStatusReady},
 		Refresh:                   StatusVPCEndpointRouteTableAssociation(ctx, conn, vpcEndpointID, routeTableID),
-		Timeout:                   propagationTimeout,
+		Timeout:                   ec2PropagationTimeout,
 		ContinuousTargetOccurence: 2,
 	}
@@ -3176,3 +3205,41 @@ func WaitInstanceReadyWithContext(ctx context.Context, conn *ec2.EC2, id string,
 
 	return nil, err
 }
+
+func WaitInstanceConnectEndpointCreated(ctx context.Context, conn *ec2_sdkv2.Client, id string, timeout time.Duration) (*types.Ec2InstanceConnectEndpoint, error) {
+	stateConf := &retry.StateChangeConf{
+		Pending: enum.Slice(types.Ec2InstanceConnectEndpointStateCreateInProgress),
+		Target:  enum.Slice(types.Ec2InstanceConnectEndpointStateCreateComplete),
+		Refresh: StatusInstanceConnectEndpointState(ctx, conn, id),
+		Timeout: timeout,
+	}
+
+	outputRaw, err := stateConf.WaitForStateContext(ctx)
+
+	if output, ok := outputRaw.(*types.Ec2InstanceConnectEndpoint); ok {
+		tfresource.SetLastError(err, errors.New(aws_sdkv2.ToString(output.StateMessage)))
+
+		return output, err
+	}
+
+	return nil, err
+}
+
+func WaitInstanceConnectEndpointDeleted(ctx context.Context, conn *ec2_sdkv2.Client, id string, timeout time.Duration) (*types.Ec2InstanceConnectEndpoint, error) {
+	stateConf := &retry.StateChangeConf{
+		Pending: enum.Slice(types.Ec2InstanceConnectEndpointStateDeleteInProgress),
+		Target:  []string{},
+		Refresh: StatusInstanceConnectEndpointState(ctx, conn, id),
+		Timeout: timeout,
+	}
+
+	outputRaw, err := stateConf.WaitForStateContext(ctx)
+
+	if output, ok := outputRaw.(*types.Ec2InstanceConnectEndpoint); ok {
+		tfresource.SetLastError(err, errors.New(aws_sdkv2.ToString(output.StateMessage)))
+
+		return output, err
+	}
+
+	return nil, err
+}
diff --git a/internal/service/ec2/wavelength_carrier_gateway.go b/internal/service/ec2/wavelength_carrier_gateway.go
index fc3084b77c4..26a28b5098f 100644
--- a/internal/service/ec2/wavelength_carrier_gateway.go
+++ b/internal/service/ec2/wavelength_carrier_gateway.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2
 
 import (
@@ -56,7 +59,7 @@ func ResourceCarrierGateway() *schema.Resource {
 
 func resourceCarrierGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.CreateCarrierGatewayInput{
 		TagSpecifications: getTagSpecificationsIn(ctx, ec2.ResourceTypeCarrierGateway),
@@ -82,7 +85,7 @@ func resourceCarrierGatewayCreate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceCarrierGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	carrierGateway, err := FindCarrierGatewayByID(ctx, conn, d.Id())
@@ -108,7 +111,7 @@ func resourceCarrierGatewayRead(ctx context.Context, d *schema.ResourceData, met
 	d.Set("owner_id", ownerID)
 	d.Set("vpc_id", carrierGateway.VpcId)
 
-	SetTagsOut(ctx, carrierGateway.Tags)
+	setTagsOut(ctx, carrierGateway.Tags)
 
 	return diags
 }
@@ -123,7 +126,7 @@ func resourceCarrierGatewayUpdate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceCarrierGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EC2Conn()
+	conn := meta.(*conns.AWSClient).EC2Conn(ctx)
 
 	log.Printf("[INFO] Deleting EC2 Carrier Gateway (%s)", d.Id())
 	_, err := conn.DeleteCarrierGatewayWithContext(ctx, &ec2.DeleteCarrierGatewayInput{
diff --git a/internal/service/ec2/wavelength_carrier_gateway_test.go b/internal/service/ec2/wavelength_carrier_gateway_test.go
index f4e49589fd1..58b62e6939c 100644
--- a/internal/service/ec2/wavelength_carrier_gateway_test.go
+++ b/internal/service/ec2/wavelength_carrier_gateway_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ec2_test
 
 import (
@@ -120,7 +123,7 @@ func TestAccWavelengthCarrierGateway_tags(t *testing.T) {
 
 func testAccCheckCarrierGatewayDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ec2_carrier_gateway" {
@@ -155,7 +158,7 @@ func testAccCheckCarrierGatewayExists(ctx context.Context, n string, v *ec2.Carr
 			return fmt.Errorf("No EC2 Carrier Gateway ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 		output, err := tfec2.FindCarrierGatewayByID(ctx, conn, rs.Primary.ID)
@@ -170,7 +173,7 @@ func testAccCheckCarrierGatewayExists(ctx context.Context, n string, v *ec2.Carr
 }
 
 func testAccPreCheckWavelengthZoneAvailable(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx)
 
 	input := &ec2.DescribeAvailabilityZonesInput{
 		Filters: tfec2.BuildAttributeFilterList(map[string]string{
diff --git a/internal/service/ecr/authorization_token_data_source.go b/internal/service/ecr/authorization_token_data_source.go
index d369929887a..77c5c94f051 100644
--- a/internal/service/ecr/authorization_token_data_source.go
+++ b/internal/service/ecr/authorization_token_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr
 
 import (
@@ -53,7 +56,7 @@ func DataSourceAuthorizationToken() *schema.Resource {
 
 func dataSourceAuthorizationTokenRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ECRConn()
+	conn := meta.(*conns.AWSClient).ECRConn(ctx)
 	params := &ecr.GetAuthorizationTokenInput{}
 	if v, ok := d.GetOk("registry_id"); ok {
 		params.RegistryIds = []*string{aws.String(v.(string))}
diff --git a/internal/service/ecr/authorization_token_data_source_test.go b/internal/service/ecr/authorization_token_data_source_test.go
index 2d0968909c4..deb4c94ea52 100644
--- a/internal/service/ecr/authorization_token_data_source_test.go
+++ b/internal/service/ecr/authorization_token_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr_test
 
 import (
diff --git a/internal/service/ecr/consts.go b/internal/service/ecr/consts.go
index 7b1774acc19..a324e427309 100644
--- a/internal/service/ecr/consts.go
+++ b/internal/service/ecr/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr
 
 import (
diff --git a/internal/service/ecr/errorcheck_test.go b/internal/service/ecr/errorcheck_test.go
index 2ebd3ad6481..f0ee803650a 100644
--- a/internal/service/ecr/errorcheck_test.go
+++ b/internal/service/ecr/errorcheck_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr_test
 
 import (
diff --git a/internal/service/ecr/find.go b/internal/service/ecr/find.go
index c1783193d77..fd6e63143ca 100644
--- a/internal/service/ecr/find.go
+++ b/internal/service/ecr/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr
 
 import (
diff --git a/internal/service/ecr/generate.go b/internal/service/ecr/generate.go
index ba246dc7dfc..c56b40a0d16 100644
--- a/internal/service/ecr/generate.go
+++ b/internal/service/ecr/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags -CreateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package ecr
diff --git a/internal/service/ecr/image_data_source.go b/internal/service/ecr/image_data_source.go
index 9a02c13bfbe..d202d4f0c80 100644
--- a/internal/service/ecr/image_data_source.go
+++ b/internal/service/ecr/image_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr
 
 import (
@@ -68,7 +71,7 @@ func DataSourceImage() *schema.Resource {
 
 func dataSourceImageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ECRConn()
+	conn := meta.(*conns.AWSClient).ECRConn(ctx)
 
 	input := &ecr.DescribeImagesInput{
 		RepositoryName: aws.String(d.Get("repository_name").(string)),
diff --git a/internal/service/ecr/image_data_source_test.go b/internal/service/ecr/image_data_source_test.go
index 05f03bc1e76..b2534568b55 100644
--- a/internal/service/ecr/image_data_source_test.go
+++ b/internal/service/ecr/image_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr_test
 
 import (
diff --git a/internal/service/ecr/lifecycle_policy.go b/internal/service/ecr/lifecycle_policy.go
index eb12e41438f..9e9c9e9c44a 100644
--- a/internal/service/ecr/lifecycle_policy.go
+++ b/internal/service/ecr/lifecycle_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr
 
 import (
@@ -64,7 +67,7 @@ func ResourceLifecyclePolicy() *schema.Resource {
 
 func resourceLifecyclePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ECRConn()
+	conn := meta.(*conns.AWSClient).ECRConn(ctx)
 
 	policy, err := structure.NormalizeJsonString(d.Get("policy").(string))
@@ -88,7 +91,7 @@ func resourceLifecyclePolicyCreate(ctx context.Context, d *schema.ResourceData,
 
 func resourceLifecyclePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ECRConn()
+	conn := meta.(*conns.AWSClient).ECRConn(ctx)
 
 	input := &ecr.GetLifecyclePolicyInput{
 		RepositoryName: aws.String(d.Id()),
@@ -164,7 +167,7 @@ func resourceLifecyclePolicyRead(ctx context.Context, d *schema.ResourceData, me
 
 func resourceLifecyclePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ECRConn()
+	conn := meta.(*conns.AWSClient).ECRConn(ctx)
 
 	input := &ecr.DeleteLifecyclePolicyInput{
 		RepositoryName: aws.String(d.Id()),
diff --git a/internal/service/ecr/lifecycle_policy_test.go b/internal/service/ecr/lifecycle_policy_test.go
index bd5aebd6bc9..5b3bacc0f31 100644
--- a/internal/service/ecr/lifecycle_policy_test.go
+++ b/internal/service/ecr/lifecycle_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ecr_test
 
 import (
@@ -95,7 +98,7 @@ func TestAccECRLifecyclePolicy_detectDiff(t *testing.T) {
 
 func testAccCheckLifecyclePolicyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ecr_lifecycle_policy" {
@@ -129,7 +132,7 @@ func testAccCheckLifecyclePolicyExists(ctx context.Context, name string) resourc
 			return fmt.Errorf("Not found: %s", name)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx)
 
 		input := &ecr.GetLifecyclePolicyInput{
 			RepositoryName: aws.String(rs.Primary.ID),
diff --git a/internal/service/ecr/pull_through_cache_rule.go b/internal/service/ecr/pull_through_cache_rule.go
index be2363bebac..5920e4121b2 100644
--- a/internal/service/ecr/pull_through_cache_rule.go
+++ b/internal/service/ecr/pull_through_cache_rule.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -52,7 +55,7 @@ func ResourcePullThroughCacheRule() *schema.Resource { } func resourcePullThroughCacheRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.ecr-in-func-name - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) repositoryPrefix := d.Get("ecr_repository_prefix").(string) input := &ecr.CreatePullThroughCacheRuleInput{ @@ -64,7 +67,7 @@ func resourcePullThroughCacheRuleCreate(ctx context.Context, d *schema.ResourceD _, err := conn.CreatePullThroughCacheRuleWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating ECR Pull Through Cache Rule (%s): %s", repositoryPrefix, err) + return diag.Errorf("creating ECR Pull Through Cache Rule (%s): %s", repositoryPrefix, err) } d.SetId(repositoryPrefix) @@ -73,7 +76,7 @@ func resourcePullThroughCacheRuleCreate(ctx context.Context, d *schema.ResourceD } func resourcePullThroughCacheRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) rule, err := FindPullThroughCacheRuleByRepositoryPrefix(ctx, conn, d.Id()) @@ -84,7 +87,7 @@ func resourcePullThroughCacheRuleRead(ctx context.Context, d *schema.ResourceDat } if err != nil { - return diag.Errorf("error reading ECR Pull Through Cache Rule (%s): %s", d.Id(), err) + return diag.Errorf("reading ECR Pull Through Cache Rule (%s): %s", d.Id(), err) } d.Set("ecr_repository_prefix", rule.EcrRepositoryPrefix) @@ -95,7 +98,7 @@ func resourcePullThroughCacheRuleRead(ctx context.Context, d *schema.ResourceDat } func resourcePullThroughCacheRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) log.Printf("[DEBUG] Deleting ECR Pull Through Cache Rule: (%s)", 
d.Id()) _, err := conn.DeletePullThroughCacheRuleWithContext(ctx, &ecr.DeletePullThroughCacheRuleInput{ @@ -108,7 +111,7 @@ func resourcePullThroughCacheRuleDelete(ctx context.Context, d *schema.ResourceD } if err != nil { - return diag.Errorf("error deleting ECR Pull Through Cache Rule (%s): %s", d.Id(), err) + return diag.Errorf("deleting ECR Pull Through Cache Rule (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/ecr/pull_through_cache_rule_data_source.go b/internal/service/ecr/pull_through_cache_rule_data_source.go new file mode 100644 index 00000000000..fb3b9da20af --- /dev/null +++ b/internal/service/ecr/pull_through_cache_rule_data_source.go @@ -0,0 +1,62 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package ecr + +import ( + "context" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" +) + +// @SDKDataSource("aws_ecr_pull_through_cache_rule") +func DataSourcePullThroughCacheRule() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourcePullThroughCacheRuleRead, + Schema: map[string]*schema.Schema{ + "ecr_repository_prefix": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.All( + validation.StringLenBetween(2, 20), + validation.StringMatch( + regexp.MustCompile(`^[a-z0-9]+(?:[._-][a-z0-9]+)*$`), + "must only include alphanumeric, underscore, period, or hyphen characters"), + ), + }, + "registry_id": { + Type: schema.TypeString, + Computed: true, + }, + "upstream_registry_url": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourcePullThroughCacheRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).ECRConn(ctx) 
+ + repositoryPrefix := d.Get("ecr_repository_prefix").(string) + + rule, err := FindPullThroughCacheRuleByRepositoryPrefix(ctx, conn, repositoryPrefix) + + if err != nil { + return diag.Errorf("reading ECR Pull Through Cache Rule (%s): %s", repositoryPrefix, err) + } + + d.SetId(aws.StringValue(rule.EcrRepositoryPrefix)) + d.Set("ecr_repository_prefix", rule.EcrRepositoryPrefix) + d.Set("registry_id", rule.RegistryId) + d.Set("upstream_registry_url", rule.UpstreamRegistryUrl) + + return nil +} diff --git a/internal/service/ecr/pull_through_cache_rule_data_source_test.go b/internal/service/ecr/pull_through_cache_rule_data_source_test.go new file mode 100644 index 00000000000..f0e1ef7b84a --- /dev/null +++ b/internal/service/ecr/pull_through_cache_rule_data_source_test.go @@ -0,0 +1,45 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package ecr_test + +import ( + "testing" + + "github.com/aws/aws-sdk-go/service/ecr" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccECRPullThroughCacheRuleDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + dataSource := "data.aws_ecr_pull_through_cache_rule.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ecr.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccPullThroughCacheRuleDataSourceConfig_basic(), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSource, "upstream_registry_url", "public.ecr.aws"), + acctest.CheckResourceAttrAccountID(dataSource, "registry_id"), + ), + }, + }, + }) +} + +func testAccPullThroughCacheRuleDataSourceConfig_basic() string { + return ` +resource "aws_ecr_pull_through_cache_rule" "test" { + ecr_repository_prefix = "ecr-public" + upstream_registry_url = "public.ecr.aws" +} + 
+data "aws_ecr_pull_through_cache_rule" "test" { + ecr_repository_prefix = aws_ecr_pull_through_cache_rule.test.ecr_repository_prefix +} +` +} diff --git a/internal/service/ecr/pull_through_cache_rule_test.go b/internal/service/ecr/pull_through_cache_rule_test.go index a09c78d65bc..2ccbefea86c 100644 --- a/internal/service/ecr/pull_through_cache_rule_test.go +++ b/internal/service/ecr/pull_through_cache_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( @@ -96,7 +99,7 @@ func TestAccECRPullThroughCacheRule_failWhenAlreadyExists(t *testing.T) { func testAccCheckPullThroughCacheRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecr_pull_through_cache_rule" { @@ -131,7 +134,7 @@ func testAccCheckPullThroughCacheRuleExists(ctx context.Context, n string) resou return fmt.Errorf("No ECR Pull Through Cache Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) _, err := tfecr.FindPullThroughCacheRuleByRepositoryPrefix(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ecr/registry_policy.go b/internal/service/ecr/registry_policy.go index e99435b4bbd..53c030854ff 100644 --- a/internal/service/ecr/registry_policy.go +++ b/internal/service/ecr/registry_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -49,7 +52,7 @@ func ResourceRegistryPolicy() *schema.Resource { func resourceRegistryPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -75,7 +78,7 @@ func resourceRegistryPolicyPut(ctx context.Context, d *schema.ResourceData, meta func resourceRegistryPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) log.Printf("[DEBUG] Reading registry policy %s", d.Id()) out, err := conn.GetRegistryPolicyWithContext(ctx, &ecr.GetRegistryPolicyInput{}) @@ -109,7 +112,7 @@ func resourceRegistryPolicyRead(ctx context.Context, d *schema.ResourceData, met func resourceRegistryPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) _, err := conn.DeleteRegistryPolicyWithContext(ctx, &ecr.DeleteRegistryPolicyInput{}) if err != nil { diff --git a/internal/service/ecr/registry_policy_test.go b/internal/service/ecr/registry_policy_test.go index 27fe5cece5e..ca4a37c8b40 100644 --- a/internal/service/ecr/registry_policy_test.go +++ b/internal/service/ecr/registry_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( @@ -88,7 +91,7 @@ func testAccRegistryPolicy_disappears(t *testing.T) { func testAccCheckRegistryPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecr_registry_policy" { @@ -119,7 +122,7 @@ func testAccCheckRegistryPolicyExists(ctx context.Context, name string, res *ecr return fmt.Errorf("No ECR registry policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) output, err := conn.GetRegistryPolicyWithContext(ctx, &ecr.GetRegistryPolicyInput{}) if err != nil { diff --git a/internal/service/ecr/registry_scanning_configuration.go b/internal/service/ecr/registry_scanning_configuration.go index 924dd582ffc..b47ca40a6f7 100644 --- a/internal/service/ecr/registry_scanning_configuration.go +++ b/internal/service/ecr/registry_scanning_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -78,7 +81,7 @@ func ResourceRegistryScanningConfiguration() *schema.Resource { func resourceRegistryScanningConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) input := ecr.PutRegistryScanningConfigurationInput{ ScanType: aws.String(d.Get("scan_type").(string)), @@ -98,7 +101,7 @@ func resourceRegistryScanningConfigurationPut(ctx context.Context, d *schema.Res func resourceRegistryScanningConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) out, err := conn.GetRegistryScanningConfigurationWithContext(ctx, &ecr.GetRegistryScanningConfigurationInput{}) @@ -115,7 +118,7 @@ func resourceRegistryScanningConfigurationRead(ctx context.Context, d *schema.Re func resourceRegistryScanningConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) log.Printf("[DEBUG] Deleting ECR Registry Scanning Configuration: (%s)", d.Id()) _, err := conn.PutRegistryScanningConfigurationWithContext(ctx, &ecr.PutRegistryScanningConfigurationInput{ diff --git a/internal/service/ecr/registry_scanning_configuration_test.go b/internal/service/ecr/registry_scanning_configuration_test.go index 3037112c6d1..6fb0d2d9004 100644 --- a/internal/service/ecr/registry_scanning_configuration_test.go +++ b/internal/service/ecr/registry_scanning_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( @@ -112,7 +115,7 @@ func testAccRegistryScanningConfiguration_update(t *testing.T) { func testAccCheckRegistryScanningConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecr_registry_scanning_configuration" { @@ -141,7 +144,7 @@ func testAccRegistryScanningConfigurationExists(ctx context.Context, name string return fmt.Errorf("No ECR Registry Scanning Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) output, err := conn.GetRegistryScanningConfigurationWithContext(ctx, &ecr.GetRegistryScanningConfigurationInput{}) diff --git a/internal/service/ecr/replication_configuration.go b/internal/service/ecr/replication_configuration.go index b8678c04048..c537ce532b6 100644 --- a/internal/service/ecr/replication_configuration.go +++ b/internal/service/ecr/replication_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -92,7 +95,7 @@ func ResourceReplicationConfiguration() *schema.Resource { func resourceReplicationConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) input := ecr.PutReplicationConfigurationInput{ ReplicationConfiguration: expandReplicationConfigurationReplicationConfiguration(d.Get("replication_configuration").([]interface{})), @@ -110,7 +113,7 @@ func resourceReplicationConfigurationPut(ctx context.Context, d *schema.Resource func resourceReplicationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) log.Printf("[DEBUG] Reading ECR Replication Configuration %s", d.Id()) out, err := conn.DescribeRegistryWithContext(ctx, &ecr.DescribeRegistryInput{}) @@ -129,7 +132,7 @@ func resourceReplicationConfigurationRead(ctx context.Context, d *schema.Resourc func resourceReplicationConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) input := ecr.PutReplicationConfigurationInput{ ReplicationConfiguration: &ecr.ReplicationConfiguration{ diff --git a/internal/service/ecr/replication_configuration_test.go b/internal/service/ecr/replication_configuration_test.go index 52ff076887d..67fc5018d31 100644 --- a/internal/service/ecr/replication_configuration_test.go +++ b/internal/service/ecr/replication_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( @@ -149,7 +152,7 @@ func testAccCheckReplicationConfigurationExists(ctx context.Context, name string return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) out, err := conn.DescribeRegistryWithContext(ctx, &ecr.DescribeRegistryInput{}) if err != nil { return fmt.Errorf("ECR replication rules not found: %w", err) @@ -165,7 +168,7 @@ func testAccCheckReplicationConfigurationExists(ctx context.Context, name string func testAccCheckReplicationConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecr_replication_configuration" { diff --git a/internal/service/ecr/repository.go b/internal/service/ecr/repository.go index 120d7d5f269..159f33ea851 100644 --- a/internal/service/ecr/repository.go +++ b/internal/service/ecr/repository.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -113,14 +116,14 @@ func ResourceRepository() *schema.Resource { func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) name := d.Get("name").(string) input := &ecr.CreateRepositoryInput{ EncryptionConfiguration: expandRepositoryEncryptionConfiguration(d.Get("encryption_configuration").([]interface{})), ImageTagMutability: aws.String(d.Get("image_tag_mutability").(string)), RepositoryName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("image_scanning_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -146,7 +149,7 @@ func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta d.SetId(aws.StringValue(output.Repository.RepositoryName)) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.Repository.RepositoryArn), tags) // If default tags only, continue. Otherwise, error. 
@@ -164,7 +167,7 @@ func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindRepositoryByName(ctx, conn, d.Id()) @@ -199,7 +202,7 @@ func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta in func resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) if d.HasChange("image_tag_mutability") { input := &ecr.PutImageTagMutabilityInput{ @@ -239,7 +242,7 @@ func resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) log.Printf("[DEBUG] Deleting ECR Repository: %s", d.Id()) _, err := conn.DeleteRepositoryWithContext(ctx, &ecr.DeleteRepositoryInput{ diff --git a/internal/service/ecr/repository_data_source.go b/internal/service/ecr/repository_data_source.go index e45c0c57891..8cf604638c8 100644 --- a/internal/service/ecr/repository_data_source.go +++ b/internal/service/ecr/repository_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -83,7 +86,7 @@ func DataSourceRepository() *schema.Resource { func dataSourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -115,7 +118,7 @@ func dataSourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta d.Set("registry_id", repository.RegistryId) d.Set("repository_url", repository.RepositoryUri) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) // Some partitions (i.e., ISO) may not support tagging, giving error if meta.(*conns.AWSClient).Partition != endpoints.AwsPartitionID && verify.ErrorISOUnsupported(conn.PartitionID, err) { diff --git a/internal/service/ecr/repository_data_source_test.go b/internal/service/ecr/repository_data_source_test.go index 2c84980a2e6..238bbf9a182 100644 --- a/internal/service/ecr/repository_data_source_test.go +++ b/internal/service/ecr/repository_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( diff --git a/internal/service/ecr/repository_policy.go b/internal/service/ecr/repository_policy.go index 22192757dd6..cafc71a8ddd 100644 --- a/internal/service/ecr/repository_policy.go +++ b/internal/service/ecr/repository_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr import ( @@ -56,7 +59,7 @@ func ResourceRepositoryPolicy() *schema.Resource { func resourceRepositoryPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -100,7 +103,7 @@ func resourceRepositoryPolicyPut(ctx context.Context, d *schema.ResourceData, me func resourceRepositoryPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) input := &ecr.GetRepositoryPolicyInput{ RepositoryName: aws.String(d.Id()), @@ -176,7 +179,7 @@ func resourceRepositoryPolicyRead(ctx context.Context, d *schema.ResourceData, m func resourceRepositoryPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRConn() + conn := meta.(*conns.AWSClient).ECRConn(ctx) _, err := conn.DeleteRepositoryPolicyWithContext(ctx, &ecr.DeleteRepositoryPolicyInput{ RepositoryName: aws.String(d.Id()), diff --git a/internal/service/ecr/repository_policy_test.go b/internal/service/ecr/repository_policy_test.go index 4f791adab03..55c3eca34b7 100644 --- a/internal/service/ecr/repository_policy_test.go +++ b/internal/service/ecr/repository_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( @@ -166,7 +169,7 @@ func TestAccECRRepositoryPolicy_Disappears_repository(t *testing.T) { func testAccCheckRepositoryPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecr_repository_policy" { diff --git a/internal/service/ecr/repository_test.go b/internal/service/ecr/repository_test.go index aa4b91c4522..f393416f68e 100644 --- a/internal/service/ecr/repository_test.go +++ b/internal/service/ecr/repository_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecr_test import ( @@ -294,7 +297,7 @@ func TestAccECRRepository_Encryption_aes256(t *testing.T) { func testAccCheckRepositoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecr_repository" { @@ -329,7 +332,7 @@ func testAccCheckRepositoryExists(ctx context.Context, n string, v *ecr.Reposito return fmt.Errorf("No ECR Repository ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRConn(ctx) output, err := tfecr.FindRepositoryByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ecr/service_package_gen.go b/internal/service/ecr/service_package_gen.go index b1b0d34e0a1..21c06cb21fe 100644 --- a/internal/service/ecr/service_package_gen.go +++ b/internal/service/ecr/service_package_gen.go @@ -5,6 +5,10 @@ package ecr import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + 
ecr_sdkv1 "github.com/aws/aws-sdk-go/service/ecr" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -29,6 +33,10 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac Factory: DataSourceImage, TypeName: "aws_ecr_image", }, + { + Factory: DataSourcePullThroughCacheRule, + TypeName: "aws_ecr_pull_through_cache_rule", + }, { Factory: DataSourceRepository, TypeName: "aws_ecr_repository", @@ -77,4 +85,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ECR } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ecr_sdkv1.ECR, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ecr_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ecr/sweep.go b/internal/service/ecr/sweep.go index d25ed4fcc01..94765a849e4 100644 --- a/internal/service/ecr/sweep.go +++ b/internal/service/ecr/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -25,11 +27,11 @@ func init() { func sweepRepositories(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ECRConn() + conn := client.ECRConn(ctx) var errors error err = conn.DescribeRepositoriesPagesWithContext(ctx, &ecr.DescribeRepositoriesInput{}, func(page *ecr.DescribeRepositoriesOutput, lastPage bool) bool { diff --git a/internal/service/ecr/tags_gen.go b/internal/service/ecr/tags_gen.go index 00e637766db..7d75f0d328e 100644 --- a/internal/service/ecr/tags_gen.go +++ b/internal/service/ecr/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ecr service tags. +// listTags lists ecr service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn ecriface.ECRAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn ecriface.ECRAPI, identifier string) (tftags.KeyValueTags, error) { input := &ecr.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn ecriface.ECRAPI, identifier string) (tft // ListTags lists ecr service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ECRConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ECRConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*ecr.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns ecr service tags from Context. +// getTagsIn returns ecr service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*ecr.Tag { +func getTagsIn(ctx context.Context) []*ecr.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*ecr.Tag { return nil } -// SetTagsOut sets ecr service tags in Context. -func SetTagsOut(ctx context.Context, tags []*ecr.Tag) { +// setTagsOut sets ecr service tags in Context. +func setTagsOut(ctx context.Context, tags []*ecr.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn ecriface.ECRAPI, identifier string, ta return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates ecr service tags. +// updateTags updates ecr service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn ecriface.ECRAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ecriface.ECRAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn ecriface.ECRAPI, identifier string, ol // UpdateTags updates ecr service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ECRConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ECRConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ecrpublic/authorization_token_data_source.go b/internal/service/ecrpublic/authorization_token_data_source.go index b8564fe6f54..720cc3a2311 100644 --- a/internal/service/ecrpublic/authorization_token_data_source.go +++ b/internal/service/ecrpublic/authorization_token_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecrpublic import ( @@ -44,7 +47,7 @@ func DataSourceAuthorizationToken() *schema.Resource { func dataSourceAuthorizationTokenRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) params := &ecrpublic.GetAuthorizationTokenInput{} out, err := conn.GetAuthorizationTokenWithContext(ctx, params) diff --git a/internal/service/ecrpublic/authorization_token_data_source_test.go b/internal/service/ecrpublic/authorization_token_data_source_test.go index 1a192008b2d..e83224e7fa7 100644 --- a/internal/service/ecrpublic/authorization_token_data_source_test.go +++ b/internal/service/ecrpublic/authorization_token_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecrpublic_test import ( diff --git a/internal/service/ecrpublic/generate.go b/internal/service/ecrpublic/generate.go index 7c2e8e6e8b6..5e9838dab92 100644 --- a/internal/service/ecrpublic/generate.go +++ b/internal/service/ecrpublic/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ecrpublic diff --git a/internal/service/ecrpublic/repository.go b/internal/service/ecrpublic/repository.go index ede723612c9..eda33aaf245 100644 --- a/internal/service/ecrpublic/repository.go +++ b/internal/service/ecrpublic/repository.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecrpublic import ( @@ -123,11 +126,11 @@ func ResourceRepository() *schema.Resource { func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) input := ecrpublic.CreateRepositoryInput{ RepositoryName: aws.String(d.Get("repository_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("catalog_data"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -154,7 +157,7 @@ func resourceRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) log.Printf("[DEBUG] Reading ECR Public repository %s", d.Id()) var out *ecrpublic.DescribeRepositoriesOutput @@ -235,7 +238,7 @@ func resourceRepositoryRead(ctx context.Context, d *schema.ResourceData, meta in func resourceRepositoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) deleteInput := &ecrpublic.DeleteRepositoryInput{ RepositoryName: aws.String(d.Id()), @@ -290,7 +293,7 @@ func resourceRepositoryDelete(ctx context.Context, d *schema.ResourceData, meta func resourceRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) if d.HasChange("catalog_data") { if err := resourceRepositoryUpdateCatalogData(ctx, conn, d); err != nil { @@ -380,7 +383,7 @@ func 
resourceRepositoryUpdateCatalogData(ctx context.Context, conn *ecrpublic.EC _, err := conn.PutRepositoryCatalogDataWithContext(ctx, &input) if err != nil { - return fmt.Errorf("error updating catalog data for repository(%s): %s", d.Id(), err) + return fmt.Errorf("updating catalog data for repository(%s): %s", d.Id(), err) } } } diff --git a/internal/service/ecrpublic/repository_policy.go b/internal/service/ecrpublic/repository_policy.go index b837e5bfcf8..671eabc100d 100644 --- a/internal/service/ecrpublic/repository_policy.go +++ b/internal/service/ecrpublic/repository_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecrpublic import ( @@ -61,7 +64,7 @@ const ( func resourceRepositoryPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -102,7 +105,7 @@ func resourceRepositoryPolicyPut(ctx context.Context, d *schema.ResourceData, me func resourceRepositoryPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) output, err := FindRepositoryPolicyByName(ctx, conn, d.Id()) @@ -137,7 +140,7 @@ func resourceRepositoryPolicyRead(ctx context.Context, d *schema.ResourceData, m func resourceRepositoryPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECRPublicConn() + conn := meta.(*conns.AWSClient).ECRPublicConn(ctx) _, err := conn.DeleteRepositoryPolicyWithContext(ctx, &ecrpublic.DeleteRepositoryPolicyInput{ RegistryId: aws.String(d.Get("registry_id").(string)), diff --git 
a/internal/service/ecrpublic/repository_policy_test.go b/internal/service/ecrpublic/repository_policy_test.go index 8fad4640fba..501f4a88249 100644 --- a/internal/service/ecrpublic/repository_policy_test.go +++ b/internal/service/ecrpublic/repository_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecrpublic_test import ( @@ -135,7 +138,7 @@ func TestAccECRPublicRepositoryPolicy_iam(t *testing.T) { func testAccCheckRepositoryPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecrpublic_repository_policy" { @@ -170,7 +173,7 @@ func testAccCheckRepositoryPolicyExists(ctx context.Context, name string) resour return fmt.Errorf("No ECR Public Repository Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn(ctx) _, err := tfecrpublic.FindRepositoryPolicyByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ecrpublic/repository_test.go b/internal/service/ecrpublic/repository_test.go index a1f1f8d5271..c6f95e0ef45 100644 --- a/internal/service/ecrpublic/repository_test.go +++ b/internal/service/ecrpublic/repository_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecrpublic_test import ( @@ -374,7 +377,7 @@ func testAccCheckRepositoryExists(ctx context.Context, name string, res *ecrpubl return fmt.Errorf("No ECR Public repository ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn(ctx) output, err := conn.DescribeRepositoriesWithContext(ctx, &ecrpublic.DescribeRepositoriesInput{ RepositoryNames: aws.StringSlice([]string{rs.Primary.ID}), @@ -394,7 +397,7 @@ func testAccCheckRepositoryExists(ctx context.Context, name string, res *ecrpubl func testAccCheckRepositoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECRPublicConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecrpublic_repository" { diff --git a/internal/service/ecrpublic/service_package_gen.go b/internal/service/ecrpublic/service_package_gen.go index 12f31bf7514..08cc778d7b9 100644 --- a/internal/service/ecrpublic/service_package_gen.go +++ b/internal/service/ecrpublic/service_package_gen.go @@ -5,6 +5,10 @@ package ecrpublic import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ecrpublic_sdkv1 "github.com/aws/aws-sdk-go/service/ecrpublic" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -49,4 +53,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ECRPublic } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ecrpublic_sdkv1.ECRPublic, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ecrpublic_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ecrpublic/sweep.go b/internal/service/ecrpublic/sweep.go index 5a896ac4372..4aefd73bae3 100644 --- a/internal/service/ecrpublic/sweep.go +++ b/internal/service/ecrpublic/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ecrpublic" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepRepositories(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ECRPublicConn() + conn := client.ECRPublicConn(ctx) input := &ecrpublic.DescribeRepositoriesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -58,7 +60,7 @@ func sweepRepositories(region string) error { return fmt.Errorf("error listing ECR Public Repositories (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping ECR Public Repositories (%s): %w", region, err) diff --git a/internal/service/ecrpublic/tags_gen.go b/internal/service/ecrpublic/tags_gen.go index f25eb4783c5..f14b333d18e 100644 --- 
a/internal/service/ecrpublic/tags_gen.go +++ b/internal/service/ecrpublic/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ecrpublic service tags. +// listTags lists ecrpublic service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn ecrpubliciface.ECRPublicAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn ecrpubliciface.ECRPublicAPI, identifier string) (tftags.KeyValueTags, error) { input := &ecrpublic.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn ecrpubliciface.ECRPublicAPI, identifier // ListTags lists ecrpublic service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ECRPublicConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ECRPublicConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*ecrpublic.Tag) tftags.KeyValueTag return tftags.New(ctx, m) } -// GetTagsIn returns ecrpublic service tags from Context. +// getTagsIn returns ecrpublic service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*ecrpublic.Tag { +func getTagsIn(ctx context.Context) []*ecrpublic.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*ecrpublic.Tag { return nil } -// SetTagsOut sets ecrpublic service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*ecrpublic.Tag) { +// setTagsOut sets ecrpublic service tags in Context. +func setTagsOut(ctx context.Context, tags []*ecrpublic.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ecrpublic service tags. +// updateTags updates ecrpublic service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ecrpubliciface.ECRPublicAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ecrpubliciface.ECRPublicAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn ecrpubliciface.ECRPublicAPI, identifie // UpdateTags updates ecrpublic service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ECRPublicConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ECRPublicConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ecs/account_setting_default.go b/internal/service/ecs/account_setting_default.go index 8022ded646f..746fe80dfca 100644 --- a/internal/service/ecs/account_setting_default.go +++ b/internal/service/ecs/account_setting_default.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -61,7 +64,7 @@ func resourceAccountSettingDefaultImport(ctx context.Context, d *schema.Resource func resourceAccountSettingDefaultCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) settingName := d.Get("name").(string) settingValue := d.Get("value").(string) @@ -87,7 +90,7 @@ func resourceAccountSettingDefaultCreate(ctx context.Context, d *schema.Resource func resourceAccountSettingDefaultRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) input := &ecs.ListAccountSettingsInput{ Name: aws.String(d.Get("name").(string)), @@ -119,7 +122,7 @@ func resourceAccountSettingDefaultRead(ctx context.Context, d *schema.ResourceDa func resourceAccountSettingDefaultUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) settingName := d.Get("name").(string) settingValue := d.Get("value").(string) @@ -141,7 +144,7 @@ func resourceAccountSettingDefaultUpdate(ctx context.Context, d *schema.Resource func resourceAccountSettingDefaultDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) settingName := d.Get("name").(string) diff --git a/internal/service/ecs/account_setting_default_test.go b/internal/service/ecs/account_setting_default_test.go index 6b6d98a0374..74a166a7e94 100644 --- a/internal/service/ecs/account_setting_default_test.go +++ b/internal/service/ecs/account_setting_default_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( @@ -162,7 +165,7 @@ func TestAccECSAccountSettingDefault_containerInsights(t *testing.T) { func testAccCheckAccountSettingDefaultDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecs_account_setting_default" { diff --git a/internal/service/ecs/capacity_provider.go b/internal/service/ecs/capacity_provider.go index 9fd3e7f2b4e..ae413198690 100644 --- a/internal/service/ecs/capacity_provider.go +++ b/internal/service/ecs/capacity_provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -116,13 +119,13 @@ func ResourceCapacityProvider() *schema.Resource { func resourceCapacityProviderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) name := d.Get("name").(string) input := ecs.CreateCapacityProviderInput{ Name: aws.String(name), AutoScalingGroupProvider: expandAutoScalingGroupProviderCreate(d.Get("auto_scaling_group_provider")), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateCapacityProviderWithContext(ctx, &input) @@ -141,7 +144,7 @@ func resourceCapacityProviderCreate(ctx context.Context, d *schema.ResourceData, d.SetId(aws.StringValue(output.CapacityProvider.CapacityProviderArn)) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -159,7 +162,7 @@ func resourceCapacityProviderCreate(ctx context.Context, d *schema.ResourceData, func resourceCapacityProviderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) output, err := FindCapacityProviderByARN(ctx, conn, d.Id()) @@ -181,14 +184,14 @@ func resourceCapacityProviderRead(ctx context.Context, d *schema.ResourceData, m d.Set("name", output.Name) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceCapacityProviderUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ecs.UpdateCapacityProviderInput{ @@ -229,7 +232,7 @@ func resourceCapacityProviderUpdate(ctx context.Context, d *schema.ResourceData, func resourceCapacityProviderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) log.Printf("[DEBUG] Deleting ECS Capacity Provider (%s)", d.Id()) _, err := conn.DeleteCapacityProviderWithContext(ctx, &ecs.DeleteCapacityProviderInput{ diff --git a/internal/service/ecs/capacity_provider_test.go b/internal/service/ecs/capacity_provider_test.go index 583a089cce9..0b5142fcf3c 100644 --- a/internal/service/ecs/capacity_provider_test.go +++ b/internal/service/ecs/capacity_provider_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( @@ -227,7 +230,7 @@ func TestAccECSCapacityProvider_tags(t *testing.T) { func testAccCheckCapacityProviderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecs_capacity_provider" { @@ -262,7 +265,7 @@ func testAccCheckCapacityProviderExists(ctx context.Context, resourceName string return fmt.Errorf("No ECS Capacity Provider ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) output, err := tfecs.FindCapacityProviderByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ecs/cluster.go b/internal/service/ecs/cluster.go index 63bbf7e7552..fc5cd133800 100644 --- a/internal/service/ecs/cluster.go +++ b/internal/service/ecs/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -156,12 +159,12 @@ func resourceClusterImport(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) clusterName := d.Get("name").(string) input := &ecs.CreateClusterInput{ ClusterName: aws.String(clusterName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("configuration"); ok && len(v.([]interface{})) > 0 { @@ -198,7 +201,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int } // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. @@ -216,7 +219,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, clusterReadTimeout, func() (interface{}, error) { return FindClusterByNameOrARN(ctx, conn, d.Id()) @@ -251,13 +254,13 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "setting setting: %s", err) } - SetTagsOut(ctx, cluster.Tags) + setTagsOut(ctx, cluster.Tags) return diags } func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) if d.HasChanges("configuration", "service_connect_defaults", "setting") { input := &ecs.UpdateClusterInput{ @@ -394,7 +397,7 @@ func waitClusterDeleted(ctx context.Context, conn *ecs.ECS, arn string) (*ecs.Cl func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) log.Printf("[DEBUG] Deleting ECS Cluster: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, clusterDeleteTimeout, func() (interface{}, error) { diff --git a/internal/service/ecs/cluster_capacity_providers.go b/internal/service/ecs/cluster_capacity_providers.go index 526a889698f..35ff52c0c94 100644 --- a/internal/service/ecs/cluster_capacity_providers.go +++ 
b/internal/service/ecs/cluster_capacity_providers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -73,7 +76,7 @@ func ResourceClusterCapacityProviders() *schema.Resource { } func resourceClusterCapacityProvidersPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) clusterName := d.Get("cluster_name").(string) input := &ecs.PutClusterCapacityProvidersInput{ @@ -100,7 +103,7 @@ func resourceClusterCapacityProvidersPut(ctx context.Context, d *schema.Resource } func resourceClusterCapacityProvidersRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) cluster, err := FindClusterByNameOrARN(ctx, conn, d.Id()) @@ -126,7 +129,7 @@ func resourceClusterCapacityProvidersRead(ctx context.Context, d *schema.Resourc } func resourceClusterCapacityProvidersDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) input := &ecs.PutClusterCapacityProvidersInput{ CapacityProviders: []*string{}, diff --git a/internal/service/ecs/cluster_capacity_providers_test.go b/internal/service/ecs/cluster_capacity_providers_test.go index f058fe832f5..fda541b8f5e 100644 --- a/internal/service/ecs/cluster_capacity_providers_test.go +++ b/internal/service/ecs/cluster_capacity_providers_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/cluster_data_source.go b/internal/service/ecs/cluster_data_source.go index c41a881643e..06d4b7b0306 100644 --- a/internal/service/ecs/cluster_data_source.go +++ b/internal/service/ecs/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -74,7 +77,7 @@ func DataSourceCluster() *schema.Resource { } func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig clusterName := d.Get("cluster_name").(string) diff --git a/internal/service/ecs/cluster_data_source_test.go b/internal/service/ecs/cluster_data_source_test.go index 481c23ab198..f3313597f05 100644 --- a/internal/service/ecs/cluster_data_source_test.go +++ b/internal/service/ecs/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/cluster_test.go b/internal/service/ecs/cluster_test.go index a38a6949166..36178ff8087 100644 --- a/internal/service/ecs/cluster_test.go +++ b/internal/service/ecs/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( @@ -263,7 +266,7 @@ func TestAccECSCluster_configuration(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecs_cluster" { @@ -298,7 +301,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *ecs.Cluster) re return fmt.Errorf("No ECS Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) output, err := tfecs.FindClusterByNameOrARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ecs/consts.go b/internal/service/ecs/consts.go index 3dfae22b648..51cea2cb9ae 100644 --- a/internal/service/ecs/consts.go +++ b/internal/service/ecs/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/ecs/container_definition_data_source.go b/internal/service/ecs/container_definition_data_source.go index e0f62e8a029..0db5a4d8fca 100644 --- a/internal/service/ecs/container_definition_data_source.go +++ b/internal/service/ecs/container_definition_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -69,7 +72,7 @@ func DataSourceContainerDefinition() *schema.Resource { func dataSourceContainerDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) params := &ecs.DescribeTaskDefinitionInput{ TaskDefinition: aws.String(d.Get("task_definition").(string)), diff --git a/internal/service/ecs/container_definition_data_source_test.go b/internal/service/ecs/container_definition_data_source_test.go index 0e172591d03..8fdf91de7ee 100644 --- a/internal/service/ecs/container_definition_data_source_test.go +++ b/internal/service/ecs/container_definition_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/find.go b/internal/service/ecs/find.go index 11e8e550517..d8635e37baf 100644 --- a/internal/service/ecs/find.go +++ b/internal/service/ecs/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/ecs/flex.go b/internal/service/ecs/flex.go index bb28fdeeb07..a9d9042e711 100644 --- a/internal/service/ecs/flex.go +++ b/internal/service/ecs/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/ecs/generate.go b/internal/service/ecs/generate.go index 6766c112378..b9c02534b90 100644 --- a/internal/service/ecs/generate.go +++ b/internal/service/ecs/generate.go @@ -1,6 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeCapacityProviders //go:generate go run ../../generate/tagresource/main.go //go:generate go run ../../generate/tags/main.go -GetTag -ListTags -ServiceTagsSlice -UpdateTags -CreateTags -ParentNotFoundErrCode=InvalidParameterException "-ParentNotFoundErrMsg=The specified cluster is inactive. Specify an active cluster and try again." +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ecs diff --git a/internal/service/ecs/service.go b/internal/service/ecs/service.go index 0794865b6e7..139abef7aea 100644 --- a/internal/service/ecs/service.go +++ b/internal/service/ecs/service.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -476,7 +479,7 @@ func ResourceService() *schema.Resource { func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) deploymentController := expandDeploymentController(d.Get("deployment_controller").([]interface{})) deploymentMinimumHealthyPercent := d.Get("deployment_minimum_healthy_percent").(int) @@ -492,7 +495,7 @@ func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta int NetworkConfiguration: expandNetworkConfiguration(d.Get("network_configuration").([]interface{})), SchedulingStrategy: aws.String(schedulingStrategy), ServiceName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("alarms"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -622,7 +625,7 @@ func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta int } // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. @@ -640,7 +643,7 @@ func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) cluster := d.Get("cluster").(string) @@ -762,21 +765,21 @@ func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta inter } // if err := d.Set("service_connect_configuration", flattenServiceConnectConfiguration(service.ServiceConnectConfiguration)); err != nil { - // return fmt.Errorf("error setting service_connect_configuration for (%s): %w", d.Id(), err) + // return fmt.Errorf("setting service_connect_configuration for (%s): %w", d.Id(), err) // } if err := d.Set("service_registries", flattenServiceRegistries(service.ServiceRegistries)); err != nil { return sdkdiag.AppendErrorf(diags, "setting service_registries: %s", err) } - SetTagsOut(ctx, service.Tags) + setTagsOut(ctx, service.Tags) return diags } func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ecs.UpdateServiceInput{ @@ -785,28 +788,20 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int Service: aws.String(d.Id()), } - schedulingStrategy := d.Get("scheduling_strategy").(string) - - switch schedulingStrategy { - case ecs.SchedulingStrategyDaemon: - if d.HasChange("deployment_minimum_healthy_percent") { - input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ - 
MinimumHealthyPercent: aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))), - } - } - case ecs.SchedulingStrategyReplica: - if d.HasChange("desired_count") { - input.DesiredCount = aws.Int64(int64(d.Get("desired_count").(int))) + if d.HasChange("alarms") { + if input.DeploymentConfiguration == nil { + input.DeploymentConfiguration = &ecs.DeploymentConfiguration{} } - if d.HasChanges("deployment_maximum_percent", "deployment_minimum_healthy_percent") { - input.DeploymentConfiguration = &ecs.DeploymentConfiguration{ - MaximumPercent: aws.Int64(int64(d.Get("deployment_maximum_percent").(int))), - MinimumHealthyPercent: aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))), - } + if v, ok := d.GetOk("alarms"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.DeploymentConfiguration.Alarms = expandAlarms(v.([]interface{})[0].(map[string]interface{})) } } + if d.HasChange("capacity_provider_strategy") { + input.CapacityProviderStrategy = expandCapacityProviderStrategy(d.Get("capacity_provider_strategy").(*schema.Set)) + } + if d.HasChange("deployment_circuit_breaker") { if input.DeploymentConfiguration == nil { input.DeploymentConfiguration = &ecs.DeploymentConfiguration{} @@ -820,6 +815,52 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int } } + switch schedulingStrategy := d.Get("scheduling_strategy").(string); schedulingStrategy { + case ecs.SchedulingStrategyDaemon: + if d.HasChange("deployment_minimum_healthy_percent") { + if input.DeploymentConfiguration == nil { + input.DeploymentConfiguration = &ecs.DeploymentConfiguration{} + } + + input.DeploymentConfiguration.MinimumHealthyPercent = aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))) + } + case ecs.SchedulingStrategyReplica: + if d.HasChanges("deployment_maximum_percent", "deployment_minimum_healthy_percent") { + if input.DeploymentConfiguration == nil { + input.DeploymentConfiguration = 
&ecs.DeploymentConfiguration{} + } + + input.DeploymentConfiguration.MaximumPercent = aws.Int64(int64(d.Get("deployment_maximum_percent").(int))) + input.DeploymentConfiguration.MinimumHealthyPercent = aws.Int64(int64(d.Get("deployment_minimum_healthy_percent").(int))) + } + + if d.HasChange("desired_count") { + input.DesiredCount = aws.Int64(int64(d.Get("desired_count").(int))) + } + } + + if d.HasChange("enable_ecs_managed_tags") { + input.EnableECSManagedTags = aws.Bool(d.Get("enable_ecs_managed_tags").(bool)) + } + + if d.HasChange("enable_execute_command") { + input.EnableExecuteCommand = aws.Bool(d.Get("enable_execute_command").(bool)) + } + + if d.HasChange("health_check_grace_period_seconds") { + input.HealthCheckGracePeriodSeconds = aws.Int64(int64(d.Get("health_check_grace_period_seconds").(int))) + } + + if d.HasChange("load_balancer") { + if v, ok := d.Get("load_balancer").(*schema.Set); ok && v != nil { + input.LoadBalancers = expandLoadBalancers(v.List()) + } + } + + if d.HasChange("network_configuration") { + input.NetworkConfiguration = expandNetworkConfiguration(d.Get("network_configuration").([]interface{})) + } + if d.HasChange("ordered_placement_strategy") { // Reference: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateService.html#ECS-UpdateService-request-placementStrategy // To remove an existing placement strategy, specify an empty object. 
@@ -852,46 +893,10 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int } } - if d.HasChange("alarms") { - if v, ok := d.GetOk("alarms"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.DeploymentConfiguration.Alarms = expandAlarms(v.([]interface{})[0].(map[string]interface{})) - } - } - if d.HasChange("platform_version") { input.PlatformVersion = aws.String(d.Get("platform_version").(string)) } - if d.HasChange("health_check_grace_period_seconds") { - input.HealthCheckGracePeriodSeconds = aws.Int64(int64(d.Get("health_check_grace_period_seconds").(int))) - } - - if d.HasChange("task_definition") { - input.TaskDefinition = aws.String(d.Get("task_definition").(string)) - } - - if d.HasChange("network_configuration") { - input.NetworkConfiguration = expandNetworkConfiguration(d.Get("network_configuration").([]interface{})) - } - - if d.HasChange("capacity_provider_strategy") { - input.CapacityProviderStrategy = expandCapacityProviderStrategy(d.Get("capacity_provider_strategy").(*schema.Set)) - } - - if d.HasChange("enable_execute_command") { - input.EnableExecuteCommand = aws.Bool(d.Get("enable_execute_command").(bool)) - } - - if d.HasChange("enable_ecs_managed_tags") { - input.EnableECSManagedTags = aws.Bool(d.Get("enable_ecs_managed_tags").(bool)) - } - - if d.HasChange("load_balancer") { - if v, ok := d.Get("load_balancer").(*schema.Set); ok && v != nil { - input.LoadBalancers = expandLoadBalancers(v.List()) - } - } - if d.HasChange("propagate_tags") { input.PropagateTags = aws.String(d.Get("propagate_tags").(string)) } @@ -904,6 +909,10 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int input.ServiceRegistries = expandServiceRegistries(d.Get("service_registries").([]interface{})) } + if d.HasChange("task_definition") { + input.TaskDefinition = aws.String(d.Get("task_definition").(string)) + } + // Retry due to IAM eventual consistency err := retry.RetryContext(ctx, 
propagationTimeout+serviceUpdateTimeout, func() *retry.RetryError { _, err := conn.UpdateServiceWithContext(ctx, input) @@ -944,7 +953,7 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceServiceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) service, err := FindServiceNoTagsByID(ctx, conn, d.Id(), d.Get("cluster").(string)) diff --git a/internal/service/ecs/service_data_source.go b/internal/service/ecs/service_data_source.go index 3f0330dd3b5..0a2e47aa186 100644 --- a/internal/service/ecs/service_data_source.go +++ b/internal/service/ecs/service_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -54,7 +57,7 @@ func DataSourceService() *schema.Resource { func dataSourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig clusterArn := d.Get("cluster_arn").(string) diff --git a/internal/service/ecs/service_data_source_test.go b/internal/service/ecs/service_data_source_test.go index 98aa9337e7c..466853d8ee6 100644 --- a/internal/service/ecs/service_data_source_test.go +++ b/internal/service/ecs/service_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/service_package_gen.go b/internal/service/ecs/service_package_gen.go index a51c38017c7..1f800c59c81 100644 --- a/internal/service/ecs/service_package_gen.go +++ b/internal/service/ecs/service_package_gen.go @@ -5,6 +5,10 @@ package ecs import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ecs_sdkv1 "github.com/aws/aws-sdk-go/service/ecs" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -105,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ECS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ecs_sdkv1.ECS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ecs_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ecs/service_test.go b/internal/service/ecs/service_test.go index b5c4ebe5d32..7c2ff3dd075 100644 --- a/internal/service/ecs/service_test.go +++ b/internal/service/ecs/service_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( @@ -511,7 +514,7 @@ func TestAccECSService_DeploymentControllerType_external(t *testing.T) { }) } -func TestAccECSService_Alarms(t *testing.T) { +func TestAccECSService_alarmsAdd(t *testing.T) { ctx := acctest.Context(t) var service ecs.Service rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -524,10 +527,50 @@ func TestAccECSService_Alarms(t *testing.T) { CheckDestroy: testAccCheckServiceDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccServiceConfig_alarms(rName), + Config: testAccServiceConfig_noAlarms(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceExists(ctx, resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "alarms.#", "0"), + ), + }, + { + Config: testAccServiceConfig_alarms(rName, true), Check: resource.ComposeTestCheckFunc( testAccCheckServiceExists(ctx, resourceName, &service), resource.TestCheckResourceAttr(resourceName, "alarms.#", "1"), + resource.TestCheckResourceAttr(resourceName, "alarms.0.enable", "true"), + ), + }, + }, + }) +} + +func TestAccECSService_alarmsUpdate(t *testing.T) { + ctx := acctest.Context(t) + var service ecs.Service + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_ecs_service.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ecs.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckServiceDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccServiceConfig_alarms(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceExists(ctx, resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "alarms.#", "1"), + resource.TestCheckResourceAttr(resourceName, "alarms.0.enable", "true"), + ), + }, + { + Config: testAccServiceConfig_alarms(rName, false), + Check: resource.ComposeTestCheckFunc( 
+ testAccCheckServiceExists(ctx, resourceName, &service), + resource.TestCheckResourceAttr(resourceName, "alarms.#", "1"), + resource.TestCheckResourceAttr(resourceName, "alarms.0.enable", "false"), ), }, }, @@ -1534,7 +1577,7 @@ func TestAccECSService_executeCommand(t *testing.T) { func testAccCheckServiceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecs_service" { @@ -1567,7 +1610,7 @@ func testAccCheckServiceExists(ctx context.Context, name string, service *ecs.Se return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { var err error @@ -2542,7 +2585,7 @@ resource "aws_ecs_service" "test" { `, rName)) } -func testAccServiceConfig_alarms(rName string) string { +func testAccServiceConfig_alarms(rName string, enable bool) string { return fmt.Sprintf(` resource "aws_ecs_cluster" "test" { name = %[1]q @@ -2571,14 +2614,57 @@ resource "aws_ecs_service" "test" { desired_count = 1 alarms { - enable = true - rollback = true + enable = %[2]t + rollback = %[2]t alarm_names = [ aws_cloudwatch_metric_alarm.test.alarm_name ] } } +resource "aws_cloudwatch_metric_alarm" "test" { + alarm_name = %[1]q + comparison_operator = "GreaterThanOrEqualToThreshold" + evaluation_periods = "2" + metric_name = "CPUReservation" + namespace = "AWS/ECS" + period = "120" + statistic = "Average" + threshold = "80" + insufficient_data_actions = [] +} +`, rName, enable) +} + +func testAccServiceConfig_noAlarms(rName string) string { + return fmt.Sprintf(` +resource "aws_ecs_cluster" "test" { + name = %[1]q +} + +resource "aws_ecs_task_definition" "test" { + family = %[1]q + + 
container_definitions = < 0 { return tags @@ -117,8 +117,8 @@ func GetTagsIn(ctx context.Context) []*ecs.Tag { return nil } -// SetTagsOut sets ecs service tags in Context. -func SetTagsOut(ctx context.Context, tags []*ecs.Tag) { +// setTagsOut sets ecs service tags in Context. +func setTagsOut(ctx context.Context, tags []*ecs.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -130,13 +130,13 @@ func createTags(ctx context.Context, conn ecsiface.ECSAPI, identifier string, ta return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates ecs service tags. +// updateTags updates ecs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ecsiface.ECSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ecsiface.ECSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -176,5 +176,5 @@ func UpdateTags(ctx context.Context, conn ecsiface.ECSAPI, identifier string, ol // UpdateTags updates ecs service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ECSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ECSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ecs/task_definition.go b/internal/service/ecs/task_definition.go index 20013fe50a2..b5a8bb33385 100644 --- a/internal/service/ecs/task_definition.go +++ b/internal/service/ecs/task_definition.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -443,7 +446,7 @@ func ValidTaskDefinitionContainerDefinitions(v interface{}, k string) (ws []stri func resourceTaskDefinitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) rawDefinitions := d.Get("container_definitions").(string) definitions, err := expandContainerDefinitions(rawDefinitions) @@ -454,7 +457,7 @@ func resourceTaskDefinitionCreate(ctx context.Context, d *schema.ResourceData, m input := &ecs.RegisterTaskDefinitionInput{ ContainerDefinitions: definitions, Family: aws.String(d.Get("family").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("cpu"); ok { @@ -531,14 +534,14 @@ func resourceTaskDefinitionCreate(ctx context.Context, d *schema.ResourceData, m return sdkdiag.AppendErrorf(diags, "creating ECS Task Definition (%s): %s", d.Get("family").(string), err) } - taskDefinition := *output.TaskDefinition // nosemgrep:ci.prefer-aws-go-sdk-pointer-conversion-assignment // false positive + taskDefinition := *output.TaskDefinition // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-assignment // false positive d.SetId(aws.StringValue(taskDefinition.Family)) d.Set("arn", taskDefinition.TaskDefinitionArn) d.Set("arn_without_revision", StripRevision(aws.StringValue(taskDefinition.TaskDefinitionArn))) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(taskDefinition.TaskDefinitionArn), tags) // If default tags only, continue. Otherwise, error. 
@@ -556,7 +559,7 @@ func resourceTaskDefinitionCreate(ctx context.Context, d *schema.ResourceData, m func resourceTaskDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) input := ecs.DescribeTaskDefinitionInput{ TaskDefinition: aws.String(d.Get("arn").(string)), @@ -644,7 +647,7 @@ func resourceTaskDefinitionRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "setting ephemeral_storage: %s", err) } - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return diags } @@ -664,7 +667,7 @@ func resourceTaskDefinitionDelete(ctx context.Context, d *schema.ResourceData, m return diags } - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) _, err := conn.DeregisterTaskDefinitionWithContext(ctx, &ecs.DeregisterTaskDefinitionInput{ TaskDefinition: aws.String(d.Get("arn").(string)), @@ -971,7 +974,6 @@ func expandVolumesEFSVolume(efsConfig []interface{}) *ecs.EFSVolumeConfiguration efsVol.TransitEncryptionPort = aws.Int64(int64(v)) } if v, ok := config["authorization_config"].([]interface{}); ok && len(v) > 0 { - efsVol.RootDirectory = nil efsVol.AuthorizationConfig = expandVolumesEFSVolumeAuthorizationConfig(v) } @@ -1178,7 +1180,7 @@ func expandContainerDefinitions(rawDefinitions string) ([]*ecs.ContainerDefiniti err := json.Unmarshal([]byte(rawDefinitions), &definitions) if err != nil { - return nil, fmt.Errorf("Error decoding JSON: %s", err) + return nil, fmt.Errorf("decoding JSON: %s", err) } for i, c := range definitions { diff --git a/internal/service/ecs/task_definition_data_source.go b/internal/service/ecs/task_definition_data_source.go index 55092894863..0f2bd0fa13c 100644 --- a/internal/service/ecs/task_definition_data_source.go +++ b/internal/service/ecs/task_definition_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -60,7 +63,7 @@ func DataSourceTaskDefinition() *schema.Resource { func dataSourceTaskDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) taskDefinitionName := d.Get("task_definition").(string) input := &ecs.DescribeTaskDefinitionInput{ diff --git a/internal/service/ecs/task_definition_data_source_test.go b/internal/service/ecs/task_definition_data_source_test.go index d6fdce77d71..9b9930e1b93 100644 --- a/internal/service/ecs/task_definition_data_source_test.go +++ b/internal/service/ecs/task_definition_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/task_definition_equivalency.go b/internal/service/ecs/task_definition_equivalency.go index a3607d8082a..43a649a594f 100644 --- a/internal/service/ecs/task_definition_equivalency.go +++ b/internal/service/ecs/task_definition_equivalency.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/ecs/task_definition_equivalency_test.go b/internal/service/ecs/task_definition_equivalency_test.go index 633e8ed048a..40efcc046aa 100644 --- a/internal/service/ecs/task_definition_equivalency_test.go +++ b/internal/service/ecs/task_definition_equivalency_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/task_definition_migrate.go b/internal/service/ecs/task_definition_migrate.go index 695216b8c2a..3683b8bf3a4 100644 --- a/internal/service/ecs/task_definition_migrate.go +++ b/internal/service/ecs/task_definition_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -13,7 +16,8 @@ import ( ) func resourceTaskDefinitionMigrateState(v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { - conn := meta.(*conns.AWSClient).ECSConn() + ctx := context.Background() + conn := meta.(*conns.AWSClient).ECSConn(ctx) switch v { case 0: diff --git a/internal/service/ecs/task_definition_test.go b/internal/service/ecs/task_definition_test.go index 29ec6e71fec..2e7bc0b0fc8 100644 --- a/internal/service/ecs/task_definition_test.go +++ b/internal/service/ecs/task_definition_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( @@ -1228,7 +1231,7 @@ func TestValidTaskDefinitionContainerDefinitions(t *testing.T) { func testAccCheckTaskDefinitionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecs_task_definition" { @@ -1261,7 +1264,7 @@ func testAccCheckTaskDefinitionExists(ctx context.Context, name string, def *ecs return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) out, err := conn.DescribeTaskDefinitionWithContext(ctx, &ecs.DescribeTaskDefinitionInput{ TaskDefinition: aws.String(rs.Primary.Attributes["arn"]), diff --git a/internal/service/ecs/task_execution_data_source.go b/internal/service/ecs/task_execution_data_source.go index 75e82b76d6c..0d9e1af0233 100644 --- a/internal/service/ecs/task_execution_data_source.go +++ b/internal/service/ecs/task_execution_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -272,7 +275,7 @@ const ( func dataSourceTaskExecutionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) cluster := d.Get("cluster").(string) taskDefinition := d.Get("task_definition").(string) diff --git a/internal/service/ecs/task_execution_data_source_test.go b/internal/service/ecs/task_execution_data_source_test.go index ff4cabc6e80..60a0b592872 100644 --- a/internal/service/ecs/task_execution_data_source_test.go +++ b/internal/service/ecs/task_execution_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( diff --git a/internal/service/ecs/task_set.go b/internal/service/ecs/task_set.go index c10ea759a8a..728234f3caa 100644 --- a/internal/service/ecs/task_set.go +++ b/internal/service/ecs/task_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( @@ -270,7 +273,7 @@ func ResourceTaskSet() *schema.Resource { func resourceTaskSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) cluster := d.Get("cluster").(string) service := d.Get("service").(string) @@ -278,7 +281,7 @@ func resourceTaskSetCreate(ctx context.Context, d *schema.ResourceData, meta int ClientToken: aws.String(id.UniqueId()), Cluster: aws.String(cluster), Service: aws.String(service), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TaskDefinition: aws.String(d.Get("task_definition").(string)), } @@ -339,7 +342,7 @@ func resourceTaskSetCreate(ctx context.Context, d *schema.ResourceData, meta int } // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.TaskSet.TaskSetArn), tags) // If default tags only, continue. Otherwise, error. @@ -357,7 +360,7 @@ func resourceTaskSetCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceTaskSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) taskSetId, service, cluster, err := TaskSetParseID(d.Id()) @@ -434,14 +437,14 @@ func resourceTaskSetRead(ctx context.Context, d *schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "setting service_registries: %s", err) } - SetTagsOut(ctx, taskSet.Tags) + setTagsOut(ctx, taskSet.Tags) return diags } func resourceTaskSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { taskSetId, service, cluster, err := TaskSetParseID(d.Id()) @@ -476,7 +479,7 @@ func resourceTaskSetUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceTaskSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ECSConn() + conn := meta.(*conns.AWSClient).ECSConn(ctx) taskSetId, service, cluster, err := TaskSetParseID(d.Id()) @@ -537,7 +540,7 @@ func retryTaskSetCreate(ctx context.Context, conn *ecs.ECS, input *ecs.CreateTas output, ok := outputRaw.(*ecs.CreateTaskSetOutput) if !ok || output == nil || output.TaskSet == nil { - return nil, fmt.Errorf("error creating ECS TaskSet: empty output") + return nil, fmt.Errorf("creating ECS TaskSet: empty output") } return output, err diff --git 
a/internal/service/ecs/task_set_test.go b/internal/service/ecs/task_set_test.go index 20853f626f0..84f5b390f99 100644 --- a/internal/service/ecs/task_set_test.go +++ b/internal/service/ecs/task_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs_test import ( @@ -393,7 +396,7 @@ func testAccCheckTaskSetExists(ctx context.Context, name string) resource.TestCh return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) taskSetId, service, cluster, err := tfecs.TaskSetParseID(rs.Primary.ID) @@ -423,7 +426,7 @@ func testAccCheckTaskSetExists(ctx context.Context, name string) resource.TestCh func testAccCheckTaskSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ECSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ecs_task_set" { diff --git a/internal/service/ecs/validate.go b/internal/service/ecs/validate.go index 9b887db021b..8572fe12e49 100644 --- a/internal/service/ecs/validate.go +++ b/internal/service/ecs/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/ecs/validate_test.go b/internal/service/ecs/validate_test.go index e71ea511aa5..71c0be520ff 100644 --- a/internal/service/ecs/validate_test.go +++ b/internal/service/ecs/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/ecs/wait.go b/internal/service/ecs/wait.go index 66013207961..47546258871 100644 --- a/internal/service/ecs/wait.go +++ b/internal/service/ecs/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ecs import ( diff --git a/internal/service/efs/access_point.go b/internal/service/efs/access_point.go index 848952658c9..d6c87267da3 100644 --- a/internal/service/efs/access_point.go +++ b/internal/service/efs/access_point.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -130,12 +133,12 @@ func ResourceAccessPoint() *schema.Resource { func resourceAccessPointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) fsId := d.Get("file_system_id").(string) input := efs.CreateAccessPointInput{ FileSystemId: aws.String(fsId), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("posix_user"); ok { @@ -172,7 +175,7 @@ func resourceAccessPointUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceAccessPointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) resp, err := conn.DescribeAccessPointsWithContext(ctx, &efs.DescribeAccessPointsInput{ AccessPointId: aws.String(d.Id()), @@ -215,14 +218,14 @@ func resourceAccessPointRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "setting root directory: %s", err) } - SetTagsOut(ctx, ap.Tags) + setTagsOut(ctx, ap.Tags) return diags } func resourceAccessPointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) log.Printf("[DEBUG] Deleting EFS access point %q", d.Id()) _, err := conn.DeleteAccessPointWithContext(ctx, &efs.DeleteAccessPointInput{ diff --git 
a/internal/service/efs/access_point_data_source.go b/internal/service/efs/access_point_data_source.go index a794f5526b5..4fac9c1e90d 100644 --- a/internal/service/efs/access_point_data_source.go +++ b/internal/service/efs/access_point_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -102,7 +105,7 @@ func DataSourceAccessPoint() *schema.Resource { func dataSourceAccessPointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig resp, err := conn.DescribeAccessPointsWithContext(ctx, &efs.DescribeAccessPointsInput{ diff --git a/internal/service/efs/access_point_data_source_test.go b/internal/service/efs/access_point_data_source_test.go index efb0c48dfe3..f2eef346d3d 100644 --- a/internal/service/efs/access_point_data_source_test.go +++ b/internal/service/efs/access_point_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( diff --git a/internal/service/efs/access_point_test.go b/internal/service/efs/access_point_test.go index c8b1eb57255..b558fc67821 100644 --- a/internal/service/efs/access_point_test.go +++ b/internal/service/efs/access_point_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -249,7 +252,7 @@ func TestAccEFSAccessPoint_disappears(t *testing.T) { func testAccCheckAccessPointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_efs_access_point" { continue @@ -289,7 +292,7 @@ func testAccCheckAccessPointExists(ctx context.Context, resourceID string, mount return fmt.Errorf("Not found: %s", resourceID) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) mt, err := conn.DescribeAccessPointsWithContext(ctx, &efs.DescribeAccessPointsInput{ AccessPointId: aws.String(fs.Primary.ID), }) diff --git a/internal/service/efs/access_points_data_source.go b/internal/service/efs/access_points_data_source.go index a1ff8d095e8..e238d6acc4b 100644 --- a/internal/service/efs/access_points_data_source.go +++ b/internal/service/efs/access_points_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -39,7 +42,7 @@ func DataSourceAccessPoints() *schema.Resource { func dataSourceAccessPointsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) fileSystemID := d.Get("file_system_id").(string) input := &efs.DescribeAccessPointsInput{ diff --git a/internal/service/efs/access_points_data_source_test.go b/internal/service/efs/access_points_data_source_test.go index 1d29e6c9a32..72ed9f99863 100644 --- a/internal/service/efs/access_points_data_source_test.go +++ b/internal/service/efs/access_points_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( diff --git a/internal/service/efs/backup_policy.go b/internal/service/efs/backup_policy.go index eb8315a5263..c44bdb240c1 100644 --- a/internal/service/efs/backup_policy.go +++ b/internal/service/efs/backup_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -57,7 +60,7 @@ func ResourceBackupPolicy() *schema.Resource { func resourceBackupPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) fsID := d.Get("file_system_id").(string) @@ -72,7 +75,7 @@ func resourceBackupPolicyCreate(ctx context.Context, d *schema.ResourceData, met func resourceBackupPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) output, err := FindBackupPolicyByID(ctx, conn, d.Id()) @@ -97,7 +100,7 @@ func resourceBackupPolicyRead(ctx context.Context, d *schema.ResourceData, meta func resourceBackupPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) if err := backupPolicyPut(ctx, conn, d.Id(), d.Get("backup_policy").([]interface{})[0].(map[string]interface{})); err != nil { return sdkdiag.AppendErrorf(diags, "updating EFS Backup Policy (%s): %s", d.Id(), err) @@ -108,7 +111,7 @@ func resourceBackupPolicyUpdate(ctx context.Context, d *schema.ResourceData, met func resourceBackupPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) err := 
backupPolicyPut(ctx, conn, d.Id(), map[string]interface{}{ "status": efs.StatusDisabled, @@ -137,16 +140,16 @@ func backupPolicyPut(ctx context.Context, conn *efs.EFS, fsID string, tfMap map[ _, err := conn.PutBackupPolicyWithContext(ctx, input) if err != nil { - return fmt.Errorf("error putting EFS Backup Policy (%s): %w", fsID, err) + return fmt.Errorf("putting EFS Backup Policy (%s): %w", fsID, err) } if aws.StringValue(input.BackupPolicy.Status) == efs.StatusEnabled { if _, err := waitBackupPolicyEnabled(ctx, conn, fsID); err != nil { - return fmt.Errorf("error waiting for EFS Backup Policy (%s) to enable: %w", fsID, err) + return fmt.Errorf("waiting for EFS Backup Policy (%s) to enable: %w", fsID, err) } } else { if _, err := waitBackupPolicyDisabled(ctx, conn, fsID); err != nil { - return fmt.Errorf("error waiting for EFS Backup Policy (%s) to disable: %w", fsID, err) + return fmt.Errorf("waiting for EFS Backup Policy (%s) to disable: %w", fsID, err) } } diff --git a/internal/service/efs/backup_policy_test.go b/internal/service/efs/backup_policy_test.go index 3b1293b3d97..a42ae3297fd 100644 --- a/internal/service/efs/backup_policy_test.go +++ b/internal/service/efs/backup_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -126,7 +129,7 @@ func testAccCheckBackupPolicyExists(ctx context.Context, name string, v *efs.Bac return fmt.Errorf("no ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) output, err := tfefs.FindBackupPolicyByID(ctx, conn, rs.Primary.ID) @@ -142,7 +145,7 @@ func testAccCheckBackupPolicyExists(ctx context.Context, name string, v *efs.Bac func testAccCheckBackupPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_efs_backup_policy" { diff --git a/internal/service/efs/file_system.go b/internal/service/efs/file_system.go index e12f1f4bc42..b1dfa1b1183 100644 --- a/internal/service/efs/file_system.go +++ b/internal/service/efs/file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -148,7 +151,7 @@ func ResourceFileSystem() *schema.Resource { } func resourceFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) creationToken := "" if v, ok := d.GetOk("creation_token"); ok { @@ -160,7 +163,7 @@ func resourceFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta input := &efs.CreateFileSystemInput{ CreationToken: aws.String(creationToken), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), ThroughputMode: aws.String(throughputMode), } @@ -218,7 +221,7 @@ func resourceFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) fs, err := FindFileSystemByID(ctx, conn, d.Id()) @@ -245,7 +248,7 @@ func resourceFileSystemRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("provisioned_throughput_in_mibps", fs.ProvisionedThroughputInMibps) d.Set("throughput_mode", fs.ThroughputMode) - SetTagsOut(ctx, fs.Tags) + setTagsOut(ctx, fs.Tags) if err := d.Set("size_in_bytes", flattenFileSystemSizeInBytes(fs.SizeInBytes)); err != nil { return diag.Errorf("setting size_in_bytes: %s", err) @@ -267,7 +270,7 @@ func resourceFileSystemRead(ctx context.Context, d *schema.ResourceData, meta in } func resourceFileSystemUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) if d.HasChanges("provisioned_throughput_in_mibps", "throughput_mode") { throughputMode := d.Get("throughput_mode").(string) @@ -316,7 +319,7 @@ func resourceFileSystemUpdate(ctx context.Context, d *schema.ResourceData, meta } func 
resourceFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) log.Printf("[DEBUG] Deleting EFS file system: %s", d.Id()) _, err := conn.DeleteFileSystemWithContext(ctx, &efs.DeleteFileSystemInput{ diff --git a/internal/service/efs/file_system_data_source.go b/internal/service/efs/file_system_data_source.go index dc618032414..251e1d23e47 100644 --- a/internal/service/efs/file_system_data_source.go +++ b/internal/service/efs/file_system_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -95,7 +98,7 @@ func DataSourceFileSystem() *schema.Resource { func dataSourceFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) diff --git a/internal/service/efs/file_system_data_source_test.go b/internal/service/efs/file_system_data_source_test.go index 82a2fe8fd6b..44ce0eef482 100644 --- a/internal/service/efs/file_system_data_source_test.go +++ b/internal/service/efs/file_system_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -127,21 +130,6 @@ func TestAccEFSFileSystemDataSource_availabilityZone(t *testing.T) { }) } -func TestAccEFSFileSystemDataSource_nonExistent_fileSystemID(t *testing.T) { - ctx := acctest.Context(t) - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, efs.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - Steps: []resource.TestStep{ - { - Config: testAccFileSystemDataSourceConfig_idNonExistent, - ExpectError: regexp.MustCompile(`error reading EFS FileSystem`), - }, - }, - }) -} - func TestAccEFSFileSystemDataSource_nonExistent_tags(t *testing.T) { ctx := acctest.Context(t) var desc efs.FileSystemDescription @@ -209,12 +197,6 @@ resource "aws_efs_file_system" "test" { `, rName) } -const testAccFileSystemDataSourceConfig_idNonExistent = ` -data "aws_efs_file_system" "test" { - file_system_id = "fs-nonexistent" -} -` - func testAccFileSystemDataSourceConfig_tagsNonExistent(rName string) string { return acctest.ConfigCompose( testAccFileSystemConfig_dataSourceBasic(rName), diff --git a/internal/service/efs/file_system_policy.go b/internal/service/efs/file_system_policy.go index a8fcee0556d..eaebd82cd6d 100644 --- a/internal/service/efs/file_system_policy.go +++ b/internal/service/efs/file_system_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -56,7 +59,7 @@ func ResourceFileSystemPolicy() *schema.Resource { func resourceFileSystemPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -86,7 +89,7 @@ func resourceFileSystemPolicyPut(ctx context.Context, d *schema.ResourceData, me func resourceFileSystemPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) output, err := FindFileSystemPolicyByID(ctx, conn, d.Id()) @@ -121,7 +124,7 @@ func resourceFileSystemPolicyRead(ctx context.Context, d *schema.ResourceData, m func resourceFileSystemPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) log.Printf("[DEBUG] Deleting EFS File System Policy: %s", d.Id()) _, err := conn.DeleteFileSystemPolicyWithContext(ctx, &efs.DeleteFileSystemPolicyInput{ diff --git a/internal/service/efs/file_system_policy_test.go b/internal/service/efs/file_system_policy_test.go index d079cf5b6de..d880cc8a1c9 100644 --- a/internal/service/efs/file_system_policy_test.go +++ b/internal/service/efs/file_system_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -169,7 +172,7 @@ func TestAccEFSFileSystemPolicy_equivalentPoliciesIAMPolicyDoc(t *testing.T) { func testAccCheckFileSystemPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_efs_file_system_policy" { @@ -204,7 +207,7 @@ func testAccCheckFileSystemPolicyExists(ctx context.Context, n string, v *efs.De return fmt.Errorf("No EFS File System Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) output, err := tfefs.FindFileSystemPolicyByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/efs/file_system_test.go b/internal/service/efs/file_system_test.go index c5f773c0181..532522ddd7f 100644 --- a/internal/service/efs/file_system_test.go +++ b/internal/service/efs/file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -390,7 +393,7 @@ func TestAccEFSFileSystem_lifecyclePolicy(t *testing.T) { func testAccCheckFileSystemDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_efs_file_system" { continue @@ -423,7 +426,7 @@ func testAccCheckFileSystem(ctx context.Context, n string, v *efs.FileSystemDesc return fmt.Errorf("No EFS file system ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) output, err := tfefs.FindFileSystemByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/efs/find.go b/internal/service/efs/find.go index 95df7736599..9204154dfb8 100644 --- a/internal/service/efs/find.go +++ b/internal/service/efs/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( diff --git a/internal/service/efs/generate.go b/internal/service/efs/generate.go index 04cdbe7d963..87e0cb30452 100644 --- a/internal/service/efs/generate.go +++ b/internal/service/efs/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeMountTargets -InputPaginator=Marker -OutputPaginator=NextMarker //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=DescribeTags -ListTagsInIDElem=FileSystemId -ServiceTagsSlice -TagInIDElem=ResourceId -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package efs diff --git a/internal/service/efs/mount_target.go b/internal/service/efs/mount_target.go index b07768e7b5a..2d1bb3bd25b 100644 --- a/internal/service/efs/mount_target.go +++ b/internal/service/efs/mount_target.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "log" @@ -97,14 +100,14 @@ func ResourceMountTarget() *schema.Resource { func resourceMountTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) // CreateMountTarget would return the same Mount Target ID // to parallel requests if they both include the same AZ // and we would end up managing the same MT as 2 resources. // So we make it fail by calling 1 request per AZ at a time. 
subnetID := d.Get("subnet_id").(string) - az, err := getAZFromSubnetID(ctx, meta.(*conns.AWSClient).EC2Conn(), subnetID) + az, err := getAZFromSubnetID(ctx, meta.(*conns.AWSClient).EC2Conn(ctx), subnetID) if err != nil { return sdkdiag.AppendErrorf(diags, "reading EC2 Subnet (%s): %s", subnetID, err) @@ -145,7 +148,7 @@ func resourceMountTargetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceMountTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) mt, err := FindMountTargetByID(ctx, conn, d.Id()) @@ -192,7 +195,7 @@ func resourceMountTargetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceMountTargetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) if d.HasChange("security_groups") { input := &efs.ModifyMountTargetSecurityGroupsInput{ @@ -212,7 +215,7 @@ func resourceMountTargetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceMountTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) log.Printf("[DEBUG] Deleting EFS Mount Target: %s", d.Id()) _, err := conn.DeleteMountTargetWithContext(ctx, &efs.DeleteMountTargetInput{ diff --git a/internal/service/efs/mount_target_data_source.go b/internal/service/efs/mount_target_data_source.go index 315c9731a83..bcfbcca3c82 100644 --- a/internal/service/efs/mount_target_data_source.go +++ b/internal/service/efs/mount_target_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -82,7 +85,7 @@ func DataSourceMountTarget() *schema.Resource { func dataSourceMountTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) input := &efs.DescribeMountTargetsInput{} diff --git a/internal/service/efs/mount_target_data_source_test.go b/internal/service/efs/mount_target_data_source_test.go index 7c0625d3a11..b63c3ac9434 100644 --- a/internal/service/efs/mount_target_data_source_test.go +++ b/internal/service/efs/mount_target_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( diff --git a/internal/service/efs/mount_target_test.go b/internal/service/efs/mount_target_test.go index 05b3b6fc60d..541d2e80e42 100644 --- a/internal/service/efs/mount_target_test.go +++ b/internal/service/efs/mount_target_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -144,7 +147,7 @@ func TestAccEFSMountTarget_IPAddress_emptyString(t *testing.T) { func testAccCheckMountTargetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_efs_mount_target" { continue @@ -178,7 +181,7 @@ func testAccCheckMountTargetExists(ctx context.Context, n string, v *efs.MountTa return fmt.Errorf("No EFS Mount Target ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) output, err := tfefs.FindMountTargetByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/efs/replication_configuration.go b/internal/service/efs/replication_configuration.go index 3f12493b59f..f8783a732d1 100644 --- a/internal/service/efs/replication_configuration.go +++ b/internal/service/efs/replication_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs import ( @@ -97,7 +100,7 @@ func ResourceReplicationConfiguration() *schema.Resource { func resourceReplicationConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) fsID := d.Get("source_file_system_id").(string) input := &efs.CreateReplicationConfigurationInput{ @@ -125,7 +128,7 @@ func resourceReplicationConfigurationCreate(ctx context.Context, d *schema.Resou func resourceReplicationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) replication, err := FindReplicationConfigurationByID(ctx, conn, d.Id()) @@ -165,7 +168,7 @@ func resourceReplicationConfigurationRead(ctx context.Context, d *schema.Resourc func resourceReplicationConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EFSConn() + conn := meta.(*conns.AWSClient).EFSConn(ctx) // Deletion of the replication configuration must be done from the // Region in which the destination file system is located. diff --git a/internal/service/efs/replication_configuration_test.go b/internal/service/efs/replication_configuration_test.go index 1f2faa22d8e..883bf010371 100644 --- a/internal/service/efs/replication_configuration_test.go +++ b/internal/service/efs/replication_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package efs_test import ( @@ -137,7 +140,7 @@ func testAccCheckReplicationConfigurationExists(ctx context.Context, n string) r return fmt.Errorf("No EFS Replication Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EFSConn(ctx) _, err := tfefs.FindReplicationConfigurationByID(ctx, conn, rs.Primary.ID) @@ -147,7 +150,7 @@ func testAccCheckReplicationConfigurationExists(ctx context.Context, n string) r func testAccCheckReplicationConfigurationDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).EFSConn() + conn := provider.Meta().(*conns.AWSClient).EFSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_efs_replication_configuration" { diff --git a/internal/service/efs/service_package_gen.go b/internal/service/efs/service_package_gen.go index a3db82d02db..1a99347b981 100644 --- a/internal/service/efs/service_package_gen.go +++ b/internal/service/efs/service_package_gen.go @@ -5,6 +5,10 @@ package efs import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + efs_sdkv1 "github.com/aws/aws-sdk-go/service/efs" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -81,4 +85,13 @@ func (p *servicePackage) ServicePackageName() string { return names.EFS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*efs_sdkv1.EFS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return efs_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/efs/status.go b/internal/service/efs/status.go index 13f8b1623e9..f9083d2dd19 100644 --- a/internal/service/efs/status.go +++ b/internal/service/efs/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( diff --git a/internal/service/efs/sweep.go b/internal/service/efs/sweep.go index 89894eaa2ec..88a8ff11135 100644 --- a/internal/service/efs/sweep.go +++ b/internal/service/efs/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/efs" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -38,11 +40,11 @@ func init() { func sweepAccessPoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EFSConn() + conn := client.EFSConn(ctx) input := &efs.DescribeFileSystemsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -94,7 +96,7 @@ func sweepAccessPoints(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EFS File Systems (%s): %w", region, err)) } - err = 
sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EFS Access Points (%s): %w", region, err)) @@ -105,11 +107,11 @@ func sweepAccessPoints(region string) error { func sweepFileSystems(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EFSConn() + conn := client.EFSConn(ctx) input := &efs.DescribeFileSystemsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -138,7 +140,7 @@ func sweepFileSystems(region string) error { return fmt.Errorf("error listing EFS File Systems (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping EFS File Systems (%s): %w", region, err) @@ -149,11 +151,11 @@ func sweepFileSystems(region string) error { func sweepMountTargets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EFSConn() + conn := client.EFSConn(ctx) input := &efs.DescribeFileSystemsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -205,7 +207,7 @@ func sweepMountTargets(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EFS File Systems (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EFS 
Mount Targets (%s): %w", region, err)) diff --git a/internal/service/efs/tags_gen.go b/internal/service/efs/tags_gen.go index 2f05d6675e1..ed96a331662 100644 --- a/internal/service/efs/tags_gen.go +++ b/internal/service/efs/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists efs service tags. +// listTags lists efs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn efsiface.EFSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn efsiface.EFSAPI, identifier string) (tftags.KeyValueTags, error) { input := &efs.DescribeTagsInput{ FileSystemId: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn efsiface.EFSAPI, identifier string) (tft // ListTags lists efs service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).EFSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).EFSConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*efs.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns efs service tags from Context. +// getTagsIn returns efs service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*efs.Tag { +func getTagsIn(ctx context.Context) []*efs.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*efs.Tag { return nil } -// SetTagsOut sets efs service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*efs.Tag) { +// setTagsOut sets efs service tags in Context. +func setTagsOut(ctx context.Context, tags []*efs.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates efs service tags. +// updateTags updates efs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn efsiface.EFSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn efsiface.EFSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn efsiface.EFSAPI, identifier string, ol // UpdateTags updates efs service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).EFSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).EFSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/efs/wait.go b/internal/service/efs/wait.go index 35a344130c7..77f0d95dcdd 100644 --- a/internal/service/efs/wait.go +++ b/internal/service/efs/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package efs import ( diff --git a/internal/service/eks/addon.go b/internal/service/eks/addon.go index 6679ec4c59c..13dc8bba02e 100644 --- a/internal/service/eks/addon.go +++ b/internal/service/eks/addon.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -119,7 +122,7 @@ func ResourceAddon() *schema.Resource { func resourceAddonCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) addonName := d.Get("addon_name").(string) clusterName := d.Get("cluster_name").(string) @@ -128,7 +131,7 @@ func resourceAddonCreate(ctx context.Context, d *schema.ResourceData, meta inter AddonName: aws.String(addonName), ClientRequestToken: aws.String(sdkid.UniqueId()), ClusterName: aws.String(clusterName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("addon_version"); ok { @@ -191,7 +194,7 @@ func resourceAddonCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceAddonRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, addonName, err := AddonParseResourceID(d.Id()) @@ -220,14 +223,14 @@ func resourceAddonRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("modified_at", aws.TimeValue(addon.ModifiedAt).Format(time.RFC3339)) d.Set("service_account_role_arn", addon.ServiceAccountRoleArn) - SetTagsOut(ctx, addon.Tags) + setTagsOut(ctx, addon.Tags) return diags } func resourceAddonUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, addonName, err := AddonParseResourceID(d.Id()) @@ -292,7 +295,7 @@ func resourceAddonUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceAddonDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, addonName, err := AddonParseResourceID(d.Id()) diff --git a/internal/service/eks/addon_data_source.go b/internal/service/eks/addon_data_source.go index 25678a323f4..7779c12f741 100644 --- a/internal/service/eks/addon_data_source.go +++ b/internal/service/eks/addon_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( "context" - "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -58,7 +60,7 @@ func DataSourceAddon() *schema.Resource { } func dataSourceAddonRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig addonName := d.Get("addon_name").(string) @@ -68,7 +70,7 @@ func dataSourceAddonRead(ctx context.Context, d *schema.ResourceData, meta inter addon, err := FindAddonByClusterNameAndAddonName(ctx, conn, clusterName, addonName) if err != nil { - return diag.FromErr(fmt.Errorf("error reading EKS Add-On (%s): %w", id, err)) + return diag.Errorf("reading EKS Add-On (%s): %s", id, err) } d.SetId(id) @@ -80,7 +82,7 @@ func dataSourceAddonRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("service_account_role_arn", addon.ServiceAccountRoleArn) if err := d.Set("tags", KeyValueTags(ctx, addon.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.FromErr(fmt.Errorf("error setting tags: %w", err)) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/eks/addon_data_source_test.go b/internal/service/eks/addon_data_source_test.go index 33653dd95a0..9091f9676b8 100644 --- a/internal/service/eks/addon_data_source_test.go +++ b/internal/service/eks/addon_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( diff --git a/internal/service/eks/addon_test.go b/internal/service/eks/addon_test.go index 503e7c0a9b9..2a44ec7fea7 100644 --- a/internal/service/eks/addon_test.go +++ b/internal/service/eks/addon_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( @@ -410,7 +413,7 @@ func testAccCheckAddonExists(ctx context.Context, n string, v *eks.Addon) resour return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) output, err := tfeks.FindAddonByClusterNameAndAddonName(ctx, conn, clusterName, addonName) @@ -426,7 +429,7 @@ func testAccCheckAddonExists(ctx context.Context, n string, v *eks.Addon) resour func testAccCheckAddonDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eks_addon" { @@ -457,7 +460,7 @@ func testAccCheckAddonDestroy(ctx context.Context) resource.TestCheckFunc { } func testAccPreCheckAddon(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) input := &eks.DescribeAddonVersionsInput{} diff --git a/internal/service/eks/addon_version_data_source.go b/internal/service/eks/addon_version_data_source.go index 0c390431c6d..47264e58ea5 100644 --- a/internal/service/eks/addon_version_data_source.go +++ b/internal/service/eks/addon_version_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( "context" - "fmt" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -38,7 +40,7 @@ func DataSourceAddonVersion() *schema.Resource { } func dataSourceAddonVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) addonName := d.Get("addon_name").(string) kubernetesVersion := d.Get("kubernetes_version").(string) @@ -48,7 +50,7 @@ func dataSourceAddonVersionRead(ctx context.Context, d *schema.ResourceData, met versionInfo, err := FindAddonVersionByAddonNameAndKubernetesVersion(ctx, conn, id, kubernetesVersion, mostRecent) if err != nil { - return diag.FromErr(fmt.Errorf("error reading EKS Add-On version info (%s, %s): %w", id, kubernetesVersion, err)) + return diag.Errorf("reading EKS Add-On version info (%s, %s): %s", id, kubernetesVersion, err) } d.SetId(id) diff --git a/internal/service/eks/addon_version_data_source_test.go b/internal/service/eks/addon_version_data_source_test.go index 4f6e713c3fc..576fc022b33 100644 --- a/internal/service/eks/addon_version_data_source_test.go +++ b/internal/service/eks/addon_version_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( diff --git a/internal/service/eks/arn.go b/internal/service/eks/arn.go index f1aacf98017..a8ece1dada7 100644 --- a/internal/service/eks/arn.go +++ b/internal/service/eks/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + /* This file is a hard copy of: https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/7547c74e660f8d34d9980f2c69aa008eed1f48d0/pkg/arn/arn.go diff --git a/internal/service/eks/arn_test.go b/internal/service/eks/arn_test.go index 6635ae28e58..18b1d08ffe7 100644 --- a/internal/service/eks/arn_test.go +++ b/internal/service/eks/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + /* This file is a hard copy of: https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/7547c74e660f8d34d9980f2c69aa008eed1f48d0/pkg/arn/arn_test.go diff --git a/internal/service/eks/cluster.go b/internal/service/eks/cluster.go index cd1ce02bfba..70da4d788e0 100644 --- a/internal/service/eks/cluster.go +++ b/internal/service/eks/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -289,7 +292,7 @@ func ResourceCluster() *schema.Resource { } func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) name := d.Get("name").(string) input := &eks.CreateClusterInput{ @@ -298,7 +301,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int Name: aws.String(name), ResourcesVpcConfig: expandVPCConfigRequestForCreate(d.Get("vpc_config").([]interface{})), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if _, ok := d.GetOk("kubernetes_network_config"); ok { @@ -360,7 +363,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) cluster, err := FindClusterByName(ctx, conn, d.Id()) @@ -408,13 
+411,13 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter return diag.Errorf("setting vpc_config: %s", err) } - SetTagsOut(ctx, cluster.Tags) + setTagsOut(ctx, cluster.Tags) return nil } func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) // Do any version update first. if d.HasChange("version") { @@ -509,7 +512,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) log.Printf("[DEBUG] Deleting EKS Cluster: %s", d.Id()) diff --git a/internal/service/eks/cluster_auth_data_source.go b/internal/service/eks/cluster_auth_data_source.go index f9edaf955ea..c97a8092909 100644 --- a/internal/service/eks/cluster_auth_data_source.go +++ b/internal/service/eks/cluster_auth_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -33,7 +36,7 @@ func DataSourceClusterAuth() *schema.Resource { func dataSourceClusterAuthRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).STSConn() + conn := meta.(*conns.AWSClient).STSConn(ctx) name := d.Get("name").(string) generator, err := NewGenerator(false, false) if err != nil { diff --git a/internal/service/eks/cluster_auth_data_source_test.go b/internal/service/eks/cluster_auth_data_source_test.go index 4fc869488db..2e58cd74b3f 100644 --- a/internal/service/eks/cluster_auth_data_source_test.go +++ b/internal/service/eks/cluster_auth_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( diff --git a/internal/service/eks/cluster_data_source.go b/internal/service/eks/cluster_data_source.go index 33d2c9c04fe..18cc7eeeb9f 100644 --- a/internal/service/eks/cluster_data_source.go +++ b/internal/service/eks/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -187,7 +190,7 @@ func DataSourceCluster() *schema.Resource { } func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) diff --git a/internal/service/eks/cluster_data_source_test.go b/internal/service/eks/cluster_data_source_test.go index 54e68c36b04..bd16816dcf4 100644 --- a/internal/service/eks/cluster_data_source_test.go +++ b/internal/service/eks/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( diff --git a/internal/service/eks/cluster_test.go b/internal/service/eks/cluster_test.go index 94b7c276965..02cd0cc05e0 100644 --- a/internal/service/eks/cluster_test.go +++ b/internal/service/eks/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( @@ -695,7 +698,7 @@ func testAccCheckClusterExists(ctx context.Context, resourceName string, cluster return fmt.Errorf("No EKS Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) output, err := tfeks.FindClusterByName(ctx, conn, rs.Primary.ID) @@ -716,7 +719,7 @@ func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) _, err := tfeks.FindClusterByName(ctx, conn, rs.Primary.ID) @@ -756,7 +759,7 @@ func testAccCheckClusterNotRecreated(i, j *eks.Cluster) resource.TestCheckFunc { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) input := &eks.ListClustersInput{} diff --git a/internal/service/eks/clusters_data_source.go b/internal/service/eks/clusters_data_source.go index e00e63e87e3..ffdd46b1a90 100644 --- a/internal/service/eks/clusters_data_source.go +++ b/internal/service/eks/clusters_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -28,7 +31,7 @@ func DataSourceClusters() *schema.Resource { func dataSourceClustersRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) var clusters []*string diff --git a/internal/service/eks/clusters_data_source_test.go b/internal/service/eks/clusters_data_source_test.go index c0a7b5ef073..a35a5c0f752 100644 --- a/internal/service/eks/clusters_data_source_test.go +++ b/internal/service/eks/clusters_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( @@ -23,7 +26,7 @@ func TestAccEKSClustersDataSource_basic(t *testing.T) { { Config: testAccClustersDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceResourceName, "names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceResourceName, "names.#", 0), ), }, }, diff --git a/internal/service/eks/consts.go b/internal/service/eks/consts.go index 09168e1649a..4bf92304f5c 100644 --- a/internal/service/eks/consts.go +++ b/internal/service/eks/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/errors.go b/internal/service/eks/errors.go index 6bf3e10d1ff..ee8ac774ab2 100644 --- a/internal/service/eks/errors.go +++ b/internal/service/eks/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/fargate_profile.go b/internal/service/eks/fargate_profile.go index 266a6454b42..55a4954086d 100644 --- a/internal/service/eks/fargate_profile.go +++ b/internal/service/eks/fargate_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -106,7 +109,7 @@ func ResourceFargateProfile() *schema.Resource { func resourceFargateProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName := d.Get("cluster_name").(string) fargateProfileName := d.Get("fargate_profile_name").(string) @@ -118,7 +121,7 @@ func resourceFargateProfileCreate(ctx context.Context, d *schema.ResourceData, m PodExecutionRoleArn: aws.String(d.Get("pod_execution_role_arn").(string)), Selectors: expandFargateProfileSelectors(d.Get("selector").(*schema.Set).List()), Subnets: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } // mutex lock for creation/deletion serialization @@ -163,7 +166,7 @@ func resourceFargateProfileCreate(ctx context.Context, d *schema.ResourceData, m func resourceFargateProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, fargateProfileName, err := FargateProfileParseResourceID(d.Id()) @@ -198,7 +201,7 @@ func resourceFargateProfileRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "setting subnet_ids: %s", err) } - SetTagsOut(ctx, fargateProfile.Tags) + setTagsOut(ctx, fargateProfile.Tags) return diags } @@ -213,7 +216,7 @@ func resourceFargateProfileUpdate(ctx context.Context, d *schema.ResourceData, m func resourceFargateProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, fargateProfileName, err := FargateProfileParseResourceID(d.Id()) diff --git 
a/internal/service/eks/fargate_profile_test.go b/internal/service/eks/fargate_profile_test.go index 4001dca356a..3ea2e077d80 100644 --- a/internal/service/eks/fargate_profile_test.go +++ b/internal/service/eks/fargate_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( @@ -192,7 +195,7 @@ func testAccCheckFargateProfileExists(ctx context.Context, n string, v *eks.Farg return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) output, err := tfeks.FindFargateProfileByClusterNameAndFargateProfileName(ctx, conn, clusterName, fargateProfileName) @@ -208,7 +211,7 @@ func testAccCheckFargateProfileExists(ctx context.Context, n string, v *eks.Farg func testAccCheckFargateProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eks_fargate_profile" { @@ -378,7 +381,7 @@ resource "aws_eip" "private" { count = 2 depends_on = [aws_internet_gateway.test] - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/eks/find.go b/internal/service/eks/find.go index 55aec84a7ea..2d5b5e6e11e 100644 --- a/internal/service/eks/find.go +++ b/internal/service/eks/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/generate.go b/internal/service/eks/generate.go index 8a83a6d7b86..d9cf18f5965 100644 --- a/internal/service/eks/generate.go +++ b/internal/service/eks/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package eks diff --git a/internal/service/eks/id.go b/internal/service/eks/id.go index 66c86d64104..d91fdaf2e4b 100644 --- a/internal/service/eks/id.go +++ b/internal/service/eks/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/identity_provider_config.go b/internal/service/eks/identity_provider_config.go index e1800c4ecbd..476a8bea0c7 100644 --- a/internal/service/eks/identity_provider_config.go +++ b/internal/service/eks/identity_provider_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -128,7 +131,7 @@ func ResourceIdentityProviderConfig() *schema.Resource { } func resourceIdentityProviderConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName := d.Get("cluster_name").(string) configName, oidc := expandOIDCIdentityProviderConfigRequest(d.Get("oidc").([]interface{})[0].(map[string]interface{})) @@ -137,13 +140,13 @@ func resourceIdentityProviderConfigCreate(ctx context.Context, d *schema.Resourc ClientRequestToken: aws.String(id.UniqueId()), ClusterName: aws.String(clusterName), Oidc: oidc, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.AssociateIdentityProviderConfigWithContext(ctx, input) if err != nil { - return diag.Errorf("error associating EKS Identity Provider Config (%s): %s", idpID, err) + return diag.Errorf("associating EKS Identity Provider Config (%s): %s", idpID, err) } d.SetId(idpID) @@ -151,14 +154,14 @@ func 
resourceIdentityProviderConfigCreate(ctx context.Context, d *schema.Resourc _, err = waitOIDCIdentityProviderConfigCreated(ctx, conn, clusterName, configName, d.Timeout(schema.TimeoutCreate)) if err != nil { - return diag.Errorf("error waiting for EKS Identity Provider Config (%s) association: %s", d.Id(), err) + return diag.Errorf("waiting for EKS Identity Provider Config (%s) association: %s", d.Id(), err) } return resourceIdentityProviderConfigRead(ctx, d, meta) } func resourceIdentityProviderConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, configName, err := IdentityProviderConfigParseResourceID(d.Id()) @@ -175,19 +178,19 @@ func resourceIdentityProviderConfigRead(ctx context.Context, d *schema.ResourceD } if err != nil { - return diag.Errorf("error reading EKS Identity Provider Config (%s): %s", d.Id(), err) + return diag.Errorf("reading EKS Identity Provider Config (%s): %s", d.Id(), err) } d.Set("arn", oidc.IdentityProviderConfigArn) d.Set("cluster_name", oidc.ClusterName) if err := d.Set("oidc", []interface{}{flattenOIDCIdentityProviderConfig(oidc)}); err != nil { - return diag.Errorf("error setting oidc: %s", err) + return diag.Errorf("setting oidc: %s", err) } d.Set("status", oidc.Status) - SetTagsOut(ctx, oidc.Tags) + setTagsOut(ctx, oidc.Tags) return nil } @@ -198,7 +201,7 @@ func resourceIdentityProviderConfigUpdate(ctx context.Context, d *schema.Resourc } func resourceIdentityProviderConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, configName, err := IdentityProviderConfigParseResourceID(d.Id()) @@ -224,13 +227,13 @@ func resourceIdentityProviderConfigDelete(ctx context.Context, d *schema.Resourc } if err != nil { - return diag.Errorf("error disassociating 
EKS Identity Provider Config (%s): %s", d.Id(), err) + return diag.Errorf("disassociating EKS Identity Provider Config (%s): %s", d.Id(), err) } _, err = waitOIDCIdentityProviderConfigDeleted(ctx, conn, clusterName, configName, d.Timeout(schema.TimeoutDelete)) if err != nil { - return diag.Errorf("error waiting for EKS Identity Provider Config (%s) disassociation: %s", d.Id(), err) + return diag.Errorf("waiting for EKS Identity Provider Config (%s) disassociation: %s", d.Id(), err) } return nil diff --git a/internal/service/eks/identity_provider_config_test.go b/internal/service/eks/identity_provider_config_test.go index da479103e0f..ebfe0b40fbc 100644 --- a/internal/service/eks/identity_provider_config_test.go +++ b/internal/service/eks/identity_provider_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( @@ -185,7 +188,7 @@ func testAccCheckIdentityProviderExistsConfig(ctx context.Context, resourceName return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) output, err := tfeks.FindOIDCIdentityProviderConfigByClusterNameAndConfigName(ctx, conn, clusterName, configName) @@ -201,7 +204,7 @@ func testAccCheckIdentityProviderExistsConfig(ctx context.Context, resourceName func testAccCheckIdentityProviderConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eks_identity_provider_config" { diff --git a/internal/service/eks/node_group.go b/internal/service/eks/node_group.go index 115238b4154..ad1a8af448b 100644 --- a/internal/service/eks/node_group.go +++ b/internal/service/eks/node_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -290,7 +293,7 @@ func ResourceNodeGroup() *schema.Resource { } func resourceNodeGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName := d.Get("cluster_name").(string) nodeGroupName := create.Name(d.Get("node_group_name").(string), d.Get("node_group_name_prefix").(string)) @@ -301,7 +304,7 @@ func resourceNodeGroupCreate(ctx context.Context, d *schema.ResourceData, meta i NodegroupName: aws.String(nodeGroupName), NodeRole: aws.String(d.Get("node_role_arn").(string)), Subnets: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("ami_type"); ok { @@ -370,7 +373,7 @@ func resourceNodeGroupCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, nodeGroupName, err := NodeGroupParseResourceID(d.Id()) @@ -387,7 +390,7 @@ func resourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return diag.Errorf("error reading EKS Node Group (%s): %s", d.Id(), err) + return diag.Errorf("reading EKS Node Group (%s): %s", d.Id(), err) } d.Set("ami_type", nodeGroup.AmiType) @@ -397,15 +400,15 @@ func resourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("disk_size", nodeGroup.DiskSize) if err := d.Set("instance_types", aws.StringValueSlice(nodeGroup.InstanceTypes)); err != nil { - return diag.Errorf("error setting instance_types: %s", err) + return diag.Errorf("setting instance_types: %s", err) } if err := d.Set("labels", aws.StringValueMap(nodeGroup.Labels)); err != nil { - return diag.Errorf("error setting labels: %s", err) + 
return diag.Errorf("setting labels: %s", err) } if err := d.Set("launch_template", flattenLaunchTemplateSpecification(nodeGroup.LaunchTemplate)); err != nil { - return diag.Errorf("error setting launch_template: %s", err) + return diag.Errorf("setting launch_template: %s", err) } d.Set("node_group_name", nodeGroup.NodegroupName) @@ -414,16 +417,16 @@ func resourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("release_version", nodeGroup.ReleaseVersion) if err := d.Set("remote_access", flattenRemoteAccessConfig(nodeGroup.RemoteAccess)); err != nil { - return diag.Errorf("error setting remote_access: %s", err) + return diag.Errorf("setting remote_access: %s", err) } if err := d.Set("resources", flattenNodeGroupResources(nodeGroup.Resources)); err != nil { - return diag.Errorf("error setting resources: %s", err) + return diag.Errorf("setting resources: %s", err) } if nodeGroup.ScalingConfig != nil { if err := d.Set("scaling_config", []interface{}{flattenNodeGroupScalingConfig(nodeGroup.ScalingConfig)}); err != nil { - return diag.Errorf("error setting scaling_config: %s", err) + return diag.Errorf("setting scaling_config: %s", err) } } else { d.Set("scaling_config", nil) @@ -432,16 +435,16 @@ func resourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("status", nodeGroup.Status) if err := d.Set("subnet_ids", aws.StringValueSlice(nodeGroup.Subnets)); err != nil { - return diag.Errorf("error setting subnets: %s", err) + return diag.Errorf("setting subnets: %s", err) } if err := d.Set("taint", flattenTaints(nodeGroup.Taints)); err != nil { - return diag.Errorf("error setting taint: %s", err) + return diag.Errorf("setting taint: %s", err) } if nodeGroup.UpdateConfig != nil { if err := d.Set("update_config", []interface{}{flattenNodeGroupUpdateConfig(nodeGroup.UpdateConfig)}); err != nil { - return diag.Errorf("error setting update_config: %s", err) + return diag.Errorf("setting update_config: %s", err) } } else { 
d.Set("update_config", nil) @@ -449,13 +452,13 @@ func resourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("version", nodeGroup.Version) - SetTagsOut(ctx, nodeGroup.Tags) + setTagsOut(ctx, nodeGroup.Tags) return nil } func resourceNodeGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, nodeGroupName, err := NodeGroupParseResourceID(d.Id()) @@ -503,7 +506,7 @@ func resourceNodeGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i output, err := conn.UpdateNodegroupVersionWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating EKS Node Group (%s) version: %s", d.Id(), err) + return diag.Errorf("updating EKS Node Group (%s) version: %s", d.Id(), err) } updateID := aws.StringValue(output.Update.Id) @@ -511,7 +514,7 @@ func resourceNodeGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i _, err = waitNodegroupUpdateSuccessful(ctx, conn, clusterName, nodeGroupName, updateID, d.Timeout(schema.TimeoutUpdate)) if err != nil { - return diag.Errorf("error waiting for EKS Node Group (%s) version update (%s): %s", d.Id(), updateID, err) + return diag.Errorf("waiting for EKS Node Group (%s) version update (%s): %s", d.Id(), updateID, err) } } @@ -542,7 +545,7 @@ func resourceNodeGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i output, err := conn.UpdateNodegroupConfigWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating EKS Node Group (%s) config: %s", d.Id(), err) + return diag.Errorf("updating EKS Node Group (%s) config: %s", d.Id(), err) } updateID := aws.StringValue(output.Update.Id) @@ -550,7 +553,7 @@ func resourceNodeGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i _, err = waitNodegroupUpdateSuccessful(ctx, conn, clusterName, nodeGroupName, updateID, d.Timeout(schema.TimeoutUpdate)) if err != nil { - 
return diag.Errorf("error waiting for EKS Node Group (%s) config update (%s): %s", d.Id(), updateID, err) + return diag.Errorf("waiting for EKS Node Group (%s) config update (%s): %s", d.Id(), updateID, err) } } @@ -558,7 +561,7 @@ func resourceNodeGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceNodeGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName, nodeGroupName, err := NodeGroupParseResourceID(d.Id()) @@ -577,13 +580,13 @@ func resourceNodeGroupDelete(ctx context.Context, d *schema.ResourceData, meta i } if err != nil { - return diag.Errorf("error deleting EKS Node Group (%s): %s", d.Id(), err) + return diag.Errorf("deleting EKS Node Group (%s): %s", d.Id(), err) } _, err = waitNodegroupDeleted(ctx, conn, clusterName, nodeGroupName, d.Timeout(schema.TimeoutDelete)) if err != nil { - return diag.Errorf("error waiting for EKS Node Group (%s) to delete: %s", d.Id(), err) + return diag.Errorf("waiting for EKS Node Group (%s) to delete: %s", d.Id(), err) } return nil diff --git a/internal/service/eks/node_group_data_source.go b/internal/service/eks/node_group_data_source.go index 76ada4f5981..32cbdf4af4c 100644 --- a/internal/service/eks/node_group_data_source.go +++ b/internal/service/eks/node_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -181,7 +184,7 @@ func DataSourceNodeGroup() *schema.Resource { } func dataSourceNodeGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig clusterName := d.Get("cluster_name").(string) diff --git a/internal/service/eks/node_group_data_source_test.go b/internal/service/eks/node_group_data_source_test.go index a25102e66ae..e44c6c34166 100644 --- a/internal/service/eks/node_group_data_source_test.go +++ b/internal/service/eks/node_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( diff --git a/internal/service/eks/node_group_test.go b/internal/service/eks/node_group_test.go index 8282bf49a1c..cfa352dc623 100644 --- a/internal/service/eks/node_group_test.go +++ b/internal/service/eks/node_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( @@ -1002,7 +1005,7 @@ func testAccCheckNodeGroupExists(ctx context.Context, resourceName string, nodeG return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) output, err := tfeks.FindNodegroupByClusterNameAndNodegroupName(ctx, conn, clusterName, nodeGroupName) @@ -1018,7 +1021,7 @@ func testAccCheckNodeGroupExists(ctx context.Context, resourceName string, nodeG func testAccCheckNodeGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EKSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_eks_node_group" { diff --git a/internal/service/eks/node_groups_data_source.go b/internal/service/eks/node_groups_data_source.go index 6b98eefecd2..4f3eb7bc7dc 100644 --- a/internal/service/eks/node_groups_data_source.go +++ b/internal/service/eks/node_groups_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( @@ -34,7 +37,7 @@ func DataSourceNodeGroups() *schema.Resource { func dataSourceNodeGroupsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EKSConn() + conn := meta.(*conns.AWSClient).EKSConn(ctx) clusterName := d.Get("cluster_name").(string) diff --git a/internal/service/eks/node_groups_data_source_test.go b/internal/service/eks/node_groups_data_source_test.go index f770915782f..2dd1c1c8942 100644 --- a/internal/service/eks/node_groups_data_source_test.go +++ b/internal/service/eks/node_groups_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks_test import ( diff --git a/internal/service/eks/service_package_gen.go b/internal/service/eks/service_package_gen.go index 92f5d051797..9e6d4ad9270 100644 --- a/internal/service/eks/service_package_gen.go +++ b/internal/service/eks/service_package_gen.go @@ -5,6 +5,10 @@ package eks import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + eks_sdkv1 "github.com/aws/aws-sdk-go/service/eks" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -101,4 +105,13 @@ func (p *servicePackage) ServicePackageName() string { return names.EKS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*eks_sdkv1.EKS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return eks_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/eks/status.go b/internal/service/eks/status.go index 641c7abd757..1716b138070 100644 --- a/internal/service/eks/status.go +++ b/internal/service/eks/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/sweep.go b/internal/service/eks/sweep.go index 580a191b1b0..276b7380792 100644 --- a/internal/service/eks/sweep.go +++ b/internal/service/eks/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -51,12 +53,12 @@ func init() { func sweepAddons(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EKSConn() + conn := client.EKSConn(ctx) input := &eks.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -115,7 +117,7 @@ func sweepAddons(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EKS Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EKS Add-Ons (%s): %w", region, err)) @@ -126,11 +128,11 @@ func sweepAddons(region string) error { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).EKSConn() + conn := client.EKSConn(ctx) input := &eks.ListClustersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -159,7 +161,7 @@ func sweepClusters(region string) error { return fmt.Errorf("error listing EKS Clusters (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = 
sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping EKS Clusters (%s): %w", region, err) @@ -170,11 +172,11 @@ func sweepClusters(region string) error { func sweepFargateProfiles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EKSConn() + conn := client.EKSConn(ctx) input := &eks.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -232,7 +234,7 @@ func sweepFargateProfiles(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EKS Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EKS Fargate Profiles (%s): %w", region, err)) @@ -243,12 +245,12 @@ func sweepFargateProfiles(region string) error { func sweepIdentityProvidersConfig(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EKSConn() + conn := client.EKSConn(ctx) input := &eks.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -306,7 +308,7 @@ func sweepIdentityProvidersConfig(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EKS Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error 
sweeping EKS Identity Provider Configs (%s): %w", region, err)) @@ -317,11 +319,11 @@ func sweepIdentityProvidersConfig(region string) error { func sweepNodeGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EKSConn() + conn := client.EKSConn(ctx) input := &eks.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -379,7 +381,7 @@ func sweepNodeGroups(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EKS Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EKS Node Groups (%s): %w", region, err)) diff --git a/internal/service/eks/tags_gen.go b/internal/service/eks/tags_gen.go index 89e93c75ce3..829e66110ba 100644 --- a/internal/service/eks/tags_gen.go +++ b/internal/service/eks/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists eks service tags. +// listTags lists eks service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn eksiface.EKSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn eksiface.EKSAPI, identifier string) (tftags.KeyValueTags, error) { input := &eks.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn eksiface.EKSAPI, identifier string) (tft // ListTags lists eks service tags and set them in Context. 
// It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).EKSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).EKSConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from eks service tags. +// KeyValueTags creates tftags.KeyValueTags from eks service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns eks service tags from Context. +// getTagsIn returns eks service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets eks service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets eks service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates eks service tags. +// updateTags updates eks service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn eksiface.EKSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn eksiface.EKSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn eksiface.EKSAPI, identifier string, ol // UpdateTags updates eks service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).EKSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).EKSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/eks/token_test.go b/internal/service/eks/token_test.go index e791ec2bc6f..d109edd959b 100644 --- a/internal/service/eks/token_test.go +++ b/internal/service/eks/token_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + /* This file is a hard copy of: https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/7547c74e660f8d34d9980f2c69aa008eed1f48d0/pkg/token/token_test.go diff --git a/internal/service/eks/validate.go b/internal/service/eks/validate.go index 3099019f740..902c90cefd1 100644 --- a/internal/service/eks/validate.go +++ b/internal/service/eks/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/validate_test.go b/internal/service/eks/validate_test.go index 352a6bb2bcd..f6627cc6538 100644 --- a/internal/service/eks/validate_test.go +++ b/internal/service/eks/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/eks/wait.go b/internal/service/eks/wait.go index 34fa213a1aa..57a94e414fd 100644 --- a/internal/service/eks/wait.go +++ b/internal/service/eks/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package eks import ( diff --git a/internal/service/elasticache/cluster.go b/internal/service/elasticache/cluster.go index 33d65532c14..729d95243ee 100644 --- a/internal/service/elasticache/cluster.go +++ b/internal/service/elasticache/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -22,10 +25,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -341,12 +344,12 @@ func ResourceCluster() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) clusterID := d.Get("cluster_id").(string) input := &elasticache.CreateCacheClusterInput{ CacheClusterId: aws.String(clusterID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("replication_group_id"); ok { @@ -467,7 +470,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, 
meta int } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, arn, tags) // If default tags only, continue. Otherwise, error. @@ -485,7 +488,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) c, err := FindCacheClusterWithNodeInfoByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -573,7 +576,7 @@ func setFromCacheCluster(d *schema.ResourceData, c *elasticache.CacheCluster) er func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &elasticache.ModifyCacheClusterInput{ @@ -763,7 +766,7 @@ func (b byCacheNodeId) Less(i, j int) bool { func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) var finalSnapshotID = d.Get("final_snapshot_identifier").(string) err := DeleteCacheCluster(ctx, conn, d.Id(), finalSnapshotID) diff --git a/internal/service/elasticache/cluster_data_source.go b/internal/service/elasticache/cluster_data_source.go index 56bdc756b87..9e48676917a 100644 --- a/internal/service/elasticache/cluster_data_source.go +++ b/internal/service/elasticache/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -170,7 +173,7 @@ func DataSourceCluster() *schema.Resource { func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig clusterID := d.Get("cluster_id").(string) @@ -225,7 +228,7 @@ func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("arn", cluster.ARN) - tags, err := ListTags(ctx, conn, aws.StringValue(cluster.ARN)) + tags, err := listTags(ctx, conn, aws.StringValue(cluster.ARN)) if err != nil && !verify.ErrorISOUnsupported(conn.PartitionID, err) { return sdkdiag.AppendErrorf(diags, "listing tags for ElastiCache Cluster (%s): %s", d.Id(), err) diff --git a/internal/service/elasticache/cluster_data_source_test.go b/internal/service/elasticache/cluster_data_source_test.go index f60d8b5d2c6..2cdf3d59bcb 100644 --- a/internal/service/elasticache/cluster_data_source_test.go +++ b/internal/service/elasticache/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( diff --git a/internal/service/elasticache/cluster_test.go b/internal/service/elasticache/cluster_test.go index 91c4cb3ab1a..831541a237b 100644 --- a/internal/service/elasticache/cluster_test.go +++ b/internal/service/elasticache/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -1421,7 +1424,7 @@ func testAccCheckClusterRecreated(i, j *elasticache.CacheCluster) resource.TestC func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_cluster" { @@ -1455,7 +1458,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *elasticache.Cac return fmt.Errorf("No ElastiCache Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) output, err := tfelasticache.FindCacheClusterByID(ctx, conn, rs.Primary.ID) @@ -1721,11 +1724,9 @@ resource "aws_elasticache_cluster" "test" { node_type = "cache.t3.small" num_cache_nodes = 1 engine = "redis" - engine_version = "2.8.19" port = 6379 subnet_group_name = aws_elasticache_subnet_group.test.name security_group_ids = [aws_security_group.test.id] - parameter_group_name = "default.redis2.8" notification_topic_arn = aws_sns_topic.test.arn availability_zone = data.aws_availability_zones.available.names[0] } @@ -1799,12 +1800,10 @@ resource "aws_security_group_rule" "test" { } resource "aws_elasticache_cluster" "test" { - cluster_id = %[1]q - engine = "redis" - engine_version = "5.0.4" - node_type = "cache.t2.micro" - num_cache_nodes = 1 - parameter_group_name = "default.redis5.0" + cluster_id = %[1]q + engine = "redis" + node_type = "cache.t2.micro" + num_cache_nodes = 1 } `, rName) } @@ -1907,14 +1906,14 @@ resource "aws_elasticache_cluster" "test" { func testAccClusterConfig_replicationGroupIDAvailabilityZone(rName string) string { return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` resource 
"aws_elasticache_replication_group" "test" { - replication_group_description = "Terraform Acceptance Testing" - replication_group_id = %[1]q - node_type = "cache.t3.medium" - number_cache_clusters = 1 - port = 6379 + description = "Terraform Acceptance Testing" + replication_group_id = %[1]q + node_type = "cache.t3.medium" + num_cache_clusters = 1 + port = 6379 lifecycle { - ignore_changes = [number_cache_clusters] + ignore_changes = [num_cache_clusters] } } @@ -1929,14 +1928,14 @@ resource "aws_elasticache_cluster" "test" { func testAccClusterConfig_replicationGroupIDReplica(rName string, count int) string { return fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { - replication_group_description = "Terraform Acceptance Testing" - replication_group_id = %[1]q - node_type = "cache.t3.medium" - number_cache_clusters = 1 - port = 6379 + description = "Terraform Acceptance Testing" + replication_group_id = %[1]q + node_type = "cache.t3.medium" + num_cache_clusters = 1 + port = 6379 lifecycle { - ignore_changes = [number_cache_clusters] + ignore_changes = [num_cache_clusters] } } @@ -2060,11 +2059,13 @@ resource "aws_iam_role" "r" { resource "aws_kinesis_firehose_delivery_stream" "ds" { name = %[1]q - destination = "s3" - s3_configuration { - role_arn = aws_iam_role.r.arn + destination = "extended_s3" + + extended_s3_configuration { bucket_arn = aws_s3_bucket.b.arn + role_arn = aws_iam_role.r.arn } + lifecycle { ignore_changes = [ tags["LogDeliveryEnabled"], diff --git a/internal/service/elasticache/diff.go b/internal/service/elasticache/diff.go index 979ea1da74b..dfedae91963 100644 --- a/internal/service/elasticache/diff.go +++ b/internal/service/elasticache/diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( diff --git a/internal/service/elasticache/engine_version.go b/internal/service/elasticache/engine_version.go index 842665370ea..cedbdceb097 100644 --- a/internal/service/elasticache/engine_version.go +++ b/internal/service/elasticache/engine_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -30,15 +33,10 @@ func validMemcachedVersionString(v interface{}, k string) (ws []string, errors [ } const ( - redisVersionPreV6RegexpRaw = `[1-5](\.[[:digit:]]+){2}` - redisVersionPostV6RegexpRaw = `(([6-9])\.x)|([6-9]\.[[:digit:]]+)` - - redisVersionRegexpRaw = redisVersionPreV6RegexpRaw + "|" + redisVersionPostV6RegexpRaw -) + redisVersionPreV6RegexpPattern = `^[1-5](\.[[:digit:]]+){2}$` + redisVersionPostV6RegexpPattern = `^((6)\.x)|([6-9]\.[[:digit:]]+)$` -const ( - redisVersionRegexpPattern = "^" + redisVersionRegexpRaw + "$" - redisVersionPostV6RegexpPattern = "^" + redisVersionPostV6RegexpRaw + "$" + redisVersionRegexpPattern = redisVersionPreV6RegexpPattern + "|" + redisVersionPostV6RegexpPattern ) var ( @@ -97,11 +95,11 @@ func engineVersionIsDowngrade(diff getChangeDiffer) (bool, error) { o, n := diff.GetChange("engine_version") oVersion, err := normalizeEngineVersion(o.(string)) if err != nil { - return false, fmt.Errorf("error parsing old engine_version: %w", err) + return false, fmt.Errorf("parsing old engine_version: %w", err) } nVersion, err := normalizeEngineVersion(n.(string)) if err != nil { - return false, fmt.Errorf("error parsing new engine_version: %w", err) + return false, fmt.Errorf("parsing new engine_version: %w", err) } return nVersion.LessThan(oVersion), nil @@ -128,15 +126,14 @@ func engineVersionForceNewOnDowngrade(diff forceNewDiffer) error { return diff.ForceNew("engine_version") } -// normalizeEngineVersion returns a github.com/hashicorp/go-version Version -// that can handle a regular 1.2.3 version 
number or either the 6.x or 6.0 version number used for -// ElastiCache Redis version 6 and higher. 6.x will sort to 6. +// normalizeEngineVersion returns a github.com/hashicorp/go-version Version from: +// - a regular 1.2.3 version number +// - either the 6.x or 6.0 version number used for ElastiCache Redis version 6. 6.x will sort to 6. +// - a 7.0 version number used from version 7 func normalizeEngineVersion(version string) (*gversion.Version, error) { if matches := redisVersionPostV6Regexp.FindStringSubmatch(version); matches != nil { if matches[1] != "" { version = fmt.Sprintf("%s.%d", matches[2], math.MaxInt) - } else if matches[3] != "" { - version = matches[3] } } return gversion.NewVersion(version) diff --git a/internal/service/elasticache/engine_version_test.go b/internal/service/elasticache/engine_version_test.go index fd096c073d8..76a81e8f43c 100644 --- a/internal/service/elasticache/engine_version_test.go +++ b/internal/service/elasticache/engine_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -139,6 +142,42 @@ func TestValidRedisVersionString(t *testing.T) { version: "6.y", valid: false, }, + { + version: "7.0", + valid: true, + }, + { + version: "7.2", + valid: true, + }, + { + version: "7.x", + valid: false, + }, + { + version: "7.2.x", + valid: false, + }, + { + version: "7.5.0", + valid: false, + }, + { + version: "7.5.", + valid: false, + }, + { + version: "7.", + valid: false, + }, + { + version: "7", + valid: false, + }, + { + version: "7.y", + valid: false, + }, } for _, testcase := range testcases { @@ -191,6 +230,11 @@ func TestValidateClusterEngineVersion(t *testing.T) { version: "6.0", valid: false, }, + { + engine: "", + version: "7.0", + valid: false, + }, { engine: engineMemcached, @@ -207,6 +251,11 @@ func TestValidateClusterEngineVersion(t *testing.T) { version: "6.0", valid: false, }, + { + engine: engineMemcached, + version: "7.0", + valid: false, + }, { engine: engineRedis, @@ -223,6 +272,11 @@ func TestValidateClusterEngineVersion(t *testing.T) { version: "6.0", valid: true, }, + { + engine: engineRedis, + version: "7.0", + valid: true, + }, } for _, testcase := range testcases { @@ -277,17 +331,17 @@ func TestCustomizeDiffEngineVersionIsDowngrade(t *testing.T) { expected: false, }, - // "upgrade major 6.x": { - // old: "5.0.6", - // new: "6.x", - // expectForceNew: false, - // }, + "upgrade major 6.x": { + old: "5.0.6", + new: "6.x", + expected: false, + }, - // "upgrade major 6.digit": { - // old: "5.0.6", - // new: "6.0", - // expectForceNew: false, - // }, + "upgrade major 6.digit": { + old: "5.0.6", + new: "6.0", + expected: false, + }, "downgrade minor versions": { old: "1.3.5", @@ -319,15 +373,15 @@ func TestCustomizeDiffEngineVersionIsDowngrade(t *testing.T) { expected: false, }, - "downgrade from major 7.x to 6.x": { - old: "7.x", + "downgrade from major 7.digit to 6.x": { + old: "7.2", new: "6.x", expected: true, }, - "downgrade from major 7.digit to 
6.x": { + "downgrade from major 7.digit to 6.digit": { old: "7.2", - new: "6.x", + new: "6.2", expected: true, }, } @@ -419,17 +473,23 @@ func TestCustomizeDiffEngineVersionForceNewOnDowngrade(t *testing.T) { expectForceNew: false, }, - // "upgrade major 6.x": { - // old: "5.0.6", - // new: "6.x", - // expectForceNew: false, - // }, + "upgrade major 6.x": { + old: "5.0.6", + new: "6.x", + expectForceNew: false, + }, - // "upgrade major 6.digit": { - // old: "5.0.6", - // new: "6.0", - // expectForceNew: false, - // }, + "upgrade major 6.digit": { + old: "5.0.6", + new: "6.0", + expectForceNew: false, + }, + + "upgrade major 7.digit": { + old: "6.x", + new: "7.0", + expectForceNew: false, + }, "downgrade minor versions": { old: "1.3.5", @@ -461,15 +521,21 @@ func TestCustomizeDiffEngineVersionForceNewOnDowngrade(t *testing.T) { expectForceNew: false, }, - "downgrade from major 7.x to 6.x": { - old: "7.x", + "downgrade from major 7.digit to 6.x": { + old: "7.2", new: "6.x", expectForceNew: true, }, - "downgrade from major 7.digit to 6.x": { + "downgrade from major 7.digit to 6.digit": { old: "7.2", - new: "6.x", + new: "6.2", + expectForceNew: true, + }, + + "downgrade major 7.digit": { + old: "7.2", + new: "7.0", expectForceNew: true, }, } @@ -529,10 +595,19 @@ func TestNormalizeEngineVersion(t *testing.T) { normalized: fmt.Sprintf("6.%d.0", math.MaxInt), valid: true, }, + { + version: "7.2", + normalized: "7.2.0", + valid: true, + }, { version: "5.x", valid: false, }, + { + version: "7.x", + valid: false, + }, } for _, testcase := range testcases { diff --git a/internal/service/elasticache/find.go b/internal/service/elasticache/find.go index abaf1117926..3371482ddf1 100644 --- a/internal/service/elasticache/find.go +++ b/internal/service/elasticache/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( diff --git a/internal/service/elasticache/flex.go b/internal/service/elasticache/flex.go index 87fa713d7f5..10419cd1d2d 100644 --- a/internal/service/elasticache/flex.go +++ b/internal/service/elasticache/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( diff --git a/internal/service/elasticache/generate.go b/internal/service/elasticache/generate.go index 55337bf438a..b6f9413373d 100644 --- a/internal/service/elasticache/generate.go +++ b/internal/service/elasticache/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceName -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceName -UntagOp=RemoveTagsFromResource -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package elasticache diff --git a/internal/service/elasticache/global_replication_group.go b/internal/service/elasticache/global_replication_group.go index cf74c0c405c..f50e5d8eef9 100644 --- a/internal/service/elasticache/global_replication_group.go +++ b/internal/service/elasticache/global_replication_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -268,7 +271,7 @@ func paramGroupNameRequiresMajorVersionUpgrade(diff changeDiffer) error { } func resourceGlobalReplicationGroupCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) id := d.Get("global_replication_group_id_suffix").(string) input := &elasticache.CreateGlobalReplicationGroupInput{ @@ -381,7 +384,7 @@ func resourceGlobalReplicationGroupCreate(ctx context.Context, d *schema.Resourc } func resourceGlobalReplicationGroupRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) globalReplicationGroup, err := FindGlobalReplicationGroupByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -427,7 +430,7 @@ func resourceGlobalReplicationGroupRead(ctx context.Context, d *schema.ResourceD type globalReplicationGroupUpdater func(input *elasticache.ModifyGlobalReplicationGroupInput) func resourceGlobalReplicationGroupUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) // Only one field can be changed per request if d.HasChange("cache_node_type") { @@ -551,7 +554,7 @@ func updateGlobalReplicationGroup(ctx context.Context, conn *elasticache.ElastiC } func resourceGlobalReplicationGroupDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) // Using Update timeout because the Global Replication Group could be in the middle of an update operation err := deleteGlobalReplicationGroup(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate), 
d.Timeout(schema.TimeoutDelete)) diff --git a/internal/service/elasticache/global_replication_group_test.go b/internal/service/elasticache/global_replication_group_test.go index 57e610d4b8f..acb4bead54c 100644 --- a/internal/service/elasticache/global_replication_group_test.go +++ b/internal/service/elasticache/global_replication_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -1446,7 +1449,7 @@ func testAccCheckGlobalReplicationGroupExists(ctx context.Context, resourceName return fmt.Errorf("No ElastiCache Global Replication Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) grg, err := tfelasticache.FindGlobalReplicationGroupByID(ctx, conn, rs.Primary.ID) if err != nil { return fmt.Errorf("retrieving ElastiCache Global Replication Group (%s): %w", rs.Primary.ID, err) @@ -1464,7 +1467,7 @@ func testAccCheckGlobalReplicationGroupExists(ctx context.Context, resourceName func testAccCheckGlobalReplicationGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_global_replication_group" { @@ -1486,7 +1489,7 @@ func testAccCheckGlobalReplicationGroupDestroy(ctx context.Context) resource.Tes } func testAccPreCheckGlobalReplicationGroup(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) input := &elasticache.DescribeGlobalReplicationGroupsInput{} _, err := conn.DescribeGlobalReplicationGroupsWithContext(ctx, input) @@ -1503,7 +1506,7 @@ func 
testAccPreCheckGlobalReplicationGroup(ctx context.Context, t *testing.T) { func testAccMatchReplicationGroupActualVersion(ctx context.Context, j *elasticache.ReplicationGroup, r *regexp.Regexp) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) cacheCluster := j.NodeGroups[0].NodeGroupMembers[0] cluster, err := tfelasticache.FindCacheClusterByID(ctx, conn, aws.StringValue(cacheCluster.CacheClusterId)) @@ -1526,13 +1529,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + node_type = "cache.m5.large" + num_cache_clusters = 1 } `, rName, primaryReplicationGroupId) } @@ -1545,13 +1548,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - node_type = %[3]q - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + node_type = %[3]q + num_cache_clusters = 1 } `, rName, primaryReplicationGroupId, nodeType) } @@ -1564,14 +1567,14 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" node_type = "cache.m5.large" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 2 + engine = "redis" + engine_version = "5.0.6" + 
num_cache_clusters = 2 automatic_failover_enabled = %[3]s } @@ -1588,13 +1591,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + node_type = "cache.m5.large" + num_cache_clusters = 1 } `, rName, primaryReplicationGroupId, description) } @@ -1608,13 +1611,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - node_type = %[3]q - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + node_type = %[3]q + num_cache_clusters = 1 } `, rName, primaryReplicationGroupId, nodeType) } @@ -1628,13 +1631,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - node_type = %[3]q - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + node_type = %[3]q + num_cache_clusters = 1 lifecycle { ignore_changes = [node_type] @@ -1652,12 +1655,12 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + num_cache_clusters = 1 } 
`, rName, primaryReplicationGroupId, nodeType) } @@ -1672,14 +1675,14 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" node_type = "cache.m5.large" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 2 + engine = "redis" + engine_version = "5.0.6" + num_cache_clusters = 2 automatic_failover_enabled = %[3]s } @@ -1696,14 +1699,14 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" node_type = "cache.m5.large" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 2 + engine = "redis" + engine_version = "5.0.6" + num_cache_clusters = 2 automatic_failover_enabled = %[3]s @@ -1724,12 +1727,12 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 2 + engine = "redis" + engine_version = "5.0.6" + num_cache_clusters = 2 lifecycle { ignore_changes = [automatic_failover_enabled] @@ -1755,40 +1758,40 @@ resource "aws_elasticache_global_replication_group" "test" { resource "aws_elasticache_replication_group" "primary" { provider = aws - replication_group_id = "%[1]s-p" - replication_group_description = "primary" + replication_group_id = "%[1]s-p" + description = "primary" subnet_group_name = aws_elasticache_subnet_group.primary.name node_type = "cache.m5.large" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + 
num_cache_clusters = 1 } resource "aws_elasticache_replication_group" "alternate" { provider = awsalternate - replication_group_id = "%[1]s-a" - replication_group_description = "alternate" - global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id + replication_group_id = "%[1]s-a" + description = "alternate" + global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id subnet_group_name = aws_elasticache_subnet_group.alternate.name - number_cache_clusters = 1 + num_cache_clusters = 1 } resource "aws_elasticache_replication_group" "third" { provider = awsthird - replication_group_id = "%[1]s-t" - replication_group_description = "third" - global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id + replication_group_id = "%[1]s-t" + description = "third" + global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id subnet_group_name = aws_elasticache_subnet_group.third.name - number_cache_clusters = 1 + num_cache_clusters = 1 } `, rName)) } @@ -1810,28 +1813,28 @@ resource "aws_elasticache_global_replication_group" "test" { resource "aws_elasticache_replication_group" "primary" { provider = aws - replication_group_id = "%[1]s-p" - replication_group_description = "primary" + replication_group_id = "%[1]s-p" + description = "primary" subnet_group_name = aws_elasticache_subnet_group.primary.name node_type = "cache.m5.large" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + num_cache_clusters = 1 } resource "aws_elasticache_replication_group" "secondary" { provider = awsalternate - replication_group_id = "%[1]s-a" - replication_group_description = "alternate" - global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id + replication_group_id = "%[1]s-a" + description = 
"alternate" + global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id subnet_group_name = aws_elasticache_subnet_group.secondary.name - number_cache_clusters = 1 + num_cache_clusters = 1 } `, rName)) } @@ -1853,28 +1856,28 @@ resource "aws_elasticache_global_replication_group" "test" { resource "aws_elasticache_replication_group" "primary" { provider = aws - replication_group_id = "%[1]s-p" - replication_group_description = "primary" + replication_group_id = "%[1]s-p" + description = "primary" subnet_group_name = aws_elasticache_subnet_group.primary.name node_type = "cache.m5.large" - engine = "redis" - engine_version = "5.0.6" - number_cache_clusters = 1 + engine = "redis" + engine_version = "5.0.6" + num_cache_clusters = 1 } resource "aws_elasticache_replication_group" "third" { provider = awsthird - replication_group_id = "%[1]s-t" - replication_group_description = "third" - global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id + replication_group_id = "%[1]s-t" + description = "third" + global_replication_group_id = aws_elasticache_global_replication_group.test.global_replication_group_id subnet_group_name = aws_elasticache_subnet_group.third.name - number_cache_clusters = 1 + num_cache_clusters = 1 } `, rName)) } @@ -1887,8 +1890,8 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[1]q - replication_group_description = "test" + replication_group_id = %[1]q + description = "test" engine = "redis" engine_version = "6.2" @@ -1910,8 +1913,8 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[1]q - replication_group_description = "test" + replication_group_id = %[1]q + description = "test" engine = "redis" engine_version = "6.2" @@ -1939,8 +1942,8 @@ resource 
"aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[1]q - replication_group_description = "test" + replication_group_id = %[1]q + description = "test" engine = "redis" engine_version = "6.2" @@ -1966,13 +1969,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = %[3]q - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = %[3]q + node_type = "cache.m5.large" + num_cache_clusters = 1 } `, rName, primaryReplicationGroupId, repGroupEngineVersion) } @@ -1987,13 +1990,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = %[3]q - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = %[3]q + node_type = "cache.m5.large" + num_cache_clusters = 1 lifecycle { ignore_changes = [engine_version] @@ -2013,13 +2016,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = %[3]q - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = %[3]q + node_type = "cache.m5.large" + num_cache_clusters = 1 lifecycle { ignore_changes = [engine_version] @@ -2039,13 +2042,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - 
replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = %[3]q - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = %[3]q + node_type = "cache.m5.large" + num_cache_clusters = 1 lifecycle { ignore_changes = [engine_version] @@ -2070,13 +2073,13 @@ resource "aws_elasticache_global_replication_group" "test" { } resource "aws_elasticache_replication_group" "test" { - replication_group_id = %[2]q - replication_group_description = "test" + replication_group_id = %[2]q + description = "test" - engine = "redis" - engine_version = %[3]q - node_type = "cache.m5.large" - number_cache_clusters = 1 + engine = "redis" + engine_version = %[3]q + node_type = "cache.m5.large" + num_cache_clusters = 1 lifecycle { ignore_changes = [engine_version] diff --git a/internal/service/elasticache/parameter_group.go b/internal/service/elasticache/parameter_group.go index ae30f42a8d5..a0ce4260fd8 100644 --- a/internal/service/elasticache/parameter_group.go +++ b/internal/service/elasticache/parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -84,13 +87,13 @@ func ResourceParameterGroup() *schema.Resource { func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) input := elasticache.CreateCacheParameterGroupInput{ CacheParameterGroupName: aws.String(d.Get("name").(string)), CacheParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } resp, err := conn.CreateCacheParameterGroupWithContext(ctx, &input) @@ -115,7 +118,7 @@ func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, m func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) parameterGroup, err := FindParameterGroupByName(ctx, conn, d.Id()) if err != nil { @@ -145,7 +148,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -267,7 +270,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) err := deleteParameterGroup(ctx, conn, d.Id()) if 
tfawserr.ErrCodeEquals(err, elasticache.ErrCodeCacheParameterGroupNotFoundFault) { diff --git a/internal/service/elasticache/parameter_group_test.go b/internal/service/elasticache/parameter_group_test.go index 81358689e1d..11e028e9121 100644 --- a/internal/service/elasticache/parameter_group_test.go +++ b/internal/service/elasticache/parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -420,7 +423,7 @@ func TestAccElastiCacheParameterGroup_tags(t *testing.T) { func testAccCheckParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_parameter_group" { @@ -475,7 +478,7 @@ func testAccCheckParameterGroupExists(ctx context.Context, n string, v *elastica return fmt.Errorf("No Cache Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) opts := elasticache.DescribeCacheParameterGroupsInput{ CacheParameterGroupName: aws.String(rs.Primary.ID), diff --git a/internal/service/elasticache/replication_group.go b/internal/service/elasticache/replication_group.go index e777c395192..89665620edd 100644 --- a/internal/service/elasticache/replication_group.go +++ b/internal/service/elasticache/replication_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -20,10 +23,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -355,12 +358,12 @@ func ResourceReplicationGroup() *schema.Resource { func resourceReplicationGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) replicationGroupID := d.Get("replication_group_id").(string) input := &elasticache.CreateReplicationGroupInput{ ReplicationGroupId: aws.String(replicationGroupID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -517,7 +520,7 @@ func resourceReplicationGroupCreate(ctx context.Context, d *schema.ResourceData, } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.ReplicationGroup.ARN), tags) // If default tags only, continue. Otherwise, error. 
@@ -535,7 +538,7 @@ func resourceReplicationGroupCreate(ctx context.Context, d *schema.ResourceData, func resourceReplicationGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) rgp, err := FindReplicationGroupByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -662,7 +665,7 @@ func resourceReplicationGroupRead(ctx context.Context, d *schema.ResourceData, m func resourceReplicationGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) if d.HasChangesExcept("tags", "tags_all") { if d.HasChanges( @@ -843,7 +846,7 @@ func resourceReplicationGroupUpdate(ctx context.Context, d *schema.ResourceData, func resourceReplicationGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) v, hasGlobalReplicationGroupID := d.GetOk("global_replication_group_id") if hasGlobalReplicationGroupID { @@ -1001,12 +1004,12 @@ func modifyReplicationGroupShardConfigurationNumNodeGroups(ctx context.Context, log.Printf("[DEBUG] Modifying ElastiCache Replication Group (%s) shard configuration: %s", d.Id(), input) _, err := conn.ModifyReplicationGroupShardConfigurationWithContext(ctx, input) if err != nil { - return fmt.Errorf("error modifying ElastiCache Replication Group shard configuration: %w", err) + return fmt.Errorf("modifying ElastiCache Replication Group shard configuration: %w", err) } _, err = WaitReplicationGroupAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) if err != nil { - return fmt.Errorf("error waiting for ElastiCache Replication 
Group (%s) shard reconfiguration completion: %w", d.Id(), err) + return fmt.Errorf("waiting for ElastiCache Replication Group (%s) shard reconfiguration completion: %w", d.Id(), err) } return nil @@ -1025,11 +1028,11 @@ func modifyReplicationGroupShardConfigurationReplicasPerNodeGroup(ctx context.Co } _, err := conn.IncreaseReplicaCountWithContext(ctx, input) if err != nil { - return fmt.Errorf("error adding ElastiCache Replication Group (%s) replicas: %w", d.Id(), err) + return fmt.Errorf("adding ElastiCache Replication Group (%s) replicas: %w", d.Id(), err) } _, err = WaitReplicationGroupAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) if err != nil { - return fmt.Errorf("error waiting for ElastiCache Replication Group (%s) replica addition: %w", d.Id(), err) + return fmt.Errorf("waiting for ElastiCache Replication Group (%s) replica addition: %w", d.Id(), err) } } else { input := &elasticache.DecreaseReplicaCountInput{ @@ -1039,11 +1042,11 @@ func modifyReplicationGroupShardConfigurationReplicasPerNodeGroup(ctx context.Co } _, err := conn.DecreaseReplicaCountWithContext(ctx, input) if err != nil { - return fmt.Errorf("error removing ElastiCache Replication Group (%s) replicas: %w", d.Id(), err) + return fmt.Errorf("removing ElastiCache Replication Group (%s) replicas: %w", d.Id(), err) } _, err = WaitReplicationGroupAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) if err != nil { - return fmt.Errorf("error waiting for ElastiCache Replication Group (%s) replica removal: %w", d.Id(), err) + return fmt.Errorf("waiting for ElastiCache Replication Group (%s) replica removal: %w", d.Id(), err) } } @@ -1072,12 +1075,12 @@ func increaseReplicationGroupNumCacheClusters(ctx context.Context, conn *elastic } _, err := conn.IncreaseReplicaCountWithContext(ctx, input) if err != nil { - return fmt.Errorf("error adding ElastiCache Replication Group (%s) replicas: %w", replicationGroupID, err) + return fmt.Errorf("adding ElastiCache Replication Group 
(%s) replicas: %w", replicationGroupID, err) } _, err = WaitReplicationGroupMemberClustersAvailable(ctx, conn, replicationGroupID, timeout) if err != nil { - return fmt.Errorf("error waiting for ElastiCache Replication Group (%s) replica addition: %w", replicationGroupID, err) + return fmt.Errorf("waiting for ElastiCache Replication Group (%s) replica addition: %w", replicationGroupID, err) } return nil @@ -1091,12 +1094,12 @@ func decreaseReplicationGroupNumCacheClusters(ctx context.Context, conn *elastic } _, err := conn.DecreaseReplicaCountWithContext(ctx, input) if err != nil { - return fmt.Errorf("error removing ElastiCache Replication Group (%s) replicas: %w", replicationGroupID, err) + return fmt.Errorf("removing ElastiCache Replication Group (%s) replicas: %w", replicationGroupID, err) } _, err = WaitReplicationGroupMemberClustersAvailable(ctx, conn, replicationGroupID, timeout) if err != nil { - return fmt.Errorf("error waiting for ElastiCache Replication Group (%s) replica removal: %w", replicationGroupID, err) + return fmt.Errorf("waiting for ElastiCache Replication Group (%s) replica removal: %w", replicationGroupID, err) } return nil diff --git a/internal/service/elasticache/replication_group_data_source.go b/internal/service/elasticache/replication_group_data_source.go index e3ac2acdfe9..90bf3b8c4a9 100644 --- a/internal/service/elasticache/replication_group_data_source.go +++ b/internal/service/elasticache/replication_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -119,7 +122,7 @@ func DataSourceReplicationGroup() *schema.Resource { func dataSourceReplicationGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) groupID := d.Get("replication_group_id").(string) diff --git a/internal/service/elasticache/replication_group_data_source_test.go b/internal/service/elasticache/replication_group_data_source_test.go index 3aa51a5c5fa..9121f3acea9 100644 --- a/internal/service/elasticache/replication_group_data_source_test.go +++ b/internal/service/elasticache/replication_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -71,9 +74,9 @@ func TestAccElastiCacheReplicationGroupDataSource_clusterMode(t *testing.T) { resource.TestCheckResourceAttrPair(dataSourceName, "multi_az_enabled", resourceName, "multi_az_enabled"), resource.TestCheckResourceAttrPair(dataSourceName, "configuration_endpoint_address", resourceName, "configuration_endpoint_address"), resource.TestCheckResourceAttrPair(dataSourceName, "node_type", resourceName, "node_type"), - resource.TestCheckResourceAttrPair(dataSourceName, "num_node_groups", resourceName, "cluster_mode.0.num_node_groups"), + resource.TestCheckResourceAttrPair(dataSourceName, "num_node_groups", resourceName, "num_node_groups"), resource.TestCheckResourceAttrPair(dataSourceName, "port", resourceName, "port"), - resource.TestCheckResourceAttrPair(dataSourceName, "replicas_per_node_group", resourceName, "cluster_mode.0.replicas_per_node_group"), + resource.TestCheckResourceAttrPair(dataSourceName, "replicas_per_node_group", resourceName, "replicas_per_node_group"), resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), 
resource.TestCheckResourceAttrPair(dataSourceName, "replication_group_id", resourceName, "replication_group_id"), ), @@ -182,10 +185,8 @@ resource "aws_elasticache_replication_group" "test" { port = 6379 automatic_failover_enabled = true - cluster_mode { - replicas_per_node_group = 1 - num_node_groups = 2 - } + replicas_per_node_group = 1 + num_node_groups = 2 } data "aws_elasticache_replication_group" "test" { diff --git a/internal/service/elasticache/replication_group_test.go b/internal/service/elasticache/replication_group_test.go index 25c8eda2990..b62e6e29792 100644 --- a/internal/service/elasticache/replication_group_test.go +++ b/internal/service/elasticache/replication_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -135,6 +138,41 @@ func TestAccElastiCacheReplicationGroup_uppercase(t *testing.T) { }) } +func TestAccElastiCacheReplicationGroup_EngineVersion_v7(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var rg elasticache.ReplicationGroup + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckReplicationGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroupConfig_v7(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckReplicationGroupExists(ctx, resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "engine_version", "7.0"), + resource.TestMatchResourceAttr(resourceName, "engine_version_actual", 
regexp.MustCompile(`^7\.[[:digit:]]+\.[[:digit:]]+$`)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, //not in the API + }, + }, + }) +} + func TestAccElastiCacheReplicationGroup_EngineVersion_update(t *testing.T) { ctx := acctest.Context(t) if testing.Short() { @@ -1359,7 +1397,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClustersFailover_autoFailover { PreConfig: func() { // Ensure that primary is on the node we are trying to delete - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * time.Minute if err := resourceReplicationGroupSetPrimaryClusterID(ctx, conn, rName, formatReplicationGroupClusterID(rName, 3), timeout); err != nil { @@ -1411,7 +1449,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClustersFailover_autoFailover { PreConfig: func() { // Ensure that primary is on the node we are trying to delete - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * time.Minute // Must disable automatic failover first @@ -1474,7 +1512,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClusters_multiAZEnabled(t *te { PreConfig: func() { // Ensure that primary is on the node we are trying to delete - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * time.Minute // Must disable automatic failover first @@ -1532,7 +1570,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappea { PreConfig: func() { // Remove one of the Cache Clusters - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * 
time.Minute cacheClusterID := formatReplicationGroupClusterID(rName, 2) @@ -1583,7 +1621,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappea { PreConfig: func() { // Remove one of the Cache Clusters - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * time.Minute cacheClusterID := formatReplicationGroupClusterID(rName, 2) @@ -1634,7 +1672,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappea { PreConfig: func() { // Remove one of the Cache Clusters - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * time.Minute cacheClusterID := formatReplicationGroupClusterID(rName, 2) @@ -1685,7 +1723,7 @@ func TestAccElastiCacheReplicationGroup_NumberCacheClustersMemberClusterDisappea { PreConfig: func() { // Remove one of the Cache Clusters - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) timeout := 40 * time.Minute cacheClusterID := formatReplicationGroupClusterID(rName, 2) @@ -2415,7 +2453,7 @@ func testAccCheckReplicationGroupExists(ctx context.Context, n string, v *elasti return fmt.Errorf("No replication group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) rg, err := tfelasticache.FindReplicationGroupByID(ctx, conn, rs.Primary.ID) if err != nil { return fmt.Errorf("ElastiCache error: %w", err) @@ -2429,7 +2467,7 @@ func testAccCheckReplicationGroupExists(ctx context.Context, n string, v *elasti func testAccCheckReplicationGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + 
conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_replication_group" { @@ -2450,7 +2488,7 @@ func testAccCheckReplicationGroupDestroy(ctx context.Context) resource.TestCheck func testAccCheckReplicationGroupParameterGroup(ctx context.Context, rg *elasticache.ReplicationGroup, pg *elasticache.CacheParameterGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) cacheCluster := rg.NodeGroups[0].NodeGroupMembers[0] cluster, err := tfelasticache.FindCacheClusterByID(ctx, conn, aws.StringValue(cacheCluster.CacheClusterId)) @@ -2473,7 +2511,7 @@ func testAccCheckReplicationGroupParameterGroup(ctx context.Context, rg *elastic func testAccCheckGlobalReplicationGroupMemberParameterGroupDestroy(ctx context.Context, pg *elasticache.CacheParameterGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) paramGroupName := aws.StringValue(pg.CacheParameterGroupName) @@ -2495,7 +2533,7 @@ func testAccCheckReplicationGroupUserGroup(ctx context.Context, resourceName, us return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) rg, err := tfelasticache.FindReplicationGroupByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -2557,7 +2595,7 @@ func testCheckEngineStuffDefault(ctx context.Context, resourceName string) resou func testCheckRedisEngineVersionLatest(ctx context.Context, v *elasticache.CacheEngineVersion) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := 
acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) versions, err := conn.DescribeCacheEngineVersionsWithContext(ctx, &elasticache.DescribeCacheEngineVersionsInput{ Engine: aws.String("redis"), @@ -2581,7 +2619,7 @@ func testCheckRedisEngineVersionLatest(ctx context.Context, v *elasticache.Cache func testCheckRedisParameterGroupDefault(ctx context.Context, version *elasticache.CacheEngineVersion, v *elasticache.CacheParameterGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) parameterGroup, err := tfelasticache.FindParameterGroupByFilter(ctx, conn, tfelasticache.FilterRedisParameterGroupFamily(aws.StringValue(version.CacheParameterGroupFamily)), @@ -2622,7 +2660,7 @@ func testCheckEngineStuffClusterEnabledDefault(ctx context.Context, resourceName func testCheckRedisParameterGroupClusterEnabledDefault(ctx context.Context, version *elasticache.CacheEngineVersion, v *elasticache.CacheParameterGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) parameterGroup, err := tfelasticache.FindParameterGroupByFilter(ctx, conn, tfelasticache.FilterRedisParameterGroupFamily(aws.StringValue(version.CacheParameterGroupFamily)), @@ -2688,6 +2726,17 @@ resource "aws_elasticache_replication_group" "test" { `, rName) } +func testAccReplicationGroupConfig_v7(rName string) string { + return fmt.Sprintf(` +resource "aws_elasticache_replication_group" "test" { + replication_group_id = %[1]q + replication_group_description = "test description" + node_type = "cache.t3.small" + engine_version = "7.0" +} +`, rName) +} + func testAccReplicationGroupConfig_uppercase(rName string) string { 
return acctest.ConfigCompose( acctest.ConfigVPCWithSubnets(rName, 2), @@ -3700,8 +3749,8 @@ resource "aws_iam_role" "r" { resource "aws_kinesis_firehose_delivery_stream" "ds" { name = "%[1]s" - destination = "s3" - s3_configuration { + destination = "extended_s3" + extended_s3_configuration { role_arn = aws_iam_role.r.arn bucket_arn = aws_s3_bucket.b.arn } diff --git a/internal/service/elasticache/service.go b/internal/service/elasticache/service.go index d4b7ff0734f..439a7f5b420 100644 --- a/internal/service/elasticache/service.go +++ b/internal/service/elasticache/service.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache const ( diff --git a/internal/service/elasticache/service_package_gen.go b/internal/service/elasticache/service_package_gen.go index 1ea8e199433..3845f2843fc 100644 --- a/internal/service/elasticache/service_package_gen.go +++ b/internal/service/elasticache/service_package_gen.go @@ -5,6 +5,10 @@ package elasticache import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + elasticache_sdkv1 "github.com/aws/aws-sdk-go/service/elasticache" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -105,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ElastiCache } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*elasticache_sdkv1.ElastiCache, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return elasticache_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/elasticache/status.go b/internal/service/elasticache/status.go
index 48d254fafa6..a1d1b6b480f 100644
--- a/internal/service/elasticache/status.go
+++ b/internal/service/elasticache/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elasticache
 
 import (
diff --git a/internal/service/elasticache/subnet_group.go b/internal/service/elasticache/subnet_group.go
index 80cbca91e96..47ffb9fa19f 100644
--- a/internal/service/elasticache/subnet_group.go
+++ b/internal/service/elasticache/subnet_group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -82,14 +85,14 @@ func resourceSubnetGroupDiff(ctx context.Context, diff *schema.ResourceDiff, met func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) name := d.Get("name").(string) input := &elasticache.CreateCacheSubnetGroupInput{ CacheSubnetGroupDescription: aws.String(d.Get("description").(string)), CacheSubnetGroupName: aws.String(name), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateCacheSubnetGroupWithContext(ctx, input) @@ -112,7 +115,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta d.SetId(strings.ToLower(name)) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.CacheSubnetGroup.ARN), tags) // If default tags only, continue. Otherwise, error. 
@@ -130,7 +133,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) group, err := FindCacheSubnetGroupByName(ctx, conn, d.Id()) @@ -159,7 +162,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) if d.HasChanges("subnet_ids", "description") { input := &elasticache.ModifyCacheSubnetGroupInput{ @@ -180,7 +183,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) log.Printf("[DEBUG] Deleting ElastiCache Subnet Group: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 5*time.Minute, func() (interface{}, error) { diff --git a/internal/service/elasticache/subnet_group_data_source.go b/internal/service/elasticache/subnet_group_data_source.go index 61952a50e3c..52e5314b73b 100644 --- a/internal/service/elasticache/subnet_group_data_source.go +++ b/internal/service/elasticache/subnet_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -42,7 +45,7 @@ func DataSourceSubnetGroup() *schema.Resource { func dataSourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -65,7 +68,7 @@ func dataSourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta d.Set("subnet_ids", flex.FlattenStringSet(subnetIds)) d.Set("name", group.CacheSubnetGroupName) - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for ElastiCache Subnet Group (%s): %s", d.Id(), err) diff --git a/internal/service/elasticache/subnet_group_data_source_test.go b/internal/service/elasticache/subnet_group_data_source_test.go index a8a05671503..c3c2ef99ef9 100644 --- a/internal/service/elasticache/subnet_group_data_source_test.go +++ b/internal/service/elasticache/subnet_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( diff --git a/internal/service/elasticache/subnet_group_test.go b/internal/service/elasticache/subnet_group_test.go index d109cf47506..9e8346c91ef 100644 --- a/internal/service/elasticache/subnet_group_test.go +++ b/internal/service/elasticache/subnet_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -166,7 +169,7 @@ func TestAccElastiCacheSubnetGroup_update(t *testing.T) { func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_subnet_group" { @@ -201,7 +204,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, n string, v *elasticache return fmt.Errorf("No ElastiCache Subnet Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) output, err := tfelasticache.FindCacheSubnetGroupByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/elasticache/sweep.go b/internal/service/elasticache/sweep.go index 33eae5d89a9..b524723e2d4 100644 --- a/internal/service/elasticache/sweep.go +++ b/internal/service/elasticache/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -14,7 +17,6 @@ import ( "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -67,11 +69,11 @@ func init() { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElastiCacheConn() + conn := client.ElastiCacheConn(ctx) var sweeperErrs *multierror.Error @@ -114,11 +116,11 @@ func sweepClusters(region string) error { func sweepGlobalReplicationGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElastiCacheConn() + conn := client.ElastiCacheConn(ctx) var grgGroup multierror.Group @@ -169,11 +171,11 @@ func sweepGlobalReplicationGroups(region string) error { func sweepParameterGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElastiCacheConn() + conn := client.ElastiCacheConn(ctx) err = conn.DescribeCacheParameterGroupsPagesWithContext(ctx, &elasticache.DescribeCacheParameterGroupsInput{}, func(page *elasticache.DescribeCacheParameterGroupsOutput, lastPage bool) bool { if len(page.CacheParameterGroups) == 0 { @@ -211,13 +213,13 @@ func sweepParameterGroups(region string) error { func 
sweepReplicationGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElastiCacheConn() + conn := client.ElastiCacheConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -247,7 +249,7 @@ func sweepReplicationGroups(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing ElastiCache Replication Groups: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping ElastiCache Replication Groups for %s: %w", region, err)) } @@ -263,11 +265,11 @@ func sweepReplicationGroups(region string) error { func sweepSubnetGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElastiCacheConn() + conn := client.ElastiCacheConn(ctx) err = conn.DescribeCacheSubnetGroupsPagesWithContext(ctx, &elasticache.DescribeCacheSubnetGroupsInput{}, func(page *elasticache.DescribeCacheSubnetGroupsOutput, lastPage bool) bool { if len(page.CacheSubnetGroups) == 0 { diff --git a/internal/service/elasticache/tags_gen.go b/internal/service/elasticache/tags_gen.go index 3cd415d3779..ec0b575a155 100644 --- a/internal/service/elasticache/tags_gen.go +++ b/internal/service/elasticache/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists elasticache service tags. +// listTags lists elasticache service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, identifier string) (tftags.KeyValueTags, error) { input := &elasticache.ListTagsForResourceInput{ ResourceName: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, identif // ListTags lists elasticache service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ElastiCacheConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ElastiCacheConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*elasticache.Tag) tftags.KeyValueT return tftags.New(ctx, m) } -// GetTagsIn returns elasticache service tags from Context. +// getTagsIn returns elasticache service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*elasticache.Tag { +func getTagsIn(ctx context.Context) []*elasticache.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*elasticache.Tag { return nil } -// SetTagsOut sets elasticache service tags in Context. -func SetTagsOut(ctx context.Context, tags []*elasticache.Tag) { +// setTagsOut sets elasticache service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*elasticache.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, ident return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates elasticache service tags. +// updateTags updates elasticache service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn elasticacheiface.ElastiCacheAPI, ident // UpdateTags updates elasticache service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ElastiCacheConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ElastiCacheConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/elasticache/user.go b/internal/service/elasticache/user.go index 4c9279f1798..b5e8b3bb292 100644 --- a/internal/service/elasticache/user.go +++ b/internal/service/elasticache/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -123,14 +126,14 @@ func ResourceUser() *schema.Resource { func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) userID := d.Get("user_id").(string) input := &elasticache.CreateUserInput{ AccessString: aws.String(d.Get("access_string").(string)), Engine: aws.String(d.Get("engine").(string)), NoPasswordRequired: aws.Bool(d.Get("no_password_required").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UserId: aws.String(userID), UserName: aws.String(d.Get("user_name").(string)), } @@ -163,7 +166,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.ARN), tags) // If default tags only, continue. Otherwise, error. 
@@ -181,7 +184,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) user, err := FindUserByID(ctx, conn, d.Id()) @@ -219,7 +222,7 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &elasticache.ModifyUserInput{ @@ -260,7 +263,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) log.Printf("[INFO] Deleting ElastiCache User: %s", d.Id()) _, err := conn.DeleteUserWithContext(ctx, &elasticache.DeleteUserInput{ diff --git a/internal/service/elasticache/user_data_source.go b/internal/service/elasticache/user_data_source.go index b1624fec595..5e0d3015743 100644 --- a/internal/service/elasticache/user_data_source.go +++ b/internal/service/elasticache/user_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -66,7 +69,7 @@ func DataSourceUser() *schema.Resource { func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) user, err := FindUserByID(ctx, conn, d.Get("user_id").(string)) if tfresource.NotFound(err) { diff --git a/internal/service/elasticache/user_data_source_test.go b/internal/service/elasticache/user_data_source_test.go index 78de2aada81..956e4d4aa17 100644 --- a/internal/service/elasticache/user_data_source_test.go +++ b/internal/service/elasticache/user_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( diff --git a/internal/service/elasticache/user_group.go b/internal/service/elasticache/user_group.go index ea58ecbb6c3..76283038037 100644 --- a/internal/service/elasticache/user_group.go +++ b/internal/service/elasticache/user_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -70,12 +73,12 @@ func ResourceUserGroup() *schema.Resource { func resourceUserGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) userGroupID := d.Get("user_group_id").(string) input := &elasticache.CreateUserGroupInput{ Engine: aws.String(d.Get("engine").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UserGroupId: aws.String(userGroupID), } @@ -103,7 +106,7 @@ func resourceUserGroupCreate(ctx context.Context, d *schema.ResourceData, meta i } // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.ARN), tags) // If default tags only, continue. Otherwise, error. @@ -121,7 +124,7 @@ func resourceUserGroupCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceUserGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) userGroup, err := FindUserGroupByID(ctx, conn, d.Id()) @@ -145,7 +148,7 @@ func resourceUserGroupRead(ctx context.Context, d *schema.ResourceData, meta int func resourceUserGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &elasticache.ModifyUserGroupInput{ @@ -181,7 +184,7 @@ func resourceUserGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceUserGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) log.Printf("[INFO] Deleting ElastiCache User Group: %s", d.Id()) _, err := conn.DeleteUserGroupWithContext(ctx, &elasticache.DeleteUserGroupInput{ diff --git a/internal/service/elasticache/user_group_association.go b/internal/service/elasticache/user_group_association.go index 86afa234aec..ab8fb7ad74c 100644 --- a/internal/service/elasticache/user_group_association.go +++ b/internal/service/elasticache/user_group_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( @@ -46,7 +49,7 @@ func ResourceUserGroupAssociation() *schema.Resource { func resourceUserGroupAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) userGroupID := d.Get("user_group_id").(string) userID := d.Get("user_id").(string) @@ -75,7 +78,7 @@ func resourceUserGroupAssociationCreate(ctx context.Context, d *schema.ResourceD func resourceUserGroupAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) userGroupID, userID, err := UserGroupAssociationParseResourceID(d.Id()) @@ -103,7 +106,7 @@ func resourceUserGroupAssociationRead(ctx context.Context, d *schema.ResourceDat func resourceUserGroupAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElastiCacheConn() + conn := meta.(*conns.AWSClient).ElastiCacheConn(ctx) userGroupID, userID, err := UserGroupAssociationParseResourceID(d.Id()) diff --git a/internal/service/elasticache/user_group_association_test.go b/internal/service/elasticache/user_group_association_test.go index db2a1b45ce6..31f3551ab4d 100644 --- a/internal/service/elasticache/user_group_association_test.go +++ b/internal/service/elasticache/user_group_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -137,7 +140,7 @@ func TestAccElastiCacheUserGroupAssociation_multiple(t *testing.T) { func testAccCheckUserGroupAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_user_group_association" { @@ -184,7 +187,7 @@ func testAccCheckUserGroupAssociationExists(ctx context.Context, n string) resou return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) err = tfelasticache.FindUserGroupAssociation(ctx, conn, userGroupID, userID) diff --git a/internal/service/elasticache/user_group_test.go b/internal/service/elasticache/user_group_test.go index a45f70c341c..b23e0acb1fa 100644 --- a/internal/service/elasticache/user_group_test.go +++ b/internal/service/elasticache/user_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -155,7 +158,7 @@ func TestAccElastiCacheUserGroup_disappears(t *testing.T) { func testAccCheckUserGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_user_group" { @@ -190,7 +193,7 @@ func testAccCheckUserGroupExists(ctx context.Context, n string, v *elasticache.U return fmt.Errorf("No ElastiCache User Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) output, err := tfelasticache.FindUserGroupByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/elasticache/user_test.go b/internal/service/elasticache/user_test.go index eafd9f5d6db..d6175fcf6a6 100644 --- a/internal/service/elasticache/user_test.go +++ b/internal/service/elasticache/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache_test import ( @@ -302,7 +305,7 @@ func TestAccElastiCacheUser_disappears(t *testing.T) { func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elasticache_user" { @@ -337,7 +340,7 @@ func testAccCheckUserExists(ctx context.Context, n string, v *elasticache.User) return fmt.Errorf("No ElastiCache User ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElastiCacheConn(ctx) output, err := tfelasticache.FindUserByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/elasticache/validate.go b/internal/service/elasticache/validate.go index 6eeb223d090..13ad0cb77ff 100644 --- a/internal/service/elasticache/validate.go +++ b/internal/service/elasticache/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( diff --git a/internal/service/elasticache/validate_test.go b/internal/service/elasticache/validate_test.go index 609b7e9f561..2a5891b6476 100644 --- a/internal/service/elasticache/validate_test.go +++ b/internal/service/elasticache/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( diff --git a/internal/service/elasticache/wait.go b/internal/service/elasticache/wait.go index a63d8862de2..bde501cac98 100644 --- a/internal/service/elasticache/wait.go +++ b/internal/service/elasticache/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticache import ( diff --git a/internal/service/elasticbeanstalk/application.go b/internal/service/elasticbeanstalk/application.go index 4f53134b50c..37a2ae7ac17 100644 --- a/internal/service/elasticbeanstalk/application.go +++ b/internal/service/elasticbeanstalk/application.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( @@ -82,13 +85,13 @@ func ResourceApplication() *schema.Resource { func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) name := d.Get("name").(string) input := &elasticbeanstalk.CreateApplicationInput{ ApplicationName: aws.String(name), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.CreateApplicationWithContext(ctx, input) @@ -125,7 +128,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) app, err := FindApplicationByName(ctx, conn, d.Id()) @@ -151,7 +154,7 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) if d.HasChange("description") { input := &elasticbeanstalk.UpdateApplicationInput{ @@ -192,7 +195,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, 
meta func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) log.Printf("[DEBUG] Deleting Elastic Beanstalk Application: %s", d.Id()) _, err := conn.DeleteApplicationWithContext(ctx, &elasticbeanstalk.DeleteApplicationInput{ diff --git a/internal/service/elasticbeanstalk/application_data_source.go b/internal/service/elasticbeanstalk/application_data_source.go index b8fca5d261a..bec09bab0ac 100644 --- a/internal/service/elasticbeanstalk/application_data_source.go +++ b/internal/service/elasticbeanstalk/application_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( @@ -57,7 +60,7 @@ func DataSourceApplication() *schema.Resource { func dataSourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) name := d.Get("name").(string) app, err := FindApplicationByName(ctx, conn, name) diff --git a/internal/service/elasticbeanstalk/application_data_source_test.go b/internal/service/elasticbeanstalk/application_data_source_test.go index e311cc689d0..6c31f579f22 100644 --- a/internal/service/elasticbeanstalk/application_data_source_test.go +++ b/internal/service/elasticbeanstalk/application_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( diff --git a/internal/service/elasticbeanstalk/application_test.go b/internal/service/elasticbeanstalk/application_test.go index a45399b9b09..76ff2e30e68 100644 --- a/internal/service/elasticbeanstalk/application_test.go +++ b/internal/service/elasticbeanstalk/application_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( @@ -223,7 +226,7 @@ func TestAccElasticBeanstalkApplication_appVersionLifecycle(t *testing.T) { func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elastic_beanstalk_application" { @@ -258,7 +261,7 @@ func testAccCheckApplicationExists(ctx context.Context, n string, v *elasticbean return fmt.Errorf("No Elastic Beanstalk Application ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) output, err := tfelasticbeanstalk.FindApplicationByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/elasticbeanstalk/application_version.go b/internal/service/elasticbeanstalk/application_version.go index 6187cf5b2cd..5d4ea640320 100644 --- a/internal/service/elasticbeanstalk/application_version.go +++ b/internal/service/elasticbeanstalk/application_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( @@ -70,7 +73,7 @@ func ResourceApplicationVersion() *schema.Resource { func resourceApplicationVersionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) application := d.Get("application").(string) description := d.Get("description").(string) @@ -87,7 +90,7 @@ func resourceApplicationVersionCreate(ctx context.Context, d *schema.ResourceDat ApplicationName: aws.String(application), Description: aws.String(description), SourceBundle: &s3Location, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VersionLabel: aws.String(name), } @@ -103,7 +106,7 @@ func resourceApplicationVersionCreate(ctx context.Context, d *schema.ResourceDat func resourceApplicationVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) resp, err := conn.DescribeApplicationVersionsWithContext(ctx, &elasticbeanstalk.DescribeApplicationVersionsInput{ ApplicationName: aws.String(d.Get("application").(string)), @@ -133,7 +136,7 @@ func resourceApplicationVersionRead(ctx context.Context, d *schema.ResourceData, func resourceApplicationVersionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) if d.HasChange("description") { if err := resourceApplicationVersionDescriptionUpdate(ctx, conn, d); err != nil { @@ -146,7 +149,7 @@ func resourceApplicationVersionUpdate(ctx context.Context, d *schema.ResourceDat func resourceApplicationVersionDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) application := d.Get("application").(string) name := d.Id() diff --git a/internal/service/elasticbeanstalk/application_version_test.go b/internal/service/elasticbeanstalk/application_version_test.go index 2cc5d6a742f..dbbf19a53e9 100644 --- a/internal/service/elasticbeanstalk/application_version_test.go +++ b/internal/service/elasticbeanstalk/application_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( @@ -112,7 +115,7 @@ func TestAccElasticBeanstalkApplicationVersion_BeanstalkApp_tags(t *testing.T) { func testAccCheckApplicationVersionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elastic_beanstalk_application_version" { @@ -151,7 +154,7 @@ func testAccCheckApplicationVersionExists(ctx context.Context, n string, app *el return fmt.Errorf("Elastic Beanstalk Application Version is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) describeApplicationVersionOpts := &elasticbeanstalk.DescribeApplicationVersionsInput{ ApplicationName: aws.String(rs.Primary.Attributes["application"]), VersionLabels: []*string{aws.String(rs.Primary.ID)}, diff --git a/internal/service/elasticbeanstalk/configuration_template.go b/internal/service/elasticbeanstalk/configuration_template.go index 527dc3a6f1c..6bd163a333c 100644 --- a/internal/service/elasticbeanstalk/configuration_template.go +++ 
b/internal/service/elasticbeanstalk/configuration_template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( @@ -61,7 +64,7 @@ func ResourceConfigurationTemplate() *schema.Resource { func resourceConfigurationTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) name := d.Get("name").(string) input := &elasticbeanstalk.CreateConfigurationTemplateInput{ @@ -95,7 +98,7 @@ func resourceConfigurationTemplateCreate(ctx context.Context, d *schema.Resource func resourceConfigurationTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) settings, err := FindConfigurationSettingsByTwoPartKey(ctx, conn, d.Get("application").(string), d.Id()) @@ -119,7 +122,7 @@ func resourceConfigurationTemplateRead(ctx context.Context, d *schema.ResourceDa func resourceConfigurationTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) if d.HasChange("description") { input := &elasticbeanstalk.UpdateConfigurationTemplateInput{ @@ -194,7 +197,7 @@ func resourceConfigurationTemplateUpdate(ctx context.Context, d *schema.Resource func resourceConfigurationTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) log.Printf("[INFO] Deleting Elastic Beanstalk Configuration Template: %s", d.Id()) 
_, err := conn.DeleteConfigurationTemplateWithContext(ctx, &elasticbeanstalk.DeleteConfigurationTemplateInput{ diff --git a/internal/service/elasticbeanstalk/configuration_template_test.go b/internal/service/elasticbeanstalk/configuration_template_test.go index 162ff73ac76..40bf27c30b1 100644 --- a/internal/service/elasticbeanstalk/configuration_template_test.go +++ b/internal/service/elasticbeanstalk/configuration_template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( @@ -135,7 +138,7 @@ func TestAccElasticBeanstalkConfigurationTemplate_settings(t *testing.T) { func testAccCheckConfigurationTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elastic_beanstalk_configuration_template" { @@ -170,7 +173,7 @@ func testAccCheckConfigurationTemplateExists(ctx context.Context, n string, v *e return fmt.Errorf("No Elastic Beanstalk Configuration Template ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) output, err := tfelasticbeanstalk.FindConfigurationSettingsByTwoPartKey(ctx, conn, rs.Primary.Attributes["application"], rs.Primary.ID) diff --git a/internal/service/elasticbeanstalk/environment.go b/internal/service/elasticbeanstalk/environment.go index 5079f51a987..543bb447d93 100644 --- a/internal/service/elasticbeanstalk/environment.go +++ b/internal/service/elasticbeanstalk/environment.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "log" @@ -216,14 +219,14 @@ func ResourceEnvironment() *schema.Resource { func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) name := d.Get("name").(string) input := &elasticbeanstalk.CreateEnvironmentInput{ ApplicationName: aws.String(d.Get("application").(string)), EnvironmentName: aws.String(name), OptionSettings: extractOptionSettings(d.Get("setting").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v := d.Get("description"); v.(string) != "" { @@ -304,7 +307,7 @@ func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta func resourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) env, err := FindEnvironmentByID(ctx, conn, d.Id()) @@ -395,7 +398,7 @@ func resourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta i if value := aws.StringValue(optionSetting.Value); value != "" { switch aws.StringValue(optionSetting.OptionName) { case "SecurityGroups": - m["value"] = dropGeneratedSecurityGroup(ctx, meta.(*conns.AWSClient).EC2Conn(), value) + m["value"] = dropGeneratedSecurityGroup(ctx, meta.(*conns.AWSClient).EC2Conn(ctx), value) case "Subnets", "ELBSubnets": m["value"] = sortValues(value) default: @@ -432,7 +435,7 @@ func resourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta i func resourceEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var 
diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) waitForReadyTimeOut, _, err := sdktypes.Duration(d.Get("wait_for_ready_timeout").(string)).Value() @@ -566,7 +569,7 @@ func resourceEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceEnvironmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) waitForReadyTimeOut, _, err := sdktypes.Duration(d.Get("wait_for_ready_timeout").(string)).Value() diff --git a/internal/service/elasticbeanstalk/environment_migrate.go b/internal/service/elasticbeanstalk/environment_migrate.go index 296f4fcf12b..779b322fedd 100644 --- a/internal/service/elasticbeanstalk/environment_migrate.go +++ b/internal/service/elasticbeanstalk/environment_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( diff --git a/internal/service/elasticbeanstalk/environment_migrate_test.go b/internal/service/elasticbeanstalk/environment_migrate_test.go index 05115368395..f73d6d4208c 100644 --- a/internal/service/elasticbeanstalk/environment_migrate_test.go +++ b/internal/service/elasticbeanstalk/environment_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( diff --git a/internal/service/elasticbeanstalk/environment_test.go b/internal/service/elasticbeanstalk/environment_test.go index a36bba5f78b..c4c82b74a95 100644 --- a/internal/service/elasticbeanstalk/environment_test.go +++ b/internal/service/elasticbeanstalk/environment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( @@ -468,7 +471,7 @@ func TestAccElasticBeanstalkEnvironment_platformARN(t *testing.T) { func testAccCheckEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elastic_beanstalk_environment" { @@ -503,7 +506,7 @@ func testAccCheckEnvironmentExists(ctx context.Context, n string, v *elasticbean return fmt.Errorf("No Elastic Beanstalk Environment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) output, err := tfelasticbeanstalk.FindEnvironmentByID(ctx, conn, rs.Primary.ID) @@ -522,7 +525,7 @@ func testAccVerifyConfig(ctx context.Context, env *elasticbeanstalk.EnvironmentD if env == nil { return fmt.Errorf("Nil environment in testAccVerifyConfig") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) resp, err := conn.DescribeConfigurationSettingsWithContext(ctx, &elasticbeanstalk.DescribeConfigurationSettingsInput{ ApplicationName: env.ApplicationName, @@ -571,7 +574,7 @@ func testAccVerifyConfig(ctx context.Context, env *elasticbeanstalk.EnvironmentD func testAccCheckEnvironmentConfigValue(ctx context.Context, n string, expectedValue string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticBeanstalkConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/elasticbeanstalk/flex.go b/internal/service/elasticbeanstalk/flex.go index 
efb3c0bea27..b0c8da80fef 100644 --- a/internal/service/elasticbeanstalk/flex.go +++ b/internal/service/elasticbeanstalk/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( diff --git a/internal/service/elasticbeanstalk/generate.go b/internal/service/elasticbeanstalk/generate.go index 17c6b6c5fa5..013b79a31d7 100644 --- a/internal/service/elasticbeanstalk/generate.go +++ b/internal/service/elasticbeanstalk/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOutTagsElem=ResourceTags -ServiceTagsSlice -TagOp=UpdateTagsForResource -TagInTagsElem=TagsToAdd -UntagOp=UpdateTagsForResource -UntagInTagsElem=TagsToRemove -UpdateTags //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeEnvironments +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package elasticbeanstalk diff --git a/internal/service/elasticbeanstalk/hosted_zone_data_source.go b/internal/service/elasticbeanstalk/hosted_zone_data_source.go index de95b69832f..4e38cd7e44a 100644 --- a/internal/service/elasticbeanstalk/hosted_zone_data_source.go +++ b/internal/service/elasticbeanstalk/hosted_zone_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( diff --git a/internal/service/elasticbeanstalk/hosted_zone_data_source_test.go b/internal/service/elasticbeanstalk/hosted_zone_data_source_test.go index 604c8c1ef7b..bd4f5acc260 100644 --- a/internal/service/elasticbeanstalk/hosted_zone_data_source_test.go +++ b/internal/service/elasticbeanstalk/hosted_zone_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( diff --git a/internal/service/elasticbeanstalk/service_package_gen.go b/internal/service/elasticbeanstalk/service_package_gen.go index ea025585700..d469473a98c 100644 --- a/internal/service/elasticbeanstalk/service_package_gen.go +++ b/internal/service/elasticbeanstalk/service_package_gen.go @@ -5,6 +5,10 @@ package elasticbeanstalk import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + elasticbeanstalk_sdkv1 "github.com/aws/aws-sdk-go/service/elasticbeanstalk" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -73,4 +77,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ElasticBeanstalk } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*elasticbeanstalk_sdkv1.ElasticBeanstalk, error) { + sess := config["session"].(*session_sdkv1.Session) + + return elasticbeanstalk_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/elasticbeanstalk/solution_stack_data_source.go b/internal/service/elasticbeanstalk/solution_stack_data_source.go index 94daca38504..3ac9b5112e7 100644 --- a/internal/service/elasticbeanstalk/solution_stack_data_source.go +++ b/internal/service/elasticbeanstalk/solution_stack_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk import ( @@ -42,7 +45,7 @@ func DataSourceSolutionStack() *schema.Resource { // dataSourceSolutionStackRead performs the API lookup. func dataSourceSolutionStackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticBeanstalkConn() + conn := meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx) nameRegex := d.Get("name_regex") diff --git a/internal/service/elasticbeanstalk/solution_stack_data_source_test.go b/internal/service/elasticbeanstalk/solution_stack_data_source_test.go index c5c2f4a12b1..301a46be170 100644 --- a/internal/service/elasticbeanstalk/solution_stack_data_source_test.go +++ b/internal/service/elasticbeanstalk/solution_stack_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticbeanstalk_test import ( diff --git a/internal/service/elasticbeanstalk/sweep.go b/internal/service/elasticbeanstalk/sweep.go index e01086c3dae..f45a57684f1 100644 --- a/internal/service/elasticbeanstalk/sweep.go +++ b/internal/service/elasticbeanstalk/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -31,11 +33,11 @@ func init() { func sweepApplications(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ElasticBeanstalkConn() + conn := client.ElasticBeanstalkConn(ctx) resp, err := conn.DescribeApplicationsWithContext(ctx, &elasticbeanstalk.DescribeApplicationsInput{}) if err != nil { @@ -72,11 +74,11 @@ func sweepApplications(region string) error { func sweepEnvironments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElasticBeanstalkConn() + conn := client.ElasticBeanstalkConn(ctx) input := &elasticbeanstalk.DescribeEnvironmentsInput{ IncludeDeleted: aws.Bool(false), } @@ -109,7 +111,7 @@ func sweepEnvironments(region string) error { return fmt.Errorf("listing Elastic Beanstalk Environments (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("sweeping Elastic Beanstalk Environments (%s): %w", region, err) diff --git a/internal/service/elasticbeanstalk/tags_gen.go b/internal/service/elasticbeanstalk/tags_gen.go index 38d887475ea..7f1f6980b46 100644 --- a/internal/service/elasticbeanstalk/tags_gen.go +++ 
b/internal/service/elasticbeanstalk/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists elasticbeanstalk service tags. +// listTags lists elasticbeanstalk service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn elasticbeanstalkiface.ElasticBeanstalkAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn elasticbeanstalkiface.ElasticBeanstalkAPI, identifier string) (tftags.KeyValueTags, error) { input := &elasticbeanstalk.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn elasticbeanstalkiface.ElasticBeanstalkAP // ListTags lists elasticbeanstalk service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ElasticBeanstalkConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*elasticbeanstalk.Tag) tftags.KeyV return tftags.New(ctx, m) } -// GetTagsIn returns elasticbeanstalk service tags from Context. +// getTagsIn returns elasticbeanstalk service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*elasticbeanstalk.Tag { +func getTagsIn(ctx context.Context) []*elasticbeanstalk.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*elasticbeanstalk.Tag { return nil } -// SetTagsOut sets elasticbeanstalk service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*elasticbeanstalk.Tag) { +// setTagsOut sets elasticbeanstalk service tags in Context. +func setTagsOut(ctx context.Context, tags []*elasticbeanstalk.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates elasticbeanstalk service tags. +// updateTags updates elasticbeanstalk service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn elasticbeanstalkiface.ElasticBeanstalkAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn elasticbeanstalkiface.ElasticBeanstalkAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) removedTags := oldTags.Removed(newTags) @@ -135,5 +135,5 @@ func UpdateTags(ctx context.Context, conn elasticbeanstalkiface.ElasticBeanstalk // UpdateTags updates elasticbeanstalk service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ElasticBeanstalkConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ElasticBeanstalkConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/elasticsearch/acc_test.go b/internal/service/elasticsearch/acc_test.go index 76009068d76..2424584bdaf 100644 --- a/internal/service/elasticsearch/acc_test.go +++ b/internal/service/elasticsearch/acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticsearch_test import ( diff --git a/internal/service/elasticsearch/consts.go b/internal/service/elasticsearch/consts.go index de1ab351ac7..1845f4a1f5a 100644 --- a/internal/service/elasticsearch/consts.go +++ b/internal/service/elasticsearch/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( diff --git a/internal/service/elasticsearch/domain.go b/internal/service/elasticsearch/domain.go index 7a9f61ca1ef..3bd96669e00 100644 --- a/internal/service/elasticsearch/domain.go +++ b/internal/service/elasticsearch/domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( @@ -50,7 +53,7 @@ func ResourceDomain() *schema.Resource { newVersion := d.Get("elasticsearch_version").(string) domainName := d.Get("domain_name").(string) - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) resp, err := conn.GetCompatibleElasticsearchVersionsWithContext(ctx, &elasticsearch.GetCompatibleElasticsearchVersionsInput{ DomainName: aws.String(domainName), }) @@ -536,7 +539,7 @@ func ResourceDomain() *schema.Resource { func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) // The API doesn't check for duplicate names // so w/out this check Create would act as upsert @@ -551,7 +554,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte input := &elasticsearch.CreateElasticsearchDomainInput{ DomainName: aws.String(name), ElasticsearchVersion: aws.String(d.Get("elasticsearch_version").(string)), - TagList: GetTagsIn(ctx), + TagList: getTagsIn(ctx), } if v, ok := d.GetOk("access_policies"); ok { @@ -713,7 +716,7 @@ func 
resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) name := d.Get("domain_name").(string) ds, err := FindDomainByName(ctx, conn, name) @@ -840,7 +843,7 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) if d.HasChangesExcept("tags", "tags_all") { name := d.Get("domain_name").(string) @@ -982,7 +985,7 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) name := d.Get("domain_name").(string) @@ -1007,7 +1010,7 @@ func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta inte } func resourceDomainImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) d.Set("domain_name", d.Id()) diff --git a/internal/service/elasticsearch/domain_data_source.go b/internal/service/elasticsearch/domain_data_source.go index a4f97fdb8ad..bdf3e37d941 100644 --- a/internal/service/elasticsearch/domain_data_source.go +++ b/internal/service/elasticsearch/domain_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( @@ -343,7 +346,7 @@ func DataSourceDomain() *schema.Resource { func dataSourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -453,7 +456,7 @@ func dataSourceDomainRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("processing", ds.Processing) - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Elasticsearch Cluster (%s): %s", d.Id(), err) diff --git a/internal/service/elasticsearch/domain_data_source_test.go b/internal/service/elasticsearch/domain_data_source_test.go index 5008c45482b..8c82fbb7112 100644 --- a/internal/service/elasticsearch/domain_data_source_test.go +++ b/internal/service/elasticsearch/domain_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch_test import ( diff --git a/internal/service/elasticsearch/domain_policy.go b/internal/service/elasticsearch/domain_policy.go index 5bcc200a7bf..33ad70f3e12 100644 --- a/internal/service/elasticsearch/domain_policy.go +++ b/internal/service/elasticsearch/domain_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( @@ -52,7 +55,7 @@ func ResourceDomainPolicy() *schema.Resource { func resourceDomainPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -79,7 +82,7 @@ func resourceDomainPolicyRead(ctx context.Context, d *schema.ResourceData, meta func resourceDomainPolicyUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) domainName := d.Get("domain_name").(string) policy, err := structure.NormalizeJsonString(d.Get("access_policies").(string)) @@ -107,7 +110,7 @@ func resourceDomainPolicyUpsert(ctx context.Context, d *schema.ResourceData, met func resourceDomainPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) _, err := conn.UpdateElasticsearchDomainConfigWithContext(ctx, &elasticsearch.UpdateElasticsearchDomainConfigInput{ DomainName: aws.String(d.Get("domain_name").(string)), diff --git a/internal/service/elasticsearch/domain_policy_test.go b/internal/service/elasticsearch/domain_policy_test.go index 2ff434b3de8..10d60481e13 100644 --- a/internal/service/elasticsearch/domain_policy_test.go +++ b/internal/service/elasticsearch/domain_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticsearch_test import ( diff --git a/internal/service/elasticsearch/domain_saml_options.go b/internal/service/elasticsearch/domain_saml_options.go index 4bb1644127e..c8a732ecdcb 100644 --- a/internal/service/elasticsearch/domain_saml_options.go +++ b/internal/service/elasticsearch/domain_saml_options.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( @@ -115,7 +118,7 @@ func domainSamlOptionsDiffSupress(k, old, new string, d *schema.ResourceData) bo func resourceDomainSAMLOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -142,7 +145,7 @@ func resourceDomainSAMLOptionsRead(ctx context.Context, d *schema.ResourceData, func resourceDomainSAMLOptionsPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) domainName := d.Get("domain_name").(string) config := elasticsearch.AdvancedSecurityOptionsInput{} @@ -170,7 +173,7 @@ func resourceDomainSAMLOptionsPut(ctx context.Context, d *schema.ResourceData, m func resourceDomainSAMLOptionsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticsearchConn() + conn := meta.(*conns.AWSClient).ElasticsearchConn(ctx) domainName := d.Get("domain_name").(string) config := elasticsearch.AdvancedSecurityOptionsInput{} diff --git a/internal/service/elasticsearch/domain_saml_options_test.go b/internal/service/elasticsearch/domain_saml_options_test.go index 7a63930b09d..f32c8e4e98a 100644 --- 
a/internal/service/elasticsearch/domain_saml_options_test.go +++ b/internal/service/elasticsearch/domain_saml_options_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch_test import ( @@ -182,7 +185,7 @@ func testAccCheckESDomainSAMLOptionsDestroy(ctx context.Context) resource.TestCh continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) _, err := tfelasticsearch.FindDomainByName(ctx, conn, rs.Primary.Attributes["domain_name"]) if tfresource.NotFound(err) { @@ -216,7 +219,7 @@ func testAccCheckESDomainSAMLOptions(ctx context.Context, esResource string, sam return fmt.Errorf("Not found: %s", samlOptionsResource) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) _, err := tfelasticsearch.FindDomainByName(ctx, conn, options.Primary.Attributes["domain_name"]) return err diff --git a/internal/service/elasticsearch/domain_structure.go b/internal/service/elasticsearch/domain_structure.go index f8b605162f7..84b39e8d7a6 100644 --- a/internal/service/elasticsearch/domain_structure.go +++ b/internal/service/elasticsearch/domain_structure.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( diff --git a/internal/service/elasticsearch/domain_test.go b/internal/service/elasticsearch/domain_test.go index d82d1db5123..872494da996 100644 --- a/internal/service/elasticsearch/domain_test.go +++ b/internal/service/elasticsearch/domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticsearch_test import ( @@ -357,7 +360,7 @@ func TestAccElasticsearchDomain_duplicate(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, elasticsearch.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) _, err := conn.DeleteElasticsearchDomainWithContext(ctx, &elasticsearch.DeleteElasticsearchDomainInput{ DomainName: aws.String(rName), }) @@ -367,7 +370,7 @@ func TestAccElasticsearchDomain_duplicate(t *testing.T) { { PreConfig: func() { // Create duplicate - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) _, err := conn.CreateElasticsearchDomainWithContext(ctx, &elasticsearch.CreateElasticsearchDomainInput{ DomainName: aws.String(rName), EBSOptions: &elasticsearch.EBSOptions{ @@ -1714,7 +1717,7 @@ func testAccCheckDomainExists(ctx context.Context, n string, domain *elasticsear return fmt.Errorf("No ES Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) resp, err := tfelasticsearch.FindDomainByName(ctx, conn, rs.Primary.Attributes["domain_name"]) if err != nil { return fmt.Errorf("Error describing domain: %s", err.Error()) @@ -1734,7 +1737,7 @@ func testAccCheckDomainExists(ctx context.Context, n string, domain *elasticsear func testAccCheckDomainNotRecreated(domain1, domain2 *elasticsearch.ElasticsearchDomainStatus) resource.TestCheckFunc { return func(s *terraform.State) error { /* - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) ic, err := 
conn.DescribeElasticsearchDomainConfig(&elasticsearch.DescribeElasticsearchDomainConfigInput{ DomainName: domain1.DomainName, @@ -1772,7 +1775,7 @@ func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticsearchConn(ctx) _, err := tfelasticsearch.FindDomainByName(ctx, conn, rs.Primary.Attributes["domain_name"]) if tfresource.NotFound(err) { @@ -3026,7 +3029,7 @@ resource "aws_elasticsearch_domain" "test" { } func testAccPreCheckCognitoIdentityProvider(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) input := &cognitoidentityprovider.ListUserPoolsInput{ MaxResults: aws.Int64(1), diff --git a/internal/service/elasticsearch/find.go b/internal/service/elasticsearch/find.go index aaf67806021..41bf9b858de 100644 --- a/internal/service/elasticsearch/find.go +++ b/internal/service/elasticsearch/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( diff --git a/internal/service/elasticsearch/flex.go b/internal/service/elasticsearch/flex.go index b0d1a5b0295..85038af5b04 100644 --- a/internal/service/elasticsearch/flex.go +++ b/internal/service/elasticsearch/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( diff --git a/internal/service/elasticsearch/generate.go b/internal/service/elasticsearch/generate.go index 7a05a5de5ab..362e91cd0ae 100644 --- a/internal/service/elasticsearch/generate.go +++ b/internal/service/elasticsearch/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsInIDElem=ARN -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=ARN -TagInTagsElem=TagList -UntagOp=RemoveTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package elasticsearch diff --git a/internal/service/elasticsearch/service_package_gen.go b/internal/service/elasticsearch/service_package_gen.go index 1ed6cafb2ba..ffa1d40cafc 100644 --- a/internal/service/elasticsearch/service_package_gen.go +++ b/internal/service/elasticsearch/service_package_gen.go @@ -5,6 +5,10 @@ package elasticsearch import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + elasticsearchservice_sdkv1 "github.com/aws/aws-sdk-go/service/elasticsearchservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -53,4 +57,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Elasticsearch } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*elasticsearchservice_sdkv1.ElasticsearchService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return elasticsearchservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/elasticsearch/status.go b/internal/service/elasticsearch/status.go index dd51fff6506..5ce295a7c77 100644 --- a/internal/service/elasticsearch/status.go +++ b/internal/service/elasticsearch/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( diff --git a/internal/service/elasticsearch/sweep.go b/internal/service/elasticsearch/sweep.go index b3b93a5968d..4e8ba7123fa 100644 --- a/internal/service/elasticsearch/sweep.go +++ b/internal/service/elasticsearch/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,13 +26,13 @@ func init() { func sweepDomains(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ElasticsearchConn() + conn := client.ElasticsearchConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -88,7 +90,7 @@ func sweepDomains(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Elasticsearch Domains for %s: %w", region, err)) } diff --git a/internal/service/elasticsearch/tags_gen.go b/internal/service/elasticsearch/tags_gen.go index 2deccb8503f..70f310da133 100644 --- a/internal/service/elasticsearch/tags_gen.go +++ b/internal/service/elasticsearch/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists elasticsearch service tags. +// listTags lists elasticsearch service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn elasticsearchserviceiface.ElasticsearchServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn elasticsearchserviceiface.ElasticsearchServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &elasticsearchservice.ListTagsInput{ ARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn elasticsearchserviceiface.ElasticsearchS // ListTags lists elasticsearch service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ElasticsearchConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ElasticsearchConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*elasticsearchservice.Tag) tftags. return tftags.New(ctx, m) } -// GetTagsIn returns elasticsearch service tags from Context. +// getTagsIn returns elasticsearch service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*elasticsearchservice.Tag { +func getTagsIn(ctx context.Context) []*elasticsearchservice.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*elasticsearchservice.Tag { return nil } -// SetTagsOut sets elasticsearch service tags in Context. -func SetTagsOut(ctx context.Context, tags []*elasticsearchservice.Tag) { +// setTagsOut sets elasticsearch service tags in Context. +func setTagsOut(ctx context.Context, tags []*elasticsearchservice.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates elasticsearch service tags. 
+// updateTags updates elasticsearch service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn elasticsearchserviceiface.ElasticsearchServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn elasticsearchserviceiface.ElasticsearchServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn elasticsearchserviceiface.Elasticsearc // UpdateTags updates elasticsearch service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ElasticsearchConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ElasticsearchConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/elasticsearch/wait.go b/internal/service/elasticsearch/wait.go index 82b7897fe95..6d7d6326e06 100644 --- a/internal/service/elasticsearch/wait.go +++ b/internal/service/elasticsearch/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elasticsearch import ( @@ -55,7 +58,7 @@ func WaitForDomainCreation(ctx context.Context, conn *elasticsearch.Elasticsearc if tfresource.TimedOut(err) { out, err = FindDomainByName(ctx, conn, domainName) if err != nil { - return fmt.Errorf("Error describing Elasticsearch domain: %w", err) + return fmt.Errorf("describing Elasticsearch domain: %w", err) } if !aws.BoolValue(out.Processing) && (out.Endpoint != nil || out.Endpoints != nil) { return nil @@ -84,7 +87,7 @@ func waitForDomainUpdate(ctx context.Context, conn *elasticsearch.ElasticsearchS if tfresource.TimedOut(err) { out, err = FindDomainByName(ctx, conn, domainName) if err != nil { - return fmt.Errorf("Error describing Elasticsearch domain: %w", err) + return fmt.Errorf("describing Elasticsearch domain: %w", err) } if !aws.BoolValue(out.Processing) { return nil @@ -119,7 +122,7 @@ func waitForDomainDelete(ctx context.Context, conn *elasticsearch.ElasticsearchS if tfresource.NotFound(err) { return nil } - return fmt.Errorf("Error describing Elasticsearch domain: %s", err) + return fmt.Errorf("describing Elasticsearch domain: %s", err) } if out != nil && !aws.BoolValue(out.Processing) { return nil diff --git a/internal/service/elastictranscoder/generate.go b/internal/service/elastictranscoder/generate.go new file mode 100644 index 00000000000..f70c4d3760b --- /dev/null +++ b/internal/service/elastictranscoder/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package elastictranscoder diff --git a/internal/service/elastictranscoder/pipeline.go b/internal/service/elastictranscoder/pipeline.go index b30ff6ea6e3..b577e6223dd 100644 --- a/internal/service/elastictranscoder/pipeline.go +++ b/internal/service/elastictranscoder/pipeline.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elastictranscoder import ( @@ -232,7 +235,7 @@ func ResourcePipeline() *schema.Resource { func resourcePipelineCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) req := &elastictranscoder.CreatePipelineInput{ AwsKmsKeyArn: aws.String(d.Get("aws_kms_key_arn").(string)), @@ -409,7 +412,7 @@ func flattenETPermList(perms []*elastictranscoder.Permission) []map[string]inter func resourcePipelineUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) req := &elastictranscoder.UpdatePipelineInput{ Id: aws.String(d.Id()), @@ -459,7 +462,7 @@ func resourcePipelineUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) resp, err := conn.ReadPipelineWithContext(ctx, &elastictranscoder.ReadPipelineInput{ Id: aws.String(d.Id()), @@ -525,7 +528,7 @@ func resourcePipelineRead(ctx context.Context, d *schema.ResourceData, meta inte func resourcePipelineDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) log.Printf("[DEBUG] Elastic Transcoder Delete Pipeline: %s", d.Id()) _, err := conn.DeletePipelineWithContext(ctx, &elastictranscoder.DeletePipelineInput{ diff --git a/internal/service/elastictranscoder/pipeline_test.go b/internal/service/elastictranscoder/pipeline_test.go index bb0b5bc3d32..2b1d8c19cb4 100644 --- a/internal/service/elastictranscoder/pipeline_test.go +++ b/internal/service/elastictranscoder/pipeline_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elastictranscoder_test import ( @@ -243,7 +246,7 @@ func testAccCheckPipelineExists(ctx context.Context, n string, res *elastictrans return fmt.Errorf("No Pipeline ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn(ctx) out, err := conn.ReadPipelineWithContext(ctx, &elastictranscoder.ReadPipelineInput{ Id: aws.String(rs.Primary.ID), @@ -261,7 +264,7 @@ func testAccCheckPipelineExists(ctx context.Context, n string, res *elastictrans func testAccCheckPipelineDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elastictranscoder_pipline" { @@ -287,7 +290,7 @@ func testAccCheckPipelineDestroy(ctx context.Context) resource.TestCheckFunc { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn(ctx) input := &elastictranscoder.ListPipelinesInput{} diff --git a/internal/service/elastictranscoder/preset.go 
b/internal/service/elastictranscoder/preset.go index 499e12fcd4a..266d6941061 100644 --- a/internal/service/elastictranscoder/preset.go +++ b/internal/service/elastictranscoder/preset.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elastictranscoder import ( @@ -504,7 +507,7 @@ func ResourcePreset() *schema.Resource { func resourcePresetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) req := &elastictranscoder.CreatePresetInput{ Audio: expandETAudioParams(d), @@ -751,7 +754,7 @@ func expandETVideoWatermarks(d *schema.ResourceData) []*elastictranscoder.Preset func resourcePresetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) resp, err := conn.ReadPresetWithContext(ctx, &elastictranscoder.ReadPresetInput{ Id: aws.String(d.Id()), @@ -918,7 +921,7 @@ func flattenETWatermarks(watermarks []*elastictranscoder.PresetWatermark) []map[ func resourcePresetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ElasticTranscoderConn() + conn := meta.(*conns.AWSClient).ElasticTranscoderConn(ctx) log.Printf("[DEBUG] Elastic Transcoder Delete Preset: %s", d.Id()) _, err := conn.DeletePresetWithContext(ctx, &elastictranscoder.DeletePresetInput{ diff --git a/internal/service/elastictranscoder/preset_test.go b/internal/service/elastictranscoder/preset_test.go index d562304d9b7..c6b75ceb3a0 100644 --- a/internal/service/elastictranscoder/preset_test.go +++ b/internal/service/elastictranscoder/preset_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elastictranscoder_test import ( @@ -263,7 +266,7 @@ func TestAccElasticTranscoderPreset_Video_frameRate(t *testing.T) { func testAccCheckPresetExists(ctx context.Context, name string, preset *elastictranscoder.Preset) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { @@ -289,7 +292,7 @@ func testAccCheckPresetExists(ctx context.Context, name string, preset *elastict func testAccCheckPresetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ElasticTranscoderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_elastictranscoder_preset" { diff --git a/internal/service/elastictranscoder/service_package_gen.go b/internal/service/elastictranscoder/service_package_gen.go index 3532c5c2cbd..18c4a859477 100644 --- a/internal/service/elastictranscoder/service_package_gen.go +++ b/internal/service/elastictranscoder/service_package_gen.go @@ -5,6 +5,10 @@ package elastictranscoder import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + elastictranscoder_sdkv1 "github.com/aws/aws-sdk-go/service/elastictranscoder" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ElasticTranscoder } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*elastictranscoder_sdkv1.ElasticTranscoder, error) { + sess := config["session"].(*session_sdkv1.Session) + + return elastictranscoder_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/elb/app_cookie_stickiness_policy.go b/internal/service/elb/app_cookie_stickiness_policy.go index 6a888f756c4..518154393de 100644 --- a/internal/service/elb/app_cookie_stickiness_policy.go +++ b/internal/service/elb/app_cookie_stickiness_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -65,7 +68,7 @@ func ResourceAppCookieStickinessPolicy() *schema.Resource { func resourceAppCookieStickinessPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName := d.Get("load_balancer").(string) lbPort := d.Get("lb_port").(int) @@ -102,7 +105,7 @@ func resourceAppCookieStickinessPolicyCreate(ctx context.Context, d *schema.Reso func resourceAppCookieStickinessPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, policyName, err := AppCookieStickinessPolicyParseResourceID(d.Id()) @@ -136,7 +139,7 @@ func resourceAppCookieStickinessPolicyRead(ctx context.Context, d *schema.Resour func resourceAppCookieStickinessPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, 
policyName, err := AppCookieStickinessPolicyParseResourceID(d.Id()) diff --git a/internal/service/elb/app_cookie_stickiness_policy_test.go b/internal/service/elb/app_cookie_stickiness_policy_test.go index 46f0181f030..70c5ecf2609 100644 --- a/internal/service/elb/app_cookie_stickiness_policy_test.go +++ b/internal/service/elb/app_cookie_stickiness_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( @@ -100,7 +103,7 @@ func TestAccELBAppCookieStickinessPolicy_Disappears_elb(t *testing.T) { func testAccCheckAppCookieStickinessPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_app_cookie_stickiness_policy" { @@ -147,7 +150,7 @@ func testAccCheckAppCookieStickinessPolicyExists(ctx context.Context, n string) return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) _, err = tfelb.FindLoadBalancerListenerPolicyByThreePartKey(ctx, conn, lbName, lbPort, policyName) diff --git a/internal/service/elb/attachment.go b/internal/service/elb/attachment.go index 36a225e6240..4dad85856f7 100644 --- a/internal/service/elb/attachment.go +++ b/internal/service/elb/attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -43,7 +46,7 @@ func ResourceAttachment() *schema.Resource { func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) elbName := d.Get("elb").(string) instance := d.Get("instance").(string) @@ -59,7 +62,7 @@ func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta _, err := conn.RegisterInstancesWithLoadBalancerWithContext(ctx, ®isterInstancesOpts) if tfawserr.ErrCodeEquals(err, "InvalidTarget") { - return retry.RetryableError(fmt.Errorf("Error attaching instance to ELB, retrying: %s", err)) + return retry.RetryableError(fmt.Errorf("attaching instance to ELB, retrying: %s", err)) } if err != nil { @@ -83,7 +86,7 @@ func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta func resourceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) elbName := d.Get("elb").(string) // only add the instance that was previously defined for this resource @@ -128,7 +131,7 @@ func resourceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta in func resourceAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) elbName := d.Get("elb").(string) instance := d.Get("instance").(string) diff --git a/internal/service/elb/attachment_test.go b/internal/service/elb/attachment_test.go index 9e3f0e83fac..2658c2e19de 100644 --- a/internal/service/elb/attachment_test.go +++ b/internal/service/elb/attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( @@ -62,7 +65,7 @@ func TestAccELBAttachment_drift(t *testing.T) { resourceName := "aws_elb.test" testAccAttachmentConfig_deregInstance := func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) deRegisterInstancesOpts := elb.DeregisterInstancesFromLoadBalancerInput{ LoadBalancerName: conf.LoadBalancerName, diff --git a/internal/service/elb/backend_server_policy.go b/internal/service/elb/backend_server_policy.go index 918a2d2135f..7f84b5e6a18 100644 --- a/internal/service/elb/backend_server_policy.go +++ b/internal/service/elb/backend_server_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -45,7 +48,7 @@ func ResourceBackendServerPolicy() *schema.Resource { func resourceBackendServerPolicySet(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) instancePort := d.Get("instance_port").(int) lbName := d.Get("load_balancer_name").(string) @@ -72,7 +75,7 @@ func resourceBackendServerPolicySet(ctx context.Context, d *schema.ResourceData, func resourceBackendServerPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, instancePort, err := BackendServerPolicyParseResourceID(d.Id()) @@ -101,7 +104,7 @@ func resourceBackendServerPolicyRead(ctx context.Context, d *schema.ResourceData func resourceBackendServerPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, instancePort, err := 
BackendServerPolicyParseResourceID(d.Id()) diff --git a/internal/service/elb/backend_server_policy_test.go b/internal/service/elb/backend_server_policy_test.go index d95b88711a8..3c39e29f7ed 100644 --- a/internal/service/elb/backend_server_policy_test.go +++ b/internal/service/elb/backend_server_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( @@ -112,7 +115,7 @@ func TestAccELBBackendServerPolicy_update(t *testing.T) { func testAccCheckBackendServerPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_load_balancer_backend_policy" { @@ -159,7 +162,7 @@ func testAccCheckBackendServerPolicyExists(ctx context.Context, n string) resour return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) _, err = tfelb.FindLoadBalancerBackendServerPolicyByTwoPartKey(ctx, conn, lbName, instancePort) diff --git a/internal/service/elb/enum.go b/internal/service/elb/enum.go index 824e52048d1..527a0fe0999 100644 --- a/internal/service/elb/enum.go +++ b/internal/service/elb/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb const ( diff --git a/internal/service/elb/flex.go b/internal/service/elb/flex.go index 676525774b2..8ee89ec4c9b 100644 --- a/internal/service/elb/flex.go +++ b/internal/service/elb/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb import ( diff --git a/internal/service/elb/flex_test.go b/internal/service/elb/flex_test.go index 95266c1621d..fa72b9ef0a5 100644 --- a/internal/service/elb/flex_test.go +++ b/internal/service/elb/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb import ( diff --git a/internal/service/elb/generate.go b/internal/service/elb/generate.go index 919aeb0fca3..00321adf62d 100644 --- a/internal/service/elb/generate.go +++ b/internal/service/elb/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=DescribeTags -ListTagsInIDElem=LoadBalancerNames -ListTagsInIDNeedSlice=yes -ListTagsOutTagsElem=TagDescriptions[0].Tags -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=LoadBalancerNames -TagInIDNeedSlice=yes -TagKeyType=TagKeyOnly -UntagOp=RemoveTags -UntagInNeedTagKeyType=yes -UntagInTagsElem=Tags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package elb diff --git a/internal/service/elb/hosted_zone_id_data_source.go b/internal/service/elb/hosted_zone_id_data_source.go index 02cbde5cd7f..6b2a6cf949d 100644 --- a/internal/service/elb/hosted_zone_id_data_source.go +++ b/internal/service/elb/hosted_zone_id_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb import ( diff --git a/internal/service/elb/hosted_zone_id_data_source_test.go b/internal/service/elb/hosted_zone_id_data_source_test.go index 927ae670be8..147f1ff4f20 100644 --- a/internal/service/elb/hosted_zone_id_data_source_test.go +++ b/internal/service/elb/hosted_zone_id_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( diff --git a/internal/service/elb/lb_cookie_stickiness_policy.go b/internal/service/elb/lb_cookie_stickiness_policy.go index c68ffc089a4..e2e25d49f45 100644 --- a/internal/service/elb/lb_cookie_stickiness_policy.go +++ b/internal/service/elb/lb_cookie_stickiness_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -52,7 +55,7 @@ func ResourceCookieStickinessPolicy() *schema.Resource { func resourceCookieStickinessPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName := d.Get("load_balancer").(string) lbPort := d.Get("lb_port").(int) @@ -96,7 +99,7 @@ func resourceCookieStickinessPolicyCreate(ctx context.Context, d *schema.Resourc func resourceCookieStickinessPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, policyName, err := LBCookieStickinessPolicyParseResourceID(d.Id()) @@ -133,7 +136,7 @@ func resourceCookieStickinessPolicyRead(ctx context.Context, d *schema.ResourceD func resourceCookieStickinessPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, policyName, err := LBCookieStickinessPolicyParseResourceID(d.Id()) diff --git a/internal/service/elb/lb_cookie_stickiness_policy_test.go b/internal/service/elb/lb_cookie_stickiness_policy_test.go index f1efe8b3c7d..26b8d5ef861 100644 --- a/internal/service/elb/lb_cookie_stickiness_policy_test.go +++ b/internal/service/elb/lb_cookie_stickiness_policy_test.go @@ -1,3 +1,6 @@ 
+// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( @@ -95,7 +98,7 @@ func TestAccELBCookieStickinessPolicy_Disappears_elb(t *testing.T) { func testAccCheckLBCookieStickinessPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lb_cookie_stickiness_policy" { @@ -142,7 +145,7 @@ func testAccCheckLBCookieStickinessPolicyExists(ctx context.Context, n string) r return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) _, err = tfelb.FindLoadBalancerListenerPolicyByThreePartKey(ctx, conn, lbName, lbPort, policyName) diff --git a/internal/service/elb/lb_ssl_negotiation_policy.go b/internal/service/elb/lb_ssl_negotiation_policy.go index 1c3aade57a0..85d14a1fd58 100644 --- a/internal/service/elb/lb_ssl_negotiation_policy.go +++ b/internal/service/elb/lb_ssl_negotiation_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -68,7 +71,7 @@ func ResourceSSLNegotiationPolicy() *schema.Resource { func resourceSSLNegotiationPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName := d.Get("load_balancer").(string) lbPort := d.Get("lb_port").(int) @@ -114,7 +117,7 @@ func resourceSSLNegotiationPolicyCreate(ctx context.Context, d *schema.ResourceD func resourceSSLNegotiationPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, policyName, err := SSLNegotiationPolicyParseResourceID(d.Id()) @@ -158,7 +161,7 @@ func resourceSSLNegotiationPolicyRead(ctx context.Context, d *schema.ResourceDat func resourceSSLNegotiationPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, policyName, err := SSLNegotiationPolicyParseResourceID(d.Id()) diff --git a/internal/service/elb/lb_ssl_negotiation_policy_test.go b/internal/service/elb/lb_ssl_negotiation_policy_test.go index 47b3be61359..a47e1ae9971 100644 --- a/internal/service/elb/lb_ssl_negotiation_policy_test.go +++ b/internal/service/elb/lb_ssl_negotiation_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( @@ -101,7 +104,7 @@ func TestAccELBSSLNegotiationPolicy_disappears(t *testing.T) { func testAccCheckLBSSLNegotiationPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lb_ssl_negotiation_policy" { @@ -148,7 +151,7 @@ func testAccCheckLBSSLNegotiationPolicy(ctx context.Context, n string) resource. return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) _, err = tfelb.FindLoadBalancerListenerPolicyByThreePartKey(ctx, conn, lbName, lbPort, policyName) diff --git a/internal/service/elb/listener_policy.go b/internal/service/elb/listener_policy.go index 10927800794..8ac42723a7f 100644 --- a/internal/service/elb/listener_policy.go +++ b/internal/service/elb/listener_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -50,7 +53,7 @@ func ResourceListenerPolicy() *schema.Resource { func resourceListenerPolicySet(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName := d.Get("load_balancer_name").(string) lbPort := d.Get("load_balancer_port").(int) @@ -77,7 +80,7 @@ func resourceListenerPolicySet(ctx context.Context, d *schema.ResourceData, meta func resourceListenerPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, err := ListenerPolicyParseResourceID(d.Id()) @@ -106,7 +109,7 @@ func resourceListenerPolicyRead(ctx context.Context, d *schema.ResourceData, met func resourceListenerPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lbName, lbPort, err := ListenerPolicyParseResourceID(d.Id()) diff --git a/internal/service/elb/listener_policy_test.go b/internal/service/elb/listener_policy_test.go index 9fc11065b50..a65210f1df2 100644 --- a/internal/service/elb/listener_policy_test.go +++ b/internal/service/elb/listener_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( @@ -103,7 +106,7 @@ func TestAccELBListenerPolicy_disappears(t *testing.T) { func testAccCheckListenerPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_load_balancer_listener_policy" { @@ -150,7 +153,7 @@ func testAccCheckListenerPolicyExists(ctx context.Context, n string) resource.Te return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx) _, err = tfelb.FindLoadBalancerListenerPolicyByTwoPartKey(ctx, conn, lbName, lbPort) diff --git a/internal/service/elb/load_balancer.go b/internal/service/elb/load_balancer.go index 17fda661b1f..505631fed10 100644 --- a/internal/service/elb/load_balancer.go +++ b/internal/service/elb/load_balancer.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "bytes" "context" "fmt" @@ -17,6 +20,7 @@ import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -40,11 +44,28 @@ func ResourceLoadBalancer() *schema.Resource { ReadWithoutTimeout: resourceLoadBalancerRead, UpdateWithoutTimeout: resourceLoadBalancerUpdate, DeleteWithoutTimeout: resourceLoadBalancerDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, - CustomizeDiff: verify.SetTagsDiff, + CustomizeDiff: customdiff.All( + customdiff.ForceNewIfChange("subnets", func(_ context.Context, o, n, meta interface{}) bool { + // Force new if removing all current subnets. 
+ os := o.(*schema.Set) + ns := n.(*schema.Set) + + removed := os.Difference(ns) + + return removed.Equal(os) + }), + verify.SetTagsDiff, + ), + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(5 * time.Minute), + Update: schema.DefaultTimeout(5 * time.Minute), + }, Schema: map[string]*schema.Schema{ "access_logs": { @@ -249,7 +270,7 @@ func ResourceLoadBalancer() *schema.Resource { func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) var elbName string if v, ok := d.GetOk("name"); ok { @@ -260,38 +281,37 @@ func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, met } else { elbName = id.PrefixedUniqueId("tf-lb-") } - d.Set("name", elbName) } - // Expand the "listener" set to aws-sdk-go compat []*elb.Listener listeners, err := ExpandListeners(d.Get("listener").(*schema.Set).List()) + if err != nil { - return sdkdiag.AppendErrorf(diags, "creating ELB Classic Load Balancer (%s): %s", elbName, err) + return sdkdiag.AppendFromErr(diags, err) } - // Provision the elb + input := &elb.CreateLoadBalancerInput{ LoadBalancerName: aws.String(elbName), Listeners: listeners, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } - if _, ok := d.GetOk("internal"); ok { - input.Scheme = aws.String("internal") + if v, ok := d.GetOk("availability_zones"); ok && v.(*schema.Set).Len() > 0 { + input.AvailabilityZones = flex.ExpandStringSet(v.(*schema.Set)) } - if v, ok := d.GetOk("availability_zones"); ok { - input.AvailabilityZones = flex.ExpandStringSet(v.(*schema.Set)) + if _, ok := d.GetOk("internal"); ok { + input.Scheme = aws.String("internal") } - if v, ok := d.GetOk("security_groups"); ok { + if v, ok := d.GetOk("security_groups"); ok && v.(*schema.Set).Len() > 0 { input.SecurityGroups = flex.ExpandStringSet(v.(*schema.Set)) } - if v, ok := d.GetOk("subnets"); ok { 
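The new `customdiff.ForceNewIfChange` predicate above forces replacement only when the planned change removes every currently attached subnet (the removed set equals the old set). Outside the SDK's `schema.Set` type, the same decision can be sketched with plain string slices; `allCurrentRemoved` is a hypothetical helper name, not a provider function:

```go
package main

import "fmt"

// allCurrentRemoved reports whether every element of old is absent from new,
// i.e. the "removed" set (old minus new) equals the old set. This mirrors the
// ForceNewIfChange predicate on "subnets": replacement is only forced when all
// currently attached subnets are being detached at once.
func allCurrentRemoved(old, new []string) bool {
	kept := make(map[string]bool, len(new))
	for _, s := range new {
		kept[s] = true
	}
	for _, s := range old {
		if kept[s] {
			return false // at least one current subnet survives the change
		}
	}
	// Vacuously true when old is empty, matching removed.Equal(os) on empty sets.
	return true
}

func main() {
	// Swapping every subnet forces a new load balancer.
	fmt.Println(allCurrentRemoved([]string{"subnet-a", "subnet-b"}, []string{"subnet-c"}))
	// Keeping any current subnet allows an in-place update.
	fmt.Println(allCurrentRemoved([]string{"subnet-a", "subnet-b"}, []string{"subnet-a", "subnet-c"}))
}
```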
+ if v, ok := d.GetOk("subnets"); ok && v.(*schema.Set).Len() > 0 { input.Subnets = flex.ExpandStringSet(v.(*schema.Set)) } - _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, 5*time.Minute, func() (interface{}, error) { + _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, d.Timeout(schema.TimeoutCreate), func() (interface{}, error) { return conn.CreateLoadBalancerWithContext(ctx, input) }, elb.ErrCodeCertificateNotFoundException) @@ -299,16 +319,14 @@ func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "creating ELB Classic Load Balancer (%s): %s", elbName, err) } - // Assign the elb's unique identifier for use later d.SetId(elbName) - log.Printf("[INFO] ELB ID: %s", d.Id()) return append(diags, resourceLoadBalancerUpdate(ctx, d, meta)...) } func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) lb, err := FindLoadBalancerByName(ctx, conn, d.Id()) @@ -322,6 +340,12 @@ func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "reading ELB Classic Load Balancer (%s): %s", d.Id(), err) } + lbAttrs, err := findLoadBalancerAttributesByName(ctx, conn, d.Id()) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading ELB Classic Load Balancer (%s) attributes: %s", d.Id(), err) + } + arn := arn.ARN{ Partition: meta.(*conns.AWSClient).Partition, Region: meta.(*conns.AWSClient).Region, @@ -330,38 +354,25 @@ func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta Resource: fmt.Sprintf("loadbalancer/%s", d.Id()), } d.Set("arn", arn.String()) - - if err := flattenLoadBalancerResource(ctx, d, meta.(*conns.AWSClient).EC2Conn(), conn, lb); err != nil { - return sdkdiag.AppendFromErr(diags, err) - } - return diags -} - -// flattenLoadBalancerResource takes a 
*elb.LoadBalancerDescription and populates all respective resource fields. -func flattenLoadBalancerResource(ctx context.Context, d *schema.ResourceData, ec2conn *ec2.EC2, elbconn *elb.ELB, lb *elb.LoadBalancerDescription) error { - describeAttrsOpts := &elb.DescribeLoadBalancerAttributesInput{ - LoadBalancerName: aws.String(d.Id()), - } - describeAttrsResp, err := elbconn.DescribeLoadBalancerAttributesWithContext(ctx, describeAttrsOpts) - if err != nil { - return fmt.Errorf("Error retrieving ELB: %s", err) - } - - lbAttrs := describeAttrsResp.LoadBalancerAttributes - - d.Set("name", lb.LoadBalancerName) + d.Set("availability_zones", flex.FlattenStringList(lb.AvailabilityZones)) + d.Set("connection_draining", lbAttrs.ConnectionDraining.Enabled) + d.Set("connection_draining_timeout", lbAttrs.ConnectionDraining.Timeout) + d.Set("cross_zone_load_balancing", lbAttrs.CrossZoneLoadBalancing.Enabled) d.Set("dns_name", lb.DNSName) - d.Set("zone_id", lb.CanonicalHostedZoneNameID) - + if lbAttrs.ConnectionSettings != nil { + d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout) + } + d.Set("instances", flattenInstances(lb.Instances)) var scheme bool if lb.Scheme != nil { scheme = aws.StringValue(lb.Scheme) == "internal" } d.Set("internal", scheme) - d.Set("availability_zones", flex.FlattenStringList(lb.AvailabilityZones)) - d.Set("instances", flattenInstances(lb.Instances)) d.Set("listener", flattenListeners(lb.ListenerDescriptions)) + d.Set("name", lb.LoadBalancerName) d.Set("security_groups", flex.FlattenStringList(lb.SecurityGroups)) + d.Set("subnets", flex.FlattenStringList(lb.Subnets)) + d.Set("zone_id", lb.CanonicalHostedZoneNameID) if lb.SourceSecurityGroup != nil { group := lb.SourceSecurityGroup.GroupName @@ -372,21 +383,15 @@ func flattenLoadBalancerResource(ctx context.Context, d *schema.ResourceData, ec // Manually look up the ELB Security Group ID, since it's not provided if lb.VPCId != nil { - sg, err := 
tfec2.FindSecurityGroupByNameAndVPCIDAndOwnerID(ctx, ec2conn, aws.StringValue(lb.SourceSecurityGroup.GroupName), aws.StringValue(lb.VPCId), aws.StringValue(lb.SourceSecurityGroup.OwnerAlias)) + sg, err := tfec2.FindSecurityGroupByNameAndVPCIDAndOwnerID(ctx, meta.(*conns.AWSClient).EC2Conn(ctx), aws.StringValue(lb.SourceSecurityGroup.GroupName), aws.StringValue(lb.VPCId), aws.StringValue(lb.SourceSecurityGroup.OwnerAlias)) if err != nil { - return fmt.Errorf("Error looking up ELB Security Group ID: %w", err) + return sdkdiag.AppendErrorf(diags, "reading ELB Classic Load Balancer (%s) security group: %s", d.Id(), err) } else { d.Set("source_security_group_id", sg.GroupId) } } } - d.Set("subnets", flex.FlattenStringList(lb.Subnets)) - if lbAttrs.ConnectionSettings != nil { - d.Set("idle_timeout", lbAttrs.ConnectionSettings.IdleTimeout) - } - d.Set("connection_draining", lbAttrs.ConnectionDraining.Enabled) - d.Set("connection_draining_timeout", lbAttrs.ConnectionDraining.Timeout) - d.Set("cross_zone_load_balancing", lbAttrs.CrossZoneLoadBalancing.Enabled) + if lbAttrs.AccessLog != nil { // The AWS API does not allow users to remove access_logs, only disable them. 
// During creation of the ELB, Terraform sets the access_logs to disabled, @@ -404,11 +409,11 @@ func flattenLoadBalancerResource(ctx context.Context, d *schema.ResourceData, ec _, n := d.GetChange("access_logs") elbal := lbAttrs.AccessLog nl := n.([]interface{}) - if len(nl) == 0 && !*elbal.Enabled { + if len(nl) == 0 && !aws.BoolValue(elbal.Enabled) { elbal = nil } if err := d.Set("access_logs", flattenAccessLog(elbal)); err != nil { - return fmt.Errorf("reading ELB Classic Load Balancer (%s): setting access_logs: %w", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "setting access_logs: %s", err) } } @@ -422,15 +427,17 @@ func flattenLoadBalancerResource(ctx context.Context, d *schema.ResourceData, ec // There's only one health check, so save that to state as we // currently can if aws.StringValue(lb.HealthCheck.Target) != "" { - d.Set("health_check", FlattenHealthCheck(lb.HealthCheck)) + if err := d.Set("health_check", FlattenHealthCheck(lb.HealthCheck)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting health_check: %s", err) + } } - return nil + return diags } func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) if d.HasChange("listener") { o, n := d.GetChange("listener") @@ -439,8 +446,9 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met remove, _ := ExpandListeners(os.Difference(ns).List()) add, err := ExpandListeners(ns.Difference(os).List()) + if err != nil { - return sdkdiag.AppendErrorf(diags, "updating ELB Classic Load Balancer (%s): %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } if len(remove) > 0 { @@ -449,49 +457,43 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met ports = append(ports, listener.LoadBalancerPort) } - deleteListenersOpts := &elb.DeleteLoadBalancerListenersInput{ + 
input := &elb.DeleteLoadBalancerListenersInput{ LoadBalancerName: aws.String(d.Id()), LoadBalancerPorts: ports, } - log.Printf("[DEBUG] ELB Delete Listeners opts: %s", deleteListenersOpts) - _, err := conn.DeleteLoadBalancerListenersWithContext(ctx, deleteListenersOpts) + _, err := conn.DeleteLoadBalancerListenersWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure removing outdated ELB listeners: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting ELB Classic Load Balancer (%s) listeners: %s", d.Id(), err) } } if len(add) > 0 { input := &elb.CreateLoadBalancerListenersInput{ - LoadBalancerName: aws.String(d.Id()), Listeners: add, + LoadBalancerName: aws.String(d.Id()), } // Occasionally AWS will error with a 'duplicate listener', without any // other listeners on the ELB. Retry here to eliminate that. - err := retry.RetryContext(ctx, 5*time.Minute, func() *retry.RetryError { - _, err := conn.CreateLoadBalancerListenersWithContext(ctx, input) - if err != nil { + _, err := tfresource.RetryWhen(ctx, d.Timeout(schema.TimeoutUpdate), + func() (interface{}, error) { + return conn.CreateLoadBalancerListenersWithContext(ctx, input) + }, + func(err error) (bool, error) { if tfawserr.ErrCodeEquals(err, elb.ErrCodeDuplicateListenerException) { - log.Printf("[DEBUG] Duplicate listener found for ELB (%s), retrying", d.Id()) - return retry.RetryableError(err) + return true, err } if tfawserr.ErrMessageContains(err, elb.ErrCodeCertificateNotFoundException, "Server Certificate not found for the key: arn") { - log.Printf("[DEBUG] SSL Cert not found for given ARN, retrying") - return retry.RetryableError(err) + return true, err } - // Didn't recognize the error, so shouldn't retry. 
- return retry.NonRetryableError(err) - } - // Successful creation - return nil - }) - if tfresource.TimedOut(err) { - _, err = conn.CreateLoadBalancerListenersWithContext(ctx, input) - } + return false, err + }) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure adding new or updated ELB listeners: %s", err) + return sdkdiag.AppendErrorf(diags, "creating ELB Classic Load Balancer (%s) listeners: %s", d.Id(), err) } } } @@ -507,32 +509,34 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met add := ExpandInstanceString(ns.Difference(os).List()) if len(add) > 0 { - registerInstancesOpts := elb.RegisterInstancesWithLoadBalancerInput{ - LoadBalancerName: aws.String(d.Id()), + input := &elb.RegisterInstancesWithLoadBalancerInput{ Instances: add, + LoadBalancerName: aws.String(d.Id()), } - _, err := conn.RegisterInstancesWithLoadBalancerWithContext(ctx, ®isterInstancesOpts) + _, err := conn.RegisterInstancesWithLoadBalancerWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure registering instances with ELB: %s", err) + return sdkdiag.AppendErrorf(diags, "registering ELB Classic Load Balancer (%s) instances: %s", d.Id(), err) } } + if len(remove) > 0 { - deRegisterInstancesOpts := elb.DeregisterInstancesFromLoadBalancerInput{ - LoadBalancerName: aws.String(d.Id()), + input := &elb.DeregisterInstancesFromLoadBalancerInput{ Instances: remove, + LoadBalancerName: aws.String(d.Id()), } - _, err := conn.DeregisterInstancesFromLoadBalancerWithContext(ctx, &deRegisterInstancesOpts) + _, err := conn.DeregisterInstancesFromLoadBalancerWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure deregistering instances from ELB: %s", err) + return sdkdiag.AppendErrorf(diags, "deregistering ELB Classic Load Balancer (%s) instances: %s", d.Id(), err) } } } if d.HasChanges("cross_zone_load_balancing", "idle_timeout", "access_logs", "desync_mitigation_mode") { - attrs := 
elb.ModifyLoadBalancerAttributesInput{ - LoadBalancerName: aws.String(d.Get("name").(string)), + input := &elb.ModifyLoadBalancerAttributesInput{ LoadBalancerAttributes: &elb.LoadBalancerAttributes{ AdditionalAttributes: []*elb.AdditionalAttribute{ { @@ -547,12 +551,12 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met IdleTimeout: aws.Int64(int64(d.Get("idle_timeout").(int))), }, }, + LoadBalancerName: aws.String(d.Id()), } - logs := d.Get("access_logs").([]interface{}) - if len(logs) == 1 { + if logs := d.Get("access_logs").([]interface{}); len(logs) == 1 { l := logs[0].(map[string]interface{}) - attrs.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ + input.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ Enabled: aws.Bool(l["enabled"].(bool)), EmitInterval: aws.Int64(int64(l["interval"].(int))), S3BucketName: aws.String(l["bucket"].(string)), @@ -560,15 +564,15 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met } } else if len(logs) == 0 { // disable access logs - attrs.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ + input.LoadBalancerAttributes.AccessLog = &elb.AccessLog{ Enabled: aws.Bool(false), } } - log.Printf("[DEBUG] ELB Modify Load Balancer Attributes Request: %#v", attrs) - _, err := conn.ModifyLoadBalancerAttributesWithContext(ctx, &attrs) + _, err := conn.ModifyLoadBalancerAttributesWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure configuring ELB attributes: %s", err) + return sdkdiag.AppendErrorf(diags, "modifying ELB Classic Load Balancer (%s) attributes: %s", d.Id(), err) } } @@ -580,70 +584,73 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met // We do timeout changes first since they require us to set draining // to true for a hot second. 
if d.HasChange("connection_draining_timeout") { - attrs := elb.ModifyLoadBalancerAttributesInput{ - LoadBalancerName: aws.String(d.Get("name").(string)), + input := &elb.ModifyLoadBalancerAttributesInput{ LoadBalancerAttributes: &elb.LoadBalancerAttributes{ ConnectionDraining: &elb.ConnectionDraining{ Enabled: aws.Bool(true), Timeout: aws.Int64(int64(d.Get("connection_draining_timeout").(int))), }, }, + LoadBalancerName: aws.String(d.Id()), } - _, err := conn.ModifyLoadBalancerAttributesWithContext(ctx, &attrs) + _, err := conn.ModifyLoadBalancerAttributesWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure configuring ELB attributes: %s", err) + return sdkdiag.AppendErrorf(diags, "modifying ELB Classic Load Balancer (%s) attributes: %s", d.Id(), err) } } // Then we always set connection draining even if there is no change. // This lets us reset to "false" if requested even with a timeout // change. - attrs := elb.ModifyLoadBalancerAttributesInput{ - LoadBalancerName: aws.String(d.Get("name").(string)), + input := &elb.ModifyLoadBalancerAttributesInput{ LoadBalancerAttributes: &elb.LoadBalancerAttributes{ ConnectionDraining: &elb.ConnectionDraining{ Enabled: aws.Bool(d.Get("connection_draining").(bool)), }, }, + LoadBalancerName: aws.String(d.Id()), } - _, err := conn.ModifyLoadBalancerAttributesWithContext(ctx, &attrs) + _, err := conn.ModifyLoadBalancerAttributesWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure configuring ELB attributes: %s", err) + return sdkdiag.AppendErrorf(diags, "modifying ELB Classic Load Balancer (%s) attributes: %s", d.Id(), err) } } if d.HasChange("health_check") { - hc := d.Get("health_check").([]interface{}) - if len(hc) > 0 { + if hc := d.Get("health_check").([]interface{}); len(hc) > 0 { check := hc[0].(map[string]interface{}) - configureHealthCheckOpts := elb.ConfigureHealthCheckInput{ - LoadBalancerName: aws.String(d.Id()), + input := 
&elb.ConfigureHealthCheckInput{ HealthCheck: &elb.HealthCheck{ HealthyThreshold: aws.Int64(int64(check["healthy_threshold"].(int))), - UnhealthyThreshold: aws.Int64(int64(check["unhealthy_threshold"].(int))), Interval: aws.Int64(int64(check["interval"].(int))), Target: aws.String(check["target"].(string)), Timeout: aws.Int64(int64(check["timeout"].(int))), + UnhealthyThreshold: aws.Int64(int64(check["unhealthy_threshold"].(int))), }, + LoadBalancerName: aws.String(d.Id()), } - _, err := conn.ConfigureHealthCheckWithContext(ctx, &configureHealthCheckOpts) + _, err := conn.ConfigureHealthCheckWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure configuring health check for ELB: %s", err) + return sdkdiag.AppendErrorf(diags, "configuring ELB Classic Load Balancer (%s) health check: %s", d.Id(), err) } } } if d.HasChange("security_groups") { - applySecurityGroupsOpts := elb.ApplySecurityGroupsToLoadBalancerInput{ + input := &elb.ApplySecurityGroupsToLoadBalancerInput{ LoadBalancerName: aws.String(d.Id()), SecurityGroups: flex.ExpandStringSet(d.Get("security_groups").(*schema.Set)), } - _, err := conn.ApplySecurityGroupsToLoadBalancerWithContext(ctx, &applySecurityGroupsOpts) + _, err := conn.ApplySecurityGroupsToLoadBalancerWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure applying security groups to ELB: %s", err) + return sdkdiag.AppendErrorf(diags, "applying ELB Classic Load Balancer (%s) security groups: %s", d.Id(), err) } } @@ -656,28 +663,28 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met added := flex.ExpandStringSet(ns.Difference(os)) if len(added) > 0 { - enableOpts := &elb.EnableAvailabilityZonesForLoadBalancerInput{ - LoadBalancerName: aws.String(d.Id()), + input := &elb.EnableAvailabilityZonesForLoadBalancerInput{ AvailabilityZones: added, + LoadBalancerName: aws.String(d.Id()), } - log.Printf("[DEBUG] ELB enable availability zones opts: %s", 
enableOpts) - _, err := conn.EnableAvailabilityZonesForLoadBalancerWithContext(ctx, enableOpts) + _, err := conn.EnableAvailabilityZonesForLoadBalancerWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure enabling ELB availability zones: %s", err) + return sdkdiag.AppendErrorf(diags, "enabling ELB Classic Load Balancer (%s) Availability Zones: %s", d.Id(), err) } } if len(removed) > 0 { - disableOpts := &elb.DisableAvailabilityZonesForLoadBalancerInput{ - LoadBalancerName: aws.String(d.Id()), + input := &elb.DisableAvailabilityZonesForLoadBalancerInput{ AvailabilityZones: removed, + LoadBalancerName: aws.String(d.Id()), } - log.Printf("[DEBUG] ELB disable availability zones opts: %s", disableOpts) - _, err := conn.DisableAvailabilityZonesForLoadBalancerWithContext(ctx, disableOpts) + _, err := conn.DisableAvailabilityZonesForLoadBalancerWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure disabling ELB availability zones: %s", err) + return sdkdiag.AppendErrorf(diags, "disabling ELB Classic Load Balancer (%s) Availability Zones: %s", d.Id(), err) } } } @@ -691,43 +698,30 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met added := flex.ExpandStringSet(ns.Difference(os)) if len(removed) > 0 { - detachOpts := &elb.DetachLoadBalancerFromSubnetsInput{ + input := &elb.DetachLoadBalancerFromSubnetsInput{ LoadBalancerName: aws.String(d.Id()), Subnets: removed, } - log.Printf("[DEBUG] ELB detach subnets opts: %s", detachOpts) - _, err := conn.DetachLoadBalancerFromSubnetsWithContext(ctx, detachOpts) + _, err := conn.DetachLoadBalancerFromSubnetsWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure removing ELB subnets: %s", err) + return sdkdiag.AppendErrorf(diags, "detaching ELB Classic Load Balancer (%s) from subnets: %s", d.Id(), err) } } if len(added) > 0 { - attachOpts := &elb.AttachLoadBalancerToSubnetsInput{ + input := 
&elb.AttachLoadBalancerToSubnetsInput{ LoadBalancerName: aws.String(d.Id()), Subnets: added, } - log.Printf("[DEBUG] ELB attach subnets opts: %s", attachOpts) - err := retry.RetryContext(ctx, 5*time.Minute, func() *retry.RetryError { - _, err := conn.AttachLoadBalancerToSubnetsWithContext(ctx, attachOpts) - if err != nil { - if tfawserr.ErrMessageContains(err, elb.ErrCodeInvalidConfigurationRequestException, "cannot be attached to multiple subnets in the same AZ") { - // eventually consistent issue with removing a subnet in AZ1 and - // immediately adding a new one in the same AZ - log.Printf("[DEBUG] retrying az association") - return retry.RetryableError(err) - } - return retry.NonRetryableError(err) - } - return nil - }) - if tfresource.TimedOut(err) { - _, err = conn.AttachLoadBalancerToSubnetsWithContext(ctx, attachOpts) - } + _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, d.Timeout(schema.TimeoutUpdate), func() (interface{}, error) { + return conn.AttachLoadBalancerToSubnetsWithContext(ctx, input) + }, elb.ErrCodeInvalidConfigurationRequestException, "cannot be attached to multiple subnets in the same AZ") + if err != nil { - return sdkdiag.AppendErrorf(diags, "Failure adding ELB subnets: %s", err) + return sdkdiag.AppendErrorf(diags, "attaching ELB Classic Load Balancer (%s) to subnets: %s", d.Id(), err) } } } @@ -737,7 +731,7 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met func resourceLoadBalancerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) log.Printf("[INFO] Deleting ELB Classic Load Balancer: %s", d.Id()) _, err := conn.DeleteLoadBalancerWithContext(ctx, &elb.DeleteLoadBalancerInput{ @@ -748,7 +742,7 @@ func resourceLoadBalancerDelete(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "deleting ELB Classic Load 
Balancer (%s): %s", d.Id(), err) } - err = cleanupNetworkInterfaces(ctx, meta.(*conns.AWSClient).EC2Conn(), d.Id()) + err = deleteNetworkInterfaces(ctx, meta.(*conns.AWSClient).EC2Conn(ctx), d.Id()) if err != nil { diags = sdkdiag.AppendWarningf(diags, "cleaning up ELB Classic Load Balancer (%s) ENIs: %s", d.Id(), err) @@ -793,6 +787,31 @@ func FindLoadBalancerByName(ctx context.Context, conn *elb.ELB, name string) (*e return output.LoadBalancerDescriptions[0], nil } +func findLoadBalancerAttributesByName(ctx context.Context, conn *elb.ELB, name string) (*elb.LoadBalancerAttributes, error) { + input := &elb.DescribeLoadBalancerAttributesInput{ + LoadBalancerName: aws.String(name), + } + + output, err := conn.DescribeLoadBalancerAttributesWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, elb.ErrCodeAccessPointNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.LoadBalancerAttributes == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.LoadBalancerAttributes, nil +} + func ListenerHash(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) @@ -917,7 +936,7 @@ func validateListenerProtocol() schema.SchemaValidateFunc { // but the cleanup is asynchronous and may take time // which then blocks IGW, SG or VPC on deletion // So we make the cleanup "synchronous" here -func cleanupNetworkInterfaces(ctx context.Context, conn *ec2.EC2, name string) error { +func deleteNetworkInterfaces(ctx context.Context, conn *ec2.EC2, name string) error { // https://aws.amazon.com/premiumsupport/knowledge-center/elb-find-load-balancer-IP/. 
networkInterfaces, err := tfec2.FindNetworkInterfacesByAttachmentInstanceOwnerIDAndDescription(ctx, conn, "amazon-elb", "ELB "+name) diff --git a/internal/service/elb/load_balancer_data_source.go b/internal/service/elb/load_balancer_data_source.go index e4611b0f0e3..be77f84b19b 100644 --- a/internal/service/elb/load_balancer_data_source.go +++ b/internal/service/elb/load_balancer_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb import ( @@ -204,8 +207,8 @@ func DataSourceLoadBalancer() *schema.Resource { func dataSourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBConn() - ec2conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).ELBConn(ctx) + ec2conn := meta.(*conns.AWSClient).EC2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig lbName := d.Get("name").(string) @@ -308,7 +311,7 @@ func dataSourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, met } } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for ELB (%s): %s", d.Id(), err) diff --git a/internal/service/elb/load_balancer_data_source_test.go b/internal/service/elb/load_balancer_data_source_test.go index d155418ca78..ebae55e5348 100644 --- a/internal/service/elb/load_balancer_data_source_test.go +++ b/internal/service/elb/load_balancer_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elb_test import ( diff --git a/internal/service/elb/load_balancer_test.go b/internal/service/elb/load_balancer_test.go index f248acc2ba6..d781b1d9d8d 100644 --- a/internal/service/elb/load_balancer_test.go +++ b/internal/service/elb/load_balancer_test.go @@ -1,13 +1,15 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elb_test -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "math/rand" "reflect" "regexp" "testing" - "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elb" @@ -20,9 +22,227 @@ import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) +func TestLoadBalancerListenerHash(t *testing.T) { + t.Parallel() + + cases := map[string]struct { + Left map[string]interface{} + Right map[string]interface{} + Match bool + }{ + "protocols are case insensitive": { + map[string]interface{}{ + "instance_port": 80, + "instance_protocol": "TCP", + "lb_port": 80, + "lb_protocol": "TCP", + }, + map[string]interface{}{ + "instance_port": 80, + "instance_protocol": "Tcp", + "lb_port": 80, + "lb_protocol": "tcP", + }, + true, + }, + } + + for tn, tc := range cases { + leftHash := tfelb.ListenerHash(tc.Left) + rightHash := tfelb.ListenerHash(tc.Right) + if leftHash == rightHash != tc.Match { + t.Fatalf("%s: expected match: %t, but did not get it", tn, tc.Match) + } + } +} + +func TestValidLoadBalancerNameCannotBeginWithHyphen(t *testing.T) { + t.Parallel() + + var n = "-Testing123" + _, errors := tfelb.ValidName(n, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestValidLoadBalancerNameCanBeAnEmptyString(t *testing.T) { + t.Parallel() + + var n = "" + _, errors := tfelb.ValidName(n, "SampleKey") + + if len(errors) != 0 { + t.Fatalf("Expected the ELB Name to pass validation") + } +} + +func TestValidLoadBalancerNameCannotBeLongerThan32Characters(t *testing.T) { + t.Parallel() + + var n = "Testing123dddddddddddddddddddvvvv" + _, errors := tfelb.ValidName(n, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func 
TestValidLoadBalancerNameCannotHaveSpecialCharacters(t *testing.T) { + t.Parallel() + + var n = "Testing123%%" + _, errors := tfelb.ValidName(n, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestValidLoadBalancerNameCannotEndWithHyphen(t *testing.T) { + t.Parallel() + + var n = "Testing123-" + _, errors := tfelb.ValidName(n, "SampleKey") + + if len(errors) != 1 { + t.Fatalf("Expected the ELB Name to trigger a validation error") + } +} + +func TestValidLoadBalancerAccessLogsInterval(t *testing.T) { + t.Parallel() + + type testCases struct { + Value int + ErrCount int + } + + invalidCases := []testCases{ + { + Value: 0, + ErrCount: 1, + }, + { + Value: 10, + ErrCount: 1, + }, + { + Value: -1, + ErrCount: 1, + }, + } + + for _, tc := range invalidCases { + _, errors := tfelb.ValidAccessLogsInterval(tc.Value, "interval") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected %q to trigger a validation error.", tc.Value) + } + } +} + +func TestValidLoadBalancerHealthCheckTarget(t *testing.T) { + t.Parallel() + + type testCase struct { + Value string + ErrCount int + } + + randomRunes := func(n int) string { + // A complete set of modern Katakana characters. 
+ runes := []rune("アイウエオ" + + "カキクケコガギグゲゴサシスセソザジズゼゾ" + + "タチツテトダヂヅデドナニヌネノハヒフヘホ" + + "バビブベボパピプペポマミムメモヤユヨラリ" + + "ルレロワヰヱヲン") + + s := make([]rune, n) + for i := range s { + s[i] = runes[rand.Intn(len(runes))] + } + return string(s) + } + + validCases := []testCase{ + { + Value: "TCP:1234", + ErrCount: 0, + }, + { + Value: "http:80/test", + ErrCount: 0, + }, + { + Value: fmt.Sprintf("HTTP:8080/%s", randomRunes(5)), + ErrCount: 0, + }, + { + Value: "SSL:8080", + ErrCount: 0, + }, + } + + for _, tc := range validCases { + _, errors := tfelb.ValidHeathCheckTarget(tc.Value, "target") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected %q not to trigger a validation error.", tc.Value) + } + } + + invalidCases := []testCase{ + { + Value: "", + ErrCount: 1, + }, + { + Value: "TCP:", + ErrCount: 1, + }, + { + Value: "TCP:1234/", + ErrCount: 1, + }, + { + Value: "SSL:8080/", + ErrCount: 1, + }, + { + Value: "HTTP:8080", + ErrCount: 1, + }, + { + Value: "incorrect-value", + ErrCount: 1, + }, + { + Value: "TCP:123456", + ErrCount: 1, + }, + { + Value: "incorrect:80/", + ErrCount: 1, + }, + { + Value: fmt.Sprintf("HTTP:8080/%s%s", + sdkacctest.RandStringFromCharSet(512, sdkacctest.CharSetAlpha), randomRunes(512)), + ErrCount: 1, + }, + } + + for _, tc := range invalidCases { + _, errors := tfelb.ValidHeathCheckTarget(tc.Value, "target") + if len(errors) != tc.ErrCount { + t.Fatalf("Expected %q to trigger a validation error.", tc.Value) + } + } +} + func TestAccELBLoadBalancer_basic(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_elb.test" resource.ParallelTest(t, resource.TestCase{ @@ -32,21 +252,22 @@ func TestAccELBLoadBalancer_basic(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_basic, + Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( 
testAccCheckLoadBalancerExists(ctx, resourceName, &conf), testAccCheckLoadBalancerAttributes(&conf), resource.TestCheckResourceAttrSet(resourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "availability_zones.#", "3"), - resource.TestCheckResourceAttr(resourceName, "subnets.#", "3"), + resource.TestCheckResourceAttr(resourceName, "cross_zone_load_balancing", "true"), + resource.TestCheckResourceAttr(resourceName, "desync_mitigation_mode", "defensive"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "listener.*", map[string]string{ "instance_port": "8000", "instance_protocol": "http", "lb_port": "80", "lb_protocol": "http", }), - resource.TestCheckResourceAttr(resourceName, "cross_zone_load_balancing", "true"), - resource.TestCheckResourceAttr(resourceName, "desync_mitigation_mode", "defensive"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "subnets.#", "3"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -62,6 +283,7 @@ func TestAccELBLoadBalancer_basic(t *testing.T) { func TestAccELBLoadBalancer_disappears(t *testing.T) { ctx := acctest.Context(t) var loadBalancer elb.LoadBalancerDescription + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_elb.test" resource.ParallelTest(t, resource.TestCase{ @@ -71,7 +293,7 @@ func TestAccELBLoadBalancer_disappears(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_basic, + Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &loadBalancer), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfelb.ResourceLoadBalancer(), resourceName), @@ -82,11 +304,11 @@ func TestAccELBLoadBalancer_disappears(t *testing.T) { }) } -func TestAccELBLoadBalancer_fullCharacterRange(t *testing.T) { +func TestAccELBLoadBalancer_namePrefix(t 
*testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription + nameRegex := regexp.MustCompile("^tfacc-") resourceName := "aws_elb.test" - rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -95,21 +317,21 @@ func TestAccELBLoadBalancer_fullCharacterRange(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_fullRangeOfCharacters(rName), + Config: testAccLoadBalancerConfig_namePrefix, Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestMatchResourceAttr(resourceName, "name", nameRegex), ), }, }, }) } -func TestAccELBLoadBalancer_AccessLogs_enabled(t *testing.T) { +func TestAccELBLoadBalancer_nameGenerated(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription + generatedNameRegexp := regexp.MustCompile("^tf-lb-") resourceName := "aws_elb.test" - rName := fmt.Sprintf("tf-test-access-logs-%d", sdkacctest.RandInt()) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -118,39 +340,21 @@ func TestAccELBLoadBalancer_AccessLogs_enabled(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_accessLogs, - Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - ), - }, - - { - Config: testAccLoadBalancerConfig_accessLogsOn(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "access_logs.#", "1"), - resource.TestCheckResourceAttr(resourceName, "access_logs.0.bucket", rName), - resource.TestCheckResourceAttr(resourceName, "access_logs.0.interval", "5"), - 
resource.TestCheckResourceAttr(resourceName, "access_logs.0.enabled", "true"), - ), - }, - - { - Config: testAccLoadBalancerConfig_accessLogs, + Config: testAccLoadBalancerConfig_nameGenerated, Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "access_logs.#", "0"), + resource.TestMatchResourceAttr(resourceName, "name", generatedNameRegexp), ), }, }, }) } -func TestAccELBLoadBalancer_AccessLogs_disabled(t *testing.T) { +func TestAccELBLoadBalancer_tags(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_elb.test" - rName := fmt.Sprintf("tf-test-access-logs-%d", sdkacctest.RandInt()) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -159,38 +363,47 @@ func TestAccELBLoadBalancer_AccessLogs_disabled(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_accessLogs, + Config: testAccLoadBalancerConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), + testAccCheckLoadBalancerAttributes(&conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), }, { - Config: testAccLoadBalancerConfig_accessLogsDisabled(rName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccLoadBalancerConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "access_logs.#", "1"), - resource.TestCheckResourceAttr(resourceName, "access_logs.0.bucket", rName), - resource.TestCheckResourceAttr(resourceName, 
"access_logs.0.interval", "5"), - resource.TestCheckResourceAttr(resourceName, "access_logs.0.enabled", "false"), + testAccCheckLoadBalancerAttributes(&conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, { - Config: testAccLoadBalancerConfig_accessLogs, + Config: testAccLoadBalancerConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr( - resourceName, "access_logs.#", "0"), + testAccCheckLoadBalancerAttributes(&conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, }, }) } -func TestAccELBLoadBalancer_namePrefix(t *testing.T) { +func TestAccELBLoadBalancer_fullCharacterRange(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription - nameRegex := regexp.MustCompile("^test-") resourceName := "aws_elb.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -199,21 +412,21 @@ func TestAccELBLoadBalancer_namePrefix(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_namePrefix, + Config: testAccLoadBalancerConfig_fullRangeOfCharacters(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestMatchResourceAttr(resourceName, "name", nameRegex), + resource.TestCheckResourceAttr(resourceName, "name", rName), ), }, }, }) } -func TestAccELBLoadBalancer_generatedName(t *testing.T) { +func TestAccELBLoadBalancer_AccessLogs_enabled(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription - generatedNameRegexp := 
regexp.MustCompile("^tf-lb-") resourceName := "aws_elb.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -222,43 +435,39 @@ func TestAccELBLoadBalancer_generatedName(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_generatedName, + Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestMatchResourceAttr(resourceName, "name", generatedNameRegexp), ), }, - }, - }) -} -func TestAccELBLoadBalancer_generatesNameForZeroValue(t *testing.T) { - ctx := acctest.Context(t) - var conf elb.LoadBalancerDescription - generatedNameRegexp := regexp.MustCompile("^tf-lb-") - resourceName := "aws_elb.test" + { + Config: testAccLoadBalancerConfig_accessLogsOn(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckLoadBalancerExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "access_logs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "access_logs.0.bucket", rName), + resource.TestCheckResourceAttr(resourceName, "access_logs.0.interval", "5"), + resource.TestCheckResourceAttr(resourceName, "access_logs.0.enabled", "true"), + ), + }, - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, elb.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), - Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_zeroValueName, + Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestMatchResourceAttr(resourceName, "name", generatedNameRegexp), + resource.TestCheckResourceAttr(resourceName, 
"access_logs.#", "0"), ), }, }, }) } -func TestAccELBLoadBalancer_availabilityZones(t *testing.T) { +func TestAccELBLoadBalancer_AccessLogs_disabled(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription resourceName := "aws_elb.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, @@ -267,27 +476,36 @@ func TestAccELBLoadBalancer_availabilityZones(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_basic, + Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "availability_zones.#", "3"), ), }, - { - Config: testAccLoadBalancerConfig_availabilityZonesUpdate, + Config: testAccLoadBalancerConfig_accessLogsDisabled(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "availability_zones.#", "2"), + resource.TestCheckResourceAttr(resourceName, "access_logs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "access_logs.0.bucket", rName), + resource.TestCheckResourceAttr(resourceName, "access_logs.0.interval", "5"), + resource.TestCheckResourceAttr(resourceName, "access_logs.0.enabled", "false"), + ), + }, + { + Config: testAccLoadBalancerConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckLoadBalancerExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "access_logs.#", "0"), ), }, }, }) } -func TestAccELBLoadBalancer_tags(t *testing.T) { +func TestAccELBLoadBalancer_generatesNameForZeroValue(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription + generatedNameRegexp := regexp.MustCompile("^tf-lb-") resourceName := "aws_elb.test" 
resource.ParallelTest(t, resource.TestCase{ @@ -297,36 +515,41 @@ func TestAccELBLoadBalancer_tags(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_tags1("key1", "value1"), + Config: testAccLoadBalancerConfig_zeroValueName, Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - testAccCheckLoadBalancerAttributes(&conf), - resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestMatchResourceAttr(resourceName, "name", generatedNameRegexp), ), }, + }, + }) +} + +func TestAccELBLoadBalancer_availabilityZones(t *testing.T) { + ctx := acctest.Context(t) + var conf elb.LoadBalancerDescription + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elb.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, elb.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), + Steps: []resource.TestStep{ { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccLoadBalancerConfig_tags2("key1", "value1updated", "key2", "value2"), + Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - testAccCheckLoadBalancerAttributes(&conf), - resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), - resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), - resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "availability_zones.#", "3"), ), }, + { - Config: testAccLoadBalancerConfig_tags1("key2", "value2"), + Config: testAccLoadBalancerConfig_availabilityZonesUpdate(rName), 
Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), - testAccCheckLoadBalancerAttributes(&conf), - resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "availability_zones.#", "2"), ), }, }, @@ -338,7 +561,7 @@ func TestAccELBLoadBalancer_ListenerSSLCertificateID_iamServerCertificate(t *tes var conf elb.LoadBalancerDescription key := acctest.TLSRSAPrivateKeyPEM(t, 2048) certificate := acctest.TLSRSAX509SelfSignedCertificatePEM(t, key, "example.com") - rName := fmt.Sprintf("tf-acctest-%s", sdkacctest.RandString(10)) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_elb.test" testCheck := func(*terraform.State) error { @@ -378,6 +601,7 @@ func TestAccELBLoadBalancer_ListenerSSLCertificateID_iamServerCertificate(t *tes func TestAccELBLoadBalancer_Swap_subnets(t *testing.T) { ctx := acctest.Context(t) var conf elb.LoadBalancerDescription + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_elb.test" resource.ParallelTest(t, resource.TestCase{ @@ -387,15 +611,21 @@ func TestAccELBLoadBalancer_Swap_subnets(t *testing.T) { CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccLoadBalancerConfig_subnets, + Config: testAccLoadBalancerConfig_subnets(rName), Check: resource.ComposeTestCheckFunc( testAccCheckLoadBalancerExists(ctx, resourceName, &conf), resource.TestCheckResourceAttr(resourceName, "subnets.#", "2"), ), }, - { - Config: testAccLoadBalancerConfig_subnetSwap, + Config: testAccLoadBalancerConfig_subnetSwap(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckLoadBalancerExists(ctx, "aws_elb.test", &conf), + resource.TestCheckResourceAttr("aws_elb.test", "subnets.#", "2"), + ), + }, + { + Config: testAccLoadBalancerConfig_subnetCompleteSwap(rName), Check: 
 				resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, "aws_elb.test", &conf),
 					resource.TestCheckResourceAttr("aws_elb.test", "subnets.#", "2"),
@@ -408,6 +638,7 @@ func TestAccELBLoadBalancer_Swap_subnets(t *testing.T) {
 func TestAccELBLoadBalancer_instanceAttaching(t *testing.T) {
 	ctx := acctest.Context(t)
 	var conf elb.LoadBalancerDescription
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	testCheckInstanceAttached := func(count int) resource.TestCheckFunc {
@@ -426,7 +657,7 @@ func TestAccELBLoadBalancer_instanceAttaching(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_basic,
+				Config: testAccLoadBalancerConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					testAccCheckLoadBalancerAttributes(&conf),
@@ -434,7 +665,7 @@ func TestAccELBLoadBalancer_instanceAttaching(t *testing.T) {
 			},
 			{
-				Config: testAccLoadBalancerConfig_newInstance,
+				Config: testAccLoadBalancerConfig_newInstance(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					testCheckInstanceAttached(1),
@@ -447,6 +678,7 @@ func TestAccELBLoadBalancer_instanceAttaching(t *testing.T) {
 func TestAccELBLoadBalancer_listener(t *testing.T) {
 	ctx := acctest.Context(t)
 	var conf elb.LoadBalancerDescription
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -456,7 +688,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_basic,
+				Config: testAccLoadBalancerConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "listener.#", "1"),
@@ -469,7 +701,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_listenerMultipleListeners,
+				Config: testAccLoadBalancerConfig_listenerMultipleListeners(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "listener.#", "2"),
@@ -488,7 +720,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_basic,
+				Config: testAccLoadBalancerConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "listener.#", "1"),
@@ -501,7 +733,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_listenerUpdate,
+				Config: testAccLoadBalancerConfig_listenerUpdate(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "listener.#", "1"),
@@ -516,7 +748,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 			{
 				PreConfig: func() {
 					// Simulate out of band listener removal
-					conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn()
+					conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx)
 					input := &elb.DeleteLoadBalancerListenersInput{
 						LoadBalancerName:  conf.LoadBalancerName,
 						LoadBalancerPorts: []*int64{aws.Int64(80)},
@@ -525,7 +757,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 						t.Fatalf("Error deleting listener: %s", err)
 					}
 				},
-				Config: testAccLoadBalancerConfig_basic,
+				Config: testAccLoadBalancerConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "listener.#", "1"),
@@ -540,7 +772,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 			{
 				PreConfig: func() {
 					// Simulate out of band listener addition
-					conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn()
+					conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx)
 					input := &elb.CreateLoadBalancerListenersInput{
 						LoadBalancerName: conf.LoadBalancerName,
 						Listeners: []*elb.Listener{
@@ -556,7 +788,7 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 						t.Fatalf("Error creating listener: %s", err)
 					}
 				},
-				Config: testAccLoadBalancerConfig_basic,
+				Config: testAccLoadBalancerConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "listener.#", "1"),
@@ -574,6 +806,8 @@ func TestAccELBLoadBalancer_listener(t *testing.T) {
 func TestAccELBLoadBalancer_healthCheck(t *testing.T) {
 	ctx := acctest.Context(t)
+	var conf elb.LoadBalancerDescription
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -583,15 +817,16 @@ func TestAccELBLoadBalancer_healthCheck(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_healthCheck,
+				Config: testAccLoadBalancerConfig_healthCheck(rName),
 				Check: resource.ComposeTestCheckFunc(
-					resource.TestCheckResourceAttr(
-						resourceName, "health_check.0.healthy_threshold", "5"),
+					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
+					resource.TestCheckResourceAttr(resourceName, "health_check.0.healthy_threshold", "5"),
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_healthCheckUpdate,
+				Config: testAccLoadBalancerConfig_healthCheckUpdate(rName),
 				Check: resource.ComposeTestCheckFunc(
+					testAccCheckLoadBalancerExists(ctx, resourceName, &conf),
 					resource.TestCheckResourceAttr(resourceName, "health_check.0.healthy_threshold", "10"),
 				),
 			},
@@ -601,6 +836,7 @@ func TestAccELBLoadBalancer_healthCheck(t *testing.T) {
 func TestAccELBLoadBalancer_timeout(t *testing.T) {
 	ctx := acctest.Context(t)
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -610,13 +846,13 @@ func TestAccELBLoadBalancer_timeout(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_idleTimeout,
+				Config: testAccLoadBalancerConfig_idleTimeout(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "idle_timeout", "200"),
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_idleTimeoutUpdate,
+				Config: testAccLoadBalancerConfig_idleTimeoutUpdate(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "idle_timeout", "400"),
 				),
@@ -627,6 +863,7 @@ func TestAccELBLoadBalancer_timeout(t *testing.T) {
 func TestAccELBLoadBalancer_connectionDraining(t *testing.T) {
 	ctx := acctest.Context(t)
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -636,21 +873,21 @@ func TestAccELBLoadBalancer_connectionDraining(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_connectionDraining,
+				Config: testAccLoadBalancerConfig_connectionDraining(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "connection_draining", "true"),
 					resource.TestCheckResourceAttr(resourceName, "connection_draining_timeout", "400"),
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_connectionDrainingUpdateTimeout,
+				Config: testAccLoadBalancerConfig_connectionDrainingUpdateTimeout(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "connection_draining", "true"),
 					resource.TestCheckResourceAttr(resourceName, "connection_draining_timeout", "600"),
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_connectionDrainingUpdateDisable,
+				Config: testAccLoadBalancerConfig_connectionDrainingUpdateDisable(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "connection_draining", "false"),
 				),
@@ -661,6 +898,7 @@ func TestAccELBLoadBalancer_connectionDraining(t *testing.T) {
 func TestAccELBLoadBalancer_securityGroups(t *testing.T) {
 	ctx := acctest.Context(t)
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -670,14 +908,14 @@ func TestAccELBLoadBalancer_securityGroups(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_basic,
+				Config: testAccLoadBalancerConfig_basic(rName),
 				Check: resource.ComposeTestCheckFunc(
 					// ELBs get a default security group
 					resource.TestCheckResourceAttr(resourceName, "security_groups.#", "1"),
 				),
 			},
 			{
-				Config: testAccLoadBalancerConfig_securityGroups,
+				Config: testAccLoadBalancerConfig_securityGroups(rName),
 				Check: resource.ComposeTestCheckFunc(
 					// Count should still be one as we swap in a custom security group
 					resource.TestCheckResourceAttr(resourceName, "security_groups.#", "1"),
@@ -689,6 +927,7 @@ func TestAccELBLoadBalancer_securityGroups(t *testing.T) {
 func TestAccELBLoadBalancer_desyncMitigationMode(t *testing.T) {
 	ctx := acctest.Context(t)
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -698,7 +937,7 @@ func TestAccELBLoadBalancer_desyncMitigationMode(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_desyncMitigationMode,
+				Config: testAccLoadBalancerConfig_desyncMitigationMode(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "desync_mitigation_mode", "strictest"),
 				),
@@ -714,6 +953,7 @@ func TestAccELBLoadBalancer_desyncMitigationMode(t *testing.T) {
 func TestAccELBLoadBalancer_desyncMitigationMode_update(t *testing.T) {
 	ctx := acctest.Context(t)
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_elb.test"
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -723,7 +963,7 @@ func TestAccELBLoadBalancer_desyncMitigationMode_update(t *testing.T) {
 		CheckDestroy:             testAccCheckLoadBalancerDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccLoadBalancerConfig_desyncMitigationModeUpdateDefault,
+				Config: testAccLoadBalancerConfig_desyncMitigationModeUpdateDefault(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "desync_mitigation_mode", "defensive"),
 				),
@@ -734,7 +974,7 @@ func TestAccELBLoadBalancer_desyncMitigationMode_update(t *testing.T) {
 				ImportStateVerify: true,
 			},
 			{
-				Config: testAccLoadBalancerConfig_desyncMitigationModeUpdateMonitor,
+				Config: testAccLoadBalancerConfig_desyncMitigationModeUpdateMonitor(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "desync_mitigation_mode", "monitor"),
 				),
@@ -745,7 +985,7 @@ func TestAccELBLoadBalancer_desyncMitigationMode_update(t *testing.T) {
 				ImportStateVerify: true,
 			},
 			{
-				Config: testAccLoadBalancerConfig_desyncMitigationModeUpdateDefault,
+				Config: testAccLoadBalancerConfig_desyncMitigationModeUpdateDefault(rName),
 				Check: resource.ComposeTestCheckFunc(
 					resource.TestCheckResourceAttr(resourceName, "desync_mitigation_mode", "defensive"),
 				),
@@ -754,229 +994,9 @@ func TestAccELBLoadBalancer_desyncMitigationMode_update(t *testing.T) {
 	})
 }
 
-// Unit test for listeners hash
-func TestLoadBalancerListenerHash(t *testing.T) {
-	t.Parallel()
-
-	cases := map[string]struct {
-		Left  map[string]interface{}
-		Right map[string]interface{}
-		Match bool
-	}{
-		"protocols are case insensitive": {
-			map[string]interface{}{
-				"instance_port":     80,
-				"instance_protocol": "TCP",
-				"lb_port":           80,
-				"lb_protocol":       "TCP",
-			},
-			map[string]interface{}{
-				"instance_port":     80,
-				"instance_protocol": "Tcp",
-				"lb_port":           80,
-				"lb_protocol":       "tcP",
-			},
-			true,
-		},
-	}
-
-	for tn, tc := range cases {
-		leftHash := tfelb.ListenerHash(tc.Left)
-		rightHash := tfelb.ListenerHash(tc.Right)
-		if leftHash == rightHash != tc.Match {
-			t.Fatalf("%s: expected match: %t, but did not get it", tn, tc.Match)
-		}
-	}
-}
-
-func TestValidLoadBalancerNameCannotBeginWithHyphen(t *testing.T) {
-	t.Parallel()
-
-	var n = "-Testing123"
-	_, errors := tfelb.ValidName(n, "SampleKey")
-
-	if len(errors) != 1 {
-		t.Fatalf("Expected the ELB Name to trigger a validation error")
-	}
-}
-
-func TestValidLoadBalancerNameCanBeAnEmptyString(t *testing.T) {
-	t.Parallel()
-
-	var n = ""
-	_, errors := tfelb.ValidName(n, "SampleKey")
-
-	if len(errors) != 0 {
-		t.Fatalf("Expected the ELB Name to pass validation")
-	}
-}
-
-func TestValidLoadBalancerNameCannotBeLongerThan32Characters(t *testing.T) {
-	t.Parallel()
-
-	var n = "Testing123dddddddddddddddddddvvvv"
-	_, errors := tfelb.ValidName(n, "SampleKey")
-
-	if len(errors) != 1 {
-		t.Fatalf("Expected the ELB Name to trigger a validation error")
-	}
-}
-
-func TestValidLoadBalancerNameCannotHaveSpecialCharacters(t *testing.T) {
-	t.Parallel()
-
-	var n = "Testing123%%"
-	_, errors := tfelb.ValidName(n, "SampleKey")
-
-	if len(errors) != 1 {
-		t.Fatalf("Expected the ELB Name to trigger a validation error")
-	}
-}
-
-func TestValidLoadBalancerNameCannotEndWithHyphen(t *testing.T) {
-	t.Parallel()
-
-	var n = "Testing123-"
-	_, errors := tfelb.ValidName(n, "SampleKey")
-
-	if len(errors) != 1 {
-		t.Fatalf("Expected the ELB Name to trigger a validation error")
-	}
-}
-
-func TestValidLoadBalancerAccessLogsInterval(t *testing.T) {
-	t.Parallel()
-
-	type testCases struct {
-		Value    int
-		ErrCount int
-	}
-
-	invalidCases := []testCases{
-		{
-			Value:    0,
-			ErrCount: 1,
-		},
-		{
-			Value:    10,
-			ErrCount: 1,
-		},
-		{
-			Value:    -1,
-			ErrCount: 1,
-		},
-	}
-
-	for _, tc := range invalidCases {
-		_, errors := tfelb.ValidAccessLogsInterval(tc.Value, "interval")
-		if len(errors) != tc.ErrCount {
-			t.Fatalf("Expected %q to trigger a validation error.", tc.Value)
-		}
-	}
-}
-
-func TestValidLoadBalancerHealthCheckTarget(t *testing.T) {
-	t.Parallel()
-
-	type testCase struct {
-		Value    string
-		ErrCount int
-	}
-
-	randomRunes := func(n int) string {
-		rand.Seed(time.Now().UTC().UnixNano())
-
-		// A complete set of modern Katakana characters.
-		runes := []rune("アイウエオ" +
-			"カキクケコガギグゲゴサシスセソザジズゼゾ" +
-			"タチツテトダヂヅデドナニヌネノハヒフヘホ" +
-			"バビブベボパピプペポマミムメモヤユヨラリ" +
-			"ルレロワヰヱヲン")
-
-		s := make([]rune, n)
-		for i := range s {
-			s[i] = runes[rand.Intn(len(runes))]
-		}
-		return string(s)
-	}
-
-	validCases := []testCase{
-		{
-			Value:    "TCP:1234",
-			ErrCount: 0,
-		},
-		{
-			Value:    "http:80/test",
-			ErrCount: 0,
-		},
-		{
-			Value:    fmt.Sprintf("HTTP:8080/%s", randomRunes(5)),
-			ErrCount: 0,
-		},
-		{
-			Value:    "SSL:8080",
-			ErrCount: 0,
-		},
-	}
-
-	for _, tc := range validCases {
-		_, errors := tfelb.ValidHeathCheckTarget(tc.Value, "target")
-		if len(errors) != tc.ErrCount {
-			t.Fatalf("Expected %q not to trigger a validation error.", tc.Value)
-		}
-	}
-
-	invalidCases := []testCase{
-		{
-			Value:    "",
-			ErrCount: 1,
-		},
-		{
-			Value:    "TCP:",
-			ErrCount: 1,
-		},
-		{
-			Value:    "TCP:1234/",
-			ErrCount: 1,
-		},
-		{
-			Value:    "SSL:8080/",
-			ErrCount: 1,
-		},
-		{
-			Value:    "HTTP:8080",
-			ErrCount: 1,
-		},
-		{
-			Value:    "incorrect-value",
-			ErrCount: 1,
-		},
-		{
-			Value:    "TCP:123456",
-			ErrCount: 1,
-		},
-		{
-			Value:    "incorrect:80/",
-			ErrCount: 1,
-		},
-		{
-			Value: fmt.Sprintf("HTTP:8080/%s%s",
-				sdkacctest.RandStringFromCharSet(512, sdkacctest.CharSetAlpha), randomRunes(512)),
-			ErrCount: 1,
-		},
-	}
-
-	for _, tc := range invalidCases {
-		_, errors := tfelb.ValidHeathCheckTarget(tc.Value, "target")
-		if len(errors) != tc.ErrCount {
-			t.Fatalf("Expected %q to trigger a validation error.", tc.Value)
-		}
-	}
-}
-
 func testAccCheckLoadBalancerDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_elb" {
@@ -1011,7 +1031,7 @@ func testAccCheckLoadBalancerExists(ctx context.Context, n string, v *elb.LoadBa
 			return fmt.Errorf("No ELB Classic Load Balancer ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn(ctx)
 
 		output, err := tfelb.FindLoadBalancerByName(ctx, conn, rs.Primary.ID)
 
@@ -1049,19 +1069,13 @@ func testAccCheckLoadBalancerAttributes(conf *elb.LoadBalancerDescription) resou
 	}
 }
 
-const testAccLoadBalancerConfig_basic = `
-data "aws_availability_zones" "available" {
-  state = "available"
-
-  filter {
-    name   = "opt-in-status"
-    values = ["opt-in-not-required"]
-  }
-}
-
+func testAccLoadBalancerConfig_basic(rName string) string {
+	return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(`
 resource "aws_elb" "test" {
   availability_zones = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1], data.aws_availability_zones.available.names[2]]
 
+  name = %[1]q
+
   listener {
     instance_port     = 8000
     instance_protocol = "http"
@@ -1071,22 +1085,16 @@ resource "aws_elb" "test" {
 
   cross_zone_load_balancing = true
 }
-`
-
-func testAccLoadBalancerConfig_tags1(tagKey1, tagValue1 string) string {
-	return fmt.Sprintf(`
-data "aws_availability_zones" "available" {
-  state = "available"
-
-  filter {
-    name   = "opt-in-status"
-    values = ["opt-in-not-required"]
-  }
+`, rName))
 }
 
+func testAccLoadBalancerConfig_tags1(rName, tagKey1, tagValue1 string) string {
+	return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(`
 resource "aws_elb" "test" {
   availability_zones = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1], data.aws_availability_zones.available.names[2]]
 
+  name = %[1]q
+
   listener {
     instance_port     = 8000
     instance_protocol = "http"
@@ -1095,28 +1103,21 @@ resource "aws_elb" "test" {
   }
 
   tags = {
-    %[1]q = %[2]q
+    %[2]q = %[3]q
   }
 
   cross_zone_load_balancing = true
 }
-`, tagKey1, tagValue1)
-}
-
-func testAccLoadBalancerConfig_tags2(tagKey1, tagValue1, tagKey2, tagValue2 string) string {
-	return fmt.Sprintf(`
-data "aws_availability_zones" "available" {
-  state = "available"
-
-  filter {
-    name   = "opt-in-status"
-    values = ["opt-in-not-required"]
-  }
+`, rName, tagKey1, tagValue1))
 }
 
+func testAccLoadBalancerConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string {
+	return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(`
 resource "aws_elb" "test" {
   availability_zones = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1], data.aws_availability_zones.available.names[2]]
 
+  name = %[1]q
+
   listener {
     instance_port     = 8000
     instance_protocol = "http"
@@ -1125,26 +1126,17 @@ resource "aws_elb" "test" {
   }
 
   tags = {
-    %[1]q = %[2]q
-    %[3]q = %[4]q
+    %[2]q = %[3]q
+    %[4]q = %[5]q
   }
 
   cross_zone_load_balancing = true
 }
-`, tagKey1, tagValue1, tagKey2, tagValue2)
+`, rName, tagKey1, tagValue1, tagKey2, tagValue2))
 }
 
 func testAccLoadBalancerConfig_fullRangeOfCharacters(rName string) string {
-	return fmt.Sprintf(`
-data "aws_availability_zones" "available" {
-  state = "available"
-
-  filter {
-    name   = "opt-in-status"
-    values = ["opt-in-not-required"]
-  }
-}
-
+	return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(`
 resource "aws_elb" "test" {
   name               = %[1]q
   availability_zones = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1], data.aws_availability_zones.available.names[2]]
@@ -1156,39 +1148,53 @@ resource "aws_elb" "test" {
     lb_protocol       = "http"
   }
 }
-`, rName)
+`, rName))
 }
 
-const testAccLoadBalancerConfig_accessLogs = `
-data "aws_availability_zones" "available" {
-  state = "available"
+func testAccLoadBalancerConfig_baseAccessLogs(rName string) string {
+	return fmt.Sprintf(`
+data "aws_elb_service_account" "current" {}
+
+data "aws_partition" "current" {}
 
-  filter {
-    name   = "opt-in-status"
-    values = ["opt-in-not-required"]
-  }
+resource "aws_s3_bucket" "accesslogs_bucket" {
+  bucket        = %[1]q
+  force_destroy = true
 }
 
-resource "aws_elb" "test" {
-  availability_zones = [data.aws_availability_zones.available.names[0], data.aws_availability_zones.available.names[1], data.aws_availability_zones.available.names[2]]
-
-  listener {
-    instance_port     = 8000
-    instance_protocol = "http"
-    lb_port           = 80
-    lb_protocol       = "http"
-  }
+resource "aws_s3_bucket_policy" "test" {
+  bucket = aws_s3_bucket.accesslogs_bucket.id
+  policy = <
 0 {
 		return tags
@@ -103,17 +103,17 @@ func GetTagsIn(ctx context.Context) []*elb.Tag {
 	return nil
 }
 
-// SetTagsOut sets elb service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*elb.Tag) {
+// setTagsOut sets elb service tags in Context.
+func setTagsOut(ctx context.Context, tags []*elb.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates elb service tags.
+// updateTags updates elb service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn elbiface.ELBAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn elbiface.ELBAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -153,5 +153,5 @@ func UpdateTags(ctx context.Context, conn elbiface.ELBAPI, identifier string, ol
 // UpdateTags updates elb service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).ELBConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).ELBConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/elb/validate.go b/internal/service/elb/validate.go
index f670921c4b1..0481cf8b296 100644
--- a/internal/service/elb/validate.go
+++ b/internal/service/elb/validate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elb
 
 import (
diff --git a/internal/service/elb/validate_test.go b/internal/service/elb/validate_test.go
index 686b0f92394..4f74417488a 100644
--- a/internal/service/elb/validate_test.go
+++ b/internal/service/elb/validate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elb
 
 import (
diff --git a/internal/service/elbv2/const.go b/internal/service/elbv2/const.go
index 742b6648328..1a0c5b661ce 100644
--- a/internal/service/elbv2/const.go
+++ b/internal/service/elbv2/const.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 const (
diff --git a/internal/service/elbv2/find.go b/internal/service/elbv2/find.go
index 297f6346580..fa1ecb3814b 100644
--- a/internal/service/elbv2/find.go
+++ b/internal/service/elbv2/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
diff --git a/internal/service/elbv2/generate.go b/internal/service/elbv2/generate.go
index 118a07db815..d3fc79cf148 100644
--- a/internal/service/elbv2/generate.go
+++ b/internal/service/elbv2/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=DescribeTags -ListTagsInIDElem=ResourceArns -ListTagsInIDNeedSlice=yes -ListTagsOutTagsElem=TagDescriptions[0].Tags -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=ResourceArns -TagInIDNeedSlice=yes -UntagOp=RemoveTags -UpdateTags -CreateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package elbv2
diff --git a/internal/service/elbv2/hosted_zone_id_data_source.go b/internal/service/elbv2/hosted_zone_id_data_source.go
index c391c43488e..a7d75f86afa 100644
--- a/internal/service/elbv2/hosted_zone_id_data_source.go
+++ b/internal/service/elbv2/hosted_zone_id_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
diff --git a/internal/service/elbv2/hosted_zone_id_data_source_test.go b/internal/service/elbv2/hosted_zone_id_data_source_test.go
index 0c2458cc6a6..90271f25673 100644
--- a/internal/service/elbv2/hosted_zone_id_data_source_test.go
+++ b/internal/service/elbv2/hosted_zone_id_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2_test
 
 import (
diff --git a/internal/service/elbv2/id.go b/internal/service/elbv2/id.go
index 364ea85be16..74338265616 100644
--- a/internal/service/elbv2/id.go
+++ b/internal/service/elbv2/id.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
diff --git a/internal/service/elbv2/listener.go b/internal/service/elbv2/listener.go
index 721b3e50299..be8d94e97e5 100644
--- a/internal/service/elbv2/listener.go
+++ b/internal/service/elbv2/listener.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
@@ -394,12 +397,12 @@ func suppressIfDefaultActionTypeNot(t string) schema.SchemaDiffSuppressFunc {
 
 func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	lbARN := d.Get("load_balancer_arn").(string)
 	input := &elbv2.CreateListenerInput{
 		LoadBalancerArn: aws.String(lbARN),
-		Tags:            GetTagsIn(ctx),
+		Tags:            getTagsIn(ctx),
 	}
 
 	if alpnPolicy, ok := d.GetOk("alpn_policy"); ok {
@@ -461,7 +464,7 @@ func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta in
 	d.SetId(aws.StringValue(output.Listeners[0].ListenerArn))
 
 	// For partitions not supporting tag-on-create, attempt tag after create.
-	if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 {
+	if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 {
 		err := createTags(ctx, conn, d.Id(), tags)
 
 		// If default tags only, continue. Otherwise, error.
@@ -482,7 +485,7 @@ func resourceListenerRead(ctx context.Context, d *schema.ResourceData, meta inte
 	const (
 		loadBalancerListenerReadTimeout = 2 * time.Minute
 	)
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	var listener *elbv2.Listener
 
@@ -554,7 +557,7 @@ func resourceListenerUpdate(ctx context.Context, d *schema.ResourceData, meta in
 	const (
 		loadBalancerListenerUpdateTimeout = 5 * time.Minute
 	)
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &elbv2.ModifyListenerInput{
@@ -620,7 +623,7 @@ func resourceListenerUpdate(ctx context.Context, d *schema.ResourceData, meta in
 
 func resourceListenerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	_, err := conn.DeleteListenerWithContext(ctx, &elbv2.DeleteListenerInput{
 		ListenerArn: aws.String(d.Id()),
diff --git a/internal/service/elbv2/listener_certificate.go b/internal/service/elbv2/listener_certificate.go
index ed0e010308c..79d4a0c00e1 100644
--- a/internal/service/elbv2/listener_certificate.go
+++ b/internal/service/elbv2/listener_certificate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
@@ -54,7 +57,7 @@ const (
 
 func resourceListenerCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	listenerArn := d.Get("listener_arn").(string)
 	certificateArn := d.Get("certificate_arn").(string)
@@ -98,7 +101,7 @@ func resourceListenerCertificateCreate(ctx context.Context, d *schema.ResourceDa
 
 func resourceListenerCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	listenerArn, certificateArn, err := listenerCertificateParseID(d.Id())
 	if err != nil {
@@ -142,7 +145,7 @@ func resourceListenerCertificateRead(ctx context.Context, d *schema.ResourceData
 
 func resourceListenerCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	certificateArn := d.Get("certificate_arn").(string)
 	listenerArn := d.Get("listener_arn").(string)
diff --git a/internal/service/elbv2/listener_certificate_test.go b/internal/service/elbv2/listener_certificate_test.go
index 8ca27a25ca8..2b83d26a3b4 100644
--- a/internal/service/elbv2/listener_certificate_test.go
+++ b/internal/service/elbv2/listener_certificate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2_test
 
 import (
@@ -182,7 +185,7 @@ func TestAccELBV2ListenerCertificate_disappears(t *testing.T) {
 
 func testAccCheckListenerCertificateDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_lb_listener_certificate" {
diff --git a/internal/service/elbv2/listener_data_source.go b/internal/service/elbv2/listener_data_source.go
index a6b55364ad4..ac188f7fab2 100644
--- a/internal/service/elbv2/listener_data_source.go
+++ b/internal/service/elbv2/listener_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
@@ -275,7 +278,7 @@ func DataSourceListener() *schema.Resource {
 
 func dataSourceListenerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &elbv2.DescribeListenersInput{}
@@ -348,7 +351,7 @@ func dataSourceListenerRead(ctx context.Context, d *schema.ResourceData, meta in
 		return sdkdiag.AppendErrorf(diags, "setting default_action: %s", err)
 	}
 
-	tags, err := ListTags(ctx, conn, d.Id())
+	tags, err := listTags(ctx, conn, d.Id())
 
 	if verify.ErrorISOUnsupported(conn.PartitionID, err) {
 		log.Printf("[WARN] Unable to list tags for ELBv2 Listener %s: %s", d.Id(), err)
diff --git a/internal/service/elbv2/listener_data_source_test.go b/internal/service/elbv2/listener_data_source_test.go
index 84b5cce9b4f..97c7f2d3128 100644
--- a/internal/service/elbv2/listener_data_source_test.go
+++ b/internal/service/elbv2/listener_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2_test
 
 import (
diff --git a/internal/service/elbv2/listener_rule.go b/internal/service/elbv2/listener_rule.go
index 11d748658e5..ab484a1348c 100644
--- a/internal/service/elbv2/listener_rule.go
+++ b/internal/service/elbv2/listener_rule.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2
 
 import (
@@ -496,12 +499,12 @@ func suppressIfActionTypeNot(t string) schema.SchemaDiffSuppressFunc {
 
 func resourceListenerRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	listenerARN := d.Get("listener_arn").(string)
 	input := &elbv2.CreateRuleInput{
 		ListenerArn: aws.String(listenerARN),
-		Tags:        GetTagsIn(ctx),
+		Tags:        getTagsIn(ctx),
 	}
 
 	var err error
@@ -532,7 +535,7 @@ func resourceListenerRuleCreate(ctx context.Context, d *schema.ResourceData, met
 	d.SetId(aws.StringValue(output.Rules[0].RuleArn))
 
 	// Post-create tagging supported in some partitions
-	if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 {
+	if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 {
 		err := createTags(ctx, conn, d.Id(), tags)
 
 		// If default tags only, continue. Otherwise, error.
@@ -550,7 +553,7 @@ func resourceListenerRuleCreate(ctx context.Context, d *schema.ResourceData, met
 
 func resourceListenerRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	var resp *elbv2.DescribeRulesOutput
 	var req = &elbv2.DescribeRulesInput{
@@ -773,7 +776,7 @@ func resourceListenerRuleRead(ctx context.Context, d *schema.ResourceData, meta 
 
 func resourceListenerRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	if d.HasChangesExcept("tags", "tags_all") {
 		if d.HasChange("priority") {
@@ -832,7 +835,7 @@ func resourceListenerRuleUpdate(ctx context.Context, d *schema.ResourceData, met
 
 func resourceListenerRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).ELBV2Conn()
+	conn := meta.(*conns.AWSClient).ELBV2Conn(ctx)
 
 	_, err := conn.DeleteRuleWithContext(ctx, &elbv2.DeleteRuleInput{
 		RuleArn: aws.String(d.Id()),
diff --git a/internal/service/elbv2/listener_rule_test.go b/internal/service/elbv2/listener_rule_test.go
index b6223650454..8924964a2b4 100644
--- a/internal/service/elbv2/listener_rule_test.go
+++ b/internal/service/elbv2/listener_rule_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2_test
 
 import (
@@ -1359,7 +1362,7 @@ func testAccCheckListenerRuleActionOrderDisappears(ctx context.Context, rule *el
 			return fmt.Errorf("Unable to find action order %d from actions: %#v", actionOrderToDelete, rule.Actions)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		input := &elbv2.ModifyRuleInput{
 			Actions: newActions,
@@ -1403,7 +1406,7 @@ func testAccCheckListenerRuleExists(ctx context.Context, n string, res *elbv2.Ru
 			return errors.New("No Listener Rule ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		describe, err := conn.DescribeRulesWithContext(ctx, &elbv2.DescribeRulesInput{
 			RuleArns: []*string{aws.String(rs.Primary.ID)},
@@ -1425,7 +1428,7 @@ func testAccCheckListenerRuleExists(ctx context.Context, n string, res *elbv2.Ru
 
 func testAccCheckListenerRuleDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_lb_listener_rule" && rs.Type != "aws_alb_listener_rule" {
diff --git a/internal/service/elbv2/listener_test.go b/internal/service/elbv2/listener_test.go
index 3b2c4ba7a4f..e1909088709 100644
--- a/internal/service/elbv2/listener_test.go
+++ b/internal/service/elbv2/listener_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package elbv2_test
 
 import (
@@ -620,7 +623,7 @@ func testAccCheckListenerDefaultActionOrderDisappears(ctx context.Context, liste
 			return fmt.Errorf("Unable to find default action order %d from default actions: %#v", actionOrderToDelete, listener.DefaultActions)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		input := &elbv2.ModifyListenerInput{
 			DefaultActions: newDefaultActions,
@@ -644,7 +647,7 @@ func testAccCheckListenerExists(ctx context.Context, n string, res *elbv2.Listen
 			return errors.New("No Listener ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		listener, err := tfelbv2.FindListenerByARN(ctx, conn, rs.Primary.ID)
 
@@ -663,7 +666,7 @@ func testAccCheckListenerExists(ctx context.Context, n string, res *elbv2.Listen
 
 func testAccCheckListenerDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_lb_listener" && rs.Type != "aws_alb_listener" {
diff --git a/internal/service/elbv2/load_balancer.go b/internal/service/elbv2/load_balancer.go
index 6f5ac85b83b..e038ba169b5 100644
--- a/internal/service/elbv2/load_balancer.go
+++ b/internal/service/elbv2/load_balancer.go
@@ -1,6 +1,9 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package elbv2 -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "bytes" "context" "errors" @@ -303,7 +306,7 @@ func suppressIfLBTypeNot(t string) schema.SchemaDiffSuppressFunc { func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) var name string if v, ok := d.GetOk("name"); ok { @@ -318,7 +321,7 @@ func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, met lbType := d.Get("load_balancer_type").(string) input := &elbv2.CreateLoadBalancerInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(lbType), } @@ -366,7 +369,7 @@ func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, met } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -384,7 +387,7 @@ func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, met func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) lb, err := FindLoadBalancerByARN(ctx, conn, d.Id()) @@ -406,7 +409,7 @@ func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) attributes := make([]*elbv2.LoadBalancerAttribute, 0) @@ -612,7 +615,7 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met func resourceLoadBalancerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) log.Printf("[INFO] Deleting LB: %s", d.Id()) @@ -624,7 +627,7 @@ func resourceLoadBalancerDelete(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "deleting LB: %s", err) } - ec2conn := meta.(*conns.AWSClient).EC2Conn() + ec2conn := meta.(*conns.AWSClient).EC2Conn(ctx) err := cleanupALBNetworkInterfaces(ctx, ec2conn, d.Id()) if err != nil { @@ -897,7 +900,7 @@ func SuffixFromARN(arn *string) string { // flattenResource takes a *elbv2.LoadBalancer and populates all respective resource fields. 
func flattenResource(ctx context.Context, d *schema.ResourceData, meta interface{}, lb *elbv2.LoadBalancer) error { - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) d.Set("arn", lb.LoadBalancerArn) d.Set("arn_suffix", SuffixFromARN(lb.LoadBalancerArn)) diff --git a/internal/service/elbv2/load_balancer_data_source.go b/internal/service/elbv2/load_balancer_data_source.go index 70bacf2e574..77e1b5cbd73 100644 --- a/internal/service/elbv2/load_balancer_data_source.go +++ b/internal/service/elbv2/load_balancer_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( @@ -181,7 +184,7 @@ func DataSourceLoadBalancer() *schema.Resource { func dataSourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -205,7 +208,7 @@ func dataSourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, met for _, loadBalancer := range results { arn := aws.StringValue(loadBalancer.LoadBalancerArn) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if tfawserr.ErrCodeEquals(err, elbv2.ErrCodeLoadBalancerNotFoundException) { continue @@ -317,7 +320,7 @@ func dataSourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "setting access_logs: %s", err) } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if verify.ErrorISOUnsupported(conn.PartitionID, err) { log.Printf("[WARN] Unable to list tags for ELBv2 Load Balancer %s: %s", d.Id(), err) diff --git a/internal/service/elbv2/load_balancer_data_source_test.go 
b/internal/service/elbv2/load_balancer_data_source_test.go index c7aadfe5f47..ca77fe5329e 100644 --- a/internal/service/elbv2/load_balancer_data_source_test.go +++ b/internal/service/elbv2/load_balancer_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2_test import ( diff --git a/internal/service/elbv2/load_balancer_test.go b/internal/service/elbv2/load_balancer_test.go index 76de8af7afc..5bd0c2d169c 100644 --- a/internal/service/elbv2/load_balancer_test.go +++ b/internal/service/elbv2/load_balancer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2_test import ( @@ -1533,7 +1536,7 @@ func testAccCheckLoadBalancerExists(ctx context.Context, n string, v *elbv2.Load return errors.New("No ELBv2 Load Balancer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) output, err := tfelbv2.FindLoadBalancerByARN(ctx, conn, rs.Primary.ID) @@ -1558,7 +1561,7 @@ func testAccCheckLoadBalancerAttribute(ctx context.Context, n, key, value string return errors.New("No LB ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) attributesResp, err := conn.DescribeLoadBalancerAttributesWithContext(ctx, &elbv2.DescribeLoadBalancerAttributesInput{ LoadBalancerArn: aws.String(rs.Primary.ID), }) @@ -1580,7 +1583,7 @@ func testAccCheckLoadBalancerAttribute(ctx context.Context, n, key, value string func testAccCheckLoadBalancerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lb" && rs.Type != "aws_alb" { @@ -1605,7 +1608,7 @@ func 
testAccCheckLoadBalancerDestroy(ctx context.Context) resource.TestCheckFunc } func testAccPreCheckGatewayLoadBalancer(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) input := &elbv2.DescribeAccountLimitsInput{} diff --git a/internal/service/elbv2/load_balancers_data_source.go b/internal/service/elbv2/load_balancers_data_source.go index a357dc8dce7..7693e68a15e 100644 --- a/internal/service/elbv2/load_balancers_data_source.go +++ b/internal/service/elbv2/load_balancers_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( @@ -34,7 +37,7 @@ const ( ) func dataSourceLoadBalancersRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig results, err := FindLoadBalancers(ctx, conn, &elbv2.DescribeLoadBalancersInput{}) @@ -49,7 +52,7 @@ func dataSourceLoadBalancersRead(ctx context.Context, d *schema.ResourceData, me for _, loadBalancer := range results { arn := aws.StringValue(loadBalancer.LoadBalancerArn) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if tfawserr.ErrCodeEquals(err, elbv2.ErrCodeLoadBalancerNotFoundException) { continue diff --git a/internal/service/elbv2/load_balancers_data_source_test.go b/internal/service/elbv2/load_balancers_data_source_test.go index 8ea3d59dbde..819fc9cbc3f 100644 --- a/internal/service/elbv2/load_balancers_data_source_test.go +++ b/internal/service/elbv2/load_balancers_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elbv2_test import ( diff --git a/internal/service/elbv2/service_package_gen.go b/internal/service/elbv2/service_package_gen.go index 4e9a66fb0ba..57e5abc0a71 100644 --- a/internal/service/elbv2/service_package_gen.go +++ b/internal/service/elbv2/service_package_gen.go @@ -5,6 +5,10 @@ package elbv2 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + elbv2_sdkv1 "github.com/aws/aws-sdk-go/service/elbv2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -145,4 +149,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ELBV2 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*elbv2_sdkv1.ELBV2, error) { + sess := config["session"].(*session_sdkv1.Session) + + return elbv2_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/elbv2/sweep.go b/internal/service/elbv2/sweep.go index 27003f76798..4fda2fb336a 100644 --- a/internal/service/elbv2/sweep.go +++ b/internal/service/elbv2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/elbv2" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -42,11 +44,11 @@ func init() { func sweepLoadBalancers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).ELBV2Conn() + conn := client.ELBV2Conn(ctx) var sweeperErrs *multierror.Error err = conn.DescribeLoadBalancersPagesWithContext(ctx, &elbv2.DescribeLoadBalancersInput{}, func(page *elbv2.DescribeLoadBalancersOutput, lastPage bool) bool { @@ -82,11 +84,11 @@ func sweepLoadBalancers(region string) error { func sweepTargetGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).ELBV2Conn() + conn := client.ELBV2Conn(ctx) err = conn.DescribeTargetGroupsPagesWithContext(ctx, &elbv2.DescribeTargetGroupsInput{}, func(page *elbv2.DescribeTargetGroupsOutput, lastPage bool) bool { if page == nil || len(page.TargetGroups) == 0 { @@ -119,12 +121,12 @@ func sweepTargetGroups(region string) error { func sweepListeners(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).ELBV2Conn() + conn := client.ELBV2Conn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs 
*multierror.Error @@ -169,7 +171,7 @@ func sweepListeners(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing ELBv2 Listeners for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping ELBv2 Listeners for %s: %w", region, err)) } diff --git a/internal/service/elbv2/tags_gen.go b/internal/service/elbv2/tags_gen.go index 70f6e97452c..6cc5877e01c 100644 --- a/internal/service/elbv2/tags_gen.go +++ b/internal/service/elbv2/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists elbv2 service tags. +// listTags lists elbv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string) (tftags.KeyValueTags, error) { input := &elbv2.DescribeTagsInput{ ResourceArns: aws.StringSlice([]string{identifier}), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string) // ListTags lists elbv2 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ELBV2Conn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ELBV2Conn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*elbv2.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns elbv2 service tags from Context. +// getTagsIn returns elbv2 service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*elbv2.Tag { +func getTagsIn(ctx context.Context) []*elbv2.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*elbv2.Tag { return nil } -// SetTagsOut sets elbv2 service tags in Context. -func SetTagsOut(ctx context.Context, tags []*elbv2.Tag) { +// setTagsOut sets elbv2 service tags in Context. +func setTagsOut(ctx context.Context, tags []*elbv2.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates elbv2 service tags. +// updateTags updates elbv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn elbv2iface.ELBV2API, identifier string // UpdateTags updates elbv2 service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ELBV2Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ELBV2Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/elbv2/target_group.go b/internal/service/elbv2/target_group.go index 68fdbaecb97..a99c405f7b5 100644 --- a/internal/service/elbv2/target_group.go +++ b/internal/service/elbv2/target_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( @@ -21,9 +24,9 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -336,7 +339,7 @@ func suppressIfTargetType(t string) schema.SchemaDiffSuppressFunc { func resourceTargetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) var groupName string if v, ok := d.GetOk("name"); ok { @@ -359,7 +362,7 @@ func resourceTargetGroupCreate(ctx context.Context, d *schema.ResourceData, meta input := &elbv2.CreateTargetGroupInput{ Name: aws.String(groupName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TargetType: aws.String(d.Get("target_type").(string)), } @@ -598,7 +601,7 @@ func resourceTargetGroupCreate(ctx context.Context, d 
*schema.ResourceData, meta } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. @@ -616,7 +619,7 @@ func resourceTargetGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindTargetGroupByARN(ctx, conn, d.Id()) @@ -640,7 +643,7 @@ func resourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta i func resourceTargetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) if d.HasChange("health_check") { var params *elbv2.ModifyTargetGroupInput @@ -835,7 +838,7 @@ func resourceTargetGroupDelete(ctx context.Context, d *schema.ResourceData, meta const ( targetGroupDeleteTimeout = 2 * time.Minute ) - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) input := &elbv2.DeleteTargetGroupInput{ TargetGroupArn: aws.String(d.Id()), @@ -1019,7 +1022,7 @@ func TargetGroupSuffixFromARN(arn *string) string { // flattenTargetGroupResource takes a *elbv2.TargetGroup and populates all respective resource fields. 
func flattenTargetGroupResource(ctx context.Context, d *schema.ResourceData, meta interface{}, targetGroup *elbv2.TargetGroup) error { - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) d.Set("arn", targetGroup.TargetGroupArn) d.Set("arn_suffix", TargetGroupSuffixFromARN(targetGroup.TargetGroupArn)) @@ -1164,7 +1167,7 @@ func flattenTargetGroupStickiness(attributes []*elbv2.TargetGroupAttribute) ([]i if sType, ok := m["type"].(string); !ok || sType == "app_cookie" { duration, err := strconv.Atoi(aws.StringValue(attr.Value)) if err != nil { - return nil, fmt.Errorf("Error converting stickiness.app_cookie.duration_seconds to int: %s", aws.StringValue(attr.Value)) + return nil, fmt.Errorf("converting stickiness.app_cookie.duration_seconds to int: %s", aws.StringValue(attr.Value)) } m["cookie_duration"] = duration } diff --git a/internal/service/elbv2/target_group_attachment.go b/internal/service/elbv2/target_group_attachment.go index 1830c9d3c8d..70433ac3a2b 100644 --- a/internal/service/elbv2/target_group_attachment.go +++ b/internal/service/elbv2/target_group_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( @@ -56,7 +59,7 @@ func ResourceTargetGroupAttachment() *schema.Resource { func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) target := &elbv2.TargetDescription{ Id: aws.String(d.Get("target_id").(string)), @@ -82,7 +85,7 @@ func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta _, err := conn.RegisterTargetsWithContext(ctx, params) if tfawserr.ErrCodeEquals(err, "InvalidTarget") { - return retry.RetryableError(fmt.Errorf("Error attaching instance to LB, retrying: %s", err)) + return retry.RetryableError(fmt.Errorf("attaching instance to LB, retrying: %s", err)) } if err != nil { @@ -106,7 +109,7 @@ func resourceAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta func resourceAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) target := &elbv2.TargetDescription{ Id: aws.String(d.Get("target_id").(string)), @@ -137,7 +140,7 @@ func resourceAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta // target, so there is no work to do beyond ensuring that the target and group still exist. 
func resourceAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) target := &elbv2.TargetDescription{ Id: aws.String(d.Get("target_id").(string)), diff --git a/internal/service/elbv2/target_group_attachment_test.go b/internal/service/elbv2/target_group_attachment_test.go index 7a256f5d0c9..7412ac9f716 100644 --- a/internal/service/elbv2/target_group_attachment_test.go +++ b/internal/service/elbv2/target_group_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2_test import ( @@ -145,7 +148,7 @@ func testAccCheckTargetGroupAttachmentDisappears(ctx context.Context, n string) return fmt.Errorf("Attachment not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) targetGroupArn := rs.Primary.Attributes["target_group_arn"] target := &elbv2.TargetDescription{ @@ -183,7 +186,7 @@ func testAccCheckTargetGroupAttachmentExists(ctx context.Context, n string) reso return errors.New("No Target Group Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) _, hasPort := rs.Primary.Attributes["port"] targetGroupArn := rs.Primary.Attributes["target_group_arn"] @@ -215,7 +218,7 @@ func testAccCheckTargetGroupAttachmentExists(ctx context.Context, n string) reso func testAccCheckTargetGroupAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lb_target_group_attachment" && rs.Type != "aws_alb_target_group_attachment" 
{ diff --git a/internal/service/elbv2/target_group_data_source.go b/internal/service/elbv2/target_group_data_source.go index a011be85096..abf14508c53 100644 --- a/internal/service/elbv2/target_group_data_source.go +++ b/internal/service/elbv2/target_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( @@ -169,7 +172,7 @@ func DataSourceTargetGroup() *schema.Resource { func dataSourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ELBV2Conn() + conn := meta.(*conns.AWSClient).ELBV2Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -192,7 +195,7 @@ func dataSourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta for _, targetGroup := range results { arn := aws.StringValue(targetGroup.TargetGroupArn) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if tfawserr.ErrCodeEquals(err, elbv2.ErrCodeTargetGroupNotFoundException) { continue @@ -302,7 +305,7 @@ func dataSourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "setting stickiness: %s", err) } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if verify.ErrorISOUnsupported(conn.PartitionID, err) { log.Printf("[WARN] Unable to list tags for ELBv2 Target Group %s: %s", d.Id(), err) diff --git a/internal/service/elbv2/target_group_data_source_test.go b/internal/service/elbv2/target_group_data_source_test.go index da322b7b73b..64cea03ae3c 100644 --- a/internal/service/elbv2/target_group_data_source_test.go +++ b/internal/service/elbv2/target_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elbv2_test import ( diff --git a/internal/service/elbv2/target_group_test.go b/internal/service/elbv2/target_group_test.go index 97f5a07c2a6..e767f62418b 100644 --- a/internal/service/elbv2/target_group_test.go +++ b/internal/service/elbv2/target_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2_test import ( @@ -2281,7 +2284,7 @@ func TestAccELBV2TargetGroup_Name_noDuplicates(t *testing.T) { func testAccCheckTargetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lb_target_group" && rs.Type != "aws_alb_target_group" { @@ -2316,7 +2319,7 @@ func testAccCheckTargetGroupExists(ctx context.Context, n string, v *elbv2.Targe return errors.New("No ELBv2 Target Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBV2Conn(ctx) output, err := tfelbv2.FindTargetGroupByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/elbv2/validate.go b/internal/service/elbv2/validate.go index 60e120102c8..61013b0cb12 100644 --- a/internal/service/elbv2/validate.go +++ b/internal/service/elbv2/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( diff --git a/internal/service/elbv2/validate_test.go b/internal/service/elbv2/validate_test.go index e7a62b134bb..6ebbab39df0 100644 --- a/internal/service/elbv2/validate_test.go +++ b/internal/service/elbv2/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package elbv2 import ( diff --git a/internal/service/emr/block_public_access_configuration.go b/internal/service/emr/block_public_access_configuration.go index 173b8625b62..bed58c5d714 100644 --- a/internal/service/emr/block_public_access_configuration.go +++ b/internal/service/emr/block_public_access_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -61,7 +64,7 @@ const ( ) func resourceBlockPublicAccessConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) blockPublicAccessConfiguration := &emr.BlockPublicAccessConfiguration{} @@ -85,7 +88,7 @@ func resourceBlockPublicAccessConfigurationCreate(ctx context.Context, d *schema } func resourceBlockPublicAccessConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) out, err := FindBlockPublicAccessConfiguration(ctx, conn) @@ -102,7 +105,7 @@ func resourceBlockPublicAccessConfigurationRead(ctx context.Context, d *schema.R } func resourceBlockPublicAccessConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) log.Print("[INFO] Restoring EMR Block Public Access Configuration to default settings") diff --git a/internal/service/emr/block_public_access_configuration_test.go b/internal/service/emr/block_public_access_configuration_test.go index d823460660b..46f7c5f6632 100644 --- a/internal/service/emr/block_public_access_configuration_test.go +++ b/internal/service/emr/block_public_access_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -166,7 +169,7 @@ func TestAccEMRBlockPublicAccessConfiguration_enabledMultiRange(t *testing.T) { func testAccCheckBlockPublicAccessConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_emr_block_public_access_configuration" { @@ -205,7 +208,7 @@ func testAccCheckBlockPublicAccessConfigurationAttributes_enabledOnly(ctx contex return create.Error(names.EMR, create.ErrActionCheckingExistence, tfemr.ResNameBlockPublicAccessConfiguration, name, errors.New("not found")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) resp, err := tfemr.FindBlockPublicAccessConfiguration(ctx, conn) if err != nil { @@ -229,7 +232,7 @@ func testAccCheckBlockPublicAccessConfigurationAttributes_enabledOnly(ctx contex func testAccCheckBlockPublicAccessConfigurationAttributes_default(ctx context.Context, name string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { @@ -267,7 +270,7 @@ func testAccCheckBlockPublicAccessConfigurationAttributes_enabledMultiRange(ctx return create.Error(names.EMR, create.ErrActionCheckingExistence, tfemr.ResNameBlockPublicAccessConfiguration, name, errors.New("not found")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) resp, err := tfemr.FindBlockPublicAccessConfiguration(ctx, conn) if err != nil { @@ -302,7 +305,7 @@ func testAccCheckBlockPublicAccessConfigurationAttributes_disabled(ctx context.C 
return create.Error(names.EMR, create.ErrActionCheckingExistence, tfemr.ResNameBlockPublicAccessConfiguration, name, errors.New("not found")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) resp, err := tfemr.FindBlockPublicAccessConfiguration(ctx, conn) if err != nil { @@ -324,7 +327,7 @@ func testAccCheckBlockPublicAccessConfigurationAttributes_disabled(ctx context.C } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) input := &emr.GetBlockPublicAccessConfigurationInput{} _, err := conn.GetBlockPublicAccessConfigurationWithContext(ctx, input) diff --git a/internal/service/emr/cluster.go b/internal/service/emr/cluster.go index 56be4ca5c11..faac7a6412f 100644 --- a/internal/service/emr/cluster.go +++ b/internal/service/emr/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -621,7 +624,7 @@ func instanceFleetConfigSchema() *schema.Resource { Type: schema.TypeMap, Optional: true, ForceNew: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, @@ -764,7 +767,7 @@ func instanceFleetConfigSchema() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) applications := d.Get("applications").(*schema.Set).List() keepJobFlowAliveWhenNoSteps := true @@ -905,7 +908,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int ReleaseLabel: aws.String(d.Get("release_label").(string)), ServiceRole: aws.String(d.Get("service_role").(string)), VisibleToAllUsers: aws.Bool(d.Get("visible_to_all_users").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("additional_info"); ok { @@ -1043,7 +1046,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) cluster, err := FindClusterByID(ctx, conn, d.Id()) @@ -1098,7 +1101,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter } } - SetTagsOut(ctx, cluster.Tags) + setTagsOut(ctx, cluster.Tags) d.Set("name", cluster.Name) @@ -1213,7 +1216,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) if d.HasChange("visible_to_all_users") { _, err 
:= conn.SetVisibleToAllUsersWithContext(ctx, &emr.SetVisibleToAllUsersInput{ @@ -1413,7 +1416,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) log.Printf("[DEBUG] Deleting EMR Cluster: %s", d.Id()) _, err := conn.TerminateJobFlowsWithContext(ctx, &emr.TerminateJobFlowsInput{ diff --git a/internal/service/emr/cluster_test.go b/internal/service/emr/cluster_test.go index bae8afe6994..d3801b26af2 100644 --- a/internal/service/emr/cluster_test.go +++ b/internal/service/emr/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -563,7 +566,7 @@ func TestAccEMRCluster_EC2Attributes_defaultManagedSecurityGroups(t *testing.T) }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) err := testAccDeleteManagedSecurityGroups(ctx, conn, &vpc) @@ -1663,7 +1666,7 @@ func TestAccEMRCluster_InstanceFleetMaster_only(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_emr_cluster" { @@ -1698,7 +1701,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *emr.Cluster) re return fmt.Errorf("No EMR Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) output, err := tfemr.FindClusterByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/emr/consts.go 
b/internal/service/emr/consts.go index 5e05fa6db9c..1590d7e7774 100644 --- a/internal/service/emr/consts.go +++ b/internal/service/emr/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emr/errors.go b/internal/service/emr/errors.go index b81bafc5470..1061eaae4ec 100644 --- a/internal/service/emr/errors.go +++ b/internal/service/emr/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr const ( diff --git a/internal/service/emr/find.go b/internal/service/emr/find.go index 238be25c187..e39ce7c0dbd 100644 --- a/internal/service/emr/find.go +++ b/internal/service/emr/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emr/generate.go b/internal/service/emr/generate.go index ab1cb13aa4d..4e567f657a8 100644 --- a/internal/service/emr/generate.go +++ b/internal/service/emr/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTagsInIDElem=ResourceId -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=ResourceId -UntagOp=RemoveTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package emr diff --git a/internal/service/emr/id.go b/internal/service/emr/id.go index a2789f382f9..6bfd8ae55c2 100644 --- a/internal/service/emr/id.go +++ b/internal/service/emr/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emr/instance_fleet.go b/internal/service/emr/instance_fleet.go index 69e9fc99205..6e9ae5c085e 100644 --- a/internal/service/emr/instance_fleet.go +++ b/internal/service/emr/instance_fleet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -79,7 +82,7 @@ func ResourceInstanceFleet() *schema.Resource { Type: schema.TypeMap, Optional: true, ForceNew: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, @@ -220,7 +223,7 @@ func ResourceInstanceFleet() *schema.Resource { func resourceInstanceFleetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) taskFleet := map[string]interface{}{ "name": d.Get("name"), @@ -247,7 +250,7 @@ func resourceInstanceFleetCreate(ctx context.Context, d *schema.ResourceData, me func resourceInstanceFleetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) fleet, err := FindInstanceFleetByTwoPartKey(ctx, conn, d.Get("cluster_id").(string), d.Id()) @@ -278,7 +281,7 @@ func resourceInstanceFleetRead(ctx context.Context, d *schema.ResourceData, meta func resourceInstanceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) modifyConfig := &emr.InstanceFleetModifyConfig{ InstanceFleetId: aws.String(d.Id()), @@ -316,7 +319,7 @@ func resourceInstanceFleetUpdate(ctx context.Context, d *schema.ResourceData, me func resourceInstanceFleetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) // AWS EMR Instance Fleet does not support DELETE; resizing cluster to zero before removing from state. log.Printf("[DEBUG] Deleting EMR Instance Fleet: %s", d.Id()) diff --git a/internal/service/emr/instance_fleet_test.go b/internal/service/emr/instance_fleet_test.go index f7d0faed338..4219db4668d 100644 --- a/internal/service/emr/instance_fleet_test.go +++ b/internal/service/emr/instance_fleet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -153,7 +156,7 @@ func testAccCheckInstanceFleetExists(ctx context.Context, n string, v *emr.Insta return fmt.Errorf("No EMR Instance Fleet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) output, err := tfemr.FindInstanceFleetByTwoPartKey(ctx, conn, rs.Primary.Attributes["cluster_id"], rs.Primary.ID) diff --git a/internal/service/emr/instance_group.go b/internal/service/emr/instance_group.go index 6a6900fc4a0..bd156fcf0bf 100644 --- a/internal/service/emr/instance_group.go +++ b/internal/service/emr/instance_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -141,7 +144,7 @@ func ResourceInstanceGroup() *schema.Resource { func resourceInstanceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) instanceRole := emr.InstanceGroupTypeTask groupConfig := &emr.InstanceGroupConfig{ @@ -207,7 +210,7 @@ func resourceInstanceGroupCreate(ctx context.Context, d *schema.ResourceData, me func resourceInstanceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) ig, err := FetchInstanceGroup(ctx, conn, d.Get("cluster_id").(string), d.Id()) @@ -277,7 +280,7 @@ func resourceInstanceGroupRead(ctx context.Context, d *schema.ResourceData, meta func resourceInstanceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) log.Printf("[DEBUG] Modify EMR task group") if d.HasChanges("instance_count", "configurations_json") { @@ -340,7 +343,7 @@ func resourceInstanceGroupUpdate(ctx context.Context, d *schema.ResourceData, me func resourceInstanceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) log.Printf("[WARN] AWS EMR Instance Group does not support DELETE; resizing cluster to zero before removing from state") params := &emr.ModifyInstanceGroupsInput{ diff --git a/internal/service/emr/instance_group_test.go b/internal/service/emr/instance_group_test.go index 2355da06621..1eed1c5d614 100644 --- a/internal/service/emr/instance_group_test.go +++ 
b/internal/service/emr/instance_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -325,7 +328,7 @@ func testAccCheckInstanceGroupExists(ctx context.Context, name string, ig *emr.I } meta := acctest.Provider.Meta() - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) group, err := tfemr.FetchInstanceGroup(ctx, conn, rs.Primary.Attributes["cluster_id"], rs.Primary.ID) if err != nil { return fmt.Errorf("EMR error: %v", err) diff --git a/internal/service/emr/managed_scaling_policy.go b/internal/service/emr/managed_scaling_policy.go index 6b6fb32d09a..5dbd0fa73db 100644 --- a/internal/service/emr/managed_scaling_policy.go +++ b/internal/service/emr/managed_scaling_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -71,7 +74,7 @@ func ResourceManagedScalingPolicy() *schema.Resource { func resourceManagedScalingPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) if l := d.Get("compute_limits").(*schema.Set).List(); len(l) > 0 && l[0] != nil { cl := l[0].(map[string]interface{}) @@ -110,7 +113,7 @@ func resourceManagedScalingPolicyCreate(ctx context.Context, d *schema.ResourceD func resourceManagedScalingPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) input := &emr.GetManagedScalingPolicyInput{ ClusterId: aws.String(d.Id()), @@ -150,7 +153,7 @@ func resourceManagedScalingPolicyRead(ctx context.Context, d *schema.ResourceDat func resourceManagedScalingPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) input := &emr.RemoveManagedScalingPolicyInput{ ClusterId: aws.String(d.Get("cluster_id").(string)), diff --git a/internal/service/emr/managed_scaling_policy_test.go b/internal/service/emr/managed_scaling_policy_test.go index cd862cf4085..0b150213962 100644 --- a/internal/service/emr/managed_scaling_policy_test.go +++ b/internal/service/emr/managed_scaling_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -170,7 +173,7 @@ func testAccCheckManagedScalingPolicyExists(ctx context.Context, n string) resou return fmt.Errorf("No EMR Managed Scaling Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) resp, err := conn.GetManagedScalingPolicyWithContext(ctx, &emr.GetManagedScalingPolicyInput{ ClusterId: aws.String(rs.Primary.ID), }) @@ -187,7 +190,7 @@ func testAccCheckManagedScalingPolicyExists(ctx context.Context, n string) resou func testAccCheckManagedScalingPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_emr_managed_scaling_policy" { continue diff --git a/internal/service/emr/release_labels_data_source.go b/internal/service/emr/release_labels_data_source.go index 0d676e1d6e4..c5cb109cd82 100644 --- a/internal/service/emr/release_labels_data_source.go +++ b/internal/service/emr/release_labels_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr import ( "context" - "fmt" "strings" "github.com/aws/aws-sdk-go/aws" @@ -45,7 +47,7 @@ func DataSourceReleaseLabels() *schema.Resource { } func dataSourceReleaseLabelsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) input := &emr.ListReleaseLabelsInput{} @@ -56,7 +58,7 @@ func dataSourceReleaseLabelsRead(ctx context.Context, d *schema.ResourceData, me output, err := findReleaseLabels(ctx, conn, input) if err != nil { - return diag.FromErr(fmt.Errorf("error reading EMR Release Labels: %w", err)) + return diag.Errorf("reading EMR Release Labels: %s", err) } releaseLabels := aws.StringValueSlice(output) diff --git a/internal/service/emr/release_labels_data_source_test.go b/internal/service/emr/release_labels_data_source_test.go index ef3a5c44f34..a53bccbf6ef 100644 --- a/internal/service/emr/release_labels_data_source_test.go +++ b/internal/service/emr/release_labels_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( diff --git a/internal/service/emr/security_configuration.go b/internal/service/emr/security_configuration.go index cc3b4921900..da83a4c3bc1 100644 --- a/internal/service/emr/security_configuration.go +++ b/internal/service/emr/security_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -60,7 +63,7 @@ func ResourceSecurityConfiguration() *schema.Resource { func resourceSecurityConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) var emrSCName string if v, ok := d.GetOk("name"); ok { @@ -88,7 +91,7 @@ func resourceSecurityConfigurationCreate(ctx context.Context, d *schema.Resource func resourceSecurityConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) resp, err := conn.DescribeSecurityConfigurationWithContext(ctx, &emr.DescribeSecurityConfigurationInput{ Name: aws.String(d.Id()), @@ -111,7 +114,7 @@ func resourceSecurityConfigurationRead(ctx context.Context, d *schema.ResourceDa func resourceSecurityConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) _, err := conn.DeleteSecurityConfigurationWithContext(ctx, &emr.DeleteSecurityConfigurationInput{ Name: aws.String(d.Id()), diff --git a/internal/service/emr/security_configuration_test.go b/internal/service/emr/security_configuration_test.go index 1e3c2e0e840..bc32833760d 100644 --- a/internal/service/emr/security_configuration_test.go +++ b/internal/service/emr/security_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -42,7 +45,7 @@ func TestAccEMRSecurityConfiguration_basic(t *testing.T) { func testAccCheckSecurityConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_emr_security_configuration" { continue @@ -83,7 +86,7 @@ func testAccCheckSecurityConfigurationExists(ctx context.Context, n string) reso return fmt.Errorf("No EMR Security Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) resp, err := conn.DescribeSecurityConfigurationWithContext(ctx, &emr.DescribeSecurityConfigurationInput{ Name: aws.String(rs.Primary.ID), }) diff --git a/internal/service/emr/service_package_gen.go b/internal/service/emr/service_package_gen.go index 093baa2192e..94d5db4633c 100644 --- a/internal/service/emr/service_package_gen.go +++ b/internal/service/emr/service_package_gen.go @@ -5,6 +5,10 @@ package emr import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + emr_sdkv1 "github.com/aws/aws-sdk-go/service/emr" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -77,4 +81,13 @@ func (p *servicePackage) ServicePackageName() string { return names.EMR } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*emr_sdkv1.EMR, error) { + sess := config["session"].(*session_sdkv1.Session) + + return emr_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/emr/status.go b/internal/service/emr/status.go index ef3fb39af26..affeeace57d 100644 --- a/internal/service/emr/status.go +++ b/internal/service/emr/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emr/studio.go b/internal/service/emr/studio.go index cc9d0b9add9..a0b8060f439 100644 --- a/internal/service/emr/studio.go +++ b/internal/service/emr/studio.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -115,7 +118,7 @@ func ResourceStudio() *schema.Resource { func resourceStudioCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) input := &emr.CreateStudioInput{ AuthMode: aws.String(d.Get("auth_mode").(string)), @@ -124,7 +127,7 @@ func resourceStudioCreate(ctx context.Context, d *schema.ResourceData, meta inte Name: aws.String(d.Get("name").(string)), ServiceRole: aws.String(d.Get("service_role").(string)), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpcId: aws.String(d.Get("vpc_id").(string)), WorkspaceSecurityGroupId: aws.String(d.Get("workspace_security_group_id").(string)), } @@ -174,7 +177,7 @@ func resourceStudioCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceStudioUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { 
var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &emr.UpdateStudioInput{ @@ -209,7 +212,7 @@ func resourceStudioUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceStudioRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) studio, err := FindStudioByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -237,14 +240,14 @@ func resourceStudioRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("workspace_security_group_id", studio.WorkspaceSecurityGroupId) d.Set("subnet_ids", flex.FlattenStringSet(studio.SubnetIds)) - SetTagsOut(ctx, studio.Tags) + setTagsOut(ctx, studio.Tags) return diags } func resourceStudioDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) request := &emr.DeleteStudioInput{ StudioId: aws.String(d.Id()), diff --git a/internal/service/emr/studio_session_mapping.go b/internal/service/emr/studio_session_mapping.go index cbad529b089..ef51828c81c 100644 --- a/internal/service/emr/studio_session_mapping.go +++ b/internal/service/emr/studio_session_mapping.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr import ( @@ -65,7 +68,7 @@ func ResourceStudioSessionMapping() *schema.Resource { func resourceStudioSessionMappingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) var id string studioId := d.Get("studio_id").(string) @@ -98,7 +101,7 @@ func resourceStudioSessionMappingCreate(ctx context.Context, d *schema.ResourceD func resourceStudioSessionMappingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) studioId, identityType, identityId, err := readStudioSessionMapping(d.Id()) if err != nil { @@ -122,7 +125,7 @@ func resourceStudioSessionMappingUpdate(ctx context.Context, d *schema.ResourceD func resourceStudioSessionMappingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) mapping, err := FindStudioSessionMappingByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -146,7 +149,7 @@ func resourceStudioSessionMappingRead(ctx context.Context, d *schema.ResourceDat func resourceStudioSessionMappingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EMRConn() + conn := meta.(*conns.AWSClient).EMRConn(ctx) studioId, identityType, identityId, err := readStudioSessionMapping(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "deleting EMR Studio Session Mapping (%s): %s", d.Id(), err) diff --git a/internal/service/emr/studio_session_mapping_test.go b/internal/service/emr/studio_session_mapping_test.go index 999077d9a84..05f09b7296b 100644 
--- a/internal/service/emr/studio_session_mapping_test.go +++ b/internal/service/emr/studio_session_mapping_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -98,7 +101,7 @@ func testAccCheckStudioSessionMappingExists(ctx context.Context, resourceName st return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) output, err := tfemr.FindStudioSessionMappingByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -117,7 +120,7 @@ func testAccCheckStudioSessionMappingExists(ctx context.Context, resourceName st func testAccCheckStudioSessionMappingDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_emr_studio_session_mapping" { diff --git a/internal/service/emr/studio_test.go b/internal/service/emr/studio_test.go index 46462cd13a6..385c15b2235 100644 --- a/internal/service/emr/studio_test.go +++ b/internal/service/emr/studio_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package emr_test import ( @@ -186,7 +189,7 @@ func testAccCheckStudioExists(ctx context.Context, resourceName string, studio * return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) output, err := tfemr.FindStudioByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -205,7 +208,7 @@ func testAccCheckStudioExists(ctx context.Context, resourceName string, studio * func testAccCheckStudioDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_emr_studio" { diff --git a/internal/service/emr/sweep.go b/internal/service/emr/sweep.go index 288ac29b81a..08582b0c952 100644 --- a/internal/service/emr/sweep.go +++ b/internal/service/emr/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/emr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,11 +31,11 @@ func init() { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EMRConn() + conn := client.EMRConn(ctx) input := &emr.ListClustersInput{ ClusterStates: aws.StringSlice([]string{emr.ClusterStateBootstrapping, emr.ClusterStateRunning, emr.ClusterStateStarting, emr.ClusterStateWaiting}), } @@ -75,7 +77,7 @@ func sweepClusters(region string) error { return fmt.Errorf("error listing EMR Clusters (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping EMR Clusters (%s): %w", region, err) @@ -86,13 +88,13 @@ func sweepClusters(region string) error { func sweepStudios(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EMRConn() + conn := client.EMRConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error input := &emr.ListStudiosInput{} @@ -121,7 +123,7 @@ func sweepStudios(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EMR Studios for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, 
sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EMR Studios for %s: %w", region, err)) } diff --git a/internal/service/emr/tags_gen.go b/internal/service/emr/tags_gen.go index 57a3d9577c1..c3194900d4f 100644 --- a/internal/service/emr/tags_gen.go +++ b/internal/service/emr/tags_gen.go @@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*emr.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns emr service tags from Context. +// getTagsIn returns emr service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*emr.Tag { +func getTagsIn(ctx context.Context) []*emr.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*emr.Tag { return nil } -// SetTagsOut sets emr service tags in Context. -func SetTagsOut(ctx context.Context, tags []*emr.Tag) { +// setTagsOut sets emr service tags in Context. +func setTagsOut(ctx context.Context, tags []*emr.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates emr service tags. +// updateTags updates emr service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn emriface.EMRAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn emriface.EMRAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn emriface.EMRAPI, identifier string, ol // UpdateTags updates emr service tags. 
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).EMRConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).EMRConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/emr/validate.go b/internal/service/emr/validate.go index e8a2c23307b..71ee38b465e 100644 --- a/internal/service/emr/validate.go +++ b/internal/service/emr/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emr/validate_test.go b/internal/service/emr/validate_test.go index d05482f7032..037a5216e20 100644 --- a/internal/service/emr/validate_test.go +++ b/internal/service/emr/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emr/wait.go b/internal/service/emr/wait.go index 5d3ed38ec65..84e7c0f6e65 100644 --- a/internal/service/emr/wait.go +++ b/internal/service/emr/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package emr import ( diff --git a/internal/service/emrcontainers/generate.go b/internal/service/emrcontainers/generate.go index 3a3ae91aa62..3a048572da2 100644 --- a/internal/service/emrcontainers/generate.go +++ b/internal/service/emrcontainers/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package emrcontainers diff --git a/internal/service/emrcontainers/job_template.go b/internal/service/emrcontainers/job_template.go new file mode 100644 index 00000000000..798bbeb29d6 --- /dev/null +++ b/internal/service/emrcontainers/job_template.go @@ -0,0 +1,783 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package emrcontainers + +import ( + "context" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/emrcontainers" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_emrcontainers_job_template", name="Job Template") +// @Tags(identifierAttribute="arn") +func ResourceJobTemplate() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceJobTemplateCreate, + ReadWithoutTimeout: resourceJobTemplateRead, + // UpdateWithoutTimeout: resourceJobTemplateUpdate, + DeleteWithoutTimeout: resourceJobTemplateDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Delete: schema.DefaultTimeout(90 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "job_template_data": { + Type: schema.TypeList, + 
MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "configuration_overrides": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "application_configuration": { + Type: schema.TypeList, + MaxItems: 100, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "classification": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "configurations": { + Type: schema.TypeList, + MaxItems: 100, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "classification": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "properties": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "properties": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "monitoring_configuration": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cloud_watch_monitoring_configuration": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "log_group_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "log_stream_name_prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + "persistent_app_ui": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(emrcontainers.PersistentAppUI_Values(), false), + }, + "s3_monitoring_configuration": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "log_uri": { + Type: 
schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "execution_role_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "job_driver": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "spark_sql_job_driver": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + ExactlyOneOf: []string{"job_template_data.0.job_driver.0.spark_sql_job_driver", "job_template_data.0.job_driver.0.spark_submit_job_driver"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "entry_point": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "spark_sql_parameters": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + "spark_submit_job_driver": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + ExactlyOneOf: []string{"job_template_data.0.job_driver.0.spark_sql_job_driver", "job_template_data.0.job_driver.0.spark_submit_job_driver"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "entry_point": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "entry_point_arguments": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "spark_submit_parameters": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + }, + }, + }, + "job_tags": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "release_label": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "kms_key_arn": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + 
ValidateFunc: validation.All( + validation.StringLenBetween(1, 64), + validation.StringMatch(regexp.MustCompile(`[.\-_/#A-Za-z0-9]+`), "must contain only alphanumeric, hyphen, underscore, dot, forward slash and # characters"), + ), + }, + names.AttrTags: tftags.TagsSchemaForceNew(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + }, + + CustomizeDiff: verify.SetTagsDiff, + } +} + +func resourceJobTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).EMRContainersConn(ctx) + + name := d.Get("name").(string) + input := &emrcontainers.CreateJobTemplateInput{ + ClientToken: aws.String(id.UniqueId()), + Name: aws.String(name), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("job_template_data"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.JobTemplateData = expandJobTemplateData(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("kms_key_arn"); ok { + input.KmsKeyArn = aws.String(v.(string)) + } + + output, err := conn.CreateJobTemplateWithContext(ctx, input) + + if err != nil { + return diag.Errorf("creating EMR Containers Job Template (%s): %s", name, err) + } + + d.SetId(aws.StringValue(output.Id)) + + return resourceJobTemplateRead(ctx, d, meta) +} + +func resourceJobTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).EMRContainersConn(ctx) + + vc, err := FindJobTemplateByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] EMR Containers Job Template %s not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.Errorf("reading EMR Containers Job Template (%s): %s", d.Id(), err) + } + + d.Set("arn", vc.Arn) + if vc.JobTemplateData != nil { + if err := d.Set("job_template_data", []interface{}{flattenJobTemplateData(vc.JobTemplateData)}); err != nil { + return diag.Errorf("setting
job_template_data: %s", err) + } + } else { + d.Set("job_template_data", nil) + } + d.Set("name", vc.Name) + d.Set("kms_key_arn", vc.KmsKeyArn) + + setTagsOut(ctx, vc.Tags) + + return nil +} + +// func resourceJobTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +// // Tags only. +// return resourceJobTemplateRead(ctx, d, meta) +// } + +func resourceJobTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).EMRContainersConn(ctx) + + log.Printf("[INFO] Deleting EMR Containers Job Template: %s", d.Id()) + _, err := conn.DeleteJobTemplateWithContext(ctx, &emrcontainers.DeleteJobTemplateInput{ + Id: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, emrcontainers.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return diag.Errorf("deleting EMR Containers Job Template (%s): %s", d.Id(), err) + } + + // if _, err = waitJobTemplateDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { + // return diag.Errorf("waiting for EMR Containers Job Template (%s) delete: %s", d.Id(), err) + // } + + return nil +} + +func expandJobTemplateData(tfMap map[string]interface{}) *emrcontainers.JobTemplateData { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.JobTemplateData{} + + if v, ok := tfMap["configuration_overrides"].([]interface{}); ok && len(v) > 0 { + apiObject.ConfigurationOverrides = expandConfigurationOverrides(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["execution_role_arn"].(string); ok && v != "" { + apiObject.ExecutionRoleArn = aws.String(v) + } + + if v, ok := tfMap["job_driver"].([]interface{}); ok && len(v) > 0 { + apiObject.JobDriver = expandJobDriver(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["job_tags"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.JobTags = flex.ExpandStringMap(v) + } + + if v, ok := tfMap["release_label"].(string); 
ok && v != "" { + apiObject.ReleaseLabel = aws.String(v) + } + + return apiObject +} + +func expandConfigurationOverrides(tfMap map[string]interface{}) *emrcontainers.ParametricConfigurationOverrides { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.ParametricConfigurationOverrides{} + + if v, ok := tfMap["application_configuration"].([]interface{}); ok && len(v) > 0 { + apiObject.ApplicationConfiguration = expandConfigurations(v) + } + + if v, ok := tfMap["monitoring_configuration"].([]interface{}); ok && len(v) > 0 { + apiObject.MonitoringConfiguration = expandMonitoringConfiguration(v[0].(map[string]interface{})) + } + + return apiObject +} +func expandConfigurations(tfList []interface{}) []*emrcontainers.Configuration { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*emrcontainers.Configuration + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandConfiguration(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + +func expandConfiguration(tfMap map[string]interface{}) *emrcontainers.Configuration { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.Configuration{} + + if v, ok := tfMap["classification"].(string); ok && v != "" { + apiObject.Classification = aws.String(v) + } + + if v, ok := tfMap["configurations"].([]interface{}); ok && len(v) > 0 { + apiObject.Configurations = expandConfigurations(v) + } + + if v, ok := tfMap["properties"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.Properties = flex.ExpandStringMap(v) + } + + return apiObject +} + +func expandMonitoringConfiguration(tfMap map[string]interface{}) *emrcontainers.ParametricMonitoringConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.ParametricMonitoringConfiguration{} + + if v, ok := 
tfMap["cloud_watch_monitoring_configuration"].([]interface{}); ok && len(v) > 0 { + apiObject.CloudWatchMonitoringConfiguration = expandCloudWatchMonitoringConfiguration(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["persistent_app_ui"].(string); ok && v != "" { + apiObject.PersistentAppUI = aws.String(v) + } + + if v, ok := tfMap["s3_monitoring_configuration"].([]interface{}); ok && len(v) > 0 { + apiObject.S3MonitoringConfiguration = expandS3MonitoringConfiguration(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandCloudWatchMonitoringConfiguration(tfMap map[string]interface{}) *emrcontainers.ParametricCloudWatchMonitoringConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.ParametricCloudWatchMonitoringConfiguration{} + + if v, ok := tfMap["log_group_name"].(string); ok && v != "" { + apiObject.LogGroupName = aws.String(v) + } + + if v, ok := tfMap["log_stream_name_prefix"].(string); ok && v != "" { + apiObject.LogStreamNamePrefix = aws.String(v) + } + + return apiObject +} + +func expandS3MonitoringConfiguration(tfMap map[string]interface{}) *emrcontainers.ParametricS3MonitoringConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.ParametricS3MonitoringConfiguration{} + + if v, ok := tfMap["log_uri"].(string); ok && v != "" { + apiObject.LogUri = aws.String(v) + } + + return apiObject +} + +func expandJobDriver(tfMap map[string]interface{}) *emrcontainers.JobDriver { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.JobDriver{} + + if v, ok := tfMap["spark_sql_job_driver"].([]interface{}); ok && len(v) > 0 { + apiObject.SparkSqlJobDriver = expandSparkSQLJobDriver(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["spark_submit_job_driver"].([]interface{}); ok && len(v) > 0 { + apiObject.SparkSubmitJobDriver = expandSparkSubmitJobDriver(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandSparkSQLJobDriver(tfMap
map[string]interface{}) *emrcontainers.SparkSqlJobDriver { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.SparkSqlJobDriver{} + + if v, ok := tfMap["entry_point"].(string); ok && v != "" { + apiObject.EntryPoint = aws.String(v) + } + + if v, ok := tfMap["spark_sql_parameters"].(string); ok && v != "" { + apiObject.SparkSqlParameters = aws.String(v) + } + + return apiObject +} + +func expandSparkSubmitJobDriver(tfMap map[string]interface{}) *emrcontainers.SparkSubmitJobDriver { + if tfMap == nil { + return nil + } + + apiObject := &emrcontainers.SparkSubmitJobDriver{} + + if v, ok := tfMap["entry_point"].(string); ok && v != "" { + apiObject.EntryPoint = aws.String(v) + } + + if v, ok := tfMap["entry_point_arguments"].(*schema.Set); ok && v.Len() > 0 { + apiObject.EntryPointArguments = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["spark_submit_parameters"].(string); ok && v != "" { + apiObject.SparkSubmitParameters = aws.String(v) + } + + return apiObject +} + +func flattenJobTemplateData(apiObject *emrcontainers.JobTemplateData) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.ConfigurationOverrides; v != nil { + tfMap["configuration_overrides"] = []interface{}{flattenConfigurationOverrides(v)} + } + + if v := apiObject.ExecutionRoleArn; v != nil { + tfMap["execution_role_arn"] = aws.StringValue(v) + } + + if v := apiObject.JobDriver; v != nil { + tfMap["job_driver"] = []interface{}{flattenJobDriver(v)} + } + + if v := apiObject.JobTags; v != nil { + tfMap["job_tags"] = aws.StringValueMap(v) + } + + if v := apiObject.ReleaseLabel; v != nil { + tfMap["release_label"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenConfigurationOverrides(apiObject *emrcontainers.ParametricConfigurationOverrides) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.ApplicationConfiguration; v != nil 
{ + tfMap["application_configuration"] = []interface{}{flattenConfigurations(v)} + } + + if v := apiObject.MonitoringConfiguration; v != nil { + tfMap["monitoring_configuration"] = []interface{}{flattenMonitoringConfiguration(v)} + } + + return tfMap +} + +func flattenConfigurations(apiObjects []*emrcontainers.Configuration) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenConfiguration(apiObject)) + } + + return tfList +} + +func flattenConfiguration(apiObject *emrcontainers.Configuration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Classification; v != nil { + tfMap["classification"] = aws.StringValue(v) + } + + if v := apiObject.Properties; v != nil { + tfMap["properties"] = aws.StringValueMap(v) + } + + return tfMap +} + +func flattenMonitoringConfiguration(apiObject *emrcontainers.ParametricMonitoringConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CloudWatchMonitoringConfiguration; v != nil { + tfMap["cloud_watch_monitoring_configuration"] = []interface{}{flattenCloudWatchMonitoringConfiguration(v)} + } + + if v := apiObject.PersistentAppUI; v != nil { + tfMap["persistent_app_ui"] = aws.StringValue(v) + } + + if v := apiObject.S3MonitoringConfiguration; v != nil { + tfMap["s3_monitoring_configuration"] = []interface{}{flattenS3MonitoringConfiguration(v)} + } + + return tfMap +} + +func flattenCloudWatchMonitoringConfiguration(apiObject *emrcontainers.ParametricCloudWatchMonitoringConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.LogGroupName; v != nil { + tfMap["log_group_name"] = aws.StringValue(v) + } + + if v := 
apiObject.LogStreamNamePrefix; v != nil { + tfMap["log_stream_name_prefix"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenS3MonitoringConfiguration(apiObject *emrcontainers.ParametricS3MonitoringConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.LogUri; v != nil { + tfMap["log_uri"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenJobDriver(apiObject *emrcontainers.JobDriver) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.SparkSqlJobDriver; v != nil { + tfMap["spark_sql_job_driver"] = []interface{}{flattenSparkSQLJobDriver(v)} + } + + if v := apiObject.SparkSubmitJobDriver; v != nil { + tfMap["spark_submit_job_driver"] = []interface{}{flattenSparkSubmitJobDriver(v)} + } + + return tfMap +} + +func flattenSparkSQLJobDriver(apiObject *emrcontainers.SparkSqlJobDriver) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.EntryPoint; v != nil { + tfMap["entry_point"] = aws.StringValue(v) + } + + if v := apiObject.SparkSqlParameters; v != nil { + tfMap["spark_sql_parameters"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenSparkSubmitJobDriver(apiObject *emrcontainers.SparkSubmitJobDriver) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.EntryPoint; v != nil { + tfMap["entry_point"] = aws.StringValue(v) + } + + if v := apiObject.EntryPointArguments; v != nil { + tfMap["entry_point_arguments"] = flex.FlattenStringSet(v) + } + + if v := apiObject.SparkSubmitParameters; v != nil { + tfMap["spark_submit_parameters"] = aws.StringValue(v) + } + + return tfMap +} + +func findJobTemplate(ctx context.Context, conn *emrcontainers.EMRContainers, input *emrcontainers.DescribeJobTemplateInput) (*emrcontainers.JobTemplate, error) { + 
output, err := conn.DescribeJobTemplateWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, emrcontainers.ErrCodeResourceNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.JobTemplate == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.JobTemplate, nil +} + +func FindJobTemplateByID(ctx context.Context, conn *emrcontainers.EMRContainers, id string) (*emrcontainers.JobTemplate, error) { + input := &emrcontainers.DescribeJobTemplateInput{ + Id: aws.String(id), + } + + output, err := findJobTemplate(ctx, conn, input) + + if err != nil { + return nil, err + } + + return output, nil +} diff --git a/internal/service/emrcontainers/job_template_test.go b/internal/service/emrcontainers/job_template_test.go new file mode 100644 index 00000000000..37dac806589 --- /dev/null +++ b/internal/service/emrcontainers/job_template_test.go @@ -0,0 +1,247 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package emrcontainers_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/emrcontainers" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfemrcontainers "github.com/hashicorp/terraform-provider-aws/internal/service/emrcontainers" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccEMRContainersJobTemplate_basic(t *testing.T) { + ctx := acctest.Context(t) + var v emrcontainers.JobTemplate + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_emrcontainers_job_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, emrcontainers.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckJobTemplateDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccJobTemplateConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckJobTemplateExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "job_template_data.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "job_template_data.0.execution_role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttr(resourceName, "job_template_data.0.job_driver.#", "1"), + resource.TestCheckResourceAttr(resourceName, "job_template_data.0.job_driver.0.spark_sql_job_driver.#", "1"), + resource.TestCheckResourceAttr(resourceName, "job_template_data.0.job_driver.0.spark_sql_job_driver.0.entry_point", "default"), + resource.TestCheckResourceAttr(resourceName, "job_template_data.0.release_label", "emr-6.10.0-latest"), + 
resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccEMRContainersJobTemplate_disappears(t *testing.T) { + ctx := acctest.Context(t) + var v emrcontainers.JobTemplate + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_emrcontainers_job_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, emrcontainers.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckJobTemplateDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccJobTemplateConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckJobTemplateExists(ctx, resourceName, &v), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfemrcontainers.ResourceJobTemplate(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccEMRContainersJobTemplate_tags(t *testing.T) { + ctx := acctest.Context(t) + var v emrcontainers.JobTemplate + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_emrcontainers_job_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, emrcontainers.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckJobTemplateDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccJobTemplateConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckJobTemplateExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, 
+ ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckJobTemplateExists(ctx context.Context, n string, v *emrcontainers.JobTemplate) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No EMR Containers Job Template ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRContainersConn(ctx) + + output, err := tfemrcontainers.FindJobTemplateByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccCheckJobTemplateDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).EMRContainersConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_emrcontainers_job_template" { + continue + } + + _, err := tfemrcontainers.FindJobTemplateByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("EMR Containers Job Template %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccJobTemplateConfig_basic(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "eks.${data.aws_partition.current.dns_suffix}", + "eks-nodegroup.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} + +resource "aws_emrcontainers_job_template" "test" { + job_template_data { + execution_role_arn = aws_iam_role.test.arn + release_label = "emr-6.10.0-latest" + + job_driver { + spark_sql_job_driver { + entry_point = "default" + } + } + } + + name = %[1]q +} +`, rName) +} + +func 
testAccJobTemplateConfig_tags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "eks.${data.aws_partition.current.dns_suffix}", + "eks-nodegroup.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} + +resource "aws_emrcontainers_job_template" "test" { + job_template_data { + execution_role_arn = aws_iam_role.test.arn + release_label = "emr-6.10.0-latest" + + job_driver { + spark_sql_job_driver { + entry_point = "default" + } + } + } + + name = %[1]q + + tags = { + %[2]q = %[3]q + } + +} +`, rName, tagKey1, tagValue1) +} diff --git a/internal/service/emrcontainers/service_package_gen.go b/internal/service/emrcontainers/service_package_gen.go index 1515a66a318..20072a6650a 100644 --- a/internal/service/emrcontainers/service_package_gen.go +++ b/internal/service/emrcontainers/service_package_gen.go @@ -5,6 +5,10 @@ package emrcontainers import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + emrcontainers_sdkv1 "github.com/aws/aws-sdk-go/service/emrcontainers" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -30,6 +34,14 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ + { + Factory: ResourceJobTemplate, + TypeName: "aws_emrcontainers_job_template", + Name: "Job Template", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, { Factory: ResourceVirtualCluster, TypeName: "aws_emrcontainers_virtual_cluster", 
@@ -45,4 +57,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.EMRContainers
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*emrcontainers_sdkv1.EMRContainers, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return emrcontainers_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/emrcontainers/sweep.go b/internal/service/emrcontainers/sweep.go
index 2b516997153..c117906cb8f 100644
--- a/internal/service/emrcontainers/sweep.go
+++ b/internal/service/emrcontainers/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/emrcontainers"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
 
@@ -19,15 +21,20 @@ func init() {
 		Name: "aws_emrcontainers_virtual_cluster",
 		F:    sweepVirtualClusters,
 	})
+
+	resource.AddTestSweepers("aws_emrcontainers_job_template", &resource.Sweeper{
+		Name: "aws_emrcontainers_job_template",
+		F:    sweepJobTemplates,
+	})
 }
 
 func sweepVirtualClusters(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).EMRContainersConn()
+	conn := client.EMRContainersConn(ctx)
 	input := &emrcontainers.ListVirtualClustersInput{}
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -60,7 +67,7 @@ func sweepVirtualClusters(region string) error {
 		return fmt.Errorf("error listing EMR Containers Virtual Clusters (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping EMR Containers Virtual Clusters (%s): %w", region, err)
@@ -68,3 +75,47 @@
 
 	return nil
 }
+
+func sweepJobTemplates(region string) error {
+	ctx := sweep.Context(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
+	if err != nil {
+		return fmt.Errorf("error getting client: %s", err)
+	}
+	conn := client.EMRContainersConn(ctx)
+	input := &emrcontainers.ListJobTemplatesInput{}
+	sweepResources := make([]sweep.Sweepable, 0)
+
+	err = conn.ListJobTemplatesPagesWithContext(ctx, input, func(page *emrcontainers.ListJobTemplatesOutput, lastPage bool) bool {
+		if page == nil {
+			return !lastPage
+		}
+
+		for _, v := range page.Templates {
+			r := ResourceJobTemplate()
+			d := r.Data(nil)
+			d.SetId(aws.StringValue(v.Id))
+
+			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+		}
+
+		return !lastPage
+	})
+
+	if sweep.SkipSweepError(err) {
+		log.Printf("[WARN] Skipping EMR Containers Job Template sweep for %s: %s", region, err)
+		return nil
+	}
+
+	if err != nil {
+		return fmt.Errorf("error listing EMR Containers Job Templates (%s): %w", region, err)
+	}
+
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
+
+	if err != nil {
+		return fmt.Errorf("error sweeping EMR Containers Job Templates (%s): %w", region, err)
+	}
+
+	return nil
+}
diff --git a/internal/service/emrcontainers/tags_gen.go b/internal/service/emrcontainers/tags_gen.go
index 329c9dfdbd7..23b669c6860 100644
--- a/internal/service/emrcontainers/tags_gen.go
+++ b/internal/service/emrcontainers/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists emrcontainers service tags.
+// listTags lists emrcontainers service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn emrcontainersiface.EMRContainersAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn emrcontainersiface.EMRContainersAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &emrcontainers.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn emrcontainersiface.EMRContainersAPI, ide
 // ListTags lists emrcontainers service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).EMRContainersConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).EMRContainersConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from emrcontainers service tags.
+// KeyValueTags creates tftags.KeyValueTags from emrcontainers service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns emrcontainers service tags from Context.
+// getTagsIn returns emrcontainers service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets emrcontainers service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets emrcontainers service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates emrcontainers service tags.
+// updateTags updates emrcontainers service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn emrcontainersiface.EMRContainersAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn emrcontainersiface.EMRContainersAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn emrcontainersiface.EMRContainersAPI, i
 // UpdateTags updates emrcontainers service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).EMRContainersConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).EMRContainersConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/emrcontainers/virtual_cluster.go b/internal/service/emrcontainers/virtual_cluster.go
index 6fdd3dd3d6d..658b9bf0035 100644
--- a/internal/service/emrcontainers/virtual_cluster.go
+++ b/internal/service/emrcontainers/virtual_cluster.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrcontainers
 
 import (
@@ -108,12 +111,12 @@ func ResourceVirtualCluster() *schema.Resource {
 }
 
 func resourceVirtualClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EMRContainersConn()
+	conn := meta.(*conns.AWSClient).EMRContainersConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &emrcontainers.CreateVirtualClusterInput{
 		Name: aws.String(name),
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("container_provider"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
@@ -132,7 +135,7 @@ func resourceVirtualClusterCreate(ctx context.Context, d *schema.ResourceData, m
 }
 
 func resourceVirtualClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EMRContainersConn()
+	conn := meta.(*conns.AWSClient).EMRContainersConn(ctx)
 
 	vc, err := FindVirtualClusterByID(ctx, conn, d.Id())
@@ -156,7 +159,7 @@ func resourceVirtualClusterRead(ctx context.Context, d *schema.ResourceData, met
 	}
 
 	d.Set("name", vc.Name)
 
-	SetTagsOut(ctx, vc.Tags)
+	setTagsOut(ctx, vc.Tags)
 
 	return nil
 }
@@ -167,7 +170,7 @@ func resourceVirtualClusterUpdate(ctx context.Context, d *schema.ResourceData, m
 }
 
 func resourceVirtualClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EMRContainersConn()
+	conn := meta.(*conns.AWSClient).EMRContainersConn(ctx)
 
 	log.Printf("[INFO] Deleting EMR Containers Virtual Cluster: %s", d.Id())
 	_, err := conn.DeleteVirtualClusterWithContext(ctx, &emrcontainers.DeleteVirtualClusterInput{
diff --git a/internal/service/emrcontainers/virtual_cluster_data_source.go b/internal/service/emrcontainers/virtual_cluster_data_source.go
index 67b820f6361..28fe524817a 100644
--- a/internal/service/emrcontainers/virtual_cluster_data_source.go
+++ b/internal/service/emrcontainers/virtual_cluster_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrcontainers
 
 import (
@@ -78,7 +81,7 @@ func DataSourceVirtualCluster() *schema.Resource {
 }
 
 func dataSourceVirtualClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).EMRContainersConn()
+	conn := meta.(*conns.AWSClient).EMRContainersConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	id := d.Get("virtual_cluster_id").(string)
diff --git a/internal/service/emrcontainers/virtual_cluster_data_source_test.go b/internal/service/emrcontainers/virtual_cluster_data_source_test.go
index 0f1a38bacb1..f0e4e96fb4e 100644
--- a/internal/service/emrcontainers/virtual_cluster_data_source_test.go
+++ b/internal/service/emrcontainers/virtual_cluster_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrcontainers_test
 
 import (
diff --git a/internal/service/emrcontainers/virtual_cluster_test.go b/internal/service/emrcontainers/virtual_cluster_test.go
index 864f54a53b9..c070a77079d 100644
--- a/internal/service/emrcontainers/virtual_cluster_test.go
+++ b/internal/service/emrcontainers/virtual_cluster_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrcontainers_test
 
 import (
@@ -167,7 +170,7 @@ func testAccCheckVirtualClusterExists(ctx context.Context, n string, v *emrconta
 		return fmt.Errorf("No EMR Containers Virtual Cluster ID is set")
 	}
 
-	conn := acctest.Provider.Meta().(*conns.AWSClient).EMRContainersConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).EMRContainersConn(ctx)
 
 	output, err := tfemrcontainers.FindVirtualClusterByID(ctx, conn, rs.Primary.ID)
@@ -183,7 +186,7 @@ func testAccCheckVirtualClusterExists(ctx context.Context, n string, v *emrconta
 
 func testAccCheckVirtualClusterDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EMRContainersConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EMRContainersConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_emrcontainers_virtual_cluster" {
diff --git a/internal/service/emrserverless/application.go b/internal/service/emrserverless/application.go
index 24dec230a9c..614f4428fe8 100644
--- a/internal/service/emrserverless/application.go
+++ b/internal/service/emrserverless/application.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrserverless
 
 import (
@@ -201,7 +204,6 @@ func ResourceApplication() *schema.Resource {
 			"release_label": {
 				Type:     schema.TypeString,
 				Required: true,
-				ForceNew: true,
 			},
 			names.AttrTags:    tftags.TagsSchema(),
 			names.AttrTagsAll: tftags.TagsSchemaComputed(),
@@ -219,14 +221,14 @@ func ResourceApplication() *schema.Resource {
 
 func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EMRServerlessConn()
+	conn := meta.(*conns.AWSClient).EMRServerlessConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &emrserverless.CreateApplicationInput{
 		ClientToken:  aws.String(id.UniqueId()),
 		ReleaseLabel: aws.String(d.Get("release_label").(string)),
 		Name:         aws.String(name),
-		Tags:         GetTagsIn(ctx),
+		Tags:         getTagsIn(ctx),
 		Type:         aws.String(d.Get("type").(string)),
 	}
@@ -275,7 +277,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EMRServerlessConn()
+	conn := meta.(*conns.AWSClient).EMRServerlessConn(ctx)
 
 	application, err := FindApplicationByID(ctx, conn, d.Id())
@@ -319,14 +321,14 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i
 		return sdkdiag.AppendErrorf(diags, "setting network_configuration: %s", err)
 	}
 
-	SetTagsOut(ctx, application.Tags)
+	setTagsOut(ctx, application.Tags)
 
 	return diags
 }
 
 func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EMRServerlessConn()
+	conn := meta.(*conns.AWSClient).EMRServerlessConn(ctx)
 
 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &emrserverless.UpdateApplicationInput{
@@ -334,6 +336,10 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta
 			ClientToken:   aws.String(id.UniqueId()),
 		}
 
+		if v, ok := d.GetOk("release_label"); ok {
+			input.ReleaseLabel = aws.String(v.(string))
+		}
+
 		if v, ok := d.GetOk("architecture"); ok {
 			input.Architecture = aws.String(v.(string))
 		}
@@ -375,7 +381,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EMRServerlessConn()
+	conn := meta.(*conns.AWSClient).EMRServerlessConn(ctx)
 
 	log.Printf("[INFO] Deleting EMR Serverless Application: %s", d.Id())
 	_, err := conn.DeleteApplicationWithContext(ctx, &emrserverless.DeleteApplicationInput{
diff --git a/internal/service/emrserverless/application_test.go b/internal/service/emrserverless/application_test.go
index 50e47d6684b..5e3ed3cf483 100644
--- a/internal/service/emrserverless/application_test.go
+++ b/internal/service/emrserverless/application_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrserverless_test
 
 import (
@@ -91,6 +94,41 @@ func TestAccEMRServerlessApplication_arch(t *testing.T) {
 	})
 }
 
+func TestAccEMRServerlessApplication_releaseLabel(t *testing.T) {
+	ctx := acctest.Context(t)
+	var application emrserverless.Application
+	resourceName := "aws_emrserverless_application.test"
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.PreCheck(ctx, t) },
+		ErrorCheck:               acctest.ErrorCheck(t, emrserverless.EndpointsID),
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
+		CheckDestroy:             testAccCheckApplicationDestroy(ctx),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccApplicationConfig_releaseLabel(rName, "emr-6.10.0"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckApplicationExists(ctx, resourceName, &application),
+					resource.TestCheckResourceAttr(resourceName, "release_label", "emr-6.10.0"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccApplicationConfig_releaseLabel(rName, "emr-6.11.0"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckApplicationExists(ctx, resourceName, &application),
+					resource.TestCheckResourceAttr(resourceName, "release_label", "emr-6.11.0"),
+				),
+			},
+		},
+	})
+}
+
 func TestAccEMRServerlessApplication_initialCapacity(t *testing.T) {
 	ctx := acctest.Context(t)
 	var application emrserverless.Application
@@ -337,7 +375,7 @@ func testAccCheckApplicationExists(ctx context.Context, resourceName string, app
 		return fmt.Errorf("Not found: %s", resourceName)
 	}
 
-	conn := acctest.Provider.Meta().(*conns.AWSClient).EMRServerlessConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).EMRServerlessConn(ctx)
 
 	output, err := tfemrserverless.FindApplicationByID(ctx, conn, rs.Primary.ID)
 	if err != nil {
@@ -356,7 +394,7 @@ func testAccCheckApplicationExists(ctx context.Context, resourceName string, app
 
 func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EMRServerlessConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EMRServerlessConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_emrserverless_application" {
@@ -388,6 +426,16 @@ resource "aws_emrserverless_application" "test" {
 `, rName)
 }
 
+func testAccApplicationConfig_releaseLabel(rName string, rl string) string {
+	return fmt.Sprintf(`
+resource "aws_emrserverless_application" "test" {
+  name          = %[1]q
+  release_label = %[2]q
+  type          = "spark"
+}
+`, rName, rl)
+}
+
 func testAccApplicationConfig_initialCapacity(rName, cpu string) string {
 	return fmt.Sprintf(`
 resource "aws_emrserverless_application" "test" {
diff --git a/internal/service/emrserverless/find.go b/internal/service/emrserverless/find.go
index 4640038a859..d00e66ccb95 100644
--- a/internal/service/emrserverless/find.go
+++ b/internal/service/emrserverless/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrserverless
 
 import (
diff --git a/internal/service/emrserverless/generate.go b/internal/service/emrserverless/generate.go
index e77eca73acf..e92b754f307 100644
--- a/internal/service/emrserverless/generate.go
+++ b/internal/service/emrserverless/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 package emrserverless
diff --git a/internal/service/emrserverless/service_package_gen.go b/internal/service/emrserverless/service_package_gen.go
index 1e185abdc6f..09f435ba15e 100644
--- a/internal/service/emrserverless/service_package_gen.go
+++ b/internal/service/emrserverless/service_package_gen.go
@@ -5,6 +5,10 @@ package emrserverless
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	emrserverless_sdkv1 "github.com/aws/aws-sdk-go/service/emrserverless"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.EMRServerless
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*emrserverless_sdkv1.EMRServerless, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return emrserverless_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/emrserverless/status.go b/internal/service/emrserverless/status.go
index 8751d96eaa3..a403c87c856 100644
--- a/internal/service/emrserverless/status.go
+++ b/internal/service/emrserverless/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrserverless
 
 import (
diff --git a/internal/service/emrserverless/sweep.go b/internal/service/emrserverless/sweep.go
index b08120660c9..0254645bde4 100644
--- a/internal/service/emrserverless/sweep.go
+++ b/internal/service/emrserverless/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/emrserverless"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
 
@@ -23,11 +25,11 @@ func init() {
 
 func sweepApplications(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).EMRServerlessConn()
+	conn := client.EMRServerlessConn(ctx)
 	input := &emrserverless.ListApplicationsInput{}
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -60,7 +62,7 @@ func sweepApplications(region string) error {
 		return fmt.Errorf("error listing EMR Serverless Applications (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping EMR Serverless Applications (%s): %w", region, err)
diff --git a/internal/service/emrserverless/tags_gen.go b/internal/service/emrserverless/tags_gen.go
index 987a7babd55..69227dd1466 100644
--- a/internal/service/emrserverless/tags_gen.go
+++ b/internal/service/emrserverless/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists emrserverless service tags.
+// listTags lists emrserverless service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn emrserverlessiface.EMRServerlessAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn emrserverlessiface.EMRServerlessAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &emrserverless.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn emrserverlessiface.EMRServerlessAPI, ide
 // ListTags lists emrserverless service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).EMRServerlessConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).EMRServerlessConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from emrserverless service tags.
+// KeyValueTags creates tftags.KeyValueTags from emrserverless service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns emrserverless service tags from Context.
+// getTagsIn returns emrserverless service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets emrserverless service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets emrserverless service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates emrserverless service tags.
+// updateTags updates emrserverless service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn emrserverlessiface.EMRServerlessAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn emrserverlessiface.EMRServerlessAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn emrserverlessiface.EMRServerlessAPI, i
 // UpdateTags updates emrserverless service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).EMRServerlessConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).EMRServerlessConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/emrserverless/wait.go b/internal/service/emrserverless/wait.go
index 59608469087..41c1fcddab0 100644
--- a/internal/service/emrserverless/wait.go
+++ b/internal/service/emrserverless/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package emrserverless
 
 import (
diff --git a/internal/service/events/api_destination.go b/internal/service/events/api_destination.go
index 2af9f1da9ec..d50d4d3138d 100644
--- a/internal/service/events/api_destination.go
+++ b/internal/service/events/api_destination.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package events
 
 import (
@@ -72,7 +75,7 @@ func ResourceAPIDestination() *schema.Resource {
 
 func resourceAPIDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	input := &eventbridge.CreateApiDestinationInput{}
@@ -109,7 +112,7 @@ func resourceAPIDestinationCreate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceAPIDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	input := &eventbridge.DescribeApiDestinationInput{
 		Name: aws.String(d.Id()),
@@ -138,7 +141,7 @@ func resourceAPIDestinationRead(ctx context.Context, d *schema.ResourceData, met
 
 func resourceAPIDestinationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	input := &eventbridge.UpdateApiDestinationInput{}
@@ -171,7 +174,7 @@ func resourceAPIDestinationUpdate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceAPIDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	log.Printf("[INFO] Deleting EventBridge API Destination (%s)", d.Id())
 	input := &eventbridge.DeleteApiDestinationInput{
diff --git a/internal/service/events/api_destination_test.go b/internal/service/events/api_destination_test.go
index 98e3ff98cbf..40e8803a3a9 100644
--- a/internal/service/events/api_destination_test.go
+++ b/internal/service/events/api_destination_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package events_test
 
 import (
@@ -206,7 +209,7 @@ func TestAccEventsAPIDestination_disappears(t *testing.T) {
 
 func testAccCheckAPIDestinationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_event_api_destination" {
@@ -235,7 +238,7 @@ func testAccCheckAPIDestinationExists(ctx context.Context, n string, v *eventbri
 		return fmt.Errorf("Not found: %s", n)
 	}
 
-	conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx)
 	params := eventbridge.DescribeApiDestinationInput{
 		Name: aws.String(rs.Primary.ID),
 	}
diff --git a/internal/service/events/archive.go b/internal/service/events/archive.go
index 1eac5c7d926..cddefc6d559 100644
--- a/internal/service/events/archive.go
+++ b/internal/service/events/archive.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package events
 
 import (
@@ -69,7 +72,7 @@ func ResourceArchive() *schema.Resource {
 
 func resourceArchiveCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	input, err := buildCreateArchiveInputStruct(d)
@@ -93,7 +96,7 @@ func resourceArchiveCreate(ctx context.Context, d *schema.ResourceData, meta int
 
 func resourceArchiveRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 	input := &eventbridge.DescribeArchiveInput{
 		ArchiveName: aws.String(d.Id()),
 	}
@@ -122,7 +125,7 @@ func resourceArchiveRead(ctx context.Context, d *schema.ResourceData, meta inter
 
 func resourceArchiveUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	input, err := buildUpdateArchiveInputStruct(d)
@@ -141,7 +144,7 @@ func resourceArchiveUpdate(ctx context.Context, d *schema.ResourceData, meta int
 
 func resourceArchiveDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	input := &eventbridge.DeleteArchiveInput{
 		ArchiveName: aws.String(d.Get("name").(string)),
diff --git a/internal/service/events/archive_test.go b/internal/service/events/archive_test.go
index 5f14de33221..61d2d805ef5 100644
--- a/internal/service/events/archive_test.go
+++ b/internal/service/events/archive_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package events_test
 
 import (
@@ -104,7 +107,7 @@ func TestAccEventsArchive_disappears(t *testing.T) {
 
 func testAccCheckArchiveDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_event_archive" {
@@ -133,7 +136,7 @@ func testAccCheckArchiveExists(ctx context.Context, n string, v *eventbridge.Des
 		return fmt.Errorf("Not found: %s", n)
 	}
 
-	conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx)
 	params := eventbridge.DescribeArchiveInput{
 		ArchiveName: aws.String(rs.Primary.ID),
 	}
diff --git a/internal/service/events/bus.go b/internal/service/events/bus.go
index ec0128cd40c..1a79ae189e2 100644
--- a/internal/service/events/bus.go
+++ b/internal/service/events/bus.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package events
 
 import (
@@ -59,12 +62,12 @@ func ResourceBus() *schema.Resource {
 
 func resourceBusCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).EventsConn()
+	conn := meta.(*conns.AWSClient).EventsConn(ctx)
 
 	eventBusName := d.Get("name").(string)
 	input := &eventbridge.CreateEventBusInput{
 		Name: aws.String(eventBusName),
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("event_source_name"); ok {
@@ -87,7 +90,7 @@ func resourceBusCreate(ctx context.Context, d *schema.ResourceData, meta interfa
 	d.SetId(eventBusName)
 
 	// For partitions not supporting tag-on-create, attempt tag after create.
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, aws.StringValue(output.EventBusArn), tags) // If default tags only, continue. Otherwise, error. @@ -105,7 +108,7 @@ func resourceBusCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceBusRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) output, err := FindEventBusByName(ctx, conn, d.Id()) @@ -135,7 +138,7 @@ func resourceBusUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceBusDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) log.Printf("[INFO] Deleting EventBridge Event Bus: %s", d.Id()) _, err := conn.DeleteEventBusWithContext(ctx, &eventbridge.DeleteEventBusInput{ diff --git a/internal/service/events/bus_data_source.go b/internal/service/events/bus_data_source.go index becf117592b..9983b11a356 100644 --- a/internal/service/events/bus_data_source.go +++ b/internal/service/events/bus_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -31,7 +34,7 @@ func DataSourceBus() *schema.Resource { func dataSourceBusRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) name := d.Get("name").(string) diff --git a/internal/service/events/bus_data_source_test.go b/internal/service/events/bus_data_source_test.go index da16a9b3638..96b02f373ad 100644 --- a/internal/service/events/bus_data_source_test.go +++ b/internal/service/events/bus_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( diff --git a/internal/service/events/bus_policy.go b/internal/service/events/bus_policy.go index afea53230a7..a39e91f7306 100644 --- a/internal/service/events/bus_policy.go +++ b/internal/service/events/bus_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -58,7 +61,7 @@ func ResourceBusPolicy() *schema.Resource { func resourceBusPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName := d.Get("event_bus_name").(string) @@ -87,7 +90,7 @@ func resourceBusPolicyCreate(ctx context.Context, d *schema.ResourceData, meta i // See also: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_DescribeEventBus.html func resourceBusPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName := d.Id() @@ -158,7 +161,7 @@ func getEventBusPolicy(output *eventbridge.DescribeEventBusOutput) (*string, err func resourceBusPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName := d.Id() @@ -184,7 +187,7 @@ func resourceBusPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceBusPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName := d.Id() removeAllPermissions := true diff --git a/internal/service/events/bus_policy_test.go b/internal/service/events/bus_policy_test.go index 7af81784b77..83ecb78aed1 100644 --- a/internal/service/events/bus_policy_test.go +++ b/internal/service/events/bus_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -116,7 +119,7 @@ func testAccCheckBusPolicyExists(ctx context.Context, pr string) resource.TestCh Name: aws.String(eventBusName), } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) describedEventBus, err := conn.DescribeEventBusWithContext(ctx, input) if err != nil { @@ -147,7 +150,7 @@ func testAccBusPolicyDocument(ctx context.Context, pr string) resource.TestCheck Name: aws.String(eventBusName), } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) describedEventBus, err := conn.DescribeEventBusWithContext(ctx, input) if err != nil { return fmt.Errorf("Reading EventBridge bus policy for '%s' failed: %w", pr, err) diff --git a/internal/service/events/bus_test.go b/internal/service/events/bus_test.go index 2d52f6ebf59..c9b8e4f37df 100644 --- a/internal/service/events/bus_test.go +++ b/internal/service/events/bus_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -192,7 +195,7 @@ func TestAccEventsBus_partnerEventSource(t *testing.T) { func testAccCheckBusDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_event_bus" { @@ -227,7 +230,7 @@ func testAccCheckBusExists(ctx context.Context, n string, v *eventbridge.Describ return fmt.Errorf("No EventBridge Event Bus ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) output, err := tfevents.FindEventBusByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/events/connection.go b/internal/service/events/connection.go index 24afaaed04d..3924434bc43 100644 --- a/internal/service/events/connection.go +++ b/internal/service/events/connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -262,7 +265,7 @@ func ResourceConnection() *schema.Resource { func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) name := d.Get("name").(string) input := &eventbridge.CreateConnectionInput{ @@ -294,7 +297,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) output, err := FindConnectionByName(ctx, conn, d.Id()) @@ -326,7 +329,7 @@ func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta in func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) input := &eventbridge.UpdateConnectionInput{ Name: aws.String(d.Id()), @@ -361,7 +364,7 @@ func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) log.Printf("[INFO] Deleting EventBridge connection (%s)", d.Id()) _, err := conn.DeleteConnectionWithContext(ctx, &eventbridge.DeleteConnectionInput{ diff --git a/internal/service/events/connection_data_source.go b/internal/service/events/connection_data_source.go index b68fd239b5e..4bae5239d03 100644 --- a/internal/service/events/connection_data_source.go +++ b/internal/service/events/connection_data_source.go @@ -1,3 
+1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -42,7 +45,7 @@ func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta var diags diag.Diagnostics d.SetId(d.Get("name").(string)) - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) input := &eventbridge.DescribeConnectionInput{ Name: aws.String(d.Id()), diff --git a/internal/service/events/connection_data_source_test.go b/internal/service/events/connection_data_source_test.go index 313ec9e5138..795ca2ce94f 100644 --- a/internal/service/events/connection_data_source_test.go +++ b/internal/service/events/connection_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( diff --git a/internal/service/events/connection_test.go b/internal/service/events/connection_test.go index 418f0c21060..bcb932106ab 100644 --- a/internal/service/events/connection_test.go +++ b/internal/service/events/connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -576,7 +579,7 @@ func TestAccEventsConnection_disappears(t *testing.T) { func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_event_connection" { @@ -607,7 +610,7 @@ func testAccCheckConnectionExists(ctx context.Context, n string, v *eventbridge. 
return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) output, err := tfevents.FindConnectionByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/events/consts.go b/internal/service/events/consts.go index 0bd20d8203c..15e608e3fd2 100644 --- a/internal/service/events/consts.go +++ b/internal/service/events/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/endpoint.go b/internal/service/events/endpoint.go index df3e7067170..17124f010b8 100644 --- a/internal/service/events/endpoint.go +++ b/internal/service/events/endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -143,7 +146,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in timeout = 2 * time.Minute ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) name := d.Get("name").(string) input := &eventbridge.CreateEndpointInput{ @@ -183,7 +186,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) output, err := FindEndpointByName(ctx, conn, d.Id()) @@ -228,7 +231,7 @@ func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta in timeout = 2 * time.Minute ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) input := &eventbridge.UpdateEndpointInput{ Name: aws.String(d.Id()), @@ -274,7 +277,7 @@ func resourceEndpointDelete(ctx context.Context, d 
*schema.ResourceData, meta in timeout = 2 * time.Minute ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) log.Printf("[INFO] Deleting EventBridge Global Endpoint: %s", d.Id()) _, err := conn.DeleteEndpointWithContext(ctx, &eventbridge.DeleteEndpointInput{ diff --git a/internal/service/events/endpoint_test.go b/internal/service/events/endpoint_test.go index 5d42111b910..01e4d4b86d0 100644 --- a/internal/service/events/endpoint_test.go +++ b/internal/service/events/endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -258,7 +261,7 @@ func TestAccEventsEndpoint_updateRoutingConfig(t *testing.T) { func testAccCheckEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_event_endpoint" { @@ -289,7 +292,7 @@ func testAccCheckEndpointExists(ctx context.Context, n string, v *eventbridge.De return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) output, err := tfevents.FindEndpointByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/events/find.go b/internal/service/events/find.go index a023561bbbe..06acd7926a8 100644 --- a/internal/service/events/find.go +++ b/internal/service/events/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/generate.go b/internal/service/events/generate.go index af02f00ef20..9861732aef1 100644 --- a/internal/service/events/generate.go +++ b/internal/service/events/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=ListEventBuses,ListRules,ListTargetsByRule //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package events diff --git a/internal/service/events/id.go b/internal/service/events/id.go index 799c778b1f3..7231368348c 100644 --- a/internal/service/events/id.go +++ b/internal/service/events/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/id_test.go b/internal/service/events/id_test.go index 027104e713e..b3037459459 100644 --- a/internal/service/events/id_test.go +++ b/internal/service/events/id_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( diff --git a/internal/service/events/permission.go b/internal/service/events/permission.go index 36cc48e3b97..d580e45fb62 100644 --- a/internal/service/events/permission.go +++ b/internal/service/events/permission.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -87,7 +90,7 @@ func ResourcePermission() *schema.Resource { func resourcePermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName := d.Get("event_bus_name").(string) statementID := d.Get("statement_id").(string) @@ -115,7 +118,7 @@ func resourcePermissionCreate(ctx context.Context, d *schema.ResourceData, meta // See also: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_DescribeEventBus.html func resourcePermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName, statementID, err := PermissionParseResourceID(d.Id()) if err != nil { @@ -205,7 +208,7 @@ func getPolicyStatement(output *eventbridge.DescribeEventBusOutput, statementID err := json.Unmarshal([]byte(*output.Policy), &policyDoc) if err != nil { - return nil, fmt.Errorf("error reading EventBridge permission (%s): %w", statementID, err) + return nil, fmt.Errorf("reading EventBridge permission (%s): %w", statementID, err) } return FindPermissionPolicyStatementByID(&policyDoc, statementID) @@ -213,7 +216,7 @@ func getPolicyStatement(output *eventbridge.DescribeEventBusOutput, statementID func resourcePermissionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName, statementID, err := PermissionParseResourceID(d.Id()) if err != nil { @@ -237,7 +240,7 @@ func resourcePermissionUpdate(ctx context.Context, d *schema.ResourceData, meta func resourcePermissionDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName, statementID, err := PermissionParseResourceID(d.Id()) if err != nil { diff --git a/internal/service/events/permission_test.go b/internal/service/events/permission_test.go index 85a4d39fdac..8b0e8d78197 100644 --- a/internal/service/events/permission_test.go +++ b/internal/service/events/permission_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -274,7 +277,7 @@ func TestAccEventsPermission_disappears(t *testing.T) { func testAccCheckPermissionExists(ctx context.Context, pr string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) rs, ok := s.RootModule().Resources[pr] if !ok { return fmt.Errorf("Not found: %s", pr) @@ -313,7 +316,7 @@ func testAccCheckPermissionExists(ctx context.Context, pr string) resource.TestC func testAccCheckPermissionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_event_permission" { diff --git a/internal/service/events/rule.go b/internal/service/events/rule.go index 968de2e1abe..8ccf033a5b8 100644 --- a/internal/service/events/rule.go +++ b/internal/service/events/rule.go @@ -1,7 +1,12 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( + "bytes" "context" + "encoding/json" "fmt" "log" "time" @@ -12,7 +17,6 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" @@ -65,7 +69,7 @@ func ResourceRule() *schema.Resource { ValidateFunc: validateEventPatternValue(), AtLeastOneOf: []string{"schedule_expression", "event_pattern"}, StateFunc: func(v interface{}) string { - json, _ := structure.NormalizeJsonString(v.(string)) + json, _ := RuleEventPatternJSONDecoder(v.(string)) return json }, }, @@ -111,11 +115,11 @@ func ResourceRule() *schema.Resource { func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := expandPutRuleInput(d, name) - input.Tags = GetTagsIn(ctx) + input.Tags = getTagsIn(ctx) arn, err := retryPutRule(ctx, conn, input) @@ -142,7 +146,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, arn, tags) // If default tags only, continue. Otherwise, error. 
@@ -160,7 +164,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName, ruleName, err := RuleParseResourceID(d.Id()) @@ -185,7 +189,7 @@ func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("description", output.Description) d.Set("event_bus_name", eventBusName) // Use event bus name from resource ID as API response may collapse any ARN. if output.EventPattern != nil { - pattern, err := structure.NormalizeJsonString(aws.StringValue(output.EventPattern)) + pattern, err := RuleEventPatternJSONDecoder(aws.StringValue(output.EventPattern)) if err != nil { return sdkdiag.AppendErrorf(diags, "event pattern contains an invalid JSON: %s", err) } @@ -202,7 +206,7 @@ func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) if d.HasChangesExcept("tags", "tags_all") { _, ruleName, err := RuleParseResourceID(d.Id()) @@ -224,7 +228,7 @@ func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) eventBusName, ruleName, err := RuleParseResourceID(d.Id()) @@ -295,6 +299,34 @@ func FindRuleByTwoPartKey(ctx context.Context, conn *eventbridge.EventBridge, ev return output, nil } +// RuleEventPatternJSONDecoder decodes unicode translation of <,>,& +func 
RuleEventPatternJSONDecoder(jsonString interface{}) (string, error) { + var j interface{} + + if jsonString == nil || jsonString.(string) == "" { + return "", nil + } + + s := jsonString.(string) + + err := json.Unmarshal([]byte(s), &j) + if err != nil { + return s, err + } + + b, err := json.Marshal(j) + if err != nil { + return "", err + } + + if bytes.Contains(b, []byte("\\u003c")) || bytes.Contains(b, []byte("\\u003e")) || bytes.Contains(b, []byte("\\u0026")) { + b = bytes.Replace(b, []byte("\\u003c"), []byte("<"), -1) + b = bytes.Replace(b, []byte("\\u003e"), []byte(">"), -1) + b = bytes.Replace(b, []byte("\\u0026"), []byte("&"), -1) + } + return string(b[:]), nil +} + func expandPutRuleInput(d *schema.ResourceData, name string) *eventbridge.PutRuleInput { apiObject := &eventbridge.PutRuleInput{ Name: aws.String(name), @@ -309,7 +341,7 @@ func expandPutRuleInput(d *schema.ResourceData, name string) *eventbridge.PutRul } if v, ok := d.GetOk("event_pattern"); ok { - json, _ := structure.NormalizeJsonString(v) + json, _ := RuleEventPatternJSONDecoder(v.(string)) apiObject.EventPattern = aws.String(json) } @@ -332,7 +364,7 @@ func expandPutRuleInput(d *schema.ResourceData, name string) *eventbridge.PutRul func validateEventPatternValue() schema.SchemaValidateFunc { return func(v interface{}, k string) (ws []string, errors []error) { - json, err := structure.NormalizeJsonString(v) + json, err := RuleEventPatternJSONDecoder(v.(string)) if err != nil { errors = append(errors, fmt.Errorf("%q contains an invalid JSON: %w", k, err)) diff --git a/internal/service/events/rule_test.go b/internal/service/events/rule_test.go index 37d442a89a8..f44916406f5 100644 --- a/internal/service/events/rule_test.go +++ b/internal/service/events/rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -9,6 +12,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/eventbridge" + "github.com/google/go-cmp/cmp" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -29,6 +33,41 @@ func testAccErrorCheckSkip(t *testing.T) resource.ErrorCheckFunc { ) } +func TestRuleEventPatternJSONDecoder(t *testing.T) { + t.Parallel() + + type testCase struct { + input string + expected string + } + tests := map[string]testCase{ + "lessThanGreaterThan": { + input: `{"detail":{"count":[{"numeric":["\u003e",0,"\u003c",5]}]}}`, + expected: `{"detail":{"count":[{"numeric":[">",0,"<",5]}]}}`, + }, + "ampersand": { + input: `{"detail":{"count":[{"numeric":["\u0026",0,"\u0026",5]}]}}`, + expected: `{"detail":{"count":[{"numeric":["&",0,"&",5]}]}}`, + }, + } + + for name, test := range tests { + name, test := name, test + t.Run(name, func(t *testing.T) { + t.Parallel() + + got, err := tfevents.RuleEventPatternJSONDecoder(test.input) + if err != nil { + t.Fatal(err) + } + + if diff := cmp.Diff(got, test.expected); diff != "" { + t.Errorf("unexpected diff (+wanted, -got): %s", diff) + } + }) + } +} + func TestAccEventsRule_basic(t *testing.T) { ctx := acctest.Context(t) var v1, v2, v3 eventbridge.DescribeRuleOutput @@ -256,6 +295,31 @@ func TestAccEventsRule_pattern(t *testing.T) { }) } +func TestAccEventsRule_patternJSONEncoder(t *testing.T) { + ctx := acctest.Context(t) + var v1 eventbridge.DescribeRuleOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_cloudwatch_event_rule.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, eventbridge.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + 
CheckDestroy: testAccCheckRuleDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccRuleConfig_patternJSONEncoder(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckRuleExists(ctx, resourceName, &v1), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "schedule_expression", ""), + acctest.CheckResourceAttrEquivalentJSON(resourceName, "event_pattern", `{"detail":{"count":[{"numeric":[">",0,"<",5]}]}}`), + ), + }, + }, + }) +} + func TestAccEventsRule_scheduleAndPattern(t *testing.T) { ctx := acctest.Context(t) var v eventbridge.DescribeRuleOutput @@ -530,7 +594,7 @@ func testAccCheckRuleExists(ctx context.Context, n string, v *eventbridge.Descri return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) output, err := tfevents.FindRuleByTwoPartKey(ctx, conn, eventBusName, ruleName) @@ -557,7 +621,7 @@ func testAccCheckRuleEnabled(ctx context.Context, n string, want string) resourc return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) output, err := tfevents.FindRuleByTwoPartKey(ctx, conn, eventBusName, ruleName) @@ -575,7 +639,7 @@ func testAccCheckRuleEnabled(ctx context.Context, n string, want string) resourc func testAccCheckRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_event_rule" { @@ -685,6 +749,15 @@ PATTERN `, rName, pattern) } +func testAccRuleConfig_patternJSONEncoder(rName string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_event_rule" "test" { + name = %[1]q + event_pattern = jsonencode({ "detail" : { "count" : [{ "numeric" 
: [">", 0, "<", 5] }] } }) +} +`, rName) +} + func testAccRuleConfig_scheduleAndPattern(rName, pattern string) string { return fmt.Sprintf(` resource "aws_cloudwatch_event_rule" "test" { diff --git a/internal/service/events/service_package_gen.go b/internal/service/events/service_package_gen.go index 4ed3257ae73..727fc83c172 100644 --- a/internal/service/events/service_package_gen.go +++ b/internal/service/events/service_package_gen.go @@ -5,6 +5,10 @@ package events import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + eventbridge_sdkv1 "github.com/aws/aws-sdk-go/service/eventbridge" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -90,4 +94,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Events } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*eventbridge_sdkv1.EventBridge, error) { + sess := config["session"].(*session_sdkv1.Session) + + return eventbridge_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/events/source_data_source.go b/internal/service/events/source_data_source.go index 1d01feca64a..1a95588d741 100644 --- a/internal/service/events/source_data_source.go +++ b/internal/service/events/source_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
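The new `NewConn` factory in `service_package_gen.go` copies the shared SDK v1 session and applies a per-service endpoint override. The sketch below models that copy-with-override semantics using hypothetical stand-in types (`Config`, `Session` here are not the real `aws-sdk-go` types); the key property is that the override lands only on the copy, leaving the shared base session untouched.

```go
package main

import "fmt"

// Config and Session are hypothetical stand-ins for the SDK v1 types
// used by NewConn.
type Config struct{ Endpoint string }
type Session struct{ cfg Config }

// Copy mirrors the session-copy semantics: the base session is left
// untouched and the override applies only to the returned copy.
func (s *Session) Copy(override *Config) *Session {
	c := s.cfg
	if override.Endpoint != "" {
		c.Endpoint = override.Endpoint
	}
	return &Session{cfg: c}
}

func main() {
	base := &Session{cfg: Config{Endpoint: "https://events.us-east-1.amazonaws.com"}}
	svc := base.Copy(&Config{Endpoint: "http://localhost:4566"}) // e.g. a local test endpoint
	fmt.Println(svc.cfg.Endpoint)  // overridden on the copy
	fmt.Println(base.cfg.Endpoint) // base session unchanged
}
```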
+// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -44,7 +47,7 @@ func DataSourceSource() *schema.Resource { func dataSourceSourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) input := &eventbridge.ListEventSourcesInput{} if v, ok := d.GetOk("name_prefix"); ok { diff --git a/internal/service/events/source_data_source_test.go b/internal/service/events/source_data_source_test.go index 68236a117cd..3e4390db89c 100644 --- a/internal/service/events/source_data_source_test.go +++ b/internal/service/events/source_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( diff --git a/internal/service/events/status.go b/internal/service/events/status.go index 0db4838bd6f..37a7c3aef54 100644 --- a/internal/service/events/status.go +++ b/internal/service/events/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/sweep.go b/internal/service/events/sweep.go index f2d15f5a16c..2f9d2d728c4 100644 --- a/internal/service/events/sweep.go +++ b/internal/service/events/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/eventbridge" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -69,11 +71,11 @@ func init() { func sweepAPIDestination(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := client.EventsConn(ctx) var sweeperErrs *multierror.Error @@ -120,11 +122,11 @@ func sweepAPIDestination(region string) error { func sweepArchives(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := client.EventsConn(ctx) input := &eventbridge.ListArchivesInput{} @@ -171,11 +173,11 @@ func sweepArchives(region string) error { func sweepBuses(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := client.EventsConn(ctx) var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -209,7 +211,7 @@ func sweepBuses(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EventBridge event buses: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := 
sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EventBridge Event Buses: %w", err)) } @@ -218,11 +220,11 @@ func sweepBuses(region string) error { func sweepConnection(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := client.EventsConn(ctx) var sweeperErrs *multierror.Error @@ -266,11 +268,11 @@ func sweepConnection(region string) error { func sweepPermissions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := client.EventsConn(ctx) output, err := conn.DescribeEventBusWithContext(ctx, &eventbridge.DescribeEventBusInput{}) if err != nil { @@ -311,11 +313,11 @@ func sweepPermissions(region string) error { func sweepRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := client.EventsConn(ctx) input := &eventbridge.ListEventBusesInput{} var sweeperErrs *multierror.Error @@ -381,11 +383,11 @@ func sweepRules(region string) error { func sweepTargets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).EventsConn() + conn := 
client.EventsConn(ctx) input := &eventbridge.ListEventBusesInput{} var sweeperErrs *multierror.Error diff --git a/internal/service/events/tags_gen.go b/internal/service/events/tags_gen.go index 129f95968dc..6dab6ba9bd7 100644 --- a/internal/service/events/tags_gen.go +++ b/internal/service/events/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists events service tags. +// listTags lists events service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, identifier string) (tftags.KeyValueTags, error) { input := &eventbridge.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, identif // ListTags lists events service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).EventsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).EventsConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*eventbridge.Tag) tftags.KeyValueT return tftags.New(ctx, m) } -// GetTagsIn returns events service tags from Context. +// getTagsIn returns events service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*eventbridge.Tag { +func getTagsIn(ctx context.Context) []*eventbridge.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*eventbridge.Tag { return nil } -// SetTagsOut sets events service tags in Context. -func SetTagsOut(ctx context.Context, tags []*eventbridge.Tag) { +// setTagsOut sets events service tags in Context. +func setTagsOut(ctx context.Context, tags []*eventbridge.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, ident return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates events service tags. +// updateTags updates events service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn eventbridgeiface.EventBridgeAPI, ident // UpdateTags updates events service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).EventsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).EventsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/events/target.go b/internal/service/events/target.go index 125f6c310ad..fe542f62222 100644 --- a/internal/service/events/target.go +++ b/internal/service/events/target.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( @@ -440,7 +443,7 @@ func ResourceTarget() *schema.Resource { } func resourceTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) rule := d.Get("rule").(string) @@ -476,7 +479,7 @@ func resourceTargetCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) busName := d.Get("event_bus_name").(string) @@ -565,7 +568,7 @@ func resourceTargetRead(ctx context.Context, d *schema.ResourceData, meta interf } func resourceTargetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) input := buildPutTargetInputStruct(ctx, d) @@ -584,7 +587,7 @@ func resourceTargetUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EventsConn() + conn := meta.(*conns.AWSClient).EventsConn(ctx) input := &eventbridge.RemoveTargetsInput{ Ids: 
[]*string{aws.String(d.Get("target_id").(string))}, diff --git a/internal/service/events/target_migrate.go b/internal/service/events/target_migrate.go index 7a6315a0c71..952f2bdfc1e 100644 --- a/internal/service/events/target_migrate.go +++ b/internal/service/events/target_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/target_migrate_test.go b/internal/service/events/target_migrate_test.go index b2a351014ba..f3370d6e898 100644 --- a/internal/service/events/target_migrate_test.go +++ b/internal/service/events/target_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events_test import ( diff --git a/internal/service/events/target_test.go b/internal/service/events/target_test.go index 65a902a6247..929a9a345f8 100644 --- a/internal/service/events/target_test.go +++ b/internal/service/events/target_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events_test import ( @@ -977,7 +980,7 @@ func testAccCheckTargetExists(ctx context.Context, n string, v *eventbridge.Targ return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) output, err := tfevents.FindTargetByThreePartKey(ctx, conn, rs.Primary.Attributes["event_bus_name"], rs.Primary.Attributes["rule"], rs.Primary.Attributes["target_id"]) @@ -993,7 +996,7 @@ func testAccCheckTargetExists(ctx context.Context, n string, v *eventbridge.Targ func testAccCheckTargetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EventsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_event_target" { diff --git a/internal/service/events/validate.go b/internal/service/events/validate.go index bcc1ddf8c00..9933f1857a9 100644 --- a/internal/service/events/validate.go +++ b/internal/service/events/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/validate_test.go b/internal/service/events/validate_test.go index f82029dcc81..14cf3a2aa83 100644 --- a/internal/service/events/validate_test.go +++ b/internal/service/events/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/events/wait.go b/internal/service/events/wait.go index 4c4764de3c9..0ab814c9fae 100644 --- a/internal/service/events/wait.go +++ b/internal/service/events/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package events import ( diff --git a/internal/service/evidently/feature.go b/internal/service/evidently/feature.go index 6f6e7d51948..3997dbd9006 100644 --- a/internal/service/evidently/feature.go +++ b/internal/service/evidently/feature.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently import ( @@ -16,10 +19,10 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -196,14 +199,14 @@ func ResourceFeature() *schema.Resource { } func resourceFeatureCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) name := d.Get("name").(string) project := d.Get("project").(string) input := &cloudwatchevidently.CreateFeatureInput{ Name: aws.String(name), Project: aws.String(project), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Variations: expandVariations(d.Get("variations").(*schema.Set).List()), } @@ -241,7 +244,7 @@ func resourceFeatureCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceFeatureRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) featureName, projectNameOrARN, err := 
FeatureParseID(d.Id()) @@ -281,13 +284,13 @@ func resourceFeatureRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("status", feature.Status) d.Set("value_type", feature.ValueType) - SetTagsOut(ctx, feature.Tags) + setTagsOut(ctx, feature.Tags) return nil } func resourceFeatureUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) if d.HasChanges("default_variation", "description", "entity_overrides", "evaluation_strategy", "variations") { name := d.Get("name").(string) @@ -327,7 +330,7 @@ func resourceFeatureUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceFeatureDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) name := d.Get("name").(string) project := d.Get("project").(string) diff --git a/internal/service/evidently/feature_test.go b/internal/service/evidently/feature_test.go index c440229b911..405e40d3251 100644 --- a/internal/service/evidently/feature_test.go +++ b/internal/service/evidently/feature_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package evidently_test import ( @@ -616,7 +619,7 @@ func TestAccEvidentlyFeature_disappears(t *testing.T) { func testAccCheckFeatureDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_evidently_feature" { continue @@ -663,7 +666,7 @@ func testAccCheckFeatureExists(ctx context.Context, n string, v *cloudwatchevide return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) output, err := tfcloudwatchevidently.FindFeatureWithProjectNameorARN(ctx, conn, featureName, projectNameOrARN) diff --git a/internal/service/evidently/find.go b/internal/service/evidently/find.go index 9e27124861d..5fe0b279516 100644 --- a/internal/service/evidently/find.go +++ b/internal/service/evidently/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently import ( diff --git a/internal/service/evidently/generate.go b/internal/service/evidently/generate.go index 27a9feee03e..360376ded08 100644 --- a/internal/service/evidently/generate.go +++ b/internal/service/evidently/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package evidently diff --git a/internal/service/evidently/launch.go b/internal/service/evidently/launch.go index 86f1621abde..b583bb7c10a 100644 --- a/internal/service/evidently/launch.go +++ b/internal/service/evidently/launch.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently import ( @@ -294,7 +297,7 @@ func ResourceLaunch() *schema.Resource { } func resourceLaunchCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) name := d.Get("name").(string) project := d.Get("project").(string) @@ -302,7 +305,7 @@ func resourceLaunchCreate(ctx context.Context, d *schema.ResourceData, meta inte Name: aws.String(name), Project: aws.String(project), Groups: expandGroups(d.Get("groups").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -339,7 +342,7 @@ func resourceLaunchCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceLaunchRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) launchName, projectNameOrARN, err := LaunchParseID(d.Id()) @@ -386,13 +389,13 @@ func resourceLaunchRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("status_reason", launch.StatusReason) d.Set("type", launch.Type) - SetTagsOut(ctx, launch.Tags) + setTagsOut(ctx, launch.Tags) return nil } func resourceLaunchUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) if d.HasChanges("description", "groups", "metric_monitors", "randomization_salt", "scheduled_splits_config") { name := d.Get("name").(string) @@ -423,7 +426,7 @@ func 
resourceLaunchUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceLaunchDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) name := d.Get("name").(string) project := d.Get("project").(string) diff --git a/internal/service/evidently/launch_test.go b/internal/service/evidently/launch_test.go index 598f5227ad6..0a318962727 100644 --- a/internal/service/evidently/launch_test.go +++ b/internal/service/evidently/launch_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently_test import ( @@ -615,7 +618,7 @@ func TestAccEvidentlyLaunch_disappears(t *testing.T) { func testAccCheckLaunchDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_evidently_launch" { continue @@ -662,7 +665,7 @@ func testAccCheckLaunchExists(ctx context.Context, n string, v *cloudwatcheviden return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) output, err := tfcloudwatchevidently.FindLaunchWithProjectNameorARN(ctx, conn, launchName, projectNameOrARN) diff --git a/internal/service/evidently/project.go b/internal/service/evidently/project.go index 8120105de1c..f22847ef373 100644 --- a/internal/service/evidently/project.go +++ b/internal/service/evidently/project.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package evidently import ( @@ -155,12 +158,12 @@ func ResourceProject() *schema.Resource { } func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) name := d.Get("name").(string) input := &cloudwatchevidently.CreateProjectInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -187,7 +190,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) project, err := FindProjectByNameOrARN(ctx, conn, d.Id()) @@ -217,13 +220,13 @@ func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("name", project.Name) d.Set("status", project.Status) - SetTagsOut(ctx, project.Tags) + setTagsOut(ctx, project.Tags) return nil } func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) // Project has 2 update APIs // UpdateProjectWithContext: Updates the description of an existing project. 
@@ -281,7 +284,7 @@ func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceProjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) log.Printf("[DEBUG] Deleting CloudWatch Evidently Project: %s", d.Id()) _, err := conn.DeleteProjectWithContext(ctx, &cloudwatchevidently.DeleteProjectInput{ diff --git a/internal/service/evidently/project_test.go b/internal/service/evidently/project_test.go index 247f604af4d..ad272eacf96 100644 --- a/internal/service/evidently/project_test.go +++ b/internal/service/evidently/project_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently_test import ( @@ -359,7 +362,7 @@ func TestAccEvidentlyProject_disappears(t *testing.T) { func testAccCheckProjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_evidently_project" { continue @@ -394,7 +397,7 @@ func testAccCheckProjectExists(ctx context.Context, n string, v *cloudwatchevide return fmt.Errorf("No CloudWatch Evidently Project ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) output, err := tfcloudwatchevidently.FindProjectByNameOrARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/evidently/segment.go b/internal/service/evidently/segment.go index 317f570532a..9a0eac6d64d 100644 --- a/internal/service/evidently/segment.go +++ b/internal/service/evidently/segment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package evidently import ( @@ -92,13 +95,13 @@ func ResourceSegment() *schema.Resource { } func resourceSegmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) name := d.Get("name").(string) input := &cloudwatchevidently.CreateSegmentInput{ Name: aws.String(name), Pattern: aws.String(d.Get("pattern").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -117,7 +120,7 @@ func resourceSegmentCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceSegmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) segment, err := FindSegmentByNameOrARN(ctx, conn, d.Id()) @@ -140,7 +143,7 @@ func resourceSegmentRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("name", segment.Name) d.Set("pattern", segment.Pattern) - SetTagsOut(ctx, segment.Tags) + setTagsOut(ctx, segment.Tags) return nil } @@ -151,7 +154,7 @@ func resourceSegmentUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceSegmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).EvidentlyConn() + conn := meta.(*conns.AWSClient).EvidentlyConn(ctx) log.Printf("[DEBUG] Deleting CloudWatch Evidently Segment: %s", d.Id()) _, err := conn.DeleteSegmentWithContext(ctx, &cloudwatchevidently.DeleteSegmentInput{ diff --git a/internal/service/evidently/segment_test.go b/internal/service/evidently/segment_test.go index 876cc688cbe..0730a17408c 100644 --- a/internal/service/evidently/segment_test.go +++ b/internal/service/evidently/segment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package evidently_test import ( @@ -204,7 +207,7 @@ func TestAccEvidentlySegment_disappears(t *testing.T) { func testAccCheckSegmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_evidently_segment" { continue @@ -239,7 +242,7 @@ func testAccCheckSegmentExists(ctx context.Context, n string, v *cloudwatchevide return fmt.Errorf("No CloudWatch Evidently Segment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EvidentlyConn(ctx) output, err := tfcloudwatchevidently.FindSegmentByNameOrARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/evidently/service_package_gen.go b/internal/service/evidently/service_package_gen.go index 60461af02f3..4de7208b9cf 100644 --- a/internal/service/evidently/service_package_gen.go +++ b/internal/service/evidently/service_package_gen.go @@ -5,6 +5,10 @@ package evidently import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudwatchevidently_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatchevidently" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -64,4 +68,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Evidently } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudwatchevidently_sdkv1.CloudWatchEvidently, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudwatchevidently_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/evidently/status.go b/internal/service/evidently/status.go index eae94eb0e77..f2e831ad7b1 100644 --- a/internal/service/evidently/status.go +++ b/internal/service/evidently/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently import ( diff --git a/internal/service/evidently/sweep.go b/internal/service/evidently/sweep.go index 7e90f5e4710..4543a03c41d 100644 --- a/internal/service/evidently/sweep.go +++ b/internal/service/evidently/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/cloudwatchevidently" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,11 +26,11 @@ func init() { func sweepProject(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %w", err) } - conn := client.(*conns.AWSClient).EvidentlyConn() + conn := client.EvidentlyConn(ctx) input := &cloudwatchevidently.ListProjectsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -58,7 +60,7 @@ func sweepProject(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Evidently Projects for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Evidently Projects for %s: %w", region, err)) } diff --git a/internal/service/evidently/tags_gen.go b/internal/service/evidently/tags_gen.go index 8e63c843fcb..44ab2be6706 100644 --- a/internal/service/evidently/tags_gen.go +++ b/internal/service/evidently/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from evidently service tags. +// KeyValueTags creates tftags.KeyValueTags from evidently service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns evidently service tags from Context. 
+// getTagsIn returns evidently service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets evidently service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets evidently service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates evidently service tags. +// updateTags updates evidently service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudwatchevidentlyiface.CloudWatchEvidentlyAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudwatchevidentlyiface.CloudWatchEvidentlyAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn cloudwatchevidentlyiface.CloudWatchEvi // UpdateTags updates evidently service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).EvidentlyConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).EvidentlyConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/evidently/wait.go b/internal/service/evidently/wait.go index cc64c9261ce..f6c23f8db30 100644 --- a/internal/service/evidently/wait.go +++ b/internal/service/evidently/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package evidently import ( diff --git a/internal/service/finspace/generate.go b/internal/service/finspace/generate.go new file mode 100644 index 00000000000..d0b2ec2728c --- /dev/null +++ b/internal/service/finspace/generate.go @@ -0,0 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -ServiceTagsMap -AWSSDKVersion=2 -KVTValues -ListTags -CreateTags -UpdateTags -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package finspace diff --git a/internal/service/finspace/kx_cluster.go b/internal/service/finspace/kx_cluster.go new file mode 100644 index 00000000000..dca91fb12c7 --- /dev/null +++ b/internal/service/finspace/kx_cluster.go @@ -0,0 +1,1179 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package finspace + +import ( + "context" + "errors" + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_finspace_kx_cluster", name="Kx Cluster") +// @Tags(identifierAttribute="arn") +func ResourceKxCluster() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceKxClusterCreate, + ReadWithoutTimeout: resourceKxClusterRead, + UpdateWithoutTimeout: resourceKxClusterUpdate, + DeleteWithoutTimeout: resourceKxClusterDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(2 * time.Minute), // Tags only + Delete: schema.DefaultTimeout(40 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "auto_scaling_configuration": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{ + "auto_scaling_metric": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice( + enum.Slice(types.AutoScalingMetricCpuUtilizationPercentage), true), + }, + "max_node_count": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 5), + }, + "metric_target": { + Type: schema.TypeFloat, + Required: true, + ForceNew: true, + ValidateFunc: validation.FloatBetween(0, 100), + }, + "min_node_count": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 5), + }, + "scale_in_cooldown_seconds": { + Type: schema.TypeFloat, + Required: true, + ForceNew: true, + ValidateFunc: validation.FloatBetween(0, 100000), + }, + "scale_out_cooldown_seconds": { + Type: schema.TypeFloat, + Required: true, + ForceNew: true, + ValidateFunc: validation.FloatBetween(0, 100000), + }, + }, + }, + }, + "availability_zone_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "az_mode": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateDiagFunc: enum.Validate[types.KxAzMode](), + }, + "cache_storage_configurations": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "size": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1200, 33600), + }, + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(8, 10), + }, + }, + }, + }, + "capacity_configuration": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "node_count": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 5), + }, + "node_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, 
+ ValidateFunc: validation.StringLenBetween(1, 32), + }, + }, + }, + }, + "code": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "s3_bucket": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(3, 255), + }, + "s3_key": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(3, 1024), + }, + "s3_object_version": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(3, 63), + }, + }, + }, + }, + "command_line_arguments": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + ValidateDiagFunc: verify.ValidAllDiag( + validation.MapKeyLenBetween(1, 50), + validation.MapValueLenBetween(1, 50), + ), + }, + "created_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "database": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cache_configurations": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cache_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + "CACHE_1000", + }, true), + }, + "db_paths": { + Type: schema.TypeSet, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "changeset_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 26), + }, + "database_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(3, 63), + }, + }, + }, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + 
ValidateFunc: validation.StringLenBetween(1, 1000), + }, + "environment_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 32), + }, + "execution_role": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 1024), + }, + "initialization_script": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "last_modified_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(3, 63), + }, + "release_label": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 16), + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "savedown_storage_configuration": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice( + enum.Slice(types.KxSavedownStorageTypeSds01), true), + }, + "size": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(4, 16000), + }, + }, + }, + }, + "status_reason": { + Type: schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateDiagFunc: enum.Validate[types.KxClusterType](), + }, + "vpc_configuration": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ip_address_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: 
validation.StringInSlice(enum.Slice(types.IPAddressTypeIpV4), true), + }, + "security_group_ids": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 1024), + }, + }, + "subnet_ids": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 1024), + }, + }, + "vpc_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 1024), + }, + }, + }, + }, + }, + + CustomizeDiff: verify.SetTagsDiff, + } +} + +const ( + ResNameKxCluster = "Kx Cluster" + + kxClusterIDPartCount = 2 +) + +func resourceKxClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + environmentId := d.Get("environment_id").(string) + clusterName := d.Get("name").(string) + idParts := []string{ + environmentId, + clusterName, + } + rID, err := flex.FlattenResourceId(idParts, kxClusterIDPartCount, false) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionFlatteningResourceId, ResNameKxCluster, d.Get("name").(string), err)...) 
+ } + d.SetId(rID) + + in := &finspace.CreateKxClusterInput{ + EnvironmentId: aws.String(environmentId), + ClusterName: aws.String(clusterName), + ClusterType: types.KxClusterType(d.Get("type").(string)), + ReleaseLabel: aws.String(d.Get("release_label").(string)), + AzMode: types.KxAzMode(d.Get("az_mode").(string)), + CapacityConfiguration: expandCapacityConfiguration(d.Get("capacity_configuration").([]interface{})), + ClientToken: aws.String(id.UniqueId()), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("description"); ok { + in.ClusterDescription = aws.String(v.(string)) + } + + if v, ok := d.GetOk("initialization_script"); ok { + in.InitializationScript = aws.String(v.(string)) + } + + if v, ok := d.GetOk("execution_role"); ok { + in.ExecutionRole = aws.String(v.(string)) + } + + if v, ok := d.GetOk("availability_zone_id"); ok { + in.AvailabilityZoneId = aws.String(v.(string)) + } + + if v, ok := d.GetOk("command_line_arguments"); ok && len(v.(map[string]interface{})) > 0 { + in.CommandLineArguments = expandCommandLineArguments(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("vpc_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.VpcConfiguration = expandVPCConfiguration(v.([]interface{})) + } + + if v, ok := d.GetOk("auto_scaling_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.AutoScalingConfiguration = expandAutoScalingConfiguration(v.([]interface{})) + } + + if v, ok := d.GetOk("database"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.Databases = expandDatabases(v.([]interface{})) + } + + if v, ok := d.GetOk("savedown_storage_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.SavedownStorageConfiguration = expandSavedownStorageConfiguration(v.([]interface{})) + } + + if v, ok := d.GetOk("cache_storage_configurations"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + 
in.CacheStorageConfigurations = expandCacheStorageConfigurations(v.([]interface{})) + } + + if v, ok := d.GetOk("code"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.Code = expandCode(v.([]interface{})) + } + + out, err := conn.CreateKxCluster(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxCluster, d.Get("name").(string), err)...) + } + + if out == nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxCluster, d.Get("name").(string), errors.New("empty output"))...) + } + + if _, err := waitKxClusterCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionWaitingForCreation, ResNameKxCluster, d.Id(), err)...) + } + + return append(diags, resourceKxClusterRead(ctx, d, meta)...) +} + +func resourceKxClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + out, err := findKxClusterByID(ctx, conn, d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] FinSpace KxCluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionReading, ResNameKxCluster, d.Id(), err)...) 
+ } + + d.Set("status", out.Status) + d.Set("status_reason", out.StatusReason) + d.Set("created_timestamp", out.CreatedTimestamp.String()) + d.Set("last_modified_timestamp", out.LastModifiedTimestamp.String()) + d.Set("name", out.ClusterName) + d.Set("type", out.ClusterType) + d.Set("release_label", out.ReleaseLabel) + d.Set("description", out.ClusterDescription) + d.Set("az_mode", out.AzMode) + d.Set("availability_zone_id", out.AvailabilityZoneId) + d.Set("execution_role", out.ExecutionRole) + d.Set("initialization_script", out.InitializationScript) + + if err := d.Set("capacity_configuration", flattenCapacityConfiguration(out.CapacityConfiguration)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + + if err := d.Set("vpc_configuration", flattenVPCConfiguration(out.VpcConfiguration)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + + if err := d.Set("code", flattenCode(out.Code)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + + if err := d.Set("auto_scaling_configuration", flattenAutoScalingConfiguration(out.AutoScalingConfiguration)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + + if err := d.Set("savedown_storage_configuration", flattenSavedownStorageConfiguration( + out.SavedownStorageConfiguration)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + + if err := d.Set("cache_storage_configurations", flattenCacheStorageConfigurations( + out.CacheStorageConfigurations)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) 
+ } + + if d.IsNewResource() { + if err := d.Set("database", flattenDatabases(out.Databases)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + } + + if err := d.Set("command_line_arguments", flattenCommandLineArguments(out.CommandLineArguments)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + + // Compose the cluster ARN from the environment ARN. + parts, err := flex.ExpandResourceId(d.Id(), kxClusterIDPartCount, false) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + env, err := findKxEnvironmentByID(ctx, conn, parts[0]) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxCluster, d.Id(), err)...) + } + arn := fmt.Sprintf("%s/kxCluster/%s", aws.ToString(env.EnvironmentArn), aws.ToString(out.ClusterName)) + d.Set("arn", arn) + d.Set("environment_id", parts[0]) + + return diags +} + +func resourceKxClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + // Tags only. + return append(diags, resourceKxClusterRead(ctx, d, meta)...)
+} + +func resourceKxClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + log.Printf("[INFO] Deleting FinSpace KxCluster %s", d.Id()) + _, err := conn.DeleteKxCluster(ctx, &finspace.DeleteKxClusterInput{ + ClusterName: aws.String(d.Get("name").(string)), + EnvironmentId: aws.String(d.Get("environment_id").(string)), + }) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return diags + } + + return append(diags, create.DiagError(names.FinSpace, create.ErrActionDeleting, ResNameKxCluster, d.Id(), err)...) + } + + _, err = waitKxClusterDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)) + if err != nil && !tfresource.NotFound(err) { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionWaitingForDeletion, ResNameKxCluster, d.Id(), err)...) + } + + return diags +} + +func waitKxClusterCreated(ctx context.Context, conn *finspace.Client, id string, timeout time.Duration) (*finspace.GetKxClusterOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.KxClusterStatusPending, types.KxClusterStatusCreating), + Target: enum.Slice(types.KxClusterStatusRunning), + Refresh: statusKxCluster(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*finspace.GetKxClusterOutput); ok { + return out, err + } + + return nil, err +} + +func waitKxClusterDeleted(ctx context.Context, conn *finspace.Client, id string, timeout time.Duration) (*finspace.GetKxClusterOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.KxClusterStatusDeleting), + Target: enum.Slice(types.KxClusterStatusDeleted), + Refresh: statusKxCluster(ctx, conn, id), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if 
out, ok := outputRaw.(*finspace.GetKxClusterOutput); ok { + return out, err + } + + return nil, err +} + +func statusKxCluster(ctx context.Context, conn *finspace.Client, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := findKxClusterByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, string(out.Status), nil + } +} + +func findKxClusterByID(ctx context.Context, conn *finspace.Client, id string) (*finspace.GetKxClusterOutput, error) { + parts, err := flex.ExpandResourceId(id, kxClusterIDPartCount, false) + if err != nil { + return nil, err + } + in := &finspace.GetKxClusterInput{ + EnvironmentId: aws.String(parts[0]), + ClusterName: aws.String(parts[1]), + } + + out, err := conn.GetKxCluster(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.ClusterName == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} + +func expandCapacityConfiguration(tfList []interface{}) *types.CapacityConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + + a := &types.CapacityConfiguration{} + + if v, ok := tfMap["node_type"].(string); ok && v != "" { + a.NodeType = aws.String(v) + } + + if v, ok := tfMap["node_count"].(int); ok && v != 0 { + a.NodeCount = aws.Int32(int32(v)) + } + + return a +} + +func expandAutoScalingConfiguration(tfList []interface{}) *types.AutoScalingConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + + a := &types.AutoScalingConfiguration{} + + if v, ok := tfMap["auto_scaling_metric"].(string); ok && v != "" { + a.AutoScalingMetric = types.AutoScalingMetric(v) + } + + if v, ok :=
tfMap["min_node_count"].(int); ok && v != 0 { + a.MinNodeCount = aws.Int32(int32(v)) + } + + if v, ok := tfMap["max_node_count"].(int); ok && v != 0 { + a.MaxNodeCount = aws.Int32(int32(v)) + } + + if v, ok := tfMap["metric_target"].(float64); ok && v != 0 { + a.MetricTarget = aws.Float64(v) + } + + if v, ok := tfMap["scale_in_cooldown_seconds"].(float64); ok && v != 0 { + a.ScaleInCooldownSeconds = aws.Float64(v) + } + + if v, ok := tfMap["scale_out_cooldown_seconds"].(float64); ok && v != 0 { + a.ScaleOutCooldownSeconds = aws.Float64(v) + } + + return a +} + +func expandSavedownStorageConfiguration(tfList []interface{}) *types.KxSavedownStorageConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + + a := &types.KxSavedownStorageConfiguration{} + + if v, ok := tfMap["type"].(string); ok && v != "" { + a.Type = types.KxSavedownStorageType(v) + } + + if v, ok := tfMap["size"].(int); ok && v != 0 { + a.Size = int32(v) + } + + return a +} + +func expandVPCConfiguration(tfList []interface{}) *types.VpcConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + + a := &types.VpcConfiguration{} + + if v, ok := tfMap["vpc_id"].(string); ok && v != "" { + a.VpcId = aws.String(v) + } + + if v, ok := tfMap["security_group_ids"].(*schema.Set); ok && v.Len() > 0 { + a.SecurityGroupIds = flex.ExpandStringValueSet(v) + } + + if v, ok := tfMap["subnet_ids"].(*schema.Set); ok && v.Len() > 0 { + a.SubnetIds = flex.ExpandStringValueSet(v) + } + + if v, ok := tfMap["ip_address_type"].(string); ok && v != "" { + a.IpAddressType = types.IPAddressType(v) + } + + return a +} + +func expandCacheStorageConfiguration(tfMap map[string]interface{}) *types.KxCacheStorageConfiguration { + if tfMap == nil { + return nil + } + + a := &types.KxCacheStorageConfiguration{} + + if v, ok := tfMap["type"].(string); ok && v != "" { + a.Type = &v + } + + if v, ok := 
tfMap["size"].(int); ok { + a.Size = aws.Int32(int32(v)) + } + + return a +} + +func expandCacheStorageConfigurations(tfList []interface{}) []types.KxCacheStorageConfiguration { + if len(tfList) == 0 { + return nil + } + + var s []types.KxCacheStorageConfiguration + + for _, r := range tfList { + m, ok := r.(map[string]interface{}) + + if !ok { + continue + } + + a := expandCacheStorageConfiguration(m) + + if a == nil { + continue + } + + s = append(s, *a) + } + + return s +} + +func expandDatabases(tfList []interface{}) []types.KxDatabaseConfiguration { + if len(tfList) == 0 { + return nil + } + + var s []types.KxDatabaseConfiguration + + for _, r := range tfList { + m, ok := r.(map[string]interface{}) + + if !ok { + continue + } + + a := expandDatabase(m) + + if a == nil { + continue + } + + s = append(s, *a) + } + + return s +} + +func expandDatabase(tfMap map[string]interface{}) *types.KxDatabaseConfiguration { + if tfMap == nil { + return nil + } + + a := &types.KxDatabaseConfiguration{} + + if v, ok := tfMap["database_name"].(string); ok && v != "" { + a.DatabaseName = aws.String(v) + } + + if v, ok := tfMap["cache_configurations"]; ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + a.CacheConfigurations = expandCacheConfigurations(v.([]interface{})) + } + + if v, ok := tfMap["changeset_id"].(string); ok && v != "" { + a.ChangesetId = aws.String(v) + } + + return a +} + +func expandCacheConfigurations(tfList []interface{}) []types.KxDatabaseCacheConfiguration { + if len(tfList) == 0 { + return nil + } + + var s []types.KxDatabaseCacheConfiguration + + for _, r := range tfList { + m, ok := r.(map[string]interface{}) + + if !ok { + continue + } + + a := expandCacheConfiguration(m) + + if a == nil { + continue + } + + s = append(s, *a) + } + + return s +} + +func expandCacheConfiguration(tfMap map[string]interface{}) *types.KxDatabaseCacheConfiguration { + if tfMap == nil { + return nil + } + + a := &types.KxDatabaseCacheConfiguration{} + + if 
v, ok := tfMap["cache_type"].(string); ok && v != "" { + a.CacheType = &v + } + + if v, ok := tfMap["db_paths"].(*schema.Set); ok && v.Len() > 0 { + a.DbPaths = flex.ExpandStringValueSet(v) + } + + return a +} + +func expandCode(tfList []interface{}) *types.CodeConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + + a := &types.CodeConfiguration{} + + if v, ok := tfMap["s3_bucket"].(string); ok && v != "" { + a.S3Bucket = aws.String(v) + } + + if v, ok := tfMap["s3_key"].(string); ok && v != "" { + a.S3Key = aws.String(v) + } + + if v, ok := tfMap["s3_object_version"].(string); ok && v != "" { + a.S3ObjectVersion = aws.String(v) + } + + return a +} + +func expandCommandLineArgument(k string, v string) *types.KxCommandLineArgument { + if k == "" || v == "" { + return nil + } + + a := &types.KxCommandLineArgument{ + Key: aws.String(k), + Value: aws.String(v), + } + return a +} + +func expandCommandLineArguments(tfMap map[string]interface{}) []types.KxCommandLineArgument { + if tfMap == nil { + return nil + } + + var s []types.KxCommandLineArgument + + for k, v := range tfMap { + a := expandCommandLineArgument(k, v.(string)) + + if a == nil { + continue + } + + s = append(s, *a) + } + + return s +} + +func flattenCapacityConfiguration(apiObject *types.CapacityConfiguration) []interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.NodeType; v != nil { + m["node_type"] = aws.ToString(v) + } + + if v := apiObject.NodeCount; v != nil { + m["node_count"] = aws.ToInt32(v) + } + + return []interface{}{m} +} + +func flattenAutoScalingConfiguration(apiObject *types.AutoScalingConfiguration) []interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.AutoScalingMetric; v != "" { + m["auto_scaling_metric"] = v + } + + if v := apiObject.MinNodeCount; v != nil { + m["min_node_count"] = aws.ToInt32(v) + 
} + + if v := apiObject.MaxNodeCount; v != nil { + m["max_node_count"] = aws.ToInt32(v) + } + + if v := apiObject.MetricTarget; v != nil { + m["metric_target"] = aws.ToFloat64(v) + } + + if v := apiObject.ScaleInCooldownSeconds; v != nil { + m["scale_in_cooldown_seconds"] = aws.ToFloat64(v) + } + + if v := apiObject.ScaleOutCooldownSeconds; v != nil { + m["scale_out_cooldown_seconds"] = aws.ToFloat64(v) + } + + return []interface{}{m} +} + +func flattenSavedownStorageConfiguration(apiObject *types.KxSavedownStorageConfiguration) []interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.Type; v != "" { + m["type"] = v + } + + if v := apiObject.Size; v >= 4 && v <= 16000 { + m["size"] = v + } + + return []interface{}{m} +} + +func flattenVPCConfiguration(apiObject *types.VpcConfiguration) []interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.VpcId; v != nil { + m["vpc_id"] = aws.ToString(v) + } + + if v := apiObject.SecurityGroupIds; v != nil { + m["security_group_ids"] = v + } + + if v := apiObject.SubnetIds; v != nil { + m["subnet_ids"] = v + } + + if v := apiObject.IpAddressType; v != "" { + m["ip_address_type"] = string(v) + } + + return []interface{}{m} +} + +func flattenCode(apiObject *types.CodeConfiguration) []interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.S3Bucket; v != nil { + m["s3_bucket"] = aws.ToString(v) + } + + if v := apiObject.S3Key; v != nil { + m["s3_key"] = aws.ToString(v) + } + + if v := apiObject.S3ObjectVersion; v != nil { + m["s3_object_version"] = aws.ToString(v) + } + + return []interface{}{m} +} + +func flattenCacheStorageConfiguration(apiObject *types.KxCacheStorageConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.Type; aws.ToString(v) != "" { + m["type"] = aws.ToString(v) + } + 
+ if v := apiObject.Size; v != nil { + m["size"] = aws.ToInt32(v) + } + + return m +} + +func flattenCacheStorageConfigurations(apiObjects []types.KxCacheStorageConfiguration) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var l []interface{} + + for _, apiObject := range apiObjects { + l = append(l, flattenCacheStorageConfiguration(&apiObject)) + } + + return l +} + +func flattenCacheConfiguration(apiObject *types.KxDatabaseCacheConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.CacheType; aws.ToString(v) != "" { + m["cache_type"] = aws.ToString(v) + } + + if v := apiObject.DbPaths; v != nil { + m["db_paths"] = v + } + + return m +} + +func flattenCacheConfigurations(apiObjects []types.KxDatabaseCacheConfiguration) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var l []interface{} + + for _, apiObject := range apiObjects { + l = append(l, flattenCacheConfiguration(&apiObject)) + } + + return l +} + +func flattenDatabase(apiObject *types.KxDatabaseConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.DatabaseName; v != nil { + m["database_name"] = aws.ToString(v) + } + + if v := apiObject.CacheConfigurations; v != nil { + m["cache_configurations"] = flattenCacheConfigurations(v) + } + + if v := apiObject.ChangesetId; v != nil { + m["changeset_id"] = aws.ToString(v) + } + + return m +} + +func flattenDatabases(apiObjects []types.KxDatabaseConfiguration) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var l []interface{} + + for _, apiObject := range apiObjects { + l = append(l, flattenDatabase(&apiObject)) + } + + return l +} + +func flattenCommandLineArguments(apiObjects []types.KxCommandLineArgument) map[string]string { + if len(apiObjects) == 0 { + return nil + } + + m := make(map[string]string) + + for _, apiObject := range apiObjects { + 
m[aws.ToString(apiObject.Key)] = aws.ToString(apiObject.Value) + } + + return m +} diff --git a/internal/service/finspace/kx_cluster_test.go b/internal/service/finspace/kx_cluster_test.go new file mode 100644 index 00000000000..f97714d32dc --- /dev/null +++ b/internal/service/finspace/kx_cluster_test.go @@ -0,0 +1,1160 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package finspace_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tffinspace "github.com/hashicorp/terraform-provider-aws/internal/service/finspace" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccFinSpaceKxCluster_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + 
resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", string(types.KxClusterStatusRunning)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_disappears(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tffinspace.ResourceKxCluster(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_description(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
testAccKxClusterConfig_description(rName, "cluster description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "description", "cluster description"), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_database(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_database(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "status", string(types.KxClusterStatusRunning)), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_code(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + codePath := "test-fixtures/code.zip" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
testAccKxClusterConfig_code(rName, codePath), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_multiAZ(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_multiAZ(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "status", string(types.KxClusterStatusRunning)), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_rdb(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_rdb(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + 
resource.TestCheckResourceAttr(resourceName, "status", string(types.KxClusterStatusRunning)), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_executionRole(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_executionRole(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "status", string(types.KxClusterStatusRunning)), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_autoScaling(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_autoScaling(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "status", 
string(types.KxClusterStatusRunning)), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_initializationScript(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + // Local test fixtures: the code archive uploaded to S3 and the init script's path within that archive + codePath := "test-fixtures/code.zip" + initScriptPath := "code/helloworld.q" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_initScript(rName, codePath, initScriptPath), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_commandLineArgs(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_commandLineArgs1(rName, "arg1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + 
resource.TestCheckResourceAttr(resourceName, "command_line_arguments.%", "1"), + resource.TestCheckResourceAttr(resourceName, "command_line_arguments.arg1", "value1"), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxCluster_tags(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxcluster finspace.GetKxClusterOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxClusterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxClusterConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccKxClusterConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccKxClusterConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxClusterExists(ctx, resourceName, &kxcluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckKxClusterDestroy(ctx context.Context) 
resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_finspace_kx_cluster" { + continue + } + + input := &finspace.GetKxClusterInput{ + ClusterName: aws.String(rs.Primary.Attributes["name"]), + EnvironmentId: aws.String(rs.Primary.Attributes["environment_id"]), + } + _, err := conn.GetKxCluster(ctx, input) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + continue // not found means destroyed; keep checking any remaining resources + } + return err + } + + return create.Error(names.FinSpace, create.ErrActionCheckingDestroyed, tffinspace.ResNameKxCluster, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckKxClusterExists(ctx context.Context, name string, kxcluster *finspace.GetKxClusterOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxCluster, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxCluster, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + resp, err := conn.GetKxCluster(ctx, &finspace.GetKxClusterInput{ + ClusterName: aws.String(rs.Primary.Attributes["name"]), + EnvironmentId: aws.String(rs.Primary.Attributes["environment_id"]), + }) + + if err != nil { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxCluster, rs.Primary.ID, err) + } + + *kxcluster = *resp + + return nil + } +} + +func testAccKxClusterConfigBase(rName string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +output "account_id" { + value = data.aws_caller_identity.current.account_id +} 
+ +resource "aws_kms_key" "test" { + deletion_window_in_days = 7 +} + +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn +} + +data "aws_iam_policy_document" "key_policy" { + statement { + actions = [ + "kms:Decrypt", + "kms:GenerateDataKey" + ] + + resources = [ + aws_kms_key.test.arn, + ] + + principals { + type = "Service" + identifiers = ["finspace.amazonaws.com"] + } + + condition { + test = "ArnLike" + variable = "aws:SourceArn" + values = ["${aws_finspace_kx_environment.test.arn}/*"] + } + + condition { + test = "StringEquals" + variable = "aws:SourceAccount" + values = [data.aws_caller_identity.current.account_id] + } + } + + statement { + actions = [ + "kms:*", + ] + + resources = [ + "*", + ] + + principals { + type = "AWS" + identifiers = ["arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:root"] + } + } +} + +resource "aws_kms_key_policy" "test" { + key_id = aws_kms_key.test.id + policy = data.aws_iam_policy_document.key_policy.json +} + +resource "aws_vpc" "test" { + cidr_block = "172.31.0.0/16" + enable_dns_hostnames = true +} + +resource "aws_subnet" "test" { + vpc_id = aws_vpc.test.id + cidr_block = "172.31.32.0/20" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] +} + +resource "aws_security_group" "test" { + name = %[1]q + vpc_id = aws_vpc.test.id + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_internet_gateway" "test" { + vpc_id = aws_vpc.test.id +} + +data "aws_route_tables" "rts" { + vpc_id = aws_vpc.test.id +} + +resource "aws_route" "r" { + route_table_id = tolist(data.aws_route_tables.rts.ids)[0] + destination_cidr_block = "0.0.0.0/0" + gateway_id = aws_internet_gateway.test.id +} +`, rName) +} + +func testAccKxClusterConfig_basic(rName string) string { + 
return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } +} +`, rName)) +} + +func testAccKxClusterConfig_description(rName, description string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + description = %[2]q + environment_id = aws_finspace_kx_environment.test.id + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + type = "HDB" + release_label = "1.0" + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } +} +`, rName, description)) +} + +func testAccKxClusterConfig_commandLineArgs1(rName, arg1, val1 string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + type = "HDB" + release_label = "1.0" + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } + + 
command_line_arguments = { + %[2]q = %[3]q + } +} +`, rName, arg1, val1)) +} + +func testAccKxClusterConfig_tags1(rName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + release_label = "1.0" + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1)) +} + +func testAccKxClusterConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + release_label = "1.0" + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2)) +} + +func testAccKxClusterConfig_database(rName string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_database" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id +} + +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type 
= "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + + cache_storage_configurations { + type = "CACHE_1000" + size = 1200 + } + + database { + database_name = aws_finspace_kx_database.test.name + cache_configurations { + cache_type = "CACHE_1000" + db_paths = ["/"] + } + } + + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } +} +`, rName)) +} + +func testAccKxClusterConfig_code(rName, path string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +data "aws_iam_policy_document" "bucket_policy" { + statement { + actions = [ + "s3:GetObject", + "s3:GetObjectTagging" + ] + + resources = [ + "arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.test.id}/*", + ] + + principals { + type = "Service" + identifiers = ["finspace.amazonaws.com"] + } + + condition { + test = "ArnLike" + variable = "aws:SourceArn" + values = ["${aws_finspace_kx_environment.test.arn}/*"] + } + + condition { + test = "StringEquals" + variable = "aws:SourceAccount" + values = [data.aws_caller_identity.current.account_id] + } + } + + statement { + actions = [ + "s3:ListBucket" + ] + + resources = [ + "arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.test.id}", + ] + + principals { + type = "Service" + identifiers = ["finspace.amazonaws.com"] + } + + condition { + test = "ArnLike" + variable = "aws:SourceArn" + values = ["${aws_finspace_kx_environment.test.arn}/*"] + } + + condition { + test = "StringEquals" + variable = "aws:SourceAccount" + values = [data.aws_caller_identity.current.account_id] + } + } +} + +resource "aws_s3_bucket_policy" "test" { + bucket = aws_s3_bucket.test.id + policy = 
data.aws_iam_policy_document.bucket_policy.json +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.test.id + key = %[2]q + source = %[2]q +} + +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } + + code { + s3_bucket = aws_s3_bucket.test.id + s3_key = aws_s3_object.object.key + } +} +`, rName, path)) +} + +func testAccKxClusterConfig_multiAZ(rName string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_subnet" "test2" { + vpc_id = aws_vpc.test.id + cidr_block = "172.31.16.0/20" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[1] +} + +resource "aws_subnet" "test3" { + vpc_id = aws_vpc.test.id + cidr_block = "172.31.64.0/20" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[2] +} + +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + release_label = "1.0" + az_mode = "MULTI" + capacity_configuration { + node_count = 3 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id, aws_subnet.test2.id, aws_subnet.test3.id] + ip_address_type = "IP_V4" + } +} +`, rName)) +} + +func testAccKxClusterConfig_rdb(rName string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = 
aws_finspace_kx_environment.test.id + type = "RDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + + savedown_storage_configuration { + type = "SDS01" + size = 500 + } + + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } +} +`, rName)) +} + +func testAccKxClusterConfig_executionRole(rName string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_iam_policy" "test" { + name = %[1]q + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = ["finspace:ConnectKxCluster", "finspace:GetKxConnectionString"] + Effect = "Allow" + Resource = "*" + }, + ] + }) +} + +resource "aws_iam_role" "test" { + name = %[1]q + managed_policy_arns = [aws_iam_policy.test.arn] + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + "Service" : "prod.finspacekx.aws.internal", + "AWS" : "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:root" + } + }, + ] + }) +} + +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + execution_role = aws_iam_role.test.arn + + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } +} +`, rName)) +} + +func testAccKxClusterConfig_autoScaling(rName string) string { + return 
acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + capacity_configuration { + node_count = 3 + node_type = "kx.s.xlarge" + } + + auto_scaling_configuration { + min_node_count = 3 + max_node_count = 5 + auto_scaling_metric = "CPU_UTILIZATION_PERCENTAGE" + metric_target = 25.0 + scale_in_cooldown_seconds = 30.0 + scale_out_cooldown_seconds = 30.0 + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } +} +`, rName)) +} + +func testAccKxClusterConfig_initScript(rName, codePath, relPath string) string { + return acctest.ConfigCompose( + testAccKxClusterConfigBase(rName), + fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +data "aws_iam_policy_document" "test" { + statement { + actions = [ + "s3:GetObject", + "s3:GetObjectTagging" + ] + + resources = [ + "arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.test.id}/*", + ] + + principals { + type = "Service" + identifiers = ["finspace.amazonaws.com"] + } + + condition { + test = "ArnLike" + variable = "aws:SourceArn" + values = ["${aws_finspace_kx_environment.test.arn}/*"] + } + + condition { + test = "StringEquals" + variable = "aws:SourceAccount" + values = [data.aws_caller_identity.current.account_id] + } + } + + statement { + actions = [ + "s3:ListBucket" + ] + + resources = [ + "arn:${data.aws_partition.current.partition}:s3:::${aws_s3_bucket.test.id}", + ] + + principals { + type = "Service" + identifiers = ["finspace.amazonaws.com"] + } + + condition { + test = "ArnLike" + variable = "aws:SourceArn" + values = ["${aws_finspace_kx_environment.test.arn}/*"] + } + + condition { + test = 
"StringEquals" + variable = "aws:SourceAccount" + values = [data.aws_caller_identity.current.account_id] + } + } +} + +resource "aws_s3_bucket_policy" "test" { + bucket = aws_s3_bucket.test.id + policy = data.aws_iam_policy_document.test.json +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.test.id + key = %[2]q + source = %[2]q +} + +resource "aws_finspace_kx_database" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id +} + +resource "aws_finspace_kx_cluster" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + type = "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = aws_finspace_kx_environment.test.availability_zones[0] + initialization_script = %[3]q + capacity_configuration { + node_count = 2 + node_type = "kx.s.xlarge" + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.test.id] + ip_address_type = "IP_V4" + } + + cache_storage_configurations { + type = "CACHE_1000" + size = 1200 + } + + database { + database_name = aws_finspace_kx_database.test.name + cache_configurations { + cache_type = "CACHE_1000" + db_paths = ["/"] + } + } + + code { + s3_bucket = aws_s3_bucket.test.id + s3_key = aws_s3_object.object.key + } +} +`, rName, codePath, relPath)) +} diff --git a/internal/service/finspace/kx_database.go b/internal/service/finspace/kx_database.go new file mode 100644 index 00000000000..ca953294001 --- /dev/null +++ b/internal/service/finspace/kx_database.go @@ -0,0 +1,227 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package finspace + +import ( + "context" + "errors" + "log" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_finspace_kx_database", name="Kx Database") +// @Tags(identifierAttribute="arn") +func ResourceKxDatabase() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceKxDatabaseCreate, + ReadWithoutTimeout: resourceKxDatabaseRead, + UpdateWithoutTimeout: resourceKxDatabaseUpdate, + DeleteWithoutTimeout: resourceKxDatabaseDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), + Delete: schema.DefaultTimeout(30 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "created_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 1000), + }, + "environment_id": { + 
Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 32), + }, + "last_modified_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(3, 63), + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + }, + + CustomizeDiff: verify.SetTagsDiff, + } +} + +const ( + ResNameKxDatabase = "Kx Database" + + kxDatabaseIDPartCount = 2 +) + +func resourceKxDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + in := &finspace.CreateKxDatabaseInput{ + DatabaseName: aws.String(d.Get("name").(string)), + EnvironmentId: aws.String(d.Get("environment_id").(string)), + ClientToken: aws.String(id.UniqueId()), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("description"); ok { + in.Description = aws.String(v.(string)) + } + + out, err := conn.CreateKxDatabase(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxDatabase, d.Get("name").(string), err)...) + } + + if out == nil || out.DatabaseArn == nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxDatabase, d.Get("name").(string), errors.New("empty output"))...) + } + + idParts := []string{ + aws.ToString(out.EnvironmentId), + aws.ToString(out.DatabaseName), + } + id, err := flex.FlattenResourceId(idParts, kxDatabaseIDPartCount, false) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionFlatteningResourceId, ResNameKxDatabase, d.Get("name").(string), err)...) + } + + d.SetId(id) + + return append(diags, resourceKxDatabaseRead(ctx, d, meta)...) 
+} + +func resourceKxDatabaseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + out, err := findKxDatabaseByID(ctx, conn, d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] FinSpace KxDatabase (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionReading, ResNameKxDatabase, d.Id(), err)...) + } + + d.Set("arn", out.DatabaseArn) + d.Set("name", out.DatabaseName) + d.Set("environment_id", out.EnvironmentId) + d.Set("description", out.Description) + d.Set("created_timestamp", out.CreatedTimestamp.String()) + d.Set("last_modified_timestamp", out.LastModifiedTimestamp.String()) + + return diags +} + +func resourceKxDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + if d.HasChanges("description") { + in := &finspace.UpdateKxDatabaseInput{ + EnvironmentId: aws.String(d.Get("environment_id").(string)), + DatabaseName: aws.String(d.Get("name").(string)), + Description: aws.String(d.Get("description").(string)), + } + + _, err := conn.UpdateKxDatabase(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionUpdating, ResNameKxDatabase, d.Id(), err)...) + } + } + + return append(diags, resourceKxDatabaseRead(ctx, d, meta)...) 
+} + +func resourceKxDatabaseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + log.Printf("[INFO] Deleting FinSpace KxDatabase %s", d.Id()) + + _, err := conn.DeleteKxDatabase(ctx, &finspace.DeleteKxDatabaseInput{ + EnvironmentId: aws.String(d.Get("environment_id").(string)), + DatabaseName: aws.String(d.Get("name").(string)), + }) + + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return diags + } + + return append(diags, create.DiagError(names.FinSpace, create.ErrActionDeleting, ResNameKxDatabase, d.Id(), err)...) + } + + return diags +} + +func findKxDatabaseByID(ctx context.Context, conn *finspace.Client, id string) (*finspace.GetKxDatabaseOutput, error) { + parts, err := flex.ExpandResourceId(id, kxDatabaseIDPartCount, false) + if err != nil { + return nil, err + } + + in := &finspace.GetKxDatabaseInput{ + EnvironmentId: aws.String(parts[0]), + DatabaseName: aws.String(parts[1]), + } + + out, err := conn.GetKxDatabase(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.DatabaseArn == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} diff --git a/internal/service/finspace/kx_database_test.go b/internal/service/finspace/kx_database_test.go new file mode 100644 index 00000000000..1797ba028a4 --- /dev/null +++ b/internal/service/finspace/kx_database_test.go @@ -0,0 +1,297 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package finspace_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tffinspace "github.com/hashicorp/terraform-provider-aws/internal/service/finspace" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccFinSpaceKxDatabase_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxdatabase finspace.GetKxDatabaseOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_database.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxDatabaseDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxDatabaseConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + resource.TestCheckResourceAttr(resourceName, "name", rName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccFinSpaceKxDatabase_disappears(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var 
kxdatabase finspace.GetKxDatabaseOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_database.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxDatabaseDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxDatabaseConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tffinspace.ResourceKxDatabase(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccFinSpaceKxDatabase_description(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxdatabase finspace.GetKxDatabaseOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_database.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxDatabaseDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxDatabaseConfig_description(rName, "description 1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + resource.TestCheckResourceAttr(resourceName, "description", "description 1"), + ), + }, + { + Config: testAccKxDatabaseConfig_description(rName, "description 2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + 
resource.TestCheckResourceAttr(resourceName, "description", "description 2"), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxDatabase_tags(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxdatabase finspace.GetKxDatabaseOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_database.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxDatabaseDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxDatabaseConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccKxDatabaseConfig_tags2(rName, "key1", "value1", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccKxDatabaseConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxDatabaseExists(ctx, resourceName, &kxdatabase), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckKxDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := 
acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_finspace_kx_database" { + continue + } + + input := &finspace.GetKxDatabaseInput{ + DatabaseName: aws.String(rs.Primary.Attributes["name"]), + EnvironmentId: aws.String(rs.Primary.Attributes["environment_id"]), + } + _, err := conn.GetKxDatabase(ctx, input) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + return err + } + + return create.Error(names.FinSpace, create.ErrActionCheckingDestroyed, tffinspace.ResNameKxDatabase, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckKxDatabaseExists(ctx context.Context, name string, kxdatabase *finspace.GetKxDatabaseOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxDatabase, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxDatabase, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + resp, err := conn.GetKxDatabase(ctx, &finspace.GetKxDatabaseInput{ + DatabaseName: aws.String(rs.Primary.Attributes["name"]), + EnvironmentId: aws.String(rs.Primary.Attributes["environment_id"]), + }) + + if err != nil { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxDatabase, rs.Primary.ID, err) + } + + *kxdatabase = *resp + + return nil + } +} + +func testAccKxDatabaseConfigBase(rName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test" { + deletion_window_in_days = 7 +} + +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn +} +`, rName) +} + +func 
testAccKxDatabaseConfig_basic(rName string) string { + return acctest.ConfigCompose( + testAccKxDatabaseConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_database" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id +} +`, rName)) +} + +func testAccKxDatabaseConfig_description(rName, description string) string { + return acctest.ConfigCompose( + testAccKxDatabaseConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_database" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + description = %[2]q +} +`, rName, description)) +} + +func testAccKxDatabaseConfig_tags1(rName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose( + testAccKxDatabaseConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_database" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1)) +} + +func testAccKxDatabaseConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose( + testAccKxDatabaseConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_database" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2)) +} diff --git a/internal/service/finspace/kx_environment.go b/internal/service/finspace/kx_environment.go new file mode 100644 index 00000000000..e497160209b --- /dev/null +++ b/internal/service/finspace/kx_environment.go @@ -0,0 +1,595 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package finspace + +import ( + "context" + "errors" + "log" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_finspace_kx_environment", name="Kx Environment") +// @Tags(identifierAttribute="arn") +func ResourceKxEnvironment() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceKxEnvironmentCreate, + ReadWithoutTimeout: resourceKxEnvironmentRead, + UpdateWithoutTimeout: resourceKxEnvironmentUpdate, + DeleteWithoutTimeout: resourceKxEnvironmentDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), + Delete: schema.DefaultTimeout(30 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "availability_zones": { + Type: schema.TypeList, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Computed: true, + }, + 
"created_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "custom_dns_configuration": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_dns_server_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(3, 255), + }, + "custom_dns_server_ip": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.IsIPAddress, + }, + }, + }, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 1000), + }, + "id": { + Type: schema.TypeString, + Computed: true, + }, + "infrastructure_account_id": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "last_modified_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "transit_gateway_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "transit_gateway_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 32), + }, + "routable_cidr_space": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.IsCIDR, + }, + }, + }, + }, + }, + CustomizeDiff: verify.SetTagsDiff, + } +} + +const ( + ResNameKxEnvironment = "Kx Environment" +) + +func resourceKxEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + in := &finspace.CreateKxEnvironmentInput{ + Name: 
aws.String(d.Get("name").(string)), + ClientToken: aws.String(id.UniqueId()), + } + + if v, ok := d.GetOk("description"); ok { + in.Description = aws.String(v.(string)) + } + + if v, ok := d.GetOk("kms_key_id"); ok { + in.KmsKeyId = aws.String(v.(string)) + } + + out, err := conn.CreateKxEnvironment(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxEnvironment, d.Get("name").(string), err)...) + } + + if out == nil || out.EnvironmentId == nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxEnvironment, d.Get("name").(string), errors.New("empty output"))...) + } + + d.SetId(aws.ToString(out.EnvironmentId)) + + if _, err := waitKxEnvironmentCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionWaitingForCreation, ResNameKxEnvironment, d.Id(), err)...) + } + + if err := updateKxEnvironmentNetwork(ctx, d, conn); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxEnvironment, d.Id(), err)...) + } + + // The CreateKxEnvironment API currently fails to tag the environment when the + // Tags field is set. Until the API is fixed, tag after creation instead. + if err := createTags(ctx, conn, aws.ToString(out.EnvironmentArn), getTagsIn(ctx)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxEnvironment, d.Id(), err)...) + } + + return append(diags, resourceKxEnvironmentRead(ctx, d, meta)...) 
+} + +func resourceKxEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + out, err := findKxEnvironmentByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] FinSpace KxEnvironment (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionReading, ResNameKxEnvironment, d.Id(), err)...) + } + + d.Set("id", out.EnvironmentId) + d.Set("arn", out.EnvironmentArn) + d.Set("name", out.Name) + d.Set("description", out.Description) + d.Set("kms_key_id", out.KmsKeyId) + d.Set("status", out.Status) + d.Set("availability_zones", out.AvailabilityZoneIds) + d.Set("infrastructure_account_id", out.DedicatedServiceAccountId) + d.Set("created_timestamp", out.CreationTimestamp.String()) + d.Set("last_modified_timestamp", out.UpdateTimestamp.String()) + + if err := d.Set("transit_gateway_configuration", flattenTransitGatewayConfiguration(out.TransitGatewayConfiguration)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxEnvironment, d.Id(), err)...) + } + + if err := d.Set("custom_dns_configuration", flattenCustomDNSConfigurations(out.CustomDNSConfiguration)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionSetting, ResNameKxEnvironment, d.Id(), err)...) 
+ } + + return diags +} + +func resourceKxEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + update := false + + in := &finspace.UpdateKxEnvironmentInput{ + EnvironmentId: aws.String(d.Id()), + Name: aws.String(d.Get("name").(string)), + } + + if d.HasChanges("description") { + in.Description = aws.String(d.Get("description").(string)) + } + + if d.HasChanges("name") || d.HasChanges("description") { + update = true + log.Printf("[DEBUG] Updating FinSpace KxEnvironment (%s): %#v", d.Id(), in) + _, err := conn.UpdateKxEnvironment(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionUpdating, ResNameKxEnvironment, d.Id(), err)...) + } + } + + if d.HasChanges("transit_gateway_configuration") || d.HasChanges("custom_dns_configuration") { + update = true + if err := updateKxEnvironmentNetwork(ctx, d, conn); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionUpdating, ResNameKxEnvironment, d.Id(), err)...) + } + } + + if !update { + return diags + } + return append(diags, resourceKxEnvironmentRead(ctx, d, meta)...) +} + +func resourceKxEnvironmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + log.Printf("[INFO] Deleting FinSpace KxEnvironment %s", d.Id()) + + _, err := conn.DeleteKxEnvironment(ctx, &finspace.DeleteKxEnvironmentInput{ + EnvironmentId: aws.String(d.Id()), + }) + if errs.IsA[*types.ResourceNotFoundException](err) || + errs.IsAErrorMessageContains[*types.ValidationException](err, "The Environment is in DELETED state") { + log.Printf("[DEBUG] FinSpace KxEnvironment %s already deleted. 
Nothing to delete.", d.Id()) + return diags + } + + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionDeleting, ResNameKxEnvironment, d.Id(), err)...) + } + + if _, err := waitKxEnvironmentDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionWaitingForDeletion, ResNameKxEnvironment, d.Id(), err)...) + } + + return diags +} + +// As of 2023-02-09, updating network configuration requires 2 separate requests if both DNS +// and transit gateway configurations are set. +func updateKxEnvironmentNetwork(ctx context.Context, d *schema.ResourceData, client *finspace.Client) error { + transitGatewayConfigIn := &finspace.UpdateKxEnvironmentNetworkInput{ + EnvironmentId: aws.String(d.Id()), + ClientToken: aws.String(id.UniqueId()), + } + + customDnsConfigIn := &finspace.UpdateKxEnvironmentNetworkInput{ + EnvironmentId: aws.String(d.Id()), + ClientToken: aws.String(id.UniqueId()), + } + + updateTransitGatewayConfig := false + updateCustomDnsConfig := false + + if v, ok := d.GetOk("transit_gateway_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil && + d.HasChanges("transit_gateway_configuration") { + transitGatewayConfigIn.TransitGatewayConfiguration = expandTransitGatewayConfiguration(v.([]interface{})) + updateTransitGatewayConfig = true + } + + if v, ok := d.GetOk("custom_dns_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil && + d.HasChanges("custom_dns_configuration") { + customDnsConfigIn.CustomDNSConfiguration = expandCustomDNSConfigurations(v.([]interface{})) + updateCustomDnsConfig = true + } + + if updateTransitGatewayConfig { + if _, err := client.UpdateKxEnvironmentNetwork(ctx, transitGatewayConfigIn); err != nil { + return err + } + + if _, err := waitTransitGatewayConfigurationUpdated(ctx, client, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return err + } + }
+ + if updateCustomDnsConfig { + if _, err := client.UpdateKxEnvironmentNetwork(ctx, customDnsConfigIn); err != nil { + return err + } + + if _, err := waitCustomDNSConfigurationUpdated(ctx, client, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return err + } + } + + return nil +} + +func waitKxEnvironmentCreated(ctx context.Context, conn *finspace.Client, id string, timeout time.Duration) (*finspace.GetKxEnvironmentOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.EnvironmentStatusCreateRequested, types.EnvironmentStatusCreating), + Target: enum.Slice(types.EnvironmentStatusCreated), + Refresh: statusKxEnvironment(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*finspace.GetKxEnvironmentOutput); ok { + return out, err + } + + return nil, err +} + +func waitTransitGatewayConfigurationUpdated(ctx context.Context, conn *finspace.Client, id string, timeout time.Duration) (*finspace.GetKxEnvironmentOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.TgwStatusUpdateRequested, types.TgwStatusUpdating), + Target: enum.Slice(types.TgwStatusSuccessfullyUpdated), + Refresh: statusTransitGatewayConfiguration(ctx, conn, id), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*finspace.GetKxEnvironmentOutput); ok { + return out, err + } + + return nil, err +} + +func waitCustomDNSConfigurationUpdated(ctx context.Context, conn *finspace.Client, id string, timeout time.Duration) (*finspace.GetKxEnvironmentOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.DnsStatusUpdateRequested, types.DnsStatusUpdating), + Target: enum.Slice(types.DnsStatusSuccessfullyUpdated), + Refresh: statusCustomDNSConfiguration(ctx, conn, id), + Timeout: timeout, + } + + outputRaw, err := 
stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*finspace.GetKxEnvironmentOutput); ok { + return out, err + } + + return nil, err +} + +func waitKxEnvironmentDeleted(ctx context.Context, conn *finspace.Client, id string, timeout time.Duration) (*finspace.GetKxEnvironmentOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.EnvironmentStatusDeleteRequested, types.EnvironmentStatusDeleting), + Target: []string{}, + Refresh: statusKxEnvironment(ctx, conn, id), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*finspace.GetKxEnvironmentOutput); ok { + return out, err + } + + return nil, err +} + +func statusKxEnvironment(ctx context.Context, conn *finspace.Client, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := findKxEnvironmentByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, string(out.Status), nil + } +} + +func statusTransitGatewayConfiguration(ctx context.Context, conn *finspace.Client, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := findKxEnvironmentByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, string(out.TgwStatus), nil + } +} + +func statusCustomDNSConfiguration(ctx context.Context, conn *finspace.Client, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := findKxEnvironmentByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, string(out.DnsStatus), nil + } +} + +func findKxEnvironmentByID(ctx context.Context, conn *finspace.Client, id string) (*finspace.GetKxEnvironmentOutput, error) { + in := &finspace.GetKxEnvironmentInput{ + EnvironmentId: 
aws.String(id), + } + out, err := conn.GetKxEnvironment(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + // Treat DELETED status as NotFound + if out != nil && out.Status == types.EnvironmentStatusDeleted { + return nil, &retry.NotFoundError{ + LastError: errors.New("status is deleted"), + LastRequest: in, + } + } + + if out == nil || out.EnvironmentArn == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} + +func expandTransitGatewayConfiguration(tfList []interface{}) *types.TransitGatewayConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + + a := &types.TransitGatewayConfiguration{} + + if v, ok := tfMap["transit_gateway_id"].(string); ok && v != "" { + a.TransitGatewayID = aws.String(v) + } + + if v, ok := tfMap["routable_cidr_space"].(string); ok && v != "" { + a.RoutableCIDRSpace = aws.String(v) + } + + return a +} + +func expandCustomDNSConfiguration(tfMap map[string]interface{}) *types.CustomDNSServer { + if tfMap == nil { + return nil + } + + a := &types.CustomDNSServer{} + + if v, ok := tfMap["custom_dns_server_name"].(string); ok && v != "" { + a.CustomDNSServerName = aws.String(v) + } + + if v, ok := tfMap["custom_dns_server_ip"].(string); ok && v != "" { + a.CustomDNSServerIP = aws.String(v) + } + + return a +} + +func expandCustomDNSConfigurations(tfList []interface{}) []types.CustomDNSServer { + if len(tfList) == 0 { + return nil + } + + var s []types.CustomDNSServer + + for _, r := range tfList { + m, ok := r.(map[string]interface{}) + + if !ok { + continue + } + + a := expandCustomDNSConfiguration(m) + + if a == nil { + continue + } + + s = append(s, *a) + } + + return s +} + +func flattenTransitGatewayConfiguration(apiObject *types.TransitGatewayConfiguration) []interface{} { + if apiObject 
== nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.TransitGatewayID; v != nil { + m["transit_gateway_id"] = aws.ToString(v) + } + + if v := apiObject.RoutableCIDRSpace; v != nil { + m["routable_cidr_space"] = aws.ToString(v) + } + + return []interface{}{m} +} + +func flattenCustomDNSConfiguration(apiObject *types.CustomDNSServer) map[string]interface{} { + if apiObject == nil { + return nil + } + + m := map[string]interface{}{} + + if v := apiObject.CustomDNSServerName; v != nil { + m["custom_dns_server_name"] = aws.ToString(v) + } + + if v := apiObject.CustomDNSServerIP; v != nil { + m["custom_dns_server_ip"] = aws.ToString(v) + } + + return m +} + +func flattenCustomDNSConfigurations(apiObjects []types.CustomDNSServer) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var l []interface{} + + for _, apiObject := range apiObjects { + l = append(l, flattenCustomDNSConfiguration(&apiObject)) + } + + return l +} diff --git a/internal/service/finspace/kx_environment_test.go b/internal/service/finspace/kx_environment_test.go new file mode 100644 index 00000000000..8a2d090700f --- /dev/null +++ b/internal/service/finspace/kx_environment_test.go @@ -0,0 +1,443 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package finspace_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tffinspace "github.com/hashicorp/terraform-provider-aws/internal/service/finspace" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccFinSpaceKxEnvironment_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + kmsKeyResourceName := "aws_kms_key.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "kms_key_id", kmsKeyResourceName, "arn"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func 
TestAccFinSpaceKxEnvironment_disappears(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tffinspace.ResourceKxEnvironment(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccFinSpaceKxEnvironment_updateName(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + 
resource.TestCheckResourceAttr(resourceName, "name", rName), + ), + }, + { + Config: testAccKxEnvironmentConfig_basic(rName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "name", rName2), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxEnvironment_description(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_description(rName, "description 1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "description", "description 1"), + ), + }, + { + Config: testAccKxEnvironmentConfig_description(rName, "description 2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "description", "description 2"), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxEnvironment_customDNS(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + 
PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_dnsConfig(rName, "example.finspace.amazon.aws.com", "10.0.0.76"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "custom_dns_configuration.*", map[string]string{ + "custom_dns_server_name": "example.finspace.amazon.aws.com", + "custom_dns_server_ip": "10.0.0.76", + }), + ), + }, + { + Config: testAccKxEnvironmentConfig_dnsConfig(rName, "updated.finspace.amazon.com", "10.0.0.24"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "custom_dns_configuration.*", map[string]string{ + "custom_dns_server_name": "updated.finspace.amazon.com", + "custom_dns_server_ip": "10.0.0.24", + }), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxEnvironment_transitGateway(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_tgwConfig(rName, "100.64.0.0/26"), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "transit_gateway_configuration.*", map[string]string{ + "routable_cidr_space": "100.64.0.0/26", + }), + ), + }, + }, + }) +} + +func TestAccFinSpaceKxEnvironment_tags(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxenvironment finspace.GetKxEnvironmentOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_finspace_kx_environment.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxEnvironmentDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxEnvironmentConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccKxEnvironmentConfig_tags2(rName, "key1", "value1", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccKxEnvironmentConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxEnvironmentExists(ctx, resourceName, &kxenvironment), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + 
resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckKxEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_finspace_kx_environment" { + continue + } + + input := &finspace.GetKxEnvironmentInput{ + EnvironmentId: aws.String(rs.Primary.ID), + } + out, err := conn.GetKxEnvironment(ctx, input) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + return err + } + if out.Status == types.EnvironmentStatusDeleted { + return nil + } + return create.Error(names.FinSpace, create.ErrActionCheckingDestroyed, tffinspace.ResNameKxEnvironment, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckKxEnvironmentExists(ctx context.Context, name string, kxenvironment *finspace.GetKxEnvironmentOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxEnvironment, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxEnvironment, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + resp, err := conn.GetKxEnvironment(ctx, &finspace.GetKxEnvironmentInput{ + EnvironmentId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxEnvironment, rs.Primary.ID, err) + } + + *kxenvironment = *resp + + return nil + } +} + +func testAccKxEnvironmentConfigBase() string { + return ` +resource "aws_kms_key" "test" { + deletion_window_in_days = 7 +} +` +} 
+ +func testAccKxEnvironmentConfig_basic(rName string) string { + return acctest.ConfigCompose( + testAccKxEnvironmentConfigBase(), + fmt.Sprintf(` +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn +} +`, rName)) +} + +func testAccKxEnvironmentConfig_description(rName, desc string) string { + return acctest.ConfigCompose( + testAccKxEnvironmentConfigBase(), + fmt.Sprintf(` +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn + description = %[2]q +} +`, rName, desc)) +} + +func testAccKxEnvironmentConfig_tgwConfig(rName, cidr string) string { + return acctest.ConfigCompose( + testAccKxEnvironmentConfigBase(), + fmt.Sprintf(` +resource "aws_ec2_transit_gateway" "test" { + description = "test" +} + +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn + + transit_gateway_configuration { + transit_gateway_id = aws_ec2_transit_gateway.test.id + routable_cidr_space = %[2]q + } +} +`, rName, cidr)) +} + +func testAccKxEnvironmentConfig_dnsConfig(rName, serverName, serverIP string) string { + return acctest.ConfigCompose( + testAccKxEnvironmentConfigBase(), + fmt.Sprintf(` +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn + + custom_dns_configuration { + custom_dns_server_name = %[2]q + custom_dns_server_ip = %[3]q + } +} +`, rName, serverName, serverIP)) +} + +func testAccKxEnvironmentConfig_tags1(rName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose( + testAccKxEnvironmentConfigBase(), + fmt.Sprintf(` +resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1)) +} + +func testAccKxEnvironmentConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose( + testAccKxEnvironmentConfigBase(), + fmt.Sprintf(` 
+resource "aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2)) +} diff --git a/internal/service/finspace/kx_user.go b/internal/service/finspace/kx_user.go new file mode 100644 index 00000000000..e5252329290 --- /dev/null +++ b/internal/service/finspace/kx_user.go @@ -0,0 +1,209 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package finspace + +import ( + "context" + "errors" + "log" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_finspace_kx_user", name="Kx User") +// @Tags(identifierAttribute="arn") +func ResourceKxUser() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceKxUserCreate, + ReadWithoutTimeout: resourceKxUserRead, + UpdateWithoutTimeout: resourceKxUserUpdate, + DeleteWithoutTimeout: resourceKxUserDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), + Delete: 
schema.DefaultTimeout(30 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "environment_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 32), + }, + "iam_role": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + }, + CustomizeDiff: verify.SetTagsDiff, + } +} + +const ( + ResNameKxUser = "Kx User" + + kxUserIDPartCount = 2 +) + +func resourceKxUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + client := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + in := &finspace.CreateKxUserInput{ + UserName: aws.String(d.Get("name").(string)), + EnvironmentId: aws.String(d.Get("environment_id").(string)), + IamRole: aws.String(d.Get("iam_role").(string)), + Tags: getTagsIn(ctx), + } + + out, err := client.CreateKxUser(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxUser, d.Get("name").(string), err)...) + } + + if out == nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionCreating, ResNameKxUser, d.Get("name").(string), errors.New("empty output"))...) + } + + idParts := []string{ + aws.ToString(out.EnvironmentId), + aws.ToString(out.UserName), + } + id, err := flex.FlattenResourceId(idParts, kxUserIDPartCount, false) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionFlatteningResourceId, ResNameKxUser, d.Get("name").(string), err)...) + } + d.SetId(id) + + return append(diags, resourceKxUserRead(ctx, d, meta)...) 
+} + +func resourceKxUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + out, err := findKxUserByID(ctx, conn, d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] FinSpace KxUser (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionReading, ResNameKxUser, d.Id(), err)...) + } + + d.Set("arn", out.UserArn) + d.Set("name", out.UserName) + d.Set("iam_role", out.IamRole) + d.Set("environment_id", out.EnvironmentId) + + return diags +} + +func resourceKxUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + if d.HasChange("iam_role") { + in := &finspace.UpdateKxUserInput{ + EnvironmentId: aws.String(d.Get("environment_id").(string)), + UserName: aws.String(d.Get("name").(string)), + IamRole: aws.String(d.Get("iam_role").(string)), + } + + _, err := conn.UpdateKxUser(ctx, in) + if err != nil { + return append(diags, create.DiagError(names.FinSpace, create.ErrActionUpdating, ResNameKxUser, d.Id(), err)...) + } + } + + return append(diags, resourceKxUserRead(ctx, d, meta)...) 
+} + +func resourceKxUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).FinSpaceClient(ctx) + + log.Printf("[INFO] Deleting FinSpace KxUser %s", d.Id()) + + _, err := conn.DeleteKxUser(ctx, &finspace.DeleteKxUserInput{ + EnvironmentId: aws.String(d.Get("environment_id").(string)), + UserName: aws.String(d.Get("name").(string)), + }) + + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + + return append(diags, create.DiagError(names.FinSpace, create.ErrActionDeleting, ResNameKxUser, d.Id(), err)...) + } + + return diags +} + +func findKxUserByID(ctx context.Context, conn *finspace.Client, id string) (*finspace.GetKxUserOutput, error) { + parts, err := flex.ExpandResourceId(id, kxUserIDPartCount, false) + if err != nil { + return nil, err + } + in := &finspace.GetKxUserInput{ + EnvironmentId: aws.String(parts[0]), + UserName: aws.String(parts[1]), + } + + out, err := conn.GetKxUser(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.UserArn == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} diff --git a/internal/service/finspace/kx_user_test.go b/internal/service/finspace/kx_user_test.go new file mode 100644 index 00000000000..254f878afce --- /dev/null +++ b/internal/service/finspace/kx_user_test.go @@ -0,0 +1,336 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package finspace_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/aws/aws-sdk-go-v2/service/finspace/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tffinspace "github.com/hashicorp/terraform-provider-aws/internal/service/finspace" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccFinSpaceKxUser_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxuser finspace.GetKxUserOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + userName := sdkacctest.RandString(sdkacctest.RandIntRange(1, 50)) + resourceName := "aws_finspace_kx_user.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxUserDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxUserConfig_basic(rName, userName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + resource.TestCheckResourceAttr(resourceName, "name", userName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccFinSpaceKxUser_disappears(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + 
} + + ctx := acctest.Context(t) + var kxuser finspace.GetKxUserOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + userName := sdkacctest.RandString(sdkacctest.RandIntRange(1, 50)) + resourceName := "aws_finspace_kx_user.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxUserDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxUserConfig_basic(rName, userName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tffinspace.ResourceKxUser(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccFinSpaceKxUser_updateRole(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + ctx := acctest.Context(t) + var kxuser finspace.GetKxUserOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + userName := sdkacctest.RandString(sdkacctest.RandIntRange(1, 50)) + resourceName := "aws_finspace_kx_user.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxUserDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxUserConfig_basic(rName, userName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + ), + }, + { + Config: testAccKxUserConfig_updateRole(rName, "updated"+rName, userName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + ), + 
}, + }, + }) +} + +func TestAccFinSpaceKxUser_tags(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var kxuser finspace.GetKxUserOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + userName := sdkacctest.RandString(sdkacctest.RandIntRange(1, 50)) + resourceName := "aws_finspace_kx_user.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, finspace.ServiceID) + }, + ErrorCheck: acctest.ErrorCheck(t, finspace.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckKxUserDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccKxUserConfig_tags1(rName, userName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccKxUserConfig_tags2(rName, userName, "key1", "value1", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccKxUserConfig_tags1(rName, userName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckKxUserExists(ctx, resourceName, &kxuser), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckKxUserDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + + 
for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_finspace_kx_user" { + continue + } + + input := &finspace.GetKxUserInput{ + UserName: aws.String(rs.Primary.Attributes["name"]), + EnvironmentId: aws.String(rs.Primary.Attributes["environment_id"]), + } + _, err := conn.GetKxUser(ctx, input) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + return err + } + + return create.Error(names.FinSpace, create.ErrActionCheckingDestroyed, tffinspace.ResNameKxUser, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckKxUserExists(ctx context.Context, name string, kxuser *finspace.GetKxUserOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxUser, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxUser, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).FinSpaceClient(ctx) + resp, err := conn.GetKxUser(ctx, &finspace.GetKxUserInput{ + UserName: aws.String(rs.Primary.Attributes["name"]), + EnvironmentId: aws.String(rs.Primary.Attributes["environment_id"]), + }) + + if err != nil { + return create.Error(names.FinSpace, create.ErrActionCheckingExistence, tffinspace.ResNameKxUser, rs.Primary.ID, err) + } + + *kxuser = *resp + + return nil + } +} + +func testAccKxUserConfigBase(rName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test" { + deletion_window_in_days = 7 +} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + Service = "ec2.amazonaws.com" + } + }, + ] + }) +} + +resource 
"aws_finspace_kx_environment" "test" { + name = %[1]q + kms_key_id = aws_kms_key.test.arn +} +`, rName) +} + +func testAccKxUserConfig_basic(rName, userName string) string { + return acctest.ConfigCompose( + testAccKxUserConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_user" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + iam_role = aws_iam_role.test.arn +} +`, userName)) +} + +func testAccKxUserConfig_updateRole(rName, rName2, userName string) string { + return acctest.ConfigCompose( + testAccKxUserConfigBase(rName), + fmt.Sprintf(` +resource "aws_iam_role" "updated" { + name = %[1]q + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + Service = "ec2.amazonaws.com" + } + }, + ] + }) +} + +resource "aws_finspace_kx_user" "test" { + name = %[2]q + environment_id = aws_finspace_kx_environment.test.id + iam_role = aws_iam_role.updated.arn +} +`, rName2, userName)) +} + +func testAccKxUserConfig_tags1(rName, userName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose( + testAccKxUserConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_user" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + iam_role = aws_iam_role.test.arn + tags = { + %[2]q = %[3]q + } +} + +`, userName, tagKey1, tagValue1)) +} + +func testAccKxUserConfig_tags2(rName, userName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose( + testAccKxUserConfigBase(rName), + fmt.Sprintf(` +resource "aws_finspace_kx_user" "test" { + name = %[1]q + environment_id = aws_finspace_kx_environment.test.id + iam_role = aws_iam_role.test.arn + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, userName, tagKey1, tagValue1, tagKey2, tagValue2)) +} diff --git a/internal/service/finspace/service_package_gen.go b/internal/service/finspace/service_package_gen.go new file mode 100644 
index 00000000000..3df139ef5b4 --- /dev/null +++ b/internal/service/finspace/service_package_gen.go @@ -0,0 +1,83 @@ +// Code generated by internal/generate/servicepackages/main.go; DO NOT EDIT. + +package finspace + +import ( + "context" + + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + finspace_sdkv2 "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/types" + "github.com/hashicorp/terraform-provider-aws/names" +) + +type servicePackage struct{} + +func (p *servicePackage) FrameworkDataSources(ctx context.Context) []*types.ServicePackageFrameworkDataSource { + return []*types.ServicePackageFrameworkDataSource{} +} + +func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.ServicePackageFrameworkResource { + return []*types.ServicePackageFrameworkResource{} +} + +func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource { + return []*types.ServicePackageSDKDataSource{} +} + +func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { + return []*types.ServicePackageSDKResource{ + { + Factory: ResourceKxCluster, + TypeName: "aws_finspace_kx_cluster", + Name: "Kx Cluster", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: ResourceKxDatabase, + TypeName: "aws_finspace_kx_database", + Name: "Kx Database", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: ResourceKxEnvironment, + TypeName: "aws_finspace_kx_environment", + Name: "Kx Environment", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: ResourceKxUser, + TypeName: "aws_finspace_kx_user", + Name: "Kx User", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + } +} + +func (p *servicePackage) ServicePackageName() string { + return 
names.FinSpace +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*finspace_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return finspace_sdkv2.NewFromConfig(cfg, func(o *finspace_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = finspace_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/finspace/tags_gen.go b/internal/service/finspace/tags_gen.go new file mode 100644 index 00000000000..582dc406bcd --- /dev/null +++ b/internal/service/finspace/tags_gen.go @@ -0,0 +1,133 @@ +// Code generated by internal/generate/tags/main.go; DO NOT EDIT. +package finspace + +import ( + "context" + "fmt" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/finspace" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// listTags lists finspace service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func listTags(ctx context.Context, conn *finspace.Client, identifier string) (tftags.KeyValueTags, error) { + input := &finspace.ListTagsForResourceInput{ + ResourceArn: aws.String(identifier), + } + + output, err := conn.ListTagsForResource(ctx, input) + + if err != nil { + return tftags.New(ctx, nil), err + } + + return KeyValueTags(ctx, output.Tags), nil +} + +// ListTags lists finspace service tags and set them in Context. +// It is called from outside this package. 
+func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { + tags, err := listTags(ctx, meta.(*conns.AWSClient).FinSpaceClient(ctx), identifier) + + if err != nil { + return err + } + + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(tags) + } + + return nil +} + +// map[string]string handling + +// Tags returns finspace service tags. +func Tags(tags tftags.KeyValueTags) map[string]string { + return tags.Map() +} + +// KeyValueTags creates tftags.KeyValueTags from finspace service tags. +func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { + return tftags.New(ctx, tags) +} + +// getTagsIn returns finspace service tags from Context. +// nil is returned if there are no input tags. +func getTagsIn(ctx context.Context) map[string]string { + if inContext, ok := tftags.FromContext(ctx); ok { + if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { + return tags + } + } + + return nil +} + +// setTagsOut sets finspace service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) + } +} + +// createTags creates finspace service tags for new resources. +func createTags(ctx context.Context, conn *finspace.Client, identifier string, tags map[string]string) error { + if len(tags) == 0 { + return nil + } + + return updateTags(ctx, conn, identifier, nil, tags) +} + +// updateTags updates finspace service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. 
+func updateTags(ctx context.Context, conn *finspace.Client, identifier string, oldTagsMap, newTagsMap any) error { + oldTags := tftags.New(ctx, oldTagsMap) + newTags := tftags.New(ctx, newTagsMap) + + removedTags := oldTags.Removed(newTags) + removedTags = removedTags.IgnoreSystem(names.FinSpace) + if len(removedTags) > 0 { + input := &finspace.UntagResourceInput{ + ResourceArn: aws.String(identifier), + TagKeys: removedTags.Keys(), + } + + _, err := conn.UntagResource(ctx, input) + + if err != nil { + return fmt.Errorf("untagging resource (%s): %w", identifier, err) + } + } + + updatedTags := oldTags.Updated(newTags) + updatedTags = updatedTags.IgnoreSystem(names.FinSpace) + if len(updatedTags) > 0 { + input := &finspace.TagResourceInput{ + ResourceArn: aws.String(identifier), + Tags: Tags(updatedTags), + } + + _, err := conn.TagResource(ctx, input) + + if err != nil { + return fmt.Errorf("tagging resource (%s): %w", identifier, err) + } + } + + return nil +} + +// UpdateTags updates finspace service tags. +// It is called from outside this package. +func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { + return updateTags(ctx, meta.(*conns.AWSClient).FinSpaceClient(ctx), identifier, oldTags, newTags) +} diff --git a/internal/service/finspace/test-fixtures/code.zip b/internal/service/finspace/test-fixtures/code.zip new file mode 100644 index 00000000000..34a083bc499 Binary files /dev/null and b/internal/service/finspace/test-fixtures/code.zip differ diff --git a/internal/service/firehose/consts.go b/internal/service/firehose/consts.go index 7b6465190c8..722660cb182 100644 --- a/internal/service/firehose/consts.go +++ b/internal/service/firehose/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package firehose import ( diff --git a/internal/service/firehose/delivery_stream.go b/internal/service/firehose/delivery_stream.go index c9853197fd8..c3deff02fe1 100644 --- a/internal/service/firehose/delivery_stream.go +++ b/internal/service/firehose/delivery_stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose import ( @@ -2854,10 +2857,10 @@ func resourceDeliveryStreamCreate(ctx context.Context, d *schema.ResourceData, m return sdkdiag.AppendErrorf(diags, "creating Kinesis Firehose Delivery Stream (%s): %s", sn, err) } - conn := meta.(*conns.AWSClient).FirehoseConn() + conn := meta.(*conns.AWSClient).FirehoseConn(ctx) input := &firehose.CreateDeliveryStreamInput{ DeliveryStreamName: aws.String(sn), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kinesis_source_configuration"); ok { @@ -2994,7 +2997,7 @@ func resourceDeliveryStreamUpdate(ctx context.Context, d *schema.ResourceData, m return sdkdiag.AppendErrorf(diags, "updating Kinesis Firehose Delivery Stream (%s): %s", sn, err) } - conn := meta.(*conns.AWSClient).FirehoseConn() + conn := meta.(*conns.AWSClient).FirehoseConn(ctx) if d.HasChangesExcept("tags", "tags_all") { updateInput := &firehose.UpdateDestinationInput{ @@ -3117,7 +3120,7 @@ func resourceDeliveryStreamUpdate(ctx context.Context, d *schema.ResourceData, m func resourceDeliveryStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FirehoseConn() + conn := meta.(*conns.AWSClient).FirehoseConn(ctx) sn := d.Get("name").(string) s, err := FindDeliveryStreamByName(ctx, conn, sn) @@ -3141,7 +3144,7 @@ func resourceDeliveryStreamRead(ctx context.Context, d *schema.ResourceData, met func resourceDeliveryStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).FirehoseConn() + conn := meta.(*conns.AWSClient).FirehoseConn(ctx) sn := d.Get("name").(string) log.Printf("[DEBUG] Deleting Kinesis Firehose Delivery Stream: (%s)", sn) diff --git a/internal/service/firehose/delivery_stream_data_source.go b/internal/service/firehose/delivery_stream_data_source.go index 9a5e5d22bf7..490705c8723 100644 --- a/internal/service/firehose/delivery_stream_data_source.go +++ b/internal/service/firehose/delivery_stream_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose import ( @@ -29,7 +32,7 @@ func DataSourceDeliveryStream() *schema.Resource { func dataSourceDeliveryStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FirehoseConn() + conn := meta.(*conns.AWSClient).FirehoseConn(ctx) sn := d.Get("name").(string) output, err := FindDeliveryStreamByName(ctx, conn, sn) diff --git a/internal/service/firehose/delivery_stream_data_source_test.go b/internal/service/firehose/delivery_stream_data_source_test.go index 9e2fa78329a..7a00fb2b87f 100644 --- a/internal/service/firehose/delivery_stream_data_source_test.go +++ b/internal/service/firehose/delivery_stream_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose_test import ( diff --git a/internal/service/firehose/delivery_stream_migrate.go b/internal/service/firehose/delivery_stream_migrate.go index a0de631a88b..049a75ff49e 100644 --- a/internal/service/firehose/delivery_stream_migrate.go +++ b/internal/service/firehose/delivery_stream_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package firehose import ( diff --git a/internal/service/firehose/delivery_stream_migrate_test.go b/internal/service/firehose/delivery_stream_migrate_test.go index a78a448585c..bf468f4e6ce 100644 --- a/internal/service/firehose/delivery_stream_migrate_test.go +++ b/internal/service/firehose/delivery_stream_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose_test import ( diff --git a/internal/service/firehose/delivery_stream_test.go b/internal/service/firehose/delivery_stream_test.go index 471cc03e41d..cdd8f40f0cd 100644 --- a/internal/service/firehose/delivery_stream_test.go +++ b/internal/service/firehose/delivery_stream_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose_test import ( @@ -254,7 +257,7 @@ func TestAccFirehoseDeliveryStream_ExtendedS3_externalUpdate(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).FirehoseConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FirehoseConn(ctx) udi := firehose.UpdateDestinationInput{ DeliveryStreamName: aws.String(rName), DestinationId: aws.String("destinationId-000000000001"), @@ -1632,7 +1635,7 @@ func testAccCheckDeliveryStreamExists(ctx context.Context, n string, v *firehose return fmt.Errorf("No Kinesis Firehose Delivery Stream ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).FirehoseConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FirehoseConn(ctx) output, err := tffirehose.FindDeliveryStreamByName(ctx, conn, rs.Primary.Attributes["name"]) @@ -1653,7 +1656,7 @@ func testAccCheckDeliveryStreamDestroy(ctx context.Context) resource.TestCheckFu continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).FirehoseConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FirehoseConn(ctx) _, err := tffirehose.FindDeliveryStreamByName(ctx, conn, 
rs.Primary.Attributes["name"]) @@ -1878,7 +1881,7 @@ func testAccCheckDeliveryStreamDestroy_ExtendedS3(ctx context.Context) resource. func testAccCheckLambdaFunctionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_function" { diff --git a/internal/service/firehose/find.go b/internal/service/firehose/find.go index 934d616645a..08fdad4747f 100644 --- a/internal/service/firehose/find.go +++ b/internal/service/firehose/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose import ( diff --git a/internal/service/firehose/generate.go b/internal/service/firehose/generate.go index c1e38b66ae6..51d3a4c78ab 100644 --- a/internal/service/firehose/generate.go +++ b/internal/service/firehose/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsForDeliveryStream -ListTagsInIDElem=DeliveryStreamName -ServiceTagsSlice -TagOp=TagDeliveryStream -TagInIDElem=DeliveryStreamName -UntagOp=UntagDeliveryStream -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package firehose diff --git a/internal/service/firehose/list_pages.go b/internal/service/firehose/list_pages.go index 22167ae35f4..f401bb8ceae 100644 --- a/internal/service/firehose/list_pages.go +++ b/internal/service/firehose/list_pages.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package firehose import ( diff --git a/internal/service/firehose/service_package_gen.go b/internal/service/firehose/service_package_gen.go index ddcf6853d87..2453db4b3d6 100644 --- a/internal/service/firehose/service_package_gen.go +++ b/internal/service/firehose/service_package_gen.go @@ -5,6 +5,10 @@ package firehose import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + firehose_sdkv1 "github.com/aws/aws-sdk-go/service/firehose" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -45,4 +49,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Firehose } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*firehose_sdkv1.Firehose, error) { + sess := config["session"].(*session_sdkv1.Session) + + return firehose_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/firehose/status.go b/internal/service/firehose/status.go index 7e18648e331..77f584abc57 100644 --- a/internal/service/firehose/status.go +++ b/internal/service/firehose/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose import ( diff --git a/internal/service/firehose/sweep.go b/internal/service/firehose/sweep.go index 0141991a7f4..9f3ffb94e0b 100644 --- a/internal/service/firehose/sweep.go +++ b/internal/service/firehose/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/firehose" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,11 +26,11 @@ func init() { func sweepDeliveryStreams(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).FirehoseConn() + conn := client.FirehoseConn(ctx) input := &firehose.ListDeliveryStreamsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -42,10 +44,10 @@ func sweepDeliveryStreams(region string) error { d := r.Data(nil) name := aws.StringValue(v) arn := arn.ARN{ - Partition: client.(*conns.AWSClient).Partition, + Partition: client.Partition, Service: firehose.ServiceName, - Region: client.(*conns.AWSClient).Region, - AccountID: client.(*conns.AWSClient).AccountID, + Region: client.Region, + AccountID: client.AccountID, Resource: fmt.Sprintf("deliverystream/%s", name), }.String() d.SetId(arn) @@ -66,7 +68,7 @@ func sweepDeliveryStreams(region string) error { return fmt.Errorf("error listing Kinesis Firehose Delivery Streams (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Kinesis Firehose Delivery Streams (%s): %w", region, err) diff --git a/internal/service/firehose/tags_gen.go b/internal/service/firehose/tags_gen.go index 635b90a1a11..f57525f2d84 100644 --- a/internal/service/firehose/tags_gen.go +++ b/internal/service/firehose/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) 
-// ListTags lists firehose service tags. +// listTags lists firehose service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn firehoseiface.FirehoseAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn firehoseiface.FirehoseAPI, identifier string) (tftags.KeyValueTags, error) { input := &firehose.ListTagsForDeliveryStreamInput{ DeliveryStreamName: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn firehoseiface.FirehoseAPI, identifier st // ListTags lists firehose service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).FirehoseConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).FirehoseConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*firehose.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns firehose service tags from Context. +// getTagsIn returns firehose service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*firehose.Tag { +func getTagsIn(ctx context.Context) []*firehose.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*firehose.Tag { return nil } -// SetTagsOut sets firehose service tags in Context. -func SetTagsOut(ctx context.Context, tags []*firehose.Tag) { +// setTagsOut sets firehose service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*firehose.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates firehose service tags. +// updateTags updates firehose service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn firehoseiface.FirehoseAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn firehoseiface.FirehoseAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn firehoseiface.FirehoseAPI, identifier // UpdateTags updates firehose service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).FirehoseConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).FirehoseConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/firehose/wait.go b/internal/service/firehose/wait.go index aea82d251fa..6b99127077b 100644 --- a/internal/service/firehose/wait.go +++ b/internal/service/firehose/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package firehose import ( diff --git a/internal/service/fis/experiment_template.go b/internal/service/fis/experiment_template.go index f93bc5e5d16..e7fb9466f75 100644 --- a/internal/service/fis/experiment_template.go +++ b/internal/service/fis/experiment_template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fis import ( @@ -126,6 +129,49 @@ func ResourceExperimentTemplate() *schema.Resource { Required: true, ValidateFunc: validation.StringLenBetween(0, 512), }, + "log_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cloudwatch_logs_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "log_group_arn": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "log_schema_version": { + Type: schema.TypeInt, + Required: true, + }, + "s3_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bucket_name": { + Type: schema.TypeString, + Required: true, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + }, + }, + }, "role_arn": { Type: schema.TypeString, Required: true, @@ -181,6 +227,11 @@ func ResourceExperimentTemplate() *schema.Resource { Required: true, ValidateFunc: validation.StringLenBetween(0, 64), }, + "parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "resource_arns": { Type: schema.TypeSet, Optional: true, @@ -233,15 +284,16 @@ func ResourceExperimentTemplate() *schema.Resource { } func resourceExperimentTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FISClient() + conn := meta.(*conns.AWSClient).FISClient(ctx) input := &fis.CreateExperimentTemplateInput{ - Actions: expandExperimentTemplateActions(d.Get("action").(*schema.Set)), - ClientToken: aws.String(id.UniqueId()), - Description: aws.String(d.Get("description").(string)), - RoleArn: aws.String(d.Get("role_arn").(string)), - StopConditions: 
expandExperimentTemplateStopConditions(d.Get("stop_condition").(*schema.Set)), - Tags: GetTagsIn(ctx), + Actions: expandExperimentTemplateActions(d.Get("action").(*schema.Set)), + ClientToken: aws.String(id.UniqueId()), + Description: aws.String(d.Get("description").(string)), + LogConfiguration: expandExperimentTemplateLogConfiguration(d.Get("log_configuration").([]interface{})), + RoleArn: aws.String(d.Get("role_arn").(string)), + StopConditions: expandExperimentTemplateStopConditions(d.Get("stop_condition").(*schema.Set)), + Tags: getTagsIn(ctx), } targets, err := expandExperimentTemplateTargets(d.Get("target").(*schema.Set)) @@ -261,7 +313,7 @@ func resourceExperimentTemplateCreate(ctx context.Context, d *schema.ResourceDat } func resourceExperimentTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FISClient() + conn := meta.(*conns.AWSClient).FISClient(ctx) input := &fis.GetExperimentTemplateInput{Id: aws.String(d.Id())} out, err := conn.GetExperimentTemplate(ctx, input) @@ -296,6 +348,10 @@ func resourceExperimentTemplateRead(ctx context.Context, d *schema.ResourceData, return create.DiagSettingError(names.FIS, ResNameExperimentTemplate, d.Id(), "action", err) } + if err := d.Set("log_configuration", flattenExperimentTemplateLogConfiguration(experimentTemplate.LogConfiguration)); err != nil { + return create.DiagSettingError(names.FIS, ResNameExperimentTemplate, d.Id(), "log_configuration", err) + } + if err := d.Set("stop_condition", flattenExperimentTemplateStopConditions(experimentTemplate.StopConditions)); err != nil { return create.DiagSettingError(names.FIS, ResNameExperimentTemplate, d.Id(), "stop_condition", err) } @@ -304,52 +360,59 @@ func resourceExperimentTemplateRead(ctx context.Context, d *schema.ResourceData, return create.DiagSettingError(names.FIS, ResNameExperimentTemplate, d.Id(), "target", err) } - SetTagsOut(ctx, experimentTemplate.Tags) + setTagsOut(ctx, 
experimentTemplate.Tags) return nil } func resourceExperimentTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FISClient() + conn := meta.(*conns.AWSClient).FISClient(ctx) - input := &fis.UpdateExperimentTemplateInput{ - Id: aws.String(d.Id()), - } + if d.HasChangesExcept("tags", "tags_all") { + input := &fis.UpdateExperimentTemplateInput{ + Id: aws.String(d.Id()), + } - if d.HasChange("action") { - input.Actions = expandExperimentTemplateActionsForUpdate(d.Get("action").(*schema.Set)) - } + if d.HasChange("action") { + input.Actions = expandExperimentTemplateActionsForUpdate(d.Get("action").(*schema.Set)) + } - if d.HasChange("description") { - input.Description = aws.String(d.Get("description").(string)) - } + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } - if d.HasChange("role_arn") { - input.RoleArn = aws.String(d.Get("role_arn").(string)) - } + if d.HasChange("log_configuration") { + config := expandExperimentTemplateLogConfigurationForUpdate(d.Get("log_configuration").([]interface{})) + input.LogConfiguration = config + } - if d.HasChange("stop_condition") { - input.StopConditions = expandExperimentTemplateStopConditionsForUpdate(d.Get("stop_condition").(*schema.Set)) - } + if d.HasChange("role_arn") { + input.RoleArn = aws.String(d.Get("role_arn").(string)) + } + + if d.HasChange("stop_condition") { + input.StopConditions = expandExperimentTemplateStopConditionsForUpdate(d.Get("stop_condition").(*schema.Set)) + } - if d.HasChange("target") { - targets, err := expandExperimentTemplateTargetsForUpdate(d.Get("target").(*schema.Set)) + if d.HasChange("target") { + targets, err := expandExperimentTemplateTargetsForUpdate(d.Get("target").(*schema.Set)) + if err != nil { + return create.DiagError(names.FIS, create.ErrActionUpdating, ResNameExperimentTemplate, d.Id(), err) + } + input.Targets = targets + } + + _, err := 
conn.UpdateExperimentTemplate(ctx, input) if err != nil { return create.DiagError(names.FIS, create.ErrActionUpdating, ResNameExperimentTemplate, d.Id(), err) } - input.Targets = targets - } - - _, err := conn.UpdateExperimentTemplate(ctx, input) - if err != nil { - return create.DiagError(names.FIS, create.ErrActionUpdating, ResNameExperimentTemplate, d.Id(), err) } return resourceExperimentTemplateRead(ctx, d, meta) } func resourceExperimentTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FISClient() + conn := meta.(*conns.AWSClient).FISClient(ctx) _, err := conn.DeleteExperimentTemplate(ctx, &fis.DeleteExperimentTemplateInput{ Id: aws.String(d.Id()), }) @@ -473,6 +536,58 @@ func expandExperimentTemplateStopConditions(l *schema.Set) []types.CreateExperim return items } +func expandExperimentTemplateLogConfiguration(l []interface{}) *types.CreateExperimentTemplateLogConfigurationInput { + if len(l) == 0 { + return nil + } + + raw := l[0].(map[string]interface{}) + + config := types.CreateExperimentTemplateLogConfigurationInput{ + LogSchemaVersion: aws.Int32(int32(raw["log_schema_version"].(int))), + } + + if v, ok := raw["cloudwatch_logs_configuration"].([]interface{}); ok && len(v) > 0 { + config.CloudWatchLogsConfiguration = expandExperimentTemplateCloudWatchLogsConfiguration(v) + } + + if v, ok := raw["s3_configuration"].([]interface{}); ok && len(v) > 0 { + config.S3Configuration = expandExperimentTemplateS3Configuration(v) + } + + return &config +} + +func expandExperimentTemplateCloudWatchLogsConfiguration(l []interface{}) *types.ExperimentTemplateCloudWatchLogsLogConfigurationInput { + if len(l) == 0 { + return nil + } + + raw := l[0].(map[string]interface{}) + + config := types.ExperimentTemplateCloudWatchLogsLogConfigurationInput{ + LogGroupArn: aws.String(raw["log_group_arn"].(string)), + } + return &config +} + +func expandExperimentTemplateS3Configuration(l []interface{}) 
*types.ExperimentTemplateS3LogConfigurationInput { + if len(l) == 0 { + return nil + } + + raw := l[0].(map[string]interface{}) + + config := types.ExperimentTemplateS3LogConfigurationInput{ + BucketName: aws.String(raw["bucket_name"].(string)), + } + if v, ok := raw["prefix"].(string); ok && v != "" { + config.Prefix = aws.String(v) + } + + return &config +} + func expandExperimentTemplateStopConditionsForUpdate(l *schema.Set) []types.UpdateExperimentTemplateStopConditionInput { if l.Len() == 0 { return nil @@ -543,6 +658,10 @@ func expandExperimentTemplateTargets(l *schema.Set) (map[string]types.CreateExpe config.SelectionMode = aws.String(v) } + if v, ok := raw["parameters"].(map[string]interface{}); ok && len(v) > 0 { + config.Parameters = flex.ExpandStringValueMap(v) + } + if v, ok := raw["name"].(string); ok && v != "" { attrs[v] = config } @@ -595,6 +714,10 @@ func expandExperimentTemplateTargetsForUpdate(l *schema.Set) (map[string]types.U config.SelectionMode = aws.String(v) } + if v, ok := raw["parameters"].(map[string]interface{}); ok && len(v) > 0 { + config.Parameters = flex.ExpandStringValueMap(v) + } + if v, ok := raw["name"].(string); ok && v != "" { attrs[v] = config } @@ -603,6 +726,26 @@ func expandExperimentTemplateTargetsForUpdate(l *schema.Set) (map[string]types.U return attrs, nil } +func expandExperimentTemplateLogConfigurationForUpdate(l []interface{}) *types.UpdateExperimentTemplateLogConfigurationInput { + if len(l) == 0 { + return &types.UpdateExperimentTemplateLogConfigurationInput{} + } + + raw := l[0].(map[string]interface{}) + config := types.UpdateExperimentTemplateLogConfigurationInput{ + LogSchemaVersion: aws.Int32(int32(raw["log_schema_version"].(int))), + } + if v, ok := raw["cloudwatch_logs_configuration"].([]interface{}); ok && len(v) > 0 { + config.CloudWatchLogsConfiguration = expandExperimentTemplateCloudWatchLogsConfiguration(v) + } + + if v, ok := raw["s3_configuration"].([]interface{}); ok && len(v) > 0 { + 
config.S3Configuration = expandExperimentTemplateS3Configuration(v) + } + + return &config +} + func expandExperimentTemplateActionParameteres(l *schema.Set) map[string]string { if l.Len() == 0 { return nil @@ -725,6 +868,7 @@ func flattenExperimentTemplateTargets(configured map[string]types.ExperimentTemp item["resource_tag"] = flattenExperimentTemplateTargetResourceTags(v.ResourceTags) item["resource_type"] = aws.ToString(v.ResourceType) item["selection_mode"] = aws.ToString(v.SelectionMode) + item["parameters"] = v.Parameters item["name"] = k @@ -734,6 +878,47 @@ func flattenExperimentTemplateTargets(configured map[string]types.ExperimentTemp return dataResources } +func flattenExperimentTemplateLogConfiguration(configured *types.ExperimentTemplateLogConfiguration) []map[string]interface{} { + if configured == nil { + return make([]map[string]interface{}, 0) + } + + dataResources := make([]map[string]interface{}, 1) + dataResources[0] = make(map[string]interface{}) + dataResources[0]["log_schema_version"] = configured.LogSchemaVersion + dataResources[0]["cloudwatch_logs_configuration"] = flattenCloudWatchLogsConfiguration(configured.CloudWatchLogsConfiguration) + dataResources[0]["s3_configuration"] = flattenS3Configuration(configured.S3Configuration) + + return dataResources +} + +func flattenCloudWatchLogsConfiguration(configured *types.ExperimentTemplateCloudWatchLogsLogConfiguration) []map[string]interface{} { + if configured == nil { + return make([]map[string]interface{}, 0) + } + + dataResources := make([]map[string]interface{}, 1) + dataResources[0] = make(map[string]interface{}) + dataResources[0]["log_group_arn"] = configured.LogGroupArn + + return dataResources +} + +func flattenS3Configuration(configured *types.ExperimentTemplateS3LogConfiguration) []map[string]interface{} { + if configured == nil { + return make([]map[string]interface{}, 0) + } + + dataResources := make([]map[string]interface{}, 1) + dataResources[0] = make(map[string]interface{}) + 
dataResources[0]["bucket_name"] = configured.BucketName + if aws.ToString(configured.Prefix) != "" { + dataResources[0]["prefix"] = configured.Prefix + } + + return dataResources +} + func flattenExperimentTemplateActionParameters(configured map[string]string) []map[string]interface{} { dataResources := make([]map[string]interface{}, 0, len(configured)) @@ -801,15 +986,19 @@ func validExperimentTemplateStopConditionSource() schema.SchemaValidateFunc { } func validExperimentTemplateActionTargetKey() schema.SchemaValidateFunc { + // See https://docs.aws.amazon.com/fis/latest/userguide/actions.html#action-targets allowedStopConditionSources := []string{ "Cluster", "Clusters", "DBInstances", "Instances", "Nodegroups", + "Pods", "Roles", "SpotInstances", "Subnets", + "Tasks", + "Volumes", } return validation.All( diff --git a/internal/service/fis/experiment_template_test.go b/internal/service/fis/experiment_template_test.go index e7f72e410e0..4e5fa011e68 100644 --- a/internal/service/fis/experiment_template_test.go +++ b/internal/service/fis/experiment_template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fis_test import ( @@ -265,6 +268,156 @@ func TestAccFISExperimentTemplate_eks(t *testing.T) { }) } +func TestAccFISExperimentTemplate_ebs(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_fis_experiment_template.test" + var conf types.ExperimentTemplate + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, fis.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckExperimentTemplateDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccExperimentTemplateConfig_ebsVolume(rName, "EBS Volume Pause I/O Experiment", "ebs-paused-io-action", "EBS Volume Pause I/O", "aws:ebs:pause-volume-io", "Volumes", "ebs-volume-to-pause-io", "duration", "PT6M", "aws:ec2:ebs-volume", "ALL", "env", "test"), + Check: resource.ComposeTestCheckFunc( + testAccExperimentTemplateExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "description", "EBS Volume Pause I/O Experiment"), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test_fis", "arn"), + resource.TestCheckResourceAttr(resourceName, "stop_condition.0.source", "none"), + resource.TestCheckResourceAttr(resourceName, "stop_condition.0.value", ""), + resource.TestCheckResourceAttr(resourceName, "stop_condition.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.name", "ebs-paused-io-action"), + resource.TestCheckResourceAttr(resourceName, "action.0.description", "EBS Volume Pause I/O"), + resource.TestCheckResourceAttr(resourceName, "action.0.action_id", "aws:ebs:pause-volume-io"), + resource.TestCheckResourceAttr(resourceName, "action.0.parameter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.parameter.0.key", 
"duration"), + resource.TestCheckResourceAttr(resourceName, "action.0.parameter.0.value", "PT6M"), + resource.TestCheckResourceAttr(resourceName, "action.0.start_after.#", "0"), + resource.TestCheckResourceAttr(resourceName, "action.0.target.0.key", "Volumes"), + resource.TestCheckResourceAttr(resourceName, "action.0.target.0.value", "ebs-volume-to-pause-io"), + resource.TestCheckResourceAttr(resourceName, "action.0.target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target.0.name", "ebs-volume-to-pause-io"), + resource.TestCheckResourceAttr(resourceName, "target.0.resource_type", "aws:ec2:ebs-volume"), + resource.TestCheckResourceAttr(resourceName, "target.0.selection_mode", "ALL"), + resource.TestCheckResourceAttr(resourceName, "target.0.filter.#", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target.0.resource_arns.0", "aws_ebs_volume.test", "arn"), + resource.TestCheckResourceAttr(resourceName, "target.0.resource_tag.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target.#", "1"), + ), + }, + }, + }) +} + +func TestAccFISExperimentTemplate_ebsParameters(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_fis_experiment_template.test" + var conf types.ExperimentTemplate + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, fis.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckExperimentTemplateDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccExperimentTemplateConfig_ebsVolumeParameters(rName, "EBS Volume Pause I/O Experiment", "ebs-paused-io-action", "EBS Volume Pause I/O", "aws:ebs:pause-volume-io", "Volumes", "ebs-volume-to-pause-io", "duration", "PT6M", 
"aws:ec2:ebs-volume", "ALL", "env", "test"), + Check: resource.ComposeTestCheckFunc( + testAccExperimentTemplateExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "description", "EBS Volume Pause I/O Experiment"), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test_fis", "arn"), + resource.TestCheckResourceAttr(resourceName, "stop_condition.0.source", "none"), + resource.TestCheckResourceAttr(resourceName, "stop_condition.0.value", ""), + resource.TestCheckResourceAttr(resourceName, "stop_condition.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.name", "ebs-paused-io-action"), + resource.TestCheckResourceAttr(resourceName, "action.0.description", "EBS Volume Pause I/O"), + resource.TestCheckResourceAttr(resourceName, "action.0.action_id", "aws:ebs:pause-volume-io"), + resource.TestCheckResourceAttr(resourceName, "action.0.parameter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.parameter.0.key", "duration"), + resource.TestCheckResourceAttr(resourceName, "action.0.parameter.0.value", "PT6M"), + resource.TestCheckResourceAttr(resourceName, "action.0.start_after.#", "0"), + resource.TestCheckResourceAttr(resourceName, "action.0.target.0.key", "Volumes"), + resource.TestCheckResourceAttr(resourceName, "action.0.target.0.value", "ebs-volume-to-pause-io"), + resource.TestCheckResourceAttr(resourceName, "action.0.target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target.0.name", "ebs-volume-to-pause-io"), + resource.TestCheckResourceAttr(resourceName, "target.0.resource_type", "aws:ec2:ebs-volume"), + resource.TestCheckResourceAttr(resourceName, "target.0.selection_mode", "ALL"), + resource.TestCheckResourceAttr(resourceName, "target.0.parameters.%", "1"), + resource.TestCheckResourceAttrPair(resourceName, "target.0.parameters.availabilityZoneIdentifier", "aws_ebs_volume.test", 
"availability_zone"), + resource.TestCheckResourceAttr(resourceName, "target.0.filter.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target.0.resource_tag.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target.0.resource_tag.0.key", "Name"), + resource.TestCheckResourceAttrPair(resourceName, "target.0.resource_tag.0.value", "aws_ebs_volume.test", "tags.Name"), + resource.TestCheckResourceAttr(resourceName, "target.#", "1"), + ), + }, + }, + }) +} + +func TestAccFISExperimentTemplate_loggingConfiguration(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_fis_experiment_template.test" + var conf types.ExperimentTemplate + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, fis.ServiceID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckExperimentTemplateDestroy(ctx), + Steps: []resource.TestStep{ + // Cloudwatch Logging + { + Config: testAccExperimentTemplateConfig_logConfigCloudWatch(rName, "An experiment template for testing", "test-action-1", "", "aws:ec2:terminate-instances", "Instances", "to-terminate-1", "aws:ec2:instance", "COUNT(1)", "env", "test"), + Check: resource.ComposeTestCheckFunc( + testAccExperimentTemplateExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.log_schema_version", "2"), + acctest.CheckResourceAttrRegionalARN(resourceName, "log_configuration.0.cloudwatch_logs_configuration.0.log_group_arn", "logs", fmt.Sprintf("log-group:%s:*", rName)), + ), + }, + // Delete Logging + { + Config: testAccExperimentTemplateConfig_basic(rName, "An experiment template for testing", "test-action-1", "", "aws:ec2:terminate-instances", "Instances", "to-terminate-1", "aws:ec2:instance", "COUNT(1)", "env", "test"), + Check: resource.ComposeTestCheckFunc( + testAccExperimentTemplateExists(ctx, 
resourceName, &conf), + ), + }, + // S3 Logging + { + Config: testAccExperimentTemplateConfig_logConfigS3(rName, "An experiment template for testing", "test-action-1", "", "aws:ec2:terminate-instances", "Instances", "to-terminate-1", "aws:ec2:instance", "COUNT(1)", "env", "test"), + Check: resource.ComposeTestCheckFunc( + testAccExperimentTemplateExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.log_schema_version", "2"), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.s3_configuration.0.bucket_name", rName), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.s3_configuration.0.prefix", ""), + ), + }, + { + Config: testAccExperimentTemplateConfig_logConfigS3Prefix(rName, "An experiment template for testing", "test-action-1", "", "aws:ec2:terminate-instances", "Instances", "to-terminate-1", "aws:ec2:instance", "COUNT(1)", "env", "test"), + Check: resource.ComposeTestCheckFunc( + testAccExperimentTemplateExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.log_schema_version", "2"), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.s3_configuration.0.bucket_name", rName), + resource.TestCheckResourceAttr(resourceName, "log_configuration.0.s3_configuration.0.prefix", "test"), + ), + }, + }, + }) +} + func testAccExperimentTemplateExists(ctx context.Context, resourceName string, config *types.ExperimentTemplate) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] @@ -272,7 +425,7 @@ func testAccExperimentTemplateExists(ctx context.Context, resourceName string, c return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FISClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).FISClient(ctx) out, err := conn.GetExperimentTemplate(ctx, &fis.GetExperimentTemplateInput{Id: aws.String(rs.Primary.ID)}) if err 
!= nil { @@ -291,7 +444,7 @@ func testAccExperimentTemplateExists(ctx context.Context, resourceName string, c func testAccCheckExperimentTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FISClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).FISClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fis_experiment_template" { continue @@ -574,3 +727,342 @@ resource "aws_fis_experiment_template" "test" { } `, rName+"-fis", desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, paramK1, paramV1, paramK2, paramV2, paramK3, paramV3, paramK4, paramV4, paramK5, paramV5, targetResType, targetSelectMode, targetResTagK, targetResTagV)) } + +func testAccExperimentTemplateConfig_baseEBSVolume(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_ebs_volume" "test" { + availability_zone = data.aws_availability_zones.available.names[0] + size = 40 + + tags = { + Name = %[1]q + } +} +`, rName)) +} + +func testAccExperimentTemplateConfig_ebsVolume(rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, paramK1, paramV1, targetResType, targetSelectMode, targetResTagK, targetResTagV string) string { + return acctest.ConfigCompose(testAccExperimentTemplateConfig_baseEBSVolume(rName), fmt.Sprintf(` +resource "aws_iam_role" "test_fis" { + name = %[1]q + + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "fis.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} + +resource "aws_fis_experiment_template" "test" { + description = %[2]q + role_arn = aws_iam_role.test_fis.arn + + stop_condition { + source = "none" + } + + action { + name = %[3]q + description = %[4]q + action_id = %[5]q + + target { + key 
= %[6]q + value = %[7]q + } + + parameter { + key = %[8]q + value = %[9]q + } + } + + target { + name = %[7]q + resource_type = %[10]q + selection_mode = %[11]q + + resource_arns = tolist([aws_ebs_volume.test.arn]) + } + + tags = { + Name = %[1]q + } +} +`, rName+"-fis", desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, paramK1, paramV1, targetResType, targetSelectMode, targetResTagK, targetResTagV)) +} + +func testAccExperimentTemplateConfig_ebsVolumeParameters(rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, paramK1, paramV1, targetResType, targetSelectMode, targetResTagK, targetResTagV string) string { + return acctest.ConfigCompose(testAccExperimentTemplateConfig_baseEBSVolume(rName), fmt.Sprintf(` +resource "aws_iam_role" "test_fis" { + name = %[1]q + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "fis.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} +resource "aws_fis_experiment_template" "test" { + description = %[2]q + role_arn = aws_iam_role.test_fis.arn + stop_condition { + source = "none" + } + action { + name = %[3]q + description = %[4]q + action_id = %[5]q + target { + key = %[6]q + value = %[7]q + } + parameter { + key = %[8]q + value = %[9]q + } + } + target { + name = %[7]q + resource_type = %[10]q + selection_mode = %[11]q + resource_tag { + key = "Name" + value = aws_ebs_volume.test.tags.Name + } + parameters = { + availabilityZoneIdentifier = aws_ebs_volume.test.availability_zone + } + } + tags = { + Name = %[1]q + } +} +`, rName+"-fis", desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, paramK1, paramV1, targetResType, targetSelectMode, targetResTagK, targetResTagV)) +} + +func testAccExperimentTemplateConfig_logConfigCloudWatch(rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, targetResType, targetSelectMode, targetResTagK, 
targetResTagV string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "fis.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} + +resource "aws_cloudwatch_log_group" "test" { + name = %[1]q +} + +resource "aws_fis_experiment_template" "test" { + description = %[2]q + role_arn = aws_iam_role.test.arn + + stop_condition { + source = "none" + } + + action { + name = %[3]q + description = %[4]q + action_id = %[5]q + + target { + key = %[6]q + value = %[7]q + } + } + + target { + name = %[7]q + resource_type = %[8]q + selection_mode = %[9]q + + resource_tag { + key = %[10]q + value = %[11]q + } + } + + log_configuration { + log_schema_version = 2 + + cloudwatch_logs_configuration { + log_group_arn = "${aws_cloudwatch_log_group.test.arn}:*" + } + } + + tags = { + Name = %[1]q + } +} +`, rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, targetResType, targetSelectMode, targetResTagK, targetResTagV) +} + +func testAccExperimentTemplateConfig_logConfigS3(rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, targetResType, targetSelectMode, targetResTagK, targetResTagV string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "fis.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_fis_experiment_template" "test" { + description = %[2]q + role_arn = aws_iam_role.test.arn + + stop_condition { + source = "none" + } + + action { + name = %[3]q + description = %[4]q + action_id = %[5]q 
+ + target { + key = %[6]q + value = %[7]q + } + } + + target { + name = %[7]q + resource_type = %[8]q + selection_mode = %[9]q + + resource_tag { + key = %[10]q + value = %[11]q + } + } + + log_configuration { + log_schema_version = 2 + + s3_configuration { + bucket_name = aws_s3_bucket.test.bucket + } + } + + tags = { + Name = %[1]q + } +} +`, rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, targetResType, targetSelectMode, targetResTagK, targetResTagV) +} + +func testAccExperimentTemplateConfig_logConfigS3Prefix(rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, targetResType, targetSelectMode, targetResTagK, targetResTagV string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Statement = [{ + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = [ + "fis.${data.aws_partition.current.dns_suffix}", + ] + } + }] + Version = "2012-10-17" + }) +} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_fis_experiment_template" "test" { + description = %[2]q + role_arn = aws_iam_role.test.arn + + stop_condition { + source = "none" + } + + action { + name = %[3]q + description = %[4]q + action_id = %[5]q + + target { + key = %[6]q + value = %[7]q + } + } + + target { + name = %[7]q + resource_type = %[8]q + selection_mode = %[9]q + + resource_tag { + key = %[10]q + value = %[11]q + } + } + + log_configuration { + log_schema_version = 2 + + s3_configuration { + bucket_name = aws_s3_bucket.test.bucket + prefix = "test" + } + } + + tags = { + Name = %[1]q + } +} +`, rName, desc, actionName, actionDesc, actionID, actionTargetK, actionTargetV, targetResType, targetSelectMode, targetResTagK, targetResTagV) +} diff --git a/internal/service/fis/generate.go b/internal/service/fis/generate.go index cc0808e9e1c..300a13fd097 100644 --- a/internal/service/fis/generate.go +++ 
b/internal/service/fis/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsMap -UpdateTags -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package fis diff --git a/internal/service/fis/service_package_gen.go b/internal/service/fis/service_package_gen.go index 820a49ebcce..b631b934677 100644 --- a/internal/service/fis/service_package_gen.go +++ b/internal/service/fis/service_package_gen.go @@ -5,6 +5,9 @@ package fis import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + fis_sdkv2 "github.com/aws/aws-sdk-go-v2/service/fis" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -38,4 +41,17 @@ func (p *servicePackage) ServicePackageName() string { return names.FIS } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*fis_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return fis_sdkv2.NewFromConfig(cfg, func(o *fis_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = fis_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/fis/sweep.go b/internal/service/fis/sweep.go index f6a447adc6e..778f0b91080 100644 --- a/internal/service/fis/sweep.go +++ b/internal/service/fis/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/fis" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,11 +26,11 @@ func init() { func sweepExperimentTemplates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FISClient() + conn := client.FISClient(ctx) input := &fis.ListExperimentTemplatesInput{} sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -54,7 +56,7 @@ func sweepExperimentTemplates(region string) error { } } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping FIS Experiment Templates (%s): %w", region, err) diff --git a/internal/service/fis/tags_gen.go b/internal/service/fis/tags_gen.go index baf249ccb88..3dda2ab92bb 100644 --- a/internal/service/fis/tags_gen.go +++ b/internal/service/fis/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists fis service tags. +// listTags lists fis service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn *fis.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *fis.Client, identifier string) (tftags.KeyValueTags, error) { input := &fis.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *fis.Client, identifier string) (tftags. // ListTags lists fis service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).FISClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).FISClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from fis service tags. +// KeyValueTags creates tftags.KeyValueTags from fis service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns fis service tags from Context. +// getTagsIn returns fis service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets fis service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets fis service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates fis service tags. 
+// updateTags updates fis service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *fis.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *fis.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *fis.Client, identifier string, oldTag // UpdateTags updates fis service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).FISClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).FISClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/fms/admin_account.go b/internal/service/fms/admin_account.go index de0b4e6fc6b..26fb7501230 100644 --- a/internal/service/fms/admin_account.go +++ b/internal/service/fms/admin_account.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fms import ( @@ -41,7 +44,7 @@ func ResourceAdminAccount() *schema.Resource { func resourceAdminAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) // Ensure there is not an existing FMS Admin Account output, err := conn.GetAdminAccountWithContext(ctx, &fms.GetAdminAccountInput{}) @@ -116,7 +119,7 @@ func associateAdminAccountRefreshFunc(ctx context.Context, conn *fms.FMS, accoun func resourceAdminAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) output, err := conn.GetAdminAccountWithContext(ctx, &fms.GetAdminAccountInput{}) @@ -147,7 +150,7 @@ func resourceAdminAccountRead(ctx context.Context, d *schema.ResourceData, meta func resourceAdminAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) _, err := conn.DisassociateAdminAccountWithContext(ctx, &fms.DisassociateAdminAccountInput{}) diff --git a/internal/service/fms/admin_account_test.go b/internal/service/fms/admin_account_test.go index 120ed34a4a6..de1ffb92819 100644 --- a/internal/service/fms/admin_account_test.go +++ b/internal/service/fms/admin_account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fms_test import ( @@ -41,7 +44,7 @@ func testAccAdminAccount_basic(t *testing.T) { func testAccCheckAdminAccountDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FMSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fms_admin_account" { diff --git a/internal/service/fms/fms_test.go b/internal/service/fms/fms_test.go index 5ebdfdb52fb..64f2f215b71 100644 --- a/internal/service/fms/fms_test.go +++ b/internal/service/fms/fms_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fms_test import ( diff --git a/internal/service/fms/generate.go b/internal/service/fms/generate.go index f5bd7dd47fb..64625868c77 100644 --- a/internal/service/fms/generate.go +++ b/internal/service/fms/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsForResource -ListTagsInIDElem=ResourceArn -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=TagResource -TagInTagsElem=TagList -TagInIDElem=ResourceArn -UpdateTags -TagType=Tag +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package fms diff --git a/internal/service/fms/policy.go b/internal/service/fms/policy.go index 434b72fae14..99cf5ff3089 100644 --- a/internal/service/fms/policy.go +++ b/internal/service/fms/policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fms import ( @@ -204,11 +207,11 @@ func ResourcePolicy() *schema.Resource { func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) input := &fms.PutPolicyInput{ Policy: resourcePolicyExpandPolicy(d), - TagList: GetTagsIn(ctx), + TagList: getTagsIn(ctx), } output, err := conn.PutPolicyWithContext(ctx, input) @@ -224,7 +227,7 @@ func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) output, err := FindPolicyByID(ctx, conn, d.Id()) @@ -274,7 +277,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &fms.PutPolicyInput{ @@ -293,7 +296,7 @@ func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FMSConn() + conn := meta.(*conns.AWSClient).FMSConn(ctx) log.Printf("[DEBUG] Deleting FMS Policy: %s", d.Id()) _, err := conn.DeletePolicyWithContext(ctx, &fms.DeletePolicyInput{ diff --git a/internal/service/fms/policy_test.go b/internal/service/fms/policy_test.go index f202c09c6aa..cda1d0a39a9 100644 --- a/internal/service/fms/policy_test.go +++ b/internal/service/fms/policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fms_test import ( @@ -269,7 +272,7 @@ func testAccPolicy_tags(t *testing.T) { func testAccCheckPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FMSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fms_policy" { @@ -304,7 +307,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("No FMS Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).FMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FMSConn(ctx) _, err := tffms.FindPolicyByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/fms/service_package.go b/internal/service/fms/service_package.go new file mode 100644 index 00000000000..4cc0bfd776e --- /dev/null +++ b/internal/service/fms/service_package.go @@ -0,0 +1,42 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package fms + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + fms_sdkv1 "github.com/aws/aws-sdk-go/service/fms" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *fms_sdkv1.FMS) (*fms_sdkv1.FMS, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + // Acceptance testing creates and deletes resources in quick succession. + // The FMS onboarding process into Organizations is opaque to consumers. + // Since we cannot reasonably check this status before receiving the error, + // set the operation as retryable. 
+ switch r.Operation.Name { + case "AssociateAdminAccount": + if tfawserr.ErrMessageContains(r.Error, fms_sdkv1.ErrCodeInvalidOperationException, "Your AWS Organization is currently offboarding with AWS Firewall Manager. Please submit onboard request after offboarded.") { + r.Retryable = aws_sdkv1.Bool(true) + } + case "DisassociateAdminAccount": + if tfawserr.ErrMessageContains(r.Error, fms_sdkv1.ErrCodeInvalidOperationException, "Your AWS Organization is currently onboarding with AWS Firewall Manager and cannot be offboarded.") { + r.Retryable = aws_sdkv1.Bool(true) + } + // System problems can arise during FMS policy updates (maybe also creation), + // so we set the following operation as retryable. + // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/23946 + case "PutPolicy": + if tfawserr.ErrCodeEquals(r.Error, fms_sdkv1.ErrCodeInternalErrorException) { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} diff --git a/internal/service/fms/service_package_gen.go b/internal/service/fms/service_package_gen.go index 319c9883106..089b55274f9 100644 --- a/internal/service/fms/service_package_gen.go +++ b/internal/service/fms/service_package_gen.go @@ -5,6 +5,10 @@ package fms import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + fms_sdkv1 "github.com/aws/aws-sdk-go/service/fms" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +48,13 @@ func (p *servicePackage) ServicePackageName() string { return names.FMS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*fms_sdkv1.FMS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return fms_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/fms/tags_gen.go b/internal/service/fms/tags_gen.go index bcc5538bb50..4413889d158 100644 --- a/internal/service/fms/tags_gen.go +++ b/internal/service/fms/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists fms service tags. +// listTags lists fms service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn fmsiface.FMSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn fmsiface.FMSAPI, identifier string) (tftags.KeyValueTags, error) { input := &fms.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn fmsiface.FMSAPI, identifier string) (tft // ListTags lists fms service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).FMSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).FMSConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*fms.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns fms service tags from Context. +// getTagsIn returns fms service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*fms.Tag { +func getTagsIn(ctx context.Context) []*fms.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*fms.Tag { return nil } -// SetTagsOut sets fms service tags in Context. -func SetTagsOut(ctx context.Context, tags []*fms.Tag) { +// setTagsOut sets fms service tags in Context. +func setTagsOut(ctx context.Context, tags []*fms.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates fms service tags. +// updateTags updates fms service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn fmsiface.FMSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn fmsiface.FMSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn fmsiface.FMSAPI, identifier string, ol // UpdateTags updates fms service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).FMSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).FMSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/fsx/backup.go b/internal/service/fsx/backup.go index 2f5a8768f39..6d1375a4d74 100644 --- a/internal/service/fsx/backup.go +++ b/internal/service/fsx/backup.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -76,11 +79,11 @@ func ResourceBackup() *schema.Resource { func resourceBackupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateBackupInput{ ClientRequestToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("file_system_id"); ok { @@ -124,7 +127,7 @@ func resourceBackupUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceBackupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) backup, err := FindBackupByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -153,14 +156,14 @@ func resourceBackupRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("volume_id", backup.Volume.VolumeId) } - SetTagsOut(ctx, backup.Tags) + setTagsOut(ctx, backup.Tags) return diags } func resourceBackupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) request := &fsx.DeleteBackupInput{ BackupId: aws.String(d.Id()), diff --git a/internal/service/fsx/backup_test.go b/internal/service/fsx/backup_test.go index 0d49091cb45..9a6b00e6b22 100644 --- a/internal/service/fsx/backup_test.go +++ b/internal/service/fsx/backup_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -268,7 +271,7 @@ func testAccCheckBackupExists(ctx context.Context, resourceName string, fs *fsx. 
return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) output, err := tffsx.FindBackupByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -287,7 +290,7 @@ func testAccCheckBackupExists(ctx context.Context, resourceName string, fs *fsx. func testAccCheckBackupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_backup" { diff --git a/internal/service/fsx/common_schema_data_source.go b/internal/service/fsx/common_schema_data_source.go index bb6c07a515f..f1aef6df181 100644 --- a/internal/service/fsx/common_schema_data_source.go +++ b/internal/service/fsx/common_schema_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( diff --git a/internal/service/fsx/data_repository_association.go b/internal/service/fsx/data_repository_association.go index 7c7bc19f874..ceb9a0d2d36 100644 --- a/internal/service/fsx/data_repository_association.go +++ b/internal/service/fsx/data_repository_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -156,14 +159,14 @@ func ResourceDataRepositoryAssociation() *schema.Resource { func resourceDataRepositoryAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateDataRepositoryAssociationInput{ ClientRequestToken: aws.String(id.UniqueId()), DataRepositoryPath: aws.String(d.Get("data_repository_path").(string)), FileSystemId: aws.String(d.Get("file_system_id").(string)), FileSystemPath: aws.String(d.Get("file_system_path").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("batch_import_meta_data_on_create"); ok { @@ -195,7 +198,7 @@ func resourceDataRepositoryAssociationCreate(ctx context.Context, d *schema.Reso func resourceDataRepositoryAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateDataRepositoryAssociationInput{ @@ -226,7 +229,7 @@ func resourceDataRepositoryAssociationUpdate(ctx context.Context, d *schema.Reso func resourceDataRepositoryAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) association, err := FindDataRepositoryAssociationByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -249,14 +252,14 @@ func resourceDataRepositoryAssociationRead(ctx context.Context, d *schema.Resour return sdkdiag.AppendErrorf(diags, "setting s3 data repository configuration: %s", err) } - SetTagsOut(ctx, association.Tags) + setTagsOut(ctx, association.Tags) return diags } func 
resourceDataRepositoryAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) request := &fsx.DeleteDataRepositoryAssociationInput{ ClientRequestToken: aws.String(id.UniqueId()), diff --git a/internal/service/fsx/data_repository_association_test.go b/internal/service/fsx/data_repository_association_test.go index 1c9c400a4eb..8cd8a1536fd 100644 --- a/internal/service/fsx/data_repository_association_test.go +++ b/internal/service/fsx/data_repository_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -530,7 +533,7 @@ func testAccCheckDataRepositoryAssociationExists(ctx context.Context, resourceNa return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) association, err := tffsx.FindDataRepositoryAssociationByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -549,7 +552,7 @@ func testAccCheckDataRepositoryAssociationExists(ctx context.Context, resourceNa func testAccCheckDataRepositoryAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_lustre_file_system" { diff --git a/internal/service/fsx/file_cache.go b/internal/service/fsx/file_cache.go index ea4cf46c5b6..b9a8186c263 100644 --- a/internal/service/fsx/file_cache.go +++ b/internal/service/fsx/file_cache.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -296,7 +299,7 @@ const ( ) func resourceFileCacheCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateFileCacheInput{ ClientRequestToken: aws.String(id.UniqueId()), @@ -304,7 +307,7 @@ func resourceFileCacheCreate(ctx context.Context, d *schema.ResourceData, meta i FileCacheTypeVersion: aws.String(d.Get("file_cache_type_version").(string)), StorageCapacity: aws.Int64(int64(d.Get("storage_capacity").(int))), SubnetIds: flex.ExpandStringList(d.Get("subnet_ids").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("copy_tags_to_data_repository_associations"); ok { input.CopyTagsToDataRepositoryAssociations = aws.Bool(v.(bool)) @@ -338,7 +341,7 @@ func resourceFileCacheCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceFileCacheRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) filecache, err := findFileCacheByID(ctx, conn, d.Id()) @@ -390,7 +393,7 @@ func resourceFileCacheRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceFileCacheUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all") { input := &fsx.UpdateFileCacheInput{ @@ -417,7 +420,7 @@ func resourceFileCacheUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceFileCacheDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) log.Printf("[INFO] Deleting FSx FileCache %s", d.Id()) _, err := 
conn.DeleteFileCacheWithContext(ctx, &fsx.DeleteFileCacheInput{ diff --git a/internal/service/fsx/file_cache_test.go b/internal/service/fsx/file_cache_test.go index 64031beaad2..85c575b1f30 100644 --- a/internal/service/fsx/file_cache_test.go +++ b/internal/service/fsx/file_cache_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -386,7 +389,7 @@ func testAccFileCache_tags(t *testing.T) { func testAccCheckFileCacheDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_file_cache" { @@ -421,7 +424,7 @@ func testAccCheckFileCacheExists(ctx context.Context, name string, filecache *fs return create.Error(names.FSx, create.ErrActionCheckingExistence, tffsx.ResNameFileCache, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) resp, err := conn.DescribeFileCachesWithContext(ctx, &fsx.DescribeFileCachesInput{ FileCacheIds: []*string{aws.String(rs.Primary.ID)}, diff --git a/internal/service/fsx/find.go b/internal/service/fsx/find.go index d8b3fdafe20..f64069248c6 100644 --- a/internal/service/fsx/find.go +++ b/internal/service/fsx/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( diff --git a/internal/service/fsx/generate.go b/internal/service/fsx/generate.go index 2faa9032891..f79c17d8d92 100644 --- a/internal/service/fsx/generate.go +++ b/internal/service/fsx/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package fsx diff --git a/internal/service/fsx/lustre_file_system.go b/internal/service/fsx/lustre_file_system.go index 550b6cebc2b..e5ed927280c 100644 --- a/internal/service/fsx/lustre_file_system.go +++ b/internal/service/fsx/lustre_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -292,7 +295,7 @@ func resourceLustreFileSystemSchemaCustomizeDiff(_ context.Context, d *schema.Re func resourceLustreFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateFileSystemInput{ ClientRequestToken: aws.String(id.UniqueId()), @@ -303,7 +306,7 @@ func resourceLustreFileSystemCreate(ctx context.Context, d *schema.ResourceData, LustreConfiguration: &fsx.CreateFileSystemLustreConfiguration{ DeploymentType: aws.String(d.Get("deployment_type").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } backupInput := &fsx.CreateFileSystemFromBackupInput{ @@ -313,7 +316,7 @@ func resourceLustreFileSystemCreate(ctx context.Context, d *schema.ResourceData, LustreConfiguration: &fsx.CreateFileSystemLustreConfiguration{ DeploymentType: aws.String(d.Get("deployment_type").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } //Applicable only for TypePersistent1 and TypePersistent2 @@ -428,7 +431,7 @@ func resourceLustreFileSystemCreate(ctx context.Context, d *schema.ResourceData, func resourceLustreFileSystemUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { var waitAdminAction = false @@ -493,7 +496,7 @@ func resourceLustreFileSystemUpdate(ctx context.Context, d *schema.ResourceData, func resourceLustreFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) filesystem, err := FindFileSystemByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -553,7 +556,7 @@ func resourceLustreFileSystemRead(ctx context.Context, d *schema.ResourceData, m return sdkdiag.AppendErrorf(diags, "setting root_squash_configuration: %s", err) } - SetTagsOut(ctx, filesystem.Tags) + setTagsOut(ctx, filesystem.Tags) d.Set("vpc_id", filesystem.VpcId) d.Set("weekly_maintenance_start_time", lustreConfig.WeeklyMaintenanceStartTime) @@ -568,7 +571,7 @@ func resourceLustreFileSystemRead(ctx context.Context, d *schema.ResourceData, m func resourceLustreFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) request := &fsx.DeleteFileSystemInput{ FileSystemId: aws.String(d.Id()), diff --git a/internal/service/fsx/lustre_file_system_test.go b/internal/service/fsx/lustre_file_system_test.go index a6099a30e0c..6bc17c357b3 100644 --- a/internal/service/fsx/lustre_file_system_test.go +++ b/internal/service/fsx/lustre_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -940,7 +943,7 @@ func testAccCheckLustreFileSystemExists(ctx context.Context, resourceName string return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) filesystem, err := tffsx.FindFileSystemByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -959,7 +962,7 @@ func testAccCheckLustreFileSystemExists(ctx context.Context, resourceName string func testAccCheckLustreFileSystemDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_lustre_file_system" { diff --git a/internal/service/fsx/ontap_file_system.go b/internal/service/fsx/ontap_file_system.go index 0ce57df8551..07b51ec6448 100644 --- a/internal/service/fsx/ontap_file_system.go +++ b/internal/service/fsx/ontap_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -232,7 +235,7 @@ func ResourceOntapFileSystem() *schema.Resource { func resourceOntapFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateFileSystemInput{ ClientRequestToken: aws.String(id.UniqueId()), @@ -246,7 +249,7 @@ func resourceOntapFileSystemCreate(ctx context.Context, d *schema.ResourceData, ThroughputCapacity: aws.Int64(int64(d.Get("throughput_capacity").(int))), PreferredSubnetId: aws.String(d.Get("preferred_subnet_id").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_id"); ok { @@ -298,7 +301,7 @@ func resourceOntapFileSystemCreate(ctx context.Context, d *schema.ResourceData, func resourceOntapFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) filesystem, err := FindFileSystemByID(ctx, conn, d.Id()) @@ -353,14 +356,14 @@ func resourceOntapFileSystemRead(ctx context.Context, d *schema.ResourceData, me return sdkdiag.AppendErrorf(diags, "setting disk_iops_configuration: %s", err) } - SetTagsOut(ctx, filesystem.Tags) + setTagsOut(ctx, filesystem.Tags) return diags } func resourceOntapFileSystemUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateFileSystemInput{ @@ -433,7 +436,7 @@ func resourceOntapFileSystemUpdate(ctx context.Context, d *schema.ResourceData, func resourceOntapFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) log.Printf("[DEBUG] Deleting FSx ONTAP File System: %s", d.Id()) _, err := conn.DeleteFileSystemWithContext(ctx, &fsx.DeleteFileSystemInput{ diff --git a/internal/service/fsx/ontap_file_system_test.go b/internal/service/fsx/ontap_file_system_test.go index 85998e50577..133600a925d 100644 --- a/internal/service/fsx/ontap_file_system_test.go +++ b/internal/service/fsx/ontap_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -595,7 +598,7 @@ func testAccCheckOntapFileSystemExists(ctx context.Context, resourceName string, return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) filesystem, err := tffsx.FindFileSystemByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -614,7 +617,7 @@ func testAccCheckOntapFileSystemExists(ctx context.Context, resourceName string, func testAccCheckOntapFileSystemDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_ontap_file_system" { diff --git a/internal/service/fsx/ontap_storage_virtual_machine.go b/internal/service/fsx/ontap_storage_virtual_machine.go index 68270585356..b9f9e4e6bac 100644 --- a/internal/service/fsx/ontap_storage_virtual_machine.go +++ b/internal/service/fsx/ontap_storage_virtual_machine.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
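Each update function above is guarded by `d.HasChangesExcept("tags_all", "tags")`, so the service's update API is called only when something other than tags changed (tag-only changes go through the separate `UpdateTags` path). A stdlib approximation of that guard over flat string maps, names hypothetical:

```go
package main

import "fmt"

// hasChangesExcept reports whether prev and next differ in any key other
// than the excluded ones — an approximation of Terraform's
// d.HasChangesExcept("tags_all", "tags") guard shown above.
func hasChangesExcept(prev, next map[string]string, except ...string) bool {
	skip := make(map[string]bool, len(except))
	for _, k := range except {
		skip[k] = true
	}
	for k, v := range next {
		if !skip[k] && prev[k] != v {
			return true
		}
	}
	for k := range prev {
		if _, ok := next[k]; !ok && !skip[k] {
			return true
		}
	}
	return false
}

func main() {
	prev := map[string]string{"throughput_capacity": "128", "tags": "a"}
	next := map[string]string{"throughput_capacity": "128", "tags": "b"}
	fmt.Println(hasChangesExcept(prev, next, "tags", "tags_all")) // false: only tags changed
	next["throughput_capacity"] = "256"
	fmt.Println(hasChangesExcept(prev, next, "tags", "tags_all")) // true
}
```

Skipping the update call for tag-only diffs avoids needless API round trips and, for file systems, avoids triggering administrative actions that the real update path then has to wait on.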
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -236,12 +239,12 @@ func ResourceOntapStorageVirtualMachine() *schema.Resource { func resourceOntapStorageVirtualMachineCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateStorageVirtualMachineInput{ FileSystemId: aws.String(d.Get("file_system_id").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("active_directory_configuration"); ok { @@ -273,7 +276,7 @@ func resourceOntapStorageVirtualMachineCreate(ctx context.Context, d *schema.Res func resourceOntapStorageVirtualMachineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) storageVirtualMachine, err := FindStorageVirtualMachineByID(ctx, conn, d.Id()) @@ -309,7 +312,7 @@ func resourceOntapStorageVirtualMachineRead(ctx context.Context, d *schema.Resou func resourceOntapStorageVirtualMachineUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateStorageVirtualMachineInput{ @@ -341,7 +344,7 @@ func resourceOntapStorageVirtualMachineUpdate(ctx context.Context, d *schema.Res func resourceOntapStorageVirtualMachineDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) log.Printf("[DEBUG] Deleting FSx ONTAP Storage Virtual Machine: %s", d.Id()) _, err := conn.DeleteStorageVirtualMachineWithContext(ctx, 
&fsx.DeleteStorageVirtualMachineInput{ diff --git a/internal/service/fsx/ontap_storage_virtual_machine_migrate.go b/internal/service/fsx/ontap_storage_virtual_machine_migrate.go index d424d7b4b7d..f4ff7343237 100644 --- a/internal/service/fsx/ontap_storage_virtual_machine_migrate.go +++ b/internal/service/fsx/ontap_storage_virtual_machine_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( diff --git a/internal/service/fsx/ontap_storage_virtual_machine_migrate_test.go b/internal/service/fsx/ontap_storage_virtual_machine_migrate_test.go index cd8d61b072e..4e03b1f8518 100644 --- a/internal/service/fsx/ontap_storage_virtual_machine_migrate_test.go +++ b/internal/service/fsx/ontap_storage_virtual_machine_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( diff --git a/internal/service/fsx/ontap_storage_virtual_machine_test.go b/internal/service/fsx/ontap_storage_virtual_machine_test.go index 43364afa57c..1abcaa66c9d 100644 --- a/internal/service/fsx/ontap_storage_virtual_machine_test.go +++ b/internal/service/fsx/ontap_storage_virtual_machine_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -292,7 +295,7 @@ func testAccCheckOntapStorageVirtualMachineExists(ctx context.Context, resourceN return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) storageVirtualMachine, err := tffsx.FindStorageVirtualMachineByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -311,7 +314,7 @@ func testAccCheckOntapStorageVirtualMachineExists(ctx context.Context, resourceN func testAccCheckOntapStorageVirtualMachineDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storage_virtual_machine" { diff --git a/internal/service/fsx/ontap_volume.go b/internal/service/fsx/ontap_volume.go index 0787e601dd5..f5d81e2eba9 100644 --- a/internal/service/fsx/ontap_volume.go +++ b/internal/service/fsx/ontap_volume.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -28,8 +31,13 @@ func ResourceOntapVolume() *schema.Resource { ReadWithoutTimeout: resourceOntapVolumeRead, UpdateWithoutTimeout: resourceOntapVolumeUpdate, DeleteWithoutTimeout: resourceOntapVolumeDelete, + Importer: &schema.ResourceImporter{ - StateContext: schema.ImportStatePassthroughContext, + StateContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + d.Set("skip_final_backup", false) + + return []*schema.ResourceData{d}, nil + }, }, Timeouts: &schema.ResourceTimeout{ @@ -53,7 +61,7 @@ func ResourceOntapVolume() *schema.Resource { }, "junction_path": { Type: schema.TypeString, - Required: true, + Optional: true, ValidateFunc: validation.StringLenBetween(1, 255), }, "name": { @@ -63,13 +71,16 @@ func ResourceOntapVolume() *schema.Resource { ValidateFunc: validation.StringLenBetween(1, 203), }, "ontap_volume_type": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(fsx.InputOntapVolumeType_Values(), false), }, "security_style": { Type: schema.TypeString, Optional: true, - Default: "UNIX", + Computed: true, ValidateFunc: validation.StringInSlice(fsx.StorageVirtualMachineRootVolumeSecurityStyle_Values(), false), }, "size_in_megabytes": { @@ -77,13 +88,19 @@ func ResourceOntapVolume() *schema.Resource { Required: true, ValidateFunc: validation.IntBetween(0, 2147483647), }, + "skip_final_backup": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, "storage_efficiency_enabled": { Type: schema.TypeBool, - Required: true, + Optional: true, }, "storage_virtual_machine_id": { Type: schema.TypeString, Required: true, + ForceNew: true, ValidateFunc: validation.StringLenBetween(21, 21), }, "tiering_policy": { @@ -115,8 +132,9 @@ func ResourceOntapVolume() *schema.Resource { }, "volume_type": { Type: schema.TypeString, - 
Default: fsx.VolumeTypeOntap, Optional: true, + ForceNew: true, + Default: fsx.VolumeTypeOntap, ValidateFunc: validation.StringInSlice(fsx.VolumeType_Values(), false), }, }, @@ -126,24 +144,35 @@ func ResourceOntapVolume() *schema.Resource { func resourceOntapVolumeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) + name := d.Get("name").(string) input := &fsx.CreateVolumeInput{ - Name: aws.String(d.Get("name").(string)), - VolumeType: aws.String(d.Get("volume_type").(string)), + Name: aws.String(name), OntapConfiguration: &fsx.CreateOntapVolumeConfiguration{ - JunctionPath: aws.String(d.Get("junction_path").(string)), - SizeInMegabytes: aws.Int64(int64(d.Get("size_in_megabytes").(int))), - StorageEfficiencyEnabled: aws.Bool(d.Get("storage_efficiency_enabled").(bool)), - StorageVirtualMachineId: aws.String(d.Get("storage_virtual_machine_id").(string)), + SizeInMegabytes: aws.Int64(int64(d.Get("size_in_megabytes").(int))), + StorageVirtualMachineId: aws.String(d.Get("storage_virtual_machine_id").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), + VolumeType: aws.String(d.Get("volume_type").(string)), + } + + if v, ok := d.GetOk("junction_path"); ok { + input.OntapConfiguration.JunctionPath = aws.String(v.(string)) + } + + if v, ok := d.GetOk("ontap_volume_type"); ok { + input.OntapConfiguration.OntapVolumeType = aws.String(v.(string)) } if v, ok := d.GetOk("security_style"); ok { input.OntapConfiguration.SecurityStyle = aws.String(v.(string)) } + if v, ok := d.GetOkExists("storage_efficiency_enabled"); ok { + input.OntapConfiguration.StorageEfficiencyEnabled = aws.Bool(v.(bool)) + } + if v, ok := d.GetOk("tiering_policy"); ok { input.OntapConfiguration.TieringPolicy = expandOntapVolumeTieringPolicy(v.([]interface{})) } @@ -151,13 +180,13 @@ func resourceOntapVolumeCreate(ctx context.Context, d 
*schema.ResourceData, meta result, err := conn.CreateVolumeWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating FSx Volume: %s", err) + return sdkdiag.AppendErrorf(diags, "creating FSx ONTAP Volume (%s): %s", name, err) } d.SetId(aws.StringValue(result.Volume.VolumeId)) if _, err := waitVolumeCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for FSx Volume(%s) create: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "waiting for FSx ONTAP Volume (%s) create: %s", d.Id(), err) } return append(diags, resourceOntapVolumeRead(ctx, d, meta)...) @@ -165,7 +194,7 @@ func resourceOntapVolumeCreate(ctx context.Context, d *schema.ResourceData, meta func resourceOntapVolumeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) volume, err := FindVolumeByID(ctx, conn, d.Id()) @@ -181,7 +210,7 @@ func resourceOntapVolumeRead(ctx context.Context, d *schema.ResourceData, meta i ontapConfig := volume.OntapConfiguration if ontapConfig == nil { - return sdkdiag.AppendErrorf(diags, "describing FSx ONTAP Volume (%s): empty ONTAP configuration", d.Id()) + return sdkdiag.AppendErrorf(diags, "reading FSx ONTAP Volume (%s): empty ONTAP configuration", d.Id()) } d.Set("arn", volume.ResourceARN) @@ -193,25 +222,24 @@ func resourceOntapVolumeRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("size_in_megabytes", ontapConfig.SizeInMegabytes) d.Set("storage_efficiency_enabled", ontapConfig.StorageEfficiencyEnabled) d.Set("storage_virtual_machine_id", ontapConfig.StorageVirtualMachineId) - d.Set("uuid", ontapConfig.UUID) - d.Set("volume_type", volume.VolumeType) - if err := d.Set("tiering_policy", flattenOntapVolumeTieringPolicy(ontapConfig.TieringPolicy)); err != nil { return sdkdiag.AppendErrorf(diags, "setting 
tiering_policy: %s", err) } + d.Set("uuid", ontapConfig.UUID) + d.Set("volume_type", volume.VolumeType) return diags } func resourceOntapVolumeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateVolumeInput{ ClientRequestToken: aws.String(id.UniqueId()), - VolumeId: aws.String(d.Id()), OntapConfiguration: &fsx.UpdateOntapVolumeConfiguration{}, + VolumeId: aws.String(d.Id()), } if d.HasChange("junction_path") { @@ -250,10 +278,13 @@ func resourceOntapVolumeUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceOntapVolumeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) log.Printf("[DEBUG] Deleting FSx ONTAP Volume: %s", d.Id()) _, err := conn.DeleteVolumeWithContext(ctx, &fsx.DeleteVolumeInput{ + OntapConfiguration: &fsx.DeleteVolumeOntapConfiguration{ + SkipFinalBackup: aws.Bool(d.Get("skip_final_backup").(bool)), + }, VolumeId: aws.String(d.Id()), }) diff --git a/internal/service/fsx/ontap_volume_test.go b/internal/service/fsx/ontap_volume_test.go index 78b8a6eda63..417295c0a24 100644 --- a/internal/service/fsx/ontap_volume_test.go +++ b/internal/service/fsx/ontap_volume_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -38,8 +41,9 @@ func TestAccFSxOntapVolume_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "junction_path", fmt.Sprintf("/%[1]s", rName)), resource.TestCheckResourceAttr(resourceName, "ontap_volume_type", "RW"), resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestCheckResourceAttr(resourceName, "security_style", "UNIX"), + resource.TestCheckResourceAttr(resourceName, "security_style", ""), resource.TestCheckResourceAttr(resourceName, "size_in_megabytes", "1024"), + resource.TestCheckResourceAttr(resourceName, "skip_final_backup", "false"), resource.TestCheckResourceAttr(resourceName, "storage_efficiency_enabled", "true"), resource.TestCheckResourceAttrSet(resourceName, "storage_virtual_machine_id"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), @@ -48,9 +52,10 @@ func TestAccFSxOntapVolume_basic(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, }, }) @@ -101,9 +106,10 @@ func TestAccFSxOntapVolume_name(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_basic(rName2), @@ -140,9 +146,10 @@ func TestAccFSxOntapVolume_junctionPath(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_junctionPath(rName, jPath2), @@ -157,6 +164,36 @@ func TestAccFSxOntapVolume_junctionPath(t *testing.T) { }) } +func 
TestAccFSxOntapVolume_ontapVolumeType(t *testing.T) { + ctx := acctest.Context(t) + var volume fsx.Volume + resourceName := "aws_fsx_ontap_volume.test" + rName := fmt.Sprintf("tf_acc_test_%d", sdkacctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, fsx.EndpointsID) }, + ErrorCheck: acctest.ErrorCheck(t, fsx.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckOntapVolumeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccONTAPVolumeConfig_ontapVolumeTypeDP(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckOntapVolumeExists(ctx, resourceName, &volume), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "ontap_volume_type", "DP"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, + }, + }, + }) +} + func TestAccFSxOntapVolume_securityStyle(t *testing.T) { ctx := acctest.Context(t) var volume1, volume2, volume3 fsx.Volume @@ -178,9 +215,10 @@ func TestAccFSxOntapVolume_securityStyle(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_securityStyle(rName, "NTFS"), @@ -227,9 +265,10 @@ func TestAccFSxOntapVolume_size(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_size(rName, size2), @@ -265,9 +304,10 @@ func TestAccFSxOntapVolume_storageEfficiency(t *testing.T) { ), }, { - ResourceName: 
resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_storageEfficiency(rName, false), @@ -303,9 +343,10 @@ func TestAccFSxOntapVolume_tags(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), @@ -351,9 +392,10 @@ func TestAccFSxOntapVolume_tieringPolicy(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_final_backup"}, }, { Config: testAccONTAPVolumeConfig_tieringPolicy(rName, "SNAPSHOT_ONLY", 10), @@ -388,25 +430,22 @@ func TestAccFSxOntapVolume_tieringPolicy(t *testing.T) { }) } -func testAccCheckOntapVolumeExists(ctx context.Context, resourceName string, volume *fsx.Volume) resource.TestCheckFunc { +func testAccCheckOntapVolumeExists(ctx context.Context, n string, v *fsx.Volume) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) + + output, err := tffsx.FindVolumeByID(ctx, conn, rs.Primary.ID) - volume1, err := tffsx.FindVolumeByID(ctx, conn, rs.Primary.ID) if err != nil { return err } - if volume == nil { - return fmt.Errorf("FSx ONTAP Volume (%s) not found", rs.Primary.ID) - } - - *volume = *volume1 + *v = 
*output return nil } @@ -414,7 +453,7 @@ func testAccCheckOntapVolumeExists(ctx context.Context, resourceName string, vol func testAccCheckOntapVolumeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_ontap_volume" { @@ -422,6 +461,7 @@ func testAccCheckOntapVolumeDestroy(ctx context.Context) resource.TestCheckFunc } volume, err := tffsx.FindVolumeByID(ctx, conn, rs.Primary.ID) + if tfresource.NotFound(err) { continue } @@ -454,42 +494,14 @@ func testAccCheckOntapVolumeRecreated(i, j *fsx.Volume) resource.TestCheckFunc { } } -func testAccOntapVolumeBaseConfig(rName string) string { - return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` -resource "aws_vpc" "test" { - cidr_block = "10.0.0.0/16" - - tags = { - Name = %[1]q - } -} - -resource "aws_subnet" "test1" { - vpc_id = aws_vpc.test.id - cidr_block = "10.0.1.0/24" - availability_zone = data.aws_availability_zones.available.names[0] - - tags = { - Name = %[1]q - } -} - -resource "aws_subnet" "test2" { - vpc_id = aws_vpc.test.id - cidr_block = "10.0.2.0/24" - availability_zone = data.aws_availability_zones.available.names[1] - - tags = { - Name = %[1]q - } -} - +func testAccOntapVolumeConfig_base(rName string) string { + return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 2), fmt.Sprintf(` resource "aws_fsx_ontap_file_system" "test" { storage_capacity = 1024 - subnet_ids = [aws_subnet.test1.id, aws_subnet.test2.id] + subnet_ids = aws_subnet.test[*].id deployment_type = "MULTI_AZ_1" throughput_capacity = 512 - preferred_subnet_id = aws_subnet.test1.id + preferred_subnet_id = aws_subnet.test[0].id tags = { Name = %[1]q @@ -504,7 +516,7 @@ resource "aws_fsx_ontap_storage_virtual_machine" "test" { } func testAccONTAPVolumeConfig_basic(rName 
string) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" @@ -516,7 +528,7 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_junctionPath(rName string, junctionPath string) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = %[2]q @@ -527,8 +539,20 @@ resource "aws_fsx_ontap_volume" "test" { `, rName, junctionPath)) } +func testAccONTAPVolumeConfig_ontapVolumeTypeDP(rName string) string { + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` +resource "aws_fsx_ontap_volume" "test" { + name = %[1]q + ontap_volume_type = "DP" + size_in_megabytes = 1024 + skip_final_backup = true + storage_virtual_machine_id = aws_fsx_ontap_storage_virtual_machine.test.id +} +`, rName)) +} + func testAccONTAPVolumeConfig_securityStyle(rName string, securityStyle string) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" @@ -541,7 +565,7 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_size(rName string, size int) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" @@ -553,7 +577,7 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_storageEfficiency(rName string, storageEfficiencyEnabled bool) 
string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" @@ -565,13 +589,14 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_tieringPolicy(rName string, policy string, coolingPeriod int) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" size_in_megabytes = 1024 storage_efficiency_enabled = true storage_virtual_machine_id = aws_fsx_ontap_storage_virtual_machine.test.id + tiering_policy { name = %[2]q cooling_period = %[3]d @@ -581,13 +606,14 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_tieringPolicyNoCooling(rName string, policy string) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" size_in_megabytes = 1024 storage_efficiency_enabled = true storage_virtual_machine_id = aws_fsx_ontap_storage_virtual_machine.test.id + tiering_policy { name = %[2]q } @@ -596,7 +622,7 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_tags1(rName, tagKey1, tagValue1 string) string { - return acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" @@ -612,7 +638,7 @@ resource "aws_fsx_ontap_volume" "test" { } func testAccONTAPVolumeConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { - return 
acctest.ConfigCompose(testAccOntapVolumeBaseConfig(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccOntapVolumeConfig_base(rName), fmt.Sprintf(` resource "aws_fsx_ontap_volume" "test" { name = %[1]q junction_path = "/%[1]s" diff --git a/internal/service/fsx/openzfs_file_system.go b/internal/service/fsx/openzfs_file_system.go index 48640e26753..43fc09fdd70 100644 --- a/internal/service/fsx/openzfs_file_system.go +++ b/internal/service/fsx/openzfs_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -324,7 +327,7 @@ func validateDiskConfigurationIOPS(_ context.Context, d *schema.ResourceDiff, me func resourceOpenzfsFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateFileSystemInput{ ClientRequestToken: aws.String(id.UniqueId()), @@ -336,7 +339,7 @@ func resourceOpenzfsFileSystemCreate(ctx context.Context, d *schema.ResourceData DeploymentType: aws.String(d.Get("deployment_type").(string)), AutomaticBackupRetentionDays: aws.Int64(int64(d.Get("automatic_backup_retention_days").(int))), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } backupInput := &fsx.CreateFileSystemFromBackupInput{ @@ -347,7 +350,7 @@ func resourceOpenzfsFileSystemCreate(ctx context.Context, d *schema.ResourceData DeploymentType: aws.String(d.Get("deployment_type").(string)), AutomaticBackupRetentionDays: aws.Int64(int64(d.Get("automatic_backup_retention_days").(int))), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("disk_iops_configuration"); ok { @@ -426,7 +429,7 @@ func resourceOpenzfsFileSystemCreate(ctx context.Context, d *schema.ResourceData func resourceOpenzfsFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) filesystem, err := FindFileSystemByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -476,7 +479,7 @@ func resourceOpenzfsFileSystemRead(ctx context.Context, d *schema.ResourceData, return sdkdiag.AppendErrorf(diags, "setting subnet_ids: %s", err) } - SetTagsOut(ctx, filesystem.Tags) + setTagsOut(ctx, filesystem.Tags) if err := d.Set("disk_iops_configuration", flattenOpenzfsFileDiskIopsConfiguration(openzfsConfig.DiskIopsConfiguration)); err != nil { return sdkdiag.AppendErrorf(diags, "setting disk_iops_configuration: %s", err) @@ -504,7 +507,7 @@ func resourceOpenzfsFileSystemRead(ctx context.Context, d *schema.ResourceData, func resourceOpenzfsFileSystemUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateFileSystemInput{ @@ -585,7 +588,7 @@ func resourceOpenzfsFileSystemUpdate(ctx context.Context, d *schema.ResourceData func resourceOpenzfsFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) log.Printf("[DEBUG] Deleting FSx OpenZFS File System: %s", d.Id()) _, err := conn.DeleteFileSystemWithContext(ctx, &fsx.DeleteFileSystemInput{ diff --git a/internal/service/fsx/openzfs_file_system_test.go b/internal/service/fsx/openzfs_file_system_test.go index 85a046cb467..2be63b13870 100644 --- a/internal/service/fsx/openzfs_file_system_test.go +++ b/internal/service/fsx/openzfs_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -753,7 +756,7 @@ func testAccCheckOpenzfsFileSystemExists(ctx context.Context, resourceName strin return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) filesystem, err := tffsx.FindFileSystemByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -792,7 +795,7 @@ func testAccCheckOpenzfsFileSystemRecreated(i, j *fsx.FileSystem) resource.TestC func testAccCheckOpenzfsFileSystemDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_openzfs_file_system" { diff --git a/internal/service/fsx/openzfs_snapshot.go b/internal/service/fsx/openzfs_snapshot.go index 2df03da0f86..d300865ccd8 100644 --- a/internal/service/fsx/openzfs_snapshot.go +++ b/internal/service/fsx/openzfs_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -72,12 +75,12 @@ func ResourceOpenzfsSnapshot() *schema.Resource { func resourceOpenzfsSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateSnapshotInput{ ClientRequestToken: aws.String(id.UniqueId()), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VolumeId: aws.String(d.Get("volume_id").(string)), } @@ -98,7 +101,7 @@ func resourceOpenzfsSnapshotCreate(ctx context.Context, d *schema.ResourceData, func resourceOpenzfsSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) snapshot, err := FindSnapshotByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -124,7 +127,7 @@ func resourceOpenzfsSnapshotRead(ctx context.Context, d *schema.ResourceData, me func resourceOpenzfsSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateSnapshotInput{ @@ -152,7 +155,7 @@ func resourceOpenzfsSnapshotUpdate(ctx context.Context, d *schema.ResourceData, func resourceOpenzfsSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) request := &fsx.DeleteSnapshotInput{ SnapshotId: aws.String(d.Id()), diff --git a/internal/service/fsx/openzfs_snapshot_data_source.go b/internal/service/fsx/openzfs_snapshot_data_source.go index e8ca9b56c70..508c142e9cb 
100644 --- a/internal/service/fsx/openzfs_snapshot_data_source.go +++ b/internal/service/fsx/openzfs_snapshot_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -59,7 +62,7 @@ func DataSourceOpenzfsSnapshot() *schema.Resource { func dataSourceOpenzfsSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &fsx.DescribeSnapshotsInput{} @@ -110,7 +113,7 @@ func dataSourceOpenzfsSnapshotRead(ctx context.Context, d *schema.ResourceData, } //Snapshot tags do not get returned with describe call so need to make a separate list tags call - tags, tagserr := ListTags(ctx, conn, *snapshot.ResourceARN) + tags, tagserr := listTags(ctx, conn, *snapshot.ResourceARN) if tagserr != nil { return sdkdiag.AppendErrorf(diags, "reading Tags for FSx OpenZFS Snapshot (%s): %s", d.Id(), err) diff --git a/internal/service/fsx/openzfs_snapshot_data_source_test.go b/internal/service/fsx/openzfs_snapshot_data_source_test.go index d4cdb668e24..9d786a44826 100644 --- a/internal/service/fsx/openzfs_snapshot_data_source_test.go +++ b/internal/service/fsx/openzfs_snapshot_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( diff --git a/internal/service/fsx/openzfs_snapshot_test.go b/internal/service/fsx/openzfs_snapshot_test.go index 27f5102da2a..bd07687b191 100644 --- a/internal/service/fsx/openzfs_snapshot_test.go +++ b/internal/service/fsx/openzfs_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -230,7 +233,7 @@ func testAccCheckOpenzfsSnapshotExists(ctx context.Context, resourceName string, return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) output, err := tffsx.FindSnapshotByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -249,7 +252,7 @@ func testAccCheckOpenzfsSnapshotExists(ctx context.Context, resourceName string, func testAccCheckOpenzfsSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_openzfs_snapshot" { diff --git a/internal/service/fsx/openzfs_volume.go b/internal/service/fsx/openzfs_volume.go index b6684a94af9..7a407ac3192 100644 --- a/internal/service/fsx/openzfs_volume.go +++ b/internal/service/fsx/openzfs_volume.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -192,7 +195,7 @@ func ResourceOpenzfsVolume() *schema.Resource { func resourceOpenzfsVolumeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateVolumeInput{ ClientRequestToken: aws.String(id.UniqueId()), @@ -201,7 +204,7 @@ func resourceOpenzfsVolumeCreate(ctx context.Context, d *schema.ResourceData, me OpenZFSConfiguration: &fsx.CreateOpenZFSVolumeConfiguration{ ParentVolumeId: aws.String(d.Get("parent_volume_id").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("copy_tags_to_snapshots"); ok { @@ -267,7 +270,7 @@ func resourceOpenzfsVolumeCreate(ctx context.Context, d *schema.ResourceData, me func resourceOpenzfsVolumeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) volume, err := FindVolumeByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -318,7 +321,7 @@ func resourceOpenzfsVolumeRead(ctx context.Context, d *schema.ResourceData, meta func resourceOpenzfsVolumeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &fsx.UpdateVolumeInput{ @@ -375,7 +378,7 @@ func resourceOpenzfsVolumeUpdate(ctx context.Context, d *schema.ResourceData, me func resourceOpenzfsVolumeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) log.Printf("[DEBUG] Deleting FSx OpenZFS Volume: 
%s", d.Id()) _, err := conn.DeleteVolumeWithContext(ctx, &fsx.DeleteVolumeInput{ diff --git a/internal/service/fsx/openzfs_volume_test.go b/internal/service/fsx/openzfs_volume_test.go index fe33f5f2845..b102a52d9df 100644 --- a/internal/service/fsx/openzfs_volume_test.go +++ b/internal/service/fsx/openzfs_volume_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -40,7 +43,7 @@ func TestAccFSxOpenzfsVolume_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "nfs_exports.#", "1"), resource.TestCheckResourceAttr(resourceName, "nfs_exports.0.client_configurations.#", "1"), resource.TestCheckResourceAttr(resourceName, "nfs_exports.0.client_configurations.0.clients", "*"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "nfs_exports.0.client_configurations.0.options.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "nfs_exports.0.client_configurations.0.options.#", 0), resource.TestCheckResourceAttrSet(resourceName, "parent_volume_id"), resource.TestCheckResourceAttr(resourceName, "read_only", "false"), resource.TestCheckResourceAttr(resourceName, "record_size_kib", "128"), @@ -482,7 +485,7 @@ func testAccCheckOpenzfsVolumeExists(ctx context.Context, resourceName string, v return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) volume1, err := tffsx.FindVolumeByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -501,7 +504,7 @@ func testAccCheckOpenzfsVolumeExists(ctx context.Context, resourceName string, v func testAccCheckOpenzfsVolumeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != 
"aws_fsx_openzfs_volume" { diff --git a/internal/service/fsx/service_package_gen.go b/internal/service/fsx/service_package_gen.go index 1d6efec7247..38d735af0a8 100644 --- a/internal/service/fsx/service_package_gen.go +++ b/internal/service/fsx/service_package_gen.go @@ -5,6 +5,10 @@ package fsx import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + fsx_sdkv1 "github.com/aws/aws-sdk-go/service/fsx" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -129,4 +133,13 @@ func (p *servicePackage) ServicePackageName() string { return names.FSx } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*fsx_sdkv1.FSx, error) { + sess := config["session"].(*session_sdkv1.Session) + + return fsx_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/fsx/status.go b/internal/service/fsx/status.go index 44d1fda90da..e7643eeef65 100644 --- a/internal/service/fsx/status.go +++ b/internal/service/fsx/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( diff --git a/internal/service/fsx/sweep.go b/internal/service/fsx/sweep.go index bc422b0131a..c49d7bde753 100644 --- a/internal/service/fsx/sweep.go +++ b/internal/service/fsx/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/fsx" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -64,13 +66,13 @@ func init() { func sweepBackups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeBackupsInput{} @@ -95,7 +97,7 @@ func sweepBackups(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx Backups for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx Backups for %s: %w", region, err)) } @@ -109,13 +111,13 @@ func sweepBackups(region string) error { func sweepLustreFileSystems(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeFileSystemsInput{} @@ -144,7 +146,7 @@ func sweepLustreFileSystems(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx Lustre File Systems for %s: %w", region, err)) } - if err = 
sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx Lustre File Systems for %s: %w", region, err)) } @@ -158,13 +160,13 @@ func sweepLustreFileSystems(region string) error { func sweepOntapFileSystems(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeFileSystemsInput{} @@ -193,7 +195,7 @@ func sweepOntapFileSystems(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx ONTAP File Systems for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx ONTAP File Systems for %s: %w", region, err)) } @@ -207,13 +209,13 @@ func sweepOntapFileSystems(region string) error { func sweepOntapStorageVirtualMachine(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeStorageVirtualMachinesInput{} @@ -238,7 +240,7 @@ func sweepOntapStorageVirtualMachine(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx ONTAP Storage Virtual Machine for %s: %w", region, err)) } - if err = 
sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx ONTAP Storage Virtual Machine for %s: %w", region, err)) } @@ -252,13 +254,13 @@ func sweepOntapStorageVirtualMachine(region string) error { func sweepOntapVolume(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeVolumesInput{} @@ -290,7 +292,7 @@ func sweepOntapVolume(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx ONTAP Volume for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx ONTAP Volume for %s: %w", region, err)) } @@ -304,13 +306,13 @@ func sweepOntapVolume(region string) error { func sweepOpenZFSFileSystems(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeFileSystemsInput{} @@ -339,7 +341,7 @@ func sweepOpenZFSFileSystems(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx OpenZFS File Systems for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil 
{ + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx OpenZFS File Systems for %s: %w", region, err)) } @@ -353,13 +355,13 @@ func sweepOpenZFSFileSystems(region string) error { func sweepOpenZFSVolume(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeVolumesInput{} @@ -391,7 +393,7 @@ func sweepOpenZFSVolume(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx OpenZFS Volume for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx OpenZFS Volume for %s: %w", region, err)) } @@ -405,13 +407,13 @@ func sweepOpenZFSVolume(region string) error { func sweepWindowsFileSystems(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).FSxConn() + conn := client.FSxConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error input := &fsx.DescribeFileSystemsInput{} @@ -441,7 +443,7 @@ func sweepWindowsFileSystems(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing FSx Windows File Systems for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { 
errs = multierror.Append(errs, fmt.Errorf("error sweeping FSx Windows File Systems for %s: %w", region, err)) } diff --git a/internal/service/fsx/tags_gen.go b/internal/service/fsx/tags_gen.go index d366b63530c..4547bab7c87 100644 --- a/internal/service/fsx/tags_gen.go +++ b/internal/service/fsx/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists fsx service tags. +// listTags lists fsx service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn fsxiface.FSxAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn fsxiface.FSxAPI, identifier string) (tftags.KeyValueTags, error) { input := &fsx.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn fsxiface.FSxAPI, identifier string) (tft // ListTags lists fsx service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).FSxConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).FSxConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*fsx.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns fsx service tags from Context. +// getTagsIn returns fsx service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*fsx.Tag { +func getTagsIn(ctx context.Context) []*fsx.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*fsx.Tag { return nil } -// SetTagsOut sets fsx service tags in Context. -func SetTagsOut(ctx context.Context, tags []*fsx.Tag) { +// setTagsOut sets fsx service tags in Context. +func setTagsOut(ctx context.Context, tags []*fsx.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates fsx service tags. +// updateTags updates fsx service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn fsxiface.FSxAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn fsxiface.FSxAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn fsxiface.FSxAPI, identifier string, ol // UpdateTags updates fsx service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).FSxConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).FSxConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/fsx/wait.go b/internal/service/fsx/wait.go index a6f45301431..818accded92 100644 --- a/internal/service/fsx/wait.go +++ b/internal/service/fsx/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx import ( diff --git a/internal/service/fsx/windows_file_system.go b/internal/service/fsx/windows_file_system.go index 3384e7cce49..15cbdd48a2b 100644 --- a/internal/service/fsx/windows_file_system.go +++ b/internal/service/fsx/windows_file_system.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -276,14 +279,14 @@ func ResourceWindowsFileSystem() *schema.Resource { func resourceWindowsFileSystemCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.CreateFileSystemInput{ ClientRequestToken: aws.String(id.UniqueId()), FileSystemType: aws.String(fsx.FileSystemTypeWindows), StorageCapacity: aws.Int64(int64(d.Get("storage_capacity").(int))), SubnetIds: flex.ExpandStringList(d.Get("subnet_ids").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), WindowsConfiguration: &fsx.CreateFileSystemWindowsConfiguration{ AutomaticBackupRetentionDays: aws.Int64(int64(d.Get("automatic_backup_retention_days").(int))), CopyTagsToBackups: aws.Bool(d.Get("copy_tags_to_backups").(bool)), @@ -294,7 +297,7 @@ func resourceWindowsFileSystemCreate(ctx context.Context, d *schema.ResourceData backupInput := &fsx.CreateFileSystemFromBackupInput{ ClientRequestToken: aws.String(id.UniqueId()), SubnetIds: flex.ExpandStringList(d.Get("subnet_ids").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), WindowsConfiguration: &fsx.CreateFileSystemWindowsConfiguration{ AutomaticBackupRetentionDays: aws.Int64(int64(d.Get("automatic_backup_retention_days").(int))), CopyTagsToBackups: aws.Bool(d.Get("copy_tags_to_backups").(bool)), @@ -386,7 +389,7 @@ func resourceWindowsFileSystemCreate(ctx context.Context, d *schema.ResourceData func resourceWindowsFileSystemRead(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) filesystem, err := FindFileSystemByID(ctx, conn, d.Id()) @@ -435,14 +438,14 @@ func resourceWindowsFileSystemRead(ctx context.Context, d *schema.ResourceData, d.Set("vpc_id", filesystem.VpcId) d.Set("weekly_maintenance_start_time", filesystem.WindowsConfiguration.WeeklyMaintenanceStartTime) - SetTagsOut(ctx, filesystem.Tags) + setTagsOut(ctx, filesystem.Tags) return diags } func resourceWindowsFileSystemUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) if d.HasChange("aliases") { o, n := d.GetChange("aliases") @@ -528,7 +531,7 @@ func resourceWindowsFileSystemUpdate(ctx context.Context, d *schema.ResourceData func resourceWindowsFileSystemDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) input := &fsx.DeleteFileSystemInput{ ClientRequestToken: aws.String(id.UniqueId()), @@ -578,11 +581,11 @@ func updateAliases(ctx context.Context, conn *fsx.FSx, identifier string, oldSet _, err := conn.AssociateFileSystemAliasesWithContext(ctx, input) if err != nil { - return fmt.Errorf("error associating aliases to FSx file system (%s): %w", identifier, err) + return fmt.Errorf("associating aliases to FSx file system (%s): %w", identifier, err) } if _, err := waitAdministrativeActionCompleted(ctx, conn, identifier, fsx.AdministrativeActionTypeFileSystemAliasAssociation, timeout); err != nil { - return fmt.Errorf("error waiting for FSx Windows File System (%s) alias to be associated: %w", identifier, err) + return fmt.Errorf("waiting for FSx Windows File System (%s) alias to be associated: %w", 
identifier, err) } } } @@ -597,11 +600,11 @@ func updateAliases(ctx context.Context, conn *fsx.FSx, identifier string, oldSet _, err := conn.DisassociateFileSystemAliasesWithContext(ctx, input) if err != nil { - return fmt.Errorf("error disassociating aliases from FSx file system (%s): %w", identifier, err) + return fmt.Errorf("disassociating aliases from FSx file system (%s): %w", identifier, err) } if _, err := waitAdministrativeActionCompleted(ctx, conn, identifier, fsx.AdministrativeActionTypeFileSystemAliasDisassociation, timeout); err != nil { - return fmt.Errorf("error waiting for FSx Windows File System (%s) alias to be disassociated: %w", identifier, err) + return fmt.Errorf("waiting for FSx Windows File System (%s) alias to be disassociated: %w", identifier, err) } } } diff --git a/internal/service/fsx/windows_file_system_data_source.go b/internal/service/fsx/windows_file_system_data_source.go index b0a9193c2f1..d019cb030c7 100644 --- a/internal/service/fsx/windows_file_system_data_source.go +++ b/internal/service/fsx/windows_file_system_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx import ( @@ -149,7 +152,7 @@ func DataSourceWindowsFileSystem() *schema.Resource { func dataSourceWindowsFileSystemRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).FSxConn() + conn := meta.(*conns.AWSClient).FSxConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig diff --git a/internal/service/fsx/windows_file_system_data_source_test.go b/internal/service/fsx/windows_file_system_data_source_test.go index cab7eccca8e..647b5631314 100644 --- a/internal/service/fsx/windows_file_system_data_source_test.go +++ b/internal/service/fsx/windows_file_system_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( diff --git a/internal/service/fsx/windows_file_system_test.go b/internal/service/fsx/windows_file_system_test.go index 3b4c603ffb0..5640c97e116 100644 --- a/internal/service/fsx/windows_file_system_test.go +++ b/internal/service/fsx/windows_file_system_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package fsx_test import ( @@ -846,7 +849,7 @@ func testAccCheckWindowsFileSystemExists(ctx context.Context, n string, v *fsx.F return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) output, err := tffsx.FindFileSystemByID(ctx, conn, rs.Primary.ID) @@ -862,7 +865,7 @@ func testAccCheckWindowsFileSystemExists(ctx context.Context, n string, v *fsx.F func testAccCheckWindowsFileSystemDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).FSxConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_fsx_windows_file_system" { diff --git a/internal/service/gamelift/alias.go b/internal/service/gamelift/alias.go index 5dbe81b9193..8b51f8015f6 100644 --- a/internal/service/gamelift/alias.go +++ b/internal/service/gamelift/alias.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( @@ -80,13 +83,13 @@ func ResourceAlias() *schema.Resource { func resourceAliasCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) rs := expandRoutingStrategy(d.Get("routing_strategy").([]interface{})) input := gamelift.CreateAliasInput{ Name: aws.String(d.Get("name").(string)), RoutingStrategy: rs, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { input.Description = aws.String(v.(string)) @@ -103,7 +106,7 @@ func resourceAliasCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) out, err := conn.DescribeAliasWithContext(ctx, &gamelift.DescribeAliasInput{ AliasId: aws.String(d.Id()), @@ -129,7 +132,7 @@ func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAliasUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Updating GameLift Alias: %s", d.Id()) _, err := conn.UpdateAliasWithContext(ctx, &gamelift.UpdateAliasInput{ @@ -147,7 +150,7 @@ func resourceAliasUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Deleting GameLift Alias: %s", d.Id()) if _, err := 
conn.DeleteAliasWithContext(ctx, &gamelift.DeleteAliasInput{ diff --git a/internal/service/gamelift/alias_test.go b/internal/service/gamelift/alias_test.go index 374f2b4bd21..123b4d6509a 100644 --- a/internal/service/gamelift/alias_test.go +++ b/internal/service/gamelift/alias_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( @@ -229,7 +232,7 @@ func TestAccGameLiftAlias_disappears(t *testing.T) { func testAccCheckAliasDisappears(ctx context.Context, res *gamelift.Alias) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) input := &gamelift.DeleteAliasInput{AliasId: res.AliasId} @@ -250,7 +253,7 @@ func testAccCheckAliasExists(ctx context.Context, n string, res *gamelift.Alias) return fmt.Errorf("No GameLift Alias ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) out, err := conn.DescribeAliasWithContext(ctx, &gamelift.DescribeAliasInput{ AliasId: aws.String(rs.Primary.ID), @@ -272,7 +275,7 @@ func testAccCheckAliasExists(ctx context.Context, n string, res *gamelift.Alias) func testAccCheckAliasDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_gamelift_alias" { diff --git a/internal/service/gamelift/build.go b/internal/service/gamelift/build.go index 21e23ce74e1..42c514302c6 100644 --- a/internal/service/gamelift/build.go +++ b/internal/service/gamelift/build.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( @@ -93,13 +96,13 @@ func ResourceBuild() *schema.Resource { func resourceBuildCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) input := gamelift.CreateBuildInput{ Name: aws.String(d.Get("name").(string)), OperatingSystem: aws.String(d.Get("operating_system").(string)), StorageLocation: expandStorageLocation(d.Get("storage_location").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("version"); ok { @@ -138,7 +141,7 @@ func resourceBuildCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceBuildRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) build, err := FindBuildByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -163,7 +166,7 @@ func resourceBuildRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceBuildUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) if d.HasChangesExcept("tags", "tags_all") { log.Printf("[INFO] Updating GameLift Build: %s", d.Id()) @@ -186,7 +189,7 @@ func resourceBuildUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceBuildDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Deleting GameLift Build: %s", d.Id()) _, err := conn.DeleteBuildWithContext(ctx, 
&gamelift.DeleteBuildInput{ diff --git a/internal/service/gamelift/build_test.go b/internal/service/gamelift/build_test.go index a4380bff7e6..9a36274b7ab 100644 --- a/internal/service/gamelift/build_test.go +++ b/internal/service/gamelift/build_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( @@ -214,7 +217,7 @@ func testAccCheckBuildExists(ctx context.Context, n string, res *gamelift.Build) return fmt.Errorf("No GameLift Build ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) build, err := tfgamelift.FindBuildByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -233,7 +236,7 @@ func testAccCheckBuildExists(ctx context.Context, n string, res *gamelift.Build) func testAccCheckBuildDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_gamelift_build" { @@ -256,7 +259,7 @@ func testAccCheckBuildDestroy(ctx context.Context) resource.TestCheckFunc { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) input := &gamelift.ListBuildsInput{} diff --git a/internal/service/gamelift/consts.go b/internal/service/gamelift/consts.go index 0b04d493663..e82ca2aa6e6 100644 --- a/internal/service/gamelift/consts.go +++ b/internal/service/gamelift/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( diff --git a/internal/service/gamelift/find.go b/internal/service/gamelift/find.go index 68875c66200..0dd5ca38959 100644 --- a/internal/service/gamelift/find.go +++ b/internal/service/gamelift/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( diff --git a/internal/service/gamelift/fleet.go b/internal/service/gamelift/fleet.go index b28ab453c3c..aaef4e4584e 100644 --- a/internal/service/gamelift/fleet.go +++ b/internal/service/gamelift/fleet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( @@ -242,12 +245,12 @@ func ResourceFleet() *schema.Resource { func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) input := &gamelift.CreateFleetInput{ EC2InstanceType: aws.String(d.Get("ec2_instance_type").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("build_id"); ok { @@ -325,7 +328,7 @@ func resourceFleetCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Describing GameLift Fleet: %s", d.Id()) fleet, err := FindFleetByID(ctx, conn, d.Id()) @@ -381,7 +384,7 @@ func resourceFleetRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := 
meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Updating GameLift Fleet: %s", d.Id()) @@ -428,7 +431,7 @@ func resourceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceFleetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Deleting GameLift Fleet: %s", d.Id()) // It can take ~ 1 hr as GameLift will keep retrying on errors like diff --git a/internal/service/gamelift/fleet_test.go b/internal/service/gamelift/fleet_test.go index d8a2795ded6..96666f1fc25 100644 --- a/internal/service/gamelift/fleet_test.go +++ b/internal/service/gamelift/fleet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( @@ -645,7 +648,7 @@ func testAccCheckFleetExists(ctx context.Context, n string, res *gamelift.FleetA return fmt.Errorf("No GameLift Fleet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) fleet, err := tfgamelift.FindFleetByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -664,7 +667,7 @@ func testAccCheckFleetExists(ctx context.Context, n string, res *gamelift.FleetA func testAccCheckFleetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_gamelift_fleet" { diff --git a/internal/service/gamelift/game_server_group.go b/internal/service/gamelift/game_server_group.go index 2ca757c4608..fc0ae888022 100644 --- a/internal/service/gamelift/game_server_group.go +++ b/internal/service/gamelift/game_server_group.go @@ -1,6 +1,9 @@ +// Copyright 
(c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "log" @@ -191,7 +194,7 @@ func ResourceGameServerGroup() *schema.Resource { func resourceGameServerGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) input := &gamelift.CreateGameServerGroupInput{ GameServerGroupName: aws.String(d.Get("game_server_group_name").(string)), @@ -200,7 +203,7 @@ func resourceGameServerGroupCreate(ctx context.Context, d *schema.ResourceData, MaxSize: aws.Int64(int64(d.Get("max_size").(int))), MinSize: aws.Int64(int64(d.Get("min_size").(int))), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("auto_scaling_policy"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -255,8 +258,8 @@ func resourceGameServerGroupCreate(ctx context.Context, d *schema.ResourceData, func resourceGameServerGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() - autoscalingConn := meta.(*conns.AWSClient).AutoScalingConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) + autoscalingConn := meta.(*conns.AWSClient).AutoScalingConn(ctx) gameServerGroupName := d.Id() @@ -323,7 +326,7 @@ func resourceGameServerGroupRead(ctx context.Context, d *schema.ResourceData, me func resourceGameServerGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Updating GameLift Game Server Group: %s", d.Id()) @@ -353,7 
+356,7 @@ func resourceGameServerGroupUpdate(ctx context.Context, d *schema.ResourceData, func resourceGameServerGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Deleting GameLift Game Server Group: %s", d.Id()) input := &gamelift.DeleteGameServerGroupInput{ diff --git a/internal/service/gamelift/game_server_group_test.go b/internal/service/gamelift/game_server_group_test.go index 2e737a945fe..8ce96b86bbf 100644 --- a/internal/service/gamelift/game_server_group_test.go +++ b/internal/service/gamelift/game_server_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( @@ -581,7 +584,7 @@ func TestAccGameLiftGameServerGroup_vpcSubnets(t *testing.T) { func testAccCheckGameServerGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_gamelift_game_server_group" { @@ -622,7 +625,7 @@ func testAccCheckGameServerGroupExists(ctx context.Context, resourceName string) return fmt.Errorf("resource %s has not set its id", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) input := gamelift.DescribeGameServerGroupInput{ GameServerGroupName: aws.String(rs.Primary.ID), diff --git a/internal/service/gamelift/game_session_queue.go b/internal/service/gamelift/game_session_queue.go index 398feada9fd..50faea75726 100644 --- a/internal/service/gamelift/game_session_queue.go +++ b/internal/service/gamelift/game_session_queue.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( @@ -92,7 +95,7 @@ func ResourceGameSessionQueue() *schema.Resource { func resourceGameSessionQueueCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) name := d.Get("name").(string) input := &gamelift.CreateGameSessionQueueInput{ @@ -100,7 +103,7 @@ func resourceGameSessionQueueCreate(ctx context.Context, d *schema.ResourceData, Destinations: expandGameSessionQueueDestinations(d.Get("destinations").([]interface{})), PlayerLatencyPolicies: expandGameSessionPlayerLatencyPolicies(d.Get("player_latency_policy").([]interface{})), TimeoutInSeconds: aws.Int64(int64(d.Get("timeout_in_seconds").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("custom_event_data"); ok { @@ -124,7 +127,7 @@ func resourceGameSessionQueueCreate(ctx context.Context, d *schema.ResourceData, func resourceGameSessionQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) sessionQueue, err := FindGameSessionQueueByName(ctx, conn, d.Id()) @@ -156,7 +159,7 @@ func resourceGameSessionQueueRead(ctx context.Context, d *schema.ResourceData, m func resourceGameSessionQueueUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &gamelift.UpdateGameSessionQueueInput{ @@ -186,7 +189,7 @@ func resourceGameSessionQueueUpdate(ctx context.Context, d *schema.ResourceData, func resourceGameSessionQueueDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { 
var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Deleting GameLift Session Queue: %s", d.Id()) _, err := conn.DeleteGameSessionQueueWithContext(ctx, &gamelift.DeleteGameSessionQueueInput{ diff --git a/internal/service/gamelift/game_session_queue_test.go b/internal/service/gamelift/game_session_queue_test.go index d06885667c2..8d84793f05a 100644 --- a/internal/service/gamelift/game_session_queue_test.go +++ b/internal/service/gamelift/game_session_queue_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( @@ -214,7 +217,7 @@ func testAccCheckGameSessionQueueExists(ctx context.Context, n string, v *gameli return fmt.Errorf("No GameLift Game Session Queue ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) output, err := tfgamelift.FindGameSessionQueueByName(ctx, conn, rs.Primary.ID) @@ -230,7 +233,7 @@ func testAccCheckGameSessionQueueExists(ctx context.Context, n string, v *gameli func testAccCheckGameSessionQueueDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_gamelift_game_session_queue" { diff --git a/internal/service/gamelift/gamelift_test.go b/internal/service/gamelift/gamelift_test.go index 1fb42a18168..46ba26d21c1 100644 --- a/internal/service/gamelift/gamelift_test.go +++ b/internal/service/gamelift/gamelift_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( diff --git a/internal/service/gamelift/generate.go b/internal/service/gamelift/generate.go index a7782bdd0ef..86e19c258e9 100644 --- a/internal/service/gamelift/generate.go +++ b/internal/service/gamelift/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package gamelift diff --git a/internal/service/gamelift/script.go b/internal/service/gamelift/script.go index aa7d6d2d868..93c9ea9d0d4 100644 --- a/internal/service/gamelift/script.go +++ b/internal/service/gamelift/script.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( @@ -94,11 +97,11 @@ func ResourceScript() *schema.Resource { func resourceScriptCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) input := gamelift.CreateScriptInput{ Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("storage_location"); ok && len(v.([]interface{})) > 0 { @@ -148,7 +151,7 @@ func resourceScriptCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceScriptRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Reading GameLift Script: %s", d.Id()) script, err := FindScriptByID(ctx, conn, d.Id()) @@ -177,7 +180,7 @@ func 
resourceScriptRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceScriptUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) if d.HasChangesExcept("tags", "tags_all") { log.Printf("[INFO] Updating GameLift Script: %s", d.Id()) @@ -222,7 +225,7 @@ func resourceScriptUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceScriptDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GameLiftConn() + conn := meta.(*conns.AWSClient).GameLiftConn(ctx) log.Printf("[INFO] Deleting GameLift Script: %s", d.Id()) _, err := conn.DeleteScriptWithContext(ctx, &gamelift.DeleteScriptInput{ diff --git a/internal/service/gamelift/script_test.go b/internal/service/gamelift/script_test.go index 7e92fcb07eb..a52f55baf7a 100644 --- a/internal/service/gamelift/script_test.go +++ b/internal/service/gamelift/script_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package gamelift_test import ( @@ -161,7 +164,7 @@ func testAccCheckScriptExists(ctx context.Context, n string, res *gamelift.Scrip return fmt.Errorf("No GameLift Script ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) script, err := tfgamelift.FindScriptByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -180,7 +183,7 @@ func testAccCheckScriptExists(ctx context.Context, n string, res *gamelift.Scrip func testAccCheckScriptDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GameLiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_gamelift_script" { diff --git a/internal/service/gamelift/service_package_gen.go b/internal/service/gamelift/service_package_gen.go index 48322b49bae..dee7fa2a2c9 100644 --- a/internal/service/gamelift/service_package_gen.go +++ b/internal/service/gamelift/service_package_gen.go @@ -5,6 +5,10 @@ package gamelift import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + gamelift_sdkv1 "github.com/aws/aws-sdk-go/service/gamelift" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -80,4 +84,13 @@ func (p *servicePackage) ServicePackageName() string { return names.GameLift } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*gamelift_sdkv1.GameLift, error) { + sess := config["session"].(*session_sdkv1.Session) + + return gamelift_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/gamelift/status.go b/internal/service/gamelift/status.go index f0c0518feb0..d320fa516e1 100644 --- a/internal/service/gamelift/status.go +++ b/internal/service/gamelift/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( diff --git a/internal/service/gamelift/sweep.go b/internal/service/gamelift/sweep.go index 5a6a3b45fc9..4544987c94b 100644 --- a/internal/service/gamelift/sweep.go +++ b/internal/service/gamelift/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/gamelift" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -56,11 +58,11 @@ func init() { func sweepAliases(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).GameLiftConn() + conn := client.GameLiftConn(ctx) err = listAliases(ctx, &gamelift.ListAliasesInput{}, conn, func(resp *gamelift.ListAliasesOutput) error { if len(resp.Aliases) == 0 { @@ -95,11 +97,11 @@ func sweepAliases(region string) error { func sweepBuilds(region string) error { ctx := sweep.Context(region) - client, err := 
sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).GameLiftConn() + conn := client.GameLiftConn(ctx) resp, err := conn.ListBuildsWithContext(ctx, &gamelift.ListBuildsInput{}) if err != nil { @@ -133,11 +135,11 @@ func sweepBuilds(region string) error { func sweepScripts(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).GameLiftConn() + conn := client.GameLiftConn(ctx) resp, err := conn.ListScriptsWithContext(ctx, &gamelift.ListScriptsInput{}) if err != nil { @@ -171,11 +173,11 @@ func sweepScripts(region string) error { func sweepFleets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).GameLiftConn() + conn := client.GameLiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -212,7 +214,7 @@ func sweepFleets(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing GameLift Fleet for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping GameLift Fleet for %s: %w", region, err)) } @@ -226,11 +228,11 @@ func sweepFleets(region string) error { func sweepGameServerGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return 
fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).GameLiftConn() + conn := client.GameLiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -267,7 +269,7 @@ func sweepGameServerGroups(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing GameLift Game Server Group for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping GameLift Game Server Group for %s: %w", region, err)) } @@ -281,11 +283,11 @@ func sweepGameServerGroups(region string) error { func sweepGameSessionQueue(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).GameLiftConn() + conn := client.GameLiftConn(ctx) out, err := conn.DescribeGameSessionQueuesWithContext(ctx, &gamelift.DescribeGameSessionQueuesInput{}) diff --git a/internal/service/gamelift/tags_gen.go b/internal/service/gamelift/tags_gen.go index 447d4aeac85..2f00ebe11db 100644 --- a/internal/service/gamelift/tags_gen.go +++ b/internal/service/gamelift/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists gamelift service tags. +// listTags lists gamelift service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn gameliftiface.GameLiftAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn gameliftiface.GameLiftAPI, identifier string) (tftags.KeyValueTags, error) { input := &gamelift.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn gameliftiface.GameLiftAPI, identifier st // ListTags lists gamelift service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).GameLiftConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).GameLiftConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*gamelift.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns gamelift service tags from Context. +// getTagsIn returns gamelift service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*gamelift.Tag { +func getTagsIn(ctx context.Context) []*gamelift.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*gamelift.Tag { return nil } -// SetTagsOut sets gamelift service tags in Context. -func SetTagsOut(ctx context.Context, tags []*gamelift.Tag) { +// setTagsOut sets gamelift service tags in Context. +func setTagsOut(ctx context.Context, tags []*gamelift.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates gamelift service tags. +// updateTags updates gamelift service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn gameliftiface.GameLiftAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn gameliftiface.GameLiftAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn gameliftiface.GameLiftAPI, identifier // UpdateTags updates gamelift service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).GameLiftConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).GameLiftConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/gamelift/wait.go b/internal/service/gamelift/wait.go index 5f7edf29d34..7fd9bc74163 100644 --- a/internal/service/gamelift/wait.go +++ b/internal/service/gamelift/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package gamelift import ( @@ -185,7 +188,7 @@ func waitGameServerGroupTerminated(ctx context.Context, conn *gamelift.GameLift, } if err != nil { - return fmt.Errorf("error deleting GameLift Game Server Group (%s): %w", name, err) + return fmt.Errorf("deleting GameLift Game Server Group (%s): %w", name, err) } return nil diff --git a/internal/service/glacier/exports_test.go b/internal/service/glacier/exports_test.go new file mode 100644 index 00000000000..3696926484c --- /dev/null +++ b/internal/service/glacier/exports_test.go @@ -0,0 +1,13 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package glacier + +// Exports for use in tests only. 
+var ( + ResourceVault = resourceVault + ResourceVaultLock = resourceVaultLock + + FindVaultByName = findVaultByName + FindVaultLockByName = findVaultLockByName +) diff --git a/internal/service/glacier/generate.go b/internal/service/glacier/generate.go index fd6511717ab..42c5002c468 100644 --- a/internal/service/glacier/generate.go +++ b/internal/service/glacier/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsForVault -ListTagsInIDElem=VaultName -ServiceTagsMap -TagOp=AddTagsToVault -TagInIDElem=VaultName -UntagOp=RemoveTagsFromVault -UpdateTags -CreateTags +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsOp=ListTagsForVault -ListTagsInIDElem=VaultName -ServiceTagsMap -KVTValues -TagOp=AddTagsToVault -TagInIDElem=VaultName -UntagOp=RemoveTagsFromVault -UpdateTags -CreateTags -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package glacier diff --git a/internal/service/glacier/service_package_gen.go b/internal/service/glacier/service_package_gen.go index 3a453c7ce9a..b14b0b60b79 100644 --- a/internal/service/glacier/service_package_gen.go +++ b/internal/service/glacier/service_package_gen.go @@ -5,6 +5,9 @@ package glacier import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + glacier_sdkv2 "github.com/aws/aws-sdk-go-v2/service/glacier" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -26,7 +29,7 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ { - Factory: ResourceVault, + Factory: resourceVault, TypeName: "aws_glacier_vault", Name: "Vault", Tags: &types.ServicePackageResourceTags{ @@ -34,7 +37,7 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka }, }, { - Factory: ResourceVaultLock, + Factory: resourceVaultLock, TypeName: "aws_glacier_vault_lock", }, } @@ -44,4 +47,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Glacier } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
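The generated `NewClient` passes an options function to `NewFromConfig` and overrides the endpoint only when one was configured. That functional-options shape can be sketched without the SDK; `clientOptions`, `newFromConfig`, and `withEndpoint` below are illustrative stand-ins, not SDK names:

```go
package main

import "fmt"

// clientOptions mimics the options struct a v2 SDK client constructor
// lets callers mutate via functional options.
type clientOptions struct{ endpoint string }

// newFromConfig builds a region default, then applies any overrides.
func newFromConfig(region string, optFns ...func(*clientOptions)) clientOptions {
	o := clientOptions{endpoint: "https://glacier." + region + ".amazonaws.com"}
	for _, fn := range optFns {
		fn(&o)
	}
	return o
}

// withEndpoint overrides the endpoint only when non-empty, matching the
// `if endpoint := ...; endpoint != ""` guard in the generated NewClient.
func withEndpoint(endpoint string) func(*clientOptions) {
	return func(o *clientOptions) {
		if endpoint != "" {
			o.endpoint = endpoint
		}
	}
}

func main() {
	fmt.Println(newFromConfig("us-east-1").endpoint)
	fmt.Println(newFromConfig("us-east-1", withEndpoint("http://localhost:4566")).endpoint)
	fmt.Println(newFromConfig("us-east-1", withEndpoint("")).endpoint)
}
```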
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*glacier_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return glacier_sdkv2.NewFromConfig(cfg, func(o *glacier_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = glacier_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/glacier/sweep.go b/internal/service/glacier/sweep.go index 640d605ec0b..7782e567393 100644 --- a/internal/service/glacier/sweep.go +++ b/internal/service/glacier/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -7,10 +10,9 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/glacier" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/glacier" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,40 +25,37 @@ func init() { func sweepVaults(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } input := &glacier.ListVaultsInput{} - conn := client.(*conns.AWSClient).GlacierConn() + conn := client.GlacierClient(ctx) sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListVaultsPagesWithContext(ctx, input, func(page *glacier.ListVaultsOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := glacier.NewListVaultsPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] 
Skipping Glacier Vault sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing Glacier Vaults (%s): %w", region, err) } for _, v := range page.VaultList { - r := ResourceVault() + r := resourceVault() d := r.Data(nil) - d.SetId(aws.StringValue(v.VaultName)) + d.SetId(aws.ToString(v.VaultName)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Glacier Vault sweep for %s: %s", region, err) - return nil - } - - if err != nil { - return fmt.Errorf("error listing Glacier Vaults (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Glacier Vaults (%s): %w", region, err) diff --git a/internal/service/glacier/tags_gen.go b/internal/service/glacier/tags_gen.go index eb4e4de312e..34e55a0c9e5 100644 --- a/internal/service/glacier/tags_gen.go +++ b/internal/service/glacier/tags_gen.go @@ -5,24 +5,23 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/aws/aws-sdk-go/service/glacier/glacieriface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/glacier" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists glacier service tags. +// listTags lists glacier service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
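The sweeper rewrite above replaces the SDK v1 `ListVaultsPagesWithContext` callback with the v2 `HasMorePages`/`NextPage` loop, so each page's error is handled inline instead of after the whole walk. A dependency-free sketch of that loop shape (the `page` and `paginator` types are stand-ins for the SDK paginator):

```go
package main

import "fmt"

// page stands in for one ListVaultsOutput.
type page struct{ vaults []string }

// paginator mirrors the SDK v2 HasMorePages/NextPage shape.
type paginator struct {
	pages []page
	pos   int
}

func (p *paginator) HasMorePages() bool { return p.pos < len(p.pages) }

func (p *paginator) NextPage() (page, error) {
	pg := p.pages[p.pos]
	p.pos++
	return pg, nil
}

// collectVaults walks all pages, returning as soon as a page errors —
// the per-page error handling the callback style made awkward.
func collectVaults(p *paginator) ([]string, error) {
	var names []string
	for p.HasMorePages() {
		pg, err := p.NextPage()
		if err != nil {
			return nil, err
		}
		names = append(names, pg.vaults...)
	}
	return names, nil
}

func main() {
	p := &paginator{pages: []page{
		{vaults: []string{"vault-a", "vault-b"}},
		{vaults: []string{"vault-c"}},
	}}
	names, _ := collectVaults(p)
	fmt.Println(names)
}
```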
-func ListTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *glacier.Client, identifier string) (tftags.KeyValueTags, error) { input := &glacier.ListTagsForVaultInput{ VaultName: aws.String(identifier), } - output, err := conn.ListTagsForVaultWithContext(ctx, input) + output, err := conn.ListTagsForVault(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +33,7 @@ func ListTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier stri // ListTags lists glacier service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).GlacierConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).GlacierClient(ctx), identifier) if err != nil { return err @@ -47,21 +46,21 @@ func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier stri return nil } -// map[string]*string handling +// map[string]string handling // Tags returns glacier service tags. -func Tags(tags tftags.KeyValueTags) map[string]*string { - return aws.StringMap(tags.Map()) +func Tags(tags tftags.KeyValueTags) map[string]string { + return tags.Map() } -// KeyValueTags creates KeyValueTags from glacier service tags. -func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { +// KeyValueTags creates tftags.KeyValueTags from glacier service tags. +func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns glacier service tags from Context. +// getTagsIn returns glacier service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,26 +70,26 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets glacier service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets glacier service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } // createTags creates glacier service tags for new resources. -func createTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier string, tags map[string]*string) error { +func createTags(ctx context.Context, conn *glacier.Client, identifier string, tags map[string]string) error { if len(tags) == 0 { return nil } - return UpdateTags(ctx, conn, identifier, nil, tags) + return updateTags(ctx, conn, identifier, nil, tags) } -// UpdateTags updates glacier service tags. +// updateTags updates glacier service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *glacier.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -99,10 +98,10 @@ func UpdateTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier st if len(removedTags) > 0 { input := &glacier.RemoveTagsFromVaultInput{ VaultName: aws.String(identifier), - TagKeys: aws.StringSlice(removedTags.Keys()), + TagKeys: removedTags.Keys(), } - _, err := conn.RemoveTagsFromVaultWithContext(ctx, input) + _, err := conn.RemoveTagsFromVault(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -117,7 +116,7 @@ func UpdateTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier st Tags: Tags(updatedTags), } - _, err := conn.AddTagsToVaultWithContext(ctx, input) + _, err := conn.AddTagsToVault(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -130,5 +129,5 @@ func UpdateTags(ctx context.Context, conn glacieriface.GlacierAPI, identifier st // UpdateTags updates glacier service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).GlacierConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).GlacierClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/glacier/vault.go b/internal/service/glacier/vault.go index ba79bdc3772..b4a97d68867 100644 --- a/internal/service/glacier/vault.go +++ b/internal/service/glacier/vault.go @@ -1,30 +1,35 @@ +// Copyright (c) HashiCorp, Inc. 
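With SDK v2, tags become plain `map[string]string` values instead of `map[string]*string`, and `updateTags` splits the old/new maps into keys to remove and key/values to (re)apply before calling `RemoveTagsFromVault` and `AddTagsToVault`. A self-contained sketch of that split (`diffTags` is a hypothetical helper, not the generated code):

```go
package main

import (
	"fmt"
	"sort"
)

// diffTags computes which keys to untag and which key/values to (re)tag:
// keys only in oldTags are removed; keys new or changed in newTags are set.
func diffTags(oldTags, newTags map[string]string) (removed []string, updated map[string]string) {
	updated = map[string]string{}
	for k := range oldTags {
		if _, ok := newTags[k]; !ok {
			removed = append(removed, k)
		}
	}
	for k, v := range newTags {
		if ov, ok := oldTags[k]; !ok || ov != v {
			updated[k] = v
		}
	}
	sort.Strings(removed) // deterministic order for display
	return removed, updated
}

func main() {
	removed, updated := diffTags(
		map[string]string{"Name": "vault", "Env": "dev"},
		map[string]string{"Name": "vault", "Team": "storage"},
	)
	fmt.Println(removed, updated)
}
```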
+// SPDX-License-Identifier: MPL-2.0 + package glacier import ( "context" - "errors" "fmt" "log" "regexp" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/glacier" + "github.com/aws/aws-sdk-go-v2/service/glacier/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) // @SDKResource("aws_glacier_vault", name="Vault") // @Tags(identifierAttribute="id") -func ResourceVault() *schema.Resource { +func resourceVault() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceVaultCreate, ReadWithoutTimeout: resourceVaultRead, @@ -36,27 +41,6 @@ func ResourceVault() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 255), - validation.StringMatch(regexp.MustCompile(`^[.0-9A-Za-z-_]+$`), - "only alphanumeric characters, hyphens, underscores, and periods are allowed"), - ), - }, - - "location": { - Type: schema.TypeString, - Computed: true, - }, - - "arn": { - Type: schema.TypeString, - Computed: 
true, - }, - "access_policy": { Type: schema.TypeString, Optional: true, @@ -68,7 +52,24 @@ func ResourceVault() *schema.Resource { return json }, }, - + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "location": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 255), + validation.StringMatch(regexp.MustCompile(`^[.0-9A-Za-z-_]+$`), + "only alphanumeric characters, hyphens, underscores, and periods are allowed"), + ), + }, "notification": { Type: schema.TypeList, Optional: true, @@ -85,7 +86,6 @@ func ResourceVault() *schema.Resource { "InventoryRetrievalCompleted", }, false), }, - Set: schema.HashString, }, "sns_topic": { Type: schema.TypeString, @@ -95,7 +95,6 @@ func ResourceVault() *schema.Resource { }, }, }, - names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), }, @@ -106,32 +105,56 @@ func ResourceVault() *schema.Resource { func resourceVaultCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() + conn := meta.(*conns.AWSClient).GlacierClient(ctx) + name := d.Get("name").(string) input := &glacier.CreateVaultInput{ - VaultName: aws.String(d.Get("name").(string)), + VaultName: aws.String(name), } - _, err := conn.CreateVaultWithContext(ctx, input) + _, err := conn.CreateVault(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "creating Glacier Vault: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Glacier Vault (%s): %s", name, err) } - d.SetId(d.Get("name").(string)) + d.SetId(name) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Glacier Vault (%s) tags: %s", d.Id(), err) } - if _, ok := 
d.GetOk("access_policy"); ok { - if err := resourceVaultPolicyUpdate(ctx, conn, d); err != nil { - return sdkdiag.AppendErrorf(diags, "updating Glacier Vault (%s) access policy: %s", d.Id(), err) + if v, ok := d.GetOk("access_policy"); ok { + policy, err := structure.NormalizeJsonString(v.(string)) + + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + input := &glacier.SetVaultAccessPolicyInput{ + Policy: &types.VaultAccessPolicy{ + Policy: aws.String(policy), + }, + VaultName: aws.String(d.Id()), + } + + _, err = conn.SetVaultAccessPolicy(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "setting Glacier Vault (%s) access policy: %s", d.Id(), err) } } - if _, ok := d.GetOk("notification"); ok { - if err := resourceVaultNotificationUpdate(ctx, conn, d); err != nil { - return sdkdiag.AppendErrorf(diags, "updating Glacier Vault (%s) notification: %s", d.Id(), err) + if v, ok := d.GetOk("notification"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input := &glacier.SetVaultNotificationsInput{ + VaultName: aws.String(d.Id()), + VaultNotificationConfig: expandVaultNotificationConfig(v.([]interface{})[0].(map[string]interface{})), + } + + _, err := conn.SetVaultNotifications(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "setting Glacier Vault (%s) notifications: %s", d.Id(), err) } } @@ -140,58 +163,65 @@ func resourceVaultCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceVaultRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() + conn := meta.(*conns.AWSClient).GlacierClient(ctx) - input := &glacier.DescribeVaultInput{ - VaultName: aws.String(d.Id()), - } + output, err := findVaultByName(ctx, conn, d.Id()) - out, err := conn.DescribeVaultWithContext(ctx, input) - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { + if 
!d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Glacier Vault (%s) not found, removing from state", d.Id()) d.SetId("") return diags } - if err != nil { - return sdkdiag.AppendErrorf(diags, "reading Glacier Vault (%s): %s", d.Id(), err) - } - - awsClient := meta.(*conns.AWSClient) - d.Set("name", out.VaultName) - d.Set("arn", out.VaultARN) - location, err := buildVaultLocation(awsClient.AccountID, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "reading Glacier Vault (%s): %s", d.Id(), err) } - d.Set("location", location) - log.Printf("[DEBUG] Getting the access_policy for Vault %s", d.Id()) - pol, err := conn.GetVaultAccessPolicyWithContext(ctx, &glacier.GetVaultAccessPolicyInput{ - VaultName: aws.String(d.Id()), - }) + d.Set("access_policy", nil) + d.Set("arn", output.VaultARN) + d.Set("location", fmt.Sprintf("/%s/vaults/%s", meta.(*conns.AWSClient).AccountID, d.Id())) + d.Set("name", output.VaultName) + d.Set("notification", nil) - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { - d.Set("access_policy", "") - } else if err != nil { - return sdkdiag.AppendErrorf(diags, "reading Glacier Vault (%s): reading policy: %s", d.Id(), err) - } else if pol != nil && pol.Policy != nil { - policy, err := verify.PolicyToSet(d.Get("access_policy").(string), aws.StringValue(pol.Policy.Policy)) + if output, err := conn.GetVaultAccessPolicy(ctx, &glacier.GetVaultAccessPolicyInput{ + VaultName: aws.String(d.Id()), + }); err != nil { + // "An error occurred (ResourceNotFoundException) when calling the GetVaultAccessPolicy operation: No vault access policy is set for: ..."
+ if !errs.IsA[*types.ResourceNotFoundException](err) { + return sdkdiag.AppendErrorf(diags, "reading Glacier Vault (%s) access policy: %s", d.Id(), err) + } + } else if output != nil && output.Policy != nil { + policy, err := verify.PolicyToSet(d.Get("access_policy").(string), aws.ToString(output.Policy.Policy)) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading Glacier Vault (%s): setting policy: %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } d.Set("access_policy", policy) } - notifications, err := getVaultNotification(ctx, conn, d.Id()) - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { - d.Set("notification", []map[string]interface{}{}) - } else if pol != nil { - d.Set("notification", notifications) - } else { - return sdkdiag.AppendErrorf(diags, "setting notification: %s", err) + if output, err := conn.GetVaultNotifications(ctx, &glacier.GetVaultNotificationsInput{ + VaultName: aws.String(d.Id()), + }); err != nil { + // "An error occurred (ResourceNotFoundException) when calling the GetVaultNotifications operation: No notification configuration is set for vault: ..." 
+ if !errs.IsA[*types.ResourceNotFoundException](err) { + return sdkdiag.AppendErrorf(diags, "reading Glacier Vault (%s) notifications: %s", d.Id(), err) + } + } else if output != nil && output.VaultNotificationConfig != nil { + apiObject := output.VaultNotificationConfig + tfMap := map[string]interface{}{} + + if v := apiObject.Events; v != nil { + tfMap["events"] = v + } + + if v := apiObject.SNSTopic; v != nil { + tfMap["sns_topic"] = aws.ToString(v) + } + + if err := d.Set("notification", []interface{}{tfMap}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting notification: %s", err) + } } return diags @@ -199,17 +229,63 @@ func resourceVaultRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceVaultUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() + conn := meta.(*conns.AWSClient).GlacierClient(ctx) if d.HasChange("access_policy") { - if err := resourceVaultPolicyUpdate(ctx, conn, d); err != nil { - return sdkdiag.AppendErrorf(diags, "updating Glacier Vault (%s) access policy: %s", d.Id(), err) + if v, ok := d.GetOk("access_policy"); ok { + policy, err := structure.NormalizeJsonString(v.(string)) + + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + input := &glacier.SetVaultAccessPolicyInput{ + Policy: &types.VaultAccessPolicy{ + Policy: aws.String(policy), + }, + VaultName: aws.String(d.Id()), + } + + _, err = conn.SetVaultAccessPolicy(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "setting Glacier Vault (%s) access policy: %s", d.Id(), err) + } + } else { + input := &glacier.DeleteVaultAccessPolicyInput{ + VaultName: aws.String(d.Id()), + } + + _, err := conn.DeleteVaultAccessPolicy(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Glacier Vault (%s) access policy: %s", d.Id(), err) + } } } if d.HasChange("notification") { - if err := 
resourceVaultNotificationUpdate(ctx, conn, d); err != nil { - return sdkdiag.AppendErrorf(diags, "updating Glacier Vault (%s) notification: %s", d.Id(), err) + if v, ok := d.GetOk("notification"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input := &glacier.SetVaultNotificationsInput{ + VaultName: aws.String(d.Id()), + VaultNotificationConfig: expandVaultNotificationConfig(v.([]interface{})[0].(map[string]interface{})), + } + + _, err := conn.SetVaultNotifications(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "setting Glacier Vault (%s) notifications: %s", d.Id(), err) + } + } else { + input := &glacier.DeleteVaultNotificationsInput{ + VaultName: aws.String(d.Id()), + } + + _, err := conn.DeleteVaultNotifications(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Glacier Vault (%s) notifications: %s", d.Id(), err) + } } } @@ -218,108 +294,59 @@ func resourceVaultUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceVaultDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() + conn := meta.(*conns.AWSClient).GlacierClient(ctx) log.Printf("[DEBUG] Deleting Glacier Vault: %s", d.Id()) - _, err := conn.DeleteVaultWithContext(ctx, &glacier.DeleteVaultInput{ + _, err := conn.DeleteVault(ctx, &glacier.DeleteVaultInput{ VaultName: aws.String(d.Id()), }) + if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting Glacier Vault: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting Glacier Vault (%s): %s", d.Id(), err) } + return diags } -func resourceVaultNotificationUpdate(ctx context.Context, conn *glacier.Glacier, d *schema.ResourceData) error { - if v, ok := d.GetOk("notification"); ok { - settings := v.([]interface{}) - - s := settings[0].(map[string]interface{}) - - _, err := conn.SetVaultNotificationsWithContext(ctx, &glacier.SetVaultNotificationsInput{ - 
VaultName: aws.String(d.Id()), - VaultNotificationConfig: &glacier.VaultNotificationConfig{ - SNSTopic: aws.String(s["sns_topic"].(string)), - Events: flex.ExpandStringSet(s["events"].(*schema.Set)), - }, - }) +func findVaultByName(ctx context.Context, conn *glacier.Client, name string) (*glacier.DescribeVaultOutput, error) { + input := &glacier.DescribeVaultInput{ + VaultName: aws.String(name), + } - if err != nil { - return fmt.Errorf("Error Updating Glacier Vault Notifications: %w", err) - } - } else { - _, err := conn.DeleteVaultNotificationsWithContext(ctx, &glacier.DeleteVaultNotificationsInput{ - VaultName: aws.String(d.Id()), - }) + output, err := conn.DescribeVault(ctx, input) - if err != nil { - return fmt.Errorf("Error Removing Glacier Vault Notifications: %w", err) + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, } } - return nil -} - -func resourceVaultPolicyUpdate(ctx context.Context, conn *glacier.Glacier, d *schema.ResourceData) error { - vaultName := d.Id() - policyContents, err := structure.NormalizeJsonString(d.Get("access_policy").(string)) - if err != nil { - return fmt.Errorf("policy (%s) is invalid JSON: %w", policyContents, err) + return nil, err } - policy := &glacier.VaultAccessPolicy{ - Policy: aws.String(policyContents), - } - - if policyContents != "" { - log.Printf("[DEBUG] Glacier Vault: %s, put policy", vaultName) - - _, err := conn.SetVaultAccessPolicyWithContext(ctx, &glacier.SetVaultAccessPolicyInput{ - VaultName: aws.String(d.Id()), - Policy: policy, - }) - - if err != nil { - return fmt.Errorf("Error putting Glacier Vault policy: %w", err) - } - } else { - log.Printf("[DEBUG] Glacier Vault: %s, delete policy: %s", vaultName, policy) - _, err := conn.DeleteVaultAccessPolicyWithContext(ctx, &glacier.DeleteVaultAccessPolicyInput{ - VaultName: aws.String(d.Id()), - }) - - if err != nil { - return fmt.Errorf("Error deleting Glacier Vault policy: %w", 
err) - } + if output == nil { + return nil, tfresource.NewEmptyResultError(input) } - return nil + return output, nil } -func buildVaultLocation(accountId, vaultName string) (string, error) { - if accountId == "" { - return "", errors.New("AWS account ID unavailable - failed to construct Vault location") +func expandVaultNotificationConfig(tfMap map[string]interface{}) *types.VaultNotificationConfig { + if tfMap == nil { + return nil } - return fmt.Sprintf("/" + accountId + "/vaults/" + vaultName), nil -} -func getVaultNotification(ctx context.Context, conn *glacier.Glacier, vaultName string) ([]map[string]interface{}, error) { - request := &glacier.GetVaultNotificationsInput{ - VaultName: aws.String(vaultName), - } + apiObject := &types.VaultNotificationConfig{} - response, err := conn.GetVaultNotificationsWithContext(ctx, request) - if err != nil { - return nil, fmt.Errorf("Error reading Glacier Vault Notifications: %w", err) + if v, ok := tfMap["events"].(*schema.Set); ok && v.Len() > 0 { + apiObject.Events = flex.ExpandStringValueSet(v) } - notifications := make(map[string]interface{}) - - log.Print("[DEBUG] Flattening Glacier Vault Notifications") - - notifications["events"] = aws.StringValueSlice(response.VaultNotificationConfig.Events) - notifications["sns_topic"] = aws.StringValue(response.VaultNotificationConfig.SNSTopic) + if v, ok := tfMap["sns_topic"].(string); ok && v != "" { + apiObject.SNSTopic = aws.String(v) + } - return []map[string]interface{}{notifications}, nil + return apiObject } diff --git a/internal/service/glacier/vault_lock.go b/internal/service/glacier/vault_lock.go index fec82a7f7af..b71ba480064 100644 --- a/internal/service/glacier/vault_lock.go +++ b/internal/service/glacier/vault_lock.go @@ -1,32 +1,36 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glacier import ( "context" - "fmt" "log" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/glacier" + "github.com/aws/aws-sdk-go-v2/service/glacier/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) // @SDKResource("aws_glacier_vault_lock") -func ResourceVaultLock() *schema.Resource { +func resourceVaultLock() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceVaultLockCreate, ReadWithoutTimeout: resourceVaultLockRead, - // Allow ignore_deletion_error update - UpdateWithoutTimeout: schema.NoopContext, + UpdateWithoutTimeout: schema.NoopContext, // Allow ignore_deletion_error update. 
DeleteWithoutTimeout: resourceVaultLockDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, @@ -64,49 +68,53 @@ func ResourceVaultLock() *schema.Resource { } } +const ( + lockStateInProgress = "InProgress" + lockStateLocked = "Locked" +) + func resourceVaultLockCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() - vaultName := d.Get("vault_name").(string) + conn := meta.(*conns.AWSClient).GlacierClient(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { - return sdkdiag.AppendErrorf(diags, "policy is invalid JSON: %s", err) + return sdkdiag.AppendFromErr(diags, err) } + vaultName := d.Get("vault_name").(string) input := &glacier.InitiateVaultLockInput{ AccountId: aws.String("-"), - Policy: &glacier.VaultLockPolicy{ + Policy: &types.VaultLockPolicy{ Policy: aws.String(policy), }, VaultName: aws.String(vaultName), } - log.Printf("[DEBUG] Initiating Glacier Vault Lock: %s", input) - output, err := conn.InitiateVaultLockWithContext(ctx, input) + output, err := conn.InitiateVaultLock(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "initiating Glacier Vault Lock: %s", err) + return sdkdiag.AppendErrorf(diags, "creating Glacier Vault Lock (%s): %s", vaultName, err) } d.SetId(vaultName) - if !d.Get("complete_lock").(bool) { - return append(diags, resourceVaultLockRead(ctx, d, meta)...) 
- } + if d.Get("complete_lock").(bool) { + input := &glacier.CompleteVaultLockInput{ + LockId: output.LockId, + VaultName: aws.String(vaultName), + } - completeLockInput := &glacier.CompleteVaultLockInput{ - LockId: output.LockId, - VaultName: aws.String(vaultName), - } + _, err := conn.CompleteVaultLock(ctx, input) - log.Printf("[DEBUG] Completing Glacier Vault (%s) Lock: %s", vaultName, completeLockInput) - if _, err := conn.CompleteVaultLockWithContext(ctx, completeLockInput); err != nil { - return sdkdiag.AppendErrorf(diags, "completing Glacier Vault (%s) Lock: %s", vaultName, err) - } + if err != nil { + return sdkdiag.AppendErrorf(diags, "completing Glacier Vault Lock (%s): %s", d.Id(), err) + } - if err := waitVaultLockCompletion(ctx, conn, vaultName); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Glacier Vault Lock (%s) completion: %s", d.Id(), err) + if err := waitVaultLockComplete(ctx, conn, d.Id()); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Glacier Vault Lock (%s) completion: %s", d.Id(), err) + } } return append(diags, resourceVaultLockRead(ctx, d, meta)...) 
@@ -114,18 +122,12 @@ func resourceVaultLockCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceVaultLockRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() - - input := &glacier.GetVaultLockInput{ - AccountId: aws.String("-"), - VaultName: aws.String(d.Id()), - } + conn := meta.(*conns.AWSClient).GlacierClient(ctx) - log.Printf("[DEBUG] Reading Glacier Vault Lock (%s): %s", d.Id(), input) - output, err := conn.GetVaultLockWithContext(ctx, input) + output, err := findVaultLockByName(ctx, conn, d.Id()) - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { - log.Printf("[WARN] Glacier Vault Lock (%s) not found, removing from state", d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Glacier Vault Lock (%s) not found, removing from state", d.Id()) d.SetId("") return diags } @@ -134,19 +136,13 @@ func resourceVaultLockRead(ctx context.Context, d *schema.ResourceData, meta int return sdkdiag.AppendErrorf(diags, "reading Glacier Vault Lock (%s): %s", d.Id(), err) } - if output == nil { - log.Printf("[WARN] Glacier Vault Lock (%s) not found, removing from state", d.Id()) - d.SetId("") - return diags - } - - d.Set("complete_lock", aws.StringValue(output.State) == "Locked") + d.Set("complete_lock", aws.ToString(output.State) == lockStateLocked) d.Set("vault_name", d.Id()) - policyToSet, err := verify.PolicyToSet(d.Get("policy").(string), aws.StringValue(output.Policy)) + policyToSet, err := verify.PolicyToSet(d.Get("policy").(string), aws.ToString(output.Policy)) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading Glacier Vault Lock (%s): setting policy: %s", d.Id(), err) + return sdkdiag.AppendFromErr(diags, err) } d.Set("policy", policyToSet) @@ -156,61 +152,74 @@ func resourceVaultLockRead(ctx context.Context, d *schema.ResourceData, meta int func resourceVaultLockDelete(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlacierConn() + conn := meta.(*conns.AWSClient).GlacierClient(ctx) - input := &glacier.AbortVaultLockInput{ + log.Printf("[DEBUG] Deleting Glacier Vault Lock: %s", d.Id()) + _, err := conn.AbortVaultLock(ctx, &glacier.AbortVaultLockInput{ VaultName: aws.String(d.Id()), - } + }) - log.Printf("[DEBUG] Aborting Glacier Vault Lock (%s): %s", d.Id(), input) - _, err := conn.AbortVaultLockWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { + if errs.IsA[*types.ResourceNotFoundException](err) { return diags } if err != nil && !d.Get("ignore_deletion_error").(bool) { - return sdkdiag.AppendErrorf(diags, "aborting Glacier Vault Lock (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting Glacier Vault Lock (%s): %s", d.Id(), err) } return diags } -func vaultLockRefreshFunc(ctx context.Context, conn *glacier.Glacier, vaultName string) retry.StateRefreshFunc { - return func() (interface{}, string, error) { - input := &glacier.GetVaultLockInput{ - AccountId: aws.String("-"), - VaultName: aws.String(vaultName), +func findVaultLockByName(ctx context.Context, conn *glacier.Client, name string) (*glacier.GetVaultLockOutput, error) { + input := &glacier.GetVaultLockInput{ + AccountId: aws.String("-"), + VaultName: aws.String(name), + } + + output, err := conn.GetVaultLock(ctx, input) + + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, } + } - log.Printf("[DEBUG] Reading Glacier Vault Lock (%s): %s", vaultName, input) - output, err := conn.GetVaultLockWithContext(ctx, input) + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + +func statusLockState(ctx context.Context, conn *glacier.Client, name string) 
retry.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := findVaultLockByName(ctx, conn, name) - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { + if tfresource.NotFound(err) { return nil, "", nil } if err != nil { - return nil, "", fmt.Errorf("error reading Glacier Vault Lock (%s): %s", vaultName, err) - } - - if output == nil { - return nil, "", nil + return nil, "", err } - return output, aws.StringValue(output.State), nil + return output, aws.ToString(output.State), nil } } -func waitVaultLockCompletion(ctx context.Context, conn *glacier.Glacier, vaultName string) error { +func waitVaultLockComplete(ctx context.Context, conn *glacier.Client, name string) error { stateConf := &retry.StateChangeConf{ - Pending: []string{"InProgress"}, - Target: []string{"Locked"}, - Refresh: vaultLockRefreshFunc(ctx, conn, vaultName), + Pending: []string{lockStateInProgress}, + Target: []string{lockStateLocked}, + Refresh: statusLockState(ctx, conn, name), Timeout: 5 * time.Minute, } - log.Printf("[DEBUG] Waiting for Glacier Vault Lock (%s) completion", vaultName) _, err := stateConf.WaitForStateContext(ctx) return err diff --git a/internal/service/glacier/vault_lock_test.go b/internal/service/glacier/vault_lock_test.go index 91122ed5420..df8fbea980a 100644 --- a/internal/service/glacier/vault_lock_test.go +++ b/internal/service/glacier/vault_lock_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glacier_test import ( @@ -5,14 +8,15 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/service/glacier" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfglacier "github.com/hashicorp/terraform-provider-aws/internal/service/glacier" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccGlacierVaultLock_basic(t *testing.T) { @@ -24,7 +28,7 @@ func TestAccGlacierVaultLock_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultLockDestroy(ctx), Steps: []resource.TestStep{ @@ -57,7 +61,7 @@ func TestAccGlacierVaultLock_completeLock(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultLockDestroy(ctx), Steps: []resource.TestStep{ @@ -90,7 +94,7 @@ func TestAccGlacierVaultLock_ignoreEquivalentPolicy(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, 
names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultLockDestroy(ctx), Steps: []resource.TestStep{ @@ -112,33 +116,26 @@ func TestAccGlacierVaultLock_ignoreEquivalentPolicy(t *testing.T) { }) } -func testAccCheckVaultLockExists(ctx context.Context, resourceName string, getVaultLockOutput *glacier.GetVaultLockOutput) resource.TestCheckFunc { +func testAccCheckVaultLockExists(ctx context.Context, n string, v *glacier.GetVaultLockOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } if rs.Primary.ID == "" { - return fmt.Errorf("No ID is set") + return fmt.Errorf("No Glacier Vault Lock ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierClient(ctx) - input := &glacier.GetVaultLockInput{ - VaultName: aws.String(rs.Primary.ID), - } - output, err := conn.GetVaultLockWithContext(ctx, input) + output, err := tfglacier.FindVaultLockByName(ctx, conn, rs.Primary.ID) if err != nil { - return fmt.Errorf("error reading Glacier Vault Lock (%s): %s", rs.Primary.ID, err) - } - - if output == nil { - return fmt.Errorf("error reading Glacier Vault Lock (%s): empty response", rs.Primary.ID) + return err } - *getVaultLockOutput = *output + *v = *output return nil } @@ -146,29 +143,24 @@ func testAccCheckVaultLockExists(ctx context.Context, resourceName string, getVa func testAccCheckVaultLockDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glacier_vault_lock" { continue } - input := 
&glacier.GetVaultLockInput{ - VaultName: aws.String(rs.Primary.ID), - } - output, err := conn.GetVaultLockWithContext(ctx, input) + _, err := tfglacier.FindVaultLockByName(ctx, conn, rs.Primary.ID) - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { + if tfresource.NotFound(err) { continue } if err != nil { - return fmt.Errorf("error reading Glacier Vault Lock (%s): %s", rs.Primary.ID, err) + return err } - if output != nil { - return fmt.Errorf("Glacier Vault Lock (%s) still exists", rs.Primary.ID) - } + return fmt.Errorf("Glacier Vault Lock %s still exists", rs.Primary.ID) } return nil diff --git a/internal/service/glacier/vault_test.go b/internal/service/glacier/vault_test.go index 6b3e517f7d1..abd4db60122 100644 --- a/internal/service/glacier/vault_test.go +++ b/internal/service/glacier/vault_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glacier_test import ( @@ -6,15 +9,15 @@ import ( "regexp" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/glacier" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/service/glacier" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfglacier "github.com/hashicorp/terraform-provider-aws/internal/service/glacier" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccGlacierVault_basic(t *testing.T) { @@ -25,7 +28,7 @@ func TestAccGlacierVault_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + 
ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultDestroy(ctx), Steps: []resource.TestStep{ @@ -58,7 +61,7 @@ func TestAccGlacierVault_notification(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultDestroy(ctx), Steps: []resource.TestStep{ @@ -81,7 +84,6 @@ func TestAccGlacierVault_notification(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckVaultExists(ctx, resourceName, &vault), resource.TestCheckResourceAttr(resourceName, "notification.#", "0"), - testAccCheckVaultNotificationsMissing(ctx, resourceName), ), }, { @@ -105,7 +107,7 @@ func TestAccGlacierVault_policy(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultDestroy(ctx), Steps: []resource.TestStep{ @@ -114,8 +116,7 @@ func TestAccGlacierVault_policy(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckVaultExists(ctx, resourceName, &vault), resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestMatchResourceAttr(resourceName, "access_policy", - regexp.MustCompile(`"Sid":"cross-account-upload".+`)), + resource.TestMatchResourceAttr(resourceName, "access_policy", regexp.MustCompile(`"Sid":"cross-account-upload".+`)), ), }, { @@ -128,8 +129,7 @@ func TestAccGlacierVault_policy(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckVaultExists(ctx, resourceName, &vault), resource.TestCheckResourceAttr(resourceName, "name", 
rName), - resource.TestMatchResourceAttr(resourceName, "access_policy", - regexp.MustCompile(`"Sid":"cross-account-upload1".+`)), + resource.TestMatchResourceAttr(resourceName, "access_policy", regexp.MustCompile(`"Sid":"cross-account-upload1".+`)), ), }, { @@ -151,7 +151,7 @@ func TestAccGlacierVault_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultDestroy(ctx), Steps: []resource.TestStep{ @@ -197,7 +197,7 @@ func TestAccGlacierVault_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultDestroy(ctx), Steps: []resource.TestStep{ @@ -221,7 +221,7 @@ func TestAccGlacierVault_ignoreEquivalent(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, glacier.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.GlacierEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckVaultDestroy(ctx), Steps: []resource.TestStep{ @@ -244,64 +244,26 @@ func TestAccGlacierVault_ignoreEquivalent(t *testing.T) { }) } -func testAccCheckVaultExists(ctx context.Context, name string, vault *glacier.DescribeVaultOutput) resource.TestCheckFunc { +func testAccCheckVaultExists(ctx context.Context, n string, v *glacier.DescribeVaultOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: 
%s", name) + return fmt.Errorf("Not found: %s", n) } if rs.Primary.ID == "" { - return fmt.Errorf("No ID is set") + return fmt.Errorf("No Glacier Vault ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierConn() - out, err := conn.DescribeVaultWithContext(ctx, &glacier.DescribeVaultInput{ - VaultName: aws.String(rs.Primary.ID), - }) + conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierClient(ctx) + + output, err := tfglacier.FindVaultByName(ctx, conn, rs.Primary.ID) if err != nil { return err } - if out.VaultARN == nil { - return fmt.Errorf("No Glacier Vault Found") - } - - if *out.VaultName != rs.Primary.ID { - return fmt.Errorf("Glacier Vault Mismatch - existing: %q, state: %q", - *out.VaultName, rs.Primary.ID) - } - - *vault = *out - - return nil - } -} - -func testAccCheckVaultNotificationsMissing(ctx context.Context, name string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] - if !ok { - return fmt.Errorf("Not found: %s", name) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("No ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierConn() - out, err := conn.GetVaultNotificationsWithContext(ctx, &glacier.GetVaultNotificationsInput{ - VaultName: aws.String(rs.Primary.ID), - }) - - if !tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { - return fmt.Errorf("Expected ResourceNotFoundException for Vault %s Notification Block but got %s", rs.Primary.ID, err) - } - - if out.VaultNotificationConfig != nil { - return fmt.Errorf("Vault Notification Block has been found for %s", rs.Primary.ID) - } + *v = *output return nil } @@ -309,25 +271,24 @@ func testAccCheckVaultNotificationsMissing(ctx context.Context, name string) res func testAccCheckVaultDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlacierConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).GlacierClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glacier_vault" { continue } - input := &glacier.DescribeVaultInput{ - VaultName: aws.String(rs.Primary.ID), + _, err := tfglacier.FindVaultByName(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue } - if _, err := conn.DescribeVaultWithContext(ctx, input); err != nil { - // Verify the error is what we want - if tfawserr.ErrCodeEquals(err, glacier.ErrCodeResourceNotFoundException) { - continue - } + if err != nil { return err } - return fmt.Errorf("still exists") + + return fmt.Errorf("Glacier Vault %s still exists", rs.Primary.ID) } return nil } diff --git a/internal/service/globalaccelerator/accelerator.go b/internal/service/globalaccelerator/accelerator.go index 78960c0f43e..feb00cbfe7f 100644 --- a/internal/service/globalaccelerator/accelerator.go +++ b/internal/service/globalaccelerator/accelerator.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -131,14 +134,14 @@ func ResourceAccelerator() *schema.Resource { } func resourceAcceleratorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) name := d.Get("name").(string) input := &globalaccelerator.CreateAcceleratorInput{ Enabled: aws.Bool(d.Get("enabled").(bool)), IdempotencyToken: aws.String(id.UniqueId()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("ip_address_type"); ok { @@ -180,7 +183,7 @@ func resourceAcceleratorCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceAcceleratorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) accelerator, err := FindAcceleratorByARN(ctx, conn, d.Id()) @@ -218,7 +221,7 @@ func resourceAcceleratorRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceAcceleratorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) if d.HasChanges("name", "ip_address_type", "enabled") { input := &globalaccelerator.UpdateAcceleratorInput{ @@ -283,7 +286,7 @@ func resourceAcceleratorUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceAcceleratorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) input := &globalaccelerator.UpdateAcceleratorInput{ AcceleratorArn: aws.String(d.Id()), diff --git 
a/internal/service/globalaccelerator/accelerator_data_source.go b/internal/service/globalaccelerator/accelerator_data_source.go index 39815b73f49..e09c797331c 100644 --- a/internal/service/globalaccelerator/accelerator_data_source.go +++ b/internal/service/globalaccelerator/accelerator_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -10,8 +13,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) @@ -101,7 +104,7 @@ func (d *dataSourceAccelerator) Read(ctx context.Context, request datasource.Rea return } - conn := d.Meta().GlobalAcceleratorConn() + conn := d.Meta().GlobalAcceleratorConn(ctx) ignoreTagsConfig := d.Meta().IgnoreTagsConfig var results []*globalaccelerator.Accelerator @@ -171,7 +174,7 @@ func (d *dataSourceAccelerator) Read(ctx context.Context, request datasource.Rea data.Attributes = d.flattenAcceleratorAttributesFramework(ctx, attributes) - tags, err := ListTags(ctx, conn, acceleratorARN) + tags, err := listTags(ctx, conn, acceleratorARN) if err != nil { response.Diagnostics.AddError("listing tags for Global Accelerator Accelerator", err.Error()) diff --git a/internal/service/globalaccelerator/accelerator_data_source_test.go b/internal/service/globalaccelerator/accelerator_data_source_test.go index dbeae83d46b..4e7c94bfbb8 100644 --- a/internal/service/globalaccelerator/accelerator_data_source_test.go +++ b/internal/service/globalaccelerator/accelerator_data_source_test.go @@ 
-1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( diff --git a/internal/service/globalaccelerator/accelerator_test.go b/internal/service/globalaccelerator/accelerator_test.go index 23cfe23185b..7bf32447f74 100644 --- a/internal/service/globalaccelerator/accelerator_test.go +++ b/internal/service/globalaccelerator/accelerator_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( @@ -327,7 +330,7 @@ func TestAccGlobalAcceleratorAccelerator_tags(t *testing.T) { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListAcceleratorsInput{} @@ -351,7 +354,7 @@ func testAccCheckBYOIPExists(ctx context.Context, t *testing.T) { parsedAddr := net.ParseIP(requestedAddr) - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListByoipCidrsInput{} cidrs := make([]*globalaccelerator.ByoipCidr, 0) @@ -391,7 +394,7 @@ func testAccCheckBYOIPExists(ctx context.Context, t *testing.T) { func testAccCheckAcceleratorExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -410,7 +413,7 @@ func testAccCheckAcceleratorExists(ctx context.Context, n string) resource.TestC func testAccCheckAcceleratorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + 
conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_globalaccelerator_accelerator" { diff --git a/internal/service/globalaccelerator/arn.go b/internal/service/globalaccelerator/arn.go index 316265c689e..e02a0830622 100644 --- a/internal/service/globalaccelerator/arn.go +++ b/internal/service/globalaccelerator/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( diff --git a/internal/service/globalaccelerator/arn_test.go b/internal/service/globalaccelerator/arn_test.go index 4844644e64a..d1baa9cf29c 100644 --- a/internal/service/globalaccelerator/arn_test.go +++ b/internal/service/globalaccelerator/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( diff --git a/internal/service/globalaccelerator/custom_routing_accelerator.go b/internal/service/globalaccelerator/custom_routing_accelerator.go index 948af0cf1b9..8f5738d22be 100644 --- a/internal/service/globalaccelerator/custom_routing_accelerator.go +++ b/internal/service/globalaccelerator/custom_routing_accelerator.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -129,14 +132,14 @@ func ResourceCustomRoutingAccelerator() *schema.Resource { func resourceCustomRoutingAcceleratorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) name := d.Get("name").(string) input := &globalaccelerator.CreateCustomRoutingAcceleratorInput{ Name: aws.String(name), IdempotencyToken: aws.String(id.UniqueId()), Enabled: aws.Bool(d.Get("enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("ip_address_type"); ok { @@ -177,7 +180,7 @@ func resourceCustomRoutingAcceleratorCreate(ctx context.Context, d *schema.Resou func resourceCustomRoutingAcceleratorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) accelerator, err := FindCustomRoutingAcceleratorByARN(ctx, conn, d.Id()) @@ -215,7 +218,7 @@ func resourceCustomRoutingAcceleratorRead(ctx context.Context, d *schema.Resourc func resourceCustomRoutingAcceleratorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) if d.HasChanges("name", "ip_address_type", "enabled") { input := &globalaccelerator.UpdateCustomRoutingAcceleratorInput{ @@ -281,7 +284,7 @@ func resourceCustomRoutingAcceleratorUpdate(ctx context.Context, d *schema.Resou func resourceCustomRoutingAcceleratorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := 
meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) input := &globalaccelerator.UpdateCustomRoutingAcceleratorInput{ AcceleratorArn: aws.String(d.Id()), diff --git a/internal/service/globalaccelerator/custom_routing_accelerator_data_source.go b/internal/service/globalaccelerator/custom_routing_accelerator_data_source.go index 89daabf4b10..384c2979933 100644 --- a/internal/service/globalaccelerator/custom_routing_accelerator_data_source.go +++ b/internal/service/globalaccelerator/custom_routing_accelerator_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -88,7 +91,7 @@ func DataSourceCustomRoutingAccelerator() *schema.Resource { func dataSourceCustomRoutingAcceleratorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig var results []*globalaccelerator.CustomRoutingAccelerator @@ -147,7 +150,7 @@ func dataSourceCustomRoutingAcceleratorRead(ctx context.Context, d *schema.Resou return sdkdiag.AppendErrorf(diags, "setting attributes: %s", err) } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Global Accelerator Custom Routing Accelerator (%s): %s", d.Id(), err) diff --git a/internal/service/globalaccelerator/custom_routing_accelerator_data_source_test.go b/internal/service/globalaccelerator/custom_routing_accelerator_data_source_test.go index deaabe9cc41..576ef8656f8 100644 --- a/internal/service/globalaccelerator/custom_routing_accelerator_data_source_test.go +++ b/internal/service/globalaccelerator/custom_routing_accelerator_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( diff --git a/internal/service/globalaccelerator/custom_routing_accelerator_test.go b/internal/service/globalaccelerator/custom_routing_accelerator_test.go index dbcfa6c68d4..b27fe1e9373 100644 --- a/internal/service/globalaccelerator/custom_routing_accelerator_test.go +++ b/internal/service/globalaccelerator/custom_routing_accelerator_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( @@ -165,7 +168,7 @@ func TestAccGlobalAcceleratorCustomRoutingAccelerator_update(t *testing.T) { func testAccCheckCustomRoutingAcceleratorExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -184,7 +187,7 @@ func testAccCheckCustomRoutingAcceleratorExists(ctx context.Context, n string) r func testAccCheckCustomRoutingAcceleratorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_globalaccelerator_custom_routing_accelerator" { diff --git a/internal/service/globalaccelerator/custom_routing_endpoint_group.go b/internal/service/globalaccelerator/custom_routing_endpoint_group.go index 75cccad3d1e..2d871dc81b0 100644 --- a/internal/service/globalaccelerator/custom_routing_endpoint_group.go +++ b/internal/service/globalaccelerator/custom_routing_endpoint_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -101,7 +104,7 @@ func ResourceCustomRoutingEndpointGroup() *schema.Resource { func resourceCustomRoutingEndpointGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) region := meta.(*conns.AWSClient).Region input := &globalaccelerator.CreateCustomRoutingEndpointGroupInput{ @@ -151,7 +154,7 @@ func resourceCustomRoutingEndpointGroupCreate(ctx context.Context, d *schema.Res func resourceCustomRoutingEndpointGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) endpointGroup, err := FindCustomRoutingEndpointGroupByARN(ctx, conn, d.Id()) @@ -186,7 +189,7 @@ func resourceCustomRoutingEndpointGroupRead(ctx context.Context, d *schema.Resou func resourceCustomRoutingEndpointGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) log.Printf("[DEBUG] Deleting Global Accelerator Custom Routing Endpoint Group (%s)", d.Id()) _, err := conn.DeleteCustomRoutingEndpointGroupWithContext(ctx, &globalaccelerator.DeleteCustomRoutingEndpointGroupInput{ diff --git a/internal/service/globalaccelerator/custom_routing_endpoint_group_test.go b/internal/service/globalaccelerator/custom_routing_endpoint_group_test.go index d46ba9636ac..2b3e01cafae 100644 --- a/internal/service/globalaccelerator/custom_routing_endpoint_group_test.go +++ b/internal/service/globalaccelerator/custom_routing_endpoint_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( @@ -112,7 +115,7 @@ func TestAccGlobalAcceleratorCustomRoutingEndpointGroup_endpointConfiguration(t func testAccCheckCustomRoutingEndpointGroupExists(ctx context.Context, n string, v *globalaccelerator.CustomRoutingEndpointGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -137,7 +140,7 @@ func testAccCheckCustomRoutingEndpointGroupExists(ctx context.Context, n string, func testAccCheckCustomRoutingEndpointGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_globalaccelerator_custom_routing_endpoint_group" { diff --git a/internal/service/globalaccelerator/custom_routing_listener.go b/internal/service/globalaccelerator/custom_routing_listener.go index 176d634dac7..0f8324bdf6c 100644 --- a/internal/service/globalaccelerator/custom_routing_listener.go +++ b/internal/service/globalaccelerator/custom_routing_listener.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -68,7 +71,7 @@ func ResourceCustomRoutingListener() *schema.Resource { func resourceCustomRoutingListenerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) acceleratorARN := d.Get("accelerator_arn").(string) input := &globalaccelerator.CreateCustomRoutingListenerInput{ @@ -95,7 +98,7 @@ func resourceCustomRoutingListenerCreate(ctx context.Context, d *schema.Resource func resourceCustomRoutingListenerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) listener, err := FindCustomRoutingListenerByARN(ctx, conn, d.Id()) @@ -125,7 +128,7 @@ func resourceCustomRoutingListenerRead(ctx context.Context, d *schema.ResourceDa func resourceCustomRoutingListenerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) acceleratorARN := d.Get("accelerator_arn").(string) input := &globalaccelerator.UpdateCustomRoutingListenerInput{ @@ -149,7 +152,7 @@ func resourceCustomRoutingListenerUpdate(ctx context.Context, d *schema.Resource func resourceCustomRoutingListenerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) acceleratorARN := d.Get("accelerator_arn").(string) diff --git a/internal/service/globalaccelerator/custom_routing_listener_test.go 
b/internal/service/globalaccelerator/custom_routing_listener_test.go index f6f03a67439..93028d2f4cf 100644 --- a/internal/service/globalaccelerator/custom_routing_listener_test.go +++ b/internal/service/globalaccelerator/custom_routing_listener_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( @@ -77,7 +80,7 @@ func TestAccGlobalAcceleratorCustomRoutingListener_disappears(t *testing.T) { func testAccCheckCustomRoutingListenerExists(ctx context.Context, n string, v *globalaccelerator.CustomRoutingListener) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -102,7 +105,7 @@ func testAccCheckCustomRoutingListenerExists(ctx context.Context, n string, v *g func testAccCheckCustomRoutingListenerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_globalaccelerator_custom_routing_listener" { diff --git a/internal/service/globalaccelerator/endpoint_group.go b/internal/service/globalaccelerator/endpoint_group.go index 391de89ff26..7a82cbfff13 100644 --- a/internal/service/globalaccelerator/endpoint_group.go +++ b/internal/service/globalaccelerator/endpoint_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -137,7 +140,7 @@ func ResourceEndpointGroup() *schema.Resource { } func resourceEndpointGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) input := &globalaccelerator.CreateEndpointGroupInput{ EndpointGroupRegion: aws.String(meta.(*conns.AWSClient).Region), @@ -203,7 +206,7 @@ func resourceEndpointGroupCreate(ctx context.Context, d *schema.ResourceData, me } func resourceEndpointGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) endpointGroup, err := FindEndpointGroupByARN(ctx, conn, d.Id()) @@ -243,7 +246,7 @@ func resourceEndpointGroupRead(ctx context.Context, d *schema.ResourceData, meta } func resourceEndpointGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) input := &globalaccelerator.UpdateEndpointGroupInput{ EndpointGroupArn: aws.String(d.Id()), @@ -305,7 +308,7 @@ func resourceEndpointGroupUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceEndpointGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) log.Printf("[DEBUG] Deleting Global Accelerator Endpoint Group: %s", d.Id()) _, err := conn.DeleteEndpointGroupWithContext(ctx, &globalaccelerator.DeleteEndpointGroupInput{ @@ -377,7 +380,7 @@ func expandEndpointConfiguration(tfMap map[string]interface{}) *globalaccelerato apiObject.EndpointId = aws.String(v) } - if v, ok := 
tfMap["weight"].(int); ok && v != 0 { + if v, ok := tfMap["weight"].(int); ok { apiObject.Weight = aws.Int64(int64(v)) } diff --git a/internal/service/globalaccelerator/endpoint_group_test.go b/internal/service/globalaccelerator/endpoint_group_test.go index 0491eeeaff0..bd04d4aacdb 100644 --- a/internal/service/globalaccelerator/endpoint_group_test.go +++ b/internal/service/globalaccelerator/endpoint_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( @@ -97,7 +100,7 @@ func TestAccGlobalAcceleratorEndpointGroup_ALBEndpoint_clientIP(t *testing.T) { CheckDestroy: testAccCheckEndpointGroupDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccEndpointGroupConfig_albClientIP(rName, false), + Config: testAccEndpointGroupConfig_albClientIP(rName, false, 20), Check: resource.ComposeTestCheckFunc( testAccCheckEndpointGroupExists(ctx, resourceName, &v), acctest.MatchResourceAttrGlobalARN(resourceName, "arn", "globalaccelerator", regexp.MustCompile(`accelerator/[^/]+/listener/[^/]+/endpoint-group/[^/]+`)), @@ -124,14 +127,14 @@ func TestAccGlobalAcceleratorEndpointGroup_ALBEndpoint_clientIP(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccEndpointGroupConfig_albClientIP(rName, true), + Config: testAccEndpointGroupConfig_albClientIP(rName, true, 0), Check: resource.ComposeTestCheckFunc( testAccCheckEndpointGroupExists(ctx, resourceName, &v), acctest.MatchResourceAttrGlobalARN(resourceName, "arn", "globalaccelerator", regexp.MustCompile(`accelerator/[^/]+/listener/[^/]+/endpoint-group/[^/]+`)), resource.TestCheckResourceAttr(resourceName, "endpoint_configuration.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "endpoint_configuration.*", map[string]string{ "client_ip_preservation_enabled": "true", - "weight": "20", + "weight": "0", }), resource.TestCheckTypeSetElemAttrPair(resourceName, "endpoint_configuration.*.endpoint_id", albResourceName, "id"), 
resource.TestCheckResourceAttr(resourceName, "endpoint_group_region", acctest.Region()), @@ -427,7 +430,7 @@ func TestAccGlobalAcceleratorEndpointGroup_update(t *testing.T) { func testAccCheckEndpointGroupExists(ctx context.Context, name string, v *globalaccelerator.EndpointGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { @@ -452,7 +455,7 @@ func testAccCheckEndpointGroupExists(ctx context.Context, name string, v *global func testAccCheckEndpointGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_globalaccelerator_endpoint_group" { @@ -480,7 +483,7 @@ func testAccCheckEndpointGroupDestroy(ctx context.Context) resource.TestCheckFun func testAccCheckEndpointGroupDeleteSecurityGroup(ctx context.Context, vpc *ec2.Vpc) resource.TestCheckFunc { return func(s *terraform.State) error { meta := acctest.Provider.Meta() - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) v, err := tfec2.FindSecurityGroupByNameAndVPCIDAndOwnerID(ctx, conn, "GlobalAccelerator", aws.StringValue(vpc.VpcId), aws.StringValue(vpc.OwnerId)) @@ -528,7 +531,7 @@ resource "aws_globalaccelerator_endpoint_group" "test" { `, rName) } -func testAccEndpointGroupConfig_albClientIP(rName string, clientIP bool) string { +func testAccEndpointGroupConfig_albClientIP(rName string, clientIP bool, weight int) string { return acctest.ConfigCompose(acctest.ConfigVPCWithSubnets(rName, 2), fmt.Sprintf(` resource "aws_lb" "test" { name = %[1]q @@ -596,7 +599,7 @@ resource 
"aws_globalaccelerator_endpoint_group" "test" { endpoint_configuration { endpoint_id = aws_lb.test.id - weight = 20 + weight = %[3]d client_ip_preservation_enabled = %[2]t } @@ -607,7 +610,7 @@ resource "aws_globalaccelerator_endpoint_group" "test" { threshold_count = 3 traffic_dial_percentage = 100 } -`, rName, clientIP)) +`, rName, clientIP, weight)) } func testAccEndpointGroupConfig_instance(rName string) string { @@ -690,7 +693,7 @@ resource "aws_globalaccelerator_listener" "test" { resource "aws_eip" "test" { provider = "awsalternate" - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -800,7 +803,7 @@ resource "aws_globalaccelerator_listener" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q @@ -846,7 +849,7 @@ resource "aws_globalaccelerator_listener" "test" { } resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/globalaccelerator/generate.go b/internal/service/globalaccelerator/generate.go index 2b7ed8676db..e3f90ea20bd 100644 --- a/internal/service/globalaccelerator/generate.go +++ b/internal/service/globalaccelerator/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package globalaccelerator diff --git a/internal/service/globalaccelerator/listener.go b/internal/service/globalaccelerator/listener.go index 217a83cd652..e40aff79b5e 100644 --- a/internal/service/globalaccelerator/listener.go +++ b/internal/service/globalaccelerator/listener.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator import ( @@ -77,7 +80,7 @@ func ResourceListener() *schema.Resource { } func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) acceleratorARN := d.Get("accelerator_arn").(string) input := &globalaccelerator.CreateListenerInput{ @@ -105,7 +108,7 @@ func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceListenerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) listener, err := FindListenerByARN(ctx, conn, d.Id()) @@ -136,7 +139,7 @@ func resourceListenerRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceListenerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) acceleratorARN := d.Get("accelerator_arn").(string) input := &globalaccelerator.UpdateListenerInput{ @@ -161,7 +164,7 @@ func resourceListenerUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceListenerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlobalAcceleratorConn() + conn := meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx) acceleratorARN := d.Get("accelerator_arn").(string) log.Printf("[DEBUG] Deleting Global Accelerator Listener: %s", d.Id()) diff --git a/internal/service/globalaccelerator/listener_test.go b/internal/service/globalaccelerator/listener_test.go index bbf6b4a2e44..d4534469271 100644 --- a/internal/service/globalaccelerator/listener_test.go +++ 
b/internal/service/globalaccelerator/listener_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package globalaccelerator_test import ( @@ -109,7 +112,7 @@ func TestAccGlobalAcceleratorListener_update(t *testing.T) { func testAccCheckListenerExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -128,7 +131,7 @@ func testAccCheckListenerExists(ctx context.Context, n string) resource.TestChec func testAccCheckListenerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlobalAcceleratorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_globalaccelerator_listener" { diff --git a/internal/service/globalaccelerator/service_package.go b/internal/service/globalaccelerator/service_package.go new file mode 100644 index 00000000000..4126e265623 --- /dev/null +++ b/internal/service/globalaccelerator/service_package.go @@ -0,0 +1,26 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package globalaccelerator + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + globalaccelerator_sdkv1 "github.com/aws/aws-sdk-go/service/globalaccelerator" +) + +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*globalaccelerator_sdkv1.GlobalAccelerator, error) { + sess := m["session"].(*session_sdkv1.Session) + config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(m["endpoint"].(string))} + + // Force "global" services to correct Regions. + if m["partition"].(string) == endpoints_sdkv1.AwsPartitionID { + config.Region = aws_sdkv1.String(endpoints_sdkv1.UsWest2RegionID) + } + + return globalaccelerator_sdkv1.New(sess.Copy(config)), nil +} diff --git a/internal/service/globalaccelerator/service_package_gen.go b/internal/service/globalaccelerator/service_package_gen.go index 4dc8fca71e4..ee841fd98c6 100644 --- a/internal/service/globalaccelerator/service_package_gen.go +++ b/internal/service/globalaccelerator/service_package_gen.go @@ -5,6 +5,7 @@ package globalaccelerator import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -73,4 +74,6 @@ func (p *servicePackage) ServicePackageName() string { return names.GlobalAccelerator } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/globalaccelerator/sweep.go b/internal/service/globalaccelerator/sweep.go index 26a0af44eb7..a835604de88 100644 --- a/internal/service/globalaccelerator/sweep.go +++ b/internal/service/globalaccelerator/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/globalaccelerator" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -61,11 +63,11 @@ func init() { func sweepAccelerators(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlobalAcceleratorConn() + conn := client.GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListAcceleratorsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -94,7 +96,7 @@ func sweepAccelerators(region string) error { return fmt.Errorf("error listing Global Accelerator Accelerators (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Global Accelerator Accelerators (%s): %w", region, err) @@ -105,11 +107,11 @@ func sweepAccelerators(region string) error { func sweepEndpointGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlobalAcceleratorConn() + conn := client.GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListAcceleratorsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -183,7 +185,7 @@ func sweepEndpointGroups(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Global Accelerator 
Accelerators (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Global Accelerator Endpoint Groups (%s): %w", region, err)) @@ -194,11 +196,11 @@ func sweepEndpointGroups(region string) error { func sweepListeners(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlobalAcceleratorConn() + conn := client.GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListAcceleratorsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -250,7 +252,7 @@ func sweepListeners(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Global Accelerator Accelerators (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Global Accelerator Listeners (%s): %w", region, err)) @@ -261,11 +263,11 @@ func sweepListeners(region string) error { func sweepCustomRoutingAccelerators(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlobalAcceleratorConn() + conn := client.GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListCustomRoutingAcceleratorsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -294,7 +296,7 @@ func sweepCustomRoutingAccelerators(region string) error { return fmt.Errorf("error listing Global 
Accelerator Custom Routing Accelerators (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Global Accelerator Custom Routing Accelerators (%s): %w", region, err) @@ -305,11 +307,11 @@ func sweepCustomRoutingAccelerators(region string) error { func sweepCustomRoutingEndpointGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlobalAcceleratorConn() + conn := client.GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListCustomRoutingAcceleratorsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -383,7 +385,7 @@ func sweepCustomRoutingEndpointGroups(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Global Accelerator Custom Routing Accelerators (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Global Accelerator Custom Routing Endpoint Groups (%s): %w", region, err)) @@ -394,11 +396,11 @@ func sweepCustomRoutingEndpointGroups(region string) error { func sweepCustomRoutingListeners(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlobalAcceleratorConn() + conn := client.GlobalAcceleratorConn(ctx) input := &globalaccelerator.ListCustomRoutingAcceleratorsInput{} var sweeperErrs *multierror.Error sweepResources := 
make([]sweep.Sweepable, 0) @@ -450,7 +452,7 @@ func sweepCustomRoutingListeners(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Global Accelerator Custom Routing Accelerators (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Global Accelerator Custom Routing Listeners (%s): %w", region, err)) diff --git a/internal/service/globalaccelerator/tags_gen.go b/internal/service/globalaccelerator/tags_gen.go index 522d9b7d4b4..3bb574c9203 100644 --- a/internal/service/globalaccelerator/tags_gen.go +++ b/internal/service/globalaccelerator/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists globalaccelerator service tags. +// listTags lists globalaccelerator service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn globalacceleratoriface.GlobalAcceleratorAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn globalacceleratoriface.GlobalAcceleratorAPI, identifier string) (tftags.KeyValueTags, error) { input := &globalaccelerator.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn globalacceleratoriface.GlobalAccelerator // ListTags lists globalaccelerator service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).GlobalAcceleratorConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*globalaccelerator.Tag) tftags.Key return tftags.New(ctx, m) } -// GetTagsIn returns globalaccelerator service tags from Context. +// getTagsIn returns globalaccelerator service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*globalaccelerator.Tag { +func getTagsIn(ctx context.Context) []*globalaccelerator.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*globalaccelerator.Tag { return nil } -// SetTagsOut sets globalaccelerator service tags in Context. -func SetTagsOut(ctx context.Context, tags []*globalaccelerator.Tag) { +// setTagsOut sets globalaccelerator service tags in Context. +func setTagsOut(ctx context.Context, tags []*globalaccelerator.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates globalaccelerator service tags. +// updateTags updates globalaccelerator service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn globalacceleratoriface.GlobalAcceleratorAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn globalacceleratoriface.GlobalAcceleratorAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn globalacceleratoriface.GlobalAccelerat // UpdateTags updates globalaccelerator service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).GlobalAcceleratorConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).GlobalAcceleratorConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/glue/catalog_database.go b/internal/service/glue/catalog_database.go index bbb4c482176..b7f228a990d 100644 --- a/internal/service/glue/catalog_database.go +++ b/internal/service/glue/catalog_database.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -118,6 +121,10 @@ func ResourceCatalogDatabase() *schema.Resource { Type: schema.TypeString, Required: true, }, + "region": { + Type: schema.TypeString, + Optional: true, + }, }, }, }, @@ -127,7 +134,7 @@ func ResourceCatalogDatabase() *schema.Resource { func resourceCatalogDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) name := d.Get("name").(string) @@ -158,7 +165,7 @@ func resourceCatalogDatabaseCreate(ctx context.Context, d *schema.ResourceData, input := &glue.CreateDatabaseInput{ CatalogId: aws.String(catalogID), DatabaseInput: dbInput, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.CreateDatabaseWithContext(ctx, input) @@ -173,7 +180,7 @@ func resourceCatalogDatabaseCreate(ctx context.Context, d *schema.ResourceData, func resourceCatalogDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChangesExcept("tags", "tags_all") { catalogID, name, err := ReadCatalogID(d.Id()) @@ -218,7 +225,7 @@ func resourceCatalogDatabaseUpdate(ctx context.Context, d *schema.ResourceData, func resourceCatalogDatabaseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, name, err := ReadCatalogID(d.Id()) if err != nil { @@ -268,7 +275,7 @@ func resourceCatalogDatabaseRead(ctx context.Context, d *schema.ResourceData, me func resourceCatalogDatabaseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var 
diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Glue Catalog Database: %s", d.Id()) _, err := conn.DeleteDatabaseWithContext(ctx, &glue.DeleteDatabaseInput{ @@ -314,6 +321,10 @@ func expandDatabaseTargetDatabase(tfMap map[string]interface{}) *glue.DatabaseId apiObject.DatabaseName = aws.String(v) } + if v, ok := tfMap["region"].(string); ok && v != "" { + apiObject.Region = aws.String(v) + } + return apiObject } @@ -332,6 +343,10 @@ func flattenDatabaseTargetDatabase(apiObject *glue.DatabaseIdentifier) map[strin tfMap["database_name"] = aws.StringValue(v) } + if v := apiObject.Region; v != nil { + tfMap["region"] = aws.StringValue(v) + } + return tfMap } diff --git a/internal/service/glue/catalog_database_test.go b/internal/service/glue/catalog_database_test.go index 85ac56058cd..7a824c17bad 100644 --- a/internal/service/glue/catalog_database_test.go +++ b/internal/service/glue/catalog_database_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -6,6 +9,7 @@ import ( "testing" "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -135,6 +139,7 @@ func TestAccGlueCatalogDatabase_targetDatabase(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "target_database.#", "1"), resource.TestCheckResourceAttrPair(resourceName, "target_database.0.catalog_id", "aws_glue_catalog_database.test2", "catalog_id"), resource.TestCheckResourceAttrPair(resourceName, "target_database.0.database_name", "aws_glue_catalog_database.test2", "name"), + resource.TestCheckResourceAttr(resourceName, "target_database.0.region", ""), ), }, { @@ -150,8 +155,41 @@ func TestAccGlueCatalogDatabase_targetDatabase(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "target_database.#", "1"), resource.TestCheckResourceAttrPair(resourceName, "target_database.0.catalog_id", "aws_glue_catalog_database.test2", "catalog_id"), resource.TestCheckResourceAttrPair(resourceName, "target_database.0.database_name", "aws_glue_catalog_database.test2", "name"), + resource.TestCheckResourceAttr(resourceName, "target_database.0.region", ""), + ), + }, + }, + }) +} + +func TestAccGlueCatalogDatabase_targetDatabaseWithRegion(t *testing.T) { + ctx := acctest.Context(t) + var providers []*schema.Provider + resourceName := "aws_glue_catalog_database.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckMultipleRegion(t, 2) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5FactoriesPlusProvidersAlternate(ctx, t, &providers), + CheckDestroy: 
testAccCheckDatabaseDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCatalogDatabaseConfig_targetWithRegion(rName), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckCatalogDatabaseExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "target_database.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "target_database.0.catalog_id", "aws_glue_catalog_database.test2", "catalog_id"), + resource.TestCheckResourceAttrPair(resourceName, "target_database.0.database_name", "aws_glue_catalog_database.test2", "name"), + resource.TestCheckResourceAttr(resourceName, "target_database.0.region", acctest.AlternateRegion()), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -229,7 +267,7 @@ func TestAccGlueCatalogDatabase_disappears(t *testing.T) { func testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glue_catalog_database" { @@ -317,6 +355,26 @@ resource "aws_glue_catalog_database" "test2" { `, rName) } +func testAccCatalogDatabaseConfig_targetWithRegion(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAlternateRegionProvider(), fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = %[1]q + + target_database { + catalog_id = aws_glue_catalog_database.test2.catalog_id + database_name = aws_glue_catalog_database.test2.name + region = %[2]q + } +} + +resource "aws_glue_catalog_database" "test2" { + provider = "awsalternate" + + name = "%[1]s-2" +} +`, rName, acctest.AlternateRegion())) +} + func testAccCatalogDatabaseConfig_permission(rName, permission string) string { return fmt.Sprintf(` resource "aws_glue_catalog_database" "test" { @@ -374,7 +432,7 @@ func 
testAccCheckCatalogDatabaseExists(ctx context.Context, name string) resourc return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err = tfglue.FindDatabaseByName(ctx, conn, catalogId, dbName) return err diff --git a/internal/service/glue/catalog_table.go b/internal/service/glue/catalog_table.go index d11970f5f48..cfc14358b80 100644 --- a/internal/service/glue/catalog_table.go +++ b/internal/service/glue/catalog_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -362,7 +365,7 @@ func ReadTableID(id string) (string, string, string, error) { func resourceCatalogTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) dbName := d.Get("database_name").(string) name := d.Get("name").(string) @@ -386,7 +389,7 @@ func resourceCatalogTableCreate(ctx context.Context, d *schema.ResourceData, met func resourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, name, err := ReadTableID(d.Id()) if err != nil { @@ -466,7 +469,7 @@ func resourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, meta func resourceCatalogTableUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, _, err := ReadTableID(d.Id()) if err != nil { @@ -488,7 +491,7 @@ func resourceCatalogTableUpdate(ctx context.Context, d *schema.ResourceData, met func 
resourceCatalogTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, name, err := ReadTableID(d.Id()) if err != nil { diff --git a/internal/service/glue/catalog_table_data_source.go b/internal/service/glue/catalog_table_data_source.go index dd31d51382f..6fb5ff5f402 100644 --- a/internal/service/glue/catalog_table_data_source.go +++ b/internal/service/glue/catalog_table_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -316,7 +319,7 @@ func DataSourceCatalogTable() *schema.Resource { } func dataSourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) dbName := d.Get("database_name").(string) @@ -345,7 +348,7 @@ func dataSourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, met dbName) } - return diag.Errorf("Error reading Glue Catalog Table: %s", err) + return diag.Errorf("reading Glue Catalog Table: %s", err) } table := out.Table @@ -366,11 +369,11 @@ func dataSourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, met d.Set("retention", table.Retention) if err := d.Set("storage_descriptor", flattenStorageDescriptor(table.StorageDescriptor)); err != nil { - return diag.Errorf("error setting storage_descriptor: %s", err) + return diag.Errorf("setting storage_descriptor: %s", err) } if err := d.Set("partition_keys", flattenColumns(table.PartitionKeys)); err != nil { - return diag.Errorf("error setting partition_keys: %s", err) + return diag.Errorf("setting partition_keys: %s", err) } d.Set("view_original_text", table.ViewOriginalText) @@ -378,12 +381,12 @@ func 
dataSourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, met d.Set("table_type", table.TableType) if err := d.Set("parameters", aws.StringValueMap(table.Parameters)); err != nil { - return diag.Errorf("error setting parameters: %s", err) + return diag.Errorf("setting parameters: %s", err) } if table.TargetTable != nil { if err := d.Set("target_table", []interface{}{flattenTableTargetTable(table.TargetTable)}); err != nil { - return diag.Errorf("error setting target_table: %s", err) + return diag.Errorf("setting target_table: %s", err) } } else { d.Set("target_table", nil) @@ -396,12 +399,12 @@ func dataSourceCatalogTableRead(ctx context.Context, d *schema.ResourceData, met } partOut, err := conn.GetPartitionIndexesWithContext(ctx, partIndexInput) if err != nil { - return diag.Errorf("error getting Glue Partition Indexes: %s", err) + return diag.Errorf("getting Glue Partition Indexes: %s", err) } if partOut != nil && len(partOut.PartitionIndexDescriptorList) > 0 { if err := d.Set("partition_index", flattenPartitionIndexes(partOut.PartitionIndexDescriptorList)); err != nil { - return diag.Errorf("error setting partition_index: %s", err) + return diag.Errorf("setting partition_index: %s", err) } } diff --git a/internal/service/glue/catalog_table_data_source_test.go b/internal/service/glue/catalog_table_data_source_test.go index 681af5eaa94..17563394bc6 100644 --- a/internal/service/glue/catalog_table_data_source_test.go +++ b/internal/service/glue/catalog_table_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( diff --git a/internal/service/glue/catalog_table_test.go b/internal/service/glue/catalog_table_test.go index ca14ebda89b..3b8a5196e6a 100644 --- a/internal/service/glue/catalog_table_test.go +++ b/internal/service/glue/catalog_table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -1126,7 +1129,7 @@ resource "aws_glue_catalog_table" "test" { func testAccCheckTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glue_catalog_table" { @@ -1168,7 +1171,7 @@ func testAccCheckCatalogTableExists(ctx context.Context, name string) resource.T return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) out, err := tfglue.FindTableByName(ctx, conn, catalogId, dbName, resourceName) if err != nil { return err diff --git a/internal/service/glue/classifier.go b/internal/service/glue/classifier.go index dc10a542e10..0c501929954 100644 --- a/internal/service/glue/classifier.go +++ b/internal/service/glue/classifier.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -189,7 +192,7 @@ func ResourceClassifier() *schema.Resource { func resourceClassifierCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) input := &glue.CreateClassifierInput{} @@ -227,7 +230,7 @@ func resourceClassifierCreate(ctx context.Context, d *schema.ResourceData, meta func resourceClassifierRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) classifier, err := FindClassifierByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -263,7 +266,7 @@ func resourceClassifierRead(ctx context.Context, d *schema.ResourceData, meta in func resourceClassifierUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.UpdateClassifierInput{} @@ -298,7 +301,7 @@ func resourceClassifierUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceClassifierDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Classifier: %s", d.Id()) err := DeleteClassifier(ctx, conn, d.Id()) diff --git a/internal/service/glue/classifier_test.go b/internal/service/glue/classifier_test.go index 07bd3fb61ce..e016a079f2e 100644 --- a/internal/service/glue/classifier_test.go +++ b/internal/service/glue/classifier_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -451,7 +454,7 @@ func testAccCheckClassifierExists(ctx context.Context, resourceName string, clas return fmt.Errorf("No Glue Classifier ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindClassifierByName(ctx, conn, rs.Primary.ID) @@ -471,7 +474,7 @@ func testAccCheckClassifierDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err := tfglue.FindClassifierByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/glue/connection.go b/internal/service/glue/connection.go index 25d0c0d9f89..cb934e2511a 100644 --- a/internal/service/glue/connection.go +++ b/internal/service/glue/connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -110,7 +113,7 @@ func ResourceConnection() *schema.Resource { func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) var catalogID string if v, ok := d.GetOkExists("catalog_id"); ok { @@ -123,7 +126,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta input := &glue.CreateConnectionInput{ CatalogId: aws.String(catalogID), ConnectionInput: expandConnectionInput(d), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating Glue Connection: %s", input) @@ -139,7 +142,7 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, connectionName, err := DecodeConnectionID(d.Id()) if err != nil { @@ -185,7 +188,7 @@ func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta in func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChangesExcept("tags", "tags_all") { catalogID, connectionName, err := DecodeConnectionID(d.Id()) @@ -211,7 +214,7 @@ func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, connectionName, err := DecodeConnectionID(d.Id()) if err != nil { diff --git a/internal/service/glue/connection_data_source.go b/internal/service/glue/connection_data_source.go index 6accb0a2e99..1dfc5264b36 100644 --- a/internal/service/glue/connection_data_source.go +++ b/internal/service/glue/connection_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -85,21 +88,21 @@ func DataSourceConnection() *schema.Resource { } func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig id := d.Get("id").(string) catalogID, connectionName, err := DecodeConnectionID(id) if err != nil { - return diag.Errorf("error decoding Glue Connection %s: %s", id, err) + return diag.Errorf("decoding Glue Connection %s: %s", id, err) } connection, err := FindConnectionByName(ctx, conn, connectionName, catalogID) if err != nil { if tfresource.NotFound(err) { - return diag.Errorf("error Glue Connection (%s) not found", id) + return diag.Errorf("Glue Connection (%s) not found", id) } - return diag.Errorf("error reading Glue Connection (%s): %s", id, err) + return diag.Errorf("reading Glue Connection (%s): %s", id, err) } d.SetId(id) @@ -118,26 +121,26 @@ func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta d.Set("arn", connectionArn) if err := d.Set("connection_properties", aws.StringValueMap(connection.ConnectionProperties)); err != nil { - return diag.Errorf("error setting connection_properties: %s", err) + return diag.Errorf("setting connection_properties: %s", err) } if err := d.Set("physical_connection_requirements", flattenPhysicalConnectionRequirements(connection.PhysicalConnectionRequirements)); err != nil { - return diag.Errorf("error setting physical_connection_requirements: %s", err) + return diag.Errorf("setting physical_connection_requirements: %s", err) } if err := d.Set("match_criteria", flex.FlattenStringList(connection.MatchCriteria)); err != nil { - return diag.Errorf("error setting match_criteria: %s", err) + return diag.Errorf("setting match_criteria: %s", err) } - tags, err := ListTags(ctx, conn, connectionArn) + tags, err := 
listTags(ctx, conn, connectionArn) if err != nil { - return diag.Errorf("error listing tags for Glue Connection (%s): %s", connectionArn, err) + return diag.Errorf("listing tags for Glue Connection (%s): %s", connectionArn, err) } //lintignore:AWSR002 if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/glue/connection_data_source_test.go b/internal/service/glue/connection_data_source_test.go index 6fd42a02586..9664f8b3038 100644 --- a/internal/service/glue/connection_data_source_test.go +++ b/internal/service/glue/connection_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( diff --git a/internal/service/glue/connection_test.go b/internal/service/glue/connection_test.go index dd6a5dd6cf3..607bd3de1a9 100644 --- a/internal/service/glue/connection_test.go +++ b/internal/service/glue/connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -377,7 +380,7 @@ func testAccCheckConnectionExists(ctx context.Context, resourceName string, conn return fmt.Errorf("No Glue Connection ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) catalogID, connectionName, err := tfglue.DecodeConnectionID(rs.Primary.ID) if err != nil { return err @@ -402,7 +405,7 @@ func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) catalogID, connectionName, err := tfglue.DecodeConnectionID(rs.Primary.ID) if err != nil { return err diff --git a/internal/service/glue/consts.go b/internal/service/glue/consts.go index 0e3dcddf53e..d792bce432a 100644 --- a/internal/service/glue/consts.go +++ b/internal/service/glue/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( diff --git a/internal/service/glue/crawler.go b/internal/service/glue/crawler.go index fca96299c87..e5fba168eb4 100644 --- a/internal/service/glue/crawler.go +++ b/internal/service/glue/crawler.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -26,7 +29,7 @@ import ( ) func targets() []string { - return []string{"s3_target", "dynamodb_target", "mongodb_target", "jdbc_target", "catalog_target", "delta_target"} + return []string{"s3_target", "dynamodb_target", "mongodb_target", "jdbc_target", "catalog_target", "delta_target", "iceberg_target"} } // @SDKResource("aws_glue_crawler", name="Crawler") @@ -158,6 +161,35 @@ func ResourceCrawler() *schema.Resource { }, }, }, + "iceberg_target": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + AtLeastOneOf: targets(), + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "connection_name": { + Type: schema.TypeString, + Optional: true, + }, + "exclusions": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "maximum_traversal_depth": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, 20), + }, + "paths": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, "jdbc_target": { Type: schema.TypeList, Optional: true, @@ -368,7 +400,7 @@ func ResourceCrawler() *schema.Resource { func resourceCrawlerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - glueConn := meta.(*conns.AWSClient).GlueConn() + glueConn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) crawlerInput, err := createCrawlerInput(ctx, d, name) @@ -389,6 +421,11 @@ func resourceCrawlerCreate(ctx context.Context, d *schema.ResourceData, meta int return retry.RetryableError(err) } + // InvalidInputException: com.amazonaws.services.glue.model.AccessDeniedException: You need to enable AWS Security Token Service for this region. . Please verify the role's TrustPolicy. 
+ if tfawserr.ErrMessageContains(err, glue.ErrCodeInvalidInputException, "Please verify the role's TrustPolicy") { + return retry.RetryableError(err) + } + // InvalidInputException: Unable to retrieve connection tf-acc-test-8656357591012534997: User: arn:aws:sts::*******:assumed-role/tf-acc-test-8656357591012534997/AWS-Crawler is not authorized to perform: glue:GetConnection on resource: * (Service: AmazonDataCatalog; Status Code: 400; Error Code: AccessDeniedException; Request ID: 4d72b66f-9c75-11e8-9faf-5b526c7be968) if tfawserr.ErrMessageContains(err, glue.ErrCodeInvalidInputException, "is not authorized") { return retry.RetryableError(err) @@ -416,7 +453,7 @@ func resourceCrawlerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceCrawlerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) crawler, err := FindCrawlerByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -482,6 +519,10 @@ func resourceCrawlerRead(ctx context.Context, d *schema.ResourceData, meta inter if err := d.Set("delta_target", flattenDeltaTargets(crawler.Targets.DeltaTargets)); err != nil { return sdkdiag.AppendErrorf(diags, "setting delta_target: %s", err) } + + if err := d.Set("iceberg_target", flattenIcebergTargets(crawler.Targets.IcebergTargets)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting iceberg_target: %s", err) + } } if err := d.Set("lineage_configuration", flattenCrawlerLineageConfiguration(crawler.LineageConfiguration)); err != nil { @@ -501,7 +542,7 @@ func resourceCrawlerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceCrawlerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - glueConn := meta.(*conns.AWSClient).GlueConn() + glueConn := meta.(*conns.AWSClient).GlueConn(ctx) 
name := d.Get("name").(string) if d.HasChangesExcept("tags", "tags_all") { @@ -523,6 +564,11 @@ func resourceCrawlerUpdate(ctx context.Context, d *schema.ResourceData, meta int return retry.RetryableError(err) } + // InvalidInputException: com.amazonaws.services.glue.model.AccessDeniedException: You need to enable AWS Security Token Service for this region. . Please verify the role's TrustPolicy. + if tfawserr.ErrMessageContains(err, glue.ErrCodeInvalidInputException, "Please verify the role's TrustPolicy") { + return retry.RetryableError(err) + } + // InvalidInputException: Unable to retrieve connection tf-acc-test-8656357591012534997: User: arn:aws:sts::*******:assumed-role/tf-acc-test-8656357591012534997/AWS-Crawler is not authorized to perform: glue:GetConnection on resource: * (Service: AmazonDataCatalog; Status Code: 400; Error Code: AccessDeniedException; Request ID: 4d72b66f-9c75-11e8-9faf-5b526c7be968) if tfawserr.ErrMessageContains(err, glue.ErrCodeInvalidInputException, "is not authorized") { return retry.RetryableError(err) @@ -552,7 +598,7 @@ func resourceCrawlerUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceCrawlerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - glueConn := meta.(*conns.AWSClient).GlueConn() + glueConn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Crawler: %s", d.Id()) _, err := glueConn.DeleteCrawlerWithContext(ctx, &glue.DeleteCrawlerInput{ @@ -575,7 +621,7 @@ func createCrawlerInput(ctx context.Context, d *schema.ResourceData, crawlerName Name: aws.String(crawlerName), DatabaseName: aws.String(d.Get("database_name").(string)), Role: aws.String(d.Get("role").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Targets: expandCrawlerTargets(d), } if description, ok := d.GetOk("description"); ok { @@ -726,6 +772,10 @@ func expandCrawlerTargets(d *schema.ResourceData) *glue.CrawlerTargets { 
crawlerTargets.DeltaTargets = expandDeltaTargets(v.([]interface{})) } + if v, ok := d.GetOk("iceberg_target"); ok { + crawlerTargets.IcebergTargets = expandIcebergTargets(v.([]interface{})) + } + return crawlerTargets } @@ -816,7 +866,7 @@ func expandJDBCTarget(cfg map[string]interface{}) *glue.JdbcTarget { } if v, ok := cfg["enable_additional_metadata"].([]interface{}); ok { - target.Exclusions = flex.ExpandStringList(v) + target.EnableAdditionalMetadata = flex.ExpandStringList(v) } if v, ok := cfg["exclusions"].([]interface{}); ok { @@ -910,6 +960,36 @@ func expandDeltaTarget(cfg map[string]interface{}) *glue.DeltaTarget { return target } +func expandIcebergTargets(targets []interface{}) []*glue.IcebergTarget { + if len(targets) < 1 { + return []*glue.IcebergTarget{} + } + + perms := make([]*glue.IcebergTarget, len(targets)) + for i, rawCfg := range targets { + cfg := rawCfg.(map[string]interface{}) + perms[i] = expandIcebergTarget(cfg) + } + return perms +} + +func expandIcebergTarget(cfg map[string]interface{}) *glue.IcebergTarget { + target := &glue.IcebergTarget{ + Paths: flex.ExpandStringSet(cfg["paths"].(*schema.Set)), + MaximumTraversalDepth: aws.Int64(int64(cfg["maximum_traversal_depth"].(int))), + } + + if v, ok := cfg["exclusions"]; ok { + target.Exclusions = flex.ExpandStringList(v.([]interface{})) + } + + if v, ok := cfg["connection_name"].(string); ok { + target.ConnectionName = aws.String(v) + } + + return target +} + func flattenS3Targets(s3Targets []*glue.S3Target) []map[string]interface{} { result := make([]map[string]interface{}, 0) @@ -1005,6 +1085,21 @@ func flattenDeltaTargets(deltaTargets []*glue.DeltaTarget) []map[string]interfac return result } +func flattenIcebergTargets(icebergTargets []*glue.IcebergTarget) []map[string]interface{} { + result := make([]map[string]interface{}, 0) + + for _, icebergTarget := range icebergTargets { + attrs := make(map[string]interface{}) + attrs["connection_name"] = 
aws.StringValue(icebergTarget.ConnectionName) + attrs["maximum_traversal_depth"] = aws.Int64Value(icebergTarget.MaximumTraversalDepth) + attrs["paths"] = flex.FlattenStringSet(icebergTarget.Paths) + attrs["exclusions"] = flex.FlattenStringList(icebergTarget.Exclusions) + + result = append(result, attrs) + } + return result +} + func flattenCrawlerSchemaChangePolicy(cfg *glue.SchemaChangePolicy) []map[string]interface{} { if cfg == nil { return []map[string]interface{}{} diff --git a/internal/service/glue/crawler_test.go b/internal/service/glue/crawler_test.go index 0dec1ba0781..9f3e6bf6769 100644 --- a/internal/service/glue/crawler_test.go +++ b/internal/service/glue/crawler_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -207,6 +210,7 @@ func TestAccGlueCrawler_jdbcTarget(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/%"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.enable_additional_metadata.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "role", rName), resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), @@ -232,6 +236,7 @@ func TestAccGlueCrawler_jdbcTarget(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/table-name"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.enable_additional_metadata.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "role", rName), 
resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), @@ -248,6 +253,32 @@ func TestAccGlueCrawler_jdbcTarget(t *testing.T) { ImportState: true, ImportStateVerify: true, }, + { + Config: testAccCrawlerConfig_jdbcTargetMetadata(rName, jdbcConnectionUrl, "database-name/table-name"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCrawlerExists(ctx, resourceName, &crawler), + acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "glue", fmt.Sprintf("crawler/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "classifiers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "configuration", ""), + resource.TestCheckResourceAttr(resourceName, "database_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "dynamodb_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.exclusions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.path", "database-name/table-name"), + resource.TestCheckResourceAttr(resourceName, "jdbc_target.0.enable_additional_metadata.#", "1"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "role", rName), + resource.TestCheckResourceAttr(resourceName, "s3_target.#", "0"), + resource.TestCheckResourceAttr(resourceName, "schedule", ""), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.delete_behavior", "DEPRECATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "schema_change_policy.0.update_behavior", "UPDATE_IN_DATABASE"), + resource.TestCheckResourceAttr(resourceName, "table_prefix", ""), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, }, }) } 
@@ -560,6 +591,51 @@ func TestAccGlueCrawler_deltaTarget(t *testing.T) { }) } +func TestAccGlueCrawler_icebergTarget(t *testing.T) { + ctx := acctest.Context(t) + var crawler glue.Crawler + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_glue_crawler.test" + + connectionUrl := fmt.Sprintf("mongodb://%s:27017/testdatabase", acctest.RandomDomainName()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCrawlerDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCrawlerConfig_icebergTarget(rName, connectionUrl, "s3://table1", 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckCrawlerExists(ctx, resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.0.maximum_traversal_depth", "1"), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.0.paths.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, "iceberg_target.0.paths.*", "s3://table1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccCrawlerConfig_icebergTarget(rName, connectionUrl, "s3://table2", 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckCrawlerExists(ctx, resourceName, &crawler), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.0.connection_name", rName), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.0.maximum_traversal_depth", "2"), + resource.TestCheckResourceAttr(resourceName, "iceberg_target.0.paths.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, 
"iceberg_target.0.paths.*", "s3://table2"), + ), + }, + }, + }) +} + func TestAccGlueCrawler_s3Target(t *testing.T) { ctx := acctest.Context(t) var crawler glue.Crawler @@ -1620,7 +1696,7 @@ func testAccCheckCrawlerExists(ctx context.Context, resourceName string, crawler return fmt.Errorf("no ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindCrawlerByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -1640,7 +1716,7 @@ func testAccCheckCrawlerDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err := tfglue.FindCrawlerByName(ctx, conn, rs.Primary.ID) if tfresource.NotFound(err) { @@ -1942,6 +2018,38 @@ resource "aws_glue_crawler" "test" { `, rName, jdbcConnectionUrl, path)) } +func testAccCrawlerConfig_jdbcTargetMetadata(rName, jdbcConnectionUrl, path string) string { + return acctest.ConfigCompose(testAccCrawlerConfig_base(rName), fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = %[1]q +} + +resource "aws_glue_connection" "test" { + name = %[1]q + + connection_properties = { + JDBC_CONNECTION_URL = %[1]q + PASSWORD = "testpassword" + USERNAME = "testusername" + } +} + +resource "aws_glue_crawler" "test" { + depends_on = [aws_iam_role_policy_attachment.test-AWSGlueServiceRole] + + database_name = aws_glue_catalog_database.test.name + name = %[1]q + role = aws_iam_role.test.name + + jdbc_target { + connection_name = aws_glue_connection.test.name + path = %[3]q + enable_additional_metadata = ["COMMENTS"] + } +} +`, rName, jdbcConnectionUrl, path)) +} + func testAccCrawlerConfig_jdbcTargetExclusions1(rName, jdbcConnectionUrl, exclusion1 string) string { return acctest.ConfigCompose(testAccCrawlerConfig_base(rName), fmt.Sprintf(` resource "aws_glue_catalog_database" "test" { @@ -2967,6 
+3075,60 @@ resource "aws_glue_crawler" "test" { `, rName, connectionUrl, tableName, createNativeDeltaTable)) } +func testAccCrawlerConfig_icebergTarget(rName, connectionUrl, tableName string, depth int) string { + return acctest.ConfigCompose(testAccCrawlerConfig_base(rName), acctest.ConfigVPCWithSubnets(rName, 2), fmt.Sprintf(` +resource "aws_security_group" "test" { + name = %[1]q + vpc_id = aws_vpc.test.id + + ingress { + from_port = 1 + protocol = "tcp" + self = true + to_port = 65535 + } + + tags = { + Name = %[1]q + } +} + +resource "aws_glue_catalog_database" "test" { + name = %[1]q +} + +resource "aws_glue_connection" "test" { + connection_properties = { + JDBC_ENFORCE_SSL = false + } + + connection_type = "NETWORK" + + name = %[1]q + + physical_connection_requirements { + availability_zone = aws_subnet.test[0].availability_zone + security_group_id_list = [aws_security_group.test.id] + subnet_id = aws_subnet.test[0].id + } +} + +resource "aws_glue_crawler" "test" { + depends_on = [aws_iam_role_policy_attachment.test-AWSGlueServiceRole] + + database_name = aws_glue_catalog_database.test.name + name = %[1]q + role = aws_iam_role.test.name + + iceberg_target { + connection_name = aws_glue_connection.test.name + paths = [%[3]q] + maximum_traversal_depth = %[4]d + } +} +`, rName, connectionUrl, tableName, depth)) +} + func testAccCrawlerConfig_lakeformation(rName string, use bool) string { return acctest.ConfigCompose(testAccCrawlerConfig_base(rName), fmt.Sprintf(` resource "aws_glue_catalog_database" "test" { diff --git a/internal/service/glue/data_catalog_encryption_settings.go b/internal/service/glue/data_catalog_encryption_settings.go index c0a01c85459..0713bad6669 100644 --- a/internal/service/glue/data_catalog_encryption_settings.go +++ b/internal/service/glue/data_catalog_encryption_settings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -84,7 +87,7 @@ func ResourceDataCatalogEncryptionSettings() *schema.Resource { func resourceDataCatalogEncryptionSettingsPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) input := &glue.PutDataCatalogEncryptionSettingsInput{ @@ -109,7 +112,7 @@ func resourceDataCatalogEncryptionSettingsPut(ctx context.Context, d *schema.Res func resourceDataCatalogEncryptionSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetDataCatalogEncryptionSettingsWithContext(ctx, &glue.GetDataCatalogEncryptionSettingsInput{ CatalogId: aws.String(d.Id()), @@ -133,7 +136,7 @@ func resourceDataCatalogEncryptionSettingsRead(ctx context.Context, d *schema.Re func resourceDataCatalogEncryptionSettingsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.PutDataCatalogEncryptionSettingsInput{ CatalogId: aws.String(d.Id()), diff --git a/internal/service/glue/data_catalog_encryption_settings_data_source.go b/internal/service/glue/data_catalog_encryption_settings_data_source.go index 4f63d159620..3b806f8d326 100644 --- a/internal/service/glue/data_catalog_encryption_settings_data_source.go +++ b/internal/service/glue/data_catalog_encryption_settings_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( "context" - "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/glue" @@ -65,7 +67,7 @@ func DataSourceDataCatalogEncryptionSettings() *schema.Resource { } func dataSourceDataCatalogEncryptionSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := d.Get("catalog_id").(string) output, err := conn.GetDataCatalogEncryptionSettingsWithContext(ctx, &glue.GetDataCatalogEncryptionSettingsInput{ @@ -73,14 +75,14 @@ func dataSourceDataCatalogEncryptionSettingsRead(ctx context.Context, d *schema. }) if err != nil { - return diag.FromErr(fmt.Errorf("error reading Glue Data Catalog Encryption Settings (%s): %w", catalogID, err)) + return diag.Errorf("reading Glue Data Catalog Encryption Settings (%s): %s", catalogID, err) } d.SetId(catalogID) d.Set("catalog_id", d.Id()) if output.DataCatalogEncryptionSettings != nil { if err := d.Set("data_catalog_encryption_settings", []interface{}{flattenDataCatalogEncryptionSettings(output.DataCatalogEncryptionSettings)}); err != nil { - return diag.FromErr(fmt.Errorf("error setting data_catalog_encryption_settings: %w", err)) + return diag.Errorf("setting data_catalog_encryption_settings: %s", err) } } else { d.Set("data_catalog_encryption_settings", nil) diff --git a/internal/service/glue/data_catalog_encryption_settings_data_source_test.go b/internal/service/glue/data_catalog_encryption_settings_data_source_test.go index 0397f923f32..e99503de8e2 100644 --- a/internal/service/glue/data_catalog_encryption_settings_data_source_test.go +++ b/internal/service/glue/data_catalog_encryption_settings_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( diff --git a/internal/service/glue/data_catalog_encryption_settings_test.go b/internal/service/glue/data_catalog_encryption_settings_test.go index c5a9f082735..ad36882afc0 100644 --- a/internal/service/glue/data_catalog_encryption_settings_test.go +++ b/internal/service/glue/data_catalog_encryption_settings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -90,7 +93,7 @@ func testAccCheckDataCatalogEncryptionSettingsExists(ctx context.Context, resour return fmt.Errorf("No Glue Data Catalog Encryption Settings ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetDataCatalogEncryptionSettingsWithContext(ctx, &glue.GetDataCatalogEncryptionSettingsInput{ CatalogId: aws.String(rs.Primary.ID), diff --git a/internal/service/glue/data_quality_ruleset.go b/internal/service/glue/data_quality_ruleset.go new file mode 100644 index 00000000000..5f36599e337 --- /dev/null +++ b/internal/service/glue/data_quality_ruleset.go @@ -0,0 +1,250 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package glue + +import ( + "context" + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_glue_data_quality_ruleset", name="Data Quality Ruleset") +// @Tags(identifierAttribute="arn") +func ResourceDataQualityRuleset() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceDataQualityRulesetCreate, + ReadWithoutTimeout: resourceDataQualityRulesetRead, + UpdateWithoutTimeout: resourceDataQualityRulesetUpdate, + DeleteWithoutTimeout: resourceDataQualityRulesetDelete, + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + CustomizeDiff: verify.SetTagsDiff, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "created_on": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 2048), + }, + "last_modified_on": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "recommendation_run_id": { + Type: schema.TypeString, + Computed: true, + }, + "ruleset": { + Type: schema.TypeString, + Required: true, + 
ValidateFunc: validation.StringLenBetween(1, 65536), + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "target_table": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "catalog_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "database_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "table_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + }, + }, + }, + }, + } +} + +func resourceDataQualityRulesetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).GlueConn(ctx) + + name := d.Get("name").(string) + + input := &glue.CreateDataQualityRulesetInput{ + Name: aws.String(name), + Ruleset: aws.String(d.Get("ruleset").(string)), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + if v, ok := d.GetOk("target_table"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.TargetTable = expandTargetTable(v.([]interface{})[0].(map[string]interface{})) + } + + _, err := conn.CreateDataQualityRulesetWithContext(ctx, input) + if err != nil { + return sdkdiag.AppendErrorf(diags, "creating Glue Data Quality Ruleset (%s): %s", name, err) + } + + d.SetId(name) + + return append(diags, resourceDataQualityRulesetRead(ctx, d, meta)...) 
+} + +func resourceDataQualityRulesetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).GlueConn(ctx) + + name := d.Id() + + dataQualityRuleset, err := FindDataQualityRulesetByName(ctx, conn, name) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Glue Data Quality Ruleset (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading Glue Data Quality Ruleset (%s): %s", d.Id(), err) + } + + dataQualityRulesetArn := arn.ARN{ + Partition: meta.(*conns.AWSClient).Partition, + Service: "glue", + Region: meta.(*conns.AWSClient).Region, + AccountID: meta.(*conns.AWSClient).AccountID, + Resource: fmt.Sprintf("dataQualityRuleset/%s", aws.StringValue(dataQualityRuleset.Name)), + }.String() + + d.Set("arn", dataQualityRulesetArn) + d.Set("created_on", dataQualityRuleset.CreatedOn.Format(time.RFC3339)) + d.Set("name", dataQualityRuleset.Name) + d.Set("description", dataQualityRuleset.Description) + d.Set("last_modified_on", dataQualityRuleset.LastModifiedOn.Format(time.RFC3339)) + d.Set("recommendation_run_id", dataQualityRuleset.RecommendationRunId) + d.Set("ruleset", dataQualityRuleset.Ruleset) + + if err := d.Set("target_table", flattenTargetTable(dataQualityRuleset.TargetTable)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting target_table: %s", err) + } + + return diags +} + +func resourceDataQualityRulesetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).GlueConn(ctx) + + if d.HasChanges("description", "ruleset") { + name := d.Id() + + input := &glue.UpdateDataQualityRulesetInput{ + Name: aws.String(name), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + if v, ok := d.GetOk("ruleset"); ok { + input.Ruleset =
aws.String(v.(string)) + } + + if _, err := conn.UpdateDataQualityRulesetWithContext(ctx, input); err != nil { + return sdkdiag.AppendErrorf(diags, "updating Glue Data Quality Ruleset (%s): %s", d.Id(), err) + } + } + + return append(diags, resourceDataQualityRulesetRead(ctx, d, meta)...) +} + +func resourceDataQualityRulesetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).GlueConn(ctx) + + log.Printf("[DEBUG] Deleting Glue Data Quality Ruleset: %s", d.Id()) + _, err := conn.DeleteDataQualityRulesetWithContext(ctx, &glue.DeleteDataQualityRulesetInput{ + Name: aws.String(d.Get("name").(string)), + }) + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Glue Data Quality Ruleset (%s): %s", d.Id(), err) + } + + return diags +} + +func expandTargetTable(tfMap map[string]interface{}) *glue.DataQualityTargetTable { + if tfMap == nil { + return nil + } + + apiObject := &glue.DataQualityTargetTable{ + DatabaseName: aws.String(tfMap["database_name"].(string)), + TableName: aws.String(tfMap["table_name"].(string)), + } + + if v, ok := tfMap["catalog_id"].(string); ok && v != "" { + apiObject.CatalogId = aws.String(v) + } + + return apiObject +} + +func flattenTargetTable(apiObject *glue.DataQualityTargetTable) []interface{} { + if apiObject == nil { + return []interface{}{} + } + + tfMap := map[string]interface{}{ + "database_name": aws.StringValue(apiObject.DatabaseName), + "table_name": aws.StringValue(apiObject.TableName), + } + + if v := apiObject.CatalogId; v != nil { + tfMap["catalog_id"] = aws.StringValue(v) + } + + return []interface{}{tfMap} +} diff --git a/internal/service/glue/data_quality_ruleset_test.go b/internal/service/glue/data_quality_ruleset_test.go new file mode 100644 index 00000000000..63a41b25d06 --- /dev/null +++ b/internal/service/glue/data_quality_ruleset_test.go @@ -0,0 +1,426 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + +package glue_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/glue" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfglue "github.com/hashicorp/terraform-provider-aws/internal/service/glue" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccGlueDataQualityRuleset_basic(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ruleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + resourceName := "aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_basic(rName, ruleset), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "glue", fmt.Sprintf("dataQualityRuleset/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrSet(resourceName, "created_on"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttrSet(resourceName, "last_modified_on"), + resource.TestCheckResourceAttr(resourceName, "ruleset", ruleset), + resource.TestCheckResourceAttr(resourceName, "target_table.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + 
), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccGlueDataQualityRuleset_updateRuleset(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + originalRuleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + updatedRuleset := "Rules = [Completeness \"colA\" between 0.5 and 1.0]" + resourceName := "aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_basic(rName, originalRuleset), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "ruleset", originalRuleset), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataQualityRulesetConfig_basic(rName, updatedRuleset), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "ruleset", updatedRuleset), + ), + }, + }, + }) +} + +func TestAccGlueDataQualityRuleset_updateDescription(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ruleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + originalDescription := "original description" + updatedDescription := "updated description" + resourceName := "aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_description(rName, ruleset, originalDescription), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "description", originalDescription), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataQualityRulesetConfig_description(rName, ruleset, updatedDescription), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "description", updatedDescription), + ), + }, + }, + }) +} + +func TestAccGlueDataQualityRuleset_targetTableRequired(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName3 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ruleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + resourceName := "aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_targetTable(rName, rName2, rName3, ruleset), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "target_table.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_table.0.catalog_id", ""), + resource.TestCheckResourceAttrPair(resourceName, "target_table.0.database_name", 
"aws_glue_catalog_database.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "target_table.0.table_name", "aws_glue_catalog_table.test", "name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccGlueDataQualityRuleset_targetTableFull(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName3 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ruleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + resourceName := "aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_targetTableFull(rName, rName2, rName3, ruleset), + Destroy: false, + Check: resource.ComposeTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "target_table.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "target_table.0.catalog_id", "aws_glue_catalog_table.test", "catalog_id"), + resource.TestCheckResourceAttrPair(resourceName, "target_table.0.database_name", "aws_glue_catalog_database.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "target_table.0.table_name", "aws_glue_catalog_table.test", "name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccGlueDataQualityRuleset_tags(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ruleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + resourceName := 
"aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_tags1(rName, ruleset, "key1", "value1"), + Destroy: false, + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDataQualityRulesetConfig_tags2(rName, ruleset, "key1", "value1updated", "key2", "value2"), + Destroy: false, + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccDataQualityRulesetConfig_tags1(rName, ruleset, "key2", "value2"), + Destroy: false, + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func TestAccGlueDataQualityRuleset_disappears(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ruleset := "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + resourceName := "aws_glue_data_quality_ruleset.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + 
ErrorCheck: acctest.ErrorCheck(t, glue.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataQualityRulesetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataQualityRulesetConfig_basic(rName, ruleset), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataQualityRulesetExists(ctx, resourceName), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfglue.ResourceDataQualityRuleset(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckDataQualityRulesetExists(ctx context.Context, n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) + + resp, err := tfglue.FindDataQualityRulesetByName(ctx, conn, rs.Primary.ID) + + if err != nil { + return err + } + + if resp == nil { + return fmt.Errorf("No Glue Data Quality Ruleset Found") + } + + if aws.StringValue(resp.Name) != rs.Primary.ID { + return fmt.Errorf("Glue Data Quality Ruleset Mismatch - existing: %q, state: %q", + aws.StringValue(resp.Name), rs.Primary.ID) + } + + return nil + } +} + +func testAccCheckDataQualityRulesetDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_glue_data_quality_ruleset" { + continue + } + + _, err := tfglue.FindDataQualityRulesetByName(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + return fmt.Errorf("Glue Data Quality Ruleset %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccDataQualityRulesetConfig_basic(rName, ruleset string) string { + return 
fmt.Sprintf(` +resource "aws_glue_data_quality_ruleset" "test" { + name = %[1]q + ruleset = %[2]q +} +`, rName, ruleset) +} + +func testAccDataQualityRulesetConfig_description(rName, ruleset, description string) string { + return fmt.Sprintf(` +resource "aws_glue_data_quality_ruleset" "test" { + name = %[1]q + ruleset = %[2]q + description = %[3]q +} +`, rName, ruleset, description) +} + +func testAccDataQualityRulesetConfigTargetTableConfigBasic(rName, rName2 string) string { + return fmt.Sprintf(` +resource "aws_glue_catalog_database" "test" { + name = %[1]q +} + +resource "aws_glue_catalog_table" "test" { + name = %[2]q + database_name = aws_glue_catalog_database.test.name +} +`, rName, rName2) +} + +func testAccDataQualityRulesetConfig_targetTable(rName, rName2, rName3, ruleset string) string { + return acctest.ConfigCompose( + testAccDataQualityRulesetConfigTargetTableConfigBasic(rName2, rName3), + fmt.Sprintf(` +resource "aws_glue_data_quality_ruleset" "test" { + name = %[1]q + ruleset = %[2]q + + target_table { + database_name = aws_glue_catalog_database.test.name + table_name = aws_glue_catalog_table.test.name + } +} +`, rName, ruleset)) +} + +func testAccDataQualityRulesetConfig_targetTableFull(rName, rName2, rName3, ruleset string) string { + return acctest.ConfigCompose( + testAccDataQualityRulesetConfigTargetTableConfigBasic(rName2, rName3), + fmt.Sprintf(` +resource "aws_glue_data_quality_ruleset" "test" { + name = %[1]q + ruleset = %[2]q + + target_table { + catalog_id = aws_glue_catalog_table.test.catalog_id + database_name = aws_glue_catalog_database.test.name + table_name = aws_glue_catalog_table.test.name + } +} +`, rName, ruleset)) +} + +func testAccDataQualityRulesetConfig_tags1(rName, ruleset, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_glue_data_quality_ruleset" "test" { + name = %[1]q + ruleset = %[2]q + + tags = { + %[3]q = %[4]q + } +} +`, rName, ruleset, tagKey1, tagValue1) +} + +func 
testAccDataQualityRulesetConfig_tags2(rName, ruleset, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_glue_data_quality_ruleset" "test" { + name = %[1]q + ruleset = %[2]q + + tags = { + %[3]q = %[4]q + %[5]q = %[6]q + } +} +`, rName, ruleset, tagKey1, tagValue1, tagKey2, tagValue2) +} diff --git a/internal/service/glue/dev_endpoint.go b/internal/service/glue/dev_endpoint.go index fc40d95f859..cf463a888d2 100644 --- a/internal/service/glue/dev_endpoint.go +++ b/internal/service/glue/dev_endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -43,7 +46,7 @@ func ResourceDevEndpoint() *schema.Resource { "arguments": { Type: schema.TypeMap, Optional: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "arn": { Type: schema.TypeString, @@ -171,13 +174,13 @@ func ResourceDevEndpoint() *schema.Resource { func resourceDevEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) input := &glue.CreateDevEndpointInput{ EndpointName: aws.String(name), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("arguments"); ok { @@ -270,7 +273,7 @@ func resourceDevEndpointCreate(ctx context.Context, d *schema.ResourceData, meta func resourceDevEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) endpoint, err := FindDevEndpointByName(ctx, conn, d.Id()) @@ -389,7 +392,7 @@ func resourceDevEndpointRead(ctx context.Context, d *schema.ResourceData, meta i func resourceDevEndpointUpdate(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.UpdateDevEndpointInput{ EndpointName: aws.String(d.Get("name").(string)), @@ -488,7 +491,7 @@ func resourceDevEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceDevEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[INFO] Deleting Glue Dev Endpoint: %s", d.Id()) _, err := conn.DeleteDevEndpointWithContext(ctx, &glue.DeleteDevEndpointInput{ diff --git a/internal/service/glue/dev_endpoint_test.go b/internal/service/glue/dev_endpoint_test.go index 57b7ffbe2d9..2a8f3155522 100644 --- a/internal/service/glue/dev_endpoint_test.go +++ b/internal/service/glue/dev_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -587,7 +590,7 @@ func testAccCheckDevEndpointExists(ctx context.Context, n string, v *glue.DevEnd return fmt.Errorf("No Glue Dev Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindDevEndpointByName(ctx, conn, rs.Primary.ID) @@ -603,7 +606,7 @@ func testAccCheckDevEndpointExists(ctx context.Context, n string, v *glue.DevEnd func testAccCheckDevEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glue_dev_endpoint" { diff --git a/internal/service/glue/find.go b/internal/service/glue/find.go index d32d1ea8a9b..5165d243278 100644 --- a/internal/service/glue/find.go +++ b/internal/service/glue/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -85,6 +88,30 @@ func FindDatabaseByName(ctx context.Context, conn *glue.Glue, catalogID, name st return output, nil } +func FindDataQualityRulesetByName(ctx context.Context, conn *glue.Glue, name string) (*glue.GetDataQualityRulesetOutput, error) { + input := &glue.GetDataQualityRulesetInput{ + Name: aws.String(name), + } + + output, err := conn.GetDataQualityRulesetWithContext(ctx, input) + if tfawserr.ErrCodeEquals(err, glue.ErrCodeEntityNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + // FindTableByName returns the Table corresponding to the specified name. 
func FindTableByName(ctx context.Context, conn *glue.Glue, catalogID, dbName, name string) (*glue.GetTableOutput, error) { input := &glue.GetTableInput{ diff --git a/internal/service/glue/generate.go b/internal/service/glue/generate.go index 71f5ff45b88..6e249ac625f 100644 --- a/internal/service/glue/generate.go +++ b/internal/service/glue/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=GetTags -ServiceTagsMap -TagInTagsElem=TagsToAdd -UntagInTagsElem=TagsToRemove -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package glue diff --git a/internal/service/glue/glue_test.go b/internal/service/glue/glue_test.go index 60b68bc3e81..46adaf31e68 100644 --- a/internal/service/glue/glue_test.go +++ b/internal/service/glue/glue_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( diff --git a/internal/service/glue/id.go b/internal/service/glue/id.go index 24ace1058b9..2bd7a22ffa1 100644 --- a/internal/service/glue/id.go +++ b/internal/service/glue/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( diff --git a/internal/service/glue/job.go b/internal/service/glue/job.go index 167399dc948..b9d3776fc82 100644 --- a/internal/service/glue/job.go +++ b/internal/service/glue/job.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -176,14 +179,14 @@ func ResourceJob() *schema.Resource { func resourceJobCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) input := &glue.CreateJobInput{ Command: expandJobCommand(d.Get("command").([]interface{})), Name: aws.String(name), Role: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("connections"); ok { @@ -258,7 +261,7 @@ func resourceJobCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceJobRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) job, err := FindJobByName(ctx, conn, d.Id()) @@ -311,7 +314,7 @@ func resourceJobRead(ctx context.Context, d *schema.ResourceData, meta interface func resourceJobUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChangesExcept("tags", "tags_all") { jobUpdate := &glue.JobUpdate{ @@ -395,7 +398,7 @@ func resourceJobUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceJobDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Job: %s", d.Id()) _, err := conn.DeleteJobWithContext(ctx, &glue.DeleteJobInput{ diff --git a/internal/service/glue/job_test.go b/internal/service/glue/job_test.go index 20b99353174..72c49fc7bb6 100644 --- 
a/internal/service/glue/job_test.go +++ b/internal/service/glue/job_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -772,7 +775,7 @@ func testAccCheckJobExists(ctx context.Context, n string, v *glue.Job) resource. return fmt.Errorf("No Glue Job ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindJobByName(ctx, conn, rs.Primary.ID) @@ -793,7 +796,7 @@ func testAccCheckJobDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err := tfglue.FindJobByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/glue/ml_transform.go b/internal/service/glue/ml_transform.go index 188848e14e5..3fba5b87076 100644 --- a/internal/service/glue/ml_transform.go +++ b/internal/service/glue/ml_transform.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -183,12 +186,12 @@ func ResourceMLTransform() *schema.Resource { func resourceMLTransformCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.CreateMLTransformInput{ Name: aws.String(d.Get("name").(string)), Role: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Timeout: aws.Int64(int64(d.Get("timeout").(int))), InputRecordTables: expandMLTransformInputRecordTables(d.Get("input_record_tables").([]interface{})), Parameters: expandMLTransformParameters(d.Get("parameters").([]interface{})), @@ -231,7 +234,7 @@ func resourceMLTransformCreate(ctx context.Context, d *schema.ResourceData, meta func resourceMLTransformRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.GetMLTransformInput{ TransformId: aws.String(d.Id()), @@ -293,7 +296,7 @@ func resourceMLTransformRead(ctx context.Context, d *schema.ResourceData, meta i func resourceMLTransformUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChanges("description", "glue_version", "max_capacity", "max_retries", "number_of_workers", "role_arn", "timeout", "worker_type", "parameters") { @@ -343,7 +346,7 @@ func resourceMLTransformUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceMLTransformDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) 
log.Printf("[DEBUG] Deleting Glue ML Transform: %s", d.Id()) diff --git a/internal/service/glue/ml_transform_test.go b/internal/service/glue/ml_transform_test.go index 249856c74de..fe204ea95b5 100644 --- a/internal/service/glue/ml_transform_test.go +++ b/internal/service/glue/ml_transform_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -434,7 +437,7 @@ func testAccCheckMLTransformExists(ctx context.Context, resourceName string, mlT return fmt.Errorf("No Glue Job ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetMLTransformWithContext(ctx, &glue.GetMLTransformInput{ TransformId: aws.String(rs.Primary.ID), @@ -463,7 +466,7 @@ func testAccCheckMLTransformDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetMLTransformWithContext(ctx, &glue.GetMLTransformInput{ TransformId: aws.String(rs.Primary.ID), diff --git a/internal/service/glue/partition.go b/internal/service/glue/partition.go index ae6aa451cfd..821e839946a 100644 --- a/internal/service/glue/partition.go +++ b/internal/service/glue/partition.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -203,7 +206,7 @@ func ResourcePartition() *schema.Resource { func resourcePartitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) dbName := d.Get("database_name").(string) tableName := d.Get("table_name").(string) @@ -229,7 +232,7 @@ func resourcePartitionCreate(ctx context.Context, d *schema.ResourceData, meta i func resourcePartitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Reading Glue Partition: %s", d.Id()) partition, err := FindPartitionByValues(ctx, conn, d.Id()) @@ -272,7 +275,7 @@ func resourcePartitionRead(ctx context.Context, d *schema.ResourceData, meta int func resourcePartitionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, tableName, values, err := readPartitionID(d.Id()) if err != nil { @@ -296,7 +299,7 @@ func resourcePartitionUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourcePartitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, tableName, values, err := readPartitionID(d.Id()) if err != nil { diff --git a/internal/service/glue/partition_index.go b/internal/service/glue/partition_index.go index 4778962d5b5..d8541bba072 100644 --- a/internal/service/glue/partition_index.go +++ 
b/internal/service/glue/partition_index.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -82,7 +85,7 @@ func ResourcePartitionIndex() *schema.Resource { func resourcePartitionIndexCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) dbName := d.Get("database_name").(string) tableName := d.Get("table_name").(string) @@ -111,7 +114,7 @@ func resourcePartitionIndexCreate(ctx context.Context, d *schema.ResourceData, m func resourcePartitionIndexRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, tableName, _, err := readPartitionIndexID(d.Id()) if err != nil { @@ -143,7 +146,7 @@ func resourcePartitionIndexRead(ctx context.Context, d *schema.ResourceData, met func resourcePartitionIndexDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, tableName, partIndex, err := readPartitionIndexID(d.Id()) if err != nil { diff --git a/internal/service/glue/partition_index_test.go b/internal/service/glue/partition_index_test.go index 90a7a663e22..18610c58bab 100644 --- a/internal/service/glue/partition_index_test.go +++ b/internal/service/glue/partition_index_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -223,7 +226,7 @@ resource "aws_glue_partition_index" "test" { func testAccCheckPartitionIndexDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glue_partition_index" { @@ -255,7 +258,7 @@ func testAccCheckPartitionIndexExists(ctx context.Context, name string) resource return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) out, err := tfglue.FindPartitionIndexByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/glue/partition_test.go b/internal/service/glue/partition_test.go index eeba0c91b23..3b2f4e53bb1 100644 --- a/internal/service/glue/partition_test.go +++ b/internal/service/glue/partition_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -175,7 +178,7 @@ func TestAccGluePartition_Disappears_table(t *testing.T) { func testAccCheckPartitionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glue_partition" { @@ -206,7 +209,7 @@ func testAccCheckPartitionExists(ctx context.Context, name string) resource.Test return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) out, err := tfglue.FindPartitionByValues(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/glue/registry.go b/internal/service/glue/registry.go index b58f59de9c7..a99b4b3944c 100644 --- a/internal/service/glue/registry.go +++ b/internal/service/glue/registry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -59,11 +62,11 @@ func ResourceRegistry() *schema.Resource { func resourceRegistryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.CreateRegistryInput{ RegistryName: aws.String(d.Get("registry_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -82,7 +85,7 @@ func resourceRegistryCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceRegistryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) output, err := FindRegistryByID(ctx, conn, d.Id()) if err != nil { @@ -110,7 +113,7 @@ func resourceRegistryRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceRegistryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChanges("description") { input := &glue.UpdateRegistryInput{ @@ -133,7 +136,7 @@ func resourceRegistryUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceRegistryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Registry: %s", d.Id()) input := &glue.DeleteRegistryInput{ diff --git a/internal/service/glue/registry_test.go b/internal/service/glue/registry_test.go index 9e3bfa9368a..f1e23b76b74 100644 --- a/internal/service/glue/registry_test.go +++ b/internal/service/glue/registry_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -156,7 +159,7 @@ func TestAccGlueRegistry_disappears(t *testing.T) { } func testAccPreCheckRegistry(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err := conn.ListRegistriesWithContext(ctx, &glue.ListRegistriesInput{}) @@ -181,7 +184,7 @@ func testAccCheckRegistryExists(ctx context.Context, resourceName string, regist return fmt.Errorf("No Glue Registry ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindRegistryByID(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -207,7 +210,7 @@ func testAccCheckRegistryDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindRegistryByID(ctx, conn, rs.Primary.ID) if err != nil { if tfawserr.ErrCodeEquals(err, glue.ErrCodeEntityNotFoundException) { diff --git a/internal/service/glue/resource_policy.go b/internal/service/glue/resource_policy.go index 40e782275a8..8f5f5ec3ebe 100644 --- a/internal/service/glue/resource_policy.go +++ b/internal/service/glue/resource_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -51,7 +54,7 @@ func ResourceResourcePolicy() *schema.Resource { func resourceResourcePolicyPut(condition string) func(context.Context, *schema.ResourceData, interface{}) diag.Diagnostics { return func(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -80,7 +83,7 @@ func resourceResourcePolicyPut(condition string) func(context.Context, *schema.R func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) resourcePolicy, err := conn.GetResourcePolicyWithContext(ctx, &glue.GetResourcePolicyInput{}) if tfawserr.ErrCodeEquals(err, glue.ErrCodeEntityNotFoundException) { @@ -109,7 +112,7 @@ func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, met func resourceResourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) _, err := conn.DeleteResourcePolicyWithContext(ctx, &glue.DeleteResourcePolicyInput{}) if err != nil { diff --git a/internal/service/glue/resource_policy_test.go b/internal/service/glue/resource_policy_test.go index 89bb8ea1eaa..c65bf60464d 100644 --- a/internal/service/glue/resource_policy_test.go +++ b/internal/service/glue/resource_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -164,7 +167,7 @@ func testAccResourcePolicy(ctx context.Context, n string, action string) resourc return fmt.Errorf("No policy id set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) policy, err := conn.GetResourcePolicyWithContext(ctx, &glue.GetResourcePolicyInput{}) if err != nil { @@ -189,7 +192,7 @@ func testAccResourcePolicy(ctx context.Context, n string, action string) resourc func testAccCheckResourcePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) policy, err := conn.GetResourcePolicyWithContext(ctx, &glue.GetResourcePolicyInput{}) diff --git a/internal/service/glue/schema.go b/internal/service/glue/schema.go index f52178ae629..2fe248cd662 100644 --- a/internal/service/glue/schema.go +++ b/internal/service/glue/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -99,13 +102,13 @@ func ResourceSchema() *schema.Resource { func resourceSchemaCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.CreateSchemaInput{ SchemaName: aws.String(d.Get("schema_name").(string)), SchemaDefinition: aws.String(d.Get("schema_definition").(string)), DataFormat: aws.String(d.Get("data_format").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("registry_arn"); ok { @@ -137,7 +140,7 @@ func resourceSchemaCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSchemaRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) output, err := FindSchemaByID(ctx, conn, d.Id()) if err != nil { @@ -179,7 +182,7 @@ func resourceSchemaRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceSchemaUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.UpdateSchemaInput{ SchemaId: createSchemaID(d.Id()), @@ -234,7 +237,7 @@ func resourceSchemaUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSchemaDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Schema: %s", d.Id()) input := &glue.DeleteSchemaInput{ diff --git a/internal/service/glue/schema_test.go b/internal/service/glue/schema_test.go index 120b7453ad1..faceb3a989d 100644 --- 
a/internal/service/glue/schema_test.go +++ b/internal/service/glue/schema_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -326,7 +329,7 @@ func TestAccGlueSchema_Disappears_registry(t *testing.T) { } func testAccPreCheckSchema(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err := conn.ListRegistriesWithContext(ctx, &glue.ListRegistriesInput{}) @@ -351,7 +354,7 @@ func testAccCheckSchemaExists(ctx context.Context, resourceName string, schema * return fmt.Errorf("No Glue Schema ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindSchemaByID(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -377,7 +380,7 @@ func testAccCheckSchemaDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindSchemaByID(ctx, conn, rs.Primary.ID) if err != nil { if tfawserr.ErrCodeEquals(err, glue.ErrCodeEntityNotFoundException) { diff --git a/internal/service/glue/script_data_source.go b/internal/service/glue/script_data_source.go index 3e93a89be2e..f8209c43f39 100644 --- a/internal/service/glue/script_data_source.go +++ b/internal/service/glue/script_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -102,7 +105,7 @@ func DataSourceScript() *schema.Resource { func dataSourceScriptRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) dagEdge := d.Get("dag_edge").([]interface{}) dagNode := d.Get("dag_node").([]interface{}) diff --git a/internal/service/glue/script_data_source_test.go b/internal/service/glue/script_data_source_test.go index 369535c0eb3..6756bf53c90 100644 --- a/internal/service/glue/script_data_source_test.go +++ b/internal/service/glue/script_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( diff --git a/internal/service/glue/security_configuration.go b/internal/service/glue/security_configuration.go index 7cc615c8d0a..6979415f167 100644 --- a/internal/service/glue/security_configuration.go +++ b/internal/service/glue/security_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -123,7 +126,7 @@ func ResourceSecurityConfiguration() *schema.Resource { func resourceSecurityConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) input := &glue.CreateSecurityConfigurationInput{ @@ -144,7 +147,7 @@ func resourceSecurityConfigurationCreate(ctx context.Context, d *schema.Resource func resourceSecurityConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.GetSecurityConfigurationInput{ Name: aws.String(d.Id()), @@ -181,7 +184,7 @@ func resourceSecurityConfigurationRead(ctx context.Context, d *schema.ResourceDa func resourceSecurityConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Security Configuration: %s", d.Id()) err := DeleteSecurityConfiguration(ctx, conn, d.Id()) diff --git a/internal/service/glue/security_configuration_test.go b/internal/service/glue/security_configuration_test.go index 9ed6726c9e5..835855708fb 100644 --- a/internal/service/glue/security_configuration_test.go +++ b/internal/service/glue/security_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -196,7 +199,7 @@ func testAccCheckSecurityConfigurationExists(ctx context.Context, resourceName s return fmt.Errorf("No Glue Security Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetSecurityConfigurationWithContext(ctx, &glue.GetSecurityConfigurationInput{ Name: aws.String(rs.Primary.ID), @@ -225,7 +228,7 @@ func testAccCheckSecurityConfigurationDestroy(ctx context.Context) resource.Test continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetSecurityConfigurationWithContext(ctx, &glue.GetSecurityConfigurationInput{ Name: aws.String(rs.Primary.ID), diff --git a/internal/service/glue/service_package_gen.go b/internal/service/glue/service_package_gen.go index b992109c3bf..781259dd492 100644 --- a/internal/service/glue/service_package_gen.go +++ b/internal/service/glue/service_package_gen.go @@ -5,6 +5,10 @@ package glue import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + glue_sdkv1 "github.com/aws/aws-sdk-go/service/glue" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -78,6 +82,14 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka Factory: ResourceDataCatalogEncryptionSettings, TypeName: "aws_glue_data_catalog_encryption_settings", }, + { + Factory: ResourceDataQualityRuleset, + TypeName: "aws_glue_data_quality_ruleset", + Name: "Data Quality Ruleset", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, { Factory: ResourceDevEndpoint, TypeName: "aws_glue_dev_endpoint", @@ -161,4 +173,13 @@ func (p 
*servicePackage) ServicePackageName() string { return names.Glue } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*glue_sdkv1.Glue, error) { + sess := config["session"].(*session_sdkv1.Session) + + return glue_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/glue/status.go b/internal/service/glue/status.go index 1a7829b8326..00c9a40ba93 100644 --- a/internal/service/glue/status.go +++ b/internal/service/glue/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( diff --git a/internal/service/glue/sweep.go b/internal/service/glue/sweep.go index c84d5884cbe..61e3ecbfbec 100644 --- a/internal/service/glue/sweep.go +++ b/internal/service/glue/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,7 +16,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -81,11 +83,11 @@ func init() { func sweepCatalogDatabases(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -119,7 +121,7 @@ func sweepCatalogDatabases(region string) error { return fmt.Errorf("Error retrieving Glue Catalog Databases: %s", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Catalog Databases: %w", err)) } @@ -128,11 +130,11 @@ func sweepCatalogDatabases(region string) error { func sweepClassifiers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -176,7 +178,7 @@ func sweepClassifiers(region string) error { return fmt.Errorf("Error retrieving Glue Classifiers: %s", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, 
sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Classifiers: %w", err)) } @@ -185,12 +187,12 @@ func sweepConnections(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() - catalogID := client.(*conns.AWSClient).AccountID + conn := client.GlueConn(ctx) + catalogID := client.AccountID sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -223,7 +225,7 @@ func sweepConnections(region string) error { return fmt.Errorf("Error retrieving Glue Connections: %s", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Connections: %w", err)) } @@ -232,11 +234,11 @@ func sweepCrawlers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -266,7 +268,7 @@ func sweepCrawlers(region string) error { return fmt.Errorf("Error retrieving Glue Crawlers: %s", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Crawlers: %w", err)) } @@ -275,12 +277,12 @@ func
sweepCrawlers(region string) error { func sweepDevEndpoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &glue.GetDevEndpointsInput{} - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.GetDevEndpointsPagesWithContext(ctx, input, func(page *glue.GetDevEndpointsOutput, lastPage bool) bool { @@ -314,7 +316,7 @@ func sweepDevEndpoints(region string) error { return fmt.Errorf("error listing Glue Dev Endpoints (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Glue Dev Endpoints (%s): %w", region, err) @@ -325,12 +327,12 @@ func sweepDevEndpoints(region string) error { func sweepJobs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &glue.GetJobsInput{} - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.GetJobsPagesWithContext(ctx, input, func(page *glue.GetJobsOutput, lastPage bool) bool { @@ -358,7 +360,7 @@ func sweepJobs(region string) error { return fmt.Errorf("error listing Glue Jobs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Glue Jobs (%s): %w", region, err) @@ -369,11 +371,11 @@ func sweepJobs(region string) error { func sweepMLTransforms(region string) error { ctx := sweep.Context(region) - client, err := 
sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -404,7 +406,7 @@ func sweepMLTransforms(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error retrieving Glue ML Transforms: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue ML Transforms: %w", err)) } @@ -413,11 +415,11 @@ func sweepMLTransforms(region string) error { func sweepRegistry(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -440,7 +442,7 @@ func sweepRegistry(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Registry: %w", err)) } @@ -449,11 +451,11 @@ func sweepRegistry(region string) error { func sweepSchema(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := 
client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -476,7 +478,7 @@ func sweepSchema(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Schemas: %w", err)) } @@ -485,11 +487,11 @@ func sweepSecurityConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) input := &glue.GetSecurityConfigurationsInput{} @@ -527,11 +529,11 @@ func sweepTriggers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -562,7 +564,7 @@ func sweepTriggers(region string) error { return fmt.Errorf("Error retrieving Glue Triggers: %s", err) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Glue Triggers: %w", err)) } @@ -571,11 +573,11 @@ func sweepWorkflow(region string) error { ctx :=
sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GlueConn() + conn := client.GlueConn(ctx) listOutput, err := conn.ListWorkflowsWithContext(ctx, &glue.ListWorkflowsInput{}) if err != nil { diff --git a/internal/service/glue/tags_gen.go b/internal/service/glue/tags_gen.go index 2e115fb6e37..fb624159cf8 100644 --- a/internal/service/glue/tags_gen.go +++ b/internal/service/glue/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists glue service tags. +// listTags lists glue service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn glueiface.GlueAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn glueiface.GlueAPI, identifier string) (tftags.KeyValueTags, error) { input := &glue.GetTagsInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn glueiface.GlueAPI, identifier string) (t // ListTags lists glue service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).GlueConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).GlueConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from glue service tags. +// KeyValueTags creates tftags.KeyValueTags from glue service tags. 
func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns glue service tags from Context. +// getTagsIn returns glue service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets glue service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets glue service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates glue service tags. +// updateTags updates glue service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn glueiface.GlueAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn glueiface.GlueAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn glueiface.GlueAPI, identifier string, // UpdateTags updates glue service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).GlueConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).GlueConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/glue/trigger.go b/internal/service/glue/trigger.go index 34c0492a0ce..b7da9473eaa 100644 --- a/internal/service/glue/trigger.go +++ b/internal/service/glue/trigger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -206,14 +209,14 @@ func ResourceTrigger() *schema.Resource { func resourceTriggerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) triggerType := d.Get("type").(string) input := &glue.CreateTriggerInput{ Actions: expandActions(d.Get("actions").([]interface{})), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(triggerType), StartOnCreation: aws.Bool(d.Get("start_on_creation").(bool)), } @@ -301,7 +304,7 @@ func resourceTriggerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceTriggerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) output, err := FindTriggerByName(ctx, conn, d.Id()) if err != nil { @@ -364,7 +367,7 @@ func resourceTriggerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceTriggerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChanges("actions", 
"description", "predicate", "schedule", "event_batching_condition") { triggerUpdate := &glue.TriggerUpdate{ @@ -435,7 +438,7 @@ func resourceTriggerUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceTriggerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue Trigger: %s", d.Id()) err := deleteTrigger(ctx, conn, d.Id()) diff --git a/internal/service/glue/trigger_test.go b/internal/service/glue/trigger_test.go index 76f4c3d3e5c..3527f6777d6 100644 --- a/internal/service/glue/trigger_test.go +++ b/internal/service/glue/trigger_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -589,7 +592,7 @@ func testAccCheckTriggerExists(ctx context.Context, resourceName string, trigger return fmt.Errorf("No Glue Trigger ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindTriggerByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -616,7 +619,7 @@ func testAccCheckTriggerDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := tfglue.FindTriggerByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/glue/user_defined_function.go b/internal/service/glue/user_defined_function.go index 5147b11654c..08f63aa40b6 100644 --- a/internal/service/glue/user_defined_function.go +++ b/internal/service/glue/user_defined_function.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -94,7 +97,7 @@ func ResourceUserDefinedFunction() *schema.Resource { func resourceUserDefinedFunctionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID := createCatalogID(d, meta.(*conns.AWSClient).AccountID) dbName := d.Get("database_name").(string) funcName := d.Get("name").(string) @@ -117,7 +120,7 @@ func resourceUserDefinedFunctionCreate(ctx context.Context, d *schema.ResourceDa func resourceUserDefinedFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, funcName, err := ReadUDFID(d.Id()) if err != nil { @@ -140,7 +143,7 @@ func resourceUserDefinedFunctionUpdate(ctx context.Context, d *schema.ResourceDa func resourceUserDefinedFunctionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, funcName, err := ReadUDFID(d.Id()) if err != nil { @@ -193,7 +196,7 @@ func resourceUserDefinedFunctionRead(ctx context.Context, d *schema.ResourceData func resourceUserDefinedFunctionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) catalogID, dbName, funcName, err := ReadUDFID(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "deleting Glue User Defined Function (%s): %s", d.Id(), err) diff --git a/internal/service/glue/user_defined_function_test.go b/internal/service/glue/user_defined_function_test.go index d1397700862..82964f9bcf7 
100644 --- a/internal/service/glue/user_defined_function_test.go +++ b/internal/service/glue/user_defined_function_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -124,7 +127,7 @@ func TestAccGlueUserDefinedFunction_disappears(t *testing.T) { func testAccCheckUDFDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_glue_user_defined_function" { @@ -171,7 +174,7 @@ func testAccCheckUserDefinedFunctionExists(ctx context.Context, name string) res return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) out, err := conn.GetUserDefinedFunctionWithContext(ctx, &glue.GetUserDefinedFunctionInput{ CatalogId: aws.String(catalogId), DatabaseName: aws.String(dbName), diff --git a/internal/service/glue/validate.go b/internal/service/glue/validate.go index baf20f81978..764ede36143 100644 --- a/internal/service/glue/validate.go +++ b/internal/service/glue/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( diff --git a/internal/service/glue/wait.go b/internal/service/glue/wait.go index a12ac1604a1..82594efa2a5 100644 --- a/internal/service/glue/wait.go +++ b/internal/service/glue/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue import ( diff --git a/internal/service/glue/workflow.go b/internal/service/glue/workflow.go index 36a2b4d1054..92be05ec924 100644 --- a/internal/service/glue/workflow.go +++ b/internal/service/glue/workflow.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package glue import ( @@ -42,7 +45,7 @@ func ResourceWorkflow() *schema.Resource { "default_run_properties": { Type: schema.TypeMap, Optional: true, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, "description": { Type: schema.TypeString, @@ -66,12 +69,12 @@ func ResourceWorkflow() *schema.Resource { func resourceWorkflowCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) name := d.Get("name").(string) input := &glue.CreateWorkflowInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if kv, ok := d.GetOk("default_run_properties"); ok { @@ -98,7 +101,7 @@ func resourceWorkflowCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceWorkflowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) input := &glue.GetWorkflowInput{ Name: aws.String(d.Id()), @@ -143,7 +146,7 @@ func resourceWorkflowRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceWorkflowUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) if d.HasChanges("default_run_properties", "description", "max_concurrent_runs") { input := &glue.UpdateWorkflowInput{ @@ -174,7 +177,7 @@ func resourceWorkflowUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceWorkflowDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GlueConn() + conn := meta.(*conns.AWSClient).GlueConn(ctx) log.Printf("[DEBUG] Deleting Glue 
Workflow: %s", d.Id()) err := DeleteWorkflow(ctx, conn, d.Id()) diff --git a/internal/service/glue/workflow_test.go b/internal/service/glue/workflow_test.go index 78baaf15037..d10eea57fb2 100644 --- a/internal/service/glue/workflow_test.go +++ b/internal/service/glue/workflow_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package glue_test import ( @@ -229,7 +232,7 @@ func TestAccGlueWorkflow_disappears(t *testing.T) { } func testAccPreCheckWorkflow(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) _, err := conn.ListWorkflowsWithContext(ctx, &glue.ListWorkflowsInput{}) @@ -254,7 +257,7 @@ func testAccCheckWorkflowExists(ctx context.Context, resourceName string, workfl return fmt.Errorf("No Glue Workflow ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetWorkflowWithContext(ctx, &glue.GetWorkflowInput{ Name: aws.String(rs.Primary.ID), @@ -283,7 +286,7 @@ func testAccCheckWorkflowDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GlueConn(ctx) output, err := conn.GetWorkflowWithContext(ctx, &glue.GetWorkflowInput{ Name: aws.String(rs.Primary.ID), diff --git a/internal/service/grafana/find.go b/internal/service/grafana/find.go index 7e82f83c336..eb997aa21f0 100644 --- a/internal/service/grafana/find.go +++ b/internal/service/grafana/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana import ( diff --git a/internal/service/grafana/generate.go b/internal/service/grafana/generate.go index 66ef0fbbc89..488ce62ad19 100644 --- a/internal/service/grafana/generate.go +++ b/internal/service/grafana/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package grafana diff --git a/internal/service/grafana/license_association.go b/internal/service/grafana/license_association.go index 1f73af16d10..0d6bba4d681 100644 --- a/internal/service/grafana/license_association.go +++ b/internal/service/grafana/license_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana import ( @@ -58,7 +61,7 @@ func ResourceLicenseAssociation() *schema.Resource { func resourceLicenseAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) input := &managedgrafana.AssociateLicenseInput{ LicenseType: aws.String(d.Get("license_type").(string)), @@ -83,7 +86,7 @@ func resourceLicenseAssociationCreate(ctx context.Context, d *schema.ResourceDat func resourceLicenseAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) workspace, err := FindLicensedWorkspaceByID(ctx, conn, d.Id()) @@ -114,7 +117,7 @@ func resourceLicenseAssociationRead(ctx context.Context, d *schema.ResourceData, func resourceLicenseAssociationDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) log.Printf("[DEBUG] Deleting Grafana License Association: %s", d.Id()) _, err := conn.DisassociateLicenseWithContext(ctx, &managedgrafana.DisassociateLicenseInput{ diff --git a/internal/service/grafana/license_association_test.go b/internal/service/grafana/license_association_test.go index 94a809fea80..8e37b401097 100644 --- a/internal/service/grafana/license_association_test.go +++ b/internal/service/grafana/license_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana_test import ( @@ -60,7 +63,7 @@ func testAccCheckLicenseAssociationExists(ctx context.Context, name string) reso return fmt.Errorf("No Grafana Workspace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) _, err := tfgrafana.FindLicensedWorkspaceByID(ctx, conn, rs.Primary.ID) @@ -70,7 +73,7 @@ func testAccCheckLicenseAssociationExists(ctx context.Context, name string) reso func testAccCheckLicenseAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_grafana_license_association" { diff --git a/internal/service/grafana/role_association.go b/internal/service/grafana/role_association.go index d9614ef6eba..001b4d66280 100644 --- a/internal/service/grafana/role_association.go +++ b/internal/service/grafana/role_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana import ( @@ -55,7 +58,7 @@ func ResourceRoleAssociation() *schema.Resource { func resourceRoleAssociationUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) role := d.Get("role").(string) workspaceID := d.Get("workspace_id").(string) @@ -113,7 +116,7 @@ func populateUpdateInstructions(role string, list []*string, action string, type func resourceRoleAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) roleAssociations, err := FindRoleAssociationsByRoleAndWorkspaceID(ctx, conn, d.Get("role").(string), d.Get("workspace_id").(string)) @@ -135,7 +138,7 @@ func resourceRoleAssociationRead(ctx context.Context, d *schema.ResourceData, me func resourceRoleAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) updateInstructions := make([]*managedgrafana.UpdateInstruction, 0) if v, ok := d.GetOk("user_ids"); ok && v.(*schema.Set).Len() > 0 { diff --git a/internal/service/grafana/role_association_test.go b/internal/service/grafana/role_association_test.go index b882137b1c6..cda6c8b4b83 100644 --- a/internal/service/grafana/role_association_test.go +++ b/internal/service/grafana/role_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana_test import ( @@ -296,7 +299,7 @@ func testAccCheckRoleAssociationExists(ctx context.Context, n string) resource.T return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) _, err := tfgrafana.FindRoleAssociationsByRoleAndWorkspaceID(ctx, conn, rs.Primary.Attributes["role"], rs.Primary.Attributes["workspace_id"]) @@ -306,7 +309,7 @@ func testAccCheckRoleAssociationExists(ctx context.Context, n string) resource.T func testAccCheckRoleAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_grafana_role_association" { diff --git a/internal/service/grafana/service_package_gen.go b/internal/service/grafana/service_package_gen.go index 4493af7f50f..1eceaa2c05a 100644 --- a/internal/service/grafana/service_package_gen.go +++ b/internal/service/grafana/service_package_gen.go @@ -5,6 +5,10 @@ package grafana import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + managedgrafana_sdkv1 "github.com/aws/aws-sdk-go/service/managedgrafana" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -61,4 +65,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Grafana } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*managedgrafana_sdkv1.ManagedGrafana, error) { + sess := config["session"].(*session_sdkv1.Session) + + return managedgrafana_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/grafana/status.go b/internal/service/grafana/status.go index 8e21ce7b7eb..1d42b9e430d 100644 --- a/internal/service/grafana/status.go +++ b/internal/service/grafana/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana import ( diff --git a/internal/service/grafana/sweep.go b/internal/service/grafana/sweep.go index a05e88584d5..a2c682a6648 100644 --- a/internal/service/grafana/sweep.go +++ b/internal/service/grafana/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/managedgrafana" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,11 +26,11 @@ func init() { func sweepWorkSpaces(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).GrafanaConn() + conn := client.GrafanaConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -63,7 +65,7 @@ func sweepWorkSpaces(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing Grafana Workspace for %s: %w", region, err)) } - if err := 
sweep.SweepOrchestrator(sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Grafana Workspace for %s: %w", region, err)) } diff --git a/internal/service/grafana/tags_gen.go b/internal/service/grafana/tags_gen.go index 695266df375..beb01fc56df 100644 --- a/internal/service/grafana/tags_gen.go +++ b/internal/service/grafana/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists grafana service tags. +// listTags lists grafana service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn managedgrafanaiface.ManagedGrafanaAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn managedgrafanaiface.ManagedGrafanaAPI, identifier string) (tftags.KeyValueTags, error) { input := &managedgrafana.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn managedgrafanaiface.ManagedGrafanaAPI, i // ListTags lists grafana service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).GrafanaConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).GrafanaConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from grafana service tags. +// KeyValueTags creates tftags.KeyValueTags from grafana service tags. 
func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns grafana service tags from Context. +// getTagsIn returns grafana service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets grafana service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets grafana service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates grafana service tags. +// updateTags updates grafana service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn managedgrafanaiface.ManagedGrafanaAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn managedgrafanaiface.ManagedGrafanaAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn managedgrafanaiface.ManagedGrafanaAPI, // UpdateTags updates grafana service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).GrafanaConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).GrafanaConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/grafana/test-fixtures/idp_metadata.xml b/internal/service/grafana/test-fixtures/idp_metadata.xml index 00fd50a5c2d..0232bb547bf 100644 --- a/internal/service/grafana/test-fixtures/idp_metadata.xml +++ b/internal/service/grafana/test-fixtures/idp_metadata.xml @@ -1,4 +1,9 @@ + + diff --git a/internal/service/grafana/wait.go b/internal/service/grafana/wait.go index 2b5f59091ea..75986cd2671 100644 --- a/internal/service/grafana/wait.go +++ b/internal/service/grafana/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana import ( diff --git a/internal/service/grafana/workspace.go b/internal/service/grafana/workspace.go index f3aafccc2ff..31c6c352707 100644 --- a/internal/service/grafana/workspace.go +++ b/internal/service/grafana/workspace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana import ( @@ -38,8 +41,8 @@ func ResourceWorkspace() *schema.Resource { }, Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(10 * time.Minute), - Update: schema.DefaultTimeout(10 * time.Minute), + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), }, Schema: map[string]*schema.Schema{ @@ -187,14 +190,14 @@ func ResourceWorkspace() *schema.Resource { func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) input := &managedgrafana.CreateWorkspaceInput{ AccountAccessType: aws.String(d.Get("account_access_type").(string)), AuthenticationProviders: flex.ExpandStringList(d.Get("authentication_providers").([]interface{})), ClientToken: aws.String(id.UniqueId()), PermissionType: aws.String(d.Get("permission_type").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("configuration"); ok { @@ -263,7 +266,7 @@ func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) workspace, err := FindWorkspaceByID(ctx, conn, d.Id()) @@ -309,7 +312,7 @@ func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta int return sdkdiag.AppendErrorf(diags, "setting network_access_control: %s", err) } - SetTagsOut(ctx, workspace.Tags) + setTagsOut(ctx, workspace.Tags) input := &managedgrafana.DescribeWorkspaceConfigurationInput{ WorkspaceId: aws.String(d.Id()), @@ -328,7 +331,7 @@ func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta int func resourceWorkspaceUpdate(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) if d.HasChangesExcept("configuration", "tags", "tags_all") { input := &managedgrafana.UpdateWorkspaceInput{ @@ -428,7 +431,7 @@ func resourceWorkspaceUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkspaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) log.Printf("[DEBUG] Deleting Grafana Workspace: %s", d.Id()) _, err := conn.DeleteWorkspaceWithContext(ctx, &managedgrafana.DeleteWorkspaceInput{ diff --git a/internal/service/grafana/workspace_api_key.go b/internal/service/grafana/workspace_api_key.go index 8de323e03b5..14fc7fa53ec 100644 --- a/internal/service/grafana/workspace_api_key.go +++ b/internal/service/grafana/workspace_api_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana import ( @@ -57,7 +60,7 @@ func ResourceWorkspaceAPIKey() *schema.Resource { func resourceWorkspaceAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) keyName := d.Get("key_name").(string) workspaceID := d.Get("workspace_id").(string) @@ -84,7 +87,7 @@ func resourceWorkspaceAPIKeyCreate(ctx context.Context, d *schema.ResourceData, func resourceWorkspaceAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) workspaceID, keyName, err := WorkspaceAPIKeyParseResourceID(d.Id()) diff --git a/internal/service/grafana/workspace_api_key_test.go b/internal/service/grafana/workspace_api_key_test.go index a8eb76a4c50..baf2c137279 100644 --- a/internal/service/grafana/workspace_api_key_test.go +++ b/internal/service/grafana/workspace_api_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana_test import ( diff --git a/internal/service/grafana/workspace_data_source.go b/internal/service/grafana/workspace_data_source.go index 51edb22f8b2..637117400f6 100644 --- a/internal/service/grafana/workspace_data_source.go +++ b/internal/service/grafana/workspace_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana import ( @@ -107,7 +110,7 @@ func DataSourceWorkspace() *schema.Resource { func dataSourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig workspaceID := d.Get("workspace_id").(string) diff --git a/internal/service/grafana/workspace_data_source_test.go b/internal/service/grafana/workspace_data_source_test.go index 154778ee8fb..9443c5b9357 100644 --- a/internal/service/grafana/workspace_data_source_test.go +++ b/internal/service/grafana/workspace_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana_test import ( diff --git a/internal/service/grafana/workspace_saml_configuration.go b/internal/service/grafana/workspace_saml_configuration.go index f43325462c1..bd6d3e89f5d 100644 --- a/internal/service/grafana/workspace_saml_configuration.go +++ b/internal/service/grafana/workspace_saml_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana import ( @@ -103,7 +106,7 @@ func ResourceWorkspaceSAMLConfiguration() *schema.Resource { func resourceWorkspaceSAMLConfigurationUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) d.SetId(d.Get("workspace_id").(string)) workspace, err := FindWorkspaceByID(ctx, conn, d.Id()) @@ -222,7 +225,7 @@ func resourceWorkspaceSAMLConfigurationUpsert(ctx context.Context, d *schema.Res func resourceWorkspaceSAMLConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GrafanaConn() + conn := meta.(*conns.AWSClient).GrafanaConn(ctx) saml, err := FindSamlConfigurationByID(ctx, conn, d.Id()) diff --git a/internal/service/grafana/workspace_saml_configuration_test.go b/internal/service/grafana/workspace_saml_configuration_test.go index d2797c9351d..cde05adee6c 100644 --- a/internal/service/grafana/workspace_saml_configuration_test.go +++ b/internal/service/grafana/workspace_saml_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package grafana_test import ( @@ -160,7 +163,7 @@ func testAccCheckWorkspaceSAMLConfigurationExists(ctx context.Context, name stri return fmt.Errorf("No Grafana Workspace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) _, err := tfgrafana.FindSamlConfigurationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/grafana/workspace_test.go b/internal/service/grafana/workspace_test.go index e1da326dc92..b4aa1a60f62 100644 --- a/internal/service/grafana/workspace_test.go +++ b/internal/service/grafana/workspace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package grafana_test import ( @@ -540,7 +543,7 @@ func testAccCheckWorkspaceExists(ctx context.Context, name string) resource.Test return fmt.Errorf("No Grafana Workspace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) _, err := tfgrafana.FindWorkspaceByID(ctx, conn, rs.Primary.ID) @@ -550,7 +553,7 @@ func testAccCheckWorkspaceExists(ctx context.Context, name string) resource.Test func testAccCheckWorkspaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GrafanaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_grafana_workspace" { diff --git a/internal/service/greengrass/generate.go b/internal/service/greengrass/generate.go index 4f2bcd2b623..dc1f07e6d4e 100644 --- a/internal/service/greengrass/generate.go +++ b/internal/service/greengrass/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package greengrass diff --git a/internal/service/greengrass/service_package_gen.go b/internal/service/greengrass/service_package_gen.go index 6c156ba3c23..8ed2f760003 100644 --- a/internal/service/greengrass/service_package_gen.go +++ b/internal/service/greengrass/service_package_gen.go @@ -5,6 +5,10 @@ package greengrass import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + greengrass_sdkv1 "github.com/aws/aws-sdk-go/service/greengrass" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,4 +35,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Greengrass } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*greengrass_sdkv1.Greengrass, error) { + sess := config["session"].(*session_sdkv1.Session) + + return greengrass_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/greengrass/tags_gen.go b/internal/service/greengrass/tags_gen.go index 5ef341738ed..165a224787d 100644 --- a/internal/service/greengrass/tags_gen.go +++ b/internal/service/greengrass/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists greengrass service tags. +// listTags lists greengrass service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn greengrassiface.GreengrassAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn greengrassiface.GreengrassAPI, identifier string) (tftags.KeyValueTags, error) { input := &greengrass.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn greengrassiface.GreengrassAPI, identifie // ListTags lists greengrass service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).GreengrassConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).GreengrassConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from greengrass service tags. +// KeyValueTags creates tftags.KeyValueTags from greengrass service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns greengrass service tags from Context. +// getTagsIn returns greengrass service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets greengrass service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets greengrass service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates greengrass service tags. +// updateTags updates greengrass service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn greengrassiface.GreengrassAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn greengrassiface.GreengrassAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn greengrassiface.GreengrassAPI, identif // UpdateTags updates greengrass service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).GreengrassConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).GreengrassConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/guardduty/detector.go b/internal/service/guardduty/detector.go index af7ae1899be..77b2bc1d591 100644 --- a/internal/service/guardduty/detector.go +++ b/internal/service/guardduty/detector.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty import ( @@ -148,11 +151,11 @@ func ResourceDetector() *schema.Resource { func resourceDetectorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) input := guardduty.CreateDetectorInput{ Enable: aws.Bool(d.Get("enable").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("finding_publishing_frequency"); ok { @@ -175,7 +178,7 @@ func resourceDetectorCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceDetectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) input := guardduty.GetDetectorInput{ DetectorId: aws.String(d.Id()), @@ -214,14 +217,14 @@ func resourceDetectorRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("enable", aws.StringValue(gdo.Status) == guardduty.DetectorStatusEnabled) d.Set("finding_publishing_frequency", gdo.FindingPublishingFrequency) - SetTagsOut(ctx, gdo.Tags) + setTagsOut(ctx, gdo.Tags) return diags } func resourceDetectorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := guardduty.UpdateDetectorInput{ @@ -237,7 +240,7 @@ func resourceDetectorUpdate(ctx context.Context, d *schema.ResourceData, meta in log.Printf("[DEBUG] Update GuardDuty Detector: %s", input) _, err := conn.UpdateDetectorWithContext(ctx, &input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Updating GuardDuty Detector '%s' failed: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating 
GuardDuty Detector (%s): %s", d.Id(), err) } } @@ -246,7 +249,7 @@ func resourceDetectorUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceDetectorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) input := &guardduty.DeleteDetectorInput{ DetectorId: aws.String(d.Id()), diff --git a/internal/service/guardduty/detector_data_source.go b/internal/service/guardduty/detector_data_source.go index 7c2cfba8d07..ca6eb8b51cb 100644 --- a/internal/service/guardduty/detector_data_source.go +++ b/internal/service/guardduty/detector_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package guardduty import ( @@ -40,7 +43,7 @@ func DataSourceDetector() *schema.Resource { func dataSourceDetectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) detectorId := d.Get("id").(string) diff --git a/internal/service/guardduty/detector_data_source_test.go b/internal/service/guardduty/detector_data_source_test.go index 1d158ea2e54..5892b939b9d 100644 --- a/internal/service/guardduty/detector_data_source_test.go +++ b/internal/service/guardduty/detector_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( diff --git a/internal/service/guardduty/detector_test.go b/internal/service/guardduty/detector_test.go index a769a09d7b6..a3bbccacf29 100644 --- a/internal/service/guardduty/detector_test.go +++ b/internal/service/guardduty/detector_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( @@ -300,7 +303,7 @@ func testAccDetector_datasources_all(t *testing.T) { func testAccCheckDetectorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_guardduty_detector" { @@ -337,7 +340,7 @@ func testAccCheckDetectorExists(ctx context.Context, name string) resource.TestC return fmt.Errorf("Resource (%s) has empty ID", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) output, err := conn.GetDetectorWithContext(ctx, &guardduty.GetDetectorInput{ DetectorId: aws.String(rs.Primary.ID), diff --git a/internal/service/guardduty/filter.go b/internal/service/guardduty/filter.go index d76c1e67ab2..c443cf15c81 100644 --- a/internal/service/guardduty/filter.go +++ b/internal/service/guardduty/filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty import ( @@ -131,7 +134,7 @@ func ResourceFilter() *schema.Resource { func resourceFilterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) input := guardduty.CreateFilterInput{ Action: aws.String(d.Get("action").(string)), @@ -139,7 +142,7 @@ func resourceFilterCreate(ctx context.Context, d *schema.ResourceData, meta inte DetectorId: aws.String(d.Get("detector_id").(string)), Name: aws.String(d.Get("name").(string)), Rank: aws.Int64(int64(d.Get("rank").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } var err error @@ -161,7 +164,7 @@ func resourceFilterCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) var detectorID, name string var err error @@ -214,7 +217,7 @@ func resourceFilterRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("detector_id", detectorID) d.Set("rank", filter.Rank) - SetTagsOut(ctx, filter.Tags) + setTagsOut(ctx, filter.Tags) d.SetId(filterCreateID(detectorID, name)) @@ -223,7 +226,7 @@ func resourceFilterRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceFilterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) if d.HasChanges("action", "description", "finding_criteria", "rank") { input := guardduty.UpdateFilterInput{ @@ -251,7 +254,7 @@ func resourceFilterUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceFilterDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) detectorId := d.Get("detector_id").(string) name := d.Get("name").(string) diff --git a/internal/service/guardduty/filter_test.go b/internal/service/guardduty/filter_test.go index 35c2e77ffbf..0dfd1478b63 100644 --- a/internal/service/guardduty/filter_test.go +++ b/internal/service/guardduty/filter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( @@ -200,7 +203,7 @@ func testAccFilter_disappears(t *testing.T) { func testAccCheckFilterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_guardduty_filter" { @@ -248,7 +251,7 @@ func testAccCheckFilterExists(ctx context.Context, name string, filter *guarddut return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) input := guardduty.GetFilterInput{ DetectorId: aws.String(detectorID), FilterName: aws.String(name), @@ -418,7 +421,7 @@ resource "aws_guardduty_detector" "test" { func testAccCheckACMPCACertificateAuthorityDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ACMPCAConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_acmpca_certificate_authority" { diff --git a/internal/service/guardduty/finding_ids_data_source.go b/internal/service/guardduty/finding_ids_data_source.go new file mode 100644 index 00000000000..762d6e84602 --- 
/dev/null +++ b/internal/service/guardduty/finding_ids_data_source.go @@ -0,0 +1,112 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package guardduty + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/guardduty" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkDataSource(name="Finding Ids") +func newDataSourceFindingIds(context.Context) (datasource.DataSourceWithConfigure, error) { + return &dataSourceFindingIds{}, nil +} + +const ( + DSNameFindingIds = "Finding Ids Data Source" +) + +type dataSourceFindingIds struct { + framework.DataSourceWithConfigure +} + +func (d *dataSourceFindingIds) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { // nosemgrep:ci.meta-in-func-name + resp.TypeName = "aws_guardduty_finding_ids" +} + +func (d *dataSourceFindingIds) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "detector_id": schema.StringAttribute{ + Required: true, + }, + "has_findings": schema.BoolAttribute{ + Computed: true, + }, + "finding_ids": schema.ListAttribute{ + Computed: true, + ElementType: types.StringType, + }, + "id": framework.IDAttribute(), + }, + } +} + +func (d *dataSourceFindingIds) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + conn := 
d.Meta().GuardDutyConn(ctx) + + var data dataSourceFindingIdsData + resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findFindingIds(ctx, conn, data.DetectorID.ValueString()) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.GuardDuty, create.ErrActionReading, DSNameFindingIds, data.DetectorID.String(), err), + err.Error(), + ) + return + } + + data.ID = types.StringValue(data.DetectorID.ValueString()) + data.FindingIDs = flex.FlattenFrameworkStringList(ctx, out) + data.HasFindings = types.BoolValue((len(out) > 0)) + + resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) +} + +func findFindingIds(ctx context.Context, conn *guardduty.GuardDuty, id string) ([]*string, error) { + in := &guardduty.ListFindingsInput{ + DetectorId: aws.String(id), + } + + var findingIds []*string + err := conn.ListFindingsPagesWithContext(ctx, in, func(page *guardduty.ListFindingsOutput, lastPage bool) bool { + findingIds = append(findingIds, page.FindingIds...) + return !lastPage + }) + + if tfawserr.ErrMessageContains(err, guardduty.ErrCodeBadRequestException, "The request is rejected because the input detectorId is not owned by the current account.") { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + return findingIds, nil +} + +type dataSourceFindingIdsData struct { + DetectorID types.String `tfsdk:"detector_id"` + HasFindings types.Bool `tfsdk:"has_findings"` + FindingIDs types.List `tfsdk:"finding_ids"` + ID types.String `tfsdk:"id"` +} diff --git a/internal/service/guardduty/finding_ids_data_source_test.go b/internal/service/guardduty/finding_ids_data_source_test.go new file mode 100644 index 00000000000..546f6c86424 --- /dev/null +++ b/internal/service/guardduty/finding_ids_data_source_test.go @@ -0,0 +1,47 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package guardduty_test + +import ( + "testing" + + "github.com/aws/aws-sdk-go/service/guardduty" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccGuardDutyFindingIdsDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + dataSourceName := "data.aws_guardduty_finding_ids.test" + detectorDataSourceName := "data.aws_guardduty_detector.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + testAccPreCheckDetectorExists(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, guardduty.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccFindingIdsDataSourceConfig_basic(), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "detector_id", detectorDataSourceName, "id"), + resource.TestCheckResourceAttrSet(dataSourceName, "has_findings"), + resource.TestCheckResourceAttrSet(dataSourceName, "finding_ids.#"), + ), + }, + }, + }) +} + +func testAccFindingIdsDataSourceConfig_basic() string { + return ` +data "aws_guardduty_detector" "test" {} + +data "aws_guardduty_finding_ids" "test" { + detector_id = data.aws_guardduty_detector.test.id +} +` +} diff --git a/internal/service/guardduty/generate.go b/internal/service/guardduty/generate.go index b5eb23fc1c4..9ca05391d7b 100644 --- a/internal/service/guardduty/generate.go +++ b/internal/service/guardduty/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package guardduty diff --git a/internal/service/guardduty/guardduty_test.go b/internal/service/guardduty/guardduty_test.go index d9b8e2c9069..4e79096acbe 100644 --- a/internal/service/guardduty/guardduty_test.go +++ b/internal/service/guardduty/guardduty_test.go @@ -1,10 +1,16 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( + "context" "os" "testing" + "github.com/aws/aws-sdk-go/service/guardduty" "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" ) func TestAccGuardDuty_serial(t *testing.T) { @@ -80,3 +86,21 @@ func testAccMemberFromEnv(t *testing.T) (string, string) { } return accountID, email } + +// testAccPreCheckDetectorExists verifies the current account has a single active +// GuardDuty detector configured. +func testAccPreCheckDetectorExists(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) + + out, err := conn.ListDetectorsWithContext(ctx, &guardduty.ListDetectorsInput{}) + if err != nil { + t.Fatalf("listing GuardDuty Detectors: %s", err) + } + + if out == nil || len(out.DetectorIds) == 0 { + t.Skip("this AWS account must have an existing GuardDuty detector configured") + } + if len(out.DetectorIds) > 1 { + t.Skipf("this AWS account must have a single existing GuardDuty detector configured. Found %d.", len(out.DetectorIds)) + } +} diff --git a/internal/service/guardduty/invite_accepter.go b/internal/service/guardduty/invite_accepter.go index 28a3434a774..e34966ad1ac 100644 --- a/internal/service/guardduty/invite_accepter.go +++ b/internal/service/guardduty/invite_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty import ( @@ -50,7 +53,7 @@ func ResourceInviteAccepter() *schema.Resource { func resourceInviteAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) detectorID := d.Get("detector_id").(string) invitationID := "" @@ -117,7 +120,7 @@ func resourceInviteAccepterCreate(ctx context.Context, d *schema.ResourceData, m func resourceInviteAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) input := &guardduty.GetMasterAccountInput{ DetectorId: aws.String(d.Id()), @@ -148,7 +151,7 @@ func resourceInviteAccepterRead(ctx context.Context, d *schema.ResourceData, met func resourceInviteAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) input := &guardduty.DisassociateFromMasterAccountInput{ DetectorId: aws.String(d.Id()), diff --git a/internal/service/guardduty/invite_accepter_test.go b/internal/service/guardduty/invite_accepter_test.go index b76f07c569b..41b98378bb4 100644 --- a/internal/service/guardduty/invite_accepter_test.go +++ b/internal/service/guardduty/invite_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( @@ -50,7 +53,7 @@ func testAccInviteAccepter_basic(t *testing.T) { func testAccCheckInviteAccepterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_guardduty_invite_accepter" { @@ -93,7 +96,7 @@ func testAccCheckInviteAccepterExists(ctx context.Context, resourceName string) return fmt.Errorf("Resource (%s) has empty ID", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) input := &guardduty.GetMasterAccountInput{ DetectorId: aws.String(rs.Primary.ID), diff --git a/internal/service/guardduty/ipset.go b/internal/service/guardduty/ipset.go index 9a7cccf6f0f..d3763e6ec25 100644 --- a/internal/service/guardduty/ipset.go +++ b/internal/service/guardduty/ipset.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty import ( @@ -50,17 +53,10 @@ func ResourceIPSet() *schema.Resource { Required: true, }, "format": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - guardduty.IpSetFormatTxt, - guardduty.IpSetFormatStix, - guardduty.IpSetFormatOtxCsv, - guardduty.IpSetFormatAlienVault, - guardduty.IpSetFormatProofPoint, - guardduty.IpSetFormatFireEye, - }, false), + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(guardduty.IpSetFormat_Values(), false), }, "location": { Type: schema.TypeString, @@ -80,7 +76,7 @@ func ResourceIPSet() *schema.Resource { func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) detectorID := d.Get("detector_id").(string) input := &guardduty.CreateIPSetInput{ @@ -89,7 +85,7 @@ func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta inter Format: aws.String(d.Get("format").(string)), Location: aws.String(d.Get("location").(string)), Activate: aws.Bool(d.Get("activate").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } resp, err := conn.CreateIPSetWithContext(ctx, input) @@ -117,7 +113,7 @@ func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) ipSetId, detectorId, err := DecodeIPSetID(d.Id()) if err != nil { @@ -153,14 +149,14 @@ func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("name", resp.Name) d.Set("activate", aws.StringValue(resp.Status) == guardduty.IpSetStatusActive) - 
SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) return diags } func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) ipSetId, detectorId, err := DecodeIPSetID(d.Id()) if err != nil { @@ -194,7 +190,7 @@ func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) ipSetId, detectorId, err := DecodeIPSetID(d.Id()) if err != nil { diff --git a/internal/service/guardduty/ipset_test.go b/internal/service/guardduty/ipset_test.go index cbceb0e4517..22cb7b37eb7 100644 --- a/internal/service/guardduty/ipset_test.go +++ b/internal/service/guardduty/ipset_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( @@ -108,7 +111,7 @@ func testAccIPSet_tags(t *testing.T) { func testAccCheckIPSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_guardduty_ipset" { @@ -160,7 +163,7 @@ func testAccCheckIPSetExists(ctx context.Context, name string) resource.TestChec IpSetId: aws.String(ipSetId), } - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) _, err = conn.GetIPSetWithContext(ctx, input) return err } diff --git a/internal/service/guardduty/member.go b/internal/service/guardduty/member.go index c83034a34f0..0750497d46d 100644 --- a/internal/service/guardduty/member.go +++ b/internal/service/guardduty/member.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty import ( @@ -76,7 +79,7 @@ func ResourceMember() *schema.Resource { func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) accountID := d.Get("account_id").(string) detectorID := d.Get("detector_id").(string) @@ -123,7 +126,7 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) accountID, detectorID, err := DecodeMemberID(d.Id()) if err != nil { @@ -171,7 +174,7 @@ func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) accountID, detectorID, err := DecodeMemberID(d.Id()) if err != nil { @@ -220,7 +223,7 @@ func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceMemberDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).GuardDutyConn() + conn := meta.(*conns.AWSClient).GuardDutyConn(ctx) accountID, detectorID, err := DecodeMemberID(d.Id()) if err != nil { @@ -254,7 +257,7 @@ func inviteMemberWaiter(ctx context.Context, accountID, detectorID string, timeo out, err = conn.GetMembersWithContext(ctx, &input) if err != nil { - return retry.NonRetryableError(fmt.Errorf("error reading GuardDuty Member %q: %s", accountID, err)) + return retry.NonRetryableError(fmt.Errorf("reading 
GuardDuty Member %q: %s", accountID, err)) } retryable, err := memberInvited(out, accountID) @@ -271,20 +274,20 @@ func inviteMemberWaiter(ctx context.Context, accountID, detectorID string, timeo out, err = conn.GetMembersWithContext(ctx, &input) if err != nil { - return fmt.Errorf("Error reading GuardDuty member: %w", err) + return fmt.Errorf("reading GuardDuty member: %w", err) } _, err = memberInvited(out, accountID) return err } if err != nil { - return fmt.Errorf("Error waiting for GuardDuty email verification: %w", err) + return fmt.Errorf("waiting for GuardDuty email verification: %w", err) } return nil } func memberInvited(out *guardduty.GetMembersOutput, accountID string) (bool, error) { if out == nil || len(out.Members) == 0 { - return true, fmt.Errorf("error reading GuardDuty Member %q: member missing from response", accountID) + return true, fmt.Errorf("reading GuardDuty Member %q: member missing from response", accountID) } member := out.Members[0] @@ -298,7 +301,7 @@ func memberInvited(out *guardduty.GetMembersOutput, accountID string) (bool, err return true, fmt.Errorf("Expected member to be invited but was in state: %s", status) } - return false, fmt.Errorf("error inviting GuardDuty Member %q: invalid status: %s", accountID, status) + return false, fmt.Errorf("inviting GuardDuty Member %q: invalid status: %s", accountID, status) } func DecodeMemberID(id string) (accountID, detectorID string, err error) { diff --git a/internal/service/guardduty/member_test.go b/internal/service/guardduty/member_test.go index 8ceacee65c1..00c59fcf9dd 100644 --- a/internal/service/guardduty/member_test.go +++ b/internal/service/guardduty/member_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package guardduty_test import ( @@ -165,7 +168,7 @@ func testAccMember_invitationMessage(t *testing.T) { func testAccCheckMemberDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_guardduty_member" { @@ -218,7 +221,7 @@ func testAccCheckMemberExists(ctx context.Context, name string) resource.TestChe DetectorId: aws.String(detectorID), } - conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx) gmo, err := conn.GetMembersWithContext(ctx, input) if err != nil { return err diff --git a/internal/service/guardduty/organization_admin_account.go b/internal/service/guardduty/organization_admin_account.go index 507cb21d887..c24899ec64f 100644 --- a/internal/service/guardduty/organization_admin_account.go +++ b/internal/service/guardduty/organization_admin_account.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty
 
 import (
@@ -37,7 +40,7 @@ func ResourceOrganizationAdminAccount() *schema.Resource {
 func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	adminAccountID := d.Get("admin_account_id").(string)
@@ -62,7 +65,7 @@ func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.Resou
 func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	adminAccount, err := GetOrganizationAdminAccount(ctx, conn, d.Id())
@@ -83,7 +86,7 @@ func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.Resourc
 func resourceOrganizationAdminAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	input := &guardduty.DisableOrganizationAdminAccountInput{
 		AdminAccountId: aws.String(d.Id()),
diff --git a/internal/service/guardduty/organization_admin_account_test.go b/internal/service/guardduty/organization_admin_account_test.go
index 34ca975120a..03c50016612 100644
--- a/internal/service/guardduty/organization_admin_account_test.go
+++ b/internal/service/guardduty/organization_admin_account_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty_test
 
 import (
@@ -45,7 +48,7 @@ func testAccOrganizationAdminAccount_basic(t *testing.T) {
 func testAccCheckOrganizationAdminAccountDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_guardduty_organization_admin_account" {
@@ -80,7 +83,7 @@ func testAccCheckOrganizationAdminAccountExists(ctx context.Context, resourceNam
 			return fmt.Errorf("Not found: %s", resourceName)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx)
 
 		adminAccount, err := tfguardduty.GetOrganizationAdminAccount(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/guardduty/organization_configuration.go b/internal/service/guardduty/organization_configuration.go
index a42284093b3..ae09eea6d54 100644
--- a/internal/service/guardduty/organization_configuration.go
+++ b/internal/service/guardduty/organization_configuration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty
 
 import (
@@ -165,7 +168,7 @@ func ResourceOrganizationConfiguration() *schema.Resource {
 func resourceOrganizationConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	detectorID := d.Get("detector_id").(string)
@@ -191,7 +194,7 @@ func resourceOrganizationConfigurationUpdate(ctx context.Context, d *schema.Reso
 func resourceOrganizationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	input := &guardduty.DescribeOrganizationConfigurationInput{
 		DetectorId: aws.String(d.Id()),
diff --git a/internal/service/guardduty/organization_configuration_test.go b/internal/service/guardduty/organization_configuration_test.go
index ad150269b14..3d600f553cf 100644
--- a/internal/service/guardduty/organization_configuration_test.go
+++ b/internal/service/guardduty/organization_configuration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty_test
 
 import (
diff --git a/internal/service/guardduty/publishing_destination.go b/internal/service/guardduty/publishing_destination.go
index 7b8b33c517b..8d8a8a31a49 100644
--- a/internal/service/guardduty/publishing_destination.go
+++ b/internal/service/guardduty/publishing_destination.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty
 
 import (
@@ -57,7 +60,7 @@ func ResourcePublishingDestination() *schema.Resource {
 func resourcePublishingDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 	detectorID := d.Get("detector_id").(string)
 	input := guardduty.CreatePublishingDestinationInput{
@@ -88,7 +91,7 @@ func resourcePublishingDestinationCreate(ctx context.Context, d *schema.Resource
 func resourcePublishingDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	destinationId, detectorId, err := DecodePublishDestinationID(d.Id())
@@ -120,7 +123,7 @@ func resourcePublishingDestinationRead(ctx context.Context, d *schema.ResourceDa
 func resourcePublishingDestinationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	destinationId, detectorId, err := DecodePublishDestinationID(d.Id())
@@ -146,7 +149,7 @@ func resourcePublishingDestinationUpdate(ctx context.Context, d *schema.Resource
 func resourcePublishingDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	destinationId, detectorId, err := DecodePublishDestinationID(d.Id())
diff --git a/internal/service/guardduty/publishing_destination_test.go b/internal/service/guardduty/publishing_destination_test.go
index ad1b506652a..43be2642545 100644
--- a/internal/service/guardduty/publishing_destination_test.go
+++ b/internal/service/guardduty/publishing_destination_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty_test
 
 import (
@@ -203,7 +206,7 @@ func testAccCheckPublishingDestinationExists(ctx context.Context, name string) r
 			DestinationId: aws.String(destination_id),
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx)
 		_, err := conn.DescribePublishingDestinationWithContext(ctx, input)
 		return err
 	}
@@ -211,7 +214,7 @@ func testAccCheckPublishingDestinationExists(ctx context.Context, name string) r
 func testAccCheckPublishingDestinationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_guardduty_publishing_destination" {
diff --git a/internal/service/guardduty/service_package_gen.go b/internal/service/guardduty/service_package_gen.go
index 9fb1238c9ef..bcd3713e47f 100644
--- a/internal/service/guardduty/service_package_gen.go
+++ b/internal/service/guardduty/service_package_gen.go
@@ -5,6 +5,10 @@ package guardduty
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	guardduty_sdkv1 "github.com/aws/aws-sdk-go/service/guardduty"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -12,7 +16,12 @@ import (
 type servicePackage struct{}
 
 func (p *servicePackage) FrameworkDataSources(ctx context.Context) []*types.ServicePackageFrameworkDataSource {
-	return []*types.ServicePackageFrameworkDataSource{}
+	return []*types.ServicePackageFrameworkDataSource{
+		{
+			Factory: newDataSourceFindingIds,
+			Name:    "Finding Ids",
+		},
+	}
 }
 
 func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.ServicePackageFrameworkResource {
@@ -89,4 +98,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.GuardDuty
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*guardduty_sdkv1.GuardDuty, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return guardduty_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/guardduty/status.go b/internal/service/guardduty/status.go
index 06a3e9c2eab..8f2954d23db 100644
--- a/internal/service/guardduty/status.go
+++ b/internal/service/guardduty/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty
 
 import (
diff --git a/internal/service/guardduty/sweep.go b/internal/service/guardduty/sweep.go
index 778077bc1bf..17d9f9eb2a4 100644
--- a/internal/service/guardduty/sweep.go
+++ b/internal/service/guardduty/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -12,7 +15,6 @@ import (
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
@@ -31,13 +33,13 @@ func init() {
 func sweepDetectors(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 
-	conn := client.(*conns.AWSClient).GuardDutyConn()
+	conn := client.GuardDutyConn(ctx)
 	input := &guardduty.ListDetectorsInput{}
 	var sweeperErrs *multierror.Error
@@ -78,13 +80,13 @@ func sweepDetectors(region string) error {
 func sweepPublishingDestinations(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
 
-	conn := client.(*conns.AWSClient).GuardDutyConn()
+	conn := client.GuardDutyConn(ctx)
 	var sweeperErrs *multierror.Error
 	detect_input := &guardduty.ListDetectorsInput{}
diff --git a/internal/service/guardduty/tags_gen.go b/internal/service/guardduty/tags_gen.go
index 8da3804aa60..2335dadc5c9 100644
--- a/internal/service/guardduty/tags_gen.go
+++ b/internal/service/guardduty/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists guardduty service tags.
+// listTags lists guardduty service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn guarddutyiface.GuardDutyAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn guarddutyiface.GuardDutyAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &guardduty.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn guarddutyiface.GuardDutyAPI, identifier
 // ListTags lists guardduty service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).GuardDutyConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).GuardDutyConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from guardduty service tags.
+// KeyValueTags creates tftags.KeyValueTags from guardduty service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns guardduty service tags from Context.
+// getTagsIn returns guardduty service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets guardduty service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets guardduty service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates guardduty service tags.
+// updateTags updates guardduty service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn guarddutyiface.GuardDutyAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn guarddutyiface.GuardDutyAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn guarddutyiface.GuardDutyAPI, identifie
 // UpdateTags updates guardduty service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).GuardDutyConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).GuardDutyConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/guardduty/threatintelset.go b/internal/service/guardduty/threatintelset.go
index 352238d2cb8..85397c05a9e 100644
--- a/internal/service/guardduty/threatintelset.go
+++ b/internal/service/guardduty/threatintelset.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty
 
 import (
@@ -50,17 +53,10 @@ func ResourceThreatIntelSet() *schema.Resource {
 				Required: true,
 			},
 			"format": {
-				Type:     schema.TypeString,
-				Required: true,
-				ForceNew: true,
-				ValidateFunc: validation.StringInSlice([]string{
-					guardduty.ThreatIntelSetFormatTxt,
-					guardduty.ThreatIntelSetFormatStix,
-					guardduty.ThreatIntelSetFormatOtxCsv,
-					guardduty.ThreatIntelSetFormatAlienVault,
-					guardduty.ThreatIntelSetFormatProofPoint,
-					guardduty.ThreatIntelSetFormatFireEye,
-				}, false),
+				Type:         schema.TypeString,
+				Required:     true,
+				ForceNew:     true,
+				ValidateFunc: validation.StringInSlice(guardduty.ThreatIntelSetFormat_Values(), false),
 			},
 			"location": {
 				Type: schema.TypeString,
@@ -80,7 +76,7 @@ func ResourceThreatIntelSet() *schema.Resource {
 func resourceThreatIntelSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	detectorID := d.Get("detector_id").(string)
 	name := d.Get("name").(string)
@@ -90,7 +86,7 @@ func resourceThreatIntelSetCreate(ctx context.Context, d *schema.ResourceData, m
 		Format:   aws.String(d.Get("format").(string)),
 		Location: aws.String(d.Get("location").(string)),
 		Activate: aws.Bool(d.Get("activate").(bool)),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}
 
 	resp, err := conn.CreateThreatIntelSetWithContext(ctx, input)
@@ -117,7 +113,7 @@ func resourceThreatIntelSetCreate(ctx context.Context, d *schema.ResourceData, m
 func resourceThreatIntelSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	threatIntelSetId, detectorId, err := DecodeThreatIntelSetID(d.Id())
 	if err != nil {
@@ -153,14 +149,14 @@ func resourceThreatIntelSetRead(ctx context.Context, d *schema.ResourceData, met
 	d.Set("name", resp.Name)
 	d.Set("activate", aws.StringValue(resp.Status) == guardduty.ThreatIntelSetStatusActive)
 
-	SetTagsOut(ctx, resp.Tags)
+	setTagsOut(ctx, resp.Tags)
 
 	return diags
 }
 
 func resourceThreatIntelSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	threatIntelSetID, detectorId, err := DecodeThreatIntelSetID(d.Id())
 	if err != nil {
@@ -193,7 +189,7 @@ func resourceThreatIntelSetUpdate(ctx context.Context, d *schema.ResourceData, m
 func resourceThreatIntelSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).GuardDutyConn()
+	conn := meta.(*conns.AWSClient).GuardDutyConn(ctx)
 
 	threatIntelSetID, detectorId, err := DecodeThreatIntelSetID(d.Id())
 	if err != nil {
diff --git a/internal/service/guardduty/threatintelset_test.go b/internal/service/guardduty/threatintelset_test.go
index a89efcef68e..db0e5845659 100644
--- a/internal/service/guardduty/threatintelset_test.go
+++ b/internal/service/guardduty/threatintelset_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty_test
 
 import (
@@ -108,7 +111,7 @@ func testAccThreatIntelSet_tags(t *testing.T) {
 func testAccCheckThreatIntelSetDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_guardduty_threatintelset" {
@@ -160,7 +163,7 @@ func testAccCheckThreatIntelSetExists(ctx context.Context, name string) resource
 			ThreatIntelSetId: aws.String(threatIntelSetId),
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).GuardDutyConn(ctx)
 		_, err = conn.GetThreatIntelSetWithContext(ctx, input)
 		return err
 	}
diff --git a/internal/service/guardduty/wait.go b/internal/service/guardduty/wait.go
index 173104cb94f..03d546e35d8 100644
--- a/internal/service/guardduty/wait.go
+++ b/internal/service/guardduty/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package guardduty
 
 import (
diff --git a/internal/service/healthlake/generate.go b/internal/service/healthlake/generate.go
index b76e835f09c..07ce5ab38f3 100644
--- a/internal/service/healthlake/generate.go
+++ b/internal/service/healthlake/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -KVTValues=true -SkipTypesImp=false -TagInIDElem=ResourceARN -ListTagsInIDElem=ResourceARN -ListTags -ServiceTagsSlice -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package healthlake
diff --git a/internal/service/healthlake/service_package_gen.go b/internal/service/healthlake/service_package_gen.go
index 84b2d685fb4..ef5392e2045 100644
--- a/internal/service/healthlake/service_package_gen.go
+++ b/internal/service/healthlake/service_package_gen.go
@@ -5,6 +5,9 @@ package healthlake
 import (
 	"context"
 
+	aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws"
+	healthlake_sdkv2 "github.com/aws/aws-sdk-go-v2/service/healthlake"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -31,4 +34,17 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.HealthLake
 }
 
-var ServicePackage = &servicePackage{}
+// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API.
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*healthlake_sdkv2.Client, error) {
+	cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config))
+
+	return healthlake_sdkv2.NewFromConfig(cfg, func(o *healthlake_sdkv2.Options) {
+		if endpoint := config["endpoint"].(string); endpoint != "" {
+			o.EndpointResolver = healthlake_sdkv2.EndpointResolverFromURL(endpoint)
+		}
+	}), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/healthlake/sweep.go b/internal/service/healthlake/sweep.go
index 710a4fd201c..eaef44ee5aa 100644
--- a/internal/service/healthlake/sweep.go
+++ b/internal/service/healthlake/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
diff --git a/internal/service/healthlake/tags_gen.go b/internal/service/healthlake/tags_gen.go
index bce5c077192..4a392fa7b05 100644
--- a/internal/service/healthlake/tags_gen.go
+++ b/internal/service/healthlake/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists healthlake service tags.
+// listTags lists healthlake service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn *healthlake.Client, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn *healthlake.Client, identifier string) (tftags.KeyValueTags, error) {
 	input := &healthlake.ListTagsForResourceInput{
 		ResourceARN: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *healthlake.Client, identifier string) (
 // ListTags lists healthlake service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).HealthLakeClient(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).HealthLakeClient(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns healthlake service tags from Context.
+// getTagsIn returns healthlake service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []awstypes.Tag {
+func getTagsIn(ctx context.Context) []awstypes.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag {
 	return nil
 }
 
-// SetTagsOut sets healthlake service tags in Context.
-func SetTagsOut(ctx context.Context, tags []awstypes.Tag) {
+// setTagsOut sets healthlake service tags in Context.
+func setTagsOut(ctx context.Context, tags []awstypes.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates healthlake service tags.
+// updateTags updates healthlake service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn *healthlake.Client, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn *healthlake.Client, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
 
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *healthlake.Client, identifier string,
 // UpdateTags updates healthlake service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).HealthLakeClient(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).HealthLakeClient(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/iam/acc_test.go b/internal/service/iam/acc_test.go
index 231a75f6795..7f0c0f1f8fc 100644
--- a/internal/service/iam/acc_test.go
+++ b/internal/service/iam/acc_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
diff --git a/internal/service/iam/access_key.go b/internal/service/iam/access_key.go
index 12bb95f7548..d79ef18dac6 100644
--- a/internal/service/iam/access_key.go
+++ b/internal/service/iam/access_key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
@@ -32,7 +35,7 @@ func ResourceAccessKey() *schema.Resource {
 			// ValidationError: Must specify userName when calling with non-User credentials
 			// To prevent import from requiring this extra information, use GetAccessKeyLastUsed.
 			StateContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-				conn := meta.(*conns.AWSClient).IAMConn()
+				conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 				input := &iam.GetAccessKeyLastUsedInput{
 					AccessKeyId: aws.String(d.Id()),
@@ -41,11 +44,11 @@
 				output, err := conn.GetAccessKeyLastUsedWithContext(ctx, input)
 
 				if err != nil {
-					return nil, fmt.Errorf("error fetching IAM Access Key (%s) username via GetAccessKeyLastUsed: %w", d.Id(), err)
+					return nil, fmt.Errorf("fetching IAM Access Key (%s) username via GetAccessKeyLastUsed: %w", d.Id(), err)
 				}
 
 				if output == nil || output.UserName == nil {
-					return nil, fmt.Errorf("error fetching IAM Access Key (%s) username via GetAccessKeyLastUsed: empty response", d.Id())
+					return nil, fmt.Errorf("fetching IAM Access Key (%s) username via GetAccessKeyLastUsed: empty response", d.Id())
 				}
 
 				d.Set("user", output.UserName)
@@ -103,7 +106,7 @@ func ResourceAccessKey() *schema.Resource {
 func resourceAccessKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	username := d.Get("user").(string)
@@ -181,7 +184,7 @@ func resourceAccessKeyCreate(ctx context.Context, d *schema.ResourceData, meta i
 func resourceAccessKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	username := d.Get("user").(string)
@@ -224,7 +227,7 @@ func resourceAccessKeyReadResult(d *schema.ResourceData, key *iam.AccessKeyMetad
 func resourceAccessKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	if d.HasChange("status") {
 		if err := resourceAccessKeyStatusUpdate(ctx, conn, d); err != nil {
@@ -237,7 +240,7 @@ func resourceAccessKeyUpdate(ctx context.Context, d *schema.ResourceData, meta i
 func resourceAccessKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	request := &iam.DeleteAccessKeyInput{
 		AccessKeyId: aws.String(d.Id()),
diff --git a/internal/service/iam/access_key_test.go b/internal/service/iam/access_key_test.go
index 214e405e478..e9ee83aaf8f 100644
--- a/internal/service/iam/access_key_test.go
+++ b/internal/service/iam/access_key_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
@@ -134,7 +137,7 @@ func TestAccIAMAccessKey_status(t *testing.T) {
 func testAccCheckAccessKeyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_access_key" {
@@ -166,7 +169,7 @@ func testAccCheckAccessKeyExists(ctx context.Context, n string, res *iam.AccessK
 			return fmt.Errorf("No Access Key ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx)
 		accessKey, err := tfiam.FindAccessKey(ctx, conn, rs.Primary.Attributes["user"], rs.Primary.ID)
 
 		if err != nil {
diff --git a/internal/service/iam/access_keys_data_source.go b/internal/service/iam/access_keys_data_source.go
index 7c193fa8dea..3b7fc8d4868 100644
--- a/internal/service/iam/access_keys_data_source.go
+++ b/internal/service/iam/access_keys_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
@@ -51,7 +54,7 @@ const (
 )
 
 func dataSourceAccessKeysRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 	username := d.Get("user").(string)
 	out, err := FindAccessKeys(ctx, conn, username)
diff --git a/internal/service/iam/access_keys_data_source_test.go b/internal/service/iam/access_keys_data_source_test.go
index e3f4a5b1d46..41e731ef22c 100644
--- a/internal/service/iam/access_keys_data_source_test.go
+++ b/internal/service/iam/access_keys_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
diff --git a/internal/service/iam/account_alias.go b/internal/service/iam/account_alias.go
index 1486e9d4bcd..de591236fa6 100644
--- a/internal/service/iam/account_alias.go
+++ b/internal/service/iam/account_alias.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
@@ -35,7 +38,7 @@ func ResourceAccountAlias() *schema.Resource {
 func resourceAccountAliasCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	account_alias := d.Get("account_alias").(string)
@@ -56,7 +59,7 @@ func resourceAccountAliasCreate(ctx context.Context, d *schema.ResourceData, met
 func resourceAccountAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	params := &iam.ListAccountAliasesInput{}
@@ -81,7 +84,7 @@ func resourceAccountAliasRead(ctx context.Context, d *schema.ResourceData, meta
 func resourceAccountAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	account_alias := d.Get("account_alias").(string)
diff --git a/internal/service/iam/account_alias_data_source.go b/internal/service/iam/account_alias_data_source.go
index 7ecfef39e37..413079f0491 100644
--- a/internal/service/iam/account_alias_data_source.go
+++ b/internal/service/iam/account_alias_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
@@ -28,7 +31,7 @@ func DataSourceAccountAlias() *schema.Resource {
 func dataSourceAccountAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	log.Printf("[DEBUG] Reading IAM Account Aliases.")
diff --git a/internal/service/iam/account_alias_data_source_test.go b/internal/service/iam/account_alias_data_source_test.go
index cb9ef64ee6d..c1d2ef921a2 100644
--- a/internal/service/iam/account_alias_data_source_test.go
+++ b/internal/service/iam/account_alias_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
diff --git a/internal/service/iam/account_alias_test.go b/internal/service/iam/account_alias_test.go
index d4efcea5a8f..b5d85841b48 100644
--- a/internal/service/iam/account_alias_test.go
+++ b/internal/service/iam/account_alias_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
@@ -57,7 +60,7 @@ func testAccAccountAlias_basic(t *testing.T) {
 func testAccCheckAccountAliasDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_iam_account_alias" {
@@ -92,7 +95,7 @@ func testAccCheckAccountAliasExists(ctx context.Context, n string) resource.Test
 			return fmt.Errorf("Not found: %s", n)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx)
 		params := &iam.ListAccountAliasesInput{}
 
 		resp, err := conn.ListAccountAliasesWithContext(ctx, params)
diff --git a/internal/service/iam/account_password_policy.go b/internal/service/iam/account_password_policy.go
index ddfb179d57b..2bce87935d5 100644
--- a/internal/service/iam/account_password_policy.go
+++ b/internal/service/iam/account_password_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
@@ -83,7 +86,7 @@ func ResourceAccountPasswordPolicy() *schema.Resource {
 func resourceAccountPasswordPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	input := &iam.UpdateAccountPasswordPolicyInput{}
@@ -130,7 +133,7 @@ func resourceAccountPasswordPolicyUpdate(ctx context.Context, d *schema.Resource
 func resourceAccountPasswordPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	policy, err := FindAccountPasswordPolicy(ctx, conn)
@@ -160,7 +163,7 @@ func resourceAccountPasswordPolicyRead(ctx context.Context, d *schema.ResourceDa
 func resourceAccountPasswordPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).IAMConn()
+	conn := meta.(*conns.AWSClient).IAMConn(ctx)
 
 	log.Printf("[DEBUG] Deleting IAM Account Password Policy: %s", d.Id())
 	_, err := conn.DeleteAccountPasswordPolicyWithContext(ctx, &iam.DeleteAccountPasswordPolicyInput{})
diff --git a/internal/service/iam/account_password_policy_test.go b/internal/service/iam/account_password_policy_test.go
index 00f5788dab2..0ad3b7ce721 100644
--- a/internal/service/iam/account_password_policy_test.go
+++ b/internal/service/iam/account_password_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
@@ -84,7 +87,7 @@ func testAccAccountPasswordPolicy_disappears(t *testing.T) {
 func testAccCheckAccountPasswordPolicyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_iam_account_password_policy" {
@@ -119,7 +122,7 @@ func testAccCheckAccountPasswordPolicyExists(ctx context.Context, n string, v *i
 			return fmt.Errorf("No IAM Account Password Policy ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx)
 
 		output, err := tfiam.FindAccountPasswordPolicy(ctx, conn)
diff --git a/internal/service/iam/arn.go b/internal/service/iam/arn.go
index a42bf6e7db0..1abf14baf02 100644
--- a/internal/service/iam/arn.go
+++ b/internal/service/iam/arn.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
@@ -19,7 +22,7 @@ func InstanceProfileARNToName(inputARN string) (string, error) {
 	parsedARN, err := arn.Parse(inputARN)
 
 	if err != nil {
-		return "", fmt.Errorf("error parsing ARN (%s): %w", inputARN, err)
+		return "", fmt.Errorf("parsing ARN (%s): %w", inputARN, err)
 	}
 
 	if actual, expected := parsedARN.Service, ARNService; actual != expected {
diff --git a/internal/service/iam/arn_test.go b/internal/service/iam/arn_test.go
index ed41b323729..6a5407ba4ee 100644
--- a/internal/service/iam/arn_test.go
+++ b/internal/service/iam/arn_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam_test
 
 import (
@@ -19,12 +22,12 @@ func TestInstanceProfileARNToName(t *testing.T) {
 		{
 			TestName:      "empty ARN",
 			InputARN:      "",
-			ExpectedError: regexp.MustCompile(`error parsing ARN`),
+			ExpectedError: regexp.MustCompile(`parsing ARN`),
 		},
 		{
 			TestName:      "unparsable ARN",
 			InputARN:      "test",
-			ExpectedError: regexp.MustCompile(`error parsing ARN`),
+			ExpectedError: regexp.MustCompile(`parsing ARN`),
 		},
 		{
 			TestName: "invalid ARN service",
diff --git a/internal/service/iam/diff.go b/internal/service/iam/diff.go
index 2b305a1bd71..91348b726f4 100644
--- a/internal/service/iam/diff.go
+++ b/internal/service/iam/diff.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
diff --git a/internal/service/iam/encryption.go b/internal/service/iam/encryption.go
index 0dc89cd2194..eb3cca3b725 100644
--- a/internal/service/iam/encryption.go
+++ b/internal/service/iam/encryption.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
diff --git a/internal/service/iam/find.go b/internal/service/iam/find.go
index 43f1258a770..569601bc938 100644
--- a/internal/service/iam/find.go
+++ b/internal/service/iam/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
diff --git a/internal/service/iam/flex.go b/internal/service/iam/flex.go
index 86ee62993e4..ecb766fed2e 100644
--- a/internal/service/iam/flex.go
+++ b/internal/service/iam/flex.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package iam
 
 import (
diff --git a/internal/service/iam/generate.go b/internal/service/iam/generate.go
index 172b6b1f160..81f5563089a 100644
--- a/internal/service/iam/generate.go
+++ b/internal/service/iam/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package iam diff --git a/internal/service/iam/group.go b/internal/service/iam/group.go index 623526278c8..5ae378d47c1 100644 --- a/internal/service/iam/group.go +++ b/internal/service/iam/group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -66,7 +69,7 @@ func ResourceGroup() *schema.Resource { func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) input := &iam.CreateGroupInput{ @@ -95,7 +98,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group, err := FindGroupByName(ctx, conn, d.Id()) @@ -119,7 +122,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) o, n := d.GetChange("name") input := &iam.UpdateGroupInput{ @@ -141,7 +144,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) 
log.Printf("[DEBUG] Deleting IAM Group: %s", d.Id()) _, err := conn.DeleteGroupWithContext(ctx, &iam.DeleteGroupInput{ @@ -197,7 +200,7 @@ func DeleteGroupPolicyAttachments(ctx context.Context, conn *iam.IAM, groupName } if err != nil { - return fmt.Errorf("error listing IAM Group (%s) policy attachments for deletion: %w", groupName, err) + return fmt.Errorf("listing IAM Group (%s) policy attachments for deletion: %w", groupName, err) } for _, attachedPolicy := range attachedPolicies { @@ -213,7 +216,7 @@ func DeleteGroupPolicyAttachments(ctx context.Context, conn *iam.IAM, groupName } if err != nil { - return fmt.Errorf("error detaching IAM Group (%s) policy (%s): %w", groupName, aws.StringValue(attachedPolicy.PolicyArn), err) + return fmt.Errorf("detaching IAM Group (%s) policy (%s): %w", groupName, aws.StringValue(attachedPolicy.PolicyArn), err) } } @@ -236,7 +239,7 @@ func DeleteGroupPolicies(ctx context.Context, conn *iam.IAM, groupName string) e } if err != nil { - return fmt.Errorf("error listing IAM Group (%s) inline policies for deletion: %w", groupName, err) + return fmt.Errorf("listing IAM Group (%s) inline policies for deletion: %w", groupName, err) } for _, policyName := range inlinePolicies { @@ -252,7 +255,7 @@ func DeleteGroupPolicies(ctx context.Context, conn *iam.IAM, groupName string) e } if err != nil { - return fmt.Errorf("error deleting IAM Group (%s) inline policy (%s): %w", groupName, aws.StringValue(policyName), err) + return fmt.Errorf("deleting IAM Group (%s) inline policy (%s): %w", groupName, aws.StringValue(policyName), err) } } diff --git a/internal/service/iam/group_data_source.go b/internal/service/iam/group_data_source.go index aeafec1b204..c1f3d94bb06 100644 --- a/internal/service/iam/group_data_source.go +++ b/internal/service/iam/group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -64,7 +67,7 @@ func DataSourceGroup() *schema.Resource { func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) groupName := d.Get("group_name").(string) diff --git a/internal/service/iam/group_data_source_test.go b/internal/service/iam/group_data_source_test.go index 2e51da446a6..a428dd2ff19 100644 --- a/internal/service/iam/group_data_source_test.go +++ b/internal/service/iam/group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/group_membership.go b/internal/service/iam/group_membership.go index b80c81ef312..64c52ea2178 100644 --- a/internal/service/iam/group_membership.go +++ b/internal/service/iam/group_membership.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -49,7 +52,7 @@ func ResourceGroupMembership() *schema.Resource { func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group := d.Get("group").(string) userList := flex.ExpandStringValueSet(d.Get("users").(*schema.Set)) @@ -65,7 +68,7 @@ func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group := d.Get("group").(string) input := &iam.GetGroupInput{ @@ -131,7 +134,7 @@ func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, me func resourceGroupMembershipUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChange("users") { group := d.Get("group").(string) @@ -163,7 +166,7 @@ func resourceGroupMembershipUpdate(ctx context.Context, d *schema.ResourceData, func resourceGroupMembershipDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) userList := flex.ExpandStringValueSet(d.Get("users").(*schema.Set)) group := d.Get("group").(string) diff --git a/internal/service/iam/group_membership_test.go b/internal/service/iam/group_membership_test.go index 714957cb48f..d3339a9e823 100644 --- a/internal/service/iam/group_membership_test.go +++ b/internal/service/iam/group_membership_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -89,7 +92,7 @@ func TestAccIAMGroupMembership_paginatedUserList(t *testing.T) { func testAccCheckGroupMembershipDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_group_membership" { @@ -128,7 +131,7 @@ func testAccCheckGroupMembershipExists(ctx context.Context, n string, g *iam.Get return fmt.Errorf("No User name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) gn := rs.Primary.Attributes["group"] resp, err := conn.GetGroupWithContext(ctx, &iam.GetGroupInput{ diff --git a/internal/service/iam/group_policy.go b/internal/service/iam/group_policy.go index ea4898146d1..b0d5d4d9936 100644 --- a/internal/service/iam/group_policy.go +++ b/internal/service/iam/group_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -70,7 +73,7 @@ func ResourceGroupPolicy() *schema.Resource { func resourceGroupPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) policyDoc, err := verify.LegacyPolicyNormalize(d.Get("policy").(string)) if err != nil { @@ -102,7 +105,7 @@ func resourceGroupPolicyPut(ctx context.Context, d *schema.ResourceData, meta in func resourceGroupPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group, name, err := GroupPolicyParseID(d.Id()) if err != nil { @@ -175,7 +178,7 @@ func resourceGroupPolicyRead(ctx context.Context, d *schema.ResourceData, meta i func resourceGroupPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group, name, err := GroupPolicyParseID(d.Id()) if err != nil { diff --git a/internal/service/iam/group_policy_attachment.go b/internal/service/iam/group_policy_attachment.go index c9f92229b23..4686bb34238 100644 --- a/internal/service/iam/group_policy_attachment.go +++ b/internal/service/iam/group_policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -45,7 +48,7 @@ func ResourceGroupPolicyAttachment() *schema.Resource { func resourceGroupPolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group := d.Get("group").(string) arn := d.Get("policy_arn").(string) @@ -63,7 +66,7 @@ func resourceGroupPolicyAttachmentCreate(ctx context.Context, d *schema.Resource func resourceGroupPolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group := d.Get("group").(string) arn := d.Get("policy_arn").(string) // Human friendly ID for error messages since d.Id() is non-descriptive @@ -122,7 +125,7 @@ func resourceGroupPolicyAttachmentRead(ctx context.Context, d *schema.ResourceDa func resourceGroupPolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) group := d.Get("group").(string) arn := d.Get("policy_arn").(string) diff --git a/internal/service/iam/group_policy_attachment_test.go b/internal/service/iam/group_policy_attachment_test.go index 807f10e0cfc..f6a7f9e12c7 100644 --- a/internal/service/iam/group_policy_attachment_test.go +++ b/internal/service/iam/group_policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -83,7 +86,7 @@ func testAccCheckGroupPolicyAttachmentExists(ctx context.Context, n string, c in return fmt.Errorf("No policy name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) group := rs.Primary.Attributes["group"] attachedPolicies, err := conn.ListAttachedGroupPoliciesWithContext(ctx, &iam.ListAttachedGroupPoliciesInput{ diff --git a/internal/service/iam/group_policy_test.go b/internal/service/iam/group_policy_test.go index 0d37ceec59a..e355e038d09 100644 --- a/internal/service/iam/group_policy_test.go +++ b/internal/service/iam/group_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -197,7 +200,7 @@ func TestAccIAMGroupPolicy_unknownsInPolicy(t *testing.T) { func testAccCheckGroupPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_group_policy" { @@ -234,7 +237,7 @@ func testAccCheckGroupPolicyDestroy(ctx context.Context) resource.TestCheckFunc func testAccCheckGroupPolicyDisappears(ctx context.Context, out *iam.GetGroupPolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) params := &iam.DeleteGroupPolicyInput{ PolicyName: out.PolicyName, @@ -265,7 +268,7 @@ func testAccCheckGroupPolicyExists(ctx context.Context, return fmt.Errorf("Not Found: %s", iamGroupPolicyResource) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) group, name, err := 
tfiam.GroupPolicyParseID(policy.Primary.ID) if err != nil { return err diff --git a/internal/service/iam/group_test.go b/internal/service/iam/group_test.go index 6fdf40efde0..d5e24d9bef2 100644 --- a/internal/service/iam/group_test.go +++ b/internal/service/iam/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -145,7 +148,7 @@ func TestAccIAMGroup_path(t *testing.T) { func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_group" { @@ -180,7 +183,7 @@ func testAccCheckGroupExists(ctx context.Context, n string, v *iam.Group) resour return fmt.Errorf("No IAM Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindGroupByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iam/instance_profile.go b/internal/service/iam/instance_profile.go index 3a73dcfc131..7f3a3fdb9f9 100644 --- a/internal/service/iam/instance_profile.go +++ b/internal/service/iam/instance_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -90,13 +93,13 @@ func ResourceInstanceProfile() *schema.Resource { func resourceInstanceProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &iam.CreateInstanceProfileInput{ InstanceProfileName: aws.String(name), Path: aws.String(d.Get("path").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateInstanceProfileWithContext(ctx, input) @@ -131,7 +134,7 @@ func resourceInstanceProfileCreate(ctx context.Context, d *schema.ResourceData, } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := instanceProfileCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -149,7 +152,7 @@ func resourceInstanceProfileCreate(ctx context.Context, d *schema.ResourceData, func resourceInstanceProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) instanceProfile, err := FindInstanceProfileByName(ctx, conn, d.Id()) @@ -190,14 +193,14 @@ func resourceInstanceProfileRead(ctx context.Context, d *schema.ResourceData, me } d.Set("unique_id", instanceProfile.InstanceProfileId) - SetTagsOut(ctx, instanceProfile.Tags) + setTagsOut(ctx, instanceProfile.Tags) return diags } func resourceInstanceProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChange("role") { o, n := d.GetChange("role") @@ -239,7 +242,7 @@ func resourceInstanceProfileUpdate(ctx context.Context, d *schema.ResourceData, func resourceInstanceProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if v, ok := d.GetOk("role"); ok { err := instanceProfileRemoveRole(ctx, conn, d.Id(), v.(string)) diff --git a/internal/service/iam/instance_profile_data_source.go b/internal/service/iam/instance_profile_data_source.go index 3e148f91435..5151dd20c3a 100644 --- a/internal/service/iam/instance_profile_data_source.go +++ b/internal/service/iam/instance_profile_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -51,7 +54,7 @@ func DataSourceInstanceProfile() *schema.Resource { func dataSourceInstanceProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) instanceProfile, err := FindInstanceProfileByName(ctx, conn, name) diff --git a/internal/service/iam/instance_profile_data_source_test.go b/internal/service/iam/instance_profile_data_source_test.go index bb5bcf9cdee..f1715cb2d60 100644 --- a/internal/service/iam/instance_profile_data_source_test.go +++ b/internal/service/iam/instance_profile_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/instance_profile_test.go b/internal/service/iam/instance_profile_test.go index 667627d3be6..073833c365c 100644 --- a/internal/service/iam/instance_profile_test.go +++ b/internal/service/iam/instance_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -229,7 +232,7 @@ func TestAccIAMInstanceProfile_Disappears_role(t *testing.T) { func testAccCheckInstanceProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_instance_profile" { @@ -264,7 +267,7 @@ func testAccCheckInstanceProfileExists(ctx context.Context, n string, v *iam.Ins return fmt.Errorf("No IAM Instance Profile ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindInstanceProfileByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iam/instance_profiles_data_source.go b/internal/service/iam/instance_profiles_data_source.go index 932eefa2815..deaad9086a7 100644 --- a/internal/service/iam/instance_profiles_data_source.go +++ b/internal/service/iam/instance_profiles_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -41,7 +44,7 @@ func DataSourceInstanceProfiles() *schema.Resource { } func dataSourceInstanceProfilesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) roleName := d.Get("role_name").(string) input := &iam.ListInstanceProfilesForRoleInput{ diff --git a/internal/service/iam/instance_profiles_data_source_test.go b/internal/service/iam/instance_profiles_data_source_test.go index 46f23f01cd7..4a3c6403f66 100644 --- a/internal/service/iam/instance_profiles_data_source_test.go +++ b/internal/service/iam/instance_profiles_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/openid_connect_provider.go b/internal/service/iam/openid_connect_provider.go index ea271c6c92e..b0db1ce5a6d 100644 --- a/internal/service/iam/openid_connect_provider.go +++ b/internal/service/iam/openid_connect_provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -73,11 +76,11 @@ func ResourceOpenIDConnectProvider() *schema.Resource { func resourceOpenIDConnectProviderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) input := &iam.CreateOpenIDConnectProviderInput{ ClientIDList: flex.ExpandStringSet(d.Get("client_id_list").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), ThumbprintList: flex.ExpandStringList(d.Get("thumbprint_list").([]interface{})), Url: aws.String(d.Get("url").(string)), } @@ -98,7 +101,7 @@ func resourceOpenIDConnectProviderCreate(ctx context.Context, d *schema.Resource d.SetId(aws.StringValue(output.OpenIDConnectProviderArn)) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := openIDConnectProviderCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -116,7 +119,7 @@ func resourceOpenIDConnectProviderCreate(ctx context.Context, d *schema.Resource func resourceOpenIDConnectProviderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) output, err := FindOpenIDConnectProviderByARN(ctx, conn, d.Id()) @@ -135,14 +138,14 @@ func resourceOpenIDConnectProviderRead(ctx context.Context, d *schema.ResourceDa d.Set("thumbprint_list", aws.StringValueSlice(output.ThumbprintList)) d.Set("url", output.Url) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceOpenIDConnectProviderUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChange("thumbprint_list") { input := &iam.UpdateOpenIDConnectProviderThumbprintInput{ @@ -177,7 +180,7 @@ func resourceOpenIDConnectProviderUpdate(ctx context.Context, d *schema.Resource func resourceOpenIDConnectProviderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) log.Printf("[INFO] Deleting IAM OIDC Provider: %s", d.Id()) _, err := conn.DeleteOpenIDConnectProviderWithContext(ctx, &iam.DeleteOpenIDConnectProviderInput{ diff --git a/internal/service/iam/openid_connect_provider_data_source.go b/internal/service/iam/openid_connect_provider_data_source.go index 5348ee94304..0d567cb4548 100644 --- a/internal/service/iam/openid_connect_provider_data_source.go +++ b/internal/service/iam/openid_connect_provider_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -52,7 +55,7 @@ func DataSourceOpenIDConnectProvider() *schema.Resource { } func dataSourceOpenIDConnectProviderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &iam.GetOpenIDConnectProviderInput{} @@ -64,11 +67,11 @@ func dataSourceOpenIDConnectProviderRead(ctx context.Context, d *schema.Resource oidcpEntry, err := dataSourceGetOpenIDConnectProviderByURL(ctx, conn, url) if err != nil { - return diag.Errorf("error finding IAM OIDC Provider by url (%s): %s", url, err) + return diag.Errorf("finding IAM OIDC Provider by url (%s): %s", url, err) } if oidcpEntry == nil { - return diag.Errorf("error finding IAM OIDC Provider by url (%s): not found", url) + return diag.Errorf("finding IAM OIDC Provider by url (%s): not found", url) } input.OpenIDConnectProviderArn = oidcpEntry.Arn } @@ -76,7 +79,7 @@ func dataSourceOpenIDConnectProviderRead(ctx context.Context, d *schema.Resource resp, err := conn.GetOpenIDConnectProviderWithContext(ctx, input) if err != nil { - return diag.Errorf("error reading IAM OIDC Provider: %s", err) + return diag.Errorf("reading IAM OIDC Provider: %s", err) } d.SetId(aws.StringValue(input.OpenIDConnectProviderArn)) @@ -86,7 +89,7 @@ func dataSourceOpenIDConnectProviderRead(ctx context.Context, d *schema.Resource d.Set("thumbprint_list", flex.FlattenStringList(resp.ThumbprintList)) if err := d.Set("tags", KeyValueTags(ctx, resp.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil @@ -124,7 +127,7 @@ func dataSourceGetOpenIDConnectProviderByURL(ctx context.Context, conn *iam.IAM, func urlFromOpenIDConnectProviderARN(arn string) (string, error) { parts := 
strings.SplitN(arn, "/", 2) if len(parts) != 2 { - return "", fmt.Errorf("error reading OpenID Connect Provider expected the arn to be like: arn:PARTITION:iam::ACCOUNT:oidc-provider/URL but got: %s", arn) + return "", fmt.Errorf("reading OpenID Connect Provider expected the arn to be like: arn:PARTITION:iam::ACCOUNT:oidc-provider/URL but got: %s", arn) } return parts[1], nil } diff --git a/internal/service/iam/openid_connect_provider_data_source_test.go b/internal/service/iam/openid_connect_provider_data_source_test.go index 90c8e8a3691..b460ca65e64 100644 --- a/internal/service/iam/openid_connect_provider_data_source_test.go +++ b/internal/service/iam/openid_connect_provider_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/openid_connect_provider_test.go b/internal/service/iam/openid_connect_provider_test.go index bc1ee0f80e1..c6761d1c03e 100644 --- a/internal/service/iam/openid_connect_provider_test.go +++ b/internal/service/iam/openid_connect_provider_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -163,7 +166,7 @@ func TestAccIAMOpenIDConnectProvider_clientIDListOrder(t *testing.T) { func testAccCheckOpenIDConnectProviderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_openid_connect_provider" { @@ -198,7 +201,7 @@ func testAccCheckOpenIDConnectProviderExists(ctx context.Context, n string) reso return fmt.Errorf("No IAM OIDC Provider ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) _, err := tfiam.FindOpenIDConnectProviderByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iam/policy.go b/internal/service/iam/policy.go index 961db0e4d57..68302a637f6 100644 --- a/internal/service/iam/policy.go +++ b/internal/service/iam/policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -100,7 +103,7 @@ func ResourcePolicy() *schema.Resource { func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { @@ -113,7 +116,7 @@ func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta inte Path: aws.String(d.Get("path").(string)), PolicyDocument: aws.String(policy), PolicyName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreatePolicyWithContext(ctx, input) @@ -132,7 +135,7 @@ func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta inte d.SetId(aws.StringValue(output.Policy.Arn)) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := policyCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -150,7 +153,7 @@ func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) type policyWithVersion struct { policy *iam.Policy @@ -194,7 +197,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("path", policy.Path) d.Set("policy_id", policy.PolicyId) - SetTagsOut(ctx, policy.Tags) + setTagsOut(ctx, policy.Tags) policyDocument, err := url.QueryUnescape(aws.StringValue(output.policyVersion.Document)) @@ -214,7 +217,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChangesExcept("tags", "tags_all") { if err := policyPruneVersions(ctx, conn, d.Id()); err != nil { @@ -259,7 +262,7 @@ func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) // Delete non-default policy versions. versions, err := findPolicyVersionsByARN(ctx, conn, d.Id()) diff --git a/internal/service/iam/policy_attachment.go b/internal/service/iam/policy_attachment.go index 3375ddc7358..5cab1ff5ca1 100644 --- a/internal/service/iam/policy_attachment.go +++ b/internal/service/iam/policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -60,7 +63,7 @@ func ResourcePolicyAttachment() *schema.Resource { func resourcePolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) arn := d.Get("policy_arn").(string) @@ -91,7 +94,7 @@ func resourcePolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, func resourcePolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) arn := d.Get("policy_arn").(string) name := d.Get("name").(string) @@ -146,7 +149,7 @@ func resourcePolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, m func resourcePolicyAttachmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) var userErr, roleErr, groupErr error @@ -167,7 +170,7 @@ func resourcePolicyAttachmentUpdate(ctx context.Context, d *schema.ResourceData, func resourcePolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) arn := d.Get("policy_arn").(string) users := flex.ExpandStringSet(d.Get("users").(*schema.Set)) diff --git a/internal/service/iam/policy_attachment_test.go b/internal/service/iam/policy_attachment_test.go index 882318067b1..ff5394a586a 100644 --- a/internal/service/iam/policy_attachment_test.go +++ b/internal/service/iam/policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -198,7 +201,7 @@ func testAccCheckPolicyAttachmentExists(ctx context.Context, n string, c int64, return fmt.Errorf("No policy name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) arn := rs.Primary.Attributes["policy_arn"] resp, err := conn.GetPolicyWithContext(ctx, &iam.GetPolicyInput{ diff --git a/internal/service/iam/policy_data_source.go b/internal/service/iam/policy_data_source.go index 3b7c93fbe29..9e3696bee1a 100644 --- a/internal/service/iam/policy_data_source.go +++ b/internal/service/iam/policy_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -62,7 +65,7 @@ func DataSourcePolicy() *schema.Resource { func dataSourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig arn := d.Get("arn").(string) diff --git a/internal/service/iam/policy_data_source_test.go b/internal/service/iam/policy_data_source_test.go index 761727477c2..0c758bf21af 100644 --- a/internal/service/iam/policy_data_source_test.go +++ b/internal/service/iam/policy_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/policy_document_data_source.go b/internal/service/iam/policy_document_data_source.go index 3c79e46422b..0797c79fc2b 100644 --- a/internal/service/iam/policy_document_data_source.go +++ b/internal/service/iam/policy_document_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -306,7 +309,7 @@ func dataSourcePolicyDocumentMakeConditions(in []interface{}, version string) (I version, ) if err != nil { - return nil, fmt.Errorf("error reading values: %w", err) + return nil, fmt.Errorf("reading values: %w", err) } itemValues := out[i].Values.([]string) if len(itemValues) == 1 { @@ -330,7 +333,7 @@ func dataSourcePolicyDocumentMakePrincipals(in []interface{}, version string) (I ), version, ) if err != nil { - return nil, fmt.Errorf("error reading identifiers: %w", err) + return nil, fmt.Errorf("reading identifiers: %w", err) } } return IAMPolicyStatementPrincipalSet(out), nil diff --git a/internal/service/iam/policy_document_data_source_test.go b/internal/service/iam/policy_document_data_source_test.go index c39136cd0b4..bea378c0d5c 100644 --- a/internal/service/iam/policy_document_data_source_test.go +++ b/internal/service/iam/policy_document_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -52,6 +55,25 @@ func TestAccIAMPolicyDocumentDataSource_singleConditionValue(t *testing.T) { }) } +func TestAccIAMPolicyDocumentDataSource_multipleConditionKeys(t *testing.T) { + ctx := acctest.Context(t) + dataSourceName := "data.aws_iam_policy_document.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, iam.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccPolicyDocumentDataSourceConfig_multipleConditionKeys, + Check: resource.ComposeTestCheckFunc( + acctest.CheckResourceAttrEquivalentJSON(dataSourceName, "json", testAccPolicyDocumentConfig_multipleConditionKeys_ExpectedJSON), + ), + }, + }, + }) +} + func TestAccIAMPolicyDocumentDataSource_conditionWithBoolValue(t *testing.T) { ctx := acctest.Context(t) resource.ParallelTest(t, resource.TestCase{ @@ -588,6 +610,58 @@ const testAccPolicyDocumentConfig_SingleConditionValue_ExpectedJSON = `{ ] }` +const testAccPolicyDocumentDataSourceConfig_multipleConditionKeys = ` +data "aws_iam_policy_document" "test" { + statement { + sid = "AWSCloudTrailWrite20150319" + + effect = "Allow" + + principals { + type = "Service" + identifiers = ["cloudtrail.amazonaws.com"] + } + + actions = ["s3:PutObject"] + + resources = ["*"] + + condition { + test = "StringEquals" + variable = "s3:x-amz-acl" + values = ["bucket-owner-full-control"] + } + condition { + test = "StringEquals" + variable = "aws:SourceArn" + values = ["some-other-value"] + } + } +} +` + +var testAccPolicyDocumentConfig_multipleConditionKeys_ExpectedJSON = `{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AWSCloudTrailWrite20150319", + "Effect": "Allow", + "Action": "s3:PutObject", + "Resource": "*", + "Principal": { + "Service": "cloudtrail.amazonaws.com" + }, + "Condition": { + "StringEquals": { + "s3:x-amz-acl": 
"bucket-owner-full-control", + "aws:SourceArn": "some-other-value" + } + } + } + ] +} +` + var testAccPolicyDocumentDataSourceConfig_deprecated = ` data "aws_partition" "current" {} diff --git a/internal/service/iam/policy_model.go b/internal/service/iam/policy_model.go index 6c9edd7b184..35e223f2e88 100644 --- a/internal/service/iam/policy_model.go +++ b/internal/service/iam/policy_model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( diff --git a/internal/service/iam/policy_model_test.go b/internal/service/iam/policy_model_test.go index b7211dc482a..85d590ecdbd 100644 --- a/internal/service/iam/policy_model_test.go +++ b/internal/service/iam/policy_model_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( diff --git a/internal/service/iam/policy_test.go b/internal/service/iam/policy_test.go index 8d04d50107b..379192959fc 100644 --- a/internal/service/iam/policy_test.go +++ b/internal/service/iam/policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -346,7 +349,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string, v *iam.Policy) reso return fmt.Errorf("No IAM Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindPolicyByARN(ctx, conn, rs.Primary.ID) @@ -362,7 +365,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string, v *iam.Policy) reso func testAccCheckPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_policy" { diff --git a/internal/service/iam/principal_policy_simulation_data_source.go b/internal/service/iam/principal_policy_simulation_data_source.go new file mode 100644 index 00000000000..dbdd14d958f --- /dev/null +++ b/internal/service/iam/principal_policy_simulation_data_source.go @@ -0,0 +1,337 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package iam + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +// @SDKDataSource("aws_iam_principal_policy_simulation") +func DataSourcePrincipalPolicySimulation() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourcePrincipalPolicySimulationRead, + + Schema: map[string]*schema.Schema{ + // Arguments + "action_names": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `One or more names of actions, like "iam:CreateUser", that should be included in the simulation.`, + }, + "caller_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + Description: `ARN of a user to use as the caller of the simulated requests. 
If not specified, defaults to the principal specified in policy_source_arn, if it is a user ARN.`, + }, + "context": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Required: true, + Description: `The key name of the context entry, such as "aws:CurrentTime".`, + }, + "type": { + Type: schema.TypeString, + Required: true, + Description: `The type that the simulator should use to interpret the strings given in argument "values".`, + }, + "values": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `One or more values to assign to the context key, given as a string in a syntax appropriate for the selected value type.`, + }, + }, + }, + Description: `Each block specifies one item of additional context entry to include in the simulated requests. These are the additional properties used in the 'Condition' element of an IAM policy, and in dynamic value interpolations.`, + }, + "permissions_boundary_policies_json": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + }, + Description: `Additional permission boundary policies to use in the simulation.`, + }, + "additional_policies_json": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringIsJSON, + }, + Description: `Additional principal-based policies to use in the simulation.`, + }, + "policy_source_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + Description: `ARN of the principal (e.g. user, role) whose existing configured access policies will be used as the basis for the simulation. 
If you specify a role ARN here, you can also set caller_arn to simulate a particular user acting with the given role.`, + }, + "resource_arns": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidARN, + }, + Description: `ARNs of specific resources to use as the targets of the specified actions during simulation. If not specified, the simulator assumes "*" which represents general access across all resources.`, + }, + "resource_handling_option": { + Type: schema.TypeString, + Optional: true, + Description: `Specifies the type of simulation to run. Some API operations need a particular resource handling option in order to produce a correct result.`, + }, + "resource_owner_account_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidAccountID, + Description: `An AWS account ID to use as the simulated owner for any resource whose ARN does not include a specific owner account ID. Defaults to the account given as part of caller_arn.`, + }, + "resource_policy_json": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringIsJSON, + Description: `A resource policy to associate with all of the target resources for simulation purposes. 
The policy simulator does not automatically retrieve resource-level policies, so if a resource policy is crucial to your test then you must specify here the same policy document associated with your target resource(s).`, + }, + + // Result Attributes + "all_allowed": { + Type: schema.TypeBool, + Computed: true, + Description: `A summary of the results attribute which is true if all of the results have decision "allowed", and false otherwise.`, + }, + "results": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action_name": { + Type: schema.TypeString, + Computed: true, + Description: `The name of the action whose simulation this result is describing.`, + }, + "decision": { + Type: schema.TypeString, + Computed: true, + Description: `The exact decision keyword returned by the policy simulator: "allowed", "explicitDeny", or "implicitDeny".`, + }, + "allowed": { + Type: schema.TypeBool, + Computed: true, + Description: `A summary of attribute "decision" which is true only if the decision is "allowed".`, + }, + "decision_details": { + Type: schema.TypeMap, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `A mapping of various additional details that are relevant to the decision, exactly as returned by the policy simulator.`, + }, + "resource_arn": { + Type: schema.TypeString, + Computed: true, + Description: `ARN of the resource that the action was tested against.`, + }, + "matched_statements": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "source_policy_id": { + Type: schema.TypeString, + Computed: true, + Description: `Identifier of one of the policies used as input to the simulation.`, + }, + "source_policy_type": { + Type: schema.TypeString, + Computed: true, + Description: `The type of the policy identified in source_policy_id.`, + }, + // NOTE: start position and end position + // omitted right now 
because they would + // ideally be singleton objects with + // column/line attributes, but this SDK + // can't support that. Maybe we later adopt + // the new framework and add support for + // those. + }, + }, + Description: `Detail about which specific policies contributed to this result.`, + }, + "missing_context_keys": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `Set of context entry keys that were needed for one or more of the relevant policies but not included in the request. You must specify suitable values for all context keys used in all of the relevant policies in order to obtain a correct simulation result.`, + }, + // NOTE: organizations decision detail, permissions + // boundary decision detail, and resource-specific + // results omitted for now because it isn't clear + // that they will be useful and they would make the + // results of this data source considerably larger + // and more complicated. + }, + }, + }, + "id": { + Type: schema.TypeString, + Computed: true, + Description: `Do not use`, + }, + }, + } +} + +func dataSourcePrincipalPolicySimulationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).IAMConn(ctx) + + setAsAWSStringSlice := func(raw interface{}) []*string { + if raw.(*schema.Set).Len() == 0 { + return nil + } + return flex.ExpandStringSet(raw.(*schema.Set)) + } + + input := &iam.SimulatePrincipalPolicyInput{ + ActionNames: setAsAWSStringSlice(d.Get("action_names")), + PermissionsBoundaryPolicyInputList: setAsAWSStringSlice(d.Get("permissions_boundary_policies_json")), + PolicyInputList: setAsAWSStringSlice(d.Get("additional_policies_json")), + PolicySourceArn: aws.String(d.Get("policy_source_arn").(string)), + ResourceArns: setAsAWSStringSlice(d.Get("resource_arns")), + } + + for _, entryRaw := range d.Get("context").(*schema.Set).List() { + entryRaw := 
entryRaw.(map[string]interface{}) + entry := &iam.ContextEntry{ + ContextKeyName: aws.String(entryRaw["key"].(string)), + ContextKeyType: aws.String(entryRaw["type"].(string)), + ContextKeyValues: setAsAWSStringSlice(entryRaw["values"]), + } + input.ContextEntries = append(input.ContextEntries, entry) + } + + if v := d.Get("caller_arn").(string); v != "" { + input.CallerArn = aws.String(v) + } + if v := d.Get("resource_handling_option").(string); v != "" { + input.ResourceHandlingOption = aws.String(v) + } + if v := d.Get("resource_owner_account_id").(string); v != "" { + input.ResourceOwner = aws.String(v) + } + if v := d.Get("resource_policy_json").(string); v != "" { + input.ResourcePolicy = aws.String(v) + } + + // We are going to keep fetching through potentially multiple pages of + // results in order to return a complete result, so we'll ask the API + // to return as much as possible in each request to minimize the + // round-trips. + input.MaxItems = aws.Int64(1000) + + var results []*iam.EvaluationResult + + for { // Terminates below, once we see a result that does not set IsTruncated. + output, err := conn.SimulatePrincipalPolicyWithContext(ctx, input) + if err != nil { + return sdkdiag.AppendErrorf(diags, "simulating IAM Principal Policy: %s", err) + } + + results = append(results, output.EvaluationResults...) + + if !aws.BoolValue(output.IsTruncated) { + break // All done! + } + + // If we're making another request then we need to specify the marker + // to get the next page of results. + input.Marker = output.Marker + } + + // While we build the result we'll also tally up the number of allowed + // vs. denied decisions to use for our top-level "all_allowed" summary + // result. 
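The fetch loop above follows the marker-based pagination contract used by the IAM APIs: call, append the page of results, and keep going while `IsTruncated` is set, feeding `Marker` back into the next request. A minimal standalone sketch of that loop shape, using illustrative types rather than the real SDK structs:

```go
package main

import "fmt"

// page mimics the shape of a truncated AWS list/simulate response:
// a batch of items, a flag saying more are available, and an opaque
// marker to pass into the next request. Field names mirror the SDK
// convention but the types here are illustrative.
type page struct {
	Items       []string
	IsTruncated bool
	Marker      string
}

// fakeAPI stands in for SimulatePrincipalPolicyWithContext, returning
// results three at a time, keyed by the marker from the prior page.
func fakeAPI(marker string) page {
	all := []string{"a", "b", "c", "d", "e"}
	start := 0
	if marker == "3" {
		start = 3
	}
	end := start + 3
	if end > len(all) {
		end = len(all)
	}
	return page{
		Items:       all[start:end],
		IsTruncated: end < len(all),
		Marker:      fmt.Sprint(end),
	}
}

// collectAll drains every page using the same loop shape as the data
// source: append, check IsTruncated, and carry the marker forward.
func collectAll() []string {
	var out []string
	marker := ""
	for { // Terminates once a page does not set IsTruncated.
		p := fakeAPI(marker)
		out = append(out, p.Items...)
		if !p.IsTruncated {
			break
		}
		marker = p.Marker
	}
	return out
}

func main() {
	fmt.Println(collectAll()) // all five items, in order
}
```

Requesting a large `MaxItems`, as the data source does, just reduces how many times this loop has to go around; it does not change the termination condition.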
+ allowedCount := 0 + deniedCount := 0 + + rawResults := make([]interface{}, len(results)) + for i, result := range results { + rawResult := map[string]interface{}{} + rawResult["action_name"] = aws.StringValue(result.EvalActionName) + rawResult["decision"] = aws.StringValue(result.EvalDecision) + allowed := aws.StringValue(result.EvalDecision) == "allowed" + rawResult["allowed"] = allowed + if allowed { + allowedCount++ + } else { + deniedCount++ + } + if result.EvalResourceName != nil { + rawResult["resource_arn"] = aws.StringValue(result.EvalResourceName) + } + + var missingContextKeys []string + for _, mkk := range result.MissingContextValues { + if mkk != nil { + missingContextKeys = append(missingContextKeys, *mkk) + } + } + rawResult["missing_context_keys"] = missingContextKeys + + decisionDetails := make(map[string]string, len(result.EvalDecisionDetails)) + for k, pv := range result.EvalDecisionDetails { + if pv != nil { + decisionDetails[k] = aws.StringValue(pv) + } + } + rawResult["decision_details"] = decisionDetails + + rawMatchedStmts := make([]interface{}, len(result.MatchedStatements)) + for i, stmt := range result.MatchedStatements { + rawStmt := map[string]interface{}{ + "source_policy_id": stmt.SourcePolicyId, + "source_policy_type": stmt.SourcePolicyType, + } + rawMatchedStmts[i] = rawStmt + } + rawResult["matched_statements"] = rawMatchedStmts + + rawResults[i] = rawResult + } + d.Set("results", rawResults) + + // "all" are allowed only if there is at least one result and no other + // results were denied. We require at least one allowed here just as + // a safety-net against a confusing result from a degenerate request. 
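The rule described in the comment above — true only when at least one decision is "allowed" and none are denials, so an empty result set reads as false rather than vacuously true — can be captured in a small standalone helper. This is an illustrative sketch, not the provider's actual code:

```go
package main

import "fmt"

// allAllowed summarizes a slice of simulator decision keywords.
// It returns true only when there is at least one "allowed" decision
// and no denials; an empty slice yields false as a safety net.
func allAllowed(decisions []string) bool {
	allowed, denied := 0, 0
	for _, d := range decisions {
		if d == "allowed" {
			allowed++
		} else { // "explicitDeny" or "implicitDeny"
			denied++
		}
	}
	return allowed > 0 && denied == 0
}

func main() {
	fmt.Println(allAllowed([]string{"allowed", "allowed"}))      // true
	fmt.Println(allAllowed([]string{"allowed", "implicitDeny"})) // false
	fmt.Println(allAllowed(nil))                                 // false
}
```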
+ d.Set("all_allowed", allowedCount > 0 && deniedCount == 0) + + d.SetId("-") + + return diags +} diff --git a/internal/service/iam/principal_policy_simulation_data_source_test.go b/internal/service/iam/principal_policy_simulation_data_source_test.go new file mode 100644 index 00000000000..bc3a1c35174 --- /dev/null +++ b/internal/service/iam/principal_policy_simulation_data_source_test.go @@ -0,0 +1,214 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package iam_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/iam" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccIAMPrincipalPolicySimulationDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, iam.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccPrincipalPolicySimulationDataSourceConfig_main(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "all_allowed", "true"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.#", "1"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.0.action_name", "ec2:AssociateVpcCidrBlock"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.0.allowed", "true"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.0.decision", "allowed"), + + 
// IAM seems to generate the SourcePolicyId by concatenating + // together the username, the policy name, and some other + // hard-coded bits. Not sure if this is contractual, so + // if this turns out to change in future it may be better + // to test this in a different way. + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.0.matched_statements.0.source_policy_id", fmt.Sprintf("user_%s_%s", rName, rName)), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.0.matched_statements.0.source_policy_type", "IAM Policy"), + + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_explicit", "all_allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_explicit", "results.#", "1"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_explicit", "results.0.action_name", "ec2:AttachClassicLinkVpc"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_explicit", "results.0.allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_explicit", "results.0.decision", "explicitDeny"), + + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_implicit", "all_allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_implicit", "results.#", "1"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_implicit", "results.0.action_name", "ec2:AttachVpnGateway"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_implicit", "results.0.allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.deny_implicit", "results.0.decision", "implicitDeny"), + + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_context", "all_allowed", "true"), + 
resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_context", "results.#", "1"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_context", "results.0.action_name", "ec2:AttachInternetGateway"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_context", "results.0.allowed", "true"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_context", "results.0.decision", "allowed"), + + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_wrong_context", "all_allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_wrong_context", "results.#", "1"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_wrong_context", "results.0.action_name", "ec2:AttachInternetGateway"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_wrong_context", "results.0.allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_with_wrong_context", "results.0.decision", "implicitDeny"), + + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.multiple_mixed", "all_allowed", "false"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.multiple_mixed", "results.#", "2"), + + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.multiple_allow", "all_allowed", "true"), + resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.multiple_allow", "results.#", "2"), + + func(state *terraform.State) error { + vpcARN := state.RootModule().Outputs["vpc_arn"].Value.(string) + return resource.TestCheckResourceAttr("data.aws_iam_principal_policy_simulation.allow_simple", "results.0.resource_arn", vpcARN)(state) + }, + ), + }, + }, + }) +} + +func testAccPrincipalPolicySimulationDataSourceConfig_main(rName 
string) string { + return fmt.Sprintf(` +resource "aws_iam_user" "test" { + name = %[1]q +} + +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/16" + + tags = { + Name = %[1]q + } +} + +resource "aws_iam_user_policy" "test" { + name = %[1]q + user = aws_iam_user.test.name + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "ec2:AssociateVpcCidrBlock" + Effect = "Allow" + Resource = aws_vpc.test.arn + }, + { + Action = "ec2:AttachClassicLinkVpc" + Effect = "Deny" + Resource = aws_vpc.test.arn + }, + { + Action = "ec2:AttachInternetGateway" + Effect = "Allow" + Resource = aws_vpc.test.arn + Condition = { + StringEquals = { + "ec2:ResourceTag/Foo" = "bar" + } + } + }, + ] + }) +} + +data "aws_iam_principal_policy_simulation" "allow_simple" { + action_names = ["ec2:AssociateVpcCidrBlock"] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + depends_on = [aws_iam_user_policy.test] +} + +data "aws_iam_principal_policy_simulation" "deny_explicit" { + action_names = ["ec2:AttachClassicLinkVpc"] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + depends_on = [aws_iam_user_policy.test] +} + +data "aws_iam_principal_policy_simulation" "deny_implicit" { + # This one is implicit deny because our policy + # doesn't mention ec2:AttachVpnGateway at all. 
+ action_names = ["ec2:AttachVpnGateway"] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + depends_on = [aws_iam_user_policy.test] +} + +data "aws_iam_principal_policy_simulation" "allow_with_context" { + action_names = ["ec2:AttachInternetGateway"] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + context { + key = "ec2:ResourceTag/Foo" + type = "string" + values = ["bar"] + } + + depends_on = [aws_iam_user_policy.test] +} + +data "aws_iam_principal_policy_simulation" "allow_with_wrong_context" { + action_names = ["ec2:AttachInternetGateway"] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + context { + key = "ec2:ResourceTag/Foo" + type = "string" + values = ["baz"] + } + + depends_on = [aws_iam_user_policy.test] +} + +data "aws_iam_principal_policy_simulation" "multiple_mixed" { + action_names = [ + "ec2:AssociateVpcCidrBlock", + "ec2:AttachClassicLinkVpc", + ] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + depends_on = [aws_iam_user_policy.test] +} + +data "aws_iam_principal_policy_simulation" "multiple_allow" { + action_names = [ + "ec2:AssociateVpcCidrBlock", + "ec2:AttachInternetGateway", + ] + resource_arns = [aws_vpc.test.arn] + policy_source_arn = aws_iam_user.test.arn + + context { + key = "ec2:ResourceTag/Foo" + type = "string" + values = ["bar"] + } + + depends_on = [aws_iam_user_policy.test] +} + +output "vpc_arn" { + value = aws_vpc.test.arn +} +`, rName) +} diff --git a/internal/service/iam/role.go b/internal/service/iam/role.go index 243b1a27624..33048d10ade 100644 --- a/internal/service/iam/role.go +++ b/internal/service/iam/role.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -160,22 +163,6 @@ func ResourceRole() *schema.Resource { Optional: true, ValidateFunc: verify.ValidARN, }, - "role_last_used": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "region": { - Type: schema.TypeString, - Computed: true, - }, - "last_used_date": { - Type: schema.TypeString, - Computed: true, - }, - }, - }, - }, names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), "unique_id": { @@ -195,7 +182,7 @@ func resourceRoleImport(ctx context.Context, d *schema.ResourceData, meta interf func resourceRoleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) assumeRolePolicy, err := structure.NormalizeJsonString(d.Get("assume_role_policy").(string)) if err != nil { @@ -207,7 +194,7 @@ func resourceRoleCreate(ctx context.Context, d *schema.ResourceData, meta interf AssumeRolePolicyDocument: aws.String(assumeRolePolicy), Path: aws.String(d.Get("path").(string)), RoleName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -254,7 +241,7 @@ func resourceRoleCreate(ctx context.Context, d *schema.ResourceData, meta interf d.SetId(roleName) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := roleCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -272,7 +259,7 @@ func resourceRoleCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRoleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindRoleByName(ctx, conn, d.Id()) @@ -324,10 +311,6 @@ func resourceRoleRead(ctx context.Context, d *schema.ResourceData, meta interfac return sdkdiag.AppendErrorf(diags, "reading inline policies for IAM role %s, error: %s", d.Id(), err) } - if err := d.Set("role_last_used", flattenRoleLastUsed(role.RoleLastUsed)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting role_last_used: %s", err) - } - var configPoliciesList []*iam.PutRolePolicyInput if v := d.Get("inline_policy").(*schema.Set); v.Len() > 0 { configPoliciesList = expandRoleInlinePolicies(aws.StringValue(role.RoleName), v.List()) @@ -345,14 +328,14 @@ func resourceRoleRead(ctx context.Context, d *schema.ResourceData, meta interfac } d.Set("managed_policy_arns", managedPolicies) - SetTagsOut(ctx, role.Tags) + setTagsOut(ctx, role.Tags) return diags } func resourceRoleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChange("assume_role_policy") { assumeRolePolicy, err := structure.NormalizeJsonString(d.Get("assume_role_policy").(string)) @@ -521,7 +504,7 @@ func resourceRoleUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRoleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) hasInline := false if v, ok := 
d.GetOk("inline_policy"); ok && v.(*schema.Set).Len() > 0 { @@ -754,21 +737,6 @@ func deleteRoleInlinePolicies(ctx context.Context, conn *iam.IAM, roleName strin return nil } -func flattenRoleLastUsed(apiObject *iam.RoleLastUsed) []interface{} { - if apiObject == nil { - return nil - } - - tfMap := map[string]interface{}{ - "region": aws.StringValue(apiObject.Region), - } - - if apiObject.LastUsedDate != nil { - tfMap["last_used_date"] = apiObject.LastUsedDate.Format(time.RFC3339) - } - return []interface{}{tfMap} -} - func flattenRoleInlinePolicy(apiObject *iam.PutRolePolicyInput) map[string]interface{} { if apiObject == nil { return nil @@ -853,7 +821,7 @@ func expandRoleInlinePolicies(roleName string, tfList []interface{}) []*iam.PutR } func addRoleInlinePolicies(ctx context.Context, policies []*iam.PutRolePolicyInput, meta interface{}) error { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) var errs *multierror.Error for _, policy := range policies { @@ -871,7 +839,7 @@ func addRoleInlinePolicies(ctx context.Context, policies []*iam.PutRolePolicyInp } func addRoleManagedPolicies(ctx context.Context, roleName string, policies []*string, meta interface{}) error { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) var errs *multierror.Error for _, arn := range policies { @@ -885,7 +853,7 @@ func addRoleManagedPolicies(ctx context.Context, roleName string, policies []*st } func readRoleInlinePolicies(ctx context.Context, roleName string, meta interface{}) ([]*iam.PutRolePolicyInput, error) { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) policyNames, err := readRolePolicyNames(ctx, conn, roleName) if err != nil { diff --git a/internal/service/iam/role_data_source.go b/internal/service/iam/role_data_source.go index 44e41616dfb..1db1d793168 100644 --- a/internal/service/iam/role_data_source.go +++ b/internal/service/iam/role_data_source.go @@ 
-1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -79,7 +82,7 @@ func DataSourceRole() *schema.Resource { func dataSourceRoleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -131,3 +134,18 @@ func dataSourceRoleRead(ctx context.Context, d *schema.ResourceData, meta interf return diags } + +func flattenRoleLastUsed(apiObject *iam.RoleLastUsed) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{ + "region": aws.StringValue(apiObject.Region), + } + + if apiObject.LastUsedDate != nil { + tfMap["last_used_date"] = apiObject.LastUsedDate.Format(time.RFC3339) + } + return []interface{}{tfMap} +} diff --git a/internal/service/iam/role_data_source_test.go b/internal/service/iam/role_data_source_test.go index 9e26a86e50c..f39d538a836 100644 --- a/internal/service/iam/role_data_source_test.go +++ b/internal/service/iam/role_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -74,7 +77,6 @@ const testAccRoleDataSourceConfig_AssumeRolePolicy_ExpectedJSON = `{ "Version": "2012-10-17", "Statement": [ { - "Sid": "", "Effect": "Allow", "Action": "sts:AssumeRole", "Principal": { diff --git a/internal/service/iam/role_policy.go b/internal/service/iam/role_policy.go index a1bdd8894db..ffdfaee190a 100644 --- a/internal/service/iam/role_policy.go +++ b/internal/service/iam/role_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -77,7 +80,7 @@ func ResourceRolePolicy() *schema.Resource { func resourceRolePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) policy, err := verify.LegacyPolicyNormalize(d.Get("policy").(string)) if err != nil { @@ -109,7 +112,7 @@ func resourceRolePolicyPut(ctx context.Context, d *schema.ResourceData, meta int func resourceRolePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) role, name, err := RolePolicyParseID(d.Id()) if err != nil { @@ -177,7 +180,7 @@ func resourceRolePolicyRead(ctx context.Context, d *schema.ResourceData, meta in func resourceRolePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) role, name, err := RolePolicyParseID(d.Id()) if err != nil { diff --git a/internal/service/iam/role_policy_attachment.go b/internal/service/iam/role_policy_attachment.go index c507b186384..92835771597 100644 --- a/internal/service/iam/role_policy_attachment.go +++ b/internal/service/iam/role_policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -45,7 +48,7 @@ func ResourceRolePolicyAttachment() *schema.Resource { func resourceRolePolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) role := d.Get("role").(string) arn := d.Get("policy_arn").(string) @@ -63,7 +66,7 @@ func resourceRolePolicyAttachmentCreate(ctx context.Context, d *schema.ResourceD func resourceRolePolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) role := d.Get("role").(string) policyARN := d.Get("policy_arn").(string) // Human friendly ID for error messages since d.Id() is non-descriptive @@ -121,7 +124,7 @@ func resourceRolePolicyAttachmentRead(ctx context.Context, d *schema.ResourceDat func resourceRolePolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) role := d.Get("role").(string) arn := d.Get("policy_arn").(string) diff --git a/internal/service/iam/role_policy_attachment_test.go b/internal/service/iam/role_policy_attachment_test.go index 701b62f770a..13bada1cbd2 100644 --- a/internal/service/iam/role_policy_attachment_test.go +++ b/internal/service/iam/role_policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -128,7 +131,7 @@ func TestAccIAMRolePolicyAttachment_Disappears_role(t *testing.T) { func testAccCheckRolePolicyAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_role_policy_attachment" { @@ -168,7 +171,7 @@ func testAccCheckRolePolicyAttachmentExists(ctx context.Context, n string, c int return fmt.Errorf("No policy name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) role := rs.Primary.Attributes["role"] attachedPolicies, err := conn.ListAttachedRolePoliciesWithContext(ctx, &iam.ListAttachedRolePoliciesInput{ @@ -208,7 +211,7 @@ func testAccCheckRolePolicyAttachmentAttributes(policies []string, out *iam.List func testAccCheckRolePolicyAttachmentDisappears(ctx context.Context, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) rs, ok := s.RootModule().Resources[resourceName] diff --git a/internal/service/iam/role_policy_test.go b/internal/service/iam/role_policy_test.go index e2606ac00eb..31d440b18ad 100644 --- a/internal/service/iam/role_policy_test.go +++ b/internal/service/iam/role_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -277,7 +280,7 @@ func TestAccIAMRolePolicy_unknownsInPolicy(t *testing.T) { func testAccCheckRolePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_role_policy" { @@ -315,7 +318,7 @@ func testAccCheckRolePolicyDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckRolePolicyDisappears(ctx context.Context, out *iam.GetRolePolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) params := &iam.DeleteRolePolicyInput{ PolicyName: out.PolicyName, @@ -346,7 +349,7 @@ func testAccCheckRolePolicyExists(ctx context.Context, return fmt.Errorf("Not Found: %s", iamRolePolicyResource) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) role, name, err := tfiam.RolePolicyParseID(policy.Primary.ID) if err != nil { return err diff --git a/internal/service/iam/role_test.go b/internal/service/iam/role_test.go index a26c9e13515..f6f2b902320 100644 --- a/internal/service/iam/role_test.go +++ b/internal/service/iam/role_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -1038,7 +1041,7 @@ func TestAccIAMRole_ManagedPolicy_outOfBandAdditionRemovedEmpty(t *testing.T) { func testAccCheckRoleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_role" { @@ -1073,7 +1076,7 @@ func testAccCheckRoleExists(ctx context.Context, n string, v *iam.Role) resource return fmt.Errorf("No IAM Role ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindRoleByName(ctx, conn, rs.Primary.ID) @@ -1098,7 +1101,7 @@ func testAccAddRolePolicy(ctx context.Context, n string) resource.TestCheckFunc return fmt.Errorf("No Role name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) input := &iam.PutRolePolicyInput{ RoleName: aws.String(rs.Primary.ID), @@ -1136,7 +1139,7 @@ func testAccCheckRolePermissionsBoundary(role *iam.Role, expectedPermissionsBoun func testAccCheckRolePolicyDetachManagedPolicy(ctx context.Context, role *iam.Role, policyName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) var managedARN string input := &iam.ListAttachedRolePoliciesInput{ @@ -1170,7 +1173,7 @@ func testAccCheckRolePolicyDetachManagedPolicy(ctx context.Context, role *iam.Ro func testAccCheckRolePolicyAttachManagedPolicy(ctx context.Context, role *iam.Role, policyName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) var managedARN string input := &iam.ListPoliciesInput{ @@ -1206,7 +1209,7 @@ func testAccCheckRolePolicyAttachManagedPolicy(ctx context.Context, role *iam.Ro func testAccCheckRolePolicyAddInlinePolicy(ctx context.Context, role *iam.Role, inlinePolicy string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) _, err := conn.PutRolePolicyWithContext(ctx, &iam.PutRolePolicyInput{ PolicyDocument: aws.String(testAccRolePolicyExtraInlineConfig()), @@ -1220,7 +1223,7 @@ func testAccCheckRolePolicyAddInlinePolicy(ctx context.Context, role *iam.Role, func testAccCheckRolePolicyRemoveInlinePolicy(ctx context.Context, role *iam.Role, inlinePolicy string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) _, err := conn.DeleteRolePolicyWithContext(ctx, &iam.DeleteRolePolicyInput{ PolicyName: aws.String(inlinePolicy), diff --git a/internal/service/iam/roles_data_source.go b/internal/service/iam/roles_data_source.go index c63553624a6..1e685831475 100644 --- a/internal/service/iam/roles_data_source.go +++ b/internal/service/iam/roles_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -44,7 +47,7 @@ func DataSourceRoles() *schema.Resource { func dataSourceRolesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) input := &iam.ListRolesInput{} diff --git a/internal/service/iam/roles_data_source_test.go b/internal/service/iam/roles_data_source_test.go index ad0e2a52c6e..d1b05cbb277 100644 --- a/internal/service/iam/roles_data_source_test.go +++ b/internal/service/iam/roles_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/saml_provider.go b/internal/service/iam/saml_provider.go index 1ccc41086c8..a0ecba645e8 100644 --- a/internal/service/iam/saml_provider.go +++ b/internal/service/iam/saml_provider.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -65,13 +68,13 @@ func ResourceSAMLProvider() *schema.Resource { } func resourceSAMLProviderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) input := &iam.CreateSAMLProviderInput{ Name: aws.String(name), SAMLMetadataDocument: aws.String(d.Get("saml_metadata_document").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateSAMLProviderWithContext(ctx, input) @@ -90,7 +93,7 @@ func resourceSAMLProviderCreate(ctx context.Context, d *schema.ResourceData, met d.SetId(aws.StringValue(output.SAMLProviderArn)) // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := samlProviderCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. @@ -107,7 +110,7 @@ func resourceSAMLProviderCreate(ctx context.Context, d *schema.ResourceData, met } func resourceSAMLProviderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) output, err := FindSAMLProviderByARN(ctx, conn, d.Id()) @@ -136,13 +139,13 @@ func resourceSAMLProviderRead(ctx context.Context, d *schema.ResourceData, meta d.Set("valid_until", nil) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return nil } func resourceSAMLProviderUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &iam.UpdateSAMLProviderInput{ @@ -176,7 +179,7 @@ func resourceSAMLProviderUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceSAMLProviderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) log.Printf("[DEBUG] Deleting IAM SAML Provider: %s", d.Id()) _, err := conn.DeleteSAMLProviderWithContext(ctx, &iam.DeleteSAMLProviderInput{ diff --git a/internal/service/iam/saml_provider_data_source.go b/internal/service/iam/saml_provider_data_source.go index 77d33373211..060404c73c6 100644 --- a/internal/service/iam/saml_provider_data_source.go +++ b/internal/service/iam/saml_provider_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -45,7 +48,7 @@ func DataSourceSAMLProvider() *schema.Resource { } func dataSourceSAMLProviderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig arn := d.Get("arn").(string) @@ -79,7 +82,7 @@ func dataSourceSAMLProviderRead(ctx context.Context, d *schema.ResourceData, met //lintignore:AWSR002 if err := d.Set("tags", tags.Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/iam/saml_provider_data_source_test.go b/internal/service/iam/saml_provider_data_source_test.go index 49c638ad064..76c20213e23 100644 --- a/internal/service/iam/saml_provider_data_source_test.go +++ b/internal/service/iam/saml_provider_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/saml_provider_test.go b/internal/service/iam/saml_provider_test.go index c16c861611f..6bb82b6cd93 100644 --- a/internal/service/iam/saml_provider_test.go +++ b/internal/service/iam/saml_provider_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -128,7 +131,7 @@ func TestAccIAMSAMLProvider_disappears(t *testing.T) { func testAccCheckSAMLProviderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_saml_provider" { @@ -163,7 +166,7 @@ func testAccCheckSAMLProviderExists(ctx context.Context, n string) resource.Test return fmt.Errorf("No IAM SAML Provider ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) _, err := tfiam.FindSAMLProviderByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iam/server_certificate.go b/internal/service/iam/server_certificate.go index 4df084dcd7e..65d7305aef5 100644 --- a/internal/service/iam/server_certificate.go +++ b/internal/service/iam/server_certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -106,14 +109,14 @@ func ResourceServerCertificate() *schema.Resource { func resourceServerCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) sslCertName := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &iam.UploadServerCertificateInput{ CertificateBody: aws.String(d.Get("certificate_body").(string)), PrivateKey: aws.String(d.Get("private_key").(string)), ServerCertificateName: aws.String(sslCertName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("certificate_chain"); ok { @@ -141,7 +144,7 @@ func resourceServerCertificateCreate(ctx context.Context, d *schema.ResourceData d.Set("name", sslCertName) // Required for resource Read. // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := serverCertificateCreateTags(ctx, conn, sslCertName, tags) // If default tags only, continue. Otherwise, error. 
@@ -159,7 +162,7 @@ func resourceServerCertificateCreate(ctx context.Context, d *schema.ResourceData func resourceServerCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) cert, err := FindServerCertificateByName(ctx, conn, d.Get("name").(string)) @@ -192,14 +195,14 @@ func resourceServerCertificateRead(ctx context.Context, d *schema.ResourceData, d.Set("upload_date", nil) } - SetTagsOut(ctx, cert.Tags) + setTagsOut(ctx, cert.Tags) return diags } func resourceServerCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") @@ -221,7 +224,7 @@ func resourceServerCertificateUpdate(ctx context.Context, d *schema.ResourceData func resourceServerCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) log.Printf("[DEBUG] Deleting IAM Server Certificate: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, 15*time.Minute, func() (interface{}, error) { diff --git a/internal/service/iam/server_certificate_data_source.go b/internal/service/iam/server_certificate_data_source.go index 2a506769f6e..1387821a64f 100644 --- a/internal/service/iam/server_certificate_data_source.go +++ b/internal/service/iam/server_certificate_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -98,7 +101,7 @@ func (m CertificateByExpiration) Less(i, j int) bool { func dataSourceServerCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) var matcher = func(cert *iam.ServerCertificateMetadata) bool { return strings.HasPrefix(aws.StringValue(cert.ServerCertificateName), d.Get("name_prefix").(string)) diff --git a/internal/service/iam/server_certificate_data_source_test.go b/internal/service/iam/server_certificate_data_source_test.go index 1285e4ae342..3296005c716 100644 --- a/internal/service/iam/server_certificate_data_source_test.go +++ b/internal/service/iam/server_certificate_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/server_certificate_test.go b/internal/service/iam/server_certificate_test.go index a4f79d81c50..4fa8e8a2280 100644 --- a/internal/service/iam/server_certificate_test.go +++ b/internal/service/iam/server_certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -262,7 +265,7 @@ func testAccCheckCertExists(ctx context.Context, n string, v *iam.ServerCertific return fmt.Errorf("No IAM Server Certificate ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindServerCertificateByName(ctx, conn, rs.Primary.Attributes["name"]) @@ -278,7 +281,7 @@ func testAccCheckCertExists(ctx context.Context, n string, v *iam.ServerCertific func testAccCheckServerCertificateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_server_certificate" { diff --git a/internal/service/iam/service_linked_role.go b/internal/service/iam/service_linked_role.go index 6aab671cabb..61714187aef 100644 --- a/internal/service/iam/service_linked_role.go +++ b/internal/service/iam/service_linked_role.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -90,7 +93,7 @@ func ResourceServiceLinkedRole() *schema.Resource { func resourceServiceLinkedRoleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) serviceName := d.Get("aws_service_name").(string) input := &iam.CreateServiceLinkedRoleInput{ @@ -113,7 +116,7 @@ func resourceServiceLinkedRoleCreate(ctx context.Context, d *schema.ResourceData d.SetId(aws.StringValue(output.Role.Arn)) - if tags := GetTagsIn(ctx); len(tags) > 0 { + if tags := getTagsIn(ctx); len(tags) > 0 { _, roleName, _, err := DecodeServiceLinkedRoleID(d.Id()) if err != nil { @@ -137,7 +140,7 @@ func resourceServiceLinkedRoleCreate(ctx context.Context, d *schema.ResourceData func resourceServiceLinkedRoleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) serviceName, roleName, customSuffix, err := DecodeServiceLinkedRoleID(d.Id()) @@ -170,14 +173,14 @@ func resourceServiceLinkedRoleRead(ctx context.Context, d *schema.ResourceData, d.Set("path", role.Path) d.Set("unique_id", role.RoleId) - SetTagsOut(ctx, role.Tags) + setTagsOut(ctx, role.Tags) return diags } func resourceServiceLinkedRoleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) _, roleName, _, err := DecodeServiceLinkedRoleID(d.Id()) @@ -218,7 +221,7 @@ func resourceServiceLinkedRoleUpdate(ctx context.Context, d *schema.ResourceData func resourceServiceLinkedRoleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + 
conn := meta.(*conns.AWSClient).IAMConn(ctx) _, roleName, _, err := DecodeServiceLinkedRoleID(d.Id()) diff --git a/internal/service/iam/service_linked_role_test.go b/internal/service/iam/service_linked_role_test.go index c148c9c6bfc..1028d089baf 100644 --- a/internal/service/iam/service_linked_role_test.go +++ b/internal/service/iam/service_linked_role_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -303,7 +306,7 @@ func TestAccIAMServiceLinkedRole_disappears(t *testing.T) { func testAccCheckServiceLinkedRoleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_service_linked_role" { @@ -340,7 +343,7 @@ func testAccCheckServiceLinkedRoleExists(ctx context.Context, n string) resource return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) _, roleName, _, err := tfiam.DecodeServiceLinkedRoleID(rs.Primary.ID) diff --git a/internal/service/iam/service_package_gen.go b/internal/service/iam/service_package_gen.go index 10eb376a43a..2d02e382157 100644 --- a/internal/service/iam/service_package_gen.go +++ b/internal/service/iam/service_package_gen.go @@ -5,6 +5,10 @@ package iam import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + iam_sdkv1 "github.com/aws/aws-sdk-go/service/iam" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -53,6 +57,10 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac Factory: 
DataSourcePolicyDocument, TypeName: "aws_iam_policy_document", }, + { + Factory: DataSourcePrincipalPolicySimulation, + TypeName: "aws_iam_principal_policy_simulation", + }, { Factory: DataSourceRole, TypeName: "aws_iam_role", @@ -219,4 +227,13 @@ func (p *servicePackage) ServicePackageName() string { return names.IAM } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*iam_sdkv1.IAM, error) { + sess := config["session"].(*session_sdkv1.Session) + + return iam_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/iam/service_specific_credential.go b/internal/service/iam/service_specific_credential.go index c49ff79458e..0ef9533bf62 100644 --- a/internal/service/iam/service_specific_credential.go +++ b/internal/service/iam/service_specific_credential.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -65,7 +68,7 @@ func ResourceServiceSpecificCredential() *schema.Resource { func resourceServiceSpecificCredentialCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) input := &iam.CreateServiceSpecificCredentialInput{ ServiceName: aws.String(d.Get("service_name").(string)), @@ -100,7 +103,7 @@ func resourceServiceSpecificCredentialCreate(ctx context.Context, d *schema.Reso func resourceServiceSpecificCredentialRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) serviceName, userName, credID, err := DecodeServiceSpecificCredentialId(d.Id()) if err != nil { @@ -134,7 +137,7 @@ func resourceServiceSpecificCredentialRead(ctx context.Context, d *schema.Resour func resourceServiceSpecificCredentialUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) request := &iam.UpdateServiceSpecificCredentialInput{ ServiceSpecificCredentialId: aws.String(d.Get("service_specific_credential_id").(string)), @@ -151,7 +154,7 @@ func resourceServiceSpecificCredentialUpdate(ctx context.Context, d *schema.Reso func resourceServiceSpecificCredentialDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) request := &iam.DeleteServiceSpecificCredentialInput{ ServiceSpecificCredentialId: aws.String(d.Get("service_specific_credential_id").(string)), diff --git a/internal/service/iam/service_specific_credential_test.go 
b/internal/service/iam/service_specific_credential_test.go index 67f4cf1a84b..2d5137cda57 100644 --- a/internal/service/iam/service_specific_credential_test.go +++ b/internal/service/iam/service_specific_credential_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -169,7 +172,7 @@ func testAccCheckServiceSpecificCredentialExists(ctx context.Context, n string, if rs.Primary.ID == "" { return fmt.Errorf("No Server Cert ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) serviceName, userName, credId, err := tfiam.DecodeServiceSpecificCredentialId(rs.Primary.ID) if err != nil { @@ -189,7 +192,7 @@ func testAccCheckServiceSpecificCredentialExists(ctx context.Context, n string, func testAccCheckServiceSpecificCredentialDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_service_specific_credential" { diff --git a/internal/service/iam/session_context_data_source.go b/internal/service/iam/session_context_data_source.go index f23f8d2b74a..5c0c8670645 100644 --- a/internal/service/iam/session_context_data_source.go +++ b/internal/service/iam/session_context_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -49,7 +52,7 @@ func DataSourceSessionContext() *schema.Resource { func dataSourceSessionContextRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) arn := d.Get("arn").(string) diff --git a/internal/service/iam/session_context_data_source_test.go b/internal/service/iam/session_context_data_source_test.go index 710a92faa20..1f0a82ac0a2 100644 --- a/internal/service/iam/session_context_data_source_test.go +++ b/internal/service/iam/session_context_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/signing_certificate.go b/internal/service/iam/signing_certificate.go index 9f65c5adb83..ee0c0a7b950 100644 --- a/internal/service/iam/signing_certificate.go +++ b/internal/service/iam/signing_certificate.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" @@ -57,7 +60,7 @@ func ResourceSigningCertificate() *schema.Resource { func resourceSigningCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) createOpts := &iam.UploadSigningCertificateInput{ CertificateBody: aws.String(d.Get("certificate_body").(string)), @@ -92,7 +95,7 @@ func resourceSigningCertificateCreate(ctx context.Context, d *schema.ResourceDat func resourceSigningCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) certId, userName, err := DecodeSigningCertificateId(d.Id()) if err != nil { @@ -125,7 +128,7 @@ func resourceSigningCertificateRead(ctx context.Context, d *schema.ResourceData, func resourceSigningCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) certId, userName, err := DecodeSigningCertificateId(d.Id()) if err != nil { @@ -148,7 +151,7 @@ func resourceSigningCertificateUpdate(ctx context.Context, d *schema.ResourceDat func resourceSigningCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) log.Printf("[INFO] Deleting IAM Signing Certificate: %s", d.Id()) certId, userName, err := DecodeSigningCertificateId(d.Id()) diff --git a/internal/service/iam/signing_certificate_test.go 
b/internal/service/iam/signing_certificate_test.go index a24cd38279b..d8771640efd 100644 --- a/internal/service/iam/signing_certificate_test.go +++ b/internal/service/iam/signing_certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -132,7 +135,7 @@ func testAccCheckSigningCertificateExists(ctx context.Context, n string, cred *i if rs.Primary.ID == "" { return fmt.Errorf("No Server Cert ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) certId, userName, err := tfiam.DecodeSigningCertificateId(rs.Primary.ID) if err != nil { @@ -152,7 +155,7 @@ func testAccCheckSigningCertificateExists(ctx context.Context, n string, cred *i func testAccCheckSigningCertificateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_signing_certificate" { diff --git a/internal/service/iam/state_funcs.go b/internal/service/iam/state_funcs.go index 9f1df2a15c8..efe0fc52658 100644 --- a/internal/service/iam/state_funcs.go +++ b/internal/service/iam/state_funcs.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( diff --git a/internal/service/iam/sweep.go b/internal/service/iam/sweep.go index ff6b6c6f3b7..71528c8a478 100644 --- a/internal/service/iam/sweep.go +++ b/internal/service/iam/sweep.go @@ -1,21 +1,29 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep package iam import ( + "context" "fmt" "log" "regexp" "strings" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func init() { @@ -122,13 +130,13 @@ func init() { func sweepGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) input := &iam.ListGroupsInput{} var sweeperErrs *multierror.Error @@ -238,11 +246,11 @@ func sweepGroups(region string) error { func sweepInstanceProfile(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error @@ -271,7 +279,7 @@ func sweepInstanceProfile(region string) error { } log.Printf("[INFO] Sweeping IAM Instance Profile %q", name) - err := sweep.DeleteResource(ctx, r, d, client) + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error deleting IAM Instance Profile (%s): %w", name, err)) @@ -296,11 +304,11 @@ func 
sweepInstanceProfile(region string) error { func sweepOpenIDConnectProvider(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error @@ -312,7 +320,7 @@ func sweepOpenIDConnectProvider(region string) error { r := ResourceOpenIDConnectProvider() d := r.Data(nil) d.SetId(arn) - err := sweep.DeleteResource(ctx, r, d, client) + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error deleting IAM OIDC Provider (%s): %w", arn, err) @@ -336,11 +344,11 @@ func sweepOpenIDConnectProvider(region string) error { func sweepServiceSpecificCredentials(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error @@ -378,7 +386,7 @@ func sweepServiceSpecificCredentials(region string) error { r := ResourceServiceSpecificCredential() d := r.Data(nil) d.SetId(id) - err := sweep.DeleteResource(ctx, r, d, client) + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error deleting IAM Service Specific Credential (%s): %w", id, err) @@ -403,15 +411,17 @@ func sweepServiceSpecificCredentials(region string) error { func sweepPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + + conn := client.IAMConn(ctx) 
input := &iam.ListPoliciesInput{ Scope: aws.String(iam.PolicyScopeTypeLocal), } - var sweeperErrs *multierror.Error + + var sweepResources []sweep.Sweepable err = conn.ListPoliciesPagesWithContext(ctx, input, func(page *iam.ListPoliciesOutput, lastPage bool) bool { if page == nil { @@ -424,25 +434,7 @@ func sweepPolicies(region string) error { d := r.Data(nil) d.SetId(arn) - err := sweep.NewSweepResource(r, d, client).Delete(ctx, sweep.ThrottlingRetryTimeout) // nosemgrep:ci.semgrep.migrate.direct-CRUD-calls - - // Treat this sweeper as best effort for now. There are a lot of edge cases - // with lingering aws_iam_role resources in the HashiCorp testing accounts. - if tfawserr.ErrCodeEquals(err, iam.ErrCodeDeleteConflictException) { - log.Printf("[WARN] Ignoring IAM Policy (%s) deletion error: %s", arn, err) - continue - } - - if tfawserr.ErrMessageContains(err, "AccessDenied", "with an explicit deny") { - continue - } - - if err != nil { - sweeperErr := fmt.Errorf("error deleting IAM Policy (%s): %w", arn, err) - log.Printf("[ERROR] %s", sweeperErr) - sweeperErrs = multierror.Append(sweeperErrs, sweeperErr) - continue - } + sweepResources = append(sweepResources, newPolicySweeper(r, d, client)) } return !lastPage @@ -450,23 +442,52 @@ func sweepPolicies(region string) error { if sweep.SkipSweepError(err) { log.Printf("[WARN] Skipping IAM Policy sweep for %s: %s", region, err) - return sweeperErrs.ErrorOrNil() + return nil } if err != nil { - sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error retrieving IAM Policies: %w", err)) + return fmt.Errorf("retrieving IAM Policies: %w", err) } - return sweeperErrs.ErrorOrNil() + err = sweep.SweepOrchestrator(ctx, sweepResources) + if err != nil { + return fmt.Errorf("sweeping IAM Policies (%s): %w", region, err) + } + + return nil +} + +type policySweeper struct { + d *schema.ResourceData + sweepable sweep.Sweepable +} + +func newPolicySweeper(resource *schema.Resource, d *schema.ResourceData, client 
*conns.AWSClient) *policySweeper { + return &policySweeper{ + d: d, + sweepable: sdk.NewSweepResource(resource, d, client), + } +} + +func (ps policySweeper) Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error { + err := ps.sweepable.Delete(ctx, timeout, optFns...) + + accessDenied := regexp.MustCompile(`AccessDenied: .+ with an explicit deny`) + if err != nil && accessDenied.MatchString(err.Error()) { + log.Printf("[DEBUG] Skipping IAM Policy (%s): %s", ps.d.Id(), err) + return nil + } + + return err } func sweepRoles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) roles := make([]string, 0) err = conn.ListRolesPagesWithContext(ctx, &iam.ListRolesInput{}, func(page *iam.ListRolesOutput, lastPage bool) bool { @@ -522,11 +543,11 @@ func sweepRoles(region string) error { func sweepSAMLProvider(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error @@ -538,7 +559,7 @@ func sweepSAMLProvider(region string) error { r := ResourceSAMLProvider() d := r.Data(nil) d.SetId(arn) - err := sweep.DeleteResource(ctx, r, d, client) + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error deleting IAM SAML Provider (%s): %w", arn, err) @@ -562,11 +583,11 @@ func sweepSAMLProvider(region string) error { func sweepServerCertificates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err :=
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) err = conn.ListServerCertificatesPagesWithContext(ctx, &iam.ListServerCertificatesInput{}, func(out *iam.ListServerCertificatesOutput, lastPage bool) bool { for _, sc := range out.ServerCertificateMetadataList { @@ -596,11 +617,11 @@ func sweepServerCertificates(region string) error { func sweepServiceLinkedRoles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error input := &iam.ListRolesInput{ PathPrefix: aws.String("/aws-service-role/"), @@ -626,7 +647,7 @@ func sweepServiceLinkedRoles(region string) error { r := ResourceServiceLinkedRole() d := r.Data(nil) d.SetId(aws.StringValue(role.Arn)) - err := sweep.DeleteResource(ctx, r, d, client) + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error deleting IAM Service Linked Role (%s): %w", roleName, err) log.Printf("[ERROR] %s", sweeperErr) @@ -651,11 +672,11 @@ func sweepServiceLinkedRoles(region string) error { func sweepUsers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) prefixes := []string{ "test-user", "test_user", @@ -846,9 +867,11 @@ func roleNameFilter(name string) bool { // exhaustive list. 
prefixes := []string{ "another_rds", + "AmazonComprehendServiceRole-", "aws_batch_service_role", "aws_elastictranscoder_pipeline_tf_test", "batch_tf_batch_target-", + "codebuild-", "codepipeline-", "cognito_authenticated_", "cognito_unauthenticated_", @@ -889,14 +912,16 @@ func roleNameFilter(name string) bool { func sweepVirtualMFADevice(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error - input := &iam.ListVirtualMFADevicesInput{} + accessDenied := regexp.MustCompile(`AccessDenied: .+ with an explicit deny`) + + input := &iam.ListVirtualMFADevicesInput{} err = conn.ListVirtualMFADevicesPagesWithContext(ctx, input, func(page *iam.ListVirtualMFADevicesOutput, lastPage bool) bool { if len(page.VirtualMFADevices) == 0 { log.Printf("[INFO] No IAM Virtual MFA Devices to sweep") @@ -913,11 +938,19 @@ func sweepVirtualMFADevice(region string) error { r := ResourceVirtualMFADevice() d := r.Data(nil) d.SetId(serialNum) - err := sweep.DeleteResource(ctx, r, d, client) + + if err := sdk.ReadResource(ctx, r, d, client); err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("reading IAM Virtual MFA Device (%s): %w", serialNum, err)) + continue + } + + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { - sweeperErr := fmt.Errorf("error deleting IAM Virtual MFA Device (%s): %w", device, err) - log.Printf("[ERROR] %s", sweeperErr) - sweeperErrs = multierror.Append(sweeperErrs, sweeperErr) + if accessDenied.MatchString(err.Error()) { + log.Printf("[DEBUG] Skipping IAM Virtual MFA Device (%s): %s", serialNum, err) + continue + } + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("deleting IAM Virtual MFA Device (%s): %w", serialNum, err)) continue } } @@ 
-938,11 +971,11 @@ func sweepVirtualMFADevice(region string) error { func sweepSigningCertificates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IAMConn() + conn := client.IAMConn(ctx) var sweeperErrs *multierror.Error @@ -980,7 +1013,7 @@ func sweepSigningCertificates(region string) error { r := ResourceSigningCertificate() d := r.Data(nil) d.SetId(id) - err := sweep.DeleteResource(ctx, r, d, client) + err := sdk.DeleteResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error deleting IAM Signing Certificate (%s): %w", id, err) diff --git a/internal/service/iam/tags.go b/internal/service/iam/tags.go index a213d1f1cf4..fef0ebb1047 100644 --- a/internal/service/iam/tags.go +++ b/internal/service/iam/tags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build !generate // +build !generate diff --git a/internal/service/iam/tags_gen.go b/internal/service/iam/tags_gen.go index 78be8ccf163..e3b900f95f7 100644 --- a/internal/service/iam/tags_gen.go +++ b/internal/service/iam/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*iam.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns iam service tags from Context. +// getTagsIn returns iam service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*iam.Tag { +func getTagsIn(ctx context.Context) []*iam.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*iam.Tag { return nil } -// SetTagsOut sets iam service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*iam.Tag) { +// setTagsOut sets iam service tags in Context. +func setTagsOut(ctx context.Context, tags []*iam.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/iam/user.go b/internal/service/iam/user.go index 1204e5e9fc8..5f02a657d2f 100644 --- a/internal/service/iam/user.go +++ b/internal/service/iam/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -87,13 +90,13 @@ func ResourceUser() *schema.Resource { func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("name").(string) path := d.Get("path").(string) input := &iam.CreateUserInput{ Path: aws.String(path), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UserName: aws.String(name), } @@ -117,7 +120,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf d.SetId(aws.StringValue(output.User.UserName)) // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := userCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -135,7 +138,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, func() (interface{}, error) { return FindUserByName(ctx, conn, d.Id()) @@ -161,14 +164,14 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac } d.Set("unique_id", user.UserId) - SetTagsOut(ctx, user.Tags) + setTagsOut(ctx, user.Tags) return diags } func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChanges("name", "path") { o, n := d.GetChange("name") @@ -231,7 +234,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) // IAM Users must be removed from all groups before they can be deleted if err := DeleteUserGroupMemberships(ctx, conn, d.Id()); err != nil { @@ -322,7 +325,7 @@ func DeleteUserGroupMemberships(ctx context.Context, conn *iam.IAM, username str } err := conn.ListGroupsForUserPagesWithContext(ctx, listGroups, pageOfGroups) if err != nil { - return fmt.Errorf("Error removing user %q from all groups: %s", username, err) + return fmt.Errorf("removing user %q from all groups: %s", username, err) } for _, g := range groups { // use iam group membership func to remove user from all groups @@ -350,7 +353,7 @@ func DeleteUserSSHKeys(ctx context.Context, conn *iam.IAM, username string) erro } err = 
conn.ListSSHPublicKeysPagesWithContext(ctx, listSSHPublicKeys, pageOfListSSHPublicKeys) if err != nil { - return fmt.Errorf("Error removing public SSH keys of user %s: %w", username, err) + return fmt.Errorf("removing public SSH keys of user %s: %w", username, err) } for _, k := range publicKeys { _, err := conn.DeleteSSHPublicKeyWithContext(ctx, &iam.DeleteSSHPublicKeyInput{ @@ -358,7 +361,7 @@ func DeleteUserSSHKeys(ctx context.Context, conn *iam.IAM, username string) erro SSHPublicKeyId: aws.String(k), }) if err != nil { - return fmt.Errorf("Error deleting public SSH key %s: %w", k, err) + return fmt.Errorf("deleting public SSH key %s: %w", k, err) } } @@ -383,7 +386,7 @@ func DeleteUserVirtualMFADevices(ctx context.Context, conn *iam.IAM, username st } err = conn.ListVirtualMFADevicesPagesWithContext(ctx, listVirtualMFADevices, pageOfVirtualMFADevices) if err != nil { - return fmt.Errorf("Error removing Virtual MFA devices of user %s: %w", username, err) + return fmt.Errorf("removing Virtual MFA devices of user %s: %w", username, err) } for _, m := range VirtualMFADevices { _, err := conn.DeactivateMFADeviceWithContext(ctx, &iam.DeactivateMFADeviceInput{ @@ -391,13 +394,13 @@ func DeleteUserVirtualMFADevices(ctx context.Context, conn *iam.IAM, username st SerialNumber: aws.String(m), }) if err != nil { - return fmt.Errorf("Error deactivating Virtual MFA device %s: %w", m, err) + return fmt.Errorf("deactivating Virtual MFA device %s: %w", m, err) } _, err = conn.DeleteVirtualMFADeviceWithContext(ctx, &iam.DeleteVirtualMFADeviceInput{ SerialNumber: aws.String(m), }) if err != nil { - return fmt.Errorf("Error deleting Virtual MFA device %s: %w", m, err) + return fmt.Errorf("deleting Virtual MFA device %s: %w", m, err) } } @@ -419,7 +422,7 @@ func DeactivateUserMFADevices(ctx context.Context, conn *iam.IAM, username strin } err = conn.ListMFADevicesPagesWithContext(ctx, listMFADevices, pageOfMFADevices) if err != nil { - return fmt.Errorf("Error removing MFA 
devices of user %s: %w", username, err) + return fmt.Errorf("removing MFA devices of user %s: %w", username, err) } for _, m := range MFADevices { _, err := conn.DeactivateMFADeviceWithContext(ctx, &iam.DeactivateMFADeviceInput{ @@ -427,7 +430,7 @@ func DeactivateUserMFADevices(ctx context.Context, conn *iam.IAM, username strin SerialNumber: aws.String(m), }) if err != nil { - return fmt.Errorf("Error deactivating MFA device %s: %w", m, err) + return fmt.Errorf("deactivating MFA device %s: %w", m, err) } } @@ -457,7 +460,7 @@ func DeleteUserLoginProfile(ctx context.Context, conn *iam.IAM, username string) _, err = conn.DeleteLoginProfileWithContext(ctx, input) } if err != nil { - return fmt.Errorf("Error deleting Account Login Profile: %w", err) + return fmt.Errorf("deleting Account Login Profile: %w", err) } return nil @@ -466,7 +469,7 @@ func DeleteUserLoginProfile(ctx context.Context, conn *iam.IAM, username string) func DeleteUserAccessKeys(ctx context.Context, conn *iam.IAM, username string) error { accessKeys, err := FindAccessKeys(ctx, conn, username) if err != nil && !tfresource.NotFound(err) { - return fmt.Errorf("error listing access keys for IAM User (%s): %w", username, err) + return fmt.Errorf("listing access keys for IAM User (%s): %w", username, err) } var errs *multierror.Error for _, k := range accessKeys { @@ -475,7 +478,7 @@ func DeleteUserAccessKeys(ctx context.Context, conn *iam.IAM, username string) e AccessKeyId: k.AccessKeyId, }) if err != nil { - errs = multierror.Append(errs, fmt.Errorf("error deleting Access Key (%s) from User (%s): %w", aws.StringValue(k.AccessKeyId), username, err)) + errs = multierror.Append(errs, fmt.Errorf("deleting Access Key (%s) from User (%s): %w", aws.StringValue(k.AccessKeyId), username, err)) } } @@ -496,7 +499,7 @@ func deleteUserSigningCertificates(ctx context.Context, conn *iam.IAM, userName return !lastPage }) if err != nil { - return fmt.Errorf("Error removing signing certificates of user %s: %w", 
userName, err) + return fmt.Errorf("removing signing certificates of user %s: %w", userName, err) } for _, c := range certificateIDList { @@ -505,7 +508,7 @@ func deleteUserSigningCertificates(ctx context.Context, conn *iam.IAM, userName UserName: aws.String(userName), }) if err != nil { - return fmt.Errorf("Error deleting signing certificate %s: %w", c, err) + return fmt.Errorf("deleting signing certificate %s: %w", c, err) } } @@ -519,7 +522,7 @@ func DeleteServiceSpecificCredentials(ctx context.Context, conn *iam.IAM, userna output, err := conn.ListServiceSpecificCredentialsWithContext(ctx, input) if err != nil { - return fmt.Errorf("Error listing Service Specific Credentials of user %s: %w", username, err) + return fmt.Errorf("listing Service Specific Credentials of user %s: %w", username, err) } for _, m := range output.ServiceSpecificCredentials { _, err := conn.DeleteServiceSpecificCredentialWithContext(ctx, &iam.DeleteServiceSpecificCredentialInput{ @@ -527,7 +530,7 @@ func DeleteServiceSpecificCredentials(ctx context.Context, conn *iam.IAM, userna ServiceSpecificCredentialId: m.ServiceSpecificCredentialId, }) if err != nil { - return fmt.Errorf("Error deleting Service Specific Credentials %s: %w", m, err) + return fmt.Errorf("deleting Service Specific Credentials %s: %w", m, err) } } diff --git a/internal/service/iam/user_data_source.go b/internal/service/iam/user_data_source.go index 6b52fc36924..91f5820e42d 100644 --- a/internal/service/iam/user_data_source.go +++ b/internal/service/iam/user_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -46,7 +49,7 @@ func DataSourceUser() *schema.Resource { func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig userName := d.Get("user_name").(string) diff --git a/internal/service/iam/user_data_source_test.go b/internal/service/iam/user_data_source_test.go index 9fa7500c0bd..d180c759746 100644 --- a/internal/service/iam/user_data_source_test.go +++ b/internal/service/iam/user_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/user_group_membership.go b/internal/service/iam/user_group_membership.go index c38aa9798d7..7e5e8e2ca51 100644 --- a/internal/service/iam/user_group_membership.go +++ b/internal/service/iam/user_group_membership.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -48,7 +51,7 @@ func ResourceUserGroupMembership() *schema.Resource { func resourceUserGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user := d.Get("user").(string) groupList := flex.ExpandStringValueSet(d.Get("groups").(*schema.Set)) @@ -65,7 +68,7 @@ func resourceUserGroupMembershipCreate(ctx context.Context, d *schema.ResourceDa func resourceUserGroupMembershipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user := d.Get("user").(string) groups := d.Get("groups").(*schema.Set) @@ -137,7 +140,7 @@ func resourceUserGroupMembershipRead(ctx context.Context, d *schema.ResourceData func resourceUserGroupMembershipUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) if d.HasChange("groups") { user := d.Get("user").(string) @@ -169,7 +172,7 @@ func resourceUserGroupMembershipUpdate(ctx context.Context, d *schema.ResourceDa func resourceUserGroupMembershipDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user := d.Get("user").(string) groups := flex.ExpandStringValueSet(d.Get("groups").(*schema.Set)) diff --git a/internal/service/iam/user_group_membership_test.go b/internal/service/iam/user_group_membership_test.go index 78dc39c5fab..5e18f0c535d 100644 --- a/internal/service/iam/user_group_membership_test.go +++ b/internal/service/iam/user_group_membership_test.go @@ -1,3 +1,6 @@ 
+// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -114,7 +117,7 @@ func TestAccIAMUserGroupMembership_basic(t *testing.T) { func testAccCheckUserGroupMembershipDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type == "aws_iam_user_group_membership" { @@ -146,7 +149,7 @@ func testAccCheckUserGroupMembershipDestroy(ctx context.Context) resource.TestCh func testAccUserGroupMembershipCheckGroupListForUser(ctx context.Context, userName string, groups []string, groupsNeg []string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) // get list of groups for user userGroupList, err := conn.ListGroupsForUserWithContext(ctx, &iam.ListGroupsForUserInput{ diff --git a/internal/service/iam/user_login_profile.go b/internal/service/iam/user_login_profile.go index 7b066f57909..0baa88f8a64 100644 --- a/internal/service/iam/user_login_profile.go +++ b/internal/service/iam/user_login_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -131,7 +134,7 @@ func CheckPwdPolicy(pass []byte) bool { func resourceUserLoginProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) username := d.Get("user").(string) passwordLength := d.Get("password_length").(int) @@ -175,7 +178,7 @@ func resourceUserLoginProfileCreate(ctx context.Context, d *schema.ResourceData, func resourceUserLoginProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) input := &iam.GetLoginProfileInput{ UserName: aws.String(d.Id()), @@ -227,7 +230,7 @@ func resourceUserLoginProfileRead(ctx context.Context, d *schema.ResourceData, m func resourceUserLoginProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) input := &iam.DeleteLoginProfileInput{ UserName: aws.String(d.Id()), diff --git a/internal/service/iam/user_login_profile_test.go b/internal/service/iam/user_login_profile_test.go index 8d424405f97..8c1e4e7091f 100644 --- a/internal/service/iam/user_login_profile_test.go +++ b/internal/service/iam/user_login_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -162,7 +165,7 @@ func TestAccIAMUserLoginProfile_keybaseDoesntExist(t *testing.T) { { // We own this account but it doesn't have any key associated with it Config: testAccUserLoginProfileConfig_required(rName, "keybase:terraform_nope"), - ExpectError: regexp.MustCompile(`Error retrieving Public Key`), + ExpectError: regexp.MustCompile(`retrieving Public Key`), }, }, }) @@ -181,7 +184,7 @@ func TestAccIAMUserLoginProfile_notAKey(t *testing.T) { { // We own this account but it doesn't have any key associated with it Config: testAccUserLoginProfileConfig_required(rName, "lolimnotakey"), - ExpectError: regexp.MustCompile(`Error encrypting Password`), + ExpectError: regexp.MustCompile(`encrypting Password`), }, }, }) @@ -286,7 +289,7 @@ func TestAccIAMUserLoginProfile_disappears(t *testing.T) { func testAccCheckUserLoginProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_user_login_profile" { @@ -385,7 +388,7 @@ func testAccCheckUserLoginProfileExists(ctx context.Context, n string, res *iam. return errors.New("No UserName is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) resp, err := conn.GetLoginProfileWithContext(ctx, &iam.GetLoginProfileInput{ UserName: aws.String(rs.Primary.ID), }) diff --git a/internal/service/iam/user_policy.go b/internal/service/iam/user_policy.go index 09ad99b24a6..5e531f0aa21 100644 --- a/internal/service/iam/user_policy.go +++ b/internal/service/iam/user_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -69,7 +72,7 @@ func ResourceUserPolicy() *schema.Resource { func resourceUserPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) p, err := verify.LegacyPolicyNormalize(d.Get("policy").(string)) if err != nil { @@ -106,7 +109,7 @@ func resourceUserPolicyPut(ctx context.Context, d *schema.ResourceData, meta int func resourceUserPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user, name, err := UserPolicyParseID(d.Id()) if err != nil { @@ -174,7 +177,7 @@ func resourceUserPolicyRead(ctx context.Context, d *schema.ResourceData, meta in func resourceUserPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user, name, err := UserPolicyParseID(d.Id()) if err != nil { diff --git a/internal/service/iam/user_policy_attachment.go b/internal/service/iam/user_policy_attachment.go index 5d110375036..13f642d5498 100644 --- a/internal/service/iam/user_policy_attachment.go +++ b/internal/service/iam/user_policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -45,7 +48,7 @@ func ResourceUserPolicyAttachment() *schema.Resource { func resourceUserPolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user := d.Get("user").(string) arn := d.Get("policy_arn").(string) @@ -63,7 +66,7 @@ func resourceUserPolicyAttachmentCreate(ctx context.Context, d *schema.ResourceD func resourceUserPolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user := d.Get("user").(string) arn := d.Get("policy_arn").(string) // Human friendly ID for error messages since d.Id() is non-descriptive @@ -122,7 +125,7 @@ func resourceUserPolicyAttachmentRead(ctx context.Context, d *schema.ResourceDat func resourceUserPolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) user := d.Get("user").(string) arn := d.Get("policy_arn").(string) diff --git a/internal/service/iam/user_policy_attachment_test.go b/internal/service/iam/user_policy_attachment_test.go index 35c4ceefb89..ff0359ff3b7 100644 --- a/internal/service/iam/user_policy_attachment_test.go +++ b/internal/service/iam/user_policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -84,7 +87,7 @@ func testAccCheckUserPolicyAttachmentExists(ctx context.Context, n string, c int return fmt.Errorf("No policy name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) user := rs.Primary.Attributes["user"] attachedPolicies, err := conn.ListAttachedUserPoliciesWithContext(ctx, &iam.ListAttachedUserPoliciesInput{ diff --git a/internal/service/iam/user_policy_test.go b/internal/service/iam/user_policy_test.go index 5787cc37f9f..d373d131a76 100644 --- a/internal/service/iam/user_policy_test.go +++ b/internal/service/iam/user_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -273,7 +276,7 @@ func testAccCheckUserPolicyExists(ctx context.Context, resource string, res *iam return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) resp, err := conn.GetUserPolicyWithContext(ctx, &iam.GetUserPolicyInput{ PolicyName: aws.String(name), @@ -291,7 +294,7 @@ func testAccCheckUserPolicyExists(ctx context.Context, resource string, res *iam func testAccCheckUserPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_user_policy" { @@ -329,7 +332,7 @@ func testAccCheckUserPolicyDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckUserPolicyDisappears(ctx context.Context, out *iam.GetUserPolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) params 
:= &iam.DeleteUserPolicyInput{ PolicyName: out.PolicyName, @@ -359,7 +362,7 @@ func testAccCheckUserPolicy(ctx context.Context, return fmt.Errorf("Not Found: %s", iamUserPolicyResource) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) username, name, err := tfiam.UserPolicyParseID(policy.Primary.ID) if err != nil { return err @@ -385,7 +388,7 @@ func testAccCheckUserPolicyExpectedPolicies(ctx context.Context, iamUserResource return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) userPolicies, err := conn.ListUserPoliciesWithContext(ctx, &iam.ListUserPoliciesInput{ UserName: aws.String(rs.Primary.ID), }) diff --git a/internal/service/iam/user_ssh_key.go b/internal/service/iam/user_ssh_key.go index 675a160e25e..0ba97c3f985 100644 --- a/internal/service/iam/user_ssh_key.go +++ b/internal/service/iam/user_ssh_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -73,7 +76,7 @@ func ResourceUserSSHKey() *schema.Resource { func resourceUserSSHKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) username := d.Get("username").(string) input := &iam.UploadSSHPublicKeyInput{ @@ -116,7 +119,7 @@ func resourceUserSSHKeyCreate(ctx context.Context, d *schema.ResourceData, meta func resourceUserSSHKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) encoding := d.Get("encoding").(string) key, err := FindSSHPublicKeyByThreePartKey(ctx, conn, d.Id(), encoding, d.Get("username").(string)) @@ -145,7 +148,7 @@ func resourceUserSSHKeyRead(ctx context.Context, d *schema.ResourceData, meta in func resourceUserSSHKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) input := &iam.UpdateSSHPublicKeyInput{ SSHPublicKeyId: aws.String(d.Id()), @@ -164,7 +167,7 @@ func resourceUserSSHKeyUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceUserSSHKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) log.Printf("[DEBUG] Deleting IAM User SSH Key: %s", d.Id()) _, err := conn.DeleteSSHPublicKeyWithContext(ctx, &iam.DeleteSSHPublicKeyInput{ diff --git a/internal/service/iam/user_ssh_key_data_source.go b/internal/service/iam/user_ssh_key_data_source.go index 814221de6c2..64e4ece8bc2 100644 --- a/internal/service/iam/user_ssh_key_data_source.go +++ 
b/internal/service/iam/user_ssh_key_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -51,7 +54,7 @@ func DataSourceUserSSHKey() *schema.Resource { func dataSourceUserSSHKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) encoding := d.Get("encoding").(string) sshPublicKeyId := d.Get("ssh_public_key_id").(string) diff --git a/internal/service/iam/user_ssh_key_data_source_test.go b/internal/service/iam/user_ssh_key_data_source_test.go index bfac60fcc82..4085a383d96 100644 --- a/internal/service/iam/user_ssh_key_data_source_test.go +++ b/internal/service/iam/user_ssh_key_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/user_ssh_key_test.go b/internal/service/iam/user_ssh_key_test.go index 2de98d3731b..06787153c84 100644 --- a/internal/service/iam/user_ssh_key_test.go +++ b/internal/service/iam/user_ssh_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -109,7 +112,7 @@ func TestAccIAMUserSSHKey_pemEncoding(t *testing.T) { func testAccCheckUserSSHKeyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_user_ssh_key" { @@ -144,7 +147,7 @@ func testAccCheckUserSSHKeyExists(ctx context.Context, n string, v *iam.SSHPubli return fmt.Errorf("No IAM User SSH Key ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindSSHPublicKeyByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["encoding"], rs.Primary.Attributes["username"]) diff --git a/internal/service/iam/user_test.go b/internal/service/iam/user_test.go index 1b0ff4f162e..7d4d259af22 100644 --- a/internal/service/iam/user_test.go +++ b/internal/service/iam/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -466,7 +469,7 @@ func TestAccIAMUser_tags(t *testing.T) { func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_user" { @@ -501,7 +504,7 @@ func testAccCheckUserExists(ctx context.Context, n string, v *iam.User) resource return fmt.Errorf("No IAM User ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindUserByName(ctx, conn, rs.Primary.ID) @@ -547,7 +550,7 @@ func testAccCheckUserPermissionsBoundary(user *iam.User, expectedPermissionsBoun func testAccCheckUserCreatesAccessKey(ctx context.Context, user *iam.User) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) input := &iam.CreateAccessKeyInput{ UserName: user.UserName, @@ -563,7 +566,7 @@ func testAccCheckUserCreatesAccessKey(ctx context.Context, user *iam.User) resou func testAccCheckUserCreatesLoginProfile(ctx context.Context, user *iam.User) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) password, err := tfiam.GeneratePassword(32) if err != nil { return err @@ -583,7 +586,7 @@ func testAccCheckUserCreatesLoginProfile(ctx context.Context, user *iam.User) re func testAccCheckUserCreatesMFADevice(ctx context.Context, user *iam.User) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) createVirtualMFADeviceInput := &iam.CreateVirtualMFADeviceInput{ Path: user.Path, @@ -628,7 +631,7 @@ func testAccCheckUserUploadsSSHKey(ctx context.Context, user *iam.User) resource return fmt.Errorf("error generating random SSH key: %w", err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) input := &iam.UploadSSHPublicKeyInput{ UserName: user.UserName, @@ -647,7 +650,7 @@ func testAccCheckUserUploadsSSHKey(ctx context.Context, user *iam.User) resource // Creates an IAM User Service Specific Credential outside of Terraform to verify that it is deleted when `force_destroy` is set func testAccCheckUserServiceSpecificCredential(ctx context.Context, user *iam.User) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) input := &iam.CreateServiceSpecificCredentialInput{ UserName: user.UserName, @@ -665,7 +668,7 @@ func testAccCheckUserServiceSpecificCredential(ctx context.Context, user *iam.Us func testAccCheckUserUploadSigningCertificate(ctx context.Context, t *testing.T, user *iam.User) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) key := acctest.TLSRSAPrivateKeyPEM(t, 2048) certificate := acctest.TLSRSAX509SelfSignedCertificatePEM(t, key, "example.com") diff --git a/internal/service/iam/users_data_source.go b/internal/service/iam/users_data_source.go index af101684ebb..4b29191fbed 100644 --- a/internal/service/iam/users_data_source.go +++ b/internal/service/iam/users_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( @@ -41,7 +44,7 @@ func DataSourceUsers() *schema.Resource { func dataSourceUsersRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) nameRegex := d.Get("name_regex").(string) pathPrefix := d.Get("path_prefix").(string) diff --git a/internal/service/iam/users_data_source_test.go b/internal/service/iam/users_data_source_test.go index 85dbe7b12fa..a92d94aa936 100644 --- a/internal/service/iam/users_data_source_test.go +++ b/internal/service/iam/users_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( diff --git a/internal/service/iam/validate.go b/internal/service/iam/validate.go index c34cebc7c93..bedf9359fc9 100644 --- a/internal/service/iam/validate.go +++ b/internal/service/iam/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( diff --git a/internal/service/iam/validate_test.go b/internal/service/iam/validate_test.go index 8c7a9849c28..fd0638596ca 100644 --- a/internal/service/iam/validate_test.go +++ b/internal/service/iam/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( diff --git a/internal/service/iam/virtual_mfa_device.go b/internal/service/iam/virtual_mfa_device.go index 008b5ca0b1a..729af064f4c 100644 --- a/internal/service/iam/virtual_mfa_device.go +++ b/internal/service/iam/virtual_mfa_device.go @@ -1,11 +1,17 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iam import ( "context" + "fmt" "log" "regexp" + "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/iam" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" @@ -43,6 +49,10 @@ func ResourceVirtualMFADevice() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "enable_date": { + Type: schema.TypeString, + Computed: true, + }, "path": { Type: schema.TypeString, Optional: true, @@ -56,6 +66,10 @@ func ResourceVirtualMFADevice() *schema.Resource { }, names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), + "user_name": { + Type: schema.TypeString, + Computed: true, + }, "virtual_mfa_device_name": { Type: schema.TypeString, Required: true, @@ -73,12 +87,12 @@ func ResourceVirtualMFADevice() *schema.Resource { func resourceVirtualMFADeviceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) name := d.Get("virtual_mfa_device_name").(string) input := &iam.CreateVirtualMFADeviceInput{ Path: aws.String(d.Get("path").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VirtualMFADeviceName: aws.String(name), } @@ -98,11 +112,12 @@ func resourceVirtualMFADeviceCreate(ctx context.Context, d *schema.ResourceData, vMFA := output.VirtualMFADevice d.SetId(aws.StringValue(vMFA.SerialNumber)) + // Base32StringSeed and QRCodePNG must be read here, because they are not available via ListVirtualMFADevices d.Set("base_32_string_seed", string(vMFA.Base32StringSeed)) d.Set("qr_code_png", string(vMFA.QRCodePNG)) // For partitions not supporting tag-on-create, attempt tag after create. 
- if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := virtualMFADeviceCreateTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. @@ -120,7 +135,7 @@ func resourceVirtualMFADeviceCreate(ctx context.Context, d *schema.ResourceData, func resourceVirtualMFADeviceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) vMFA, err := FindVirtualMFADeviceBySerialNumber(ctx, conn, d.Id()) @@ -136,6 +151,22 @@ func resourceVirtualMFADeviceRead(ctx context.Context, d *schema.ResourceData, m d.Set("arn", vMFA.SerialNumber) + path, name, err := parseVirtualMFADeviceARN(aws.StringValue(vMFA.SerialNumber)) + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading IAM Virtual MFA Device (%s): %s", d.Id(), err) + } + + d.Set("path", path) + d.Set("virtual_mfa_device_name", name) + + if v := vMFA.EnableDate; v != nil { + d.Set("enable_date", aws.TimeValue(v).Format(time.RFC3339)) + } + + if u := vMFA.User; u != nil { + d.Set("user_name", u.UserName) + } + // The call above returns empty tags. 
output, err := conn.ListMFADeviceTagsWithContext(ctx, &iam.ListMFADeviceTagsInput{ SerialNumber: aws.String(d.Id()), @@ -145,14 +176,14 @@ func resourceVirtualMFADeviceRead(ctx context.Context, d *schema.ResourceData, m return sdkdiag.AppendErrorf(diags, "listing IAM Virtual MFA Device (%s) tags: %s", d.Id(), err) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceVirtualMFADeviceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) o, n := d.GetChange("tags_all") @@ -172,7 +203,20 @@ func resourceVirtualMFADeviceUpdate(ctx context.Context, d *schema.ResourceData, func resourceVirtualMFADeviceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IAMConn() + conn := meta.(*conns.AWSClient).IAMConn(ctx) + + if v := d.Get("user_name"); v != "" { + _, err := conn.DeactivateMFADeviceWithContext(ctx, &iam.DeactivateMFADeviceInput{ + UserName: aws.String(v.(string)), + SerialNumber: aws.String(d.Id()), + }) + if tfawserr.ErrCodeEquals(err, iam.ErrCodeNoSuchEntityException) { + return diags + } + if err != nil { + return sdkdiag.AppendErrorf(diags, "deactivating IAM Virtual MFA Device (%s): %s", d.Id(), err) + } + } log.Printf("[INFO] Deleting IAM Virtual MFA Device: %s", d.Id()) _, err := conn.DeleteVirtualMFADeviceWithContext(ctx, &iam.DeleteVirtualMFADeviceInput{ @@ -219,3 +263,18 @@ func FindVirtualMFADeviceBySerialNumber(ctx context.Context, conn *iam.IAM, seri return output, nil } + +func parseVirtualMFADeviceARN(s string) (path, name string, err error) { + arn, err := arn.Parse(s) + if err != nil { + return "", "", err + } + + re := regexp.MustCompile(`^mfa(/|/[\x{0021}-\x{007E}]+/)([-A-Za-z0-9_+=,.@]+)$`) + matches := re.FindStringSubmatch(arn.Resource) + if len(matches) != 3 { + 
return "", "", fmt.Errorf("IAM Virtual MFA Device ARN: invalid resource section (%s)", arn.Resource) + } + + return matches[1], matches[2], nil +} diff --git a/internal/service/iam/virtual_mfa_device_test.go b/internal/service/iam/virtual_mfa_device_test.go index b8a2c490ef8..ec6f4f20892 100644 --- a/internal/service/iam/virtual_mfa_device_test.go +++ b/internal/service/iam/virtual_mfa_device_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam_test import ( @@ -31,18 +34,60 @@ func TestAccIAMVirtualMFADevice_basic(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccVirtualMFADeviceConfig_basic(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckVirtualMFADeviceExists(ctx, resourceName, &conf), acctest.CheckResourceAttrGlobalARN(resourceName, "arn", "iam", fmt.Sprintf("mfa/%s", rName)), resource.TestCheckResourceAttrSet(resourceName, "base_32_string_seed"), + resource.TestCheckNoResourceAttr(resourceName, "enable_date"), + resource.TestCheckResourceAttr(resourceName, "path", "/"), resource.TestCheckResourceAttrSet(resourceName, "qr_code_png"), + resource.TestCheckNoResourceAttr(resourceName, "user_name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "base_32_string_seed", + "qr_code_png", + }, + }, + }, + }) +} + +func TestAccIAMVirtualMFADevice_path(t *testing.T) { + ctx := acctest.Context(t) + var conf iam.VirtualMFADevice + resourceName := "aws_iam_virtual_mfa_device.test" + + path := "/path/" + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, iam.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVirtualMFADeviceDestroy(ctx), + Steps: []resource.TestStep{ + { + 
Config: testAccVirtualMFADeviceConfig_path(rName, path), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckVirtualMFADeviceExists(ctx, resourceName, &conf), + acctest.CheckResourceAttrGlobalARN(resourceName, "arn", "iam", fmt.Sprintf("mfa%s%s", path, rName)), + resource.TestCheckResourceAttr(resourceName, "path", path), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"path", "virtual_mfa_device_name", "base_32_string_seed", "qr_code_png"}, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "base_32_string_seed", + "qr_code_png", + }, }, }, }) @@ -63,21 +108,24 @@ func TestAccIAMVirtualMFADevice_tags(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccVirtualMFADeviceConfig_tags1(rName, "key1", "value1"), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckVirtualMFADeviceExists(ctx, resourceName, &conf), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"path", "virtual_mfa_device_name", "base_32_string_seed", "qr_code_png"}, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "base_32_string_seed", + "qr_code_png", + }, }, { Config: testAccVirtualMFADeviceConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckVirtualMFADeviceExists(ctx, resourceName, &conf), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), @@ -86,7 +134,7 @@ func TestAccIAMVirtualMFADevice_tags(t *testing.T) { }, { Config: 
testAccVirtualMFADeviceConfig_tags1(rName, "key2", "value2"), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckVirtualMFADeviceExists(ctx, resourceName, &conf), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -111,10 +159,9 @@ func TestAccIAMVirtualMFADevice_disappears(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccVirtualMFADeviceConfig_basic(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckVirtualMFADeviceExists(ctx, resourceName, &conf), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfiam.ResourceVirtualMFADevice(), resourceName), - acctest.CheckResourceDisappears(ctx, acctest.Provider, tfiam.ResourceVirtualMFADevice(), resourceName), ), ExpectNonEmptyPlan: true, }, @@ -124,7 +171,7 @@ func TestAccIAMVirtualMFADevice_disappears(t *testing.T) { func testAccCheckVirtualMFADeviceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iam_virtual_mfa_device" { @@ -157,7 +204,7 @@ func testAccCheckVirtualMFADeviceExists(ctx context.Context, n string, v *iam.Vi return errors.New("No Virtual MFA Device ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn(ctx) output, err := tfiam.FindVirtualMFADeviceBySerialNumber(ctx, conn, rs.Primary.ID) @@ -179,6 +226,16 @@ resource "aws_iam_virtual_mfa_device" "test" { `, rName) } +func testAccVirtualMFADeviceConfig_path(rName, path string) string { + return fmt.Sprintf(` +resource "aws_iam_virtual_mfa_device" "test" { + virtual_mfa_device_name = %[1]q + + path = %[2]q +} +`, rName, path) +} + func 
testAccVirtualMFADeviceConfig_tags1(rName, tagKey1, tagValue1 string) string { return fmt.Sprintf(` resource "aws_iam_virtual_mfa_device" "test" { diff --git a/internal/service/iam/wait.go b/internal/service/iam/wait.go index 51f8acf6abc..3d1c542aa27 100644 --- a/internal/service/iam/wait.go +++ b/internal/service/iam/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iam import ( diff --git a/internal/service/identitystore/flex.go b/internal/service/identitystore/flex.go index de99b6b63b8..02c662856a4 100644 --- a/internal/service/identitystore/flex.go +++ b/internal/service/identitystore/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package identitystore import ( diff --git a/internal/service/identitystore/generate.go b/internal/service/identitystore/generate.go new file mode 100644 index 00000000000..1ab0b7b333e --- /dev/null +++ b/internal/service/identitystore/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package identitystore diff --git a/internal/service/identitystore/group.go b/internal/service/identitystore/group.go index 2b655a5f056..dc278e5a507 100644 --- a/internal/service/identitystore/group.go +++ b/internal/service/identitystore/group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package identitystore import ( @@ -80,7 +83,7 @@ const ( ) func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreId := d.Get("identity_store_id").(string) @@ -112,7 +115,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreId, groupId, err := resourceGroupParseID(d.Id()) @@ -145,7 +148,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) in := &identitystore.UpdateGroupInput{ GroupId: aws.String(d.Get("group_id").(string)), @@ -172,7 +175,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) log.Printf("[INFO] Deleting IdentityStore Group %s", d.Id()) _, err := conn.DeleteGroup(ctx, &identitystore.DeleteGroupInput{ diff --git a/internal/service/identitystore/group_data_source.go b/internal/service/identitystore/group_data_source.go index 57e5c76fb30..17e893ae23d 100644 --- a/internal/service/identitystore/group_data_source.go +++ b/internal/service/identitystore/group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package identitystore import ( @@ -118,7 +121,7 @@ const ( ) func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreID := d.Get("identity_store_id").(string) diff --git a/internal/service/identitystore/group_data_source_test.go b/internal/service/identitystore/group_data_source_test.go index e0330266f28..9c09b61108d 100644 --- a/internal/service/identitystore/group_data_source_test.go +++ b/internal/service/identitystore/group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package identitystore_test import ( @@ -194,7 +197,7 @@ data "aws_identitystore_group" "test" { } func testAccPreCheckSSOAdminInstances(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) var instances []*ssoadmin.InstanceMetadata err := conn.ListInstancesPagesWithContext(ctx, &ssoadmin.ListInstancesInput{}, func(page *ssoadmin.ListInstancesOutput, lastPage bool) bool { diff --git a/internal/service/identitystore/group_membership.go b/internal/service/identitystore/group_membership.go index d60123616a6..b1e50dc2a43 100644 --- a/internal/service/identitystore/group_membership.go +++ b/internal/service/identitystore/group_membership.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package identitystore import ( @@ -66,7 +69,7 @@ func ResourceGroupMembership() *schema.Resource { } func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreId := d.Get("identity_store_id").(string) @@ -96,7 +99,7 @@ func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, } func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreId, groupMembershipId, err := resourceGroupMembershipParseID(d.Id()) @@ -132,7 +135,7 @@ func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, me } func resourceGroupMembershipDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) log.Printf("[INFO] Deleting IdentityStore GroupMembership %s", d.Id()) diff --git a/internal/service/identitystore/group_membership_test.go b/internal/service/identitystore/group_membership_test.go index 6fd93c6e8e6..ccb7176e340 100644 --- a/internal/service/identitystore/group_membership_test.go +++ b/internal/service/identitystore/group_membership_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package identitystore_test import ( @@ -181,7 +184,7 @@ func TestAccIdentityStoreGroupMembership_MemberId(t *testing.T) { func testAccCheckGroupMembershipDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_identitystore_group_membership" { @@ -218,7 +221,7 @@ func testAccCheckGroupMembershipExists(ctx context.Context, name string, groupMe return create.Error(names.IdentityStore, create.ErrActionCheckingExistence, tfidentitystore.ResNameGroupMembership, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) resp, err := conn.DescribeGroupMembership(ctx, &identitystore.DescribeGroupMembershipInput{ IdentityStoreId: aws.String(rs.Primary.Attributes["identity_store_id"]), diff --git a/internal/service/identitystore/group_test.go b/internal/service/identitystore/group_test.go index b3b16b016ca..27ac1fc713f 100644 --- a/internal/service/identitystore/group_test.go +++ b/internal/service/identitystore/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package identitystore_test import ( @@ -82,7 +85,7 @@ func TestAccIdentityStoreGroup_disappears(t *testing.T) { func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_identitystore_group" { @@ -116,7 +119,7 @@ func testAccCheckGroupExists(ctx context.Context, n string, v *identitystore.Des return create.Error(names.IdentityStore, create.ErrActionCheckingExistence, tfidentitystore.ResNameGroup, n, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) output, err := tfidentitystore.FindGroupByTwoPartKey(ctx, conn, rs.Primary.Attributes["identity_store_id"], rs.Primary.Attributes["group_id"]) diff --git a/internal/service/identitystore/service_package_gen.go b/internal/service/identitystore/service_package_gen.go index 6f1b905fcde..e0d9ea2f365 100644 --- a/internal/service/identitystore/service_package_gen.go +++ b/internal/service/identitystore/service_package_gen.go @@ -5,6 +5,9 @@ package identitystore import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + identitystore_sdkv2 "github.com/aws/aws-sdk-go-v2/service/identitystore" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -53,4 +56,17 @@ func (p *servicePackage) ServicePackageName() string { return names.IdentityStore } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*identitystore_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return identitystore_sdkv2.NewFromConfig(cfg, func(o *identitystore_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = identitystore_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/identitystore/user.go b/internal/service/identitystore/user.go index e0d1d7f1882..e0020cea9e5 100644 --- a/internal/service/identitystore/user.go +++ b/internal/service/identitystore/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package identitystore import ( @@ -251,7 +254,7 @@ const ( ) func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) in := &identitystore.CreateUserInput{ DisplayName: aws.String(d.Get("display_name").(string)), @@ -318,7 +321,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreId, userId, err := resourceUserParseID(d.Id()) @@ -374,7 +377,7 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) in := &identitystore.UpdateUserInput{ IdentityStoreId: 
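The `NewClient` hunk above follows the AWS SDK for Go v2 constructor shape: base settings come from shared config, then functional options may mutate per-client `Options` (here, overriding the endpoint only when one is configured). A minimal sketch of that functional-options pattern with toy types — `Options`, `Client`, and `NewFromConfig` here are stand-ins, not the real `aws-sdk-go-v2` API:

```go
package main

import "fmt"

// Options holds per-client settings; Endpoint stays empty unless overridden.
type Options struct {
	Region   string
	Endpoint string
}

// Client is a stand-in for an AWS SDK for Go v2 service client.
type Client struct{ opts Options }

// NewFromConfig mirrors the SDK v2 constructor shape: copy the base
// options, then let each functional option mutate the copy.
func NewFromConfig(base Options, optFns ...func(*Options)) *Client {
	o := base
	for _, fn := range optFns {
		fn(&o)
	}
	return &Client{opts: o}
}

func main() {
	cfg := Options{Region: "us-west-2"}

	// No override: default endpoint resolution applies.
	c1 := NewFromConfig(cfg)
	fmt.Println(c1.opts.Endpoint == "") // true

	// A custom endpoint (e.g. from provider configuration) is applied
	// only when non-empty, matching the guard in the hunk above.
	endpoint := "http://localhost:4566"
	c2 := NewFromConfig(cfg, func(o *Options) {
		if endpoint != "" {
			o.Endpoint = endpoint
		}
	})
	fmt.Println(c2.opts.Endpoint)
}
```

Because the option closes over provider state, each service package can apply its own overrides without the shared config knowing about them.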
aws.String(d.Get("identity_store_id").(string)), @@ -607,7 +610,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) log.Printf("[INFO] Deleting IdentityStore User %s", d.Id()) diff --git a/internal/service/identitystore/user_data_source.go b/internal/service/identitystore/user_data_source.go index 37d723d96de..65aa2d3a152 100644 --- a/internal/service/identitystore/user_data_source.go +++ b/internal/service/identitystore/user_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package identitystore import ( @@ -260,7 +263,7 @@ const ( ) func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IdentityStoreClient() + conn := meta.(*conns.AWSClient).IdentityStoreClient(ctx) identityStoreID := d.Get("identity_store_id").(string) diff --git a/internal/service/identitystore/user_data_source_test.go b/internal/service/identitystore/user_data_source_test.go index 7e7e9de98eb..796dbe494c1 100644 --- a/internal/service/identitystore/user_data_source_test.go +++ b/internal/service/identitystore/user_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package identitystore_test import ( diff --git a/internal/service/identitystore/user_test.go b/internal/service/identitystore/user_test.go index 10094d2b6e4..6560b3933f5 100644 --- a/internal/service/identitystore/user_test.go +++ b/internal/service/identitystore/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package identitystore_test import ( @@ -980,7 +983,7 @@ func TestAccIdentityStoreUser_UserType(t *testing.T) { func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_identitystore_user" { @@ -1014,7 +1017,7 @@ func testAccCheckUserExists(ctx context.Context, n string, v *identitystore.Desc return create.Error(names.IdentityStore, create.ErrActionCheckingExistence, tfidentitystore.ResNameUser, n, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) output, err := tfidentitystore.FindUserByTwoPartKey(ctx, conn, rs.Primary.Attributes["identity_store_id"], rs.Primary.Attributes["user_id"]) @@ -1029,8 +1032,8 @@ func testAccCheckUserExists(ctx context.Context, n string, v *identitystore.Desc } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient() - ssoadminConn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IdentityStoreClient(ctx) + ssoadminConn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) instances, err := ssoadminConn.ListInstancesWithContext(ctx, &ssoadmin.ListInstancesInput{MaxResults: aws.Int64(1)}) diff --git a/internal/service/imagebuilder/component.go b/internal/service/imagebuilder/component.go index debadeff80a..1ca488a41cf 100644 --- a/internal/service/imagebuilder/component.go +++ b/internal/service/imagebuilder/component.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -128,11 +131,11 @@ func ResourceComponent() *schema.Resource { func resourceComponentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateComponentInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("change_description"); ok { @@ -188,7 +191,7 @@ func resourceComponentCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceComponentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetComponentInput{ ComponentBuildVersionArn: aws.String(d.Id()), @@ -224,7 +227,7 @@ func resourceComponentRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("platform", component.Platform) d.Set("supported_os_versions", aws.StringValueSlice(component.SupportedOsVersions)) - SetTagsOut(ctx, component.Tags) + setTagsOut(ctx, component.Tags) d.Set("type", component.Type) d.Set("version", component.Version) @@ -248,7 +251,7 @@ func resourceComponentDelete(ctx context.Context, d *schema.ResourceData, meta i return diags } - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.DeleteComponentInput{ ComponentBuildVersionArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/component_data_source.go b/internal/service/imagebuilder/component_data_source.go index dc1f2721a16..4ecf3bfd050 100644 --- a/internal/service/imagebuilder/component_data_source.go +++ b/internal/service/imagebuilder/component_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -80,7 +83,7 @@ func DataSourceComponent() *schema.Resource { func dataSourceComponentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &imagebuilder.GetComponentInput{} diff --git a/internal/service/imagebuilder/component_data_source_test.go b/internal/service/imagebuilder/component_data_source_test.go index 41bb456d100..79f5f8696ef 100644 --- a/internal/service/imagebuilder/component_data_source_test.go +++ b/internal/service/imagebuilder/component_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/component_test.go b/internal/service/imagebuilder/component_test.go index cdfe7f5dafd..9080ea3af79 100644 --- a/internal/service/imagebuilder/component_test.go +++ b/internal/service/imagebuilder/component_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -297,7 +300,7 @@ func TestAccImageBuilderComponent_uri(t *testing.T) { func testAccCheckComponentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_component" { @@ -334,7 +337,7 @@ func testAccCheckComponentExists(ctx context.Context, resourceName string) resou return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetComponentInput{ ComponentBuildVersionArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/components_data_source.go b/internal/service/imagebuilder/components_data_source.go index 1901ab01c61..576d9c02ae9 100644 --- a/internal/service/imagebuilder/components_data_source.go +++ b/internal/service/imagebuilder/components_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -40,7 +43,7 @@ func DataSourceComponents() *schema.Resource { func dataSourceComponentsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.ListComponentsInput{} diff --git a/internal/service/imagebuilder/components_data_source_test.go b/internal/service/imagebuilder/components_data_source_test.go index 88fb013e37b..19777171587 100644 --- a/internal/service/imagebuilder/components_data_source_test.go +++ b/internal/service/imagebuilder/components_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/consts.go b/internal/service/imagebuilder/consts.go index a67d3a883ed..1d85561bd79 100644 --- a/internal/service/imagebuilder/consts.go +++ b/internal/service/imagebuilder/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( diff --git a/internal/service/imagebuilder/container_recipe.go b/internal/service/imagebuilder/container_recipe.go index cbf244e492c..49a39ccf065 100644 --- a/internal/service/imagebuilder/container_recipe.go +++ b/internal/service/imagebuilder/container_recipe.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -14,8 +17,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -282,11 +285,11 @@ func ResourceContainerRecipe() *schema.Resource { func resourceContainerRecipeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateContainerRecipeInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("component"); ok && len(v.([]interface{})) > 0 { @@ -358,7 +361,7 @@ func resourceContainerRecipeCreate(ctx context.Context, d *schema.ResourceData, func resourceContainerRecipeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetContainerRecipeInput{ ContainerRecipeArn: aws.String(d.Id()), @@ -402,7 +405,7 @@ func resourceContainerRecipeRead(ctx context.Context, d *schema.ResourceData, me d.Set("parent_image", containerRecipe.ParentImage) d.Set("platform", containerRecipe.Platform) - SetTagsOut(ctx, containerRecipe.Tags) + setTagsOut(ctx, containerRecipe.Tags) d.Set("target_repository", []interface{}{flattenTargetContainerRepository(containerRecipe.TargetRepository)}) 
d.Set("version", containerRecipe.Version) @@ -421,7 +424,7 @@ func resourceContainerRecipeUpdate(ctx context.Context, d *schema.ResourceData, func resourceContainerRecipeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.DeleteContainerRecipeInput{ ContainerRecipeArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/container_recipe_data_source.go b/internal/service/imagebuilder/container_recipe_data_source.go index 948cb88fbf1..1b57940251a 100644 --- a/internal/service/imagebuilder/container_recipe_data_source.go +++ b/internal/service/imagebuilder/container_recipe_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -194,7 +197,7 @@ func DataSourceContainerRecipe() *schema.Resource { func dataSourceContainerRecipeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &imagebuilder.GetContainerRecipeInput{} diff --git a/internal/service/imagebuilder/container_recipe_data_source_test.go b/internal/service/imagebuilder/container_recipe_data_source_test.go index 5c4398016d5..e514e2fb102 100644 --- a/internal/service/imagebuilder/container_recipe_data_source_test.go +++ b/internal/service/imagebuilder/container_recipe_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/container_recipe_test.go b/internal/service/imagebuilder/container_recipe_test.go index 8947616d7c7..cfb87233c63 100644 --- a/internal/service/imagebuilder/container_recipe_test.go +++ b/internal/service/imagebuilder/container_recipe_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -692,7 +695,7 @@ func TestAccImageBuilderContainerRecipe_platformOverride(t *testing.T) { func testAccCheckContainerRecipeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_container_recipe" { @@ -729,7 +732,7 @@ func testAccCheckContainerRecipeExists(ctx context.Context, resourceName string) return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetContainerRecipeInput{ ContainerRecipeArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/container_recipes_data_source.go b/internal/service/imagebuilder/container_recipes_data_source.go index c39c70a3736..4714349763e 100644 --- a/internal/service/imagebuilder/container_recipes_data_source.go +++ b/internal/service/imagebuilder/container_recipes_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -40,7 +43,7 @@ func DataSourceContainerRecipes() *schema.Resource { func dataSourceContainerRecipesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.ListContainerRecipesInput{} diff --git a/internal/service/imagebuilder/container_recipes_data_source_test.go b/internal/service/imagebuilder/container_recipes_data_source_test.go index bee526a3e7d..cf3a1e8d3f6 100644 --- a/internal/service/imagebuilder/container_recipes_data_source_test.go +++ b/internal/service/imagebuilder/container_recipes_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/distribution_configuration.go b/internal/service/imagebuilder/distribution_configuration.go index 3eeca29766a..e24f0d2ffa0 100644 --- a/internal/service/imagebuilder/distribution_configuration.go +++ b/internal/service/imagebuilder/distribution_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -291,11 +294,11 @@ func ResourceDistributionConfiguration() *schema.Resource { func resourceDistributionConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateDistributionConfigurationInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -327,7 +330,7 @@ func resourceDistributionConfigurationCreate(ctx context.Context, d *schema.Reso func resourceDistributionConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetDistributionConfigurationInput{ DistributionConfigurationArn: aws.String(d.Id()), @@ -358,14 +361,14 @@ func resourceDistributionConfigurationRead(ctx context.Context, d *schema.Resour d.Set("distribution", flattenDistributions(distributionConfiguration.Distributions)) d.Set("name", distributionConfiguration.Name) - SetTagsOut(ctx, distributionConfiguration.Tags) + setTagsOut(ctx, distributionConfiguration.Tags) return diags } func resourceDistributionConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) if d.HasChanges("description", "distribution") { input := &imagebuilder.UpdateDistributionConfigurationInput{ @@ -393,7 +396,7 @@ func resourceDistributionConfigurationUpdate(ctx context.Context, d *schema.Reso func resourceDistributionConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.DeleteDistributionConfigurationInput{ DistributionConfigurationArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/distribution_configuration_data_source.go b/internal/service/imagebuilder/distribution_configuration_data_source.go index 89a9e0a49a6..75a2f1f3b28 100644 --- a/internal/service/imagebuilder/distribution_configuration_data_source.go +++ b/internal/service/imagebuilder/distribution_configuration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -237,7 +240,7 @@ func DataSourceDistributionConfiguration() *schema.Resource { func dataSourceDistributionConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &imagebuilder.GetDistributionConfigurationInput{} diff --git a/internal/service/imagebuilder/distribution_configuration_data_source_test.go b/internal/service/imagebuilder/distribution_configuration_data_source_test.go index c021d831d47..bc5d4da40fa 100644 --- a/internal/service/imagebuilder/distribution_configuration_data_source_test.go +++ b/internal/service/imagebuilder/distribution_configuration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/distribution_configuration_test.go b/internal/service/imagebuilder/distribution_configuration_test.go index 0e4d8c6417b..5c1f55dd10a 100644 --- a/internal/service/imagebuilder/distribution_configuration_test.go +++ b/internal/service/imagebuilder/distribution_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -909,7 +912,7 @@ func TestAccImageBuilderDistributionConfiguration_tags(t *testing.T) { func testAccCheckDistributionConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_distribution_configuration" { @@ -946,7 +949,7 @@ func testAccCheckDistributionConfigurationExists(ctx context.Context, resourceNa return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetDistributionConfigurationInput{ DistributionConfigurationArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/distribution_configurations_data_source.go b/internal/service/imagebuilder/distribution_configurations_data_source.go index e06074a0793..297b019d0d8 100644 --- a/internal/service/imagebuilder/distribution_configurations_data_source.go +++ b/internal/service/imagebuilder/distribution_configurations_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -34,7 +37,7 @@ func DataSourceDistributionConfigurations() *schema.Resource { func dataSourceDistributionConfigurationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.ListDistributionConfigurationsInput{} diff --git a/internal/service/imagebuilder/distribution_configurations_data_source_test.go b/internal/service/imagebuilder/distribution_configurations_data_source_test.go index d9656261cd0..a0ce4262c63 100644 --- a/internal/service/imagebuilder/distribution_configurations_data_source_test.go +++ b/internal/service/imagebuilder/distribution_configurations_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/generate.go b/internal/service/imagebuilder/generate.go index a0fef418a7f..2dcc8a60a9c 100644 --- a/internal/service/imagebuilder/generate.go +++ b/internal/service/imagebuilder/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package imagebuilder diff --git a/internal/service/imagebuilder/id.go b/internal/service/imagebuilder/id.go index 385aff8d0d3..7941a7dd559 100644 --- a/internal/service/imagebuilder/id.go +++ b/internal/service/imagebuilder/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder const ( diff --git a/internal/service/imagebuilder/image.go b/internal/service/imagebuilder/image.go index 5e6e02553cc..01a5dfa08cd 100644 --- a/internal/service/imagebuilder/image.go +++ b/internal/service/imagebuilder/image.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -179,12 +182,12 @@ func ResourceImage() *schema.Resource { func resourceImageCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateImageInput{ ClientToken: aws.String(id.UniqueId()), EnhancedImageMetadataEnabled: aws.Bool(d.Get("enhanced_image_metadata_enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("container_recipe_arn"); ok { @@ -228,7 +231,7 @@ func resourceImageCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceImageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImageInput{ ImageBuildVersionArn: aws.String(d.Id()), @@ -289,7 +292,7 @@ func resourceImageRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("output_resources", nil) } - SetTagsOut(ctx, image.Tags) + setTagsOut(ctx, image.Tags) d.Set("version", image.Version) @@ -306,7 +309,7 @@ func resourceImageUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceImageDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := 
&imagebuilder.DeleteImageInput{ ImageBuildVersionArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/image_data_source.go b/internal/service/imagebuilder/image_data_source.go index 8c9d23df9a8..e54fbfa494f 100644 --- a/internal/service/imagebuilder/image_data_source.go +++ b/internal/service/imagebuilder/image_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -144,7 +147,7 @@ func DataSourceImage() *schema.Resource { func dataSourceImageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImageInput{} diff --git a/internal/service/imagebuilder/image_data_source_test.go b/internal/service/imagebuilder/image_data_source_test.go index 9ec894e444f..9108be472ee 100644 --- a/internal/service/imagebuilder/image_data_source_test.go +++ b/internal/service/imagebuilder/image_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/image_pipeline.go b/internal/service/imagebuilder/image_pipeline.go index 0621b073581..837ccb7b7c6 100644 --- a/internal/service/imagebuilder/image_pipeline.go +++ b/internal/service/imagebuilder/image_pipeline.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -161,12 +164,12 @@ func ResourceImagePipeline() *schema.Resource { func resourceImagePipelineCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateImagePipelineInput{ ClientToken: aws.String(id.UniqueId()), EnhancedImageMetadataEnabled: aws.Bool(d.Get("enhanced_image_metadata_enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("container_recipe_arn"); ok { @@ -222,7 +225,7 @@ func resourceImagePipelineCreate(ctx context.Context, d *schema.ResourceData, me func resourceImagePipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImagePipelineInput{ ImagePipelineArn: aws.String(d.Id()), @@ -275,14 +278,14 @@ func resourceImagePipelineRead(ctx context.Context, d *schema.ResourceData, meta d.Set("status", imagePipeline.Status) - SetTagsOut(ctx, imagePipeline.Tags) + setTagsOut(ctx, imagePipeline.Tags) return diags } func resourceImagePipelineUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) if d.HasChanges( "description", @@ -343,7 +346,7 @@ func resourceImagePipelineUpdate(ctx context.Context, d *schema.ResourceData, me func resourceImagePipelineDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := 
&imagebuilder.DeleteImagePipelineInput{ ImagePipelineArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/image_pipeline_data_source.go b/internal/service/imagebuilder/image_pipeline_data_source.go index a7d62ea6db7..aec9e7077d0 100644 --- a/internal/service/imagebuilder/image_pipeline_data_source.go +++ b/internal/service/imagebuilder/image_pipeline_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -115,7 +118,7 @@ func DataSourceImagePipeline() *schema.Resource { func dataSourceImagePipelineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImagePipelineInput{} diff --git a/internal/service/imagebuilder/image_pipeline_data_source_test.go b/internal/service/imagebuilder/image_pipeline_data_source_test.go index 203d4cbfe5c..ee53e006cf3 100644 --- a/internal/service/imagebuilder/image_pipeline_data_source_test.go +++ b/internal/service/imagebuilder/image_pipeline_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/image_pipeline_test.go b/internal/service/imagebuilder/image_pipeline_test.go index d7a78cfbdbb..1e9d749647c 100644 --- a/internal/service/imagebuilder/image_pipeline_test.go +++ b/internal/service/imagebuilder/image_pipeline_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -559,7 +562,7 @@ func TestAccImageBuilderImagePipeline_tags(t *testing.T) { func testAccCheckImagePipelineDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_image_pipeline" { @@ -596,7 +599,7 @@ func testAccCheckImagePipelineExists(ctx context.Context, resourceName string) r return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImagePipelineInput{ ImagePipelineArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/image_pipelines_data_source.go b/internal/service/imagebuilder/image_pipelines_data_source.go index 312cc0b3c8a..e47d46ae286 100644 --- a/internal/service/imagebuilder/image_pipelines_data_source.go +++ b/internal/service/imagebuilder/image_pipelines_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -34,7 +37,7 @@ func DataSourceImagePipelines() *schema.Resource { func dataSourceImagePipelinesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.ListImagePipelinesInput{} diff --git a/internal/service/imagebuilder/image_pipelines_data_source_test.go b/internal/service/imagebuilder/image_pipelines_data_source_test.go index 1f59dc6b542..6459fc400b2 100644 --- a/internal/service/imagebuilder/image_pipelines_data_source_test.go +++ b/internal/service/imagebuilder/image_pipelines_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/image_recipe.go b/internal/service/imagebuilder/image_recipe.go index 7e5a04b269f..fce38e8a381 100644 --- a/internal/service/imagebuilder/image_recipe.go +++ b/internal/service/imagebuilder/image_recipe.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -15,8 +18,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -248,11 +251,11 @@ func ResourceImageRecipe() *schema.Resource { func resourceImageRecipeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateImageRecipeInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("block_device_mapping"); ok && v.(*schema.Set).Len() > 0 { @@ -312,7 +315,7 @@ func resourceImageRecipeCreate(ctx context.Context, d *schema.ResourceData, meta func resourceImageRecipeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImageRecipeInput{ ImageRecipeArn: aws.String(d.Id()), @@ -346,7 +349,7 @@ func resourceImageRecipeRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("parent_image", imageRecipe.ParentImage) d.Set("platform", imageRecipe.Platform) - SetTagsOut(ctx, imageRecipe.Tags) + setTagsOut(ctx, imageRecipe.Tags) if imageRecipe.AdditionalInstanceConfiguration != nil { d.Set("systems_manager_agent", 
[]interface{}{flattenSystemsManagerAgent(imageRecipe.AdditionalInstanceConfiguration.SystemsManagerAgent)}) @@ -369,7 +372,7 @@ func resourceImageRecipeUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceImageRecipeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.DeleteImageRecipeInput{ ImageRecipeArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/image_recipe_data_source.go b/internal/service/imagebuilder/image_recipe_data_source.go index 49561dc36a6..4d9aaa3c2fa 100644 --- a/internal/service/imagebuilder/image_recipe_data_source.go +++ b/internal/service/imagebuilder/image_recipe_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -155,7 +158,7 @@ func DataSourceImageRecipe() *schema.Resource { func dataSourceImageRecipeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &imagebuilder.GetImageRecipeInput{} diff --git a/internal/service/imagebuilder/image_recipe_data_source_test.go b/internal/service/imagebuilder/image_recipe_data_source_test.go index 5c855807a25..3447778b188 100644 --- a/internal/service/imagebuilder/image_recipe_data_source_test.go +++ b/internal/service/imagebuilder/image_recipe_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/image_recipe_test.go b/internal/service/imagebuilder/image_recipe_test.go index 4382a67e839..8d9d10bc447 100644 --- a/internal/service/imagebuilder/image_recipe_test.go +++ b/internal/service/imagebuilder/image_recipe_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -738,7 +741,7 @@ func TestAccImageBuilderImageRecipe_windowsBaseImage(t *testing.T) { func testAccCheckImageRecipeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_image_recipe" { @@ -775,7 +778,7 @@ func testAccCheckImageRecipeExists(ctx context.Context, resourceName string) res return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImageRecipeInput{ ImageRecipeArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/image_recipes_data_source.go b/internal/service/imagebuilder/image_recipes_data_source.go index 03319a44432..a0b09730498 100644 --- a/internal/service/imagebuilder/image_recipes_data_source.go +++ b/internal/service/imagebuilder/image_recipes_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -40,7 +43,7 @@ func DataSourceImageRecipes() *schema.Resource { func dataSourceImageRecipesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.ListImageRecipesInput{} diff --git a/internal/service/imagebuilder/image_recipes_data_source_test.go b/internal/service/imagebuilder/image_recipes_data_source_test.go index 514977dd250..e5965a9223e 100644 --- a/internal/service/imagebuilder/image_recipes_data_source_test.go +++ b/internal/service/imagebuilder/image_recipes_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/image_test.go b/internal/service/imagebuilder/image_test.go index 800443c4ecf..9d04c3f01ee 100644 --- a/internal/service/imagebuilder/image_test.go +++ b/internal/service/imagebuilder/image_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -291,7 +294,7 @@ func TestAccImageBuilderImage_outputResources_containers(t *testing.T) { func testAccCheckImageDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_image_pipeline" { @@ -328,7 +331,7 @@ func testAccCheckImageExists(ctx context.Context, resourceName string) resource. 
return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetImageInput{ ImageBuildVersionArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/infrastructure_configuration.go b/internal/service/imagebuilder/infrastructure_configuration.go index c56fba4a0d9..845f4184c78 100644 --- a/internal/service/imagebuilder/infrastructure_configuration.go +++ b/internal/service/imagebuilder/infrastructure_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -152,11 +155,11 @@ func ResourceInfrastructureConfiguration() *schema.Resource { func resourceInfrastructureConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.CreateInfrastructureConfigurationInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TerminateInstanceOnFailure: aws.Bool(d.Get("terminate_instance_on_failure").(bool)), } @@ -240,7 +243,7 @@ func resourceInfrastructureConfigurationCreate(ctx context.Context, d *schema.Re func resourceInfrastructureConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetInfrastructureConfigurationInput{ InfrastructureConfigurationArn: aws.String(d.Id()), @@ -291,7 +294,7 @@ func resourceInfrastructureConfigurationRead(ctx context.Context, d *schema.Reso d.Set("sns_topic_arn", infrastructureConfiguration.SnsTopicArn) d.Set("subnet_id", 
infrastructureConfiguration.SubnetId) - SetTagsOut(ctx, infrastructureConfiguration.Tags) + setTagsOut(ctx, infrastructureConfiguration.Tags) d.Set("terminate_instance_on_failure", infrastructureConfiguration.TerminateInstanceOnFailure) @@ -300,7 +303,7 @@ func resourceInfrastructureConfigurationRead(ctx context.Context, d *schema.Reso func resourceInfrastructureConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) if d.HasChanges( "description", @@ -388,7 +391,7 @@ func resourceInfrastructureConfigurationUpdate(ctx context.Context, d *schema.Re func resourceInfrastructureConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.DeleteInfrastructureConfigurationInput{ InfrastructureConfigurationArn: aws.String(d.Id()), diff --git a/internal/service/imagebuilder/infrastructure_configuration_data_source.go b/internal/service/imagebuilder/infrastructure_configuration_data_source.go index 7c9e83f2d6e..40bda575f19 100644 --- a/internal/service/imagebuilder/infrastructure_configuration_data_source.go +++ b/internal/service/imagebuilder/infrastructure_configuration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -118,7 +121,7 @@ func DataSourceInfrastructureConfiguration() *schema.Resource { func dataSourceInfrastructureConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &imagebuilder.GetInfrastructureConfigurationInput{} diff --git a/internal/service/imagebuilder/infrastructure_configuration_data_source_test.go b/internal/service/imagebuilder/infrastructure_configuration_data_source_test.go index 0d325f7f4ad..494502ff12e 100644 --- a/internal/service/imagebuilder/infrastructure_configuration_data_source_test.go +++ b/internal/service/imagebuilder/infrastructure_configuration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/infrastructure_configuration_test.go b/internal/service/imagebuilder/infrastructure_configuration_test.go index 02b909a7d67..fad60578ef5 100644 --- a/internal/service/imagebuilder/infrastructure_configuration_test.go +++ b/internal/service/imagebuilder/infrastructure_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( @@ -576,7 +579,7 @@ func TestAccImageBuilderInfrastructureConfiguration_terminateInstanceOnFailure(t func testAccCheckInfrastructureConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_imagebuilder_infrastructure_configuration" { @@ -613,7 +616,7 @@ func testAccCheckInfrastructureConfigurationExists(ctx context.Context, resource return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.GetInfrastructureConfigurationInput{ InfrastructureConfigurationArn: aws.String(rs.Primary.ID), diff --git a/internal/service/imagebuilder/infrastructure_configurations_data_source.go b/internal/service/imagebuilder/infrastructure_configurations_data_source.go index ec207dfb9e2..399c39f12a9 100644 --- a/internal/service/imagebuilder/infrastructure_configurations_data_source.go +++ b/internal/service/imagebuilder/infrastructure_configurations_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( @@ -34,7 +37,7 @@ func DataSourceInfrastructureConfigurations() *schema.Resource { func dataSourceInfrastructureConfigurationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ImageBuilderConn() + conn := meta.(*conns.AWSClient).ImageBuilderConn(ctx) input := &imagebuilder.ListInfrastructureConfigurationsInput{} diff --git a/internal/service/imagebuilder/infrastructure_configurations_data_source_test.go b/internal/service/imagebuilder/infrastructure_configurations_data_source_test.go index a5eb0c6fee9..54161eac8f7 100644 --- a/internal/service/imagebuilder/infrastructure_configurations_data_source_test.go +++ b/internal/service/imagebuilder/infrastructure_configurations_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder_test import ( diff --git a/internal/service/imagebuilder/service_package_gen.go b/internal/service/imagebuilder/service_package_gen.go index 12cf836a8a7..353f7600a04 100644 --- a/internal/service/imagebuilder/service_package_gen.go +++ b/internal/service/imagebuilder/service_package_gen.go @@ -5,6 +5,10 @@ package imagebuilder import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + imagebuilder_sdkv1 "github.com/aws/aws-sdk-go/service/imagebuilder" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -141,4 +145,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ImageBuilder } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*imagebuilder_sdkv1.Imagebuilder, error) { + sess := config["session"].(*session_sdkv1.Session) + + return imagebuilder_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/imagebuilder/status.go b/internal/service/imagebuilder/status.go index 1444168559a..225838f1025 100644 --- a/internal/service/imagebuilder/status.go +++ b/internal/service/imagebuilder/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( diff --git a/internal/service/imagebuilder/sweep.go b/internal/service/imagebuilder/sweep.go index caaab89ad7a..48ddc2fd1af 100644 --- a/internal/service/imagebuilder/sweep.go +++ b/internal/service/imagebuilder/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/imagebuilder" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -54,11 +56,11 @@ func init() { func sweepComponents(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -122,7 +124,7 @@ func sweepComponents(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Image Builder Components: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Image Builder Components: %w", err)) } @@ -131,11 +133,11 @@ func sweepComponents(region string) error { func sweepDistributionConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -171,7 +173,7 @@ func sweepDistributionConfigurations(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Image Builder Distribution Configurations: %w", 
err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Image Builder Distribution Configurations: %w", err)) } @@ -180,11 +182,11 @@ func sweepDistributionConfigurations(region string) error { func sweepImagePipelines(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -220,7 +222,7 @@ func sweepImagePipelines(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Image Builder Image Pipelines: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Image Builder Image Pipelines: %w", err)) } @@ -229,11 +231,11 @@ func sweepImagePipelines(region string) error { func sweepImageRecipes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -271,7 +273,7 @@ func sweepImageRecipes(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Image Builder Image Recipes: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, 
sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Image Builder Image Recipes: %w", err)) } @@ -280,11 +282,11 @@ func sweepImageRecipes(region string) error { func sweepContainerRecipes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -322,7 +324,7 @@ func sweepContainerRecipes(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Image Builder Container Recipes: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Image Builder Container Recipes: %w", err)) } @@ -331,13 +333,13 @@ func sweepContainerRecipes(region string) error { func sweepImages(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -398,7 +400,7 @@ func sweepImages(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Image Builder Images for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { 
errs = multierror.Append(errs, fmt.Errorf("error sweeping Image Builder Images for %s: %w", region, err)) } @@ -412,11 +414,11 @@ func sweepImages(region string) error { func sweepInfrastructureConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ImageBuilderConn() + conn := client.ImageBuilderConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -453,7 +455,7 @@ func sweepInfrastructureConfigurations(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Image Builder Infrastructure Configurations: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Image Builder Infrastructure Configurations: %w", err)) } diff --git a/internal/service/imagebuilder/tags_gen.go b/internal/service/imagebuilder/tags_gen.go index 34e8382e828..4f16b448ab7 100644 --- a/internal/service/imagebuilder/tags_gen.go +++ b/internal/service/imagebuilder/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists imagebuilder service tags. +// listTags lists imagebuilder service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn imagebuilderiface.ImagebuilderAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn imagebuilderiface.ImagebuilderAPI, identifier string) (tftags.KeyValueTags, error) { input := &imagebuilder.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn imagebuilderiface.ImagebuilderAPI, ident // ListTags lists imagebuilder service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ImageBuilderConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ImageBuilderConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from imagebuilder service tags. +// KeyValueTags creates tftags.KeyValueTags from imagebuilder service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns imagebuilder service tags from Context. +// getTagsIn returns imagebuilder service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets imagebuilder service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets imagebuilder service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates imagebuilder service tags. +// updateTags updates imagebuilder service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn imagebuilderiface.ImagebuilderAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn imagebuilderiface.ImagebuilderAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn imagebuilderiface.ImagebuilderAPI, ide // UpdateTags updates imagebuilder service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ImageBuilderConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ImageBuilderConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/imagebuilder/wait.go b/internal/service/imagebuilder/wait.go index 413daaa2682..ac4b4d2aa76 100644 --- a/internal/service/imagebuilder/wait.go +++ b/internal/service/imagebuilder/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package imagebuilder import ( diff --git a/internal/service/inspector/assessment_target.go b/internal/service/inspector/assessment_target.go index a0adccdb28c..49e5e55ae9d 100644 --- a/internal/service/inspector/assessment_target.go +++ b/internal/service/inspector/assessment_target.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector import ( @@ -47,7 +50,7 @@ func ResourceAssessmentTarget() *schema.Resource { func resourceAssessmentTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) input := &inspector.CreateAssessmentTargetInput{ AssessmentTargetName: aws.String(d.Get("name").(string)), @@ -69,7 +72,7 @@ func resourceAssessmentTargetCreate(ctx context.Context, d *schema.ResourceData, func resourceAssessmentTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) assessmentTarget, err := DescribeAssessmentTarget(ctx, conn, d.Id()) @@ -92,7 +95,7 @@ func resourceAssessmentTargetRead(ctx context.Context, d *schema.ResourceData, m func resourceAssessmentTargetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) input := inspector.UpdateAssessmentTargetInput{ AssessmentTargetArn: aws.String(d.Id()), @@ -113,7 +116,7 @@ func resourceAssessmentTargetUpdate(ctx context.Context, d *schema.ResourceData, func resourceAssessmentTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) input := &inspector.DeleteAssessmentTargetInput{ AssessmentTargetArn: aws.String(d.Id()), } diff --git a/internal/service/inspector/assessment_target_test.go b/internal/service/inspector/assessment_target_test.go index 92a946f1da6..1665c480bd9 100644 --- a/internal/service/inspector/assessment_target_test.go 
+++ b/internal/service/inspector/assessment_target_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector_test import ( @@ -158,7 +161,7 @@ func TestAccInspectorAssessmentTarget_resourceGroupARN(t *testing.T) { func testAccCheckTargetAssessmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_inspector_assessment_target" { @@ -187,7 +190,7 @@ func testAccCheckTargetExists(ctx context.Context, name string, target *inspecto return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) assessmentTarget, err := tfinspector.DescribeAssessmentTarget(ctx, conn, rs.Primary.ID) @@ -207,7 +210,7 @@ func testAccCheckTargetExists(ctx context.Context, name string, target *inspecto func testAccCheckTargetDisappears(ctx context.Context, assessmentTarget *inspector.AssessmentTarget) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) input := &inspector.DeleteAssessmentTargetInput{ AssessmentTargetArn: assessmentTarget.Arn, diff --git a/internal/service/inspector/assessment_template.go b/internal/service/inspector/assessment_template.go index c8dc0d80458..f0d0d7dcb14 100644 --- a/internal/service/inspector/assessment_template.go +++ b/internal/service/inspector/assessment_template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector import ( @@ -90,7 +93,7 @@ func ResourceAssessmentTemplate() *schema.Resource { func resourceAssessmentTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) name := d.Get("name").(string) input := &inspector.CreateAssessmentTemplateInput{ @@ -108,7 +111,7 @@ func resourceAssessmentTemplateCreate(ctx context.Context, d *schema.ResourceDat d.SetId(aws.StringValue(output.AssessmentTemplateArn)) - if err := createTags(ctx, conn, d.Id(), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Inspector Classic Assessment Template (%s) tags: %s", d.Id(), err) } @@ -125,7 +128,7 @@ func resourceAssessmentTemplateCreate(ctx context.Context, d *schema.ResourceDat func resourceAssessmentTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) resp, err := conn.DescribeAssessmentTemplatesWithContext(ctx, &inspector.DescribeAssessmentTemplatesInput{ AssessmentTemplateArns: aws.StringSlice([]string{d.Id()}), @@ -164,7 +167,7 @@ func resourceAssessmentTemplateRead(ctx context.Context, d *schema.ResourceData, func resourceAssessmentTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) if d.HasChange("event_subscription") { old, new := d.GetChange("event_subscription") @@ -193,7 +196,7 @@ func resourceAssessmentTemplateUpdate(ctx context.Context, d *schema.ResourceDat func resourceAssessmentTemplateDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) log.Printf("[INFO] Deleting Inspector Classic Assessment Template: %s", d.Id()) _, err := conn.DeleteAssessmentTemplateWithContext(ctx, &inspector.DeleteAssessmentTemplateInput{ diff --git a/internal/service/inspector/assessment_template_test.go b/internal/service/inspector/assessment_template_test.go index fae39c2f4af..bb8caaaf01b 100644 --- a/internal/service/inspector/assessment_template_test.go +++ b/internal/service/inspector/assessment_template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector_test import ( @@ -186,7 +189,7 @@ func TestAccInspectorAssessmentTemplate_eventSubscription(t *testing.T) { func testAccCheckTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_inspector_assessment_template" { @@ -218,7 +221,7 @@ func testAccCheckTemplateDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckTemplateDisappears(ctx context.Context, v *inspector.AssessmentTemplate) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) _, err := conn.DeleteAssessmentTemplateWithContext(ctx, &inspector.DeleteAssessmentTemplateInput{ AssessmentTemplateArn: v.Arn, @@ -239,7 +242,7 @@ func testAccCheckTemplateExists(ctx context.Context, name string, v *inspector.A return fmt.Errorf("No Inspector Classic Assessment template ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn 
:= acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) resp, err := conn.DescribeAssessmentTemplatesWithContext(ctx, &inspector.DescribeAssessmentTemplatesInput{ AssessmentTemplateArns: aws.StringSlice([]string{rs.Primary.ID}), diff --git a/internal/service/inspector/find.go b/internal/service/inspector/find.go index dfcc3dbe876..8f5565fe1a5 100644 --- a/internal/service/inspector/find.go +++ b/internal/service/inspector/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector import ( diff --git a/internal/service/inspector/generate.go b/internal/service/inspector/generate.go index 3f580be1b80..9d8fab53ce5 100644 --- a/internal/service/inspector/generate.go +++ b/internal/service/inspector/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package inspector diff --git a/internal/service/inspector/resource_group.go b/internal/service/inspector/resource_group.go index 4a5e55d0a0c..63ef0a37d91 100644 --- a/internal/service/inspector/resource_group.go +++ b/internal/service/inspector/resource_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector import ( @@ -36,7 +39,7 @@ func ResourceResourceGroup() *schema.Resource { func resourceResourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) req := &inspector.CreateResourceGroupInput{ ResourceGroupTags: expandResourceGroupTags(d.Get("tags").(map[string]interface{})), @@ -55,7 +58,7 @@ func resourceResourceGroupCreate(ctx context.Context, d *schema.ResourceData, me func resourceResourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) resp, err := conn.DescribeResourceGroupsWithContext(ctx, &inspector.DescribeResourceGroupsInput{ ResourceGroupArns: aws.StringSlice([]string{d.Id()}), diff --git a/internal/service/inspector/resource_group_test.go b/internal/service/inspector/resource_group_test.go index 9c9e33dfb46..8a664372125 100644 --- a/internal/service/inspector/resource_group_test.go +++ b/internal/service/inspector/resource_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector_test import ( @@ -48,7 +51,7 @@ func TestAccInspectorResourceGroup_basic(t *testing.T) { func testAccCheckResourceGroupExists(ctx context.Context, name string, rg *inspector.ResourceGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InspectorConn(ctx) rs, ok := s.RootModule().Resources[name] if !ok { diff --git a/internal/service/inspector/rules_packages_data_source.go b/internal/service/inspector/rules_packages_data_source.go index baa103afac5..0007c58d381 100644 --- a/internal/service/inspector/rules_packages_data_source.go +++ b/internal/service/inspector/rules_packages_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector import ( @@ -29,7 +32,7 @@ func DataSourceRulesPackages() *schema.Resource { func dataSourceRulesPackagesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InspectorConn() + conn := meta.(*conns.AWSClient).InspectorConn(ctx) output, err := findRulesPackageARNs(ctx, conn) diff --git a/internal/service/inspector/rules_packages_data_source_test.go b/internal/service/inspector/rules_packages_data_source_test.go index a5cf95e9f7d..e0432e82d25 100644 --- a/internal/service/inspector/rules_packages_data_source_test.go +++ b/internal/service/inspector/rules_packages_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector_test import ( diff --git a/internal/service/inspector/service_package_gen.go b/internal/service/inspector/service_package_gen.go index 9dee6976c57..ff84ea15222 100644 --- a/internal/service/inspector/service_package_gen.go +++ b/internal/service/inspector/service_package_gen.go @@ -5,6 +5,10 @@ package inspector import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + inspector_sdkv1 "github.com/aws/aws-sdk-go/service/inspector" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -53,4 +57,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Inspector } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*inspector_sdkv1.Inspector, error) { + sess := config["session"].(*session_sdkv1.Session) + + return inspector_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/inspector/tags.go b/internal/service/inspector/tags.go index 7c515565d5c..a0d5b4d86da 100644 --- a/internal/service/inspector/tags.go +++ b/internal/service/inspector/tags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build !generate // +build !generate @@ -60,5 +63,5 @@ func createTags(ctx context.Context, conn inspectoriface.InspectorAPI, identifie // UpdateTags updates Inspector Classic service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return updateTags(ctx, meta.(*conns.AWSClient).InspectorConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).InspectorConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/inspector/tags_gen.go b/internal/service/inspector/tags_gen.go index c9a11a71119..b4a675fe131 100644 --- a/internal/service/inspector/tags_gen.go +++ b/internal/service/inspector/tags_gen.go @@ -12,10 +12,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/types" ) -// ListTags lists inspector service tags. +// listTags lists inspector service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn inspectoriface.InspectorAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn inspectoriface.InspectorAPI, identifier string) (tftags.KeyValueTags, error) { input := &inspector.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -32,7 +32,7 @@ func ListTags(ctx context.Context, conn inspectoriface.InspectorAPI, identifier // ListTags lists inspector service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).InspectorConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).InspectorConn(ctx), identifier) if err != nil { return err @@ -74,9 +74,9 @@ func KeyValueTags(ctx context.Context, tags []*inspector.Tag) tftags.KeyValueTag return tftags.New(ctx, m) } -// GetTagsIn returns inspector service tags from Context. +// getTagsIn returns inspector service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*inspector.Tag { +func getTagsIn(ctx context.Context) []*inspector.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -86,8 +86,8 @@ func GetTagsIn(ctx context.Context) []*inspector.Tag { return nil } -// SetTagsOut sets inspector service tags in Context. -func SetTagsOut(ctx context.Context, tags []*inspector.Tag) { +// setTagsOut sets inspector service tags in Context. +func setTagsOut(ctx context.Context, tags []*inspector.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/inspector2/acc_test.go b/internal/service/inspector2/acc_test.go index 8af543f7397..62ceafe4f96 100644 --- a/internal/service/inspector2/acc_test.go +++ b/internal/service/inspector2/acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector2_test import ( @@ -12,7 +15,7 @@ import ( ) func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) _, err := conn.ListDelegatedAdminAccounts(ctx, &inspector2.ListDelegatedAdminAccountsInput{}) diff --git a/internal/service/inspector2/delegated_admin_account.go b/internal/service/inspector2/delegated_admin_account.go index 27a4366f3eb..0c7fe6c934c 100644 --- a/internal/service/inspector2/delegated_admin_account.go +++ b/internal/service/inspector2/delegated_admin_account.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector2 import ( @@ -57,7 +60,7 @@ const ( ) func resourceDelegatedAdminAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) in := &inspector2.EnableDelegatedAdminAccountInput{ DelegatedAdminAccountId: aws.String(d.Get("account_id").(string)), @@ -84,7 +87,7 @@ func resourceDelegatedAdminAccountCreate(ctx context.Context, d *schema.Resource } func resourceDelegatedAdminAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) st, ai, err := FindDelegatedAdminAccountStatusID(ctx, conn, d.Id()) @@ -105,7 +108,7 @@ func resourceDelegatedAdminAccountRead(ctx context.Context, d *schema.ResourceDa } func resourceDelegatedAdminAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) log.Printf("[INFO] Deleting Inspector DelegatedAdminAccount %s", d.Id()) diff --git a/internal/service/inspector2/delegated_admin_account_test.go b/internal/service/inspector2/delegated_admin_account_test.go index 3d4e5a02103..adf3ad3dd03 100644 --- a/internal/service/inspector2/delegated_admin_account_test.go +++ b/internal/service/inspector2/delegated_admin_account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector2_test import ( @@ -97,7 +100,7 @@ func testAccDelegatedAdminAccount_disappears(t *testing.T) { func testAccCheckDelegatedAdminAccountDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_inspector2_delegated_admin_account" { @@ -132,7 +135,7 @@ func testAccCheckDelegatedAdminAccountExists(ctx context.Context, name string) r return create.Error(names.Inspector2, create.ErrActionCheckingExistence, tfinspector2.ResNameDelegatedAdminAccount, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) _, _, err := tfinspector2.FindDelegatedAdminAccountStatusID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/inspector2/enabler.go b/internal/service/inspector2/enabler.go index dcd3943e740..b93ccec55d7 100644 --- a/internal/service/inspector2/enabler.go +++ b/internal/service/inspector2/enabler.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector2 import ( @@ -97,7 +100,7 @@ const ( func resourceEnablerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) accountIDs := getAccountIDs(d) @@ -194,7 +197,7 @@ func resourceEnablerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceEnablerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) accountIDs, _, err := parseEnablerID(d.Id()) if err != nil { @@ -240,7 +243,7 @@ func resourceEnablerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceEnablerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) typeEnable := flex.ExpandStringyValueSet[types.ResourceScanType](d.Get("resource_types").(*schema.Set)) var typeDisable []types.ResourceScanType @@ -319,7 +322,7 @@ func resourceEnablerUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceEnablerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics client := meta.(*conns.AWSClient) - conn := client.Inspector2Client() + conn := client.Inspector2Client(ctx) accountIDs := getAccountIDs(d) admin := slices.Contains(accountIDs, client.AccountID) diff --git a/internal/service/inspector2/enabler_test.go b/internal/service/inspector2/enabler_test.go index 8392ca452c9..db3898ab31d 100644 --- a/internal/service/inspector2/enabler_test.go +++ b/internal/service/inspector2/enabler_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector2_test import ( @@ -494,7 +497,7 @@ func testAccEnabler_memberAccount_updateMemberAccountsAndScanTypes(t *testing.T) func testAccCheckEnablerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_inspector2_enabler" { @@ -540,7 +543,7 @@ func testAccCheckEnablerExists(ctx context.Context, name string, t []types.Resou return create.Error(names.Inspector2, create.ErrActionCheckingExistence, tfinspector2.ResNameEnabler, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) accountIDs, _, err := tfinspector2.ParseEnablerID(rs.Primary.ID) if err != nil { diff --git a/internal/service/inspector2/exports_test.go b/internal/service/inspector2/exports_test.go index 8b94cc7ab5f..5ffd77545a6 100644 --- a/internal/service/inspector2/exports_test.go +++ b/internal/service/inspector2/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector2 // Exports for use in tests only. diff --git a/internal/service/inspector2/generate.go b/internal/service/inspector2/generate.go new file mode 100644 index 00000000000..9a7bcf751c3 --- /dev/null +++ b/internal/service/inspector2/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package inspector2 diff --git a/internal/service/inspector2/inspector2_test.go b/internal/service/inspector2/inspector2_test.go index a43a7718ef7..272f4e044a9 100644 --- a/internal/service/inspector2/inspector2_test.go +++ b/internal/service/inspector2/inspector2_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector2_test import ( diff --git a/internal/service/inspector2/member_association.go b/internal/service/inspector2/member_association.go index 758c129b5ee..e5bdc2bae61 100644 --- a/internal/service/inspector2/member_association.go +++ b/internal/service/inspector2/member_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector2 import ( @@ -60,7 +63,7 @@ func ResourceMemberAssociation() *schema.Resource { func resourceMemberAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) accountID := d.Get("account_id").(string) input := &inspector2.AssociateMemberInput{ @@ -84,7 +87,7 @@ func resourceMemberAssociationCreate(ctx context.Context, d *schema.ResourceData func resourceMemberAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) member, err := FindMemberByAccountID(ctx, conn, d.Id()) @@ -108,7 +111,7 @@ func resourceMemberAssociationRead(ctx context.Context, d *schema.ResourceData, func resourceMemberAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) log.Printf("[DEBUG] Deleting Amazon Inspector 
Member Association: %s", d.Id()) diff --git a/internal/service/inspector2/member_association_test.go b/internal/service/inspector2/member_association_test.go index 3f26245fbfc..2908f133ac4 100644 --- a/internal/service/inspector2/member_association_test.go +++ b/internal/service/inspector2/member_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package inspector2_test import ( @@ -89,7 +92,7 @@ func testAccCheckMemberAssociationExists(ctx context.Context, n string) resource return fmt.Errorf("No Inspector2 Member Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) _, err := tfinspector2.FindMemberByAccountID(ctx, conn, rs.Primary.ID) @@ -99,7 +102,7 @@ func testAccCheckMemberAssociationExists(ctx context.Context, n string) resource func testAccCheckMemberAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_inspector2_member_association" { diff --git a/internal/service/inspector2/organization_configuration.go b/internal/service/inspector2/organization_configuration.go index b7f49236f3b..9e67587d079 100644 --- a/internal/service/inspector2/organization_configuration.go +++ b/internal/service/inspector2/organization_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector2 import ( @@ -75,7 +78,7 @@ func resourceOrganizationConfigurationCreate(ctx context.Context, d *schema.Reso } func resourceOrganizationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) out, err := conn.DescribeOrganizationConfiguration(ctx, &inspector2.DescribeOrganizationConfigurationInput{}) @@ -99,7 +102,7 @@ func resourceOrganizationConfigurationRead(ctx context.Context, d *schema.Resour } func resourceOrganizationConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) update := false @@ -131,7 +134,7 @@ func resourceOrganizationConfigurationUpdate(ctx context.Context, d *schema.Reso } func resourceOrganizationConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Inspector2Client() + conn := meta.(*conns.AWSClient).Inspector2Client(ctx) conns.GlobalMutexKV.Lock(orgConfigMutex) defer conns.GlobalMutexKV.Unlock(orgConfigMutex) diff --git a/internal/service/inspector2/organization_configuration_test.go b/internal/service/inspector2/organization_configuration_test.go index 1beb6e4d967..e9e9389771c 100644 --- a/internal/service/inspector2/organization_configuration_test.go +++ b/internal/service/inspector2/organization_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package inspector2_test import ( @@ -130,7 +133,7 @@ func testAccOrganizationConfiguration_lambda(t *testing.T) { func testAccCheckOrganizationConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) enabledDelAdAcct := false @@ -225,7 +228,7 @@ func testAccCheckOrganizationConfigurationExists(ctx context.Context, name strin return create.Error(names.Inspector2, create.ErrActionCheckingExistence, tfinspector2.ResNameOrganizationConfiguration, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).Inspector2Client(ctx) _, err := conn.DescribeOrganizationConfiguration(ctx, &inspector2.DescribeOrganizationConfigurationInput{}) diff --git a/internal/service/inspector2/service_package_gen.go b/internal/service/inspector2/service_package_gen.go index 641dec8fb58..fb51a10b399 100644 --- a/internal/service/inspector2/service_package_gen.go +++ b/internal/service/inspector2/service_package_gen.go @@ -5,6 +5,9 @@ package inspector2 import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + inspector2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/inspector2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +51,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Inspector2 } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*inspector2_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return inspector2_sdkv2.NewFromConfig(cfg, func(o *inspector2_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = inspector2_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/internetmonitor/exports_test.go b/internal/service/internetmonitor/exports_test.go new file mode 100644 index 00000000000..ca0f4428e40 --- /dev/null +++ b/internal/service/internetmonitor/exports_test.go @@ -0,0 +1,11 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package internetmonitor + +// Exports for use in tests only. +var ( + ResourceMonitor = resourceMonitor + + FindMonitorByName = findMonitorByName +) diff --git a/internal/service/internetmonitor/find.go b/internal/service/internetmonitor/find.go deleted file mode 100644 index acf379526a4..00000000000 --- a/internal/service/internetmonitor/find.go +++ /dev/null @@ -1,31 +0,0 @@ -package internetmonitor - -import ( - "context" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" -) - -func FindMonitor(ctx context.Context, conn *internetmonitor.InternetMonitor, name string) (*internetmonitor.GetMonitorOutput, error) { - input := &internetmonitor.GetMonitorInput{ - MonitorName: aws.String(name), - } - - output, err := conn.GetMonitorWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, internetmonitor.ErrCodeResourceNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - return output, nil 
-} diff --git a/internal/service/internetmonitor/generate.go b/internal/service/internetmonitor/generate.go index 13e851671a5..da87a69a432 100644 --- a/internal/service/internetmonitor/generate.go +++ b/internal/service/internetmonitor/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceArn -ServiceTagsMap -TagInIDElem=ResourceArn -UpdateTags +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsInIDElem=ResourceArn -ServiceTagsMap -KVTValues -TagInIDElem=ResourceArn -UpdateTags -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package internetmonitor diff --git a/internal/service/internetmonitor/monitor.go b/internal/service/internetmonitor/monitor.go index ad1389f0fae..ff372886916 100644 --- a/internal/service/internetmonitor/monitor.go +++ b/internal/service/internetmonitor/monitor.go @@ -1,17 +1,25 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package internetmonitor import ( "context" + "errors" "log" + "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/internetmonitor" + "github.com/aws/aws-sdk-go-v2/service/internetmonitor/types" + "github.com/hashicorp/aws-sdk-go-base/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -22,12 +30,13 @@ import ( // @SDKResource("aws_internetmonitor_monitor", name="Monitor") // @Tags(identifierAttribute="arn") -func ResourceMonitor() *schema.Resource { +func resourceMonitor() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceMonitorCreate, ReadWithoutTimeout: resourceMonitorRead, UpdateWithoutTimeout: resourceMonitorUpdate, DeleteWithoutTimeout: resourceMonitorDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, @@ -37,6 +46,25 @@ func ResourceMonitor() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "health_events_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_score_threshold": { + Type: schema.TypeFloat, + Optional: true, + Default: 95.0, + }, + "performance_score_threshold": { + Type: 
schema.TypeFloat, + Optional: true, + Default: 95.0, + }, + }, + }, + }, "internet_measurements_log_delivery": { Type: schema.TypeList, Optional: true, @@ -59,10 +87,10 @@ func ResourceMonitor() *schema.Resource { Optional: true, }, "log_delivery_status": { - Type: schema.TypeString, - Optional: true, - Default: internetmonitor.LogDeliveryStatusEnabled, - ValidateFunc: validation.StringInSlice(internetmonitor.LogDeliveryStatus_Values(), false), + Type: schema.TypeString, + Optional: true, + Default: types.LogDeliveryStatusEnabled, + ValidateDiagFunc: enum.Validate[types.LogDeliveryStatus](), }, }, }, @@ -93,11 +121,11 @@ func ResourceMonitor() *schema.Resource { "status": { Type: schema.TypeString, Optional: true, - Default: internetmonitor.MonitorConfigStateActive, - ValidateFunc: validation.StringInSlice([]string{ - internetmonitor.MonitorConfigStateActive, - internetmonitor.MonitorConfigStateInactive, - }, false), + Default: types.MonitorConfigStateActive, + ValidateFunc: validation.StringInSlice(enum.Slice( + types.MonitorConfigStateActive, + types.MonitorConfigStateInactive, + ), false), }, names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), @@ -112,57 +140,70 @@ func ResourceMonitor() *schema.Resource { } } +const ( + errCodeResourceNotFoundException = "ResourceNotFoundException" +) + func resourceMonitorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InternetMonitorConn() + conn := meta.(*conns.AWSClient).InternetMonitorClient(ctx) - monitorName := d.Get("monitor_name").(string) + name := d.Get("monitor_name").(string) input := &internetmonitor.CreateMonitorInput{ ClientToken: aws.String(id.UniqueId()), - MonitorName: aws.String(monitorName), - Tags: GetTagsIn(ctx), + MonitorName: aws.String(name), + Tags: getTagsIn(ctx), } - if v, ok := d.GetOk("max_city_networks_to_monitor"); ok { - input.MaxCityNetworksToMonitor = 
aws.Int64(int64(v.(int))) - } - - if v, ok := d.GetOk("traffic_percentage_to_monitor"); ok { - input.TrafficPercentageToMonitor = aws.Int64(int64(v.(int))) + if v, ok := d.GetOk("health_events_config"); ok { + input.HealthEventsConfig = expandHealthEventsConfig(v.([]interface{})) } if v, ok := d.GetOk("internet_measurements_log_delivery"); ok { input.InternetMeasurementsLogDelivery = expandInternetMeasurementsLogDelivery(v.([]interface{})) } + if v, ok := d.GetOk("max_city_networks_to_monitor"); ok { + input.MaxCityNetworksToMonitor = int32(v.(int)) + } + if v, ok := d.GetOk("resources"); ok && v.(*schema.Set).Len() > 0 { - input.Resources = flex.ExpandStringSet(v.(*schema.Set)) + input.Resources = flex.ExpandStringValueSet(v.(*schema.Set)) } - log.Printf("[DEBUG] Creating Internet Monitor Monitor: %s", input) - _, err := conn.CreateMonitorWithContext(ctx, input) + if v, ok := d.GetOk("traffic_percentage_to_monitor"); ok { + input.TrafficPercentageToMonitor = int32(v.(int)) + } + + _, err := conn.CreateMonitor(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating Internet Monitor Monitor (%s): %s", monitorName, err) + return sdkdiag.AppendErrorf(diags, "creating Internet Monitor Monitor (%s): %s", name, err) } - d.SetId(monitorName) + d.SetId(name) - if err := waitMonitor(ctx, conn, monitorName, internetmonitor.MonitorConfigStateActive); err != nil { + if err := waitMonitor(ctx, conn, d.Id(), types.MonitorConfigStateActive); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for Internet Monitor Monitor (%s) create: %s", d.Id(), err) } - if v, ok := d.GetOk("status"); ok && v.(string) != internetmonitor.MonitorConfigStateActive { - input := &internetmonitor.UpdateMonitorInput{ - ClientToken: aws.String(id.UniqueId()), - MonitorName: aws.String(d.Id()), - Status: aws.String(v.(string)), - } + if v, ok := d.GetOk("status"); ok { + if v := types.MonitorConfigState(v.(string)); v != types.MonitorConfigStateActive { + input := 
&internetmonitor.UpdateMonitorInput{ + ClientToken: aws.String(id.UniqueId()), + MonitorName: aws.String(d.Id()), + Status: v, + } - _, err := conn.UpdateMonitorWithContext(ctx, input) + _, err := conn.UpdateMonitor(ctx, input) - if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Internet Monitor Monitor (%s) to inactive at creation: %s", d.Id(), err) + if err != nil { + return sdkdiag.AppendErrorf(diags, "updating Internet Monitor Monitor (%s): %s", d.Id(), err) + } + + if err := waitMonitor(ctx, conn, d.Id(), v); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Internet Monitor Monitor (%s) INACTIVE: %s", d.Id(), err) + } } } @@ -171,9 +212,9 @@ func resourceMonitorCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceMonitorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InternetMonitorConn() + conn := meta.(*conns.AWSClient).InternetMonitorClient(ctx) - monitor, err := FindMonitor(ctx, conn, d.Id()) + monitor, err := findMonitorByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Internet Monitor Monitor (%s) not found, removing from state", d.Id()) @@ -185,26 +226,27 @@ func resourceMonitorRead(ctx context.Context, d *schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "reading Internet Monitor Monitor (%s): %s", d.Id(), err) } - err = d.Set("internet_measurements_log_delivery", flattenInternetMeasurementsLogDelivery(monitor.InternetMeasurementsLogDelivery)) - if err != nil { + d.Set("arn", monitor.MonitorArn) + if err := d.Set("health_events_config", flattenHealthEventsConfig(monitor.HealthEventsConfig)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting health_events_config: %s", err) + } + if err := d.Set("internet_measurements_log_delivery", flattenInternetMeasurementsLogDelivery(monitor.InternetMeasurementsLogDelivery)); err != nil { return 
sdkdiag.AppendErrorf(diags, "setting internet_measurements_log_delivery: %s", err) } - - d.Set("arn", monitor.MonitorArn) d.Set("monitor_name", monitor.MonitorName) d.Set("max_city_networks_to_monitor", monitor.MaxCityNetworksToMonitor) - d.Set("traffic_percentage_to_monitor", monitor.TrafficPercentageToMonitor) + d.Set("resources", flex.FlattenStringValueSet(monitor.Resources)) d.Set("status", monitor.Status) - d.Set("resources", flex.FlattenStringSet(monitor.Resources)) + d.Set("traffic_percentage_to_monitor", monitor.TrafficPercentageToMonitor) - SetTagsOut(ctx, monitor.Tags) + setTagsOut(ctx, monitor.Tags) return diags } func resourceMonitorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InternetMonitorConn() + conn := meta.(*conns.AWSClient).InternetMonitorClient(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &internetmonitor.UpdateMonitorInput{ @@ -212,46 +254,46 @@ func resourceMonitorUpdate(ctx context.Context, d *schema.ResourceData, meta int MonitorName: aws.String(d.Id()), } - if d.HasChange("max_city_networks_to_monitor") { - input.MaxCityNetworksToMonitor = aws.Int64(int64(d.Get("max_city_networks_to_monitor").(int))) - } - - if d.HasChange("traffic_percentage_to_monitor") { - input.TrafficPercentageToMonitor = aws.Int64(int64(d.Get("traffic_percentage_to_monitor").(int))) - } - - if d.HasChange("status") { - input.Status = aws.String(d.Get("status").(string)) + if d.HasChange("health_events_config") { + input.HealthEventsConfig = expandHealthEventsConfig(d.Get("health_events_config").([]interface{})) } if d.HasChange("internet_measurements_log_delivery") { input.InternetMeasurementsLogDelivery = expandInternetMeasurementsLogDelivery(d.Get("internet_measurements_log_delivery").([]interface{})) } + if d.HasChange("max_city_networks_to_monitor") { + input.MaxCityNetworksToMonitor = int32(d.Get("max_city_networks_to_monitor").(int)) + } + if 
d.HasChange("resources") { o, n := d.GetChange("resources") os, ns := o.(*schema.Set), n.(*schema.Set) - remove := flex.ExpandStringValueSet(os.Difference(ns)) - add := flex.ExpandStringValueSet(ns.Difference(os)) - - if len(add) > 0 { - input.ResourcesToAdd = aws.StringSlice(add) + if add := flex.ExpandStringValueSet(ns.Difference(os)); len(add) > 0 { + input.ResourcesToAdd = add } - - if len(remove) > 0 { - input.ResourcesToRemove = aws.StringSlice(remove) + if remove := flex.ExpandStringValueSet(os.Difference(ns)); len(remove) > 0 { + input.ResourcesToRemove = remove } } - log.Printf("[DEBUG] Updating Internet Monitor Monitor: %s", input) - _, err := conn.UpdateMonitorWithContext(ctx, input) + status := types.MonitorConfigState(d.Get("status").(string)) + if d.HasChange("status") { + input.Status = status + } + + if d.HasChange("traffic_percentage_to_monitor") { + input.TrafficPercentageToMonitor = int32(d.Get("traffic_percentage_to_monitor").(int)) + } + + _, err := conn.UpdateMonitor(ctx, input) if err != nil { return sdkdiag.AppendErrorf(diags, "updating Internet Monitor Monitor (%s): %s", d.Id(), err) } - if err := waitMonitor(ctx, conn, d.Id(), d.Get("status").(string)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Internet Monitor Monitor (%s) update: %s", d.Id(), err) + if err := waitMonitor(ctx, conn, d.Id(), status); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Internet Monitor Monitor (%s) %s: %s", d.Id(), status, err) } } @@ -260,30 +302,36 @@ func resourceMonitorUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceMonitorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).InternetMonitorConn() + conn := meta.(*conns.AWSClient).InternetMonitorClient(ctx) input := &internetmonitor.UpdateMonitorInput{ ClientToken: aws.String(id.UniqueId()), MonitorName: aws.String(d.Id()), - Status: 
aws.String(internetmonitor.MonitorConfigStateInactive), + Status: types.MonitorConfigStateInactive, } - _, err := conn.UpdateMonitorWithContext(ctx, input) + _, err := conn.UpdateMonitor(ctx, input) + + // if errs.IsA[*types.ResourceNotFoundException](err) { + if tfawserr.ErrCodeEquals(err, errCodeResourceNotFoundException) { + return diags + } if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Internet Monitor Monitor (%s) to inactive before deletion: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating Internet Monitor Monitor (%s): %s", d.Id(), err) } - if err := waitMonitor(ctx, conn, d.Id(), internetmonitor.MonitorConfigStateInactive); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Internet Monitor Monitor (%s) to be inactive before deletion: %s", d.Id(), err) + if err := waitMonitor(ctx, conn, d.Id(), types.MonitorConfigStateInactive); err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for Internet Monitor Monitor (%s) INACTIVE: %s", d.Id(), err) } log.Printf("[DEBUG] Deleting Internet Monitor Monitor: %s", d.Id()) - _, err = conn.DeleteMonitorWithContext(ctx, &internetmonitor.DeleteMonitorInput{ + _, err = conn.DeleteMonitor(ctx, &internetmonitor.DeleteMonitorInput{ MonitorName: aws.String(d.Id()), }) - if tfawserr.ErrCodeEquals(err, internetmonitor.ErrCodeNotFoundException) { + // if errs.IsA[*types.ResourceNotFoundException](err) { + if tfawserr.ErrCodeEquals(err, errCodeResourceNotFoundException) { return diags } @@ -294,72 +342,168 @@ func resourceMonitorDelete(ctx context.Context, d *schema.ResourceData, meta int return diags } -func expandInternetMeasurementsLogDelivery(vInternetMeasurementsLogDelivery []interface{}) *internetmonitor.InternetMeasurementsLogDelivery { - if len(vInternetMeasurementsLogDelivery) == 0 || vInternetMeasurementsLogDelivery[0] == nil { +func findMonitorByName(ctx context.Context, conn *internetmonitor.Client, name string) (*internetmonitor.GetMonitorOutput, error) { + input := 
&internetmonitor.GetMonitorInput{ + MonitorName: aws.String(name), + } + + output, err := conn.GetMonitor(ctx, input) + + // if errs.IsA[*types.ResourceNotFoundException](err) { + if tfawserr.ErrCodeEquals(err, errCodeResourceNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + +func statusMonitor(ctx context.Context, conn *internetmonitor.Client, name string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + monitor, err := findMonitorByName(ctx, conn, name) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return monitor, string(monitor.Status), nil + } +} + +func waitMonitor(ctx context.Context, conn *internetmonitor.Client, name string, targetState types.MonitorConfigState) error { + const ( + timeout = 5 * time.Minute + ) + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(types.MonitorConfigStatePending), + Target: enum.Slice(targetState), + Refresh: statusMonitor(ctx, conn, name), + Timeout: timeout, + Delay: 10 * time.Second, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*internetmonitor.GetMonitorOutput); ok { + if status := output.Status; status == types.MonitorConfigStateError { + tfresource.SetLastError(err, errors.New(aws.ToString(output.ProcessingStatusInfo))) + } + + return err + } + + return err +} + +func expandHealthEventsConfig(tfList []interface{}) *types.HealthEventsConfig { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap := tfList[0].(map[string]interface{}) + apiObject := &types.HealthEventsConfig{} + + if v, ok := tfMap["availability_score_threshold"].(float64); ok && v != 0.0 { + apiObject.AvailabilityScoreThreshold = v + } + + if v, ok := 
tfMap["performance_score_threshold"].(float64); ok && v != 0.0 { + apiObject.PerformanceScoreThreshold = v + } + + return apiObject +} + +func expandInternetMeasurementsLogDelivery(tfList []interface{}) *types.InternetMeasurementsLogDelivery { + if len(tfList) == 0 || tfList[0] == nil { return nil } - mInternetMeasurementsLogDelivery := vInternetMeasurementsLogDelivery[0].(map[string]interface{}) - logDelivery := &internetmonitor.InternetMeasurementsLogDelivery{} + tfMap := tfList[0].(map[string]interface{}) + apiObject := &types.InternetMeasurementsLogDelivery{} - if v, ok := mInternetMeasurementsLogDelivery["s3_config"].([]interface{}); ok { - logDelivery.S3Config = expandS3Config(v) + if v, ok := tfMap["s3_config"].([]interface{}); ok { + apiObject.S3Config = expandS3Config(v) } - return logDelivery + return apiObject } -func expandS3Config(vS3Config []interface{}) *internetmonitor.S3Config { - if len(vS3Config) == 0 || vS3Config[0] == nil { +func expandS3Config(tfList []interface{}) *types.S3Config { + if len(tfList) == 0 || tfList[0] == nil { return nil } - mS3Config := vS3Config[0].(map[string]interface{}) - s3Config := &internetmonitor.S3Config{} + tfMap := tfList[0].(map[string]interface{}) + apiObject := &types.S3Config{} - if v, ok := mS3Config["bucket_name"].(string); ok && v != "" { - s3Config.BucketName = aws.String(v) + if v, ok := tfMap["bucket_name"].(string); ok && v != "" { + apiObject.BucketName = aws.String(v) } - if v, ok := mS3Config["bucket_prefix"].(string); ok && v != "" { - s3Config.BucketPrefix = aws.String(v) + if v, ok := tfMap["bucket_prefix"].(string); ok && v != "" { + apiObject.BucketPrefix = aws.String(v) } - if v, ok := mS3Config["log_delivery_status"].(string); ok && v != "" { - s3Config.LogDeliveryStatus = aws.String(v) + if v, ok := tfMap["log_delivery_status"].(string); ok && v != "" { + apiObject.LogDeliveryStatus = types.LogDeliveryStatus(v) } - return s3Config + return apiObject } -func 
flattenInternetMeasurementsLogDelivery(internetMeasurementsLogDelivery *internetmonitor.InternetMeasurementsLogDelivery) []interface{} { - if internetMeasurementsLogDelivery == nil { +func flattenHealthEventsConfig(apiObject *types.HealthEventsConfig) []interface{} { + if apiObject == nil { return []interface{}{} } - mInternetMeasurementsLogDelivery := map[string]interface{}{ - "s3_config": flattenS3Config(internetMeasurementsLogDelivery.S3Config), + tfMap := map[string]interface{}{ + "availability_score_threshold": apiObject.AvailabilityScoreThreshold, + "performance_score_threshold": apiObject.PerformanceScoreThreshold, } - return []interface{}{mInternetMeasurementsLogDelivery} + return []interface{}{tfMap} } -func flattenS3Config(s3Config *internetmonitor.S3Config) []interface{} { - if s3Config == nil { +func flattenInternetMeasurementsLogDelivery(apiObject *types.InternetMeasurementsLogDelivery) []interface{} { + if apiObject == nil { return []interface{}{} } - mS3Config := map[string]interface{}{ - "bucket_name": aws.StringValue(s3Config.BucketName), + tfMap := map[string]interface{}{ + "s3_config": flattenS3Config(apiObject.S3Config), + } + + return []interface{}{tfMap} +} + +func flattenS3Config(apiObject *types.S3Config) []interface{} { + if apiObject == nil { + return []interface{}{} } - if s3Config.BucketPrefix != nil { - mS3Config["bucket_prefix"] = aws.StringValue(s3Config.BucketPrefix) + tfMap := map[string]interface{}{ + "bucket_name": aws.ToString(apiObject.BucketName), + "log_delivery_status": string(apiObject.LogDeliveryStatus), } - if s3Config.LogDeliveryStatus != nil { - mS3Config["log_delivery_status"] = aws.StringValue(s3Config.LogDeliveryStatus) + if apiObject.BucketPrefix != nil { + tfMap["bucket_prefix"] = aws.ToString(apiObject.BucketPrefix) } - return []interface{}{mS3Config} + return []interface{}{tfMap} } diff --git a/internal/service/internetmonitor/monitor_test.go b/internal/service/internetmonitor/monitor_test.go index 
06ebcdc5a9d..e65cf69e4f6 100644 --- a/internal/service/internetmonitor/monitor_test.go +++ b/internal/service/internetmonitor/monitor_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package internetmonitor_test import ( @@ -6,7 +9,6 @@ import ( "regexp" "testing" - "github.com/aws/aws-sdk-go/service/internetmonitor" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -14,6 +16,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tfinternetmonitor "github.com/hashicorp/terraform-provider-aws/internal/service/internetmonitor" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccInternetMonitorMonitor_basic(t *testing.T) { @@ -23,19 +26,23 @@ func TestAccInternetMonitorMonitor_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, internetmonitor.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.InternetMonitorEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckMonitorDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccMonitorConfig_basic(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMonitorExists(ctx, resourceName), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "internetmonitor", regexp.MustCompile(`monitor/.+$`)), + resource.TestCheckResourceAttr(resourceName, "health_events_config.#", "0"), + resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.#", "1"), + resource.TestCheckResourceAttr(resourceName, "max_city_networks_to_monitor", "0"), resource.TestCheckResourceAttr(resourceName, "monitor_name", rName), - 
resource.TestCheckResourceAttr(resourceName, "traffic_percentage_to_monitor", "1"), + resource.TestCheckResourceAttr(resourceName, "resources.#", "0"), resource.TestCheckResourceAttr(resourceName, "status", "ACTIVE"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "traffic_percentage_to_monitor", "1"), ), }, { @@ -54,75 +61,92 @@ func TestAccInternetMonitorMonitor_basic(t *testing.T) { }) } -func TestAccInternetMonitorMonitor_log(t *testing.T) { +func TestAccInternetMonitorMonitor_disappears(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_internetmonitor_monitor.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, internetmonitor.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.InternetMonitorEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckMonitorDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccMonitorConfig_log(rName), + Config: testAccMonitorConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckMonitorExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.#", "1"), - resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.0.s3_config.#", "1"), - resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.0.s3_config.0.bucket_name", rName), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfinternetmonitor.ResourceMonitor(), resourceName), ), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ExpectNonEmptyPlan: true, }, }, }) } -func TestAccInternetMonitorMonitor_disappears(t *testing.T) { +func TestAccInternetMonitorMonitor_tags(t *testing.T) { ctx := acctest.Context(t) rName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_internetmonitor_monitor.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, internetmonitor.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.InternetMonitorEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckMonitorDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccMonitorConfig_basic(rName), + Config: testAccMonitorConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheckMonitorExists(ctx, resourceName), - acctest.CheckResourceDisappears(ctx, acctest.Provider, tfinternetmonitor.ResourceMonitor(), resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccMonitorConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckMonitorExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccMonitorConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckMonitorExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), - ExpectNonEmptyPlan: true, }, }, }) } -func TestAccInternetMonitorMonitor_tags(t *testing.T) { +func TestAccInternetMonitorMonitor_healthEventsConfig(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_internetmonitor_monitor.test" 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, internetmonitor.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.InternetMonitorEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckMonitorDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccMonitorConfig_tags1(rName, "key1", "value1"), + Config: testAccMonitorConfig_healthEventsConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckMonitorExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "health_events_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "health_events_config.0.availability_score_threshold", "50"), + resource.TestCheckResourceAttr(resourceName, "health_events_config.0.performance_score_threshold", "95"), ), }, { @@ -131,36 +155,57 @@ func TestAccInternetMonitorMonitor_tags(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccMonitorConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Config: testAccMonitorConfig_healthEventsConfigUpdated(rName), Check: resource.ComposeTestCheckFunc( testAccCheckMonitorExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), - resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), - resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "health_events_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "health_events_config.0.availability_score_threshold", "75"), + resource.TestCheckResourceAttr(resourceName, "health_events_config.0.performance_score_threshold", "85"), ), }, + }, + }) +} + +func TestAccInternetMonitorMonitor_log(t *testing.T) { + ctx := acctest.Context(t) + rName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_internetmonitor_monitor.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.InternetMonitorEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckMonitorDestroy(ctx), + Steps: []resource.TestStep{ { - Config: testAccMonitorConfig_tags1(rName, "key2", "value2"), + Config: testAccMonitorConfig_log(rName), Check: resource.ComposeTestCheckFunc( testAccCheckMonitorExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.#", "1"), + resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.0.s3_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "internet_measurements_log_delivery.0.s3_config.0.bucket_name", rName), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } func testAccCheckMonitorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).InternetMonitorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InternetMonitorClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_internetmonitor_monitor" { continue } - _, err := tfinternetmonitor.FindMonitor(ctx, conn, rs.Primary.ID) + _, err := tfinternetmonitor.FindMonitorByName(ctx, conn, rs.Primary.ID) if tfresource.NotFound(err) { continue @@ -170,27 +215,23 @@ func testAccCheckMonitorDestroy(ctx context.Context) resource.TestCheckFunc { return err } - return fmt.Errorf("InternetMonitor Monitor %s still exists", rs.Primary.ID) + return fmt.Errorf("Internet Monitor Monitor %s still exists", rs.Primary.ID) } return nil } 
} -func testAccCheckMonitorExists(ctx context.Context, resourceName string) resource.TestCheckFunc { +func testAccCheckMonitorExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("No InternetMonitor Monitor ID is set") - } - - conn := acctest.Provider.Meta().(*conns.AWSClient).InternetMonitorConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).InternetMonitorClient(ctx) - _, err := tfinternetmonitor.FindMonitor(ctx, conn, rs.Primary.ID) + _, err := tfinternetmonitor.FindMonitorByName(ctx, conn, rs.Primary.ID) return err } @@ -215,6 +256,33 @@ resource "aws_internetmonitor_monitor" "test" { `, rName, status) } +func testAccMonitorConfig_healthEventsConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_internetmonitor_monitor" "test" { + monitor_name = %[1]q + max_city_networks_to_monitor = 2 + + health_events_config { + availability_score_threshold = 50 + } +} +`, rName) +} + +func testAccMonitorConfig_healthEventsConfigUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_internetmonitor_monitor" "test" { + monitor_name = %[1]q + max_city_networks_to_monitor = 2 + + health_events_config { + availability_score_threshold = 75 + performance_score_threshold = 85 + } +} +`, rName) +} + func testAccMonitorConfig_log(rName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { diff --git a/internal/service/internetmonitor/service_package_gen.go b/internal/service/internetmonitor/service_package_gen.go index 5152df63960..2ea6b068201 100644 --- a/internal/service/internetmonitor/service_package_gen.go +++ b/internal/service/internetmonitor/service_package_gen.go @@ -5,6 +5,9 @@ package internetmonitor import ( "context" + aws_sdkv2 
"github.com/aws/aws-sdk-go-v2/aws" + internetmonitor_sdkv2 "github.com/aws/aws-sdk-go-v2/service/internetmonitor" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -26,7 +29,7 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ { - Factory: ResourceMonitor, + Factory: resourceMonitor, TypeName: "aws_internetmonitor_monitor", Name: "Monitor", Tags: &types.ServicePackageResourceTags{ @@ -40,4 +43,17 @@ func (p *servicePackage) ServicePackageName() string { return names.InternetMonitor } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*internetmonitor_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return internetmonitor_sdkv2.NewFromConfig(cfg, func(o *internetmonitor_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = internetmonitor_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/internetmonitor/status.go b/internal/service/internetmonitor/status.go deleted file mode 100644 index 9ec04517248..00000000000 --- a/internal/service/internetmonitor/status.go +++ /dev/null @@ -1,26 +0,0 @@ -package internetmonitor - -import ( - "context" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -func statusMonitor(ctx 
context.Context, conn *internetmonitor.InternetMonitor, name string) retry.StateRefreshFunc { - return func() (interface{}, string, error) { - monitor, err := FindMonitor(ctx, conn, name) - - if tfresource.NotFound(err) { - return nil, "", nil - } - - if err != nil { - return nil, "", err - } - - return monitor, aws.StringValue(monitor.Status), nil - } -} diff --git a/internal/service/internetmonitor/sweep.go b/internal/service/internetmonitor/sweep.go index c1f3afae88a..04c1bf87a65 100644 --- a/internal/service/internetmonitor/sweep.go +++ b/internal/service/internetmonitor/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -7,11 +10,9 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/hashicorp/go-multierror" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/internetmonitor" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,46 +25,41 @@ func init() { func sweepMonitors(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) - + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - - conn := client.(*conns.AWSClient).InternetMonitorConn() + input := &internetmonitor.ListMonitorsInput{} + conn := client.InternetMonitorClient(ctx) sweepResources := make([]sweep.Sweepable, 0) - var errs *multierror.Error - err = conn.ListMonitorsPagesWithContext(ctx, &internetmonitor.ListMonitorsInput{}, func(resp *internetmonitor.ListMonitorsOutput, lastPage bool) bool { - if len(resp.Monitors) == 0 { - log.Print("[DEBUG] No InternetMonitor Monitors to sweep") - return !lastPage + pages := 
internetmonitor.NewListMonitorsPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Internet Monitor Monitor sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing Internet Monitor Monitors (%s): %w", region, err) } - for _, c := range resp.Monitors { - r := ResourceMonitor() + for _, v := range page.Monitors { + r := resourceMonitor() d := r.Data(nil) - d.SetId(aws.StringValue(c.MonitorName)) + d.SetId(aws.ToString(v.MonitorName)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if err != nil { - errs = multierror.Append(errs, fmt.Errorf("error describing InternetMonitor Monitors: %w", err)) - // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { - errs = multierror.Append(errs, fmt.Errorf("error sweeping InternetMonitor Monitors for %s: %w", region, err)) - } + err = sweep.SweepOrchestrator(ctx, sweepResources) - if sweep.SkipSweepError(errs.ErrorOrNil()) { - log.Printf("[WARN] Skipping InternetMonitor Monitor sweep for %s: %s", region, err) - return nil + if err != nil { + return fmt.Errorf("error sweeping Internet Monitor Monitors (%s): %w", region, err) } - return errs.ErrorOrNil() + return nil } diff --git a/internal/service/internetmonitor/tags_gen.go b/internal/service/internetmonitor/tags_gen.go index 905ae83ee0f..842a8996f71 100644 --- a/internal/service/internetmonitor/tags_gen.go +++ b/internal/service/internetmonitor/tags_gen.go @@ -5,24 +5,23 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/aws/aws-sdk-go/service/internetmonitor/internetmonitoriface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/internetmonitor" 
"github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists internetmonitor service tags. +// listTags lists internetmonitor service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn internetmonitoriface.InternetMonitorAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *internetmonitor.Client, identifier string) (tftags.KeyValueTags, error) { input := &internetmonitor.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } - output, err := conn.ListTagsForResourceWithContext(ctx, input) + output, err := conn.ListTagsForResource(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +33,7 @@ func ListTags(ctx context.Context, conn internetmonitoriface.InternetMonitorAPI, // ListTags lists internetmonitor service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).InternetMonitorConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).InternetMonitorClient(ctx), identifier) if err != nil { return err @@ -47,21 +46,21 @@ func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier stri return nil } -// map[string]*string handling +// map[string]string handling // Tags returns internetmonitor service tags. -func Tags(tags tftags.KeyValueTags) map[string]*string { - return aws.StringMap(tags.Map()) +func Tags(tags tftags.KeyValueTags) map[string]string { + return tags.Map() } -// KeyValueTags creates KeyValueTags from internetmonitor service tags. 
-func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { +// KeyValueTags creates tftags.KeyValueTags from internetmonitor service tags. +func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns internetmonitor service tags from Context. +// getTagsIn returns internetmonitor service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets internetmonitor service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets internetmonitor service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates internetmonitor service tags. +// updateTags updates internetmonitor service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn internetmonitoriface.InternetMonitorAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *internetmonitor.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -90,10 +89,10 @@ func UpdateTags(ctx context.Context, conn internetmonitoriface.InternetMonitorAP if len(removedTags) > 0 { input := &internetmonitor.UntagResourceInput{ ResourceArn: aws.String(identifier), - TagKeys: aws.StringSlice(removedTags.Keys()), + TagKeys: removedTags.Keys(), } - _, err := conn.UntagResourceWithContext(ctx, input) + _, err := conn.UntagResource(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -108,7 +107,7 @@ func UpdateTags(ctx context.Context, conn internetmonitoriface.InternetMonitorAP Tags: Tags(updatedTags), } - _, err := conn.TagResourceWithContext(ctx, input) + _, err := conn.TagResource(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -121,5 +120,5 @@ func UpdateTags(ctx context.Context, conn internetmonitoriface.InternetMonitorAP // UpdateTags updates internetmonitor service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).InternetMonitorConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).InternetMonitorClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/internetmonitor/wait.go b/internal/service/internetmonitor/wait.go deleted file mode 100644 index cc4a7d77006..00000000000 --- a/internal/service/internetmonitor/wait.go +++ /dev/null @@ -1,38 +0,0 @@ -package internetmonitor - -import ( - "context" - "errors" - "time" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/internetmonitor" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -const ( - monitorCreatedTimeout = 5 * time.Minute -) - -func waitMonitor(ctx context.Context, conn *internetmonitor.InternetMonitor, name, target string) error { - stateConf := &retry.StateChangeConf{ - Pending: []string{internetmonitor.MonitorConfigStatePending}, - Target: []string{target}, - Refresh: statusMonitor(ctx, conn, name), - Timeout: monitorCreatedTimeout, - Delay: 10 * time.Second, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - - if output, ok := outputRaw.(*internetmonitor.GetMonitorOutput); ok { - if statusCode := aws.StringValue(output.Status); statusCode == internetmonitor.MonitorConfigStateError { - tfresource.SetLastError(err, errors.New(aws.StringValue(output.ProcessingStatusInfo))) - } - - return err - } - - return err -} diff --git a/internal/service/iot/authorizer.go b/internal/service/iot/authorizer.go index 8d160a8676a..46c3f703b55 100644 --- a/internal/service/iot/authorizer.go +++ b/internal/service/iot/authorizer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -87,7 +90,7 @@ func ResourceAuthorizer() *schema.Resource { func resourceAuthorizerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) name := d.Get("name").(string) input := &iot.CreateAuthorizerInput{ @@ -120,7 +123,7 @@ func resourceAuthorizerCreate(ctx context.Context, d *schema.ResourceData, meta func resourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) authorizer, err := FindAuthorizerByName(ctx, conn, d.Id()) @@ -148,7 +151,7 @@ func resourceAuthorizerRead(ctx context.Context, d *schema.ResourceData, meta in func resourceAuthorizerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := iot.UpdateAuthorizerInput{ AuthorizerName: aws.String(d.Id()), @@ -186,7 +189,7 @@ func resourceAuthorizerUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceAuthorizerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) // In order to delete an IoT Authorizer, you must set it inactive first. if d.Get("status").(string) == iot.AuthorizerStatusActive { diff --git a/internal/service/iot/authorizer_test.go b/internal/service/iot/authorizer_test.go index aa95e824c54..68794778d68 100644 --- a/internal/service/iot/authorizer_test.go +++ b/internal/service/iot/authorizer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -163,7 +166,7 @@ func testAccCheckAuthorizerExists(ctx context.Context, n string, v *iot.Authoriz return fmt.Errorf("No IoT Authorizer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) output, err := tfiot.FindAuthorizerByName(ctx, conn, rs.Primary.ID) @@ -179,7 +182,7 @@ func testAccCheckAuthorizerExists(ctx context.Context, n string, v *iot.Authoriz func testAccCheckAuthorizerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_authorizer" { diff --git a/internal/service/iot/certificate.go b/internal/service/iot/certificate.go index b01cb9d5ea9..9ba99b81fda 100644 --- a/internal/service/iot/certificate.go +++ b/internal/service/iot/certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -60,7 +63,7 @@ func ResourceCertificate() *schema.Resource { func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) _, okcert := d.GetOk("certificate_pem") _, okCA := d.GetOk("ca_pem") @@ -127,7 +130,7 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) out, err := conn.DescribeCertificateWithContext(ctx, &iot.DescribeCertificateInput{ CertificateId: aws.String(d.Id()), @@ -145,7 +148,7 @@ func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta i func resourceCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChange("active") { status := iot.CertificateStatusInactive @@ -167,7 +170,7 @@ func resourceCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) _, err := conn.UpdateCertificateWithContext(ctx, &iot.UpdateCertificateInput{ CertificateId: aws.String(d.Id()), diff --git a/internal/service/iot/certificate_test.go b/internal/service/iot/certificate_test.go index 22f916f42cd..8b6489a3c46 100644 --- a/internal/service/iot/certificate_test.go +++ b/internal/service/iot/certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -88,7 +91,7 @@ func TestAccIoTCertificate_Keys_existingCertificate(t *testing.T) { func testAccCheckCertificateDestroy_basic(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_certificate" { diff --git a/internal/service/iot/consts.go b/internal/service/iot/consts.go index 6e573097288..57bbf5672b7 100644 --- a/internal/service/iot/consts.go +++ b/internal/service/iot/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( diff --git a/internal/service/iot/endpoint_data_source.go b/internal/service/iot/endpoint_data_source.go index 5d25168a354..a8299445e76 100644 --- a/internal/service/iot/endpoint_data_source.go +++ b/internal/service/iot/endpoint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -37,7 +40,7 @@ func DataSourceEndpoint() *schema.Resource { func dataSourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := &iot.DescribeEndpointInput{} if v, ok := d.GetOk("endpoint_type"); ok { diff --git a/internal/service/iot/endpoint_data_source_test.go b/internal/service/iot/endpoint_data_source_test.go index e2872269bde..c194b798112 100644 --- a/internal/service/iot/endpoint_data_source_test.go +++ b/internal/service/iot/endpoint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( diff --git a/internal/service/iot/find.go b/internal/service/iot/find.go index 9a274bade0c..ecfdbea6eec 100644 --- a/internal/service/iot/find.go +++ b/internal/service/iot/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( diff --git a/internal/service/iot/flex.go b/internal/service/iot/flex.go index 28bdf811dc7..0cb947a8f56 100644 --- a/internal/service/iot/flex.go +++ b/internal/service/iot/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( diff --git a/internal/service/iot/generate.go b/internal/service/iot/generate.go index b940244206f..d93df36bc39 100644 --- a/internal/service/iot/generate.go +++ b/internal/service/iot/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package iot diff --git a/internal/service/iot/indexing_configuration.go b/internal/service/iot/indexing_configuration.go index 5ab58698acc..239da1c60ab 100644 --- a/internal/service/iot/indexing_configuration.go +++ b/internal/service/iot/indexing_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -150,7 +153,7 @@ func ResourceIndexingConfiguration() *schema.Resource { } func resourceIndexingConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := &iot.UpdateIndexingConfigurationInput{} @@ -166,7 +169,7 @@ func resourceIndexingConfigurationPut(ctx context.Context, d *schema.ResourceDat _, err := conn.UpdateIndexingConfigurationWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating IoT Indexing Configuration: %s", err) + return diag.Errorf("updating IoT Indexing Configuration: %s", err) } d.SetId(meta.(*conns.AWSClient).Region) @@ -175,24 +178,24 @@ func resourceIndexingConfigurationPut(ctx context.Context, d *schema.ResourceDat } func resourceIndexingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := conn.GetIndexingConfigurationWithContext(ctx, &iot.GetIndexingConfigurationInput{}) if err != nil { - return diag.Errorf("error reading IoT Indexing Configuration: %s", err) + return diag.Errorf("reading IoT Indexing Configuration: %s", err) } if output.ThingGroupIndexingConfiguration != nil { if err := d.Set("thing_group_indexing_configuration", []interface{}{flattenThingGroupIndexingConfiguration(output.ThingGroupIndexingConfiguration)}); err != nil { - return diag.Errorf("error setting thing_group_indexing_configuration: %s", err) + return diag.Errorf("setting thing_group_indexing_configuration: %s", err) } } else { d.Set("thing_group_indexing_configuration", nil) } if output.ThingIndexingConfiguration != nil { if err := d.Set("thing_indexing_configuration", []interface{}{flattenThingIndexingConfiguration(output.ThingIndexingConfiguration)}); err != nil { - return diag.Errorf("error 
setting thing_indexing_configuration: %s", err) + return diag.Errorf("setting thing_indexing_configuration: %s", err) } } else { d.Set("thing_indexing_configuration", nil) diff --git a/internal/service/iot/indexing_configuration_test.go b/internal/service/iot/indexing_configuration_test.go index 3c24d79e823..82b0fc9212a 100644 --- a/internal/service/iot/indexing_configuration_test.go +++ b/internal/service/iot/indexing_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -69,7 +72,7 @@ func testAccIndexingConfiguration_allAttributes(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.#", "1"), resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.custom_field.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "thing_group_indexing_configuration.0.managed_field.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "thing_group_indexing_configuration.0.managed_field.#", 0), resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.thing_group_indexing_mode", "ON"), resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.#", "1"), resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.custom_field.#", "3"), @@ -86,7 +89,7 @@ func testAccIndexingConfiguration_allAttributes(t *testing.T) { "type": "Number", }), resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.device_defender_indexing_mode", "VIOLATIONS"), - acctest.CheckResourceAttrGreaterThanValue(resourceName, "thing_group_indexing_configuration.0.managed_field.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "thing_group_indexing_configuration.0.managed_field.#", 0), resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.named_shadow_indexing_mode", 
"ON"), resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.thing_connectivity_indexing_mode", "STATUS"), resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.thing_indexing_mode", "REGISTRY_AND_SHADOW"), diff --git a/internal/service/iot/logging_options.go b/internal/service/iot/logging_options.go index 5b630dc018c..7169d9db77f 100644 --- a/internal/service/iot/logging_options.go +++ b/internal/service/iot/logging_options.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -41,7 +44,7 @@ func ResourceLoggingOptions() *schema.Resource { } func resourceLoggingOptionsPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := &iot.SetV2LoggingOptionsInput{} @@ -74,7 +77,7 @@ func resourceLoggingOptionsPut(ctx context.Context, d *schema.ResourceData, meta } func resourceLoggingOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := conn.GetV2LoggingOptionsWithContext(ctx, &iot.GetV2LoggingOptionsInput{}) diff --git a/internal/service/iot/logging_options_test.go b/internal/service/iot/logging_options_test.go index fd87b2f5ee6..b2db821362e 100644 --- a/internal/service/iot/logging_options_test.go +++ b/internal/service/iot/logging_options_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( diff --git a/internal/service/iot/policy.go b/internal/service/iot/policy.go index 3411e85fb8a..e9df2ab5d0b 100644 --- a/internal/service/iot/policy.go +++ b/internal/service/iot/policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -58,7 +61,7 @@ func ResourcePolicy() *schema.Resource { func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { @@ -81,7 +84,7 @@ func resourcePolicyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) out, err := conn.GetPolicyWithContext(ctx, &iot.GetPolicyInput{ PolicyName: aws.String(d.Id()), @@ -113,7 +116,7 @@ func resourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChange("policy") { policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -137,7 +140,7 @@ func resourcePolicyUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) out, err := conn.ListPolicyVersionsWithContext(ctx, &iot.ListPolicyVersionsInput{ PolicyName: aws.String(d.Id()), diff --git a/internal/service/iot/policy_attachment.go b/internal/service/iot/policy_attachment.go index 8f465704559..73211319818 100644 --- a/internal/service/iot/policy_attachment.go +++ b/internal/service/iot/policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -37,7 +40,7 @@ func ResourcePolicyAttachment() *schema.Resource { func resourcePolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) policyName := d.Get("policy").(string) target := d.Get("target").(string) @@ -97,7 +100,7 @@ func GetPolicyAttachment(ctx context.Context, conn *iot.IoT, target, policyName func resourcePolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) policyName := d.Get("policy").(string) target := d.Get("target").(string) @@ -121,7 +124,7 @@ func resourcePolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, m func resourcePolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) policyName := d.Get("policy").(string) target := d.Get("target").(string) diff --git a/internal/service/iot/policy_attachment_test.go b/internal/service/iot/policy_attachment_test.go index cdb57fa4108..8c8713074a5 100644 --- a/internal/service/iot/policy_attachment_test.go +++ b/internal/service/iot/policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -64,7 +67,7 @@ func TestAccIoTPolicyAttachment_basic(t *testing.T) { func testAccCheckPolicyAttchmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_policy_attachment" { continue @@ -117,7 +120,7 @@ func testAccCheckPolicyAttachmentExists(ctx context.Context, n string) resource. return fmt.Errorf("No policy name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) target := rs.Primary.Attributes["target"] policyName := rs.Primary.Attributes["policy"] @@ -137,7 +140,7 @@ func testAccCheckPolicyAttachmentExists(ctx context.Context, n string) resource. func testAccCheckPolicyAttachmentCertStatus(ctx context.Context, n string, policies []string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) rs, ok := s.RootModule().Resources[n] diff --git a/internal/service/iot/policy_test.go b/internal/service/iot/policy_test.go index 8b4d89ddb6b..531a0b52360 100644 --- a/internal/service/iot/policy_test.go +++ b/internal/service/iot/policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -73,7 +76,7 @@ func TestAccIoTPolicy_disappears(t *testing.T) { func testAccCheckPolicyDestroy_basic(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_policy" { @@ -115,7 +118,7 @@ func testAccCheckPolicyExists(ctx context.Context, n string, v *iot.GetPolicyOut return fmt.Errorf("No IoT Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) resp, err := conn.GetPolicyWithContext(ctx, &iot.GetPolicyInput{ PolicyName: aws.String(rs.Primary.ID), diff --git a/internal/service/iot/provisioning_template.go b/internal/service/iot/provisioning_template.go index 2191e43ee9e..82f6e561518 100644 --- a/internal/service/iot/provisioning_template.go +++ b/internal/service/iot/provisioning_template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -112,12 +115,12 @@ func ResourceProvisioningTemplate() *schema.Resource { } func resourceProvisioningTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) name := d.Get("name").(string) input := &iot.CreateProvisioningTemplateInput{ Enabled: aws.Bool(d.Get("enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TemplateName: aws.String(name), } @@ -144,7 +147,7 @@ func resourceProvisioningTemplateCreate(ctx context.Context, d *schema.ResourceD iot.ErrCodeInvalidRequestException, "The provisioning role cannot be assumed by AWS IoT") if err != nil { - return diag.Errorf("error creating IoT Provisioning Template (%s): %s", name, err) + return diag.Errorf("creating IoT Provisioning Template (%s): %s", name, err) } d.SetId(aws.StringValue(outputRaw.(*iot.CreateProvisioningTemplateOutput).TemplateName)) @@ -153,7 +156,7 @@ func resourceProvisioningTemplateCreate(ctx context.Context, d *schema.ResourceD } func resourceProvisioningTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := FindProvisioningTemplateByName(ctx, conn, d.Id()) @@ -164,7 +167,7 @@ func resourceProvisioningTemplateRead(ctx context.Context, d *schema.ResourceDat } if err != nil { - return diag.Errorf("error reading IoT Provisioning Template (%s): %s", d.Id(), err) + return diag.Errorf("reading IoT Provisioning Template (%s): %s", d.Id(), err) } d.Set("arn", output.TemplateArn) @@ -174,7 +177,7 @@ func resourceProvisioningTemplateRead(ctx context.Context, d *schema.ResourceDat d.Set("name", output.TemplateName) if output.PreProvisioningHook != nil { if err := d.Set("pre_provisioning_hook", 
[]interface{}{flattenProvisioningHook(output.PreProvisioningHook)}); err != nil { - return diag.Errorf("error setting pre_provisioning_hook: %s", err) + return diag.Errorf("setting pre_provisioning_hook: %s", err) } } else { d.Set("pre_provisioning_hook", nil) @@ -186,7 +189,7 @@ func resourceProvisioningTemplateRead(ctx context.Context, d *schema.ResourceDat } func resourceProvisioningTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChange("template_body") { input := &iot.CreateProvisioningTemplateVersionInput{ @@ -199,7 +202,7 @@ func resourceProvisioningTemplateUpdate(ctx context.Context, d *schema.ResourceD _, err := conn.CreateProvisioningTemplateVersionWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating IoT Provisioning Template (%s) version: %s", d.Id(), err) + return diag.Errorf("creating IoT Provisioning Template (%s) version: %s", d.Id(), err) } } @@ -219,7 +222,7 @@ func resourceProvisioningTemplateUpdate(ctx context.Context, d *schema.ResourceD iot.ErrCodeInvalidRequestException, "The provisioning role cannot be assumed by AWS IoT") if err != nil { - return diag.Errorf("error updating IoT Provisioning Template (%s): %s", d.Id(), err) + return diag.Errorf("updating IoT Provisioning Template (%s): %s", d.Id(), err) } } @@ -227,7 +230,7 @@ func resourceProvisioningTemplateUpdate(ctx context.Context, d *schema.ResourceD } func resourceProvisioningTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) log.Printf("[INFO] Deleting IoT Provisioning Template: %s", d.Id()) _, err := conn.DeleteProvisioningTemplateWithContext(ctx, &iot.DeleteProvisioningTemplateInput{ @@ -239,7 +242,7 @@ func resourceProvisioningTemplateDelete(ctx context.Context, d 
*schema.ResourceD } if err != nil { - return diag.Errorf("error deleting IoT Provisioning Template (%s): %s", d.Id(), err) + return diag.Errorf("deleting IoT Provisioning Template (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/iot/provisioning_template_test.go b/internal/service/iot/provisioning_template_test.go index 1c1d206e201..74712b31b7e 100644 --- a/internal/service/iot/provisioning_template_test.go +++ b/internal/service/iot/provisioning_template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -183,7 +186,7 @@ func testAccCheckProvisioningTemplateExists(ctx context.Context, n string) resou return fmt.Errorf("No IoT Provisioning Template ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) _, err := tfiot.FindProvisioningTemplateByName(ctx, conn, rs.Primary.ID) @@ -193,7 +196,7 @@ func testAccCheckProvisioningTemplateExists(ctx context.Context, n string) resou func testAccCheckProvisioningTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_provisioning_template" { @@ -219,7 +222,7 @@ func testAccCheckProvisioningTemplateDestroy(ctx context.Context) resource.TestC func testAccCheckProvisioningTemplateNumVersions(ctx context.Context, name string, want int) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) var got int err := conn.ListProvisioningTemplateVersionsPagesWithContext(ctx, &iot.ListProvisioningTemplateVersionsInput{TemplateName: aws.String(name)}, diff --git 
a/internal/service/iot/role_alias.go b/internal/service/iot/role_alias.go index 8dd1925a5d4..3430b5c17fd 100644 --- a/internal/service/iot/role_alias.go +++ b/internal/service/iot/role_alias.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -49,7 +52,7 @@ func ResourceRoleAlias() *schema.Resource { func resourceRoleAliasCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) roleAlias := d.Get("alias").(string) roleArn := d.Get("role_arn").(string) @@ -87,7 +90,7 @@ func GetRoleAliasDescription(ctx context.Context, conn *iot.IoT, alias string) ( func resourceRoleAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) var roleAliasDescription *iot.RoleAliasDescription @@ -113,7 +116,7 @@ func resourceRoleAliasRead(ctx context.Context, d *schema.ResourceData, meta int func resourceRoleAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) alias := d.Get("alias").(string) @@ -130,7 +133,7 @@ func resourceRoleAliasDelete(ctx context.Context, d *schema.ResourceData, meta i func resourceRoleAliasUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChange("credential_duration") { roleAliasInput := &iot.UpdateRoleAliasInput{ diff --git a/internal/service/iot/role_alias_test.go b/internal/service/iot/role_alias_test.go index f05034063ce..1428f95212c 100644 --- 
a/internal/service/iot/role_alias_test.go +++ b/internal/service/iot/role_alias_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -82,7 +85,7 @@ func TestAccIoTRoleAlias_basic(t *testing.T) { func testAccCheckRoleAliasDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_role_alias" { continue @@ -111,7 +114,7 @@ func testAccCheckRoleAliasExists(ctx context.Context, n string) resource.TestChe return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) role_arn := rs.Primary.Attributes["role_arn"] roleAliasDescription, err := tfiot.GetRoleAliasDescription(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iot/service_package_gen.go b/internal/service/iot/service_package_gen.go index ac83d9f3b8a..2577b9f4945 100644 --- a/internal/service/iot/service_package_gen.go +++ b/internal/service/iot/service_package_gen.go @@ -5,6 +5,10 @@ package iot import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + iot_sdkv1 "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -113,4 +117,13 @@ func (p *servicePackage) ServicePackageName() string { return names.IoT } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*iot_sdkv1.IoT, error) { + sess := config["session"].(*session_sdkv1.Session) + + return iot_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/iot/sweep.go b/internal/service/iot/sweep.go index c51e5253cd9..7b2c401df50 100644 --- a/internal/service/iot/sweep.go +++ b/internal/service/iot/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -80,13 +82,13 @@ func init() { func sweepCertifcates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -113,7 +115,7 @@ func sweepCertifcates(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Certificate for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Certificate for %s: %w", region, err)) } @@ -127,13 +129,13 @@ func sweepCertifcates(region string) error { func sweepPolicyAttachments(region string) error { ctx := 
sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -180,7 +182,7 @@ func sweepPolicyAttachments(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Policy Attachment for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Policy Attachment for %s: %w", region, err)) } @@ -194,13 +196,13 @@ func sweepPolicyAttachments(region string) error { func sweepPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -227,7 +229,7 @@ func sweepPolicies(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Policy for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Policy for %s: %w", region, err)) } @@ -241,13 +243,13 @@ func sweepPolicies(region string) error { func sweepRoleAliases(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting 
client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -274,7 +276,7 @@ func sweepRoleAliases(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Role Alias for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Role Alias for %s: %w", region, err)) } @@ -288,13 +290,13 @@ func sweepRoleAliases(region string) error { func sweepThingPrincipalAttachments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -341,7 +343,7 @@ func sweepThingPrincipalAttachments(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Thing Principal Attachment for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Thing Principal Attachment for %s: %w", region, err)) } @@ -355,13 +357,13 @@ func sweepThingPrincipalAttachments(region string) error { func sweepThings(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := 
make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -388,7 +390,7 @@ func sweepThings(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Thing for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Thing for %s: %w", region, err)) } @@ -402,13 +404,13 @@ func sweepThings(region string) error { func sweepThingTypes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -435,7 +437,7 @@ func sweepThingTypes(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing IoT Thing Type for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping IoT Thing Type for %s: %w", region, err)) } @@ -449,11 +451,11 @@ func sweepThingTypes(region string) error { func sweepTopicRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) input := &iot.ListTopicRulesInput{} var sweeperErrs *multierror.Error @@ -497,11 +499,11 @@ func sweepTopicRules(region string) error { func sweepThingGroups(region string) error { ctx := sweep.Context(region) - client, err := 
sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) input := &iot.ListThingGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -530,7 +532,7 @@ func sweepThingGroups(region string) error { return fmt.Errorf("error listing IoT Thing Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping IoT Thing Groups (%s): %w", region, err) @@ -541,11 +543,11 @@ func sweepThingGroups(region string) error { func sweepTopicRuleDestinations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).IoTConn() + conn := client.IoTConn(ctx) input := &iot.ListTopicRuleDestinationsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -574,7 +576,7 @@ func sweepTopicRuleDestinations(region string) error { return fmt.Errorf("error listing IoT Topic Rule Destinations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping IoT Topic Rule Destinations (%s): %w", region, err) diff --git a/internal/service/iot/tags_gen.go b/internal/service/iot/tags_gen.go index 3769a186ad9..15e813f9ce7 100644 --- a/internal/service/iot/tags_gen.go +++ b/internal/service/iot/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists iot service tags. +// listTags lists iot service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn iotiface.IoTAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn iotiface.IoTAPI, identifier string) (tftags.KeyValueTags, error) { input := &iot.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn iotiface.IoTAPI, identifier string) (tft // ListTags lists iot service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).IoTConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).IoTConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*iot.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns iot service tags from Context. +// getTagsIn returns iot service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*iot.Tag { +func getTagsIn(ctx context.Context) []*iot.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*iot.Tag { return nil } -// SetTagsOut sets iot service tags in Context. -func SetTagsOut(ctx context.Context, tags []*iot.Tag) { +// setTagsOut sets iot service tags in Context. +func setTagsOut(ctx context.Context, tags []*iot.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates iot service tags. +// updateTags updates iot service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn iotiface.IoTAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn iotiface.IoTAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn iotiface.IoTAPI, identifier string, ol // UpdateTags updates iot service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).IoTConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).IoTConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/iot/thing.go b/internal/service/iot/thing.go index 34e19b2854e..101ee701d70 100644 --- a/internal/service/iot/thing.go +++ b/internal/service/iot/thing.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -63,7 +66,7 @@ func ResourceThing() *schema.Resource { func resourceThingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) name := d.Get("name").(string) input := &iot.CreateThingInput{ @@ -94,7 +97,7 @@ func resourceThingCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceThingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := FindThingByName(ctx, conn, d.Id()) @@ -120,7 +123,7 @@ func resourceThingRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceThingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := &iot.UpdateThingInput{ ThingName: aws.String(d.Get("name").(string)), @@ -158,7 +161,7 @@ func resourceThingUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceThingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) log.Printf("[DEBUG] Deleting IoT Thing: %s", d.Id()) _, err := conn.DeleteThingWithContext(ctx, &iot.DeleteThingInput{ diff --git a/internal/service/iot/thing_group.go b/internal/service/iot/thing_group.go index dbc3bfabdec..d1505a5b2b6 100644 --- a/internal/service/iot/thing_group.go +++ b/internal/service/iot/thing_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -127,11 +130,11 @@ const ( func resourceThingGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) name := d.Get("name").(string) input := &iot.CreateThingGroupInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), ThingGroupName: aws.String(name), } @@ -156,7 +159,7 @@ func resourceThingGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceThingGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := FindThingGroupByName(ctx, conn, d.Id()) @@ -200,7 +203,7 @@ func resourceThingGroupRead(ctx context.Context, d *schema.ResourceData, meta in func resourceThingGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &iot.UpdateThingGroupInput{ @@ -235,7 +238,7 @@ func resourceThingGroupUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceThingGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) log.Printf("[DEBUG] Deleting IoT Thing Group: %s", d.Id()) _, err := tfresource.RetryWhen(ctx, thingGroupDeleteTimeout, diff --git a/internal/service/iot/thing_group_membership.go b/internal/service/iot/thing_group_membership.go index 822cb367e39..df611d6e663 100644 --- a/internal/service/iot/thing_group_membership.go +++ b/internal/service/iot/thing_group_membership.go @@ -1,3 +1,6 @@ +// Copyright 
(c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -49,7 +52,7 @@ func ResourceThingGroupMembership() *schema.Resource { func resourceThingGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) thingGroupName := d.Get("thing_group_name").(string) thingName := d.Get("thing_name").(string) @@ -76,7 +79,7 @@ func resourceThingGroupMembershipCreate(ctx context.Context, d *schema.ResourceD func resourceThingGroupMembershipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) thingGroupName, thingName, err := ThingGroupMembershipParseResourceID(d.Id()) @@ -104,7 +107,7 @@ func resourceThingGroupMembershipRead(ctx context.Context, d *schema.ResourceDat func resourceThingGroupMembershipDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) thingGroupName, thingName, err := ThingGroupMembershipParseResourceID(d.Id()) diff --git a/internal/service/iot/thing_group_membership_test.go b/internal/service/iot/thing_group_membership_test.go index 9f489e805ba..63f85637656 100644 --- a/internal/service/iot/thing_group_membership_test.go +++ b/internal/service/iot/thing_group_membership_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -167,7 +170,7 @@ func testAccCheckThingGroupMembershipExists(ctx context.Context, n string) resou return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) return tfiot.FindThingGroupMembership(ctx, conn, thingGroupName, thingName) } @@ -175,7 +178,7 @@ func testAccCheckThingGroupMembershipExists(ctx context.Context, n string) resou func testAccCheckThingGroupMembershipDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_thing_group_membership" { diff --git a/internal/service/iot/thing_group_test.go b/internal/service/iot/thing_group_test.go index 093210d4f6f..d11b58a4b2d 100644 --- a/internal/service/iot/thing_group_test.go +++ b/internal/service/iot/thing_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -217,7 +220,7 @@ func testAccCheckThingGroupExists(ctx context.Context, n string, v *iot.Describe return fmt.Errorf("No IoT Thing Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) output, err := tfiot.FindThingGroupByName(ctx, conn, rs.Primary.ID) @@ -233,7 +236,7 @@ func testAccCheckThingGroupExists(ctx context.Context, n string, v *iot.Describe func testAccCheckThingGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_thing_group" { diff --git a/internal/service/iot/thing_principal_attachment.go b/internal/service/iot/thing_principal_attachment.go index 7d6b900dbea..b8265a3c991 100644 --- a/internal/service/iot/thing_principal_attachment.go +++ b/internal/service/iot/thing_principal_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -38,7 +41,7 @@ func ResourceThingPrincipalAttachment() *schema.Resource { func resourceThingPrincipalAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) principal := d.Get("principal").(string) thing := d.Get("thing").(string) @@ -77,7 +80,7 @@ func GetThingPricipalAttachment(ctx context.Context, conn *iot.IoT, thing, princ func resourceThingPrincipalAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) principal := d.Get("principal").(string) thing := d.Get("thing").(string) @@ -98,7 +101,7 @@ func resourceThingPrincipalAttachmentRead(ctx context.Context, d *schema.Resourc func resourceThingPrincipalAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) principal := d.Get("principal").(string) thing := d.Get("thing").(string) diff --git a/internal/service/iot/thing_principal_attachment_test.go b/internal/service/iot/thing_principal_attachment_test.go index b14bdb9ebf3..bf6e1bd0e32 100644 --- a/internal/service/iot/thing_principal_attachment_test.go +++ b/internal/service/iot/thing_principal_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -73,7 +76,7 @@ func TestAccIoTThingPrincipalAttachment_basic(t *testing.T) { func testAccCheckThingPrincipalAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_thing_principal_attachment" { @@ -111,7 +114,7 @@ func testAccCheckThingPrincipalAttachmentExists(ctx context.Context, n string) r return fmt.Errorf("No attachment") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) thing := rs.Primary.Attributes["thing"] principal := rs.Primary.Attributes["principal"] @@ -131,7 +134,7 @@ func testAccCheckThingPrincipalAttachmentExists(ctx context.Context, n string) r func testAccCheckThingPrincipalAttachmentStatus(ctx context.Context, thingName string, exists bool, principals []string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) principalARNs := make(map[string]string) diff --git a/internal/service/iot/thing_test.go b/internal/service/iot/thing_test.go index 1db6eacfe3b..01e228d8c51 100644 --- a/internal/service/iot/thing_test.go +++ b/internal/service/iot/thing_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -125,7 +128,7 @@ func testAccCheckThingExists(ctx context.Context, n string, v *iot.DescribeThing return fmt.Errorf("No IoT Thing ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) output, err := tfiot.FindThingByName(ctx, conn, rs.Primary.ID) @@ -141,7 +144,7 @@ func testAccCheckThingExists(ctx context.Context, n string, v *iot.DescribeThing func testAccCheckThingDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_thing" { diff --git a/internal/service/iot/thing_type.go b/internal/service/iot/thing_type.go index f943c4c73be..df6331e18de 100644 --- a/internal/service/iot/thing_type.go +++ b/internal/service/iot/thing_type.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -88,10 +91,10 @@ func ResourceThingType() *schema.Resource { func resourceThingTypeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := &iot.CreateThingTypeInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), ThingTypeName: aws.String(d.Get("name").(string)), } @@ -130,7 +133,7 @@ func resourceThingTypeCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceThingTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) params := &iot.DescribeThingTypeInput{ ThingTypeName: aws.String(d.Id()), @@ -161,7 +164,7 @@ func resourceThingTypeRead(ctx context.Context, d *schema.ResourceData, meta int func resourceThingTypeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChange("deprecated") { params := &iot.DeprecateThingTypeInput{ @@ -182,7 +185,7 @@ func resourceThingTypeUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceThingTypeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) // In order to delete an IoT Thing Type, you must deprecate it first and wait // at least 5 minutes. diff --git a/internal/service/iot/thing_type_test.go b/internal/service/iot/thing_type_test.go index c9ee587adc2..feec784de12 100644 --- a/internal/service/iot/thing_type_test.go +++ b/internal/service/iot/thing_type_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -132,7 +135,7 @@ func testAccCheckThingTypeExists(ctx context.Context, name string) resource.Test return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) input := &iot.ListThingTypesInput{} output, err := conn.ListThingTypesWithContext(ctx, input) @@ -153,7 +156,7 @@ func testAccCheckThingTypeExists(ctx context.Context, name string) resource.Test func testAccCheckThingTypeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_thing_type" { diff --git a/internal/service/iot/topic_rule.go b/internal/service/iot/topic_rule.go index 1ef267b27dc..503a88120f2 100644 --- a/internal/service/iot/topic_rule.go +++ b/internal/service/iot/topic_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -1196,12 +1199,12 @@ var timestreamDimensionResource *schema.Resource = &schema.Resource{ func resourceTopicRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) ruleName := d.Get("name").(string) input := &iot.CreateTopicRuleInput{ RuleName: aws.String(ruleName), - Tags: aws.String(KeyValueTags(ctx, GetTagsIn(ctx)).URLQueryString()), + Tags: aws.String(KeyValueTags(ctx, getTagsIn(ctx)).URLQueryString()), TopicRulePayload: expandTopicRulePayload(d), } @@ -1222,7 +1225,7 @@ func resourceTopicRuleCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceTopicRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := FindTopicRuleByName(ctx, conn, d.Id()) @@ -1328,7 +1331,7 @@ func resourceTopicRuleRead(ctx context.Context, d *schema.ResourceData, meta int func resourceTopicRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &iot.ReplaceTopicRuleInput{ @@ -1348,7 +1351,7 @@ func resourceTopicRuleUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceTopicRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) log.Printf("[INFO] Deleting IoT Topic Rule: %s", d.Id()) _, err := conn.DeleteTopicRuleWithContext(ctx, &iot.DeleteTopicRuleInput{ diff --git a/internal/service/iot/topic_rule_destination.go 
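Note the topic_rule.go create path above passes tags as `aws.String(KeyValueTags(ctx, getTagsIn(ctx)).URLQueryString())`: unlike most AWS APIs, IoT's CreateTopicRule takes its tags as a single URL-query-encoded string rather than a tag list. A loose sketch of what a `URLQueryString`-style encoding does (the provider's actual implementation may differ in encoding details):

```go
package main

import (
	"fmt"
	"net/url"
)

// urlQueryString encodes a tag map as "k1=v1&k2=v2", roughly the shape the
// IoT CreateTopicRule API expects for its Tags field.
func urlQueryString(tags map[string]string) string {
	v := url.Values{}
	for k, val := range tags {
		v.Set(k, val)
	}
	// url.Values.Encode sorts keys and percent-encodes values.
	return v.Encode()
}

func main() {
	fmt.Println(urlQueryString(map[string]string{"Environment": "prod", "Team": "iot"}))
}
```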
b/internal/service/iot/topic_rule_destination.go index 00b4b3f4b25..1fbd529c34a 100644 --- a/internal/service/iot/topic_rule_destination.go +++ b/internal/service/iot/topic_rule_destination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( @@ -84,7 +87,7 @@ func ResourceTopicRuleDestination() *schema.Resource { } func resourceTopicRuleDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) input := &iot.CreateTopicRuleDestinationInput{ DestinationConfiguration: &iot.TopicRuleDestinationConfiguration{}, @@ -138,7 +141,7 @@ func resourceTopicRuleDestinationCreate(ctx context.Context, d *schema.ResourceD } func resourceTopicRuleDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) output, err := FindTopicRuleDestinationByARN(ctx, conn, d.Id()) @@ -166,7 +169,7 @@ func resourceTopicRuleDestinationRead(ctx context.Context, d *schema.ResourceDat } func resourceTopicRuleDestinationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) if d.HasChange("enabled") { input := &iot.UpdateTopicRuleDestinationInput{ @@ -195,7 +198,7 @@ func resourceTopicRuleDestinationUpdate(ctx context.Context, d *schema.ResourceD } func resourceTopicRuleDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IoTConn() + conn := meta.(*conns.AWSClient).IoTConn(ctx) log.Printf("[INFO] Deleting IoT Topic Rule Destination: %s", d.Id()) _, err := conn.DeleteTopicRuleDestinationWithContext(ctx, &iot.DeleteTopicRuleDestinationInput{ diff --git 
a/internal/service/iot/topic_rule_destination_test.go b/internal/service/iot/topic_rule_destination_test.go index 1de13d5706a..51d659ba3c0 100644 --- a/internal/service/iot/topic_rule_destination_test.go +++ b/internal/service/iot/topic_rule_destination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -123,7 +126,7 @@ func TestAccIoTTopicRuleDestination_enabled(t *testing.T) { func testAccCheckTopicRuleDestinationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_topic_rule_destination" { @@ -158,7 +161,7 @@ func testAccCheckTopicRuleDestinationExists(ctx context.Context, n string) resou return fmt.Errorf("No IoT Topic Rule Destination ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) _, err := tfiot.FindTopicRuleDestinationByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iot/topic_rule_test.go b/internal/service/iot/topic_rule_test.go index c0e5b3182ae..3fc2b6c13a6 100644 --- a/internal/service/iot/topic_rule_test.go +++ b/internal/service/iot/topic_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package iot_test import ( @@ -1798,7 +1801,7 @@ func TestAccIoTTopicRule_updateKinesisErrorAction(t *testing.T) { func testAccCheckTopicRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_iot_topic_rule" { @@ -1833,7 +1836,7 @@ func testAccCheckTopicRuleExists(ctx context.Context, n string) resource.TestChe return fmt.Errorf("No IoT Topic Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn(ctx) _, err := tfiot.FindTopicRuleByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/iot/validate.go b/internal/service/iot/validate.go index 34b69091033..b9cd9799421 100644 --- a/internal/service/iot/validate.go +++ b/internal/service/iot/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package iot import ( diff --git a/internal/service/iotanalytics/generate.go b/internal/service/iotanalytics/generate.go index 99ab36b9f42..00e14896279 100644 --- a/internal/service/iotanalytics/generate.go +++ b/internal/service/iotanalytics/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package iotanalytics diff --git a/internal/service/iotanalytics/service_package_gen.go b/internal/service/iotanalytics/service_package_gen.go index 7ec817167b6..6ef6bc9a53e 100644 --- a/internal/service/iotanalytics/service_package_gen.go +++ b/internal/service/iotanalytics/service_package_gen.go @@ -5,6 +5,10 @@ package iotanalytics import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + iotanalytics_sdkv1 "github.com/aws/aws-sdk-go/service/iotanalytics" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,4 +35,13 @@ func (p *servicePackage) ServicePackageName() string { return names.IoTAnalytics } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*iotanalytics_sdkv1.IoTAnalytics, error) { + sess := config["session"].(*session_sdkv1.Session) + + return iotanalytics_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/iotanalytics/tags_gen.go b/internal/service/iotanalytics/tags_gen.go index a0db8715ff8..5d7b44ffd5e 100644 --- a/internal/service/iotanalytics/tags_gen.go +++ b/internal/service/iotanalytics/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists iotanalytics service tags. +// listTags lists iotanalytics service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn iotanalyticsiface.IoTAnalyticsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn iotanalyticsiface.IoTAnalyticsAPI, identifier string) (tftags.KeyValueTags, error) { input := &iotanalytics.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn iotanalyticsiface.IoTAnalyticsAPI, ident // ListTags lists iotanalytics service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).IoTAnalyticsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).IoTAnalyticsConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*iotanalytics.Tag) tftags.KeyValue return tftags.New(ctx, m) } -// GetTagsIn returns iotanalytics service tags from Context. +// getTagsIn returns iotanalytics service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*iotanalytics.Tag { +func getTagsIn(ctx context.Context) []*iotanalytics.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*iotanalytics.Tag { return nil } -// SetTagsOut sets iotanalytics service tags in Context. -func SetTagsOut(ctx context.Context, tags []*iotanalytics.Tag) { +// setTagsOut sets iotanalytics service tags in Context. +func setTagsOut(ctx context.Context, tags []*iotanalytics.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates iotanalytics service tags. +// updateTags updates iotanalytics service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn iotanalyticsiface.IoTAnalyticsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn iotanalyticsiface.IoTAnalyticsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn iotanalyticsiface.IoTAnalyticsAPI, ide // UpdateTags updates iotanalytics service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).IoTAnalyticsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).IoTAnalyticsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/iotevents/generate.go b/internal/service/iotevents/generate.go index fbc341ea81d..0cb770c5da9 100644 --- a/internal/service/iotevents/generate.go +++ b/internal/service/iotevents/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
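The generated `NewConn` constructors in these service packages all copy the shared SDK v1 session with a per-service endpoint override, so the base session is never mutated. A minimal sketch of that copy-with-override semantics, using illustrative stand-in types in place of `aws.Config` and `session.Session`:

```go
package main

import "fmt"

// config and session are stand-ins for the SDK v1 types; the real code is
// sess.Copy(&aws.Config{Endpoint: aws.String(endpoint)}).
type config struct{ endpoint string }

type session struct{ cfg config }

// Copy returns a new session with non-empty overrides applied, leaving the
// shared base session untouched; NewConn relies on this so each service
// client can get its own custom endpoint.
func (s session) Copy(override config) session {
	out := s
	if override.endpoint != "" {
		out.cfg.endpoint = override.endpoint
	}
	return out
}

// newConn mirrors the generated constructor: it reads the session and the
// endpoint out of an untyped config map, as servicePackage.NewConn does.
func newConn(cfg map[string]any) session {
	sess := cfg["session"].(session)
	return sess.Copy(config{endpoint: cfg["endpoint"].(string)})
}

func main() {
	base := session{cfg: config{endpoint: "https://iotevents.us-east-1.amazonaws.com"}}
	svc := newConn(map[string]any{"session": base, "endpoint": "https://localhost:4566"})
	fmt.Println(base.cfg.endpoint) // base session is unchanged
	fmt.Println(svc.cfg.endpoint)
}
```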
package iotevents diff --git a/internal/service/iotevents/service_package_gen.go b/internal/service/iotevents/service_package_gen.go index 7c5a132a736..5586b9e6aee 100644 --- a/internal/service/iotevents/service_package_gen.go +++ b/internal/service/iotevents/service_package_gen.go @@ -5,6 +5,10 @@ package iotevents import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + iotevents_sdkv1 "github.com/aws/aws-sdk-go/service/iotevents" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,4 +35,13 @@ func (p *servicePackage) ServicePackageName() string { return names.IoTEvents } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*iotevents_sdkv1.IoTEvents, error) { + sess := config["session"].(*session_sdkv1.Session) + + return iotevents_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/iotevents/tags_gen.go b/internal/service/iotevents/tags_gen.go index 0fb3ed9284f..fdaaf6a1938 100644 --- a/internal/service/iotevents/tags_gen.go +++ b/internal/service/iotevents/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists iotevents service tags. +// listTags lists iotevents service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn ioteventsiface.IoTEventsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn ioteventsiface.IoTEventsAPI, identifier string) (tftags.KeyValueTags, error) { input := &iotevents.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn ioteventsiface.IoTEventsAPI, identifier // ListTags lists iotevents service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).IoTEventsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).IoTEventsConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*iotevents.Tag) tftags.KeyValueTag return tftags.New(ctx, m) } -// GetTagsIn returns iotevents service tags from Context. +// getTagsIn returns iotevents service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*iotevents.Tag { +func getTagsIn(ctx context.Context) []*iotevents.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*iotevents.Tag { return nil } -// SetTagsOut sets iotevents service tags in Context. -func SetTagsOut(ctx context.Context, tags []*iotevents.Tag) { +// setTagsOut sets iotevents service tags in Context. +func setTagsOut(ctx context.Context, tags []*iotevents.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates iotevents service tags. +// updateTags updates iotevents service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ioteventsiface.IoTEventsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ioteventsiface.IoTEventsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn ioteventsiface.IoTEventsAPI, identifie // UpdateTags updates iotevents service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).IoTEventsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).IoTEventsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ivs/channel.go b/internal/service/ivs/channel.go index c216195f956..79c9c06502e 100644 --- a/internal/service/ivs/channel.go +++ b/internal/service/ivs/channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivs import ( @@ -95,10 +98,10 @@ const ( ) func resourceChannelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) in := &ivs.CreateChannelInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("authorized"); ok { @@ -140,7 +143,7 @@ func resourceChannelCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) out, err := FindChannelByID(ctx, conn, d.Id()) @@ -167,7 +170,7 @@ func resourceChannelRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceChannelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) update := false @@ -220,7 +223,7 @@ func resourceChannelUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) log.Printf("[INFO] Deleting IVS Channel %s", d.Id()) diff --git a/internal/service/ivs/channel_test.go b/internal/service/ivs/channel_test.go index 462b82f87f4..af40a03b0cc 100644 --- a/internal/service/ivs/channel_test.go +++ b/internal/service/ivs/channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivs_test import ( @@ -215,7 +218,7 @@ func TestAccIVSChannel_recordingConfiguration(t *testing.T) { func testAccCheckChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ivs_channel" { @@ -253,7 +256,7 @@ func testAccCheckChannelExists(ctx context.Context, name string, channel *ivs.Ch return create.Error(names.IVS, create.ErrActionCheckingExistence, tfivs.ResNameChannel, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) output, err := tfivs.FindChannelByID(ctx, conn, rs.Primary.ID) @@ -268,7 +271,7 @@ func testAccCheckChannelExists(ctx context.Context, name string, channel *ivs.Ch } func testAccChannelPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) input := &ivs.ListChannelsInput{} _, err := conn.ListChannelsWithContext(ctx, input) diff --git a/internal/service/ivs/find.go b/internal/service/ivs/find.go index 2bc671b4a24..d427a3c4219 100644 --- a/internal/service/ivs/find.go +++ b/internal/service/ivs/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs import ( diff --git a/internal/service/ivs/generate.go b/internal/service/ivs/generate.go index d8b8b5b49ff..5d9cc8ade7f 100644 --- a/internal/service/ivs/generate.go +++ b/internal/service/ivs/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ivs diff --git a/internal/service/ivs/ivs_test.go b/internal/service/ivs/ivs_test.go index e1df85ceaca..cb4bd480e49 100644 --- a/internal/service/ivs/ivs_test.go +++ b/internal/service/ivs/ivs_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs_test import ( diff --git a/internal/service/ivs/playback_key_pair.go b/internal/service/ivs/playback_key_pair.go index a9f05f5bf93..8c2a2c33d2d 100644 --- a/internal/service/ivs/playback_key_pair.go +++ b/internal/service/ivs/playback_key_pair.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs import ( @@ -69,11 +72,11 @@ const ( ) func resourcePlaybackKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) in := &ivs.ImportPlaybackKeyPairInput{ PublicKeyMaterial: aws.String(d.Get("public_key").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("name"); ok { @@ -99,7 +102,7 @@ func resourcePlaybackKeyPairCreate(ctx context.Context, d *schema.ResourceData, } func resourcePlaybackKeyPairRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) out, err := FindPlaybackKeyPairByID(ctx, conn, d.Id()) @@ -121,7 +124,7 @@ func resourcePlaybackKeyPairRead(ctx context.Context, d *schema.ResourceData, me } func resourcePlaybackKeyPairDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn 
:= meta.(*conns.AWSClient).IVSConn(ctx) log.Printf("[INFO] Deleting IVS PlaybackKeyPair %s", d.Id()) diff --git a/internal/service/ivs/playback_key_pair_test.go b/internal/service/ivs/playback_key_pair_test.go index 12ab8773960..fffbd79fcb9 100644 --- a/internal/service/ivs/playback_key_pair_test.go +++ b/internal/service/ivs/playback_key_pair_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs_test import ( @@ -189,7 +192,7 @@ func testAccPlaybackKeyPair_disappears(t *testing.T) { func testAccCheckPlaybackKeyPairDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ivs_playback_key_pair" { @@ -225,7 +228,7 @@ func testAccCheckPlaybackKeyPairExists(ctx context.Context, name string, playbac return create.Error(names.IVS, create.ErrActionCheckingExistence, tfivs.ResNamePlaybackKeyPair, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) resp, err := tfivs.FindPlaybackKeyPairByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -239,7 +242,7 @@ func testAccCheckPlaybackKeyPairExists(ctx context.Context, name string, playbac } func testAccPlaybackKeyPairPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) input := &ivs.ListPlaybackKeyPairsInput{} _, err := conn.ListPlaybackKeyPairsWithContext(ctx, input) diff --git a/internal/service/ivs/recording_configuration.go b/internal/service/ivs/recording_configuration.go index edf753b4dc8..3d0789882c3 100644 --- a/internal/service/ivs/recording_configuration.go +++ b/internal/service/ivs/recording_configuration.go @@ 
-1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs import ( @@ -121,11 +124,11 @@ const ( ) func resourceRecordingConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) in := &ivs.CreateRecordingConfigurationInput{ DestinationConfiguration: expandDestinationConfiguration(d.Get("destination_configuration").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("name"); ok { @@ -163,7 +166,7 @@ func resourceRecordingConfigurationCreate(ctx context.Context, d *schema.Resourc } func resourceRecordingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) out, err := FindRecordingConfigurationByID(ctx, conn, d.Id()) @@ -195,7 +198,7 @@ func resourceRecordingConfigurationRead(ctx context.Context, d *schema.ResourceD } func resourceRecordingConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) log.Printf("[INFO] Deleting IVS RecordingConfiguration %s", d.Id()) diff --git a/internal/service/ivs/recording_configuration_test.go b/internal/service/ivs/recording_configuration_test.go index b6c2ff9311b..3b4d00915c0 100644 --- a/internal/service/ivs/recording_configuration_test.go +++ b/internal/service/ivs/recording_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivs_test import ( @@ -217,7 +220,7 @@ func TestAccIVSRecordingConfiguration_tags(t *testing.T) { func testAccCheckRecordingConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ivs_recording_configuration" { @@ -253,7 +256,7 @@ func testAccCheckRecordingConfigurationExists(ctx context.Context, name string, return create.Error(names.IVS, create.ErrActionCheckingExistence, tfivs.ResNameRecordingConfiguration, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) resp, err := tfivs.FindRecordingConfigurationByID(ctx, conn, rs.Primary.ID) @@ -278,7 +281,7 @@ func testAccCheckRecordingConfigurationRecreated(before, after *ivs.RecordingCon } func testAccRecordingConfigurationPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSConn(ctx) input := &ivs.ListRecordingConfigurationsInput{} _, err := conn.ListRecordingConfigurationsWithContext(ctx, input) diff --git a/internal/service/ivs/service_package_gen.go b/internal/service/ivs/service_package_gen.go index b682386bd1d..8269a7e1e1b 100644 --- a/internal/service/ivs/service_package_gen.go +++ b/internal/service/ivs/service_package_gen.go @@ -5,6 +5,10 @@ package ivs import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ivs_sdkv1 "github.com/aws/aws-sdk-go/service/ivs" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -61,4 
+65,13 @@ func (p *servicePackage) ServicePackageName() string { return names.IVS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ivs_sdkv1.IVS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ivs_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ivs/status.go b/internal/service/ivs/status.go index 184906da41d..7e3b85b9650 100644 --- a/internal/service/ivs/status.go +++ b/internal/service/ivs/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs import ( diff --git a/internal/service/ivs/stream_key_data_source.go b/internal/service/ivs/stream_key_data_source.go index 362b0765d83..7701f7527ec 100644 --- a/internal/service/ivs/stream_key_data_source.go +++ b/internal/service/ivs/stream_key_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivs import ( @@ -39,7 +42,7 @@ const ( ) func dataSourceStreamKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSConn() + conn := meta.(*conns.AWSClient).IVSConn(ctx) channelArn := d.Get("channel_arn").(string) diff --git a/internal/service/ivs/stream_key_data_source_test.go b/internal/service/ivs/stream_key_data_source_test.go index 6b1b1aa16b7..20dfddce92b 100644 --- a/internal/service/ivs/stream_key_data_source_test.go +++ b/internal/service/ivs/stream_key_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivs_test import ( diff --git a/internal/service/ivs/tags_gen.go b/internal/service/ivs/tags_gen.go index 12bbfa62935..3244fd0a6e1 100644 --- a/internal/service/ivs/tags_gen.go +++ b/internal/service/ivs/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ivs service tags. +// listTags lists ivs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn ivsiface.IVSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn ivsiface.IVSAPI, identifier string) (tftags.KeyValueTags, error) { input := &ivs.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn ivsiface.IVSAPI, identifier string) (tft // ListTags lists ivs service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).IVSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).IVSConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from ivs service tags. +// KeyValueTags creates tftags.KeyValueTags from ivs service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns ivs service tags from Context. +// getTagsIn returns ivs service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets ivs service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets ivs service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ivs service tags. +// updateTags updates ivs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ivsiface.IVSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ivsiface.IVSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn ivsiface.IVSAPI, identifier string, ol // UpdateTags updates ivs service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).IVSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).IVSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ivs/wait.go b/internal/service/ivs/wait.go index 84b5e46f650..4279c735999 100644 --- a/internal/service/ivs/wait.go +++ b/internal/service/ivs/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivs import ( diff --git a/internal/service/ivschat/find.go b/internal/service/ivschat/find.go index 596293f0825..6953a3f812a 100644 --- a/internal/service/ivschat/find.go +++ b/internal/service/ivschat/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivschat import ( diff --git a/internal/service/ivschat/generate.go b/internal/service/ivschat/generate.go index bf0da04298d..4ba5eaa332b 100644 --- a/internal/service/ivschat/generate.go +++ b/internal/service/ivschat/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsMap -UpdateTags -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ivschat diff --git a/internal/service/ivschat/logging_configuration.go b/internal/service/ivschat/logging_configuration.go index 370030a9920..3697d0cabb1 100644 --- a/internal/service/ivschat/logging_configuration.go +++ b/internal/service/ivschat/logging_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivschat import ( @@ -145,11 +148,11 @@ const ( ) func resourceLoggingConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) in := &ivschat.CreateLoggingConfigurationInput{ DestinationConfiguration: expandDestinationConfiguration(d.Get("destination_configuration").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("name"); ok { @@ -175,7 +178,7 @@ func resourceLoggingConfigurationCreate(ctx context.Context, d *schema.ResourceD } func resourceLoggingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) out, err := findLoggingConfigurationByID(ctx, conn, d.Id()) @@ -202,7 +205,7 @@ func resourceLoggingConfigurationRead(ctx context.Context, d *schema.ResourceDat } func resourceLoggingConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) update := false @@ -238,7 +241,7 @@ func resourceLoggingConfigurationUpdate(ctx context.Context, d *schema.ResourceD } func resourceLoggingConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) log.Printf("[INFO] Deleting IVSChat LoggingConfiguration %s", d.Id()) diff --git a/internal/service/ivschat/logging_configuration_test.go b/internal/service/ivschat/logging_configuration_test.go index e0ecac2c539..56bf4cb08e7 100644 --- a/internal/service/ivschat/logging_configuration_test.go +++ b/internal/service/ivschat/logging_configuration_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivschat_test import ( @@ -289,7 +292,7 @@ func TestAccIVSChatLoggingConfiguration_disappears(t *testing.T) { func testAccCheckLoggingConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ivschat_logging_configuration" { @@ -325,7 +328,7 @@ func testAccCheckLoggingConfigurationExists(ctx context.Context, name string, lo return create.Error(names.IVSChat, create.ErrActionCheckingExistence, tfivschat.ResNameLoggingConfiguration, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient(ctx) resp, err := conn.GetLoggingConfiguration(ctx, &ivschat.GetLoggingConfigurationInput{ Identifier: aws.String(rs.Primary.ID), @@ -342,7 +345,7 @@ func testAccCheckLoggingConfigurationExists(ctx context.Context, name string, lo } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient(ctx) input := &ivschat.ListLoggingConfigurationsInput{} _, err := conn.ListLoggingConfigurations(ctx, input) diff --git a/internal/service/ivschat/room.go b/internal/service/ivschat/room.go index a8edc94c89c..643a5632953 100644 --- a/internal/service/ivschat/room.go +++ b/internal/service/ivschat/room.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivschat import ( @@ -104,10 +107,10 @@ const ( ) func resourceRoomCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) in := &ivschat.CreateRoomInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("logging_configuration_identifiers"); ok { @@ -149,7 +152,7 @@ func resourceRoomCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceRoomRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) out, err := findRoomByID(ctx, conn, d.Id()) @@ -182,7 +185,7 @@ func resourceRoomRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceRoomUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) update := false @@ -233,7 +236,7 @@ func resourceRoomUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceRoomDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).IVSChatClient() + conn := meta.(*conns.AWSClient).IVSChatClient(ctx) log.Printf("[INFO] Deleting IVSChat Room %s", d.Id()) diff --git a/internal/service/ivschat/room_test.go b/internal/service/ivschat/room_test.go index 02eee85a664..31443d5d5bc 100644 --- a/internal/service/ivschat/room_test.go +++ b/internal/service/ivschat/room_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ivschat_test import ( @@ -277,7 +280,7 @@ func TestAccIVSChatRoom_update_remove_messageReviewHandler_uri(t *testing.T) { func testAccCheckRoomDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ivschat_room" { @@ -314,7 +317,7 @@ func testAccCheckRoomExists(ctx context.Context, name string, room *ivschat.GetR return create.Error(names.IVSChat, create.ErrActionCheckingExistence, tfivschat.ResNameRoom, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient(ctx) resp, err := conn.GetRoom(ctx, &ivschat.GetRoomInput{ Identifier: aws.String(rs.Primary.ID), @@ -331,7 +334,7 @@ func testAccCheckRoomExists(ctx context.Context, name string, room *ivschat.GetR } func testAccPreCheckRoom(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).IVSChatClient(ctx) input := &ivschat.ListRoomsInput{} _, err := conn.ListRooms(ctx, input) diff --git a/internal/service/ivschat/service_package_gen.go b/internal/service/ivschat/service_package_gen.go index 1a0b487eeb8..f143014048d 100644 --- a/internal/service/ivschat/service_package_gen.go +++ b/internal/service/ivschat/service_package_gen.go @@ -5,6 +5,9 @@ package ivschat import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + ivschat_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ivschat" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +51,17 @@ func (p *servicePackage) 
ServicePackageName() string { return names.IVSChat } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*ivschat_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return ivschat_sdkv2.NewFromConfig(cfg, func(o *ivschat_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = ivschat_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ivschat/status.go b/internal/service/ivschat/status.go index 2f0339e6be6..ae5d46bad19 100644 --- a/internal/service/ivschat/status.go +++ b/internal/service/ivschat/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivschat import ( diff --git a/internal/service/ivschat/tags_gen.go b/internal/service/ivschat/tags_gen.go index 4b43570e6e8..ec483e3a5f4 100644 --- a/internal/service/ivschat/tags_gen.go +++ b/internal/service/ivschat/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ivschat service tags. +// listTags lists ivschat service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *ivschat.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *ivschat.Client, identifier string) (tftags.KeyValueTags, error) { input := &ivschat.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *ivschat.Client, identifier string) (tft // ListTags lists ivschat service tags and set them in Context. 
// It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).IVSChatClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).IVSChatClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from ivschat service tags. +// KeyValueTags creates tftags.KeyValueTags from ivschat service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns ivschat service tags from Context. +// getTagsIn returns ivschat service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets ivschat service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets ivschat service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ivschat service tags. +// updateTags updates ivschat service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn *ivschat.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *ivschat.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *ivschat.Client, identifier string, ol // UpdateTags updates ivschat service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).IVSChatClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).IVSChatClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ivschat/wait.go b/internal/service/ivschat/wait.go index 48259da7845..82bb88a4d90 100644 --- a/internal/service/ivschat/wait.go +++ b/internal/service/ivschat/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ivschat import ( diff --git a/internal/service/kafka/broker_nodes_data_source.go b/internal/service/kafka/broker_nodes_data_source.go index 7d63eb09696..714ba400727 100644 --- a/internal/service/kafka/broker_nodes_data_source.go +++ b/internal/service/kafka/broker_nodes_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -64,7 +67,7 @@ func DataSourceBrokerNodes() *schema.Resource { func dataSourceBrokerNodesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) clusterARN := d.Get("cluster_arn").(string) input := &kafka.ListNodesInput{ diff --git a/internal/service/kafka/broker_nodes_data_source_test.go b/internal/service/kafka/broker_nodes_data_source_test.go index 0899333e850..018f1ba6384 100644 --- a/internal/service/kafka/broker_nodes_data_source_test.go +++ b/internal/service/kafka/broker_nodes_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( diff --git a/internal/service/kafka/cluster.go b/internal/service/kafka/cluster.go index ebe78911af5..a5edbee4bb6 100644 --- a/internal/service/kafka/cluster.go +++ b/internal/service/kafka/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -468,14 +471,14 @@ func ResourceCluster() *schema.Resource { } func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) name := d.Get("cluster_name").(string) input := &kafka.CreateClusterInput{ ClusterName: aws.String(name), KafkaVersion: aws.String(d.Get("kafka_version").(string)), NumberOfBrokerNodes: aws.Int64(int64(d.Get("number_of_broker_nodes").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("broker_node_group_info"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -528,7 +531,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) cluster, err := FindClusterByARN(ctx, conn, d.Id()) @@ -621,13 +624,13 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("zookeeper_connect_string", SortEndpointsString(aws.StringValue(cluster.ZookeeperConnectString))) d.Set("zookeeper_connect_string_tls", SortEndpointsString(aws.StringValue(cluster.ZookeeperConnectStringTls))) - SetTagsOut(ctx, cluster.Tags) + setTagsOut(ctx, cluster.Tags) return nil } func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) if d.HasChange("broker_node_group_info.0.instance_type") { input := &kafka.UpdateBrokerTypeInput{ @@ -893,7 +896,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) log.Printf("[DEBUG] Deleting MSK Cluster: %s", d.Id()) _, err := conn.DeleteClusterWithContext(ctx, &kafka.DeleteClusterInput{ @@ -1685,7 +1688,7 @@ func flattenNodeExporter(apiObject *kafka.NodeExporter) map[string]interface{} { } func refreshClusterVersion(ctx context.Context, d *schema.ResourceData, meta interface{}) error { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) cluster, err := FindClusterByARN(ctx, conn, d.Id()) diff --git a/internal/service/kafka/cluster_data_source.go b/internal/service/kafka/cluster_data_source.go index d4f6106beca..529471048c2 100644 --- a/internal/service/kafka/cluster_data_source.go +++ b/internal/service/kafka/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -79,7 +82,7 @@ func DataSourceCluster() *schema.Resource { func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig clusterName := d.Get("cluster_name").(string) diff --git a/internal/service/kafka/cluster_data_source_test.go b/internal/service/kafka/cluster_data_source_test.go index 455d86d8dfd..a766e7fe5bd 100644 --- a/internal/service/kafka/cluster_data_source_test.go +++ b/internal/service/kafka/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( diff --git a/internal/service/kafka/cluster_test.go b/internal/service/kafka/cluster_test.go index e1da19d97ff..4b7b51c7e1d 100644 --- a/internal/service/kafka/cluster_test.go +++ b/internal/service/kafka/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( @@ -1145,7 +1148,7 @@ func testAccCheckResourceAttrIsSortedCSV(resourceName, attributeName string) res func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_msk_cluster" { @@ -1180,7 +1183,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *kafka.ClusterIn return fmt.Errorf("No MSK Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) output, err := tfkafka.FindClusterByARN(ctx, conn, rs.Primary.ID) @@ -1215,7 +1218,7 @@ func testAccCheckClusterRecreated(i, j *kafka.ClusterInfo) resource.TestCheckFun } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) input := &kafka.ListClustersInput{} diff --git a/internal/service/kafka/configuration.go b/internal/service/kafka/configuration.go index 64c9165381f..a5d03fbe211 100644 --- a/internal/service/kafka/configuration.go +++ b/internal/service/kafka/configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -69,7 +72,7 @@ func ResourceConfiguration() *schema.Resource { func resourceConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) input := &kafka.CreateConfigurationInput{ Name: aws.String(d.Get("name").(string)), @@ -97,7 +100,7 @@ func resourceConfigurationCreate(ctx context.Context, d *schema.ResourceData, me func resourceConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) configurationInput := &kafka.DescribeConfigurationInput{ Arn: aws.String(d.Id()), @@ -155,7 +158,7 @@ func resourceConfigurationRead(ctx context.Context, d *schema.ResourceData, meta func resourceConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) input := &kafka.UpdateConfigurationInput{ Arn: aws.String(d.Id()), @@ -177,7 +180,7 @@ func resourceConfigurationUpdate(ctx context.Context, d *schema.ResourceData, me func resourceConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) input := &kafka.DeleteConfigurationInput{ Arn: aws.String(d.Id()), diff --git a/internal/service/kafka/configuration_data_source.go b/internal/service/kafka/configuration_data_source.go index 9499a2410bf..1107a09bcad 100644 --- a/internal/service/kafka/configuration_data_source.go +++ b/internal/service/kafka/configuration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -52,7 +55,7 @@ func DataSourceConfiguration() *schema.Resource { func dataSourceConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) listConfigurationsInput := &kafka.ListConfigurationsInput{} diff --git a/internal/service/kafka/configuration_data_source_test.go b/internal/service/kafka/configuration_data_source_test.go index 3c26d69114f..53cdf439047 100644 --- a/internal/service/kafka/configuration_data_source_test.go +++ b/internal/service/kafka/configuration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( diff --git a/internal/service/kafka/configuration_test.go b/internal/service/kafka/configuration_test.go index 0c5e374ef58..cdd2a446453 100644 --- a/internal/service/kafka/configuration_test.go +++ b/internal/service/kafka/configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( @@ -180,7 +183,7 @@ func TestAccKafkaConfiguration_serverProperties(t *testing.T) { func testAccCheckConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_msk_configuration" { @@ -221,7 +224,7 @@ func testAccCheckConfigurationExists(ctx context.Context, resourceName string, c return fmt.Errorf("Resource ID not set: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) input := &kafka.DescribeConfigurationInput{ Arn: aws.String(rs.Primary.ID), diff --git a/internal/service/kafka/enum.go b/internal/service/kafka/enum.go index d8b70b8bf07..0f9b48dd8b1 100644 --- a/internal/service/kafka/enum.go +++ b/internal/service/kafka/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka const ( diff --git a/internal/service/kafka/find.go b/internal/service/kafka/find.go index b8c56adc4dc..747d9493940 100644 --- a/internal/service/kafka/find.go +++ b/internal/service/kafka/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka import ( diff --git a/internal/service/kafka/generate.go b/internal/service/kafka/generate.go index 7f276140e4e..5278d0eba7f 100644 --- a/internal/service/kafka/generate.go +++ b/internal/service/kafka/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package kafka diff --git a/internal/service/kafka/kafka_version_data_source.go b/internal/service/kafka/kafka_version_data_source.go index d1bb7b03348..ce764f55e7a 100644 --- a/internal/service/kafka/kafka_version_data_source.go +++ b/internal/service/kafka/kafka_version_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -65,7 +68,7 @@ func findVersion(preferredVersions []interface{}, versions []*kafka.KafkaVersion func dataSourceVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) var kafkaVersions []*kafka.KafkaVersion diff --git a/internal/service/kafka/kafka_version_data_source_test.go b/internal/service/kafka/kafka_version_data_source_test.go index a8f314d5d42..3020c1fba75 100644 --- a/internal/service/kafka/kafka_version_data_source_test.go +++ b/internal/service/kafka/kafka_version_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( @@ -55,7 +58,7 @@ func TestAccKafkaKafkaVersionDataSource_preferred(t *testing.T) { } func testAccVersionPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) input := &kafka.ListKafkaVersionsInput{} diff --git a/internal/service/kafka/scram_secret_association.go b/internal/service/kafka/scram_secret_association.go index efac9f55e46..537659bd16e 100644 --- a/internal/service/kafka/scram_secret_association.go +++ b/internal/service/kafka/scram_secret_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -52,7 +55,7 @@ func ResourceScramSecretAssociation() *schema.Resource { func resourceScramSecretAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) clusterArn := d.Get("cluster_arn").(string) secretArnList := flex.ExpandStringSet(d.Get("secret_arn_list").(*schema.Set)) @@ -73,7 +76,7 @@ func resourceScramSecretAssociationCreate(ctx context.Context, d *schema.Resourc func resourceScramSecretAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) secretArnList, err := FindScramSecrets(ctx, conn, d.Id()) @@ -96,7 +99,7 @@ func resourceScramSecretAssociationRead(ctx context.Context, d *schema.ResourceD func resourceScramSecretAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) o, n := d.GetChange("secret_arn_list") oldSet, newSet := o.(*schema.Set), n.(*schema.Set) @@ -132,7 +135,7 @@ func resourceScramSecretAssociationUpdate(ctx context.Context, d *schema.Resourc func resourceScramSecretAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) secretArnList, err := FindScramSecrets(ctx, conn, d.Id()) diff --git a/internal/service/kafka/scram_secret_association_test.go b/internal/service/kafka/scram_secret_association_test.go index 9569947dc58..7508df1c3c7 100644 --- a/internal/service/kafka/scram_secret_association_test.go +++ 
b/internal/service/kafka/scram_secret_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( @@ -149,7 +152,7 @@ func testAccCheckScramSecretAssociationDestroy(ctx context.Context) resource.Tes continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) input := &kafka.ListScramSecretsInput{ ClusterArn: aws.String(rs.Primary.ID), } @@ -177,7 +180,7 @@ func testAccCheckScramSecretAssociationExists(ctx context.Context, resourceName return fmt.Errorf("No ID is set for %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) _, err := tfkafka.FindScramSecrets(ctx, conn, rs.Primary.ID) return err diff --git a/internal/service/kafka/serverless_cluster.go b/internal/service/kafka/serverless_cluster.go index e74c853ca61..347f7135d71 100644 --- a/internal/service/kafka/serverless_cluster.go +++ b/internal/service/kafka/serverless_cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( @@ -118,7 +121,7 @@ func ResourceServerlessCluster() *schema.Resource { } func resourceServerlessClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) name := d.Get("cluster_name").(string) input := &kafka.CreateClusterV2Input{ @@ -127,7 +130,7 @@ func resourceServerlessClusterCreate(ctx context.Context, d *schema.ResourceData ClientAuthentication: expandServerlessClientAuthentication(d.Get("client_authentication").([]interface{})[0].(map[string]interface{})), VpcConfigs: expandVpcConfigs(d.Get("vpc_config").([]interface{})), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating MSK Serverless Cluster: %s", input) @@ -149,7 +152,7 @@ func resourceServerlessClusterCreate(ctx context.Context, d *schema.ResourceData } func resourceServerlessClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConn() + conn := meta.(*conns.AWSClient).KafkaConn(ctx) cluster, err := FindServerlessClusterByARN(ctx, conn, d.Id()) @@ -176,7 +179,7 @@ func resourceServerlessClusterRead(ctx context.Context, d *schema.ResourceData, return diag.Errorf("setting vpc_config: %s", err) } - SetTagsOut(ctx, cluster.Tags) + setTagsOut(ctx, cluster.Tags) return nil } diff --git a/internal/service/kafka/serverless_cluster_test.go b/internal/service/kafka/serverless_cluster_test.go index 23aaba7dc97..efa0517d92a 100644 --- a/internal/service/kafka/serverless_cluster_test.go +++ b/internal/service/kafka/serverless_cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( @@ -155,7 +158,7 @@ func TestAccKafkaServerlessCluster_securityGroup(t *testing.T) { func testAccCheckServerlessClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_msk_serverless_cluster" { @@ -190,7 +193,7 @@ func testAccCheckServerlessClusterExists(ctx context.Context, n string, v *kafka return fmt.Errorf("No MSK Serverless Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) output, err := tfkafka.FindServerlessClusterByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/kafka/service_package.go b/internal/service/kafka/service_package.go new file mode 100644 index 00000000000..651653a72a5 --- /dev/null +++ b/internal/service/kafka/service_package.go @@ -0,0 +1,24 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package kafka + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + kafka_sdkv1 "github.com/aws/aws-sdk-go/service/kafka" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *kafka_sdkv1.Kafka) (*kafka_sdkv1.Kafka, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if tfawserr.ErrMessageContains(r.Error, kafka_sdkv1.ErrCodeTooManyRequestsException, "Too Many Requests") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/kafka/service_package_gen.go b/internal/service/kafka/service_package_gen.go index 9686971ebeb..0ab9223d549 100644 --- a/internal/service/kafka/service_package_gen.go +++ b/internal/service/kafka/service_package_gen.go @@ -5,6 +5,10 @@ package kafka import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kafka_sdkv1 "github.com/aws/aws-sdk-go/service/kafka" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -73,4 +77,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Kafka } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kafka_sdkv1.Kafka, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kafka_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kafka/sort.go b/internal/service/kafka/sort.go index 90ac9d9158d..13c58ff41ab 100644 --- a/internal/service/kafka/sort.go +++ b/internal/service/kafka/sort.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( diff --git a/internal/service/kafka/sort_test.go b/internal/service/kafka/sort_test.go index 19157a55c40..77040bd726a 100644 --- a/internal/service/kafka/sort_test.go +++ b/internal/service/kafka/sort_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka_test import ( diff --git a/internal/service/kafka/status.go b/internal/service/kafka/status.go index aa64974232d..63bc4548ba8 100644 --- a/internal/service/kafka/status.go +++ b/internal/service/kafka/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafka import ( diff --git a/internal/service/kafka/sweep.go b/internal/service/kafka/sweep.go index f18ae2d9397..a93ab2ded3e 100644 --- a/internal/service/kafka/sweep.go +++ b/internal/service/kafka/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafka" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -34,12 +36,12 @@ func init() { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &kafka.ListClustersV2Input{} - conn := client.(*conns.AWSClient).KafkaConn() + conn := client.KafkaConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.ListClustersV2PagesWithContext(ctx, input, func(page *kafka.ListClustersV2Output, lastPage bool) bool { @@ -67,7 +69,7 @@ func sweepClusters(region string) error { return fmt.Errorf("error listing MSK Clusters (%s): %w", region, 
err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MSK Clusters (%s): %w", region, err) @@ -78,11 +80,11 @@ func sweepClusters(region string) error { func sweepConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).KafkaConn() + conn := client.KafkaConn(ctx) sweepResources := make([]sweep.Sweepable, 0) @@ -111,7 +113,7 @@ func sweepConfigurations(region string) error { return fmt.Errorf("error listing MSK Configurations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MSK Configurations (%s): %w", region, err) diff --git a/internal/service/kafka/tags_gen.go b/internal/service/kafka/tags_gen.go index cfbbf6817bb..1da373c939a 100644 --- a/internal/service/kafka/tags_gen.go +++ b/internal/service/kafka/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from kafka service tags. +// KeyValueTags creates tftags.KeyValueTags from kafka service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns kafka service tags from Context. +// getTagsIn returns kafka service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets kafka service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets kafka service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates kafka service tags. +// updateTags updates kafka service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn kafkaiface.KafkaAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn kafkaiface.KafkaAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn kafkaiface.KafkaAPI, identifier string // UpdateTags updates kafka service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KafkaConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KafkaConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/kafka/wait.go b/internal/service/kafka/wait.go index 7dd881a38ce..56f86191904 100644 --- a/internal/service/kafka/wait.go +++ b/internal/service/kafka/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafka import ( diff --git a/internal/service/kafkaconnect/connector.go b/internal/service/kafkaconnect/connector.go index ba4e95adffd..d89f27af858 100644 --- a/internal/service/kafkaconnect/connector.go +++ b/internal/service/kafkaconnect/connector.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( @@ -383,7 +386,7 @@ func ResourceConnector() *schema.Resource { } func resourceConnectorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) name := d.Get("name").(string) input := &kafkaconnect.CreateConnectorInput{ @@ -414,7 +417,7 @@ func resourceConnectorCreate(ctx context.Context, d *schema.ResourceData, meta i output, err := conn.CreateConnectorWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating MSK Connect Connector (%s): %s", name, err) + return diag.Errorf("creating MSK Connect Connector (%s): %s", name, err) } d.SetId(aws.StringValue(output.ConnectorArn)) @@ -422,14 +425,14 @@ func resourceConnectorCreate(ctx context.Context, d *schema.ResourceData, meta i _, err = waitConnectorCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)) if err != nil { - return diag.Errorf("error waiting for MSK Connect Connector (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for MSK Connect Connector (%s) create: %s", d.Id(), err) } return resourceConnectorRead(ctx, d, meta) } func resourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) connector, err := FindConnectorByARN(ctx, conn, d.Id()) @@ -440,13 +443,13 @@ func resourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return 
diag.Errorf("error reading MSK Connect Connector (%s): %s", d.Id(), err) + return diag.Errorf("reading MSK Connect Connector (%s): %s", d.Id(), err) } d.Set("arn", connector.ConnectorArn) if connector.Capacity != nil { if err := d.Set("capacity", []interface{}{flattenCapacityDescription(connector.Capacity)}); err != nil { - return diag.Errorf("error setting capacity: %s", err) + return diag.Errorf("setting capacity: %s", err) } } else { d.Set("capacity", nil) @@ -455,21 +458,21 @@ func resourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("description", connector.ConnectorDescription) if connector.KafkaCluster != nil { if err := d.Set("kafka_cluster", []interface{}{flattenClusterDescription(connector.KafkaCluster)}); err != nil { - return diag.Errorf("error setting kafka_cluster: %s", err) + return diag.Errorf("setting kafka_cluster: %s", err) } } else { d.Set("kafka_cluster", nil) } if connector.KafkaClusterClientAuthentication != nil { if err := d.Set("kafka_cluster_client_authentication", []interface{}{flattenClusterClientAuthenticationDescription(connector.KafkaClusterClientAuthentication)}); err != nil { - return diag.Errorf("error setting kafka_cluster_client_authentication: %s", err) + return diag.Errorf("setting kafka_cluster_client_authentication: %s", err) } } else { d.Set("kafka_cluster_client_authentication", nil) } if connector.KafkaClusterEncryptionInTransit != nil { if err := d.Set("kafka_cluster_encryption_in_transit", []interface{}{flattenClusterEncryptionInTransitDescription(connector.KafkaClusterEncryptionInTransit)}); err != nil { - return diag.Errorf("error setting kafka_cluster_encryption_in_transit: %s", err) + return diag.Errorf("setting kafka_cluster_encryption_in_transit: %s", err) } } else { d.Set("kafka_cluster_encryption_in_transit", nil) @@ -477,20 +480,20 @@ func resourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("kafkaconnect_version", connector.KafkaConnectVersion) if 
connector.LogDelivery != nil { if err := d.Set("log_delivery", []interface{}{flattenLogDeliveryDescription(connector.LogDelivery)}); err != nil { - return diag.Errorf("error setting log_delivery: %s", err) + return diag.Errorf("setting log_delivery: %s", err) } } else { d.Set("log_delivery", nil) } d.Set("name", connector.ConnectorName) if err := d.Set("plugin", flattenPluginDescriptions(connector.Plugins)); err != nil { - return diag.Errorf("error setting plugin: %s", err) + return diag.Errorf("setting plugin: %s", err) } d.Set("service_execution_role_arn", connector.ServiceExecutionRoleArn) d.Set("version", connector.CurrentVersion) if connector.WorkerConfiguration != nil { if err := d.Set("worker_configuration", []interface{}{flattenWorkerConfigurationDescription(connector.WorkerConfiguration)}); err != nil { - return diag.Errorf("error setting worker_configuration: %s", err) + return diag.Errorf("setting worker_configuration: %s", err) } } else { d.Set("worker_configuration", nil) @@ -500,7 +503,7 @@ func resourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceConnectorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) input := &kafkaconnect.UpdateConnectorInput{ Capacity: expandCapacityUpdate(d.Get("capacity").([]interface{})[0].(map[string]interface{})), @@ -512,20 +515,20 @@ func resourceConnectorUpdate(ctx context.Context, d *schema.ResourceData, meta i _, err := conn.UpdateConnectorWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating MSK Connect Connector (%s): %s", d.Id(), err) + return diag.Errorf("updating MSK Connect Connector (%s): %s", d.Id(), err) } _, err = waitConnectorUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) if err != nil { - return diag.Errorf("error waiting for MSK Connect Connector (%s) update: %s", d.Id(), err) + return 
diag.Errorf("waiting for MSK Connect Connector (%s) update: %s", d.Id(), err) } return resourceConnectorRead(ctx, d, meta) } func resourceConnectorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) log.Printf("[DEBUG] Deleting MSK Connect Connector: %s", d.Id()) _, err := conn.DeleteConnectorWithContext(ctx, &kafkaconnect.DeleteConnectorInput{ @@ -537,13 +540,13 @@ func resourceConnectorDelete(ctx context.Context, d *schema.ResourceData, meta i } if err != nil { - return diag.Errorf("error deleting MSK Connect Connector (%s): %s", d.Id(), err) + return diag.Errorf("deleting MSK Connect Connector (%s): %s", d.Id(), err) } _, err = waitConnectorDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)) if err != nil { - return diag.Errorf("error waiting for MSK Connect Connector (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for MSK Connect Connector (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/kafkaconnect/connector_data_source.go b/internal/service/kafkaconnect/connector_data_source.go index 754b181e471..1fc0119027a 100644 --- a/internal/service/kafkaconnect/connector_data_source.go +++ b/internal/service/kafkaconnect/connector_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( @@ -38,7 +41,7 @@ func DataSourceConnector() *schema.Resource { } func dataSourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) name := d.Get("name") var output []*kafkaconnect.ConnectorSummary @@ -58,7 +61,7 @@ func dataSourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta i }) if err != nil { - return diag.Errorf("error listing MSK Connect Connectors: %s", err) + return diag.Errorf("listing MSK Connect Connectors: %s", err) } if len(output) == 0 || output[0] == nil { diff --git a/internal/service/kafkaconnect/connector_data_source_test.go b/internal/service/kafkaconnect/connector_data_source_test.go index 27b71c45d62..8735be1a9fd 100644 --- a/internal/service/kafkaconnect/connector_data_source_test.go +++ b/internal/service/kafkaconnect/connector_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect_test import ( diff --git a/internal/service/kafkaconnect/connector_test.go b/internal/service/kafkaconnect/connector_test.go index 2641da8be18..d129cfb683c 100644 --- a/internal/service/kafkaconnect/connector_test.go +++ b/internal/service/kafkaconnect/connector_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect_test import ( @@ -236,7 +239,7 @@ func testAccCheckConnectorExists(ctx context.Context, n string) resource.TestChe return fmt.Errorf("No MSK Connect Connector ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn(ctx) _, err := tfkafkaconnect.FindConnectorByARN(ctx, conn, rs.Primary.ID) @@ -246,7 +249,7 @@ func testAccCheckConnectorExists(ctx context.Context, n string) resource.TestChe func testAccCheckConnectorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_mskconnect_connector" { diff --git a/internal/service/kafkaconnect/custom_plugin.go b/internal/service/kafkaconnect/custom_plugin.go index 9079d0cfc28..3a9bde28808 100644 --- a/internal/service/kafkaconnect/custom_plugin.go +++ b/internal/service/kafkaconnect/custom_plugin.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( @@ -102,7 +105,7 @@ func ResourceCustomPlugin() *schema.Resource { } func resourceCustomPluginCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) name := d.Get("name").(string) input := &kafkaconnect.CreateCustomPluginInput{ @@ -119,7 +122,7 @@ func resourceCustomPluginCreate(ctx context.Context, d *schema.ResourceData, met output, err := conn.CreateCustomPluginWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating MSK Connect Custom Plugin (%s): %s", name, err) + return diag.Errorf("creating MSK Connect Custom Plugin (%s): %s", name, err) } d.SetId(aws.StringValue(output.CustomPluginArn)) @@ -127,14 +130,14 @@ func resourceCustomPluginCreate(ctx context.Context, d *schema.ResourceData, met _, err = waitCustomPluginCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)) if err != nil { - return diag.Errorf("error waiting for MSK Connect Custom Plugin (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for MSK Connect Custom Plugin (%s) create: %s", d.Id(), err) } return resourceCustomPluginRead(ctx, d, meta) } func resourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) plugin, err := FindCustomPluginByARN(ctx, conn, d.Id()) @@ -145,7 +148,7 @@ func resourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.Errorf("error reading MSK Connect Custom Plugin (%s): %s", d.Id(), err) + return diag.Errorf("reading MSK Connect Custom Plugin (%s): %s", d.Id(), err) } d.Set("arn", plugin.CustomPluginArn) @@ -158,7 +161,7 @@ func resourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, meta 
d.Set("latest_revision", plugin.LatestRevision.Revision) if plugin.LatestRevision.Location != nil { if err := d.Set("location", []interface{}{flattenCustomPluginLocationDescription(plugin.LatestRevision.Location)}); err != nil { - return diag.Errorf("error setting location: %s", err) + return diag.Errorf("setting location: %s", err) } } else { d.Set("location", nil) @@ -173,7 +176,7 @@ func resourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, meta } func resourceCustomPluginDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) log.Printf("[DEBUG] Deleting MSK Connect Custom Plugin: %s", d.Id()) _, err := conn.DeleteCustomPluginWithContext(ctx, &kafkaconnect.DeleteCustomPluginInput{ @@ -185,13 +188,13 @@ func resourceCustomPluginDelete(ctx context.Context, d *schema.ResourceData, met } if err != nil { - return diag.Errorf("error deleting MSK Connect Custom Plugin (%s): %s", d.Id(), err) + return diag.Errorf("deleting MSK Connect Custom Plugin (%s): %s", d.Id(), err) } _, err = waitCustomPluginDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)) if err != nil { - return diag.Errorf("error waiting for MSK Connect Custom Plugin (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for MSK Connect Custom Plugin (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/kafkaconnect/custom_plugin_data_source.go b/internal/service/kafkaconnect/custom_plugin_data_source.go index 7d85a8c1665..6adf69b76b6 100644 --- a/internal/service/kafkaconnect/custom_plugin_data_source.go +++ b/internal/service/kafkaconnect/custom_plugin_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( @@ -42,7 +45,7 @@ func DataSourceCustomPlugin() *schema.Resource { } func dataSourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) name := d.Get("name") var output []*kafkaconnect.CustomPluginSummary @@ -62,7 +65,7 @@ func dataSourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, met }) if err != nil { - return diag.Errorf("error listing MSK Connect Custom Plugins: %s", err) + return diag.Errorf("listing MSK Connect Custom Plugins: %s", err) } if len(output) == 0 || output[0] == nil { diff --git a/internal/service/kafkaconnect/custom_plugin_data_source_test.go b/internal/service/kafkaconnect/custom_plugin_data_source_test.go index 04bf587ef07..a1d7b98ee99 100644 --- a/internal/service/kafkaconnect/custom_plugin_data_source_test.go +++ b/internal/service/kafkaconnect/custom_plugin_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect_test import ( diff --git a/internal/service/kafkaconnect/custom_plugin_test.go b/internal/service/kafkaconnect/custom_plugin_test.go index e3907100cfc..6a0e75f0592 100644 --- a/internal/service/kafkaconnect/custom_plugin_test.go +++ b/internal/service/kafkaconnect/custom_plugin_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect_test import ( @@ -140,7 +143,7 @@ func testAccCheckCustomPluginExists(ctx context.Context, name string) resource.T return fmt.Errorf("No MSK Connect Custom Plugin ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn(ctx) _, err := tfkafkaconnect.FindCustomPluginByARN(ctx, conn, rs.Primary.ID) @@ -150,7 +153,7 @@ func testAccCheckCustomPluginExists(ctx context.Context, name string) resource.T func testAccCheckCustomPluginDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_mskconnect_custom_plugin" { diff --git a/internal/service/kafkaconnect/find.go b/internal/service/kafkaconnect/find.go index 42dfbe39497..52cab9333a0 100644 --- a/internal/service/kafkaconnect/find.go +++ b/internal/service/kafkaconnect/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( diff --git a/internal/service/kafkaconnect/generate.go b/internal/service/kafkaconnect/generate.go new file mode 100644 index 00000000000..aeb7bedb679 --- /dev/null +++ b/internal/service/kafkaconnect/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package kafkaconnect diff --git a/internal/service/kafkaconnect/service_package_gen.go b/internal/service/kafkaconnect/service_package_gen.go index 2d6bad664f1..43e36b2f4e1 100644 --- a/internal/service/kafkaconnect/service_package_gen.go +++ b/internal/service/kafkaconnect/service_package_gen.go @@ -5,6 +5,10 @@ package kafkaconnect import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kafkaconnect_sdkv1 "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -57,4 +61,13 @@ func (p *servicePackage) ServicePackageName() string { return names.KafkaConnect } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kafkaconnect_sdkv1.KafkaConnect, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kafkaconnect_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kafkaconnect/status.go b/internal/service/kafkaconnect/status.go index de025fa1b3f..acb2f5e6596 100644 --- a/internal/service/kafkaconnect/status.go +++ b/internal/service/kafkaconnect/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( diff --git a/internal/service/kafkaconnect/sweep.go b/internal/service/kafkaconnect/sweep.go index e94d5025119..71c98e99955 100644 --- a/internal/service/kafkaconnect/sweep.go +++ b/internal/service/kafkaconnect/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -31,11 +33,11 @@ func init() { func sweepConnectors(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).KafkaConnectConn() + conn := client.KafkaConnectConn(ctx) input := &kafkaconnect.ListConnectorsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -64,7 +66,7 @@ func sweepConnectors(region string) error { return fmt.Errorf("error listing MSK Connect Connectors (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MSK Connect Connectors (%s): %w", region, err) @@ -75,11 +77,11 @@ func sweepConnectors(region string) error { func sweepCustomPlugins(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).KafkaConnectConn() + conn := client.KafkaConnectConn(ctx) input := &kafkaconnect.ListCustomPluginsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -108,7 +110,7 @@ func sweepCustomPlugins(region string) error { return fmt.Errorf("error listing MSK Connect Custom Plugins (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err 
!= nil { return fmt.Errorf("error sweeping MSK Connect Custom Plugins (%s): %w", region, err) diff --git a/internal/service/kafkaconnect/wait.go b/internal/service/kafkaconnect/wait.go index 6e843d77327..944a30c5c44 100644 --- a/internal/service/kafkaconnect/wait.go +++ b/internal/service/kafkaconnect/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( diff --git a/internal/service/kafkaconnect/worker_configuration.go b/internal/service/kafkaconnect/worker_configuration.go index b12d35ac73c..6cd3b4abc65 100644 --- a/internal/service/kafkaconnect/worker_configuration.go +++ b/internal/service/kafkaconnect/worker_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( @@ -62,7 +65,7 @@ func ResourceWorkerConfiguration() *schema.Resource { } func resourceWorkerConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) name := d.Get("name").(string) input := &kafkaconnect.CreateWorkerConfigurationInput{ @@ -78,7 +81,7 @@ func resourceWorkerConfigurationCreate(ctx context.Context, d *schema.ResourceDa output, err := conn.CreateWorkerConfigurationWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating MSK Connect Worker Configuration (%s): %s", name, err) + return diag.Errorf("creating MSK Connect Worker Configuration (%s): %s", name, err) } d.SetId(aws.StringValue(output.WorkerConfigurationArn)) @@ -87,7 +90,7 @@ func resourceWorkerConfigurationCreate(ctx context.Context, d *schema.ResourceDa } func resourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) config, err := 
FindWorkerConfigurationByARN(ctx, conn, d.Id()) @@ -98,7 +101,7 @@ func resourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceData } if err != nil { - return diag.Errorf("error reading MSK Connect Worker Configuration (%s): %s", d.Id(), err) + return diag.Errorf("reading MSK Connect Worker Configuration (%s): %s", d.Id(), err) } d.Set("arn", config.WorkerConfigurationArn) diff --git a/internal/service/kafkaconnect/worker_configuration_data_source.go b/internal/service/kafkaconnect/worker_configuration_data_source.go index 7ae71c91b8f..c12ed01a76c 100644 --- a/internal/service/kafkaconnect/worker_configuration_data_source.go +++ b/internal/service/kafkaconnect/worker_configuration_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect import ( @@ -42,7 +45,7 @@ func DataSourceWorkerConfiguration() *schema.Resource { } func dataSourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KafkaConnectConn() + conn := meta.(*conns.AWSClient).KafkaConnectConn(ctx) name := d.Get("name") var output []*kafkaconnect.WorkerConfigurationSummary @@ -62,7 +65,7 @@ func dataSourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceDa }) if err != nil { - return diag.Errorf("error listing MSK Connect Worker Configurations: %s", err) + return diag.Errorf("listing MSK Connect Worker Configurations: %s", err) } if len(output) == 0 || output[0] == nil { @@ -79,7 +82,7 @@ func dataSourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceDa config, err := FindWorkerConfigurationByARN(ctx, conn, arn) if err != nil { - return diag.Errorf("error reading MSK Connect Worker Configuration (%s): %s", arn, err) + return diag.Errorf("reading MSK Connect Worker Configuration (%s): %s", arn, err) } d.SetId(aws.StringValue(config.Name)) diff --git 
a/internal/service/kafkaconnect/worker_configuration_data_source_test.go b/internal/service/kafkaconnect/worker_configuration_data_source_test.go index d40dc97c459..4d015d8fa58 100644 --- a/internal/service/kafkaconnect/worker_configuration_data_source_test.go +++ b/internal/service/kafkaconnect/worker_configuration_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect_test import ( diff --git a/internal/service/kafkaconnect/worker_configuration_test.go b/internal/service/kafkaconnect/worker_configuration_test.go index 933f49014d3..69a95721ec5 100644 --- a/internal/service/kafkaconnect/worker_configuration_test.go +++ b/internal/service/kafkaconnect/worker_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kafkaconnect_test import ( @@ -82,7 +85,7 @@ func testAccCheckWorkerConfigurationExists(ctx context.Context, n string) resour return fmt.Errorf("No MSK Connect Worker Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn(ctx) _, err := tfkafkaconnect.FindWorkerConfigurationByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/kendra/data_source.go b/internal/service/kendra/data_source.go index 1a0255d9710..76951996d9d 100644 --- a/internal/service/kendra/data_source.go +++ b/internal/service/kendra/data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -588,14 +591,14 @@ func documentAttributeValueSchema() *schema.Schema { } func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) name := d.Get("name").(string) input := &kendra.CreateDataSourceInput{ ClientToken: aws.String(id.UniqueId()), IndexId: aws.String(d.Get("index_id").(string)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: types.DataSourceType(d.Get("type").(string)), } @@ -661,7 +664,7 @@ func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id, indexId, err := DataSourceParseResourceID(d.Id()) if err != nil { @@ -714,7 +717,7 @@ func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta in } func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) if d.HasChanges("configuration", "custom_document_enrichment_configuration", "description", "language_code", "name", "role_arn", "schedule") { id, indexId, err := DataSourceParseResourceID(d.Id()) @@ -785,7 +788,7 @@ func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceDataSourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) log.Printf("[INFO] Deleting Kendra Data Source %s", d.Id()) diff --git a/internal/service/kendra/data_source_test.go 
b/internal/service/kendra/data_source_test.go index 105929fc50e..ebbf1bfe61a 100644 --- a/internal/service/kendra/data_source_test.go +++ b/internal/service/kendra/data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( @@ -1780,7 +1783,7 @@ func TestAccKendraDataSource_CustomDocumentEnrichmentConfiguration_ExtractionHoo func testAccCheckDataSourceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kendra_data_source" { @@ -1823,7 +1826,7 @@ func testAccCheckDataSourceExists(ctx context.Context, name string) resource.Tes return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) _, err = tfkendra.FindDataSourceByID(ctx, conn, id, indexId) diff --git a/internal/service/kendra/experience.go b/internal/service/kendra/experience.go index fdc4f2a9752..3cb79c1567b 100644 --- a/internal/service/kendra/experience.go +++ b/internal/service/kendra/experience.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -174,7 +177,7 @@ func ResourceExperience() *schema.Resource { } func resourceExperienceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) in := &kendra.CreateExperienceInput{ ClientToken: aws.String(id.UniqueId()), @@ -213,7 +216,7 @@ func resourceExperienceCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceExperienceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id, indexId, err := ExperienceParseResourceID(d.Id()) if err != nil { @@ -260,7 +263,7 @@ func resourceExperienceRead(ctx context.Context, d *schema.ResourceData, meta in } func resourceExperienceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id, indexId, err := ExperienceParseResourceID(d.Id()) if err != nil { @@ -302,7 +305,7 @@ func resourceExperienceUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceExperienceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) log.Printf("[INFO] Deleting Kendra Experience %s", d.Id()) diff --git a/internal/service/kendra/experience_data_source.go b/internal/service/kendra/experience_data_source.go index 4d291fc548b..b32a579bae2 100644 --- a/internal/service/kendra/experience_data_source.go +++ b/internal/service/kendra/experience_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -137,7 +140,7 @@ func DataSourceExperience() *schema.Resource { } func dataSourceExperienceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) experienceID := d.Get("experience_id").(string) indexID := d.Get("index_id").(string) diff --git a/internal/service/kendra/experience_data_source_test.go b/internal/service/kendra/experience_data_source_test.go index dca2779fe88..d61206b3d12 100644 --- a/internal/service/kendra/experience_data_source_test.go +++ b/internal/service/kendra/experience_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( diff --git a/internal/service/kendra/experience_test.go b/internal/service/kendra/experience_test.go index 964072eca29..b9679254d31 100644 --- a/internal/service/kendra/experience_test.go +++ b/internal/service/kendra/experience_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( @@ -572,7 +575,7 @@ func TestAccKendraExperience_Configuration_UserIdentityConfigurationWithContentS func testAccCheckExperienceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kendra_experience" { @@ -609,7 +612,7 @@ func testAccCheckExperienceExists(ctx context.Context, name string) resource.Tes return fmt.Errorf("No Kendra Experience is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) id, indexId, err := tfkendra.ExperienceParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/kendra/faq.go b/internal/service/kendra/faq.go index 70c7526312f..281206802da 100644 --- a/internal/service/kendra/faq.go +++ b/internal/service/kendra/faq.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -146,7 +149,7 @@ func ResourceFaq() *schema.Resource { } func resourceFaqCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) name := d.Get("name").(string) input := &kendra.CreateFaqInput{ @@ -155,7 +158,7 @@ func resourceFaqCreate(ctx context.Context, d *schema.ResourceData, meta interfa Name: aws.String(name), RoleArn: aws.String(d.Get("role_arn").(string)), S3Path: expandS3Path(d.Get("s3_path").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -208,7 +211,7 @@ func resourceFaqCreate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceFaqRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id, indexId, err := FaqParseResourceID(d.Id()) if err != nil { @@ -261,7 +264,7 @@ func resourceFaqUpdate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceFaqDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) log.Printf("[INFO] Deleting Kendra Faq %s", d.Id()) diff --git a/internal/service/kendra/faq_data_source.go b/internal/service/kendra/faq_data_source.go index 731353dc269..72c270de9af 100644 --- a/internal/service/kendra/faq_data_source.go +++ b/internal/service/kendra/faq_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -101,7 +104,7 @@ func DataSourceFaq() *schema.Resource { } func dataSourceFaqRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig id := d.Get("faq_id").(string) @@ -142,14 +145,14 @@ func dataSourceFaqRead(ctx context.Context, d *schema.ResourceData, meta interfa return diag.FromErr(err) } - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { - return diag.Errorf("error listing tags for resource (%s): %s", arn, err) + return diag.Errorf("listing tags for resource (%s): %s", arn, err) } tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } d.SetId(fmt.Sprintf("%s/%s", id, indexId)) diff --git a/internal/service/kendra/faq_data_source_test.go b/internal/service/kendra/faq_data_source_test.go index 5767378ff90..cf6413a4b6c 100644 --- a/internal/service/kendra/faq_data_source_test.go +++ b/internal/service/kendra/faq_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( diff --git a/internal/service/kendra/faq_test.go b/internal/service/kendra/faq_test.go index 608dab6b34f..cdf48e638ad 100644 --- a/internal/service/kendra/faq_test.go +++ b/internal/service/kendra/faq_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
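The `listTags` call in the faq data source hunk above feeds a chainable filter (`tags.IgnoreAWS().IgnoreConfig(...)`) before the result reaches state. A toy reimplementation of that shape — the real type lives in the provider's `internal/tags` package and is richer than this sketch:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// KeyValueTags is a toy version of the provider's tags type with the same
// chainable filtering shape.
type KeyValueTags map[string]string

// IgnoreAWS drops system tags; in the provider these are keys with the
// reserved "aws:" prefix, which AWS manages and Terraform must not diff.
func (t KeyValueTags) IgnoreAWS() KeyValueTags {
	out := KeyValueTags{}
	for k, v := range t {
		if !strings.HasPrefix(k, "aws:") {
			out[k] = v
		}
	}
	return out
}

// IgnoreConfig drops keys the user asked the provider to ignore
// (ignore_tags); modeled here as a plain key list for brevity.
func (t KeyValueTags) IgnoreConfig(ignored []string) KeyValueTags {
	skip := map[string]bool{}
	for _, k := range ignored {
		skip[k] = true
	}
	out := KeyValueTags{}
	for k, v := range t {
		if !skip[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	tags := KeyValueTags{
		"aws:cloudformation:stack-name": "demo", // system tag, filtered out
		"Environment":                   "test",
		"Team":                          "search",
	}
	kept := tags.IgnoreAWS().IgnoreConfig([]string{"Team"})
	keys := make([]string, 0, len(kept))
	for k := range kept {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	fmt.Println(keys) // prints "[Environment]"
}
```

Because each filter returns a fresh map, applying the chain twice — as the hunk above does, once before and once inside `d.Set` — is idempotent.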
+// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( @@ -257,7 +260,7 @@ func TestAccKendraFaq_disappears(t *testing.T) { func testAccCheckFaqDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kendra_faq" { @@ -300,7 +303,7 @@ func testAccCheckFaqExists(ctx context.Context, name string) resource.TestCheckF return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) _, err = tfkendra.FindFaqByID(ctx, conn, id, indexId) diff --git a/internal/service/kendra/find.go b/internal/service/kendra/find.go index 02623777198..94dbefd9aa5 100644 --- a/internal/service/kendra/find.go +++ b/internal/service/kendra/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( diff --git a/internal/service/kendra/flex.go b/internal/service/kendra/flex.go index 7e80eb972e7..c1ad37351a2 100644 --- a/internal/service/kendra/flex.go +++ b/internal/service/kendra/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( diff --git a/internal/service/kendra/generate.go b/internal/service/kendra/generate.go index f7cd4fd292f..19746cb941e 100644 --- a/internal/service/kendra/generate.go +++ b/internal/service/kendra/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -TagInIDElem=ResourceARN -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -UpdateTags -UntagInTagsElem=TagKeys +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! 
Do not add anything else to this file. package kendra diff --git a/internal/service/kendra/id.go b/internal/service/kendra/id.go index e6dafb9e064..abbbbcf9dbc 100644 --- a/internal/service/kendra/id.go +++ b/internal/service/kendra/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( diff --git a/internal/service/kendra/id_test.go b/internal/service/kendra/id_test.go index 1e99c2c2ded..d2c002bc3ad 100644 --- a/internal/service/kendra/id_test.go +++ b/internal/service/kendra/id_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( diff --git a/internal/service/kendra/index.go b/internal/service/kendra/index.go index f3fdff01026..fa2a36055df 100644 --- a/internal/service/kendra/index.go +++ b/internal/service/kendra/index.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -376,14 +379,14 @@ func ResourceIndex() *schema.Resource { } func resourceIndexCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) name := d.Get("name").(string) input := &kendra.CreateIndexInput{ ClientToken: aws.String(id.UniqueId()), Name: aws.String(name), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -426,11 +429,11 @@ func resourceIndexCreate(ctx context.Context, d *schema.ResourceData, meta inter ) if err != nil { - return diag.Errorf("error creating Kendra Index (%s): %s", name, err) + return diag.Errorf("creating Kendra Index (%s): %s", name, err) } if outputRaw == nil { - return diag.Errorf("error creating Kendra Index (%s): empty output", name) + return diag.Errorf("creating Kendra Index (%s): empty output", name) } output := 
outputRaw.(*kendra.CreateIndexOutput) @@ -439,7 +442,7 @@ func resourceIndexCreate(ctx context.Context, d *schema.ResourceData, meta inter // waiter since the status changes from CREATING to either ACTIVE or FAILED if _, err := waitIndexCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Index (%s) creation: %s", d.Id(), err) + return diag.Errorf("waiting for Index (%s) creation: %s", d.Id(), err) } callUpdateIndex := false @@ -462,7 +465,7 @@ func resourceIndexCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceIndexRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) resp, err := findIndexByID(ctx, conn, d.Id()) @@ -473,7 +476,7 @@ func resourceIndexRead(ctx context.Context, d *schema.ResourceData, meta interfa } if err != nil { - return diag.Errorf("error getting Kendra Index (%s): %s", d.Id(), err) + return diag.Errorf("getting Kendra Index (%s): %s", d.Id(), err) } arn := arn.ARN{ @@ -523,7 +526,7 @@ func resourceIndexRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceIndexUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id := d.Id() @@ -572,12 +575,12 @@ func resourceIndexUpdate(ctx context.Context, d *schema.ResourceData, meta inter ) if err != nil { - return diag.Errorf("error updating Index (%s): %s", d.Id(), err) + return diag.Errorf("updating Index (%s): %s", d.Id(), err) } // waiter since the status changes from UPDATING to either ACTIVE or FAILED if _, err := waitIndexUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Index (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Index (%s) update: 
%s", d.Id(), err) } } @@ -585,7 +588,7 @@ func resourceIndexUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceIndexDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id := d.Id() @@ -594,11 +597,11 @@ func resourceIndexDelete(ctx context.Context, d *schema.ResourceData, meta inter }) if err != nil { - return diag.Errorf("error deleting Index (%s): %s", d.Id(), err) + return diag.Errorf("deleting Index (%s): %s", d.Id(), err) } if _, err := waitIndexDeleted(ctx, conn, id, d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Index (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Index (%s) delete: %s", d.Id(), err) } return nil @@ -1065,7 +1068,7 @@ func flattenUserGroupResolutionConfiguration(userGroupResolutionConfiguration *t } values := map[string]interface{}{ - "user_group_resolution_configuration": userGroupResolutionConfiguration.UserGroupResolutionMode, + "user_group_resolution_mode": userGroupResolutionConfiguration.UserGroupResolutionMode, } return []interface{}{values} diff --git a/internal/service/kendra/index_data_source.go b/internal/service/kendra/index_data_source.go index c6562a77ab4..a3e50b1b986 100644 --- a/internal/service/kendra/index_data_source.go +++ b/internal/service/kendra/index_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -279,7 +282,7 @@ func DataSourceIndex() *schema.Resource { } func dataSourceIndexRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig id := d.Get("id").(string) @@ -287,11 +290,11 @@ func dataSourceIndexRead(ctx context.Context, d *schema.ResourceData, meta inter resp, err := findIndexByID(ctx, conn, id) if err != nil { - return diag.Errorf("error getting Kendra Index (%s): %s", id, err) + return diag.Errorf("getting Kendra Index (%s): %s", id, err) } if resp == nil { - return diag.Errorf("error getting Kendra Index (%s): empty response", id) + return diag.Errorf("getting Kendra Index (%s): empty response", id) } arn := arn.ARN{ @@ -337,14 +340,14 @@ func dataSourceIndexRead(ctx context.Context, d *schema.ResourceData, meta inter return diag.FromErr(err) } - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { - return diag.Errorf("error listing tags for resource (%s): %s", arn, err) + return diag.Errorf("listing tags for resource (%s): %s", arn, err) } tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } d.SetId(id) diff --git a/internal/service/kendra/index_data_source_test.go b/internal/service/kendra/index_data_source_test.go index f7ee576024a..1d215cbf8a3 100644 --- a/internal/service/kendra/index_data_source_test.go +++ b/internal/service/kendra/index_data_source_test.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/backup" @@ -24,10 +26,6 @@ func TestAccKendraIndexDataSource_basic(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, backup.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ - { - Config: testAccIndexDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`error getting Kendra Index`), - }, { Config: testAccIndexDataSourceConfig_userTokenJSON(rName, rName2, rName3), Check: resource.ComposeTestCheckFunc( @@ -66,12 +64,6 @@ func TestAccKendraIndexDataSource_basic(t *testing.T) { }) } -const testAccIndexDataSourceConfig_nonExistent = ` -data "aws_kendra_index" "test" { - id = "tf-acc-test-does-not-exist-kendra-id" -} -` - func testAccIndexDataSourceConfig_userTokenJSON(rName, rName2, rName3 string) string { return acctest.ConfigCompose( testAccIndexConfigBase(rName, rName2), diff --git a/internal/service/kendra/index_test.go b/internal/service/kendra/index_test.go index 3c14c3b2378..4f4271fdb2a 100644 --- a/internal/service/kendra/index_test.go +++ b/internal/service/kendra/index_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( @@ -21,7 +24,7 @@ import ( func testAccPreCheck(ctx context.Context, t *testing.T) { acctest.PreCheckPartitionHasService(t, names.KendraEndpointID) - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) input := &kendra.ListIndicesInput{} @@ -62,7 +65,7 @@ func TestAccKendraIndex_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "capacity_units.0.storage_capacity_units", "0"), resource.TestCheckResourceAttrSet(resourceName, "created_at"), resource.TestCheckResourceAttr(resourceName, "description", description), - resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "13"), + resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "14"), resource.TestCheckResourceAttr(resourceName, "edition", string(types.IndexEditionEnterpriseEdition)), resource.TestCheckResourceAttr(resourceName, "index_statistics.#", "1"), resource.TestCheckResourceAttr(resourceName, "index_statistics.0.faq_statistics.#", "1"), @@ -396,6 +399,48 @@ func TestAccKendraIndex_updateRoleARN(t *testing.T) { }) } +func TestAccKendraIndex_updateUserGroupResolutionConfigurationMode(t *testing.T) { + ctx := acctest.Context(t) + var index kendra.DescribeIndexOutput + + rName := sdkacctest.RandomWithPrefix("resource-test-terraform") + rName2 := sdkacctest.RandomWithPrefix("resource-test-terraform") + rName3 := sdkacctest.RandomWithPrefix("resource-test-terraform") + originalUserGroupResolutionMode := types.UserGroupResolutionModeAwsSso + updatedUserGroupResolutionMode := types.UserGroupResolutionModeNone + resourceName := "aws_kendra_index.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.KendraEndpointID), + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckIndexDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccIndexConfig_userGroupResolutionMode(rName, rName2, rName3, string(originalUserGroupResolutionMode)), + Check: resource.ComposeTestCheckFunc( + testAccCheckIndexExists(ctx, resourceName, &index), + resource.TestCheckResourceAttr(resourceName, "user_group_resolution_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "user_group_resolution_configuration.0.user_group_resolution_mode", string(originalUserGroupResolutionMode)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccIndexConfig_userGroupResolutionMode(rName, rName2, rName3, string(updatedUserGroupResolutionMode)), + Check: resource.ComposeTestCheckFunc( + testAccCheckIndexExists(ctx, resourceName, &index), + resource.TestCheckResourceAttr(resourceName, "user_group_resolution_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "user_group_resolution_configuration.0.user_group_resolution_mode", string(updatedUserGroupResolutionMode)), + ), + }, + }, + }) +} + func TestAccKendraIndex_addDocumentMetadataConfigurationUpdates(t *testing.T) { ctx := acctest.Context(t) var index kendra.DescribeIndexOutput @@ -420,7 +465,7 @@ func TestAccKendraIndex_addDocumentMetadataConfigurationUpdates(t *testing.T) { Config: testAccIndexConfig_documentMetadataConfigurationUpdatesBase(rName, rName2, rName3), Check: resource.ComposeTestCheckFunc( testAccCheckIndexExists(ctx, resourceName, &index), - resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "13"), + resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "14"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_authors", "type": string(types.DocumentAttributeValueTypeStringListValue), @@ 
-555,6 +600,18 @@ func TestAccKendraIndex_addDocumentMetadataConfigurationUpdates(t *testing.T) { "search.0.searchable": "false", "search.0.sortable": "false", }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ + "name": "_tenant_id", + "type": string(types.DocumentAttributeValueTypeStringValue), + "relevance.#": "1", + "relevance.0.importance": "1", + "relevance.0.values_importance_map.%": "0", + "search.#": "1", + "search.0.displayable": "false", + "search.0.facetable": "false", + "search.0.searchable": "false", + "search.0.sortable": "true", + }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_version", "type": string(types.DocumentAttributeValueTypeStringValue), @@ -590,7 +647,7 @@ func TestAccKendraIndex_addDocumentMetadataConfigurationUpdates(t *testing.T) { Config: testAccIndexConfig_documentMetadataConfigurationUpdatesAddNewMetadata(rName, rName2, rName3, authorsFacetable, longValDisplayable, stringListValSearchable, dateValSortable, stringValImportance), Check: resource.ComposeTestCheckFunc( testAccCheckIndexExists(ctx, resourceName, &index), - resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "17"), + resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "18"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_authors", "type": string(types.DocumentAttributeValueTypeStringListValue), @@ -725,6 +782,18 @@ func TestAccKendraIndex_addDocumentMetadataConfigurationUpdates(t *testing.T) { "search.0.searchable": "false", "search.0.sortable": "false", }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ + "name": "_tenant_id", + "type": 
string(types.DocumentAttributeValueTypeStringValue), + "relevance.#": "1", + "relevance.0.importance": "1", + "relevance.0.values_importance_map.%": "0", + "search.#": "1", + "search.0.displayable": "false", + "search.0.facetable": "false", + "search.0.searchable": "false", + "search.0.sortable": "true", + }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_version", "type": string(types.DocumentAttributeValueTypeStringValue), @@ -834,7 +903,7 @@ func TestAccKendraIndex_inplaceUpdateDocumentMetadataConfigurationUpdates(t *tes Config: testAccIndexConfig_documentMetadataConfigurationUpdatesAddNewMetadata(rName, rName2, rName3, originalAuthorsFacetable, originalLongValDisplayable, originalStringListValSearchable, originalDateValSortable, originalStringValImportance), Check: resource.ComposeTestCheckFunc( testAccCheckIndexExists(ctx, resourceName, &index), - resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "17"), + resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "18"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_authors", "type": string(types.DocumentAttributeValueTypeStringListValue), @@ -969,6 +1038,18 @@ func TestAccKendraIndex_inplaceUpdateDocumentMetadataConfigurationUpdates(t *tes "search.0.searchable": "false", "search.0.sortable": "false", }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ + "name": "_tenant_id", + "type": string(types.DocumentAttributeValueTypeStringValue), + "relevance.#": "1", + "relevance.0.importance": "1", + "relevance.0.values_importance_map.%": "0", + "search.#": "1", + "search.0.displayable": "false", + "search.0.facetable": "false", + "search.0.searchable": "false", + "search.0.sortable": "true", + }), 
resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_version", "type": string(types.DocumentAttributeValueTypeStringValue), @@ -1053,7 +1134,7 @@ func TestAccKendraIndex_inplaceUpdateDocumentMetadataConfigurationUpdates(t *tes Config: testAccIndexConfig_documentMetadataConfigurationUpdatesAddNewMetadata(rName, rName2, rName3, updatedAuthorsFacetable, updatedLongValDisplayable, updatedStringListValSearchable, updatedDateValSortable, updatedStringValImportance), Check: resource.ComposeTestCheckFunc( testAccCheckIndexExists(ctx, resourceName, &index), - resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "17"), + resource.TestCheckResourceAttr(resourceName, "document_metadata_configuration_updates.#", "18"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_authors", "type": string(types.DocumentAttributeValueTypeStringListValue), @@ -1188,6 +1269,18 @@ func TestAccKendraIndex_inplaceUpdateDocumentMetadataConfigurationUpdates(t *tes "search.0.searchable": "false", "search.0.sortable": "false", }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ + "name": "_tenant_id", + "type": string(types.DocumentAttributeValueTypeStringValue), + "relevance.#": "1", + "relevance.0.importance": "1", + "relevance.0.values_importance_map.%": "0", + "search.#": "1", + "search.0.displayable": "false", + "search.0.facetable": "false", + "search.0.searchable": "false", + "search.0.sortable": "true", + }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "document_metadata_configuration_updates.*", map[string]string{ "name": "_version", "type": string(types.DocumentAttributeValueTypeStringValue), @@ -1297,7 +1390,7 @@ func TestAccKendraIndex_disappears(t *testing.T) { func testAccCheckIndexDestroy(ctx context.Context) 
resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kendra_index" { @@ -1329,7 +1422,7 @@ func testAccCheckIndexExists(ctx context.Context, name string, index *kendra.Des return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) input := &kendra.DescribeIndexInput{ Id: aws.String(rs.Primary.ID), } @@ -1555,6 +1648,21 @@ resource "aws_kendra_index" "test" { `, rName3, groupAttributeField, userNameAttributeField)) } +func testAccIndexConfig_userGroupResolutionMode(rName, rName2, rName3, UserGroupResolutionMode string) string { + return acctest.ConfigCompose( + testAccIndexConfigBase(rName, rName2), + fmt.Sprintf(` +resource "aws_kendra_index" "test" { + name = %[1]q + role_arn = aws_iam_role.access_cw.arn + + user_group_resolution_configuration { + user_group_resolution_mode = %[2]q + } +} +`, rName3, UserGroupResolutionMode)) +} + func testAccIndexConfig_tags(rName, rName2, rName3, description string) string { return acctest.ConfigCompose( testAccIndexConfigBase(rName, rName2), @@ -1765,6 +1873,21 @@ resource "aws_kendra_index" "test" { } } + document_metadata_configuration_updates { + name = "_tenant_id" + type = "STRING_VALUE" + search { + displayable = false + facetable = false + searchable = false + sortable = true + } + relevance { + importance = 1 + values_importance_map = {} + } + } + document_metadata_configuration_updates { name = "_version" type = "STRING_VALUE" @@ -1973,6 +2096,21 @@ resource "aws_kendra_index" "test" { } } + document_metadata_configuration_updates { + name = "_tenant_id" + type = "STRING_VALUE" + search { + displayable = false + facetable = false + searchable = false + sortable = true + } + relevance { 
+ importance = 1 + values_importance_map = {} + } + } + document_metadata_configuration_updates { name = "_version" type = "STRING_VALUE" diff --git a/internal/service/kendra/query_suggestions_block_list.go b/internal/service/kendra/query_suggestions_block_list.go index 251ca7f290f..1635c5cec1f 100644 --- a/internal/service/kendra/query_suggestions_block_list.go +++ b/internal/service/kendra/query_suggestions_block_list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -100,7 +103,7 @@ func ResourceQuerySuggestionsBlockList() *schema.Resource { } func resourceQuerySuggestionsBlockListCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) in := &kendra.CreateQuerySuggestionsBlockListInput{ ClientToken: aws.String(id.UniqueId()), @@ -108,7 +111,7 @@ func resourceQuerySuggestionsBlockListCreate(ctx context.Context, d *schema.Reso Name: aws.String(d.Get("name").(string)), RoleArn: aws.String(d.Get("role_arn").(string)), SourceS3Path: expandSourceS3Path(d.Get("source_s3_path").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -152,7 +155,7 @@ func resourceQuerySuggestionsBlockListCreate(ctx context.Context, d *schema.Reso } func resourceQuerySuggestionsBlockListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id, indexId, err := QuerySuggestionsBlockListParseResourceID(d.Id()) if err != nil { @@ -195,7 +198,7 @@ func resourceQuerySuggestionsBlockListRead(ctx context.Context, d *schema.Resour } func resourceQuerySuggestionsBlockListUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn 
:= meta.(*conns.AWSClient).KendraClient(ctx) if d.HasChangesExcept("tags", "tags_all") { id, indexId, err := QuerySuggestionsBlockListParseResourceID(d.Id()) @@ -254,7 +257,7 @@ func resourceQuerySuggestionsBlockListUpdate(ctx context.Context, d *schema.Reso } func resourceQuerySuggestionsBlockListDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) log.Printf("[INFO] Deleting Kendra QuerySuggestionsBlockList %s", d.Id()) diff --git a/internal/service/kendra/query_suggestions_block_list_data_source.go b/internal/service/kendra/query_suggestions_block_list_data_source.go index ad370a5f82a..53091522f62 100644 --- a/internal/service/kendra/query_suggestions_block_list_data_source.go +++ b/internal/service/kendra/query_suggestions_block_list_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -98,7 +101,7 @@ func DataSourceQuerySuggestionsBlockList() *schema.Resource { } func dataSourceQuerySuggestionsBlockListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig querySuggestionsBlockListID := d.Get("query_suggestions_block_list_id").(string) @@ -134,7 +137,7 @@ func dataSourceQuerySuggestionsBlockListRead(ctx context.Context, d *schema.Reso return diag.Errorf("setting source_s3_path: %s", err) } - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return diag.Errorf("listing tags for Kendra QuerySuggestionsBlockList (%s): %s", arn, err) diff --git a/internal/service/kendra/query_suggestions_block_list_data_source_test.go b/internal/service/kendra/query_suggestions_block_list_data_source_test.go index 
d1e920e2468..32e7171022d 100644 --- a/internal/service/kendra/query_suggestions_block_list_data_source_test.go +++ b/internal/service/kendra/query_suggestions_block_list_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( diff --git a/internal/service/kendra/query_suggestions_block_list_test.go b/internal/service/kendra/query_suggestions_block_list_test.go index a28d68b2599..bf72164e8d9 100644 --- a/internal/service/kendra/query_suggestions_block_list_test.go +++ b/internal/service/kendra/query_suggestions_block_list_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( @@ -322,7 +325,7 @@ func TestAccKendraQuerySuggestionsBlockList_tags(t *testing.T) { func testAccCheckQuerySuggestionsBlockListDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kendra_query_suggestions_block_list" { @@ -367,7 +370,7 @@ func testAccCheckQuerySuggestionsBlockListExists(ctx context.Context, name strin return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) _, err = tfkendra.FindQuerySuggestionsBlockListByID(ctx, conn, id, indexId) diff --git a/internal/service/kendra/service_package_gen.go b/internal/service/kendra/service_package_gen.go index 13c05c5e86e..f4e3cc3ca9e 100644 --- a/internal/service/kendra/service_package_gen.go +++ b/internal/service/kendra/service_package_gen.go @@ -5,6 +5,9 @@ package kendra import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + kendra_sdkv2 "github.com/aws/aws-sdk-go-v2/service/kendra" + 
"github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -97,4 +100,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Kendra } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*kendra_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return kendra_sdkv2.NewFromConfig(cfg, func(o *kendra_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = kendra_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kendra/sweep.go b/internal/service/kendra/sweep.go index b92720f7282..6a7d2a9f033 100644 --- a/internal/service/kendra/sweep.go +++ b/internal/service/kendra/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/kendra" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,12 +26,12 @@ func init() { func sweepIndex(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).KendraClient() + conn := client.KendraClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &kendra.ListIndicesInput{} var errs *multierror.Error @@ -57,7 +59,7 @@ func sweepIndex(region string) error { } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Kendra Indices for %s: %w", region, err)) } diff --git a/internal/service/kendra/tags_gen.go b/internal/service/kendra/tags_gen.go index a72e9aabae0..9890eca351a 100644 --- a/internal/service/kendra/tags_gen.go +++ b/internal/service/kendra/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists kendra service tags. +// listTags lists kendra service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn *kendra.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *kendra.Client, identifier string) (tftags.KeyValueTags, error) { input := &kendra.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *kendra.Client, identifier string) (tfta // ListTags lists kendra service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KendraClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KendraClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns kendra service tags from Context. +// getTagsIn returns kendra service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets kendra service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets kendra service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates kendra service tags. +// updateTags updates kendra service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn *kendra.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *kendra.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *kendra.Client, identifier string, old // UpdateTags updates kendra service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KendraClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KendraClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/kendra/thesaurus.go b/internal/service/kendra/thesaurus.go index f37de0f62f8..a0aabe18751 100644 --- a/internal/service/kendra/thesaurus.go +++ b/internal/service/kendra/thesaurus.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -100,7 +103,7 @@ func ResourceThesaurus() *schema.Resource { } func resourceThesaurusCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) input := &kendra.CreateThesaurusInput{ ClientToken: aws.String(id.UniqueId()), @@ -108,7 +111,7 @@ func resourceThesaurusCreate(ctx context.Context, d *schema.ResourceData, meta i Name: aws.String(d.Get("name").(string)), RoleArn: aws.String(d.Get("role_arn").(string)), SourceS3Path: expandSourceS3Path(d.Get("source_s3_path").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -153,7 +156,7 @@ func resourceThesaurusCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceThesaurusRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) id, indexId, err := ThesaurusParseResourceID(d.Id()) if err != nil { @@ -196,7 +199,7 @@ func resourceThesaurusRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceThesaurusUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) if d.HasChangesExcept("tags", "tags_all") { id, indexId, err := ThesaurusParseResourceID(d.Id()) @@ -255,7 +258,7 @@ func resourceThesaurusUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceThesaurusDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) log.Printf("[INFO] Deleting Kendra Thesaurus %s", d.Id()) diff --git 
a/internal/service/kendra/thesaurus_data_source.go b/internal/service/kendra/thesaurus_data_source.go index 6a01c1b76f4..be785fb0217 100644 --- a/internal/service/kendra/thesaurus_data_source.go +++ b/internal/service/kendra/thesaurus_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra import ( @@ -105,7 +108,7 @@ func DataSourceThesaurus() *schema.Resource { } func dataSourceThesaurusRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KendraClient() + conn := meta.(*conns.AWSClient).KendraClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig thesaurusID := d.Get("thesaurus_id").(string) @@ -142,7 +145,7 @@ func dataSourceThesaurusRead(ctx context.Context, d *schema.ResourceData, meta i return diag.Errorf("setting source_s3_path: %s", err) } - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return diag.Errorf("listing tags for Kendra Thesaurus (%s): %s", arn, err) diff --git a/internal/service/kendra/thesaurus_data_source_test.go b/internal/service/kendra/thesaurus_data_source_test.go index 6e22abd2386..36a0fccd820 100644 --- a/internal/service/kendra/thesaurus_data_source_test.go +++ b/internal/service/kendra/thesaurus_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( diff --git a/internal/service/kendra/thesaurus_test.go b/internal/service/kendra/thesaurus_test.go index f20cf1a45e0..4ba360f2de0 100644 --- a/internal/service/kendra/thesaurus_test.go +++ b/internal/service/kendra/thesaurus_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kendra_test import ( @@ -323,7 +326,7 @@ func TestAccKendraThesaurus_sourceS3Path(t *testing.T) { func testAccCheckThesaurusDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kendra_thesaurus" { @@ -366,7 +369,7 @@ func testAccCheckThesaurusExists(ctx context.Context, name string) resource.Test return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).KendraClient(ctx) _, err = tfkendra.FindThesaurusByID(ctx, conn, id, indexId) diff --git a/internal/service/keyspaces/exports_test.go b/internal/service/keyspaces/exports_test.go new file mode 100644 index 00000000000..5c5e2704f9d --- /dev/null +++ b/internal/service/keyspaces/exports_test.go @@ -0,0 +1,15 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package keyspaces + +// Exports for use in tests only. 
+var ( + ResourceKeyspace = resourceKeyspace + ResourceTable = resourceTable + + FindKeyspaceByName = findKeyspaceByName + FindTableByTwoPartKey = findTableByTwoPartKey + + TableParseResourceID = tableParseResourceID +) diff --git a/internal/service/keyspaces/find.go b/internal/service/keyspaces/find.go deleted file mode 100644 index c22dcee84ea..00000000000 --- a/internal/service/keyspaces/find.go +++ /dev/null @@ -1,69 +0,0 @@ -package keyspaces - -import ( - "context" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/keyspaces" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -func FindKeyspaceByName(ctx context.Context, conn *keyspaces.Keyspaces, name string) (*keyspaces.GetKeyspaceOutput, error) { - input := keyspaces.GetKeyspaceInput{ - KeyspaceName: aws.String(name), - } - - output, err := conn.GetKeyspaceWithContext(ctx, &input) - - if tfawserr.ErrCodeEquals(err, keyspaces.ErrCodeResourceNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil { - return nil, tfresource.NewEmptyResultError(input) - } - - return output, nil -} - -func FindTableByTwoPartKey(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceName, tableName string) (*keyspaces.GetTableOutput, error) { - input := keyspaces.GetTableInput{ - KeyspaceName: aws.String(keyspaceName), - TableName: aws.String(tableName), - } - - output, err := conn.GetTableWithContext(ctx, &input) - - if tfawserr.ErrCodeEquals(err, keyspaces.ErrCodeResourceNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil { - return nil, tfresource.NewEmptyResultError(input) - } - - if status := aws.StringValue(output.Status); status 
== keyspaces.TableStatusDeleted { - return nil, &retry.NotFoundError{ - Message: status, - LastRequest: input, - } - } - - return output, nil -} diff --git a/internal/service/keyspaces/generate.go b/internal/service/keyspaces/generate.go index 6a326f63672..6bec69cd0cd 100644 --- a/internal/service/keyspaces/generate.go +++ b/internal/service/keyspaces/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags -UntagInTagsElem=Tags -UntagInNeedTagType +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsSlice -UpdateTags -UntagInTagsElem=Tags -UntagInNeedTagType +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package keyspaces diff --git a/internal/service/keyspaces/keyspace.go b/internal/service/keyspaces/keyspace.go index 29451230630..a98787a305a 100644 --- a/internal/service/keyspaces/keyspace.go +++ b/internal/service/keyspaces/keyspace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package keyspaces import ( @@ -6,13 +9,15 @@ import ( "regexp" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/keyspaces" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/keyspaces" + "github.com/aws/aws-sdk-go-v2/service/keyspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -21,7 +26,7 @@ import ( // @SDKResource("aws_keyspaces_keyspace", name="Keyspace") // @Tags(identifierAttribute="arn") -func ResourceKeyspace() *schema.Resource { +func resourceKeyspace() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceKeyspaceCreate, ReadWithoutTimeout: resourceKeyspaceRead, @@ -48,12 +53,9 @@ func ResourceKeyspace() *schema.Resource { Type: schema.TypeString, ForceNew: true, Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_]{1,47}$`), - "The name must consist of alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_]{0,47}$`), + "The name can have up to 48 characters. 
It must begin with an alpha-numeric character and can only contain alpha-numeric characters and underscores.", ), }, names.AttrTags: tftags.TagsSchema(), @@ -63,15 +65,15 @@ func ResourceKeyspace() *schema.Resource { } func resourceKeyspaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) name := d.Get("name").(string) input := &keyspaces.CreateKeyspaceInput{ KeyspaceName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } - _, err := conn.CreateKeyspaceWithContext(ctx, input) + _, err := conn.CreateKeyspace(ctx, input) if err != nil { return diag.Errorf("creating Keyspaces Keyspace (%s): %s", name, err) @@ -80,7 +82,7 @@ func resourceKeyspaceCreate(ctx context.Context, d *schema.ResourceData, meta in d.SetId(name) _, err = tfresource.RetryWhenNotFound(ctx, d.Timeout(schema.TimeoutCreate), func() (interface{}, error) { - return FindKeyspaceByName(ctx, conn, d.Id()) + return findKeyspaceByName(ctx, conn, d.Id()) }) if err != nil { @@ -91,9 +93,9 @@ func resourceKeyspaceCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceKeyspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) - keyspace, err := FindKeyspaceByName(ctx, conn, d.Id()) + keyspace, err := findKeyspaceByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Keyspaces Keyspace (%s) not found, removing from state", d.Id()) @@ -117,18 +119,18 @@ func resourceKeyspaceUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceKeyspaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) 
log.Printf("[DEBUG] Deleting Keyspaces Keyspace: (%s)", d.Id()) - _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, d.Timeout(schema.TimeoutDelete), + _, err := tfresource.RetryWhenIsAErrorMessageContains[*types.ConflictException](ctx, d.Timeout(schema.TimeoutDelete), func() (interface{}, error) { - return conn.DeleteKeyspaceWithContext(ctx, &keyspaces.DeleteKeyspaceInput{ + return conn.DeleteKeyspace(ctx, &keyspaces.DeleteKeyspaceInput{ KeyspaceName: aws.String(d.Id()), }) }, - keyspaces.ErrCodeConflictException, "a table under it is currently being created or deleted") + "a table under it is currently being created or deleted") - if tfawserr.ErrCodeEquals(err, keyspaces.ErrCodeResourceNotFoundException) { + if errs.IsA[*types.ResourceNotFoundException](err) { return nil } @@ -137,7 +139,7 @@ func resourceKeyspaceDelete(ctx context.Context, d *schema.ResourceData, meta in } _, err = tfresource.RetryUntilNotFound(ctx, d.Timeout(schema.TimeoutDelete), func() (interface{}, error) { - return FindKeyspaceByName(ctx, conn, d.Id()) + return findKeyspaceByName(ctx, conn, d.Id()) }) if err != nil { @@ -146,3 +148,28 @@ func resourceKeyspaceDelete(ctx context.Context, d *schema.ResourceData, meta in return nil } + +func findKeyspaceByName(ctx context.Context, conn *keyspaces.Client, name string) (*keyspaces.GetKeyspaceOutput, error) { + input := keyspaces.GetKeyspaceInput{ + KeyspaceName: aws.String(name), + } + + output, err := conn.GetKeyspace(ctx, &input) + + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} diff --git a/internal/service/keyspaces/keyspace_test.go b/internal/service/keyspaces/keyspace_test.go index f2367d40f84..fc283cd9c16 100644 --- a/internal/service/keyspaces/keyspace_test.go +++ 
b/internal/service/keyspaces/keyspace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package keyspaces_test import ( @@ -6,7 +9,6 @@ import ( "testing" "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/keyspaces" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -14,6 +16,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tfkeyspaces "github.com/hashicorp/terraform-provider-aws/internal/service/keyspaces" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func testAccPreCheck(t *testing.T) { @@ -27,7 +30,7 @@ func TestAccKeyspacesKeyspace_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -56,7 +59,7 @@ func TestAccKeyspacesKeyspace_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -79,7 +82,7 @@ func TestAccKeyspacesKeyspace_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, 
names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -119,7 +122,7 @@ func TestAccKeyspacesKeyspace_tags(t *testing.T) { func testAccCheckKeyspaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_keyspaces_keyspace" { @@ -154,7 +157,7 @@ func testAccCheckKeyspaceExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("No Keyspaces Keyspace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesClient(ctx) _, err := tfkeyspaces.FindKeyspaceByName(ctx, conn, rs.Primary.Attributes["name"]) diff --git a/internal/service/keyspaces/service_package_gen.go b/internal/service/keyspaces/service_package_gen.go index 1ba32817a35..0cd9bb3325d 100644 --- a/internal/service/keyspaces/service_package_gen.go +++ b/internal/service/keyspaces/service_package_gen.go @@ -5,6 +5,9 @@ package keyspaces import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + keyspaces_sdkv2 "github.com/aws/aws-sdk-go-v2/service/keyspaces" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -26,7 +29,7 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ { - Factory: ResourceKeyspace, + Factory: resourceKeyspace, TypeName: "aws_keyspaces_keyspace", Name: "Keyspace", Tags: &types.ServicePackageResourceTags{ @@ -34,7 +37,7 @@ func (p 
*servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka }, }, { - Factory: ResourceTable, + Factory: resourceTable, TypeName: "aws_keyspaces_table", Name: "Table", Tags: &types.ServicePackageResourceTags{ @@ -48,4 +51,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Keyspaces } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*keyspaces_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return keyspaces_sdkv2.NewFromConfig(cfg, func(o *keyspaces_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = keyspaces_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/keyspaces/sweep.go b/internal/service/keyspaces/sweep.go index 10e9cf3b8d0..f61e09528db 100644 --- a/internal/service/keyspaces/sweep.go +++ b/internal/service/keyspaces/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -7,10 +10,9 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/keyspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/keyspaces" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,21 +26,29 @@ func init() { func sweepKeyspaces(region string) error { // nosemgrep:ci.keyspaces-in-func-name ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).KeyspacesConn() + conn := client.KeyspacesClient(ctx) input := &keyspaces.ListKeyspacesInput{} sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListKeyspacesPagesWithContext(ctx, input, func(page *keyspaces.ListKeyspacesOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := keyspaces.NewListKeyspacesPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Keyspaces Keyspace sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing Keyspaces Keyspaces (%s): %w", region, err) } for _, v := range page.Keyspaces { - id := aws.StringValue(v.KeyspaceName) + id := aws.ToString(v.KeyspaceName) switch id { case "system_schema", "system_schema_mcs", "system", "system_multiregion_info": @@ -46,26 +56,15 @@ func sweepKeyspaces(region string) error { // nosemgrep:ci.keyspaces-in-func-nam continue } - r := ResourceKeyspace() + r := resourceKeyspace() d := r.Data(nil) d.SetId(id) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - 
- if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Keyspaces Keyspace sweep for %s: %s", region, err) - return nil - } - - if err != nil { - return fmt.Errorf("error listing Keyspaces Keyspaces (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Keyspaces Keyspaces (%s): %w", region, err) diff --git a/internal/service/keyspaces/table.go b/internal/service/keyspaces/table.go index ccdd5c30bd2..9555beb3e9c 100644 --- a/internal/service/keyspaces/table.go +++ b/internal/service/keyspaces/table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package keyspaces import ( @@ -8,15 +11,17 @@ import ( "strings" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/keyspaces" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/keyspaces" + "github.com/aws/aws-sdk-go-v2/service/keyspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -25,7 +30,7 @@ import ( // @SDKResource("aws_keyspaces_table", name="Table") // @Tags(identifierAttribute="arn") -func ResourceTable() *schema.Resource { +func resourceTable() *schema.Resource { return 
&schema.Resource{ CreateWithoutTimeout: resourceTableCreate, ReadWithoutTimeout: resourceTableRead, @@ -43,10 +48,14 @@ func ResourceTable() *schema.Resource { }, CustomizeDiff: customdiff.Sequence( - customdiff.ForceNewIfChange("schema_definition.0.column", func(_ context.Context, o, n, meta interface{}) bool { + customdiff.ForceNewIfChange("client_side_timestamps", func(_ context.Context, old, new, meta interface{}) bool { + // Client-side timestamps cannot be disabled. + return len(old.([]interface{})) == 1 && len(new.([]interface{})) == 0 + }), + customdiff.ForceNewIfChange("schema_definition.0.column", func(_ context.Context, old, new, meta interface{}) bool { // Columns can only be added. - if os, ok := o.(*schema.Set); ok { - if ns, ok := n.(*schema.Set); ok { + if os, ok := old.(*schema.Set); ok { + if ns, ok := new.(*schema.Set); ok { if del := os.Difference(ns); del.Len() > 0 { return true } @@ -76,10 +85,10 @@ func ResourceTable() *schema.Resource { ValidateFunc: validation.IntAtLeast(1), }, "throughput_mode": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateFunc: validation.StringInSlice(keyspaces.ThroughputMode_Values(), false), + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateDiagFunc: enum.Validate[types.ThroughputMode](), }, "write_capacity_units": { Type: schema.TypeInt, @@ -89,6 +98,20 @@ func ResourceTable() *schema.Resource { }, }, }, + "client_side_timestamps": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "status": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.ClientSideTimestampsStatus](), + }, + }, + }, + }, "comment": { Type: schema.TypeList, Optional: true, @@ -124,10 +147,10 @@ func ResourceTable() *schema.Resource { ValidateFunc: verify.ValidARN, }, "type": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateFunc: 
validation.StringInSlice(keyspaces.EncryptionType_Values(), false), + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateDiagFunc: enum.Validate[types.EncryptionType](), }, }, }, @@ -136,12 +159,9 @@ func ResourceTable() *schema.Resource { Type: schema.TypeString, ForceNew: true, Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_]{1,47}$`), - "The name must consist of alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_]{0,47}$`), + "The keyspace name can have up to 48 characters. It must begin with an alpha-numeric character and can only contain alpha-numeric characters and underscores.", ), }, "point_in_time_recovery": { @@ -152,10 +172,10 @@ func ResourceTable() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "status": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateFunc: validation.StringInSlice(keyspaces.PointInTimeRecoveryStatus_Values(), false), + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateDiagFunc: enum.Validate[types.PointInTimeRecoveryStatus](), }, }, }, @@ -176,19 +196,16 @@ func ResourceTable() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-z0-9][a-z0-9_]{1,47}$`), - "The name must consist of lower case alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-z0-9_]{1,48}$`), + "The column name can have up to 48 characters. 
It can only contain lowercase alpha-numeric characters and underscores.", ), }, "order_by": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(keyspaces.SortOrder_Values(), false), + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateDiagFunc: enum.Validate[types.SortOrder](), }, }, }, @@ -201,12 +218,9 @@ func ResourceTable() *schema.Resource { "name": { Type: schema.TypeString, Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-z0-9][a-z0-9_]{1,47}$`), - "The name must consist of lower case alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-z0-9_]{1,48}$`), + "The column name can have up to 48 characters. It can only contain lowercase alpha-numeric characters and underscores.", ), }, "type": { @@ -230,12 +244,9 @@ func ResourceTable() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-z0-9][a-z0-9_]{1,47}$`), - "The name must consist of lower case alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-z0-9_]{1,48}$`), + "The column name can have up to 48 characters. It can only contain lowercase alpha-numeric characters and underscores.", ), }, }, @@ -251,12 +262,9 @@ func ResourceTable() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-z0-9][a-z0-9_]{1,47}$`), - "The name must consist of lower case alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-z0-9_]{1,48}$`), + "The column name can have up to 48 characters. 
It can only contain lowercase alpha-numeric characters and underscores.", ), }, }, @@ -269,12 +277,9 @@ func ResourceTable() *schema.Resource { Type: schema.TypeString, ForceNew: true, Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 48), - validation.StringMatch( - regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_]{1,47}$`), - "The name must consist of alphanumerics and underscores.", - ), + ValidateFunc: validation.StringMatch( + regexp.MustCompile(`^[a-zA-Z0-9_]{1,48}$`), + "The table name can have up to 48 characters. It can only contain alpha-numeric characters and underscores.", ), }, names.AttrTags: tftags.TagsSchema(), @@ -286,9 +291,9 @@ func ResourceTable() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "status": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(keyspaces.TimeToLiveStatus_Values(), false), + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.TimeToLiveStatus](), }, }, }, @@ -298,27 +303,31 @@ func ResourceTable() *schema.Resource { } func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) keyspaceName := d.Get("keyspace_name").(string) tableName := d.Get("table_name").(string) - id := TableCreateResourceID(keyspaceName, tableName) + id := tableCreateResourceID(keyspaceName, tableName) input := &keyspaces.CreateTableInput{ KeyspaceName: aws.String(keyspaceName), TableName: aws.String(tableName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("capacity_specification"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { input.CapacitySpecification = expandCapacitySpecification(v.([]interface{})[0].(map[string]interface{})) } + if v, ok := d.GetOk("client_side_timestamps"); ok && len(v.([]interface{})) > 0 && 
v.([]interface{})[0] != nil { + input.ClientSideTimestamps = expandClientSideTimestamps(v.([]interface{})[0].(map[string]interface{})) + } + if v, ok := d.GetOk("comment"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { input.Comment = expandComment(v.([]interface{})[0].(map[string]interface{})) } if v, ok := d.GetOk("default_time_to_live"); ok { - input.DefaultTimeToLive = aws.Int64(int64(v.(int))) + input.DefaultTimeToLive = aws.Int32(int32(v.(int))) } if v, ok := d.GetOk("encryption_specification"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -337,7 +346,7 @@ func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta inter input.Ttl = expandTimeToLive(v.([]interface{})[0].(map[string]interface{})) } - _, err := conn.CreateTableWithContext(ctx, input) + _, err := conn.CreateTable(ctx, input) if err != nil { return diag.Errorf("creating Keyspaces Table (%s): %s", id, err) @@ -353,15 +362,15 @@ func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) - keyspaceName, tableName, err := TableParseResourceID(d.Id()) + keyspaceName, tableName, err := tableParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - table, err := FindTableByTwoPartKey(ctx, conn, keyspaceName, tableName) + table, err := findTableByTwoPartKey(ctx, conn, keyspaceName, tableName) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Keyspaces Table (%s) not found, removing from state", d.Id()) @@ -381,6 +390,13 @@ func resourceTableRead(ctx context.Context, d *schema.ResourceData, meta interfa } else { d.Set("capacity_specification", nil) } + if table.ClientSideTimestamps != nil { + if err := d.Set("client_side_timestamps", 
[]interface{}{flattenClientSideTimestamps(table.ClientSideTimestamps)}); err != nil { + return diag.Errorf("setting client_side_timestamps: %s", err) + } + } else { + d.Set("client_side_timestamps", nil) + } if table.Comment != nil { if err := d.Set("comment", []interface{}{flattenComment(table.Comment)}); err != nil { return diag.Errorf("setting comment: %s", err) @@ -424,9 +440,9 @@ func resourceTableRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) - keyspaceName, tableName, err := TableParseResourceID(d.Id()) + keyspaceName, tableName, err := tableParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) @@ -443,8 +459,7 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter TableName: aws.String(tableName), } - log.Printf("[DEBUG] Updating Keyspaces Table: %s", input) - _, err := conn.UpdateTableWithContext(ctx, input) + _, err := conn.UpdateTable(ctx, input) if err != nil { return diag.Errorf("updating Keyspaces Table (%s) CapacitySpecification: %s", d.Id(), err) @@ -456,15 +471,34 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter } } + if d.HasChange("client_side_timestamps") { + if v, ok := d.GetOk("client_side_timestamps"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input := &keyspaces.UpdateTableInput{ + ClientSideTimestamps: expandClientSideTimestamps(v.([]interface{})[0].(map[string]interface{})), + KeyspaceName: aws.String(keyspaceName), + TableName: aws.String(tableName), + } + + _, err := conn.UpdateTable(ctx, input) + + if err != nil { + return diag.Errorf("updating Keyspaces Table (%s) ClientSideTimestamps: %s", d.Id(), err) + } + + if _, err := waitTableUpdated(ctx, conn, keyspaceName, tableName, d.Timeout(schema.TimeoutUpdate)); 
err != nil { + return diag.Errorf("waiting for Keyspaces Table (%s) ClientSideTimestamps update: %s", d.Id(), err) + } + } + } + if d.HasChange("default_time_to_live") { input := &keyspaces.UpdateTableInput{ - DefaultTimeToLive: aws.Int64(int64(d.Get("default_time_to_live").(int))), + DefaultTimeToLive: aws.Int32(int32(d.Get("default_time_to_live").(int))), KeyspaceName: aws.String(keyspaceName), TableName: aws.String(tableName), } - log.Printf("[DEBUG] Updating Keyspaces Table: %s", input) - _, err := conn.UpdateTableWithContext(ctx, input) + _, err := conn.UpdateTable(ctx, input) if err != nil { return diag.Errorf("updating Keyspaces Table (%s) DefaultTimeToLive: %s", d.Id(), err) @@ -483,8 +517,7 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter TableName: aws.String(tableName), } - log.Printf("[DEBUG] Updating Keyspaces Table: %s", input) - _, err := conn.UpdateTableWithContext(ctx, input) + _, err := conn.UpdateTable(ctx, input) if err != nil { return diag.Errorf("updating Keyspaces Table (%s) EncryptionSpecification: %s", d.Id(), err) @@ -504,8 +537,7 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter TableName: aws.String(tableName), } - log.Printf("[DEBUG] Updating Keyspaces Table: %s", input) - _, err := conn.UpdateTableWithContext(ctx, input) + _, err := conn.UpdateTable(ctx, input) if err != nil { return diag.Errorf("updating Keyspaces Table (%s) PointInTimeRecovery: %s", d.Id(), err) @@ -525,8 +557,7 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter Ttl: expandTimeToLive(v.([]interface{})[0].(map[string]interface{})), } - log.Printf("[DEBUG] Updating Keyspaces Table: %s", input) - _, err := conn.UpdateTableWithContext(ctx, input) + _, err := conn.UpdateTable(ctx, input) if err != nil { return diag.Errorf("updating Keyspaces Table (%s) Ttl: %s", d.Id(), err) @@ -561,8 +592,7 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta 
inter TableName: aws.String(tableName), } - log.Printf("[DEBUG] Updating Keyspaces Table: %s", input) - _, err := conn.UpdateTableWithContext(ctx, input) + _, err := conn.UpdateTable(ctx, input) if err != nil { return diag.Errorf("updating Keyspaces Table (%s) AddColumns: %s", d.Id(), err) @@ -580,21 +610,21 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KeyspacesConn() + conn := meta.(*conns.AWSClient).KeyspacesClient(ctx) - keyspaceName, tableName, err := TableParseResourceID(d.Id()) + keyspaceName, tableName, err := tableParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) } log.Printf("[DEBUG] Deleting Keyspaces Table: (%s)", d.Id()) - _, err = conn.DeleteTableWithContext(ctx, &keyspaces.DeleteTableInput{ + _, err = conn.DeleteTable(ctx, &keyspaces.DeleteTableInput{ KeyspaceName: aws.String(keyspaceName), TableName: aws.String(tableName), }) - if tfawserr.ErrCodeEquals(err, keyspaces.ErrCodeResourceNotFoundException) { + if errs.IsA[*types.ResourceNotFoundException](err) { return nil } @@ -611,14 +641,14 @@ func resourceTableDelete(ctx context.Context, d *schema.ResourceData, meta inter const tableIDSeparator = "/" -func TableCreateResourceID(keyspaceName, tableName string) string { +func tableCreateResourceID(keyspaceName, tableName string) string { parts := []string{keyspaceName, tableName} id := strings.Join(parts, tableIDSeparator) return id } -func TableParseResourceID(id string) (string, string, error) { +func tableParseResourceID(id string) (string, string, error) { parts := strings.Split(id, tableIDSeparator) if len(parts) == 2 && parts[0] != "" && parts[1] != "" { @@ -628,9 +658,42 @@ func TableParseResourceID(id string) (string, string, error) { return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected KEYSPACE-NAME%[2]sTABLE-NAME", id, 
tableIDSeparator) } -func statusTable(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceName, tableName string) retry.StateRefreshFunc { +func findTableByTwoPartKey(ctx context.Context, conn *keyspaces.Client, keyspaceName, tableName string) (*keyspaces.GetTableOutput, error) { + input := keyspaces.GetTableInput{ + KeyspaceName: aws.String(keyspaceName), + TableName: aws.String(tableName), + } + + output, err := conn.GetTable(ctx, &input) + + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if status := output.Status; status == types.TableStatusDeleted { + return nil, &retry.NotFoundError{ + Message: string(status), + LastRequest: input, + } + } + + return output, nil +} + +func statusTable(ctx context.Context, conn *keyspaces.Client, keyspaceName, tableName string) retry.StateRefreshFunc { return func() (interface{}, string, error) { - output, err := FindTableByTwoPartKey(ctx, conn, keyspaceName, tableName) + output, err := findTableByTwoPartKey(ctx, conn, keyspaceName, tableName) if tfresource.NotFound(err) { return nil, "", nil @@ -640,14 +703,14 @@ func statusTable(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceName, t return nil, "", err } - return output, aws.StringValue(output.Status), nil + return output, string(output.Status), nil } } -func waitTableCreated(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceName, tableName string, timeout time.Duration) (*keyspaces.GetTableOutput, error) { +func waitTableCreated(ctx context.Context, conn *keyspaces.Client, keyspaceName, tableName string, timeout time.Duration) (*keyspaces.GetTableOutput, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{keyspaces.TableStatusCreating}, - Target: []string{keyspaces.TableStatusActive}, + Pending: 
enum.Slice(types.TableStatusCreating), + Target: enum.Slice(types.TableStatusActive), Refresh: statusTable(ctx, conn, keyspaceName, tableName), Timeout: timeout, } @@ -661,9 +724,9 @@ func waitTableCreated(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceNa return nil, err } -func waitTableDeleted(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceName, tableName string, timeout time.Duration) (*keyspaces.GetTableOutput, error) { +func waitTableDeleted(ctx context.Context, conn *keyspaces.Client, keyspaceName, tableName string, timeout time.Duration) (*keyspaces.GetTableOutput, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{keyspaces.TableStatusActive, keyspaces.TableStatusDeleting}, + Pending: enum.Slice(types.TableStatusActive, types.TableStatusDeleting), Target: []string{}, Refresh: statusTable(ctx, conn, keyspaceName, tableName), Timeout: timeout, @@ -678,10 +741,10 @@ func waitTableDeleted(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceNa return nil, err } -func waitTableUpdated(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceName, tableName string, timeout time.Duration) (*keyspaces.GetTableOutput, error) { //nolint:unparam +func waitTableUpdated(ctx context.Context, conn *keyspaces.Client, keyspaceName, tableName string, timeout time.Duration) (*keyspaces.GetTableOutput, error) { //nolint:unparam stateConf := &retry.StateChangeConf{ - Pending: []string{keyspaces.TableStatusUpdating}, - Target: []string{keyspaces.TableStatusActive}, + Pending: enum.Slice(types.TableStatusUpdating), + Target: enum.Slice(types.TableStatusActive), Refresh: statusTable(ctx, conn, keyspaceName, tableName), Timeout: timeout, Delay: 10 * time.Second, @@ -696,19 +759,19 @@ func waitTableUpdated(ctx context.Context, conn *keyspaces.Keyspaces, keyspaceNa return nil, err } -func expandCapacitySpecification(tfMap map[string]interface{}) *keyspaces.CapacitySpecification { +func expandCapacitySpecification(tfMap map[string]interface{}) 
*types.CapacitySpecification { if tfMap == nil { return nil } - apiObject := &keyspaces.CapacitySpecification{} + apiObject := &types.CapacitySpecification{} if v, ok := tfMap["read_capacity_units"].(int); ok && v != 0 { apiObject.ReadCapacityUnits = aws.Int64(int64(v)) } if v, ok := tfMap["throughput_mode"].(string); ok && v != "" { - apiObject.ThroughputMode = aws.String(v) + apiObject.ThroughputMode = types.ThroughputMode(v) } if v, ok := tfMap["write_capacity_units"].(int); ok && v != 0 { @@ -718,12 +781,26 @@ func expandCapacitySpecification(tfMap map[string]interface{}) *keyspaces.Capaci return apiObject } -func expandComment(tfMap map[string]interface{}) *keyspaces.Comment { +func expandClientSideTimestamps(tfMap map[string]interface{}) *types.ClientSideTimestamps { + if tfMap == nil { + return nil + } + + apiObject := &types.ClientSideTimestamps{} + + if v, ok := tfMap["status"].(string); ok && v != "" { + apiObject.Status = types.ClientSideTimestampsStatus(v) + } + + return apiObject +} + +func expandComment(tfMap map[string]interface{}) *types.Comment { if tfMap == nil { return nil } - apiObject := &keyspaces.Comment{} + apiObject := &types.Comment{} if v, ok := tfMap["message"].(string); ok && v != "" { apiObject.Message = aws.String(v) @@ -732,44 +809,44 @@ func expandComment(tfMap map[string]interface{}) *keyspaces.Comment { return apiObject } -func expandEncryptionSpecification(tfMap map[string]interface{}) *keyspaces.EncryptionSpecification { +func expandEncryptionSpecification(tfMap map[string]interface{}) *types.EncryptionSpecification { if tfMap == nil { return nil } - apiObject := &keyspaces.EncryptionSpecification{} + apiObject := &types.EncryptionSpecification{} if v, ok := tfMap["kms_key_identifier"].(string); ok && v != "" { apiObject.KmsKeyIdentifier = aws.String(v) } if v, ok := tfMap["type"].(string); ok && v != "" { - apiObject.Type = aws.String(v) + apiObject.Type = types.EncryptionType(v) } return apiObject } -func 
expandPointInTimeRecovery(tfMap map[string]interface{}) *keyspaces.PointInTimeRecovery { +func expandPointInTimeRecovery(tfMap map[string]interface{}) *types.PointInTimeRecovery { if tfMap == nil { return nil } - apiObject := &keyspaces.PointInTimeRecovery{} + apiObject := &types.PointInTimeRecovery{} if v, ok := tfMap["status"].(string); ok && v != "" { - apiObject.Status = aws.String(v) + apiObject.Status = types.PointInTimeRecoveryStatus(v) } return apiObject } -func expandSchemaDefinition(tfMap map[string]interface{}) *keyspaces.SchemaDefinition { +func expandSchemaDefinition(tfMap map[string]interface{}) *types.SchemaDefinition { if tfMap == nil { return nil } - apiObject := &keyspaces.SchemaDefinition{} + apiObject := &types.SchemaDefinition{} if v, ok := tfMap["clustering_key"].([]interface{}); ok && len(v) > 0 { apiObject.ClusteringKeys = expandClusteringKeys(v) @@ -790,26 +867,26 @@ func expandSchemaDefinition(tfMap map[string]interface{}) *keyspaces.SchemaDefin return apiObject } -func expandTimeToLive(tfMap map[string]interface{}) *keyspaces.TimeToLive { +func expandTimeToLive(tfMap map[string]interface{}) *types.TimeToLive { if tfMap == nil { return nil } - apiObject := &keyspaces.TimeToLive{} + apiObject := &types.TimeToLive{} if v, ok := tfMap["status"].(string); ok && v != "" { - apiObject.Status = aws.String(v) + apiObject.Status = types.TimeToLiveStatus(v) } return apiObject } -func expandColumnDefinition(tfMap map[string]interface{}) *keyspaces.ColumnDefinition { +func expandColumnDefinition(tfMap map[string]interface{}) *types.ColumnDefinition { if tfMap == nil { return nil } - apiObject := &keyspaces.ColumnDefinition{} + apiObject := &types.ColumnDefinition{} if v, ok := tfMap["name"].(string); ok && v != "" { apiObject.Name = aws.String(v) @@ -822,12 +899,12 @@ func expandColumnDefinition(tfMap map[string]interface{}) *keyspaces.ColumnDefin return apiObject } -func expandColumnDefinitions(tfList []interface{}) []*keyspaces.ColumnDefinition { 
+func expandColumnDefinitions(tfList []interface{}) []types.ColumnDefinition { if len(tfList) == 0 { return nil } - var apiObjects []*keyspaces.ColumnDefinition + var apiObjects []types.ColumnDefinition for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -842,36 +919,36 @@ func expandColumnDefinitions(tfList []interface{}) []*keyspaces.ColumnDefinition continue } - apiObjects = append(apiObjects, apiObject) + apiObjects = append(apiObjects, *apiObject) } return apiObjects } -func expandClusteringKey(tfMap map[string]interface{}) *keyspaces.ClusteringKey { +func expandClusteringKey(tfMap map[string]interface{}) *types.ClusteringKey { if tfMap == nil { return nil } - apiObject := &keyspaces.ClusteringKey{} + apiObject := &types.ClusteringKey{} if v, ok := tfMap["name"].(string); ok && v != "" { apiObject.Name = aws.String(v) } if v, ok := tfMap["order_by"].(string); ok && v != "" { - apiObject.OrderBy = aws.String(v) + apiObject.OrderBy = types.SortOrder(v) } return apiObject } -func expandClusteringKeys(tfList []interface{}) []*keyspaces.ClusteringKey { +func expandClusteringKeys(tfList []interface{}) []types.ClusteringKey { if len(tfList) == 0 { return nil } - var apiObjects []*keyspaces.ClusteringKey + var apiObjects []types.ClusteringKey for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -886,18 +963,18 @@ func expandClusteringKeys(tfList []interface{}) []*keyspaces.ClusteringKey { continue } - apiObjects = append(apiObjects, apiObject) + apiObjects = append(apiObjects, *apiObject) } return apiObjects } -func expandPartitionKey(tfMap map[string]interface{}) *keyspaces.PartitionKey { +func expandPartitionKey(tfMap map[string]interface{}) *types.PartitionKey { if tfMap == nil { return nil } - apiObject := &keyspaces.PartitionKey{} + apiObject := &types.PartitionKey{} if v, ok := tfMap["name"].(string); ok && v != "" { apiObject.Name = aws.String(v) @@ -906,12 +983,12 @@ func expandPartitionKey(tfMap 
map[string]interface{}) *keyspaces.PartitionKey { return apiObject } -func expandPartitionKeys(tfList []interface{}) []*keyspaces.PartitionKey { +func expandPartitionKeys(tfList []interface{}) []types.PartitionKey { if len(tfList) == 0 { return nil } - var apiObjects []*keyspaces.PartitionKey + var apiObjects []types.PartitionKey for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -926,18 +1003,18 @@ func expandPartitionKeys(tfList []interface{}) []*keyspaces.PartitionKey { continue } - apiObjects = append(apiObjects, apiObject) + apiObjects = append(apiObjects, *apiObject) } return apiObjects } -func expandStaticColumn(tfMap map[string]interface{}) *keyspaces.StaticColumn { +func expandStaticColumn(tfMap map[string]interface{}) *types.StaticColumn { if tfMap == nil { return nil } - apiObject := &keyspaces.StaticColumn{} + apiObject := &types.StaticColumn{} if v, ok := tfMap["name"].(string); ok && v != "" { apiObject.Name = aws.String(v) @@ -946,12 +1023,12 @@ func expandStaticColumn(tfMap map[string]interface{}) *keyspaces.StaticColumn { return apiObject } -func expandStaticColumns(tfList []interface{}) []*keyspaces.StaticColumn { +func expandStaticColumns(tfList []interface{}) []types.StaticColumn { if len(tfList) == 0 { return nil } - var apiObjects []*keyspaces.StaticColumn + var apiObjects []types.StaticColumn for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -966,35 +1043,45 @@ func expandStaticColumns(tfList []interface{}) []*keyspaces.StaticColumn { continue } - apiObjects = append(apiObjects, apiObject) + apiObjects = append(apiObjects, *apiObject) } return apiObjects } -func flattenCapacitySpecificationSummary(apiObject *keyspaces.CapacitySpecificationSummary) map[string]interface{} { +func flattenCapacitySpecificationSummary(apiObject *types.CapacitySpecificationSummary) map[string]interface{} { if apiObject == nil { return nil } - tfMap := map[string]interface{}{} + tfMap := 
map[string]interface{}{ + "throughput_mode": apiObject.ThroughputMode, + } if v := apiObject.ReadCapacityUnits; v != nil { - tfMap["read_capacity_units"] = aws.Int64Value(v) + tfMap["read_capacity_units"] = aws.ToInt64(v) } - if v := apiObject.ThroughputMode; v != nil { - tfMap["throughput_mode"] = aws.StringValue(v) + if v := apiObject.WriteCapacityUnits; v != nil { + tfMap["write_capacity_units"] = aws.ToInt64(v) } - if v := apiObject.WriteCapacityUnits; v != nil { - tfMap["write_capacity_units"] = aws.Int64Value(v) + return tfMap +} + +func flattenClientSideTimestamps(apiObject *types.ClientSideTimestamps) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{ + "status": apiObject.Status, } return tfMap } -func flattenComment(apiObject *keyspaces.Comment) map[string]interface{} { +func flattenComment(apiObject *types.Comment) map[string]interface{} { if apiObject == nil { return nil } @@ -1002,45 +1089,41 @@ func flattenComment(apiObject *keyspaces.Comment) map[string]interface{} { tfMap := map[string]interface{}{} if v := apiObject.Message; v != nil { - tfMap["message"] = aws.StringValue(v) + tfMap["message"] = aws.ToString(v) } return tfMap } -func flattenEncryptionSpecification(apiObject *keyspaces.EncryptionSpecification) map[string]interface{} { +func flattenEncryptionSpecification(apiObject *types.EncryptionSpecification) map[string]interface{} { if apiObject == nil { return nil } - tfMap := map[string]interface{}{} - - if v := apiObject.KmsKeyIdentifier; v != nil { - tfMap["kms_key_identifier"] = aws.StringValue(v) + tfMap := map[string]interface{}{ + "type": apiObject.Type, } - if v := apiObject.Type; v != nil { - tfMap["type"] = aws.StringValue(v) + if v := apiObject.KmsKeyIdentifier; v != nil { + tfMap["kms_key_identifier"] = aws.ToString(v) } return tfMap } -func flattenPointInTimeRecoverySummary(apiObject *keyspaces.PointInTimeRecoverySummary) map[string]interface{} { +func 
flattenPointInTimeRecoverySummary(apiObject *types.PointInTimeRecoverySummary) map[string]interface{} { if apiObject == nil { return nil } - tfMap := map[string]interface{}{} - - if v := apiObject.Status; v != nil { - tfMap["status"] = aws.StringValue(v) + tfMap := map[string]interface{}{ + "status": apiObject.Status, } return tfMap } -func flattenSchemaDefinition(apiObject *keyspaces.SchemaDefinition) map[string]interface{} { +func flattenSchemaDefinition(apiObject *types.SchemaDefinition) map[string]interface{} { if apiObject == nil { return nil } @@ -1066,21 +1149,19 @@ func flattenSchemaDefinition(apiObject *keyspaces.SchemaDefinition) map[string]i return tfMap } -func flattenTimeToLive(apiObject *keyspaces.TimeToLive) map[string]interface{} { +func flattenTimeToLive(apiObject *types.TimeToLive) map[string]interface{} { if apiObject == nil { return nil } - tfMap := map[string]interface{}{} - - if v := apiObject.Status; v != nil { - tfMap["status"] = aws.StringValue(v) + tfMap := map[string]interface{}{ + "status": apiObject.Status, } return tfMap } -func flattenColumnDefinition(apiObject *keyspaces.ColumnDefinition) map[string]interface{} { +func flattenColumnDefinition(apiObject *types.ColumnDefinition) map[string]interface{} { if apiObject == nil { return nil } @@ -1088,17 +1169,17 @@ func flattenColumnDefinition(apiObject *keyspaces.ColumnDefinition) map[string]i tfMap := map[string]interface{}{} if v := apiObject.Name; v != nil { - tfMap["name"] = aws.StringValue(v) + tfMap["name"] = aws.ToString(v) } if v := apiObject.Type; v != nil { - tfMap["type"] = aws.StringValue(v) + tfMap["type"] = aws.ToString(v) } return tfMap } -func flattenColumnDefinitions(apiObjects []*keyspaces.ColumnDefinition) []interface{} { +func flattenColumnDefinitions(apiObjects []types.ColumnDefinition) []interface{} { if len(apiObjects) == 0 { return nil } @@ -1106,35 +1187,29 @@ func flattenColumnDefinitions(apiObjects []*keyspaces.ColumnDefinition) []interf var tfList []interface{} 
for _, apiObject := range apiObjects { - if apiObject == nil { - continue - } - - tfList = append(tfList, flattenColumnDefinition(apiObject)) + tfList = append(tfList, flattenColumnDefinition(&apiObject)) } return tfList } -func flattenClusteringKey(apiObject *keyspaces.ClusteringKey) map[string]interface{} { +func flattenClusteringKey(apiObject *types.ClusteringKey) map[string]interface{} { if apiObject == nil { return nil } - tfMap := map[string]interface{}{} - - if v := apiObject.Name; v != nil { - tfMap["name"] = aws.StringValue(v) + tfMap := map[string]interface{}{ + "order_by": apiObject.OrderBy, } - if v := apiObject.OrderBy; v != nil { - tfMap["order_by"] = aws.StringValue(v) + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.ToString(v) } return tfMap } -func flattenClusteringKeys(apiObjects []*keyspaces.ClusteringKey) []interface{} { +func flattenClusteringKeys(apiObjects []types.ClusteringKey) []interface{} { if len(apiObjects) == 0 { return nil } @@ -1142,17 +1217,13 @@ func flattenClusteringKeys(apiObjects []*keyspaces.ClusteringKey) []interface{} var tfList []interface{} for _, apiObject := range apiObjects { - if apiObject == nil { - continue - } - - tfList = append(tfList, flattenClusteringKey(apiObject)) + tfList = append(tfList, flattenClusteringKey(&apiObject)) } return tfList } -func flattenPartitionKey(apiObject *keyspaces.PartitionKey) map[string]interface{} { +func flattenPartitionKey(apiObject *types.PartitionKey) map[string]interface{} { if apiObject == nil { return nil } @@ -1160,13 +1231,13 @@ func flattenPartitionKey(apiObject *keyspaces.PartitionKey) map[string]interface tfMap := map[string]interface{}{} if v := apiObject.Name; v != nil { - tfMap["name"] = aws.StringValue(v) + tfMap["name"] = aws.ToString(v) } return tfMap } -func flattenPartitionKeys(apiObjects []*keyspaces.PartitionKey) []interface{} { +func flattenPartitionKeys(apiObjects []types.PartitionKey) []interface{} { if len(apiObjects) == 0 { return nil } @@ -1174,17 
+1245,13 @@ func flattenPartitionKeys(apiObjects []*keyspaces.PartitionKey) []interface{} { var tfList []interface{} for _, apiObject := range apiObjects { - if apiObject == nil { - continue - } - - tfList = append(tfList, flattenPartitionKey(apiObject)) + tfList = append(tfList, flattenPartitionKey(&apiObject)) } return tfList } -func flattenStaticColumn(apiObject *keyspaces.StaticColumn) map[string]interface{} { +func flattenStaticColumn(apiObject *types.StaticColumn) map[string]interface{} { if apiObject == nil { return nil } @@ -1192,13 +1259,13 @@ func flattenStaticColumn(apiObject *keyspaces.StaticColumn) map[string]interface tfMap := map[string]interface{}{} if v := apiObject.Name; v != nil { - tfMap["name"] = aws.StringValue(v) + tfMap["name"] = aws.ToString(v) } return tfMap } -func flattenStaticColumns(apiObjects []*keyspaces.StaticColumn) []interface{} { +func flattenStaticColumns(apiObjects []types.StaticColumn) []interface{} { if len(apiObjects) == 0 { return nil } @@ -1206,11 +1273,7 @@ func flattenStaticColumns(apiObjects []*keyspaces.StaticColumn) []interface{} { var tfList []interface{} for _, apiObject := range apiObjects { - if apiObject == nil { - continue - } - - tfList = append(tfList, flattenStaticColumn(apiObject)) + tfList = append(tfList, flattenStaticColumn(&apiObject)) } return tfList diff --git a/internal/service/keyspaces/table_test.go b/internal/service/keyspaces/table_test.go index 06acbc2ba94..dd4e415a650 100644 --- a/internal/service/keyspaces/table_test.go +++ b/internal/service/keyspaces/table_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package keyspaces_test import ( @@ -6,8 +9,8 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/keyspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/keyspaces" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -15,6 +18,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tfkeyspaces "github.com/hashicorp/terraform-provider-aws/internal/service/keyspaces" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccKeyspacesTable_basic(t *testing.T) { @@ -26,7 +30,7 @@ func TestAccKeyspacesTable_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -37,6 +41,7 @@ func TestAccKeyspacesTable_basic(t *testing.T) { acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "cassandra", fmt.Sprintf("/keyspace/%s/table/%s", rName1, rName2)), resource.TestCheckResourceAttr(resourceName, "capacity_specification.#", "1"), resource.TestCheckResourceAttr(resourceName, "capacity_specification.0.throughput_mode", "PAY_PER_REQUEST"), + resource.TestCheckResourceAttr(resourceName, "client_side_timestamps.#", "0"), resource.TestCheckResourceAttr(resourceName, "comment.#", "1"), resource.TestCheckResourceAttr(resourceName, "comment.0.message", ""), resource.TestCheckResourceAttr(resourceName, "default_time_to_live", "0"), @@ -80,7 +85,7 @@ func TestAccKeyspacesTable_disappears(t *testing.T) 
{ resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -105,7 +110,7 @@ func TestAccKeyspacesTable_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -143,6 +148,36 @@ func TestAccKeyspacesTable_tags(t *testing.T) { }) } +func TestAccKeyspacesTable_clientSideTimestamps(t *testing.T) { + ctx := acctest.Context(t) + var v keyspaces.GetTableOutput + rName1 := "tf_acc_test_" + sdkacctest.RandString(20) + rName2 := "tf_acc_test_" + sdkacctest.RandString(20) + resourceName := "aws_keyspaces_table.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTableDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccTableConfig_clientSideTimestamps(rName1, rName2), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckTableExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "client_side_timestamps.#", "1"), + resource.TestCheckResourceAttr(resourceName, "client_side_timestamps.0.status", "ENABLED"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccKeyspacesTable_multipleColumns(t *testing.T) { 
ctx := acctest.Context(t) var v keyspaces.GetTableOutput @@ -152,7 +187,7 @@ func TestAccKeyspacesTable_multipleColumns(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -176,7 +211,7 @@ func TestAccKeyspacesTable_multipleColumns(t *testing.T) { "type": "text", }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "schema_definition.0.column.*", map[string]string{ - "name": "name", + "name": "n", "type": "text", }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "schema_definition.0.column.*", map[string]string{ @@ -196,7 +231,7 @@ func TestAccKeyspacesTable_multipleColumns(t *testing.T) { "type": "text", }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "schema_definition.0.column.*", map[string]string{ - "name": "pay_scale", + "name": "pay_scale0", "type": "int", }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "schema_definition.0.column.*", map[string]string{ @@ -224,7 +259,7 @@ func TestAccKeyspacesTable_multipleColumns(t *testing.T) { "name": "role", }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "schema_definition.0.static_column.*", map[string]string{ - "name": "pay_scale", + "name": "pay_scale0", }), ), }, @@ -247,7 +282,7 @@ func TestAccKeyspacesTable_update(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -320,7 +355,7 @@ func 
TestAccKeyspacesTable_addColumns(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -387,7 +422,7 @@ func TestAccKeyspacesTable_delColumns(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, keyspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.KeyspacesEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ @@ -447,7 +482,7 @@ func TestAccKeyspacesTable_delColumns(t *testing.T) { func testAccCheckTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_keyspaces_table" { @@ -488,7 +523,7 @@ func testAccCheckTableExists(ctx context.Context, n string, v *keyspaces.GetTabl return fmt.Errorf("No Keyspaces Table ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KeyspacesClient(ctx) keyspaceName, tableName, err := tfkeyspaces.TableParseResourceID(rs.Primary.ID) @@ -510,7 +545,7 @@ func testAccCheckTableExists(ctx context.Context, n string, v *keyspaces.GetTabl func testAccCheckTableNotRecreated(i, j *keyspaces.GetTableOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - if !aws.TimeValue(i.CreationTimestamp).Equal(aws.TimeValue(j.CreationTimestamp)) { + if 
!aws.ToTime(i.CreationTimestamp).Equal(aws.ToTime(j.CreationTimestamp)) { return errors.New("Keyspaces Table was recreated") } @@ -520,7 +555,7 @@ func testAccCheckTableNotRecreated(i, j *keyspaces.GetTableOutput) resource.Test func testAccCheckTableRecreated(i, j *keyspaces.GetTableOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - if aws.TimeValue(i.CreationTimestamp).Equal(aws.TimeValue(j.CreationTimestamp)) { + if aws.ToTime(i.CreationTimestamp).Equal(aws.ToTime(j.CreationTimestamp)) { return errors.New("Keyspaces Table was not recreated") } @@ -609,6 +644,34 @@ resource "aws_keyspaces_table" "test" { `, rName1, rName2, tagKey1, tagValue1, tagKey2, tagValue2) } +func testAccTableConfig_clientSideTimestamps(rName1, rName2 string) string { + return fmt.Sprintf(` +resource "aws_keyspaces_keyspace" "test" { + name = %[1]q +} + +resource "aws_keyspaces_table" "test" { + keyspace_name = aws_keyspaces_keyspace.test.name + table_name = %[2]q + + schema_definition { + column { + name = "message" + type = "ascii" + } + + partition_key { + name = "message" + } + } + + client_side_timestamps { + status = "ENABLED" + } +} +`, rName1, rName2) +} + func testAccTableConfig_multipleColumns(rName1, rName2 string) string { return fmt.Sprintf(` resource "aws_keyspaces_keyspace" "test" { @@ -626,7 +689,7 @@ resource "aws_keyspaces_table" "test" { } column { - name = "name" + name = "n" type = "text" } @@ -651,7 +714,7 @@ resource "aws_keyspaces_table" "test" { } column { - name = "pay_scale" + name = "pay_scale0" type = "int" } @@ -694,7 +757,7 @@ resource "aws_keyspaces_table" "test" { } static_column { - name = "pay_scale" + name = "pay_scale0" } } } diff --git a/internal/service/keyspaces/tags_gen.go b/internal/service/keyspaces/tags_gen.go index f0bdd26f64e..d736dc85d15 100644 --- a/internal/service/keyspaces/tags_gen.go +++ b/internal/service/keyspaces/tags_gen.go @@ -5,24 +5,24 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - 
"github.com/aws/aws-sdk-go/service/keyspaces" - "github.com/aws/aws-sdk-go/service/keyspaces/keyspacesiface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/keyspaces" + awstypes "github.com/aws/aws-sdk-go-v2/service/keyspaces/types" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists keyspaces service tags. +// listTags lists keyspaces service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn keyspacesiface.KeyspacesAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *keyspaces.Client, identifier string) (tftags.KeyValueTags, error) { input := &keyspaces.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } - output, err := conn.ListTagsForResourceWithContext(ctx, input) + output, err := conn.ListTagsForResource(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn keyspacesiface.KeyspacesAPI, identifier // ListTags lists keyspaces service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KeyspacesConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KeyspacesClient(ctx), identifier) if err != nil { return err @@ -50,11 +50,11 @@ func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier stri // []*SERVICE.Tag handling // Tags returns keyspaces service tags. 
-func Tags(tags tftags.KeyValueTags) []*keyspaces.Tag { - result := make([]*keyspaces.Tag, 0, len(tags)) +func Tags(tags tftags.KeyValueTags) []awstypes.Tag { + result := make([]awstypes.Tag, 0, len(tags)) for k, v := range tags.Map() { - tag := &keyspaces.Tag{ + tag := awstypes.Tag{ Key: aws.String(k), Value: aws.String(v), } @@ -66,19 +66,19 @@ func Tags(tags tftags.KeyValueTags) []*keyspaces.Tag { } // KeyValueTags creates tftags.KeyValueTags from keyspaces service tags. -func KeyValueTags(ctx context.Context, tags []*keyspaces.Tag) tftags.KeyValueTags { +func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags { m := make(map[string]*string, len(tags)) for _, tag := range tags { - m[aws.StringValue(tag.Key)] = tag.Value + m[aws.ToString(tag.Key)] = tag.Value } return tftags.New(ctx, m) } -// GetTagsIn returns keyspaces service tags from Context. +// getTagsIn returns keyspaces service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*keyspaces.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*keyspaces.Tag { return nil } -// SetTagsOut sets keyspaces service tags in Context. -func SetTagsOut(ctx context.Context, tags []*keyspaces.Tag) { +// setTagsOut sets keyspaces service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates keyspaces service tags. +// updateTags updates keyspaces service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn keyspacesiface.KeyspacesAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *keyspaces.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -110,7 +110,7 @@ func UpdateTags(ctx context.Context, conn keyspacesiface.KeyspacesAPI, identifie Tags: Tags(removedTags), } - _, err := conn.UntagResourceWithContext(ctx, input) + _, err := conn.UntagResource(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -125,7 +125,7 @@ func UpdateTags(ctx context.Context, conn keyspacesiface.KeyspacesAPI, identifie Tags: Tags(updatedTags), } - _, err := conn.TagResourceWithContext(ctx, input) + _, err := conn.TagResource(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn keyspacesiface.KeyspacesAPI, identifie // UpdateTags updates keyspaces service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KeyspacesConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KeyspacesClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/kinesis/flex.go b/internal/service/kinesis/flex.go index 097a470fae3..c6a83295f5b 100644 --- a/internal/service/kinesis/flex.go +++ b/internal/service/kinesis/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( diff --git a/internal/service/kinesis/flex_test.go b/internal/service/kinesis/flex_test.go index 6a662befb76..f340759d2e1 100644 --- a/internal/service/kinesis/flex_test.go +++ b/internal/service/kinesis/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( diff --git a/internal/service/kinesis/generate.go b/internal/service/kinesis/generate.go index 877b5d2bd9b..c51605fd39a 100644 --- a/internal/service/kinesis/generate.go +++ b/internal/service/kinesis/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsForStream -ListTagsInIDElem=StreamName -ServiceTagsSlice -TagOp=AddTagsToStream -TagOpBatchSize=10 -TagInCustomVal=aws.StringMap(updatedTags.IgnoreAWS().Map()) -TagInIDElem=StreamName -UntagOp=RemoveTagsFromStream -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package kinesis diff --git a/internal/service/kinesis/migrate.go b/internal/service/kinesis/migrate.go index a7748bfd7b3..5b2267062d2 100644 --- a/internal/service/kinesis/migrate.go +++ b/internal/service/kinesis/migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( diff --git a/internal/service/kinesis/migrate_test.go b/internal/service/kinesis/migrate_test.go index c371418fb92..d2fe434ee3f 100644 --- a/internal/service/kinesis/migrate_test.go +++ b/internal/service/kinesis/migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis_test import ( diff --git a/internal/service/kinesis/service_package.go b/internal/service/kinesis/service_package.go new file mode 100644 index 00000000000..d75e4b7e503 --- /dev/null +++ b/internal/service/kinesis/service_package.go @@ -0,0 +1,31 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package kinesis + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + kinesis_sdkv1 "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *kinesis_sdkv1.Kinesis) (*kinesis_sdkv1.Kinesis, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if r.Operation.Name == "CreateStream" { + if tfawserr.ErrMessageContains(r.Error, kinesis_sdkv1.ErrCodeLimitExceededException, "simultaneously be in CREATING or DELETING") { + r.Retryable = aws_sdkv1.Bool(true) + } + } + if r.Operation.Name == "CreateStream" || r.Operation.Name == "DeleteStream" { + if tfawserr.ErrMessageContains(r.Error, kinesis_sdkv1.ErrCodeLimitExceededException, "Rate exceeded for stream") { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} diff --git a/internal/service/kinesis/service_package_gen.go b/internal/service/kinesis/service_package_gen.go index cd4f42d68a8..e06dde0bcba 100644 --- a/internal/service/kinesis/service_package_gen.go +++ b/internal/service/kinesis/service_package_gen.go @@ -5,6 +5,10 @@ package kinesis import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kinesis_sdkv1 "github.com/aws/aws-sdk-go/service/kinesis" + "github.com/hashicorp/terraform-provider-aws/internal/conns" 
"github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -53,4 +57,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Kinesis } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kinesis_sdkv1.Kinesis, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kinesis_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kinesis/stream.go b/internal/service/kinesis/stream.go index 62efa2ae834..3e9ec20a611 100644 --- a/internal/service/kinesis/stream.go +++ b/internal/service/kinesis/stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( @@ -144,7 +147,7 @@ func ResourceStream() *schema.Resource { func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) name := d.Get("name").(string) input := &kinesis.CreateStreamInput{ @@ -241,7 +244,7 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte } } - if err := createTags(ctx, conn, name, GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, name, getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Kinesis Stream (%s) tags: %s", name, err) } @@ -250,7 +253,7 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) name := d.Get("name").(string) stream, err := FindStreamByName(ctx, conn, name) @@ -300,7 +303,7 @@ func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceStreamUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) name := d.Get("name").(string) if d.HasChange("stream_mode_details.0.stream_mode") { @@ -495,7 +498,7 @@ func resourceStreamUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) name := d.Get("name").(string) log.Printf("[DEBUG] Deleting Kinesis Stream: (%s)", name) @@ -522,7 +525,7 @@ func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta inte } func resourceStreamImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) output, err := FindStreamByName(ctx, conn, d.Id()) diff --git a/internal/service/kinesis/stream_consumer.go b/internal/service/kinesis/stream_consumer.go index a040a9c19a1..19535fee53e 100644 --- a/internal/service/kinesis/stream_consumer.go +++ b/internal/service/kinesis/stream_consumer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( @@ -54,7 +57,7 @@ func ResourceStreamConsumer() *schema.Resource { func resourceStreamConsumerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) name := d.Get("name").(string) input := &kinesis.RegisterStreamConsumerInput{ @@ -80,7 +83,7 @@ func resourceStreamConsumerCreate(ctx context.Context, d *schema.ResourceData, m func resourceStreamConsumerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) consumer, err := FindStreamConsumerByARN(ctx, conn, d.Id()) @@ -104,7 +107,7 @@ func resourceStreamConsumerRead(ctx context.Context, d *schema.ResourceData, met func resourceStreamConsumerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) log.Printf("[DEBUG] Deregistering Kinesis Stream Consumer: (%s)", d.Id()) _, err := conn.DeregisterStreamConsumerWithContext(ctx, &kinesis.DeregisterStreamConsumerInput{ diff --git a/internal/service/kinesis/stream_consumer_data_source.go b/internal/service/kinesis/stream_consumer_data_source.go index a0c522a1916..d8e82285a09 100644 --- a/internal/service/kinesis/stream_consumer_data_source.go +++ b/internal/service/kinesis/stream_consumer_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( @@ -53,7 +56,7 @@ func DataSourceStreamConsumer() *schema.Resource { func dataSourceStreamConsumerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) streamArn := d.Get("stream_arn").(string) diff --git a/internal/service/kinesis/stream_consumer_data_source_test.go b/internal/service/kinesis/stream_consumer_data_source_test.go index 9f3f3e58b03..4fba8bcd902 100644 --- a/internal/service/kinesis/stream_consumer_data_source_test.go +++ b/internal/service/kinesis/stream_consumer_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesis_test import ( diff --git a/internal/service/kinesis/stream_consumer_test.go b/internal/service/kinesis/stream_consumer_test.go index adfb7d2cac4..d8988362cd1 100644 --- a/internal/service/kinesis/stream_consumer_test.go +++ b/internal/service/kinesis/stream_consumer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis_test import ( @@ -129,7 +132,7 @@ func TestAccKinesisStreamConsumer_exceedMaxConcurrentConsumers(t *testing.T) { func testAccCheckStreamConsumerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kinesis_stream_consumer" { @@ -155,7 +158,7 @@ func testAccCheckStreamConsumerDestroy(ctx context.Context) resource.TestCheckFu func testAccStreamConsumerExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/kinesis/stream_data_source.go b/internal/service/kinesis/stream_data_source.go index f66860a4784..df46e8ced18 100644 --- a/internal/service/kinesis/stream_data_source.go +++ b/internal/service/kinesis/stream_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis import ( @@ -72,7 +75,7 @@ func DataSourceStream() *schema.Resource { func dataSourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisConn() + conn := meta.(*conns.AWSClient).KinesisConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -145,7 +148,7 @@ func dataSourceStreamRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("stream_mode_details", nil) } - tags, err := ListTags(ctx, conn, name) + tags, err := listTags(ctx, conn, name) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Kinesis Stream (%s): %s", name, err) diff --git a/internal/service/kinesis/stream_data_source_test.go b/internal/service/kinesis/stream_data_source_test.go index a9a48c86a7f..079ebb969ce 100644 --- a/internal/service/kinesis/stream_data_source_test.go +++ b/internal/service/kinesis/stream_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesis_test import ( diff --git a/internal/service/kinesis/stream_test.go b/internal/service/kinesis/stream_test.go index d3b32b50106..f4ffa995f6b 100644 --- a/internal/service/kinesis/stream_test.go +++ b/internal/service/kinesis/stream_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesis_test import ( @@ -600,7 +603,7 @@ func TestAccKinesisStream_failOnBadStreamCountAndStreamModeCombination(t *testin func testAccCheckStreamExists(ctx context.Context, n string, v *kinesis.StreamDescriptionSummary) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { @@ -625,7 +628,7 @@ func testAccCheckStreamExists(ctx context.Context, n string, v *kinesis.StreamDe func testAccCheckStreamDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kinesis_stream" { @@ -651,7 +654,7 @@ func testAccCheckStreamDestroy(ctx context.Context) resource.TestCheckFunc { func testAccStreamRegisterStreamConsumer(ctx context.Context, stream *kinesis.StreamDescriptionSummary, rName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisConn(ctx) if _, err := conn.RegisterStreamConsumerWithContext(ctx, &kinesis.RegisterStreamConsumerInput{ ConsumerName: aws.String(rName), diff --git a/internal/service/kinesis/sweep.go b/internal/service/kinesis/sweep.go index 42617bc4e41..1c8a5e44697 100644 --- a/internal/service/kinesis/sweep.go +++ b/internal/service/kinesis/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kinesis" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepStreams(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).KinesisConn() + conn := client.KinesisConn(ctx) sweepResources := make([]sweep.Sweepable, 0) input := &kinesis.ListStreamsInput{} @@ -57,7 +59,7 @@ func sweepStreams(region string) error { return fmt.Errorf("error listing Kinesis Streams (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Kinesis Streams (%s): %w", region, err) diff --git a/internal/service/kinesis/tags_gen.go b/internal/service/kinesis/tags_gen.go index 8f874a8bcca..c49c1c40243 100644 --- a/internal/service/kinesis/tags_gen.go +++ b/internal/service/kinesis/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists kinesis service tags. +// listTags lists kinesis service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier string) (tftags.KeyValueTags, error) { input := &kinesis.ListTagsForStreamInput{ StreamName: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier stri // ListTags lists kinesis service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KinesisConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KinesisConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*kinesis.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns kinesis service tags from Context. +// getTagsIn returns kinesis service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*kinesis.Tag { +func getTagsIn(ctx context.Context) []*kinesis.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*kinesis.Tag { return nil } -// SetTagsOut sets kinesis service tags in Context. -func SetTagsOut(ctx context.Context, tags []*kinesis.Tag) { +// setTagsOut sets kinesis service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*kinesis.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier st return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates kinesis service tags. +// updateTags updates kinesis service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -151,5 +151,5 @@ func UpdateTags(ctx context.Context, conn kinesisiface.KinesisAPI, identifier st // UpdateTags updates kinesis service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KinesisConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KinesisConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/kinesisanalytics/application.go b/internal/service/kinesisanalytics/application.go index 8b7f4a4d8b1..479ab90bc95 100644 --- a/internal/service/kinesisanalytics/application.go +++ b/internal/service/kinesisanalytics/application.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics import ( @@ -621,7 +624,7 @@ func ResourceApplication() *schema.Resource { func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsConn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsConn(ctx) applicationName := d.Get("name").(string) input := &kinesisanalytics.CreateApplicationInput{ @@ -631,7 +634,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta CloudWatchLoggingOptions: expandCloudWatchLoggingOptions(d.Get("cloudwatch_logging_options").([]interface{})), Inputs: expandInputs(d.Get("inputs").([]interface{})), Outputs: expandOutputs(d.Get("outputs").(*schema.Set).List()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } outputRaw, err := waitIAMPropagation(ctx, func() (interface{}, error) { @@ -700,7 +703,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsConn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsConn(ctx) application, err := FindApplicationDetailByName(ctx, conn, d.Get("name").(string)) @@ -745,7 +748,7 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsConn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsConn(ctx) if d.HasChanges("cloudwatch_logging_options", "code", "inputs", "outputs", "reference_data_sources") { applicationName := d.Get("name").(string) @@ -1133,7 +1136,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta func 
resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsConn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsConn(ctx) createTimestamp, err := time.Parse(time.RFC3339, d.Get("create_timestamp").(string)) if err != nil { @@ -1168,7 +1171,7 @@ func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { arn, err := arn.Parse(d.Id()) if err != nil { - return []*schema.ResourceData{}, fmt.Errorf("Error parsing ARN %q: %w", d.Id(), err) + return []*schema.ResourceData{}, fmt.Errorf("parsing ARN %q: %w", d.Id(), err) } // application/ diff --git a/internal/service/kinesisanalytics/application_test.go b/internal/service/kinesisanalytics/application_test.go index 596764333f1..015619a8153 100644 --- a/internal/service/kinesisanalytics/application_test.go +++ b/internal/service/kinesisanalytics/application_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics_test import ( @@ -1899,7 +1902,7 @@ func TestAccKinesisAnalyticsApplication_StartApplication_update(t *testing.T) { func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kinesis_analytics_application" { @@ -1933,7 +1936,7 @@ func testAccCheckApplicationExists(ctx context.Context, n string, v *kinesisanal return fmt.Errorf("No Kinesis Analytics Application ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsConn(ctx) application, err := tfkinesisanalytics.FindApplicationDetailByName(ctx, conn, rs.Primary.Attributes["name"]) @@ -1948,7 +1951,7 @@ func testAccCheckApplicationExists(ctx context.Context, n string, v *kinesisanal } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsConn(ctx) input := &kinesisanalytics.ListApplicationsInput{} diff --git a/internal/service/kinesisanalytics/consts.go b/internal/service/kinesisanalytics/consts.go index 55b48ccb358..71c379598d0 100644 --- a/internal/service/kinesisanalytics/consts.go +++ b/internal/service/kinesisanalytics/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics import ( diff --git a/internal/service/kinesisanalytics/find.go b/internal/service/kinesisanalytics/find.go index 67ff78f8210..538970589a8 100644 --- a/internal/service/kinesisanalytics/find.go +++ b/internal/service/kinesisanalytics/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics import ( diff --git a/internal/service/kinesisanalytics/generate.go b/internal/service/kinesisanalytics/generate.go index f4dc3ca6aed..5d5d28bb4ff 100644 --- a/internal/service/kinesisanalytics/generate.go +++ b/internal/service/kinesisanalytics/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package kinesisanalytics diff --git a/internal/service/kinesisanalytics/list.go b/internal/service/kinesisanalytics/list.go index 5fdd5e67341..43d8984be93 100644 --- a/internal/service/kinesisanalytics/list.go +++ b/internal/service/kinesisanalytics/list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics import ( diff --git a/internal/service/kinesisanalytics/service_package_gen.go b/internal/service/kinesisanalytics/service_package_gen.go index 6cc0a21b7cd..87507d228f3 100644 --- a/internal/service/kinesisanalytics/service_package_gen.go +++ b/internal/service/kinesisanalytics/service_package_gen.go @@ -5,6 +5,10 @@ package kinesisanalytics import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kinesisanalytics_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisanalytics" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.KinesisAnalytics } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kinesisanalytics_sdkv1.KinesisAnalytics, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kinesisanalytics_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kinesisanalytics/status.go b/internal/service/kinesisanalytics/status.go index a8d012d9917..4c2dc4cb0b0 100644 --- a/internal/service/kinesisanalytics/status.go +++ b/internal/service/kinesisanalytics/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics import ( diff --git a/internal/service/kinesisanalytics/sweep.go b/internal/service/kinesisanalytics/sweep.go index 25462c77f26..e57a6e2ff78 100644 --- a/internal/service/kinesisanalytics/sweep.go +++ b/internal/service/kinesisanalytics/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,7 +16,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -26,11 +28,11 @@ func init() { func sweepApplications(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).KinesisAnalyticsConn() + conn := client.KinesisAnalyticsConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -78,7 +80,7 @@ func sweepApplications(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Kinesis Analytics Applications: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Kinesis Analytics Applications: %w", err)) } diff --git a/internal/service/kinesisanalytics/tags_gen.go b/internal/service/kinesisanalytics/tags_gen.go index 2918ba1f3ec..0d6411bc89b 100644 --- a/internal/service/kinesisanalytics/tags_gen.go +++ b/internal/service/kinesisanalytics/tags_gen.go @@ -14,10 +14,10 @@ import ( 
"github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists kinesisanalytics service tags. +// listTags lists kinesisanalytics service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn kinesisanalyticsiface.KinesisAnalyticsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn kinesisanalyticsiface.KinesisAnalyticsAPI, identifier string) (tftags.KeyValueTags, error) { input := &kinesisanalytics.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn kinesisanalyticsiface.KinesisAnalyticsAP // ListTags lists kinesisanalytics service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*kinesisanalytics.Tag) tftags.KeyV return tftags.New(ctx, m) } -// GetTagsIn returns kinesisanalytics service tags from Context. +// getTagsIn returns kinesisanalytics service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*kinesisanalytics.Tag { +func getTagsIn(ctx context.Context) []*kinesisanalytics.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*kinesisanalytics.Tag { return nil } -// SetTagsOut sets kinesisanalytics service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*kinesisanalytics.Tag) { +// setTagsOut sets kinesisanalytics service tags in Context. +func setTagsOut(ctx context.Context, tags []*kinesisanalytics.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates kinesisanalytics service tags. +// updateTags updates kinesisanalytics service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn kinesisanalyticsiface.KinesisAnalyticsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn kinesisanalyticsiface.KinesisAnalyticsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn kinesisanalyticsiface.KinesisAnalytics // UpdateTags updates kinesisanalytics service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/kinesisanalytics/wait.go b/internal/service/kinesisanalytics/wait.go index 7c954745274..56a9b65c577 100644 --- a/internal/service/kinesisanalytics/wait.go +++ b/internal/service/kinesisanalytics/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalytics import ( diff --git a/internal/service/kinesisanalyticsv2/application.go b/internal/service/kinesisanalyticsv2/application.go index 0dc304ec98d..f96cb05a1c1 100644 --- a/internal/service/kinesisanalyticsv2/application.go +++ b/internal/service/kinesisanalyticsv2/application.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( @@ -933,7 +936,7 @@ func ResourceApplication() *schema.Resource { func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) applicationName := d.Get("name").(string) input := &kinesisanalyticsv2.CreateApplicationInput{ @@ -943,7 +946,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta CloudWatchLoggingOptions: expandCloudWatchLoggingOptions(d.Get("cloudwatch_logging_options").([]interface{})), RuntimeEnvironment: aws.String(d.Get("runtime_environment").(string)), ServiceExecutionRole: aws.String(d.Get("service_execution_role").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } outputRaw, err := waitIAMPropagation(ctx, func() (interface{}, error) { @@ -971,7 +974,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) application, err := FindApplicationDetailByName(ctx, conn, d.Get("name").(string)) @@ -1009,7 +1012,7 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) applicationName := d.Get("name").(string) if d.HasChanges("application_configuration", "cloudwatch_logging_options", "service_execution_role") { @@ -1501,7 +1504,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) createTimestamp, err := time.Parse(time.RFC3339, d.Get("create_timestamp").(string)) if err != nil { diff --git a/internal/service/kinesisanalyticsv2/application_snapshot.go b/internal/service/kinesisanalyticsv2/application_snapshot.go index 4ce32c3bfc2..2fbd0b9ddff 100644 --- a/internal/service/kinesisanalyticsv2/application_snapshot.go +++ b/internal/service/kinesisanalyticsv2/application_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( @@ -69,7 +72,7 @@ func ResourceApplicationSnapshot() *schema.Resource { func resourceApplicationSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) applicationName := d.Get("application_name").(string) snapshotName := d.Get("snapshot_name").(string) @@ -99,7 +102,7 @@ func resourceApplicationSnapshotCreate(ctx context.Context, d *schema.ResourceDa func resourceApplicationSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) applicationName, snapshotName, err := applicationSnapshotParseID(d.Id()) @@ -129,7 +132,7 @@ func resourceApplicationSnapshotRead(ctx context.Context, d *schema.ResourceData func resourceApplicationSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) applicationName, snapshotName, err := applicationSnapshotParseID(d.Id()) diff --git a/internal/service/kinesisanalyticsv2/application_snapshot_test.go b/internal/service/kinesisanalyticsv2/application_snapshot_test.go index d6cf880f1e6..43c335bcda4 100644 --- a/internal/service/kinesisanalyticsv2/application_snapshot_test.go +++ b/internal/service/kinesisanalyticsv2/application_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2_test import ( @@ -98,7 +101,7 @@ func TestAccKinesisAnalyticsV2ApplicationSnapshot_Disappears_application(t *test func testAccCheckApplicationSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kinesisanalyticsv2_application_snapshot" { @@ -132,7 +135,7 @@ func testAccCheckApplicationSnapshotExists(ctx context.Context, n string, v *kin return fmt.Errorf("No Kinesis Analytics v2 Application Snapshot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) application, err := tfkinesisanalyticsv2.FindSnapshotDetailsByApplicationAndSnapshotNames(ctx, conn, rs.Primary.Attributes["application_name"], rs.Primary.Attributes["snapshot_name"]) diff --git a/internal/service/kinesisanalyticsv2/application_test.go b/internal/service/kinesisanalyticsv2/application_test.go index 16ea1bcd38b..a9257e71089 100644 --- a/internal/service/kinesisanalyticsv2/application_test.go +++ b/internal/service/kinesisanalyticsv2/application_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2_test import ( @@ -4151,7 +4154,7 @@ func TestAccKinesisAnalyticsV2Application_RunConfiguration_Update(t *testing.T) func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_kinesisanalyticsv2_application" { @@ -4185,7 +4188,7 @@ func testAccCheckApplicationExists(ctx context.Context, n string, v *kinesisanal return fmt.Errorf("No Kinesis Analytics v2 Application ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) application, err := tfkinesisanalyticsv2.FindApplicationDetailByName(ctx, conn, rs.Primary.Attributes["name"]) @@ -4200,7 +4203,7 @@ func testAccCheckApplicationExists(ctx context.Context, n string, v *kinesisanal } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx) input := &kinesisanalyticsv2.ListApplicationsInput{} diff --git a/internal/service/kinesisanalyticsv2/consts.go b/internal/service/kinesisanalyticsv2/consts.go index 34c30b793b6..d34a57f1043 100644 --- a/internal/service/kinesisanalyticsv2/consts.go +++ b/internal/service/kinesisanalyticsv2/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( diff --git a/internal/service/kinesisanalyticsv2/find.go b/internal/service/kinesisanalyticsv2/find.go index 4ee50a2e121..dbbe76000aa 100644 --- a/internal/service/kinesisanalyticsv2/find.go +++ b/internal/service/kinesisanalyticsv2/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( diff --git a/internal/service/kinesisanalyticsv2/generate.go b/internal/service/kinesisanalyticsv2/generate.go index daf11cc1ac5..cb0ee2be62f 100644 --- a/internal/service/kinesisanalyticsv2/generate.go +++ b/internal/service/kinesisanalyticsv2/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=ListApplications //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package kinesisanalyticsv2 diff --git a/internal/service/kinesisanalyticsv2/id.go b/internal/service/kinesisanalyticsv2/id.go index 48d2d477842..8f4c1b59624 100644 --- a/internal/service/kinesisanalyticsv2/id.go +++ b/internal/service/kinesisanalyticsv2/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( diff --git a/internal/service/kinesisanalyticsv2/service_package_gen.go b/internal/service/kinesisanalyticsv2/service_package_gen.go index e84494775e1..c023ff61869 100644 --- a/internal/service/kinesisanalyticsv2/service_package_gen.go +++ b/internal/service/kinesisanalyticsv2/service_package_gen.go @@ -5,6 +5,10 @@ package kinesisanalyticsv2 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kinesisanalyticsv2_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisanalyticsv2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +48,13 @@ func (p *servicePackage) ServicePackageName() string { return names.KinesisAnalyticsV2 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kinesisanalyticsv2_sdkv1.KinesisAnalyticsV2, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kinesisanalyticsv2_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kinesisanalyticsv2/status.go b/internal/service/kinesisanalyticsv2/status.go index c418e5bb251..0835682e71d 100644 --- a/internal/service/kinesisanalyticsv2/status.go +++ b/internal/service/kinesisanalyticsv2/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( diff --git a/internal/service/kinesisanalyticsv2/sweep.go b/internal/service/kinesisanalyticsv2/sweep.go index ef05cd82905..fb3eb293adb 100644 --- a/internal/service/kinesisanalyticsv2/sweep.go +++ b/internal/service/kinesisanalyticsv2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/kinesisanalyticsv2" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -25,11 +27,11 @@ func init() { func sweepApplication(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).KinesisAnalyticsV2Conn() + conn := client.KinesisAnalyticsV2Conn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -73,7 +75,7 @@ func sweepApplication(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Kinesis Analytics v2 Applications: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Kinesis Analytics v2 Applications: %w", err)) } diff --git a/internal/service/kinesisanalyticsv2/tags_gen.go b/internal/service/kinesisanalyticsv2/tags_gen.go index 2e21d55ea00..2df786ef0d1 100644 --- a/internal/service/kinesisanalyticsv2/tags_gen.go +++ b/internal/service/kinesisanalyticsv2/tags_gen.go @@ -14,10 +14,10 @@ import ( 
"github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists kinesisanalyticsv2 service tags. +// listTags lists kinesisanalyticsv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn kinesisanalyticsv2iface.KinesisAnalyticsV2API, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn kinesisanalyticsv2iface.KinesisAnalyticsV2API, identifier string) (tftags.KeyValueTags, error) { input := &kinesisanalyticsv2.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn kinesisanalyticsv2iface.KinesisAnalytics // ListTags lists kinesisanalyticsv2 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*kinesisanalyticsv2.Tag) tftags.Ke return tftags.New(ctx, m) } -// GetTagsIn returns kinesisanalyticsv2 service tags from Context. +// getTagsIn returns kinesisanalyticsv2 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*kinesisanalyticsv2.Tag { +func getTagsIn(ctx context.Context) []*kinesisanalyticsv2.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*kinesisanalyticsv2.Tag { return nil } -// SetTagsOut sets kinesisanalyticsv2 service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*kinesisanalyticsv2.Tag) { +// setTagsOut sets kinesisanalyticsv2 service tags in Context. +func setTagsOut(ctx context.Context, tags []*kinesisanalyticsv2.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates kinesisanalyticsv2 service tags. +// updateTags updates kinesisanalyticsv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn kinesisanalyticsv2iface.KinesisAnalyticsV2API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn kinesisanalyticsv2iface.KinesisAnalyticsV2API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn kinesisanalyticsv2iface.KinesisAnalyti // UpdateTags updates kinesisanalyticsv2 service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KinesisAnalyticsV2Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/kinesisanalyticsv2/wait.go b/internal/service/kinesisanalyticsv2/wait.go index 11cc51cd84a..76ee1be56f1 100644 --- a/internal/service/kinesisanalyticsv2/wait.go +++ b/internal/service/kinesisanalyticsv2/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kinesisanalyticsv2 import ( diff --git a/internal/service/kinesisvideo/generate.go b/internal/service/kinesisvideo/generate.go index 3123b8a7e78..2d9dcbaed11 100644 --- a/internal/service/kinesisvideo/generate.go +++ b/internal/service/kinesisvideo/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsForStream -ListTagsInIDElem=StreamARN -ServiceTagsMap -TagOp=TagStream -TagInIDElem=StreamARN -UntagOp=UntagStream -UntagInTagsElem=TagKeyList -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package kinesisvideo diff --git a/internal/service/kinesisvideo/service_package_gen.go b/internal/service/kinesisvideo/service_package_gen.go index 6190047247e..43426de01a1 100644 --- a/internal/service/kinesisvideo/service_package_gen.go +++ b/internal/service/kinesisvideo/service_package_gen.go @@ -5,6 +5,10 @@ package kinesisvideo import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kinesisvideo_sdkv1 "github.com/aws/aws-sdk-go/service/kinesisvideo" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string { return names.KinesisVideo } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kinesisvideo_sdkv1.KinesisVideo, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kinesisvideo_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kinesisvideo/stream.go b/internal/service/kinesisvideo/stream.go index c221c69ccae..92ac2f73402 100644 --- a/internal/service/kinesisvideo/stream.go +++ b/internal/service/kinesisvideo/stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesisvideo import ( @@ -98,12 +101,12 @@ func ResourceStream() *schema.Resource { func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisVideoConn() + conn := meta.(*conns.AWSClient).KinesisVideoConn(ctx) input := &kinesisvideo.CreateStreamInput{ StreamName: aws.String(d.Get("name").(string)), DataRetentionInHours: aws.Int64(int64(d.Get("data_retention_in_hours").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("device_name"); ok { @@ -144,7 +147,7 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisVideoConn() + conn := meta.(*conns.AWSClient).KinesisVideoConn(ctx) descOpts := &kinesisvideo.DescribeStreamInput{ StreamARN: aws.String(d.Id()), @@ -176,7 +179,7 @@ func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceStreamUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).KinesisVideoConn() + conn := meta.(*conns.AWSClient).KinesisVideoConn(ctx) updateOpts := &kinesisvideo.UpdateStreamInput{ StreamARN: aws.String(d.Id()), @@ -213,7 +216,7 @@ func resourceStreamUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KinesisVideoConn() + conn := meta.(*conns.AWSClient).KinesisVideoConn(ctx) if _, err := conn.DeleteStreamWithContext(ctx, &kinesisvideo.DeleteStreamInput{ StreamARN: aws.String(d.Id()), diff --git a/internal/service/kinesisvideo/stream_test.go b/internal/service/kinesisvideo/stream_test.go index 9c4d1f01371..8e79563a94f 100644 --- a/internal/service/kinesisvideo/stream_test.go +++ b/internal/service/kinesisvideo/stream_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kinesisvideo_test import ( @@ -179,7 +182,7 @@ func TestAccKinesisVideoStream_disappears(t *testing.T) { func testAccCheckStreamDisappears(ctx context.Context, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisVideoConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisVideoConn(ctx) rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -227,7 +230,7 @@ func testAccCheckStreamExists(ctx context.Context, n string, stream *kinesisvide return fmt.Errorf("No Kinesis ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisVideoConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisVideoConn(ctx) describeOpts := &kinesisvideo.DescribeStreamInput{ StreamARN: aws.String(rs.Primary.ID), } @@ -248,7 +251,7 @@ func testAccCheckStreamDestroy(ctx context.Context) resource.TestCheckFunc { if rs.Type != "aws_kinesis_video_stream" { continue } - conn := 
acctest.Provider.Meta().(*conns.AWSClient).KinesisVideoConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KinesisVideoConn(ctx) describeOpts := &kinesisvideo.DescribeStreamInput{ StreamARN: aws.String(rs.Primary.ID), } diff --git a/internal/service/kinesisvideo/tags_gen.go b/internal/service/kinesisvideo/tags_gen.go index 880e7c1e1c2..732217ce87e 100644 --- a/internal/service/kinesisvideo/tags_gen.go +++ b/internal/service/kinesisvideo/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists kinesisvideo service tags. +// listTags lists kinesisvideo service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn kinesisvideoiface.KinesisVideoAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn kinesisvideoiface.KinesisVideoAPI, identifier string) (tftags.KeyValueTags, error) { input := &kinesisvideo.ListTagsForStreamInput{ StreamARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn kinesisvideoiface.KinesisVideoAPI, ident // ListTags lists kinesisvideo service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KinesisVideoConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KinesisVideoConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from kinesisvideo service tags. +// KeyValueTags creates tftags.KeyValueTags from kinesisvideo service tags. 
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns kinesisvideo service tags from Context.
+// getTagsIn returns kinesisvideo service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets kinesisvideo service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets kinesisvideo service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates kinesisvideo service tags.
+// updateTags updates kinesisvideo service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn kinesisvideoiface.KinesisVideoAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn kinesisvideoiface.KinesisVideoAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn kinesisvideoiface.KinesisVideoAPI, ide
 // UpdateTags updates kinesisvideo service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).KinesisVideoConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).KinesisVideoConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/kms/alias.go b/internal/service/kms/alias.go
index b889e5c87fd..bec4e842d25 100644
--- a/internal/service/kms/alias.go
+++ b/internal/service/kms/alias.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -67,7 +70,7 @@ func ResourceAlias() *schema.Resource {
 func resourceAliasCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	namePrefix := d.Get("name_prefix").(string)
 	if namePrefix == "" {
@@ -98,7 +101,7 @@ func resourceAliasCreate(ctx context.Context, d *schema.ResourceData, meta inter
 func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, PropagationTimeout, func() (interface{}, error) {
 		return FindAliasByName(ctx, conn, d.Id())
@@ -133,7 +136,7 @@ func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceAliasUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if d.HasChange("target_key_id") {
 		input := &kms.UpdateAliasInput{
@@ -154,7 +157,7 @@ func resourceAliasUpdate(ctx context.Context, d *schema.ResourceData, meta inter
 func resourceAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	log.Printf("[DEBUG] Deleting KMS Alias: (%s)", d.Id())
 	_, err := conn.DeleteAliasWithContext(ctx, &kms.DeleteAliasInput{
diff --git a/internal/service/kms/alias_data_source.go b/internal/service/kms/alias_data_source.go
index 87e7a7f0e48..a1c89de5ba9 100644
--- a/internal/service/kms/alias_data_source.go
+++ b/internal/service/kms/alias_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -38,7 +41,7 @@ func DataSourceAlias() *schema.Resource {
 func dataSourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	target := d.Get("name").(string)
diff --git a/internal/service/kms/alias_data_source_test.go b/internal/service/kms/alias_data_source_test.go
index baaf62aa7ca..f8874294e4e 100644
--- a/internal/service/kms/alias_data_source_test.go
+++ b/internal/service/kms/alias_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/alias_test.go b/internal/service/kms/alias_test.go
index d87bfe5066f..5c0c48f8b34 100644
--- a/internal/service/kms/alias_test.go
+++ b/internal/service/kms/alias_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
@@ -239,7 +242,7 @@ func TestAccKMSAlias_arnDiffSuppress(t *testing.T) {
 func testAccCheckAliasDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_kms_alias" {
@@ -274,7 +277,7 @@ func testAccCheckAliasExists(ctx context.Context, name string, v *kms.AliasListE
 			return fmt.Errorf("No KMS Alias ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		output, err := tfkms.FindAliasByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/kms/arn.go b/internal/service/kms/arn.go
index 9024709d734..078f24f72ee 100644
--- a/internal/service/kms/arn.go
+++ b/internal/service/kms/arn.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
diff --git a/internal/service/kms/arn_test.go b/internal/service/kms/arn_test.go
index cc5d7c6705e..067b98c9c76 100644
--- a/internal/service/kms/arn_test.go
+++ b/internal/service/kms/arn_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/ciphertext.go b/internal/service/kms/ciphertext.go
index 9355488dc98..1936e75c306 100644
--- a/internal/service/kms/ciphertext.go
+++ b/internal/service/kms/ciphertext.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -53,7 +56,7 @@ func ResourceCiphertext() *schema.Resource {
 func resourceCiphertextCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	//lintignore:R017 // Allow legacy unstable ID usage in managed resource
 	d.SetId(time.Now().UTC().String())
diff --git a/internal/service/kms/ciphertext_data_source.go b/internal/service/kms/ciphertext_data_source.go
index 595e3857755..facd994d0de 100644
--- a/internal/service/kms/ciphertext_data_source.go
+++ b/internal/service/kms/ciphertext_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -47,7 +50,7 @@ func DataSourceCiphertext() *schema.Resource {
 func dataSourceCiphertextRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyID := d.Get("key_id").(string)
 	req := &kms.EncryptInput{
diff --git a/internal/service/kms/ciphertext_data_source_test.go b/internal/service/kms/ciphertext_data_source_test.go
index 0fb83f542ea..e1baf508c78 100644
--- a/internal/service/kms/ciphertext_data_source_test.go
+++ b/internal/service/kms/ciphertext_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/ciphertext_test.go b/internal/service/kms/ciphertext_test.go
index 3cc8b2dfd37..b6804953181 100644
--- a/internal/service/kms/ciphertext_test.go
+++ b/internal/service/kms/ciphertext_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/consts.go b/internal/service/kms/consts.go
index cec87d40e0a..e4074fbacf2 100644
--- a/internal/service/kms/consts.go
+++ b/internal/service/kms/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
diff --git a/internal/service/kms/custom_key_store.go b/internal/service/kms/custom_key_store.go
index 825d14203a1..651d6f61b9a 100644
--- a/internal/service/kms/custom_key_store.go
+++ b/internal/service/kms/custom_key_store.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -64,7 +67,7 @@ const (
 )
 
 func resourceCustomKeyStoreCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	in := &kms.CreateCustomKeyStoreInput{
 		CloudHsmClusterId: aws.String(d.Get("cloud_hsm_cluster_id").(string)),
@@ -88,7 +91,7 @@ func resourceCustomKeyStoreCreate(ctx context.Context, d *schema.ResourceData, m
 }
 
 func resourceCustomKeyStoreRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	in := &kms.DescribeCustomKeyStoresInput{
 		CustomKeyStoreId: aws.String(d.Id()),
@@ -113,7 +116,7 @@ func resourceCustomKeyStoreRead(ctx context.Context, d *schema.ResourceData, met
 }
 
 func resourceCustomKeyStoreUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	update := false
 
@@ -145,7 +148,7 @@ func resourceCustomKeyStoreUpdate(ctx context.Context, d *schema.ResourceData, m
 }
 
 func resourceCustomKeyStoreDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	log.Printf("[INFO] Deleting KMS CustomKeyStore %s", d.Id())
diff --git a/internal/service/kms/custom_key_store_data_source.go b/internal/service/kms/custom_key_store_data_source.go
index faab8050a32..4194f22c37c 100644
--- a/internal/service/kms/custom_key_store_data_source.go
+++ b/internal/service/kms/custom_key_store_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -55,7 +58,7 @@ const (
 )
 
 func dataSourceCustomKeyStoreRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	input := &kms.DescribeCustomKeyStoresInput{}
diff --git a/internal/service/kms/custom_key_store_data_source_test.go b/internal/service/kms/custom_key_store_data_source_test.go
index e6edd975ebd..bfc3fdc688f 100644
--- a/internal/service/kms/custom_key_store_data_source_test.go
+++ b/internal/service/kms/custom_key_store_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/custom_key_store_test.go b/internal/service/kms/custom_key_store_test.go
index a67d2187f4e..2c681279f9b 100644
--- a/internal/service/kms/custom_key_store_test.go
+++ b/internal/service/kms/custom_key_store_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
@@ -156,7 +159,7 @@ func testAccCustomKeyStore_disappears(t *testing.T) {
 func testAccCheckCustomKeyStoreDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_kms_custom_key_store" {
@@ -190,7 +193,7 @@ func testAccCheckCustomKeyStoreExists(ctx context.Context, name string, customke
 			return create.Error(names.KMS, create.ErrActionCheckingExistence, tfkms.ResNameCustomKeyStore, name, errors.New("not set"))
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		in := &kms.DescribeCustomKeyStoresInput{
 			CustomKeyStoreId: aws.String(rs.Primary.ID),
@@ -208,7 +211,7 @@ func testAccCheckCustomKeyStoreExists(ctx context.Context, name string, customke
 }
 
 func testAccCustomKeyStoresPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 	input := &kms.DescribeCustomKeyStoresInput{}
 	_, err := conn.DescribeCustomKeyStoresWithContext(ctx, input)
diff --git a/internal/service/kms/diff.go b/internal/service/kms/diff.go
index 1f1e97b3951..087d226b4e5 100644
--- a/internal/service/kms/diff.go
+++ b/internal/service/kms/diff.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
diff --git a/internal/service/kms/diff_test.go b/internal/service/kms/diff_test.go
index c7ac7a5b3c2..91ad22e6150 100644
--- a/internal/service/kms/diff_test.go
+++ b/internal/service/kms/diff_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
diff --git a/internal/service/kms/external_key.go b/internal/service/kms/external_key.go
index 6a13cdf6934..2f46ef1bc38 100644
--- a/internal/service/kms/external_key.go
+++ b/internal/service/kms/external_key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -119,13 +122,13 @@ func ResourceExternalKey() *schema.Resource {
 func resourceExternalKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	input := &kms.CreateKeyInput{
 		BypassPolicyLockoutSafetyCheck: aws.Bool(d.Get("bypass_policy_lockout_safety_check").(bool)),
 		KeyUsage:                       aws.String(kms.KeyUsageTypeEncryptDecrypt),
 		Origin:                         aws.String(kms.OriginTypeExternal),
-		Tags:                           GetTagsIn(ctx),
+		Tags:                           getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("description"); ok {
@@ -191,8 +194,8 @@ func resourceExternalKeyCreate(ctx context.Context, d *schema.ResourceData, meta
 		}
 	}
 
-	if tags := KeyValueTags(ctx, GetTagsIn(ctx)); len(tags) > 0 {
-		if err := WaitTagsPropagated(ctx, conn, d.Id(), tags); err != nil {
+	if tags := KeyValueTags(ctx, getTagsIn(ctx)); len(tags) > 0 {
+		if err := waitTagsPropagated(ctx, conn, d.Id(), tags); err != nil {
 			return sdkdiag.AppendErrorf(diags, "waiting for KMS External Key (%s) tag propagation: %s", d.Id(), err)
 		}
 	}
@@ -202,7 +205,7 @@ func resourceExternalKeyCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceExternalKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	key, err := findKey(ctx, conn, d.Id(), d.IsNewResource())
 
@@ -250,14 +253,14 @@ func resourceExternalKeyRead(ctx context.Context, d *schema.ResourceData, meta i
 		d.Set("valid_to", nil)
 	}
 
-	SetTagsOut(ctx, key.tags)
+	setTagsOut(ctx, key.tags)
 
 	return diags
 }
 
 func resourceExternalKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if hasChange, enabled, state := d.HasChange("enabled"), d.Get("enabled").(bool), d.Get("key_state").(string); hasChange && enabled && state != kms.KeyStatePendingImport {
 		// Enable before any attributes are modified.
@@ -301,18 +304,12 @@ func resourceExternalKeyUpdate(ctx context.Context, d *schema.ResourceData, meta
 		}
 	}
 
-	if d.HasChange("tags_all") {
-		if err := WaitTagsPropagated(ctx, conn, d.Id(), tftags.New(ctx, d.Get("tags_all").(map[string]interface{}))); err != nil {
-			return sdkdiag.AppendErrorf(diags, "waiting for KMS External Key (%s) tag propagation: %s", d.Id(), err)
-		}
-	}
-
 	return append(diags, resourceExternalKeyRead(ctx, d, meta)...)
 }
 
 func resourceExternalKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	input := &kms.ScheduleKeyDeletionInput{
 		KeyId: aws.String(d.Id()),
diff --git a/internal/service/kms/external_key_test.go b/internal/service/kms/external_key_test.go
index 829fc6b3cbf..a73b5c757fb 100644
--- a/internal/service/kms/external_key_test.go
+++ b/internal/service/kms/external_key_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
@@ -55,7 +58,6 @@ func TestAccKMSExternalKey_basic(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 		},
@@ -111,7 +113,6 @@ func TestAccKMSExternalKey_multiRegion(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 		},
@@ -144,7 +145,6 @@ func TestAccKMSExternalKey_deletionWindowInDays(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -185,7 +185,6 @@ func TestAccKMSExternalKey_description(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -226,7 +225,6 @@ func TestAccKMSExternalKey_enabled(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -276,7 +274,6 @@ func TestAccKMSExternalKey_keyMaterialBase64(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -320,7 +317,6 @@ func TestAccKMSExternalKey_policy(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -363,7 +359,6 @@ func TestAccKMSExternalKey_policyBypass(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 		},
@@ -397,7 +392,6 @@ func TestAccKMSExternalKey_tags(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -419,6 +413,23 @@ func TestAccKMSExternalKey_tags(t *testing.T) {
 					resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"),
 				),
 			},
+			{
+				Config: testAccExternalKeyConfig_tags0(rName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckExternalKeyExists(ctx, resourceName, &key3),
+					testAccCheckExternalKeyNotRecreated(&key2, &key3),
+					resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+				ImportStateVerifyIgnore: []string{
+					"bypass_policy_lockout_safety_check",
+					"deletion_window_in_days",
+				},
+			},
 		},
 	})
 }
@@ -452,7 +463,6 @@ func TestAccKMSExternalKey_validTo(t *testing.T) {
 				ImportStateVerifyIgnore: []string{
 					"bypass_policy_lockout_safety_check",
 					"deletion_window_in_days",
-					"key_material_base64",
 				},
 			},
 			{
@@ -497,7 +507,7 @@ func testAccCheckExternalKeyHasPolicy(ctx context.Context, name string, expected
 			return fmt.Errorf("No KMS External Key ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		output, err := tfkms.FindKeyPolicyByKeyIDAndPolicyName(ctx, conn, rs.Primary.ID, tfkms.PolicyNameDefault)
 
@@ -522,7 +532,7 @@ func testAccCheckExternalKeyHasPolicy(ctx context.Context, name string, expected
 
 func testAccCheckExternalKeyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_kms_external_key" {
@@ -557,7 +567,7 @@ func testAccCheckExternalKeyExists(ctx context.Context, name string, key *kms.Ke
 			return fmt.Errorf("No KMS External Key ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		outputRaw, err := tfresource.RetryWhenNotFound(ctx, tfkms.PropagationTimeout, func() (interface{}, error) {
 			return tfkms.FindKeyByID(ctx, conn, rs.Primary.ID)
@@ -701,6 +711,15 @@ resource "aws_kms_external_key" "test" {
 `, rName, tagKey1, tagValue1, tagKey2, tagValue2)
 }
 
+func testAccExternalKeyConfig_tags0(rName string) string {
+	return fmt.Sprintf(`
+resource "aws_kms_external_key" "test" {
+  description             = %[1]q
+  deletion_window_in_days = 7
+}
+`, rName)
+}
+
 func testAccExternalKeyConfig_validTo(rName, validTo string) string {
 	return fmt.Sprintf(`
 # ACCEPTANCE TESTING ONLY -- NEVER EXPOSE YOUR KEY MATERIAL
diff --git a/internal/service/kms/find.go b/internal/service/kms/find.go
index b19592d3e00..e7045b4456c 100644
--- a/internal/service/kms/find.go
+++ b/internal/service/kms/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -99,7 +102,7 @@ func FindKeyByID(ctx context.Context, conn *kms.KMS, id string) (*kms.KeyMetadat
 }
 
 func FindDefaultKey(ctx context.Context, service, region string, meta interface{}) (string, error) {
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if aws.StringValue(conn.Config.Region) != region {
 		session, err := conns.NewSessionForRegion(&conn.Config, region, meta.(*conns.AWSClient).TerraformVersion)
diff --git a/internal/service/kms/generate.go b/internal/service/kms/generate.go
index 5744f8d783a..da2cdd33aae 100644
--- a/internal/service/kms/generate.go
+++ b/internal/service/kms/generate.go
@@ -1,4 +1,8 @@
-//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListResourceTags -ListTagsInIDElem=KeyId -ServiceTagsSlice -TagInIDElem=KeyId -TagTypeKeyElem=TagKey -TagTypeValElem=TagValue -UpdateTags
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListResourceTags -ListTagsInIDElem=KeyId -ServiceTagsSlice -TagInIDElem=KeyId -TagTypeKeyElem=TagKey -TagTypeValElem=TagValue -UpdateTags -Wait -WaitContinuousOccurence 5 -WaitMinTimeout 1s -WaitTimeout 10m -ParentNotFoundErrCode=NotFoundException
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package kms
diff --git a/internal/service/kms/grant.go b/internal/service/kms/grant.go
index b78c47ca6c2..d4b7f0520f1 100644
--- a/internal/service/kms/grant.go
+++ b/internal/service/kms/grant.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -130,7 +133,7 @@ func ResourceGrant() *schema.Resource {
 func resourceGrantCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyID := d.Get("key_id").(string)
 	input := &kms.CreateGrantInput{
@@ -184,7 +187,7 @@ func resourceGrantRead(ctx context.Context, d *schema.ResourceData, meta interfa
 		timeout = 3 * time.Minute
 	)
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyID, grantID, err := GrantParseResourceID(d.Id())
 
@@ -227,7 +230,7 @@ func resourceGrantRead(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceGrantDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyID, grantID, err := GrantParseResourceID(d.Id())
diff --git a/internal/service/kms/grant_test.go b/internal/service/kms/grant_test.go
index 65672793baf..e27a580751a 100644
--- a/internal/service/kms/grant_test.go
+++ b/internal/service/kms/grant_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
@@ -276,7 +279,7 @@ func TestAccKMSGrant_crossAccountARN(t *testing.T) {
 func testAccCheckGrantDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_kms_grant" {
@@ -323,7 +326,7 @@ func testAccCheckGrantExists(ctx context.Context, n string) resource.TestCheckFu
 			return err
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		_, err = tfkms.FindGrantByTwoPartKey(ctx, conn, keyID, grantID)
diff --git a/internal/service/kms/key.go b/internal/service/kms/key.go
index c026122b66f..ed15821c52e 100644
--- a/internal/service/kms/key.go
+++ b/internal/service/kms/key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -117,13 +120,13 @@ func ResourceKey() *schema.Resource {
 func resourceKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	input := &kms.CreateKeyInput{
 		BypassPolicyLockoutSafetyCheck: aws.Bool(d.Get("bypass_policy_lockout_safety_check").(bool)),
 		CustomerMasterKeySpec:          aws.String(d.Get("customer_master_key_spec").(string)),
 		KeyUsage:                       aws.String(d.Get("key_usage").(string)),
-		Tags:                           GetTagsIn(ctx),
+		Tags:                           getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("description"); ok {
@@ -181,8 +184,8 @@ func resourceKeyCreate(ctx context.Context, d *schema.ResourceData, meta interfa
 		}
 	}
 
-	if tags := KeyValueTags(ctx, GetTagsIn(ctx)); len(tags) > 0 {
-		if err := WaitTagsPropagated(ctx, conn, d.Id(), tags); err != nil {
+	if tags := KeyValueTags(ctx, getTagsIn(ctx)); len(tags) > 0 {
+		if err := waitTagsPropagated(ctx, conn, d.Id(), tags); err != nil {
 			return sdkdiag.AppendErrorf(diags, "waiting for KMS Key (%s) tag propagation: %s", d.Id(), err)
 		}
 	}
@@ -192,7 +195,7 @@ func resourceKeyCreate(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	key, err := findKey(ctx, conn, d.Id(), d.IsNewResource())
 
@@ -227,14 +230,14 @@ func resourceKeyRead(ctx context.Context, d *schema.ResourceData, meta interface
 	d.Set("policy", policyToSet)
 
-	SetTagsOut(ctx, key.tags)
+	setTagsOut(ctx, key.tags)
 
 	return diags
 }
 
 func resourceKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if hasChange, enabled := d.HasChange("is_enabled"), d.Get("is_enabled").(bool); hasChange && enabled {
 		// Enable before any attributes are modified.
@@ -268,18 +271,12 @@ func resourceKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interfa
 		}
 	}
 
-	if d.HasChange("tags_all") {
-		if err := WaitTagsPropagated(ctx, conn, d.Id(), tftags.New(ctx, d.Get("tags_all").(map[string]interface{}))); err != nil {
-			return sdkdiag.AppendErrorf(diags, "waiting for KMS Key (%s) tag propagation: %s", d.Id(), err)
-		}
-	}
-
 	return append(diags, resourceKeyRead(ctx, d, meta)...)
 }
 
 func resourceKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	input := &kms.ScheduleKeyDeletionInput{
 		KeyId: aws.String(d.Id()),
@@ -350,7 +347,7 @@ func findKey(ctx context.Context, conn *kms.KMS, keyID string, isNewResource boo
 		}
 	}
 
-	tags, err := ListTags(ctx, conn, keyID)
+	tags, err := listTags(ctx, conn, keyID)
 
 	if tfawserr.ErrCodeEquals(err, kms.ErrCodeNotFoundException) {
 		return nil, &retry.NotFoundError{LastError: err}
diff --git a/internal/service/kms/key_data_source.go b/internal/service/kms/key_data_source.go
index 6ac8e64980a..f538a51690e 100644
--- a/internal/service/kms/key_data_source.go
+++ b/internal/service/kms/key_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -162,7 +165,7 @@ func DataSourceKey() *schema.Resource {
 func dataSourceKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyID := d.Get("key_id").(string)
 	input := &kms.DescribeKeyInput{
diff --git a/internal/service/kms/key_data_source_test.go b/internal/service/kms/key_data_source_test.go
index 020667f4ec7..c627fefce4b 100644
--- a/internal/service/kms/key_data_source_test.go
+++ b/internal/service/kms/key_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/key_policy.go b/internal/service/kms/key_policy.go
index 407f7aff2d8..8b3891cadc4 100644
--- a/internal/service/kms/key_policy.go
+++ b/internal/service/kms/key_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -54,7 +57,7 @@ func ResourceKeyPolicy() *schema.Resource {
 func resourceKeyPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyID := d.Get("key_id").(string)
 
@@ -69,7 +72,7 @@ func resourceKeyPolicyCreate(ctx context.Context, d *schema.ResourceData, meta i
 func resourceKeyPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	key, err := findKey(ctx, conn, d.Id(), d.IsNewResource())
 
@@ -96,7 +99,7 @@ func resourceKeyPolicyRead(ctx context.Context, d *schema.ResourceData, meta int
 func resourceKeyPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if d.HasChange("policy") {
 		if err := updateKeyPolicy(ctx, conn, d.Id(), d.Get("policy").(string), d.Get("bypass_policy_lockout_safety_check").(bool)); err != nil {
@@ -109,7 +112,7 @@ func resourceKeyPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta i
 func resourceKeyPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if !d.Get("bypass_policy_lockout_safety_check").(bool) {
 		if err := updateKeyPolicy(ctx, conn, d.Get("key_id").(string), meta.(*conns.AWSClient).DefaultKMSKeyPolicy(), d.Get("bypass_policy_lockout_safety_check").(bool)); err != nil {
diff --git a/internal/service/kms/key_policy_test.go b/internal/service/kms/key_policy_test.go
index 53d8d3f61f6..6eaddf60c60 100644
--- a/internal/service/kms/key_policy_test.go
+++ b/internal/service/kms/key_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/key_test.go b/internal/service/kms/key_test.go
index 4f10b79ddfb..7539229abb5 100644
--- a/internal/service/kms/key_test.go
+++ b/internal/service/kms/key_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
@@ -45,6 +48,13 @@ func TestAccKMSKey_basic(t *testing.T) {
 				ImportStateVerify: true,
 				ImportStateVerifyIgnore: []string{"deletion_window_in_days", "bypass_policy_lockout_safety_check"},
 			},
+			{
+				// Set deletion window to 7 days
+				Config: testAccKeyConfig_basicDeletionWindow(),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckKeyExists(ctx, resourceName, &key),
+				),
+			},
 		},
 	})
 }
@@ -489,6 +499,19 @@ func TestAccKMSKey_tags(t *testing.T) {
 					resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"),
 				),
 			},
+			{
+				Config: testAccKeyConfig_tags0(rName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckKeyExists(ctx, resourceName, &key),
+					resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+				ImportStateVerifyIgnore: []string{"deletion_window_in_days", "bypass_policy_lockout_safety_check"},
+			},
 		},
 	})
 }
@@ -504,7 +527,7 @@ func testAccCheckKeyHasPolicy(ctx context.Context, name string, expectedPolicyTe
 			return fmt.Errorf("No KMS Key ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		out, err := conn.GetKeyPolicyWithContext(ctx, &kms.GetKeyPolicyInput{
 			KeyId: aws.String(rs.Primary.ID),
@@ -531,7 +554,7 @@ func testAccCheckKeyHasPolicy(ctx context.Context, name string, expectedPolicyTe
 
 func testAccCheckKeyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_kms_key" {
@@ -566,7 +589,7 @@ func testAccCheckKeyExists(ctx context.Context, name string, key *kms.KeyMetadat
 			return fmt.Errorf("No KMS Key ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx)
 
 		outputRaw, err := tfresource.RetryWhenNotFound(ctx, tfkms.PropagationTimeout, func() (interface{}, error) {
 			return tfkms.FindKeyByID(ctx, conn, rs.Primary.ID)
@@ -588,6 +611,14 @@ resource "aws_kms_key" "test" {}
 `
 }
 
+func testAccKeyConfig_basicDeletionWindow() string {
+	return `
+resource "aws_kms_key" "test" {
+  deletion_window_in_days = 7
+}
+`
+}
+
 func testAccKeyConfig_name(rName string) string {
 	return fmt.Sprintf(`
 resource "aws_kms_key" "test" {
@@ -1028,7 +1059,8 @@ resource "aws_kms_key" "test" {
 func testAccKeyConfig_tags1(rName, tagKey1, tagValue1 string) string {
 	return fmt.Sprintf(`
 resource "aws_kms_key" "test" {
-  description = %[1]q
+  description             = %[1]q
+  deletion_window_in_days = 7
 
   tags = {
     %[2]q = %[3]q
@@ -1040,7 +1072,8 @@ resource "aws_kms_key" "test" {
 func testAccKeyConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string {
 	return fmt.Sprintf(`
 resource "aws_kms_key" "test" {
-  description = %[1]q
+  description             = %[1]q
+  deletion_window_in_days = 7
 
   tags = {
     %[2]q = %[3]q
@@ -1049,3 +1082,12 @@ resource "aws_kms_key" "test" {
 }
 `, rName, tagKey1, tagValue1, tagKey2, tagValue2)
 }
+
+func testAccKeyConfig_tags0(rName string) string {
+	return fmt.Sprintf(`
+resource "aws_kms_key" "test" {
+  description             = %[1]q
+  deletion_window_in_days = 7
+}
+`, rName)
+}
diff --git a/internal/service/kms/kms_test.go b/internal/service/kms/kms_test.go
index f6145ed005b..afd6b2eb360 100644
--- a/internal/service/kms/kms_test.go
+++ b/internal/service/kms/kms_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/public_key_data_source.go b/internal/service/kms/public_key_data_source.go
index 73e764da675..045fd5dd6ac 100644
--- a/internal/service/kms/public_key_data_source.go
+++ b/internal/service/kms/public_key_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -65,7 +68,7 @@ func DataSourcePublicKey() *schema.Resource {
 func dataSourcePublicKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	keyId := d.Get("key_id").(string)
 	input := &kms.GetPublicKeyInput{
diff --git a/internal/service/kms/public_key_data_source_test.go b/internal/service/kms/public_key_data_source_test.go
index 4921c5ac8ce..cf6714eed28 100644
--- a/internal/service/kms/public_key_data_source_test.go
+++ b/internal/service/kms/public_key_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms_test
 
 import (
diff --git a/internal/service/kms/replica_external_key.go b/internal/service/kms/replica_external_key.go
index aeffe8362cb..5255d185105 100644
--- a/internal/service/kms/replica_external_key.go
+++ b/internal/service/kms/replica_external_key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package kms
 
 import (
@@ -110,7 +113,7 @@ func ResourceReplicaExternalKey() *schema.Resource {
 func resourceReplicaExternalKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	// e.g. arn:aws:kms:us-east-2:111122223333:key/mrk-1234abcd12ab34cd56ef1234567890ab
 	primaryKeyARN, err := arn.Parse(d.Get("primary_key_arn").(string))
@@ -122,7 +125,7 @@ func resourceReplicaExternalKeyCreate(ctx context.Context, d *schema.ResourceDat
 	input := &kms.ReplicateKeyInput{
 		KeyId:         aws.String(strings.TrimPrefix(primaryKeyARN.Resource, "key/")),
 		ReplicaRegion: aws.String(meta.(*conns.AWSClient).Region),
-		Tags:          GetTagsIn(ctx),
+		Tags:          getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("bypass_policy_lockout_safety_check"); ok {
@@ -191,8 +194,8 @@ func resourceReplicaExternalKeyCreate(ctx context.Context, d *schema.ResourceDat
 		}
 	}
 
-	if tags := KeyValueTags(ctx, GetTagsIn(ctx)); len(tags) > 0 {
-		if err := WaitTagsPropagated(ctx, conn, d.Id(), tags); err != nil {
+	if tags := KeyValueTags(ctx, getTagsIn(ctx)); len(tags) > 0 {
+		if err := waitTagsPropagated(ctx, conn, d.Id(), tags); err != nil {
 			return sdkdiag.AppendErrorf(diags, "waiting for KMS Replica External Key (%s) tag propagation: %s", d.Id(), err)
 		}
 	}
@@ -202,7 +205,7 @@ func resourceReplicaExternalKeyCreate(ctx context.Context, d *schema.ResourceDat
 func resourceReplicaExternalKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	key, err := findKey(ctx, conn, d.Id(), d.IsNewResource())
 
@@ -251,14 +254,14 @@ func resourceReplicaExternalKeyRead(ctx context.Context, d *schema.ResourceData,
 		d.Set("valid_to", nil)
 	}
 
-	SetTagsOut(ctx, key.tags)
+	setTagsOut(ctx, key.tags)
 
 	return diags
 }
 
 func resourceReplicaExternalKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).KMSConn()
+	conn := meta.(*conns.AWSClient).KMSConn(ctx)
 
 	if hasChange, enabled, state := d.HasChange("enabled"), d.Get("enabled").(bool), d.Get("key_state").(string); hasChange && enabled && state !=
kms.KeyStatePendingImport { // Enable before any attributes are modified. @@ -302,18 +305,12 @@ func resourceReplicaExternalKeyUpdate(ctx context.Context, d *schema.ResourceDat } } - if d.HasChange("tags_all") { - if err := WaitTagsPropagated(ctx, conn, d.Id(), tftags.New(ctx, d.Get("tags_all").(map[string]interface{}))); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for KMS Replica External Key (%s) tag propagation: %s", d.Id(), err) - } - } - return append(diags, resourceReplicaExternalKeyRead(ctx, d, meta)...) } func resourceReplicaExternalKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) input := &kms.ScheduleKeyDeletionInput{ KeyId: aws.String(d.Id()), diff --git a/internal/service/kms/replica_external_key_test.go b/internal/service/kms/replica_external_key_test.go index cc2676081ad..4eeb179bb5f 100644 --- a/internal/service/kms/replica_external_key_test.go +++ b/internal/service/kms/replica_external_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kms_test import ( @@ -213,6 +216,23 @@ func TestAccKMSReplicaExternalKey_tags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, + { + Config: testAccReplicaExternalKeyConfig_tags0(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKeyExists(ctx, resourceName, &key), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "bypass_policy_lockout_safety_check", + "deletion_window_in_days", + "key_material_base64", + }, + }, }, }) } @@ -314,7 +334,7 @@ resource "aws_kms_external_key" "test" { } resource "aws_kms_replica_external_key" "test" { - description = %[2]q + description = "%[1]s-Replica" enabled = true primary_key_arn = aws_kms_external_key.test.arn @@ -349,7 +369,7 @@ resource "aws_kms_external_key" "test" { } resource "aws_kms_replica_external_key" "test" { - description = %[2]q + description = "%[1]s-Replica" enabled = true primary_key_arn = aws_kms_external_key.test.arn @@ -364,3 +384,30 @@ resource "aws_kms_replica_external_key" "test" { } `, rName, tagKey1, tagValue1, tagKey2, tagValue2)) } + +func testAccReplicaExternalKeyConfig_tags0(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAlternateRegionProvider(), fmt.Sprintf(` +# ACCEPTANCE TESTING ONLY -- NEVER EXPOSE YOUR KEY MATERIAL +resource "aws_kms_external_key" "test" { + provider = awsalternate + + description = %[1]q + multi_region = true + enabled = true + + key_material_base64 = "Wblj06fduthWggmsT0cLVoIMOkeLbc2kVfMud77i/JY=" + + deletion_window_in_days = 7 +} + +resource "aws_kms_replica_external_key" "test" { + description = "%[1]s-Replica" + enabled = true + primary_key_arn = aws_kms_external_key.test.arn + + key_material_base64 = "Wblj06fduthWggmsT0cLVoIMOkeLbc2kVfMud77i/JY=" + + deletion_window_in_days = 7 +} +`, rName)) +} diff --git 
a/internal/service/kms/replica_key.go b/internal/service/kms/replica_key.go index 15e5ace8f1d..cf4b43b981f 100644 --- a/internal/service/kms/replica_key.go +++ b/internal/service/kms/replica_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms import ( @@ -98,7 +101,7 @@ func ResourceReplicaKey() *schema.Resource { func resourceReplicaKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) // e.g. arn:aws:kms:us-east-2:111122223333:key/mrk-1234abcd12ab34cd56ef1234567890ab primaryKeyARN, err := arn.Parse(d.Get("primary_key_arn").(string)) @@ -110,7 +113,7 @@ func resourceReplicaKeyCreate(ctx context.Context, d *schema.ResourceData, meta input := &kms.ReplicateKeyInput{ KeyId: aws.String(strings.TrimPrefix(primaryKeyARN.Resource, "key/")), ReplicaRegion: aws.String(meta.(*conns.AWSClient).Region), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("bypass_policy_lockout_safety_check"); ok { @@ -163,8 +166,8 @@ func resourceReplicaKeyCreate(ctx context.Context, d *schema.ResourceData, meta } } - if tags := KeyValueTags(ctx, GetTagsIn(ctx)); len(tags) > 0 { - if err := WaitTagsPropagated(ctx, conn, d.Id(), tags); err != nil { + if tags := KeyValueTags(ctx, getTagsIn(ctx)); len(tags) > 0 { + if err := waitTagsPropagated(ctx, conn, d.Id(), tags); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for KMS Replica Key (%s) tag propagation: %s", d.Id(), err) } } @@ -174,7 +177,7 @@ func resourceReplicaKeyCreate(ctx context.Context, d *schema.ResourceData, meta func resourceReplicaKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) key, err := findKey(ctx, conn, d.Id(), d.IsNewResource()) @@ 
-217,14 +220,14 @@ func resourceReplicaKeyRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("policy", policyToSet) d.Set("primary_key_arn", key.metadata.MultiRegionConfiguration.PrimaryKey.Arn) - SetTagsOut(ctx, key.tags) + setTagsOut(ctx, key.tags) return diags } func resourceReplicaKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) if hasChange, enabled := d.HasChange("enabled"), d.Get("enabled").(bool); hasChange && enabled { // Enable before any attributes are modified. @@ -252,18 +255,12 @@ func resourceReplicaKeyUpdate(ctx context.Context, d *schema.ResourceData, meta } } - if d.HasChange("tags_all") { - if err := WaitTagsPropagated(ctx, conn, d.Id(), tftags.New(ctx, d.Get("tags_all").(map[string]interface{}))); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for KMS Replica Key (%s) tag propagation: %s", d.Id(), err) - } - } - return append(diags, resourceReplicaKeyRead(ctx, d, meta)...) } func resourceReplicaKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) input := &kms.ScheduleKeyDeletionInput{ KeyId: aws.String(d.Id()), diff --git a/internal/service/kms/replica_key_test.go b/internal/service/kms/replica_key_test.go index 05810e5f6fc..0e645571f3b 100644 --- a/internal/service/kms/replica_key_test.go +++ b/internal/service/kms/replica_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kms_test import ( @@ -200,10 +203,13 @@ func TestAccKMSReplicaKey_tags(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_window_in_days", "bypass_policy_lockout_safety_check"}, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "deletion_window_in_days", + "bypass_policy_lockout_safety_check", + }, }, { Config: testAccReplicaKeyConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), @@ -222,6 +228,22 @@ func TestAccKMSReplicaKey_tags(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, + { + Config: testAccReplicaKeyConfig_tags0(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckKeyExists(ctx, resourceName, &key), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "deletion_window_in_days", + "bypass_policy_lockout_safety_check", + }, + }, }, }) } @@ -368,6 +390,26 @@ resource "aws_kms_replica_key" "test" { `, rName, tagKey1, tagValue1, tagKey2, tagValue2)) } +func testAccReplicaKeyConfig_tags0(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAlternateRegionProvider(), fmt.Sprintf(` +resource "aws_kms_key" "test" { + provider = awsalternate + + description = %[1]q + multi_region = true + + deletion_window_in_days = 7 +} + +resource "aws_kms_replica_key" "test" { + description = %[1]q + primary_key_arn = aws_kms_key.test.arn + + deletion_window_in_days = 7 +} +`, rName)) +} + func testAccReplicaKeyConfig_two(rName string) string { return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(3), fmt.Sprintf(` resource "aws_kms_key" "test" { diff --git a/internal/service/kms/secret_data_source.go b/internal/service/kms/secret_data_source.go 
index 7b88fd4db5e..8fb9244ad2d 100644 --- a/internal/service/kms/secret_data_source.go +++ b/internal/service/kms/secret_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms import ( diff --git a/internal/service/kms/secret_data_source_test.go b/internal/service/kms/secret_data_source_test.go index 4aea546e1d2..b9012eb7a07 100644 --- a/internal/service/kms/secret_data_source_test.go +++ b/internal/service/kms/secret_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms_test import ( diff --git a/internal/service/kms/secrets_data_source.go b/internal/service/kms/secrets_data_source.go index d0db057ce00..25ac693d97a 100644 --- a/internal/service/kms/secrets_data_source.go +++ b/internal/service/kms/secrets_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms import ( @@ -65,7 +68,7 @@ func DataSourceSecrets() *schema.Resource { } func dataSourceSecretsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) secrets := d.Get("secret").(*schema.Set).List() plaintext := make(map[string]string, len(secrets)) diff --git a/internal/service/kms/secrets_data_source_test.go b/internal/service/kms/secrets_data_source_test.go index 159d0c05d1d..8c24865454c 100644 --- a/internal/service/kms/secrets_data_source_test.go +++ b/internal/service/kms/secrets_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kms_test import ( @@ -70,7 +73,7 @@ func TestAccKMSSecretsDataSource_asymmetric(t *testing.T) { func testAccSecretsEncryptDataSource(ctx context.Context, key *kms.KeyMetadata, plaintext string, encryptedPayload *string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx) input := &kms.EncryptInput{ KeyId: key.Arn, @@ -94,7 +97,7 @@ func testAccSecretsEncryptDataSource(ctx context.Context, key *kms.KeyMetadata, func testAccSecretsEncryptDataSourceAsymmetric(ctx context.Context, key *kms.KeyMetadata, plaintext string, encryptedPayload *string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KMSConn(ctx) input := &kms.EncryptInput{ KeyId: key.Arn, diff --git a/internal/service/kms/service_package_gen.go b/internal/service/kms/service_package_gen.go index 1ddeacb1aaf..1f126fed777 100644 --- a/internal/service/kms/service_package_gen.go +++ b/internal/service/kms/service_package_gen.go @@ -5,6 +5,10 @@ package kms import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + kms_sdkv1 "github.com/aws/aws-sdk-go/service/kms" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -113,4 +117,13 @@ func (p *servicePackage) ServicePackageName() string { return names.KMS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*kms_sdkv1.KMS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return kms_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/kms/status.go b/internal/service/kms/status.go index a1be8d45496..de6fe0cac44 100644 --- a/internal/service/kms/status.go +++ b/internal/service/kms/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms import ( diff --git a/internal/service/kms/sweep.go b/internal/service/kms/sweep.go index 3d3f204c392..61710943837 100644 --- a/internal/service/kms/sweep.go +++ b/internal/service/kms/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -9,10 +12,11 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kms" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -25,14 +29,14 @@ func init() { func sweepKeys(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } input := &kms.ListKeysInput{ Limit: aws.Int64(1000), } - conn := client.(*conns.AWSClient).KMSConn() + conn := client.KMSConn(ctx) var sweeperErrs *multierror.Error sweepResources := 
make([]sweep.Sweepable, 0) @@ -50,7 +54,11 @@ func sweepKeys(region string) error { } if err != nil { - sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error reading KMS Key (%s): %w", keyID, err)) + if tfawserr.ErrMessageContains(err, "AccessDeniedException", "is not authorized to perform") { + log.Printf("[DEBUG] Skipping KMS Key (%s): %s", keyID, err) + continue + } + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("reading KMS Key (%s): %w", keyID, err)) continue } @@ -69,7 +77,7 @@ func sweepKeys(region string) error { d.Set("key_id", keyID) d.Set("deletion_window_in_days", "7") - sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client)) } return !lastPage @@ -84,7 +92,7 @@ func sweepKeys(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing KMS Keys (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping KMS Keys (%s): %w", region, err)) diff --git a/internal/service/kms/tags_gen.go b/internal/service/kms/tags_gen.go index f31757e2435..33201aed10e 100644 --- a/internal/service/kms/tags_gen.go +++ b/internal/service/kms/tags_gen.go @@ -4,26 +4,37 @@ package kms import ( "context" "fmt" + "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kms" "github.com/aws/aws-sdk-go/service/kms/kmsiface" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/types" 
"github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists kms service tags. +// listTags lists kms service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn kmsiface.KMSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn kmsiface.KMSAPI, identifier string) (tftags.KeyValueTags, error) { input := &kms.ListResourceTagsInput{ KeyId: aws.String(identifier), } output, err := conn.ListResourceTagsWithContext(ctx, input) + if tfawserr.ErrCodeEquals(err, "NotFoundException") { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + if err != nil { return tftags.New(ctx, nil), err } @@ -34,7 +45,7 @@ func ListTags(ctx context.Context, conn kmsiface.KMSAPI, identifier string) (tft // ListTags lists kms service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).KMSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).KMSConn(ctx), identifier) if err != nil { return err @@ -76,9 +87,9 @@ func KeyValueTags(ctx context.Context, tags []*kms.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns kms service tags from Context. +// getTagsIn returns kms service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*kms.Tag { +func getTagsIn(ctx context.Context) []*kms.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +99,17 @@ func GetTagsIn(ctx context.Context) []*kms.Tag { return nil } -// SetTagsOut sets kms service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*kms.Tag) { +// setTagsOut sets kms service tags in Context. +func setTagsOut(ctx context.Context, tags []*kms.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates kms service tags. +// updateTags updates kms service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn kmsiface.KMSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn kmsiface.KMSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -132,11 +143,42 @@ func UpdateTags(ctx context.Context, conn kmsiface.KMSAPI, identifier string, ol } } + if len(removedTags) > 0 || len(updatedTags) > 0 { + if err := waitTagsPropagated(ctx, conn, identifier, newTags); err != nil { + return fmt.Errorf("waiting for resource (%s) tag propagation: %w", identifier, err) + } + } + return nil } // UpdateTags updates kms service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).KMSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).KMSConn(ctx), identifier, oldTags, newTags) +} + +// waitTagsPropagated waits for kms service tags to be propagated. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. 
+func waitTagsPropagated(ctx context.Context, conn kmsiface.KMSAPI, id string, tags tftags.KeyValueTags) error { + checkFunc := func() (bool, error) { + output, err := listTags(ctx, conn, id) + + if tfresource.NotFound(err) { + return false, nil + } + + if err != nil { + return false, err + } + + return output.Equal(tags), nil + } + opts := tfresource.WaitOpts{ + ContinuousTargetOccurence: 5, + MinTimeout: 1 * time.Second, + } + + return tfresource.WaitUntil(ctx, 10*time.Minute, checkFunc, opts) } diff --git a/internal/service/kms/validate.go b/internal/service/kms/validate.go index 2e22f35880c..97bfed7a5df 100644 --- a/internal/service/kms/validate.go +++ b/internal/service/kms/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms import ( diff --git a/internal/service/kms/validate_test.go b/internal/service/kms/validate_test.go index 63f46d2f1a1..a56c597b479 100644 --- a/internal/service/kms/validate_test.go +++ b/internal/service/kms/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package kms import ( diff --git a/internal/service/kms/wait.go b/internal/service/kms/wait.go index 38731a5df5d..09307585925 100644 --- a/internal/service/kms/wait.go +++ b/internal/service/kms/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package kms import ( @@ -6,10 +9,8 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kms" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" awspolicy "github.com/hashicorp/awspolicyequivalence" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -202,28 +203,6 @@ func WaitKeyValidToPropagated(ctx context.Context, conn *kms.KMS, id string, val return tfresource.WaitUntil(ctx, KeyValidToPropagationTimeout, checkFunc, opts) } -func WaitTagsPropagated(ctx context.Context, conn *kms.KMS, id string, tags tftags.KeyValueTags) error { - checkFunc := func() (bool, error) { - output, err := ListTags(ctx, conn, id) - - if tfawserr.ErrCodeEquals(err, kms.ErrCodeNotFoundException) { - return false, nil - } - - if err != nil { - return false, err - } - - return output.Equal(tags), nil - } - opts := tfresource.WaitOpts{ - ContinuousTargetOccurence: 5, - MinTimeout: 1 * time.Second, - } - - return tfresource.WaitUntil(ctx, KeyTagsPropagationTimeout, checkFunc, opts) -} - func WaitReplicaExternalKeyCreated(ctx context.Context, conn *kms.KMS, id string) (*kms.KeyMetadata, error) { stateConf := &retry.StateChangeConf{ Pending: []string{kms.KeyStateCreating}, diff --git a/internal/service/lakeformation/consts.go b/internal/service/lakeformation/consts.go index 1c55c725426..99ef4846bf8 100644 --- a/internal/service/lakeformation/consts.go +++ b/internal/service/lakeformation/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( diff --git a/internal/service/lakeformation/data_lake_settings.go b/internal/service/lakeformation/data_lake_settings.go index 734eef4a8c6..0385353a400 100644 --- a/internal/service/lakeformation/data_lake_settings.go +++ b/internal/service/lakeformation/data_lake_settings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -131,7 +134,7 @@ func ResourceDataLakeSettings() *schema.Resource { func resourceDataLakeSettingsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.PutDataLakeSettingsInput{} @@ -183,7 +186,7 @@ func resourceDataLakeSettingsCreate(ctx context.Context, d *schema.ResourceData, return retry.RetryableError(err) } - return retry.NonRetryableError(fmt.Errorf("error creating Lake Formation data lake settings: %w", err)) + return retry.NonRetryableError(fmt.Errorf("creating Lake Formation data lake settings: %w", err)) } return nil }) @@ -207,7 +210,7 @@ func resourceDataLakeSettingsCreate(ctx context.Context, d *schema.ResourceData, func resourceDataLakeSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.GetDataLakeSettingsInput{} @@ -246,7 +249,7 @@ func resourceDataLakeSettingsRead(ctx context.Context, d *schema.ResourceData, m func resourceDataLakeSettingsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := 
&lakeformation.PutDataLakeSettingsInput{ DataLakeSettings: &lakeformation.DataLakeSettings{ diff --git a/internal/service/lakeformation/data_lake_settings_data_source.go b/internal/service/lakeformation/data_lake_settings_data_source.go index 3248fd3c5bc..dd97d3bc08c 100644 --- a/internal/service/lakeformation/data_lake_settings_data_source.go +++ b/internal/service/lakeformation/data_lake_settings_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -90,7 +93,7 @@ func DataSourceDataLakeSettings() *schema.Resource { func dataSourceDataLakeSettingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.GetDataLakeSettingsInput{} diff --git a/internal/service/lakeformation/data_lake_settings_data_source_test.go b/internal/service/lakeformation/data_lake_settings_data_source_test.go index acd4fdde13e..c70d98600b3 100644 --- a/internal/service/lakeformation/data_lake_settings_data_source_test.go +++ b/internal/service/lakeformation/data_lake_settings_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( diff --git a/internal/service/lakeformation/data_lake_settings_test.go b/internal/service/lakeformation/data_lake_settings_test.go index 7202ecec78e..fd325945cc9 100644 --- a/internal/service/lakeformation/data_lake_settings_test.go +++ b/internal/service/lakeformation/data_lake_settings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( @@ -97,7 +100,7 @@ func testAccDataLakeSettings_withoutCatalogID(t *testing.T) { func testAccCheckDataLakeSettingsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lakeformation_data_lake_settings" { @@ -136,7 +139,7 @@ func testAccCheckDataLakeSettingsExists(ctx context.Context, resourceName string return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.GetDataLakeSettingsInput{} diff --git a/internal/service/lakeformation/filter.go b/internal/service/lakeformation/filter.go index d37dcc0dced..405da724b68 100644 --- a/internal/service/lakeformation/filter.go +++ b/internal/service/lakeformation/filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( diff --git a/internal/service/lakeformation/filter_test.go b/internal/service/lakeformation/filter_test.go index 2221ced077f..bdec9fc0ddc 100644 --- a/internal/service/lakeformation/filter_test.go +++ b/internal/service/lakeformation/filter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( diff --git a/internal/service/lakeformation/generate.go b/internal/service/lakeformation/generate.go new file mode 100644 index 00000000000..50e7541ecaa --- /dev/null +++ b/internal/service/lakeformation/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package lakeformation diff --git a/internal/service/lakeformation/lakeformation_test.go b/internal/service/lakeformation/lakeformation_test.go index 6f0966caaca..99422b50d36 100644 --- a/internal/service/lakeformation/lakeformation_test.go +++ b/internal/service/lakeformation/lakeformation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( @@ -62,11 +65,13 @@ func TestAccLakeFormation_serial(t *testing.T) { "valuesOverFifty": testAccLFTag_Values_overFifty, }, "ResourceLFTags": { - "basic": testAccResourceLFTags_basic, - "database": testAccResourceLFTags_database, - "databaseMultiple": testAccResourceLFTags_databaseMultiple, - "table": testAccResourceLFTags_table, - "tableWithColumns": testAccResourceLFTags_tableWithColumns, + "basic": testAccResourceLFTags_basic, + "database": testAccResourceLFTags_database, + "databaseMultipleTags": testAccResourceLFTags_databaseMultipleTags, + "disappears": testAccResourceLFTags_disappears, + "hierarchy": testAccResourceLFTags_hierarchy, + "table": testAccResourceLFTags_table, + "tableWithColumns": testAccResourceLFTags_tableWithColumns, }, } diff --git a/internal/service/lakeformation/lf_tag.go b/internal/service/lakeformation/lf_tag.go index 3e9a76aac25..8ddb9c8c9f6 100644 --- a/internal/service/lakeformation/lf_tag.go +++ b/internal/service/lakeformation/lf_tag.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -64,7 +67,7 @@ func ResourceLFTag() *schema.Resource { func resourceLFTagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) tagKey := d.Get("key").(string) tagValues := d.Get("values").(*schema.Set) @@ -113,7 +116,7 @@ func resourceLFTagCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceLFTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) catalogID, tagKey, err := ReadLFTagID(d.Id()) if err != nil { @@ -147,7 +150,7 @@ func resourceLFTagRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceLFTagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) catalogID, tagKey, err := ReadLFTagID(d.Id()) if err != nil { @@ -204,7 +207,7 @@ func resourceLFTagUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceLFTagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) catalogID, tagKey, err := ReadLFTagID(d.Id()) if err != nil { diff --git a/internal/service/lakeformation/lf_tag_test.go b/internal/service/lakeformation/lf_tag_test.go index 495926a9a1a..24ad66aec82 100644 --- a/internal/service/lakeformation/lf_tag_test.go +++ b/internal/service/lakeformation/lf_tag_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( @@ -240,7 +243,7 @@ func testAccLFTag_Values_overFifty(t *testing.T) { func testAccCheckLFTagsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lakeformation_lf_tag" { @@ -295,7 +298,7 @@ func testAccCheckLFTagExists(ctx context.Context, name string) resource.TestChec TagKey: aws.String(tagKey), } - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) _, err = conn.GetLFTagWithContext(ctx, input) return err @@ -323,7 +326,7 @@ func testAccCheckLFTagValuesLen(ctx context.Context, name string, expectedLength TagKey: aws.String(tagKey), } - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) output, err := conn.GetLFTagWithContext(ctx, input) if len(output.TagValues) != expectedLength { diff --git a/internal/service/lakeformation/permissions.go b/internal/service/lakeformation/permissions.go index 464f3ee60b8..21e0b428aa7 100644 --- a/internal/service/lakeformation/permissions.go +++ b/internal/service/lakeformation/permissions.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -381,7 +384,7 @@ func ResourcePermissions() *schema.Resource { func resourcePermissionsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.GrantPermissionsInput{ Permissions: flex.ExpandStringList(d.Get("permissions").([]interface{})), @@ -448,7 +451,7 @@ func resourcePermissionsCreate(ctx context.Context, d *schema.ResourceData, meta return retry.RetryableError(err) } - return retry.NonRetryableError(fmt.Errorf("error creating Lake Formation Permissions: %w", err)) + return retry.NonRetryableError(fmt.Errorf("creating Lake Formation Permissions: %w", err)) } return nil }) @@ -472,7 +475,7 @@ func resourcePermissionsCreate(ctx context.Context, d *schema.ResourceData, meta func resourcePermissionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.ListPermissionsInput{ Principal: &lakeformation.DataLakePrincipal{ @@ -684,7 +687,7 @@ func resourcePermissionsRead(ctx context.Context, d *schema.ResourceData, meta i func resourcePermissionsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.RevokePermissionsInput{ Permissions: flex.ExpandStringList(d.Get("permissions").([]interface{})), diff --git a/internal/service/lakeformation/permissions_data_source.go b/internal/service/lakeformation/permissions_data_source.go index f92fcb3148a..ffcfef2f716 100644 --- a/internal/service/lakeformation/permissions_data_source.go 
+++ b/internal/service/lakeformation/permissions_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -248,7 +251,7 @@ func DataSourcePermissions() *schema.Resource { func dataSourcePermissionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.ListPermissionsInput{ Principal: &lakeformation.DataLakePrincipal{ diff --git a/internal/service/lakeformation/permissions_data_source_test.go b/internal/service/lakeformation/permissions_data_source_test.go index 98e5db87281..7a096dc8da4 100644 --- a/internal/service/lakeformation/permissions_data_source_test.go +++ b/internal/service/lakeformation/permissions_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( diff --git a/internal/service/lakeformation/permissions_test.go b/internal/service/lakeformation/permissions_test.go index b848e3f3b0c..1103c7cb695 100644 --- a/internal/service/lakeformation/permissions_test.go +++ b/internal/service/lakeformation/permissions_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( @@ -778,7 +781,7 @@ func testAccPermissions_twcWildcardSelectPlus(t *testing.T) { func testAccCheckPermissionsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lakeformation_permissions" { @@ -810,7 +813,7 @@ func testAccCheckPermissionsExists(ctx context.Context, resourceName string) res return fmt.Errorf("acceptance test: resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) permCount, err := permissionCountForResource(ctx, conn, rs) diff --git a/internal/service/lakeformation/resource.go b/internal/service/lakeformation/resource.go index 6c5a64727d5..4be275cd29b 100644 --- a/internal/service/lakeformation/resource.go +++ b/internal/service/lakeformation/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -46,7 +49,7 @@ func ResourceResource() *schema.Resource { func resourceResourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) resourceArn := d.Get("arn").(string) input := &lakeformation.RegisterResourceInput{ @@ -73,7 +76,7 @@ func resourceResourceCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) resourceArn := d.Get("arn").(string) input := &lakeformation.DescribeResourceInput{ @@ -107,7 +110,7 @@ func resourceResourceRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceResourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) resourceArn := d.Get("arn").(string) input := &lakeformation.DeregisterResourceInput{ diff --git a/internal/service/lakeformation/resource_data_source.go b/internal/service/lakeformation/resource_data_source.go index 2c4d92e2443..bb307fd2b14 100644 --- a/internal/service/lakeformation/resource_data_source.go +++ b/internal/service/lakeformation/resource_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -40,7 +43,7 @@ func DataSourceResource() *schema.Resource { func dataSourceResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LakeFormationConn() + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.DescribeResourceInput{} diff --git a/internal/service/lakeformation/resource_data_source_test.go b/internal/service/lakeformation/resource_data_source_test.go index dc9f233721f..b223281ba02 100644 --- a/internal/service/lakeformation/resource_data_source_test.go +++ b/internal/service/lakeformation/resource_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( diff --git a/internal/service/lakeformation/resource_lf_tags.go b/internal/service/lakeformation/resource_lf_tags.go index f590bb9da56..2a3776f3eac 100644 --- a/internal/service/lakeformation/resource_lf_tags.go +++ b/internal/service/lakeformation/resource_lf_tags.go @@ -1,10 +1,12 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( "bytes" "context" "fmt" - "log" "reflect" "time" @@ -18,6 +20,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" @@ -223,31 +226,27 @@ func ResourceResourceLFTags() *schema.Resource { } func resourceResourceLFTagsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LakeFormationConn() + var diags diag.Diagnostics - input := &lakeformation.AddLFTagsToResourceInput{ - Resource: &lakeformation.Resource{}, - } + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) + + input := &lakeformation.AddLFTagsToResourceInput{} if v, ok := d.GetOk("catalog_id"); ok { input.CatalogId = aws.String(v.(string)) } - if v, ok := d.GetOk("database"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.Database = ExpandDatabaseResource(v.([]interface{})[0].(map[string]interface{})) - } - if v, ok := d.GetOk("lf_tag"); ok && v.(*schema.Set).Len() > 0 { input.LFTags = expandLFTagPairs(v.(*schema.Set).List()) } - if v, ok := d.GetOk("table"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.Table = ExpandTableResource(v.([]interface{})[0].(map[string]interface{})) + tagger, ds := lfTagsTagger(d) + diags = append(diags, ds...) 
+ if diags.HasError() { + return diags } - if v, ok := d.GetOk("table_with_columns"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.TableWithColumns = expandTableColumnsResource(v.([]interface{})[0].(map[string]interface{})) - } + input.Resource = tagger.ExpandResource(d) var output *lakeformation.AddLFTagsToResourceOutput err := retry.RetryContext(ctx, IAMPropagationTimeout, func() *retry.RetryError { @@ -271,19 +270,16 @@ func resourceResourceLFTagsCreate(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return create.DiagError(names.LakeFormation, create.ErrActionCreating, ResNameLFTags, input.String(), err) + return create.AddError(diags, names.LakeFormation, create.ErrActionCreating, ResNameLFTags, input.String(), err) } - diags := diag.Diagnostics{} - if output != nil && len(output.Failures) > 0 { for _, v := range output.Failures { if v.LFTag == nil || v.Error == nil { continue } - diags = create.AddWarning( - diags, + diags = create.AddError(diags, names.LakeFormation, create.ErrActionCreating, ResNameLFTags, @@ -291,27 +287,22 @@ func resourceResourceLFTagsCreate(ctx context.Context, d *schema.ResourceData, m awserr.New(aws.StringValue(v.Error.ErrorCode), aws.StringValue(v.Error.ErrorMessage), nil), ) } - - if len(diags) == len(input.LFTags) { - return append(diags, - diag.Diagnostic{ - Severity: diag.Error, - Summary: create.ProblemStandardMessage(names.LakeFormation, create.ErrActionCreating, ResNameLFTags, "", fmt.Errorf("attempted to add %d tags, %d failures", len(input.LFTags), len(diags))), - }, - ) - } + } + if diags.HasError() { + return diags } d.SetId(fmt.Sprintf("%d", create.StringHashcode(input.String()))) - return append(resourceResourceLFTagsRead(ctx, d, meta), diags...) + return append(diags, resourceResourceLFTagsRead(ctx, d, meta)...) 
} func resourceResourceLFTagsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LakeFormationConn() + var diags diag.Diagnostics + + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.GetResourceLFTagsInput{ - Resource: &lakeformation.Resource{}, ShowAssignedLFTags: aws.Bool(true), } @@ -319,82 +310,53 @@ func resourceResourceLFTagsRead(ctx context.Context, d *schema.ResourceData, met input.CatalogId = aws.String(v.(string)) } - if v, ok := d.GetOk("database"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.Database = ExpandDatabaseResource(v.([]interface{})[0].(map[string]interface{})) - } - - if v, ok := d.GetOk("table"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.Table = ExpandTableResource(v.([]interface{})[0].(map[string]interface{})) + tagger, ds := lfTagsTagger(d) + diags = append(diags, ds...) + if diags.HasError() { + return diags } - if v, ok := d.GetOk("table_with_columns"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.TableWithColumns = expandTableColumnsResource(v.([]interface{})[0].(map[string]interface{})) - } + input.Resource = tagger.ExpandResource(d) output, err := conn.GetResourceLFTagsWithContext(ctx, input) if err != nil { - return create.DiagError(names.LakeFormation, create.ErrActionReading, ResNameLFTags, d.Id(), err) - } - - if len(output.LFTagOnDatabase) > 0 { - if err := d.Set("lf_tag", flattenLFTagPairs(output.LFTagOnDatabase)); err != nil { - return create.DiagError(names.LakeFormation, create.ErrActionSetting, ResNameLFTags, d.Id(), err) - } - } - - if len(output.LFTagsOnColumns) > 0 { - for _, v := range output.LFTagsOnColumns { - if aws.StringValue(v.Name) != d.Get("table_with_columns.0.name").(string) { - continue - } - - if err := d.Set("lf_tag", flattenLFTagPairs(v.LFTags)); err != nil { - return 
create.DiagError(names.LakeFormation, create.ErrActionSetting, ResNameLFTags, d.Id(), err) - } - } + return create.AddError(diags, names.LakeFormation, create.ErrActionReading, ResNameLFTags, d.Id(), err) } - if len(output.LFTagsOnTable) > 0 { - if err := d.Set("lf_tag", flattenLFTagPairs(output.LFTagsOnTable)); err != nil { - return create.DiagError(names.LakeFormation, create.ErrActionSetting, ResNameLFTags, d.Id(), err) - } + if err := d.Set("lf_tag", tagger.FlattenTags(output)); err != nil { + return create.AddError(diags, names.LakeFormation, create.ErrActionSetting, ResNameLFTags, d.Id(), err) } - return nil + return diags } func resourceResourceLFTagsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LakeFormationConn() + var diags diag.Diagnostics - input := &lakeformation.RemoveLFTagsFromResourceInput{ - Resource: &lakeformation.Resource{}, - } + conn := meta.(*conns.AWSClient).LakeFormationConn(ctx) + + input := &lakeformation.RemoveLFTagsFromResourceInput{} if v, ok := d.GetOk("catalog_id"); ok { input.CatalogId = aws.String(v.(string)) } - if v, ok := d.GetOk("database"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.Database = ExpandDatabaseResource(v.([]interface{})[0].(map[string]interface{})) - } - if v, ok := d.GetOk("lf_tag"); ok && v.(*schema.Set).Len() > 0 { input.LFTags = expandLFTagPairs(v.(*schema.Set).List()) } - if v, ok := d.GetOk("table"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.Table = ExpandTableResource(v.([]interface{})[0].(map[string]interface{})) + tagger, ds := lfTagsTagger(d) + diags = append(diags, ds...) 
+ if diags.HasError() { + return diags } - if v, ok := d.GetOk("table_with_columns"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.Resource.TableWithColumns = expandTableColumnsResource(v.([]interface{})[0].(map[string]interface{})) - } + input.Resource = tagger.ExpandResource(d) - if input.Resource == nil || reflect.DeepEqual(input.Resource, &lakeformation.Resource{}) { + if input.Resource == nil || reflect.DeepEqual(input.Resource, &lakeformation.Resource{}) || len(input.LFTags) == 0 { // if resource is empty, don't delete = it won't delete anything since this is the predicate - log.Printf("[WARN] No Lake Formation Resource LF Tags to remove") - return nil + return create.AddWarningMessage(diags, names.LakeFormation, create.ErrActionSetting, ResNameLFTags, d.Id(), "no LF-Tags to remove") } err := retry.RetryContext(ctx, d.Timeout(schema.TimeoutDelete), func() *retry.RetryError { @@ -408,7 +370,7 @@ func resourceResourceLFTagsDelete(ctx context.Context, d *schema.ResourceData, m return retry.RetryableError(err) } - return retry.NonRetryableError(fmt.Errorf("unable to revoke Lake Formation Permissions: %w", err)) + return retry.NonRetryableError(fmt.Errorf("removing Lake Formation LF-Tags: %w", err)) } return nil }) @@ -418,10 +380,83 @@ func resourceResourceLFTagsDelete(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return create.DiagError(names.LakeFormation, create.ErrActionDeleting, ResNameLFTags, d.Id(), err) + return create.AddError(diags, names.LakeFormation, create.ErrActionDeleting, ResNameLFTags, d.Id(), err) + } + + return diags +} + +func lfTagsTagger(d *schema.ResourceData) (tagger, diag.Diagnostics) { + var diags diag.Diagnostics + if v, ok := d.GetOk("database"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + return &databaseTagger{}, diags + } else if v, ok := d.GetOk("table"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + return &tableTagger{}, diags + } 
else if v, ok := d.GetOk("table_with_columns"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + return &columnTagger{}, diags + } else { + diags = append(diags, errs.NewErrorDiagnostic( + "Invalid Lake Formation Resource Type", + "An unexpected error occurred while resolving the Lake Formation Resource type. "+ + "This is always an error in the provider. "+ + "Please report the following to the provider developer:\n\n"+ + "No Lake Formation Resource defined.", + )) + return nil, diags + } +} + +type tagger interface { + ExpandResource(*schema.ResourceData) *lakeformation.Resource + FlattenTags(*lakeformation.GetResourceLFTagsOutput) []any +} + +type databaseTagger struct{} + +func (t *databaseTagger) ExpandResource(d *schema.ResourceData) *lakeformation.Resource { + v := d.Get("database").([]any)[0].(map[string]any) + return &lakeformation.Resource{ + Database: ExpandDatabaseResource(v), + } +} + +func (t *databaseTagger) FlattenTags(output *lakeformation.GetResourceLFTagsOutput) []any { + return flattenLFTagPairs(output.LFTagOnDatabase) +} + +type tableTagger struct{} + +func (t *tableTagger) ExpandResource(d *schema.ResourceData) *lakeformation.Resource { + v := d.Get("table").([]any)[0].(map[string]any) + return &lakeformation.Resource{ + Table: ExpandTableResource(v), + } +} + +func (t *tableTagger) FlattenTags(output *lakeformation.GetResourceLFTagsOutput) []any { + return flattenLFTagPairs(output.LFTagsOnTable) +} + +type columnTagger struct{} + +func (t *columnTagger) ExpandResource(d *schema.ResourceData) *lakeformation.Resource { + v := d.Get("table_with_columns").([]any)[0].(map[string]any) + return &lakeformation.Resource{ + TableWithColumns: expandTableColumnsResource(v), + } +} + +func (t *columnTagger) FlattenTags(output *lakeformation.GetResourceLFTagsOutput) []any { + if len(output.LFTagsOnColumns) == 0 { + return []any{} + } + + tags := output.LFTagsOnColumns[0] + if tags == nil { + return []any{} } - return nil + return 
flattenLFTagPairs(tags.LFTags) } func lfTagsHash(v interface{}) int { diff --git a/internal/service/lakeformation/resource_lf_tags_test.go b/internal/service/lakeformation/resource_lf_tags_test.go index 20608a545bd..1abc4ca1213 100644 --- a/internal/service/lakeformation/resource_lf_tags_test.go +++ b/internal/service/lakeformation/resource_lf_tags_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( @@ -15,6 +18,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tflakeformation "github.com/hashicorp/terraform-provider-aws/internal/service/lakeformation" ) func testAccResourceLFTags_basic(t *testing.T) { @@ -44,6 +48,29 @@ func testAccResourceLFTags_basic(t *testing.T) { }) } +func testAccResourceLFTags_disappears(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_lakeformation_resource_lf_tags.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, lakeformation.EndpointsID) }, + ErrorCheck: acctest.ErrorCheck(t, lakeformation.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckResourceDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccResourceLFTagsConfig_basic(rName, []string{"copse"}, "copse"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDatabaseLFTagsExists(ctx, resourceName), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tflakeformation.ResourceResourceLFTags(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + func testAccResourceLFTags_database(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_lakeformation_resource_lf_tags.test" @@ -80,7 +107,7 @@ func 
testAccResourceLFTags_database(t *testing.T) { }) } -func testAccResourceLFTags_databaseMultiple(t *testing.T) { +func testAccResourceLFTags_databaseMultipleTags(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_lakeformation_resource_lf_tags.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -92,7 +119,7 @@ func testAccResourceLFTags_databaseMultiple(t *testing.T) { CheckDestroy: testAccCheckDatabaseLFTagsDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccResourceLFTagsConfig_databaseMultiple(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "woodcote", "theloop"), + Config: testAccResourceLFTagsConfig_databaseMultipleTags(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "woodcote", "theloop"), Destroy: false, Check: resource.ComposeTestCheckFunc( testAccCheckDatabaseLFTagsExists(ctx, resourceName), @@ -107,7 +134,7 @@ func testAccResourceLFTags_databaseMultiple(t *testing.T) { ), }, { - Config: testAccResourceLFTagsConfig_databaseMultiple(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "stowe", "becketts"), + Config: testAccResourceLFTagsConfig_databaseMultipleTags(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "stowe", "becketts"), Check: resource.ComposeTestCheckFunc( testAccCheckDatabaseLFTagsExists(ctx, resourceName), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "lf_tag.*", map[string]string{ @@ -124,6 +151,83 @@ func testAccResourceLFTags_databaseMultiple(t *testing.T) { }) } 
+func testAccResourceLFTags_hierarchy(t *testing.T) { + ctx := acctest.Context(t) + databaseResourceName := "aws_lakeformation_resource_lf_tags.database_tags" + tableResourceName := "aws_lakeformation_resource_lf_tags.table_tags" + columnResourceName := "aws_lakeformation_resource_lf_tags.column_tags" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, lakeformation.EndpointsID) }, + ErrorCheck: acctest.ErrorCheck(t, lakeformation.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDatabaseLFTagsDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccResourceLFTagsConfig_hierarchy(rName, + []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, + []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, + []string{"one", "two", "three"}, + "woodcote", + "theloop", + "two", + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckDatabaseLFTagsExists(ctx, databaseResourceName), + testAccCheckDatabaseLFTagsExists(ctx, tableResourceName), + testAccCheckDatabaseLFTagsExists(ctx, columnResourceName), + resource.TestCheckResourceAttr(databaseResourceName, "lf_tag.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(databaseResourceName, "lf_tag.*", map[string]string{ + "key": rName, + "value": "woodcote", + }), + resource.TestCheckResourceAttr(tableResourceName, "lf_tag.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(tableResourceName, "lf_tag.*", map[string]string{ + "key": fmt.Sprintf("%s-2", rName), + "value": "theloop", + }), + resource.TestCheckResourceAttr(columnResourceName, "lf_tag.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(columnResourceName, "lf_tag.*", map[string]string{ + "key": fmt.Sprintf("%s-3", rName), + "value": "two", + }), + ), + }, + { + Config: 
testAccResourceLFTagsConfig_hierarchy(rName, + []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, + []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, + []string{"one", "two", "three"}, + "stowe", + "becketts", + "three", + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckDatabaseLFTagsExists(ctx, databaseResourceName), + testAccCheckDatabaseLFTagsExists(ctx, tableResourceName), + testAccCheckDatabaseLFTagsExists(ctx, columnResourceName), + resource.TestCheckResourceAttr(databaseResourceName, "lf_tag.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(databaseResourceName, "lf_tag.*", map[string]string{ + "key": rName, + "value": "stowe", + }), + resource.TestCheckResourceAttr(tableResourceName, "lf_tag.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(tableResourceName, "lf_tag.*", map[string]string{ + "key": fmt.Sprintf("%s-2", rName), + "value": "becketts", + }), + resource.TestCheckResourceAttr(columnResourceName, "lf_tag.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(columnResourceName, "lf_tag.*", map[string]string{ + "key": fmt.Sprintf("%s-3", rName), + "value": "three", + }), + ), + }, + }, + }) +} + func testAccResourceLFTags_table(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_lakeformation_resource_lf_tags.test" @@ -172,7 +276,7 @@ func testAccResourceLFTags_tableWithColumns(t *testing.T) { CheckDestroy: testAccCheckDatabaseLFTagsDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccResourceLFTagsConfig_tableWithColumnsMultiple(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "luffield", "vale"), + Config: testAccResourceLFTagsConfig_tableWithColumnsMultipleTags(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", 
"maggotts", "becketts", "vale"}, "luffield", "vale"), Destroy: false, Check: resource.ComposeTestCheckFunc( testAccCheckDatabaseLFTagsExists(ctx, resourceName), @@ -187,7 +291,7 @@ func testAccResourceLFTags_tableWithColumns(t *testing.T) { ), }, { - Config: testAccResourceLFTagsConfig_tableWithColumnsMultiple(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "copse", "aintree"), + Config: testAccResourceLFTagsConfig_tableWithColumnsMultipleTags(rName, []string{"abbey", "village", "luffield", "woodcote", "copse", "chapel", "stowe", "club"}, []string{"farm", "theloop", "aintree", "brooklands", "maggotts", "becketts", "vale"}, "copse", "aintree"), Check: resource.ComposeTestCheckFunc( testAccCheckDatabaseLFTagsExists(ctx, resourceName), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "lf_tag.*", map[string]string{ @@ -206,7 +310,7 @@ func testAccResourceLFTags_tableWithColumns(t *testing.T) { func testAccCheckDatabaseLFTagsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lakeformation_resource_lf_tags" { @@ -393,7 +497,7 @@ func testAccCheckDatabaseLFTagsExists(ctx context.Context, resourceName string) } } - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) _, err := conn.GetResourceLFTagsWithContext(ctx, input) return err @@ -420,7 +524,7 @@ resource "aws_lakeformation_lf_tag" "test" { key = %[1]q values = [%[2]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = 
[aws_lakeformation_data_lake_settings.test] } @@ -436,7 +540,7 @@ resource "aws_lakeformation_resource_lf_tags" "test" { value = %[3]q } - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } `, rName, fmt.Sprintf(`"%s"`, strings.Join(values, `", "`)), value) @@ -462,7 +566,7 @@ resource "aws_lakeformation_lf_tag" "test" { key = %[1]q values = [%[2]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } @@ -476,13 +580,13 @@ resource "aws_lakeformation_resource_lf_tags" "test" { value = %[3]q } - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } `, rName, fmt.Sprintf(`"%s"`, strings.Join(values, `", "`)), value) } -func testAccResourceLFTagsConfig_databaseMultiple(rName string, values1, values2 []string, value1, value2 string) string { +func testAccResourceLFTagsConfig_databaseMultipleTags(rName string, values1, values2 []string, value1, value2 string) string { return fmt.Sprintf(` data "aws_caller_identity" "current" {} @@ -502,7 +606,7 @@ resource "aws_lakeformation_lf_tag" "test" { key = %[1]q values = [%[2]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } @@ -510,7 +614,7 @@ resource "aws_lakeformation_lf_tag" "test2" { key = "%[1]s-2" values = [%[3]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } @@ -529,12 +633,121 @@ resource "aws_lakeformation_resource_lf_tags" "test" { 
value = %[5]q } - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } `, rName, fmt.Sprintf(`"%s"`, strings.Join(values1, `", "`)), fmt.Sprintf(`"%s"`, strings.Join(values2, `", "`)), value1, value2) } +func testAccResourceLFTagsConfig_hierarchy(rName string, values1, values2, values3 []string, value1, value2, value3 string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} + +data "aws_iam_session_context" "current" { + arn = data.aws_caller_identity.current.arn +} + +resource "aws_lakeformation_data_lake_settings" "test" { + admins = [data.aws_iam_session_context.current.issuer_arn] +} + +resource "aws_glue_catalog_database" "test" { + name = %[1]q +} + +resource "aws_glue_catalog_table" "test" { + name = %[1]q + database_name = aws_glue_catalog_database.test.name + + storage_descriptor { + columns { + name = "event" + type = "string" + } + + columns { + name = "timestamp" + type = "date" + } + + columns { + name = "value" + type = "double" + } + } +} + +resource "aws_lakeformation_lf_tag" "test" { + key = %[1]q + values = [%[2]s] + + # for consistency, ensure that admins are set up before testing + depends_on = [aws_lakeformation_data_lake_settings.test] +} + +resource "aws_lakeformation_lf_tag" "test2" { + key = "%[1]s-2" + values = [%[3]s] + + # for consistency, ensure that admins are set up before testing + depends_on = [aws_lakeformation_data_lake_settings.test] +} + +resource "aws_lakeformation_lf_tag" "column_tags" { + key = "%[1]s-3" + values = [%[6]s] + + # for consistency, ensure that admins are set up before testing + depends_on = [aws_lakeformation_data_lake_settings.test] +} + +resource "aws_lakeformation_resource_lf_tags" "database_tags" { + database { + name = aws_glue_catalog_database.test.name + } + + lf_tag { + key = aws_lakeformation_lf_tag.test.key + value = %[4]q + } + + # for 
consistency, ensure that admins are set up before testing + depends_on = [aws_lakeformation_data_lake_settings.test] +} + +resource "aws_lakeformation_resource_lf_tags" "table_tags" { + table { + database_name = aws_glue_catalog_database.test.name + name = aws_glue_catalog_table.test.name + } + + lf_tag { + key = aws_lakeformation_lf_tag.test2.key + value = %[5]q + } + + # for consistency, ensure that admins are set up before testing + depends_on = [aws_lakeformation_data_lake_settings.test] +} + +resource "aws_lakeformation_resource_lf_tags" "column_tags" { + table_with_columns { + database_name = aws_glue_catalog_database.test.name + name = aws_glue_catalog_table.test.name + column_names = ["event", "timestamp"] + } + + lf_tag { + key = aws_lakeformation_lf_tag.column_tags.key + value = %[7]q + } + + # for consistency, ensure that admins are set up before testing + depends_on = [aws_lakeformation_data_lake_settings.test] +} +`, rName, fmt.Sprintf(`"%s"`, strings.Join(values1, `", "`)), fmt.Sprintf(`"%s"`, strings.Join(values2, `", "`)), value1, value2, fmt.Sprintf(`"%s"`, strings.Join(values3, `", "`)), value3) +} + func testAccResourceLFTagsConfig_table(rName string, values []string, value string) string { return fmt.Sprintf(` data "aws_caller_identity" "current" {} @@ -577,7 +790,7 @@ resource "aws_lakeformation_lf_tag" "test" { key = %[1]q values = [%[2]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } @@ -592,13 +805,13 @@ resource "aws_lakeformation_resource_lf_tags" "test" { value = %[3]q } - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } `, rName, fmt.Sprintf(`"%s"`, strings.Join(values, `", "`)), value) } -func testAccResourceLFTagsConfig_tableWithColumnsMultiple(rName string, 
values1, values2 []string, value1 string, value2 string) string { +func testAccResourceLFTagsConfig_tableWithColumnsMultipleTags(rName string, values1, values2 []string, value1 string, value2 string) string { return fmt.Sprintf(` data "aws_caller_identity" "current" {} @@ -640,7 +853,7 @@ resource "aws_lakeformation_lf_tag" "test" { key = %[1]q values = [%[2]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } @@ -648,7 +861,7 @@ resource "aws_lakeformation_lf_tag" "test2" { key = "%[1]s-2" values = [%[3]s] - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } @@ -669,7 +882,7 @@ resource "aws_lakeformation_resource_lf_tags" "test" { value = %[5]q } - # for consistency, ensure that admins are setup before testing + # for consistency, ensure that admins are set up before testing depends_on = [aws_lakeformation_data_lake_settings.test] } `, rName, fmt.Sprintf(`"%s"`, strings.Join(values1, `", "`)), fmt.Sprintf(`"%s"`, strings.Join(values2, `", "`)), value1, value2) diff --git a/internal/service/lakeformation/resource_test.go b/internal/service/lakeformation/resource_test.go index 9486c675115..951d7384006 100644 --- a/internal/service/lakeformation/resource_test.go +++ b/internal/service/lakeformation/resource_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
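The test configuration helpers above render Go string slices into HCL list literals with the `fmt.Sprintf(`+"`"+`"%s"`+"`"+`, strings.Join(values, `+"`"+`", "`+"`"+`))` idiom. A minimal stdlib sketch of that idiom follows; the `quoteList` helper name is illustrative only and not part of the provider:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteList renders a Go string slice as a comma-separated list of
// double-quoted HCL string literals, e.g. []string{"a", "b"} -> `"a", "b"`.
// It joins the elements with `", "` and wraps the result in outer quotes,
// mirroring the Sprintf/Join pattern in the test config helpers above.
func quoteList(values []string) string {
	return fmt.Sprintf(`"%s"`, strings.Join(values, `", "`))
}

func main() {
	// Produces the exact text interpolated into `values = [...]` blocks.
	fmt.Println(quoteList([]string{"stowe", "becketts", "three"}))
}
```

Note the result is only the inner list body; the surrounding `[` and `]` come from the HCL template itself.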
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( @@ -173,7 +176,7 @@ func TestAccLakeFormationResource_updateSLRToRole(t *testing.T) { func testAccCheckResourceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lakeformation_resource" { @@ -206,7 +209,7 @@ func testAccCheckResourceExists(ctx context.Context, resourceName string) resour return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LakeFormationConn(ctx) input := &lakeformation.DescribeResourceInput{ ResourceArn: aws.String(rs.Primary.ID), diff --git a/internal/service/lakeformation/service_package_gen.go b/internal/service/lakeformation/service_package_gen.go index f1a44c62e1f..b6396ca6763 100644 --- a/internal/service/lakeformation/service_package_gen.go +++ b/internal/service/lakeformation/service_package_gen.go @@ -5,6 +5,10 @@ package lakeformation import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + lakeformation_sdkv1 "github.com/aws/aws-sdk-go/service/lakeformation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string { return names.LakeFormation } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*lakeformation_sdkv1.LakeFormation, error) { + sess := config["session"].(*session_sdkv1.Session) + + return lakeformation_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/lakeformation/status.go b/internal/service/lakeformation/status.go index dbfcee3e938..91bfe6011d4 100644 --- a/internal/service/lakeformation/status.go +++ b/internal/service/lakeformation/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( @@ -38,7 +41,7 @@ func statusPermissions(ctx context.Context, conn *lakeformation.LakeFormation, i } if err != nil { - return nil, statusFailed, fmt.Errorf("error listing permissions: %w", err) + return nil, statusFailed, fmt.Errorf("listing permissions: %w", err) } // clean permissions = filter out permissions that do not pertain to this specific resource diff --git a/internal/service/lakeformation/strings.go b/internal/service/lakeformation/strings.go index 4028970f570..ece0ae79a5b 100644 --- a/internal/service/lakeformation/strings.go +++ b/internal/service/lakeformation/strings.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( diff --git a/internal/service/lakeformation/strings_test.go b/internal/service/lakeformation/strings_test.go index 71e95b45924..a864d14f2f0 100644 --- a/internal/service/lakeformation/strings_test.go +++ b/internal/service/lakeformation/strings_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
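`NewConn` above extracts the session and endpoint from an untyped `map[string]any` with direct type assertions, which panic if a key is absent or mistyped; that is acceptable there because the framework always populates the map. A hedged stdlib sketch of the checked variant of this pattern (the `endpointFromConfig` name is illustrative, not provider code):

```go
package main

import "fmt"

// endpointFromConfig looks up "endpoint" in an untyped config map and
// asserts it to a string with the two-value (checked) form of a type
// assertion, returning an error instead of panicking on bad input.
func endpointFromConfig(config map[string]any) (string, error) {
	v, ok := config["endpoint"]
	if !ok {
		return "", fmt.Errorf("config missing %q key", "endpoint")
	}
	s, ok := v.(string)
	if !ok {
		return "", fmt.Errorf("config %q is %T, want string", "endpoint", v)
	}
	return s, nil
}

func main() {
	ep, err := endpointFromConfig(map[string]any{"endpoint": "https://example.com"})
	fmt.Println(ep, err)
}
```

The generated `NewConn` skips these checks by design; the sketch is only to show what the unchecked assertions rely on.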
+// SPDX-License-Identifier: MPL-2.0 + package lakeformation_test import ( diff --git a/internal/service/lakeformation/validate.go b/internal/service/lakeformation/validate.go index f1e853efcc2..5eb0a1f09f7 100644 --- a/internal/service/lakeformation/validate.go +++ b/internal/service/lakeformation/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( diff --git a/internal/service/lakeformation/validate_test.go b/internal/service/lakeformation/validate_test.go index da5dc943cb9..8f55304ff7d 100644 --- a/internal/service/lakeformation/validate_test.go +++ b/internal/service/lakeformation/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( diff --git a/internal/service/lakeformation/wait.go b/internal/service/lakeformation/wait.go index b6933be1bdb..f903ee426e0 100644 --- a/internal/service/lakeformation/wait.go +++ b/internal/service/lakeformation/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lakeformation import ( diff --git a/internal/service/lambda/alias.go b/internal/service/lambda/alias.go index e52a1447d39..d84bac2a5d8 100644 --- a/internal/service/lambda/alias.go +++ b/internal/service/lambda/alias.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -82,7 +85,7 @@ func ResourceAlias() *schema.Resource { // CreateAlias in the API / SDK func resourceAliasCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) aliasName := d.Get("name").(string) @@ -111,7 +114,7 @@ func resourceAliasCreate(ctx context.Context, d *schema.ResourceData, meta inter // GetAlias in the API / SDK func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) log.Printf("[DEBUG] Fetching Lambda alias: %s:%s", d.Get("function_name"), d.Get("name")) @@ -149,7 +152,7 @@ func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interfa // DeleteAlias in the API / SDK func resourceAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) log.Printf("[INFO] Deleting Lambda alias: %s:%s", d.Get("function_name"), d.Get("name")) @@ -170,7 +173,7 @@ func resourceAliasDelete(ctx context.Context, d *schema.ResourceData, meta inter // UpdateAlias in the API / SDK func resourceAliasUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) log.Printf("[DEBUG] Updating Lambda alias: %s:%s", d.Get("function_name"), d.Get("name")) diff --git a/internal/service/lambda/alias_data_source.go b/internal/service/lambda/alias_data_source.go index 05ca3f5b2b9..e59d95d7b35 100644 --- 
a/internal/service/lambda/alias_data_source.go +++ b/internal/service/lambda/alias_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -52,7 +55,7 @@ func DataSourceAlias() *schema.Resource { func dataSourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) name := d.Get("name").(string) diff --git a/internal/service/lambda/alias_data_source_test.go b/internal/service/lambda/alias_data_source_test.go index cb7985402d8..89559b5eb00 100644 --- a/internal/service/lambda/alias_data_source_test.go +++ b/internal/service/lambda/alias_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( diff --git a/internal/service/lambda/alias_test.go b/internal/service/lambda/alias_test.go index f836f12ac8d..ba9460ca713 100644 --- a/internal/service/lambda/alias_test.go +++ b/internal/service/lambda/alias_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -197,7 +200,7 @@ func TestAccLambdaAlias_routing(t *testing.T) { func testAccCheckAliasDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_alias" { @@ -228,7 +231,7 @@ func testAccCheckAliasExists(ctx context.Context, n string, mapping *lambda.Alia return fmt.Errorf("Lambda alias not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) params := &lambda.GetAliasInput{ FunctionName: aws.String(rs.Primary.ID), diff --git a/internal/service/lambda/code_signing_config.go b/internal/service/lambda/code_signing_config.go index 4a0376f2aee..fb7fd878c7b 100644 --- a/internal/service/lambda/code_signing_config.go +++ b/internal/service/lambda/code_signing_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -89,7 +92,7 @@ func ResourceCodeSigningConfig() *schema.Resource { func resourceCodeSigningConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) log.Printf("[DEBUG] Creating Lambda code signing config") @@ -121,7 +124,7 @@ func resourceCodeSigningConfigCreate(ctx context.Context, d *schema.ResourceData func resourceCodeSigningConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) configOutput, err := conn.GetCodeSigningConfigWithContext(ctx, &lambda.GetCodeSigningConfigInput{ CodeSigningConfigArn: aws.String(d.Id()), @@ -171,7 +174,7 @@ func resourceCodeSigningConfigRead(ctx context.Context, d *schema.ResourceData, func resourceCodeSigningConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) configInput := &lambda.UpdateCodeSigningConfigInput{ CodeSigningConfigArn: aws.String(d.Id()), @@ -209,7 +212,7 @@ func resourceCodeSigningConfigUpdate(ctx context.Context, d *schema.ResourceData func resourceCodeSigningConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) _, err := conn.DeleteCodeSigningConfigWithContext(ctx, &lambda.DeleteCodeSigningConfigInput{ CodeSigningConfigArn: aws.String(d.Id()), diff --git a/internal/service/lambda/code_signing_config_data_source.go b/internal/service/lambda/code_signing_config_data_source.go index 377a0e7b951..3b339df2686 
100644 --- a/internal/service/lambda/code_signing_config_data_source.go +++ b/internal/service/lambda/code_signing_config_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -69,7 +72,7 @@ func DataSourceCodeSigningConfig() *schema.Resource { func dataSourceCodeSigningConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) arn := d.Get("arn").(string) diff --git a/internal/service/lambda/code_signing_config_data_source_test.go b/internal/service/lambda/code_signing_config_data_source_test.go index f15c47d11b8..dd87a587f2a 100644 --- a/internal/service/lambda/code_signing_config_data_source_test.go +++ b/internal/service/lambda/code_signing_config_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( diff --git a/internal/service/lambda/code_signing_config_test.go b/internal/service/lambda/code_signing_config_test.go index 917abd281d9..be094d89796 100644 --- a/internal/service/lambda/code_signing_config_test.go +++ b/internal/service/lambda/code_signing_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -202,7 +205,7 @@ func testAccCheckCodeSigningExistsConfig(ctx context.Context, n string, mapping return fmt.Errorf("Code Signing Config ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) params := &lambda.GetCodeSigningConfigInput{ CodeSigningConfigArn: aws.String(rs.Primary.ID), @@ -221,7 +224,7 @@ func testAccCheckCodeSigningExistsConfig(ctx context.Context, n string, mapping func testAccCheckCodeSigningConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_code_signing_config" { diff --git a/internal/service/lambda/consts.go b/internal/service/lambda/consts.go index 9f25192394f..6324d4f574a 100644 --- a/internal/service/lambda/consts.go +++ b/internal/service/lambda/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -17,3 +20,21 @@ const ( const ( propagationTimeout = 5 * time.Minute ) + +const ( + invocationActionCreate = "create" + invocationActionDelete = "delete" + invocationActionUpdate = "update" +) + +const ( + lifecycleScopeCreateOnly = "CREATE_ONLY" + lifecycleScopeCrud = "CRUD" +) + +func lifecycleScope_Values() []string { + return []string{ + lifecycleScopeCreateOnly, + lifecycleScopeCrud, + } +} diff --git a/internal/service/lambda/diff.go b/internal/service/lambda/diff.go new file mode 100644 index 00000000000..66bf5c5f07d --- /dev/null +++ b/internal/service/lambda/diff.go @@ -0,0 +1,36 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package lambda + +import ( + "context" + "errors" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +// customizeDiffValidateInput validates that `input` is a JSON object when +// `lifecycle_scope` is not "CREATE_ONLY" +func customizeDiffValidateInput(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { + if diff.Get("lifecycle_scope") == lifecycleScopeCreateOnly { + return nil + } + // input is validated to be valid JSON in the schema already. + inputNoSpaces := strings.TrimSpace(diff.Get("input").(string)) + if strings.HasPrefix(inputNoSpaces, "{") && strings.HasSuffix(inputNoSpaces, "}") { + return nil + } + + return errors.New(`lifecycle_scope other than "CREATE_ONLY" requires input to be a JSON object`) +} + +// customizeDiffInputChangeWithCreateOnlyScope forces a new resource when `input` has +// a change and `lifecycle_scope` is set to "CREATE_ONLY" +func customizeDiffInputChangeWithCreateOnlyScope(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { + if diff.HasChange("input") && diff.Get("lifecycle_scope").(string) == lifecycleScopeCreateOnly { + return diff.ForceNew("input") + } + return nil +} diff --git a/internal/service/lambda/event_source_mapping.go b/internal/service/lambda/event_source_mapping.go index a6c10710b99..0007aff98df 100644 --- a/internal/service/lambda/event_source_mapping.go +++ b/internal/service/lambda/event_source_mapping.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
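The new `customizeDiffValidateInput` only checks the top-level shape of `input` (a leading `{` and trailing `}` after trimming whitespace), relying on the schema to have already validated full JSON syntax. A minimal stdlib sketch of that shape check (the `isJSONObject` name is illustrative, not the provider's function):

```go
package main

import (
	"fmt"
	"strings"
)

// isJSONObject reports whether s, after trimming surrounding whitespace,
// looks like a JSON object literal. As in customizeDiffValidateInput,
// only the outer braces are inspected; interior validity is assumed to
// have been enforced elsewhere.
func isJSONObject(s string) bool {
	t := strings.TrimSpace(s)
	return strings.HasPrefix(t, "{") && strings.HasSuffix(t, "}")
}

func main() {
	fmt.Println(isJSONObject(` {"event": "create"} `)) // object: accepted
	fmt.Println(isJSONObject(`[1, 2, 3]`))             // array: rejected
}
```

This is why a top-level JSON array or bare string fails the diff validation even though it is valid JSON.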
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -232,9 +235,10 @@ func ResourceEventSourceMapping() *schema.Resource { Computed: true, }, "queues": { - Type: schema.TypeSet, + Type: schema.TypeList, Optional: true, ForceNew: true, + MaxItems: 1, Elem: &schema.Schema{ Type: schema.TypeString, ValidateFunc: validation.StringLenBetween(1, 1000), @@ -365,7 +369,7 @@ func ResourceEventSourceMapping() *schema.Resource { func resourceEventSourceMappingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) input := &lambda.CreateEventSourceMappingInput{ @@ -426,8 +430,8 @@ func resourceEventSourceMappingCreate(ctx context.Context, d *schema.ResourceDat input.ParallelizationFactor = aws.Int64(int64(v.(int))) } - if v, ok := d.GetOk("queues"); ok && v.(*schema.Set).Len() > 0 { - input.Queues = flex.ExpandStringSet(v.(*schema.Set)) + if v, ok := d.GetOk("queues"); ok && len(v.([]interface{})) > 0 { + input.Queues = flex.ExpandStringList(v.([]interface{})) } if v, ok := d.GetOk("scaling_config"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -491,7 +495,7 @@ func resourceEventSourceMappingCreate(ctx context.Context, d *schema.ResourceDat func resourceEventSourceMappingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) eventSourceMappingConfiguration, err := FindEventSourceMappingConfigurationByID(ctx, conn, d.Id()) @@ -601,7 +605,7 @@ func resourceEventSourceMappingRead(ctx context.Context, d *schema.ResourceData, func resourceEventSourceMappingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) input := &lambda.UpdateEventSourceMappingInput{ UUID: aws.String(d.Id()), @@ -700,7 +704,7 @@ func resourceEventSourceMappingUpdate(ctx context.Context, d *schema.ResourceDat func resourceEventSourceMappingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) log.Printf("[INFO] Deleting Lambda Event Source Mapping: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, eventSourceMappingPropagationTimeout, func() (interface{}, error) { diff --git a/internal/service/lambda/event_source_mapping_test.go b/internal/service/lambda/event_source_mapping_test.go index 3237266e689..a83b5b8bf72 100644 --- a/internal/service/lambda/event_source_mapping_test.go +++ b/internal/service/lambda/event_source_mapping_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -1207,7 +1210,7 @@ func TestAccLambdaEventSourceMapping_documentDB(t *testing.T) { } func testAccPreCheckMQ(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn(ctx) input := &mq.ListBrokersInput{} @@ -1223,7 +1226,7 @@ func testAccPreCheckMQ(ctx context.Context, t *testing.T) { } func testAccPreCheckMSK(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConn(ctx) input := &kafka.ListClustersInput{} @@ -1239,7 +1242,7 @@ func testAccPreCheckMSK(ctx context.Context, t *testing.T) { } func testAccPreCheckSecretsManager(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.ListSecretsInput{} @@ -1256,7 +1259,7 @@ func testAccPreCheckSecretsManager(ctx context.Context, t *testing.T) { func testAccCheckEventSourceMappingIsBeingDisabled(ctx context.Context, conf *lambda.EventSourceMappingConfiguration) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) // Disable enabled state err := retry.RetryContext(ctx, 10*time.Minute, func() *retry.RetryError { params := &lambda.UpdateEventSourceMappingInput{ @@ -1306,7 +1309,7 @@ func testAccCheckEventSourceMappingIsBeingDisabled(ctx context.Context, conf *la func testAccCheckEventSourceMappingDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range 
s.RootModule().Resources { if rs.Type != "aws_lambda_event_source_mapping" { @@ -1341,7 +1344,7 @@ func testAccCheckEventSourceMappingExists(ctx context.Context, n string, v *lamb return fmt.Errorf("no Lambda Event Source Mapping ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) eventSourceMappingConfiguration, err := tflambda.FindEventSourceMappingConfigurationByID(ctx, conn, rs.Primary.ID) @@ -1597,7 +1600,7 @@ resource "aws_lambda_function" "test" { function_name = %[1]q handler = "exports.example" role = aws_iam_role.test.arn - runtime = "nodejs12.x" + runtime = "nodejs18.x" } `, rName, streamStatus, streamViewType) } diff --git a/internal/service/lambda/flex.go b/internal/service/lambda/flex.go index 0f35e788a89..31213f28317 100644 --- a/internal/service/lambda/flex.go +++ b/internal/service/lambda/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( diff --git a/internal/service/lambda/function.go b/internal/service/lambda/function.go index a49a9c777dc..c13e9a232fb 100644 --- a/internal/service/lambda/function.go +++ b/internal/service/lambda/function.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -15,7 +18,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/lambda" "github.com/aws/aws-sdk-go-v2/service/lambda/types" "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" @@ -26,7 +28,6 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" - tfec2 "github.com/hashicorp/terraform-provider-aws/internal/service/ec2" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -258,10 +259,14 @@ func ResourceFunction() *schema.Resource { Computed: true, }, "replace_security_groups_on_destroy": { + Deprecated: "AWS no longer supports this operation. This attribute now has " + + "no effect and will be removed in a future major version.", Type: schema.TypeBool, Optional: true, }, "replacement_security_group_ids": { + Deprecated: "AWS no longer supports this operation. 
This attribute now has " + + "no effect and will be removed in a future major version.", Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -423,7 +428,7 @@ const ( func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaClient() + conn := meta.(*conns.AWSClient).LambdaClient(ctx) functionName := d.Get("function_name").(string) packageType := types.PackageType(d.Get("package_type").(string)) @@ -435,7 +440,7 @@ func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta in PackageType: packageType, Publish: d.Get("publish").(bool), Role: aws.String(d.Get("role").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Timeout: aws.Int32(int32(d.Get("timeout").(int))), } @@ -571,7 +576,7 @@ func resourceFunctionCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaClient() + conn := meta.(*conns.AWSClient).LambdaClient(ctx) input := &lambda.GetFunctionInput{ FunctionName: aws.String(d.Id()), @@ -683,7 +688,7 @@ func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("qualified_invoke_arn", functionInvokeARN(qualifiedARN, meta)) d.Set("version", latest.Version) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) } // Currently, this functionality is only enabled in AWS Commercial partition @@ -717,7 +722,7 @@ func resourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaClient() + conn := meta.(*conns.AWSClient).LambdaClient(ctx) if d.HasChange("code_signing_config_arn") { if v, ok 
:= d.GetOk("code_signing_config_arn"); ok { @@ -991,7 +996,7 @@ func resourceFunctionUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceFunctionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaClient() + conn := meta.(*conns.AWSClient).LambdaClient(ctx) if v, ok := d.GetOk("skip_destroy"); ok && v.(bool) { log.Printf("[DEBUG] Retaining Lambda Function: %s", d.Id()) @@ -1013,12 +1018,6 @@ func resourceFunctionDelete(ctx context.Context, d *schema.ResourceData, meta in return sdkdiag.AppendErrorf(diags, "deleting Lambda Function (%s): %s", d.Id(), err) } - if _, ok := d.GetOk("replace_security_groups_on_destroy"); ok { - if err := replaceSecurityGroups(ctx, d, meta); err != nil { - return sdkdiag.AppendFromErr(diags, err) - } - } - return diags } @@ -1077,56 +1076,6 @@ func findLatestFunctionVersionByName(ctx context.Context, conn *lambda.Client, n return output, nil } -// replaceSecurityGroups will replace the security groups on orphaned lambda ENI's -// -// If the replacement_security_group_ids attribute is set, those values will be used as -// replacements. Otherwise, the default security group is used. 
-func replaceSecurityGroups(ctx context.Context, d *schema.ResourceData, meta interface{}) error { - ec2Conn := meta.(*conns.AWSClient).EC2Conn() - - var sgIDs []string - var vpcID string - if v, ok := d.GetOk("vpc_config"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - tfMap := v.([]interface{})[0].(map[string]interface{}) - sgIDs = flex.ExpandStringValueSet(tfMap["security_group_ids"].(*schema.Set)) - vpcID = tfMap["vpc_id"].(string) - } else { // empty VPC config, nothing to do - return nil - } - - if len(sgIDs) == 0 { // no security groups, nothing to do - return nil - } - - var replacmentSGIDs []*string - if v, ok := d.GetOk("replacement_security_group_ids"); ok { - replacmentSGIDs = flex.ExpandStringSet(v.(*schema.Set)) - } else { - defaultSG, err := tfec2.FindSecurityGroupByNameAndVPCID(ctx, ec2Conn, "default", vpcID) - if err != nil || defaultSG == nil { - return fmt.Errorf("finding VPC (%s) default security group: %s", vpcID, err) - } - replacmentSGIDs = []*string{defaultSG.GroupId} - } - - networkInterfaces, err := tfec2.FindLambdaNetworkInterfacesBySecurityGroupIDsAndFunctionName(ctx, ec2Conn, sgIDs, d.Id()) - if err != nil { - return fmt.Errorf("finding Lambda Function (%s) network interfaces: %s", d.Id(), err) - } - - for _, ni := range networkInterfaces { - _, err := ec2Conn.ModifyNetworkInterfaceAttributeWithContext(ctx, &ec2.ModifyNetworkInterfaceAttributeInput{ - NetworkInterfaceId: ni.NetworkInterfaceId, - Groups: replacmentSGIDs, - }) - if err != nil { - return fmt.Errorf("modifying Lambda Function (%s) network interfaces: %s", d.Id(), err) - } - } - - return nil -} - func statusFunctionLastUpdateStatus(ctx context.Context, conn *lambda.Client, name string) retry.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindFunctionByName(ctx, conn, name) @@ -1340,6 +1289,7 @@ func SignerServiceIsAvailable(region string) bool { endpoints.UsWest1RegionID: {}, endpoints.UsWest2RegionID: {}, 
endpoints.AfSouth1RegionID: {}, + endpoints.ApEast1RegionID: {}, endpoints.ApSouth1RegionID: {}, endpoints.ApNortheast2RegionID: {}, endpoints.ApSoutheast1RegionID: {}, diff --git a/internal/service/lambda/function_data_source.go b/internal/service/lambda/function_data_source.go index 3b6d9ffe6cf..d4c8b2e1ca3 100644 --- a/internal/service/lambda/function_data_source.go +++ b/internal/service/lambda/function_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -214,7 +217,7 @@ func DataSourceFunction() *schema.Resource { func dataSourceFunctionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaClient() + conn := meta.(*conns.AWSClient).LambdaClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig functionName := d.Get("function_name").(string) diff --git a/internal/service/lambda/function_data_source_test.go b/internal/service/lambda/function_data_source_test.go index 354042fa462..849aa6dce96 100644 --- a/internal/service/lambda/function_data_source_test.go +++ b/internal/service/lambda/function_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( diff --git a/internal/service/lambda/function_event_invoke_config.go b/internal/service/lambda/function_event_invoke_config.go index f94a02cf0e9..de4676bc75e 100644 --- a/internal/service/lambda/function_event_invoke_config.go +++ b/internal/service/lambda/function_event_invoke_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -99,7 +102,7 @@ func ResourceFunctionEventInvokeConfig() *schema.Resource { func resourceFunctionEventInvokeConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) qualifier := d.Get("qualifier").(string) @@ -159,7 +162,7 @@ func resourceFunctionEventInvokeConfigCreate(ctx context.Context, d *schema.Reso func resourceFunctionEventInvokeConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName, qualifier, err := FunctionEventInvokeConfigParseID(d.Id()) @@ -201,7 +204,7 @@ func resourceFunctionEventInvokeConfigRead(ctx context.Context, d *schema.Resour func resourceFunctionEventInvokeConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName, qualifier, err := FunctionEventInvokeConfigParseID(d.Id()) @@ -257,7 +260,7 @@ func resourceFunctionEventInvokeConfigUpdate(ctx context.Context, d *schema.Reso func resourceFunctionEventInvokeConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName, qualifier, err := FunctionEventInvokeConfigParseID(d.Id()) @@ -291,7 +294,7 @@ func FunctionEventInvokeConfigParseID(id string) (string, string, error) { parsedARN, err := arn.Parse(id) if err != nil { - return "", "", fmt.Errorf("error parsing ARN (%s): %s", id, err) + return "", "", 
fmt.Errorf("parsing ARN (%s): %s", id, err) } function := strings.TrimPrefix(parsedARN.Resource, "function:") diff --git a/internal/service/lambda/function_event_invoke_config_test.go b/internal/service/lambda/function_event_invoke_config_test.go index cf2b4031e61..74117b027ef 100644 --- a/internal/service/lambda/function_event_invoke_config_test.go +++ b/internal/service/lambda/function_event_invoke_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -510,7 +513,7 @@ func TestAccLambdaFunctionEventInvokeConfig_Qualifier_latest(t *testing.T) { func testAccCheckFunctionEventInvokeConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_function_event_invoke_config" { @@ -551,7 +554,7 @@ func testAccCheckFunctionEventInvokeConfigDestroy(ctx context.Context) resource. 
func testAccCheckFunctionDisappears(ctx context.Context, function *lambda.GetFunctionOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) input := &lambda.DeleteFunctionInput{ FunctionName: function.Configuration.FunctionName, @@ -574,7 +577,7 @@ func testAccCheckFunctionEventInvokeDisappearsConfig(ctx context.Context, resour return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) functionName, qualifier, err := tflambda.FunctionEventInvokeConfigParseID(rs.Primary.ID) @@ -607,7 +610,7 @@ func testAccCheckFunctionEventInvokeExistsConfig(ctx context.Context, resourceNa return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) functionName, qualifier, err := tflambda.FunctionEventInvokeConfigParseID(rs.Primary.ID) diff --git a/internal/service/lambda/function_test.go b/internal/service/lambda/function_test.go index 22bfbac0be8..8d9a66b3b45 100644 --- a/internal/service/lambda/function_test.go +++ b/internal/service/lambda/function_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -2089,6 +2092,8 @@ func TestAccLambdaFunction_runtimes(t *testing.T) { fallthrough case types.RuntimeRuby25: fallthrough + case types.RuntimeNodejs12x: + fallthrough case types.RuntimeNodejs10x: fallthrough case types.RuntimeNodejs810: @@ -2101,6 +2106,8 @@ func TestAccLambdaFunction_runtimes(t *testing.T) { fallthrough case types.RuntimeNodejs: fallthrough + case types.RuntimeDotnetcore31: + fallthrough case types.RuntimeDotnetcore20: fallthrough case types.RuntimeDotnetcore10: @@ -2215,7 +2222,7 @@ func TestAccLambdaFunction_skipDestroyInconsistentPlan(t *testing.T) { func testAccCheckFunctionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_function" { @@ -2241,7 +2248,7 @@ func testAccCheckFunctionDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckFunctionNoDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_function" { @@ -2268,7 +2275,7 @@ func testAccCheckFunctionExists(ctx context.Context, n string, v *lambda.GetFunc return fmt.Errorf("No Lambda Function ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) output, err := tflambda.FindFunctionByName(ctx, conn, rs.Primary.ID) @@ -2299,7 +2306,7 @@ func testAccCheckFunctionInvokeARN(name string, function *lambda.GetFunctionOutp func testAccInvokeFunction(ctx context.Context, function *lambda.GetFunctionOutput) 
resource.TestCheckFunc { return func(s *terraform.State) error { f := function.Configuration - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) // If the function is VPC-enabled this will create ENI automatically _, err := conn.Invoke(ctx, &lambda.InvokeInput{ @@ -3727,11 +3734,6 @@ resource "aws_s3_bucket" "artifacts" { force_destroy = true } -resource "aws_s3_bucket_acl" "artifacts" { - bucket = aws_s3_bucket.artifacts.id - acl = "private" -} - resource "aws_s3_bucket_versioning" "artifacts" { bucket = aws_s3_bucket.artifacts.id versioning_configuration { @@ -3788,11 +3790,6 @@ resource "aws_s3_bucket" "artifacts" { force_destroy = true } -resource "aws_s3_bucket_acl" "artifacts" { - bucket = aws_s3_bucket.artifacts.id - acl = "private" -} - resource "aws_s3_object" "o" { bucket = aws_s3_bucket.artifacts.bucket key = %[2]q @@ -3891,7 +3888,7 @@ func TestFlattenImageConfigShouldNotFailWithEmptyImageConfig(t *testing.T) { } func testAccPreCheckSignerSigningProfile(ctx context.Context, t *testing.T, platformID string) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn(ctx) var foundPlatform bool err := conn.ListSigningPlatformsPagesWithContext(ctx, &signer.ListSigningPlatformsInput{}, func(page *signer.ListSigningPlatformsOutput, lastPage bool) bool { diff --git a/internal/service/lambda/function_url.go b/internal/service/lambda/function_url.go index 0044a8bd906..d2d7ef17805 100644 --- a/internal/service/lambda/function_url.go +++ b/internal/service/lambda/function_url.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -120,7 +123,7 @@ func ResourceFunctionURL() *schema.Resource { } func resourceFunctionURLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) name := d.Get("function_name").(string) qualifier := d.Get("qualifier").(string) @@ -143,7 +146,7 @@ func resourceFunctionURLCreate(ctx context.Context, d *schema.ResourceData, meta _, err := conn.CreateFunctionUrlConfigWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Lambda Function URL (%s): %s", id, err) + return diag.Errorf("creating Lambda Function URL (%s): %s", id, err) } d.SetId(id) @@ -168,7 +171,7 @@ func resourceFunctionURLCreate(ctx context.Context, d *schema.ResourceData, meta if tfawserr.ErrMessageContains(err, lambda.ErrCodeResourceConflictException, "The statement id (FunctionURLAllowPublicAccess) provided already exists") { log.Printf("[DEBUG] function permission statement 'FunctionURLAllowPublicAccess' already exists.") } else { - return diag.Errorf("error adding Lambda Function URL (%s) permission %s", d.Id(), err) + return diag.Errorf("adding Lambda Function URL (%s) permission %s", d.Id(), err) } } } @@ -177,7 +180,7 @@ func resourceFunctionURLCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) name, qualifier, err := FunctionURLParseResourceID(d.Id()) @@ -194,7 +197,7 @@ func resourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta i } if err != nil { - return diag.Errorf("error reading Lambda Function URL (%s): %s", d.Id(), err) + return diag.Errorf("reading Lambda Function URL (%s): %s", d.Id(), err) } functionURL := aws.StringValue(output.FunctionUrl) 
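The `url_id` handling in the surrounding function_url.go hunks relies on Function URL endpoints having the shape `https://<url-id>.lambda-url.<region>.on.aws`, so the first host label is the URL ID. A minimal standalone sketch of that extraction (the endpoint value and the `urlID` helper name are made up for illustration, not taken from the provider):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// urlID extracts the <url-id> label from a Lambda Function URL endpoint,
// mirroring the strings.Split(v.Host, ".") logic in the hunks above.
func urlID(functionURL string) (string, error) {
	u, err := url.Parse(functionURL)
	if err != nil {
		return "", fmt.Errorf("parsing URL (%s): %s", functionURL, err)
	}
	parts := strings.Split(u.Host, ".")
	if len(parts) == 0 || parts[0] == "" {
		return "", fmt.Errorf("cannot determine url_id from host %q", u.Host)
	}
	return parts[0], nil
}

func main() {
	// "abcdefg" is a hypothetical URL ID, not a real endpoint.
	id, err := urlID("https://abcdefg.lambda-url.us-east-1.on.aws/")
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
```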
@@ -202,7 +205,7 @@ func resourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("authorization_type", output.AuthType) if output.Cors != nil { if err := d.Set("cors", []interface{}{flattenCors(output.Cors)}); err != nil { - return diag.Errorf("error setting cors: %s", err) + return diag.Errorf("setting cors: %s", err) } } else { d.Set("cors", nil) @@ -216,7 +219,7 @@ func resourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta i // Function URL endpoints have the following format: // https://.lambda-url..on.aws if v, err := url.Parse(functionURL); err != nil { - return diag.Errorf("error parsing URL (%s): %s", functionURL, err) + return diag.Errorf("parsing URL (%s): %s", functionURL, err) } else if v := strings.Split(v.Host, "."); len(v) > 0 { d.Set("url_id", v[0]) } else { @@ -227,7 +230,7 @@ func resourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceFunctionURLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) name, qualifier, err := FunctionURLParseResourceID(d.Id()) @@ -263,14 +266,14 @@ func resourceFunctionURLUpdate(ctx context.Context, d *schema.ResourceData, meta _, err = conn.UpdateFunctionUrlConfigWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Lambda Function URL (%s): %s", d.Id(), err) + return diag.Errorf("updating Lambda Function URL (%s): %s", d.Id(), err) } return resourceFunctionURLRead(ctx, d, meta) } func resourceFunctionURLDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) name, qualifier, err := FunctionURLParseResourceID(d.Id()) @@ -294,7 +297,7 @@ func resourceFunctionURLDelete(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return 
diag.Errorf("error deleting Lambda Function URL (%s): %s", d.Id(), err) + return diag.Errorf("deleting Lambda Function URL (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/lambda/function_url_data_source.go b/internal/service/lambda/function_url_data_source.go index f1a160ba40f..3e010a8273b 100644 --- a/internal/service/lambda/function_url_data_source.go +++ b/internal/service/lambda/function_url_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -94,7 +97,7 @@ func DataSourceFunctionURL() *schema.Resource { } func dataSourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) name := d.Get("function_name").(string) qualifier := d.Get("qualifier").(string) @@ -102,7 +105,7 @@ func dataSourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta output, err := FindFunctionURLByNameAndQualifier(ctx, conn, name, qualifier) if err != nil { - return diag.Errorf("error reading Lambda Function URL (%s): %s", id, err) + return diag.Errorf("reading Lambda Function URL (%s): %s", id, err) } functionURL := aws.StringValue(output.FunctionUrl) @@ -111,7 +114,7 @@ func dataSourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta d.Set("authorization_type", output.AuthType) if output.Cors != nil { if err := d.Set("cors", []interface{}{flattenCors(output.Cors)}); err != nil { - return diag.Errorf("error setting cors: %s", err) + return diag.Errorf("setting cors: %s", err) } } else { d.Set("cors", nil) @@ -127,7 +130,7 @@ func dataSourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta // Function URL endpoints have the following format: // https://.lambda-url..on.aws if v, err := url.Parse(functionURL); err != nil { - return diag.Errorf("error parsing URL (%s): %s", functionURL, err) + return 
diag.Errorf("parsing URL (%s): %s", functionURL, err) } else if v := strings.Split(v.Host, "."); len(v) > 0 { d.Set("url_id", v[0]) } else { diff --git a/internal/service/lambda/function_url_data_source_test.go b/internal/service/lambda/function_url_data_source_test.go index 033cce3b085..3d1bb4712dd 100644 --- a/internal/service/lambda/function_url_data_source_test.go +++ b/internal/service/lambda/function_url_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( diff --git a/internal/service/lambda/function_url_test.go b/internal/service/lambda/function_url_test.go index 7b08bddc68e..1af0c5d4690 100644 --- a/internal/service/lambda/function_url_test.go +++ b/internal/service/lambda/function_url_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -290,7 +293,7 @@ func testAccCheckFunctionURLExists(ctx context.Context, n string, v *lambda.GetF return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) output, err := tflambda.FindFunctionURLByNameAndQualifier(ctx, conn, name, qualifier) @@ -306,7 +309,7 @@ func testAccCheckFunctionURLExists(ctx context.Context, n string, v *lambda.GetF func testAccCheckFunctionURLDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_function_url" { diff --git a/internal/service/lambda/functions_data_source.go b/internal/service/lambda/functions_data_source.go index dd2fad2944e..d6a59b28e68 100644 --- a/internal/service/lambda/functions_data_source.go +++ b/internal/service/lambda/functions_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -37,7 +40,7 @@ func DataSourceFunctions() *schema.Resource { } func dataSourceFunctionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) input := &lambda.ListFunctionsInput{} diff --git a/internal/service/lambda/functions_data_source_test.go b/internal/service/lambda/functions_data_source_test.go index 3798d99f802..84854d4b08a 100644 --- a/internal/service/lambda/functions_data_source_test.go +++ b/internal/service/lambda/functions_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -22,8 +25,8 @@ func TestAccLambdaFunctionsDataSource_basic(t *testing.T) { { Config: testAccFunctionsDataSourceConfig_basic(rName), Check: resource.ComposeAggregateTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "function_arns.#", "0"), - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "function_names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "function_arns.#", 0), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "function_names.#", 0), ), }, }, diff --git a/internal/service/lambda/generate.go b/internal/service/lambda/generate.go index c2d49414add..92e529c5dc8 100644 --- a/internal/service/lambda/generate.go +++ b/internal/service/lambda/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ServiceTagsMap -TagInIDElem=Resource -UpdateTags -AWSSDKVersion=2 -KVTValues -SkipTypesImp +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -ServiceTagsMap -TagInIDElem=Resource -UpdateTags -ListTags -ListTagsInIDElem=Resource -ListTagsOp=ListTags -AWSSDKVersion=2 -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package lambda diff --git a/internal/service/lambda/invocation.go b/internal/service/lambda/invocation.go index aec656aa99a..46d8c3eb7a8 100644 --- a/internal/service/lambda/invocation.go +++ b/internal/service/lambda/invocation.go @@ -1,26 +1,34 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( "context" "crypto/md5" + "encoding/json" "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/lambda" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" ) +const defaultInvocationTerraformKey = "tf" + // @SDKResource("aws_lambda_invocation") func ResourceInvocation() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceInvocationCreate, ReadWithoutTimeout: resourceInvocationRead, DeleteWithoutTimeout: resourceInvocationDelete, + UpdateWithoutTimeout: resourceInvocationUpdate, Schema: map[string]*schema.Schema{ "function_name": { @@ -31,7 +39,6 @@ func ResourceInvocation() *schema.Resource { "input": { Type: schema.TypeString, Required: true, - ForceNew: true, ValidateFunc: validation.StringIsJSON, }, "qualifier": { @@ -50,17 +57,121 @@ func ResourceInvocation() *schema.Resource { ForceNew: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "lifecycle_scope": { + Type: 
schema.TypeString, + Optional: true, + Default: lifecycleScopeCreateOnly, + ValidateFunc: validation.StringInSlice(lifecycleScope_Values(), false), + }, + "terraform_key": { + Type: schema.TypeString, + Optional: true, + Default: defaultInvocationTerraformKey, + }, }, + CustomizeDiff: customdiff.Sequence( + customizeDiffValidateInput, + customizeDiffInputChangeWithCreateOnlyScope, + ), } } func resourceInvocationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return invoke(ctx, invocationActionCreate, d, meta) +} + +func resourceInvocationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + return diags +} + +func resourceInvocationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return invoke(ctx, invocationActionUpdate, d, meta) +} + +func resourceInvocationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + if !isCreateOnlyScope(d) { + log.Printf("[DEBUG] Lambda Invocation (%s) \"deleted\" by invocation & removing from state", d.Id()) + return invoke(ctx, invocationActionDelete, d, meta) + } var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + log.Printf("[DEBUG] Lambda Invocation (%s) \"deleted\" by removing from state", d.Id()) + return diags +} + +// buildInput makes sure that the user provided input is enriched for handling lifecycle events +// +// In order to make this a non-breaking change this function only manipulates input if +// the invocation is not only for creation of resources. In order for the lambda +// to understand the action it has to take we pass on the action that terraform wants to do +// on the invocation resource. +// +// Because Lambda functions by default are stateless we must pass the input from the previous +// invocation to allow implementation of delete/update at Lambda side. 
+func buildInput(d *schema.ResourceData, action string) ([]byte, error) { + if isCreateOnlyScope(d) { + jsonBytes := []byte(d.Get("input").(string)) + return jsonBytes, nil + } + oldInputMap, newInputMap, err := getInputChange(d) + if err != nil { + log.Printf("[DEBUG] input serialization %s", err) + return nil, err + } + + newInputMap[d.Get("terraform_key").(string)] = map[string]interface{}{ + "action": action, + "prev_input": oldInputMap, + } + return json.Marshal(&newInputMap) +} + +func getObjectFromJSONString(s string) (map[string]interface{}, error) { + if len(s) == 0 { + return nil, nil + } + var mapObject map[string]interface{} + if err := json.Unmarshal([]byte(s), &mapObject); err != nil { + log.Printf("[ERROR] input JSON deserialization '%s'", s) + return nil, err + } + return mapObject, nil +} + +// getInputChange gets the old and new input as maps +func getInputChange(d *schema.ResourceData) (map[string]interface{}, map[string]interface{}, error) { + old, new := d.GetChange("input") + oldMap, err := getObjectFromJSONString(old.(string)) + if err != nil { + log.Printf("[ERROR] old input serialization '%s'", old.(string)) + return nil, nil, err + } + newMap, err := getObjectFromJSONString(new.(string)) + if err != nil { + log.Printf("[ERROR] new input serialization '%s'", new.(string)) + return nil, nil, err + } + return oldMap, newMap, nil +} + +// isCreateOnlyScope returns true if Lambda is only invoked when the resource is +// created or replaced. +// +// The original invocation logic only triggers on create.
+func isCreateOnlyScope(d *schema.ResourceData) bool { + return d.Get("lifecycle_scope").(string) == lifecycleScopeCreateOnly +} + +func invoke(ctx context.Context, action string, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) qualifier := d.Get("qualifier").(string) - input := []byte(d.Get("input").(string)) + input, err := buildInput(d, action) + if err != nil { + return sdkdiag.AppendErrorf(diags, "Lambda Invocation (%s) input transformation failed for input (%s): %s", d.Id(), d.Get("input").(string), err) + } res, err := conn.InvokeWithContext(ctx, &lambda.InvokeInput{ FunctionName: aws.String(functionName), @@ -82,14 +193,3 @@ func resourceInvocationCreate(ctx context.Context, d *schema.ResourceData, meta return diags } - -func resourceInvocationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - var diags diag.Diagnostics - return diags -} - -func resourceInvocationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - var diags diag.Diagnostics - log.Printf("[DEBUG] Lambda Invocation (%s) \"deleted\" by removing from state", d.Id()) - return diags -} diff --git a/internal/service/lambda/invocation_data_source.go b/internal/service/lambda/invocation_data_source.go index 048548d5276..8bed79a1a3d 100644 --- a/internal/service/lambda/invocation_data_source.go +++ b/internal/service/lambda/invocation_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -47,7 +50,7 @@ func DataSourceInvocation() *schema.Resource { func dataSourceInvocationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) qualifier := d.Get("qualifier").(string) diff --git a/internal/service/lambda/invocation_data_source_test.go b/internal/service/lambda/invocation_data_source_test.go index 783500bab7c..af8cda882cf 100644 --- a/internal/service/lambda/invocation_data_source_test.go +++ b/internal/service/lambda/invocation_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( diff --git a/internal/service/lambda/invocation_test.go b/internal/service/lambda/invocation_test.go index 5f63d5c5fec..7e40cf28f25 100644 --- a/internal/service/lambda/invocation_test.go +++ b/internal/service/lambda/invocation_test.go @@ -1,32 +1,46 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( + "context" "fmt" + "strconv" "testing" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/lambda" + "github.com/aws/aws-sdk-go/service/ssm" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/verify" ) func TestAccLambdaInvocation_basic(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) testData := "value3" + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := fmt.Sprintf(`{"key1":"value1","key2":"value2","key3":%q}`, testData) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckInvocationDestroy, + CheckDestroy: acctest.CheckDestroyNoop, Steps: []resource.TestStep{ { - Config: testAccInvocationConfig_basic(rName, testData), + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, testData), + testAccInvocationConfig_invocation(inputJSON, ""), + ), Check: resource.ComposeTestCheckFunc( - testAccCheckInvocationResult(resourceName, fmt.Sprintf(`{"key1":"value1","key2":"value2","key3":%q}`, testData)), + testAccCheckInvocationResult(resourceName, resultJSON), ), }, }, @@ -43,7 +57,7 @@ func TestAccLambdaInvocation_qualifier(t *testing.T) { PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckInvocationDestroy, + CheckDestroy: 
acctest.CheckDestroyNoop, Steps: []resource.TestStep{ { Config: testAccInvocationConfig_qualifier(rName, testData), @@ -65,7 +79,7 @@ func TestAccLambdaInvocation_complex(t *testing.T) { PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckInvocationDestroy, + CheckDestroy: acctest.CheckDestroyNoop, Steps: []resource.TestStep{ { Config: testAccInvocationConfig_complex(rName, testData), @@ -88,7 +102,7 @@ func TestAccLambdaInvocation_triggers(t *testing.T) { PreCheck: func() { acctest.PreCheck(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckInvocationDestroy, + CheckDestroy: acctest.CheckDestroyNoop, Steps: []resource.TestStep{ { Config: testAccInvocationConfig_triggers(rName, testData), @@ -112,12 +126,281 @@ func TestAccLambdaInvocation_triggers(t *testing.T) { }) } -func testAccCheckInvocationDestroy(s *terraform.State) error { - // Nothing to check on destroy - return nil +func TestAccLambdaInvocation_lifecycle_scopeCRUDCreate(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation_crud" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := `{"key1":"value1","key2":"value2","tf":{"action":"create", "prev_input": null}}` + + extraArgs := `lifecycle_scope = "CRUD"` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_invocation(inputJSON, 
extraArgs), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON), + ), + }, + }, + }) +} + +func TestAccLambdaInvocation_lifecycle_scopeCRUDUpdateInput(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation_crud" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := `{"key1":"value1","key2":"value2","tf":{"action":"create", "prev_input": null}}` + inputJSON2 := `{"key1":"valueB","key2":"value2"}` + resultJSON2 := fmt.Sprintf(`{"key1":"valueB","key2":"value2","tf":{"action":"update", "prev_input": %s}}`, inputJSON) + + extraArgs := `lifecycle_scope = "CRUD"` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_invocation(inputJSON, extraArgs), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON), + ), + }, + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_invocation(inputJSON2, extraArgs), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON2), + ), + }, + }, + }) } -func testAccConfigInvocation_base(roleName string) string { +func TestAccLambdaInvocation_lifecycle_scopeCreateOnlyUpdateInput(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation_crud" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := 
`{"key1":"value1","key2":"value2"}` + inputJSON2 := `{"key1":"valueB","key2":"value2"}` + resultJSON2 := `{"key1":"valueB","key2":"value2"}` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_invocation(inputJSON, ""), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON), + ), + }, + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_invocation(inputJSON2, ""), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON2), + ), + }, + }, + }) +} + +// TestAccLambdaInvocation_lifecycle_scopeCRUDDestroy will check that destroy is handled appropriately. +// +// In order to allow checking the deletion, we use a custom lifecycle which will store its JSON even when a delete action +// is passed. The Lambda function will create the SSM parameter and the check will verify the content. 
+func TestAccLambdaInvocation_lifecycle_scopeCRUDDestroy(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation_crud" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ssmParameterName := fmt.Sprintf("/tf-test/CRUD/%s", rName) + + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := `{"key1":"value1","key2":"value2","tf":{"action":"create", "prev_input": null}}` + destroyJSON := fmt.Sprintf(`{"key1":"value1","key2":"value2","tf":{"action":"delete","prev_input":%s}}`, inputJSON) + + dependsOnSSMPermissions := `depends_on = [aws_iam_role_policy_attachment.test_ssm]` + crudLifecycle := `lifecycle_scope = "CRUD"` + extraArgs := dependsOnSSMPermissions + "\n" + crudLifecycle + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ssmParameterName), + testAccInvocationConfig_crudAllowSSM(rName, ssmParameterName), + testAccInvocationConfig_invocation(inputJSON, extraArgs), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON), + ), + }, + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ssmParameterName), + testAccInvocationConfig_crudAllowSSM(rName, ssmParameterName), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckCRUDDestroyResult(ctx, resourceName, ssmParameterName, destroyJSON, t), + ), + }, + }, + }) +} + +func TestAccLambdaInvocation_lifecycle_scopeCreateOnlyToCRUD(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation_crud" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + ssmParameterName 
:= fmt.Sprintf("/tf-test/CRUD/%s", rName) + + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := `{"key1":"value1","key2":"value2"}` + resultJSONCRUD := fmt.Sprintf(`{"key1":"value1","key2":"value2","tf":{"action":"update", "prev_input": %s}}`, inputJSON) + + extraArgs := `lifecycle_scope = "CRUD"` + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_crudAllowSSM(rName, ssmParameterName), + testAccInvocationConfig_invocation(inputJSON, ""), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON), + ), + }, + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_crudAllowSSM(rName, ssmParameterName), + testAccInvocationConfig_invocation(inputJSON, extraArgs), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSONCRUD), + ), + }, + }, + }) +} + +func TestAccLambdaInvocation_terraformKey(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_invocation.test" + fName := "lambda_invocation_crud" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + inputJSON := `{"key1":"value1","key2":"value2"}` + resultJSON := `{"key1":"value1","key2":"value2","custom_key":{"action":"create", "prev_input": null}}` + + terraformKey := `terraform_key = "custom_key"` + crudLifecycle := `lifecycle_scope = "CRUD"` + extraArgs := terraformKey + "\n" + crudLifecycle + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: acctest.ConfigCompose( + testAccInvocationConfig_function(fName, rName, ""), + testAccInvocationConfig_invocation(inputJSON, extraArgs), + ), + Check: resource.ComposeTestCheckFunc( + testAccCheckInvocationResult(resourceName, resultJSON), + ), + }, + }, + }) +} + +// testAccCheckCRUDDestroyResult verifies that, when the CRUD lifecycle scope is active, a destroyed resource +// triggers the lambda. +// +// Because a destroy implies the resource will be removed from the state, we need another way to check +// how the lambda was invoked. The JSON used to invoke the lambda is stored in an SSM Parameter. +// We will read it out, compare it with the expected result, and clean up the SSM parameter. +func testAccCheckCRUDDestroyResult(ctx context.Context, name, ssmParameterName, expectedResult string, t *testing.T) resource.TestCheckFunc { + return func(s *terraform.State) error { + _, ok := s.RootModule().Resources[name] + if ok { + return fmt.Errorf("Still found resource in state: %s", name) + } + conn := acctest.ProviderMeta(t).SSMConn(ctx) + res, err := conn.GetParameterWithContext(ctx, &ssm.GetParameterInput{ + Name: aws.String(ssmParameterName), + WithDecryption: aws.Bool(true), + }) + + if cleanupErr := removeSSMParameter(ctx, conn, ssmParameterName); cleanupErr != nil { + return fmt.Errorf("Could not clean up SSM Parameter %s: %w", ssmParameterName, cleanupErr) + } + + if err != nil { + return fmt.Errorf("Could not get SSM Parameter %s: %w", ssmParameterName, err) + } + + if !verify.JSONStringsEqual(*res.Parameter.Value, expectedResult) { + return fmt.Errorf("%s: input for destroy expected %s, got %s", name, expectedResult, *res.Parameter.Value) + } + + return nil + } +} + +func removeSSMParameter(ctx context.Context, conn *ssm.SSM, name string) error { + _, err := conn.DeleteParameterWithContext(ctx, &ssm.DeleteParameterInput{ + Name: aws.String(name), + }) + return err +} + +func
testAccInvocationConfig_base(roleName string) string { return fmt.Sprintf(` data "aws_partition" "current" {} @@ -144,58 +427,71 @@ resource "aws_iam_role_policy_attachment" "test" { `, roleName) } -func testAccInvocationConfig_basic(rName, testData string) string { +func testAccInvocationConfig_crudAllowSSM(rName, ssmParameterName string) string { + return fmt.Sprintf(` +resource "aws_iam_policy" "test" { + name = %[1]q + + # Terraform's "jsonencode" function converts a + # Terraform expression result to valid JSON syntax. + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = [ + "ssm:PutParameter", + ] + Effect = "Allow" + Resource = "arn:${data.aws_partition.current.partition}:ssm:*:*:parameter%[2]s" + }, + ] + }) +} + +resource "aws_iam_role_policy_attachment" "test_ssm" { + policy_arn = aws_iam_policy.test.arn + role = aws_iam_role.test.name +} +`, rName, ssmParameterName) +} + +func testAccInvocationConfig_function(fName, rName, testData string) string { return acctest.ConfigCompose( - testAccConfigInvocation_base(rName), + testAccInvocationConfig_base(rName), fmt.Sprintf(` resource "aws_lambda_function" "test" { depends_on = [aws_iam_role_policy_attachment.test] - filename = "test-fixtures/lambda_invocation.zip" - function_name = %[1]q + filename = "test-fixtures/%[1]s.zip" + function_name = %[2]q role = aws_iam_role.test.arn - handler = "lambda_invocation.handler" + handler = "%[1]s.handler" runtime = "nodejs14.x" environment { variables = { - TEST_DATA = %[2]q + TEST_DATA = %[3]q } } } +`, fName, rName, testData)) +} +func testAccInvocationConfig_invocation(inputJSON, extraArgs string) string { + return fmt.Sprintf(` resource "aws_lambda_invocation" "test" { function_name = aws_lambda_function.test.function_name - input = jsonencode({ - key1 = "value1" - key2 = "value2" - }) + input = %[1]s + %[2]s } -`, rName, testData)) +`, strconv.Quote(inputJSON), extraArgs) } func testAccInvocationConfig_qualifier(rName, testData string) 
string { return acctest.ConfigCompose( - testAccConfigInvocation_base(rName), - fmt.Sprintf(` -resource "aws_lambda_function" "test" { - depends_on = [aws_iam_role_policy_attachment.test] - - filename = "test-fixtures/lambda_invocation.zip" - function_name = %[1]q - role = aws_iam_role.test.arn - handler = "lambda_invocation.handler" - runtime = "nodejs14.x" - publish = true - - environment { - variables = { - TEST_DATA = %[2]q - } - } -} - + testAccInvocationConfig_function("lambda_invocation", rName, testData), + ` resource "aws_lambda_invocation" "test" { function_name = aws_lambda_function.test.function_name qualifier = aws_lambda_function.test.version @@ -205,30 +501,13 @@ resource "aws_lambda_invocation" "test" { key2 = "value2" }) } -`, rName, testData)) +`) } func testAccInvocationConfig_complex(rName, testData string) string { return acctest.ConfigCompose( - testAccConfigInvocation_base(rName), - fmt.Sprintf(` -resource "aws_lambda_function" "test" { - depends_on = [aws_iam_role_policy_attachment.test] - - filename = "test-fixtures/lambda_invocation.zip" - function_name = %[1]q - role = aws_iam_role.test.arn - handler = "lambda_invocation.handler" - runtime = "nodejs14.x" - publish = true - - environment { - variables = { - TEST_DATA = %[2]q - } - } -} - + testAccInvocationConfig_function("lambda_invocation", rName, testData), + ` resource "aws_lambda_invocation" "test" { function_name = aws_lambda_function.test.function_name @@ -244,30 +523,13 @@ resource "aws_lambda_invocation" "test" { } }) } -`, rName, testData)) +`) } func testAccInvocationConfig_triggers(rName, testData string) string { return acctest.ConfigCompose( - testAccConfigInvocation_base(rName), - fmt.Sprintf(` -resource "aws_lambda_function" "test" { - depends_on = [aws_iam_role_policy_attachment.test] - - filename = "test-fixtures/lambda_invocation.zip" - function_name = %[1]q - role = aws_iam_role.test.arn - handler = "lambda_invocation.handler" - runtime = "nodejs14.x" - publish = true - 
- environment { - variables = { - TEST_DATA = %[2]q - } - } -} - + testAccInvocationConfig_function("lambda_invocation", rName, testData), + ` resource "aws_lambda_invocation" "test" { function_name = aws_lambda_function.test.function_name @@ -289,5 +551,5 @@ resource "aws_lambda_invocation" "test" { } }) } -`, rName, testData)) +`) } diff --git a/internal/service/lambda/layer_version.go b/internal/service/lambda/layer_version.go index 87077b91ad5..d917e091163 100644 --- a/internal/service/lambda/layer_version.go +++ b/internal/service/lambda/layer_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -140,7 +143,7 @@ func ResourceLayerVersion() *schema.Resource { func resourceLayerVersionPublish(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) layerName := d.Get("layer_name").(string) filename, hasFilename := d.GetOk("filename") @@ -203,7 +206,7 @@ func resourceLayerVersionPublish(ctx context.Context, d *schema.ResourceData, me func resourceLayerVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) layerName, version, err := LayerVersionParseID(d.Id()) if err != nil { @@ -276,7 +279,7 @@ func resourceLayerVersionDelete(ctx context.Context, d *schema.ResourceData, met return diags } - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) version, err := strconv.ParseInt(d.Get("version").(string), 10, 64) if err != nil { diff --git a/internal/service/lambda/layer_version_data_source.go b/internal/service/lambda/layer_version_data_source.go index f54ea72d842..cb57b9ae2ca 100644 --- a/internal/service/lambda/layer_version_data_source.go +++ 
b/internal/service/lambda/layer_version_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -98,7 +101,7 @@ func DataSourceLayerVersion() *schema.Resource { func dataSourceLayerVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) layerName := d.Get("layer_name").(string) var version int64 diff --git a/internal/service/lambda/layer_version_data_source_test.go b/internal/service/lambda/layer_version_data_source_test.go index c8730c2a5bd..b0569072149 100644 --- a/internal/service/lambda/layer_version_data_source_test.go +++ b/internal/service/lambda/layer_version_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( diff --git a/internal/service/lambda/layer_version_permission.go b/internal/service/lambda/layer_version_permission.go index aee018edca2..5c697fdab93 100644 --- a/internal/service/lambda/layer_version_permission.go +++ b/internal/service/lambda/layer_version_permission.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -72,6 +75,12 @@ func ResourceLayerVersionPermission() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "skip_destroy": { + Type: schema.TypeBool, + Default: false, + ForceNew: true, + Optional: true, + }, "policy": { Type: schema.TypeString, Computed: true, @@ -82,7 +91,7 @@ func resourceLayerVersionPermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) layerName := d.Get("layer_name").(string) versionNumber := d.Get("version_number").(int) @@ -111,7 +120,7 @@ func resourceLayerVersionPermissionCreate(ctx context.Context, d *schema.Resourc func resourceLayerVersionPermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) layerName, versionNumber, err := ResourceLayerVersionPermissionParseId(d.Id()) if err != nil { @@ -197,7 +206,12 @@ func resourceLayerVersionPermissionRead(ctx context.Context, d *schema.ResourceD func resourceLayerVersionPermissionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + if v, ok := d.GetOk("skip_destroy"); ok && v.(bool) { + log.Printf("[DEBUG] Retaining Lambda Layer Version Permission %q", d.Id()) + return diags + } + + conn := meta.(*conns.AWSClient).LambdaConn(ctx) layerName, versionNumber, err := ResourceLayerVersionPermissionParseId(d.Id()) if err != nil { diff --git a/internal/service/lambda/layer_version_permission_test.go b/internal/service/lambda/layer_version_permission_test.go index c7d4f1242d6..17f67aba356 100644 ---
a/internal/service/lambda/layer_version_permission_test.go +++ b/internal/service/lambda/layer_version_permission_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -38,9 +41,10 @@ func TestAccLambdaLayerVersionPermission_basic_byARN(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, }, }, }) @@ -68,9 +72,10 @@ func TestAccLambdaLayerVersionPermission_basic_byName(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, }, }, }) @@ -99,9 +104,10 @@ func TestAccLambdaLayerVersionPermission_org(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, }, }, }) @@ -129,9 +135,10 @@ func TestAccLambdaLayerVersionPermission_account(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, }, }, }) @@ -160,6 +167,36 @@ func TestAccLambdaLayerVersionPermission_disappears(t *testing.T) { }) } +func TestAccLambdaLayerVersionPermission_skipDestroy(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_lambda_layer_version_permission.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: 
acctest.ErrorCheck(t, lambda.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: nil, // this purposely leaves dangling resources, since skip_destroy = true + Steps: []resource.TestStep{ + { + Config: testAccLayerVersionPermissionConfig_skipDestroy(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckLayerVersionPermissionExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", "true"), + ), + }, + { + Config: testAccLayerVersionPermissionConfig_skipDestroy(rName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckLayerVersionPermissionExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", "true"), + ), + }, + }, + }) +} + // Creating Lambda layer and Lambda layer permissions func testAccLayerVersionPermissionConfig_basicARN(layerName string) string { @@ -233,6 +270,24 @@ resource "aws_lambda_layer_version_permission" "test" { `, layerName) } +func testAccLayerVersionPermissionConfig_skipDestroy(layerName string) string { + return fmt.Sprintf(` +resource "aws_lambda_layer_version" "test" { + filename = "test-fixtures/lambdatest.zip" + layer_name = %[1]q +} + +resource "aws_lambda_layer_version_permission" "test" { + layer_name = aws_lambda_layer_version.test.layer_name + version_number = aws_lambda_layer_version.test.version + action = "lambda:GetLayerVersion" + statement_id = "xaccount" + principal = "*" + skip_destroy = true +} +`, layerName) +} + func testAccCheckLayerVersionPermissionExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -249,7 +304,7 @@ func testAccCheckLayerVersionPermissionExists(ctx context.Context, n string) res return fmt.Errorf("error parsing lambda layer ID: %w", err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) _, err = 
conn.GetLayerVersionPolicyWithContext(ctx, &lambda.GetLayerVersionPolicyInput{ LayerName: aws.String(layerName), @@ -262,7 +317,7 @@ func testAccCheckLayerVersionPermissionExists(ctx context.Context, n string) res func testAccCheckLayerVersionPermissionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_layer_version_permission" { diff --git a/internal/service/lambda/layer_version_test.go b/internal/service/lambda/layer_version_test.go index 644d04ba264..8a9c3b0216b 100644 --- a/internal/service/lambda/layer_version_test.go +++ b/internal/service/lambda/layer_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -286,7 +289,7 @@ func TestAccLambdaLayerVersion_skipDestroy(t *testing.T) { func testAccCheckLayerVersionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_layer_version" { @@ -336,7 +339,7 @@ func testAccCheckLayerVersionExists(ctx context.Context, res, layerName string) return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) _, err = conn.GetLayerVersionWithContext(ctx, &lambda.GetLayerVersionInput{ LayerName: aws.String(layerName), VersionNumber: aws.Int64(int64(version)), diff --git a/internal/service/lambda/permission.go b/internal/service/lambda/permission.go index 93b57a35afb..f9b8b2e45cb 100644 --- a/internal/service/lambda/permission.go +++ b/internal/service/lambda/permission.go @@ -1,3 
+1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( @@ -110,7 +113,7 @@ func ResourcePermission() *schema.Resource { func resourcePermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) statementID := create.Name(d.Get("statement_id").(string), d.Get("statement_id_prefix").(string)) @@ -171,7 +174,7 @@ func resourcePermissionCreate(ctx context.Context, d *schema.ResourceData, meta func resourcePermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, propagationTimeout, @@ -240,7 +243,7 @@ func resourcePermissionRead(ctx context.Context, d *schema.ResourceData, meta in func resourcePermissionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) @@ -391,7 +394,7 @@ func resourcePermissionImport(ctx context.Context, d *schema.ResourceData, meta statementId := idParts[1] log.Printf("[DEBUG] Importing Lambda Permission %s for function name %s", statementId, functionName) - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) getFunctionOutput, err := conn.GetFunctionWithContext(ctx, input) if err != nil { return nil, err diff --git a/internal/service/lambda/permission_test.go b/internal/service/lambda/permission_test.go index ae1246131d3..523b7323961 100644 --- 
a/internal/service/lambda/permission_test.go +++ b/internal/service/lambda/permission_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -715,7 +718,7 @@ func testAccCheckPermissionExists(ctx context.Context, n string, v *tflambda.Pol return fmt.Errorf("No Lambda Permission ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) output, err := tflambda.FindPolicyStatementByTwoPartKey(ctx, conn, rs.Primary.Attributes["function_name"], rs.Primary.ID, rs.Primary.Attributes["qualifier"]) @@ -731,7 +734,7 @@ func testAccCheckPermissionExists(ctx context.Context, n string, v *tflambda.Pol func testAccCheckPermissionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_permission" { diff --git a/internal/service/lambda/policy_model.go b/internal/service/lambda/policy_model.go index c7155f65d1c..05ea91b59db 100644 --- a/internal/service/lambda/policy_model.go +++ b/internal/service/lambda/policy_model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( diff --git a/internal/service/lambda/provisioned_concurrency_config.go b/internal/service/lambda/provisioned_concurrency_config.go index 15776d8a38c..0b7a7fee453 100644 --- a/internal/service/lambda/provisioned_concurrency_config.go +++ b/internal/service/lambda/provisioned_concurrency_config.go @@ -1,10 +1,12 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lambda import ( "context" "fmt" "log" - "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -16,6 +18,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/flex" ) // @SDKResource("aws_lambda_provisioned_concurrency_config") @@ -25,14 +28,25 @@ func ResourceProvisionedConcurrencyConfig() *schema.Resource { ReadWithoutTimeout: resourceProvisionedConcurrencyConfigRead, UpdateWithoutTimeout: resourceProvisionedConcurrencyConfigUpdate, DeleteWithoutTimeout: resourceProvisionedConcurrencyConfigDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, + Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(15 * time.Minute), Update: schema.DefaultTimeout(15 * time.Minute), }, + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceProvisionedConcurrencyConfigV0().CoreConfigSchema().ImpliedType(), + Upgrade: provisionedConcurrencyConfigStateUpgradeV0, + Version: 0, + }, + }, + Schema: map[string]*schema.Schema{ "function_name": { Type: schema.TypeString, @@ -51,13 +65,22 @@ func ResourceProvisionedConcurrencyConfig() *schema.Resource { ForceNew: true, ValidateFunc: validation.NoZeroValues, }, + "skip_destroy": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, }, } } +const ( + ProvisionedConcurrencyIDPartCount = 2 +) + func resourceProvisionedConcurrencyConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + conn := meta.(*conns.AWSClient).LambdaConn(ctx) functionName := d.Get("function_name").(string) qualifier := d.Get("qualifier").(string) @@ -70,10 +93,15 @@ func 
resourceProvisionedConcurrencyConfigCreate(ctx context.Context, d *schema.R _, err := conn.PutProvisionedConcurrencyConfigWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "putting Lambda Provisioned Concurrency Config (%s:%s): %s", functionName, qualifier, err) + return sdkdiag.AppendErrorf(diags, "putting Lambda Provisioned Concurrency Config (%s,%s): %s", functionName, qualifier, err) } - d.SetId(fmt.Sprintf("%s:%s", functionName, qualifier)) + parts := []string{functionName, qualifier} + id, err := flex.FlattenResourceId(parts, ProvisionedConcurrencyIDPartCount, false) + if err != nil { + return sdkdiag.AppendErrorf(diags, "setting Lambda Provisioned Concurrency Config ID (%s,%s): %s", functionName, qualifier, err) + } + d.SetId(id) if err := waitForProvisionedConcurrencyConfigStatusReady(ctx, conn, functionName, qualifier, d.Timeout(schema.TimeoutCreate)); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for Lambda Provisioned Concurrency Config (%s) to be ready: %s", d.Id(), err) @@ -84,13 +112,14 @@ func resourceProvisionedConcurrencyConfigCreate(ctx context.Context, d *schema.R func resourceProvisionedConcurrencyConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() - - functionName, qualifier, err := ProvisionedConcurrencyConfigParseID(d.Id()) + conn := meta.(*conns.AWSClient).LambdaConn(ctx) + parts, err := flex.ExpandResourceId(d.Id(), ProvisionedConcurrencyIDPartCount, false) if err != nil { return sdkdiag.AppendErrorf(diags, "reading Lambda Provisioned Concurrency Config (%s): %s", d.Id(), err) } + functionName := parts[0] + qualifier := parts[1] input := &lambda.GetProvisionedConcurrencyConfigInput{ FunctionName: aws.String(functionName), @@ -118,13 +147,14 @@ func resourceProvisionedConcurrencyConfigRead(ctx context.Context, d *schema.Res func resourceProvisionedConcurrencyConfigUpdate(ctx context.Context, 
d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() - - functionName, qualifier, err := ProvisionedConcurrencyConfigParseID(d.Id()) + conn := meta.(*conns.AWSClient).LambdaConn(ctx) + parts, err := flex.ExpandResourceId(d.Id(), ProvisionedConcurrencyIDPartCount, false) if err != nil { return sdkdiag.AppendErrorf(diags, "updating Lambda Provisioned Concurrency Config (%s): %s", d.Id(), err) } + functionName := parts[0] + qualifier := parts[1] input := &lambda.PutProvisionedConcurrencyConfigInput{ FunctionName: aws.String(functionName), @@ -147,17 +177,21 @@ func resourceProvisionedConcurrencyConfigUpdate(ctx context.Context, d *schema.R func resourceProvisionedConcurrencyConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LambdaConn() + if v, ok := d.GetOk("skip_destroy"); ok && v.(bool) { + log.Printf("[DEBUG] Retaining Lambda Provisioned Concurrency Config %q", d.Id()) + return diags + } - functionName, qualifier, err := ProvisionedConcurrencyConfigParseID(d.Id()) + conn := meta.(*conns.AWSClient).LambdaConn(ctx) + parts, err := flex.ExpandResourceId(d.Id(), ProvisionedConcurrencyIDPartCount, false) if err != nil { return sdkdiag.AppendErrorf(diags, "deleting Lambda Provisioned Concurrency Config (%s): %s", d.Id(), err) } input := &lambda.DeleteProvisionedConcurrencyConfigInput{ - FunctionName: aws.String(functionName), - Qualifier: aws.String(qualifier), + FunctionName: aws.String(parts[0]), + Qualifier: aws.String(parts[1]), } _, err = conn.DeleteProvisionedConcurrencyConfigWithContext(ctx, input) @@ -173,16 +207,6 @@ func resourceProvisionedConcurrencyConfigDelete(ctx context.Context, d *schema.R return diags } -func ProvisionedConcurrencyConfigParseID(id string) (string, string, error) { - parts := strings.SplitN(id, ":", 2) - - if len(parts) != 2 || parts[0] == "" || parts[1] 
== "" { - return "", "", fmt.Errorf("unexpected format of ID (%s), expected FUNCTION_NAME:QUALIFIER", id) - } - - return parts[0], parts[1], nil -} - func refreshProvisionedConcurrencyConfigStatus(ctx context.Context, conn *lambda.Lambda, functionName, qualifier string) retry.StateRefreshFunc { return func() (interface{}, string, error) { input := &lambda.GetProvisionedConcurrencyConfigInput{ diff --git a/internal/service/lambda/provisioned_concurrency_config_migrate.go b/internal/service/lambda/provisioned_concurrency_config_migrate.go new file mode 100644 index 00000000000..672ebe8d663 --- /dev/null +++ b/internal/service/lambda/provisioned_concurrency_config_migrate.go @@ -0,0 +1,62 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package lambda + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/flex" +) + +func resourceProvisionedConcurrencyConfigV0() *schema.Resource { + // Resource with v0 schema (provider v5.3.0 and below) + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "function_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + "provisioned_concurrent_executions": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "qualifier": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + "skip_destroy": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + }, + } +} + +func provisionedConcurrencyConfigStateUpgradeV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + if rawState == nil { + rawState = map[string]interface{}{} + } + + // Convert id separator from ":" to "," + parts := []string{ + 
rawState["function_name"].(string), + rawState["qualifier"].(string), + } + + id, err := flex.FlattenResourceId(parts, ProvisionedConcurrencyIDPartCount, false) + if err != nil { + return rawState, err + } + rawState["id"] = id + + return rawState, nil +} diff --git a/internal/service/lambda/provisioned_concurrency_config_test.go b/internal/service/lambda/provisioned_concurrency_config_test.go index 75986eac850..00ed4cf14b6 100644 --- a/internal/service/lambda/provisioned_concurrency_config_test.go +++ b/internal/service/lambda/provisioned_concurrency_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda_test import ( @@ -14,6 +17,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" tflambda "github.com/hashicorp/terraform-provider-aws/internal/service/lambda" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,18 +35,20 @@ func TestAccLambdaProvisionedConcurrencyConfig_basic(t *testing.T) { CheckDestroy: testAccCheckProvisionedConcurrencyConfigDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccProvisionedConcurrencyConfigConfig_qualifierFunctionVersion(rName), + Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckProvisionedConcurrencyExistsConfig(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "function_name"), resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), resource.TestCheckResourceAttrPair(resourceName, "qualifier", lambdaFunctionResourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", 
"false"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, }, }, }) @@ -65,7 +71,7 @@ func TestAccLambdaProvisionedConcurrencyConfig_Disappears_lambdaFunction(t *test Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 1), Check: resource.ComposeTestCheckFunc( testAccCheckFunctionExists(ctx, lambdaFunctionResourceName, &function), - testAccCheckProvisionedConcurrencyExistsConfig(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), testAccCheckFunctionDisappears(ctx, &function), ), ExpectNonEmptyPlan: true, @@ -88,7 +94,7 @@ func TestAccLambdaProvisionedConcurrencyConfig_Disappears_lambdaProvisionedConcu { Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckProvisionedConcurrencyExistsConfig(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), testAccCheckProvisionedConcurrencyDisappearsConfig(ctx, resourceName), ), ExpectNonEmptyPlan: true, @@ -115,21 +121,22 @@ func TestAccLambdaProvisionedConcurrencyConfig_provisionedConcurrentExecutions(t { Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 1), Check: resource.ComposeTestCheckFunc( - testAccCheckProvisionedConcurrencyExistsConfig(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "function_name", rName), resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), resource.TestCheckResourceAttr(resourceName, "qualifier", "1"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: 
[]string{"skip_destroy"}, }, { Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 2), Check: resource.ComposeTestCheckFunc( - testAccCheckProvisionedConcurrencyExistsConfig(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "function_name", rName), resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "2"), resource.TestCheckResourceAttr(resourceName, "qualifier", "1"), @@ -139,6 +146,50 @@ func TestAccLambdaProvisionedConcurrencyConfig_provisionedConcurrentExecutions(t }) } +func TestAccLambdaProvisionedConcurrencyConfig_FunctionName_arn(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_lambda_provisioned_concurrency_config.test" + lambdaFunctionResourceName := "aws_lambda_function.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.LambdaEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckProvisionedConcurrencyConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProvisionedConcurrencyConfigConfig_FunctionName_arn(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), + resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), + resource.TestCheckResourceAttr(resourceName, "qualifier", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, + }, + { + Config: 
testAccProvisionedConcurrencyConfigConfig_FunctionName_arn(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), + resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "2"), + resource.TestCheckResourceAttr(resourceName, "qualifier", "1"), + ), + }, + }, + }) +} + func TestAccLambdaProvisionedConcurrencyConfig_Qualifier_aliasName(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -154,14 +205,108 @@ func TestAccLambdaProvisionedConcurrencyConfig_Qualifier_aliasName(t *testing.T) { Config: testAccProvisionedConcurrencyConfigConfig_qualifierAliasName(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckProvisionedConcurrencyExistsConfig(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), resource.TestCheckResourceAttrPair(resourceName, "qualifier", lambdaAliasResourceName, "name"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"skip_destroy"}, + }, + }, + }) +} + +func TestAccLambdaProvisionedConcurrencyConfig_skipDestroy(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + filename1 := "test-fixtures/lambdapinpoint.zip" + filename2 := "test-fixtures/lambdapinpoint_modified.zip" + version1 := "1" + version2 := "2" + lambdaFunctionResourceName := "aws_lambda_function.test" + resourceName := "aws_lambda_provisioned_concurrency_config.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.LambdaEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: 
testAccCheckProvisionedConcurrencyConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProvisionedConcurrencyConfigConfig_skipDestroy(rName, filename1, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), + resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "function_name"), + resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), + resource.TestCheckResourceAttrPair(resourceName, "qualifier", lambdaFunctionResourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", "true"), + ), + }, + { + Config: testAccProvisionedConcurrencyConfigConfig_skipDestroy(rName, filename2, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), + testAccCheckProvisionedConcurrencyConfigExistsByName(ctx, rName, version1), // verify config on previous version still exists + resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "function_name"), + resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), + resource.TestCheckResourceAttrPair(resourceName, "qualifier", lambdaFunctionResourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", "true"), + ), + }, + { + Config: testAccProvisionedConcurrencyConfigConfigBase_withFilename(rName, filename2), // remove the provisioned concurrency config completely + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExistsByName(ctx, rName, version1), + testAccCheckProvisionedConcurrencyConfigExistsByName(ctx, rName, version2), + ), + }, + }, + }) +} + +func TestAccLambdaProvisionedConcurrencyConfig_idMigration530(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + lambdaFunctionResourceName := "aws_lambda_function.test" + 
resourceName := "aws_lambda_provisioned_concurrency_config.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.LambdaEndpointID), + CheckDestroy: testAccCheckProvisionedConcurrencyConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + // At v5.3.0 the resource's schema is v0 and id is colon-delimited + ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "5.3.0", + }, + }, + Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExists_v0Schema(ctx, resourceName), + resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "function_name"), + resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), + resource.TestCheckResourceAttrPair(resourceName, "qualifier", lambdaFunctionResourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", "false"), + resource.TestCheckResourceAttr(resourceName, "id", fmt.Sprintf("%s:1", rName)), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Config: testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName, 1), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedConcurrencyConfigExists(ctx, resourceName), + resource.TestCheckResourceAttrPair(resourceName, "function_name", lambdaFunctionResourceName, "function_name"), + resource.TestCheckResourceAttr(resourceName, "provisioned_concurrent_executions", "1"), + resource.TestCheckResourceAttrPair(resourceName, "qualifier", lambdaFunctionResourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "skip_destroy", "false"), + resource.TestCheckResourceAttr(resourceName, "id", fmt.Sprintf("%s,1", rName)), + ), }, }, }) @@ -169,22 +314,21 @@ func 
TestAccLambdaProvisionedConcurrencyConfig_Qualifier_aliasName(t *testing.T) func testAccCheckProvisionedConcurrencyConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lambda_provisioned_concurrency_config" { continue } - functionName, qualifier, err := tflambda.ProvisionedConcurrencyConfigParseID(rs.Primary.ID) - + parts, err := flex.ExpandResourceId(rs.Primary.ID, tflambda.ProvisionedConcurrencyIDPartCount, false) if err != nil { return err } input := &lambda.GetProvisionedConcurrencyConfigInput{ - FunctionName: aws.String(functionName), - Qualifier: aws.String(qualifier), + FunctionName: aws.String(parts[0]), + Qualifier: aws.String(parts[1]), } output, err := conn.GetProvisionedConcurrencyConfig(ctx, input) @@ -220,17 +364,16 @@ func testAccCheckProvisionedConcurrencyDisappearsConfig(ctx context.Context, res return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() - - functionName, qualifier, err := tflambda.ProvisionedConcurrencyConfigParseID(rs.Primary.ID) + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) + parts, err := flex.ExpandResourceId(rs.Primary.ID, tflambda.ProvisionedConcurrencyIDPartCount, false) if err != nil { return err } input := &lambda.DeleteProvisionedConcurrencyConfigInput{ - FunctionName: aws.String(functionName), - Qualifier: aws.String(qualifier), + FunctionName: aws.String(parts[0]), + Qualifier: aws.String(parts[1]), } _, err = conn.DeleteProvisionedConcurrencyConfig(ctx, input) @@ -239,7 +382,52 @@ func testAccCheckProvisionedConcurrencyDisappearsConfig(ctx context.Context, res } } -func testAccCheckProvisionedConcurrencyExistsConfig(ctx context.Context, resourceName string) 
resource.TestCheckFunc { +// testAccCheckProvisionedConcurrencyConfigExists_v0Schema is a variant of the existence +// check function for v0 schemas. +func testAccCheckProvisionedConcurrencyConfigExists_v0Schema(ctx context.Context, resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Resource not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("Resource (%s) ID not set", resourceName) + } + + // flex.ExpandResourceId will fail for unmigrated (v0) schemas. For checking existence + // in the migration test, read the required attributes directly instead. + functionName, ok := rs.Primary.Attributes["function_name"] + if !ok { + return fmt.Errorf("Resource (%s) function_name attribute not set", resourceName) + } + qualifier, ok := rs.Primary.Attributes["qualifier"] + if !ok { + return fmt.Errorf("Resource (%s) qualifier attribute not set", resourceName) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) + + input := &lambda.GetProvisionedConcurrencyConfigInput{ + FunctionName: aws.String(functionName), + Qualifier: aws.String(qualifier), + } + + output, err := conn.GetProvisionedConcurrencyConfig(ctx, input) + + if err != nil { + return err + } + + if got, want := output.Status, types.ProvisionedConcurrencyStatusEnumReady; got != want { + return fmt.Errorf("Lambda Provisioned Concurrency Config (%s) expected status (%s), got: %s", rs.Primary.ID, want, got) + } + + return nil + } +} + +func testAccCheckProvisionedConcurrencyConfigExists(ctx context.Context, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -250,14 +438,40 @@ func testAccCheckProvisionedConcurrencyExistsConfig(ctx context.Context, resourc return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := 
acctest.Provider.Meta().(*conns.AWSClient).LambdaClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) + + parts, err := flex.ExpandResourceId(rs.Primary.ID, tflambda.ProvisionedConcurrencyIDPartCount, false) + if err != nil { + return err + } + + input := &lambda.GetProvisionedConcurrencyConfigInput{ + FunctionName: aws.String(parts[0]), + Qualifier: aws.String(parts[1]), + } - functionName, qualifier, err := tflambda.ProvisionedConcurrencyConfigParseID(rs.Primary.ID) + output, err := conn.GetProvisionedConcurrencyConfig(ctx, input) if err != nil { return err } + if got, want := output.Status, types.ProvisionedConcurrencyStatusEnumReady; got != want { + return fmt.Errorf("Lambda Provisioned Concurrency Config (%s) expected status (%s), got: %s", rs.Primary.ID, want, got) + } + + return nil + } +} + +// testAccCheckProvisionedConcurrencyConfigExistsByName is a helper to verify a +// provisioned concurrency setting is in place on a specific function version. +// This variant of the test check function accepts function name and qualifier arguments +// directly to support skip_destroy checks where the provisioned concurrency configuration +// resource is removed from state, but should still exist remotely. 
+func testAccCheckProvisionedConcurrencyConfigExistsByName(ctx context.Context, functionName, qualifier string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).LambdaClient(ctx) input := &lambda.GetProvisionedConcurrencyConfigInput{ FunctionName: aws.String(functionName), Qualifier: aws.String(qualifier), @@ -270,14 +484,18 @@ func testAccCheckProvisionedConcurrencyExistsConfig(ctx context.Context, resourc } if got, want := output.Status, types.ProvisionedConcurrencyStatusEnumReady; got != want { - return fmt.Errorf("Lambda Provisioned Concurrency Config (%s) expected status (%s), got: %s", rs.Primary.ID, want, got) + return fmt.Errorf("Lambda Provisioned Concurrency Config (%s) expected status (%s), got: %s", functionName, want, got) } return nil } } -func testAccProvisionedConcurrencyConfig_base(rName string) string { +func testAccProvisionedConcurrencyConfigConfigBase(rName string) string { + return testAccProvisionedConcurrencyConfigConfigBase_withFilename(rName, "test-fixtures/lambdapinpoint.zip") +} + +func testAccProvisionedConcurrencyConfigConfigBase_withFilename(rName, filename string) string { return fmt.Sprintf(` data "aws_partition" "current" {} @@ -307,8 +525,8 @@ resource "aws_iam_role_policy_attachment" "test" { } resource "aws_lambda_function" "test" { - filename = "test-fixtures/lambdapinpoint.zip" function_name = %[1]q + filename = %[2]q role = aws_iam_role.test.arn handler = "lambdapinpoint.handler" publish = true @@ -316,21 +534,39 @@ resource "aws_lambda_function" "test" { depends_on = [aws_iam_role_policy_attachment.test] } -`, rName) +`, rName, filename) } func testAccProvisionedConcurrencyConfigConfig_concurrentExecutions(rName string, provisionedConcurrentExecutions int) string { - return testAccProvisionedConcurrencyConfig_base(rName) + fmt.Sprintf(` + return acctest.ConfigCompose( + testAccProvisionedConcurrencyConfigConfigBase(rName), + fmt.Sprintf(` resource 
"aws_lambda_provisioned_concurrency_config" "test" { function_name = aws_lambda_function.test.function_name provisioned_concurrent_executions = %[1]d qualifier = aws_lambda_function.test.version } -`, provisionedConcurrentExecutions) +`, provisionedConcurrentExecutions), + ) +} + +func testAccProvisionedConcurrencyConfigConfig_FunctionName_arn(rName string, provisionedConcurrentExecutions int) string { + return acctest.ConfigCompose( + testAccProvisionedConcurrencyConfigConfigBase(rName), + fmt.Sprintf(` +resource "aws_lambda_provisioned_concurrency_config" "test" { + function_name = aws_lambda_function.test.arn + provisioned_concurrent_executions = %[1]d + qualifier = aws_lambda_function.test.version +} +`, provisionedConcurrentExecutions), + ) } func testAccProvisionedConcurrencyConfigConfig_qualifierAliasName(rName string) string { - return testAccProvisionedConcurrencyConfig_base(rName) + ` + return acctest.ConfigCompose( + testAccProvisionedConcurrencyConfigConfigBase(rName), + ` resource "aws_lambda_alias" "test" { function_name = aws_lambda_function.test.function_name function_version = aws_lambda_function.test.version @@ -342,15 +578,20 @@ resource "aws_lambda_provisioned_concurrency_config" "test" { provisioned_concurrent_executions = 1 qualifier = aws_lambda_alias.test.name } -` +`, + ) } -func testAccProvisionedConcurrencyConfigConfig_qualifierFunctionVersion(rName string) string { - return testAccProvisionedConcurrencyConfig_base(rName) + ` +func testAccProvisionedConcurrencyConfigConfig_skipDestroy(rName, filename string, skipDestroy bool) string { + return acctest.ConfigCompose( + testAccProvisionedConcurrencyConfigConfigBase_withFilename(rName, filename), + fmt.Sprintf(` resource "aws_lambda_provisioned_concurrency_config" "test" { function_name = aws_lambda_function.test.function_name provisioned_concurrent_executions = 1 qualifier = aws_lambda_function.test.version + + skip_destroy = %[1]t } -` +`, skipDestroy)) } diff --git 
a/internal/service/lambda/service_package_gen.go b/internal/service/lambda/service_package_gen.go index 3e8ec8ec403..c3ea4260980 100644 --- a/internal/service/lambda/service_package_gen.go +++ b/internal/service/lambda/service_package_gen.go @@ -5,6 +5,12 @@ package lambda import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + lambda_sdkv2 "github.com/aws/aws-sdk-go-v2/service/lambda" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + lambda_sdkv1 "github.com/aws/aws-sdk-go/service/lambda" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -109,4 +115,24 @@ func (p *servicePackage) ServicePackageName() string { return names.Lambda } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*lambda_sdkv1.Lambda, error) { + sess := config["session"].(*session_sdkv1.Session) + + return lambda_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*lambda_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return lambda_sdkv2.NewFromConfig(cfg, func(o *lambda_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = lambda_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/lambda/sweep.go b/internal/service/lambda/sweep.go index ed584f69e14..d4bb9ad0cdd 100644 --- a/internal/service/lambda/sweep.go +++ b/internal/service/lambda/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/lambda" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -33,11 +35,11 @@ func init() { func sweepFunctions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).LambdaConn() + conn := client.LambdaConn(ctx) input := &lambda.ListFunctionsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -67,7 +69,7 @@ func sweepFunctions(region string) error { return fmt.Errorf("error listing Lambda Functions (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Lambda Functions (%s): %w", region, err) @@ -78,11 +80,11 @@ func sweepFunctions(region string) error { func 
sweepLayerVersions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).LambdaConn() + conn := client.LambdaConn(ctx) input := &lambda.ListLayersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -137,7 +139,7 @@ func sweepLayerVersions(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Lambda Layers (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Lambda Layer Versions (%s): %w", region, err)) diff --git a/internal/service/lambda/tags_gen.go b/internal/service/lambda/tags_gen.go index 2fa11436156..5db48123832 100644 --- a/internal/service/lambda/tags_gen.go +++ b/internal/service/lambda/tags_gen.go @@ -13,6 +13,39 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) +// listTags lists lambda service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func listTags(ctx context.Context, conn *lambda.Client, identifier string) (tftags.KeyValueTags, error) { + input := &lambda.ListTagsInput{ + Resource: aws.String(identifier), + } + + output, err := conn.ListTags(ctx, input) + + if err != nil { + return tftags.New(ctx, nil), err + } + + return KeyValueTags(ctx, output.Tags), nil +} + +// ListTags lists lambda service tags and set them in Context. +// It is called from outside this package. 
+func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { + tags, err := listTags(ctx, meta.(*conns.AWSClient).LambdaClient(ctx), identifier) + + if err != nil { + return err + } + + if inContext, ok := tftags.FromContext(ctx); ok { + inContext.TagsOut = types.Some(tags) + } + + return nil +} + // map[string]string handling // Tags returns lambda service tags. @@ -20,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from lambda service tags. +// KeyValueTags creates tftags.KeyValueTags from lambda service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns lambda service tags from Context. +// getTagsIn returns lambda service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -37,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets lambda service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets lambda service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates lambda service tags. +// updateTags updates lambda service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn *lambda.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *lambda.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -87,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *lambda.Client, identifier string, old // UpdateTags updates lambda service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).LambdaClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).LambdaClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/lambda/test-fixtures/lambda_func.js b/internal/service/lambda/test-fixtures/lambda_func.js index 556182a5ccc..8ea027b3a05 100644 --- a/internal/service/lambda/test-fixtures/lambda_func.js +++ b/internal/service/lambda/test-fixtures/lambda_func.js @@ -1,3 +1,8 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + var http = require('http') exports.handler = function(event, context) { diff --git a/internal/service/lambda/test-fixtures/lambda_func_modified.js b/internal/service/lambda/test-fixtures/lambda_func_modified.js index 9842040bb4a..6bcd2136aa5 100644 --- a/internal/service/lambda/test-fixtures/lambda_func_modified.js +++ b/internal/service/lambda/test-fixtures/lambda_func_modified.js @@ -1,3 +1,8 @@ +/** + * Copyright (c) HashiCorp, Inc. 
+ * SPDX-License-Identifier: MPL-2.0 + */ + var http = require('http') exports.handler = function(event, context) { diff --git a/internal/service/lambda/test-fixtures/lambda_invocation.js b/internal/service/lambda/test-fixtures/lambda_invocation.js index abc0191f982..ce72f62d6e6 100644 --- a/internal/service/lambda/test-fixtures/lambda_invocation.js +++ b/internal/service/lambda/test-fixtures/lambda_invocation.js @@ -1,3 +1,8 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + exports.handler = async (event) => { if (process.env.TEST_DATA) { event.key3 = process.env.TEST_DATA; diff --git a/internal/service/lambda/test-fixtures/lambda_invocation_crud.js b/internal/service/lambda/test-fixtures/lambda_invocation_crud.js new file mode 100644 index 00000000000..7032f5f02d8 --- /dev/null +++ b/internal/service/lambda/test-fixtures/lambda_invocation_crud.js @@ -0,0 +1,21 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: MPL-2.0 + */ + +const AWS = require('aws-sdk') +const ssmClient = new AWS.SSM(); + +exports.handler = async (event) => { + let tf_key = "tf"; + if (tf_key in event) { + if (event[tf_key].action == "delete" && process.env.TEST_DATA != "") { + await ssmClient.putParameter({ + Name: process.env.TEST_DATA, + Value: JSON.stringify(event), + Type: "String" + }).promise(); + } + } + return event; +} diff --git a/internal/service/lambda/test-fixtures/lambda_invocation_crud.zip b/internal/service/lambda/test-fixtures/lambda_invocation_crud.zip new file mode 100644 index 00000000000..5a73b22f535 Binary files /dev/null and b/internal/service/lambda/test-fixtures/lambda_invocation_crud.zip differ diff --git a/internal/service/lambda/test-fixtures/lambdapinpoint_modified.zip b/internal/service/lambda/test-fixtures/lambdapinpoint_modified.zip new file mode 100644 index 00000000000..d6a332586fb Binary files /dev/null and b/internal/service/lambda/test-fixtures/lambdapinpoint_modified.zip differ diff --git 
a/internal/service/lambda/validate.go b/internal/service/lambda/validate.go index 56cb7c89615..9085cbacd9f 100644 --- a/internal/service/lambda/validate.go +++ b/internal/service/lambda/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( diff --git a/internal/service/lambda/validate_test.go b/internal/service/lambda/validate_test.go index 31542c9f3c0..731143384f4 100644 --- a/internal/service/lambda/validate_test.go +++ b/internal/service/lambda/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lambda import ( diff --git a/internal/service/lexmodels/bot.go b/internal/service/lexmodels/bot.go index 81bcc9cae49..410f554f338 100644 --- a/internal/service/lexmodels/bot.go +++ b/internal/service/lexmodels/bot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -221,7 +224,7 @@ var validBotVersion = validation.All( func resourceBotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) name := d.Get("name").(string) input := &lexmodelbuildingservice.PutBotInput{ @@ -278,7 +281,7 @@ func resourceBotCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceBotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) output, err := FindBotVersionByName(ctx, conn, d.Id(), BotVersionLatest) @@ -346,7 +349,7 @@ func resourceBotRead(ctx context.Context, d *schema.ResourceData, meta interface func resourceBotUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.PutBotInput{ Checksum: aws.String(d.Get("checksum").(string)), @@ -392,7 +395,7 @@ func resourceBotUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceBotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.DeleteBotInput{ Name: aws.String(d.Id()), diff --git a/internal/service/lexmodels/bot_alias.go b/internal/service/lexmodels/bot_alias.go index b70cbbd31a8..ad141aab851 100644 --- a/internal/service/lexmodels/bot_alias.go +++ b/internal/service/lexmodels/bot_alias.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -121,7 +124,7 @@ var validBotAliasName = validation.All( func resourceBotAliasCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) botName := d.Get("bot_name").(string) botAliasName := d.Get("name").(string) @@ -174,7 +177,7 @@ func resourceBotAliasCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceBotAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) resp, err := conn.GetBotAliasWithContext(ctx, &lexmodelbuildingservice.GetBotAliasInput{ BotName: aws.String(d.Get("bot_name").(string)), @@ -215,7 +218,7 @@ func resourceBotAliasRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceBotAliasUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.PutBotAliasInput{ BotName: aws.String(d.Get("bot_name").(string)), @@ -266,7 +269,7 @@ func resourceBotAliasUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceBotAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) botName := d.Get("bot_name").(string) botAliasName := d.Get("name").(string) diff --git a/internal/service/lexmodels/bot_alias_data_source.go b/internal/service/lexmodels/bot_alias_data_source.go index 171864e910f..9f2226532ce 100644 --- a/internal/service/lexmodels/bot_alias_data_source.go +++ b/internal/service/lexmodels/bot_alias_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -60,7 +63,7 @@ func DataSourceBotAlias() *schema.Resource { func dataSourceBotAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) botName := d.Get("bot_name").(string) botAliasName := d.Get("name").(string) diff --git a/internal/service/lexmodels/bot_alias_data_source_test.go b/internal/service/lexmodels/bot_alias_data_source_test.go index 08a9e18ab0d..2758918fe00 100644 --- a/internal/service/lexmodels/bot_alias_data_source_test.go +++ b/internal/service/lexmodels/bot_alias_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( diff --git a/internal/service/lexmodels/bot_alias_test.go b/internal/service/lexmodels/bot_alias_test.go index 0e26a01afa2..5606a9ec0cf 100644 --- a/internal/service/lexmodels/bot_alias_test.go +++ b/internal/service/lexmodels/bot_alias_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( @@ -369,7 +372,7 @@ func testAccCheckBotAliasExists(ctx context.Context, rName string, output *lexmo botAliasName := rs.Primary.Attributes["name"] var err error - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) output, err = conn.GetBotAliasWithContext(ctx, &lexmodelbuildingservice.GetBotAliasInput{ BotName: aws.String(botName), @@ -388,7 +391,7 @@ func testAccCheckBotAliasExists(ctx context.Context, rName string, output *lexmo func testAccCheckBotAliasDestroy(ctx context.Context, botName, botAliasName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) _, err := conn.GetBotAliasWithContext(ctx, &lexmodelbuildingservice.GetBotAliasInput{ BotName: aws.String(botName), diff --git a/internal/service/lexmodels/bot_data_source.go b/internal/service/lexmodels/bot_data_source.go index 1cda9e19862..725ae1a841b 100644 --- a/internal/service/lexmodels/bot_data_source.go +++ b/internal/service/lexmodels/bot_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -91,7 +94,7 @@ func DataSourceBot() *schema.Resource { func dataSourceBotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) name := d.Get("name").(string) version := d.Get("version").(string) diff --git a/internal/service/lexmodels/bot_data_source_test.go b/internal/service/lexmodels/bot_data_source_test.go index 90e5161f45a..0f79ac30223 100644 --- a/internal/service/lexmodels/bot_data_source_test.go +++ b/internal/service/lexmodels/bot_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( diff --git a/internal/service/lexmodels/bot_test.go b/internal/service/lexmodels/bot_test.go index 43ab75b06aa..ccf68a65d3b 100644 --- a/internal/service/lexmodels/bot_test.go +++ b/internal/service/lexmodels/bot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( @@ -748,7 +751,7 @@ func testAccCheckBotExistsWithVersion(ctx context.Context, rName, botVersion str return fmt.Errorf("No Lex Bot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) output, err := tflexmodels.FindBotVersionByName(ctx, conn, rs.Primary.ID, botVersion) @@ -768,7 +771,7 @@ func testAccCheckBotExists(ctx context.Context, rName string, output *lexmodelbu func testAccCheckBotNotExists(ctx context.Context, botName, botVersion string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) _, err := tflexmodels.FindBotVersionByName(ctx, conn, botName, botVersion) @@ -786,7 +789,7 @@ func testAccCheckBotNotExists(ctx context.Context, botName, botVersion string) r func testAccCheckBotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lex_bot" { diff --git a/internal/service/lexmodels/enum.go b/internal/service/lexmodels/enum.go index f038374c2ef..262b3da324c 100644 --- a/internal/service/lexmodels/enum.go +++ b/internal/service/lexmodels/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels const ( diff --git a/internal/service/lexmodels/find.go b/internal/service/lexmodels/find.go index b4659271a79..ee507a01a4f 100644 --- a/internal/service/lexmodels/find.go +++ b/internal/service/lexmodels/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( diff --git a/internal/service/lexmodels/generate.go b/internal/service/lexmodels/generate.go new file mode 100644 index 00000000000..bc7c633e16c --- /dev/null +++ b/internal/service/lexmodels/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package lexmodels diff --git a/internal/service/lexmodels/intent.go b/internal/service/lexmodels/intent.go index 105e54b145e..21f1432400b 100644 --- a/internal/service/lexmodels/intent.go +++ b/internal/service/lexmodels/intent.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -286,7 +289,7 @@ func hasIntentConfigChanges(d verify.ResourceDiffer) bool { func resourceIntentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) name := d.Get("name").(string) input := &lexmodelbuildingservice.PutIntentInput{ @@ -360,7 +363,7 @@ func resourceIntentCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceIntentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) resp, err := conn.GetIntentWithContext(ctx, &lexmodelbuildingservice.GetIntentInput{ Name: aws.String(d.Id()), @@ -435,7 +438,7 @@ func resourceIntentRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceIntentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + 
conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.PutIntentInput{ Checksum: aws.String(d.Get("checksum").(string)), @@ -506,7 +509,7 @@ func resourceIntentUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceIntentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.DeleteIntentInput{ Name: aws.String(d.Id()), diff --git a/internal/service/lexmodels/intent_data_source.go b/internal/service/lexmodels/intent_data_source.go index 5c01d970663..4164dec1cc9 100644 --- a/internal/service/lexmodels/intent_data_source.go +++ b/internal/service/lexmodels/intent_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -69,7 +72,7 @@ func DataSourceIntent() *schema.Resource { func dataSourceIntentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) intentName := d.Get("name").(string) resp, err := conn.GetIntentWithContext(ctx, &lexmodelbuildingservice.GetIntentInput{ diff --git a/internal/service/lexmodels/intent_data_source_test.go b/internal/service/lexmodels/intent_data_source_test.go index 085e779811f..ff5c39120b9 100644 --- a/internal/service/lexmodels/intent_data_source_test.go +++ b/internal/service/lexmodels/intent_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( diff --git a/internal/service/lexmodels/intent_test.go b/internal/service/lexmodels/intent_test.go index 4b87505ec3e..dfd363e6635 100644 --- a/internal/service/lexmodels/intent_test.go +++ b/internal/service/lexmodels/intent_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( @@ -574,7 +577,7 @@ func TestAccLexModelsIntent_updateWithExternalChange(t *testing.T) { testAccCheckAWSLexIntentUpdateDescription := func(provider *schema.Provider, _ *schema.Resource, resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) resourceState, ok := s.RootModule().Resources[resourceName] if !ok { @@ -703,7 +706,7 @@ func testAccCheckIntentExistsWithVersion(ctx context.Context, rName, intentVersi } var err error - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) output, err = conn.GetIntentWithContext(ctx, &lexmodelbuildingservice.GetIntentInput{ Name: aws.String(rs.Primary.ID), @@ -726,7 +729,7 @@ func testAccCheckIntentExists(ctx context.Context, rName string, output *lexmode func testAccCheckIntentNotExists(ctx context.Context, intentName, intentVersion string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) _, err := conn.GetIntentWithContext(ctx, &lexmodelbuildingservice.GetIntentInput{ Name: aws.String(intentName), @@ -745,7 +748,7 @@ func testAccCheckIntentNotExists(ctx context.Context, intentName, intentVersion func testAccCheckIntentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn 
:= acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lex_intent" { diff --git a/internal/service/lexmodels/service_package_gen.go b/internal/service/lexmodels/service_package_gen.go index eeedb421009..7280e8b0290 100644 --- a/internal/service/lexmodels/service_package_gen.go +++ b/internal/service/lexmodels/service_package_gen.go @@ -5,6 +5,10 @@ package lexmodels import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + lexmodelbuildingservice_sdkv1 "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string { return names.LexModels } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*lexmodelbuildingservice_sdkv1.LexModelBuildingService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return lexmodelbuildingservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/lexmodels/slot_type.go b/internal/service/lexmodels/slot_type.go index 154373741ad..df24a22b9f0 100644 --- a/internal/service/lexmodels/slot_type.go +++ b/internal/service/lexmodels/slot_type.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -137,7 +140,7 @@ func hasSlotTypeConfigChanges(d verify.ResourceDiffer) bool { func resourceSlotTypeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) name := d.Get("name").(string) input := &lexmodelbuildingservice.PutSlotTypeInput{ @@ -174,7 +177,7 @@ func resourceSlotTypeCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceSlotTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) output, err := FindSlotTypeVersionByName(ctx, conn, d.Id(), SlotTypeVersionLatest) @@ -212,7 +215,7 @@ func resourceSlotTypeRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceSlotTypeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.PutSlotTypeInput{ Checksum: aws.String(d.Get("checksum").(string)), @@ -239,7 +242,7 @@ func resourceSlotTypeUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceSlotTypeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) input := &lexmodelbuildingservice.DeleteSlotTypeInput{ Name: aws.String(d.Id()), diff --git a/internal/service/lexmodels/slot_type_data_source.go b/internal/service/lexmodels/slot_type_data_source.go index 01fc9b29817..b1558bd2b66 100644 --- a/internal/service/lexmodels/slot_type_data_source.go +++ 
b/internal/service/lexmodels/slot_type_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( @@ -80,7 +83,7 @@ func DataSourceSlotType() *schema.Resource { func dataSourceSlotTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LexModelsConn() + conn := meta.(*conns.AWSClient).LexModelsConn(ctx) name := d.Get("name").(string) version := d.Get("version").(string) diff --git a/internal/service/lexmodels/slot_type_data_source_test.go b/internal/service/lexmodels/slot_type_data_source_test.go index ee3ca7bb3e3..16e25958c4b 100644 --- a/internal/service/lexmodels/slot_type_data_source_test.go +++ b/internal/service/lexmodels/slot_type_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( diff --git a/internal/service/lexmodels/slot_type_test.go b/internal/service/lexmodels/slot_type_test.go index ec66b94c340..a518500dc39 100644 --- a/internal/service/lexmodels/slot_type_test.go +++ b/internal/service/lexmodels/slot_type_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lexmodels_test import ( @@ -390,7 +393,7 @@ func testAccCheckSlotTypeExistsWithVersion(ctx context.Context, rName, slotTypeV return fmt.Errorf("No Lex Slot Type ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) output, err := tflexmodels.FindSlotTypeVersionByName(ctx, conn, rs.Primary.ID, slotTypeVersion) @@ -410,7 +413,7 @@ func testAccCheckSlotTypeExists(ctx context.Context, rName string, output *lexmo func testAccCheckSlotTypeNotExists(ctx context.Context, slotTypeName, slotTypeVersion string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) _, err := tflexmodels.FindSlotTypeVersionByName(ctx, conn, slotTypeName, slotTypeVersion) @@ -428,7 +431,7 @@ func testAccCheckSlotTypeNotExists(ctx context.Context, slotTypeName, slotTypeVe func testAccCheckSlotTypeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LexModelsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lex_slot_type" { diff --git a/internal/service/lexmodels/status.go b/internal/service/lexmodels/status.go index c4300b3f3e1..39a0ca3d268 100644 --- a/internal/service/lexmodels/status.go +++ b/internal/service/lexmodels/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lexmodels import ( diff --git a/internal/service/lexmodels/sweep.go b/internal/service/lexmodels/sweep.go index ab0732b67a2..ed84704a78d 100644 --- a/internal/service/lexmodels/sweep.go +++ b/internal/service/lexmodels/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -11,7 +14,6 @@ import (
 	"github.com/aws/aws-sdk-go/service/lexmodelbuildingservice"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )

@@ -42,13 +44,13 @@ func init() {

 func sweepBotAliases(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).LexModelsConn()
+	conn := client.LexModelsConn(ctx)

 	sweepResources := make([]sweep.Sweepable, 0)
 	var errs *multierror.Error

@@ -102,7 +104,7 @@ func sweepBotAliases(region string) error {
 		errs = multierror.Append(errs, fmt.Errorf("error listing Lex Bot Alias for %s: %w", region, err))
 	}

-	if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping Lex Bot Alias for %s: %w", region, err))
 	}

@@ -116,13 +118,13 @@ func sweepBotAliases(region string) error {

 func sweepBots(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).LexModelsConn()
+	conn := client.LexModelsConn(ctx)

 	sweepResources := make([]sweep.Sweepable, 0)
 	var errs *multierror.Error

@@ -149,7 +151,7 @@ func sweepBots(region string) error {
 		errs = multierror.Append(errs, fmt.Errorf("error listing Lex Bot for %s: %w", region, err))
 	}

-	if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping Lex Bot for %s: %w", region, err))
 	}

@@ -163,13 +165,13 @@ func sweepBots(region string) error {

 func sweepIntents(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).LexModelsConn()
+	conn := client.LexModelsConn(ctx)

 	sweepResources := make([]sweep.Sweepable, 0)
 	var errs *multierror.Error

@@ -196,7 +198,7 @@ func sweepIntents(region string) error {
 		errs = multierror.Append(errs, fmt.Errorf("error listing Lex Intent for %s: %w", region, err))
 	}

-	if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping Lex Intent for %s: %w", region, err))
 	}

@@ -210,13 +212,13 @@ func sweepIntents(region string) error {

 func sweepSlotTypes(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).LexModelsConn()
+	conn := client.LexModelsConn(ctx)

 	sweepResources := make([]sweep.Sweepable, 0)
 	var errs *multierror.Error

@@ -243,7 +245,7 @@ func sweepSlotTypes(region string) error {
 		errs = multierror.Append(errs, fmt.Errorf("error listing Lex Slot Type for %s: %w", region, err))
 	}

-	if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping Lex Slot Type for %s: %w", region, err))
 	}

diff --git a/internal/service/lexmodels/wait.go b/internal/service/lexmodels/wait.go
index df5122537eb..868959b776c 100644
--- a/internal/service/lexmodels/wait.go
+++ b/internal/service/lexmodels/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package lexmodels

 import (
diff --git a/internal/service/licensemanager/association.go b/internal/service/licensemanager/association.go
index 6dd3bf951d4..e5055554139 100644
--- a/internal/service/licensemanager/association.go
+++ b/internal/service/licensemanager/association.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -45,7 +48,7 @@ func ResourceAssociation() *schema.Resource {
 }

 func resourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	licenseConfigurationARN := d.Get("license_configuration_arn").(string)
 	resourceARN := d.Get("resource_arn").(string)
@@ -70,7 +73,7 @@ func resourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta
 }

 func resourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	resourceARN, licenseConfigurationARN, err := AssociationParseResourceID(d.Id())

@@ -97,7 +100,7 @@ func resourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta i
 }

 func resourceAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	resourceARN, licenseConfigurationARN, err := AssociationParseResourceID(d.Id())

diff --git a/internal/service/licensemanager/association_test.go b/internal/service/licensemanager/association_test.go
index 38beead952a..9b967c54e23 100644
--- a/internal/service/licensemanager/association_test.go
+++ b/internal/service/licensemanager/association_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
@@ -83,7 +86,7 @@ func testAccCheckAssociationExists(ctx context.Context, n string) resource.TestC
 			return err
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		return tflicensemanager.FindAssociation(ctx, conn, resourceARN, licenseConfigurationARN)
 	}
@@ -91,7 +94,7 @@ func testAccCheckAssociationExists(ctx context.Context, n string) resource.TestC

 func testAccCheckAssociationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_licensemanager_association" {
diff --git a/internal/service/licensemanager/common_schema_data_source.go b/internal/service/licensemanager/common_schema_data_source.go
index db66c9cc0a8..166e955cefd 100644
--- a/internal/service/licensemanager/common_schema_data_source.go
+++ b/internal/service/licensemanager/common_schema_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
diff --git a/internal/service/licensemanager/generate.go b/internal/service/licensemanager/generate.go
index bf7544e0974..1e6ae59c38c 100644
--- a/internal/service/licensemanager/generate.go
+++ b/internal/service/licensemanager/generate.go
@@ -1,5 +1,9 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice -UpdateTags
 //go:generate go run ../../generate/listpages/main.go -ListOps=ListLicenseConfigurations,ListLicenseSpecificationsForResource,ListReceivedLicenses,ListDistributedGrants
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package licensemanager
diff --git a/internal/service/licensemanager/grant.go b/internal/service/licensemanager/grant.go
index db55cd446e1..06c5cba7d1e 100644
--- a/internal/service/licensemanager/grant.go
+++ b/internal/service/licensemanager/grant.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -95,7 +98,7 @@ func ResourceGrant() *schema.Resource {
 }

 func resourceGrantCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	in := &licensemanager.CreateGrantInput{
 		AllowedOperations: aws.StringSlice(expandAllowedOperations(d.Get("allowed_operations").(*schema.Set).List())),
@@ -118,7 +121,7 @@ func resourceGrantCreate(ctx context.Context, d *schema.ResourceData, meta inter
 }

 func resourceGrantRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	out, err := FindGrantByARN(ctx, conn, d.Id())

@@ -146,7 +149,7 @@ func resourceGrantRead(ctx context.Context, d *schema.ResourceData, meta interfa
 }

 func resourceGrantUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	in := &licensemanager.CreateGrantVersionInput{
 		GrantArn: aws.String(d.Id()),
@@ -171,7 +174,7 @@ func resourceGrantUpdate(ctx context.Context, d *schema.ResourceData, meta inter
 }

 func resourceGrantDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	out, err := FindGrantByARN(ctx, conn, d.Id())

diff --git a/internal/service/licensemanager/grant_accepter.go b/internal/service/licensemanager/grant_accepter.go
index acf89fc3609..0a03785a376 100644
--- a/internal/service/licensemanager/grant_accepter.go
+++ b/internal/service/licensemanager/grant_accepter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -87,7 +90,7 @@ func ResourceGrantAccepter() *schema.Resource {
 }

 func resourceGrantAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	in := &licensemanager.AcceptGrantInput{
 		GrantArn: aws.String(d.Get("grant_arn").(string)),
@@ -105,7 +108,7 @@ func resourceGrantAccepterCreate(ctx context.Context, d *schema.ResourceData, me
 }

 func resourceGrantAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	out, err := FindGrantAccepterByGrantARN(ctx, conn, d.Id())

@@ -133,7 +136,7 @@ func resourceGrantAccepterRead(ctx context.Context, d *schema.ResourceData, meta
 }

 func resourceGrantAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	in := &licensemanager.RejectGrantInput{
 		GrantArn: aws.String(d.Id()),
diff --git a/internal/service/licensemanager/grant_accepter_test.go b/internal/service/licensemanager/grant_accepter_test.go
index 5d1537cadd7..4a61989fa7e 100644
--- a/internal/service/licensemanager/grant_accepter_test.go
+++ b/internal/service/licensemanager/grant_accepter_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
@@ -103,7 +106,7 @@ func testAccCheckGrantAccepterExists(ctx context.Context, n string, providerF fu
 			return fmt.Errorf("No License Manager License Configuration ID is set")
 		}

-		conn := providerF().Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := providerF().Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		out, err := tflicensemanager.FindGrantAccepterByGrantARN(ctx, conn, rs.Primary.ID)

@@ -121,7 +124,7 @@ func testAccCheckGrantAccepterExists(ctx context.Context, n string, providerF fu

 func testAccCheckGrantAccepterDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc {
 	return func(s *terraform.State, provider *schema.Provider) error {
-		conn := provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_licensemanager_grant_accepter" {
diff --git a/internal/service/licensemanager/grant_test.go b/internal/service/licensemanager/grant_test.go
index 29a90d43a82..a90103f45a2 100644
--- a/internal/service/licensemanager/grant_test.go
+++ b/internal/service/licensemanager/grant_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
@@ -167,7 +170,7 @@ func testAccCheckGrantExists(ctx context.Context, n string) resource.TestCheckFu
 			return fmt.Errorf("No License Manager License Configuration ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		out, err := tflicensemanager.FindGrantByARN(ctx, conn, rs.Primary.ID)

@@ -185,7 +188,7 @@ func testAccCheckGrantExists(ctx context.Context, n string) resource.TestCheckFu

 func testAccCheckGrantDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_licensemanager_grant" {
diff --git a/internal/service/licensemanager/license_configuration.go b/internal/service/licensemanager/license_configuration.go
index 731afcefc0a..c676db6fba6 100644
--- a/internal/service/licensemanager/license_configuration.go
+++ b/internal/service/licensemanager/license_configuration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -83,13 +86,13 @@ func ResourceLicenseConfiguration() *schema.Resource {
 }

 func resourceLicenseConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	name := d.Get("name").(string)
 	input := &licensemanager.CreateLicenseConfigurationInput{
 		LicenseCountingType: aws.String(d.Get("license_counting_type").(string)),
 		Name:                aws.String(name),
-		Tags:                GetTagsIn(ctx),
+		Tags:                getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("description"); ok {
@@ -121,7 +124,7 @@ func resourceLicenseConfigurationCreate(ctx context.Context, d *schema.ResourceD
 }

 func resourceLicenseConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	output, err := FindLicenseConfigurationByARN(ctx, conn, d.Id())

@@ -144,13 +147,13 @@ func resourceLicenseConfigurationRead(ctx context.Context, d *schema.ResourceDat
 	d.Set("name", output.Name)
 	d.Set("owner_account_id", output.OwnerAccountId)

-	SetTagsOut(ctx, output.Tags)
+	setTagsOut(ctx, output.Tags)

 	return nil
 }

 func resourceLicenseConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &licensemanager.UpdateLicenseConfigurationInput{
@@ -176,7 +179,7 @@ func resourceLicenseConfigurationUpdate(ctx context.Context, d *schema.ResourceD
 }

 func resourceLicenseConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	log.Printf("[DEBUG] Deleting License Manager License Configuration: %s", d.Id())
 	_, err := conn.DeleteLicenseConfigurationWithContext(ctx, &licensemanager.DeleteLicenseConfigurationInput{
diff --git a/internal/service/licensemanager/license_configuration_test.go b/internal/service/licensemanager/license_configuration_test.go
index cd32e3a8219..d0735909248 100644
--- a/internal/service/licensemanager/license_configuration_test.go
+++ b/internal/service/licensemanager/license_configuration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
@@ -197,7 +200,7 @@ func testAccCheckLicenseConfigurationExists(ctx context.Context, n string, v *li
 			return fmt.Errorf("No License Manager License Configuration ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		output, err := tflicensemanager.FindLicenseConfigurationByARN(ctx, conn, rs.Primary.ID)

@@ -213,7 +216,7 @@ func testAccCheckLicenseConfigurationExists(ctx context.Context, n string, v *li

 func testAccCheckLicenseConfigurationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LicenseManagerConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_licensemanager_license_configuration" {
diff --git a/internal/service/licensemanager/license_grants_data_source.go b/internal/service/licensemanager/license_grants_data_source.go
index 06cd4f1f048..920ac421c52 100644
--- a/internal/service/licensemanager/license_grants_data_source.go
+++ b/internal/service/licensemanager/license_grants_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -30,7 +33,7 @@ func DataSourceDistributedGrants() *schema.Resource {

 func dataSourceDistributedGrantsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics

-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	in := &licensemanager.ListDistributedGrantsInput{}

diff --git a/internal/service/licensemanager/license_grants_data_source_test.go b/internal/service/licensemanager/license_grants_data_source_test.go
index 43db37a9860..6583e0404b5 100644
--- a/internal/service/licensemanager/license_grants_data_source_test.go
+++ b/internal/service/licensemanager/license_grants_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
diff --git a/internal/service/licensemanager/received_license_data_source.go b/internal/service/licensemanager/received_license_data_source.go
index fe8b71e5f83..e133e684a46 100644
--- a/internal/service/licensemanager/received_license_data_source.go
+++ b/internal/service/licensemanager/received_license_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -215,7 +218,7 @@ func DataSourceReceivedLicense() *schema.Resource {

 func dataSourceReceivedLicenseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics

-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	arn := d.Get("license_arn").(string)

diff --git a/internal/service/licensemanager/received_license_data_source_test.go b/internal/service/licensemanager/received_license_data_source_test.go
index 8de3a2faa96..6addf6eb62b 100644
--- a/internal/service/licensemanager/received_license_data_source_test.go
+++ b/internal/service/licensemanager/received_license_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
diff --git a/internal/service/licensemanager/received_licenses_data_source.go b/internal/service/licensemanager/received_licenses_data_source.go
index fc2ddcf6846..f03f65b3b9c 100644
--- a/internal/service/licensemanager/received_licenses_data_source.go
+++ b/internal/service/licensemanager/received_licenses_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager

 import (
@@ -30,7 +33,7 @@ func DataSourceReceivedLicenses() *schema.Resource {

 func dataSourceReceivedLicensesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics

-	conn := meta.(*conns.AWSClient).LicenseManagerConn()
+	conn := meta.(*conns.AWSClient).LicenseManagerConn(ctx)

 	in := &licensemanager.ListReceivedLicensesInput{}

diff --git a/internal/service/licensemanager/received_licenses_data_source_test.go b/internal/service/licensemanager/received_licenses_data_source_test.go
index 2b52c2c709c..5b971e60008 100644
--- a/internal/service/licensemanager/received_licenses_data_source_test.go
+++ b/internal/service/licensemanager/received_licenses_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package licensemanager_test

 import (
diff --git a/internal/service/licensemanager/service_package_gen.go b/internal/service/licensemanager/service_package_gen.go
index ae4f9276c95..2f102119151 100644
--- a/internal/service/licensemanager/service_package_gen.go
+++ b/internal/service/licensemanager/service_package_gen.go
@@ -5,6 +5,10 @@ package licensemanager
 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	licensemanager_sdkv1 "github.com/aws/aws-sdk-go/service/licensemanager"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.LicenseManager
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*licensemanager_sdkv1.LicenseManager, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return licensemanager_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/licensemanager/sweep.go b/internal/service/licensemanager/sweep.go
index c134cc5e8f6..837532822b2 100644
--- a/internal/service/licensemanager/sweep.go
+++ b/internal/service/licensemanager/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/licensemanager"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )

@@ -23,11 +25,11 @@ func init() {

 func sweepLicenseConfigurations(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).LicenseManagerConn()
+	conn := client.LicenseManagerConn(ctx)
 	input := &licensemanager.ListLicenseConfigurationsInput{}
 	sweepResources := make([]sweep.Sweepable, 0)

@@ -56,7 +58,7 @@ func sweepLicenseConfigurations(region string) error {
 		return fmt.Errorf("error listing License Manager License Configurations (%s): %w", region, err)
 	}

-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)

 	if err != nil {
 		return fmt.Errorf("error sweeping License Manager License Configurations (%s): %w", region, err)
diff --git a/internal/service/licensemanager/tags_gen.go b/internal/service/licensemanager/tags_gen.go
index 3c0c761337b..96b627b2cfe 100644
--- a/internal/service/licensemanager/tags_gen.go
+++ b/internal/service/licensemanager/tags_gen.go
@@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*licensemanager.Tag) tftags.KeyVal
 	return tftags.New(ctx, m)
 }

-// GetTagsIn returns licensemanager service tags from Context.
+// getTagsIn returns licensemanager service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*licensemanager.Tag {
+func getTagsIn(ctx context.Context) []*licensemanager.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*licensemanager.Tag {
 	return nil
 }

-// SetTagsOut sets licensemanager service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*licensemanager.Tag) {
+// setTagsOut sets licensemanager service tags in Context.
+func setTagsOut(ctx context.Context, tags []*licensemanager.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates licensemanager service tags.
+// updateTags updates licensemanager service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn licensemanageriface.LicenseManagerAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn licensemanageriface.LicenseManagerAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)

@@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn licensemanageriface.LicenseManagerAPI,
 // UpdateTags updates licensemanager service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).LicenseManagerConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).LicenseManagerConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/lightsail/bucket.go b/internal/service/lightsail/bucket.go
index f288207af71..18a098962f3 100644
--- a/internal/service/lightsail/bucket.go
+++ b/internal/service/lightsail/bucket.go
@@ -1,16 +1,21 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package lightsail

 import (
 	"context"
 	"time"

-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/lightsail"
-	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/create"
+	"github.com/hashicorp/terraform-provider-aws/internal/errs"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/internal/verify"
@@ -71,22 +76,22 @@ func ResourceBucket() *schema.Resource {
 }

 func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	in := lightsail.CreateBucketInput{
 		BucketName: aws.String(d.Get("name").(string)),
 		BundleId:   aws.String(d.Get("bundle_id").(string)),
-		Tags:       GetTagsIn(ctx),
+		Tags:       getTagsIn(ctx),
 	}

-	out, err := conn.CreateBucketWithContext(ctx, &in)
+	out, err := conn.CreateBucket(ctx, &in)

 	if err != nil {
-		return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateBucket, ResBucket, d.Get("name").(string), err)
+		return create.DiagError(names.Lightsail, string(types.OperationTypeCreateBucket), ResBucket, d.Get("name").(string), err)
 	}

 	id := d.Get("name").(string)

-	diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeCreateBucket, ResBucket, id)
+	diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeCreateBucket, ResBucket, id)

 	if diag != nil {
 		return diag
@@ -98,18 +103,18 @@ func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta inte
 }

 func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	out, err := FindBucketById(ctx, conn, d.Id())

 	if !d.IsNewResource() && tfresource.NotFound(err) {
-		create.LogNotFoundRemoveState(names.CE, create.ErrActionReading, ResBucket, d.Id())
+		create.LogNotFoundRemoveState(names.Lightsail, create.ErrActionReading, ResBucket, d.Id())
 		d.SetId("")
 		return nil
 	}

 	if err != nil {
-		return create.DiagError(names.CE, create.ErrActionReading, ResBucket, d.Id(), err)
+		return create.DiagError(names.Lightsail, create.ErrActionReading, ResBucket, d.Id(), err)
 	}

 	d.Set("arn", out.Arn)
@@ -121,26 +126,26 @@ func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interf
 	d.Set("support_code", out.SupportCode)
 	d.Set("url", out.Url)

-	SetTagsOut(ctx, out.Tags)
+	setTagsOut(ctx, out.Tags)

 	return nil
 }

 func resourceBucketUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	if d.HasChange("bundle_id") {
 		in := lightsail.UpdateBucketBundleInput{
 			BucketName: aws.String(d.Id()),
 			BundleId:   aws.String(d.Get("bundle_id").(string)),
 		}
-		out, err := conn.UpdateBucketBundleWithContext(ctx, &in)
+		out, err := conn.UpdateBucketBundle(ctx, &in)

 		if err != nil {
-			return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateBucket, ResBucket, d.Get("name").(string), err)
+			return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateBucket), ResBucket, d.Get("name").(string), err)
 		}

-		diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateBucket, ResBucket, d.Get("name").(string))
+		diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateBucket, ResBucket, d.Get("name").(string))

 		if diag != nil {
 			return diag
@@ -151,20 +156,20 @@ func resourceBucketUpdate(ctx context.Context, d *schema.ResourceData, meta inte
 }

 func resourceBucketDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
-	out, err := conn.DeleteBucketWithContext(ctx, &lightsail.DeleteBucketInput{
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)
+	out, err := conn.DeleteBucket(ctx, &lightsail.DeleteBucketInput{
 		BucketName: aws.String(d.Id()),
 	})

-	if err != nil && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) {
+	if err != nil && errs.IsA[*types.NotFoundException](err) {
 		return nil
 	}

 	if err != nil {
-		return create.DiagError(names.CE, create.ErrActionDeleting, ResBucket, d.Id(), err)
+		return create.DiagError(names.Lightsail, create.ErrActionDeleting, ResBucket, d.Id(), err)
 	}

-	diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDeleteBucket, ResBucket, d.Id())
+	diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDeleteBucket, ResBucket, d.Id())

 	if diag != nil {
 		return diag
@@ -172,3 +177,25 @@ func resourceBucketDelete(ctx context.Context, d *schema.ResourceData, meta inte

 	return nil
 }
+
+func FindBucketById(ctx context.Context, conn *lightsail.Client, id string) (*types.Bucket, error) {
+	in := &lightsail.GetBucketsInput{BucketName: aws.String(id)}
+	out, err := conn.GetBuckets(ctx, in)
+
+	if IsANotFoundError(err) {
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: in,
+		}
+	}
+
+	if err != nil {
+		return nil, err
+	}
+
+	if out == nil || len(out.Buckets) == 0 {
+		return nil, tfresource.NewEmptyResultError(in)
+	}
+
+	return &out.Buckets[0], nil
+}
diff --git a/internal/service/lightsail/bucket_access_key.go b/internal/service/lightsail/bucket_access_key.go
index c7db905e7b5..47e8a0ca2d4 100644
--- a/internal/service/lightsail/bucket_access_key.go
+++ b/internal/service/lightsail/bucket_access_key.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package lightsail

 import (
@@ -5,14 +8,16 @@ import (
 	"regexp"
 	"time"

-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/lightsail"
-	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/create"
+	"github.com/hashicorp/terraform-provider-aws/internal/errs"
 	"github.com/hashicorp/terraform-provider-aws/internal/flex"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/names"
@@ -60,19 +65,19 @@ func ResourceBucketAccessKey() *schema.Resource {
 }

 func resourceBucketAccessKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	in := lightsail.CreateBucketAccessKeyInput{
 		BucketName: aws.String(d.Get("bucket_name").(string)),
 	}

-	out, err := conn.CreateBucketAccessKeyWithContext(ctx, &in)
+	out, err := conn.CreateBucketAccessKey(ctx, &in)

 	if err != nil {
-		return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateBucketAccessKey, ResBucketAccessKey, d.Get("bucket_name").(string), err)
+		return create.DiagError(names.Lightsail, string(types.OperationTypeCreateBucketAccessKey), ResBucketAccessKey, d.Get("bucket_name").(string), err)
 	}

-	diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeCreateBucketAccessKey, ResBucketAccessKey, d.Get("bucket_name").(string))
+	diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeCreateBucketAccessKey, ResBucketAccessKey, d.Get("bucket_name").(string))

 	if diag != nil {
 		return diag
@@ -92,7 +97,7 @@ func resourceBucketAccessKeyCreate(ctx context.Context, d *schema.ResourceData,
 }

 func resourceBucketAccessKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	out, err := FindBucketAccessKeyById(ctx, conn, d.Id())

@@ -115,19 +120,19 @@ func resourceBucketAccessKeyRead(ctx context.Context, d *schema.ResourceData, me
 }

 func resourceBucketAccessKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	parts, err := flex.ExpandResourceId(d.Id(), BucketAccessKeyIdPartsCount, false)

 	if err != nil {
 		return create.DiagError(names.Lightsail, create.ErrActionExpandingResourceId, ResBucketAccessKey, d.Id(), err)
 	}

-	out, err := conn.DeleteBucketAccessKeyWithContext(ctx, &lightsail.DeleteBucketAccessKeyInput{
+	out, err := conn.DeleteBucketAccessKey(ctx, &lightsail.DeleteBucketAccessKeyInput{
 		BucketName:  aws.String(parts[0]),
 		AccessKeyId: aws.String(parts[1]),
 	})

-	if err != nil && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) {
+	if err != nil && errs.IsA[*types.NotFoundException](err) {
 		return nil
 	}

 	if err != nil {
 		return create.DiagError(names.Lightsail, create.ErrActionDeleting, ResBucketAccessKey, d.Id(), err)
 	}

-	diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDeleteBucketAccessKey, ResBucketAccessKey, d.Id())
+	diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDeleteBucketAccessKey, ResBucketAccessKey, d.Id())

 	if diag != nil {
 		return diag
@@ -143,3 +148,42 @@ func resourceBucketAccessKeyDelete(ctx context.Context, d *schema.ResourceData,

 	return nil
 }
+
+func FindBucketAccessKeyById(ctx context.Context, conn *lightsail.Client, id string) (*types.AccessKey, error) {
+	parts, err := flex.ExpandResourceId(id, BucketAccessKeyIdPartsCount, false)
+
+	if err != nil {
+		return nil, err
+	}
+
+	in := &lightsail.GetBucketAccessKeysInput{BucketName: aws.String(parts[0])}
+	out, err := conn.GetBucketAccessKeys(ctx, in)
+
+	if IsANotFoundError(err) {
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: in,
+		}
+	}
+
+	if err != nil {
+		return nil, err
+	}
+
+	var entry types.AccessKey
+	entryExists := false
+
+	for _, n := range out.AccessKeys {
+		if parts[1] == aws.ToString(n.AccessKeyId) {
+			entry = n
+			entryExists = true
+			break
+		}
+	}
+
+	if !entryExists {
+		return nil, tfresource.NewEmptyResultError(in)
+	}
+
+	return &entry, nil
+}
diff --git a/internal/service/lightsail/bucket_access_key_test.go b/internal/service/lightsail/bucket_access_key_test.go
index 2792d33c946..7638e92dd60 100644
--- a/internal/service/lightsail/bucket_access_key_test.go
+++ b/internal/service/lightsail/bucket_access_key_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package lightsail_test

 import (
@@ -5,9 +8,10 @@ import (
 	"errors"
 	"fmt"
 	"regexp"
+	"strings"
 	"testing"

-	"github.com/aws/aws-sdk-go/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
 	sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
 	"github.com/hashicorp/terraform-plugin-testing/terraform"
@@ -27,10 +31,10 @@ func TestAccLightsailBucketAccessKey_basic(t *testing.T) {
 	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() {
 			acctest.PreCheck(ctx, t)
-			acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID)
+			acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID))
 			testAccPreCheck(ctx, t)
 		},
-		ErrorCheck:               acctest.ErrorCheck(t, lightsail.EndpointsID),
+		ErrorCheck:               acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckBucketAccessKeyDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -62,10 +66,10 @@ func TestAccLightsailBucketAccessKey_disappears(t *testing.T) {
 	resource.ParallelTest(t, resource.TestCase{
 		PreCheck: func() {
 			acctest.PreCheck(ctx, t)
-			acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID)
+			acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID))
 			testAccPreCheck(ctx, t)
 		},
-		ErrorCheck:               acctest.ErrorCheck(t, lightsail.EndpointsID),
+		ErrorCheck:               acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckBucketAccessKeyDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -92,7 +96,7 @@ func testAccCheckBucketAccessKeyExists(ctx context.Context, resourceName string)
 			return fmt.Errorf("Resource (%s) ID not set", resourceName)
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx)

 		out, err :=
tflightsail.FindBucketAccessKeyById(ctx, conn, rs.Primary.ID) @@ -110,7 +114,7 @@ func testAccCheckBucketAccessKeyExists(ctx context.Context, resourceName string) func testAccCheckBucketAccessKeyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_bucket_access_key" { diff --git a/internal/service/lightsail/bucket_resource_access.go b/internal/service/lightsail/bucket_resource_access.go index 916cf0ba052..a06169fa1f7 100644 --- a/internal/service/lightsail/bucket_resource_access.go +++ b/internal/service/lightsail/bucket_resource_access.go @@ -1,17 +1,22 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" "regexp" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -49,21 +54,21 @@ func ResourceBucketResourceAccess() *schema.Resource { } func resourceBucketResourceAccessCreate(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) in := lightsail.SetResourceAccessForBucketInput{ BucketName: aws.String(d.Get("bucket_name").(string)), ResourceName: aws.String(d.Get("resource_name").(string)), - Access: aws.String(lightsail.ResourceBucketAccessAllow), + Access: types.ResourceBucketAccessAllow, } - out, err := conn.SetResourceAccessForBucketWithContext(ctx, &in) + out, err := conn.SetResourceAccessForBucket(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeSetResourceAccessForBucket, ResBucketResourceAccess, d.Get("bucket_name").(string), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeSetResourceAccessForBucket), ResBucketResourceAccess, d.Get("bucket_name").(string), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeSetResourceAccessForBucket, ResBucketResourceAccess, d.Get("bucket_name").(string)) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeSetResourceAccessForBucket, ResBucketResourceAccess, d.Get("bucket_name").(string)) if diag != nil { return diag @@ -82,7 +87,7 @@ func resourceBucketResourceAccessCreate(ctx context.Context, d *schema.ResourceD } func resourceBucketResourceAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindBucketResourceAccessById(ctx, conn, d.Id()) @@ -109,28 +114,28 @@ func resourceBucketResourceAccessRead(ctx context.Context, d *schema.ResourceDat } func resourceBucketResourceAccessDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) parts, err := flex.ExpandResourceId(d.Id(), 
BucketResourceAccessIdPartsCount, false) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionExpandingResourceId, ResBucketResourceAccess, d.Id(), err) } - out, err := conn.SetResourceAccessForBucketWithContext(ctx, &lightsail.SetResourceAccessForBucketInput{ + out, err := conn.SetResourceAccessForBucket(ctx, &lightsail.SetResourceAccessForBucketInput{ BucketName: aws.String(parts[0]), ResourceName: aws.String(parts[1]), - Access: aws.String(lightsail.ResourceBucketAccessDeny), + Access: types.ResourceBucketAccessDeny, }) - if err != nil && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if err != nil && errs.IsA[*types.NotFoundException](err) { return nil } if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeSetResourceAccessForBucket, ResBucketResourceAccess, d.Id(), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeSetResourceAccessForBucket), ResBucketResourceAccess, d.Id(), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeSetResourceAccessForBucket, ResBucketResourceAccess, d.Id()) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeSetResourceAccessForBucket, ResBucketResourceAccess, d.Id()) if diag != nil { return diag @@ -138,3 +143,51 @@ func resourceBucketResourceAccessDelete(ctx context.Context, d *schema.ResourceD return nil } + +func FindBucketResourceAccessById(ctx context.Context, conn *lightsail.Client, id string) (*types.ResourceReceivingAccess, error) { + parts, err := flex.ExpandResourceId(id, BucketResourceAccessIdPartsCount, false) + + if err != nil { + return nil, err + } + + in := &lightsail.GetBucketsInput{ + BucketName: aws.String(parts[0]), + IncludeConnectedResources: aws.Bool(true), + } + + out, err := conn.GetBuckets(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == 
nil || len(out.Buckets) == 0 { + return nil, tfresource.NewEmptyResultError(in) + } + + bucket := out.Buckets[0] + var entry types.ResourceReceivingAccess + entryExists := false + + for _, n := range bucket.ResourcesReceivingAccess { + if parts[1] == aws.ToString(n.Name) { + entry = n + entryExists = true + break + } + } + + if !entryExists { + return nil, tfresource.NewEmptyResultError(in) + } + + return &entry, nil +} diff --git a/internal/service/lightsail/bucket_resource_access_test.go b/internal/service/lightsail/bucket_resource_access_test.go index 5553077cb06..b95cf01c88d 100644 --- a/internal/service/lightsail/bucket_resource_access_test.go +++ b/internal/service/lightsail/bucket_resource_access_test.go @@ -1,12 +1,16 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -27,10 +31,10 @@ func TestAccLightsailBucketResourceAccess_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketResourceAccessDestroy(ctx), Steps: []resource.TestStep{ @@ -60,10 +64,10 @@ func TestAccLightsailBucketResourceAccess_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - 
acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketResourceAccessDestroy(ctx), Steps: []resource.TestStep{ @@ -90,7 +94,7 @@ func testAccCheckBucketResourceAccessExists(ctx context.Context, resourceName st return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) out, err := tflightsail.FindBucketResourceAccessById(ctx, conn, rs.Primary.ID) @@ -108,7 +112,7 @@ func testAccCheckBucketResourceAccessExists(ctx context.Context, resourceName st func testAccCheckBucketResourceAccessDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_bucket_access_key" { diff --git a/internal/service/lightsail/bucket_test.go b/internal/service/lightsail/bucket_test.go index e270380ca20..f6ca4adfc26 100644 --- a/internal/service/lightsail/bucket_test.go +++ b/internal/service/lightsail/bucket_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,9 +8,10 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -27,10 +31,10 @@ func TestAccLightsailBucket_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketDestroy(ctx), Steps: []resource.TestStep{ @@ -68,10 +72,10 @@ func TestAccLightsailBucket_BundleId(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketDestroy(ctx), Steps: []resource.TestStep{ @@ -106,10 +110,10 @@ func TestAccLightsailBucket_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: 
acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketDestroy(ctx), Steps: []resource.TestStep{ @@ -133,10 +137,10 @@ func TestAccLightsailBucket_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckBucketDestroy(ctx), Steps: []resource.TestStep{ @@ -185,7 +189,7 @@ func testAccCheckBucketExists(ctx context.Context, resourceName string) resource return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) out, err := tflightsail.FindBucketById(ctx, conn, rs.Primary.ID) @@ -203,7 +207,7 @@ func testAccCheckBucketExists(ctx context.Context, resourceName string) resource func testAccCheckBucketDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_bucket" { diff --git a/internal/service/lightsail/certificate.go b/internal/service/lightsail/certificate.go index f22fb72b9a4..e89f4d9e788 100644 --- a/internal/service/lightsail/certificate.go +++ b/internal/service/lightsail/certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -6,15 +9,17 @@ import ( "regexp" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -116,7 +121,7 @@ func ResourceCertificate() *schema.Resource { if sanSet, ok := diff.Get("subject_alternative_names").(*schema.Set); ok { sanSet.Add(domain_name) if err := diff.SetNew("subject_alternative_names", sanSet); err != nil { - return fmt.Errorf("error setting new subject_alternative_names diff: %w", err) + return fmt.Errorf("setting new subject_alternative_names diff: %w", err) } } } @@ -129,26 +134,26 @@ func ResourceCertificate() *schema.Resource { } func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) req := lightsail.CreateCertificateInput{ CertificateName: aws.String(d.Get("name").(string)), DomainName: aws.String(d.Get("domain_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := 
d.GetOk("subject_alternative_names"); ok { - req.SubjectAlternativeNames = aws.StringSlice(expandSubjectAlternativeNames(v)) + req.SubjectAlternativeNames = expandSubjectAlternativeNames(v) } - resp, err := conn.CreateCertificateWithContext(ctx, &req) + resp, err := conn.CreateCertificate(ctx, &req) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateCertificate, ResCertificate, d.Get("name").(string), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeCreateCertificate), ResCertificate, d.Get("name").(string), err) } id := d.Get("name").(string) - diag := expandOperations(ctx, conn, resp.Operations, lightsail.OperationTypeCreateCertificate, ResCertificate, id) + diag := expandOperations(ctx, conn, resp.Operations, types.OperationTypeCreateCertificate, ResCertificate, id) if diag != nil { return diag @@ -160,18 +165,18 @@ func resourceCertificateCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) - certificate, err := FindCertificateByName(ctx, conn, d.Id()) + certificate, err := FindCertificateById(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { - create.LogNotFoundRemoveState(names.CE, create.ErrActionReading, ResCertificate, d.Id()) + create.LogNotFoundRemoveState(names.Lightsail, create.ErrActionReading, ResCertificate, d.Id()) d.SetId("") return nil } if err != nil { - return create.DiagError(names.CE, create.ErrActionReading, ResCertificate, d.Id(), err) + return create.DiagError(names.Lightsail, create.ErrActionReading, ResCertificate, d.Id(), err) } d.Set("arn", certificate.Arn) @@ -179,9 +184,9 @@ func resourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("domain_name", certificate.DomainName) d.Set("domain_validation_options", 
flattenDomainValidationRecords(certificate.DomainValidationRecords)) d.Set("name", certificate.Name) - d.Set("subject_alternative_names", aws.StringValueSlice(certificate.SubjectAlternativeNames)) + d.Set("subject_alternative_names", certificate.SubjectAlternativeNames) - SetTagsOut(ctx, certificate.Tags) + setTagsOut(ctx, certificate.Tags) return nil } @@ -192,21 +197,21 @@ func resourceCertificateUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.DeleteCertificateWithContext(ctx, &lightsail.DeleteCertificateInput{ + resp, err := conn.DeleteCertificate(ctx, &lightsail.DeleteCertificateInput{ CertificateName: aws.String(d.Id()), }) - if err != nil && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if err != nil && errs.IsA[*types.NotFoundException](err) { return nil } if err != nil { - return create.DiagError(names.CE, create.ErrActionDeleting, ResCertificate, d.Id(), err) + return create.DiagError(names.Lightsail, create.ErrActionDeleting, ResCertificate, d.Id(), err) } - diag := expandOperations(ctx, conn, resp.Operations, lightsail.OperationTypeDeleteCertificate, ResCertificate, d.Id()) + diag := expandOperations(ctx, conn, resp.Operations, types.OperationTypeDeleteCertificate, ResCertificate, d.Id()) if diag != nil { return diag @@ -229,16 +234,16 @@ func domainValidationOptionsHash(v interface{}) int { return 0 } -func flattenDomainValidationRecords(domainValidationRecords []*lightsail.DomainValidationRecord) []map[string]interface{} { +func flattenDomainValidationRecords(domainValidationRecords []types.DomainValidationRecord) []map[string]interface{} { var domainValidationResult []map[string]interface{} for _, o := range domainValidationRecords { if o.ResourceRecord != nil { validationOption := 
map[string]interface{}{ - "domain_name": aws.StringValue(o.DomainName), - "resource_record_name": aws.StringValue(o.ResourceRecord.Name), - "resource_record_type": aws.StringValue(o.ResourceRecord.Type), - "resource_record_value": aws.StringValue(o.ResourceRecord.Value), + "domain_name": aws.ToString(o.DomainName), + "resource_record_name": aws.ToString(o.ResourceRecord.Name), + "resource_record_type": aws.ToString(o.ResourceRecord.Type), + "resource_record_value": aws.ToString(o.ResourceRecord.Value), } domainValidationResult = append(domainValidationResult, validationOption) } @@ -255,3 +260,28 @@ func expandSubjectAlternativeNames(sans interface{}) []string { return subjectAlternativeNames } + +func FindCertificateById(ctx context.Context, conn *lightsail.Client, name string) (*types.Certificate, error) { + in := &lightsail.GetCertificatesInput{ + CertificateName: aws.String(name), + } + + out, err := conn.GetCertificates(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || len(out.Certificates) == 0 { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.Certificates[0].CertificateDetail, nil +} diff --git a/internal/service/lightsail/certificate_test.go b/internal/service/lightsail/certificate_test.go index 38a46ec1417..13d4a9ad7e9 100644 --- a/internal/service/lightsail/certificate_test.go +++ b/internal/service/lightsail/certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,11 +8,12 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -23,7 +27,6 @@ import ( func TestAccLightsailCertificate_basic(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.Certificate resourceName := "aws_lightsail_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) domainName := acctest.ACMCertificateRandomSubDomain(acctest.RandomDomainName()) @@ -31,17 +34,17 @@ func TestAccLightsailCertificate_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccCertificateConfig_basic(rName, domainName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "lightsail", regexp.MustCompile(`Certificate/.+`)), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), @@ -59,7 +62,6 @@ func TestAccLightsailCertificate_basic(t 
*testing.T) { func TestAccLightsailCertificate_subjectAlternativeNames(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.Certificate resourceName := "aws_lightsail_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) domainName := acctest.ACMCertificateRandomSubDomain(acctest.RandomDomainName()) @@ -68,17 +70,17 @@ func TestAccLightsailCertificate_subjectAlternativeNames(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccCertificateConfig_subjectAlternativeNames(rName, domainName, subjectAlternativeName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "subject_alternative_names.#", "2"), resource.TestCheckTypeSetElemAttr(resourceName, "subject_alternative_names.*", subjectAlternativeName), resource.TestCheckTypeSetElemAttr(resourceName, "subject_alternative_names.*", domainName), @@ -90,7 +92,6 @@ func TestAccLightsailCertificate_subjectAlternativeNames(t *testing.T) { func TestAccLightsailCertificate_DomainValidationOptions(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.Certificate resourceName := "aws_lightsail_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) // Lightsail will only return Domain Validation Options when using a non-test domain. 
@@ -101,17 +102,17 @@ func TestAccLightsailCertificate_DomainValidationOptions(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccCertificateConfig_subjectAlternativeNames(rName, domainName, subjectAlternativeName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "domain_validation_options.*", map[string]string{ "domain_name": domainName, "resource_record_type": "CNAME", @@ -128,7 +129,6 @@ func TestAccLightsailCertificate_DomainValidationOptions(t *testing.T) { func TestAccLightsailCertificate_tags(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.Certificate resourceName := "aws_lightsail_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) domainName := acctest.ACMCertificateRandomSubDomain(acctest.RandomDomainName()) @@ -136,17 +136,17 @@ func TestAccLightsailCertificate_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, 
CheckDestroy: testAccCheckCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccCertificateConfig_tags1(rName, domainName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -159,7 +159,7 @@ func TestAccLightsailCertificate_tags(t *testing.T) { { Config: testAccCertificateConfig_tags2(rName, domainName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -168,7 +168,7 @@ func TestAccLightsailCertificate_tags(t *testing.T) { { Config: testAccCertificateConfig_tags1(rName, domainName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -179,15 +179,14 @@ func TestAccLightsailCertificate_tags(t *testing.T) { func TestAccLightsailCertificate_disappears(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.Certificate resourceName := "aws_lightsail_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) domainName := acctest.ACMCertificateRandomSubDomain(acctest.RandomDomainName()) testDestroy := func(*terraform.State) error { // reach out and DELETE the Certificate - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() - _, err := 
conn.DeleteCertificateWithContext(ctx, &lightsail.DeleteCertificateInput{ + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) + _, err := conn.DeleteCertificate(ctx, &lightsail.DeleteCertificateInput{ CertificateName: aws.String(rName), }) @@ -204,17 +203,17 @@ func TestAccLightsailCertificate_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccCertificateConfig_basic(rName, domainName), Check: resource.ComposeTestCheckFunc( - testAccCheckCertificateExists(ctx, resourceName, &certificate), + testAccCheckCertificateExists(ctx, resourceName), testDestroy, ), ExpectNonEmptyPlan: true, @@ -230,9 +229,9 @@ func testAccCheckCertificateDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - _, err := tflightsail.FindCertificateByName(ctx, conn, rs.Primary.ID) + _, err := tflightsail.FindCertificateById(ctx, conn, rs.Primary.ID) if tfresource.NotFound(err) { continue @@ -249,7 +248,7 @@ func testAccCheckCertificateDestroy(ctx context.Context) resource.TestCheckFunc } } -func testAccCheckCertificateExists(ctx context.Context, n string, certificate *lightsail.Certificate) resource.TestCheckFunc { +func testAccCheckCertificateExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -260,9 +259,9 
@@ func testAccCheckCertificateExists(ctx context.Context, n string, certificate *l return errors.New("No Certificate ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - respCertificate, err := tflightsail.FindCertificateByName(ctx, conn, rs.Primary.ID) + respCertificate, err := tflightsail.FindCertificateById(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -272,8 +271,6 @@ func testAccCheckCertificateExists(ctx context.Context, n string, certificate *l return fmt.Errorf("Certificate %q does not exist", rs.Primary.ID) } - *certificate = *respCertificate - return nil } } diff --git a/internal/service/lightsail/consts.go b/internal/service/lightsail/consts.go index 7caca2ddd77..ce6550e1c24 100644 --- a/internal/service/lightsail/consts.go +++ b/internal/service/lightsail/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail const ( diff --git a/internal/service/lightsail/container_service.go b/internal/service/lightsail/container_service.go index f5aece259ed..c36f7e4a9ba 100644 --- a/internal/service/lightsail/container_service.go +++ b/internal/service/lightsail/container_service.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -7,10 +10,11 @@ import ( "regexp" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -70,7 +74,7 @@ func ResourceContainerService() *schema.Resource { "power": { Type: schema.TypeString, Required: true, - ValidateFunc: validation.StringInSlice(lightsail.ContainerServicePowerName_Values(), false), + ValidateFunc: validation.StringInSlice(flattenContainerServicePowerValues(types.ContainerServicePowerName("").Values()), false), }, "power_id": { Type: schema.TypeString, @@ -165,14 +169,14 @@ func ResourceContainerService() *schema.Resource { } func resourceContainerServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) serviceName := d.Get("name").(string) input := &lightsail.CreateContainerServiceInput{ ServiceName: aws.String(serviceName), - Power: aws.String(d.Get("power").(string)), - Scale: aws.Int64(int64(d.Get("scale").(int))), - Tags: GetTagsIn(ctx), + Power: types.ContainerServicePowerName(d.Get("power").(string)), + Scale: aws.Int32(int32(d.Get("scale").(int))), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("public_domain_names"); ok { @@ -183,15 +187,15 @@ func resourceContainerServiceCreate(ctx context.Context, d *schema.ResourceData, input.PrivateRegistryAccess = 
expandPrivateRegistryAccess(v.([]interface{})[0].(map[string]interface{})) } - _, err := conn.CreateContainerServiceWithContext(ctx, input) + _, err := conn.CreateContainerService(ctx, input) if err != nil { - return diag.Errorf("error creating Lightsail Container Service (%s): %s", serviceName, err) + return diag.Errorf("creating Lightsail Container Service (%s): %s", serviceName, err) } d.SetId(serviceName) if err := waitContainerServiceCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Lightsail Container Service (%s) creation: %s", d.Id(), err) + return diag.Errorf("waiting for Lightsail Container Service (%s) creation: %s", d.Id(), err) } // once container service creation and/or deployment successful (now enabled by default), disable it if "is_disabled" is true @@ -201,13 +205,13 @@ func resourceContainerServiceCreate(ctx context.Context, d *schema.ResourceData, IsDisabled: aws.Bool(true), } - _, err := conn.UpdateContainerServiceWithContext(ctx, input) + _, err := conn.UpdateContainerService(ctx, input) if err != nil { - return diag.Errorf("error disabling Lightsail Container Service (%s): %s", d.Id(), err) + return diag.Errorf("disabling Lightsail Container Service (%s): %s", d.Id(), err) } if err := waitContainerServiceDisabled(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Lightsail Container Service (%s) to be disabled: %s", d.Id(), err) + return diag.Errorf("waiting for Lightsail Container Service (%s) to be disabled: %s", d.Id(), err) } } @@ -215,7 +219,7 @@ func resourceContainerServiceCreate(ctx context.Context, d *schema.ResourceData, } func resourceContainerServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) cs, err := FindContainerServiceByName(ctx, conn, d.Id()) @@ -226,7 +230,7 @@ 
func resourceContainerServiceRead(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.Errorf("error reading Lightsail Container Service (%s): %s", d.Id(), err) + return diag.Errorf("reading Lightsail Container Service (%s): %s", d.Id(), err) } d.Set("name", cs.ContainerServiceName) @@ -235,14 +239,14 @@ func resourceContainerServiceRead(ctx context.Context, d *schema.ResourceData, m d.Set("is_disabled", cs.IsDisabled) if err := d.Set("public_domain_names", flattenContainerServicePublicDomainNames(cs.PublicDomainNames)); err != nil { - return diag.Errorf("error setting public_domain_names for Lightsail Container Service (%s): %s", d.Id(), err) + return diag.Errorf("setting public_domain_names for Lightsail Container Service (%s): %s", d.Id(), err) } if err := d.Set("private_registry_access", []interface{}{flattenPrivateRegistryAccess(cs.PrivateRegistryAccess)}); err != nil { - return diag.Errorf("error setting private_registry_access for Lightsail Container Service (%s): %s", d.Id(), err) + return diag.Errorf("setting private_registry_access for Lightsail Container Service (%s): %s", d.Id(), err) } d.Set("arn", cs.Arn) d.Set("availability_zone", cs.Location.AvailabilityZone) - d.Set("created_at", aws.TimeValue(cs.CreatedAt).Format(time.RFC3339)) + d.Set("created_at", aws.ToTime(cs.CreatedAt).Format(time.RFC3339)) d.Set("power_id", cs.PowerId) d.Set("principal_arn", cs.PrincipalArn) d.Set("private_domain_name", cs.PrivateDomainName) @@ -250,13 +254,13 @@ func resourceContainerServiceRead(ctx context.Context, d *schema.ResourceData, m d.Set("state", cs.State) d.Set("url", cs.Url) - SetTagsOut(ctx, cs.Tags) + setTagsOut(ctx, cs.Tags) return nil } func resourceContainerServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) if d.HasChangesExcept("tags", "tags_all") { publicDomainNames, _ := 
containerServicePublicDomainNamesChanged(d) @@ -264,23 +268,23 @@ func resourceContainerServiceUpdate(ctx context.Context, d *schema.ResourceData, input := &lightsail.UpdateContainerServiceInput{ ServiceName: aws.String(d.Id()), IsDisabled: aws.Bool(d.Get("is_disabled").(bool)), - Power: aws.String(d.Get("power").(string)), + Power: types.ContainerServicePowerName(d.Get("power").(string)), PublicDomainNames: publicDomainNames, - Scale: aws.Int64(int64(d.Get("scale").(int))), + Scale: aws.Int32(int32(d.Get("scale").(int))), } - _, err := conn.UpdateContainerServiceWithContext(ctx, input) + _, err := conn.UpdateContainerService(ctx, input) if err != nil { - return diag.Errorf("error updating Lightsail Container Service (%s): %s", d.Id(), err) + return diag.Errorf("updating Lightsail Container Service (%s): %s", d.Id(), err) } if d.HasChange("is_disabled") && d.Get("is_disabled").(bool) { if err := waitContainerServiceDisabled(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Lightsail Container Service (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Lightsail Container Service (%s) update: %s", d.Id(), err) } } else { if err := waitContainerServiceUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Lightsail Container Service (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Lightsail Container Service (%s) update: %s", d.Id(), err) } } } @@ -289,35 +293,35 @@ func resourceContainerServiceUpdate(ctx context.Context, d *schema.ResourceData, } func resourceContainerServiceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) input := &lightsail.DeleteContainerServiceInput{ ServiceName: aws.String(d.Id()), } - _, err := conn.DeleteContainerServiceWithContext(ctx, input) + _, err := 
conn.DeleteContainerService(ctx, input) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if IsANotFoundError(err) { return nil } if err != nil { - return diag.Errorf("error deleting Lightsail Container Service (%s): %s", d.Id(), err) + return diag.Errorf("deleting Lightsail Container Service (%s): %s", d.Id(), err) } if err := waitContainerServiceDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Lightsail Container Service (%s) deletion: %s", d.Id(), err) + return diag.Errorf("waiting for Lightsail Container Service (%s) deletion: %s", d.Id(), err) } return nil } -func expandContainerServicePublicDomainNames(rawPublicDomainNames []interface{}) map[string][]*string { +func expandContainerServicePublicDomainNames(rawPublicDomainNames []interface{}) map[string][]string { if len(rawPublicDomainNames) == 0 { return nil } - resultMap := make(map[string][]*string) + resultMap := make(map[string][]string) for _, rpdn := range rawPublicDomainNames { rpdnMap := rpdn.(map[string]interface{}) @@ -327,9 +331,9 @@ func expandContainerServicePublicDomainNames(rawPublicDomainNames []interface{}) for _, rc := range rawCertificates { rcMap := rc.(map[string]interface{}) - var domainNames []*string + var domainNames []string for _, rawDomainName := range rcMap["domain_names"].([]interface{}) { - domainNames = append(domainNames, aws.String(rawDomainName.(string))) + domainNames = append(domainNames, rawDomainName.(string)) } certificateName := rcMap["certificate_name"].(string) @@ -341,12 +345,12 @@ func expandContainerServicePublicDomainNames(rawPublicDomainNames []interface{}) return resultMap } -func expandPrivateRegistryAccess(tfMap map[string]interface{}) *lightsail.PrivateRegistryAccessRequest { +func expandPrivateRegistryAccess(tfMap map[string]interface{}) *types.PrivateRegistryAccessRequest { if tfMap == nil { return nil } - apiObject := &lightsail.PrivateRegistryAccessRequest{} + 
apiObject := &types.PrivateRegistryAccessRequest{} if v, ok := tfMap["ecr_image_puller_role"].([]interface{}); ok && len(v) > 0 && v[0] != nil { apiObject.EcrImagePullerRole = expandECRImagePullerRole(v[0].(map[string]interface{})) @@ -355,12 +359,12 @@ func expandPrivateRegistryAccess(tfMap map[string]interface{}) *lightsail.Privat return apiObject } -func expandECRImagePullerRole(tfMap map[string]interface{}) *lightsail.ContainerServiceECRImagePullerRoleRequest { +func expandECRImagePullerRole(tfMap map[string]interface{}) *types.ContainerServiceECRImagePullerRoleRequest { if tfMap == nil { return nil } - apiObject := &lightsail.ContainerServiceECRImagePullerRoleRequest{} + apiObject := &types.ContainerServiceECRImagePullerRoleRequest{} if v, ok := tfMap["is_active"].(bool); ok { apiObject.IsActive = aws.Bool(v) @@ -369,7 +373,7 @@ func expandECRImagePullerRole(tfMap map[string]interface{}) *lightsail.Container return apiObject } -func flattenPrivateRegistryAccess(apiObject *lightsail.PrivateRegistryAccess) map[string]interface{} { +func flattenPrivateRegistryAccess(apiObject *types.PrivateRegistryAccess) map[string]interface{} { if apiObject == nil { return nil } @@ -383,7 +387,7 @@ func flattenPrivateRegistryAccess(apiObject *lightsail.PrivateRegistryAccess) ma return tfMap } -func flattenECRImagePullerRole(apiObject *lightsail.ContainerServiceECRImagePullerRole) map[string]interface{} { +func flattenECRImagePullerRole(apiObject *types.ContainerServiceECRImagePullerRole) map[string]interface{} { if apiObject == nil { return nil } @@ -391,17 +395,17 @@ func flattenECRImagePullerRole(apiObject *lightsail.ContainerServiceECRImagePull tfMap := map[string]interface{}{} if v := apiObject.IsActive; v != nil { - tfMap["is_active"] = aws.BoolValue(v) + tfMap["is_active"] = aws.ToBool(v) } if v := apiObject.PrincipalArn; v != nil { - tfMap["principal_arn"] = aws.StringValue(v) + tfMap["principal_arn"] = aws.ToString(v) } return tfMap } -func 
flattenContainerServicePublicDomainNames(domainNames map[string][]*string) []interface{} { +func flattenContainerServicePublicDomainNames(domainNames map[string][]string) []interface{} { if domainNames == nil { return []interface{}{} } @@ -411,7 +415,7 @@ func flattenContainerServicePublicDomainNames(domainNames map[string][]*string) for certName, domains := range domainNames { rawCertificate := map[string]interface{}{ "certificate_name": certName, - "domain_names": aws.StringValueSlice(domains), + "domain_names": domains, } rawCertificates = append(rawCertificates, rawCertificate) @@ -424,7 +428,7 @@ func flattenContainerServicePublicDomainNames(domainNames map[string][]*string) } } -func containerServicePublicDomainNamesChanged(d *schema.ResourceData) (map[string][]*string, bool) { +func containerServicePublicDomainNamesChanged(d *schema.ResourceData) (map[string][]string, bool) { o, n := d.GetChange("public_domain_names") oldPublicDomainNames := expandContainerServicePublicDomainNames(o.([]interface{})) newPublicDomainNames := expandContainerServicePublicDomainNames(n.([]interface{})) @@ -432,7 +436,7 @@ func containerServicePublicDomainNamesChanged(d *schema.ResourceData) (map[strin changed := !reflect.DeepEqual(oldPublicDomainNames, newPublicDomainNames) if changed { if newPublicDomainNames == nil { - newPublicDomainNames = map[string][]*string{} + newPublicDomainNames = map[string][]string{} } // if the change is to detach a certificate, in .tf, a certificate block is removed @@ -440,10 +444,49 @@ func containerServicePublicDomainNamesChanged(d *schema.ResourceData) (map[strin // under the certificate, effectively detaching the certificate for certificateName := range oldPublicDomainNames { if _, ok := newPublicDomainNames[certificateName]; !ok { - newPublicDomainNames[certificateName] = []*string{} + newPublicDomainNames[certificateName] = []string{} } } } return newPublicDomainNames, changed } + +func flattenContainerServicePowerValues(t 
[]types.ContainerServicePowerName) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func FindContainerServiceByName(ctx context.Context, conn *lightsail.Client, serviceName string) (*types.ContainerService, error) { + input := &lightsail.GetContainerServicesInput{ + ServiceName: aws.String(serviceName), + } + + output, err := conn.GetContainerServices(ctx, input) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || len(output.ContainerServices) == 0 { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output.ContainerServices); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return &output.ContainerServices[0], nil +} diff --git a/internal/service/lightsail/container_service_deployment_version.go b/internal/service/lightsail/container_service_deployment_version.go index a409d745fb3..e0fecde1a0a 100644 --- a/internal/service/lightsail/container_service_deployment_version.go +++ b/internal/service/lightsail/container_service_deployment_version.go @@ -1,16 +1,22 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" "fmt" "log" + "reflect" "strconv" "strings" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -71,7 +77,7 @@ func ResourceContainerServiceDeploymentVersion() *schema.Resource { ForceNew: true, Elem: &schema.Schema{ Type: schema.TypeString, - ValidateFunc: validation.StringInSlice(lightsail.ContainerServiceProtocol_Values(), false), + ValidateFunc: validation.StringInSlice(flattenContainerServiceProtocolValues(types.ContainerServiceProtocol("").Values()), false), }}, }, }, @@ -166,10 +172,10 @@ func ResourceContainerServiceDeploymentVersion() *schema.Resource { } func resourceContainerServiceDeploymentVersionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) serviceName := d.Get("service_name").(string) - input := &lightsail.CreateContainerServiceDeploymentInput{ + input := lightsail.CreateContainerServiceDeploymentInput{ ServiceName: aws.String(serviceName), } @@ -181,28 +187,28 @@ func resourceContainerServiceDeploymentVersionCreate(ctx context.Context, d *sch input.PublicEndpoint = expandContainerServiceDeploymentPublicEndpoint(v.([]interface{})) } - output, err := conn.CreateContainerServiceDeploymentWithContext(ctx, input) + output, err := conn.CreateContainerServiceDeployment(ctx, &input) if err != nil { - return diag.Errorf("error creating Lightsail Container 
Service (%s) Deployment Version: %s", serviceName, err) + return diag.Errorf("creating Lightsail Container Service (%s) Deployment Version: %s", serviceName, err) } if output == nil || output.ContainerService == nil || output.ContainerService.NextDeployment == nil { - return diag.Errorf("error creating Lightsail Container Service (%s) Deployment Version: empty output", serviceName) + return diag.Errorf("creating Lightsail Container Service (%s) Deployment Version: empty output", serviceName) } - version := int(aws.Int64Value(output.ContainerService.NextDeployment.Version)) + version := int(aws.ToInt32(output.ContainerService.NextDeployment.Version)) d.SetId(fmt.Sprintf("%s/%d", serviceName, version)) if err := waitContainerServiceDeploymentVersionActive(ctx, conn, serviceName, version, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) + return diag.Errorf("waiting for Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) } return resourceContainerServiceDeploymentVersionRead(ctx, d, meta) } func resourceContainerServiceDeploymentVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) serviceName, version, err := ContainerServiceDeploymentVersionParseResourceID(d.Id()) if err != nil { @@ -218,20 +224,20 @@ func resourceContainerServiceDeploymentVersionRead(ctx context.Context, d *schem } if err != nil { - return diag.Errorf("error reading Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) + return diag.Errorf("reading Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) } - d.Set("created_at", aws.TimeValue(deployment.CreatedAt).Format(time.RFC3339)) + d.Set("created_at", 
aws.ToTime(deployment.CreatedAt).Format(time.RFC3339)) d.Set("service_name", serviceName) d.Set("state", deployment.State) d.Set("version", deployment.Version) if err := d.Set("container", flattenContainerServiceDeploymentContainers(deployment.Containers)); err != nil { - return diag.Errorf("error setting container for Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) + return diag.Errorf("setting container for Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) } if err := d.Set("public_endpoint", flattenContainerServiceDeploymentPublicEndpoint(deployment.PublicEndpoint)); err != nil { - return diag.Errorf("error setting public_endpoint for Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) + return diag.Errorf("setting public_endpoint for Lightsail Container Service (%s) Deployment Version (%d): %s", serviceName, version, err) } return nil @@ -257,12 +263,12 @@ func ContainerServiceDeploymentVersionParseResourceID(id string) (string, int, e return parts[0], version, nil } -func expandContainerServiceDeploymentContainers(tfList []interface{}) map[string]*lightsail.Container { +func expandContainerServiceDeploymentContainers(tfList []interface{}) map[string]types.Container { if len(tfList) == 0 { - return map[string]*lightsail.Container{} + return map[string]types.Container{} } - result := make(map[string]*lightsail.Container) + result := make(map[string]types.Container) for _, tfListRaw := range tfList { tfMap, ok := tfListRaw.(map[string]interface{}) @@ -272,20 +278,20 @@ func expandContainerServiceDeploymentContainers(tfList []interface{}) map[string containerName := tfMap["container_name"].(string) - container := &lightsail.Container{ + container := types.Container{ Image: aws.String(tfMap["image"].(string)), } if v, ok := tfMap["command"].([]interface{}); ok && len(v) > 0 { - container.Command = flex.ExpandStringList(v) + container.Command = 
aws.ToStringSlice(flex.ExpandStringList(v)) } if v, ok := tfMap["environment"].(map[string]interface{}); ok && len(v) > 0 { - container.Environment = flex.ExpandStringMap(v) + container.Environment = aws.ToStringMap(flex.ExpandStringMap(v)) } if v, ok := tfMap["ports"].(map[string]interface{}); ok && len(v) > 0 { - container.Ports = flex.ExpandStringMap(v) + container.Ports = expandContainerServiceProtocol(v) } result[containerName] = container @@ -294,7 +300,30 @@ func expandContainerServiceDeploymentContainers(tfList []interface{}) map[string return result } -func expandContainerServiceDeploymentPublicEndpoint(tfList []interface{}) *lightsail.EndpointRequest { +func expandContainerServiceProtocol(tfMap map[string]interface{}) map[string]types.ContainerServiceProtocol { + if tfMap == nil { + return nil + } + + apiObject := map[string]types.ContainerServiceProtocol{} + + for k, v := range tfMap { + switch v { + case "HTTP": + apiObject[k] = types.ContainerServiceProtocolHttp + case "HTTPS": + apiObject[k] = types.ContainerServiceProtocolHttps + case "TCP": + apiObject[k] = types.ContainerServiceProtocolTcp + case "UDP": + apiObject[k] = types.ContainerServiceProtocolUdp + } + } + + return apiObject +} + +func expandContainerServiceDeploymentPublicEndpoint(tfList []interface{}) *types.EndpointRequest { if len(tfList) == 0 || tfList[0] == nil { return nil } @@ -304,9 +333,9 @@ func expandContainerServiceDeploymentPublicEndpoint(tfList []interface{}) *light return nil } - endpoint := &lightsail.EndpointRequest{ + endpoint := &types.EndpointRequest{ ContainerName: aws.String(tfMap["container_name"].(string)), - ContainerPort: aws.Int64(int64(tfMap["container_port"].(int))), + ContainerPort: aws.Int32(int32(tfMap["container_port"].(int))), } if v, ok := tfMap["health_check"].([]interface{}); ok && len(v) > 0 { @@ -316,7 +345,7 @@ func expandContainerServiceDeploymentPublicEndpoint(tfList []interface{}) *light return endpoint } -func 
expandContainerServiceDeploymentPublicEndpointHealthCheck(tfList []interface{}) *lightsail.ContainerServiceHealthCheckConfig { +func expandContainerServiceDeploymentPublicEndpointHealthCheck(tfList []interface{}) *types.ContainerServiceHealthCheckConfig { if len(tfList) == 0 || tfList[0] == nil { return nil } @@ -326,19 +355,19 @@ func expandContainerServiceDeploymentPublicEndpointHealthCheck(tfList []interfac return nil } - healthCheck := &lightsail.ContainerServiceHealthCheckConfig{ - HealthyThreshold: aws.Int64(int64(tfMap["healthy_threshold"].(int))), - IntervalSeconds: aws.Int64(int64(tfMap["interval_seconds"].(int))), + healthCheck := &types.ContainerServiceHealthCheckConfig{ + HealthyThreshold: aws.Int32(int32(tfMap["healthy_threshold"].(int))), + IntervalSeconds: aws.Int32(int32(tfMap["interval_seconds"].(int))), Path: aws.String(tfMap["path"].(string)), SuccessCodes: aws.String(tfMap["success_codes"].(string)), - TimeoutSeconds: aws.Int64(int64(tfMap["timeout_seconds"].(int))), - UnhealthyThreshold: aws.Int64(int64(tfMap["unhealthy_threshold"].(int))), + TimeoutSeconds: aws.Int32(int32(tfMap["timeout_seconds"].(int))), + UnhealthyThreshold: aws.Int32(int32(tfMap["unhealthy_threshold"].(int))), } return healthCheck } -func flattenContainerServiceDeploymentContainers(containers map[string]*lightsail.Container) []interface{} { +func flattenContainerServiceDeploymentContainers(containers map[string]types.Container) []interface{} { if len(containers) == 0 { return nil } @@ -347,10 +376,10 @@ func flattenContainerServiceDeploymentContainers(containers map[string]*lightsai for containerName, container := range containers { rawContainer := map[string]interface{}{ "container_name": containerName, - "image": aws.StringValue(container.Image), - "command": aws.StringValueSlice(container.Command), - "environment": aws.StringValueMap(container.Environment), - "ports": aws.StringValueMap(container.Ports), + "image": aws.ToString(container.Image), + "command": 
container.Command, + "environment": container.Environment, + "ports": container.Ports, } rawContainers = append(rawContainers, rawContainer) @@ -359,33 +388,88 @@ func flattenContainerServiceDeploymentContainers(containers map[string]*lightsai return rawContainers } -func flattenContainerServiceDeploymentPublicEndpoint(endpoint *lightsail.ContainerServiceEndpoint) []interface{} { +func flattenContainerServiceDeploymentPublicEndpoint(endpoint *types.ContainerServiceEndpoint) []interface{} { if endpoint == nil { return []interface{}{} } return []interface{}{ map[string]interface{}{ - "container_name": aws.StringValue(endpoint.ContainerName), - "container_port": int(aws.Int64Value(endpoint.ContainerPort)), + "container_name": aws.ToString(endpoint.ContainerName), + "container_port": int(aws.ToInt32(endpoint.ContainerPort)), "health_check": flattenContainerServiceDeploymentPublicEndpointHealthCheck(endpoint.HealthCheck), }, } } -func flattenContainerServiceDeploymentPublicEndpointHealthCheck(healthCheck *lightsail.ContainerServiceHealthCheckConfig) []interface{} { +func flattenContainerServiceDeploymentPublicEndpointHealthCheck(healthCheck *types.ContainerServiceHealthCheckConfig) []interface{} { if healthCheck == nil { return []interface{}{} } return []interface{}{ map[string]interface{}{ - "healthy_threshold": int(aws.Int64Value(healthCheck.HealthyThreshold)), - "interval_seconds": int(aws.Int64Value(healthCheck.IntervalSeconds)), - "path": aws.StringValue(healthCheck.Path), - "success_codes": aws.StringValue(healthCheck.SuccessCodes), - "timeout_seconds": int(aws.Int64Value(healthCheck.TimeoutSeconds)), - "unhealthy_threshold": int(aws.Int64Value(healthCheck.UnhealthyThreshold)), + "healthy_threshold": int(aws.ToInt32(healthCheck.HealthyThreshold)), + "interval_seconds": int(aws.ToInt32(healthCheck.IntervalSeconds)), + "path": aws.ToString(healthCheck.Path), + "success_codes": aws.ToString(healthCheck.SuccessCodes), + "timeout_seconds": 
int(aws.ToInt32(healthCheck.TimeoutSeconds)), + "unhealthy_threshold": int(aws.ToInt32(healthCheck.UnhealthyThreshold)), }, } } + +func flattenContainerServiceProtocolValues(t []types.ContainerServiceProtocol) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func FindContainerServiceDeploymentByVersion(ctx context.Context, conn *lightsail.Client, serviceName string, version int) (*types.ContainerServiceDeployment, error) { + input := &lightsail.GetContainerServiceDeploymentsInput{ + ServiceName: aws.String(serviceName), + } + + output, err := conn.GetContainerServiceDeployments(ctx, input) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || len(output.Deployments) == 0 { + return nil, tfresource.NewEmptyResultError(input) + } + + var result types.ContainerServiceDeployment + + for _, deployment := range output.Deployments { + if reflect.DeepEqual(deployment, types.ContainerServiceDeployment{}) { + continue + } + + if int(aws.ToInt32(deployment.Version)) == version { + result = deployment + break + } + } + + if reflect.DeepEqual(result, types.ContainerServiceDeployment{}) { + return nil, &retry.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return &result, nil +} diff --git a/internal/service/lightsail/container_service_deployment_version_test.go b/internal/service/lightsail/container_service_deployment_version_test.go index 2566e24eaab..122dcd51621 100644 --- a/internal/service/lightsail/container_service_deployment_version_test.go +++ b/internal/service/lightsail/container_service_deployment_version_test.go @@ -1,12 +1,17 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -93,7 +98,7 @@ func TestContainerServiceDeploymentVersionParseResourceID(t *testing.T) { } } -func TestAccLightsailContainerServiceDeploymentVersion_Container_Basic(t *testing.T) { +func TestAccLightsailContainerServiceDeploymentVersion_container_basic(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) containerName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -102,10 +107,10 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Basic(t *testin resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -114,7 +119,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Basic(t *testin Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "created_at"), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", 
string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "1"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.container_name", containerName), @@ -134,7 +139,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Basic(t *testin }) } -func TestAccLightsailContainerServiceDeploymentVersion_Container_Multiple(t *testing.T) { +func TestAccLightsailContainerServiceDeploymentVersion_container_multiple(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) containerName1 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -144,10 +149,10 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Multiple(t *tes resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -155,7 +160,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Multiple(t *tes Config: testAccContainerServiceDeploymentVersionConfig_Container_multiple(rName, containerName1, helloWorldImage, containerName2, redisImage), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "1"), 
resource.TestCheckResourceAttr(resourceName, "container.#", "2"), resource.TestCheckResourceAttr(resourceName, "container.0.container_name", containerName1), @@ -173,7 +178,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Multiple(t *tes }) } -func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t *testing.T) { +func TestAccLightsailContainerServiceDeploymentVersion_container_environment(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) containerName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -182,10 +187,10 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t * resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -193,7 +198,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t * Config: testAccContainerServiceDeploymentVersionConfig_Container_environment1(rName, containerName, "A", "a"), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "1"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.environment.%", "1"), @@ 
-209,7 +214,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t * Config: testAccContainerServiceDeploymentVersionConfig_Container_environment1(rName, containerName, "B", "b"), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "2"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.environment.%", "1"), @@ -220,7 +225,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t * Config: testAccContainerServiceDeploymentVersionConfig_Container_environment2(rName, containerName, "A", "a", "B", "b"), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "3"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.environment.%", "2"), @@ -237,7 +242,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t * Config: testAccContainerServiceDeploymentVersionConfig_Container_basic(rName, containerName, helloWorldImage), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", 
string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "4"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.environment.%", "0"), @@ -252,7 +257,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Environment(t * }) } -func TestAccLightsailContainerServiceDeploymentVersion_Container_Ports(t *testing.T) { +func TestAccLightsailContainerServiceDeploymentVersion_container_ports(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) containerName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -261,22 +266,22 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Ports(t *testin resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccContainerServiceDeploymentVersionConfig_Container_ports1(rName, containerName, "80", lightsail.ContainerServiceProtocolHttp), + Config: testAccContainerServiceDeploymentVersionConfig_Container_ports1(rName, containerName, "80", string(types.ContainerServiceProtocolHttp)), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), 
resource.TestCheckResourceAttr(resourceName, "version", "1"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.ports.%", "1"), - resource.TestCheckResourceAttr(resourceName, "container.0.ports.80", lightsail.ContainerServiceProtocolHttp), + resource.TestCheckResourceAttr(resourceName, "container.0.ports.80", string(types.ContainerServiceProtocolHttp)), ), }, { @@ -285,33 +290,33 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Ports(t *testin ImportStateVerify: true, }, { - Config: testAccContainerServiceDeploymentVersionConfig_Container_ports1(rName, containerName, "90", lightsail.ContainerServiceProtocolTcp), + Config: testAccContainerServiceDeploymentVersionConfig_Container_ports1(rName, containerName, "90", string(types.ContainerServiceProtocolTcp)), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "2"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.ports.%", "1"), - resource.TestCheckResourceAttr(resourceName, "container.0.ports.90", lightsail.ContainerServiceProtocolTcp), + resource.TestCheckResourceAttr(resourceName, "container.0.ports.90", string(types.ContainerServiceProtocolTcp)), ), }, { - Config: testAccContainerServiceDeploymentVersionConfig_Container_ports2(rName, containerName, "80", lightsail.ContainerServiceProtocolHttp, "90", lightsail.ContainerServiceProtocolTcp), + Config: testAccContainerServiceDeploymentVersionConfig_Container_ports2(rName, containerName, "80", string(types.ContainerServiceProtocolHttp), "90", 
string(types.ContainerServiceProtocolTcp)), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "3"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.ports.%", "2"), - resource.TestCheckResourceAttr(resourceName, "container.0.ports.80", lightsail.ContainerServiceProtocolHttp), - resource.TestCheckResourceAttr(resourceName, "container.0.ports.90", lightsail.ContainerServiceProtocolTcp), + resource.TestCheckResourceAttr(resourceName, "container.0.ports.80", string(types.ContainerServiceProtocolHttp)), + resource.TestCheckResourceAttr(resourceName, "container.0.ports.90", string(types.ContainerServiceProtocolTcp)), ), }, { Config: testAccContainerServiceDeploymentVersionConfig_Container_basic(rName, containerName, helloWorldImage), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "4"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.ports.%", "0"), @@ -326,7 +331,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_Ports(t *testin }) } -func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint(t *testing.T) { +func TestAccLightsailContainerServiceDeploymentVersion_container_publicEndpoint(t *testing.T) { ctx := acctest.Context(t) 
rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) containerName1 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -337,10 +342,10 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -348,11 +353,11 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( Config: testAccContainerServiceDeploymentVersionConfig_Container_publicEndpoint(rName, containerName1), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "1"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.ports.%", "1"), - resource.TestCheckResourceAttr(resourceName, "container.0.ports.80", lightsail.ContainerServiceProtocolHttp), + resource.TestCheckResourceAttr(resourceName, "container.0.ports.80", string(types.ContainerServiceProtocolHttp)), resource.TestCheckResourceAttr(resourceName, "public_endpoint.#", "1"), resource.TestCheckResourceAttr(resourceName, "public_endpoint.0.container_name", containerName1), resource.TestCheckResourceAttr(resourceName, "public_endpoint.0.container_port", "80"), @@ 
-374,7 +379,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( Config: testAccContainerServiceDeploymentVersionConfig_Container_publicEndpointCompleteHealthCheck(rName, containerName2), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "2"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "public_endpoint.#", "1"), @@ -393,7 +398,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( Config: testAccContainerServiceDeploymentVersionConfig_Container_publicEndpointMinimalHealthCheck(rName, containerName2), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "3"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "public_endpoint.0.container_name", containerName2), @@ -411,7 +416,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( Config: testAccContainerServiceDeploymentVersionConfig_Container_publicEndpoint(rName, containerName2), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", 
string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "4"), resource.TestCheckResourceAttr(resourceName, "container.#", "1"), resource.TestCheckResourceAttr(resourceName, "public_endpoint.#", "1"), @@ -430,7 +435,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( Config: testAccContainerServiceDeploymentVersionConfig_Container_basic(rName, containerName1, helloWorldImage), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "5"), resource.TestCheckResourceAttr(resourceName, "public_endpoint.#", "0"), ), @@ -439,7 +444,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_PublicEndpoint( }) } -func TestAccLightsailContainerServiceDeploymentVersion_Container_EnableService(t *testing.T) { +func TestAccLightsailContainerServiceDeploymentVersion_Container_enableService(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) containerName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -448,10 +453,10 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_EnableService(t resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), 
Steps: []resource.TestStep{ @@ -463,7 +468,7 @@ func TestAccLightsailContainerServiceDeploymentVersion_Container_EnableService(t Config: testAccContainerServiceDeploymentVersionConfig_Container_withDisabledService(rName, containerName, false), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceDeploymentVersionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "state", lightsail.ContainerServiceDeploymentStateActive), + resource.TestCheckResourceAttr(resourceName, "state", string(types.ContainerServiceDeploymentStateActive)), resource.TestCheckResourceAttr(resourceName, "version", "1"), resource.TestCheckResourceAttr(resourceName, "container.0.container_name", containerName), ), @@ -488,7 +493,7 @@ func testAccCheckContainerServiceDeploymentVersionExists(ctx context.Context, re return fmt.Errorf("no Lightsail Container Service Deployment Version ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) serviceName, version, err := tflightsail.ContainerServiceDeploymentVersionParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/lightsail/container_service_test.go b/internal/service/lightsail/container_service_test.go index af697cda598..471dc9d03ab 100644 --- a/internal/service/lightsail/container_service_test.go +++ b/internal/service/lightsail/container_service_test.go @@ -1,12 +1,17 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -24,10 +29,10 @@ func TestAccLightsailContainerService_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -37,7 +42,7 @@ func TestAccLightsailContainerService_basic(t *testing.T) { testAccCheckContainerServiceExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "created_at"), resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestCheckResourceAttr(resourceName, "power", lightsail.ContainerServicePowerNameNano), + resource.TestCheckResourceAttr(resourceName, "power", string(types.ContainerServicePowerNameNano)), resource.TestCheckResourceAttr(resourceName, "scale", "1"), resource.TestCheckResourceAttr(resourceName, "is_disabled", "false"), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -74,10 +79,10 @@ func TestAccLightsailContainerService_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + 
acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -93,7 +98,7 @@ func TestAccLightsailContainerService_disappears(t *testing.T) { }) } -func TestAccLightsailContainerService_Name(t *testing.T) { +func TestAccLightsailContainerService_name(t *testing.T) { ctx := acctest.Context(t) rName1 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -102,10 +107,10 @@ func TestAccLightsailContainerService_Name(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -127,7 +132,7 @@ func TestAccLightsailContainerService_Name(t *testing.T) { }) } -func TestAccLightsailContainerService_IsDisabled(t *testing.T) { +func TestAccLightsailContainerService_isDisabled(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_container_service.test" @@ -135,10 +140,10 @@ func TestAccLightsailContainerService_IsDisabled(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, 
strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -160,7 +165,7 @@ func TestAccLightsailContainerService_IsDisabled(t *testing.T) { }) } -func TestAccLightsailContainerService_Power(t *testing.T) { +func TestAccLightsailContainerService_power(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_container_service.test" @@ -168,10 +173,10 @@ func TestAccLightsailContainerService_Power(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -179,31 +184,31 @@ func TestAccLightsailContainerService_Power(t *testing.T) { Config: testAccContainerServiceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "power", lightsail.ContainerServicePowerNameNano), + resource.TestCheckResourceAttr(resourceName, "power", string(types.ContainerServicePowerNameNano)), ), }, { Config: testAccContainerServiceConfig_power(rName), Check: resource.ComposeTestCheckFunc( testAccCheckContainerServiceExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "power", lightsail.ContainerServicePowerNameMicro), + 
resource.TestCheckResourceAttr(resourceName, "power", string(types.ContainerServicePowerNameMicro)), ), }, }, }) } -func TestAccLightsailContainerService_PublicDomainNames(t *testing.T) { +func TestAccLightsailContainerService_publicDomainNames(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -215,7 +220,7 @@ func TestAccLightsailContainerService_PublicDomainNames(t *testing.T) { }) } -func TestAccLightsailContainerService_PrivateRegistryAccess(t *testing.T) { +func TestAccLightsailContainerService_privateRegistryAccess(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_container_service.test" @@ -223,10 +228,10 @@ func TestAccLightsailContainerService_PrivateRegistryAccess(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -244,7 +249,7 @@ func TestAccLightsailContainerService_PrivateRegistryAccess(t 
*testing.T) { }) } -func TestAccLightsailContainerService_Scale(t *testing.T) { +func TestAccLightsailContainerService_scale(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_container_service.test" @@ -252,10 +257,10 @@ func TestAccLightsailContainerService_Scale(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -285,10 +290,10 @@ func TestAccLightsailContainerService_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckContainerServiceDestroy(ctx), Steps: []resource.TestStep{ @@ -328,7 +333,7 @@ func TestAccLightsailContainerService_tags(t *testing.T) { func testAccCheckContainerServiceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, r := range s.RootModule().Resources { if r.Type != "aws_lightsail_container_service" { @@ -361,7 +366,7 @@ func 
testAccCheckContainerServiceExists(ctx context.Context, resourceName string
 			return fmt.Errorf("no Lightsail Container Service ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx)

 		_, err := tflightsail.FindContainerServiceByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/lightsail/database.go b/internal/service/lightsail/database.go
index a7ae3106907..46ec4bf2698 100644
--- a/internal/service/lightsail/database.go
+++ b/internal/service/lightsail/database.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package lightsail

 import (
@@ -6,9 +9,9 @@ import (
 	"regexp"
 	"time"

-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/lightsail"
-	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
@@ -186,7 +189,7 @@ func ResourceDatabase() *schema.Resource {
 }

 func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	relationalDatabaseName := d.Get("relational_database_name").(string)
 	input := &lightsail.CreateRelationalDatabaseInput{
@@ -195,7 +198,7 @@ func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta in
 		RelationalDatabaseBlueprintId: aws.String(d.Get("blueprint_id").(string)),
 		RelationalDatabaseBundleId:    aws.String(d.Get("bundle_id").(string)),
 		RelationalDatabaseName:        aws.String(relationalDatabaseName),
-		Tags:                          GetTagsIn(ctx),
+		Tags:                          getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("availability_zone"); ok {
@@ -218,13 +221,13 @@ func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta in
 		input.PubliclyAccessible = aws.Bool(v.(bool))
 	}

-	output, err := conn.CreateRelationalDatabaseWithContext(ctx, input)
+	output, err := conn.CreateRelationalDatabase(ctx, input)

 	if err != nil {
 		return diag.Errorf("creating Lightsail Relational Database (%s): %s", relationalDatabaseName, err)
 	}

-	diagError := expandOperations(ctx, conn, output.Operations, lightsail.OperationTypeCreateRelationalDatabase, ResNameDatabase, relationalDatabaseName)
+	diagError := expandOperations(ctx, conn, output.Operations, types.OperationTypeCreateRelationalDatabase, ResNameDatabase, relationalDatabaseName)

 	if diagError != nil {
 		return diagError
@@ -241,13 +244,13 @@ func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta in
 			RelationalDatabaseName: aws.String(d.Id()),
 		}

-		output, err := conn.UpdateRelationalDatabaseWithContext(ctx, input)
+		output, err := conn.UpdateRelationalDatabase(ctx, input)

 		if err != nil {
 			return diag.Errorf("updating Lightsail Relational Database (%s) backup retention: %s", d.Id(), err)
 		}

-		diagError := expandOperations(ctx, conn, output.Operations, lightsail.OperationTypeUpdateRelationalDatabase, ResNameDatabase, relationalDatabaseName)
+		diagError := expandOperations(ctx, conn, output.Operations, types.OperationTypeUpdateRelationalDatabase, ResNameDatabase, relationalDatabaseName)

 		if diagError != nil {
 			return diagError
@@ -267,13 +270,13 @@ func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta in
 }

 func resourceDatabaseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	// Some Operations can complete before the Database enters the Available state. Added a waiter to make sure the Database is available before continuing.
 	// This is to support importing a resource that is not in a ready state.
 	database, err := waitDatabaseModified(ctx, conn, aws.String(d.Id()))

-	if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) {
+	if !d.IsNewResource() && IsANotFoundError(err) {
 		log.Printf("[WARN] Lightsail Relational Database (%s) not found, removing from state", d.Id())
 		d.SetId("")
 		return nil
@@ -308,13 +311,13 @@ func resourceDatabaseRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("secondary_availability_zone", rd.SecondaryAvailabilityZone)
 	d.Set("support_code", rd.SupportCode)

-	SetTagsOut(ctx, rd.Tags)
+	setTagsOut(ctx, rd.Tags)

 	return nil
 }

 func resourceDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LightsailConn()
+	conn := meta.(*conns.AWSClient).LightsailClient(ctx)

 	if d.HasChangesExcept("apply_immediately", "final_snapshot_name", "skip_final_snapshot", "tags", "tags_all") {
 		input := &lightsail.UpdateRelationalDatabaseInput{
@@ -350,13 +353,13 @@ func resourceDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta in
 		input.PubliclyAccessible = aws.Bool(d.Get("publicly_accessible").(bool))
 	}

-	output, err := conn.UpdateRelationalDatabaseWithContext(ctx, input)
+	output, err := conn.UpdateRelationalDatabase(ctx, input)

 	if err != nil {
 		return diag.Errorf("updating Lightsail Relational Database (%s): %s", d.Id(), err)
 	}

-	diagError := expandOperations(ctx, conn, output.Operations, lightsail.OperationTypeUpdateRelationalDatabase, ResNameDatabase, d.Id())
+	diagError := expandOperations(ctx, conn, output.Operations, types.OperationTypeUpdateRelationalDatabase, ResNameDatabase, d.Id())

 	if diagError != nil {
 		return diagError
@@ -384,7 +387,7 @@ func resourceDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta in
 }

 func resourceDatabaseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn :=
meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) // Some Operations can complete before the Database enters the Available state. Added a waiter to make sure the Database is available before continuing. if _, err := waitDatabaseModified(ctx, conn, aws.String(d.Id())); err != nil { @@ -406,13 +409,13 @@ func resourceDatabaseDelete(ctx context.Context, d *schema.ResourceData, meta in } } - output, err := conn.DeleteRelationalDatabaseWithContext(ctx, input) + output, err := conn.DeleteRelationalDatabase(ctx, input) if err != nil { return diag.Errorf("deleting Lightsail Relational Database (%s): %s", d.Id(), err) } - diagError := expandOperations(ctx, conn, output.Operations, lightsail.OperationTypeDeleteRelationalDatabase, ResNameDatabase, d.Id()) + diagError := expandOperations(ctx, conn, output.Operations, types.OperationTypeDeleteRelationalDatabase, ResNameDatabase, d.Id()) if diagError != nil { return diagError diff --git a/internal/service/lightsail/database_test.go b/internal/service/lightsail/database_test.go index 95ba94906b8..f59eb5dc9e6 100644 --- a/internal/service/lightsail/database_test.go +++ b/internal/service/lightsail/database_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -6,12 +9,12 @@ import ( "fmt" "log" "regexp" + "strings" "testing" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -24,24 +27,23 @@ import ( func TestAccLightsailDatabase_basic(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase resourceName := "aws_lightsail_database.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDatabaseConfig_basic(rName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "relational_database_name", rName), resource.TestCheckResourceAttr(resourceName, "blueprint_id", "mysql_8_0"), resource.TestCheckResourceAttr(resourceName, "bundle_id", "micro_1_0"), @@ -77,7 +79,6 @@ func TestAccLightsailDatabase_basic(t *testing.T) { func TestAccLightsailDatabase_relationalDatabaseName(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase 
resourceName := "aws_lightsail_database.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rNameTooShort := "s" @@ -89,10 +90,10 @@ func TestAccLightsailDatabase_relationalDatabaseName(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -119,7 +120,7 @@ func TestAccLightsailDatabase_relationalDatabaseName(t *testing.T) { { Config: testAccDatabaseConfig_basic(rName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "relational_database_name", rName), ), }, @@ -140,7 +141,6 @@ func TestAccLightsailDatabase_relationalDatabaseName(t *testing.T) { func TestAccLightsailDatabase_masterDatabaseName(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" dbName := "randomdatabasename" @@ -153,10 +153,10 @@ func TestAccLightsailDatabase_masterDatabaseName(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -179,7 +179,7 @@ func TestAccLightsailDatabase_masterDatabaseName(t *testing.T) { { Config: testAccDatabaseConfig_masterDatabaseName(rName, dbName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "master_database_name", dbName), ), }, @@ -197,7 +197,7 @@ func TestAccLightsailDatabase_masterDatabaseName(t *testing.T) { { Config: testAccDatabaseConfig_masterDatabaseName(rName, dbNameContainsUnderscore), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "master_database_name", dbNameContainsUnderscore), ), }, @@ -207,7 +207,6 @@ func TestAccLightsailDatabase_masterDatabaseName(t *testing.T) { func TestAccLightsailDatabase_masterUsername(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" username := "username1" @@ -221,10 +220,10 @@ func TestAccLightsailDatabase_masterUsername(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -251,7 +250,7 @@ func TestAccLightsailDatabase_masterUsername(t *testing.T) { { Config: 
testAccDatabaseConfig_masterUsername(rName, username), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "master_username", username), ), }, @@ -269,7 +268,7 @@ func TestAccLightsailDatabase_masterUsername(t *testing.T) { { Config: testAccDatabaseConfig_masterUsername(rName, usernameContainsUndercore), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "master_username", usernameContainsUndercore), ), }, @@ -291,10 +290,10 @@ func TestAccLightsailDatabase_masterPassword(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -328,7 +327,6 @@ func TestAccLightsailDatabase_masterPassword(t *testing.T) { func TestAccLightsailDatabase_preferredBackupWindow(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" backupWindowInvalidHour := "25:30-10:00" @@ -337,10 +335,10 @@ func TestAccLightsailDatabase_preferredBackupWindow(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, 
- ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -355,7 +353,7 @@ func TestAccLightsailDatabase_preferredBackupWindow(t *testing.T) { { Config: testAccDatabaseConfig_preferredBackupWindow(rName, "09:30-10:00"), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "preferred_backup_window", "09:30-10:00"), ), }, @@ -373,7 +371,7 @@ func TestAccLightsailDatabase_preferredBackupWindow(t *testing.T) { { Config: testAccDatabaseConfig_preferredBackupWindow(rName, "09:45-10:15"), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "preferred_backup_window", "09:45-10:15"), ), }, @@ -383,7 +381,6 @@ func TestAccLightsailDatabase_preferredBackupWindow(t *testing.T) { func TestAccLightsailDatabase_preferredMaintenanceWindow(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" maintenanceWindowInvalidDay := "tuesday:04:30-tue:05:00" @@ -393,10 +390,10 @@ func TestAccLightsailDatabase_preferredMaintenanceWindow(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -415,7 +412,7 @@ func TestAccLightsailDatabase_preferredMaintenanceWindow(t *testing.T) { { Config: testAccDatabaseConfig_preferredMaintenanceWindow(rName, "tue:04:30-tue:05:00"), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "preferred_maintenance_window", "tue:04:30-tue:05:00"), ), }, @@ -433,7 +430,7 @@ func TestAccLightsailDatabase_preferredMaintenanceWindow(t *testing.T) { { Config: testAccDatabaseConfig_preferredMaintenanceWindow(rName, "wed:06:00-wed:07:30"), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "preferred_maintenance_window", "wed:06:00-wed:07:30"), ), }, @@ -443,24 +440,23 @@ func TestAccLightsailDatabase_preferredMaintenanceWindow(t *testing.T) { func TestAccLightsailDatabase_publiclyAccessible(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDatabaseConfig_publiclyAccessible(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + 
testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "true"), ), }, @@ -478,7 +474,7 @@ func TestAccLightsailDatabase_publiclyAccessible(t *testing.T) { { Config: testAccDatabaseConfig_publiclyAccessible(rName, false), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "false"), ), }, @@ -488,24 +484,23 @@ func TestAccLightsailDatabase_publiclyAccessible(t *testing.T) { func TestAccLightsailDatabase_backupRetentionEnabled(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDatabaseConfig_backupRetentionEnabled(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "backup_retention_enabled", "true"), ), }, @@ -523,7 +518,7 @@ func TestAccLightsailDatabase_backupRetentionEnabled(t *testing.T) { { Config: testAccDatabaseConfig_backupRetentionEnabled(rName, false), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), 
resource.TestCheckResourceAttr(resourceName, "backup_retention_enabled", "false"), ), }, @@ -533,7 +528,6 @@ func TestAccLightsailDatabase_backupRetentionEnabled(t *testing.T) { func TestAccLightsailDatabase_finalSnapshotName(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" sName := fmt.Sprintf("%s-snapshot", rName) @@ -545,10 +539,10 @@ func TestAccLightsailDatabase_finalSnapshotName(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseSnapshotDestroy(ctx), Steps: []resource.TestStep{ @@ -571,7 +565,7 @@ func TestAccLightsailDatabase_finalSnapshotName(t *testing.T) { { Config: testAccDatabaseConfig_finalSnapshotName(rName, sName), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), ), }, { @@ -591,24 +585,23 @@ func TestAccLightsailDatabase_finalSnapshotName(t *testing.T) { func TestAccLightsailDatabase_tags(t *testing.T) { ctx := acctest.Context(t) - var db1, db2, db3 lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, 
lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDatabaseConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db1), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -627,7 +620,7 @@ func TestAccLightsailDatabase_tags(t *testing.T) { { Config: testAccDatabaseConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db2), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -636,7 +629,7 @@ func TestAccLightsailDatabase_tags(t *testing.T) { { Config: testAccDatabaseConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db3), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -647,24 +640,23 @@ func TestAccLightsailDatabase_tags(t *testing.T) { func TestAccLightsailDatabase_ha(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase resourceName := "aws_lightsail_database.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, 
strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDatabaseConfig_ha(rName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "relational_database_name", rName), resource.TestCheckResourceAttr(resourceName, "bundle_id", "micro_ha_1_0"), resource.TestCheckResourceAttrSet(resourceName, "availability_zone"), @@ -687,15 +679,14 @@ func TestAccLightsailDatabase_ha(t *testing.T) { func TestAccLightsailDatabase_disappears(t *testing.T) { ctx := acctest.Context(t) - var db lightsail.RelationalDatabase rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_database.test" testDestroy := func(*terraform.State) error { // reach out and DELETE the Database - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - _, err := conn.DeleteRelationalDatabaseWithContext(ctx, &lightsail.DeleteRelationalDatabaseInput{ + _, err := conn.DeleteRelationalDatabase(ctx, &lightsail.DeleteRelationalDatabaseInput{ RelationalDatabaseName: aws.String(rName), SkipFinalSnapshot: aws.Bool(true), }) @@ -713,17 +704,17 @@ func TestAccLightsailDatabase_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - ErrorCheck: acctest.ErrorCheck(t, 
lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDatabaseConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckDatabaseExists(ctx, resourceName, &db), + testAccCheckDatabaseExists(ctx, resourceName), testDestroy), ExpectNonEmptyPlan: true, }, @@ -731,7 +722,7 @@ func TestAccLightsailDatabase_disappears(t *testing.T) { }) } -func testAccCheckDatabaseExists(ctx context.Context, n string, res *lightsail.RelationalDatabase) resource.TestCheckFunc { +func testAccCheckDatabaseExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -742,13 +733,13 @@ func testAccCheckDatabaseExists(ctx context.Context, n string, res *lightsail.Re return errors.New("No Lightsail Database ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) params := lightsail.GetRelationalDatabaseInput{ RelationalDatabaseName: aws.String(rs.Primary.ID), } - resp, err := conn.GetRelationalDatabaseWithContext(ctx, &params) + resp, err := conn.GetRelationalDatabase(ctx, &params) if err != nil { return err @@ -757,14 +748,14 @@ func testAccCheckDatabaseExists(ctx context.Context, n string, res *lightsail.Re if resp == nil || resp.RelationalDatabase == nil { return fmt.Errorf("Database (%s) not found", rs.Primary.ID) } - *res = *resp.RelationalDatabase + return nil } } func testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_database" { @@ -775,9 +766,9 @@ func 
testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { RelationalDatabaseName: aws.String(rs.Primary.ID), } - respDatabase, err := conn.GetRelationalDatabaseWithContext(ctx, &params) + respDatabase, err := conn.GetRelationalDatabase(ctx, &params) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } @@ -796,7 +787,7 @@ func testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckDatabaseSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_database" { @@ -807,7 +798,7 @@ func testAccCheckDatabaseSnapshotDestroy(ctx context.Context) resource.TestCheck snapshot_identifier := rs.Primary.Attributes["final_snapshot_name"] log.Printf("[INFO] Deleting the Snapshot %s", snapshot_identifier) - _, err := conn.DeleteRelationalDatabaseSnapshotWithContext(ctx, &lightsail.DeleteRelationalDatabaseSnapshotInput{ + _, err := conn.DeleteRelationalDatabaseSnapshot(ctx, &lightsail.DeleteRelationalDatabaseSnapshotInput{ RelationalDatabaseSnapshotName: aws.String(snapshot_identifier), }) @@ -819,9 +810,9 @@ func testAccCheckDatabaseSnapshotDestroy(ctx context.Context) resource.TestCheck RelationalDatabaseName: aws.String(rs.Primary.ID), } - respDatabase, err := conn.GetRelationalDatabaseWithContext(ctx, &params) + respDatabase, err := conn.GetRelationalDatabase(ctx, &params) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } diff --git a/internal/service/lightsail/disk.go b/internal/service/lightsail/disk.go index 2d494669db9..925969cc8c8 100644 --- a/internal/service/lightsail/disk.go +++ b/internal/service/lightsail/disk.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -5,9 +8,11 @@ import ( "regexp" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -72,23 +77,23 @@ func ResourceDisk() *schema.Resource { } func resourceDiskCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) id := d.Get("name").(string) in := lightsail.CreateDiskInput{ AvailabilityZone: aws.String(d.Get("availability_zone").(string)), - SizeInGb: aws.Int64(int64(d.Get("size_in_gb").(int))), + SizeInGb: aws.Int32(int32(d.Get("size_in_gb").(int))), DiskName: aws.String(id), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } - out, err := conn.CreateDiskWithContext(ctx, &in) + out, err := conn.CreateDisk(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateDisk, ResDisk, id, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeCreateDisk), ResDisk, id, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeCreateDisk, ResDisk, id) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeCreateDisk, ResDisk, id) if diag != nil { return diag @@ -100,7 +105,7 @@ func resourceDiskCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceDiskRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - 
conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindDiskById(ctx, conn, d.Id()) @@ -121,7 +126,7 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("size_in_gb", out.SizeInGb) d.Set("support_code", out.SupportCode) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return nil } @@ -132,17 +137,21 @@ func resourceDiskUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) - out, err := conn.DeleteDiskWithContext(ctx, &lightsail.DeleteDiskInput{ + out, err := conn.DeleteDisk(ctx, &lightsail.DeleteDiskInput{ DiskName: aws.String(d.Id()), }) + if IsANotFoundError(err) { + return nil + } + if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeDeleteDisk, ResDisk, d.Get("name").(string), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeDeleteDisk), ResDisk, d.Get("name").(string), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDeleteDisk, ResDisk, d.Id()) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDeleteDisk, ResDisk, d.Id()) if diag != nil { return diag @@ -150,3 +159,28 @@ func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, meta interf return nil } + +func FindDiskById(ctx context.Context, conn *lightsail.Client, id string) (*types.Disk, error) { + in := &lightsail.GetDiskInput{ + DiskName: aws.String(id), + } + + out, err := conn.GetDisk(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.Disk == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.Disk, 
nil +} diff --git a/internal/service/lightsail/disk_attachment.go b/internal/service/lightsail/disk_attachment.go index a1be902872b..d55812997bb 100644 --- a/internal/service/lightsail/disk_attachment.go +++ b/internal/service/lightsail/disk_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -5,9 +8,11 @@ import ( "errors" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" @@ -47,7 +52,7 @@ func ResourceDiskAttachment() *schema.Resource { } func resourceDiskAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) in := lightsail.AttachDiskInput{ DiskName: aws.String(d.Get("disk_name").(string)), @@ -55,13 +60,13 @@ func resourceDiskAttachmentCreate(ctx context.Context, d *schema.ResourceData, m InstanceName: aws.String(d.Get("instance_name").(string)), } - out, err := conn.AttachDiskWithContext(ctx, &in) + out, err := conn.AttachDisk(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeAttachDisk, ResDiskAttachment, d.Get("disk_name").(string), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeAttachDisk), ResDiskAttachment, d.Get("disk_name").(string), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeAttachDisk, ResDiskAttachment, d.Get("disk_name").(string)) + diag := 
expandOperations(ctx, conn, out.Operations, types.OperationTypeAttachDisk, ResDiskAttachment, d.Get("disk_name").(string)) if diag != nil { return diag @@ -79,7 +84,7 @@ func resourceDiskAttachmentCreate(ctx context.Context, d *schema.ResourceData, m } func resourceDiskAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindDiskAttachmentById(ctx, conn, d.Id()) @@ -101,65 +106,65 @@ func resourceDiskAttachmentRead(ctx context.Context, d *schema.ResourceData, met } func resourceDiskAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) id_parts := strings.SplitN(d.Id(), ",", -1) dName := id_parts[0] iName := id_parts[1] // A Disk can only be detached from a stopped instance - iStateOut, err := waitInstanceStateWithContext(ctx, conn, &iName) + iStateOut, err := waitInstanceState(ctx, conn, &iName) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionReading, ResInstance, iName, errors.New("Error waiting for Instance to enter running or stopped state")) } - if aws.StringValue(iStateOut.State.Name) == "running" { - stopOut, err := conn.StopInstanceWithContext(ctx, &lightsail.StopInstanceInput{ + if aws.ToString(iStateOut.State.Name) == "running" { + stopOut, err := conn.StopInstance(ctx, &lightsail.StopInstanceInput{ InstanceName: aws.String(iName), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeStopInstance, ResInstance, iName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeStopInstance), ResInstance, iName, err) } - diag := expandOperations(ctx, conn, stopOut.Operations, lightsail.OperationTypeStopInstance, ResInstance, iName) + diag := expandOperations(ctx, conn, 
stopOut.Operations, types.OperationTypeStopInstance, ResInstance, iName) if diag != nil { return diag } } - out, err := conn.DetachDiskWithContext(ctx, &lightsail.DetachDiskInput{ + out, err := conn.DetachDisk(ctx, &lightsail.DetachDiskInput{ DiskName: aws.String(dName), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeDetachDisk, ResDiskAttachment, d.Get("disk_name").(string), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeDetachDisk), ResDiskAttachment, d.Get("disk_name").(string), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDetachDisk, ResDiskAttachment, d.Get("disk_name").(string)) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDetachDisk, ResDiskAttachment, d.Get("disk_name").(string)) if diag != nil { return diag } - iStateOut, err = waitInstanceStateWithContext(ctx, conn, &iName) + iStateOut, err = waitInstanceState(ctx, conn, &iName) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionReading, ResInstance, iName, errors.New("Error waiting for Instance to enter running or stopped state")) } - if aws.StringValue(iStateOut.State.Name) != "running" { - startOut, err := conn.StartInstanceWithContext(ctx, &lightsail.StartInstanceInput{ + if aws.ToString(iStateOut.State.Name) != "running" { + startOut, err := conn.StartInstance(ctx, &lightsail.StartInstanceInput{ InstanceName: aws.String(iName), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeStartInstance, ResInstance, iName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeStartInstance), ResInstance, iName, err) } - diag := expandOperations(ctx, conn, startOut.Operations, lightsail.OperationTypeStartInstance, ResInstance, iName) + diag := expandOperations(ctx, conn, startOut.Operations, types.OperationTypeStartInstance, ResInstance, iName) if diag != nil { return diag @@ -168,3 +173,39 @@ func 
resourceDiskAttachmentDelete(ctx context.Context, d *schema.ResourceData, m return nil } + +func FindDiskAttachmentById(ctx context.Context, conn *lightsail.Client, id string) (*types.Disk, error) { + id_parts := strings.SplitN(id, ",", -1) + + if len(id_parts) != 2 { + return nil, errors.New("invalid Disk Attachment id") + } + + dName := id_parts[0] + iName := id_parts[1] + + in := &lightsail.GetDiskInput{ + DiskName: aws.String(dName), + } + + out, err := conn.GetDisk(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + disk := out.Disk + + if disk == nil || !aws.ToBool(disk.IsAttached) || aws.ToString(disk.Name) != dName || aws.ToString(disk.AttachedTo) != iName { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.Disk, nil +} diff --git a/internal/service/lightsail/disk_attachment_test.go b/internal/service/lightsail/disk_attachment_test.go index 295771330a7..1c36c5bb15d 100644 --- a/internal/service/lightsail/disk_attachment_test.go +++ b/internal/service/lightsail/disk_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,9 +8,10 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -21,7 +25,6 @@ import ( func TestAccLightsailDiskAttachment_basic(t *testing.T) { ctx := acctest.Context(t) - var disk lightsail.Disk resourceName := "aws_lightsail_disk_attachment.test" dName := sdkacctest.RandomWithPrefix("tf-acc-test") liName := sdkacctest.RandomWithPrefix("tf-acc-test") @@ -31,17 +34,17 @@ func TestAccLightsailDiskAttachment_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDiskAttachmentDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDiskAttachmentConfig_basic(dName, liName, diskPath), Check: resource.ComposeTestCheckFunc( - testAccCheckDiskAttachmentExists(ctx, resourceName, disk), + testAccCheckDiskAttachmentExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "disk_name", dName), resource.TestCheckResourceAttr(resourceName, "disk_path", diskPath), resource.TestCheckResourceAttr(resourceName, "instance_name", liName), @@ -57,7 +60,6 @@ func TestAccLightsailDiskAttachment_basic(t *testing.T) { func TestAccLightsailDiskAttachment_disappears(t *testing.T) { ctx := acctest.Context(t) - var disk lightsail.Disk resourceName := 
"aws_lightsail_disk_attachment.test" dName := sdkacctest.RandomWithPrefix("tf-acc-test") liName := sdkacctest.RandomWithPrefix("tf-acc-test") @@ -66,17 +68,17 @@ func TestAccLightsailDiskAttachment_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDiskAttachmentDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDiskAttachmentConfig_basic(dName, liName, diskPath), Check: resource.ComposeTestCheckFunc( - testAccCheckDiskAttachmentExists(ctx, resourceName, disk), + testAccCheckDiskAttachmentExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tflightsail.ResourceDiskAttachment(), resourceName), ), ExpectNonEmptyPlan: true, @@ -85,7 +87,7 @@ func TestAccLightsailDiskAttachment_disappears(t *testing.T) { }) } -func testAccCheckDiskAttachmentExists(ctx context.Context, n string, disk lightsail.Disk) resource.TestCheckFunc { +func testAccCheckDiskAttachmentExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -96,7 +98,7 @@ func testAccCheckDiskAttachmentExists(ctx context.Context, n string, disk lights return errors.New("No LightsailDiskAttachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) out, err := tflightsail.FindDiskAttachmentById(ctx, conn, rs.Primary.ID) @@ -108,8 +110,6 @@ func testAccCheckDiskAttachmentExists(ctx context.Context, n string, disk lights return 
fmt.Errorf("Disk Attachment %q does not exist", rs.Primary.ID) } - disk = *out - return nil } } @@ -121,7 +121,7 @@ func testAccCheckDiskAttachmentDestroy(ctx context.Context) resource.TestCheckFu continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) _, err := tflightsail.FindDiskAttachmentById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/lightsail/disk_test.go b/internal/service/lightsail/disk_test.go index 6541abac9b3..734ce3034fd 100644 --- a/internal/service/lightsail/disk_test.go +++ b/internal/service/lightsail/disk_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,9 +8,10 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -21,24 +25,23 @@ import ( func TestAccLightsailDisk_basic(t *testing.T) { ctx := acctest.Context(t) - var disk lightsail.Disk rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_disk.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDiskDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDiskConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - 
testAccCheckDiskExists(ctx, resourceName, &disk), + testAccCheckDiskExists(ctx, resourceName), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "lightsail", regexp.MustCompile(`Disk/.+`)), resource.TestCheckResourceAttrSet(resourceName, "availability_zone"), resource.TestCheckResourceAttrSet(resourceName, "created_at"), @@ -58,24 +61,23 @@ func TestAccLightsailDisk_basic(t *testing.T) { func TestAccLightsailDisk_Tags(t *testing.T) { ctx := acctest.Context(t) - var disk1, disk2, disk3 lightsail.Disk rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_disk.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDiskDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDiskConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckDiskExists(ctx, resourceName, &disk1), + testAccCheckDiskExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -88,7 +90,7 @@ func TestAccLightsailDisk_Tags(t *testing.T) { { Config: testAccDiskConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckDiskExists(ctx, resourceName, &disk2), + testAccCheckDiskExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -97,7 +99,7 @@ func 
TestAccLightsailDisk_Tags(t *testing.T) { { Config: testAccDiskConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckDiskExists(ctx, resourceName, &disk3), + testAccCheckDiskExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -106,7 +108,7 @@ func TestAccLightsailDisk_Tags(t *testing.T) { }) } -func testAccCheckDiskExists(ctx context.Context, n string, disk *lightsail.Disk) resource.TestCheckFunc { +func testAccCheckDiskExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -117,7 +119,7 @@ func testAccCheckDiskExists(ctx context.Context, n string, disk *lightsail.Disk) return errors.New("No LightsailDisk ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) resp, err := tflightsail.FindDiskById(ctx, conn, rs.Primary.ID) @@ -129,32 +131,29 @@ func testAccCheckDiskExists(ctx context.Context, n string, disk *lightsail.Disk) return fmt.Errorf("Disk %q does not exist", rs.Primary.ID) } - *disk = *resp - return nil } } func TestAccLightsailDisk_disappears(t *testing.T) { ctx := acctest.Context(t) - var disk lightsail.Disk rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_disk.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDiskDestroy(ctx), Steps: 
[]resource.TestStep{ { Config: testAccDiskConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckDiskExists(ctx, resourceName, &disk), + testAccCheckDiskExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tflightsail.ResourceDisk(), resourceName), ), ExpectNonEmptyPlan: true, @@ -170,7 +169,7 @@ func testAccCheckDiskDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) _, err := tflightsail.FindDiskById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/lightsail/distribution.go b/internal/service/lightsail/distribution.go index f6ff9623572..2361956f931 100644 --- a/internal/service/lightsail/distribution.go +++ b/internal/service/lightsail/distribution.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -7,15 +10,17 @@ import ( "regexp" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" 
"github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -70,7 +75,7 @@ func ResourceDistribution() *schema.Resource { Type: schema.TypeString, Required: true, Description: "The cache behavior for the specified path.", - ValidateFunc: validation.StringInSlice(lightsail.BehaviorEnum_Values(), false), + ValidateFunc: validation.StringInSlice(flattenBehaviorEnumValues(types.BehaviorEnum("").Values()), false), }, "path": { Type: schema.TypeString, @@ -121,7 +126,7 @@ func ResourceDistribution() *schema.Resource { Type: schema.TypeString, Optional: true, Description: "Specifies which cookies to forward to the distribution's origin for a cache behavior: all, none, or allow-list to forward only the cookies specified in the cookiesAllowList parameter.", - ValidateFunc: validation.StringInSlice(lightsail.ForwardValues_Values(), false), + ValidateFunc: validation.StringInSlice(flattenForwardValuesValues(types.ForwardValues("").Values()), false), }, }, }, @@ -139,14 +144,14 @@ func ResourceDistribution() *schema.Resource { Description: "The specific headers to forward to your distribution's origin.", Elem: &schema.Schema{ Type: schema.TypeString, - ValidateFunc: validation.StringInSlice(lightsail.HeaderEnum_Values(), false), + ValidateFunc: validation.StringInSlice(flattenHeaderEnumValues(types.HeaderEnum("").Values()), false), }, }, "option": { Type: schema.TypeString, Optional: true, Description: "The headers that you want your distribution to forward to your origin and base caching on.", - ValidateFunc: validation.StringInSlice([]string{"default", lightsail.ForwardValuesAllowList, lightsail.ForwardValuesAll}, false), + ValidateFunc: validation.StringInSlice(enum.Slice("default", types.ForwardValuesAllowList, types.ForwardValuesAll), false), }, }, }, @@ -207,7 +212,7 @@ func ResourceDistribution() *schema.Resource { Type: schema.TypeString, Required: true, Description: "The cache behavior of the distribution.", - ValidateFunc: 
validation.StringInSlice(lightsail.BehaviorEnum_Values(), false), + ValidateFunc: validation.StringInSlice(flattenBehaviorEnumValues(types.BehaviorEnum("").Values()), false), }, }, }, @@ -221,7 +226,7 @@ func ResourceDistribution() *schema.Resource { Type: schema.TypeString, Optional: true, Description: "The IP address type of the distribution.", - ValidateFunc: validation.StringInSlice(lightsail.IpAddressType_Values(), false), + ValidateFunc: validation.StringInSlice(flattenIPAddressTypeValues(types.IpAddressType("").Values()), false), Default: "dualstack", }, "location": { @@ -234,7 +239,7 @@ func ResourceDistribution() *schema.Resource { Type: schema.TypeString, Required: true, Description: "The Availability Zone.", - ValidateFunc: validation.StringInSlice(lightsail.BehaviorEnum_Values(), false), + ValidateFunc: validation.StringInSlice(flattenBehaviorEnumValues(types.BehaviorEnum("").Values()), false), }, "region_name": { Type: schema.TypeString, @@ -272,7 +277,7 @@ func ResourceDistribution() *schema.Resource { "protocol_policy": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(lightsail.OriginProtocolPolicyEnum_Values(), false), + ValidateFunc: validation.StringInSlice(flattenOriginProtocolPolicyEnumValues(types.OriginProtocolPolicyEnum("").Values()), false), Description: "The protocol that your Amazon Lightsail distribution uses when establishing a connection with your origin to pull content.", }, "region_name": { @@ -322,14 +327,14 @@ const ( ) func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) in := &lightsail.CreateDistributionInput{ BundleId: aws.String(d.Get("bundle_id").(string)), DefaultCacheBehavior: expandCacheBehavior(d.Get("default_cache_behavior").([]interface{})[0].(map[string]interface{})), DistributionName: aws.String(d.Get("name").(string)), 
Origin: expandInputOrigin(d.Get("origin").([]interface{})[0].(map[string]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("cache_behavior_settings"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -341,10 +346,10 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met } if v, ok := d.GetOk("ip_address_type"); ok { - in.IpAddressType = aws.String(v.(string)) + in.IpAddressType = types.IpAddressType(v.(string)) } - out, err := conn.CreateDistributionWithContext(ctx, in) + out, err := conn.CreateDistribution(ctx, in) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionCreating, ResNameDistribution, d.Get("name").(string), err) @@ -354,9 +359,9 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met return create.DiagError(names.Lightsail, create.ErrActionCreating, ResNameDistribution, d.Get("name").(string), errors.New("empty output")) } - id := aws.StringValue(out.Distribution.Name) + id := aws.ToString(out.Distribution.Name) - diag := expandOperation(ctx, conn, out.Operation, lightsail.OperationTypeCreateDistribution, ResNameDistribution, id) + diag := expandOperation(ctx, conn, *out.Operation, types.OperationTypeCreateDistribution, ResNameDistribution, id) if diag != nil { return diag @@ -371,13 +376,13 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met DistributionName: aws.String(id), IsEnabled: aws.Bool(isEnabled), } - updateOut, err := conn.UpdateDistributionWithContext(ctx, updateIn) + updateOut, err := conn.UpdateDistribution(ctx, updateIn) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionUpdating, ResNameDistribution, d.Id(), err) } - diagUpdate := expandOperation(ctx, conn, updateOut.Operation, lightsail.OperationTypeUpdateDistribution, ResNameDistribution, d.Id()) + diagUpdate := expandOperation(ctx, conn, *updateOut.Operation, types.OperationTypeUpdateDistribution, 
ResNameDistribution, d.Id()) if diagUpdate != nil { return diagUpdate @@ -388,7 +393,7 @@ func resourceDistributionCreate(ctx context.Context, d *schema.ResourceData, met } func resourceDistributionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindDistributionByID(ctx, conn, d.Id()) @@ -436,13 +441,13 @@ func resourceDistributionRead(ctx context.Context, d *schema.ResourceData, meta d.Set("status", out.Status) d.Set("support_code", out.SupportCode) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return nil } func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) update := false bundleUpdate := false @@ -486,17 +491,17 @@ func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, met } if d.HasChange("ip_address_type") { - out, err := conn.SetIpAddressTypeWithContext(ctx, &lightsail.SetIpAddressTypeInput{ + out, err := conn.SetIpAddressType(ctx, &lightsail.SetIpAddressTypeInput{ ResourceName: aws.String(d.Id()), - ResourceType: aws.String("Distribution"), - IpAddressType: aws.String(d.Get("ip_address_type").(string)), + ResourceType: types.ResourceTypeDistribution, + IpAddressType: types.IpAddressType(d.Get("ip_address_type").(string)), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeSetIpAddressType, ResNameDistribution, d.Id(), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeSetIpAddressType), ResNameDistribution, d.Id(), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeSetIpAddressType, ResNameDistribution, d.Id()) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeSetIpAddressType, ResNameDistribution, 
d.Id()) if diag != nil { return diag @@ -505,12 +510,12 @@ func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, met if update { log.Printf("[DEBUG] Updating Lightsail Distribution (%s): %#v", d.Id(), in) - out, err := conn.UpdateDistributionWithContext(ctx, in) + out, err := conn.UpdateDistribution(ctx, in) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionUpdating, ResNameDistribution, d.Id(), err) } - diag := expandOperation(ctx, conn, out.Operation, lightsail.OperationTypeUpdateDistribution, ResNameDistribution, d.Id()) + diag := expandOperation(ctx, conn, *out.Operation, types.OperationTypeUpdateDistribution, ResNameDistribution, d.Id()) if diag != nil { return diag @@ -519,12 +524,12 @@ func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, met if bundleUpdate { log.Printf("[DEBUG] Updating Lightsail Distribution Bundle (%s): %#v", d.Id(), in) - out, err := conn.UpdateDistributionBundleWithContext(ctx, bundleIn) + out, err := conn.UpdateDistributionBundle(ctx, bundleIn) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionUpdating, ResNameDistribution, d.Id(), err) } - diag := expandOperation(ctx, conn, out.Operation, lightsail.OperationTypeUpdateDistributionBundle, ResNameDistribution, d.Id()) + diag := expandOperation(ctx, conn, *out.Operation, types.OperationTypeUpdateDistributionBundle, ResNameDistribution, d.Id()) if diag != nil { return diag @@ -535,15 +540,15 @@ func resourceDistributionUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceDistributionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) log.Printf("[INFO] Deleting Lightsail Distribution %s", d.Id()) - out, err := conn.DeleteDistributionWithContext(ctx, &lightsail.DeleteDistributionInput{ + out, err := conn.DeleteDistribution(ctx, 
&lightsail.DeleteDistributionInput{ DistributionName: aws.String(d.Id()), }) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) || tfawserr.ErrMessageContains(err, lightsail.ErrCodeInvalidInputException, "Requested resource not found") { + if IsANotFoundError(err) || errs.IsA[*types.InvalidInputException](err) { return nil } @@ -551,7 +556,7 @@ func resourceDistributionDelete(ctx context.Context, d *schema.ResourceData, met return create.DiagError(names.Lightsail, create.ErrActionDeleting, ResNameDistribution, d.Id(), err) } - diag := expandOperation(ctx, conn, out.Operation, lightsail.OperationTypeDeleteDistribution, ResNameDistribution, d.Id()) + diag := expandOperation(ctx, conn, *out.Operation, types.OperationTypeDeleteDistribution, ResNameDistribution, d.Id()) if diag != nil { return diag @@ -560,12 +565,12 @@ func resourceDistributionDelete(ctx context.Context, d *schema.ResourceData, met return nil } -func FindDistributionByID(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.LightsailDistribution, error) { +func FindDistributionByID(ctx context.Context, conn *lightsail.Client, id string) (*types.LightsailDistribution, error) { in := &lightsail.GetDistributionsInput{ DistributionName: aws.String(id), } - out, err := conn.GetDistributionsWithContext(ctx, in) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) || tfawserr.ErrMessageContains(err, lightsail.ErrCodeInvalidInputException, "Requested resource not found") { + out, err := conn.GetDistributions(ctx, in) + if IsANotFoundError(err) || errs.IsA[*types.InvalidInputException](err) { return nil, &retry.NotFoundError{ LastError: err, LastRequest: in, @@ -580,10 +585,10 @@ func FindDistributionByID(ctx context.Context, conn *lightsail.Lightsail, id str return nil, tfresource.NewEmptyResultError(in) } - return out.Distributions[0], nil + return &out.Distributions[0], nil } -func flattenCookieObject(apiObject *lightsail.CookieObject) map[string]interface{} 
{ +func flattenCookieObject(apiObject *types.CookieObject) map[string]interface{} { if apiObject == nil { return nil } @@ -594,14 +599,14 @@ func flattenCookieObject(apiObject *lightsail.CookieObject) map[string]interface m["cookies_allow_list"] = v } - if v := apiObject.Option; v != nil { - m["option"] = aws.StringValue(v) + if v := apiObject.Option; v != "" { + m["option"] = v } return m } -func flattenHeaderObject(apiObject *lightsail.HeaderObject) map[string]interface{} { +func flattenHeaderObject(apiObject *types.HeaderObject) map[string]interface{} { if apiObject == nil { return nil } @@ -612,14 +617,14 @@ func flattenHeaderObject(apiObject *lightsail.HeaderObject) map[string]interface m["headers_allow_list"] = v } - if v := apiObject.Option; v != nil { - m["option"] = aws.StringValue(v) + if v := apiObject.Option; v != "" { + m["option"] = v } return m } -func flattenQueryStringObject(apiObject *lightsail.QueryStringObject) map[string]interface{} { +func flattenQueryStringObject(apiObject *types.QueryStringObject) map[string]interface{} { if apiObject == nil { return nil } @@ -631,13 +636,13 @@ func flattenQueryStringObject(apiObject *lightsail.QueryStringObject) map[string } if v := apiObject.Option; v != nil { - m["option"] = aws.BoolValue(v) + m["option"] = aws.ToBool(v) } return m } -func flattenCacheSettings(apiObject *lightsail.CacheSettings) map[string]interface{} { +func flattenCacheSettings(apiObject *types.CacheSettings) map[string]interface{} { if apiObject == nil { return nil } @@ -645,15 +650,15 @@ func flattenCacheSettings(apiObject *lightsail.CacheSettings) map[string]interfa m := map[string]interface{}{} if v := apiObject.AllowedHTTPMethods; v != nil { - m["allowed_http_methods"] = aws.StringValue(v) + m["allowed_http_methods"] = aws.ToString(v) } if v := apiObject.CachedHTTPMethods; v != nil { - m["cached_http_methods"] = aws.StringValue(v) + m["cached_http_methods"] = aws.ToString(v) } if v := apiObject.DefaultTTL; v != nil { - 
m["default_ttl"] = int(aws.Int64Value(v)) + m["default_ttl"] = int(aws.ToInt64(v)) } if v := apiObject.ForwardedCookies; v != nil { @@ -669,35 +674,35 @@ func flattenCacheSettings(apiObject *lightsail.CacheSettings) map[string]interfa } if v := apiObject.MaximumTTL; v != nil { - m["maximum_ttl"] = int(aws.Int64Value(v)) + m["maximum_ttl"] = int(aws.ToInt64(v)) } if v := apiObject.MinimumTTL; v != nil { - m["minimum_ttl"] = int(aws.Int64Value(v)) + m["minimum_ttl"] = int(aws.ToInt64(v)) } return m } -func flattenCacheBehaviorPerPath(apiObject *lightsail.CacheBehaviorPerPath) map[string]interface{} { - if apiObject == nil { +func flattenCacheBehaviorPerPath(apiObject types.CacheBehaviorPerPath) map[string]interface{} { + if apiObject == (types.CacheBehaviorPerPath{}) { return nil } m := map[string]interface{}{} - if v := apiObject.Behavior; v != nil { - m["behavior"] = aws.StringValue(v) + if v := apiObject.Behavior; v != "" { + m["behavior"] = v } if v := apiObject.Path; v != nil { - m["path"] = aws.StringValue(v) + m["path"] = aws.ToString(v) } return m } -func flattenCacheBehaviorsPerPath(apiObjects []*lightsail.CacheBehaviorPerPath) []interface{} { +func flattenCacheBehaviorsPerPath(apiObjects []types.CacheBehaviorPerPath) []interface{} { if len(apiObjects) == 0 { return nil } @@ -705,7 +710,7 @@ func flattenCacheBehaviorsPerPath(apiObjects []*lightsail.CacheBehaviorPerPath) var l []interface{} for _, apiObject := range apiObjects { - if apiObject == nil { + if apiObject == (types.CacheBehaviorPerPath{}) { continue } @@ -715,21 +720,21 @@ func flattenCacheBehaviorsPerPath(apiObjects []*lightsail.CacheBehaviorPerPath) return l } -func flattenCacheBehavior(apiObject *lightsail.CacheBehavior) map[string]interface{} { +func flattenCacheBehavior(apiObject *types.CacheBehavior) map[string]interface{} { if apiObject == nil { return nil } m := map[string]interface{}{} - if v := apiObject.Behavior; v != nil { - m["behavior"] = aws.StringValue(v) + if v := 
apiObject.Behavior; v != "" { + m["behavior"] = v } return m } -func flattenOrigin(apiObject *lightsail.Origin) map[string]interface{} { +func flattenOrigin(apiObject *types.Origin) map[string]interface{} { if apiObject == nil { return nil } @@ -737,55 +742,55 @@ func flattenOrigin(apiObject *lightsail.Origin) map[string]interface{} { m := map[string]interface{}{} if v := apiObject.Name; v != nil { - m["name"] = aws.StringValue(v) + m["name"] = aws.ToString(v) } - if v := apiObject.ProtocolPolicy; v != nil { - m["protocol_policy"] = aws.StringValue(v) + if v := apiObject.ProtocolPolicy; v != "" { + m["protocol_policy"] = v } - if v := apiObject.RegionName; v != nil { - m["region_name"] = aws.StringValue(v) + if v := apiObject.RegionName; v != "" { + m["region_name"] = v } - if v := apiObject.ResourceType; v != nil { - m["resource_type"] = aws.StringValue(v) + if v := apiObject.ResourceType; v != "" { + m["resource_type"] = v } return m } -func expandInputOrigin(tfMap map[string]interface{}) *lightsail.InputOrigin { +func expandInputOrigin(tfMap map[string]interface{}) *types.InputOrigin { if tfMap == nil { return nil } - a := &lightsail.InputOrigin{} + a := &types.InputOrigin{} if v, ok := tfMap["name"].(string); ok && v != "" { a.Name = aws.String(v) } if v, ok := tfMap["protocol_policy"].(string); ok && v != "" { - a.ProtocolPolicy = aws.String(v) + a.ProtocolPolicy = types.OriginProtocolPolicyEnum(v) } if v, ok := tfMap["region_name"].(string); ok && v != "" { - a.RegionName = aws.String(v) + a.RegionName = types.RegionName(v) } return a } -func expandCacheBehaviorPerPath(tfMap map[string]interface{}) *lightsail.CacheBehaviorPerPath { +func expandCacheBehaviorPerPath(tfMap map[string]interface{}) types.CacheBehaviorPerPath { if tfMap == nil { - return nil + return types.CacheBehaviorPerPath{} } - a := &lightsail.CacheBehaviorPerPath{} + a := types.CacheBehaviorPerPath{} if v, ok := tfMap["behavior"].(string); ok && v != "" { - a.Behavior = aws.String(v) + 
a.Behavior = types.BehaviorEnum(v) } if v, ok := tfMap["path"].(string); ok && v != "" { @@ -795,12 +800,12 @@ func expandCacheBehaviorPerPath(tfMap map[string]interface{}) *lightsail.CacheBe return a } -func expandCacheBehaviorsPerPath(tfList []interface{}) []*lightsail.CacheBehaviorPerPath { +func expandCacheBehaviorsPerPath(tfList []interface{}) []types.CacheBehaviorPerPath { if len(tfList) == 0 { return nil } - var s []*lightsail.CacheBehaviorPerPath + var s []types.CacheBehaviorPerPath for _, r := range tfList { m, ok := r.(map[string]interface{}) @@ -811,7 +816,7 @@ func expandCacheBehaviorsPerPath(tfList []interface{}) []*lightsail.CacheBehavio a := expandCacheBehaviorPerPath(m) - if a == nil { + if a == (types.CacheBehaviorPerPath{}) { continue } @@ -821,12 +826,12 @@ func expandCacheBehaviorsPerPath(tfList []interface{}) []*lightsail.CacheBehavio return s } -func expandAllowList(tfList []interface{}) []*string { +func expandAllowList(tfList []interface{}) []string { if len(tfList) == 0 { return nil } - var s []*string + var s []string for _, r := range tfList { m, ok := r.(string) @@ -835,54 +840,73 @@ func expandAllowList(tfList []interface{}) []*string { continue } - s = append(s, aws.String(m)) + s = append(s, m) } return s } -func expandCookieObject(tfMap map[string]interface{}) *lightsail.CookieObject { +func expandHeaderEnumList(tfList []interface{}) []types.HeaderEnum { + if len(tfList) == 0 { + return nil + } + + var s []types.HeaderEnum + + for _, r := range tfList { + m, ok := r.(string) + + if !ok { + continue + } + + s = append(s, types.HeaderEnum(m)) + } + + return s +} +func expandCookieObject(tfMap map[string]interface{}) *types.CookieObject { if tfMap == nil { return nil } - a := &lightsail.CookieObject{} + a := &types.CookieObject{} if v, ok := tfMap["cookies_allow_list"]; ok && len(v.(*schema.Set).List()) > 0 { a.CookiesAllowList = expandAllowList(v.(*schema.Set).List()) } if v, ok := tfMap["option"].(string); ok && v != "" { - a.Option = 
aws.String(v) + a.Option = types.ForwardValues(v) } return a } -func expandHeaderObject(tfMap map[string]interface{}) *lightsail.HeaderObject { +func expandHeaderObject(tfMap map[string]interface{}) *types.HeaderObject { if tfMap == nil { return nil } - a := &lightsail.HeaderObject{} + a := &types.HeaderObject{} if v, ok := tfMap["headers_allow_list"]; ok && len(v.(*schema.Set).List()) > 0 { - a.HeadersAllowList = expandAllowList(v.(*schema.Set).List()) + a.HeadersAllowList = expandHeaderEnumList(v.(*schema.Set).List()) } if v, ok := tfMap["option"].(string); ok && v != "" { - a.Option = aws.String(v) + a.Option = types.ForwardValues(v) } return a } -func expandQueryStringObject(tfMap map[string]interface{}) *lightsail.QueryStringObject { +func expandQueryStringObject(tfMap map[string]interface{}) *types.QueryStringObject { if tfMap == nil { return nil } - a := &lightsail.QueryStringObject{} + a := &types.QueryStringObject{} if v, ok := tfMap["query_strings_allowed_list"]; ok && len(v.(*schema.Set).List()) > 0 { a.QueryStringsAllowList = expandAllowList(v.(*schema.Set).List()) @@ -895,12 +919,12 @@ func expandQueryStringObject(tfMap map[string]interface{}) *lightsail.QueryStrin return a } -func expandCacheSettings(tfMap map[string]interface{}) *lightsail.CacheSettings { +func expandCacheSettings(tfMap map[string]interface{}) *types.CacheSettings { if tfMap == nil { return nil } - a := &lightsail.CacheSettings{} + a := &types.CacheSettings{} if v, ok := tfMap["allowed_http_methods"].(string); ok && v != "" { a.AllowedHTTPMethods = aws.String(v) @@ -937,16 +961,66 @@ func expandCacheSettings(tfMap map[string]interface{}) *lightsail.CacheSettings return a } -func expandCacheBehavior(tfMap map[string]interface{}) *lightsail.CacheBehavior { +func expandCacheBehavior(tfMap map[string]interface{}) *types.CacheBehavior { if tfMap == nil { return nil } - a := &lightsail.CacheBehavior{} + a := &types.CacheBehavior{} if v, ok := tfMap["behavior"].(string); ok && v != "" { - 
a.Behavior = aws.String(v) + a.Behavior = types.BehaviorEnum(v) } return a } + +func flattenForwardValuesValues(t []types.ForwardValues) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func flattenHeaderEnumValues(t []types.HeaderEnum) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func flattenIPAddressTypeValues(t []types.IpAddressType) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func flattenBehaviorEnumValues(t []types.BehaviorEnum) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func flattenOriginProtocolPolicyEnumValues(t []types.OriginProtocolPolicyEnum) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} diff --git a/internal/service/lightsail/distribution_test.go b/internal/service/lightsail/distribution_test.go index f8a56de28a7..b9c5a81d95e 100644 --- a/internal/service/lightsail/distribution_test.go +++ b/internal/service/lightsail/distribution_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,10 +8,11 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -53,11 +57,11 @@ func testAccDistribution_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ @@ -122,11 +126,11 @@ func testAccDistribution_isEnabled(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ @@ -165,11 +169,11 @@ func 
testAccDistribution_cacheBehavior(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ @@ -224,11 +228,11 @@ func testAccDistribution_defaultCacheBehavior(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ @@ -270,19 +274,19 @@ func testAccDistribution_ipAddressType(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), 
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccDistributionConfig_ipAddressType(rName, bucketName, lightsail.IpAddressTypeIpv4), + Config: testAccDistributionConfig_ipAddressType(rName, bucketName, string(types.IpAddressTypeIpv4)), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckDistributionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "ip_address_type", lightsail.IpAddressTypeIpv4), + resource.TestCheckResourceAttr(resourceName, "ip_address_type", string(types.IpAddressTypeIpv4)), ), }, { @@ -291,10 +295,10 @@ func testAccDistribution_ipAddressType(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccDistributionConfig_ipAddressType(rName, bucketName, lightsail.IpAddressTypeDualstack), + Config: testAccDistributionConfig_ipAddressType(rName, bucketName, string(types.IpAddressTypeDualstack)), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckDistributionExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "ip_address_type", lightsail.IpAddressTypeDualstack), + resource.TestCheckResourceAttr(resourceName, "ip_address_type", string(types.IpAddressTypeDualstack)), ), }, }, @@ -314,11 +318,11 @@ func testAccDistribution_cacheBehaviorSettings(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: 
[]resource.TestStep{ @@ -388,11 +392,11 @@ func testAccDistribution_tags(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ @@ -439,11 +443,11 @@ func testAccDistribution_disappears(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) - acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) + acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDistributionDestroy(ctx), Steps: []resource.TestStep{ @@ -461,7 +465,7 @@ func testAccDistribution_disappears(t *testing.T) { func testAccCheckDistributionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_distribution" { @@ -496,7 +500,7 @@ func testAccCheckDistributionExists(ctx context.Context, name string) resource.T return 
create.Error(names.Lightsail, create.ErrActionCheckingExistence, tflightsail.ResNameDistribution, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) resp, err := tflightsail.FindDistributionByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/lightsail/domain.go b/internal/service/lightsail/domain.go index 234a0061012..55f5c9c3bf5 100644 --- a/internal/service/lightsail/domain.go +++ b/internal/service/lightsail/domain.go @@ -1,12 +1,14 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -36,8 +38,8 @@ func ResourceDomain() *schema.Resource { func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() - _, err := conn.CreateDomainWithContext(ctx, &lightsail.CreateDomainInput{ + conn := meta.(*conns.AWSClient).LightsailClient(ctx) + _, err := conn.CreateDomain(ctx, &lightsail.CreateDomainInput{ DomainName: aws.String(d.Get("domain_name").(string)), }) @@ -52,13 +54,13 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() - resp, err := conn.GetDomainWithContext(ctx, &lightsail.GetDomainInput{ + conn := 
meta.(*conns.AWSClient).LightsailClient(ctx) + resp, err := conn.GetDomain(ctx, &lightsail.GetDomainInput{ DomainName: aws.String(d.Id()), }) if err != nil { - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if IsANotFoundError(err) { log.Printf("[WARN] Lightsail Domain (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -72,8 +74,8 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() - _, err := conn.DeleteDomainWithContext(ctx, &lightsail.DeleteDomainInput{ + conn := meta.(*conns.AWSClient).LightsailClient(ctx) + _, err := conn.DeleteDomain(ctx, &lightsail.DeleteDomainInput{ DomainName: aws.String(d.Id()), }) diff --git a/internal/service/lightsail/domain_entry.go b/internal/service/lightsail/domain_entry.go index 96b7e42f4f1..f38a2dbd2db 100644 --- a/internal/service/lightsail/domain_entry.go +++ b/internal/service/lightsail/domain_entry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -5,15 +8,16 @@ import ( "fmt" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -75,12 +79,12 @@ func ResourceDomainEntry() *schema.Resource { } func resourceDomainEntryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) name := d.Get("name").(string) req := &lightsail.CreateDomainEntryInput{ DomainName: aws.String(d.Get("domain_name").(string)), - DomainEntry: &lightsail.DomainEntry{ + DomainEntry: &types.DomainEntry{ IsAlias: aws.Bool(d.Get("is_alias").(bool)), Name: aws.String(expandDomainEntryName(name, d.Get("domain_name").(string))), Target: aws.String(d.Get("target").(string)), @@ -88,13 +92,13 @@ func resourceDomainEntryCreate(ctx context.Context, d *schema.ResourceData, meta }, } - resp, err := conn.CreateDomainEntryWithContext(ctx, req) + resp, err := conn.CreateDomainEntry(ctx, req) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateDomain, ResNameDomainEntry, name, err) 
+ return create.DiagError(names.Lightsail, string(types.OperationTypeCreateDomain), ResNameDomainEntry, name, err) } - diag := expandOperations(ctx, conn, []*lightsail.Operation{resp.Operation}, lightsail.OperationTypeCreateDomain, ResNameDomainEntry, name) + diag := expandOperations(ctx, conn, []types.Operation{*resp.Operation}, types.OperationTypeCreateDomain, ResNameDomainEntry, name) if diag != nil { return diag @@ -120,7 +124,7 @@ func resourceDomainEntryCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceDomainEntryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) entry, err := FindDomainEntryById(ctx, conn, d.Id()) @@ -140,7 +144,7 @@ func resourceDomainEntryRead(ctx context.Context, d *schema.ResourceData, meta i return create.DiagError(names.Lightsail, create.ErrActionExpandingResourceId, ResNameDomainEntry, d.Id(), err) } - name := flattenDomainEntryName(aws.StringValue(entry.Name), domainName) + name := flattenDomainEntryName(aws.ToString(entry.Name), domainName) partCount := flex.ResourceIdPartCount(d.Id()) @@ -149,8 +153,8 @@ func resourceDomainEntryRead(ctx context.Context, d *schema.ResourceData, meta i idParts := []string{ name, domainName, - aws.StringValue(entry.Type), - aws.StringValue(entry.Target), + aws.ToString(entry.Type), + aws.ToString(entry.Target), } id, err := flex.FlattenResourceId(idParts, DomainEntryIdPartsCount, true) @@ -171,7 +175,7 @@ func resourceDomainEntryRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceDomainEntryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) domainName, err := expandDomainNameFromId(d.Id()) @@ -185,12 +189,12 @@ func resourceDomainEntryDelete(ctx context.Context, d 
*schema.ResourceData, meta return create.DiagError(names.Lightsail, create.ErrActionExpandingResourceId, ResNameDomainEntry, d.Id(), err) } - resp, err := conn.DeleteDomainEntryWithContext(ctx, &lightsail.DeleteDomainEntryInput{ + resp, err := conn.DeleteDomainEntry(ctx, &lightsail.DeleteDomainEntryInput{ DomainName: aws.String(domainName), DomainEntry: domainEntry, }) - if err != nil && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if err != nil && errs.IsA[*types.NotFoundException](err) { return nil } @@ -198,7 +202,7 @@ func resourceDomainEntryDelete(ctx context.Context, d *schema.ResourceData, meta return create.DiagError(names.Lightsail, create.ErrActionDeleting, ResNameDomainEntry, d.Id(), err) } - diag := expandOperations(ctx, conn, []*lightsail.Operation{resp.Operation}, lightsail.OperationTypeDeleteDomain, ResNameDomainEntry, d.Id()) + diag := expandOperations(ctx, conn, []types.Operation{*resp.Operation}, types.OperationTypeDeleteDomain, ResNameDomainEntry, d.Id()) if diag != nil { return diag @@ -207,7 +211,7 @@ func resourceDomainEntryDelete(ctx context.Context, d *schema.ResourceData, meta return nil } -func expandDomainEntry(id string) (*lightsail.DomainEntry, error) { +func expandDomainEntry(id string) (*types.DomainEntry, error) { partCount := flex.ResourceIdPartCount(id) var name string @@ -242,7 +246,7 @@ func expandDomainEntry(id string) (*lightsail.DomainEntry, error) { recordType = idParts[2] recordTarget = idParts[3] } - entry := &lightsail.DomainEntry{ + entry := &types.DomainEntry{ Name: aws.String(expandDomainEntryName(name, domainName)), Type: aws.String(recordType), Target: aws.String(recordTarget), @@ -304,7 +308,7 @@ func flattenDomainEntryName(name, domainName string) string { return rn } -func FindDomainEntryById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.DomainEntry, error) { +func FindDomainEntryById(ctx context.Context, conn *lightsail.Client, id string) (*types.DomainEntry, error) 
{ partCount := flex.ResourceIdPartCount(id) in := &lightsail.GetDomainInput{} @@ -352,9 +356,9 @@ func FindDomainEntryById(ctx context.Context, conn *lightsail.Lightsail, id stri in.DomainName = aws.String(domainName) - out, err := conn.GetDomainWithContext(ctx, in) + out, err := conn.GetDomain(ctx, in) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if IsANotFoundError(err) { return nil, &retry.NotFoundError{ LastError: err, LastRequest: in, @@ -365,11 +369,11 @@ func FindDomainEntryById(ctx context.Context, conn *lightsail.Lightsail, id stri return nil, err } - var entry *lightsail.DomainEntry + var entry types.DomainEntry entryExists := false for _, n := range out.Domain.DomainEntries { - if entryName == aws.StringValue(n.Name) && recordType == aws.StringValue(n.Type) && recordTarget == aws.StringValue(n.Target) { + if entryName == aws.ToString(n.Name) && recordType == aws.ToString(n.Type) && recordTarget == aws.ToString(n.Target) { entry = n entryExists = true break @@ -380,5 +384,5 @@ func FindDomainEntryById(ctx context.Context, conn *lightsail.Lightsail, id stri return nil, tfresource.NewEmptyResultError(in) } - return entry, nil + return &entry, nil } diff --git a/internal/service/lightsail/domain_entry_test.go b/internal/service/lightsail/domain_entry_test.go index 2cfe0e0fb22..1f66999ce8a 100644 --- a/internal/service/lightsail/domain_entry_test.go +++ b/internal/service/lightsail/domain_entry_test.go @@ -1,15 +1,17 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -23,21 +25,20 @@ import ( func TestAccLightsailDomainEntry_basic(t *testing.T) { ctx := acctest.Context(t) - var domainEntry lightsail.DomainEntry resourceName := "aws_lightsail_domain_entry.test" domainName := acctest.RandomDomainName() domainEntryName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainEntryDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDomainEntryConfig_basic(domainName, domainEntryName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDomainEntryExists(ctx, resourceName, &domainEntry), + testAccCheckDomainEntryExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), resource.TestCheckResourceAttr(resourceName, "name", domainEntryName), resource.TestCheckResourceAttr(resourceName, "target", "127.0.0.1"), @@ -64,21 +65,20 @@ func TestAccLightsailDomainEntry_basic(t *testing.T) { func TestAccLightsailDomainEntry_underscore(t *testing.T) { ctx := 
acctest.Context(t) - var domainEntry lightsail.DomainEntry resourceName := "aws_lightsail_domain_entry.test" domainName := acctest.RandomDomainName() domainEntryName := "_" + sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainEntryDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDomainEntryConfig_basic(domainName, domainEntryName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDomainEntryExists(ctx, resourceName, &domainEntry), + testAccCheckDomainEntryExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), resource.TestCheckResourceAttr(resourceName, "name", domainEntryName), resource.TestCheckResourceAttr(resourceName, "target", "127.0.0.1"), @@ -105,21 +105,20 @@ func TestAccLightsailDomainEntry_underscore(t *testing.T) { func TestAccLightsailDomainEntry_apex(t *testing.T) { ctx := acctest.Context(t) - var domainEntry lightsail.DomainEntry resourceName := "aws_lightsail_domain_entry.test" domainName := acctest.RandomDomainName() domainEntryName := "" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: 
testAccCheckDomainEntryDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDomainEntryConfig_basic(domainName, domainEntryName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDomainEntryExists(ctx, resourceName, &domainEntry), + testAccCheckDomainEntryExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), resource.TestCheckResourceAttr(resourceName, "name", domainEntryName), resource.TestCheckResourceAttr(resourceName, "target", "127.0.0.1"), @@ -146,43 +145,21 @@ func TestAccLightsailDomainEntry_apex(t *testing.T) { func TestAccLightsailDomainEntry_disappears(t *testing.T) { ctx := acctest.Context(t) - var domainEntry lightsail.DomainEntry resourceName := "aws_lightsail_domain_entry.test" domainName := acctest.RandomDomainName() domainEntryName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - testDestroy := func(*terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() - _, err := conn.DeleteDomainEntryWithContext(ctx, &lightsail.DeleteDomainEntryInput{ - DomainName: aws.String(domainName), - DomainEntry: &lightsail.DomainEntry{ - Name: aws.String(fmt.Sprintf("%s.%s", domainEntryName, domainName)), - Type: aws.String("A"), - Target: aws.String("127.0.0.1"), - }, - }) - - if err != nil { - return fmt.Errorf("error deleting Lightsail Domain Entry in disappear test") - } - - // sleep 7 seconds to give it time, so we don't have to poll - time.Sleep(7 * time.Second) - - return nil - } - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: 
testAccCheckDomainEntryDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDomainEntryConfig_basic(domainName, domainEntryName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDomainEntryExists(ctx, resourceName, &domainEntry), - testDestroy, + testAccCheckDomainEntryExists(ctx, resourceName), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tflightsail.ResourceDomainEntry(), resourceName), ), ExpectNonEmptyPlan: true, }, @@ -190,7 +167,7 @@ func TestAccLightsailDomainEntry_disappears(t *testing.T) { }) } -func testAccCheckDomainEntryExists(ctx context.Context, n string, domainEntry *lightsail.DomainEntry) resource.TestCheckFunc { +func testAccCheckDomainEntryExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -202,7 +179,7 @@ func testAccCheckDomainEntryExists(ctx context.Context, n string, domainEntry *l return errors.New("No Lightsail Domain Entry ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) resp, err := tflightsail.FindDomainEntryById(ctx, conn, rs.Primary.ID) @@ -214,8 +191,6 @@ func testAccCheckDomainEntryExists(ctx context.Context, n string, domainEntry *l return fmt.Errorf("DomainEntry %q does not exist", rs.Primary.ID) } - *domainEntry = *resp - return nil } } @@ -227,7 +202,7 @@ func testAccCheckDomainEntryDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) _, err := tflightsail.FindDomainEntryById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/lightsail/domain_test.go b/internal/service/lightsail/domain_test.go index d5fc7f14f1b..f5d3ea8bc8d 100644 --- a/internal/service/lightsail/domain_test.go +++ b/internal/service/lightsail/domain_test.go @@ -1,15 +1,18 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -20,20 +23,19 @@ import ( func TestAccLightsailDomain_basic(t *testing.T) { ctx := acctest.Context(t) - var domain lightsail.Domain lightsailDomainName := fmt.Sprintf("tf-test-lightsail-%s.com", sdkacctest.RandString(5)) resourceName := "aws_lightsail_domain.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDomainConfig_basic(lightsailDomainName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDomainExists(ctx, resourceName, &domain), + testAccCheckDomainExists(ctx, resourceName), ), }, }, @@ -42,20 +44,19 @@ func TestAccLightsailDomain_basic(t *testing.T) { func TestAccLightsailDomain_disappears(t *testing.T) { ctx := acctest.Context(t) - var domain lightsail.Domain lightsailDomainName := fmt.Sprintf("tf-test-lightsail-%s.com", sdkacctest.RandString(5)) resourceName := 
"aws_lightsail_domain.test" resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, endpoints.UsEast1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckRegion(t, string(types.RegionNameUsEast1)) }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccDomainConfig_basic(lightsailDomainName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckDomainExists(ctx, resourceName, &domain), + testAccCheckDomainExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tflightsail.ResourceDomain(), resourceName), ), ExpectNonEmptyPlan: true, @@ -64,7 +65,7 @@ func TestAccLightsailDomain_disappears(t *testing.T) { }) } -func testAccCheckDomainExists(ctx context.Context, n string, domain *lightsail.Domain) resource.TestCheckFunc { +func testAccCheckDomainExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -75,9 +76,9 @@ func testAccCheckDomainExists(ctx context.Context, n string, domain *lightsail.D return errors.New("No Lightsail Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetDomainWithContext(ctx, &lightsail.GetDomainInput{ + resp, err := conn.GetDomain(ctx, &lightsail.GetDomainInput{ DomainName: aws.String(rs.Primary.ID), }) @@ -88,7 +89,7 @@ func testAccCheckDomainExists(ctx context.Context, n string, domain *lightsail.D if resp == nil || resp.Domain == nil { return fmt.Errorf("Domain (%s) not found", rs.Primary.ID) } - *domain = *resp.Domain + return nil } } @@ -100,13 +101,13 @@ func 
testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetDomainWithContext(ctx, &lightsail.GetDomainInput{ + resp, err := conn.GetDomain(ctx, &lightsail.GetDomainInput{ DomainName: aws.String(rs.Primary.ID), }) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } diff --git a/internal/service/lightsail/errs.go b/internal/service/lightsail/errs.go new file mode 100644 index 00000000000..5de49bd184f --- /dev/null +++ b/internal/service/lightsail/errs.go @@ -0,0 +1,20 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package lightsail + +import ( + "strings" + + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" + "github.com/hashicorp/terraform-provider-aws/internal/errs" +) + +// Some operations do not properly return the types.NotFoundException error. +// IsANotFoundError matches types.NotFoundException or any error whose text contains "DoesNotExist". +func IsANotFoundError(err error) bool { + if err == nil { + return false + } + return errs.IsA[*types.NotFoundException](err) || strings.Contains(err.Error(), "DoesNotExist") +} diff --git a/internal/service/lightsail/find.go b/internal/service/lightsail/find.go deleted file mode 100644 index f7e537c9c9a..00000000000 --- a/internal/service/lightsail/find.go +++ /dev/null @@ -1,496 +0,0 @@ -package lightsail - -import ( - "context" - "errors" - "strings" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/flex" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -func 
FindCertificateByName(ctx context.Context, conn *lightsail.Lightsail, name string) (*lightsail.Certificate, error) { - in := &lightsail.GetCertificatesInput{ - CertificateName: aws.String(name), - } - - out, err := conn.GetCertificatesWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || len(out.Certificates) == 0 || out.Certificates[0] == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - return out.Certificates[0].CertificateDetail, nil -} - -func FindContainerServiceByName(ctx context.Context, conn *lightsail.Lightsail, serviceName string) (*lightsail.ContainerService, error) { - input := &lightsail.GetContainerServicesInput{ - ServiceName: aws.String(serviceName), - } - - output, err := conn.GetContainerServicesWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil || len(output.ContainerServices) == 0 { - return nil, tfresource.NewEmptyResultError(input) - } - - if count := len(output.ContainerServices); count > 1 { - return nil, tfresource.NewTooManyResultsError(count, input) - } - - return output.ContainerServices[0], nil -} - -func FindContainerServiceDeploymentByVersion(ctx context.Context, conn *lightsail.Lightsail, serviceName string, version int) (*lightsail.ContainerServiceDeployment, error) { - input := &lightsail.GetContainerServiceDeploymentsInput{ - ServiceName: aws.String(serviceName), - } - - output, err := conn.GetContainerServiceDeploymentsWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if 
output == nil || len(output.Deployments) == 0 { - return nil, tfresource.NewEmptyResultError(input) - } - - var result *lightsail.ContainerServiceDeployment - - for _, deployment := range output.Deployments { - if deployment == nil { - continue - } - - if int(aws.Int64Value(deployment.Version)) == version { - result = deployment - break - } - } - - if result == nil { - return nil, &retry.NotFoundError{ - Message: "Empty result", - LastRequest: input, - } - } - - return result, nil -} - -func FindDiskById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.Disk, error) { - in := &lightsail.GetDiskInput{ - DiskName: aws.String(id), - } - - out, err := conn.GetDiskWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || out.Disk == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - return out.Disk, nil -} - -func FindDiskAttachmentById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.Disk, error) { - id_parts := strings.SplitN(id, ",", -1) - - if len(id_parts) != 2 { - return nil, errors.New("invalid Disk Attachment id") - } - - dName := id_parts[0] - iName := id_parts[1] - - in := &lightsail.GetDiskInput{ - DiskName: aws.String(dName), - } - - out, err := conn.GetDiskWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - disk := out.Disk - - if disk == nil || !aws.BoolValue(disk.IsAttached) || aws.StringValue(disk.Name) != dName || aws.StringValue(disk.AttachedTo) != iName { - return nil, tfresource.NewEmptyResultError(in) - } - - return out.Disk, nil -} - -func FindLoadBalancerByName(ctx context.Context, conn *lightsail.Lightsail, name string) (*lightsail.LoadBalancer, 
error) { - in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(name)} - out, err := conn.GetLoadBalancerWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || out.LoadBalancer == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - lb := out.LoadBalancer - - return lb, nil -} - -func FindLoadBalancerAttachmentById(ctx context.Context, conn *lightsail.Lightsail, id string) (*string, error) { - id_parts := strings.SplitN(id, ",", -1) - if len(id_parts) != 2 { - return nil, errors.New("invalid load balancer attachment id") - } - - lbName := id_parts[0] - iName := id_parts[1] - - in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(lbName)} - out, err := conn.GetLoadBalancerWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - var entry *string - entryExists := false - - for _, n := range out.LoadBalancer.InstanceHealthSummary { - if iName == aws.StringValue(n.InstanceName) { - entry = n.InstanceName - entryExists = true - break - } - } - - if !entryExists { - return nil, tfresource.NewEmptyResultError(in) - } - - return entry, nil -} - -func FindLoadBalancerCertificateById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.LoadBalancerTlsCertificate, error) { - id_parts := strings.SplitN(id, ",", -1) - if len(id_parts) != 2 { - return nil, errors.New("invalid load balancer certificate id") - } - - lbName := id_parts[0] - cName := id_parts[1] - - in := &lightsail.GetLoadBalancerTlsCertificatesInput{LoadBalancerName: aws.String(lbName)} - out, err := conn.GetLoadBalancerTlsCertificatesWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, 
lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - var entry *lightsail.LoadBalancerTlsCertificate - entryExists := false - - for _, n := range out.TlsCertificates { - if cName == aws.StringValue(n.Name) { - entry = n - entryExists = true - break - } - } - - if !entryExists { - return nil, tfresource.NewEmptyResultError(in) - } - - return entry, nil -} - -func FindLoadBalancerCertificateAttachmentById(ctx context.Context, conn *lightsail.Lightsail, id string) (*string, error) { - id_parts := strings.SplitN(id, ",", -1) - if len(id_parts) != 2 { - return nil, errors.New("invalid load balancer certificate attachment id") - } - - lbName := id_parts[0] - cName := id_parts[1] - - in := &lightsail.GetLoadBalancerTlsCertificatesInput{LoadBalancerName: aws.String(lbName)} - out, err := conn.GetLoadBalancerTlsCertificatesWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - var entry *string - entryExists := false - - for _, n := range out.TlsCertificates { - if cName == aws.StringValue(n.Name) && aws.BoolValue(n.IsAttached) { - entry = n.Name - entryExists = true - break - } - } - - if !entryExists { - return nil, tfresource.NewEmptyResultError(in) - } - - return entry, nil -} - -func FindLoadBalancerStickinessPolicyById(ctx context.Context, conn *lightsail.Lightsail, id string) (map[string]*string, error) { - in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(id)} - out, err := conn.GetLoadBalancerWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || out.LoadBalancer.ConfigurationOptions == nil { - 
return nil, tfresource.NewEmptyResultError(in) - } - - return out.LoadBalancer.ConfigurationOptions, nil -} - -func FindLoadBalancerHTTPSRedirectionPolicyById(ctx context.Context, conn *lightsail.Lightsail, id string) (*bool, error) { - in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(id)} - out, err := conn.GetLoadBalancerWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || out.LoadBalancer.HttpsRedirectionEnabled == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - return out.LoadBalancer.HttpsRedirectionEnabled, nil -} - -func FindBucketById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.Bucket, error) { - in := &lightsail.GetBucketsInput{BucketName: aws.String(id)} - out, err := conn.GetBucketsWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || len(out.Buckets) == 0 || out.Buckets[0] == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - return out.Buckets[0], nil -} - -func FindInstanceById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.Instance, error) { - in := &lightsail.GetInstanceInput{InstanceName: aws.String(id)} - out, err := conn.GetInstanceWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || out.Instance == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - return out.Instance, nil -} - -func FindBucketAccessKeyById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.AccessKey, error) { 
- parts, err := flex.ExpandResourceId(id, BucketAccessKeyIdPartsCount, false) - - if err != nil { - return nil, err - } - - in := &lightsail.GetBucketAccessKeysInput{BucketName: aws.String(parts[0])} - out, err := conn.GetBucketAccessKeysWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - var entry *lightsail.AccessKey - entryExists := false - - for _, n := range out.AccessKeys { - if parts[1] == aws.StringValue(n.AccessKeyId) { - entry = n - entryExists = true - break - } - } - - if !entryExists { - return nil, tfresource.NewEmptyResultError(in) - } - - return entry, nil -} - -func FindBucketResourceAccessById(ctx context.Context, conn *lightsail.Lightsail, id string) (*lightsail.ResourceReceivingAccess, error) { - parts, err := flex.ExpandResourceId(id, BucketAccessKeyIdPartsCount, false) - - if err != nil { - return nil, err - } - - in := &lightsail.GetBucketsInput{ - BucketName: aws.String(parts[0]), - IncludeConnectedResources: aws.Bool(true), - } - - out, err := conn.GetBucketsWithContext(ctx, in) - - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } - } - - if err != nil { - return nil, err - } - - if out == nil || len(out.Buckets) == 0 || out.Buckets[0] == nil { - return nil, tfresource.NewEmptyResultError(in) - } - - bucket := out.Buckets[0] - var entry *lightsail.ResourceReceivingAccess - entryExists := false - - for _, n := range bucket.ResourcesReceivingAccess { - if parts[1] == aws.StringValue(n.Name) { - entry = n - entryExists = true - break - } - } - - if !entryExists { - return nil, tfresource.NewEmptyResultError(in) - } - - return entry, nil -} diff --git a/internal/service/lightsail/flex.go b/internal/service/lightsail/flex.go index 7bf6b736ec8..125ffe1b899 100644 --- 
a/internal/service/lightsail/flex.go +++ b/internal/service/lightsail/flex.go @@ -1,35 +1,39 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" "errors" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/names" ) // expandOperations provides a uniform approach for handling lightsail operations and errors. -func expandOperations(ctx context.Context, conn *lightsail.Lightsail, operations []*lightsail.Operation, action string, resource string, id string) diag.Diagnostics { +func expandOperations(ctx context.Context, conn *lightsail.Client, operations []types.Operation, action types.OperationType, resource string, id string) diag.Diagnostics { if len(operations) == 0 { - return create.DiagError(names.Lightsail, action, resource, id, errors.New("no operations found for request")) + return create.DiagError(names.Lightsail, string(action), resource, id, errors.New("no operations found for request")) } op := operations[0] err := waitOperation(ctx, conn, op.Id) if err != nil { - return create.DiagError(names.Lightsail, action, resource, id, errors.New("error waiting for request operation")) + return create.DiagError(names.Lightsail, string(action), resource, id, errors.New("error waiting for request operation")) } return nil } // expandOperation provides a uniform approach for handling a single lightsail operation and errors. 
-func expandOperation(ctx context.Context, conn *lightsail.Lightsail, operation *lightsail.Operation, action string, resource string, id string) diag.Diagnostics { - diag := expandOperations(ctx, conn, []*lightsail.Operation{operation}, action, resource, id) +func expandOperation(ctx context.Context, conn *lightsail.Client, operation types.Operation, action types.OperationType, resource string, id string) diag.Diagnostics { + diag := expandOperations(ctx, conn, []types.Operation{operation}, action, resource, id) if diag != nil { return diag @@ -38,7 +42,7 @@ func expandOperation(ctx context.Context, conn *lightsail.Lightsail, operation * return nil } -func flattenResourceLocation(apiObject *lightsail.ResourceLocation) map[string]interface{} { +func flattenResourceLocation(apiObject *types.ResourceLocation) map[string]interface{} { if apiObject == nil { return nil } @@ -46,11 +50,11 @@ func flattenResourceLocation(apiObject *lightsail.ResourceLocation) map[string]i m := map[string]interface{}{} if v := apiObject.AvailabilityZone; v != nil { - m["availability_zone"] = aws.StringValue(v) + m["availability_zone"] = aws.ToString(v) } - if v := apiObject.RegionName; v != nil { - m["region_name"] = aws.StringValue(v) + if v := apiObject.RegionName; string(v) != "" { + m["region_name"] = string(v) } return m diff --git a/internal/service/lightsail/generate.go b/internal/service/lightsail/generate.go index df8aeadb43b..d1aad92a595 100644 --- a/internal/service/lightsail/generate.go +++ b/internal/service/lightsail/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTagsInIDElem=ResourceName -ServiceTagsSlice -TagInIDElem=ResourceName -UpdateTags +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTagsInIDElem=ResourceName -ServiceTagsSlice -TagInIDElem=ResourceName -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package lightsail diff --git a/internal/service/lightsail/instance.go b/internal/service/lightsail/instance.go index f53d788b576..f0bb59c0424 100644 --- a/internal/service/lightsail/instance.go +++ b/internal/service/lightsail/instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -7,15 +10,17 @@ import ( "strings" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -44,7 +49,7 @@ func ResourceInstance() *schema.Resource { "type": { Type: schema.TypeString, Required: true, - ValidateFunc: validation.StringInSlice(lightsail.AddOnType_Values(), false), + ValidateFunc: 
validation.StringInSlice(flattenAddOnTypeValues(types.AddOnType("").Values()), false), }, "snapshot_time": { Type: schema.TypeString, @@ -129,11 +134,6 @@ func ResourceInstance() *schema.Resource { Optional: true, Default: "dualstack", }, - "ipv6_address": { - Type: schema.TypeString, - Computed: true, - Deprecated: "use `ipv6_addresses` attribute instead", - }, "ipv6_addresses": { Type: schema.TypeList, Computed: true, @@ -172,7 +172,7 @@ func ResourceInstance() *schema.Resource { } func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) iName := d.Get("name").(string) @@ -180,8 +180,8 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in AvailabilityZone: aws.String(d.Get("availability_zone").(string)), BlueprintId: aws.String(d.Get("blueprint_id").(string)), BundleId: aws.String(d.Get("bundle_id").(string)), - InstanceNames: aws.StringSlice([]string{iName}), - Tags: GetTagsIn(ctx), + InstanceNames: []string{iName}, + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("key_pair_name"); ok { @@ -193,15 +193,15 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in } if v, ok := d.GetOk("ip_address_type"); ok { - in.IpAddressType = aws.String(v.(string)) + in.IpAddressType = types.IpAddressType(v.(string)) } - out, err := conn.CreateInstancesWithContext(ctx, &in) + out, err := conn.CreateInstances(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateInstance, ResInstance, iName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeCreateInstance), ResInstance, iName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeCreateInstance, ResInstance, iName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeCreateInstance, ResInstance, iName) 
if diag != nil { return diag @@ -216,13 +216,13 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in AddOnRequest: expandAddOnRequest(d.Get("add_on").([]interface{})), } - out, err := conn.EnableAddOnWithContext(ctx, &in) + out, err := conn.EnableAddOn(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeEnableAddOn, ResInstance, iName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeEnableAddOn), ResInstance, iName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeEnableAddOn, ResInstance, iName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeEnableAddOn, ResInstance, iName) if diag != nil { return diag @@ -233,7 +233,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindInstanceById(ctx, conn, d.Id()) @@ -261,30 +261,25 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("cpu_count", out.Hardware.CpuCount) d.Set("ram_size", out.Hardware.RamSizeInGb) - // Deprecated: AWS Go SDK v1.36.25 removed Ipv6Address field - if len(out.Ipv6Addresses) > 0 { - d.Set("ipv6_address", out.Ipv6Addresses[0]) - } - - d.Set("ipv6_addresses", aws.StringValueSlice(out.Ipv6Addresses)) + d.Set("ipv6_addresses", out.Ipv6Addresses) d.Set("ip_address_type", out.IpAddressType) d.Set("is_static_ip", out.IsStaticIp) d.Set("private_ip_address", out.PrivateIpAddress) d.Set("public_ip_address", out.PublicIpAddress) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return nil } func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() - out, 
err := conn.DeleteInstanceWithContext(ctx, &lightsail.DeleteInstanceInput{ + conn := meta.(*conns.AWSClient).LightsailClient(ctx) + out, err := conn.DeleteInstance(ctx, &lightsail.DeleteInstanceInput{ InstanceName: aws.String(d.Id()), ForceDeleteAddOns: aws.Bool(true), }) - if err != nil && tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if err != nil && errs.IsA[*types.NotFoundException](err) { return nil } @@ -292,7 +287,7 @@ func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta in return create.DiagError(names.Lightsail, create.ErrActionDeleting, ResInstance, d.Id(), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDeleteInstance, ResInstance, d.Id()) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDeleteInstance, ResInstance, d.Id()) if diag != nil { return diag @@ -302,20 +297,20 @@ func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta in } func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) if d.HasChange("ip_address_type") { - out, err := conn.SetIpAddressTypeWithContext(ctx, &lightsail.SetIpAddressTypeInput{ + out, err := conn.SetIpAddressType(ctx, &lightsail.SetIpAddressTypeInput{ ResourceName: aws.String(d.Id()), - ResourceType: aws.String("Instance"), - IpAddressType: aws.String(d.Get("ip_address_type").(string)), + ResourceType: types.ResourceTypeInstance, + IpAddressType: types.IpAddressType(d.Get("ip_address_type").(string)), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeSetIpAddressType, ResInstance, d.Id(), err) + return create.DiagError(names.Lightsail, string(types.OperationTypeSetIpAddressType), ResInstance, d.Id(), err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeSetIpAddressType, 
ResInstance, d.Id()) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeSetIpAddressType, ResInstance, d.Id()) if diag != nil { return diag @@ -325,7 +320,7 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in if d.HasChange("add_on") { o, n := d.GetChange("add_on") - if err := updateAddOnWithContext(ctx, conn, d.Id(), o, n); err != nil { + if err := updateAddOn(ctx, conn, d.Id(), o, n); err != nil { return err } } @@ -333,17 +328,17 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in return resourceInstanceRead(ctx, d, meta) } -func expandAddOnRequest(addOnListRaw []interface{}) *lightsail.AddOnRequest { +func expandAddOnRequest(addOnListRaw []interface{}) *types.AddOnRequest { if len(addOnListRaw) == 0 { - return &lightsail.AddOnRequest{} + return &types.AddOnRequest{} } - addOnRequest := &lightsail.AddOnRequest{} + addOnRequest := &types.AddOnRequest{} for _, addOnRaw := range addOnListRaw { addOnMap := addOnRaw.(map[string]interface{}) - addOnRequest.AddOnType = aws.String(addOnMap["type"].(string)) - addOnRequest.AutoSnapshotAddOnRequest = &lightsail.AutoSnapshotAddOnRequest{ + addOnRequest.AddOnType = types.AddOnType(addOnMap["type"].(string)) + addOnRequest.AutoSnapshotAddOnRequest = &types.AutoSnapshotAddOnRequest{ SnapshotTimeOfDay: aws.String(addOnMap["snapshot_time"].(string)), } } @@ -365,14 +360,14 @@ func expandAddOnEnabled(addOnListRaw []interface{}) bool { return enabled } -func flattenAddOns(addOns []*lightsail.AddOn) []interface{} { +func flattenAddOns(addOns []types.AddOn) []interface{} { var rawAddOns []interface{} for _, addOn := range addOns { rawAddOn := map[string]interface{}{ - "type": aws.StringValue(addOn.Name), - "snapshot_time": aws.StringValue(addOn.SnapshotTimeOfDay), - "status": aws.StringValue(addOn.Status), + "type": aws.ToString(addOn.Name), + "snapshot_time": aws.ToString(addOn.SnapshotTimeOfDay), + "status": aws.ToString(addOn.Status), } 
rawAddOns = append(rawAddOns, rawAddOn) } @@ -380,7 +375,7 @@ func flattenAddOns(addOns []*lightsail.AddOn) []interface{} { return rawAddOns } -func updateAddOnWithContext(ctx context.Context, conn *lightsail.Lightsail, name string, oldAddOnsRaw interface{}, newAddOnsRaw interface{}) diag.Diagnostics { +func updateAddOn(ctx context.Context, conn *lightsail.Client, name string, oldAddOnsRaw interface{}, newAddOnsRaw interface{}) diag.Diagnostics { oldAddOns := expandAddOnRequest(oldAddOnsRaw.([]interface{})) newAddOns := expandAddOnRequest(newAddOnsRaw.([]interface{})) oldAddOnStatus := expandAddOnEnabled(oldAddOnsRaw.([]interface{})) @@ -392,13 +387,13 @@ func updateAddOnWithContext(ctx context.Context, conn *lightsail.Lightsail, name AddOnType: oldAddOns.AddOnType, } - out, err := conn.DisableAddOnWithContext(ctx, &in) + out, err := conn.DisableAddOn(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeDisableAddOn, ResInstance, name, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeDisableAddOn), ResInstance, name, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDisableAddOn, ResInstance, name) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDisableAddOn, ResInstance, name) if diag != nil { return diag @@ -411,13 +406,13 @@ func updateAddOnWithContext(ctx context.Context, conn *lightsail.Lightsail, name AddOnRequest: newAddOns, } - out, err := conn.EnableAddOnWithContext(ctx, &in) + out, err := conn.EnableAddOn(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeEnableAddOn, ResInstance, name, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeEnableAddOn), ResInstance, name, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeEnableAddOn, ResInstance, name) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeEnableAddOn, 
ResInstance, name) if diag != nil { return diag @@ -426,3 +421,35 @@ func updateAddOnWithContext(ctx context.Context, conn *lightsail.Lightsail, name return nil } + +func flattenAddOnTypeValues(t []types.AddOnType) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} + +func FindInstanceById(ctx context.Context, conn *lightsail.Client, id string) (*types.Instance, error) { + in := &lightsail.GetInstanceInput{InstanceName: aws.String(id)} + out, err := conn.GetInstance(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.Instance == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.Instance, nil +} diff --git a/internal/service/lightsail/instance_public_ports.go b/internal/service/lightsail/instance_public_ports.go index 433c87a321b..e3ef931cbba 100644 --- a/internal/service/lightsail/instance_public_ports.go +++ b/internal/service/lightsail/instance_public_ports.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -6,9 +9,9 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -78,7 +81,7 @@ func ResourceInstancePublicPorts() *schema.Resource { Type: schema.TypeString, Required: true, ForceNew: true, - ValidateFunc: validation.StringInSlice(lightsail.NetworkProtocol_Values(), false), + ValidateFunc: validation.StringInSlice(flattenNetworkProtocolValues(types.NetworkProtocol("").Values()), false), }, "to_port": { Type: schema.TypeInt, @@ -95,9 +98,9 @@ func ResourceInstancePublicPorts() *schema.Resource { func resourceInstancePublicPortsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) - var portInfos []*lightsail.PortInfo + var portInfos []types.PortInfo if v, ok := d.GetOk("port_info"); ok && v.(*schema.Set).Len() > 0 { portInfos = expandPortInfos(v.(*schema.Set).List()) } @@ -107,7 +110,7 @@ func resourceInstancePublicPortsCreate(ctx context.Context, d *schema.ResourceDa PortInfos: portInfos, } - _, err := conn.PutInstancePublicPortsWithContext(ctx, input) + _, err := conn.PutInstancePublicPorts(ctx, input) if err != nil { return sdkdiag.AppendErrorf(diags, "unable to create public ports for instance %s: %s", d.Get("instance_name").(string), err) @@ -115,7 +118,7 @@ func resourceInstancePublicPortsCreate(ctx context.Context, d *schema.ResourceDa var buffer bytes.Buffer for _, portInfo := range portInfos { - 
buffer.WriteString(fmt.Sprintf("%s-%d-%d\n", aws.StringValue(portInfo.Protocol), aws.Int64Value(portInfo.FromPort), aws.Int64Value(portInfo.ToPort))) + buffer.WriteString(fmt.Sprintf("%s-%d-%d\n", string(portInfo.Protocol), int64(portInfo.FromPort), int64(portInfo.ToPort))) } d.SetId(fmt.Sprintf("%s-%d", d.Get("instance_name").(string), create.StringHashcode(buffer.String()))) @@ -125,15 +128,15 @@ func resourceInstancePublicPortsCreate(ctx context.Context, d *schema.ResourceDa func resourceInstancePublicPortsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) input := &lightsail.GetInstancePortStatesInput{ InstanceName: aws.String(d.Get("instance_name").(string)), } - output, err := conn.GetInstancePortStatesWithContext(ctx, input) + output, err := conn.GetInstancePortStates(ctx, input) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, "NotFoundException") { + if !d.IsNewResource() && IsANotFoundError(err) { log.Printf("[WARN] Lightsail instance public ports (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -158,19 +161,19 @@ func resourceInstancePublicPortsRead(ctx context.Context, d *schema.ResourceData func resourceInstancePublicPortsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) var err *multierror.Error - var portInfos []*lightsail.PortInfo + var portInfos []types.PortInfo if v, ok := d.GetOk("port_info"); ok && v.(*schema.Set).Len() > 0 { portInfos = expandPortInfos(v.(*schema.Set).List()) } for _, portInfo := range portInfos { - _, portError := conn.CloseInstancePublicPortsWithContext(ctx, &lightsail.CloseInstancePublicPortsInput{ + _, portError := conn.CloseInstancePublicPorts(ctx, 
&lightsail.CloseInstancePublicPortsInput{ InstanceName: aws.String(d.Get("instance_name").(string)), - PortInfo: portInfo, + PortInfo: &portInfo, }) if portError != nil { @@ -185,38 +188,38 @@ func resourceInstancePublicPortsDelete(ctx context.Context, d *schema.ResourceDa return diags } -func expandPortInfo(tfMap map[string]interface{}) *lightsail.PortInfo { - if tfMap == nil { - return nil - } +func expandPortInfo(tfMap map[string]interface{}) types.PortInfo { - apiObject := &lightsail.PortInfo{ - FromPort: aws.Int64((int64)(tfMap["from_port"].(int))), - ToPort: aws.Int64((int64)(tfMap["to_port"].(int))), - Protocol: aws.String(tfMap["protocol"].(string)), + apiObject := types.PortInfo{ + FromPort: int32(tfMap["from_port"].(int)), + ToPort: int32(tfMap["to_port"].(int)), + Protocol: types.NetworkProtocol(tfMap["protocol"].(string)), } if v, ok := tfMap["cidrs"].(*schema.Set); ok && v.Len() > 0 { - apiObject.Cidrs = flex.ExpandStringSet(v) + apiObject.Cidrs = aws.ToStringSlice(flex.ExpandStringSet(v)) } if v, ok := tfMap["cidr_list_aliases"].(*schema.Set); ok && v.Len() > 0 { - apiObject.CidrListAliases = flex.ExpandStringSet(v) + apiObject.CidrListAliases = aws.ToStringSlice(flex.ExpandStringSet(v)) } if v, ok := tfMap["ipv6_cidrs"].(*schema.Set); ok && v.Len() > 0 { - apiObject.Ipv6Cidrs = flex.ExpandStringSet(v) + apiObject.Ipv6Cidrs = aws.ToStringSlice(flex.ExpandStringSet(v)) } return apiObject } -func expandPortInfos(tfList []interface{}) []*lightsail.PortInfo { +func expandPortInfos(tfList []interface{}) []types.PortInfo { if len(tfList) == 0 { return nil } - var apiObjects []*lightsail.PortInfo + var apiObjects []types.PortInfo for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]interface{}) @@ -227,43 +230,39 @@ func expandPortInfos(tfList []interface{}) []*lightsail.PortInfo { apiObject := expandPortInfo(tfMap) - if apiObject == nil { - continue - } - apiObjects = append(apiObjects, apiObject)
} return apiObjects } -func flattenInstancePortState(apiObject *lightsail.InstancePortState) map[string]interface{} { - if apiObject == nil { - return nil - } +func flattenInstancePortState(apiObject types.InstancePortState) map[string]interface{} { tfMap := map[string]interface{}{} - tfMap["from_port"] = aws.Int64Value(apiObject.FromPort) - tfMap["to_port"] = aws.Int64Value(apiObject.ToPort) - tfMap["protocol"] = aws.StringValue(apiObject.Protocol) + tfMap["from_port"] = int(apiObject.FromPort) + tfMap["to_port"] = int(apiObject.ToPort) + tfMap["protocol"] = string(apiObject.Protocol) if v := apiObject.Cidrs; v != nil { - tfMap["cidrs"] = aws.StringValueSlice(v) + tfMap["cidrs"] = v } if v := apiObject.CidrListAliases; v != nil { - tfMap["cidr_list_aliases"] = aws.StringValueSlice(v) + tfMap["cidr_list_aliases"] = v } if v := apiObject.Ipv6Cidrs; v != nil { - tfMap["ipv6_cidrs"] = aws.StringValueSlice(v) + tfMap["ipv6_cidrs"] = v } return tfMap } -func flattenInstancePortStates(apiObjects []*lightsail.InstancePortState) []interface{} { +func flattenInstancePortStates(apiObjects []types.InstancePortState) []interface{} { if len(apiObjects) == 0 { return nil } @@ -271,12 +270,22 @@ func flattenInstancePortStates(apiObjects []*lightsail.InstancePortState) []inte var tfList []interface{} for _, apiObject := range apiObjects { - if apiObject == nil { - continue - } tfList = append(tfList, flattenInstancePortState(apiObject)) } return tfList } + +func flattenNetworkProtocolValues(t []types.NetworkProtocol) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} diff --git a/internal/service/lightsail/instance_public_ports_test.go b/internal/service/lightsail/instance_public_ports_test.go index 337f13837f5..02f839b3bb5 100644 --- a/internal/service/lightsail/instance_public_ports_test.go +++
b/internal/service/lightsail/instance_public_ports_test.go @@ -1,13 +1,16 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -24,10 +27,10 @@ func TestAccLightsailInstancePublicPorts_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstancePublicPortsDestroy(ctx), Steps: []resource.TestStep{ @@ -55,10 +58,10 @@ func TestAccLightsailInstancePublicPorts_multiple(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstancePublicPortsDestroy(ctx), Steps: []resource.TestStep{ @@ -91,10 +94,10 @@ func TestAccLightsailInstancePublicPorts_cidrs(t *testing.T) { 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstancePublicPortsDestroy(ctx), Steps: []resource.TestStep{ @@ -125,10 +128,10 @@ func TestAccLightsailInstancePublicPorts_cidrListAliases(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstancePublicPortsDestroy(ctx), Steps: []resource.TestStep{ @@ -158,10 +161,10 @@ func TestAccLightsailInstancePublicPorts_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstancePublicPortsDestroy(ctx), Steps: []resource.TestStep{ @@ -186,10 +189,10 @@ func TestAccLightsailInstancePublicPorts_disappears_Instance(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - 
acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstancePublicPortsDestroy(ctx), Steps: []resource.TestStep{ @@ -212,13 +215,13 @@ func testAccCheckInstancePublicPortsExists(ctx context.Context, resourceName str return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) input := &lightsail.GetInstancePortStatesInput{ InstanceName: aws.String(rs.Primary.Attributes["instance_name"]), } - _, err := conn.GetInstancePortStatesWithContext(ctx, input) + _, err := conn.GetInstancePortStates(ctx, input) if err != nil { return fmt.Errorf("error getting Lightsail Instance Public Ports (%s): %w", rs.Primary.ID, err) @@ -230,7 +233,7 @@ func testAccCheckInstancePublicPortsExists(ctx context.Context, resourceName str func testAccCheckInstancePublicPortsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_lightsail_instance_public_ports" { @@ -241,9 +244,9 @@ func testAccCheckInstancePublicPortsDestroy(ctx context.Context) resource.TestCh InstanceName: aws.String(rs.Primary.Attributes["instance_name"]), } - output, err := conn.GetInstancePortStatesWithContext(ctx, input) + output, err := conn.GetInstancePortStates(ctx, input) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } diff --git 
a/internal/service/lightsail/instance_test.go b/internal/service/lightsail/instance_test.go index 1297ac4cb75..8903b9fbaba 100644 --- a/internal/service/lightsail/instance_test.go +++ b/internal/service/lightsail/instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,9 +8,10 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -36,10 +40,10 @@ func TestAccLightsailInstance_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -50,8 +54,8 @@ func TestAccLightsailInstance_basic(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "availability_zone"), resource.TestCheckResourceAttrSet(resourceName, "blueprint_id"), resource.TestCheckResourceAttrSet(resourceName, "bundle_id"), - resource.TestMatchResourceAttr(resourceName, "ipv6_address", regexp.MustCompile(`([a-f0-9]{1,4}:){7}[a-f0-9]{1,4}`)), resource.TestCheckResourceAttr(resourceName, "ipv6_addresses.#", "1"), + resource.TestMatchResourceAttr(resourceName, "ipv6_addresses.0", regexp.MustCompile(`([a-f0-9]{1,4}:){7}[a-f0-9]{1,4}`)), resource.TestCheckResourceAttrSet(resourceName, "key_pair_name"), 
resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), resource.TestMatchResourceAttr(resourceName, "ram_size", regexp.MustCompile(`\d+(.\d+)?`)), @@ -73,10 +77,10 @@ func TestAccLightsailInstance_name(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -130,10 +134,10 @@ func TestAccLightsailInstance_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -145,7 +149,7 @@ func TestAccLightsailInstance_tags(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "blueprint_id"), resource.TestCheckResourceAttrSet(resourceName, "bundle_id"), resource.TestCheckResourceAttrSet(resourceName, "key_pair_name"), - resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), ), }, { @@ -156,7 +160,7 @@ func TestAccLightsailInstance_tags(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "blueprint_id"), resource.TestCheckResourceAttrSet(resourceName, "bundle_id"), resource.TestCheckResourceAttrSet(resourceName, 
"key_pair_name"), - resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), ), }, }, @@ -171,10 +175,10 @@ func TestAccLightsailInstance_IPAddressType(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -213,10 +217,10 @@ func TestAccLightsailInstance_addOn(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -280,10 +284,10 @@ func TestAccLightsailInstance_availabilityZone(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -303,10 +307,10 
@@ func TestAccLightsailInstance_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckInstanceDestroy(ctx), Steps: []resource.TestStep{ @@ -333,7 +337,7 @@ func testAccCheckInstanceExists(ctx context.Context, n string) resource.TestChec return errors.New("No LightsailInstance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) out, err := tflightsail.FindInstanceById(ctx, conn, rs.Primary.ID) @@ -356,7 +360,7 @@ func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) _, err := tflightsail.FindInstanceById(ctx, conn, rs.Primary.ID) @@ -376,11 +380,11 @@ func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) input := &lightsail.GetInstancesInput{} - _, err := conn.GetInstancesWithContext(ctx, input) + _, err := conn.GetInstances(ctx, input) if acctest.PreCheckSkipError(err) { t.Skipf("skipping acceptance testing: %s", err) @@ -440,8 +444,7 @@ resource "aws_lightsail_instance" "test" { bundle_id = "nano_1_0" tags = { - Name = "tf-test" - KeyOnlyTag = "" + Name = "tf-test" } } `, rName)) @@ -458,9 +461,8 @@ resource 
"aws_lightsail_instance" "test" { bundle_id = "nano_1_0" tags = { - Name = "tf-test", - KeyOnlyTag = "" - ExtraName = "tf-test" + Name = "tf-test", + ExtraName = "tf-test" } } `, rName)) diff --git a/internal/service/lightsail/key_pair.go b/internal/service/lightsail/key_pair.go index 478442512b6..eaa19b265fa 100644 --- a/internal/service/lightsail/key_pair.go +++ b/internal/service/lightsail/key_pair.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -7,9 +10,9 @@ import ( "log" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -88,7 +91,7 @@ func ResourceKeyPair() *schema.Resource { func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) var kName string if v, ok := d.GetOk("name"); ok { @@ -100,14 +103,14 @@ func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta int } var pubKey string - var op *lightsail.Operation + var op *types.Operation if pubKeyInterface, ok := d.GetOk("public_key"); ok { pubKey = pubKeyInterface.(string) } if pubKey == "" { // creating new key - resp, err := conn.CreateKeyPairWithContext(ctx, &lightsail.CreateKeyPairInput{ + resp, err := conn.CreateKeyPair(ctx, &lightsail.CreateKeyPairInput{ KeyPairName: aws.String(kName), }) if err != nil { @@ -132,7 +135,7 @@ func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta int return 
sdkdiag.AppendErrorf(diags, "creating Lightsail Key Pair (%s): %s", kName, err) } if pgpKey != "" { - fingerprint, encrypted, err := encryptValue(pgpKey, aws.StringValue(resp.PrivateKeyBase64), "Lightsail Private Key") + fingerprint, encrypted, err := encryptValue(pgpKey, aws.ToString(resp.PrivateKeyBase64), "Lightsail Private Key") if err != nil { return sdkdiag.AppendErrorf(diags, "creating Lightsail Key Pair (%s): %s", kName, err) } @@ -146,7 +149,7 @@ func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta int op = resp.Operation } else { // importing key - resp, err := conn.ImportKeyPairWithContext(ctx, &lightsail.ImportKeyPairInput{ + resp, err := conn.ImportKeyPair(ctx, &lightsail.ImportKeyPairInput{ KeyPairName: aws.String(kName), PublicKeyBase64: aws.String(pubKey), }) @@ -159,7 +162,7 @@ func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta int op = resp.Operation } - diag := expandOperations(ctx, conn, []*lightsail.Operation{op}, "CreateKeyPair", ResKeyPair, kName) + diag := expandOperations(ctx, conn, []types.Operation{*op}, "CreateKeyPair", ResKeyPair, kName) if diag != nil { return diag @@ -170,14 +173,14 @@ func resourceKeyPairCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceKeyPairRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetKeyPairWithContext(ctx, &lightsail.GetKeyPairInput{ + resp, err := conn.GetKeyPair(ctx, &lightsail.GetKeyPairInput{ KeyPairName: aws.String(d.Id()), }) if err != nil { - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if IsANotFoundError(err) { log.Printf("[WARN] Lightsail KeyPair (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -194,8 +197,8 @@ func resourceKeyPairRead(ctx context.Context, d *schema.ResourceData, meta 
inter func resourceKeyPairDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() - resp, err := conn.DeleteKeyPairWithContext(ctx, &lightsail.DeleteKeyPairInput{ + conn := meta.(*conns.AWSClient).LightsailClient(ctx) + resp, err := conn.DeleteKeyPair(ctx, &lightsail.DeleteKeyPairInput{ KeyPairName: aws.String(d.Id()), }) @@ -203,7 +206,7 @@ func resourceKeyPairDelete(ctx context.Context, d *schema.ResourceData, meta int return sdkdiag.AppendErrorf(diags, "deleting Lightsail Key Pair (%s): %s", d.Id(), err) } - diag := expandOperations(ctx, conn, []*lightsail.Operation{resp.Operation}, "DeleteKeyPair", ResKeyPair, d.Id()) + diag := expandOperations(ctx, conn, []types.Operation{*resp.Operation}, "DeleteKeyPair", ResKeyPair, d.Id()) if diag != nil { return diag diff --git a/internal/service/lightsail/key_pair_test.go b/internal/service/lightsail/key_pair_test.go index 216e57f5177..6ac6708f7bd 100644 --- a/internal/service/lightsail/key_pair_test.go +++ b/internal/service/lightsail/key_pair_test.go @@ -1,38 +1,41 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tflightsail "github.com/hashicorp/terraform-provider-aws/internal/service/lightsail" ) func TestAccLightsailKeyPair_basic(t *testing.T) { ctx := acctest.Context(t) - var conf lightsail.KeyPair rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_key_pair.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyPairDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccKeyPairConfig_basic(rName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckKeyPairExists(ctx, resourceName, &conf), + testAccCheckKeyPairExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "arn"), resource.TestCheckResourceAttrSet(resourceName, "fingerprint"), resource.TestCheckResourceAttrSet(resourceName, "public_key"), @@ -45,7 +48,6 @@ func TestAccLightsailKeyPair_basic(t *testing.T) { func TestAccLightsailKeyPair_publicKey(t *testing.T) { ctx := acctest.Context(t) - var conf lightsail.KeyPair rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := 
"aws_lightsail_key_pair.test" @@ -57,14 +59,14 @@ func TestAccLightsailKeyPair_publicKey(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyPairDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccKeyPairConfig_imported(rName, publicKey), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckKeyPairExists(ctx, resourceName, &conf), + testAccCheckKeyPairExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "arn"), resource.TestCheckResourceAttrSet(resourceName, "fingerprint"), resource.TestCheckResourceAttrSet(resourceName, "public_key"), @@ -79,21 +81,20 @@ func TestAccLightsailKeyPair_publicKey(t *testing.T) { func TestAccLightsailKeyPair_encrypted(t *testing.T) { ctx := acctest.Context(t) - var conf lightsail.KeyPair rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_key_pair.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyPairDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccKeyPairConfig_encrypted(rName, testKeyPairPubKey1), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckKeyPairExists(ctx, resourceName, &conf), + testAccCheckKeyPairExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "arn"), resource.TestCheckResourceAttrSet(resourceName, "fingerprint"), resource.TestCheckResourceAttrSet(resourceName, "encrypted_fingerprint"), @@ -108,19 +109,17 @@ 
func TestAccLightsailKeyPair_encrypted(t *testing.T) { func TestAccLightsailKeyPair_namePrefix(t *testing.T) { ctx := acctest.Context(t) - var conf1, conf2 lightsail.KeyPair - resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckKeyPairDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccKeyPairConfig_prefixed(), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckKeyPairExists(ctx, "aws_lightsail_key_pair.lightsail_key_pair_test_omit", &conf1), - testAccCheckKeyPairExists(ctx, "aws_lightsail_key_pair.lightsail_key_pair_test_prefixed", &conf2), + testAccCheckKeyPairExists(ctx, "aws_lightsail_key_pair.lightsail_key_pair_test_omit"), + testAccCheckKeyPairExists(ctx, "aws_lightsail_key_pair.lightsail_key_pair_test_prefixed"), resource.TestCheckResourceAttrSet("aws_lightsail_key_pair.lightsail_key_pair_test_omit", "name"), resource.TestCheckResourceAttrSet("aws_lightsail_key_pair.lightsail_key_pair_test_prefixed", "name"), ), @@ -129,7 +128,7 @@ func TestAccLightsailKeyPair_namePrefix(t *testing.T) { }) } -func testAccCheckKeyPairExists(ctx context.Context, n string, res *lightsail.KeyPair) resource.TestCheckFunc { +func testAccCheckKeyPairExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -140,9 +139,9 @@ func testAccCheckKeyPairExists(ctx context.Context, n string, res *lightsail.Key return errors.New("No LightsailKeyPair set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - respKeyPair, err := conn.GetKeyPairWithContext(ctx, &lightsail.GetKeyPairInput{ + respKeyPair, err := 
conn.GetKeyPair(ctx, &lightsail.GetKeyPairInput{ KeyPairName: aws.String(rs.Primary.Attributes["name"]), }) @@ -153,7 +152,6 @@ func testAccCheckKeyPairExists(ctx context.Context, n string, res *lightsail.Key if respKeyPair == nil || respKeyPair.KeyPair == nil { return fmt.Errorf("KeyPair (%s) not found", rs.Primary.Attributes["name"]) } - *res = *respKeyPair.KeyPair return nil } } @@ -165,13 +163,13 @@ func testAccCheckKeyPairDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - respKeyPair, err := conn.GetKeyPairWithContext(ctx, &lightsail.GetKeyPairInput{ + respKeyPair, err := conn.GetKeyPair(ctx, &lightsail.GetKeyPairInput{ KeyPairName: aws.String(rs.Primary.Attributes["name"]), }) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } diff --git a/internal/service/lightsail/lb.go b/internal/service/lightsail/lb.go index 63833b6ab44..fb52bef5aa4 100644 --- a/internal/service/lightsail/lb.go +++ b/internal/service/lightsail/lb.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -5,9 +8,11 @@ import ( "regexp" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -95,26 +100,26 @@ func ResourceLoadBalancer() *schema.Resource { } func resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("name").(string) in := lightsail.CreateLoadBalancerInput{ - InstancePort: aws.Int64(int64(d.Get("instance_port").(int))), + InstancePort: int32(d.Get("instance_port").(int)), LoadBalancerName: aws.String(lbName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if d.Get("health_check_path").(string) != "/" { in.HealthCheckPath = aws.String(d.Get("health_check_path").(string)) } - out, err := conn.CreateLoadBalancerWithContext(ctx, &in) + out, err := conn.CreateLoadBalancer(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateLoadBalancer, ResLoadBalancer, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeCreateLoadBalancer), ResLoadBalancer, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeCreateLoadBalancer, ResLoadBalancer, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeCreateLoadBalancer, ResLoadBalancer, lbName) if diag != nil { return diag @@ -126,9 +131,9 @@ func 
resourceLoadBalancerCreate(ctx context.Context, d *schema.ResourceData, met } func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) - lb, err := FindLoadBalancerByName(ctx, conn, d.Id()) + lb, err := FindLoadBalancerById(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { create.LogNotFoundRemoveState(names.Lightsail, create.ErrActionReading, ResLoadBalancer, d.Id()) @@ -151,13 +156,13 @@ func resourceLoadBalancerRead(ctx context.Context, d *schema.ResourceData, meta d.Set("name", lb.Name) d.Set("support_code", lb.SupportCode) - SetTagsOut(ctx, lb.Tags) + setTagsOut(ctx, lb.Tags) return nil } func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("name").(string) in := &lightsail.UpdateLoadBalancerAttributeInput{ @@ -166,16 +171,16 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met if d.HasChange("health_check_path") { healthCheckIn := in - healthCheckIn.AttributeName = aws.String("HealthCheckPath") + healthCheckIn.AttributeName = types.LoadBalancerAttributeNameHealthCheckPath healthCheckIn.AttributeValue = aws.String(d.Get("health_check_path").(string)) - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, healthCheckIn) + out, err := conn.UpdateLoadBalancerAttribute(ctx, healthCheckIn) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancer, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancer, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, 
ResLoadBalancer, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancer, lbName) if diag != nil { return diag @@ -186,18 +191,18 @@ func resourceLoadBalancerUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceLoadBalancerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("name").(string) - out, err := conn.DeleteLoadBalancerWithContext(ctx, &lightsail.DeleteLoadBalancerInput{ + out, err := conn.DeleteLoadBalancer(ctx, &lightsail.DeleteLoadBalancerInput{ LoadBalancerName: aws.String(d.Id()), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeDeleteLoadBalancer, ResLoadBalancer, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeDeleteLoadBalancer), ResLoadBalancer, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDeleteLoadBalancer, ResLoadBalancer, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDeleteLoadBalancer, ResLoadBalancer, lbName) if diag != nil { return diag @@ -205,3 +210,27 @@ func resourceLoadBalancerDelete(ctx context.Context, d *schema.ResourceData, met return nil } + +func FindLoadBalancerById(ctx context.Context, conn *lightsail.Client, name string) (*types.LoadBalancer, error) { + in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(name)} + out, err := conn.GetLoadBalancer(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.LoadBalancer == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + lb := out.LoadBalancer + + return lb, nil +} diff --git a/internal/service/lightsail/lb_attachment.go 
b/internal/service/lightsail/lb_attachment.go index 118409934c4..84584f311ae 100644 --- a/internal/service/lightsail/lb_attachment.go +++ b/internal/service/lightsail/lb_attachment.go @@ -1,13 +1,19 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" + "errors" "regexp" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -48,20 +54,20 @@ func ResourceLoadBalancerAttachment() *schema.Resource { } func resourceLoadBalancerAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) req := lightsail.AttachInstancesToLoadBalancerInput{ LoadBalancerName: aws.String(lbName), - InstanceNames: aws.StringSlice([]string{d.Get("instance_name").(string)}), + InstanceNames: []string{d.Get("instance_name").(string)}, } - out, err := conn.AttachInstancesToLoadBalancerWithContext(ctx, &req) + out, err := conn.AttachInstancesToLoadBalancer(ctx, &req) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeAttachInstancesToLoadBalancer, ResLoadBalancerAttachment, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeAttachInstancesToLoadBalancer), ResLoadBalancerAttachment, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeAttachInstancesToLoadBalancer, 
ResLoadBalancerAttachment, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeAttachInstancesToLoadBalancer, ResLoadBalancerAttachment, lbName) if diag != nil { return diag @@ -79,7 +85,7 @@ func resourceLoadBalancerAttachmentCreate(ctx context.Context, d *schema.Resourc } func resourceLoadBalancerAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindLoadBalancerAttachmentById(ctx, conn, d.Id()) @@ -100,7 +106,7 @@ func resourceLoadBalancerAttachmentRead(ctx context.Context, d *schema.ResourceD } func resourceLoadBalancerAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) id_parts := strings.SplitN(d.Id(), ",", -1) if len(id_parts) != 2 { @@ -112,16 +118,16 @@ func resourceLoadBalancerAttachmentDelete(ctx context.Context, d *schema.Resourc in := lightsail.DetachInstancesFromLoadBalancerInput{ LoadBalancerName: aws.String(lbName), - InstanceNames: aws.StringSlice([]string{iName}), + InstanceNames: []string{iName}, } - out, err := conn.DetachInstancesFromLoadBalancerWithContext(ctx, &in) + out, err := conn.DetachInstancesFromLoadBalancer(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeDetachInstancesFromLoadBalancer, ResLoadBalancerAttachment, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeDetachInstancesFromLoadBalancer), ResLoadBalancerAttachment, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDetachInstancesFromLoadBalancer, ResLoadBalancerAttachment, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDetachInstancesFromLoadBalancer, ResLoadBalancerAttachment, lbName) if 
diag != nil { return diag @@ -136,3 +142,44 @@ func expandLoadBalancerNameFromId(id string) string { return lbName } + +func FindLoadBalancerAttachmentById(ctx context.Context, conn *lightsail.Client, id string) (*string, error) { + id_parts := strings.SplitN(id, ",", -1) + if len(id_parts) != 2 { + return nil, errors.New("invalid load balancer attachment id") + } + + lbName := id_parts[0] + iName := id_parts[1] + + in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(lbName)} + out, err := conn.GetLoadBalancer(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + var entry *string + entryExists := false + + for _, n := range out.LoadBalancer.InstanceHealthSummary { + if iName == aws.ToString(n.InstanceName) { + entry = n.InstanceName + entryExists = true + break + } + } + + if !entryExists { + return nil, tfresource.NewEmptyResultError(in) + } + + return entry, nil +} diff --git a/internal/service/lightsail/lb_attachment_test.go b/internal/service/lightsail/lb_attachment_test.go index 7daa1845f8a..72e602462fe 100644 --- a/internal/service/lightsail/lb_attachment_test.go +++ b/internal/service/lightsail/lb_attachment_test.go @@ -1,12 +1,16 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -27,10 +31,10 @@ func testAccLoadBalancerAttachment_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerAttachmentDestroy(ctx), Steps: []resource.TestStep{ @@ -55,10 +59,10 @@ func testAccLoadBalancerAttachment_disappears(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerAttachmentDestroy(ctx), Steps: []resource.TestStep{ @@ -85,7 +89,7 @@ func testAccCheckLoadBalancerAttachmentExists(ctx context.Context, n string, liN return errors.New("No LightsailLoadBalancerAttachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) out, err := 
tflightsail.FindLoadBalancerAttachmentById(ctx, conn, rs.Primary.ID) @@ -110,7 +114,7 @@ func testAccCheckLoadBalancerAttachmentDestroy(ctx context.Context) resource.Tes continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) _, err := tflightsail.FindLoadBalancerAttachmentById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/lightsail/lb_certificate.go b/internal/service/lightsail/lb_certificate.go index 030b104e7dc..01a8af5e2a1 100644 --- a/internal/service/lightsail/lb_certificate.go +++ b/internal/service/lightsail/lb_certificate.go @@ -1,16 +1,22 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" + "errors" "fmt" "regexp" "strings" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -126,7 +132,7 @@ func ResourceLoadBalancerCertificate() *schema.Resource { if sanSet, ok := diff.Get("subject_alternative_names").(*schema.Set); ok { sanSet.Add(domain_name) if err := diff.SetNew("subject_alternative_names", sanSet); err != nil { - return fmt.Errorf("error setting new subject_alternative_names diff: %w", err) + return fmt.Errorf("setting new subject_alternative_names diff: %w", err) } } } @@ -138,7 +144,7 @@ func ResourceLoadBalancerCertificate() *schema.Resource { } func resourceLoadBalancerCertificateCreate(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) certName := d.Get("name").(string) in := lightsail.CreateLoadBalancerTlsCertificateInput{ CertificateDomainName: aws.String(d.Get("domain_name").(string)), @@ -147,16 +153,16 @@ func resourceLoadBalancerCertificateCreate(ctx context.Context, d *schema.Resour } if v, ok := d.GetOk("subject_alternative_names"); ok { - in.CertificateAlternativeNames = aws.StringSlice(expandSubjectAlternativeNames(v)) + in.CertificateAlternativeNames = expandSubjectAlternativeNames(v) } - out, err := conn.CreateLoadBalancerTlsCertificateWithContext(ctx, &in) + out, err := conn.CreateLoadBalancerTlsCertificate(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeCreateLoadBalancerTlsCertificate, ResLoadBalancerCertificate, certName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeCreateLoadBalancerTlsCertificate), ResLoadBalancerCertificate, certName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeCreateLoadBalancerTlsCertificate, ResLoadBalancerCertificate, certName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeCreateLoadBalancerTlsCertificate, ResLoadBalancerCertificate, certName) if diag != nil { return diag @@ -174,7 +180,7 @@ func resourceLoadBalancerCertificateCreate(ctx context.Context, d *schema.Resour } func resourceLoadBalancerCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindLoadBalancerCertificateById(ctx, conn, d.Id()) @@ -194,29 +200,29 @@ func resourceLoadBalancerCertificateRead(ctx context.Context, d *schema.Resource d.Set("domain_validation_records", flattenLoadBalancerDomainValidationRecords(out.DomainValidationRecords)) d.Set("lb_name", 
out.LoadBalancerName) d.Set("name", out.Name) - d.Set("subject_alternative_names", aws.StringValueSlice(out.SubjectAlternativeNames)) + d.Set("subject_alternative_names", out.SubjectAlternativeNames) d.Set("support_code", out.SupportCode) return nil } func resourceLoadBalancerCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) id_parts := strings.SplitN(d.Id(), ",", -1) lbName := id_parts[0] certName := id_parts[1] - out, err := conn.DeleteLoadBalancerTlsCertificateWithContext(ctx, &lightsail.DeleteLoadBalancerTlsCertificateInput{ + out, err := conn.DeleteLoadBalancerTlsCertificate(ctx, &lightsail.DeleteLoadBalancerTlsCertificateInput{ CertificateName: aws.String(certName), LoadBalancerName: aws.String(lbName), }) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeDeleteLoadBalancerTlsCertificate, ResLoadBalancerCertificate, certName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeDeleteLoadBalancerTlsCertificate), ResLoadBalancerCertificate, certName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeDeleteLoadBalancerTlsCertificate, ResLoadBalancerCertificate, certName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeDeleteLoadBalancerTlsCertificate, ResLoadBalancerCertificate, certName) if diag != nil { return diag @@ -225,18 +231,59 @@ func resourceLoadBalancerCertificateDelete(ctx context.Context, d *schema.Resour return nil } -func flattenLoadBalancerDomainValidationRecords(domainValidationRecords []*lightsail.LoadBalancerTlsCertificateDomainValidationRecord) []map[string]interface{} { +func flattenLoadBalancerDomainValidationRecords(domainValidationRecords []types.LoadBalancerTlsCertificateDomainValidationRecord) []map[string]interface{} { var domainValidationResult []map[string]interface{} 
for _, o := range domainValidationRecords { validationOption := map[string]interface{}{ - "domain_name": aws.StringValue(o.DomainName), - "resource_record_name": aws.StringValue(o.Name), - "resource_record_type": aws.StringValue(o.Type), - "resource_record_value": aws.StringValue(o.Value), + "domain_name": aws.ToString(o.DomainName), + "resource_record_name": aws.ToString(o.Name), + "resource_record_type": aws.ToString(o.Type), + "resource_record_value": aws.ToString(o.Value), } domainValidationResult = append(domainValidationResult, validationOption) } return domainValidationResult } + +func FindLoadBalancerCertificateById(ctx context.Context, conn *lightsail.Client, id string) (*types.LoadBalancerTlsCertificate, error) { + id_parts := strings.SplitN(id, ",", -1) + if len(id_parts) != 2 { + return nil, errors.New("invalid load balancer certificate id") + } + + lbName := id_parts[0] + cName := id_parts[1] + + in := &lightsail.GetLoadBalancerTlsCertificatesInput{LoadBalancerName: aws.String(lbName)} + out, err := conn.GetLoadBalancerTlsCertificates(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + var entry types.LoadBalancerTlsCertificate + entryExists := false + + for _, n := range out.TlsCertificates { + if cName == aws.ToString(n.Name) { + entry = n + entryExists = true + break + } + } + + if !entryExists { + return nil, tfresource.NewEmptyResultError(in) + } + + return &entry, nil +} diff --git a/internal/service/lightsail/lb_certificate_attachment.go b/internal/service/lightsail/lb_certificate_attachment.go index 74cc5cb12d4..72a294a9d60 100644 --- a/internal/service/lightsail/lb_certificate_attachment.go +++ b/internal/service/lightsail/lb_certificate_attachment.go @@ -1,14 +1,20 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" + "errors" "log" "regexp" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -49,20 +55,20 @@ func ResourceLoadBalancerCertificateAttachment() *schema.Resource { } func resourceLoadBalancerCertificateAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) certName := d.Get("certificate_name").(string) req := lightsail.AttachLoadBalancerTlsCertificateInput{ LoadBalancerName: aws.String(d.Get("lb_name").(string)), CertificateName: aws.String(certName), } - out, err := conn.AttachLoadBalancerTlsCertificateWithContext(ctx, &req) + out, err := conn.AttachLoadBalancerTlsCertificate(ctx, &req) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeAttachLoadBalancerTlsCertificate, ResLoadBalancerCertificateAttachment, certName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeAttachLoadBalancerTlsCertificate), ResLoadBalancerCertificateAttachment, certName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeAttachLoadBalancerTlsCertificate, ResLoadBalancerCertificateAttachment, certName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeAttachLoadBalancerTlsCertificate, ResLoadBalancerCertificateAttachment, certName) if diag != nil { return diag @@ -80,7 +86,7 @@ 
func resourceLoadBalancerCertificateAttachmentCreate(ctx context.Context, d *sch } func resourceLoadBalancerCertificateAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindLoadBalancerCertificateAttachmentById(ctx, conn, d.Id()) @@ -104,3 +110,44 @@ func resourceLoadBalancerCertificateAttachmentDelete(ctx context.Context, d *sch log.Printf("[WARN] Cannot destroy Lightsail Load Balancer Certificate Attachment. Terraform will remove this resource from the state file, however resources may remain.") return nil } + +func FindLoadBalancerCertificateAttachmentById(ctx context.Context, conn *lightsail.Client, id string) (*string, error) { + id_parts := strings.SplitN(id, ",", -1) + if len(id_parts) != 2 { + return nil, errors.New("invalid load balancer certificate attachment id") + } + + lbName := id_parts[0] + cName := id_parts[1] + + in := &lightsail.GetLoadBalancerTlsCertificatesInput{LoadBalancerName: aws.String(lbName)} + out, err := conn.GetLoadBalancerTlsCertificates(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + var entry *string + entryExists := false + + for _, n := range out.TlsCertificates { + if cName == aws.ToString(n.Name) && aws.ToBool(n.IsAttached) { + entry = n.Name + entryExists = true + break + } + } + + if !entryExists { + return nil, tfresource.NewEmptyResultError(in) + } + + return entry, nil +} diff --git a/internal/service/lightsail/lb_certificate_attachment_test.go b/internal/service/lightsail/lb_certificate_attachment_test.go index 88e69a7e374..6ac38885c17 100644 --- a/internal/service/lightsail/lb_certificate_attachment_test.go +++ b/internal/service/lightsail/lb_certificate_attachment_test.go @@ -1,11 +1,15 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -20,10 +24,10 @@ func testAccLoadBalancerCertificateAttachment_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerCertificateDestroy(ctx), Steps: []resource.TestStep{ diff --git a/internal/service/lightsail/lb_certificate_test.go b/internal/service/lightsail/lb_certificate_test.go index 093a2771fa5..be27f1e1cfb 100644 --- a/internal/service/lightsail/lb_certificate_test.go +++ b/internal/service/lightsail/lb_certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,9 +8,10 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -21,7 +25,6 @@ import ( func testAccLoadBalancerCertificate_basic(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.LoadBalancerTlsCertificate resourceName := "aws_lightsail_lb_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) lbName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -30,17 +33,17 @@ func testAccLoadBalancerCertificate_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerCertificateConfig_basic(rName, lbName, domainName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckLoadBalancerCertificateExists(ctx, resourceName, &certificate), + testAccCheckLoadBalancerCertificateExists(ctx, resourceName), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "lightsail", regexp.MustCompile(`LoadBalancerTlsCertificate/.+`)), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), @@ -58,7 +61,6 @@ func testAccLoadBalancerCertificate_basic(t *testing.T) { 
func testAccLoadBalancerCertificate_subjectAlternativeNames(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.LoadBalancerTlsCertificate resourceName := "aws_lightsail_lb_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) lbName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -68,17 +70,17 @@ func testAccLoadBalancerCertificate_subjectAlternativeNames(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerCertificateConfig_subjectAlternativeNames(rName, lbName, domainName, subjectAlternativeName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckLoadBalancerCertificateExists(ctx, resourceName, &certificate), + testAccCheckLoadBalancerCertificateExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "subject_alternative_names.#", "2"), resource.TestCheckTypeSetElemAttr(resourceName, "subject_alternative_names.*", subjectAlternativeName), resource.TestCheckTypeSetElemAttr(resourceName, "subject_alternative_names.*", domainName), @@ -90,7 +92,6 @@ func testAccLoadBalancerCertificate_subjectAlternativeNames(t *testing.T) { func testAccLoadBalancerCertificate_domainValidationRecords(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.LoadBalancerTlsCertificate resourceName := "aws_lightsail_lb_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) lbName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -102,17 +103,17 
@@ func testAccLoadBalancerCertificate_domainValidationRecords(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerCertificateConfig_subjectAlternativeNames(rName, lbName, domainName, subjectAlternativeName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckLoadBalancerCertificateExists(ctx, resourceName, &certificate), + testAccCheckLoadBalancerCertificateExists(ctx, resourceName), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "domain_validation_records.*", map[string]string{ "domain_name": domainName, "resource_record_type": "CNAME", @@ -129,7 +130,6 @@ func testAccLoadBalancerCertificate_domainValidationRecords(t *testing.T) { func testAccLoadBalancerCertificate_disappears(t *testing.T) { ctx := acctest.Context(t) - var certificate lightsail.LoadBalancerTlsCertificate resourceName := "aws_lightsail_lb_certificate.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) lbName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -138,17 +138,17 @@ func testAccLoadBalancerCertificate_disappears(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), 
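The hunks above also drop the `&certificate` argument from every `testAccCheckLoadBalancerCertificateExists` call. The old helper copied the fetched API object into a caller-supplied pointer, but no test step ever read that copy, so the out-parameter was removed and the helper now only confirms the lookup succeeds. A simplified stand-alone sketch of the resulting shape (the in-memory store and type are stand-ins, not the provider's real finder):

```go
package main

import "fmt"

// certificate stands in for the SDK's LoadBalancerTlsCertificate type.
type certificate struct{ name string }

// store stands in for the remote API the real finder queries.
var store = map[string]certificate{"tf-acc-test-cert": {name: "tf-acc-test-cert"}}

// checkCertificateExists mirrors the simplified helper: no pointer
// out-parameter to fill, just an error when the lookup fails.
func checkCertificateExists(id string) error {
	if _, ok := store[id]; !ok {
		return fmt.Errorf("Load Balancer Certificate %q does not exist", id)
	}
	return nil
}

func main() {
	fmt.Println(checkCertificateExists("tf-acc-test-cert")) // <nil>
	fmt.Println(checkCertificateExists("missing"))
}
```

Dropping the unused capture also removes the test's dependence on the v1 struct type, which no longer exists under the v2 import path.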
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerCertificateDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerCertificateConfig_basic(rName, lbName, domainName), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerCertificateExists(ctx, resourceName, &certificate), + testAccCheckLoadBalancerCertificateExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tflightsail.ResourceLoadBalancerCertificate(), resourceName), ), ExpectNonEmptyPlan: true, @@ -164,7 +164,7 @@ func testAccCheckLoadBalancerCertificateDestroy(ctx context.Context) resource.Te continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) _, err := tflightsail.FindLoadBalancerCertificateById(ctx, conn, rs.Primary.ID) @@ -183,7 +183,7 @@ func testAccCheckLoadBalancerCertificateDestroy(ctx context.Context) resource.Te } } -func testAccCheckLoadBalancerCertificateExists(ctx context.Context, n string, certificate *lightsail.LoadBalancerTlsCertificate) resource.TestCheckFunc { +func testAccCheckLoadBalancerCertificateExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -194,7 +194,7 @@ func testAccCheckLoadBalancerCertificateExists(ctx context.Context, n string, ce return errors.New("No Certificate ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) respCertificate, err := tflightsail.FindLoadBalancerCertificateById(ctx, conn, rs.Primary.ID) @@ -206,8 +206,6 @@ func testAccCheckLoadBalancerCertificateExists(ctx context.Context, n string, ce return fmt.Errorf("Load Balancer Certificate %q does not exist", rs.Primary.ID) } - *certificate = *respCertificate - return nil } } diff --git 
a/internal/service/lightsail/lb_https_redirection_policy.go b/internal/service/lightsail/lb_https_redirection_policy.go index a10239c9d7a..7eb8e429d24 100644 --- a/internal/service/lightsail/lb_https_redirection_policy.go +++ b/internal/service/lightsail/lb_https_redirection_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -5,9 +8,11 @@ import ( "fmt" "regexp" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -48,21 +53,21 @@ func ResourceLoadBalancerHTTPSRedirectionPolicy() *schema.Resource { } func resourceLoadBalancerHTTPSRedirectionPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) in := lightsail.UpdateLoadBalancerAttributeInput{ LoadBalancerName: aws.String(lbName), - AttributeName: aws.String(lightsail.LoadBalancerAttributeNameHttpsRedirectionEnabled), + AttributeName: types.LoadBalancerAttributeNameHttpsRedirectionEnabled, AttributeValue: aws.String(fmt.Sprint(d.Get("enabled").(bool))), } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName, err) + return create.DiagError(names.Lightsail, 
string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancerHTTPSRedirectionPolicy, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName) if diag != nil { return diag @@ -74,7 +79,7 @@ func resourceLoadBalancerHTTPSRedirectionPolicyCreate(ctx context.Context, d *sc } func resourceLoadBalancerHTTPSRedirectionPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindLoadBalancerHTTPSRedirectionPolicyById(ctx, conn, d.Id()) @@ -95,22 +100,22 @@ func resourceLoadBalancerHTTPSRedirectionPolicyRead(ctx context.Context, d *sche } func resourceLoadBalancerHTTPSRedirectionPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) if d.HasChange("enabled") { in := lightsail.UpdateLoadBalancerAttributeInput{ LoadBalancerName: aws.String(lbName), - AttributeName: aws.String(lightsail.LoadBalancerAttributeNameHttpsRedirectionEnabled), + AttributeName: types.LoadBalancerAttributeNameHttpsRedirectionEnabled, AttributeValue: aws.String(fmt.Sprint(d.Get("enabled").(bool))), } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), 
ResLoadBalancerHTTPSRedirectionPolicy, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName) if diag != nil { return diag @@ -121,21 +126,21 @@ func resourceLoadBalancerHTTPSRedirectionPolicyUpdate(ctx context.Context, d *sc } func resourceLoadBalancerHTTPSRedirectionPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) in := lightsail.UpdateLoadBalancerAttributeInput{ LoadBalancerName: aws.String(lbName), - AttributeName: aws.String(lightsail.LoadBalancerAttributeNameHttpsRedirectionEnabled), + AttributeName: types.LoadBalancerAttributeNameHttpsRedirectionEnabled, AttributeValue: aws.String("false"), } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancerHTTPSRedirectionPolicy, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerHTTPSRedirectionPolicy, lbName) if diag != nil { return diag @@ -143,3 +148,25 @@ func resourceLoadBalancerHTTPSRedirectionPolicyDelete(ctx context.Context, d *sc return nil } + +func FindLoadBalancerHTTPSRedirectionPolicyById(ctx 
context.Context, conn *lightsail.Client, id string) (*bool, error) { + in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(id)} + out, err := conn.GetLoadBalancer(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.LoadBalancer.HttpsRedirectionEnabled == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.LoadBalancer.HttpsRedirectionEnabled, nil +} diff --git a/internal/service/lightsail/lb_https_redirection_policy_test.go b/internal/service/lightsail/lb_https_redirection_policy_test.go index d4078bec578..bc3181b9971 100644 --- a/internal/service/lightsail/lb_https_redirection_policy_test.go +++ b/internal/service/lightsail/lb_https_redirection_policy_test.go @@ -1,11 +1,15 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "fmt" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -19,10 +23,10 @@ func testAccLoadBalancerHTTPSRedirectionPolicy_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ diff --git a/internal/service/lightsail/lb_stickiness_policy.go 
b/internal/service/lightsail/lb_stickiness_policy.go index 7977ecf73ab..0c41c8e8057 100644 --- a/internal/service/lightsail/lb_stickiness_policy.go +++ b/internal/service/lightsail/lb_stickiness_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -6,9 +9,11 @@ import ( "regexp" "strconv" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -53,7 +58,7 @@ func ResourceLoadBalancerStickinessPolicy() *schema.Resource { } func resourceLoadBalancerStickinessPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) for _, v := range []string{"enabled", "cookie_duration"} { in := lightsail.UpdateLoadBalancerAttributeInput{ @@ -61,22 +66,22 @@ func resourceLoadBalancerStickinessPolicyCreate(ctx context.Context, d *schema.R } if v == "enabled" { - in.AttributeName = aws.String(lightsail.LoadBalancerAttributeNameSessionStickinessEnabled) + in.AttributeName = types.LoadBalancerAttributeNameSessionStickinessEnabled in.AttributeValue = aws.String(fmt.Sprint(d.Get("enabled").(bool))) } if v == "cookie_duration" { - in.AttributeName = aws.String(lightsail.LoadBalancerAttributeNameSessionStickinessLbCookieDurationSeconds) + in.AttributeName = types.LoadBalancerAttributeNameSessionStickinessLbCookieDurationSeconds in.AttributeValue = 
aws.String(fmt.Sprint(d.Get("cookie_duration").(int))) } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancerStickinessPolicy, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) if diag != nil { return diag @@ -89,7 +94,7 @@ func resourceLoadBalancerStickinessPolicyCreate(ctx context.Context, d *schema.R } func resourceLoadBalancerStickinessPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) out, err := FindLoadBalancerStickinessPolicyById(ctx, conn, d.Id()) @@ -103,12 +108,12 @@ func resourceLoadBalancerStickinessPolicyRead(ctx context.Context, d *schema.Res return create.DiagError(names.Lightsail, create.ErrActionReading, ResLoadBalancerStickinessPolicy, d.Id(), err) } - boolValue, err := strconv.ParseBool(*out[lightsail.LoadBalancerAttributeNameSessionStickinessEnabled]) + boolValue, err := strconv.ParseBool(out[string(types.LoadBalancerAttributeNameSessionStickinessEnabled)]) if err != nil { return create.DiagError(names.Lightsail, create.ErrActionReading, ResLoadBalancerStickinessPolicy, d.Id(), err) } - intValue, err := strconv.Atoi(*out[lightsail.LoadBalancerAttributeNameSessionStickinessLbCookieDurationSeconds]) + intValue, err := strconv.Atoi(out[string(types.LoadBalancerAttributeNameSessionStickinessLbCookieDurationSeconds)]) 
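The read-path change just above reflects a v2 type difference: `GetLoadBalancer`'s `ConfigurationOptions` is now `map[string]string` where v1 returned `map[string]*string`, so values feed `strconv` directly with no pointer dereference (`out[key]` instead of `*out[key]`). A stdlib-only sketch, assuming the enum string values shown (the keys mirror the Lightsail attribute-name enum as I understand it and should be treated as illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseStickiness mirrors the resource's read logic: both attributes
// arrive as plain strings in the v2 options map and are parsed into
// their schema types.
func parseStickiness(opts map[string]string) (enabled bool, duration int, err error) {
	enabled, err = strconv.ParseBool(opts["SessionStickinessEnabled"])
	if err != nil {
		return false, 0, err
	}
	duration, err = strconv.Atoi(opts["SessionStickiness_LB_CookieDurationSeconds"])
	if err != nil {
		return false, 0, err
	}
	return enabled, duration, nil
}

func main() {
	opts := map[string]string{
		"SessionStickinessEnabled":                   "true",
		"SessionStickiness_LB_CookieDurationSeconds": "86400",
	}
	e, d, _ := parseStickiness(opts)
	fmt.Println(e, d) // true 86400
}
```

The same type change is why the map keys are wrapped in `string(types.LoadBalancerAttributeName...)`: the v2 enum is a distinct string type, not a plain `string` constant.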
if err != nil { return create.DiagError(names.Lightsail, create.ErrActionReading, ResLoadBalancerStickinessPolicy, d.Id(), err) } @@ -121,22 +126,22 @@ func resourceLoadBalancerStickinessPolicyRead(ctx context.Context, d *schema.Res } func resourceLoadBalancerStickinessPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) if d.HasChange("enabled") { in := lightsail.UpdateLoadBalancerAttributeInput{ LoadBalancerName: aws.String(lbName), - AttributeName: aws.String(lightsail.LoadBalancerAttributeNameSessionStickinessEnabled), + AttributeName: types.LoadBalancerAttributeNameSessionStickinessEnabled, AttributeValue: aws.String(fmt.Sprint(d.Get("enabled").(bool))), } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancerStickinessPolicy, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) if diag != nil { return diag @@ -146,17 +151,17 @@ func resourceLoadBalancerStickinessPolicyUpdate(ctx context.Context, d *schema.R if d.HasChange("cookie_duration") { in := lightsail.UpdateLoadBalancerAttributeInput{ LoadBalancerName: aws.String(lbName), - AttributeName: aws.String(lightsail.LoadBalancerAttributeNameSessionStickinessLbCookieDurationSeconds), + AttributeName: 
types.LoadBalancerAttributeNameSessionStickinessLbCookieDurationSeconds, AttributeValue: aws.String(fmt.Sprint(d.Get("cookie_duration").(int))), } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancerStickinessPolicy, lbName, err) } - diag := expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) if diag != nil { return diag @@ -167,21 +172,21 @@ func resourceLoadBalancerStickinessPolicyUpdate(ctx context.Context, d *schema.R } func resourceLoadBalancerStickinessPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) lbName := d.Get("lb_name").(string) in := lightsail.UpdateLoadBalancerAttributeInput{ LoadBalancerName: aws.String(lbName), - AttributeName: aws.String(lightsail.LoadBalancerAttributeNameSessionStickinessEnabled), + AttributeName: types.LoadBalancerAttributeNameSessionStickinessEnabled, AttributeValue: aws.String("false"), } - out, err := conn.UpdateLoadBalancerAttributeWithContext(ctx, &in) + out, err := conn.UpdateLoadBalancerAttribute(ctx, &in) if err != nil { - return create.DiagError(names.Lightsail, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName, err) + return create.DiagError(names.Lightsail, string(types.OperationTypeUpdateLoadBalancerAttribute), ResLoadBalancerStickinessPolicy, lbName, err) } - diag 
:= expandOperations(ctx, conn, out.Operations, lightsail.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) + diag := expandOperations(ctx, conn, out.Operations, types.OperationTypeUpdateLoadBalancerAttribute, ResLoadBalancerStickinessPolicy, lbName) if diag != nil { return diag @@ -189,3 +194,25 @@ func resourceLoadBalancerStickinessPolicyDelete(ctx context.Context, d *schema.R return nil } + +func FindLoadBalancerStickinessPolicyById(ctx context.Context, conn *lightsail.Client, id string) (map[string]string, error) { + in := &lightsail.GetLoadBalancerInput{LoadBalancerName: aws.String(id)} + out, err := conn.GetLoadBalancer(ctx, in) + + if IsANotFoundError(err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.LoadBalancer.ConfigurationOptions == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.LoadBalancer.ConfigurationOptions, nil +} diff --git a/internal/service/lightsail/lb_stickiness_policy_test.go b/internal/service/lightsail/lb_stickiness_policy_test.go index 61fc215a111..3f074e87359 100644 --- a/internal/service/lightsail/lb_stickiness_policy_test.go +++ b/internal/service/lightsail/lb_stickiness_policy_test.go @@ -1,12 +1,16 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -25,10 +29,10 @@ func testAccLoadBalancerStickinessPolicy_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ @@ -56,10 +60,10 @@ func testAccLoadBalancerStickinessPolicy_cookieDuration(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ @@ -97,10 +101,10 @@ func testAccLoadBalancerStickinessPolicy_enabled(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: 
acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ @@ -137,10 +141,10 @@ func testAccLoadBalancerStickinessPolicy_disappears(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ @@ -167,7 +171,7 @@ func testAccCheckLoadBalancerStickinessPolicyExists(ctx context.Context, n strin return errors.New("No LightsailLoadBalancerStickinessPolicy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) out, err := tflightsail.FindLoadBalancerStickinessPolicyById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/lightsail/lb_test.go b/internal/service/lightsail/lb_test.go index 6aa299484eb..1850349ac0a 100644 --- a/internal/service/lightsail/lb_test.go +++ b/internal/service/lightsail/lb_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( @@ -5,11 +8,12 @@ import ( "errors" "fmt" "regexp" + "strings" "testing" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -62,24 +66,23 @@ func TestAccLightsailLoadBalancer_serial(t *testing.T) { } func testAccLoadBalancer_basic(t *testing.T) { ctx := acctest.Context(t) - var lb lightsail.LoadBalancer rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_lb.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "health_check_path", "/"), resource.TestCheckResourceAttr(resourceName, "instance_port", "80"), resource.TestCheckResourceAttrSet(resourceName, "dns_name"), @@ -96,7 +99,6 @@ func testAccLoadBalancer_basic(t *testing.T) { func testAccLoadBalancer_name(t *testing.T) { ctx := acctest.Context(t) - var lb lightsail.LoadBalancer rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) lightsailNameWithSpaces := 
fmt.Sprint(rName, "string with spaces") lightsailNameWithStartingDigit := fmt.Sprintf("01-%s", rName) @@ -106,10 +108,10 @@ func testAccLoadBalancer_name(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ @@ -124,7 +126,7 @@ func testAccLoadBalancer_name(t *testing.T) { { Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "health_check_path"), resource.TestCheckResourceAttrSet(resourceName, "instance_port"), ), @@ -132,7 +134,7 @@ func testAccLoadBalancer_name(t *testing.T) { { Config: testAccLoadBalancerConfig_basic(lightsailNameWithUnderscore), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttrSet(resourceName, "health_check_path"), resource.TestCheckResourceAttrSet(resourceName, "instance_port"), ), @@ -143,24 +145,23 @@ func testAccLoadBalancer_name(t *testing.T) { func testAccLoadBalancer_healthCheckPath(t *testing.T) { ctx := acctest.Context(t) - var lb lightsail.LoadBalancer rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_lb.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + 
acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerConfig_healthCheckPath(rName, "/"), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "health_check_path", "/"), ), }, @@ -172,7 +173,7 @@ func testAccLoadBalancer_healthCheckPath(t *testing.T) { { Config: testAccLoadBalancerConfig_healthCheckPath(rName, "/healthcheck"), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "health_check_path", "/healthcheck"), ), }, @@ -182,24 +183,23 @@ func testAccLoadBalancer_healthCheckPath(t *testing.T) { func testAccLoadBalancer_tags(t *testing.T) { ctx := acctest.Context(t) - var lb1, lb2, lb3 lightsail.LoadBalancer rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_lb.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - 
testAccCheckLoadBalancerExists(ctx, resourceName, &lb1), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -212,7 +212,7 @@ func testAccLoadBalancer_tags(t *testing.T) { { Config: testAccLoadBalancerConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb2), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -221,7 +221,7 @@ func testAccLoadBalancer_tags(t *testing.T) { { Config: testAccLoadBalancerConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb3), + testAccCheckLoadBalancerExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -230,7 +230,7 @@ func testAccLoadBalancer_tags(t *testing.T) { }) } -func testAccCheckLoadBalancerExists(ctx context.Context, n string, lb *lightsail.LoadBalancer) resource.TestCheckFunc { +func testAccCheckLoadBalancerExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -241,9 +241,9 @@ func testAccCheckLoadBalancerExists(ctx context.Context, n string, lb *lightsail return errors.New("No LightsailLoadBalancer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := tflightsail.FindLoadBalancerByName(ctx, conn, rs.Primary.ID) + resp, err := tflightsail.FindLoadBalancerById(ctx, conn, rs.Primary.ID) 
if err != nil { return err @@ -253,22 +253,19 @@ func testAccCheckLoadBalancerExists(ctx context.Context, n string, lb *lightsail return fmt.Errorf("Load Balancer %q does not exist", rs.Primary.ID) } - *lb = *resp - return nil } } func testAccLoadBalancer_disappears(t *testing.T) { ctx := acctest.Context(t) - var lb lightsail.LoadBalancer rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_lightsail_lb.test" testDestroy := func(*terraform.State) error { // reach out and DELETE the LoadBalancer - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() - _, err := conn.DeleteLoadBalancerWithContext(ctx, &lightsail.DeleteLoadBalancerInput{ + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) + _, err := conn.DeleteLoadBalancer(ctx, &lightsail.DeleteLoadBalancerInput{ LoadBalancerName: aws.String(rName), }) @@ -285,17 +282,17 @@ func testAccLoadBalancer_disappears(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) - acctest.PreCheckPartitionHasService(t, lightsail.EndpointsID) + acctest.PreCheckPartitionHasService(t, strings.ToLower(lightsail.ServiceID)) testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckLoadBalancerDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLoadBalancerConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLoadBalancerExists(ctx, resourceName, &lb), + testAccCheckLoadBalancerExists(ctx, resourceName), testDestroy, ), ExpectNonEmptyPlan: true, @@ -311,9 +308,9 @@ func testAccCheckLoadBalancerDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - _, err := 
tflightsail.FindLoadBalancerByName(ctx, conn, rs.Primary.ID) + _, err := tflightsail.FindLoadBalancerById(ctx, conn, rs.Primary.ID) if tfresource.NotFound(err) { continue diff --git a/internal/service/lightsail/service_package.go b/internal/service/lightsail/service_package.go new file mode 100644 index 00000000000..f232f80faad --- /dev/null +++ b/internal/service/lightsail/service_package.go @@ -0,0 +1,40 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package lightsail + +import ( + "context" + "strings" + "time" + + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + retry_sdkv2 "github.com/aws/aws-sdk-go-v2/aws/retry" + lightsail_sdkv2 "github.com/aws/aws-sdk-go-v2/service/lightsail" +) + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*lightsail_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return lightsail_sdkv2.NewFromConfig(cfg, func(o *lightsail_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = lightsail_sdkv2.EndpointResolverFromURL(endpoint) + } + + retryable := retry_sdkv2.IsErrorRetryableFunc(func(e error) aws_sdkv2.Ternary { + if strings.Contains(e.Error(), "Please try again in a few minutes") || strings.Contains(e.Error(), "Please wait for it to complete before trying again") { + return aws_sdkv2.TrueTernary + } + return aws_sdkv2.UnknownTernary + }) + const ( + backoff = 10 * time.Second + ) + o.Retryer = retry_sdkv2.NewStandard(func(o *retry_sdkv2.StandardOptions) { + o.Retryables = append(o.Retryables, retryable) + o.MaxAttempts = 18 + o.Backoff = retry_sdkv2.NewExponentialJitterBackoff(backoff) + }) + }), nil +} diff --git a/internal/service/lightsail/service_package_gen.go b/internal/service/lightsail/service_package_gen.go index 45c69a465cd..a0cdc9da6ec 100644 --- 
a/internal/service/lightsail/service_package_gen.go +++ b/internal/service/lightsail/service_package_gen.go @@ -5,6 +5,7 @@ package lightsail import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -156,4 +157,6 @@ func (p *servicePackage) ServicePackageName() string { return names.Lightsail } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/lightsail/static_ip.go b/internal/service/lightsail/static_ip.go index 5a84881355e..33de8d725ee 100644 --- a/internal/service/lightsail/static_ip.go +++ b/internal/service/lightsail/static_ip.go @@ -1,12 +1,14 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -44,11 +46,11 @@ func ResourceStaticIP() *schema.Resource { func resourceStaticIPCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) name := d.Get("name").(string) log.Printf("[INFO] Allocating Lightsail Static IP: %q", name) - _, err := conn.AllocateStaticIpWithContext(ctx, &lightsail.AllocateStaticIpInput{ + _, err := conn.AllocateStaticIp(ctx, &lightsail.AllocateStaticIpInput{ StaticIpName: aws.String(name), }) if err != nil { @@ -62,15 +64,15 @@ func 
resourceStaticIPCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceStaticIPRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) name := d.Get("name").(string) log.Printf("[INFO] Reading Lightsail Static IP: %q", name) - out, err := conn.GetStaticIpWithContext(ctx, &lightsail.GetStaticIpInput{ + out, err := conn.GetStaticIp(ctx, &lightsail.GetStaticIpInput{ StaticIpName: aws.String(name), }) if err != nil { - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if IsANotFoundError(err) { log.Printf("[WARN] Lightsail Static IP (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -87,11 +89,11 @@ func resourceStaticIPRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceStaticIPDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) name := d.Get("name").(string) log.Printf("[INFO] Deleting Lightsail Static IP: %q", name) - _, err := conn.ReleaseStaticIpWithContext(ctx, &lightsail.ReleaseStaticIpInput{ + _, err := conn.ReleaseStaticIp(ctx, &lightsail.ReleaseStaticIpInput{ StaticIpName: aws.String(name), }) if err != nil { diff --git a/internal/service/lightsail/static_ip_attachment.go b/internal/service/lightsail/static_ip_attachment.go index faa470e7da2..5ee8a4244e0 100644 --- a/internal/service/lightsail/static_ip_attachment.go +++ b/internal/service/lightsail/static_ip_attachment.go @@ -1,12 +1,14 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( "context" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -41,11 +43,11 @@ func ResourceStaticIPAttachment() *schema.Resource { func resourceStaticIPAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) staticIpName := d.Get("static_ip_name").(string) log.Printf("[INFO] Creating Lightsail Static IP Attachment: %q", staticIpName) - _, err := conn.AttachStaticIpWithContext(ctx, &lightsail.AttachStaticIpInput{ + _, err := conn.AttachStaticIp(ctx, &lightsail.AttachStaticIpInput{ StaticIpName: aws.String(staticIpName), InstanceName: aws.String(d.Get("instance_name").(string)), }) @@ -60,15 +62,15 @@ func resourceStaticIPAttachmentCreate(ctx context.Context, d *schema.ResourceDat func resourceStaticIPAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) staticIpName := d.Get("static_ip_name").(string) log.Printf("[INFO] Reading Lightsail Static IP Attachment: %q", staticIpName) - out, err := conn.GetStaticIpWithContext(ctx, &lightsail.GetStaticIpInput{ + out, err := conn.GetStaticIp(ctx, &lightsail.GetStaticIpInput{ StaticIpName: aws.String(staticIpName), }) if err != nil { - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if IsANotFoundError(err) { log.Printf("[WARN] 
Lightsail Static IP Attachment (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -89,10 +91,10 @@ func resourceStaticIPAttachmentRead(ctx context.Context, d *schema.ResourceData, func resourceStaticIPAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LightsailConn() + conn := meta.(*conns.AWSClient).LightsailClient(ctx) name := d.Get("static_ip_name").(string) - _, err := conn.DetachStaticIpWithContext(ctx, &lightsail.DetachStaticIpInput{ + _, err := conn.DetachStaticIp(ctx, &lightsail.DetachStaticIpInput{ StaticIpName: aws.String(name), }) if err != nil { diff --git a/internal/service/lightsail/static_ip_attachment_test.go b/internal/service/lightsail/static_ip_attachment_test.go index 98ed0418ffa..86e82cddfd0 100644 --- a/internal/service/lightsail/static_ip_attachment_test.go +++ b/internal/service/lightsail/static_ip_attachment_test.go @@ -1,38 +1,41 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tflightsail "github.com/hashicorp/terraform-provider-aws/internal/service/lightsail" ) func TestAccLightsailStaticIPAttachment_basic(t *testing.T) { ctx := acctest.Context(t) - var staticIp lightsail.StaticIp staticIpName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) instanceName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) keypairName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckStaticIPAttachmentDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccStaticIPAttachmentConfig_basic(staticIpName, instanceName, keypairName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckStaticIPAttachmentExists(ctx, "aws_lightsail_static_ip_attachment.test", &staticIp), + testAccCheckStaticIPAttachmentExists(ctx, "aws_lightsail_static_ip_attachment.test"), resource.TestCheckResourceAttrSet("aws_lightsail_static_ip_attachment.test", "ip_address"), ), }, @@ -42,14 +45,13 @@ func 
TestAccLightsailStaticIPAttachment_basic(t *testing.T) { func TestAccLightsailStaticIPAttachment_disappears(t *testing.T) { ctx := acctest.Context(t) - var staticIp lightsail.StaticIp staticIpName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) instanceName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) keypairName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) staticIpDestroy := func(*terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() - _, err := conn.DetachStaticIpWithContext(ctx, &lightsail.DetachStaticIpInput{ + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) + _, err := conn.DetachStaticIp(ctx, &lightsail.DetachStaticIpInput{ StaticIpName: aws.String(staticIpName), }) @@ -62,14 +64,14 @@ func TestAccLightsailStaticIPAttachment_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckStaticIPAttachmentDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccStaticIPAttachmentConfig_basic(staticIpName, instanceName, keypairName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckStaticIPAttachmentExists(ctx, "aws_lightsail_static_ip_attachment.test", &staticIp), + testAccCheckStaticIPAttachmentExists(ctx, "aws_lightsail_static_ip_attachment.test"), staticIpDestroy, ), ExpectNonEmptyPlan: true, @@ -78,7 +80,7 @@ func TestAccLightsailStaticIPAttachment_disappears(t *testing.T) { }) } -func testAccCheckStaticIPAttachmentExists(ctx context.Context, n string, staticIp *lightsail.StaticIp) resource.TestCheckFunc { +func testAccCheckStaticIPAttachmentExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s 
*terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -89,9 +91,9 @@ func testAccCheckStaticIPAttachmentExists(ctx context.Context, n string, staticI return errors.New("No Lightsail Static IP Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetStaticIpWithContext(ctx, &lightsail.GetStaticIpInput{ + resp, err := conn.GetStaticIp(ctx, &lightsail.GetStaticIpInput{ StaticIpName: aws.String(rs.Primary.ID), }) if err != nil { @@ -106,7 +108,6 @@ func testAccCheckStaticIPAttachmentExists(ctx context.Context, n string, staticI return fmt.Errorf("Static IP (%s) not attached", rs.Primary.ID) } - *staticIp = *resp.StaticIp return nil } } @@ -118,13 +119,13 @@ func testAccCheckStaticIPAttachmentDestroy(ctx context.Context) resource.TestChe continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetStaticIpWithContext(ctx, &lightsail.GetStaticIpInput{ + resp, err := conn.GetStaticIp(ctx, &lightsail.GetStaticIpInput{ StaticIpName: aws.String(rs.Primary.ID), }) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } diff --git a/internal/service/lightsail/static_ip_test.go b/internal/service/lightsail/static_ip_test.go index 2b41c04473a..786fa5c5319 100644 --- a/internal/service/lightsail/static_ip_test.go +++ b/internal/service/lightsail/static_ip_test.go @@ -1,36 +1,39 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package lightsail_test import ( "context" "errors" "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tflightsail "github.com/hashicorp/terraform-provider-aws/internal/service/lightsail" ) func TestAccLightsailStaticIP_basic(t *testing.T) { ctx := acctest.Context(t) - var staticIp lightsail.StaticIp staticIpName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckStaticIPDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccStaticIPConfig_basic(staticIpName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckStaticIPExists(ctx, "aws_lightsail_static_ip.test", &staticIp), + testAccCheckStaticIPExists(ctx, "aws_lightsail_static_ip.test"), ), }, }, @@ -39,12 +42,11 @@ func TestAccLightsailStaticIP_basic(t *testing.T) { func TestAccLightsailStaticIP_disappears(t *testing.T) { ctx := acctest.Context(t) - var staticIp lightsail.StaticIp staticIpName := fmt.Sprintf("tf-test-lightsail-%s", sdkacctest.RandString(5)) staticIpDestroy := func(*terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() - _, err := 
conn.ReleaseStaticIpWithContext(ctx, &lightsail.ReleaseStaticIpInput{ + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) + _, err := conn.ReleaseStaticIp(ctx, &lightsail.ReleaseStaticIpInput{ StaticIpName: aws.String(staticIpName), }) @@ -57,14 +59,14 @@ func TestAccLightsailStaticIP_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, lightsail.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(lightsail.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckStaticIPDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccStaticIPConfig_basic(staticIpName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckStaticIPExists(ctx, "aws_lightsail_static_ip.test", &staticIp), + testAccCheckStaticIPExists(ctx, "aws_lightsail_static_ip.test"), staticIpDestroy, ), ExpectNonEmptyPlan: true, @@ -73,7 +75,7 @@ func TestAccLightsailStaticIP_disappears(t *testing.T) { }) } -func testAccCheckStaticIPExists(ctx context.Context, n string, staticIp *lightsail.StaticIp) resource.TestCheckFunc { +func testAccCheckStaticIPExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -84,9 +86,9 @@ func testAccCheckStaticIPExists(ctx context.Context, n string, staticIp *lightsa return errors.New("No Lightsail Static IP ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetStaticIpWithContext(ctx, &lightsail.GetStaticIpInput{ + resp, err := conn.GetStaticIp(ctx, &lightsail.GetStaticIpInput{ StaticIpName: aws.String(rs.Primary.ID), }) @@ -97,7 +99,7 @@ func testAccCheckStaticIPExists(ctx context.Context, n string, staticIp *lightsa if resp == nil || 
resp.StaticIp == nil { return fmt.Errorf("Static IP (%s) not found", rs.Primary.ID) } - *staticIp = *resp.StaticIp + return nil } } @@ -109,13 +111,13 @@ func testAccCheckStaticIPDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LightsailClient(ctx) - resp, err := conn.GetStaticIpWithContext(ctx, &lightsail.GetStaticIpInput{ + resp, err := conn.GetStaticIp(ctx, &lightsail.GetStaticIpInput{ StaticIpName: aws.String(rs.Primary.ID), }) - if tfawserr.ErrCodeEquals(err, lightsail.ErrCodeNotFoundException) { + if tflightsail.IsANotFoundError(err) { continue } diff --git a/internal/service/lightsail/status.go b/internal/service/lightsail/status.go index 4cba9fb1d71..a9520991646 100644 --- a/internal/service/lightsail/status.go +++ b/internal/service/lightsail/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package lightsail import ( @@ -6,13 +9,13 @@ import ( "log" "strconv" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/lightsail" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func statusContainerService(ctx context.Context, conn *lightsail.Lightsail, serviceName string) retry.StateRefreshFunc { +func statusContainerService(ctx context.Context, conn *lightsail.Client, serviceName string) retry.StateRefreshFunc { return func() (interface{}, string, error) { containerService, err := FindContainerServiceByName(ctx, conn, serviceName) @@ -24,11 +27,11 @@ func statusContainerService(ctx context.Context, conn *lightsail.Lightsail, serv return nil, "", err } - return containerService, aws.StringValue(containerService.State), nil + return containerService, string(containerService.State), nil } } -func 
statusContainerServiceDeploymentVersion(ctx context.Context, conn *lightsail.Lightsail, serviceName string, version int) retry.StateRefreshFunc { +func statusContainerServiceDeploymentVersion(ctx context.Context, conn *lightsail.Client, serviceName string, version int) retry.StateRefreshFunc { return func() (interface{}, string, error) { deployment, err := FindContainerServiceDeploymentByVersion(ctx, conn, serviceName, version) @@ -40,53 +43,53 @@ func statusContainerServiceDeploymentVersion(ctx context.Context, conn *lightsai return nil, "", err } - return deployment, aws.StringValue(deployment.State), nil + return deployment, string(deployment.State), nil } } // statusOperation is a method to check the status of a Lightsail Operation -func statusOperation(ctx context.Context, conn *lightsail.Lightsail, oid *string) retry.StateRefreshFunc { +func statusOperation(ctx context.Context, conn *lightsail.Client, oid *string) retry.StateRefreshFunc { return func() (interface{}, string, error) { input := &lightsail.GetOperationInput{ OperationId: oid, } - oidValue := aws.StringValue(oid) + oidValue := aws.ToString(oid) log.Printf("[DEBUG] Checking if Lightsail Operation (%s) is Completed", oidValue) - output, err := conn.GetOperationWithContext(ctx, input) + output, err := conn.GetOperation(ctx, input) if err != nil { return output, "FAILED", err } if output.Operation == nil { - return nil, "Failed", fmt.Errorf("Error retrieving Operation info for operation (%s)", oidValue) + return nil, "Failed", fmt.Errorf("retrieving Operation info for operation (%s)", oidValue) } - log.Printf("[DEBUG] Lightsail Operation (%s) is currently %q", oidValue, *output.Operation.Status) - return output, *output.Operation.Status, nil + log.Printf("[DEBUG] Lightsail Operation (%s) is currently %q", oidValue, string(output.Operation.Status)) + return output, string(output.Operation.Status), nil } } // statusDatabase is a method to check the status of a Lightsail Relational Database -func 
statusDatabase(ctx context.Context, conn *lightsail.Lightsail, db *string) retry.StateRefreshFunc { +func statusDatabase(ctx context.Context, conn *lightsail.Client, db *string) retry.StateRefreshFunc { return func() (interface{}, string, error) { input := &lightsail.GetRelationalDatabaseInput{ RelationalDatabaseName: db, } - dbValue := aws.StringValue(db) + dbValue := aws.ToString(db) log.Printf("[DEBUG] Checking if Lightsail Database (%s) is in an available state.", dbValue) - output, err := conn.GetRelationalDatabaseWithContext(ctx, input) + output, err := conn.GetRelationalDatabase(ctx, input) if err != nil { return output, "FAILED", err } if output.RelationalDatabase == nil { - return nil, "Failed", fmt.Errorf("Error retrieving Database info for (%s)", dbValue) + return nil, "Failed", fmt.Errorf("retrieving Database info for (%s)", dbValue) } log.Printf("[DEBUG] Lightsail Database (%s) is currently %q", dbValue, *output.RelationalDatabase.State) @@ -95,70 +98,70 @@ func statusDatabase(ctx context.Context, conn *lightsail.Lightsail, db *string) } // statusDatabase is a method to check the status of a Lightsail Relational Database Backup Retention -func statusDatabaseBackupRetention(ctx context.Context, conn *lightsail.Lightsail, db *string) retry.StateRefreshFunc { +func statusDatabaseBackupRetention(ctx context.Context, conn *lightsail.Client, db *string) retry.StateRefreshFunc { return func() (interface{}, string, error) { input := &lightsail.GetRelationalDatabaseInput{ RelationalDatabaseName: db, } - dbValue := aws.StringValue(db) + dbValue := aws.ToString(db) log.Printf("[DEBUG] Checking if Lightsail Database (%s) Backup Retention setting has been updated.", dbValue) - output, err := conn.GetRelationalDatabaseWithContext(ctx, input) + output, err := conn.GetRelationalDatabase(ctx, input) if err != nil { return output, "FAILED", err } if output.RelationalDatabase == nil { - return nil, "Failed", fmt.Errorf("Error retrieving Database info for (%s)", dbValue) 
+		return nil, "Failed", fmt.Errorf("retrieving Database info for (%s)", dbValue)
 		}
-		return output, strconv.FormatBool(aws.BoolValue(output.RelationalDatabase.BackupRetentionEnabled)), nil
+		return output, strconv.FormatBool(aws.ToBool(output.RelationalDatabase.BackupRetentionEnabled)), nil
 	}
 }
-func statusDatabasePubliclyAccessible(ctx context.Context, conn *lightsail.Lightsail, db *string) retry.StateRefreshFunc {
+func statusDatabasePubliclyAccessible(ctx context.Context, conn *lightsail.Client, db *string) retry.StateRefreshFunc {
 	return func() (interface{}, string, error) {
 		input := &lightsail.GetRelationalDatabaseInput{
 			RelationalDatabaseName: db,
 		}
-		dbValue := aws.StringValue(db)
+		dbValue := aws.ToString(db)
 		log.Printf("[DEBUG] Checking if Lightsail Database (%s) Backup Retention setting has been updated.", dbValue)
-		output, err := conn.GetRelationalDatabaseWithContext(ctx, input)
+		output, err := conn.GetRelationalDatabase(ctx, input)
 		if err != nil {
 			return output, "FAILED", err
 		}
 		if output.RelationalDatabase == nil {
-			return nil, "Failed", fmt.Errorf("Error retrieving Database info for (%s)", dbValue)
+			return nil, "Failed", fmt.Errorf("retrieving Database info for (%s)", dbValue)
 		}
-		return output, strconv.FormatBool(aws.BoolValue(output.RelationalDatabase.PubliclyAccessible)), nil
+		return output, strconv.FormatBool(aws.ToBool(output.RelationalDatabase.PubliclyAccessible)), nil
 	}
 }
-func statusInstance(ctx context.Context, conn *lightsail.Lightsail, iName *string) retry.StateRefreshFunc {
+func statusInstance(ctx context.Context, conn *lightsail.Client, iName *string) retry.StateRefreshFunc {
 	return func() (interface{}, string, error) {
 		in := &lightsail.GetInstanceStateInput{
 			InstanceName: iName,
 		}
-		iNameValue := aws.StringValue(iName)
+		iNameValue := aws.ToString(iName)
 		log.Printf("[DEBUG] Checking if Lightsail Instance (%s) is in a ready state.", iNameValue)
-		out, err := conn.GetInstanceStateWithContext(ctx, in)
+		out, err := conn.GetInstanceState(ctx, in)
 		if err != nil {
 			return out, "FAILED", err
 		}
 		if out.State == nil {
-			return nil, "Failed", fmt.Errorf("Error retrieving Instance info for (%s)", iNameValue)
+			return nil, "Failed", fmt.Errorf("retrieving Instance info for (%s)", iNameValue)
 		}
 		log.Printf("[DEBUG] Lightsail Instance (%s) State is currently (%s)", iNameValue, *out.State.Name)
diff --git a/internal/service/lightsail/sweep.go b/internal/service/lightsail/sweep.go
index c59671f39d0..9e79b09432b 100644
--- a/internal/service/lightsail/sweep.go
+++ b/internal/service/lightsail/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
@@ -7,11 +10,10 @@ import (
 	"fmt"
 	"log"
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
@@ -34,17 +36,17 @@ func init() {
 func sweepContainerServices(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("Error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).LightsailConn()
+	conn := client.LightsailClient(ctx)
 	input := &lightsail.GetContainerServicesInput{}
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
-	output, err := conn.GetContainerServicesWithContext(ctx, input)
+	output, err := conn.GetContainerServices(ctx, input)
 	if sweep.SkipSweepError(err) {
 		log.Printf("[WARN] Skipping Lightsail Container Service sweep for %s: %s", region, err)
@@ -56,13 +58,9 @@ func sweepContainerServices(region string) error {
 	}
 	for _, service := range output.ContainerServices {
-		if service == nil {
-			continue
-		}
-
 		r := ResourceContainerService()
 		d := r.Data(nil)
-		d.SetId(aws.StringValue(service.ContainerServiceName))
+		d.SetId(aws.ToString(service.ContainerServiceName))
 		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
 	}
@@ -76,7 +74,7 @@ func sweepContainerServices(region string) error {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Lightsail Container Services for %s: %w", region, err))
 	}
-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Lightsail Container Services for %s: %w", region, err))
 	}
@@ -85,17 +83,17 @@ func sweepContainerServices(region string) error {
 func sweepInstances(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("Error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).LightsailConn()
+	conn := client.LightsailClient(ctx)
 	input := &lightsail.GetInstancesInput{}
 	var sweeperErrs *multierror.Error
 	for {
-		output, err := conn.GetInstancesWithContext(ctx, input)
+		output, err := conn.GetInstances(ctx, input)
 		if sweep.SkipSweepError(err) {
 			log.Printf("[WARN] Skipping Lightsail Instance sweep for %s: %s", region, err)
@@ -107,13 +105,13 @@ func sweepInstances(region string) error {
 		}
 		for _, instance := range output.Instances {
-			name := aws.StringValue(instance.Name)
+			name := aws.ToString(instance.Name)
 			input := &lightsail.DeleteInstanceInput{
 				InstanceName: instance.Name,
 			}
 			log.Printf("[INFO] Deleting Lightsail Instance: %s", name)
-			_, err := conn.DeleteInstanceWithContext(ctx, input)
+			_, err := conn.DeleteInstance(ctx, input)
 			if err != nil {
 				sweeperErr := fmt.Errorf("error deleting Lightsail Instance (%s): %s", name, err)
@@ -122,7 +120,7 @@ func sweepInstances(region string) error {
 			}
 		}
-		if aws.StringValue(output.NextPageToken) == "" {
+		if aws.ToString(output.NextPageToken) == "" {
 			break
 		}
@@ -134,16 +132,16 @@ func sweepStaticIPs(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("Error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).LightsailConn()
+	conn := client.LightsailClient(ctx)
 	input := &lightsail.GetStaticIpsInput{}
 	for {
-		output, err := conn.GetStaticIpsWithContext(ctx, input)
+		output, err := conn.GetStaticIps(ctx, input)
 		if err != nil {
 			if sweep.SkipSweepError(err) {
 				log.Printf("[WARN] Skipping Lightsail Static IP sweep for %s: %s", region, err)
@@ -158,10 +156,10 @@ func sweepStaticIPs(region string) error {
 		}
 		for _, staticIp := range output.StaticIps {
-			name := aws.StringValue(staticIp.Name)
+			name := aws.ToString(staticIp.Name)
 			log.Printf("[INFO] Deleting Lightsail Static IP %s", name)
-			_, err := conn.ReleaseStaticIpWithContext(ctx, &lightsail.ReleaseStaticIpInput{
+			_, err := conn.ReleaseStaticIp(ctx, &lightsail.ReleaseStaticIpInput{
 				StaticIpName: aws.String(name),
 			})
 			if err != nil {
diff --git a/internal/service/lightsail/tags_gen.go b/internal/service/lightsail/tags_gen.go
index 0932714c8af..4b685fff261 100644
--- a/internal/service/lightsail/tags_gen.go
+++ b/internal/service/lightsail/tags_gen.go
@@ -5,9 +5,9 @@ import (
 	"context"
 	"fmt"
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/lightsail"
-	"github.com/aws/aws-sdk-go/service/lightsail/lightsailiface"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
+	awstypes "github.com/aws/aws-sdk-go-v2/service/lightsail/types"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
@@ -17,11 +17,11 @@ import (
 // []*SERVICE.Tag handling
 // Tags returns lightsail service tags.
-func Tags(tags tftags.KeyValueTags) []*lightsail.Tag {
-	result := make([]*lightsail.Tag, 0, len(tags))
+func Tags(tags tftags.KeyValueTags) []awstypes.Tag {
+	result := make([]awstypes.Tag, 0, len(tags))
 	for k, v := range tags.Map() {
-		tag := &lightsail.Tag{
+		tag := awstypes.Tag{
 			Key:   aws.String(k),
 			Value: aws.String(v),
 		}
@@ -33,19 +33,19 @@ func Tags(tags tftags.KeyValueTags) []*lightsail.Tag {
 }
 // KeyValueTags creates tftags.KeyValueTags from lightsail service tags.
-func KeyValueTags(ctx context.Context, tags []*lightsail.Tag) tftags.KeyValueTags {
+func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags {
 	m := make(map[string]*string, len(tags))
 	for _, tag := range tags {
-		m[aws.StringValue(tag.Key)] = tag.Value
+		m[aws.ToString(tag.Key)] = tag.Value
 	}
 	return tftags.New(ctx, m)
 }
-// GetTagsIn returns lightsail service tags from Context.
+// getTagsIn returns lightsail service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*lightsail.Tag {
+func getTagsIn(ctx context.Context) []awstypes.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*lightsail.Tag {
 	return nil
 }
-// SetTagsOut sets lightsail service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*lightsail.Tag) {
+// setTagsOut sets lightsail service tags in Context.
+func setTagsOut(ctx context.Context, tags []awstypes.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
-// UpdateTags updates lightsail service tags.
+// updateTags updates lightsail service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn lightsailiface.LightsailAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn *lightsail.Client, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -74,10 +74,10 @@ func UpdateTags(ctx context.Context, conn lightsailiface.LightsailAPI, identifie
 	if len(removedTags) > 0 {
 		input := &lightsail.UntagResourceInput{
 			ResourceName: aws.String(identifier),
-			TagKeys:      aws.StringSlice(removedTags.Keys()),
+			TagKeys:      removedTags.Keys(),
 		}
-		_, err := conn.UntagResourceWithContext(ctx, input)
+		_, err := conn.UntagResource(ctx, input)
 		if err != nil {
 			return fmt.Errorf("untagging resource (%s): %w", identifier, err)
@@ -92,7 +92,7 @@ func UpdateTags(ctx context.Context, conn lightsailiface.LightsailAPI, identifie
 			Tags:         Tags(updatedTags),
 		}
-		_, err := conn.TagResourceWithContext(ctx, input)
+		_, err := conn.TagResource(ctx, input)
 		if err != nil {
 			return fmt.Errorf("tagging resource (%s): %w", identifier, err)
@@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn lightsailiface.LightsailAPI, identifie
 // UpdateTags updates lightsail service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).LightsailConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).LightsailClient(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/lightsail/wait.go b/internal/service/lightsail/wait.go
index 4ae1d7d2809..521763b98c8 100644
--- a/internal/service/lightsail/wait.go
+++ b/internal/service/lightsail/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package lightsail
 import (
@@ -7,9 +10,11 @@ import (
 	"strconv"
 	"time"
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail"
+	"github.com/aws/aws-sdk-go-v2/service/lightsail/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
+	"github.com/hashicorp/terraform-provider-aws/internal/enum"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 )
@@ -35,10 +40,10 @@ const (
 )
 // waitOperation waits for an Operation to return Succeeded or Completed
-func waitOperation(ctx context.Context, conn *lightsail.Lightsail, oid *string) error {
+func waitOperation(ctx context.Context, conn *lightsail.Client, oid *string) error {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{lightsail.OperationStatusStarted},
-		Target:  []string{lightsail.OperationStatusCompleted, lightsail.OperationStatusSucceeded},
+		Pending: enum.Slice(types.OperationStatusStarted),
+		Target:  enum.Slice(types.OperationStatusCompleted, types.OperationStatusSucceeded),
 		Refresh: statusOperation(ctx, conn, oid),
 		Timeout: OperationTimeout,
 		Delay:   OperationDelay,
@@ -55,7 +60,7 @@ func waitOperation(ctx context.Context, conn *lightsail.Lightsail, oid *string)
 }
 // waitDatabaseModified waits for a Modified Database return available
-func waitDatabaseModified(ctx context.Context, conn *lightsail.Lightsail, db *string) (*lightsail.GetRelationalDatabaseOutput, error) {
+func waitDatabaseModified(ctx context.Context, conn *lightsail.Client, db *string) (*lightsail.GetRelationalDatabaseOutput, error) {
 	stateConf := &retry.StateChangeConf{
 		Pending: []string{DatabaseStateModifying},
 		Target:  []string{DatabaseStateAvailable},
@@ -76,7 +81,7 @@ func waitDatabaseModified(ctx context.Context, conn *lightsail.Lightsail, db *st
 // waitDatabaseBackupRetentionModified waits for a Modified BackupRetention on Database return available
-func waitDatabaseBackupRetentionModified(ctx context.Context, conn *lightsail.Lightsail, db *string, target bool) error {
+func waitDatabaseBackupRetentionModified(ctx context.Context, conn *lightsail.Client, db *string, target bool) error {
 	stateConf := &retry.StateChangeConf{
 		Pending: []string{strconv.FormatBool(!target)},
 		Target:  []string{strconv.FormatBool(target)},
@@ -95,7 +100,7 @@ func waitDatabaseBackupRetentionModified(ctx context.Context, conn *lightsail.Li
 	return err
 }
-func waitDatabasePubliclyAccessibleModified(ctx context.Context, conn *lightsail.Lightsail, db *string, target bool) error {
+func waitDatabasePubliclyAccessibleModified(ctx context.Context, conn *lightsail.Client, db *string, target bool) error {
 	stateConf := &retry.StateChangeConf{
 		Pending: []string{strconv.FormatBool(!target)},
 		Target:  []string{strconv.FormatBool(target)},
@@ -114,10 +119,10 @@ func waitDatabasePubliclyAccessibleModified(ctx context.Context, conn *lightsail
 	return err
 }
-func waitContainerServiceCreated(ctx context.Context, conn *lightsail.Lightsail, serviceName string, timeout time.Duration) error {
+func waitContainerServiceCreated(ctx context.Context, conn *lightsail.Client, serviceName string, timeout time.Duration) error {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{lightsail.ContainerServiceStatePending},
-		Target:  []string{lightsail.ContainerServiceStateReady},
+		Pending: enum.Slice(types.ContainerServiceStatePending),
+		Target:  enum.Slice(types.ContainerServiceStateReady),
 		Refresh: statusContainerService(ctx, conn, serviceName),
 		Timeout: timeout,
 		Delay:   5 * time.Second,
@@ -126,9 +131,9 @@ func waitContainerServiceCreated(ctx context.Context, conn *lightsail.Lightsail,
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
-	if output, ok := outputRaw.(*lightsail.ContainerService); ok {
+	if output, ok := outputRaw.(*types.ContainerService); ok {
 		if detail := output.StateDetail; detail != nil {
-			tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(detail.Code), aws.StringValue(detail.Message)))
+			tfresource.SetLastError(err, fmt.Errorf("%s: %s", string(detail.Code), aws.ToString(detail.Message)))
 		}
 		return err
@@ -137,10 +142,10 @@ func waitContainerServiceCreated(ctx context.Context, conn *lightsail.Lightsail,
 	return err
 }
-func waitContainerServiceDisabled(ctx context.Context, conn *lightsail.Lightsail, serviceName string, timeout time.Duration) error {
+func waitContainerServiceDisabled(ctx context.Context, conn *lightsail.Client, serviceName string, timeout time.Duration) error {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{lightsail.ContainerServiceStateUpdating},
-		Target:  []string{lightsail.ContainerServiceStateDisabled},
+		Pending: enum.Slice(types.ContainerServiceStateUpdating),
+		Target:  enum.Slice(types.ContainerServiceStateDisabled),
 		Refresh: statusContainerService(ctx, conn, serviceName),
 		Timeout: timeout,
 		Delay:   5 * time.Second,
@@ -149,9 +154,9 @@ func waitContainerServiceDisabled(ctx context.Context, conn *lightsail.Lightsail
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
-	if output, ok := outputRaw.(*lightsail.ContainerService); ok {
+	if output, ok := outputRaw.(*types.ContainerService); ok {
 		if detail := output.StateDetail; detail != nil {
-			tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(detail.Code), aws.StringValue(detail.Message)))
+			tfresource.SetLastError(err, fmt.Errorf("%s: %s", string(detail.Code), aws.ToString(detail.Message)))
 		}
 		return err
@@ -160,10 +165,10 @@ func waitContainerServiceDisabled(ctx context.Context, conn *lightsail.Lightsail
 	return err
 }
-func waitContainerServiceUpdated(ctx context.Context, conn *lightsail.Lightsail, serviceName string, timeout time.Duration) error {
+func waitContainerServiceUpdated(ctx context.Context, conn *lightsail.Client, serviceName string, timeout time.Duration) error {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{lightsail.ContainerServiceStateUpdating},
-		Target:  []string{lightsail.ContainerServiceStateReady, lightsail.ContainerServiceStateRunning},
+		Pending: enum.Slice(types.ContainerServiceStateUpdating),
+		Target:  enum.Slice(types.ContainerServiceStateReady, types.ContainerServiceStateRunning),
 		Refresh: statusContainerService(ctx, conn, serviceName),
 		Timeout: timeout,
 		Delay:   5 * time.Second,
@@ -172,9 +177,9 @@ func waitContainerServiceUpdated(ctx context.Context, conn *lightsail.Lightsail,
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
-	if output, ok := outputRaw.(*lightsail.ContainerService); ok {
+	if output, ok := outputRaw.(*types.ContainerService); ok {
 		if detail := output.StateDetail; detail != nil {
-			tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(detail.Code), aws.StringValue(detail.Message)))
+			tfresource.SetLastError(err, fmt.Errorf("%s: %s", string(detail.Code), aws.ToString(detail.Message)))
 		}
 		return err
@@ -183,9 +188,9 @@ func waitContainerServiceUpdated(ctx context.Context, conn *lightsail.Lightsail,
 	return err
 }
-func waitContainerServiceDeleted(ctx context.Context, conn *lightsail.Lightsail, serviceName string, timeout time.Duration) error {
+func waitContainerServiceDeleted(ctx context.Context, conn *lightsail.Client, serviceName string, timeout time.Duration) error {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{lightsail.ContainerServiceStateDeleting},
+		Pending: enum.Slice(types.ContainerServiceStateDeleting),
 		Target:  []string{},
 		Refresh: statusContainerService(ctx, conn, serviceName),
 		Timeout: timeout,
@@ -195,9 +200,9 @@ func waitContainerServiceDeleted(ctx context.Context, conn *lightsail.Lightsail,
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
-	if output, ok := outputRaw.(*lightsail.ContainerService); ok {
+	if output, ok := outputRaw.(*types.ContainerService); ok {
 		if detail := output.StateDetail; detail != nil {
-			tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(detail.Code), aws.StringValue(detail.Message)))
+			tfresource.SetLastError(err, fmt.Errorf("%s: %s", string(detail.Code), aws.ToString(detail.Message)))
 		}
 		return err
@@ -206,10 +211,10 @@ func waitContainerServiceDeleted(ctx context.Context, conn *lightsail.Lightsail,
 	return err
 }
-func waitContainerServiceDeploymentVersionActive(ctx context.Context, conn *lightsail.Lightsail, serviceName string, version int, timeout time.Duration) error {
+func waitContainerServiceDeploymentVersionActive(ctx context.Context, conn *lightsail.Client, serviceName string, version int, timeout time.Duration) error {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{lightsail.ContainerServiceDeploymentStateActivating},
-		Target:  []string{lightsail.ContainerServiceDeploymentStateActive},
+		Pending: enum.Slice(types.ContainerServiceDeploymentStateActivating),
+		Target:  enum.Slice(types.ContainerServiceDeploymentStateActive),
 		Refresh: statusContainerServiceDeploymentVersion(ctx, conn, serviceName, version),
 		Timeout: timeout,
 		Delay:   5 * time.Second,
@@ -218,8 +223,8 @@ func waitContainerServiceDeploymentVersionActive(ctx context.Context, conn *ligh
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
-	if output, ok := outputRaw.(*lightsail.ContainerServiceDeployment); ok {
-		if aws.StringValue(output.State) == lightsail.ContainerServiceDeploymentStateFailed {
+	if output, ok := outputRaw.(*types.ContainerServiceDeployment); ok {
+		if output.State == types.ContainerServiceDeploymentStateFailed {
 			tfresource.SetLastError(err, errors.New("The deployment failed. Use the GetContainerLog action to view the log events for the containers in the deployment to try to determine the reason for the failure."))
 		}
@@ -229,7 +234,7 @@ func waitContainerServiceDeploymentVersionActive(ctx context.Context, conn *ligh
 	return err
 }
-func waitInstanceStateWithContext(ctx context.Context, conn *lightsail.Lightsail, id *string) (*lightsail.GetInstanceStateOutput, error) {
+func waitInstanceState(ctx context.Context, conn *lightsail.Client, id *string) (*lightsail.GetInstanceStateOutput, error) {
 	stateConf := &retry.StateChangeConf{
 		Pending: []string{"pending", "stopping"},
 		Target:  []string{"stopped", "running"},
diff --git a/internal/service/location/generate.go b/internal/service/location/generate.go
index bdb51f751fa..b4c3fc3bbdd 100644
--- a/internal/service/location/generate.go
+++ b/internal/service/location/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags -ListTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 package location
diff --git a/internal/service/location/geofence_collection.go b/internal/service/location/geofence_collection.go
index b9392910114..f8962a69845 100644
--- a/internal/service/location/geofence_collection.go
+++ b/internal/service/location/geofence_collection.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -83,11 +86,11 @@ const (
 )
 func resourceGeofenceCollectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	in := &locationservice.CreateGeofenceCollectionInput{
 		CollectionName: aws.String(d.Get("collection_name").(string)),
-		Tags:           GetTagsIn(ctx),
+		Tags:           getTagsIn(ctx),
 	}
 	if v, ok := d.GetOk("description"); ok && v != "" {
@@ -113,7 +116,7 @@ func resourceGeofenceCollectionCreate(ctx context.Context, d *schema.ResourceDat
 }
 func resourceGeofenceCollectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	out, err := findGeofenceCollectionByName(ctx, conn, d.Id())
@@ -138,7 +141,7 @@ func resourceGeofenceCollectionRead(ctx context.Context, d *schema.ResourceData,
 }
 func resourceGeofenceCollectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	update := false
@@ -165,7 +168,7 @@ func resourceGeofenceCollectionUpdate(ctx context.Context, d *schema.ResourceDat
 }
 func resourceGeofenceCollectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	log.Printf("[INFO] Deleting Location GeofenceCollection %s", d.Id())
diff --git a/internal/service/location/geofence_collection_data_source.go b/internal/service/location/geofence_collection_data_source.go
index 80acad84747..4ada93f47b1 100644
--- a/internal/service/location/geofence_collection_data_source.go
+++ b/internal/service/location/geofence_collection_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -56,7 +59,7 @@ const (
 )
 func dataSourceGeofenceCollectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	name := d.Get("collection_name").(string)
diff --git a/internal/service/location/geofence_collection_data_source_test.go b/internal/service/location/geofence_collection_data_source_test.go
index 4c1607c59b2..a9103e679a5 100644
--- a/internal/service/location/geofence_collection_data_source_test.go
+++ b/internal/service/location/geofence_collection_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
diff --git a/internal/service/location/geofence_collection_test.go b/internal/service/location/geofence_collection_test.go
index 7f505ae7b40..df948a73aed 100644
--- a/internal/service/location/geofence_collection_test.go
+++ b/internal/service/location/geofence_collection_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
@@ -183,7 +186,7 @@ func TestAccLocationGeofenceCollection_tags(t *testing.T) {
 func testAccCheckGeofenceCollectionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_location_geofence_collection" {
@@ -220,7 +223,7 @@ func testAccCheckGeofenceCollectionExists(ctx context.Context, name string) reso
 			return create.Error(names.Location, create.ErrActionCheckingExistence, tflocation.ResNameGeofenceCollection, name, errors.New("not set"))
 		}
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx)
 		_, err := conn.DescribeGeofenceCollectionWithContext(ctx, &locationservice.DescribeGeofenceCollectionInput{
 			CollectionName: aws.String(rs.Primary.ID),
 		})
diff --git a/internal/service/location/map.go b/internal/service/location/map.go
index 5ce053a7932..b8868c0a733 100644
--- a/internal/service/location/map.go
+++ b/internal/service/location/map.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -78,10 +81,10 @@ func ResourceMap() *schema.Resource {
 func resourceMapCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.CreateMapInput{
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}
 	if v, ok := d.GetOk("configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
@@ -113,7 +116,7 @@ func resourceMapCreate(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceMapRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.DescribeMapInput{
 		MapName: aws.String(d.Id()),
@@ -147,14 +150,14 @@ func resourceMapRead(ctx context.Context, d *schema.ResourceData, meta interface
 	d.Set("map_name", output.MapName)
 	d.Set("update_time", aws.TimeValue(output.UpdateTime).Format(time.RFC3339))
-	SetTagsOut(ctx, output.Tags)
+	setTagsOut(ctx, output.Tags)
 	return diags
 }
 func resourceMapUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	if d.HasChange("description") {
 		input := &locationservice.UpdateMapInput{
@@ -177,7 +180,7 @@ func resourceMapUpdate(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceMapDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.DeleteMapInput{
 		MapName: aws.String(d.Id()),
diff --git a/internal/service/location/map_data_source.go b/internal/service/location/map_data_source.go
index 29a3be3dd3c..8a0f8a6e94c 100644
--- a/internal/service/location/map_data_source.go
+++ b/internal/service/location/map_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -59,7 +62,7 @@ func DataSourceMap() *schema.Resource {
 func dataSourceMapRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.DescribeMapInput{}
diff --git a/internal/service/location/map_data_source_test.go b/internal/service/location/map_data_source_test.go
index b4e4f263bc3..47bd0c8f897 100644
--- a/internal/service/location/map_data_source_test.go
+++ b/internal/service/location/map_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
diff --git a/internal/service/location/map_test.go b/internal/service/location/map_test.go
index 002fa05749c..5d34a8094dc 100644
--- a/internal/service/location/map_test.go
+++ b/internal/service/location/map_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
@@ -154,7 +157,7 @@ func TestAccLocationMap_tags(t *testing.T) {
 func testAccCheckMapDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_location_map" {
@@ -192,7 +195,7 @@ func testAccCheckMapExists(ctx context.Context, resourceName string) resource.Te
 			return fmt.Errorf("resource not found: %s", resourceName)
 		}
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx)
 		input := &locationservice.DescribeMapInput{
 			MapName: aws.String(rs.Primary.ID),
diff --git a/internal/service/location/place_index.go b/internal/service/location/place_index.go
index 1b7b60614f2..6ba3436a0ca 100644
--- a/internal/service/location/place_index.go
+++ b/internal/service/location/place_index.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -83,10 +86,10 @@ func ResourcePlaceIndex() *schema.Resource {
 func resourcePlaceIndexCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.CreatePlaceIndexInput{
-		Tags: GetTagsIn(ctx),
+		Tags: getTagsIn(ctx),
 	}
 	if v, ok := d.GetOk("data_source"); ok {
@@ -122,7 +125,7 @@ func resourcePlaceIndexCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourcePlaceIndexRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.DescribePlaceIndexInput{
 		IndexName: aws.String(d.Id()),
@@ -157,7 +160,7 @@ func resourcePlaceIndexRead(ctx context.Context, d *schema.ResourceData, meta in
 	d.Set("index_arn", output.IndexArn)
 	d.Set("index_name", output.IndexName)
-	SetTagsOut(ctx, output.Tags)
+	setTagsOut(ctx, output.Tags)
 	d.Set("update_time", aws.TimeValue(output.UpdateTime).Format(time.RFC3339))
@@ -166,7 +169,7 @@ func resourcePlaceIndexRead(ctx context.Context, d *schema.ResourceData, meta in
 func resourcePlaceIndexUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	if d.HasChanges("data_source_configuration", "description") {
 		input := &locationservice.UpdatePlaceIndexInput{
@@ -195,7 +198,7 @@ func resourcePlaceIndexUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourcePlaceIndexDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.DeletePlaceIndexInput{
 		IndexName: aws.String(d.Id()),
diff --git a/internal/service/location/place_index_data_source.go b/internal/service/location/place_index_data_source.go
index 698935de7c0..9b90d7c9bb4 100644
--- a/internal/service/location/place_index_data_source.go
+++ b/internal/service/location/place_index_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -63,7 +66,7 @@ func DataSourcePlaceIndex() *schema.Resource {
 func dataSourcePlaceIndexRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	input := &locationservice.DescribePlaceIndexInput{}
diff --git a/internal/service/location/place_index_data_source_test.go b/internal/service/location/place_index_data_source_test.go
index 2fb371440b0..70b8cfd79a3 100644
--- a/internal/service/location/place_index_data_source_test.go
+++ b/internal/service/location/place_index_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
diff --git a/internal/service/location/place_index_test.go b/internal/service/location/place_index_test.go
index 88bbabb0692..dda9fbad032 100644
--- a/internal/service/location/place_index_test.go
+++ b/internal/service/location/place_index_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
@@ -191,7 +194,7 @@ func TestAccLocationPlaceIndex_tags(t *testing.T) {
 func testAccCheckPlaceIndexDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx)
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_location_place_index" {
@@ -229,7 +232,7 @@ func testAccCheckPlaceIndexExists(ctx context.Context, resourceName string) reso
 			return fmt.Errorf("resource not found: %s", resourceName)
 		}
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx)
 		input := &locationservice.DescribePlaceIndexInput{
 			IndexName: aws.String(rs.Primary.ID),
diff --git a/internal/service/location/route_calculator.go b/internal/service/location/route_calculator.go
index 579d156001f..8caaf7dbc6c 100644
--- a/internal/service/location/route_calculator.go
+++ b/internal/service/location/route_calculator.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -76,12 +79,12 @@ func ResourceRouteCalculator() *schema.Resource {
 }
 func resourceRouteCalculatorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	in := &locationservice.CreateRouteCalculatorInput{
 		CalculatorName: aws.String(d.Get("calculator_name").(string)),
 		DataSource:     aws.String(d.Get("data_source").(string)),
-		Tags:           GetTagsIn(ctx),
+		Tags:           getTagsIn(ctx),
 	}
 	if v, ok := d.GetOk("description"); ok {
@@ -103,7 +106,7 @@ func resourceRouteCalculatorCreate(ctx context.Context, d *schema.ResourceData,
 }
 func resourceRouteCalculatorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	out, err := findRouteCalculatorByName(ctx, conn, d.Id())
@@ -128,7 +131,7 @@ func resourceRouteCalculatorRead(ctx context.Context, d *schema.ResourceData, me
 }
 func resourceRouteCalculatorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	update := false
@@ -155,7 +158,7 @@ func resourceRouteCalculatorUpdate(ctx context.Context, d *schema.ResourceData,
 }
 func resourceRouteCalculatorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	log.Printf("[INFO] Deleting Location Service Route Calculator %s", d.Id())
diff --git a/internal/service/location/route_calculator_data_source.go b/internal/service/location/route_calculator_data_source.go
index f5434bff1d2..e419f8cdd21 100644
--- a/internal/service/location/route_calculator_data_source.go
+++ b/internal/service/location/route_calculator_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location
 import (
@@ -49,7 +52,7 @@ func DataSourceRouteCalculator() *schema.Resource {
 }
 func dataSourceRouteCalculatorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LocationConn()
+	conn := meta.(*conns.AWSClient).LocationConn(ctx)
 	out, err := findRouteCalculatorByName(ctx, conn, d.Get("calculator_name").(string))
 	if err != nil {
diff --git a/internal/service/location/route_calculator_data_source_test.go b/internal/service/location/route_calculator_data_source_test.go
index cb25a219a4d..6ac8100697b 100644
--- a/internal/service/location/route_calculator_data_source_test.go
+++ b/internal/service/location/route_calculator_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package location_test
 import (
diff --git a/internal/service/location/route_calculator_test.go b/internal/service/location/route_calculator_test.go
index 77883223415..38a8c63b42a 100644
--- a/internal/service/location/route_calculator_test.go
+++ b/internal/service/location/route_calculator_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package location_test import ( @@ -153,7 +156,7 @@ func TestAccLocationRouteCalculator_tags(t *testing.T) { func testAccCheckRouteCalculatorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_location_route_calculator" { @@ -190,7 +193,7 @@ func testAccCheckRouteCalculatorExists(ctx context.Context, name string) resourc return fmt.Errorf("No Location Service Route Calculator is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx) _, err := conn.DescribeRouteCalculatorWithContext(ctx, &locationservice.DescribeRouteCalculatorInput{ CalculatorName: aws.String(rs.Primary.ID), }) diff --git a/internal/service/location/service_package_gen.go b/internal/service/location/service_package_gen.go index 6743275d913..e913d27b0bc 100644 --- a/internal/service/location/service_package_gen.go +++ b/internal/service/location/service_package_gen.go @@ -5,6 +5,10 @@ package location import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + locationservice_sdkv1 "github.com/aws/aws-sdk-go/service/locationservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -105,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Location } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*locationservice_sdkv1.LocationService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return locationservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/location/sweep.go b/internal/service/location/sweep.go index f302b380f63..fcfc9d85450 100644 --- a/internal/service/location/sweep.go +++ b/internal/service/location/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/locationservice" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -49,13 +51,13 @@ func init() { func sweepGeofenceCollections(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).LocationConn() + conn := client.LocationConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -83,7 +85,7 @@ func sweepGeofenceCollections(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Location Service Geofence Collection for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Location Service Geofence Collection for %s: %w", region, err)) } @@ -97,13 +99,13 @@ 
func sweepGeofenceCollections(region string) error { func sweepMaps(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).LocationConn() + conn := client.LocationConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -131,7 +133,7 @@ func sweepMaps(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Location Service Map for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Location Service Map for %s: %w", region, err)) } @@ -145,13 +147,13 @@ func sweepMaps(region string) error { func sweepPlaceIndexes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).LocationConn() + conn := client.LocationConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -179,7 +181,7 @@ func sweepPlaceIndexes(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Location Service Place Index for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Location Service Place Index for %s: %w", region, err)) } @@ -193,13 +195,13 @@ func sweepPlaceIndexes(region string) error { func sweepRouteCalculators(region string) error { ctx := sweep.Context(region) - client, err := 
sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).LocationConn() + conn := client.LocationConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -227,7 +229,7 @@ func sweepRouteCalculators(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Location Service Route Calculator for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Location Service Route Calculator for %s: %w", region, err)) } @@ -241,13 +243,13 @@ func sweepRouteCalculators(region string) error { func sweepTrackers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).LocationConn() + conn := client.LocationConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -275,7 +277,7 @@ func sweepTrackers(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Location Service Tracker for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Location Service Tracker for %s: %w", region, err)) } @@ -289,13 +291,13 @@ func sweepTrackers(region string) error { func sweepTrackerAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil 
{ return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).LocationConn() + conn := client.LocationConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -340,7 +342,7 @@ func sweepTrackerAssociations(region string) error { errs = multierror.Append(errs, fmt.Errorf("error listing Location Service Tracker for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Location Service Tracker Association for %s: %w", region, err)) } diff --git a/internal/service/location/tags_gen.go b/internal/service/location/tags_gen.go index d1e0be6f0a0..2e42431a8c2 100644 --- a/internal/service/location/tags_gen.go +++ b/internal/service/location/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists location service tags. +// listTags lists location service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn locationserviceiface.LocationServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn locationserviceiface.LocationServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &locationservice.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn locationserviceiface.LocationServiceAPI, // ListTags lists location service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).LocationConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).LocationConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from location service tags. +// KeyValueTags creates tftags.KeyValueTags from location service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns location service tags from Context. +// getTagsIn returns location service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets location service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets location service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates location service tags. +// updateTags updates location service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn locationserviceiface.LocationServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn locationserviceiface.LocationServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn locationserviceiface.LocationServiceAP // UpdateTags updates location service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).LocationConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).LocationConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/location/tracker.go b/internal/service/location/tracker.go index ba28a018199..c2b7e894976 100644 --- a/internal/service/location/tracker.go +++ b/internal/service/location/tracker.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package location import ( @@ -73,10 +76,10 @@ func ResourceTracker() *schema.Resource { func resourceTrackerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) input := &locationservice.CreateTrackerInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -112,7 +115,7 @@ func resourceTrackerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceTrackerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) input := &locationservice.DescribeTrackerInput{ TrackerName: aws.String(d.Id()), @@ -139,7 +142,7 @@ func resourceTrackerRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("kms_key_id", output.KmsKeyId) d.Set("position_filtering", output.PositionFiltering) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) d.Set("tracker_arn", output.TrackerArn) d.Set("tracker_name", output.TrackerName) @@ -150,7 +153,7 @@ func resourceTrackerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceTrackerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) if d.HasChanges("description", "position_filtering") { input := &locationservice.UpdateTrackerInput{ @@ -177,7 +180,7 @@ func resourceTrackerUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceTrackerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LocationConn() + conn := 
meta.(*conns.AWSClient).LocationConn(ctx) input := &locationservice.DeleteTrackerInput{ TrackerName: aws.String(d.Id()), diff --git a/internal/service/location/tracker_association.go b/internal/service/location/tracker_association.go index 376365c6c93..458ed7fa5c9 100644 --- a/internal/service/location/tracker_association.go +++ b/internal/service/location/tracker_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package location import ( @@ -60,7 +63,7 @@ const ( ) func resourceTrackerAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) consumerArn := d.Get("consumer_arn").(string) trackerName := d.Get("tracker_name").(string) @@ -85,7 +88,7 @@ func resourceTrackerAssociationCreate(ctx context.Context, d *schema.ResourceDat } func resourceTrackerAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) trackerAssociationId, err := TrackerAssociationParseID(d.Id()) if err != nil { @@ -111,7 +114,7 @@ func resourceTrackerAssociationRead(ctx context.Context, d *schema.ResourceData, } func resourceTrackerAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) log.Printf("[INFO] Deleting Location TrackerAssociation %s", d.Id()) diff --git a/internal/service/location/tracker_association_data_source.go b/internal/service/location/tracker_association_data_source.go index a22586c4037..2c7c248ce25 100644 --- a/internal/service/location/tracker_association_data_source.go +++ b/internal/service/location/tracker_association_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package location import ( @@ -38,7 +41,7 @@ const ( ) func dataSourceTrackerAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) consumerArn := d.Get("consumer_arn").(string) trackerName := d.Get("tracker_name").(string) diff --git a/internal/service/location/tracker_association_data_source_test.go b/internal/service/location/tracker_association_data_source_test.go index 47f0770a188..b6c383f2cdf 100644 --- a/internal/service/location/tracker_association_data_source_test.go +++ b/internal/service/location/tracker_association_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package location_test import ( diff --git a/internal/service/location/tracker_association_test.go b/internal/service/location/tracker_association_test.go index 4d156fd5d07..1dc8c137858 100644 --- a/internal/service/location/tracker_association_test.go +++ b/internal/service/location/tracker_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package location_test import ( @@ -126,7 +129,7 @@ func TestAccLocationTrackerAssociation_disappears(t *testing.T) { func testAccCheckTrackerAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_location_tracker_association" { @@ -172,7 +175,7 @@ func testAccCheckTrackerAssociationExists(ctx context.Context, name string) reso return create.Error(names.Location, create.ErrActionCheckingExistence, tflocation.ResNameTrackerAssociation, name, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx) err = tflocation.FindTrackerAssociationByTrackerNameAndConsumerARN(ctx, conn, trackerAssociationId.TrackerName, trackerAssociationId.ConsumerARN) diff --git a/internal/service/location/tracker_associations_data_source.go b/internal/service/location/tracker_associations_data_source.go index 73b420f7eb3..9bf7ab09b80 100644 --- a/internal/service/location/tracker_associations_data_source.go +++ b/internal/service/location/tracker_associations_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package location import ( @@ -38,7 +41,7 @@ const ( ) func dataSourceTrackerAssociationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) name := d.Get("tracker_name").(string) diff --git a/internal/service/location/tracker_associations_data_source_test.go b/internal/service/location/tracker_associations_data_source_test.go index 969ba8e16bf..33675eb3544 100644 --- a/internal/service/location/tracker_associations_data_source_test.go +++ b/internal/service/location/tracker_associations_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package location_test import ( diff --git a/internal/service/location/tracker_data_source.go b/internal/service/location/tracker_data_source.go index 2036084ccc3..f3cd24651c2 100644 --- a/internal/service/location/tracker_data_source.go +++ b/internal/service/location/tracker_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package location import ( @@ -55,7 +58,7 @@ func DataSourceTracker() *schema.Resource { func dataSourceTrackerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).LocationConn() + conn := meta.(*conns.AWSClient).LocationConn(ctx) input := &locationservice.DescribeTrackerInput{ TrackerName: aws.String(d.Get("tracker_name").(string)), diff --git a/internal/service/location/tracker_data_source_test.go b/internal/service/location/tracker_data_source_test.go index 6bfe19bdfa7..6bab25d0192 100644 --- a/internal/service/location/tracker_data_source_test.go +++ b/internal/service/location/tracker_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package location_test import ( diff --git a/internal/service/location/tracker_test.go b/internal/service/location/tracker_test.go index 02589f53f67..b8010536da4 100644 --- a/internal/service/location/tracker_test.go +++ b/internal/service/location/tracker_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package location_test import ( @@ -215,7 +218,7 @@ func TestAccLocationTracker_tags(t *testing.T) { func testAccCheckTrackerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_location_tracker" { @@ -253,7 +256,7 @@ func testAccCheckTrackerExists(ctx context.Context, resourceName string) resourc return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LocationConn(ctx) input := &locationservice.DescribeTrackerInput{ TrackerName: aws.String(rs.Primary.ID), diff --git a/internal/service/logs/README.md b/internal/service/logs/README.md index 99e0a3925a5..f91fe2830b6 100644 --- a/internal/service/logs/README.md +++ b/internal/service/logs/README.md @@ -1,4 +1,4 @@ -# Terraform AWS Provider CloudWatchLogs Package +# Terraform AWS Provider CloudWatch Logs Package This area is primarily for AWS provider contributors and maintainers. For information on _using_ Terraform and the AWS provider, see the links below. @@ -6,5 +6,5 @@ This area is primarily for AWS provider contributors and maintainers. For inform * [Find out about contributing](https://hashicorp.github.io/terraform-provider-aws/#contribute) to the AWS provider! 
* AWS Provider Docs: [Home](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) -* AWS Provider Docs: [One of the CloudWatchLogs resources](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_destination) -* AWS Docs: [AWS SDK for Go CloudWatchLogs](https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/) +* AWS Provider Docs: [One of the CloudWatch Logs resources](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_destination) +* AWS Docs: [AWS SDK for Go CloudWatch Logs](https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/) diff --git a/internal/service/logs/arn.go b/internal/service/logs/arn.go index 275e3d9252b..77e238ca8dd 100644 --- a/internal/service/logs/arn.go +++ b/internal/service/logs/arn.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logs import ( diff --git a/internal/service/logs/arn_test.go b/internal/service/logs/arn_test.go index c8130b9a726..f164cbbe007 100644 --- a/internal/service/logs/arn_test.go +++ b/internal/service/logs/arn_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logs_test import ( diff --git a/internal/service/logs/data_protection_policy.go b/internal/service/logs/data_protection_policy.go index d3e5a8bbddb..873b9843dff 100644 --- a/internal/service/logs/data_protection_policy.go +++ b/internal/service/logs/data_protection_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logs import ( @@ -52,7 +55,7 @@ func resourceDataProtectionPolicy() *schema.Resource { } func resourceDataProtectionPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LogsClient() + conn := meta.(*conns.AWSClient).LogsClient(ctx) logGroupName := d.Get("log_group_name").(string) @@ -81,7 +84,7 @@ func resourceDataProtectionPolicyPut(ctx context.Context, d *schema.ResourceData } func resourceDataProtectionPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LogsClient() + conn := meta.(*conns.AWSClient).LogsClient(ctx) output, err := FindDataProtectionPolicyByID(ctx, conn, d.Id()) @@ -115,7 +118,7 @@ func resourceDataProtectionPolicyRead(ctx context.Context, d *schema.ResourceDat } func resourceDataProtectionPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LogsClient() + conn := meta.(*conns.AWSClient).LogsClient(ctx) log.Printf("[DEBUG] Deleting CloudWatch Logs Data Protection Policy: %s", d.Id()) _, err := conn.DeleteDataProtectionPolicy(ctx, &cloudwatchlogs.DeleteDataProtectionPolicyInput{ diff --git a/internal/service/logs/data_protection_policy_document_data_source.go b/internal/service/logs/data_protection_policy_document_data_source.go index 8680f8bc85c..ae30b9ac07c 100644 --- a/internal/service/logs/data_protection_policy_document_data_source.go +++ b/internal/service/logs/data_protection_policy_document_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logs import ( diff --git a/internal/service/logs/data_protection_policy_document_data_source_test.go b/internal/service/logs/data_protection_policy_document_data_source_test.go index 50f8d2b6ec9..8005d26e212 100644 --- a/internal/service/logs/data_protection_policy_document_data_source_test.go +++ b/internal/service/logs/data_protection_policy_document_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logs_test import ( @@ -169,9 +172,9 @@ resource "aws_kinesis_firehose_delivery_stream" "audit" { depends_on = [aws_iam_role_policy.firehose] name = %[2]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose.arn bucket_arn = aws_s3_bucket.audit.arn } diff --git a/internal/service/logs/data_protection_policy_test.go b/internal/service/logs/data_protection_policy_test.go index 7e30ef68a42..23dda5f96b6 100644 --- a/internal/service/logs/data_protection_policy_test.go +++ b/internal/service/logs/data_protection_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logs_test import ( @@ -211,7 +214,7 @@ func TestAccLogsDataProtectionPolicy_policyDocument(t *testing.T) { func testAccCheckDataProtectionPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LogsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LogsClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_log_data_protection_policy" { @@ -246,7 +249,7 @@ func testAccCheckDataProtectionPolicyExists(ctx context.Context, n string, v *cl return fmt.Errorf("No CloudWatch Logs Data Protection Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LogsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).LogsClient(ctx) output, err := tflogs.FindDataProtectionPolicyByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/logs/destination.go b/internal/service/logs/destination.go index 5e617a5258e..754a622cec1 100644 --- a/internal/service/logs/destination.go +++ b/internal/service/logs/destination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logs import ( @@ -69,7 +72,7 @@ const ( ) func resourceDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).LogsConn() + conn := meta.(*conns.AWSClient).LogsConn(ctx) name := d.Get("name").(string) input := &cloudwatchlogs.PutDestinationInput{ @@ -91,7 +94,7 @@ func resourceDestinationCreate(ctx context.Context, d *schema.ResourceData, meta // Although PutDestinationInput has a Tags field, specifying tags there results in // "InvalidParameterException: Could not deliver test message to specified destination. Check if the destination is valid." 
-	if err := createTags(ctx, conn, aws.StringValue(destination.Arn), GetTagsIn(ctx)); err != nil {
+	if err := createTags(ctx, conn, aws.StringValue(destination.Arn), getTagsIn(ctx)); err != nil {
 		return diag.Errorf("setting CloudWatch Logs Destination (%s) tags: %s", d.Id(), err)
 	}
@@ -99,7 +102,7 @@
 }

 func resourceDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	destination, err := FindDestinationByName(ctx, conn, d.Id())
@@ -122,7 +125,7 @@
 }

 func resourceDestinationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &cloudwatchlogs.PutDestinationInput{
@@ -144,7 +147,7 @@
 }

 func resourceDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	log.Printf("[INFO] Deleting CloudWatch Logs Destination: %s", d.Id())
 	_, err := conn.DeleteDestinationWithContext(ctx, &cloudwatchlogs.DeleteDestinationInput{
diff --git a/internal/service/logs/destination_policy.go b/internal/service/logs/destination_policy.go
index 678a11bf6d8..c6e329444e6 100644
--- a/internal/service/logs/destination_policy.go
+++ b/internal/service/logs/destination_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -52,7 +55,7 @@ func resourceDestinationPolicy() *schema.Resource {
 }

 func resourceDestinationPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	name := d.Get("destination_name").(string)
 	input := &cloudwatchlogs.PutDestinationPolicyInput{
@@ -78,7 +81,7 @@ func resourceDestinationPolicyPut(ctx context.Context, d *schema.ResourceData, m
 }

 func resourceDestinationPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	destination, err := FindDestinationByName(ctx, conn, d.Id())
diff --git a/internal/service/logs/destination_policy_test.go b/internal/service/logs/destination_policy_test.go
index 8803e526a9e..4b5a59d6b7a 100644
--- a/internal/service/logs/destination_policy_test.go
+++ b/internal/service/logs/destination_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -62,7 +65,7 @@ func testAccCheckDestinationPolicyExists(ctx context.Context, n string, v *strin
 			return fmt.Errorf("No CloudWatch Logs Destination Policy ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		output, err := tflogs.FindDestinationByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/logs/destination_test.go b/internal/service/logs/destination_test.go
index 2d110c3e600..6355fc10d41 100644
--- a/internal/service/logs/destination_test.go
+++ b/internal/service/logs/destination_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -194,7 +197,7 @@ func TestAccLogsDestination_update(t *testing.T) {

 func testAccCheckDestinationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_log_destination" {
@@ -228,7 +231,7 @@ func testAccCheckDestinationExists(ctx context.Context, n string, v *cloudwatchl
 			return fmt.Errorf("No CloudWatch Logs Destination ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		output, err := tflogs.FindDestinationByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/logs/exports_test.go b/internal/service/logs/exports_test.go
index c416d0831d7..ddc80b16bd6 100644
--- a/internal/service/logs/exports_test.go
+++ b/internal/service/logs/exports_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 // Exports for use in tests only.
diff --git a/internal/service/logs/generate.go b/internal/service/logs/generate.go
index 65d6d979bb2..4393f4b208b 100644
--- a/internal/service/logs/generate.go
+++ b/internal/service/logs/generate.go
@@ -1,6 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeQueryDefinitions,DescribeResourcePolicies
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags -CreateTags
-//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsLogGroup -ListTagsInIDElem=LogGroupName -ListTagsFunc=ListLogGroupTags -TagOp=TagLogGroup -TagInIDElem=LogGroupName -UntagOp=UntagLogGroup -UntagInTagsElem=Tags -UpdateTags -UpdateTagsFunc=UpdateLogGroupTags -- log_group_tags_gen.go
+//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsLogGroup -ListTagsInIDElem=LogGroupName -ListTagsFunc=listLogGroupTags -TagOp=TagLogGroup -TagInIDElem=LogGroupName -UntagOp=UntagLogGroup -UntagInTagsElem=Tags -UpdateTags -UpdateTagsFunc=updateLogGroupTags -- log_group_tags_gen.go
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package logs
diff --git a/internal/service/logs/group.go b/internal/service/logs/group.go
index ac1e7ac5cfe..0b26749899a 100644
--- a/internal/service/logs/group.go
+++ b/internal/service/logs/group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -76,12 +79,12 @@ func resourceGroup() *schema.Resource {
 }

 func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string))
 	input := &cloudwatchlogs.CreateLogGroupInput{
 		LogGroupName: aws.String(name),
-		Tags:         GetTagsIn(ctx),
+		Tags:         getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("kms_key_id"); ok {
@@ -115,7 +118,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter
 }

 func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	lg, err := FindLogGroupByName(ctx, conn, d.Id())
@@ -135,19 +138,19 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa
 	d.Set("name_prefix", create.NamePrefixFromName(aws.StringValue(lg.LogGroupName)))
 	d.Set("retention_in_days", lg.RetentionInDays)

-	tags, err := ListLogGroupTags(ctx, conn, d.Id())
+	tags, err := listLogGroupTags(ctx, conn, d.Id())

 	if err != nil {
 		return diag.Errorf("listing tags for CloudWatch Logs Log Group (%s): %s", d.Id(), err)
 	}

-	SetTagsOut(ctx, Tags(tags))
+	setTagsOut(ctx, Tags(tags))

 	return nil
 }

 func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	if d.HasChange("retention_in_days") {
 		if v, ok := d.GetOk("retention_in_days"); ok {
@@ -198,7 +201,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter
 	if d.HasChange("tags_all") {
 		o, n := d.GetChange("tags_all")

-		if err := UpdateLogGroupTags(ctx, conn, d.Id(), o, n); err != nil {
+		if err := updateLogGroupTags(ctx, conn, d.Id(), o, n); err != nil {
 			return diag.Errorf("updating CloudWatch Logs Log Group (%s) tags: %s", d.Id(), err)
 		}
 	}
@@ -212,7 +215,7 @@ func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta inter
 		return nil
 	}

-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	log.Printf("[INFO] Deleting CloudWatch Logs Log Group: %s", d.Id())
 	_, err := conn.DeleteLogGroupWithContext(ctx, &cloudwatchlogs.DeleteLogGroupInput{
diff --git a/internal/service/logs/group_data_source.go b/internal/service/logs/group_data_source.go
index 2c7062edabd..d1c8bdd6e81 100644
--- a/internal/service/logs/group_data_source.go
+++ b/internal/service/logs/group_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -42,7 +45,7 @@ func dataSourceGroup() *schema.Resource {
 }

 func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig

 	name := d.Get("name").(string)
@@ -58,7 +61,7 @@ func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta inter
 	d.Set("kms_key_id", logGroup.KmsKeyId)
 	d.Set("retention_in_days", logGroup.RetentionInDays)

-	tags, err := ListLogGroupTags(ctx, conn, name)
+	tags, err := listLogGroupTags(ctx, conn, name)

 	if err != nil {
 		return diag.Errorf("listing tags for CloudWatch Logs Log Group (%s): %s", name, err)
diff --git a/internal/service/logs/group_data_source_test.go b/internal/service/logs/group_data_source_test.go
index 9f0d5ec346f..b698170ac65 100644
--- a/internal/service/logs/group_data_source_test.go
+++ b/internal/service/logs/group_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
diff --git a/internal/service/logs/group_test.go b/internal/service/logs/group_test.go
index 649772c0ae8..ff7cf546527 100644
--- a/internal/service/logs/group_test.go
+++ b/internal/service/logs/group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -326,7 +329,7 @@ func testAccCheckGroupExists(ctx context.Context, t *testing.T, n string, v *clo
 			return fmt.Errorf("No CloudWatch Logs Log Group ID is set")
 		}

-		conn := acctest.ProviderMeta(t).LogsConn()
+		conn := acctest.ProviderMeta(t).LogsConn(ctx)

 		output, err := tflogs.FindLogGroupByName(ctx, conn, rs.Primary.ID)
@@ -342,7 +345,7 @@ func testAccCheckGroupExists(ctx context.Context, t *testing.T, n string, v *clo

 func testAccCheckGroupDestroy(ctx context.Context, t *testing.T) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.ProviderMeta(t).LogsConn()
+		conn := acctest.ProviderMeta(t).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_log_group" {
@@ -368,7 +371,7 @@ func testAccCheckGroupDestroy(ctx context.Context, t *testing.T) resource.TestCh

 func testAccCheckGroupNoDestroy(ctx context.Context, t *testing.T) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.ProviderMeta(t).LogsConn()
+		conn := acctest.ProviderMeta(t).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_log_group" {
diff --git a/internal/service/logs/groups_data_source.go b/internal/service/logs/groups_data_source.go
index 9cc08c4a461..7edb674bc7a 100644
--- a/internal/service/logs/groups_data_source.go
+++ b/internal/service/logs/groups_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -35,7 +38,7 @@ func dataSourceGroups() *schema.Resource {
 }

 func dataSourceGroupsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	input := &cloudwatchlogs.DescribeLogGroupsInput{}
diff --git a/internal/service/logs/groups_data_source_test.go b/internal/service/logs/groups_data_source_test.go
index 1b6ea9484ce..af7e8ba2ede 100644
--- a/internal/service/logs/groups_data_source_test.go
+++ b/internal/service/logs/groups_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -52,10 +55,10 @@ func TestAccLogsGroupsDataSource_noPrefix(t *testing.T) {
 			{
 				Config: testAccGroupsDataSourceConfig_noPrefix(rName),
 				Check: resource.ComposeAggregateTestCheckFunc(
-					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "arns.#", "1"),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "arns.#", 1),
 					resource.TestCheckTypeSetElemAttrPair(dataSourceName, "arns.*", resource1Name, "arn"),
 					resource.TestCheckTypeSetElemAttrPair(dataSourceName, "arns.*", resource2Name, "arn"),
-					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "log_group_names.#", "1"),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "log_group_names.#", 1),
 					resource.TestCheckTypeSetElemAttrPair(dataSourceName, "log_group_names.*", resource1Name, "name"),
 					resource.TestCheckTypeSetElemAttrPair(dataSourceName, "log_group_names.*", resource2Name, "name"),
 				),
diff --git a/internal/service/logs/log_group_tags_gen.go b/internal/service/logs/log_group_tags_gen.go
index 08fd16fd2d7..999146a7e79 100644
--- a/internal/service/logs/log_group_tags_gen.go
+++ b/internal/service/logs/log_group_tags_gen.go
@@ -8,16 +8,14 @@ import (

 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
 	"github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
-	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListLogGroupTags lists logs service tags.
+// listLogGroupTags lists logs service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListLogGroupTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string) (tftags.KeyValueTags, error) {
+func listLogGroupTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &cloudwatchlogs.ListTagsLogGroupInput{
 		LogGroupName: aws.String(identifier),
 	}
@@ -31,26 +29,10 @@
 	return KeyValueTags(ctx, output.Tags), nil
 }

-// ListLogGroupTags lists logs service tags and set them in Context.
-// It is called from outside this package.
-func (p *servicePackage) ListLogGroupTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListLogGroupTags(ctx, meta.(*conns.AWSClient).LogsConn(), identifier)
-
-	if err != nil {
-		return err
-	}
-
-	if inContext, ok := tftags.FromContext(ctx); ok {
-		inContext.TagsOut = types.Some(tags)
-	}
-
-	return nil
-}
-
-// UpdateLogGroupTags updates logs service tags.
+// updateLogGroupTags updates logs service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateLogGroupTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateLogGroupTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -86,9 +68,3 @@

 	return nil
 }
-
-// UpdateLogGroupTags updates logs service tags.
-// It is called from outside this package.
-func (p *servicePackage) UpdateLogGroupTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateLogGroupTags(ctx, meta.(*conns.AWSClient).LogsConn(), identifier, oldTags, newTags)
-}
diff --git a/internal/service/logs/metric_filter.go b/internal/service/logs/metric_filter.go
index ad3fb451f51..ad17c0b3a0b 100644
--- a/internal/service/logs/metric_filter.go
+++ b/internal/service/logs/metric_filter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -15,9 +18,9 @@ import (
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
-	"github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable"
 	"github.com/hashicorp/terraform-provider-aws/internal/flex"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
+	"github.com/hashicorp/terraform-provider-aws/internal/types/nullable"
 )

 // @SDKResource("aws_cloudwatch_log_metric_filter")
@@ -102,7 +105,7 @@ func resourceMetricFilter() *schema.Resource {
 }

 func resourceMetricFilterPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	name := d.Get("name").(string)
 	logGroupName := d.Get("log_group_name").(string)
@@ -134,7 +137,7 @@ func resourceMetricFilterPut(ctx context.Context, d *schema.ResourceData, meta i
 }

 func resourceMetricFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	mf, err := FindMetricFilterByTwoPartKey(ctx, conn, d.Get("log_group_name").(string), d.Id())
@@ -159,7 +162,7 @@ func resourceMetricFilterRead(ctx context.Context, d *schema.ResourceData, meta
 }

 func resourceMetricFilterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	// Creating multiple filters on the same log group can sometimes cause
 	// clashes, so use a mutex here (and on creation) to serialise actions on
diff --git a/internal/service/logs/metric_filter_test.go b/internal/service/logs/metric_filter_test.go
index 72fcfad4b09..8d6a824cf8e 100644
--- a/internal/service/logs/metric_filter_test.go
+++ b/internal/service/logs/metric_filter_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -202,7 +205,7 @@ func testAccCheckMetricFilterExists(ctx context.Context, n string, v *cloudwatch
 			return fmt.Errorf("No CloudWatch Logs Metric Filter ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		output, err := tflogs.FindMetricFilterByTwoPartKey(ctx, conn, rs.Primary.Attributes["log_group_name"], rs.Primary.ID)
@@ -218,7 +221,7 @@ func testAccCheckMetricFilterExists(ctx context.Context, n string, v *cloudwatch

 func testAccCheckMetricFilterDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_log_metric_filter" {
diff --git a/internal/service/logs/query_definition.go b/internal/service/logs/query_definition.go
index 8ba2e728a24..09c8ae2e87e 100644
--- a/internal/service/logs/query_definition.go
+++ b/internal/service/logs/query_definition.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -61,7 +64,7 @@ func resourceQueryDefinition() *schema.Resource {
 }

 func resourceQueryDefinitionPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	name := d.Get("name").(string)
 	input := &cloudwatchlogs.PutQueryDefinitionInput{
@@ -91,7 +94,7 @@ func resourceQueryDefinitionPut(ctx context.Context, d *schema.ResourceData, met
 }

 func resourceQueryDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	result, err := FindQueryDefinitionByTwoPartKey(ctx, conn, d.Get("name").(string), d.Id())
@@ -114,7 +117,7 @@ func resourceQueryDefinitionRead(ctx context.Context, d *schema.ResourceData, me
 }

 func resourceQueryDefinitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	log.Printf("[INFO] Deleting CloudWatch Logs Query Definition: %s", d.Id())
 	_, err := conn.DeleteQueryDefinitionWithContext(ctx, &cloudwatchlogs.DeleteQueryDefinitionInput{
diff --git a/internal/service/logs/query_definition_test.go b/internal/service/logs/query_definition_test.go
index 2a6f9fba112..9bbb239cfe9 100644
--- a/internal/service/logs/query_definition_test.go
+++ b/internal/service/logs/query_definition_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -183,7 +186,7 @@ func testAccCheckQueryDefinitionExists(ctx context.Context, n string, v *cloudwa
 			return fmt.Errorf("No CloudWatch Logs Query Definition ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		output, err := tflogs.FindQueryDefinitionByTwoPartKey(ctx, conn, rs.Primary.Attributes["name"], rs.Primary.ID)
@@ -199,7 +202,7 @@ func testAccCheckQueryDefinitionExists(ctx context.Context, n string, v *cloudwa

 func testAccCheckQueryDefinitionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_query_definition" {
diff --git a/internal/service/logs/resource_policy.go b/internal/service/logs/resource_policy.go
index 8de01503b9d..f07d02e81b3 100644
--- a/internal/service/logs/resource_policy.go
+++ b/internal/service/logs/resource_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -51,7 +54,7 @@ func resourceResourcePolicy() *schema.Resource {
 }

 func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	policy, err := structure.NormalizeJsonString(d.Get("policy_document").(string))
@@ -77,7 +80,7 @@ func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta
 }

 func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	resourcePolicy, err := FindResourcePolicyByName(ctx, conn, d.Id())
@@ -109,7 +112,7 @@ func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, met
 }

 func resourceResourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	log.Printf("[DEBUG] Deleting CloudWatch Logs Resource Policy: %s", d.Id())
 	_, err := conn.DeleteResourcePolicyWithContext(ctx, &cloudwatchlogs.DeleteResourcePolicyInput{
diff --git a/internal/service/logs/resource_policy_test.go b/internal/service/logs/resource_policy_test.go
index ecf6f35863d..492988184bd 100644
--- a/internal/service/logs/resource_policy_test.go
+++ b/internal/service/logs/resource_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -95,7 +98,7 @@ func testAccCheckResourcePolicyExists(ctx context.Context, n string, v *cloudwat
 			return fmt.Errorf("No CloudWatch Logs Resource Policy ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		output, err := tflogs.FindResourcePolicyByName(ctx, conn, rs.Primary.ID)
@@ -111,7 +114,7 @@ func testAccCheckResourcePolicyExists(ctx context.Context, n string, v *cloudwat

 func testAccCheckResourcePolicyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_log_resource_policy" {
diff --git a/internal/service/logs/service_package_gen.go b/internal/service/logs/service_package_gen.go
index 0a61a60a97b..9afe91f33ed 100644
--- a/internal/service/logs/service_package_gen.go
+++ b/internal/service/logs/service_package_gen.go
@@ -5,6 +5,12 @@ package logs
 import (
 	"context"

+	aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws"
+	cloudwatchlogs_sdkv2 "github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs"
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	cloudwatchlogs_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatchlogs"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -87,4 +93,24 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.Logs
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudwatchlogs_sdkv1.CloudWatchLogs, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return cloudwatchlogs_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API.
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*cloudwatchlogs_sdkv2.Client, error) {
+	cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config))
+
+	return cloudwatchlogs_sdkv2.NewFromConfig(cfg, func(o *cloudwatchlogs_sdkv2.Options) {
+		if endpoint := config["endpoint"].(string); endpoint != "" {
+			o.EndpointResolver = cloudwatchlogs_sdkv2.EndpointResolverFromURL(endpoint)
+		}
+	}), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/logs/stream.go b/internal/service/logs/stream.go
index ec8055c7f1c..607ec4f5167 100644
--- a/internal/service/logs/stream.go
+++ b/internal/service/logs/stream.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -49,7 +52,7 @@ func resourceStream() *schema.Resource {
 }

 func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	name := d.Get("name").(string)
 	input := &cloudwatchlogs.CreateLogStreamInput{
@@ -77,7 +80,7 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte
 }

 func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	ls, err := FindLogStreamByTwoPartKey(ctx, conn, d.Get("log_group_name").(string), d.Id())
@@ -98,7 +101,7 @@ func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interf
 }

 func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	log.Printf("[INFO] Deleting CloudWatch Logs Log Stream: %s", d.Id())
 	_, err := conn.DeleteLogStreamWithContext(ctx, &cloudwatchlogs.DeleteLogStreamInput{
diff --git a/internal/service/logs/stream_test.go b/internal/service/logs/stream_test.go
index 27d79bfb93b..a4531bf883e 100644
--- a/internal/service/logs/stream_test.go
+++ b/internal/service/logs/stream_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs_test

 import (
@@ -103,7 +106,7 @@ func testAccCheckStreamExists(ctx context.Context, n string, v *cloudwatchlogs.L
 			return fmt.Errorf("No CloudWatch Logs Log Stream ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		output, err := tflogs.FindLogStreamByTwoPartKey(ctx, conn, rs.Primary.Attributes["log_group_name"], rs.Primary.ID)
@@ -119,7 +122,7 @@ func testAccCheckStreamExists(ctx context.Context, n string, v *cloudwatchlogs.L

 func testAccCheckStreamDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_cloudwatch_log_stream" {
diff --git a/internal/service/logs/subscription_filter.go b/internal/service/logs/subscription_filter.go
index 4341f13b62b..207be80ae67 100644
--- a/internal/service/logs/subscription_filter.go
+++ b/internal/service/logs/subscription_filter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package logs

 import (
@@ -73,7 +76,7 @@ func resourceSubscriptionFilter() *schema.Resource {
 }

 func resourceSubscriptionFilterPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	logGroupName := d.Get("log_group_name").(string)
 	name := d.Get("name").(string)
@@ -122,7 +125,7 @@ func resourceSubscriptionFilterPut(ctx context.Context, d *schema.ResourceData,
 }

 func resourceSubscriptionFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	subscriptionFilter, err := FindSubscriptionFilterByTwoPartKey(ctx, conn, d.Get("log_group_name").(string), d.Get("name").(string))
@@ -147,7 +150,7 @@ func resourceSubscriptionFilterRead(ctx context.Context, d *schema.ResourceData,
 }

 func resourceSubscriptionFilterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).LogsConn()
+	conn := meta.(*conns.AWSClient).LogsConn(ctx)

 	log.Printf("[INFO] Deleting CloudWatch Logs Subscription Filter: %s", d.Id())
 	_, err := conn.DeleteSubscriptionFilterWithContext(ctx, &cloudwatchlogs.DeleteSubscriptionFilterInput{
diff --git a/internal/service/logs/subscription_filter_test.go b/internal/service/logs/subscription_filter_test.go
index 1fc6c9cf37e..749ca5c3651 100644
--- a/internal/service/logs/subscription_filter_test.go
+++ b/internal/service/logs/subscription_filter_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package logs_test import ( @@ -254,7 +257,7 @@ func TestAccLogsSubscriptionFilter_roleARN(t *testing.T) { func testAccCheckSubscriptionFilterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudwatch_log_subscription_filter" { @@ -289,7 +292,7 @@ func testAccCheckSubscriptionFilterExists(ctx context.Context, n string, v *clou return fmt.Errorf("No CloudWatch Logs Filter Subscription ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).LogsConn(ctx) output, err := tflogs.FindSubscriptionFilterByTwoPartKey(ctx, conn, rs.Primary.Attributes["log_group_name"], rs.Primary.Attributes["name"]) diff --git a/internal/service/logs/sweep.go b/internal/service/logs/sweep.go index 305b3bb427f..e24bbce6d98 100644 --- a/internal/service/logs/sweep.go +++ b/internal/service/logs/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/cloudwatchlogs" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -55,12 +57,12 @@ func init() { func sweepGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } input := &cloudwatchlogs.DescribeLogGroupsInput{} - conn := client.(*conns.AWSClient).LogsConn() + conn := client.LogsConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.DescribeLogGroupsPagesWithContext(ctx, input, func(page *cloudwatchlogs.DescribeLogGroupsOutput, lastPage bool) bool { @@ -88,7 +90,7 @@ func sweepGroups(region string) error { return fmt.Errorf("error listing CloudWatch Logs Log Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudWatch Logs Log Groups (%s): %w", region, err) @@ -99,12 +101,12 @@ func sweepGroups(region string) error { func sweeplogQueryDefinitions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } input := &cloudwatchlogs.DescribeQueryDefinitionsInput{} - conn := client.(*conns.AWSClient).LogsConn() + conn := client.LogsConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = describeQueryDefinitionsPages(ctx, conn, input, func(page *cloudwatchlogs.DescribeQueryDefinitionsOutput, lastPage bool) bool { @@ -132,7 +134,7 @@ func 
sweeplogQueryDefinitions(region string) error { return fmt.Errorf("error listing CloudWatch Logs Query Definitions (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudWatch Logs Query Definitions (%s): %w", region, err) @@ -143,12 +145,12 @@ func sweeplogQueryDefinitions(region string) error { func sweepResourcePolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } input := &cloudwatchlogs.DescribeResourcePoliciesInput{} - conn := client.(*conns.AWSClient).LogsConn() + conn := client.LogsConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = describeResourcePoliciesPages(ctx, conn, input, func(page *cloudwatchlogs.DescribeResourcePoliciesOutput, lastPage bool) bool { @@ -176,7 +178,7 @@ func sweepResourcePolicies(region string) error { return fmt.Errorf("error listing CloudWatch Logs Resource Policies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping CloudWatch Logs Resource Policies (%s): %w", region, err) diff --git a/internal/service/logs/tags_gen.go b/internal/service/logs/tags_gen.go index 66e1068433d..5e01bf5f945 100644 --- a/internal/service/logs/tags_gen.go +++ b/internal/service/logs/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists logs service tags. +// listTags lists logs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string) (tftags.KeyValueTags, error) { input := &cloudwatchlogs.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, i // ListTags lists logs service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).LogsConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).LogsConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from logs service tags. +// KeyValueTags creates tftags.KeyValueTags from logs service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns logs service tags from Context. +// getTagsIn returns logs service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,8 +71,8 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets logs service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets logs service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -84,13 +84,13 @@ func createTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, return nil } - return UpdateTags(ctx, conn, identifier, nil, tags) + return updateTags(ctx, conn, identifier, nil, tags) } -// UpdateTags updates logs service tags. +// updateTags updates logs service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -130,5 +130,5 @@ func UpdateTags(ctx context.Context, conn cloudwatchlogsiface.CloudWatchLogsAPI, // UpdateTags updates logs service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).LogsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).LogsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/logs/validate.go b/internal/service/logs/validate.go index a8c92fdd487..6c4b5b04962 100644 --- a/internal/service/logs/validate.go +++ b/internal/service/logs/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package logs import ( diff --git a/internal/service/logs/validate_test.go b/internal/service/logs/validate_test.go index 273baa1b543..5ac45579f83 100644 --- a/internal/service/logs/validate_test.go +++ b/internal/service/logs/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package logs import ( diff --git a/internal/service/macie2/account.go b/internal/service/macie2/account.go index 2a741ea4205..cc7273cdfe8 100644 --- a/internal/service/macie2/account.go +++ b/internal/service/macie2/account.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( "context" - "fmt" "log" "time" @@ -59,7 +61,7 @@ func ResourceAccount() *schema.Resource { } func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.EnableMacieInput{ ClientToken: aws.String(id.UniqueId()), @@ -90,7 +92,7 @@ func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return diag.FromErr(fmt.Errorf("error enabling Macie Account: %w", err)) + return diag.Errorf("enabling Macie Account: %s", err) } d.SetId(meta.(*conns.AWSClient).AccountID) @@ -99,7 +101,7 @@ func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetMacieSessionInput{} @@ -113,7 +115,7 @@ func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta inter } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Macie Account (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie Account (%s): %s", d.Id(), err) 
} d.Set("status", resp.Status) @@ -126,7 +128,7 @@ func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.UpdateMacieSessionInput{} @@ -140,14 +142,14 @@ func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta int _, err := conn.UpdateMacieSessionWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Macie Account (%s): %w", d.Id(), err)) + return diag.Errorf("updating Macie Account (%s): %s", d.Id(), err) } return resourceAccountRead(ctx, d, meta) } func resourceAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DisableMacieInput{} @@ -178,7 +180,7 @@ func resourceAccountDelete(ctx context.Context, d *schema.ResourceData, meta int tfawserr.ErrMessageContains(err, macie2.ErrCodeAccessDeniedException, "Macie is not enabled") { return nil } - return diag.FromErr(fmt.Errorf("error disabling Macie Account (%s): %w", d.Id(), err)) + return diag.Errorf("disabling Macie Account (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/macie2/account_test.go b/internal/service/macie2/account_test.go index e6791757f4c..47519294c49 100644 --- a/internal/service/macie2/account_test.go +++ b/internal/service/macie2/account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -196,7 +199,7 @@ func testAccAccount_disappears(t *testing.T) { func testAccCheckAccountDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_account" { @@ -230,7 +233,7 @@ func testAccCheckAccountExists(ctx context.Context, resourceName string, macie2S return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetMacieSessionInput{} resp, err := conn.GetMacieSessionWithContext(ctx, input) diff --git a/internal/service/macie2/classification_export_configuration.go b/internal/service/macie2/classification_export_configuration.go index 60dd148c88d..a4328730baf 100644 --- a/internal/service/macie2/classification_export_configuration.go +++ b/internal/service/macie2/classification_export_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( @@ -53,16 +56,16 @@ func ResourceClassificationExportConfiguration() *schema.Resource { } func resourceClassificationExportConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) if d.IsNewResource() { output, err := conn.GetClassificationExportConfigurationWithContext(ctx, &macie2.GetClassificationExportConfigurationInput{}) if err != nil { - return diag.FromErr(fmt.Errorf("reading Macie classification export configuration failed: %w", err)) + return diag.Errorf("reading Macie classification export configuration failed: %s", err) } - if (macie2.ClassificationExportConfiguration{}) != *output.Configuration { // nosemgrep: ci.prefer-aws-go-sdk-pointer-conversion-conditional - return diag.FromErr(fmt.Errorf("creating Macie classification export configuration: a configuration already exists")) + if (macie2.ClassificationExportConfiguration{}) != *output.Configuration { // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-conditional + return diag.Errorf("creating Macie classification export configuration: a configuration already exists") } } @@ -79,14 +82,14 @@ func resourceClassificationExportConfigurationCreate(ctx context.Context, d *sch _, err := conn.PutClassificationExportConfigurationWithContext(ctx, &input) if err != nil { - return diag.FromErr(fmt.Errorf("creating Macie classification export configuration failed: %w", err)) + return diag.Errorf("creating Macie classification export configuration failed: %s", err) } return resourceClassificationExportConfigurationRead(ctx, d, meta) } func resourceClassificationExportConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := 
macie2.PutClassificationExportConfigurationInput{ Configuration: &macie2.ClassificationExportConfiguration{}, @@ -103,27 +106,27 @@ func resourceClassificationExportConfigurationUpdate(ctx context.Context, d *sch _, err := conn.PutClassificationExportConfigurationWithContext(ctx, &input) if err != nil { - return diag.FromErr(fmt.Errorf("creating Macie classification export configuration failed: %w", err)) + return diag.Errorf("creating Macie classification export configuration failed: %s", err) } return resourceClassificationExportConfigurationRead(ctx, d, meta) } func resourceClassificationExportConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := macie2.GetClassificationExportConfigurationInput{} // api does not have a getById() like endpoint. output, err := conn.GetClassificationExportConfigurationWithContext(ctx, &input) if err != nil { - return diag.FromErr(fmt.Errorf("reading Macie classification export configuration failed: %w", err)) + return diag.Errorf("reading Macie classification export configuration failed: %s", err) } - if (macie2.ClassificationExportConfiguration{}) != *output.Configuration { // nosemgrep: ci.prefer-aws-go-sdk-pointer-conversion-conditional - if (macie2.S3Destination{}) != *output.Configuration.S3Destination { // nosemgrep: ci.prefer-aws-go-sdk-pointer-conversion-conditional + if (macie2.ClassificationExportConfiguration{}) != *output.Configuration { // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-conditional + if (macie2.S3Destination{}) != *output.Configuration.S3Destination { // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-conditional var flattenedS3Destination = flattenClassificationExportConfigurationS3DestinationResult(output.Configuration.S3Destination) if err := d.Set("s3_destination", []interface{}{flattenedS3Destination}); err != nil { - return 
diag.FromErr(fmt.Errorf("error setting Macie classification export configuration s3_destination: %w", err)) + return diag.Errorf("setting Macie classification export configuration s3_destination: %s", err) } } d.SetId(fmt.Sprintf("%s:%s:%s", "macie:classification_export_configuration", meta.(*conns.AWSClient).AccountID, meta.(*conns.AWSClient).Region)) @@ -133,7 +136,7 @@ func resourceClassificationExportConfigurationRead(ctx context.Context, d *schem } func resourceClassificationExportConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := macie2.PutClassificationExportConfigurationInput{ Configuration: &macie2.ClassificationExportConfiguration{}, @@ -144,7 +147,7 @@ func resourceClassificationExportConfigurationDelete(ctx context.Context, d *sch _, err := conn.PutClassificationExportConfigurationWithContext(ctx, &input) if err != nil { - return diag.FromErr(fmt.Errorf("deleting Macie classification export configuration failed: %w", err)) + return diag.Errorf("deleting Macie classification export configuration failed: %s", err) } return nil diff --git a/internal/service/macie2/classification_export_configuration_test.go b/internal/service/macie2/classification_export_configuration_test.go index 974bcdaf404..6c97bac61d8 100644 --- a/internal/service/macie2/classification_export_configuration_test.go +++ b/internal/service/macie2/classification_export_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -60,7 +63,7 @@ func testAccClassificationExportConfiguration_basic(t *testing.T) { func testAccCheckClassificationExportConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_classification_export_configuration" { @@ -78,7 +81,7 @@ func testAccCheckClassificationExportConfigurationDestroy(ctx context.Context) r return err } - if (macie2.GetClassificationExportConfigurationOutput{}) != *resp || resp != nil { // nosemgrep: ci.prefer-aws-go-sdk-pointer-conversion-conditional + if (macie2.GetClassificationExportConfigurationOutput{}) != *resp || resp != nil { // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-conditional return fmt.Errorf("macie classification export configuration %q still configured", rs.Primary.ID) } } @@ -94,7 +97,7 @@ func testAccCheckClassificationExportConfigurationExists(ctx context.Context, re return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := macie2.GetClassificationExportConfigurationInput{} resp, err := conn.GetClassificationExportConfigurationWithContext(ctx, &input) @@ -103,7 +106,7 @@ func testAccCheckClassificationExportConfigurationExists(ctx context.Context, re return err } - if (macie2.GetClassificationExportConfigurationOutput{}) == *resp || resp == nil { // nosemgrep: ci.prefer-aws-go-sdk-pointer-conversion-conditional + if (macie2.GetClassificationExportConfigurationOutput{}) == *resp || resp == nil { // nosemgrep:ci.semgrep.aws.prefer-pointer-conversion-conditional return fmt.Errorf("macie classification export configuration %q does not exist", rs.Primary.ID) } 
diff --git a/internal/service/macie2/classification_job.go b/internal/service/macie2/classification_job.go index 66b41ed5032..d56d9acafd9 100644 --- a/internal/service/macie2/classification_job.go +++ b/internal/service/macie2/classification_job.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( @@ -574,14 +577,14 @@ func resourceClassificationJobCustomizeDiff(_ context.Context, diff *schema.Reso } func resourceClassificationJobCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.CreateClassificationJobInput{ ClientToken: aws.String(id.UniqueId()), Name: aws.String(create.Name(d.Get("name").(string), d.Get("name_prefix").(string))), JobType: aws.String(d.Get("job_type").(string)), S3JobDefinition: expandS3JobDefinition(d.Get("s3_job_definition").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("custom_data_identifier_ids"); ok { @@ -620,7 +623,7 @@ func resourceClassificationJobCreate(ctx context.Context, d *schema.ResourceData } if err != nil { - return diag.FromErr(fmt.Errorf("error creating Macie ClassificationJob: %w", err)) + return diag.Errorf("creating Macie ClassificationJob: %s", err) } d.SetId(aws.StringValue(output.JobId)) @@ -629,7 +632,7 @@ func resourceClassificationJobCreate(ctx context.Context, d *schema.ResourceData } func resourceClassificationJobRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DescribeClassificationJobInput{ JobId: aws.String(d.Id()), @@ -646,14 +649,14 @@ func resourceClassificationJobRead(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Macie ClassificationJob (%s): 
%w", d.Id(), err)) + return diag.Errorf("reading Macie ClassificationJob (%s): %s", d.Id(), err) } if err = d.Set("custom_data_identifier_ids", flex.FlattenStringList(resp.CustomDataIdentifierIds)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie ClassificationJob (%s): %w", "custom_data_identifier_ids", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie ClassificationJob (%s): %s", "custom_data_identifier_ids", d.Id(), err) } if err = d.Set("schedule_frequency", flattenScheduleFrequency(resp.ScheduleFrequency)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie ClassificationJob (%s): %w", "schedule_frequency", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie ClassificationJob (%s): %s", "schedule_frequency", d.Id(), err) } d.Set("sampling_percentage", resp.SamplingPercentage) d.Set("name", resp.Name) @@ -662,10 +665,10 @@ func resourceClassificationJobRead(ctx context.Context, d *schema.ResourceData, d.Set("initial_run", resp.InitialRun) d.Set("job_type", resp.JobType) if err = d.Set("s3_job_definition", flattenS3JobDefinition(resp.S3JobDefinition)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie ClassificationJob (%s): %w", "s3_job_definition", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie ClassificationJob (%s): %s", "s3_job_definition", d.Id(), err) } - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) d.Set("job_id", resp.JobId) d.Set("job_arn", resp.JobArn) @@ -676,14 +679,14 @@ func resourceClassificationJobRead(ctx context.Context, d *schema.ResourceData, d.Set("job_status", status) d.Set("created_at", aws.TimeValue(resp.CreatedAt).Format(time.RFC3339)) if err = d.Set("user_paused_details", flattenUserPausedDetails(resp.UserPausedDetails)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie ClassificationJob (%s): %w", "user_paused_details", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie 
ClassificationJob (%s): %s", "user_paused_details", d.Id(), err) } return nil } func resourceClassificationJobUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.UpdateClassificationJobInput{ JobId: aws.String(d.Id()), @@ -693,7 +696,7 @@ func resourceClassificationJobUpdate(ctx context.Context, d *schema.ResourceData status := d.Get("job_status").(string) if status == macie2.JobStatusCancelled { - return diag.FromErr(fmt.Errorf("error updating Macie ClassificationJob (%s): %s", d.Id(), fmt.Sprintf("%s cannot be set", macie2.JobStatusCancelled))) + return diag.Errorf("updating Macie ClassificationJob (%s): %s", d.Id(), fmt.Sprintf("%s cannot be set", macie2.JobStatusCancelled)) } input.JobStatus = aws.String(status) @@ -701,14 +704,14 @@ func resourceClassificationJobUpdate(ctx context.Context, d *schema.ResourceData _, err := conn.UpdateClassificationJobWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Macie ClassificationJob (%s): %w", d.Id(), err)) + return diag.Errorf("updating Macie ClassificationJob (%s): %s", d.Id(), err) } return resourceClassificationJobRead(ctx, d, meta) } func resourceClassificationJobDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.UpdateClassificationJobInput{ JobId: aws.String(d.Id()), @@ -722,7 +725,7 @@ func resourceClassificationJobDelete(ctx context.Context, d *schema.ResourceData tfawserr.ErrMessageContains(err, macie2.ErrCodeValidationException, "cannot update cancelled job for job") { return nil } - return diag.FromErr(fmt.Errorf("error deleting Macie ClassificationJob (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Macie ClassificationJob (%s): %s", d.Id(), err) } return nil diff 
--git a/internal/service/macie2/classification_job_test.go b/internal/service/macie2/classification_job_test.go index 3030c6e37ab..ad8cd1d151e 100644 --- a/internal/service/macie2/classification_job_test.go +++ b/internal/service/macie2/classification_job_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -394,7 +397,7 @@ func testAccCheckClassificationJobExists(ctx context.Context, resourceName strin return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DescribeClassificationJobInput{JobId: aws.String(rs.Primary.ID)} resp, err := conn.DescribeClassificationJobWithContext(ctx, input) @@ -415,7 +418,7 @@ func testAccCheckClassificationJobExists(ctx context.Context, resourceName strin func testAccCheckClassificationJobDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_classification_job" { diff --git a/internal/service/macie2/consts.go b/internal/service/macie2/consts.go index 7205c50923b..a9996d1f235 100644 --- a/internal/service/macie2/consts.go +++ b/internal/service/macie2/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 const ( diff --git a/internal/service/macie2/custom_data_identifier.go b/internal/service/macie2/custom_data_identifier.go index 490a2bddae9..0100394088e 100644 --- a/internal/service/macie2/custom_data_identifier.go +++ b/internal/service/macie2/custom_data_identifier.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( "context" - "fmt" "log" "time" @@ -105,11 +107,11 @@ func ResourceCustomDataIdentifier() *schema.Resource { } func resourceCustomDataIdentifierCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.CreateCustomDataIdentifierInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("regex"); ok { @@ -149,7 +151,7 @@ func resourceCustomDataIdentifierCreate(ctx context.Context, d *schema.ResourceD } if err != nil { - return diag.FromErr(fmt.Errorf("error creating Macie CustomDataIdentifier: %w", err)) + return diag.Errorf("creating Macie CustomDataIdentifier: %s", err) } d.SetId(aws.StringValue(output.CustomDataIdentifierId)) @@ -158,7 +160,7 @@ func resourceCustomDataIdentifierCreate(ctx context.Context, d *schema.ResourceD } func resourceCustomDataIdentifierRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetCustomDataIdentifierInput{ Id: aws.String(d.Id()), @@ -174,22 +176,22 @@ func resourceCustomDataIdentifierRead(ctx context.Context, d *schema.ResourceDat } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Macie CustomDataIdentifier (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie CustomDataIdentifier (%s): %s", d.Id(), err) } d.Set("regex", resp.Regex) if err = d.Set("keywords", flex.FlattenStringList(resp.Keywords)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie CustomDataIdentifier (%s): %w", "keywords", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie CustomDataIdentifier (%s): %s", "keywords", d.Id(), err) } if err = d.Set("ignore_words", 
flex.FlattenStringList(resp.IgnoreWords)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie CustomDataIdentifier (%s): %w", "ignore_words", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie CustomDataIdentifier (%s): %s", "ignore_words", d.Id(), err) } d.Set("name", resp.Name) d.Set("name_prefix", create.NamePrefixFromName(aws.StringValue(resp.Name))) d.Set("description", resp.Description) d.Set("maximum_match_distance", resp.MaximumMatchDistance) - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) if aws.BoolValue(resp.Deleted) { log.Printf("[WARN] Macie CustomDataIdentifier (%s) is soft deleted, removing from state", d.Id()) @@ -203,7 +205,7 @@ func resourceCustomDataIdentifierRead(ctx context.Context, d *schema.ResourceDat } func resourceCustomDataIdentifierDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DeleteCustomDataIdentifierInput{ Id: aws.String(d.Id()), @@ -215,7 +217,7 @@ func resourceCustomDataIdentifierDelete(ctx context.Context, d *schema.ResourceD tfawserr.ErrMessageContains(err, macie2.ErrCodeAccessDeniedException, "Macie is not enabled") { return nil } - return diag.FromErr(fmt.Errorf("error deleting Macie CustomDataIdentifier (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Macie CustomDataIdentifier (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/macie2/custom_data_identifier_test.go b/internal/service/macie2/custom_data_identifier_test.go index 789db7f9bde..5ee553788a2 100644 --- a/internal/service/macie2/custom_data_identifier_test.go +++ b/internal/service/macie2/custom_data_identifier_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -225,7 +228,7 @@ func testAccCheckCustomDataIdentifierExists(ctx context.Context, resourceName st return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetCustomDataIdentifierInput{Id: aws.String(rs.Primary.ID)} resp, err := conn.GetCustomDataIdentifierWithContext(ctx, input) @@ -246,7 +249,7 @@ func testAccCheckCustomDataIdentifierExists(ctx context.Context, resourceName st func testAccCheckCustomDataIdentifierDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_custom_data_identifier" { diff --git a/internal/service/macie2/find.go b/internal/service/macie2/find.go index 64d00cf77a3..ff85ef997db 100644 --- a/internal/service/macie2/find.go +++ b/internal/service/macie2/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( diff --git a/internal/service/macie2/findings_filter.go b/internal/service/macie2/findings_filter.go index 24371022d85..2f5502a1cc1 100644 --- a/internal/service/macie2/findings_filter.go +++ b/internal/service/macie2/findings_filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( @@ -132,19 +135,19 @@ func ResourceFindingsFilter() *schema.Resource { } func resourceFindingsFilterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.CreateFindingsFilterInput{ ClientToken: aws.String(id.UniqueId()), Name: aws.String(create.Name(d.Get("name").(string), d.Get("name_prefix").(string))), Action: aws.String(d.Get("action").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } var err error input.FindingCriteria, err = expandFindingCriteriaFilter(d.Get("finding_criteria").([]interface{})) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Macie FindingsFilter: %w", err)) + return diag.Errorf("creating Macie FindingsFilter: %s", err) } if v, ok := d.GetOk("description"); ok { @@ -174,7 +177,7 @@ func resourceFindingsFilterCreate(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.FromErr(fmt.Errorf("error creating Macie FindingsFilter: %w", err)) + return diag.Errorf("creating Macie FindingsFilter: %s", err) } d.SetId(aws.StringValue(output.Id)) @@ -183,7 +186,7 @@ func resourceFindingsFilterCreate(ctx context.Context, d *schema.ResourceData, m } func resourceFindingsFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetFindingsFilterInput{ Id: aws.String(d.Id()), @@ -199,11 +202,11 @@ func resourceFindingsFilterRead(ctx context.Context, d *schema.ResourceData, met } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Macie FindingsFilter (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie FindingsFilter (%s): %s", d.Id(), err) } if err = d.Set("finding_criteria", 
flattenFindingCriteriaFindingsFilter(resp.FindingCriteria)); err != nil { - return diag.FromErr(fmt.Errorf("error setting `%s` for Macie FindingsFilter (%s): %w", "finding_criteria", d.Id(), err)) + return diag.Errorf("setting `%s` for Macie FindingsFilter (%s): %s", "finding_criteria", d.Id(), err) } d.Set("name", resp.Name) d.Set("name_prefix", create.NamePrefixFromName(aws.StringValue(resp.Name))) @@ -211,7 +214,7 @@ func resourceFindingsFilterRead(ctx context.Context, d *schema.ResourceData, met d.Set("action", resp.Action) d.Set("position", resp.Position) - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) d.Set("arn", resp.Arn) @@ -219,7 +222,7 @@ func resourceFindingsFilterRead(ctx context.Context, d *schema.ResourceData, met } func resourceFindingsFilterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.UpdateFindingsFilterInput{ Id: aws.String(d.Id()), @@ -229,7 +232,7 @@ func resourceFindingsFilterUpdate(ctx context.Context, d *schema.ResourceData, m if d.HasChange("finding_criteria") { input.FindingCriteria, err = expandFindingCriteriaFilter(d.Get("finding_criteria").([]interface{})) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Macie FindingsFilter (%s): %w", d.Id(), err)) + return diag.Errorf("updating Macie FindingsFilter (%s): %s", d.Id(), err) } } if d.HasChange("name") { @@ -250,14 +253,14 @@ func resourceFindingsFilterUpdate(ctx context.Context, d *schema.ResourceData, m _, err = conn.UpdateFindingsFilterWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Macie FindingsFilter (%s): %w", d.Id(), err)) + return diag.Errorf("updating Macie FindingsFilter (%s): %s", d.Id(), err) } return resourceFindingsFilterRead(ctx, d, meta) } func resourceFindingsFilterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics 
{ - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DeleteFindingsFilterInput{ Id: aws.String(d.Id()), @@ -269,7 +272,7 @@ func resourceFindingsFilterDelete(ctx context.Context, d *schema.ResourceData, m tfawserr.ErrMessageContains(err, macie2.ErrCodeAccessDeniedException, "Macie is not enabled") { return nil } - return diag.FromErr(fmt.Errorf("error deleting Macie FindingsFilter (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Macie FindingsFilter (%s): %s", d.Id(), err) } return nil } @@ -315,28 +318,28 @@ func expandFindingCriteriaFilter(findingCriterias []interface{}) (*macie2.Findin if v, ok := crit["lt"].(string); ok && v != "" { i, err := expandConditionIntField(field, v) if err != nil { - return nil, fmt.Errorf("error parsing condition %q for field %q: %w", "lt", field, err) + return nil, fmt.Errorf("parsing condition %q for field %q: %w", "lt", field, err) } conditional.Lt = aws.Int64(i) } if v, ok := crit["lte"].(string); ok && v != "" { i, err := expandConditionIntField(field, v) if err != nil { - return nil, fmt.Errorf("error parsing condition %q for field %q: %w", "lte", field, err) + return nil, fmt.Errorf("parsing condition %q for field %q: %w", "lte", field, err) } conditional.Lte = aws.Int64(i) } if v, ok := crit["gt"].(string); ok && v != "" { i, err := expandConditionIntField(field, v) if err != nil { - return nil, fmt.Errorf("error parsing condition %q for field %q: %w", "gt", field, err) + return nil, fmt.Errorf("parsing condition %q for field %q: %w", "gt", field, err) } conditional.Gt = aws.Int64(i) } if v, ok := crit["gte"].(string); ok && v != "" { i, err := expandConditionIntField(field, v) if err != nil { - return nil, fmt.Errorf("error parsing condition %q for field %q: %w", "gte", field, err) + return nil, fmt.Errorf("parsing condition %q for field %q: %w", "gte", field, err) } conditional.Gte = aws.Int64(i) } diff --git 
a/internal/service/macie2/findings_filter_test.go b/internal/service/macie2/findings_filter_test.go index 879a7c47209..7db87400ce9 100644 --- a/internal/service/macie2/findings_filter_test.go +++ b/internal/service/macie2/findings_filter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -406,7 +409,7 @@ func testAccCheckFindingsFilterExists(ctx context.Context, resourceName string, return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetFindingsFilterInput{Id: aws.String(rs.Primary.ID)} resp, err := conn.GetFindingsFilterWithContext(ctx, input) @@ -427,7 +430,7 @@ func testAccCheckFindingsFilterExists(ctx context.Context, resourceName string, func testAccCheckFindingsFilterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_findings_filter" { diff --git a/internal/service/macie2/generate.go b/internal/service/macie2/generate.go index f8bf8e0028b..a7a03f605de 100644 --- a/internal/service/macie2/generate.go +++ b/internal/service/macie2/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package macie2 diff --git a/internal/service/macie2/invitation_accepter.go b/internal/service/macie2/invitation_accepter.go index 875d7158332..412ceb9f448 100644 --- a/internal/service/macie2/invitation_accepter.go +++ b/internal/service/macie2/invitation_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( @@ -45,7 +48,7 @@ func ResourceInvitationAccepter() *schema.Resource { } func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) adminAccountID := d.Get("administrator_account_id").(string) var invitationID string @@ -86,7 +89,7 @@ func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceDat }) } if err != nil { - return diag.FromErr(fmt.Errorf("error listing Macie InvitationAccepter (%s): %w", d.Id(), err)) + return diag.Errorf("listing Macie InvitationAccepter (%s): %s", d.Id(), err) } acceptInvitationInput := &macie2.AcceptInvitationInput{ @@ -97,7 +100,7 @@ func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceDat _, err = conn.AcceptInvitationWithContext(ctx, acceptInvitationInput) if err != nil { - return diag.FromErr(fmt.Errorf("error accepting Macie InvitationAccepter (%s): %w", d.Id(), err)) + return diag.Errorf("accepting Macie InvitationAccepter (%s): %s", d.Id(), err) } d.SetId(adminAccountID) @@ -106,7 +109,7 @@ func resourceInvitationAccepterCreate(ctx context.Context, d *schema.ResourceDat } func resourceInvitationAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) var err error @@ -122,11 +125,11 @@ func resourceInvitationAccepterRead(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.FromErr(fmt.Errorf("error 
reading Macie InvitationAccepter (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie InvitationAccepter (%s): %s", d.Id(), err) } if output == nil || output.Administrator == nil { - return diag.FromErr(fmt.Errorf("error reading Macie InvitationAccepter (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie InvitationAccepter (%s): %s", d.Id(), err) } d.Set("administrator_account_id", output.Administrator.AccountId) @@ -135,7 +138,7 @@ func resourceInvitationAccepterRead(ctx context.Context, d *schema.ResourceData, } func resourceInvitationAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DisassociateFromAdministratorAccountInput{} @@ -145,7 +148,7 @@ func resourceInvitationAccepterDelete(ctx context.Context, d *schema.ResourceDat tfawserr.ErrMessageContains(err, macie2.ErrCodeAccessDeniedException, "Macie is not enabled") { return nil } - return diag.FromErr(fmt.Errorf("error disassociating Macie InvitationAccepter (%s): %w", d.Id(), err)) + return diag.Errorf("disassociating Macie InvitationAccepter (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/macie2/invitation_accepter_test.go b/internal/service/macie2/invitation_accepter_test.go index 22983f00cf2..cb63988189f 100644 --- a/internal/service/macie2/invitation_accepter_test.go +++ b/internal/service/macie2/invitation_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -56,7 +59,7 @@ func testAccCheckInvitationAccepterExists(ctx context.Context, resourceName stri return fmt.Errorf("resource (%s) has empty ID", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetAdministratorAccountInput{} output, err := conn.GetAdministratorAccountWithContext(ctx, input) @@ -74,7 +77,7 @@ func testAccCheckInvitationAccepterExists(ctx context.Context, resourceName stri func testAccCheckInvitationAccepterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_invitation_accepter" { diff --git a/internal/service/macie2/macie2_test.go b/internal/service/macie2/macie2_test.go index 3e869d50e73..39c7decaaa8 100644 --- a/internal/service/macie2/macie2_test.go +++ b/internal/service/macie2/macie2_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( diff --git a/internal/service/macie2/member.go b/internal/service/macie2/member.go index ec333fd815a..4f714756a0a 100644 --- a/internal/service/macie2/member.go +++ b/internal/service/macie2/member.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( "context" - "fmt" "log" "time" @@ -95,7 +97,7 @@ func ResourceMember() *schema.Resource { } func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) accountId := d.Get("account_id").(string) input := &macie2.CreateMemberInput{ @@ -103,7 +105,7 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte AccountId: aws.String(accountId), Email: aws.String(d.Get("email").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } var err error @@ -126,7 +128,7 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte } if err != nil { - return diag.FromErr(fmt.Errorf("error creating Macie Member: %w", err)) + return diag.Errorf("creating Macie Member: %s", err) } d.SetId(accountId) @@ -170,22 +172,22 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte } if err != nil { - return diag.FromErr(fmt.Errorf("error inviting Macie Member: %w", err)) + return diag.Errorf("inviting Macie Member: %s", err) } if len(output.UnprocessedAccounts) != 0 { - return diag.FromErr(fmt.Errorf("error inviting Macie Member: %s: %s", aws.StringValue(output.UnprocessedAccounts[0].ErrorCode), aws.StringValue(output.UnprocessedAccounts[0].ErrorMessage))) + return diag.Errorf("inviting Macie Member: %s: %s", aws.StringValue(output.UnprocessedAccounts[0].ErrorCode), aws.StringValue(output.UnprocessedAccounts[0].ErrorMessage)) } if _, err = waitMemberInvited(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for Macie Member (%s) invitation: %w", d.Id(), err)) + return diag.Errorf("waiting for Macie Member (%s) invitation: %s", d.Id(), err) } return resourceMemberRead(ctx, d, meta) } func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetMemberInput{ Id: aws.String(d.Id()), @@ -203,7 +205,7 @@ func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Macie Member (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie Member (%s): %s", d.Id(), err) } d.Set("account_id", resp.AccountId) @@ -215,7 +217,7 @@ func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("updated_at", aws.TimeValue(resp.UpdatedAt).Format(time.RFC3339)) d.Set("arn", resp.Arn) - SetTagsOut(ctx, resp.Tags) + setTagsOut(ctx, resp.Tags) status := aws.StringValue(resp.RelationshipStatus) log.Printf("[DEBUG] print resp.RelationshipStatus: %v", aws.StringValue(resp.RelationshipStatus)) @@ -241,7 +243,7 @@ func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interf } func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) // Invitation workflow @@ -280,15 +282,15 @@ func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta inte } if err != nil { - return diag.FromErr(fmt.Errorf("error inviting Macie Member: %w", err)) + return diag.Errorf("inviting Macie Member: %s", err) } if len(output.UnprocessedAccounts) != 0 { - return diag.FromErr(fmt.Errorf("error inviting Macie Member: %s: %s", aws.StringValue(output.UnprocessedAccounts[0].ErrorCode), aws.StringValue(output.UnprocessedAccounts[0].ErrorMessage))) + return diag.Errorf("inviting Macie Member: %s: %s", aws.StringValue(output.UnprocessedAccounts[0].ErrorCode), aws.StringValue(output.UnprocessedAccounts[0].ErrorMessage)) } if _, err = waitMemberInvited(ctx, conn, d.Id()); err != nil { - return diag.FromErr(fmt.Errorf("error waiting for Macie 
Member (%s) invitation: %w", d.Id(), err)) + return diag.Errorf("waiting for Macie Member (%s) invitation: %s", d.Id(), err) } } else { input := &macie2.DisassociateMemberInput{ @@ -301,7 +303,7 @@ func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta inte tfawserr.ErrMessageContains(err, macie2.ErrCodeAccessDeniedException, "Macie is not enabled") { return nil } - return diag.FromErr(fmt.Errorf("error disassociating Macie Member invite (%s): %w", d.Id(), err)) + return diag.Errorf("disassociating Macie Member invite (%s): %s", d.Id(), err) } } } @@ -316,7 +318,7 @@ func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta inte _, err := conn.UpdateMemberSessionWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Macie Member (%s): %w", d.Id(), err)) + return diag.Errorf("updating Macie Member (%s): %s", d.Id(), err) } } @@ -324,7 +326,7 @@ func resourceMemberUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceMemberDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DeleteMemberInput{ Id: aws.String(d.Id()), @@ -338,7 +340,7 @@ func resourceMemberDelete(ctx context.Context, d *schema.ResourceData, meta inte tfawserr.ErrMessageContains(err, macie2.ErrCodeValidationException, "account is not associated with your account") { return nil } - return diag.FromErr(fmt.Errorf("error deleting Macie Member (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Macie Member (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/macie2/member_test.go b/internal/service/macie2/member_test.go index ba33255ce5e..7bbd4d6454e 100644 --- a/internal/service/macie2/member_test.go +++ b/internal/service/macie2/member_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -341,7 +344,7 @@ func testAccCheckMemberExists(ctx context.Context, resourceName string, macie2Se return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.GetMemberInput{Id: aws.String(rs.Primary.ID)} resp, err := conn.GetMemberWithContext(ctx, input) @@ -362,7 +365,7 @@ func testAccCheckMemberExists(ctx context.Context, resourceName string, macie2Se func testAccCheckMemberDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_member" { diff --git a/internal/service/macie2/organization_admin_account.go b/internal/service/macie2/organization_admin_account.go index 753e4368b53..05cf01e6980 100644 --- a/internal/service/macie2/organization_admin_account.go +++ b/internal/service/macie2/organization_admin_account.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( "context" - "fmt" "log" "time" @@ -37,7 +39,7 @@ func ResourceOrganizationAdminAccount() *schema.Resource { } func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) adminAccountID := d.Get("admin_account_id").(string) input := &macie2.EnableOrganizationAdminAccountInput{ AdminAccountId: aws.String(adminAccountID), @@ -64,7 +66,7 @@ func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.Resou } if err != nil { - return diag.FromErr(fmt.Errorf("error creating Macie OrganizationAdminAccount: %w", err)) + return diag.Errorf("creating Macie OrganizationAdminAccount: %s", err) } d.SetId(adminAccountID) @@ -73,7 +75,7 @@ func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.Resou } func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) var err error @@ -87,7 +89,7 @@ func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.Resourc } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Macie OrganizationAdminAccount (%s): %w", d.Id(), err)) + return diag.Errorf("reading Macie OrganizationAdminAccount (%s): %s", d.Id(), err) } if res == nil { @@ -106,7 +108,7 @@ func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.Resourc } func resourceOrganizationAdminAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Macie2Conn() + conn := meta.(*conns.AWSClient).Macie2Conn(ctx) input := &macie2.DisableOrganizationAdminAccountInput{ AdminAccountId: aws.String(d.Id()), @@ -118,7 +120,7 @@ func resourceOrganizationAdminAccountDelete(ctx 
context.Context, d *schema.Resou tfawserr.ErrMessageContains(err, macie2.ErrCodeAccessDeniedException, "Macie is not enabled") { return nil } - return diag.FromErr(fmt.Errorf("error deleting Macie OrganizationAdminAccount (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Macie OrganizationAdminAccount (%s): %s", d.Id(), err) } return nil } diff --git a/internal/service/macie2/organization_admin_account_test.go b/internal/service/macie2/organization_admin_account_test.go index 340eb61f780..a504cbf5598 100644 --- a/internal/service/macie2/organization_admin_account_test.go +++ b/internal/service/macie2/organization_admin_account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2_test import ( @@ -81,7 +84,7 @@ func testAccCheckOrganizationAdminAccountExists(ctx context.Context, resourceNam return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) adminAccount, err := tfmacie2.GetOrganizationAdminAccount(ctx, conn, rs.Primary.ID) @@ -99,7 +102,7 @@ func testAccCheckOrganizationAdminAccountExists(ctx context.Context, resourceNam func testAccCheckOrganizationAdminAccountDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Macie2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_macie2_organization_admin_account" { diff --git a/internal/service/macie2/service_package_gen.go b/internal/service/macie2/service_package_gen.go index 59e79d64119..1cb66d10e4c 100644 --- a/internal/service/macie2/service_package_gen.go +++ b/internal/service/macie2/service_package_gen.go @@ -5,6 +5,10 @@ package macie2 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 
"github.com/aws/aws-sdk-go/aws/session" + macie2_sdkv1 "github.com/aws/aws-sdk-go/service/macie2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -72,4 +76,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Macie2 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*macie2_sdkv1.Macie2, error) { + sess := config["session"].(*session_sdkv1.Session) + + return macie2_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/macie2/status.go b/internal/service/macie2/status.go index 6572e7a62b8..38ccf0b2e46 100644 --- a/internal/service/macie2/status.go +++ b/internal/service/macie2/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( diff --git a/internal/service/macie2/tags_gen.go b/internal/service/macie2/tags_gen.go index 929d414dd33..4585dd94808 100644 --- a/internal/service/macie2/tags_gen.go +++ b/internal/service/macie2/tags_gen.go @@ -16,14 +16,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from macie2 service tags. +// KeyValueTags creates tftags.KeyValueTags from macie2 service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns macie2 service tags from Context. +// getTagsIn returns macie2 service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -33,8 +33,8 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets macie2 service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets macie2 service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/macie2/wait.go b/internal/service/macie2/wait.go index df362514ad5..28d379ba3db 100644 --- a/internal/service/macie2/wait.go +++ b/internal/service/macie2/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package macie2 import ( diff --git a/internal/service/mediaconnect/generate.go b/internal/service/mediaconnect/generate.go index cbf7888d44c..3b82e38f603 100644 --- a/internal/service/mediaconnect/generate.go +++ b/internal/service/mediaconnect/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package mediaconnect diff --git a/internal/service/mediaconnect/service_package_gen.go b/internal/service/mediaconnect/service_package_gen.go index a1e539ff867..7adaa5c29b9 100644 --- a/internal/service/mediaconnect/service_package_gen.go +++ b/internal/service/mediaconnect/service_package_gen.go @@ -5,6 +5,10 @@ package mediaconnect import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + mediaconnect_sdkv1 "github.com/aws/aws-sdk-go/service/mediaconnect" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,4 +35,13 @@ func (p *servicePackage) ServicePackageName() string { return names.MediaConnect } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*mediaconnect_sdkv1.MediaConnect, error) { + sess := config["session"].(*session_sdkv1.Session) + + return mediaconnect_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/mediaconnect/tags_gen.go b/internal/service/mediaconnect/tags_gen.go index adf5cce1957..6013203b6c0 100644 --- a/internal/service/mediaconnect/tags_gen.go +++ b/internal/service/mediaconnect/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists mediaconnect service tags. +// listTags lists mediaconnect service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn mediaconnectiface.MediaConnectAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn mediaconnectiface.MediaConnectAPI, identifier string) (tftags.KeyValueTags, error) { input := &mediaconnect.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn mediaconnectiface.MediaConnectAPI, ident // ListTags lists mediaconnect service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).MediaConnectConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).MediaConnectConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from mediaconnect service tags. +// KeyValueTags creates tftags.KeyValueTags from mediaconnect service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns mediaconnect service tags from Context. +// getTagsIn returns mediaconnect service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets mediaconnect service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets mediaconnect service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates mediaconnect service tags. +// updateTags updates mediaconnect service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn mediaconnectiface.MediaConnectAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn mediaconnectiface.MediaConnectAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn mediaconnectiface.MediaConnectAPI, ide // UpdateTags updates mediaconnect service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).MediaConnectConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).MediaConnectConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/mediaconvert/generate.go b/internal/service/mediaconvert/generate.go index 612b03b3f5e..e85db640728 100644 --- a/internal/service/mediaconvert/generate.go +++ b/internal/service/mediaconvert/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=Arn -ListTagsOutTagsElem=ResourceTags.Tags -ServiceTagsMap -TagInIDElem=Arn -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package mediaconvert diff --git a/internal/service/mediaconvert/queue.go b/internal/service/mediaconvert/queue.go index 73334274fa0..a03a3163695 100644 --- a/internal/service/mediaconvert/queue.go +++ b/internal/service/mediaconvert/queue.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package mediaconvert import ( @@ -113,7 +116,7 @@ func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta inter Name: aws.String(d.Get("name").(string)), Status: aws.String(d.Get("status").(string)), PricingPlan: aws.String(d.Get("pricing_plan").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -165,13 +168,13 @@ func resourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interfa return sdkdiag.AppendErrorf(diags, "setting Media Convert Queue reservation_plan_settings: %s", err) } - tags, err := ListTags(ctx, conn, aws.StringValue(resp.Queue.Arn)) + tags, err := listTags(ctx, conn, aws.StringValue(resp.Queue.Arn)) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for Media Convert Queue (%s): %s", d.Id(), err) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } @@ -206,7 +209,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(ctx, conn, d.Get("arn").(string), o, n); err != nil { + if err := updateTags(ctx, conn, d.Get("arn").(string), o, n); err != nil { return sdkdiag.AppendErrorf(diags, "updating tags: %s", err) } } @@ -249,22 +252,22 @@ func GetAccountClient(ctx context.Context, awsClient *conns.AWSClient) (*mediaco Mode: aws.String(mediaconvert.DescribeEndpointsModeDefault), } - output, err := awsClient.MediaConvertConn().DescribeEndpointsWithContext(ctx, input) + output, err := awsClient.MediaConvertConn(ctx).DescribeEndpointsWithContext(ctx, input) if err != nil { - return nil, 
fmt.Errorf("error describing MediaConvert Endpoints: %w", err) + return nil, fmt.Errorf("describing MediaConvert Endpoints: %w", err) } if output == nil || len(output.Endpoints) == 0 || output.Endpoints[0] == nil || output.Endpoints[0].Url == nil { - return nil, fmt.Errorf("error describing MediaConvert Endpoints: empty response or URL") + return nil, fmt.Errorf("describing MediaConvert Endpoints: empty response or URL") } endpointURL := aws.StringValue(output.Endpoints[0].Url) - sess, err := session.NewSession(&awsClient.MediaConvertConn().Config) + sess, err := session.NewSession(&awsClient.MediaConvertConn(ctx).Config) if err != nil { - return nil, fmt.Errorf("error creating AWS MediaConvert session: %w", err) + return nil, fmt.Errorf("creating AWS MediaConvert session: %w", err) } conn := mediaconvert.New(sess.Copy(&aws.Config{Endpoint: aws.String(endpointURL)})) diff --git a/internal/service/mediaconvert/queue_test.go b/internal/service/mediaconvert/queue_test.go index a41b7a681b0..f7efc057876 100644 --- a/internal/service/mediaconvert/queue_test.go +++ b/internal/service/mediaconvert/queue_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package mediaconvert_test import ( diff --git a/internal/service/mediaconvert/service_package_gen.go b/internal/service/mediaconvert/service_package_gen.go index afdbb68e036..1f1ec934266 100644 --- a/internal/service/mediaconvert/service_package_gen.go +++ b/internal/service/mediaconvert/service_package_gen.go @@ -5,6 +5,10 @@ package mediaconvert import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + mediaconvert_sdkv1 "github.com/aws/aws-sdk-go/service/mediaconvert" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -38,4 +42,13 @@ func (p *servicePackage) ServicePackageName() string { return names.MediaConvert } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*mediaconvert_sdkv1.MediaConvert, error) { + sess := config["session"].(*session_sdkv1.Session) + + return mediaconvert_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/mediaconvert/structure.go b/internal/service/mediaconvert/structure.go index 5e50f00c15b..e9a9c18eb04 100644 --- a/internal/service/mediaconvert/structure.go +++ b/internal/service/mediaconvert/structure.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package mediaconvert import ( diff --git a/internal/service/mediaconvert/tags_gen.go b/internal/service/mediaconvert/tags_gen.go index 45f24f67a4a..c05e9864184 100644 --- a/internal/service/mediaconvert/tags_gen.go +++ b/internal/service/mediaconvert/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists mediaconvert service tags. +// listTags lists mediaconvert service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn mediaconvertiface.MediaConvertAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn mediaconvertiface.MediaConvertAPI, identifier string) (tftags.KeyValueTags, error) { input := &mediaconvert.ListTagsForResourceInput{ Arn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn mediaconvertiface.MediaConvertAPI, ident // ListTags lists mediaconvert service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).MediaConvertConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).MediaConvertConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from mediaconvert service tags. +// KeyValueTags creates tftags.KeyValueTags from mediaconvert service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns mediaconvert service tags from Context. +// getTagsIn returns mediaconvert service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets mediaconvert service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets mediaconvert service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates mediaconvert service tags. +// updateTags updates mediaconvert service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn mediaconvertiface.MediaConvertAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn mediaconvertiface.MediaConvertAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn mediaconvertiface.MediaConvertAPI, ide // UpdateTags updates mediaconvert service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).MediaConvertConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).MediaConvertConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/medialive/channel.go b/internal/service/medialive/channel.go index ff9891e5dbb..4cacb35412b 100644 --- a/internal/service/medialive/channel.go +++ b/internal/service/medialive/channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive import ( @@ -44,177 +47,179 @@ func ResourceChannel() *schema.Resource { Delete: schema.DefaultTimeout(15 * time.Minute), }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "cdi_input_specification": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "resolution": { - Type: schema.TypeString, - Required: true, - ValidateDiagFunc: enum.Validate[types.CdiInputResolution](), + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "cdi_input_specification": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "resolution": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.CdiInputResolution](), + }, }, }, }, - }, - "channel_class": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateDiagFunc: enum.Validate[types.ChannelClass](), - }, - "channel_id": { - Type: schema.TypeString, - Computed: true, - }, - "destinations": { - Type: schema.TypeSet, - Required: true, - MinItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "id": { - Type: schema.TypeString, - Required: true, - 
}, - "media_package_settings": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "channel_id": { - Type: schema.TypeString, - Required: true, + "channel_class": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateDiagFunc: enum.Validate[types.ChannelClass](), + }, + "channel_id": { + Type: schema.TypeString, + Computed: true, + }, + "destinations": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Required: true, + }, + "media_package_settings": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "channel_id": { + Type: schema.TypeString, + Required: true, + }, }, }, }, - }, - "multiplex_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "multiplex_id": { - Type: schema.TypeString, - Required: true, - }, - "program_name": { - Type: schema.TypeString, - Required: true, + "multiplex_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "multiplex_id": { + Type: schema.TypeString, + Required: true, + }, + "program_name": { + Type: schema.TypeString, + Required: true, + }, }, }, }, - }, - "settings": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "password_param": { - Type: schema.TypeString, - Optional: true, - }, - "stream_name": { - Type: schema.TypeString, - Optional: true, - }, - "url": { - Type: schema.TypeString, - Optional: true, - }, - "username": { - Type: schema.TypeString, - Optional: true, + "settings": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "password_param": { + Type: schema.TypeString, + Optional: true, + }, + 
"stream_name": { + Type: schema.TypeString, + Optional: true, + }, + "url": { + Type: schema.TypeString, + Optional: true, + }, + "username": { + Type: schema.TypeString, + Optional: true, + }, }, }, }, }, }, }, - }, - "encoder_settings": func() *schema.Schema { - return channelEncoderSettingsSchema() - }(), - "input_attachments": { - Type: schema.TypeSet, - Required: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "automatic_input_failover_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "secondary_input_id": { - Type: schema.TypeString, - Required: true, - }, - "error_clear_time_msec": { - Type: schema.TypeInt, - Optional: true, - }, - "failover_condition": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "failover_condition_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "audio_silence_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "audio_selector_name": { - Type: schema.TypeString, - Required: true, - }, - "audio_silence_threshold_msec": { - Type: schema.TypeInt, - Optional: true, + "encoder_settings": func() *schema.Schema { + return channelEncoderSettingsSchema() + }(), + "input_attachments": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automatic_input_failover_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "secondary_input_id": { + Type: schema.TypeString, + Required: true, + }, + "error_clear_time_msec": { + Type: schema.TypeInt, + Optional: true, + }, + "failover_condition": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{ + "failover_condition_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "audio_silence_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "audio_selector_name": { + Type: schema.TypeString, + Required: true, + }, + "audio_silence_threshold_msec": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "input_loss_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "input_loss_threshold_msec": { - Type: schema.TypeInt, - Optional: true, + "input_loss_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "input_loss_threshold_msec": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "video_black_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "black_detect_threshold": { - Type: schema.TypeFloat, - Optional: true, - }, - "video_black_threshold_msec": { - Type: schema.TypeInt, - Optional: true, + "video_black_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "black_detect_threshold": { + Type: schema.TypeFloat, + Optional: true, + }, + "video_black_threshold_msec": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, @@ -224,107 +229,107 @@ func ResourceChannel() *schema.Resource { }, }, }, - }, - "input_preference": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.InputPreference](), + "input_preference": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.InputPreference](), + }, }, }, }, - }, - "input_attachment_name": { - Type: 
schema.TypeString, - Required: true, - }, - "input_id": { - Type: schema.TypeString, - Required: true, - }, - "input_settings": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "audio_selector": { - Type: schema.TypeList, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - }, - "selector_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "audio_hls_rendition_selection": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "group_id": { - Type: schema.TypeString, - Required: true, - }, - "name": { - Type: schema.TypeString, - Required: true, + "input_attachment_name": { + Type: schema.TypeString, + Required: true, + }, + "input_id": { + Type: schema.TypeString, + Required: true, + }, + "input_settings": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "audio_selector": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "selector_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "audio_hls_rendition_selection": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "group_id": { + Type: schema.TypeString, + Required: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, }, }, }, - }, - "audio_language_selection": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "language_code": 
{ - Type: schema.TypeString, - Required: true, - }, - "language_selection_policy": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.AudioLanguageSelectionPolicy](), + "audio_language_selection": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "language_code": { + Type: schema.TypeString, + Required: true, + }, + "language_selection_policy": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.AudioLanguageSelectionPolicy](), + }, }, }, }, - }, - "audio_pid_selection": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "pid": { - Type: schema.TypeInt, - Required: true, + "audio_pid_selection": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "pid": { + Type: schema.TypeInt, + Required: true, + }, }, }, }, - }, - "audio_track_selection": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "track": { - Type: schema.TypeSet, - Required: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "track": { - Type: schema.TypeInt, - Required: true, + "audio_track_selection": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "track": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "track": { + Type: schema.TypeInt, + Required: true, + }, }, }, }, @@ -337,154 +342,154 @@ func ResourceChannel() *schema.Resource { }, }, }, - }, - "caption_selector": { - Type: schema.TypeList, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - }, - "language_code": { - Type: schema.TypeString, - 
Optional: true, - }, - "selector_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "ancillary_source_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "source_ancillary_channel_number": { - Type: schema.TypeInt, - Optional: true, + "caption_selector": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "language_code": { + Type: schema.TypeString, + Optional: true, + }, + "selector_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ancillary_source_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "source_ancillary_channel_number": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "dvb_tdt_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "ocr_language": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.DvbSubOcrLanguage](), - }, - "pid": { - Type: schema.TypeInt, - Optional: true, + "dvb_tdt_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ocr_language": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.DvbSubOcrLanguage](), + }, + "pid": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "embedded_source_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "convert_608_to_708": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: 
enum.Validate[types.EmbeddedConvert608To708](), - }, - "scte20_detection": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.EmbeddedScte20Detection](), - }, - "source_608_channel_number": { - Type: schema.TypeInt, - Optional: true, - }, - "source_608_track_number": { - Type: schema.TypeInt, - Optional: true, + "embedded_source_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "convert_608_to_708": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.EmbeddedConvert608To708](), + }, + "scte20_detection": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.EmbeddedScte20Detection](), + }, + "source_608_channel_number": { + Type: schema.TypeInt, + Optional: true, + }, + "source_608_track_number": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "scte20_source_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "convert_608_to_708": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.Scte20Convert608To708](), - }, - "source_608_channel_number": { - Type: schema.TypeInt, - Optional: true, + "scte20_source_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "convert_608_to_708": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.Scte20Convert608To708](), + }, + "source_608_channel_number": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "scte27_source_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "ocr_language": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.Scte27OcrLanguage](), - }, - 
"pid": { - Type: schema.TypeInt, - Optional: true, + "scte27_source_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ocr_language": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.Scte27OcrLanguage](), + }, + "pid": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "teletext_source_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "output_rectangle": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "height": { - Type: schema.TypeFloat, - Required: true, - }, - "left_offset": { - Type: schema.TypeFloat, - Required: true, - }, - "top_offset": { - Type: schema.TypeFloat, - Required: true, - }, - "width": { - Type: schema.TypeFloat, - Required: true, + "teletext_source_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "output_rectangle": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "height": { + Type: schema.TypeFloat, + Required: true, + }, + "left_offset": { + Type: schema.TypeFloat, + Required: true, + }, + "top_offset": { + Type: schema.TypeFloat, + Required: true, + }, + "width": { + Type: schema.TypeFloat, + Required: true, + }, }, }, }, - }, - "page_number": { - Type: schema.TypeString, - Optional: true, + "page_number": { + Type: schema.TypeString, + Optional: true, + }, }, }, }, @@ -494,104 +499,104 @@ func ResourceChannel() *schema.Resource { }, }, }, - }, - "deblock_filter": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.InputDeblockFilter](), - }, - "denoise_filter": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: 
enum.Validate[types.InputDenoiseFilter](), - }, - "filter_strength": { - Type: schema.TypeInt, - Optional: true, - ValidateDiagFunc: validation.ToDiagFunc(validation.IntBetween(1, 5)), - }, - "input_filter": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateDiagFunc: enum.Validate[types.InputFilter](), - }, - "network_input_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "hls_input_settings": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "bandwidth": { - Type: schema.TypeInt, - Optional: true, - }, - "buffer_segments": { - Type: schema.TypeInt, - Optional: true, - }, - "retries": { - Type: schema.TypeInt, - Optional: true, - }, - "retry_interval": { - Type: schema.TypeInt, - Optional: true, - }, - "scte35_source": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.HlsScte35SourceType](), + "deblock_filter": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.InputDeblockFilter](), + }, + "denoise_filter": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.InputDenoiseFilter](), + }, + "filter_strength": { + Type: schema.TypeInt, + Optional: true, + ValidateDiagFunc: validation.ToDiagFunc(validation.IntBetween(1, 5)), + }, + "input_filter": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateDiagFunc: enum.Validate[types.InputFilter](), + }, + "network_input_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "hls_input_settings": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bandwidth": { + Type: schema.TypeInt, + Optional: true, + }, + "buffer_segments": { + Type: schema.TypeInt, + 
Optional: true, + }, + "retries": { + Type: schema.TypeInt, + Optional: true, + }, + "retry_interval": { + Type: schema.TypeInt, + Optional: true, + }, + "scte35_source": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.HlsScte35SourceType](), + }, }, }, }, - }, - "server_validation": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.NetworkInputServerValidation](), + "server_validation": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.NetworkInputServerValidation](), + }, }, }, }, - }, - "scte35_pid": { - Type: schema.TypeInt, - Optional: true, - }, - "smpte2038_data_preference": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.Smpte2038DataPreference](), - }, - "source_end_behavior": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.InputSourceEndBehavior](), - }, - "video_selector": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "color_space": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.VideoSelectorColorSpace](), - }, - // TODO implement color_space_settings - "color_space_usage": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: enum.Validate[types.VideoSelectorColorSpaceUsage](), + "scte35_pid": { + Type: schema.TypeInt, + Optional: true, + }, + "smpte2038_data_preference": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.Smpte2038DataPreference](), + }, + "source_end_behavior": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.InputSourceEndBehavior](), + }, + "video_selector": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "color_space": { + Type: schema.TypeString, + Optional: true, + 
ValidateDiagFunc: enum.Validate[types.VideoSelectorColorSpace](), + }, + // TODO implement color_space_settings + "color_space_usage": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.VideoSelectorColorSpaceUsage](), + }, + // TODO implement selector_settings }, - // TODO implement selector_settings }, }, }, @@ -600,104 +605,104 @@ func ResourceChannel() *schema.Resource { }, }, }, - }, - "input_specification": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "codec": { - Type: schema.TypeString, - Required: true, - ValidateDiagFunc: enum.Validate[types.InputCodec](), - }, - "maximum_bitrate": { - Type: schema.TypeString, - Required: true, - ValidateDiagFunc: enum.Validate[types.InputMaximumBitrate](), - }, - "input_resolution": { - Type: schema.TypeString, - Required: true, - ValidateDiagFunc: enum.Validate[types.InputResolution](), + "input_specification": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "codec": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.InputCodec](), + }, + "maximum_bitrate": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.InputMaximumBitrate](), + }, + "input_resolution": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.InputResolution](), + }, }, }, }, - }, - "log_level": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateDiagFunc: enum.Validate[types.LogLevel](), - }, - "maintenance": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "maintenance_day": { - Type: schema.TypeString, - Required: true, - ValidateDiagFunc: enum.Validate[types.MaintenanceDay](), - }, - "maintenance_start_time": { - Type: schema.TypeString, - 
Required: true, + "log_level": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateDiagFunc: enum.Validate[types.LogLevel](), + }, + "maintenance": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "maintenance_day": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.MaintenanceDay](), + }, + "maintenance_start_time": { + Type: schema.TypeString, + Required: true, + }, }, }, }, - }, - "name": { - Type: schema.TypeString, - Required: true, - }, - "role_arn": { - Type: schema.TypeString, - Optional: true, - ValidateDiagFunc: validation.ToDiagFunc(verify.ValidARN), - }, - "start_channel": { - Type: schema.TypeBool, - Optional: true, - Default: false, - }, - "vpc": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - ForceNew: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "availability_zones": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "public_address_allocation_ids": { - Type: schema.TypeList, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "security_group_ids": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 5, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "subnet_ids": { - Type: schema.TypeList, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, + "name": { + Type: schema.TypeString, + Required: true, + }, + "role_arn": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: validation.ToDiagFunc(verify.ValidARN), + }, + "start_channel": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "vpc": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zones": { + Type: schema.TypeList, + Computed: true, + Elem: 
&schema.Schema{Type: schema.TypeString}, + }, + "public_address_allocation_ids": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "security_group_ids": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 5, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "subnet_ids": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, }, }, - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + } }, CustomizeDiff: verify.SetTagsDiff, @@ -709,12 +714,12 @@ const ( ) func resourceChannelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) in := &medialive.CreateChannelInput{ Name: aws.String(d.Get("name").(string)), RequestId: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("cdi_input_specification"); ok && len(v.([]interface{})) > 0 { @@ -770,7 +775,7 @@ func resourceChannelCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) out, err := FindChannelByID(ctx, conn, d.Id()) @@ -817,7 +822,7 @@ func resourceChannelRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceChannelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) if d.HasChangesExcept("tags", "tags_all", "start_channel") { in := &medialive.UpdateChannelInput{ @@ -915,7 +920,7 @@ func 
resourceChannelUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) log.Printf("[INFO] Deleting MediaLive Channel %s", d.Id()) diff --git a/internal/service/medialive/channel_encoder_settings_schema.go b/internal/service/medialive/channel_encoder_settings_schema.go index a750ec3fc79..b86dc987b2b 100644 --- a/internal/service/medialive/channel_encoder_settings_schema.go +++ b/internal/service/medialive/channel_encoder_settings_schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive import ( @@ -3263,7 +3266,7 @@ func expandHLSGroupSettings(tfList []interface{}) *types.HlsGroupSettings { if v, ok := m["encryption_type"].(string); ok && v != "" { out.EncryptionType = types.HlsEncryptionType(v) } - if v, ok := m["hls_cdn_setting"].([]interface{}); ok && len(v) > 0 { + if v, ok := m["hls_cdn_settings"].([]interface{}); ok && len(v) > 0 { out.HlsCdnSettings = expandHLSCDNSettings(v) } if v, ok := m["hls_id3_segment_tagging"].(string); ok && v != "" { @@ -3428,19 +3431,19 @@ func expandHLSCDNSettings(tfList []interface{}) *types.HlsCdnSettings { m := tfList[0].(map[string]interface{}) var out types.HlsCdnSettings - if v, ok := m["hls_akamai_setting"].([]interface{}); ok && len(v) > 0 { + if v, ok := m["hls_akamai_settings"].([]interface{}); ok && len(v) > 0 { out.HlsAkamaiSettings = expandHSLAkamaiSettings(v) } - if v, ok := m["hls_basic_put_setting"].([]interface{}); ok && len(v) > 0 { + if v, ok := m["hls_basic_put_settings"].([]interface{}); ok && len(v) > 0 { out.HlsBasicPutSettings = expandHSLBasicPutSettings(v) } - if v, ok := m["hls_media_store_setting"].([]interface{}); ok && len(v) > 0 { + if v, ok := m["hls_media_store_settings"].([]interface{}); ok && len(v) > 0 { 
out.HlsMediaStoreSettings = expandHLSMediaStoreSettings(v) } - if v, ok := m["hls_s3_setting"].([]interface{}); ok && len(v) > 0 { + if v, ok := m["hls_s3_settings"].([]interface{}); ok && len(v) > 0 { out.HlsS3Settings = expandHSLS3Settings(v) } - if v, ok := m["hls_webdav_setting"].([]interface{}); ok && len(v) > 0 { + if v, ok := m["hls_webdav_settings"].([]interface{}); ok && len(v) > 0 { out.HlsWebdavSettings = expandHLSWebdavSettings(v) } return &out diff --git a/internal/service/medialive/channel_test.go b/internal/service/medialive/channel_test.go index 83d48655cda..0de5260b0be 100644 --- a/internal/service/medialive/channel_test.go +++ b/internal/service/medialive/channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive_test import ( @@ -780,7 +783,7 @@ func TestAccMediaLiveChannel_disappears(t *testing.T) { func testAccCheckChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_medialive_channel" { @@ -813,7 +816,7 @@ func testAccCheckChannelExists(ctx context.Context, name string, channel *medial return create.Error(names.MediaLive, create.ErrActionCheckingExistence, tfmedialive.ResNameChannel, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) resp, err := tfmedialive.FindChannelByID(ctx, conn, rs.Primary.ID) @@ -838,7 +841,7 @@ func testAccCheckChannelStatus(ctx context.Context, name string, state types.Cha return create.Error(names.MediaLive, create.ErrActionChecking, tfmedialive.ResNameChannel, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) resp, err := tfmedialive.FindChannelByID(ctx, conn, rs.Primary.ID) @@ -855,7 +858,7 @@ func testAccCheckChannelStatus(ctx context.Context, name string, state types.Cha } func testAccChannelsPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) input := &medialive.ListChannelsInput{} _, err := conn.ListChannels(ctx, input) diff --git a/internal/service/medialive/exports_test.go b/internal/service/medialive/exports_test.go index 7ac8016c18d..989fafc36a0 100644 --- a/internal/service/medialive/exports_test.go +++ b/internal/service/medialive/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive // Exports for use in tests only. diff --git a/internal/service/medialive/generate.go b/internal/service/medialive/generate.go index 9805ffb8d9b..b51eb9334e2 100644 --- a/internal/service/medialive/generate.go +++ b/internal/service/medialive/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -KVTValues=true -SkipTypesImp=true -ListTags -ServiceTagsMap -TagOp=CreateTags -UntagOp=DeleteTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package medialive diff --git a/internal/service/medialive/input.go b/internal/service/medialive/input.go index d2b93c053f1..0a023dcb5ea 100644 --- a/internal/service/medialive/input.go +++ b/internal/service/medialive/input.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package medialive import ( @@ -183,12 +186,12 @@ const ( ) func resourceInputCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) in := &medialive.CreateInputInput{ RequestId: aws.String(id.UniqueId()), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: types.InputType(d.Get("type").(string)), } @@ -252,7 +255,7 @@ func resourceInputCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceInputRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) out, err := FindInputByID(ctx, conn, d.Id()) @@ -283,7 +286,7 @@ func resourceInputRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceInputUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &medialive.UpdateInputInput{ @@ -342,7 +345,7 @@ func resourceInputUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceInputDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) log.Printf("[INFO] Deleting MediaLive Input %s", d.Id()) diff --git a/internal/service/medialive/input_security_group.go b/internal/service/medialive/input_security_group.go index ba1f95f296a..0ea429d20c0 100644 --- a/internal/service/medialive/input_security_group.go +++ b/internal/service/medialive/input_security_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package medialive import ( @@ -78,10 +81,10 @@ const ( ) func resourceInputSecurityGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) in := &medialive.CreateInputSecurityGroupInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), WhitelistRules: expandWhitelistRules(d.Get("whitelist_rules").(*schema.Set).List()), } @@ -104,7 +107,7 @@ func resourceInputSecurityGroupCreate(ctx context.Context, d *schema.ResourceDat } func resourceInputSecurityGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) out, err := FindInputSecurityGroupByID(ctx, conn, d.Id()) @@ -126,7 +129,7 @@ func resourceInputSecurityGroupRead(ctx context.Context, d *schema.ResourceData, } func resourceInputSecurityGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &medialive.UpdateInputSecurityGroupInput{ @@ -152,7 +155,7 @@ func resourceInputSecurityGroupUpdate(ctx context.Context, d *schema.ResourceDat } func resourceInputSecurityGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) log.Printf("[INFO] Deleting MediaLive InputSecurityGroup %s", d.Id()) diff --git a/internal/service/medialive/input_security_group_test.go b/internal/service/medialive/input_security_group_test.go index 84e7c313886..8d0f4fc3fd5 100644 --- a/internal/service/medialive/input_security_group_test.go +++ 
b/internal/service/medialive/input_security_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive_test import ( @@ -183,7 +186,7 @@ func TestAccMediaLiveInputSecurityGroup_disappears(t *testing.T) { func testAccCheckInputSecurityGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_medialive_input_security_group" { @@ -216,7 +219,7 @@ func testAccCheckInputSecurityGroupExists(ctx context.Context, name string, inpu return create.Error(names.MediaLive, create.ErrActionCheckingExistence, tfmedialive.ResNameInputSecurityGroup, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) resp, err := tfmedialive.FindInputSecurityGroupByID(ctx, conn, rs.Primary.ID) @@ -231,7 +234,7 @@ func testAccCheckInputSecurityGroupExists(ctx context.Context, name string, inpu } func testAccInputSecurityGroupsPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) input := &medialive.ListInputSecurityGroupsInput{} _, err := conn.ListInputSecurityGroups(ctx, input) diff --git a/internal/service/medialive/input_test.go b/internal/service/medialive/input_test.go index f1531150d8f..84482bf0d5a 100644 --- a/internal/service/medialive/input_test.go +++ b/internal/service/medialive/input_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package medialive_test import ( @@ -185,7 +188,7 @@ func TestAccMediaLiveInput_disappears(t *testing.T) { func testAccCheckInputDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_medialive_input" { @@ -218,7 +221,7 @@ func testAccCheckInputExists(ctx context.Context, name string, input *medialive. return create.Error(names.MediaLive, create.ErrActionCheckingExistence, tfmedialive.ResNameInput, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) resp, err := tfmedialive.FindInputByID(ctx, conn, rs.Primary.ID) @@ -233,7 +236,7 @@ func testAccCheckInputExists(ctx context.Context, name string, input *medialive. } func testAccInputsPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) input := &medialive.ListInputsInput{} _, err := conn.ListInputs(ctx, input) diff --git a/internal/service/medialive/medialive_test.go b/internal/service/medialive/medialive_test.go index 78a0b176652..c18d9ea7748 100644 --- a/internal/service/medialive/medialive_test.go +++ b/internal/service/medialive/medialive_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive_test import ( diff --git a/internal/service/medialive/multiplex.go b/internal/service/medialive/multiplex.go index 7e40a34e6a9..4c72c8b120b 100644 --- a/internal/service/medialive/multiplex.go +++ b/internal/service/medialive/multiplex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package medialive import ( @@ -107,13 +110,13 @@ const ( ) func resourceMultiplexCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) in := &medialive.CreateMultiplexInput{ RequestId: aws.String(id.UniqueId()), Name: aws.String(d.Get("name").(string)), AvailabilityZones: flex.ExpandStringValueList(d.Get("availability_zones").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("multiplex_settings"); ok && len(v.([]interface{})) > 0 { @@ -145,7 +148,7 @@ func resourceMultiplexCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceMultiplexRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) out, err := FindMultiplexByID(ctx, conn, d.Id()) @@ -171,7 +174,7 @@ func resourceMultiplexRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceMultiplexUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) if d.HasChangesExcept("tags", "tags_all", "start_multiplex") { in := &medialive.UpdateMultiplexInput{ @@ -220,7 +223,7 @@ func resourceMultiplexUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceMultiplexDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MediaLiveClient() + conn := meta.(*conns.AWSClient).MediaLiveClient(ctx) log.Printf("[INFO] Deleting MediaLive Multiplex %s", d.Id()) diff --git a/internal/service/medialive/multiplex_program.go b/internal/service/medialive/multiplex_program.go index 7f7f3378037..4ebc039c22c 100644 --- 
a/internal/service/medialive/multiplex_program.go +++ b/internal/service/medialive/multiplex_program.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive import ( @@ -24,8 +27,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/enum" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -154,7 +157,7 @@ func (m *multiplexProgram) Schema(ctx context.Context, req resource.SchemaReques } func (m *multiplexProgram) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := m.Meta().MediaLiveClient() + conn := m.Meta().MediaLiveClient(ctx) var plan resourceMultiplexProgramData diags := req.Plan.Get(ctx, &plan) @@ -212,7 +215,7 @@ func (m *multiplexProgram) Create(ctx context.Context, req resource.CreateReques } func (m *multiplexProgram) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := m.Meta().MediaLiveClient() + conn := m.Meta().MediaLiveClient(ctx) var state resourceMultiplexProgramData diags := req.State.Get(ctx, &state) @@ -262,7 +265,7 @@ func (m *multiplexProgram) Read(ctx context.Context, req resource.ReadRequest, r } func (m *multiplexProgram) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := m.Meta().MediaLiveClient() + conn := m.Meta().MediaLiveClient(ctx) var plan resourceMultiplexProgramData diags := req.Plan.Get(ctx, &plan) @@ -327,7 +330,7 @@ func (m *multiplexProgram) Update(ctx context.Context, req resource.UpdateReques } func (m *multiplexProgram) Delete(ctx 
context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := m.Meta().MediaLiveClient() + conn := m.Meta().MediaLiveClient(ctx) var state resourceMultiplexProgramData diags := req.State.Get(ctx, &state) diff --git a/internal/service/medialive/multiplex_program_test.go b/internal/service/medialive/multiplex_program_test.go index d772bcecbce..2f5dbdb9577 100644 --- a/internal/service/medialive/multiplex_program_test.go +++ b/internal/service/medialive/multiplex_program_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package medialive_test import ( @@ -187,7 +190,7 @@ func testAccMultiplexProgram_disappears(t *testing.T) { func testAccCheckMultiplexProgramDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_medialive_multiplex_program" { @@ -228,7 +231,7 @@ func testAccCheckMultiplexProgramExists(ctx context.Context, name string, multip return create.Error(names.MediaLive, create.ErrActionCheckingExistence, tfmedialive.ResNameMultiplexProgram, rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) resp, err := tfmedialive.FindMultiplexProgramByID(ctx, conn, multiplexId, programName) diff --git a/internal/service/medialive/multiplex_test.go b/internal/service/medialive/multiplex_test.go index bd4e4993b29..9f035becfcd 100644 --- a/internal/service/medialive/multiplex_test.go +++ b/internal/service/medialive/multiplex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package medialive_test import ( @@ -231,7 +234,7 @@ func testAccMultiplex_disappears(t *testing.T) { func testAccCheckMultiplexDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_medialive_multiplex" { @@ -264,7 +267,7 @@ func testAccCheckMultiplexExists(ctx context.Context, name string, multiplex *me return create.Error(names.MediaLive, create.ErrActionCheckingExistence, tfmedialive.ResNameMultiplex, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) resp, err := tfmedialive.FindMultiplexByID(ctx, conn, rs.Primary.ID) @@ -279,7 +282,7 @@ func testAccCheckMultiplexExists(ctx context.Context, name string, multiplex *me } func testAccMultiplexesPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).MediaLiveClient(ctx) input := &medialive.ListMultiplexesInput{} _, err := conn.ListMultiplexes(ctx, input) diff --git a/internal/service/medialive/schemas.go b/internal/service/medialive/schemas.go index eb3d609a1e6..c6d6fc91cf5 100644 --- a/internal/service/medialive/schemas.go +++ b/internal/service/medialive/schemas.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package medialive import ( diff --git a/internal/service/medialive/service_package_gen.go b/internal/service/medialive/service_package_gen.go index f1817dd97d8..1c62c7b4423 100644 --- a/internal/service/medialive/service_package_gen.go +++ b/internal/service/medialive/service_package_gen.go @@ -5,6 +5,9 @@ package medialive import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + medialive_sdkv2 "github.com/aws/aws-sdk-go-v2/service/medialive" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -68,4 +71,17 @@ func (p *servicePackage) ServicePackageName() string { return names.MediaLive } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*medialive_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return medialive_sdkv2.NewFromConfig(cfg, func(o *medialive_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = medialive_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/medialive/sweep.go b/internal/service/medialive/sweep.go index 81af8a08aa1..bb49d797ba4 100644 --- a/internal/service/medialive/sweep.go +++ b/internal/service/medialive/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -11,7 +14,6 @@ import (
 	"github.com/aws/aws-sdk-go-v2/service/medialive"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )

@@ -42,12 +44,12 @@ func init() {

 func sweepChannels(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).MediaLiveClient()
+	conn := client.MediaLiveClient(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	in := &medialive.ListChannelsInput{}
 	var errs *multierror.Error
@@ -78,7 +80,7 @@ func sweepChannels(region string) error {
 		}
 	}

-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping MediaLive Channels for %s: %w", region, err))
 	}

@@ -92,12 +94,12 @@ func sweepChannels(region string) error {

 func sweepInputs(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).MediaLiveClient()
+	conn := client.MediaLiveClient(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	in := &medialive.ListInputsInput{}
 	var errs *multierror.Error
@@ -128,7 +130,7 @@ func sweepInputs(region string) error {
 		}
 	}

-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping MediaLive Inputs for %s: %w", region, err))
 	}

@@ -142,12 +144,12 @@ func sweepInputs(region string) error {

 func sweepInputSecurityGroups(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).MediaLiveClient()
+	conn := client.MediaLiveClient(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	in := &medialive.ListInputSecurityGroupsInput{}
 	var errs *multierror.Error
@@ -178,7 +180,7 @@ func sweepInputSecurityGroups(region string) error {
 		}
 	}

-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping MediaLive Input Security Groups for %s: %w", region, err))
 	}

@@ -192,12 +194,12 @@ func sweepInputSecurityGroups(region string) error {

 func sweepMultiplexes(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).MediaLiveClient()
+	conn := client.MediaLiveClient(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	in := &medialive.ListMultiplexesInput{}
 	var errs *multierror.Error
@@ -228,7 +230,7 @@ func sweepMultiplexes(region string) error {
 		}
 	}

-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		errs = multierror.Append(errs, fmt.Errorf("error sweeping MediaLive Multiplexes for %s: %w", region, err))
 	}

diff --git a/internal/service/medialive/tags_gen.go b/internal/service/medialive/tags_gen.go
index 05192801476..c5a7a2b4baf 100644
--- a/internal/service/medialive/tags_gen.go
+++ b/internal/service/medialive/tags_gen.go
@@ -13,10 +13,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists medialive service tags.
+// listTags lists medialive service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn *medialive.Client, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn *medialive.Client, identifier string) (tftags.KeyValueTags, error) {
 	input := &medialive.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *medialive.Client, identifier string) (t
 // ListTags lists medialive service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).MediaLiveClient(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).MediaLiveClient(ctx), identifier)

 	if err != nil {
 		return err
@@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string {
 	return tags.Map()
 }

-// KeyValueTags creates KeyValueTags from medialive service tags.
+// KeyValueTags creates tftags.KeyValueTags from medialive service tags.
 func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }

-// GetTagsIn returns medialive service tags from Context.
+// getTagsIn returns medialive service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]string {
+func getTagsIn(ctx context.Context) map[string]string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string {
 	return nil
 }

-// SetTagsOut sets medialive service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]string) {
+// setTagsOut sets medialive service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates medialive service tags.
+// updateTags updates medialive service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn *medialive.Client, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn *medialive.Client, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)

@@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *medialive.Client, identifier string,
 // UpdateTags updates medialive service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).MediaLiveClient(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).MediaLiveClient(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/mediapackage/channel.go b/internal/service/mediapackage/channel.go
index 5945c2ebb6c..ec54e002d9f 100644
--- a/internal/service/mediapackage/channel.go
+++ b/internal/service/mediapackage/channel.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mediapackage

 import (
@@ -87,12 +90,12 @@ func ResourceChannel() *schema.Resource {

 func resourceChannelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaPackageConn()
+	conn := meta.(*conns.AWSClient).MediaPackageConn(ctx)

 	input := &mediapackage.CreateChannelInput{
 		Id:          aws.String(d.Get("channel_id").(string)),
 		Description: aws.String(d.Get("description").(string)),
-		Tags:        GetTagsIn(ctx),
+		Tags:        getTagsIn(ctx),
 	}

 	resp, err := conn.CreateChannelWithContext(ctx, input)
@@ -107,7 +110,7 @@ func resourceChannelCreate(ctx context.Context, d *schema.ResourceData, meta int

 func resourceChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaPackageConn()
+	conn := meta.(*conns.AWSClient).MediaPackageConn(ctx)

 	input := &mediapackage.DescribeChannelInput{
 		Id: aws.String(d.Id()),
@@ -124,14 +127,14 @@ func resourceChannelRead(ctx context.Context, d *schema.ResourceData, meta inter
 		return sdkdiag.AppendErrorf(diags, "setting hls_ingest: %s", err)
 	}

-	SetTagsOut(ctx, resp.Tags)
+	setTagsOut(ctx, resp.Tags)

 	return diags
 }

 func resourceChannelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaPackageConn()
+	conn := meta.(*conns.AWSClient).MediaPackageConn(ctx)

 	input := &mediapackage.UpdateChannelInput{
 		Id: aws.String(d.Id()),
@@ -148,7 +151,7 @@ func resourceChannelUpdate(ctx context.Context, d *schema.ResourceData, meta int

 func resourceChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaPackageConn()
+	conn := meta.(*conns.AWSClient).MediaPackageConn(ctx)

 	input := &mediapackage.DeleteChannelInput{
 		Id: aws.String(d.Id()),
diff --git a/internal/service/mediapackage/channel_test.go b/internal/service/mediapackage/channel_test.go
index a7521b03578..f9ffd84e125 100644
--- a/internal/service/mediapackage/channel_test.go
+++ b/internal/service/mediapackage/channel_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mediapackage_test

 import (
@@ -129,7 +132,7 @@ func TestAccMediaPackageChannel_tags(t *testing.T) {

 func testAccCheckChannelDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaPackageConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaPackageConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_media_package_channel" {
@@ -161,7 +164,7 @@ func testAccCheckChannelExists(ctx context.Context, name string) resource.TestCh
 			return fmt.Errorf("Not found: %s", name)
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaPackageConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaPackageConn(ctx)

 		input := &mediapackage.DescribeChannelInput{
 			Id: aws.String(rs.Primary.ID),
@@ -174,7 +177,7 @@ func testAccCheckChannelExists(ctx context.Context, name string) resource.TestCh
 }

 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).MediaPackageConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).MediaPackageConn(ctx)

 	input := &mediapackage.ListChannelsInput{}
diff --git a/internal/service/mediapackage/generate.go b/internal/service/mediapackage/generate.go
index ac225983ed5..a6335c931d5 100644
--- a/internal/service/mediapackage/generate.go
+++ b/internal/service/mediapackage/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package mediapackage
diff --git a/internal/service/mediapackage/service_package_gen.go b/internal/service/mediapackage/service_package_gen.go
index b0e757dcffe..67792ffc21d 100644
--- a/internal/service/mediapackage/service_package_gen.go
+++ b/internal/service/mediapackage/service_package_gen.go
@@ -5,6 +5,10 @@ package mediapackage
 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	mediapackage_sdkv1 "github.com/aws/aws-sdk-go/service/mediapackage"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.MediaPackage
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*mediapackage_sdkv1.MediaPackage, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return mediapackage_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/mediapackage/tags_gen.go b/internal/service/mediapackage/tags_gen.go
index 9c001fb62b2..6913db60b57 100644
--- a/internal/service/mediapackage/tags_gen.go
+++ b/internal/service/mediapackage/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists mediapackage service tags.
+// listTags lists mediapackage service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn mediapackageiface.MediaPackageAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn mediapackageiface.MediaPackageAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &mediapackage.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn mediapackageiface.MediaPackageAPI, ident
 // ListTags lists mediapackage service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).MediaPackageConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).MediaPackageConn(ctx), identifier)

 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }

-// KeyValueTags creates KeyValueTags from mediapackage service tags.
+// KeyValueTags creates tftags.KeyValueTags from mediapackage service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }

-// GetTagsIn returns mediapackage service tags from Context.
+// getTagsIn returns mediapackage service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }

-// SetTagsOut sets mediapackage service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets mediapackage service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates mediapackage service tags.
+// updateTags updates mediapackage service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn mediapackageiface.MediaPackageAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn mediapackageiface.MediaPackageAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)

@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn mediapackageiface.MediaPackageAPI, ide
 // UpdateTags updates mediapackage service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).MediaPackageConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).MediaPackageConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/mediastore/container.go b/internal/service/mediastore/container.go
index e34b2d8b2f1..8cd13047f42 100644
--- a/internal/service/mediastore/container.go
+++ b/internal/service/mediastore/container.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mediastore

 import (
@@ -58,11 +61,11 @@ func ResourceContainer() *schema.Resource {

 func resourceContainerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaStoreConn()
+	conn := meta.(*conns.AWSClient).MediaStoreConn(ctx)

 	input := &mediastore.CreateContainerInput{
 		ContainerName: aws.String(d.Get("name").(string)),
-		Tags:          GetTagsIn(ctx),
+		Tags:          getTagsIn(ctx),
 	}

 	resp, err := conn.CreateContainerWithContext(ctx, input)
@@ -91,7 +94,7 @@ func resourceContainerCreate(ctx context.Context, d *schema.ResourceData, meta i

 func resourceContainerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaStoreConn()
+	conn := meta.(*conns.AWSClient).MediaStoreConn(ctx)

 	input := &mediastore.DescribeContainerInput{
 		ContainerName: aws.String(d.Id()),
@@ -124,7 +127,7 @@ func resourceContainerUpdate(ctx context.Context, d *schema.ResourceData, meta i

 func resourceContainerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaStoreConn()
+	conn := meta.(*conns.AWSClient).MediaStoreConn(ctx)

 	input := &mediastore.DeleteContainerInput{
 		ContainerName: aws.String(d.Id()),
diff --git a/internal/service/mediastore/container_policy.go b/internal/service/mediastore/container_policy.go
index 9386f024222..bb418440275 100644
--- a/internal/service/mediastore/container_policy.go
+++ b/internal/service/mediastore/container_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mediastore

 import (
@@ -48,7 +51,7 @@ func ResourceContainerPolicy() *schema.Resource {

 func resourceContainerPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaStoreConn()
+	conn := meta.(*conns.AWSClient).MediaStoreConn(ctx)

 	name := d.Get("container_name").(string)
 	policy, err := structure.NormalizeJsonString(d.Get("policy").(string))
@@ -73,7 +76,7 @@ func resourceContainerPolicyPut(ctx context.Context, d *schema.ResourceData, met

 func resourceContainerPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaStoreConn()
+	conn := meta.(*conns.AWSClient).MediaStoreConn(ctx)

 	input := &mediastore.GetContainerPolicyInput{
 		ContainerName: aws.String(d.Id()),
@@ -108,7 +111,7 @@ func resourceContainerPolicyRead(ctx context.Context, d *schema.ResourceData, me

 func resourceContainerPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).MediaStoreConn()
+	conn := meta.(*conns.AWSClient).MediaStoreConn(ctx)

 	input := &mediastore.DeleteContainerPolicyInput{
 		ContainerName: aws.String(d.Id()),
diff --git a/internal/service/mediastore/container_policy_test.go b/internal/service/mediastore/container_policy_test.go
index ece9f1021f4..b7978fc322f 100644
--- a/internal/service/mediastore/container_policy_test.go
+++ b/internal/service/mediastore/container_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mediastore_test

 import (
@@ -56,7 +59,7 @@ func TestAccMediaStoreContainerPolicy_basic(t *testing.T) {

 func testAccCheckContainerPolicyDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_media_store_container_policy" {
@@ -94,7 +97,7 @@ func testAccCheckContainerPolicyExists(ctx context.Context, name string) resourc
 			return fmt.Errorf("Not found: %s", name)
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn(ctx)

 		input := &mediastore.GetContainerPolicyInput{
 			ContainerName: aws.String(rs.Primary.ID),
diff --git a/internal/service/mediastore/container_test.go b/internal/service/mediastore/container_test.go
index f203b8c5a0f..efed8634027 100644
--- a/internal/service/mediastore/container_test.go
+++ b/internal/service/mediastore/container_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mediastore_test

 import (
@@ -89,7 +92,7 @@ func TestAccMediaStoreContainer_tags(t *testing.T) {

 func testAccCheckContainerDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_media_store_container" {
@@ -123,7 +126,7 @@ func testAccCheckContainerExists(ctx context.Context, name string) resource.Test
 			return fmt.Errorf("Not found: %s", name)
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn(ctx)

 		input := &mediastore.DescribeContainerInput{
 			ContainerName: aws.String(rs.Primary.ID),
@@ -136,7 +139,7 @@ func testAccCheckContainerExists(ctx context.Context, name string) resource.Test
 }

 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).MediaStoreConn(ctx)

 	input := &mediastore.ListContainersInput{}
diff --git a/internal/service/mediastore/generate.go b/internal/service/mediastore/generate.go
index 7ceb5519226..55c91274d5b 100644
--- a/internal/service/mediastore/generate.go
+++ b/internal/service/mediastore/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=Resource -ServiceTagsSlice -TagInIDElem=Resource -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package mediastore
diff --git a/internal/service/mediastore/service_package_gen.go b/internal/service/mediastore/service_package_gen.go
index 723eab19dad..15af80f5a96 100644
--- a/internal/service/mediastore/service_package_gen.go
+++ b/internal/service/mediastore/service_package_gen.go
@@ -5,6 +5,10 @@ package mediastore
 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	mediastore_sdkv1 "github.com/aws/aws-sdk-go/service/mediastore"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -44,4 +48,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.MediaStore
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*mediastore_sdkv1.MediaStore, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return mediastore_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/mediastore/tags_gen.go b/internal/service/mediastore/tags_gen.go
index d8a78f6d835..2c2da3ec539 100644
--- a/internal/service/mediastore/tags_gen.go
+++ b/internal/service/mediastore/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists mediastore service tags.
+// listTags lists mediastore service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn mediastoreiface.MediaStoreAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn mediastoreiface.MediaStoreAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &mediastore.ListTagsForResourceInput{
 		Resource: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn mediastoreiface.MediaStoreAPI, identifie
 // ListTags lists mediastore service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).MediaStoreConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).MediaStoreConn(ctx), identifier)

 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*mediastore.Tag) tftags.KeyValueTa
 	return tftags.New(ctx, m)
 }

-// GetTagsIn returns mediastore service tags from Context.
+// getTagsIn returns mediastore service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*mediastore.Tag {
+func getTagsIn(ctx context.Context) []*mediastore.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*mediastore.Tag {
 	return nil
 }

-// SetTagsOut sets mediastore service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*mediastore.Tag) {
+// setTagsOut sets mediastore service tags in Context.
+func setTagsOut(ctx context.Context, tags []*mediastore.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates mediastore service tags.
+// updateTags updates mediastore service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn mediastoreiface.MediaStoreAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn mediastoreiface.MediaStoreAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)

@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn mediastoreiface.MediaStoreAPI, identif
 // UpdateTags updates mediastore service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).MediaStoreConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).MediaStoreConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/memorydb/acl.go b/internal/service/memorydb/acl.go
index 3c4e586e211..618a76d22d8 100644
--- a/internal/service/memorydb/acl.go
+++ b/internal/service/memorydb/acl.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package memorydb

 import (
@@ -75,12 +78,12 @@ func ResourceACL() *schema.Resource {
 }

 func resourceACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string))
 	input := &memorydb.CreateACLInput{
 		ACLName: aws.String(name),
-		Tags:    GetTagsIn(ctx),
+		Tags:    getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("user_names"); ok && v.(*schema.Set).Len() > 0 {
@@ -91,11 +94,11 @@ func resourceACLCreate(ctx context.Context, d *schema.ResourceData, meta interfa
 	_, err := conn.CreateACLWithContext(ctx, input)

 	if err != nil {
-		return diag.Errorf("error creating MemoryDB ACL (%s): %s", name, err)
+		return diag.Errorf("creating MemoryDB ACL (%s): %s", name, err)
 	}

 	if err := waitACLActive(ctx, conn, name); err != nil {
-		return diag.Errorf("error waiting for MemoryDB ACL (%s) to be created: %s", name, err)
+		return diag.Errorf("waiting for MemoryDB ACL (%s) to be created: %s", name, err)
 	}

 	d.SetId(name)
@@ -104,7 +107,7 @@ func resourceACLCreate(ctx context.Context, d *schema.ResourceData, meta interfa
 }

 func resourceACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &memorydb.UpdateACLInput{
@@ -127,7 +130,7 @@ func resourceACLUpdate(ctx context.Context, d *schema.ResourceData, meta interfa
 			initialState, err := FindACLByName(ctx, conn, d.Id())

 			if err != nil {
-				return diag.Errorf("error getting MemoryDB ACL (%s) current state: %s", d.Id(), err)
+				return diag.Errorf("getting MemoryDB ACL (%s) current state: %s", d.Id(), err)
 			}

 			initialUserNames := map[string]struct{}{}
@@ -149,11 +152,11 @@ func resourceACLUpdate(ctx context.Context, d *schema.ResourceData, meta interfa
 		_, err := conn.UpdateACLWithContext(ctx, input)

 		if err != nil {
-			return diag.Errorf("error updating MemoryDB ACL (%s): %s", d.Id(), err)
+			return diag.Errorf("updating MemoryDB ACL (%s): %s", d.Id(), err)
 		}

 		if err := waitACLActive(ctx, conn, d.Id()); err != nil {
-			return diag.Errorf("error waiting for MemoryDB ACL (%s) to be modified: %s", d.Id(), err)
+			return diag.Errorf("waiting for MemoryDB ACL (%s) to be modified: %s", d.Id(), err)
 		}
 	}

@@ -162,7 +165,7 @@ func resourceACLUpdate(ctx context.Context, d *schema.ResourceData, meta interfa
 }

 func resourceACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	acl, err := FindACLByName(ctx, conn, d.Id())

@@ -173,7 +176,7 @@ func resourceACLRead(ctx context.Context, d *schema.ResourceData, meta interface
 	}

 	if err != nil {
-		return diag.Errorf("error reading MemoryDB ACL (%s): %s", d.Id(), err)
+		return diag.Errorf("reading MemoryDB ACL (%s): %s", d.Id(), err)
 	}

 	d.Set("arn", acl.ARN)
@@ -186,7 +189,7 @@ func resourceACLRead(ctx context.Context, d *schema.ResourceData, meta interface
 }

 func resourceACLDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	log.Printf("[DEBUG] Deleting MemoryDB ACL: (%s)", d.Id())
 	_, err := conn.DeleteACLWithContext(ctx, &memorydb.DeleteACLInput{
@@ -198,11 +201,11 @@ func resourceACLDelete(ctx context.Context, d *schema.ResourceData, meta interfa
 	}

 	if err != nil {
-		return diag.Errorf("error deleting MemoryDB ACL (%s): %s", d.Id(), err)
+		return diag.Errorf("deleting MemoryDB ACL (%s): %s", d.Id(), err)
 	}

 	if err := waitACLDeleted(ctx, conn, d.Id()); err != nil {
-		return diag.Errorf("error waiting for MemoryDB ACL (%s) to be deleted: %s", d.Id(), err)
+		return diag.Errorf("waiting for MemoryDB ACL (%s) to be deleted: %s", d.Id(), err)
 	}

 	return nil
diff --git a/internal/service/memorydb/acl_data_source.go b/internal/service/memorydb/acl_data_source.go
index fad6f5b500b..9ccfc697f78 100644
--- a/internal/service/memorydb/acl_data_source.go
+++ b/internal/service/memorydb/acl_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package memorydb

 import (
@@ -41,7 +44,7 @@ func DataSourceACL() *schema.Resource {
 }

 func dataSourceACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig

 	name := d.Get("name").(string)
@@ -59,14 +62,14 @@ func dataSourceACLRead(ctx context.Context, d *schema.ResourceData, meta interfa
 	d.Set("name", acl.Name)
 	d.Set("user_names", flex.FlattenStringSet(acl.UserNames))

-	tags, err := ListTags(ctx, conn, d.Get("arn").(string))
+	tags, err := listTags(ctx, conn, d.Get("arn").(string))

 	if err != nil {
-		return diag.Errorf("error listing tags for MemoryDB ACL (%s): %s", d.Id(), err)
+		return diag.Errorf("listing tags for MemoryDB ACL (%s): %s", d.Id(), err)
 	}

 	if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil {
-		return diag.Errorf("error setting tags: %s", err)
+		return diag.Errorf("setting tags: %s", err)
 	}

 	return nil
diff --git a/internal/service/memorydb/acl_data_source_test.go b/internal/service/memorydb/acl_data_source_test.go
index c9358affada..d2bbc48eefb 100644
--- a/internal/service/memorydb/acl_data_source_test.go
+++ b/internal/service/memorydb/acl_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package memorydb_test

 import (
diff --git a/internal/service/memorydb/acl_test.go b/internal/service/memorydb/acl_test.go
index b43c8485e46..abdf617063f 100644
--- a/internal/service/memorydb/acl_test.go
+++ b/internal/service/memorydb/acl_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package memorydb_test

 import (
@@ -294,7 +297,7 @@ func TestAccMemoryDBACL_update_userNames(t *testing.T) {

 func testAccCheckACLDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_memorydb_acl" {
@@ -329,7 +332,7 @@ func testAccCheckACLExists(ctx context.Context, n string) resource.TestCheckFunc
 			return fmt.Errorf("No MemoryDB ACL ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx)

 		_, err := tfmemorydb.FindACLByName(ctx, conn, rs.Primary.Attributes["name"])
diff --git a/internal/service/memorydb/cluster.go b/internal/service/memorydb/cluster.go
index 2b4e13a1699..6eacd29bdbe 100644
--- a/internal/service/memorydb/cluster.go
+++ b/internal/service/memorydb/cluster.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package memorydb

 import (
@@ -268,7 +271,7 @@ func endpointSchema() *schema.Schema {
 }

 func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string))
 	input := &memorydb.CreateClusterInput{
@@ -278,7 +281,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int
 		NodeType:            aws.String(d.Get("node_type").(string)),
 		NumReplicasPerShard: aws.Int64(int64(d.Get("num_replicas_per_shard").(int))),
 		NumShards:           aws.Int64(int64(d.Get("num_shards").(int))),
-		Tags:                GetTagsIn(ctx),
+		Tags:                getTagsIn(ctx),
 		TLSEnabled:          aws.Bool(d.Get("tls_enabled").(bool)),
 	}

@@ -345,11 +348,11 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int
 	_, err := conn.CreateClusterWithContext(ctx, input)

 	if err != nil {
-		return diag.Errorf("error creating MemoryDB Cluster (%s): %s", name, err)
+		return diag.Errorf("creating MemoryDB Cluster (%s): %s", name, err)
 	}

 	if err := waitClusterAvailable(ctx, conn, name, d.Timeout(schema.TimeoutCreate)); err != nil {
-		return diag.Errorf("error waiting for MemoryDB Cluster (%s) to be created: %s", name, err)
+		return diag.Errorf("waiting for MemoryDB Cluster (%s) to be created: %s", name, err)
 	}

 	d.SetId(name)
@@ -358,7 +361,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int
 }

 func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	if d.HasChangesExcept("final_snapshot_name", "tags", "tags_all") {
 		waitParameterGroupInSync := false
@@ -444,22 +447,22 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int
 		_, err := conn.UpdateClusterWithContext(ctx, input)

 		if err != nil {
-			return diag.Errorf("error updating MemoryDB Cluster (%s): %s", d.Id(), err)
+			return diag.Errorf("updating MemoryDB Cluster (%s): %s", d.Id(), err)
 		}

 		if err := waitClusterAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil {
-			return diag.Errorf("error waiting for MemoryDB Cluster (%s) to be modified: %s", d.Id(), err)
+			return diag.Errorf("waiting for MemoryDB Cluster (%s) to be modified: %s", d.Id(), err)
 		}

 		if waitParameterGroupInSync {
 			if err := waitClusterParameterGroupInSync(ctx, conn, d.Id()); err != nil {
-				return diag.Errorf("error waiting for MemoryDB Cluster (%s) parameter group to be in sync: %s", d.Id(), err)
+				return diag.Errorf("waiting for MemoryDB Cluster (%s) parameter group to be in sync: %s", d.Id(), err)
 			}
 		}

 		if waitSecurityGroupsActive {
 			if err := waitClusterSecurityGroupsActive(ctx, conn, d.Id()); err != nil {
-				return diag.Errorf("error waiting for MemoryDB Cluster (%s) security groups to be available: %s", d.Id(), err)
+				return diag.Errorf("waiting for MemoryDB Cluster (%s) security groups to be available: %s", d.Id(), err)
 			}
 		}
 	}

@@ -468,7 +471,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int
 }

 func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	cluster, err := FindClusterByName(ctx, conn, d.Id())

@@ -479,7 +482,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter
 	}

 	if err != nil {
-		return diag.Errorf("error reading MemoryDB Cluster (%s): %s", d.Id(), err)
+		return diag.Errorf("reading MemoryDB Cluster (%s): %s", d.Id(), err)
 	}

 	d.Set("acl_name", cluster.ACLName)
@@ -494,7 +497,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter
 	if v := aws.StringValue(cluster.DataTiering); v != "" {
 		b, err := strconv.ParseBool(v)
 		if err != nil {
-			return diag.Errorf("error reading data_tiering for MemoryDB Cluster (%s): %s", d.Id(), err)
+			return diag.Errorf("reading data_tiering for MemoryDB Cluster (%s): %s", d.Id(), err)
 		}

 		d.Set("data_tiering", b)
@@ -511,7 +514,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter
 	numReplicasPerShard, err := deriveClusterNumReplicasPerShard(cluster)
 	if err != nil {
-		return diag.Errorf("error reading num_replicas_per_shard for MemoryDB Cluster (%s): %s", d.Id(), err)
+		return diag.Errorf("reading num_replicas_per_shard for MemoryDB Cluster (%s): %s", d.Id(), err)
 	}

 	d.Set("num_replicas_per_shard", numReplicasPerShard)
@@ -544,7 +547,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter
 }

 func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MemoryDBConn()
+	conn := meta.(*conns.AWSClient).MemoryDBConn(ctx)

 	input := &memorydb.DeleteClusterInput{
 		ClusterName: aws.String(d.Id()),
@@ -562,11 +565,11 @@ func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta int
 	}

 	if err != nil {
-		return diag.Errorf("error deleting MemoryDB Cluster (%s): %s", d.Id(), err)
+		return diag.Errorf("deleting MemoryDB Cluster (%s): %s", d.Id(), err)
 	}

 	if err := waitClusterDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil {
-		return diag.Errorf("error waiting for MemoryDB Cluster (%s) to be deleted: %s", d.Id(), err)
+		return diag.Errorf("waiting for MemoryDB Cluster (%s) to be deleted: %s", d.Id(), err)
 	}

 	return nil
diff --git a/internal/service/memorydb/cluster_data_source.go b/internal/service/memorydb/cluster_data_source.go
index 5a7c9d035bb..a6529fad999 100644
--- a/internal/service/memorydb/cluster_data_source.go
+++ b/internal/service/memorydb/cluster_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -160,7 +163,7 @@ func DataSourceCluster() *schema.Resource { } func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -185,7 +188,7 @@ func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta int if v := aws.StringValue(cluster.DataTiering); v != "" { b, err := strconv.ParseBool(v) if err != nil { - return diag.Errorf("error reading data_tiering for MemoryDB Cluster (%s): %s", d.Id(), err) + return diag.Errorf("reading data_tiering for MemoryDB Cluster (%s): %s", d.Id(), err) } d.Set("data_tiering", b) @@ -201,7 +204,7 @@ func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta int numReplicasPerShard, err := deriveClusterNumReplicasPerShard(cluster) if err != nil { - return diag.Errorf("error reading num_replicas_per_shard for MemoryDB Cluster (%s): %s", d.Id(), err) + return diag.Errorf("reading num_replicas_per_shard for MemoryDB Cluster (%s): %s", d.Id(), err) } d.Set("num_replicas_per_shard", numReplicasPerShard) @@ -230,14 +233,14 @@ func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("subnet_group_name", cluster.SubnetGroupName) d.Set("tls_enabled", cluster.TLSEnabled) - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { - return diag.Errorf("error listing tags for MemoryDB Cluster (%s): %s", d.Id(), err) + return diag.Errorf("listing tags for MemoryDB Cluster (%s): %s", d.Id(), err) } if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff 
--git a/internal/service/memorydb/cluster_data_source_test.go b/internal/service/memorydb/cluster_data_source_test.go index d84dbe050c5..c6a0f68133e 100644 --- a/internal/service/memorydb/cluster_data_source_test.go +++ b/internal/service/memorydb/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( diff --git a/internal/service/memorydb/cluster_test.go b/internal/service/memorydb/cluster_test.go index c6878b70279..f7299ec8281 100644 --- a/internal/service/memorydb/cluster_test.go +++ b/internal/service/memorydb/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( @@ -1059,7 +1062,7 @@ func TestAccMemoryDBCluster_Update_tags(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_memorydb_cluster" { @@ -1094,7 +1097,7 @@ func testAccCheckClusterExists(ctx context.Context, n string) resource.TestCheck return fmt.Errorf("No MemoryDB Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) _, err := tfmemorydb.FindClusterByName(ctx, conn, rs.Primary.Attributes["name"]) @@ -1104,7 +1107,7 @@ func testAccCheckClusterExists(ctx context.Context, n string) resource.TestCheck func testAccCheckSnapshotExistsByName(ctx context.Context, snapshotName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) _, err := tfmemorydb.FindSnapshotByName(ctx, conn, snapshotName) 
diff --git a/internal/service/memorydb/enum.go b/internal/service/memorydb/enum.go index c4e1d210819..9c5b1eaa287 100644 --- a/internal/service/memorydb/enum.go +++ b/internal/service/memorydb/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb // WARNING: As of 01/2022, the MemoryDB API does not provide a formal definition diff --git a/internal/service/memorydb/find.go b/internal/service/memorydb/find.go index 52bedac4693..93124d29636 100644 --- a/internal/service/memorydb/find.go +++ b/internal/service/memorydb/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( diff --git a/internal/service/memorydb/generate.go b/internal/service/memorydb/generate.go index 81d58c9a31c..63b9e021dcd 100644 --- a/internal/service/memorydb/generate.go +++ b/internal/service/memorydb/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeACLs,DescribeClusters,DescribeParameterGroups,DescribeSnapshots,DescribeSubnetGroups,DescribeUsers //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsOutTagsElem=TagList -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package memorydb diff --git a/internal/service/memorydb/parameter_group.go b/internal/service/memorydb/parameter_group.go index ab628656404..36215e1258b 100644 --- a/internal/service/memorydb/parameter_group.go +++ b/internal/service/memorydb/parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -94,21 +97,21 @@ func ResourceParameterGroup() *schema.Resource { } func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &memorydb.CreateParameterGroupInput{ Description: aws.String(d.Get("description").(string)), Family: aws.String(d.Get("family").(string)), ParameterGroupName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating MemoryDB Parameter Group: %s", input) output, err := conn.CreateParameterGroupWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating MemoryDB Parameter Group (%s): %s", name, err) + return diag.Errorf("creating MemoryDB Parameter Group (%s): %s", name, err) } d.SetId(name) @@ -121,7 +124,7 @@ func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, m } func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -148,7 +151,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m err := resetParameterGroupParameters(ctx, conn, d.Get("name").(string), paramsToReset) if err != nil { - return diag.Errorf("error resetting MemoryDB Parameter Group (%s) parameters to defaults: %s", d.Id(), err) + return diag.Errorf("resetting MemoryDB Parameter Group (%s) parameters to defaults: %s", d.Id(), err) } } @@ -163,7 +166,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m err := modifyParameterGroupParameters(ctx, conn, d.Get("name").(string), paramsToModify) if err != nil { 
- return diag.Errorf("error modifying MemoryDB Parameter Group (%s) parameters: %s", d.Id(), err) + return diag.Errorf("modifying MemoryDB Parameter Group (%s) parameters: %s", d.Id(), err) } } } @@ -172,7 +175,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) group, err := FindParameterGroupByName(ctx, conn, d.Id()) @@ -183,7 +186,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met } if err != nil { - return diag.Errorf("error reading MemoryDB Parameter Group (%s): %s", d.Id(), err) + return diag.Errorf("reading MemoryDB Parameter Group (%s): %s", d.Id(), err) } d.Set("arn", group.ARN) @@ -196,7 +199,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met parameters, err := listParameterGroupParameters(ctx, conn, d.Get("family").(string), d.Id(), userDefinedParameters) if err != nil { - return diag.Errorf("error listing parameters for MemoryDB Parameter Group (%s): %s", d.Id(), err) + return diag.Errorf("listing parameters for MemoryDB Parameter Group (%s): %s", d.Id(), err) } if err := d.Set("parameter", flattenParameters(parameters)); err != nil { @@ -207,7 +210,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met } func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) log.Printf("[DEBUG] Deleting MemoryDB Parameter Group: (%s)", d.Id()) _, err := conn.DeleteParameterGroupWithContext(ctx, &memorydb.DeleteParameterGroupInput{ @@ -219,7 +222,7 @@ func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return 
diag.Errorf("error deleting MemoryDB Parameter Group (%s): %s", d.Id(), err) + return diag.Errorf("deleting MemoryDB Parameter Group (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/memorydb/parameter_group_data_source.go b/internal/service/memorydb/parameter_group_data_source.go index d66d2de36d9..56bcdb9967e 100644 --- a/internal/service/memorydb/parameter_group_data_source.go +++ b/internal/service/memorydb/parameter_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -56,7 +59,7 @@ func DataSourceParameterGroup() *schema.Resource { } func dataSourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -78,21 +81,21 @@ func dataSourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, m parameters, err := listParameterGroupParameters(ctx, conn, d.Get("family").(string), d.Id(), userDefinedParameters) if err != nil { - return diag.Errorf("error listing parameters for MemoryDB Parameter Group (%s): %s", d.Id(), err) + return diag.Errorf("listing parameters for MemoryDB Parameter Group (%s): %s", d.Id(), err) } if err := d.Set("parameter", flattenParameters(parameters)); err != nil { return diag.Errorf("failed to set parameter: %s", err) } - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { - return diag.Errorf("error listing tags for MemoryDB Parameter Group (%s): %s", d.Id(), err) + return diag.Errorf("listing tags for MemoryDB Parameter Group (%s): %s", d.Id(), err) } if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return 
diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/memorydb/parameter_group_data_source_test.go b/internal/service/memorydb/parameter_group_data_source_test.go index 806397c6a8d..ccb3bf5a768 100644 --- a/internal/service/memorydb/parameter_group_data_source_test.go +++ b/internal/service/memorydb/parameter_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( diff --git a/internal/service/memorydb/parameter_group_test.go b/internal/service/memorydb/parameter_group_test.go index e1179cc655b..6da7b4f4763 100644 --- a/internal/service/memorydb/parameter_group_test.go +++ b/internal/service/memorydb/parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( @@ -268,7 +271,7 @@ func TestAccMemoryDBParameterGroup_update_tags(t *testing.T) { func testAccCheckParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_memorydb_parameter_group" { @@ -303,7 +306,7 @@ func testAccCheckParameterGroupExists(ctx context.Context, n string) resource.Te return fmt.Errorf("No MemoryDB Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) _, err := tfmemorydb.FindParameterGroupByName(ctx, conn, rs.Primary.Attributes["name"]) diff --git a/internal/service/memorydb/service_package_gen.go b/internal/service/memorydb/service_package_gen.go index df68789b417..4ca4e1251be 100644 --- a/internal/service/memorydb/service_package_gen.go +++ b/internal/service/memorydb/service_package_gen.go @@ -5,6 +5,10 @@ package memorydb import 
( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + memorydb_sdkv1 "github.com/aws/aws-sdk-go/service/memorydb" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -105,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string { return names.MemoryDB } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*memorydb_sdkv1.MemoryDB, error) { + sess := config["session"].(*session_sdkv1.Session) + + return memorydb_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/memorydb/snapshot.go b/internal/service/memorydb/snapshot.go index ac1ea233a08..55240167f2b 100644 --- a/internal/service/memorydb/snapshot.go +++ b/internal/service/memorydb/snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -145,13 +148,13 @@ func ResourceSnapshot() *schema.Resource { } func resourceSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &memorydb.CreateSnapshotInput{ ClusterName: aws.String(d.Get("cluster_name").(string)), SnapshotName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_arn"); ok { @@ -162,11 +165,11 @@ func resourceSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta in _, err := conn.CreateSnapshotWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating MemoryDB Snapshot (%s): %s", name, err) + return diag.Errorf("creating MemoryDB Snapshot (%s): %s", name, err) } if err := waitSnapshotAvailable(ctx, conn, name, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for MemoryDB Snapshot (%s) to be created: %s", name, err) + return diag.Errorf("waiting for MemoryDB Snapshot (%s) to be created: %s", name, err) } d.SetId(name) @@ -180,7 +183,7 @@ func resourceSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) snapshot, err := FindSnapshotByName(ctx, conn, d.Id()) @@ -191,7 +194,7 @@ func resourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta inte } if err != nil { - return diag.Errorf("error reading MemoryDB Snapshot (%s): %s", d.Id(), err) + return diag.Errorf("reading MemoryDB Snapshot (%s): %s", d.Id(), err) } d.Set("arn", snapshot.ARN) @@ -208,7 +211,7 @@ func resourceSnapshotRead(ctx context.Context, d 
*schema.ResourceData, meta inte } func resourceSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) log.Printf("[DEBUG] Deleting MemoryDB Snapshot: (%s)", d.Id()) _, err := conn.DeleteSnapshotWithContext(ctx, &memorydb.DeleteSnapshotInput{ @@ -220,11 +223,11 @@ func resourceSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta in } if err != nil { - return diag.Errorf("error deleting MemoryDB Snapshot (%s): %s", d.Id(), err) + return diag.Errorf("deleting MemoryDB Snapshot (%s): %s", d.Id(), err) } if err := waitSnapshotDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for MemoryDB Snapshot (%s) to be deleted: %s", d.Id(), err) + return diag.Errorf("waiting for MemoryDB Snapshot (%s) to be deleted: %s", d.Id(), err) } return nil diff --git a/internal/service/memorydb/snapshot_data_source.go b/internal/service/memorydb/snapshot_data_source.go index ccbb0936968..39a8c9931c6 100644 --- a/internal/service/memorydb/snapshot_data_source.go +++ b/internal/service/memorydb/snapshot_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -103,7 +106,7 @@ func DataSourceSnapshot() *schema.Resource { } func dataSourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -125,14 +128,14 @@ func dataSourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("name", snapshot.Name) d.Set("source", snapshot.Source) - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { - return diag.Errorf("error listing tags for MemoryDB Snapshot (%s): %s", d.Id(), err) + return diag.Errorf("listing tags for MemoryDB Snapshot (%s): %s", d.Id(), err) } if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/memorydb/snapshot_data_source_test.go b/internal/service/memorydb/snapshot_data_source_test.go index 3223b1c940c..51440649af6 100644 --- a/internal/service/memorydb/snapshot_data_source_test.go +++ b/internal/service/memorydb/snapshot_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( diff --git a/internal/service/memorydb/snapshot_test.go b/internal/service/memorydb/snapshot_test.go index 71f1d5bf710..6e0bc598d0a 100644 --- a/internal/service/memorydb/snapshot_test.go +++ b/internal/service/memorydb/snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( @@ -231,7 +234,7 @@ func TestAccMemoryDBSnapshot_update_tags(t *testing.T) { func testAccCheckSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_memorydb_snapshot" { @@ -266,7 +269,7 @@ func testAccCheckSnapshotExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("No MemoryDB Snapshot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) _, err := tfmemorydb.FindSnapshotByName(ctx, conn, rs.Primary.Attributes["name"]) diff --git a/internal/service/memorydb/status.go b/internal/service/memorydb/status.go index 866b581d016..54af4c86037 100644 --- a/internal/service/memorydb/status.go +++ b/internal/service/memorydb/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( diff --git a/internal/service/memorydb/subnet_group.go b/internal/service/memorydb/subnet_group.go index a9ff4ed56f7..f8739b158d4 100644 --- a/internal/service/memorydb/subnet_group.go +++ b/internal/service/memorydb/subnet_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -77,21 +80,21 @@ func ResourceSubnetGroup() *schema.Resource { } func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &memorydb.CreateSubnetGroupInput{ Description: aws.String(d.Get("description").(string)), SubnetGroupName: aws.String(name), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating MemoryDB Subnet Group: %s", input) _, err := conn.CreateSubnetGroupWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating MemoryDB Subnet Group (%s): %s", name, err) + return diag.Errorf("creating MemoryDB Subnet Group (%s): %s", name, err) } d.SetId(name) @@ -100,7 +103,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &memorydb.UpdateSubnetGroupInput{ @@ -113,7 +116,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta _, err := conn.UpdateSubnetGroupWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating MemoryDB Subnet Group (%s): %s", d.Id(), err) + return diag.Errorf("updating MemoryDB Subnet Group (%s): %s", d.Id(), err) } } @@ -121,7 +124,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := 
meta.(*conns.AWSClient).MemoryDBConn(ctx) group, err := FindSubnetGroupByName(ctx, conn, d.Id()) @@ -132,7 +135,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i } if err != nil { - return diag.Errorf("error reading MemoryDB Subnet Group (%s): %s", d.Id(), err) + return diag.Errorf("reading MemoryDB Subnet Group (%s): %s", d.Id(), err) } var subnetIds []*string @@ -151,7 +154,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) log.Printf("[DEBUG] Deleting MemoryDB Subnet Group: (%s)", d.Id()) _, err := conn.DeleteSubnetGroupWithContext(ctx, &memorydb.DeleteSubnetGroupInput{ @@ -163,7 +166,7 @@ func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.Errorf("error deleting MemoryDB Subnet Group (%s): %s", d.Id(), err) + return diag.Errorf("deleting MemoryDB Subnet Group (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/memorydb/subnet_group_data_source.go b/internal/service/memorydb/subnet_group_data_source.go index 1af0886db04..4d3b21ee9e8 100644 --- a/internal/service/memorydb/subnet_group_data_source.go +++ b/internal/service/memorydb/subnet_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -45,7 +48,7 @@ func DataSourceSubnetGroup() *schema.Resource { } func dataSourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -69,14 +72,14 @@ func dataSourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta d.Set("name", group.Name) d.Set("vpc_id", group.VpcId) - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { - return diag.Errorf("error listing tags for MemoryDB Subnet Group (%s): %s", d.Id(), err) + return diag.Errorf("listing tags for MemoryDB Subnet Group (%s): %s", d.Id(), err) } if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/memorydb/subnet_group_data_source_test.go b/internal/service/memorydb/subnet_group_data_source_test.go index 0a77d95d441..03f7e23a480 100644 --- a/internal/service/memorydb/subnet_group_data_source_test.go +++ b/internal/service/memorydb/subnet_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( diff --git a/internal/service/memorydb/subnet_group_test.go b/internal/service/memorydb/subnet_group_test.go index 4850930b6a2..53cc2fa3300 100644 --- a/internal/service/memorydb/subnet_group_test.go +++ b/internal/service/memorydb/subnet_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( @@ -295,7 +298,7 @@ func TestAccMemoryDBSubnetGroup_update_tags(t *testing.T) { func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_memorydb_subnet_group" { @@ -330,7 +333,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, n string) resource.TestC return fmt.Errorf("No MemoryDB Subnet Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) _, err := tfmemorydb.FindSubnetGroupByName(ctx, conn, rs.Primary.Attributes["name"]) diff --git a/internal/service/memorydb/sweep.go b/internal/service/memorydb/sweep.go index 0615ddd0129..98bc3222a52 100644 --- a/internal/service/memorydb/sweep.go +++ b/internal/service/memorydb/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/memorydb" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -64,11 +66,11 @@ func init() { func sweepACLs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).MemoryDBConn() + conn := client.MemoryDBConn(ctx) input := &memorydb.DescribeACLsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -103,7 +105,7 @@ func sweepACLs(region string) error { return fmt.Errorf("error listing MemoryDB ACLs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MemoryDB ACLs (%s): %w", region, err) @@ -114,11 +116,11 @@ func sweepACLs(region string) error { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).MemoryDBConn() + conn := client.MemoryDBConn(ctx) input := &memorydb.DescribeClustersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -147,7 +149,7 @@ func sweepClusters(region string) error { return fmt.Errorf("error listing MemoryDB Clusters (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MemoryDB Clusters (%s): %w", region, 
err) @@ -158,11 +160,11 @@ func sweepClusters(region string) error { func sweepParameterGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).MemoryDBConn() + conn := client.MemoryDBConn(ctx) input := &memorydb.DescribeParameterGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -197,7 +199,7 @@ func sweepParameterGroups(region string) error { return fmt.Errorf("error listing MemoryDB Parameter Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MemoryDB Parameter Groups (%s): %w", region, err) @@ -208,11 +210,11 @@ func sweepParameterGroups(region string) error { func sweepSnapshots(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).MemoryDBConn() + conn := client.MemoryDBConn(ctx) input := &memorydb.DescribeSnapshotsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -241,7 +243,7 @@ func sweepSnapshots(region string) error { return fmt.Errorf("error listing MemoryDB Snapshots (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MemoryDB Snapshots (%s): %w", region, err) @@ -252,11 +254,11 @@ func sweepSnapshots(region string) error { func sweepSubnetGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, 
region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).MemoryDBConn() + conn := client.MemoryDBConn(ctx) input := &memorydb.DescribeSubnetGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -291,7 +293,7 @@ func sweepSubnetGroups(region string) error { return fmt.Errorf("error listing MemoryDB Subnet Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MemoryDB Subnet Groups (%s): %w", region, err) @@ -302,11 +304,11 @@ func sweepSubnetGroups(region string) error { func sweepUsers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).MemoryDBConn() + conn := client.MemoryDBConn(ctx) input := &memorydb.DescribeUsersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -341,7 +343,7 @@ func sweepUsers(region string) error { return fmt.Errorf("error listing MemoryDB Users (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping MemoryDB Users (%s): %w", region, err) diff --git a/internal/service/memorydb/tags_gen.go b/internal/service/memorydb/tags_gen.go index cc5a8960f55..73a875d4c48 100644 --- a/internal/service/memorydb/tags_gen.go +++ b/internal/service/memorydb/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists memorydb service tags. +// listTags lists memorydb service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn memorydbiface.MemoryDBAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn memorydbiface.MemoryDBAPI, identifier string) (tftags.KeyValueTags, error) { input := &memorydb.ListTagsInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn memorydbiface.MemoryDBAPI, identifier st // ListTags lists memorydb service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).MemoryDBConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).MemoryDBConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*memorydb.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns memorydb service tags from Context. +// getTagsIn returns memorydb service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*memorydb.Tag { +func getTagsIn(ctx context.Context) []*memorydb.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*memorydb.Tag { return nil } -// SetTagsOut sets memorydb service tags in Context. -func SetTagsOut(ctx context.Context, tags []*memorydb.Tag) { +// setTagsOut sets memorydb service tags in Context. +func setTagsOut(ctx context.Context, tags []*memorydb.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates memorydb service tags. +// updateTags updates memorydb service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn memorydbiface.MemoryDBAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn memorydbiface.MemoryDBAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn memorydbiface.MemoryDBAPI, identifier // UpdateTags updates memorydb service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).MemoryDBConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).MemoryDBConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/memorydb/user.go b/internal/service/memorydb/user.go index 1fcb088a723..c9dc104b044 100644 --- a/internal/service/memorydb/user.go +++ b/internal/service/memorydb/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -89,7 +92,7 @@ func ResourceUser() *schema.Resource { } func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) userName := d.Get("user_name").(string) input := &memorydb.CreateUserInput{ @@ -98,7 +101,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf Passwords: flex.ExpandStringSet(d.Get("authentication_mode.0.passwords").(*schema.Set)), Type: aws.String(d.Get("authentication_mode.0.type").(string)), }, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UserName: aws.String(userName), } @@ -114,7 +117,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) user, err := FindUserByName(ctx, conn, d.Id()) @@ -150,7 +153,7 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &memorydb.UpdateUserInput{ @@ -183,7 +186,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) log.Printf("[DEBUG] Deleting MemoryDB User: (%s)", d.Id()) _, err := conn.DeleteUserWithContext(ctx, &memorydb.DeleteUserInput{ diff --git a/internal/service/memorydb/user_data_source.go 
b/internal/service/memorydb/user_data_source.go index e216b06a669..926baa9d67a 100644 --- a/internal/service/memorydb/user_data_source.go +++ b/internal/service/memorydb/user_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( @@ -55,7 +58,7 @@ func DataSourceUser() *schema.Resource { } func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MemoryDBConn() + conn := meta.(*conns.AWSClient).MemoryDBConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig userName := d.Get("user_name").(string) @@ -85,14 +88,14 @@ func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("minimum_engine_version", user.MinimumEngineVersion) d.Set("user_name", user.Name) - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { - return diag.Errorf("error listing tags for MemoryDB User (%s): %s", d.Id(), err) + return diag.Errorf("listing tags for MemoryDB User (%s): %s", d.Id(), err) } if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/memorydb/user_data_source_test.go b/internal/service/memorydb/user_data_source_test.go index d0fca9c8cab..b0c486a36e1 100644 --- a/internal/service/memorydb/user_data_source_test.go +++ b/internal/service/memorydb/user_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( diff --git a/internal/service/memorydb/user_test.go b/internal/service/memorydb/user_test.go index 40fdb188329..c96ed8e7844 100644 --- a/internal/service/memorydb/user_test.go +++ b/internal/service/memorydb/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb_test import ( @@ -212,7 +215,7 @@ func TestAccMemoryDBUser_tags(t *testing.T) { func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_memorydb_user" { @@ -247,7 +250,7 @@ func testAccCheckUserExists(ctx context.Context, n string) resource.TestCheckFun return fmt.Errorf("No MemoryDB User ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).MemoryDBConn(ctx) _, err := tfmemorydb.FindUserByName(ctx, conn, rs.Primary.Attributes["user_name"]) diff --git a/internal/service/memorydb/validate.go b/internal/service/memorydb/validate.go index 167523c55b6..f23ef0eb4ec 100644 --- a/internal/service/memorydb/validate.go +++ b/internal/service/memorydb/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( diff --git a/internal/service/memorydb/wait.go b/internal/service/memorydb/wait.go index cba30f066ec..eaff72786b7 100644 --- a/internal/service/memorydb/wait.go +++ b/internal/service/memorydb/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package memorydb import ( diff --git a/internal/service/meta/arn_data_source.go b/internal/service/meta/arn_data_source.go index ea1e1a53a48..a1baaa04a33 100644 --- a/internal/service/meta/arn_data_source.go +++ b/internal/service/meta/arn_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Code generated by tools/tfsdk2fw/main.go. Manual editing is required. package meta diff --git a/internal/service/meta/arn_data_source_test.go b/internal/service/meta/arn_data_source_test.go index 8626b531fd8..d122d94b066 100644 --- a/internal/service/meta/arn_data_source_test.go +++ b/internal/service/meta/arn_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/billing_service_account_data_source.go b/internal/service/meta/billing_service_account_data_source.go index 319502dfd5c..969aaf6e65f 100644 --- a/internal/service/meta/billing_service_account_data_source.go +++ b/internal/service/meta/billing_service_account_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta import ( diff --git a/internal/service/meta/billing_service_account_data_source_test.go b/internal/service/meta/billing_service_account_data_source_test.go index fbb170f7804..4d61186475c 100644 --- a/internal/service/meta/billing_service_account_data_source_test.go +++ b/internal/service/meta/billing_service_account_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/consts.go b/internal/service/meta/consts.go index ddc8e7a7d28..788d26347f7 100644 --- a/internal/service/meta/consts.go +++ b/internal/service/meta/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package meta const ( diff --git a/internal/service/meta/default_tags_data_source.go b/internal/service/meta/default_tags_data_source.go index 0d0516f1c42..e92ea7051aa 100644 --- a/internal/service/meta/default_tags_data_source.go +++ b/internal/service/meta/default_tags_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Code generated by tools/tfsdk2fw/main.go. Manual editing is required. package meta @@ -8,8 +11,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) diff --git a/internal/service/meta/default_tags_data_source_test.go b/internal/service/meta/default_tags_data_source_test.go index c9ce8b62517..b520101c11f 100644 --- a/internal/service/meta/default_tags_data_source_test.go +++ b/internal/service/meta/default_tags_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/generate.go b/internal/service/meta/generate.go new file mode 100644 index 00000000000..835c158105d --- /dev/null +++ b/internal/service/meta/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package meta diff --git a/internal/service/meta/ip_ranges_data_source.go b/internal/service/meta/ip_ranges_data_source.go index 8537a89e896..0f44a0cecf2 100644 --- a/internal/service/meta/ip_ranges_data_source.go +++ b/internal/service/meta/ip_ranges_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta import ( @@ -14,8 +17,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tfslices "github.com/hashicorp/terraform-provider-aws/internal/slices" "golang.org/x/exp/slices" ) diff --git a/internal/service/meta/ip_ranges_data_source_test.go b/internal/service/meta/ip_ranges_data_source_test.go index f411ccdf4ad..a8c872fe4e3 100644 --- a/internal/service/meta/ip_ranges_data_source_test.go +++ b/internal/service/meta/ip_ranges_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/partition_data_source.go b/internal/service/meta/partition_data_source.go index 368737def3b..1207c5efa67 100644 --- a/internal/service/meta/partition_data_source.go +++ b/internal/service/meta/partition_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta import ( diff --git a/internal/service/meta/partition_data_source_test.go b/internal/service/meta/partition_data_source_test.go index c5ed08caf03..748f184ea6a 100644 --- a/internal/service/meta/partition_data_source_test.go +++ b/internal/service/meta/partition_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/region_data_source.go b/internal/service/meta/region_data_source.go index 732732f5c7e..3de2753e506 100644 --- a/internal/service/meta/region_data_source.go +++ b/internal/service/meta/region_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta import ( diff --git a/internal/service/meta/region_data_source_test.go b/internal/service/meta/region_data_source_test.go index b3b81b8034c..42ff6d78d7b 100644 --- a/internal/service/meta/region_data_source_test.go +++ b/internal/service/meta/region_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/regions_data_source.go b/internal/service/meta/regions_data_source.go index 031b9dae907..aa22b6d6c10 100644 --- a/internal/service/meta/regions_data_source.go +++ b/internal/service/meta/regions_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package meta import ( @@ -8,8 +11,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tfec2 "github.com/hashicorp/terraform-provider-aws/internal/service/ec2" ) @@ -64,7 +67,7 @@ func (d *dataSourceRegions) Read(ctx context.Context, request datasource.ReadReq return } - conn := d.Meta().EC2Conn() + conn := d.Meta().EC2Conn(ctx) input := &ec2.DescribeRegionsInput{ AllRegions: flex.BoolFromFramework(ctx, data.AllRegions), diff --git a/internal/service/meta/regions_data_source_test.go b/internal/service/meta/regions_data_source_test.go index 8e2a0e0b0ed..f94ad92741c 100644 --- a/internal/service/meta/regions_data_source_test.go +++ b/internal/service/meta/regions_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( @@ -21,7 +24,7 @@ func TestAccMetaRegionsDataSource_basic(t *testing.T) { { Config: testAccRegionsDataSourceConfig_empty(), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "names.#", 0), ), }, }, @@ -40,7 +43,7 @@ func TestAccMetaRegionsDataSource_filter(t *testing.T) { { Config: testAccRegionsDataSourceConfig_optInStatusFilter("opt-in-not-required"), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "names.#", 0), ), }, }, @@ -59,7 +62,7 @@ func TestAccMetaRegionsDataSource_allRegions(t *testing.T) { { Config: testAccRegionsDataSourceConfig_allRegions(), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "names.#", 0), ), }, }, diff --git a/internal/service/meta/service_data_source.go b/internal/service/meta/service_data_source.go index adb00eb94fb..2706a0a0e7b 100644 --- a/internal/service/meta/service_data_source.go +++ b/internal/service/meta/service_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package meta import ( diff --git a/internal/service/meta/service_data_source_test.go b/internal/service/meta/service_data_source_test.go index dc8a8fc9c20..db95a0f5fa1 100644 --- a/internal/service/meta/service_data_source_test.go +++ b/internal/service/meta/service_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package meta_test import ( diff --git a/internal/service/meta/service_package_gen.go b/internal/service/meta/service_package_gen.go index 6bbd72b1a5b..caec02b9caf 100644 --- a/internal/service/meta/service_package_gen.go +++ b/internal/service/meta/service_package_gen.go @@ -5,6 +5,7 @@ package meta import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" ) @@ -55,4 +56,6 @@ func (p *servicePackage) ServicePackageName() string { return "meta" } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/mq/broker.go b/internal/service/mq/broker.go index c58f6248ee9..22b9a9bcd30 100644 --- a/internal/service/mq/broker.go +++ b/internal/service/mq/broker.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package mq import ( @@ -23,10 +26,10 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" "github.com/mitchellh/copystructure" @@ -326,6 +329,11 @@ func ResourceBroker() *schema.Resource { Sensitive: true, ValidateFunc: ValidBrokerPassword, }, + "replication_user": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, "username": { Type: schema.TypeString, Required: true, @@ -354,7 +362,7 @@ 
func ResourceBroker() *schema.Resource { } func resourceBrokerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MQConn() + conn := meta.(*conns.AWSClient).MQConn(ctx) name := d.Get("broker_name").(string) engineType := d.Get("engine_type").(string) @@ -366,7 +374,7 @@ func resourceBrokerCreate(ctx context.Context, d *schema.ResourceData, meta inte EngineVersion: aws.String(d.Get("engine_version").(string)), HostInstanceType: aws.String(d.Get("host_instance_type").(string)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Users: expandUsers(d.Get("user").(*schema.Set).List()), } @@ -418,7 +426,7 @@ func resourceBrokerCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceBrokerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MQConn() + conn := meta.(*conns.AWSClient).MQConn(ctx) output, err := FindBrokerByID(ctx, conn, d.Id()) @@ -481,13 +489,13 @@ func resourceBrokerRead(ctx context.Context, d *schema.ResourceData, meta interf return diag.Errorf("setting user: %s", err) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return nil } func resourceBrokerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MQConn() + conn := meta.(*conns.AWSClient).MQConn(ctx) requiresReboot := false @@ -591,7 +599,7 @@ func resourceBrokerUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceBrokerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MQConn() + conn := meta.(*conns.AWSClient).MQConn(ctx) log.Printf("[INFO] Deleting MQ Broker: %s", d.Id()) _, err := conn.DeleteBrokerWithContext(ctx, &mq.DeleteBrokerInput{ @@ -803,11 +811,12 @@ func DiffBrokerUsers(bId string, 
oldUsers, newUsers []interface{}) ( if !reflect.DeepEqual(existingUserMap, newUserMap) { ur = append(ur, &mq.UpdateUserRequest{ - BrokerId: aws.String(bId), - ConsoleAccess: aws.Bool(newUserMap["console_access"].(bool)), - Groups: flex.ExpandStringList(ng), - Password: aws.String(newUserMap["password"].(string)), - Username: aws.String(username), + BrokerId: aws.String(bId), + ConsoleAccess: aws.Bool(newUserMap["console_access"].(bool)), + Groups: flex.ExpandStringList(ng), + ReplicationUser: aws.Bool(newUserMap["replication_user"].(bool)), + Password: aws.String(newUserMap["password"].(string)), + Username: aws.String(username), }) } @@ -815,10 +824,11 @@ func DiffBrokerUsers(bId string, oldUsers, newUsers []interface{}) ( delete(existingUsers, username) } else { cur := &mq.CreateUserRequest{ - BrokerId: aws.String(bId), - ConsoleAccess: aws.Bool(newUserMap["console_access"].(bool)), - Password: aws.String(newUserMap["password"].(string)), - Username: aws.String(username), + BrokerId: aws.String(bId), + ConsoleAccess: aws.Bool(newUserMap["console_access"].(bool)), + Password: aws.String(newUserMap["password"].(string)), + ReplicationUser: aws.Bool(newUserMap["replication_user"].(bool)), + Username: aws.String(username), } if len(ng) > 0 { cur.Groups = flex.ExpandStringList(ng) @@ -904,6 +914,9 @@ func expandUsers(cfg []interface{}) []*mq.User { if v, ok := u["console_access"]; ok { user.ConsoleAccess = aws.Bool(v.(bool)) } + if v, ok := u["replication_user"]; ok { + user.ReplicationUser = aws.Bool(v.(bool)) + } if v, ok := u["groups"]; ok { user.Groups = flex.ExpandStringSet(v.(*schema.Set)) } @@ -930,9 +943,10 @@ func expandUsersForBroker(ctx context.Context, conn *mq.MQ, brokerId string, inp } user := &mq.User{ - ConsoleAccess: uOut.ConsoleAccess, - Groups: uOut.Groups, - Username: uOut.Username, + ConsoleAccess: uOut.ConsoleAccess, + Groups: uOut.Groups, + ReplicationUser: uOut.ReplicationUser, + Username: uOut.Username, } rawUsers = append(rawUsers, user) @@ 
-965,6 +979,9 @@ func flattenUsers(users []*mq.User, cfgUsers []interface{}) *schema.Set { if u.ConsoleAccess != nil { m["console_access"] = aws.BoolValue(u.ConsoleAccess) } + if u.ReplicationUser != nil { + m["replication_user"] = aws.BoolValue(u.ReplicationUser) + } if len(u.Groups) > 0 { m["groups"] = flex.FlattenStringSet(u.Groups) } diff --git a/internal/service/mq/broker_data_source.go b/internal/service/mq/broker_data_source.go index 46f29f735de..2eafd5d1277 100644 --- a/internal/service/mq/broker_data_source.go +++ b/internal/service/mq/broker_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package mq import ( @@ -8,8 +11,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" ) // @SDKDataSource("aws_mq_broker") @@ -246,7 +249,7 @@ func DataSourceBroker() *schema.Resource { } func dataSourceBrokerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).MQConn() + conn := meta.(*conns.AWSClient).MQConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &mq.ListBrokersInput{} diff --git a/internal/service/mq/broker_data_source_test.go b/internal/service/mq/broker_data_source_test.go index aba8245d4c3..fa5bdcb4dd7 100644 --- a/internal/service/mq/broker_data_source_test.go +++ b/internal/service/mq/broker_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package mq_test

 import (
diff --git a/internal/service/mq/broker_instance_type_offerings_data_source.go b/internal/service/mq/broker_instance_type_offerings_data_source.go
index e56ba9b3450..ff41348fc62 100644
--- a/internal/service/mq/broker_instance_type_offerings_data_source.go
+++ b/internal/service/mq/broker_instance_type_offerings_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq

 import (
@@ -79,7 +82,7 @@ func DataSourceBrokerInstanceTypeOfferings() *schema.Resource {
 }

 func dataSourceBrokerInstanceTypeOfferingsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MQConn()
+	conn := meta.(*conns.AWSClient).MQConn(ctx)

 	input := &mq.DescribeBrokerInstanceOptionsInput{}
diff --git a/internal/service/mq/broker_instance_type_offerings_data_source_test.go b/internal/service/mq/broker_instance_type_offerings_data_source_test.go
index 876422d8c7a..a4c806b3ff8 100644
--- a/internal/service/mq/broker_instance_type_offerings_data_source_test.go
+++ b/internal/service/mq/broker_instance_type_offerings_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq_test

 import (
diff --git a/internal/service/mq/broker_test.go b/internal/service/mq/broker_test.go
index 0fffe0b71e1..baa026a5375 100644
--- a/internal/service/mq/broker_test.go
+++ b/internal/service/mq/broker_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq_test

 import (
@@ -117,19 +120,21 @@ func TestDiffUsers(t *testing.T) {
 			OldUsers: []interface{}{},
 			NewUsers: []interface{}{
 				map[string]interface{}{
-					"console_access": false,
-					"username":       "second",
-					"password":       "TestTest2222",
-					"groups":         schema.NewSet(schema.HashString, []interface{}{"admin"}),
+					"console_access":   false,
+					"username":         "second",
+					"password":         "TestTest2222",
+					"groups":           schema.NewSet(schema.HashString, []interface{}{"admin"}),
+					"replication_user": false,
 				},
 			},
 			Creations: []*mq.CreateUserRequest{
 				{
-					BrokerId:      aws.String("test"),
-					ConsoleAccess: aws.Bool(false),
-					Username:      aws.String("second"),
-					Password:      aws.String("TestTest2222"),
-					Groups:        aws.StringSlice([]string{"admin"}),
+					BrokerId:        aws.String("test"),
+					ConsoleAccess:   aws.Bool(false),
+					Username:        aws.String("second"),
+					Password:        aws.String("TestTest2222"),
+					Groups:          aws.StringSlice([]string{"admin"}),
+					ReplicationUser: aws.Bool(false),
 				},
 			},
 			Deletions: []*mq.DeleteUserInput{},
@@ -138,24 +143,27 @@ func TestDiffUsers(t *testing.T) {
 		{
 			OldUsers: []interface{}{
 				map[string]interface{}{
-					"console_access": true,
-					"username":       "first",
-					"password":       "TestTest1111",
+					"console_access":   true,
+					"username":         "first",
+					"password":         "TestTest1111",
+					"replication_user": false,
 				},
 			},
 			NewUsers: []interface{}{
 				map[string]interface{}{
-					"console_access": false,
-					"username":       "second",
-					"password":       "TestTest2222",
+					"console_access":   false,
+					"username":         "second",
+					"password":         "TestTest2222",
+					"replication_user": false,
 				},
 			},
 			Creations: []*mq.CreateUserRequest{
 				{
-					BrokerId:      aws.String("test"),
-					ConsoleAccess: aws.Bool(false),
-					Username:      aws.String("second"),
-					Password:      aws.String("TestTest2222"),
+					BrokerId:        aws.String("test"),
+					ConsoleAccess:   aws.Bool(false),
+					Username:        aws.String("second"),
+					Password:        aws.String("TestTest2222"),
+					ReplicationUser: aws.Bool(false),
 				},
 			},
 			Deletions: []*mq.DeleteUserInput{
@@ -166,22 +174,25 @@ func TestDiffUsers(t *testing.T) {
 		{
 			OldUsers: []interface{}{
 				map[string]interface{}{
-					"console_access": true,
-					"username":       "first",
-					"password":       "TestTest1111updated",
+					"console_access":   true,
+					"username":         "first",
+					"password":         "TestTest1111updated",
+					"replication_user": false,
 				},
 				map[string]interface{}{
-					"console_access": false,
-					"username":       "second",
-					"password":       "TestTest2222",
+					"console_access":   false,
+					"username":         "second",
+					"password":         "TestTest2222",
+					"replication_user": false,
 				},
 			},
 			NewUsers: []interface{}{
 				map[string]interface{}{
-					"console_access": false,
-					"username":       "second",
-					"password":       "TestTest2222",
-					"groups":         schema.NewSet(schema.HashString, []interface{}{"admin"}),
+					"console_access":   false,
+					"username":         "second",
+					"password":         "TestTest2222",
+					"groups":           schema.NewSet(schema.HashString, []interface{}{"admin"}),
+					"replication_user": false,
 				},
 			},
 			Creations: []*mq.CreateUserRequest{},
@@ -190,11 +201,12 @@ func TestDiffUsers(t *testing.T) {
 			},
 			Updates: []*mq.UpdateUserRequest{
 				{
-					BrokerId:      aws.String("test"),
-					ConsoleAccess: aws.Bool(false),
-					Username:      aws.String("second"),
-					Password:      aws.String("TestTest2222"),
-					Groups:        aws.StringSlice([]string{"admin"}),
+					BrokerId:        aws.String("test"),
+					ConsoleAccess:   aws.Bool(false),
+					Username:        aws.String("second"),
+					Password:        aws.String("TestTest2222"),
+					Groups:          aws.StringSlice([]string{"admin"}),
+					ReplicationUser: aws.Bool(false),
 				},
 			},
 		},
@@ -1311,7 +1323,7 @@ func TestAccMQBroker_ldap(t *testing.T) {

 func testAccCheckBrokerDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_mq_broker" {
@@ -1346,7 +1358,7 @@ func testAccCheckBrokerExists(ctx context.Context, n string, v *mq.DescribeBroke
 			return fmt.Errorf("No MQ Broker ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn(ctx)

 		output, err := tfmq.FindBrokerByID(ctx, conn, rs.Primary.ID)
@@ -1361,7 +1373,7 @@ func testAccCheckBrokerExists(ctx context.Context, n string, v *mq.DescribeBroke
 }

 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn(ctx)

 	input := &mq.ListBrokersInput{}
diff --git a/internal/service/mq/configuration.go b/internal/service/mq/configuration.go
index a2059f613bd..1ec300d9bff 100644
--- a/internal/service/mq/configuration.go
+++ b/internal/service/mq/configuration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq

 import (
@@ -100,14 +103,14 @@ func ResourceConfiguration() *schema.Resource {
 }

 func resourceConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MQConn()
+	conn := meta.(*conns.AWSClient).MQConn(ctx)

 	name := d.Get("name").(string)
 	input := &mq.CreateConfigurationRequest{
 		EngineType:    aws.String(d.Get("engine_type").(string)),
 		EngineVersion: aws.String(d.Get("engine_version").(string)),
 		Name:          aws.String(name),
-		Tags:          GetTagsIn(ctx),
+		Tags:          getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("authentication_strategy"); ok {
@@ -143,7 +146,7 @@ func resourceConfigurationCreate(ctx context.Context, d *schema.ResourceData, me
 }

 func resourceConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MQConn()
+	conn := meta.(*conns.AWSClient).MQConn(ctx)

 	configuration, err := FindConfigurationByID(ctx, conn, d.Id())
@@ -183,13 +186,13 @@ func resourceConfigurationRead(ctx context.Context, d *schema.ResourceData, meta
 	d.Set("data", string(data))

-	SetTagsOut(ctx, configuration.Tags)
+	setTagsOut(ctx, configuration.Tags)

 	return nil
 }

 func resourceConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MQConn()
+	conn := meta.(*conns.AWSClient).MQConn(ctx)

 	if d.HasChanges("data", "description") {
 		input := &mq.UpdateConfigurationRequest{
diff --git a/internal/service/mq/configuration_test.go b/internal/service/mq/configuration_test.go
index 2cda45b73b0..2e75868be0e 100644
--- a/internal/service/mq/configuration_test.go
+++ b/internal/service/mq/configuration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq_test

 import (
@@ -197,7 +200,7 @@ func testAccCheckConfigurationExists(ctx context.Context, n string) resource.Tes
 			return fmt.Errorf("No MQ Configuration ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MQConn(ctx)

 		_, err := tfmq.FindConfigurationByID(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/mq/forge.go b/internal/service/mq/forge.go
index 488f821a6cd..0e25d2d0c09 100644
--- a/internal/service/mq/forge.go
+++ b/internal/service/mq/forge.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq

 import (
@@ -6,7 +9,7 @@ import (
 	"github.com/beevik/etree"
 )

-// cannonicalXML reads XML in a string and re-writes it canonically, used for
+// CanonicalXML reads XML in a string and re-writes it canonically, used for
 // comparing XML for logical equivalency
 func CanonicalXML(s string) (string, error) {
 	doc := etree.NewDocument()
diff --git a/internal/service/mq/forge_test.go b/internal/service/mq/forge_test.go
index 84c735856e4..5c3dae88209 100644
--- a/internal/service/mq/forge_test.go
+++ b/internal/service/mq/forge_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mq_test

 import (
diff --git a/internal/service/mq/generate.go b/internal/service/mq/generate.go
index ee31a0ebf59..f541c1e7d3f 100644
--- a/internal/service/mq/generate.go
+++ b/internal/service/mq/generate.go
@@ -1,5 +1,9 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/listpages/main.go -ListOps=DescribeBrokerInstanceOptions
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ServiceTagsMap -TagOp=CreateTags -UntagOp=DeleteTags -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package mq
diff --git a/internal/service/mq/service_package_gen.go b/internal/service/mq/service_package_gen.go
index 65c58b5d649..65642098572 100644
--- a/internal/service/mq/service_package_gen.go
+++ b/internal/service/mq/service_package_gen.go
@@ -5,6 +5,10 @@ package mq
 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	mq_sdkv1 "github.com/aws/aws-sdk-go/service/mq"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -57,4 +61,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.MQ
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*mq_sdkv1.MQ, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return mq_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/mq/sweep.go b/internal/service/mq/sweep.go
index a55e2aa56d4..30ae94d13ba 100644
--- a/internal/service/mq/sweep.go
+++ b/internal/service/mq/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/mq"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )

@@ -23,12 +25,12 @@ func init() {

 func sweepBrokers(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
 	input := &mq.ListBrokersInput{MaxResults: aws.Int64(100)}
-	conn := client.(*conns.AWSClient).MQConn()
+	conn := client.MQConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	err = conn.ListBrokersPagesWithContext(ctx, input, func(page *mq.ListBrokersResponse, lastPage bool) bool {
@@ -56,7 +58,7 @@ func sweepBrokers(region string) error {
 		return fmt.Errorf("error listing MQ Brokers (%s): %w", region, err)
 	}

-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)

 	if err != nil {
 		return fmt.Errorf("error sweeping MQ Brokers (%s): %w", region, err)
diff --git a/internal/service/mq/tags_gen.go b/internal/service/mq/tags_gen.go
index a4e587d9318..cb6187445f0 100644
--- a/internal/service/mq/tags_gen.go
+++ b/internal/service/mq/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists mq service tags.
+// listTags lists mq service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn mqiface.MQAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn mqiface.MQAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &mq.ListTagsInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@
 // ListTags lists mq service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).MQConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).MQConn(ctx), identifier)

 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }

-// KeyValueTags creates KeyValueTags from mq service tags.
+// KeyValueTags creates tftags.KeyValueTags from mq service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }

-// GetTagsIn returns mq service tags from Context.
+// getTagsIn returns mq service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }

-// SetTagsOut sets mq service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets mq service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates mq service tags.
+// updateTags updates mq service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn mqiface.MQAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn mqiface.MQAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn mqiface.MQAPI, identifier string, oldT
 // UpdateTags updates mq service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).MQConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).MQConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/mwaa/environment.go b/internal/service/mwaa/environment.go
index 5be70ef013f..3cfec177356 100644
--- a/internal/service/mwaa/environment.go
+++ b/internal/service/mwaa/environment.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mwaa

 import (
@@ -9,7 +12,9 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/mwaa"
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	gversion "github.com/hashicorp/go-version"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
@@ -264,12 +269,30 @@ func ResourceEnvironment() *schema.Resource {
 			},
 		},

-		CustomizeDiff: verify.SetTagsDiff,
+		CustomizeDiff: customdiff.Sequence(
+			customdiff.ForceNewIf("airflow_version", func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) bool {
+				o, n := d.GetChange("airflow_version")
+
+				if oldVersion, err := gversion.NewVersion(o.(string)); err == nil {
+					if newVersion, err := gversion.NewVersion(n.(string)); err == nil {
+						// https://docs.aws.amazon.com/mwaa/latest/userguide/airflow-versions.html#airflow-versions-upgrade:
+						// Amazon MWAA supports minor version upgrades.
+						// This means you can upgrade your environment from version x.4.z to x.5.z.
+						// However, you cannot upgrade your environment to a new major version of Apache Airflow.
+						// For example, upgrading from version 1.y.z to 2.y.z is not supported.
+						return oldVersion.Segments()[0] < newVersion.Segments()[0]
+					}
+				}
+
+				return false
+			}),
+			verify.SetTagsDiff,
+		),
 	}
 }

 func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MWAAConn()
+	conn := meta.(*conns.AWSClient).MWAAConn(ctx)

 	name := d.Get("name").(string)
 	input := &mwaa.CreateEnvironmentInput{
@@ -278,7 +301,7 @@ func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta
 		Name:                 aws.String(name),
 		NetworkConfiguration: expandEnvironmentNetworkConfigurationCreate(d.Get("network_configuration").([]interface{})),
 		SourceBucketArn:      aws.String(d.Get("source_bucket_arn").(string)),
-		Tags:                 GetTagsIn(ctx),
+		Tags:                 getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("airflow_configuration_options"); ok {
@@ -346,7 +369,6 @@ func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta
 		input.WeeklyMaintenanceWindowStart = aws.String(v.(string))
 	}

-	log.Printf("[INFO] Creating MWAA Environment: %s", input)
 	/*
 		Execution roles created just before the MWAA Environment may result in ValidationExceptions
 		due to IAM permission propagation delays.
@@ -369,7 +391,7 @@ func resourceEnvironmentCreate(ctx context.Context, d *schema.ResourceData, meta
 }

 func resourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MWAAConn()
+	conn := meta.(*conns.AWSClient).MWAAConn(ctx)

 	environment, err := FindEnvironmentByName(ctx, conn, d.Id())
@@ -417,13 +439,13 @@ func resourceEnvironmentRead(ctx context.Context, d *schema.ResourceData, meta i
 	d.Set("webserver_url", environment.WebserverUrl)
 	d.Set("weekly_maintenance_window_start", environment.WeeklyMaintenanceWindowStart)

-	SetTagsOut(ctx, environment.Tags)
+	setTagsOut(ctx, environment.Tags)

 	return nil
 }

 func resourceEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MWAAConn()
+	conn := meta.(*conns.AWSClient).MWAAConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		input := &mwaa.UpdateEnvironmentInput{
@@ -511,7 +533,6 @@ func resourceEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta
 		input.WeeklyMaintenanceWindowStart = aws.String(d.Get("weekly_maintenance_window_start").(string))
 	}

-	log.Printf("[INFO] Updating MWAA Environment: %s", input)
 	_, err := conn.UpdateEnvironmentWithContext(ctx, input)

 	if err != nil {
@@ -527,7 +548,7 @@ func resourceEnvironmentUpdate(ctx context.Context, d *schema.ResourceData, meta
 }

 func resourceEnvironmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).MWAAConn()
+	conn := meta.(*conns.AWSClient).MWAAConn(ctx)

 	log.Printf("[INFO] Deleting MWAA Environment: %s", d.Id())
 	_, err := conn.DeleteEnvironmentWithContext(ctx, &mwaa.DeleteEnvironmentInput{
@@ -635,7 +656,7 @@ func waitEnvironmentCreated(ctx context.Context, conn *mwaa.MWAA, name string, t

 func waitEnvironmentUpdated(ctx context.Context, conn *mwaa.MWAA, name string, timeout time.Duration) (*mwaa.Environment, error) {
 	stateConf := &retry.StateChangeConf{
-		Pending: []string{mwaa.EnvironmentStatusUpdating},
+		Pending: []string{mwaa.EnvironmentStatusUpdating, mwaa.EnvironmentStatusCreatingSnapshot},
 		Target:  []string{mwaa.EnvironmentStatusAvailable},
 		Refresh: statusEnvironment(ctx, conn, name),
 		Timeout: timeout,
diff --git a/internal/service/mwaa/environment_test.go b/internal/service/mwaa/environment_test.go
index 9e06025846b..bfc5980a430 100644
--- a/internal/service/mwaa/environment_test.go
+++ b/internal/service/mwaa/environment_test.go
@@ -1,10 +1,15 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package mwaa_test

 import (
 	"context"
+	"errors"
 	"fmt"
 	"testing"

+	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/mwaa"
 	sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
@@ -150,7 +155,7 @@ func TestAccMWAAEnvironment_airflowOptions(t *testing.T) {

 func TestAccMWAAEnvironment_log(t *testing.T) {
 	ctx := acctest.Context(t)
-	var environment mwaa.Environment
+	var environment1, environment2 mwaa.Environment
 	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_mwaa_environment.test"
@@ -163,7 +168,7 @@ func TestAccMWAAEnvironment_log(t *testing.T) {
 			{
 				Config: testAccEnvironmentConfig_logging(rName, "true", mwaa.LoggingLevelCritical),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckEnvironmentExists(ctx, resourceName, &environment),
+					testAccCheckEnvironmentExists(ctx, resourceName, &environment1),
 					resource.TestCheckResourceAttr(resourceName, "logging_configuration.#", "1"),

 					resource.TestCheckResourceAttr(resourceName, "logging_configuration.0.dag_processing_logs.#", "1"),
@@ -200,7 +205,8 @@ func TestAccMWAAEnvironment_log(t *testing.T) {
 			{
 				Config: testAccEnvironmentConfig_logging(rName, "false", mwaa.LoggingLevelInfo),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckEnvironmentExists(ctx, resourceName, &environment),
+					testAccCheckEnvironmentExists(ctx, resourceName, &environment2),
+					testAccCheckEnvironmentNotRecreated(&environment2, &environment1),
 					resource.TestCheckResourceAttr(resourceName, "logging_configuration.#", "1"),

 					resource.TestCheckResourceAttr(resourceName, "logging_configuration.0.dag_processing_logs.#", "1"),
@@ -311,7 +317,7 @@ func TestAccMWAAEnvironment_full(t *testing.T) {

 func TestAccMWAAEnvironment_pluginsS3ObjectVersion(t *testing.T) {
 	ctx := acctest.Context(t)
-	var environment mwaa.Environment
+	var environment1, environment2 mwaa.Environment
 	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_mwaa_environment.test"
 	s3ObjectResourceName := "aws_s3_object.plugins"
@@ -325,7 +331,7 @@ func TestAccMWAAEnvironment_pluginsS3ObjectVersion(t *testing.T) {
 			{
 				Config: testAccEnvironmentConfig_pluginsS3ObjectVersion(rName, "test"),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckEnvironmentExists(ctx, resourceName, &environment),
+					testAccCheckEnvironmentExists(ctx, resourceName, &environment1),
 					resource.TestCheckResourceAttrPair(resourceName, "plugins_s3_object_version", s3ObjectResourceName, "version_id"),
 				),
 			},
@@ -337,7 +343,8 @@ func TestAccMWAAEnvironment_pluginsS3ObjectVersion(t *testing.T) {
 			{
 				Config: testAccEnvironmentConfig_pluginsS3ObjectVersion(rName, "test-updated"),
 				Check: resource.ComposeTestCheckFunc(
-					testAccCheckEnvironmentExists(ctx, resourceName, &environment),
+					testAccCheckEnvironmentExists(ctx, resourceName, &environment2),
+					testAccCheckEnvironmentNotRecreated(&environment2, &environment1),
 					resource.TestCheckResourceAttrPair(resourceName, "plugins_s3_object_version", s3ObjectResourceName, "version_id"),
 				),
 			},
@@ -350,6 +357,42 @@ func TestAccMWAAEnvironment_pluginsS3ObjectVersion(t *testing.T) {
 	})
 }

+func TestAccMWAAEnvironment_updateAirflowVersionMinor(t *testing.T) {
+	ctx := acctest.Context(t)
+	var environment1, environment2 mwaa.Environment
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	resourceName := "aws_mwaa_environment.test"
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.PreCheck(ctx, t) },
+		ErrorCheck:               acctest.ErrorCheck(t, mwaa.EndpointsID),
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
+		CheckDestroy:             testAccCheckEnvironmentDestroy(ctx),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccEnvironmentConfig_airflowVersion(rName, "2.4.3"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckEnvironmentExists(ctx, resourceName, &environment1),
+					resource.TestCheckResourceAttr(resourceName, "airflow_version", "2.4.3"),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccEnvironmentConfig_airflowVersion(rName, "2.5.1"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckEnvironmentExists(ctx, resourceName, &environment2),
+					testAccCheckEnvironmentNotRecreated(&environment2, &environment1),
+					resource.TestCheckResourceAttr(resourceName, "airflow_version", "2.5.1"),
+				),
+			},
+		},
+	})
+}
+
 func testAccCheckEnvironmentExists(ctx context.Context, n string, v *mwaa.Environment) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		rs, ok := s.RootModule().Resources[n]
@@ -361,7 +404,7 @@ func testAccCheckEnvironmentExists(ctx context.Context, n string, v *mwaa.Enviro
 			return fmt.Errorf("No MWAA Environment ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).MWAAConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MWAAConn(ctx)

 		output, err := tfmwaa.FindEnvironmentByName(ctx, conn, rs.Primary.ID)
@@ -377,7 +420,7 @@ func testAccCheckEnvironmentExists(ctx context.Context, n string, v *mwaa.Enviro

 func testAccCheckEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).MWAAConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).MWAAConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_mwaa_environment" {
@@ -401,6 +444,16 @@ func testAccCheckEnvironmentDestroy(ctx context.Context) resource.TestCheckFunc
 	}
 }

+func testAccCheckEnvironmentNotRecreated(i, j *mwaa.Environment) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if !i.CreatedAt.Equal(aws.TimeValue(j.CreatedAt)) {
+			return errors.New("MWAA Environment was recreated")
+		}
+
+		return nil
+	}
+}
+
 func testAccEnvironmentConfig_base(rName string) string {
 	return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(`
 data "aws_partition" "current" {}
@@ -456,7 +509,7 @@ resource "aws_subnet" "private" {
 resource "aws_eip" "private" {
   count = 2

-  vpc = true
+  domain = "vpc"

   tags = {
     Name = %[1]q
@@ -833,3 +886,22 @@ resource "aws_s3_object" "plugins" {
 }
 `, rName, content))
 }
+
+func testAccEnvironmentConfig_airflowVersion(rName, airflowVersion string) string {
+	return acctest.ConfigCompose(testAccEnvironmentConfig_base(rName), fmt.Sprintf(`
+resource "aws_mwaa_environment" "test" {
+  dag_s3_path        = aws_s3_object.dags.key
+  execution_role_arn = aws_iam_role.test.arn
+  name               = %[1]q
+
+  network_configuration {
+    security_group_ids = [aws_security_group.test.id]
+    subnet_ids         = aws_subnet.private[*].id
+  }
+
+  source_bucket_arn = aws_s3_bucket.test.arn
+
+  airflow_version = %[2]q
+}
+`, rName, airflowVersion))
+}
diff --git a/internal/service/mwaa/generate.go b/internal/service/mwaa/generate.go
index a1dc2004aab..b0f4ab0d274 100644
--- a/internal/service/mwaa/generate.go
+++ b/internal/service/mwaa/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTagsOp=ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.

 package mwaa
diff --git a/internal/service/mwaa/service_package_gen.go b/internal/service/mwaa/service_package_gen.go
index 053b6bfded9..7c91da7df51 100644
--- a/internal/service/mwaa/service_package_gen.go
+++ b/internal/service/mwaa/service_package_gen.go
@@ -5,6 +5,10 @@ package mwaa
 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	mwaa_sdkv1 "github.com/aws/aws-sdk-go/service/mwaa"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -40,4 +44,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.MWAA
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*mwaa_sdkv1.MWAA, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return mwaa_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/mwaa/sweep.go b/internal/service/mwaa/sweep.go
index 61157810359..c0c46966d99 100644
--- a/internal/service/mwaa/sweep.go
+++ b/internal/service/mwaa/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -12,7 +15,6 @@ import (
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )

@@ -25,11 +27,11 @@ func init() {

 func sweepEnvironment(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).MWAAConn()
+	conn := client.MWAAConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 	var sweeperErrs *multierror.Error
@@ -51,7 +53,7 @@ func sweepEnvironment(region string) error {
 		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
 	}

-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping MWAA Environment: %w", err))
 	}
diff --git a/internal/service/mwaa/tags_gen.go b/internal/service/mwaa/tags_gen.go
index 3409041c223..cd608a00828 100644
--- a/internal/service/mwaa/tags_gen.go
+++ b/internal/service/mwaa/tags_gen.go
@@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }

-// KeyValueTags creates KeyValueTags from mwaa service tags.
+// KeyValueTags creates tftags.KeyValueTags from mwaa service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }

-// GetTagsIn returns mwaa service tags from Context.
+// getTagsIn returns mwaa service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }

-// SetTagsOut sets mwaa service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets mwaa service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates mwaa service tags.
+// updateTags updates mwaa service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn mwaaiface.MWAAAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn mwaaiface.MWAAAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn mwaaiface.MWAAAPI, identifier string,
 // UpdateTags updates mwaa service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).MWAAConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).MWAAConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/neptune/cluster.go b/internal/service/neptune/cluster.go
index 08280df998d..08401eae2c2 100644
--- a/internal/service/neptune/cluster.go
+++ b/internal/service/neptune/cluster.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune

 import (
@@ -301,7 +304,7 @@ func ResourceCluster() *schema.Resource {

 func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	// Check if any of the parameters that require a cluster modification after creation are set.
 	// See https://docs.aws.amazon.com/neptune/latest/userguide/backup-restore-restore-snapshot.html#backup-restore-restore-snapshot-considerations.
@@ -327,7 +330,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int
 		Port:                             aws.Int64(int64(d.Get("port").(int))),
 		StorageEncrypted:                 aws.Bool(d.Get("storage_encrypted").(bool)),
 		DeletionProtection:               aws.Bool(d.Get("deletion_protection").(bool)),
-		Tags:                             GetTagsIn(ctx),
+		Tags:                             getTagsIn(ctx),
 		ServerlessV2ScalingConfiguration: serverlessConfiguration,
 	}
 	inputR := &neptune.RestoreDBClusterFromSnapshotInput{
@@ -337,7 +340,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int
 		Port:                             aws.Int64(int64(d.Get("port").(int))),
 		SnapshotIdentifier:               aws.String(d.Get("snapshot_identifier").(string)),
 		DeletionProtection:               aws.Bool(d.Get("deletion_protection").(bool)),
-		Tags:                             GetTagsIn(ctx),
+		Tags:                             getTagsIn(ctx),
 		ServerlessV2ScalingConfiguration: serverlessConfiguration,
 	}
 	inputM := &neptune.ModifyDBClusterInput{
@@ -486,7 +489,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int

 func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	dbc, err := FindClusterByID(ctx, conn, d.Id())
@@ -557,7 +560,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter

 func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all", "iam_roles", "global_cluster_identifier") {
 		allowMajorVersionUpgrade := d.Get("allow_major_version_upgrade").(bool)
@@ -719,7 +722,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int

 func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	skipFinalSnapshot := d.Get("skip_final_snapshot").(bool)
 	input := neptune.DeleteDBClusterInput{
diff --git a/internal/service/neptune/cluster_endpoint.go b/internal/service/neptune/cluster_endpoint.go
index faf742d5a73..d4780b44fc9 100644
--- a/internal/service/neptune/cluster_endpoint.go
+++ b/internal/service/neptune/cluster_endpoint.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune

 import (
@@ -80,13 +83,13 @@ func ResourceClusterEndpoint() *schema.Resource {

 func resourceClusterEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	input := &neptune.CreateDBClusterEndpointInput{
 		DBClusterEndpointIdentifier: aws.String(d.Get("cluster_endpoint_identifier").(string)),
 		DBClusterIdentifier:         aws.String(d.Get("cluster_identifier").(string)),
 		EndpointType:                aws.String(d.Get("endpoint_type").(string)),
-		Tags:                        GetTagsIn(ctx),
+		Tags:                        getTagsIn(ctx),
 	}

 	if attr := d.Get("static_members").(*schema.Set); attr.Len() > 0 {
@@ -121,7 +124,7 @@ func resourceClusterEndpointCreate(ctx context.Context, d *schema.ResourceData,

 func resourceClusterEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	resp, err := FindEndpointByID(ctx, conn, d.Id())
@@ -150,7 +153,7 @@ func resourceClusterEndpointRead(ctx context.Context, d *schema.ResourceData, me

 func resourceClusterEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		req := &neptune.ModifyDBClusterEndpointInput{
@@ -185,7 +188,7 @@ func resourceClusterEndpointUpdate(ctx context.Context, d *schema.ResourceData,

 func resourceClusterEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)

 	endpointId := d.Get("cluster_endpoint_identifier").(string)
 	input :=
&neptune.DeleteDBClusterEndpointInput{ diff --git a/internal/service/neptune/cluster_endpoint_test.go b/internal/service/neptune/cluster_endpoint_test.go index 36b7cff8c25..5a7b276696e 100644 --- a/internal/service/neptune/cluster_endpoint_test.go +++ b/internal/service/neptune/cluster_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -158,7 +161,7 @@ func testAccCheckClusterEndpointDestroy(ctx context.Context) resource.TestCheckF func testAccCheckClusterEndpointDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_cluster_endpoint" { @@ -196,7 +199,7 @@ func testAccCheckClusterEndpointExistsWithProvider(ctx context.Context, n string } provider := providerF() - conn := provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) resp, err := tfneptune.FindEndpointByID(ctx, conn, rs.Primary.ID) if err != nil { return fmt.Errorf("Neptune Cluster Endpoint (%s) not found: %w", rs.Primary.ID, err) diff --git a/internal/service/neptune/cluster_instance.go b/internal/service/neptune/cluster_instance.go index 30e1f246c75..ce3e4c41798 100644 --- a/internal/service/neptune/cluster_instance.go +++ b/internal/service/neptune/cluster_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -180,7 +183,7 @@ func ResourceClusterInstance() *schema.Resource { func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) var instanceID string if v, ok := d.GetOk("identifier"); ok { @@ -199,7 +202,7 @@ func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, Engine: aws.String(d.Get("engine").(string)), PromotionTier: aws.Int64(int64(d.Get("promotion_tier").(int))), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("availability_zone"); ok { @@ -245,7 +248,7 @@ func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) db, err := FindClusterInstanceByID(ctx, conn, d.Id()) @@ -304,7 +307,7 @@ func resourceClusterInstanceRead(ctx context.Context, d *schema.ResourceData, me func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &neptune.ModifyDBInstanceInput{ @@ -354,7 +357,7 @@ func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, func resourceClusterInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) log.Printf("[DEBUG] 
Deleting Neptune Cluster Instance: %s", d.Id()) _, err := conn.DeleteDBInstanceWithContext(ctx, &neptune.DeleteDBInstanceInput{ diff --git a/internal/service/neptune/cluster_instance_test.go b/internal/service/neptune/cluster_instance_test.go index e5133c1d052..54bf98b0f99 100644 --- a/internal/service/neptune/cluster_instance_test.go +++ b/internal/service/neptune/cluster_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -298,7 +301,7 @@ func testAccCheckClusterInstanceExists(ctx context.Context, n string, v *neptune return fmt.Errorf("No Neptune Cluster Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) output, err := tfneptune.FindClusterInstanceByID(ctx, conn, rs.Primary.ID) @@ -314,7 +317,7 @@ func testAccCheckClusterInstanceExists(ctx context.Context, n string, v *neptune func testAccCheckClusterInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_cluster_instance" { diff --git a/internal/service/neptune/cluster_parameter_group.go b/internal/service/neptune/cluster_parameter_group.go index 92e8cbe29c1..a482521b575 100644 --- a/internal/service/neptune/cluster_parameter_group.go +++ b/internal/service/neptune/cluster_parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -99,7 +102,7 @@ func ResourceClusterParameterGroup() *schema.Resource { func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) var groupName string if v, ok := d.GetOk("name"); ok { @@ -114,7 +117,7 @@ func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.Resource DBClusterParameterGroupName: aws.String(groupName), DBParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.CreateDBClusterParameterGroupWithContext(ctx, &createOpts) @@ -136,7 +139,7 @@ func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.Resource func resourceClusterParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) describeOpts := neptune.DescribeDBClusterParameterGroupsInput{ DBClusterParameterGroupName: aws.String(d.Id()), @@ -189,7 +192,7 @@ func resourceClusterParameterGroupRead(ctx context.Context, d *schema.ResourceDa func resourceClusterParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -218,7 +221,7 @@ func resourceClusterParameterGroupUpdate(ctx context.Context, d *schema.Resource func resourceClusterParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) input := neptune.DeleteDBClusterParameterGroupInput{ DBClusterParameterGroupName: aws.String(d.Id()), diff --git a/internal/service/neptune/cluster_parameter_group_test.go b/internal/service/neptune/cluster_parameter_group_test.go index 8d3515101b9..6416f1c3e03 100644 --- a/internal/service/neptune/cluster_parameter_group_test.go +++ b/internal/service/neptune/cluster_parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -289,7 +292,7 @@ func TestAccNeptuneClusterParameterGroup_tags(t *testing.T) { func testAccCheckClusterParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_cluster_parameter_group" { @@ -344,7 +347,7 @@ func testAccCheckClusterParameterGroupExists(ctx context.Context, n string, v *n return errors.New("No Neptune Cluster Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) opts := neptune.DescribeDBClusterParameterGroupsInput{ DBClusterParameterGroupName: aws.String(rs.Primary.ID), diff --git a/internal/service/neptune/cluster_snapshot.go b/internal/service/neptune/cluster_snapshot.go index d8f0ea5e48c..fc3539b98ad 100644 --- a/internal/service/neptune/cluster_snapshot.go +++ b/internal/service/neptune/cluster_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -101,7 +104,7 @@ func ResourceClusterSnapshot() *schema.Resource { func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) clusterSnapshotID := d.Get("db_cluster_snapshot_identifier").(string) input := &neptune.CreateDBClusterSnapshotInput{ @@ -126,7 +129,7 @@ func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) snapshot, err := FindClusterSnapshotByID(ctx, conn, d.Id()) @@ -161,7 +164,7 @@ func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, me func resourceClusterSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) log.Printf("[DEBUG] Deleting Neptune Cluster Snapshot: %s", d.Id()) _, err := conn.DeleteDBClusterSnapshotWithContext(ctx, &neptune.DeleteDBClusterSnapshotInput{ diff --git a/internal/service/neptune/cluster_snapshot_test.go b/internal/service/neptune/cluster_snapshot_test.go index ef905b05b3e..f6a9cd3d8f4 100644 --- a/internal/service/neptune/cluster_snapshot_test.go +++ b/internal/service/neptune/cluster_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -82,7 +85,7 @@ func TestAccNeptuneClusterSnapshot_disappears(t *testing.T) { func testAccCheckClusterSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_cluster_snapshot" { @@ -117,7 +120,7 @@ func testAccCheckClusterSnapshotExists(ctx context.Context, n string, v *neptune return fmt.Errorf("No Neptune Cluster Snapshot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) output, err := tfneptune.FindClusterSnapshotByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/neptune/cluster_test.go b/internal/service/neptune/cluster_test.go index b41935f3fc1..5bccd3c1af6 100644 --- a/internal/service/neptune/cluster_test.go +++ b/internal/service/neptune/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -603,7 +606,7 @@ func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckClusterDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_cluster" { @@ -642,7 +645,7 @@ func testAccCheckClusterExistsWithProvider(ctx context.Context, n string, v *nep return fmt.Errorf("No Neptune Cluster ID is set") } - conn := providerF().Meta().(*conns.AWSClient).NeptuneConn() + conn := providerF().Meta().(*conns.AWSClient).NeptuneConn(ctx) output, err := tfneptune.FindClusterByID(ctx, conn, rs.Primary.ID) @@ -663,7 +666,7 @@ func testAccCheckClusterDestroyWithFinalSnapshot(ctx context.Context) resource.T continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) finalSnapshotID := rs.Primary.Attributes["final_snapshot_identifier"] _, err := tfneptune.FindClusterSnapshotByID(ctx, conn, finalSnapshotID) diff --git a/internal/service/neptune/consts.go b/internal/service/neptune/consts.go index 5ad1d45d67a..95941b8b5a5 100644 --- a/internal/service/neptune/consts.go +++ b/internal/service/neptune/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune import ( diff --git a/internal/service/neptune/engine_version_data_source.go b/internal/service/neptune/engine_version_data_source.go index a85bb30463d..b01b3d57052 100644 --- a/internal/service/neptune/engine_version_data_source.go +++ b/internal/service/neptune/engine_version_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -88,7 +91,7 @@ func DataSourceEngineVersion() *schema.Resource { func dataSourceEngineVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) input := &neptune.DescribeDBEngineVersionsInput{} diff --git a/internal/service/neptune/engine_version_data_source_test.go b/internal/service/neptune/engine_version_data_source_test.go index fc2a1f52b39..60c71b85215 100644 --- a/internal/service/neptune/engine_version_data_source_test.go +++ b/internal/service/neptune/engine_version_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -86,7 +89,7 @@ func TestAccNeptuneEngineVersionDataSource_defaultOnly(t *testing.T) { } func testAccEngineVersionPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) input := &neptune.DescribeDBEngineVersionsInput{ Engine: aws.String("neptune"), diff --git a/internal/service/neptune/event_subscription.go b/internal/service/neptune/event_subscription.go index 9d9433696bb..beae7d729fa 100644 --- a/internal/service/neptune/event_subscription.go +++ b/internal/service/neptune/event_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -98,7 +101,7 @@ func ResourceEventSubscription() *schema.Resource { func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) if v, ok := d.GetOk("name"); ok { d.Set("name", v.(string)) @@ -112,7 +115,7 @@ func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData SubscriptionName: aws.String(d.Get("name").(string)), SnsTopicArn: aws.String(d.Get("sns_topic_arn").(string)), Enabled: aws.Bool(d.Get("enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("source_ids"); ok { @@ -168,7 +171,7 @@ func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) sub, err := resourceEventSubscriptionRetrieve(ctx, d.Id(), conn) if err != nil { @@ -205,7 +208,7 @@ func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) requestUpdate := false @@ -310,7 +313,7 @@ func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) log.Printf("[DEBUG] Deleting Neptune Event Subscription: %s", d.Id()) _, 
err := conn.DeleteEventSubscriptionWithContext(ctx, &neptune.DeleteEventSubscriptionInput{ diff --git a/internal/service/neptune/event_subscription_test.go b/internal/service/neptune/event_subscription_test.go index 73b20f06905..bae97f7f181 100644 --- a/internal/service/neptune/event_subscription_test.go +++ b/internal/service/neptune/event_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -180,7 +183,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, v *neptu return fmt.Errorf("No Neptune Event Subscription is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) opts := neptune.DescribeEventSubscriptionsInput{ SubscriptionName: aws.String(rs.Primary.ID), @@ -204,7 +207,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, v *neptu func testAccCheckEventSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_event_subscription" { diff --git a/internal/service/neptune/find.go b/internal/service/neptune/find.go index 53bcfb87de3..bd8f7d69351 100644 --- a/internal/service/neptune/find.go +++ b/internal/service/neptune/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune import ( diff --git a/internal/service/neptune/flex.go b/internal/service/neptune/flex.go index 2d77fde8dbb..34ea166a95e 100644 --- a/internal/service/neptune/flex.go +++ b/internal/service/neptune/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( diff --git a/internal/service/neptune/generate.go b/internal/service/neptune/generate.go index 2cc579c63f9..90f05698561 100644 --- a/internal/service/neptune/generate.go +++ b/internal/service/neptune/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceName -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceName -UntagOp=RemoveTagsFromResource -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package neptune diff --git a/internal/service/neptune/global_cluster.go b/internal/service/neptune/global_cluster.go index de945133afb..d0294618fee 100644 --- a/internal/service/neptune/global_cluster.go +++ b/internal/service/neptune/global_cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -106,7 +109,7 @@ func ResourceGlobalCluster() *schema.Resource { } func resourceGlobalClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) globalClusterID := d.Get("global_cluster_identifier").(string) input := &neptune.CreateGlobalClusterInput{ @@ -149,7 +152,7 @@ func resourceGlobalClusterCreate(ctx context.Context, d *schema.ResourceData, me } func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) globalCluster, err := FindGlobalClusterByID(ctx, conn, d.Id()) @@ -178,7 +181,7 @@ func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta } func resourceGlobalClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) if d.HasChange("deletion_protection") { input := &neptune.ModifyGlobalClusterInput{ @@ -244,7 +247,7 @@ func resourceGlobalClusterUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceGlobalClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) // Remove any members from the global cluster. for _, tfMapRaw := range d.Get("global_cluster_members").(*schema.Set).List() { diff --git a/internal/service/neptune/global_cluster_test.go b/internal/service/neptune/global_cluster_test.go index 6593a18126d..4988a2640e8 100644 --- a/internal/service/neptune/global_cluster_test.go +++ b/internal/service/neptune/global_cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -319,7 +322,7 @@ func testAccCheckGlobalClusterExists(ctx context.Context, n string, v *neptune.G return fmt.Errorf("No Neptune Global Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) output, err := tfneptune.FindGlobalClusterByID(ctx, conn, rs.Primary.ID) @@ -335,7 +338,7 @@ func testAccCheckGlobalClusterExists(ctx context.Context, n string, v *neptune.G func testAccCheckGlobalClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_neptune_global_cluster" { @@ -380,7 +383,7 @@ func testAccCheckGlobalClusterRecreated(i, j *neptune.GlobalCluster) resource.Te } func testAccPreCheckGlobalCluster(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) input := &neptune.DescribeGlobalClustersInput{} diff --git a/internal/service/neptune/id.go b/internal/service/neptune/id.go index a0609eaa16f..1c0a8d41b2e 100644 --- a/internal/service/neptune/id.go +++ b/internal/service/neptune/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune import ( diff --git a/internal/service/neptune/orderable_db_instance_data_source.go b/internal/service/neptune/orderable_db_instance_data_source.go index bb11a144f8d..59f9d61c7ef 100644 --- a/internal/service/neptune/orderable_db_instance_data_source.go +++ b/internal/service/neptune/orderable_db_instance_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package neptune import ( @@ -136,7 +139,7 @@ func DataSourceOrderableDBInstance() *schema.Resource { func dataSourceOrderableDBInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).NeptuneConn() + conn := meta.(*conns.AWSClient).NeptuneConn(ctx) input := &neptune.DescribeOrderableDBInstanceOptionsInput{} diff --git a/internal/service/neptune/orderable_db_instance_data_source_test.go b/internal/service/neptune/orderable_db_instance_data_source_test.go index 9db320b9b11..b25689bbc06 100644 --- a/internal/service/neptune/orderable_db_instance_data_source_test.go +++ b/internal/service/neptune/orderable_db_instance_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package neptune_test import ( @@ -67,7 +70,7 @@ func TestAccNeptuneOrderableDBInstanceDataSource_preferred(t *testing.T) { } func testAccPreCheckOrderableDBInstance(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx) input := &neptune.DescribeOrderableDBInstanceOptionsInput{ Engine: aws.String("mysql"), diff --git a/internal/service/neptune/parameter_group.go b/internal/service/neptune/parameter_group.go index 8ea1ed86110..634e4f17cc0 100644 --- a/internal/service/neptune/parameter_group.go +++ b/internal/service/neptune/parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune
 
 import (
@@ -95,13 +98,13 @@ func ResourceParameterGroup() *schema.Resource {
 func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	createOpts := neptune.CreateDBParameterGroupInput{
 		DBParameterGroupName:   aws.String(d.Get("name").(string)),
 		DBParameterGroupFamily: aws.String(d.Get("family").(string)),
 		Description:            aws.String(d.Get("description").(string)),
-		Tags:                   GetTagsIn(ctx),
+		Tags:                   getTagsIn(ctx),
 	}
 
 	log.Printf("[DEBUG] Create Neptune Parameter Group: %#v", createOpts)
@@ -119,7 +122,7 @@ func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, m
 func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	describeOpts := neptune.DescribeDBParameterGroupsInput{
 		DBParameterGroupName: aws.String(d.Id()),
@@ -175,7 +178,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met
 func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	if d.HasChange("parameter") {
 		o, n := d.GetChange("parameter")
@@ -253,7 +256,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m
 func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	deleteOpts := neptune.DeleteDBParameterGroupInput{
 		DBParameterGroupName: aws.String(d.Id()),
diff --git a/internal/service/neptune/parameter_group_test.go b/internal/service/neptune/parameter_group_test.go
index 6c1ef0edcfd..c771b47354e 100644
--- a/internal/service/neptune/parameter_group_test.go
+++ b/internal/service/neptune/parameter_group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune_test
 
 import (
@@ -176,7 +179,7 @@ func TestAccNeptuneParameterGroup_tags(t *testing.T) {
 func testAccCheckParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_neptune_parameter_group" {
@@ -229,7 +232,7 @@ func testAccCheckParameterGroupExists(ctx context.Context, n string, v *neptune.
 			return fmt.Errorf("No Neptune Parameter Group ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx)
 
 		opts := neptune.DescribeDBParameterGroupsInput{
 			DBParameterGroupName: aws.String(rs.Primary.ID),
diff --git a/internal/service/neptune/service_package_gen.go b/internal/service/neptune/service_package_gen.go
index bbc041d004e..76972ef6539 100644
--- a/internal/service/neptune/service_package_gen.go
+++ b/internal/service/neptune/service_package_gen.go
@@ -5,6 +5,10 @@ package neptune
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	neptune_sdkv1 "github.com/aws/aws-sdk-go/service/neptune"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -105,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.Neptune
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*neptune_sdkv1.Neptune, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return neptune_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/neptune/status.go b/internal/service/neptune/status.go
index 44fc695fadb..3b28c15bd2f 100644
--- a/internal/service/neptune/status.go
+++ b/internal/service/neptune/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune
 
 import (
diff --git a/internal/service/neptune/subnet_group.go b/internal/service/neptune/subnet_group.go
index 47f7d3f21e1..be08ae2bf47 100644
--- a/internal/service/neptune/subnet_group.go
+++ b/internal/service/neptune/subnet_group.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune
 
 import (
@@ -75,14 +78,14 @@ func ResourceSubnetGroup() *schema.Resource {
 func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string))
 	input := &neptune.CreateDBSubnetGroupInput{
 		DBSubnetGroupName:        aws.String(name),
 		DBSubnetGroupDescription: aws.String(d.Get("description").(string)),
 		SubnetIds:                flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)),
-		Tags:                     GetTagsIn(ctx),
+		Tags:                     getTagsIn(ctx),
 	}
 
 	output, err := conn.CreateDBSubnetGroupWithContext(ctx, input)
@@ -98,7 +101,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	subnetGroup, err := FindSubnetGroupByName(ctx, conn, d.Id())
@@ -128,7 +131,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i
 func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	if d.HasChanges("description", "subnet_ids") {
 		input := &neptune.ModifyDBSubnetGroupInput{
@@ -149,7 +152,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).NeptuneConn()
+	conn := meta.(*conns.AWSClient).NeptuneConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Neptune Subnet Group: %s", d.Id())
 	_, err := conn.DeleteDBSubnetGroupWithContext(ctx, &neptune.DeleteDBSubnetGroupInput{
diff --git a/internal/service/neptune/subnet_group_test.go b/internal/service/neptune/subnet_group_test.go
index 1a66920d435..3ce129c5cae 100644
--- a/internal/service/neptune/subnet_group_test.go
+++ b/internal/service/neptune/subnet_group_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune_test
 
 import (
@@ -212,7 +215,7 @@ func TestAccNeptuneSubnetGroup_update(t *testing.T) {
 func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_neptune_subnet_group" {
@@ -247,7 +250,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, n string, v *neptune.DBS
 			return fmt.Errorf("No Neptune Subnet Group ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NeptuneConn(ctx)
 
 		output, err := tfneptune.FindSubnetGroupByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/neptune/sweep.go b/internal/service/neptune/sweep.go
index 8470019afb4..8ecf8be0afe 100644
--- a/internal/service/neptune/sweep.go
+++ b/internal/service/neptune/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -12,7 +15,6 @@ import (
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 )
@@ -42,11 +44,11 @@ func init() {
 func sweepEventSubscriptions(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).NeptuneConn()
+	conn := client.NeptuneConn(ctx)
 	var sweeperErrs *multierror.Error
 
 	err = conn.DescribeEventSubscriptionsPagesWithContext(ctx, &neptune.DescribeEventSubscriptionsInput{}, func(page *neptune.DescribeEventSubscriptionsOutput, lastPage bool) bool {
@@ -98,11 +100,11 @@ func sweepEventSubscriptions(region string) error {
 func sweepClusters(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).NeptuneConn()
+	conn := client.NeptuneConn(ctx)
 	var sweeperErrs *multierror.Error
 	sweepResources := make([]sweep.Sweepable, 0)
@@ -150,7 +152,7 @@ func sweepClusters(region string) error {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("listing Neptune Clusters (%s): %w", region, err))
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 	if err != nil {
 		sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping Neptune Clusters (%s): %w", region, err))
 	}
@@ -160,11 +162,11 @@ func sweepClusters(region string) error {
 func sweepClusterInstances(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).NeptuneConn()
+	conn := client.NeptuneConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	input := &neptune.DescribeDBInstancesInput{}
@@ -195,7 +197,7 @@ func sweepClusterInstances(region string) error {
 		return fmt.Errorf("listing Neptune Cluster Instances (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 	if err != nil {
 		return fmt.Errorf("sweeping Neptune Cluster Instances (%s): %w", region, err)
diff --git a/internal/service/neptune/tags_gen.go b/internal/service/neptune/tags_gen.go
index 47e6bfaf9fc..5f106d9c6cc 100644
--- a/internal/service/neptune/tags_gen.go
+++ b/internal/service/neptune/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists neptune service tags.
+// listTags lists neptune service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn neptuneiface.NeptuneAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn neptuneiface.NeptuneAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &neptune.ListTagsForResourceInput{
 		ResourceName: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn neptuneiface.NeptuneAPI, identifier stri
 // ListTags lists neptune service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).NeptuneConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).NeptuneConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*neptune.Tag) tftags.KeyValueTags
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns neptune service tags from Context.
+// getTagsIn returns neptune service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*neptune.Tag {
+func getTagsIn(ctx context.Context) []*neptune.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*neptune.Tag {
 	return nil
 }
 
-// SetTagsOut sets neptune service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*neptune.Tag) {
+// setTagsOut sets neptune service tags in Context.
+func setTagsOut(ctx context.Context, tags []*neptune.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates neptune service tags.
+// updateTags updates neptune service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn neptuneiface.NeptuneAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn neptuneiface.NeptuneAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn neptuneiface.NeptuneAPI, identifier st
 // UpdateTags updates neptune service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).NeptuneConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).NeptuneConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/neptune/validate.go b/internal/service/neptune/validate.go
index bac4f74d0ef..e0b26fd9377 100644
--- a/internal/service/neptune/validate.go
+++ b/internal/service/neptune/validate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune
 
 import (
diff --git a/internal/service/neptune/validate_test.go b/internal/service/neptune/validate_test.go
index 04af47f41ca..60320275ee3 100644
--- a/internal/service/neptune/validate_test.go
+++ b/internal/service/neptune/validate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune
 
 import (
diff --git a/internal/service/neptune/wait.go b/internal/service/neptune/wait.go
index c384b9614da..fd84dc6bf24 100644
--- a/internal/service/neptune/wait.go
+++ b/internal/service/neptune/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package neptune
 
 import (
diff --git a/internal/service/networkfirewall/find.go b/internal/service/networkfirewall/find.go
index b6088684ceb..16815140437 100644
--- a/internal/service/networkfirewall/find.go
+++ b/internal/service/networkfirewall/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
diff --git a/internal/service/networkfirewall/firewall.go b/internal/service/networkfirewall/firewall.go
index 4a6d2bf654a..8020b008e97 100644
--- a/internal/service/networkfirewall/firewall.go
+++ b/internal/service/networkfirewall/firewall.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
@@ -146,14 +149,14 @@ func ResourceFirewall() *schema.Resource {
 }
 
 func resourceFirewallCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &networkfirewall.CreateFirewallInput{
 		FirewallName:      aws.String(name),
 		FirewallPolicyArn: aws.String(d.Get("firewall_policy_arn").(string)),
 		SubnetMappings:    expandSubnetMappings(d.Get("subnet_mapping").(*schema.Set).List()),
-		Tags:              GetTagsIn(ctx),
+		Tags:              getTagsIn(ctx),
 		VpcId:             aws.String(d.Get("vpc_id").(string)),
 	}
@@ -193,7 +196,7 @@ func resourceFirewallCreate(ctx context.Context, d *schema.ResourceData, meta in
 }
 
 func resourceFirewallRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	output, err := FindFirewallByARN(ctx, conn, d.Id())
@@ -227,13 +230,13 @@ func resourceFirewallRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("update_token", output.UpdateToken)
 	d.Set("vpc_id", firewall.VpcId)
 
-	SetTagsOut(ctx, firewall.Tags)
+	setTagsOut(ctx, firewall.Tags)
 
 	return nil
 }
 
 func resourceFirewallUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	updateToken := d.Get("update_token").(string)
 	if d.HasChange("delete_protection") {
@@ -384,7 +387,7 @@ func resourceFirewallUpdate(ctx context.Context, d *schema.ResourceData, meta in
 }
 
 func resourceFirewallDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	log.Printf("[DEBUG] Deleting NetworkFirewall Firewall: %s", d.Id())
 	_, err := conn.DeleteFirewallWithContext(ctx, &networkfirewall.DeleteFirewallInput{
diff --git a/internal/service/networkfirewall/firewall_data_source.go b/internal/service/networkfirewall/firewall_data_source.go
index 29c3b73e0fd..a6991d56c93 100644
--- a/internal/service/networkfirewall/firewall_data_source.go
+++ b/internal/service/networkfirewall/firewall_data_source.go
@@ -1,8 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
 	"context"
-	"fmt"
 	"log"
 	"regexp"
@@ -182,7 +184,7 @@ func DataSourceFirewall() *schema.Resource {
 }
 
 func dataSourceFirewallResourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	input := &networkfirewall.DescribeFirewallInput{}
@@ -196,7 +198,7 @@ func dataSourceFirewallResourceRead(ctx context.Context, d *schema.ResourceData,
 	}
 
 	if input.FirewallArn == nil && input.FirewallName == nil {
-		return diag.FromErr(fmt.Errorf("must specify either arn, name, or both"))
+		return diag.Errorf("must specify either arn, name, or both")
 	}
 
 	output, err := conn.DescribeFirewallWithContext(ctx, input)
diff --git a/internal/service/networkfirewall/firewall_data_source_test.go b/internal/service/networkfirewall/firewall_data_source_test.go
index 90443df75cd..0dc08dd0df0 100644
--- a/internal/service/networkfirewall/firewall_data_source_test.go
+++ b/internal/service/networkfirewall/firewall_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall_test
 
 import (
diff --git a/internal/service/networkfirewall/firewall_policy.go b/internal/service/networkfirewall/firewall_policy.go
index c99aee159e1..5a6bbfa71f8 100644
--- a/internal/service/networkfirewall/firewall_policy.go
+++ b/internal/service/networkfirewall/firewall_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
@@ -63,9 +66,14 @@ func ResourceFirewallPolicy() *schema.Resource {
 							Schema: map[string]*schema.Schema{
 								"rule_order": {
 									Type:         schema.TypeString,
-									Required:     true,
+									Optional:     true,
 									ValidateFunc: validation.StringInSlice(networkfirewall.RuleOrder_Values(), false),
 								},
+								"stream_exception_policy": {
+									Type:         schema.TypeString,
+									Optional:     true,
+									ValidateFunc: validation.StringInSlice(networkfirewall.StreamExceptionPolicy_Values(), false),
+								},
 							},
 						},
 					},
@@ -158,13 +166,13 @@ func ResourceFirewallPolicy() *schema.Resource {
 }
 
 func resourceFirewallPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	name := d.Get("name").(string)
 	input := &networkfirewall.CreateFirewallPolicyInput{
 		FirewallPolicy:     expandFirewallPolicy(d.Get("firewall_policy").([]interface{})),
 		FirewallPolicyName: aws.String(d.Get("name").(string)),
-		Tags:               GetTagsIn(ctx),
+		Tags:               getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("description"); ok {
@@ -186,7 +194,7 @@ func resourceFirewallPolicyCreate(ctx context.Context, d *schema.ResourceData, m
 }
 
 func resourceFirewallPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	output, err := FindFirewallPolicyByARN(ctx, conn, d.Id())
@@ -210,13 +218,13 @@ func resourceFirewallPolicyRead(ctx context.Context, d *schema.ResourceData, met
 	d.Set("name", response.FirewallPolicyName)
 	d.Set("update_token", output.UpdateToken)
 
-	SetTagsOut(ctx, response.Tags)
+	setTagsOut(ctx, response.Tags)
 
 	return nil
 }
 
 func resourceFirewallPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	if d.HasChanges("description", "encryption_configuration", "firewall_policy") {
 		input := &networkfirewall.UpdateFirewallPolicyInput{
@@ -245,7 +253,7 @@ func resourceFirewallPolicyDelete(ctx context.Context, d *schema.ResourceData, m
 	const (
 		timeout = 10 * time.Minute
 	)
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	log.Printf("[DEBUG] Deleting NetworkFirewall Firewall Policy: %s", d.Id())
 	_, err := tfresource.RetryWhenAWSErrMessageContains(ctx, timeout, func() (interface{}, error) {
@@ -335,9 +343,12 @@ func expandStatefulEngineOptions(l []interface{}) *networkfirewall.StatefulEngin
 	options := &networkfirewall.StatefulEngineOptions{}
 
 	m := l[0].(map[string]interface{})
-	if v, ok := m["rule_order"].(string); ok {
+	if v, ok := m["rule_order"].(string); ok && v != "" {
 		options.RuleOrder = aws.String(v)
 	}
+	if v, ok := m["stream_exception_policy"].(string); ok && v != "" {
+		options.StreamExceptionPolicy = aws.String(v)
+	}
 
 	return options
 }
@@ -476,8 +487,12 @@ func flattenStatefulEngineOptions(options *networkfirewall.StatefulEngineOptions
 		return []interface{}{}
 	}
 
-	m := map[string]interface{}{
-		"rule_order": aws.StringValue(options.RuleOrder),
+	m := map[string]interface{}{}
+	if options.RuleOrder != nil {
+		m["rule_order"] = aws.StringValue(options.RuleOrder)
+	}
+	if options.StreamExceptionPolicy != nil {
+		m["stream_exception_policy"] = aws.StringValue(options.StreamExceptionPolicy)
 	}
 
 	return []interface{}{m}
diff --git a/internal/service/networkfirewall/firewall_policy_data_source.go b/internal/service/networkfirewall/firewall_policy_data_source.go
index 2dec652e28b..6617d388acb 100644
--- a/internal/service/networkfirewall/firewall_policy_data_source.go
+++ b/internal/service/networkfirewall/firewall_policy_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
@@ -48,6 +51,10 @@ func DataSourceFirewallPolicy() *schema.Resource {
 							Type:     schema.TypeString,
 							Computed: true,
 						},
+						"stream_exception_policy": {
+							Type:     schema.TypeString,
+							Computed: true,
+						},
 					},
 				},
 			},
@@ -125,7 +132,7 @@ func DataSourceFirewallPolicy() *schema.Resource {
 }
 
 func dataSourceFirewallPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	arn := d.Get("arn").(string)
diff --git a/internal/service/networkfirewall/firewall_policy_data_source_test.go b/internal/service/networkfirewall/firewall_policy_data_source_test.go
index 4a5d10c7c44..2acfa9d7faf 100644
--- a/internal/service/networkfirewall/firewall_policy_data_source_test.go
+++ b/internal/service/networkfirewall/firewall_policy_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall_test
 
 import (
diff --git a/internal/service/networkfirewall/firewall_policy_test.go b/internal/service/networkfirewall/firewall_policy_test.go
index 0542bb26f31..a2eea19b7df 100644
--- a/internal/service/networkfirewall/firewall_policy_test.go
+++ b/internal/service/networkfirewall/firewall_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall_test
 
 import (
@@ -156,12 +159,13 @@ func TestAccNetworkFirewallFirewallPolicy_statefulEngineOption(t *testing.T) {
 		CheckDestroy:             testAccCheckFirewallPolicyDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccFirewallPolicyConfig_statefulEngineOptions(rName, "STRICT_ORDER"),
+				Config: testAccFirewallPolicyConfig_statefulEngineOptions(rName, "STRICT_ORDER", "DROP"),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckFirewallPolicyExists(ctx, resourceName, &firewallPolicy),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.#", "1"),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.#", "1"),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.rule_order", networkfirewall.RuleOrderStrictOrder),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.stream_exception_policy", networkfirewall.StreamExceptionPolicyDrop),
 				),
 			},
 			{
@@ -186,12 +190,13 @@ func TestAccNetworkFirewallFirewallPolicy_updateStatefulEngineOption(t *testing.
 		CheckDestroy:             testAccCheckFirewallPolicyDestroy(ctx),
 		Steps: []resource.TestStep{
 			{
-				Config: testAccFirewallPolicyConfig_statefulEngineOptions(rName, "DEFAULT_ACTION_ORDER"),
+				Config: testAccFirewallPolicyConfig_statefulEngineOptions(rName, "DEFAULT_ACTION_ORDER", "CONTINUE"),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckFirewallPolicyExists(ctx, resourceName, &firewallPolicy1),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.#", "1"),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.#", "1"),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.rule_order", networkfirewall.RuleOrderDefaultActionOrder),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.stream_exception_policy", networkfirewall.StreamExceptionPolicyContinue),
 				),
 			},
 			{
@@ -203,13 +208,55 @@ func TestAccNetworkFirewallFirewallPolicy_updateStatefulEngineOption(t *testing.
 				),
 			},
 			{
-				Config: testAccFirewallPolicyConfig_statefulEngineOptions(rName, "STRICT_ORDER"),
+				Config: testAccFirewallPolicyConfig_statefulEngineOptions(rName, "STRICT_ORDER", "REJECT"),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckFirewallPolicyExists(ctx, resourceName, &firewallPolicy3),
 					testAccCheckFirewallPolicyRecreated(&firewallPolicy2, &firewallPolicy3),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.#", "1"),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.#", "1"),
 					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.rule_order", networkfirewall.RuleOrderStrictOrder),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.stream_exception_policy", networkfirewall.StreamExceptionPolicyReject),
+				),
+			},
+			{
+				ResourceName:      resourceName,
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+		},
+	})
+}
+
+func TestAccNetworkFirewallFirewallPolicy_statefulEngineOptionsSingle(t *testing.T) {
+	ctx := acctest.Context(t)
+	var firewallPolicy networkfirewall.DescribeFirewallPolicyOutput
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	resourceName := "aws_networkfirewall_firewall_policy.test"
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) },
+		ErrorCheck:               acctest.ErrorCheck(t, networkfirewall.EndpointsID),
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
+		CheckDestroy:             testAccCheckFirewallPolicyDestroy(ctx),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccFirewallPolicyConfig_ruleOrderOnly(rName, "DEFAULT_ACTION_ORDER"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckFirewallPolicyExists(ctx, resourceName, &firewallPolicy),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.#", "1"),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.#", "1"),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.rule_order", networkfirewall.RuleOrderDefaultActionOrder),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.stream_exception_policy", ""),
+				),
+			},
+			{
+				Config: testAccFirewallPolicyConfig_streamExceptionPolicyOnly(rName, "REJECT"),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckFirewallPolicyExists(ctx, resourceName, &firewallPolicy),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.#", "1"),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.#", "1"),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.rule_order", ""),
+					resource.TestCheckResourceAttr(resourceName, "firewall_policy.0.stateful_engine_options.0.stream_exception_policy", networkfirewall.StreamExceptionPolicyReject),
 				),
 			},
 			{
@@ -892,7 +939,7 @@ func testAccCheckFirewallPolicyDestroy(ctx context.Context) resource.TestCheckFu
 			continue
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 		_, err := tfnetworkfirewall.FindFirewallPolicyByARN(ctx, conn, rs.Primary.ID)
@@ -922,7 +969,7 @@ func testAccCheckFirewallPolicyExists(ctx context.Context, n string, v *networkf
 			return fmt.Errorf("No NetworkFirewall Firewall Policy ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 		output, err := tfnetworkfirewall.FindFirewallPolicyByARN(ctx, conn, rs.Primary.ID)
@@ -1110,7 +1157,25 @@ resource "aws_networkfirewall_firewall_policy" "test" {
 `, rName, tagKey1, tagValue1, tagKey2, tagValue2)
 }
 
-func testAccFirewallPolicyConfig_statefulEngineOptions(rName, ruleOrder string) string {
+func testAccFirewallPolicyConfig_statefulEngineOptions(rName, ruleOrder, streamExceptionPolicy string) string {
+	return fmt.Sprintf(`
+resource "aws_networkfirewall_firewall_policy" "test" {
+  name = %[1]q
+
+  firewall_policy {
+    stateless_fragment_default_actions = ["aws:drop"]
+    stateless_default_actions          = ["aws:pass"]
+
+    stateful_engine_options {
+      rule_order              = %[2]q
+      stream_exception_policy = %[3]q
+    }
+  }
+}
+`, rName, ruleOrder, streamExceptionPolicy)
+}
+
+func testAccFirewallPolicyConfig_ruleOrderOnly(rName, ruleOrder string) string {
 	return fmt.Sprintf(`
 resource "aws_networkfirewall_firewall_policy" "test" {
   name = %[1]q
@@ -1127,6 +1192,23 @@ resource "aws_networkfirewall_firewall_policy" "test" {
 `, rName, ruleOrder)
 }
 
+func testAccFirewallPolicyConfig_streamExceptionPolicyOnly(rName, streamExceptionPolicy string) string {
+	return fmt.Sprintf(`
+resource "aws_networkfirewall_firewall_policy" "test" {
+  name = %[1]q
+
+  firewall_policy {
+    stateless_fragment_default_actions = ["aws:drop"]
+    stateless_default_actions          = ["aws:pass"]
+
+    stateful_engine_options {
+      stream_exception_policy = %[2]q
+    }
+  }
+}
+`, rName, streamExceptionPolicy)
+}
+
 func testAccFirewallPolicyConfig_statefulDefaultActions(rName string) string {
 	return fmt.Sprintf(`
 resource "aws_networkfirewall_firewall_policy" "test" {
diff --git a/internal/service/networkfirewall/firewall_resource_policy_data_source.go b/internal/service/networkfirewall/firewall_resource_policy_data_source.go
index 866a9f176f4..52fa58b203c 100644
--- a/internal/service/networkfirewall/firewall_resource_policy_data_source.go
+++ b/internal/service/networkfirewall/firewall_resource_policy_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
@@ -30,7 +33,7 @@ func DataSourceFirewallResourcePolicy() *schema.Resource {
 }
 
 func dataSourceFirewallResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	resourceARN := d.Get("resource_arn").(string)
 	policy, err := FindResourcePolicy(ctx, conn, resourceARN)
diff --git a/internal/service/networkfirewall/firewall_resource_policy_data_source_test.go b/internal/service/networkfirewall/firewall_resource_policy_data_source_test.go
index dcaa06f5da9..b72ba101253 100644
--- a/internal/service/networkfirewall/firewall_resource_policy_data_source_test.go
+++ b/internal/service/networkfirewall/firewall_resource_policy_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall_test
 
 import (
diff --git a/internal/service/networkfirewall/firewall_test.go b/internal/service/networkfirewall/firewall_test.go
index 96d9ca12bbc..72f91cd404c 100644
--- a/internal/service/networkfirewall/firewall_test.go
+++ b/internal/service/networkfirewall/firewall_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall_test
 
 import (
@@ -428,7 +431,7 @@ func testAccCheckFirewallDestroy(ctx context.Context) resource.TestCheckFunc {
 			continue
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 		_, err := tfnetworkfirewall.FindFirewallByARN(ctx, conn, rs.Primary.ID)
@@ -458,7 +461,7 @@ func testAccCheckFirewallExists(ctx context.Context, n string) resource.TestChec
 			return fmt.Errorf("No NetworkFirewall Firewall ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 		_, err := tfnetworkfirewall.FindFirewallByARN(ctx, conn, rs.Primary.ID)
@@ -467,7 +470,7 @@ func testAccCheckFirewallExists(ctx context.Context, n string) resource.TestChec
 }
 
 func testAccPreCheck(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	input := &networkfirewall.ListFirewallsInput{}
diff --git a/internal/service/networkfirewall/generate.go b/internal/service/networkfirewall/generate.go
index 50864ea0f88..98d7418946b 100644
--- a/internal/service/networkfirewall/generate.go
+++ b/internal/service/networkfirewall/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package networkfirewall
diff --git a/internal/service/networkfirewall/helpers.go b/internal/service/networkfirewall/helpers.go
index d0e7242550a..442f03feb03 100644
--- a/internal/service/networkfirewall/helpers.go
+++ b/internal/service/networkfirewall/helpers.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
diff --git a/internal/service/networkfirewall/logging_configuration.go b/internal/service/networkfirewall/logging_configuration.go
index eac98a03fd8..f3e8bc0e82b 100644
--- a/internal/service/networkfirewall/logging_configuration.go
+++ b/internal/service/networkfirewall/logging_configuration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package networkfirewall
 
 import (
@@ -73,7 +76,7 @@ func ResourceLoggingConfiguration() *schema.Resource {
 }
 
 func resourceLoggingConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	firewallArn := d.Get("firewall_arn").(string)
 
 	log.Printf("[DEBUG] Adding Logging Configuration to NetworkFirewall Firewall: %s", firewallArn)
@@ -91,7 +94,7 @@ func resourceLoggingConfigurationCreate(ctx context.Context, d *schema.ResourceD
 }
 
 func resourceLoggingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).NetworkFirewallConn()
+	conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx)
 
 	log.Printf("[DEBUG] Reading Logging Configuration for NetworkFirewall Firewall: %s", d.Id())
@@ -103,24 +106,24 @@ func resourceLoggingConfigurationRead(ctx context.Context, d *schema.ResourceDat
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error reading Logging Configuration for NetworkFirewall Firewall: %s: %w", d.Id(), err))
+		return diag.Errorf("reading Logging Configuration for NetworkFirewall Firewall: %s: %s", d.Id(), err)
 	}
 
 	if output == nil {
-		return diag.FromErr(fmt.Errorf("error reading Logging Configuration for NetworkFirewall Firewall: %s: empty output", d.Id()))
+		return diag.Errorf("reading Logging Configuration for NetworkFirewall Firewall: %s: empty output", d.Id())
 	}
 
 	d.Set("firewall_arn", output.FirewallArn)
 
 	if err := d.Set("logging_configuration", flattenLoggingConfiguration(output.LoggingConfiguration)); err != nil {
-		return diag.FromErr(fmt.Errorf("error setting logging_configuration: %w", err))
+		return diag.Errorf("setting logging_configuration: %s", err)
 	}
 
 	return nil
 }
 
 func resourceLoggingConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
- conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) log.Printf("[DEBUG] Updating Logging Configuration for NetworkFirewall Firewall: %s", d.Id()) @@ -149,7 +152,7 @@ func resourceLoggingConfigurationUpdate(ctx context.Context, d *schema.ResourceD } func resourceLoggingConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) log.Printf("[DEBUG] Deleting Logging Configuration for NetworkFirewall Firewall: %s", d.Id()) output, err := FindLoggingConfiguration(ctx, conn, d.Id()) @@ -158,7 +161,7 @@ func resourceLoggingConfigurationDelete(ctx context.Context, d *schema.ResourceD } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Logging Configuration for NetworkFirewall Firewall: %s: %w", d.Id(), err)) + return diag.Errorf("deleting Logging Configuration for NetworkFirewall Firewall: %s: %s", d.Id(), err) } if output != nil && output.LoggingConfiguration != nil { @@ -180,7 +183,7 @@ func putLoggingConfiguration(ctx context.Context, conn *networkfirewall.NetworkF } _, err := conn.UpdateLoggingConfigurationWithContext(ctx, input) if err != nil { - errors = multierror.Append(errors, fmt.Errorf("error adding Logging Configuration to NetworkFirewall Firewall (%s): %w", arn, err)) + errors = multierror.Append(errors, fmt.Errorf("adding Logging Configuration to NetworkFirewall Firewall (%s): %w", arn, err)) } } return errors.ErrorOrNil() @@ -204,7 +207,7 @@ func removeLoggingConfiguration(ctx context.Context, conn *networkfirewall.Netwo } _, err := conn.UpdateLoggingConfigurationWithContext(ctx, input) if err != nil { - errors = multierror.Append(errors, fmt.Errorf("error removing Logging Configuration LogDestinationConfig (%v) from NetworkFirewall Firewall: %s: %w", config, arn, err)) + errors = multierror.Append(errors, fmt.Errorf("removing 
Logging Configuration LogDestinationConfig (%v) from NetworkFirewall Firewall: %s: %w", config, arn, err)) } } diff --git a/internal/service/networkfirewall/logging_configuration_test.go b/internal/service/networkfirewall/logging_configuration_test.go index 6c4ee2dfb8e..5edc2a612ab 100644 --- a/internal/service/networkfirewall/logging_configuration_test.go +++ b/internal/service/networkfirewall/logging_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkfirewall_test import ( @@ -617,7 +620,7 @@ func testAccCheckLoggingConfigurationDestroy(ctx context.Context) resource.TestC continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx) output, err := tfnetworkfirewall.FindLoggingConfiguration(ctx, conn, rs.Primary.ID) if tfawserr.ErrCodeEquals(err, networkfirewall.ErrCodeResourceNotFoundException) { continue @@ -645,7 +648,7 @@ func testAccCheckLoggingConfigurationExists(ctx context.Context, n string) resou return fmt.Errorf("No NetworkFirewall Logging Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx) output, err := tfnetworkfirewall.FindLoggingConfiguration(ctx, conn, rs.Primary.ID) if err != nil { return err @@ -766,11 +769,6 @@ resource "aws_s3_bucket" "test" { create_before_destroy = true } } - -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} `, rName) } @@ -861,17 +859,12 @@ resource "aws_s3_bucket" "logs" { force_destroy = true } -resource "aws_s3_bucket_acl" "logs_acl" { - bucket = aws_s3_bucket.logs.id - acl = "private" -} - resource "aws_kinesis_firehose_delivery_stream" "test" { depends_on = [aws_iam_role_policy.test] name = %[2]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + 
extended_s3_configuration { role_arn = aws_iam_role.test.arn bucket_arn = aws_s3_bucket.logs.arn } diff --git a/internal/service/networkfirewall/resource_policy.go b/internal/service/networkfirewall/resource_policy.go index d0e921508d0..34cfc8ae36f 100644 --- a/internal/service/networkfirewall/resource_policy.go +++ b/internal/service/networkfirewall/resource_policy.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkfirewall import ( "context" - "fmt" "log" "time" @@ -52,7 +54,7 @@ func ResourceResourcePolicy() *schema.Resource { } func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) resourceArn := d.Get("resource_arn").(string) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -70,7 +72,7 @@ func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta _, err = conn.PutResourcePolicyWithContext(ctx, input) if err != nil { - return diag.Errorf("error putting NetworkFirewall Resource Policy (for resource: %s): %s", resourceArn, err) + return diag.Errorf("putting NetworkFirewall Resource Policy (for resource: %s): %s", resourceArn, err) } d.SetId(resourceArn) @@ -79,7 +81,7 @@ func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta } func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) resourceArn := d.Id() log.Printf("[DEBUG] Reading NetworkFirewall Resource Policy for resource: %s", resourceArn) @@ -91,11 +93,11 @@ func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, met return nil } if err != nil { - return diag.FromErr(fmt.Errorf("error reading NetworkFirewall Resource 
Policy (for resource: %s): %w", resourceArn, err)) + return diag.Errorf("reading NetworkFirewall Resource Policy (for resource: %s): %s", resourceArn, err) } if policy == nil { - return diag.FromErr(fmt.Errorf("error reading NetworkFirewall Resource Policy (for resource: %s): empty output", resourceArn)) + return diag.Errorf("reading NetworkFirewall Resource Policy (for resource: %s): empty output", resourceArn) } d.Set("resource_arn", resourceArn) @@ -115,7 +117,7 @@ func resourceResourcePolicyDelete(ctx context.Context, d *schema.ResourceData, m const ( timeout = 2 * time.Minute ) - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) log.Printf("[DEBUG] Deleting NetworkFirewall Resource Policy: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, timeout, func() (interface{}, error) { diff --git a/internal/service/networkfirewall/resource_policy_test.go b/internal/service/networkfirewall/resource_policy_test.go index 557b7635f71..2445bf98c1a 100644 --- a/internal/service/networkfirewall/resource_policy_test.go +++ b/internal/service/networkfirewall/resource_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkfirewall_test import ( @@ -196,7 +199,7 @@ func testAccCheckResourcePolicyDestroy(ctx context.Context) resource.TestCheckFu continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx) policy, err := tfnetworkfirewall.FindResourcePolicy(ctx, conn, rs.Primary.ID) if tfawserr.ErrCodeEquals(err, networkfirewall.ErrCodeResourceNotFoundException) { continue @@ -224,7 +227,7 @@ func testAccCheckResourcePolicyExists(ctx context.Context, n string) resource.Te return fmt.Errorf("No NetworkFirewall Resource Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx) policy, err := tfnetworkfirewall.FindResourcePolicy(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/networkfirewall/rule_group.go b/internal/service/networkfirewall/rule_group.go index a6fae6ee8d6..0b9c6b56aee 100644 --- a/internal/service/networkfirewall/rule_group.go +++ b/internal/service/networkfirewall/rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkfirewall import ( @@ -456,13 +459,13 @@ func ResourceRuleGroup() *schema.Resource { } func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) name := d.Get("name").(string) input := &networkfirewall.CreateRuleGroupInput{ Capacity: aws.Int64(int64(d.Get("capacity").(int))), RuleGroupName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(d.Get("type").(string)), } @@ -494,7 +497,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) output, err := FindRuleGroupByARN(ctx, conn, d.Id()) @@ -524,13 +527,13 @@ func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("type", response.Type) d.Set("update_token", output.UpdateToken) - SetTagsOut(ctx, response.Tags) + setTagsOut(ctx, response.Tags) return nil } func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) if d.HasChanges("description", "encryption_configuration", "rule_group", "rules", "type") { input := &networkfirewall.UpdateRuleGroupInput{ @@ -581,7 +584,7 @@ func resourceRuleGroupDelete(ctx context.Context, d *schema.ResourceData, meta i const ( timeout = 10 * time.Minute ) - conn := meta.(*conns.AWSClient).NetworkFirewallConn() + conn := meta.(*conns.AWSClient).NetworkFirewallConn(ctx) log.Printf("[DEBUG] Deleting NetworkFirewall Rule Group: %s", d.Id()) _, err := 
tfresource.RetryWhenAWSErrMessageContains(ctx, timeout, func() (interface{}, error) { diff --git a/internal/service/networkfirewall/rule_group_test.go b/internal/service/networkfirewall/rule_group_test.go index d8c2641afd1..b8228fdc805 100644 --- a/internal/service/networkfirewall/rule_group_test.go +++ b/internal/service/networkfirewall/rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkfirewall_test import ( @@ -1007,7 +1010,7 @@ func testAccCheckRuleGroupDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx) _, err := tfnetworkfirewall.FindRuleGroupByARN(ctx, conn, rs.Primary.ID) @@ -1037,7 +1040,7 @@ func testAccCheckRuleGroupExists(ctx context.Context, n string, v *networkfirewa return fmt.Errorf("No NetworkFirewall Rule Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkFirewallConn(ctx) output, err := tfnetworkfirewall.FindRuleGroupByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/networkfirewall/service_package_gen.go b/internal/service/networkfirewall/service_package_gen.go index 655e2cf4992..3b7562c45b4 100644 --- a/internal/service/networkfirewall/service_package_gen.go +++ b/internal/service/networkfirewall/service_package_gen.go @@ -5,6 +5,10 @@ package networkfirewall import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + networkfirewall_sdkv1 "github.com/aws/aws-sdk-go/service/networkfirewall" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -77,4 +81,13 @@ func (p *servicePackage) ServicePackageName() string { 
return names.NetworkFirewall } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*networkfirewall_sdkv1.NetworkFirewall, error) { + sess := config["session"].(*session_sdkv1.Session) + + return networkfirewall_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/networkfirewall/sweep.go b/internal/service/networkfirewall/sweep.go index 21b6f0bc46e..6d69a36dfa8 100644 --- a/internal/service/networkfirewall/sweep.go +++ b/internal/service/networkfirewall/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/networkfirewall" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -47,11 +49,11 @@ func init() { func sweepFirewallPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).NetworkFirewallConn() + conn := client.NetworkFirewallConn(ctx) input := &networkfirewall.ListFirewallPoliciesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -80,7 +82,7 @@ func sweepFirewallPolicies(region string) error { return fmt.Errorf("error listing NetworkFirewall Firewall Policies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, 
sweepResources) if err != nil { return fmt.Errorf("error sweeping NetworkFirewall Firewall Policies (%s): %w", region, err) @@ -91,11 +93,11 @@ func sweepFirewallPolicies(region string) error { func sweepFirewalls(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkFirewallConn() + conn := client.NetworkFirewallConn(ctx) input := &networkfirewall.ListFirewallsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -124,7 +126,7 @@ func sweepFirewalls(region string) error { return fmt.Errorf("error listing NetworkFirewall Firewalls (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping NetworkFirewall Firewalls (%s): %w", region, err) @@ -135,11 +137,11 @@ func sweepFirewalls(region string) error { func sweepLoggingConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkFirewallConn() + conn := client.NetworkFirewallConn(ctx) input := &networkfirewall.ListFirewallsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -168,7 +170,7 @@ func sweepLoggingConfigurations(region string) error { return fmt.Errorf("error listing NetworkFirewall Firewalls (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping NetworkFirewall Logging Configurations (%s): %w", region, err) @@ -179,11 +181,11 @@ func sweepLoggingConfigurations(region string) 
error { func sweepRuleGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).NetworkFirewallConn() + conn := client.NetworkFirewallConn(ctx) input := &networkfirewall.ListRuleGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -212,7 +214,7 @@ func sweepRuleGroups(region string) error { return fmt.Errorf("error listing NetworkFirewall Rule Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping NetworkFirewall Rule Groups (%s): %w", region, err) diff --git a/internal/service/networkfirewall/tags_gen.go b/internal/service/networkfirewall/tags_gen.go index 82df4834785..55df8200166 100644 --- a/internal/service/networkfirewall/tags_gen.go +++ b/internal/service/networkfirewall/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists networkfirewall service tags. +// listTags lists networkfirewall service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn networkfirewalliface.NetworkFirewallAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn networkfirewalliface.NetworkFirewallAPI, identifier string) (tftags.KeyValueTags, error) { input := &networkfirewall.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn networkfirewalliface.NetworkFirewallAPI, // ListTags lists networkfirewall service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).NetworkFirewallConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).NetworkFirewallConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*networkfirewall.Tag) tftags.KeyVa return tftags.New(ctx, m) } -// GetTagsIn returns networkfirewall service tags from Context. +// getTagsIn returns networkfirewall service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*networkfirewall.Tag { +func getTagsIn(ctx context.Context) []*networkfirewall.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*networkfirewall.Tag { return nil } -// SetTagsOut sets networkfirewall service tags in Context. -func SetTagsOut(ctx context.Context, tags []*networkfirewall.Tag) { +// setTagsOut sets networkfirewall service tags in Context. +func setTagsOut(ctx context.Context, tags []*networkfirewall.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates networkfirewall service tags. +// updateTags updates networkfirewall service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn networkfirewalliface.NetworkFirewallAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn networkfirewalliface.NetworkFirewallAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn networkfirewalliface.NetworkFirewallAP // UpdateTags updates networkfirewall service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).NetworkFirewallConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).NetworkFirewallConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/networkmanager/attachment_accepter.go b/internal/service/networkmanager/attachment_accepter.go index 9f10e0f62f2..39d87cb94c5 100644 --- a/internal/service/networkmanager/attachment_accepter.go +++ b/internal/service/networkmanager/attachment_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -41,14 +44,10 @@ func ResourceAttachmentAccepter() *schema.Resource { // querying attachments requires knowing the type ahead of time // therefore type is required in provider, though not on the API "attachment_type": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - networkmanager.AttachmentTypeVpc, - networkmanager.AttachmentTypeSiteToSiteVpn, - networkmanager.AttachmentTypeConnect, - }, false), + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(networkmanager.AttachmentType_Values(), false), }, "core_network_arn": { Type: schema.TypeString, @@ -83,7 +82,7 @@ func ResourceAttachmentAccepter() *schema.Resource { } func resourceAttachmentAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) var state string attachmentID := d.Get("attachment_id").(string) @@ -123,6 +122,17 @@ func resourceAttachmentAccepterCreate(ctx context.Context, d *schema.ResourceDat d.SetId(attachmentID) + case networkmanager.AttachmentTypeTransitGatewayRouteTable: + tgwAttachment, err := FindTransitGatewayRouteTableAttachmentByID(ctx, conn, attachmentID) + + if err != nil { + return diag.Errorf("reading Network Manager Transit Gateway Route Table Attachment (%s): %s", attachmentID, err) + } + + state = aws.StringValue(tgwAttachment.Attachment.State) + + d.SetId(attachmentID) + default: return diag.Errorf("unsupported Network Manager Attachment type: %s", attachmentType) } @@ -153,6 +163,11 @@ func resourceAttachmentAccepterCreate(ctx context.Context, d *schema.ResourceDat if _, err := waitConnectAttachmentAvailable(ctx, conn, attachmentID, d.Timeout(schema.TimeoutCreate)); err != nil { return diag.Errorf("waiting for Network Manager Connect 
Attachment (%s) create: %s", attachmentID, err) } + + case networkmanager.AttachmentTypeTransitGatewayRouteTable: + if _, err := waitTransitGatewayRouteTableAttachmentAvailable(ctx, conn, attachmentID, d.Timeout(schema.TimeoutCreate)); err != nil { + return diag.Errorf("waiting for Network Manager Transit Gateway Route Table Attachment (%s) create: %s", attachmentID, err) + } } } @@ -160,7 +175,7 @@ func resourceAttachmentAccepterCreate(ctx context.Context, d *schema.ResourceDat } func resourceAttachmentAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) var a *networkmanager.Attachment @@ -209,6 +224,21 @@ func resourceAttachmentAccepterRead(ctx context.Context, d *schema.ResourceData, } a = connectAttachment.Attachment + + case networkmanager.AttachmentTypeTransitGatewayRouteTable: + tgwAttachment, err := FindTransitGatewayRouteTableAttachmentByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Network Manager Transit Gateway Route Table Attachment %s not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.Errorf("reading Network Manager Transit Gateway Route Table Attachment (%s): %s", d.Id(), err) + } + + a = tgwAttachment.Attachment } d.Set("attachment_policy_rule_number", a.AttachmentPolicyRuleNumber) diff --git a/internal/service/networkmanager/connect_attachment.go b/internal/service/networkmanager/connect_attachment.go index 014043615d8..81bb1ba49e1 100644 --- a/internal/service/networkmanager/connect_attachment.go +++ b/internal/service/networkmanager/connect_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -127,7 +130,7 @@ func ResourceConnectAttachment() *schema.Resource { } func resourceConnectAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) coreNetworkID := d.Get("core_network_id").(string) edgeLocation := d.Get("edge_location").(string) @@ -141,7 +144,7 @@ func resourceConnectAttachmentCreate(ctx context.Context, d *schema.ResourceData CoreNetworkId: aws.String(coreNetworkID), EdgeLocation: aws.String(edgeLocation), Options: options, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TransportAttachmentId: aws.String(transportAttachmentID), } @@ -188,7 +191,7 @@ func resourceConnectAttachmentCreate(ctx context.Context, d *schema.ResourceData } func resourceConnectAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) connectAttachment, err := FindConnectAttachmentByID(ctx, conn, d.Id()) @@ -229,7 +232,7 @@ func resourceConnectAttachmentRead(ctx context.Context, d *schema.ResourceData, d.Set("state", a.State) d.Set("transport_attachment_id", connectAttachment.TransportAttachmentId) - SetTagsOut(ctx, a.Tags) + setTagsOut(ctx, a.Tags) return nil } @@ -240,7 +243,7 @@ func resourceConnectAttachmentUpdate(ctx context.Context, d *schema.ResourceData } func resourceConnectAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) // If ResourceAttachmentAccepter is used, then Connect Attachment state // is never updated from StatePendingAttachmentAcceptance and the delete fails diff --git 
a/internal/service/networkmanager/connect_attachment_test.go b/internal/service/networkmanager/connect_attachment_test.go index ea8a4ab71ae..46f0b820a62 100644 --- a/internal/service/networkmanager/connect_attachment_test.go +++ b/internal/service/networkmanager/connect_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -167,7 +170,7 @@ func testAccCheckConnectAttachmentExists(ctx context.Context, n string, v *netwo return fmt.Errorf("No Network Manager Connect Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) output, err := tfnetworkmanager.FindConnectAttachmentByID(ctx, conn, rs.Primary.ID) @@ -183,7 +186,7 @@ func testAccCheckConnectAttachmentExists(ctx context.Context, n string, v *netwo func testAccCheckConnectAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_connect_attachment" { diff --git a/internal/service/networkmanager/connect_peer.go b/internal/service/networkmanager/connect_peer.go index ed32aa0412b..4fefaeffc84 100644 --- a/internal/service/networkmanager/connect_peer.go +++ b/internal/service/networkmanager/connect_peer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -178,7 +181,7 @@ func ResourceConnectPeer() *schema.Resource { } func resourceConnectPeerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) connectAttachmentID := d.Get("connect_attachment_id").(string) insideCIDRBlocks := flex.ExpandStringList(d.Get("inside_cidr_blocks").([]interface{})) @@ -187,7 +190,7 @@ func resourceConnectPeerCreate(ctx context.Context, d *schema.ResourceData, meta ConnectAttachmentId: aws.String(connectAttachmentID), InsideCidrBlocks: insideCIDRBlocks, PeerAddress: aws.String(peerAddress), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("bgp_options"); ok && len(v.([]interface{})) > 0 { @@ -238,7 +241,7 @@ func resourceConnectPeerCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceConnectPeerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) connectPeer, err := FindConnectPeerByID(ctx, conn, d.Id()) @@ -276,7 +279,7 @@ func resourceConnectPeerRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("peer_address", connectPeer.Configuration.PeerAddress) d.Set("state", connectPeer.State) - SetTagsOut(ctx, connectPeer.Tags) + setTagsOut(ctx, connectPeer.Tags) return nil } @@ -287,7 +290,7 @@ func resourceConnectPeerUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceConnectPeerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) log.Printf("[DEBUG] Deleting Network Manager Connect Peer: %s", d.Id()) _, err := conn.DeleteConnectPeerWithContext(ctx, 
&networkmanager.DeleteConnectPeerInput{ @@ -439,7 +442,7 @@ func waitConnectPeerDeleted(ctx context.Context, conn *networkmanager.NetworkMan return nil, err } -// validationExceptionMessageContains returns true if the error matches all these conditions: +// validationExceptionMessageContains returns true if the error matches all these conditions: // - err is of type networkmanager.ValidationException // - ValidationException.Reason equals reason // - ValidationException.Message_ contains message diff --git a/internal/service/networkmanager/connect_peer_test.go b/internal/service/networkmanager/connect_peer_test.go index 323d2e9ac89..b4e214990e5 100644 --- a/internal/service/networkmanager/connect_peer_test.go +++ b/internal/service/networkmanager/connect_peer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -163,7 +166,7 @@ func testAccCheckConnectPeerExists(ctx context.Context, n string, v *networkmana if rs.Primary.ID == "" { return fmt.Errorf("No Network Manager Connect Peer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) output, err := tfnetworkmanager.FindConnectPeerByID(ctx, conn, rs.Primary.ID) @@ -179,7 +182,7 @@ func testAccCheckConnectPeerExists(ctx context.Context, n string, v *networkmana func testAccCheckConnectPeerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_connect_peer" { diff --git a/internal/service/networkmanager/connection.go b/internal/service/networkmanager/connection.go index c8f7dc825d5..dcd602b9f6c 100644 --- a/internal/service/networkmanager/connection.go +++ 
b/internal/service/networkmanager/connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -36,7 +39,7 @@ func ResourceConnection() *schema.Resource { parsedARN, err := arn.Parse(d.Id()) if err != nil { - return nil, fmt.Errorf("error parsing ARN (%s): %w", d.Id(), err) + return nil, fmt.Errorf("parsing ARN (%s): %w", d.Id(), err) } // See https://docs.aws.amazon.com/service-authorization/latest/reference/list_networkmanager.html#networkmanager-resources-for-iam-policies. @@ -101,14 +104,14 @@ func ResourceConnection() *schema.Resource { } func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) input := &networkmanager.CreateConnectionInput{ ConnectedDeviceId: aws.String(d.Get("connected_device_id").(string)), DeviceId: aws.String(d.Get("device_id").(string)), GlobalNetworkId: aws.String(globalNetworkID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("connected_link_id"); ok { @@ -127,20 +130,20 @@ func resourceConnectionCreate(ctx context.Context, d *schema.ResourceData, meta output, err := conn.CreateConnectionWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Connection: %s", err) + return diag.Errorf("creating Network Manager Connection: %s", err) } d.SetId(aws.StringValue(output.Connection.ConnectionId)) if _, err := waitConnectionCreated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Connection (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Connection (%s) create: %s", d.Id(), err) } return resourceConnectionRead(ctx, d, meta) } func resourceConnectionRead(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) connection, err := FindConnectionByTwoPartKey(ctx, conn, globalNetworkID, d.Id()) @@ -152,7 +155,7 @@ func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta in } if err != nil { - return diag.Errorf("error reading Network Manager Connection (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Connection (%s): %s", d.Id(), err) } d.Set("arn", connection.ConnectionArn) @@ -163,13 +166,13 @@ func resourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("global_network_id", connection.GlobalNetworkId) d.Set("link_id", connection.LinkId) - SetTagsOut(ctx, connection.Tags) + setTagsOut(ctx, connection.Tags) return nil } func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { globalNetworkID := d.Get("global_network_id").(string) @@ -185,11 +188,11 @@ func resourceConnectionUpdate(ctx context.Context, d *schema.ResourceData, meta _, err := conn.UpdateConnectionWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Network Manager Connection (%s): %s", d.Id(), err) + return diag.Errorf("updating Network Manager Connection (%s): %s", d.Id(), err) } if _, err := waitConnectionUpdated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Network Manager Connection (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Connection (%s) update: %s", d.Id(), err) } } @@ -197,7 +200,7 @@ func resourceConnectionUpdate(ctx context.Context, d 
*schema.ResourceData, meta } func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) @@ -212,11 +215,11 @@ func resourceConnectionDelete(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.Errorf("error deleting Network Manager Connection (%s): %s", d.Id(), err) + return diag.Errorf("deleting Network Manager Connection (%s): %s", d.Id(), err) } if _, err := waitConnectionDeleted(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Network Manager Connection (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Connection (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/networkmanager/connection_data_source.go b/internal/service/networkmanager/connection_data_source.go index fbc690fdd21..7caa49a1e09 100644 --- a/internal/service/networkmanager/connection_data_source.go +++ b/internal/service/networkmanager/connection_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -53,7 +56,7 @@ func DataSourceConnection() *schema.Resource { } func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig globalNetworkID := d.Get("global_network_id").(string) @@ -61,7 +64,7 @@ func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta connection, err := FindConnectionByTwoPartKey(ctx, conn, globalNetworkID, connectionID) if err != nil { - return diag.Errorf("error reading Network Manager Connection (%s): %s", connectionID, err) + return diag.Errorf("reading Network Manager Connection (%s): %s", connectionID, err) } d.SetId(connectionID) @@ -75,7 +78,7 @@ func dataSourceConnectionRead(ctx context.Context, d *schema.ResourceData, meta d.Set("link_id", connection.LinkId) if err := d.Set("tags", KeyValueTags(ctx, connection.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/networkmanager/connection_data_source_test.go b/internal/service/networkmanager/connection_data_source_test.go index b6a60f43f45..ae24f8822e6 100644 --- a/internal/service/networkmanager/connection_data_source_test.go +++ b/internal/service/networkmanager/connection_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( diff --git a/internal/service/networkmanager/connection_test.go b/internal/service/networkmanager/connection_test.go index e8e7cdc9bae..9960fc959f7 100644 --- a/internal/service/networkmanager/connection_test.go +++ b/internal/service/networkmanager/connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -172,7 +175,7 @@ func testAccConnection_descriptionAndLinks(t *testing.T) { func testAccCheckConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_connection" { @@ -207,7 +210,7 @@ func testAccCheckConnectionExists(ctx context.Context, n string) resource.TestCh return fmt.Errorf("No Network Manager Connection ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) _, err := tfnetworkmanager.FindConnectionByTwoPartKey(ctx, conn, rs.Primary.Attributes["global_network_id"], rs.Primary.ID) diff --git a/internal/service/networkmanager/connections_data_source.go b/internal/service/networkmanager/connections_data_source.go index 0b296ddd136..5662c100834 100644 --- a/internal/service/networkmanager/connections_data_source.go +++ b/internal/service/networkmanager/connections_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -36,7 +39,7 @@ func DataSourceConnections() *schema.Resource { } func dataSourceConnectionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -51,7 +54,7 @@ func dataSourceConnectionsRead(ctx context.Context, d *schema.ResourceData, meta output, err := FindConnections(ctx, conn, input) if err != nil { - return diag.Errorf("error listing Network Manager Connections: %s", err) + return diag.Errorf("listing Network Manager Connections: %s", err) } var connectionIDs []string diff --git a/internal/service/networkmanager/connections_data_source_test.go b/internal/service/networkmanager/connections_data_source_test.go index 1bfd22e6d21..121b1fb6595 100644 --- a/internal/service/networkmanager/connections_data_source_test.go +++ b/internal/service/networkmanager/connections_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -24,7 +27,7 @@ func TestAccNetworkManagerConnectionsDataSource_basic(t *testing.T) { { Config: testAccConnectionsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", 1), resource.TestCheckResourceAttr(dataSourceByTagsName, "ids.#", "1"), ), }, diff --git a/internal/service/networkmanager/core_network.go b/internal/service/networkmanager/core_network.go index b3bc7b776bc..2e91ae59fac 100644 --- a/internal/service/networkmanager/core_network.go +++ b/internal/service/networkmanager/core_network.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -152,13 +155,13 @@ func ResourceCoreNetwork() *schema.Resource { } func resourceCoreNetworkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) input := &networkmanager.CreateCoreNetworkInput{ ClientToken: aws.String(id.UniqueId()), GlobalNetworkId: aws.String(globalNetworkID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -200,7 +203,7 @@ func resourceCoreNetworkCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceCoreNetworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) coreNetwork, err := FindCoreNetworkByID(ctx, conn, d.Id()) @@ -230,13 +233,13 @@ func resourceCoreNetworkRead(ctx context.Context, d *schema.ResourceData, meta i } d.Set("state", coreNetwork.State) - SetTagsOut(ctx, 
coreNetwork.Tags) + setTagsOut(ctx, coreNetwork.Tags) return nil } func resourceCoreNetworkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChange("description") { _, err := conn.UpdateCoreNetworkWithContext(ctx, &networkmanager.UpdateCoreNetworkInput{ @@ -285,7 +288,7 @@ func resourceCoreNetworkUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceCoreNetworkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) log.Printf("[DEBUG] Deleting Network Manager Core Network: %s", d.Id()) _, err := conn.DeleteCoreNetworkWithContext(ctx, &networkmanager.DeleteCoreNetworkInput{ diff --git a/internal/service/networkmanager/core_network_policy_attachment.go b/internal/service/networkmanager/core_network_policy_attachment.go index 2724258bbb7..4912ac0940d 100644 --- a/internal/service/networkmanager/core_network_policy_attachment.go +++ b/internal/service/networkmanager/core_network_policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -70,7 +73,7 @@ func resourceCoreNetworkPolicyAttachmentCreate(ctx context.Context, d *schema.Re } func resourceCoreNetworkPolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) coreNetwork, err := FindCoreNetworkByID(ctx, conn, d.Id()) @@ -108,7 +111,7 @@ func resourceCoreNetworkPolicyAttachmentRead(ctx context.Context, d *schema.Reso } func resourceCoreNetworkPolicyAttachmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChange("policy_document") { err := PutAndExecuteCoreNetworkPolicy(ctx, conn, d.Id(), d.Get("policy_document").(string)) diff --git a/internal/service/networkmanager/core_network_policy_attachment_test.go b/internal/service/networkmanager/core_network_policy_attachment_test.go index adf3c0d053c..0ad5e4750ff 100644 --- a/internal/service/networkmanager/core_network_policy_attachment_test.go +++ b/internal/service/networkmanager/core_network_policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -167,7 +170,7 @@ func testAccCheckCoreNetworkPolicyAttachmentExists(ctx context.Context, n string return fmt.Errorf("No Network Manager Core Network ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) // pass in latestPolicyVersionId to get the latest version id by default const latestPolicyVersionId = -1 diff --git a/internal/service/networkmanager/core_network_policy_document_data_source.go b/internal/service/networkmanager/core_network_policy_document_data_source.go index 91fb8fdd918..fb05abdbb97 100644 --- a/internal/service/networkmanager/core_network_policy_document_data_source.go +++ b/internal/service/networkmanager/core_network_policy_document_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( diff --git a/internal/service/networkmanager/core_network_policy_document_data_source_test.go b/internal/service/networkmanager/core_network_policy_document_data_source_test.go index 16737d30652..224c9cdc434 100644 --- a/internal/service/networkmanager/core_network_policy_document_data_source_test.go +++ b/internal/service/networkmanager/core_network_policy_document_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( diff --git a/internal/service/networkmanager/core_network_policy_model.go b/internal/service/networkmanager/core_network_policy_model.go index a34d165d1f7..daf1a732a0f 100644 --- a/internal/service/networkmanager/core_network_policy_model.go +++ b/internal/service/networkmanager/core_network_policy_model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( diff --git a/internal/service/networkmanager/core_network_test.go b/internal/service/networkmanager/core_network_test.go index d1c82e86ea9..8104d1e6f08 100644 --- a/internal/service/networkmanager/core_network_test.go +++ b/internal/service/networkmanager/core_network_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -320,7 +323,7 @@ func TestAccNetworkManagerCoreNetwork_withoutPolicyDocumentUpdateToCreateBasePol func testAccCheckCoreNetworkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_core_network" { @@ -355,7 +358,7 @@ func testAccCheckCoreNetworkExists(ctx context.Context, n string) resource.TestC return fmt.Errorf("No Network Manager Core Network ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) _, err := tfnetworkmanager.FindCoreNetworkByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/networkmanager/customer_gateway_association.go b/internal/service/networkmanager/customer_gateway_association.go index 0ddb8fcbf43..3baca75fe4b 100644 --- a/internal/service/networkmanager/customer_gateway_association.go +++ b/internal/service/networkmanager/customer_gateway_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -61,7 +64,7 @@ func ResourceCustomerGatewayAssociation() *schema.Resource { } func resourceCustomerGatewayAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) customerGatewayARN := d.Get("customer_gateway_arn").(string) @@ -103,20 +106,20 @@ func resourceCustomerGatewayAssociationCreate(ctx context.Context, d *schema.Res ) if err != nil { - return diag.Errorf("error creating Network Manager Customer Gateway Association (%s): %s", id, err) + return diag.Errorf("creating Network Manager Customer Gateway Association (%s): %s", id, err) } d.SetId(id) if _, err := waitCustomerGatewayAssociationCreated(ctx, conn, globalNetworkID, customerGatewayARN, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Customer Gateway Association (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Customer Gateway Association (%s) create: %s", d.Id(), err) } return resourceCustomerGatewayAssociationRead(ctx, d, meta) } func resourceCustomerGatewayAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, customerGatewayARN, err := CustomerGatewayAssociationParseResourceID(d.Id()) @@ -133,7 +136,7 @@ func resourceCustomerGatewayAssociationRead(ctx context.Context, d *schema.Resou } if err != nil { - return diag.Errorf("error reading Network Manager Customer Gateway Association (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Customer Gateway Association (%s): %s", d.Id(), err) } d.Set("customer_gateway_arn", output.CustomerGatewayArn) @@ 
-145,7 +148,7 @@ func resourceCustomerGatewayAssociationRead(ctx context.Context, d *schema.Resou } func resourceCustomerGatewayAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, customerGatewayARN, err := CustomerGatewayAssociationParseResourceID(d.Id()) @@ -176,11 +179,11 @@ func disassociateCustomerGateway(ctx context.Context, conn *networkmanager.Netwo } if err != nil { - return fmt.Errorf("error deleting Network Manager Customer Gateway Association (%s): %w", id, err) + return fmt.Errorf("deleting Network Manager Customer Gateway Association (%s): %w", id, err) } if _, err := waitCustomerGatewayAssociationDeleted(ctx, conn, globalNetworkID, customerGatewayARN, timeout); err != nil { - return fmt.Errorf("error waiting for Network Manager Customer Gateway Association (%s) delete: %w", id, err) + return fmt.Errorf("waiting for Network Manager Customer Gateway Association (%s) delete: %w", id, err) } return nil diff --git a/internal/service/networkmanager/customer_gateway_association_test.go b/internal/service/networkmanager/customer_gateway_association_test.go index 374b967c014..2a0f6c0fafb 100644 --- a/internal/service/networkmanager/customer_gateway_association_test.go +++ b/internal/service/networkmanager/customer_gateway_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -105,7 +108,7 @@ func testAccCustomerGatewayAssociation_Disappears_customerGateway(t *testing.T) func testAccCheckCustomerGatewayAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_customer_gateway_association" { @@ -146,7 +149,7 @@ func testAccCheckCustomerGatewayAssociationExists(ctx context.Context, n string) return fmt.Errorf("No Network Manager Customer Gateway Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, customerGatewayARN, err := tfnetworkmanager.CustomerGatewayAssociationParseResourceID(rs.Primary.ID) diff --git a/internal/service/networkmanager/device.go b/internal/service/networkmanager/device.go index 818a92d89c6..9db8f1b3e3f 100644 --- a/internal/service/networkmanager/device.go +++ b/internal/service/networkmanager/device.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -36,7 +39,7 @@ func ResourceDevice() *schema.Resource { parsedARN, err := arn.Parse(d.Id()) if err != nil { - return nil, fmt.Errorf("error parsing ARN (%s): %w", d.Id(), err) + return nil, fmt.Errorf("parsing ARN (%s): %w", d.Id(), err) } // See https://docs.aws.amazon.com/service-authorization/latest/reference/list_networkmanager.html#networkmanager-resources-for-iam-policies. 
@@ -151,12 +154,12 @@ func ResourceDevice() *schema.Resource { } func resourceDeviceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) input := &networkmanager.CreateDeviceInput{ GlobalNetworkId: aws.String(globalNetworkID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -195,20 +198,20 @@ func resourceDeviceCreate(ctx context.Context, d *schema.ResourceData, meta inte output, err := conn.CreateDeviceWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Device: %s", err) + return diag.Errorf("creating Network Manager Device: %s", err) } d.SetId(aws.StringValue(output.Device.DeviceId)) if _, err := waitDeviceCreated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Device (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Device (%s) create: %s", d.Id(), err) } return resourceDeviceRead(ctx, d, meta) } func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) device, err := FindDeviceByTwoPartKey(ctx, conn, globalNetworkID, d.Id()) @@ -220,13 +223,13 @@ func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.Errorf("error reading Network Manager Device (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Device (%s): %s", d.Id(), err) } d.Set("arn", device.DeviceArn) if device.AWSLocation != nil { if err := d.Set("aws_location", 
[]interface{}{flattenAWSLocation(device.AWSLocation)}); err != nil { - return diag.Errorf("error setting aws_location: %s", err) + return diag.Errorf("setting aws_location: %s", err) } } else { d.Set("aws_location", nil) @@ -235,7 +238,7 @@ func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("global_network_id", device.GlobalNetworkId) if device.Location != nil { if err := d.Set("location", []interface{}{flattenLocation(device.Location)}); err != nil { - return diag.Errorf("error setting location: %s", err) + return diag.Errorf("setting location: %s", err) } } else { d.Set("location", nil) @@ -246,13 +249,13 @@ func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("type", device.Type) d.Set("vendor", device.Vendor) - SetTagsOut(ctx, device.Tags) + setTagsOut(ctx, device.Tags) return nil } func resourceDeviceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { globalNetworkID := d.Get("global_network_id").(string) @@ -279,11 +282,11 @@ func resourceDeviceUpdate(ctx context.Context, d *schema.ResourceData, meta inte _, err := conn.UpdateDeviceWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Network Manager Device (%s): %s", d.Id(), err) + return diag.Errorf("updating Network Manager Device (%s): %s", d.Id(), err) } if _, err := waitDeviceUpdated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Network Manager Device (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Device (%s) update: %s", d.Id(), err) } } @@ -291,7 +294,7 @@ func resourceDeviceUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceDeviceDelete(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) @@ -306,11 +309,11 @@ func resourceDeviceDelete(ctx context.Context, d *schema.ResourceData, meta inte } if err != nil { - return diag.Errorf("error deleting Network Manager Device (%s): %s", d.Id(), err) + return diag.Errorf("deleting Network Manager Device (%s): %s", d.Id(), err) } if _, err := waitDeviceDeleted(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Network Manager Device (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Device (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/networkmanager/device_data_source.go b/internal/service/networkmanager/device_data_source.go index 6b1983da156..4610ce86923 100644 --- a/internal/service/networkmanager/device_data_source.go +++ b/internal/service/networkmanager/device_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -93,7 +96,7 @@ func DataSourceDevice() *schema.Resource { } func dataSourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig globalNetworkID := d.Get("global_network_id").(string) @@ -101,14 +104,14 @@ func dataSourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta inte device, err := FindDeviceByTwoPartKey(ctx, conn, globalNetworkID, deviceID) if err != nil { - return diag.Errorf("error reading Network Manager Device (%s): %s", deviceID, err) + return diag.Errorf("reading Network Manager Device (%s): %s", deviceID, err) } d.SetId(deviceID) d.Set("arn", device.DeviceArn) if device.AWSLocation != nil { if err := d.Set("aws_location", []interface{}{flattenAWSLocation(device.AWSLocation)}); err != nil { - return diag.Errorf("error setting aws_location: %s", err) + return diag.Errorf("setting aws_location: %s", err) } } else { d.Set("aws_location", nil) @@ -117,7 +120,7 @@ func dataSourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("device_id", device.DeviceId) if device.Location != nil { if err := d.Set("location", []interface{}{flattenLocation(device.Location)}); err != nil { - return diag.Errorf("error setting location: %s", err) + return diag.Errorf("setting location: %s", err) } } else { d.Set("location", nil) @@ -129,7 +132,7 @@ func dataSourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("vendor", device.Vendor) if err := d.Set("tags", KeyValueTags(ctx, device.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/networkmanager/device_data_source_test.go 
b/internal/service/networkmanager/device_data_source_test.go index 0074c1a8e4b..348d10e9112 100644 --- a/internal/service/networkmanager/device_data_source_test.go +++ b/internal/service/networkmanager/device_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( diff --git a/internal/service/networkmanager/device_test.go b/internal/service/networkmanager/device_test.go index 692d8d9a2d8..e8aa19e2b72 100644 --- a/internal/service/networkmanager/device_test.go +++ b/internal/service/networkmanager/device_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -218,7 +221,7 @@ func TestAccNetworkManagerDevice_awsLocation(t *testing.T) { // nosemgrep:ci.aws func testAccCheckDeviceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_device" { @@ -253,7 +256,7 @@ func testAccCheckDeviceExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("No Network Manager Device ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) _, err := tfnetworkmanager.FindDeviceByTwoPartKey(ctx, conn, rs.Primary.Attributes["global_network_id"], rs.Primary.ID) diff --git a/internal/service/networkmanager/devices_data_source.go b/internal/service/networkmanager/devices_data_source.go index dbd5974fb46..f29de277479 100644 --- a/internal/service/networkmanager/devices_data_source.go +++ b/internal/service/networkmanager/devices_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -36,7 +39,7 @@ func DataSourceDevices() *schema.Resource { } func dataSourceDevicesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -51,7 +54,7 @@ func dataSourceDevicesRead(ctx context.Context, d *schema.ResourceData, meta int output, err := FindDevices(ctx, conn, input) if err != nil { - return diag.Errorf("error listing Network Manager Devices: %s", err) + return diag.Errorf("listing Network Manager Devices: %s", err) } var deviceIDs []string diff --git a/internal/service/networkmanager/devices_data_source_test.go b/internal/service/networkmanager/devices_data_source_test.go index 81364d494b3..734588c46c8 100644 --- a/internal/service/networkmanager/devices_data_source_test.go +++ b/internal/service/networkmanager/devices_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -24,7 +27,7 @@ func TestAccNetworkManagerDevicesDataSource_basic(t *testing.T) { { Config: testAccDevicesDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", 1), resource.TestCheckResourceAttr(dataSourceByTagsName, "ids.#", "1"), ), }, diff --git a/internal/service/networkmanager/errors.go b/internal/service/networkmanager/errors.go index 6ff3b68dc30..ad293d814be 100644 --- a/internal/service/networkmanager/errors.go +++ b/internal/service/networkmanager/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( diff --git a/internal/service/networkmanager/generate.go b/internal/service/networkmanager/generate.go index b342f557339..f6a18a9e8a1 100644 --- a/internal/service/networkmanager/generate.go +++ b/internal/service/networkmanager/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package networkmanager diff --git a/internal/service/networkmanager/global_network.go b/internal/service/networkmanager/global_network.go index cb8e2ed1a02..f0ee4b518ac 100644 --- a/internal/service/networkmanager/global_network.go +++ b/internal/service/networkmanager/global_network.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -57,10 +60,10 @@ func ResourceGlobalNetwork() *schema.Resource { } func resourceGlobalNetworkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) input := &networkmanager.CreateGlobalNetworkInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -71,20 +74,20 @@ func resourceGlobalNetworkCreate(ctx context.Context, d *schema.ResourceData, me output, err := conn.CreateGlobalNetworkWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Global Network: %s", err) + return diag.Errorf("creating Network Manager Global Network: %s", err) } d.SetId(aws.StringValue(output.GlobalNetwork.GlobalNetworkId)) if _, err := waitGlobalNetworkCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return 
diag.Errorf("error waiting for Network Manager Global Network (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Global Network (%s) create: %s", d.Id(), err) } return resourceGlobalNetworkRead(ctx, d, meta) } func resourceGlobalNetworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetwork, err := FindGlobalNetworkByID(ctx, conn, d.Id()) @@ -95,19 +98,19 @@ func resourceGlobalNetworkRead(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.Errorf("error reading Network Manager Global Network (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Global Network (%s): %s", d.Id(), err) } d.Set("arn", globalNetwork.GlobalNetworkArn) d.Set("description", globalNetwork.Description) - SetTagsOut(ctx, globalNetwork.Tags) + setTagsOut(ctx, globalNetwork.Tags) return nil } func resourceGlobalNetworkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &networkmanager.UpdateGlobalNetworkInput{ @@ -119,11 +122,11 @@ func resourceGlobalNetworkUpdate(ctx context.Context, d *schema.ResourceData, me _, err := conn.UpdateGlobalNetworkWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Network Manager Global Network (%s): %s", d.Id(), err) + return diag.Errorf("updating Network Manager Global Network (%s): %s", d.Id(), err) } if _, err := waitGlobalNetworkUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Network Manager Global Network (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Global Network (%s) update: %s", d.Id(), err) } } @@ 
-131,7 +134,7 @@ func resourceGlobalNetworkUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceGlobalNetworkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if diags := disassociateCustomerGateways(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); diags.HasError() { return diags @@ -166,11 +169,11 @@ func resourceGlobalNetworkDelete(ctx context.Context, d *schema.ResourceData, me } if err != nil { - return diag.Errorf("error deleting Network Manager Global Network (%s): %s", d.Id(), err) + return diag.Errorf("deleting Network Manager Global Network (%s): %s", d.Id(), err) } if _, err := waitGlobalNetworkDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Network Manager Global Network (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Global Network (%s) delete: %s", d.Id(), err) } return nil @@ -186,7 +189,7 @@ func deregisterTransitGateways(ctx context.Context, conn *networkmanager.Network } if err != nil { - return diag.Errorf("error listing Network Manager Transit Gateway Registrations (%s): %s", globalNetworkID, err) + return diag.Errorf("listing Network Manager Transit Gateway Registrations (%s): %s", globalNetworkID, err) } var diags diag.Diagnostics @@ -216,7 +219,7 @@ func disassociateCustomerGateways(ctx context.Context, conn *networkmanager.Netw } if err != nil { - return diag.Errorf("error listing Network Manager Customer Gateway Associations (%s): %s", globalNetworkID, err) + return diag.Errorf("listing Network Manager Customer Gateway Associations (%s): %s", globalNetworkID, err) } var diags diag.Diagnostics @@ -246,7 +249,7 @@ func disassociateTransitGatewayConnectPeers(ctx context.Context, conn *networkma } if err != nil { - return diag.Errorf("error listing Network Manager Transit 
Gateway Connect Peer Associations (%s): %s", globalNetworkID, err) + return diag.Errorf("listing Network Manager Transit Gateway Connect Peer Associations (%s): %s", globalNetworkID, err) } var diags diag.Diagnostics diff --git a/internal/service/networkmanager/global_network_data_source.go b/internal/service/networkmanager/global_network_data_source.go index bd1883d384d..3d10c56509c 100644 --- a/internal/service/networkmanager/global_network_data_source.go +++ b/internal/service/networkmanager/global_network_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -33,14 +36,14 @@ func DataSourceGlobalNetwork() *schema.Resource { } func dataSourceGlobalNetworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig globalNetworkID := d.Get("global_network_id").(string) globalNetwork, err := FindGlobalNetworkByID(ctx, conn, globalNetworkID) if err != nil { - return diag.Errorf("error reading Network Manager Global Network (%s): %s", globalNetworkID, err) + return diag.Errorf("reading Network Manager Global Network (%s): %s", globalNetworkID, err) } d.SetId(globalNetworkID) @@ -49,7 +52,7 @@ func dataSourceGlobalNetworkRead(ctx context.Context, d *schema.ResourceData, me d.Set("global_network_id", globalNetwork.GlobalNetworkId) if err := d.Set("tags", KeyValueTags(ctx, globalNetwork.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/networkmanager/global_network_data_source_test.go b/internal/service/networkmanager/global_network_data_source_test.go index d4f2208edcc..e51dcbb9b64 100644 --- 
a/internal/service/networkmanager/global_network_data_source_test.go +++ b/internal/service/networkmanager/global_network_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( diff --git a/internal/service/networkmanager/global_network_test.go b/internal/service/networkmanager/global_network_test.go index 441fa577e1e..f437056bbc9 100644 --- a/internal/service/networkmanager/global_network_test.go +++ b/internal/service/networkmanager/global_network_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -144,7 +147,7 @@ func TestAccNetworkManagerGlobalNetwork_description(t *testing.T) { func testAccCheckGlobalNetworkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_global_network" { @@ -179,7 +182,7 @@ func testAccCheckGlobalNetworkExists(ctx context.Context, n string) resource.Tes return fmt.Errorf("No Network Manager Global Network ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) _, err := tfnetworkmanager.FindGlobalNetworkByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/networkmanager/global_networks_data_source.go b/internal/service/networkmanager/global_networks_data_source.go index 9bb05e3b2fa..775745a46b5 100644 --- a/internal/service/networkmanager/global_networks_data_source.go +++ b/internal/service/networkmanager/global_networks_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -28,14 +31,14 @@ func DataSourceGlobalNetworks() *schema.Resource { } func dataSourceGlobalNetworksRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) output, err := FindGlobalNetworks(ctx, conn, &networkmanager.DescribeGlobalNetworksInput{}) if err != nil { - return diag.Errorf("error listing Network Manager Global Networks: %s", err) + return diag.Errorf("listing Network Manager Global Networks: %s", err) } var globalNetworkIDs []string diff --git a/internal/service/networkmanager/global_networks_data_source_test.go b/internal/service/networkmanager/global_networks_data_source_test.go index b82d2bfad80..b8f53223a2a 100644 --- a/internal/service/networkmanager/global_networks_data_source_test.go +++ b/internal/service/networkmanager/global_networks_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -24,7 +27,7 @@ func TestAccNetworkManagerGlobalNetworksDataSource_basic(t *testing.T) { { Config: testAccGlobalNetworksDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", 1), resource.TestCheckResourceAttr(dataSourceByTagsName, "ids.#", "1"), ), }, diff --git a/internal/service/networkmanager/link.go b/internal/service/networkmanager/link.go index cd12b761e61..f0f6814fd88 100644 --- a/internal/service/networkmanager/link.go +++ b/internal/service/networkmanager/link.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -36,7 +39,7 @@ func ResourceLink() *schema.Resource { parsedARN, err := arn.Parse(d.Id()) if err != nil { - return nil, fmt.Errorf("error parsing ARN (%s): %w", d.Id(), err) + return nil, fmt.Errorf("parsing ARN (%s): %w", d.Id(), err) } // See https://docs.aws.amazon.com/service-authorization/latest/reference/list_networkmanager.html#networkmanager-resources-for-iam-policies. @@ -115,13 +118,13 @@ func ResourceLink() *schema.Resource { } func resourceLinkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) input := &networkmanager.CreateLinkInput{ GlobalNetworkId: aws.String(globalNetworkID), SiteId: aws.String(d.Get("site_id").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("bandwidth"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -144,20 +147,20 @@ func resourceLinkCreate(ctx context.Context, d *schema.ResourceData, meta interf output, err := conn.CreateLinkWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Link: %s", err) + return diag.Errorf("creating Network Manager Link: %s", err) } d.SetId(aws.StringValue(output.Link.LinkId)) if _, err := waitLinkCreated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Link (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Link (%s) create: %s", d.Id(), err) } return resourceLinkRead(ctx, d, meta) } func resourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := 
d.Get("global_network_id").(string) link, err := FindLinkByTwoPartKey(ctx, conn, globalNetworkID, d.Id()) @@ -169,13 +172,13 @@ func resourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interfac } if err != nil { - return diag.Errorf("error reading Network Manager Link (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Link (%s): %s", d.Id(), err) } d.Set("arn", link.LinkArn) if link.Bandwidth != nil { if err := d.Set("bandwidth", []interface{}{flattenBandwidth(link.Bandwidth)}); err != nil { - return diag.Errorf("error setting bandwidth: %s", err) + return diag.Errorf("setting bandwidth: %s", err) } } else { d.Set("bandwidth", nil) @@ -186,13 +189,13 @@ func resourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("site_id", link.SiteId) d.Set("type", link.Type) - SetTagsOut(ctx, link.Tags) + setTagsOut(ctx, link.Tags) return nil } func resourceLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { globalNetworkID := d.Get("global_network_id").(string) @@ -212,11 +215,11 @@ func resourceLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err := conn.UpdateLinkWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Network Manager Link (%s): %s", d.Id(), err) + return diag.Errorf("updating Network Manager Link (%s): %s", d.Id(), err) } if _, err := waitLinkUpdated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Network Manager Link (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Link (%s) update: %s", d.Id(), err) } } @@ -224,7 +227,7 @@ func resourceLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceLinkDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) @@ -239,11 +242,11 @@ func resourceLinkDelete(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.Errorf("error deleting Network Manager Link (%s): %s", d.Id(), err) + return diag.Errorf("deleting Network Manager Link (%s): %s", d.Id(), err) } if _, err := waitLinkDeleted(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Network Manager Link (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Link (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/networkmanager/link_association.go b/internal/service/networkmanager/link_association.go index 7709d31ddce..a442890ed3c 100644 --- a/internal/service/networkmanager/link_association.go +++ b/internal/service/networkmanager/link_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -54,7 +57,7 @@ func ResourceLinkAssociation() *schema.Resource { } func resourceLinkAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) linkID := d.Get("link_id").(string) @@ -70,20 +73,20 @@ func resourceLinkAssociationCreate(ctx context.Context, d *schema.ResourceData, _, err := conn.AssociateLinkWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Link Association (%s): %s", id, err) + return diag.Errorf("creating Network Manager Link Association (%s): %s", id, err) } d.SetId(id) if _, err := waitLinkAssociationCreated(ctx, conn, globalNetworkID, linkID, deviceID, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Link Association (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Link Association (%s) create: %s", d.Id(), err) } return resourceLinkAssociationRead(ctx, d, meta) } func resourceLinkAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, linkID, deviceID, err := LinkAssociationParseResourceID(d.Id()) @@ -100,7 +103,7 @@ func resourceLinkAssociationRead(ctx context.Context, d *schema.ResourceData, me } if err != nil { - return diag.Errorf("error reading Network Manager Link Association (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Link Association (%s): %s", d.Id(), err) } d.Set("device_id", output.DeviceId) @@ -111,7 +114,7 @@ func resourceLinkAssociationRead(ctx context.Context, d *schema.ResourceData, me } func resourceLinkAssociationDelete(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, linkID, deviceID, err := LinkAssociationParseResourceID(d.Id()) @@ -131,11 +134,11 @@ func resourceLinkAssociationDelete(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.Errorf("error deleting Network Manager Link Association (%s): %s", d.Id(), err) + return diag.Errorf("deleting Network Manager Link Association (%s): %s", d.Id(), err) } if _, err := waitLinkAssociationDeleted(ctx, conn, globalNetworkID, linkID, deviceID, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Link Association (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Link Association (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/networkmanager/link_association_test.go b/internal/service/networkmanager/link_association_test.go index 26941f21aa3..1e509975177 100644 --- a/internal/service/networkmanager/link_association_test.go +++ b/internal/service/networkmanager/link_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -66,7 +69,7 @@ func TestAccNetworkManagerLinkAssociation_disappears(t *testing.T) { func testAccCheckLinkAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_link_association" { @@ -107,7 +110,7 @@ func testAccCheckLinkAssociationExists(ctx context.Context, n string) resource.T return fmt.Errorf("No Network Manager Link Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, linkID, deviceID, err := tfnetworkmanager.LinkAssociationParseResourceID(rs.Primary.ID) diff --git a/internal/service/networkmanager/link_data_source.go b/internal/service/networkmanager/link_data_source.go index aef037b9f18..8c4cba4b55e 100644 --- a/internal/service/networkmanager/link_data_source.go +++ b/internal/service/networkmanager/link_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -65,7 +68,7 @@ func DataSourceLink() *schema.Resource { } func dataSourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig globalNetworkID := d.Get("global_network_id").(string) @@ -73,14 +76,14 @@ func dataSourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interf link, err := FindLinkByTwoPartKey(ctx, conn, globalNetworkID, linkID) if err != nil { - return diag.Errorf("error reading Network Manager Link (%s): %s", linkID, err) + return diag.Errorf("reading Network Manager Link (%s): %s", linkID, err) } d.SetId(linkID) d.Set("arn", link.LinkArn) if link.Bandwidth != nil { if err := d.Set("bandwidth", []interface{}{flattenBandwidth(link.Bandwidth)}); err != nil { - return diag.Errorf("error setting bandwidth: %s", err) + return diag.Errorf("setting bandwidth: %s", err) } } else { d.Set("bandwidth", nil) @@ -93,7 +96,7 @@ func dataSourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("type", link.Type) if err := d.Set("tags", KeyValueTags(ctx, link.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/networkmanager/link_data_source_test.go b/internal/service/networkmanager/link_data_source_test.go index b7a54678a05..d5c339dd1ac 100644 --- a/internal/service/networkmanager/link_data_source_test.go +++ b/internal/service/networkmanager/link_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( diff --git a/internal/service/networkmanager/link_test.go b/internal/service/networkmanager/link_test.go index c8fd694bf6d..a19acc23927 100644 --- a/internal/service/networkmanager/link_test.go +++ b/internal/service/networkmanager/link_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -166,7 +169,7 @@ func TestAccNetworkManagerLink_allAttributes(t *testing.T) { func testAccCheckLinkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_link" { @@ -201,7 +204,7 @@ func testAccCheckLinkExists(ctx context.Context, n string) resource.TestCheckFun return fmt.Errorf("No Network Manager Link ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) _, err := tfnetworkmanager.FindLinkByTwoPartKey(ctx, conn, rs.Primary.Attributes["global_network_id"], rs.Primary.ID) diff --git a/internal/service/networkmanager/links_data_source.go b/internal/service/networkmanager/links_data_source.go index 77656508ca3..e68317b1750 100644 --- a/internal/service/networkmanager/links_data_source.go +++ b/internal/service/networkmanager/links_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -44,7 +47,7 @@ func DataSourceLinks() *schema.Resource { } func dataSourceLinksRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -67,7 +70,7 @@ func dataSourceLinksRead(ctx context.Context, d *schema.ResourceData, meta inter output, err := FindLinks(ctx, conn, input) if err != nil { - return diag.Errorf("error listing Network Manager Links: %s", err) + return diag.Errorf("listing Network Manager Links: %s", err) } var linkIDs []string diff --git a/internal/service/networkmanager/links_data_source_test.go b/internal/service/networkmanager/links_data_source_test.go index b23560bfefa..9f39b7d373b 100644 --- a/internal/service/networkmanager/links_data_source_test.go +++ b/internal/service/networkmanager/links_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -24,7 +27,7 @@ func TestAccNetworkManagerLinksDataSource_basic(t *testing.T) { { Config: testAccLinksDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", 1), resource.TestCheckResourceAttr(dataSourceByTagsName, "ids.#", "1"), ), }, diff --git a/internal/service/networkmanager/service_package_gen.go b/internal/service/networkmanager/service_package_gen.go index d86064f1006..66504591086 100644 --- a/internal/service/networkmanager/service_package_gen.go +++ b/internal/service/networkmanager/service_package_gen.go @@ -5,6 +5,10 @@ package networkmanager import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + networkmanager_sdkv1 "github.com/aws/aws-sdk-go/service/networkmanager" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -197,4 +201,13 @@ func (p *servicePackage) ServicePackageName() string { return names.NetworkManager } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*networkmanager_sdkv1.NetworkManager, error) { + sess := config["session"].(*session_sdkv1.Session) + + return networkmanager_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/networkmanager/site.go b/internal/service/networkmanager/site.go index 7f6623ea2c0..7449ea0b48c 100644 --- a/internal/service/networkmanager/site.go +++ b/internal/service/networkmanager/site.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -36,7 +39,7 @@ func ResourceSite() *schema.Resource { parsedARN, err := arn.Parse(d.Id()) if err != nil { - return nil, fmt.Errorf("error parsing ARN (%s): %w", d.Id(), err) + return nil, fmt.Errorf("parsing ARN (%s): %w", d.Id(), err) } // See https://docs.aws.amazon.com/service-authorization/latest/reference/list_networkmanager.html#networkmanager-resources-for-iam-policies. 
@@ -107,12 +110,12 @@ func ResourceSite() *schema.Resource { } func resourceSiteCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) input := &networkmanager.CreateSiteInput{ GlobalNetworkId: aws.String(globalNetworkID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -127,20 +130,20 @@ func resourceSiteCreate(ctx context.Context, d *schema.ResourceData, meta interf output, err := conn.CreateSiteWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Site: %s", err) + return diag.Errorf("creating Network Manager Site: %s", err) } d.SetId(aws.StringValue(output.Site.SiteId)) if _, err := waitSiteCreated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Site (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Site (%s) create: %s", d.Id(), err) } return resourceSiteRead(ctx, d, meta) } func resourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) site, err := FindSiteByTwoPartKey(ctx, conn, globalNetworkID, d.Id()) @@ -152,7 +155,7 @@ func resourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interfac } if err != nil { - return diag.Errorf("error reading Network Manager Site (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Site (%s): %s", d.Id(), err) } d.Set("arn", site.SiteArn) @@ -160,19 +163,19 @@ func resourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("global_network_id", site.GlobalNetworkId) 
if site.Location != nil { if err := d.Set("location", []interface{}{flattenLocation(site.Location)}); err != nil { - return diag.Errorf("error setting location: %s", err) + return diag.Errorf("setting location: %s", err) } } else { d.Set("location", nil) } - SetTagsOut(ctx, site.Tags) + setTagsOut(ctx, site.Tags) return nil } func resourceSiteUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { globalNetworkID := d.Get("global_network_id").(string) @@ -190,11 +193,11 @@ func resourceSiteUpdate(ctx context.Context, d *schema.ResourceData, meta interf _, err := conn.UpdateSiteWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Network Manager Site (%s): %s", d.Id(), err) + return diag.Errorf("updating Network Manager Site (%s): %s", d.Id(), err) } if _, err := waitSiteUpdated(ctx, conn, globalNetworkID, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return diag.Errorf("error waiting for Network Manager Site (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Site (%s) update: %s", d.Id(), err) } } @@ -202,7 +205,7 @@ func resourceSiteUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceSiteDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) @@ -228,11 +231,11 @@ func resourceSiteDelete(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.Errorf("error deleting Network Manager Site (%s): %s", d.Id(), err) + return diag.Errorf("deleting Network Manager Site (%s): %s", d.Id(), err) } if _, err := waitSiteDeleted(ctx, conn, globalNetworkID, d.Id(), 
d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.Errorf("error waiting for Network Manager Site (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Site (%s) delete: %s", d.Id(), err) } return nil diff --git a/internal/service/networkmanager/site_data_source.go b/internal/service/networkmanager/site_data_source.go index 9fa3ed3e0cd..ab8ca54c208 100644 --- a/internal/service/networkmanager/site_data_source.go +++ b/internal/service/networkmanager/site_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -57,7 +60,7 @@ func DataSourceSite() *schema.Resource { } func dataSourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig globalNetworkID := d.Get("global_network_id").(string) @@ -65,7 +68,7 @@ func dataSourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interf site, err := FindSiteByTwoPartKey(ctx, conn, globalNetworkID, siteID) if err != nil { - return diag.Errorf("error reading Network Manager Site (%s): %s", siteID, err) + return diag.Errorf("reading Network Manager Site (%s): %s", siteID, err) } d.SetId(siteID) @@ -74,7 +77,7 @@ func dataSourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("global_network_id", site.GlobalNetworkId) if site.Location != nil { if err := d.Set("location", []interface{}{flattenLocation(site.Location)}); err != nil { - return diag.Errorf("error setting location: %s", err) + return diag.Errorf("setting location: %s", err) } } else { d.Set("location", nil) @@ -82,7 +85,7 @@ func dataSourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("site_id", site.SiteId) if err := d.Set("tags", KeyValueTags(ctx, 
site.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } return nil diff --git a/internal/service/networkmanager/site_data_source_test.go b/internal/service/networkmanager/site_data_source_test.go index 007c4cc0f07..b428664e5c5 100644 --- a/internal/service/networkmanager/site_data_source_test.go +++ b/internal/service/networkmanager/site_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( diff --git a/internal/service/networkmanager/site_test.go b/internal/service/networkmanager/site_test.go index 226345e8ec1..22be6a4f389 100644 --- a/internal/service/networkmanager/site_test.go +++ b/internal/service/networkmanager/site_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -193,7 +196,7 @@ func TestAccNetworkManagerSite_location(t *testing.T) { func testAccCheckSiteDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_site" { @@ -228,7 +231,7 @@ func testAccCheckSiteExists(ctx context.Context, n string) resource.TestCheckFun return fmt.Errorf("No Network Manager Site ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) _, err := tfnetworkmanager.FindSiteByTwoPartKey(ctx, conn, rs.Primary.Attributes["global_network_id"], rs.Primary.ID) diff --git a/internal/service/networkmanager/site_to_site_vpn_attachment.go b/internal/service/networkmanager/site_to_site_vpn_attachment.go index 
f7d6fc84aea..baf77787e20 100644 --- a/internal/service/networkmanager/site_to_site_vpn_attachment.go +++ b/internal/service/networkmanager/site_to_site_vpn_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -98,13 +101,13 @@ func ResourceSiteToSiteVPNAttachment() *schema.Resource { } func resourceSiteToSiteVPNAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) coreNetworkID := d.Get("core_network_id").(string) vpnConnectionARN := d.Get("vpn_connection_arn").(string) input := &networkmanager.CreateSiteToSiteVpnAttachmentInput{ CoreNetworkId: aws.String(coreNetworkID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpnConnectionArn: aws.String(vpnConnectionARN), } @@ -124,7 +127,7 @@ func resourceSiteToSiteVPNAttachmentCreate(ctx context.Context, d *schema.Resour } func resourceSiteToSiteVPNAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) vpnAttachment, err := FindSiteToSiteVPNAttachmentByID(ctx, conn, d.Id()) @@ -157,7 +160,7 @@ func resourceSiteToSiteVPNAttachmentRead(ctx context.Context, d *schema.Resource d.Set("state", a.State) d.Set("vpn_connection_arn", a.ResourceArn) - SetTagsOut(ctx, a.Tags) + setTagsOut(ctx, a.Tags) return nil } @@ -168,7 +171,7 @@ func resourceSiteToSiteVPNAttachmentUpdate(ctx context.Context, d *schema.Resour } func resourceSiteToSiteVPNAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) // If ResourceAttachmentAccepter is used, then VPN Attachment state // is never updated from 
StatePendingAttachmentAcceptance and the delete fails diff --git a/internal/service/networkmanager/site_to_site_vpn_attachment_test.go b/internal/service/networkmanager/site_to_site_vpn_attachment_test.go index 5be3ade143f..98f9c71f92d 100644 --- a/internal/service/networkmanager/site_to_site_vpn_attachment_test.go +++ b/internal/service/networkmanager/site_to_site_vpn_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -153,7 +156,7 @@ func testAccCheckSiteToSiteVPNAttachmentExists(ctx context.Context, n string, v return fmt.Errorf("No Network Manager Site To Site VPN Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) output, err := tfnetworkmanager.FindSiteToSiteVPNAttachmentByID(ctx, conn, rs.Primary.ID) @@ -169,7 +172,7 @@ func testAccCheckSiteToSiteVPNAttachmentExists(ctx context.Context, n string, v func testAccCheckSiteToSiteVPNAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_site_to_site_vpn_attachment" { diff --git a/internal/service/networkmanager/sites_data_source.go b/internal/service/networkmanager/sites_data_source.go index 9d0eefacdcf..0289c32c79d 100644 --- a/internal/service/networkmanager/sites_data_source.go +++ b/internal/service/networkmanager/sites_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -32,7 +35,7 @@ func DataSourceSites() *schema.Resource { } func dataSourceSitesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig tagsToMatch := tftags.New(ctx, d.Get("tags").(map[string]interface{})).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -41,7 +44,7 @@ func dataSourceSitesRead(ctx context.Context, d *schema.ResourceData, meta inter }) if err != nil { - return diag.Errorf("error listing Network Manager Sites: %s", err) + return diag.Errorf("listing Network Manager Sites: %s", err) } var siteIDs []string diff --git a/internal/service/networkmanager/sites_data_source_test.go b/internal/service/networkmanager/sites_data_source_test.go index 9ea35f56471..b82bb21c335 100644 --- a/internal/service/networkmanager/sites_data_source_test.go +++ b/internal/service/networkmanager/sites_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -24,7 +27,7 @@ func TestAccNetworkManagerSitesDataSource_basic(t *testing.T) { { Config: testAccSitesDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceAllName, "ids.#", 1), resource.TestCheckResourceAttr(dataSourceByTagsName, "ids.#", "1"), ), }, diff --git a/internal/service/networkmanager/sweep.go b/internal/service/networkmanager/sweep.go index fe12d56a09b..9289807c00d 100644 --- a/internal/service/networkmanager/sweep.go +++ b/internal/service/networkmanager/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/networkmanager" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -105,11 +107,11 @@ func init() { func sweepGlobalNetworks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.DescribeGlobalNetworksInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -138,7 +140,7 @@ func sweepGlobalNetworks(region string) error { return fmt.Errorf("error listing Network Manager Global Networks (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Network Manager Global Networks (%s): %w", region, err) @@ -149,11 +151,11 @@ func sweepGlobalNetworks(region string) error { func sweepCoreNetworks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.ListCoreNetworksInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -182,7 +184,7 @@ func sweepCoreNetworks(region string) error { return fmt.Errorf("error listing Network Manager Core Networks (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, 
sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Network Manager Core Networks (%s): %w", region, err) @@ -193,11 +195,11 @@ func sweepCoreNetworks(region string) error { func sweepConnectAttachments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.ListAttachmentsInput{ AttachmentType: aws.String(networkmanager.AttachmentTypeConnect), } @@ -228,7 +230,7 @@ func sweepConnectAttachments(region string) error { return fmt.Errorf("error listing Network Manager Connect Attachments (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Network Manager Connect Attachments (%s): %w", region, err) @@ -239,11 +241,11 @@ func sweepConnectAttachments(region string) error { func sweepSiteToSiteVPNAttachments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.ListAttachmentsInput{ AttachmentType: aws.String(networkmanager.AttachmentTypeSiteToSiteVpn), } @@ -274,7 +276,7 @@ func sweepSiteToSiteVPNAttachments(region string) error { return fmt.Errorf("error listing Network Manager Site To Site VPN Attachments (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { 
return fmt.Errorf("error sweeping Network Manager Site To Site VPN Attachments (%s): %w", region, err) @@ -285,11 +287,11 @@ func sweepSiteToSiteVPNAttachments(region string) error { func sweepTransitGatewayPeerings(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.ListPeeringsInput{ PeeringType: aws.String(networkmanager.PeeringTypeTransitGateway), } @@ -320,7 +322,7 @@ func sweepTransitGatewayPeerings(region string) error { return fmt.Errorf("error listing Network Manager Transit Gateway Peerings (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Network Manager Transit Gateway Peerings (%s): %w", region, err) @@ -331,11 +333,11 @@ func sweepTransitGatewayPeerings(region string) error { func sweepTransitGatewayRouteTableAttachments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.ListAttachmentsInput{ AttachmentType: aws.String(networkmanager.AttachmentTypeTransitGatewayRouteTable), } @@ -366,7 +368,7 @@ func sweepTransitGatewayRouteTableAttachments(region string) error { return fmt.Errorf("error listing Network Manager Transit Gateway Route Table Attachments (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != 
nil { return fmt.Errorf("error sweeping Network Manager Transit Gateway Route Table Attachments (%s): %w", region, err) @@ -377,11 +379,11 @@ func sweepTransitGatewayRouteTableAttachments(region string) error { func sweepVPCAttachments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.ListAttachmentsInput{ AttachmentType: aws.String(networkmanager.AttachmentTypeVpc), } @@ -412,7 +414,7 @@ func sweepVPCAttachments(region string) error { return fmt.Errorf("error listing Network Manager VPC Attachments (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Network Manager VPC Attachments (%s): %w", region, err) @@ -423,11 +425,11 @@ func sweepVPCAttachments(region string) error { func sweepSites(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.DescribeGlobalNetworksInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -480,7 +482,7 @@ func sweepSites(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Network Manager Global Networks (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, 
fmt.Errorf("error sweeping Network Manager Sites (%s): %w", region, err)) @@ -491,11 +493,11 @@ func sweepSites(region string) error { func sweepDevices(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.DescribeGlobalNetworksInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -548,7 +550,7 @@ func sweepDevices(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Network Manager Global Networks (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Network Manager Devices (%s): %w", region, err)) @@ -559,11 +561,11 @@ func sweepDevices(region string) error { func sweepLinks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.DescribeGlobalNetworksInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -616,7 +618,7 @@ func sweepLinks(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Network Manager Global Networks (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, 
fmt.Errorf("error sweeping Network Manager Links (%s): %w", region, err)) @@ -627,11 +629,11 @@ func sweepLinks(region string) error { func sweepLinkAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.DescribeGlobalNetworksInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -683,7 +685,7 @@ func sweepLinkAssociations(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Network Manager Global Networks (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Network Manager Link Associations (%s): %w", region, err)) @@ -694,11 +696,11 @@ func sweepLinkAssociations(region string) error { func sweepConnections(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).NetworkManagerConn() + conn := client.NetworkManagerConn(ctx) input := &networkmanager.DescribeGlobalNetworksInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -751,7 +753,7 @@ func sweepConnections(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Network Manager Global Networks (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { 
sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Network Manager Connections (%s): %w", region, err)) diff --git a/internal/service/networkmanager/tags_gen.go b/internal/service/networkmanager/tags_gen.go index 4ee00748ac5..6779ef11ea8 100644 --- a/internal/service/networkmanager/tags_gen.go +++ b/internal/service/networkmanager/tags_gen.go @@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*networkmanager.Tag) tftags.KeyVal return tftags.New(ctx, m) } -// GetTagsIn returns networkmanager service tags from Context. +// getTagsIn returns networkmanager service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*networkmanager.Tag { +func getTagsIn(ctx context.Context) []*networkmanager.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*networkmanager.Tag { return nil } -// SetTagsOut sets networkmanager service tags in Context. -func SetTagsOut(ctx context.Context, tags []*networkmanager.Tag) { +// setTagsOut sets networkmanager service tags in Context. +func setTagsOut(ctx context.Context, tags []*networkmanager.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates networkmanager service tags. +// updateTags updates networkmanager service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn networkmanageriface.NetworkManagerAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn networkmanageriface.NetworkManagerAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn networkmanageriface.NetworkManagerAPI, // UpdateTags updates networkmanager service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).NetworkManagerConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).NetworkManagerConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/networkmanager/transit_gateway_connect_peer_association.go b/internal/service/networkmanager/transit_gateway_connect_peer_association.go index b4ed701a5a0..e6f63fa5a54 100644 --- a/internal/service/networkmanager/transit_gateway_connect_peer_association.go +++ b/internal/service/networkmanager/transit_gateway_connect_peer_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -61,7 +64,7 @@ func ResourceTransitGatewayConnectPeerAssociation() *schema.Resource { } func resourceTransitGatewayConnectPeerAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) connectPeerARN := d.Get("transit_gateway_connect_peer_arn").(string) @@ -80,20 +83,20 @@ func resourceTransitGatewayConnectPeerAssociationCreate(ctx context.Context, d * _, err := conn.AssociateTransitGatewayConnectPeerWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Transit Gateway Connect Peer Association (%s): %s", id, err) + return diag.Errorf("creating Network Manager Transit Gateway Connect Peer Association (%s): %s", id, err) } d.SetId(id) if _, err := waitTransitGatewayConnectPeerAssociationCreated(ctx, conn, globalNetworkID, connectPeerARN, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Transit Gateway Connect Peer Association (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Transit Gateway Connect Peer Association (%s) create: %s", d.Id(), err) } return resourceTransitGatewayConnectPeerAssociationRead(ctx, d, meta) } func resourceTransitGatewayConnectPeerAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, connectPeerARN, err := TransitGatewayConnectPeerAssociationParseResourceID(d.Id()) @@ -110,7 +113,7 @@ func resourceTransitGatewayConnectPeerAssociationRead(ctx context.Context, d *sc } if err != nil { - return diag.Errorf("error reading Network Manager Transit Gateway Connect Peer Association 
(%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Transit Gateway Connect Peer Association (%s): %s", d.Id(), err) } d.Set("device_id", output.DeviceId) @@ -122,7 +125,7 @@ func resourceTransitGatewayConnectPeerAssociationRead(ctx context.Context, d *sc } func resourceTransitGatewayConnectPeerAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, connectPeerARN, err := TransitGatewayConnectPeerAssociationParseResourceID(d.Id()) @@ -153,11 +156,11 @@ func disassociateTransitGatewayConnectPeer(ctx context.Context, conn *networkman } if err != nil { - return fmt.Errorf("error deleting Network Manager Transit Gateway Connect Peer Association (%s): %w", id, err) + return fmt.Errorf("deleting Network Manager Transit Gateway Connect Peer Association (%s): %w", id, err) } if _, err := waitTransitGatewayConnectPeerAssociationDeleted(ctx, conn, globalNetworkID, connectPeerARN, timeout); err != nil { - return fmt.Errorf("error waiting for Network Manager Transit Gateway Connect Peer Association (%s) delete: %w", id, err) + return fmt.Errorf("waiting for Network Manager Transit Gateway Connect Peer Association (%s) delete: %w", id, err) } return nil diff --git a/internal/service/networkmanager/transit_gateway_connect_peer_association_test.go b/internal/service/networkmanager/transit_gateway_connect_peer_association_test.go index da4376e8e90..50a9e9b08e2 100644 --- a/internal/service/networkmanager/transit_gateway_connect_peer_association_test.go +++ b/internal/service/networkmanager/transit_gateway_connect_peer_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -103,7 +106,7 @@ func testAccTransitGatewayConnectPeerAssociation_Disappears_connectPeer(t *testi func testAccCheckTransitGatewayConnectPeerAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_customer_gateway_association" { @@ -144,7 +147,7 @@ func testAccCheckTransitGatewayConnectPeerAssociationExists(ctx context.Context, return fmt.Errorf("No Network Manager Transit Gateway Connect Peer Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, connectPeerARN, err := tfnetworkmanager.TransitGatewayConnectPeerAssociationParseResourceID(rs.Primary.ID) diff --git a/internal/service/networkmanager/transit_gateway_peering.go b/internal/service/networkmanager/transit_gateway_peering.go index b2c4b64b23e..999b375deae 100644 --- a/internal/service/networkmanager/transit_gateway_peering.go +++ b/internal/service/networkmanager/transit_gateway_peering.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -87,13 +90,13 @@ func ResourceTransitGatewayPeering() *schema.Resource { } func resourceTransitGatewayPeeringCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) coreNetworkID := d.Get("core_network_id").(string) transitGatewayARN := d.Get("transit_gateway_arn").(string) input := &networkmanager.CreateTransitGatewayPeeringInput{ CoreNetworkId: aws.String(coreNetworkID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TransitGatewayArn: aws.String(transitGatewayARN), } @@ -114,7 +117,7 @@ func resourceTransitGatewayPeeringCreate(ctx context.Context, d *schema.Resource } func resourceTransitGatewayPeeringRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) transitGatewayPeering, err := FindTransitGatewayPeeringByID(ctx, conn, d.Id()) @@ -145,7 +148,7 @@ func resourceTransitGatewayPeeringRead(ctx context.Context, d *schema.ResourceDa d.Set("transit_gateway_arn", transitGatewayPeering.TransitGatewayArn) d.Set("transit_gateway_peering_attachment_id", transitGatewayPeering.TransitGatewayPeeringAttachmentId) - SetTagsOut(ctx, p.Tags) + setTagsOut(ctx, p.Tags) return nil } @@ -156,7 +159,7 @@ func resourceTransitGatewayPeeringUpdate(ctx context.Context, d *schema.Resource } func resourceTransitGatewayPeeringDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) log.Printf("[DEBUG] Deleting Network Manager Transit Gateway Peering: %s", d.Id()) _, err := conn.DeletePeeringWithContext(ctx, &networkmanager.DeletePeeringInput{ diff --git 
a/internal/service/networkmanager/transit_gateway_peering_test.go b/internal/service/networkmanager/transit_gateway_peering_test.go index 66af67533e2..8e98a9916dc 100644 --- a/internal/service/networkmanager/transit_gateway_peering_test.go +++ b/internal/service/networkmanager/transit_gateway_peering_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -134,7 +137,7 @@ func testAccCheckTransitGatewayPeeringExists(ctx context.Context, n string, v *n return fmt.Errorf("No Network Manager Transit Gateway Peering ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) output, err := tfnetworkmanager.FindTransitGatewayPeeringByID(ctx, conn, rs.Primary.ID) @@ -150,7 +153,7 @@ func testAccCheckTransitGatewayPeeringExists(ctx context.Context, n string, v *n func testAccCheckTransitGatewayPeeringDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_transit_gateway_peering" { @@ -200,13 +203,17 @@ resource "aws_networkmanager_global_network" "test" { resource "aws_networkmanager_core_network" "test" { global_network_id = aws_networkmanager_global_network.test.id - policy_document = data.aws_networkmanager_core_network_policy_document.test.json tags = { Name = %[1]q } } +resource "aws_networkmanager_core_network_policy_attachment" "test" { + core_network_id = aws_networkmanager_core_network.test.id + policy_document = data.aws_networkmanager_core_network_policy_document.test.json +} + data "aws_networkmanager_core_network_policy_document" "test" { core_network_configuration { # Don't overlap with default TGW ASN: 64512. 
diff --git a/internal/service/networkmanager/transit_gateway_registration.go b/internal/service/networkmanager/transit_gateway_registration.go index 395cd4bc364..fa3ab93a96e 100644 --- a/internal/service/networkmanager/transit_gateway_registration.go +++ b/internal/service/networkmanager/transit_gateway_registration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -52,7 +55,7 @@ func ResourceTransitGatewayRegistration() *schema.Resource { } func resourceTransitGatewayRegistrationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID := d.Get("global_network_id").(string) transitGatewayARN := d.Get("transit_gateway_arn").(string) @@ -66,20 +69,20 @@ func resourceTransitGatewayRegistrationCreate(ctx context.Context, d *schema.Res _, err := conn.RegisterTransitGatewayWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating Network Manager Transit Gateway Registration (%s): %s", id, err) + return diag.Errorf("creating Network Manager Transit Gateway Registration (%s): %s", id, err) } d.SetId(id) if _, err := waitTransitGatewayRegistrationCreated(ctx, conn, globalNetworkID, transitGatewayARN, d.Timeout(schema.TimeoutCreate)); err != nil { - return diag.Errorf("error waiting for Network Manager Transit Gateway Attachment (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Network Manager Transit Gateway Registration (%s) create: %s", d.Id(), err) } return resourceTransitGatewayRegistrationRead(ctx, d, meta) } func resourceTransitGatewayRegistrationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, transitGatewayARN, err := 
TransitGatewayRegistrationParseResourceID(d.Id()) @@ -96,7 +99,7 @@ func resourceTransitGatewayRegistrationRead(ctx context.Context, d *schema.Resou } if err != nil { - return diag.Errorf("error reading Network Manager Transit Gateway Registration (%s): %s", d.Id(), err) + return diag.Errorf("reading Network Manager Transit Gateway Registration (%s): %s", d.Id(), err) } d.Set("global_network_id", transitGatewayRegistration.GlobalNetworkId) @@ -106,7 +109,7 @@ func resourceTransitGatewayRegistrationRead(ctx context.Context, d *schema.Resou } func resourceTransitGatewayRegistrationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, transitGatewayARN, err := TransitGatewayRegistrationParseResourceID(d.Id()) @@ -137,11 +140,11 @@ func deregisterTransitGateway(ctx context.Context, conn *networkmanager.NetworkM } if err != nil { - return fmt.Errorf("error deleting Network Manager Transit Gateway Registration (%s): %w", id, err) + return fmt.Errorf("deleting Network Manager Transit Gateway Registration (%s): %w", id, err) } if _, err := waitTransitGatewayRegistrationDeleted(ctx, conn, globalNetworkID, transitGatewayARN, timeout); err != nil { - return fmt.Errorf("error waiting for Network Manager Transit Gateway Registration (%s) delete: %w", id, err) + return fmt.Errorf("waiting for Network Manager Transit Gateway Registration (%s) delete: %w", id, err) } return nil diff --git a/internal/service/networkmanager/transit_gateway_registration_test.go b/internal/service/networkmanager/transit_gateway_registration_test.go index 730f8620a1a..bfeee54a776 100644 --- a/internal/service/networkmanager/transit_gateway_registration_test.go +++ b/internal/service/networkmanager/transit_gateway_registration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -130,7 +133,7 @@ func testAccTransitGatewayRegistration_crossRegion(t *testing.T) { func testAccCheckTransitGatewayRegistrationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_transit_gateway_registration" { @@ -171,7 +174,7 @@ func testAccCheckTransitGatewayRegistrationExists(ctx context.Context, n string) return fmt.Errorf("No Network Manager Transit Gateway Registration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) globalNetworkID, transitGatewayARN, err := tfnetworkmanager.TransitGatewayRegistrationParseResourceID(rs.Primary.ID) diff --git a/internal/service/networkmanager/transit_gateway_route_table_attachment.go b/internal/service/networkmanager/transit_gateway_route_table_attachment.go index c7c09a16c93..02ada62ed03 100644 --- a/internal/service/networkmanager/transit_gateway_route_table_attachment.go +++ b/internal/service/networkmanager/transit_gateway_route_table_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -99,13 +102,13 @@ func ResourceTransitGatewayRouteTableAttachment() *schema.Resource { } func resourceTransitGatewayRouteTableAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) peeringID := d.Get("peering_id").(string) transitGatewayRouteTableARN := d.Get("transit_gateway_route_table_arn").(string) input := &networkmanager.CreateTransitGatewayRouteTableAttachmentInput{ PeeringId: aws.String(peeringID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TransitGatewayRouteTableArn: aws.String(transitGatewayRouteTableARN), } @@ -126,7 +129,7 @@ func resourceTransitGatewayRouteTableAttachmentCreate(ctx context.Context, d *sc } func resourceTransitGatewayRouteTableAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) transitGatewayRouteTableAttachment, err := FindTransitGatewayRouteTableAttachmentByID(ctx, conn, d.Id()) @@ -160,7 +163,7 @@ func resourceTransitGatewayRouteTableAttachmentRead(ctx context.Context, d *sche d.Set("state", a.State) d.Set("transit_gateway_route_table_arn", transitGatewayRouteTableAttachment.TransitGatewayRouteTableArn) - SetTagsOut(ctx, a.Tags) + setTagsOut(ctx, a.Tags) return nil } @@ -171,7 +174,7 @@ func resourceTransitGatewayRouteTableAttachmentUpdate(ctx context.Context, d *sc } func resourceTransitGatewayRouteTableAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) log.Printf("[DEBUG] Deleting Network Manager Transit Gateway Route Table Attachment: %s", d.Id()) _, err := 
conn.DeleteAttachmentWithContext(ctx, &networkmanager.DeleteAttachmentInput{ @@ -218,7 +221,7 @@ func FindTransitGatewayRouteTableAttachmentByID(ctx context.Context, conn *netwo return output.TransitGatewayRouteTableAttachment, nil } -func StatusTransitGatewayRouteTableAttachmentState(ctx context.Context, conn *networkmanager.NetworkManager, id string) retry.StateRefreshFunc { +func statusTransitGatewayRouteTableAttachmentState(ctx context.Context, conn *networkmanager.NetworkManager, id string) retry.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindTransitGatewayRouteTableAttachmentByID(ctx, conn, id) @@ -239,7 +242,7 @@ func waitTransitGatewayRouteTableAttachmentCreated(ctx context.Context, conn *ne Pending: []string{networkmanager.AttachmentStateCreating, networkmanager.AttachmentStatePendingNetworkUpdate}, Target: []string{networkmanager.AttachmentStateAvailable, networkmanager.AttachmentStatePendingAttachmentAcceptance}, Timeout: timeout, - Refresh: StatusTransitGatewayRouteTableAttachmentState(ctx, conn, id), + Refresh: statusTransitGatewayRouteTableAttachmentState(ctx, conn, id), } outputRaw, err := stateConf.WaitForStateContext(ctx) @@ -256,7 +259,7 @@ func waitTransitGatewayRouteTableAttachmentDeleted(ctx context.Context, conn *ne Pending: []string{networkmanager.AttachmentStateDeleting}, Target: []string{}, Timeout: timeout, - Refresh: StatusTransitGatewayRouteTableAttachmentState(ctx, conn, id), + Refresh: statusTransitGatewayRouteTableAttachmentState(ctx, conn, id), NotFoundChecks: 1, } @@ -268,3 +271,20 @@ func waitTransitGatewayRouteTableAttachmentDeleted(ctx context.Context, conn *ne return nil, err } + +func waitTransitGatewayRouteTableAttachmentAvailable(ctx context.Context, conn *networkmanager.NetworkManager, id string, timeout time.Duration) (*networkmanager.TransitGatewayRouteTableAttachment, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{networkmanager.AttachmentStateCreating, 
networkmanager.AttachmentStatePendingAttachmentAcceptance, networkmanager.AttachmentStatePendingNetworkUpdate}, + Target: []string{networkmanager.AttachmentStateAvailable}, + Timeout: timeout, + Refresh: statusTransitGatewayRouteTableAttachmentState(ctx, conn, id), + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*networkmanager.TransitGatewayRouteTableAttachment); ok { + return output, err + } + + return nil, err +} diff --git a/internal/service/networkmanager/transit_gateway_route_table_attachment_test.go b/internal/service/networkmanager/transit_gateway_route_table_attachment_test.go index 0b939ddbca8..2c952889230 100644 --- a/internal/service/networkmanager/transit_gateway_route_table_attachment_test.go +++ b/internal/service/networkmanager/transit_gateway_route_table_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -133,7 +136,7 @@ func testAccCheckTransitGatewayRouteTableAttachmentExists(ctx context.Context, n return fmt.Errorf("No Network Manager Transit Gateway Route Table Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) output, err := tfnetworkmanager.FindTransitGatewayRouteTableAttachmentByID(ctx, conn, rs.Primary.ID) @@ -149,7 +152,7 @@ func testAccCheckTransitGatewayRouteTableAttachmentExists(ctx context.Context, n func testAccCheckTransitGatewayRouteTableAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_transit_gateway_route_table_attachment" { @@ -183,7 +186,7 @@ resource 
"aws_networkmanager_transit_gateway_peering" "test" { Name = %[1]q } - depends_on = [aws_ec2_transit_gateway_policy_table.test] + depends_on = [aws_ec2_transit_gateway_policy_table.test, aws_networkmanager_core_network_policy_attachment.test] } resource "aws_ec2_transit_gateway_route_table" "test" { @@ -209,6 +212,11 @@ resource "aws_networkmanager_transit_gateway_route_table_attachment" "test" { depends_on = [aws_ec2_transit_gateway_policy_table_association.test] } + +resource "aws_networkmanager_attachment_accepter" "test" { + attachment_id = aws_networkmanager_transit_gateway_route_table_attachment.test.id + attachment_type = aws_networkmanager_transit_gateway_route_table_attachment.test.attachment_type +} `) } @@ -224,6 +232,11 @@ resource "aws_networkmanager_transit_gateway_route_table_attachment" "test" { depends_on = [aws_ec2_transit_gateway_policy_table_association.test] } + +resource "aws_networkmanager_attachment_accepter" "test" { + attachment_id = aws_networkmanager_transit_gateway_route_table_attachment.test.id + attachment_type = aws_networkmanager_transit_gateway_route_table_attachment.test.attachment_type +} `, tagKey1, tagValue1)) } @@ -240,5 +253,10 @@ resource "aws_networkmanager_transit_gateway_route_table_attachment" "test" { depends_on = [aws_ec2_transit_gateway_policy_table_association.test] } + +resource "aws_networkmanager_attachment_accepter" "test" { + attachment_id = aws_networkmanager_transit_gateway_route_table_attachment.test.id + attachment_type = aws_networkmanager_transit_gateway_route_table_attachment.test.attachment_type +} `, tagKey1, tagValue1, tagKey2, tagValue2)) } diff --git a/internal/service/networkmanager/vpc_attachment.go b/internal/service/networkmanager/vpc_attachment.go index 937aa661e74..8567b9e63b8 100644 --- a/internal/service/networkmanager/vpc_attachment.go +++ b/internal/service/networkmanager/vpc_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package networkmanager import ( @@ -123,14 +126,14 @@ func ResourceVPCAttachment() *schema.Resource { } func resourceVPCAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) coreNetworkID := d.Get("core_network_id").(string) vpcARN := d.Get("vpc_arn").(string) input := &networkmanager.CreateVpcAttachmentInput{ CoreNetworkId: aws.String(coreNetworkID), SubnetArns: flex.ExpandStringSet(d.Get("subnet_arns").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpcArn: aws.String(vpcARN), } @@ -155,7 +158,7 @@ func resourceVPCAttachmentCreate(ctx context.Context, d *schema.ResourceData, me } func resourceVPCAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) vpcAttachment, err := FindVPCAttachmentByID(ctx, conn, d.Id()) @@ -196,13 +199,13 @@ func resourceVPCAttachmentRead(ctx context.Context, d *schema.ResourceData, meta d.Set("subnet_arns", aws.StringValueSlice(vpcAttachment.SubnetArns)) d.Set("vpc_arn", a.ResourceArn) - SetTagsOut(ctx, a.Tags) + setTagsOut(ctx, a.Tags) return nil } func resourceVPCAttachmentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := meta.(*conns.AWSClient).NetworkManagerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &networkmanager.UpdateVpcAttachmentInput{ @@ -250,7 +253,7 @@ func resourceVPCAttachmentUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceVPCAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).NetworkManagerConn() + conn := 
meta.(*conns.AWSClient).NetworkManagerConn(ctx) // If ResourceAttachmentAccepter is used, then VPC Attachment state // is not updated from StatePendingAttachmentAcceptance and the delete fails if deleted immediately after create diff --git a/internal/service/networkmanager/vpc_attachment_test.go b/internal/service/networkmanager/vpc_attachment_test.go index 39e2c932650..1af5d57525e 100644 --- a/internal/service/networkmanager/vpc_attachment_test.go +++ b/internal/service/networkmanager/vpc_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package networkmanager_test import ( @@ -197,7 +200,7 @@ func testAccCheckVPCAttachmentExists(ctx context.Context, n string, v *networkma return fmt.Errorf("No Network Manager VPC Attachment ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) output, err := tfnetworkmanager.FindVPCAttachmentByID(ctx, conn, rs.Primary.ID) @@ -213,7 +216,7 @@ func testAccCheckVPCAttachmentExists(ctx context.Context, n string, v *networkma func testAccCheckVPCAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).NetworkManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_networkmanager_vpc_attachment" { diff --git a/internal/service/oam/generate.go b/internal/service/oam/generate.go index 147cd92475c..af3c5176ecb 100644 --- a/internal/service/oam/generate.go +++ b/internal/service/oam/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -KVTValues=true -SkipTypesImp=true -ListTags -ServiceTagsMap -TagOp=TagResource -UntagOp=UntagResource -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package oam diff --git a/internal/service/oam/link.go b/internal/service/oam/link.go index 21383bc54a2..f6e430e9230 100644 --- a/internal/service/oam/link.go +++ b/internal/service/oam/link.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -91,13 +94,13 @@ const ( ) func resourceLinkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) in := &oam.CreateLinkInput{ LabelTemplate: aws.String(d.Get("label_template").(string)), ResourceTypes: flex.ExpandStringyValueSet[types.ResourceType](d.Get("resource_types").(*schema.Set)), SinkIdentifier: aws.String(d.Get("sink_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateLink(ctx, in) @@ -115,7 +118,7 @@ func resourceLinkCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) out, err := findLinkByID(ctx, conn, d.Id()) @@ -141,7 +144,7 @@ func resourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := 
meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) update := false @@ -166,7 +169,7 @@ func resourceLinkUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceLinkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) log.Printf("[INFO] Deleting ObservabilityAccessManager Link %s", d.Id()) diff --git a/internal/service/oam/link_data_source.go b/internal/service/oam/link_data_source.go index 840bece991e..42499a6d094 100644 --- a/internal/service/oam/link_data_source.go +++ b/internal/service/oam/link_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -60,7 +63,7 @@ const ( ) func dataSourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) linkIdentifier := d.Get("link_identifier").(string) @@ -78,7 +81,7 @@ func dataSourceLinkRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("resource_types", flex.FlattenStringValueList(out.ResourceTypes)) d.Set("sink_arn", out.SinkArn) - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return create.DiagError(names.ObservabilityAccessManager, create.ErrActionReading, DSNameLink, d.Id(), err) } diff --git a/internal/service/oam/link_data_source_test.go b/internal/service/oam/link_data_source_test.go index 58b2731749e..ccbe3bb263e 100644 --- a/internal/service/oam/link_data_source_test.go +++ b/internal/service/oam/link_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( @@ -5,7 +8,6 @@ import ( "regexp" "testing" - "github.com/aws/aws-sdk-go-v2/service/oam" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -18,7 +20,6 @@ func TestAccObservabilityAccessManagerLinkDataSource_basic(t *testing.T) { } ctx := acctest.Context(t) - var link oam.GetLinkOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) dataSourceName := "data.aws_oam_link.test" @@ -31,12 +32,10 @@ func TestAccObservabilityAccessManagerLinkDataSource_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - CheckDestroy: testAccCheckLinkDestroy, Steps: []resource.TestStep{ { Config: testAccLinkDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(dataSourceName, &link), acctest.MatchResourceAttrRegionalARN(dataSourceName, "arn", "oam", regexp.MustCompile(`link/+.`)), resource.TestCheckResourceAttrSet(dataSourceName, "label"), resource.TestCheckResourceAttr(dataSourceName, "label_template", "$AccountName"), diff --git a/internal/service/oam/link_test.go b/internal/service/oam/link_test.go index c69ab7887e4..7990c946ad7 100644 --- a/internal/service/oam/link_test.go +++ b/internal/service/oam/link_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( @@ -39,12 +42,12 @@ func TestAccObservabilityAccessManagerLink_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - CheckDestroy: testAccCheckLinkDestroy, + CheckDestroy: testAccCheckLinkDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLinkConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, resourceName, &link), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "oam", regexp.MustCompile(`link/+.`)), resource.TestCheckResourceAttrSet(resourceName, "label"), resource.TestCheckResourceAttr(resourceName, "label_template", "$AccountName"), @@ -83,12 +86,12 @@ func TestAccObservabilityAccessManagerLink_disappears(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - CheckDestroy: testAccCheckLinkDestroy, + CheckDestroy: testAccCheckLinkDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLinkConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, resourceName, &link), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfoam.ResourceLink(), resourceName), ), ExpectNonEmptyPlan: true, @@ -116,12 +119,12 @@ func TestAccObservabilityAccessManagerLink_update(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - CheckDestroy: testAccCheckLinkDestroy, + CheckDestroy: testAccCheckLinkDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLinkConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, 
resourceName, &link), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "oam", regexp.MustCompile(`link/+.`)), resource.TestCheckResourceAttrSet(resourceName, "label"), resource.TestCheckResourceAttr(resourceName, "label_template", "$AccountName"), @@ -135,7 +138,7 @@ func TestAccObservabilityAccessManagerLink_update(t *testing.T) { { Config: testAccLinkConfig_update(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, resourceName, &link), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "oam", regexp.MustCompile(`link/+.`)), resource.TestCheckResourceAttrSet(resourceName, "label"), resource.TestCheckResourceAttr(resourceName, "label_template", "$AccountName"), @@ -175,12 +178,12 @@ func TestAccObservabilityAccessManagerLink_tags(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - CheckDestroy: testAccCheckLinkDestroy, + CheckDestroy: testAccCheckLinkDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccLinkConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, resourceName, &link), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -188,7 +191,7 @@ func TestAccObservabilityAccessManagerLink_tags(t *testing.T) { { Config: testAccLinkConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, resourceName, &link), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -197,7 +200,7 @@ func 
TestAccObservabilityAccessManagerLink_tags(t *testing.T) { { Config: testAccLinkConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckLinkExists(resourceName, &link), + testAccCheckLinkExists(ctx, resourceName, &link), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), @@ -211,34 +214,35 @@ func TestAccObservabilityAccessManagerLink_tags(t *testing.T) { }) } -func testAccCheckLinkDestroy(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() - ctx := context.Background() +func testAccCheckLinkDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) - for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_oam_link" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_oam_link" { + continue + } - input := &oam.GetLinkInput{ - Identifier: aws.String(rs.Primary.ID), - } - _, err := conn.GetLink(ctx, input) - if err != nil { - var nfe *types.ResourceNotFoundException - if errors.As(err, &nfe) { - return nil + input := &oam.GetLinkInput{ + Identifier: aws.String(rs.Primary.ID), + } + _, err := conn.GetLink(ctx, input) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + return err } - return err + + return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingDestroyed, tfoam.ResNameLink, rs.Primary.ID, errors.New("not destroyed")) } - return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingDestroyed, tfoam.ResNameLink, rs.Primary.ID, errors.New("not destroyed")) + return nil } - - return nil } -func testAccCheckLinkExists(name string, link *oam.GetLinkOutput) resource.TestCheckFunc { +func testAccCheckLinkExists(ctx 
context.Context, name string, link *oam.GetLinkOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] if !ok { @@ -249,8 +253,8 @@ func testAccCheckLinkExists(name string, link *oam.GetLinkOutput) resource.TestC return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingExistence, tfoam.ResNameLink, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() - ctx := context.Background() + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) + resp, err := conn.GetLink(ctx, &oam.GetLinkInput{ Identifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/oam/links_data_source.go b/internal/service/oam/links_data_source.go index 97b49830a9a..57a9e021f5e 100644 --- a/internal/service/oam/links_data_source.go +++ b/internal/service/oam/links_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -32,7 +35,7 @@ const ( ) func dataSourceLinksRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) listLinksInput := &oam.ListLinksInput{} paginator := oam.NewListLinksPaginator(conn, listLinksInput) diff --git a/internal/service/oam/links_data_source_test.go b/internal/service/oam/links_data_source_test.go index 34d31c96b52..1e2570a33b3 100644 --- a/internal/service/oam/links_data_source_test.go +++ b/internal/service/oam/links_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( diff --git a/internal/service/oam/service_package_gen.go b/internal/service/oam/service_package_gen.go index ea2581af935..a90a68d3992 100644 --- a/internal/service/oam/service_package_gen.go +++ b/internal/service/oam/service_package_gen.go @@ -5,6 +5,9 @@ package oam import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + oam_sdkv2 "github.com/aws/aws-sdk-go-v2/service/oam" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -69,4 +72,17 @@ func (p *servicePackage) ServicePackageName() string { return names.ObservabilityAccessManager } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*oam_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return oam_sdkv2.NewFromConfig(cfg, func(o *oam_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = oam_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/oam/sink.go b/internal/service/oam/sink.go index 3e9f55d8137..c78d7cdb9b3 100644 --- a/internal/service/oam/sink.go +++ b/internal/service/oam/sink.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
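The new `NewClient` in `service_package_gen.go` builds the SDK v2 client with `oam_sdkv2.NewFromConfig`, installing a custom endpoint resolver only when the provider configuration supplies one. A sketch of that functional-options construction pattern, with simplified stand-in types (not the real `oam_sdkv2.Options`):

```go
package main

import "fmt"

// options mirrors the role of oam_sdkv2.Options: per-client settings that
// callers adjust through functional options. (Stand-in type, not the SDK's.)
type options struct{ endpointURL string }

type client struct{ opts options }

// newFromConfig applies each option function to a copy of the defaults,
// matching the pattern NewFromConfig uses: the caller's closures mutate
// the options value before the client is built.
func newFromConfig(defaults options, optFns ...func(*options)) *client {
	o := defaults
	for _, fn := range optFns {
		fn(&o)
	}
	return &client{opts: o}
}

func main() {
	endpoint := "https://oam.example.test" // would come from config["endpoint"]
	c := newFromConfig(options{endpointURL: "https://oam.us-east-1.amazonaws.com"}, func(o *options) {
		// Override the resolver only when an endpoint is configured,
		// as the generated NewClient does.
		if endpoint != "" {
			o.endpointURL = endpoint
		}
	})
	fmt.Println(c.opts.endpointURL) // https://oam.example.test
}
```

Because the option is a closure over the provider config, the default resolver is untouched for the common case where no endpoint override is set.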
+// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -66,11 +69,11 @@ const ( ) func resourceSinkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) in := &oam.CreateSinkInput{ Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateSink(ctx, in) @@ -88,7 +91,7 @@ func resourceSinkCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceSinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) out, err := findSinkByID(ctx, conn, d.Id()) @@ -115,7 +118,7 @@ func resourceSinkUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceSinkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) log.Printf("[INFO] Deleting ObservabilityAccessManager Sink %s", d.Id()) diff --git a/internal/service/oam/sink_data_source.go b/internal/service/oam/sink_data_source.go index c02f0491c02..1876eaf293c 100644 --- a/internal/service/oam/sink_data_source.go +++ b/internal/service/oam/sink_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -44,7 +47,7 @@ const ( ) func dataSourceSinkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) sinkIdentifier := d.Get("sink_identifier").(string) @@ -59,7 +62,7 @@ func dataSourceSinkRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("name", out.Name) d.Set("sink_id", out.Id) - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return create.DiagError(names.ObservabilityAccessManager, create.ErrActionReading, DSNameSink, d.Id(), err) } diff --git a/internal/service/oam/sink_data_source_test.go b/internal/service/oam/sink_data_source_test.go index 2a782b38f69..76e9316d1f5 100644 --- a/internal/service/oam/sink_data_source_test.go +++ b/internal/service/oam/sink_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( diff --git a/internal/service/oam/sink_policy.go b/internal/service/oam/sink_policy.go index 8cd9e9d5c53..108a6e26b35 100644 --- a/internal/service/oam/sink_policy.go +++ b/internal/service/oam/sink_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -73,7 +76,7 @@ const ( ) func resourceSinkPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) sinkIdentifier := d.Get("sink_identifier").(string) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -100,7 +103,7 @@ func resourceSinkPolicyPut(ctx context.Context, d *schema.ResourceData, meta int } func resourceSinkPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) out, err := findSinkPolicyByID(ctx, conn, d.Id()) diff --git a/internal/service/oam/sink_policy_test.go b/internal/service/oam/sink_policy_test.go index 14ee64ea1c4..af46b058704 100644 --- a/internal/service/oam/sink_policy_test.go +++ b/internal/service/oam/sink_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( @@ -38,12 +41,12 @@ func TestAccObservabilityAccessManagerSinkPolicy_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckSinkPolicyDestroy, + CheckDestroy: testAccCheckSinkPolicyDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccSinkPolicyConfigBasic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckSinkPolicyExists(resourceName, &sinkPolicy), + testAccCheckSinkPolicyExists(ctx, resourceName, &sinkPolicy), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "oam", regexp.MustCompile(`sink/+.`)), resource.TestCheckResourceAttrWith(resourceName, "policy", func(value string) error { _, err := awspolicy.PoliciesAreEquivalent(value, fmt.Sprintf(` @@ -97,12 +100,12 @@ func TestAccObservabilityAccessManagerSinkPolicy_update(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.ObservabilityAccessManagerEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckSinkPolicyDestroy, + CheckDestroy: testAccCheckSinkPolicyDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccSinkPolicyConfigBasic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckSinkPolicyExists(resourceName, &sinkPolicy), + testAccCheckSinkPolicyExists(ctx, resourceName, &sinkPolicy), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "oam", regexp.MustCompile(`sink/+.`)), resource.TestCheckResourceAttrWith(resourceName, "policy", func(value string) error { _, err := awspolicy.PoliciesAreEquivalent(value, fmt.Sprintf(` @@ -133,7 +136,7 @@ func TestAccObservabilityAccessManagerSinkPolicy_update(t *testing.T) { { Config: testAccSinkPolicyConfigUpdate(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckSinkPolicyExists(resourceName, &sinkPolicy), + testAccCheckSinkPolicyExists(ctx, resourceName, &sinkPolicy), 
resource.TestCheckResourceAttrPair(resourceName, "sink_identifier", "aws_oam_sink.test", "id"), resource.TestCheckResourceAttrWith(resourceName, "policy", func(value string) error { _, err := awspolicy.PoliciesAreEquivalent(value, fmt.Sprintf(` @@ -165,34 +168,35 @@ func TestAccObservabilityAccessManagerSinkPolicy_update(t *testing.T) { }) } -func testAccCheckSinkPolicyDestroy(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() - ctx := context.Background() +func testAccCheckSinkPolicyDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) - for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_oam_sink_policy" { - continue - } + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_oam_sink_policy" { + continue + } - input := &oam.GetSinkPolicyInput{ - SinkIdentifier: aws.String(rs.Primary.ID), - } - _, err := conn.GetSinkPolicy(ctx, input) - if err != nil { - var nfe *types.ResourceNotFoundException - if errors.As(err, &nfe) { - return nil + input := &oam.GetSinkPolicyInput{ + SinkIdentifier: aws.String(rs.Primary.ID), + } + _, err := conn.GetSinkPolicy(ctx, input) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + return err } - return err + + return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingDestroyed, tfoam.ResNameSinkPolicy, rs.Primary.ID, errors.New("not destroyed")) } - return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingDestroyed, tfoam.ResNameSinkPolicy, rs.Primary.ID, errors.New("not destroyed")) + return nil } - - return nil } -func testAccCheckSinkPolicyExists(name string, sinkPolicy *oam.GetSinkPolicyOutput) resource.TestCheckFunc { +func testAccCheckSinkPolicyExists(ctx context.Context, name string, sinkPolicy 
*oam.GetSinkPolicyOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] if !ok { @@ -203,8 +207,8 @@ func testAccCheckSinkPolicyExists(name string, sinkPolicy *oam.GetSinkPolicyOutp return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingExistence, tfoam.ResNameSinkPolicy, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() - ctx := context.Background() + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) + resp, err := conn.GetSinkPolicy(ctx, &oam.GetSinkPolicyInput{ SinkIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/oam/sink_test.go b/internal/service/oam/sink_test.go index 36bad4cb3f2..3bf93b51a5c 100644 --- a/internal/service/oam/sink_test.go +++ b/internal/service/oam/sink_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( @@ -147,7 +150,7 @@ func TestAccObservabilityAccessManagerSink_tags(t *testing.T) { func testAccCheckSinkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_oam_sink" { @@ -184,7 +187,7 @@ func testAccCheckSinkExists(ctx context.Context, name string, sink *oam.GetSinkO return create.Error(names.ObservabilityAccessManager, create.ErrActionCheckingExistence, tfoam.ResNameSink, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) resp, err := conn.GetSink(ctx, &oam.GetSinkInput{ Identifier: aws.String(rs.Primary.ID), @@ -201,7 
+204,7 @@ func testAccCheckSinkExists(ctx context.Context, name string, sink *oam.GetSinkO } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) input := &oam.ListSinksInput{} _, err := conn.ListSinks(ctx, input) diff --git a/internal/service/oam/sinks_data_source.go b/internal/service/oam/sinks_data_source.go index ec49ad944ad..eabb6398777 100644 --- a/internal/service/oam/sinks_data_source.go +++ b/internal/service/oam/sinks_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam import ( @@ -32,7 +35,7 @@ const ( ) func dataSourceSinksRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient() + conn := meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx) listSinksInput := &oam.ListSinksInput{} paginator := oam.NewListSinksPaginator(conn, listSinksInput) diff --git a/internal/service/oam/sinks_data_source_test.go b/internal/service/oam/sinks_data_source_test.go index ef693d7e5ba..75e95ce7f63 100644 --- a/internal/service/oam/sinks_data_source_test.go +++ b/internal/service/oam/sinks_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package oam_test import ( diff --git a/internal/service/oam/sweep.go b/internal/service/oam/sweep.go index 59ee18f1dee..5bdab95a252 100644 --- a/internal/service/oam/sweep.go +++ b/internal/service/oam/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep diff --git a/internal/service/oam/tags_gen.go b/internal/service/oam/tags_gen.go index d1fe0eb3e55..ca0301c7d03 100644 --- a/internal/service/oam/tags_gen.go +++ b/internal/service/oam/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists oam service tags. +// listTags lists oam service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *oam.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *oam.Client, identifier string) (tftags.KeyValueTags, error) { input := &oam.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *oam.Client, identifier string) (tftags. // ListTags lists oam service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ObservabilityAccessManagerClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from oam service tags. +// KeyValueTags creates tftags.KeyValueTags from oam service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns oam service tags from Context. +// getTagsIn returns oam service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets oam service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets oam service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates oam service tags. +// updateTags updates oam service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *oam.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *oam.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *oam.Client, identifier string, oldTag // UpdateTags updates oam service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ObservabilityAccessManagerClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ObservabilityAccessManagerClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/opensearch/consts.go b/internal/service/opensearch/consts.go index 65856de18dc..8848e558378 100644 --- a/internal/service/opensearch/consts.go +++ b/internal/service/opensearch/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( diff --git a/internal/service/opensearch/domain.go b/internal/service/opensearch/domain.go index 182477f9462..6e10f2cf16f 100644 --- a/internal/service/opensearch/domain.go +++ b/internal/service/opensearch/domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -50,7 +53,7 @@ func ResourceDomain() *schema.Resource { newVersion := d.Get("engine_version").(string) domainName := d.Get("domain_name").(string) - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) resp, err := conn.GetCompatibleVersionsWithContext(ctx, &opensearchservice.GetCompatibleVersionsInput{ DomainName: aws.String(domainName), }) @@ -278,13 +281,9 @@ func ResourceDomain() *schema.Resource { Optional: true, }, "warm_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - opensearchservice.OpenSearchWarmPartitionInstanceTypeUltrawarm1MediumSearch, - opensearchservice.OpenSearchWarmPartitionInstanceTypeUltrawarm1LargeSearch, - "ultrawarm1.xlarge.search", - }, false), + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(opensearchservice.OpenSearchWarmPartitionInstanceType_Values(), false), }, "zone_awareness_config": { Type: schema.TypeList, @@ -453,11 +452,12 @@ func ResourceDomain() *schema.Resource { "engine_version": { Type: schema.TypeString, Optional: true, - Default: "OpenSearch_1.1", + Computed: true, }, "kibana_endpoint": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Deprecated: "use 'dashboard_endpoint' attribute instead", }, "log_publishing_options": { Type: schema.TypeSet, @@ -496,6 +496,51 @@ func ResourceDomain() *schema.Resource { }, }, }, + "off_peak_window_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: 
&schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "off_peak_window": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "window_start_time": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "hours": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + "minutes": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, "snapshot_options": { Type: schema.TypeList, Optional: true, @@ -556,7 +601,7 @@ func resourceDomainImport(ctx context.Context, func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) // The API doesn't check for duplicate names // so w/out this check Create would act as upsert @@ -566,10 +611,13 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte return sdkdiag.AppendErrorf(diags, "OpenSearch Domain %q already exists", aws.StringValue(resp.DomainName)) } - inputCreateDomain := opensearchservice.CreateDomainInput{ - DomainName: aws.String(d.Get("domain_name").(string)), - EngineVersion: aws.String(d.Get("engine_version").(string)), - TagList: GetTagsIn(ctx), + input := &opensearchservice.CreateDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + TagList: getTagsIn(ctx), + } + + if v, ok := d.GetOk("engine_version"); ok { + input.EngineVersion = aws.String(v.(string)) } if v, ok := d.GetOk("access_policies"); ok { @@ -579,19 +627,19 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte return sdkdiag.AppendErrorf(diags, "policy (%s) is invalid JSON: %s", 
policy, err) } - inputCreateDomain.AccessPolicies = aws.String(policy) + input.AccessPolicies = aws.String(policy) } if v, ok := d.GetOk("advanced_options"); ok { - inputCreateDomain.AdvancedOptions = flex.ExpandStringMap(v.(map[string]interface{})) + input.AdvancedOptions = flex.ExpandStringMap(v.(map[string]interface{})) } if v, ok := d.GetOk("advanced_security_options"); ok { - inputCreateDomain.AdvancedSecurityOptions = expandAdvancedSecurityOptions(v.([]interface{})) + input.AdvancedSecurityOptions = expandAdvancedSecurityOptions(v.([]interface{})) } if v, ok := d.GetOk("auto_tune_options"); ok && len(v.([]interface{})) > 0 { - inputCreateDomain.AutoTuneOptions = expandAutoTuneOptionsInput(v.([]interface{})[0].(map[string]interface{})) + input.AutoTuneOptions = expandAutoTuneOptionsInput(v.([]interface{})[0].(map[string]interface{})) } if v, ok := d.GetOk("ebs_options"); ok { @@ -603,7 +651,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte } s := options[0].(map[string]interface{}) - inputCreateDomain.EBSOptions = expandEBSOptions(s) + input.EBSOptions = expandEBSOptions(s) } } @@ -614,7 +662,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte } s := options[0].(map[string]interface{}) - inputCreateDomain.EncryptionAtRestOptions = expandEncryptAtRestOptions(s) + input.EncryptionAtRestOptions = expandEncryptAtRestOptions(s) } if v, ok := d.GetOk("cluster_config"); ok { @@ -625,7 +673,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte return sdkdiag.AppendErrorf(diags, "At least one field is expected inside cluster_config") } m := config[0].(map[string]interface{}) - inputCreateDomain.ClusterConfig = expandClusterConfig(m) + input.ClusterConfig = expandClusterConfig(m) } } @@ -633,7 +681,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte options := v.([]interface{}) s := options[0].(map[string]interface{}) - 
inputCreateDomain.NodeToNodeEncryptionOptions = expandNodeToNodeEncryptionOptions(s) + input.NodeToNodeEncryptionOptions = expandNodeToNodeEncryptionOptions(s) } if v, ok := d.GetOk("snapshot_options"); ok { @@ -650,7 +698,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), } - inputCreateDomain.SnapshotOptions = &snapshotOptions + input.SnapshotOptions = &snapshotOptions } } @@ -661,26 +709,36 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte } s := options[0].(map[string]interface{}) - inputCreateDomain.VPCOptions = expandVPCOptions(s) + input.VPCOptions = expandVPCOptions(s) } if v, ok := d.GetOk("log_publishing_options"); ok { - inputCreateDomain.LogPublishingOptions = expandLogPublishingOptions(v.(*schema.Set)) + input.LogPublishingOptions = expandLogPublishingOptions(v.(*schema.Set)) } if v, ok := d.GetOk("domain_endpoint_options"); ok { - inputCreateDomain.DomainEndpointOptions = expandDomainEndpointOptions(v.([]interface{})) + input.DomainEndpointOptions = expandDomainEndpointOptions(v.([]interface{})) } if v, ok := d.GetOk("cognito_options"); ok { - inputCreateDomain.CognitoOptions = expandCognitoOptions(v.([]interface{})) + input.CognitoOptions = expandCognitoOptions(v.([]interface{})) + } + + if v, ok := d.GetOk("off_peak_window_options"); ok && len(v.([]interface{})) > 0 { + input.OffPeakWindowOptions = expandOffPeakWindowOptions(v.([]interface{})[0].(map[string]interface{})) + + // This option is only available when modifying a domain created prior to February 16, 2023, not when creating a new domain. + // An off-peak window is required for a domain and cannot be disabled. 
+ if input.OffPeakWindowOptions != nil { + input.OffPeakWindowOptions.Enabled = aws.Bool(true) + } } // IAM Roles can take some time to propagate if set in AccessPolicies and created in the same terraform var out *opensearchservice.CreateDomainOutput err = retry.RetryContext(ctx, propagationTimeout, func() *retry.RetryError { var err error - out, err = conn.CreateDomainWithContext(ctx, &inputCreateDomain) + out, err = conn.CreateDomainWithContext(ctx, input) if err != nil { if tfawserr.ErrMessageContains(err, "InvalidTypeException", "Error setting policy") { return retry.RetryableError(err) @@ -711,7 +769,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte return nil }) if tfresource.TimedOut(err) { - out, err = conn.CreateDomainWithContext(ctx, &inputCreateDomain) + out, err = conn.CreateDomainWithContext(ctx, input) } if err != nil { return sdkdiag.AppendErrorf(diags, "creating OpenSearch Domain: %s", err) @@ -729,13 +787,13 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte if v, ok := d.GetOk("auto_tune_options"); ok && len(v.([]interface{})) > 0 { log.Printf("[DEBUG] Modifying config for OpenSearch Domain %q", d.Id()) - inputUpdateDomainConfig := &opensearchservice.UpdateDomainConfigInput{ + input := &opensearchservice.UpdateDomainConfigInput{ DomainName: aws.String(d.Get("domain_name").(string)), } - inputUpdateDomainConfig.AutoTuneOptions = expandAutoTuneOptions(v.([]interface{})[0].(map[string]interface{})) + input.AutoTuneOptions = expandAutoTuneOptions(v.([]interface{})[0].(map[string]interface{})) - _, err = conn.UpdateDomainConfigWithContext(ctx, inputUpdateDomainConfig) + _, err = conn.UpdateDomainConfigWithContext(ctx, input) if err != nil { return sdkdiag.AppendErrorf(diags, "modifying config for OpenSearch Domain: %s", err) @@ -749,7 +807,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainRead(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -789,6 +847,7 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf } d.SetId(aws.StringValue(ds.ARN)) + d.Set("arn", ds.ARN) d.Set("domain_id", ds.DomainId) d.Set("domain_name", ds.DomainName) d.Set("engine_version", ds.EngineVersion) @@ -867,14 +926,20 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "setting domain_endpoint_options: %s", err) } - d.Set("arn", ds.ARN) + if ds.OffPeakWindowOptions != nil { + if err := d.Set("off_peak_window_options", []interface{}{flattenOffPeakWindowOptions(ds.OffPeakWindowOptions)}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting off_peak_window_options: %s", err) + } + } else { + d.Set("off_peak_window_options", nil) + } return diags } func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := opensearchservice.UpdateDomainConfigInput{ @@ -901,6 +966,10 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte input.AutoTuneOptions = expandAutoTuneOptions(d.Get("auto_tune_options").([]interface{})[0].(map[string]interface{})) } + if d.HasChange("cognito_options") { + input.CognitoOptions = expandCognitoOptions(d.Get("cognito_options").([]interface{})) + } + if d.HasChange("domain_endpoint_options") { input.DomainEndpointOptions = expandDomainEndpointOptions(d.Get("domain_endpoint_options").([]interface{})) } @@ -952,6 +1021,10 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta 
inte } } + if d.HasChange("log_publishing_options") { + input.LogPublishingOptions = expandLogPublishingOptions(d.Get("log_publishing_options").(*schema.Set)) + } + if d.HasChange("node_to_node_encryption") { input.NodeToNodeEncryptionOptions = nil if v, ok := d.GetOk("node_to_node_encryption"); ok { @@ -962,6 +1035,10 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte } } + if d.HasChange("off_peak_window_options") { + input.OffPeakWindowOptions = expandOffPeakWindowOptions(d.Get("off_peak_window_options").([]interface{})[0].(map[string]interface{})) + } + if d.HasChange("snapshot_options") { options := d.Get("snapshot_options").([]interface{}) @@ -982,15 +1059,6 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte input.VPCOptions = expandVPCOptions(s) } - if d.HasChange("cognito_options") { - options := d.Get("cognito_options").([]interface{}) - input.CognitoOptions = expandCognitoOptions(options) - } - - if d.HasChange("log_publishing_options") { - input.LogPublishingOptions = expandLogPublishingOptions(d.Get("log_publishing_options").(*schema.Set)) - } - _, err := conn.UpdateDomainConfigWithContext(ctx, &input) if err != nil { return sdkdiag.AppendErrorf(diags, "updating OpenSearch Domain (%s): %s", d.Id(), err) @@ -1022,7 +1090,7 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) domainName := d.Get("domain_name").(string) log.Printf("[DEBUG] Deleting OpenSearch Domain: %q", domainName) @@ -1313,7 +1381,7 @@ func EBSVolumeTypePermitsIopsInput(volumeType string) bool { return false } -// EBSVolumeTypePermitsIopsInput returns true if the volume type supports the Throughput input +// EBSVolumeTypePermitsThroughputInput returns true 
if the volume type supports the Throughput input // // This check prevents a ValidationException when updating EBS volume types from a value // that supports Throughput (ex. gp3) to one that doesn't (ex. gp2). diff --git a/internal/service/opensearch/domain_data_source.go b/internal/service/opensearch/domain_data_source.go index 5472f505ff0..5068ba64714 100644 --- a/internal/service/opensearch/domain_data_source.go +++ b/internal/service/opensearch/domain_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -49,6 +52,10 @@ func DataSourceDomain() *schema.Resource { }, }, }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, "auto_tune_options": { Type: schema.TypeList, Computed: true, @@ -63,7 +70,7 @@ func DataSourceDomain() *schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "start_at": { + "cron_expression_for_recurrence": { Type: schema.TypeString, Computed: true, }, @@ -72,18 +79,18 @@ func DataSourceDomain() *schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "value": { - Type: schema.TypeInt, - Computed: true, - }, "unit": { Type: schema.TypeString, Computed: true, }, + "value": { + Type: schema.TypeInt, + Computed: true, + }, }, }, }, - "cron_expression_for_recurrence": { + "start_at": { Type: schema.TypeString, Computed: true, }, @@ -97,25 +104,117 @@ func DataSourceDomain() *schema.Resource { }, }, }, - "domain_name": { - Type: schema.TypeString, - Required: true, + "cluster_config": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cold_storage_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "dedicated_master_count": { + Type: schema.TypeInt, + Computed: true, + }, + 
"dedicated_master_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "dedicated_master_type": { + Type: schema.TypeString, + Computed: true, + }, + "instance_count": { + Type: schema.TypeInt, + Computed: true, + }, + "instance_type": { + Type: schema.TypeString, + Computed: true, + }, + "warm_count": { + Type: schema.TypeInt, + Computed: true, + }, + "warm_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + "warm_type": { + Type: schema.TypeString, + Computed: true, + }, + "zone_awareness_config": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone_count": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + "zone_awareness_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, }, - "arn": { - Type: schema.TypeString, + "cognito_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "identity_pool_id": { + Type: schema.TypeString, + Computed: true, + }, + "role_arn": { + Type: schema.TypeString, + Computed: true, + }, + "user_pool_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "created": { + Type: schema.TypeBool, Computed: true, }, "dashboard_endpoint": { Type: schema.TypeString, Computed: true, }, + "deleted": { + Type: schema.TypeBool, + Computed: true, + }, "domain_id": { Type: schema.TypeString, Computed: true, }, - "endpoint": { + "domain_name": { Type: schema.TypeString, - Computed: true, + Required: true, }, "ebs_options": { Type: schema.TypeList, @@ -161,90 +260,92 @@ func DataSourceDomain() *schema.Resource { }, }, }, - "kibana_endpoint": { + "endpoint": { Type: schema.TypeString, Computed: true, }, - "node_to_node_encryption": { - Type: schema.TypeList, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "kibana_endpoint": { + Type: schema.TypeString, + 
Computed: true, + Deprecated: "use 'dashboard_endpoint' attribute instead", + }, + "log_publishing_options": { + Type: schema.TypeSet, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "cloudwatch_log_group_arn": { + Type: schema.TypeString, + Computed: true, + }, "enabled": { Type: schema.TypeBool, Computed: true, }, + "log_type": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - "cluster_config": { + "node_to_node_encryption": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "cold_storage_options": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, - Computed: true, - }, - }, - }, - }, - "dedicated_master_count": { - Type: schema.TypeInt, - Computed: true, - }, - "dedicated_master_enabled": { + "enabled": { Type: schema.TypeBool, Computed: true, }, - "dedicated_master_type": { - Type: schema.TypeString, - Computed: true, - }, - "instance_count": { - Type: schema.TypeInt, - Computed: true, - }, - "instance_type": { - Type: schema.TypeString, + }, + }, + }, + "off_peak_window_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, Computed: true, }, - "zone_awareness_config": { + "off_peak_window": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "availability_zone_count": { - Type: schema.TypeInt, + "window_start_time": { + Type: schema.TypeList, Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "hours": { + Type: schema.TypeInt, + Computed: true, + }, + "minutes": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, }, }, }, }, - "zone_awareness_enabled": { - Type: schema.TypeBool, - Computed: true, - }, - "warm_enabled": { - Type: schema.TypeBool, - Optional: true, - }, - 
"warm_count": { - Type: schema.TypeInt, - Computed: true, - }, - "warm_type": { - Type: schema.TypeString, - Computed: true, - }, }, }, }, + "processing": { + Type: schema.TypeBool, + Computed: true, + }, "snapshot_options": { Type: schema.TypeList, Computed: true, @@ -257,6 +358,7 @@ func DataSourceDomain() *schema.Resource { }, }, }, + "tags": tftags.TagsSchemaComputed(), "vpc_options": { Type: schema.TypeList, Computed: true, @@ -266,7 +368,6 @@ func DataSourceDomain() *schema.Resource { Type: schema.TypeSet, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, - //Set: schema.HashString, }, "security_group_ids": { Type: schema.TypeSet, @@ -285,76 +386,13 @@ func DataSourceDomain() *schema.Resource { }, }, }, - "log_publishing_options": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "log_type": { - Type: schema.TypeString, - Computed: true, - }, - "cloudwatch_log_group_arn": { - Type: schema.TypeString, - Computed: true, - }, - "enabled": { - Type: schema.TypeBool, - Computed: true, - }, - }, - }, - }, - "engine_version": { - Type: schema.TypeString, - Computed: true, - }, - "cognito_options": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, - Computed: true, - }, - "user_pool_id": { - Type: schema.TypeString, - Computed: true, - }, - "identity_pool_id": { - Type: schema.TypeString, - Computed: true, - }, - "role_arn": { - Type: schema.TypeString, - Computed: true, - }, - }, - }, - }, - - "created": { - Type: schema.TypeBool, - Computed: true, - }, - "deleted": { - Type: schema.TypeBool, - Computed: true, - }, - "processing": { - Type: schema.TypeBool, - Computed: true, - }, - - "tags": tftags.TagsSchemaComputed(), }, } } func dataSourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -462,12 +500,19 @@ func dataSourceDomainRead(ctx context.Context, d *schema.ResourceData, meta inte return sdkdiag.AppendErrorf(diags, "setting cognito_options: %s", err) } + if ds.OffPeakWindowOptions != nil { + if err := d.Set("off_peak_window_options", []interface{}{flattenOffPeakWindowOptions(ds.OffPeakWindowOptions)}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting off_peak_window_options: %s", err) + } + } else { + d.Set("off_peak_window_options", nil) + } + d.Set("created", ds.Created) d.Set("deleted", ds.Deleted) - d.Set("processing", ds.Processing) - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for OpenSearch Cluster (%s): %s", d.Id(), err) diff --git a/internal/service/opensearch/domain_data_source_test.go b/internal/service/opensearch/domain_data_source_test.go index 5292c0375c0..e08ac54cc58 100644 --- a/internal/service/opensearch/domain_data_source_test.go +++ b/internal/service/opensearch/domain_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch_test import ( @@ -45,6 +48,7 @@ func TestAccOpenSearchDomainDataSource_Data_basic(t *testing.T) { resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_type", resourceName, "ebs_options.0.volume_type"), resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_size", resourceName, "ebs_options.0.volume_size"), resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.iops", resourceName, "ebs_options.0.iops"), + resource.TestCheckResourceAttrPair(datasourceName, "off_peak_window_options.#", resourceName, "off_peak_window_options.#"), resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.#", resourceName, "snapshot_options.#"), resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.0.automated_snapshot_start_hour", resourceName, "snapshot_options.0.automated_snapshot_start_hour"), ), @@ -90,6 +94,7 @@ func TestAccOpenSearchDomainDataSource_Data_advanced(t *testing.T) { resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_size", resourceName, "ebs_options.0.volume_size"), resource.TestCheckResourceAttrPair(datasourceName, "engine_version", resourceName, "engine_version"), resource.TestCheckResourceAttrPair(datasourceName, "log_publishing_options.#", resourceName, "log_publishing_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "off_peak_window_options.#", resourceName, "off_peak_window_options.#"), resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.#", resourceName, "snapshot_options.#"), resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.0.automated_snapshot_start_hour", resourceName, "snapshot_options.0.automated_snapshot_start_hour"), resource.TestCheckResourceAttrPair(datasourceName, "vpc_options.#", resourceName, "vpc_options.#"), diff --git a/internal/service/opensearch/domain_policy.go b/internal/service/opensearch/domain_policy.go index 1c638c50b92..10e6aefe2a0 
100644 --- a/internal/service/opensearch/domain_policy.go +++ b/internal/service/opensearch/domain_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -51,7 +54,7 @@ func ResourceDomainPolicy() *schema.Resource { func resourceDomainPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -78,7 +81,7 @@ func resourceDomainPolicyRead(ctx context.Context, d *schema.ResourceData, meta func resourceDomainPolicyUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) domainName := d.Get("domain_name").(string) policy, err := structure.NormalizeJsonString(d.Get("access_policies").(string)) @@ -106,7 +109,7 @@ func resourceDomainPolicyUpsert(ctx context.Context, d *schema.ResourceData, met func resourceDomainPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) _, err := conn.UpdateDomainConfigWithContext(ctx, &opensearchservice.UpdateDomainConfigInput{ DomainName: aws.String(d.Get("domain_name").(string)), diff --git a/internal/service/opensearch/domain_policy_test.go b/internal/service/opensearch/domain_policy_test.go index be98d77722b..d915426c48a 100644 --- a/internal/service/opensearch/domain_policy_test.go +++ b/internal/service/opensearch/domain_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch_test import ( @@ -57,7 +60,6 @@ func TestAccOpenSearchDomainPolicy_basic(t *testing.T) { Config: testAccDomainPolicyConfig_basic(ri, policy), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, "aws_opensearch_domain.test", &domain), - resource.TestCheckResourceAttr("aws_opensearch_domain.test", "engine_version", "OpenSearch_1.1"), func(s *terraform.State) error { awsClient := acctest.Provider.Meta().(*conns.AWSClient) expectedArn, err := buildDomainARN(name, awsClient.Partition, awsClient.AccountID, awsClient.Region) @@ -117,8 +119,7 @@ func buildDomainARN(name, partition, accId, region string) (string, error) { func testAccDomainPolicyConfig_basic(randInt int, policy string) string { return fmt.Sprintf(` resource "aws_opensearch_domain" "test" { - domain_name = "tf-test-%d" - engine_version = "OpenSearch_1.1" + domain_name = "tf-test-%d" cluster_config { instance_type = "t2.small.search" # supported in both aws and aws-us-gov diff --git a/internal/service/opensearch/domain_saml_options.go b/internal/service/opensearch/domain_saml_options.go index 846f57bf5c3..e015d9c8cbd 100644 --- a/internal/service/opensearch/domain_saml_options.go +++ b/internal/service/opensearch/domain_saml_options.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -114,7 +117,7 @@ func domainSamlOptionsDiffSupress(k, old, new string, d *schema.ResourceData) bo func resourceDomainSAMLOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) ds, err := FindDomainByName(ctx, conn, d.Get("domain_name").(string)) @@ -141,7 +144,7 @@ func resourceDomainSAMLOptionsRead(ctx context.Context, d *schema.ResourceData, func resourceDomainSAMLOptionsPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) domainName := d.Get("domain_name").(string) config := opensearchservice.AdvancedSecurityOptionsInput_{} @@ -169,7 +172,7 @@ func resourceDomainSAMLOptionsPut(ctx context.Context, d *schema.ResourceData, m func resourceDomainSAMLOptionsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) domainName := d.Get("domain_name").(string) config := opensearchservice.AdvancedSecurityOptionsInput_{} diff --git a/internal/service/opensearch/domain_saml_options_test.go b/internal/service/opensearch/domain_saml_options_test.go index dfbcefb3205..7808949ed96 100644 --- a/internal/service/opensearch/domain_saml_options_test.go +++ b/internal/service/opensearch/domain_saml_options_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch_test import ( @@ -182,7 +185,7 @@ func testAccCheckESDomainSAMLOptionsDestroy(ctx context.Context) resource.TestCh continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) _, err := tfopensearch.FindDomainByName(ctx, conn, rs.Primary.Attributes["domain_name"]) if tfresource.NotFound(err) { @@ -216,7 +219,7 @@ func testAccCheckESDomainSAMLOptions(ctx context.Context, esResource string, sam return fmt.Errorf("Not found: %s", samlOptionsResource) } - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) _, err := tfopensearch.FindDomainByName(ctx, conn, options.Primary.Attributes["domain_name"]) return err diff --git a/internal/service/opensearch/domain_structure.go b/internal/service/opensearch/domain_structure.go index fc6292ab143..b5a24fedb12 100644 --- a/internal/service/opensearch/domain_structure.go +++ b/internal/service/opensearch/domain_structure.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -170,6 +173,56 @@ func expandSAMLOptionsIdp(l []interface{}) *opensearchservice.SAMLIdp { } } +func expandOffPeakWindowOptions(tfMap map[string]interface{}) *opensearchservice.OffPeakWindowOptions { + if tfMap == nil { + return nil + } + + apiObject := &opensearchservice.OffPeakWindowOptions{} + + if v, ok := tfMap["enabled"].(bool); ok { + apiObject.Enabled = aws.Bool(v) + } + + if v, ok := tfMap["off_peak_window"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.OffPeakWindow = expandOffPeakWindow(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandOffPeakWindow(tfMap map[string]interface{}) *opensearchservice.OffPeakWindow { + if tfMap == nil { + return nil + } + + apiObject := &opensearchservice.OffPeakWindow{} + + if v, ok := tfMap["window_start_time"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.WindowStartTime = expandWindowStartTime(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandWindowStartTime(tfMap map[string]interface{}) *opensearchservice.WindowStartTime { + if tfMap == nil { + return nil + } + + apiObject := &opensearchservice.WindowStartTime{} + + if v, ok := tfMap["hours"].(int); ok && v != 0 { + apiObject.Hours = aws.Int64(int64(v)) + } + + if v, ok := tfMap["minutes"].(int); ok && v != 0 { + apiObject.Minutes = aws.Int64(int64(v)) + } + + return apiObject +} + func flattenAdvancedSecurityOptions(advancedSecurityOptions *opensearchservice.AdvancedSecurityOptions) []map[string]interface{} { if advancedSecurityOptions == nil { return []map[string]interface{}{} @@ -318,3 +371,53 @@ func flattenLogPublishingOptions(o map[string]*opensearchservice.LogPublishingOp } return m } + +func flattenOffPeakWindowOptions(apiObject *opensearchservice.OffPeakWindowOptions) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Enabled; v != nil { + 
tfMap["enabled"] = aws.BoolValue(v) + } + + if v := apiObject.OffPeakWindow; v != nil { + tfMap["off_peak_window"] = []interface{}{flattenOffPeakWindow(v)} + } + + return tfMap +} + +func flattenOffPeakWindow(apiObject *opensearchservice.OffPeakWindow) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.WindowStartTime; v != nil { + tfMap["window_start_time"] = []interface{}{flattenWindowStartTime(v)} + } + + return tfMap +} + +func flattenWindowStartTime(apiObject *opensearchservice.WindowStartTime) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Hours; v != nil { + tfMap["hours"] = aws.Int64Value(v) + } + + if v := apiObject.Minutes; v != nil { + tfMap["minutes"] = aws.Int64Value(v) + } + + return tfMap +} diff --git a/internal/service/opensearch/domain_test.go b/internal/service/opensearch/domain_test.go index 77b15b59668..e829e9ccb4f 100644 --- a/internal/service/opensearch/domain_test.go +++ b/internal/service/opensearch/domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch_test import ( @@ -150,8 +153,9 @@ func TestAccOpenSearchDomain_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain), resource.TestMatchResourceAttr(resourceName, "dashboard_endpoint", regexp.MustCompile(`.*(opensearch|es)\..*/_dashboards`)), - resource.TestCheckResourceAttr(resourceName, "engine_version", "OpenSearch_1.1"), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), resource.TestMatchResourceAttr(resourceName, "kibana_endpoint", regexp.MustCompile(`.*(opensearch|es)\..*/_plugin/kibana/`)), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.#", "1"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), resource.TestCheckResourceAttr(resourceName, "vpc_options.#", "0"), ), @@ -513,7 +517,7 @@ func TestAccOpenSearchDomain_duplicate(t *testing.T) { ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) _, err := conn.DeleteDomainWithContext(ctx, &opensearchservice.DeleteDomainInput{ DomainName: aws.String(rName), }) @@ -523,7 +527,7 @@ func TestAccOpenSearchDomain_duplicate(t *testing.T) { { PreConfig: func() { // Create duplicate - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) _, err := conn.CreateDomainWithContext(ctx, &opensearchservice.CreateDomainInput{ DomainName: aws.String(rName), EBSOptions: &opensearchservice.EBSOptions{ @@ -543,7 +547,8 @@ func TestAccOpenSearchDomain_duplicate(t *testing.T) { Config: testAccDomainConfig_basic(rName), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain), - 
resource.TestCheckResourceAttr(resourceName, "engine_version", "OpenSearch_1.1")), + resource.TestCheckResourceAttrSet(resourceName, "engine_version"), + ), ExpectError: regexp.MustCompile(`OpenSearch Domain ".+" already exists`), }, }, @@ -1303,14 +1308,14 @@ func TestAccOpenSearchDomain_Encryption_atRestEnable(t *testing.T) { CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccDomainConfig_encryptAtRestDefaultKey(rName, "OpenSearch_1.1", false), + Config: testAccDomainConfig_encryptAtRestDefaultKey(rName, "OpenSearch_2.5", false), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain1), testAccCheckDomainEncrypted(false, &domain1), ), }, { - Config: testAccDomainConfig_encryptAtRestDefaultKey(rName, "OpenSearch_1.1", true), + Config: testAccDomainConfig_encryptAtRestDefaultKey(rName, "OpenSearch_2.5", true), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain2), testAccCheckDomainEncrypted(true, &domain2), @@ -1318,7 +1323,7 @@ func TestAccOpenSearchDomain_Encryption_atRestEnable(t *testing.T) { ), }, { - Config: testAccDomainConfig_encryptAtRestDefaultKey(rName, "OpenSearch_1.1", false), + Config: testAccDomainConfig_encryptAtRestDefaultKey(rName, "OpenSearch_2.5", false), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain1), testAccCheckDomainEncrypted(false, &domain1), @@ -1412,14 +1417,14 @@ func TestAccOpenSearchDomain_Encryption_nodeToNodeEnable(t *testing.T) { CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccDomainConfig_nodeToNodeEncryption(rName, "OpenSearch_1.1", false), + Config: testAccDomainConfig_nodeToNodeEncryption(rName, "OpenSearch_2.5", false), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain1), testAccCheckNodeToNodeEncrypted(false, &domain1), ), }, { - Config: testAccDomainConfig_nodeToNodeEncryption(rName, 
"OpenSearch_1.1", true), + Config: testAccDomainConfig_nodeToNodeEncryption(rName, "OpenSearch_2.5", true), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain2), testAccCheckNodeToNodeEncrypted(true, &domain2), @@ -1427,7 +1432,7 @@ func TestAccOpenSearchDomain_Encryption_nodeToNodeEnable(t *testing.T) { ), }, { - Config: testAccDomainConfig_nodeToNodeEncryption(rName, "OpenSearch_1.1", false), + Config: testAccDomainConfig_nodeToNodeEncryption(rName, "OpenSearch_2.5", false), Check: resource.ComposeTestCheckFunc( testAccCheckDomainExists(ctx, resourceName, &domain1), testAccCheckNodeToNodeEncrypted(false, &domain1), @@ -1478,6 +1483,56 @@ func TestAccOpenSearchDomain_Encryption_nodeToNodeEnableLegacy(t *testing.T) { }) } +func TestAccOpenSearchDomain_offPeakWindowOptions(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := testAccRandomDomainName() + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheckIAMServiceLinkedRole(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDomainDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_offPeakWindowOptions(rName, 9, 30), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(ctx, resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.#", "1"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.0.window_start_time.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.0.window_start_time.0.hours", "9"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.0.window_start_time.0.minutes", "30"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName, + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_offPeakWindowOptions(rName, 10, 15), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(ctx, resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.#", "1"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.0.window_start_time.#", "1"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.0.window_start_time.0.hours", "10"), + resource.TestCheckResourceAttr(resourceName, "off_peak_window_options.0.off_peak_window.0.window_start_time.0.minutes", "15"), + ), + }, + }, + }) +} + func TestAccOpenSearchDomain_tags(t *testing.T) { ctx := acctest.Context(t) if testing.Short() { @@ -1927,7 +1982,7 @@ func testAccCheckDomainExists(ctx context.Context, n string, domain *opensearchs return fmt.Errorf("No OpenSearch Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) resp, err := tfopensearch.FindDomainByName(ctx, conn, rs.Primary.Attributes["domain_name"]) if err != nil { return fmt.Errorf("Error describing domain: %s", err.Error()) @@ -1947,7 +2002,7 @@ func testAccCheckDomainExists(ctx context.Context, n string, domain *opensearchs func testAccCheckDomainNotRecreated(domain1, domain2 *opensearchservice.DomainStatus) resource.TestCheckFunc { 
return func(s *terraform.State) error { /* - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) ic, err := conn.DescribeDomainConfig(&opensearchservice.DescribeDomainConfigInput{ DomainName: domain1.DomainName, @@ -1985,7 +2040,7 @@ func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn(ctx) _, err := tfopensearch.FindDomainByName(ctx, conn, rs.Primary.Attributes["domain_name"]) if tfresource.NotFound(err) { @@ -2016,7 +2071,7 @@ func testAccPreCheckIAMServiceLinkedRole(ctx context.Context, t *testing.T) { } func testAccPreCheckCognitoIdentityProvider(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn(ctx) input := &cognitoidentityprovider.ListUserPoolsInput{ MaxResults: aws.Int64(1), @@ -3324,8 +3379,6 @@ resource "aws_iam_role_policy_attachment" "test" { resource "aws_opensearch_domain" "test" { domain_name = %[1]q - engine_version = "OpenSearch_1.1" - %[2]s ebs_options { @@ -3340,3 +3393,26 @@ resource "aws_opensearch_domain" "test" { } `, rName, cognitoOptions) } + +func testAccDomainConfig_offPeakWindowOptions(rName string, h, m int) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = %[1]q + engine_version = "Elasticsearch_6.7" + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + off_peak_window_options { + off_peak_window { + window_start_time { + hours = %[2]d + minutes = %[3]d + } + } + } +} +`, rName, h, m) +} diff --git a/internal/service/opensearch/find.go b/internal/service/opensearch/find.go index 74948258e89..026019a345b 100644 --- a/internal/service/opensearch/find.go +++ b/internal/service/opensearch/find.go @@ -1,3 
+1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( diff --git a/internal/service/opensearch/flex.go b/internal/service/opensearch/flex.go index 77db447d765..27224b04ea9 100644 --- a/internal/service/opensearch/flex.go +++ b/internal/service/opensearch/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( diff --git a/internal/service/opensearch/generate.go b/internal/service/opensearch/generate.go index 3d0eae86610..a4f69c05c98 100644 --- a/internal/service/opensearch/generate.go +++ b/internal/service/opensearch/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsInIDElem=ARN -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=ARN -TagInTagsElem=TagList -UntagOp=RemoveTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package opensearch diff --git a/internal/service/opensearch/inbound_connection_accepter.go b/internal/service/opensearch/inbound_connection_accepter.go index 9a595c30cd2..ccb578d682a 100644 --- a/internal/service/opensearch/inbound_connection_accepter.go +++ b/internal/service/opensearch/inbound_connection_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -49,7 +52,7 @@ func ResourceInboundConnectionAccepter() *schema.Resource { } func resourceInboundConnectionAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) // Create the Inbound Connection acceptOpts := &opensearchservice.AcceptInboundConnectionInput{ @@ -60,7 +63,7 @@ func resourceInboundConnectionAccepterCreate(ctx context.Context, d *schema.Reso resp, err := conn.AcceptInboundConnectionWithContext(ctx, acceptOpts) if err != nil { - return diag.FromErr(fmt.Errorf("Error accepting Inbound Connection: %s", err)) + return diag.Errorf("accepting Inbound Connection: %s", err) } // Get the ID and store it @@ -69,19 +72,19 @@ func resourceInboundConnectionAccepterCreate(ctx context.Context, d *schema.Reso err = inboundConnectionWaitUntilActive(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)) if err != nil { - return diag.FromErr(fmt.Errorf("Error waiting for Inbound Connection to become active: %s", err)) + return diag.Errorf("waiting for Inbound Connection to become active: %s", err) } return resourceInboundConnectionRead(ctx, d, meta) } func resourceInboundConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) ccscRaw, statusCode, err := inboundConnectionRefreshState(ctx, conn, d.Id())() if err != nil { - return diag.FromErr(fmt.Errorf("Error reading Inbound Connection: %s", err)) + return diag.Errorf("reading Inbound Connection: %s", err) } ccsc := ccscRaw.(*opensearchservice.InboundConnection) @@ -93,7 +96,7 @@ func resourceInboundConnectionRead(ctx context.Context, d *schema.ResourceData, } func resourceInboundConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) req := &opensearchservice.DeleteInboundConnectionInput{ ConnectionId: aws.String(d.Id()), @@ -106,11 +109,11 @@ func resourceInboundConnectionDelete(ctx context.Context, d *schema.ResourceData } if err != nil { - return diag.FromErr(fmt.Errorf("Error deleting Inbound Connection (%s): %s", d.Id(), err)) + return diag.Errorf("deleting Inbound Connection (%s): %s", d.Id(), err) } if err := waitForInboundConnectionDeletion(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.FromErr(fmt.Errorf("Error waiting for VPC Peering Connection (%s) to be deleted: %s", d.Id(), err)) + return diag.Errorf("waiting for VPC Peering Connection (%s) to be deleted: %s", d.Id(), err) } return nil @@ -162,7 +165,7 @@ func inboundConnectionWaitUntilActive(ctx context.Context, conn *opensearchservi Timeout: timeout, } if _, err := stateConf.WaitForStateContext(ctx); err != nil { - return fmt.Errorf("Error waiting for Inbound Connection (%s) to become available: %s", id, err) + return fmt.Errorf("waiting for Inbound Connection (%s) to become available: %s", id, err) } return nil } diff --git a/internal/service/opensearch/inbound_connection_accepter_test.go b/internal/service/opensearch/inbound_connection_accepter_test.go index 7cc681aa10e..126d99eaa3b 100644 --- a/internal/service/opensearch/inbound_connection_accepter_test.go +++ b/internal/service/opensearch/inbound_connection_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch_test import ( @@ -72,8 +75,7 @@ func testAccInboundConnectionAccepterConfig(name string) string { pw := fmt.Sprintf("Aa1-%s", sdkacctest.RandString(10)) return fmt.Sprintf(` resource "aws_opensearch_domain" "domain_1" { - domain_name = "%s-1" - engine_version = "OpenSearch_1.1" + domain_name = "%s-1" cluster_config { instance_type = "t3.small.search" # supported in both aws and aws-us-gov @@ -109,8 +111,7 @@ resource "aws_opensearch_domain" "domain_1" { } resource "aws_opensearch_domain" "domain_2" { - domain_name = "%s-2" - engine_version = "OpenSearch_1.1" + domain_name = "%s-2" cluster_config { instance_type = "t3.small.search" # supported in both aws and aws-us-gov diff --git a/internal/service/opensearch/outbound_connection.go b/internal/service/opensearch/outbound_connection.go index 0cf923478ab..471044ba02b 100644 --- a/internal/service/opensearch/outbound_connection.go +++ b/internal/service/opensearch/outbound_connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -48,7 +51,7 @@ func ResourceOutboundConnection() *schema.Resource { } func resourceOutboundConnectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) // Create the Outbound Connection createOpts := &opensearchservice.CreateOutboundConnectionInput{ @@ -61,7 +64,7 @@ func resourceOutboundConnectionCreate(ctx context.Context, d *schema.ResourceDat resp, err := conn.CreateOutboundConnectionWithContext(ctx, createOpts) if err != nil { - return diag.FromErr(fmt.Errorf("Error creating Outbound Connection: %s", err)) + return diag.Errorf("creating Outbound Connection: %s", err) } // Get the ID and store it @@ -70,19 +73,19 @@ func resourceOutboundConnectionCreate(ctx context.Context, d *schema.ResourceDat err = outboundConnectionWaitUntilAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)) if err != nil { - return diag.FromErr(fmt.Errorf("Error waiting for Outbound Connection to become available: %s", err)) + return diag.Errorf("waiting for Outbound Connection to become available: %s", err) } return resourceOutboundConnectionRead(ctx, d, meta) } func resourceOutboundConnectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) ccscRaw, statusCode, err := outboundConnectionRefreshState(ctx, conn, d.Id())() if err != nil { - return diag.FromErr(fmt.Errorf("Error reading Outbound Connection: %s", err)) + return diag.Errorf("reading Outbound Connection: %s", err) } ccsc := ccscRaw.(*opensearchservice.OutboundConnection) @@ -103,7 +106,7 @@ func resourceOutboundConnectionRead(ctx context.Context, d *schema.ResourceData, } func resourceOutboundConnectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { - conn := meta.(*conns.AWSClient).OpenSearchConn() + conn := meta.(*conns.AWSClient).OpenSearchConn(ctx) req := &opensearchservice.DeleteOutboundConnectionInput{ ConnectionId: aws.String(d.Id()), @@ -116,11 +119,11 @@ func resourceOutboundConnectionDelete(ctx context.Context, d *schema.ResourceDat } if err != nil { - return diag.FromErr(fmt.Errorf("Error deleting Outbound Connection (%s): %s", d.Id(), err)) + return diag.Errorf("deleting Outbound Connection (%s): %s", d.Id(), err) } if err := waitForOutboundConnectionDeletion(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return diag.FromErr(fmt.Errorf("Error waiting for VPC Peering Connection (%s) to be deleted: %s", d.Id(), err)) + return diag.Errorf("waiting for VPC Peering Connection (%s) to be deleted: %s", d.Id(), err) } return nil @@ -182,7 +185,7 @@ func outboundConnectionWaitUntilAvailable(ctx context.Context, conn *opensearchs Timeout: timeout, } if _, err := stateConf.WaitForStateContext(ctx); err != nil { - return fmt.Errorf("Error waiting for Outbound Connection (%s) to become available: %s", id, err) + return fmt.Errorf("waiting for Outbound Connection (%s) to become available: %s", id, err) } return nil } diff --git a/internal/service/opensearch/outbound_connection_test.go b/internal/service/opensearch/outbound_connection_test.go index bcef6d1cdd7..01c9f0f5b97 100644 --- a/internal/service/opensearch/outbound_connection_test.go +++ b/internal/service/opensearch/outbound_connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch_test import ( @@ -72,8 +75,7 @@ func testAccOutboundConnectionConfig(name string) string { pw := fmt.Sprintf("Aa1-%s", sdkacctest.RandString(10)) return fmt.Sprintf(` resource "aws_opensearch_domain" "domain_1" { - domain_name = "%s-1" - engine_version = "OpenSearch_1.1" + domain_name = "%s-1" cluster_config { instance_type = "t3.small.search" # supported in both aws and aws-us-gov @@ -109,8 +111,7 @@ resource "aws_opensearch_domain" "domain_1" { } resource "aws_opensearch_domain" "domain_2" { - domain_name = "%s-2" - engine_version = "OpenSearch_1.1" + domain_name = "%s-2" cluster_config { instance_type = "t3.small.search" # supported in both aws and aws-us-gov diff --git a/internal/service/opensearch/service_package_gen.go b/internal/service/opensearch/service_package_gen.go index 71d03cacf3d..96f66fa6e58 100644 --- a/internal/service/opensearch/service_package_gen.go +++ b/internal/service/opensearch/service_package_gen.go @@ -5,6 +5,10 @@ package opensearch import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + opensearchservice_sdkv1 "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -61,4 +65,13 @@ func (p *servicePackage) ServicePackageName() string { return names.OpenSearch } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*opensearchservice_sdkv1.OpenSearchService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return opensearchservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/opensearch/status.go b/internal/service/opensearch/status.go index e031018083b..57d38fd8716 100644 --- a/internal/service/opensearch/status.go +++ b/internal/service/opensearch/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( diff --git a/internal/service/opensearch/sweep.go b/internal/service/opensearch/sweep.go index 62ac33fc350..5de01631b4d 100644 --- a/internal/service/opensearch/sweep.go +++ b/internal/service/opensearch/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/opensearchservice" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,13 +26,13 @@ func init() { func sweepDomains(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).OpenSearchConn() + conn := client.OpenSearchConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -88,7 +90,7 @@ func sweepDomains(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Domains for %s: %w", region, err)) } diff --git a/internal/service/opensearch/tags_gen.go b/internal/service/opensearch/tags_gen.go index cb3c462c992..0a06050c909 100644 --- a/internal/service/opensearch/tags_gen.go +++ b/internal/service/opensearch/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists opensearch service tags. +// listTags lists opensearch service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn opensearchserviceiface.OpenSearchServiceAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn opensearchserviceiface.OpenSearchServiceAPI, identifier string) (tftags.KeyValueTags, error) { input := &opensearchservice.ListTagsInput{ ARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn opensearchserviceiface.OpenSearchService // ListTags lists opensearch service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).OpenSearchConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).OpenSearchConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*opensearchservice.Tag) tftags.Key return tftags.New(ctx, m) } -// GetTagsIn returns opensearch service tags from Context. +// getTagsIn returns opensearch service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*opensearchservice.Tag { +func getTagsIn(ctx context.Context) []*opensearchservice.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*opensearchservice.Tag { return nil } -// SetTagsOut sets opensearch service tags in Context. -func SetTagsOut(ctx context.Context, tags []*opensearchservice.Tag) { +// setTagsOut sets opensearch service tags in Context. +func setTagsOut(ctx context.Context, tags []*opensearchservice.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates opensearch service tags. +// updateTags updates opensearch service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn opensearchserviceiface.OpenSearchServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn opensearchserviceiface.OpenSearchServiceAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn opensearchserviceiface.OpenSearchServi // UpdateTags updates opensearch service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).OpenSearchConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).OpenSearchConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/opensearch/wait.go b/internal/service/opensearch/wait.go index 943beb14d24..84197efa6c5 100644 --- a/internal/service/opensearch/wait.go +++ b/internal/service/opensearch/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opensearch import ( @@ -38,7 +41,7 @@ func waitUpgradeSucceeded(ctx context.Context, conn *opensearchservice.OpenSearc func WaitForDomainCreation(ctx context.Context, conn *opensearchservice.OpenSearchService, domainName string, timeout time.Duration) error { var out *opensearchservice.DomainStatus - err := retry.RetryContext(ctx, timeout, func() *retry.RetryError { + err := tfresource.Retry(ctx, timeout, func() *retry.RetryError { var err error out, err = FindDomainByName(ctx, conn, domainName) if err != nil { @@ -50,26 +53,27 @@ func WaitForDomainCreation(ctx context.Context, conn *opensearchservice.OpenSear } return retry.RetryableError( - fmt.Errorf("%q: Timeout while waiting for the domain to be created", domainName)) - }) + fmt.Errorf("%q: Timeout while waiting for OpenSearch Domain to be created", domainName)) + }, tfresource.WithDelay(10*time.Minute), tfresource.WithPollInterval(10*time.Second)) + if tfresource.TimedOut(err) { out, err = FindDomainByName(ctx, conn, domainName) if err != nil { - return fmt.Errorf("Error describing OpenSearch domain: %w", err) + return fmt.Errorf("describing OpenSearch Domain: %w", err) } if !aws.BoolValue(out.Processing) && (out.Endpoint != nil || out.Endpoints != nil) { return nil } } if err != nil { - return fmt.Errorf("Error waiting for OpenSearch domain to be created: %w", err) + return fmt.Errorf("waiting for OpenSearch Domain to be created: %w", err) } return nil } func waitForDomainUpdate(ctx context.Context, conn *opensearchservice.OpenSearchService, domainName string, timeout time.Duration) error { var out *opensearchservice.DomainStatus - err := retry.RetryContext(ctx, timeout, func() *retry.RetryError { + err := tfresource.Retry(ctx, timeout, func() *retry.RetryError { var err error out, err = FindDomainByName(ctx, conn, domainName) if err != nil { @@ -82,25 +86,26 @@ func waitForDomainUpdate(ctx context.Context, conn *opensearchservice.OpenSearch return 
retry.RetryableError( fmt.Errorf("%q: Timeout while waiting for changes to be processed", domainName)) - }) + }, tfresource.WithDelay(10*time.Minute), tfresource.WithPollInterval(10*time.Second)) + if tfresource.TimedOut(err) { out, err = FindDomainByName(ctx, conn, domainName) if err != nil { - return fmt.Errorf("Error describing OpenSearch domain: %w", err) + return fmt.Errorf("describing OpenSearch Domain: %w", err) } if !aws.BoolValue(out.Processing) { return nil } } if err != nil { - return fmt.Errorf("Error waiting for OpenSearch domain changes to be processed: %w", err) + return fmt.Errorf("waiting for OpenSearch Domain changes to be processed: %w", err) } return nil } func waitForDomainDelete(ctx context.Context, conn *opensearchservice.OpenSearchService, domainName string, timeout time.Duration) error { var out *opensearchservice.DomainStatus - err := retry.RetryContext(ctx, timeout, func() *retry.RetryError { + err := tfresource.Retry(ctx, timeout, func() *retry.RetryError { var err error out, err = FindDomainByName(ctx, conn, domainName) @@ -115,8 +120,8 @@ func waitForDomainDelete(ctx context.Context, conn *opensearchservice.OpenSearch return nil } - return retry.RetryableError(fmt.Errorf("timeout while waiting for the OpenSearch domain %q to be deleted", domainName)) - }) + return retry.RetryableError(fmt.Errorf("timeout while waiting for the OpenSearch Domain %q to be deleted", domainName)) + }, tfresource.WithDelay(10*time.Minute), tfresource.WithPollInterval(10*time.Second)) if tfresource.TimedOut(err) { out, err = FindDomainByName(ctx, conn, domainName) @@ -124,7 +129,7 @@ func waitForDomainDelete(ctx context.Context, conn *opensearchservice.OpenSearch if tfresource.NotFound(err) { return nil } - return fmt.Errorf("Error describing OpenSearch domain: %s", err) + return fmt.Errorf("describing OpenSearch Domain: %s", err) } if out != nil && !aws.BoolValue(out.Processing) { return nil @@ -132,7 +137,7 @@ func waitForDomainDelete(ctx context.Context, 
conn *opensearchservice.OpenSearch } if err != nil { - return fmt.Errorf("Error waiting for OpenSearch domain to be deleted: %s", err) + return fmt.Errorf("waiting for OpenSearch Domain to be deleted: %s", err) } // opensearch maintains information about the domain in multiple (at least 2) places that need diff --git a/internal/service/opensearchserverless/access_policy.go b/internal/service/opensearchserverless/access_policy.go new file mode 100644 index 00000000000..14f37c7e28f --- /dev/null +++ b/internal/service/opensearchserverless/access_policy.go @@ -0,0 +1,267 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "errors" + "fmt" + "strings" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// 
@FrameworkResource +func newResourceAccessPolicy(_ context.Context) (resource.ResourceWithConfigure, error) { + return &resourceAccessPolicy{}, nil +} + +type resourceAccessPolicyData struct { + Description types.String `tfsdk:"description"` + ID types.String `tfsdk:"id"` + Name types.String `tfsdk:"name"` + Policy types.String `tfsdk:"policy"` + PolicyVersion types.String `tfsdk:"policy_version"` + Type types.String `tfsdk:"type"` +} + +const ( + ResNameAccessPolicy = "Access Policy" +) + +type resourceAccessPolicy struct { + framework.ResourceWithConfigure +} + +func (r *resourceAccessPolicy) Metadata(_ context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_opensearchserverless_access_policy" +} + +func (r *resourceAccessPolicy) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "description": schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 1000), + }, + }, + "id": framework.IDAttribute(), + "name": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(3, 32), + }, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + }, + "policy": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 20480), + }, + }, + "policy_version": schema.StringAttribute{ + Computed: true, + }, + "type": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + enum.FrameworkValidate[awstypes.AccessPolicyType](), + }, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + }, + }, + } +} + +func (r *resourceAccessPolicy) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var plan resourceAccessPolicyData + + 
resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + + if resp.Diagnostics.HasError() { + return + } + + conn := r.Meta().OpenSearchServerlessClient(ctx) + + in := &opensearchserverless.CreateAccessPolicyInput{ + ClientToken: aws.String(id.UniqueId()), + Name: aws.String(plan.Name.ValueString()), + Policy: aws.String(plan.Policy.ValueString()), + Type: awstypes.AccessPolicyType(plan.Type.ValueString()), + } + + if !plan.Description.IsNull() { + in.Description = aws.String(plan.Description.ValueString()) + } + + out, err := conn.CreateAccessPolicy(ctx, in) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionCreating, ResNameAccessPolicy, plan.Name.String(), nil), + err.Error(), + ) + return + } + + state := plan + resp.Diagnostics.Append(state.refreshFromOutput(ctx, out.AccessPolicyDetail)...) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceAccessPolicy) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceAccessPolicyData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findAccessPolicyByNameAndType(ctx, conn, state.ID.ValueString(), state.Type.ValueString()) + if tfresource.NotFound(err) { + resp.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + resp.State.RemoveResource(ctx) + return + } + + resp.Diagnostics.Append(state.refreshFromOutput(ctx, out)...) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceAccessPolicy) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var plan, state resourceAccessPolicyData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
+	if resp.Diagnostics.HasError() {
+		return
+	}
+
+	if !plan.Description.Equal(state.Description) ||
+		!plan.Policy.Equal(state.Policy) {
+		input := &opensearchserverless.UpdateAccessPolicyInput{
+			ClientToken:   aws.String(id.UniqueId()),
+			Name:          flex.StringFromFramework(ctx, plan.Name),
+			PolicyVersion: flex.StringFromFramework(ctx, state.PolicyVersion),
+			Type:          awstypes.AccessPolicyType(plan.Type.ValueString()),
+		}
+
+		if !plan.Description.Equal(state.Description) {
+			input.Description = aws.String(plan.Description.ValueString())
+		}
+
+		if !plan.Policy.Equal(state.Policy) {
+			input.Policy = aws.String(plan.Policy.ValueString())
+		}
+
+		out, err := conn.UpdateAccessPolicy(ctx, input)
+
+		if err != nil {
+			resp.Diagnostics.AddError(fmt.Sprintf("updating Access Policy (%s)", plan.Name.ValueString()), err.Error())
+			return
+		}
+		resp.Diagnostics.Append(state.refreshFromOutput(ctx, out.AccessPolicyDetail)...)
+	}
+
+	resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
+}
+
+func (r *resourceAccessPolicy) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
+	conn := r.Meta().OpenSearchServerlessClient(ctx)
+
+	var state resourceAccessPolicyData
+	resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+	if resp.Diagnostics.HasError() {
+		return
+	}
+
+	_, err := conn.DeleteAccessPolicy(ctx, &opensearchserverless.DeleteAccessPolicyInput{
+		ClientToken: aws.String(id.UniqueId()),
+		Name:        aws.String(state.Name.ValueString()),
+		Type:        awstypes.AccessPolicyType(state.Type.ValueString()),
+	})
+	if err != nil {
+		var nfe *awstypes.ResourceNotFoundException
+		if errors.As(err, &nfe) {
+			return
+		}
+		resp.Diagnostics.AddError(
+			create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionDeleting, ResNameAccessPolicy, state.Name.String(), nil),
+			err.Error(),
+		)
+	}
+}
+
+func (r *resourceAccessPolicy) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
+	parts := strings.Split(req.ID, idSeparator)
+	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
+		err := fmt.Errorf("unexpected format for ID (%[1]s), expected access-policy-name%[2]saccess-policy-type", req.ID, idSeparator)
+		resp.Diagnostics.AddError(fmt.Sprintf("importing Access Policy (%s)", req.ID), err.Error())
+		return
+	}
+
+	state := resourceAccessPolicyData{
+		ID:   types.StringValue(parts[0]),
+		Name: types.StringValue(parts[0]),
+		Type: types.StringValue(parts[1]),
+	}
+
+	diags := resp.State.Set(ctx, &state)
+	resp.Diagnostics.Append(diags...)
+	if resp.Diagnostics.HasError() {
+		return
+	}
+}
+
+// refreshFromOutput writes state data from an AWS response object
+func (rd *resourceAccessPolicyData) refreshFromOutput(ctx context.Context, out *awstypes.AccessPolicyDetail) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	if out == nil {
+		return diags
+	}
+
+	rd.ID = flex.StringToFramework(ctx, out.Name)
+	rd.Description = flex.StringToFramework(ctx, out.Description)
+	rd.Name = flex.StringToFramework(ctx, out.Name)
+	rd.Type = flex.StringValueToFramework(ctx, out.Type)
+	rd.PolicyVersion = flex.StringToFramework(ctx, out.PolicyVersion)
+
+	policyBytes, err := out.Policy.MarshalSmithyDocument()
+	if err != nil {
+		diags.AddError(fmt.Sprintf("refreshing state for Access Policy (%s)", rd.Name), err.Error())
+		return diags
+	}
+
+	p := string(policyBytes)
+
+	rd.Policy = flex.StringToFramework(ctx, &p)
+
+	return diags
+}
diff --git a/internal/service/opensearchserverless/access_policy_data_source.go b/internal/service/opensearchserverless/access_policy_data_source.go
new file mode 100644
index 00000000000..4f52b03fa10
--- /dev/null
+++ b/internal/service/opensearchserverless/access_policy_data_source.go
@@ -0,0 +1,113 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkDataSource(name="Access Policy") +func newDataSourceAccessPolicy(context.Context) (datasource.DataSourceWithConfigure, error) { + return &dataSourceAccessPolicy{}, nil +} + +const ( + DSNameAccessPolicy = "Access Policy Data Source" +) + +type dataSourceAccessPolicy struct { + framework.DataSourceWithConfigure +} + +func (d *dataSourceAccessPolicy) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { // nosemgrep:ci.meta-in-func-name + resp.TypeName = "aws_opensearchserverless_access_policy" +} + +func (d *dataSourceAccessPolicy) Schema(_ context.Context, _ datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "description": schema.StringAttribute{ + Computed: true, + }, + "id": framework.IDAttribute(), + "name": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(3, 32), + }, + }, + "policy": schema.StringAttribute{ + Computed: true, + }, + "policy_version": schema.StringAttribute{ + Computed: true, + }, + "type": schema.StringAttribute{ + Required: 
true,
+				Validators: []validator.String{
+					enum.FrameworkValidate[awstypes.AccessPolicyType](),
+				},
+			},
+		},
+	}
+}
+
+func (d *dataSourceAccessPolicy) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
+	conn := d.Meta().OpenSearchServerlessClient(ctx)
+
+	var data dataSourceAccessPolicyData
+	resp.Diagnostics.Append(req.Config.Get(ctx, &data)...)
+	if resp.Diagnostics.HasError() {
+		return
+	}
+
+	out, err := findAccessPolicyByNameAndType(ctx, conn, data.Name.ValueString(), data.Type.ValueString())
+	if err != nil {
+		resp.Diagnostics.AddError(
+			create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, DSNameAccessPolicy, data.Name.String(), err),
+			err.Error(),
+		)
+		return
+	}
+
+	data.ID = flex.StringToFramework(ctx, out.Name)
+	data.Description = flex.StringToFramework(ctx, out.Description)
+	data.Name = flex.StringToFramework(ctx, out.Name)
+	data.Type = flex.StringValueToFramework(ctx, out.Type)
+	data.PolicyVersion = flex.StringToFramework(ctx, out.PolicyVersion)
+
+	policyBytes, err := out.Policy.MarshalSmithyDocument()
+	if err != nil {
+		resp.Diagnostics.AddError(
+			create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, DSNameAccessPolicy, data.Name.String(), err),
+			err.Error(),
+		)
+		return
+	}
+
+	pb := string(policyBytes)
+	data.Policy = flex.StringToFramework(ctx, &pb)
+
+	resp.Diagnostics.Append(resp.State.Set(ctx, &data)...)
+} + +type dataSourceAccessPolicyData struct { + Description types.String `tfsdk:"description"` + ID types.String `tfsdk:"id"` + Name types.String `tfsdk:"name"` + Policy types.String `tfsdk:"policy"` + PolicyVersion types.String `tfsdk:"policy_version"` + Type types.String `tfsdk:"type"` +} diff --git a/internal/service/opensearchserverless/access_policy_data_source_test.go b/internal/service/opensearchserverless/access_policy_data_source_test.go new file mode 100644 index 00000000000..97c0df34186 --- /dev/null +++ b/internal/service/opensearchserverless/access_policy_data_source_test.go @@ -0,0 +1,87 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessAccessPolicyDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + + var accesspolicy types.AccessPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_opensearchserverless_access_policy.test" + resourceName := "aws_opensearchserverless_access_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAccessPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAccessPolicyDataSourceConfig_basic(rName, "data"), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAccessPolicyExists(ctx, dataSourceName, &accesspolicy), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "policy", resourceName, "policy"), + resource.TestCheckResourceAttrPair(dataSourceName, "policy_version", resourceName, "policy_version"), + ), + }, + }, + }) +} + +func testAccAccessPolicyDataSourceConfig_basic(rName, policyType string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +resource "aws_opensearchserverless_access_policy" "test" { + name = %[1]q + type = %[2]q + description = %[1]q + policy = jsonencode([ + { + "Rules" : [ + { + "ResourceType" : "index", + "Resource" : [ + "index/books/*" + ], + "Permission" : [ + "aoss:CreateIndex", + "aoss:ReadDocument", + "aoss:UpdateIndex", + "aoss:DeleteIndex", + "aoss:WriteDocument" + ] + } + ], + "Principal" : [ + "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:user/admin" + ] + } + ]) +} + +data "aws_opensearchserverless_access_policy" "test" { + name = aws_opensearchserverless_access_policy.test.name + type = aws_opensearchserverless_access_policy.test.type +} +`, rName, policyType) +} diff --git a/internal/service/opensearchserverless/access_policy_test.go b/internal/service/opensearchserverless/access_policy_test.go new file mode 100644 index 00000000000..a051f1688ca --- /dev/null +++ b/internal/service/opensearchserverless/access_policy_test.go @@ -0,0 +1,268 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfopensearchserverless "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessAccessPolicy_basic(t *testing.T) { + ctx := acctest.Context(t) + var accesspolicy types.AccessPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_access_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckAccessPolicy(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAccessPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAccessPolicyConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAccessPolicyExists(ctx, resourceName, &accesspolicy), + resource.TestCheckResourceAttr(resourceName, "type", "data"), + ), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: testAccAccessPolicyImportStateIdFunc(resourceName), + ImportState: true, + ImportStateVerify: true, + }, + }, + 
}) +} + +func TestAccOpenSearchServerlessAccessPolicy_update(t *testing.T) { + ctx := acctest.Context(t) + var accesspolicy types.AccessPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_access_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckAccessPolicy(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAccessPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAccessPolicyConfig_update(rName, "description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAccessPolicyExists(ctx, resourceName, &accesspolicy), + resource.TestCheckResourceAttr(resourceName, "type", "data"), + resource.TestCheckResourceAttr(resourceName, "description", "description"), + ), + }, + { + Config: testAccAccessPolicyConfig_update(rName, "description updated"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAccessPolicyExists(ctx, resourceName, &accesspolicy), + resource.TestCheckResourceAttr(resourceName, "type", "data"), + resource.TestCheckResourceAttr(resourceName, "description", "description updated"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessAccessPolicy_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var accesspolicy types.AccessPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_access_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckAccessPolicy(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + 
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAccessPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAccessPolicyConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAccessPolicyExists(ctx, resourceName, &accesspolicy), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfopensearchserverless.ResourceAccessPolicy, resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAccessPolicyDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearchserverless_access_policy" { + continue + } + + _, err := tfopensearchserverless.FindAccessPolicyByNameAndType(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["type"]) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingDestroyed, tfopensearchserverless.ResNameAccessPolicy, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckAccessPolicyExists(ctx context.Context, name string, accesspolicy *types.AccessPolicyDetail) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameAccessPolicy, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameAccessPolicy, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + resp, err := tfopensearchserverless.FindAccessPolicyByNameAndType(ctx, conn, 
rs.Primary.ID, rs.Primary.Attributes["type"]) + + if err != nil { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameAccessPolicy, rs.Primary.ID, err) + } + + *accesspolicy = *resp + + return nil + } +} + +func testAccAccessPolicyImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s", rs.Primary.Attributes["name"], rs.Primary.Attributes["type"]), nil + } +} + +func testAccPreCheckAccessPolicy(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + input := &opensearchserverless.ListAccessPoliciesInput{ + Type: types.AccessPolicyTypeData, + } + _, err := conn.ListAccessPolicies(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testAccAccessPolicyConfig_basic(rName string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +resource "aws_opensearchserverless_access_policy" "test" { + name = %[1]q + type = "data" + policy = jsonencode([ + { + "Rules" : [ + { + "ResourceType" : "index", + "Resource" : [ + "index/books/*" + ], + "Permission" : [ + "aoss:CreateIndex", + "aoss:ReadDocument", + "aoss:UpdateIndex", + "aoss:DeleteIndex", + "aoss:WriteDocument" + ] + } + ], + "Principal" : [ + "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:user/admin" + ] + } + ]) +} +`, rName) +} + +func testAccAccessPolicyConfig_update(rName, description string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +resource 
"aws_opensearchserverless_access_policy" "test" { + name = %[1]q + type = "data" + description = %[2]q + policy = jsonencode([ + { + "Rules" : [ + { + "ResourceType" : "index", + "Resource" : [ + "index/books/*" + ], + "Permission" : [ + "aoss:CreateIndex", + "aoss:ReadDocument", + "aoss:UpdateIndex", + "aoss:DeleteIndex", + "aoss:WriteDocument" + ] + } + ], + "Principal" : [ + "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:user/admin" + ] + } + ]) +} +`, rName, description) +} diff --git a/internal/service/opensearchserverless/collection.go b/internal/service/opensearchserverless/collection.go new file mode 100644 index 00000000000..74039e4e387 --- /dev/null +++ b/internal/service/opensearchserverless/collection.go @@ -0,0 +1,357 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "errors" + "regexp" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-provider-aws/internal/create" + 
"github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkResource(name="Collection") +// @Tags(identifierAttribute="arn") +func newResourceCollection(_ context.Context) (resource.ResourceWithConfigure, error) { + r := resourceCollection{} + r.SetDefaultCreateTimeout(20 * time.Minute) + r.SetDefaultDeleteTimeout(20 * time.Minute) + + return &r, nil +} + +type resourceCollectionData struct { + ARN types.String `tfsdk:"arn"` + CollectionEndpoint types.String `tfsdk:"collection_endpoint"` + DashboardEndpoint types.String `tfsdk:"dashboard_endpoint"` + Description types.String `tfsdk:"description"` + ID types.String `tfsdk:"id"` + KmsKeyARN types.String `tfsdk:"kms_key_arn"` + Name types.String `tfsdk:"name"` + Tags types.Map `tfsdk:"tags"` + TagsAll types.Map `tfsdk:"tags_all"` + Timeouts timeouts.Value `tfsdk:"timeouts"` + Type types.String `tfsdk:"type"` +} + +const ( + ResNameCollection = "Collection" +) + +type resourceCollection struct { + framework.ResourceWithConfigure + framework.WithTimeouts +} + +func (r *resourceCollection) Metadata(_ context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_opensearchserverless_collection" +} + +func (r *resourceCollection) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "arn": framework.ARNAttributeComputedOnly(), + "collection_endpoint": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + 
stringplanmodifier.UseStateForUnknown(), + }, + }, + "dashboard_endpoint": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "description": schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(0, 1000), + }, + }, + "id": framework.IDAttribute(), + "kms_key_arn": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + }, + "name": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + Validators: []validator.String{ + stringvalidator.LengthBetween(3, 32), + stringvalidator.RegexMatches(regexp.MustCompile(`^[a-z][a-z0-9-]+$`), + `must start with a lower case letter and can include any lower case letter, number, or "-"`), + }, + }, + names.AttrTags: tftags.TagsAttribute(), + names.AttrTagsAll: tftags.TagsAttributeComputedOnly(), + "type": schema.StringAttribute{ + Optional: true, + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + stringplanmodifier.UseStateForUnknown(), + }, + Validators: []validator.String{ + enum.FrameworkValidate[awstypes.CollectionType](), + }, + }, + }, + Blocks: map[string]schema.Block{ + "timeouts": timeouts.Block(ctx, timeouts.Opts{ + Create: true, + Delete: true, + }), + }, + } +} + +func (r *resourceCollection) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var plan resourceCollectionData + + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
+ + if resp.Diagnostics.HasError() { + return + } + + conn := r.Meta().OpenSearchServerlessClient(ctx) + + in := &opensearchserverless.CreateCollectionInput{ + ClientToken: aws.String(id.UniqueId()), + Name: aws.String(plan.Name.ValueString()), + Tags: getTagsIn(ctx), + } + + if !plan.Description.IsNull() { + in.Description = aws.String(plan.Description.ValueString()) + } + + if !plan.Type.IsNull() { + in.Type = awstypes.CollectionType(plan.Type.ValueString()) + } + + out, err := conn.CreateCollection(ctx, in) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionCreating, ResNameCollection, plan.Name.ValueString(), err), + err.Error(), + ) + return + } + + state := plan + state.ID = flex.StringToFramework(ctx, out.CreateCollectionDetail.Id) + + createTimeout := r.CreateTimeout(ctx, plan.Timeouts) + waitOut, err := waitCollectionCreated(ctx, conn, aws.ToString(out.CreateCollectionDetail.Id), createTimeout) + + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionWaitingForCreation, ResNameCollection, plan.Name.ValueString(), err), + err.Error(), + ) + return + } + + state.ARN = flex.StringToFramework(ctx, waitOut.Arn) + state.CollectionEndpoint = flex.StringToFramework(ctx, waitOut.CollectionEndpoint) + state.DashboardEndpoint = flex.StringToFramework(ctx, waitOut.DashboardEndpoint) + state.Description = flex.StringToFramework(ctx, waitOut.Description) + state.KmsKeyARN = flex.StringToFramework(ctx, waitOut.KmsKeyArn) + state.Name = flex.StringToFramework(ctx, waitOut.Name) + state.Type = flex.StringValueToFramework(ctx, waitOut.Type) + + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) 
+} + +func (r *resourceCollection) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceCollectionData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findCollectionByID(ctx, conn, state.ID.ValueString()) + if tfresource.NotFound(err) { + resp.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + resp.State.RemoveResource(ctx) + return + } + + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, ResNameCollection, state.ID.ValueString(), err), + err.Error(), + ) + return + } + + state.ARN = flex.StringToFramework(ctx, out.Arn) + state.CollectionEndpoint = flex.StringToFramework(ctx, out.CollectionEndpoint) + state.DashboardEndpoint = flex.StringToFramework(ctx, out.DashboardEndpoint) + state.Description = flex.StringToFramework(ctx, out.Description) + state.ID = flex.StringToFramework(ctx, out.Id) + state.KmsKeyARN = flex.StringToFramework(ctx, out.KmsKeyArn) + state.Name = flex.StringToFramework(ctx, out.Name) + state.Type = flex.StringValueToFramework(ctx, out.Type) + + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceCollection) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var plan, state resourceCollectionData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
+ if resp.Diagnostics.HasError() { + return + } + + if !plan.Description.Equal(state.Description) { + input := &opensearchserverless.UpdateCollectionInput{ + ClientToken: aws.String(id.UniqueId()), + Id: flex.StringFromFramework(ctx, plan.ID), + Description: flex.StringFromFramework(ctx, plan.Description), + } + + out, err := conn.UpdateCollection(ctx, input) + + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionUpdating, ResNameCollection, state.ID.ValueString(), err), + err.Error(), + ) + return + } + + plan.ARN = flex.StringToFramework(ctx, out.UpdateCollectionDetail.Arn) + plan.Description = flex.StringToFramework(ctx, out.UpdateCollectionDetail.Description) + plan.ID = flex.StringToFramework(ctx, out.UpdateCollectionDetail.Id) + plan.Name = flex.StringToFramework(ctx, out.UpdateCollectionDetail.Name) + plan.Type = flex.StringValueToFramework(ctx, out.UpdateCollectionDetail.Type) + } + + resp.Diagnostics.Append(resp.State.Set(ctx, &plan)...) +} + +func (r *resourceCollection) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceCollectionData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
+ if resp.Diagnostics.HasError() { + return + } + + _, err := conn.DeleteCollection(ctx, &opensearchserverless.DeleteCollectionInput{ + ClientToken: aws.String(id.UniqueId()), + Id: aws.String(state.ID.ValueString()), + }) + + if err != nil { + var nfe *awstypes.ResourceNotFoundException + if errors.As(err, &nfe) { + return + } + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionDeleting, ResNameCollection, state.Name.ValueString(), err), + err.Error(), + ) + return + } + + deleteTimeout := r.DeleteTimeout(ctx, state.Timeouts) + _, err = waitCollectionDeleted(ctx, conn, state.ID.ValueString(), deleteTimeout) + + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionWaitingForDeletion, ResNameCollection, state.Name.ValueString(), err), + err.Error(), + ) + return + } +} + +func (r *resourceCollection) ModifyPlan(ctx context.Context, req resource.ModifyPlanRequest, resp *resource.ModifyPlanResponse) { + r.SetTagsAll(ctx, req, resp) +} + +func (r *resourceCollection) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { + resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp) +} + +func waitCollectionCreated(ctx context.Context, conn *opensearchserverless.Client, id string, timeout time.Duration) (*awstypes.CollectionDetail, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.CollectionStatusCreating), + Target: enum.Slice(awstypes.CollectionStatusActive), + Refresh: statusCollection(ctx, conn, id), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*awstypes.CollectionDetail); ok { + return output, err + } + + return nil, err +} + +func waitCollectionDeleted(ctx context.Context, conn *opensearchserverless.Client, id string, timeout time.Duration) 
(*awstypes.CollectionDetail, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.CollectionStatusDeleting), + Target: []string{}, + Refresh: statusCollection(ctx, conn, id), + Timeout: timeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*awstypes.CollectionDetail); ok { + return output, err + } + + return nil, err +} + +func statusCollection(ctx context.Context, conn *opensearchserverless.Client, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := findCollectionByID(ctx, conn, id) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, string(output.Status), nil + } +} diff --git a/internal/service/opensearchserverless/collection_data_source.go b/internal/service/opensearchserverless/collection_data_source.go new file mode 100644 index 00000000000..74c61741b8d --- /dev/null +++ b/internal/service/opensearchserverless/collection_data_source.go @@ -0,0 +1,173 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkDataSource(name="Collection") +func newDataSourceCollection(context.Context) (datasource.DataSourceWithConfigure, error) { + return &dataSourceCollection{}, nil +} + +const ( + DSNameCollection = "Collection Data Source" +) + +type dataSourceCollection struct { + framework.DataSourceWithConfigure +} + +func (d *dataSourceCollection) Metadata(_ context.Context, _ datasource.MetadataRequest, resp *datasource.MetadataResponse) { // nosemgrep:ci.meta-in-func-name + resp.TypeName = "aws_opensearchserverless_collection" +} + +func (d *dataSourceCollection) Schema(_ context.Context, _ datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "arn": framework.ARNAttributeComputedOnly(), + "collection_endpoint": schema.StringAttribute{ + Computed: true, + }, + "created_date": schema.StringAttribute{ + Computed: true, + }, + "dashboard_endpoint": schema.StringAttribute{ + Computed: true, + }, + "description": schema.StringAttribute{ + Computed: true, + }, + 
"id": schema.StringAttribute{ + Optional: true, + Computed: true, + Validators: []validator.String{ + stringvalidator.ConflictsWith( + path.MatchRelative().AtParent().AtName("name"), + ), + stringvalidator.ExactlyOneOf( + path.MatchRelative().AtParent().AtName("name"), + ), + }, + }, + "kms_key_arn": schema.StringAttribute{ + Computed: true, + }, + "last_modified_date": schema.StringAttribute{ + Computed: true, + }, + "name": schema.StringAttribute{ + Optional: true, + Computed: true, + Validators: []validator.String{ + stringvalidator.ConflictsWith( + path.MatchRelative().AtParent().AtName("id"), + ), + }, + }, + names.AttrTags: tftags.TagsAttributeComputedOnly(), + "type": schema.StringAttribute{ + Computed: true, + }, + }, + } +} +func (d *dataSourceCollection) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + conn := d.Meta().OpenSearchServerlessClient(ctx) + + var data dataSourceCollectionData + resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) 
+ if resp.Diagnostics.HasError() { + return + } + + var out *awstypes.CollectionDetail + + if !data.ID.IsNull() && !data.ID.IsUnknown() { + output, err := findCollectionByID(ctx, conn, data.ID.ValueString()) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, DSNameCollection, data.ID.String(), err), + err.Error(), + ) + return + } + + out = output + } + + if !data.Name.IsNull() && !data.Name.IsUnknown() { + output, err := findCollectionByName(ctx, conn, data.Name.ValueString()) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, DSNameCollection, data.Name.String(), err), + err.Error(), + ) + return + } + + out = output + } + + data.ARN = flex.StringToFramework(ctx, out.Arn) + data.CollectionEndpoint = flex.StringToFramework(ctx, out.CollectionEndpoint) + data.DashboardEndpoint = flex.StringToFramework(ctx, out.DashboardEndpoint) + data.Description = flex.StringToFramework(ctx, out.Description) + data.ID = flex.StringToFramework(ctx, out.Id) + data.KmsKeyARN = flex.StringToFramework(ctx, out.KmsKeyArn) + data.Name = flex.StringToFramework(ctx, out.Name) + data.Type = flex.StringValueToFramework(ctx, out.Type) + + createdDate := time.UnixMilli(aws.ToInt64(out.CreatedDate)) + data.CreatedDate = flex.StringValueToFramework(ctx, createdDate.Format(time.RFC3339)) + + lastModifiedDate := time.UnixMilli(aws.ToInt64(out.LastModifiedDate)) + data.LastModifiedDate = flex.StringValueToFramework(ctx, lastModifiedDate.Format(time.RFC3339)) + + ignoreTagsConfig := d.Meta().IgnoreTagsConfig + tags, err := listTags(ctx, conn, aws.ToString(out.Arn)) + + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, DSNameCollection, data.ID.String(), err), + err.Error(), + ) + return + } + + tags = tags.IgnoreConfig(ignoreTagsConfig) + data.Tags = 
flex.FlattenFrameworkStringValueMapLegacy(ctx, tags.Map()) + + resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) +} + +type dataSourceCollectionData struct { + ARN types.String `tfsdk:"arn"` + CollectionEndpoint types.String `tfsdk:"collection_endpoint"` + CreatedDate types.String `tfsdk:"created_date"` + DashboardEndpoint types.String `tfsdk:"dashboard_endpoint"` + Description types.String `tfsdk:"description"` + ID types.String `tfsdk:"id"` + KmsKeyARN types.String `tfsdk:"kms_key_arn"` + LastModifiedDate types.String `tfsdk:"last_modified_date"` + Name types.String `tfsdk:"name"` + Tags types.Map `tfsdk:"tags"` + Type types.String `tfsdk:"type"` +} diff --git a/internal/service/opensearchserverless/collection_data_source_test.go b/internal/service/opensearchserverless/collection_data_source_test.go new file mode 100644 index 00000000000..8635998e569 --- /dev/null +++ b/internal/service/opensearchserverless/collection_data_source_test.go @@ -0,0 +1,138 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessCollectionDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var collection types.CollectionDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_opensearchserverless_collection.test" + resourceName := "aws_opensearchserverless_collection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollectionDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollectionDataSourceConfig_basic(rName, "encryption"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, dataSourceName, &collection), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "id", resourceName, "id"), + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "collection_endpoint", resourceName, "collection_endpoint"), + resource.TestCheckResourceAttrPair(dataSourceName, "dashboard_endpoint", resourceName, "dashboard_endpoint"), + resource.TestCheckResourceAttrPair(dataSourceName, 
"description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "kms_key_arn", resourceName, "kms_key_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "type", resourceName, "type"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessCollectionDataSource_name(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var collection types.CollectionDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_opensearchserverless_collection.test" + resourceName := "aws_opensearchserverless_collection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollectionDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollectionDataSourceConfig_name(rName, "encryption"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, dataSourceName, &collection), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "id", resourceName, "id"), + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "collection_endpoint", resourceName, "collection_endpoint"), + resource.TestCheckResourceAttrPair(dataSourceName, "dashboard_endpoint", resourceName, "dashboard_endpoint"), + resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "kms_key_arn", resourceName, "kms_key_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, 
"type", resourceName, "type"), + ), + }, + }, + }) +} + +func testAccCollectionDataSourceBaseConfig(rName, policyType string) string { + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_policy" "test" { + name = %[1]q + type = %[2]q + policy = jsonencode({ + Rules = [ + { + Resource = [ + "collection/%[1]s" + ], + ResourceType = "collection" + } + ], + AWSOwnedKey = true + }) +} + +resource "aws_opensearchserverless_collection" "test" { + name = %[1]q + depends_on = [aws_opensearchserverless_security_policy.test] +} +`, rName, policyType) +} + +func testAccCollectionDataSourceConfig_basic(rName, policyType string) string { + return acctest.ConfigCompose( + testAccCollectionDataSourceBaseConfig(rName, policyType), + ` +data "aws_opensearchserverless_collection" "test" { + id = aws_opensearchserverless_collection.test.id +} +`) +} + +func testAccCollectionDataSourceConfig_name(rName, policyType string) string { + return acctest.ConfigCompose( + testAccCollectionDataSourceBaseConfig(rName, policyType), + ` +data "aws_opensearchserverless_collection" "test" { + name = aws_opensearchserverless_collection.test.name +} +`) +} diff --git a/internal/service/opensearchserverless/collection_test.go b/internal/service/opensearchserverless/collection_test.go new file mode 100644 index 00000000000..6a43b4957ac --- /dev/null +++ b/internal/service/opensearchserverless/collection_test.go @@ -0,0 +1,315 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfopensearchserverless "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessCollection_basic(t *testing.T) { + ctx := acctest.Context(t) + var collection types.CollectionDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_collection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckCollection(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollectionDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollectionConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + resource.TestCheckResourceAttrSet(resourceName, "type"), + resource.TestCheckResourceAttrSet(resourceName, "collection_endpoint"), + resource.TestCheckResourceAttrSet(resourceName, "dashboard_endpoint"), + resource.TestCheckResourceAttrSet(resourceName, 
"kms_key_arn"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchServerlessCollection_tags(t *testing.T) { + ctx := acctest.Context(t) + var collection types.CollectionDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_collection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckCollection(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollectionDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollectionConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccCollectionConfig_tags2(rName, "key1", "value1", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccCollectionConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessCollection_update(t *testing.T) { + ctx := acctest.Context(t) + var collection 
types.CollectionDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_collection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckCollection(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollectionDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccCollectionConfig_update(rName, "description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + resource.TestCheckResourceAttrSet(resourceName, "type"), + resource.TestCheckResourceAttr(resourceName, "description", "description"), + ), + }, + { + Config: testAccCollectionConfig_update(rName, "description updated"), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + resource.TestCheckResourceAttrSet(resourceName, "type"), + resource.TestCheckResourceAttr(resourceName, "description", "description updated"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessCollection_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var collection types.CollectionDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_collection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckCollection(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckCollectionDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
testAccCollectionConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCollectionExists(ctx, resourceName, &collection), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfopensearchserverless.ResourceCollection, resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckCollectionDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearchserverless_collection" { + continue + } + + _, err := tfopensearchserverless.FindCollectionByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingDestroyed, tfopensearchserverless.ResNameCollection, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckCollectionExists(ctx context.Context, name string, collection *types.CollectionDetail) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameCollection, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameCollection, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + resp, err := tfopensearchserverless.FindCollectionByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameCollection, rs.Primary.ID, err) + } + + *collection = *resp + + return nil + } +} + +func 
testAccPreCheckCollection(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + input := &opensearchserverless.ListCollectionsInput{} + _, err := conn.ListCollections(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testAccCollectionBaseConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_policy" "test" { + name = %[1]q + type = "encryption" + policy = jsonencode({ + "Rules" = [ + { + "Resource" = [ + "collection/%[1]s" + ], + "ResourceType" = "collection" + } + ], + "AWSOwnedKey" = true + }) +} +`, rName) +} + +func testAccCollectionConfig_basic(rName string) string { + return acctest.ConfigCompose( + testAccCollectionBaseConfig(rName), + fmt.Sprintf(` +resource "aws_opensearchserverless_collection" "test" { + name = %[1]q + + depends_on = [aws_opensearchserverless_security_policy.test] +} +`, rName), + ) +} + +func testAccCollectionConfig_update(rName, description string) string { + return acctest.ConfigCompose( + testAccCollectionBaseConfig(rName), + fmt.Sprintf(` +resource "aws_opensearchserverless_collection" "test" { + name = %[1]q + description = %[2]q + + depends_on = [aws_opensearchserverless_security_policy.test] +} +`, rName, description), + ) +} + +func testAccCollectionConfig_tags1(rName, key1, value1 string) string { + return acctest.ConfigCompose( + testAccCollectionBaseConfig(rName), + fmt.Sprintf(` +resource "aws_opensearchserverless_collection" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + } + + depends_on = [aws_opensearchserverless_security_policy.test] +} +`, rName, key1, value1), + ) +} + +func testAccCollectionConfig_tags2(rName, key1, value1, key2, value2 string) string { + return acctest.ConfigCompose( + testAccCollectionBaseConfig(rName), + fmt.Sprintf(` +resource 
"aws_opensearchserverless_collection" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } + + depends_on = [aws_opensearchserverless_security_policy.test] +} +`, rName, key1, value1, key2, value2), + ) +} diff --git a/internal/service/opensearchserverless/const.go b/internal/service/opensearchserverless/const.go new file mode 100644 index 00000000000..37e4c45a83b --- /dev/null +++ b/internal/service/opensearchserverless/const.go @@ -0,0 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +const idSeparator = "/" diff --git a/internal/service/opensearchserverless/exports_test.go b/internal/service/opensearchserverless/exports_test.go new file mode 100644 index 00000000000..05db312ae10 --- /dev/null +++ b/internal/service/opensearchserverless/exports_test.go @@ -0,0 +1,19 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +// Exports for use in tests only. +var ( + ResourceAccessPolicy = newResourceAccessPolicy + ResourceCollection = newResourceCollection + ResourceSecurityConfig = newResourceSecurityConfig + ResourceSecurityPolicy = newResourceSecurityPolicy + ResourceVPCEndpoint = newResourceVPCEndpoint + + FindAccessPolicyByNameAndType = findAccessPolicyByNameAndType + FindCollectionByID = findCollectionByID + FindSecurityConfigByID = findSecurityConfigByID + FindSecurityPolicyByNameAndType = findSecurityPolicyByNameAndType + FindVPCEndpointByID = findVPCEndpointByID +) diff --git a/internal/service/opensearchserverless/find.go b/internal/service/opensearchserverless/find.go new file mode 100644 index 00000000000..ff6fe112e4c --- /dev/null +++ b/internal/service/opensearchserverless/find.go @@ -0,0 +1,166 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "errors" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func findAccessPolicyByNameAndType(ctx context.Context, conn *opensearchserverless.Client, id, policyType string) (*types.AccessPolicyDetail, error) { + in := &opensearchserverless.GetAccessPolicyInput{ + Name: aws.String(id), + Type: types.AccessPolicyType(policyType), + } + out, err := conn.GetAccessPolicy(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.AccessPolicyDetail == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.AccessPolicyDetail, nil +} + +func findCollectionByID(ctx context.Context, conn *opensearchserverless.Client, id string) (*types.CollectionDetail, error) { + in := &opensearchserverless.BatchGetCollectionInput{ + Ids: []string{id}, + } + out, err := conn.BatchGetCollection(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.CollectionDetails == nil || len(out.CollectionDetails) == 0 { + return nil, tfresource.NewEmptyResultError(in) + } + + return &out.CollectionDetails[0], nil +} + +func findCollectionByName(ctx context.Context, conn *opensearchserverless.Client, name string) (*types.CollectionDetail, error) { + in := &opensearchserverless.BatchGetCollectionInput{ + Names: []string{name}, + } + out, err := conn.BatchGetCollection(ctx, in) + if err != nil { + var nfe 
*types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.CollectionDetails == nil || len(out.CollectionDetails) == 0 { + return nil, tfresource.NewEmptyResultError(in) + } + + if len(out.CollectionDetails) > 1 { + return nil, tfresource.NewTooManyResultsError(len(out.CollectionDetails), in) + } + + return &out.CollectionDetails[0], nil +} + +func findSecurityConfigByID(ctx context.Context, conn *opensearchserverless.Client, id string) (*types.SecurityConfigDetail, error) { + in := &opensearchserverless.GetSecurityConfigInput{ + Id: aws.String(id), + } + out, err := conn.GetSecurityConfig(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.SecurityConfigDetail == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.SecurityConfigDetail, nil +} + +func findSecurityPolicyByNameAndType(ctx context.Context, conn *opensearchserverless.Client, name, policyType string) (*types.SecurityPolicyDetail, error) { + in := &opensearchserverless.GetSecurityPolicyInput{ + Name: aws.String(name), + Type: types.SecurityPolicyType(policyType), + } + out, err := conn.GetSecurityPolicy(ctx, in) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.SecurityPolicyDetail == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out.SecurityPolicyDetail, nil +} + +func findVPCEndpointByID(ctx context.Context, conn *opensearchserverless.Client, id string) (*types.VpcEndpointDetail, error) { + in := &opensearchserverless.BatchGetVpcEndpointInput{ + Ids: []string{id}, + } + out, err := 
conn.BatchGetVpcEndpoint(ctx, in) + + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + return nil, err + } + + if out == nil || out.VpcEndpointDetails == nil || len(out.VpcEndpointDetails) == 0 { + return nil, tfresource.NewEmptyResultError(in) + } + + return &out.VpcEndpointDetails[0], nil +} diff --git a/internal/service/opensearchserverless/generate.go b/internal/service/opensearchserverless/generate.go index 85c63ca0676..d5b82af5b6e 100644 --- a/internal/service/opensearchserverless/generate.go +++ b/internal/service/opensearchserverless/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package opensearchserverless diff --git a/internal/service/opensearchserverless/security_config.go b/internal/service/opensearchserverless/security_config.go new file mode 100644 index 00000000000..a045aae4f8d --- /dev/null +++ b/internal/service/opensearchserverless/security_config.go @@ -0,0 +1,336 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "errors" + "fmt" + "strings" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-validators/int64validator" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-framework/types/basetypes" + sdkid "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkResource +func newResourceSecurityConfig(_ context.Context) (resource.ResourceWithConfigure, error) { + return &resourceSecurityConfig{}, nil +} + +const ( + ResNameSecurityConfig = "Security Config" +) + +type resourceSecurityConfig struct { + framework.ResourceWithConfigure +} + +func (r *resourceSecurityConfig) Metadata(_ 
context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_opensearchserverless_security_config" +} + +func (r *resourceSecurityConfig) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "id": framework.IDAttribute(), + "config_version": schema.StringAttribute{ + Computed: true, + }, + "description": schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 1000), + }, + }, + "name": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + Validators: []validator.String{ + stringvalidator.LengthBetween(3, 32), + }, + }, + "type": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + Validators: []validator.String{ + enum.FrameworkValidate[awstypes.SecurityConfigType](), + }, + }, + }, + Blocks: map[string]schema.Block{ + "saml_options": schema.SingleNestedBlock{ + Attributes: map[string]schema.Attribute{ + "group_attribute": schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 2048), + }, + }, + "metadata": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 20480), + }, + }, + "session_timeout": schema.Int64Attribute{ + Optional: true, + Computed: true, + Validators: []validator.Int64{ + int64validator.Between(5, 1540), + }, + }, + "user_attribute": schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 2048), + }, + }, + }, + }, + }, + } +} + +func (r *resourceSecurityConfig) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var plan resourceSecurityConfigData + + 
resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + + if resp.Diagnostics.HasError() { + return + } + + conn := r.Meta().OpenSearchServerlessClient(ctx) + + in := &opensearchserverless.CreateSecurityConfigInput{ + ClientToken: aws.String(sdkid.UniqueId()), + Name: flex.StringFromFramework(ctx, plan.Name), + Type: awstypes.SecurityConfigType(*flex.StringFromFramework(ctx, plan.Type)), + SamlOptions: expandSAMLOptions(ctx, plan.SamlOptions, &resp.Diagnostics), + } + + if !plan.Description.IsNull() { + in.Description = flex.StringFromFramework(ctx, plan.Description) + } + + out, err := conn.CreateSecurityConfig(ctx, in) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionCreating, ResNameSecurityConfig, plan.Name.String(), nil), + err.Error(), + ) + return + } + + if out == nil || out.SecurityConfigDetail == nil { + // err is nil on this path, so calling err.Error() would panic + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionCreating, ResNameSecurityConfig, plan.Name.String(), nil), + "empty response from CreateSecurityConfig", + ) + return + } + + state := plan + state.refreshFromOutput(ctx, out.SecurityConfigDetail) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceSecurityConfig) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceSecurityConfigData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findSecurityConfigByID(ctx, conn, state.ID.ValueString()) + if tfresource.NotFound(err) { + resp.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + resp.State.RemoveResource(ctx) + return + } + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, ResNameSecurityConfig, state.ID.String(), err), + err.Error(), + ) + return + } + + state.refreshFromOutput(ctx, out) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
+} + +func (r *resourceSecurityConfig) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var plan, state resourceSecurityConfigData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + if resp.Diagnostics.HasError() { + return + } + + update := false + + input := &opensearchserverless.UpdateSecurityConfigInput{ + ClientToken: aws.String(sdkid.UniqueId()), + ConfigVersion: flex.StringFromFramework(ctx, state.ConfigVersion), + Id: flex.StringFromFramework(ctx, plan.ID), + } + + if !plan.Description.Equal(state.Description) { + input.Description = aws.String(plan.Description.ValueString()) + update = true + } + + if !plan.SamlOptions.Equal(state.SamlOptions) { + input.SamlOptions = expandSAMLOptions(ctx, plan.SamlOptions, &resp.Diagnostics) + update = true + } + + if !update { + return + } + + out, err := conn.UpdateSecurityConfig(ctx, input) + + if err != nil { + resp.Diagnostics.AddError(fmt.Sprintf("updating Security Config (%s)", plan.Name.ValueString()), err.Error()) + return + } + plan.refreshFromOutput(ctx, out.SecurityConfigDetail) + + resp.Diagnostics.Append(resp.State.Set(ctx, &plan)...) +} + +func (r *resourceSecurityConfig) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceSecurityConfigData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() { + return + } + + _, err := conn.DeleteSecurityConfig(ctx, &opensearchserverless.DeleteSecurityConfigInput{ + ClientToken: aws.String(sdkid.UniqueId()), + Id: flex.StringFromFramework(ctx, state.ID), + }) + if err != nil { + var nfe *awstypes.ResourceNotFoundException + if errors.As(err, &nfe) { + return + } + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionDeleting, ResNameSecurityConfig, state.Name.String(), nil), + err.Error(), + ) + } +} + +func (r *resourceSecurityConfig) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { + parts := strings.Split(req.ID, idSeparator) + if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" { + err := fmt.Errorf("unexpected format for ID (%[1]s), expected saml/account-id/name", req.ID) + resp.Diagnostics.AddError(fmt.Sprintf("importing Security Config (%s)", req.ID), err.Error()) + return + } + + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("id"), req.ID)...) + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("name"), parts[2])...)
+} + +type resourceSecurityConfigData struct { + ID types.String `tfsdk:"id"` + ConfigVersion types.String `tfsdk:"config_version"` + Description types.String `tfsdk:"description"` + Name types.String `tfsdk:"name"` + SamlOptions types.Object `tfsdk:"saml_options"` + Type types.String `tfsdk:"type"` +} + +// refreshFromOutput writes state data from an AWS response object +func (rd *resourceSecurityConfigData) refreshFromOutput(ctx context.Context, out *awstypes.SecurityConfigDetail) { + if out == nil { + return + } + + rd.ID = flex.StringToFramework(ctx, out.Id) + rd.ConfigVersion = flex.StringToFramework(ctx, out.ConfigVersion) + rd.Description = flex.StringToFramework(ctx, out.Description) + rd.SamlOptions = flattenSAMLOptions(ctx, out.SamlOptions) + rd.Type = flex.StringValueToFramework(ctx, out.Type) +} + +type samlOptions struct { + GroupAttribute types.String `tfsdk:"group_attribute"` + Metadata types.String `tfsdk:"metadata"` + SessionTimeout types.Int64 `tfsdk:"session_timeout"` + UserAttribute types.String `tfsdk:"user_attribute"` +} + +func (so *samlOptions) expand(ctx context.Context) *awstypes.SamlConfigOptions { + if so == nil { + return nil + } + + result := &awstypes.SamlConfigOptions{ + Metadata: flex.StringFromFramework(ctx, so.Metadata), + GroupAttribute: flex.StringFromFramework(ctx, so.GroupAttribute), + UserAttribute: flex.StringFromFramework(ctx, so.UserAttribute), + } + + if so.SessionTimeout.ValueInt64() != 0 { + result.SessionTimeout = aws.Int32(int32(so.SessionTimeout.ValueInt64())) + } + + return result +} + +func expandSAMLOptions(ctx context.Context, object types.Object, diags *diag.Diagnostics) *awstypes.SamlConfigOptions { + var options samlOptions + diags.Append(object.As(ctx, &options, basetypes.ObjectAsOptions{})...) 
+ if diags.HasError() { + return nil + } + + return options.expand(ctx) +} + +func flattenSAMLOptions(ctx context.Context, so *awstypes.SamlConfigOptions) types.Object { + attributeTypes := flex.AttributeTypesMust[samlOptions](ctx) + + if so == nil { + return types.ObjectNull(attributeTypes) + } + + attrs := map[string]attr.Value{} + attrs["group_attribute"] = flex.StringToFramework(ctx, so.GroupAttribute) + attrs["metadata"] = flex.StringToFramework(ctx, so.Metadata) + // SessionTimeout may be unset in the API response; guard before dereferencing + if so.SessionTimeout != nil { + timeout := int64(aws.ToInt32(so.SessionTimeout)) + attrs["session_timeout"] = flex.Int64ToFramework(ctx, &timeout) + } else { + attrs["session_timeout"] = types.Int64Null() + } + attrs["user_attribute"] = flex.StringToFramework(ctx, so.UserAttribute) + + return types.ObjectValueMust(attributeTypes, attrs) +} diff --git a/internal/service/opensearchserverless/security_config_data_source.go b/internal/service/opensearchserverless/security_config_data_source.go new file mode 100644 index 00000000000..b72a0667ed9 --- /dev/null +++ b/internal/service/opensearchserverless/security_config_data_source.go @@ -0,0 +1,124 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkDataSource(name="Security Config") +func newDataSourceSecurityConfig(context.Context) (datasource.DataSourceWithConfigure, error) { + return &dataSourceSecurityConfig{}, nil +} + +const ( + DSNameSecurityConfig = "Security Config Data Source" +) + +type dataSourceSecurityConfig struct { + framework.DataSourceWithConfigure +} + +func (d *dataSourceSecurityConfig) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) { // nosemgrep:ci.meta-in-func-name + resp.TypeName = "aws_opensearchserverless_security_config" +} + +func (d *dataSourceSecurityConfig) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "config_version": schema.StringAttribute{ + Computed: true, + }, + "created_date": schema.StringAttribute{ + Computed: true, + }, + "description": schema.StringAttribute{ + Computed: true, + }, + "id": schema.StringAttribute{ + Required: true, + }, + "last_modified_date": schema.StringAttribute{ + Computed: true, + }, + "type": schema.StringAttribute{ + Computed: true, + }, + }, + Blocks: map[string]schema.Block{ + "saml_options": schema.SingleNestedBlock{ + Attributes: map[string]schema.Attribute{ + "group_attribute": schema.StringAttribute{ + Computed: true, + }, + "metadata": schema.StringAttribute{ + Computed: 
true, + }, + "session_timeout": schema.Int64Attribute{ + Computed: true, + }, + "user_attribute": schema.StringAttribute{ + Computed: true, + }, + }, + }, + }, + } +} + +func (d *dataSourceSecurityConfig) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + conn := d.Meta().OpenSearchServerlessClient(ctx) + + var data dataSourceSecurityConfigData + resp.Diagnostics.Append(req.Config.Get(ctx, &data)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findSecurityConfigByID(ctx, conn, data.ID.ValueString()) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, DSNameSecurityConfig, data.ID.String(), err), + err.Error(), + ) + return + } + + createdDate := time.UnixMilli(aws.ToInt64(out.CreatedDate)) + data.CreatedDate = flex.StringValueToFramework(ctx, createdDate.Format(time.RFC3339)) + + data.ConfigVersion = flex.StringToFramework(ctx, out.ConfigVersion) + data.Description = flex.StringToFramework(ctx, out.Description) + data.ID = flex.StringToFramework(ctx, out.Id) + + lastModifiedDate := time.UnixMilli(aws.ToInt64(out.LastModifiedDate)) + data.LastModifiedDate = flex.StringValueToFramework(ctx, lastModifiedDate.Format(time.RFC3339)) + + data.Type = flex.StringValueToFramework(ctx, out.Type) + + samlOptions := flattenSAMLOptions(ctx, out.SamlOptions) + data.SamlOptions = samlOptions + + resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) 
+} + +type dataSourceSecurityConfigData struct { + ConfigVersion types.String `tfsdk:"config_version"` + CreatedDate types.String `tfsdk:"created_date"` + Description types.String `tfsdk:"description"` + ID types.String `tfsdk:"id"` + LastModifiedDate types.String `tfsdk:"last_modified_date"` + SamlOptions types.Object `tfsdk:"saml_options"` + Type types.String `tfsdk:"type"` +} diff --git a/internal/service/opensearchserverless/security_config_data_source_test.go b/internal/service/opensearchserverless/security_config_data_source_test.go new file mode 100644 index 00000000000..db0699a766b --- /dev/null +++ b/internal/service/opensearchserverless/security_config_data_source_test.go @@ -0,0 +1,68 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessSecurityConfigDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + + var securityconfig types.SecurityConfigDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_config.test" + dataSourceName := "data.aws_opensearchserverless_security_config.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: 
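The data source `Read` above converts the API's epoch-millisecond timestamps (`CreatedDate`, `LastModifiedDate`) to RFC3339 strings before storing them in state, via `time.UnixMilli(...).Format(time.RFC3339)`. A minimal sketch of that conversion (the helper name is ours, not the provider's; we force UTC here for determinism, whereas the provider formats in local time):

```go
package main

import (
	"fmt"
	"time"
)

// millisToRFC3339 mirrors the time.UnixMilli(...).Format(time.RFC3339)
// pattern used in the data source Read method above.
func millisToRFC3339(ms int64) string {
	return time.UnixMilli(ms).UTC().Format(time.RFC3339)
}

func main() {
	// 1700000000000 ms after the epoch is 2023-11-14T22:13:20Z.
	fmt.Println(millisToRFC3339(1700000000000))
}
```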
testAccSecurityConfigDataSourceConfig_basic(rName, "description", "test-fixtures/idp-metadata.xml"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityConfigExists(ctx, dataSourceName, &securityconfig), + resource.TestCheckResourceAttrSet(dataSourceName, "created_date"), + resource.TestCheckResourceAttrPair(dataSourceName, "config_version", resourceName, "config_version"), + resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrSet(dataSourceName, "last_modified_date"), + resource.TestCheckResourceAttrPair(dataSourceName, "type", resourceName, "type"), + resource.TestCheckResourceAttrPair(dataSourceName, "saml_options.metadata", resourceName, "saml_options.metadata"), + resource.TestCheckResourceAttrPair(dataSourceName, "saml_options.session_timeout", resourceName, "saml_options.session_timeout"), + ), + }, + }, + }) +} + +func testAccSecurityConfigDataSourceConfig_basic(rName, description, samlOptions string) string { + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_config" "test" { + name = %[1]q + description = %[2]q + type = "saml" + + saml_options { + metadata = file("%[3]s") + } +} + +data "aws_opensearchserverless_security_config" "test" { + id = aws_opensearchserverless_security_config.test.id +} +`, rName, description, samlOptions) +} diff --git a/internal/service/opensearchserverless/security_config_test.go b/internal/service/opensearchserverless/security_config_test.go new file mode 100644 index 00000000000..4be7abc8b33 --- /dev/null +++ b/internal/service/opensearchserverless/security_config_test.go @@ -0,0 +1,219 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfopensearchserverless "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessSecurityConfig_basic(t *testing.T) { + ctx := acctest.Context(t) + var securityconfig types.SecurityConfigDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_config.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckSecurityConfig(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityConfig_basic(rName, "test-fixtures/idp-metadata.xml"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityConfigExists(ctx, resourceName, &securityconfig), + resource.TestCheckResourceAttr(resourceName, "type", "saml"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrSet(resourceName, 
"saml_options.session_timeout"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchServerlessSecurityConfig_update(t *testing.T) { + ctx := acctest.Context(t) + var securityconfig types.SecurityConfigDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_config.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckSecurityConfig(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityConfig_update(rName, "test-fixtures/idp-metadata.xml", "description", 60), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityConfigExists(ctx, resourceName, &securityconfig), + resource.TestCheckResourceAttr(resourceName, "type", "saml"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "saml_options.session_timeout", "60"), + resource.TestCheckResourceAttr(resourceName, "description", "description"), + ), + }, + { + Config: testAccSecurityConfig_update(rName, "test-fixtures/idp-metadata.xml", "description updated", 40), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityConfigExists(ctx, resourceName, &securityconfig), + resource.TestCheckResourceAttr(resourceName, "type", "saml"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "saml_options.session_timeout", "40"), + resource.TestCheckResourceAttr(resourceName, "description", "description updated"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessSecurityConfig_disappears(t 
*testing.T) { + ctx := acctest.Context(t) + var securityconfig types.SecurityConfigDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_config.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckSecurityConfig(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityConfigDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityConfig_basic(rName, "test-fixtures/idp-metadata.xml"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityConfigExists(ctx, resourceName, &securityconfig), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfopensearchserverless.ResourceSecurityConfig, resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckSecurityConfigDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearchserverless_security_config" { + continue + } + + _, err := tfopensearchserverless.FindSecurityConfigByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingDestroyed, tfopensearchserverless.ResNameSecurityConfig, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckSecurityConfigExists(ctx context.Context, name string, securityconfig *types.SecurityConfigDetail) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + 
return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameSecurityConfig, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameSecurityConfig, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + resp, err := tfopensearchserverless.FindSecurityConfigByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameSecurityConfig, rs.Primary.ID, err) + } + + *securityconfig = *resp + + return nil + } +} + +func testAccPreCheckSecurityConfig(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + input := &opensearchserverless.ListSecurityConfigsInput{ + Type: types.SecurityConfigTypeSaml, + } + _, err := conn.ListSecurityConfigs(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testAccSecurityConfig_basic(rName string, samlOptions string) string { + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_config" "test" { + name = %[1]q + type = "saml" + saml_options { + metadata = file("%[2]s") + } +} +`, rName, samlOptions) +} + +func testAccSecurityConfig_update(rName, samlOptions, description string, sessionTimeout int) string { + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_config" "test" { + name = %[1]q + description = %[3]q + type = "saml" + + saml_options { + metadata = file("%[2]s") + session_timeout = %[4]d + } +} +`, rName, samlOptions, description, sessionTimeout) +} diff --git a/internal/service/opensearchserverless/security_policy.go 
b/internal/service/opensearchserverless/security_policy.go new file mode 100644 index 00000000000..4b7479fd64f --- /dev/null +++ b/internal/service/opensearchserverless/security_policy.go @@ -0,0 +1,267 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "errors" + "fmt" + "strings" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkResource +func newResourceSecurityPolicy(_ context.Context) (resource.ResourceWithConfigure, error) { + return &resourceSecurityPolicy{}, nil +} + +type resourceSecurityPolicyData struct { + Description types.String `tfsdk:"description"` + ID types.String `tfsdk:"id"` + Name types.String `tfsdk:"name"` + Policy types.String `tfsdk:"policy"` + PolicyVersion 
types.String `tfsdk:"policy_version"` + Type types.String `tfsdk:"type"` +} + +const ( + ResNameSecurityPolicy = "Security Policy" +) + +type resourceSecurityPolicy struct { + framework.ResourceWithConfigure +} + +func (r *resourceSecurityPolicy) Metadata(_ context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_opensearchserverless_security_policy" +} + +func (r *resourceSecurityPolicy) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "description": schema.StringAttribute{ + Optional: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 1000), + }, + }, + "id": framework.IDAttribute(), + "name": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(3, 32), + }, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + }, + "policy": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 20480), + }, + }, + "policy_version": schema.StringAttribute{ + Computed: true, + }, + "type": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + enum.FrameworkValidate[awstypes.SecurityPolicyType](), + }, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.RequiresReplace(), + }, + }, + }, + } +} + +func (r *resourceSecurityPolicy) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var plan resourceSecurityPolicyData + + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
+ + if resp.Diagnostics.HasError() { + return + } + + conn := r.Meta().OpenSearchServerlessClient(ctx) + + in := &opensearchserverless.CreateSecurityPolicyInput{ + ClientToken: aws.String(id.UniqueId()), + Name: aws.String(plan.Name.ValueString()), + Policy: aws.String(plan.Policy.ValueString()), + Type: awstypes.SecurityPolicyType(plan.Type.ValueString()), + } + + if !plan.Description.IsNull() { + in.Description = aws.String(plan.Description.ValueString()) + } + + out, err := conn.CreateSecurityPolicy(ctx, in) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionCreating, ResNameSecurityPolicy, plan.Name.String(), nil), + err.Error(), + ) + return + } + + state := plan + resp.Diagnostics.Append(state.refreshFromOutput(ctx, out.SecurityPolicyDetail)...) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceSecurityPolicy) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceSecurityPolicyData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findSecurityPolicyByNameAndType(ctx, conn, state.ID.ValueString(), state.Type.ValueString()) + if tfresource.NotFound(err) { + resp.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + resp.State.RemoveResource(ctx) + return + } + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, ResNameSecurityPolicy, state.ID.String(), nil), + err.Error(), + ) + return + } + + resp.Diagnostics.Append(state.refreshFromOutput(ctx, out)...) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceSecurityPolicy) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var plan, state resourceSecurityPolicyData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() { + return + } + + if !plan.Description.Equal(state.Description) || + !plan.Policy.Equal(state.Policy) { + input := &opensearchserverless.UpdateSecurityPolicyInput{ + ClientToken: aws.String(id.UniqueId()), + Name: flex.StringFromFramework(ctx, plan.Name), + PolicyVersion: flex.StringFromFramework(ctx, state.PolicyVersion), + Type: awstypes.SecurityPolicyType(plan.Type.ValueString()), + } + + if !plan.Policy.Equal(state.Policy) { + input.Policy = aws.String(plan.Policy.ValueString()) + } + + if !plan.Description.Equal(state.Description) { + input.Description = aws.String(plan.Description.ValueString()) + } + + out, err := conn.UpdateSecurityPolicy(ctx, input) + + if err != nil { + resp.Diagnostics.AddError(fmt.Sprintf("updating Security Policy (%s)", plan.Name.ValueString()), err.Error()) + return + } + resp.Diagnostics.Append(state.refreshFromOutput(ctx, out.SecurityPolicyDetail)...) + } + + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceSecurityPolicy) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceSecurityPolicyData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
+ if resp.Diagnostics.HasError() { + return + } + + _, err := conn.DeleteSecurityPolicy(ctx, &opensearchserverless.DeleteSecurityPolicyInput{ + ClientToken: aws.String(id.UniqueId()), + Name: aws.String(state.Name.ValueString()), + Type: awstypes.SecurityPolicyType(state.Type.ValueString()), + }) + if err != nil { + var nfe *awstypes.ResourceNotFoundException + if errors.As(err, &nfe) { + return + } + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionDeleting, ResNameSecurityPolicy, state.Name.String(), nil), + err.Error(), + ) + } +} + +func (r *resourceSecurityPolicy) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { + parts := strings.Split(req.ID, idSeparator) + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + err := fmt.Errorf("unexpected format for ID (%[1]s), expected security-policy-name%[2]ssecurity-policy-type", req.ID, idSeparator) + resp.Diagnostics.AddError(fmt.Sprintf("importing Security Policy (%s)", req.ID), err.Error()) + return + } + + state := resourceSecurityPolicyData{ + ID: types.StringValue(parts[0]), + Name: types.StringValue(parts[0]), + Type: types.StringValue(parts[1]), + } + + diags := resp.State.Set(ctx, &state) + resp.Diagnostics.Append(diags...) 
+ if resp.Diagnostics.HasError() { + return + } +} + +// refreshFromOutput writes state data from an AWS response object +func (rd *resourceSecurityPolicyData) refreshFromOutput(ctx context.Context, out *awstypes.SecurityPolicyDetail) diag.Diagnostics { + var diags diag.Diagnostics + + if out == nil { + return diags + } + + rd.ID = flex.StringToFramework(ctx, out.Name) + rd.Description = flex.StringToFramework(ctx, out.Description) + rd.Name = flex.StringToFramework(ctx, out.Name) + rd.Type = flex.StringValueToFramework(ctx, out.Type) + rd.PolicyVersion = flex.StringToFramework(ctx, out.PolicyVersion) + + policyBytes, err := out.Policy.MarshalSmithyDocument() + if err != nil { + diags.AddError(fmt.Sprintf("refreshing state for Security Policy (%s)", rd.Name), err.Error()) + return diags + } + + p := string(policyBytes) + + rd.Policy = flex.StringToFramework(ctx, &p) + + return diags +} diff --git a/internal/service/opensearchserverless/security_policy_data_source.go b/internal/service/opensearchserverless/security_policy_data_source.go new file mode 100644 index 00000000000..bfdd361fa02 --- /dev/null +++ b/internal/service/opensearchserverless/security_policy_data_source.go @@ -0,0 +1,95 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "regexp" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" +) + +// @SDKDataSource("aws_opensearchserverless_security_policy") +func DataSourceSecurityPolicy() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceSecurityPolicyRead, + + Schema: map[string]*schema.Schema{ + "created_date": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "last_modified_date": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(3, 32), + validation.StringMatch(regexp.MustCompile(`^[a-z][a-z0-9-]+$`), `must start with any lower case letter and can include any lower case letter, number, or "-"`), + ), + }, + "policy": { + Type: schema.TypeString, + Computed: true, + }, + "policy_version": { + Type: schema.TypeString, + Computed: true, + }, + "type": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.SecurityPolicyType](), + }, + }, + } +} + +func dataSourceSecurityPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + securityPolicyName := d.Get("name").(string) + securityPolicyType := d.Get("type").(string) + securityPolicy, err := findSecurityPolicyByNameAndType(ctx, 
conn, securityPolicyName, securityPolicyType) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading OpenSearch Security Policy with name (%s) and type (%s): %s", securityPolicyName, securityPolicyType, err) + } + + policyBytes, err := securityPolicy.Policy.MarshalSmithyDocument() + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading JSON policy document for OpenSearch Security Policy with name %s and type %s: %s", securityPolicyName, securityPolicyType, err) + } + + d.SetId(aws.ToString(securityPolicy.Name)) + d.Set("description", securityPolicy.Description) + d.Set("name", securityPolicy.Name) + d.Set("policy", string(policyBytes)) + d.Set("policy_version", securityPolicy.PolicyVersion) + d.Set("type", securityPolicy.Type) + + createdDate := time.UnixMilli(aws.ToInt64(securityPolicy.CreatedDate)) + d.Set("created_date", createdDate.Format(time.RFC3339)) + + lastModifiedDate := time.UnixMilli(aws.ToInt64(securityPolicy.LastModifiedDate)) + d.Set("last_modified_date", lastModifiedDate.Format(time.RFC3339)) + + return diags +} diff --git a/internal/service/opensearchserverless/security_policy_data_source_test.go b/internal/service/opensearchserverless/security_policy_data_source_test.go new file mode 100644 index 00000000000..46c0da657a8 --- /dev/null +++ b/internal/service/opensearchserverless/security_policy_data_source_test.go @@ -0,0 +1,72 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "fmt" + "testing" + + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessSecurityPolicyDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_policy.test" + dataSourceName := "data.aws_opensearchserverless_security_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityPolicyDataSourceConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "type", resourceName, "type"), + resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "policy", resourceName, "policy"), + resource.TestCheckResourceAttrPair(dataSourceName, "policy_version", resourceName, "policy_version"), + resource.TestCheckResourceAttrSet(dataSourceName, "created_date"), + resource.TestCheckResourceAttrSet(dataSourceName, "last_modified_date"), + ), + }, + }, + }) +} + +func testAccSecurityPolicyDataSourceConfig_basic(rName string) string { + collection := fmt.Sprintf("collection/%s", rName) + return fmt.Sprintf(` +resource 
"aws_opensearchserverless_security_policy" "test" { + name = %[1]q + type = "encryption" + description = %[1]q + policy = jsonencode({ + "Rules" = [ + { + "Resource" = [ + %[2]q + ], + "ResourceType" = "collection" + } + ], + "AWSOwnedKey" = true + }) +} + +data "aws_opensearchserverless_security_policy" "test" { + name = aws_opensearchserverless_security_policy.test.name + type = "encryption" +} +`, rName, collection) +} diff --git a/internal/service/opensearchserverless/security_policy_test.go b/internal/service/opensearchserverless/security_policy_test.go new file mode 100644 index 00000000000..ab0e792c081 --- /dev/null +++ b/internal/service/opensearchserverless/security_policy_test.go @@ -0,0 +1,244 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfopensearchserverless "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessSecurityPolicy_basic(t *testing.T) { + ctx := acctest.Context(t) + var securitypolicy types.SecurityPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + 
acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityPolicyConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityPolicyExists(ctx, resourceName, &securitypolicy), + resource.TestCheckResourceAttr(resourceName, "type", "encryption"), + resource.TestCheckResourceAttr(resourceName, "description", rName), + ), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: testAccSecurityPolicyImportStateIdFunc(resourceName), + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchServerlessSecurityPolicy_update(t *testing.T) { + ctx := acctest.Context(t) + var securitypolicy types.SecurityPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityPolicyConfig_update(rName, "description"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityPolicyExists(ctx, resourceName, &securitypolicy), + resource.TestCheckResourceAttr(resourceName, "type", "encryption"), + resource.TestCheckResourceAttr(resourceName, "description", "description"), + ), + }, + { + Config: testAccSecurityPolicyConfig_update(rName, "description updated"), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckSecurityPolicyExists(ctx, resourceName, &securitypolicy), + resource.TestCheckResourceAttr(resourceName, "type", "encryption"), + resource.TestCheckResourceAttr(resourceName, "description", "description updated"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessSecurityPolicy_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var securitypolicy types.SecurityPolicyDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_security_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecurityPolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSecurityPolicyConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecurityPolicyExists(ctx, resourceName, &securitypolicy), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfopensearchserverless.ResourceSecurityPolicy, resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckSecurityPolicyDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearchserverless_security_policy" { + continue + } + + _, err := tfopensearchserverless.FindSecurityPolicyByNameAndType(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["type"]) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingDestroyed, 
tfopensearchserverless.ResNameSecurityPolicy, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckSecurityPolicyExists(ctx context.Context, name string, securitypolicy *types.SecurityPolicyDetail) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameSecurityPolicy, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameSecurityPolicy, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + resp, err := tfopensearchserverless.FindSecurityPolicyByNameAndType(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["type"]) + + if err != nil { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameSecurityPolicy, rs.Primary.ID, err) + } + + *securitypolicy = *resp + + return nil + } +} + +func testAccSecurityPolicyImportStateIdFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + return fmt.Sprintf("%s/%s", rs.Primary.Attributes["name"], rs.Primary.Attributes["type"]), nil + } +} + +func testAccPreCheck(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + input := &opensearchserverless.ListSecurityPoliciesInput{ + Type: types.SecurityPolicyTypeEncryption, + } + _, err := conn.ListSecurityPolicies(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) 
+ } +} + +func testAccSecurityPolicyConfig_basic(rName string) string { + collection := fmt.Sprintf("collection/%s", rName) + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_policy" "test" { + name = %[1]q + type = "encryption" + description = %[1]q + policy = jsonencode({ + "Rules" = [ + { + "Resource" = [ + %[2]q + ], + "ResourceType" = "collection" + } + ], + "AWSOwnedKey" = true + }) +} +`, rName, collection) +} + +func testAccSecurityPolicyConfig_update(rName, description string) string { + collection := fmt.Sprintf("collection/%s", rName) + return fmt.Sprintf(` +resource "aws_opensearchserverless_security_policy" "test" { + name = %[1]q + type = "encryption" + description = %[3]q + policy = jsonencode({ + "Rules" = [ + { + "Resource" = [ + %[2]q + ], + "ResourceType" = "collection" + } + ], + "AWSOwnedKey" = true + }) +} +`, rName, collection, description) +} diff --git a/internal/service/opensearchserverless/service_package_gen.go b/internal/service/opensearchserverless/service_package_gen.go index 669a35f103b..db6b44f5964 100644 --- a/internal/service/opensearchserverless/service_package_gen.go +++ b/internal/service/opensearchserverless/service_package_gen.go @@ -5,6 +5,9 @@ package opensearchserverless import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + opensearchserverless_sdkv2 "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -12,15 +15,57 @@ import ( type servicePackage struct{} func (p *servicePackage) FrameworkDataSources(ctx context.Context) []*types.ServicePackageFrameworkDataSource { - return []*types.ServicePackageFrameworkDataSource{} + return []*types.ServicePackageFrameworkDataSource{ + { + Factory: newDataSourceAccessPolicy, + Name: "Access Policy", + }, + { + Factory: newDataSourceCollection, + Name: "Collection", + }, 
+ { + Factory: newDataSourceSecurityConfig, + Name: "Security Config", + }, + } } func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.ServicePackageFrameworkResource { - return []*types.ServicePackageFrameworkResource{} + return []*types.ServicePackageFrameworkResource{ + { + Factory: newResourceAccessPolicy, + }, + { + Factory: newResourceCollection, + Name: "Collection", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: newResourceSecurityConfig, + }, + { + Factory: newResourceSecurityPolicy, + }, + { + Factory: newResourceVPCEndpoint, + }, + } } func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource { - return []*types.ServicePackageSDKDataSource{} + return []*types.ServicePackageSDKDataSource{ + { + Factory: DataSourceSecurityPolicy, + TypeName: "aws_opensearchserverless_security_policy", + }, + { + Factory: DataSourceVPCEndpoint, + TypeName: "aws_opensearchserverless_vpc_endpoint", + }, + } } func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { @@ -31,4 +76,17 @@ func (p *servicePackage) ServicePackageName() string { return names.OpenSearchServerless } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*opensearchserverless_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return opensearchserverless_sdkv2.NewFromConfig(cfg, func(o *opensearchserverless_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = opensearchserverless_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/opensearchserverless/sweep.go b/internal/service/opensearchserverless/sweep.go new file mode 100644 index 00000000000..a76b7942a37 --- /dev/null +++ b/internal/service/opensearchserverless/sweep.go @@ -0,0 +1,310 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:build sweep +// +build sweep + +package opensearchserverless + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/framework" +) + +func init() { + resource.AddTestSweepers("aws_opensearchserverless_access_policy", &resource.Sweeper{ + Name: "aws_opensearchserverless_access_policy", + F: sweepAccessPolicies, + }) + resource.AddTestSweepers("aws_opensearchserverless_collection", &resource.Sweeper{ + Name: "aws_opensearchserverless_collection", + F: sweepCollections, + }) + resource.AddTestSweepers("aws_opensearchserverless_security_config", &resource.Sweeper{ + Name: "aws_opensearchserverless_security_config", + F: sweepSecurityConfigs, + }) + resource.AddTestSweepers("aws_opensearchserverless_security_policy", &resource.Sweeper{ + Name: 
"aws_opensearchserverless_security_policy", + F: sweepSecurityPolicies, + }) + resource.AddTestSweepers("aws_opensearchserverless_vpc_endpoint", &resource.Sweeper{ + Name: "aws_opensearchserverless_vpc_endpoint", + F: sweepVPCEndpoints, + }) +} + +func sweepAccessPolicies(region string) error { + ctx := sweep.Context(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) + + if err != nil { + return fmt.Errorf("error getting client: %w", err) + } + + conn := client.OpenSearchServerlessClient(ctx) + sweepResources := make([]sweep.Sweepable, 0) + var errs *multierror.Error + input := &opensearchserverless.ListAccessPoliciesInput{ + Type: types.AccessPolicyTypeData, + } + + pages := opensearchserverless.NewListAccessPoliciesPaginator(conn, input) + + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Access Policies sweep for %s: %s", region, err) + return nil + } + if err != nil { + return fmt.Errorf("error retrieving OpenSearch Serverless Access Policies: %w", err) + } + + for _, ap := range page.AccessPolicySummaries { + name := aws.ToString(ap.Name) + + log.Printf("[INFO] Deleting OpenSearch Serverless Access Policy: %s", name) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceAccessPolicy, client, + framework.NewAttribute("id", name), + framework.NewAttribute("name", name), + framework.NewAttribute("type", ap.Type), + )) + } + } + + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Serverless Access Policies for %s: %w", region, err)) + } + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Access Policies sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() +} + +func sweepCollections(region string) error { + ctx := sweep.Context(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) + + if err != nil { + return fmt.Errorf("error getting client: %w", err) + } + + conn := client.OpenSearchServerlessClient(ctx) + sweepResources := make([]sweep.Sweepable, 0) + var errs *multierror.Error + input := &opensearchserverless.ListCollectionsInput{} + + pages := opensearchserverless.NewListCollectionsPaginator(conn, input) + + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Collections sweep for %s: %s", region, err) + return nil + } + if err != nil { + return fmt.Errorf("error retrieving OpenSearch Serverless Collections: %w", err) + } + + for _, collection := range page.CollectionSummaries { + id := aws.ToString(collection.Id) + + log.Printf("[INFO] Deleting OpenSearch Serverless Collection: %s", id) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceCollection, client, + framework.NewAttribute("id", id), + )) + } + } + + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Serverless Collections for %s: %w", region, err)) + } + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Collections sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() +} + +func sweepSecurityConfigs(region string) error { + ctx := sweep.Context(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) + + if err != nil { + return fmt.Errorf("error getting client: %w", err) + } + + conn := client.OpenSearchServerlessClient(ctx) + sweepResources := make([]sweep.Sweepable, 0) + var errs *multierror.Error + + input := &opensearchserverless.ListSecurityConfigsInput{ + Type: types.SecurityConfigTypeSaml, + } + pages := opensearchserverless.NewListSecurityConfigsPaginator(conn, input) + + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + if 
sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Security Configs sweep for %s: %s", region, err) + return nil + } + if err != nil { + return fmt.Errorf("error retrieving OpenSearch Serverless Security Configs: %w", err) + } + + for _, sc := range page.SecurityConfigSummaries { + id := aws.ToString(sc.Id) + + log.Printf("[INFO] Deleting OpenSearch Serverless Security Config: %s", id) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceSecurityConfig, client, + framework.NewAttribute("id", id), + )) + } + } + + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Serverless Security Configs for %s: %w", region, err)) + } + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Security Configs sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() + } + + func sweepSecurityPolicies(region string) error { + ctx := sweep.Context(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) + + if err != nil { + return fmt.Errorf("error getting client: %w", err) + } + + conn := client.OpenSearchServerlessClient(ctx) + sweepResources := make([]sweep.Sweepable, 0) + var errs *multierror.Error + + inputEncryption := &opensearchserverless.ListSecurityPoliciesInput{ + Type: types.SecurityPolicyTypeEncryption, + } + pagesEncryption := opensearchserverless.NewListSecurityPoliciesPaginator(conn, inputEncryption) + + for pagesEncryption.HasMorePages() { + page, err := pagesEncryption.NextPage(ctx) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Security Policies sweep for %s: %s", region, err) + return nil + } + if err != nil { + return fmt.Errorf("error retrieving OpenSearch Serverless Security Policies: %w", err) + } + + for _, sp := range page.SecurityPolicySummaries { + name := aws.ToString(sp.Name) + + log.Printf("[INFO] Deleting OpenSearch
Serverless Security Policy: %s", name) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceSecurityPolicy, client, + framework.NewAttribute("id", name), + framework.NewAttribute("name", name), + framework.NewAttribute("type", sp.Type), + )) + } + } + + inputNetwork := &opensearchserverless.ListSecurityPoliciesInput{ + Type: types.SecurityPolicyTypeNetwork, + } + pagesNetwork := opensearchserverless.NewListSecurityPoliciesPaginator(conn, inputNetwork) + + for pagesNetwork.HasMorePages() { + page, err := pagesNetwork.NextPage(ctx) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Security Policies sweep for %s: %s", region, err) + return nil + } + if err != nil { + return fmt.Errorf("error retrieving OpenSearch Serverless Security Policies: %w", err) + } + + for _, sp := range page.SecurityPolicySummaries { + name := aws.ToString(sp.Name) + + log.Printf("[INFO] Deleting OpenSearch Serverless Security Policy: %s", name) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceSecurityPolicy, client, + framework.NewAttribute("id", name), + framework.NewAttribute("name", name), + framework.NewAttribute("type", sp.Type), + )) + } + } + + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Serverless Security Policies for %s: %w", region, err)) + } + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless Security Policies sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() + } + + func sweepVPCEndpoints(region string) error { + ctx := sweep.Context(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) + + if err != nil { + return fmt.Errorf("error getting client: %w", err) + } + + conn := client.OpenSearchServerlessClient(ctx) + sweepResources := make([]sweep.Sweepable, 0) + var errs *multierror.Error + input :=
&opensearchserverless.ListVpcEndpointsInput{} + + pages := opensearchserverless.NewListVpcEndpointsPaginator(conn, input) + + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless VPC Endpoints sweep for %s: %s", region, err) + return nil + } + if err != nil { + return fmt.Errorf("error retrieving OpenSearch Serverless VPC Endpoints: %w", err) + } + + for _, endpoint := range page.VpcEndpointSummaries { + id := aws.ToString(endpoint.Id) + + log.Printf("[INFO] Deleting OpenSearch Serverless VPC Endpoint: %s", id) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceVPCEndpoint, client, + framework.NewAttribute("id", id), + )) + } + } + + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Serverless VPC Endpoints for %s: %w", region, err)) + } + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Serverless VPC Endpoint sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() + } diff --git a/internal/service/opensearchserverless/tags_gen.go b/internal/service/opensearchserverless/tags_gen.go index 79f6585bfa9..de619f99ccd 100644 --- a/internal/service/opensearchserverless/tags_gen.go +++ b/internal/service/opensearchserverless/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists opensearchserverless service tags. +// listTags lists opensearchserverless service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn *opensearchserverless.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *opensearchserverless.Client, identifier string) (tftags.KeyValueTags, error) { input := &opensearchserverless.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *opensearchserverless.Client, identifier // ListTags lists opensearchserverless service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).OpenSearchServerlessClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).OpenSearchServerlessClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns opensearchserverless service tags from Context. +// getTagsIn returns opensearchserverless service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets opensearchserverless service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets opensearchserverless service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates opensearchserverless service tags. +// updateTags updates opensearchserverless service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *opensearchserverless.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *opensearchserverless.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *opensearchserverless.Client, identifi // UpdateTags updates opensearchserverless service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).OpenSearchServerlessClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).OpenSearchServerlessClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/opensearchserverless/test-fixtures/idp-metadata.xml b/internal/service/opensearchserverless/test-fixtures/idp-metadata.xml new file mode 100644 index 00000000000..4d336c9797e --- /dev/null +++ b/internal/service/opensearchserverless/test-fixtures/idp-metadata.xml @@ -0,0 +1,21 @@ + + + + + + + + + tls-certificate + + s + + urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified + urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress + + + + diff --git a/internal/service/opensearchserverless/vpc_endpoint.go b/internal/service/opensearchserverless/vpc_endpoint.go new file mode 100644 index 00000000000..dd590722a80 --- /dev/null +++ b/internal/service/opensearchserverless/vpc_endpoint.go @@ -0,0 +1,391 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "errors" + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + awstypes "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" + "github.com/hashicorp/terraform-plugin-framework-validators/setvalidator" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/setplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/schema/validator" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @FrameworkResource +func newResourceVPCEndpoint(_ context.Context) (resource.ResourceWithConfigure, error) { + r := resourceVpcEndpoint{} + r.SetDefaultCreateTimeout(30 * time.Minute) + r.SetDefaultUpdateTimeout(30 * time.Minute) + r.SetDefaultDeleteTimeout(30 * time.Minute) + + return &r, nil +} + +type resourceVpcEndpointData struct { + ID types.String `tfsdk:"id"` + Name types.String 
`tfsdk:"name"` + SecurityGroupIds types.Set `tfsdk:"security_group_ids"` + SubnetIds types.Set `tfsdk:"subnet_ids"` + Timeouts timeouts.Value `tfsdk:"timeouts"` + VpcId types.String `tfsdk:"vpc_id"` +} + +const ( + ResNameVPCEndpoint = "VPC Endpoint" +) + +type resourceVpcEndpoint struct { + framework.ResourceWithConfigure + framework.WithTimeouts +} + +func (r *resourceVpcEndpoint) Metadata(_ context.Context, request resource.MetadataRequest, response *resource.MetadataResponse) { + response.TypeName = "aws_opensearchserverless_vpc_endpoint" +} + +func (r *resourceVpcEndpoint) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = schema.Schema{ + Attributes: map[string]schema.Attribute{ + "id": framework.IDAttribute(), + "name": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(3, 32), + }, + }, + "security_group_ids": schema.SetAttribute{ + ElementType: types.StringType, + Optional: true, + Computed: true, + Validators: []validator.Set{ + setvalidator.SizeBetween(1, 5), + }, + PlanModifiers: []planmodifier.Set{ + setplanmodifier.UseStateForUnknown(), + }, + }, + "subnet_ids": schema.SetAttribute{ + ElementType: types.StringType, + Required: true, + Validators: []validator.Set{ + setvalidator.SizeBetween(1, 6), + }, + }, + "vpc_id": schema.StringAttribute{ + Required: true, + Validators: []validator.String{ + stringvalidator.LengthBetween(1, 255), + }, + }, + }, + Blocks: map[string]schema.Block{ + "timeouts": timeouts.Block(ctx, timeouts.Opts{ + Create: true, + Update: true, + Delete: true, + }), + }, + } +} + +func (r *resourceVpcEndpoint) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var plan resourceVpcEndpointData + + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
+ + if resp.Diagnostics.HasError() { + return + } + + conn := r.Meta().OpenSearchServerlessClient(ctx) + + in := &opensearchserverless.CreateVpcEndpointInput{ + ClientToken: aws.String(id.UniqueId()), + Name: aws.String(plan.Name.ValueString()), + SubnetIds: flex.ExpandFrameworkStringValueSet(ctx, plan.SubnetIds), + VpcId: aws.String(plan.VpcId.ValueString()), + } + + if !plan.SecurityGroupIds.IsNull() && !plan.SecurityGroupIds.IsUnknown() { + in.SecurityGroupIds = flex.ExpandFrameworkStringValueSet(ctx, plan.SecurityGroupIds) + } + + out, err := conn.CreateVpcEndpoint(ctx, in) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionCreating, ResNameVPCEndpoint, plan.Name.String(), nil), + err.Error(), + ) + return + } + + createTimeout := r.CreateTimeout(ctx, plan.Timeouts) + if _, err := waitVPCEndpointCreated(ctx, conn, *out.CreateVpcEndpointDetail.Id, createTimeout); err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionWaitingForCreation, ResNameVPCEndpoint, plan.Name.String(), nil), + err.Error(), + ) + return + } + + // The create operation only returns the Id and Name so retrieve the newly + // created VPC Endpoint so we can store the possibly computed + // security_group_ids in state + vpcEndpoint, err := findVPCEndpointByID(ctx, conn, aws.ToString(out.CreateVpcEndpointDetail.Id)) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionChecking, ResNameVPCEndpoint, plan.Name.String(), nil), + err.Error(), + ) + return + } + + state := plan + state.refreshFromOutput(ctx, vpcEndpoint) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) 
+} + +func (r *resourceVpcEndpoint) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceVpcEndpointData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + if resp.Diagnostics.HasError() { + return + } + + out, err := findVPCEndpointByID(ctx, conn, state.ID.ValueString()) + if tfresource.NotFound(err) { + resp.Diagnostics.Append(fwdiag.NewResourceNotFoundWarningDiagnostic(err)) + resp.State.RemoveResource(ctx) + return + } + + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionReading, ResNameVPCEndpoint, state.ID.String(), nil), + err.Error(), + ) + return + } + + state.refreshFromOutput(ctx, out) + resp.Diagnostics.Append(resp.State.Set(ctx, &state)...) +} + +func (r *resourceVpcEndpoint) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + update := false + + var plan, state resourceVpcEndpointData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + if resp.Diagnostics.HasError() { + return + } + + input := &opensearchserverless.UpdateVpcEndpointInput{ + ClientToken: aws.String(id.UniqueId()), + Id: aws.String(plan.ID.ValueString()), + } + + if !plan.SecurityGroupIds.Equal(state.SecurityGroupIds) { + newSGs := flex.ExpandFrameworkStringValueSet(ctx, plan.SecurityGroupIds) + oldSGs := flex.ExpandFrameworkStringValueSet(ctx, state.SecurityGroupIds) + + if add := newSGs.Difference(oldSGs); len(add) > 0 { + input.AddSecurityGroupIds = add + } + + if del := oldSGs.Difference(newSGs); len(del) > 0 { + input.RemoveSecurityGroupIds = del + } + + update = true + } + + if !plan.SubnetIds.Equal(state.SubnetIds) { + old := flex.ExpandFrameworkStringValueSet(ctx, state.SubnetIds) + new := flex.ExpandFrameworkStringValueSet(ctx, plan.SubnetIds) + + if add := new.Difference(old); len(add) > 0 { + input.AddSubnetIds = add + } + + if del := old.Difference(new); len(del) > 0 { + input.RemoveSubnetIds = del + } + + update = true + } + + if !update { + return +
} + + log.Printf("[DEBUG] Updating OpenSearchServerless VPC Endpoint (%s): %#v", plan.ID.ValueString(), input) + out, err := conn.UpdateVpcEndpoint(ctx, input) + if err != nil { + resp.Diagnostics.AddError(fmt.Sprintf("updating VPC Endpoint (%s)", plan.ID.ValueString()), err.Error()) + return + } + + updateTimeout := r.UpdateTimeout(ctx, plan.Timeouts) + if _, err := waitVPCEndpointUpdated(ctx, conn, *out.UpdateVpcEndpointDetail.Id, updateTimeout); err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionWaitingForUpdate, ResNameVPCEndpoint, plan.Name.String(), nil), + err.Error(), + ) + return + } + + // The update operation only returns security_group_ids if those were + // changed so retrieve the updated VPC Endpoint so we can store the + // actual security_group_ids in state + vpcEndpoint, err := findVPCEndpointByID(ctx, conn, *out.UpdateVpcEndpointDetail.Id) + if err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionChecking, ResNameVPCEndpoint, plan.Name.String(), nil), + err.Error(), + ) + return + } + + plan.refreshFromOutput(ctx, vpcEndpoint) + resp.Diagnostics.Append(resp.State.Set(ctx, &plan)...) +} + +func (r *resourceVpcEndpoint) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { + conn := r.Meta().OpenSearchServerlessClient(ctx) + + var state resourceVpcEndpointData + resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
+ if resp.Diagnostics.HasError() { + return + } + + _, err := conn.DeleteVpcEndpoint(ctx, &opensearchserverless.DeleteVpcEndpointInput{ + ClientToken: aws.String(id.UniqueId()), + Id: aws.String(state.ID.ValueString()), + }) + + if err != nil { + var nfe *awstypes.ResourceNotFoundException + if errors.As(err, &nfe) { + return + } + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionDeleting, ResNameVPCEndpoint, state.Name.String(), nil), + err.Error(), + ) + return + } + + deleteTimeout := r.DeleteTimeout(ctx, state.Timeouts) + if _, err := waitVPCEndpointDeleted(ctx, conn, state.ID.ValueString(), deleteTimeout); err != nil { + resp.Diagnostics.AddError( + create.ProblemStandardMessage(names.OpenSearchServerless, create.ErrActionWaitingForDeletion, ResNameVPCEndpoint, state.Name.String(), nil), + err.Error(), + ) + return + } +} + +func (r *resourceVpcEndpoint) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { + resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp) +} + +// refreshFromOutput writes state data from an AWS response object +func (rd *resourceVpcEndpointData) refreshFromOutput(ctx context.Context, out *awstypes.VpcEndpointDetail) { + if out == nil { + return + } + + rd.ID = flex.StringToFramework(ctx, out.Id) + rd.Name = flex.StringToFramework(ctx, out.Name) + rd.SecurityGroupIds = flex.FlattenFrameworkStringValueSet(ctx, out.SecurityGroupIds) + rd.SubnetIds = flex.FlattenFrameworkStringValueSet(ctx, out.SubnetIds) + rd.VpcId = flex.StringToFramework(ctx, out.VpcId) +} + +func waitVPCEndpointCreated(ctx context.Context, conn *opensearchserverless.Client, id string, timeout time.Duration) (*awstypes.VpcEndpointDetail, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.VpcEndpointStatusPending), + Target: enum.Slice(awstypes.VpcEndpointStatusActive), + Refresh: statusVPCEndpoint(ctx, conn, id), + Timeout: timeout, +
NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*awstypes.VpcEndpointDetail); ok { + return out, err + } + + return nil, err +} + +func waitVPCEndpointUpdated(ctx context.Context, conn *opensearchserverless.Client, id string, timeout time.Duration) (*awstypes.VpcEndpointDetail, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.VpcEndpointStatusPending), + Target: enum.Slice(awstypes.VpcEndpointStatusActive), + Refresh: statusVPCEndpoint(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*awstypes.VpcEndpointDetail); ok { + return out, err + } + + return nil, err +} + +func waitVPCEndpointDeleted(ctx context.Context, conn *opensearchserverless.Client, id string, timeout time.Duration) (*awstypes.VpcEndpointDetail, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.VpcEndpointStatusDeleting, awstypes.VpcEndpointStatusActive), + Target: []string{}, + Refresh: statusVPCEndpoint(ctx, conn, id), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*awstypes.VpcEndpointDetail); ok { + return out, err + } + + return nil, err +} + +func statusVPCEndpoint(ctx context.Context, conn *opensearchserverless.Client, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := findVPCEndpointByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, string(out.Status), nil + } +} diff --git a/internal/service/opensearchserverless/vpc_endpoint_data_source.go b/internal/service/opensearchserverless/vpc_endpoint_data_source.go new file mode 100644 index 00000000000..48e4253e4f3 --- /dev/null +++ 
b/internal/service/opensearchserverless/vpc_endpoint_data_source.go @@ -0,0 +1,81 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless + +import ( + "context" + "regexp" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" +) + +// @SDKDataSource("aws_opensearchserverless_vpc_endpoint") +func DataSourceVPCEndpoint() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceVPCEndpointRead, + + Schema: map[string]*schema.Schema{ + "created_date": { + Type: schema.TypeString, + Computed: true, + }, + "vpc_endpoint_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 255), + validation.StringMatch(regexp.MustCompile(`^vpce-[0-9a-z]*$`), `must start with "vpce-" and can include any lower case letter or number`), + ), + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "security_group_ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "subnet_ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceVPCEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + id := d.Get("vpc_endpoint_id").(string) + vpcEndpoint, err := findVPCEndpointByID(ctx, conn, id) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading OpenSearch Serverless VPC Endpoint with id (%s): %s", id, err) + } + 
+ d.SetId(aws.ToString(vpcEndpoint.Id)) + + createdDate := time.UnixMilli(aws.ToInt64(vpcEndpoint.CreatedDate)) + d.Set("created_date", createdDate.Format(time.RFC3339)) + + d.Set("name", vpcEndpoint.Name) + d.Set("security_group_ids", vpcEndpoint.SecurityGroupIds) + d.Set("subnet_ids", vpcEndpoint.SubnetIds) + d.Set("vpc_id", vpcEndpoint.VpcId) + + return diags +} diff --git a/internal/service/opensearchserverless/vpc_endpoint_data_source_test.go b/internal/service/opensearchserverless/vpc_endpoint_data_source_test.go new file mode 100644 index 00000000000..01e917eac51 --- /dev/null +++ b/internal/service/opensearchserverless/vpc_endpoint_data_source_test.go @@ -0,0 +1,110 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "fmt" + "testing" + + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessVPCEndpointDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_vpc_endpoint.test" + dataSourceName := "data.aws_opensearchserverless_vpc_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCEndpointDataSourceConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + 
resource.TestCheckResourceAttrSet(dataSourceName, "created_date"), + resource.TestCheckResourceAttrPair(dataSourceName, "id", resourceName, "id"), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "security_group_ids.#", resourceName, "security_group_ids.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "subnet_ids.#", resourceName, "subnet_ids.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "vpc_id", resourceName, "vpc_id"), + ), + }, + }, + }) +} + +func testAccVPCEndpointDataSourceConfig_networkingBase(rName string, subnetCount int) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptInDefaultExclude(), + fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + enable_dns_hostnames = true + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test" { + count = %[2]d + + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[count.index] + cidr_block = cidrsubnet(aws_vpc.test.cidr_block, 8, count.index) + + tags = { + Name = %[1]q + } +} +`, rName, subnetCount), + ) +} + +func testAccVPCEndpointDataSourceConfig_securityGroupBase(rName string, sgCount int) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_security_group" "test" { + count = %[2]d + name = "%[1]s-${count.index}" + vpc_id = aws_vpc.test.id + + tags = { + Name = %[1]q + } +} +`, rName, sgCount), + ) +} + +func testAccVPCEndpointDataSourceConfig_basic(rName string) string { + return acctest.ConfigCompose( + testAccVPCEndpointDataSourceConfig_networkingBase(rName, 2), + testAccVPCEndpointDataSourceConfig_securityGroupBase(rName, 2), + fmt.Sprintf(` +resource "aws_opensearchserverless_vpc_endpoint" "test" { + name = %[1]q + security_group_ids = aws_security_group.test[*].id + subnet_ids = aws_subnet.test[*].id + vpc_id = aws_vpc.test.id +} + +data "aws_opensearchserverless_vpc_endpoint" "test" { + 
vpc_endpoint_id = aws_opensearchserverless_vpc_endpoint.test.id +} +`, rName)) +} diff --git a/internal/service/opensearchserverless/vpc_endpoint_test.go b/internal/service/opensearchserverless/vpc_endpoint_test.go new file mode 100644 index 00000000000..a2967812dc1 --- /dev/null +++ b/internal/service/opensearchserverless/vpc_endpoint_test.go @@ -0,0 +1,364 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package opensearchserverless_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless" + "github.com/aws/aws-sdk-go-v2/service/opensearchserverless/types" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfopensearchserverless "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccOpenSearchServerlessVPCEndpoint_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + ctx := acctest.Context(t) + var vpcendpoint types.VpcEndpointDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_vpc_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckVPCEndpoint(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCEndpointConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint), + resource.TestCheckResourceAttr(resourceName, "subnet_ids.#", "1"), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchServerlessVPCEndpoint_securityGroups(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + ctx := acctest.Context(t) + var vpcendpoint1, vpcendpoint2, vpcendpoint3 types.VpcEndpointDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_vpc_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckVPCEndpoint(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCEndpointConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint1), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + ), + }, + { + Config: testAccVPCEndpointConfig_multiple_securityGroups(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint2), + testAccCheckVPCEndpointNotRecreated(&vpcendpoint1, &vpcendpoint2), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "2"), + ), + }, + { + Config: 
testAccVPCEndpointConfig_single_securityGroup(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint3), + testAccCheckVPCEndpointNotRecreated(&vpcendpoint1, &vpcendpoint3), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + ), + }, + }, + }) +} + +func TestAccOpenSearchServerlessVPCEndpoint_update(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + ctx := acctest.Context(t) + var vpcendpoint1, vpcendpoint2, vpcendpoint3 types.VpcEndpointDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_vpc_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckVPCEndpoint(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCEndpointConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint1), + resource.TestCheckResourceAttr(resourceName, "subnet_ids.#", "1"), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + ), + }, + { + Config: testAccVPCEndpointConfig_multiple_subnets(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint2), + testAccCheckVPCEndpointNotRecreated(&vpcendpoint1, &vpcendpoint2), + resource.TestCheckResourceAttr(resourceName, "subnet_ids.#", "2"), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + ), + }, + { + Config: testAccVPCEndpointConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, 
&vpcendpoint3), + testAccCheckVPCEndpointNotRecreated(&vpcendpoint2, &vpcendpoint3), + resource.TestCheckResourceAttr(resourceName, "subnet_ids.#", "1"), + resource.TestCheckResourceAttr(resourceName, "security_group_ids.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchServerlessVPCEndpoint_disappears(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + ctx := acctest.Context(t) + var vpcendpoint types.VpcEndpointDetail + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearchserverless_vpc_endpoint.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.OpenSearchServerlessEndpointID) + testAccPreCheckVPCEndpoint(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.OpenSearchServerlessEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckVPCEndpointDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCEndpointConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCEndpointExists(ctx, resourceName, &vpcendpoint), + acctest.CheckFrameworkResourceDisappears(ctx, acctest.Provider, tfopensearchserverless.ResourceVPCEndpoint, resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckVPCEndpointDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearchserverless_vpc_endpoint" { + continue + } + + _, err := tfopensearchserverless.FindVPCEndpointByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return
create.Error(names.OpenSearchServerless, create.ErrActionCheckingDestroyed, tfopensearchserverless.ResNameVPCEndpoint, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckVPCEndpointExists(ctx context.Context, name string, vpcendpoint *types.VpcEndpointDetail) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameVPCEndpoint, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameVPCEndpoint, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + resp, err := tfopensearchserverless.FindVPCEndpointByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingExistence, tfopensearchserverless.ResNameVPCEndpoint, rs.Primary.ID, err) + } + + *vpcendpoint = *resp + + return nil + } +} + +func testAccPreCheckVPCEndpoint(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchServerlessClient(ctx) + + input := &opensearchserverless.ListVpcEndpointsInput{} + _, err := conn.ListVpcEndpoints(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testAccCheckVPCEndpointNotRecreated(before, after *types.VpcEndpointDetail) resource.TestCheckFunc { + return func(s *terraform.State) error { + if before, after := aws.ToString(before.Id), aws.ToString(after.Id); before != after { + return create.Error(names.OpenSearchServerless, create.ErrActionCheckingNotRecreated, tfopensearchserverless.ResNameVPCEndpoint, before, errors.New("recreated")) 
+ } + + return nil + } +} + +func testAccVPCEndpointConfig_networkingBase(rName string, subnetCount int) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptInDefaultExclude(), + fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + enable_dns_hostnames = true + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test" { + count = %[2]d + + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[count.index] + cidr_block = cidrsubnet(aws_vpc.test.cidr_block, 8, count.index) + + tags = { + Name = %[1]q + } +} +`, rName, subnetCount), + ) +} + +func testAccVPCEndpointConfig_securityGroupBase(rName string, sgCount int) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_security_group" "test" { + count = %[2]d + name = "%[1]s-${count.index}" + vpc_id = aws_vpc.test.id + + tags = { + Name = %[1]q + } +} +`, rName, sgCount), + ) +} + +func testAccVPCEndpointConfig_basic(rName string) string { + return acctest.ConfigCompose( + testAccVPCEndpointConfig_networkingBase(rName, 2), + fmt.Sprintf(` +resource "aws_opensearchserverless_vpc_endpoint" "test" { + name = %[1]q + subnet_ids = [aws_subnet.test[0].id] + vpc_id = aws_vpc.test.id +} +`, rName)) +} + +func testAccVPCEndpointConfig_multiple_subnets(rName string) string { + return acctest.ConfigCompose( + testAccVPCEndpointConfig_networkingBase(rName, 2), + fmt.Sprintf(` +resource "aws_opensearchserverless_vpc_endpoint" "test" { + name = %[1]q + subnet_ids = aws_subnet.test[*].id + vpc_id = aws_vpc.test.id +} +`, rName)) +} + +func testAccVPCEndpointConfig_multiple_securityGroups(rName string) string { + return acctest.ConfigCompose( + testAccVPCEndpointConfig_networkingBase(rName, 2), + testAccVPCEndpointConfig_securityGroupBase(rName, 2), + fmt.Sprintf(` +resource "aws_opensearchserverless_vpc_endpoint" "test" { + name = %[1]q + subnet_ids = aws_subnet.test[*].id + vpc_id = aws_vpc.test.id + + security_group_ids = 
aws_security_group.test[*].id +} +`, rName)) +} + +func testAccVPCEndpointConfig_single_securityGroup(rName string) string { + return acctest.ConfigCompose( + testAccVPCEndpointConfig_networkingBase(rName, 2), + testAccVPCEndpointConfig_securityGroupBase(rName, 2), + fmt.Sprintf(` +resource "aws_opensearchserverless_vpc_endpoint" "test" { + name = %[1]q + subnet_ids = aws_subnet.test[*].id + vpc_id = aws_vpc.test.id + + security_group_ids = [aws_security_group.test[0].id] +} +`, rName)) +} diff --git a/internal/service/opsworks/application.go b/internal/service/opsworks/application.go index 53e6f3e9a2d..5da9050df2f 100644 --- a/internal/service/opsworks/application.go +++ b/internal/service/opsworks/application.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( @@ -259,7 +262,7 @@ func resourceApplicationValidate(d *schema.ResourceData) error { func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) req := &opsworks.DescribeAppsInput{ AppIds: []*string{ @@ -307,7 +310,7 @@ func resourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta i func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) err := resourceApplicationValidate(d) if err != nil { @@ -341,7 +344,7 @@ func resourceApplicationCreate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) err := resourceApplicationValidate(d) if 
err != nil { @@ -374,7 +377,7 @@ func resourceApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) log.Printf("[DEBUG] Deleting OpsWorks Application: %s", d.Id()) _, err := conn.DeleteAppWithContext(ctx, &opsworks.DeleteAppInput{ diff --git a/internal/service/opsworks/application_test.go b/internal/service/opsworks/application_test.go index ee153970213..d717915fb23 100644 --- a/internal/service/opsworks/application_test.go +++ b/internal/service/opsworks/application_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( @@ -105,7 +108,7 @@ func testAccCheckApplicationExists(ctx context.Context, return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx) params := &opsworks.DescribeAppsInput{ AppIds: []*string{&rs.Primary.ID}, @@ -237,7 +240,7 @@ func testAccCheckUpdateAppAttributes( func testAccCheckApplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - client := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn() + client := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_opsworks_application" { diff --git a/internal/service/opsworks/consts.go b/internal/service/opsworks/consts.go index d58dae8d41c..5eac793957b 100644 --- a/internal/service/opsworks/consts.go +++ b/internal/service/opsworks/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/custom_layer.go b/internal/service/opsworks/custom_layer.go index 1e02c7cfe13..c6d12dbaa89 100644 --- a/internal/service/opsworks/custom_layer.go +++ b/internal/service/opsworks/custom_layer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/custom_layer_test.go b/internal/service/opsworks/custom_layer_test.go index dfa37a6b3b8..a0921a159a6 100644 --- a/internal/service/opsworks/custom_layer_test.go +++ b/internal/service/opsworks/custom_layer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( diff --git a/internal/service/opsworks/ecs_cluster_layer.go b/internal/service/opsworks/ecs_cluster_layer.go index 12c74c38d9d..f596028940f 100644 --- a/internal/service/opsworks/ecs_cluster_layer.go +++ b/internal/service/opsworks/ecs_cluster_layer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/ecs_cluster_layer_test.go b/internal/service/opsworks/ecs_cluster_layer_test.go index 373050faaf0..65df45c9b10 100644 --- a/internal/service/opsworks/ecs_cluster_layer_test.go +++ b/internal/service/opsworks/ecs_cluster_layer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( diff --git a/internal/service/opsworks/forge.go b/internal/service/opsworks/forge.go index 891dc3a5855..9a37383c230 100644 --- a/internal/service/opsworks/forge.go +++ b/internal/service/opsworks/forge.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/ganglia_layer.go b/internal/service/opsworks/ganglia_layer.go index c06ea4d632f..79c1eda1b85 100644 --- a/internal/service/opsworks/ganglia_layer.go +++ b/internal/service/opsworks/ganglia_layer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/ganglia_layer_test.go b/internal/service/opsworks/ganglia_layer_test.go index 02740294672..3ffdce5244f 100644 --- a/internal/service/opsworks/ganglia_layer_test.go +++ b/internal/service/opsworks/ganglia_layer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( diff --git a/internal/service/opsworks/generate.go b/internal/service/opsworks/generate.go index 702a769c9cc..5d496a90d2f 100644 --- a/internal/service/opsworks/generate.go +++ b/internal/service/opsworks/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ServiceTagsMap -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package opsworks diff --git a/internal/service/opsworks/haproxy_layer.go b/internal/service/opsworks/haproxy_layer.go index 57063ec38d3..485f6e8c599 100644 --- a/internal/service/opsworks/haproxy_layer.go +++ b/internal/service/opsworks/haproxy_layer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/haproxy_layer_test.go b/internal/service/opsworks/haproxy_layer_test.go index f303545f290..70544537a17 100644 --- a/internal/service/opsworks/haproxy_layer_test.go +++ b/internal/service/opsworks/haproxy_layer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( diff --git a/internal/service/opsworks/instance.go b/internal/service/opsworks/instance.go index d41d4f27727..874238e31d5 100644 --- a/internal/service/opsworks/instance.go +++ b/internal/service/opsworks/instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( @@ -456,7 +459,7 @@ func resourceInstanceValidate(d *schema.ResourceData) error { func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) req := &opsworks.DescribeInstancesInput{ InstanceIds: []*string{ @@ -567,7 +570,7 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) err := resourceInstanceValidate(d) if err != nil { @@ -731,7 +734,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) err := resourceInstanceValidate(d) if err != nil { @@ -813,7 +816,7 @@ func resourceInstanceUpdate(ctx 
context.Context, d *schema.ResourceData, meta in func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) if v, ok := d.GetOk("status"); ok && v.(string) != instanceStatusStopped { err := stopInstance(ctx, d, meta, d.Timeout(schema.TimeoutDelete)) @@ -857,7 +860,7 @@ func resourceInstanceImport(ctx context.Context, d *schema.ResourceData, meta in } func startInstance(ctx context.Context, d *schema.ResourceData, meta interface{}, wait bool, timeout time.Duration) error { - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) req := &opsworks.StartInstanceInput{ InstanceId: aws.String(d.Id()), @@ -883,7 +886,7 @@ func startInstance(ctx context.Context, d *schema.ResourceData, meta interface{} } func stopInstance(ctx context.Context, d *schema.ResourceData, meta interface{}, timeout time.Duration) error { - conn := meta.(*conns.AWSClient).OpsWorksConn() + conn := meta.(*conns.AWSClient).OpsWorksConn(ctx) req := &opsworks.StopInstanceInput{ InstanceId: aws.String(d.Id()), diff --git a/internal/service/opsworks/instance_test.go b/internal/service/opsworks/instance_test.go index fdc05646648..04ab5703f9d 100644 --- a/internal/service/opsworks/instance_test.go +++ b/internal/service/opsworks/instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( @@ -126,7 +129,7 @@ func testAccCheckInstanceExists(ctx context.Context, return fmt.Errorf("No Opsworks Instance is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx) params := &opsworks.DescribeInstancesInput{ InstanceIds: []*string{&rs.Primary.ID}, @@ -175,7 +178,7 @@ func testAccCheckInstanceAttributes( func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_opsworks_instance" { continue diff --git a/internal/service/opsworks/java_app_layer.go b/internal/service/opsworks/java_app_layer.go index 9dab6233bc8..9dc0cd7092e 100644 --- a/internal/service/opsworks/java_app_layer.go +++ b/internal/service/opsworks/java_app_layer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks import ( diff --git a/internal/service/opsworks/java_app_layer_test.go b/internal/service/opsworks/java_app_layer_test.go index 2736bebe6cf..5b6713151bb 100644 --- a/internal/service/opsworks/java_app_layer_test.go +++ b/internal/service/opsworks/java_app_layer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package opsworks_test import ( diff --git a/internal/service/opsworks/layers.go b/internal/service/opsworks/layers.go index 9e7cb6c025a..1b581b2076d 100644 --- a/internal/service/opsworks/layers.go +++ b/internal/service/opsworks/layers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
@@ -458,14 +461,16 @@ func (lt *opsworksLayerType) resourceSchema() *schema.Resource {
 			StateContext: schema.ImportStatePassthroughContext,
 		},

-		Schema: resourceSchema,
+		SchemaFunc: func() map[string]*schema.Schema {
+			return resourceSchema
+		},

 		CustomizeDiff: verify.SetTagsDiff,
 	}
 }

 func (lt *opsworksLayerType) Create(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	attributes, err := lt.Attributes.resourceDataToAPIAttributes(d)
@@ -590,7 +595,7 @@ func (lt *opsworksLayerType) Create(ctx context.Context, d *schema.ResourceData,
 		}
 	}

-	if tags := KeyValueTags(ctx, GetTagsIn(ctx)); len(tags) > 0 {
+	if tags := KeyValueTags(ctx, getTagsIn(ctx)); len(tags) > 0 {
 		layer, err := FindLayerByID(ctx, conn, d.Id())

 		if err != nil {
@@ -598,7 +603,7 @@ func (lt *opsworksLayerType) Create(ctx context.Context, d *schema.ResourceData,
 		}

 		arn := aws.StringValue(layer.Arn)
-		if err := UpdateTags(ctx, conn, arn, nil, tags); err != nil {
+		if err := updateTags(ctx, conn, arn, nil, tags); err != nil {
 			return diag.Errorf("adding OpsWorks Layer (%s) tags: %s", arn, err)
 		}
 	}
@@ -607,7 +612,7 @@ func (lt *opsworksLayerType) Create(ctx context.Context, d *schema.ResourceData,
 }

 func (lt *opsworksLayerType) Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	layer, err := FindLayerByID(ctx, conn, d.Id())
@@ -706,7 +711,7 @@ func (lt *opsworksLayerType) Read(ctx context.Context, d *schema.ResourceData, m
 }

 func (lt *opsworksLayerType) Update(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	if d.HasChangesExcept("elastic_load_balancer", "load_based_auto_scaling", "tags", "tags_all") {
 		input := &opsworks.UpdateLayerInput{
@@ -863,7 +868,7 @@ func (lt *opsworksLayerType) Update(ctx context.Context, d *schema.ResourceData,
 }

 func (lt *opsworksLayerType) Delete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	log.Printf("[DEBUG] Deleting OpsWorks Layer: %s", d.Id())
 	_, err := conn.DeleteLayerWithContext(ctx, &opsworks.DeleteLayerInput{
diff --git a/internal/service/opsworks/layers_test.go b/internal/service/opsworks/layers_test.go
index 4e022f7090c..c6762afc4a3 100644
--- a/internal/service/opsworks/layers_test.go
+++ b/internal/service/opsworks/layers_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
@@ -24,7 +27,7 @@ func testAccCheckLayerExists(ctx context.Context, n string, v *opsworks.Layer) r
 			return fmt.Errorf("No OpsWorks Layer ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		output, err := tfopsworks.FindLayerByID(ctx, conn, rs.Primary.ID)
@@ -39,7 +42,7 @@ func testAccCheckLayerExists(ctx context.Context, n string, v *opsworks.Layer) r
 }

 func testAccCheckLayerDestroy(ctx context.Context, resourceType string, s *terraform.State) error { // nosemgrep:ci.semgrep.acctest.naming.destroy-check-signature
-	conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 	for _, rs := range s.RootModule().Resources {
 		if rs.Type != resourceType {
diff --git a/internal/service/opsworks/memcached_layer.go b/internal/service/opsworks/memcached_layer.go
index b8e2f0bc0ae..293b82933b1 100644
--- a/internal/service/opsworks/memcached_layer.go
+++ b/internal/service/opsworks/memcached_layer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
diff --git a/internal/service/opsworks/memcached_layer_test.go b/internal/service/opsworks/memcached_layer_test.go
index ccf44878ee1..e72ddd9a919 100644
--- a/internal/service/opsworks/memcached_layer_test.go
+++ b/internal/service/opsworks/memcached_layer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
diff --git a/internal/service/opsworks/mysql_layer.go b/internal/service/opsworks/mysql_layer.go
index 70e88f10398..79f9eca0dee 100644
--- a/internal/service/opsworks/mysql_layer.go
+++ b/internal/service/opsworks/mysql_layer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
diff --git a/internal/service/opsworks/mysql_layer_test.go b/internal/service/opsworks/mysql_layer_test.go
index d682e83a719..27408704125 100644
--- a/internal/service/opsworks/mysql_layer_test.go
+++ b/internal/service/opsworks/mysql_layer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
diff --git a/internal/service/opsworks/nodejs_app_layer.go b/internal/service/opsworks/nodejs_app_layer.go
index 38ce23abf1a..469ceca5b24 100644
--- a/internal/service/opsworks/nodejs_app_layer.go
+++ b/internal/service/opsworks/nodejs_app_layer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
diff --git a/internal/service/opsworks/nodejs_app_layer_test.go b/internal/service/opsworks/nodejs_app_layer_test.go
index 1fbeb874dfd..a91f5fa3a16 100644
--- a/internal/service/opsworks/nodejs_app_layer_test.go
+++ b/internal/service/opsworks/nodejs_app_layer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
diff --git a/internal/service/opsworks/permission.go b/internal/service/opsworks/permission.go
index d53a3d61977..ee94f1ae78c 100644
--- a/internal/service/opsworks/permission.go
+++ b/internal/service/opsworks/permission.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
@@ -63,7 +66,7 @@ func ResourcePermission() *schema.Resource {

 func resourceSetPermission(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	iamUserARN := d.Get("user_arn").(string)
 	stackID := d.Get("stack_id").(string)
@@ -100,7 +103,7 @@ func resourceSetPermission(ctx context.Context, d *schema.ResourceData, meta int

 func resourcePermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	permission, err := FindPermissionByTwoPartKey(ctx, conn, d.Get("user_arn").(string), d.Get("stack_id").(string))
diff --git a/internal/service/opsworks/permission_test.go b/internal/service/opsworks/permission_test.go
index d8718fa7688..91a960ff3d7 100644
--- a/internal/service/opsworks/permission_test.go
+++ b/internal/service/opsworks/permission_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
@@ -110,7 +113,7 @@ func testAccCheckPermissionExists(ctx context.Context, n string, v *opsworks.Per
 			return fmt.Errorf("No OpsWorks Layer ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		output, err := tfopsworks.FindPermissionByTwoPartKey(ctx, conn, rs.Primary.Attributes["user_arn"], rs.Primary.Attributes["stack_id"])
diff --git a/internal/service/opsworks/php_app_layer.go b/internal/service/opsworks/php_app_layer.go
index a8bb7388be5..e8f8a652716 100644
--- a/internal/service/opsworks/php_app_layer.go
+++ b/internal/service/opsworks/php_app_layer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
diff --git a/internal/service/opsworks/php_app_layer_test.go b/internal/service/opsworks/php_app_layer_test.go
index 13bca5f03ba..fa9166757f7 100644
--- a/internal/service/opsworks/php_app_layer_test.go
+++ b/internal/service/opsworks/php_app_layer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
diff --git a/internal/service/opsworks/rails_app_layer.go b/internal/service/opsworks/rails_app_layer.go
index 18413cfdbc5..0b0ee1e1485 100644
--- a/internal/service/opsworks/rails_app_layer.go
+++ b/internal/service/opsworks/rails_app_layer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
diff --git a/internal/service/opsworks/rails_app_layer_test.go b/internal/service/opsworks/rails_app_layer_test.go
index 0e1205c7ebc..80405d4e665 100644
--- a/internal/service/opsworks/rails_app_layer_test.go
+++ b/internal/service/opsworks/rails_app_layer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
diff --git a/internal/service/opsworks/rds_db_instance.go b/internal/service/opsworks/rds_db_instance.go
index a73888e7545..b55ec8491d4 100644
--- a/internal/service/opsworks/rds_db_instance.go
+++ b/internal/service/opsworks/rds_db_instance.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
@@ -49,7 +52,7 @@ func ResourceRDSDBInstance() *schema.Resource {

 func resourceRDSDBInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	client := meta.(*conns.AWSClient).OpsWorksConn()
+	client := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	dbInstanceARN := d.Get("rds_db_instance_arn").(string)
 	stackID := d.Get("stack_id").(string)
@@ -74,7 +77,7 @@ func resourceRDSDBInstanceCreate(ctx context.Context, d *schema.ResourceData, me

 func resourceRDSDBInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	dbInstance, err := FindRDSDBInstanceByTwoPartKey(ctx, conn, d.Get("rds_db_instance_arn").(string), d.Get("stack_id").(string))
@@ -97,7 +100,7 @@ func resourceRDSDBInstanceRead(ctx context.Context, d *schema.ResourceData, meta

 func resourceRDSDBInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	client := meta.(*conns.AWSClient).OpsWorksConn()
+	client := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	input := &opsworks.UpdateRdsDbInstanceInput{
 		RdsDbInstanceArn: aws.String(d.Get("rds_db_instance_arn").(string)),
@@ -122,7 +125,7 @@ func resourceRDSDBInstanceUpdate(ctx context.Context, d *schema.ResourceData, me

 func resourceRDSDBInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	client := meta.(*conns.AWSClient).OpsWorksConn()
+	client := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	log.Printf("[DEBUG] Deregistering OpsWorks RDS DB Instance: %s", d.Id())
 	_, err := client.DeregisterRdsDbInstanceWithContext(ctx, &opsworks.DeregisterRdsDbInstanceInput{
diff --git a/internal/service/opsworks/rds_db_instance_test.go b/internal/service/opsworks/rds_db_instance_test.go
index 2974269ecbe..f464d7a6739 100644
--- a/internal/service/opsworks/rds_db_instance_test.go
+++ b/internal/service/opsworks/rds_db_instance_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
@@ -98,7 +101,7 @@ func testAccCheckRDSDBInstanceExists(ctx context.Context, n string, v *opsworks.
 			return fmt.Errorf("No OpsWorks RDS DB Instance ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		output, err := tfopsworks.FindRDSDBInstanceByTwoPartKey(ctx, conn, rs.Primary.Attributes["rds_db_instance_arn"], rs.Primary.Attributes["stack_id"])
@@ -114,7 +117,7 @@ func testAccCheckRDSDBInstanceExists(ctx context.Context, n string, v *opsworks.
 func testAccCheckRDSDBInstanceDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_opsworks_rds_db_instance" {
diff --git a/internal/service/opsworks/service_package_gen.go b/internal/service/opsworks/service_package_gen.go
index 56b2de3145b..a310f3c4a5e 100644
--- a/internal/service/opsworks/service_package_gen.go
+++ b/internal/service/opsworks/service_package_gen.go
@@ -5,6 +5,10 @@ package opsworks

 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	opsworks_sdkv1 "github.com/aws/aws-sdk-go/service/opsworks"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -146,4 +150,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.OpsWorks
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*opsworks_sdkv1.OpsWorks, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return opsworks_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/opsworks/stack.go b/internal/service/opsworks/stack.go
index 1f9a00a1e86..5cf1a40ff74 100644
--- a/internal/service/opsworks/stack.go
+++ b/internal/service/opsworks/stack.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
@@ -201,7 +204,7 @@ func ResourceStack() *schema.Resource {

 func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	name := d.Get("name").(string)
 	region := d.Get("region").(string)
@@ -299,7 +302,7 @@ func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta inter
 			Resource: fmt.Sprintf("stack/%s/", d.Id()),
 		}.String()

-		if err := createTags(ctx, conn, arn, GetTagsIn(ctx)); err != nil {
+		if err := createTags(ctx, conn, arn, getTagsIn(ctx)); err != nil {
 			return sdkdiag.AppendErrorf(diags, "setting OpsWorks Stack (%s) tags: %s", arn, err)
 		}
@@ -319,11 +322,11 @@ func resourceStackCreate(ctx context.Context, d *schema.ResourceData, meta inter
 func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 	var err error
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	if v, ok := d.GetOk("stack_endpoint"); ok {
 		log.Printf(`[DEBUG] overriding region using "stack_endpoint": %s`, v)
-		conn, err = regionalConn(meta.(*conns.AWSClient), v.(string))
+		conn, err = regionalConn(ctx, meta.(*conns.AWSClient), v.(string))
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, `reading OpsWorks Stack (%s): creating client for "stack_endpoint" (%s): %s`, d.Id(), v, err)
 		}
@@ -337,7 +340,7 @@ func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interfa
 		// See https://github.com/hashicorp/terraform/issues/12842.
 		v := endpoints.UsEast1RegionID
 		log.Printf(`[DEBUG] overriding region using legacy region: %s`, v)
-		conn, err = regionalConn(meta.(*conns.AWSClient), v)
+		conn, err = regionalConn(ctx, meta.(*conns.AWSClient), v)
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, `reading OpsWorks Stack (%s): creating client for legacy region (%s): %s`, d.Id(), v, err)
 		}
@@ -411,13 +414,13 @@ func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interfa
 	d.Set("use_opsworks_security_groups", stack.UseOpsworksSecurityGroups)
 	d.Set("vpc_id", stack.VpcId)

-	tags, err := ListTags(ctx, conn, arn)
+	tags, err := listTags(ctx, conn, arn)

 	if err != nil {
 		return sdkdiag.AppendErrorf(diags, "listing tags for OpsWorks Stack (%s): %s", arn, err)
 	}

-	SetTagsOut(ctx, Tags(tags))
+	setTagsOut(ctx, Tags(tags))

 	return diags
 }
@@ -425,10 +428,10 @@ func resourceStackRead(ctx context.Context, d *schema.ResourceData, meta interfa
 func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 	var err error
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	if v, ok := d.GetOk("stack_endpoint"); ok {
-		conn, err = regionalConn(meta.(*conns.AWSClient), v.(string))
+		conn, err = regionalConn(ctx, meta.(*conns.AWSClient), v.(string))
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, `updating OpsWorks Stack (%s): creating client for "stack_endpoint" (%s): %s`, d.Id(), v, err)
 		}
@@ -530,7 +533,7 @@ func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta inter
 	if d.HasChange("tags_all") {
 		o, n := d.GetChange("tags_all")

-		if err := UpdateTags(ctx, conn, d.Get("arn").(string), o, n); err != nil {
+		if err := updateTags(ctx, conn, d.Get("arn").(string), o, n); err != nil {
 			return sdkdiag.AppendErrorf(diags, "updating OpsWorks Stack (%s) tags: %s", d.Id(), err)
 		}
 	}
@@ -541,10 +544,10 @@ func resourceStackUpdate(ctx context.Context, d *schema.ResourceData, meta inter

 func resourceStackDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
 	var err error
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	if v, ok := d.GetOk("stack_endpoint"); ok {
-		conn, err = regionalConn(meta.(*conns.AWSClient), v.(string))
+		conn, err = regionalConn(ctx, meta.(*conns.AWSClient), v.(string))
 		if err != nil {
 			return sdkdiag.AppendErrorf(diags, `deleting OpsWorks Stack (%s): creating client for "stack_endpoint" (%s): %s`, d.Id(), v, err)
 		}
@@ -686,8 +689,8 @@ func flattenSource(apiObject *opsworks.Source) map[string]interface{} {
 // See:
 // - https://github.com/hashicorp/terraform/pull/12688
 // - https://github.com/hashicorp/terraform/issues/12842
-func regionalConn(client *conns.AWSClient, regionName string) (*opsworks.OpsWorks, error) {
-	conn := client.OpsWorksConn()
+func regionalConn(ctx context.Context, client *conns.AWSClient, regionName string) (*opsworks.OpsWorks, error) {
+	conn := client.OpsWorksConn(ctx)

 	// Regions are the same, no need to reconfigure.
 	if aws.StringValue(conn.Config.Region) == regionName {
diff --git a/internal/service/opsworks/stack_test.go b/internal/service/opsworks/stack_test.go
index 2de8a0c53c5..b9c32216bae 100644
--- a/internal/service/opsworks/stack_test.go
+++ b/internal/service/opsworks/stack_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
@@ -480,7 +483,7 @@ func TestAccOpsWorksStack_windows(t *testing.T) {
 }

 func testAccPreCheckStacks(ctx context.Context, t *testing.T) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 	input := &opsworks.DescribeStacksInput{}
@@ -506,7 +509,7 @@ func testAccCheckStackExists(ctx context.Context, n string, v *opsworks.Stack) r
 			return fmt.Errorf("No OpsWorks Stack ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		output, err := tfopsworks.FindStackByID(ctx, conn, rs.Primary.ID)
@@ -522,7 +525,7 @@ func testAccCheckStackExists(ctx context.Context, n string, v *opsworks.Stack) r

 func testAccCheckStackDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_opsworks_stack" {
diff --git a/internal/service/opsworks/static_web_layer.go b/internal/service/opsworks/static_web_layer.go
index 4cd872df65d..399d01495ab 100644
--- a/internal/service/opsworks/static_web_layer.go
+++ b/internal/service/opsworks/static_web_layer.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
diff --git a/internal/service/opsworks/static_web_layer_test.go b/internal/service/opsworks/static_web_layer_test.go
index e8218a5829f..fd37d65c67c 100644
--- a/internal/service/opsworks/static_web_layer_test.go
+++ b/internal/service/opsworks/static_web_layer_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
diff --git a/internal/service/opsworks/sweep.go b/internal/service/opsworks/sweep.go
index 2e1bbb48392..eafa7f731a2 100644
--- a/internal/service/opsworks/sweep.go
+++ b/internal/service/opsworks/sweep.go
@@ -1,20 +1,27 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

 package opsworks

 import (
-	"errors"
+	"context"
 	"fmt"
 	"log"
+	"strings"
+	"time"

 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/opsworks"
-	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/go-multierror"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
+	"github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk"
+	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 )

 func init() {
@@ -65,12 +72,12 @@ func init() {

 func sweepApplication(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).OpsWorksConn()
+	conn := client.OpsWorksConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	output, err := conn.DescribeStacksWithContext(ctx, &opsworks.DescribeStacksInput{})
@@ -108,21 +115,21 @@ func sweepApplication(region string) error {
 			d := r.Data(nil)
 			d.SetId(aws.StringValue(app.AppId))

-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 	}

-	return sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	return sweep.SweepOrchestrator(ctx, sweepResources)
 }

 func sweepInstance(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).OpsWorksConn()
+	conn := client.OpsWorksConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	output, err := conn.DescribeStacksWithContext(ctx, &opsworks.DescribeStacksInput{})
@@ -161,21 +168,21 @@ func sweepInstance(region string) error {
 			d.SetId(aws.StringValue(instance.InstanceId))
 			d.Set("status", instance.Status)

-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 	}

-	return sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	return sweep.SweepOrchestrator(ctx, sweepResources)
 }

 func sweepRDSDBInstance(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).OpsWorksConn()
+	conn := client.OpsWorksConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	output, err := conn.DescribeStacksWithContext(ctx, &opsworks.DescribeStacksInput{})
@@ -214,21 +221,21 @@ func sweepRDSDBInstance(region string) error {
 			d.SetId(aws.StringValue(dbInstance.DbInstanceIdentifier))
 			d.Set("rds_db_instance_arn", dbInstance.RdsDbInstanceArn)

-			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+			sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 		}
 	}

-	return sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	return sweep.SweepOrchestrator(ctx, sweepResources)
 }

 func sweepStacks(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).OpsWorksConn()
+	conn := client.OpsWorksConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	output, err := conn.DescribeStacksWithContext(ctx, &opsworks.DescribeStacksInput{})
@@ -258,20 +265,20 @@ func sweepStacks(region string) error {
 			d.Set("use_opsworks_security_groups", true)
 		}

-		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+		sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 	}

-	return sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	return sweep.SweepOrchestrator(ctx, sweepResources)
 }

 func sweepLayers(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).OpsWorksConn()
+	conn := client.OpsWorksConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	output, err := conn.DescribeStacksWithContext(ctx, &opsworks.DescribeStacksInput{})
@@ -319,21 +326,21 @@ func sweepLayers(region string) error {
 			}
 		}

-		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+		sweepResources = append(sweepResources, sdk.NewSweepResource(r, d, client))
 	}
 }

-	return sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	return sweep.SweepOrchestrator(ctx, sweepResources)
 }

 func sweepUserProfiles(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).OpsWorksConn()
+	conn := client.OpsWorksConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)

 	output, err := conn.DescribeUserProfilesWithContext(ctx, &opsworks.DescribeUserProfilesInput{})
@@ -350,23 +357,29 @@ func sweepUserProfiles(region string) error {
 		r := ResourceUserProfile()
 		d := r.Data(nil)
 		d.SetId(aws.StringValue(profile.IamUserArn))
-		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+		sweepResources = append(sweepResources, newUserProfileSweeper(r, d, client))
 	}

-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	return sweep.SweepOrchestrator(ctx, sweepResources)
+}

-	var errs *multierror.Error
-	if errors.As(err, &errs) {
-		var es *multierror.Error
-		for _, e := range errs.Errors {
-			if tfawserr.ErrMessageContains(err, opsworks.ErrCodeValidationException, "Cannot delete self") {
-				log.Printf("[WARN] Ignoring error: %s", e.Error())
-			} else {
-				es = multierror.Append(es, e)
-			}
-		}
-		return es.ErrorOrNil()
+type userProfileSweeper struct {
+	d         *schema.ResourceData
+	sweepable sweep.Sweepable
+}
+
+func newUserProfileSweeper(resource *schema.Resource, d *schema.ResourceData, client *conns.AWSClient) *userProfileSweeper {
+	return &userProfileSweeper{
+		d:         d,
+		sweepable: sdk.NewSweepResource(resource, d, client),
 	}
+}
+func (ups userProfileSweeper) Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error {
+	err := ups.sweepable.Delete(ctx, timeout, optFns...)
+	if err != nil && strings.Contains(err.Error(), "Cannot delete self") {
+		log.Printf("[WARN] Skipping OpsWorks User Profile (%s): %s", ups.d.Id(), err)
+		return nil
+	}

 	return err
 }
diff --git a/internal/service/opsworks/tags_gen.go b/internal/service/opsworks/tags_gen.go
index ad051a1a68d..6d3590fe2a1 100644
--- a/internal/service/opsworks/tags_gen.go
+++ b/internal/service/opsworks/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists opsworks service tags.
+// listTags lists opsworks service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &opsworks.ListTagsInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier st
 // ListTags lists opsworks service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).OpsWorksConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).OpsWorksConn(ctx), identifier)

 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }

-// KeyValueTags creates KeyValueTags from opsworks service tags.
+// KeyValueTags creates tftags.KeyValueTags from opsworks service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }

-// GetTagsIn returns opsworks service tags from Context.
+// getTagsIn returns opsworks service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,8 +71,8 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }

-// SetTagsOut sets opsworks service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets opsworks service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
@@ -84,13 +84,13 @@ func createTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier
 		return nil
 	}

-	return UpdateTags(ctx, conn, identifier, nil, tags)
+	return updateTags(ctx, conn, identifier, nil, tags)
 }

-// UpdateTags updates opsworks service tags.
+// updateTags updates opsworks service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -130,5 +130,5 @@ func UpdateTags(ctx context.Context, conn opsworksiface.OpsWorksAPI, identifier
 // UpdateTags updates opsworks service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).OpsWorksConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).OpsWorksConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/opsworks/user_profile.go b/internal/service/opsworks/user_profile.go
index 30fd6661470..3f9c851dc63 100644
--- a/internal/service/opsworks/user_profile.go
+++ b/internal/service/opsworks/user_profile.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks

 import (
@@ -48,7 +51,7 @@ func ResourceUserProfile() *schema.Resource {

 func resourceUserProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	iamUserARN := d.Get("user_arn").(string)
 	input := &opsworks.CreateUserProfileInput{
@@ -74,7 +77,7 @@ func resourceUserProfileCreate(ctx context.Context, d *schema.ResourceData, meta

 func resourceUserProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	profile, err := FindUserProfileByARN(ctx, conn, d.Id())
@@ -98,7 +101,7 @@ func resourceUserProfileRead(ctx context.Context, d *schema.ResourceData, meta i

 func resourceUserProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	input := &opsworks.UpdateUserProfileInput{
 		AllowSelfManagement: aws.Bool(d.Get("allow_self_management").(bool)),
@@ -118,7 +121,7 @@ func resourceUserProfileUpdate(ctx context.Context, d *schema.ResourceData, meta

 func resourceUserProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).OpsWorksConn()
+	conn := meta.(*conns.AWSClient).OpsWorksConn(ctx)

 	log.Printf("[DEBUG] Deleting OpsWorks User Profile: %s", d.Id())
 	_, err := conn.DeleteUserProfileWithContext(ctx, &opsworks.DeleteUserProfileInput{
diff --git a/internal/service/opsworks/user_profile_test.go b/internal/service/opsworks/user_profile_test.go
index ea20cfd32cb..6ce7e73bf36 100644
--- a/internal/service/opsworks/user_profile_test.go
+++ b/internal/service/opsworks/user_profile_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package opsworks_test

 import (
@@ -83,7 +86,7 @@ func testAccCheckUserProfileExists(ctx context.Context, n string) resource.TestC
 			return fmt.Errorf("No OpsWorks User Profile ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		_, err := tfopsworks.FindUserProfileByARN(ctx, conn, rs.Primary.ID)
@@ -93,7 +96,7 @@ func testAccCheckUserProfileExists(ctx context.Context, n string) resource.TestC

 func testAccCheckUserProfileDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).OpsWorksConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_opsworks_user_profile" {
diff --git a/internal/service/organizations/account.go b/internal/service/organizations/account.go
index 641cbb120d8..226381b3351 100644
--- a/internal/service/organizations/account.go
+++ b/internal/service/organizations/account.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package organizations import ( @@ -108,7 +111,7 @@ func ResourceAccount() *schema.Resource { func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) var iamUserAccessToBilling *string @@ -127,7 +130,7 @@ func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta int d.Get("email").(string), iamUserAccessToBilling, roleName, - GetTagsIn(ctx), + getTagsIn(ctx), d.Get("create_govcloud").(bool), ) @@ -170,7 +173,7 @@ func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) account, err := FindAccountByID(ctx, conn, d.Id()) @@ -203,7 +206,7 @@ func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) if d.HasChange("parent_id") { o, n := d.GetChange("parent_id") @@ -224,7 +227,7 @@ func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) close := d.Get("close_on_deletion").(bool) var err error diff --git a/internal/service/organizations/account_test.go b/internal/service/organizations/account_test.go index 23242f8cd1e..13fdcb82176 
100644 --- a/internal/service/organizations/account_test.go +++ b/internal/service/organizations/account_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations_test import ( @@ -233,7 +236,7 @@ func testAccAccount_govCloud(t *testing.T) { func testAccCheckAccountDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_organizations_account" { @@ -268,7 +271,7 @@ func testAccCheckAccountExists(ctx context.Context, n string, v *organizations.A return fmt.Errorf("No AWS Organizations Account ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn(ctx) output, err := tforganizations.FindAccountByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/organizations/delegated_administrator.go b/internal/service/organizations/delegated_administrator.go index 436ede84d1b..39251f167f6 100644 --- a/internal/service/organizations/delegated_administrator.go +++ b/internal/service/organizations/delegated_administrator.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package organizations import ( @@ -14,6 +17,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -35,12 +40,6 @@ func ResourceDelegatedAdministrator() *schema.Resource { ForceNew: true, ValidateFunc: verify.ValidAccountID, }, - "service_principal": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, "arn": { Type: schema.TypeString, Computed: true, @@ -65,6 +64,12 @@ func ResourceDelegatedAdministrator() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "service_principal": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, "status": { Type: schema.TypeString, Computed: true, @@ -74,95 +79,141 @@ func ResourceDelegatedAdministrator() *schema.Resource { } func resourceDelegatedAdministratorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OrganizationsConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) accountID := d.Get("account_id").(string) servicePrincipal := d.Get("service_principal").(string) + id := DelegatedAdministratorCreateResourceID(accountID, servicePrincipal) input := &organizations.RegisterDelegatedAdministratorInput{ AccountId: aws.String(accountID), ServicePrincipal: aws.String(servicePrincipal), } _, err := conn.RegisterDelegatedAdministratorWithContext(ctx, input) + if err != nil { - return diag.FromErr(fmt.Errorf("error creating Organizations DelegatedAdministrator (%s): %w", accountID, err)) + 
return sdkdiag.AppendErrorf(diags, "creating Organizations Delegated Administrator (%s): %s", id, err) } - d.SetId(fmt.Sprintf("%s/%s", accountID, servicePrincipal)) + d.SetId(id) - return resourceDelegatedAdministratorRead(ctx, d, meta) + return append(diags, resourceDelegatedAdministratorRead(ctx, d, meta)...) } func resourceDelegatedAdministratorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OrganizationsConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) - accountID, servicePrincipal, err := DecodeOrganizationDelegatedAdministratorID(d.Id()) - if err != nil { - return diag.FromErr(fmt.Errorf("error decoding ID AWS Organization (%s) DelegatedAdministrators: %w", d.Id(), err)) - } - input := &organizations.ListDelegatedAdministratorsInput{ - ServicePrincipal: aws.String(servicePrincipal), - } - var delegatedAccount *organizations.DelegatedAdministrator - err = conn.ListDelegatedAdministratorsPagesWithContext(ctx, input, func(page *organizations.ListDelegatedAdministratorsOutput, lastPage bool) bool { - for _, delegated := range page.DelegatedAdministrators { - if aws.StringValue(delegated.Id) == accountID { - delegatedAccount = delegated - } - } + accountID, servicePrincipal, err := DelegatedAdministratorParseResourceID(d.Id()) - return !lastPage - }) if err != nil { - return diag.FromErr(fmt.Errorf("error listing AWS Organization (%s) DelegatedAdministrators: %w", d.Id(), err)) + return sdkdiag.AppendFromErr(diags, err) } - if delegatedAccount == nil { - if !d.IsNewResource() { - log.Printf("[WARN] AWS Organization DelegatedAdministrators not found (%s), removing from state", d.Id()) - d.SetId("") - return nil - } + delegatedAccount, err := findDelegatedAdministratorByTwoPartKey(ctx, conn, accountID, servicePrincipal) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Organizations Delegated Administrator %s not found, 
removing from state", d.Id()) + d.SetId("") + return diags + } - return diag.FromErr(&retry.NotFoundError{}) + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading Organizations Delegated Administrator (%s): %s", d.Id(), err) } + d.Set("account_id", accountID) d.Set("arn", delegatedAccount.Arn) d.Set("delegation_enabled_date", aws.TimeValue(delegatedAccount.DelegationEnabledDate).Format(time.RFC3339)) d.Set("email", delegatedAccount.Email) d.Set("joined_method", delegatedAccount.JoinedMethod) d.Set("joined_timestamp", aws.TimeValue(delegatedAccount.JoinedTimestamp).Format(time.RFC3339)) d.Set("name", delegatedAccount.Name) - d.Set("status", delegatedAccount.Status) - d.Set("account_id", accountID) d.Set("service_principal", servicePrincipal) + d.Set("status", delegatedAccount.Status) - return nil + return diags } func resourceDelegatedAdministratorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OrganizationsConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) + + accountID, servicePrincipal, err := DelegatedAdministratorParseResourceID(d.Id()) - accountID, servicePrincipal, err := DecodeOrganizationDelegatedAdministratorID(d.Id()) if err != nil { - return diag.FromErr(fmt.Errorf("error decoding ID AWS Organization (%s) DelegatedAdministrators: %w", d.Id(), err)) + return sdkdiag.AppendFromErr(diags, err) } - input := &organizations.DeregisterDelegatedAdministratorInput{ + + log.Printf("[DEBUG] Deleting Organizations Delegated Administrator: %s", d.Id()) + _, err = conn.DeregisterDelegatedAdministratorWithContext(ctx, &organizations.DeregisterDelegatedAdministratorInput{ AccountId: aws.String(accountID), ServicePrincipal: aws.String(servicePrincipal), + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Organizations Delegated Administrator (%s): %s", d.Id(), err) + } + + return diags +} + +func 
findDelegatedAdministratorByTwoPartKey(ctx context.Context, conn *organizations.Organizations, accountID, servicePrincipal string) (*organizations.DelegatedAdministrator, error) { + input := &organizations.ListDelegatedAdministratorsInput{ + ServicePrincipal: aws.String(servicePrincipal), + } + + output, err := findDelegatedAdministrators(ctx, conn, input) + + if err != nil { + return nil, err + } + + for _, v := range output { + if aws.StringValue(v.Id) == accountID { + return v, nil + } } - _, err = conn.DeregisterDelegatedAdministratorWithContext(ctx, input) + return nil, &retry.NotFoundError{} +} + +func findDelegatedAdministrators(ctx context.Context, conn *organizations.Organizations, input *organizations.ListDelegatedAdministratorsInput) ([]*organizations.DelegatedAdministrator, error) { + var output []*organizations.DelegatedAdministrator + + err := conn.ListDelegatedAdministratorsPagesWithContext(ctx, input, func(page *organizations.ListDelegatedAdministratorsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + output = append(output, page.DelegatedAdministrators...) 
+ + return !lastPage + }) + if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Organizations DelegatedAdministrator (%s): %w", d.Id(), err)) + return nil, err } - return nil + + return output, nil +} + +const delegatedAdministratorResourceIDSeparator = "/" + +func DelegatedAdministratorCreateResourceID(accountID, servicePrincipal string) string { + parts := []string{accountID, servicePrincipal} + id := strings.Join(parts, delegatedAdministratorResourceIDSeparator) + + return id } -func DecodeOrganizationDelegatedAdministratorID(id string) (string, string, error) { - idParts := strings.Split(id, "/") - if len(idParts) != 2 || idParts[0] == "" || idParts[1] == "" { - return "", "", fmt.Errorf("expected ID in the form of account_id/service_principal, given: %q", id) +func DelegatedAdministratorParseResourceID(id string) (string, string, error) { + parts := strings.Split(id, delegatedAdministratorResourceIDSeparator) + + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + return parts[0], parts[1], nil } - return idParts[0], idParts[1], nil + + return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected ACCOUNTID%[2]sSERVICEPRINCIPAL", id, delegatedAdministratorResourceIDSeparator) } diff --git a/internal/service/organizations/delegated_administrator_test.go b/internal/service/organizations/delegated_administrator_test.go index fb64792a398..0348e545583 100644 --- a/internal/service/organizations/delegated_administrator_test.go +++ b/internal/service/organizations/delegated_administrator_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package organizations_test import ( @@ -5,13 +8,13 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/organizations" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tforganizations "github.com/hashicorp/terraform-provider-aws/internal/service/organizations" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func testAccDelegatedAdministrator_basic(t *testing.T) { @@ -25,6 +28,7 @@ func testAccDelegatedAdministrator_basic(t *testing.T) { PreCheck: func() { acctest.PreCheck(ctx, t) acctest.PreCheckAlternateAccount(t) + acctest.PreCheckOrganizationManagementAccount(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), @@ -35,16 +39,11 @@ func testAccDelegatedAdministrator_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckDelegatedAdministratorExists(ctx, resourceName, &organization), resource.TestCheckResourceAttrPair(resourceName, "account_id", dataSourceIdentity, "account_id"), - resource.TestCheckResourceAttr(resourceName, "service_principal", servicePrincipal), acctest.CheckResourceAttrRFC3339(resourceName, "delegation_enabled_date"), acctest.CheckResourceAttrRFC3339(resourceName, "joined_timestamp"), + resource.TestCheckResourceAttr(resourceName, "service_principal", servicePrincipal), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } @@ -59,6 +58,7 @@ func testAccDelegatedAdministrator_disappears(t *testing.T) { PreCheck: func() { acctest.PreCheck(ctx, t) acctest.PreCheckAlternateAccount(t) + acctest.PreCheckOrganizationManagementAccount(ctx, t) }, ProtoV5ProviderFactories: 
acctest.ProtoV5FactoriesAlternate(ctx, t), CheckDestroy: testAccCheckDelegatedAdministratorDestroy(ctx), @@ -78,94 +78,65 @@ func testAccDelegatedAdministrator_disappears(t *testing.T) { func testAccCheckDelegatedAdministratorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_organizations_delegated_administrator" { continue } - accountID, servicePrincipal, err := tforganizations.DecodeOrganizationDelegatedAdministratorID(rs.Primary.ID) + accountID, servicePrincipal, err := tforganizations.DelegatedAdministratorParseResourceID(rs.Primary.ID) + if err != nil { return err } - input := &organizations.ListDelegatedAdministratorsInput{ - ServicePrincipal: aws.String(servicePrincipal), - } - exists := false - err = conn.ListDelegatedAdministratorsPagesWithContext(ctx, input, func(page *organizations.ListDelegatedAdministratorsOutput, lastPage bool) bool { - for _, delegated := range page.DelegatedAdministrators { - if aws.StringValue(delegated.Id) == accountID { - exists = true - } - } + _, err = tforganizations.FindDelegatedAdministratorByTwoPartKey(ctx, conn, accountID, servicePrincipal) - return !lastPage - }) + if tfresource.NotFound(err) { + continue + } if err != nil { return err } - if exists { - return fmt.Errorf("organization DelegatedAdministrator still exists: %q", rs.Primary.ID) - } + return fmt.Errorf("Organizations Delegated Administrator %s still exists", rs.Primary.ID) } return nil } } -func testAccCheckDelegatedAdministratorExists(ctx context.Context, n string, org *organizations.DelegatedAdministrator) resource.TestCheckFunc { +func testAccCheckDelegatedAdministratorExists(ctx context.Context, n string, v *organizations.DelegatedAdministrator) resource.TestCheckFunc { return func(s *terraform.State) 
error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("Organization ID not set") - } + accountID, servicePrincipal, err := tforganizations.DelegatedAdministratorParseResourceID(rs.Primary.ID) - accountID, servicePrincipal, err := tforganizations.DecodeOrganizationDelegatedAdministratorID(rs.Primary.ID) if err != nil { return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn() - input := &organizations.ListDelegatedAdministratorsInput{ - ServicePrincipal: aws.String(servicePrincipal), - } - exists := false - var resp *organizations.DelegatedAdministrator - err = conn.ListDelegatedAdministratorsPagesWithContext(ctx, input, func(page *organizations.ListDelegatedAdministratorsOutput, lastPage bool) bool { - for _, delegated := range page.DelegatedAdministrators { - if aws.StringValue(delegated.Id) == accountID { - exists = true - resp = delegated - } - } + conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn(ctx) - return !lastPage - }) + output, err := tforganizations.FindDelegatedAdministratorByTwoPartKey(ctx, conn, accountID, servicePrincipal) if err != nil { return err } - if !exists { - return fmt.Errorf("organization DelegatedAdministrator %q does not exist", rs.Primary.ID) - } - - *org = *resp + *v = *output return nil } } func testAccDelegatedAdministratorConfig_basic(servicePrincipal string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigAlternateAccountProvider(), fmt.Sprintf(` data "aws_caller_identity" "delegated" { provider = "awsalternate" } @@ -174,5 +145,5 @@ resource "aws_organizations_delegated_administrator" "test" { account_id = data.aws_caller_identity.delegated.account_id service_principal = %[1]q } -`, servicePrincipal) +`, servicePrincipal)) } diff --git a/internal/service/organizations/delegated_administrators_data_source.go 
b/internal/service/organizations/delegated_administrators_data_source.go index 7237660079f..79aa1069814 100644 --- a/internal/service/organizations/delegated_administrators_data_source.go +++ b/internal/service/organizations/delegated_administrators_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations import ( "context" - "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -17,12 +19,8 @@ import ( func DataSourceDelegatedAdministrators() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceDelegatedAdministratorsRead, + Schema: map[string]*schema.Schema{ - "service_principal": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, "delegated_administrators": { Type: schema.TypeSet, Computed: true, @@ -63,12 +61,17 @@ func DataSourceDelegatedAdministrators() *schema.Resource { }, }, }, + "service_principal": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, }, } } func dataSourceDelegatedAdministratorsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) input := &organizations.ListDelegatedAdministratorsInput{} @@ -76,47 +79,39 @@ func dataSourceDelegatedAdministratorsRead(ctx context.Context, d *schema.Resour input.ServicePrincipal = aws.String(v.(string)) } - var delegators []*organizations.DelegatedAdministrator - - err := conn.ListDelegatedAdministratorsPagesWithContext(ctx, input, func(page *organizations.ListDelegatedAdministratorsOutput, lastPage bool) bool { - if page == nil { - return !lastPage - } + output, err := findDelegatedAdministrators(ctx, conn, input) - delegators = append(delegators, page.DelegatedAdministrators...) 
- - return !lastPage - }) if err != nil { - return diag.FromErr(fmt.Errorf("error describing organizations delegated Administrators: %w", err)) - } - - if err = d.Set("delegated_administrators", flattenDelegatedAdministrators(delegators)); err != nil { - return diag.FromErr(fmt.Errorf("error setting delegated_administrators: %w", err)) + return diag.Errorf("reading Organizations Delegated Administrators: %s", err) } d.SetId(meta.(*conns.AWSClient).AccountID) + if err = d.Set("delegated_administrators", flattenDelegatedAdministrators(output)); err != nil { + return diag.Errorf("setting delegated_administrators: %s", err) + } return nil } -func flattenDelegatedAdministrators(delegatedAdministrators []*organizations.DelegatedAdministrator) []map[string]interface{} { - if len(delegatedAdministrators) == 0 { +func flattenDelegatedAdministrators(apiObjects []*organizations.DelegatedAdministrator) []map[string]interface{} { + if len(apiObjects) == 0 { return nil } - var result []map[string]interface{} - for _, delegated := range delegatedAdministrators { - result = append(result, map[string]interface{}{ - "arn": aws.StringValue(delegated.Arn), - "delegation_enabled_date": aws.TimeValue(delegated.DelegationEnabledDate).Format(time.RFC3339), - "email": aws.StringValue(delegated.Email), - "id": aws.StringValue(delegated.Id), - "joined_method": aws.StringValue(delegated.JoinedMethod), - "joined_timestamp": aws.TimeValue(delegated.JoinedTimestamp).Format(time.RFC3339), - "name": aws.StringValue(delegated.Name), - "status": aws.StringValue(delegated.Status), + var tfList []map[string]interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, map[string]interface{}{ + "arn": aws.StringValue(apiObject.Arn), + "delegation_enabled_date": aws.TimeValue(apiObject.DelegationEnabledDate).Format(time.RFC3339), + "email": aws.StringValue(apiObject.Email), + "id": aws.StringValue(apiObject.Id), + "joined_method": aws.StringValue(apiObject.JoinedMethod), + 
"joined_timestamp": aws.TimeValue(apiObject.JoinedTimestamp).Format(time.RFC3339), + "name": aws.StringValue(apiObject.Name), + "status": aws.StringValue(apiObject.Status), }) } - return result + + return tfList } diff --git a/internal/service/organizations/delegated_administrators_data_source_test.go b/internal/service/organizations/delegated_administrators_data_source_test.go index fa578545e68..639558fe2ce 100644 --- a/internal/service/organizations/delegated_administrators_data_source_test.go +++ b/internal/service/organizations/delegated_administrators_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations_test import ( @@ -9,16 +12,16 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) -func TestAccOrganizationsDelegatedAdministratorsDataSource_basic(t *testing.T) { +func testAccDelegatedAdministratorsDataSource_basic(t *testing.T) { ctx := acctest.Context(t) dataSourceName := "data.aws_organizations_delegated_administrators.test" servicePrincipal := "config-multiaccountsetup.amazonaws.com" - dataSourceIdentity := "data.aws_caller_identity.delegated" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) acctest.PreCheckAlternateAccount(t) + acctest.PreCheckOrganizationManagementAccount(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), @@ -26,136 +29,15 @@ func TestAccOrganizationsDelegatedAdministratorsDataSource_basic(t *testing.T) { { Config: testAccDelegatedAdministratorsDataSourceConfig_basic(servicePrincipal), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_administrators.#", "1"), - resource.TestCheckResourceAttrPair(dataSourceName, "delegated_administrators.0.id", dataSourceIdentity, "account_id"), - acctest.CheckResourceAttrRFC3339(dataSourceName, 
"delegated_administrators.0.delegation_enabled_date"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_administrators.0.joined_timestamp"), - ), - }, - }, - }) -} - -func TestAccOrganizationsDelegatedAdministratorsDataSource_multiple(t *testing.T) { - ctx := acctest.Context(t) - dataSourceName := "data.aws_organizations_delegated_administrators.test" - servicePrincipal := "config-multiaccountsetup.amazonaws.com" - servicePrincipal2 := "config.amazonaws.com" - dataSourceIdentity := "data.aws_caller_identity.delegated" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { - acctest.PreCheck(ctx, t) - acctest.PreCheckAlternateAccount(t) - }, - ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - Steps: []resource.TestStep{ - { - Config: testAccDelegatedAdministratorsDataSourceConfig_multiple(servicePrincipal, servicePrincipal2), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_administrators.#", "1"), - resource.TestCheckResourceAttrPair(dataSourceName, "delegated_administrators.0.id", dataSourceIdentity, "account_id"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_administrators.0.delegation_enabled_date"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_administrators.0.joined_timestamp"), + acctest.CheckResourceAttrGreaterThanOrEqualValue(dataSourceName, "delegated_administrators.#", 1), ), }, }, }) } -func TestAccOrganizationsDelegatedAdministratorsDataSource_servicePrincipal(t *testing.T) { - ctx := acctest.Context(t) - dataSourceName := "data.aws_organizations_delegated_administrators.test" - servicePrincipal := "config-multiaccountsetup.amazonaws.com" - dataSourceIdentity := "data.aws_caller_identity.delegated" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { - acctest.PreCheck(ctx, t) - acctest.PreCheckAlternateAccount(t) - }, - ErrorCheck: acctest.ErrorCheck(t, 
organizations.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - Steps: []resource.TestStep{ - { - Config: testAccDelegatedAdministratorsDataSourceConfig_servicePrincipal(servicePrincipal), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_administrators.#", "1"), - resource.TestCheckResourceAttrPair(dataSourceName, "delegated_administrators.0.id", dataSourceIdentity, "account_id"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_administrators.0.delegation_enabled_date"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_administrators.0.joined_timestamp"), - ), - }, - }, - }) -} - -func TestAccOrganizationsDelegatedAdministratorsDataSource_empty(t *testing.T) { - ctx := acctest.Context(t) - dataSourceName := "data.aws_organizations_delegated_administrators.test" - servicePrincipal := "config-multiaccountsetup.amazonaws.com" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - Steps: []resource.TestStep{ - { - Config: testAccDelegatedAdministratorsDataSourceConfig_empty(servicePrincipal), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_administrators.#", "0"), - ), - }, - }, - }) -} - -func testAccDelegatedAdministratorsDataSourceConfig_empty(servicePrincipal string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` -data "aws_organizations_delegated_administrators" "test" { - service_principal = %[1]q -} -`, servicePrincipal) -} - func testAccDelegatedAdministratorsDataSourceConfig_basic(servicePrincipal string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` -data "aws_caller_identity" "delegated" { - provider = "awsalternate" -} - -resource "aws_organizations_delegated_administrator" 
"test" { - account_id = data.aws_caller_identity.delegated.account_id - service_principal = %[1]q -} - -data "aws_organizations_delegated_administrators" "test" {} -`, servicePrincipal) -} - -func testAccDelegatedAdministratorsDataSourceConfig_multiple(servicePrincipal, servicePrincipal2 string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` -data "aws_caller_identity" "delegated" { - provider = "awsalternate" -} - -resource "aws_organizations_delegated_administrator" "delegated" { - account_id = data.aws_caller_identity.delegated.account_id - service_principal = %[1]q -} - -resource "aws_organizations_delegated_administrator" "other_delegated" { - account_id = data.aws_caller_identity.delegated.account_id - service_principal = %[2]q -} - -data "aws_organizations_delegated_administrators" "test" {} -`, servicePrincipal, servicePrincipal2) -} - -func testAccDelegatedAdministratorsDataSourceConfig_servicePrincipal(servicePrincipal string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigAlternateAccountProvider(), fmt.Sprintf(` data "aws_caller_identity" "delegated" { provider = "awsalternate" } @@ -166,7 +48,7 @@ resource "aws_organizations_delegated_administrator" "test" { } data "aws_organizations_delegated_administrators" "test" { - service_principal = aws_organizations_delegated_administrator.test.service_principal + depends_on = [aws_organizations_delegated_administrator.test] } -`, servicePrincipal) +`, servicePrincipal)) } diff --git a/internal/service/organizations/delegated_services_data_source.go b/internal/service/organizations/delegated_services_data_source.go index 732c490c9d4..50bdbe03a2d 100644 --- a/internal/service/organizations/delegated_services_data_source.go +++ b/internal/service/organizations/delegated_services_data_source.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package organizations import ( "context" - "fmt" "time" "github.com/aws/aws-sdk-go/aws" @@ -44,46 +46,59 @@ func DataSourceDelegatedServices() *schema.Resource { } func dataSourceDelegatedServicesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) + + accountID := d.Get("account_id").(string) + output, err := findDelegatedServicesByAccountID(ctx, conn, accountID) + + if err != nil { + return diag.Errorf("reading Organizations Delegated Services (%s): %s", accountID, err) + } + d.SetId(meta.(*conns.AWSClient).AccountID) + if err = d.Set("delegated_services", flattenDelegatedServices(output)); err != nil { + return diag.Errorf("setting delegated_services: %s", err) + } + + return nil +} + +func findDelegatedServicesByAccountID(ctx context.Context, conn *organizations.Organizations, accountID string) ([]*organizations.DelegatedService, error) { input := &organizations.ListDelegatedServicesForAccountInput{ - AccountId: aws.String(d.Get("account_id").(string)), + AccountId: aws.String(accountID), } + var output []*organizations.DelegatedService - var delegators []*organizations.DelegatedService err := conn.ListDelegatedServicesForAccountPagesWithContext(ctx, input, func(page *organizations.ListDelegatedServicesForAccountOutput, lastPage bool) bool { if page == nil { return !lastPage } - delegators = append(delegators, page.DelegatedServices...) + output = append(output, page.DelegatedServices...) 
return !lastPage }) - if err != nil { - return diag.FromErr(fmt.Errorf("error describing organizations delegated services: %w", err)) - } - if err = d.Set("delegated_services", flattenDelegatedServices(delegators)); err != nil { - return diag.FromErr(fmt.Errorf("error setting delegated_services: %w", err)) + if err != nil { + return nil, err } - d.SetId(meta.(*conns.AWSClient).AccountID) - - return nil + return output, nil } -func flattenDelegatedServices(delegatedServices []*organizations.DelegatedService) []map[string]interface{} { - if len(delegatedServices) == 0 { +func flattenDelegatedServices(apiObjects []*organizations.DelegatedService) []map[string]interface{} { + if len(apiObjects) == 0 { return nil } - var result []map[string]interface{} - for _, delegated := range delegatedServices { - result = append(result, map[string]interface{}{ - "delegation_enabled_date": aws.TimeValue(delegated.DelegationEnabledDate).Format(time.RFC3339), - "service_principal": aws.StringValue(delegated.ServicePrincipal), + var tfList []map[string]interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, map[string]interface{}{ + "delegation_enabled_date": aws.TimeValue(apiObject.DelegationEnabledDate).Format(time.RFC3339), + "service_principal": aws.StringValue(apiObject.ServicePrincipal), }) } - return result + + return tfList } diff --git a/internal/service/organizations/delegated_services_data_source_test.go b/internal/service/organizations/delegated_services_data_source_test.go index 723e397b746..6ffc6a2a98b 100644 --- a/internal/service/organizations/delegated_services_data_source_test.go +++ b/internal/service/organizations/delegated_services_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package organizations_test import ( @@ -9,16 +12,16 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) -func TestAccOrganizationsDelegatedServicesDataSource_basic(t *testing.T) { +func testAccDelegatedServicesDataSource_basic(t *testing.T) { ctx := acctest.Context(t) dataSourceName := "data.aws_organizations_delegated_services.test" - dataSourceIdentity := "data.aws_caller_identity.delegated" servicePrincipal := "config-multiaccountsetup.amazonaws.com" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) acctest.PreCheckAlternateAccount(t) + acctest.PreCheckOrganizationManagementAccount(ctx, t) }, ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), @@ -26,45 +29,17 @@ func TestAccOrganizationsDelegatedServicesDataSource_basic(t *testing.T) { { Config: testAccDelegatedServicesDataSourceConfig_basic(servicePrincipal), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_services.#", "1"), - resource.TestCheckResourceAttrPair(dataSourceName, "account_id", dataSourceIdentity, "account_id"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_services.0.delegation_enabled_date"), - resource.TestCheckResourceAttr(dataSourceName, "delegated_services.0.service_principal", servicePrincipal), - ), - }, - }, - }) -} - -func TestAccOrganizationsDelegatedServicesDataSource_empty(t *testing.T) { - ctx := acctest.Context(t) - dataSourceName := "data.aws_organizations_delegated_services.test" - dataSourceIdentity := "data.aws_caller_identity.delegated" - - resource.Test(t, resource.TestCase{ - PreCheck: func() { - acctest.PreCheck(ctx, t) - acctest.PreCheckAlternateAccount(t) - }, - ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), - Steps: []resource.TestStep{ - { - Config: 
testAccDelegatedServicesDataSourceConfig_empty(), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_services.#", "0"), - resource.TestCheckResourceAttrPair(dataSourceName, "account_id", dataSourceIdentity, "account_id"), + acctest.CheckResourceAttrGreaterThanOrEqualValue(dataSourceName, "delegated_services.#", 1), ), }, }, }) } -func TestAccOrganizationsDelegatedServicesDataSource_multiple(t *testing.T) { +func testAccDelegatedServicesDataSource_multiple(t *testing.T) { ctx := acctest.Context(t) dataSourceName := "data.aws_organizations_delegated_services.test" - dataSourceIdentity := "data.aws_caller_identity.delegated" - servicePrincipal := "config-multiaccountsetup.amazonaws.com" + servicePrincipal1 := "config-multiaccountsetup.amazonaws.com" servicePrincipal2 := "config.amazonaws.com" resource.Test(t, resource.TestCase{ @@ -76,34 +51,17 @@ func TestAccOrganizationsDelegatedServicesDataSource_multiple(t *testing.T) { ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), Steps: []resource.TestStep{ { - Config: testAccDelegatedServicesDataSourceConfig_multiple(servicePrincipal, servicePrincipal2), + Config: testAccDelegatedServicesDataSourceConfig_multiple(servicePrincipal1, servicePrincipal2), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "delegated_services.#", "2"), - resource.TestCheckResourceAttrPair(dataSourceName, "account_id", dataSourceIdentity, "account_id"), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_services.0.delegation_enabled_date"), - resource.TestCheckResourceAttr(dataSourceName, "delegated_services.0.service_principal", servicePrincipal), - acctest.CheckResourceAttrRFC3339(dataSourceName, "delegated_services.1.delegation_enabled_date"), - resource.TestCheckResourceAttr(dataSourceName, "delegated_services.1.service_principal", servicePrincipal2), + acctest.CheckResourceAttrGreaterThanOrEqualValue(dataSourceName, 
"delegated_services.#", 2), ), }, }, }) } -func testAccDelegatedServicesDataSourceConfig_empty() string { - return acctest.ConfigAlternateAccountProvider() + ` -data "aws_caller_identity" "delegated" { - provider = "awsalternate" -} - -data "aws_organizations_delegated_services" "test" { - account_id = data.aws_caller_identity.delegated.account_id -} -` -} - func testAccDelegatedServicesDataSourceConfig_basic(servicePrincipal string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigAlternateAccountProvider(), fmt.Sprintf(` data "aws_caller_identity" "delegated" { provider = "awsalternate" } @@ -114,13 +72,15 @@ resource "aws_organizations_delegated_administrator" "delegated" { } data "aws_organizations_delegated_services" "test" { - account_id = aws_organizations_delegated_administrator.delegated.account_id + account_id = data.aws_caller_identity.delegated.account_id + + depends_on = [aws_organizations_delegated_administrator.delegated] } -`, servicePrincipal) +`, servicePrincipal)) } -func testAccDelegatedServicesDataSourceConfig_multiple(servicePrincipal, servicePrincipal2 string) string { - return acctest.ConfigAlternateAccountProvider() + fmt.Sprintf(` +func testAccDelegatedServicesDataSourceConfig_multiple(servicePrincipal1, servicePrincipal2 string) string { + return acctest.ConfigCompose(acctest.ConfigAlternateAccountProvider(), fmt.Sprintf(` data "aws_caller_identity" "delegated" { provider = "awsalternate" } @@ -136,7 +96,9 @@ resource "aws_organizations_delegated_administrator" "other_delegated" { } data "aws_organizations_delegated_services" "test" { - account_id = aws_organizations_delegated_administrator.other_delegated.account_id + account_id = data.aws_caller_identity.delegated.account_id + + depends_on = [aws_organizations_delegated_administrator.delegated, aws_organizations_delegated_administrator.other_delegated] } -`, servicePrincipal, servicePrincipal2) +`, servicePrincipal1, 
servicePrincipal2)) } diff --git a/internal/service/organizations/exports_test.go b/internal/service/organizations/exports_test.go new file mode 100644 index 00000000000..74bb08d0e27 --- /dev/null +++ b/internal/service/organizations/exports_test.go @@ -0,0 +1,11 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package organizations + +// Exports for use in tests only. +var ( + FindDelegatedAdministratorByTwoPartKey = findDelegatedAdministratorByTwoPartKey + FindPolicyByID = findPolicyByID + FindResourcePolicy = findResourcePolicy +) diff --git a/internal/service/organizations/find.go b/internal/service/organizations/find.go index 9a623cf809f..fdf5c636e39 100644 --- a/internal/service/organizations/find.go +++ b/internal/service/organizations/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations import ( @@ -42,29 +45,6 @@ func FindAccountByID(ctx context.Context, conn *organizations.Organizations, id return output.Account, nil } -func FindOrganization(ctx context.Context, conn *organizations.Organizations) (*organizations.Organization, error) { - input := &organizations.DescribeOrganizationInput{} - - output, err := conn.DescribeOrganizationWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAWSOrganizationsNotInUseException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil || output.Organization == nil { - return nil, tfresource.NewEmptyResultError(input) - } - - return output.Organization, nil -} - func FindPolicyAttachmentByTwoPartKey(ctx context.Context, conn *organizations.Organizations, targetID, policyID string) (*organizations.PolicyTargetSummary, error) { input := &organizations.ListTargetsForPolicyInput{ PolicyId: aws.String(policyID), diff --git a/internal/service/organizations/flex_test.go b/internal/service/organizations/flex_test.go 
index 7a5078b99e4..e4a987ae05a 100644 --- a/internal/service/organizations/flex_test.go +++ b/internal/service/organizations/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations import ( diff --git a/internal/service/organizations/generate.go b/internal/service/organizations/generate.go index 1c4eb60a82d..19340ac7311 100644 --- a/internal/service/organizations/generate.go +++ b/internal/service/organizations/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceId -ServiceTagsSlice -TagInIDElem=ResourceId -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package organizations diff --git a/internal/service/organizations/organization.go b/internal/service/organizations/organization.go index de1c858bd4c..5dcc37ae7e7 100644 --- a/internal/service/organizations/organization.go +++ b/internal/service/organizations/organization.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package organizations import ( @@ -16,6 +19,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) const policyTypeStatusDisabled = "DISABLED" @@ -172,7 +176,7 @@ func ResourceOrganization() *schema.Resource { func resourceOrganizationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) createOpts := &organizations.CreateOrganizationInput{ FeatureSet: aws.String(d.Get("feature_set").(string)), @@ -233,7 +237,7 @@ func resourceOrganizationCreate(ctx context.Context, d *schema.ResourceData, met func resourceOrganizationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) log.Printf("[INFO] Reading Organization: %s", d.Id()) org, err := conn.DescribeOrganizationWithContext(ctx, &organizations.DescribeOrganizationInput{}) @@ -331,7 +335,7 @@ func resourceOrganizationRead(ctx context.Context, d *schema.ResourceData, meta func resourceOrganizationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) if d.HasChange("aws_service_access_principals") { oldRaw, newRaw := d.GetChange("aws_service_access_principals") @@ -419,7 +423,7 @@ func resourceOrganizationUpdate(ctx context.Context, d *schema.ResourceData, met func resourceOrganizationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) log.Printf("[INFO] Deleting Organization: %s", d.Id()) @@ -431,6 +435,50 @@ func resourceOrganizationDelete(ctx context.Context, d *schema.ResourceData, met return diags } +func FindOrganization(ctx context.Context, conn *organizations.Organizations) (*organizations.Organization, error) { + input := &organizations.DescribeOrganizationInput{} + + output, err := conn.DescribeOrganizationWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAWSOrganizationsNotInUseException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Organization == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.Organization, nil +} + +func findAccounts(ctx context.Context, conn *organizations.Organizations) ([]*organizations.Account, error) { + input := &organizations.ListAccountsInput{} + var output []*organizations.Account + + err := conn.ListAccountsPagesWithContext(ctx, input, func(page *organizations.ListAccountsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + output = append(output, page.Accounts...) 
+ + return !lastPage + }) + + if err != nil { + return nil, err + } + + return output, nil +} + func flattenAccounts(accounts []*organizations.Account) []map[string]interface{} { if len(accounts) == 0 { return nil @@ -503,7 +551,7 @@ func getOrganizationDefaultRootPolicyTypeRefreshFunc(ctx context.Context, conn * defaultRoot, err := getOrganizationDefaultRoot(ctx, conn) if err != nil { - return nil, "", fmt.Errorf("error getting default root: %s", err) + return nil, "", fmt.Errorf("getting default root: %s", err) } for _, pt := range defaultRoot.PolicyTypes { diff --git a/internal/service/organizations/organization_data_source.go b/internal/service/organizations/organization_data_source.go index 14aeeff77a1..ade0fdecb12 100644 --- a/internal/service/organizations/organization_data_source.go +++ b/internal/service/organizations/organization_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations import ( @@ -5,10 +8,12 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/organizations" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/slices" ) // @SDKDataSource("aws_organizations_organization") @@ -147,51 +152,54 @@ func DataSourceOrganization() *schema.Resource { func dataSourceOrganizationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OrganizationsConn() + conn := meta.(*conns.AWSClient).OrganizationsConn(ctx) + + org, err := FindOrganization(ctx, conn) - org, err := conn.DescribeOrganizationWithContext(ctx, &organizations.DescribeOrganizationInput{}) if err != 
nil { - return sdkdiag.AppendErrorf(diags, "describing organization: %s", err) + return sdkdiag.AppendErrorf(diags, "reading Organizations Organization: %s", err) } - d.SetId(aws.StringValue(org.Organization.Id)) - d.Set("arn", org.Organization.Arn) - d.Set("feature_set", org.Organization.FeatureSet) - d.Set("master_account_arn", org.Organization.MasterAccountArn) - d.Set("master_account_email", org.Organization.MasterAccountEmail) - d.Set("master_account_id", org.Organization.MasterAccountId) - - if aws.StringValue(org.Organization.MasterAccountId) == meta.(*conns.AWSClient).AccountID { - var accounts []*organizations.Account - var nonMasterAccounts []*organizations.Account - err = conn.ListAccountsPagesWithContext(ctx, &organizations.ListAccountsInput{}, func(page *organizations.ListAccountsOutput, lastPage bool) bool { - for _, account := range page.Accounts { - if aws.StringValue(account.Id) != aws.StringValue(org.Organization.MasterAccountId) { - nonMasterAccounts = append(nonMasterAccounts, account) - } + d.SetId(aws.StringValue(org.Id)) + d.Set("arn", org.Arn) + d.Set("feature_set", org.FeatureSet) + d.Set("master_account_arn", org.MasterAccountArn) + d.Set("master_account_email", org.MasterAccountEmail) + managementAccountID := aws.StringValue(org.MasterAccountId) + d.Set("master_account_id", managementAccountID) - accounts = append(accounts, account) - } + isManagementAccount := managementAccountID == meta.(*conns.AWSClient).AccountID + isDelegatedAdministrator := true + accounts, err := findAccounts(ctx, conn) - return !lastPage - }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "listing AWS Organization (%s) accounts: %s", d.Id(), err) + if err != nil { + if isManagementAccount || !tfawserr.ErrCodeEquals(err, organizations.ErrCodeAccessDeniedException) { + return sdkdiag.AppendErrorf(diags, "reading Organizations Accounts: %s", err) } + isDelegatedAdministrator = false + } + + if isManagementAccount || isDelegatedAdministrator { + 
nonManagementAccounts := slices.Filter(accounts, func(v *organizations.Account) bool { + return aws.StringValue(v.Id) != managementAccountID + }) + var roots []*organizations.Root + err = conn.ListRootsPagesWithContext(ctx, &organizations.ListRootsInput{}, func(page *organizations.ListRootsOutput, lastPage bool) bool { roots = append(roots, page.Roots...) return !lastPage }) + if err != nil { - return sdkdiag.AppendErrorf(diags, "listing AWS Organization (%s) roots: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "reading Organizations roots: %s", err) } awsServiceAccessPrincipals := make([]string, 0) // ConstraintViolationException: The request failed because the organization does not have all features enabled. Please enable all features in your organization and then retry. - if aws.StringValue(org.Organization.FeatureSet) == organizations.OrganizationFeatureSetAll { - err = conn.ListAWSServiceAccessForOrganizationPagesWithContext(ctx, &organizations.ListAWSServiceAccessForOrganizationInput{}, func(page *organizations.ListAWSServiceAccessForOrganizationOutput, lastPage bool) bool { + if aws.StringValue(org.FeatureSet) == organizations.OrganizationFeatureSetAll { + err := conn.ListAWSServiceAccessForOrganizationPagesWithContext(ctx, &organizations.ListAWSServiceAccessForOrganizationInput{}, func(page *organizations.ListAWSServiceAccessForOrganizationOutput, lastPage bool) bool { for _, enabledServicePrincipal := range page.EnabledServicePrincipals { awsServiceAccessPrincipals = append(awsServiceAccessPrincipals, aws.StringValue(enabledServicePrincipal.ServicePrincipal)) } @@ -199,11 +207,12 @@ func dataSourceOrganizationRead(ctx context.Context, d *schema.ResourceData, met }) if err != nil { - return sdkdiag.AppendErrorf(diags, "listing AWS Service Access for Organization (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "reading Organizations AWS service access: %s", err) } } - enabledPolicyTypes := make([]string, 0) + var enabledPolicyTypes 
[]string + for _, policyType := range roots[0].PolicyTypes { if aws.StringValue(policyType.Status) == organizations.PolicyTypeStatusEnabled { enabledPolicyTypes = append(enabledPolicyTypes, aws.StringValue(policyType.Type)) @@ -222,7 +231,7 @@ func dataSourceOrganizationRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "setting enabled_policy_types: %s", err) } - if err := d.Set("non_master_accounts", flattenAccounts(nonMasterAccounts)); err != nil { + if err := d.Set("non_master_accounts", flattenAccounts(nonManagementAccounts)); err != nil { return sdkdiag.AppendErrorf(diags, "setting non_master_accounts: %s", err) } @@ -230,5 +239,6 @@ func dataSourceOrganizationRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "setting roots: %s", err) } } + return diags } diff --git a/internal/service/organizations/organization_data_source_test.go b/internal/service/organizations/organization_data_source_test.go index c0935dce709..68d946007ca 100644 --- a/internal/service/organizations/organization_data_source_test.go +++ b/internal/service/organizations/organization_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package organizations_test import ( @@ -8,6 +11,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) +// Creates a new organization so that we are its management account.
func testAccOrganizationDataSource_basic(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_organizations_organization.test" @@ -22,10 +26,7 @@ func testAccOrganizationDataSource_basic(t *testing.T) { ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { - Config: testAccOrganizationDataSourceConfig_resourceOnly, - }, - { - Config: testAccOrganizationDataSourceConfig_basic, + Config: testAccOrganizationDataSourceConfig_newOrganization, Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrPair(resourceName, "accounts.#", dataSourceName, "accounts.#"), resource.TestCheckResourceAttrPair(resourceName, "arn", dataSourceName, "arn"), @@ -33,7 +34,6 @@ func testAccOrganizationDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "enabled_policy_types.#", dataSourceName, "enabled_policy_types.#"), resource.TestCheckResourceAttrPair(resourceName, "feature_set", dataSourceName, "feature_set"), resource.TestCheckResourceAttrPair(resourceName, "id", dataSourceName, "id"), - resource.TestCheckResourceAttrPair(resourceName, "status", dataSourceName, "status"), resource.TestCheckResourceAttrPair(resourceName, "master_account_arn", dataSourceName, "master_account_arn"), resource.TestCheckResourceAttrPair(resourceName, "master_account_email", dataSourceName, "master_account_email"), resource.TestCheckResourceAttrPair(resourceName, "master_account_id", dataSourceName, "master_account_id"), @@ -41,21 +41,134 @@ func testAccOrganizationDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "roots.#", dataSourceName, "roots.#"), ), }, + }, + }) +} + +// Runs as a member account in an existing organization. +// Certain attributes won't be set. 
+func testAccOrganizationDataSource_memberAccount(t *testing.T) { + ctx := acctest.Context(t) + dataSourceName := "data.aws_organizations_organization.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckOrganizationMemberAccount(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccOrganizationDataSourceConfig_basic, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckNoResourceAttr(dataSourceName, "accounts.#"), + resource.TestCheckResourceAttrSet(dataSourceName, "arn"), + resource.TestCheckNoResourceAttr(dataSourceName, "aws_service_access_principals.#"), + resource.TestCheckNoResourceAttr(dataSourceName, "enabled_policy_types.#"), + resource.TestCheckResourceAttrSet(dataSourceName, "feature_set"), + resource.TestCheckResourceAttrSet(dataSourceName, "master_account_arn"), + resource.TestCheckResourceAttrSet(dataSourceName, "master_account_email"), + resource.TestCheckResourceAttrSet(dataSourceName, "master_account_id"), + resource.TestCheckNoResourceAttr(dataSourceName, "non_master_accounts.#"), + resource.TestCheckNoResourceAttr(dataSourceName, "roots.#"), + ), + }, + }, + }) +} + +// Runs as a management account in an existing organization. +// Delegates Organizations management to a member account and runs the data source under that account. +// All attributes will be set. 
+func testAccOrganizationDataSource_delegatedAdministrator(t *testing.T) { + ctx := acctest.Context(t) + dataSourceName := "data.aws_organizations_organization.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckAlternateAccount(t) + acctest.PreCheckOrganizationManagementAccount(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), + Steps: []resource.TestStep{ { - // This is to make sure the data source isn't around trying to read the resource - // when the resource is being destroyed - Config: testAccOrganizationDataSourceConfig_resourceOnly, + Config: testAccOrganizationDataSourceConfig_delegatedAdministrator, + Check: resource.ComposeAggregateTestCheckFunc( + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "accounts.#", 2), + resource.TestCheckResourceAttrSet(dataSourceName, "arn"), + resource.TestCheckResourceAttrSet(dataSourceName, "aws_service_access_principals.#"), + resource.TestCheckResourceAttrSet(dataSourceName, "enabled_policy_types.#"), + resource.TestCheckResourceAttrSet(dataSourceName, "feature_set"), + resource.TestCheckResourceAttrSet(dataSourceName, "master_account_arn"), + resource.TestCheckResourceAttrSet(dataSourceName, "master_account_email"), + resource.TestCheckResourceAttrSet(dataSourceName, "master_account_id"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "non_master_accounts.#", 1), + resource.TestCheckResourceAttrSet(dataSourceName, "roots.#"), + ), }, }, }) } -const testAccOrganizationDataSourceConfig_resourceOnly = ` +const testAccOrganizationDataSourceConfig_newOrganization = ` resource "aws_organizations_organization" "test" {} + +data "aws_organizations_organization" "test" { + depends_on = [aws_organizations_organization.test] +} ` const testAccOrganizationDataSourceConfig_basic = ` -resource "aws_organizations_organization" "test" {} - data 
"aws_organizations_organization" "test" {} ` + +var testAccOrganizationDataSourceConfig_delegatedAdministrator = acctest.ConfigCompose(acctest.ConfigAlternateAccountProvider(), ` +data "aws_caller_identity" "delegated" { + provider = "awsalternate" +} + +resource "aws_organizations_resource_policy" "test" { + content = < 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*organizations.Tag { return nil } -// SetTagsOut sets organizations service tags in Context. -func SetTagsOut(ctx context.Context, tags []*organizations.Tag) { +// setTagsOut sets organizations service tags in Context. +func setTagsOut(ctx context.Context, tags []*organizations.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates organizations service tags. +// updateTags updates organizations service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn organizationsiface.OrganizationsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn organizationsiface.OrganizationsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn organizationsiface.OrganizationsAPI, i // UpdateTags updates organizations service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).OrganizationsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).OrganizationsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/outposts/generate.go b/internal/service/outposts/generate.go index a5b389ddb51..fc50806b44e 100644 --- a/internal/service/outposts/generate.go +++ b/internal/service/outposts/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags -AWSSDKVersion=1 -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package outposts diff --git a/internal/service/outposts/outpost_asset_data_source.go b/internal/service/outposts/outpost_asset_data_source.go index 859cc564fa5..5d57a94dd6d 100644 --- a/internal/service/outposts/outpost_asset_data_source.go +++ b/internal/service/outposts/outpost_asset_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -49,7 +52,7 @@ func DataSourceOutpostAsset() *schema.Resource { func DataSourceOutpostAssetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) outpost_id := aws.String(d.Get("arn").(string)) input := &outposts.ListAssetsInput{ diff --git a/internal/service/outposts/outpost_asset_data_source_test.go b/internal/service/outposts/outpost_asset_data_source_test.go index c4de90c6558..bb5a72d50db 100644 --- a/internal/service/outposts/outpost_asset_data_source_test.go +++ b/internal/service/outposts/outpost_asset_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/outpost_assets_data_source.go b/internal/service/outposts/outpost_assets_data_source.go index 94c4f0efee9..bea1f5859db 100644 --- a/internal/service/outposts/outpost_assets_data_source.go +++ b/internal/service/outposts/outpost_assets_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -59,7 +62,7 @@ func DataSourceOutpostAssets() *schema.Resource { func DataSourceOutpostAssetsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) outpost_id := aws.String(d.Get("arn").(string)) input := &outposts.ListAssetsInput{ diff --git a/internal/service/outposts/outpost_assets_data_source_test.go b/internal/service/outposts/outpost_assets_data_source_test.go index 4d0174168da..3497f765f1e 100644 --- a/internal/service/outposts/outpost_assets_data_source_test.go +++ b/internal/service/outposts/outpost_assets_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/outpost_data_source.go b/internal/service/outposts/outpost_data_source.go index 3f960d89a67..3a0d10de6cc 100644 --- a/internal/service/outposts/outpost_data_source.go +++ b/internal/service/outposts/outpost_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -74,7 +77,7 @@ func DataSourceOutpost() *schema.Resource { func dataSourceOutpostRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &outposts.ListOutpostsInput{} diff --git a/internal/service/outposts/outpost_data_source_test.go b/internal/service/outposts/outpost_data_source_test.go index 1389cd9d11b..4929051e3c6 100644 --- a/internal/service/outposts/outpost_data_source_test.go +++ b/internal/service/outposts/outpost_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/outpost_instance_type_data_source.go b/internal/service/outposts/outpost_instance_type_data_source.go index 94b936a18b5..72d1104eaf7 100644 --- a/internal/service/outposts/outpost_instance_type_data_source.go +++ b/internal/service/outposts/outpost_instance_type_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -41,7 +44,7 @@ func DataSourceOutpostInstanceType() *schema.Resource { func dataSourceOutpostInstanceTypeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) input := &outposts.GetOutpostInstanceTypesInput{ OutpostId: aws.String(d.Get("arn").(string)), // Accepts both ARN and ID; prefer ARN which is more common diff --git a/internal/service/outposts/outpost_instance_type_data_source_test.go b/internal/service/outposts/outpost_instance_type_data_source_test.go index 8ea5ddb0153..a1a000f6c35 100644 --- a/internal/service/outposts/outpost_instance_type_data_source_test.go +++ b/internal/service/outposts/outpost_instance_type_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/outpost_instance_types_data_source.go b/internal/service/outposts/outpost_instance_types_data_source.go index df86e3322c2..fd41591fd5f 100644 --- a/internal/service/outposts/outpost_instance_types_data_source.go +++ b/internal/service/outposts/outpost_instance_types_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -34,7 +37,7 @@ func DataSourceOutpostInstanceTypes() *schema.Resource { func dataSourceOutpostInstanceTypesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) input := &outposts.GetOutpostInstanceTypesInput{ OutpostId: aws.String(d.Get("arn").(string)), // Accepts both ARN and ID; prefer ARN which is more common diff --git a/internal/service/outposts/outpost_instance_types_data_source_test.go b/internal/service/outposts/outpost_instance_types_data_source_test.go index c2034ff7cec..2e6b468787c 100644 --- a/internal/service/outposts/outpost_instance_types_data_source_test.go +++ b/internal/service/outposts/outpost_instance_types_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/outposts_data_source.go b/internal/service/outposts/outposts_data_source.go index 7dd68c29aec..ccab08af764 100644 --- a/internal/service/outposts/outposts_data_source.go +++ b/internal/service/outposts/outposts_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -53,7 +56,7 @@ func DataSourceOutposts() *schema.Resource { // nosemgrep:ci.outposts-in-func-na func dataSourceOutpostsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.outposts-in-func-name var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) input := &outposts.ListOutpostsInput{} diff --git a/internal/service/outposts/outposts_data_source_test.go b/internal/service/outposts/outposts_data_source_test.go index 4fb7897f745..71c5e5ae8e6 100644 --- a/internal/service/outposts/outposts_data_source_test.go +++ b/internal/service/outposts/outposts_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/service_package_gen.go b/internal/service/outposts/service_package_gen.go index 49fec891493..5176468c563 100644 --- a/internal/service/outposts/service_package_gen.go +++ b/internal/service/outposts/service_package_gen.go @@ -5,6 +5,10 @@ package outposts import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + outposts_sdkv1 "github.com/aws/aws-sdk-go/service/outposts" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -64,4 +68,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Outposts } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*outposts_sdkv1.Outposts, error) { + sess := config["session"].(*session_sdkv1.Session) + + return outposts_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/outposts/site_data_source.go b/internal/service/outposts/site_data_source.go index 1789d59327f..3ac9de045d3 100644 --- a/internal/service/outposts/site_data_source.go +++ b/internal/service/outposts/site_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -43,7 +46,7 @@ func DataSourceSite() *schema.Resource { func dataSourceSiteRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) input := &outposts.ListSitesInput{} diff --git a/internal/service/outposts/site_data_source_test.go b/internal/service/outposts/site_data_source_test.go index e430825bee8..47395699536 100644 --- a/internal/service/outposts/site_data_source_test.go +++ b/internal/service/outposts/site_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( diff --git a/internal/service/outposts/sites_data_source.go b/internal/service/outposts/sites_data_source.go index 7626ebc64ef..ed2a6a60931 100644 --- a/internal/service/outposts/sites_data_source.go +++ b/internal/service/outposts/sites_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package outposts import ( @@ -28,7 +31,7 @@ func DataSourceSites() *schema.Resource { func dataSourceSitesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).OutpostsConn() + conn := meta.(*conns.AWSClient).OutpostsConn(ctx) input := &outposts.ListSitesInput{} diff --git a/internal/service/outposts/sites_data_source_test.go b/internal/service/outposts/sites_data_source_test.go index feec5d8ff0d..3e3f30455db 100644 --- a/internal/service/outposts/sites_data_source_test.go +++ b/internal/service/outposts/sites_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package outposts_test import ( @@ -48,7 +51,7 @@ func testAccCheckSitesAttributes(dataSourceName string) resource.TestCheckFunc { } func testAccPreCheckSites(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).OutpostsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).OutpostsConn(ctx) input := &outposts.ListSitesInput{} diff --git a/internal/service/outposts/tags_gen.go b/internal/service/outposts/tags_gen.go index 82f9bd53712..bc8289d3463 100644 --- a/internal/service/outposts/tags_gen.go +++ b/internal/service/outposts/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from outposts service tags. +// KeyValueTags creates tftags.KeyValueTags from outposts service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns outposts service tags from Context. +// getTagsIn returns outposts service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets outposts service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets outposts service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates outposts service tags. +// updateTags updates outposts service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn outpostsiface.OutpostsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn outpostsiface.OutpostsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn outpostsiface.OutpostsAPI, identifier // UpdateTags updates outposts service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).OutpostsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).OutpostsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/pinpoint/adm_channel.go b/internal/service/pinpoint/adm_channel.go index b1e59789ed9..4de9b75a2e2 100644 --- a/internal/service/pinpoint/adm_channel.go +++ b/internal/service/pinpoint/adm_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -51,7 +54,7 @@ func ResourceADMChannel() *schema.Resource { func resourceADMChannelUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -78,7 +81,7 @@ func resourceADMChannelUpsert(ctx context.Context, d *schema.ResourceData, meta func resourceADMChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint ADM Channel for application %s", d.Id()) @@ -104,7 +107,7 @@ func resourceADMChannelRead(ctx context.Context, d *schema.ResourceData, meta in func resourceADMChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Pinpoint Delete ADM Channel: %s", d.Id()) _, err := conn.DeleteAdmChannelWithContext(ctx, &pinpoint.DeleteAdmChannelInput{ diff --git a/internal/service/pinpoint/adm_channel_test.go 
b/internal/service/pinpoint/adm_channel_test.go index 575d34f9d0c..1cf596930df 100644 --- a/internal/service/pinpoint/adm_channel_test.go +++ b/internal/service/pinpoint/adm_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -92,7 +95,7 @@ func testAccCheckADMChannelExists(ctx context.Context, n string, channel *pinpoi return fmt.Errorf("No Pinpoint ADM channel with that Application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the ADM Channel exists params := &pinpoint.GetAdmChannelInput{ @@ -126,7 +129,7 @@ resource "aws_pinpoint_adm_channel" "channel" { func testAccCheckADMChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_adm_channel" { diff --git a/internal/service/pinpoint/apns_channel.go b/internal/service/pinpoint/apns_channel.go index 8b41dd6f6a4..19665d137b4 100644 --- a/internal/service/pinpoint/apns_channel.go +++ b/internal/service/pinpoint/apns_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -87,7 +90,7 @@ func resourceAPNSChannelUpsert(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "At least one set of credentials is required; either [certificate, private_key] or [bundle_id, team_id, token_key, token_key_id]") } - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -121,7 +124,7 @@ func resourceAPNSChannelUpsert(ctx context.Context, d *schema.ResourceData, meta func resourceAPNSChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint APNs Channel for Application %s", d.Id()) @@ -148,7 +151,7 @@ func resourceAPNSChannelRead(ctx context.Context, d *schema.ResourceData, meta i func resourceAPNSChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint APNs Channel: %s", d.Id()) _, err := conn.DeleteApnsChannelWithContext(ctx, &pinpoint.DeleteApnsChannelInput{ diff --git a/internal/service/pinpoint/apns_channel_test.go b/internal/service/pinpoint/apns_channel_test.go index 1c74d069c71..74380d9a060 100644 --- a/internal/service/pinpoint/apns_channel_test.go +++ b/internal/service/pinpoint/apns_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -172,7 +175,7 @@ func testAccCheckAPNSChannelExists(ctx context.Context, n string, channel *pinpo return fmt.Errorf("No Pinpoint APNs Channel with that Application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetApnsChannelInput{ @@ -224,7 +227,7 @@ resource "aws_pinpoint_apns_channel" "test_apns_channel" { func testAccCheckAPNSChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_apns_channel" { diff --git a/internal/service/pinpoint/apns_sandbox_channel.go b/internal/service/pinpoint/apns_sandbox_channel.go index 35b57011aba..4b5bdf8266f 100644 --- a/internal/service/pinpoint/apns_sandbox_channel.go +++ b/internal/service/pinpoint/apns_sandbox_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -87,7 +90,7 @@ func resourceAPNSSandboxChannelUpsert(ctx context.Context, d *schema.ResourceDat return sdkdiag.AppendErrorf(diags, "At least one set of credentials is required; either [certificate, private_key] or [bundle_id, team_id, token_key, token_key_id]") } - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -121,7 +124,7 @@ func resourceAPNSSandboxChannelUpsert(ctx context.Context, d *schema.ResourceDat func resourceAPNSSandboxChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint APNs Channel for Application %s", d.Id()) @@ -148,7 +151,7 @@ func resourceAPNSSandboxChannelRead(ctx context.Context, d *schema.ResourceData, func resourceAPNSSandboxChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint APNs Sandbox Channel: %s", d.Id()) _, err := conn.DeleteApnsSandboxChannelWithContext(ctx, &pinpoint.DeleteApnsSandboxChannelInput{ diff --git a/internal/service/pinpoint/apns_sandbox_channel_test.go b/internal/service/pinpoint/apns_sandbox_channel_test.go index 0b2e0037db5..d9569a0fdde 100644 --- a/internal/service/pinpoint/apns_sandbox_channel_test.go +++ b/internal/service/pinpoint/apns_sandbox_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -172,7 +175,7 @@ func testAccCheckAPNSSandboxChannelExists(ctx context.Context, n string, channel return fmt.Errorf("No Pinpoint APNs Channel with that Application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetApnsSandboxChannelInput{ @@ -224,7 +227,7 @@ resource "aws_pinpoint_apns_sandbox_channel" "test_channel" { func testAccCheckAPNSSandboxChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_apns_sandbox_channel" { diff --git a/internal/service/pinpoint/apns_voip_channel.go b/internal/service/pinpoint/apns_voip_channel.go index 4649f2a8bd0..923ae62a932 100644 --- a/internal/service/pinpoint/apns_voip_channel.go +++ b/internal/service/pinpoint/apns_voip_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -87,7 +90,7 @@ func resourceAPNSVoIPChannelUpsert(ctx context.Context, d *schema.ResourceData, return sdkdiag.AppendErrorf(diags, "At least one set of credentials is required; either [certificate, private_key] or [bundle_id, team_id, token_key, token_key_id]") } - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -121,7 +124,7 @@ func resourceAPNSVoIPChannelUpsert(ctx context.Context, d *schema.ResourceData, func resourceAPNSVoIPChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint APNs Voip Channel for Application %s", d.Id()) @@ -148,7 +151,7 @@ func resourceAPNSVoIPChannelRead(ctx context.Context, d *schema.ResourceData, me func resourceAPNSVoIPChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint APNs Voip Channel: %s", d.Id()) _, err := conn.DeleteApnsVoipChannelWithContext(ctx, &pinpoint.DeleteApnsVoipChannelInput{ diff --git a/internal/service/pinpoint/apns_voip_channel_test.go b/internal/service/pinpoint/apns_voip_channel_test.go index 00ca8fa940e..024ed232d04 100644 --- a/internal/service/pinpoint/apns_voip_channel_test.go +++ b/internal/service/pinpoint/apns_voip_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -172,7 +175,7 @@ func testAccCheckAPNSVoIPChannelExists(ctx context.Context, n string, channel *p return fmt.Errorf("No Pinpoint APNs Voip Channel with that Application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetApnsVoipChannelInput{ @@ -224,7 +227,7 @@ resource "aws_pinpoint_apns_voip_channel" "test_channel" { func testAccCheckAPNSVoIPChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_apns_voip_channel" { diff --git a/internal/service/pinpoint/apns_voip_sandbox_channel.go b/internal/service/pinpoint/apns_voip_sandbox_channel.go index 4a712c673a9..86945382650 100644 --- a/internal/service/pinpoint/apns_voip_sandbox_channel.go +++ b/internal/service/pinpoint/apns_voip_sandbox_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -87,7 +90,7 @@ func resourceAPNSVoIPSandboxChannelUpsert(ctx context.Context, d *schema.Resourc return sdkdiag.AppendErrorf(diags, "At least one set of credentials is required; either [certificate, private_key] or [bundle_id, team_id, token_key, token_key_id]") } - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -121,7 +124,7 @@ func resourceAPNSVoIPSandboxChannelUpsert(ctx context.Context, d *schema.Resourc func resourceAPNSVoIPSandboxChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint APNs Voip Sandbox Channel for Application %s", d.Id()) @@ -148,7 +151,7 @@ func resourceAPNSVoIPSandboxChannelRead(ctx context.Context, d *schema.ResourceD func resourceAPNSVoIPSandboxChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint APNs Voip Sandbox Channel: %s", d.Id()) _, err := conn.DeleteApnsVoipSandboxChannelWithContext(ctx, &pinpoint.DeleteApnsVoipSandboxChannelInput{ diff --git a/internal/service/pinpoint/apns_voip_sandbox_channel_test.go b/internal/service/pinpoint/apns_voip_sandbox_channel_test.go index a3651e4e26a..f1c012d971e 100644 --- a/internal/service/pinpoint/apns_voip_sandbox_channel_test.go +++ b/internal/service/pinpoint/apns_voip_sandbox_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -172,7 +175,7 @@ func testAccCheckAPNSVoIPSandboxChannelExists(ctx context.Context, n string, cha return fmt.Errorf("No Pinpoint APNs Voip Sandbox Channel with that Application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetApnsVoipSandboxChannelInput{ @@ -224,7 +227,7 @@ resource "aws_pinpoint_apns_voip_sandbox_channel" "test_channel" { func testAccCheckAPNSVoIPSandboxChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_apns_voip_sandbox_channel" { diff --git a/internal/service/pinpoint/app.go b/internal/service/pinpoint/app.go index efe54069e8e..21e05da98ba 100644 --- a/internal/service/pinpoint/app.go +++ b/internal/service/pinpoint/app.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -155,7 +158,7 @@ func ResourceApp() *schema.Resource { func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) var name string @@ -172,7 +175,7 @@ func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interfa req := &pinpoint.CreateAppInput{ CreateApplicationRequest: &pinpoint.CreateApplicationRequest{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), }, } @@ -189,7 +192,7 @@ func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint App Attributes for %s", d.Id()) @@ -239,7 +242,7 @@ func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface func resourceAppUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) appSettings := &pinpoint.WriteApplicationSettingsRequest{} @@ -274,7 +277,7 @@ func resourceAppUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint Application: %s", d.Id()) _, err := conn.DeleteAppWithContext(ctx, &pinpoint.DeleteAppInput{ diff --git a/internal/service/pinpoint/app_test.go b/internal/service/pinpoint/app_test.go index 
e039705c6c9..3eac62f0d51 100644 --- a/internal/service/pinpoint/app_test.go +++ b/internal/service/pinpoint/app_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -176,7 +179,7 @@ func TestAccPinpointApp_tags(t *testing.T) { } func testAccPreCheckApp(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) input := &pinpoint.GetAppsInput{} @@ -202,7 +205,7 @@ func testAccCheckAppExists(ctx context.Context, n string, application *pinpoint. return fmt.Errorf("No Pinpoint app with that ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetAppInput{ @@ -337,7 +340,7 @@ resource "aws_pinpoint_app" "test" { func testAccCheckAppDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_app" { @@ -364,7 +367,7 @@ func testAccCheckAppDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckRAMResourceShareDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ram_resource_share" { diff --git a/internal/service/pinpoint/baidu_channel.go b/internal/service/pinpoint/baidu_channel.go index 6ad33e5bdf7..ddde104f65f 100644 --- a/internal/service/pinpoint/baidu_channel.go +++ b/internal/service/pinpoint/baidu_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -51,7 +54,7 @@ func ResourceBaiduChannel() *schema.Resource { func resourceBaiduChannelUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -78,7 +81,7 @@ func resourceBaiduChannelUpsert(ctx context.Context, d *schema.ResourceData, met func resourceBaiduChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint Baidu Channel for application %s", d.Id()) @@ -104,7 +107,7 @@ func resourceBaiduChannelRead(ctx context.Context, d *schema.ResourceData, meta func resourceBaiduChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint Baidu Channel for application %s", d.Id()) _, err := conn.DeleteBaiduChannelWithContext(ctx, &pinpoint.DeleteBaiduChannelInput{ diff --git a/internal/service/pinpoint/baidu_channel_test.go b/internal/service/pinpoint/baidu_channel_test.go index f3d0d7fb037..66cc16e091b 100644 --- a/internal/service/pinpoint/baidu_channel_test.go +++ b/internal/service/pinpoint/baidu_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -68,7 +71,7 @@ func testAccCheckBaiduChannelExists(ctx context.Context, n string, channel *pinp return fmt.Errorf("No Pinpoint Baidu channel with that Application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the Baidu Channel exists params := &pinpoint.GetBaiduChannelInput{ @@ -102,7 +105,7 @@ resource "aws_pinpoint_baidu_channel" "channel" { func testAccCheckBaiduChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_baidu_channel" { diff --git a/internal/service/pinpoint/consts.go b/internal/service/pinpoint/consts.go index 2ce2a4d0c69..0a5feebbee6 100644 --- a/internal/service/pinpoint/consts.go +++ b/internal/service/pinpoint/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( diff --git a/internal/service/pinpoint/email_channel.go b/internal/service/pinpoint/email_channel.go index c55850a3b5c..f239a919ccd 100644 --- a/internal/service/pinpoint/email_channel.go +++ b/internal/service/pinpoint/email_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -64,7 +67,7 @@ func ResourceEmailChannel() *schema.Resource { func resourceEmailChannelUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -99,7 +102,7 @@ func resourceEmailChannelUpsert(ctx context.Context, d *schema.ResourceData, met func resourceEmailChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint Email Channel for application %s", d.Id()) @@ -130,7 +133,7 @@ func resourceEmailChannelRead(ctx context.Context, d *schema.ResourceData, meta func resourceEmailChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint Email Channel for application %s", d.Id()) _, err := conn.DeleteEmailChannelWithContext(ctx, &pinpoint.DeleteEmailChannelInput{ diff --git a/internal/service/pinpoint/email_channel_test.go b/internal/service/pinpoint/email_channel_test.go index 4f272b21756..fee78750323 100644 --- a/internal/service/pinpoint/email_channel_test.go +++ b/internal/service/pinpoint/email_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -159,7 +162,7 @@ func testAccCheckEmailChannelExists(ctx context.Context, n string, channel *pinp return fmt.Errorf("No Pinpoint Email Channel with that application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetEmailChannelInput{ @@ -320,7 +323,7 @@ resource "aws_ses_domain_identity" "test" { func testAccCheckEmailChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_email_channel" { diff --git a/internal/service/pinpoint/event_stream.go b/internal/service/pinpoint/event_stream.go index dc42bce35e0..95cb1ca9519 100644 --- a/internal/service/pinpoint/event_stream.go +++ b/internal/service/pinpoint/event_stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -49,7 +52,7 @@ func ResourceEventStream() *schema.Resource { func resourceEventStreamUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -93,7 +96,7 @@ func resourceEventStreamUpsert(ctx context.Context, d *schema.ResourceData, meta func resourceEventStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint Event Stream for application %s", d.Id()) @@ -120,7 +123,7 @@ func resourceEventStreamRead(ctx context.Context, d *schema.ResourceData, meta i func resourceEventStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Pinpoint Delete Event Stream: %s", d.Id()) _, err := conn.DeleteEventStreamWithContext(ctx, &pinpoint.DeleteEventStreamInput{ diff --git a/internal/service/pinpoint/event_stream_test.go b/internal/service/pinpoint/event_stream_test.go index f6b77ff0522..63af5473437 100644 --- a/internal/service/pinpoint/event_stream_test.go +++ b/internal/service/pinpoint/event_stream_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -91,7 +94,7 @@ func testAccCheckEventStreamExists(ctx context.Context, n string, stream *pinpoi return fmt.Errorf("No Pinpoint event stream with that ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetEventStreamInput{ @@ -169,7 +172,7 @@ EOF func testAccCheckEventStreamDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_event_stream" { diff --git a/internal/service/pinpoint/gcm_channel.go b/internal/service/pinpoint/gcm_channel.go index 750ed205ec7..37f64c4dec4 100644 --- a/internal/service/pinpoint/gcm_channel.go +++ b/internal/service/pinpoint/gcm_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -46,7 +49,7 @@ func ResourceGCMChannel() *schema.Resource { func resourceGCMChannelUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -72,7 +75,7 @@ func resourceGCMChannelUpsert(ctx context.Context, d *schema.ResourceData, meta func resourceGCMChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint GCM Channel for application %s", d.Id()) @@ -98,7 +101,7 @@ func resourceGCMChannelRead(ctx context.Context, d *schema.ResourceData, meta in func resourceGCMChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint GCM Channel for application %s", d.Id()) _, err := conn.DeleteGcmChannelWithContext(ctx, &pinpoint.DeleteGcmChannelInput{ diff --git a/internal/service/pinpoint/gcm_channel_test.go b/internal/service/pinpoint/gcm_channel_test.go index e6620cc2e30..09051f4b527 100644 --- a/internal/service/pinpoint/gcm_channel_test.go +++ b/internal/service/pinpoint/gcm_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -73,7 +76,7 @@ func testAccCheckGCMChannelExists(ctx context.Context, n string, channel *pinpoi return fmt.Errorf("No Pinpoint GCM Channel with that application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetGcmChannelInput{ @@ -105,7 +108,7 @@ resource "aws_pinpoint_gcm_channel" "test_gcm_channel" { func testAccCheckGCMChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_gcm_channel" { diff --git a/internal/service/pinpoint/generate.go b/internal/service/pinpoint/generate.go index 6efdc55395e..b52ebd52b1a 100644 --- a/internal/service/pinpoint/generate.go +++ b/internal/service/pinpoint/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOutTagsElem=TagsModel.Tags -ServiceTagsMap "-TagInCustomVal=&pinpoint.TagsModel{Tags: Tags(updatedTags.IgnoreAWS())}" -TagInTagsElem=TagsModel -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package pinpoint diff --git a/internal/service/pinpoint/service_package_gen.go b/internal/service/pinpoint/service_package_gen.go index 4002e1cf32e..a2b2db28301 100644 --- a/internal/service/pinpoint/service_package_gen.go +++ b/internal/service/pinpoint/service_package_gen.go @@ -5,6 +5,10 @@ package pinpoint import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + pinpoint_sdkv1 "github.com/aws/aws-sdk-go/service/pinpoint" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -80,4 +84,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Pinpoint } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*pinpoint_sdkv1.Pinpoint, error) { + sess := config["session"].(*session_sdkv1.Session) + + return pinpoint_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/pinpoint/sms_channel.go b/internal/service/pinpoint/sms_channel.go index 1feb66124e3..ad373824b07 100644 --- a/internal/service/pinpoint/sms_channel.go +++ b/internal/service/pinpoint/sms_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint import ( @@ -57,7 +60,7 @@ func ResourceSMSChannel() *schema.Resource { func resourceSMSChannelUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) applicationId := d.Get("application_id").(string) @@ -90,7 +93,7 @@ func resourceSMSChannelUpsert(ctx context.Context, d *schema.ResourceData, meta func resourceSMSChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[INFO] Reading Pinpoint SMS Channel for application %s", d.Id()) @@ -119,7 +122,7 @@ func resourceSMSChannelRead(ctx context.Context, d *schema.ResourceData, meta in func resourceSMSChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PinpointConn() + conn := meta.(*conns.AWSClient).PinpointConn(ctx) log.Printf("[DEBUG] Deleting Pinpoint SMS Channel for application %s", d.Id()) _, err := conn.DeleteSmsChannelWithContext(ctx, &pinpoint.DeleteSmsChannelInput{ diff --git a/internal/service/pinpoint/sms_channel_test.go b/internal/service/pinpoint/sms_channel_test.go index 88b811d4c48..c378516c38f 100644 --- a/internal/service/pinpoint/sms_channel_test.go +++ b/internal/service/pinpoint/sms_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pinpoint_test import ( @@ -146,7 +149,7 @@ func testAccCheckSMSChannelExists(ctx context.Context, n string, channel *pinpoi return fmt.Errorf("No Pinpoint SMS Channel with that application ID exists") } - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) // Check if the app exists params := &pinpoint.GetSmsChannelInput{ @@ -187,7 +190,7 @@ resource "aws_pinpoint_sms_channel" "test" { func testAccCheckSMSChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).PinpointConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_pinpoint_sms_channel" { diff --git a/internal/service/pinpoint/sweep.go b/internal/service/pinpoint/sweep.go index cb7d54fe5a3..ed8d510d6b0 100644 --- a/internal/service/pinpoint/sweep.go +++ b/internal/service/pinpoint/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/pinpoint" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepApps(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).PinpointConn() + conn := client.PinpointConn(ctx) input := &pinpoint.GetAppsInput{} diff --git a/internal/service/pinpoint/tags_gen.go b/internal/service/pinpoint/tags_gen.go index d7175a976f8..080573539d7 100644 --- a/internal/service/pinpoint/tags_gen.go +++ b/internal/service/pinpoint/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists pinpoint service tags. +// listTags lists pinpoint service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn pinpointiface.PinpointAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn pinpointiface.PinpointAPI, identifier string) (tftags.KeyValueTags, error) { input := &pinpoint.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn pinpointiface.PinpointAPI, identifier st // ListTags lists pinpoint service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).PinpointConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).PinpointConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from pinpoint service tags. +// KeyValueTags creates tftags.KeyValueTags from pinpoint service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns pinpoint service tags from Context. +// getTagsIn returns pinpoint service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets pinpoint service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets pinpoint service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates pinpoint service tags. +// updateTags updates pinpoint service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn pinpointiface.PinpointAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn pinpointiface.PinpointAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn pinpointiface.PinpointAPI, identifier // UpdateTags updates pinpoint service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).PinpointConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).PinpointConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/pipes/enrichment_parameters.go b/internal/service/pipes/enrichment_parameters.go new file mode 100644 index 00000000000..e4f0d34db36 --- /dev/null +++ b/internal/service/pipes/enrichment_parameters.go @@ -0,0 +1,136 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package pipes + +import ( + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/pipes/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/flex" +) + +func enrichmentParametersSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "http_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "header_parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "path_parameter_values": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "query_string_parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "input_template": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 8192), + }, + }, + }, + } +} + +func expandPipeEnrichmentParameters(tfMap map[string]interface{}) *types.PipeEnrichmentParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeEnrichmentParameters{} + + if v, ok := tfMap["http_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.HttpParameters = expandPipeEnrichmentHTTPParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["input_template"].(string); ok && v != "" { + apiObject.InputTemplate = aws.String(v) + } + + return apiObject +} + +func expandPipeEnrichmentHTTPParameters(tfMap map[string]interface{}) *types.PipeEnrichmentHttpParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeEnrichmentHttpParameters{} + + if v, ok := 
tfMap["header_parameters"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.HeaderParameters = flex.ExpandStringValueMap(v) + } + + if v, ok := tfMap["path_parameter_values"].([]interface{}); ok && len(v) > 0 { + apiObject.PathParameterValues = flex.ExpandStringValueList(v) + } + + if v, ok := tfMap["query_string_parameters"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.QueryStringParameters = flex.ExpandStringValueMap(v) + } + + return apiObject +} + +func flattenPipeEnrichmentParameters(apiObject *types.PipeEnrichmentParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.HttpParameters; v != nil { + tfMap["http_parameters"] = []interface{}{flattenPipeEnrichmentHTTPParameters(v)} + } + + if v := apiObject.InputTemplate; v != nil { + tfMap["input_template"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeEnrichmentHTTPParameters(apiObject *types.PipeEnrichmentHttpParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.HeaderParameters; v != nil { + tfMap["header_parameters"] = v + } + + if v := apiObject.PathParameterValues; v != nil { + tfMap["path_parameter_values"] = v + } + + if v := apiObject.QueryStringParameters; v != nil { + tfMap["query_string_parameters"] = v + } + + return tfMap +} diff --git a/internal/service/pipes/exports_test.go b/internal/service/pipes/exports_test.go new file mode 100644 index 00000000000..b27f1e9c5b3 --- /dev/null +++ b/internal/service/pipes/exports_test.go @@ -0,0 +1,11 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package pipes + +// Exports for use in tests only. 
+var ( + FindPipeByName = findPipeByName + + ResourcePipe = resourcePipe +) diff --git a/internal/service/pipes/find.go b/internal/service/pipes/find.go deleted file mode 100644 index 7a3e3a59539..00000000000 --- a/internal/service/pipes/find.go +++ /dev/null @@ -1,37 +0,0 @@ -package pipes - -import ( - "context" - - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/aws/aws-sdk-go-v2/service/pipes/types" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/errs" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -func FindPipeByName(ctx context.Context, conn *pipes.Client, name string) (*pipes.DescribePipeOutput, error) { - input := &pipes.DescribePipeInput{ - Name: aws.String(name), - } - - output, err := conn.DescribePipe(ctx, input) - - if errs.IsA[*types.NotFoundException](err) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil || output.Arn == nil { - return nil, tfresource.NewEmptyResultError(input) - } - - return output, nil -} diff --git a/internal/service/pipes/flex.go b/internal/service/pipes/flex.go deleted file mode 100644 index b3e0de87060..00000000000 --- a/internal/service/pipes/flex.go +++ /dev/null @@ -1,146 +0,0 @@ -package pipes - -import ( - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/aws/aws-sdk-go-v2/service/pipes/types" -) - -func expandFilter(tfMap map[string]interface{}) *types.Filter { - if tfMap == nil { - return nil - } - - output := &types.Filter{} - - if v, ok := tfMap["pattern"].(string); ok && len(v) > 0 { - output.Pattern = aws.String(v) - } - - return output -} - -func flattenFilter(apiObject types.Filter) map[string]interface{} { - m := map[string]interface{}{} - - if v := apiObject.Pattern; v != nil { - m["pattern"] = aws.ToString(v) - } - - return m -} - -func expandFilters(tfList []interface{}) 
[]types.Filter { - if len(tfList) == 0 { - return nil - } - - var s []types.Filter - - for _, v := range tfList { - a := expandFilter(v.(map[string]interface{})) - - if a == nil { - continue - } - - s = append(s, *a) - } - - return s -} - -func flattenFilters(apiObjects []types.Filter) []interface{} { - if len(apiObjects) == 0 { - return nil - } - - var l []interface{} - - for _, apiObject := range apiObjects { - l = append(l, flattenFilter(apiObject)) - } - - return l -} - -func expandFilterCriteria(tfMap map[string]interface{}) *types.FilterCriteria { - if tfMap == nil { - return nil - } - - output := &types.FilterCriteria{} - - if v, ok := tfMap["filter"].([]interface{}); ok && len(v) > 0 { - output.Filters = expandFilters(v) - } - - return output -} - -func flattenFilterCriteria(apiObject *types.FilterCriteria) map[string]interface{} { - if apiObject == nil { - return nil - } - - m := map[string]interface{}{} - - m["filter"] = flattenFilters(apiObject.Filters) - - return m -} - -func expandPipeSourceParameters(tfMap map[string]interface{}) *types.PipeSourceParameters { - if tfMap == nil { - return nil - } - - a := &types.PipeSourceParameters{} - - if v, ok := tfMap["filter_criteria"].([]interface{}); ok { - a.FilterCriteria = expandFilterCriteria(v[0].(map[string]interface{})) - } - - return a -} - -func flattenPipeSourceParameters(apiObject *types.PipeSourceParameters) map[string]interface{} { - if apiObject == nil { - return nil - } - - m := map[string]interface{}{} - - if v := apiObject.FilterCriteria; v != nil { - m["filter_criteria"] = []interface{}{flattenFilterCriteria(v)} - } - - return m -} - -func expandPipeTargetParameters(tfMap map[string]interface{}) *types.PipeTargetParameters { - if tfMap == nil { - return nil - } - - a := &types.PipeTargetParameters{} - - if v, ok := tfMap["input_template"].(string); ok { - a.InputTemplate = aws.String(v) - } - - return a -} - -func flattenPipeTargetParameters(apiObject *types.PipeTargetParameters) 
map[string]interface{} { - if apiObject == nil { - return nil - } - - m := map[string]interface{}{} - - if v := apiObject.InputTemplate; v != nil { - m["input_template"] = aws.ToString(v) - } - - return m -} diff --git a/internal/service/pipes/generate.go b/internal/service/pipes/generate.go index 6f67b28280f..941e7284d7b 100644 --- a/internal/service/pipes/generate.go +++ b/internal/service/pipes/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -UpdateTags -ServiceTagsMap -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package pipes diff --git a/internal/service/pipes/pipe.go b/internal/service/pipes/pipe.go index 05f57f7354c..aa17206f7f9 100644 --- a/internal/service/pipes/pipe.go +++ b/internal/service/pipes/pipe.go @@ -1,16 +1,21 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pipes import ( "context" "errors" "log" + "regexp" "time" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/aws/aws-sdk-go-v2/service/pipes/types" + awstypes "github.com/aws/aws-sdk-go-v2/service/pipes/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -19,13 +24,14 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) // @SDKResource("aws_pipes_pipe", name="Pipe") // @Tags(identifierAttribute="arn") -func ResourcePipe() *schema.Resource { +func resourcePipe() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourcePipeCreate, ReadWithoutTimeout: resourcePipeRead, @@ -57,21 +63,25 @@ func ResourcePipe() *schema.Resource { "desired_state": { Type: schema.TypeString, Optional: true, - Default: string(types.RequestedPipeStateRunning), - ValidateDiagFunc: enum.Validate[types.RequestedPipeState](), + Default: string(awstypes.RequestedPipeStateRunning), + ValidateDiagFunc: enum.Validate[awstypes.RequestedPipeState](), }, "enrichment": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringLenBetween(1, 1600), + ValidateFunc: verify.ValidARN, }, + "enrichment_parameters": enrichmentParametersSchema(), "name": { Type: schema.TypeString, Optional: true, Computed: true, ForceNew: true, ConflictsWith: 
[]string{"name_prefix"}, - ValidateFunc: validation.StringLenBetween(1, 64), + ValidateFunc: validation.All( + validation.StringLenBetween(1, 64), + validation.StringMatch(regexp.MustCompile(`^[\.\-_A-Za-z0-9]+`), ""), + ), }, "name_prefix": { Type: schema.TypeString, @@ -79,7 +89,10 @@ func ResourcePipe() *schema.Resource { Computed: true, ForceNew: true, ConflictsWith: []string{"name"}, - ValidateFunc: validation.StringLenBetween(1, 64-id.UniqueIDSuffixLength), + ValidateFunc: validation.All( + validation.StringLenBetween(1, 64-id.UniqueIDSuffixLength), + validation.StringMatch(regexp.MustCompile(`^[\.\-_A-Za-z0-9]+`), ""), + ), }, "role_arn": { Type: schema.TypeString, @@ -87,65 +100,23 @@ func ResourcePipe() *schema.Resource { ValidateFunc: verify.ValidARN, }, "source": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringLenBetween(1, 1600), - }, - "source_parameters": { - Type: schema.TypeList, + Type: schema.TypeString, Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "filter_criteria": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - DiffSuppressFunc: suppressEmptyConfigurationBlock("source_parameters.0.filter_criteria"), - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "filter": { - Type: schema.TypeList, - Optional: true, - MaxItems: 5, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "pattern": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 4096), - }, - }, - }, - }, - }, - }, - }, - }, - }, + ForceNew: true, + ValidateFunc: validation.Any( + verify.ValidARN, + validation.StringMatch(regexp.MustCompile(`^smk://(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9]):[0-9]{1,5}|arn:(aws[a-zA-Z0-9-]*):([a-zA-Z0-9\-]+):([a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\d{1})?:(\d{12})?:(.+)$`), ""), + ), }, - names.AttrTags: 
tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), + "source_parameters": sourceParametersSchema(), "target": { Type: schema.TypeString, Required: true, - ValidateFunc: validation.StringLenBetween(1, 1600), - }, - "target_parameters": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "input_template": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(0, 8192), - }, - }, - }, + ValidateFunc: verify.ValidARN, }, + "target_parameters": targetParametersSchema(), + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), }, } } @@ -155,43 +126,44 @@ const ( ) func resourcePipeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).PipesClient() + conn := meta.(*conns.AWSClient).PipesClient(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &pipes.CreatePipeInput{ - DesiredState: types.RequestedPipeState(d.Get("desired_state").(string)), + DesiredState: awstypes.RequestedPipeState(d.Get("desired_state").(string)), Name: aws.String(name), RoleArn: aws.String(d.Get("role_arn").(string)), Source: aws.String(d.Get("source").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Target: aws.String(d.Get("target").(string)), } - if v, ok := d.Get("description").(string); ok { - input.Description = aws.String(v) + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) } - if v, ok := d.Get("enrichment").(string); ok && v != "" { - input.Enrichment = aws.String(v) + if v, ok := d.GetOk("enrichment"); ok && v != "" { + input.Enrichment = aws.String(v.(string)) } - if v, ok := d.Get("source_parameters").([]interface{}); ok && len(v) > 0 && v[0] != nil { - input.SourceParameters = expandPipeSourceParameters(v[0].(map[string]interface{})) + if v, ok := 
d.GetOk("enrichment_parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.EnrichmentParameters = expandPipeEnrichmentParameters(v.([]interface{})[0].(map[string]interface{})) } - if v, ok := d.Get("target_parameters").([]interface{}); ok && len(v) > 0 && v[0] != nil { - input.TargetParameters = expandPipeTargetParameters(v[0].(map[string]interface{})) + if v, ok := d.GetOk("source_parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.SourceParameters = expandPipeSourceParameters(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("target_parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.TargetParameters = expandPipeTargetParameters(v.([]interface{})[0].(map[string]interface{})) } output, err := conn.CreatePipe(ctx, input) + if err != nil { return create.DiagError(names.Pipes, create.ErrActionCreating, ResNamePipe, name, err) } - if output == nil || output.Arn == nil { - return create.DiagError(names.Pipes, create.ErrActionCreating, ResNamePipe, name, errors.New("empty output")) - } - d.SetId(aws.ToString(output.Name)) if _, err := waitPipeCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { @@ -202,9 +174,9 @@ func resourcePipeCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourcePipeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).PipesClient() + conn := meta.(*conns.AWSClient).PipesClient(ctx) - output, err := FindPipeByName(ctx, conn, d.Id()) + output, err := findPipeByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] EventBridge Pipes Pipe (%s) not found, removing from state", d.Id()) @@ -220,73 +192,68 @@ func resourcePipeRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("description", output.Description) d.Set("desired_state", output.DesiredState) 
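The create path above repeats one guard before every expand call: `ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil`. That is the standard shape of a `MaxItems: 1` schema block in the plugin SDK — a one-element `[]interface{}` whose element is a `map[string]interface{}`, or `nil` when the block is declared empty. A stdlib-only sketch of that guard, using a hypothetical `firstBlock` helper (the provider inlines these checks rather than factoring them out):

```go
package main

import "fmt"

// firstBlock mirrors the guard used before each expand call in this diff:
// a MaxItems:1 block arrives as a []interface{} whose single element is a
// map[string]interface{}, or nil when the block is present but empty.
// Illustrative only; not provider code.
func firstBlock(v interface{}) (map[string]interface{}, bool) {
	l, ok := v.([]interface{})
	if !ok || len(l) == 0 || l[0] == nil {
		return nil, false
	}
	m, ok := l[0].(map[string]interface{})
	return m, ok
}

func main() {
	cfg := []interface{}{map[string]interface{}{"input_template": "$.first"}}
	if m, ok := firstBlock(cfg); ok {
		fmt.Println(m["input_template"]) // prints "$.first"
	}
	if _, ok := firstBlock([]interface{}{nil}); ok {
		fmt.Println("unexpected: nil element accepted")
	}
}
```

The `l[0] != nil` check matters because an empty nested block in configuration still produces a one-element list whose element is nil, which would otherwise panic on the map type assertion.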
d.Set("enrichment", output.Enrichment) + if v := output.EnrichmentParameters; !types.IsZero(v) { + if err := d.Set("enrichment_parameters", []interface{}{flattenPipeEnrichmentParameters(v)}); err != nil { + return diag.Errorf("setting enrichment_parameters: %s", err) + } + } else { + d.Set("enrichment_parameters", nil) + } d.Set("name", output.Name) d.Set("name_prefix", create.NamePrefixFromName(aws.ToString(output.Name))) - - if v := output.SourceParameters; v != nil { + d.Set("role_arn", output.RoleArn) + d.Set("source", output.Source) + if v := output.SourceParameters; !types.IsZero(v) { if err := d.Set("source_parameters", []interface{}{flattenPipeSourceParameters(v)}); err != nil { - return create.DiagError(names.Pipes, create.ErrActionSetting, ResNamePipe, d.Id(), err) + return diag.Errorf("setting source_parameters: %s", err) } + } else { + d.Set("source_parameters", nil) } - - d.Set("role_arn", output.RoleArn) - d.Set("source", output.Source) d.Set("target", output.Target) - - if v := output.TargetParameters; v != nil { + if v := output.TargetParameters; !types.IsZero(v) { if err := d.Set("target_parameters", []interface{}{flattenPipeTargetParameters(v)}); err != nil { - return create.DiagError(names.Pipes, create.ErrActionSetting, ResNamePipe, d.Id(), err) + return diag.Errorf("setting target_parameters: %s", err) } + } else { + d.Set("target_parameters", nil) } return nil } func resourcePipeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).PipesClient() + conn := meta.(*conns.AWSClient).PipesClient(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &pipes.UpdatePipeInput{ Description: aws.String(d.Get("description").(string)), - DesiredState: types.RequestedPipeState(d.Get("desired_state").(string)), + DesiredState: awstypes.RequestedPipeState(d.Get("desired_state").(string)), Name: aws.String(d.Id()), RoleArn: aws.String(d.Get("role_arn").(string)), Target: 
aws.String(d.Get("target").(string)), - - // Omitting the SourceParameters entirely is interpreted as "no change". - SourceParameters: &types.UpdatePipeSourceParameters{}, - TargetParameters: &types.PipeTargetParameters{}, + // Reset state in case it's a deletion; the input must be set to an empty string, otherwise it doesn't get overwritten. + TargetParameters: &awstypes.PipeTargetParameters{ + InputTemplate: aws.String(""), + }, } if d.HasChange("enrichment") { - // Reset state in case it's a deletion. - input.Enrichment = aws.String("") + input.Enrichment = aws.String(d.Get("enrichment").(string)) } - if v, ok := d.Get("enrichment").(string); ok && v != "" { - input.Enrichment = aws.String(v) + if v, ok := d.GetOk("enrichment_parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.EnrichmentParameters = expandPipeEnrichmentParameters(v.([]interface{})[0].(map[string]interface{})) } - if d.HasChange("source_parameters.0.filter_criteria") { - // To unset a parameter, it must be set to an empty object. Nulling a - // parameter will be interpreted as "no change".
- input.SourceParameters.FilterCriteria = &types.FilterCriteria{} + if v, ok := d.GetOk("source_parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.SourceParameters = expandUpdatePipeSourceParameters(v.([]interface{})[0].(map[string]interface{})) } - if v, ok := d.Get("source_parameters.0.filter_criteria").([]interface{}); ok && len(v) > 0 && v[0] != nil { - input.SourceParameters.FilterCriteria = expandFilterCriteria(v[0].(map[string]interface{})) + if v, ok := d.GetOk("target_parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.TargetParameters = expandPipeTargetParameters(v.([]interface{})[0].(map[string]interface{})) } - if d.HasChange("target_parameters.0.input_template") { - input.TargetParameters.InputTemplate = aws.String("") - } - - if v, ok := d.Get("target_parameters.0.input_template").(string); ok { - input.TargetParameters.InputTemplate = aws.String(v) - } - - log.Printf("[DEBUG] Updating EventBridge Pipes Pipe (%s): %#v", d.Id(), input) - output, err := conn.UpdatePipe(ctx, input) if err != nil { @@ -302,14 +269,14 @@ func resourcePipeUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourcePipeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).PipesClient() + conn := meta.(*conns.AWSClient).PipesClient(ctx) log.Printf("[INFO] Deleting EventBridge Pipes Pipe: %s", d.Id()) _, err := conn.DeletePipe(ctx, &pipes.DeletePipeInput{ Name: aws.String(d.Id()), }) - if errs.IsA[*types.NotFoundException](err) { + if errs.IsA[*awstypes.NotFoundException](err) { return nil } @@ -324,6 +291,105 @@ func resourcePipeDelete(ctx context.Context, d *schema.ResourceData, meta interf return nil } +func findPipeByName(ctx context.Context, conn *pipes.Client, name string) (*pipes.DescribePipeOutput, error) { + input := &pipes.DescribePipeInput{ + Name: aws.String(name), + } + + output, err := 
conn.DescribePipe(ctx, input) + + if errs.IsA[*awstypes.NotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Arn == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + +func statusPipe(ctx context.Context, conn *pipes.Client, name string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := findPipeByName(ctx, conn, name) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, string(output.CurrentState), nil + } +} + +func waitPipeCreated(ctx context.Context, conn *pipes.Client, id string, timeout time.Duration) (*pipes.DescribePipeOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.PipeStateCreating), + Target: enum.Slice(awstypes.PipeStateRunning, awstypes.PipeStateStopped), + Refresh: statusPipe(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 1, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if output, ok := outputRaw.(*pipes.DescribePipeOutput); ok { + tfresource.SetLastError(err, errors.New(aws.ToString(output.StateReason))) + + return output, err + } + + return nil, err +} + +func waitPipeUpdated(ctx context.Context, conn *pipes.Client, id string, timeout time.Duration) (*pipes.DescribePipeOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.PipeStateUpdating), + Target: enum.Slice(awstypes.PipeStateRunning, awstypes.PipeStateStopped), + Refresh: statusPipe(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 1, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if output, ok := outputRaw.(*pipes.DescribePipeOutput); ok { + tfresource.SetLastError(err, errors.New(aws.ToString(output.StateReason))) + + return output, err + } 
+ + return nil, err +} + +func waitPipeDeleted(ctx context.Context, conn *pipes.Client, id string, timeout time.Duration) (*pipes.DescribePipeOutput, error) { + stateConf := &retry.StateChangeConf{ + Pending: enum.Slice(awstypes.PipeStateDeleting), + Target: []string{}, + Refresh: statusPipe(ctx, conn, id), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if output, ok := outputRaw.(*pipes.DescribePipeOutput); ok { + tfresource.SetLastError(err, errors.New(aws.ToString(output.StateReason))) + + return output, err + } + + return nil, err +} + func suppressEmptyConfigurationBlock(key string) schema.SchemaDiffSuppressFunc { return func(k, o, n string, d *schema.ResourceData) bool { if k != key+".#" { diff --git a/internal/service/pipes/pipe_test.go b/internal/service/pipes/pipe_test.go index d8ac9625fd3..7bb6de3822d 100644 --- a/internal/service/pipes/pipe_test.go +++ b/internal/service/pipes/pipe_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pipes_test import ( @@ -20,13 +23,8 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -func TestAccPipesPipe_basic(t *testing.T) { +func TestAccPipesPipe_basicSQS(t *testing.T) { ctx := acctest.Context(t) - - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" @@ -42,20 +40,31 @@ func TestAccPipesPipe_basic(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_basic(rName), - Check: resource.ComposeTestCheckFunc( + Config: testAccPipeConfig_basicSQS(rName), + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), 
resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), resource.TestCheckResourceAttrPair(resourceName, "source", "aws_sqs_queue.source", "arn"), resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.0.batch_size", "10"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.0.maximum_batching_window_in_seconds", "0"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), resource.TestCheckResourceAttrPair(resourceName, "target", "aws_sqs_queue.target", "arn"), - resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", 
"0"), ), }, { @@ -69,10 +78,6 @@ func TestAccPipesPipe_basic(t *testing.T) { func TestAccPipesPipe_disappears(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" @@ -88,7 +93,7 @@ func TestAccPipesPipe_disappears(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_basic(rName), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfpipes.ResourcePipe(), resourceName), @@ -101,12 +106,8 @@ func TestAccPipesPipe_disappears(t *testing.T) { func TestAccPipesPipe_description(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -120,7 +121,7 @@ func TestAccPipesPipe_description(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_description(name, "Description 1"), + Config: testAccPipeConfig_description(rName, "Description 1"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "description", "Description 1"), @@ -132,53 +133,34 @@ func TestAccPipesPipe_description(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_description(name, "Description 2"), + Config: testAccPipeConfig_description(rName, "Description 2"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, 
&pipe), resource.TestCheckResourceAttr(resourceName, "description", "Description 2"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_description(name, ""), + Config: testAccPipeConfig_description(rName, ""), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "description", ""), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } func TestAccPipesPipe_desiredState(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -192,7 +174,7 @@ func TestAccPipesPipe_desiredState(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_desiredState(name, "STOPPED"), + Config: testAccPipeConfig_desiredState(rName, "STOPPED"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "desired_state", "STOPPED"), @@ -204,53 +186,34 @@ func TestAccPipesPipe_desiredState(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_desiredState(name, "RUNNING"), + Config: testAccPipeConfig_desiredState(rName, "RUNNING"), Check: resource.ComposeTestCheckFunc( 
testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_desiredState(name, "STOPPED"), + Config: testAccPipeConfig_desiredState(rName, "STOPPED"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "desired_state", "STOPPED"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } func TestAccPipesPipe_enrichment(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -264,7 +227,7 @@ func TestAccPipesPipe_enrichment(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_enrichment(name, 0), + Config: testAccPipeConfig_enrichment(rName, 0), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttrPair(resourceName, "enrichment", "aws_cloudwatch_event_api_destination.test.0", "arn"), @@ -276,41 +239,82 @@ func TestAccPipesPipe_enrichment(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_enrichment(name, 1), + Config: testAccPipeConfig_enrichment(rName, 1), Check: 
resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttrPair(resourceName, "enrichment", "aws_cloudwatch_event_api_destination.test.1", "arn"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "enrichment", ""), ), }, + }, + }) +} + +func TestAccPipesPipe_enrichmentParameters(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_enrichmentParameters(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + resource.TestCheckResourceAttrPair(resourceName, "enrichment", "aws_cloudwatch_event_api_destination.test", "arn"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.header_parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.header_parameters.X-Test-1", "Val1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.path_parameter_values.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.path_parameter_values.0", "p1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.query_string_parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.query_string_parameters.q1", "abc"), + ), + }, { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, }, + { + Config: testAccPipeConfig_enrichmentParametersUpdated(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + resource.TestCheckResourceAttrPair(resourceName, "enrichment", "aws_cloudwatch_event_api_destination.test", "arn"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.header_parameters.%", "2"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.header_parameters.X-Test-1", "Val1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.header_parameters.X-Test-2", "Val2"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.path_parameter_values.#", "1"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.path_parameter_values.0", "p2"), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.0.http_parameters.0.query_string_parameters.%", "0"), + ), + }, }, }) } func TestAccPipesPipe_sourceParameters_filterCriteria(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -324,9 +328,11 @@ func TestAccPipesPipe_sourceParameters_filterCriteria(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_sourceParameters_filterCriteria1(name, "test1"), + Config: testAccPipeConfig_sourceParameters_filterCriteria1(rName, "test1"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.0.pattern", `{"source":["test1"]}`), ), @@ -337,37 +343,32 @@ func TestAccPipesPipe_sourceParameters_filterCriteria(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_sourceParameters_filterCriteria2(name, "test1", "test2"), + Config: testAccPipeConfig_sourceParameters_filterCriteria2(rName, "test1", "test2"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.#", "2"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.0.pattern", `{"source":["test1"]}`), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.1.pattern", `{"source":["test2"]}`), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: 
testAccPipeConfig_sourceParameters_filterCriteria1(name, "test2"), + Config: testAccPipeConfig_sourceParameters_filterCriteria1(rName, "test2"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.0.pattern", `{"source":["test2"]}`), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_sourceParameters_filterCriteria0(name), + Config: testAccPipeConfig_sourceParameters_filterCriteria0(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), - resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), ), }, { @@ -376,41 +377,31 @@ func TestAccPipesPipe_sourceParameters_filterCriteria(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_sourceParameters_filterCriteria1(name, "test2"), + Config: testAccPipeConfig_sourceParameters_filterCriteria1(rName, "test2"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.0.filter.0.pattern", `{"source":["test2"]}`), ), 
}, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), - resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "1"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } func TestAccPipesPipe_nameGenerated(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -424,7 +415,7 @@ func TestAccPipesPipe_nameGenerated(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_nameGenerated(), + Config: testAccPipeConfig_nameGenerated(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), acctest.CheckResourceAttrNameGenerated(resourceName, "name"), @@ -442,11 +433,8 @@ func TestAccPipesPipe_nameGenerated(t *testing.T) { func TestAccPipesPipe_namePrefix(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -460,7 +448,7 @@ func TestAccPipesPipe_namePrefix(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_namePrefix("tf-acc-test-prefix-"), + Config: 
testAccPipeConfig_namePrefix(rName, "tf-acc-test-prefix-"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), acctest.CheckResourceAttrNameFromPrefix(resourceName, "name", "tf-acc-test-prefix-"), @@ -478,12 +466,8 @@ func TestAccPipesPipe_namePrefix(t *testing.T) { func TestAccPipesPipe_roleARN(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -497,7 +481,7 @@ func TestAccPipesPipe_roleARN(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), @@ -509,29 +493,20 @@ func TestAccPipesPipe_roleARN(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_roleARN(name), + Config: testAccPipeConfig_roleARN(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test2", "arn"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } func TestAccPipesPipe_tags(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -545,7 +520,7 @@ func 
TestAccPipesPipe_tags(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_tags1(name, "key1", "value1"), + Config: testAccPipeConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), @@ -558,7 +533,7 @@ func TestAccPipesPipe_tags(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_tags2(name, "key1", "value1updated", "key2", "value2"), + Config: testAccPipeConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), @@ -567,35 +542,21 @@ func TestAccPipesPipe_tags(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_tags1(name, "key2", "value2"), + Config: testAccPipeConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } -func TestAccPipesPipe_target(t *testing.T) { +func TestAccPipesPipe_targetUpdate(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -609,7 +570,7 @@ func TestAccPipesPipe_target(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: 
testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttrPair(resourceName, "target", "aws_sqs_queue.target", "arn"), @@ -621,29 +582,20 @@ func TestAccPipesPipe_target(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_target(name), + Config: testAccPipeConfig_targetUpdated(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttrPair(resourceName, "target", "aws_sqs_queue.target2", "arn"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, }, }) } func TestAccPipesPipe_targetParameters_inputTemplate(t *testing.T) { ctx := acctest.Context(t) - if testing.Short() { - t.Skip("skipping long-running test in short mode") - } - var pipe pipes.DescribePipeOutput - name := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_pipes_pipe.test" resource.ParallelTest(t, resource.TestCase{ @@ -657,7 +609,7 @@ func TestAccPipesPipe_targetParameters_inputTemplate(t *testing.T) { CheckDestroy: testAccCheckPipeDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPipeConfig_targetParameters_inputTemplate(name, "$.first"), + Config: testAccPipeConfig_targetParameters_inputTemplate(rName, "$.first"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", "$.first"), @@ -669,24 +621,87 @@ func TestAccPipesPipe_targetParameters_inputTemplate(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccPipeConfig_targetParameters_inputTemplate(name, "$.second"), + Config: testAccPipeConfig_targetParameters_inputTemplate(rName, "$.second"), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, 
resourceName, &pipe), resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", "$.second"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccPipeConfig_basic(name), + Config: testAccPipeConfig_basicSQS(rName), Check: resource.ComposeTestCheckFunc( testAccCheckPipeExists(ctx, resourceName, &pipe), resource.TestCheckNoResourceAttr(resourceName, "target_parameters.0.input_template"), ), }, + }, + }) +} + +func TestAccPipesPipe_kinesisSourceAndTarget(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicKinesis(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_kinesis_stream.source", "arn"), + resource.TestCheckResourceAttr(resourceName, 
"source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.batch_size", "100"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.dead_letter_config.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.maximum_batching_window_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.maximum_record_age_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.maximum_retry_attempts", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.on_partial_batch_item_failure", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.parallelization_factor", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.starting_position", "LATEST"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.kinesis_stream_parameters.0.starting_position_timestamp", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, 
"source_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_kinesis_stream.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.0.partition_key", "test"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, { ResourceName: resourceName, ImportState: true, @@ -696,542 +711,2330 @@ func TestAccPipesPipe_targetParameters_inputTemplate(t *testing.T) { }) } -func testAccCheckPipeDestroy(ctx context.Context) resource.TestCheckFunc { - return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).PipesClient() - - for _, rs 
:= range s.RootModule().Resources { - if rs.Type != "aws_pipes_pipe" { - continue - } - - _, err := tfpipes.FindPipeByName(ctx, conn, rs.Primary.ID) - - if tfresource.NotFound(err) { - continue - } - - if err != nil { - return err - } - - return create.Error(names.Pipes, create.ErrActionCheckingDestroyed, tfpipes.ResNamePipe, rs.Primary.ID, errors.New("not destroyed")) - } +func TestAccPipesPipe_dynamoDBSourceCloudWatchLogsTarget(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" - return nil - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicDynamoDBSourceCloudWatchLogsTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_dynamodb_table.source", "stream_arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.batch_size", "100"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.dead_letter_config.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.maximum_batching_window_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.maximum_record_age_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.maximum_retry_attempts", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.on_partial_batch_item_failure", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.parallelization_factor", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.0.starting_position", "LATEST"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_cloudwatch_log_group.target", "arn"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "target_parameters.0.cloudwatch_logs_parameters.0.log_stream_name", "aws_cloudwatch_log_stream.target", "name"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.0.timestamp", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) } -func testAccCheckPipeExists(ctx context.Context, name string, pipe *pipes.DescribePipeOutput) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] - if !ok { - return create.Error(names.Pipes, create.ErrActionCheckingExistence, tfpipes.ResNamePipe, name, errors.New("not found")) - } +func 
TestAccPipesPipe_activeMQSourceStepFunctionTarget(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" - if rs.Primary.ID == "" { - return create.Error(names.Pipes, create.ErrActionCheckingExistence, tfpipes.ResNamePipe, name, errors.New("not set")) + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicActiveMQSourceStepFunctionTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_mq_broker.source", "arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.0.batch_size", "100"), + resource.TestCheckResourceAttr(resourceName, 
"source_parameters.0.activemq_broker_parameters.0.credentials.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "source_parameters.0.activemq_broker_parameters.0.credentials.0.basic_auth"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.0.maximum_batching_window_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.0.queue_name", "test"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_sfn_state_machine.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.0.invocation_type", "REQUEST_RESPONSE"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_rabbitMQSourceEventBusTarget(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicRabbitMQSourceEventBusTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), 
+ resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_mq_broker.source", "arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.0.batch_size", "10"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.0.credentials.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "source_parameters.0.rabbitmq_broker_parameters.0.credentials.0.basic_auth"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.0.maximum_batching_window_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.0.queue_name", "test"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.0.virtual_host", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", 
"aws_cloudwatch_event_bus.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_mskSourceHTTPTarget(t *testing.T) { + acctest.Skip(t, "DependencyViolation errors deleting subnets and security group") + + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicMSKSourceHTTPTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_msk_cluster.source", "arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + 
resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.0.batch_size", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.0.consumer_group_id", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.0.credentials.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.0.maximum_batching_window_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.0.starting_position", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.0.topic_name", "test"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrSet(resourceName, "target"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.0.header_parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.0.header_parameters.X-Test", "test"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.0.path_parameter_values.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.0.path_parameter_values.0", "p1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.0.query_string_parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.0.query_string_parameters.testing", "yes"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_selfManagedKafkaSourceLambdaFunctionTarget(t *testing.T) { + acctest.Skip(t, "DependencyViolation errors deleting subnets and security group") + + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicSelfManagedKafkaSourceLambdaFunctionTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttr(resourceName, "source", "smk://test1:9092,test2:9092"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, 
"source_parameters.0.self_managed_kafka_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.additional_bootstrap_servers.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.additional_bootstrap_servers.*", "testing:1234"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.batch_size", "100"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.consumer_group_id", "self-managed-test-group-id"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.credentials.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.maximum_batching_window_in_seconds", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.server_root_ca_certificate", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.starting_position", ""), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.topic_name", "test"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.vpc.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.vpc.0.security_groups.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.0.vpc.0.subnets.#", "2"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_lambda_function.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.0.invocation_type", "REQUEST_RESPONSE"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_sqsSourceRedshiftTarget(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicSQSSourceRedshiftTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_sqs_queue.source", "arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.0.batch_size", "1"), + resource.TestCheckResourceAttr(resourceName, 
"source_parameters.0.sqs_queue_parameters.0.maximum_batching_window_in_seconds", "90"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_redshift_cluster.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.0.database", "db1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.0.db_user", "user1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.0.secret_manager_arn", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.0.sqls.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.0.statement_name", "SelectAll"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.0.with_event", "false"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_SourceSageMakerTarget(t *testing.T) { + acctest.Skip(t, "aws_sagemaker_pipeline resource not yet implemented") + + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicSQSSourceSageMakerTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_sqs_queue.source", "arn"), + resource.TestCheckResourceAttr(resourceName, 
"source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_sagemaker_pipeline.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.sagemaker_pipeline_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.0.pipeline_parameter.#", "2"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.0.pipeline_parameter.0.name", "p1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.0.pipeline_parameter.0.value", "v1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.0.pipeline_parameter.1.name", "p2"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.0.pipeline_parameter.1.value", "v2"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_sqsSourceBatchJobTarget(t *testing.T) { + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicSQSSourceBatchJobTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + 
resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_sqs_queue.source", "arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_batch_job_queue.target", "arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.array_properties.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.array_properties.0.size", "512"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.batch_job_parameters.0.container_overrides.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.command.#", "3"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.command.0", "rm"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.command.1", "-fr"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.command.2", "/"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.environment.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.environment.0.name", "TMP"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.environment.0.value", "/tmp2"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.instance_type", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.resource_requirement.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.resource_requirement.0.type", "GPU"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.container_overrides.0.resource_requirement.0.value", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.depends_on.#", "0"), + resource.TestCheckResourceAttrSet(resourceName, "target_parameters.0.batch_job_parameters.0.job_definition"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.job_name", "testing"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.batch_job_parameters.0.parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.parameters.Key1", "Value1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.0.retry_strategy.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccPipesPipe_sqsSourceECSTaskTarget(t *testing.T) { + acctest.Skip(t, "ValidationException: [numeric instance is lower than the required minimum (minimum: 1, found: 0)]") + + ctx := acctest.Context(t) + var pipe pipes.DescribePipeOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_pipes_pipe.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + 
acctest.PreCheckPartitionHasService(t, names.PipesEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.PipesEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckPipeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccPipeConfig_basicSQSSourceECSTaskTarget(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckPipeExists(ctx, resourceName, &pipe), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "pipes", regexp.MustCompile(regexp.QuoteMeta(`pipe/`+rName))), + resource.TestCheckResourceAttr(resourceName, "description", "Managed by Terraform"), + resource.TestCheckResourceAttr(resourceName, "desired_state", "RUNNING"), + resource.TestCheckResourceAttr(resourceName, "enrichment", ""), + resource.TestCheckResourceAttr(resourceName, "enrichment_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrPair(resourceName, "role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "source", "aws_sqs_queue.source", "arn"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.activemq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.dynamodb_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.filter_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.managed_streaming_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.rabbitmq_broker_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.self_managed_kafka_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "source_parameters.0.sqs_queue_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "target", "aws_ecs_cluster.target", "id"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.batch_job_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.cloudwatch_logs_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.capacity_provider_strategy.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.enable_ecs_managed_tags", "true"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.enable_execute_command", "false"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.group", "g1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.launch_type", "FARGATE"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.network_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.network_configuration.0.aws_vpc_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.network_configuration.0.aws_vpc_configuration.0.assign_public_ip", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.network_configuration.0.aws_vpc_configuration.0.security_groups.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.network_configuration.0.aws_vpc_configuration.0.subnets.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.command.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.cpu", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.environment.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.environment.0.name", "TMP"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.environment.0.value", "/tmp2"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.environment_file.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.memory", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.memory_reservation", "1024"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.name", "first"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.resource_requirement.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.resource_requirement.0.type", "GPU"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.container_override.0.resource_requirement.0.value", "2"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.cpu", ""), + 
resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.ephemeral_storage.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.ephemeral_storage.0.size_in_gib", "32"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.execution_role_arn", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.inference_accelerator_override.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.memory", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.overrides.0.task_role_arn", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.placement_constraint.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.placement_strategy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.placement_strategy.0.field", "cpu"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.placement_strategy.0.type", "binpack"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.platform_version", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.propagate_tags", "TASK_DEFINITION"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.reference_id", "refid"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.tags.Name", rName), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.ecs_task_parameters.0.task_count", "1"), + 
resource.TestCheckResourceAttrSet(resourceName, "target_parameters.0.ecs_task_parameters.0.task_definition_arn"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.eventbridge_event_bus_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.http_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.input_template", ""), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.kinesis_stream_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.lambda_function_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.redshift_data_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sagemaker_pipeline_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.sqs_queue_parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "target_parameters.0.step_function_state_machine_parameters.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckPipeDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).PipesClient(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_pipes_pipe" { + continue + } + + _, err := tfpipes.FindPipeByName(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return create.Error(names.Pipes, create.ErrActionCheckingDestroyed, tfpipes.ResNamePipe, rs.Primary.ID, errors.New("not destroyed")) + } + + return nil + } +} + +func testAccCheckPipeExists(ctx context.Context, name string, pipe *pipes.DescribePipeOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return 
create.Error(names.Pipes, create.ErrActionCheckingExistence, tfpipes.ResNamePipe, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.Pipes, create.ErrActionCheckingExistence, tfpipes.ResNamePipe, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).PipesClient(ctx) + + output, err := tfpipes.FindPipeByName(ctx, conn, rs.Primary.ID) + + if err != nil { + return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).PipesClient() + *pipe = *output + + return nil + } +} + +func testAccPreCheck(ctx context.Context, t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).PipesClient(ctx) + + input := &pipes.ListPipesInput{} + _, err := conn.ListPipes(ctx, input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testAccPipeConfig_base(rName string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "main" {} +data "aws_partition" "main" {} + +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = { + Effect = "Allow" + Action = "sts:AssumeRole" + Principal = { + Service = "pipes.${data.aws_partition.main.dns_suffix}" + } + Condition = { + StringEquals = { + "aws:SourceAccount" = data.aws_caller_identity.main.account_id + } + } + } + }) +} +`, rName) +} + +func testAccPipeConfig_baseSQSSource(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role_policy" "source" { + role = aws_iam_role.test.id + name = "%[1]s-source" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "sqs:DeleteMessage", + "sqs:GetQueueAttributes", + "sqs:ReceiveMessage", + ], + Resource = [ + aws_sqs_queue.source.arn, + ] + }, + ] + }) +} + +resource "aws_sqs_queue" "source" { + name = "%[1]s-source" +} +`, rName) +} + +func 
testAccPipeConfig_baseSQSTarget(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role_policy" "target" { + role = aws_iam_role.test.id + name = "%[1]s-target" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "sqs:SendMessage", + ], + Resource = [ + aws_sqs_queue.target.arn, + ] + }, + ] + }) +} + +resource "aws_sqs_queue" "target" { + name = "%[1]s-target" +} +`, rName) +} + +func testAccPipeConfig_baseKinesisSource(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role_policy" "source" { + role = aws_iam_role.test.id + name = "%[1]s-source" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "kinesis:DescribeStream", + "kinesis:DescribeStreamSummary", + "kinesis:GetRecords", + "kinesis:GetShardIterator", + "kinesis:ListShards", + "kinesis:ListStreams", + "kinesis:SubscribeToShard", + ], + Resource = [ + aws_kinesis_stream.source.arn, + ] + }, + ] + }) +} + +resource "aws_kinesis_stream" "source" { + name = "%[1]s-source" + + stream_mode_details { + stream_mode = "ON_DEMAND" + } +} +`, rName) +} + +func testAccPipeConfig_baseKinesisTarget(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role_policy" "target" { + role = aws_iam_role.test.id + name = "%[1]s-target" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "kinesis:PutRecord", + ], + Resource = [ + aws_kinesis_stream.target.arn, + ] + }, + ] + }) +} + +resource "aws_kinesis_stream" "target" { + name = "%[1]s-target" + + stream_mode_details { + stream_mode = "ON_DEMAND" + } +} +`, rName) +} + +func testAccPipeConfig_basicSQS(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, 
aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn +} +`, rName)) +} + +func testAccPipeConfig_description(rName, description string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + description = %[2]q +} +`, rName, description)) +} + +func testAccPipeConfig_desiredState(rName, state string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + desired_state = %[2]q +} +`, rName, state)) +} + +func testAccPipeConfig_enrichment(rName string, i int) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_cloudwatch_event_connection" "test" { + name = %[1]q + authorization_type = "API_KEY" + + auth_parameters { + api_key { + key = "testKey" + value = "testValue" + } + } +} + +resource "aws_cloudwatch_event_api_destination" "test" { + count = 2 + name = "%[1]s-${count.index}" + invocation_endpoint = "https://example.com/${count.index}" + http_method = "POST" + connection_arn = aws_cloudwatch_event_connection.test.arn +} + +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + 
name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + enrichment = aws_cloudwatch_event_api_destination.test[%[2]d].arn +} +`, rName, i)) +} + +func testAccPipeConfig_enrichmentParameters(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_cloudwatch_event_connection" "test" { + name = %[1]q + authorization_type = "API_KEY" + + auth_parameters { + api_key { + key = "testKey" + value = "testValue" + } + } +} + +resource "aws_cloudwatch_event_api_destination" "test" { + name = %[1]q + invocation_endpoint = "https://example.com/" + http_method = "POST" + connection_arn = aws_cloudwatch_event_connection.test.arn +} + +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + enrichment = aws_cloudwatch_event_api_destination.test.arn + + enrichment_parameters { + http_parameters { + header_parameters = { + "X-Test-1" = "Val1" + } + + path_parameter_values = ["p1"] + + query_string_parameters = { + "q1" = "abc" + } + } + } +} +`, rName)) +} + +func testAccPipeConfig_enrichmentParametersUpdated(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_cloudwatch_event_connection" "test" { + name = %[1]q + authorization_type = "API_KEY" + + auth_parameters { + api_key { + key = "testKey" + value = "testValue" + } + } +} + +resource "aws_cloudwatch_event_api_destination" "test" { + name = %[1]q + invocation_endpoint = "https://example.com/" + http_method = "POST" + connection_arn = aws_cloudwatch_event_connection.test.arn +} + +resource 
"aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + enrichment = aws_cloudwatch_event_api_destination.test.arn + + enrichment_parameters { + http_parameters { + header_parameters = { + "X-Test-1" = "Val1" + "X-Test-2" = "Val2" + } + + path_parameter_values = ["p2"] + } + } +} +`, rName)) +} + +func testAccPipeConfig_sourceParameters_filterCriteria1(rName, criteria1 string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + source_parameters { + filter_criteria { + filter { + pattern = jsonencode({ + source = [%[2]q] + }) + } + } + } +} +`, rName, criteria1)) +} + +func testAccPipeConfig_sourceParameters_filterCriteria2(rName, criteria1, criteria2 string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + source_parameters { + filter_criteria { + filter { + pattern = jsonencode({ + source = [%[2]q] + }) + } + + filter { + pattern = jsonencode({ + source = [%[3]q] + }) + } + } + } +} +`, rName, criteria1, criteria2)) +} + +func testAccPipeConfig_sourceParameters_filterCriteria0(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + 
testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + source_parameters { + filter_criteria {} + } +} +`, rName)) +} + +func testAccPipeConfig_nameGenerated(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + ` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn +} +`, + ) +} + +func testAccPipeConfig_namePrefix(rName, namePrefix string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name_prefix = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn +} +`, namePrefix)) +} + +func testAccPipeConfig_roleARN(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_iam_role" "test2" { + name = "%[1]s-2" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = { + Effect = "Allow" + Action = "sts:AssumeRole" + Principal = { + Service = "pipes.${data.aws_partition.main.dns_suffix}" + } + Condition = { + StringEquals = { + "aws:SourceAccount" = data.aws_caller_identity.main.account_id + } + } + } + }) +} + +resource "aws_iam_role_policy" "source2" { + role = 
aws_iam_role.test2.id + name = "%[1]s-source2" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "sqs:DeleteMessage", + "sqs:GetQueueAttributes", + "sqs:ReceiveMessage", + ], + Resource = [ + aws_sqs_queue.source.arn, + ] + }, + ] + }) +} + +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source2, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test2.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn +} +`, rName)) +} + +func testAccPipeConfig_tags1(rName, tag1Key, tag1Value string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + tags = { + %[2]q = %[3]q + } +} +`, rName, tag1Key, tag1Value)) +} + +func testAccPipeConfig_tags2(rName, tag1Key, tag1Value, tag2Key, tag2Value string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tag1Key, tag1Value, tag2Key, tag2Value)) +} + +func testAccPipeConfig_targetUpdated(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_iam_role_policy" "target2" { + role = aws_iam_role.test.id + name 
= "%[1]s-target2" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "sqs:SendMessage", + ], + Resource = [ + aws_sqs_queue.target2.arn, + ] + }, + ] + }) +} + +resource "aws_sqs_queue" "target2" { + name = "%[1]s-target2" +} + +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target2] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target2.arn +} +`, rName)) +} + +func testAccPipeConfig_targetParameters_inputTemplate(rName, template string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseSQSSource(rName), + testAccPipeConfig_baseSQSTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + target_parameters { + input_template = %[2]q + } +} +`, rName, template)) +} + +func testAccPipeConfig_basicKinesis(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + testAccPipeConfig_baseKinesisSource(rName), + testAccPipeConfig_baseKinesisTarget(rName), + fmt.Sprintf(` +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] + + name = %[1]q + role_arn = aws_iam_role.test.arn + source = aws_kinesis_stream.source.arn + target = aws_kinesis_stream.target.arn + + source_parameters { + kinesis_stream_parameters { + starting_position = "LATEST" + } + } + + target_parameters { + kinesis_stream_parameters { + partition_key = "test" + } + } +} +`, rName)) +} - output, err := tfpipes.FindPipeByName(ctx, conn, rs.Primary.ID) +func testAccPipeConfig_basicDynamoDBSourceCloudWatchLogsTarget(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + 
fmt.Sprintf(` +resource "aws_iam_role_policy" "source" { + role = aws_iam_role.test.id + name = "%[1]s-source" - if err != nil { - return err - } + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "dynamodb:DescribeStream", + "dynamodb:GetRecords", + "dynamodb:GetShardIterator", + "dynamodb:ListStreams", + ], + Resource = [ + aws_dynamodb_table.source.stream_arn, + "${aws_dynamodb_table.source.stream_arn}/*" + ] + }, + ] + }) +} - *pipe = *output +resource "aws_dynamodb_table" "source" { + name = "%[1]s-source" + billing_mode = "PAY_PER_REQUEST" + hash_key = "PK" + range_key = "SK" + stream_enabled = true + stream_view_type = "NEW_AND_OLD_IMAGES" + + attribute { + name = "PK" + type = "S" + } - return nil - } + attribute { + name = "SK" + type = "S" + } } -func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).PipesClient() +resource "aws_iam_role_policy" "target" { + role = aws_iam_role.test.id + name = "%[1]s-target" - input := &pipes.ListPipesInput{} - _, err := conn.ListPipes(ctx, input) + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "logs:PutLogEvents", + ], + Resource = [ + aws_cloudwatch_log_stream.target.arn, + ] + }, + ] + }) +} - if acctest.PreCheckSkipError(err) { - t.Skipf("skipping acceptance testing: %s", err) - } +resource "aws_cloudwatch_log_group" "target" { + name = "%[1]s-target" +} - if err != nil { - t.Fatalf("unexpected PreCheck error: %s", err) - } +resource "aws_cloudwatch_log_stream" "target" { + name = "%[1]s-target" + log_group_name = aws_cloudwatch_log_group.target.name } -const testAccPipeConfig_base = ` -data "aws_caller_identity" "main" {} -data "aws_partition" "main" {} +resource "aws_pipes_pipe" "test" { + depends_on = [aws_iam_role_policy.source, aws_iam_role_policy.target] -resource "aws_iam_role" "test" { - assume_role_policy = jsonencode({ + name = %[1]q + role_arn = 
aws_iam_role.test.arn + source = aws_dynamodb_table.source.stream_arn + target = aws_cloudwatch_log_group.target.arn + + source_parameters { + dynamodb_stream_parameters { + starting_position = "LATEST" + } + } + + target_parameters { + cloudwatch_logs_parameters { + log_stream_name = aws_cloudwatch_log_stream.target.name + } + } +} +`, rName)) +} + +func testAccPipeConfig_basicActiveMQSourceStepFunctionTarget(rName string) string { + return acctest.ConfigCompose( + testAccPipeConfig_base(rName), + fmt.Sprintf(` +resource "aws_iam_role_policy" "source" { + role = aws_iam_role.test.id + name = "%[1]s-source" + + policy = jsonencode({ Version = "2012-10-17" - Statement = { - Effect = "Allow" - Action = "sts:AssumeRole" - Principal = { - Service = "pipes.${data.aws_partition.main.dns_suffix}" - } - Condition = { - StringEquals = { - "aws:SourceAccount" = data.aws_caller_identity.main.account_id - } + Statement = [ + { + Effect = "Allow" + Action = [ + "mq:DescribeBroker", + "secretsmanager:GetSecretValue", + "ec2:CreateNetworkInterface", + "ec2:DescribeNetworkInterfaces", + "ec2:DescribeVpcs", + "ec2:DeleteNetworkInterface", + "ec2:DescribeSubnets", + "ec2:DescribeSecurityGroups", + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + Resource = [ + "*" + ] + }, + ] + }) + + depends_on = [aws_mq_broker.source] +} + +resource "aws_security_group" "source" { + name = "%[1]s-source" + + ingress { + from_port = 61617 + to_port = 61617 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = { + Name = %[1]q + } +} + +resource "aws_mq_broker" "source" { + broker_name = "%[1]s-source" + engine_type = "ActiveMQ" + engine_version = "5.15.0" + host_instance_type = "mq.t2.micro" + security_groups = [aws_security_group.source.id] + authentication_strategy = "simple" + storage_type = "efs" + + logs { + general = true + } + + user { + username = "Test" + password = "TestTest1234" + } + + publicly_accessible = true +} + +resource 
"aws_secretsmanager_secret" "source" { + name = "%[1]s-source" +} + +resource "aws_secretsmanager_secret_version" "source" { + secret_id = aws_secretsmanager_secret.source.id + secret_string = jsonencode({ username = "Test", password = "TestTest1234" }) +} + +resource "aws_iam_role" "target" { + name = "%[1]s-target" + + assume_role_policy = < 0 && v[0] != nil { + apiObject.ActiveMQBrokerParameters = expandPipeSourceActiveMQBrokerParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["dynamodb_stream_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.DynamoDBStreamParameters = expandPipeSourceDynamoDBStreamParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["filter_criteria"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.FilterCriteria = expandFilterCriteria(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["kinesis_stream_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.KinesisStreamParameters = expandPipeSourceKinesisStreamParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["managed_streaming_kafka_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.ManagedStreamingKafkaParameters = expandPipeSourceManagedStreamingKafkaParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["rabbitmq_broker_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.RabbitMQBrokerParameters = expandPipeSourceRabbitMQBrokerParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["self_managed_kafka_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.SelfManagedKafkaParameters = expandPipeSourceSelfManagedKafkaParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["sqs_queue_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.SqsQueueParameters = expandPipeSourceSQSQueueParameters(v[0].(map[string]interface{})) + } + + return apiObject +} + 
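The source-parameter expanders in this diff follow a deliberate asymmetry: on create (`expandPipeSourceParameters`), an optional block that is absent from config is simply left `nil` and omitted from the request, while the update variants substitute an empty struct (e.g. `apiObject.FilterCriteria = &types.FilterCriteria{}`) so the `UpdatePipe` call explicitly clears the remote setting instead of leaving it untouched. A minimal runnable sketch of that pattern, using stand-in types rather than the real AWS SDK types:

```go
package main

import "fmt"

// Stand-ins for the SDK types; names are illustrative only.
type filterCriteria struct{ Patterns []string }

type createParams struct{ FilterCriteria *filterCriteria }
type updateParams struct{ FilterCriteria *filterCriteria }

// expandFilterCriteria turns a single-element block list (how Terraform
// models a MaxItems: 1 block) into an API struct.
func expandFilterCriteria(tfList []interface{}) *filterCriteria {
	if len(tfList) == 0 || tfList[0] == nil {
		return nil
	}
	m := tfList[0].(map[string]interface{})
	fc := &filterCriteria{}
	if v, ok := m["patterns"].([]string); ok {
		fc.Patterns = v
	}
	return fc
}

// On create, an absent block is simply omitted (left nil).
func expandCreate(tfMap map[string]interface{}) *createParams {
	p := &createParams{}
	if v, ok := tfMap["filter_criteria"].([]interface{}); ok && len(v) > 0 && v[0] != nil {
		p.FilterCriteria = expandFilterCriteria(v)
	}
	return p
}

// On update, an absent block becomes an empty (non-nil) struct so the
// service clears any previously configured value rather than keeping it.
func expandUpdate(tfMap map[string]interface{}) *updateParams {
	p := &updateParams{}
	if v, ok := tfMap["filter_criteria"].([]interface{}); ok && len(v) > 0 && v[0] != nil {
		p.FilterCriteria = expandFilterCriteria(v)
	} else {
		p.FilterCriteria = &filterCriteria{} // clear, don't omit
	}
	return p
}

func main() {
	empty := map[string]interface{}{}
	fmt.Println(expandCreate(empty).FilterCriteria == nil) // omitted on create
	fmt.Println(expandUpdate(empty).FilterCriteria != nil) // cleared on update
}
```

Without the empty-struct branch in the update expander, removing a `filter_criteria` block from config would be a no-op against the API, producing a persistent diff; the same reasoning applies to `DeadLetterConfig` and the self-managed Kafka `Vpc` field in the update expanders above.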
+func expandUpdatePipeSourceParameters(tfMap map[string]interface{}) *types.UpdatePipeSourceParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceParameters{} + + if v, ok := tfMap["activemq_broker_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.ActiveMQBrokerParameters = expandUpdatePipeSourceActiveMQBrokerParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["dynamodb_stream_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.DynamoDBStreamParameters = expandUpdatePipeSourceDynamoDBStreamParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["filter_criteria"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.FilterCriteria = expandFilterCriteria(v[0].(map[string]interface{})) + } else { + apiObject.FilterCriteria = &types.FilterCriteria{} + } + + if v, ok := tfMap["kinesis_stream_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.KinesisStreamParameters = expandUpdatePipeSourceKinesisStreamParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["managed_streaming_kafka_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.ManagedStreamingKafkaParameters = expandUpdatePipeSourceManagedStreamingKafkaParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["rabbitmq_broker_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.RabbitMQBrokerParameters = expandUpdatePipeSourceRabbitMQBrokerParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["self_managed_kafka_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.SelfManagedKafkaParameters = expandUpdatePipeSourceSelfManagedKafkaParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["sqs_queue_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.SqsQueueParameters = 
expandUpdatePipeSourceSQSQueueParameters(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandFilterCriteria(tfMap map[string]interface{}) *types.FilterCriteria { + if tfMap == nil { + return nil + } + + apiObject := &types.FilterCriteria{} + + if v, ok := tfMap["filter"].([]interface{}); ok && len(v) > 0 { + apiObject.Filters = expandFilters(v) + } + + return apiObject +} + +func expandFilter(tfMap map[string]interface{}) *types.Filter { + if tfMap == nil { + return nil + } + + apiObject := &types.Filter{} + + if v, ok := tfMap["pattern"].(string); ok && v != "" { + apiObject.Pattern = aws.String(v) + } + + return apiObject +} + +func expandFilters(tfList []interface{}) []types.Filter { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.Filter + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandFilter(tfMap) + + if apiObject == nil || apiObject.Pattern == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandPipeSourceActiveMQBrokerParameters(tfMap map[string]interface{}) *types.PipeSourceActiveMQBrokerParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceActiveMQBrokerParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandMQBrokerAccessCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["queue_name"].(string); ok && v != "" { + apiObject.QueueName = aws.String(v) + } + + return apiObject +} + +func expandUpdatePipeSourceActiveMQBrokerParameters(tfMap map[string]interface{}) 
*types.UpdatePipeSourceActiveMQBrokerParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceActiveMQBrokerParameters{} + + if v, ok := tfMap["batch_size"].(int); ok { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandMQBrokerAccessCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + return apiObject +} + +func expandMQBrokerAccessCredentials(tfMap map[string]interface{}) types.MQBrokerAccessCredentials { + if tfMap == nil { + return nil + } + + if v, ok := tfMap["basic_auth"].(string); ok && v != "" { + apiObject := &types.MQBrokerAccessCredentialsMemberBasicAuth{ + Value: v, + } + + return apiObject + } + + return nil +} + +func expandPipeSourceDynamoDBStreamParameters(tfMap map[string]interface{}) *types.PipeSourceDynamoDBStreamParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceDynamoDBStreamParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["dead_letter_config"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.DeadLetterConfig = expandDeadLetterConfig(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_record_age_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumRecordAgeInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_retry_attempts"].(int); ok && v != 0 { + apiObject.MaximumRetryAttempts = aws.Int32(int32(v)) + } + + if v, ok := tfMap["on_partial_batch_item_failure"].(string); ok && v != "" { + apiObject.OnPartialBatchItemFailure = 
types.OnPartialBatchItemFailureStreams(v) + } + + if v, ok := tfMap["parallelization_factor"].(int); ok && v != 0 { + apiObject.ParallelizationFactor = aws.Int32(int32(v)) + } + + if v, ok := tfMap["starting_position"].(string); ok && v != "" { + apiObject.StartingPosition = types.DynamoDBStreamStartPosition(v) + } + + return apiObject +} + +func expandUpdatePipeSourceDynamoDBStreamParameters(tfMap map[string]interface{}) *types.UpdatePipeSourceDynamoDBStreamParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceDynamoDBStreamParameters{} + + if v, ok := tfMap["batch_size"].(int); ok { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["dead_letter_config"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.DeadLetterConfig = expandDeadLetterConfig(v[0].(map[string]interface{})) + } else { + apiObject.DeadLetterConfig = &types.DeadLetterConfig{} + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_record_age_in_seconds"].(int); ok { + apiObject.MaximumRecordAgeInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_retry_attempts"].(int); ok { + apiObject.MaximumRetryAttempts = aws.Int32(int32(v)) + } + + if v, ok := tfMap["on_partial_batch_item_failure"].(string); ok { + apiObject.OnPartialBatchItemFailure = types.OnPartialBatchItemFailureStreams(v) + } + + if v, ok := tfMap["parallelization_factor"].(int); ok { + apiObject.ParallelizationFactor = aws.Int32(int32(v)) + } + + return apiObject +} + +func expandDeadLetterConfig(tfMap map[string]interface{}) *types.DeadLetterConfig { + if tfMap == nil { + return nil + } + + apiObject := &types.DeadLetterConfig{} + + if v, ok := tfMap["arn"].(string); ok && v != "" { + apiObject.Arn = aws.String(v) + } + + return apiObject +} + +func expandPipeSourceKinesisStreamParameters(tfMap map[string]interface{}) 
*types.PipeSourceKinesisStreamParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceKinesisStreamParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["dead_letter_config"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.DeadLetterConfig = expandDeadLetterConfig(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_record_age_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumRecordAgeInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_retry_attempts"].(int); ok && v != 0 { + apiObject.MaximumRetryAttempts = aws.Int32(int32(v)) + } + + if v, ok := tfMap["on_partial_batch_item_failure"].(string); ok && v != "" { + apiObject.OnPartialBatchItemFailure = types.OnPartialBatchItemFailureStreams(v) + } + + if v, ok := tfMap["parallelization_factor"].(int); ok && v != 0 { + apiObject.ParallelizationFactor = aws.Int32(int32(v)) + } + + if v, ok := tfMap["starting_position"].(string); ok && v != "" { + apiObject.StartingPosition = types.KinesisStreamStartPosition(v) + } + + if v, ok := tfMap["starting_position_timestamp"].(string); ok && v != "" { + v, _ := time.Parse(time.RFC3339, v) + + apiObject.StartingPositionTimestamp = aws.Time(v) + } + + return apiObject +} + +func expandUpdatePipeSourceKinesisStreamParameters(tfMap map[string]interface{}) *types.UpdatePipeSourceKinesisStreamParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceKinesisStreamParameters{} + + if v, ok := tfMap["batch_size"].(int); ok { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["dead_letter_config"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.DeadLetterConfig = expandDeadLetterConfig(v[0].(map[string]interface{})) 
+ } else { + apiObject.DeadLetterConfig = &types.DeadLetterConfig{} + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_record_age_in_seconds"].(int); ok { + apiObject.MaximumRecordAgeInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_retry_attempts"].(int); ok { + apiObject.MaximumRetryAttempts = aws.Int32(int32(v)) + } + + if v, ok := tfMap["on_partial_batch_item_failure"].(string); ok { + apiObject.OnPartialBatchItemFailure = types.OnPartialBatchItemFailureStreams(v) + } + + if v, ok := tfMap["parallelization_factor"].(int); ok { + apiObject.ParallelizationFactor = aws.Int32(int32(v)) + } + + return apiObject +} + +func expandPipeSourceManagedStreamingKafkaParameters(tfMap map[string]interface{}) *types.PipeSourceManagedStreamingKafkaParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceManagedStreamingKafkaParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["consumer_group_id"].(string); ok && v != "" { + apiObject.ConsumerGroupID = aws.String(v) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandMSKAccessCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["starting_position"].(string); ok && v != "" { + apiObject.StartingPosition = types.MSKStartPosition(v) + } + + if v, ok := tfMap["topic_name"].(string); ok && v != "" { + apiObject.TopicName = aws.String(v) + } + + return apiObject +} + +func expandUpdatePipeSourceManagedStreamingKafkaParameters(tfMap map[string]interface{}) *types.UpdatePipeSourceManagedStreamingKafkaParameters { + if tfMap == nil { + return nil + } 
+ + apiObject := &types.UpdatePipeSourceManagedStreamingKafkaParameters{} + + if v, ok := tfMap["batch_size"].(int); ok { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandMSKAccessCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + return apiObject +} + +func expandMSKAccessCredentials(tfMap map[string]interface{}) types.MSKAccessCredentials { + if tfMap == nil { + return nil + } + + if v, ok := tfMap["client_certificate_tls_auth"].(string); ok && v != "" { + apiObject := &types.MSKAccessCredentialsMemberClientCertificateTlsAuth{ + Value: v, + } + + return apiObject + } + + if v, ok := tfMap["sasl_scram_512_auth"].(string); ok && v != "" { + apiObject := &types.MSKAccessCredentialsMemberSaslScram512Auth{ + Value: v, + } + + return apiObject + } + + return nil +} + +func expandPipeSourceRabbitMQBrokerParameters(tfMap map[string]interface{}) *types.PipeSourceRabbitMQBrokerParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceRabbitMQBrokerParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandMQBrokerAccessCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["queue_name"].(string); ok && v != "" { + apiObject.QueueName = aws.String(v) + } + + if v, ok := tfMap["virtual_host"].(string); ok && v != "" { + apiObject.VirtualHost = aws.String(v) + } + + return apiObject +} + +func expandUpdatePipeSourceRabbitMQBrokerParameters(tfMap 
map[string]interface{}) *types.UpdatePipeSourceRabbitMQBrokerParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceRabbitMQBrokerParameters{} + + if v, ok := tfMap["batch_size"].(int); ok { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandMQBrokerAccessCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + return apiObject +} + +func expandPipeSourceSelfManagedKafkaParameters(tfMap map[string]interface{}) *types.PipeSourceSelfManagedKafkaParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceSelfManagedKafkaParameters{} + + if v, ok := tfMap["additional_bootstrap_servers"].(*schema.Set); ok && v.Len() > 0 { + apiObject.AdditionalBootstrapServers = flex.ExpandStringValueSet(v) + } + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["consumer_group_id"].(string); ok && v != "" { + apiObject.ConsumerGroupID = aws.String(v) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandSelfManagedKafkaAccessConfigurationCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["server_root_ca_certificate"].(string); ok && v != "" { + apiObject.ServerRootCaCertificate = aws.String(v) + } + + if v, ok := tfMap["starting_position"].(string); ok && v != "" { + apiObject.StartingPosition = types.SelfManagedKafkaStartPosition(v) + } + + if v, ok := tfMap["topic_name"].(string); ok && v != "" { + apiObject.TopicName = aws.String(v) + } + + if v, ok := 
tfMap["vpc"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Vpc = expandSelfManagedKafkaAccessConfigurationVPC(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandUpdatePipeSourceSelfManagedKafkaParameters(tfMap map[string]interface{}) *types.UpdatePipeSourceSelfManagedKafkaParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceSelfManagedKafkaParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["credentials"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Credentials = expandSelfManagedKafkaAccessConfigurationCredentials(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + if v, ok := tfMap["server_root_ca_certificate"].(string); ok { + apiObject.ServerRootCaCertificate = aws.String(v) + } + + if v, ok := tfMap["vpc"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Vpc = expandSelfManagedKafkaAccessConfigurationVPC(v[0].(map[string]interface{})) + } else { + apiObject.Vpc = &types.SelfManagedKafkaAccessConfigurationVpc{} + } + + return apiObject +} + +func expandSelfManagedKafkaAccessConfigurationCredentials(tfMap map[string]interface{}) types.SelfManagedKafkaAccessConfigurationCredentials { + if tfMap == nil { + return nil + } + + if v, ok := tfMap["basic_auth"].(string); ok && v != "" { + apiObject := &types.SelfManagedKafkaAccessConfigurationCredentialsMemberBasicAuth{ + Value: v, + } + + return apiObject + } + + if v, ok := tfMap["client_certificate_tls_auth"].(string); ok && v != "" { + apiObject := &types.SelfManagedKafkaAccessConfigurationCredentialsMemberClientCertificateTlsAuth{ + Value: v, + } + + return apiObject + } + + if v, ok := tfMap["sasl_scram_256_auth"].(string); ok && v != "" { + apiObject := 
&types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram256Auth{ + Value: v, + } + + return apiObject + } + + if v, ok := tfMap["sasl_scram_512_auth"].(string); ok && v != "" { + apiObject := &types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth{ + Value: v, + } + + return apiObject + } + + return nil +} + +func expandSelfManagedKafkaAccessConfigurationVPC(tfMap map[string]interface{}) *types.SelfManagedKafkaAccessConfigurationVpc { + if tfMap == nil { + return nil + } + + apiObject := &types.SelfManagedKafkaAccessConfigurationVpc{} + + if v, ok := tfMap["security_groups"].(*schema.Set); ok && v.Len() > 0 { + apiObject.SecurityGroup = flex.ExpandStringValueSet(v) + } + + if v, ok := tfMap["subnets"].(*schema.Set); ok && v.Len() > 0 { + apiObject.Subnets = flex.ExpandStringValueSet(v) + } + + return apiObject +} + +func expandPipeSourceSQSQueueParameters(tfMap map[string]interface{}) *types.PipeSourceSqsQueueParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeSourceSqsQueueParameters{} + + if v, ok := tfMap["batch_size"].(int); ok && v != 0 { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok && v != 0 { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + return apiObject +} + +func expandUpdatePipeSourceSQSQueueParameters(tfMap map[string]interface{}) *types.UpdatePipeSourceSqsQueueParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.UpdatePipeSourceSqsQueueParameters{} + + if v, ok := tfMap["batch_size"].(int); ok { + apiObject.BatchSize = aws.Int32(int32(v)) + } + + if v, ok := tfMap["maximum_batching_window_in_seconds"].(int); ok { + apiObject.MaximumBatchingWindowInSeconds = aws.Int32(int32(v)) + } + + return apiObject +} + +func flattenPipeSourceParameters(apiObject *types.PipeSourceParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := 
map[string]interface{}{} + + if v := apiObject.ActiveMQBrokerParameters; v != nil { + tfMap["activemq_broker_parameters"] = []interface{}{flattenPipeSourceActiveMQBrokerParameters(v)} + } + + if v := apiObject.DynamoDBStreamParameters; v != nil { + tfMap["dynamodb_stream_parameters"] = []interface{}{flattenPipeSourceDynamoDBStreamParameters(v)} + } + + if v := apiObject.FilterCriteria; v != nil { + tfMap["filter_criteria"] = []interface{}{flattenFilterCriteria(v)} + } + + if v := apiObject.KinesisStreamParameters; v != nil { + tfMap["kinesis_stream_parameters"] = []interface{}{flattenPipeSourceKinesisStreamParameters(v)} + } + + if v := apiObject.ManagedStreamingKafkaParameters; v != nil { + tfMap["managed_streaming_kafka_parameters"] = []interface{}{flattenPipeSourceManagedStreamingKafkaParameters(v)} + } + + if v := apiObject.RabbitMQBrokerParameters; v != nil { + tfMap["rabbitmq_broker_parameters"] = []interface{}{flattenPipeSourceRabbitMQBrokerParameters(v)} + } + + if v := apiObject.SelfManagedKafkaParameters; v != nil { + tfMap["self_managed_kafka_parameters"] = []interface{}{flattenPipeSourceSelfManagedKafkaParameters(v)} + } + + if v := apiObject.SqsQueueParameters; v != nil { + tfMap["sqs_queue_parameters"] = []interface{}{flattenPipeSourceSQSQueueParameters(v)} + } + + return tfMap +} + +func flattenFilterCriteria(apiObject *types.FilterCriteria) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Filters; v != nil { + tfMap["filter"] = flattenFilters(v) + } + + return tfMap +} + +func flattenFilter(apiObject types.Filter) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Pattern; v != nil { + tfMap["pattern"] = aws.ToString(v) + } + + return tfMap +} + +func flattenFilters(apiObjects []types.Filter) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = 
append(tfList, flattenFilter(apiObject)) + } + + return tfList +} + +func flattenPipeSourceActiveMQBrokerParameters(apiObject *types.PipeSourceActiveMQBrokerParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchSize; v != nil { + tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.Credentials; v != nil { + tfMap["credentials"] = []interface{}{flattenMQBrokerAccessCredentials(v)} + } + + if v := apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.QueueName; v != nil { + tfMap["queue_name"] = aws.ToString(v) + } + + return tfMap +} + +func flattenMQBrokerAccessCredentials(apiObject types.MQBrokerAccessCredentials) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if apiObject, ok := apiObject.(*types.MQBrokerAccessCredentialsMemberBasicAuth); ok { + if v := apiObject.Value; v != "" { + tfMap["basic_auth"] = v + } + } + + return tfMap +} + +func flattenPipeSourceDynamoDBStreamParameters(apiObject *types.PipeSourceDynamoDBStreamParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchSize; v != nil { + tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.DeadLetterConfig; v != nil { + tfMap["dead_letter_config"] = []interface{}{flattenDeadLetterConfig(v)} + } + + if v := apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.MaximumRecordAgeInSeconds; v != nil { + tfMap["maximum_record_age_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.MaximumRetryAttempts; v != nil { + tfMap["maximum_retry_attempts"] = aws.ToInt32(v) + } + + if v := apiObject.OnPartialBatchItemFailure; v != "" { + tfMap["on_partial_batch_item_failure"] = v + } + + if v 
:= apiObject.ParallelizationFactor; v != nil { + tfMap["parallelization_factor"] = aws.ToInt32(v) + } + + if v := apiObject.StartingPosition; v != "" { + tfMap["starting_position"] = v + } + + return tfMap +} + +func flattenPipeSourceKinesisStreamParameters(apiObject *types.PipeSourceKinesisStreamParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchSize; v != nil { + tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.DeadLetterConfig; v != nil { + tfMap["dead_letter_config"] = []interface{}{flattenDeadLetterConfig(v)} + } + + if v := apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.MaximumRecordAgeInSeconds; v != nil { + tfMap["maximum_record_age_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.MaximumRetryAttempts; v != nil { + tfMap["maximum_retry_attempts"] = aws.ToInt32(v) + } + + if v := apiObject.OnPartialBatchItemFailure; v != "" { + tfMap["on_partial_batch_item_failure"] = v + } + + if v := apiObject.ParallelizationFactor; v != nil { + tfMap["parallelization_factor"] = aws.ToInt32(v) + } + + if v := apiObject.StartingPosition; v != "" { + tfMap["starting_position"] = v + } + + if v := apiObject.StartingPositionTimestamp; v != nil { + tfMap["starting_position_timestamp"] = aws.ToTime(v).Format(time.RFC3339) + } + + return tfMap +} + +func flattenPipeSourceManagedStreamingKafkaParameters(apiObject *types.PipeSourceManagedStreamingKafkaParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchSize; v != nil { + tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.ConsumerGroupID; v != nil { + tfMap["consumer_group_id"] = aws.ToString(v) + } + + if v := apiObject.Credentials; v != nil { + tfMap["credentials"] = []interface{}{flattenMSKAccessCredentials(v)} + } + + if v 
:= apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.StartingPosition; v != "" { + tfMap["starting_position"] = v + } + + if v := apiObject.TopicName; v != nil { + tfMap["topic_name"] = aws.ToString(v) + } + + return tfMap +} + +func flattenMSKAccessCredentials(apiObject types.MSKAccessCredentials) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if apiObject, ok := apiObject.(*types.MSKAccessCredentialsMemberClientCertificateTlsAuth); ok { + if v := apiObject.Value; v != "" { + tfMap["client_certificate_tls_auth"] = v + } + } + + if apiObject, ok := apiObject.(*types.MSKAccessCredentialsMemberSaslScram512Auth); ok { + if v := apiObject.Value; v != "" { + tfMap["sasl_scram_512_auth"] = v + } + } + + return tfMap +} + +func flattenPipeSourceRabbitMQBrokerParameters(apiObject *types.PipeSourceRabbitMQBrokerParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchSize; v != nil { + tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.Credentials; v != nil { + tfMap["credentials"] = []interface{}{flattenMQBrokerAccessCredentials(v)} + } + + if v := apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.QueueName; v != nil { + tfMap["queue_name"] = aws.ToString(v) + } + + if v := apiObject.VirtualHost; v != nil { + tfMap["virtual_host"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeSourceSelfManagedKafkaParameters(apiObject *types.PipeSourceSelfManagedKafkaParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.AdditionalBootstrapServers; v != nil { + tfMap["additional_bootstrap_servers"] = v + } + + if v := apiObject.BatchSize; v != nil { + 
tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.ConsumerGroupID; v != nil { + tfMap["consumer_group_id"] = aws.ToString(v) + } + + if v := apiObject.Credentials; v != nil { + tfMap["credentials"] = []interface{}{flattenSelfManagedKafkaAccessConfigurationCredentials(v)} + } + + if v := apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + if v := apiObject.ServerRootCaCertificate; v != nil { + tfMap["server_root_ca_certificate"] = aws.ToString(v) + } + + if v := apiObject.StartingPosition; v != "" { + tfMap["starting_position"] = v + } + + if v := apiObject.TopicName; v != nil { + tfMap["topic_name"] = aws.ToString(v) + } + + if v := apiObject.Vpc; v != nil { + tfMap["vpc"] = []interface{}{flattenSelfManagedKafkaAccessConfigurationVPC(v)} + } + + return tfMap +} + +func flattenSelfManagedKafkaAccessConfigurationCredentials(apiObject types.SelfManagedKafkaAccessConfigurationCredentials) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if apiObject, ok := apiObject.(*types.SelfManagedKafkaAccessConfigurationCredentialsMemberBasicAuth); ok { + if v := apiObject.Value; v != "" { + tfMap["basic_auth"] = v + } + } + + if apiObject, ok := apiObject.(*types.SelfManagedKafkaAccessConfigurationCredentialsMemberClientCertificateTlsAuth); ok { + if v := apiObject.Value; v != "" { + tfMap["client_certificate_tls_auth"] = v + } + } + + if apiObject, ok := apiObject.(*types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram256Auth); ok { + if v := apiObject.Value; v != "" { + tfMap["sasl_scram_256_auth"] = v + } + } + + if apiObject, ok := apiObject.(*types.SelfManagedKafkaAccessConfigurationCredentialsMemberSaslScram512Auth); ok { + if v := apiObject.Value; v != "" { + tfMap["sasl_scram_512_auth"] = v + } + } + + return tfMap +} + +func flattenSelfManagedKafkaAccessConfigurationVPC(apiObject 
*types.SelfManagedKafkaAccessConfigurationVpc) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.SecurityGroup; v != nil { + tfMap["security_groups"] = v + } + + if v := apiObject.Subnets; v != nil { + tfMap["subnets"] = v + } + + return tfMap +} + +func flattenPipeSourceSQSQueueParameters(apiObject *types.PipeSourceSqsQueueParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchSize; v != nil { + tfMap["batch_size"] = aws.ToInt32(v) + } + + if v := apiObject.MaximumBatchingWindowInSeconds; v != nil { + tfMap["maximum_batching_window_in_seconds"] = aws.ToInt32(v) + } + + return tfMap +} + +func flattenDeadLetterConfig(apiObject *types.DeadLetterConfig) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Arn; v != nil { + tfMap["arn"] = aws.ToString(v) + } + + return tfMap +} diff --git a/internal/service/pipes/status.go b/internal/service/pipes/status.go deleted file mode 100644 index 4c51772f949..00000000000 --- a/internal/service/pipes/status.go +++ /dev/null @@ -1,40 +0,0 @@ -package pipes - -import ( - "context" - - "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/aws/aws-sdk-go-v2/service/pipes/types" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -const ( - pipeStatusRunning = string(types.PipeStateRunning) - pipeStatusStopped = string(types.PipeStateStopped) - pipeStatusCreating = string(types.PipeStateCreating) - pipeStatusUpdating = string(types.PipeStateUpdating) - pipeStatusDeleting = string(types.PipeStateDeleting) - pipeStatusStarting = string(types.PipeStateStarting) - pipeStatusStopping = string(types.PipeStateStopping) - pipeStatusCreateFailed = string(types.PipeStateCreateFailed) - pipeStatusUpdateFailed = 
string(types.PipeStateUpdateFailed) - pipeStatusStartFailed = string(types.PipeStateStartFailed) - pipeStatusStopFailed = string(types.PipeStateStopFailed) -) - -func statusPipe(ctx context.Context, conn *pipes.Client, name string) retry.StateRefreshFunc { - return func() (interface{}, string, error) { - output, err := FindPipeByName(ctx, conn, name) - - if tfresource.NotFound(err) { - return nil, "", nil - } - - if err != nil { - return nil, "", err - } - - return output, string(output.CurrentState), nil - } -} diff --git a/internal/service/pipes/sweep.go b/internal/service/pipes/sweep.go index 852f34f04bb..9fb8dcaad9e 100644 --- a/internal/service/pipes/sweep.go +++ b/internal/service/pipes/sweep.go @@ -1,18 +1,18 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep package pipes import ( - "context" "fmt" "log" "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,45 +24,39 @@ func init() { } func sweepPipes(region string) error { - client, err := sweep.SharedRegionalSweepClient(region) - + ctx := sweep.Context(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - - conn := client.(*conns.AWSClient).PipesClient() + conn := client.PipesClient(ctx) sweepResources := make([]sweep.Sweepable, 0) - var errs *multierror.Error - paginator := pipes.NewListPipesPaginator(conn, &pipes.ListPipesInput{}) + pages := pipes.NewListPipesPaginator(conn, &pipes.ListPipesInput{}) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) - for paginator.HasMorePages() { - page, err := paginator.NextPage(context.Background()) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] 
Skipping Pipe sweep for %s: %s", region, err) + return nil + } if err != nil { - errs = multierror.Append(errs, fmt.Errorf("listing Pipes for %s: %w", region, err)) - break + return fmt.Errorf("error listing Pipes (%s): %w", region, err) } - for _, it := range page.Pipes { - name := aws.ToString(it.Name) - - r := ResourcePipe() + for _, v := range page.Pipes { + r := resourcePipe() d := r.Data(nil) - d.SetId(name) + d.SetId(aws.ToString(v.Name)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } } - if err := sweep.SweepOrchestrator(sweepResources); err != nil { - errs = multierror.Append(errs, fmt.Errorf("sweeping Pipe for %s: %w", region, err)) - } - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Pipe sweep for %s: %s", region, errs) - return nil + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { + return fmt.Errorf("error sweeping Pipes (%s): %w", region, err) } - return errs.ErrorOrNil() + return nil } diff --git a/internal/service/pipes/tags_gen.go b/internal/service/pipes/tags_gen.go index ff5343d3202..957fb0eb5ca 100644 --- a/internal/service/pipes/tags_gen.go +++ b/internal/service/pipes/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists pipes service tags. +// listTags lists pipes service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *pipes.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *pipes.Client, identifier string) (tftags.KeyValueTags, error) { input := &pipes.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *pipes.Client, identifier string) (tftag // ListTags lists pipes service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).PipesClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).PipesClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from pipes service tags. +// KeyValueTags creates tftags.KeyValueTags from pipes service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns pipes service tags from Context. +// getTagsIn returns pipes service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets pipes service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets pipes service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates pipes service tags. +// updateTags updates pipes service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn *pipes.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *pipes.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *pipes.Client, identifier string, oldT // UpdateTags updates pipes service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).PipesClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).PipesClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/pipes/target_parameters.go b/internal/service/pipes/target_parameters.go new file mode 100644 index 00000000000..89a89d2f384 --- /dev/null +++ b/internal/service/pipes/target_parameters.go @@ -0,0 +1,2701 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package pipes + +import ( + "regexp" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/pipes/types" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func targetParametersSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "batch_job_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "array_properties": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "size": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(2, 10000), + }, + }, + }, + }, + "container_overrides": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "command": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + 
"environment": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + }, + "value": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "instance_type": { + Type: schema.TypeString, + Optional: true, + }, + "resource_requirement": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.BatchResourceRequirementType](), + }, + "value": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + }, + }, + "depends_on": { + Type: schema.TypeList, + Optional: true, + MaxItems: 20, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "job_id": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.BatchJobDependencyType](), + }, + }, + }, + }, + "job_definition": { + Type: schema.TypeString, + Required: true, + }, + "job_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "retry_strategy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "attempts": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 10), + }, + }, + }, + }, + }, + }, + }, + "cloudwatch_logs_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + 
"target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "log_stream_name": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 256), + }, + "timestamp": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 256), + validation.StringMatch(regexp.MustCompile(`^\$(\.[\w/_-]+(\[(\d+|\*)\])*)*$`), ""), + ), + }, + }, + }, + }, + "ecs_task_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "capacity_provider_strategy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 6, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "base": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 100000), + }, + "capacity_provider": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "weight": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 1000), + }, + }, + }, + }, + "enable_ecs_managed_tags": { + Type: schema.TypeBool, + Optional: true, + }, + 
"enable_execute_command": { + Type: schema.TypeBool, + Optional: true, + }, + "group": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "launch_type": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.LaunchType](), + }, + "network_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "aws_vpc_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "assign_public_ip": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.AssignPublicIp](), + }, + "security_groups": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 5, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 1024), + validation.StringMatch(regexp.MustCompile(`^sg-[0-9a-zA-Z]*|(\$(\.[\w/_-]+(\[(\d+|\*)\])*)*)$`), ""), + ), + }, + }, + "subnets": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 16, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 1024), + validation.StringMatch(regexp.MustCompile(`^subnet-[0-9a-z]*|(\$(\.[\w/_-]+(\[(\d+|\*)\])*)*)$`), ""), + ), + }, + }, + }, + }, + }, + }, + }, + }, + "overrides": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "container_override": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "command": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "cpu": { + Type: schema.TypeInt, + Optional: true, + }, + "environment": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": 
{ + Type: schema.TypeString, + Optional: true, + }, + "value": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "environment_file": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.EcsEnvironmentFileType](), + }, + "value": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + }, + }, + }, + "memory": { + Type: schema.TypeInt, + Optional: true, + }, + "memory_reservation": { + Type: schema.TypeInt, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Optional: true, + }, + "resource_requirement": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.EcsResourceRequirementType](), + }, + "value": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + }, + }, + }, + "cpu": { + Type: schema.TypeString, + Optional: true, + }, + "ephemeral_storage": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "size_in_gib": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(21, 200), + }, + }, + }, + }, + "execution_role_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + }, + "inference_accelerator_override": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "device_name": { + Type: schema.TypeString, + Optional: true, + }, + "device_type": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + }, + "memory": { + Type: schema.TypeString, + Optional: true, + }, + "task_role_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + }, + }, + }, + }, + 
"placement_constraint": { + Type: schema.TypeList, + Optional: true, + MaxItems: 10, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "expression": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 2000), + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.PlacementConstraintType](), + }, + }, + }, + }, + "placement_strategy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 5, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.PlacementStrategyType](), + }, + }, + }, + }, + "platform_version": { + Type: schema.TypeString, + Optional: true, + }, + "propagate_tags": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.PropagateTags](), + }, + "reference_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 1024), + }, + "tags": tftags.TagsSchema(), + "task_count": { + Type: schema.TypeInt, + Optional: true, + }, + "task_definition_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + }, + }, + }, + "eventbridge_event_bus_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + 
}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "detail_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 128), + }, + "endpoint_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 50), + validation.StringMatch(regexp.MustCompile(`^[A-Za-z0-9\-]+[\.][A-Za-z0-9\-]+$`), ""), + ), + }, + "resources": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 10, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidARN, + }, + }, + "source": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 256), + ), + }, + "time": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 256), + validation.StringMatch(regexp.MustCompile(`^\$(\.[\w/_-]+(\[(\d+|\*)\])*)*$`), ""), + ), + }, + }, + }, + }, + "http_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "header_parameters": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "path_parameter_values": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "query_string_parameters": { + Type: schema.TypeMap, + Optional: 
true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "input_template": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 8192), + }, + "kinesis_stream_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "partition_key": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + }, + }, + }, + "lambda_function_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "invocation_type": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.PipeTargetInvocationType](), + }, + }, + }, + }, + "redshift_data_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: 
[]string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 64), + }, + "db_user": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "secret_manager_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + }, + "sqls": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 100000), + }, + }, + "statement_name": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 500), + }, + "with_event": { + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "sagemaker_pipeline_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sqs_queue_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{ + "pipeline_parameter": { + Type: schema.TypeList, + Optional: true, + MaxItems: 200, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 256), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9](-*[a-zA-Z0-9])*|(\$(\.[\w/_-]+(\[(\d+|\*)\])*)*)$`), ""), + ), + }, + "value": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 1024), + }, + }, + }, + }, + }, + }, + }, + "sqs_queue_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + "target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.step_function_state_machine_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "message_deduplication_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 100), + }, + "message_group_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 100), + }, + }, + }, + }, + "step_function_state_machine_parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ConflictsWith: []string{ + "target_parameters.0.batch_job_parameters", + "target_parameters.0.cloudwatch_logs_parameters", + "target_parameters.0.ecs_task_parameters", + "target_parameters.0.eventbridge_event_bus_parameters", + "target_parameters.0.http_parameters", + "target_parameters.0.kinesis_stream_parameters", + 
"target_parameters.0.lambda_function_parameters", + "target_parameters.0.redshift_data_parameters", + "target_parameters.0.sagemaker_pipeline_parameters", + "target_parameters.0.sqs_queue_parameters", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "invocation_type": { + Type: schema.TypeString, + Required: true, + ValidateDiagFunc: enum.Validate[types.PipeTargetInvocationType](), + }, + }, + }, + }, + }, + }, + } +} + +func expandPipeTargetParameters(tfMap map[string]interface{}) *types.PipeTargetParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetParameters{} + + if v, ok := tfMap["batch_job_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.BatchJobParameters = expandPipeTargetBatchJobParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["cloudwatch_logs_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.CloudWatchLogsParameters = expandPipeTargetCloudWatchLogsParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["ecs_task_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.EcsTaskParameters = expandPipeTargetECSTaskParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["eventbridge_event_bus_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.EventBridgeEventBusParameters = expandPipeTargetEventBridgeEventBusParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["http_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.HttpParameters = expandPipeTargetHTTPParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["input_template"].(string); ok && v != "" { + apiObject.InputTemplate = aws.String(v) + } + + if v, ok := tfMap["kinesis_stream_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.KinesisStreamParameters = expandPipeTargetKinesisStreamParameters(v[0].(map[string]interface{})) + } + + if v, ok 
:= tfMap["lambda_function_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.LambdaFunctionParameters = expandPipeTargetLambdaFunctionParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["redshift_data_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.RedshiftDataParameters = expandPipeTargetRedshiftDataParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["sagemaker_pipeline_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.SageMakerPipelineParameters = expandPipeTargetSageMakerPipelineParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["sqs_queue_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.SqsQueueParameters = expandPipeTargetSQSQueueParameters(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["step_function_state_machine_parameters"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.StepFunctionStateMachineParameters = expandPipeTargetStateMachineParameters(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandPipeTargetBatchJobParameters(tfMap map[string]interface{}) *types.PipeTargetBatchJobParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetBatchJobParameters{} + + if v, ok := tfMap["array_properties"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.ArrayProperties = expandBatchArrayProperties(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["container_overrides"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.ContainerOverrides = expandBatchContainerOverrides(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["depends_on"].([]interface{}); ok && len(v) > 0 { + apiObject.DependsOn = expandBatchJobDependencies(v) + } + + if v, ok := tfMap["job_definition"].(string); ok && v != "" { + apiObject.JobDefinition = aws.String(v) + } + + if v, ok := tfMap["job_name"].(string); ok && v != "" { + 
apiObject.JobName = aws.String(v) + } + + if v, ok := tfMap["parameters"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.Parameters = flex.ExpandStringValueMap(v) + } + + if v, ok := tfMap["retry_strategy"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.RetryStrategy = expandBatchRetryStrategy(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandBatchArrayProperties(tfMap map[string]interface{}) *types.BatchArrayProperties { + if tfMap == nil { + return nil + } + + apiObject := &types.BatchArrayProperties{} + + if v, ok := tfMap["size"].(int); ok { + apiObject.Size = int32(v) + } + + return apiObject +} + +func expandBatchContainerOverrides(tfMap map[string]interface{}) *types.BatchContainerOverrides { + if tfMap == nil { + return nil + } + + apiObject := &types.BatchContainerOverrides{} + + if v, ok := tfMap["command"].([]interface{}); ok && len(v) > 0 { + apiObject.Command = flex.ExpandStringValueList(v) + } + + if v, ok := tfMap["environment"].([]interface{}); ok && len(v) > 0 { + apiObject.Environment = expandBatchEnvironmentVariables(v) + } + + if v, ok := tfMap["instance_type"].(string); ok && v != "" { + apiObject.InstanceType = aws.String(v) + } + + if v, ok := tfMap["resource_requirement"].([]interface{}); ok && len(v) > 0 { + apiObject.ResourceRequirements = expandBatchResourceRequirements(v) + } + + return apiObject +} + +func expandBatchEnvironmentVariable(tfMap map[string]interface{}) *types.BatchEnvironmentVariable { + if tfMap == nil { + return nil + } + + apiObject := &types.BatchEnvironmentVariable{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + apiObject.Name = aws.String(v) + } + + if v, ok := tfMap["value"].(string); ok && v != "" { + apiObject.Value = aws.String(v) + } + + return apiObject +} + +func expandBatchEnvironmentVariables(tfList []interface{}) []types.BatchEnvironmentVariable { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.BatchEnvironmentVariable + + for 
_, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandBatchEnvironmentVariable(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandBatchResourceRequirement(tfMap map[string]interface{}) *types.BatchResourceRequirement { + if tfMap == nil { + return nil + } + + apiObject := &types.BatchResourceRequirement{} + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = types.BatchResourceRequirementType(v) + } + + if v, ok := tfMap["value"].(string); ok && v != "" { + apiObject.Value = aws.String(v) + } + + return apiObject +} + +func expandBatchResourceRequirements(tfList []interface{}) []types.BatchResourceRequirement { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.BatchResourceRequirement + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandBatchResourceRequirement(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandBatchJobDependency(tfMap map[string]interface{}) *types.BatchJobDependency { + if tfMap == nil { + return nil + } + + apiObject := &types.BatchJobDependency{} + + if v, ok := tfMap["job_id"].(string); ok && v != "" { + apiObject.JobId = aws.String(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = types.BatchJobDependencyType(v) + } + + return apiObject +} + +func expandBatchJobDependencies(tfList []interface{}) []types.BatchJobDependency { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.BatchJobDependency + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandBatchJobDependency(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = 
append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandBatchRetryStrategy(tfMap map[string]interface{}) *types.BatchRetryStrategy { + if tfMap == nil { + return nil + } + + apiObject := &types.BatchRetryStrategy{} + + if v, ok := tfMap["attempts"].(int); ok { + apiObject.Attempts = int32(v) + } + + return apiObject +} + +func expandPipeTargetCloudWatchLogsParameters(tfMap map[string]interface{}) *types.PipeTargetCloudWatchLogsParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetCloudWatchLogsParameters{} + + if v, ok := tfMap["log_stream_name"].(string); ok && v != "" { + apiObject.LogStreamName = aws.String(v) + } + + if v, ok := tfMap["timestamp"].(string); ok && v != "" { + apiObject.Timestamp = aws.String(v) + } + + return apiObject +} + +func expandPipeTargetECSTaskParameters(tfMap map[string]interface{}) *types.PipeTargetEcsTaskParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetEcsTaskParameters{} + + if v, ok := tfMap["capacity_provider_strategy"].([]interface{}); ok && len(v) > 0 { + apiObject.CapacityProviderStrategy = expandCapacityProviderStrategyItems(v) + } + + if v, ok := tfMap["enable_ecs_managed_tags"].(bool); ok { + apiObject.EnableECSManagedTags = v + } + + if v, ok := tfMap["enable_execute_command"].(bool); ok { + apiObject.EnableExecuteCommand = v + } + + if v, ok := tfMap["group"].(string); ok && v != "" { + apiObject.Group = aws.String(v) + } + + if v, ok := tfMap["launch_type"].(string); ok && v != "" { + apiObject.LaunchType = types.LaunchType(v) + } + + if v, ok := tfMap["network_configuration"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.NetworkConfiguration = expandNetworkConfiguration(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["overrides"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.Overrides = expandECSTaskOverride(v[0].(map[string]interface{})) + } + + if v, ok := 
tfMap["placement_constraint"].([]interface{}); ok && len(v) > 0 { + apiObject.PlacementConstraints = expandPlacementConstraints(v) + } + + if v, ok := tfMap["placement_strategy"].([]interface{}); ok && len(v) > 0 { + apiObject.PlacementStrategy = expandPlacementStrategies(v) + } + + if v, ok := tfMap["platform_version"].(string); ok && v != "" { + apiObject.PlatformVersion = aws.String(v) + } + + if v, ok := tfMap["propagate_tags"].(string); ok && v != "" { + apiObject.PropagateTags = types.PropagateTags(v) + } + + if v, ok := tfMap["reference_id"].(string); ok && v != "" { + apiObject.ReferenceId = aws.String(v) + } + + if v, ok := tfMap["tags"].(map[string]interface{}); ok && len(v) > 0 { + for k, v := range flex.ExpandStringValueMap(v) { + apiObject.Tags = append(apiObject.Tags, types.Tag{Key: aws.String(k), Value: aws.String(v)}) + } + } + + if v, ok := tfMap["task_count"].(int); ok { + apiObject.TaskCount = aws.Int32(int32(v)) + } + + if v, ok := tfMap["task_definition_arn"].(string); ok && v != "" { + apiObject.TaskDefinitionArn = aws.String(v) + } + + return apiObject +} + +func expandCapacityProviderStrategyItem(tfMap map[string]interface{}) *types.CapacityProviderStrategyItem { + if tfMap == nil { + return nil + } + + apiObject := &types.CapacityProviderStrategyItem{} + + if v, ok := tfMap["base"].(int); ok { + apiObject.Base = int32(v) + } + + if v, ok := tfMap["capacity_provider"].(string); ok && v != "" { + apiObject.CapacityProvider = aws.String(v) + } + + if v, ok := tfMap["weight"].(int); ok { + apiObject.Weight = int32(v) + } + + return apiObject +} + +func expandCapacityProviderStrategyItems(tfList []interface{}) []types.CapacityProviderStrategyItem { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.CapacityProviderStrategyItem + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandCapacityProviderStrategyItem(tfMap) + + if apiObject == nil { + 
continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandNetworkConfiguration(tfMap map[string]interface{}) *types.NetworkConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &types.NetworkConfiguration{} + + if v, ok := tfMap["aws_vpc_configuration"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.AwsvpcConfiguration = expandVPCConfiguration(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandVPCConfiguration(tfMap map[string]interface{}) *types.AwsVpcConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &types.AwsVpcConfiguration{} + + if v, ok := tfMap["assign_public_ip"].(string); ok && v != "" { + apiObject.AssignPublicIp = types.AssignPublicIp(v) + } + + if v, ok := tfMap["security_groups"].(*schema.Set); ok && v.Len() > 0 { + apiObject.SecurityGroups = flex.ExpandStringValueSet(v) + } + + if v, ok := tfMap["subnets"].(*schema.Set); ok && v.Len() > 0 { + apiObject.Subnets = flex.ExpandStringValueSet(v) + } + + return apiObject +} + +func expandECSTaskOverride(tfMap map[string]interface{}) *types.EcsTaskOverride { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsTaskOverride{} + + if v, ok := tfMap["container_override"].([]interface{}); ok && len(v) > 0 { + apiObject.ContainerOverrides = expandECSContainerOverrides(v) + } + + if v, ok := tfMap["cpu"].(string); ok && v != "" { + apiObject.Cpu = aws.String(v) + } + + if v, ok := tfMap["ephemeral_storage"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + apiObject.EphemeralStorage = expandECSEphemeralStorage(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["execution_role_arn"].(string); ok && v != "" { + apiObject.ExecutionRoleArn = aws.String(v) + } + + if v, ok := tfMap["inference_accelerator_override"].([]interface{}); ok && len(v) > 0 { + apiObject.InferenceAcceleratorOverrides = expandECSInferenceAcceleratorOverrides(v) + } + + if v, ok := tfMap["memory"].(string); 
ok && v != "" { + apiObject.Memory = aws.String(v) + } + + if v, ok := tfMap["task_role_arn"].(string); ok && v != "" { + apiObject.TaskRoleArn = aws.String(v) + } + + return apiObject +} + +func expandECSContainerOverride(tfMap map[string]interface{}) *types.EcsContainerOverride { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsContainerOverride{} + + if v, ok := tfMap["command"].([]interface{}); ok && len(v) > 0 { + apiObject.Command = flex.ExpandStringValueList(v) + } + + if v, ok := tfMap["cpu"].(int); ok { + apiObject.Cpu = aws.Int32(int32(v)) + } + + if v, ok := tfMap["environment"].([]interface{}); ok && len(v) > 0 { + apiObject.Environment = expandECSEnvironmentVariables(v) + } + + if v, ok := tfMap["environment_file"].([]interface{}); ok && len(v) > 0 { + apiObject.EnvironmentFiles = expandECSEnvironmentFiles(v) + } + + if v, ok := tfMap["memory"].(int); ok { + apiObject.Memory = aws.Int32(int32(v)) + } + + if v, ok := tfMap["memory_reservation"].(int); ok { + apiObject.MemoryReservation = aws.Int32(int32(v)) + } + + if v, ok := tfMap["name"].(string); ok && v != "" { + apiObject.Name = aws.String(v) + } + + if v, ok := tfMap["resource_requirement"].([]interface{}); ok && len(v) > 0 { + apiObject.ResourceRequirements = expandECSResourceRequirements(v) + } + + return apiObject +} + +func expandECSContainerOverrides(tfList []interface{}) []types.EcsContainerOverride { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.EcsContainerOverride + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandECSContainerOverride(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandECSEnvironmentVariable(tfMap map[string]interface{}) *types.EcsEnvironmentVariable { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsEnvironmentVariable{} + + if v, ok := 
tfMap["name"].(string); ok && v != "" { + apiObject.Name = aws.String(v) + } + + if v, ok := tfMap["value"].(string); ok && v != "" { + apiObject.Value = aws.String(v) + } + + return apiObject +} + +func expandECSEnvironmentVariables(tfList []interface{}) []types.EcsEnvironmentVariable { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.EcsEnvironmentVariable + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandECSEnvironmentVariable(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandECSEnvironmentFile(tfMap map[string]interface{}) *types.EcsEnvironmentFile { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsEnvironmentFile{} + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = types.EcsEnvironmentFileType(v) + } + + if v, ok := tfMap["value"].(string); ok && v != "" { + apiObject.Value = aws.String(v) + } + + return apiObject +} + +func expandECSEnvironmentFiles(tfList []interface{}) []types.EcsEnvironmentFile { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.EcsEnvironmentFile + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandECSEnvironmentFile(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandECSResourceRequirement(tfMap map[string]interface{}) *types.EcsResourceRequirement { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsResourceRequirement{} + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = types.EcsResourceRequirementType(v) + } + + if v, ok := tfMap["value"].(string); ok && v != "" { + apiObject.Value = aws.String(v) + } + + return apiObject +} + +func expandECSResourceRequirements(tfList 
[]interface{}) []types.EcsResourceRequirement { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.EcsResourceRequirement + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandECSResourceRequirement(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandECSEphemeralStorage(tfMap map[string]interface{}) *types.EcsEphemeralStorage { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsEphemeralStorage{} + + if v, ok := tfMap["size_in_gib"].(int); ok { + apiObject.SizeInGiB = int32(v) + } + + return apiObject +} + +func expandECSInferenceAcceleratorOverride(tfMap map[string]interface{}) *types.EcsInferenceAcceleratorOverride { + if tfMap == nil { + return nil + } + + apiObject := &types.EcsInferenceAcceleratorOverride{} + + if v, ok := tfMap["device_name"].(string); ok && v != "" { + apiObject.DeviceName = aws.String(v) + } + + if v, ok := tfMap["device_type"].(string); ok && v != "" { + apiObject.DeviceType = aws.String(v) + } + + return apiObject +} + +func expandECSInferenceAcceleratorOverrides(tfList []interface{}) []types.EcsInferenceAcceleratorOverride { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.EcsInferenceAcceleratorOverride + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandECSInferenceAcceleratorOverride(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandPlacementConstraint(tfMap map[string]interface{}) *types.PlacementConstraint { + if tfMap == nil { + return nil + } + + apiObject := &types.PlacementConstraint{} + + if v, ok := tfMap["expression"].(string); ok && v != "" { + apiObject.Expression = aws.String(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" 
{ + apiObject.Type = types.PlacementConstraintType(v) + } + + return apiObject +} + +func expandPlacementConstraints(tfList []interface{}) []types.PlacementConstraint { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.PlacementConstraint + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandPlacementConstraint(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandPlacementStrategy(tfMap map[string]interface{}) *types.PlacementStrategy { + if tfMap == nil { + return nil + } + + apiObject := &types.PlacementStrategy{} + + if v, ok := tfMap["field"].(string); ok && v != "" { + apiObject.Field = aws.String(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = types.PlacementStrategyType(v) + } + + return apiObject +} + +func expandPlacementStrategies(tfList []interface{}) []types.PlacementStrategy { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.PlacementStrategy + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandPlacementStrategy(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandPipeTargetEventBridgeEventBusParameters(tfMap map[string]interface{}) *types.PipeTargetEventBridgeEventBusParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetEventBridgeEventBusParameters{} + + if v, ok := tfMap["detail_type"].(string); ok && v != "" { + apiObject.DetailType = aws.String(v) + } + + if v, ok := tfMap["endpoint_id"].(string); ok && v != "" { + apiObject.EndpointId = aws.String(v) + } + + if v, ok := tfMap["resources"].(*schema.Set); ok && v.Len() > 0 { + apiObject.Resources = flex.ExpandStringValueSet(v) + } + + if v, ok := 
tfMap["source"].(string); ok && v != "" { + apiObject.Source = aws.String(v) + } + + if v, ok := tfMap["time"].(string); ok && v != "" { + apiObject.Time = aws.String(v) + } + + return apiObject +} + +func expandPipeTargetHTTPParameters(tfMap map[string]interface{}) *types.PipeTargetHttpParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetHttpParameters{} + + if v, ok := tfMap["header_parameters"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.HeaderParameters = flex.ExpandStringValueMap(v) + } + + if v, ok := tfMap["path_parameter_values"].([]interface{}); ok && len(v) > 0 { + apiObject.PathParameterValues = flex.ExpandStringValueList(v) + } + + if v, ok := tfMap["query_string_parameters"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.QueryStringParameters = flex.ExpandStringValueMap(v) + } + + return apiObject +} + +func expandPipeTargetKinesisStreamParameters(tfMap map[string]interface{}) *types.PipeTargetKinesisStreamParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetKinesisStreamParameters{} + + if v, ok := tfMap["partition_key"].(string); ok && v != "" { + apiObject.PartitionKey = aws.String(v) + } + + return apiObject +} + +func expandPipeTargetLambdaFunctionParameters(tfMap map[string]interface{}) *types.PipeTargetLambdaFunctionParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetLambdaFunctionParameters{} + + if v, ok := tfMap["invocation_type"].(string); ok && v != "" { + apiObject.InvocationType = types.PipeTargetInvocationType(v) + } + + return apiObject +} + +func expandPipeTargetRedshiftDataParameters(tfMap map[string]interface{}) *types.PipeTargetRedshiftDataParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetRedshiftDataParameters{} + + if v, ok := tfMap["database"].(string); ok && v != "" { + apiObject.Database = aws.String(v) + } + + if v, ok := tfMap["db_user"].(string); ok && v != "" { + 
apiObject.DbUser = aws.String(v) + } + + if v, ok := tfMap["secret_manager_arn"].(string); ok && v != "" { + apiObject.SecretManagerArn = aws.String(v) + } + + if v, ok := tfMap["sqls"].(*schema.Set); ok && v.Len() > 0 { + apiObject.Sqls = flex.ExpandStringValueSet(v) + } + + if v, ok := tfMap["statement_name"].(string); ok && v != "" { + apiObject.StatementName = aws.String(v) + } + + if v, ok := tfMap["with_event"].(bool); ok { + apiObject.WithEvent = v + } + + return apiObject +} + +func expandPipeTargetSageMakerPipelineParameters(tfMap map[string]interface{}) *types.PipeTargetSageMakerPipelineParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetSageMakerPipelineParameters{} + + if v, ok := tfMap["pipeline_parameter"].([]interface{}); ok && len(v) > 0 { + apiObject.PipelineParameterList = expandSageMakerPipelineParameters(v) + } + + return apiObject +} + +func expandSageMakerPipelineParameter(tfMap map[string]interface{}) *types.SageMakerPipelineParameter { + if tfMap == nil { + return nil + } + + apiObject := &types.SageMakerPipelineParameter{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + apiObject.Name = aws.String(v) + } + + if v, ok := tfMap["value"].(string); ok && v != "" { + apiObject.Value = aws.String(v) + } + + return apiObject +} + +func expandSageMakerPipelineParameters(tfList []interface{}) []types.SageMakerPipelineParameter { + if len(tfList) == 0 { + return nil + } + + var apiObjects []types.SageMakerPipelineParameter + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandSageMakerPipelineParameter(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func expandPipeTargetSQSQueueParameters(tfMap map[string]interface{}) *types.PipeTargetSqsQueueParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetSqsQueueParameters{} + 
+ if v, ok := tfMap["message_deduplication_id"].(string); ok && v != "" { + apiObject.MessageDeduplicationId = aws.String(v) + } + + if v, ok := tfMap["message_group_id"].(string); ok && v != "" { + apiObject.MessageGroupId = aws.String(v) + } + + return apiObject +} + +func expandPipeTargetStateMachineParameters(tfMap map[string]interface{}) *types.PipeTargetStateMachineParameters { + if tfMap == nil { + return nil + } + + apiObject := &types.PipeTargetStateMachineParameters{} + + if v, ok := tfMap["invocation_type"].(string); ok && v != "" { + apiObject.InvocationType = types.PipeTargetInvocationType(v) + } + + return apiObject +} + +func flattenPipeTargetParameters(apiObject *types.PipeTargetParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BatchJobParameters; v != nil { + tfMap["batch_job_parameters"] = []interface{}{flattenPipeTargetBatchJobParameters(v)} + } + + if v := apiObject.CloudWatchLogsParameters; v != nil { + tfMap["cloudwatch_logs_parameters"] = []interface{}{flattenPipeTargetCloudWatchLogsParameters(v)} + } + + if v := apiObject.EcsTaskParameters; v != nil { + tfMap["ecs_task_parameters"] = []interface{}{flattenPipeTargetECSTaskParameters(v)} + } + + if v := apiObject.EventBridgeEventBusParameters; v != nil { + tfMap["eventbridge_event_bus_parameters"] = []interface{}{flattenPipeTargetEventBridgeEventBusParameters(v)} + } + + if v := apiObject.HttpParameters; v != nil { + tfMap["http_parameters"] = []interface{}{flattenPipeTargetHTTPParameters(v)} + } + + if v := apiObject.InputTemplate; v != nil { + tfMap["input_template"] = aws.ToString(v) + } + + if v := apiObject.KinesisStreamParameters; v != nil { + tfMap["kinesis_stream_parameters"] = []interface{}{flattenPipeTargetKinesisStreamParameters(v)} + } + + if v := apiObject.LambdaFunctionParameters; v != nil { + tfMap["lambda_function_parameters"] =
[]interface{}{flattenPipeTargetLambdaFunctionParameters(v)} + } + + if v := apiObject.RedshiftDataParameters; v != nil { + tfMap["redshift_data_parameters"] = []interface{}{flattenPipeTargetRedshiftDataParameters(v)} + } + + if v := apiObject.SageMakerPipelineParameters; v != nil { + tfMap["sagemaker_pipeline_parameters"] = []interface{}{flattenPipeTargetSageMakerPipelineParameters(v)} + } + + if v := apiObject.SqsQueueParameters; v != nil { + tfMap["sqs_queue_parameters"] = []interface{}{flattenPipeTargetSQSQueueParameters(v)} + } + + if v := apiObject.StepFunctionStateMachineParameters; v != nil { + tfMap["step_function_state_machine_parameters"] = []interface{}{flattenPipeTargetStateMachineParameters(v)} + } + + return tfMap +} + +func flattenPipeTargetBatchJobParameters(apiObject *types.PipeTargetBatchJobParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.ArrayProperties; v != nil { + tfMap["array_properties"] = []interface{}{flattenBatchArrayProperties(v)} + } + + if v := apiObject.ContainerOverrides; v != nil { + tfMap["container_overrides"] = []interface{}{flattenBatchContainerOverrides(v)} + } + + if v := apiObject.DependsOn; v != nil { + tfMap["depends_on"] = flattenBatchJobDependencies(v) + } + + if v := apiObject.JobDefinition; v != nil { + tfMap["job_definition"] = aws.ToString(v) + } + + if v := apiObject.JobName; v != nil { + tfMap["job_name"] = aws.ToString(v) + } + + if v := apiObject.Parameters; v != nil { + tfMap["parameters"] = v + } + + if v := apiObject.RetryStrategy; v != nil { + tfMap["retry_strategy"] = []interface{}{flattenBatchRetryStrategy(v)} + } + + return tfMap +} + +func flattenBatchArrayProperties(apiObject *types.BatchArrayProperties) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Size; v != 0 { + tfMap["size"] = int(v) + } + + return tfMap +} + +func 
flattenBatchContainerOverrides(apiObject *types.BatchContainerOverrides) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Command; v != nil { + tfMap["command"] = v + } + + if v := apiObject.Environment; v != nil { + tfMap["environment"] = flattenBatchEnvironmentVariables(v) + } + + if v := apiObject.InstanceType; v != nil { + tfMap["instance_type"] = aws.ToString(v) + } + + if v := apiObject.ResourceRequirements; v != nil { + tfMap["resource_requirement"] = flattenBatchResourceRequirements(v) + } + + return tfMap +} + +func flattenBatchEnvironmentVariable(apiObject types.BatchEnvironmentVariable) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.ToString(v) + } + + if v := apiObject.Value; v != nil { + tfMap["value"] = aws.ToString(v) + } + + return tfMap +} + +func flattenBatchEnvironmentVariables(apiObjects []types.BatchEnvironmentVariable) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenBatchEnvironmentVariable(apiObject)) + } + + return tfList +} + +func flattenBatchResourceRequirement(apiObject types.BatchResourceRequirement) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Type; v != "" { + tfMap["type"] = v + } + + if v := apiObject.Value; v != nil { + tfMap["value"] = aws.ToString(v) + } + + return tfMap +} + +func flattenBatchResourceRequirements(apiObjects []types.BatchResourceRequirement) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenBatchResourceRequirement(apiObject)) + } + + return tfList +} + +func flattenBatchJobDependency(apiObject types.BatchJobDependency) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := 
apiObject.JobId; v != nil { + tfMap["job_id"] = aws.ToString(v) + } + + if v := apiObject.Type; v != "" { + tfMap["type"] = v + } + + return tfMap +} + +func flattenBatchJobDependencies(apiObjects []types.BatchJobDependency) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenBatchJobDependency(apiObject)) + } + + return tfList +} + +func flattenBatchRetryStrategy(apiObject *types.BatchRetryStrategy) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Attempts; v != 0 { + tfMap["attempts"] = int(v) + } + + return tfMap +} + +func flattenPipeTargetCloudWatchLogsParameters(apiObject *types.PipeTargetCloudWatchLogsParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.LogStreamName; v != nil { + tfMap["log_stream_name"] = aws.ToString(v) + } + + if v := apiObject.Timestamp; v != nil { + tfMap["timestamp"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeTargetECSTaskParameters(apiObject *types.PipeTargetEcsTaskParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{ + "enable_ecs_managed_tags": apiObject.EnableECSManagedTags, + "enable_execute_command": apiObject.EnableExecuteCommand, + } + + if v := apiObject.CapacityProviderStrategy; v != nil { + tfMap["capacity_provider_strategy"] = flattenCapacityProviderStrategyItems(v) + } + + if v := apiObject.Group; v != nil { + tfMap["group"] = aws.ToString(v) + } + + if v := apiObject.LaunchType; v != "" { + tfMap["launch_type"] = v + } + + if v := apiObject.NetworkConfiguration; v != nil { + tfMap["network_configuration"] = []interface{}{flattenNetworkConfiguration(v)} + } + + if v := apiObject.Overrides; v != nil { + tfMap["overrides"] = []interface{}{flattenECSTaskOverride(v)} + } + + if 
v := apiObject.PlacementConstraints; v != nil { + tfMap["placement_constraint"] = flattenPlacementConstraints(v) + } + + if v := apiObject.PlacementStrategy; v != nil { + tfMap["placement_strategy"] = flattenPlacementStrategies(v) + } + + if v := apiObject.PlatformVersion; v != nil { + tfMap["platform_version"] = aws.ToString(v) + } + + if v := apiObject.PropagateTags; v != "" { + tfMap["propagate_tags"] = v + } + + if v := apiObject.ReferenceId; v != nil { + tfMap["reference_id"] = aws.ToString(v) + } + + if v := apiObject.Tags; v != nil { + tags := map[string]interface{}{} + + for _, apiObject := range v { + tags[aws.ToString(apiObject.Key)] = aws.ToString(apiObject.Value) + } + + tfMap["tags"] = tags + } + + if v := apiObject.TaskCount; v != nil { + tfMap["task_count"] = aws.ToInt32(v) + } + + if v := apiObject.TaskDefinitionArn; v != nil { + tfMap["task_definition_arn"] = aws.ToString(v) + } + + return tfMap +} + +func flattenCapacityProviderStrategyItem(apiObject types.CapacityProviderStrategyItem) map[string]interface{} { + tfMap := map[string]interface{}{ + "base": apiObject.Base, + "weight": apiObject.Weight, + } + + if v := apiObject.CapacityProvider; v != nil { + tfMap["capacity_provider"] = aws.ToString(v) + } + + return tfMap +} + +func flattenCapacityProviderStrategyItems(apiObjects []types.CapacityProviderStrategyItem) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenCapacityProviderStrategyItem(apiObject)) + } + + return tfList +} + +func flattenECSTaskOverride(apiObject *types.EcsTaskOverride) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.ContainerOverrides; v != nil { + tfMap["container_override"] = flattenECSContainerOverrides(v) + } + + if v := apiObject.Cpu; v != nil { + tfMap["cpu"] = aws.ToString(v) + } + + if v := apiObject.EphemeralStorage; v != 
nil { + tfMap["ephemeral_storage"] = []interface{}{flattenECSEphemeralStorage(v)} + } + + if v := apiObject.ExecutionRoleArn; v != nil { + tfMap["execution_role_arn"] = aws.ToString(v) + } + + if v := apiObject.InferenceAcceleratorOverrides; v != nil { + tfMap["inference_accelerator_override"] = flattenECSInferenceAcceleratorOverrides(v) + } + + if v := apiObject.Memory; v != nil { + tfMap["memory"] = aws.ToString(v) + } + + if v := apiObject.TaskRoleArn; v != nil { + tfMap["task_role_arn"] = aws.ToString(v) + } + + return tfMap +} + +func flattenECSContainerOverride(apiObject types.EcsContainerOverride) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Command; v != nil { + tfMap["command"] = v + } + + if v := apiObject.Cpu; v != nil { + tfMap["cpu"] = aws.ToInt32(v) + } + + if v := apiObject.Environment; v != nil { + tfMap["environment"] = flattenECSEnvironmentVariables(v) + } + + if v := apiObject.EnvironmentFiles; v != nil { + tfMap["environment_file"] = flattenECSEnvironmentFiles(v) + } + + if v := apiObject.Memory; v != nil { + tfMap["memory"] = aws.ToInt32(v) + } + + if v := apiObject.MemoryReservation; v != nil { + tfMap["memory_reservation"] = aws.ToInt32(v) + } + + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.ToString(v) + } + + if v := apiObject.ResourceRequirements; v != nil { + tfMap["resource_requirement"] = flattenECSResourceRequirements(v) + } + + return tfMap +} + +func flattenECSContainerOverrides(apiObjects []types.EcsContainerOverride) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenECSContainerOverride(apiObject)) + } + + return tfList +} + +func flattenECSResourceRequirement(apiObject types.EcsResourceRequirement) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Type; v != "" { + tfMap["type"] = v + } + + if v := apiObject.Value; v != nil { + 
tfMap["value"] = aws.ToString(v) + } + + return tfMap +} + +func flattenECSResourceRequirements(apiObjects []types.EcsResourceRequirement) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenECSResourceRequirement(apiObject)) + } + + return tfList +} + +func flattenECSEnvironmentFile(apiObject types.EcsEnvironmentFile) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Type; v != "" { + tfMap["type"] = v + } + + if v := apiObject.Value; v != nil { + tfMap["value"] = aws.ToString(v) + } + + return tfMap +} + +func flattenECSEnvironmentVariable(apiObject types.EcsEnvironmentVariable) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.ToString(v) + } + + if v := apiObject.Value; v != nil { + tfMap["value"] = aws.ToString(v) + } + + return tfMap +} + +func flattenECSEnvironmentVariables(apiObjects []types.EcsEnvironmentVariable) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenECSEnvironmentVariable(apiObject)) + } + + return tfList +} + +func flattenECSEnvironmentFiles(apiObjects []types.EcsEnvironmentFile) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenECSEnvironmentFile(apiObject)) + } + + return tfList +} + +func flattenECSEphemeralStorage(apiObject *types.EcsEphemeralStorage) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{ + "size_in_gib": apiObject.SizeInGiB, + } + + return tfMap +} + +func flattenECSInferenceAcceleratorOverride(apiObject types.EcsInferenceAcceleratorOverride) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := 
apiObject.DeviceName; v != nil { + tfMap["device_name"] = aws.ToString(v) + } + + if v := apiObject.DeviceType; v != nil { + tfMap["device_type"] = aws.ToString(v) + } + + return tfMap +} + +func flattenECSInferenceAcceleratorOverrides(apiObjects []types.EcsInferenceAcceleratorOverride) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenECSInferenceAcceleratorOverride(apiObject)) + } + + return tfList +} + +func flattenNetworkConfiguration(apiObject *types.NetworkConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.AwsvpcConfiguration; v != nil { + tfMap["aws_vpc_configuration"] = []interface{}{flattenVPCConfiguration(v)} + } + + return tfMap +} + +func flattenVPCConfiguration(apiObject *types.AwsVpcConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.AssignPublicIp; v != "" { + tfMap["assign_public_ip"] = v + } + + if v := apiObject.SecurityGroups; v != nil { + tfMap["security_groups"] = v + } + + if v := apiObject.Subnets; v != nil { + tfMap["subnets"] = v + } + + return tfMap +} + +func flattenPlacementConstraint(apiObject types.PlacementConstraint) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Expression; v != nil { + tfMap["expression"] = aws.ToString(v) + } + + if v := apiObject.Type; v != "" { + tfMap["type"] = v + } + + return tfMap +} + +func flattenPlacementConstraints(apiObjects []types.PlacementConstraint) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenPlacementConstraint(apiObject)) + } + + return tfList +} + +func flattenPlacementStrategy(apiObject types.PlacementStrategy) map[string]interface{} { + tfMap := 
map[string]interface{}{} + + if v := apiObject.Field; v != nil { + tfMap["field"] = aws.ToString(v) + } + + if v := apiObject.Type; v != "" { + tfMap["type"] = v + } + + return tfMap +} + +func flattenPlacementStrategies(apiObjects []types.PlacementStrategy) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenPlacementStrategy(apiObject)) + } + + return tfList +} + +func flattenPipeTargetEventBridgeEventBusParameters(apiObject *types.PipeTargetEventBridgeEventBusParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.DetailType; v != nil { + tfMap["detail_type"] = aws.ToString(v) + } + + if v := apiObject.EndpointId; v != nil { + tfMap["endpoint_id"] = aws.ToString(v) + } + + if v := apiObject.Resources; v != nil { + tfMap["resources"] = v + } + + if v := apiObject.Source; v != nil { + tfMap["source"] = aws.ToString(v) + } + + if v := apiObject.Time; v != nil { + tfMap["time"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeTargetHTTPParameters(apiObject *types.PipeTargetHttpParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.HeaderParameters; v != nil { + tfMap["header_parameters"] = v + } + + if v := apiObject.PathParameterValues; v != nil { + tfMap["path_parameter_values"] = v + } + + if v := apiObject.QueryStringParameters; v != nil { + tfMap["query_string_parameters"] = v + } + + return tfMap +} + +func flattenPipeTargetKinesisStreamParameters(apiObject *types.PipeTargetKinesisStreamParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.PartitionKey; v != nil { + tfMap["partition_key"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeTargetLambdaFunctionParameters(apiObject 
*types.PipeTargetLambdaFunctionParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.InvocationType; v != "" { + tfMap["invocation_type"] = v + } + + return tfMap +} + +func flattenPipeTargetRedshiftDataParameters(apiObject *types.PipeTargetRedshiftDataParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{ + "with_event": apiObject.WithEvent, + } + + if v := apiObject.Database; v != nil { + tfMap["database"] = aws.ToString(v) + } + + if v := apiObject.DbUser; v != nil { + tfMap["db_user"] = aws.ToString(v) + } + + if v := apiObject.SecretManagerArn; v != nil { + tfMap["secret_manager_arn"] = aws.ToString(v) + } + + if v := apiObject.Sqls; v != nil { + tfMap["sqls"] = v + } + + if v := apiObject.StatementName; v != nil { + tfMap["statement_name"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeTargetSageMakerPipelineParameters(apiObject *types.PipeTargetSageMakerPipelineParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.PipelineParameterList; v != nil { + tfMap["pipeline_parameter"] = flattenSageMakerPipelineParameters(v) + } + + return tfMap +} + +func flattenSageMakerPipelineParameter(apiObject types.SageMakerPipelineParameter) map[string]interface{} { + tfMap := map[string]interface{}{} + + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.ToString(v) + } + + if v := apiObject.Value; v != nil { + tfMap["value"] = aws.ToString(v) + } + + return tfMap +} + +func flattenSageMakerPipelineParameters(apiObjects []types.SageMakerPipelineParameter) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, flattenSageMakerPipelineParameter(apiObject)) + } + + return tfList +} + +func 
flattenPipeTargetSQSQueueParameters(apiObject *types.PipeTargetSqsQueueParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.MessageDeduplicationId; v != nil { + tfMap["message_deduplication_id"] = aws.ToString(v) + } + + if v := apiObject.MessageGroupId; v != nil { + tfMap["message_group_id"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPipeTargetStateMachineParameters(apiObject *types.PipeTargetStateMachineParameters) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.InvocationType; v != "" { + tfMap["invocation_type"] = v + } + + return tfMap +} diff --git a/internal/service/pipes/test-fixtures/lambdatest.zip b/internal/service/pipes/test-fixtures/lambdatest.zip new file mode 100644 index 00000000000..5c636e955b2 Binary files /dev/null and b/internal/service/pipes/test-fixtures/lambdatest.zip differ diff --git a/internal/service/pipes/wait.go b/internal/service/pipes/wait.go deleted file mode 100644 index 16f5cf03146..00000000000 --- a/internal/service/pipes/wait.go +++ /dev/null @@ -1,70 +0,0 @@ -package pipes - -import ( - "context" - "errors" - "time" - - "github.com/aws/aws-sdk-go-v2/aws" - "github.com/aws/aws-sdk-go-v2/service/pipes" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -func waitPipeCreated(ctx context.Context, conn *pipes.Client, id string, timeout time.Duration) (*pipes.DescribePipeOutput, error) { - stateConf := &retry.StateChangeConf{ - Pending: []string{pipeStatusCreating}, - Target: []string{pipeStatusRunning, pipeStatusStopped}, - Refresh: statusPipe(ctx, conn, id), - Timeout: timeout, - NotFoundChecks: 20, - ContinuousTargetOccurence: 1, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - if output, ok := outputRaw.(*pipes.DescribePipeOutput); ok { - 
tfresource.SetLastError(err, errors.New(aws.ToString(output.StateReason))) - - return output, err - } - - return nil, err -} - -func waitPipeUpdated(ctx context.Context, conn *pipes.Client, id string, timeout time.Duration) (*pipes.DescribePipeOutput, error) { - stateConf := &retry.StateChangeConf{ - Pending: []string{pipeStatusUpdating}, - Target: []string{pipeStatusRunning, pipeStatusStopped}, - Refresh: statusPipe(ctx, conn, id), - Timeout: timeout, - NotFoundChecks: 20, - ContinuousTargetOccurence: 1, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - if output, ok := outputRaw.(*pipes.DescribePipeOutput); ok { - tfresource.SetLastError(err, errors.New(aws.ToString(output.StateReason))) - - return output, err - } - - return nil, err -} - -func waitPipeDeleted(ctx context.Context, conn *pipes.Client, id string, timeout time.Duration) (*pipes.DescribePipeOutput, error) { - stateConf := &retry.StateChangeConf{ - Pending: []string{pipeStatusDeleting}, - Target: []string{}, - Refresh: statusPipe(ctx, conn, id), - Timeout: timeout, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - if output, ok := outputRaw.(*pipes.DescribePipeOutput); ok { - tfresource.SetLastError(err, errors.New(aws.ToString(output.StateReason))) - - return output, err - } - - return nil, err -} diff --git a/internal/service/pricing/generate.go b/internal/service/pricing/generate.go new file mode 100644 index 00000000000..7c35d32d910 --- /dev/null +++ b/internal/service/pricing/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package pricing diff --git a/internal/service/pricing/product_data_source.go b/internal/service/pricing/product_data_source.go index 3f19229523f..42b818e0e33 100644 --- a/internal/service/pricing/product_data_source.go +++ b/internal/service/pricing/product_data_source.go @@ -1,13 +1,15 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pricing import ( "context" - "encoding/json" "fmt" - "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/pricing" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/pricing" + "github.com/aws/aws-sdk-go-v2/service/pricing/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -16,14 +18,11 @@ import ( ) // @SDKDataSource("aws_pricing_product") -func DataSourceProduct() *schema.Resource { +func dataSourceProduct() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceProductRead, + Schema: map[string]*schema.Schema{ - "service_code": { - Type: schema.TypeString, - Required: true, - }, "filters": { Type: schema.TypeList, Required: true, @@ -45,53 +44,47 @@ func DataSourceProduct() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "service_code": { + Type: schema.TypeString, + Required: true, + }, }, } } func dataSourceProductRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).PricingConn() + conn := meta.(*conns.AWSClient).PricingClient(ctx) - params := &pricing.GetProductsInput{ + input := &pricing.GetProductsInput{ + Filters: []types.Filter{}, ServiceCode: aws.String(d.Get("service_code").(string)), - Filters: []*pricing.Filter{}, } filters := d.Get("filters") for _, v := range filters.([]interface{}) { m := v.(map[string]interface{}) - params.Filters = append(params.Filters, &pricing.Filter{ 
+ input.Filters = append(input.Filters, types.Filter{ Field: aws.String(m["field"].(string)), + Type: types.FilterTypeTermMatch, Value: aws.String(m["value"].(string)), - Type: aws.String(pricing.FilterTypeTermMatch), }) } - log.Printf("[DEBUG] Reading pricing of products: %s", params) - resp, err := conn.GetProductsWithContext(ctx, params) + output, err := conn.GetProducts(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "reading pricing of products: %s", err) + return sdkdiag.AppendErrorf(diags, "reading Pricing Products: %s", err) } - numberOfElements := len(resp.PriceList) - if numberOfElements == 0 { + if numberOfElements := len(output.PriceList); numberOfElements == 0 { return sdkdiag.AppendErrorf(diags, "Pricing product query did not return any elements") } else if numberOfElements > 1 { - priceListBytes, err := json.Marshal(resp.PriceList) - priceListString := string(priceListBytes) - if err != nil { - priceListString = err.Error() - } - return sdkdiag.AppendErrorf(diags, "Pricing product query not precise enough. Returned more than one element: %s", priceListString) + return sdkdiag.AppendErrorf(diags, "Pricing product query not precise enough. Returned %d elements", numberOfElements) } - pricingResult, err := json.Marshal(resp.PriceList[0]) - if err != nil { - return sdkdiag.AppendErrorf(diags, "Invalid JSON value returned by AWS: %s", err) - } + d.SetId(fmt.Sprintf("%d", create.StringHashcode(fmt.Sprintf("%#v", input)))) + d.Set("result", output.PriceList[0]) - d.SetId(fmt.Sprintf("%d", create.StringHashcode(params.String()))) - d.Set("result", string(pricingResult)) return diags } diff --git a/internal/service/pricing/product_data_source_test.go b/internal/service/pricing/product_data_source_test.go index 10ba8a5e5b8..d8d70cd949a 100644 --- a/internal/service/pricing/product_data_source_test.go +++ b/internal/service/pricing/product_data_source_test.go @@ -1,15 +1,15 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pricing_test import ( - "encoding/json" - "errors" - "fmt" "testing" "github.com/aws/aws-sdk-go/aws/endpoints" - "github.com/aws/aws-sdk-go/service/pricing" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccPricingProductDataSource_ec2(t *testing.T) { @@ -21,13 +21,13 @@ func TestAccPricingProductDataSource_ec2(t *testing.T) { acctest.PreCheck(ctx, t) acctest.PreCheckRegion(t, endpoints.UsEast1RegionID, endpoints.ApSouth1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, pricing.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.PricingEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { Config: testAccProductDataSourceConfig_ec2, Check: resource.ComposeAggregateTestCheckFunc( - resource.TestCheckResourceAttrWith(dataSourceName, "result", testAccCheckValueIsJSON), + acctest.CheckResourceAttrIsJSONString(dataSourceName, "result"), ), }, }, @@ -43,13 +43,13 @@ func TestAccPricingProductDataSource_redshift(t *testing.T) { acctest.PreCheck(ctx, t) acctest.PreCheckRegion(t, endpoints.UsEast1RegionID, endpoints.ApSouth1RegionID) }, - ErrorCheck: acctest.ErrorCheck(t, pricing.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.PricingEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { Config: testAccProductDataSourceConfig_redshift, Check: resource.ComposeAggregateTestCheckFunc( - resource.TestCheckResourceAttrWith(dataSourceName, "result", testAccCheckValueIsJSON), + acctest.CheckResourceAttrIsJSONString(dataSourceName, "result"), ), }, }, @@ -129,17 +129,3 @@ data "aws_pricing_product" "test" { } } ` - -func testAccCheckValueIsJSON(v string) error { - var m map[string]*json.RawMessage - - if err := json.Unmarshal([]byte(v), &m); err != nil { - return fmt.Errorf("parsing JSON: 
%s", err) - } - - if len(m) == 0 { - return errors.New(`empty JSON`) - } - - return nil -} diff --git a/internal/service/pricing/service_package_gen.go b/internal/service/pricing/service_package_gen.go index 6cf077d3184..d25b09b46a3 100644 --- a/internal/service/pricing/service_package_gen.go +++ b/internal/service/pricing/service_package_gen.go @@ -5,6 +5,9 @@ package pricing import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + pricing_sdkv2 "github.com/aws/aws-sdk-go-v2/service/pricing" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -22,7 +25,7 @@ func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.Servic func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource { return []*types.ServicePackageSDKDataSource{ { - Factory: DataSourceProduct, + Factory: dataSourceProduct, TypeName: "aws_pricing_product", }, } @@ -36,4 +39,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Pricing } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*pricing_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return pricing_sdkv2.NewFromConfig(cfg, func(o *pricing_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = pricing_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/qldb/exports_test.go b/internal/service/qldb/exports_test.go new file mode 100644 index 00000000000..3ab1ddafa11 --- /dev/null +++ b/internal/service/qldb/exports_test.go @@ -0,0 +1,13 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package qldb + +// Exports for use in tests only. +var ( + FindLedgerByName = findLedgerByName + FindStreamByTwoPartKey = findStreamByTwoPartKey + + ResourceLedger = resourceLedger + ResourceStream = resourceStream +) diff --git a/internal/service/qldb/generate.go b/internal/service/qldb/generate.go index aee5bc735c2..ef17f2e8f29 100644 --- a/internal/service/qldb/generate.go +++ b/internal/service/qldb/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsMap -UpdateTags -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package qldb diff --git a/internal/service/qldb/ledger.go b/internal/service/qldb/ledger.go index 2c9d1d61703..02d06362642 100644 --- a/internal/service/qldb/ledger.go +++ b/internal/service/qldb/ledger.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package qldb
 
 import (
@@ -6,15 +9,17 @@ import (
 	"regexp"
 	"time"
 
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/qldb"
-	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/qldb"
+	"github.com/aws/aws-sdk-go-v2/service/qldb/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/create"
+	"github.com/hashicorp/terraform-provider-aws/internal/enum"
+	"github.com/hashicorp/terraform-provider-aws/internal/errs"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/internal/verify"
@@ -23,7 +28,7 @@ import (
 
 // @SDKResource("aws_qldb_ledger", name="Ledger")
 // @Tags(identifierAttribute="arn")
-func ResourceLedger() *schema.Resource {
+func resourceLedger() *schema.Resource {
 	return &schema.Resource{
 		CreateWithoutTimeout: resourceLedgerCreate,
 		ReadWithoutTimeout:   resourceLedgerRead,
@@ -69,9 +74,9 @@ func ResourceLedger() *schema.Resource {
 				),
 			},
 			"permissions_mode": {
-				Type:         schema.TypeString,
-				Required:     true,
-				ValidateFunc: validation.StringInSlice(qldb.PermissionsMode_Values(), false),
+				Type:             schema.TypeString,
+				Required:         true,
+				ValidateDiagFunc: enum.Validate[types.PermissionsMode](),
 			},
 			names.AttrTags:    tftags.TagsSchema(),
 			names.AttrTagsAll: tftags.TagsSchemaComputed(),
@@ -82,30 +87,29 @@ func ResourceLedger() *schema.Resource {
 }
 
 func resourceLedgerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
 	name := create.Name(d.Get("name").(string), "tf")
 	input := &qldb.CreateLedgerInput{
 		DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)),
 		Name:               aws.String(name),
-		PermissionsMode:    aws.String(d.Get("permissions_mode").(string)),
-		Tags:               GetTagsIn(ctx),
+		PermissionsMode:    types.PermissionsMode(d.Get("permissions_mode").(string)),
+		Tags:               getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("kms_key"); ok {
 		input.KmsKey = aws.String(v.(string))
 	}
 
-	log.Printf("[DEBUG] Creating QLDB Ledger: %s", input)
-	output, err := conn.CreateLedgerWithContext(ctx, input)
+	output, err := conn.CreateLedger(ctx, input)
 
 	if err != nil {
 		return diag.Errorf("creating QLDB Ledger (%s): %s", name, err)
 	}
 
-	d.SetId(aws.StringValue(output.Name))
+	d.SetId(aws.ToString(output.Name))
 
-	if _, err := waitLedgerCreated(ctx, conn, d.Timeout(schema.TimeoutCreate), d.Id()); err != nil {
+	if _, err := waitLedgerCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
 		return diag.Errorf("waiting for QLDB Ledger (%s) create: %s", d.Id(), err)
 	}
 
@@ -113,9 +117,9 @@ func resourceLedgerCreate(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourceLedgerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
-	ledger, err := FindLedgerByName(ctx, conn, d.Id())
+	ledger, err := findLedgerByName(ctx, conn, d.Id())
 
 	if !d.IsNewResource() && tfresource.NotFound(err) {
 		log.Printf("[WARN] QLDB Ledger %s not found, removing from state", d.Id())
@@ -141,16 +145,15 @@ func resourceLedgerRead(ctx context.Context, d *schema.ResourceData, meta interf
 }
 
 func resourceLedgerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
 	if d.HasChange("permissions_mode") {
 		input := &qldb.UpdateLedgerPermissionsModeInput{
 			Name:            aws.String(d.Id()),
-			PermissionsMode: aws.String(d.Get("permissions_mode").(string)),
+			PermissionsMode: types.PermissionsMode(d.Get("permissions_mode").(string)),
 		}
 
-		log.Printf("[INFO] Updating QLDB Ledger permissions mode: %s", input)
-		if _, err := conn.UpdateLedgerPermissionsModeWithContext(ctx, input); err != nil {
+		if _, err := conn.UpdateLedgerPermissionsMode(ctx, input); err != nil {
 			return diag.Errorf("updating QLDB Ledger (%s) permissions mode: %s", d.Id(), err)
 		}
 	}
@@ -165,8 +168,7 @@ func resourceLedgerUpdate(ctx context.Context, d *schema.ResourceData, meta inte
 			input.KmsKey = aws.String(d.Get("kms_key").(string))
 		}
 
-		log.Printf("[INFO] Updating QLDB Ledger: %s", input)
-		if _, err := conn.UpdateLedgerWithContext(ctx, input); err != nil {
+		if _, err := conn.UpdateLedger(ctx, input); err != nil {
 			return diag.Errorf("updating QLDB Ledger (%s): %s", d.Id(), err)
 		}
 	}
@@ -175,19 +177,18 @@ func resourceLedgerUpdate(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourceLedgerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
 	input := &qldb.DeleteLedgerInput{
 		Name: aws.String(d.Id()),
 	}
 
 	log.Printf("[INFO] Deleting QLDB Ledger: %s", d.Id())
-	_, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 5*time.Minute,
-		func() (interface{}, error) {
-			return conn.DeleteLedgerWithContext(ctx, input)
-		}, qldb.ErrCodeResourceInUseException)
+	_, err := tfresource.RetryWhenIsA[*types.ResourceInUseException](ctx, d.Timeout(schema.TimeoutDelete), func() (interface{}, error) {
+		return conn.DeleteLedger(ctx, input)
+	})
 
-	if tfawserr.ErrCodeEquals(err, qldb.ErrCodeResourceNotFoundException) {
+	if errs.IsA[*types.ResourceNotFoundException](err) {
 		return nil
 	}
 
@@ -195,21 +196,21 @@ func resourceLedgerDelete(ctx context.Context, d *schema.ResourceData, meta inte
 		return diag.Errorf("deleting QLDB Ledger (%s): %s", d.Id(), err)
 	}
 
-	if _, err := waitLedgerDeleted(ctx, conn, d.Timeout(schema.TimeoutDelete), d.Id()); err != nil {
+	if _, err := waitLedgerDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil {
 		return diag.Errorf("waiting for QLDB Ledger (%s) delete: %s", d.Id(), err)
 	}
 
 	return nil
 }
 
-func FindLedgerByName(ctx context.Context, conn *qldb.QLDB, name string) (*qldb.DescribeLedgerOutput, error) {
+func findLedgerByName(ctx context.Context, conn *qldb.Client, name string) (*qldb.DescribeLedgerOutput, error) {
 	input := &qldb.DescribeLedgerInput{
 		Name: aws.String(name),
 	}
 
-	output, err := conn.DescribeLedgerWithContext(ctx, input)
+	output, err := conn.DescribeLedger(ctx, input)
 
-	if tfawserr.ErrCodeEquals(err, qldb.ErrCodeResourceNotFoundException) {
+	if errs.IsA[*types.ResourceNotFoundException](err) {
 		return nil, &retry.NotFoundError{
 			LastError:   err,
 			LastRequest: input,
@@ -224,9 +225,9 @@ func FindLedgerByName(ctx context.Context, conn *qldb.QLDB, name string) (*qldb.
 		return nil, tfresource.NewEmptyResultError(input)
 	}
 
-	if state := aws.StringValue(output.State); state == qldb.LedgerStateDeleted {
+	if state := output.State; state == types.LedgerStateDeleted {
 		return nil, &retry.NotFoundError{
-			Message:     state,
+			Message:     string(state),
 			LastRequest: input,
 		}
 	}
@@ -234,9 +235,9 @@ func FindLedgerByName(ctx context.Context, conn *qldb.QLDB, name string) (*qldb.
 	return output, nil
 }
 
-func statusLedgerState(ctx context.Context, conn *qldb.QLDB, name string) retry.StateRefreshFunc {
+func statusLedgerState(ctx context.Context, conn *qldb.Client, name string) retry.StateRefreshFunc {
 	return func() (interface{}, string, error) {
-		output, err := FindLedgerByName(ctx, conn, name)
+		output, err := findLedgerByName(ctx, conn, name)
 
 		if tfresource.NotFound(err) {
 			return nil, "", nil
@@ -246,14 +247,14 @@ func statusLedgerState(ctx context.Context, conn *qldb.QLDB, name string) retry.
 			return nil, "", err
 		}
 
-		return output, aws.StringValue(output.State), nil
+		return output, string(output.State), nil
 	}
 }
 
-func waitLedgerCreated(ctx context.Context, conn *qldb.QLDB, timeout time.Duration, name string) (*qldb.DescribeLedgerOutput, error) {
+func waitLedgerCreated(ctx context.Context, conn *qldb.Client, name string, timeout time.Duration) (*qldb.DescribeLedgerOutput, error) {
 	stateConf := &retry.StateChangeConf{
-		Pending:    []string{qldb.LedgerStateCreating},
-		Target:     []string{qldb.LedgerStateActive},
+		Pending:    enum.Slice(types.LedgerStateCreating),
+		Target:     enum.Slice(types.LedgerStateActive),
 		Refresh:    statusLedgerState(ctx, conn, name),
 		Timeout:    timeout,
 		MinTimeout: 3 * time.Second,
@@ -268,9 +269,9 @@ func waitLedgerCreated(ctx context.Context, conn *qldb.QLDB, timeout time.Durati
 	return nil, err
 }
 
-func waitLedgerDeleted(ctx context.Context, conn *qldb.QLDB, timeout time.Duration, name string) (*qldb.DescribeLedgerOutput, error) {
+func waitLedgerDeleted(ctx context.Context, conn *qldb.Client, name string, timeout time.Duration) (*qldb.DescribeLedgerOutput, error) {
 	stateConf := &retry.StateChangeConf{
-		Pending:    []string{qldb.LedgerStateActive, qldb.LedgerStateDeleting},
+		Pending:    enum.Slice(types.LedgerStateActive, types.LedgerStateDeleting),
 		Target:     []string{},
 		Refresh:    statusLedgerState(ctx, conn, name),
 		Timeout:    timeout,
diff --git a/internal/service/qldb/ledger_data_source.go b/internal/service/qldb/ledger_data_source.go
index 788202e7eab..c8f3715e3dc 100644
--- a/internal/service/qldb/ledger_data_source.go
+++ b/internal/service/qldb/ledger_data_source.go
@@ -1,10 +1,13 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package qldb
 
 import (
 	"context"
 	"regexp"
 
-	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go-v2/aws"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
@@ -13,7 +16,7 @@ import (
 )
 
 // @SDKDataSource("aws_qldb_ledger")
-func DataSourceLedger() *schema.Resource {
+func dataSourceLedger() *schema.Resource {
 	return &schema.Resource{
 		ReadWithoutTimeout: dataSourceLedgerRead,
 
@@ -48,17 +51,17 @@ func DataSourceLedger() *schema.Resource {
 }
 
 func dataSourceLedgerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	name := d.Get("name").(string)
-	ledger, err := FindLedgerByName(ctx, conn, name)
+	ledger, err := findLedgerByName(ctx, conn, name)
 
 	if err != nil {
 		return diag.Errorf("reading QLDB Ledger (%s): %s", name, err)
 	}
 
-	d.SetId(aws.StringValue(ledger.Name))
+	d.SetId(aws.ToString(ledger.Name))
 	d.Set("arn", ledger.Arn)
 	d.Set("deletion_protection", ledger.DeletionProtection)
 	if ledger.EncryptionDescription != nil {
@@ -69,7 +72,7 @@ func dataSourceLedgerRead(ctx context.Context, d *schema.ResourceData, meta inte
 	d.Set("name", ledger.Name)
 	d.Set("permissions_mode", ledger.PermissionsMode)
 
-	tags, err := ListTags(ctx, conn, d.Get("arn").(string))
+	tags, err := listTags(ctx, conn, d.Get("arn").(string))
 
 	if err != nil {
 		return diag.Errorf("listing tags for QLDB Ledger (%s): %s", d.Id(), err)
diff --git a/internal/service/qldb/ledger_data_source_test.go b/internal/service/qldb/ledger_data_source_test.go
index 7fbe88cce9f..e93a8f252cb 100644
--- a/internal/service/qldb/ledger_data_source_test.go
+++ b/internal/service/qldb/ledger_data_source_test.go
@@ -1,13 +1,16 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package qldb_test
 
 import (
 	"fmt"
 	"testing"
 
-	"github.com/aws/aws-sdk-go/service/qldb"
 	sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
 	"github.com/hashicorp/terraform-provider-aws/internal/acctest"
+	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
 func TestAccQLDBLedgerDataSource_basic(t *testing.T) {
@@ -17,8 +20,8 @@ func TestAccQLDBLedgerDataSource_basic(t *testing.T) {
 	datasourceName := "data.aws_qldb_ledger.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		Steps: []resource.TestStep{
 			{
@@ -44,7 +47,7 @@ resource "aws_qldb_ledger" "test" {
   deletion_protection = false
 
   tags = {
-    Env = "test"
+    Name = %[1]q
   }
 }
diff --git a/internal/service/qldb/ledger_test.go b/internal/service/qldb/ledger_test.go
index 0b21e8ae434..6982e76a582 100644
--- a/internal/service/qldb/ledger_test.go
+++ b/internal/service/qldb/ledger_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package qldb_test
 
 import (
@@ -6,7 +9,7 @@ import (
 	"regexp"
 	"testing"
 
-	"github.com/aws/aws-sdk-go/service/qldb"
+	"github.com/aws/aws-sdk-go-v2/service/qldb"
 	sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
 	"github.com/hashicorp/terraform-plugin-testing/terraform"
@@ -14,6 +17,7 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	tfqldb "github.com/hashicorp/terraform-provider-aws/internal/service/qldb"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
+	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
 func TestAccQLDBLedger_basic(t *testing.T) {
@@ -23,8 +27,8 @@ func TestAccQLDBLedger_basic(t *testing.T) {
 	resourceName := "aws_qldb_ledger.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckLedgerDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -56,8 +60,8 @@ func TestAccQLDBLedger_disappears(t *testing.T) {
 	resourceName := "aws_qldb_ledger.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckLedgerDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -79,8 +83,8 @@ func TestAccQLDBLedger_nameGenerated(t *testing.T) {
 	resourceName := "aws_qldb_ledger.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckLedgerDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -107,8 +111,8 @@ func TestAccQLDBLedger_update(t *testing.T) {
 	resourceName := "aws_qldb_ledger.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckLedgerDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -153,8 +157,8 @@ func TestAccQLDBLedger_kmsKey(t *testing.T) {
 	kmsKeyResourceName := "aws_kms_key.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckLedgerDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -188,8 +192,8 @@ func TestAccQLDBLedger_tags(t *testing.T) {
 	resourceName := "aws_qldb_ledger.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckLedgerDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -229,7 +233,7 @@ func TestAccQLDBLedger_tags(t *testing.T) {
 
 func testAccCheckLedgerDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBClient(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_qldb_ledger" {
@@ -264,7 +268,7 @@ func testAccCheckLedgerExists(ctx context.Context, n string, v *qldb.DescribeLed
 			return fmt.Errorf("No QLDB Ledger ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBClient(ctx)
 
 		output, err := tfqldb.FindLedgerByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/qldb/service_package_gen.go b/internal/service/qldb/service_package_gen.go
index 15821d94c84..9316e0bbf2c 100644
--- a/internal/service/qldb/service_package_gen.go
+++ b/internal/service/qldb/service_package_gen.go
@@ -5,6 +5,9 @@ package qldb
 import (
 	"context"
 
+	aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws"
+	qldb_sdkv2 "github.com/aws/aws-sdk-go-v2/service/qldb"
+
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -22,7 +25,7 @@ func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.Servic
 func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource {
 	return []*types.ServicePackageSDKDataSource{
 		{
-			Factory:  DataSourceLedger,
+			Factory:  dataSourceLedger,
 			TypeName: "aws_qldb_ledger",
 		},
 	}
@@ -31,7 +34,7 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac
 func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource {
 	return []*types.ServicePackageSDKResource{
 		{
-			Factory:  ResourceLedger,
+			Factory:  resourceLedger,
 			TypeName: "aws_qldb_ledger",
 			Name:     "Ledger",
 			Tags: &types.ServicePackageResourceTags{
@@ -39,7 +42,7 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka
 			},
 		},
 		{
-			Factory:  ResourceStream,
+			Factory:  resourceStream,
 			TypeName: "aws_qldb_stream",
 			Name:     "Stream",
 			Tags: &types.ServicePackageResourceTags{
@@ -53,4 +56,17 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.QLDB
 }
 
-var ServicePackage = &servicePackage{}
+// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API.
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*qldb_sdkv2.Client, error) {
+	cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config))
+
+	return qldb_sdkv2.NewFromConfig(cfg, func(o *qldb_sdkv2.Options) {
+		if endpoint := config["endpoint"].(string); endpoint != "" {
+			o.EndpointResolver = qldb_sdkv2.EndpointResolverFromURL(endpoint)
+		}
+	}), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/qldb/stream.go b/internal/service/qldb/stream.go
index 0e5ec803fec..28e8d9a8ced 100644
--- a/internal/service/qldb/stream.go
+++ b/internal/service/qldb/stream.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package qldb
 
 import (
@@ -6,14 +9,16 @@ import (
 	"log"
 	"time"
 
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/service/qldb"
-	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/qldb"
+	"github.com/aws/aws-sdk-go-v2/service/qldb/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
+	"github.com/hashicorp/terraform-provider-aws/internal/enum"
+	"github.com/hashicorp/terraform-provider-aws/internal/errs"
 	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/internal/verify"
@@ -22,13 +27,18 @@ import (
 
 // @SDKResource("aws_qldb_stream", name="Stream")
 // @Tags(identifierAttribute="arn")
-func ResourceStream() *schema.Resource {
+func resourceStream() *schema.Resource {
 	return &schema.Resource{
 		CreateWithoutTimeout: resourceStreamCreate,
 		ReadWithoutTimeout:   resourceStreamRead,
 		UpdateWithoutTimeout: resourceStreamUpdate,
 		DeleteWithoutTimeout: resourceStreamDelete,
 
+		Timeouts: &schema.ResourceTimeout{
+			Create: schema.DefaultTimeout(8 * time.Minute),
+			Delete: schema.DefaultTimeout(5 * time.Minute),
+		},
+
 		Schema: map[string]*schema.Schema{
 			"arn": {
 				Type:     schema.TypeString,
@@ -99,7 +109,7 @@ func ResourceStream() *schema.Resource {
 }
 
 func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
 	ledgerName := d.Get("ledger_name").(string)
 	name := d.Get("stream_name").(string)
@@ -107,7 +117,7 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte
 		LedgerName: aws.String(ledgerName),
 		RoleArn:    aws.String(d.Get("role_arn").(string)),
 		StreamName: aws.String(name),
-		Tags:       GetTagsIn(ctx),
+		Tags:       getTagsIn(ctx),
 	}
 
 	if v, ok := d.GetOk("exclusive_end_time"); ok {
@@ -124,16 +134,15 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte
 		input.KinesisConfiguration = expandKinesisConfiguration(v.([]interface{})[0].(map[string]interface{}))
 	}
 
-	log.Printf("[DEBUG] Creating QLDB Stream: %s", input)
-	output, err := conn.StreamJournalToKinesisWithContext(ctx, input)
+	output, err := conn.StreamJournalToKinesis(ctx, input)
 
 	if err != nil {
 		return diag.Errorf("creating QLDB Stream (%s): %s", name, err)
 	}
 
-	d.SetId(aws.StringValue(output.StreamId))
+	d.SetId(aws.ToString(output.StreamId))
 
-	if _, err := waitStreamCreated(ctx, conn, ledgerName, d.Id()); err != nil {
+	if _, err := waitStreamCreated(ctx, conn, ledgerName, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
 		return diag.Errorf("waiting for QLDB Stream (%s) create: %s", d.Id(), err)
 	}
 
@@ -141,10 +150,10 @@ func resourceStreamCreate(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
 	ledgerName := d.Get("ledger_name").(string)
-	stream, err := FindStream(ctx, conn, ledgerName, d.Id())
+	stream, err := findStreamByTwoPartKey(ctx, conn, ledgerName, d.Id())
 
 	if !d.IsNewResource() && tfresource.NotFound(err) {
 		log.Printf("[WARN] QLDB Stream %s not found, removing from state", d.Id())
@@ -158,12 +167,12 @@ func resourceStreamRead(ctx context.Context, d *schema.ResourceData, meta interf
 
 	d.Set("arn", stream.Arn)
 	if stream.ExclusiveEndTime != nil {
-		d.Set("exclusive_end_time", aws.TimeValue(stream.ExclusiveEndTime).Format(time.RFC3339))
+		d.Set("exclusive_end_time", aws.ToTime(stream.ExclusiveEndTime).Format(time.RFC3339))
 	} else {
 		d.Set("exclusive_end_time", nil)
 	}
 	if stream.InclusiveStartTime != nil {
-		d.Set("inclusive_start_time", aws.TimeValue(stream.InclusiveStartTime).Format(time.RFC3339))
+		d.Set("inclusive_start_time", aws.ToTime(stream.InclusiveStartTime).Format(time.RFC3339))
 	} else {
 		d.Set("inclusive_start_time", nil)
 	}
@@ -187,7 +196,7 @@ func resourceStreamUpdate(ctx context.Context, d *schema.ResourceData, meta inte
 }
 
 func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QLDBConn()
+	conn := meta.(*conns.AWSClient).QLDBClient(ctx)
 
 	ledgerName := d.Get("ledger_name").(string)
 	input := &qldb.CancelJournalKinesisStreamInput{
@@ -196,12 +205,11 @@ func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta inte
 	}
 
 	log.Printf("[INFO] Deleting QLDB Stream: %s", d.Id())
-	_, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 5*time.Minute,
-		func() (interface{}, error) {
-			return conn.CancelJournalKinesisStreamWithContext(ctx, input)
-		}, qldb.ErrCodeResourceInUseException)
+	_, err := tfresource.RetryWhenIsA[*types.ResourceInUseException](ctx, d.Timeout(schema.TimeoutDelete), func() (interface{}, error) {
+		return conn.CancelJournalKinesisStream(ctx, input)
+	})
 
-	if tfawserr.ErrCodeEquals(err, qldb.ErrCodeResourceNotFoundException) {
+	if errs.IsA[*types.ResourceNotFoundException](err) {
 		return nil
 	}
 
@@ -209,14 +217,14 @@ func resourceStreamDelete(ctx context.Context, d *schema.ResourceData, meta inte
 		return diag.Errorf("deleting QLDB Stream (%s): %s", d.Id(), err)
 	}
 
-	if _, err := waitStreamDeleted(ctx, conn, ledgerName, d.Id()); err != nil {
+	if _, err := waitStreamDeleted(ctx, conn, ledgerName, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil {
 		return diag.Errorf("waiting for QLDB Stream (%s) delete: %s", d.Id(), err)
 	}
 
 	return nil
 }
 
-func FindStream(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID string) (*qldb.JournalKinesisStreamDescription, error) {
+func findStreamByTwoPartKey(ctx context.Context, conn *qldb.Client, ledgerName, streamID string) (*types.JournalKinesisStreamDescription, error) {
 	input := &qldb.DescribeJournalKinesisStreamInput{
 		LedgerName: aws.String(ledgerName),
 		StreamId:   aws.String(streamID),
@@ -229,10 +237,10 @@ func FindStream(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID strin
 	}
 
 	// See https://docs.aws.amazon.com/qldb/latest/developerguide/streams.create.html#streams.create.states.
-	switch status := aws.StringValue(output.Status); status {
-	case qldb.StreamStatusCompleted, qldb.StreamStatusCanceled, qldb.StreamStatusFailed:
+	switch status := output.Status; status {
+	case types.StreamStatusCompleted, types.StreamStatusCanceled, types.StreamStatusFailed:
 		return nil, &retry.NotFoundError{
-			Message:     status,
+			Message:     string(status),
 			LastRequest: input,
 		}
 	}
@@ -240,10 +248,10 @@ func FindStream(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID strin
 	return output, nil
 }
 
-func findJournalKinesisStream(ctx context.Context, conn *qldb.QLDB, input *qldb.DescribeJournalKinesisStreamInput) (*qldb.JournalKinesisStreamDescription, error) {
-	output, err := conn.DescribeJournalKinesisStreamWithContext(ctx, input)
+func findJournalKinesisStream(ctx context.Context, conn *qldb.Client, input *qldb.DescribeJournalKinesisStreamInput) (*types.JournalKinesisStreamDescription, error) {
+	output, err := conn.DescribeJournalKinesisStream(ctx, input)
 
-	if tfawserr.ErrCodeEquals(err, qldb.ErrCodeResourceNotFoundException) {
+	if errs.IsA[*types.ResourceNotFoundException](err) {
 		return nil, &retry.NotFoundError{
 			LastError:   err,
 			LastRequest: input,
@@ -261,7 +269,7 @@ func findJournalKinesisStream(ctx context.Context, conn *qldb.QLDB, input *qldb.
 	return output.Stream, nil
 }
 
-func statusStreamCreated(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID string) retry.StateRefreshFunc {
+func statusStreamCreated(ctx context.Context, conn *qldb.Client, ledgerName, streamID string) retry.StateRefreshFunc {
 	return func() (interface{}, string, error) {
 		// Don't call FindStream as it maps useful statuses to NotFoundError.
 		output, err := findJournalKinesisStream(ctx, conn, &qldb.DescribeJournalKinesisStreamInput{
@@ -277,23 +285,23 @@ func statusStreamCreated(ctx context.Context, conn *qldb.QLDB, ledgerName, strea
 			return nil, "", err
 		}
 
-		return output, aws.StringValue(output.Status), nil
+		return output, string(output.Status), nil
 	}
 }
 
-func waitStreamCreated(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID string) (*qldb.JournalKinesisStreamDescription, error) {
+func waitStreamCreated(ctx context.Context, conn *qldb.Client, ledgerName, streamID string, timeout time.Duration) (*types.JournalKinesisStreamDescription, error) {
 	stateConf := &retry.StateChangeConf{
-		Pending:    []string{qldb.StreamStatusImpaired},
-		Target:     []string{qldb.StreamStatusActive},
+		Pending:    enum.Slice(types.StreamStatusImpaired),
+		Target:     enum.Slice(types.StreamStatusActive),
 		Refresh:    statusStreamCreated(ctx, conn, ledgerName, streamID),
-		Timeout:    8 * time.Minute,
+		Timeout:    timeout,
 		MinTimeout: 3 * time.Second,
 	}
 
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
 
-	if output, ok := outputRaw.(*qldb.JournalKinesisStreamDescription); ok {
-		tfresource.SetLastError(err, errors.New(aws.StringValue(output.ErrorCause)))
+	if output, ok := outputRaw.(*types.JournalKinesisStreamDescription); ok {
+		tfresource.SetLastError(err, errors.New(string(output.ErrorCause)))
 
 		return output, err
 	}
@@ -301,9 +309,9 @@ func waitStreamCreated(ctx context.Context, conn *qldb.QLDB, ledgerName, streamI
 	return nil, err
 }
 
-func statusStreamDeleted(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID string) retry.StateRefreshFunc {
+func statusStreamDeleted(ctx context.Context, conn *qldb.Client, ledgerName, streamID string) retry.StateRefreshFunc {
 	return func() (interface{}, string, error) {
-		output, err := FindStream(ctx, conn, ledgerName, streamID)
+		output, err := findStreamByTwoPartKey(ctx, conn, ledgerName, streamID)
 
 		if tfresource.NotFound(err) {
 			return nil, "", nil
@@ -313,23 +321,23 @@ func statusStreamDeleted(ctx context.Context, conn *qldb.QLDB, ledgerName, strea
 			return nil, "", err
 		}
 
-		return output, aws.StringValue(output.Status), nil
+		return output, string(output.Status), nil
 	}
 }
 
-func waitStreamDeleted(ctx context.Context, conn *qldb.QLDB, ledgerName, streamID string) (*qldb.JournalKinesisStreamDescription, error) {
+func waitStreamDeleted(ctx context.Context, conn *qldb.Client, ledgerName, streamID string, timeout time.Duration) (*types.JournalKinesisStreamDescription, error) {
 	stateConf := &retry.StateChangeConf{
-		Pending:    []string{qldb.StreamStatusActive, qldb.StreamStatusImpaired},
+		Pending:    enum.Slice(types.StreamStatusActive, types.StreamStatusImpaired),
 		Target:     []string{},
 		Refresh:    statusStreamDeleted(ctx, conn, ledgerName, streamID),
-		Timeout:    5 * time.Minute,
+		Timeout:    timeout,
 		MinTimeout: 1 * time.Second,
 	}
 
 	outputRaw, err := stateConf.WaitForStateContext(ctx)
 
-	if output, ok := outputRaw.(*qldb.JournalKinesisStreamDescription); ok {
-		tfresource.SetLastError(err, errors.New(aws.StringValue(output.ErrorCause)))
+	if output, ok := outputRaw.(*types.JournalKinesisStreamDescription); ok {
+		tfresource.SetLastError(err, errors.New(string(output.ErrorCause)))
 
 		return output, err
 	}
@@ -337,12 +345,12 @@ func waitStreamDeleted(ctx context.Context, conn *qldb.QLDB, ledgerName, streamI
 	return nil, err
 }
 
-func expandKinesisConfiguration(tfMap map[string]interface{}) *qldb.KinesisConfiguration {
+func expandKinesisConfiguration(tfMap map[string]interface{}) *types.KinesisConfiguration {
 	if tfMap == nil {
 		return nil
 	}
 
-	apiObject := &qldb.KinesisConfiguration{}
+	apiObject := &types.KinesisConfiguration{}
 
 	if v, ok := tfMap["aggregation_enabled"].(bool); ok {
 		apiObject.AggregationEnabled = aws.Bool(v)
@@ -355,7 +363,7 @@ func expandKinesisConfiguration(tfMap map[string]interface{}) *qldb.KinesisConfi
 	return apiObject
 }
 
-func flattenKinesisConfiguration(apiObject *qldb.KinesisConfiguration) map[string]interface{} {
+func flattenKinesisConfiguration(apiObject *types.KinesisConfiguration) map[string]interface{} {
 	if apiObject == nil {
 		return nil
 	}
@@ -363,11 +371,11 @@ func flattenKinesisConfiguration(apiObject *qldb.KinesisConfiguration) map[strin
 	tfMap := map[string]interface{}{}
 
 	if v := apiObject.AggregationEnabled; v != nil {
-		tfMap["aggregation_enabled"] = aws.BoolValue(v)
+		tfMap["aggregation_enabled"] = aws.ToBool(v)
 	}
 
 	if v := apiObject.StreamArn; v != nil {
-		tfMap["stream_arn"] = aws.StringValue(v)
+		tfMap["stream_arn"] = aws.ToString(v)
 	}
 
 	return tfMap
diff --git a/internal/service/qldb/stream_test.go b/internal/service/qldb/stream_test.go
index bef510fe9cb..9e6c46102e1 100644
--- a/internal/service/qldb/stream_test.go
+++ b/internal/service/qldb/stream_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package qldb_test
 
 import (
@@ -6,7 +9,7 @@ import (
 	"regexp"
 	"testing"
 
-	"github.com/aws/aws-sdk-go/service/qldb"
+	"github.com/aws/aws-sdk-go-v2/service/qldb/types"
 	sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
 	"github.com/hashicorp/terraform-plugin-testing/terraform"
@@ -14,17 +17,18 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	tfqldb "github.com/hashicorp/terraform-provider-aws/internal/service/qldb"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
+	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
 func TestAccQLDBStream_basic(t *testing.T) {
 	ctx := acctest.Context(t)
-	var v qldb.JournalKinesisStreamDescription
+	var v types.JournalKinesisStreamDescription
 	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_qldb_stream.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckStreamDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -50,13 +54,13 @@ func TestAccQLDBStream_basic(t *testing.T) {
 
 func TestAccQLDBStream_disappears(t *testing.T) {
 	ctx := acctest.Context(t)
-	var v qldb.JournalKinesisStreamDescription
+	var v types.JournalKinesisStreamDescription
 	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_qldb_stream.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckStreamDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -74,13 +78,13 @@ func TestAccQLDBStream_disappears(t *testing.T) {
 
 func TestAccQLDBStream_tags(t *testing.T) {
 	ctx := acctest.Context(t)
-	var v qldb.JournalKinesisStreamDescription
+	var v types.JournalKinesisStreamDescription
 	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_qldb_stream.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckStreamDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -115,13 +119,13 @@ func TestAccQLDBStream_tags(t *testing.T) {
 
 func TestAccQLDBStream_withEndTime(t *testing.T) {
 	ctx := acctest.Context(t)
-	var v qldb.JournalKinesisStreamDescription
+	var v types.JournalKinesisStreamDescription
 	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	resourceName := "aws_qldb_stream.test"
 
 	resource.ParallelTest(t, resource.TestCase{
-		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, qldb.EndpointsID) },
-		ErrorCheck:               acctest.ErrorCheck(t, qldb.EndpointsID),
+		PreCheck:                 func() { acctest.PreCheck(ctx, t); acctest.PreCheckPartitionHasService(t, names.QLDBEndpointID) },
+		ErrorCheck:               acctest.ErrorCheck(t, names.QLDBEndpointID),
 		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
 		CheckDestroy:             testAccCheckStreamDestroy(ctx),
 		Steps: []resource.TestStep{
@@ -140,7 +144,7 @@ func TestAccQLDBStream_withEndTime(t *testing.T) {
 	})
 }
 
-func testAccCheckStreamExists(ctx context.Context, n string, v *qldb.JournalKinesisStreamDescription) resource.TestCheckFunc {
+func testAccCheckStreamExists(ctx context.Context, n string, v *types.JournalKinesisStreamDescription) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		rs, ok := s.RootModule().Resources[n]
 		if !ok {
@@ -151,9 +155,9 @@ func testAccCheckStreamExists(ctx context.Context, n string, v *qldb.JournalKine
 			return fmt.Errorf("No QLDB Stream ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBClient(ctx)
 
-		output, err := tfqldb.FindStream(ctx, conn, rs.Primary.Attributes["ledger_name"], rs.Primary.ID)
+		output, err := tfqldb.FindStreamByTwoPartKey(ctx, conn, rs.Primary.Attributes["ledger_name"], rs.Primary.ID)
 
 		if err != nil {
 			return err
@@ -167,14 +171,14 @@ func testAccCheckStreamExists(ctx context.Context, n string, v *qldb.JournalKine
 
 func testAccCheckStreamDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QLDBClient(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_qldb_stream" {
 				continue
 			}
 
-			_, err := tfqldb.FindStream(ctx, conn, rs.Primary.Attributes["ledger_name"], rs.Primary.ID)
+			_, err := tfqldb.FindStreamByTwoPartKey(ctx, conn, rs.Primary.Attributes["ledger_name"], rs.Primary.ID)
 
 			if tfresource.NotFound(err) {
 				continue
diff --git a/internal/service/qldb/sweep.go b/internal/service/qldb/sweep.go
index 59da795c78c..10d1d85d2d1 100644
--- a/internal/service/qldb/sweep.go
+++ b/internal/service/qldb/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -7,11 +10,10 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/qldb" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/qldb" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -33,40 +35,37 @@ func init() { func sweepLedgers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).QLDBConn() + conn := client.QLDBClient(ctx) input := &qldb.ListLedgersInput{} sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListLedgersPagesWithContext(ctx, input, func(page *qldb.ListLedgersOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := qldb.NewListLedgersPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping QLDB Ledger sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing QLDB Ledgers (%s): %w", region, err) } for _, v := range page.Ledgers { - r := ResourceLedger() + r := resourceLedger() d := r.Data(nil) - d.SetId(aws.StringValue(v.Name)) + d.SetId(aws.ToString(v.Name)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping QLDB Ledger sweep for %s: %s", region, err) - return nil } - if err != nil { - return fmt.Errorf("error listing QLDB Ledgers (%s): %w", region, err) - } - - err = sweep.SweepOrchestratorWithContext(ctx, 
sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping QLDB Ledgers (%s): %w", region, err) @@ -77,18 +76,26 @@ func sweepLedgers(region string) error { func sweepStreams(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).QLDBConn() + conn := client.QLDBClient(ctx) input := &qldb.ListLedgersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListLedgersPagesWithContext(ctx, input, func(page *qldb.ListLedgersOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := qldb.NewListLedgersPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping QLDB Stream sweep for %s: %s", region, err) + return sweeperErrs.ErrorOrNil() // In case we have completed some pages, but had errors + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing QLDB Ledgers (%s): %w", region, err)) } for _, v := range page.Ledgers { @@ -96,45 +103,31 @@ func sweepStreams(region string) error { LedgerName: v.Name, } - err := conn.ListJournalKinesisStreamsForLedgerPagesWithContext(ctx, input, func(page *qldb.ListJournalKinesisStreamsForLedgerOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := qldb.NewListJournalKinesisStreamsForLedgerPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + continue + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing QLDB Streams (%s): %w", region, err)) } for _, v := range page.Streams { - r := ResourceStream() + r := resourceStream() d := 
r.Data(nil) - d.SetId(aws.StringValue(v.StreamId)) + d.SetId(aws.ToString(v.StreamId)) d.Set("ledger_name", v.LedgerName) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - continue - } - - if err != nil { - sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing QLDB Streams (%s): %w", region, err)) } } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping QLDB Stream sweep for %s: %s", region, err) - return sweeperErrs.ErrorOrNil() // In case we have completed some pages, but had errors - } - - if err != nil { - sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing QLDB Ledgers (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping QLDB Streams (%s): %w", region, err)) diff --git a/internal/service/qldb/tags_gen.go b/internal/service/qldb/tags_gen.go index f1cb9f0b03c..fe14cb49016 100644 --- a/internal/service/qldb/tags_gen.go +++ b/internal/service/qldb/tags_gen.go @@ -5,24 +5,23 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/qldb" - "github.com/aws/aws-sdk-go/service/qldb/qldbiface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/qldb" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists qldb service tags. +// listTags lists qldb service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn qldbiface.QLDBAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *qldb.Client, identifier string) (tftags.KeyValueTags, error) { input := &qldb.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } - output, err := conn.ListTagsForResourceWithContext(ctx, input) + output, err := conn.ListTagsForResource(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +33,7 @@ func ListTags(ctx context.Context, conn qldbiface.QLDBAPI, identifier string) (t // ListTags lists qldb service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).QLDBConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).QLDBClient(ctx), identifier) if err != nil { return err @@ -54,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from qldb service tags. +// KeyValueTags creates tftags.KeyValueTags from qldb service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns qldb service tags from Context. +// getTagsIn returns qldb service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets qldb service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets qldb service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates qldb service tags. +// updateTags updates qldb service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn qldbiface.QLDBAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *qldb.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -90,10 +89,10 @@ func UpdateTags(ctx context.Context, conn qldbiface.QLDBAPI, identifier string, if len(removedTags) > 0 { input := &qldb.UntagResourceInput{ ResourceArn: aws.String(identifier), - TagKeys: aws.StringSlice(removedTags.Keys()), + TagKeys: removedTags.Keys(), } - _, err := conn.UntagResourceWithContext(ctx, input) + _, err := conn.UntagResource(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -108,7 +107,7 @@ func UpdateTags(ctx context.Context, conn qldbiface.QLDBAPI, identifier string, Tags: Tags(updatedTags), } - _, err := conn.TagResourceWithContext(ctx, input) + _, err := conn.TagResource(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -121,5 +120,5 @@ func UpdateTags(ctx context.Context, conn qldbiface.QLDBAPI, identifier string, // UpdateTags updates qldb service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).QLDBConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).QLDBClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/quicksight/account_subscription.go b/internal/service/quicksight/account_subscription.go index 002347e14a7..3e61adfa59b 100644 --- a/internal/service/quicksight/account_subscription.go +++ b/internal/service/quicksight/account_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -35,96 +38,98 @@ func ResourceAccountSubscription() *schema.Resource { Delete: schema.DefaultTimeout(10 * time.Minute), }, - Schema: map[string]*schema.Schema{ - "account_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "account_subscription_status": { - Type: schema.TypeString, - Computed: true, - }, - "active_directory_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - "admin_group": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, - ForceNew: true, - }, - "authentication_method": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(quicksight.AuthenticationMethodOption_Values(), false), - }, - "author_group": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, - ForceNew: true, - }, - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: verify.ValidAccountID, - }, - "contact_number": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - "directory_id": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - "edition": { - Type: schema.TypeString, - Required: 
true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(quicksight.Edition_Values(), false), - }, - "email_address": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - "first_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - "last_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - "notification_email": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "reader_group": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "realm": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "account_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "account_subscription_status": { + Type: schema.TypeString, + Computed: true, + }, + "active_directory_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "admin_group": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + }, + "authentication_method": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(quicksight.AuthenticationMethodOption_Values(), false), + }, + "author_group": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidAccountID, + }, + "contact_number": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "directory_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "edition": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: 
validation.StringInSlice(quicksight.Edition_Values(), false), + }, + "email_address": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "first_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "last_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "notification_email": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "reader_group": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "realm": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + } }, } } @@ -134,7 +139,7 @@ const ( ) func resourceAccountSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("aws_account_id"); ok { @@ -208,7 +213,7 @@ func resourceAccountSubscriptionCreate(ctx context.Context, d *schema.ResourceDa } func resourceAccountSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) out, err := FindAccountSubscriptionByID(ctx, conn, d.Id()) @@ -237,7 +242,7 @@ func resourceAccountSubscriptionRead(ctx context.Context, d *schema.ResourceData } func resourceAccountSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) log.Printf("[INFO] Deleting QuickSight AccountSubscription %s", d.Id()) diff --git a/internal/service/quicksight/account_subscription_test.go b/internal/service/quicksight/account_subscription_test.go index 47488aa66f1..09f744ecfcf 100644 --- 
a/internal/service/quicksight/account_subscription_test.go +++ b/internal/service/quicksight/account_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -81,7 +84,7 @@ func testAccAccountSubscription_disappears(t *testing.T) { func testAccCheckAccountSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_account_subscription" { @@ -118,7 +121,7 @@ func testAccCheckAccountSubscriptionDisableTerminationProtection(ctx context.Con return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameAccountSubscription, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) defaultNs := "default" _, err := conn.UpdateAccountSettingsWithContext(ctx, &quicksight.UpdateAccountSettingsInput{ AwsAccountId: aws.String(rs.Primary.ID), @@ -145,7 +148,7 @@ func testAccCheckAccountSubscriptionExists(ctx context.Context, name string, acc return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameAccountSubscription, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) resp, err := tfquicksight.FindAccountSubscriptionByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameAccountSubscription, rs.Primary.ID, err) diff --git a/internal/service/quicksight/analysis.go b/internal/service/quicksight/analysis.go new file mode 100644 index 
00000000000..e8a48827919 --- /dev/null +++ b/internal/service/quicksight/analysis.go @@ -0,0 +1,399 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package quicksight + +import ( + "context" + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + quicksightschema "github.com/hashicorp/terraform-provider-aws/internal/service/quicksight/schema" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +const ( + recoveryWindowInDaysMin = 7 + recoveryWindowInDaysMax = 30 + recoveryWindowInDaysDefault = recoveryWindowInDaysMax +) + +// @SDKResource("aws_quicksight_analysis", name="Analysis") +// @Tags(identifierAttribute="arn") +func ResourceAnalysis() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceAnalysisCreate, + ReadWithoutTimeout: resourceAnalysisRead, + UpdateWithoutTimeout: resourceAnalysisUpdate, + DeleteWithoutTimeout: resourceAnalysisDelete, + + Importer: &schema.ResourceImporter{ + StateContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + d.Set("recovery_window_in_days", recoveryWindowInDaysDefault) + return []*schema.ResourceData{d}, nil + }, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(5 * time.Minute), + Update: 
schema.DefaultTimeout(5 * time.Minute), + Delete: schema.DefaultTimeout(5 * time.Minute), + }, + + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidAccountID, + }, + "created_time": { + Type: schema.TypeString, + Computed: true, + }, + "analysis_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "definition": quicksightschema.AnalysisDefinitionSchema(), + "last_updated_time": { + Type: schema.TypeString, + Computed: true, + }, + "last_published_time": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + "parameters": quicksightschema.ParametersSchema(), + "permissions": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 64, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "actions": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 16, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principal": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + }, + }, + }, + "recovery_window_in_days": { + Type: schema.TypeInt, + Optional: true, + Default: 30, + ValidateFunc: validation.Any( + validation.IntBetween(recoveryWindowInDaysMin, recoveryWindowInDaysMax), + validation.IntInSlice([]int{0}), + ), + }, + "source_entity": quicksightschema.AnalysisSourceEntitySchema(), + "status": { + Type: schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "theme_arn": { + Type: schema.TypeString, + Optional: true, + }, + } + }, + + CustomizeDiff: verify.SetTagsDiff, + } +} + +const ( + ResNameAnalysis = "Analysis" +) + +func 
resourceAnalysisCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId := meta.(*conns.AWSClient).AccountID + if v, ok := d.GetOk("aws_account_id"); ok { + awsAccountId = v.(string) + } + analysisId := d.Get("analysis_id").(string) + + d.SetId(createAnalysisId(awsAccountId, analysisId)) + + input := &quicksight.CreateAnalysisInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + Name: aws.String(d.Get("name").(string)), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("source_entity"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.SourceEntity = quicksightschema.ExpandAnalysisSourceEntity(v.([]interface{})) + } + + if v, ok := d.GetOk("definition"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Definition = quicksightschema.ExpandAnalysisDefinition(d.Get("definition").([]interface{})) + } + + if v, ok := d.GetOk("parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Parameters = quicksightschema.ExpandParameters(d.Get("parameters").([]interface{})) + } + + if v, ok := d.GetOk("permissions"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Permissions = expandResourcePermissions(v.([]interface{})) + } + + _, err := conn.CreateAnalysisWithContext(ctx, input) + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionCreating, ResNameAnalysis, d.Get("name").(string), err) + } + + if _, err := waitAnalysisCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return create.DiagError(names.QuickSight, create.ErrActionWaitingForCreation, ResNameAnalysis, d.Id(), err) + } + + return resourceAnalysisRead(ctx, d, meta) +} + +func resourceAnalysisRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := 
meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, analysisId, err := ParseAnalysisId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + out, err := FindAnalysisByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] QuickSight Analysis (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionReading, ResNameAnalysis, d.Id(), err) + } + + // Resource is logically deleted when its status is DELETED + if !d.IsNewResource() && aws.StringValue(out.Status) == quicksight.ResourceStatusDeleted { + log.Printf("[WARN] QuickSight Analysis (%s) deleted, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("arn", out.Arn) + d.Set("aws_account_id", awsAccountId) + d.Set("created_time", out.CreatedTime.Format(time.RFC3339)) + d.Set("last_updated_time", out.LastUpdatedTime.Format(time.RFC3339)) + d.Set("name", out.Name) + d.Set("status", out.Status) + d.Set("analysis_id", out.AnalysisId) + + descResp, err := conn.DescribeAnalysisDefinitionWithContext(ctx, &quicksight.DescribeAnalysisDefinitionInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + }) + + if err != nil { + return diag.Errorf("describing QuickSight Analysis (%s) Definition: %s", d.Id(), err) + } + + if err := d.Set("definition", quicksightschema.FlattenAnalysisDefinition(descResp.Definition)); err != nil { + return diag.Errorf("setting definition: %s", err) + } + + permsResp, err := conn.DescribeAnalysisPermissionsWithContext(ctx, &quicksight.DescribeAnalysisPermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + }) + + if err != nil { + return diag.Errorf("describing QuickSight Analysis (%s) Permissions: %s", d.Id(), err) + } + + if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { + return diag.Errorf("setting permissions: %s",
err) + } + + return nil +} + +func resourceAnalysisUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, analysisId, err := ParseAnalysisId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + if d.HasChangesExcept("permissions", "tags", "tags_all") { + in := &quicksight.UpdateAnalysisInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + Name: aws.String(d.Get("name").(string)), + } + + _, createdFromEntity := d.GetOk("source_entity") + if createdFromEntity { + in.SourceEntity = quicksightschema.ExpandAnalysisSourceEntity(d.Get("source_entity").([]interface{})) + } else { + in.Definition = quicksightschema.ExpandAnalysisDefinition(d.Get("definition").([]interface{})) + } + + if v, ok := d.GetOk("parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.Parameters = quicksightschema.ExpandParameters(d.Get("parameters").([]interface{})) + } + + log.Printf("[DEBUG] Updating QuickSight Analysis (%s): %#v", d.Id(), in) + _, err := conn.UpdateAnalysisWithContext(ctx, in) + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionUpdating, ResNameAnalysis, d.Id(), err) + } + + if _, err := waitAnalysisUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return create.DiagError(names.QuickSight, create.ErrActionWaitingForUpdate, ResNameAnalysis, d.Id(), err) + } + } + + if d.HasChange("permissions") { + oraw, nraw := d.GetChange("permissions") + o := oraw.([]interface{}) + n := nraw.([]interface{}) + + toGrant, toRevoke := DiffPermissions(o, n) + + params := &quicksight.UpdateAnalysisPermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + } + + if len(toGrant) > 0 { + params.GrantPermissions = toGrant + } + + if len(toRevoke) > 0 { + params.RevokePermissions = toRevoke + } + + _, err = 
conn.UpdateAnalysisPermissionsWithContext(ctx, params) + + if err != nil { + return diag.Errorf("updating QuickSight Analysis (%s) permissions: %s", analysisId, err) + } + } + + return resourceAnalysisRead(ctx, d, meta) +} + +func resourceAnalysisDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, analysisId, err := ParseAnalysisId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + input := &quicksight.DeleteAnalysisInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + } + + recoveryWindowInDays := d.Get("recovery_window_in_days").(int) + if recoveryWindowInDays == 0 { + input.ForceDeleteWithoutRecovery = aws.Bool(true) + } else { + input.RecoveryWindowInDays = aws.Int64(int64(recoveryWindowInDays)) + } + + log.Printf("[INFO] Deleting QuickSight Analysis %s", d.Id()) + _, err = conn.DeleteAnalysisWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionDeleting, ResNameAnalysis, d.Id(), err) + } + + return nil +} + +func FindAnalysisByID(ctx context.Context, conn *quicksight.QuickSight, id string) (*quicksight.Analysis, error) { + awsAccountId, analysisId, err := ParseAnalysisId(id) + if err != nil { + return nil, err + } + + descOpts := &quicksight.DescribeAnalysisInput{ + AwsAccountId: aws.String(awsAccountId), + AnalysisId: aws.String(analysisId), + } + + out, err := conn.DescribeAnalysisWithContext(ctx, descOpts) + + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: descOpts, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.Analysis == nil { + return nil, tfresource.NewEmptyResultError(descOpts) + } + + return out.Analysis, nil +} + +func 
ParseAnalysisId(id string) (string, string, error) { + parts := strings.SplitN(id, ",", 2) + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%s), expected AWS_ACCOUNT_ID,ANALYSIS_ID", id) + } + return parts[0], parts[1], nil +} + +func createAnalysisId(awsAccountID, analysisId string) string { + return fmt.Sprintf("%s,%s", awsAccountID, analysisId) +} diff --git a/internal/service/quicksight/analysis_test.go b/internal/service/quicksight/analysis_test.go new file mode 100644 index 00000000000..a6ed8be3701 --- /dev/null +++ b/internal/service/quicksight/analysis_test.go @@ -0,0 +1,614 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package quicksight_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfquicksight "github.com/hashicorp/terraform-provider-aws/internal/service/quicksight" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccQuickSightAnalysis_basic(t *testing.T) { + ctx := acctest.Context(t) + + var analysis quicksight.Analysis + resourceName := "aws_quicksight_analysis.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: 
acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAnalysisDestroy(ctx, false), + Steps: []resource.TestStep{ + { + Config: testAccAnalysisConfig_basic(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + resource.TestCheckResourceAttr(resourceName, "analysis_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccQuickSightAnalysis_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var analysis quicksight.Analysis + resourceName := "aws_quicksight_analysis.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAnalysisDestroy(ctx, false), + Steps: []resource.TestStep{ + { + Config: testAccAnalysisConfig_basic(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfquicksight.ResourceAnalysis(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccQuickSightAnalysis_sourceEntity(t *testing.T) { + ctx := acctest.Context(t) + + var analysis quicksight.Analysis + resourceName := "aws_quicksight_analysis.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + sourceName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + sourceId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + 
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAnalysisDestroy(ctx, false), + Steps: []resource.TestStep{ + { + Config: testAccAnalysisConfig_TemplateSourceEntity(rId, rName, sourceId, sourceName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + resource.TestCheckResourceAttr(resourceName, "analysis_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + acctest.CheckResourceAttrRegionalARN(resourceName, "source_entity.0.source_template.0.arn", "quicksight", fmt.Sprintf("template/%s", sourceId)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"source_entity"}, + }, + }, + }) +} + +func TestAccQuickSightAnalysis_update(t *testing.T) { + ctx := acctest.Context(t) + + var analysis quicksight.Analysis + resourceName := "aws_quicksight_analysis.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAnalysisDestroy(ctx, false), + Steps: []resource.TestStep{ + { + Config: testAccAnalysisConfig_basic(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + resource.TestCheckResourceAttr(resourceName, "analysis_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", 
rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + ), + }, + { + Config: testAccAnalysisConfig_basic(rId, rNameUpdated), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + resource.TestCheckResourceAttr(resourceName, "analysis_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rNameUpdated), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusUpdateSuccessful), + ), + }, + }, + }) +} + +func TestAccQuickSightAnalysis_parametersConfig(t *testing.T) { + ctx := acctest.Context(t) + + var analysis quicksight.Analysis + resourceName := "aws_quicksight_analysis.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAnalysisDestroy(ctx, false), + Steps: []resource.TestStep{ + { + Config: testAccAnalysisConfig_ParametersConfig(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + resource.TestCheckResourceAttr(resourceName, "analysis_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parameters"}, + }, + }, + }) +} + +func TestAccQuickSightAnalysis_forceDelete(t *testing.T) { + ctx := acctest.Context(t) + + var analysis quicksight.Analysis + resourceName := "aws_quicksight_analysis.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAnalysisDestroy(ctx, true), + Steps: []resource.TestStep{ + { + Config: testAccAnalysisConfig_ForceDelete(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAnalysisExists(ctx, resourceName, &analysis), + resource.TestCheckResourceAttr(resourceName, "analysis_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + ), + }, + }, + }) +} + +func testAccCheckAnalysisDestroy(ctx context.Context, forceDelete bool) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_quicksight_analysis" { + continue + } + + output, err := tfquicksight.FindAnalysisByID(ctx, conn, rs.Primary.ID) + if err != nil { + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + continue + } + return err + } + + if output != nil && (forceDelete || aws.StringValue(output.Status) != quicksight.ResourceStatusDeleted) { + return fmt.Errorf("QuickSight Analysis (%s) still exists", rs.Primary.ID) + } + } + + return nil + } +} + +func testAccCheckAnalysisExists(ctx context.Context, name string, analysis *quicksight.Analysis) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameAnalysis, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence,
tfquicksight.ResNameAnalysis, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) + output, err := tfquicksight.FindAnalysisByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameAnalysis, rs.Primary.ID, err) + } + + *analysis = *output + + return nil + } +} + +func testAccAnalysisConfigBase(rId string, rName string) string { + return acctest.ConfigCompose( + testAccDataSetConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_data_set" "test" { + data_set_id = %[1]q + name = %[2]q + import_mode = "SPICE" + + physical_table_map { + physical_table_map_id = %[1]q + s3_source { + data_source_arn = aws_quicksight_data_source.test.arn + input_columns { + name = "Column1" + type = "STRING" + } + input_columns { + name = "Column2" + type = "STRING" + } + upload_settings {} + } + } + logical_table_map { + logical_table_map_id = %[1]q + alias = "Group1" + source { + physical_table_id = %[1]q + } + data_transforms { + cast_column_type_operation { + column_name = "Column2" + new_column_type = "INTEGER" + } + } + } + + lifecycle { + ignore_changes = [ + physical_table_map + ] + } +} +`, rId, rName)) +} + +func testAccAnalysisConfig_basic(rId, rName string) string { + return acctest.ConfigCompose( + testAccAnalysisConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_analysis" "test" { + analysis_id = %[1]q + name = %[2]q + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.test.arn + identifier = "1" + } + sheets { + title = "Test" + sheet_id = "Test1" + visuals { + custom_content_visual { + data_set_identifier = "1" + title { + format_text { + plain_text = "Test" + } + } + visual_id = "Test1" + } + } + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Test" + } + } + chart_configuration { + field_wells { + 
line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +`, rId, rName)) +} + +func testAccAnalysisConfig_TemplateSourceEntity(rId, rName, sourceId, sourceName string) string { + return acctest.ConfigCompose( + testAccAnalysisConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_template" "test" { + template_id = %[3]q + name = %[4]q + version_description = "test" + definition { + data_set_configuration { + data_set_schema { + column_schema_list { + name = "Column1" + data_type = "STRING" + } + column_schema_list { + name = "Column2" + data_type = "INTEGER" + } + } + placeholder = "1" + } + sheets { + title = "Test" + sheet_id = "Test1" + visuals { + custom_content_visual { + data_set_identifier = "1" + title { + format_text { + plain_text = "Test" + } + } + visual_id = "Test1" + } + } + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Test" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} + +resource "aws_quicksight_analysis" "test" { + analysis_id = %[1]q + name = %[2]q + source_entity { + source_template { + arn = aws_quicksight_template.test.arn + data_set_references { + data_set_arn = aws_quicksight_data_set.test.arn + data_set_placeholder = "1" + } + } + } +} +`, rId, rName, sourceId, sourceName)) +} + 
+func testAccAnalysisConfig_ParametersConfig(rId, rName string) string { + return acctest.ConfigCompose( + testAccAnalysisConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_analysis" "test" { + analysis_id = %[1]q + name = %[2]q + parameters { + string_parameters { + name = "test" + values = ["value"] + } + } + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.test.arn + identifier = "1" + } + parameter_declarations { + string_parameter_declaration { + name = "test" + parameter_value_type = "SINGLE_VALUED" + default_values { + static_values = ["value"] + } + values_when_unset { + value_when_unset_option = "NULL" + } + } + } + sheets { + title = "Example" + sheet_id = "Example1" + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Example" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +`, rId, rName)) +} + +func testAccAnalysisConfig_ForceDelete(rId, rName string) string { + return acctest.ConfigCompose( + testAccAnalysisConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_analysis" "test" { + analysis_id = %[1]q + name = %[2]q + + recovery_window_in_days = 0 + + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.test.arn + identifier = "1" + } + sheets { + title = "Example" + sheet_id = "Example1" + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Example" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + 
categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +`, rId, rName)) +} diff --git a/internal/service/quicksight/dashboard.go b/internal/service/quicksight/dashboard.go new file mode 100644 index 00000000000..9ec1c5a82e7 --- /dev/null +++ b/internal/service/quicksight/dashboard.go @@ -0,0 +1,430 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package quicksight + +import ( + "context" + "fmt" + "log" + "strconv" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + quicksightschema "github.com/hashicorp/terraform-provider-aws/internal/service/quicksight/schema" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_quicksight_dashboard", name="Dashboard") +// @Tags(identifierAttribute="arn") +func ResourceDashboard() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceDashboardCreate, + ReadWithoutTimeout: resourceDashboardRead, + UpdateWithoutTimeout: 
resourceDashboardUpdate, + DeleteWithoutTimeout: resourceDashboardDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(5 * time.Minute), + Update: schema.DefaultTimeout(5 * time.Minute), + Delete: schema.DefaultTimeout(5 * time.Minute), + }, + + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidAccountID, + }, + "created_time": { + Type: schema.TypeString, + Computed: true, + }, + "dashboard_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "dashboard_publish_options": quicksightschema.DashboardPublishOptionsSchema(), + "definition": quicksightschema.DashboardDefinitionSchema(), + "last_updated_time": { + Type: schema.TypeString, + Computed: true, + }, + "last_published_time": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + "parameters": quicksightschema.ParametersSchema(), + "permissions": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 64, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "actions": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 16, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principal": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + }, + }, + }, + "source_entity": quicksightschema.DashboardSourceEntitySchema(), + "source_entity_arn": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: 
tftags.TagsSchemaComputed(), + "theme_arn": { + Type: schema.TypeString, + Optional: true, + }, + "version_description": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 512), + }, + "version_number": { + Type: schema.TypeInt, + Computed: true, + }, + } + }, + + CustomizeDiff: customdiff.All( + refreshOutputsDiff, + verify.SetTagsDiff, + ), + } +} + +const ( + ResNameDashboard = "Dashboard" +) + +func resourceDashboardCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId := meta.(*conns.AWSClient).AccountID + if v, ok := d.GetOk("aws_account_id"); ok { + awsAccountId = v.(string) + } + dashboardId := d.Get("dashboard_id").(string) + + d.SetId(createDashboardId(awsAccountId, dashboardId)) + + input := &quicksight.CreateDashboardInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + Name: aws.String(d.Get("name").(string)), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("version_description"); ok { + input.VersionDescription = aws.String(v.(string)) + } + + if v, ok := d.GetOk("source_entity"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.SourceEntity = quicksightschema.ExpandDashboardSourceEntity(v.([]interface{})) + } + + if v, ok := d.GetOk("definition"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Definition = quicksightschema.ExpandDashboardDefinition(d.Get("definition").([]interface{})) + } + + if v, ok := d.GetOk("dashboard_publish_options"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.DashboardPublishOptions = quicksightschema.ExpandDashboardPublishOptions(d.Get("dashboard_publish_options").([]interface{})) + } + + if v, ok := d.GetOk("parameters"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Parameters = 
quicksightschema.ExpandParameters(d.Get("parameters").([]interface{})) + } + + if v, ok := d.GetOk("permissions"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Permissions = expandResourcePermissions(v.([]interface{})) + } + + _, err := conn.CreateDashboardWithContext(ctx, input) + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionCreating, ResNameDashboard, d.Get("name").(string), err) + } + + if _, err := waitDashboardCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return create.DiagError(names.QuickSight, create.ErrActionWaitingForCreation, ResNameDashboard, d.Id(), err) + } + + return resourceDashboardRead(ctx, d, meta) +} + +func resourceDashboardRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, dashboardId, err := ParseDashboardId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + out, err := FindDashboardByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] QuickSight Dashboard (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionReading, ResNameDashboard, d.Id(), err) + } + + d.Set("arn", out.Arn) + d.Set("aws_account_id", awsAccountId) + d.Set("created_time", out.CreatedTime.Format(time.RFC3339)) + d.Set("last_updated_time", out.LastUpdatedTime.Format(time.RFC3339)) + if out.LastPublishedTime != nil { + d.Set("last_published_time", out.LastPublishedTime.Format(time.RFC3339)) + } + d.Set("name", out.Name) + d.Set("status", out.Version.Status) + d.Set("source_entity_arn", out.Version.SourceEntityArn) + d.Set("dashboard_id", out.DashboardId) + d.Set("version_description", out.Version.Description) + d.Set("version_number", out.Version.VersionNumber) + + descResp, err := conn.DescribeDashboardDefinitionWithContext(ctx, &quicksight.DescribeDashboardDefinitionInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId:
aws.String(dashboardId), + VersionNumber: out.Version.VersionNumber, + }) + + if err != nil { + return diag.Errorf("describing QuickSight Dashboard (%s) Definition: %s", d.Id(), err) + } + + if err := d.Set("definition", quicksightschema.FlattenDashboardDefinition(descResp.Definition)); err != nil { + return diag.Errorf("setting definition: %s", err) + } + + if err := d.Set("dashboard_publish_options", quicksightschema.FlattenDashboardPublishOptions(descResp.DashboardPublishOptions)); err != nil { + return diag.Errorf("setting dashboard_publish_options: %s", err) + } + + permsResp, err := conn.DescribeDashboardPermissionsWithContext(ctx, &quicksight.DescribeDashboardPermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + }) + + if err != nil { + return diag.Errorf("describing QuickSight Dashboard (%s) Permissions: %s", d.Id(), err) + } + + if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { + return diag.Errorf("setting permissions: %s", err) + } + + return nil +} + +func resourceDashboardUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, dashboardId, err := ParseDashboardId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + if d.HasChangesExcept("permissions", "tags", "tags_all") { + in := &quicksight.UpdateDashboardInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + Name: aws.String(d.Get("name").(string)), + VersionDescription: aws.String(d.Get("version_description").(string)), + } + + _, createdFromEntity := d.GetOk("source_entity") + if createdFromEntity { + in.SourceEntity = quicksightschema.ExpandDashboardSourceEntity(d.Get("source_entity").([]interface{})) + } else { + in.Definition = quicksightschema.ExpandDashboardDefinition(d.Get("definition").([]interface{})) + } + + if v, ok := d.GetOk("parameters"); ok && 
len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.Parameters = quicksightschema.ExpandParameters(d.Get("parameters").([]interface{})) + } + + if v, ok := d.GetOk("dashboard_publish_options"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.DashboardPublishOptions = quicksightschema.ExpandDashboardPublishOptions(d.Get("dashboard_publish_options").([]interface{})) + } + + log.Printf("[DEBUG] Updating QuickSight Dashboard (%s): %#v", d.Id(), in) + out, err := conn.UpdateDashboardWithContext(ctx, in) + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionUpdating, ResNameDashboard, d.Id(), err) + } + + if _, err := waitDashboardUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return create.DiagError(names.QuickSight, create.ErrActionWaitingForUpdate, ResNameDashboard, d.Id(), err) + } + + publishVersion := &quicksight.UpdateDashboardPublishedVersionInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + VersionNumber: extractVersionFromARN(aws.StringValue(out.VersionArn)), + } + _, err = conn.UpdateDashboardPublishedVersionWithContext(ctx, publishVersion) + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionUpdating, ResNameDashboard, d.Id(), err) + } + } + + if d.HasChange("permissions") { + oraw, nraw := d.GetChange("permissions") + o := oraw.([]interface{}) + n := nraw.([]interface{}) + + toGrant, toRevoke := DiffPermissions(o, n) + + params := &quicksight.UpdateDashboardPermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + } + + if len(toGrant) > 0 { + params.GrantPermissions = toGrant + } + + if len(toRevoke) > 0 { + params.RevokePermissions = toRevoke + } + + _, err = conn.UpdateDashboardPermissionsWithContext(ctx, params) + + if err != nil { + return diag.Errorf("updating QuickSight Dashboard (%s) permissions: %s", dashboardId, err) + } + } + + return 
resourceDashboardRead(ctx, d, meta) +} + +func resourceDashboardDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, dashboardId, err := ParseDashboardId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + log.Printf("[INFO] Deleting QuickSight Dashboard %s", d.Id()) + _, err = conn.DeleteDashboardWithContext(ctx, &quicksight.DeleteDashboardInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + }) + + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionDeleting, ResNameDashboard, d.Id(), err) + } + + return nil +} + +func FindDashboardByID(ctx context.Context, conn *quicksight.QuickSight, id string) (*quicksight.Dashboard, error) { + awsAccountId, dashboardId, err := ParseDashboardId(id) + if err != nil { + return nil, err + } + + descOpts := &quicksight.DescribeDashboardInput{ + AwsAccountId: aws.String(awsAccountId), + DashboardId: aws.String(dashboardId), + } + + out, err := conn.DescribeDashboardWithContext(ctx, descOpts) + + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: descOpts, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.Dashboard == nil { + return nil, tfresource.NewEmptyResultError(descOpts) + } + + return out.Dashboard, nil +} + +func ParseDashboardId(id string) (string, string, error) { + parts := strings.SplitN(id, ",", 2) + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%s), expected AWS_ACCOUNT_ID,DASHBOARD_ID", id) + } + return parts[0], parts[1], nil +} + +func createDashboardId(awsAccountID, dashboardId string) string { + return fmt.Sprintf("%s,%s", awsAccountID, dashboardId) +} + 
+// extractVersionFromARN returns the version number parsed from the trailing path segment of a dashboard version ARN; a non-numeric segment parses as 0. +func extractVersionFromARN(arn string) *int64 { + version, _ := strconv.Atoi(arn[strings.LastIndex(arn, "/")+1:]) + return aws.Int64(int64(version)) +} + +func refreshOutputsDiff(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { + if diff.HasChanges("name", "definition", "source_entity", "theme_arn", "version_description", "parameters", "dashboard_publish_options") { + if err := diff.SetNewComputed("version_number"); err != nil { + return err + } + } + + return nil +} diff --git a/internal/service/quicksight/dashboard_test.go b/internal/service/quicksight/dashboard_test.go new file mode 100644 index 00000000000..4e6f3647872 --- /dev/null +++ b/internal/service/quicksight/dashboard_test.go @@ -0,0 +1,562 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package quicksight_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfquicksight "github.com/hashicorp/terraform-provider-aws/internal/service/quicksight" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccQuickSightDashboard_basic(t *testing.T) { + ctx := acctest.Context(t) + + var dashboard quicksight.Dashboard + resourceName := "aws_quicksight_dashboard.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t,
quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDashboardDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDashboardConfig_basic(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDashboardExists(ctx, resourceName, &dashboard), + resource.TestCheckResourceAttr(resourceName, "dashboard_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccQuickSightDashboard_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var dashboard quicksight.Dashboard + resourceName := "aws_quicksight_dashboard.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDashboardDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDashboardConfig_basic(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDashboardExists(ctx, resourceName, &dashboard), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfquicksight.ResourceDashboard(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccQuickSightDashboard_sourceEntity(t *testing.T) { + ctx := acctest.Context(t) + + var dashboard quicksight.Dashboard + resourceName := "aws_quicksight_dashboard.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + sourceName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + sourceId := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDashboardDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDashboardConfig_TemplateSourceEntity(rId, rName, sourceId, sourceName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDashboardExists(ctx, resourceName, &dashboard), + resource.TestCheckResourceAttr(resourceName, "dashboard_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + acctest.CheckResourceAttrRegionalARN(resourceName, "source_entity.0.source_template.0.arn", "quicksight", fmt.Sprintf("template/%s", sourceId)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"source_entity"}, + }, + }, + }) +} + +func TestAccQuickSightDashboard_updateVersionNumber(t *testing.T) { + ctx := acctest.Context(t) + + var dashboard quicksight.Dashboard + resourceName := "aws_quicksight_dashboard.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDashboardDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDashboardConfig_basic(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDashboardExists(ctx, resourceName, &dashboard), + resource.TestCheckResourceAttr(resourceName, 
"dashboard_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + resource.TestCheckResourceAttr(resourceName, "version_number", "1"), + ), + }, + { + Config: testAccDashboardConfig_basic(rId, rNameUpdated), + Check: resource.ComposeTestCheckFunc( + testAccCheckDashboardExists(ctx, resourceName, &dashboard), + resource.TestCheckResourceAttr(resourceName, "dashboard_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rNameUpdated), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + resource.TestCheckResourceAttr(resourceName, "version_number", "2"), + ), + }, + }, + }) +} + +func TestAccQuickSightDashboard_dashboardSpecificConfig(t *testing.T) { + ctx := acctest.Context(t) + + var dashboard quicksight.Dashboard + resourceName := "aws_quicksight_dashboard.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDashboardDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDashboardConfig_DashboardSpecificConfig(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDashboardExists(ctx, resourceName, &dashboard), + resource.TestCheckResourceAttr(resourceName, "dashboard_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "dashboard_publish_options.0.ad_hoc_filtering_option.0.availability_status", quicksight.StatusDisabled), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"parameters"}, + }, 
+ }, + }) +} + +func testAccCheckDashboardDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_quicksight_dashboard" { + continue + } + + output, err := tfquicksight.FindDashboardByID(ctx, conn, rs.Primary.ID) + if err != nil { + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + continue + } + return err + } + + if output != nil { + return fmt.Errorf("QuickSight Dashboard (%s) still exists", rs.Primary.ID) + } + } + + return nil + } +} + +func testAccCheckDashboardExists(ctx context.Context, name string, dashboard *quicksight.Dashboard) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameDashboard, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameDashboard, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) + output, err := tfquicksight.FindDashboardByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameDashboard, rs.Primary.ID, err) + } + + *dashboard = *output + + return nil + } +} + +func testAccDashboardConfigBase(rId string, rName string) string { + return acctest.ConfigCompose( + testAccDataSetConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_data_set" "test" { + data_set_id = %[1]q + name = %[2]q + import_mode = "SPICE" + + physical_table_map { + physical_table_map_id = %[1]q + s3_source { + data_source_arn = aws_quicksight_data_source.test.arn + input_columns { + name = "Column1" + type = "STRING" + } + 
input_columns { + name = "Column2" + type = "STRING" + } + upload_settings {} + } + } + logical_table_map { + logical_table_map_id = %[1]q + alias = "Group1" + source { + physical_table_id = %[1]q + } + data_transforms { + cast_column_type_operation { + column_name = "Column2" + new_column_type = "INTEGER" + } + } + } + + lifecycle { + ignore_changes = [ + physical_table_map + ] + } +} +`, rId, rName)) +} + +func testAccDashboardConfig_basic(rId, rName string) string { + return acctest.ConfigCompose( + testAccDashboardConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_dashboard" "test" { + dashboard_id = %[1]q + name = %[2]q + version_description = "test" + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.test.arn + identifier = "1" + } + sheets { + title = "Test" + sheet_id = "Test1" + visuals { + custom_content_visual { + data_set_identifier = "1" + title { + format_text { + plain_text = "Test" + } + } + visual_id = "Test1" + } + } + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Test" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +`, rId, rName)) +} + +func testAccDashboardConfig_TemplateSourceEntity(rId, rName, sourceId, sourceName string) string { + return acctest.ConfigCompose( + testAccDashboardConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_template" "test" { + template_id = %[3]q + name = %[4]q + version_description = "test" + definition { + data_set_configuration { + data_set_schema { + column_schema_list { + name = "Column1" + data_type = 
"STRING" + } + column_schema_list { + name = "Column2" + data_type = "INTEGER" + } + } + placeholder = "1" + } + sheets { + title = "Test" + sheet_id = "Test1" + visuals { + custom_content_visual { + data_set_identifier = "1" + title { + format_text { + plain_text = "Test" + } + } + visual_id = "Test1" + } + } + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Test" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} + +resource "aws_quicksight_dashboard" "test" { + dashboard_id = %[1]q + name = %[2]q + version_description = "test" + source_entity { + source_template { + arn = aws_quicksight_template.test.arn + data_set_references { + data_set_arn = aws_quicksight_data_set.test.arn + data_set_placeholder = "1" + } + } + } +} +`, rId, rName, sourceId, sourceName)) +} + +func testAccDashboardConfig_DashboardSpecificConfig(rId, rName string) string { + return acctest.ConfigCompose( + testAccDashboardConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_dashboard" "test" { + dashboard_id = %[1]q + name = %[2]q + version_description = "test" + parameters { + string_parameters { + name = "test" + values = ["value"] + } + } + dashboard_publish_options { + ad_hoc_filtering_option { + availability_status = "DISABLED" + } + data_point_drill_up_down_option { + availability_status = "ENABLED" + } + data_point_menu_label_option { + availability_status = "ENABLED" + } + data_point_tooltip_option { + availability_status = "ENABLED" + } + export_to_csv_option { + availability_status = "ENABLED" + } + export_with_hidden_fields_option { + 
availability_status = "DISABLED" + } + sheet_controls_option { + visibility_state = "COLLAPSED" + } + sheet_layout_element_maximization_option { + availability_status = "ENABLED" + } + visual_axis_sort_option { + availability_status = "ENABLED" + } + visual_menu_option { + availability_status = "ENABLED" + } + } + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.test.arn + identifier = "1" + } + parameter_declarations { + string_parameter_declaration { + name = "test" + parameter_value_type = "SINGLE_VALUED" + default_values { + static_values = ["value"] + } + values_when_unset { + value_when_unset_option = "NULL" + } + } + } + sheets { + title = "Example" + sheet_id = "Example1" + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Example" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +`, rId, rName)) +} diff --git a/internal/service/quicksight/data_set.go b/internal/service/quicksight/data_set.go index 5672a026f12..0410bdf8dda 100644 --- a/internal/service/quicksight/data_set.go +++ b/internal/service/quicksight/data_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -35,307 +38,309 @@ func ResourceDataSet() *schema.Resource { StateContext: schema.ImportStatePassthroughContext, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: verify.ValidAccountID, - }, - "column_groups": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - MaxItems: 8, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "geo_spatial_column_group": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "columns": { - Type: schema.TypeList, - Required: true, - MinItems: 1, - MaxItems: 16, - Elem: &schema.Schema{ + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidAccountID, + }, + "column_groups": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 8, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "geo_spatial_column_group": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "columns": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + MaxItems: 16, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + }, + "country_code": { Type: schema.TypeString, - ValidateFunc: validation.StringLenBetween(1, 128), + Required: true, + ValidateFunc: validation.StringInSlice(quicksight.GeoSpatialCountryCode_Values(), false), + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 64), }, - }, - 
"country_code": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(quicksight.GeoSpatialCountryCode_Values(), false), - }, - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 64), }, }, }, }, }, }, - }, - "column_level_permission_rules": { - Type: schema.TypeList, - MinItems: 1, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "column_names": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "principals": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - MaxItems: 100, - Elem: &schema.Schema{Type: schema.TypeString}, + "column_level_permission_rules": { + Type: schema.TypeList, + MinItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column_names": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principals": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 100, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, }, }, - }, - "data_set_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "data_set_usage_configuration": { - Type: schema.TypeList, - Computed: true, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "disable_use_as_direct_query_source": { - Type: schema.TypeBool, - Computed: true, - Optional: true, - }, - "disable_use_as_imported_source": { - Type: schema.TypeBool, - Computed: true, - Optional: true, + "data_set_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "data_set_usage_configuration": { + Type: schema.TypeList, + Computed: true, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disable_use_as_direct_query_source": { + Type: schema.TypeBool, + Computed: true, + 
Optional: true, + }, + "disable_use_as_imported_source": { + Type: schema.TypeBool, + Computed: true, + Optional: true, + }, }, }, }, - }, - "field_folders": { - Type: schema.TypeSet, - Optional: true, - MaxItems: 1000, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "field_folders_id": { - Type: schema.TypeString, - Required: true, - }, - "columns": { - Type: schema.TypeList, - Optional: true, - MaxItems: 5000, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "description": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(0, 500), + "field_folders": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 1000, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field_folders_id": { + Type: schema.TypeString, + Required: true, + }, + "columns": { + Type: schema.TypeList, + Optional: true, + MaxItems: 5000, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 500), + }, }, }, }, - }, - "import_mode": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(quicksight.DataSetImportMode_Values(), false), - }, - "logical_table_map": { - Type: schema.TypeSet, - Optional: true, - Computed: true, - MaxItems: 64, - Elem: logicalTableMapSchema(), - }, - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, - "output_columns": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "description": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Computed: true, - }, - "type": { - Type: schema.TypeString, - Computed: true, + "import_mode": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(quicksight.DataSetImportMode_Values(), false), + }, + "logical_table_map": { + 
Type: schema.TypeSet, + Optional: true, + Computed: true, + MaxItems: 64, + Elem: logicalTableMapSchema(), + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "output_columns": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "description": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "type": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - }, - "permissions": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - MaxItems: 64, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "actions": { - Type: schema.TypeSet, - Required: true, - MinItems: 1, - MaxItems: 16, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "principal": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 256), + "permissions": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 64, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "actions": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 16, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principal": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, }, }, }, - }, - "physical_table_map": { - Type: schema.TypeSet, - Required: true, - MaxItems: 32, - Elem: physicalTableMapSchema(), - }, - "row_level_permission_data_set": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, - }, - "format_version": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(quicksight.RowLevelPermissionFormatVersion_Values(), false), - }, - "namespace": { - Type: schema.TypeString, 
- Optional: true, - ValidateFunc: validation.StringLenBetween(0, 64), - }, - "permission_policy": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(quicksight.RowLevelPermissionPolicy_Values(), false), - }, - "status": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + "physical_table_map": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 32, + Elem: physicalTableMapSchema(), + }, + "row_level_permission_data_set": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "format_version": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(quicksight.RowLevelPermissionFormatVersion_Values(), false), + }, + "namespace": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 64), + }, + "permission_policy": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(quicksight.RowLevelPermissionPolicy_Values(), false), + }, + "status": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }, }, }, }, - }, - "row_level_permission_tag_configuration": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "status": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), - }, - "tag_rules": { - Type: schema.TypeList, - Required: true, - MinItems: 1, - MaxItems: 50, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "column_name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "match_all_value": { - Type: schema.TypeString, - 
Optional: true, - ValidateFunc: validation.StringLenBetween(1, 256), - }, - "tag_key": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, - "tag_multi_value_delimiter": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 10), + "row_level_permission_tag_configuration": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "status": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }, + "tag_rules": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + MaxItems: 50, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "match_all_value": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + "tag_key": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "tag_multi_value_delimiter": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 10), + }, }, }, }, }, }, }, - }, - "refresh_properties": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "refresh_configuration": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "incremental_refresh": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "lookback_window": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "column_name": { - Type: schema.TypeString, - Required: true, - }, - "size": { - Type: schema.TypeInt, - 
Required: true, - }, - "size_unit": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(quicksight.LookbackWindowSizeUnit_Values(), false), + "refresh_properties": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "refresh_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "incremental_refresh": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "lookback_window": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column_name": { + Type: schema.TypeString, + Required: true, + }, + "size": { + Type: schema.TypeInt, + Required: true, + }, + "size_unit": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(quicksight.LookbackWindowSizeUnit_Values(), false), + }, }, }, }, @@ -348,10 +353,11 @@ func ResourceDataSet() *schema.Resource { }, }, }, - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + } }, + CustomizeDiff: customdiff.All( func(_ context.Context, diff *schema.ResourceDiff, _ interface{}) error { mode := diff.Get("import_mode").(string) @@ -832,7 +838,7 @@ func physicalTableMapSchema() *schema.Resource { } func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("aws_account_id"); ok { @@ -848,7 +854,7 @@ func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta int ImportMode: 
aws.String(d.Get("import_mode").(string)), PhysicalTableMap: expandDataSetPhysicalTableMap(d.Get("physical_table_map").(*schema.Set)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("column_groups"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -885,7 +891,7 @@ func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta int _, err := conn.CreateDataSetWithContext(ctx, input) if err != nil { - return diag.Errorf("error creating QuickSight Data Set: %s", err) + return diag.Errorf("creating QuickSight Data Set: %s", err) } if v, ok := d.GetOk("refresh_properties"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -897,7 +903,7 @@ func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta int _, err := conn.PutDataSetRefreshPropertiesWithContext(ctx, input) if err != nil { - return diag.Errorf("error putting QuickSight Data Set Refresh Properties: %s", err) + return diag.Errorf("putting QuickSight Data Set Refresh Properties: %s", err) } } @@ -905,7 +911,7 @@ func resourceDataSetCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId, dataSetId, err := ParseDataSetID(d.Id()) if err != nil { @@ -926,11 +932,11 @@ func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta inter } if err != nil { - return diag.Errorf("error describing QuickSight Data Set (%s): %s", d.Id(), err) + return diag.Errorf("describing QuickSight Data Set (%s): %s", d.Id(), err) } if output == nil || output.DataSet == nil { - return diag.Errorf("error describing QuickSight Data Set (%s): empty output", d.Id()) + return diag.Errorf("describing QuickSight Data Set (%s): empty output", d.Id()) } dataSet := 
output.DataSet @@ -942,39 +948,39 @@ func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("import_mode", dataSet.ImportMode) if err := d.Set("column_groups", flattenColumnGroups(dataSet.ColumnGroups)); err != nil { - return diag.Errorf("error setting column_groups: %s", err) + return diag.Errorf("setting column_groups: %s", err) } if err := d.Set("column_level_permission_rules", flattenColumnLevelPermissionRules(dataSet.ColumnLevelPermissionRules)); err != nil { - return diag.Errorf("error setting column_level_permission_rules: %s", err) + return diag.Errorf("setting column_level_permission_rules: %s", err) } if err := d.Set("data_set_usage_configuration", flattenDataSetUsageConfiguration(dataSet.DataSetUsageConfiguration)); err != nil { - return diag.Errorf("error setting data_set_usage_configuration: %s", err) + return diag.Errorf("setting data_set_usage_configuration: %s", err) } if err := d.Set("field_folders", flattenFieldFolders(dataSet.FieldFolders)); err != nil { - return diag.Errorf("error setting field_folders: %s", err) + return diag.Errorf("setting field_folders: %s", err) } if err := d.Set("logical_table_map", flattenLogicalTableMap(dataSet.LogicalTableMap, logicalTableMapSchema())); err != nil { - return diag.Errorf("error setting logical_table_map: %s", err) + return diag.Errorf("setting logical_table_map: %s", err) } if err := d.Set("output_columns", flattenOutputColumns(dataSet.OutputColumns)); err != nil { - return diag.Errorf("error setting output_columns: %s", err) + return diag.Errorf("setting output_columns: %s", err) } if err := d.Set("physical_table_map", flattenPhysicalTableMap(dataSet.PhysicalTableMap, physicalTableMapSchema())); err != nil { - return diag.Errorf("error setting physical_table_map: %s", err) + return diag.Errorf("setting physical_table_map: %s", err) } if err := d.Set("row_level_permission_data_set", flattenRowLevelPermissionDataSet(dataSet.RowLevelPermissionDataSet)); err != nil { - return 
diag.Errorf("error setting row_level_permission_data_set: %s", err) + return diag.Errorf("setting row_level_permission_data_set: %s", err) } if err := d.Set("row_level_permission_tag_configuration", flattenRowLevelPermissionTagConfiguration(dataSet.RowLevelPermissionTagConfiguration)); err != nil { - return diag.Errorf("error setting row_level_permission_tag_configuration: %s", err) + return diag.Errorf("setting row_level_permission_tag_configuration: %s", err) } permsResp, err := conn.DescribeDataSetPermissionsWithContext(ctx, &quicksight.DescribeDataSetPermissionsInput{ @@ -983,11 +989,11 @@ func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta inter }) if err != nil { - return diag.Errorf("error describing QuickSight Data Source (%s) Permissions: %s", d.Id(), err) + return diag.Errorf("describing QuickSight Data Set (%s) Permissions: %s", d.Id(), err) } if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { - return diag.Errorf("error setting permissions: %s", err) + return diag.Errorf("setting permissions: %s", err) } propsResp, err := conn.DescribeDataSetRefreshPropertiesWithContext(ctx, &quicksight.DescribeDataSetRefreshPropertiesInput{ @@ -995,13 +1001,13 @@ func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta inter DataSetId: aws.String(dataSetId), }) - if err != nil && !tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { - return diag.Errorf("error describing refresh properties (%s): %s", d.Id(), err) + if err != nil && !(tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) || tfawserr.ErrMessageContains(err, quicksight.ErrCodeInvalidParameterValueException, "not a SPICE dataset")) { + return diag.Errorf("describing refresh properties (%s): %s", d.Id(), err) } if err == nil { if err := d.Set("refresh_properties", flattenRefreshProperties(propsResp.DataSetRefreshProperties)); err != nil { - return diag.Errorf("error setting refresh
properties: %s", err) + return diag.Errorf("setting refresh properties: %s", err) } } @@ -1009,7 +1015,7 @@ func resourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) if d.HasChangesExcept("permissions", "tags", "tags_all", "refresh_properties") { awsAccountId, dataSetId, err := ParseDataSetID(d.Id()) @@ -1041,7 +1047,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int _, err = conn.UpdateDataSetWithContext(ctx, params) if err != nil { - return diag.Errorf("error updating QuickSight Data Set (%s): %s", d.Id(), err) + return diag.Errorf("updating QuickSight Data Set (%s): %s", d.Id(), err) } } @@ -1073,7 +1079,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int _, err = conn.UpdateDataSetPermissionsWithContext(ctx, params) if err != nil { - return diag.Errorf("error updating QuickSight Data Set (%s) permissions: %s", dataSetId, err) + return diag.Errorf("updating QuickSight Data Set (%s) permissions: %s", dataSetId, err) } } @@ -1092,7 +1098,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int DataSetId: aws.String(dataSetId), }) if err != nil { - return diag.Errorf("error deleting QuickSight Data Set Refresh Properties (%s): %s", d.Id(), err) + return diag.Errorf("deleting QuickSight Data Set Refresh Properties (%s): %s", d.Id(), err) } } else { _, err = conn.PutDataSetRefreshPropertiesWithContext(ctx, &quicksight.PutDataSetRefreshPropertiesInput{ @@ -1101,7 +1107,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int DataSetRefreshProperties: expandDataSetRefreshProperties(d.Get("refresh_properties").([]interface{})), }) if err != nil { - return diag.Errorf("error updating QuickSight Data Set Refresh 
Properties (%s): %s", d.Id(), err) + return diag.Errorf("updating QuickSight Data Set Refresh Properties (%s): %s", d.Id(), err) } } } @@ -1110,7 +1116,7 @@ func resourceDataSetUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceDataSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) log.Printf("[INFO] Deleting QuickSight Data Set %s", d.Id()) awsAccountId, dataSetId, err := ParseDataSetID(d.Id()) @@ -1130,7 +1136,7 @@ func resourceDataSetDelete(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return diag.Errorf("error deleting QuickSight Data Set (%s): %s", d.Id(), err) + return diag.Errorf("deleting QuickSight Data Set (%s): %s", d.Id(), err) } return nil @@ -1313,8 +1319,8 @@ func expandDataSetLogicalTableSource(tfMap map[string]interface{}) *quicksight.L if v, ok := tfMap["physical_table_id"].(string); ok && v != "" { logicalTableSource.PhysicalTableId = aws.String(v) } - if v, ok := tfMap["join_instruction"].(map[string]interface{}); ok && len(v) > 0 { - logicalTableSource.JoinInstruction = expandDataSetJoinInstruction(v) + if v, ok := tfMap["join_instruction"].([]interface{}); ok && len(v) > 0 { + logicalTableSource.JoinInstruction = expandDataSetJoinInstruction(v[0].(map[string]interface{})) } return logicalTableSource @@ -1644,10 +1650,6 @@ func expandDataSetUntagColumnOperation(tfList []interface{}) *quicksight.UntagCo } func expandDataSetPhysicalTableMap(tfSet *schema.Set) map[string]*quicksight.PhysicalTable { - if tfSet.Len() == 0 { - return nil - } - physicalTableMap := make(map[string]*quicksight.PhysicalTable) for _, v := range tfSet.List() { vMap, ok := v.(map[string]interface{}) @@ -2426,10 +2428,6 @@ func flattenJoinKeyProperties(apiObject *quicksight.JoinKeyProperties) map[strin } func flattenPhysicalTableMap(apiObject map[string]*quicksight.PhysicalTable, 
resourceSchema *schema.Resource) *schema.Set { - if len(apiObject) == 0 { - return nil - } - var tfList []interface{} for k, v := range apiObject { if v == nil { diff --git a/internal/service/quicksight/data_set_data_source.go b/internal/service/quicksight/data_set_data_source.go index a60ebd9d82c..5ac090680fc 100644 --- a/internal/service/quicksight/data_set_data_source.go +++ b/internal/service/quicksight/data_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -19,214 +22,216 @@ func DataSourceDataSet() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceDataSetRead, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ValidateFunc: verify.ValidAccountID, - }, - "column_groups": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "geo_spatial_column_group": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "columns": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{ - Type: schema.TypeString, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: verify.ValidAccountID, + }, + "column_groups": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "geo_spatial_column_group": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "columns": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "country_code": { + Type: schema.TypeString, + 
Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, }, - }, - "country_code": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Computed: true, }, }, }, }, }, }, - }, - "column_level_permission_rules": { - Type: schema.TypeList, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "column_names": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "principals": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, + "column_level_permission_rules": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column_names": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principals": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, }, }, - }, - "data_set_id": { - Type: schema.TypeString, - Required: true, - }, - "data_set_usage_configuration": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "disable_use_as_direct_query_source": { - Type: schema.TypeBool, - Computed: true, - }, - "disable_use_as_imported_source": { - Type: schema.TypeBool, - Computed: true, + "data_set_id": { + Type: schema.TypeString, + Required: true, + }, + "data_set_usage_configuration": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disable_use_as_direct_query_source": { + Type: schema.TypeBool, + Computed: true, + }, + "disable_use_as_imported_source": { + Type: schema.TypeBool, + Computed: true, + }, }, }, }, - }, - "field_folders": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "field_folders_id": { - Type: schema.TypeString, - Computed: true, - }, - "columns": { - 
Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "description": { - Type: schema.TypeString, - Computed: true, + "field_folders": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "field_folders_id": { + Type: schema.TypeString, + Computed: true, + }, + "columns": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - }, - "import_mode": { - Type: schema.TypeString, - Computed: true, - }, - "logical_table_map": { - Type: schema.TypeSet, - Computed: true, - Elem: logicalTableMapDataSourceSchema(), - }, - "name": { - Type: schema.TypeString, - Computed: true, - }, - "permissions": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "actions": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "principal": { - Type: schema.TypeString, - Computed: true, + "import_mode": { + Type: schema.TypeString, + Computed: true, + }, + "logical_table_map": { + Type: schema.TypeSet, + Computed: true, + Elem: logicalTableMapDataSourceSchema(), + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "permissions": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "actions": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principal": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - }, - "physical_table_map": { - Type: schema.TypeSet, - Computed: true, - Elem: physicalTableMapDataSourceSchema(), - }, - "row_level_permission_data_set": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - 
"format_version": { - Type: schema.TypeString, - Computed: true, - }, - "namespace": { - Type: schema.TypeString, - Computed: true, - }, - "permission_policy": { - Type: schema.TypeString, - Computed: true, - }, - "status": { - Type: schema.TypeString, - Computed: true, + "physical_table_map": { + Type: schema.TypeSet, + Computed: true, + Elem: physicalTableMapDataSourceSchema(), + }, + "row_level_permission_data_set": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "format_version": { + Type: schema.TypeString, + Computed: true, + }, + "namespace": { + Type: schema.TypeString, + Computed: true, + }, + "permission_policy": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - }, - "row_level_permission_tag_configuration": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "status": { - Type: schema.TypeString, - Computed: true, - }, - "tag_rules": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "column_name": { - Type: schema.TypeString, - Computed: true, - }, - "match_all_value": { - Type: schema.TypeString, - Computed: true, - }, - "tag_key": { - Type: schema.TypeString, - Computed: true, - }, - "tag_multi_value_delimiter": { - Type: schema.TypeString, - Computed: true, + "row_level_permission_tag_configuration": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "status": { + Type: schema.TypeString, + Computed: true, + }, + "tag_rules": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column_name": { + Type: schema.TypeString, + Computed: true, + }, + "match_all_value": { + Type: schema.TypeString, + Computed: true, + }, + 
"tag_key": { + Type: schema.TypeString, + Computed: true, + }, + "tag_multi_value_delimiter": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, }, }, }, - }, - "tags": tftags.TagsSchemaComputed(), - "tags_all": { - Type: schema.TypeMap, - Optional: true, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Deprecated: `this attribute has been deprecated`, - }, + "tags": tftags.TagsSchemaComputed(), + "tags_all": { + Type: schema.TypeMap, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Deprecated: `this attribute has been deprecated`, + }, + } }, } } @@ -610,7 +615,7 @@ const ( ) func dataSourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -641,52 +646,52 @@ func dataSourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("import_mode", dataSet.ImportMode) if err := d.Set("column_groups", flattenColumnGroups(dataSet.ColumnGroups)); err != nil { - return diag.Errorf("error setting column_groups: %s", err) + return diag.Errorf("setting column_groups: %s", err) } if err := d.Set("column_level_permission_rules", flattenColumnLevelPermissionRules(dataSet.ColumnLevelPermissionRules)); err != nil { - return diag.Errorf("error setting column_level_permission_rules: %s", err) + return diag.Errorf("setting column_level_permission_rules: %s", err) } if err := d.Set("data_set_usage_configuration", flattenDataSetUsageConfiguration(dataSet.DataSetUsageConfiguration)); err != nil { - return diag.Errorf("error setting data_set_usage_configuration: %s", err) + return diag.Errorf("setting data_set_usage_configuration: %s", err) } if err := d.Set("field_folders", flattenFieldFolders(dataSet.FieldFolders)); err != nil { - 
return diag.Errorf("error setting field_folders: %s", err) + return diag.Errorf("setting field_folders: %s", err) } if err := d.Set("logical_table_map", flattenLogicalTableMap(dataSet.LogicalTableMap, logicalTableMapDataSourceSchema())); err != nil { - return diag.Errorf("error setting logical_table_map: %s", err) + return diag.Errorf("setting logical_table_map: %s", err) } if err := d.Set("physical_table_map", flattenPhysicalTableMap(dataSet.PhysicalTableMap, physicalTableMapDataSourceSchema())); err != nil { - return diag.Errorf("error setting physical_table_map: %s", err) + return diag.Errorf("setting physical_table_map: %s", err) } if err := d.Set("row_level_permission_data_set", flattenRowLevelPermissionDataSet(dataSet.RowLevelPermissionDataSet)); err != nil { - return diag.Errorf("error setting row_level_permission_data_set: %s", err) + return diag.Errorf("setting row_level_permission_data_set: %s", err) } if err := d.Set("row_level_permission_tag_configuration", flattenRowLevelPermissionTagConfiguration(dataSet.RowLevelPermissionTagConfiguration)); err != nil { - return diag.Errorf("error setting row_level_permission_tag_configuration: %s", err) + return diag.Errorf("setting row_level_permission_tag_configuration: %s", err) } - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { - return diag.Errorf("error listing tags for QuickSight Data Set (%s): %s", d.Id(), err) + return diag.Errorf("listing tags for QuickSight Data Set (%s): %s", d.Id(), err) } tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) //lintignore:AWSR002 if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { - return diag.Errorf("error setting tags: %s", err) + return diag.Errorf("setting tags: %s", err) } if err := d.Set("tags_all", tags.Map()); err != nil { - return diag.Errorf("error setting tags_all: %s", err) + return diag.Errorf("setting tags_all: %s", err) } permsResp, 
err := conn.DescribeDataSetPermissionsWithContext(ctx, &quicksight.DescribeDataSetPermissionsInput{ @@ -695,11 +700,11 @@ func dataSourceDataSetRead(ctx context.Context, d *schema.ResourceData, meta int }) if err != nil { - return diag.Errorf("error describing QuickSight Data Source (%s) Permissions: %s", d.Id(), err) + return diag.Errorf("describing QuickSight Data Source (%s) Permissions: %s", d.Id(), err) } if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { - return diag.Errorf("error setting permissions: %s", err) + return diag.Errorf("setting permissions: %s", err) } return nil } diff --git a/internal/service/quicksight/data_set_data_source_test.go b/internal/service/quicksight/data_set_data_source_test.go index 5ef3514a404..9694ef185dd 100644 --- a/internal/service/quicksight/data_set_data_source_test.go +++ b/internal/service/quicksight/data_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( diff --git a/internal/service/quicksight/data_set_test.go b/internal/service/quicksight/data_set_test.go index 922971d0b7a..99fc2b25d97 100644 --- a/internal/service/quicksight/data_set_test.go +++ b/internal/service/quicksight/data_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -453,6 +456,46 @@ func TestAccQuickSightDataSet_tags(t *testing.T) { }) } +func TestAccQuickSightDataSet_noPhysicalTableMap(t *testing.T) { + ctx := acctest.Context(t) + var dataSet quicksight.DataSet + resourceName := "aws_quicksight_data_set.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDataSetDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDataSetConfigNoPhysicalTableMap(rId, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataSetExists(ctx, resourceName, &dataSet), + resource.TestCheckResourceAttr(resourceName, "data_set_id", rId), + acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "quicksight", fmt.Sprintf("dataset/%s", rId)), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "import_mode", "SPICE"), + resource.TestCheckResourceAttr(resourceName, "physical_table_map.#", "0"), + resource.TestCheckResourceAttr(resourceName, "logical_table_map.#", "3"), + resource.TestCheckResourceAttr(resourceName, "logical_table_map.0.logical_table_map_id", "joined"), + resource.TestCheckResourceAttr(resourceName, "logical_table_map.0.alias", "j"), + resource.TestCheckResourceAttr(resourceName, "logical_table_map.0.source.0.join_instruction.0.right_operand", "right"), + resource.TestCheckResourceAttr(resourceName, "logical_table_map.0.source.0.join_instruction.0.left_operand", "left"), + resource.TestCheckResourceAttr(resourceName, "logical_table_map.0.source.0.join_instruction.0.type", "INNER"), + resource.TestCheckResourceAttr(resourceName, 
"logical_table_map.0.source.0.join_instruction.0.on_clause", "Column1 = Column2"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckDataSetExists(ctx context.Context, resourceName string, dataSet *quicksight.DataSet) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] @@ -465,7 +508,7 @@ func testAccCheckDataSetExists(ctx context.Context, resourceName string, dataSet return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) input := &quicksight.DescribeDataSetInput{ AwsAccountId: aws.String(awsAccountID), @@ -488,7 +531,7 @@ func testAccCheckDataSetExists(ctx context.Context, resourceName string, dataSet func testAccCheckDataSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_data_set" { continue @@ -1034,3 +1077,84 @@ resource "aws_quicksight_data_set" "test" { } `, rId, rName, key1, value1, key2, value2)) } + +func testAccDataSetConfigNoPhysicalTableMap(rId, rName string) string { + return acctest.ConfigCompose( + testAccDataSetConfigBase(rId, rName), + fmt.Sprintf(` +resource "aws_quicksight_data_set" "left" { + data_set_id = "%[1]s-left" + name = "%[2]s-left" + import_mode = "SPICE" + + physical_table_map { + physical_table_map_id = "%[1]s-left" + s3_source { + data_source_arn = aws_quicksight_data_source.test.arn + input_columns { + name = "Column1" + type = "STRING" + } + upload_settings { + format = "JSON" + } + } + } +} + +resource "aws_quicksight_data_set" "right" { + data_set_id = "%[1]s-right" + name = "%[2]s-right" + import_mode = "SPICE" + + 
physical_table_map { + physical_table_map_id = "%[1]s-right" + s3_source { + data_source_arn = aws_quicksight_data_source.test.arn + input_columns { + name = "Column2" + type = "STRING" + } + upload_settings { + format = "JSON" + } + } + } +} + +resource "aws_quicksight_data_set" "test" { + data_set_id = %[1]q + name = %[2]q + import_mode = "SPICE" + + logical_table_map { + logical_table_map_id = "right" + alias = "r" + source { + data_set_arn = aws_quicksight_data_set.right.arn + } + } + + logical_table_map { + logical_table_map_id = "left" + alias = "l" + source { + data_set_arn = aws_quicksight_data_set.left.arn + } + } + + logical_table_map { + logical_table_map_id = "joined" + alias = "j" + source { + join_instruction { + left_operand = "left" + right_operand = "right" + type = "INNER" + on_clause = "Column1 = Column2" + } + } + } +} +`, rId, rName)) +} diff --git a/internal/service/quicksight/data_source.go b/internal/service/quicksight/data_source.go index 9b93355414c..3412f9be1bb 100644 --- a/internal/service/quicksight/data_source.go +++ b/internal/service/quicksight/data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -31,580 +34,583 @@ func ResourceDataSource() *schema.Resource { StateContext: schema.ImportStatePassthroughContext, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: verify.ValidAccountID, - }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidAccountID, + }, - "credentials": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "copy_source_arn": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidARN, - ConflictsWith: []string{"credentials.0.credential_pair"}, - }, - "credential_pair": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "password": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.NoZeroValues, - validation.StringLenBetween(1, 1024), - ), - Sensitive: true, - }, - "username": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.NoZeroValues, - validation.StringLenBetween(1, 64), - ), - Sensitive: true, + "credentials": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "copy_source_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + ConflictsWith: []string{"credentials.0.credential_pair"}, + }, + "credential_pair": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + 
"password": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.NoZeroValues, + validation.StringLenBetween(1, 1024), + ), + Sensitive: true, + }, + "username": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.NoZeroValues, + validation.StringLenBetween(1, 64), + ), + Sensitive: true, + }, }, }, + ConflictsWith: []string{"credentials.0.copy_source_arn"}, }, - ConflictsWith: []string{"credentials.0.copy_source_arn"}, }, }, }, - }, - "data_source_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, + "data_source_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.NoZeroValues, - validation.StringLenBetween(1, 128), - ), - }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.NoZeroValues, + validation.StringLenBetween(1, 128), + ), + }, - "parameters": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - MinItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "amazon_elasticsearch": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "domain": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "parameters": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "amazon_elasticsearch": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "domain": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "athena": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - 
"work_group": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.NoZeroValues, + "athena": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "work_group": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "aurora": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "aurora": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "aurora_postgresql": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "aurora_postgresql": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: 
schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "aws_iot_analytics": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "data_set_name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "aws_iot_analytics": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "jira": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "site_base_url": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "jira": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "site_base_url": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "maria_db": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "maria_db": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: 
validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "mysql": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "mysql": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "oracle": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "oracle": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "postgresql": { - Type: schema.TypeList, - Optional: 
true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "postgresql": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "presto": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "catalog": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "presto": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "catalog": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "rds": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - 
"instance_id": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "rds": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "instance_id": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "redshift": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "cluster_id": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.NoZeroValues, - }, - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Optional: true, - }, - "port": { - Type: schema.TypeInt, - Optional: true, + "redshift": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cluster_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.NoZeroValues, + }, + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Optional: true, + }, + "port": { + Type: schema.TypeInt, + Optional: true, + }, }, }, }, - }, - "s3": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "manifest_file_location": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "bucket": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "key": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "s3": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: 
&schema.Resource{ + Schema: map[string]*schema.Schema{ + "manifest_file_location": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "key": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, }, }, }, - }, - "service_now": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "site_base_url": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "service_now": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "site_base_url": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "snowflake": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "warehouse": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "snowflake": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "warehouse": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, - }, - "spark": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: 
map[string]*schema.Schema{ - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "spark": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "sql_server": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "sql_server": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "teradata": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "database": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "host": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, - }, - "port": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), + "teradata": { + Type: schema.TypeList, + Optional: true, + 
MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "database": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "host": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, + "port": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, }, }, }, - }, - "twitter": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "max_rows": { - Type: schema.TypeInt, - Required: true, - ValidateFunc: validation.IntAtLeast(1), - }, - "query": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.NoZeroValues, + "twitter": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "max_rows": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "query": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.NoZeroValues, + }, }, }, }, }, }, }, - }, - "permission": { - Type: schema.TypeSet, - Optional: true, - MinItems: 1, - MaxItems: 64, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "actions": { - Type: schema.TypeSet, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, - MinItems: 1, - MaxItems: 16, - }, - "principal": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, + "permission": { + Type: schema.TypeSet, + Optional: true, + MinItems: 1, + MaxItems: 64, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "actions": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + MinItems: 1, + MaxItems: 16, + }, + "principal": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, }, }, }, - }, - "ssl_properties": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: 
&schema.Resource{ - Schema: map[string]*schema.Schema{ - "disable_ssl": { - Type: schema.TypeBool, - Required: true, + "ssl_properties": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disable_ssl": { + Type: schema.TypeBool, + Required: true, + }, }, }, }, - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), - "type": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(quicksight.DataSourceType_Values(), false), - }, + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(quicksight.DataSourceType_Values(), false), + }, - "vpc_connection_properties": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "vpc_connection_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, + "vpc_connection_properties": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "vpc_connection_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, }, }, }, - }, + } }, + CustomizeDiff: verify.SetTagsDiff, } } func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId := meta.(*conns.AWSClient).AccountID id := d.Get("data_source_id").(string) @@ -618,7 +624,7 @@ func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta DataSourceId: aws.String(id), DataSourceParameters: expandDataSourceParameters(d.Get("parameters").([]interface{})), Name: aws.String(d.Get("name").(string)), - 
Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(d.Get("type").(string)), } @@ -640,20 +646,20 @@ func resourceDataSourceCreate(ctx context.Context, d *schema.ResourceData, meta _, err := conn.CreateDataSourceWithContext(ctx, params) if err != nil { - return diag.Errorf("error creating QuickSight Data Source: %s", err) + return diag.Errorf("creating QuickSight Data Source: %s", err) } d.SetId(fmt.Sprintf("%s/%s", awsAccountId, id)) if _, err := waitCreated(ctx, conn, awsAccountId, id); err != nil { - return diag.Errorf("error waiting from QuickSight Data Source (%s) creation: %s", d.Id(), err) + return diag.Errorf("waiting for QuickSight Data Source (%s) creation: %s", d.Id(), err) } return resourceDataSourceRead(ctx, d, meta) } func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId, dataSourceId, err := ParseDataSourceID(d.Id()) if err != nil { @@ -674,11 +680,11 @@ func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta in } if err != nil { - return diag.Errorf("error describing QuickSight Data Source (%s): %s", d.Id(), err) + return diag.Errorf("describing QuickSight Data Source (%s): %s", d.Id(), err) } if output == nil || output.DataSource == nil { - return diag.Errorf("error describing QuickSight Data Source (%s): empty output", d.Id()) + return diag.Errorf("describing QuickSight Data Source (%s): empty output", d.Id()) } dataSource := output.DataSource @@ -689,17 +695,17 @@ func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("name", dataSource.Name) if err := d.Set("parameters", flattenParameters(dataSource.DataSourceParameters)); err != nil { - return diag.Errorf("error setting parameters: %s", err) + return diag.Errorf("setting parameters: %s", err) } if err := d.Set("ssl_properties", 
flattenSSLProperties(dataSource.SslProperties)); err != nil { - return diag.Errorf("error setting ssl_properties: %s", err) + return diag.Errorf("setting ssl_properties: %s", err) } d.Set("type", dataSource.Type) if err := d.Set("vpc_connection_properties", flattenVPCConnectionProperties(dataSource.VpcConnectionProperties)); err != nil { - return diag.Errorf("error setting vpc_connection_properties: %s", err) + return diag.Errorf("setting vpc_connection_properties: %s", err) } permsResp, err := conn.DescribeDataSourcePermissionsWithContext(ctx, &quicksight.DescribeDataSourcePermissionsInput{ @@ -708,18 +714,18 @@ func resourceDataSourceRead(ctx context.Context, d *schema.ResourceData, meta in }) if err != nil { - return diag.Errorf("error describing QuickSight Data Source (%s) Permissions: %s", d.Id(), err) + return diag.Errorf("describing QuickSight Data Source (%s) Permissions: %s", d.Id(), err) } if err := d.Set("permission", flattenPermissions(permsResp.Permissions)); err != nil { - return diag.Errorf("error setting permission: %s", err) + return diag.Errorf("setting permission: %s", err) } return nil } func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) if d.HasChangesExcept("permission", "tags", "tags_all") { awsAccountId, dataSourceId, err := ParseDataSourceID(d.Id()) @@ -752,11 +758,11 @@ func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta _, err = conn.UpdateDataSourceWithContext(ctx, params) if err != nil { - return diag.Errorf("error updating QuickSight Data Source (%s): %s", d.Id(), err) + return diag.Errorf("updating QuickSight Data Source (%s): %s", d.Id(), err) } if _, err := waitUpdated(ctx, conn, awsAccountId, dataSourceId); err != nil { - return diag.Errorf("error waiting for QuickSight Data Source (%s) to update: %s", d.Id(), err) + return 
diag.Errorf("waiting for QuickSight Data Source (%s) to update: %s", d.Id(), err) } } @@ -788,7 +794,7 @@ func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta _, err = conn.UpdateDataSourcePermissionsWithContext(ctx, params) if err != nil { - return diag.Errorf("error updating QuickSight Data Source (%s) permissions: %s", dataSourceId, err) + return diag.Errorf("updating QuickSight Data Source (%s) permissions: %s", dataSourceId, err) } } @@ -796,7 +802,7 @@ func resourceDataSourceUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceDataSourceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId, dataSourceId, err := ParseDataSourceID(d.Id()) if err != nil { @@ -815,7 +821,7 @@ func resourceDataSourceDelete(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.Errorf("error deleting QuickSight Data Source (%s): %s", d.Id(), err) + return diag.Errorf("deleting QuickSight Data Source (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/quicksight/data_source_test.go b/internal/service/quicksight/data_source_test.go index c26c066cd89..195b3af050d 100644 --- a/internal/service/quicksight/data_source_test.go +++ b/internal/service/quicksight/data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -202,7 +205,7 @@ func testAccCheckDataSourceExists(ctx context.Context, resourceName string, data return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) input := &quicksight.DescribeDataSourceInput{ AwsAccountId: aws.String(awsAccountID), @@ -227,7 +230,7 @@ func testAccCheckDataSourceExists(ctx context.Context, resourceName string, data func testAccCheckDataSourceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_data_source" { continue diff --git a/internal/service/quicksight/exports_test.go b/internal/service/quicksight/exports_test.go index 1e466944b40..b4b337bc677 100644 --- a/internal/service/quicksight/exports_test.go +++ b/internal/service/quicksight/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight // Exports for use in tests only. diff --git a/internal/service/quicksight/find.go b/internal/service/quicksight/find.go index 0ebbe2f0524..f330f11dde1 100644 --- a/internal/service/quicksight/find.go +++ b/internal/service/quicksight/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( diff --git a/internal/service/quicksight/flex.go b/internal/service/quicksight/flex.go index 220b2c92296..f0ad9ed6d66 100644 --- a/internal/service/quicksight/flex.go +++ b/internal/service/quicksight/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( diff --git a/internal/service/quicksight/flex_test.go b/internal/service/quicksight/flex_test.go index 8d90740766b..ad97db5ed58 100644 --- a/internal/service/quicksight/flex_test.go +++ b/internal/service/quicksight/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( diff --git a/internal/service/quicksight/folder.go b/internal/service/quicksight/folder.go index d28eee52b1b..b6c11f52939 100644 --- a/internal/service/quicksight/folder.go +++ b/internal/service/quicksight/folder.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -134,7 +137,7 @@ const ( ) func resourceFolderCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("aws_account_id"); ok { @@ -149,7 +152,7 @@ func resourceFolderCreate(ctx context.Context, d *schema.ResourceData, meta inte AwsAccountId: aws.String(awsAccountId), FolderId: aws.String(folderId), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("folder_type"); ok { @@ -177,7 +180,7 @@ func resourceFolderCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceFolderRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId, folderId, err := ParseFolderId(d.Id()) if err != nil { @@ -208,7 +211,7 @@ func resourceFolderRead(ctx context.Context, d *schema.ResourceData, meta interf } if err := d.Set("folder_path", flex.FlattenStringList(out.FolderPath)); err != nil { - return diag.Errorf("error 
setting folder_path: %s", err) + return diag.Errorf("setting folder_path: %s", err) } permsResp, err := conn.DescribeFolderPermissionsWithContext(ctx, &quicksight.DescribeFolderPermissionsInput{ @@ -217,17 +220,17 @@ func resourceFolderRead(ctx context.Context, d *schema.ResourceData, meta interf }) if err != nil { - return diag.Errorf("error describing QuickSight Folder (%s) Permissions: %s", d.Id(), err) + return diag.Errorf("describing QuickSight Folder (%s) Permissions: %s", d.Id(), err) } if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { - return diag.Errorf("error setting permissions: %s", err) + return diag.Errorf("setting permissions: %s", err) } return nil } func resourceFolderUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountId, folderId, err := ParseFolderId(d.Id()) if err != nil { @@ -271,7 +274,7 @@ func resourceFolderUpdate(ctx context.Context, d *schema.ResourceData, meta inte _, err = conn.UpdateFolderPermissionsWithContext(ctx, params) if err != nil { - return diag.Errorf("error updating QuickSight Folder (%s) permissions: %s", folderId, err) + return diag.Errorf("updating QuickSight Folder (%s) permissions: %s", folderId, err) } } @@ -279,7 +282,7 @@ func resourceFolderUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceFolderDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) log.Printf("[INFO] Deleting QuickSight Folder %s", d.Id()) diff --git a/internal/service/quicksight/folder_membership.go b/internal/service/quicksight/folder_membership.go index 1fededea849..f856b91b096 100644 --- a/internal/service/quicksight/folder_membership.go +++ b/internal/service/quicksight/folder_membership.go @@ 
-1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -19,8 +22,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -81,7 +84,7 @@ func (r *resourceFolderMembership) Schema(ctx context.Context, req resource.Sche } func (r *resourceFolderMembership) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan resourceFolderMembershipData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -124,7 +127,7 @@ func (r *resourceFolderMembership) Create(ctx context.Context, req resource.Crea } func (r *resourceFolderMembership) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceFolderMembershipData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -169,7 +172,7 @@ func (r *resourceFolderMembership) Update(ctx context.Context, req resource.Upda } func (r *resourceFolderMembership) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceFolderMembershipData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
diff --git a/internal/service/quicksight/folder_membership_test.go b/internal/service/quicksight/folder_membership_test.go index 932d4d3b93f..c5c2d3ab532 100644 --- a/internal/service/quicksight/folder_membership_test.go +++ b/internal/service/quicksight/folder_membership_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -82,7 +85,7 @@ func testAccCheckFolderMembershipExists(ctx context.Context, resourceName string return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) output, err := tfquicksight.FindFolderMembershipByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameFolderMembership, rs.Primary.ID, err) @@ -96,7 +99,7 @@ func testAccCheckFolderMembershipExists(ctx context.Context, resourceName string func testAccCheckFolderMembershipDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_folder_membership" { continue diff --git a/internal/service/quicksight/folder_test.go b/internal/service/quicksight/folder_test.go index a3774f43abe..098cac465f6 100644 --- a/internal/service/quicksight/folder_test.go +++ b/internal/service/quicksight/folder_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -240,7 +243,7 @@ func TestAccQuickSightFolder_parentFolder(t *testing.T) { func testAccCheckFolderDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_folder" { @@ -275,7 +278,7 @@ func testAccCheckFolderExists(ctx context.Context, name string, folder *quicksig return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameFolder, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) output, err := tfquicksight.FindFolderByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameFolder, rs.Primary.ID, err) diff --git a/internal/service/quicksight/generate.go b/internal/service/quicksight/generate.go index dea417ca89a..2f879bf8717 100644 --- a/internal/service/quicksight/generate.go +++ b/internal/service/quicksight/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package quicksight diff --git a/internal/service/quicksight/group.go b/internal/service/quicksight/group.go index f96a8c29836..1766338efcf 100644 --- a/internal/service/quicksight/group.go +++ b/internal/service/quicksight/group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -33,47 +36,49 @@ func ResourceGroup() *schema.Resource { StateContext: schema.ImportStatePassthroughContext, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - - "description": { - Type: schema.TypeString, - Optional: true, - }, - - "group_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - - "namespace": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Default: DefaultGroupNamespace, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 63), - validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), - ), - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "description": { + Type: schema.TypeString, + Optional: true, + }, + + "group_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "namespace": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: DefaultGroupNamespace, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 63), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), + ), + }, + } }, } } func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID := meta.(*conns.AWSClient).AccountID namespace := d.Get("namespace").(string) @@ 
-104,7 +109,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, groupName, err := GroupParseID(d.Id()) if err != nil { @@ -138,7 +143,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, groupName, err := GroupParseID(d.Id()) if err != nil { @@ -165,7 +170,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, groupName, err := GroupParseID(d.Id()) if err != nil { diff --git a/internal/service/quicksight/group_data_source.go b/internal/service/quicksight/group_data_source.go index 273bb1806d2..11359ae205e 100644 --- a/internal/service/quicksight/group_data_source.go +++ b/internal/service/quicksight/group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -20,44 +23,46 @@ func DataSourceGroup() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceGroupRead, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Computed: true, - }, - "group_name": { - Type: schema.TypeString, - Required: true, - }, - "namespace": { - Type: schema.TypeString, - Optional: true, - Default: DefaultGroupNamespace, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 63), - validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), - ), - }, - "principal_id": { - Type: schema.TypeString, - Computed: true, - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "group_name": { + Type: schema.TypeString, + Required: true, + }, + "namespace": { + Type: schema.TypeString, + Optional: true, + Default: DefaultGroupNamespace, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 63), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), + ), + }, + "principal_id": { + Type: schema.TypeString, + Computed: true, + }, + } }, } } func dataSourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID := meta.(*conns.AWSClient).AccountID if v, ok := 
d.GetOk("aws_account_id"); ok { diff --git a/internal/service/quicksight/group_data_source_test.go b/internal/service/quicksight/group_data_source_test.go index b4a9b612ba4..f4cc9da9854 100644 --- a/internal/service/quicksight/group_data_source_test.go +++ b/internal/service/quicksight/group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( diff --git a/internal/service/quicksight/group_membership.go b/internal/service/quicksight/group_membership.go index 0dbd42b8cc1..b91cea9d987 100644 --- a/internal/service/quicksight/group_membership.go +++ b/internal/service/quicksight/group_membership.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -26,46 +29,48 @@ func ResourceGroupMembership() *schema.Resource { StateContext: schema.ImportStatePassthroughContext, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - - "member_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - - "group_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - - "namespace": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Default: "default", - ValidateFunc: validation.StringInSlice([]string{ - "default", - }, false), - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "member_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "group_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "namespace": { + Type: schema.TypeString, + Optional: 
true, + ForceNew: true, + Default: "default", + ValidateFunc: validation.StringInSlice([]string{ + "default", + }, false), + }, + } }, } } func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID := meta.(*conns.AWSClient).AccountID namespace := d.Get("namespace").(string) @@ -85,7 +90,7 @@ func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, resp, err := conn.CreateGroupMembershipWithContext(ctx, createOpts) if err != nil { - return diag.Errorf("error adding QuickSight user (%s) to group (%s): %s", memberName, groupName, err) + return diag.Errorf("adding QuickSight user (%s) to group (%s): %s", memberName, groupName, err) } d.SetId(fmt.Sprintf("%s/%s/%s/%s", awsAccountID, namespace, groupName, aws.StringValue(resp.GroupMember.MemberName))) @@ -94,7 +99,7 @@ func resourceGroupMembershipCreate(ctx context.Context, d *schema.ResourceData, } func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, groupName, userName, err := GroupMembershipParseID(d.Id()) if err != nil { @@ -109,7 +114,7 @@ func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, me found, err := FindGroupMembership(ctx, conn, listInput, userName) if err != nil { - return diag.Errorf("Error listing QuickSight Group Memberships (%s): %s", d.Id(), err) + return diag.Errorf("listing QuickSight Group Memberships (%s): %s", d.Id(), err) } if !d.IsNewResource() && !found { @@ -127,7 +132,7 @@ func resourceGroupMembershipRead(ctx context.Context, d *schema.ResourceData, me } func resourceGroupMembershipDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := 
meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, groupName, userName, err := GroupMembershipParseID(d.Id()) if err != nil { @@ -145,7 +150,7 @@ func resourceGroupMembershipDelete(ctx context.Context, d *schema.ResourceData, if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { return nil } - return diag.Errorf("Error deleting QuickSight User-group membership %s: %s", d.Id(), err) + return diag.Errorf("deleting QuickSight User-group membership %s: %s", d.Id(), err) } return nil diff --git a/internal/service/quicksight/group_membership_test.go b/internal/service/quicksight/group_membership_test.go index e9fc5ef7da5..757f7a14969 100644 --- a/internal/service/quicksight/group_membership_test.go +++ b/internal/service/quicksight/group_membership_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -69,7 +72,7 @@ func TestAccQuickSightGroupMembership_disappears(t *testing.T) { func testAccCheckGroupMembershipDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_group_membership" { @@ -114,7 +117,7 @@ func testAccCheckGroupMembershipExists(ctx context.Context, resourceName string) return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) listInput := &quicksight.ListGroupMembershipsInput{ AwsAccountId: aws.String(awsAccountID), diff --git a/internal/service/quicksight/group_test.go b/internal/service/quicksight/group_test.go index aeabfad8540..adec502dd79 100644 --- a/internal/service/quicksight/group_test.go +++ 
b/internal/service/quicksight/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -127,7 +130,7 @@ func testAccCheckGroupExists(ctx context.Context, resourceName string, group *qu return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) input := &quicksight.DescribeGroupInput{ AwsAccountId: aws.String(awsAccountID), @@ -153,7 +156,7 @@ func testAccCheckGroupExists(ctx context.Context, resourceName string, group *qu func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_group" { continue @@ -186,7 +189,7 @@ func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckGroupDisappears(ctx context.Context, v *quicksight.Group) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) arn, err := arn.Parse(aws.StringValue(v.Arn)) if err != nil { diff --git a/internal/service/quicksight/iam_policy_assignment.go b/internal/service/quicksight/iam_policy_assignment.go index 68e572fe99b..1c307758397 100644 --- a/internal/service/quicksight/iam_policy_assignment.go +++ b/internal/service/quicksight/iam_policy_assignment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -23,8 +26,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -112,7 +115,7 @@ func (r *resourceIAMPolicyAssignment) Schema(ctx context.Context, req resource.S } func (r *resourceIAMPolicyAssignment) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan resourceIAMPolicyAssignmentData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -177,7 +180,7 @@ func (r *resourceIAMPolicyAssignment) Create(ctx context.Context, req resource.C } func (r *resourceIAMPolicyAssignment) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceIAMPolicyAssignmentData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -223,7 +226,7 @@ func (r *resourceIAMPolicyAssignment) Read(ctx context.Context, req resource.Rea } func (r *resourceIAMPolicyAssignment) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan, state resourceIAMPolicyAssignmentData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) 
@@ -276,7 +279,7 @@ func (r *resourceIAMPolicyAssignment) Update(ctx context.Context, req resource.U } func (r *resourceIAMPolicyAssignment) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceIAMPolicyAssignmentData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/quicksight/iam_policy_assignment_test.go b/internal/service/quicksight/iam_policy_assignment_test.go index 487fc70652b..0a230066745 100644 --- a/internal/service/quicksight/iam_policy_assignment_test.go +++ b/internal/service/quicksight/iam_policy_assignment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -162,7 +165,7 @@ func testAccCheckIAMPolicyAssignmentExists(ctx context.Context, resourceName str return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) output, err := tfquicksight.FindIAMPolicyAssignmentByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameIAMPolicyAssignment, rs.Primary.ID, err) @@ -176,7 +179,7 @@ func testAccCheckIAMPolicyAssignmentExists(ctx context.Context, resourceName str func testAccCheckIAMPolicyAssignmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_iam_policy_assignment" { continue diff --git a/internal/service/quicksight/ingestion.go b/internal/service/quicksight/ingestion.go index d2e079b98f7..4306e8e3dcd 100644 --- 
a/internal/service/quicksight/ingestion.go +++ b/internal/service/quicksight/ingestion.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -19,8 +22,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -86,7 +89,7 @@ func (r *resourceIngestion) Schema(ctx context.Context, req resource.SchemaReque } func (r *resourceIngestion) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan resourceIngestionData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -128,7 +131,7 @@ func (r *resourceIngestion) Create(ctx context.Context, req resource.CreateReque } func (r *resourceIngestion) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceIngestionData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -174,7 +177,7 @@ func (r *resourceIngestion) Update(ctx context.Context, req resource.UpdateReque } func (r *resourceIngestion) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceIngestionData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
diff --git a/internal/service/quicksight/ingestion_test.go b/internal/service/quicksight/ingestion_test.go index e370a9ea291..bbb86a86aba 100644 --- a/internal/service/quicksight/ingestion_test.go +++ b/internal/service/quicksight/ingestion_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -90,7 +93,7 @@ func testAccCheckIngestionExists(ctx context.Context, resourceName string, inges return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) output, err := tfquicksight.FindIngestionByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameIngestion, rs.Primary.ID, err) @@ -104,7 +107,7 @@ func testAccCheckIngestionExists(ctx context.Context, resourceName string, inges func testAccCheckIngestionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_ingestion" { continue diff --git a/internal/service/quicksight/namespace.go b/internal/service/quicksight/namespace.go index 9d0ef213656..3fc8abec7a1 100644 --- a/internal/service/quicksight/namespace.go +++ b/internal/service/quicksight/namespace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -20,8 +23,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -102,7 +105,7 @@ func (r *resourceNamespace) Schema(ctx context.Context, req resource.SchemaReque } func (r *resourceNamespace) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan resourceNamespaceData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -119,7 +122,7 @@ func (r *resourceNamespace) Create(ctx context.Context, req resource.CreateReque AwsAccountId: aws.String(plan.AWSAccountID.ValueString()), Namespace: aws.String(plan.Namespace.ValueString()), IdentityStore: aws.String(plan.IdentityStore.ValueString()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateNamespaceWithContext(ctx, &in) @@ -156,7 +159,7 @@ func (r *resourceNamespace) Create(ctx context.Context, req resource.CreateReque } func (r *resourceNamespace) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceNamespaceData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -212,7 +215,7 @@ func (r *resourceNamespace) Update(ctx context.Context, req resource.UpdateReque } func (r *resourceNamespace) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceNamespaceData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/quicksight/namespace_test.go b/internal/service/quicksight/namespace_test.go index dbf277bc523..5bde0c3152e 100644 --- a/internal/service/quicksight/namespace_test.go +++ b/internal/service/quicksight/namespace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -127,7 +130,7 @@ func testAccCheckNamespaceExists(ctx context.Context, resourceName string, names return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) output, err := tfquicksight.FindNamespaceByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameNamespace, rs.Primary.ID, err) @@ -141,7 +144,7 @@ func testAccCheckNamespaceExists(ctx context.Context, resourceName string, names func testAccCheckNamespaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_namespace" { continue diff --git a/internal/service/quicksight/quicksight_test.go b/internal/service/quicksight/quicksight_test.go index bc6400aca72..a9bb4cf8a7a 100644 --- a/internal/service/quicksight/quicksight_test.go +++ b/internal/service/quicksight/quicksight_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( diff --git a/internal/service/quicksight/refresh_schedule.go b/internal/service/quicksight/refresh_schedule.go index 87b015b9cec..51ce5ca4960 100644 --- a/internal/service/quicksight/refresh_schedule.go +++ b/internal/service/quicksight/refresh_schedule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -25,8 +28,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -219,7 +222,7 @@ var ( ) func (r *resourceRefreshSchedule) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan resourceRefreshScheduleData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -274,7 +277,7 @@ func (r *resourceRefreshSchedule) Create(ctx context.Context, req resource.Creat } func (r *resourceRefreshSchedule) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceRefreshScheduleData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -299,7 +302,7 @@ func (r *resourceRefreshSchedule) Read(ctx context.Context, req resource.ReadReq } func (r *resourceRefreshSchedule) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var config, plan, state resourceRefreshScheduleData resp.Diagnostics.Append(req.Config.Get(ctx, &config)...) @@ -370,7 +373,7 @@ func (r *resourceRefreshSchedule) Update(ctx context.Context, req resource.Updat } func (r *resourceRefreshSchedule) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceRefreshScheduleData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/quicksight/refresh_schedule_test.go b/internal/service/quicksight/refresh_schedule_test.go index f5e3e75e20b..6db5594c991 100644 --- a/internal/service/quicksight/refresh_schedule_test.go +++ b/internal/service/quicksight/refresh_schedule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -162,7 +165,7 @@ func testAccCheckRefreshScheduleExists(ctx context.Context, resourceName string, return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) _, output, err := tfquicksight.FindRefreshScheduleByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameRefreshSchedule, rs.Primary.ID, err) @@ -176,7 +179,7 @@ func testAccCheckRefreshScheduleExists(ctx context.Context, resourceName string, func testAccCheckRefreshScheduleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_refresh_schedule" { continue diff --git a/internal/service/quicksight/schema/analysis.go b/internal/service/quicksight/schema/analysis.go new file mode 100644 index 00000000000..bd595763c3d --- /dev/null +++ b/internal/service/quicksight/schema/analysis.go @@ -0,0 +1,257 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package schema + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func AnalysisDefinitionSchema() *schema.Schema { + return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AnalysisDefinition.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + ExactlyOneOf: []string{ + "definition", + "source_entity", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_identifiers_declarations": dataSetIdentifierDeclarationsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetIdentifierDeclaration.html + "analysis_defaults": analysisDefaultSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AnalysisDefaults.html + "calculated_fields": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CalculatedField.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 500, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_identifier": stringSchema(true, validation.StringLenBetween(1, 2048)), + "expression": stringSchema(true, validation.StringLenBetween(1, 4096)), + "name": stringSchema(true, validation.StringLenBetween(1, 128)), + }, + }, + }, + "column_configurations": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ColumnConfiguration.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 200, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column": columnSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ColumnIdentifier.html + "format_configuration": formatConfigurationSchema(), // 
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FormatConfiguration.html + "role": stringSchema(false, validation.StringInSlice(quicksight.ColumnRole_Values(), false)), + }, + }, + }, + "filter_groups": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterGroup.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 2000, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cross_dataset": stringSchema(true, validation.StringInSlice(quicksight.CrossDatasetTypes_Values(), false)), + "filter_group_id": idSchema(), + "filters": filtersSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Filter.html + "scope_configuration": filterScopeConfigurationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterScopeConfiguration.html + "status": stringSchema(false, validation.StringInSlice(quicksight.Status_Values(), false)), + }, + }, + }, + "parameter_declarations": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ParameterDeclaration.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 200, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "date_time_parameter_declaration": dateTimeParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DateTimeParameterDeclaration.html + "decimal_parameter_declaration": decimalParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DecimalParameterDeclaration.html + "integer_parameter_declaration": integerParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_IntegerParameterDeclaration.html + "string_parameter_declaration": stringParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_StringParameterDeclaration.html + }, + }, + }, + "sheets": { // 
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetDefinition.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 20, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "sheet_id": idSchema(), + "content_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(quicksight.SheetContentType_Values(), false), + }, + "description": stringSchema(false, validation.StringLenBetween(1, 1024)), + "filter_controls": filterControlsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterControl.html + "layouts": layoutSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Layout.html + "name": stringSchema(false, validation.StringLenBetween(1, 2048)), + "parameter_controls": parameterControlsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ParameterControl.html + "sheet_control_layouts": sheetControlLayoutsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetControlLayout.html + "text_boxes": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetTextBox.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 100, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "sheet_text_box_id": idSchema(), + "content": stringSchema(false, validation.StringLenBetween(1, 150000)), + }, + }, + }, + "title": stringSchema(false, validation.StringLenBetween(1, 1024)), + "visuals": visualsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Visual.html + }, + }, + }, + }, + }, + } +} + +func AnalysisSourceEntitySchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ExactlyOneOf: []string{ + "definition", + "source_entity", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "source_template": { + Type: schema.TypeList, + MaxItems: 1, + 
Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "data_set_references": dataSetReferencesSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetReference.html + }, + }, + }, + }, + }, + } +} + +func ExpandAnalysisSourceEntity(tfList []interface{}) *quicksight.AnalysisSourceEntity { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + sourceEntity := &quicksight.AnalysisSourceEntity{} + + if v, ok := tfMap["source_template"].([]interface{}); ok && len(v) > 0 { + sourceEntity.SourceTemplate = expandAnalysisSourceTemplate(v[0].(map[string]interface{})) + } + + return sourceEntity +} + +func expandAnalysisSourceTemplate(tfMap map[string]interface{}) *quicksight.AnalysisSourceTemplate { + if tfMap == nil { + return nil + } + + sourceTemplate := &quicksight.AnalysisSourceTemplate{} + if v, ok := tfMap["arn"].(string); ok && v != "" { + sourceTemplate.Arn = aws.String(v) + } + if v, ok := tfMap["data_set_references"].([]interface{}); ok && len(v) > 0 { + sourceTemplate.DataSetReferences = expandDataSetReferences(v) + } + + return sourceTemplate +} + +func ExpandAnalysisDefinition(tfList []interface{}) *quicksight.AnalysisDefinition { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + definition := &quicksight.AnalysisDefinition{} + + if v, ok := tfMap["analysis_defaults"].([]interface{}); ok && len(v) > 0 { + definition.AnalysisDefaults = expandAnalysisDefaults(v) + } + if v, ok := tfMap["calculated_fields"].([]interface{}); ok && len(v) > 0 { + definition.CalculatedFields = expandCalculatedFields(v) + } + if v, ok := tfMap["column_configurations"].([]interface{}); ok && len(v) > 0 { + definition.ColumnConfigurations = 
expandColumnConfigurations(v) + } + if v, ok := tfMap["data_set_identifiers_declarations"].([]interface{}); ok && len(v) > 0 { + definition.DataSetIdentifierDeclarations = expandDataSetIdentifierDeclarations(v) + } + if v, ok := tfMap["filter_groups"].([]interface{}); ok && len(v) > 0 { + definition.FilterGroups = expandFilterGroups(v) + } + if v, ok := tfMap["parameter_declarations"].([]interface{}); ok && len(v) > 0 { + definition.ParameterDeclarations = expandParameterDeclarations(v) + } + if v, ok := tfMap["sheets"].([]interface{}); ok && len(v) > 0 { + definition.Sheets = expandSheetDefinitions(v) + } + + return definition +} + +func FlattenAnalysisDefinition(apiObject *quicksight.AnalysisDefinition) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AnalysisDefaults != nil { + tfMap["analysis_defaults"] = flattenAnalysisDefaults(apiObject.AnalysisDefaults) + } + if apiObject.CalculatedFields != nil { + tfMap["calculated_fields"] = flattenCalculatedFields(apiObject.CalculatedFields) + } + if apiObject.ColumnConfigurations != nil { + tfMap["column_configurations"] = flattenColumnConfigurations(apiObject.ColumnConfigurations) + } + if apiObject.DataSetIdentifierDeclarations != nil { + tfMap["data_set_identifiers_declarations"] = flattenDataSetIdentifierDeclarations(apiObject.DataSetIdentifierDeclarations) + } + if apiObject.FilterGroups != nil { + tfMap["filter_groups"] = flattenFilterGroups(apiObject.FilterGroups) + } + if apiObject.ParameterDeclarations != nil { + tfMap["parameter_declarations"] = flattenParameterDeclarations(apiObject.ParameterDeclarations) + } + if apiObject.Sheets != nil { + tfMap["sheets"] = flattenSheetDefinitions(apiObject.Sheets) + } + + return []interface{}{tfMap} +} diff --git a/internal/service/quicksight/schema/dashboard.go b/internal/service/quicksight/schema/dashboard.go new file mode 100644 index 00000000000..5017c39efb2 --- /dev/null +++ 
b/internal/service/quicksight/schema/dashboard.go @@ -0,0 +1,758 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package schema + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func DashboardDefinitionSchema() *schema.Schema { + return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DashboardVersionDefinition.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + ExactlyOneOf: []string{ + "definition", + "source_entity", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_identifiers_declarations": dataSetIdentifierDeclarationsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetIdentifierDeclaration.html + "analysis_defaults": analysisDefaultSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AnalysisDefaults.html + "calculated_fields": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CalculatedField.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 500, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_identifier": stringSchema(true, validation.StringLenBetween(1, 2048)), + "expression": stringSchema(true, validation.StringLenBetween(1, 4096)), + "name": stringSchema(true, validation.StringLenBetween(1, 128)), + }, + }, + }, + "column_configurations": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ColumnConfiguration.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 200, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "column": columnSchema(), // 
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ColumnIdentifier.html + "format_configuration": formatConfigurationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FormatConfiguration.html + "role": stringSchema(false, validation.StringInSlice(quicksight.ColumnRole_Values(), false)), + }, + }, + }, + "filter_groups": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterGroup.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 2000, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cross_dataset": stringSchema(true, validation.StringInSlice(quicksight.CrossDatasetTypes_Values(), false)), + "filter_group_id": idSchema(), + "filters": filtersSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Filter.html + "scope_configuration": filterScopeConfigurationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterScopeConfiguration.html + "status": stringSchema(false, validation.StringInSlice(quicksight.Status_Values(), false)), + }, + }, + }, + "parameter_declarations": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ParameterDeclaration.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 200, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "date_time_parameter_declaration": dateTimeParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DateTimeParameterDeclaration.html + "decimal_parameter_declaration": decimalParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DecimalParameterDeclaration.html + "integer_parameter_declaration": integerParameterDeclarationSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_IntegerParameterDeclaration.html + "string_parameter_declaration": stringParameterDeclarationSchema(), // 
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_StringParameterDeclaration.html + }, + }, + }, + "sheets": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetDefinition.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 20, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "sheet_id": idSchema(), + "content_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(quicksight.SheetContentType_Values(), false), + }, + "description": stringSchema(false, validation.StringLenBetween(1, 1024)), + "filter_controls": filterControlsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterControl.html + "layouts": layoutSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Layout.html + "name": stringSchema(false, validation.StringLenBetween(1, 2048)), + "parameter_controls": parameterControlsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ParameterControl.html + "sheet_control_layouts": sheetControlLayoutsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetControlLayout.html + "text_boxes": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetTextBox.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 100, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "sheet_text_box_id": idSchema(), + "content": stringSchema(false, validation.StringLenBetween(1, 150000)), + }, + }, + }, + "title": stringSchema(false, validation.StringLenBetween(1, 1024)), + "visuals": visualsSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Visual.html + }, + }, + }, + }, + }, + } +} + +func DashboardPublishOptionsSchema() *schema.Schema { + return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DashboardPublishOptions.html + Type: schema.TypeList, + 
MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ad_hoc_filtering_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AdHocFilteringOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }, + }, + }, + }, + "data_point_drill_up_down_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataPointDrillUpDownOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "data_point_menu_label_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataPointMenuLabelOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "data_point_tooltip_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataPointTooltipOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "export_to_csv_option": { // 
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ExportToCSVOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "export_with_hidden_fields_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ExportWithHiddenFieldsOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusDisabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "sheet_controls_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetControlsOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "visibility_state": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.DashboardUIStateCollapsed, + ValidateFunc: validation.StringInSlice(quicksight.DashboardUIState_Values(), false), + }, + }, + }, + }, + "sheet_layout_element_maximization_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetLayoutElementMaximizationOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "visual_axis_sort_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_VisualAxisSortOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, 
+ Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + "visual_menu_option": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_VisualMenuOption.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_status": { + Type: schema.TypeString, + Optional: true, + Default: quicksight.StatusEnabled, + ValidateFunc: validation.StringInSlice(quicksight.Status_Values(), false), + }}, + }, + }, + }, + }, + } +} + +func DashboardSourceEntitySchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ExactlyOneOf: []string{ + "definition", + "source_entity", + }, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "source_template": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "data_set_references": dataSetReferencesSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetReference.html + }, + }, + }, + }, + }, + } +} + +func ExpandDashboardSourceEntity(tfList []interface{}) *quicksight.DashboardSourceEntity { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + sourceEntity := &quicksight.DashboardSourceEntity{} + + if v, ok := tfMap["source_template"].([]interface{}); ok && len(v) > 0 { + sourceEntity.SourceTemplate = expandDashboardSourceTemplate(v[0].(map[string]interface{})) + } + + return sourceEntity +} + +func expandDashboardSourceTemplate(tfMap map[string]interface{}) *quicksight.DashboardSourceTemplate { + if 
tfMap == nil { + return nil + } + + sourceTemplate := &quicksight.DashboardSourceTemplate{} + if v, ok := tfMap["arn"].(string); ok && v != "" { + sourceTemplate.Arn = aws.String(v) + } + if v, ok := tfMap["data_set_references"].([]interface{}); ok && len(v) > 0 { + sourceTemplate.DataSetReferences = expandDataSetReferences(v) + } + + return sourceTemplate +} + +func ExpandDashboardDefinition(tfList []interface{}) *quicksight.DashboardVersionDefinition { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + definition := &quicksight.DashboardVersionDefinition{} + + if v, ok := tfMap["analysis_defaults"].([]interface{}); ok && len(v) > 0 { + definition.AnalysisDefaults = expandAnalysisDefaults(v) + } + if v, ok := tfMap["calculated_fields"].([]interface{}); ok && len(v) > 0 { + definition.CalculatedFields = expandCalculatedFields(v) + } + if v, ok := tfMap["column_configurations"].([]interface{}); ok && len(v) > 0 { + definition.ColumnConfigurations = expandColumnConfigurations(v) + } + if v, ok := tfMap["data_set_identifiers_declarations"].([]interface{}); ok && len(v) > 0 { + definition.DataSetIdentifierDeclarations = expandDataSetIdentifierDeclarations(v) + } + if v, ok := tfMap["filter_groups"].([]interface{}); ok && len(v) > 0 { + definition.FilterGroups = expandFilterGroups(v) + } + if v, ok := tfMap["parameter_declarations"].([]interface{}); ok && len(v) > 0 { + definition.ParameterDeclarations = expandParameterDeclarations(v) + } + if v, ok := tfMap["sheets"].([]interface{}); ok && len(v) > 0 { + definition.Sheets = expandSheetDefinitions(v) + } + + return definition +} + +func ExpandDashboardPublishOptions(tfList []interface{}) *quicksight.DashboardPublishOptions { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + options := &quicksight.DashboardPublishOptions{} + + if v, ok 
:= tfMap["ad_hoc_filtering_option"].([]interface{}); ok && len(v) > 0 { + options.AdHocFilteringOption = expandAdHocFilteringOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["data_point_drill_up_down_option"].([]interface{}); ok && len(v) > 0 { + options.DataPointDrillUpDownOption = expandDataPointDrillUpDownOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["data_point_menu_label_option"].([]interface{}); ok && len(v) > 0 { + options.DataPointMenuLabelOption = expandDataPointMenuLabelOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["data_point_tooltip_option"].([]interface{}); ok && len(v) > 0 { + options.DataPointTooltipOption = expandDataPointTooltipOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["export_to_csv_option"].([]interface{}); ok && len(v) > 0 { + options.ExportToCSVOption = expandExportToCSVOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["export_with_hidden_fields_option"].([]interface{}); ok && len(v) > 0 { + options.ExportWithHiddenFieldsOption = expandExportWithHiddenFieldsOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["sheet_controls_option"].([]interface{}); ok && len(v) > 0 { + options.SheetControlsOption = expandSheetControlsOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["sheet_layout_element_maximization_option"].([]interface{}); ok && len(v) > 0 { + options.SheetLayoutElementMaximizationOption = expandSheetLayoutElementMaximizationOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["visual_axis_sort_option"].([]interface{}); ok && len(v) > 0 { + options.VisualAxisSortOption = expandVisualAxisSortOption(v[0].(map[string]interface{})) + } + if v, ok := tfMap["visual_menu_option"].([]interface{}); ok && len(v) > 0 { + options.VisualMenuOption = expandVisualMenuOption(v[0].(map[string]interface{})) + } + + return options +} + +func expandAdHocFilteringOption(tfMap map[string]interface{}) *quicksight.AdHocFilteringOption { + if tfMap == nil { + 
return nil + } + + options := &quicksight.AdHocFilteringOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandDataPointDrillUpDownOption(tfMap map[string]interface{}) *quicksight.DataPointDrillUpDownOption { + if tfMap == nil { + return nil + } + + options := &quicksight.DataPointDrillUpDownOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandDataPointMenuLabelOption(tfMap map[string]interface{}) *quicksight.DataPointMenuLabelOption { + if tfMap == nil { + return nil + } + + options := &quicksight.DataPointMenuLabelOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandDataPointTooltipOption(tfMap map[string]interface{}) *quicksight.DataPointTooltipOption { + if tfMap == nil { + return nil + } + + options := &quicksight.DataPointTooltipOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandExportToCSVOption(tfMap map[string]interface{}) *quicksight.ExportToCSVOption { + if tfMap == nil { + return nil + } + + options := &quicksight.ExportToCSVOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandExportWithHiddenFieldsOption(tfMap map[string]interface{}) *quicksight.ExportWithHiddenFieldsOption { + if tfMap == nil { + return nil + } + + options := &quicksight.ExportWithHiddenFieldsOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandSheetLayoutElementMaximizationOption(tfMap map[string]interface{}) 
*quicksight.SheetLayoutElementMaximizationOption { + if tfMap == nil { + return nil + } + + options := &quicksight.SheetLayoutElementMaximizationOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandSheetControlsOption(tfMap map[string]interface{}) *quicksight.SheetControlsOption { + if tfMap == nil { + return nil + } + + options := &quicksight.SheetControlsOption{} + if v, ok := tfMap["visibility_state"].(string); ok && v != "" { + options.VisibilityState = aws.String(v) + } + + return options +} + +func expandVisualAxisSortOption(tfMap map[string]interface{}) *quicksight.VisualAxisSortOption { + if tfMap == nil { + return nil + } + + options := &quicksight.VisualAxisSortOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func expandVisualMenuOption(tfMap map[string]interface{}) *quicksight.VisualMenuOption { + if tfMap == nil { + return nil + } + + options := &quicksight.VisualMenuOption{} + if v, ok := tfMap["availability_status"].(string); ok && v != "" { + options.AvailabilityStatus = aws.String(v) + } + + return options +} + +func FlattenDashboardDefinition(apiObject *quicksight.DashboardVersionDefinition) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AnalysisDefaults != nil { + tfMap["analysis_defaults"] = flattenAnalysisDefaults(apiObject.AnalysisDefaults) + } + if apiObject.CalculatedFields != nil { + tfMap["calculated_fields"] = flattenCalculatedFields(apiObject.CalculatedFields) + } + if apiObject.ColumnConfigurations != nil { + tfMap["column_configurations"] = flattenColumnConfigurations(apiObject.ColumnConfigurations) + } + if apiObject.DataSetIdentifierDeclarations != nil { + tfMap["data_set_identifiers_declarations"] = 
flattenDataSetIdentifierDeclarations(apiObject.DataSetIdentifierDeclarations) + } + if apiObject.FilterGroups != nil { + tfMap["filter_groups"] = flattenFilterGroups(apiObject.FilterGroups) + } + if apiObject.ParameterDeclarations != nil { + tfMap["parameter_declarations"] = flattenParameterDeclarations(apiObject.ParameterDeclarations) + } + if apiObject.Sheets != nil { + tfMap["sheets"] = flattenSheetDefinitions(apiObject.Sheets) + } + + return []interface{}{tfMap} +} + +func FlattenDashboardPublishOptions(apiObject *quicksight.DashboardPublishOptions) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AdHocFilteringOption != nil { + tfMap["ad_hoc_filtering_option"] = flattenAdHocFilteringOption(apiObject.AdHocFilteringOption) + } + if apiObject.DataPointDrillUpDownOption != nil { + tfMap["data_point_drill_up_down_option"] = flattenDataPointDrillUpDownOption(apiObject.DataPointDrillUpDownOption) + } + if apiObject.DataPointMenuLabelOption != nil { + tfMap["data_point_menu_label_option"] = flattenDataPointMenuLabelOption(apiObject.DataPointMenuLabelOption) + } + if apiObject.DataPointTooltipOption != nil { + tfMap["data_point_tooltip_option"] = flattenDataPointTooltipOption(apiObject.DataPointTooltipOption) + } + if apiObject.ExportToCSVOption != nil { + tfMap["export_to_csv_option"] = flattenExportToCSVOption(apiObject.ExportToCSVOption) + } + if apiObject.ExportWithHiddenFieldsOption != nil { + tfMap["export_with_hidden_fields_option"] = flattenExportWithHiddenFieldsOption(apiObject.ExportWithHiddenFieldsOption) + } + if apiObject.SheetControlsOption != nil { + tfMap["sheet_controls_option"] = flattenSheetControlsOption(apiObject.SheetControlsOption) + } + if apiObject.SheetLayoutElementMaximizationOption != nil { + tfMap["sheet_layout_element_maximization_option"] = flattenSheetLayoutElementMaximizationOption(apiObject.SheetLayoutElementMaximizationOption) + } + if apiObject.VisualAxisSortOption != nil { 
+ tfMap["visual_axis_sort_option"] = flattenVisualAxisSortOption(apiObject.VisualAxisSortOption) + } + if apiObject.VisualMenuOption != nil { + tfMap["visual_menu_option"] = flattenVisualMenuOption(apiObject.VisualMenuOption) + } + + return []interface{}{tfMap} +} + +func flattenAdHocFilteringOption(apiObject *quicksight.AdHocFilteringOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenDataPointDrillUpDownOption(apiObject *quicksight.DataPointDrillUpDownOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenDataPointMenuLabelOption(apiObject *quicksight.DataPointMenuLabelOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenDataPointTooltipOption(apiObject *quicksight.DataPointTooltipOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenExportToCSVOption(apiObject *quicksight.ExportToCSVOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func 
flattenExportWithHiddenFieldsOption(apiObject *quicksight.ExportWithHiddenFieldsOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenSheetControlsOption(apiObject *quicksight.SheetControlsOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.VisibilityState != nil { + tfMap["visibility_state"] = aws.StringValue(apiObject.VisibilityState) + } + + return []interface{}{tfMap} +} + +func flattenSheetLayoutElementMaximizationOption(apiObject *quicksight.SheetLayoutElementMaximizationOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenVisualAxisSortOption(apiObject *quicksight.VisualAxisSortOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} + +func flattenVisualMenuOption(apiObject *quicksight.VisualMenuOption) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.AvailabilityStatus != nil { + tfMap["availability_status"] = aws.StringValue(apiObject.AvailabilityStatus) + } + + return []interface{}{tfMap} +} diff --git a/internal/service/quicksight/schema/dataset.go b/internal/service/quicksight/schema/dataset.go new file mode 100644 index 00000000000..67a57f4385e --- /dev/null +++ b/internal/service/quicksight/schema/dataset.go @@ -0,0 +1,112 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package schema + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func dataSetIdentifierDeclarationsSchema() *schema.Schema { + return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetIdentifierDeclaration.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 50, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_arn": stringSchema(false, verify.ValidARN), + "identifier": stringSchema(false, validation.StringLenBetween(1, 2048)), + }, + }, + } +} + +func dataSetReferencesSchema() *schema.Schema { + return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetReference.html + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_set_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "data_set_placeholder": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + } +} + +func expandDataSetIdentifierDeclarations(tfList []interface{}) []*quicksight.DataSetIdentifierDeclaration { + if len(tfList) == 0 { + return nil + } + + var identifiers []*quicksight.DataSetIdentifierDeclaration + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + identifier := expandDataSetIdentifierDeclaration(tfMap) + if identifier == nil { + continue + } + + identifiers = append(identifiers, identifier) + } + + return identifiers +} + +func expandDataSetIdentifierDeclaration(tfMap map[string]interface{}) *quicksight.DataSetIdentifierDeclaration { + if tfMap == nil { + return nil + } + + identifier := 
&quicksight.DataSetIdentifierDeclaration{} + + if v, ok := tfMap["data_set_arn"].(string); ok && v != "" { + identifier.DataSetArn = aws.String(v) + } + if v, ok := tfMap["identifier"].(string); ok && v != "" { + identifier.Identifier = aws.String(v) + } + + return identifier +} + +func flattenDataSetIdentifierDeclarations(apiObject []*quicksight.DataSetIdentifierDeclaration) []interface{} { + if len(apiObject) == 0 { + return nil + } + + var tfList []interface{} + for _, identifier := range apiObject { + if identifier == nil { + continue + } + + tfMap := map[string]interface{}{} + if identifier.DataSetArn != nil { + tfMap["data_set_arn"] = aws.StringValue(identifier.DataSetArn) + } + if identifier.Identifier != nil { + tfMap["identifier"] = aws.StringValue(identifier.Identifier) + } + tfList = append(tfList, tfMap) + } + + return tfList +} diff --git a/internal/service/quicksight/schema/parameters.go b/internal/service/quicksight/schema/parameters.go new file mode 100644 index 00000000000..cd678687a53 --- /dev/null +++ b/internal/service/quicksight/schema/parameters.go @@ -0,0 +1,294 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package schema + +import ( + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ParametersSchema() *schema.Schema { + return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Parameters.html + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "date_time_parameters": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DateTimeParameter.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 100, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": stringSchema(true, validation.StringMatch(regexp.MustCompile(`.*\S.*`), "")), + "values": { + Type: schema.TypeList, + MinItems: 1, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidUTCTimestamp, + }, + }, + }, + }, + }, + "decimal_parameters": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DecimalParameter.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 100, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": stringSchema(true, validation.StringMatch(regexp.MustCompile(`.*\S.*`), "")), + "values": { + Type: schema.TypeList, + MinItems: 1, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeFloat, + }, + }, + }, + }, + }, + "integer_parameters": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_IntegerParameter.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 100, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + 
"name": stringSchema(true, validation.StringMatch(regexp.MustCompile(`.*\S.*`), "")), + "values": { + Type: schema.TypeList, + MinItems: 1, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeInt, + }, + }, + }, + }, + }, + "string_parameters": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_StringParameter.html + Type: schema.TypeList, + MinItems: 1, + MaxItems: 100, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": stringSchema(true, validation.StringMatch(regexp.MustCompile(`.*\S.*`), "")), + "values": { + Type: schema.TypeList, + MinItems: 1, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + }, + }, + } +} + +func ExpandParameters(tfList []interface{}) *quicksight.Parameters { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + parameters := &quicksight.Parameters{} + + if v, ok := tfMap["date_time_parameters"].([]interface{}); ok && len(v) > 0 { + parameters.DateTimeParameters = expandDateTimeParameters(v) + } + if v, ok := tfMap["decimal_parameters"].([]interface{}); ok && len(v) > 0 { + parameters.DecimalParameters = expandDecimalParameters(v) + } + if v, ok := tfMap["integer_parameters"].([]interface{}); ok && len(v) > 0 { + parameters.IntegerParameters = expandIntegerParameters(v) + } + if v, ok := tfMap["string_parameters"].([]interface{}); ok && len(v) > 0 { + parameters.StringParameters = expandStringParameters(v) + } + + return parameters +} + +func expandDateTimeParameters(tfList []interface{}) []*quicksight.DateTimeParameter { + if len(tfList) == 0 { + return nil + } + + var parameters []*quicksight.DateTimeParameter + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + parameter := expandDateTimeParameter(tfMap) + if parameter == nil { + continue + } + + parameters = 
append(parameters, parameter) + } + + return parameters +} + +func expandDateTimeParameter(tfMap map[string]interface{}) *quicksight.DateTimeParameter { + if tfMap == nil { + return nil + } + + parameter := &quicksight.DateTimeParameter{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + parameter.Name = aws.String(v) + } + if v, ok := tfMap["values"].([]interface{}); ok && len(v) > 0 { + parameter.Values = flex.ExpandStringTimeList(v, time.RFC3339) + } + + return parameter +} + +func expandDecimalParameters(tfList []interface{}) []*quicksight.DecimalParameter { + if len(tfList) == 0 { + return nil + } + + var parameters []*quicksight.DecimalParameter + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + parameter := expandDecimalParameter(tfMap) + if parameter == nil { + continue + } + + parameters = append(parameters, parameter) + } + + return parameters +} + +func expandDecimalParameter(tfMap map[string]interface{}) *quicksight.DecimalParameter { + if tfMap == nil { + return nil + } + + parameter := &quicksight.DecimalParameter{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + parameter.Name = aws.String(v) + } + if v, ok := tfMap["values"].([]interface{}); ok && len(v) > 0 { + parameter.Values = flex.ExpandFloat64List(v) + } + + return parameter +} + +func expandIntegerParameters(tfList []interface{}) []*quicksight.IntegerParameter { + if len(tfList) == 0 { + return nil + } + + var parameters []*quicksight.IntegerParameter + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + parameter := expandIntegerParameter(tfMap) + if parameter == nil { + continue + } + + parameters = append(parameters, parameter) + } + + return parameters +} + +func expandIntegerParameter(tfMap map[string]interface{}) *quicksight.IntegerParameter { + if tfMap == nil { + return nil + } + + parameter := &quicksight.IntegerParameter{} + + if v, ok := 
tfMap["name"].(string); ok && v != "" { + parameter.Name = aws.String(v) + } + if v, ok := tfMap["values"].([]interface{}); ok && len(v) > 0 { + parameter.Values = flex.ExpandInt64List(v) + } + + return parameter +} + +func expandStringParameters(tfList []interface{}) []*quicksight.StringParameter { + if len(tfList) == 0 { + return nil + } + + var parameters []*quicksight.StringParameter + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + parameter := expandStringParameter(tfMap) + if parameter == nil { + continue + } + + parameters = append(parameters, parameter) + } + + return parameters +} + +func expandStringParameter(tfMap map[string]interface{}) *quicksight.StringParameter { + if tfMap == nil { + return nil + } + + parameter := &quicksight.StringParameter{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + parameter.Name = aws.String(v) + } + if v, ok := tfMap["values"].([]interface{}); ok && len(v) > 0 { + parameter.Values = flex.ExpandStringList(v) + } + + return parameter +} diff --git a/internal/service/quicksight/schema/template.go b/internal/service/quicksight/schema/template.go index ed00d3ba7d3..8f343b2b4a4 100644 --- a/internal/service/quicksight/schema/template.go +++ b/internal/service/quicksight/schema/template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -10,7 +13,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/verify" ) -func DefinitionSchema() *schema.Schema { +func TemplateDefinitionSchema() *schema.Schema { return &schema.Schema{ // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_TemplateVersionDefinition.html Type: schema.TypeList, MaxItems: 1, @@ -321,7 +324,7 @@ func rollingDateConfigurationSchema() *schema.Schema { } } -func SourceEntitySchema() *schema.Schema { +func TemplateSourceEntitySchema() *schema.Schema { return &schema.Schema{ Type: schema.TypeList, MaxItems: 1, @@ -344,24 +347,7 @@ func SourceEntitySchema() *schema.Schema { Required: true, ValidateFunc: verify.ValidARN, }, - "data_set_references": { - Type: schema.TypeList, - Required: true, - MinItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "data_set_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, - }, - "data_set_placeholder": { - Type: schema.TypeString, - Required: true, - }, - }, - }, - }, + "data_set_references": dataSetReferencesSchema(), // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetReference.html }, }, }, @@ -385,7 +371,7 @@ func SourceEntitySchema() *schema.Schema { } } -func ExpandSourceEntity(tfList []interface{}) *quicksight.TemplateSourceEntity { +func ExpandTemplateSourceEntity(tfList []interface{}) *quicksight.TemplateSourceEntity { if len(tfList) == 0 || tfList[0] == nil { return nil } @@ -400,7 +386,7 @@ func ExpandSourceEntity(tfList []interface{}) *quicksight.TemplateSourceEntity { if v, ok := tfMap["source_analysis"].([]interface{}); ok && len(v) > 0 { sourceEntity.SourceAnalysis = expandSourceAnalysis(v[0].(map[string]interface{})) } else if v, ok := tfMap["source_template"].([]interface{}); ok && len(v) > 0 { - sourceEntity.SourceTemplate = expandSourceTemplate(v[0].(map[string]interface{})) + sourceEntity.SourceTemplate = 
expandTemplateSourceTemplate(v[0].(map[string]interface{})) } return sourceEntity @@ -461,7 +447,7 @@ func expandDataSetReference(tfMap map[string]interface{}) *quicksight.DataSetRef return dataSetReference } -func expandSourceTemplate(tfMap map[string]interface{}) *quicksight.TemplateSourceTemplate { +func expandTemplateSourceTemplate(tfMap map[string]interface{}) *quicksight.TemplateSourceTemplate { if tfMap == nil { return nil } @@ -474,7 +460,7 @@ func expandSourceTemplate(tfMap map[string]interface{}) *quicksight.TemplateSour return sourceTemplate } -func ExpandDefinition(tfList []interface{}) *quicksight.TemplateVersionDefinition { +func ExpandTemplateDefinition(tfList []interface{}) *quicksight.TemplateVersionDefinition { if len(tfList) == 0 || tfList[0] == nil { return nil } diff --git a/internal/service/quicksight/schema/template_control.go b/internal/service/quicksight/schema/template_control.go index 13f43d244e9..b4c50f54790 100644 --- a/internal/service/quicksight/schema/template_control.go +++ b/internal/service/quicksight/schema/template_control.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/template_filter.go b/internal/service/quicksight/schema/template_filter.go index cb09037220d..92c754dd093 100644 --- a/internal/service/quicksight/schema/template_filter.go +++ b/internal/service/quicksight/schema/template_filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -482,7 +485,7 @@ func filterScopeConfigurationSchema() *schema.Schema { Optional: true, MinItems: 1, MaxItems: 50, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, diff --git a/internal/service/quicksight/schema/template_format.go b/internal/service/quicksight/schema/template_format.go index 075b4cf27d5..217b2eb1aa9 100644 --- a/internal/service/quicksight/schema/template_format.go +++ b/internal/service/quicksight/schema/template_format.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/template_parameter.go b/internal/service/quicksight/schema/template_parameter.go index fd5d2713515..77fd0e3ece3 100644 --- a/internal/service/quicksight/schema/template_parameter.go +++ b/internal/service/quicksight/schema/template_parameter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -551,7 +554,7 @@ func expandDecimalValueWhenUnsetConfiguration(tfList []interface{}) *quicksight. config := &quicksight.DecimalValueWhenUnsetConfiguration{} - if v, ok := tfMap["custom_value"].(float64); ok { + if v, ok := tfMap["custom_value"].(float64); ok && v != 0.0 { config.CustomValue = aws.Float64(v) } if v, ok := tfMap["value_when_unset_option"].(string); ok && v != "" { @@ -623,7 +626,7 @@ func expandIntegerValueWhenUnsetConfiguration(tfList []interface{}) *quicksight. 
config := &quicksight.IntegerValueWhenUnsetConfiguration{} - if v, ok := tfMap["custom_value"].(int); ok { + if v, ok := tfMap["custom_value"].(int); ok && v != 0 { config.CustomValue = aws.Int64(int64(v)) } if v, ok := tfMap["value_when_unset_option"].(string); ok && v != "" { @@ -695,7 +698,7 @@ func expandStringValueWhenUnsetConfiguration(tfList []interface{}) *quicksight.S config := &quicksight.StringValueWhenUnsetConfiguration{} - if v, ok := tfMap["custom_value"].(string); ok { + if v, ok := tfMap["custom_value"].(string); ok && v != "" { config.CustomValue = aws.String(v) } if v, ok := tfMap["value_when_unset_option"].(string); ok && v != "" { diff --git a/internal/service/quicksight/schema/template_sheet.go b/internal/service/quicksight/schema/template_sheet.go index ddd8309b0af..ba56e22dd62 100644 --- a/internal/service/quicksight/schema/template_sheet.go +++ b/internal/service/quicksight/schema/template_sheet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -756,6 +759,9 @@ func expandGridLayoutScreenCanvasSizeOptions(tfList []interface{}) *quicksight.G if v, ok := tfMap["optimized_view_port_width"].(string); ok && v != "" { options.OptimizedViewPortWidth = aws.String(v) } + if v, ok := tfMap["resize_option"].(string); ok && v != "" { + options.ResizeOption = aws.String(v) + } return options } @@ -1270,16 +1276,16 @@ func expandGridLayoutElement(tfMap map[string]interface{}) *quicksight.GridLayou if v, ok := tfMap["element_type"].(string); ok && v != "" { layout.ElementType = aws.String(v) } - if v, ok := tfMap["column_span"].(int); ok { + if v, ok := tfMap["column_span"].(int); ok && v != 0 { layout.ColumnSpan = aws.Int64(int64(v)) } - if v, ok := tfMap["row_span"].(int); ok { + if v, ok := tfMap["row_span"].(int); ok && v != 0 { layout.RowSpan = aws.Int64(int64(v)) } - if v, ok := tfMap["column_index"].(int); ok { + if v, ok := tfMap["column_index"].(int); ok && v != 0 { 
layout.ColumnIndex = aws.Int64(int64(v)) } - if v, ok := tfMap["row_index"].(int); ok { + if v, ok := tfMap["row_index"].(int); ok && v != 0 { layout.RowIndex = aws.Int64(int64(v)) } diff --git a/internal/service/quicksight/schema/visual.go b/internal/service/quicksight/schema/visual.go index 4d18e50735b..5cc8f3e334e 100644 --- a/internal/service/quicksight/schema/visual.go +++ b/internal/service/quicksight/schema/visual.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_actions.go b/internal/service/quicksight/schema/visual_actions.go index dbf5ef1408e..49695ed1864 100644 --- a/internal/service/quicksight/schema/visual_actions.go +++ b/internal/service/quicksight/schema/visual_actions.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -74,7 +77,7 @@ func visualCustomActionsSchema(maxItems int) *schema.Schema { Optional: true, MinItems: 1, MaxItems: 30, - Elem: schema.TypeString, + Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, diff --git a/internal/service/quicksight/schema/visual_bar_chart.go b/internal/service/quicksight/schema/visual_bar_chart.go index 7c2cd3f49ef..b9ef684d6e4 100644 --- a/internal/service/quicksight/schema/visual_bar_chart.go +++ b/internal/service/quicksight/schema/visual_bar_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_box_plot.go b/internal/service/quicksight/schema/visual_box_plot.go index a4e7e9a24fb..a7421cb3fae 100644 --- a/internal/service/quicksight/schema/visual_box_plot.go +++ b/internal/service/quicksight/schema/visual_box_plot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_chart_configuration.go b/internal/service/quicksight/schema/visual_chart_configuration.go index ab1b8eed33e..687bef112e8 100644 --- a/internal/service/quicksight/schema/visual_chart_configuration.go +++ b/internal/service/quicksight/schema/visual_chart_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -1365,7 +1368,7 @@ func flattenScrollBarOptions(apiObject *quicksight.ScrollBarOptions) []interface tfMap["visibility"] = aws.StringValue(apiObject.Visibility) } if apiObject.VisibleRange != nil { - tfMap["visibile_range"] = flattenVisibleRangeOptions(apiObject.VisibleRange) + tfMap["visible_range"] = flattenVisibleRangeOptions(apiObject.VisibleRange) } return []interface{}{tfMap} diff --git a/internal/service/quicksight/schema/visual_combo_chart.go b/internal/service/quicksight/schema/visual_combo_chart.go index 628f5695097..1c232ebe7e3 100644 --- a/internal/service/quicksight/schema/visual_combo_chart.go +++ b/internal/service/quicksight/schema/visual_combo_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_conditional_formatting.go b/internal/service/quicksight/schema/visual_conditional_formatting.go index 6a2efee0077..2d6b8f591dd 100644 --- a/internal/service/quicksight/schema/visual_conditional_formatting.go +++ b/internal/service/quicksight/schema/visual_conditional_formatting.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_custom_content.go b/internal/service/quicksight/schema/visual_custom_content.go index be4bf3bd868..ed88377c625 100644 --- a/internal/service/quicksight/schema/visual_custom_content.go +++ b/internal/service/quicksight/schema/visual_custom_content.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_empty.go b/internal/service/quicksight/schema/visual_empty.go index 42db33d2944..16d55fa0cc7 100644 --- a/internal/service/quicksight/schema/visual_empty.go +++ b/internal/service/quicksight/schema/visual_empty.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_fields.go b/internal/service/quicksight/schema/visual_fields.go index 2ff96051232..af6c89a2417 100644 --- a/internal/service/quicksight/schema/visual_fields.go +++ b/internal/service/quicksight/schema/visual_fields.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_filled_map.go b/internal/service/quicksight/schema/visual_filled_map.go index 5ad5f20e18d..f6fcbc771e5 100644 --- a/internal/service/quicksight/schema/visual_filled_map.go +++ b/internal/service/quicksight/schema/visual_filled_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_funnel_chart.go b/internal/service/quicksight/schema/visual_funnel_chart.go index 3898cfff91e..683ec7bdb4f 100644 --- a/internal/service/quicksight/schema/visual_funnel_chart.go +++ b/internal/service/quicksight/schema/visual_funnel_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_gauge_chart.go b/internal/service/quicksight/schema/visual_gauge_chart.go index 398d42e22f7..0e53218019a 100644 --- a/internal/service/quicksight/schema/visual_gauge_chart.go +++ b/internal/service/quicksight/schema/visual_gauge_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_geospatial_map.go b/internal/service/quicksight/schema/visual_geospatial_map.go index 7ac5f2fab5f..f1aa07c2b6e 100644 --- a/internal/service/quicksight/schema/visual_geospatial_map.go +++ b/internal/service/quicksight/schema/visual_geospatial_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_heat_map.go b/internal/service/quicksight/schema/visual_heat_map.go index ea9b7afae00..55d0a585bb8 100644 --- a/internal/service/quicksight/schema/visual_heat_map.go +++ b/internal/service/quicksight/schema/visual_heat_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_histogram.go b/internal/service/quicksight/schema/visual_histogram.go index 436c988dbaa..ac846bd471c 100644 --- a/internal/service/quicksight/schema/visual_histogram.go +++ b/internal/service/quicksight/schema/visual_histogram.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_insight.go b/internal/service/quicksight/schema/visual_insight.go index 9e9eedc86ac..75076e262c9 100644 --- a/internal/service/quicksight/schema/visual_insight.go +++ b/internal/service/quicksight/schema/visual_insight.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_kpi.go b/internal/service/quicksight/schema/visual_kpi.go index f3541c0045b..576484551ee 100644 --- a/internal/service/quicksight/schema/visual_kpi.go +++ b/internal/service/quicksight/schema/visual_kpi.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -229,7 +232,7 @@ func expandKPIFieldWells(tfList []interface{}) *quicksight.KPIFieldWells { config.TrendGroups = expandDimensionFields(v) } if v, ok := tfMap["target_values"].([]interface{}); ok && len(v) > 0 { - config.Values = expandMeasureFields(v) + config.TargetValues = expandMeasureFields(v) } if v, ok := tfMap["values"].([]interface{}); ok && len(v) > 0 { config.Values = expandMeasureFields(v) diff --git a/internal/service/quicksight/schema/visual_line_chart.go b/internal/service/quicksight/schema/visual_line_chart.go index 94924edd3f3..c3c2c43564f 100644 --- a/internal/service/quicksight/schema/visual_line_chart.go +++ b/internal/service/quicksight/schema/visual_line_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_map.go b/internal/service/quicksight/schema/visual_map.go index e55b5b46c9c..4a6c770ec16 100644 --- a/internal/service/quicksight/schema/visual_map.go +++ b/internal/service/quicksight/schema/visual_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_pie_chart.go b/internal/service/quicksight/schema/visual_pie_chart.go index e4de76e6903..0d923b8561c 100644 --- a/internal/service/quicksight/schema/visual_pie_chart.go +++ b/internal/service/quicksight/schema/visual_pie_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_pivot_table.go b/internal/service/quicksight/schema/visual_pivot_table.go index c163c5a0eb3..b4166edbfd9 100644 --- a/internal/service/quicksight/schema/visual_pivot_table.go +++ b/internal/service/quicksight/schema/visual_pivot_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( @@ -1292,7 +1295,7 @@ func flattenPivotTableAggregatedFieldWells(apiObject *quicksight.PivotTableAggre tfMap["columns"] = flattenDimensionFields(apiObject.Columns) } if apiObject.Rows != nil { - tfMap["row"] = flattenDimensionFields(apiObject.Rows) + tfMap["rows"] = flattenDimensionFields(apiObject.Rows) } if apiObject.Values != nil { tfMap["values"] = flattenMeasureFields(apiObject.Values) @@ -1668,7 +1671,7 @@ func flattenPivotTableFieldOption(apiObject []*quicksight.PivotTableFieldOption) tfMap["custom_label"] = aws.StringValue(config.CustomLabel) } if config.Visibility != nil { - tfMap["visbility"] = aws.StringValue(config.Visibility) + tfMap["visibility"] = aws.StringValue(config.Visibility) } tfList = append(tfList, tfMap) diff --git a/internal/service/quicksight/schema/visual_radar_chart.go b/internal/service/quicksight/schema/visual_radar_chart.go index e71b904f480..290644e0ad0 100644 --- a/internal/service/quicksight/schema/visual_radar_chart.go +++ b/internal/service/quicksight/schema/visual_radar_chart.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_sankey_diagram.go b/internal/service/quicksight/schema/visual_sankey_diagram.go index af0d9081e84..9b9bfd55c3c 100644 --- a/internal/service/quicksight/schema/visual_sankey_diagram.go +++ b/internal/service/quicksight/schema/visual_sankey_diagram.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_scatter_plot.go b/internal/service/quicksight/schema/visual_scatter_plot.go index 0b7ea3591af..321fbd2edaa 100644 --- a/internal/service/quicksight/schema/visual_scatter_plot.go +++ b/internal/service/quicksight/schema/visual_scatter_plot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_sort.go b/internal/service/quicksight/schema/visual_sort.go index 1db23df9f65..316e1a96301 100644 --- a/internal/service/quicksight/schema/visual_sort.go +++ b/internal/service/quicksight/schema/visual_sort.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_table.go b/internal/service/quicksight/schema/visual_table.go index 33832c19eb3..ced87fa38a3 100644 --- a/internal/service/quicksight/schema/visual_table.go +++ b/internal/service/quicksight/schema/visual_table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_tree_map.go b/internal/service/quicksight/schema/visual_tree_map.go index d19f1b46ba0..60667335233 100644 --- a/internal/service/quicksight/schema/visual_tree_map.go +++ b/internal/service/quicksight/schema/visual_tree_map.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_waterfall.go b/internal/service/quicksight/schema/visual_waterfall.go index d844f5fc39b..e50d68054a1 100644 --- a/internal/service/quicksight/schema/visual_waterfall.go +++ b/internal/service/quicksight/schema/visual_waterfall.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/schema/visual_word_cloud.go b/internal/service/quicksight/schema/visual_word_cloud.go index 821262444f7..ecd34f0b2f4 100644 --- a/internal/service/quicksight/schema/visual_word_cloud.go +++ b/internal/service/quicksight/schema/visual_word_cloud.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schema import ( diff --git a/internal/service/quicksight/service_package_gen.go b/internal/service/quicksight/service_package_gen.go index d5b164541a5..c1388eddb16 100644 --- a/internal/service/quicksight/service_package_gen.go +++ b/internal/service/quicksight/service_package_gen.go @@ -5,6 +5,10 @@ package quicksight import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + quicksight_sdkv1 "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -66,6 +70,11 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac TypeName: "aws_quicksight_group", Name: "Group", }, + { + Factory: DataSourceTheme, + TypeName: "aws_quicksight_theme", + Name: "Theme", + }, { Factory: DataSourceUser, TypeName: "aws_quicksight_user", @@ -81,6 +90,22 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka TypeName: "aws_quicksight_account_subscription", Name: "Account 
Subscription", }, + { + Factory: ResourceAnalysis, + TypeName: "aws_quicksight_analysis", + Name: "Analysis", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: ResourceDashboard, + TypeName: "aws_quicksight_dashboard", + Name: "Dashboard", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, { Factory: ResourceDataSet, TypeName: "aws_quicksight_data_set", @@ -123,6 +148,14 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka IdentifierAttribute: "arn", }, }, + { + Factory: ResourceTheme, + TypeName: "aws_quicksight_theme", + Name: "Theme", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, { Factory: ResourceUser, TypeName: "aws_quicksight_user", @@ -135,4 +168,13 @@ func (p *servicePackage) ServicePackageName() string { return names.QuickSight } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*quicksight_sdkv1.QuickSight, error) { + sess := config["session"].(*session_sdkv1.Session) + + return quicksight_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/quicksight/status.go b/internal/service/quicksight/status.go index 7e8ec69e492..81a0ffee49f 100644 --- a/internal/service/quicksight/status.go +++ b/internal/service/quicksight/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -46,3 +49,51 @@ func statusTemplate(ctx context.Context, conn *quicksight.QuickSight, id string) return out, *out.Version.Status, nil } } + +// statusDashboard returns a retry.StateRefreshFunc that fetches the Dashboard's latest version status. +func statusDashboard(ctx context.Context, conn *quicksight.QuickSight, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := FindDashboardByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, *out.Version.Status, nil + } +} + +// statusAnalysis returns a retry.StateRefreshFunc that fetches the Analysis status. +func statusAnalysis(ctx context.Context, conn *quicksight.QuickSight, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := FindAnalysisByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, *out.Status, nil + } +} + +// statusTheme returns a retry.StateRefreshFunc that fetches the Theme's latest version status. +func statusTheme(ctx context.Context, conn *quicksight.QuickSight, id string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := FindThemeByID(ctx, conn, id) + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return out, *out.Version.Status, nil + } +} diff --git a/internal/service/quicksight/sweep.go b/internal/service/quicksight/sweep.go index bce9d30d9b4..ef169af6fc5 100644 --- a/internal/service/quicksight/sweep.go +++ b/internal/service/quicksight/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -10,13 +13,16 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/quicksight"
-	"github.com/hashicorp/go-multierror"
+	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )

 func init() {
+	resource.AddTestSweepers("aws_quicksight_dashboard", &resource.Sweeper{
+		Name: "aws_quicksight_dashboard",
+		F:    sweepDashboards,
+	})
 	resource.AddTestSweepers("aws_quicksight_data_set", &resource.Sweeper{
 		Name: "aws_quicksight_data_set",
 		F:    sweepDataSets,
@@ -44,19 +50,70 @@ const (
 	acctestResourcePrefix = "tf-acc-test"
 )

+func sweepDashboards(region string) error {
+	ctx := sweep.Context(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
+
+	if err != nil {
+		return fmt.Errorf("error getting client: %w", err)
+	}
+
+	conn := client.QuickSightConn(ctx)
+	sweepResources := make([]sweep.Sweepable, 0)
+
+	awsAccountId := client.AccountID
+
+	input := &quicksight.ListDashboardsInput{
+		AwsAccountId: aws.String(awsAccountId),
+	}
+
+	err = conn.ListDashboardsPagesWithContext(ctx, input, func(page *quicksight.ListDashboardsOutput, lastPage bool) bool {
+		if page == nil {
+			return !lastPage
+		}
+
+		for _, dashboard := range page.DashboardSummaryList {
+			if dashboard == nil {
+				continue
+			}
+
+			r := ResourceDashboard()
+			d := r.Data(nil)
+			d.SetId(fmt.Sprintf("%s,%s", awsAccountId, aws.StringValue(dashboard.DashboardId)))
+
+			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
+		}
+
+		return !lastPage
+	})
+
+	if skipSweepError(err) {
+		log.Printf("[WARN] Skipping QuickSight Dashboard sweep for %s: %s", region, err)
+		return nil
+	}
+	if err != nil {
+		return fmt.Errorf("listing QuickSight Dashboards: %w", err)
+	}
+
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
+		return fmt.Errorf("sweeping QuickSight Dashboards for %s: %w", region, err)
+	}
+
+	return nil
+}
+
 func sweepDataSets(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)

 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}

-	conn := client.(*conns.AWSClient).QuickSightConn()
+	conn := client.QuickSightConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
-	var errs *multierror.Error

-	awsAccountId := client.(*conns.AWSClient).AccountID
+	awsAccountId := client.AccountID

 	input := &quicksight.ListDataSetsInput{
 		AwsAccountId: aws.String(awsAccountId),
@@ -82,35 +139,33 @@ func sweepDataSets(region string) error {
 		return !lastPage
 	})

-	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("listing QuickSight Data Sets: %w", err))
+	if skipSweepError(err) {
+		log.Printf("[WARN] Skipping QuickSight Data Set sweep for %s: %s", region, err)
+		return nil
 	}
-
-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("sweeping QuickSight Data Sets for %s: %w", region, err))
+	if err != nil {
+		return fmt.Errorf("listing QuickSight Data Sets: %w", err)
 	}

-	if sweep.SkipSweepError(errs.ErrorOrNil()) {
-		log.Printf("[WARN] Skipping QuickSight Data Set sweep for %s: %s", region, errs)
-		return nil
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
+		return fmt.Errorf("sweeping QuickSight Data Sets for %s: %w", region, err)
 	}

-	return errs.ErrorOrNil()
+	return nil
 }

 func sweepDataSources(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)

 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}

-	conn := client.(*conns.AWSClient).QuickSightConn()
+	conn := client.QuickSightConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
-	var errs *multierror.Error

-	awsAccountId := client.(*conns.AWSClient).AccountID
+	awsAccountId := client.AccountID

 	input := &quicksight.ListDataSourcesInput{
 		AwsAccountId: aws.String(awsAccountId),
@@ -136,34 +191,32 @@ func sweepDataSources(region string) error {
 		return !lastPage
 	})

-	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("listing QuickSight Data Sources: %w", err))
+	if skipSweepError(err) {
+		log.Printf("[WARN] Skipping QuickSight Data Source sweep for %s: %s", region, err)
+		return nil
 	}
-
-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("sweeping QuickSight Data Sources for %s: %w", region, err))
+	if err != nil {
+		return fmt.Errorf("listing QuickSight Data Sources: %w", err)
 	}

-	if sweep.SkipSweepError(errs.ErrorOrNil()) {
-		log.Printf("[WARN] Skipping QuickSight Data Source sweep for %s: %s", region, errs)
-		return nil
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
+		return fmt.Errorf("sweeping QuickSight Data Sources for %s: %w", region, err)
 	}

-	return errs.ErrorOrNil()
+	return nil
 }

 func sweepFolders(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)

 	if err != nil {
 		return fmt.Errorf("getting client: %w", err)
 	}

-	conn := client.(*conns.AWSClient).QuickSightConn()
-	awsAccountId := client.(*conns.AWSClient).AccountID
+	conn := client.QuickSightConn(ctx)
+	awsAccountId := client.AccountID
 	sweepResources := make([]sweep.Sweepable, 0)
-	var errs *multierror.Error

 	input := &quicksight.ListFoldersInput{
 		AwsAccountId: aws.String(awsAccountId),
@@ -182,35 +235,34 @@ func sweepFolders(region string) error {
 		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
 	}

+	if skipSweepError(err) {
+		log.Printf("[WARN] Skipping QuickSight Folder sweep for %s: %s", region, err)
+		return nil
+	}
 	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("listing QuickSight Folder for %s: %w", region, err))
+		return fmt.Errorf("listing QuickSight Folders: %w", err)
 	}

-	if err := sweep.SweepOrchestrator(sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("sweeping QuickSight Folder for %s: %w", region, err))
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
+		return fmt.Errorf("sweeping QuickSight Folders for %s: %w", region, err)
 	}

-	if sweep.SkipSweepError(err) {
-		log.Printf("[WARN] Skipping QuickSight Folder sweep for %s: %s", region, errs)
-		return nil
-	}
+	return nil

-	return errs.ErrorOrNil()
 }

 func sweepTemplates(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)

 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}

-	conn := client.(*conns.AWSClient).QuickSightConn()
+	conn := client.QuickSightConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
-	var errs *multierror.Error

-	awsAccountId := client.(*conns.AWSClient).AccountID
+	awsAccountId := client.AccountID

 	input := &quicksight.ListTemplatesInput{
 		AwsAccountId: aws.String(awsAccountId),
@@ -236,34 +288,32 @@ func sweepTemplates(region string) error {
 		return !lastPage
 	})

-	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("listing QuickSight Templates: %w", err))
+	if skipSweepError(err) {
+		log.Printf("[WARN] Skipping QuickSight Template sweep for %s: %s", region, err)
+		return nil
 	}
-
-	if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("sweeping QuickSight Templates for %s: %w", region, err))
+	if err != nil {
+		return fmt.Errorf("listing QuickSight Templates: %w", err)
 	}

-	if sweep.SkipSweepError(errs.ErrorOrNil()) {
-		log.Printf("[WARN] Skipping QuickSight Template sweep for %s: %s", region, errs)
-		return nil
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
+		return fmt.Errorf("sweeping QuickSight Templates for %s: %w", region, err)
 	}

-	return errs.ErrorOrNil()
+	return nil
 }

 func sweepUsers(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)

 	if err != nil {
 		return fmt.Errorf("getting client: %w", err)
 	}

-	conn := client.(*conns.AWSClient).QuickSightConn()
-	awsAccountId := client.(*conns.AWSClient).AccountID
+	conn := client.QuickSightConn(ctx)
+	awsAccountId := client.AccountID
 	sweepResources := make([]sweep.Sweepable, 0)
-	var errs *multierror.Error

 	input := &quicksight.ListUsersInput{
 		AwsAccountId: aws.String(awsAccountId),
@@ -284,18 +334,36 @@ func sweepUsers(region string) error {
 		sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
 	}

+	if skipSweepUserError(err) {
+		log.Printf("[WARN] Skipping QuickSight User sweep for %s: %s", region, err)
+		return nil
+	}
 	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("listing QuickSight Users for %s: %w", region, err))
+		return fmt.Errorf("listing QuickSight Users: %w", err)
 	}

-	if err := sweep.SweepOrchestrator(sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("sweeping QuickSight Users for %s: %w", region, err))
+	if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil {
+		return fmt.Errorf("sweeping QuickSight Users for %s: %w", region, err)
 	}

-	if sweep.SkipSweepError(err) {
-		log.Printf("[WARN] Skipping QuickSight User sweep for %s: %s", region, errs)
-		return nil
+	return nil
+}
+
+// skipSweepError adds an additional skippable error code for listing QuickSight resources other than User
+func skipSweepError(err error) bool {
+	if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeUnsupportedUserEditionException) {
+		return true
+	}
+
+	return sweep.SkipSweepError(err)
+}
+
+// skipSweepUserError adds an additional skippable error code for listing QuickSight User resources
+func skipSweepUserError(err error) bool {
+	if tfawserr.ErrMessageContains(err, quicksight.ErrCodeResourceNotFoundException, "not signed up with QuickSight") {
+		return true
 	}

-	return errs.ErrorOrNil()
+	return sweep.SkipSweepError(err)
+
 }
diff --git a/internal/service/quicksight/tags_gen.go b/internal/service/quicksight/tags_gen.go
index c0991003f69..7547b0736ec 100644
--- a/internal/service/quicksight/tags_gen.go
+++ b/internal/service/quicksight/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists quicksight service tags.
+// listTags lists quicksight service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn quicksightiface.QuickSightAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn quicksightiface.QuickSightAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &quicksight.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn quicksightiface.QuickSightAPI, identifie
 // ListTags lists quicksight service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).QuickSightConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).QuickSightConn(ctx), identifier)

 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*quicksight.Tag) tftags.KeyValueTa
 	return tftags.New(ctx, m)
 }

-// GetTagsIn returns quicksight service tags from Context.
+// getTagsIn returns quicksight service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*quicksight.Tag {
+func getTagsIn(ctx context.Context) []*quicksight.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*quicksight.Tag {
 	return nil
 }

-// SetTagsOut sets quicksight service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*quicksight.Tag) {
+// setTagsOut sets quicksight service tags in Context.
+func setTagsOut(ctx context.Context, tags []*quicksight.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }

-// UpdateTags updates quicksight service tags.
+// updateTags updates quicksight service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn quicksightiface.QuickSightAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn quicksightiface.QuickSightAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn quicksightiface.QuickSightAPI, identif
 // UpdateTags updates quicksight service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).QuickSightConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).QuickSightConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/quicksight/template.go b/internal/service/quicksight/template.go
index 36a0583478b..bc006a4747c 100644
--- a/internal/service/quicksight/template.go
+++ b/internal/service/quicksight/template.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package quicksight

 import (
@@ -42,80 +45,83 @@ func ResourceTemplate() *schema.Resource {
 			StateContext: schema.ImportStatePassthroughContext,
 		},

-		Schema: map[string]*schema.Schema{
-			"arn": {
-				Type:     schema.TypeString,
-				Computed: true,
-			},
-			"aws_account_id": {
-				Type:         schema.TypeString,
-				Optional:     true,
-				Computed:     true,
-				ForceNew:     true,
-				ValidateFunc: verify.ValidAccountID,
-			},
-			"created_time": {
-				Type:     schema.TypeString,
-				Computed: true,
-			},
-			"definition": quicksightschema.DefinitionSchema(),
-			"last_updated_time": {
-				Type:     schema.TypeString,
-				Computed: true,
-			},
-			"name": {
-				Type:         schema.TypeString,
-				Required:     true,
-				ValidateFunc: validation.StringLenBetween(1, 2048),
-			},
-			"permissions": {
-				Type:     schema.TypeList,
-				Optional: true,
-				MinItems: 1,
-				MaxItems: 64,
-				Elem: &schema.Resource{
-					Schema: map[string]*schema.Schema{
-						"actions": {
-							Type:     schema.TypeSet,
-							Required: true,
-							MinItems: 1,
-							MaxItems: 16,
-							Elem:     &schema.Schema{Type: schema.TypeString},
-						},
-						"principal": {
-							Type:         schema.TypeString,
-							Required:     true,
-							ValidateFunc: validation.StringLenBetween(1, 256),
+		SchemaFunc: func() map[string]*schema.Schema {
+			return map[string]*schema.Schema{
+				"arn": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"aws_account_id": {
+					Type:         schema.TypeString,
+					Optional:     true,
+					Computed:     true,
+					ForceNew:     true,
+					ValidateFunc: verify.ValidAccountID,
+				},
+				"created_time": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"definition": quicksightschema.TemplateDefinitionSchema(),
+				"last_updated_time": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"name": {
+					Type:         schema.TypeString,
+					Required:     true,
+					ValidateFunc: validation.StringLenBetween(1, 2048),
+				},
+				"permissions": {
+					Type:     schema.TypeList,
+					Optional: true,
+					MinItems: 1,
+					MaxItems: 64,
+					Elem: &schema.Resource{
+						Schema: map[string]*schema.Schema{
+							"actions": {
+								Type:     schema.TypeSet,
+								Required: true,
+								MinItems: 1,
+								MaxItems: 16,
+								Elem:     &schema.Schema{Type: schema.TypeString},
+							},
+							"principal": {
+								Type:         schema.TypeString,
+								Required:     true,
+								ValidateFunc: validation.StringLenBetween(1, 256),
+							},
 						},
 					},
 				},
-			},
-			"source_entity": quicksightschema.SourceEntitySchema(),
-			"source_entity_arn": {
-				Type:     schema.TypeString,
-				Computed: true,
-			},
-			"status": {
-				Type:     schema.TypeString,
-				Computed: true,
-			},
-			names.AttrTags:    tftags.TagsSchema(),
-			names.AttrTagsAll: tftags.TagsSchemaComputed(),
-			"template_id": {
-				Type:     schema.TypeString,
-				Required: true,
-				ForceNew: true,
-			},
-			"version_description": {
-				Type:         schema.TypeString,
-				Required:     true,
-				ValidateFunc: validation.StringLenBetween(1, 512),
-			},
-			"version_number": {
-				Type:     schema.TypeInt,
-				Computed: true,
-			},
+				"source_entity": quicksightschema.TemplateSourceEntitySchema(),
+				"source_entity_arn": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"status": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				names.AttrTags:    tftags.TagsSchema(),
+				names.AttrTagsAll: tftags.TagsSchemaComputed(),
+				"template_id": {
+					Type:     schema.TypeString,
+					Required: true,
+					ForceNew: true,
+				},
+				"version_description": {
+					Type:         schema.TypeString,
+					Required:     true,
+					ValidateFunc: validation.StringLenBetween(1, 512),
+				},
+				"version_number": {
+					Type:     schema.TypeInt,
+					Computed: true,
+				},
+			}
 		},
+
+		CustomizeDiff: verify.SetTagsDiff,
 	}
 }
@@ -125,7 +131,7 @@ const (
 )

 func resourceTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QuickSightConn()
+	conn := meta.(*conns.AWSClient).QuickSightConn(ctx)
 	awsAccountId := meta.(*conns.AWSClient).AccountID

 	if v, ok := d.GetOk("aws_account_id"); ok {
@@ -139,7 +145,7 @@ func resourceTemplateCreate(ctx context.Context, d *schema.ResourceData, meta in
 		AwsAccountId: aws.String(awsAccountId),
 		TemplateId:   aws.String(templateId),
 		Name:         aws.String(d.Get("name").(string)),
-		Tags:         GetTagsIn(ctx),
+		Tags:         getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("version_description"); ok {
@@ -147,11 +153,11 @@ func resourceTemplateCreate(ctx context.Context, d *schema.ResourceData, meta in
 	}

 	if v, ok := d.GetOk("source_entity"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
-		input.SourceEntity = quicksightschema.ExpandSourceEntity(v.([]interface{}))
+		input.SourceEntity = quicksightschema.ExpandTemplateSourceEntity(v.([]interface{}))
 	}

 	if v, ok := d.GetOk("definition"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
-		input.Definition = quicksightschema.ExpandDefinition(d.Get("definition").([]interface{}))
+		input.Definition = quicksightschema.ExpandTemplateDefinition(d.Get("definition").([]interface{}))
 	}

 	if v, ok := d.GetOk("permissions"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
@@ -171,7 +177,7 @@ func resourceTemplateCreate(ctx context.Context, d *schema.ResourceData, meta in
 }

 func resourceTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QuickSightConn()
+	conn := meta.(*conns.AWSClient).QuickSightConn(ctx)

 	awsAccountId, templateId, err := ParseTemplateId(d.Id())
 	if err != nil {
@@ -208,11 +214,11 @@ func resourceTemplateRead(ctx context.Context, d *schema.ResourceData, meta inte
 	})

 	if err != nil {
-		return diag.Errorf("error describing QuickSight Template (%s) Definition: %s", d.Id(), err)
+		return diag.Errorf("describing QuickSight Template (%s) Definition: %s", d.Id(), err)
 	}

 	if err := d.Set("definition", quicksightschema.FlattenTemplateDefinition(descResp.Definition)); err != nil {
-		return diag.Errorf("error setting definition: %s", err)
+		return diag.Errorf("setting definition: %s", err)
 	}

 	permsResp, err := conn.DescribeTemplatePermissionsWithContext(ctx, &quicksight.DescribeTemplatePermissionsInput{
@@ -221,18 +227,18 @@ func resourceTemplateRead(ctx context.Context, d *schema.ResourceData, meta inte
 	})

 	if err != nil {
-		return diag.Errorf("error describing QuickSight Template (%s) Permissions: %s", d.Id(), err)
+		return diag.Errorf("describing QuickSight Template (%s) Permissions: %s", d.Id(), err)
 	}

 	if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil {
-		return diag.Errorf("error setting permissions: %s", err)
+		return diag.Errorf("setting permissions: %s", err)
 	}

 	return nil
 }

 func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QuickSightConn()
+	conn := meta.(*conns.AWSClient).QuickSightConn(ctx)

 	awsAccountId, templateId, err := ParseTemplateId(d.Id())
 	if err != nil {
@@ -247,12 +253,11 @@ func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta in
 		VersionDescription: aws.String(d.Get("version_description").(string)),
 	}

-	if d.HasChange("source_entity") {
-		in.SourceEntity = quicksightschema.ExpandSourceEntity(d.Get("source_entity").([]interface{}))
-	}
-
-	if d.HasChange("definition") {
-		in.Definition = quicksightschema.ExpandDefinition(d.Get("definition").([]interface{}))
+	// One of source_entity or definition is required for update
+	if _, ok := d.GetOk("source_entity"); ok {
+		in.SourceEntity = quicksightschema.ExpandTemplateSourceEntity(d.Get("source_entity").([]interface{}))
+	} else {
+		in.Definition = quicksightschema.ExpandTemplateDefinition(d.Get("definition").([]interface{}))
 	}

 	log.Printf("[DEBUG] Updating QuickSight Template (%s): %#v", d.Id(), in)
@@ -289,7 +294,7 @@ func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta in
 		_, err = conn.UpdateTemplatePermissionsWithContext(ctx, params)

 		if err != nil {
-			return diag.Errorf("error updating QuickSight Template (%s) permissions: %s", templateId, err)
+			return diag.Errorf("updating QuickSight Template (%s) permissions: %s", templateId, err)
 		}
 	}
@@ -297,7 +302,7 @@ func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta in
 }

 func resourceTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).QuickSightConn()
+	conn := meta.(*conns.AWSClient).QuickSightConn(ctx)

 	awsAccountId, templateId, err := ParseTemplateId(d.Id())
 	if err != nil {
diff --git a/internal/service/quicksight/template_alias.go b/internal/service/quicksight/template_alias.go
index 69f29dc5595..8c1ea40378d 100644
--- a/internal/service/quicksight/template_alias.go
+++ b/internal/service/quicksight/template_alias.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package quicksight

 import (
@@ -17,8 +20,8 @@ import (
 	"github.com/hashicorp/terraform-plugin-framework/types"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-provider-aws/internal/create"
-	"github.com/hashicorp/terraform-provider-aws/internal/flex"
 	"github.com/hashicorp/terraform-provider-aws/internal/framework"
+	"github.com/hashicorp/terraform-provider-aws/internal/framework/flex"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -73,7 +76,7 @@ func (r *resourceTemplateAlias) Schema(ctx context.Context, req resource.SchemaR
 }

 func (r *resourceTemplateAlias) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
-	conn := r.Meta().QuickSightConn()
+	conn := r.Meta().QuickSightConn(ctx)

 	var plan resourceTemplateAliasData
 	resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
@@ -116,7 +119,7 @@ func (r *resourceTemplateAlias) Create(ctx context.Context, req resource.CreateR
 }

 func (r *resourceTemplateAlias) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
-	conn := r.Meta().QuickSightConn()
+	conn := r.Meta().QuickSightConn(ctx)

 	var state resourceTemplateAliasData
 	resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
@@ -159,7 +162,7 @@ func (r *resourceTemplateAlias) Read(ctx context.Context, req resource.ReadReque
 }

 func (r *resourceTemplateAlias) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
-	conn := r.Meta().QuickSightConn()
+	conn := r.Meta().QuickSightConn(ctx)

 	var plan, state resourceTemplateAliasData
 	resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
@@ -199,7 +202,7 @@ func (r *resourceTemplateAlias) Update(ctx context.Context, req resource.UpdateR
 }

 func (r *resourceTemplateAlias) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
-	conn := r.Meta().QuickSightConn()
+	conn := r.Meta().QuickSightConn(ctx)

 	var state resourceTemplateAliasData
 	resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
diff --git a/internal/service/quicksight/template_alias_test.go b/internal/service/quicksight/template_alias_test.go
index 5142a5c5932..d165bf67c2b 100644
--- a/internal/service/quicksight/template_alias_test.go
+++ b/internal/service/quicksight/template_alias_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package quicksight_test

 import (
@@ -86,7 +89,7 @@ func TestAccQuickSightTemplateAlias_disappears(t *testing.T) {

 func testAccCheckTemplateAliasDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_quicksight_template_alias" {
@@ -119,7 +122,7 @@ func testAccCheckTemplateAliasExists(ctx context.Context, name string, templateA
 			return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameTemplateAlias, name, errors.New("not set"))
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx)
 		resp, err := tfquicksight.FindTemplateAliasByID(ctx, conn, rs.Primary.ID)
 		if err != nil {
 			return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameTemplateAlias, rs.Primary.ID, err)
diff --git a/internal/service/quicksight/template_test.go b/internal/service/quicksight/template_test.go
index 67a53e56dcb..f0b8bedad5f 100644
--- a/internal/service/quicksight/template_test.go
+++ b/internal/service/quicksight/template_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package quicksight_test

 import (
@@ -260,9 +263,61 @@ func TestAccQuickSightTemplate_tags(t *testing.T) {
 	})
 }

+func TestAccQuickSightTemplate_update(t *testing.T) {
+	ctx := acctest.Context(t)
+
+	var template quicksight.Template
+	resourceName := "aws_quicksight_template.copy"
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	sourceName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	sourceId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+
+	resource.ParallelTest(t, resource.TestCase{
+		PreCheck: func() {
+			acctest.PreCheck(ctx, t)
+		},
+		ErrorCheck:               acctest.ErrorCheck(t, quicksight.EndpointsID),
+		ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories,
+		CheckDestroy:             testAccCheckTemplateDestroy(ctx),
+		Steps: []resource.TestStep{
+			{
+				Config: testAccTemplateConfig_TemplateSourceEntity(rId, rName, sourceId, sourceName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckTemplateExists(ctx, resourceName, &template),
+					resource.TestCheckResourceAttr(resourceName, "template_id", rId),
+					resource.TestCheckResourceAttr(resourceName, "name", rName),
+					resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful),
+					acctest.CheckResourceAttrRegionalARN(resourceName, "source_entity.0.source_template.0.arn", "quicksight", fmt.Sprintf("template/%s", sourceId)),
+					resource.TestCheckResourceAttr(resourceName, "version_number", "1"),
+				),
+			},
+			{
+				ResourceName:            resourceName,
+				ImportState:             true,
+				ImportStateVerify:       true,
+				ImportStateVerifyIgnore: []string{"source_entity"},
+			},
+			{
+				Config: testAccTemplateConfig_TemplateSourceEntity(rId, rNameUpdated, sourceId, sourceName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckTemplateExists(ctx, resourceName, &template),
+					resource.TestCheckResourceAttr(resourceName, "template_id", rId),
+					resource.TestCheckResourceAttr(resourceName, "name", rNameUpdated),
+					resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful),
+					acctest.CheckResourceAttrRegionalARN(resourceName, "source_entity.0.source_template.0.arn", "quicksight", fmt.Sprintf("template/%s", sourceId)),
+					resource.TestCheckResourceAttr(resourceName, "version_number", "2"),
+				),
+			},
+		},
+	})
+}
+
 func testAccCheckTemplateDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_quicksight_template" {
@@ -297,7 +352,7 @@ func testAccCheckTemplateExists(ctx context.Context, name string, template *quic
 			return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameTemplate, name, errors.New("not set"))
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx)
 		output, err := tfquicksight.FindTemplateByID(ctx, conn, rs.Primary.ID)
 		if err != nil {
diff --git a/internal/service/quicksight/theme.go b/internal/service/quicksight/theme.go
new file mode 100644
index 00000000000..a2a5772b82a
--- /dev/null
+++ b/internal/service/quicksight/theme.go
@@ -0,0 +1,1084 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+package quicksight
+
+import (
+	"context"
+	"fmt"
+	"log"
+	"regexp"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/quicksight"
+	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
+	"github.com/hashicorp/terraform-provider-aws/internal/create"
+	"github.com/hashicorp/terraform-provider-aws/internal/flex"
+	tftags "github.com/hashicorp/terraform-provider-aws/internal/tags"
+	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
+	"github.com/hashicorp/terraform-provider-aws/internal/verify"
+	"github.com/hashicorp/terraform-provider-aws/names"
+)
+
+// @SDKResource("aws_quicksight_theme", name="Theme")
+// @Tags(identifierAttribute="arn")
+func ResourceTheme() *schema.Resource {
+	return &schema.Resource{
+		CreateWithoutTimeout: resourceThemeCreate,
+		ReadWithoutTimeout:   resourceThemeRead,
+		UpdateWithoutTimeout: resourceThemeUpdate,
+		DeleteWithoutTimeout: resourceThemeDelete,
+
+		Importer: &schema.ResourceImporter{
+			StateContext: schema.ImportStatePassthroughContext,
+		},
+
+		Timeouts: &schema.ResourceTimeout{
+			Create: schema.DefaultTimeout(5 * time.Minute),
+			Update: schema.DefaultTimeout(5 * time.Minute),
+			Delete: schema.DefaultTimeout(5 * time.Minute),
+		},
+
+		SchemaFunc: func() map[string]*schema.Schema {
+			return map[string]*schema.Schema{
+				"arn": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"aws_account_id": {
+					Type:         schema.TypeString,
+					Optional:     true,
+					Computed:     true,
+					ForceNew:     true,
+					ValidateFunc: verify.ValidAccountID,
+				},
+				"base_theme_id": {
+					Type:     schema.TypeString,
+					Required: true,
+				},
+				"configuration": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ThemeConfiguration.html
+					Type:     schema.TypeList,
+					MaxItems: 1,
+					Optional: true,
+					Elem: &schema.Resource{
+						Schema: map[string]*schema.Schema{
+							"data_color_palette": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataColorPalette.html
+								Type:     schema.TypeList,
+								MaxItems: 1,
+								Optional: true,
+								Elem: &schema.Resource{
+									Schema: map[string]*schema.Schema{
+										"colors": {
+											Type:     schema.TypeList,
+											Optional: true,
+											MinItems: 8, // Colors size needs to be in the range between 8 and 20
+											MaxItems: 20,
+											Elem: &schema.Schema{
+												Type:         schema.TypeString,
+												ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+											},
+										},
+										"empty_fill_color": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"min_max_gradient": {
+											Type:     schema.TypeList,
+											Optional: true,
+											MinItems: 2, // MinMaxGradient size needs to be 2
+											MaxItems: 2,
+											Elem: &schema.Schema{
+												Type:         schema.TypeString,
+												ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+											},
+										},
+									},
+								},
+							},
+							"sheet": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetStyle.html
+								Type:     schema.TypeList,
+								MaxItems: 1,
+								Optional: true,
+								Elem: &schema.Resource{
+									Schema: map[string]*schema.Schema{
+										"tile": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_TileStyle.html
+											Type:     schema.TypeList,
+											MaxItems: 1,
+											Optional: true,
+											Elem: &schema.Resource{
+												Schema: map[string]*schema.Schema{
+													"border": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_BorderStyle.html
+														Type:     schema.TypeList,
+														MaxItems: 1,
+														Optional: true,
+														Elem: &schema.Resource{
+															Schema: map[string]*schema.Schema{
+																"show": {
+																	Type:     schema.TypeBool,
+																	Optional: true,
+																},
+															},
+														},
+													},
+												},
+											},
+										},
+										"tile_layout": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_TileLayoutStyle.html
+											Type:     schema.TypeList,
+											MaxItems: 1,
+											Optional: true,
+											Elem: &schema.Resource{
+												Schema: map[string]*schema.Schema{
+													"gutter": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_GutterStyle.html
+														Type:     schema.TypeList,
+														MaxItems: 1,
+														Optional: true,
+														Elem: &schema.Resource{
+															Schema: map[string]*schema.Schema{
+																"show": {
+																	Type:     schema.TypeBool,
+																	Optional: true,
+																},
+															},
+														},
+													},
+													"margin": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_MarginStyle.html
+														Type:     schema.TypeList,
+														MaxItems: 1,
+														Optional: true,
+														Elem: &schema.Resource{
+															Schema: map[string]*schema.Schema{
+																"show": {
+																	Type:     schema.TypeBool,
+																	Optional: true,
+																},
+															},
+														},
+													},
+												},
+											},
+										},
+									},
+								},
+							},
+							"typography": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Typography.html
+								Type:     schema.TypeList,
+								MaxItems: 1,
+								Optional: true,
+								Elem: &schema.Resource{
+									Schema: map[string]*schema.Schema{
+										"font_families": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Font.html
+											Type:     schema.TypeList,
+											MaxItems: 5,
+											Optional: true,
+											Elem: &schema.Resource{
+												Schema: map[string]*schema.Schema{
+													"font_family": {
+														Type:     schema.TypeString,
+														Optional: true,
+													},
+												},
+											},
+										},
+									},
+								},
+							},
+							"ui_color_palette": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_UIColorPalette.html
+								Type:     schema.TypeList,
+								MaxItems: 1,
+								Optional: true,
+								Elem: &schema.Resource{
+									Schema: map[string]*schema.Schema{
+										"accent": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"accent_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"danger": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"danger_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"dimension": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"dimension_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"measure": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"measure_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"primary_background": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"primary_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"secondary_background": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"secondary_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"success": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"success_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"warning": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+										"warning_foreground": {
+											Type:         schema.TypeString,
+											Optional:     true,
+											ValidateFunc: validation.StringMatch(regexp.MustCompile(`^#[A-F0-9]{6}$`), ""),
+										},
+									},
+								},
+							},
+						},
+					},
+				},
+				"created_time": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"theme_id": {
+					Type:     schema.TypeString,
+					Required: true,
+					ForceNew: true,
+				},
+				"last_updated_time": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				"name": {
+					Type:         schema.TypeString,
+					Required:     true,
+					ValidateFunc: validation.StringLenBetween(1, 2048),
+				},
+				"permissions": {
+					Type:     schema.TypeList,
+					Optional: true,
+					MinItems: 1,
+					MaxItems: 64,
+					Elem: &schema.Resource{
+						Schema: map[string]*schema.Schema{
+							"actions": {
+								Type:     schema.TypeSet,
+								Required: true,
+								MinItems: 1,
+								MaxItems: 16,
+								Elem:     &schema.Schema{Type: schema.TypeString},
+							},
+							"principal": {
+								Type:         schema.TypeString,
+								Required:     true,
+								ValidateFunc: validation.StringLenBetween(1, 256),
+							},
+						},
+					},
+				},
+				"status": {
+					Type:     schema.TypeString,
+					Computed: true,
+				},
+				names.AttrTags:    tftags.TagsSchema(),
+				names.AttrTagsAll: tftags.TagsSchemaComputed(),
+				"version_description": {
+					Type:         schema.TypeString,
+					Optional:     true,
+					ValidateFunc: validation.StringLenBetween(1, 512),
+				},
+				"version_number": {
+					Type:     schema.TypeInt,
+					Computed: true,
+				},
+			}
+		},
+
+		CustomizeDiff: verify.SetTagsDiff,
+	}
+}
+
+const (
+	ResNameTheme = "Theme"
+)
+
+func resourceThemeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	conn := meta.(*conns.AWSClient).QuickSightConn(ctx)
+
+	awsAccountId := meta.(*conns.AWSClient).AccountID
+	if v, ok := d.GetOk("aws_account_id"); ok {
+		awsAccountId = v.(string)
+	}
+	themeId := d.Get("theme_id").(string)
+
+	d.SetId(createThemeId(awsAccountId, themeId))
+
+	input := &quicksight.CreateThemeInput{
+		AwsAccountId: aws.String(awsAccountId),
+		ThemeId:      aws.String(themeId),
+		Name:         aws.String(d.Get("name").(string)),
+		BaseThemeId:  aws.String(d.Get("base_theme_id").(string)),
+		Tags:         getTagsIn(ctx),
+	}
+
+	if v, ok := d.GetOk("version_description"); ok {
+		input.VersionDescription = aws.String(v.(string))
+	}
+
+	if v, ok := d.GetOk("configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
+		input.Configuration = expandThemeConfiguration(v.([]interface{}))
+	}
+
+	if v, ok := d.GetOk("permissions"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil {
+		input.Permissions = expandResourcePermissions(v.([]interface{}))
+	}
+
+	_, err := conn.CreateThemeWithContext(ctx, input)
+	if err != nil {
+		return create.DiagError(names.QuickSight, create.ErrActionCreating, ResNameTheme, d.Get("name").(string), err)
+	}
+
+	if _, err := waitThemeCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil {
+		return create.DiagError(names.QuickSight, create.ErrActionWaitingForCreation, ResNameTheme, d.Id(), err)
+	}
+
+	return resourceThemeRead(ctx, d, meta)
+}
+
+func resourceThemeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+	conn := meta.(*conns.AWSClient).QuickSightConn(ctx)
+
+	awsAccountId, themeId, err := ParseThemeId(d.Id())
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	out, err := FindThemeByID(ctx, conn, d.Id())
+
+	if !d.IsNewResource() && tfresource.NotFound(err) {
+		log.Printf("[WARN] QuickSight Theme (%s) not found, removing from state", d.Id())
+		d.SetId("")
+		return nil
+	}
+
+	if err != nil {
+		return create.DiagError(names.QuickSight, create.ErrActionReading, ResNameTheme, d.Id(), err)
+	}
+
+	d.Set("arn", out.Arn)
+	d.Set("aws_account_id", awsAccountId)
+	d.Set("base_theme_id", out.Version.BaseThemeId)
+	d.Set("created_time", out.CreatedTime.Format(time.RFC3339))
+	d.Set("last_updated_time", out.LastUpdatedTime.Format(time.RFC3339))
+	d.Set("name", out.Name)
+	d.Set("status", out.Version.Status)
+	d.Set("theme_id", out.ThemeId)
+	d.Set("version_description", out.Version.Description)
+	d.Set("version_number", out.Version.VersionNumber)
+
+	if err := d.Set("configuration", 
flattenThemeConfiguration(out.Version.Configuration)); err != nil { + return diag.Errorf("setting configuration: %s", err) + } + + permsResp, err := conn.DescribeThemePermissionsWithContext(ctx, &quicksight.DescribeThemePermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + ThemeId: aws.String(themeId), + }) + + if err != nil { + return diag.Errorf("describing QuickSight Theme (%s) Permissions: %s", d.Id(), err) + } + + if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { + return diag.Errorf("setting permissions: %s", err) + } + + return nil +} + +func resourceThemeUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, themeId, err := ParseThemeId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + if d.HasChangesExcept("permissions", "tags", "tags_all") { + in := &quicksight.UpdateThemeInput{ + AwsAccountId: aws.String(awsAccountId), + ThemeId: aws.String(themeId), + BaseThemeId: aws.String(d.Get("base_theme_id").(string)), + Name: aws.String(d.Get("name").(string)), + } + + if v, ok := d.GetOk("configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + in.Configuration = expandThemeConfiguration(v.([]interface{})) + } + + log.Printf("[DEBUG] Updating QuickSight Theme (%s): %#v", d.Id(), in) + _, err := conn.UpdateThemeWithContext(ctx, in) + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionUpdating, ResNameTheme, d.Id(), err) + } + + if _, err := waitThemeUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return create.DiagError(names.QuickSight, create.ErrActionWaitingForUpdate, ResNameTheme, d.Id(), err) + } + } + + if d.HasChange("permissions") { + oraw, nraw := d.GetChange("permissions") + o := oraw.([]interface{}) + n := nraw.([]interface{}) + + toGrant, toRevoke := DiffPermissions(o, n) + + params := 
&quicksight.UpdateThemePermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + ThemeId: aws.String(themeId), + } + + if len(toGrant) > 0 { + params.GrantPermissions = toGrant + } + + if len(toRevoke) > 0 { + params.RevokePermissions = toRevoke + } + + _, err = conn.UpdateThemePermissionsWithContext(ctx, params) + + if err != nil { + return diag.Errorf("updating QuickSight Theme (%s) permissions: %s", themeId, err) + } + } + + return resourceThemeRead(ctx, d, meta) +} + +func resourceThemeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId, themeId, err := ParseThemeId(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + input := &quicksight.DeleteThemeInput{ + AwsAccountId: aws.String(awsAccountId), + ThemeId: aws.String(themeId), + } + + log.Printf("[INFO] Deleting QuickSight Theme %s", d.Id()) + _, err = conn.DeleteThemeWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionDeleting, ResNameTheme, d.Id(), err) + } + + return nil +} + +func FindThemeByID(ctx context.Context, conn *quicksight.QuickSight, id string) (*quicksight.Theme, error) { + awsAccountId, themeId, err := ParseThemeId(id) + if err != nil { + return nil, err + } + + descOpts := &quicksight.DescribeThemeInput{ + AwsAccountId: aws.String(awsAccountId), + ThemeId: aws.String(themeId), + } + + out, err := conn.DescribeThemeWithContext(ctx, descOpts) + + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: descOpts, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.Theme == nil { + return nil, tfresource.NewEmptyResultError(descOpts) + } + + return out.Theme, nil +} + +func ParseThemeId(id string) (string, string, 
error) { + parts := strings.SplitN(id, ",", 2) + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%s), expected AWS_ACCOUNT_ID,THEME_ID", id) + } + return parts[0], parts[1], nil +} + +func createThemeId(awsAccountID, themeId string) string { + return fmt.Sprintf("%s,%s", awsAccountID, themeId) +} + +func expandThemeConfiguration(tfList []interface{}) *quicksight.ThemeConfiguration { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.ThemeConfiguration{} + + if v, ok := tfMap["data_color_palette"].([]interface{}); ok && len(v) > 0 { + config.DataColorPalette = expandDataColorPalette(v) + } + if v, ok := tfMap["sheet"].([]interface{}); ok && len(v) > 0 { + config.Sheet = expandSheetStyle(v) + } + if v, ok := tfMap["typography"].([]interface{}); ok && len(v) > 0 { + config.Typography = expandTypography(v) + } + if v, ok := tfMap["ui_color_palette"].([]interface{}); ok && len(v) > 0 { + config.UIColorPalette = expandUIColorPalette(v) + } + + return config +} + +func expandDataColorPalette(tfList []interface{}) *quicksight.DataColorPalette { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.DataColorPalette{} + + if v, ok := tfMap["colors"].([]interface{}); ok { + config.Colors = flex.ExpandStringList(v) + } + if v, ok := tfMap["empty_fill_color"].(string); ok && v != "" { + config.EmptyFillColor = aws.String(v) + } + if v, ok := tfMap["min_max_gradient"].([]interface{}); ok { + config.MinMaxGradient = flex.ExpandStringList(v) + } + + return config +} + +func expandSheetStyle(tfList []interface{}) *quicksight.SheetStyle { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := 
&quicksight.SheetStyle{} + + if v, ok := tfMap["tile"].([]interface{}); ok && len(v) > 0 { + config.Tile = expandTileStyle(v) + } + if v, ok := tfMap["tile_layout"].([]interface{}); ok && len(v) > 0 { + config.TileLayout = expandTileLayoutStyle(v) + } + + return config +} + +func expandTileStyle(tfList []interface{}) *quicksight.TileStyle { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.TileStyle{} + + if v, ok := tfMap["border"].([]interface{}); ok && len(v) > 0 { + config.Border = expandBorderStyle(v) + } + + return config +} + +func expandBorderStyle(tfList []interface{}) *quicksight.BorderStyle { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.BorderStyle{} + + if v, ok := tfMap["show"].(bool); ok { + config.Show = aws.Bool(v) + } + + return config +} + +func expandTileLayoutStyle(tfList []interface{}) *quicksight.TileLayoutStyle { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.TileLayoutStyle{} + + if v, ok := tfMap["gutter"].([]interface{}); ok && len(v) > 0 { + config.Gutter = expandGutterStyle(v) + } + if v, ok := tfMap["margin"].([]interface{}); ok && len(v) > 0 { + config.Margin = expandMarginStyle(v) + } + + return config +} + +func expandGutterStyle(tfList []interface{}) *quicksight.GutterStyle { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.GutterStyle{} + + if v, ok := tfMap["show"].(bool); ok { + config.Show = aws.Bool(v) + } + + return config +} + +func expandMarginStyle(tfList []interface{}) *quicksight.MarginStyle { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } 
+ + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.MarginStyle{} + + if v, ok := tfMap["show"].(bool); ok { + config.Show = aws.Bool(v) + } + + return config +} + +func expandTypography(tfList []interface{}) *quicksight.Typography { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.Typography{} + + if v, ok := tfMap["font_families"].([]interface{}); ok && len(v) > 0 { + config.FontFamilies = expandFontFamilies(v) + } + return config +} + +func expandFontFamilies(tfList []interface{}) []*quicksight.Font { + if len(tfList) == 0 { + return nil + } + + var configs []*quicksight.Font + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + font := expandFont(tfMap) + if font == nil { + continue + } + + configs = append(configs, font) + } + + return configs +} + +func expandFont(tfMap map[string]interface{}) *quicksight.Font { + if tfMap == nil { + return nil + } + + font := &quicksight.Font{} + + if v, ok := tfMap["font_family"].(string); ok && v != "" { + font.FontFamily = aws.String(v) + } + + return font +} + +func expandUIColorPalette(tfList []interface{}) *quicksight.UIColorPalette { + if len(tfList) == 0 || tfList[0] == nil { + return nil + } + + tfMap, ok := tfList[0].(map[string]interface{}) + if !ok { + return nil + } + + config := &quicksight.UIColorPalette{} + + if v, ok := tfMap["accent"].(string); ok && v != "" { + config.Accent = aws.String(v) + } + if v, ok := tfMap["accent_foreground"].(string); ok && v != "" { + config.AccentForeground = aws.String(v) + } + if v, ok := tfMap["danger"].(string); ok && v != "" { + config.Danger = aws.String(v) + } + if v, ok := tfMap["danger_foreground"].(string); ok && v != "" { + config.DangerForeground = aws.String(v) + } + if v, ok := tfMap["dimension"].(string); ok && v != "" { + 
config.Dimension = aws.String(v) + } + if v, ok := tfMap["dimension_foreground"].(string); ok && v != "" { + config.DimensionForeground = aws.String(v) + } + if v, ok := tfMap["measure"].(string); ok && v != "" { + config.Measure = aws.String(v) + } + if v, ok := tfMap["measure_foreground"].(string); ok && v != "" { + config.MeasureForeground = aws.String(v) + } + if v, ok := tfMap["primary_background"].(string); ok && v != "" { + config.PrimaryBackground = aws.String(v) + } + if v, ok := tfMap["primary_foreground"].(string); ok && v != "" { + config.PrimaryForeground = aws.String(v) + } + if v, ok := tfMap["secondary_background"].(string); ok && v != "" { + config.SecondaryBackground = aws.String(v) + } + if v, ok := tfMap["secondary_foreground"].(string); ok && v != "" { + config.SecondaryForeground = aws.String(v) + } + if v, ok := tfMap["success"].(string); ok && v != "" { + config.Success = aws.String(v) + } + if v, ok := tfMap["success_foreground"].(string); ok && v != "" { + config.SuccessForeground = aws.String(v) + } + if v, ok := tfMap["warning"].(string); ok && v != "" { + config.Warning = aws.String(v) + } + if v, ok := tfMap["warning_foreground"].(string); ok && v != "" { + config.WarningForeground = aws.String(v) + } + + return config +} + +func flattenThemeConfiguration(apiObject *quicksight.ThemeConfiguration) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.DataColorPalette != nil { + tfMap["data_color_palette"] = flattenDataColorPalette(apiObject.DataColorPalette) + } + if apiObject.Sheet != nil { + tfMap["sheet"] = flattenSheetStyle(apiObject.Sheet) + } + if apiObject.Typography != nil { + tfMap["typography"] = flattenTypography(apiObject.Typography) + } + if apiObject.UIColorPalette != nil { + tfMap["ui_color_palette"] = flattenUIColorPalette(apiObject.UIColorPalette) + } + + return []interface{}{tfMap} +} + +func flattenDataColorPalette(apiObject *quicksight.DataColorPalette) 
[]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Colors != nil { + tfMap["colors"] = flex.FlattenStringList(apiObject.Colors) + } + if apiObject.EmptyFillColor != nil { + tfMap["empty_fill_color"] = aws.StringValue(apiObject.EmptyFillColor) + } + if apiObject.MinMaxGradient != nil { + tfMap["min_max_gradient"] = flex.FlattenStringList(apiObject.MinMaxGradient) + } + + return []interface{}{tfMap} +} + +func flattenSheetStyle(apiObject *quicksight.SheetStyle) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Tile != nil { + tfMap["tile"] = flattenTileStyle(apiObject.Tile) + } + if apiObject.TileLayout != nil { + tfMap["tile_layout"] = flattenTileLayoutStyle(apiObject.TileLayout) + } + + return []interface{}{tfMap} +} + +func flattenTileStyle(apiObject *quicksight.TileStyle) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Border != nil { + tfMap["border"] = flattenBorderStyle(apiObject.Border) + } + + return []interface{}{tfMap} +} + +func flattenBorderStyle(apiObject *quicksight.BorderStyle) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Show != nil { + tfMap["show"] = aws.BoolValue(apiObject.Show) + } + + return []interface{}{tfMap} +} + +func flattenTileLayoutStyle(apiObject *quicksight.TileLayoutStyle) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Gutter != nil { + tfMap["gutter"] = flattenGutterStyle(apiObject.Gutter) + } + if apiObject.Margin != nil { + tfMap["margin"] = flattenMarginStyle(apiObject.Margin) + } + + return []interface{}{tfMap} +} + +func flattenGutterStyle(apiObject *quicksight.GutterStyle) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Show != nil { + tfMap["show"] = 
aws.BoolValue(apiObject.Show) + } + + return []interface{}{tfMap} +} + +func flattenMarginStyle(apiObject *quicksight.MarginStyle) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Show != nil { + tfMap["show"] = aws.BoolValue(apiObject.Show) + } + + return []interface{}{tfMap} +} + +func flattenTypography(apiObject *quicksight.Typography) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.FontFamilies != nil { + tfMap["font_families"] = flattenFonts(apiObject.FontFamilies) + } + + return []interface{}{tfMap} +} + +func flattenFonts(apiObject []*quicksight.Font) []interface{} { + if len(apiObject) == 0 { + return nil + } + + var tfList []interface{} + for _, font := range apiObject { + if font == nil { + continue + } + + tfMap := map[string]interface{}{} + if font.FontFamily != nil { + tfMap["font_family"] = aws.StringValue(font.FontFamily) + } + tfList = append(tfList, tfMap) + } + + return tfList +} + +func flattenUIColorPalette(apiObject *quicksight.UIColorPalette) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + if apiObject.Accent != nil { + tfMap["accent"] = aws.StringValue(apiObject.Accent) + } + if apiObject.AccentForeground != nil { + tfMap["accent_foreground"] = aws.StringValue(apiObject.AccentForeground) + } + if apiObject.Danger != nil { + tfMap["danger"] = aws.StringValue(apiObject.Danger) + } + if apiObject.DangerForeground != nil { + tfMap["danger_foreground"] = aws.StringValue(apiObject.DangerForeground) + } + if apiObject.Dimension != nil { + tfMap["dimension"] = aws.StringValue(apiObject.Dimension) + } + if apiObject.DimensionForeground != nil { + tfMap["dimension_foreground"] = aws.StringValue(apiObject.DimensionForeground) + } + if apiObject.Measure != nil { + tfMap["measure"] = aws.StringValue(apiObject.Measure) + } + if apiObject.MeasureForeground != nil { + 
tfMap["measure_foreground"] = aws.StringValue(apiObject.MeasureForeground) + } + if apiObject.PrimaryBackground != nil { + tfMap["primary_background"] = aws.StringValue(apiObject.PrimaryBackground) + } + if apiObject.PrimaryForeground != nil { + tfMap["primary_foreground"] = aws.StringValue(apiObject.PrimaryForeground) + } + if apiObject.SecondaryBackground != nil { + tfMap["secondary_background"] = aws.StringValue(apiObject.SecondaryBackground) + } + if apiObject.SecondaryForeground != nil { + tfMap["secondary_foreground"] = aws.StringValue(apiObject.SecondaryForeground) + } + if apiObject.Success != nil { + tfMap["success"] = aws.StringValue(apiObject.Success) + } + if apiObject.SuccessForeground != nil { + tfMap["success_foreground"] = aws.StringValue(apiObject.SuccessForeground) + } + if apiObject.Warning != nil { + tfMap["warning"] = aws.StringValue(apiObject.Warning) + } + if apiObject.WarningForeground != nil { + tfMap["warning_foreground"] = aws.StringValue(apiObject.WarningForeground) + } + + return []interface{}{tfMap} +} diff --git a/internal/service/quicksight/theme_data_source.go b/internal/service/quicksight/theme_data_source.go new file mode 100644 index 00000000000..0f8028edef7 --- /dev/null +++ b/internal/service/quicksight/theme_data_source.go @@ -0,0 +1,330 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package quicksight + +import ( + "context" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKDataSource("aws_quicksight_theme", name="Theme") +func DataSourceTheme() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceThemeRead, + + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: verify.ValidAccountID, + }, + "base_theme_id": { + Type: schema.TypeString, + Computed: true, + }, + "configuration": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ThemeConfiguration.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_color_palette": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataColorPalette.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "colors": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "empty_fill_color": { + Type: schema.TypeString, + Computed: true, + }, + "min_max_gradient": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + "sheet": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetStyle.html + 
Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "tile": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_TileStyle.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "border": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_BorderStyle.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "show": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "tile_layout": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_TileLayoutStyle.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "gutter": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_GutterStyle.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "show": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "margin": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_MarginStyle.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "show": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "typography": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Typography.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "font_families": { // https://docs.aws.amazon.com/quicksight/latest/APIReference/API_Font.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "font_family": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "ui_color_palette": { // 
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_UIColorPalette.html + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "accent": { + Type: schema.TypeString, + Computed: true, + }, + "accent_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "danger": { + Type: schema.TypeString, + Computed: true, + }, + "danger_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "dimension": { + Type: schema.TypeString, + Computed: true, + }, + "dimension_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "measure": { + Type: schema.TypeString, + Computed: true, + }, + "measure_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "primary_background": { + Type: schema.TypeString, + Computed: true, + }, + "primary_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "secondary_background": { + Type: schema.TypeString, + Computed: true, + }, + "secondary_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "success": { + Type: schema.TypeString, + Computed: true, + }, + "success_foreground": { + Type: schema.TypeString, + Computed: true, + }, + "warning": { + Type: schema.TypeString, + Computed: true, + }, + "warning_foreground": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + }, + }, + "created_time": { + Type: schema.TypeString, + Computed: true, + }, + "theme_id": { + Type: schema.TypeString, + Required: true, + }, + "last_updated_time": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "permissions": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "actions": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "principal": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "status": { + Type: 
schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchemaComputed(), + "version_description": { + Type: schema.TypeString, + Computed: true, + }, + "version_number": { + Type: schema.TypeInt, + Computed: true, + }, + } + }, + } +} + +const ( + DSNameTheme = "Theme Data Source" +) + +func dataSourceThemeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) + + awsAccountId := meta.(*conns.AWSClient).AccountID + if v, ok := d.GetOk("aws_account_id"); ok { + awsAccountId = v.(string) + } + themeId := d.Get("theme_id").(string) + + id := createThemeId(awsAccountId, themeId) + + out, err := FindThemeByID(ctx, conn, id) + + if err != nil { + return create.DiagError(names.QuickSight, create.ErrActionReading, ResNameTheme, d.Id(), err) + } + + d.SetId(id) + d.Set("arn", out.Arn) + d.Set("aws_account_id", awsAccountId) + d.Set("base_theme_id", out.Version.BaseThemeId) + d.Set("created_time", out.CreatedTime.Format(time.RFC3339)) + d.Set("last_updated_time", out.LastUpdatedTime.Format(time.RFC3339)) + d.Set("name", out.Name) + d.Set("status", out.Version.Status) + d.Set("theme_id", out.ThemeId) + d.Set("version_description", out.Version.Description) + d.Set("version_number", out.Version.VersionNumber) + + if err := d.Set("configuration", flattenThemeConfiguration(out.Version.Configuration)); err != nil { + return diag.Errorf("setting configuration: %s", err) + } + + permsResp, err := conn.DescribeThemePermissionsWithContext(ctx, &quicksight.DescribeThemePermissionsInput{ + AwsAccountId: aws.String(awsAccountId), + ThemeId: aws.String(themeId), + }) + + if err != nil { + return diag.Errorf("describing QuickSight Theme (%s) Permissions: %s", d.Id(), err) + } + + if err := d.Set("permissions", flattenPermissions(permsResp.Permissions)); err != nil { + return diag.Errorf("setting permissions: %s", err) + } + + return nil +} diff --git 
a/internal/service/quicksight/theme_data_source_test.go b/internal/service/quicksight/theme_data_source_test.go new file mode 100644 index 00000000000..fbc4a805703 --- /dev/null +++ b/internal/service/quicksight/theme_data_source_test.go @@ -0,0 +1,197 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package quicksight_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/quicksight" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccQuickSightThemeDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_quicksight_theme.test" + dataSourceName := "data.aws_quicksight_theme.test" + themeId := "MIDNIGHT" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccThemeDataSourceConfig_basic(rId, rName, themeId), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.data_color_palette.0.colors.0", "#FFFFFF"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.data_color_palette.0.empty_fill_color", "#FFFFFF"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.data_color_palette.0.min_max_gradient.0", "#FFFFFF"), + resource.TestCheckNoResourceAttr(dataSourceName, "configuration.0.sheet.0"), + resource.TestCheckNoResourceAttr(dataSourceName, "configuration.0.typography.0"), + resource.TestCheckNoResourceAttr(dataSourceName, 
"configuration.0.ui_color_palette.0"), + ), + }, + }, + }) +} + +func TestAccQuickSightThemeDataSource_fullConfig(t *testing.T) { + ctx := acctest.Context(t) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_quicksight_theme.test" + dataSourceName := "data.aws_quicksight_theme.test" + themeId := "MIDNIGHT" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccThemeDataSourceConfig_fullConfig(rId, rName, themeId), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.data_color_palette.0.colors.0", "#FFFFFF"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.data_color_palette.0.empty_fill_color", "#FFFFFF"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.data_color_palette.0.min_max_gradient.0", "#FFFFFF"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.sheet.0.tile.0.border.0.show", "false"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.sheet.0.tile_layout.0.gutter.0.show", "false"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.sheet.0.tile_layout.0.margin.0.show", "false"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.typography.0.font_families.0.font_family", "monospace"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.typography.0.font_families.1.font_family", "Roboto"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.ui_color_palette.0.accent", "#202020"), + resource.TestCheckResourceAttr(dataSourceName, "configuration.0.ui_color_palette.0.accent_foreground", 
"#FFFFFF"), + ), + }, + }, + }) +} + +func testAccThemeDataSourceConfig_basic(rId, rName, baseThemId string) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_quicksight_theme" "test" { + theme_id = %[1]q + name = %[2]q + + base_theme_id = %[3]q + + configuration { + data_color_palette { + colors = [ + "#FFFFFF", + "#111111", + "#222222", + "#333333", + "#444444", + "#555555", + "#666666", + "#777777", + "#888888", + "#999999" + ] + empty_fill_color = "#FFFFFF" + min_max_gradient = [ + "#FFFFFF", + "#111111", + ] + } + } +} + +data "aws_quicksight_theme" "test" { + theme_id = aws_quicksight_theme.test.theme_id +} +`, rId, rName, baseThemId)) +} + +func testAccThemeDataSourceConfig_fullConfig(rId, rName, baseThemId string) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_quicksight_theme" "test" { + theme_id = %[1]q + name = %[2]q + + base_theme_id = %[3]q + + configuration { + data_color_palette { + colors = [ + "#FFFFFF", + "#111111", + "#222222", + "#333333", + "#444444", + "#555555", + "#666666", + "#777777", + "#888888", + "#999999" + ] + empty_fill_color = "#FFFFFF" + min_max_gradient = [ + "#FFFFFF", + "#111111", + ] + } + sheet { + tile { + border { + show = false + } + } + tile_layout { + gutter { + show = false + } + margin { + show = false + } + } + } + typography { + font_families { + font_family = "monospace" + } + font_families { + font_family = "Roboto" + } + } + ui_color_palette { + accent = "#202020" + accent_foreground = "#FFFFFF" + danger = "#202020" + danger_foreground = "#FFFFFF" + dimension = "#202020" + dimension_foreground = "#FFFFFF" + measure = "#202020" + measure_foreground = "#FFFFFF" + primary_background = "#202020" + primary_foreground = "#FFFFFF" + secondary_background = "#202020" + secondary_foreground = "#FFFFFF" + success = "#202020" + success_foreground = "#FFFFFF" + warning = "#202020" + warning_foreground = "#FFFFFF" + } + } +} + +data "aws_quicksight_theme" "test" { + theme_id = 
aws_quicksight_theme.test.theme_id +} +`, rId, rName, baseThemId)) +} diff --git a/internal/service/quicksight/theme_test.go b/internal/service/quicksight/theme_test.go new file mode 100644 index 00000000000..e3057c61932 --- /dev/null +++ b/internal/service/quicksight/theme_test.go @@ -0,0 +1,358 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package quicksight_test + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/quicksight" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tfquicksight "github.com/hashicorp/terraform-provider-aws/internal/service/quicksight" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccQuickSightTheme_basic(t *testing.T) { + ctx := acctest.Context(t) + + var theme quicksight.Theme + resourceName := "aws_quicksight_theme.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + themeId := "MIDNIGHT" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckThemeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccThemeConfig_basic(rId, rName, themeId), + Check: resource.ComposeTestCheckFunc( + testAccCheckThemeExists(ctx, resourceName, &theme), + resource.TestCheckResourceAttr(resourceName, "theme_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), 
+ resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccQuickSightTheme_disappears(t *testing.T) { + ctx := acctest.Context(t) + + var theme quicksight.Theme + resourceName := "aws_quicksight_theme.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + themeId := "MIDNIGHT" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckThemeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccThemeConfig_basic(rId, rName, themeId), + Check: resource.ComposeTestCheckFunc( + testAccCheckThemeExists(ctx, resourceName, &theme), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfquicksight.ResourceTheme(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} +func TestAccQuickSightTheme_fullConfig(t *testing.T) { + ctx := acctest.Context(t) + + var theme quicksight.Theme + resourceName := "aws_quicksight_theme.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + themeId := "MIDNIGHT" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckThemeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccThemeConfig_fullConfig(rId, rName, themeId), + Check: resource.ComposeTestCheckFunc( + testAccCheckThemeExists(ctx, resourceName, &theme), + resource.TestCheckResourceAttr(resourceName, "theme_id", rId), + 
resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + resource.TestCheckResourceAttr(resourceName, "configuration.0.ui_color_palette.0.measure_foreground", "#FFFFFF"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccQuickSightTheme_update(t *testing.T) { + ctx := acctest.Context(t) + + var theme quicksight.Theme + resourceName := "aws_quicksight_theme.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rId := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + themeId := "MIDNIGHT" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, quicksight.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckThemeDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccThemeConfig_update(rId, rName, themeId, "#FFFFFF"), + Check: resource.ComposeTestCheckFunc( + testAccCheckThemeExists(ctx, resourceName, &theme), + resource.TestCheckResourceAttr(resourceName, "theme_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + resource.TestCheckResourceAttr(resourceName, "configuration.0.data_color_palette.0.empty_fill_color", "#FFFFFF"), + resource.TestCheckResourceAttr(resourceName, "version_number", "1"), + ), + }, + { + Config: testAccThemeConfig_update(rId, rNameUpdated, themeId, "#000000"), + Check: resource.ComposeTestCheckFunc( + testAccCheckThemeExists(ctx, resourceName, &theme), + resource.TestCheckResourceAttr(resourceName, "theme_id", rId), + resource.TestCheckResourceAttr(resourceName, "name", rNameUpdated), + 
resource.TestCheckResourceAttr(resourceName, "status", quicksight.ResourceStatusCreationSuccessful), + resource.TestCheckResourceAttr(resourceName, "configuration.0.data_color_palette.0.empty_fill_color", "#000000"), + resource.TestCheckResourceAttr(resourceName, "version_number", "2"), + ), + }, + }, + }) +} + +func testAccCheckThemeDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_quicksight_theme" { + continue + } + + output, err := tfquicksight.FindThemeByID(ctx, conn, rs.Primary.ID) + if err != nil { + if tfawserr.ErrCodeEquals(err, quicksight.ErrCodeResourceNotFoundException) { + return nil + } + return err + } + + if output != nil { + return fmt.Errorf("QuickSight Theme (%s) still exists", rs.Primary.ID) + } + } + + return nil + } +} + +func testAccCheckThemeExists(ctx context.Context, name string, theme *quicksight.Theme) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameTheme, name, errors.New("not found")) + } + + if rs.Primary.ID == "" { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameTheme, name, errors.New("not set")) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) + output, err := tfquicksight.FindThemeByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameTheme, rs.Primary.ID, err) + } + + *theme = *output + + return nil + } +} + +func testAccThemeConfig_basic(rId, rName, baseThemId string) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_quicksight_theme" "test" { + theme_id = %[1]q + name = %[2]q + + 
base_theme_id = %[3]q + + configuration { + data_color_palette { + colors = [ + "#FFFFFF", + "#111111", + "#222222", + "#333333", + "#444444", + "#555555", + "#666666", + "#777777", + "#888888", + "#999999" + ] + empty_fill_color = "#FFFFFF" + min_max_gradient = [ + "#FFFFFF", + "#111111", + ] + } + } +} +`, rId, rName, baseThemId)) +} + +func testAccThemeConfig_fullConfig(rId, rName, baseThemId string) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_quicksight_theme" "test" { + theme_id = %[1]q + name = %[2]q + + base_theme_id = %[3]q + + configuration { + data_color_palette { + colors = [ + "#FFFFFF", + "#111111", + "#222222", + "#333333", + "#444444", + "#555555", + "#666666", + "#777777", + "#888888", + "#999999" + ] + empty_fill_color = "#FFFFFF" + min_max_gradient = [ + "#FFFFFF", + "#111111", + ] + } + sheet { + tile { + border { + show = false + } + } + tile_layout { + gutter { + show = false + } + margin { + show = false + } + } + } + typography { + font_families { + font_family = "monospace" + } + font_families { + font_family = "Roboto" + } + } + ui_color_palette { + accent = "#202020" + accent_foreground = "#FFFFFF" + danger = "#202020" + danger_foreground = "#FFFFFF" + dimension = "#202020" + dimension_foreground = "#FFFFFF" + measure = "#202020" + measure_foreground = "#FFFFFF" + primary_background = "#202020" + primary_foreground = "#FFFFFF" + secondary_background = "#202020" + secondary_foreground = "#FFFFFF" + success = "#202020" + success_foreground = "#FFFFFF" + warning = "#202020" + warning_foreground = "#FFFFFF" + } + } +} +`, rId, rName, baseThemId)) +} + +func testAccThemeConfig_update(rId, rName, baseThemId, emptyFillColor string) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_quicksight_theme" "test" { + theme_id = %[1]q + name = %[2]q + + base_theme_id = %[3]q + + configuration { + data_color_palette { + colors = [ + "#FFFFFF", + "#111111", + "#222222", + "#333333", + "#444444", + "#555555", 
+ "#666666", + "#777777", + "#888888", + "#999999" + ] + empty_fill_color = %[4]q + min_max_gradient = [ + "#FFFFFF", + "#111111", + ] + } + } +} +`, rId, rName, baseThemId, emptyFillColor)) +} diff --git a/internal/service/quicksight/user.go b/internal/service/quicksight/user.go index cb294e33158..63107345fba 100644 --- a/internal/service/quicksight/user.go +++ b/internal/service/quicksight/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -29,81 +32,83 @@ func ResourceUser() *schema.Resource { UpdateWithoutTimeout: resourceUserUpdate, DeleteWithoutTimeout: resourceUserDelete, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - - "email": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - - "iam_arn": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - - "identity_type": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - quicksight.IdentityTypeIam, - quicksight.IdentityTypeQuicksight, - }, false), - }, - - "namespace": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Default: DefaultUserNamespace, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 63), - validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), - ), - }, - - "session_name": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - }, - - "user_name": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.NoZeroValues, - }, - - "user_role": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - quicksight.UserRoleReader, - quicksight.UserRoleAuthor, - 
quicksight.UserRoleAdmin, - }, false), - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + + "email": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "iam_arn": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "identity_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + quicksight.IdentityTypeIam, + quicksight.IdentityTypeQuicksight, + }, false), + }, + + "namespace": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: DefaultUserNamespace, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 63), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), + ), + }, + + "session_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + + "user_name": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.NoZeroValues, + }, + + "user_role": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{ + quicksight.UserRoleReader, + quicksight.UserRoleAuthor, + quicksight.UserRoleAdmin, + }, false), + }, + } }, } } func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID := meta.(*conns.AWSClient).AccountID @@ -145,7 +150,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics 
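The `user.go` change above swaps the eager `Schema` map literal for `SchemaFunc`, which defers building the attribute map until the SDK first asks for it, so providers with many resources do not pay the allocation for schemas that are never used in a given run. A simplified sketch of the dispatch (toy types, not the real `helper/schema` package):

```go
package main

import "fmt"

// attr is a stand-in for *schema.Schema.
type attr struct {
	Type     string
	Required bool
	Computed bool
}

// resource holds either an eagerly built schema or a constructor for one,
// mirroring the Schema / SchemaFunc pair on schema.Resource.
type resource struct {
	Schema     map[string]attr        // resident from init time
	SchemaFunc func() map[string]attr // allocated on first use
}

// schema prefers the lazy constructor when one is set.
func (r resource) schema() map[string]attr {
	if r.SchemaFunc != nil {
		return r.SchemaFunc()
	}
	return r.Schema
}

func main() {
	r := resource{
		SchemaFunc: func() map[string]attr {
			return map[string]attr{
				"arn":       {Type: "string", Computed: true},
				"user_name": {Type: "string", Required: true},
			}
		},
	}
	fmt.Println(len(r.schema())) // 2
}
```

The real SDK additionally caches the result; this sketch rebuilds the map on every call, which is the trade-off `SchemaFunc` accepts in exchange for a smaller resident footprint.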
- conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, userName, err := UserParseID(d.Id()) if err != nil { @@ -180,7 +185,7 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, userName, err := UserParseID(d.Id()) if err != nil { @@ -205,7 +210,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID, namespace, userName, err := UserParseID(d.Id()) if err != nil { diff --git a/internal/service/quicksight/user_data_source.go b/internal/service/quicksight/user_data_source.go index 990be57783a..d7167c94b6c 100644 --- a/internal/service/quicksight/user_data_source.go +++ b/internal/service/quicksight/user_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -20,56 +23,58 @@ func DataSourceUser() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceUserRead, - Schema: map[string]*schema.Schema{ - "active": { - Type: schema.TypeBool, - Computed: true, - }, - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "aws_account_id": { - Type: schema.TypeString, - Optional: true, - Computed: true, - }, - "email": { - Type: schema.TypeString, - Computed: true, - }, - "identity_type": { - Type: schema.TypeString, - Computed: true, - }, - "namespace": { - Type: schema.TypeString, - Optional: true, - Default: DefaultUserNamespace, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 63), - validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), - ), - }, - "principal_id": { - Type: schema.TypeString, - Computed: true, - }, - "user_name": { - Type: schema.TypeString, - Required: true, - }, - "user_role": { - Type: schema.TypeString, - Computed: true, - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "active": { + Type: schema.TypeBool, + Computed: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "aws_account_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "email": { + Type: schema.TypeString, + Computed: true, + }, + "identity_type": { + Type: schema.TypeString, + Computed: true, + }, + "namespace": { + Type: schema.TypeString, + Optional: true, + Default: DefaultUserNamespace, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 63), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._-]*$`), "must contain only alphanumeric characters, hyphens, underscores, and periods"), + ), + }, + "principal_id": { + Type: schema.TypeString, + Computed: true, + }, + "user_name": { + Type: schema.TypeString, + Required: true, + }, + 
"user_role": { + Type: schema.TypeString, + Computed: true, + }, + } }, } } func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).QuickSightConn() + conn := meta.(*conns.AWSClient).QuickSightConn(ctx) awsAccountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("aws_account_id"); ok { diff --git a/internal/service/quicksight/user_data_source_test.go b/internal/service/quicksight/user_data_source_test.go index 322b929d7c3..9e180d14f31 100644 --- a/internal/service/quicksight/user_data_source_test.go +++ b/internal/service/quicksight/user_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( diff --git a/internal/service/quicksight/user_test.go b/internal/service/quicksight/user_test.go index 57a2e52375e..4b7f9faf521 100644 --- a/internal/service/quicksight/user_test.go +++ b/internal/service/quicksight/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -148,7 +151,7 @@ func testAccCheckUserExists(ctx context.Context, resourceName string, user *quic return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) input := &quicksight.DescribeUserInput{ AwsAccountId: aws.String(awsAccountID), @@ -174,7 +177,7 @@ func testAccCheckUserExists(ctx context.Context, resourceName string, user *quic func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_user" { continue @@ -207,7 +210,7 @@ func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckUserDisappears(ctx context.Context, v *quicksight.User) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) arn, err := arn.Parse(aws.StringValue(v.Arn)) if err != nil { diff --git a/internal/service/quicksight/vpc_connection.go b/internal/service/quicksight/vpc_connection.go index 58cb825237d..ff1fb20aac9 100644 --- a/internal/service/quicksight/vpc_connection.go +++ b/internal/service/quicksight/vpc_connection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -23,8 +26,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -152,7 +155,7 @@ func (r *resourceVPCConnection) Schema(ctx context.Context, req resource.SchemaR } func (r *resourceVPCConnection) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan resourceVPCConnectionData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -172,7 +175,7 @@ func (r *resourceVPCConnection) Create(ctx context.Context, req resource.CreateR RoleArn: aws.String(plan.RoleArn.ValueString()), SecurityGroupIds: flex.ExpandFrameworkStringSet(ctx, plan.SecurityGroupIds), SubnetIds: flex.ExpandFrameworkStringSet(ctx, plan.SubnetIds), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if !plan.DnsResolvers.IsNull() { @@ -214,7 +217,7 @@ func (r *resourceVPCConnection) Create(ctx context.Context, req resource.CreateR } func (r *resourceVPCConnection) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceVPCConnectionData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) 
@@ -267,7 +270,7 @@ func (r *resourceVPCConnection) Read(ctx context.Context, req resource.ReadReque } func (r *resourceVPCConnection) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var plan, state resourceVPCConnectionData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -330,7 +333,7 @@ func (r *resourceVPCConnection) Update(ctx context.Context, req resource.UpdateR } func (r *resourceVPCConnection) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().QuickSightConn() + conn := r.Meta().QuickSightConn(ctx) var state resourceVPCConnectionData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/quicksight/vpc_connection_test.go b/internal/service/quicksight/vpc_connection_test.go index e2e3445191d..0bb22329dbc 100644 --- a/internal/service/quicksight/vpc_connection_test.go +++ b/internal/service/quicksight/vpc_connection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight_test import ( @@ -136,7 +139,7 @@ func testAccCheckVPCConnectionExists(ctx context.Context, resourceName string, v return fmt.Errorf("not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) output, err := tfquicksight.FindVPCConnectionByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.QuickSight, create.ErrActionCheckingExistence, tfquicksight.ResNameVPCConnection, rs.Primary.ID, err) @@ -150,7 +153,7 @@ func testAccCheckVPCConnectionExists(ctx context.Context, resourceName string, v func testAccCheckVPCConnectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).QuickSightConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_quicksight_vpc_connection" { continue diff --git a/internal/service/quicksight/wait.go b/internal/service/quicksight/wait.go index 609ee1f1133..3153dfc176e 100644 --- a/internal/service/quicksight/wait.go +++ b/internal/service/quicksight/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package quicksight import ( @@ -95,7 +98,7 @@ func waitTemplateCreated(ctx context.Context, conn *quicksight.QuickSight, id st func waitTemplateUpdated(ctx context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Template, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{quicksight.ResourceStatusUpdateInProgress}, + Pending: []string{quicksight.ResourceStatusUpdateInProgress, quicksight.ResourceStatusCreationInProgress}, Target: []string{quicksight.ResourceStatusUpdateSuccessful, quicksight.ResourceStatusCreationSuccessful}, Refresh: statusTemplate(ctx, conn, id), Timeout: timeout, @@ -122,3 +125,183 @@ func waitTemplateUpdated(ctx context.Context, conn *quicksight.QuickSight, id st return nil, err } + +func waitDashboardCreated(ctx context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Dashboard, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{quicksight.ResourceStatusCreationInProgress}, + Target: []string{quicksight.ResourceStatusCreationSuccessful}, + Refresh: statusDashboard(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*quicksight.Dashboard); ok { + if status, apiErrors := aws.StringValue(out.Version.Status), out.Version.Errors; status == quicksight.ResourceStatusCreationFailed && apiErrors != nil { + var errors *multierror.Error + + for _, apiError := range apiErrors { + if apiError == nil { + continue + } + errors = multierror.Append(errors, awserr.New(aws.StringValue(apiError.Type), aws.StringValue(apiError.Message), nil)) + } + tfresource.SetLastError(err, errors) + } + + return out, err + } + + return nil, err +} + +func waitDashboardUpdated(ctx context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Dashboard, error) { + stateConf := 
&retry.StateChangeConf{ + Pending: []string{quicksight.ResourceStatusUpdateInProgress, quicksight.ResourceStatusCreationInProgress}, + Target: []string{quicksight.ResourceStatusUpdateSuccessful, quicksight.ResourceStatusCreationSuccessful}, + Refresh: statusDashboard(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*quicksight.Dashboard); ok { + if status, apiErrors := aws.StringValue(out.Version.Status), out.Version.Errors; status == quicksight.ResourceStatusCreationFailed && apiErrors != nil { + var errors *multierror.Error + + for _, apiError := range apiErrors { + if apiError == nil { + continue + } + errors = multierror.Append(errors, awserr.New(aws.StringValue(apiError.Type), aws.StringValue(apiError.Message), nil)) + } + tfresource.SetLastError(err, errors) + } + + return out, err + } + + return nil, err +} + +func waitAnalysisCreated(ctx context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Analysis, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{quicksight.ResourceStatusCreationInProgress}, + Target: []string{quicksight.ResourceStatusCreationSuccessful}, + Refresh: statusAnalysis(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*quicksight.Analysis); ok { + if status, apiErrors := aws.StringValue(out.Status), out.Errors; status == quicksight.ResourceStatusCreationFailed && apiErrors != nil { + var errors *multierror.Error + + for _, apiError := range apiErrors { + if apiError == nil { + continue + } + errors = multierror.Append(errors, awserr.New(aws.StringValue(apiError.Type), aws.StringValue(apiError.Message), nil)) + } + tfresource.SetLastError(err, errors) + } + + return out, err + } + + return nil, err +} + +func waitAnalysisUpdated(ctx 
context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Analysis, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{quicksight.ResourceStatusUpdateInProgress, quicksight.ResourceStatusCreationInProgress}, + Target: []string{quicksight.ResourceStatusUpdateSuccessful, quicksight.ResourceStatusCreationSuccessful}, + Refresh: statusAnalysis(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*quicksight.Analysis); ok { + if status, apiErrors := aws.StringValue(out.Status), out.Errors; status == quicksight.ResourceStatusCreationFailed && apiErrors != nil { + var errors *multierror.Error + + for _, apiError := range apiErrors { + if apiError == nil { + continue + } + errors = multierror.Append(errors, awserr.New(aws.StringValue(apiError.Type), aws.StringValue(apiError.Message), nil)) + } + tfresource.SetLastError(err, errors) + } + + return out, err + } + + return nil, err +} + +func waitThemeCreated(ctx context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Theme, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{quicksight.ResourceStatusCreationInProgress}, + Target: []string{quicksight.ResourceStatusCreationSuccessful}, + Refresh: statusTheme(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*quicksight.Theme); ok { + if status, apiErrors := aws.StringValue(out.Version.Status), out.Version.Errors; status == quicksight.ResourceStatusCreationFailed && apiErrors != nil { + var errors *multierror.Error + + for _, apiError := range apiErrors { + if apiError == nil { + continue + } + errors = multierror.Append(errors, awserr.New(aws.StringValue(apiError.Type), aws.StringValue(apiError.Message), nil)) + } + 
tfresource.SetLastError(err, errors) + } + + return out, err + } + + return nil, err +} + +func waitThemeUpdated(ctx context.Context, conn *quicksight.QuickSight, id string, timeout time.Duration) (*quicksight.Theme, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{quicksight.ResourceStatusUpdateInProgress, quicksight.ResourceStatusCreationInProgress}, + Target: []string{quicksight.ResourceStatusUpdateSuccessful, quicksight.ResourceStatusCreationSuccessful}, + Refresh: statusTheme(ctx, conn, id), + Timeout: timeout, + NotFoundChecks: 20, + ContinuousTargetOccurence: 2, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + if out, ok := outputRaw.(*quicksight.Theme); ok { + if status, apiErrors := aws.StringValue(out.Version.Status), out.Version.Errors; status == quicksight.ResourceStatusCreationFailed && apiErrors != nil { + var errors *multierror.Error + + for _, apiError := range apiErrors { + if apiError == nil { + continue + } + errors = multierror.Append(errors, awserr.New(aws.StringValue(apiError.Type), aws.StringValue(apiError.Message), nil)) + } + tfresource.SetLastError(err, errors) + } + + return out, err + } + + return nil, err +} diff --git a/internal/service/ram/find.go b/internal/service/ram/find.go index 5e25f9094cc..a2819a1c348 100644 --- a/internal/service/ram/find.go +++ b/internal/service/ram/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram import ( diff --git a/internal/service/ram/generate.go b/internal/service/ram/generate.go index 6684d89d11a..fbcb43cbd23 100644 --- a/internal/service/ram/generate.go +++ b/internal/service/ram/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTagsInIDElem=ResourceShareArn -ServiceTagsSlice -TagInIDElem=ResourceShareArn -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ram diff --git a/internal/service/ram/principal_association.go b/internal/service/ram/principal_association.go index 301893fb50c..c19c8b68713 100644 --- a/internal/service/ram/principal_association.go +++ b/internal/service/ram/principal_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram import ( @@ -53,7 +56,7 @@ func ResourcePrincipalAssociation() *schema.Resource { func resourcePrincipalAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceShareArn := d.Get("resource_share_arn").(string) principal := d.Get("principal").(string) @@ -86,7 +89,7 @@ func resourcePrincipalAssociationCreate(ctx context.Context, d *schema.ResourceD func resourcePrincipalAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceShareArn, principal, err := PrincipalAssociationParseID(d.Id()) if err != nil { @@ -130,7 +133,7 @@ func resourcePrincipalAssociationRead(ctx context.Context, d *schema.ResourceDat func resourcePrincipalAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceShareArn, principal, err := PrincipalAssociationParseID(d.Id()) if err != nil { diff --git 
a/internal/service/ram/principal_association_test.go b/internal/service/ram/principal_association_test.go index a0d77578df1..3db88d0f02e 100644 --- a/internal/service/ram/principal_association_test.go +++ b/internal/service/ram/principal_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram_test import ( @@ -69,7 +72,7 @@ func TestAccRAMPrincipalAssociation_disappears(t *testing.T) { func testAccCheckPrincipalAssociationExists(ctx context.Context, resourceName string, resourceShare *ram.ResourceShareAssociation) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -115,7 +118,7 @@ func testAccCheckPrincipalAssociationExists(ctx context.Context, resourceName st func testAccCheckPrincipalAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ram_principal_association" { diff --git a/internal/service/ram/resource_association.go b/internal/service/ram/resource_association.go index 76c25d78c39..7a69c40ae59 100644 --- a/internal/service/ram/resource_association.go +++ b/internal/service/ram/resource_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ram import ( @@ -48,7 +51,7 @@ func ResourceResourceAssociation() *schema.Resource { func resourceResourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceARN := d.Get("resource_arn").(string) resourceShareARN := d.Get("resource_share_arn").(string) @@ -75,7 +78,7 @@ func resourceResourceAssociationCreate(ctx context.Context, d *schema.ResourceDa func resourceResourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceShareARN, resourceARN, err := DecodeResourceAssociationID(d.Id()) if err != nil { @@ -106,7 +109,7 @@ func resourceResourceAssociationRead(ctx context.Context, d *schema.ResourceData func resourceResourceAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceShareARN, resourceARN, err := DecodeResourceAssociationID(d.Id()) if err != nil { diff --git a/internal/service/ram/resource_association_test.go b/internal/service/ram/resource_association_test.go index bf1ae656bf7..43cce6f03c6 100644 --- a/internal/service/ram/resource_association_test.go +++ b/internal/service/ram/resource_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ram_test import ( @@ -69,7 +72,7 @@ func TestAccRAMResourceAssociation_disappears(t *testing.T) { func testAccCheckResourceAssociationDisappears(ctx context.Context, resourceShareAssociation *ram.ResourceShareAssociation) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) input := &ram.DisassociateResourceShareInput{ ResourceArns: []*string{resourceShareAssociation.AssociatedEntity}, @@ -87,7 +90,7 @@ func testAccCheckResourceAssociationDisappears(ctx context.Context, resourceShar func testAccCheckResourceAssociationExists(ctx context.Context, resourceName string, association *ram.ResourceShareAssociation) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) rs, ok := s.RootModule().Resources[resourceName] @@ -121,7 +124,7 @@ func testAccCheckResourceAssociationExists(ctx context.Context, resourceName str func testAccCheckResourceAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ram_resource_association" { diff --git a/internal/service/ram/resource_share.go b/internal/service/ram/resource_share.go index 05f29313b1a..b4e43ce306d 100644 --- a/internal/service/ram/resource_share.go +++ b/internal/service/ram/resource_share.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ram import ( @@ -70,13 +73,13 @@ func ResourceResourceShare() *schema.Resource { func resourceResourceShareCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) name := d.Get("name").(string) input := &ram.CreateResourceShareInput{ AllowExternalPrincipals: aws.Bool(d.Get("allow_external_principals").(bool)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("permission_arns"); ok && v.(*schema.Set).Len() > 0 { @@ -101,7 +104,7 @@ func resourceResourceShareCreate(ctx context.Context, d *schema.ResourceData, me func resourceResourceShareRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) resourceShare, err := FindResourceShareOwnerSelfByARN(ctx, conn, d.Id()) @@ -125,7 +128,7 @@ func resourceResourceShareRead(ctx context.Context, d *schema.ResourceData, meta d.Set("arn", resourceShare.ResourceShareArn) d.Set("name", resourceShare.Name) - SetTagsOut(ctx, resourceShare.Tags) + setTagsOut(ctx, resourceShare.Tags) perms, err := conn.ListResourceSharePermissionsWithContext(ctx, &ram.ListResourceSharePermissionsInput{ ResourceShareArn: aws.String(d.Id()), @@ -148,7 +151,7 @@ func resourceResourceShareRead(ctx context.Context, d *schema.ResourceData, meta func resourceResourceShareUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) if d.HasChanges("name", "allow_external_principals") { input := &ram.UpdateResourceShareInput{ @@ -170,7 +173,7 @@ func resourceResourceShareUpdate(ctx context.Context, d *schema.ResourceData, me func 
resourceResourceShareDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) log.Printf("[DEBUG] Deleting RAM Resource Share: %s", d.Id()) _, err := conn.DeleteResourceShareWithContext(ctx, &ram.DeleteResourceShareInput{ diff --git a/internal/service/ram/resource_share_accepter.go b/internal/service/ram/resource_share_accepter.go index 903cedc7ff2..9386bd2d5f9 100644 --- a/internal/service/ram/resource_share_accepter.go +++ b/internal/service/ram/resource_share_accepter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram import ( @@ -85,7 +88,7 @@ func ResourceResourceShareAccepter() *schema.Resource { func resourceResourceShareAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) shareARN := d.Get("share_arn").(string) @@ -130,7 +133,7 @@ func resourceResourceShareAccepterCreate(ctx context.Context, d *schema.Resource func resourceResourceShareAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics accountID := meta.(*conns.AWSClient).AccountID - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) invitation, err := FindResourceShareInvitationByResourceShareARNAndStatus(ctx, conn, d.Id(), ram.ResourceShareInvitationStatusAccepted) @@ -195,7 +198,7 @@ func resourceResourceShareAccepterRead(ctx context.Context, d *schema.ResourceDa func resourceResourceShareAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) receiverAccountID := 
d.Get("receiver_account_id").(string) diff --git a/internal/service/ram/resource_share_accepter_test.go b/internal/service/ram/resource_share_accepter_test.go index 342d660ab8b..8c29aad5a69 100644 --- a/internal/service/ram/resource_share_accepter_test.go +++ b/internal/service/ram/resource_share_accepter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram_test import ( @@ -124,7 +127,7 @@ func TestAccRAMResourceShareAccepter_resourceAssociation(t *testing.T) { func testAccCheckResourceShareAccepterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ram_resource_share_accepter" { @@ -161,7 +164,7 @@ func testAccCheckResourceShareAccepterExists(ctx context.Context, name string) r return fmt.Errorf("RAM resource share invitation not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) input := &ram.GetResourceSharesInput{ ResourceShareArns: []*string{aws.String(rs.Primary.Attributes["share_arn"])}, diff --git a/internal/service/ram/resource_share_data_source.go b/internal/service/ram/resource_share_data_source.go index c8f463fb350..4a27f4164b1 100644 --- a/internal/service/ram/resource_share_data_source.go +++ b/internal/service/ram/resource_share_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ram import ( @@ -76,7 +79,7 @@ func DataSourceResourceShare() *schema.Resource { func dataSourceResourceShareRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RAMConn() + conn := meta.(*conns.AWSClient).RAMConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) diff --git a/internal/service/ram/resource_share_data_source_test.go b/internal/service/ram/resource_share_data_source_test.go index bba63d9469d..d4f922f86aa 100644 --- a/internal/service/ram/resource_share_data_source_test.go +++ b/internal/service/ram/resource_share_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram_test import ( diff --git a/internal/service/ram/resource_share_test.go b/internal/service/ram/resource_share_test.go index 97f33e3a136..bf5af681f82 100644 --- a/internal/service/ram/resource_share_test.go +++ b/internal/service/ram/resource_share_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ram_test import ( @@ -223,7 +226,7 @@ func TestAccRAMResourceShare_disappears(t *testing.T) { func testAccCheckResourceShareExists(ctx context.Context, resourceName string, v *ram.ResourceShare) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -252,7 +255,7 @@ func testAccCheckResourceShareExists(ctx context.Context, resourceName string, v func testAccCheckResourceShareDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RAMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ram_resource_share" { diff --git a/internal/service/ram/service_package_gen.go b/internal/service/ram/service_package_gen.go index 61743de5552..5d39c6bbca6 100644 --- a/internal/service/ram/service_package_gen.go +++ b/internal/service/ram/service_package_gen.go @@ -5,6 +5,10 @@ package ram import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ram_sdkv1 "github.com/aws/aws-sdk-go/service/ram" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -57,4 +61,13 @@ func (p *servicePackage) ServicePackageName() string { return names.RAM } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ram_sdkv1.RAM, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ram_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ram/status.go b/internal/service/ram/status.go index f749123888a..0d7aaabc348 100644 --- a/internal/service/ram/status.go +++ b/internal/service/ram/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram import ( diff --git a/internal/service/ram/sweep.go b/internal/service/ram/sweep.go index ecb44cc1c8c..be3ca7decf2 100644 --- a/internal/service/ram/sweep.go +++ b/internal/service/ram/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ram" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,11 +25,11 @@ func init() { func sweepResourceShares(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RAMConn() + conn := client.RAMConn(ctx) input := &ram.GetResourceSharesInput{ ResourceOwner: aws.String(ram.ResourceOwnerSelf), } @@ -62,7 +64,7 @@ func sweepResourceShares(region string) error { return fmt.Errorf("error listing RAM Resource Shares (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if 
err != nil { return fmt.Errorf("error sweeping RAM Resource Shares (%s): %w", region, err) diff --git a/internal/service/ram/tags_gen.go b/internal/service/ram/tags_gen.go index 703bd979db7..a451ed69fd6 100644 --- a/internal/service/ram/tags_gen.go +++ b/internal/service/ram/tags_gen.go @@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*ram.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns ram service tags from Context. +// getTagsIn returns ram service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*ram.Tag { +func getTagsIn(ctx context.Context) []*ram.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*ram.Tag { return nil } -// SetTagsOut sets ram service tags in Context. -func SetTagsOut(ctx context.Context, tags []*ram.Tag) { +// setTagsOut sets ram service tags in Context. +func setTagsOut(ctx context.Context, tags []*ram.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ram service tags. +// updateTags updates ram service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ramiface.RAMAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ramiface.RAMAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn ramiface.RAMAPI, identifier string, ol // UpdateTags updates ram service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RAMConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RAMConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ram/wait.go b/internal/service/ram/wait.go index aa15c0dd221..a767d8443f4 100644 --- a/internal/service/ram/wait.go +++ b/internal/service/ram/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ram import ( diff --git a/internal/service/rbin/generate.go b/internal/service/rbin/generate.go index 8e3e132243b..e85092be031 100644 --- a/internal/service/rbin/generate.go +++ b/internal/service/rbin/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsOp=ListTagsForResource -ListTagsInIDElem=ResourceArn -ServiceTagsSlice -TagOp=TagResource -TagInIDElem=ResourceArn -UntagOp=UntagResource -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package rbin diff --git a/internal/service/rbin/rule.go b/internal/service/rbin/rule.go index e155884bdfb..0af0eb3b987 100644 --- a/internal/service/rbin/rule.go +++ b/internal/service/rbin/rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rbin import ( @@ -156,12 +159,12 @@ const ( ) func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RBinClient() + conn := meta.(*conns.AWSClient).RBinClient(ctx) in := &rbin.CreateRuleInput{ ResourceType: types.ResourceType(d.Get("resource_type").(string)), RetentionPeriod: expandRetentionPeriod(d.Get("retention_period").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if _, ok := d.GetOk("description"); ok { @@ -191,7 +194,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RBinClient() + conn := meta.(*conns.AWSClient).RBinClient(ctx) out, err := findRuleByID(ctx, conn, d.Id()) @@ -230,7 +233,7 @@ func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RBinClient() + conn := meta.(*conns.AWSClient).RBinClient(ctx) update := false @@ -244,7 +247,7 @@ func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interf } if d.HasChanges("resource_tags") { - in.ResourceTags = expandResourceTags(d.Get("resource_tags").([]interface{})) + in.ResourceTags = expandResourceTags(d.Get("resource_tags").(*schema.Set).List()) update = true } @@ -273,7 +276,7 @@ func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { log.Printf("[INFO] Deleting RBin Rule %s", d.Id()) - conn := meta.(*conns.AWSClient).RBinClient() + conn := meta.(*conns.AWSClient).RBinClient(ctx) _, err := conn.DeleteRule(ctx, &rbin.DeleteRuleInput{ Identifier: aws.String(d.Id()), diff 
--git a/internal/service/rbin/rule_test.go b/internal/service/rbin/rule_test.go index 290e5a6229f..aede9a00a08 100644 --- a/internal/service/rbin/rule_test.go +++ b/internal/service/rbin/rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rbin_test import ( @@ -32,12 +35,12 @@ func TestAccRBinRule_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, rbin.ServiceID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckRuleDestroy, + CheckDestroy: testAccCheckRuleDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccRuleConfig_basic(description, resourceType), + Config: testAccRuleConfig_basic1(description, resourceType), Check: resource.ComposeTestCheckFunc( - testAccCheckRuleExists(resourceName, &rule), + testAccCheckRuleExists(ctx, resourceName, &rule), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "resource_type", resourceType), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "retention_period.*", map[string]string{ @@ -45,8 +48,8 @@ func TestAccRBinRule_basic(t *testing.T) { "retention_period_unit": "DAYS", }), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "resource_tags.*", map[string]string{ - "resource_tag_key": "some_tag", - "resource_tag_value": "", + "resource_tag_key": "some_tag1", + "resource_tag_value": "some_value1", }), ), }, @@ -55,6 +58,26 @@ func TestAccRBinRule_basic(t *testing.T) { ImportState: true, ImportStateVerify: true, }, + { + Config: testAccRuleConfig_basic2(description, resourceType), + Check: resource.ComposeTestCheckFunc( + testAccCheckRuleExists(ctx, resourceName, &rule), + resource.TestCheckResourceAttr(resourceName, "description", description), + resource.TestCheckResourceAttr(resourceName, "resource_type", resourceType), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "retention_period.*", map[string]string{ + 
"retention_period_value": "10", + "retention_period_unit": "DAYS", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "resource_tags.*", map[string]string{ + "resource_tag_key": "some_tag3", + "resource_tag_value": "some_value3", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "resource_tags.*", map[string]string{ + "resource_tag_key": "some_tag4", + "resource_tag_value": "some_value4", + }), + ), + }, }, }) } @@ -73,12 +96,12 @@ func TestAccRBinRule_disappears(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, rbin.ServiceID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckRuleDestroy, + CheckDestroy: testAccCheckRuleDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccRuleConfig_basic(description, resourceType), + Config: testAccRuleConfig_basic1(description, resourceType), Check: resource.ComposeTestCheckFunc( - testAccCheckRuleExists(resourceName, &rbinrule), + testAccCheckRuleExists(ctx, resourceName, &rbinrule), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfrbin.ResourceRule(), resourceName), ), ExpectNonEmptyPlan: true, @@ -100,12 +123,12 @@ func TestAccRBinRule_tags(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.RBin), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckRuleDestroy, + CheckDestroy: testAccCheckRuleDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccRuleConfigTags1(resourceType, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckRuleExists(resourceName, &rule), + testAccCheckRuleExists(ctx, resourceName, &rule), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -118,7 +141,7 @@ func TestAccRBinRule_tags(t *testing.T) { { Config: testAccRuleConfigTags2(resourceType, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckRuleExists(resourceName, &rule), + 
testAccCheckRuleExists(ctx, resourceName, &rule), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -127,7 +150,7 @@ func TestAccRBinRule_tags(t *testing.T) { { Config: testAccRuleConfigTags1(resourceType, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckRuleExists(resourceName, &rule), + testAccCheckRuleExists(ctx, resourceName, &rule), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), @@ -149,12 +172,12 @@ func TestAccRBinRule_lock_config(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, rbin.ServiceID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckRuleDestroy, + CheckDestroy: testAccCheckRuleDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccRuleConfig_lockConfig(resourceType, "DAYS", "7"), Check: resource.ComposeTestCheckFunc( - testAccCheckRuleExists(resourceName, &rule), + testAccCheckRuleExists(ctx, resourceName, &rule), resource.TestCheckResourceAttr(resourceName, "lock_configuration.#", "1"), resource.TestCheckResourceAttr(resourceName, "lock_configuration.0.unlock_delay.#", "1"), resource.TestCheckResourceAttr(resourceName, "lock_configuration.0.unlock_delay.0.unlock_delay_unit", "DAYS"), @@ -165,33 +188,34 @@ func TestAccRBinRule_lock_config(t *testing.T) { }) } -func testAccCheckRuleDestroy(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RBinClient() - ctx := context.Background() +func testAccCheckRuleDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).RBinClient(ctx) - for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_rbin_rule" { - continue - } + for _, rs := range s.RootModule().Resources { + 
if rs.Type != "aws_rbin_rule" { + continue + } - _, err := conn.GetRule(ctx, &rbin.GetRuleInput{ - Identifier: aws.String(rs.Primary.ID), - }) - if err != nil { - var nfe *types.ResourceNotFoundException - if errors.As(err, &nfe) { - return nil + _, err := conn.GetRule(ctx, &rbin.GetRuleInput{ + Identifier: aws.String(rs.Primary.ID), + }) + if err != nil { + var nfe *types.ResourceNotFoundException + if errors.As(err, &nfe) { + return nil + } + return err } - return err + + return create.Error(names.RBin, create.ErrActionCheckingDestroyed, tfrbin.ResNameRule, rs.Primary.ID, errors.New("not destroyed")) } - return create.Error(names.RBin, create.ErrActionCheckingDestroyed, tfrbin.ResNameRule, rs.Primary.ID, errors.New("not destroyed")) + return nil } - - return nil } -func testAccCheckRuleExists(name string, rbinrule *rbin.GetRuleOutput) resource.TestCheckFunc { +func testAccCheckRuleExists(ctx context.Context, name string, rbinrule *rbin.GetRuleOutput) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] if !ok { @@ -202,8 +226,8 @@ func testAccCheckRuleExists(name string, rbinrule *rbin.GetRuleOutput) resource. return create.Error(names.RBin, create.ErrActionCheckingExistence, tfrbin.ResNameRule, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).RBinClient() - ctx := context.Background() + conn := acctest.Provider.Meta().(*conns.AWSClient).RBinClient(ctx) + resp, err := conn.GetRule(ctx, &rbin.GetRuleInput{ Identifier: aws.String(rs.Primary.ID), }) @@ -218,15 +242,39 @@ func testAccCheckRuleExists(name string, rbinrule *rbin.GetRuleOutput) resource. 
} } -func testAccRuleConfig_basic(description, resourceType string) string { +func testAccRuleConfig_basic1(description, resourceType string) string { return fmt.Sprintf(` resource "aws_rbin_rule" "test" { description = %[1]q resource_type = %[2]q resource_tags { - resource_tag_key = "some_tag" - resource_tag_value = "" + resource_tag_key = "some_tag1" + resource_tag_value = "some_value1" + } + + retention_period { + retention_period_value = 10 + retention_period_unit = "DAYS" + } +} +`, description, resourceType) +} + +func testAccRuleConfig_basic2(description, resourceType string) string { + return fmt.Sprintf(` +resource "aws_rbin_rule" "test" { + description = %[1]q + resource_type = %[2]q + + resource_tags { + resource_tag_key = "some_tag3" + resource_tag_value = "some_value3" + } + + resource_tags { + resource_tag_key = "some_tag4" + resource_tag_value = "some_value4" } retention_period { diff --git a/internal/service/rbin/service_package_gen.go b/internal/service/rbin/service_package_gen.go index d500400b0c8..af10f9c6dff 100644 --- a/internal/service/rbin/service_package_gen.go +++ b/internal/service/rbin/service_package_gen.go @@ -5,6 +5,9 @@ package rbin import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + rbin_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rbin" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +43,17 @@ func (p *servicePackage) ServicePackageName() string { return names.RBin } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*rbin_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return rbin_sdkv2.NewFromConfig(cfg, func(o *rbin_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = rbin_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/rbin/tags_gen.go b/internal/service/rbin/tags_gen.go index 7f727746d3c..c239fc4ddab 100644 --- a/internal/service/rbin/tags_gen.go +++ b/internal/service/rbin/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists rbin service tags. +// listTags lists rbin service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *rbin.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *rbin.Client, identifier string) (tftags.KeyValueTags, error) { input := &rbin.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *rbin.Client, identifier string) (tftags // ListTags lists rbin service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).RBinClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).RBinClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns rbin service tags from Context. +// getTagsIn returns rbin service tags from Context. 
// nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets rbin service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets rbin service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates rbin service tags. +// updateTags updates rbin service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *rbin.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *rbin.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *rbin.Client, identifier string, oldTa // UpdateTags updates rbin service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RBinClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RBinClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/rds/blue_green.go b/internal/service/rds/blue_green.go index 393971e1e81..1ca80d93e93 100644 --- a/internal/service/rds/blue_green.go +++ b/internal/service/rds/blue_green.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/certificate_data_source.go b/internal/service/rds/certificate_data_source.go index 0a3cb0c0897..6181d80705f 100644 --- a/internal/service/rds/certificate_data_source.go +++ b/internal/service/rds/certificate_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -61,7 +64,7 @@ func DataSourceCertificate() *schema.Resource { func dataSourceCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeCertificatesInput{} diff --git a/internal/service/rds/certificate_data_source_test.go b/internal/service/rds/certificate_data_source_test.go index ef4d11f26a8..64b8240010a 100644 --- a/internal/service/rds/certificate_data_source_test.go +++ b/internal/service/rds/certificate_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -59,7 +62,7 @@ func TestAccRDSCertificateDataSource_latestValidTill(t *testing.T) { } func testAccCertificatePreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeCertificatesInput{} diff --git a/internal/service/rds/cluster.go b/internal/service/rds/cluster.go index a51d4234e5c..e031c3e8f77 100644 --- a/internal/service/rds/cluster.go +++ b/internal/service/rds/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -533,7 +536,7 @@ func ResourceCluster() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) // Some API calls (e.g. RestoreDBClusterFromSnapshot do not support all // parameters to correctly apply all settings in one pass. For missing @@ -562,7 +565,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int Engine: aws.String(d.Get("engine").(string)), EngineMode: aws.String(d.Get("engine_mode").(string)), SnapshotIdentifier: aws.String(v.(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("availability_zones"); ok && v.(*schema.Set).Len() > 0 { @@ -681,7 +684,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int S3Prefix: aws.String(tfMap["bucket_prefix"].(string)), SourceEngine: aws.String(tfMap["source_engine"].(string)), SourceEngineVersion: aws.String(tfMap["source_engine_version"].(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("availability_zones"); ok && v.(*schema.Set).Len() > 0 { @@ -791,7 +794,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int DBClusterIdentifier: aws.String(identifier), DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), SourceDBClusterIdentifier: aws.String(tfMap["source_cluster_identifier"].(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := tfMap["restore_to_time"].(string); ok && v != "" { @@ -903,7 +906,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), Engine: aws.String(d.Get("engine").(string)), EngineMode: aws.String(d.Get("engine_mode").(string)), - Tags: GetTagsIn(ctx), + Tags: 
getTagsIn(ctx), } if v, ok := d.GetOkExists("allocated_storage"); ok { @@ -1075,7 +1078,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbc, err := FindDBClusterByID(ctx, conn, d.Id()) @@ -1193,13 +1196,13 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter } } - SetTagsOut(ctx, dbc.TagList) + setTagsOut(ctx, dbc.TagList) return nil } func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChangesExcept( "allow_major_version_upgrade", @@ -1235,7 +1238,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int } if d.HasChange("db_cluster_instance_class") { - input.EngineVersion = aws.String(d.Get("db_cluster_instance_class").(string)) + input.DBClusterInstanceClass = aws.String(d.Get("db_cluster_instance_class").(string)) } if d.HasChange("db_cluster_parameter_group_name") { @@ -1432,7 +1435,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) // Automatically remove from global cluster to bypass this error on deletion: // InvalidDBClusterStateFault: This cluster is a part of a global cluster, please remove it from globalcluster first @@ -1674,6 +1677,7 @@ func waitDBClusterUpdated(ctx context.Context, conn *rds.RDS, id string, timeout ClusterStatusModifying, ClusterStatusRenaming, ClusterStatusResettingMasterCredentials, + ClusterStatusScalingCompute, 
ClusterStatusUpgrading, }, Target: []string{ClusterStatusAvailable}, diff --git a/internal/service/rds/cluster_activity_stream.go b/internal/service/rds/cluster_activity_stream.go index 7205e4ea0b3..c7eec0ceaa3 100644 --- a/internal/service/rds/cluster_activity_stream.go +++ b/internal/service/rds/cluster_activity_stream.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -60,7 +63,7 @@ func ResourceClusterActivityStream() *schema.Resource { } func resourceClusterActivityStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) arn := d.Get("resource_arn").(string) input := &rds.StartActivityStreamInput{ @@ -86,7 +89,7 @@ func resourceClusterActivityStreamCreate(ctx context.Context, d *schema.Resource } func resourceClusterActivityStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) output, err := FindDBClusterWithActivityStream(ctx, conn, d.Id()) @@ -109,7 +112,7 @@ func resourceClusterActivityStreamRead(ctx context.Context, d *schema.ResourceDa } func resourceClusterActivityStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS Cluster Activity Stream: %s", d.Id()) _, err := conn.StopActivityStreamWithContext(ctx, &rds.StopActivityStreamInput{ diff --git a/internal/service/rds/cluster_activity_stream_test.go b/internal/service/rds/cluster_activity_stream_test.go index 273ae1344f6..2fa5c5277b8 100644 --- a/internal/service/rds/cluster_activity_stream_test.go +++ b/internal/service/rds/cluster_activity_stream_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -87,7 +90,7 @@ func testAccCheckClusterActivityStreamExists(ctx context.Context, n string, v *r return fmt.Errorf("RDS Cluster Activity Stream ID is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBClusterWithActivityStream(ctx, conn, rs.Primary.ID) if err != nil { @@ -102,7 +105,7 @@ func testAccCheckClusterActivityStreamExists(ctx context.Context, n string, v *r func testAccCheckClusterActivityStreamDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_cluster_activity_stream" { diff --git a/internal/service/rds/cluster_data_source.go b/internal/service/rds/cluster_data_source.go index 8a7198ff1b4..fbd3588b38c 100644 --- a/internal/service/rds/cluster_data_source.go +++ b/internal/service/rds/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -165,7 +168,7 @@ func DataSourceCluster() *schema.Resource { func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig dbClusterID := d.Get("cluster_identifier").(string) diff --git a/internal/service/rds/cluster_data_source_test.go b/internal/service/rds/cluster_data_source_test.go index c696e3b84ea..0a084d5aa51 100644 --- a/internal/service/rds/cluster_data_source_test.go +++ b/internal/service/rds/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/cluster_endpoint.go b/internal/service/rds/cluster_endpoint.go index 7af21c0580d..a769d65e56a 100644 --- a/internal/service/rds/cluster_endpoint.go +++ b/internal/service/rds/cluster_endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -87,14 +90,14 @@ func resourceClusterEndpointCreate(ctx context.Context, d *schema.ResourceData, clusterEndpointCreateTimeout = 30 * time.Minute ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) endpointID := d.Get("cluster_endpoint_identifier").(string) input := &rds.CreateDBClusterEndpointInput{ DBClusterEndpointIdentifier: aws.String(endpointID), DBClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), EndpointType: aws.String(d.Get("custom_endpoint_type").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("excluded_members"); ok && v.(*schema.Set).Len() > 0 { @@ -121,7 +124,7 @@ func resourceClusterEndpointCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) clusterEp, err := FindDBClusterEndpointByID(ctx, conn, d.Id()) @@ -149,7 +152,7 @@ func resourceClusterEndpointRead(ctx context.Context, d *schema.ResourceData, me func resourceClusterEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &rds.ModifyDBClusterEndpointInput{ @@ -183,7 +186,7 @@ func resourceClusterEndpointUpdate(ctx context.Context, d 
*schema.ResourceData, func resourceClusterEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS Cluster Endpoint: %s", d.Id()) _, err := conn.DeleteDBClusterEndpointWithContext(ctx, &rds.DeleteDBClusterEndpointInput{ diff --git a/internal/service/rds/cluster_endpoint_test.go b/internal/service/rds/cluster_endpoint_test.go index 0c31675a769..1af2cd17bc9 100644 --- a/internal/service/rds/cluster_endpoint_test.go +++ b/internal/service/rds/cluster_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -149,7 +152,7 @@ func testAccCheckClusterEndpointAttributes(v *rds.DBClusterEndpoint) resource.Te func testAccCheckClusterEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_cluster_endpoint" { @@ -184,7 +187,7 @@ func testAccCheckClusterEndpointExists(ctx context.Context, n string, v *rds.DBC return fmt.Errorf("No RDS Cluster Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBClusterEndpointByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/cluster_instance.go b/internal/service/rds/cluster_instance.go index cf39162f3b1..5e6986e7440 100644 --- a/internal/service/rds/cluster_instance.go +++ b/internal/service/rds/cluster_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -225,7 +228,7 @@ func ResourceClusterInstance() *schema.Resource { func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) clusterID := d.Get("cluster_identifier").(string) var identifier string @@ -247,7 +250,7 @@ func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, Engine: aws.String(d.Get("engine").(string)), PromotionTier: aws.Int64(int64(d.Get("promotion_tier").(int))), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("availability_zone"); ok { @@ -344,7 +347,7 @@ func resourceClusterInstanceCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) db, err := findDBInstanceByIDSDKv1(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -413,13 +416,13 @@ func resourceClusterInstanceRead(ctx context.Context, d *schema.ResourceData, me clusterSetResourceDataEngineVersionFromClusterInstance(d, db) - SetTagsOut(ctx, db.TagList) + setTagsOut(ctx, db.TagList) return nil } func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &rds.ModifyDBInstanceInput{ @@ -503,7 +506,7 @@ func resourceClusterInstanceUpdate(ctx context.Context, d *schema.ResourceData, func resourceClusterInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics 
{ var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DeleteDBInstanceInput{ DBInstanceIdentifier: aws.String(d.Id()), diff --git a/internal/service/rds/cluster_instance_test.go b/internal/service/rds/cluster_instance_test.go index 107097dbb09..8b7301e0f4e 100644 --- a/internal/service/rds/cluster_instance_test.go +++ b/internal/service/rds/cluster_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -179,7 +182,7 @@ func TestAccRDSClusterInstance_isAlreadyBeingDeleted(t *testing.T) { { PreConfig: func() { // Get Database Instance into deleting state - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DeleteDBInstanceInput{ DBInstanceIdentifier: aws.String(rName), SkipFinalSnapshot: aws.Bool(true), @@ -951,7 +954,7 @@ func TestAccRDSClusterInstance_PerformanceInsightsKMSKeyIDAuroraPostgresql_defau } func testAccPerformanceInsightsDefaultVersionPreCheck(ctx context.Context, t *testing.T, engine string) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBEngineVersionsInput{ DefaultOnly: aws.Bool(true), @@ -971,7 +974,7 @@ func testAccPerformanceInsightsDefaultVersionPreCheck(ctx context.Context, t *te } func testAccPerformanceInsightsPreCheck(ctx context.Context, t *testing.T, engine string, engineVersion string) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeOrderableDBInstanceOptionsInput{ Engine: aws.String(engine), @@ -1017,7 +1020,7 @@ func testAccCheckClusterInstanceExists(ctx context.Context, n string, v *rds.DBI return fmt.Errorf("No RDS Cluster Instance ID is set") } - conn := 
acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBInstanceByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -1032,7 +1035,7 @@ func testAccCheckClusterInstanceExists(ctx context.Context, n string, v *rds.DBI func testAccCheckClusterInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_cluster_instance" { diff --git a/internal/service/rds/cluster_parameter_group.go b/internal/service/rds/cluster_parameter_group.go index 54f60de7ed4..20f886b6cc0 100644 --- a/internal/service/rds/cluster_parameter_group.go +++ b/internal/service/rds/cluster_parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -102,14 +105,14 @@ func ResourceClusterParameterGroup() *schema.Resource { func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) groupName := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &rds.CreateDBClusterParameterGroupInput{ DBClusterParameterGroupName: aws.String(groupName), DBParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateDBClusterParameterGroupWithContext(ctx, input) @@ -127,7 +130,7 @@ func resourceClusterParameterGroupCreate(ctx context.Context, d *schema.Resource func resourceClusterParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbClusterParameterGroup, err := FindDBClusterParameterGroupByName(ctx, conn, d.Id()) @@ -218,7 +221,7 @@ func resourceClusterParameterGroupUpdate(ctx context.Context, d *schema.Resource maxParamModifyChunk = 20 ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -282,7 +285,7 @@ func resourceClusterParameterGroupUpdate(ctx context.Context, d *schema.Resource func resourceClusterParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSClient() + conn := meta.(*conns.AWSClient).RDSClient(ctx) input := &rds_sdkv2.DeleteDBClusterParameterGroupInput{ DBClusterParameterGroupName: aws.String(d.Id()), diff --git a/internal/service/rds/cluster_parameter_group_test.go b/internal/service/rds/cluster_parameter_group_test.go index 19af909020f..70e0fa04a4b 100644 --- a/internal/service/rds/cluster_parameter_group_test.go +++ b/internal/service/rds/cluster_parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -511,7 +514,7 @@ func TestAccRDSClusterParameterGroup_dynamicDiffs(t *testing.T) { func testAccCheckClusterParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_cluster_parameter_group" { @@ -546,7 +549,7 @@ func testAccCheckClusterParameterNotUserDefined(ctx context.Context, n, paramNam return fmt.Errorf("No DB Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) opts := rds.DescribeDBClusterParametersInput{ DBClusterParameterGroupName: aws.String(rs.Primary.ID), @@ -593,7 +596,7 @@ func testAccCheckClusterParameterGroupExists(ctx context.Context, n string, v *r return errors.New("No RDS DB Cluster Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBClusterParameterGroupByName(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/cluster_role_association.go b/internal/service/rds/cluster_role_association.go index 7b72f05c6c7..2293b3c61fe 100644 --- a/internal/service/rds/cluster_role_association.go +++ b/internal/service/rds/cluster_role_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -56,7 +59,7 @@ func ResourceClusterRoleAssociation() *schema.Resource { func resourceClusterRoleAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbClusterID := d.Get("db_cluster_identifier").(string) roleARN := d.Get("role_arn").(string) @@ -97,7 +100,7 @@ func resourceClusterRoleAssociationCreate(ctx context.Context, d *schema.Resourc func resourceClusterRoleAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbClusterID, roleARN, err := ClusterRoleAssociationParseResourceID(d.Id()) if err != nil { @@ -125,7 +128,7 @@ func resourceClusterRoleAssociationRead(ctx context.Context, d *schema.ResourceD func resourceClusterRoleAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbClusterID, roleARN, err := ClusterRoleAssociationParseResourceID(d.Id()) if err != nil { diff --git a/internal/service/rds/cluster_role_association_test.go b/internal/service/rds/cluster_role_association_test.go index 14b45561916..1908a0a778c 100644 --- a/internal/service/rds/cluster_role_association_test.go +++ b/internal/service/rds/cluster_role_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -135,7 +138,7 @@ func testAccCheckClusterRoleAssociationExists(ctx context.Context, resourceName return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) role, err := tfrds.FindDBClusterRoleByDBClusterIDAndRoleARN(ctx, conn, dbClusterID, roleARN) if err != nil { @@ -150,7 +153,7 @@ func testAccCheckClusterRoleAssociationExists(ctx context.Context, resourceName func testAccCheckClusterRoleAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_cluster_role_association" { diff --git a/internal/service/rds/cluster_snapshot.go b/internal/service/rds/cluster_snapshot.go index 271c7056433..3459e9d1195 100644 --- a/internal/service/rds/cluster_snapshot.go +++ b/internal/service/rds/cluster_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -121,13 +124,13 @@ func ResourceClusterSnapshot() *schema.Resource { func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) id := d.Get("db_cluster_snapshot_identifier").(string) input := &rds.CreateDBClusterSnapshotInput{ DBClusterIdentifier: aws.String(d.Get("db_cluster_identifier").(string)), DBClusterSnapshotIdentifier: aws.String(id), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, clusterSnapshotCreateTimeout, func() (interface{}, error) { @@ -149,7 +152,7 @@ func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) snapshot, err := FindDBClusterSnapshotByID(ctx, conn, d.Id()) @@ -179,7 +182,7 @@ func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, me d.Set("storage_encrypted", snapshot.StorageEncrypted) d.Set("vpc_id", snapshot.VpcId) - SetTagsOut(ctx, snapshot.TagList) + setTagsOut(ctx, snapshot.TagList) return diags } @@ -194,7 +197,7 @@ func resourceClusterSnapshotUpdate(ctx context.Context, d *schema.ResourceData, func resourceClusterSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS DB Cluster Snapshot: %s", d.Id()) _, err := conn.DeleteDBClusterSnapshotWithContext(ctx, &rds.DeleteDBClusterSnapshotInput{ diff --git a/internal/service/rds/cluster_snapshot_data_source.go 
b/internal/service/rds/cluster_snapshot_data_source.go index d3d210a2643..daf96f42a54 100644 --- a/internal/service/rds/cluster_snapshot_data_source.go +++ b/internal/service/rds/cluster_snapshot_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -109,7 +112,7 @@ func DataSourceClusterSnapshot() *schema.Resource { func dataSourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig input := &rds.DescribeDBClusterSnapshotsInput{ diff --git a/internal/service/rds/cluster_snapshot_data_source_test.go b/internal/service/rds/cluster_snapshot_data_source_test.go index 4f40cec7877..5037834cfca 100644 --- a/internal/service/rds/cluster_snapshot_data_source_test.go +++ b/internal/service/rds/cluster_snapshot_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/cluster_snapshot_test.go b/internal/service/rds/cluster_snapshot_test.go index be5abd53955..a2d659a410f 100644 --- a/internal/service/rds/cluster_snapshot_test.go +++ b/internal/service/rds/cluster_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -112,7 +115,7 @@ func TestAccRDSClusterSnapshot_tags(t *testing.T) { func testAccCheckClusterSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_cluster_snapshot" { @@ -147,7 +150,7 @@ func testAccCheckClusterSnapshotExists(ctx context.Context, n string, v *rds.DBC return fmt.Errorf("No RDS Cluster Snapshot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBClusterSnapshotByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/rds/cluster_test.go b/internal/service/rds/cluster_test.go index cd5f3b1467d..1c019718450 100644 --- a/internal/service/rds/cluster_test.go +++ b/internal/service/rds/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -757,10 +760,17 @@ func TestAccRDSCluster_dbClusterInstanceClass(t *testing.T) { CheckDestroy: testAccCheckClusterDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccClusterConfig_dbClusterInstanceClass(rName), + Config: testAccClusterConfig_dbClusterInstanceClass(rName, "db.m5d.2xlarge"), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(ctx, resourceName, &dbCluster), + resource.TestCheckResourceAttr(resourceName, "db_cluster_instance_class", "db.m5d.2xlarge"), + ), + }, + { + Config: testAccClusterConfig_dbClusterInstanceClass(rName, "db.r6gd.4xlarge"), Check: resource.ComposeTestCheckFunc( testAccCheckClusterExists(ctx, resourceName, &dbCluster), - resource.TestCheckResourceAttr(resourceName, "db_cluster_instance_class", "db.r6gd.xlarge"), + resource.TestCheckResourceAttr(resourceName, "db_cluster_instance_class", "db.r6gd.4xlarge"), ), }, }, @@ -2530,7 +2540,7 @@ func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckClusterDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).RDSConn() + conn := provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_cluster" { @@ -2556,7 +2566,7 @@ func testAccCheckClusterDestroyWithProvider(ctx context.Context) acctest.TestChe func testAccCheckClusterDestroyWithFinalSnapshot(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_cluster" { @@ -2609,7 +2619,7 @@ func testAccCheckClusterExistsWithProvider(ctx context.Context, n string, v *rds return fmt.Errorf("No RDS Cluster 
ID is set") } - conn := providerF().Meta().(*conns.AWSClient).RDSConn() + conn := providerF().Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBClusterByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -2996,12 +3006,12 @@ resource "aws_rds_cluster" "test" { `, rName) } -func testAccClusterConfig_dbClusterInstanceClass(rName string) string { +func testAccClusterConfig_dbClusterInstanceClass(rName, instanceClass string) string { return fmt.Sprintf(` resource "aws_rds_cluster" "test" { apply_immediately = true cluster_identifier = %[1]q - db_cluster_instance_class = "db.r6gd.xlarge" + db_cluster_instance_class = %[2]q engine = "mysql" storage_type = "io1" allocated_storage = 100 @@ -3010,7 +3020,7 @@ resource "aws_rds_cluster" "test" { master_username = "test" skip_final_snapshot = true } -`, rName) +`, rName, instanceClass) } func testAccClusterConfig_backtrackWindow(backtrackWindow int) string { diff --git a/internal/service/rds/clusters_data_source.go b/internal/service/rds/clusters_data_source.go index 62952371a67..0d2bbc1000b 100644 --- a/internal/service/rds/clusters_data_source.go +++ b/internal/service/rds/clusters_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -39,7 +42,7 @@ const ( ) func dataSourceClustersRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBClustersInput{} diff --git a/internal/service/rds/clusters_data_source_test.go b/internal/service/rds/clusters_data_source_test.go index cdbb1a62730..ea7284beff2 100644 --- a/internal/service/rds/clusters_data_source_test.go +++ b/internal/service/rds/clusters_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/consts.go b/internal/service/rds/consts.go index 15e28f67126..a4783862d17 100644 --- a/internal/service/rds/consts.go +++ b/internal/service/rds/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -22,6 +25,7 @@ const ( ClusterStatusRebooting = "rebooting" ClusterStatusRenaming = "renaming" ClusterStatusResettingMasterCredentials = "resetting-master-credentials" + ClusterStatusScalingCompute = "scaling-compute" ClusterStatusUpgrading = "upgrading" ) diff --git a/internal/service/rds/consts_test.go b/internal/service/rds/consts_test.go index 531f791c3c9..5bdea9248fc 100644 --- a/internal/service/rds/consts_test.go +++ b/internal/service/rds/consts_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test const ( diff --git a/internal/service/rds/engine_version_data_source.go b/internal/service/rds/engine_version_data_source.go index b566f27499b..ed38d3f4b56 100644 --- a/internal/service/rds/engine_version_data_source.go +++ b/internal/service/rds/engine_version_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -142,7 +145,7 @@ func DataSourceEngineVersion() *schema.Resource { func dataSourceEngineVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBEngineVersionsInput{ ListSupportedCharacterSets: aws.Bool(true), diff --git a/internal/service/rds/engine_version_data_source_test.go b/internal/service/rds/engine_version_data_source_test.go index d6fd8a4b9aa..3abb67de763 100644 --- a/internal/service/rds/engine_version_data_source_test.go +++ b/internal/service/rds/engine_version_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -174,7 +177,7 @@ func TestAccRDSEngineVersionDataSource_filter(t *testing.T) { } func testAccEngineVersionPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBEngineVersionsInput{ Engine: aws.String("mysql"), diff --git a/internal/service/rds/engine_version_test.go b/internal/service/rds/engine_version_test.go index beed7faa38b..243187124b3 100644 --- a/internal/service/rds/engine_version_test.go +++ b/internal/service/rds/engine_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/errors.go b/internal/service/rds/errors.go index 1a82779324d..611d6837421 100644 --- a/internal/service/rds/errors.go +++ b/internal/service/rds/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds const ( diff --git a/internal/service/rds/event_categories_data_source.go b/internal/service/rds/event_categories_data_source.go index 86936911f2a..22df4fec20a 100644 --- a/internal/service/rds/event_categories_data_source.go +++ b/internal/service/rds/event_categories_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -34,7 +37,7 @@ func DataSourceEventCategories() *schema.Resource { func dataSourceEventCategoriesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeEventCategoriesInput{} diff --git a/internal/service/rds/event_categories_data_source_test.go b/internal/service/rds/event_categories_data_source_test.go index 04b56db5cda..8493f0e32a2 100644 --- a/internal/service/rds/event_categories_data_source_test.go +++ b/internal/service/rds/event_categories_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/event_subscription.go b/internal/service/rds/event_subscription.go index 09ce6dfccc1..ff5634bf8b6 100644 --- a/internal/service/rds/event_subscription.go +++ b/internal/service/rds/event_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -100,14 +103,14 @@ func ResourceEventSubscription() *schema.Resource { func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &rds.CreateEventSubscriptionInput{ Enabled: aws.Bool(d.Get("enabled").(bool)), SnsTopicArn: aws.String(d.Get("sns_topic").(string)), SubscriptionName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("event_categories"); ok && v.(*schema.Set).Len() > 0 { @@ -139,7 +142,7 @@ func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) sub, err := FindEventSubscriptionByID(ctx, conn, d.Id()) @@ -169,7 +172,7 @@ func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChangesExcept("tags", "tags_all", "source_ids") { input := &rds.ModifyEventSubscriptionInput{ @@ -237,7 +240,7 @@ func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS Event Subscription: (%s)", d.Id()) _, err := 
conn.DeleteEventSubscriptionWithContext(ctx, &rds.DeleteEventSubscriptionInput{ diff --git a/internal/service/rds/event_subscription_test.go b/internal/service/rds/event_subscription_test.go index e6d8b7f35ce..fe840e744ba 100644 --- a/internal/service/rds/event_subscription_test.go +++ b/internal/service/rds/event_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -309,7 +312,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, v *rds.E return fmt.Errorf("No RDS Event Subscription is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindEventSubscriptionByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -324,7 +327,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, v *rds.E func testAccCheckEventSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_event_subscription" { diff --git a/internal/service/rds/export_task.go b/internal/service/rds/export_task.go index bccdbe561d0..f1bfec73158 100644 --- a/internal/service/rds/export_task.go +++ b/internal/service/rds/export_task.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -20,8 +23,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types/basetypes" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -140,7 +143,7 @@ func (r *resourceExportTask) Schema(ctx context.Context, req resource.SchemaRequ } func (r *resourceExportTask) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { - conn := r.Meta().RDSClient() + conn := r.Meta().RDSClient(ctx) var plan resourceExportTaskData resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) @@ -194,7 +197,7 @@ func (r *resourceExportTask) Create(ctx context.Context, req resource.CreateRequ } func (r *resourceExportTask) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { - conn := r.Meta().RDSClient() + conn := r.Meta().RDSClient(ctx) var state resourceExportTaskData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) @@ -228,7 +231,7 @@ func (r *resourceExportTask) Update(ctx context.Context, req resource.UpdateRequ } func (r *resourceExportTask) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Meta().RDSClient() + conn := r.Meta().RDSClient(ctx) var state resourceExportTaskData resp.Diagnostics.Append(req.State.Get(ctx, &state)...) diff --git a/internal/service/rds/export_task_test.go b/internal/service/rds/export_task_test.go index 97daf136773..a1af47ab984 100644 --- a/internal/service/rds/export_task_test.go +++ b/internal/service/rds/export_task_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -98,7 +101,7 @@ func TestAccRDSExportTask_optional(t *testing.T) { func testAccCheckExportTaskDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_export_task" { @@ -133,7 +136,7 @@ func testAccCheckExportTaskExists(ctx context.Context, name string, exportTask * return create.Error(names.RDS, create.ErrActionCheckingExistence, tfrds.ResNameExportTask, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSClient(ctx) resp, err := tfrds.FindExportTaskByID(ctx, conn, rs.Primary.ID) if err != nil { return create.Error(names.RDS, create.ErrActionCheckingExistence, tfrds.ResNameExportTask, rs.Primary.ID, err) diff --git a/internal/service/rds/exports_test.go b/internal/service/rds/exports_test.go index e9fc914bd77..a4c0b177088 100644 --- a/internal/service/rds/exports_test.go +++ b/internal/service/rds/exports_test.go @@ -1,3 +1,11 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds -var FindDBInstanceByID = findDBInstanceByIDSDKv1 +// Exports for use in tests only. +var ( + FindDBInstanceByID = findDBInstanceByIDSDKv1 + + ListTags = listTags +) diff --git a/internal/service/rds/find.go b/internal/service/rds/find.go index 8f64cacafcc..69639fe0762 100644 --- a/internal/service/rds/find.go +++ b/internal/service/rds/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/flex.go b/internal/service/rds/flex.go index fd31bb18756..b450a29a8e9 100644 --- a/internal/service/rds/flex.go +++ b/internal/service/rds/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/flex_test.go b/internal/service/rds/flex_test.go index 5a3bed3dfce..24b8596c48d 100644 --- a/internal/service/rds/flex_test.go +++ b/internal/service/rds/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/generate.go b/internal/service/rds/generate.go index bff60f6a76c..887fb1d8ded 100644 --- a/internal/service/rds/generate.go +++ b/internal/service/rds/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceName -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceName -UntagOp=RemoveTagsFromResource -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package rds diff --git a/internal/service/rds/global_cluster.go b/internal/service/rds/global_cluster.go index 490571e8436..aee935baa0d 100644 --- a/internal/service/rds/global_cluster.go +++ b/internal/service/rds/global_cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -133,7 +136,7 @@ func ResourceGlobalCluster() *schema.Resource { func resourceGlobalClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.CreateGlobalClusterInput{ GlobalClusterIdentifier: aws.String(d.Get("global_cluster_identifier").(string)), @@ -186,7 +189,7 @@ func resourceGlobalClusterCreate(ctx context.Context, d *schema.ResourceData, me func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) globalCluster, err := DescribeGlobalCluster(ctx, conn, d.Id()) @@ -244,7 +247,7 @@ func resourceGlobalClusterRead(ctx context.Context, d *schema.ResourceData, meta func resourceGlobalClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.ModifyGlobalClusterInput{ DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), @@ -277,7 +280,7 @@ func resourceGlobalClusterUpdate(ctx context.Context, d *schema.ResourceData, me func resourceGlobalClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) deadline := tfresource.NewDeadline(d.Timeout(schema.TimeoutDelete)) if d.Get("force_destroy").(bool) { @@ -483,7 +486,7 @@ func globalClusterRefreshFunc(ctx context.Context, conn *rds.RDS, globalClusterI } if err != nil { - return nil, "", fmt.Errorf("error reading RDS Global Cluster (%s): %s", globalClusterID, err) + return nil, "", 
fmt.Errorf("reading RDS Global Cluster (%s): %s", globalClusterID, err) } if globalCluster == nil { @@ -580,7 +583,7 @@ func waitForGlobalClusterRemoval(ctx context.Context, conn *rds.RDS, dbClusterId } func globalClusterUpgradeMajorEngineVersion(ctx context.Context, meta interface{}, clusterID string, engineVersion string, timeout time.Duration) error { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.ModifyGlobalClusterInput{ GlobalClusterIdentifier: aws.String(clusterID), @@ -675,7 +678,7 @@ func ClusterIDRegionFromARN(arnID string) (string, string, error) { } func globalClusterUpgradeMinorEngineVersion(ctx context.Context, meta interface{}, clusterMembers *schema.Set, clusterID, engineVersion string, timeout time.Duration) error { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[INFO] Performing RDS Global Cluster (%s) minor version (%s) upgrade", clusterID, engineVersion) diff --git a/internal/service/rds/global_cluster_test.go b/internal/service/rds/global_cluster_test.go index b906750dc21..573cfa4771c 100644 --- a/internal/service/rds/global_cluster_test.go +++ b/internal/service/rds/global_cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -619,7 +622,7 @@ func testAccCheckGlobalClusterExists(ctx context.Context, resourceName string, g return fmt.Errorf("No RDS Global Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) cluster, err := tfrds.DescribeGlobalCluster(ctx, conn, rs.Primary.ID) if err != nil { @@ -642,7 +645,7 @@ func testAccCheckGlobalClusterExists(ctx context.Context, resourceName string, g func testAccCheckGlobalClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rds_global_cluster" { @@ -672,7 +675,7 @@ func testAccCheckGlobalClusterDestroy(ctx context.Context) resource.TestCheckFun func testAccCheckGlobalClusterDisappears(ctx context.Context, globalCluster *rds.GlobalCluster) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DeleteGlobalClusterInput{ GlobalClusterIdentifier: globalCluster.GlobalClusterIdentifier, @@ -708,7 +711,7 @@ func testAccCheckGlobalClusterRecreated(i, j *rds.GlobalCluster) resource.TestCh } func testAccPreCheckGlobalCluster(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeGlobalClustersInput{} diff --git a/internal/service/rds/id.go b/internal/service/rds/id.go index fff4d0cb6a9..7d6ce52c4a3 100644 --- a/internal/service/rds/id.go +++ b/internal/service/rds/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/instance.go b/internal/service/rds/instance.go index 7fe87401603..bb931a02b34 100644 --- a/internal/service/rds/instance.go +++ b/internal/service/rds/instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -629,7 +632,7 @@ func ResourceInstance() *schema.Resource { func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) // Some API calls (e.g. CreateDBInstanceReadReplica and // RestoreDBInstanceFromDBSnapshot do not support all parameters to @@ -662,7 +665,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), SourceDBInstanceIdentifier: aws.String(sourceDBInstanceID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if _, ok := d.GetOk("allocated_storage"); ok { @@ -873,7 +876,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in SourceEngine: aws.String(tfMap["source_engine"].(string)), SourceEngineVersion: aws.String(tfMap["source_engine_version"].(string)), StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("availability_zone"); ok { @@ -1011,7 +1014,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in DBSnapshotIdentifier: aws.String(v.(string)), DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } engine := strings.ToLower(d.Get("engine").(string)) @@ -1229,7 +1232,7 @@ func 
resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in DBInstanceClass: aws.String(d.Get("instance_class").(string)), DeletionProtection: aws.Bool(d.Get("deletion_protection").(bool)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TargetDBInstanceIdentifier: aws.String(identifier), } @@ -1397,7 +1400,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in MasterUsername: aws.String(d.Get("username").(string)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), StorageEncrypted: aws.Bool(d.Get("storage_encrypted").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("availability_zone"); ok { @@ -1563,7 +1566,7 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in var instance *rds.DBInstance var err error - if instance, err = waitDBInstanceAvailableSDKv1(ctx, conn, d.Get("identifier").(string), d.Timeout(schema.TimeoutCreate)); err != nil { + if instance, err = waitDBInstanceAvailableSDKv1(ctx, conn, identifier, d.Timeout(schema.TimeoutCreate)); err != nil { return sdkdiag.AppendErrorf(diags, "waiting for RDS DB Instance (%s) create: %s", identifier, err) } @@ -1602,8 +1605,9 @@ func resourceInstanceCreate(ctx context.Context, d *schema.ResourceData, meta in return append(diags, resourceInstanceRead(ctx, d, meta)...) 
} -func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSConn() +func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).RDSConn(ctx) v, err := findDBInstanceByIDSDKv1(ctx, conn, d.Id()) @@ -1726,13 +1730,14 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte dbSetResourceDataEngineVersionFromInstance(d, v) - SetTagsOut(ctx, v.TagList) + setTagsOut(ctx, v.TagList) return diags } -func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSClient() +func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).RDSClient(ctx) deadline := tfresource.NewDeadline(d.Timeout(schema.TimeoutUpdate)) // Separate request to promote a database. @@ -1912,14 +1917,14 @@ func resourceInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta in } cleaupWaiters = append(cleaupWaiters, func(optFns ...tfresource.OptionsFunc) { - _, err = waitDBInstanceDeleted(ctx, meta.(*conns.AWSClient).RDSConn(), sourceARN.Identifier, deadline.Remaining(), optFns...) + _, err = waitDBInstanceDeleted(ctx, meta.(*conns.AWSClient).RDSConn(ctx), sourceARN.Identifier, deadline.Remaining(), optFns...) 
if err != nil { diags = sdkdiag.AppendErrorf(diags, "updating RDS DB Instance (%s): deleting Blue/Green Deployment source: waiting for completion: %s", d.Get("identifier").(string), err) } }) if diags.HasError() { - return + return diags } } else { oldID := d.Get("identifier").(string) @@ -2202,8 +2207,9 @@ func dbInstanceModify(ctx context.Context, conn *rds_sdkv2.Client, resourceID st return nil } -func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSConn() +func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DeleteDBInstanceInput{ DBInstanceIdentifier: aws.String(d.Get("identifier").(string)), @@ -2370,17 +2376,16 @@ func findDBInstanceByIDSDKv1(ctx context.Context, conn *rds.RDS, id string) (*rd LastRequest: input, } } + if err != nil { return nil, err } - if output == nil || len(output.DBInstances) == 0 || output.DBInstances[0] == nil { + if output == nil { return nil, tfresource.NewEmptyResultError(input) } - dbInstance := output.DBInstances[0] - - return dbInstance, nil + return tfresource.AssertSinglePtrResult(output.DBInstances) } // findDBInstanceByIDSDKv2 in general should be called with a DbiResourceId of the form @@ -2416,15 +2421,16 @@ func findDBInstanceByIDSDKv2(ctx context.Context, conn *rds_sdkv2.Client, id str LastRequest: input, } } + if err != nil { return nil, err } - if output == nil || len(output.DBInstances) == 0 { + if output == nil { return nil, tfresource.NewEmptyResultError(input) } - return &output.DBInstances[0], nil + return tfresource.AssertSingleValueResult(output.DBInstances) } func statusDBInstanceSDKv1(ctx context.Context, conn *rds.RDS, id string) retry.StateRefreshFunc { diff --git a/internal/service/rds/instance_automated_backups_replication.go 
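The `findDBInstanceByIDSDKv1`/`findDBInstanceByIDSDKv2` hunks above replace hand-rolled `output.DBInstances[0]` indexing with shared helpers (`tfresource.AssertSinglePtrResult`, `tfresource.AssertSingleValueResult`) that fail when the API returns zero or multiple matches. A minimal generic sketch of what such a helper enforces; the function and error names here are illustrative, not the provider's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errEmptyResult    = errors.New("empty result")
	errMultipleResult = errors.New("multiple results")
)

// assertSingleResult returns a pointer to the sole element of results, or an
// error when the lookup matched zero or more than one item. Centralizing this
// check removes repeated nil/length guards at every find-by-ID call site.
func assertSingleResult[T any](results []T) (*T, error) {
	switch len(results) {
	case 0:
		return nil, errEmptyResult
	case 1:
		return &results[0], nil
	default:
		return nil, errMultipleResult
	}
}

func main() {
	v, err := assertSingleResult([]string{"db-1"})
	fmt.Println(*v, err)
}
```

The design choice is that an ID lookup returning several records is a bug worth surfacing, not something to silently resolve by taking the first element.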
b/internal/service/rds/instance_automated_backups_replication.go index 5bf78f21f54..11a08290709 100644 --- a/internal/service/rds/instance_automated_backups_replication.go +++ b/internal/service/rds/instance_automated_backups_replication.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -69,7 +72,7 @@ func ResourceInstanceAutomatedBackupsReplication() *schema.Resource { func resourceInstanceAutomatedBackupsReplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.StartDBInstanceAutomatedBackupsReplicationInput{ BackupRetentionPeriod: aws.Int64(int64(d.Get("retention_period").(int))), @@ -101,7 +104,7 @@ func resourceInstanceAutomatedBackupsReplicationCreate(ctx context.Context, d *s func resourceInstanceAutomatedBackupsReplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) backup, err := FindDBInstanceAutomatedBackupByARN(ctx, conn, d.Id()) @@ -125,7 +128,7 @@ func resourceInstanceAutomatedBackupsReplicationRead(ctx context.Context, d *sch func resourceInstanceAutomatedBackupsReplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) backup, err := FindDBInstanceAutomatedBackupByARN(ctx, conn, d.Id()) diff --git a/internal/service/rds/instance_automated_backups_replication_test.go b/internal/service/rds/instance_automated_backups_replication_test.go index a06adcbb38b..7daafd3502a 100644 --- a/internal/service/rds/instance_automated_backups_replication_test.go +++ 
b/internal/service/rds/instance_automated_backups_replication_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -225,7 +228,7 @@ func testAccCheckInstanceAutomatedBackupsReplicationExist(ctx context.Context, n return fmt.Errorf("No RDS instance automated backups replication ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) _, err := tfrds.FindDBInstanceAutomatedBackupByARN(ctx, conn, rs.Primary.ID) @@ -235,7 +238,7 @@ func testAccCheckInstanceAutomatedBackupsReplicationExist(ctx context.Context, n func testAccCheckInstanceAutomatedBackupsReplicationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_instance_automated_backups_replication" { diff --git a/internal/service/rds/instance_data_source.go b/internal/service/rds/instance_data_source.go index 34e7723da98..4c3f4b1cb4c 100644 --- a/internal/service/rds/instance_data_source.go +++ b/internal/service/rds/instance_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -208,7 +211,7 @@ func DataSourceInstance() *schema.Resource { func dataSourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig v, err := findDBInstanceByIDSDKv1(ctx, conn, d.Get("db_instance_identifier").(string)) diff --git a/internal/service/rds/instance_data_source_test.go b/internal/service/rds/instance_data_source_test.go index c40f7060150..fb96da69cc6 100644 --- a/internal/service/rds/instance_data_source_test.go +++ b/internal/service/rds/instance_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/instance_migrate.go b/internal/service/rds/instance_migrate.go index d2551e34035..445ade596f4 100644 --- a/internal/service/rds/instance_migrate.go +++ b/internal/service/rds/instance_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/instance_migrate_test.go b/internal/service/rds/instance_migrate_test.go index 669448b980b..d17aa18348e 100644 --- a/internal/service/rds/instance_migrate_test.go +++ b/internal/service/rds/instance_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/instance_role_association.go b/internal/service/rds/instance_role_association.go index fc2679f42d6..16417f70551 100644 --- a/internal/service/rds/instance_role_association.go +++ b/internal/service/rds/instance_role_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -64,7 +67,7 @@ func ResourceInstanceRoleAssociation() *schema.Resource { func resourceInstanceRoleAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbInstanceIdentifier := d.Get("db_instance_identifier").(string) roleArn := d.Get("role_arn").(string) @@ -104,7 +107,7 @@ func resourceInstanceRoleAssociationCreate(ctx context.Context, d *schema.Resour func resourceInstanceRoleAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbInstanceIdentifier, roleArn, err := InstanceRoleAssociationDecodeID(d.Id()) if err != nil { @@ -132,7 +135,7 @@ func resourceInstanceRoleAssociationRead(ctx context.Context, d *schema.Resource func resourceInstanceRoleAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbInstanceIdentifier, roleArn, err := InstanceRoleAssociationDecodeID(d.Id()) if err != nil { diff --git a/internal/service/rds/instance_role_association_test.go b/internal/service/rds/instance_role_association_test.go index 4e3aaac5bee..95a5f41f24b 100644 --- a/internal/service/rds/instance_role_association_test.go +++ b/internal/service/rds/instance_role_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -96,7 +99,7 @@ func testAccCheckInstanceRoleAssociationExists(ctx context.Context, resourceName return fmt.Errorf("error reading resource ID: %s", err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) role, err := tfrds.DescribeDBInstanceRole(ctx, conn, dbInstanceIdentifier, roleArn) if tfresource.NotFound(err) { @@ -118,7 +121,7 @@ func testAccCheckInstanceRoleAssociationExists(ctx context.Context, resourceName func testAccCheckInstanceRoleAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_instance_role_association" { @@ -147,7 +150,7 @@ func testAccCheckInstanceRoleAssociationDestroy(ctx context.Context) resource.Te func testAccCheckInstanceRoleAssociationDisappears(ctx context.Context, dbInstance *rds.DBInstance, dbInstanceRole *rds.DBInstanceRole) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.RemoveRoleFromDBInstanceInput{ DBInstanceIdentifier: dbInstance.DBInstanceIdentifier, diff --git a/internal/service/rds/instance_test.go b/internal/service/rds/instance_test.go index 5c4b49ab0b3..7151c18cf88 100644 --- a/internal/service/rds/instance_test.go +++ b/internal/service/rds/instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -842,7 +845,7 @@ func TestAccRDSInstance_isAlreadyBeingDeleted(t *testing.T) { { PreConfig: func() { // Get Database Instance into deleting state - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DeleteDBInstanceInput{ DBInstanceIdentifier: aws.String(rName), SkipFinalSnapshot: aws.Bool(true), @@ -5396,7 +5399,7 @@ func TestAccRDSInstance_newIdentifier(t *testing.T) { func testAccCheckInstanceAutomatedBackupsDelete(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_instance" { @@ -5430,7 +5433,7 @@ func testAccCheckInstanceAutomatedBackupsDelete(ctx context.Context) resource.Te func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_instance" { @@ -5537,7 +5540,7 @@ func testAccCheckInstanceReplicaAttributes(source, replica *rds.DBInstance) reso // The snapshot is deleted. func testAccCheckInstanceDestroyWithFinalSnapshot(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_instance" { @@ -5589,7 +5592,7 @@ func testAccCheckInstanceDestroyWithFinalSnapshot(ctx context.Context) resource. 
// - No DBSnapshot has been produced func testAccCheckInstanceDestroyWithoutFinalSnapshot(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_instance" { @@ -5661,7 +5664,7 @@ func testAccCheckInstanceExists(ctx context.Context, n string, v *rds.DBInstance return fmt.Errorf("No RDS DB Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBInstanceByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/instances_data_source.go b/internal/service/rds/instances_data_source.go index be84332bb17..facd89ab2fa 100644 --- a/internal/service/rds/instances_data_source.go +++ b/internal/service/rds/instances_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -39,7 +42,7 @@ const ( ) func dataSourceInstancesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBInstancesInput{} diff --git a/internal/service/rds/instances_data_source_test.go b/internal/service/rds/instances_data_source_test.go index 456b73e94d9..cec0a8faad6 100644 --- a/internal/service/rds/instances_data_source_test.go +++ b/internal/service/rds/instances_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/option_group.go b/internal/service/rds/option_group.go index b940b731c4c..5ac5d2bbf98 100644 --- a/internal/service/rds/option_group.go +++ b/internal/service/rds/option_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -138,7 +141,7 @@ func ResourceOptionGroup() *schema.Resource { func resourceOptionGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) var groupName string if v, ok := d.GetOk("name"); ok { @@ -154,7 +157,7 @@ func resourceOptionGroupCreate(ctx context.Context, d *schema.ResourceData, meta MajorEngineVersion: aws.String(d.Get("major_engine_version").(string)), OptionGroupDescription: aws.String(d.Get("option_group_description").(string)), OptionGroupName: aws.String(groupName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Create DB Option Group: %#v", createOpts) @@ -174,7 +177,7 @@ func resourceOptionGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceOptionGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) params := &rds.DescribeOptionGroupsInput{ OptionGroupName: aws.String(d.Id()), @@ -231,7 +234,7 @@ func optionInList(optionName string, list []*string) bool { func resourceOptionGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChange("option") { o, n := d.GetChange("option") if o == nil { @@ -298,7 +301,7 @@ func resourceOptionGroupUpdate(ctx 
context.Context, d *schema.ResourceData, meta func resourceOptionGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) deleteOpts := &rds.DeleteOptionGroupInput{ OptionGroupName: aws.String(d.Id()), diff --git a/internal/service/rds/option_group_test.go b/internal/service/rds/option_group_test.go index 02676b9f0be..1d93e4acc0b 100644 --- a/internal/service/rds/option_group_test.go +++ b/internal/service/rds/option_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -571,7 +574,7 @@ func testAccCheckOptionGroupExists(ctx context.Context, n string, v *rds.OptionG return fmt.Errorf("No DB Option Group Name is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) opts := rds.DescribeOptionGroupsInput{ OptionGroupName: aws.String(rs.Primary.ID), @@ -595,7 +598,7 @@ func testAccCheckOptionGroupExists(ctx context.Context, n string, v *rds.OptionG func testAccCheckOptionGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_option_group" { diff --git a/internal/service/rds/orderable_instance_data_source.go b/internal/service/rds/orderable_instance_data_source.go index 81026f4e269..5cc8d8b28ca 100644 --- a/internal/service/rds/orderable_instance_data_source.go +++ b/internal/service/rds/orderable_instance_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -185,7 +188,7 @@ func DataSourceOrderableInstance() *schema.Resource { func dataSourceOrderableInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeOrderableDBInstanceOptionsInput{} diff --git a/internal/service/rds/orderable_instance_data_source_test.go b/internal/service/rds/orderable_instance_data_source_test.go index 787438857bd..fc4606ed1ae 100644 --- a/internal/service/rds/orderable_instance_data_source_test.go +++ b/internal/service/rds/orderable_instance_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -286,7 +289,7 @@ func TestAccRDSOrderableInstanceDataSource_supportsStorageEncryption(t *testing. } func testAccOrderableInstancePreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeOrderableDBInstanceOptionsInput{ Engine: aws.String("mysql"), diff --git a/internal/service/rds/parameter_group.go b/internal/service/rds/parameter_group.go index 8505b0e3ef5..227dae69358 100644 --- a/internal/service/rds/parameter_group.go +++ b/internal/service/rds/parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -103,14 +106,14 @@ func ResourceParameterGroup() *schema.Resource { func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &rds.CreateDBParameterGroupInput{ DBParameterGroupFamily: aws.String(d.Get("family").(string)), DBParameterGroupName: aws.String(name), Description: aws.String(d.Get("description").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateDBParameterGroupWithContext(ctx, input) @@ -128,7 +131,7 @@ func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, m func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbParameterGroup, err := FindDBParameterGroupByName(ctx, conn, d.Id()) @@ -228,7 +231,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m maxParamModifyChunk = 20 ) var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -311,7 +314,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).RDSClient() + conn := meta.(*conns.AWSClient).RDSClient(ctx) input := &rds_sdkv2.DeleteDBParameterGroupInput{ DBParameterGroupName: aws.String(d.Id()), } diff --git a/internal/service/rds/parameter_group_test.go b/internal/service/rds/parameter_group_test.go index 
892d552d9c0..7998ab6566e 100644 --- a/internal/service/rds/parameter_group_test.go +++ b/internal/service/rds/parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -1041,7 +1044,7 @@ func TestDBParameterModifyChunk(t *testing.T) { func testAccCheckParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_parameter_group" { @@ -1091,7 +1094,7 @@ func testAccCheckParameterGroupExists(ctx context.Context, n string, v *rds.DBPa return fmt.Errorf("No RDS DB Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBParameterGroupByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -1115,7 +1118,7 @@ func testAccCheckParameterNotUserDefined(ctx context.Context, rName, paramName s return fmt.Errorf("No DB Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) opts := rds.DescribeDBParametersInput{ DBParameterGroupName: aws.String(rs.Primary.ID), diff --git a/internal/service/rds/proxy.go b/internal/service/rds/proxy.go index adcc75551f8..9f299010467 100644 --- a/internal/service/rds/proxy.go +++ b/internal/service/rds/proxy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -137,14 +140,14 @@ func ResourceProxy() *schema.Resource { func resourceProxyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := rds.CreateDBProxyInput{ Auth: expandProxyAuth(d.Get("auth").([]interface{})), DBProxyName: aws.String(d.Get("name").(string)), EngineFamily: aws.String(d.Get("engine_family").(string)), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpcSubnetIds: flex.ExpandStringSet(d.Get("vpc_subnet_ids").(*schema.Set)), } @@ -180,7 +183,7 @@ func resourceProxyCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceProxyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbProxy, err := FindDBProxyByName(ctx, conn, d.Id()) @@ -211,7 +214,7 @@ func resourceProxyRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceProxyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { oName, nName := d.GetChange("name") @@ -251,7 +254,7 @@ func resourceProxyUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceProxyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS DB Proxy: %s", d.Id()) _, err := conn.DeleteDBProxyWithContext(ctx, &rds.DeleteDBProxyInput{ diff --git 
a/internal/service/rds/proxy_data_source.go b/internal/service/rds/proxy_data_source.go index 8d55e461a95..a0fe1652c46 100644 --- a/internal/service/rds/proxy_data_source.go +++ b/internal/service/rds/proxy_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -99,7 +102,7 @@ func DataSourceProxy() *schema.Resource { func dataSourceProxyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) name := d.Get("name").(string) dbProxy, err := FindDBProxyByName(ctx, conn, name) diff --git a/internal/service/rds/proxy_data_source_test.go b/internal/service/rds/proxy_data_source_test.go index 01fa90f2db8..591e2f2b399 100644 --- a/internal/service/rds/proxy_data_source_test.go +++ b/internal/service/rds/proxy_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/proxy_default_target_group.go b/internal/service/rds/proxy_default_target_group.go index 5da1257d401..f2b16527410 100644 --- a/internal/service/rds/proxy_default_target_group.go +++ b/internal/service/rds/proxy_default_target_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -98,7 +101,7 @@ func ResourceProxyDefaultTargetGroup() *schema.Resource { func resourceProxyDefaultTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) tg, err := resourceProxyDefaultTargetGroupGet(ctx, conn, d.Id()) if err != nil { @@ -137,7 +140,7 @@ func resourceProxyDefaultTargetGroupUpdate(ctx context.Context, d *schema.Resour func resourceProxyDefaultTargetGroupCreateUpdate(ctx context.Context, d *schema.ResourceData, timeout string, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) params := rds.ModifyDBProxyTargetGroupInput{ DBProxyName: aws.String(d.Get("db_proxy_name").(string)), diff --git a/internal/service/rds/proxy_default_target_group_test.go b/internal/service/rds/proxy_default_target_group_test.go index 3df5bd5f007..1735959c3ed 100644 --- a/internal/service/rds/proxy_default_target_group_test.go +++ b/internal/service/rds/proxy_default_target_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -326,7 +329,7 @@ func TestAccRDSProxyDefaultTargetGroup_disappears(t *testing.T) { func testAccCheckProxyTargetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_proxy_default_target_group" { @@ -366,7 +369,7 @@ func testAccCheckProxyTargetGroupExists(ctx context.Context, n string, v *rds.DB return fmt.Errorf("No DB Proxy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) opts := rds.DescribeDBProxyTargetGroupsInput{ DBProxyName: aws.String(rs.Primary.ID), diff --git a/internal/service/rds/proxy_endpoint.go b/internal/service/rds/proxy_endpoint.go index adf0c181153..6ee7e25f6bd 100644 --- a/internal/service/rds/proxy_endpoint.go +++ b/internal/service/rds/proxy_endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -94,7 +97,7 @@ func ResourceProxyEndpoint() *schema.Resource { func resourceProxyEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbProxyName := d.Get("db_proxy_name").(string) dbProxyEndpointName := d.Get("db_proxy_endpoint_name").(string) @@ -103,7 +106,7 @@ func resourceProxyEndpointCreate(ctx context.Context, d *schema.ResourceData, me DBProxyEndpointName: aws.String(dbProxyEndpointName), TargetRole: aws.String(d.Get("target_role").(string)), VpcSubnetIds: flex.ExpandStringSet(d.Get("vpc_subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 { @@ -126,7 +129,7 @@ func resourceProxyEndpointCreate(ctx context.Context, d *schema.ResourceData, me func resourceProxyEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbProxyEndpoint, err := FindDBProxyEndpoint(ctx, conn, d.Id()) @@ -173,7 +176,7 @@ func resourceProxyEndpointRead(ctx context.Context, d *schema.ResourceData, meta func resourceProxyEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChange("vpc_security_group_ids") { params := rds.ModifyDBProxyEndpointInput{ @@ -196,7 +199,7 @@ func resourceProxyEndpointUpdate(ctx context.Context, d *schema.ResourceData, me func resourceProxyEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := 
meta.(*conns.AWSClient).RDSConn(ctx) params := rds.DeleteDBProxyEndpointInput{ DBProxyEndpointName: aws.String(d.Get("db_proxy_endpoint_name").(string)), diff --git a/internal/service/rds/proxy_endpoint_test.go b/internal/service/rds/proxy_endpoint_test.go index fcaa82cdf48..59e58200510 100644 --- a/internal/service/rds/proxy_endpoint_test.go +++ b/internal/service/rds/proxy_endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -239,7 +242,7 @@ func TestAccRDSProxyEndpoint_Disappears_proxy(t *testing.T) { // testAccDBProxyEndpointPreCheck checks if a call to describe db proxies errors out meaning feature not supported func testAccDBProxyEndpointPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) _, err := conn.DescribeDBProxyEndpointsWithContext(ctx, &rds.DescribeDBProxyEndpointsInput{}) @@ -254,7 +257,7 @@ func testAccDBProxyEndpointPreCheck(ctx context.Context, t *testing.T) { func testAccCheckProxyEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_proxy_endpoint" { @@ -291,7 +294,7 @@ func testAccCheckProxyEndpointExists(ctx context.Context, n string, v *rds.DBPro return fmt.Errorf("No DB Proxy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) dbProxyEndpoint, err := tfrds.FindDBProxyEndpoint(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/proxy_target.go b/internal/service/rds/proxy_target.go index 24a3d4e004c..cbd682650f9 100644 --- a/internal/service/rds/proxy_target.go +++ 
b/internal/service/rds/proxy_target.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -91,7 +94,7 @@ func ResourceProxyTarget() *schema.Resource { func resourceProxyTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbProxyName := d.Get("db_proxy_name").(string) targetGroupName := d.Get("target_group_name").(string) @@ -134,7 +137,7 @@ func ProxyTargetParseID(id string) (string, string, string, string, error) { func resourceProxyTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbProxyName, targetGroupName, targetType, rdsResourceId, err := ProxyTargetParseID(d.Id()) if err != nil { @@ -185,7 +188,7 @@ func resourceProxyTargetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceProxyTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) params := rds.DeregisterDBProxyTargetsInput{ DBProxyName: aws.String(d.Get("db_proxy_name").(string)), diff --git a/internal/service/rds/proxy_target_test.go b/internal/service/rds/proxy_target_test.go index 64ecaab07a4..4267e879fbe 100644 --- a/internal/service/rds/proxy_target_test.go +++ b/internal/service/rds/proxy_target_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -119,7 +122,7 @@ func TestAccRDSProxyTarget_disappears(t *testing.T) { func testAccCheckProxyTargetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_proxy_target" { @@ -169,7 +172,7 @@ func testAccCheckProxyTargetExists(ctx context.Context, n string, v *rds.DBProxy return fmt.Errorf("No DB Proxy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) dbProxyName, targetGroupName, targetType, rdsResourceId, err := tfrds.ProxyTargetParseID(rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/proxy_test.go b/internal/service/rds/proxy_test.go index fed5cb7575e..8d7e80679c6 100644 --- a/internal/service/rds/proxy_test.go +++ b/internal/service/rds/proxy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -519,7 +522,7 @@ func TestAccRDSProxy_disappears(t *testing.T) { // testAccDBProxyPreCheck checks if a call to describe db proxies errors out meaning feature not supported func testAccDBProxyPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBProxiesInput{} _, err := conn.DescribeDBProxiesWithContext(ctx, input) @@ -535,7 +538,7 @@ func testAccDBProxyPreCheck(ctx context.Context, t *testing.T) { func testAccCheckProxyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_proxy" { @@ -570,7 +573,7 @@ func testAccCheckProxyExists(ctx context.Context, n string, v *rds.DBProxy) reso return fmt.Errorf("No RDS DB Proxy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBProxyByName(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/reserved_instance.go b/internal/service/rds/reserved_instance.go index 2892c419f91..602a141e8c7 100644 --- a/internal/service/rds/reserved_instance.go +++ b/internal/service/rds/reserved_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -129,11 +132,11 @@ func ResourceReservedInstance() *schema.Resource { } func resourceReservedInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.PurchaseReservedDBInstancesOfferingInput{ ReservedDBInstancesOfferingId: aws.String(d.Get("offering_id").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.Get("instance_count").(int); ok && v > 0 { @@ -159,7 +162,7 @@ func resourceReservedInstanceCreate(ctx context.Context, d *schema.ResourceData, } func resourceReservedInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) reservation, err := FindReservedDBInstanceByID(ctx, conn, d.Id()) diff --git a/internal/service/rds/reserved_instance_offering_data_source.go b/internal/service/rds/reserved_instance_offering_data_source.go index 0cc292f7438..8d78380f774 100644 --- a/internal/service/rds/reserved_instance_offering_data_source.go +++ b/internal/service/rds/reserved_instance_offering_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -65,7 +68,7 @@ func DataSourceReservedOffering() *schema.Resource { } func dataSourceReservedOfferingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeReservedDBInstancesOfferingsInput{ DBInstanceClass: aws.String(d.Get("db_instance_class").(string)), diff --git a/internal/service/rds/reserved_instance_offering_data_source_test.go b/internal/service/rds/reserved_instance_offering_data_source_test.go index e37f156450b..aeb578501d2 100644 --- a/internal/service/rds/reserved_instance_offering_data_source_test.go +++ b/internal/service/rds/reserved_instance_offering_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/reserved_instance_test.go b/internal/service/rds/reserved_instance_test.go index ced21dba241..454b06e5198 100644 --- a/internal/service/rds/reserved_instance_test.go +++ b/internal/service/rds/reserved_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -64,7 +67,7 @@ func TestAccRDSReservedInstance_basic(t *testing.T) { func testAccReservedInstanceExists(ctx context.Context, n string, reservation *rds.ReservedDBInstance) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { diff --git a/internal/service/rds/service_package_gen.go b/internal/service/rds/service_package_gen.go index b1972b05fef..09a27cab8fb 100644 --- a/internal/service/rds/service_package_gen.go +++ b/internal/service/rds/service_package_gen.go @@ -5,6 +5,12 @@ package rds import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + rds_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rds" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + rds_sdkv1 "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -237,4 +243,24 @@ func (p *servicePackage) ServicePackageName() string { return names.RDS } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*rds_sdkv1.RDS, error) { + sess := config["session"].(*session_sdkv1.Session) + + return rds_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*rds_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return rds_sdkv2.NewFromConfig(cfg, func(o *rds_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = rds_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/rds/snapshot.go b/internal/service/rds/snapshot.go index f8d7d5dfa90..d8b711fdb88 100644 --- a/internal/service/rds/snapshot.go +++ b/internal/service/rds/snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -131,13 +134,13 @@ func ResourceSnapshot() *schema.Resource { func resourceSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) dbSnapshotID := d.Get("db_snapshot_identifier").(string) input := &rds.CreateDBSnapshotInput{ DBInstanceIdentifier: aws.String(d.Get("db_instance_identifier").(string)), DBSnapshotIdentifier: aws.String(dbSnapshotID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateDBSnapshotWithContext(ctx, input) @@ -169,7 +172,7 @@ func resourceSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) snapshot, err := FindDBSnapshotByID(ctx, conn, d.Id()) @@ -215,14 +218,14 @@ func resourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("shared_accounts", 
flex.FlattenStringSet(output.DBSnapshotAttributesResult.DBSnapshotAttributes[0].AttributeValues)) - SetTagsOut(ctx, snapshot.TagList) + setTagsOut(ctx, snapshot.TagList) return diags } func resourceSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChange("shared_accounts") { o, n := d.GetChange("shared_accounts") @@ -251,7 +254,7 @@ func resourceSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS DB Snapshot: %s", d.Id()) _, err := conn.DeleteDBSnapshotWithContext(ctx, &rds.DeleteDBSnapshotInput{ diff --git a/internal/service/rds/snapshot_copy.go b/internal/service/rds/snapshot_copy.go index 6b844da221b..7689a0903df 100644 --- a/internal/service/rds/snapshot_copy.go +++ b/internal/service/rds/snapshot_copy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -145,12 +148,12 @@ func ResourceSnapshotCopy() *schema.Resource { func resourceSnapshotCopyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) targetDBSnapshotID := d.Get("target_db_snapshot_identifier").(string) input := &rds.CopyDBSnapshotInput{ SourceDBSnapshotIdentifier: aws.String(d.Get("source_db_snapshot_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), TargetDBSnapshotIdentifier: aws.String(targetDBSnapshotID), } @@ -190,7 +193,7 @@ func resourceSnapshotCopyCreate(ctx context.Context, d *schema.ResourceData, met func resourceSnapshotCopyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) snapshot, err := FindDBSnapshotByID(ctx, conn, d.Id()) @@ -233,7 +236,7 @@ func resourceSnapshotCopyUpdate(ctx context.Context, d *schema.ResourceData, met func resourceSnapshotCopyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS DB Snapshot Copy: %s", d.Id()) _, err := conn.DeleteDBSnapshotWithContext(ctx, &rds.DeleteDBSnapshotInput{ diff --git a/internal/service/rds/snapshot_copy_test.go b/internal/service/rds/snapshot_copy_test.go index 71aaea93c04..aebffac6c50 100644 --- a/internal/service/rds/snapshot_copy_test.go +++ b/internal/service/rds/snapshot_copy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -126,7 +129,7 @@ func TestAccRDSSnapshotCopy_disappears(t *testing.T) { func testAccCheckSnapshotCopyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_snapshot_copy" { @@ -161,7 +164,7 @@ func testAccCheckSnapshotCopyExists(ctx context.Context, n string, v *rds.DBSnap return fmt.Errorf("No RDS DB Snapshot Copy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBSnapshotByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/snapshot_data_source.go b/internal/service/rds/snapshot_data_source.go index 847046194e2..b287856ff54 100644 --- a/internal/service/rds/snapshot_data_source.go +++ b/internal/service/rds/snapshot_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -122,7 +125,7 @@ func DataSourceSnapshot() *schema.Resource { func dataSourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) input := &rds.DescribeDBSnapshotsInput{ IncludePublic: aws.Bool(d.Get("include_public").(bool)), diff --git a/internal/service/rds/snapshot_data_source_test.go b/internal/service/rds/snapshot_data_source_test.go index fc3d75d4d59..7782ce0d462 100644 --- a/internal/service/rds/snapshot_data_source_test.go +++ b/internal/service/rds/snapshot_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/snapshot_test.go b/internal/service/rds/snapshot_test.go index cecb7d5e8df..1ff40264583 100644 --- a/internal/service/rds/snapshot_test.go +++ b/internal/service/rds/snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -170,7 +173,7 @@ func TestAccRDSSnapshot_disappears(t *testing.T) { func testAccCheckDBSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_snapshot" { @@ -205,7 +208,7 @@ func testAccCheckDBSnapshotExists(ctx context.Context, n string, v *rds.DBSnapsh return fmt.Errorf("No RDS DB Snapshot ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBSnapshotByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/status.go b/internal/service/rds/status.go index 1226e020afe..1a0655b5110 100644 --- a/internal/service/rds/status.go +++ b/internal/service/rds/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/subnet_group.go b/internal/service/rds/subnet_group.go index a90240c0e2f..cd51dc7f12d 100644 --- a/internal/service/rds/subnet_group.go +++ b/internal/service/rds/subnet_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -82,14 +85,14 @@ func ResourceSubnetGroup() *schema.Resource { func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &rds.CreateDBSubnetGroupInput{ DBSubnetGroupDescription: aws.String(d.Get("description").(string)), DBSubnetGroupName: aws.String(name), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating RDS DB Subnet Group: %s", input) @@ -105,7 +108,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) v, err := FindDBSubnetGroupByName(ctx, conn, d.Id()) @@ -137,7 +140,7 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) if d.HasChanges("description", "subnet_ids") { input := &rds.ModifyDBSubnetGroupInput{ @@ -158,7 +161,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) log.Printf("[DEBUG] Deleting RDS DB Subnet Group: %s", d.Id()) _, err := 
conn.DeleteDBSubnetGroupWithContext(ctx, &rds.DeleteDBSubnetGroupInput{ diff --git a/internal/service/rds/subnet_group_data_source.go b/internal/service/rds/subnet_group_data_source.go index 93e04f7742e..e88d51b4e92 100644 --- a/internal/service/rds/subnet_group_data_source.go +++ b/internal/service/rds/subnet_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( @@ -53,7 +56,7 @@ func DataSourceSubnetGroup() *schema.Resource { func dataSourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RDSConn() + conn := meta.(*conns.AWSClient).RDSConn(ctx) v, err := FindDBSubnetGroupByName(ctx, conn, d.Get("name").(string)) if err != nil { diff --git a/internal/service/rds/subnet_group_data_source_test.go b/internal/service/rds/subnet_group_data_source_test.go index 556d120f552..002f2d2a1bf 100644 --- a/internal/service/rds/subnet_group_data_source_test.go +++ b/internal/service/rds/subnet_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( diff --git a/internal/service/rds/subnet_group_test.go b/internal/service/rds/subnet_group_test.go index d1515778559..54cf5befc65 100644 --- a/internal/service/rds/subnet_group_test.go +++ b/internal/service/rds/subnet_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds_test import ( @@ -282,7 +285,7 @@ func TestAccRDSSubnetGroup_updateSubnets(t *testing.T) { func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_db_subnet_group" { @@ -317,7 +320,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, n string, v *rds.DBSubne return fmt.Errorf("No RDS DB Subnet Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn(ctx) output, err := tfrds.FindDBSubnetGroupByName(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/rds/sweep.go b/internal/service/rds/sweep.go index 18b115cfb30..fecb08c0236 100644 --- a/internal/service/rds/sweep.go +++ b/internal/service/rds/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -13,7 +16,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -110,11 +112,11 @@ func init() { func sweepClusterParameterGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeDBClusterParameterGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -149,7 +151,7 @@ func sweepClusterParameterGroups(region string) error { return fmt.Errorf("error listing RDS Cluster Parameter Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS Cluster Parameter Groups (%s): %w", region, err) @@ -160,11 +162,11 @@ func sweepClusterParameterGroups(region string) error { func sweepClusterSnapshots(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeDBClusterSnapshotsInput{ // "InvalidDBClusterSnapshotStateFault: Only manual snapshots may be deleted." 
Filters: []*rds.Filter{{ @@ -199,7 +201,7 @@ func sweepClusterSnapshots(region string) error { return fmt.Errorf("error listing RDS DB Cluster Snapshots (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS DB Cluster Snapshots (%s): %w", region, err) @@ -210,11 +212,11 @@ func sweepClusterSnapshots(region string) error { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -263,7 +265,7 @@ func sweepClusters(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing RDS Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping RDS Clusters (%s): %w", region, err)) } @@ -273,11 +275,11 @@ func sweepClusters(region string) error { func sweepEventSubscriptions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeEventSubscriptionsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -306,7 +308,7 @@ func sweepEventSubscriptions(region string) error { return fmt.Errorf("error listing RDS Event Subscriptions (%s): %w", region, err) } - err = 
sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS Event Subscriptions (%s): %w", region, err) @@ -317,11 +319,11 @@ func sweepEventSubscriptions(region string) error { func sweepGlobalClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeGlobalClustersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -352,7 +354,7 @@ func sweepGlobalClusters(region string) error { return fmt.Errorf("error listing RDS Global Clusters (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS Global Clusters (%s): %w", region, err) @@ -363,12 +365,12 @@ func sweepGlobalClusters(region string) error { func sweepInstances(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &rds.DescribeDBInstancesInput{} - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.DescribeDBInstancesPagesWithContext(ctx, input, func(page *rds.DescribeDBInstancesOutput, lastPage bool) bool { @@ -401,7 +403,7 @@ func sweepInstances(region string) error { return fmt.Errorf("error listing RDS DB Instances (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS DB 
Instances (%s): %w", region, err) @@ -412,12 +414,12 @@ func sweepInstances(region string) error { func sweepOptionGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &rds.DescribeOptionGroupsInput{} - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.DescribeOptionGroupsPagesWithContext(ctx, input, func(page *rds.DescribeOptionGroupsOutput, lastPage bool) bool { @@ -451,7 +453,7 @@ func sweepOptionGroups(region string) error { return fmt.Errorf("error listing RDS Option Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS Option Groups (%s): %w", region, err) @@ -462,12 +464,12 @@ func sweepOptionGroups(region string) error { func sweepParameterGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &rds.DescribeDBParameterGroupsInput{} - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.DescribeDBParameterGroupsPagesWithContext(ctx, input, func(page *rds.DescribeDBParameterGroupsOutput, lastPage bool) bool { @@ -501,7 +503,7 @@ func sweepParameterGroups(region string) error { return fmt.Errorf("error listing RDS DB Parameter Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS DB Parameter Groups (%s): %w", 
region, err) @@ -512,11 +514,11 @@ func sweepParameterGroups(region string) error { func sweepProxies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("Error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeDBProxiesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -545,7 +547,7 @@ func sweepProxies(region string) error { return fmt.Errorf("error listing RDS DB Proxies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS DB Proxies (%s): %w", region, err) @@ -556,11 +558,11 @@ func sweepProxies(region string) error { func sweepSnapshots(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeDBSnapshotsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -595,7 +597,7 @@ func sweepSnapshots(region string) error { return fmt.Errorf("error listing RDS DB Snapshots (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS DB Snapshots (%s): %w", region, err) @@ -606,11 +608,11 @@ func sweepSnapshots(region string) error { func sweepSubnetGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", 
err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeDBSubnetGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -639,7 +641,7 @@ func sweepSubnetGroups(region string) error { return fmt.Errorf("error listing RDS DB Subnet Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS DB Subnet Groups (%s): %w", region, err) @@ -650,11 +652,11 @@ func sweepSubnetGroups(region string) error { func sweepInstanceAutomatedBackups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RDSConn() + conn := client.RDSConn(ctx) input := &rds.DescribeDBInstanceAutomatedBackupsInput{} sweepResources := make([]sweep.Sweepable, 0) var backupARNs []string @@ -687,7 +689,7 @@ func sweepInstanceAutomatedBackups(region string) error { return fmt.Errorf("error listing RDS Instance Automated Backups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping RDS Instance Automated Backups (%s): %w", region, err) diff --git a/internal/service/rds/tags_gen.go b/internal/service/rds/tags_gen.go index ae8d3604a39..2ba6542fe2c 100644 --- a/internal/service/rds/tags_gen.go +++ b/internal/service/rds/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists rds service tags. +// listTags lists rds service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn rdsiface.RDSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn rdsiface.RDSAPI, identifier string) (tftags.KeyValueTags, error) { input := &rds.ListTagsForResourceInput{ ResourceName: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn rdsiface.RDSAPI, identifier string) (tft // ListTags lists rds service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).RDSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).RDSConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*rds.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns rds service tags from Context. +// getTagsIn returns rds service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*rds.Tag { +func getTagsIn(ctx context.Context) []*rds.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*rds.Tag { return nil } -// SetTagsOut sets rds service tags in Context. -func SetTagsOut(ctx context.Context, tags []*rds.Tag) { +// setTagsOut sets rds service tags in Context. +func setTagsOut(ctx context.Context, tags []*rds.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates rds service tags. +// updateTags updates rds service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn rdsiface.RDSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn rdsiface.RDSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn rdsiface.RDSAPI, identifier string, ol // UpdateTags updates rds service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RDSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RDSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/rds/validate.go b/internal/service/rds/validate.go index 9b06b8985a6..6917260a678 100644 --- a/internal/service/rds/validate.go +++ b/internal/service/rds/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/validate_test.go b/internal/service/rds/validate_test.go index 5ad3638842a..49c9554c39c 100644 --- a/internal/service/rds/validate_test.go +++ b/internal/service/rds/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/verify.go b/internal/service/rds/verify.go index e98f2dbc296..999ddb8fe24 100644 --- a/internal/service/rds/verify.go +++ b/internal/service/rds/verify.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/rds/wait.go b/internal/service/rds/wait.go index 7d4117ee908..771b3d30447 100644 --- a/internal/service/rds/wait.go +++ b/internal/service/rds/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rds import ( diff --git a/internal/service/redshift/authentication_profile.go b/internal/service/redshift/authentication_profile.go index 69da4cdc2fe..cf7a8571887 100644 --- a/internal/service/redshift/authentication_profile.go +++ b/internal/service/redshift/authentication_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -51,7 +54,7 @@ func ResourceAuthenticationProfile() *schema.Resource { func resourceAuthenticationProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) authProfileName := d.Get("authentication_profile_name").(string) @@ -73,7 +76,7 @@ func resourceAuthenticationProfileCreate(ctx context.Context, d *schema.Resource func resourceAuthenticationProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) out, err := FindAuthenticationProfileByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -94,7 +97,7 @@ func resourceAuthenticationProfileRead(ctx context.Context, d *schema.ResourceDa func resourceAuthenticationProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) input := &redshift.ModifyAuthenticationProfileInput{ AuthenticationProfileName: aws.String(d.Id()), @@ -112,7 +115,7 @@ func resourceAuthenticationProfileUpdate(ctx context.Context, d *schema.Resource func resourceAuthenticationProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics 
- conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) deleteInput := redshift.DeleteAuthenticationProfileInput{ AuthenticationProfileName: aws.String(d.Id()), diff --git a/internal/service/redshift/authentication_profile_test.go b/internal/service/redshift/authentication_profile_test.go index 0c3145c89d0..bd72785b8b8 100644 --- a/internal/service/redshift/authentication_profile_test.go +++ b/internal/service/redshift/authentication_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -75,7 +78,7 @@ func TestAccRedshiftAuthenticationProfile_disappears(t *testing.T) { func testAccCheckAuthenticationProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_authentication_profile" { @@ -110,7 +113,7 @@ func testAccCheckAuthenticationProfileExists(ctx context.Context, name string) r return fmt.Errorf("Authentication Profile ID is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, err := tfredshift.FindAuthenticationProfileByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/cluster.go b/internal/service/redshift/cluster.go index da9e65126b8..73abf6fafbb 100644 --- a/internal/service/redshift/cluster.go +++ b/internal/service/redshift/cluster.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( "context" - "errors" "fmt" "log" "regexp" @@ -60,6 +62,10 @@ func ResourceCluster() *schema.Resource { Optional: true, Computed: true, ValidateFunc: validation.StringInSlice(redshift.AquaConfigurationStatus_Values(), false), + Deprecated: "This parameter is no longer supported by the AWS API. It will be removed in the next major version of the provider.", + DiffSuppressFunc: func(k, oldValue, newValue string, d *schema.ResourceData) bool { + return true + }, }, "arn": { Type: schema.TypeString, @@ -91,6 +97,10 @@ func ResourceCluster() *schema.Resource { validation.StringDoesNotMatch(regexp.MustCompile(`-$`), "cannot end with a hyphen"), ), }, + "cluster_namespace_arn": { + Type: schema.TypeString, + Computed: true, + }, "cluster_nodes": { Type: schema.TypeList, Computed: true, @@ -359,12 +369,6 @@ func ResourceCluster() *schema.Resource { CustomizeDiff: customdiff.All( verify.SetTagsDiff, - func(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { - if diff.Get("availability_zone_relocation_enabled").(bool) && diff.Get("publicly_accessible").(bool) { - return errors.New("`availability_zone_relocation_enabled` cannot be true when `publicly_accessible` is true") - } - return nil - }, func(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { if diff.Id() == "" { return nil @@ -384,7 +388,7 @@ func ResourceCluster() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) clusterID := d.Get("cluster_identifier").(string) backupInput := &redshift.RestoreFromClusterSnapshotInput{ @@ -407,7 +411,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int NodeType: aws.String(d.Get("node_type").(string)), Port: aws.Int64(int64(d.Get("port").(int))), 
PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("aqua_configuration_status"); ok { @@ -562,7 +566,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) rsc, err := FindClusterByID(ctx, conn, d.Id()) @@ -604,6 +608,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter } d.Set("availability_zone_relocation_enabled", azr) d.Set("cluster_identifier", rsc.ClusterIdentifier) + d.Set("cluster_namespace_arn", rsc.ClusterNamespaceArn) if err := d.Set("cluster_nodes", flattenClusterNodes(rsc.ClusterNodes)); err != nil { return sdkdiag.AppendErrorf(diags, "setting cluster_nodes: %s", err) } @@ -665,14 +670,14 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter } d.Set("vpc_security_group_ids", aws.StringValueSlice(apiList)) - SetTagsOut(ctx, rsc.Tags) + setTagsOut(ctx, rsc.Tags) return diags } func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChangesExcept("aqua_configuration_status", "availability_zone", "iam_roles", "logging", "snapshot_copy", "tags", "tags_all") { input := &redshift.ModifyClusterInput{ @@ -887,7 +892,7 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) skipFinalSnapshot := 
d.Get("skip_final_snapshot").(bool) input := &redshift.DeleteClusterInput{ diff --git a/internal/service/redshift/cluster_credentials_data_source.go b/internal/service/redshift/cluster_credentials_data_source.go index fac005b0fbd..ce76d682d77 100644 --- a/internal/service/redshift/cluster_credentials_data_source.go +++ b/internal/service/redshift/cluster_credentials_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -62,7 +65,7 @@ func DataSourceClusterCredentials() *schema.Resource { func dataSourceClusterCredentialsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) clusterID := d.Get("cluster_identifier").(string) input := &redshift.GetClusterCredentialsInput{ diff --git a/internal/service/redshift/cluster_credentials_data_source_test.go b/internal/service/redshift/cluster_credentials_data_source_test.go index 03ab6ace148..a777e33428b 100644 --- a/internal/service/redshift/cluster_credentials_data_source_test.go +++ b/internal/service/redshift/cluster_credentials_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( diff --git a/internal/service/redshift/cluster_data_source.go b/internal/service/redshift/cluster_data_source.go index 6049eb5f857..eac3eabc2fb 100644 --- a/internal/service/redshift/cluster_data_source.go +++ b/internal/service/redshift/cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -53,6 +56,10 @@ func DataSourceCluster() *schema.Resource { Type: schema.TypeString, Required: true, }, + "cluster_namespace_arn": { + Type: schema.TypeString, + Computed: true, + }, "cluster_nodes": { Type: schema.TypeList, Computed: true, @@ -195,7 +202,7 @@ func DataSourceCluster() *schema.Resource { func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig clusterID := d.Get("cluster_identifier").(string) @@ -226,37 +233,31 @@ func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta int } d.Set("availability_zone_relocation_enabled", azr) d.Set("cluster_identifier", rsc.ClusterIdentifier) + d.Set("cluster_namespace_arn", rsc.ClusterNamespaceArn) if err := d.Set("cluster_nodes", flattenClusterNodes(rsc.ClusterNodes)); err != nil { return sdkdiag.AppendErrorf(diags, "setting cluster_nodes: %s", err) } - if len(rsc.ClusterParameterGroups) > 0 { d.Set("cluster_parameter_group_name", rsc.ClusterParameterGroups[0].ParameterGroupName) } - d.Set("cluster_public_key", rsc.ClusterPublicKey) d.Set("cluster_revision_number", rsc.ClusterRevisionNumber) d.Set("cluster_subnet_group_name", rsc.ClusterSubnetGroupName) - if len(rsc.ClusterNodes) > 1 { d.Set("cluster_type", clusterTypeMultiNode) } else { d.Set("cluster_type", clusterTypeSingleNode) } - d.Set("cluster_version", rsc.ClusterVersion) d.Set("database_name", rsc.DBName) - if rsc.ElasticIpStatus != nil { d.Set("elastic_ip", rsc.ElasticIpStatus.ElasticIp) } - d.Set("encrypted", rsc.Encrypted) - if rsc.Endpoint != nil { d.Set("endpoint", rsc.Endpoint.Address) + d.Set("port", rsc.Endpoint.Port) } - d.Set("enhanced_vpc_routing", rsc.EnhancedVpcRouting) var iamRoles []string @@ -269,7 +270,6 @@ func 
dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("master_username", rsc.MasterUsername) d.Set("node_type", rsc.NodeType) d.Set("number_of_nodes", rsc.NumberOfNodes) - d.Set("port", rsc.Endpoint.Port) d.Set("preferred_maintenance_window", rsc.PreferredMaintenanceWindow) d.Set("publicly_accessible", rsc.PubliclyAccessible) d.Set("default_iam_role_arn", rsc.DefaultIamRoleArn) diff --git a/internal/service/redshift/cluster_data_source_test.go b/internal/service/redshift/cluster_data_source_test.go index cc75e9d7e4a..7eb7f2fe314 100644 --- a/internal/service/redshift/cluster_data_source_test.go +++ b/internal/service/redshift/cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -30,6 +33,7 @@ func TestAccRedshiftClusterDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrSet(dataSourceName, "automated_snapshot_retention_period"), resource.TestCheckResourceAttrSet(dataSourceName, "availability_zone"), resource.TestCheckResourceAttrSet(dataSourceName, "cluster_identifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "cluster_namespace_arn", resourceName, "cluster_namespace_arn"), resource.TestCheckResourceAttrSet(dataSourceName, "cluster_parameter_group_name"), resource.TestCheckResourceAttrSet(dataSourceName, "cluster_public_key"), resource.TestCheckResourceAttrSet(dataSourceName, "cluster_revision_number"), diff --git a/internal/service/redshift/cluster_iam_roles.go b/internal/service/redshift/cluster_iam_roles.go index 0fbd5325ed7..1520ce7dc14 100644 --- a/internal/service/redshift/cluster_iam_roles.go +++ b/internal/service/redshift/cluster_iam_roles.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -61,7 +64,7 @@ func ResourceClusterIAMRoles() *schema.Resource { func resourceClusterIAMRolesCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) clusterID := d.Get("cluster_identifier").(string) input := &redshift.ModifyClusterIamRolesInput{ @@ -94,7 +97,7 @@ func resourceClusterIAMRolesCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterIAMRolesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) rsc, err := FindClusterByID(ctx, conn, d.Id()) @@ -123,7 +126,7 @@ func resourceClusterIAMRolesRead(ctx context.Context, d *schema.ResourceData, me func resourceClusterIAMRolesUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) o, n := d.GetChange("iam_role_arns") if o == nil { @@ -160,7 +163,7 @@ func resourceClusterIAMRolesUpdate(ctx context.Context, d *schema.ResourceData, func resourceClusterIAMRolesDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) input := &redshift.ModifyClusterIamRolesInput{ ClusterIdentifier: aws.String(d.Id()), diff --git a/internal/service/redshift/cluster_iam_roles_test.go b/internal/service/redshift/cluster_iam_roles_test.go index f61e1a84937..dcee896466b 100644 --- a/internal/service/redshift/cluster_iam_roles_test.go +++ b/internal/service/redshift/cluster_iam_roles_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( diff --git a/internal/service/redshift/cluster_snapshot.go b/internal/service/redshift/cluster_snapshot.go index c51be7d5612..80ae9099f30 100644 --- a/internal/service/redshift/cluster_snapshot.go +++ b/internal/service/redshift/cluster_snapshot.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -69,12 +72,12 @@ func ResourceClusterSnapshot() *schema.Resource { func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) input := redshift.CreateClusterSnapshotInput{ SnapshotIdentifier: aws.String(d.Get("snapshot_identifier").(string)), ClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("manual_snapshot_retention_period"); ok { @@ -98,7 +101,7 @@ func resourceClusterSnapshotCreate(ctx context.Context, d *schema.ResourceData, func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) snapshot, err := FindClusterSnapshotByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -125,14 +128,14 @@ func resourceClusterSnapshotRead(ctx context.Context, d *schema.ResourceData, me d.Set("owner_account", snapshot.OwnerAccount) d.Set("snapshot_identifier", snapshot.SnapshotIdentifier) - SetTagsOut(ctx, snapshot.Tags) + setTagsOut(ctx, snapshot.Tags) return diags } func resourceClusterSnapshotUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + 
conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &redshift.ModifyClusterSnapshotInput{ @@ -152,7 +155,7 @@ func resourceClusterSnapshotUpdate(ctx context.Context, d *schema.ResourceData, func resourceClusterSnapshotDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) log.Printf("[DEBUG] Deleting Redshift Cluster Snapshot: %s", d.Id()) _, err := conn.DeleteClusterSnapshotWithContext(ctx, &redshift.DeleteClusterSnapshotInput{ diff --git a/internal/service/redshift/cluster_snapshot_test.go b/internal/service/redshift/cluster_snapshot_test.go index 3b8c88a6478..7388b30209c 100644 --- a/internal/service/redshift/cluster_snapshot_test.go +++ b/internal/service/redshift/cluster_snapshot_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -141,7 +144,7 @@ func testAccCheckClusterSnapshotExists(ctx context.Context, n string, v *redshif return fmt.Errorf("No Redshift Cluster Snapshot is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) out, err := tfredshift.FindClusterSnapshotByID(ctx, conn, rs.Primary.ID) @@ -157,7 +160,7 @@ func testAccCheckClusterSnapshotExists(ctx context.Context, n string, v *redshif func testAccCheckClusterSnapshotDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_cluster_snapshot" { diff --git a/internal/service/redshift/cluster_test.go b/internal/service/redshift/cluster_test.go index 466c2f93206..ec42a810744 
100644 --- a/internal/service/redshift/cluster_test.go +++ b/internal/service/redshift/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -79,7 +82,7 @@ func TestAccRedshiftCluster_aqua(t *testing.T) { Config: testAccClusterConfig_aqua(rName, "enabled"), Check: resource.ComposeTestCheckFunc( testAccCheckClusterExists(ctx, resourceName, &v), - resource.TestCheckResourceAttr(resourceName, "aqua_configuration_status", "enabled"), + resource.TestCheckResourceAttr(resourceName, "aqua_configuration_status", "auto"), ), }, { @@ -97,14 +100,14 @@ func TestAccRedshiftCluster_aqua(t *testing.T) { Config: testAccClusterConfig_aqua(rName, "disabled"), Check: resource.ComposeTestCheckFunc( testAccCheckClusterExists(ctx, resourceName, &v), - resource.TestCheckResourceAttr(resourceName, "aqua_configuration_status", "disabled"), + resource.TestCheckResourceAttr(resourceName, "aqua_configuration_status", "auto"), ), }, { Config: testAccClusterConfig_aqua(rName, "enabled"), Check: resource.ComposeTestCheckFunc( testAccCheckClusterExists(ctx, resourceName, &v), - resource.TestCheckResourceAttr(resourceName, "aqua_configuration_status", "enabled"), + resource.TestCheckResourceAttr(resourceName, "aqua_configuration_status", "auto"), ), }, }, @@ -740,6 +743,8 @@ func TestAccRedshiftCluster_availabilityZoneRelocation(t *testing.T) { func TestAccRedshiftCluster_availabilityZoneRelocation_publiclyAccessible(t *testing.T) { ctx := acctest.Context(t) + var v redshift.Cluster + resourceName := "aws_redshift_cluster.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ @@ -749,8 +754,12 @@ func TestAccRedshiftCluster_availabilityZoneRelocation_publiclyAccessible(t *tes CheckDestroy: testAccCheckClusterDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccClusterConfig_availabilityZoneRelocationPubliclyAccessible(rName), - ExpectError: 
regexp.MustCompile("`availability_zone_relocation_enabled` cannot be true when `publicly_accessible` is true"), + Config: testAccClusterConfig_availabilityZoneRelocationPubliclyAccessible(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "availability_zone_relocation_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "true"), + ), }, }, }) @@ -808,7 +817,7 @@ func TestAccRedshiftCluster_restoreFromSnapshot(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_cluster" { @@ -840,7 +849,7 @@ func testAccCheckClusterTestSnapshotDestroy(ctx context.Context, rName string) r } // Try and delete the snapshot before we check for the cluster not found - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, err := conn.DeleteClusterSnapshotWithContext(ctx, &redshift.DeleteClusterSnapshotInput{ SnapshotIdentifier: aws.String(rName), @@ -878,7 +887,7 @@ func testAccCheckClusterExists(ctx context.Context, n string, v *redshift.Cluste return fmt.Errorf("No Redshift Cluster ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) output, err := tfredshift.FindClusterByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/consts.go b/internal/service/redshift/consts.go index df693a9908b..0b9c1cb7881 100644 --- a/internal/service/redshift/consts.go +++ b/internal/service/redshift/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( diff --git a/internal/service/redshift/endpoint_access.go b/internal/service/redshift/endpoint_access.go index af310fa1445..7177b344019 100644 --- a/internal/service/redshift/endpoint_access.go +++ b/internal/service/redshift/endpoint_access.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -114,7 +117,7 @@ func ResourceEndpointAccess() *schema.Resource { func resourceEndpointAccessCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) createOpts := redshift.CreateEndpointAccessInput{ EndpointName: aws.String(d.Get("endpoint_name").(string)), @@ -150,7 +153,7 @@ func resourceEndpointAccessCreate(ctx context.Context, d *schema.ResourceData, m func resourceEndpointAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) endpoint, err := FindEndpointAccessByName(ctx, conn, d.Id()) @@ -181,7 +184,7 @@ func resourceEndpointAccessRead(ctx context.Context, d *schema.ResourceData, met func resourceEndpointAccessUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChanges("vpc_security_group_ids") { _, n := d.GetChange("vpc_security_group_ids") @@ -214,7 +217,7 @@ func resourceEndpointAccessUpdate(ctx context.Context, d *schema.ResourceData, m func resourceEndpointAccessDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := 
meta.(*conns.AWSClient).RedshiftConn(ctx) _, err := conn.DeleteEndpointAccessWithContext(ctx, &redshift.DeleteEndpointAccessInput{ EndpointName: aws.String(d.Id()), diff --git a/internal/service/redshift/endpoint_access_test.go b/internal/service/redshift/endpoint_access_test.go index dbd73a777d7..699a0a5a03f 100644 --- a/internal/service/redshift/endpoint_access_test.go +++ b/internal/service/redshift/endpoint_access_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -135,7 +138,7 @@ func TestAccRedshiftEndpointAccess_disappears_cluster(t *testing.T) { func testAccCheckEndpointAccessDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_endpoint_access" { @@ -170,7 +173,7 @@ func testAccCheckEndpointAccessExists(ctx context.Context, n string, v *redshift return fmt.Errorf("No Redshift Endpoint Access ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) output, err := tfredshift.FindEndpointAccessByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/endpoint_authorization.go b/internal/service/redshift/endpoint_authorization.go index e64fa75427f..fd73b341628 100644 --- a/internal/service/redshift/endpoint_authorization.go +++ b/internal/service/redshift/endpoint_authorization.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -73,7 +76,7 @@ func ResourceEndpointAuthorization() *schema.Resource { func resourceEndpointAuthorizationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) account := d.Get("account").(string) input := redshift.AuthorizeEndpointAccessInput{ @@ -98,7 +101,7 @@ func resourceEndpointAuthorizationCreate(ctx context.Context, d *schema.Resource func resourceEndpointAuthorizationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) endpoint, err := FindEndpointAuthorizationById(ctx, conn, d.Id()) @@ -125,7 +128,7 @@ func resourceEndpointAuthorizationRead(ctx context.Context, d *schema.ResourceDa func resourceEndpointAuthorizationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChanges("vpc_ids") { account, clusterId, err := DecodeEndpointAuthorizationID(d.Id()) @@ -169,7 +172,7 @@ func resourceEndpointAuthorizationUpdate(ctx context.Context, d *schema.Resource func resourceEndpointAuthorizationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) account, clusterId, err := DecodeEndpointAuthorizationID(d.Id()) if err != nil { diff --git a/internal/service/redshift/endpoint_authorization_test.go b/internal/service/redshift/endpoint_authorization_test.go index aa7ac8a9b08..781e668c281 100644 --- a/internal/service/redshift/endpoint_authorization_test.go 
+++ b/internal/service/redshift/endpoint_authorization_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -156,7 +159,7 @@ func TestAccRedshiftEndpointAuthorization_disappears_cluster(t *testing.T) { func testAccCheckEndpointAuthorizationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_endpoint_authorization" { @@ -191,7 +194,7 @@ func testAccCheckEndpointAuthorizationExists(ctx context.Context, n string, v *r return fmt.Errorf("No Redshift Endpoint Authorization ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) output, err := tfredshift.FindEndpointAuthorizationById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/errors.go b/internal/service/redshift/errors.go index a2d64b807d1..97d9acb1e6b 100644 --- a/internal/service/redshift/errors.go +++ b/internal/service/redshift/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift const ( diff --git a/internal/service/redshift/event_subscription.go b/internal/service/redshift/event_subscription.go index 06432288b6a..fb8adeb4aaf 100644 --- a/internal/service/redshift/event_subscription.go +++ b/internal/service/redshift/event_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -94,15 +97,9 @@ func ResourceEventSubscription() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, }, "source_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - "cluster", - "cluster-parameter-group", - "cluster-security-group", - "cluster-snapshot", - "scheduled-action", - }, false), + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(redshift.SourceType_Values(), false), }, "status": { Type: schema.TypeString, @@ -118,13 +115,13 @@ func ResourceEventSubscription() *schema.Resource { func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) request := &redshift.CreateEventSubscriptionInput{ SubscriptionName: aws.String(d.Get("name").(string)), SnsTopicArn: aws.String(d.Get("sns_topic_arn").(string)), Enabled: aws.Bool(d.Get("enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("event_categories"); ok && v.(*schema.Set).Len() > 0 { @@ -157,7 +154,7 @@ func resourceEventSubscriptionCreate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) sub, err := FindEventSubscriptionByName(ctx, conn, d.Id()) @@ -188,14 +185,14 @@ func resourceEventSubscriptionRead(ctx context.Context, d *schema.ResourceData, d.Set("source_type", sub.SourceType) d.Set("status", sub.Status) - SetTagsOut(ctx, sub.Tags) + setTagsOut(ctx, sub.Tags) return diags } func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChangesExcept("tags", "tags_all") { req := &redshift.ModifyEventSubscriptionInput{ @@ -220,7 +217,7 @@ func resourceEventSubscriptionUpdate(ctx context.Context, d *schema.ResourceData func resourceEventSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) deleteOpts := redshift.DeleteEventSubscriptionInput{ SubscriptionName: aws.String(d.Id()), } diff --git a/internal/service/redshift/event_subscription_test.go b/internal/service/redshift/event_subscription_test.go index e7cfbc07a58..8a204a23ce4 100644 --- a/internal/service/redshift/event_subscription_test.go +++ b/internal/service/redshift/event_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -225,7 +228,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, v *redsh return fmt.Errorf("No Redshift Event Subscription is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) out, err := tfredshift.FindEventSubscriptionByName(ctx, conn, rs.Primary.ID) @@ -241,7 +244,7 @@ func testAccCheckEventSubscriptionExists(ctx context.Context, n string, v *redsh func testAccCheckEventSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_event_subscription" { diff --git a/internal/service/redshift/find.go b/internal/service/redshift/find.go index 5afd6a55509..195a45bdf4e 100644 --- a/internal/service/redshift/find.go +++ b/internal/service/redshift/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( diff --git a/internal/service/redshift/flex.go b/internal/service/redshift/flex.go index 0ee0347f974..10580842ecc 100644 --- a/internal/service/redshift/flex.go +++ b/internal/service/redshift/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( diff --git a/internal/service/redshift/generate.go b/internal/service/redshift/generate.go index 6168b62866f..191198d019a 100644 --- a/internal/service/redshift/generate.go +++ b/internal/service/redshift/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTagsOp=DescribeTags -ListTagsInIDElem=ResourceName -ServiceTagsSlice -TagOp=CreateTags -TagInIDElem=ResourceName -UntagOp=DeleteTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package redshift diff --git a/internal/service/redshift/hsm_client_certificate.go b/internal/service/redshift/hsm_client_certificate.go index 3fa0740a15c..773c73c9b6b 100644 --- a/internal/service/redshift/hsm_client_certificate.go +++ b/internal/service/redshift/hsm_client_certificate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -56,13 +59,13 @@ func ResourceHSMClientCertificate() *schema.Resource { func resourceHSMClientCertificateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) certIdentifier := d.Get("hsm_client_certificate_identifier").(string) input := redshift.CreateHsmClientCertificateInput{ HsmClientCertificateIdentifier: aws.String(certIdentifier), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateHsmClientCertificateWithContext(ctx, &input) @@ -77,7 +80,7 @@ func resourceHSMClientCertificateCreate(ctx context.Context, d *schema.ResourceD func resourceHSMClientCertificateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) out, err := FindHSMClientCertificateByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -103,7 +106,7 @@ func resourceHSMClientCertificateRead(ctx context.Context, d *schema.ResourceDat 
d.Set("hsm_client_certificate_identifier", out.HsmClientCertificateIdentifier) d.Set("hsm_client_certificate_public_key", out.HsmClientCertificatePublicKey) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return diags } @@ -118,7 +121,7 @@ func resourceHSMClientCertificateUpdate(ctx context.Context, d *schema.ResourceD func resourceHSMClientCertificateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) deleteInput := redshift.DeleteHsmClientCertificateInput{ HsmClientCertificateIdentifier: aws.String(d.Id()), diff --git a/internal/service/redshift/hsm_client_certificate_test.go b/internal/service/redshift/hsm_client_certificate_test.go index 95dcc886196..d0b7aebafbb 100644 --- a/internal/service/redshift/hsm_client_certificate_test.go +++ b/internal/service/redshift/hsm_client_certificate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -115,7 +118,7 @@ func TestAccRedshiftHSMClientCertificate_disappears(t *testing.T) { func testAccCheckHSMClientCertificateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_hsm_client_certificate" { @@ -150,7 +153,7 @@ func testAccCheckHSMClientCertificateExists(ctx context.Context, name string) re return fmt.Errorf("Snapshot Copy Grant ID (HsmClientCertificateName) is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, err := tfredshift.FindHSMClientCertificateByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/hsm_configuration.go b/internal/service/redshift/hsm_configuration.go index 1253b57cc98..9b41cc86eea 100644 --- a/internal/service/redshift/hsm_configuration.go +++ b/internal/service/redshift/hsm_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -78,7 +81,7 @@ func ResourceHSMConfiguration() *schema.Resource { func resourceHSMConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) hsmConfigurationID := d.Get("hsm_configuration_identifier").(string) input := &redshift.CreateHsmConfigurationInput{ @@ -88,7 +91,7 @@ func resourceHSMConfigurationCreate(ctx context.Context, d *schema.ResourceData, HsmPartitionName: aws.String(d.Get("hsm_partition_name").(string)), HsmPartitionPassword: aws.String(d.Get("hsm_partition_password").(string)), HsmServerPublicCertificate: aws.String(d.Get("hsm_server_public_certificate").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateHsmConfigurationWithContext(ctx, input) @@ -104,7 +107,7 @@ func resourceHSMConfigurationCreate(ctx context.Context, d *schema.ResourceData, func resourceHSMConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) hsmConfiguration, err := FindHSMConfigurationByID(ctx, conn, d.Id()) @@ -133,7 +136,7 @@ func resourceHSMConfigurationRead(ctx context.Context, d *schema.ResourceData, m d.Set("hsm_partition_password", d.Get("hsm_partition_password").(string)) d.Set("hsm_server_public_certificate", d.Get("hsm_server_public_certificate").(string)) - SetTagsOut(ctx, hsmConfiguration.Tags) + setTagsOut(ctx, hsmConfiguration.Tags) return diags } @@ -148,7 +151,7 @@ func resourceHSMConfigurationUpdate(ctx context.Context, d *schema.ResourceData, func resourceHSMConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) log.Printf("[DEBUG] Deleting Redshift HSM Configuration: %s", d.Id()) _, err := conn.DeleteHsmConfigurationWithContext(ctx, &redshift.DeleteHsmConfigurationInput{ diff --git a/internal/service/redshift/hsm_configuration_test.go b/internal/service/redshift/hsm_configuration_test.go index db36711dce1..019ba96cf20 100644 --- a/internal/service/redshift/hsm_configuration_test.go +++ b/internal/service/redshift/hsm_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -116,7 +119,7 @@ func TestAccRedshiftHSMConfiguration_disappears(t *testing.T) { func testAccCheckHSMConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_hsm_configuration" { @@ -151,7 +154,7 @@ func testAccCheckHSMConfigurationExists(ctx context.Context, name string) resour return fmt.Errorf("Redshift Hsm Configuration is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, err := tfredshift.FindHSMConfigurationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/orderable_cluster_data_source.go b/internal/service/redshift/orderable_cluster_data_source.go index 1541bfeeae6..b915a9c0743 100644 --- a/internal/service/redshift/orderable_cluster_data_source.go +++ b/internal/service/redshift/orderable_cluster_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -47,7 +50,7 @@ func DataSourceOrderableCluster() *schema.Resource { func dataSourceOrderableClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) input := &redshift.DescribeOrderableClusterOptionsInput{} diff --git a/internal/service/redshift/orderable_cluster_data_source_test.go b/internal/service/redshift/orderable_cluster_data_source_test.go index c188aaa0b75..f78efc0c704 100644 --- a/internal/service/redshift/orderable_cluster_data_source_test.go +++ b/internal/service/redshift/orderable_cluster_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -95,7 +98,7 @@ func TestAccRedshiftOrderableClusterDataSource_preferredNodeTypes(t *testing.T) } func testAccOrderableClusterPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) input := &redshift.DescribeOrderableClusterOptionsInput{ MaxRecords: aws.Int64(20), diff --git a/internal/service/redshift/parameter_group.go b/internal/service/redshift/parameter_group.go index 50166017e0b..f5bfd381884 100644 --- a/internal/service/redshift/parameter_group.go +++ b/internal/service/redshift/parameter_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -93,14 +96,14 @@ func ResourceParameterGroup() *schema.Resource { func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) name := d.Get("name").(string) input := &redshift.CreateClusterParameterGroupInput{ Description: aws.String(d.Get("description").(string)), ParameterGroupFamily: aws.String(d.Get("family").(string)), ParameterGroupName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.CreateClusterParameterGroupWithContext(ctx, input) @@ -129,7 +132,7 @@ func resourceParameterGroupCreate(ctx context.Context, d *schema.ResourceData, m func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) parameterGroup, err := FindParameterGroupByName(ctx, conn, d.Id()) @@ -155,7 +158,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met d.Set("family", parameterGroup.ParameterGroupFamily) d.Set("name", parameterGroup.ParameterGroupName) - SetTagsOut(ctx, parameterGroup.Tags) + setTagsOut(ctx, parameterGroup.Tags) input := &redshift.DescribeClusterParametersInput{ ParameterGroupName: aws.String(d.Id()), @@ -175,7 +178,7 @@ func resourceParameterGroupRead(ctx context.Context, d *schema.ResourceData, met func resourceParameterGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChange("parameter") { o, n := d.GetChange("parameter") @@ -208,7 +211,7 @@ func resourceParameterGroupUpdate(ctx context.Context, d 
*schema.ResourceData, m func resourceParameterGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) log.Printf("[DEBUG] Deleting Redshift Parameter Group: %s", d.Id()) _, err := conn.DeleteClusterParameterGroupWithContext(ctx, &redshift.DeleteClusterParameterGroupInput{ diff --git a/internal/service/redshift/parameter_group_test.go b/internal/service/redshift/parameter_group_test.go index 8fe3bc8ea70..e6b62ca9f73 100644 --- a/internal/service/redshift/parameter_group_test.go +++ b/internal/service/redshift/parameter_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -190,7 +193,7 @@ func TestAccRedshiftParameterGroup_tags(t *testing.T) { func testAccCheckParameterGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_parameter_group" { @@ -225,7 +228,7 @@ func testAccCheckParameterGroupExists(ctx context.Context, n string, v *redshift return fmt.Errorf("No Redshift Parameter Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) output, err := tfredshift.FindParameterGroupByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/partner.go b/internal/service/redshift/partner.go index 6edf2d92594..f77d075ce4a 100644 --- a/internal/service/redshift/partner.go +++ b/internal/service/redshift/partner.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -64,7 +67,7 @@ func ResourcePartner() *schema.Resource { func resourcePartnerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) account := d.Get("account_id").(string) clusterId := d.Get("cluster_identifier").(string) @@ -88,7 +91,7 @@ func resourcePartnerCreate(ctx context.Context, d *schema.ResourceData, meta int func resourcePartnerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) out, err := FindPartnerById(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -113,7 +116,7 @@ func resourcePartnerRead(ctx context.Context, d *schema.ResourceData, meta inter func resourcePartnerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) account, clusterId, dbName, partnerName, err := DecodePartnerID(d.Id()) if err != nil { diff --git a/internal/service/redshift/partner_test.go b/internal/service/redshift/partner_test.go index 2cffde0a191..a69151b330f 100644 --- a/internal/service/redshift/partner_test.go +++ b/internal/service/redshift/partner_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -94,7 +97,7 @@ func TestAccRedshiftPartner_disappears_cluster(t *testing.T) { func testAccCheckPartnerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_partner" { @@ -128,7 +131,7 @@ func testAccCheckPartnerExists(ctx context.Context, name string) resource.TestCh return fmt.Errorf("No Redshift Partner ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, err := tfredshift.FindPartnerById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/scheduled_action.go b/internal/service/redshift/scheduled_action.go index b42158184b1..d89bdae2fc1 100644 --- a/internal/service/redshift/scheduled_action.go +++ b/internal/service/redshift/scheduled_action.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -149,7 +152,7 @@ func ResourceScheduledAction() *schema.Resource { func resourceScheduledActionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) name := d.Get("name").(string) input := &redshift.CreateScheduledActionInput{ @@ -201,7 +204,7 @@ func resourceScheduledActionCreate(ctx context.Context, d *schema.ResourceData, func resourceScheduledActionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) scheduledAction, err := FindScheduledActionByName(ctx, conn, d.Id()) @@ -248,7 +251,7 @@ func resourceScheduledActionRead(ctx context.Context, d *schema.ResourceData, me func resourceScheduledActionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) input := &redshift.ModifyScheduledActionInput{ ScheduledActionName: aws.String(d.Get("name").(string)), @@ -298,7 +301,7 @@ func resourceScheduledActionUpdate(ctx context.Context, d *schema.ResourceData, func resourceScheduledActionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) log.Printf("[DEBUG] Deleting Redshift Scheduled Action: %s", d.Id()) _, err := conn.DeleteScheduledActionWithContext(ctx, &redshift.DeleteScheduledActionInput{ diff --git a/internal/service/redshift/scheduled_action_test.go b/internal/service/redshift/scheduled_action_test.go index e60b74a3eef..be27bba0167 100644 --- 
a/internal/service/redshift/scheduled_action_test.go +++ b/internal/service/redshift/scheduled_action_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -291,7 +294,7 @@ func TestAccRedshiftScheduledAction_disappears(t *testing.T) { func testAccCheckScheduledActionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_scheduled_action" { @@ -326,7 +329,7 @@ func testAccCheckScheduledActionExists(ctx context.Context, n string, v *redshif return fmt.Errorf("No Redshift Scheduled Action ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) output, err := tfredshift.FindScheduledActionByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/service_account_data_source.go b/internal/service/redshift/service_account_data_source.go index 28cc354fbf0..0e346ab7820 100644 --- a/internal/service/redshift/service_account_data_source.go +++ b/internal/service/redshift/service_account_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( diff --git a/internal/service/redshift/service_package_gen.go b/internal/service/redshift/service_package_gen.go index 826d1bb73b1..f19d90cd576 100644 --- a/internal/service/redshift/service_package_gen.go +++ b/internal/service/redshift/service_package_gen.go @@ -5,6 +5,10 @@ package redshift import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + redshift_sdkv1 "github.com/aws/aws-sdk-go/service/redshift" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -161,4 +165,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Redshift } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*redshift_sdkv1.Redshift, error) { + sess := config["session"].(*session_sdkv1.Session) + + return redshift_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/redshift/snapshot_copy_grant.go b/internal/service/redshift/snapshot_copy_grant.go index 1d2a0b3f937..c26b9fbe912 100644 --- a/internal/service/redshift/snapshot_copy_grant.go +++ b/internal/service/redshift/snapshot_copy_grant.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -60,13 +63,13 @@ func ResourceSnapshotCopyGrant() *schema.Resource { func resourceSnapshotCopyGrantCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) grantName := d.Get("snapshot_copy_grant_name").(string) input := redshift.CreateSnapshotCopyGrantInput{ SnapshotCopyGrantName: aws.String(grantName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_id"); ok { @@ -99,7 +102,7 @@ func resourceSnapshotCopyGrantCreate(ctx context.Context, d *schema.ResourceData func resourceSnapshotCopyGrantRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) grantName := d.Id() @@ -126,7 +129,7 @@ func resourceSnapshotCopyGrantRead(ctx context.Context, d *schema.ResourceData, d.Set("kms_key_id", grant.KmsKeyId) d.Set("snapshot_copy_grant_name", grant.SnapshotCopyGrantName) - SetTagsOut(ctx, grant.Tags) + setTagsOut(ctx, grant.Tags) return diags } @@ -141,7 +144,7 @@ func resourceSnapshotCopyGrantUpdate(ctx context.Context, d *schema.ResourceData func resourceSnapshotCopyGrantDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) grantName := d.Id() diff --git a/internal/service/redshift/snapshot_copy_grant_test.go b/internal/service/redshift/snapshot_copy_grant_test.go index b7d4b88fbb7..d0f7198832b 100644 --- a/internal/service/redshift/snapshot_copy_grant_test.go +++ b/internal/service/redshift/snapshot_copy_grant_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -114,7 +117,7 @@ func TestAccRedshiftSnapshotCopyGrant_disappears(t *testing.T) { func testAccCheckSnapshotCopyGrantDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_snapshot_copy_grant" { @@ -141,7 +144,7 @@ func testAccCheckSnapshotCopyGrantExists(ctx context.Context, name string) resou } // retrieve the client from the test provider - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) input := redshift.DescribeSnapshotCopyGrantsInput{ MaxRecords: aws.Int64(100), diff --git a/internal/service/redshift/snapshot_schedule.go b/internal/service/redshift/snapshot_schedule.go index bb27b86e29d..a457b7776bb 100644 --- a/internal/service/redshift/snapshot_schedule.go +++ b/internal/service/redshift/snapshot_schedule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -76,7 +79,7 @@ func ResourceSnapshotSchedule() *schema.Resource { func resourceSnapshotScheduleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) var identifier string if v, ok := d.GetOk("identifier"); ok { @@ -91,7 +94,7 @@ func resourceSnapshotScheduleCreate(ctx context.Context, d *schema.ResourceData, createOpts := &redshift.CreateSnapshotScheduleInput{ ScheduleIdentifier: aws.String(identifier), ScheduleDefinitions: flex.ExpandStringSet(d.Get("definitions").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if attr, ok := d.GetOk("description"); ok { createOpts.ScheduleDescription = aws.String(attr.(string)) @@ -109,7 +112,7 @@ func resourceSnapshotScheduleCreate(ctx context.Context, d *schema.ResourceData, func resourceSnapshotScheduleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) descOpts := &redshift.DescribeSnapshotSchedulesInput{ ScheduleIdentifier: aws.String(d.Id()), @@ -133,7 +136,7 @@ func resourceSnapshotScheduleRead(ctx context.Context, d *schema.ResourceData, m return sdkdiag.AppendErrorf(diags, "setting definitions: %s", err) } - SetTagsOut(ctx, snapshotSchedule.Tags) + setTagsOut(ctx, snapshotSchedule.Tags) arn := arn.ARN{ Partition: meta.(*conns.AWSClient).Partition, @@ -150,7 +153,7 @@ func resourceSnapshotScheduleRead(ctx context.Context, d *schema.ResourceData, m func resourceSnapshotScheduleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChange("definitions") 
{ modifyOpts := &redshift.ModifySnapshotScheduleInput{ @@ -168,7 +171,7 @@ func resourceSnapshotScheduleUpdate(ctx context.Context, d *schema.ResourceData, func resourceSnapshotScheduleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.Get("force_destroy").(bool) { if err := resourceSnapshotScheduleDeleteAllAssociatedClusters(ctx, conn, d.Id()); err != nil { diff --git a/internal/service/redshift/snapshot_schedule_association.go b/internal/service/redshift/snapshot_schedule_association.go index 239df862c45..b369cf4fcac 100644 --- a/internal/service/redshift/snapshot_schedule_association.go +++ b/internal/service/redshift/snapshot_schedule_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -43,7 +46,7 @@ func ResourceSnapshotScheduleAssociation() *schema.Resource { func resourceSnapshotScheduleAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) clusterIdentifier := d.Get("cluster_identifier").(string) scheduleIdentifier := d.Get("schedule_identifier").(string) @@ -68,7 +71,7 @@ func resourceSnapshotScheduleAssociationCreate(ctx context.Context, d *schema.Re func resourceSnapshotScheduleAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) scheduleIdentifier, assoicatedCluster, err := FindScheduleAssociationById(ctx, conn, d.Id()) @@ -89,7 +92,7 @@ func resourceSnapshotScheduleAssociationRead(ctx context.Context, d *schema.Reso func resourceSnapshotScheduleAssociationDelete(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) clusterIdentifier, scheduleIdentifier, err := SnapshotScheduleAssociationParseID(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "deleting Redshift Cluster Snapshot Schedule (%s): %s", d.Id(), err) diff --git a/internal/service/redshift/snapshot_schedule_association_test.go b/internal/service/redshift/snapshot_schedule_association_test.go index d49148d989d..79c8b3b868f 100644 --- a/internal/service/redshift/snapshot_schedule_association_test.go +++ b/internal/service/redshift/snapshot_schedule_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -99,7 +102,7 @@ func testAccCheckSnapshotScheduleAssociationDestroy(ctx context.Context) resourc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, _, err := tfredshift.FindScheduleAssociationById(ctx, conn, rs.Primary.ID) @@ -129,7 +132,7 @@ func testAccCheckSnapshotScheduleAssociationExists(ctx context.Context, n string return fmt.Errorf("No Redshift Cluster Snapshot Schedule Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, _, err := tfredshift.FindScheduleAssociationById(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/snapshot_schedule_test.go b/internal/service/redshift/snapshot_schedule_test.go index 793043d0aa8..6aa5d6f0746 100644 --- a/internal/service/redshift/snapshot_schedule_test.go +++ b/internal/service/redshift/snapshot_schedule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -236,7 +239,7 @@ func testAccCheckSnapshotScheduleDestroy(ctx context.Context) resource.TestCheck continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) resp, err := conn.DescribeSnapshotSchedulesWithContext(ctx, &redshift.DescribeSnapshotSchedulesInput{ ScheduleIdentifier: aws.String(rs.Primary.ID), }) @@ -269,7 +272,7 @@ func testAccCheckSnapshotScheduleExists(ctx context.Context, n string, v *redshi return fmt.Errorf("No Redshift Cluster Snapshot Schedule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) resp, err := conn.DescribeSnapshotSchedulesWithContext(ctx, &redshift.DescribeSnapshotSchedulesInput{ ScheduleIdentifier: aws.String(rs.Primary.ID), }) @@ -291,7 +294,7 @@ func testAccCheckSnapshotScheduleExists(ctx context.Context, n string, v *redshi func testAccCheckSnapshotScheduleCreateSnapshotScheduleAssociation(ctx context.Context, cluster *redshift.Cluster, snapshotSchedule *redshift.SnapshotSchedule) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) if _, err := conn.ModifyClusterSnapshotScheduleWithContext(ctx, &redshift.ModifyClusterSnapshotScheduleInput{ ClusterIdentifier: cluster.ClusterIdentifier, diff --git a/internal/service/redshift/status.go b/internal/service/redshift/status.go index 207d8c2f9f2..4c75a5cc0e8 100644 --- a/internal/service/redshift/status.go +++ b/internal/service/redshift/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( diff --git a/internal/service/redshift/subnet_group.go b/internal/service/redshift/subnet_group.go index 5b913c31ca5..746b17278e0 100644 --- a/internal/service/redshift/subnet_group.go +++ b/internal/service/redshift/subnet_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -69,7 +72,7 @@ func ResourceSubnetGroup() *schema.Resource { func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) subnetIdsSet := d.Get("subnet_ids").(*schema.Set) subnetIds := make([]*string, subnetIdsSet.Len()) @@ -82,7 +85,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta ClusterSubnetGroupName: aws.String(name), Description: aws.String(d.Get("description").(string)), SubnetIds: subnetIds, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating Redshift Subnet Group: %s", input) @@ -99,7 +102,7 @@ func resourceSubnetGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) subnetgroup, err := FindSubnetGroupByName(ctx, conn, d.Id()) @@ -125,14 +128,14 @@ func resourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("name", d.Id()) d.Set("subnet_ids", subnetIdsToSlice(subnetgroup.Subnets)) - SetTagsOut(ctx, subnetgroup.Tags) + setTagsOut(ctx, subnetgroup.Tags) return diags } func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChanges("subnet_ids", "description") { _, n := d.GetChange("subnet_ids") @@ -165,7 +168,7 @@ func resourceSubnetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceSubnetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) log.Printf("[DEBUG] Deleting Redshift Subnet Group: %s", d.Id()) _, err := conn.DeleteClusterSubnetGroupWithContext(ctx, &redshift.DeleteClusterSubnetGroupInput{ diff --git a/internal/service/redshift/subnet_group_data_source.go b/internal/service/redshift/subnet_group_data_source.go index 6bbf3f3ef9f..2f93c108059 100644 --- a/internal/service/redshift/subnet_group_data_source.go +++ b/internal/service/redshift/subnet_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -44,7 +47,7 @@ func DataSourceSubnetGroup() *schema.Resource { func dataSourceSubnetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig subnetgroup, err := FindSubnetGroupByName(ctx, conn, d.Get("name").(string)) diff --git a/internal/service/redshift/subnet_group_data_source_test.go b/internal/service/redshift/subnet_group_data_source_test.go index 6207f1fbd08..8f5899f9f39 100644 --- a/internal/service/redshift/subnet_group_data_source_test.go +++ b/internal/service/redshift/subnet_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( diff --git a/internal/service/redshift/subnet_group_test.go b/internal/service/redshift/subnet_group_test.go index 41eef8c70e5..4e4b334b379 100644 --- a/internal/service/redshift/subnet_group_test.go +++ b/internal/service/redshift/subnet_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -186,7 +189,7 @@ func TestAccRedshiftSubnetGroup_tags(t *testing.T) { func testAccCheckSubnetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_subnet_group" { @@ -221,7 +224,7 @@ func testAccCheckSubnetGroupExists(ctx context.Context, n string, v *redshift.Cl return fmt.Errorf("No Redshift Subnet Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) output, err := tfredshift.FindSubnetGroupByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/sweep.go b/internal/service/redshift/sweep.go index daa0bdce7bb..b31e27a6bd2 100644 --- a/internal/service/redshift/sweep.go +++ b/internal/service/redshift/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/redshift" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -71,11 +73,11 @@ func init() { func sweepClusterSnapshots(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -101,7 +103,7 @@ func sweepClusterSnapshots(region string) error { // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Snapshots for %s: %w", region, err)) } @@ -115,13 +117,13 @@ func sweepClusterSnapshots(region string) error { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -148,7 +150,7 @@ func sweepClusters(region string) error { // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping 
Redshift Clusters for %s: %w", region, err)) } @@ -162,13 +164,13 @@ func sweepClusters(region string) error { func sweepEventSubscriptions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -192,7 +194,7 @@ func sweepEventSubscriptions(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing Redshift Event Subscriptions: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Event Subscriptions for %s: %w", region, err)) } @@ -206,11 +208,11 @@ func sweepEventSubscriptions(region string) error { func sweepScheduledActions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) input := &redshift.DescribeScheduledActionsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -239,7 +241,7 @@ func sweepScheduledActions(region string) error { return fmt.Errorf("listing Redshift Scheduled Actions (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("sweeping Redshift Scheduled Actions (%s): %w", region, err) @@ -250,13 +252,13 @@ func sweepScheduledActions(region string) error { func sweepSnapshotSchedules(region string) error { ctx := sweep.Context(region) - 
client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -291,7 +293,7 @@ func sweepSnapshotSchedules(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing Redshift Snapshot Schedules: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Snapshot Schedules for %s: %w", region, err)) } @@ -305,13 +307,13 @@ func sweepSnapshotSchedules(region string) error { func sweepSubnetGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -347,7 +349,7 @@ func sweepSubnetGroups(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing Redshift Subnet Groups: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Subnet Groups for %s: %w", region, err)) } @@ -361,13 +363,13 @@ func sweepSubnetGroups(region string) error { func sweepHSMClientCertificates(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) 
} - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -393,7 +395,7 @@ func sweepHSMClientCertificates(region string) error { // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Hsm Client Certificates for %s: %w", region, err)) } @@ -407,13 +409,13 @@ func sweepHSMClientCertificates(region string) error { func sweepHSMConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -439,7 +441,7 @@ func sweepHSMConfigurations(region string) error { // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Hsm Configurations for %s: %w", region, err)) } @@ -453,13 +455,13 @@ func sweepHSMConfigurations(region string) error { func sweepAuthenticationProfiles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).RedshiftConn() + conn := client.RedshiftConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -483,7 +485,7 @@ func sweepAuthenticationProfiles(region string) error 
{ sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Redshift Authentication Profiles for %s: %w", region, err)) } diff --git a/internal/service/redshift/tags_gen.go b/internal/service/redshift/tags_gen.go index c51490140a3..c162b2fb65d 100644 --- a/internal/service/redshift/tags_gen.go +++ b/internal/service/redshift/tags_gen.go @@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*redshift.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns redshift service tags from Context. +// getTagsIn returns redshift service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*redshift.Tag { +func getTagsIn(ctx context.Context) []*redshift.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*redshift.Tag { return nil } -// SetTagsOut sets redshift service tags in Context. -func SetTagsOut(ctx context.Context, tags []*redshift.Tag) { +// setTagsOut sets redshift service tags in Context. +func setTagsOut(ctx context.Context, tags []*redshift.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates redshift service tags. +// updateTags updates redshift service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn redshiftiface.RedshiftAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn redshiftiface.RedshiftAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn redshiftiface.RedshiftAPI, identifier // UpdateTags updates redshift service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RedshiftConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RedshiftConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/redshift/usage_limit.go b/internal/service/redshift/usage_limit.go index a2963a69f7d..37e403ad3f2 100644 --- a/internal/service/redshift/usage_limit.go +++ b/internal/service/redshift/usage_limit.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -82,7 +85,7 @@ func ResourceUsageLimit() *schema.Resource { func resourceUsageLimitCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) clusterId := d.Get("cluster_identifier").(string) input := redshift.CreateUsageLimitInput{ @@ -90,7 +93,7 @@ func resourceUsageLimitCreate(ctx context.Context, d *schema.ResourceData, meta ClusterIdentifier: aws.String(clusterId), FeatureType: aws.String(d.Get("feature_type").(string)), LimitType: aws.String(d.Get("limit_type").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("breach_action"); ok { @@ -114,7 +117,7 @@ func resourceUsageLimitCreate(ctx context.Context, d *schema.ResourceData, meta func resourceUsageLimitRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) out, err := FindUsageLimitByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -143,14 +146,14 @@ func resourceUsageLimitRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("breach_action", out.BreachAction) d.Set("cluster_identifier", out.ClusterIdentifier) - SetTagsOut(ctx, out.Tags) + setTagsOut(ctx, out.Tags) return diags } func resourceUsageLimitUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &redshift.ModifyUsageLimitInput{ @@ -176,7 +179,7 @@ func resourceUsageLimitUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceUsageLimitDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftConn() + conn := meta.(*conns.AWSClient).RedshiftConn(ctx) deleteInput := redshift.DeleteUsageLimitInput{ UsageLimitId: aws.String(d.Id()), diff --git a/internal/service/redshift/usage_limit_test.go b/internal/service/redshift/usage_limit_test.go index 38312462782..d408fd19b6f 100644 --- a/internal/service/redshift/usage_limit_test.go +++ b/internal/service/redshift/usage_limit_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshift_test import ( @@ -130,7 +133,7 @@ func TestAccRedshiftUsageLimit_disappears(t *testing.T) { func testAccCheckUsageLimitDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshift_usage_limit" { @@ -164,7 +167,7 @@ func testAccCheckUsageLimitExists(ctx context.Context, name string) resource.Tes return fmt.Errorf("Snapshot Copy Grant ID (UsageLimitName) is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftConn(ctx) _, err := tfredshift.FindUsageLimitByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshift/wait.go b/internal/service/redshift/wait.go index 521ed8002d2..94faa70248d 100644 --- a/internal/service/redshift/wait.go +++ b/internal/service/redshift/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshift import ( @@ -42,7 +45,7 @@ func waitClusterCreated(ctx context.Context, conn *redshift.Redshift, id string, func waitClusterDeleted(ctx context.Context, conn *redshift.Redshift, id string, timeout time.Duration) (*redshift.Cluster, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{clusterAvailabilityStatusModifying}, + Pending: []string{clusterAvailabilityStatusMaintenance, clusterAvailabilityStatusModifying}, Target: []string{}, Refresh: statusClusterAvailability(ctx, conn, id), Timeout: timeout, diff --git a/internal/service/redshiftdata/generate.go b/internal/service/redshiftdata/generate.go new file mode 100644 index 00000000000..c8456bd36dd --- /dev/null +++ b/internal/service/redshiftdata/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package redshiftdata diff --git a/internal/service/redshiftdata/service_package_gen.go b/internal/service/redshiftdata/service_package_gen.go index 45dc6d420bf..3c625e94bcd 100644 --- a/internal/service/redshiftdata/service_package_gen.go +++ b/internal/service/redshiftdata/service_package_gen.go @@ -5,6 +5,10 @@ package redshiftdata import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + redshiftdataapiservice_sdkv1 "github.com/aws/aws-sdk-go/service/redshiftdataapiservice" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -36,4 +40,13 @@ func (p *servicePackage) ServicePackageName() string { return names.RedshiftData } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*redshiftdataapiservice_sdkv1.RedshiftDataAPIService, error) { + sess := config["session"].(*session_sdkv1.Session) + + return redshiftdataapiservice_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/redshiftdata/statement.go b/internal/service/redshiftdata/statement.go index 023597b2549..fe4603a1606 100644 --- a/internal/service/redshiftdata/statement.go +++ b/internal/service/redshiftdata/statement.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftdata import ( @@ -102,7 +105,7 @@ func ResourceStatement() *schema.Resource { func resourceStatementCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftDataConn() + conn := meta.(*conns.AWSClient).RedshiftDataConn(ctx) input := &redshiftdataapiservice.ExecuteStatementInput{ Database: aws.String(d.Get("database").(string)), @@ -151,7 +154,7 @@ func resourceStatementCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceStatementRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftDataConn() + conn := meta.(*conns.AWSClient).RedshiftDataConn(ctx) sub, err := FindStatementByID(ctx, conn, d.Id()) diff --git a/internal/service/redshiftdata/statement_test.go b/internal/service/redshiftdata/statement_test.go index 9b0e3449634..ab876d1a3d1 100644 --- a/internal/service/redshiftdata/statement_test.go +++ b/internal/service/redshiftdata/statement_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftdata_test import ( @@ -89,7 +92,7 @@ func testAccCheckStatementExists(ctx context.Context, n string, v *redshiftdataa return fmt.Errorf("No Redshift Data Statement ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftDataConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftDataConn(ctx) output, err := tfredshiftdata.FindStatementByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshiftserverless/credentials_data_source.go b/internal/service/redshiftserverless/credentials_data_source.go index 367f8cc15b6..06993853c1c 100644 --- a/internal/service/redshiftserverless/credentials_data_source.go +++ b/internal/service/redshiftserverless/credentials_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -52,7 +55,7 @@ func DataSourceCredentials() *schema.Resource { func dataSourceCredentialsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) workgroupName := d.Get("workgroup_name").(string) input := &redshiftserverless.GetCredentialsInput{ diff --git a/internal/service/redshiftserverless/credentials_data_source_test.go b/internal/service/redshiftserverless/credentials_data_source_test.go index f68d266b697..21acb11e89f 100644 --- a/internal/service/redshiftserverless/credentials_data_source_test.go +++ b/internal/service/redshiftserverless/credentials_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( diff --git a/internal/service/redshiftserverless/endpoint_access.go b/internal/service/redshiftserverless/endpoint_access.go index 57a4c8d1fa6..a98438afcef 100644 --- a/internal/service/redshiftserverless/endpoint_access.go +++ b/internal/service/redshiftserverless/endpoint_access.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -114,7 +117,7 @@ func ResourceEndpointAccess() *schema.Resource { func resourceEndpointAccessCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) input := redshiftserverless.CreateEndpointAccessInput{ WorkgroupName: aws.String(d.Get("workgroup_name").(string)), @@ -146,7 +149,7 @@ func resourceEndpointAccessCreate(ctx context.Context, d *schema.ResourceData, m func resourceEndpointAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) out, err := FindEndpointAccessByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -182,7 +185,7 @@ func resourceEndpointAccessRead(ctx context.Context, d *schema.ResourceData, met func resourceEndpointAccessUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) input := &redshiftserverless.UpdateEndpointAccessInput{ EndpointName: aws.String(d.Id()), @@ -206,7 +209,7 @@ func resourceEndpointAccessUpdate(ctx context.Context, d *schema.ResourceData, m func 
resourceEndpointAccessDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) deleteInput := redshiftserverless.DeleteEndpointAccessInput{ EndpointName: aws.String(d.Id()), diff --git a/internal/service/redshiftserverless/endpoint_access_test.go b/internal/service/redshiftserverless/endpoint_access_test.go index 1dbcf1599f4..9a9afee1327 100644 --- a/internal/service/redshiftserverless/endpoint_access_test.go +++ b/internal/service/redshiftserverless/endpoint_access_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( @@ -107,7 +110,7 @@ func TestAccRedshiftServerlessEndpointAccess_disappears(t *testing.T) { func testAccCheckEndpointAccessDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshiftserverless_endpoint_access" { @@ -141,7 +144,7 @@ func testAccCheckEndpointAccessExists(ctx context.Context, name string) resource return fmt.Errorf("Redshift Serverless EndpointAccess ID is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) _, err := tfredshiftserverless.FindEndpointAccessByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshiftserverless/find.go b/internal/service/redshiftserverless/find.go index 2d0a80171e7..4d3fa28bdd8 100644 --- a/internal/service/redshiftserverless/find.go +++ b/internal/service/redshiftserverless/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -35,31 +38,6 @@ func FindNamespaceByName(ctx context.Context, conn *redshiftserverless.RedshiftS return output.Namespace, nil } -func FindWorkgroupByName(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string) (*redshiftserverless.Workgroup, error) { - input := &redshiftserverless.GetWorkgroupInput{ - WorkgroupName: aws.String(name), - } - - output, err := conn.GetWorkgroupWithContext(ctx, input) - - if tfawserr.ErrCodeEquals(err, redshiftserverless.ErrCodeResourceNotFoundException) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: input, - } - } - - if err != nil { - return nil, err - } - - if output == nil { - return nil, tfresource.NewEmptyResultError(input) - } - - return output.Workgroup, nil -} - func FindEndpointAccessByName(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string) (*redshiftserverless.EndpointAccess, error) { input := &redshiftserverless.GetEndpointAccessInput{ EndpointName: aws.String(name), diff --git a/internal/service/redshiftserverless/generate.go b/internal/service/redshiftserverless/generate.go index 22a42b8b27a..47d2b4c39b3 100644 --- a/internal/service/redshiftserverless/generate.go +++ b/internal/service/redshiftserverless/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package redshiftserverless diff --git a/internal/service/redshiftserverless/namespace.go b/internal/service/redshiftserverless/namespace.go index 044b71744b7..01b141e5b6b 100644 --- a/internal/service/redshiftserverless/namespace.go +++ b/internal/service/redshiftserverless/namespace.go @@ -1,11 +1,17 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( "context" "log" + "regexp" + "strings" "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" "github.com/aws/aws-sdk-go/service/redshiftserverless" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" @@ -102,12 +108,12 @@ func ResourceNamespace() *schema.Resource { func resourceNamespaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) name := d.Get("namespace_name").(string) input := &redshiftserverless.CreateNamespaceInput{ NamespaceName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("admin_user_password"); ok { @@ -151,7 +157,7 @@ func resourceNamespaceCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) output, err := FindNamespaceByName(ctx, conn, d.Id()) @@ -170,7 +176,7 @@ func resourceNamespaceRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("arn", arn) d.Set("db_name", output.DbName) d.Set("default_iam_role_arn", output.DefaultIamRoleArn) - d.Set("iam_roles", aws.StringValueSlice(output.IamRoles)) + d.Set("iam_roles", flattenNamespaceIAMRoles(output.IamRoles)) 
d.Set("kms_key_id", output.KmsKeyId) d.Set("log_exports", aws.StringValueSlice(output.LogExports)) d.Set("namespace_id", output.NamespaceId) @@ -181,7 +187,7 @@ func resourceNamespaceRead(ctx context.Context, d *schema.ResourceData, meta int func resourceNamespaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &redshiftserverless.UpdateNamespaceInput{ @@ -225,7 +231,7 @@ func resourceNamespaceUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceNamespaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) log.Printf("[DEBUG] Deleting Redshift Serverless Namespace: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, 10*time.Minute, @@ -251,3 +257,42 @@ func resourceNamespaceDelete(ctx context.Context, d *schema.ResourceData, meta i return diags } + +var ( + reIAMRole = regexp.MustCompile(`^\s*IamRole\((.*)\)\s*$`) +) + +func flattenNamespaceIAMRoles(iamRoles []*string) []string { + var tfList []string + + for _, iamRole := range iamRoles { + iamRole := aws.StringValue(iamRole) + + if arn.IsARN(iamRole) { + tfList = append(tfList, iamRole) + continue + } + + // e.g. 
"IamRole(applyStatus=in-sync, iamRoleArn=arn:aws:iam::123456789012:role/service-role/test)" + if m := reIAMRole.FindStringSubmatch(iamRole); len(m) > 0 { + var key string + s := m[1] + for s != "" { + key, s, _ = strings.Cut(s, ",") + key = strings.TrimSpace(key) + if key == "" { + continue + } + key, value, _ := strings.Cut(key, "=") + if key == "iamRoleArn" { + tfList = append(tfList, value) + break + } + } + + continue + } + } + + return tfList +} diff --git a/internal/service/redshiftserverless/namespace_data_source.go b/internal/service/redshiftserverless/namespace_data_source.go index 2ba99a82341..3ddf6e3481d 100644 --- a/internal/service/redshiftserverless/namespace_data_source.go +++ b/internal/service/redshiftserverless/namespace_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -63,7 +66,7 @@ func DataSourceNamespace() *schema.Resource { func dataSourceNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) namespaceName := d.Get("namespace_name").(string) diff --git a/internal/service/redshiftserverless/namespace_data_source_test.go b/internal/service/redshiftserverless/namespace_data_source_test.go index 56cc15cb7c1..2770a51a3db 100644 --- a/internal/service/redshiftserverless/namespace_data_source_test.go +++ b/internal/service/redshiftserverless/namespace_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( diff --git a/internal/service/redshiftserverless/namespace_test.go b/internal/service/redshiftserverless/namespace_test.go index 1dac82f85d5..f8cf42cdaaf 100644 --- a/internal/service/redshiftserverless/namespace_test.go +++ b/internal/service/redshiftserverless/namespace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( @@ -52,8 +55,9 @@ func TestAccRedshiftServerlessNamespace_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "namespace_name", rName), resource.TestCheckResourceAttrSet(resourceName, "namespace_id"), resource.TestCheckResourceAttr(resourceName, "log_exports.#", "0"), - resource.TestCheckResourceAttr(resourceName, "iam_roles.#", "1"), - resource.TestCheckTypeSetElemAttrPair(resourceName, "iam_roles.*", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttr(resourceName, "iam_roles.#", "2"), + resource.TestCheckTypeSetElemAttrPair(resourceName, "iam_roles.*", "aws_iam_role.test.0", "arn"), + resource.TestCheckTypeSetElemAttrPair(resourceName, "iam_roles.*", "aws_iam_role.test.1", "arn"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -77,7 +81,7 @@ func TestAccRedshiftServerlessNamespace_defaultIAMRole(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckNamespaceExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "namespace_name", rName), - resource.TestCheckResourceAttrPair(resourceName, "default_iam_role_arn", "aws_iam_role.test", "arn"), + resource.TestCheckResourceAttrPair(resourceName, "default_iam_role_arn", "aws_iam_role.test.0", "arn"), ), }, { @@ -189,9 +193,34 @@ func TestAccRedshiftServerlessNamespace_disappears(t *testing.T) { }) } +func TestAccRedshiftServerlessNamespace_withWorkgroup(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_redshiftserverless_namespace.test" + rName 
:= sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, redshiftserverless.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckNamespaceDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccNamespaceConfig_withWorkgroup(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckNamespaceExists(ctx, resourceName), + ), + }, + { + Config: testAccNamespaceConfig_withWorkgroup(rName), + PlanOnly: true, + }, + }, + }) +} + func testAccCheckNamespaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshiftserverless_namespace" { @@ -225,7 +254,7 @@ func testAccCheckNamespaceExists(ctx context.Context, name string) resource.Test return fmt.Errorf("Redshift Serverless Namespace is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) _, err := tfredshiftserverless.FindNamespaceByName(ctx, conn, rs.Primary.ID) @@ -233,6 +262,46 @@ func testAccCheckNamespaceExists(ctx context.Context, name string) resource.Test } } +func testAccNamespaceConfig_baseIAMRole(rName string, n int) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + count = %[2]d + + name = "%[1]s-${count.index}" + path = "/service-role/" + + assume_role_policy = < 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*redshiftserverless.Tag { return nil } -// SetTagsOut sets redshiftserverless service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*redshiftserverless.Tag) { +// setTagsOut sets redshiftserverless service tags in Context. +func setTagsOut(ctx context.Context, tags []*redshiftserverless.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates redshiftserverless service tags. +// updateTags updates redshiftserverless service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn redshiftserverlessiface.RedshiftServerlessAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn redshiftserverlessiface.RedshiftServerlessAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn redshiftserverlessiface.RedshiftServer // UpdateTags updates redshiftserverless service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RedshiftServerlessConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RedshiftServerlessConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/redshiftserverless/usage_limit.go b/internal/service/redshiftserverless/usage_limit.go index 757cdaf2227..248df18f9a2 100644 --- a/internal/service/redshiftserverless/usage_limit.go +++ b/internal/service/redshiftserverless/usage_limit.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -67,7 +70,7 @@ func ResourceUsageLimit() *schema.Resource { func resourceUsageLimitCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) input := redshiftserverless.CreateUsageLimitInput{ Amount: aws.Int64(int64(d.Get("amount").(int))), @@ -96,7 +99,7 @@ func resourceUsageLimitCreate(ctx context.Context, d *schema.ResourceData, meta func resourceUsageLimitRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) out, err := FindUsageLimitByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -121,7 +124,7 @@ func resourceUsageLimitRead(ctx context.Context, d *schema.ResourceData, meta in func resourceUsageLimitUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) input := &redshiftserverless.UpdateUsageLimitInput{ UsageLimitId: aws.String(d.Id()), @@ -145,7 +148,7 @@ func resourceUsageLimitUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceUsageLimitDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) log.Printf("[DEBUG] Deleting Redshift Serverless Usage Limit: %s", d.Id()) _, err := conn.DeleteUsageLimitWithContext(ctx, &redshiftserverless.DeleteUsageLimitInput{ diff --git a/internal/service/redshiftserverless/usage_limit_test.go 
b/internal/service/redshiftserverless/usage_limit_test.go index e07a9cfdd3f..203c6b3cb58 100644 --- a/internal/service/redshiftserverless/usage_limit_test.go +++ b/internal/service/redshiftserverless/usage_limit_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( @@ -79,7 +82,7 @@ func TestAccRedshiftServerlessUsageLimit_disappears(t *testing.T) { func testAccCheckUsageLimitDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshiftserverless_usage_limit" { @@ -113,7 +116,7 @@ func testAccCheckUsageLimitExists(ctx context.Context, name string) resource.Tes return fmt.Errorf("Redshift Serverless Usage Limit is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) _, err := tfredshiftserverless.FindUsageLimitByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/redshiftserverless/wait.go b/internal/service/redshiftserverless/wait.go index 9f33a8c0da4..71d972d7457 100644 --- a/internal/service/redshiftserverless/wait.go +++ b/internal/service/redshiftserverless/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -48,48 +51,6 @@ func waitNamespaceUpdated(ctx context.Context, conn *redshiftserverless.Redshift return nil, err } -func waitWorkgroupAvailable(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string, wait time.Duration) (*redshiftserverless.Workgroup, error) { //nolint:unparam - stateConf := &retry.StateChangeConf{ - Pending: []string{ - redshiftserverless.WorkgroupStatusCreating, - redshiftserverless.WorkgroupStatusModifying, - }, - Target: []string{ - redshiftserverless.WorkgroupStatusAvailable, - }, - Refresh: statusWorkgroup(ctx, conn, name), - Timeout: wait, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - - if output, ok := outputRaw.(*redshiftserverless.Workgroup); ok { - return output, err - } - - return nil, err -} - -func waitWorkgroupDeleted(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string, wait time.Duration) (*redshiftserverless.Workgroup, error) { - stateConf := &retry.StateChangeConf{ - Pending: []string{ - redshiftserverless.WorkgroupStatusModifying, - redshiftserverless.WorkgroupStatusDeleting, - }, - Target: []string{}, - Refresh: statusWorkgroup(ctx, conn, name), - Timeout: wait, - } - - outputRaw, err := stateConf.WaitForStateContext(ctx) - - if output, ok := outputRaw.(*redshiftserverless.Workgroup); ok { - return output, err - } - - return nil, err -} - func waitEndpointAccessActive(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string) (*redshiftserverless.EndpointAccess, error) { //nolint:unparam stateConf := &retry.StateChangeConf{ Pending: []string{ diff --git a/internal/service/redshiftserverless/workgroup.go b/internal/service/redshiftserverless/workgroup.go index 33a6742ae0c..adf5b79ae57 100644 --- a/internal/service/redshiftserverless/workgroup.go +++ b/internal/service/redshiftserverless/workgroup.go @@ -1,7 +1,11 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( "context" + "fmt" "log" "time" @@ -9,6 +13,7 @@ import ( "github.com/aws/aws-sdk-go/service/redshiftserverless" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -50,15 +55,33 @@ func ResourceWorkgroup() *schema.Resource { Computed: true, }, "config_parameter": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "parameter_key": { - Type: schema.TypeString, - ValidateFunc: validation.StringInSlice([]string{"datestyle", "enable_user_activity_logging", "query_group", "search_path", "max_query_execution_time"}, false), - Required: true, + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{ + // https://docs.aws.amazon.com/redshift-serverless/latest/APIReference/API_CreateWorkgroup.html#redshiftserverless-CreateWorkgroup-request-configParameters + "auto_mv", + "datestyle", + "enable_case_sensitive_identifier", // "ValidationException: The parameter key enable_case_sensitivity_identifier isn't supported. 
Supported values: [[max_query_cpu_usage_percent, max_join_row_count, auto_mv, max_query_execution_time, max_query_queue_time, max_query_blocks_read, max_return_row_count, search_path, datestyle, max_query_cpu_time, max_io_skew, max_scan_row_count, query_group, enable_user_activity_logging, enable_case_sensitive_identifier, max_nested_loop_join_row_count, max_query_temp_blocks_to_disk, max_cpu_skew]]" + "enable_user_activity_logging", + "query_group", + "search_path", + // https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html#cm-c-wlm-query-monitoring-metrics-serverless + "max_query_cpu_time", + "max_query_blocks_read", + "max_scan_row_count", + "max_query_execution_time", + "max_query_queue_time", + "max_query_cpu_usage_percent", + "max_query_temp_blocks_to_disk", + "max_join_row_count", + "max_nested_loop_join_row_count", + }, false), + Required: true, }, "parameter_value": { Type: schema.TypeString, @@ -85,14 +108,6 @@ func ResourceWorkgroup() *schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "vpc_endpoint_id": { - Type: schema.TypeString, - Computed: true, - }, - "vpc_id": { - Type: schema.TypeString, - Computed: true, - }, "network_interface": { Type: schema.TypeList, Computed: true, @@ -117,6 +132,14 @@ func ResourceWorkgroup() *schema.Resource { }, }, }, + "vpc_endpoint_id": { + Type: schema.TypeString, + Computed: true, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, @@ -171,20 +194,21 @@ func ResourceWorkgroup() *schema.Resource { func resourceWorkgroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) + name := d.Get("workgroup_name").(string) input := redshiftserverless.CreateWorkgroupInput{ NamespaceName: aws.String(d.Get("namespace_name").(string)), - Tags: 
GetTagsIn(ctx), - WorkgroupName: aws.String(d.Get("workgroup_name").(string)), + Tags: getTagsIn(ctx), + WorkgroupName: aws.String(name), } if v, ok := d.GetOk("base_capacity"); ok { input.BaseCapacity = aws.Int64(int64(v.(int))) } - if v, ok := d.GetOk("config_parameter"); ok && len(v.([]interface{})) > 0 { - input.ConfigParameters = expandConfigParameters(v.([]interface{})) + if v, ok := d.GetOk("config_parameter"); ok && v.(*schema.Set).Len() > 0 { + input.ConfigParameters = expandConfigParameters(v.(*schema.Set).List()) } if v, ok := d.GetOk("enhanced_vpc_routing"); ok { @@ -203,16 +227,16 @@ func resourceWorkgroupCreate(ctx context.Context, d *schema.ResourceData, meta i input.SubnetIds = flex.ExpandStringSet(v.(*schema.Set)) } - out, err := conn.CreateWorkgroupWithContext(ctx, &input) + output, err := conn.CreateWorkgroupWithContext(ctx, &input) if err != nil { - return sdkdiag.AppendErrorf(diags, "creating Redshift Serverless Workgroup : %s", err) + return sdkdiag.AppendErrorf(diags, "creating Redshift Serverless Workgroup (%s): %s", name, err) } - d.SetId(aws.StringValue(out.Workgroup.WorkgroupName)) + d.SetId(aws.StringValue(output.Workgroup.WorkgroupName)) if _, err := waitWorkgroupAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Redshift Serverless Workgroup (%s) to be created: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "waiting for Redshift Serverless Workgroup (%s) create: %s", d.Id(), err) } return append(diags, resourceWorkgroupRead(ctx, d, meta)...) 
@@ -220,9 +244,10 @@ func resourceWorkgroupCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkgroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) out, err := FindWorkgroupByName(ctx, conn, d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Redshift Serverless Workgroup (%s) not found, removing from state", d.Id()) d.SetId("") @@ -236,6 +261,12 @@ func resourceWorkgroupRead(ctx context.Context, d *schema.ResourceData, meta int arn := aws.StringValue(out.WorkgroupArn) d.Set("arn", arn) d.Set("base_capacity", out.BaseCapacity) + if err := d.Set("config_parameter", flattenConfigParameters(out.ConfigParameters)); err != nil { + return sdkdiag.AppendErrorf(diags, "setting config_parameter: %s", err) + } + if err := d.Set("endpoint", []interface{}{flattenEndpoint(out.Endpoint)}); err != nil { + return sdkdiag.AppendErrorf(diags, "setting endpoint: %s", err) + } d.Set("enhanced_vpc_routing", out.EnhancedVpcRouting) d.Set("namespace_name", out.NamespaceName) d.Set("publicly_accessible", out.PubliclyAccessible) @@ -243,57 +274,79 @@ func resourceWorkgroupRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("subnet_ids", flex.FlattenStringSet(out.SubnetIds)) d.Set("workgroup_id", out.WorkgroupId) d.Set("workgroup_name", out.WorkgroupName) - if err := d.Set("config_parameter", flattenConfigParameters(out.ConfigParameters)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting config_parameter: %s", err) - } - - if err := d.Set("endpoint", []interface{}{flattenEndpoint(out.Endpoint)}); err != nil { - return sdkdiag.AppendErrorf(diags, "setting endpoint: %s", err) - } return diags } func resourceWorkgroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - 
conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) + + // You can't update multiple parameters in one request. - if d.HasChangesExcept("tags", "tags_all") { + if d.HasChange("base_capacity") { input := &redshiftserverless.UpdateWorkgroupInput{ + BaseCapacity: aws.Int64(int64(d.Get("base_capacity").(int))), WorkgroupName: aws.String(d.Id()), } - if v, ok := d.GetOk("base_capacity"); ok { - input.BaseCapacity = aws.Int64(int64(v.(int))) + if err := updateWorkgroup(ctx, conn, input, d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendFromErr(diags, err) } + } - if v, ok := d.GetOk("config_parameter"); ok && len(v.([]interface{})) > 0 { - input.ConfigParameters = expandConfigParameters(v.([]interface{})) + if d.HasChange("config_parameter") { + input := &redshiftserverless.UpdateWorkgroupInput{ + ConfigParameters: expandConfigParameters(d.Get("config_parameter").(*schema.Set).List()), + WorkgroupName: aws.String(d.Id()), } - if v, ok := d.GetOk("enhanced_vpc_routing"); ok { - input.EnhancedVpcRouting = aws.Bool(v.(bool)) + if err := updateWorkgroup(ctx, conn, input, d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendFromErr(diags, err) } + } - if v, ok := d.GetOk("publicly_accessible"); ok { - input.PubliclyAccessible = aws.Bool(v.(bool)) + if d.HasChange("enhanced_vpc_routing") { + input := &redshiftserverless.UpdateWorkgroupInput{ + EnhancedVpcRouting: aws.Bool(d.Get("enhanced_vpc_routing").(bool)), + WorkgroupName: aws.String(d.Id()), } - if v, ok := d.GetOk("security_group_ids"); ok && v.(*schema.Set).Len() > 0 { - input.SecurityGroupIds = flex.ExpandStringSet(v.(*schema.Set)) + if err := updateWorkgroup(ctx, conn, input, d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendFromErr(diags, err) } + } - if v, ok := d.GetOk("subnet_ids"); ok && v.(*schema.Set).Len() > 0 { - input.SubnetIds = flex.ExpandStringSet(v.(*schema.Set)) + if 
d.HasChange("publicly_accessible") { + input := &redshiftserverless.UpdateWorkgroupInput{ + PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), + WorkgroupName: aws.String(d.Id()), } - _, err := conn.UpdateWorkgroupWithContext(ctx, input) - if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Redshift Serverless Workgroup (%s): %s", d.Id(), err) + if err := updateWorkgroup(ctx, conn, input, d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + } + + if d.HasChange("security_group_ids") { + input := &redshiftserverless.UpdateWorkgroupInput{ + SecurityGroupIds: flex.ExpandStringSet(d.Get("security_group_ids").(*schema.Set)), + WorkgroupName: aws.String(d.Id()), } - if _, err := waitWorkgroupAvailable(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return sdkdiag.AppendErrorf(diags, "waiting for Redshift Serverless Workgroup (%s) to be updated: %s", d.Id(), err) + if err := updateWorkgroup(ctx, conn, input, d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + } + + if d.HasChange("subnet_ids") { + input := &redshiftserverless.UpdateWorkgroupInput{ + SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), + WorkgroupName: aws.String(d.Id()), + } + + if err := updateWorkgroup(ctx, conn, input, d.Timeout(schema.TimeoutUpdate)); err != nil { + return sdkdiag.AppendFromErr(diags, err) } } @@ -302,8 +355,9 @@ func resourceWorkgroupUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkgroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) + log.Printf("[DEBUG] Deleting Redshift Serverless Workgroup: %s", d.Id()) _, err := tfresource.RetryWhenAWSErrMessageContains(ctx, 10*time.Minute, func() (interface{}, error) { return 
conn.DeleteWorkgroupWithContext(ctx, &redshiftserverless.DeleteWorkgroupInput{ @@ -313,20 +367,111 @@ func resourceWorkgroupDelete(ctx context.Context, d *schema.ResourceData, meta i // "ConflictException: There is an operation running on the workgroup. Try deleting the workgroup again later." redshiftserverless.ErrCodeConflictException, "operation running") + if tfawserr.ErrCodeEquals(err, redshiftserverless.ErrCodeResourceNotFoundException) { + return diags + } + if err != nil { - if tfawserr.ErrCodeEquals(err, redshiftserverless.ErrCodeResourceNotFoundException) { - return diags - } return sdkdiag.AppendErrorf(diags, "deleting Redshift Serverless Workgroup (%s): %s", d.Id(), err) } if _, err := waitWorkgroupDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return sdkdiag.AppendErrorf(diags, "deleting Redshift Serverless Workgroup (%s): waiting for completion: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "waiting for Redshift Serverless Workgroup (%s) delete: %s", d.Id(), err) } return diags } +func updateWorkgroup(ctx context.Context, conn *redshiftserverless.RedshiftServerless, input *redshiftserverless.UpdateWorkgroupInput, timeout time.Duration) error { + name := aws.StringValue(input.WorkgroupName) + _, err := conn.UpdateWorkgroupWithContext(ctx, input) + + if err != nil { + return fmt.Errorf("updating Redshift Serverless Workgroup (%s): %w", name, err) + } + + if _, err := waitWorkgroupAvailable(ctx, conn, name, timeout); err != nil { + return fmt.Errorf("waiting for Redshift Serverless Workgroup (%s) update: %w", name, err) + } + + return nil +} + +func FindWorkgroupByName(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string) (*redshiftserverless.Workgroup, error) { + input := &redshiftserverless.GetWorkgroupInput{ + WorkgroupName: aws.String(name), + } + + output, err := conn.GetWorkgroupWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, 
redshiftserverless.ErrCodeResourceNotFoundException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.Workgroup, nil +} + +func statusWorkgroup(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string) retry.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindWorkgroupByName(ctx, conn, name) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.Status), nil + } +} + +func waitWorkgroupAvailable(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string, wait time.Duration) (*redshiftserverless.Workgroup, error) { //nolint:unparam + stateConf := &retry.StateChangeConf{ + Pending: []string{redshiftserverless.WorkgroupStatusCreating, redshiftserverless.WorkgroupStatusModifying}, + Target: []string{redshiftserverless.WorkgroupStatusAvailable}, + Refresh: statusWorkgroup(ctx, conn, name), + Timeout: wait, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*redshiftserverless.Workgroup); ok { + return output, err + } + + return nil, err +} + +func waitWorkgroupDeleted(ctx context.Context, conn *redshiftserverless.RedshiftServerless, name string, wait time.Duration) (*redshiftserverless.Workgroup, error) { + stateConf := &retry.StateChangeConf{ + Pending: []string{redshiftserverless.WorkgroupStatusAvailable, redshiftserverless.WorkgroupStatusModifying, redshiftserverless.WorkgroupStatusDeleting}, + Target: []string{}, + Refresh: statusWorkgroup(ctx, conn, name), + Timeout: wait, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*redshiftserverless.Workgroup); ok { + return output, err + } + + return nil, err +} + func expandConfigParameter(tfMap 
map[string]interface{}) *redshiftserverless.ConfigParameter { if tfMap == nil { return nil diff --git a/internal/service/redshiftserverless/workgroup_data_source.go b/internal/service/redshiftserverless/workgroup_data_source.go index 31fe743bc2e..398487dfa4f 100644 --- a/internal/service/redshiftserverless/workgroup_data_source.go +++ b/internal/service/redshiftserverless/workgroup_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless import ( @@ -115,7 +118,7 @@ func DataSourceWorkgroup() *schema.Resource { func dataSourceWorkgroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).RedshiftServerlessConn() + conn := meta.(*conns.AWSClient).RedshiftServerlessConn(ctx) workgroupName := d.Get("workgroup_name").(string) diff --git a/internal/service/redshiftserverless/workgroup_data_source_test.go b/internal/service/redshiftserverless/workgroup_data_source_test.go index 2ff94257fff..253d13627c5 100644 --- a/internal/service/redshiftserverless/workgroup_data_source_test.go +++ b/internal/service/redshiftserverless/workgroup_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( diff --git a/internal/service/redshiftserverless/workgroup_test.go b/internal/service/redshiftserverless/workgroup_test.go index 51e12233c71..f80509afd49 100644 --- a/internal/service/redshiftserverless/workgroup_test.go +++ b/internal/service/redshiftserverless/workgroup_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package redshiftserverless_test import ( @@ -47,6 +50,132 @@ func TestAccRedshiftServerlessWorkgroup_basic(t *testing.T) { }) } +func TestAccRedshiftServerlessWorkgroup_baseCapacityAndPubliclyAccessible(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_redshiftserverless_workgroup.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, redshiftserverless.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckWorkgroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccWorkgroupConfig_baseCapacityAndPubliclyAccessible(rName, 64, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckWorkgroupExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "base_capacity", "64"), + resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccWorkgroupConfig_baseCapacityAndPubliclyAccessible(rName, 128, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckWorkgroupExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "base_capacity", "128"), + resource.TestCheckResourceAttr(resourceName, "publicly_accessible", "false"), + ), + }, + }, + }) +} + +func TestAccRedshiftServerlessWorkgroup_configParameters(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_redshiftserverless_workgroup.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, redshiftserverless.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: 
testAccCheckWorkgroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccWorkgroupConfig_configParameters(rName, "14400"), + Check: resource.ComposeTestCheckFunc( + testAccCheckWorkgroupExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "config_parameter.#", "7"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "datestyle", + "parameter_value": "ISO, MDY", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "enable_user_activity_logging", + "parameter_value": "true", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "query_group", + "parameter_value": "default", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "search_path", + "parameter_value": "$user, public", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "max_query_execution_time", + "parameter_value": "14400", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "auto_mv", + "parameter_value": "true", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "enable_case_sensitive_identifier", + "parameter_value": "false", + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccWorkgroupConfig_configParameters(rName, "28800"), + Check: resource.ComposeTestCheckFunc( + testAccCheckWorkgroupExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "config_parameter.#", "7"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "datestyle", + 
"parameter_value": "ISO, MDY", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "enable_user_activity_logging", + "parameter_value": "true", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "query_group", + "parameter_value": "default", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "search_path", + "parameter_value": "$user, public", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "max_query_execution_time", + "parameter_value": "28800", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "auto_mv", + "parameter_value": "true", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "config_parameter.*", map[string]string{ + "parameter_key": "enable_case_sensitive_identifier", + "parameter_value": "false", + }), + ), + }, + }, + }) +} + func TestAccRedshiftServerlessWorkgroup_tags(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_redshiftserverless_workgroup.test" @@ -116,7 +245,7 @@ func TestAccRedshiftServerlessWorkgroup_disappears(t *testing.T) { func testAccCheckWorkgroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_redshiftserverless_workgroup" { @@ -150,7 +279,7 @@ func testAccCheckWorkgroupExists(ctx context.Context, name string) resource.Test return fmt.Errorf("Redshift Serverless Workgroup ID is not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).RedshiftServerlessConn(ctx) _, err := tfredshiftserverless.FindWorkgroupByName(ctx, conn, rs.Primary.ID) @@ -171,6 +300,63 @@ resource "aws_redshiftserverless_workgroup" "test" { `, rName) } +func testAccWorkgroupConfig_baseCapacityAndPubliclyAccessible(rName string, baseCapacity int, publiclyAccessible bool) string { + return fmt.Sprintf(` +resource "aws_redshiftserverless_namespace" "test" { + namespace_name = %[1]q +} + +resource "aws_redshiftserverless_workgroup" "test" { + namespace_name = aws_redshiftserverless_namespace.test.namespace_name + workgroup_name = %[1]q + base_capacity = %[2]d + publicly_accessible = %[3]t +} +`, rName, baseCapacity, publiclyAccessible) +} + +func testAccWorkgroupConfig_configParameters(rName, maxQueryExecutionTime string) string { + return fmt.Sprintf(` +resource "aws_redshiftserverless_namespace" "test" { + namespace_name = %[1]q +} + +resource "aws_redshiftserverless_workgroup" "test" { + namespace_name = aws_redshiftserverless_namespace.test.namespace_name + workgroup_name = %[1]q + + config_parameter { + parameter_key = "datestyle" + parameter_value = "ISO, MDY" + } + config_parameter { + parameter_key = "enable_user_activity_logging" + parameter_value = "true" + } + config_parameter { + parameter_key = "query_group" + parameter_value = "default" + } + config_parameter { + parameter_key = "search_path" + parameter_value = "$user, public" + } + config_parameter { + parameter_key = "max_query_execution_time" + parameter_value = %[2]q + } + config_parameter { + parameter_key = "auto_mv" + parameter_value = "true" + } + config_parameter { + parameter_key = "enable_case_sensitive_identifier" + parameter_value = "false" + } +} +`, rName, maxQueryExecutionTime) +} + func testAccWorkgroupConfig_tags1(rName, tagKey1, tagValue1 string) string { return fmt.Sprintf(` resource "aws_redshiftserverless_namespace" "test" { diff --git a/internal/service/resourceexplorer2/exports_test.go 
b/internal/service/resourceexplorer2/exports_test.go index 9a268bb6eb0..d00e26eb31b 100644 --- a/internal/service/resourceexplorer2/exports_test.go +++ b/internal/service/resourceexplorer2/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resourceexplorer2 // Exports for use in tests only. diff --git a/internal/service/resourceexplorer2/generate.go b/internal/service/resourceexplorer2/generate.go index bb99c5dbd6f..03f697850eb 100644 --- a/internal/service/resourceexplorer2/generate.go +++ b/internal/service/resourceexplorer2/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -TagInIDElem=ResourceArn -ListTags -ListTagsInIDElem=ResourceArn -ServiceTagsMap -UpdateTags -UntagInTagsElem=TagKeys -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package resourceexplorer2 diff --git a/internal/service/resourceexplorer2/index.go b/internal/service/resourceexplorer2/index.go index de593db8254..6096f874d53 100644 --- a/internal/service/resourceexplorer2/index.go +++ b/internal/service/resourceexplorer2/index.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resourceexplorer2 import ( @@ -9,7 +12,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2" awstypes "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2/types" "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" - "github.com/hashicorp/terraform-plugin-framework/path" "github.com/hashicorp/terraform-plugin-framework/resource" "github.com/hashicorp/terraform-plugin-framework/resource/schema" "github.com/hashicorp/terraform-plugin-framework/schema/validator" @@ -20,8 +22,8 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -40,6 +42,7 @@ func newResourceIndex(context.Context) (resource.ResourceWithConfigure, error) { type resourceIndex struct { framework.ResourceWithConfigure + framework.WithImportByID framework.WithTimeouts } @@ -80,11 +83,11 @@ func (r *resourceIndex) Create(ctx context.Context, request resource.CreateReque return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) input := &resourceexplorer2.CreateIndexInput{ ClientToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateIndex(ctx, input) @@ -127,7 +130,7 @@ func (r *resourceIndex) Create(ctx context.Context, request resource.CreateReque } // Set values for unknowns. 
- data.ARN = types.StringValue(arn) + data.Arn = types.StringValue(arn) response.Diagnostics.Append(response.State.Set(ctx, &data)...) } @@ -141,7 +144,7 @@ func (r *resourceIndex) Read(ctx context.Context, request resource.ReadRequest, return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) output, err := findIndex(ctx, conn) @@ -158,10 +161,13 @@ func (r *resourceIndex) Read(ctx context.Context, request resource.ReadRequest, return } - data.ARN = flex.StringToFramework(ctx, output.Arn) - data.Type = flex.StringValueToFramework(ctx, output.Type) + if err := flex.Flatten(ctx, output, &data); err != nil { + response.Diagnostics.AddError("flattening data", err.Error()) + + return + } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) response.Diagnostics.Append(response.State.Set(ctx, &data)...) } @@ -182,7 +188,7 @@ func (r *resourceIndex) Update(ctx context.Context, request resource.UpdateReque } if !new.Type.Equal(old.Type) { - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) input := &resourceexplorer2.UpdateIndexTypeInput{ Arn: flex.StringFromFramework(ctx, new.ID), @@ -217,7 +223,7 @@ func (r *resourceIndex) Delete(ctx context.Context, request resource.DeleteReque return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) tflog.Debug(ctx, "deleting Resource Explorer Index", map[string]interface{}{ "id": data.ID.ValueString(), @@ -240,16 +246,13 @@ func (r *resourceIndex) Delete(ctx context.Context, request resource.DeleteReque } } -func (r *resourceIndex) ImportState(ctx context.Context, request resource.ImportStateRequest, response *resource.ImportStateResponse) { - resource.ImportStatePassthroughID(ctx, path.Root("id"), request, response) -} - func (r *resourceIndex) ModifyPlan(ctx context.Context, request resource.ModifyPlanRequest, response *resource.ModifyPlanResponse) { r.SetTagsAll(ctx, request, response) } +// See 
https://docs.aws.amazon.com/resource-explorer/latest/apireference/API_Index.html. type resourceIndexData struct { - ARN types.String `tfsdk:"arn"` + Arn types.String `tfsdk:"arn"` ID types.String `tfsdk:"id"` Tags types.Map `tfsdk:"tags"` TagsAll types.Map `tfsdk:"tags_all"` diff --git a/internal/service/resourceexplorer2/index_test.go b/internal/service/resourceexplorer2/index_test.go index a5878bb7267..fc80d548515 100644 --- a/internal/service/resourceexplorer2/index_test.go +++ b/internal/service/resourceexplorer2/index_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resourceexplorer2_test import ( @@ -155,7 +158,7 @@ func testAccIndex_type(t *testing.T) { func testAccCheckIndexDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_resourceexplorer2_index" { @@ -189,7 +192,7 @@ func testAccCheckIndexExists(ctx context.Context, n string) resource.TestCheckFu return fmt.Errorf("No Resource Explorer Index ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client(ctx) _, err := tfresourceexplorer2.FindIndex(ctx, conn) diff --git a/internal/service/resourceexplorer2/resourceexplorer2_test.go b/internal/service/resourceexplorer2/resourceexplorer2_test.go index a793ad90cdf..c94deeec944 100644 --- a/internal/service/resourceexplorer2/resourceexplorer2_test.go +++ b/internal/service/resourceexplorer2/resourceexplorer2_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resourceexplorer2_test import ( diff --git a/internal/service/resourceexplorer2/service_package_gen.go b/internal/service/resourceexplorer2/service_package_gen.go index 71ce3e0a180..693c97a02e4 100644 --- a/internal/service/resourceexplorer2/service_package_gen.go +++ b/internal/service/resourceexplorer2/service_package_gen.go @@ -5,6 +5,9 @@ package resourceexplorer2 import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + resourceexplorer2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -46,4 +49,17 @@ func (p *servicePackage) ServicePackageName() string { return names.ResourceExplorer2 } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*resourceexplorer2_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return resourceexplorer2_sdkv2.NewFromConfig(cfg, func(o *resourceexplorer2_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = resourceexplorer2_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/resourceexplorer2/sweep.go b/internal/service/resourceexplorer2/sweep.go index b5f8e957b22..1effce46274 100644 --- a/internal/service/resourceexplorer2/sweep.go +++ b/internal/service/resourceexplorer2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,8 +13,8 @@ import ( "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/framework" ) func init() { @@ -23,11 +26,11 @@ func init() { func sweepIndexes(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ResourceExplorer2Client() + conn := client.ResourceExplorer2Client(ctx) input := &resourceexplorer2.ListIndexesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -45,11 +48,13 @@ func sweepIndexes(region string) error { } for _, v := range page.Indexes { - sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceIndex, aws.ToString(v.Arn), client)) + sweepResources = append(sweepResources, framework.NewSweepResource(newResourceIndex, client, + framework.NewAttribute("id", aws.ToString(v.Arn)), + )) } } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Resource Explorer Indexes (%s): %w", region, err) diff --git a/internal/service/resourceexplorer2/tags_gen.go b/internal/service/resourceexplorer2/tags_gen.go index e18f8b144d4..36e5c1453fc 100644 --- a/internal/service/resourceexplorer2/tags_gen.go +++ b/internal/service/resourceexplorer2/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists resourceexplorer2 service tags. +// listTags lists resourceexplorer2 service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *resourceexplorer2.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *resourceexplorer2.Client, identifier string) (tftags.KeyValueTags, error) { input := &resourceexplorer2.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *resourceexplorer2.Client, identifier st // ListTags lists resourceexplorer2 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ResourceExplorer2Client(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ResourceExplorer2Client(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from resourceexplorer2 service tags. +// KeyValueTags creates tftags.KeyValueTags from resourceexplorer2 service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns resourceexplorer2 service tags from Context. +// getTagsIn returns resourceexplorer2 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets resourceexplorer2 service tags in Context. 
-func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets resourceexplorer2 service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates resourceexplorer2 service tags. +// updateTags updates resourceexplorer2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *resourceexplorer2.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *resourceexplorer2.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *resourceexplorer2.Client, identifier // UpdateTags updates resourceexplorer2 service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ResourceExplorer2Client(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ResourceExplorer2Client(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/resourceexplorer2/view.go b/internal/service/resourceexplorer2/view.go index 57434d2f74f..b4ddccfd0a7 100644 --- a/internal/service/resourceexplorer2/view.go +++ b/internal/service/resourceexplorer2/view.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resourceexplorer2 import ( @@ -16,6 +19,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/path" "github.com/hashicorp/terraform-plugin-framework/resource" "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault" "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" "github.com/hashicorp/terraform-plugin-framework/schema/validator" @@ -26,9 +30,8 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" - fwboolplanmodifier "github.com/hashicorp/terraform-provider-aws/internal/framework/boolplanmodifier" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -60,9 +63,7 @@ func (r *resourceView) Schema(ctx context.Context, request resource.SchemaReques "default_view": schema.BoolAttribute{ Optional: true, Computed: true, - PlanModifiers: []planmodifier.Bool{ - fwboolplanmodifier.DefaultValue(false), - }, + Default: booldefault.StaticBool(false), }, "id": framework.IDAttribute(), "name": schema.StringAttribute{ @@ -119,14 +120,14 @@ func (r *resourceView) Create(ctx context.Context, request resource.CreateReques return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) input := &resourceexplorer2.CreateViewInput{ ClientToken: aws.String(id.UniqueId()), - Filters: r.expandSearchFilter(ctx, data.Filters), - 
IncludedProperties: r.expandIncludedProperties(ctx, data.IncludedProperties), - Tags: GetTagsIn(ctx), - ViewName: aws.String(data.Name.ValueString()), + Filters: flex.ExpandFrameworkListNestedBlockPtr(ctx, data.Filters, r.expandSearchFilter), + IncludedProperties: flex.ExpandFrameworkListNestedBlock(ctx, data.IncludedProperties, r.expandIncludedProperty), + Tags: getTagsIn(ctx), + ViewName: aws.String(data.ViewName.ValueString()), } output, err := conn.CreateView(ctx, input) @@ -154,7 +155,7 @@ func (r *resourceView) Create(ctx context.Context, request resource.CreateReques } // Set values for unknowns. - data.ARN = types.StringValue(arn) + data.ViewArn = types.StringValue(arn) data.ID = types.StringValue(arn) response.Diagnostics.Append(response.State.Set(ctx, &data)...) @@ -169,7 +170,7 @@ func (r *resourceView) Read(ctx context.Context, request resource.ReadRequest, r return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) output, err := findViewByARN(ctx, conn, data.ID.ValueString()) @@ -195,12 +196,12 @@ func (r *resourceView) Read(ctx context.Context, request resource.ReadRequest, r } view := output.View - data.ARN = flex.StringToFramework(ctx, view.ViewArn) - data.DefaultView = types.BoolValue(defaultViewARN == data.ARN.ValueString()) + data.ViewArn = flex.StringToFramework(ctx, view.ViewArn) + data.DefaultView = types.BoolValue(defaultViewARN == data.ViewArn.ValueString()) data.Filters = r.flattenSearchFilter(ctx, view.Filters) - data.IncludedProperties = r.flattenIncludedProperties(ctx, view.IncludedProperties) + data.IncludedProperties = flex.FlattenFrameworkListNestedBlock[viewIncludedPropertyData](ctx, view.IncludedProperties, r.flattenIncludedProperty) - arn, err := arn.Parse(data.ARN.ValueString()) + arn, err := arn.Parse(data.ViewArn.ValueString()) if err != nil { response.Diagnostics.AddError("parsing Resource Explorer View ARN", err.Error()) @@ -218,9 +219,9 @@ func (r *resourceView) Read(ctx 
context.Context, request resource.ReadRequest, r } name := parts[1] - data.Name = types.StringValue(name) + data.ViewName = types.StringValue(name) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) response.Diagnostics.Append(response.State.Set(ctx, &data)...) } @@ -240,12 +241,12 @@ func (r *resourceView) Update(ctx context.Context, request resource.UpdateReques return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) if !new.Filters.Equal(old.Filters) || !new.IncludedProperties.Equal(old.IncludedProperties) { input := &resourceexplorer2.UpdateViewInput{ - Filters: r.expandSearchFilter(ctx, new.Filters), - IncludedProperties: r.expandIncludedProperties(ctx, new.IncludedProperties), + Filters: flex.ExpandFrameworkListNestedBlockPtr(ctx, new.Filters, r.expandSearchFilter), + IncludedProperties: flex.ExpandFrameworkListNestedBlock(ctx, new.IncludedProperties, r.expandIncludedProperty), ViewArn: flex.StringFromFramework(ctx, new.ID), } @@ -296,7 +297,7 @@ func (r *resourceView) Delete(ctx context.Context, request resource.DeleteReques return } - conn := r.Meta().ResourceExplorer2Client() + conn := r.Meta().ResourceExplorer2Client(ctx) tflog.Debug(ctx, "deleting Resource Explorer View", map[string]interface{}{ "id": data.ID.ValueString(), @@ -320,30 +321,14 @@ func (r *resourceView) ModifyPlan(ctx context.Context, request resource.ModifyPl r.SetTagsAll(ctx, request, response) } -func (r *resourceView) expandSearchFilter(ctx context.Context, tfList types.List) *awstypes.SearchFilter { - if tfList.IsNull() || tfList.IsUnknown() { - return nil - } - - var data []viewSearchFilterData - - if diags := tfList.ElementsAs(ctx, &data, false); diags.HasError() { - return nil - } - - if len(data) == 0 { - return nil - } - - apiObject := &awstypes.SearchFilter{ - FilterString: flex.StringFromFramework(ctx, data[0].FilterString), +func (r *resourceView) expandSearchFilter(ctx context.Context, data viewSearchFilterData) 
*awstypes.SearchFilter { + return &awstypes.SearchFilter{ + FilterString: flex.StringFromFramework(ctx, data.FilterString), } - - return apiObject } func (r *resourceView) flattenSearchFilter(ctx context.Context, apiObject *awstypes.SearchFilter) types.List { - attributeTypes, _ := framework.AttributeTypes[viewSearchFilterData](ctx) + attributeTypes := flex.AttributeTypesMust[viewSearchFilterData](ctx) elementType := types.ObjectType{AttrTypes: attributeTypes} // The default is @@ -365,65 +350,26 @@ func (r *resourceView) flattenSearchFilter(ctx context.Context, apiObject *awsty }) } -func (r *resourceView) expandIncludedProperties(ctx context.Context, tfList types.List) []awstypes.IncludedProperty { - if tfList.IsNull() || tfList.IsUnknown() { - return nil - } - - var data []viewIncludedPropertyData - - if diags := tfList.ElementsAs(ctx, &data, false); diags.HasError() { - return nil - } - - var apiObjects []awstypes.IncludedProperty - - for _, v := range data { - apiObjects = append(apiObjects, r.expandIncludedProperty(ctx, v)) - } - - return apiObjects -} - func (r *resourceView) expandIncludedProperty(ctx context.Context, data viewIncludedPropertyData) awstypes.IncludedProperty { - apiObject := awstypes.IncludedProperty{ + return awstypes.IncludedProperty{ Name: flex.StringFromFramework(ctx, data.Name), } - - return apiObject } -func (r *resourceView) flattenIncludedProperties(ctx context.Context, apiObjects []awstypes.IncludedProperty) types.List { - attributeTypes, _ := framework.AttributeTypes[viewIncludedPropertyData](ctx) - elementType := types.ObjectType{AttrTypes: attributeTypes} - - if len(apiObjects) == 0 { - return types.ListNull(elementType) +func (r *resourceView) flattenIncludedProperty(ctx context.Context, apiObject awstypes.IncludedProperty) viewIncludedPropertyData { + return viewIncludedPropertyData{ + Name: flex.StringToFramework(ctx, apiObject.Name), } - - var elements []attr.Value - - for _, apiObject := range apiObjects { - elements = 
append(elements, r.flattenIncludedProperty(ctx, apiObject)) - } - - return types.ListValueMust(elementType, elements) -} - -func (r *resourceView) flattenIncludedProperty(ctx context.Context, apiObject awstypes.IncludedProperty) types.Object { - attributeTypes, _ := framework.AttributeTypes[viewIncludedPropertyData](ctx) - return types.ObjectValueMust(attributeTypes, map[string]attr.Value{ - "name": flex.StringToFramework(ctx, apiObject.Name), - }) } +// See https://docs.aws.amazon.com/resource-explorer/latest/apireference/API_View.html. type resourceViewData struct { - ARN types.String `tfsdk:"arn"` + ViewArn types.String `tfsdk:"arn"` DefaultView types.Bool `tfsdk:"default_view"` Filters types.List `tfsdk:"filters"` ID types.String `tfsdk:"id"` IncludedProperties types.List `tfsdk:"included_property"` - Name types.String `tfsdk:"name"` + ViewName types.String `tfsdk:"name"` Tags types.Map `tfsdk:"tags"` TagsAll types.Map `tfsdk:"tags_all"` } diff --git a/internal/service/resourceexplorer2/view_test.go b/internal/service/resourceexplorer2/view_test.go index 7b1ee93f8c3..c053e458316 100644 --- a/internal/service/resourceexplorer2/view_test.go +++ b/internal/service/resourceexplorer2/view_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resourceexplorer2_test import ( @@ -228,7 +231,7 @@ func testAccView_tags(t *testing.T) { func testAccCheckViewDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_resourceexplorer2_view" { @@ -262,7 +265,7 @@ func testAccCheckViewExists(ctx context.Context, n string, v *resourceexplorer2. 
return fmt.Errorf("No Resource Explorer View ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceExplorer2Client(ctx) output, err := tfresourceexplorer2.FindViewByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/resourcegroups/generate.go b/internal/service/resourcegroups/generate.go index 7b6183b111e..6d41acb5e09 100644 --- a/internal/service/resourcegroups/generate.go +++ b/internal/service/resourcegroups/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=GetTags -ListTagsInIDElem=Arn -ServiceTagsMap -TagOp=Tag -TagInIDElem=Arn -UntagOp=Untag -UntagInTagsElem=Keys -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package resourcegroups diff --git a/internal/service/resourcegroups/group.go b/internal/service/resourcegroups/group.go index fff4d3a4f3e..1a272021fde 100644 --- a/internal/service/resourcegroups/group.go +++ b/internal/service/resourcegroups/group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resourcegroups import ( @@ -44,9 +47,8 @@ func ResourceGroup() *schema.Resource { Computed: true, }, "configuration": { - Type: schema.TypeSet, - Optional: true, - ConflictsWith: []string{"resource_query"}, + Type: schema.TypeSet, + Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "parameters": { @@ -85,11 +87,10 @@ func ResourceGroup() *schema.Resource { ForceNew: true, }, "resource_query": { - Type: schema.TypeList, - Optional: true, - MinItems: 1, - MaxItems: 1, - ConflictsWith: []string{"configuration"}, + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "query": { @@ -116,13 +117,13 @@ func ResourceGroup() *schema.Resource { } func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ResourceGroupsConn() + conn := meta.(*conns.AWSClient).ResourceGroupsConn(ctx) name := d.Get("name").(string) input := &resourcegroups.CreateGroupInput{ Description: aws.String(d.Get("description").(string)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } waitForConfigurationAttached := false @@ -155,7 +156,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ResourceGroupsConn() + conn := meta.(*conns.AWSClient).ResourceGroupsConn(ctx) group, err := FindGroupByName(ctx, conn, d.Id()) @@ -178,17 +179,29 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa GroupName: aws.String(d.Id()), }) - isConfigurationGroup := false + hasQuery := true if err != nil { if tfawserr.ErrCodeEquals(err, resourcegroups.ErrCodeBadRequestException) { // Attempting to get the query on a configuration group returns BadRequestException. 
- isConfigurationGroup = true + hasQuery = false } else { return diag.Errorf("reading Resource Groups Group (%s) resource query: %s", d.Id(), err) } } - if !isConfigurationGroup { + groupCfg, err := findGroupConfigurationByGroupName(ctx, conn, d.Id()) + + hasConfiguration := true + if err != nil { + if tfawserr.ErrCodeEquals(err, resourcegroups.ErrCodeBadRequestException) { + // Attempting to get configuration on a query group returns BadRequestException. + hasConfiguration = false + } else { + return diag.Errorf("reading Resource Groups Group (%s) configuration: %s", d.Id(), err) + } + } + + if hasQuery { resultQuery := map[string]interface{}{} resultQuery["query"] = aws.StringValue(q.GroupQuery.ResourceQuery.Query) resultQuery["type"] = aws.StringValue(q.GroupQuery.ResourceQuery.Type) @@ -196,14 +209,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa return diag.Errorf("setting resource_query: %s", err) } } - - if isConfigurationGroup { - groupCfg, err := findGroupConfigurationByGroupName(ctx, conn, d.Id()) - - if err != nil { - return diag.Errorf("reading Resource Groups Group (%s) configuration: %s", d.Id(), err) - } - + if hasConfiguration { if err := d.Set("configuration", flattenResourceGroupConfigurationItems(groupCfg.Configuration)); err != nil { return diag.Errorf("setting configuration: %s", err) } @@ -213,7 +219,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ResourceGroupsConn() + conn := meta.(*conns.AWSClient).ResourceGroupsConn(ctx) // Conversion between a resource-query and configuration group is not possible and vice-versa if d.HasChange("configuration") && d.HasChange("resource_query") { @@ -267,7 +273,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceGroupDelete(ctx context.Context, 
d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ResourceGroupsConn() + conn := meta.(*conns.AWSClient).ResourceGroupsConn(ctx) log.Printf("[DEBUG] Deleting Resource Groups Group: %s", d.Id()) _, err := conn.DeleteGroupWithContext(ctx, &resourcegroups.DeleteGroupInput{ diff --git a/internal/service/resourcegroups/group_test.go b/internal/service/resourcegroups/group_test.go index 3510f88d721..86fe52235e3 100644 --- a/internal/service/resourcegroups/group_test.go +++ b/internal/service/resourcegroups/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resourcegroups_test import ( @@ -223,6 +226,40 @@ func TestAccResourceGroupsGroup_configurationParametersOptional(t *testing.T) { }) } +func TestAccResourceGroupsGroup_resourceQueryAndConfiguration(t *testing.T) { + ctx := acctest.Context(t) + var v resourcegroups.Group + resourceName := "aws_resourcegroups_group.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + configType := "AWS::NetworkFirewall::RuleGroup" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, resourcegroups.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckResourceGroupDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccGroupConfig_resourceQueryAndConfiguration(rName, testAccResourceGroupQueryConfig, configType), + Check: resource.ComposeTestCheckFunc( + testAccCheckResourceGroupExists(ctx, resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "resource_query.0.query", testAccResourceGroupQueryConfig+"\n"), + resource.TestCheckResourceAttr(resourceName, "configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "configuration.0.type", 
configType), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckResourceGroupExists(ctx context.Context, n string, v *resourcegroups.Group) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -234,7 +271,7 @@ func testAccCheckResourceGroupExists(ctx context.Context, n string, v *resourceg return fmt.Errorf("No Resource Groups Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceGroupsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceGroupsConn(ctx) output, err := tfresourcegroups.FindGroupByName(ctx, conn, rs.Primary.ID) @@ -250,7 +287,7 @@ func testAccCheckResourceGroupExists(ctx context.Context, n string, v *resourceg func testAccCheckResourceGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceGroupsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ResourceGroupsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_resourcegroups_group" { @@ -425,3 +462,22 @@ resource "aws_resourcegroups_group" "test" { } `, rName, configType1, configType2) } + +func testAccGroupConfig_resourceQueryAndConfiguration(rName, query, configType string) string { + return fmt.Sprintf(` +resource "aws_resourcegroups_group" "test" { + name = %[1]q + + resource_query { + query = < 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets resourcegroups service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets resourcegroups service tags in Context. 
+func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates resourcegroups service tags. +// updateTags updates resourcegroups service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn resourcegroupsiface.ResourceGroupsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn resourcegroupsiface.ResourceGroupsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn resourcegroupsiface.ResourceGroupsAPI, // UpdateTags updates resourcegroups service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ResourceGroupsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ResourceGroupsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/resourcegroupstaggingapi/generate.go b/internal/service/resourcegroupstaggingapi/generate.go index 5abba31f3c7..b306822d5bd 100644 --- a/internal/service/resourcegroupstaggingapi/generate.go +++ b/internal/service/resourcegroupstaggingapi/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package resourcegroupstaggingapi diff --git a/internal/service/resourcegroupstaggingapi/resources_data_source.go b/internal/service/resourcegroupstaggingapi/resources_data_source.go index 622095f553b..1d82cd05035 100644 --- a/internal/service/resourcegroupstaggingapi/resources_data_source.go +++ b/internal/service/resourcegroupstaggingapi/resources_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resourcegroupstaggingapi import ( @@ -100,7 +103,7 @@ func DataSourceResources() *schema.Resource { func dataSourceResourcesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ResourceGroupsTaggingAPIConn() + conn := meta.(*conns.AWSClient).ResourceGroupsTaggingAPIConn(ctx) input := &resourcegroupstaggingapi.GetResourcesInput{} diff --git a/internal/service/resourcegroupstaggingapi/resources_data_source_test.go b/internal/service/resourcegroupstaggingapi/resources_data_source_test.go index e42325659d5..b522bad88c0 100644 --- a/internal/service/resourcegroupstaggingapi/resources_data_source_test.go +++ b/internal/service/resourcegroupstaggingapi/resources_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package resourcegroupstaggingapi_test import ( diff --git a/internal/service/resourcegroupstaggingapi/service_package_gen.go b/internal/service/resourcegroupstaggingapi/service_package_gen.go index 83612ec1339..df9a805c4eb 100644 --- a/internal/service/resourcegroupstaggingapi/service_package_gen.go +++ b/internal/service/resourcegroupstaggingapi/service_package_gen.go @@ -5,6 +5,10 @@ package resourcegroupstaggingapi import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + resourcegroupstaggingapi_sdkv1 "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -36,4 +40,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ResourceGroupsTaggingAPI } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*resourcegroupstaggingapi_sdkv1.ResourceGroupsTaggingAPI, error) { + sess := config["session"].(*session_sdkv1.Session) + + return resourcegroupstaggingapi_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/resourcegroupstaggingapi/tags_gen.go b/internal/service/resourcegroupstaggingapi/tags_gen.go index 4d7d8d15f28..550e3f5183d 100644 --- a/internal/service/resourcegroupstaggingapi/tags_gen.go +++ b/internal/service/resourcegroupstaggingapi/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*resourcegroupstaggingapi.Tag) tft return tftags.New(ctx, m) } -// GetTagsIn returns resourcegroupstaggingapi service tags from Context. +// getTagsIn returns resourcegroupstaggingapi service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*resourcegroupstaggingapi.Tag { +func getTagsIn(ctx context.Context) []*resourcegroupstaggingapi.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*resourcegroupstaggingapi.Tag { return nil } -// SetTagsOut sets resourcegroupstaggingapi service tags in Context. -func SetTagsOut(ctx context.Context, tags []*resourcegroupstaggingapi.Tag) { +// setTagsOut sets resourcegroupstaggingapi service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*resourcegroupstaggingapi.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/rolesanywhere/find.go b/internal/service/rolesanywhere/find.go index f976ae3687c..6d26f5c5078 100644 --- a/internal/service/rolesanywhere/find.go +++ b/internal/service/rolesanywhere/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rolesanywhere import ( diff --git a/internal/service/rolesanywhere/flex.go b/internal/service/rolesanywhere/flex.go index 88cd69bc6a6..72500c467bd 100644 --- a/internal/service/rolesanywhere/flex.go +++ b/internal/service/rolesanywhere/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rolesanywhere func expandStringList(tfList []interface{}) []string { diff --git a/internal/service/rolesanywhere/generate.go b/internal/service/rolesanywhere/generate.go index 20e8cdc153a..e6932f9d9e8 100644 --- a/internal/service/rolesanywhere/generate.go +++ b/internal/service/rolesanywhere/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package rolesanywhere diff --git a/internal/service/rolesanywhere/profile.go b/internal/service/rolesanywhere/profile.go index 061c4f201fa..62d539d3a39 100644 --- a/internal/service/rolesanywhere/profile.go +++ b/internal/service/rolesanywhere/profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rolesanywhere import ( @@ -77,13 +80,13 @@ func ResourceProfile() *schema.Resource { } func resourceProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) name := d.Get("name").(string) input := &rolesanywhere.CreateProfileInput{ Name: aws.String(name), RoleArns: expandStringList(d.Get("role_arns").(*schema.Set).List()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("duration_seconds"); ok { @@ -119,7 +122,7 @@ func resourceProfileCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) profile, err := FindProfileByID(ctx, conn, d.Id()) @@ -146,7 +149,7 @@ func resourceProfileRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) if d.HasChangesExcept("enabled", "tags_all") { input := &rolesanywhere.UpdateProfileInput{ @@ -199,7 +202,7 @@ func resourceProfileUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) log.Printf("[DEBUG] Deleting RolesAnywhere Profile (%s)", d.Id()) _, err := conn.DeleteProfile(ctx, &rolesanywhere.DeleteProfileInput{ @@ -219,7 +222,7 @@ func resourceProfileDelete(ctx context.Context, d *schema.ResourceData, meta int } func 
disableProfile(ctx context.Context, profileId string, meta interface{}) error { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) input := &rolesanywhere.DisableProfileInput{ ProfileId: aws.String(profileId), @@ -230,7 +233,7 @@ func disableProfile(ctx context.Context, profileId string, meta interface{}) err } func enableProfile(ctx context.Context, profileId string, meta interface{}) error { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) input := &rolesanywhere.EnableProfileInput{ ProfileId: aws.String(profileId), diff --git a/internal/service/rolesanywhere/profile_test.go b/internal/service/rolesanywhere/profile_test.go index 7c0a27094a8..acf21683cff 100644 --- a/internal/service/rolesanywhere/profile_test.go +++ b/internal/service/rolesanywhere/profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rolesanywhere_test import ( @@ -160,7 +163,7 @@ func TestAccRolesAnywhereProfile_enabled(t *testing.T) { func testAccCheckProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rolesanywhere_profile" { @@ -196,7 +199,7 @@ func testAccCheckProfileExists(ctx context.Context, name string) resource.TestCh return fmt.Errorf("No Profile is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient(ctx) _, err := tfrolesanywhere.FindProfileByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/rolesanywhere/service_package_gen.go b/internal/service/rolesanywhere/service_package_gen.go index 5ba3bc81b00..1784ea7bc86 100644 --- 
a/internal/service/rolesanywhere/service_package_gen.go +++ b/internal/service/rolesanywhere/service_package_gen.go @@ -5,6 +5,9 @@ package rolesanywhere import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + rolesanywhere_sdkv2 "github.com/aws/aws-sdk-go-v2/service/rolesanywhere" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -48,4 +51,17 @@ func (p *servicePackage) ServicePackageName() string { return names.RolesAnywhere } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*rolesanywhere_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return rolesanywhere_sdkv2.NewFromConfig(cfg, func(o *rolesanywhere_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = rolesanywhere_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/rolesanywhere/tags_gen.go b/internal/service/rolesanywhere/tags_gen.go index 74f1b778766..532fc7203ae 100644 --- a/internal/service/rolesanywhere/tags_gen.go +++ b/internal/service/rolesanywhere/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists rolesanywhere service tags. +// listTags lists rolesanywhere service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn *rolesanywhere.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *rolesanywhere.Client, identifier string) (tftags.KeyValueTags, error) { input := &rolesanywhere.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *rolesanywhere.Client, identifier string // ListTags lists rolesanywhere service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).RolesAnywhereClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).RolesAnywhereClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns rolesanywhere service tags from Context. +// getTagsIn returns rolesanywhere service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets rolesanywhere service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets rolesanywhere service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates rolesanywhere service tags. +// updateTags updates rolesanywhere service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *rolesanywhere.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *rolesanywhere.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *rolesanywhere.Client, identifier stri // UpdateTags updates rolesanywhere service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RolesAnywhereClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RolesAnywhereClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/rolesanywhere/trust_anchor.go b/internal/service/rolesanywhere/trust_anchor.go index d2abdb62d60..629f097d8f3 100644 --- a/internal/service/rolesanywhere/trust_anchor.go +++ b/internal/service/rolesanywhere/trust_anchor.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rolesanywhere import ( @@ -88,14 +91,14 @@ func ResourceTrustAnchor() *schema.Resource { } func resourceTrustAnchorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) name := d.Get("name").(string) input := &rolesanywhere.CreateTrustAnchorInput{ Enabled: aws.Bool(d.Get("enabled").(bool)), Name: aws.String(name), Source: expandSource(d.Get("source").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating RolesAnywhere Trust Anchor (%s): %#v", d.Id(), input) @@ -111,7 +114,7 @@ func resourceTrustAnchorCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceTrustAnchorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) trustAnchor, err := FindTrustAnchorByID(ctx, conn, d.Id()) @@ -137,7 +140,7 @@ func resourceTrustAnchorRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceTrustAnchorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &rolesanywhere.UpdateTrustAnchorInput{ @@ -171,7 +174,7 @@ func resourceTrustAnchorUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceTrustAnchorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) log.Printf("[DEBUG] Deleting RolesAnywhere Trust Anchor (%s)", d.Id()) _, err := conn.DeleteTrustAnchor(ctx, &rolesanywhere.DeleteTrustAnchorInput{ @@ 
-272,7 +275,7 @@ func expandSourceDataCertificateBundle(tfMap map[string]interface{}) *types.Sour } func disableTrustAnchor(ctx context.Context, trustAnchorId string, meta interface{}) error { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) input := &rolesanywhere.DisableTrustAnchorInput{ TrustAnchorId: aws.String(trustAnchorId), @@ -283,7 +286,7 @@ func disableTrustAnchor(ctx context.Context, trustAnchorId string, meta interfac } func enableTrustAnchor(ctx context.Context, trustAnchorId string, meta interface{}) error { - conn := meta.(*conns.AWSClient).RolesAnywhereClient() + conn := meta.(*conns.AWSClient).RolesAnywhereClient(ctx) input := &rolesanywhere.EnableTrustAnchorInput{ TrustAnchorId: aws.String(trustAnchorId), diff --git a/internal/service/rolesanywhere/trust_anchor_test.go b/internal/service/rolesanywhere/trust_anchor_test.go index 426df61e6f4..9236669aa50 100644 --- a/internal/service/rolesanywhere/trust_anchor_test.go +++ b/internal/service/rolesanywhere/trust_anchor_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rolesanywhere_test import ( @@ -193,7 +196,7 @@ func TestAccRolesAnywhereTrustAnchor_enabled(t *testing.T) { func testAccCheckTrustAnchorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rolesanywhere_trust_anchor" { @@ -229,7 +232,7 @@ func testAccCheckTrustAnchorExists(ctx context.Context, n string) resource.TestC return fmt.Errorf("No RolesAnywhere Trust Anchor ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient(ctx) _, err := tfrolesanywhere.FindTrustAnchorByID(ctx, conn, rs.Primary.ID) @@ -371,7 +374,7 @@ resource "aws_rolesanywhere_trust_anchor" "test" { func testAccPreCheck(ctx context.Context, t *testing.T) { acctest.PreCheckPartitionHasService(t, names.RolesAnywhereEndpointID) - conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).RolesAnywhereClient(ctx) input := &rolesanywhere.ListTrustAnchorsInput{} diff --git a/internal/service/route53/cidr_collection.go b/internal/service/route53/cidr_collection.go index fcb38d213ac..aa8f8ff1ddc 100644 --- a/internal/service/route53/cidr_collection.go +++ b/internal/service/route53/cidr_collection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -20,8 +23,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -71,7 +74,7 @@ func (r *resourceCIDRCollection) Create(ctx context.Context, request resource.Cr return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) name := data.Name.ValueString() input := &route53.CreateCidrCollectionInput{ @@ -106,7 +109,7 @@ func (r *resourceCIDRCollection) Read(ctx context.Context, request resource.Read return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) output, err := findCIDRCollectionByID(ctx, conn, data.ID.ValueString()) @@ -143,7 +146,7 @@ func (r *resourceCIDRCollection) Delete(ctx context.Context, request resource.De return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) tflog.Debug(ctx, "deleting Route 53 CIDR Collection", map[string]interface{}{ "id": data.ID.ValueString(), diff --git a/internal/service/route53/cidr_collection_test.go b/internal/service/route53/cidr_collection_test.go index e24df77002d..a9186942d04 100644 --- a/internal/service/route53/cidr_collection_test.go +++ b/internal/service/route53/cidr_collection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -71,7 +74,7 @@ func TestAccRoute53CIDRCollection_disappears(t *testing.T) { func testAccCheckCIDRCollectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_cidr_collection" { @@ -106,7 +109,7 @@ func testAccCheckCIDRCollectionExists(ctx context.Context, n string, v *route53. return fmt.Errorf("No Route 53 CIDR Collection ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) output, err := tfroute53.FindCIDRCollectionByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53/cidr_location.go b/internal/service/route53/cidr_location.go index 91a32c759cc..e5ca23f825d 100644 --- a/internal/service/route53/cidr_location.go +++ b/internal/service/route53/cidr_location.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -20,8 +23,8 @@ import ( "github.com/hashicorp/terraform-plugin-log/tflog" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" fwtypes "github.com/hashicorp/terraform-provider-aws/internal/framework/types" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -78,7 +81,7 @@ func (r *resourceCIDRLocation) Create(ctx context.Context, request resource.Crea return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) collectionID := data.CIDRCollectionID.ValueString() collection, err := findCIDRCollectionByID(ctx, conn, collectionID) @@ -130,7 +133,7 @@ func (r *resourceCIDRLocation) Read(ctx context.Context, request resource.ReadRe return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) cidrBlocks, err := findCIDRLocationByTwoPartKey(ctx, conn, collectionID, name) @@ -177,7 +180,7 @@ func (r *resourceCIDRLocation) Update(ctx context.Context, request resource.Upda return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) collection, err := findCIDRCollectionByID(ctx, conn, collectionID) @@ -255,7 +258,7 @@ func (r *resourceCIDRLocation) Delete(ctx context.Context, request resource.Dele return } - conn := r.Meta().Route53Conn() + conn := r.Meta().Route53Conn(ctx) collection, err := findCIDRCollectionByID(ctx, conn, collectionID) diff --git a/internal/service/route53/cidr_location_test.go b/internal/service/route53/cidr_location_test.go index 0fd40e51b00..a1e68bb0618 100644 --- a/internal/service/route53/cidr_location_test.go +++ b/internal/service/route53/cidr_location_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -125,7 +128,7 @@ func TestAccRoute53CIDRLocation_update(t *testing.T) { func testAccCheckCIDRLocationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_cidr_location" { @@ -172,7 +175,7 @@ func testAccCheckCIDRLocationExists(ctx context.Context, n string) resource.Test return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) _, err = tfroute53.FindCIDRLocationByTwoPartKey(ctx, conn, collectionID, name) diff --git a/internal/service/route53/delegation_set.go b/internal/service/route53/delegation_set.go index 21772881a87..7454c223eb1 100644 --- a/internal/service/route53/delegation_set.go +++ b/internal/service/route53/delegation_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -50,7 +53,7 @@ func ResourceDelegationSet() *schema.Resource { func resourceDelegationSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - r53 := meta.(*conns.AWSClient).Route53Conn() + r53 := meta.(*conns.AWSClient).Route53Conn(ctx) callerRef := id.UniqueId() if v, ok := d.GetOk("reference_name"); ok { @@ -75,7 +78,7 @@ func resourceDelegationSetCreate(ctx context.Context, d *schema.ResourceData, me func resourceDelegationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - r53 := meta.(*conns.AWSClient).Route53Conn() + r53 := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.GetReusableDelegationSetInput{ Id: aws.String(CleanDelegationSetID(d.Id())), @@ -104,7 +107,7 @@ func resourceDelegationSetRead(ctx context.Context, d *schema.ResourceData, meta func resourceDelegationSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - r53 := meta.(*conns.AWSClient).Route53Conn() + r53 := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.DeleteReusableDelegationSetInput{ Id: aws.String(CleanDelegationSetID(d.Id())), diff --git a/internal/service/route53/delegation_set_data_source.go b/internal/service/route53/delegation_set_data_source.go index 43e6c0390fb..4f3a493734f 100644 --- a/internal/service/route53/delegation_set_data_source.go +++ b/internal/service/route53/delegation_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -43,7 +46,7 @@ func DataSourceDelegationSet() *schema.Resource { func dataSourceDelegationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) dSetID := d.Get("id").(string) diff --git a/internal/service/route53/delegation_set_data_source_test.go b/internal/service/route53/delegation_set_data_source_test.go index 092eb7b5f7c..917cb1aed9c 100644 --- a/internal/service/route53/delegation_set_data_source_test.go +++ b/internal/service/route53/delegation_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( diff --git a/internal/service/route53/delegation_set_test.go b/internal/service/route53/delegation_set_test.go index 63f7a15bc32..059dff787a5 100644 --- a/internal/service/route53/delegation_set_test.go +++ b/internal/service/route53/delegation_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -109,7 +112,7 @@ func TestAccRoute53DelegationSet_disappears(t *testing.T) { func testAccCheckDelegationSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_delegation_set" { continue @@ -126,7 +129,7 @@ func testAccCheckDelegationSetDestroy(ctx context.Context) resource.TestCheckFun func testAccCheckDelegationSetExists(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) @@ -155,7 +158,7 @@ func testAccCheckDelegationSetExists(ctx context.Context, n string) resource.Tes func testAccCheckNameServersMatch(ctx context.Context, delegationSetName, zoneName string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) delegationSetLocal, ok := s.RootModule().Resources[delegationSetName] if !ok { diff --git a/internal/service/route53/enum.go b/internal/service/route53/enum.go index be2611b7bbb..579c7067333 100644 --- a/internal/service/route53/enum.go +++ b/internal/service/route53/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 const ( diff --git a/internal/service/route53/exports_test.go b/internal/service/route53/exports_test.go index 6e086c3a2e4..8247b2b87cf 100644 --- a/internal/service/route53/exports_test.go +++ b/internal/service/route53/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 // Exports for use in tests only. diff --git a/internal/service/route53/find.go b/internal/service/route53/find.go index fae431eadd1..36a410070d0 100644 --- a/internal/service/route53/find.go +++ b/internal/service/route53/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/flex.go b/internal/service/route53/flex.go index 09a4ab39b27..6351c1d3169 100644 --- a/internal/service/route53/flex.go +++ b/internal/service/route53/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/flex_test.go b/internal/service/route53/flex_test.go index 551a12450c7..683b7ce43b5 100644 --- a/internal/service/route53/flex_test.go +++ b/internal/service/route53/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/generate.go b/internal/service/route53/generate.go index 6c428ddfe96..22a91d87fef 100644 --- a/internal/service/route53/generate.go +++ b/internal/service/route53/generate.go @@ -1,6 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=ListTrafficPolicies -Paginator=TrafficPolicyIdMarker list_traffic_policies_pages_gen.go //go:generate go run ../../generate/listpages/main.go -ListOps=ListTrafficPolicyVersions -Paginator=TrafficPolicyVersionMarker list_traffic_policy_versions_pages_gen.go //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceId -ListTagsOutTagsElem=ResourceTagSet.Tags -ServiceTagsSlice -TagOp=ChangeTagsForResource -TagInIDElem=ResourceId -TagInTagsElem=AddTags -TagResTypeElem=ResourceType -UntagOp=ChangeTagsForResource -UntagInTagsElem=RemoveTagKeys -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package route53 diff --git a/internal/service/route53/health_check.go b/internal/service/route53/health_check.go index be6c1d34189..fb41634b7a5 100644 --- a/internal/service/route53/health_check.go +++ b/internal/service/route53/health_check.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -9,7 +12,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/arn" - "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/service/route53" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" @@ -131,17 +133,8 @@ func ResourceHealthCheck() *schema.Resource { MinItems: 3, MaxItems: 64, Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: validation.StringInSlice([]string{ - endpoints.UsWest1RegionID, - endpoints.UsWest2RegionID, - endpoints.UsEast1RegionID, - endpoints.EuWest1RegionID, - endpoints.SaEast1RegionID, - endpoints.ApSoutheast1RegionID, - endpoints.ApSoutheast2RegionID, - endpoints.ApNortheast1RegionID, - }, true), + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(route53.HealthCheckRegion_Values(), false), }, Optional: true, }, @@ -186,7 +179,7 @@ func ResourceHealthCheck() *schema.Resource { func resourceHealthCheckCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) healthCheckType := d.Get("type").(string) healthCheckConfig := &route53.HealthCheckConfig{ @@ -290,7 +283,7 @@ func resourceHealthCheckCreate(ctx context.Context, d *schema.ResourceData, meta d.SetId(aws.StringValue(output.HealthCheck.Id)) - if err := createTags(ctx, conn, d.Id(), route53.TagResourceTypeHealthcheck, GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), route53.TagResourceTypeHealthcheck, getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Route53 Health Check (%s) tags: %s", d.Id(), err) } @@ -299,7 +292,7 @@ func resourceHealthCheckCreate(ctx context.Context, d *schema.ResourceData, meta func resourceHealthCheckRead(ctx context.Context, d *schema.ResourceData, meta interface{}) 
diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) output, err := FindHealthCheckByID(ctx, conn, d.Id()) @@ -347,7 +340,7 @@ func resourceHealthCheckRead(ctx context.Context, d *schema.ResourceData, meta i func resourceHealthCheckUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &route53.UpdateHealthCheckInput{ @@ -427,7 +420,7 @@ func resourceHealthCheckUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceHealthCheckDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) log.Printf("[DEBUG] Deleting Route53 Health Check: %s", d.Id()) _, err := conn.DeleteHealthCheckWithContext(ctx, &route53.DeleteHealthCheckInput{ diff --git a/internal/service/route53/health_check_test.go b/internal/service/route53/health_check_test.go index 620a4524277..35df63109b3 100644 --- a/internal/service/route53/health_check_test.go +++ b/internal/service/route53/health_check_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -420,7 +423,7 @@ func TestAccRoute53HealthCheck_disappears(t *testing.T) { func testAccCheckHealthCheckDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_health_check" { @@ -455,7 +458,7 @@ func testAccCheckHealthCheckExists(ctx context.Context, n string, v *route53.Hea return fmt.Errorf("No Route53 Health Check ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) output, err := tfroute53.FindHealthCheckByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53/hosted_zone_dnssec.go b/internal/service/route53/hosted_zone_dnssec.go index 2c5c99ff042..ff63d07ea1e 100644 --- a/internal/service/route53/hosted_zone_dnssec.go +++ b/internal/service/route53/hosted_zone_dnssec.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -48,7 +51,7 @@ func ResourceHostedZoneDNSSEC() *schema.Resource { func resourceHostedZoneDNSSECCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) hostedZoneID := d.Get("hosted_zone_id").(string) signingStatus := d.Get("signing_status").(string) @@ -77,7 +80,7 @@ func resourceHostedZoneDNSSECCreate(ctx context.Context, d *schema.ResourceData, func resourceHostedZoneDNSSECRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) hostedZoneDnssec, err := FindHostedZoneDNSSEC(ctx, conn, d.Id()) @@ -118,7 +121,7 @@ func resourceHostedZoneDNSSECRead(ctx context.Context, d *schema.ResourceData, m func resourceHostedZoneDNSSECUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) if d.HasChange("signing_status") { signingStatus := d.Get("signing_status").(string) @@ -146,7 +149,7 @@ func resourceHostedZoneDNSSECUpdate(ctx context.Context, d *schema.ResourceData, func resourceHostedZoneDNSSECDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.DisableHostedZoneDNSSECInput{ HostedZoneId: aws.String(d.Id()), diff --git a/internal/service/route53/hosted_zone_dnssec_test.go b/internal/service/route53/hosted_zone_dnssec_test.go index 3b9ad37b2dd..a6e061a80f6 100644 --- a/internal/service/route53/hosted_zone_dnssec_test.go +++ 
b/internal/service/route53/hosted_zone_dnssec_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -115,7 +118,7 @@ func TestAccRoute53HostedZoneDNSSEC_signingStatus(t *testing.T) { func testAccCheckHostedZoneDNSSECDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_hosted_zone_dnssec" { @@ -157,7 +160,7 @@ func testAccHostedZoneDNSSECExists(ctx context.Context, resourceName string) res return fmt.Errorf("resource %s has not set its id", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) hostedZoneDnssec, err := tfroute53.FindHostedZoneDNSSEC(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53/id.go b/internal/service/route53/id.go index a8748fa4eef..e1df458b167 100644 --- a/internal/service/route53/id.go +++ b/internal/service/route53/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/key_signing_key.go b/internal/service/route53/key_signing_key.go index a7a612773bf..d04916f5842 100644 --- a/internal/service/route53/key_signing_key.go +++ b/internal/service/route53/key_signing_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -104,7 +107,7 @@ func ResourceKeySigningKey() *schema.Resource { func resourceKeySigningKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) hostedZoneID := d.Get("hosted_zone_id").(string) name := d.Get("name").(string) @@ -144,7 +147,7 @@ func resourceKeySigningKeyCreate(ctx context.Context, d *schema.ResourceData, me func resourceKeySigningKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) hostedZoneID, name, err := KeySigningKeyParseResourceID(d.Id()) @@ -200,7 +203,7 @@ func resourceKeySigningKeyRead(ctx context.Context, d *schema.ResourceData, meta func resourceKeySigningKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) if d.HasChange("status") { status := d.Get("status").(string) @@ -254,7 +257,7 @@ func resourceKeySigningKeyUpdate(ctx context.Context, d *schema.ResourceData, me func resourceKeySigningKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) status := d.Get("status").(string) diff --git a/internal/service/route53/key_signing_key_test.go b/internal/service/route53/key_signing_key_test.go index 444299be068..5d7fa997ed5 100644 --- a/internal/service/route53/key_signing_key_test.go +++ b/internal/service/route53/key_signing_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -130,7 +133,7 @@ func TestAccRoute53KeySigningKey_status(t *testing.T) { func testAccCheckKeySigningKeyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_key_signing_key" { @@ -172,7 +175,7 @@ func testAccKeySigningKeyExists(ctx context.Context, resourceName string) resour return fmt.Errorf("resource %s has not set its id", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) keySigningKey, err := tfroute53.FindKeySigningKeyByResourceID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53/list_pages.go b/internal/service/route53/list_pages.go index 226735d1013..70a0e5f4620 100644 --- a/internal/service/route53/list_pages.go +++ b/internal/service/route53/list_pages.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/query_log.go b/internal/service/route53/query_log.go index 5fbf5618bbe..feb60f69da1 100644 --- a/internal/service/route53/query_log.go +++ b/internal/service/route53/query_log.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -48,7 +51,7 @@ func ResourceQueryLog() *schema.Resource { } func resourceQueryLogCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.CreateQueryLoggingConfigInput{ CloudWatchLogsLogGroupArn: aws.String(d.Get("cloudwatch_log_group_arn").(string)), @@ -67,7 +70,7 @@ func resourceQueryLogCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceQueryLogRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) output, err := FindQueryLoggingConfigByID(ctx, conn, d.Id()) @@ -94,7 +97,7 @@ func resourceQueryLogRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceQueryLogDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) log.Printf("[DEBUG] Deleting Route53 Query Logging Config: %s", d.Id()) _, err := conn.DeleteQueryLoggingConfigWithContext(ctx, &route53.DeleteQueryLoggingConfigInput{ diff --git a/internal/service/route53/query_log_test.go b/internal/service/route53/query_log_test.go index a5efd003a48..424e2dc15f1 100644 --- a/internal/service/route53/query_log_test.go +++ b/internal/service/route53/query_log_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -118,7 +121,7 @@ func testAccCheckQueryLogExists(ctx context.Context, n string, v *route53.QueryL return fmt.Errorf("No Route53 Query Logging Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) output, err := tfroute53.FindQueryLoggingConfigByID(ctx, conn, rs.Primary.ID) @@ -134,7 +137,7 @@ func testAccCheckQueryLogExists(ctx context.Context, n string, v *route53.QueryL func testAccCheckQueryLogDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_query_log" { diff --git a/internal/service/route53/record.go b/internal/service/route53/record.go index 88172c234ea..4f2eb6d5efb 100644 --- a/internal/service/route53/record.go +++ b/internal/service/route53/record.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -276,7 +279,7 @@ func ResourceRecord() *schema.Resource { func resourceRecordCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) zoneID := CleanZoneID(d.Get("zone_id").(string)) zoneRecord, err := FindHostedZoneByID(ctx, conn, zoneID) @@ -341,7 +344,7 @@ func resourceRecordCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceRecordRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) record, fqdn, err := FindResourceRecordSetByFourPartKey(ctx, conn, CleanZoneID(d.Get("zone_id").(string)), d.Get("name").(string), d.Get("type").(string), d.Get("set_identifier").(string)) @@ -434,7 +437,7 @@ func resourceRecordRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceRecordUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) // Route 53 supports CREATE, DELETE, and UPSERT actions. We use UPSERT, and // AWS dynamically determines if a record should be created or updated. 
@@ -625,7 +628,7 @@ func resourceRecordUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceRecordDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) zoneID := CleanZoneID(d.Get("zone_id").(string)) var name string @@ -758,8 +761,6 @@ func ChangeResourceRecordSets(ctx context.Context, conn *route53.Route53, input } func WaitForRecordSetToSync(ctx context.Context, conn *route53.Route53, requestId string) error { - rand.Seed(time.Now().UTC().UnixNano()) - wait := retry.StateChangeConf{ Pending: []string{route53.ChangeStatusPending}, Target: []string{route53.ChangeStatusInsync}, diff --git a/internal/service/route53/record_migrate.go b/internal/service/route53/record_migrate.go index 7b0ab6e1959..1a327ce6ab7 100644 --- a/internal/service/route53/record_migrate.go +++ b/internal/service/route53/record_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/record_migrate_test.go b/internal/service/route53/record_migrate_test.go index 0a418a4f41f..7d345c68174 100644 --- a/internal/service/route53/record_migrate_test.go +++ b/internal/service/route53/record_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( diff --git a/internal/service/route53/record_test.go b/internal/service/route53/record_test.go index f78467b4c2f..58dae8d6b1a 100644 --- a/internal/service/route53/record_test.go +++ b/internal/service/route53/record_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -1530,7 +1533,7 @@ func testAccRecordOverwriteExpectErrorCheck(t *testing.T) resource.ErrorCheckFun func testAccCheckRecordDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_record" { @@ -1570,7 +1573,7 @@ func testAccCheckRecordExists(ctx context.Context, n string, v *route53.Resource return fmt.Errorf("No Route 53 Record ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) parts := tfroute53.ParseRecordID(rs.Primary.ID) zone := parts[0] @@ -1591,7 +1594,7 @@ func testAccCheckRecordExists(ctx context.Context, n string, v *route53.Resource func testAccCheckRecordDoesNotExist(ctx context.Context, zoneResourceName string, recordName string, recordType string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) zoneResource, ok := s.RootModule().Resources[zoneResourceName] if !ok { return fmt.Errorf("Not found: %s", zoneResourceName) diff --git a/internal/service/route53/service_package.go b/internal/service/route53/service_package.go new file mode 100644 index 00000000000..8f3f345c4b4 --- /dev/null +++ b/internal/service/route53/service_package.go @@ -0,0 +1,36 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package route53 + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + route53_sdkv1 "github.com/aws/aws-sdk-go/service/route53" +) + +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*route53_sdkv1.Route53, error) { + sess := m["session"].(*session_sdkv1.Session) + config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(m["endpoint"].(string))} + + // Force "global" services to correct Regions. + switch m["partition"].(string) { + case endpoints_sdkv1.AwsPartitionID: + config.Region = aws_sdkv1.String(endpoints_sdkv1.UsWest2RegionID) + case endpoints_sdkv1.AwsCnPartitionID: + // The AWS Go SDK is missing endpoint information for Route 53 in the AWS China partition. + // This can likely be removed in the future. 
+ if aws_sdkv1.StringValue(config.Endpoint) == "" { + config.Endpoint = aws_sdkv1.String("https://api.route53.cn") + } + config.Region = aws_sdkv1.String(endpoints_sdkv1.CnNorthwest1RegionID) + case endpoints_sdkv1.AwsUsGovPartitionID: + config.Region = aws_sdkv1.String(endpoints_sdkv1.UsGovWest1RegionID) + } + + return route53_sdkv1.New(sess.Copy(config)), nil +} diff --git a/internal/service/route53/service_package_gen.go b/internal/service/route53/service_package_gen.go index 2d723164070..f2709931fa4 100644 --- a/internal/service/route53/service_package_gen.go +++ b/internal/service/route53/service_package_gen.go @@ -5,6 +5,7 @@ package route53 import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -106,4 +107,6 @@ func (p *servicePackage) ServicePackageName() string { return names.Route53 } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/route53/status.go b/internal/service/route53/status.go index 1ee360d3c46..eb9c5f89f5e 100644 --- a/internal/service/route53/status.go +++ b/internal/service/route53/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/sweep.go b/internal/service/route53/sweep.go index 0a9403edfcc..f4cf2be036d 100644 --- a/internal/service/route53/sweep.go +++ b/internal/service/route53/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -14,7 +17,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -64,13 +66,13 @@ func init() { func sweepHealthChecks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53Conn() + conn := client.Route53Conn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -102,7 +104,7 @@ func sweepHealthChecks(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing Route53 Health Checks for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources, tfresource.WithDelayRand(1*time.Minute), tfresource.WithMinPollInterval(10*time.Second), tfresource.WithPollInterval(18*time.Second)); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources, tfresource.WithDelayRand(1*time.Minute), tfresource.WithMinPollInterval(10*time.Second), tfresource.WithPollInterval(18*time.Second)); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Route53 Health Checks for %s: %w", region, err)) } @@ -116,13 +118,13 @@ func sweepHealthChecks(region string) error { func sweepKeySigningKeys(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53Conn() + conn := 
client.Route53Conn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -182,7 +184,7 @@ func sweepKeySigningKeys(region string) error { errs = multierror.Append(errs, fmt.Errorf("getting Route53 Key-Signing Keys for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources, tfresource.WithDelayRand(1*time.Minute), tfresource.WithMinPollInterval(30*time.Second), tfresource.WithPollInterval(30*time.Second)); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources, tfresource.WithDelayRand(1*time.Minute), tfresource.WithMinPollInterval(30*time.Second), tfresource.WithPollInterval(30*time.Second)); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Route53 Key-Signing Keys for %s: %w", region, err)) } @@ -196,11 +198,11 @@ func sweepKeySigningKeys(region string) error { func sweepQueryLogs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53Conn() + conn := client.Route53Conn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -232,7 +234,7 @@ func sweepQueryLogs(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving Route53 query logging configurations: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Route53 query logging configurations: %w", err)) } @@ -241,11 +243,11 @@ func sweepQueryLogs(region string) error { func sweepTrafficPolicies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, 
region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53Conn() + conn := client.Route53Conn(ctx) input := &route53.ListTrafficPoliciesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -274,7 +276,7 @@ func sweepTrafficPolicies(region string) error { return fmt.Errorf("listing Route 53 Traffic Policies (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("sweeping Route 53 Traffic Policies (%s): %w", region, err) @@ -285,11 +287,11 @@ func sweepTrafficPolicies(region string) error { func sweepTrafficPolicyInstances(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53Conn() + conn := client.Route53Conn(ctx) input := &route53.ListTrafficPolicyInstancesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -318,7 +320,7 @@ func sweepTrafficPolicyInstances(region string) error { return fmt.Errorf("listing Route 53 Traffic Policy Instances (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("sweeping Route 53 Traffic Policy Instances (%s): %w", region, err) @@ -329,13 +331,13 @@ func sweepTrafficPolicyInstances(region string) error { func sweepZones(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53Conn() + conn := client.Route53Conn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -377,7 
+379,7 @@ func sweepZones(region string) error { errs = multierror.Append(errs, fmt.Errorf("describing Route53 Hosted Zones for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources, tfresource.WithDelayRand(1*time.Minute), tfresource.WithMinPollInterval(10*time.Second), tfresource.WithPollInterval(18*time.Second)); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources, tfresource.WithDelayRand(1*time.Minute), tfresource.WithMinPollInterval(10*time.Second), tfresource.WithPollInterval(18*time.Second)); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Route53 Hosted Zones for %s: %w", region, err)) } diff --git a/internal/service/route53/tags_gen.go b/internal/service/route53/tags_gen.go index 2e5dbd5878c..e1d7720b5ed 100644 --- a/internal/service/route53/tags_gen.go +++ b/internal/service/route53/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists route53 service tags. +// listTags lists route53 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn route53iface.Route53API, identifier, resourceType string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn route53iface.Route53API, identifier, resourceType string) (tftags.KeyValueTags, error) { input := &route53.ListTagsForResourceInput{ ResourceId: aws.String(identifier), ResourceType: aws.String(resourceType), @@ -35,7 +35,7 @@ func ListTags(ctx context.Context, conn route53iface.Route53API, identifier, res // ListTags lists route53 service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier, resourceType string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).Route53Conn(), identifier, resourceType) + tags, err := listTags(ctx, meta.(*conns.AWSClient).Route53Conn(ctx), identifier, resourceType) if err != nil { return err @@ -77,9 +77,9 @@ func KeyValueTags(ctx context.Context, tags []*route53.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns route53 service tags from Context. +// getTagsIn returns route53 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*route53.Tag { +func getTagsIn(ctx context.Context) []*route53.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -89,8 +89,8 @@ func GetTagsIn(ctx context.Context) []*route53.Tag { return nil } -// SetTagsOut sets route53 service tags in Context. -func SetTagsOut(ctx context.Context, tags []*route53.Tag) { +// setTagsOut sets route53 service tags in Context. +func setTagsOut(ctx context.Context, tags []*route53.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -102,13 +102,13 @@ func createTags(ctx context.Context, conn route53iface.Route53API, identifier, r return nil } - return UpdateTags(ctx, conn, identifier, resourceType, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, resourceType, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates route53 service tags. +// updateTags updates route53 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn route53iface.Route53API, identifier, resourceType string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn route53iface.Route53API, identifier, resourceType string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) removedTags := oldTags.Removed(newTags) @@ -146,5 +146,5 @@ func UpdateTags(ctx context.Context, conn route53iface.Route53API, identifier, r // UpdateTags updates route53 service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier, resourceType string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).Route53Conn(), identifier, resourceType, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).Route53Conn(ctx), identifier, resourceType, oldTags, newTags) } diff --git a/internal/service/route53/traffic_policy.go b/internal/service/route53/traffic_policy.go index 943d03e3c17..754f1866da4 100644 --- a/internal/service/route53/traffic_policy.go +++ b/internal/service/route53/traffic_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -77,7 +80,7 @@ func ResourceTrafficPolicy() *schema.Resource { } func resourceTrafficPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) name := d.Get("name").(string) input := &route53.CreateTrafficPolicyInput{ @@ -95,7 +98,7 @@ func resourceTrafficPolicyCreate(ctx context.Context, d *schema.ResourceData, me }, route53.ErrCodeNoSuchTrafficPolicy) if err != nil { - return diag.Errorf("error creating Route53 Traffic Policy (%s): %s", name, err) + return diag.Errorf("creating Route53 Traffic Policy (%s): %s", name, err) } d.SetId(aws.StringValue(outputRaw.(*route53.CreateTrafficPolicyOutput).TrafficPolicy.Id)) @@ -104,7 +107,7 @@ func resourceTrafficPolicyCreate(ctx context.Context, d *schema.ResourceData, me } func resourceTrafficPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) trafficPolicy, err := FindTrafficPolicyByID(ctx, conn, d.Id()) @@ -115,7 +118,7 @@ func resourceTrafficPolicyRead(ctx context.Context, d *schema.ResourceData, meta } if err != nil { - return diag.Errorf("error reading Route53 Traffic Policy (%s): %s", d.Id(), err) + return diag.Errorf("reading Route53 Traffic Policy (%s): %s", d.Id(), err) } d.Set("comment", trafficPolicy.Comment) @@ -128,7 +131,7 @@ func resourceTrafficPolicyRead(ctx context.Context, d *schema.ResourceData, meta } func resourceTrafficPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.UpdateTrafficPolicyCommentInput{ Id: aws.String(d.Id()), @@ -143,14 +146,14 @@ func resourceTrafficPolicyUpdate(ctx context.Context, d 
*schema.ResourceData, me _, err := conn.UpdateTrafficPolicyCommentWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Route53 Traffic Policy (%s) comment: %s", d.Id(), err) + return diag.Errorf("updating Route53 Traffic Policy (%s) comment: %s", d.Id(), err) } return resourceTrafficPolicyRead(ctx, d, meta) } func resourceTrafficPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.ListTrafficPolicyVersionsInput{ Id: aws.String(d.Id()), @@ -168,7 +171,7 @@ func resourceTrafficPolicyDelete(ctx context.Context, d *schema.ResourceData, me }) if err != nil { - return diag.Errorf("error listing Route 53 Traffic Policy (%s) versions: %s", d.Id(), err) + return diag.Errorf("listing Route 53 Traffic Policy (%s) versions: %s", d.Id(), err) } for _, v := range output { @@ -185,7 +188,7 @@ func resourceTrafficPolicyDelete(ctx context.Context, d *schema.ResourceData, me } if err != nil { - return diag.Errorf("error deleting Route 53 Traffic Policy (%s) version (%d): %s", d.Id(), version, err) + return diag.Errorf("deleting Route 53 Traffic Policy (%s) version (%d): %s", d.Id(), version, err) } } diff --git a/internal/service/route53/traffic_policy_document_data_source.go b/internal/service/route53/traffic_policy_document_data_source.go index 4c034b2354c..9a25bb87364 100644 --- a/internal/service/route53/traffic_policy_document_data_source.go +++ b/internal/service/route53/traffic_policy_document_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( diff --git a/internal/service/route53/traffic_policy_document_data_source_test.go b/internal/service/route53/traffic_policy_document_data_source_test.go index c3080f4e75c..edfd08c17ec 100644 --- a/internal/service/route53/traffic_policy_document_data_source_test.go +++ b/internal/service/route53/traffic_policy_document_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( diff --git a/internal/service/route53/traffic_policy_document_model.go b/internal/service/route53/traffic_policy_document_model.go index c3063f4f5a9..cc86508c37f 100644 --- a/internal/service/route53/traffic_policy_document_model.go +++ b/internal/service/route53/traffic_policy_document_model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 const ( diff --git a/internal/service/route53/traffic_policy_instance.go b/internal/service/route53/traffic_policy_instance.go index 4f997e7872c..c2f9ba03f62 100644 --- a/internal/service/route53/traffic_policy_instance.go +++ b/internal/service/route53/traffic_policy_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -64,7 +67,7 @@ func ResourceTrafficPolicyInstance() *schema.Resource { } func resourceTrafficPolicyInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) name := d.Get("name").(string) input := &route53.CreateTrafficPolicyInstanceInput{ @@ -81,20 +84,20 @@ func resourceTrafficPolicyInstanceCreate(ctx context.Context, d *schema.Resource }, route53.ErrCodeNoSuchTrafficPolicy) if err != nil { - return diag.Errorf("error creating Route53 Traffic Policy Instance (%s): %s", name, err) + return diag.Errorf("creating Route53 Traffic Policy Instance (%s): %s", name, err) } d.SetId(aws.StringValue(outputRaw.(*route53.CreateTrafficPolicyInstanceOutput).TrafficPolicyInstance.Id)) if _, err = waitTrafficPolicyInstanceStateCreated(ctx, conn, d.Id()); err != nil { - return diag.Errorf("error waiting for Route53 Traffic Policy Instance (%s) create: %s", d.Id(), err) + return diag.Errorf("waiting for Route53 Traffic Policy Instance (%s) create: %s", d.Id(), err) } return resourceTrafficPolicyInstanceRead(ctx, d, meta) } func resourceTrafficPolicyInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) trafficPolicyInstance, err := FindTrafficPolicyInstanceByID(ctx, conn, d.Id()) @@ -105,7 +108,7 @@ func resourceTrafficPolicyInstanceRead(ctx context.Context, d *schema.ResourceDa } if err != nil { - return diag.Errorf("error reading Route53 Traffic Policy Instance (%s): %s", d.Id(), err) + return diag.Errorf("reading Route53 Traffic Policy Instance (%s): %s", d.Id(), err) } d.Set("hosted_zone_id", trafficPolicyInstance.HostedZoneId) @@ -118,7 +121,7 @@ func resourceTrafficPolicyInstanceRead(ctx context.Context, d *schema.ResourceDa } func 
resourceTrafficPolicyInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.UpdateTrafficPolicyInstanceInput{ Id: aws.String(d.Id()), @@ -131,18 +134,18 @@ func resourceTrafficPolicyInstanceUpdate(ctx context.Context, d *schema.Resource _, err := conn.UpdateTrafficPolicyInstanceWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Route53 Traffic Policy Instance (%s): %s", d.Id(), err) + return diag.Errorf("updating Route53 Traffic Policy Instance (%s): %s", d.Id(), err) } if _, err = waitTrafficPolicyInstanceStateUpdated(ctx, conn, d.Id()); err != nil { - return diag.Errorf("error waiting for Route53 Traffic Policy Instance (%s) update: %s", d.Id(), err) + return diag.Errorf("waiting for Route53 Traffic Policy Instance (%s) update: %s", d.Id(), err) } return resourceTrafficPolicyInstanceRead(ctx, d, meta) } func resourceTrafficPolicyInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) log.Printf("[INFO] Delete Route53 Traffic Policy Instance: %s", d.Id()) _, err := conn.DeleteTrafficPolicyInstanceWithContext(ctx, &route53.DeleteTrafficPolicyInstanceInput{ @@ -154,11 +157,11 @@ func resourceTrafficPolicyInstanceDelete(ctx context.Context, d *schema.Resource } if err != nil { - return diag.Errorf("error deleting Route53 Traffic Policy Instance (%s): %s", d.Id(), err) + return diag.Errorf("deleting Route53 Traffic Policy Instance (%s): %s", d.Id(), err) } if _, err = waitTrafficPolicyInstanceStateDeleted(ctx, conn, d.Id()); err != nil { - return diag.Errorf("error waiting for Route53 Traffic Policy Instance (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for Route53 Traffic Policy Instance (%s) delete: %s", d.Id(), err) } return nil 
diff --git a/internal/service/route53/traffic_policy_instance_test.go b/internal/service/route53/traffic_policy_instance_test.go index 23fde512050..32dee59d8e8 100644 --- a/internal/service/route53/traffic_policy_instance_test.go +++ b/internal/service/route53/traffic_policy_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -122,7 +125,7 @@ func testAccCheckTrafficPolicyInstanceExists(ctx context.Context, n string, v *r return fmt.Errorf("No Route53 Traffic Policy Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) output, err := tfroute53.FindTrafficPolicyInstanceByID(ctx, conn, rs.Primary.ID) @@ -138,7 +141,7 @@ func testAccCheckTrafficPolicyInstanceExists(ctx context.Context, n string, v *r func testAccCheckTrafficPolicyInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_traffic_policy_instance" { diff --git a/internal/service/route53/traffic_policy_test.go b/internal/service/route53/traffic_policy_test.go index e4bd0d4dcad..4b5d5f617ad 100644 --- a/internal/service/route53/traffic_policy_test.go +++ b/internal/service/route53/traffic_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -120,7 +123,7 @@ func testAccCheckTrafficPolicyExists(ctx context.Context, n string, v *route53.T return fmt.Errorf("No Route53 Traffic Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) output, err := tfroute53.FindTrafficPolicyByID(ctx, conn, rs.Primary.ID) @@ -136,7 +139,7 @@ func testAccCheckTrafficPolicyExists(ctx context.Context, n string, v *route53.T func testAccCheckTrafficPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_traffic_policy" { diff --git a/internal/service/route53/vpc_association_authorization.go b/internal/service/route53/vpc_association_authorization.go index 38d64197a65..c35059d6fee 100644 --- a/internal/service/route53/vpc_association_authorization.go +++ b/internal/service/route53/vpc_association_authorization.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -13,6 +16,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) // @SDKResource("aws_route53_vpc_association_authorization") @@ -48,9 +52,9 @@ func ResourceVPCAssociationAuthorization() *schema.Resource { } } -func resourceVPCAssociationAuthorizationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceVPCAssociationAuthorizationCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) req := &route53.CreateVPCAssociationAuthorizationInput{ HostedZoneId: aws.String(d.Get("zone_id").(string)), @@ -64,21 +68,24 @@ func resourceVPCAssociationAuthorizationCreate(ctx context.Context, d *schema.Re req.VPC.VPCRegion = aws.String(v.(string)) } - log.Printf("[DEBUG] Creating Route53 VPC Association Authorization for hosted zone %s with VPC %s and region %s", *req.HostedZoneId, *req.VPC.VPCId, *req.VPC.VPCRegion) - _, err := conn.CreateVPCAssociationAuthorizationWithContext(ctx, req) + raw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, d.Timeout(schema.TimeoutCreate), func() (any, error) { + return conn.CreateVPCAssociationAuthorizationWithContext(ctx, req) + }, route53.ErrCodeConcurrentModification) if err != nil { return sdkdiag.AppendErrorf(diags, "creating Route53 VPC Association Authorization: %s", err) } + out := raw.(*route53.CreateVPCAssociationAuthorizationOutput) + // Store association id - d.SetId(fmt.Sprintf("%s:%s", *req.HostedZoneId, *req.VPC.VPCId)) + d.SetId(fmt.Sprintf("%s:%s", aws.StringValue(out.HostedZoneId), aws.StringValue(out.VPC.VPCId))) return append(diags, 
resourceVPCAssociationAuthorizationRead(ctx, d, meta)...) } -func resourceVPCAssociationAuthorizationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceVPCAssociationAuthorizationRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) zone_id, vpc_id, err := VPCAssociationAuthorizationParseID(d.Id()) if err != nil { @@ -126,9 +133,9 @@ func resourceVPCAssociationAuthorizationRead(ctx context.Context, d *schema.Reso return diags } -func resourceVPCAssociationAuthorizationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { +func resourceVPCAssociationAuthorizationDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) zone_id, vpc_id, err := VPCAssociationAuthorizationParseID(d.Id()) if err != nil { @@ -143,7 +150,9 @@ func resourceVPCAssociationAuthorizationDelete(ctx context.Context, d *schema.Re }, } - _, err = conn.DeleteVPCAssociationAuthorizationWithContext(ctx, &req) + _, err = tfresource.RetryWhenAWSErrCodeEquals(ctx, d.Timeout(schema.TimeoutCreate), func() (any, error) { + return conn.DeleteVPCAssociationAuthorizationWithContext(ctx, &req) + }, route53.ErrCodeConcurrentModification) if err != nil { return sdkdiag.AppendErrorf(diags, "deleting Route53 VPC Association Authorization (%s): %s", d.Id(), err) } diff --git a/internal/service/route53/vpc_association_authorization_test.go b/internal/service/route53/vpc_association_authorization_test.go index 1a872a75968..1c57542deda 100644 --- a/internal/service/route53/vpc_association_authorization_test.go +++ b/internal/service/route53/vpc_association_authorization_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -8,6 +11,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/route53" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -32,6 +36,7 @@ func TestAccRoute53VPCAssociationAuthorization_basic(t *testing.T) { Config: testAccVPCAssociationAuthorizationConfig_basic(), Check: resource.ComposeTestCheckFunc( testAccCheckVPCAssociationAuthorizationExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "vpc_region", acctest.Region()), ), }, { @@ -69,9 +74,68 @@ func TestAccRoute53VPCAssociationAuthorization_disappears(t *testing.T) { }) } +func TestAccRoute53VPCAssociationAuthorization_concurrent(t *testing.T) { + ctx := acctest.Context(t) + + resourceNameAlternate := "aws_route53_vpc_association_authorization.alternate" + resourceNameThird := "aws_route53_vpc_association_authorization.third" + + providers := make(map[string]*schema.Provider) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckAlternateAccount(t) + acctest.PreCheckThirdAccount(t) + }, + ErrorCheck: acctest.ErrorCheck(t, route53.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5FactoriesNamed(ctx, t, providers, acctest.ProviderName, acctest.ProviderNameAlternate, acctest.ProviderNameThird), + CheckDestroy: testAccCheckVPCAssociationAuthorizationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCAssociationAuthorizationConfig_concurrent(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCAssociationAuthorizationExists(ctx, resourceNameAlternate), + testAccCheckVPCAssociationAuthorizationExists(ctx, resourceNameThird), + ), + }, + }, + }) +} 
+ +func TestAccRoute53VPCAssociationAuthorization_crossRegion(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_route53_vpc_association_authorization.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckAlternateAccount(t) + }, + ErrorCheck: acctest.ErrorCheck(t, route53.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), + CheckDestroy: testAccCheckVPCAssociationAuthorizationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccVPCAssociationAuthorizationConfig_crossRegion(), + Check: resource.ComposeTestCheckFunc( + testAccCheckVPCAssociationAuthorizationExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "vpc_region", acctest.AlternateRegion()), + ), + }, + { + Config: testAccVPCAssociationAuthorizationConfig_crossRegion(), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckVPCAssociationAuthorizationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_vpc_association_authorization" { @@ -96,7 +160,7 @@ func testAccCheckVPCAssociationAuthorizationDestroy(ctx context.Context) resourc } for _, vpc := range res.VPCs { - if vpc_id == *vpc.VPCId { + if vpc_id == aws.StringValue(vpc.VPCId) { return fmt.Errorf("VPC association authorization for zone %v with %v still exists", zone_id, vpc_id) } } @@ -121,7 +185,7 @@ func testAccCheckVPCAssociationAuthorizationExists(ctx context.Context, n string return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) req := route53.ListVPCAssociationAuthorizationsInput{ HostedZoneId: 
aws.String(zone_id), @@ -133,7 +197,7 @@ func testAccCheckVPCAssociationAuthorizationExists(ctx context.Context, n string } for _, vpc := range res.VPCs { - if vpc_id == *vpc.VPCId { + if vpc_id == aws.StringValue(vpc.VPCId) { return nil } } @@ -143,12 +207,60 @@ func testAccCheckVPCAssociationAuthorizationExists(ctx context.Context, n string } func testAccVPCAssociationAuthorizationConfig_basic() string { - return acctest.ConfigAlternateAccountProvider() + ` + return acctest.ConfigCompose( + acctest.ConfigAlternateAccountProvider(), ` +resource "aws_route53_vpc_association_authorization" "test" { + zone_id = aws_route53_zone.test.id + vpc_id = aws_vpc.alternate.id +} + +resource "aws_vpc" "alternate" { + provider = "awsalternate" + cidr_block = cidrsubnet("10.0.0.0/8", 8, 1) + enable_dns_hostnames = true + enable_dns_support = true +} + +resource "aws_route53_zone" "test" { + name = "example.com" + + vpc { + vpc_id = aws_vpc.test.id + } +} + resource "aws_vpc" "test" { - cidr_block = "10.6.0.0/16" + cidr_block = cidrsubnet("10.0.0.0/8", 8, 0) enable_dns_hostnames = true enable_dns_support = true } +`) +} + +func testAccVPCAssociationAuthorizationConfig_concurrent(t *testing.T) string { + return acctest.ConfigCompose( + acctest.ConfigMultipleAccountProvider(t, 3), ` +resource "aws_route53_vpc_association_authorization" "alternate" { + zone_id = aws_route53_zone.test.id + vpc_id = aws_vpc.alternate.id + + # Try to encourage concurrency + depends_on = [ + aws_vpc.alternate, + aws_vpc.third + ] +} + +resource "aws_route53_vpc_association_authorization" "third" { + zone_id = aws_route53_zone.test.id + vpc_id = aws_vpc.third.id + + # Try to encourage concurrency + depends_on = [ + aws_vpc.alternate, + aws_vpc.third + ] +} resource "aws_route53_zone" "test" { name = "example.com" @@ -158,16 +270,61 @@ resource "aws_route53_zone" "test" { } } +resource "aws_vpc" "test" { + cidr_block = cidrsubnet("10.0.0.0/8", 8, 0) + enable_dns_hostnames = true + enable_dns_support = 
true +} + resource "aws_vpc" "alternate" { provider = "awsalternate" - cidr_block = "10.7.0.0/16" + cidr_block = cidrsubnet("10.0.0.0/8", 8, 1) + enable_dns_hostnames = true + enable_dns_support = true +} + +resource "aws_vpc" "third" { + provider = "awsthird" + cidr_block = cidrsubnet("10.0.0.0/8", 8, 2) enable_dns_hostnames = true enable_dns_support = true } +`) +} +func testAccVPCAssociationAuthorizationConfig_crossRegion() string { + return acctest.ConfigCompose( + acctest.ConfigAlternateAccountAlternateRegionProvider(), ` resource "aws_route53_vpc_association_authorization" "test" { - zone_id = aws_route53_zone.test.id - vpc_id = aws_vpc.alternate.id + zone_id = aws_route53_zone.test.id + vpc_id = aws_vpc.alternate.id + vpc_region = data.aws_region.alternate.name +} + +resource "aws_vpc" "alternate" { + provider = "awsalternate" + + cidr_block = cidrsubnet("10.0.0.0/8", 8, 1) + enable_dns_hostnames = true + enable_dns_support = true +} + +data "aws_region" "alternate" { + provider = "awsalternate" +} + +resource "aws_route53_zone" "test" { + name = "example.com" + + vpc { + vpc_id = aws_vpc.test.id + } +} + +resource "aws_vpc" "test" { + cidr_block = cidrsubnet("10.0.0.0/8", 8, 0) + enable_dns_hostnames = true + enable_dns_support = true } -` +`) } diff --git a/internal/service/route53/wait.go b/internal/service/route53/wait.go index a1b25ad5736..d8f577b2699 100644 --- a/internal/service/route53/wait.go +++ b/internal/service/route53/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -27,8 +30,6 @@ const ( ) func waitChangeInfoStatusInsync(ctx context.Context, conn *route53.Route53, changeID string) (*route53.ChangeInfo, error) { //nolint:unparam - rand.Seed(time.Now().UTC().UnixNano()) - // Route53 is vulnerable to throttling so longer delays, poll intervals helps significantly to avoid stateConf := &retry.StateChangeConf{ diff --git a/internal/service/route53/zone.go b/internal/service/route53/zone.go index 2e28d319767..dc080c4916e 100644 --- a/internal/service/route53/zone.go +++ b/internal/service/route53/zone.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -126,7 +129,7 @@ func ResourceZone() *schema.Resource { func resourceZoneCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) input := &route53.CreateHostedZoneInput{ CallerReference: aws.String(id.UniqueId()), @@ -162,7 +165,7 @@ func resourceZoneCreate(ctx context.Context, d *schema.ResourceData, meta interf } } - if err := createTags(ctx, conn, d.Id(), route53.TagResourceTypeHostedzone, GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, d.Id(), route53.TagResourceTypeHostedzone, getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Route53 Zone (%s) tags: %s", d.Id(), err) } @@ -182,7 +185,7 @@ func resourceZoneCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceZoneRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) output, err := FindHostedZoneByID(ctx, conn, d.Id()) @@ -244,7 +247,7 @@ func resourceZoneRead(ctx context.Context, d *schema.ResourceData, meta interfac func 
resourceZoneUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("comment") { @@ -298,7 +301,7 @@ func resourceZoneUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceZoneDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) if d.Get("force_destroy").(bool) { if err := deleteAllResourceRecordsFromHostedZone(ctx, conn, d.Id(), d.Get("name").(string)); err != nil { @@ -680,8 +683,6 @@ func hostedZoneVPCHash(v interface{}) int { } func waitForChangeSynchronization(ctx context.Context, conn *route53.Route53, changeID string) error { - rand.Seed(time.Now().UTC().UnixNano()) - conf := retry.StateChangeConf{ Pending: []string{route53.ChangeStatusPending}, Target: []string{route53.ChangeStatusInsync}, diff --git a/internal/service/route53/zone_association.go b/internal/service/route53/zone_association.go index 943575f77de..e17fa7c445b 100644 --- a/internal/service/route53/zone_association.go +++ b/internal/service/route53/zone_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -57,7 +60,7 @@ func ResourceZoneAssociation() *schema.Resource { func resourceZoneAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) vpcRegion := meta.(*conns.AWSClient).Region vpcID := d.Get("vpc_id").(string) @@ -104,7 +107,7 @@ func resourceZoneAssociationCreate(ctx context.Context, d *schema.ResourceData, func resourceZoneAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) zoneID, vpcID, vpcRegion, err := ZoneAssociationParseID(d.Id()) @@ -153,7 +156,7 @@ func resourceZoneAssociationRead(ctx context.Context, d *schema.ResourceData, me func resourceZoneAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) zoneID, vpcID, vpcRegion, err := ZoneAssociationParseID(d.Id()) diff --git a/internal/service/route53/zone_association_test.go b/internal/service/route53/zone_association_test.go index bb3d0159356..5c7e1fc2fd4 100644 --- a/internal/service/route53/zone_association_test.go +++ b/internal/service/route53/zone_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -161,13 +164,43 @@ func TestAccRoute53ZoneAssociation_crossRegion(t *testing.T) { CheckDestroy: testAccCheckZoneAssociationDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccZoneAssociationConfig_region(domainName), + Config: testAccZoneAssociationConfig_crossRegion(domainName), Check: resource.ComposeTestCheckFunc( testAccCheckZoneAssociationExists(ctx, resourceName), ), }, { - Config: testAccZoneAssociationConfig_region(domainName), + Config: testAccZoneAssociationConfig_crossRegion(domainName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccRoute53ZoneAssociation_crossAccountAndRegion(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_route53_zone_association.test" + domainName := acctest.RandomFQDomainName() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckAlternateAccount(t) + }, + ErrorCheck: acctest.ErrorCheck(t, route53.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5FactoriesAlternate(ctx, t), + CheckDestroy: testAccCheckZoneAssociationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccZoneAssociationConfig_crossAccountAndRegion(domainName), + Check: resource.ComposeTestCheckFunc( + testAccCheckZoneAssociationExists(ctx, resourceName), + ), + }, + { + Config: testAccZoneAssociationConfig_crossAccountAndRegion(domainName), ResourceName: resourceName, ImportState: true, ImportStateVerify: true, @@ -178,7 +211,7 @@ func TestAccRoute53ZoneAssociation_crossRegion(t *testing.T) { func testAccCheckZoneAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != 
"aws_route53_zone_association" { continue @@ -225,7 +258,7 @@ func testAccCheckZoneAssociationExists(ctx context.Context, resourceName string) return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) hostedZoneSummary, err := tfroute53.GetZoneAssociation(ctx, conn, zoneID, vpcID, vpcRegion) @@ -328,7 +361,7 @@ resource "aws_route53_zone_association" "test" { `, domainName)) } -func testAccZoneAssociationConfig_region(domainName string) string { +func testAccZoneAssociationConfig_crossRegion(domainName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(2), fmt.Sprintf(` @@ -380,3 +413,52 @@ resource "aws_route53_zone_association" "test" { } `, domainName)) } + +func testAccZoneAssociationConfig_crossAccountAndRegion(domainName string) string { + return acctest.ConfigCompose( + acctest.ConfigAlternateAccountAlternateRegionProvider(), + fmt.Sprintf(` +resource "aws_route53_zone_association" "test" { + vpc_id = aws_route53_vpc_association_authorization.test.vpc_id + zone_id = aws_route53_vpc_association_authorization.test.zone_id +} + +resource "aws_route53_vpc_association_authorization" "test" { + provider = "awsalternate" + + vpc_id = aws_vpc.test.id + zone_id = aws_route53_zone.test.id + vpc_region = data.aws_region.current.name +} + +data "aws_region" "current" {} + +resource "aws_vpc" "test" { + cidr_block = "10.6.0.0/16" + enable_dns_hostnames = true + enable_dns_support = true +} + +resource "aws_vpc" "alternate" { + provider = "awsalternate" + + cidr_block = "10.7.0.0/16" + enable_dns_hostnames = true + enable_dns_support = true +} + +resource "aws_route53_zone" "test" { + provider = "awsalternate" + + name = %[1]q + + vpc { + vpc_id = aws_vpc.alternate.id + } + + lifecycle { + ignore_changes = [vpc] + } +} +`, domainName)) +} diff --git a/internal/service/route53/zone_data_source.go b/internal/service/route53/zone_data_source.go index 
6227a02576d..140fbc31e1b 100644 --- a/internal/service/route53/zone_data_source.go +++ b/internal/service/route53/zone_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53 import ( @@ -82,7 +85,7 @@ func DataSourceZone() *schema.Resource { func dataSourceZoneRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53Conn() + conn := meta.(*conns.AWSClient).Route53Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name, nameExists := d.GetOk("name") @@ -143,7 +146,7 @@ func dataSourceZoneRead(ctx context.Context, d *schema.ResourceData, meta interf // we check if tags match matchingTags := true if len(tags) > 0 { - listTags, err := ListTags(ctx, conn, hostedZoneId, route53.TagResourceTypeHostedzone) + listTags, err := listTags(ctx, conn, hostedZoneId, route53.TagResourceTypeHostedzone) if err != nil { return sdkdiag.AppendErrorf(diags, "finding Route 53 Hosted Zone: %s", err) @@ -199,7 +202,7 @@ func dataSourceZoneRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "setting name_servers: %s", err) } - tags, err = ListTags(ctx, conn, idHostedZone, route53.TagResourceTypeHostedzone) + tags, err = listTags(ctx, conn, idHostedZone, route53.TagResourceTypeHostedzone) if err != nil { return sdkdiag.AppendErrorf(diags, "listing Route 53 Hosted Zone (%s) tags: %s", idHostedZone, err) diff --git a/internal/service/route53/zone_data_source_test.go b/internal/service/route53/zone_data_source_test.go index 2c0a4590cad..bae8e7d8697 100644 --- a/internal/service/route53/zone_data_source_test.go +++ b/internal/service/route53/zone_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( diff --git a/internal/service/route53/zone_test.go b/internal/service/route53/zone_test.go index 1946bb406df..8e9e2071ab2 100644 --- a/internal/service/route53/zone_test.go +++ b/internal/service/route53/zone_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53_test import ( @@ -448,7 +451,7 @@ func TestAccRoute53Zone_VPC_updates(t *testing.T) { func testAccCheckZoneDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_zone" { @@ -474,7 +477,7 @@ func testAccCheckZoneDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCreateRandomRecordsInZoneID(ctx context.Context, zone *route53.GetHostedZoneOutput, recordsCount int) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) var changes []*route53.Change if recordsCount > 100 { @@ -522,7 +525,7 @@ func testAccCheckZoneExists(ctx context.Context, n string, v *route53.GetHostedZ return fmt.Errorf("No Route53 Hosted Zone ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53Conn(ctx) output, err := tfroute53.FindHostedZoneByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53domains/generate.go b/internal/service/route53domains/generate.go index bb39b3a0cb9..9466b787e70 100644 --- a/internal/service/route53domains/generate.go +++ b/internal/service/route53domains/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsOp=ListTagsForDomain -ListTagsInIDElem=DomainName -ListTagsOutTagsElem=TagList -ServiceTagsSlice -UpdateTags -UntagOp=DeleteTagsForDomain -UntagInTagsElem=TagsToDelete -TagOp=UpdateTagsForDomain -TagInTagsElem=TagsToUpdate -TagInIDElem=DomainName -GetTag +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package route53domains diff --git a/internal/service/route53domains/registered_domain.go b/internal/service/route53domains/registered_domain.go index 3136d33e775..2a00aeff029 100644 --- a/internal/service/route53domains/registered_domain.go +++ b/internal/service/route53domains/registered_domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53domains import ( @@ -245,13 +248,13 @@ func ResourceRegisteredDomain() *schema.Resource { } func resourceRegisteredDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create - conn := meta.(*conns.AWSClient).Route53DomainsClient() + conn := meta.(*conns.AWSClient).Route53DomainsClient(ctx) domainName := d.Get("domain_name").(string) domainDetail, err := findDomainDetailByName(ctx, conn, domainName) if err != nil { - return diag.Errorf("error reading Route 53 Domains Domain (%s): %s", domainName, err) + return diag.Errorf("reading Route 53 Domains Domain (%s): %s", domainName, err) } d.SetId(aws.ToString(domainDetail.DomainName)) @@ -310,19 +313,19 @@ func resourceRegisteredDomainCreate(ctx context.Context, d *schema.ResourceData, } } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { - return diag.Errorf("error listing tags for Route 53 Domains Domain (%s): %s", d.Id(), err) + return 
diag.Errorf("listing tags for Route 53 Domains Domain (%s): %s", d.Id(), err) } ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - newTags := KeyValueTags(ctx, GetTagsIn(ctx)) + newTags := KeyValueTags(ctx, getTagsIn(ctx)) oldTags := tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) if !oldTags.Equal(newTags) { - if err := UpdateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { - return diag.Errorf("error updating Route 53 Domains Domain (%s) tags: %s", d.Id(), err) + if err := updateTags(ctx, conn, d.Id(), oldTags, newTags); err != nil { + return diag.Errorf("updating Route 53 Domains Domain (%s) tags: %s", d.Id(), err) } } @@ -330,7 +333,7 @@ func resourceRegisteredDomainCreate(ctx context.Context, d *schema.ResourceData, } func resourceRegisteredDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53DomainsClient() + conn := meta.(*conns.AWSClient).Route53DomainsClient(ctx) domainDetail, err := findDomainDetailByName(ctx, conn, d.Id()) @@ -341,14 +344,14 @@ func resourceRegisteredDomainRead(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.Errorf("error reading Route 53 Domains Domain (%s): %s", d.Id(), err) + return diag.Errorf("reading Route 53 Domains Domain (%s): %s", d.Id(), err) } d.Set("abuse_contact_email", domainDetail.AbuseContactEmail) d.Set("abuse_contact_phone", domainDetail.AbuseContactPhone) if domainDetail.AdminContact != nil { if err := d.Set("admin_contact", []interface{}{flattenContactDetail(domainDetail.AdminContact)}); err != nil { - return diag.Errorf("error setting admin_contact: %s", err) + return diag.Errorf("setting admin_contact: %s", err) } } else { d.Set("admin_contact", nil) @@ -367,11 +370,11 @@ func resourceRegisteredDomainRead(ctx context.Context, d *schema.ResourceData, m d.Set("expiration_date", nil) } if err := d.Set("name_server", flattenNameservers(domainDetail.Nameservers)); err != nil { - return 
diag.Errorf("error setting name_servers: %s", err) + return diag.Errorf("setting name_servers: %s", err) } if domainDetail.RegistrantContact != nil { if err := d.Set("registrant_contact", []interface{}{flattenContactDetail(domainDetail.RegistrantContact)}); err != nil { - return diag.Errorf("error setting registrant_contact: %s", err) + return diag.Errorf("setting registrant_contact: %s", err) } } else { d.Set("registrant_contact", nil) @@ -384,7 +387,7 @@ func resourceRegisteredDomainRead(ctx context.Context, d *schema.ResourceData, m d.Set("status_list", statusList) if domainDetail.TechContact != nil { if err := d.Set("tech_contact", []interface{}{flattenContactDetail(domainDetail.TechContact)}); err != nil { - return diag.Errorf("error setting tech_contact: %s", err) + return diag.Errorf("setting tech_contact: %s", err) } } else { d.Set("tech_contact", nil) @@ -402,7 +405,7 @@ func resourceRegisteredDomainRead(ctx context.Context, d *schema.ResourceData, m } func resourceRegisteredDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53DomainsClient() + conn := meta.(*conns.AWSClient).Route53DomainsClient(ctx) if d.HasChanges("admin_contact", "registrant_contact", "tech_contact") { var adminContact, registrantContact, techContact *types.ContactDetail @@ -491,7 +494,7 @@ func modifyDomainAutoRenew(ctx context.Context, conn *route53domains.Client, dom _, err := conn.EnableDomainAutoRenew(ctx, input) if err != nil { - return fmt.Errorf("error enabling Route 53 Domains Domain (%s) auto-renew: %w", domainName, err) + return fmt.Errorf("enabling Route 53 Domains Domain (%s) auto-renew: %w", domainName, err) } } else { input := &route53domains.DisableDomainAutoRenewInput{ @@ -502,7 +505,7 @@ func modifyDomainAutoRenew(ctx context.Context, conn *route53domains.Client, dom _, err := conn.DisableDomainAutoRenew(ctx, input) if err != nil { - return fmt.Errorf("error disabling Route 53 Domains 
Domain (%s) auto-renew: %w", domainName, err) + return fmt.Errorf("disabling Route 53 Domains Domain (%s) auto-renew: %w", domainName, err) } } @@ -521,11 +524,11 @@ func modifyDomainContact(ctx context.Context, conn *route53domains.Client, domai output, err := conn.UpdateDomainContact(ctx, input) if err != nil { - return fmt.Errorf("error updating Route 53 Domains Domain (%s) contacts: %w", domainName, err) + return fmt.Errorf("updating Route 53 Domains Domain (%s) contacts: %w", domainName, err) } if _, err := waitOperationSucceeded(ctx, conn, aws.ToString(output.OperationId), timeout); err != nil { - return fmt.Errorf("error waiting for Route 53 Domains Domain (%s) contacts update: %w", domainName, err) + return fmt.Errorf("waiting for Route 53 Domains Domain (%s) contacts update: %w", domainName, err) } return nil @@ -543,11 +546,11 @@ func modifyDomainContactPrivacy(ctx context.Context, conn *route53domains.Client output, err := conn.UpdateDomainContactPrivacy(ctx, input) if err != nil { - return fmt.Errorf("error enabling Route 53 Domains Domain (%s) contact privacy: %w", domainName, err) + return fmt.Errorf("enabling Route 53 Domains Domain (%s) contact privacy: %w", domainName, err) } if _, err := waitOperationSucceeded(ctx, conn, aws.ToString(output.OperationId), timeout); err != nil { - return fmt.Errorf("error waiting for Route 53 Domains Domain (%s) contact privacy update: %w", domainName, err) + return fmt.Errorf("waiting for Route 53 Domains Domain (%s) contact privacy update: %w", domainName, err) } return nil @@ -563,11 +566,11 @@ func modifyDomainNameservers(ctx context.Context, conn *route53domains.Client, d output, err := conn.UpdateDomainNameservers(ctx, input) if err != nil { - return fmt.Errorf("error updating Route 53 Domains Domain (%s) name servers: %w", domainName, err) + return fmt.Errorf("updating Route 53 Domains Domain (%s) name servers: %w", domainName, err) } if _, err := waitOperationSucceeded(ctx, conn, 
aws.ToString(output.OperationId), timeout); err != nil { - return fmt.Errorf("error waiting for Route 53 Domains Domain (%s) name servers update: %w", domainName, err) + return fmt.Errorf("waiting for Route 53 Domains Domain (%s) name servers update: %w", domainName, err) } return nil @@ -583,11 +586,11 @@ func modifyDomainTransferLock(ctx context.Context, conn *route53domains.Client, output, err := conn.EnableDomainTransferLock(ctx, input) if err != nil { - return fmt.Errorf("error enabling Route 53 Domains Domain (%s) transfer lock: %w", domainName, err) + return fmt.Errorf("enabling Route 53 Domains Domain (%s) transfer lock: %w", domainName, err) } if _, err := waitOperationSucceeded(ctx, conn, aws.ToString(output.OperationId), timeout); err != nil { - return fmt.Errorf("error waiting for Route 53 Domains Domain (%s) transfer lock enable: %w", domainName, err) + return fmt.Errorf("waiting for Route 53 Domains Domain (%s) transfer lock enable: %w", domainName, err) } } else { input := &route53domains.DisableDomainTransferLockInput{ @@ -598,11 +601,11 @@ func modifyDomainTransferLock(ctx context.Context, conn *route53domains.Client, output, err := conn.DisableDomainTransferLock(ctx, input) if err != nil { - return fmt.Errorf("error disabling Route 53 Domains Domain (%s) transfer lock: %w", domainName, err) + return fmt.Errorf("disabling Route 53 Domains Domain (%s) transfer lock: %w", domainName, err) } if _, err := waitOperationSucceeded(ctx, conn, aws.ToString(output.OperationId), timeout); err != nil { - return fmt.Errorf("error waiting for Route 53 Domains Domain (%s) transfer lock disable: %w", domainName, err) + return fmt.Errorf("waiting for Route 53 Domains Domain (%s) transfer lock disable: %w", domainName, err) } } diff --git a/internal/service/route53domains/registered_domain_test.go b/internal/service/route53domains/registered_domain_test.go index 74ee7341d9c..33052fc8024 100644 --- a/internal/service/route53domains/registered_domain_test.go +++ 
b/internal/service/route53domains/registered_domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53domains_test import ( @@ -34,7 +37,7 @@ func TestAccRoute53Domains_serial(t *testing.T) { func testAccPreCheck(ctx context.Context, t *testing.T) { acctest.PreCheckPartitionHasService(t, names.Route53DomainsEndpointID) - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53DomainsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53DomainsClient(ctx) input := &route53domains.ListDomainsInput{} diff --git a/internal/service/route53domains/service_package.go b/internal/service/route53domains/service_package.go new file mode 100644 index 00000000000..2bebdce8263 --- /dev/null +++ b/internal/service/route53domains/service_package.go @@ -0,0 +1,26 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package route53domains + +import ( + "context" + + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + route53domains_sdkv2 "github.com/aws/aws-sdk-go-v2/service/route53domains" + endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" +) + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*route53domains_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return route53domains_sdkv2.NewFromConfig(cfg, func(o *route53domains_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = route53domains_sdkv2.EndpointResolverFromURL(endpoint) + } else if config["partition"].(string) == endpoints_sdkv1.AwsPartitionID { + // Route 53 Domains is only available in AWS Commercial us-east-1 Region. 
+ o.Region = endpoints_sdkv1.UsEast1RegionID + } + }), nil +} diff --git a/internal/service/route53domains/service_package_gen.go b/internal/service/route53domains/service_package_gen.go index 4933e771fda..1ee5bc5e3ef 100644 --- a/internal/service/route53domains/service_package_gen.go +++ b/internal/service/route53domains/service_package_gen.go @@ -5,6 +5,7 @@ package route53domains import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -40,4 +41,6 @@ func (p *servicePackage) ServicePackageName() string { return names.Route53Domains } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/route53domains/tags_gen.go b/internal/service/route53domains/tags_gen.go index 5ac586817c0..a1a01121bc9 100644 --- a/internal/service/route53domains/tags_gen.go +++ b/internal/service/route53domains/tags_gen.go @@ -17,11 +17,11 @@ import ( // GetTag fetches an individual route53domains service tag for a resource. // Returns whether the key value and any errors. A NotFoundError is used to signal that no value was found. -// This function will optimise the handling over ListTags, if possible. +// This function will optimise the handling over listTags, if possible. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. func GetTag(ctx context.Context, conn *route53domains.Client, identifier, key string) (*string, error) { - listTags, err := ListTags(ctx, conn, identifier) + listTags, err := listTags(ctx, conn, identifier) if err != nil { return nil, err @@ -34,10 +34,10 @@ func GetTag(ctx context.Context, conn *route53domains.Client, identifier, key st return listTags.KeyValue(key), nil } -// ListTags lists route53domains service tags. 
+// listTags lists route53domains service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *route53domains.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *route53domains.Client, identifier string) (tftags.KeyValueTags, error) { input := &route53domains.ListTagsForDomainInput{ DomainName: aws.String(identifier), } @@ -54,7 +54,7 @@ func ListTags(ctx context.Context, conn *route53domains.Client, identifier strin // ListTags lists route53domains service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).Route53DomainsClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).Route53DomainsClient(ctx), identifier) if err != nil { return err @@ -96,9 +96,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns route53domains service tags from Context. +// getTagsIn returns route53domains service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -108,17 +108,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets route53domains service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets route53domains service tags in Context. 
+func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates route53domains service tags. +// updateTags updates route53domains service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *route53domains.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *route53domains.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -158,5 +158,5 @@ func UpdateTags(ctx context.Context, conn *route53domains.Client, identifier str // UpdateTags updates route53domains service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).Route53DomainsClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).Route53DomainsClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/route53recoverycontrolconfig/cluster.go b/internal/service/route53recoverycontrolconfig/cluster.go index e4ff70d6691..3bd0fa25e5d 100644 --- a/internal/service/route53recoverycontrolconfig/cluster.go +++ b/internal/service/route53recoverycontrolconfig/cluster.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig import ( @@ -60,7 +63,7 @@ func ResourceCluster() *schema.Resource { func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.CreateClusterInput{ ClientToken: aws.String(id.UniqueId()), @@ -89,7 +92,7 @@ func resourceClusterCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeClusterInput{ ClusterArn: aws.String(d.Id()), @@ -125,7 +128,7 @@ func resourceClusterRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) log.Printf("[INFO] Deleting Route53 Recovery Control Config Cluster: %s", d.Id()) _, err := conn.DeleteClusterWithContext(ctx, &r53rcc.DeleteClusterInput{ diff --git a/internal/service/route53recoverycontrolconfig/cluster_test.go b/internal/service/route53recoverycontrolconfig/cluster_test.go index 03d497e8cfc..c09837aeafa 100644 --- a/internal/service/route53recoverycontrolconfig/cluster_test.go +++ b/internal/service/route53recoverycontrolconfig/cluster_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig_test import ( @@ -70,7 +73,7 @@ func testAccCluster_disappears(t *testing.T) { func testAccCheckClusterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoverycontrolconfig_cluster" { @@ -107,7 +110,7 @@ func testAccCheckClusterExists(ctx context.Context, name string) resource.TestCh return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeClusterInput{ ClusterArn: aws.String(rs.Primary.ID), diff --git a/internal/service/route53recoverycontrolconfig/control_panel.go b/internal/service/route53recoverycontrolconfig/control_panel.go index a0b83714b3f..4841730902f 100644 --- a/internal/service/route53recoverycontrolconfig/control_panel.go +++ b/internal/service/route53recoverycontrolconfig/control_panel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig import ( @@ -56,7 +59,7 @@ func ResourceControlPanel() *schema.Resource { func resourceControlPanelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.CreateControlPanelInput{ ClientToken: aws.String(id.UniqueId()), @@ -86,7 +89,7 @@ func resourceControlPanelCreate(ctx context.Context, d *schema.ResourceData, met func resourceControlPanelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeControlPanelInput{ ControlPanelArn: aws.String(d.Id()), @@ -121,7 +124,7 @@ func resourceControlPanelRead(ctx context.Context, d *schema.ResourceData, meta func resourceControlPanelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.UpdateControlPanelInput{ ControlPanelName: aws.String(d.Get("name").(string)), @@ -139,7 +142,7 @@ func resourceControlPanelUpdate(ctx context.Context, d *schema.ResourceData, met func resourceControlPanelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) log.Printf("[INFO] Deleting Route53 Recovery Control Config Control Panel: %s", d.Id()) _, err := conn.DeleteControlPanelWithContext(ctx, 
&r53rcc.DeleteControlPanelInput{ diff --git a/internal/service/route53recoverycontrolconfig/control_panel_test.go b/internal/service/route53recoverycontrolconfig/control_panel_test.go index f28f551e74b..763f075be91 100644 --- a/internal/service/route53recoverycontrolconfig/control_panel_test.go +++ b/internal/service/route53recoverycontrolconfig/control_panel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig_test import ( @@ -70,7 +73,7 @@ func testAccControlPanel_disappears(t *testing.T) { func testAccCheckControlPanelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoverycontrolconfig_control_panel" { @@ -116,7 +119,7 @@ func testAccCheckControlPanelExists(ctx context.Context, name string) resource.T return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeControlPanelInput{ ControlPanelArn: aws.String(rs.Primary.ID), diff --git a/internal/service/route53recoverycontrolconfig/generate.go b/internal/service/route53recoverycontrolconfig/generate.go new file mode 100644 index 00000000000..18b8964a30f --- /dev/null +++ b/internal/service/route53recoverycontrolconfig/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package route53recoverycontrolconfig diff --git a/internal/service/route53recoverycontrolconfig/route53recoverycontrolconfig_test.go b/internal/service/route53recoverycontrolconfig/route53recoverycontrolconfig_test.go index aee34a5cb1d..54c27c4117c 100644 --- a/internal/service/route53recoverycontrolconfig/route53recoverycontrolconfig_test.go +++ b/internal/service/route53recoverycontrolconfig/route53recoverycontrolconfig_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig_test import ( diff --git a/internal/service/route53recoverycontrolconfig/routing_control.go b/internal/service/route53recoverycontrolconfig/routing_control.go index 645d50b881b..7f571acee58 100644 --- a/internal/service/route53recoverycontrolconfig/routing_control.go +++ b/internal/service/route53recoverycontrolconfig/routing_control.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig import ( @@ -53,7 +56,7 @@ func ResourceRoutingControl() *schema.Resource { func resourceRoutingControlCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.CreateRoutingControlInput{ ClientToken: aws.String(id.UniqueId()), @@ -87,7 +90,7 @@ func resourceRoutingControlCreate(ctx context.Context, d *schema.ResourceData, m func resourceRoutingControlRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeRoutingControlInput{ RoutingControlArn: aws.String(d.Id()), @@ -120,7 +123,7 @@ func resourceRoutingControlRead(ctx 
context.Context, d *schema.ResourceData, met func resourceRoutingControlUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.UpdateRoutingControlInput{ RoutingControlName: aws.String(d.Get("name").(string)), @@ -138,7 +141,7 @@ func resourceRoutingControlUpdate(ctx context.Context, d *schema.ResourceData, m func resourceRoutingControlDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) log.Printf("[INFO] Deleting Route53 Recovery Control Config Routing Control: %s", d.Id()) _, err := conn.DeleteRoutingControlWithContext(ctx, &r53rcc.DeleteRoutingControlInput{ diff --git a/internal/service/route53recoverycontrolconfig/routing_control_test.go b/internal/service/route53recoverycontrolconfig/routing_control_test.go index d61274c58ec..87ef235435a 100644 --- a/internal/service/route53recoverycontrolconfig/routing_control_test.go +++ b/internal/service/route53recoverycontrolconfig/routing_control_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig_test import ( @@ -99,7 +102,7 @@ func testAccCheckRoutingControlExists(ctx context.Context, name string) resource return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeRoutingControlInput{ RoutingControlArn: aws.String(rs.Primary.ID), @@ -113,7 +116,7 @@ func testAccCheckRoutingControlExists(ctx context.Context, name string) resource func testAccCheckRoutingControlDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoverycontrolconfig_routing_control" { diff --git a/internal/service/route53recoverycontrolconfig/safety_rule.go b/internal/service/route53recoverycontrolconfig/safety_rule.go index 58cd0cae6af..8a0f12c7cd4 100644 --- a/internal/service/route53recoverycontrolconfig/safety_rule.go +++ b/internal/service/route53recoverycontrolconfig/safety_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig import ( @@ -126,7 +129,7 @@ func resourceSafetyRuleCreate(ctx context.Context, d *schema.ResourceData, meta func resourceSafetyRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) input := &r53rcc.DescribeSafetyRuleInput{ SafetyRuleArn: aws.String(d.Id()), @@ -210,7 +213,7 @@ func resourceSafetyRuleUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceSafetyRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) log.Printf("[INFO] Deleting Route53 Recovery Control Config Safety Rule: %s", d.Id()) _, err := conn.DeleteSafetyRuleWithContext(ctx, &r53rcc.DeleteSafetyRuleInput{ @@ -240,7 +243,7 @@ func resourceSafetyRuleDelete(ctx context.Context, d *schema.ResourceData, meta func createAssertionRule(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) assertionRule := &r53rcc.NewAssertionRule{ Name: aws.String(d.Get("name").(string)), @@ -277,7 +280,7 @@ func createAssertionRule(ctx context.Context, d *schema.ResourceData, meta inter func createGatingRule(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) gatingRule := &r53rcc.NewGatingRule{ Name: aws.String(d.Get("name").(string)), @@ 
-315,7 +318,7 @@ func createGatingRule(ctx context.Context, d *schema.ResourceData, meta interfac func updateAssertionRule(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) assertionRuleUpdate := &r53rcc.AssertionRuleUpdate{ SafetyRuleArn: aws.String(d.Get("arn").(string)), @@ -344,7 +347,7 @@ func updateAssertionRule(ctx context.Context, d *schema.ResourceData, meta inter func updateGatingRule(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := meta.(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx) gatingRuleUpdate := &r53rcc.GatingRuleUpdate{ SafetyRuleArn: aws.String(d.Get("arn").(string)), diff --git a/internal/service/route53recoverycontrolconfig/safety_rule_test.go b/internal/service/route53recoverycontrolconfig/safety_rule_test.go index 511caf2f18f..62a805ce96c 100644 --- a/internal/service/route53recoverycontrolconfig/safety_rule_test.go +++ b/internal/service/route53recoverycontrolconfig/safety_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package route53recoverycontrolconfig_test

 import (
@@ -103,7 +106,7 @@ func testAccSafetyRule_gatingRule(t *testing.T) {
 func testAccCheckSafetyRuleDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_route53recoverycontrolconfig_safety_rule" {
@@ -132,7 +135,7 @@ func testAccCheckSafetyRuleExists(ctx context.Context, name string) resource.Tes
 			return fmt.Errorf("Not found: %s", name)
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryControlConfigConn(ctx)

 		input := &r53rcc.DescribeSafetyRuleInput{
 			SafetyRuleArn: aws.String(rs.Primary.ID),
diff --git a/internal/service/route53recoverycontrolconfig/service_package.go b/internal/service/route53recoverycontrolconfig/service_package.go
new file mode 100644
index 00000000000..f4acf4f25b4
--- /dev/null
+++ b/internal/service/route53recoverycontrolconfig/service_package.go
@@ -0,0 +1,26 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+package route53recoverycontrolconfig
+
+import (
+	"context"
+
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	route53recoverycontrolconfig_sdkv1 "github.com/aws/aws-sdk-go/service/route53recoverycontrolconfig"
+)
+
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*route53recoverycontrolconfig_sdkv1.Route53RecoveryControlConfig, error) {
+	sess := m["session"].(*session_sdkv1.Session)
+	config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(m["endpoint"].(string))}
+
+	// Force "global" services to correct Regions.
+	if m["partition"].(string) == endpoints_sdkv1.AwsPartitionID {
+		config.Region = aws_sdkv1.String(endpoints_sdkv1.UsWest2RegionID)
+	}
+
+	return route53recoverycontrolconfig_sdkv1.New(sess.Copy(config)), nil
+}
diff --git a/internal/service/route53recoverycontrolconfig/service_package_gen.go b/internal/service/route53recoverycontrolconfig/service_package_gen.go
index 3cadff05d46..5aeb51c243a 100644
--- a/internal/service/route53recoverycontrolconfig/service_package_gen.go
+++ b/internal/service/route53recoverycontrolconfig/service_package_gen.go
@@ -5,6 +5,7 @@ package route53recoverycontrolconfig
 import (
 	"context"

+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -48,4 +49,6 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.Route53RecoveryControlConfig
 }

-var ServicePackage = &servicePackage{}
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/route53recoverycontrolconfig/status.go b/internal/service/route53recoverycontrolconfig/status.go
index 511f2f26eff..c86024308d4 100644
--- a/internal/service/route53recoverycontrolconfig/status.go
+++ b/internal/service/route53recoverycontrolconfig/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig import ( diff --git a/internal/service/route53recoverycontrolconfig/sweep.go b/internal/service/route53recoverycontrolconfig/sweep.go index 83037c87d49..37fda52f4e6 100644 --- a/internal/service/route53recoverycontrolconfig/sweep.go +++ b/internal/service/route53recoverycontrolconfig/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( r53rcc "github.com/aws/aws-sdk-go/service/route53recoverycontrolconfig" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -46,11 +48,11 @@ func init() { func sweepClusters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := client.Route53RecoveryControlConfigConn(ctx) input := &r53rcc.ListClustersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -79,7 +81,7 @@ func sweepClusters(region string) error { return fmt.Errorf("error listing Route53 Recovery Control Config Clusters (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Recovery Control Config Clusters (%s): %w", region, err) @@ -90,11 +92,11 @@ func sweepClusters(region string) error { func sweepControlPanels(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { 
return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := client.Route53RecoveryControlConfigConn(ctx) input := &r53rcc.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -150,7 +152,7 @@ func sweepControlPanels(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Route53 Recovery Control Config Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Route53 Recovery Control Config Control Panels (%s): %w", region, err)) @@ -161,11 +163,11 @@ func sweepControlPanels(region string) error { func sweepRoutingControls(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := client.Route53RecoveryControlConfigConn(ctx) input := &r53rcc.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -239,7 +241,7 @@ func sweepRoutingControls(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Route53 Recovery Control Config Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Route53 Recovery Control Config Routing Controls (%s): %w", region, err)) @@ -250,11 +252,11 @@ func sweepRoutingControls(region string) error { func sweepSafetyRules(region string) error { ctx := sweep.Context(region) - client, 
err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).Route53RecoveryControlConfigConn() + conn := client.Route53RecoveryControlConfigConn(ctx) input := &r53rcc.ListClustersInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -334,7 +336,7 @@ func sweepSafetyRules(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Route53 Recovery Control Config Clusters (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Route53 Recovery Control Config Safety Rules (%s): %w", region, err)) diff --git a/internal/service/route53recoverycontrolconfig/wait.go b/internal/service/route53recoverycontrolconfig/wait.go index 276c5a17387..621f023d227 100644 --- a/internal/service/route53recoverycontrolconfig/wait.go +++ b/internal/service/route53recoverycontrolconfig/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoverycontrolconfig import ( diff --git a/internal/service/route53recoveryreadiness/cell.go b/internal/service/route53recoveryreadiness/cell.go index b0d21b0ae1c..6016a54b6d8 100644 --- a/internal/service/route53recoveryreadiness/cell.go +++ b/internal/service/route53recoveryreadiness/cell.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness import ( @@ -72,7 +75,7 @@ func ResourceCell() *schema.Resource { func resourceCellCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) name := d.Get("cell_name").(string) input := &route53recoveryreadiness.CreateCellInput{ @@ -88,7 +91,7 @@ func resourceCellCreate(ctx context.Context, d *schema.ResourceData, meta interf d.SetId(aws.StringValue(output.CellName)) - if err := createTags(ctx, conn, aws.StringValue(output.CellArn), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, aws.StringValue(output.CellArn), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Route53 Recovery Readiness Cell (%s) tags: %s", d.Id(), err) } @@ -97,7 +100,7 @@ func resourceCellCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceCellRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetCellInput{ CellName: aws.String(d.Id()), @@ -125,7 +128,7 @@ func resourceCellRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceCellUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &route53recoveryreadiness.UpdateCellInput{ @@ -145,7 +148,7 @@ func resourceCellUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceCellDelete(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) log.Printf("[DEBUG] Deleting Route53 Recovery Readiness Cell: %s", d.Id()) _, err := conn.DeleteCellWithContext(ctx, &route53recoveryreadiness.DeleteCellInput{ diff --git a/internal/service/route53recoveryreadiness/cell_test.go b/internal/service/route53recoveryreadiness/cell_test.go index 5f780c138a8..c87528639be 100644 --- a/internal/service/route53recoveryreadiness/cell_test.go +++ b/internal/service/route53recoveryreadiness/cell_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness_test import ( @@ -198,7 +201,7 @@ func TestAccRoute53RecoveryReadinessCell_timeout(t *testing.T) { func testAccCheckCellDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoveryreadiness_cell" { @@ -227,7 +230,7 @@ func testAccCheckCellExists(ctx context.Context, name string) resource.TestCheck return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetCellInput{ CellName: aws.String(rs.Primary.ID), @@ -240,7 +243,7 @@ func testAccCheckCellExists(ctx context.Context, name string) resource.TestCheck } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.ListCellsInput{} diff --git a/internal/service/route53recoveryreadiness/generate.go b/internal/service/route53recoveryreadiness/generate.go index 9b180bdd407..994f619e28b 100644 --- a/internal/service/route53recoveryreadiness/generate.go +++ b/internal/service/route53recoveryreadiness/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTagsForResources -ServiceTagsMap -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package route53recoveryreadiness diff --git a/internal/service/route53recoveryreadiness/readiness_check.go b/internal/service/route53recoveryreadiness/readiness_check.go index c6135375e3d..60f3184b824 100644 --- a/internal/service/route53recoveryreadiness/readiness_check.go +++ b/internal/service/route53recoveryreadiness/readiness_check.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness import ( @@ -60,7 +63,7 @@ func ResourceReadinessCheck() *schema.Resource { func resourceReadinessCheckCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) name := d.Get("readiness_check_name").(string) input := &route53recoveryreadiness.CreateReadinessCheckInput{ @@ -76,7 +79,7 @@ func resourceReadinessCheckCreate(ctx context.Context, d *schema.ResourceData, m d.SetId(aws.StringValue(output.ReadinessCheckName)) - if err := createTags(ctx, conn, aws.StringValue(output.ReadinessCheckArn), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, aws.StringValue(output.ReadinessCheckArn), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Route53 Recovery Readiness Readiness Check (%s) tags: %s", d.Id(), err) } @@ -85,7 +88,7 @@ func resourceReadinessCheckCreate(ctx context.Context, d *schema.ResourceData, m func resourceReadinessCheckRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetReadinessCheckInput{ ReadinessCheckName: aws.String(d.Id()), @@ -112,7 +115,7 @@ func resourceReadinessCheckRead(ctx context.Context, d *schema.ResourceData, met func resourceReadinessCheckUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &route53recoveryreadiness.UpdateReadinessCheckInput{ @@ -132,7 +135,7 @@ func 
resourceReadinessCheckUpdate(ctx context.Context, d *schema.ResourceData, m func resourceReadinessCheckDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) log.Printf("[DEBUG] Deleting Route53 Recovery Readiness Readiness Check: %s", d.Id()) _, err := conn.DeleteReadinessCheckWithContext(ctx, &route53recoveryreadiness.DeleteReadinessCheckInput{ diff --git a/internal/service/route53recoveryreadiness/readiness_check_test.go b/internal/service/route53recoveryreadiness/readiness_check_test.go index 87d6be5215f..5dca1be8636 100644 --- a/internal/service/route53recoveryreadiness/readiness_check_test.go +++ b/internal/service/route53recoveryreadiness/readiness_check_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness_test import ( @@ -175,7 +178,7 @@ func TestAccRoute53RecoveryReadinessReadinessCheck_timeout(t *testing.T) { func testAccCheckReadinessCheckDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoveryreadiness_readiness_check" { @@ -203,7 +206,7 @@ func testAccCheckReadinessCheckExists(ctx context.Context, name string) resource return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetReadinessCheckInput{ ReadinessCheckName: aws.String(rs.Primary.ID), diff --git 
a/internal/service/route53recoveryreadiness/recovery_group.go b/internal/service/route53recoveryreadiness/recovery_group.go index c040fb9379e..fe16e32b3c7 100644 --- a/internal/service/route53recoveryreadiness/recovery_group.go +++ b/internal/service/route53recoveryreadiness/recovery_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness import ( @@ -64,7 +67,7 @@ func ResourceRecoveryGroup() *schema.Resource { func resourceRecoveryGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) name := d.Get("recovery_group_name").(string) input := &route53recoveryreadiness.CreateRecoveryGroupInput{ @@ -80,7 +83,7 @@ func resourceRecoveryGroupCreate(ctx context.Context, d *schema.ResourceData, me d.SetId(aws.StringValue(output.RecoveryGroupName)) - if err := createTags(ctx, conn, aws.StringValue(output.RecoveryGroupArn), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, aws.StringValue(output.RecoveryGroupArn), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Route53 Recovery Readiness Recovery Group (%s) tags: %s", d.Id(), err) } @@ -89,7 +92,7 @@ func resourceRecoveryGroupCreate(ctx context.Context, d *schema.ResourceData, me func resourceRecoveryGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetRecoveryGroupInput{ RecoveryGroupName: aws.String(d.Id()), @@ -115,7 +118,7 @@ func resourceRecoveryGroupRead(ctx context.Context, d *schema.ResourceData, meta func resourceRecoveryGroupUpdate(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &route53recoveryreadiness.UpdateRecoveryGroupInput{ @@ -135,7 +138,7 @@ func resourceRecoveryGroupUpdate(ctx context.Context, d *schema.ResourceData, me func resourceRecoveryGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) log.Printf("[DEBUG] Deleting Route53 Recovery Readiness Recovery Group: %s", d.Id()) _, err := conn.DeleteRecoveryGroupWithContext(ctx, &route53recoveryreadiness.DeleteRecoveryGroupInput{ diff --git a/internal/service/route53recoveryreadiness/recovery_group_test.go b/internal/service/route53recoveryreadiness/recovery_group_test.go index 9a66acc677e..d616c1c6c73 100644 --- a/internal/service/route53recoveryreadiness/recovery_group_test.go +++ b/internal/service/route53recoveryreadiness/recovery_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness_test import ( @@ -170,7 +173,7 @@ func TestAccRoute53RecoveryReadinessRecoveryGroup_timeout(t *testing.T) { func testAccCheckRecoveryGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoveryreadiness_recovery_group" { @@ -198,7 +201,7 @@ func testAccCheckRecoveryGroupExists(ctx context.Context, name string) resource. 
return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetRecoveryGroupInput{ RecoveryGroupName: aws.String(rs.Primary.ID), diff --git a/internal/service/route53recoveryreadiness/resource_set.go b/internal/service/route53recoveryreadiness/resource_set.go index a5e3ebc0e56..5e4dfe06ce2 100644 --- a/internal/service/route53recoveryreadiness/resource_set.go +++ b/internal/service/route53recoveryreadiness/resource_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness import ( @@ -149,7 +152,7 @@ func ResourceResourceSet() *schema.Resource { func resourceResourceSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) name := d.Get("resource_set_name").(string) input := &route53recoveryreadiness.CreateResourceSetInput{ @@ -166,7 +169,7 @@ func resourceResourceSetCreate(ctx context.Context, d *schema.ResourceData, meta d.SetId(aws.StringValue(output.ResourceSetName)) - if err := createTags(ctx, conn, aws.StringValue(output.ResourceSetArn), GetTagsIn(ctx)); err != nil { + if err := createTags(ctx, conn, aws.StringValue(output.ResourceSetArn), getTagsIn(ctx)); err != nil { return sdkdiag.AppendErrorf(diags, "setting Route53 Recovery Readiness Resource Set (%s) tags: %s", d.Id(), err) } @@ -175,7 +178,7 @@ func resourceResourceSetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceResourceSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := 
meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetResourceSetInput{ ResourceSetName: aws.String(d.Id()), @@ -205,7 +208,7 @@ func resourceResourceSetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceResourceSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &route53recoveryreadiness.UpdateResourceSetInput{ @@ -226,7 +229,7 @@ func resourceResourceSetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceResourceSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) log.Printf("[DEBUG] Deleting Route53 Recovery Readiness Resource Set: %s", d.Id()) _, err := conn.DeleteResourceSetWithContext(ctx, &route53recoveryreadiness.DeleteResourceSetInput{ diff --git a/internal/service/route53recoveryreadiness/resource_set_test.go b/internal/service/route53recoveryreadiness/resource_set_test.go index f61b6c594f7..29c5d1aa975 100644 --- a/internal/service/route53recoveryreadiness/resource_set_test.go +++ b/internal/service/route53recoveryreadiness/resource_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53recoveryreadiness_test import ( @@ -332,7 +335,7 @@ func TestAccRoute53RecoveryReadinessResourceSet_timeout(t *testing.T) { func testAccCheckResourceSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53recoveryreadiness_resource_set" { @@ -360,7 +363,7 @@ func testAccCheckResourceSetExists(ctx context.Context, name string) resource.Te return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.GetResourceSetInput{ ResourceSetName: aws.String(rs.Primary.ID), @@ -373,7 +376,7 @@ func testAccCheckResourceSetExists(ctx context.Context, name string) resource.Te } func testAccPreCheckResourceSet(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53RecoveryReadinessConn(ctx) input := &route53recoveryreadiness.ListResourceSetsInput{} diff --git a/internal/service/route53recoveryreadiness/service_package.go b/internal/service/route53recoveryreadiness/service_package.go new file mode 100644 index 00000000000..3cd81fccabe --- /dev/null +++ b/internal/service/route53recoveryreadiness/service_package.go @@ -0,0 +1,26 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package route53recoveryreadiness + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + route53recoveryreadiness_sdkv1 "github.com/aws/aws-sdk-go/service/route53recoveryreadiness" +) + +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*route53recoveryreadiness_sdkv1.Route53RecoveryReadiness, error) { + sess := m["session"].(*session_sdkv1.Session) + config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(m["endpoint"].(string))} + + // Force "global" services to correct Regions. + if m["partition"].(string) == endpoints_sdkv1.AwsPartitionID { + config.Region = aws_sdkv1.String(endpoints_sdkv1.UsWest2RegionID) + } + + return route53recoveryreadiness_sdkv1.New(sess.Copy(config)), nil +} diff --git a/internal/service/route53recoveryreadiness/service_package_gen.go b/internal/service/route53recoveryreadiness/service_package_gen.go index f5beaed0d97..ef438d19c70 100644 --- a/internal/service/route53recoveryreadiness/service_package_gen.go +++ b/internal/service/route53recoveryreadiness/service_package_gen.go @@ -5,6 +5,7 @@ package route53recoveryreadiness import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -64,4 +65,6 @@ func (p *servicePackage) ServicePackageName() string { return names.Route53RecoveryReadiness } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/route53recoveryreadiness/tags_gen.go b/internal/service/route53recoveryreadiness/tags_gen.go index 95fd4d4ff89..9a0cea6944e 100644 --- 
a/internal/service/route53recoveryreadiness/tags_gen.go +++ b/internal/service/route53recoveryreadiness/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists route53recoveryreadiness service tags. +// listTags lists route53recoveryreadiness service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn route53recoveryreadinessiface.Route53RecoveryReadinessAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn route53recoveryreadinessiface.Route53RecoveryReadinessAPI, identifier string) (tftags.KeyValueTags, error) { input := &route53recoveryreadiness.ListTagsForResourcesInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn route53recoveryreadinessiface.Route53Rec // ListTags lists route53recoveryreadiness service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).Route53RecoveryReadinessConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from route53recoveryreadiness service tags. +// KeyValueTags creates tftags.KeyValueTags from route53recoveryreadiness service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns route53recoveryreadiness service tags from Context. +// getTagsIn returns route53recoveryreadiness service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,8 +71,8 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets route53recoveryreadiness service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets route53recoveryreadiness service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -84,13 +84,13 @@ func createTags(ctx context.Context, conn route53recoveryreadinessiface.Route53R return nil } - return UpdateTags(ctx, conn, identifier, nil, tags) + return updateTags(ctx, conn, identifier, nil, tags) } -// UpdateTags updates route53recoveryreadiness service tags. +// updateTags updates route53recoveryreadiness service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn route53recoveryreadinessiface.Route53RecoveryReadinessAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn route53recoveryreadinessiface.Route53RecoveryReadinessAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -130,5 +130,5 @@ func UpdateTags(ctx context.Context, conn route53recoveryreadinessiface.Route53R // UpdateTags updates route53recoveryreadiness service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).Route53RecoveryReadinessConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).Route53RecoveryReadinessConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/route53resolver/config.go b/internal/service/route53resolver/config.go index e0f7d7242cd..45b90ed882f 100644 --- a/internal/service/route53resolver/config.go +++ b/internal/service/route53resolver/config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -47,7 +50,7 @@ func ResourceConfig() *schema.Resource { } func resourceConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) autodefinedReverseFlag := d.Get("autodefined_reverse_flag").(string) input := &route53resolver.UpdateResolverConfigInput{ @@ -71,7 +74,7 @@ func resourceConfigCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) resolverConfig, err := FindResolverConfigByID(ctx, conn, d.Id()) @@ -99,7 +102,7 @@ func resourceConfigRead(ctx context.Context, d *schema.ResourceData, meta interf } func resourceConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) autodefinedReverseFlag := d.Get("autodefined_reverse_flag").(string) input := &route53resolver.UpdateResolverConfigInput{ diff --git 
a/internal/service/route53resolver/config_test.go b/internal/service/route53resolver/config_test.go index 9f188f36749..b6a5efaf013 100644 --- a/internal/service/route53resolver/config_test.go +++ b/internal/service/route53resolver/config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -91,7 +94,7 @@ func testAccCheckConfigExists(ctx context.Context, n string, v *route53resolver. return fmt.Errorf("No Route53 Resolver Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindResolverConfigByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/dnssec_config.go b/internal/service/route53resolver/dnssec_config.go index b83769173dc..ce4e1212a7a 100644 --- a/internal/service/route53resolver/dnssec_config.go +++ b/internal/service/route53resolver/dnssec_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -51,7 +54,7 @@ func ResourceDNSSECConfig() *schema.Resource { } func resourceDNSSECConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.UpdateResolverDnssecConfigInput{ ResourceId: aws.String(d.Get("resource_id").(string)), @@ -74,7 +77,7 @@ func resourceDNSSECConfigCreate(ctx context.Context, d *schema.ResourceData, met } func resourceDNSSECConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) dnssecConfig, err := FindResolverDNSSECConfigByID(ctx, conn, d.Id()) @@ -106,7 +109,7 @@ func resourceDNSSECConfigRead(ctx context.Context, d *schema.ResourceData, meta } func resourceDNSSECConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver DNSSEC Config: %s", d.Id()) _, err := conn.UpdateResolverDnssecConfigWithContext(ctx, &route53resolver.UpdateResolverDnssecConfigInput{ diff --git a/internal/service/route53resolver/dnssec_config_test.go b/internal/service/route53resolver/dnssec_config_test.go index 965a3624cba..b0b666df24d 100644 --- a/internal/service/route53resolver/dnssec_config_test.go +++ b/internal/service/route53resolver/dnssec_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -72,7 +75,7 @@ func TestAccRoute53ResolverDNSSECConfig_disappear(t *testing.T) { func testAccCheckDNSSECConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_dnssec_config" { @@ -107,7 +110,7 @@ func testAccCheckDNSSECConfigExists(ctx context.Context, n string) resource.Test return fmt.Errorf("No Route53 Resolver DNSSEC Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) _, err := tfroute53resolver.FindResolverDNSSECConfigByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/endpoint.go b/internal/service/route53resolver/endpoint.go index 461770b8436..fdaba25fa4d 100644 --- a/internal/service/route53resolver/endpoint.go +++ b/internal/service/route53resolver/endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -106,14 +109,14 @@ func ResourceEndpoint() *schema.Resource { } func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.CreateResolverEndpointInput{ CreatorRequestId: aws.String(id.PrefixedUniqueId("tf-r53-resolver-endpoint-")), Direction: aws.String(d.Get("direction").(string)), IpAddresses: expandEndpointIPAddresses(d.Get("ip_address").(*schema.Set)), SecurityGroupIds: flex.ExpandStringSet(d.Get("security_group_ids").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("name"); ok { @@ -136,7 +139,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) ep, err := FindResolverEndpointByID(ctx, conn, d.Id()) @@ -171,7 +174,7 @@ func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) if d.HasChange("name") { _, err := conn.UpdateResolverEndpointWithContext(ctx, &route53resolver.UpdateResolverEndpointInput{ @@ -231,7 +234,7 @@ func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 
Resolver Endpoint: %s", d.Id()) _, err := conn.DeleteResolverEndpointWithContext(ctx, &route53resolver.DeleteResolverEndpointInput{ diff --git a/internal/service/route53resolver/endpoint_data_source.go b/internal/service/route53resolver/endpoint_data_source.go index 7b88df904cb..8deb06d4048 100644 --- a/internal/service/route53resolver/endpoint_data_source.go +++ b/internal/service/route53resolver/endpoint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -68,7 +71,7 @@ func DataSourceEndpoint() *schema.Resource { } func dataSourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) endpointID := d.Get("resolver_endpoint_id").(string) input := &route53resolver.ListResolverEndpointsInput{} diff --git a/internal/service/route53resolver/endpoint_data_source_test.go b/internal/service/route53resolver/endpoint_data_source_test.go index 5d499c46697..c1dae57e518 100644 --- a/internal/service/route53resolver/endpoint_data_source_test.go +++ b/internal/service/route53resolver/endpoint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/endpoint_test.go b/internal/service/route53resolver/endpoint_test.go index 363c0247f3f..1688d519872 100644 --- a/internal/service/route53resolver/endpoint_test.go +++ b/internal/service/route53resolver/endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -158,7 +161,7 @@ func TestAccRoute53ResolverEndpoint_updateOutbound(t *testing.T) { func testAccCheckEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_endpoint" { @@ -193,7 +196,7 @@ func testAccCheckEndpointExists(ctx context.Context, n string, v *route53resolve return fmt.Errorf("No Route53 Resolver Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindResolverEndpointByID(ctx, conn, rs.Primary.ID) @@ -208,7 +211,7 @@ func testAccCheckEndpointExists(ctx context.Context, n string, v *route53resolve } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.ListResolverEndpointsInput{} diff --git a/internal/service/route53resolver/firewall_config.go b/internal/service/route53resolver/firewall_config.go index b09610a9dea..617b4f08ed8 100644 --- a/internal/service/route53resolver/firewall_config.go +++ b/internal/service/route53resolver/firewall_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -47,7 +50,7 @@ func ResourceFirewallConfig() *schema.Resource { } func resourceFirewallConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.UpdateFirewallConfigInput{ ResourceId: aws.String(d.Get("resource_id").(string)), @@ -69,7 +72,7 @@ func resourceFirewallConfigCreate(ctx context.Context, d *schema.ResourceData, m } func resourceFirewallConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallConfig, err := FindFirewallConfigByID(ctx, conn, d.Id()) @@ -91,7 +94,7 @@ func resourceFirewallConfigRead(ctx context.Context, d *schema.ResourceData, met } func resourceFirewallConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.UpdateFirewallConfigInput{ ResourceId: aws.String(d.Get("resource_id").(string)), @@ -111,7 +114,7 @@ func resourceFirewallConfigUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceFirewallConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Firewall Config: %s", d.Id()) _, err := conn.UpdateFirewallConfigWithContext(ctx, &route53resolver.UpdateFirewallConfigInput{ diff --git a/internal/service/route53resolver/firewall_config_data_source.go b/internal/service/route53resolver/firewall_config_data_source.go index 865a53e198a..26b7e79142a 
100644 --- a/internal/service/route53resolver/firewall_config_data_source.go +++ b/internal/service/route53resolver/firewall_config_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -36,7 +39,7 @@ func DataSourceFirewallConfig() *schema.Resource { } func dataSourceFirewallConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) id := d.Get("resource_id").(string) firewallConfig, err := findFirewallConfigByResourceID(ctx, conn, id) diff --git a/internal/service/route53resolver/firewall_config_data_source_test.go b/internal/service/route53resolver/firewall_config_data_source_test.go index ed1f109093f..3b5d3a17f5c 100644 --- a/internal/service/route53resolver/firewall_config_data_source_test.go +++ b/internal/service/route53resolver/firewall_config_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/firewall_config_test.go b/internal/service/route53resolver/firewall_config_test.go index fb3fee2fdb5..3fb94e0e733 100644 --- a/internal/service/route53resolver/firewall_config_test.go +++ b/internal/service/route53resolver/firewall_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -71,7 +74,7 @@ func TestAccRoute53ResolverFirewallConfig_disappears(t *testing.T) { func testAccCheckFirewallConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_firewall_config" { @@ -110,7 +113,7 @@ func testAccCheckFirewallConfigExists(ctx context.Context, n string, v *route53r return fmt.Errorf("No Route53 Resolver Firewall Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindFirewallConfigByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/firewall_domain_list.go b/internal/service/route53resolver/firewall_domain_list.go index 3a306fbbdd0..8e4474fc4e1 100644 --- a/internal/service/route53resolver/firewall_domain_list.go +++ b/internal/service/route53resolver/firewall_domain_list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -59,13 +62,13 @@ func ResourceFirewallDomainList() *schema.Resource { } func resourceFirewallDomainListCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) name := d.Get("name").(string) input := &route53resolver.CreateFirewallDomainListInput{ CreatorRequestId: aws.String(id.PrefixedUniqueId("tf-r53-resolver-firewall-domain-list-")), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateFirewallDomainListWithContext(ctx, input) @@ -96,7 +99,7 @@ func resourceFirewallDomainListCreate(ctx context.Context, d *schema.ResourceDat } func resourceFirewallDomainListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallDomainList, err := FindFirewallDomainListByID(ctx, conn, d.Id()) @@ -139,7 +142,7 @@ func resourceFirewallDomainListRead(ctx context.Context, d *schema.ResourceData, } func resourceFirewallDomainListUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) if d.HasChange("domains") { o, n := d.GetChange("domains") @@ -179,7 +182,7 @@ func resourceFirewallDomainListUpdate(ctx context.Context, d *schema.ResourceDat } func resourceFirewallDomainListDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Firewall Domain List: %s", d.Id()) _, err := conn.DeleteFirewallDomainListWithContext(ctx, 
&route53resolver.DeleteFirewallDomainListInput{ diff --git a/internal/service/route53resolver/firewall_domain_list_data_source.go b/internal/service/route53resolver/firewall_domain_list_data_source.go index 287f43e486a..0fa3b2b254c 100644 --- a/internal/service/route53resolver/firewall_domain_list_data_source.go +++ b/internal/service/route53resolver/firewall_domain_list_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -60,7 +63,7 @@ func DataSourceFirewallDomainList() *schema.Resource { } func dataSourceFirewallDomainListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) id := d.Get("firewall_domain_list_id").(string) firewallDomainList, err := FindFirewallDomainListByID(ctx, conn, id) diff --git a/internal/service/route53resolver/firewall_domain_list_data_source_test.go b/internal/service/route53resolver/firewall_domain_list_data_source_test.go index 036134be7da..c8e6a2019e2 100644 --- a/internal/service/route53resolver/firewall_domain_list_data_source_test.go +++ b/internal/service/route53resolver/firewall_domain_list_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/firewall_domain_list_test.go b/internal/service/route53resolver/firewall_domain_list_test.go index 74fad716ff9..2fac578630d 100644 --- a/internal/service/route53resolver/firewall_domain_list_test.go +++ b/internal/service/route53resolver/firewall_domain_list_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -170,7 +173,7 @@ func TestAccRoute53ResolverFirewallDomainList_tags(t *testing.T) { func testAccCheckFirewallDomainListDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_firewall_domain_list" { @@ -205,7 +208,7 @@ func testAccCheckFirewallDomainListExists(ctx context.Context, n string, v *rout return fmt.Errorf("No Route53 Resolver Firewall Domain List ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindFirewallDomainListByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/firewall_rule.go b/internal/service/route53resolver/firewall_rule.go index 9a36b7647b1..c7796cb93d0 100644 --- a/internal/service/route53resolver/firewall_rule.go +++ b/internal/service/route53resolver/firewall_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -82,7 +85,7 @@ func ResourceFirewallRule() *schema.Resource { } func resourceFirewallRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallDomainListID := d.Get("firewall_domain_list_id").(string) firewallRuleGroupID := d.Get("firewall_rule_group_id").(string) @@ -125,7 +128,7 @@ func resourceFirewallRuleCreate(ctx context.Context, d *schema.ResourceData, met } func resourceFirewallRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallRuleGroupID, firewallDomainListID, err := FirewallRuleParseResourceID(d.Id()) @@ -159,7 +162,7 @@ func resourceFirewallRuleRead(ctx context.Context, d *schema.ResourceData, meta } func resourceFirewallRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallRuleGroupID, firewallDomainListID, err := FirewallRuleParseResourceID(d.Id()) @@ -201,7 +204,7 @@ func resourceFirewallRuleUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceFirewallRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallRuleGroupID, firewallDomainListID, err := FirewallRuleParseResourceID(d.Id()) diff --git a/internal/service/route53resolver/firewall_rule_group.go b/internal/service/route53resolver/firewall_rule_group.go index 80e987010ba..e2389cd83b1 100644 --- a/internal/service/route53resolver/firewall_rule_group.go +++ 
b/internal/service/route53resolver/firewall_rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -59,13 +62,13 @@ func ResourceFirewallRuleGroup() *schema.Resource { } func resourceFirewallRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) name := d.Get("name").(string) input := &route53resolver.CreateFirewallRuleGroupInput{ CreatorRequestId: aws.String(id.PrefixedUniqueId("tf-r53-resolver-firewall-rule-group-")), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateFirewallRuleGroupWithContext(ctx, input) @@ -80,7 +83,7 @@ func resourceFirewallRuleGroupCreate(ctx context.Context, d *schema.ResourceData } func resourceFirewallRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) ruleGroup, err := FindFirewallRuleGroupByID(ctx, conn, d.Id()) @@ -109,7 +112,7 @@ func resourceFirewallRuleGroupUpdate(ctx context.Context, d *schema.ResourceData } func resourceFirewallRuleGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Firewall Rule Group: %s", d.Id()) _, err := conn.DeleteFirewallRuleGroupWithContext(ctx, &route53resolver.DeleteFirewallRuleGroupInput{ diff --git a/internal/service/route53resolver/firewall_rule_group_association.go b/internal/service/route53resolver/firewall_rule_group_association.go index 8d33548e530..5486e9df3d1 100644 --- a/internal/service/route53resolver/firewall_rule_group_association.go +++ 
b/internal/service/route53resolver/firewall_rule_group_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -73,7 +76,7 @@ func ResourceFirewallRuleGroupAssociation() *schema.Resource { } func resourceFirewallRuleGroupAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) name := d.Get("name").(string) input := &route53resolver.AssociateFirewallRuleGroupInput{ @@ -81,7 +84,7 @@ func resourceFirewallRuleGroupAssociationCreate(ctx context.Context, d *schema.R FirewallRuleGroupId: aws.String(d.Get("firewall_rule_group_id").(string)), Name: aws.String(name), Priority: aws.Int64(int64(d.Get("priority").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VpcId: aws.String(d.Get("vpc_id").(string)), } @@ -105,7 +108,7 @@ func resourceFirewallRuleGroupAssociationCreate(ctx context.Context, d *schema.R } func resourceFirewallRuleGroupAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) ruleGroupAssociation, err := FindFirewallRuleGroupAssociationByID(ctx, conn, d.Id()) @@ -131,7 +134,7 @@ func resourceFirewallRuleGroupAssociationRead(ctx context.Context, d *schema.Res } func resourceFirewallRuleGroupAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) if d.HasChanges("name", "mutation_protection", "priority") { input := &route53resolver.UpdateFirewallRuleGroupAssociationInput{ @@ -159,7 +162,7 @@ func resourceFirewallRuleGroupAssociationUpdate(ctx context.Context, d *schema.R } func 
resourceFirewallRuleGroupAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Firewall Rule Group Association: %s", d.Id()) _, err := conn.DisassociateFirewallRuleGroupWithContext(ctx, &route53resolver.DisassociateFirewallRuleGroupInput{ diff --git a/internal/service/route53resolver/firewall_rule_group_association_data_source.go b/internal/service/route53resolver/firewall_rule_group_association_data_source.go index 5125ab59d4f..7094d10fe50 100644 --- a/internal/service/route53resolver/firewall_rule_group_association_data_source.go +++ b/internal/service/route53resolver/firewall_rule_group_association_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -72,7 +75,7 @@ func DataSourceFirewallRuleGroupAssociation() *schema.Resource { } func dataSourceRuleGroupAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) id := d.Get("firewall_rule_group_association_id").(string) ruleGroupAssociation, err := FindFirewallRuleGroupAssociationByID(ctx, conn, id) diff --git a/internal/service/route53resolver/firewall_rule_group_association_data_source_test.go b/internal/service/route53resolver/firewall_rule_group_association_data_source_test.go index 05eaf04b043..93d28f6d144 100644 --- a/internal/service/route53resolver/firewall_rule_group_association_data_source_test.go +++ b/internal/service/route53resolver/firewall_rule_group_association_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/firewall_rule_group_association_test.go b/internal/service/route53resolver/firewall_rule_group_association_test.go index 531535f73bc..69180e55915 100644 --- a/internal/service/route53resolver/firewall_rule_group_association_test.go +++ b/internal/service/route53resolver/firewall_rule_group_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -226,7 +229,7 @@ func TestAccRoute53ResolverFirewallRuleGroupAssociation_tags(t *testing.T) { func testAccCheckFirewallRuleGroupAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_firewall_rule_group_association" { @@ -261,7 +264,7 @@ func testAccCheckFirewallRuleGroupAssociationExists(ctx context.Context, n strin return fmt.Errorf("No Route53 Resolver Firewall Rule Group Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindFirewallRuleGroupAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/firewall_rule_group_data_source.go b/internal/service/route53resolver/firewall_rule_group_data_source.go index 88dc6ef3726..d43d1d2f42f 100644 --- a/internal/service/route53resolver/firewall_rule_group_data_source.go +++ b/internal/service/route53resolver/firewall_rule_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -64,7 +67,7 @@ func DataSourceFirewallRuleGroup() *schema.Resource { } func dataSourceFirewallRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) id := d.Get("firewall_rule_group_id").(string) ruleGroup, err := FindFirewallRuleGroupByID(ctx, conn, id) diff --git a/internal/service/route53resolver/firewall_rule_group_data_source_test.go b/internal/service/route53resolver/firewall_rule_group_data_source_test.go index 6fced1334f4..c2248e500eb 100644 --- a/internal/service/route53resolver/firewall_rule_group_data_source_test.go +++ b/internal/service/route53resolver/firewall_rule_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/firewall_rule_group_test.go b/internal/service/route53resolver/firewall_rule_group_test.go index 70caccb5303..439d5c1a058 100644 --- a/internal/service/route53resolver/firewall_rule_group_test.go +++ b/internal/service/route53resolver/firewall_rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -131,7 +134,7 @@ func TestAccRoute53ResolverFirewallRuleGroup_tags(t *testing.T) { func testAccCheckFirewallRuleGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_firewall_rule_group" { @@ -166,7 +169,7 @@ func testAccCheckFirewallRuleGroupExists(ctx context.Context, n string, v *route return fmt.Errorf("No Route53 Resolver Firewall Rule Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindFirewallRuleGroupByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/firewall_rule_test.go b/internal/service/route53resolver/firewall_rule_test.go index 03826d77d35..cc114ac8049 100644 --- a/internal/service/route53resolver/firewall_rule_test.go +++ b/internal/service/route53resolver/firewall_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -136,7 +139,7 @@ func TestAccRoute53ResolverFirewallRule_disappears(t *testing.T) { func testAccCheckFirewallRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_firewall_rule" { @@ -183,7 +186,7 @@ func testAccCheckFirewallRuleExists(ctx context.Context, n string, v *route53res return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindFirewallRuleByTwoPartKey(ctx, conn, firewallRuleGroupID, firewallDomainListID) diff --git a/internal/service/route53resolver/firewall_rules_data_source.go b/internal/service/route53resolver/firewall_rules_data_source.go index d87ac179398..1a5c45a2ae8 100644 --- a/internal/service/route53resolver/firewall_rules_data_source.go +++ b/internal/service/route53resolver/firewall_rules_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -89,7 +92,7 @@ func DataSourceResolverFirewallRules() *schema.Resource { } func dataSourceResolverFirewallFirewallRulesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) firewallRuleGroupID := d.Get("firewall_rule_group_id").(string) rules, err := findFirewallRules(ctx, conn, firewallRuleGroupID, func(rule *route53resolver.FirewallRule) bool { diff --git a/internal/service/route53resolver/firewall_rules_data_source_test.go b/internal/service/route53resolver/firewall_rules_data_source_test.go index 04592518aaf..25e1400b92f 100644 --- a/internal/service/route53resolver/firewall_rules_data_source_test.go +++ b/internal/service/route53resolver/firewall_rules_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/generate.go b/internal/service/route53resolver/generate.go index bc550554f45..64e330f47bc 100644 --- a/internal/service/route53resolver/generate.go +++ b/internal/service/route53resolver/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -GetTag -ListTags -ServiceTagsSlice -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package route53resolver diff --git a/internal/service/route53resolver/query_log_config.go b/internal/service/route53resolver/query_log_config.go index c661ce5852b..312e0eec176 100644 --- a/internal/service/route53resolver/query_log_config.go +++ b/internal/service/route53resolver/query_log_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -66,14 +69,14 @@ func ResourceQueryLogConfig() *schema.Resource { } func resourceQueryLogConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) name := d.Get("name").(string) input := &route53resolver.CreateResolverQueryLogConfigInput{ CreatorRequestId: aws.String(id.PrefixedUniqueId("tf-r53-resolver-query-log-config-")), DestinationArn: aws.String(d.Get("destination_arn").(string)), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateResolverQueryLogConfigWithContext(ctx, input) @@ -92,7 +95,7 @@ func resourceQueryLogConfigCreate(ctx context.Context, d *schema.ResourceData, m } func resourceQueryLogConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) queryLogConfig, err := FindResolverQueryLogConfigByID(ctx, conn, d.Id()) @@ -122,7 +125,7 @@ func resourceQueryLogConfigUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceQueryLogConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Query Log Config: %s", d.Id()) _, err := conn.DeleteResolverQueryLogConfigWithContext(ctx, &route53resolver.DeleteResolverQueryLogConfigInput{ diff --git a/internal/service/route53resolver/query_log_config_association.go b/internal/service/route53resolver/query_log_config_association.go index 02c48693e1c..05b62b2e73b 100644 --- a/internal/service/route53resolver/query_log_config_association.go +++ 
b/internal/service/route53resolver/query_log_config_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -43,7 +46,7 @@ func ResourceQueryLogConfigAssociation() *schema.Resource { } func resourceQueryLogConfigAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.AssociateResolverQueryLogConfigInput{ ResolverQueryLogConfigId: aws.String(d.Get("resolver_query_log_config_id").(string)), @@ -66,7 +69,7 @@ func resourceQueryLogConfigAssociationCreate(ctx context.Context, d *schema.Reso } func resourceQueryLogConfigAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) queryLogConfigAssociation, err := FindResolverQueryLogConfigAssociationByID(ctx, conn, d.Id()) @@ -87,7 +90,7 @@ func resourceQueryLogConfigAssociationRead(ctx context.Context, d *schema.Resour } func resourceQueryLogConfigAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Query Log Config Association: %s", d.Id()) _, err := conn.DisassociateResolverQueryLogConfigWithContext(ctx, &route53resolver.DisassociateResolverQueryLogConfigInput{ diff --git a/internal/service/route53resolver/query_log_config_association_test.go b/internal/service/route53resolver/query_log_config_association_test.go index 4f960ca6b4e..b692bd037f6 100644 --- a/internal/service/route53resolver/query_log_config_association_test.go +++ b/internal/service/route53resolver/query_log_config_association_test.go 
@@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -72,7 +75,7 @@ func TestAccRoute53ResolverQueryLogConfigAssociation_disappears(t *testing.T) { func testAccCheckQueryLogConfigAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_query_log_config_association" { @@ -107,7 +110,7 @@ func testAccCheckQueryLogConfigAssociationExists(ctx context.Context, n string, return fmt.Errorf("No Route53 Resolver Query Log Config Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindResolverQueryLogConfigAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/query_log_config_data_source.go b/internal/service/route53resolver/query_log_config_data_source.go index 4f60374b270..1bb083b49be 100644 --- a/internal/service/route53resolver/query_log_config_data_source.go +++ b/internal/service/route53resolver/query_log_config_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -57,7 +60,7 @@ const ( ) func dataSourceQueryLogConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) configID := d.Get("resolver_query_log_config_id").(string) @@ -113,7 +116,7 @@ func dataSourceQueryLogConfigRead(ctx context.Context, d *schema.ResourceData, m d.Set("share_status", shareStatus) if shareStatus != route53resolver.ShareStatusSharedWithMe { - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return create.DiagError(names.AppConfig, create.ErrActionReading, DSNameQueryLogConfig, configID, err) diff --git a/internal/service/route53resolver/query_log_config_data_source_test.go b/internal/service/route53resolver/query_log_config_data_source_test.go index 34727c0647a..62a9cab68a3 100644 --- a/internal/service/route53resolver/query_log_config_data_source_test.go +++ b/internal/service/route53resolver/query_log_config_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/query_log_config_test.go b/internal/service/route53resolver/query_log_config_test.go index e527e9d8b71..1d33f23488d 100644 --- a/internal/service/route53resolver/query_log_config_test.go +++ b/internal/service/route53resolver/query_log_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -133,7 +136,7 @@ func TestAccRoute53ResolverQueryLogConfig_tags(t *testing.T) { func testAccCheckQueryLogConfigDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_query_log_config" { @@ -168,7 +171,7 @@ func testAccCheckQueryLogConfigExists(ctx context.Context, n string, v *route53r return fmt.Errorf("No Route53 Resolver Query Log Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindResolverQueryLogConfigByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/rule.go b/internal/service/route53resolver/rule.go index ce5677ccd24..f730a1a194f 100644 --- a/internal/service/route53resolver/rule.go +++ b/internal/service/route53resolver/rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -108,13 +111,13 @@ func ResourceRule() *schema.Resource { } func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.CreateResolverRuleInput{ CreatorRequestId: aws.String(id.PrefixedUniqueId("tf-r53-resolver-rule-")), DomainName: aws.String(d.Get("domain_name").(string)), RuleType: aws.String(d.Get("rule_type").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("name"); ok { @@ -145,7 +148,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) rule, err := FindResolverRuleByID(ctx, conn, d.Id()) @@ -177,7 +180,7 @@ func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) if d.HasChanges("name", "resolver_endpoint_id", "target_ip") { input := &route53resolver.UpdateResolverRuleInput{ @@ -212,7 +215,7 @@ func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourceRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Rule: %s", d.Id()) _, err := conn.DeleteResolverRuleWithContext(ctx, &route53resolver.DeleteResolverRuleInput{ diff --git 
a/internal/service/route53resolver/rule_association.go b/internal/service/route53resolver/rule_association.go index 10ace31bc42..e513ea7801f 100644 --- a/internal/service/route53resolver/rule_association.go +++ b/internal/service/route53resolver/rule_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -57,7 +60,7 @@ func ResourceRuleAssociation() *schema.Resource { } func resourceRuleAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.AssociateResolverRuleInput{ ResolverRuleId: aws.String(d.Get("resolver_rule_id").(string)), @@ -84,7 +87,7 @@ func resourceRuleAssociationCreate(ctx context.Context, d *schema.ResourceData, } func resourceRuleAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) ruleAssociation, err := FindResolverRuleAssociationByID(ctx, conn, d.Id()) @@ -106,7 +109,7 @@ func resourceRuleAssociationRead(ctx context.Context, d *schema.ResourceData, me } func resourceRuleAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) log.Printf("[DEBUG] Deleting Route53 Resolver Rule Association: %s", d.Id()) _, err := conn.DisassociateResolverRuleWithContext(ctx, &route53resolver.DisassociateResolverRuleInput{ diff --git a/internal/service/route53resolver/rule_association_test.go b/internal/service/route53resolver/rule_association_test.go index 567994a947e..4ba7c2dccea 100644 --- a/internal/service/route53resolver/rule_association_test.go +++ 
b/internal/service/route53resolver/rule_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -102,7 +105,7 @@ func TestAccRoute53ResolverRuleAssociation_Disappears_vpc(t *testing.T) { func testAccCheckRuleAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_rule_association" { @@ -137,7 +140,7 @@ func testAccCheckRuleAssociationExists(ctx context.Context, n string, v *route53 return fmt.Errorf("No Route53 Resolver Rule Association ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindResolverRuleAssociationByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/rule_data_source.go b/internal/service/route53resolver/rule_data_source.go index 6917f5a78bf..4e59fa7c38e 100644 --- a/internal/service/route53resolver/rule_data_source.go +++ b/internal/service/route53resolver/rule_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -69,7 +72,7 @@ func DataSourceRule() *schema.Resource { } func dataSourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig var err error @@ -125,7 +128,7 @@ func dataSourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("share_status", shareStatus) // https://github.com/hashicorp/terraform-provider-aws/issues/10211 if shareStatus != route53resolver.ShareStatusSharedWithMe { - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return diag.Errorf("listing tags for Route53 Resolver Rule (%s): %s", arn, err) diff --git a/internal/service/route53resolver/rule_data_source_test.go b/internal/service/route53resolver/rule_data_source_test.go index bbd1954e41b..de7091240a6 100644 --- a/internal/service/route53resolver/rule_data_source_test.go +++ b/internal/service/route53resolver/rule_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/rule_test.go b/internal/service/route53resolver/rule_test.go index 78cb425ead9..2d2fe94c2d3 100644 --- a/internal/service/route53resolver/rule_test.go +++ b/internal/service/route53resolver/rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( @@ -372,7 +375,7 @@ func testAccCheckRulesDifferent(before, after *route53resolver.ResolverRule) res func testAccCheckRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_route53_resolver_rule" { @@ -406,7 +409,7 @@ func testAccCheckRuleExists(ctx context.Context, n string, v *route53resolver.Re return fmt.Errorf("No Route53 Resolver Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).Route53ResolverConn(ctx) output, err := tfroute53resolver.FindResolverRuleByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/route53resolver/rules_data_source.go b/internal/service/route53resolver/rules_data_source.go index 6878926096a..a55b2bf0d9a 100644 --- a/internal/service/route53resolver/rules_data_source.go +++ b/internal/service/route53resolver/rules_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( @@ -57,7 +60,7 @@ func DataSourceRules() *schema.Resource { } func dataSourceRulesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).Route53ResolverConn() + conn := meta.(*conns.AWSClient).Route53ResolverConn(ctx) input := &route53resolver.ListResolverRulesInput{} var ruleIDs []*string diff --git a/internal/service/route53resolver/rules_data_source_test.go b/internal/service/route53resolver/rules_data_source_test.go index 8907f813f3f..d2138249899 100644 --- a/internal/service/route53resolver/rules_data_source_test.go +++ b/internal/service/route53resolver/rules_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver_test import ( diff --git a/internal/service/route53resolver/service_package_gen.go b/internal/service/route53resolver/service_package_gen.go index 7a1361ba6f7..3ea6052b4fd 100644 --- a/internal/service/route53resolver/service_package_gen.go +++ b/internal/service/route53resolver/service_package_gen.go @@ -5,6 +5,10 @@ package route53resolver import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + route53resolver_sdkv1 "github.com/aws/aws-sdk-go/service/route53resolver" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -141,4 +145,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Route53Resolver } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*route53resolver_sdkv1.Route53Resolver, error) { + sess := config["session"].(*session_sdkv1.Session) + + return route53resolver_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/route53resolver/sweep.go b/internal/service/route53resolver/sweep.go index f7495b610c1..b076c9ec22c 100644 --- a/internal/service/route53resolver/sweep.go +++ b/internal/service/route53resolver/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/route53resolver" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -93,11 +95,11 @@ func init() { func sweepDNSSECConfig(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListResolverDnssecConfigsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -127,7 +129,7 @@ func sweepDNSSECConfig(region string) error { return fmt.Errorf("error listing Route53 Resolver DNSSEC Configs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver DNSSEC Configs (%s): %w", region, err) @@ -138,11 +140,11 @@ func 
sweepDNSSECConfig(region string) error { func sweepEndpoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListResolverEndpointsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -171,7 +173,7 @@ func sweepEndpoints(region string) error { return fmt.Errorf("error listing Route53 Resolver Endpoints (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Endpoints (%s): %w", region, err) @@ -182,11 +184,11 @@ func sweepEndpoints(region string) error { func sweepFirewallConfigs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListFirewallConfigsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -216,7 +218,7 @@ func sweepFirewallConfigs(region string) error { return fmt.Errorf("error listing Route53 Resolver Firewall Configs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Firewall Configs (%s): %w", region, err) @@ -227,11 +229,11 @@ func sweepFirewallConfigs(region string) error { func sweepFirewallDomainLists(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + 
client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListFirewallDomainListsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -260,7 +262,7 @@ func sweepFirewallDomainLists(region string) error { return fmt.Errorf("error listing Route53 Resolver Firewall Domain Lists (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Firewall Domain Lists (%s): %w", region, err) @@ -271,11 +273,11 @@ func sweepFirewallDomainLists(region string) error { func sweepFirewallRuleGroupAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListFirewallRuleGroupAssociationsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -304,7 +306,7 @@ func sweepFirewallRuleGroupAssociations(region string) error { return fmt.Errorf("error listing Route53 Resolver Firewall Rule Group Associations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Firewall Rule Group Associations (%s): %w", region, err) @@ -315,11 +317,11 @@ func sweepFirewallRuleGroupAssociations(region string) error { func sweepFirewallRuleGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListFirewallRuleGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -348,7 +350,7 @@ func sweepFirewallRuleGroups(region string) error { return fmt.Errorf("error listing Route53 Resolver Firewall Rule Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Firewall Rule Groups (%s): %w", region, err) @@ -359,11 +361,11 @@ func sweepFirewallRuleGroups(region string) error { func sweepFirewallRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListFirewallRuleGroupsInput{} var sweeperErrs *multierror.Error sweepResources := make([]sweep.Sweepable, 0) @@ -415,7 +417,7 @@ func sweepFirewallRules(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Route53 Resolver Firewall Rule Groups (%s): %w", region, err)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Route53 Resolver Firewall Rules (%s): %w", region, err)) @@ -426,11 +428,11 @@ func sweepFirewallRules(region string) error { func sweepQueryLogAssociationsConfigs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListResolverQueryLogConfigAssociationsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -461,7 +463,7 @@ func sweepQueryLogAssociationsConfigs(region string) error { return fmt.Errorf("error listing Route53 Resolver Query Log Config Associations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Query Log Config Associations (%s): %w", region, err) @@ -472,11 +474,11 @@ func sweepQueryLogAssociationsConfigs(region string) error { func sweepQueryLogsConfig(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListResolverQueryLogConfigsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -505,7 +507,7 @@ func sweepQueryLogsConfig(region string) error { return fmt.Errorf("error listing Route53 Resolver Query Log Configs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Query Log Configs (%s): %w", region, err) @@ -516,11 +518,11 @@ func sweepQueryLogsConfig(region string) error { func sweepRuleAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return 
fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListResolverRuleAssociationsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -551,7 +553,7 @@ func sweepRuleAssociations(region string) error { return fmt.Errorf("error listing Route53 Resolver Rule Associations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Rule Associations (%s): %w", region, err) @@ -562,11 +564,11 @@ func sweepRuleAssociations(region string) error { func sweepRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).Route53ResolverConn() + conn := client.Route53ResolverConn(ctx) input := &route53resolver.ListResolverRulesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -576,7 +578,7 @@ func sweepRules(region string) error { } for _, v := range page.ResolverRules { - if aws.StringValue(v.OwnerId) != client.(*conns.AWSClient).AccountID { + if aws.StringValue(v.OwnerId) != client.AccountID { continue } @@ -599,7 +601,7 @@ func sweepRules(region string) error { return fmt.Errorf("error listing Route53 Resolver Rules (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Route53 Resolver Rules (%s): %w", region, err) diff --git a/internal/service/route53resolver/tags_gen.go b/internal/service/route53resolver/tags_gen.go index 8ede22b42cb..8151de13937 100644 --- a/internal/service/route53resolver/tags_gen.go +++ 
b/internal/service/route53resolver/tags_gen.go @@ -17,11 +17,11 @@ import ( // GetTag fetches an individual route53resolver service tag for a resource. // Returns whether the key value and any errors. A NotFoundError is used to signal that no value was found. -// This function will optimise the handling over ListTags, if possible. +// This function will optimise the handling over listTags, if possible. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. func GetTag(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, identifier, key string) (*string, error) { - listTags, err := ListTags(ctx, conn, identifier) + listTags, err := listTags(ctx, conn, identifier) if err != nil { return nil, err @@ -34,10 +34,10 @@ func GetTag(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, i return listTags.KeyValue(key), nil } -// ListTags lists route53resolver service tags. +// listTags lists route53resolver service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, identifier string) (tftags.KeyValueTags, error) { input := &route53resolver.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -54,7 +54,7 @@ func ListTags(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, // ListTags lists route53resolver service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).Route53ResolverConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).Route53ResolverConn(ctx), identifier) if err != nil { return err @@ -96,9 +96,9 @@ func KeyValueTags(ctx context.Context, tags []*route53resolver.Tag) tftags.KeyVa return tftags.New(ctx, m) } -// GetTagsIn returns route53resolver service tags from Context. +// getTagsIn returns route53resolver service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*route53resolver.Tag { +func getTagsIn(ctx context.Context) []*route53resolver.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -108,17 +108,17 @@ func GetTagsIn(ctx context.Context) []*route53resolver.Tag { return nil } -// SetTagsOut sets route53resolver service tags in Context. -func SetTagsOut(ctx context.Context, tags []*route53resolver.Tag) { +// setTagsOut sets route53resolver service tags in Context. +func setTagsOut(ctx context.Context, tags []*route53resolver.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates route53resolver service tags. +// updateTags updates route53resolver service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn route53resolveriface.Route53ResolverAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -158,5 +158,5 @@ func UpdateTags(ctx context.Context, conn route53resolveriface.Route53ResolverAP // UpdateTags updates route53resolver service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).Route53ResolverConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).Route53ResolverConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/route53resolver/validate.go b/internal/service/route53resolver/validate.go index 3b182098088..d07910fb53e 100644 --- a/internal/service/route53resolver/validate.go +++ b/internal/service/route53resolver/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( diff --git a/internal/service/route53resolver/validate_test.go b/internal/service/route53resolver/validate_test.go index 5cd83bebe3b..ce414e9a80a 100644 --- a/internal/service/route53resolver/validate_test.go +++ b/internal/service/route53resolver/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package route53resolver import ( diff --git a/internal/service/rum/app_monitor.go b/internal/service/rum/app_monitor.go index 0f34677ee74..4f6a812d4a6 100644 --- a/internal/service/rum/app_monitor.go +++ b/internal/service/rum/app_monitor.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rum import ( @@ -146,14 +149,14 @@ func ResourceAppMonitor() *schema.Resource { } func resourceAppMonitorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) name := d.Get("name").(string) input := &cloudwatchrum.CreateAppMonitorInput{ Name: aws.String(name), CwLogEnabled: aws.Bool(d.Get("cw_log_enabled").(bool)), Domain: aws.String(d.Get("domain").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("app_monitor_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -176,7 +179,7 @@ func resourceAppMonitorCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceAppMonitorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) appMon, err := FindAppMonitorByName(ctx, conn, d.Id()) @@ -212,13 +215,13 @@ func resourceAppMonitorRead(ctx context.Context, d *schema.ResourceData, meta in d.Set("domain", appMon.Domain) d.Set("name", appMon.Name) - SetTagsOut(ctx, appMon.Tags) + setTagsOut(ctx, appMon.Tags) return nil } func resourceAppMonitorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &cloudwatchrum.UpdateAppMonitorInput{ @@ -252,7 +255,7 @@ func resourceAppMonitorUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceAppMonitorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) log.Printf("[DEBUG] Deleting CloudWatch RUM App Monitor: %s", d.Id()) _, err := 
conn.DeleteAppMonitorWithContext(ctx, &cloudwatchrum.DeleteAppMonitorInput{ diff --git a/internal/service/rum/app_monitor_test.go b/internal/service/rum/app_monitor_test.go index 9a3044ef1d1..6d33305da9b 100644 --- a/internal/service/rum/app_monitor_test.go +++ b/internal/service/rum/app_monitor_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rum_test import ( @@ -185,7 +188,7 @@ func TestAccRUMAppMonitor_disappears(t *testing.T) { func testAccCheckAppMonitorDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rum_app_monitor" { @@ -219,7 +222,7 @@ func testAccCheckAppMonitorExists(ctx context.Context, n string, v *cloudwatchru return fmt.Errorf("No CloudWatch RUM App Monitor ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn(ctx) output, err := tfcloudwatchrum.FindAppMonitorByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/rum/generate.go b/internal/service/rum/generate.go index e399d2928af..a6955d2a2d9 100644 --- a/internal/service/rum/generate.go +++ b/internal/service/rum/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package rum diff --git a/internal/service/rum/metrics_destination.go b/internal/service/rum/metrics_destination.go index cf9f7ea5b94..ea253c1f20a 100644 --- a/internal/service/rum/metrics_destination.go +++ b/internal/service/rum/metrics_destination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package rum import ( @@ -53,7 +56,7 @@ func ResourceMetricsDestination() *schema.Resource { } func resourceMetricsDestinationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) name := d.Get("app_monitor_name").(string) input := &cloudwatchrum.PutRumMetricsDestinationInput{ @@ -83,7 +86,7 @@ func resourceMetricsDestinationPut(ctx context.Context, d *schema.ResourceData, } func resourceMetricsDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) dest, err := FindMetricsDestinationByName(ctx, conn, d.Id()) @@ -106,7 +109,7 @@ func resourceMetricsDestinationRead(ctx context.Context, d *schema.ResourceData, } func resourceMetricsDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).RUMConn() + conn := meta.(*conns.AWSClient).RUMConn(ctx) input := &cloudwatchrum.DeleteRumMetricsDestinationInput{ AppMonitorName: aws.String(d.Id()), diff --git a/internal/service/rum/metrics_destination_test.go b/internal/service/rum/metrics_destination_test.go index e7b570add37..2790a4a7a6c 100644 --- a/internal/service/rum/metrics_destination_test.go +++ b/internal/service/rum/metrics_destination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package rum_test import ( @@ -96,7 +99,7 @@ func TestAccRUMMetricsDestination_disappears_appMonitor(t *testing.T) { func testAccCheckMetricsDestinationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_rum_metrics_destination" { @@ -130,7 +133,7 @@ func testAccCheckMetricsDestinationExists(ctx context.Context, n string, v *clou return fmt.Errorf("No CloudWatch RUM Metrics Destination ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).RUMConn(ctx) output, err := tfcloudwatchrum.FindMetricsDestinationByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/rum/service_package_gen.go b/internal/service/rum/service_package_gen.go index 95f7d57d6bc..572ef869ee9 100644 --- a/internal/service/rum/service_package_gen.go +++ b/internal/service/rum/service_package_gen.go @@ -5,6 +5,10 @@ package rum import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + cloudwatchrum_sdkv1 "github.com/aws/aws-sdk-go/service/cloudwatchrum" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +48,13 @@ func (p *servicePackage) ServicePackageName() string { return names.RUM } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*cloudwatchrum_sdkv1.CloudWatchRUM, error) { + sess := config["session"].(*session_sdkv1.Session) + + return cloudwatchrum_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/rum/sweep.go b/internal/service/rum/sweep.go index e1bf1635a90..ba3201eaf58 100644 --- a/internal/service/rum/sweep.go +++ b/internal/service/rum/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/cloudwatchrum" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -24,13 +26,13 @@ func init() { func sweepAppMonitors(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).RUMConn() + conn := client.RUMConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -56,7 +58,7 @@ func sweepAppMonitors(region string) error { // in case work can be done, don't jump out yet } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping RUM App Monitors for %s: %w", region, err)) } diff --git a/internal/service/rum/tags_gen.go b/internal/service/rum/tags_gen.go index 0f14e3dadb2..d5b56035227 100644 --- a/internal/service/rum/tags_gen.go +++ 
b/internal/service/rum/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from rum service tags. +// KeyValueTags creates tftags.KeyValueTags from rum service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns rum service tags from Context. +// getTagsIn returns rum service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets rum service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets rum service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates rum service tags. +// updateTags updates rum service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn cloudwatchrumiface.CloudWatchRUMAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn cloudwatchrumiface.CloudWatchRUMAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn cloudwatchrumiface.CloudWatchRUMAPI, i // UpdateTags updates rum service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).RUMConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).RUMConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/s3/bucket.go b/internal/service/s3/bucket.go index f6d3fb5ddff..10be457384b 100644 --- a/internal/service/s3/bucket.go +++ b/internal/service/s3/bucket.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -694,7 +697,7 @@ func ResourceBucket() *schema.Resource { func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := create.Name(d.Get("bucket").(string), d.Get("bucket_prefix").(string)) @@ -768,7 +771,7 @@ func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) err := FindBucket(ctx, conn, d.Id()) @@ -1244,14 +1247,14 @@ func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s): unable to convert tags", d.Id()) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } func resourceBucketUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") @@ -1361,7 +1364,7 @@ func resourceBucketUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func 
resourceBucketDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) log.Printf("[INFO] Deleting S3 Bucket: %s", d.Id()) _, err := conn.DeleteBucketWithContext(ctx, &s3.DeleteBucketInput{ @@ -1377,7 +1380,7 @@ func resourceBucketDelete(ctx context.Context, d *schema.ResourceData, meta inte // Use a S3 service client that can handle multiple slashes in URIs. // While aws_s3_object resources cannot create these object // keys, other AWS services and applications using the S3 Bucket can. - conn = meta.(*conns.AWSClient).S3ConnURICleaningDisabled() + conn = meta.(*conns.AWSClient).S3ConnURICleaningDisabled(ctx) // bucket may have things delete them log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %s", err) @@ -1440,7 +1443,10 @@ func BucketRegionalDomainName(bucket string, region string) (string, error) { if region == "" { return fmt.Sprintf("%s.s3.amazonaws.com", bucket), nil //lintignore:AWSR001 } - endpoint, err := endpoints.DefaultResolver().EndpointFor(s3.EndpointsID, region) + endpoint, err := endpoints.DefaultResolver().EndpointFor(s3.EndpointsID, region, func(o *endpoints.Options) { + // By default, EndpointFor uses the legacy endpoint for S3 in the us-east-1 region + o.S3UsEast1RegionalEndpoint = endpoints.RegionalS3UsEast1Endpoint + }) if err != nil { return "", err } @@ -1480,7 +1486,7 @@ func websiteEndpoint(ctx context.Context, client *conns.AWSClient, d *schema.Res // Lookup the region for this bucket locationResponse, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, d.Timeout(schema.TimeoutRead), func() (interface{}, error) { - return client.S3Conn().GetBucketLocation( + return client.S3Conn(ctx).GetBucketLocation( &s3.GetBucketLocationInput{ Bucket: aws.String(bucket), }, diff --git a/internal/service/s3/bucket_accelerate_configuration.go b/internal/service/s3/bucket_accelerate_configuration.go index
2c62a946683..0e1051163a9 100644 --- a/internal/service/s3/bucket_accelerate_configuration.go +++ b/internal/service/s3/bucket_accelerate_configuration.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( "context" - "fmt" "log" "time" @@ -51,7 +53,7 @@ func ResourceBucketAccelerateConfiguration() *schema.Resource { } func resourceBucketAccelerateConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -72,7 +74,7 @@ func resourceBucketAccelerateConfigurationCreate(ctx context.Context, d *schema. }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket (%s) accelerate configuration: %w", bucket, err)) + return diag.Errorf("creating S3 bucket (%s) accelerate configuration: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) @@ -81,7 +83,7 @@ func resourceBucketAccelerateConfigurationCreate(ctx context.Context, d *schema. 
} func resourceBucketAccelerateConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -105,12 +107,12 @@ func resourceBucketAccelerateConfigurationRead(ctx context.Context, d *schema.Re } if err != nil { - return diag.FromErr(fmt.Errorf("error reading S3 bucket accelerate configuration (%s): %w", d.Id(), err)) + return diag.Errorf("reading S3 bucket accelerate configuration (%s): %s", d.Id(), err) } if output == nil { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading S3 bucket accelerate configuration (%s): empty output", d.Id())) + return diag.Errorf("reading S3 bucket accelerate configuration (%s): empty output", d.Id()) } log.Printf("[WARN] S3 Bucket Accelerate Configuration (%s) not found, removing from state", d.Id()) d.SetId("") @@ -125,7 +127,7 @@ func resourceBucketAccelerateConfigurationRead(ctx context.Context, d *schema.Re } func resourceBucketAccelerateConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -146,14 +148,14 @@ func resourceBucketAccelerateConfigurationUpdate(ctx context.Context, d *schema. 
_, err = conn.PutBucketAccelerateConfigurationWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket accelerate configuration (%s): %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket accelerate configuration (%s): %s", d.Id(), err) } return resourceBucketAccelerateConfigurationRead(ctx, d, meta) } func resourceBucketAccelerateConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -178,7 +180,7 @@ func resourceBucketAccelerateConfigurationDelete(ctx context.Context, d *schema. } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting S3 bucket accelerate configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting S3 bucket accelerate configuration (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/s3/bucket_accelerate_configuration_test.go b/internal/service/s3/bucket_accelerate_configuration_test.go index e093f5ba290..a3046ba874a 100644 --- a/internal/service/s3/bucket_accelerate_configuration_test.go +++ b/internal/service/s3/bucket_accelerate_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -177,7 +180,7 @@ func TestAccS3BucketAccelerateConfiguration_migrate_withChange(t *testing.T) { func testAccCheckBucketAccelerateConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_accelerate_configuration" { @@ -227,7 +230,7 @@ func testAccCheckBucketAccelerateConfigurationExists(ctx context.Context, resour return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_acl.go b/internal/service/s3/bucket_acl.go index e702a0441db..777cc307ae3 100644 --- a/internal/service/s3/bucket_acl.go +++ b/internal/service/s3/bucket_acl.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -128,7 +131,7 @@ func ResourceBucketACL() *schema.Resource { } func resourceBucketACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -155,7 +158,7 @@ func resourceBucketACLCreate(ctx context.Context, d *schema.ResourceData, meta i }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket ACL for %s: %w", bucket, err)) + return diag.Errorf("creating S3 bucket ACL for %s: %s", bucket, err) } d.SetId(BucketACLCreateResourceID(bucket, expectedBucketOwner, acl)) @@ -164,7 +167,7 @@ func resourceBucketACLCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceBucketACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, acl, err := BucketACLParseResourceID(d.Id()) if err != nil { @@ -188,25 +191,25 @@ func resourceBucketACLRead(ctx context.Context, d *schema.ResourceData, meta int } if err != nil { - return diag.FromErr(fmt.Errorf("error getting S3 bucket ACL (%s): %w", d.Id(), err)) + return diag.Errorf("getting S3 bucket ACL (%s): %s", d.Id(), err) } if output == nil { - return diag.FromErr(fmt.Errorf("error getting S3 bucket ACL (%s): empty output", d.Id())) + return diag.Errorf("getting S3 bucket ACL (%s): empty output", d.Id()) } d.Set("acl", acl) d.Set("bucket", bucket) d.Set("expected_bucket_owner", expectedBucketOwner) if err := d.Set("access_control_policy", flattenBucketACLAccessControlPolicy(output)); err != nil { - return diag.FromErr(fmt.Errorf("error setting access_control_policy: %w", err)) + return diag.Errorf("setting access_control_policy: %s", err) } 
return nil } func resourceBucketACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, acl, err := BucketACLParseResourceID(d.Id()) if err != nil { @@ -233,7 +236,7 @@ func resourceBucketACLUpdate(ctx context.Context, d *schema.ResourceData, meta i _, err = conn.PutBucketAclWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket ACL (%s): %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket ACL (%s): %s", d.Id(), err) } if d.HasChange("acl") { diff --git a/internal/service/s3/bucket_acl_test.go b/internal/service/s3/bucket_acl_test.go index 2d5e7137e1d..58ca9a1d5d9 100644 --- a/internal/service/s3/bucket_acl_test.go +++ b/internal/service/s3/bucket_acl_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -605,7 +608,7 @@ func testAccCheckBucketACLExists(ctx context.Context, n string) resource.TestChe return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, _, err := tfs3.BucketACLParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_analytics_configuration.go b/internal/service/s3/bucket_analytics_configuration.go index dab414bc7be..da918fab8e8 100644 --- a/internal/service/s3/bucket_analytics_configuration.go +++ b/internal/service/s3/bucket_analytics_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -133,7 +136,7 @@ var filterAtLeastOneOfKeys = []string{"filter.0.prefix", "filter.0.tags"} func resourceBucketAnalyticsConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) name := d.Get("name").(string) @@ -180,7 +183,7 @@ func resourceBucketAnalyticsConfigurationPut(ctx context.Context, d *schema.Reso func resourceBucketAnalyticsConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, name, err := BucketAnalyticsConfigurationParseID(d.Id()) if err != nil { @@ -231,7 +234,7 @@ func resourceBucketAnalyticsConfigurationRead(ctx context.Context, d *schema.Res func resourceBucketAnalyticsConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, name, err := BucketAnalyticsConfigurationParseID(d.Id()) if err != nil { @@ -468,7 +471,7 @@ func WaitForDeleteBucketAnalyticsConfiguration(ctx context.Context, conn *s3.S3, } if err != nil { - return fmt.Errorf("error deleting S3 Bucket Analytics Configuration \"%s:%s\": %w", bucket, name, err) + return fmt.Errorf("deleting S3 Bucket Analytics Configuration \"%s:%s\": %w", bucket, name, err) } return nil diff --git a/internal/service/s3/bucket_analytics_configuration_test.go b/internal/service/s3/bucket_analytics_configuration_test.go index cfcc1757e19..50cdb0133a3 100644 --- a/internal/service/s3/bucket_analytics_configuration_test.go +++ b/internal/service/s3/bucket_analytics_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -470,7 +473,7 @@ func TestAccS3BucketAnalyticsConfiguration_WithStorageClassAnalysis_full(t *test func testAccCheckBucketAnalyticsConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_analytics_configuration" { @@ -495,7 +498,7 @@ func testAccCheckBucketAnalyticsConfigurationExists(ctx context.Context, n strin return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) output, err := conn.GetBucketAnalyticsConfigurationWithContext(ctx, &s3.GetBucketAnalyticsConfigurationInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), Id: aws.String(rs.Primary.Attributes["name"]), @@ -517,7 +520,7 @@ func testAccCheckBucketAnalyticsConfigurationExists(ctx context.Context, n strin func testAccCheckBucketAnalyticsConfigurationRemoved(ctx context.Context, name, bucket string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) return tfs3.WaitForDeleteBucketAnalyticsConfiguration(ctx, conn, bucket, name, 1*time.Minute) } } diff --git a/internal/service/s3/bucket_cors_configuration.go b/internal/service/s3/bucket_cors_configuration.go index 3b2c22afa9b..eb529e2f6bf 100644 --- a/internal/service/s3/bucket_cors_configuration.go +++ b/internal/service/s3/bucket_cors_configuration.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( "context" - "fmt" "log" "time" @@ -85,7 +87,7 @@ func ResourceBucketCorsConfiguration() *schema.Resource { } func resourceBucketCorsConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -106,7 +108,7 @@ func resourceBucketCorsConfigurationCreate(ctx context.Context, d *schema.Resour }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket (%s) CORS configuration: %w", bucket, err)) + return diag.Errorf("creating S3 bucket (%s) CORS configuration: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) @@ -115,7 +117,7 @@ func resourceBucketCorsConfigurationCreate(ctx context.Context, d *schema.Resour } func resourceBucketCorsConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -141,13 +143,13 @@ func resourceBucketCorsConfigurationRead(ctx context.Context, d *schema.Resource } if err != nil { - return diag.FromErr(fmt.Errorf("error reading S3 bucket CORS configuration (%s): %w", d.Id(), err)) + return diag.Errorf("reading S3 bucket CORS configuration (%s): %s", d.Id(), err) } output, ok := corsResponse.(*s3.GetBucketCorsOutput) if !ok || output == nil { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading S3 bucket CORS configuration (%s): empty output", d.Id())) + return diag.Errorf("reading S3 bucket CORS configuration (%s): empty output", d.Id()) } log.Printf("[WARN] S3 Bucket CORS Configuration (%s) not found, removing from state", d.Id()) d.SetId("") @@ -158,14 +160,14 @@ func 
resourceBucketCorsConfigurationRead(ctx context.Context, d *schema.Resource d.Set("expected_bucket_owner", expectedBucketOwner) if err := d.Set("cors_rule", flattenBucketCorsConfigurationCorsRules(output.CORSRules)); err != nil { - return diag.FromErr(fmt.Errorf("error setting cors_rule: %w", err)) + return diag.Errorf("setting cors_rule: %s", err) } return nil } func resourceBucketCorsConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -186,14 +188,14 @@ func resourceBucketCorsConfigurationUpdate(ctx context.Context, d *schema.Resour _, err = conn.PutBucketCorsWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket CORS configuration (%s): %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket CORS configuration (%s): %s", d.Id(), err) } return resourceBucketCorsConfigurationRead(ctx, d, meta) } func resourceBucketCorsConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -215,7 +217,7 @@ func resourceBucketCorsConfigurationDelete(ctx context.Context, d *schema.Resour } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting S3 bucket CORS configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting S3 bucket CORS configuration (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/s3/bucket_cors_configuration_test.go b/internal/service/s3/bucket_cors_configuration_test.go index 2d00e9f8861..176ff0498b4 100644 --- a/internal/service/s3/bucket_cors_configuration_test.go +++ b/internal/service/s3/bucket_cors_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -313,7 +316,7 @@ func TestAccS3BucketCorsConfiguration_migrate_corsRuleWithChange(t *testing.T) { func testAccCheckBucketCorsConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_cors_configuration" { @@ -363,7 +366,7 @@ func testAccCheckBucketCorsConfigurationExists(ctx context.Context, resourceName return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_data_source.go b/internal/service/s3/bucket_data_source.go index 13f629c4e1c..d00f565012b 100644 --- a/internal/service/s3/bucket_data_source.go +++ b/internal/service/s3/bucket_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -60,7 +63,7 @@ func DataSourceBucket() *schema.Resource { func dataSourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) @@ -99,18 +102,18 @@ func dataSourceBucketRead(ctx context.Context, d *schema.ResourceData, meta inte } func bucketLocation(ctx context.Context, client *conns.AWSClient, d *schema.ResourceData, bucket string) error { - region, err := s3manager.GetBucketRegionWithClient(ctx, client.S3Conn(), bucket, func(r *request.Request) { + region, err := s3manager.GetBucketRegionWithClient(ctx, client.S3Conn(ctx), bucket, func(r *request.Request) { // By default, GetBucketRegion forces virtual host addressing, which // is not compatible with many non-AWS implementations. Instead, pass // the provider s3_force_path_style configuration, which defaults to // false, but allows override. - r.Config.S3ForcePathStyle = client.S3Conn().Config.S3ForcePathStyle + r.Config.S3ForcePathStyle = client.S3Conn(ctx).Config.S3ForcePathStyle // By default, GetBucketRegion uses anonymous credentials when doing // a HEAD request to get the bucket region. This breaks in aws-cn regions // when the account doesn't have an ICP license to host public content. // Use the current credentials when getting the bucket region. 
- r.Config.Credentials = client.S3Conn().Config.Credentials + r.Config.Credentials = client.S3Conn(ctx).Config.Credentials }) if err != nil { return err @@ -126,7 +129,7 @@ func bucketLocation(ctx context.Context, client *conns.AWSClient, d *schema.Reso d.Set("hosted_zone_id", hostedZoneID) } - _, websiteErr := client.S3Conn().GetBucketWebsite( + _, websiteErr := client.S3Conn(ctx).GetBucketWebsite( &s3.GetBucketWebsiteInput{ Bucket: aws.String(bucket), }, diff --git a/internal/service/s3/bucket_data_source_test.go b/internal/service/s3/bucket_data_source_test.go index 7d17843412d..a7440321d48 100644 --- a/internal/service/s3/bucket_data_source_test.go +++ b/internal/service/s3/bucket_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( diff --git a/internal/service/s3/bucket_intelligent_tiering_configuration.go b/internal/service/s3/bucket_intelligent_tiering_configuration.go index cfa4409ae31..02349daa6ae 100644 --- a/internal/service/s3/bucket_intelligent_tiering_configuration.go +++ b/internal/service/s3/bucket_intelligent_tiering_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -91,7 +94,7 @@ func ResourceBucketIntelligentTieringConfiguration() *schema.Resource { func resourceBucketIntelligentTieringConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucketName := d.Get("bucket").(string) configurationName := d.Get("name").(string) @@ -131,7 +134,7 @@ func resourceBucketIntelligentTieringConfigurationPut(ctx context.Context, d *sc func resourceBucketIntelligentTieringConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucketName, configurationName, err := BucketIntelligentTieringConfigurationParseResourceID(d.Id()) @@ -170,7 +173,7 @@ func resourceBucketIntelligentTieringConfigurationRead(ctx context.Context, d *s func resourceBucketIntelligentTieringConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucketName, configurationName, err := BucketIntelligentTieringConfigurationParseResourceID(d.Id()) diff --git a/internal/service/s3/bucket_intelligent_tiering_configuration_test.go b/internal/service/s3/bucket_intelligent_tiering_configuration_test.go index 87460ea5d85..e3453a64fb1 100644 --- a/internal/service/s3/bucket_intelligent_tiering_configuration_test.go +++ b/internal/service/s3/bucket_intelligent_tiering_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -361,7 +364,7 @@ func testAccCheckBucketIntelligentTieringConfigurationExists(ctx context.Context return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) output, err := tfs3.FindBucketIntelligentTieringConfiguration(ctx, conn, bucketName, configurationName) @@ -377,7 +380,7 @@ func testAccCheckBucketIntelligentTieringConfigurationExists(ctx context.Context func testAccCheckBucketIntelligentTieringConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_intelligent_tiering_configuration" { diff --git a/internal/service/s3/bucket_inventory.go b/internal/service/s3/bucket_inventory.go index 61e22469923..2d2cb1b4fc3 100644 --- a/internal/service/s3/bucket_inventory.go +++ b/internal/service/s3/bucket_inventory.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -180,7 +183,7 @@ func ResourceBucketInventory() *schema.Resource { func resourceBucketInventoryPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) name := d.Get("name").(string) @@ -258,7 +261,7 @@ func resourceBucketInventoryPut(ctx context.Context, d *schema.ResourceData, met func resourceBucketInventoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, name, err := BucketInventoryParseID(d.Id()) if err != nil { @@ -290,7 +293,7 @@ func resourceBucketInventoryDelete(ctx context.Context, d *schema.ResourceData, func resourceBucketInventoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, name, err := BucketInventoryParseID(d.Id()) if err != nil { diff --git a/internal/service/s3/bucket_inventory_test.go b/internal/service/s3/bucket_inventory_test.go index 506f6c84c98..d7d90dc83ef 100644 --- a/internal/service/s3/bucket_inventory_test.go +++ b/internal/service/s3/bucket_inventory_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -142,7 +145,7 @@ func testAccCheckBucketInventoryExistsConfig(ctx context.Context, n string, res return fmt.Errorf("No S3 bucket inventory configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, name, err := tfs3.BucketInventoryParseID(rs.Primary.ID) if err != nil { return err @@ -166,7 +169,7 @@ func testAccCheckBucketInventoryExistsConfig(ctx context.Context, n string, res func testAccCheckBucketInventoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_inventory" { diff --git a/internal/service/s3/bucket_lifecycle_configuration.go b/internal/service/s3/bucket_lifecycle_configuration.go index 95ff9200920..5b04bf0b61d 100644 --- a/internal/service/s3/bucket_lifecycle_configuration.go +++ b/internal/service/s3/bucket_lifecycle_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -16,9 +19,9 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -253,14 +256,14 @@ func ResourceBucketLifecycleConfiguration() *schema.Resource { } func resourceBucketLifecycleConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) rules, err := ExpandLifecycleRules(ctx, d.Get("rule").([]interface{})) if err != nil { - return diag.Errorf("error creating S3 Lifecycle Configuration for bucket (%s): %s", bucket, err) + return diag.Errorf("creating S3 Lifecycle Configuration for bucket (%s): %s", bucket, err) } input := &s3.PutBucketLifecycleConfigurationInput{ @@ -279,20 +282,20 @@ func resourceBucketLifecycleConfigurationCreate(ctx context.Context, d *schema.R }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.Errorf("error creating S3 Lifecycle Configuration for bucket (%s): %s", bucket, err) + return diag.Errorf("creating S3 Lifecycle Configuration for bucket (%s): %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) if err = waitForLifecycleConfigurationRulesStatus(ctx, conn, bucket, expectedBucketOwner, rules); err != nil { - return diag.Errorf("error waiting for S3 Lifecycle Configuration for bucket (%s) to reach expected rules status after update: %s", 
d.Id(), err) + return diag.Errorf("waiting for S3 Lifecycle Configuration for bucket (%s) to reach expected rules status after update: %s", d.Id(), err) } return resourceBucketLifecycleConfigurationRead(ctx, d, meta) } func resourceBucketLifecycleConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -343,20 +346,20 @@ func resourceBucketLifecycleConfigurationRead(ctx context.Context, d *schema.Res } if err != nil { - return diag.Errorf("error getting S3 Bucket Lifecycle Configuration (%s): %s", d.Id(), err) + return diag.Errorf("getting S3 Bucket Lifecycle Configuration (%s): %s", d.Id(), err) } d.Set("bucket", bucket) d.Set("expected_bucket_owner", expectedBucketOwner) if err := d.Set("rule", FlattenLifecycleRules(ctx, output.Rules)); err != nil { - return diag.Errorf("error setting rule: %s", err) + return diag.Errorf("setting rule: %s", err) } return nil } func resourceBucketLifecycleConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -365,7 +368,7 @@ func resourceBucketLifecycleConfigurationUpdate(ctx context.Context, d *schema.R rules, err := ExpandLifecycleRules(ctx, d.Get("rule").([]interface{})) if err != nil { - return diag.Errorf("error updating S3 Bucket Lifecycle Configuration rule: %s", err) + return diag.Errorf("updating S3 Bucket Lifecycle Configuration rule: %s", err) } input := &s3.PutBucketLifecycleConfigurationInput{ @@ -384,18 +387,18 @@ func resourceBucketLifecycleConfigurationUpdate(ctx context.Context, d *schema.R }, ErrCodeNoSuchLifecycleConfiguration) if err != nil { - return diag.Errorf("error updating S3 Bucket 
Lifecycle Configuration (%s): %s", d.Id(), err) + return diag.Errorf("updating S3 Bucket Lifecycle Configuration (%s): %s", d.Id(), err) } if err := waitForLifecycleConfigurationRulesStatus(ctx, conn, bucket, expectedBucketOwner, rules); err != nil { - return diag.Errorf("error waiting for S3 Lifecycle Configuration for bucket (%s) to reach expected rules status after update: %s", d.Id(), err) + return diag.Errorf("waiting for S3 Lifecycle Configuration for bucket (%s) to reach expected rules status after update: %s", d.Id(), err) } return resourceBucketLifecycleConfigurationRead(ctx, d, meta) } func resourceBucketLifecycleConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -417,7 +420,7 @@ func resourceBucketLifecycleConfigurationDelete(ctx context.Context, d *schema.R } if err != nil { - return diag.Errorf("error deleting S3 Bucket Lifecycle Configuration (%s): %s", d.Id(), err) + return diag.Errorf("deleting S3 Bucket Lifecycle Configuration (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/s3/bucket_lifecycle_configuration_test.go b/internal/service/s3/bucket_lifecycle_configuration_test.go index 6527b5f3bc5..bb59479fcf5 100644 --- a/internal/service/s3/bucket_lifecycle_configuration_test.go +++ b/internal/service/s3/bucket_lifecycle_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -1028,7 +1031,7 @@ func TestAccS3BucketLifecycleConfiguration_Update_filterWithAndToFilterWithPrefi func testAccCheckBucketLifecycleConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_lifecycle_configuration" { @@ -1080,7 +1083,7 @@ func testAccCheckBucketLifecycleConfigurationExists(ctx context.Context, n strin return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_logging.go b/internal/service/s3/bucket_logging.go index fc1b9569a8f..f371ef449d6 100644 --- a/internal/service/s3/bucket_logging.go +++ b/internal/service/s3/bucket_logging.go @@ -1,8 +1,11 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( "context" - "fmt" + "errors" "log" "time" @@ -10,9 +13,11 @@ import ( "github.com/aws/aws-sdk-go/service/s3" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -97,7 +102,8 @@ func ResourceBucketLogging() *schema.Resource { } func resourceBucketLoggingCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -127,7 +133,7 @@ func resourceBucketLoggingCreate(ctx context.Context, d *schema.ResourceData, me }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error putting S3 bucket (%s) logging: %w", bucket, err)) + return sdkdiag.AppendErrorf(diags, "putting S3 Bucket (%s) Logging: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) @@ -136,47 +142,49 @@ func resourceBucketLoggingCreate(ctx context.Context, d *schema.ResourceData, me } func resourceBucketLoggingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { - return diag.FromErr(err) + return sdkdiag.AppendFromErr(diags, err) } - 
input := &s3.GetBucketLoggingInput{ - Bucket: aws.String(bucket), - } + outputRaw, err := tfresource.RetryWhen(ctx, 2*time.Minute, func() (interface{}, error) { + return FindBucketLoggingByID(ctx, conn, bucket, expectedBucketOwner) + }, + func(err error) (bool, error) { + if tfresource.NotFound(err) { + return true, err + } - if expectedBucketOwner != "" { - input.ExpectedBucketOwner = aws.String(expectedBucketOwner) - } + if errors.Is(err, tfresource.ErrEmptyResult) { + return true, err + } - resp, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { - return conn.GetBucketLoggingWithContext(ctx, input) - }, s3.ErrCodeNoSuchBucket) + return false, err + }) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] S3 Bucket Logging (%s) not found, removing from state", d.Id()) d.SetId("") - return nil + return diags } if err != nil { - return diag.FromErr(fmt.Errorf("error reading S3 Bucket (%s) Logging: %w", d.Id(), err)) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Logging for bucket (%s): %s", d.Id(), err) } - output, ok := resp.(*s3.GetBucketLoggingOutput) - - if !ok || output.LoggingEnabled == nil { + if errors.Is(err, tfresource.ErrEmptyResult) { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading S3 Bucket (%s) Logging: empty output", d.Id())) + return sdkdiag.AppendErrorf(diags, "reading S3 Bucket (%s) Logging: empty output", d.Id()) } log.Printf("[WARN] S3 Bucket Logging (%s) not found, removing from state", d.Id()) d.SetId("") - return nil + return diags } - loggingEnabled := output.LoggingEnabled + loggingEnabled := outputRaw.(*s3.GetBucketLoggingOutput).LoggingEnabled d.Set("bucket", d.Id()) d.Set("expected_bucket_owner", expectedBucketOwner) @@ -184,18 +192,19 @@ func resourceBucketLoggingRead(ctx context.Context, d *schema.ResourceData, meta d.Set("target_prefix", 
loggingEnabled.TargetPrefix) if err := d.Set("target_grant", flattenBucketLoggingTargetGrants(loggingEnabled.TargetGrants)); err != nil { - return diag.FromErr(fmt.Errorf("error setting target_grant: %w", err)) + return sdkdiag.AppendErrorf(diags, "setting target_grant: %s", err) } - return nil + return diags } func resourceBucketLoggingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { - return diag.FromErr(err) + return sdkdiag.AppendFromErr(diags, err) } loggingEnabled := &s3.LoggingEnabled{ @@ -223,18 +232,19 @@ func resourceBucketLoggingUpdate(ctx context.Context, d *schema.ResourceData, me }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket (%s) logging: %w", d.Id(), err)) + return sdkdiag.AppendErrorf(diags, "updating S3 bucket (%s) logging: %s", d.Id(), err) } return resourceBucketLoggingRead(ctx, d, meta) } func resourceBucketLoggingDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { - return diag.FromErr(err) + return sdkdiag.AppendFromErr(diags, err) } input := &s3.PutBucketLoggingInput{ @@ -253,7 +263,7 @@ func resourceBucketLoggingDelete(ctx context.Context, d *schema.ResourceData, me } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting S3 Bucket (%s) Logging: %w", d.Id(), err)) + return sdkdiag.AppendErrorf(diags, "deleting S3 Bucket (%s) Logging: %s", d.Id(), err) } return nil @@ -372,3 +382,32 @@ func flattenBucketLoggingTargetGrantGrantee(g *s3.Grantee) []interface{} { return []interface{}{m} } + +func FindBucketLoggingByID(ctx 
context.Context, conn *s3.S3, id, expectedBucketOwner string) (*s3.GetBucketLoggingOutput, error) { + in := &s3.GetBucketLoggingInput{ + Bucket: aws.String(id), + } + + if expectedBucketOwner != "" { + in.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + out, err := conn.GetBucketLoggingWithContext(ctx, in) + + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.LoggingEnabled == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} diff --git a/internal/service/s3/bucket_logging_test.go b/internal/service/s3/bucket_logging_test.go index e878344bacf..ed4bef52cfc 100644 --- a/internal/service/s3/bucket_logging_test.go +++ b/internal/service/s3/bucket_logging_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -373,7 +376,7 @@ func TestAccS3BucketLogging_migrate_loggingWithChange(t *testing.T) { func testAccCheckBucketLoggingDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_logging" { @@ -423,7 +426,7 @@ func testAccCheckBucketLoggingExists(ctx context.Context, resourceName string) r return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { @@ -556,7 +559,7 @@ resource "aws_s3_bucket" "test" { resource "aws_s3_bucket_ownership_controls" "test" { bucket = aws_s3_bucket.test.id rule { - object_ownership = "BucketOwnerPreferred" + object_ownership = 
"ObjectWriter" } } @@ -654,7 +657,7 @@ resource "aws_s3_bucket" "test" { resource "aws_s3_bucket_ownership_controls" "test" { bucket = aws_s3_bucket.test.id rule { - object_ownership = "BucketOwnerPreferred" + object_ownership = "ObjectWriter" } } diff --git a/internal/service/s3/bucket_metric.go b/internal/service/s3/bucket_metric.go index 0b391d555d6..b2fe2f25b11 100644 --- a/internal/service/s3/bucket_metric.go +++ b/internal/service/s3/bucket_metric.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -68,7 +71,7 @@ func ResourceBucketMetric() *schema.Resource { func resourceBucketMetricPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) name := d.Get("name").(string) @@ -119,7 +122,7 @@ func resourceBucketMetricPut(ctx context.Context, d *schema.ResourceData, meta i func resourceBucketMetricDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, name, err := BucketMetricParseID(d.Id()) if err != nil { @@ -151,7 +154,7 @@ func resourceBucketMetricDelete(ctx context.Context, d *schema.ResourceData, met func resourceBucketMetricRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, name, err := BucketMetricParseID(d.Id()) if err != nil { diff --git a/internal/service/s3/bucket_metric_test.go b/internal/service/s3/bucket_metric_test.go index 91ff5b5404a..80042961700 100644 --- a/internal/service/s3/bucket_metric_test.go +++ b/internal/service/s3/bucket_metric_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -578,7 +581,7 @@ func TestAccS3BucketMetric_withFilterSingleTag(t *testing.T) { func testAccCheckBucketMetricDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_metric" { @@ -629,7 +632,7 @@ func testAccCheckBucketMetricsExistsConfig(ctx context.Context, n string, res *s return fmt.Errorf("No S3 bucket metrics configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, name, err := tfs3.BucketMetricParseID(rs.Primary.ID) if err != nil { return err diff --git a/internal/service/s3/bucket_notification.go b/internal/service/s3/bucket_notification.go index c1c8b6cddaf..2020a1d77d7 100644 --- a/internal/service/s3/bucket_notification.go +++ b/internal/service/s3/bucket_notification.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -144,7 +147,7 @@ func ResourceBucketNotification() *schema.Resource { func resourceBucketNotificationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) // EventBridge @@ -361,7 +364,7 @@ func resourceBucketNotificationPut(ctx context.Context, d *schema.ResourceData, func resourceBucketNotificationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) i := &s3.PutBucketNotificationConfigurationInput{ Bucket: aws.String(d.Id()), @@ -380,7 +383,7 @@ func resourceBucketNotificationDelete(ctx context.Context, d *schema.ResourceDat func resourceBucketNotificationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) notificationConfigs, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ Bucket: aws.String(d.Id()), diff --git a/internal/service/s3/bucket_notification_test.go b/internal/service/s3/bucket_notification_test.go index 6958d25c900..62c3cf7ceb1 100644 --- a/internal/service/s3/bucket_notification_test.go +++ b/internal/service/s3/bucket_notification_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -294,7 +297,7 @@ func TestAccS3BucketNotification_update(t *testing.T) { func testAccCheckBucketNotificationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_notification" { @@ -338,7 +341,7 @@ func testAccCheckBucketTopicNotification(ctx context.Context, n, i, t string, ev return func(s *terraform.State) error { rs := s.RootModule().Resources[n] topicArn := s.RootModule().Resources[t].Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ @@ -394,7 +397,7 @@ func testAccCheckBucketTopicNotification(ctx context.Context, n, i, t string, ev func testAccCheckBucketEventBridgeNotification(ctx context.Context, n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ @@ -420,7 +423,7 @@ func testAccCheckBucketQueueNotification(ctx context.Context, n, i, t string, ev return func(s *terraform.State) error { rs := s.RootModule().Resources[n] queueArn := s.RootModule().Resources[t].Primary.Attributes["arn"] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) err := 
retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ @@ -477,7 +480,7 @@ func testAccCheckBucketLambdaFunctionConfiguration(ctx context.Context, n, i, t return func(s *terraform.State) error { rs := s.RootModule().Resources[n] funcArn := s.RootModule().Resources[t].Primary.Attributes["arn"] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { out, err := conn.GetBucketNotificationConfigurationWithContext(ctx, &s3.GetBucketNotificationConfigurationRequest{ diff --git a/internal/service/s3/bucket_object.go b/internal/service/s3/bucket_object.go index 7d536a9fe68..083ecac3ef7 100644 --- a/internal/service/s3/bucket_object.go +++ b/internal/service/s3/bucket_object.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 // WARNING: This code is DEPRECATED and will be removed in a future release!! 
@@ -189,6 +192,8 @@ func ResourceBucketObject() *schema.Resource { Optional: true, }, }, + + DeprecationMessage: `use the aws_s3_object resource instead`, } } @@ -199,7 +204,7 @@ func resourceBucketObjectCreate(ctx context.Context, d *schema.ResourceData, met func resourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -272,7 +277,7 @@ func resourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, meta return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): unable to convert tags", bucket, key) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } @@ -283,7 +288,7 @@ func resourceBucketObjectUpdate(ctx context.Context, d *schema.ResourceData, met return append(diags, resourceBucketObjectUpload(ctx, d, meta)...) 
} - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -351,7 +356,7 @@ func resourceBucketObjectUpdate(ctx context.Context, d *schema.ResourceData, met func resourceBucketObjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -395,7 +400,7 @@ func resourceBucketObjectImport(ctx context.Context, d *schema.ResourceData, met func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) uploader := s3manager.NewUploaderWithClient(conn) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig tags := defaultTagsConfig.MergeTags(tftags.New(ctx, d.Get("tags").(map[string]interface{}))) @@ -406,11 +411,11 @@ func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, met source := v.(string) path, err := homedir.Expand(source) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error expanding homedir in source (%s): %s", source, err) + return sdkdiag.AppendErrorf(diags, "expanding homedir in source (%s): %s", source, err) } file, err := os.Open(path) if err != nil { - return sdkdiag.AppendErrorf(diags, "Error opening S3 object source (%s): %s", path, err) + return sdkdiag.AppendErrorf(diags, "opening S3 object source (%s): %s", path, err) } body = file @@ -509,7 +514,7 @@ func resourceBucketObjectUpload(ctx context.Context, d *schema.ResourceData, met } if _, err := uploader.Upload(input); err != nil { - return sdkdiag.AppendErrorf(diags, "Error uploading object to S3 bucket (%s): %s", bucket, err) + return sdkdiag.AppendErrorf(diags, "uploading object to S3 
bucket (%s): %s", bucket, err) } d.SetId(key) @@ -521,7 +526,7 @@ func resourceBucketObjectSetKMS(ctx context.Context, d *schema.ResourceData, met // Only set non-default KMS key ID (one that doesn't match default) if sseKMSKeyId != nil { // retrieve S3 KMS Default Master Key - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) keyMetadata, err := kms.FindKeyByID(ctx, conn, DefaultKMSKeyAlias) if err != nil { return fmt.Errorf("Failed to describe default S3 KMS key (%s): %s", DefaultKMSKeyAlias, err) diff --git a/internal/service/s3/bucket_object_data_source.go b/internal/service/s3/bucket_object_data_source.go index 89a529d3221..414cfb07605 100644 --- a/internal/service/s3/bucket_object_data_source.go +++ b/internal/service/s3/bucket_object_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 // WARNING: This code is DEPRECATED and will be removed in a future release!! @@ -130,12 +133,14 @@ func DataSourceBucketObject() *schema.Resource { "tags": tftags.TagsSchemaComputed(), }, + + DeprecationMessage: `use the aws_s3_object data source instead`, } } func dataSourceBucketObjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig bucket := d.Get("bucket").(string) diff --git a/internal/service/s3/bucket_object_data_source_test.go b/internal/service/s3/bucket_object_data_source_test.go index 1d3f3221a69..7f022bc21a4 100644 --- a/internal/service/s3/bucket_object_data_source_test.go +++ b/internal/service/s3/bucket_object_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test // WARNING: This code is DEPRECATED and will be removed in a future release!! 
diff --git a/internal/service/s3/bucket_object_lock_configuration.go b/internal/service/s3/bucket_object_lock_configuration.go index 3c75d0b26a1..4d62e015ba7 100644 --- a/internal/service/s3/bucket_object_lock_configuration.go +++ b/internal/service/s3/bucket_object_lock_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -91,7 +94,7 @@ func ResourceBucketObjectLockConfiguration() *schema.Resource { } func resourceBucketObjectLockConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -131,7 +134,7 @@ func resourceBucketObjectLockConfigurationCreate(ctx context.Context, d *schema. } func resourceBucketObjectLockConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) @@ -162,7 +165,7 @@ func resourceBucketObjectLockConfigurationRead(ctx context.Context, d *schema.Re } func resourceBucketObjectLockConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) @@ -202,7 +205,7 @@ func resourceBucketObjectLockConfigurationUpdate(ctx context.Context, d *schema. 
} func resourceBucketObjectLockConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) diff --git a/internal/service/s3/bucket_object_lock_configuration_test.go b/internal/service/s3/bucket_object_lock_configuration_test.go index ef0166762cf..180d043f292 100644 --- a/internal/service/s3/bucket_object_lock_configuration_test.go +++ b/internal/service/s3/bucket_object_lock_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -209,7 +212,7 @@ func TestAccS3BucketObjectLockConfiguration_noRule(t *testing.T) { func testAccCheckBucketObjectLockConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_object_lock_configuration" { @@ -256,7 +259,7 @@ func testAccCheckBucketObjectLockConfigurationExists(ctx context.Context, resour return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) _, err = tfs3.FindObjectLockConfiguration(ctx, conn, bucket, expectedBucketOwner) diff --git a/internal/service/s3/bucket_object_test.go b/internal/service/s3/bucket_object_test.go index 0d0b0fbca9f..f857d6054f6 100644 --- a/internal/service/s3/bucket_object_test.go +++ b/internal/service/s3/bucket_object_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test // WARNING: This code is DEPRECATED and will be removed in a future release!! 
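Several hunks above migrate error handling from `diag.FromErr(fmt.Errorf(...))` to `sdkdiag.AppendErrorf(diags, ...)`, accumulating diagnostics in a slice rather than returning a fresh one at each failure. A stdlib-only sketch of the accumulation shape; `Diagnostic`, `Diagnostics`, and `appendErrorf` here are simplified stand-ins for the terraform-plugin-sdk `diag` types and the provider's `sdkdiag` helper:

```go
package main

import "fmt"

// Diagnostic and Diagnostics mimic the plugin SDK's diag types.
type Diagnostic struct {
	Severity string
	Summary  string
}

type Diagnostics []Diagnostic

// appendErrorf mirrors the sdkdiag.AppendErrorf pattern: format an
// error diagnostic and append it, so warnings collected earlier in the
// same CRUD function are preserved alongside the error.
func appendErrorf(diags Diagnostics, format string, args ...interface{}) Diagnostics {
	return append(diags, Diagnostic{
		Severity: "Error",
		Summary:  fmt.Sprintf(format, args...),
	})
}

func main() {
	var diags Diagnostics
	diags = append(diags, Diagnostic{Severity: "Warning", Summary: "deprecated argument"})
	diags = appendErrorf(diags, "putting S3 Bucket (%s) Logging: %s", "my-bucket", "access denied")
	for _, d := range diags {
		fmt.Printf("%s: %s\n", d.Severity, d.Summary)
	}
}
```

This also explains the accompanying message rewording ("error putting…" becoming "putting…"): the diagnostics framework already prefixes entries with their severity, so a leading "error" in the summary would be redundant.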
@@ -1378,7 +1381,7 @@ func testAccCheckBucketObjectVersionIdEquals(first, second *s3.GetObjectOutput) func testAccCheckBucketObjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_object" { @@ -1413,7 +1416,7 @@ func testAccCheckBucketObjectExists(ctx context.Context, n string, obj *s3.GetOb return fmt.Errorf("No S3 Object ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) input := &s3.GetObjectInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), @@ -1472,7 +1475,7 @@ func testAccCheckBucketObjectBody(obj *s3.GetObjectOutput, want string) resource func testAccCheckBucketObjectACL(ctx context.Context, n string, expectedPerms []string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := conn.GetObjectAclWithContext(ctx, &s3.GetObjectAclInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), @@ -1500,7 +1503,7 @@ func testAccCheckBucketObjectACL(ctx context.Context, n string, expectedPerms [] func testAccCheckBucketObjectStorageClass(ctx context.Context, n, expectedClass string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := tfs3.FindObjectByThreePartKey(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], "") @@ -1527,7 +1530,7 @@ func testAccCheckBucketObjectStorageClass(ctx context.Context, n, expectedClass func 
testAccCheckBucketObjectSSE(ctx context.Context, n, expectedSSE string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := tfs3.FindObjectByThreePartKey(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], "") @@ -1568,7 +1571,7 @@ func testAccBucketObjectCreateTempFile(t *testing.T, data string) string { func testAccCheckBucketObjectUpdateTags(ctx context.Context, n string, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) return tfs3.ObjectUpdateTags(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], oldTags, newTags) } @@ -1577,7 +1580,7 @@ func testAccCheckBucketObjectUpdateTags(ctx context.Context, n string, oldTags, func testAccCheckBucketObjectCheckTags(ctx context.Context, n string, expectedTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) got, err := tfs3.ObjectListTags(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"]) if err != nil { diff --git a/internal/service/s3/bucket_objects_data_source.go b/internal/service/s3/bucket_objects_data_source.go index 620f61dfaa7..ffa982ca3bf 100644 --- a/internal/service/s3/bucket_objects_data_source.go +++ b/internal/service/s3/bucket_objects_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 // WARNING: This code is DEPRECATED and will be removed in a future release!! 
@@ -67,12 +70,14 @@ func DataSourceBucketObjects() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, }, }, + + DeprecationMessage: `use the aws_s3_objects data source instead`, } } func dataSourceBucketObjectsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) prefix := d.Get("prefix").(string) diff --git a/internal/service/s3/bucket_objects_data_source_test.go b/internal/service/s3/bucket_objects_data_source_test.go index f815abc2e3c..71a952321d8 100644 --- a/internal/service/s3/bucket_objects_data_source_test.go +++ b/internal/service/s3/bucket_objects_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test // WARNING: This code is DEPRECATED and will be removed in a future release!! diff --git a/internal/service/s3/bucket_ownership_controls.go b/internal/service/s3/bucket_ownership_controls.go index b39dea8cd32..cde57cfd5b4 100644 --- a/internal/service/s3/bucket_ownership_controls.go +++ b/internal/service/s3/bucket_ownership_controls.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -56,7 +59,7 @@ func ResourceBucketOwnershipControls() *schema.Resource { func resourceBucketOwnershipControlsCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) @@ -80,7 +83,7 @@ func resourceBucketOwnershipControlsCreate(ctx context.Context, d *schema.Resour func resourceBucketOwnershipControlsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := &s3.GetBucketOwnershipControlsInput{ Bucket: aws.String(d.Id()), @@ -123,7 +126,7 @@ func resourceBucketOwnershipControlsRead(ctx context.Context, d *schema.Resource func resourceBucketOwnershipControlsUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := &s3.PutBucketOwnershipControlsInput{ Bucket: aws.String(d.Id()), @@ -143,7 +146,7 @@ func resourceBucketOwnershipControlsUpdate(ctx context.Context, d *schema.Resour func resourceBucketOwnershipControlsDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := &s3.DeleteBucketOwnershipControlsInput{ Bucket: aws.String(d.Id()), diff --git a/internal/service/s3/bucket_ownership_controls_test.go b/internal/service/s3/bucket_ownership_controls_test.go index a848e3bd17f..258b2255933 100644 --- a/internal/service/s3/bucket_ownership_controls_test.go +++ b/internal/service/s3/bucket_ownership_controls_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -132,7 +135,7 @@ func TestAccS3BucketOwnershipControls_Rule_objectOwnership(t *testing.T) { func testAccCheckBucketOwnershipControlsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_ownership_controls" { @@ -175,7 +178,7 @@ func testAccCheckBucketOwnershipControlsExists(ctx context.Context, resourceName return fmt.Errorf("no resource ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) input := &s3.GetBucketOwnershipControlsInput{ Bucket: aws.String(rs.Primary.ID), diff --git a/internal/service/s3/bucket_policy.go b/internal/service/s3/bucket_policy.go index 9720dd22e31..0efdbee4b7f 100644 --- a/internal/service/s3/bucket_policy.go +++ b/internal/service/s3/bucket_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -57,7 +60,7 @@ func ResourceBucketPolicy() *schema.Resource { func resourceBucketPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) @@ -87,7 +90,7 @@ func resourceBucketPolicyPut(ctx context.Context, d *schema.ResourceData, meta i _, err = conn.PutBucketPolicyWithContext(ctx, params) } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error putting S3 policy: %s", err) + return sdkdiag.AppendErrorf(diags, "putting S3 policy: %s", err) } d.SetId(bucket) @@ -97,7 +100,7 @@ func resourceBucketPolicyPut(ctx context.Context, d *schema.ResourceData, meta i func resourceBucketPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) log.Printf("[DEBUG] S3 bucket policy, read for bucket: %s", d.Id()) pol, err := conn.GetBucketPolicyWithContext(ctx, &s3.GetBucketPolicyInput{ @@ -134,7 +137,7 @@ func resourceBucketPolicyRead(ctx context.Context, d *schema.ResourceData, meta func resourceBucketPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) @@ -148,7 +151,7 @@ func resourceBucketPolicyDelete(ctx context.Context, d *schema.ResourceData, met } if err != nil { - return sdkdiag.AppendErrorf(diags, "Error deleting S3 policy: %s", err) + return sdkdiag.AppendErrorf(diags, "deleting S3 policy: %s", err) } return diags diff --git a/internal/service/s3/bucket_policy_data_source.go b/internal/service/s3/bucket_policy_data_source.go index 13757488dd5..7badb2bf4d8 100644 --- 
a/internal/service/s3/bucket_policy_data_source.go +++ b/internal/service/s3/bucket_policy_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -34,7 +37,7 @@ func DataSourceBucketPolicy() *schema.Resource { } func dataSourceBucketPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) name := d.Get("bucket").(string) diff --git a/internal/service/s3/bucket_policy_data_source_test.go b/internal/service/s3/bucket_policy_data_source_test.go index 5e21095d7c2..7d92cfc33ea 100644 --- a/internal/service/s3/bucket_policy_data_source_test.go +++ b/internal/service/s3/bucket_policy_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -87,7 +90,7 @@ func testAccCheckBucketPolicyExists(ctx context.Context, n string, ci *s3.GetBuc return fmt.Errorf("no S3 Bucket Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) output, err := tfs3.FindBucketPolicy(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_policy_test.go b/internal/service/s3/bucket_policy_test.go index 454a194d28b..0624b94887a 100644 --- a/internal/service/s3/bucket_policy_test.go +++ b/internal/service/s3/bucket_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -470,7 +473,7 @@ func testAccCheckBucketHasPolicy(ctx context.Context, n string, expectedPolicyTe return fmt.Errorf("No S3 Bucket ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) policy, err := conn.GetBucketPolicyWithContext(ctx, &s3.GetBucketPolicyInput{ Bucket: aws.String(rs.Primary.ID), diff --git a/internal/service/s3/bucket_public_access_block.go b/internal/service/s3/bucket_public_access_block.go index 704544696a5..1e9a0f746c0 100644 --- a/internal/service/s3/bucket_public_access_block.go +++ b/internal/service/s3/bucket_public_access_block.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -63,7 +66,7 @@ func ResourceBucketPublicAccessBlock() *schema.Resource { func resourceBucketPublicAccessBlockCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) input := &s3.PutPublicAccessBlockInput{ @@ -103,7 +106,7 @@ func resourceBucketPublicAccessBlockCreate(ctx context.Context, d *schema.Resour func resourceBucketPublicAccessBlockRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := &s3.GetPublicAccessBlockInput{ Bucket: aws.String(d.Id()), @@ -165,7 +168,7 @@ func resourceBucketPublicAccessBlockRead(ctx context.Context, d *schema.Resource func resourceBucketPublicAccessBlockUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := 
&s3.PutPublicAccessBlockInput{ Bucket: aws.String(d.Id()), @@ -198,7 +201,7 @@ func resourceBucketPublicAccessBlockUpdate(ctx context.Context, d *schema.Resour func resourceBucketPublicAccessBlockDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := &s3.DeletePublicAccessBlockInput{ Bucket: aws.String(d.Id()), diff --git a/internal/service/s3/bucket_public_access_block_test.go b/internal/service/s3/bucket_public_access_block_test.go index 8803af37b31..33e3c8b1ec7 100644 --- a/internal/service/s3/bucket_public_access_block_test.go +++ b/internal/service/s3/bucket_public_access_block_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -279,7 +282,7 @@ func testAccCheckBucketPublicAccessBlockExists(ctx context.Context, n string, co return fmt.Errorf("No S3 Bucket ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) input := &s3.GetPublicAccessBlockInput{ Bucket: aws.String(rs.Primary.ID), @@ -326,7 +329,7 @@ func testAccCheckBucketPublicAccessBlockDisappears(ctx context.Context, n string return fmt.Errorf("No S3 Bucket ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) deleteInput := &s3.DeletePublicAccessBlockInput{ Bucket: aws.String(rs.Primary.ID), diff --git a/internal/service/s3/bucket_replication_configuration.go b/internal/service/s3/bucket_replication_configuration.go index 547ae944960..ff8e6ec6a5a 100644 --- a/internal/service/s3/bucket_replication_configuration.go +++ b/internal/service/s3/bucket_replication_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -307,7 +310,7 @@ func ResourceBucketReplicationConfiguration() *schema.Resource { func resourceBucketReplicationConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) @@ -346,23 +349,24 @@ func resourceBucketReplicationConfigurationCreate(ctx context.Context, d *schema d.SetId(bucket) + _, err = tfresource.RetryWhenNotFound(ctx, propagationTimeout, func() (interface{}, error) { + return FindBucketReplicationConfigurationByID(ctx, conn, d.Id()) + }) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "waiting for S3 Replication creation on bucket (%s): %s", d.Id(), err) + } + return append(diags, resourceBucketReplicationConfigurationRead(ctx, d, meta)...) } func resourceBucketReplicationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() - - input := &s3.GetBucketReplicationInput{ - Bucket: aws.String(d.Id()), - } + conn := meta.(*conns.AWSClient).S3Conn(ctx) - // Read the bucket replication configuration - output, err := retryWhenBucketNotFound(ctx, func() (interface{}, error) { - return conn.GetBucketReplicationWithContext(ctx, input) - }) + output, err := FindBucketReplicationConfigurationByID(ctx, conn, d.Id()) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, ErrCodeReplicationConfigurationNotFound, s3.ErrCodeNoSuchBucket) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] S3 Bucket Replication Configuration (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -372,13 +376,7 @@ func resourceBucketReplicationConfigurationRead(ctx context.Context, d *schema.R return sdkdiag.AppendErrorf(diags, "getting S3 Bucket Replication Configuration 
for bucket (%s): %s", d.Id(), err) } - replication, ok := output.(*s3.GetBucketReplicationOutput) - - if !ok || replication == nil || replication.ReplicationConfiguration == nil { - return sdkdiag.AppendErrorf(diags, "reading S3 Bucket Replication Configuration for bucket (%s): empty output", d.Id()) - } - - r := replication.ReplicationConfiguration + r := output.ReplicationConfiguration d.Set("bucket", d.Id()) d.Set("role", r.Role) @@ -391,7 +389,7 @@ func resourceBucketReplicationConfigurationRead(ctx context.Context, d *schema.R func resourceBucketReplicationConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) rc := &s3.ReplicationConfiguration{ Role: aws.String(d.Get("role").(string)), @@ -431,7 +429,7 @@ func resourceBucketReplicationConfigurationUpdate(ctx context.Context, d *schema func resourceBucketReplicationConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) input := &s3.DeleteBucketReplicationInput{ Bucket: aws.String(d.Id()), @@ -449,3 +447,28 @@ func resourceBucketReplicationConfigurationDelete(ctx context.Context, d *schema return diags } + +func FindBucketReplicationConfigurationByID(ctx context.Context, conn *s3.S3, id string) (*s3.GetBucketReplicationOutput, error) { + in := &s3.GetBucketReplicationInput{ + Bucket: aws.String(id), + } + + out, err := conn.GetBucketReplicationWithContext(ctx, in) + + if tfawserr.ErrCodeEquals(err, ErrCodeReplicationConfigurationNotFound, s3.ErrCodeNoSuchBucket) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil || out.ReplicationConfiguration == nil { + return nil, tfresource.NewEmptyResultError(in) 
+ } + + return out, nil +} diff --git a/internal/service/s3/bucket_replication_configuration_test.go b/internal/service/s3/bucket_replication_configuration_test.go index 2f631ee2149..b40625178cd 100644 --- a/internal/service/s3/bucket_replication_configuration_test.go +++ b/internal/service/s3/bucket_replication_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -1180,7 +1183,7 @@ func TestAccS3BucketReplicationConfiguration_migrate_withChange(t *testing.T) { // version, but for use with "same region" tests requiring only one provider. func testAccCheckBucketReplicationConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_replication_configuration" { @@ -1211,7 +1214,7 @@ func testAccCheckBucketReplicationConfigurationDestroy(ctx context.Context) reso func testAccCheckBucketReplicationConfigurationDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).S3Conn() + conn := provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_replication_configuration" { @@ -1251,21 +1254,11 @@ func testAccCheckBucketReplicationConfigurationExists(ctx context.Context, n str return fmt.Errorf("No ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() - - output, err := conn.GetBucketReplicationWithContext(ctx, &s3.GetBucketReplicationInput{ - Bucket: aws.String(rs.Primary.ID), - }) - - if err != nil { - return err - } + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) - if output == nil || 
output.ReplicationConfiguration == nil { - return fmt.Errorf("S3 Bucket Replication Configuration for bucket (%s) not found", rs.Primary.ID) - } + _, err := tfs3.FindBucketReplicationConfigurationByID(ctx, conn, rs.Primary.ID) - return nil + return err } } diff --git a/internal/service/s3/bucket_request_payment_configuration.go b/internal/service/s3/bucket_request_payment_configuration.go index f3589ce4a74..f978ff54aef 100644 --- a/internal/service/s3/bucket_request_payment_configuration.go +++ b/internal/service/s3/bucket_request_payment_configuration.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( "context" - "fmt" "log" "time" @@ -51,7 +53,7 @@ func ResourceBucketRequestPaymentConfiguration() *schema.Resource { } func resourceBucketRequestPaymentConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -72,7 +74,7 @@ func resourceBucketRequestPaymentConfigurationCreate(ctx context.Context, d *sch }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket (%s) request payment configuration: %w", bucket, err)) + return diag.Errorf("creating S3 bucket (%s) request payment configuration: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) @@ -81,7 +83,7 @@ func resourceBucketRequestPaymentConfigurationCreate(ctx context.Context, d *sch } func resourceBucketRequestPaymentConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -105,7 +107,7 @@ func resourceBucketRequestPaymentConfigurationRead(ctx 
context.Context, d *schem } if output == nil { - return diag.FromErr(fmt.Errorf("error reading S3 bucket request payment configuration (%s): empty output", d.Id())) + return diag.Errorf("reading S3 bucket request payment configuration (%s): empty output", d.Id()) } d.Set("bucket", bucket) @@ -116,7 +118,7 @@ func resourceBucketRequestPaymentConfigurationRead(ctx context.Context, d *schem } func resourceBucketRequestPaymentConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -137,14 +139,14 @@ func resourceBucketRequestPaymentConfigurationUpdate(ctx context.Context, d *sch _, err = conn.PutBucketRequestPaymentWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket request payment configuration (%s): %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket request payment configuration (%s): %s", d.Id(), err) } return resourceBucketRequestPaymentConfigurationRead(ctx, d, meta) } func resourceBucketRequestPaymentConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -171,7 +173,7 @@ func resourceBucketRequestPaymentConfigurationDelete(ctx context.Context, d *sch } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting S3 bucket request payment configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting S3 bucket request payment configuration (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/s3/bucket_request_payment_configuration_test.go b/internal/service/s3/bucket_request_payment_configuration_test.go index 138f8c0e7a9..4c9e442edf5 100644 --- 
a/internal/service/s3/bucket_request_payment_configuration_test.go +++ b/internal/service/s3/bucket_request_payment_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -172,7 +175,7 @@ func TestAccS3BucketRequestPaymentConfiguration_migrate_withChange(t *testing.T) func testAccCheckBucketRequestPaymentConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_request_payment_configuration" { @@ -222,7 +225,7 @@ func testAccCheckBucketRequestPaymentConfigurationExists(ctx context.Context, re return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_server_side_encryption_configuration.go b/internal/service/s3/bucket_server_side_encryption_configuration.go index a9e703a220b..cdee0c9bbcc 100644 --- a/internal/service/s3/bucket_server_side_encryption_configuration.go +++ b/internal/service/s3/bucket_server_side_encryption_configuration.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( "context" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" @@ -75,7 +77,7 @@ func ResourceBucketServerSideEncryptionConfiguration() *schema.Resource { } func resourceBucketServerSideEncryptionConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -100,7 +102,7 @@ func resourceBucketServerSideEncryptionConfigurationCreate(ctx context.Context, ) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket (%s) server-side encryption configuration: %w", bucket, err)) + return diag.Errorf("creating S3 bucket (%s) server-side encryption configuration: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) @@ -109,7 +111,7 @@ func resourceBucketServerSideEncryptionConfigurationCreate(ctx context.Context, } func resourceBucketServerSideEncryptionConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -139,13 +141,13 @@ func resourceBucketServerSideEncryptionConfigurationRead(ctx context.Context, d } if err != nil { - return diag.FromErr(fmt.Errorf("error reading S3 bucket server-side encryption configuration (%s): %w", d.Id(), err)) + return diag.Errorf("reading S3 bucket server-side encryption configuration (%s): %s", d.Id(), err) } output, ok := resp.(*s3.GetBucketEncryptionOutput) if !ok || output.ServerSideEncryptionConfiguration == nil { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading S3 bucket server-side encryption configuration (%s): empty output", d.Id())) + return diag.Errorf("reading S3 bucket server-side 
encryption configuration (%s): empty output", d.Id()) } log.Printf("[WARN] S3 Bucket Server-Side Encryption Configuration (%s) not found, removing from state", d.Id()) d.SetId("") @@ -157,14 +159,14 @@ func resourceBucketServerSideEncryptionConfigurationRead(ctx context.Context, d d.Set("bucket", bucket) d.Set("expected_bucket_owner", expectedBucketOwner) if err := d.Set("rule", flattenBucketServerSideEncryptionConfigurationRules(sse.Rules)); err != nil { - return diag.FromErr(fmt.Errorf("error setting rule: %w", err)) + return diag.Errorf("setting rule: %s", err) } return nil } func resourceBucketServerSideEncryptionConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -191,14 +193,14 @@ func resourceBucketServerSideEncryptionConfigurationUpdate(ctx context.Context, ) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket (%s) server-side encryption configuration: %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket (%s) server-side encryption configuration: %s", d.Id(), err) } return resourceBucketServerSideEncryptionConfigurationRead(ctx, d, meta) } func resourceBucketServerSideEncryptionConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -220,7 +222,7 @@ func resourceBucketServerSideEncryptionConfigurationDelete(ctx context.Context, } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting S3 bucket server-side encryption configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting S3 bucket server-side encryption configuration (%s): %s", d.Id(), err) } return nil diff --git 
a/internal/service/s3/bucket_server_side_encryption_configuration_test.go b/internal/service/s3/bucket_server_side_encryption_configuration_test.go index 46d1645461d..4b2567c8433 100644 --- a/internal/service/s3/bucket_server_side_encryption_configuration_test.go +++ b/internal/service/s3/bucket_server_side_encryption_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -402,7 +405,7 @@ func TestAccS3BucketServerSideEncryptionConfiguration_migrate_withChange(t *test func testAccCheckBucketServerSideEncryptionConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_server_side_encryption_configuration" { @@ -452,7 +455,7 @@ func testAccCheckBucketServerSideEncryptionConfigurationExists(ctx context.Conte return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/bucket_test.go b/internal/service/s3/bucket_test.go index 076b10bd991..19173d36834 100644 --- a/internal/service/s3/bucket_test.go +++ b/internal/service/s3/bucket_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
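The request-payment and server-side-encryption resources above both build their state ID with `CreateResourceID(bucket, expectedBucketOwner)` and recover the parts with `ParseResourceID(d.Id())`, so a single string ID can carry an optional second component. A minimal sketch of that composite-ID round trip follows; the comma separator and error wording here are assumptions for illustration, not necessarily what the provider's real helpers use.

```go
package main

import (
	"fmt"
	"strings"
)

// createResourceID composes bucket and the optional expected bucket
// owner into one ID; the owner part is appended only when set.
func createResourceID(bucket, expectedBucketOwner string) string {
	if expectedBucketOwner == "" {
		return bucket
	}
	return bucket + "," + expectedBucketOwner
}

// parseResourceID splits the composite ID back into its parts,
// rejecting malformed input so Read fails fast on a bad import ID.
func parseResourceID(id string) (bucket, expectedBucketOwner string, err error) {
	parts := strings.Split(id, ",")
	switch len(parts) {
	case 1:
		return parts[0], "", nil
	case 2:
		if parts[0] == "" || parts[1] == "" {
			return "", "", fmt.Errorf("unexpected format for ID (%s)", id)
		}
		return parts[0], parts[1], nil
	default:
		return "", "", fmt.Errorf("unexpected format for ID (%s)", id)
	}
}

func main() {
	id := createResourceID("my-bucket", "123456789012")
	bucket, owner, err := parseResourceID(id)
	fmt.Println(bucket, owner, err)
}
```

Keeping both halves of the round trip in one place is what lets every CRUD handler in the diff open with the same two lines: parse the ID, then bail with a diagnostic if it is malformed.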
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -423,7 +426,7 @@ func TestAccS3Bucket_Duplicate_UsEast1AltAccount(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccBucketConfig_duplicateAltAccount(endpoints.UsEast1RegionID, bucketName), - ExpectError: regexp.MustCompile(s3.ErrCodeBucketAlreadyOwnedByYou), + ExpectError: regexp.MustCompile(s3.ErrCodeBucketAlreadyExists), }, }, }) @@ -528,7 +531,7 @@ func TestAccS3Bucket_Tags_withSystemTags(t *testing.T) { testAccCheckBucketDestroy(ctx), func(s *terraform.State) error { // Tear down CF stack. - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) requestToken := id.UniqueId() req := &cloudformation.DeleteStackInput{ @@ -1797,7 +1800,7 @@ func TestAccS3Bucket_Security_corsUpdate(t *testing.T) { return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) _, err := conn.PutBucketCorsWithContext(ctx, &s3.PutBucketCorsInput{ Bucket: aws.String(rs.Primary.ID), CORSConfiguration: &s3.CORSConfiguration{ @@ -1883,7 +1886,7 @@ func TestAccS3Bucket_Security_corsDelete(t *testing.T) { return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) _, err := conn.DeleteBucketCorsWithContext(ctx, &s3.DeleteBucketCorsInput{ Bucket: aws.String(rs.Primary.ID), }) @@ -2345,7 +2348,7 @@ func TestBucketRegionalDomainName(t *testing.T) { { Region: endpoints.UsEast1RegionID, ExpectedErrCount: 0, - ExpectedOutput: bucket + ".s3.amazonaws.com", + ExpectedOutput: bucket + fmt.Sprintf(".s3.%s.%s", endpoints.UsEast1RegionID, acctest.PartitionDNSSuffix()), }, { Region: endpoints.UsWest2RegionID, @@ -2547,7 +2550,7 @@ func testAccCheckBucketDestroy(ctx context.Context) resource.TestCheckFunc { func 
testAccCheckBucketDestroyWithProvider(ctx context.Context) acctest.TestCheckWithProviderFunc { return func(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).S3Conn() + conn := provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket" { @@ -2586,7 +2589,7 @@ func testAccCheckBucketExistsWithProvider(ctx context.Context, n string, provide return fmt.Errorf("No S3 Bucket ID is set") } - conn := providerF().Meta().(*conns.AWSClient).S3Conn() + conn := providerF().Meta().(*conns.AWSClient).S3Conn(ctx) return tfs3.FindBucket(ctx, conn, rs.Primary.ID) } @@ -2595,7 +2598,7 @@ func testAccCheckBucketExistsWithProvider(ctx context.Context, n string, provide func testAccCheckBucketAddObjects(ctx context.Context, n string, keys ...string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ConnURICleaningDisabled() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ConnURICleaningDisabled(ctx) for _, key := range keys { _, err := conn.PutObjectWithContext(ctx, &s3.PutObjectInput{ @@ -2615,7 +2618,7 @@ func testAccCheckBucketAddObjects(ctx context.Context, n string, keys ...string) func testAccCheckBucketAddObjectsWithLegalHold(ctx context.Context, n string, keys ...string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, key := range keys { _, err := conn.PutObjectWithContext(ctx, &s3.PutObjectInput{ @@ -2636,7 +2639,7 @@ func testAccCheckBucketAddObjectsWithLegalHold(ctx context.Context, n string, ke // Create an S3 bucket via a CF stack so that it has system tags. 
func testAccCheckBucketCreateViaCloudFormation(ctx context.Context, n string, stackID *string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) stackName := sdkacctest.RandomWithPrefix("tf-acc-test-s3tags") templateBody := fmt.Sprintf(`{ "Resources": { @@ -2679,7 +2682,7 @@ func testAccCheckBucketCreateViaCloudFormation(ctx context.Context, n string, st func testAccCheckBucketTagKeys(ctx context.Context, n string, keys ...string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) got, err := tfs3.BucketListTags(ctx, conn, rs.Primary.Attributes["bucket"]) if err != nil { @@ -2731,7 +2734,7 @@ func testAccCheckBucketWebsiteEndpoint(resourceName string, attributeName string func testAccCheckBucketUpdateTags(ctx context.Context, n string, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) return tfs3.BucketUpdateTags(ctx, conn, rs.Primary.Attributes["bucket"], oldTags, newTags) } @@ -2740,7 +2743,7 @@ func testAccCheckBucketUpdateTags(ctx context.Context, n string, oldTags, newTag func testAccCheckBucketCheckTags(ctx context.Context, n string, expectedTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) got, err := tfs3.BucketListTags(ctx, conn, rs.Primary.Attributes["bucket"]) if err != nil { diff --git 
a/internal/service/s3/bucket_versioning.go b/internal/service/s3/bucket_versioning.go index 75c7fdc6386..0424313dcff 100644 --- a/internal/service/s3/bucket_versioning.go +++ b/internal/service/s3/bucket_versioning.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -91,7 +94,7 @@ func ResourceBucketVersioning() *schema.Resource { } func resourceBucketVersioningCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -121,7 +124,7 @@ func resourceBucketVersioningCreate(ctx context.Context, d *schema.ResourceData, }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket versioning for %s: %w", bucket, err)) + return diag.Errorf("creating S3 bucket versioning for %s: %s", bucket, err) } } else { log.Printf("[DEBUG] Creating S3 bucket versioning for unversioned bucket: %s", bucket) @@ -133,7 +136,7 @@ func resourceBucketVersioningCreate(ctx context.Context, d *schema.ResourceData, } func resourceBucketVersioningRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) @@ -151,20 +154,20 @@ func resourceBucketVersioningRead(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.Errorf("error getting S3 bucket versioning (%s): %s", d.Id(), err) + return diag.Errorf("getting S3 bucket versioning (%s): %s", d.Id(), err) } d.Set("bucket", bucket) d.Set("expected_bucket_owner", expectedBucketOwner) if err := d.Set("versioning_configuration", flattenBucketVersioningConfiguration(output)); err != nil { - return diag.FromErr(fmt.Errorf("error setting 
versioning_configuration: %w", err)) + return diag.Errorf("setting versioning_configuration: %s", err) } return nil } func resourceBucketVersioningUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -187,14 +190,14 @@ func resourceBucketVersioningUpdate(ctx context.Context, d *schema.ResourceData, _, err = conn.PutBucketVersioningWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket versioning (%s): %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket versioning (%s): %s", d.Id(), err) } return resourceBucketVersioningRead(ctx, d, meta) } func resourceBucketVersioningDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) if v := expandBucketVersioningConfiguration(d.Get("versioning_configuration").([]interface{})); v != nil && aws.StringValue(v.Status) == BucketVersioningStatusDisabled { log.Printf("[DEBUG] Removing S3 bucket versioning for unversioned bucket (%s) from state", d.Id()) @@ -231,7 +234,7 @@ func resourceBucketVersioningDelete(ctx context.Context, d *schema.ResourceData, } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting S3 bucket versioning (%s): %w", d.Id(), err)) + return diag.Errorf("deleting S3 bucket versioning (%s): %s", d.Id(), err) } return nil diff --git a/internal/service/s3/bucket_versioning_test.go b/internal/service/s3/bucket_versioning_test.go index 66cffe32d64..47c0f37afa9 100644 --- a/internal/service/s3/bucket_versioning_test.go +++ b/internal/service/s3/bucket_versioning_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -482,7 +485,7 @@ func TestAccS3BucketVersioning_Status_suspendedToDisabled(t *testing.T) { func testAccCheckBucketVersioningDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_versioning" { @@ -523,7 +526,7 @@ func testAccCheckBucketVersioningExists(ctx context.Context, resourceName string return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) input := &s3.GetBucketVersioningInput{ Bucket: aws.String(rs.Primary.ID), diff --git a/internal/service/s3/bucket_website_configuration.go b/internal/service/s3/bucket_website_configuration.go index 909cb50c6d8..40d87e7bfba 100644 --- a/internal/service/s3/bucket_website_configuration.go +++ b/internal/service/s3/bucket_website_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -174,7 +177,7 @@ func ResourceBucketWebsiteConfiguration() *schema.Resource { } func resourceBucketWebsiteConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) expectedBucketOwner := d.Get("expected_bucket_owner").(string) @@ -200,7 +203,7 @@ func resourceBucketWebsiteConfigurationCreate(ctx context.Context, d *schema.Res if v, ok := d.GetOk("routing_rules"); ok { var unmarshalledRules []*s3.RoutingRule if err := json.Unmarshal([]byte(v.(string)), &unmarshalledRules); err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 Bucket (%s) website configuration: %w", bucket, err)) + return diag.Errorf("creating S3 Bucket (%s) website configuration: %s", bucket, err) } websiteConfig.RoutingRules = unmarshalledRules } @@ -219,7 +222,7 @@ func resourceBucketWebsiteConfigurationCreate(ctx context.Context, d *schema.Res }, s3.ErrCodeNoSuchBucket) if err != nil { - return diag.FromErr(fmt.Errorf("error creating S3 bucket (%s) website configuration: %w", bucket, err)) + return diag.Errorf("creating S3 bucket (%s) website configuration: %s", bucket, err) } d.SetId(CreateResourceID(bucket, expectedBucketOwner)) @@ -228,7 +231,7 @@ func resourceBucketWebsiteConfigurationCreate(ctx context.Context, d *schema.Res } func resourceBucketWebsiteConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -253,7 +256,7 @@ func resourceBucketWebsiteConfigurationRead(ctx context.Context, d *schema.Resou if output == nil { if d.IsNewResource() { - return diag.FromErr(fmt.Errorf("error reading S3 bucket website configuration (%s): empty output", 
d.Id())) + return diag.Errorf("reading S3 bucket website configuration (%s): empty output", d.Id()) } log.Printf("[WARN] S3 Bucket Website Configuration (%s) not found, removing from state", d.Id()) d.SetId("") @@ -264,25 +267,25 @@ func resourceBucketWebsiteConfigurationRead(ctx context.Context, d *schema.Resou d.Set("expected_bucket_owner", expectedBucketOwner) if err := d.Set("error_document", flattenBucketWebsiteConfigurationErrorDocument(output.ErrorDocument)); err != nil { - return diag.FromErr(fmt.Errorf("error setting error_document: %w", err)) + return diag.Errorf("setting error_document: %s", err) } if err := d.Set("index_document", flattenBucketWebsiteConfigurationIndexDocument(output.IndexDocument)); err != nil { - return diag.FromErr(fmt.Errorf("error setting index_document: %w", err)) + return diag.Errorf("setting index_document: %s", err) } if err := d.Set("redirect_all_requests_to", flattenBucketWebsiteConfigurationRedirectAllRequestsTo(output.RedirectAllRequestsTo)); err != nil { - return diag.FromErr(fmt.Errorf("error setting redirect_all_requests_to: %w", err)) + return diag.Errorf("setting redirect_all_requests_to: %s", err) } if err := d.Set("routing_rule", flattenBucketWebsiteConfigurationRoutingRules(output.RoutingRules)); err != nil { - return diag.FromErr(fmt.Errorf("error setting routing_rule: %w", err)) + return diag.Errorf("setting routing_rule: %s", err) } if output.RoutingRules != nil { rr, err := normalizeRoutingRules(output.RoutingRules) if err != nil { - return diag.FromErr(fmt.Errorf("error while marshaling routing rules: %w", err)) + return diag.Errorf("while marshaling routing rules: %s", err) } d.Set("routing_rules", rr) } else { @@ -304,7 +307,7 @@ func resourceBucketWebsiteConfigurationRead(ctx context.Context, d *schema.Resou } func resourceBucketWebsiteConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := 
meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -331,7 +334,7 @@ func resourceBucketWebsiteConfigurationUpdate(ctx context.Context, d *schema.Res } else { var unmarshalledRules []*s3.RoutingRule if err := json.Unmarshal([]byte(d.Get("routing_rules").(string)), &unmarshalledRules); err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 Bucket (%s) website configuration: %w", bucket, err)) + return diag.Errorf("updating S3 Bucket (%s) website configuration: %s", bucket, err) } websiteConfig.RoutingRules = unmarshalledRules } @@ -344,7 +347,7 @@ func resourceBucketWebsiteConfigurationUpdate(ctx context.Context, d *schema.Res if v, ok := d.GetOk("routing_rules"); ok { var unmarshalledRules []*s3.RoutingRule if err := json.Unmarshal([]byte(v.(string)), &unmarshalledRules); err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 Bucket (%s) website configuration: %w", bucket, err)) + return diag.Errorf("updating S3 Bucket (%s) website configuration: %s", bucket, err) } websiteConfig.RoutingRules = unmarshalledRules } @@ -362,14 +365,14 @@ func resourceBucketWebsiteConfigurationUpdate(ctx context.Context, d *schema.Res _, err = conn.PutBucketWebsiteWithContext(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating S3 bucket website configuration (%s): %w", d.Id(), err)) + return diag.Errorf("updating S3 bucket website configuration (%s): %s", d.Id(), err) } return resourceBucketWebsiteConfigurationRead(ctx, d, meta) } func resourceBucketWebsiteConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) if err != nil { @@ -391,14 +394,14 @@ func resourceBucketWebsiteConfigurationDelete(ctx context.Context, d *schema.Res } if err != nil { - return diag.FromErr(fmt.Errorf("error 
deleting S3 bucket website configuration (%s): %w", d.Id(), err)) + return diag.Errorf("deleting S3 bucket website configuration (%s): %s", d.Id(), err) } return nil } func resourceBucketWebsiteConfigurationWebsiteEndpoint(ctx context.Context, client *conns.AWSClient, bucket, expectedBucketOwner string) (*S3Website, error) { - conn := client.S3Conn() + conn := client.S3Conn(ctx) input := &s3.GetBucketLocationInput{ Bucket: aws.String(bucket), @@ -410,7 +413,7 @@ func resourceBucketWebsiteConfigurationWebsiteEndpoint(ctx context.Context, clie output, err := conn.GetBucketLocationWithContext(ctx, input) if err != nil { - return nil, fmt.Errorf("error getting S3 Bucket (%s) Location: %w", bucket, err) + return nil, fmt.Errorf("getting S3 Bucket (%s) Location: %w", bucket, err) } var region string diff --git a/internal/service/s3/bucket_website_configuration_test.go b/internal/service/s3/bucket_website_configuration_test.go index bf47e5d8894..1020b1861d6 100644 --- a/internal/service/s3/bucket_website_configuration_test.go +++ b/internal/service/s3/bucket_website_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -536,7 +539,7 @@ func TestAccS3BucketWebsiteConfiguration_migrate_websiteWithRoutingRuleWithChang func testAccCheckBucketWebsiteConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_bucket_website_configuration" { @@ -586,7 +589,7 @@ func testAccCheckBucketWebsiteConfigurationExists(ctx context.Context, resourceN return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) if err != nil { diff --git a/internal/service/s3/canonical_user_id_data_source.go b/internal/service/s3/canonical_user_id_data_source.go index 9ab9dc9636a..6d083a467ea 100644 --- a/internal/service/s3/canonical_user_id_data_source.go +++ b/internal/service/s3/canonical_user_id_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -28,7 +31,7 @@ func DataSourceCanonicalUserID() *schema.Resource { func dataSourceCanonicalUserIDRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) log.Printf("[DEBUG] Reading S3 Buckets") diff --git a/internal/service/s3/canonical_user_id_data_source_test.go b/internal/service/s3/canonical_user_id_data_source_test.go index 69b127735ab..c0da297edb8 100644 --- a/internal/service/s3/canonical_user_id_data_source_test.go +++ b/internal/service/s3/canonical_user_id_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( diff --git a/internal/service/s3/consts.go b/internal/service/s3/consts.go index 272dfe94e2c..607199431ea 100644 --- a/internal/service/s3/consts.go +++ b/internal/service/s3/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 const ( diff --git a/internal/service/s3/delete.go b/internal/service/s3/delete.go index 6ca87c588d5..287f7cf45e5 100644 --- a/internal/service/s3/delete.go +++ b/internal/service/s3/delete.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/delete_test.go b/internal/service/s3/delete_test.go index f0a9b13932b..0bbce64ca1b 100644 --- a/internal/service/s3/delete_test.go +++ b/internal/service/s3/delete_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( diff --git a/internal/service/s3/enum.go b/internal/service/s3/enum.go index 235479e3354..9e740875e59 100644 --- a/internal/service/s3/enum.go +++ b/internal/service/s3/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/errors.go b/internal/service/s3/errors.go index 35780d68137..0fc97b3e831 100644 --- a/internal/service/s3/errors.go +++ b/internal/service/s3/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 // Error code constants missing from AWS Go SDK: diff --git a/internal/service/s3/flex.go b/internal/service/s3/flex.go index 5e52950bab3..1ce441db303 100644 --- a/internal/service/s3/flex.go +++ b/internal/service/s3/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -9,8 +12,8 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/s3" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" ) func ExpandReplicationRuleDestinationAccessControlTranslation(l []interface{}) *s3.AccessControlTranslation { @@ -203,7 +206,7 @@ func ExpandLifecycleRuleExpiration(l []interface{}) (*s3.LifecycleExpiration, er if v, ok := m["date"].(string); ok && v != "" { t, err := time.Parse(time.RFC3339, v) if err != nil { - return nil, fmt.Errorf("error parsing S3 Bucket Lifecycle Rule Expiration date: %w", err) + return nil, fmt.Errorf("parsing S3 Bucket Lifecycle Rule Expiration date: %w", err) } result.Date = aws.Time(t) } @@ -378,7 +381,7 @@ func ExpandLifecycleRuleTransitions(l []interface{}) ([]*s3.Transition, error) { if v, ok := tfMap["date"].(string); ok && v != "" { t, err := time.Parse(time.RFC3339, v) if err != nil { - return nil, fmt.Errorf("error parsing S3 Bucket Lifecycle Rule Transition date: %w", err) + return nil, fmt.Errorf("parsing S3 Bucket Lifecycle Rule Transition date: %w", err) } transition.Date = aws.Time(t) } diff --git a/internal/service/s3/flex_test.go b/internal/service/s3/flex_test.go index f542a8ca63d..63309b58949 100644 --- a/internal/service/s3/flex_test.go +++ b/internal/service/s3/flex_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/generate.go b/internal/service/s3/generate.go index a696f4e3649..dbb5e5c3dd5 100644 --- a/internal/service/s3/generate.go +++ b/internal/service/s3/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package s3 diff --git a/internal/service/s3/hosted_zones.go b/internal/service/s3/hosted_zones.go index 6b9dd3f1eee..c15954d6841 100644 --- a/internal/service/s3/hosted_zones.go +++ b/internal/service/s3/hosted_zones.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/id.go b/internal/service/s3/id.go index 23e67789579..3d5d03dadd3 100644 --- a/internal/service/s3/id.go +++ b/internal/service/s3/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/id_test.go b/internal/service/s3/id_test.go index 3b16539e0e7..043760d7e63 100644 --- a/internal/service/s3/id_test.go +++ b/internal/service/s3/id_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( diff --git a/internal/service/s3/object.go b/internal/service/s3/object.go index 1ef5d0c511c..e7bbdf0c4bf 100644 --- a/internal/service/s3/object.go +++ b/internal/service/s3/object.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -56,7 +59,6 @@ func ResourceObject() *schema.Resource { Schema: map[string]*schema.Schema{ "acl": { Type: schema.TypeString, - Default: s3.ObjectCannedACLPrivate, Optional: true, ValidateFunc: validation.StringInSlice(s3.ObjectCannedACL_Values(), false), }, @@ -198,7 +200,7 @@ func resourceObjectCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceObjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -272,7 +274,7 @@ func resourceObjectRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): unable to convert tags", bucket, key) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } @@ -283,7 +285,7 @@ func resourceObjectUpdate(ctx context.Context, d *schema.ResourceData, meta inte return append(diags, resourceObjectUpload(ctx, d, meta)...) 
} - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -351,7 +353,7 @@ func resourceObjectUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceObjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -395,7 +397,7 @@ func resourceObjectImport(ctx context.Context, d *schema.ResourceData, meta inte func resourceObjectUpload(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) uploader := s3manager.NewUploaderWithClient(conn) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig tags := defaultTagsConfig.MergeTags(tftags.New(ctx, d.Get("tags").(map[string]interface{}))) @@ -440,12 +442,15 @@ func resourceObjectUpload(ctx context.Context, d *schema.ResourceData, meta inte key := d.Get("key").(string) input := &s3manager.UploadInput{ - ACL: aws.String(d.Get("acl").(string)), Body: body, Bucket: aws.String(bucket), Key: aws.String(key), } + if v, ok := d.GetOk("acl"); ok { + input.ACL = aws.String(v.(string)) + } + if v, ok := d.GetOk("storage_class"); ok { input.StorageClass = aws.String(v.(string)) } @@ -521,7 +526,7 @@ func resourceObjectSetKMS(ctx context.Context, d *schema.ResourceData, meta inte // Only set non-default KMS key ID (one that doesn't match default) if sseKMSKeyId != nil { // retrieve S3 KMS Default Master Key - conn := meta.(*conns.AWSClient).KMSConn() + conn := meta.(*conns.AWSClient).KMSConn(ctx) keyMetadata, err := kms.FindKeyByID(ctx, conn, DefaultKMSKeyAlias) if err != nil { return fmt.Errorf("Failed to describe default S3 KMS key (%s): %s", 
DefaultKMSKeyAlias, err) diff --git a/internal/service/s3/object_copy.go b/internal/service/s3/object_copy.go index 0b915f7437d..29548922f58 100644 --- a/internal/service/s3/object_copy.go +++ b/internal/service/s3/object_copy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -37,7 +40,6 @@ func ResourceObjectCopy() *schema.Resource { Schema: map[string]*schema.Schema{ "acl": { Type: schema.TypeString, - Default: s3.ObjectCannedACLPrivate, Optional: true, ValidateFunc: validation.StringInSlice(s3.ObjectCannedACL_Values(), false), ConflictsWith: []string{"grant"}, @@ -306,7 +308,7 @@ func resourceObjectCopyCreate(ctx context.Context, d *schema.ResourceData, meta func resourceObjectCopyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -375,7 +377,7 @@ func resourceObjectCopyRead(ctx context.Context, d *schema.ResourceData, meta in return sdkdiag.AppendErrorf(diags, "listing tags for S3 Bucket (%s) Object (%s): unable to convert tags", bucket, key) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } @@ -441,7 +443,7 @@ func resourceObjectCopyUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceObjectCopyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) key := d.Get("key").(string) @@ -465,7 +467,7 @@ func resourceObjectCopyDelete(ctx context.Context, d *schema.ResourceData, meta func resourceObjectCopyDoCopy(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + 
conn := meta.(*conns.AWSClient).S3Conn(ctx) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig tags := defaultTagsConfig.MergeTags(tftags.New(ctx, d.Get("tags").(map[string]interface{}))) diff --git a/internal/service/s3/object_copy_test.go b/internal/service/s3/object_copy_test.go index 7471ca6cbc2..08a00c6b3be 100644 --- a/internal/service/s3/object_copy_test.go +++ b/internal/service/s3/object_copy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -91,7 +94,7 @@ func TestAccS3ObjectCopy_BucketKeyEnabled_object(t *testing.T) { func testAccCheckObjectCopyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_object_copy" { @@ -126,7 +129,7 @@ func testAccCheckObjectCopyExists(ctx context.Context, n string) resource.TestCh return fmt.Errorf("No S3 Object ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) _, err := conn.GetObjectWithContext(ctx, &s3.GetObjectInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), Key: aws.String(rs.Primary.Attributes["key"]), diff --git a/internal/service/s3/object_data_source.go b/internal/service/s3/object_data_source.go index 28bfe109528..f11e5e520ae 100644 --- a/internal/service/s3/object_data_source.go +++ b/internal/service/s3/object_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -131,7 +134,7 @@ func DataSourceObject() *schema.Resource { func dataSourceObjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig bucket := d.Get("bucket").(string) diff --git a/internal/service/s3/object_data_source_test.go b/internal/service/s3/object_data_source_test.go index b03d13a9d5b..0b5fe078779 100644 --- a/internal/service/s3/object_data_source_test.go +++ b/internal/service/s3/object_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -452,7 +455,7 @@ func testAccCheckObjectExistsDataSource(ctx context.Context, n string, obj *s3.G return fmt.Errorf("S3 object data source ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := conn.GetObjectWithContext(ctx, &s3.GetObjectInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), Key: aws.String(rs.Primary.Attributes["key"]), diff --git a/internal/service/s3/object_test.go b/internal/service/s3/object_test.go index 06eae5b7f40..275a5957fcd 100644 --- a/internal/service/s3/object_test.go +++ b/internal/service/s3/object_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( @@ -1374,7 +1377,7 @@ func testAccCheckObjectVersionIdEquals(first, second *s3.GetObjectOutput) resour func testAccCheckObjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_object" { @@ -1409,7 +1412,7 @@ func testAccCheckObjectExists(ctx context.Context, n string, obj *s3.GetObjectOu return fmt.Errorf("No S3 Object ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) input := &s3.GetObjectInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), @@ -1468,7 +1471,7 @@ func testAccCheckObjectBody(obj *s3.GetObjectOutput, want string) resource.TestC func testAccCheckObjectACL(ctx context.Context, n string, expectedPerms []string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := conn.GetObjectAclWithContext(ctx, &s3.GetObjectAclInput{ Bucket: aws.String(rs.Primary.Attributes["bucket"]), @@ -1496,7 +1499,7 @@ func testAccCheckObjectACL(ctx context.Context, n string, expectedPerms []string func testAccCheckObjectStorageClass(ctx context.Context, n, expectedClass string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := tfs3.FindObjectByThreePartKey(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], "") @@ -1523,7 +1526,7 @@ func testAccCheckObjectStorageClass(ctx context.Context, n, 
expectedClass string func testAccCheckObjectSSE(ctx context.Context, n, expectedSSE string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) out, err := tfs3.FindObjectByThreePartKey(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], "") @@ -1564,7 +1567,7 @@ func testAccObjectCreateTempFile(t *testing.T, data string) string { func testAccCheckObjectUpdateTags(ctx context.Context, n string, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) return tfs3.ObjectUpdateTags(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"], oldTags, newTags) } @@ -1573,7 +1576,7 @@ func testAccCheckObjectUpdateTags(ctx context.Context, n string, oldTags, newTag func testAccCheckObjectCheckTags(ctx context.Context, n string, expectedTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] - conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn(ctx) got, err := tfs3.ObjectListTags(ctx, conn, rs.Primary.Attributes["bucket"], rs.Primary.Attributes["key"]) if err != nil { @@ -1847,7 +1850,8 @@ resource "aws_s3_object" "object" { func testAccObjectConfig_tags(rName, key, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { - bucket = %[1]q + bucket = %[1]q + force_destroy = true } resource "aws_s3_bucket_versioning" "test" { @@ -1875,7 +1879,8 @@ resource "aws_s3_object" "object" { func testAccObjectConfig_updatedTags(rName, key, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { - 
bucket = %[1]q + bucket = %[1]q + force_destroy = true } resource "aws_s3_bucket_versioning" "test" { @@ -1904,7 +1909,8 @@ resource "aws_s3_object" "object" { func testAccObjectConfig_noTags(rName, key, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { - bucket = %[1]q + bucket = %[1]q + force_destroy = true } resource "aws_s3_bucket_versioning" "test" { diff --git a/internal/service/s3/objects_data_source.go b/internal/service/s3/objects_data_source.go index 35ba5d0c910..fb754992a07 100644 --- a/internal/service/s3/objects_data_source.go +++ b/internal/service/s3/objects_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( @@ -69,7 +72,7 @@ func DataSourceObjects() *schema.Resource { func dataSourceObjectsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3Conn() + conn := meta.(*conns.AWSClient).S3Conn(ctx) bucket := d.Get("bucket").(string) prefix := d.Get("prefix").(string) diff --git a/internal/service/s3/objects_data_source_test.go b/internal/service/s3/objects_data_source_test.go index 28176b0a529..ae9c46b493d 100644 --- a/internal/service/s3/objects_data_source_test.go +++ b/internal/service/s3/objects_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3_test import ( diff --git a/internal/service/s3/service_package.go b/internal/service/s3/service_package.go new file mode 100644 index 00000000000..c98ba3a5920 --- /dev/null +++ b/internal/service/s3/service_package.go @@ -0,0 +1,36 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package s3 + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + s3_sdkv1 "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*s3_sdkv1.S3, error) { + sess := m["session"].(*session_sdkv1.Session) + config := &aws_sdkv1.Config{ + Endpoint: aws_sdkv1.String(m["endpoint"].(string)), + S3ForcePathStyle: aws_sdkv1.Bool(m["s3_use_path_style"].(bool)), + } + + return s3_sdkv1.New(sess.Copy(config)), nil +} + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *s3_sdkv1.S3) (*s3_sdkv1.S3, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if tfawserr.ErrMessageContains(r.Error, errCodeOperationAborted, "A conflicting conditional operation is currently in progress against this resource. 
Please try again.") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/s3/service_package_gen.go b/internal/service/s3/service_package_gen.go index b76355c4031..2e89cb39789 100644 --- a/internal/service/s3/service_package_gen.go +++ b/internal/service/s3/service_package_gen.go @@ -5,6 +5,7 @@ package s3 import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -161,4 +162,6 @@ func (p *servicePackage) ServicePackageName() string { return names.S3 } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/s3/status.go b/internal/service/s3/status.go index 494ac03ff39..a45ffad2db2 100644 --- a/internal/service/s3/status.go +++ b/internal/service/s3/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/sweep.go b/internal/service/s3/sweep.go index a0ffb726a57..f09270e66b7 100644 --- a/internal/service/s3/sweep.go +++ b/internal/service/s3/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -18,7 +21,6 @@ import ( "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -42,12 +44,12 @@ func init() { func sweepObjects(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).S3ConnURICleaningDisabled() + conn := client.S3ConnURICleaningDisabled(ctx) input := &s3.ListBucketsInput{} output, err := conn.ListBucketsWithContext(ctx, input) @@ -93,7 +95,7 @@ func sweepObjects(region string) error { }) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepables); err != nil { - errs = multierror.Append(errs, fmt.Errorf("sweeping DynamoDB Backups for %s: %w", region, err)) + if err := sweep.SweepOrchestrator(ctx, sweepables); err != nil { + errs = multierror.Append(errs, fmt.Errorf("sweeping S3 Objects for %s: %w", region, err)) } @@ -117,12 +119,12 @@ func (os objectSweeper) Delete(ctx context.Context, timeout time.Duration, optFn func sweepBuckets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).S3Conn() + conn := client.S3Conn(ctx) input := &s3.ListBucketsInput{} output, err := conn.ListBucketsWithContext(ctx, input) @@ -164,7 +166,7 @@ func sweepBuckets(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - if err := 
sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping S3 Buckets for %s: %w", region, err)) } diff --git a/internal/service/s3/tags.go b/internal/service/s3/tags.go index 640abc4c8bd..af28bb98040 100644 --- a/internal/service/s3/tags.go +++ b/internal/service/s3/tags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build !generate // +build !generate diff --git a/internal/service/s3/tags_gen.go b/internal/service/s3/tags_gen.go index 0ec84c1a235..4b1b0a5930f 100644 --- a/internal/service/s3/tags_gen.go +++ b/internal/service/s3/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*s3.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns s3 service tags from Context. +// getTagsIn returns s3 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*s3.Tag { +func getTagsIn(ctx context.Context) []*s3.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*s3.Tag { return nil } -// SetTagsOut sets s3 service tags in Context. -func SetTagsOut(ctx context.Context, tags []*s3.Tag) { +// setTagsOut sets s3 service tags in Context. +func setTagsOut(ctx context.Context, tags []*s3.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/s3/test-fixtures/populate_bucket.py b/internal/service/s3/test-fixtures/populate_bucket.py index b642284f1d2..7dda3084bd7 100755 --- a/internal/service/s3/test-fixtures/populate_bucket.py +++ b/internal/service/s3/test-fixtures/populate_bucket.py @@ -1,4 +1,7 @@ #!/usr/bin/env python3 +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: MPL-2.0 + import argparse import random diff --git a/internal/service/s3/validate.go b/internal/service/s3/validate.go index 0c5ce27b36a..f1ce5607f4b 100644 --- a/internal/service/s3/validate.go +++ b/internal/service/s3/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3/wait.go b/internal/service/s3/wait.go index ba4a2ef351e..d3cb732456d 100644 --- a/internal/service/s3/wait.go +++ b/internal/service/s3/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3 import ( diff --git a/internal/service/s3control/access_point.go b/internal/service/s3control/access_point.go index 48ef6937449..35247448ca0 100644 --- a/internal/service/s3control/access_point.go +++ b/internal/service/s3control/access_point.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -153,7 +156,7 @@ func resourceAccessPoint() *schema.Resource { } func resourceAccessPointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("account_id"); ok { @@ -222,7 +225,7 @@ func resourceAccessPointCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceAccessPointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := AccessPointParseResourceID(d.Id()) @@ -329,7 +332,7 @@ func resourceAccessPointRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceAccessPointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := 
meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := AccessPointParseResourceID(d.Id()) @@ -371,7 +374,7 @@ func resourceAccessPointUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceAccessPointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := AccessPointParseResourceID(d.Id()) diff --git a/internal/service/s3control/access_point_policy.go b/internal/service/s3control/access_point_policy.go index 3f8d75137cf..1ca7c4216f7 100644 --- a/internal/service/s3control/access_point_policy.go +++ b/internal/service/s3control/access_point_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -56,7 +59,7 @@ func resourceAccessPointPolicy() *schema.Resource { } func resourceAccessPointPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) resourceID, err := AccessPointCreateResourceID(d.Get("access_point_arn").(string)) @@ -93,7 +96,7 @@ func resourceAccessPointPolicyCreate(ctx context.Context, d *schema.ResourceData } func resourceAccessPointPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := AccessPointParseResourceID(d.Id()) @@ -130,7 +133,7 @@ func resourceAccessPointPolicyRead(ctx context.Context, d *schema.ResourceData, } func resourceAccessPointPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) 
accountID, name, err := AccessPointParseResourceID(d.Id()) @@ -159,7 +162,7 @@ func resourceAccessPointPolicyUpdate(ctx context.Context, d *schema.ResourceData } func resourceAccessPointPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := AccessPointParseResourceID(d.Id()) diff --git a/internal/service/s3control/access_point_policy_test.go b/internal/service/s3control/access_point_policy_test.go index 7fd45a0e9c5..7130501d517 100644 --- a/internal/service/s3control/access_point_policy_test.go +++ b/internal/service/s3control/access_point_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -142,7 +145,7 @@ func testAccAccessPointPolicyImportStateIdFunc(n string) resource.ImportStateIdF func testAccCheckAccessPointPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_access_point_policy" { @@ -189,7 +192,7 @@ func testAccCheckAccessPointPolicyExists(ctx context.Context, n string) resource return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) _, _, err = tfs3control.FindAccessPointPolicyAndStatusByTwoPartKey(ctx, conn, accountID, name) diff --git a/internal/service/s3control/access_point_test.go b/internal/service/s3control/access_point_test.go index 6f6f9220a82..77e984a8179 100644 --- a/internal/service/s3control/access_point_test.go +++ b/internal/service/s3control/access_point_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -309,7 +312,7 @@ func TestAccS3ControlAccessPoint_vpc(t *testing.T) { func testAccCheckAccessPointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_access_point" { @@ -356,7 +359,7 @@ func testAccCheckAccessPointExists(ctx context.Context, n string, v *s3control.G return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) output, err := tfs3control.FindAccessPointByTwoPartKey(ctx, conn, accountID, name) @@ -387,7 +390,7 @@ func testAccCheckAccessPointHasPolicy(ctx context.Context, n string, fn func() s return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) actualPolicyText, _, err := tfs3control.FindAccessPointPolicyAndStatusByTwoPartKey(ctx, conn, accountID, name) diff --git a/internal/service/s3control/account_public_access_block.go b/internal/service/s3control/account_public_access_block.go index 43f022a52fb..8a2d821ce85 100644 --- a/internal/service/s3control/account_public_access_block.go +++ b/internal/service/s3control/account_public_access_block.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -62,7 +65,7 @@ func resourceAccountPublicAccessBlock() *schema.Resource { } func resourceAccountPublicAccessBlockCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("account_id"); ok { @@ -99,7 +102,7 @@ func resourceAccountPublicAccessBlockCreate(ctx context.Context, d *schema.Resou } func resourceAccountPublicAccessBlockRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) output, err := FindPublicAccessBlockByAccountID(ctx, conn, d.Id()) @@ -123,7 +126,7 @@ func resourceAccountPublicAccessBlockRead(ctx context.Context, d *schema.Resourc } func resourceAccountPublicAccessBlockUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) publicAccessBlockConfiguration := &s3control.PublicAccessBlockConfiguration{ BlockPublicAcls: aws.Bool(d.Get("block_public_acls").(bool)), @@ -150,7 +153,7 @@ func resourceAccountPublicAccessBlockUpdate(ctx context.Context, d *schema.Resou } func resourceAccountPublicAccessBlockDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) log.Printf("[DEBUG] Deleting S3 Account Public Access Block: %s", d.Id()) _, err := conn.DeletePublicAccessBlockWithContext(ctx, &s3control.DeletePublicAccessBlockInput{ diff --git a/internal/service/s3control/account_public_access_block_data_source.go b/internal/service/s3control/account_public_access_block_data_source.go index 
5659d1b997b..e3fb50f5f71 100644 --- a/internal/service/s3control/account_public_access_block_data_source.go +++ b/internal/service/s3control/account_public_access_block_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -41,7 +44,7 @@ func dataSourceAccountPublicAccessBlock() *schema.Resource { } func dataSourceAccountPublicAccessBlockRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("account_id"); ok { diff --git a/internal/service/s3control/account_public_access_block_data_source_test.go b/internal/service/s3control/account_public_access_block_data_source_test.go index b80ee9598cf..a2c321dc81a 100644 --- a/internal/service/s3control/account_public_access_block_data_source_test.go +++ b/internal/service/s3control/account_public_access_block_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( diff --git a/internal/service/s3control/account_public_access_block_test.go b/internal/service/s3control/account_public_access_block_test.go index ba9cbb771f5..7790d24fbb5 100644 --- a/internal/service/s3control/account_public_access_block_test.go +++ b/internal/service/s3control/account_public_access_block_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -292,7 +295,7 @@ func testAccCheckAccountPublicAccessBlockExists(ctx context.Context, n string, v return fmt.Errorf("No S3 Account Public Access Block ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) output, err := tfs3control.FindPublicAccessBlockByAccountID(ctx, conn, rs.Primary.ID) @@ -308,7 +311,7 @@ func testAccCheckAccountPublicAccessBlockExists(ctx context.Context, n string, v func testAccCheckAccountPublicAccessBlockDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3_account_public_access_block" { diff --git a/internal/service/s3control/bucket.go b/internal/service/s3control/bucket.go index d2736d198cc..1e185bd5762 100644 --- a/internal/service/s3control/bucket.go +++ b/internal/service/s3control/bucket.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -80,7 +83,7 @@ func resourceBucket() *schema.Resource { } func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) bucket := d.Get("bucket").(string) input := &s3control.CreateBucketInput{ @@ -96,7 +99,7 @@ func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta inte d.SetId(aws.StringValue(output.BucketArn)) - if tags := KeyValueTags(ctx, GetTagsIn(ctx)); len(tags) > 0 { + if tags := KeyValueTags(ctx, getTagsIn(ctx)); len(tags) > 0 { if err := bucketUpdateTags(ctx, conn, d.Id(), nil, tags); err != nil { return diag.Errorf("adding S3 Control Bucket (%s) tags: %s", d.Id(), err) } @@ -106,7 +109,7 @@ func resourceBucketCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) @@ -147,13 +150,13 @@ func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta interf return diag.Errorf("listing tags for S3 Control Bucket (%s): %s", d.Id(), err) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return nil } func resourceBucketUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") @@ -167,7 +170,7 @@ func resourceBucketUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceBucketDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := 
meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) diff --git a/internal/service/s3control/bucket_lifecycle_configuration.go b/internal/service/s3control/bucket_lifecycle_configuration.go index 5068e848640..126402eed5d 100644 --- a/internal/service/s3control/bucket_lifecycle_configuration.go +++ b/internal/service/s3control/bucket_lifecycle_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -123,7 +126,7 @@ func resourceBucketLifecycleConfiguration() *schema.Resource { } func resourceBucketLifecycleConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) bucket := d.Get("bucket").(string) @@ -157,7 +160,7 @@ func resourceBucketLifecycleConfigurationCreate(ctx context.Context, d *schema.R } func resourceBucketLifecycleConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) @@ -191,7 +194,7 @@ func resourceBucketLifecycleConfigurationRead(ctx context.Context, d *schema.Res } func resourceBucketLifecycleConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) @@ -221,7 +224,7 @@ func resourceBucketLifecycleConfigurationUpdate(ctx context.Context, d *schema.R } func resourceBucketLifecycleConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) diff --git 
a/internal/service/s3control/bucket_lifecycle_configuration_test.go b/internal/service/s3control/bucket_lifecycle_configuration_test.go index fa94ca09d3e..b8e44bc6503 100644 --- a/internal/service/s3control/bucket_lifecycle_configuration_test.go +++ b/internal/service/s3control/bucket_lifecycle_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -429,7 +432,7 @@ func TestAccS3ControlBucketLifecycleConfiguration_Rule_status(t *testing.T) { func testAccCheckBucketLifecycleConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_bucket_lifecycle_configuration" { @@ -470,7 +473,7 @@ func testAccCheckBucketLifecycleConfigurationExists(ctx context.Context, n strin return fmt.Errorf("No S3 Control Bucket Lifecycle Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(rs.Primary.ID) diff --git a/internal/service/s3control/bucket_policy.go b/internal/service/s3control/bucket_policy.go index 10040a36730..80862e5ab58 100644 --- a/internal/service/s3control/bucket_policy.go +++ b/internal/service/s3control/bucket_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -53,7 +56,7 @@ func resourceBucketPolicy() *schema.Resource { } func resourceBucketPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) bucket := d.Get("bucket").(string) @@ -79,7 +82,7 @@ func resourceBucketPolicyCreate(ctx context.Context, d *schema.ResourceData, met } func resourceBucketPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) @@ -120,7 +123,7 @@ func resourceBucketPolicyRead(ctx context.Context, d *schema.ResourceData, meta } func resourceBucketPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { @@ -142,7 +145,7 @@ func resourceBucketPolicyUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceBucketPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(d.Id()) diff --git a/internal/service/s3control/bucket_policy_test.go b/internal/service/s3control/bucket_policy_test.go index 2490b41b9bd..b8c7fb83a0a 100644 --- a/internal/service/s3control/bucket_policy_test.go +++ b/internal/service/s3control/bucket_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -104,7 +107,7 @@ func TestAccS3ControlBucketPolicy_policy(t *testing.T) { func testAccCheckBucketPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_bucket_policy" { @@ -145,7 +148,7 @@ func testAccCheckBucketPolicyExists(ctx context.Context, n string) resource.Test return fmt.Errorf("No S3 Control Bucket Policy ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(rs.Primary.ID) diff --git a/internal/service/s3control/bucket_test.go b/internal/service/s3control/bucket_test.go index 16fb36bc445..5bec51fb55b 100644 --- a/internal/service/s3control/bucket_test.go +++ b/internal/service/s3control/bucket_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -121,7 +124,7 @@ func TestAccS3ControlBucket_tags(t *testing.T) { func testAccCheckBucketDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_bucket" { @@ -162,7 +165,7 @@ func testAccCheckBucketExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("No S3 Control Bucket ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) parsedArn, err := arn.Parse(rs.Primary.ID) diff --git a/internal/service/s3control/consts.go b/internal/service/s3control/consts.go index 68a30d284a3..5d170c1a664 100644 --- a/internal/service/s3control/consts.go +++ b/internal/service/s3control/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control // AsyncOperation.RequestStatus values. diff --git a/internal/service/s3control/errors.go b/internal/service/s3control/errors.go index 01a3f3c4405..1417a470e9f 100644 --- a/internal/service/s3control/errors.go +++ b/internal/service/s3control/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control // Error code constants missing from AWS Go SDK: diff --git a/internal/service/s3control/exports_test.go b/internal/service/s3control/exports_test.go index bd998d22d53..f853f23ed2c 100644 --- a/internal/service/s3control/exports_test.go +++ b/internal/service/s3control/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control // Exports for use in tests only. 
diff --git a/internal/service/s3control/generate.go b/internal/service/s3control/generate.go index 0df3f3c737b..4a1d30a22d0 100644 --- a/internal/service/s3control/generate.go +++ b/internal/service/s3control/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice -TagType=S3Tag +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package s3control diff --git a/internal/service/s3control/multi_region_access_point.go b/internal/service/s3control/multi_region_access_point.go index 134ee3ef12f..574215a920d 100644 --- a/internal/service/s3control/multi_region_access_point.go +++ b/internal/service/s3control/multi_region_access_point.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -136,7 +139,7 @@ func resourceMultiRegionAccessPoint() *schema.Resource { } func resourceMultiRegionAccessPointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) @@ -175,7 +178,7 @@ func resourceMultiRegionAccessPointCreate(ctx context.Context, d *schema.Resourc } func resourceMultiRegionAccessPointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) @@ -220,7 +223,7 @@ func resourceMultiRegionAccessPointRead(ctx context.Context, d *schema.ResourceD } func resourceMultiRegionAccessPointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := 
ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) @@ -251,14 +254,14 @@ func resourceMultiRegionAccessPointDelete(ctx context.Context, d *schema.Resourc _, err = waitMultiRegionAccessPointRequestSucceeded(ctx, conn, accountID, aws.StringValue(output.RequestTokenARN), d.Timeout(schema.TimeoutDelete)) if err != nil { - return diag.Errorf("error waiting for S3 Multi-Region Access Point (%s) delete: %s", d.Id(), err) + return diag.Errorf("waiting for S3 Multi-Region Access Point (%s) delete: %s", d.Id(), err) } return nil } -func ConnForMRAP(client *conns.AWSClient) (*s3control.S3Control, error) { - originalConn := client.S3ControlConn() +func ConnForMRAP(ctx context.Context, client *conns.AWSClient) (*s3control.S3Control, error) { + originalConn := client.S3ControlConn(ctx) // All Multi-Region Access Point actions are routed to the US West (Oregon) Region. region := endpoints.UsWest2RegionID diff --git a/internal/service/s3control/multi_region_access_point_data_source.go b/internal/service/s3control/multi_region_access_point_data_source.go index 8e683c41f54..859757c79c5 100644 --- a/internal/service/s3control/multi_region_access_point_data_source.go +++ b/internal/service/s3control/multi_region_access_point_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -94,7 +97,7 @@ func dataSourceMultiRegionAccessPoint() *schema.Resource { } func dataSourceMultiRegionAccessPointBlockRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) diff --git a/internal/service/s3control/multi_region_access_point_data_source_test.go b/internal/service/s3control/multi_region_access_point_data_source_test.go index 07bf8da967e..a971b54b7cd 100644 --- a/internal/service/s3control/multi_region_access_point_data_source_test.go +++ b/internal/service/s3control/multi_region_access_point_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( diff --git a/internal/service/s3control/multi_region_access_point_policy.go b/internal/service/s3control/multi_region_access_point_policy.go index 96cc83fb964..103c9701679 100644 --- a/internal/service/s3control/multi_region_access_point_policy.go +++ b/internal/service/s3control/multi_region_access_point_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -83,7 +86,7 @@ func resourceMultiRegionAccessPointPolicy() *schema.Resource { } func resourceMultiRegionAccessPointPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) @@ -122,7 +125,7 @@ func resourceMultiRegionAccessPointPolicyCreate(ctx context.Context, d *schema.R } func resourceMultiRegionAccessPointPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) @@ -174,7 +177,7 @@ func resourceMultiRegionAccessPointPolicyRead(ctx context.Context, d *schema.Res } func resourceMultiRegionAccessPointPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn, err := ConnForMRAP(meta.(*conns.AWSClient)) + conn, err := ConnForMRAP(ctx, meta.(*conns.AWSClient)) if err != nil { return diag.FromErr(err) diff --git a/internal/service/s3control/multi_region_access_point_policy_test.go b/internal/service/s3control/multi_region_access_point_policy_test.go index 32d620eed2e..78e122ce17b 100644 --- a/internal/service/s3control/multi_region_access_point_policy_test.go +++ b/internal/service/s3control/multi_region_access_point_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -174,7 +177,7 @@ func testAccCheckMultiRegionAccessPointPolicyExists(ctx context.Context, n strin return err } - conn, err := tfs3control.ConnForMRAP(acctest.Provider.Meta().(*conns.AWSClient)) + conn, err := tfs3control.ConnForMRAP(ctx, acctest.Provider.Meta().(*conns.AWSClient)) if err != nil { return err diff --git a/internal/service/s3control/multi_region_access_point_test.go b/internal/service/s3control/multi_region_access_point_test.go index 2f9116a99e7..0988f07c4bc 100644 --- a/internal/service/s3control/multi_region_access_point_test.go +++ b/internal/service/s3control/multi_region_access_point_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -205,7 +208,7 @@ func TestAccS3ControlMultiRegionAccessPoint_threeRegions(t *testing.T) { func testAccCheckMultiRegionAccessPointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn, err := tfs3control.ConnForMRAP(acctest.Provider.Meta().(*conns.AWSClient)) + conn, err := tfs3control.ConnForMRAP(ctx, acctest.Provider.Meta().(*conns.AWSClient)) if err != nil { return err @@ -256,7 +259,7 @@ func testAccCheckMultiRegionAccessPointExists(ctx context.Context, n string, v * return err } - conn, err := tfs3control.ConnForMRAP(acctest.Provider.Meta().(*conns.AWSClient)) + conn, err := tfs3control.ConnForMRAP(ctx, acctest.Provider.Meta().(*conns.AWSClient)) if err != nil { return err diff --git a/internal/service/s3control/object_lambda_access_point.go b/internal/service/s3control/object_lambda_access_point.go index 058c51b7c72..7a7551d3cdf 100644 --- a/internal/service/s3control/object_lambda_access_point.go +++ b/internal/service/s3control/object_lambda_access_point.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -124,7 +127,7 @@ func resourceObjectLambdaAccessPoint() *schema.Resource { } func resourceObjectLambdaAccessPointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("account_id"); ok { @@ -154,7 +157,7 @@ func resourceObjectLambdaAccessPointCreate(ctx context.Context, d *schema.Resour } func resourceObjectLambdaAccessPointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := ObjectLambdaAccessPointParseResourceID(d.Id()) @@ -193,7 +196,7 @@ func resourceObjectLambdaAccessPointRead(ctx context.Context, d *schema.Resource } func resourceObjectLambdaAccessPointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := ObjectLambdaAccessPointParseResourceID(d.Id()) @@ -220,7 +223,7 @@ func resourceObjectLambdaAccessPointUpdate(ctx context.Context, d *schema.Resour } func resourceObjectLambdaAccessPointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := ObjectLambdaAccessPointParseResourceID(d.Id()) diff --git a/internal/service/s3control/object_lambda_access_point_policy.go b/internal/service/s3control/object_lambda_access_point_policy.go index 2050509f5e9..6a44bef7e12 100644 --- a/internal/service/s3control/object_lambda_access_point_policy.go +++ b/internal/service/s3control/object_lambda_access_point_policy.go @@ -1,3 +1,6 
@@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -61,7 +64,7 @@ func resourceObjectLambdaAccessPointPolicy() *schema.Resource { } func resourceObjectLambdaAccessPointPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("account_id"); ok { @@ -93,7 +96,7 @@ func resourceObjectLambdaAccessPointPolicyCreate(ctx context.Context, d *schema. } func resourceObjectLambdaAccessPointPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := ObjectLambdaAccessPointParseResourceID(d.Id()) @@ -132,7 +135,7 @@ func resourceObjectLambdaAccessPointPolicyRead(ctx context.Context, d *schema.Re } func resourceObjectLambdaAccessPointPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := ObjectLambdaAccessPointParseResourceID(d.Id()) @@ -161,7 +164,7 @@ func resourceObjectLambdaAccessPointPolicyUpdate(ctx context.Context, d *schema. 
} func resourceObjectLambdaAccessPointPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlConn() + conn := meta.(*conns.AWSClient).S3ControlConn(ctx) accountID, name, err := ObjectLambdaAccessPointParseResourceID(d.Id()) diff --git a/internal/service/s3control/object_lambda_access_point_policy_test.go b/internal/service/s3control/object_lambda_access_point_policy_test.go index 70332ab2e9e..bd29dc4d5d5 100644 --- a/internal/service/s3control/object_lambda_access_point_policy_test.go +++ b/internal/service/s3control/object_lambda_access_point_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -134,7 +137,7 @@ func TestAccS3ControlObjectLambdaAccessPointPolicy_update(t *testing.T) { func testAccCheckObjectLambdaAccessPointPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_object_lambda_access_point_policy" { @@ -181,7 +184,7 @@ func testAccCheckObjectLambdaAccessPointPolicyExists(ctx context.Context, n stri return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) _, _, err = tfs3control.FindObjectLambdaAccessPointPolicyAndStatusByTwoPartKey(ctx, conn, accountID, name) diff --git a/internal/service/s3control/object_lambda_access_point_test.go b/internal/service/s3control/object_lambda_access_point_test.go index f383a63a653..3a185448519 100644 --- a/internal/service/s3control/object_lambda_access_point_test.go +++ b/internal/service/s3control/object_lambda_access_point_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -152,7 +155,7 @@ func TestAccS3ControlObjectLambdaAccessPoint_update(t *testing.T) { func testAccCheckObjectLambdaAccessPointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_object_lambda_access_point" { @@ -199,7 +202,7 @@ func testAccCheckObjectLambdaAccessPointExists(ctx context.Context, n string, v return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlConn(ctx) output, err := tfs3control.FindObjectLambdaAccessPointByTwoPartKey(ctx, conn, accountID, name) diff --git a/internal/service/s3control/service_package_gen.go b/internal/service/s3control/service_package_gen.go index 014fc9ca49d..6831c57983f 100644 --- a/internal/service/s3control/service_package_gen.go +++ b/internal/service/s3control/service_package_gen.go @@ -5,6 +5,12 @@ package s3control import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + s3control_sdkv2 "github.com/aws/aws-sdk-go-v2/service/s3control" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + s3control_sdkv1 "github.com/aws/aws-sdk-go/service/s3control" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -89,4 +95,24 @@ func (p *servicePackage) ServicePackageName() string { return names.S3Control } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*s3control_sdkv1.S3Control, error) { + sess := config["session"].(*session_sdkv1.Session) + + return s3control_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*s3control_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return s3control_sdkv2.NewFromConfig(cfg, func(o *s3control_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = s3control_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/s3control/storage_lens_configuration.go b/internal/service/s3control/storage_lens_configuration.go index 89b983a9096..dede01d7018 100644 --- a/internal/service/s3control/storage_lens_configuration.go +++ b/internal/service/s3control/storage_lens_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( @@ -392,7 +395,7 @@ func resourceStorageLensConfiguration() *schema.Resource { } func resourceStorageLensConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlClient() + conn := meta.(*conns.AWSClient).S3ControlClient(ctx) accountID := meta.(*conns.AWSClient).AccountID if v, ok := d.GetOk("account_id"); ok { @@ -404,7 +407,7 @@ func resourceStorageLensConfigurationCreate(ctx context.Context, d *schema.Resou input := &s3control.PutStorageLensConfigurationInput{ AccountId: aws.String(accountID), ConfigId: aws.String(configID), - Tags: StorageLensTags(KeyValueTags(ctx, GetTagsIn(ctx))), + Tags: StorageLensTags(KeyValueTags(ctx, getTagsIn(ctx))), } if v, ok := d.GetOk("storage_lens_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -424,7 +427,7 @@ func resourceStorageLensConfigurationCreate(ctx context.Context, d *schema.Resou } func resourceStorageLensConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlClient() + conn := meta.(*conns.AWSClient).S3ControlClient(ctx) accountID, configID, err := StorageLensConfigurationParseResourceID(d.Id()) @@ -457,13 +460,13 @@ func resourceStorageLensConfigurationRead(ctx context.Context, d *schema.Resourc return diag.Errorf("listing tags for S3 Storage Lens Configuration (%s): %s", d.Id(), err) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return nil } func resourceStorageLensConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlClient() + conn := meta.(*conns.AWSClient).S3ControlClient(ctx) accountID, configID, err := StorageLensConfigurationParseResourceID(d.Id()) @@ -501,7 +504,7 @@ func resourceStorageLensConfigurationUpdate(ctx context.Context, 
d *schema.Resou } func resourceStorageLensConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).S3ControlClient() + conn := meta.(*conns.AWSClient).S3ControlClient(ctx) accountID, configID, err := StorageLensConfigurationParseResourceID(d.Id()) diff --git a/internal/service/s3control/storage_lens_configuration_test.go b/internal/service/s3control/storage_lens_configuration_test.go index 37c2d3cc08e..cd123bf51da 100644 --- a/internal/service/s3control/storage_lens_configuration_test.go +++ b/internal/service/s3control/storage_lens_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3control_test import ( @@ -323,7 +326,7 @@ func TestAccS3ControlStorageLensConfiguration_advancedMetrics(t *testing.T) { func testAccCheckStorageLensConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3control_object_lambda_access_point" { @@ -370,7 +373,7 @@ func testAccCheckStorageLensConfigurationExists(ctx context.Context, n string) r return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ControlClient(ctx) _, err = tfs3control.FindStorageLensConfigurationByAccountIDAndConfigID(ctx, conn, accountID, configID) diff --git a/internal/service/s3control/sweep.go b/internal/service/s3control/sweep.go index 1fb0a73d948..14c3cd840be 100644 --- a/internal/service/s3control/sweep.go +++ b/internal/service/s3control/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/s3control" multierror "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -43,12 +45,12 @@ func init() { func sweepAccessPoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).S3ControlConn() - accountID := client.(*conns.AWSClient).AccountID + conn := client.S3ControlConn(ctx) + accountID := client.AccountID input := &s3control.ListAccessPointsInput{ AccountId: aws.String(accountID), } @@ -85,7 +87,7 @@ func sweepAccessPoints(region string) error { return fmt.Errorf("error listing S3 Access Points (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping S3 Access Points (%s): %w", region, err)) @@ -101,12 +103,12 @@ func sweepMultiRegionAccessPoints(region string) error { return nil } - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).S3ControlConn() - accountID := client.(*conns.AWSClient).AccountID + conn := client.S3ControlConn(ctx) + accountID := client.AccountID input := &s3control.ListMultiRegionAccessPointsInput{ AccountId: aws.String(accountID), } @@ -137,7 +139,7 @@ func sweepMultiRegionAccessPoints(region string) error { return fmt.Errorf("error listing S3 Multi-Region Access Points 
(%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping S3 Multi-Region Access Points (%s): %w", region, err) @@ -148,12 +150,12 @@ func sweepMultiRegionAccessPoints(region string) error { func sweepObjectLambdaAccessPoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).S3ControlConn() - accountID := client.(*conns.AWSClient).AccountID + conn := client.S3ControlConn(ctx) + accountID := client.AccountID input := &s3control.ListAccessPointsForObjectLambdaInput{ AccountId: aws.String(accountID), } @@ -184,7 +186,7 @@ func sweepObjectLambdaAccessPoints(region string) error { return fmt.Errorf("error listing S3 Object Lambda Access Points (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping S3 Object Lambda Access Points (%s): %w", region, err) @@ -195,12 +197,12 @@ func sweepObjectLambdaAccessPoints(region string) error { func sweepStorageLensConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).S3ControlConn() - accountID := client.(*conns.AWSClient).AccountID + conn := client.S3ControlConn(ctx) + accountID := client.AccountID input := &s3control.ListStorageLensConfigurationsInput{ AccountId: aws.String(accountID), } @@ -237,7 +239,7 @@ func sweepStorageLensConfigurations(region string) error { return fmt.Errorf("error listing S3 Storage 
Lens Configurations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping S3 Storage Lens Configurations (%s): %w", region, err) diff --git a/internal/service/s3control/tags_gen.go b/internal/service/s3control/tags_gen.go index f93ecab0337..720217ef2ae 100644 --- a/internal/service/s3control/tags_gen.go +++ b/internal/service/s3control/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*s3control.S3Tag) tftags.KeyValueT return tftags.New(ctx, m) } -// GetTagsIn returns s3control service tags from Context. +// getTagsIn returns s3control service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*s3control.S3Tag { +func getTagsIn(ctx context.Context) []*s3control.S3Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*s3control.S3Tag { return nil } -// SetTagsOut sets s3control service tags in Context. -func SetTagsOut(ctx context.Context, tags []*s3control.S3Tag) { +// setTagsOut sets s3control service tags in Context. +func setTagsOut(ctx context.Context, tags []*s3control.S3Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/s3control/validate.go b/internal/service/s3control/validate.go index b7f0af1d942..a30382428ef 100644 --- a/internal/service/s3control/validate.go +++ b/internal/service/s3control/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3control import ( diff --git a/internal/service/s3outposts/endpoint.go b/internal/service/s3outposts/endpoint.go index d7895e6311f..0e4fba2c01d 100644 --- a/internal/service/s3outposts/endpoint.go +++ b/internal/service/s3outposts/endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package s3outposts import ( @@ -92,7 +95,7 @@ func ResourceEndpoint() *schema.Resource { func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3OutpostsConn() + conn := meta.(*conns.AWSClient).S3OutpostsConn(ctx) input := &s3outposts.CreateEndpointInput{ OutpostId: aws.String(d.Get("outpost_id").(string)), @@ -125,7 +128,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3OutpostsConn() + conn := meta.(*conns.AWSClient).S3OutpostsConn(ctx) endpoint, err := FindEndpointByARN(ctx, conn, d.Id()) @@ -156,7 +159,7 @@ func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).S3OutpostsConn() + conn := meta.(*conns.AWSClient).S3OutpostsConn(ctx) parsedArn, err := arn.Parse(d.Id()) diff --git a/internal/service/s3outposts/endpoint_test.go b/internal/service/s3outposts/endpoint_test.go index 58e7c14c41b..8fae9ac9826 100644 --- a/internal/service/s3outposts/endpoint_test.go +++ b/internal/service/s3outposts/endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package s3outposts_test import ( @@ -151,7 +154,7 @@ func TestAccS3OutpostsEndpoint_disappears(t *testing.T) { func testAccCheckEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).S3OutpostsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3OutpostsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_s3outposts_endpoint" { @@ -185,7 +188,7 @@ func testAccCheckEndpointExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("No S3 Outposts Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).S3OutpostsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).S3OutpostsConn(ctx) _, err := tfs3outposts.FindEndpointByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/s3outposts/generate.go b/internal/service/s3outposts/generate.go new file mode 100644 index 00000000000..d20c5a7e9f1 --- /dev/null +++ b/internal/service/s3outposts/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package s3outposts diff --git a/internal/service/s3outposts/service_package_gen.go b/internal/service/s3outposts/service_package_gen.go index 5ae8dc6e551..74b107c3dfe 100644 --- a/internal/service/s3outposts/service_package_gen.go +++ b/internal/service/s3outposts/service_package_gen.go @@ -5,6 +5,10 @@ package s3outposts import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + s3outposts_sdkv1 "github.com/aws/aws-sdk-go/service/s3outposts" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -36,4 +40,13 @@ func (p *servicePackage) ServicePackageName() string { return names.S3Outposts } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*s3outposts_sdkv1.S3Outposts, error) { + sess := config["session"].(*session_sdkv1.Session) + + return s3outposts_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/sagemaker/app.go b/internal/service/sagemaker/app.go index dc3fc70619b..79452f1eeea 100644 --- a/internal/service/sagemaker/app.go +++ b/internal/service/sagemaker/app.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -113,13 +116,13 @@ func ResourceApp() *schema.Resource { func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.CreateAppInput{ AppName: aws.String(d.Get("app_name").(string)), AppType: aws.String(d.Get("app_type").(string)), DomainId: aws.String(d.Get("domain_id").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("user_profile_name"); ok { @@ -157,7 +160,7 @@ func resourceAppCreate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) domainID, userProfileOrSpaceName, appType, appName, err := decodeAppID(d.Id()) if err != nil { @@ -199,7 +202,7 @@ func resourceAppUpdate(ctx context.Context, d *schema.ResourceData, meta interfa func resourceAppDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) appName := d.Get("app_name").(string) appType := d.Get("app_type").(string) diff --git a/internal/service/sagemaker/app_image_config.go b/internal/service/sagemaker/app_image_config.go index 33f51a413a2..a04d421ba44 100644 --- a/internal/service/sagemaker/app_image_config.go +++ b/internal/service/sagemaker/app_image_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -111,12 +114,12 @@ func ResourceAppImageConfig() *schema.Resource { func resourceAppImageConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("app_image_config_name").(string) input := &sagemaker.CreateAppImageConfigInput{ AppImageConfigName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kernel_gateway_image_config"); ok && len(v.([]interface{})) > 0 { @@ -135,7 +138,7 @@ func resourceAppImageConfigCreate(ctx context.Context, d *schema.ResourceData, m func resourceAppImageConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) image, err := FindAppImageConfigByName(ctx, conn, d.Id()) if err != nil { @@ -160,7 +163,7 @@ func resourceAppImageConfigRead(ctx context.Context, d *schema.ResourceData, met func resourceAppImageConfigUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChange("kernel_gateway_image_config") { input := &sagemaker.UpdateAppImageConfigInput{ @@ -183,7 +186,7 @@ func resourceAppImageConfigUpdate(ctx context.Context, d *schema.ResourceData, m func resourceAppImageConfigDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteAppImageConfigInput{ AppImageConfigName: aws.String(d.Id()), diff --git 
a/internal/service/sagemaker/app_image_config_test.go b/internal/service/sagemaker/app_image_config_test.go index 4c6e0ea0c3a..cafe600c64b 100644 --- a/internal/service/sagemaker/app_image_config_test.go +++ b/internal/service/sagemaker/app_image_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -210,7 +213,7 @@ func TestAccSageMakerAppImageConfig_disappears(t *testing.T) { func testAccCheckAppImageDestroyConfig(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_app_image_config" { @@ -247,7 +250,7 @@ func testAccCheckAppImageExistsConfig(ctx context.Context, n string, config *sag return fmt.Errorf("No sagmaker App Image Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindAppImageConfigByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/app_test.go b/internal/service/sagemaker/app_test.go index 81ed27ff657..14e2354a63f 100644 --- a/internal/service/sagemaker/app_test.go +++ b/internal/service/sagemaker/app_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -240,7 +243,7 @@ func testAccApp_disappears(t *testing.T) { func testAccCheckAppDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_app" { @@ -287,7 +290,7 @@ func testAccCheckAppExists(ctx context.Context, n string, v *sagemaker.DescribeA return fmt.Errorf("No sagmaker domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) domainID := rs.Primary.Attributes["domain_id"] appType := rs.Primary.Attributes["app_type"] diff --git a/internal/service/sagemaker/code_repository.go b/internal/service/sagemaker/code_repository.go index a806de0c96a..059c5132005 100644 --- a/internal/service/sagemaker/code_repository.go +++ b/internal/service/sagemaker/code_repository.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -80,13 +83,13 @@ func ResourceCodeRepository() *schema.Resource { func resourceCodeRepositoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("code_repository_name").(string) input := &sagemaker.CreateCodeRepositoryInput{ CodeRepositoryName: aws.String(name), GitConfig: expandCodeRepositoryGitConfig(d.Get("git_config").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] sagemaker code repository create config: %#v", *input) @@ -102,7 +105,7 @@ func resourceCodeRepositoryCreate(ctx context.Context, d *schema.ResourceData, m func resourceCodeRepositoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) codeRepository, err := FindCodeRepositoryByName(ctx, conn, d.Id()) if err != nil { @@ -127,7 +130,7 @@ func resourceCodeRepositoryRead(ctx context.Context, d *schema.ResourceData, met func resourceCodeRepositoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChange("git_config") { input := &sagemaker.UpdateCodeRepositoryInput{ @@ -147,7 +150,7 @@ func resourceCodeRepositoryUpdate(ctx context.Context, d *schema.ResourceData, m func resourceCodeRepositoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteCodeRepositoryInput{ CodeRepositoryName: 
aws.String(d.Id()), diff --git a/internal/service/sagemaker/code_repository_test.go b/internal/service/sagemaker/code_repository_test.go index 28600ecb434..bc21b52b79b 100644 --- a/internal/service/sagemaker/code_repository_test.go +++ b/internal/service/sagemaker/code_repository_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -196,7 +199,7 @@ func TestAccSageMakerCodeRepository_disappears(t *testing.T) { func testAccCheckCodeRepositoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_code_repository" { @@ -233,7 +236,7 @@ func testAccCheckCodeRepositoryExists(ctx context.Context, n string, codeRepo *s return fmt.Errorf("No sagmaker Code Repository ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindCodeRepositoryByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/consts.go b/internal/service/sagemaker/consts.go index c1ed618905f..46dd1f56c8d 100644 --- a/internal/service/sagemaker/consts.go +++ b/internal/service/sagemaker/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/data_quality_job_definition.go b/internal/service/sagemaker/data_quality_job_definition.go index 99bee78a709..d82b0beaa6a 100644 --- a/internal/service/sagemaker/data_quality_job_definition.go +++ b/internal/service/sagemaker/data_quality_job_definition.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -454,7 +457,7 @@ func ResourceDataQualityJobDefinition() *schema.Resource { func resourceDataQualityJobDefinitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) var name string if v, ok := d.GetOk("name"); ok { @@ -475,7 +478,7 @@ func resourceDataQualityJobDefinitionCreate(ctx context.Context, d *schema.Resou JobDefinitionName: aws.String(name), JobResources: expandMonitoringResources(d.Get("job_resources").([]interface{})), RoleArn: aws.String(roleArn), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("data_quality_baseline_config"); ok && len(v.([]interface{})) > 0 { @@ -503,7 +506,7 @@ func resourceDataQualityJobDefinitionCreate(ctx context.Context, d *schema.Resou func resourceDataQualityJobDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) jobDefinition, err := FindDataQualityJobDefinitionByName(ctx, conn, d.Id()) @@ -562,7 +565,7 @@ func resourceDataQualityJobDefinitionUpdate(ctx context.Context, d *schema.Resou func resourceDataQualityJobDefinitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) log.Printf("[INFO] Deleting SageMaker Data Quality Job Definition: %s", d.Id()) _, err := conn.DeleteDataQualityJobDefinitionWithContext(ctx, &sagemaker.DeleteDataQualityJobDefinitionInput{ diff --git a/internal/service/sagemaker/data_quality_job_definition_test.go b/internal/service/sagemaker/data_quality_job_definition_test.go index 8d6d166f8d5..ebefdbfcb6a 100644 --- 
a/internal/service/sagemaker/data_quality_job_definition_test.go +++ b/internal/service/sagemaker/data_quality_job_definition_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -588,7 +591,7 @@ func TestAccSageMakerDataQualityJobDefinition_disappears(t *testing.T) { func testAccCheckDataQualityJobDefinitionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_data_quality_job_definition" { @@ -622,7 +625,7 @@ func testAccCheckDataQualityJobDefinitionExists(ctx context.Context, n string) r return fmt.Errorf("no SageMaker Data Quality Job Definition ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) _, err := tfsagemaker.FindDataQualityJobDefinitionByName(ctx, conn, rs.Primary.ID) return err diff --git a/internal/service/sagemaker/device.go b/internal/service/sagemaker/device.go index 4623b18ed7b..554e99b10f7 100644 --- a/internal/service/sagemaker/device.go +++ b/internal/service/sagemaker/device.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -78,7 +81,7 @@ func ResourceDevice() *schema.Resource { func resourceDeviceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("device_fleet_name").(string) input := &sagemaker.RegisterDevicesInput{ @@ -98,7 +101,7 @@ func resourceDeviceCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deviceFleetName, deviceName, err := DecodeDeviceId(d.Id()) if err != nil { @@ -129,7 +132,7 @@ func resourceDeviceRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceDeviceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deviceFleetName, _, err := DecodeDeviceId(d.Id()) if err != nil { @@ -152,7 +155,7 @@ func resourceDeviceUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDeviceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deviceFleetName, deviceName, err := DecodeDeviceId(d.Id()) if err != nil { diff --git a/internal/service/sagemaker/device_fleet.go b/internal/service/sagemaker/device_fleet.go index fa87682104e..76c7bb0bed4 100644 --- a/internal/service/sagemaker/device_fleet.go +++ b/internal/service/sagemaker/device_fleet.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -92,14 +95,14 @@ func ResourceDeviceFleet() *schema.Resource { func resourceDeviceFleetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("device_fleet_name").(string) input := &sagemaker.CreateDeviceFleetInput{ DeviceFleetName: aws.String(name), OutputConfig: expandFeatureDeviceFleetOutputConfig(d.Get("output_config").([]interface{})), EnableIotRoleAlias: aws.Bool(d.Get("enable_iot_role_alias").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("role_arn"); ok { @@ -124,7 +127,7 @@ func resourceDeviceFleetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceDeviceFleetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deviceFleet, err := FindDeviceFleetByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -156,7 +159,7 @@ func resourceDeviceFleetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceDeviceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &sagemaker.UpdateDeviceFleetInput{ @@ -182,7 +185,7 @@ func resourceDeviceFleetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceDeviceFleetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := 
&sagemaker.DeleteDeviceFleetInput{ DeviceFleetName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/device_fleet_test.go b/internal/service/sagemaker/device_fleet_test.go index 2bd07547f4e..0369519ee86 100644 --- a/internal/service/sagemaker/device_fleet_test.go +++ b/internal/service/sagemaker/device_fleet_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -158,7 +161,7 @@ func TestAccSageMakerDeviceFleet_disappears(t *testing.T) { func testAccCheckDeviceFleetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_device_fleet" { @@ -194,7 +197,7 @@ func testAccCheckDeviceFleetExists(ctx context.Context, n string, device_fleet * return fmt.Errorf("No sagmaker Device Fleet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindDeviceFleetByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/device_test.go b/internal/service/sagemaker/device_test.go index 7b1ff45abfb..b89947b3e8d 100644 --- a/internal/service/sagemaker/device_test.go +++ b/internal/service/sagemaker/device_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -134,7 +137,7 @@ func TestAccSageMakerDevice_disappears_fleet(t *testing.T) { func testAccCheckDeviceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_device" { @@ -180,7 +183,7 @@ func testAccCheckDeviceExists(ctx context.Context, n string, device *sagemaker.D return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindDeviceByName(ctx, conn, deviceFleetName, deviceName) if err != nil { return err diff --git a/internal/service/sagemaker/domain.go b/internal/service/sagemaker/domain.go index 4d4ae80879d..d4fcbd500c0 100644 --- a/internal/service/sagemaker/domain.go +++ b/internal/service/sagemaker/domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -681,7 +684,7 @@ func ResourceDomain() *schema.Resource { func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.CreateDomainInput{ DomainName: aws.String(d.Get("domain_name").(string)), @@ -690,7 +693,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte AppNetworkAccessType: aws.String(d.Get("app_network_access_type").(string)), SubnetIds: flex.ExpandStringSet(d.Get("subnet_ids").(*schema.Set)), DefaultUserSettings: expandDomainDefaultUserSettings(d.Get("default_user_settings").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("app_security_group_management"); ok { @@ -732,7 +735,7 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) domain, err := FindDomainByName(ctx, conn, d.Id()) if err != nil { @@ -778,7 +781,7 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &sagemaker.UpdateDomainInput{ @@ -813,7 +816,7 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteDomainInput{ DomainId: aws.String(d.Id()), diff --git a/internal/service/sagemaker/domain_test.go b/internal/service/sagemaker/domain_test.go index 7366ccaa428..f17099ed312 100644 --- a/internal/service/sagemaker/domain_test.go +++ b/internal/service/sagemaker/domain_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -782,7 +785,7 @@ func testAccDomain_spaceSettingsKernelGatewayAppSettings(t *testing.T) { func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_domain" { @@ -825,7 +828,7 @@ func testAccCheckDomainExists(ctx context.Context, n string, codeRepo *sagemaker return fmt.Errorf("No sagmaker domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindDomainByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/endpoint.go b/internal/service/sagemaker/endpoint.go index b8b083e139a..3130aca8b3d 100644 --- a/internal/service/sagemaker/endpoint.go +++ b/internal/service/sagemaker/endpoint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -168,7 +171,7 @@ func ResourceEndpoint() *schema.Resource { func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) var name string if v, ok := d.GetOk("name"); ok { @@ -180,7 +183,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in createOpts := &sagemaker.CreateEndpointInput{ EndpointName: aws.String(name), EndpointConfigName: aws.String(d.Get("endpoint_config_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("deployment_config"); ok && (len(v.([]interface{})) > 0) { @@ -208,7 +211,7 @@ func resourceEndpointCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) endpoint, err := FindEndpointByName(ctx, conn, d.Id()) @@ -235,7 +238,7 @@ func resourceEndpointRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChanges("endpoint_config_name", "deployment_config") { modifyOpts := &sagemaker.UpdateEndpointInput{ @@ -267,7 +270,7 @@ func resourceEndpointUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceEndpointDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) 
deleteEndpointOpts := &sagemaker.DeleteEndpointInput{ EndpointName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/endpoint_configuration.go b/internal/service/sagemaker/endpoint_configuration.go index 5ebfcc8f26b..c9dae70888b 100644 --- a/internal/service/sagemaker/endpoint_configuration.go +++ b/internal/service/sagemaker/endpoint_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -341,6 +344,12 @@ func ResourceEndpointConfiguration() *schema.Resource { ForceNew: true, ValidateFunc: validation.IntInSlice([]int{1024, 2048, 3072, 4096, 5120, 6144}), }, + "provisioned_concurrency": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 200), + }, }, }, }, @@ -458,6 +467,12 @@ func ResourceEndpointConfiguration() *schema.Resource { ForceNew: true, ValidateFunc: validation.IntInSlice([]int{1024, 2048, 3072, 4096, 5120, 6144}), }, + "provisioned_concurrency": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + ValidateFunc: validation.IntBetween(1, 200), + }, }, }, }, @@ -486,14 +501,14 @@ func ResourceEndpointConfiguration() *schema.Resource { func resourceEndpointConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) createOpts := &sagemaker.CreateEndpointConfigInput{ EndpointConfigName: aws.String(name), ProductionVariants: expandProductionVariants(d.Get("production_variants").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_arn"); ok { @@ -524,7 +539,7 @@ func resourceEndpointConfigurationCreate(ctx context.Context, d *schema.Resource func resourceEndpointConfigurationRead(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) endpointConfig, err := FindEndpointConfigByName(ctx, conn, d.Id()) @@ -572,7 +587,7 @@ func resourceEndpointConfigurationUpdate(ctx context.Context, d *schema.Resource func resourceEndpointConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deleteOpts := &sagemaker.DeleteEndpointConfigInput{ EndpointConfigName: aws.String(d.Id()), @@ -918,6 +933,10 @@ func expandServerlessConfig(configured []interface{}) *sagemaker.ProductionVaria c.MemorySizeInMB = aws.Int64(int64(v)) } + if v, ok := m["provisioned_concurrency"].(int); ok && v > 0 { + c.ProvisionedConcurrency = aws.Int64(int64(v)) + } + return c } @@ -1034,6 +1053,10 @@ func flattenServerlessConfig(config *sagemaker.ProductionVariantServerlessConfig cfg["memory_size_in_mb"] = aws.Int64Value(config.MemorySizeInMB) } + if config.ProvisionedConcurrency != nil { + cfg["provisioned_concurrency"] = aws.Int64Value(config.ProvisionedConcurrency) + } + return []map[string]interface{}{cfg} } diff --git a/internal/service/sagemaker/endpoint_configuration_test.go b/internal/service/sagemaker/endpoint_configuration_test.go index b94cb100636..97077b20f8c 100644 --- a/internal/service/sagemaker/endpoint_configuration_test.go +++ b/internal/service/sagemaker/endpoint_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -170,6 +173,38 @@ func TestAccSageMakerEndpointConfiguration_ProductionVariants_serverless(t *test resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.#", "1"), resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.0.max_concurrency", "1"), resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.0.memory_size_in_mb", "1024"), + resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.0.provisioned_concurrency", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccSageMakerEndpointConfiguration_ProductionVariants_serverlessProvisionedConcurrency(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_sagemaker_endpoint_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, sagemaker.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckEndpointConfigurationDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccEndpointConfigurationConfig_serverlessProvisionedConcurrency(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckEndpointConfigurationExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "production_variants.#", "1"), + resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.0.max_concurrency", "200"), + resource.TestCheckResourceAttr(resourceName, "production_variants.0.serverless_config.0.memory_size_in_mb", "5120"), + resource.TestCheckResourceAttr(resourceName, 
"production_variants.0.serverless_config.0.provisioned_concurrency", "100"), ), }, { @@ -639,7 +674,7 @@ func TestAccSageMakerEndpointConfiguration_upgradeToEnableSSMAccess(t *testing.T func testAccCheckEndpointConfigurationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_endpoint_configuration" { @@ -674,7 +709,7 @@ func testAccCheckEndpointConfigurationExists(ctx context.Context, n string) reso return fmt.Errorf("no SageMaker endpoint config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) _, err := tfsagemaker.FindEndpointConfigByName(ctx, conn, rs.Primary.ID) return err @@ -1180,3 +1215,22 @@ resource "aws_sagemaker_endpoint_configuration" "test" { } `, rName)) } + +func testAccEndpointConfigurationConfig_serverlessProvisionedConcurrency(rName string) string { + return acctest.ConfigCompose(testAccEndpointConfigurationConfig_base(rName), fmt.Sprintf(` +resource "aws_sagemaker_endpoint_configuration" "test" { + name = %[1]q + + production_variants { + variant_name = "variant-1" + model_name = aws_sagemaker_model.test.name + + serverless_config { + max_concurrency = 200 + memory_size_in_mb = 5120 + provisioned_concurrency = 100 + } + } +} +`, rName)) +} diff --git a/internal/service/sagemaker/endpoint_test.go b/internal/service/sagemaker/endpoint_test.go index 0557c235134..a3b3b741d99 100644 --- a/internal/service/sagemaker/endpoint_test.go +++ b/internal/service/sagemaker/endpoint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -238,7 +241,7 @@ func TestAccSageMakerEndpoint_disappears(t *testing.T) { func testAccCheckEndpointDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_endpoint" { @@ -272,7 +275,7 @@ func testAccCheckEndpointExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("no SageMaker Endpoint ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) _, err := tfsagemaker.FindEndpointByName(ctx, conn, rs.Primary.ID) return err diff --git a/internal/service/sagemaker/errors.go b/internal/service/sagemaker/errors.go index 2eb2da79efe..13e07ff940d 100644 --- a/internal/service/sagemaker/errors.go +++ b/internal/service/sagemaker/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker const ( diff --git a/internal/service/sagemaker/feature_group.go b/internal/service/sagemaker/feature_group.go index 18842774b9a..3adba62d5f8 100644 --- a/internal/service/sagemaker/feature_group.go +++ b/internal/service/sagemaker/feature_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -209,7 +212,7 @@ func ResourceFeatureGroup() *schema.Resource { func resourceFeatureGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("feature_group_name").(string) input := &sagemaker.CreateFeatureGroupInput{ @@ -218,7 +221,7 @@ func resourceFeatureGroupCreate(ctx context.Context, d *schema.ResourceData, met RecordIdentifierFeatureName: aws.String(d.Get("record_identifier_feature_name").(string)), RoleArn: aws.String(d.Get("role_arn").(string)), FeatureDefinitions: expandFeatureGroupFeatureDefinition(d.Get("feature_definition").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -267,7 +270,7 @@ func resourceFeatureGroupCreate(ctx context.Context, d *schema.ResourceData, met func resourceFeatureGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) output, err := FindFeatureGroupByName(ctx, conn, d.Id()) @@ -314,7 +317,7 @@ func resourceFeatureGroupUpdate(ctx context.Context, d *schema.ResourceData, met func resourceFeatureGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteFeatureGroupInput{ FeatureGroupName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/feature_group_test.go b/internal/service/sagemaker/feature_group_test.go index 17aac3529fd..164d884e885 100644 --- a/internal/service/sagemaker/feature_group_test.go +++ b/internal/service/sagemaker/feature_group_test.go @@ -1,3 
+1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -382,7 +385,7 @@ func TestAccSageMakerFeatureGroup_disappears(t *testing.T) { func testAccCheckFeatureGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_feature_group" { @@ -417,7 +420,7 @@ func testAccCheckFeatureGroupExists(ctx context.Context, n string, v *sagemaker. return fmt.Errorf("No SageMaker Feature Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindFeatureGroupByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sagemaker/find.go b/internal/service/sagemaker/find.go index 0b8e37e3083..651b5b73622 100644 --- a/internal/service/sagemaker/find.go +++ b/internal/service/sagemaker/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/flow_definition.go b/internal/service/sagemaker/flow_definition.go index ccefa052770..371926abfd7 100644 --- a/internal/service/sagemaker/flow_definition.go +++ b/internal/service/sagemaker/flow_definition.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -246,7 +249,7 @@ func ResourceFlowDefinition() *schema.Resource { func resourceFlowDefinitionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("flow_definition_name").(string) input := &sagemaker.CreateFlowDefinitionInput{ @@ -254,7 +257,7 @@ func resourceFlowDefinitionCreate(ctx context.Context, d *schema.ResourceData, m HumanLoopConfig: expandFlowDefinitionHumanLoopConfig(d.Get("human_loop_config").([]interface{})), RoleArn: aws.String(d.Get("role_arn").(string)), OutputConfig: expandFlowDefinitionOutputConfig(d.Get("output_config").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("human_loop_activation_config"); ok && (len(v.([]interface{})) > 0) { @@ -289,7 +292,7 @@ func resourceFlowDefinitionCreate(ctx context.Context, d *schema.ResourceData, m func resourceFlowDefinitionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) flowDefinition, err := FindFlowDefinitionByName(ctx, conn, d.Id()) @@ -337,7 +340,7 @@ func resourceFlowDefinitionUpdate(ctx context.Context, d *schema.ResourceData, m func resourceFlowDefinitionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) log.Printf("[DEBUG] Deleting SageMaker Flow Definition: %s", d.Id()) _, err := conn.DeleteFlowDefinitionWithContext(ctx, &sagemaker.DeleteFlowDefinitionInput{ diff --git a/internal/service/sagemaker/flow_definition_test.go b/internal/service/sagemaker/flow_definition_test.go index 
6a8e88d9490..4e0558bda0b 100644 --- a/internal/service/sagemaker/flow_definition_test.go +++ b/internal/service/sagemaker/flow_definition_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -197,7 +200,7 @@ func testAccFlowDefinition_disappears(t *testing.T) { func testAccCheckFlowDefinitionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_flow_definition" { @@ -232,7 +235,7 @@ func testAccCheckFlowDefinitionExists(ctx context.Context, n string, flowDefinit return fmt.Errorf("No SageMaker Flow Definition ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindFlowDefinitionByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sagemaker/generate.go b/internal/service/sagemaker/generate.go index c2ca1f70a54..8d0db66e7bd 100644 --- a/internal/service/sagemaker/generate.go +++ b/internal/service/sagemaker/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ServiceTagsSlice -TagOp=AddTags -UntagOp=DeleteTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package sagemaker diff --git a/internal/service/sagemaker/human_task_ui.go b/internal/service/sagemaker/human_task_ui.go index 7a03bb5e8f4..e208d402e75 100644 --- a/internal/service/sagemaker/human_task_ui.go +++ b/internal/service/sagemaker/human_task_ui.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -78,12 +81,12 @@ func ResourceHumanTaskUI() *schema.Resource { func resourceHumanTaskUICreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("human_task_ui_name").(string) input := &sagemaker.CreateHumanTaskUiInput{ HumanTaskUiName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UiTemplate: expandHumanTaskUiUiTemplate(d.Get("ui_template").([]interface{})), } @@ -101,7 +104,7 @@ func resourceHumanTaskUICreate(ctx context.Context, d *schema.ResourceData, meta func resourceHumanTaskUIRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) humanTaskUi, err := FindHumanTaskUIByName(ctx, conn, d.Id()) @@ -136,7 +139,7 @@ func resourceHumanTaskUIUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceHumanTaskUIDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) log.Printf("[DEBUG] Deleting SageMaker HumanTaskUi: %s", d.Id()) _, err := conn.DeleteHumanTaskUiWithContext(ctx, &sagemaker.DeleteHumanTaskUiInput{ diff --git a/internal/service/sagemaker/human_task_ui_test.go b/internal/service/sagemaker/human_task_ui_test.go index 473f349c24c..8257bab36c3 100644 --- a/internal/service/sagemaker/human_task_ui_test.go +++ b/internal/service/sagemaker/human_task_ui_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -120,7 +123,7 @@ func TestAccSageMakerHumanTaskUI_disappears(t *testing.T) { func testAccCheckHumanTaskUIDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_human_task_ui" { @@ -155,7 +158,7 @@ func testAccCheckHumanTaskUIExists(ctx context.Context, n string, humanTaskUi *s return fmt.Errorf("No SageMaker HumanTaskUi ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindHumanTaskUIByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sagemaker/image.go b/internal/service/sagemaker/image.go index 4e0d4e3c3d2..64528ce50b5 100644 --- a/internal/service/sagemaker/image.go +++ b/internal/service/sagemaker/image.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -72,13 +75,13 @@ func ResourceImage() *schema.Resource { func resourceImageCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("image_name").(string) input := &sagemaker.CreateImageInput{ ImageName: aws.String(name), RoleArn: aws.String(d.Get("role_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("display_name"); ok { @@ -107,7 +110,7 @@ func resourceImageCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceImageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) image, err := FindImageByName(ctx, conn, d.Id()) if err != nil { @@ -131,7 +134,7 @@ func resourceImageRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceImageUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) needsUpdate := false input := &sagemaker.UpdateImageInput{ @@ -177,7 +180,7 @@ func resourceImageUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceImageDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteImageInput{ ImageName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/image_test.go b/internal/service/sagemaker/image_test.go index c7c7ab0b643..64019727559 100644 --- a/internal/service/sagemaker/image_test.go +++ 
b/internal/service/sagemaker/image_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -203,7 +206,7 @@ func TestAccSageMakerImage_disappears(t *testing.T) { func testAccCheckImageDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_image" { @@ -240,7 +243,7 @@ func testAccCheckImageExists(ctx context.Context, n string, image *sagemaker.Des return fmt.Errorf("No SageMaker Image ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindImageByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/image_version.go b/internal/service/sagemaker/image_version.go index 19f8d28b602..affba11814e 100644 --- a/internal/service/sagemaker/image_version.go +++ b/internal/service/sagemaker/image_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -56,7 +59,7 @@ func ResourceImageVersion() *schema.Resource { func resourceImageVersionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("image_name").(string) input := &sagemaker.CreateImageVersionInput{ @@ -80,7 +83,7 @@ func resourceImageVersionCreate(ctx context.Context, d *schema.ResourceData, met func resourceImageVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) image, err := FindImageVersionByName(ctx, conn, d.Id()) if err != nil { @@ -104,7 +107,7 @@ func resourceImageVersionRead(ctx context.Context, d *schema.ResourceData, meta func resourceImageVersionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteImageVersionInput{ ImageName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/image_version_test.go b/internal/service/sagemaker/image_version_test.go index 3ca3752b70d..b763cac2c00 100644 --- a/internal/service/sagemaker/image_version_test.go +++ b/internal/service/sagemaker/image_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -115,7 +118,7 @@ func TestAccSageMakerImageVersion_Disappears_image(t *testing.T) { func testAccCheckImageVersionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_image_version" { @@ -152,7 +155,7 @@ func testAccCheckImageVersionExists(ctx context.Context, n string, image *sagema return fmt.Errorf("No SageMaker Image ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindImageVersionByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/model.go b/internal/service/sagemaker/model.go index ab8c9ff48ca..19f56d47315 100644 --- a/internal/service/sagemaker/model.go +++ b/internal/service/sagemaker/model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -59,7 +62,7 @@ func ResourceModel() *schema.Resource { }, "image": { Type: schema.TypeString, - Required: true, + Optional: true, ForceNew: true, ValidateFunc: validImage, }, @@ -106,6 +109,12 @@ func ResourceModel() *schema.Resource { ForceNew: true, ValidateFunc: validModelDataURL, }, + "model_package_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, }, }, }, @@ -164,7 +173,7 @@ func ResourceModel() *schema.Resource { }, "image": { Type: schema.TypeString, - Required: true, + Optional: true, ForceNew: true, ValidateFunc: validImage, }, @@ -211,6 +220,12 @@ func ResourceModel() *schema.Resource { ForceNew: true, ValidateFunc: validModelDataURL, }, + "model_package_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, }, }, }, @@ -246,7 +261,7 @@ func ResourceModel() *schema.Resource { func resourceModelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) var name string if v, ok := d.GetOk("name"); ok { @@ -257,7 +272,7 @@ func resourceModelCreate(ctx context.Context, d *schema.ResourceData, meta inter createOpts := &sagemaker.CreateModelInput{ ModelName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("primary_container"); ok { @@ -312,7 +327,7 @@ func expandVPCConfigRequest(l []interface{}) *sagemaker.VpcConfig { func resourceModelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) request := &sagemaker.DescribeModelInput{ ModelName: aws.String(d.Id()), @@ -376,7 +391,7 @@ func resourceModelUpdate(ctx 
context.Context, d *schema.ResourceData, meta inter func resourceModelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deleteOpts := &sagemaker.DeleteModelInput{ ModelName: aws.String(d.Id()), @@ -404,8 +419,10 @@ func resourceModelDelete(ctx context.Context, d *schema.ResourceData, meta inter } func expandContainer(m map[string]interface{}) *sagemaker.ContainerDefinition { - container := sagemaker.ContainerDefinition{ - Image: aws.String(m["image"].(string)), + container := sagemaker.ContainerDefinition{} + + if v, ok := m["image"]; ok && v.(string) != "" { + container.Image = aws.String(v.(string)) } if v, ok := m["mode"]; ok && v.(string) != "" { @@ -418,6 +435,9 @@ func expandContainer(m map[string]interface{}) *sagemaker.ContainerDefinition { if v, ok := m["model_data_url"]; ok && v.(string) != "" { container.ModelDataUrl = aws.String(v.(string)) } + if v, ok := m["model_package_name"]; ok && v.(string) != "" { + container.ModelPackageName = aws.String(v.(string)) + } if v, ok := m["environment"].(map[string]interface{}); ok && len(v) > 0 { container.Environment = flex.ExpandStringMap(v) } @@ -478,7 +498,9 @@ func flattenContainer(container *sagemaker.ContainerDefinition) []interface{} { cfg := make(map[string]interface{}) - cfg["image"] = aws.StringValue(container.Image) + if container.Image != nil { + cfg["image"] = aws.StringValue(container.Image) + } if container.Mode != nil { cfg["mode"] = aws.StringValue(container.Mode) @@ -490,6 +512,9 @@ func flattenContainer(container *sagemaker.ContainerDefinition) []interface{} { if container.ModelDataUrl != nil { cfg["model_data_url"] = aws.StringValue(container.ModelDataUrl) } + if container.ModelPackageName != nil { + cfg["model_package_name"] = aws.StringValue(container.ModelPackageName) + } if container.Environment != nil { cfg["environment"] = 
aws.StringValueMap(container.Environment) } diff --git a/internal/service/sagemaker/model_package_group.go b/internal/service/sagemaker/model_package_group.go index 988a0bbc2b6..2f1cc6e8697 100644 --- a/internal/service/sagemaker/model_package_group.go +++ b/internal/service/sagemaker/model_package_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -61,12 +64,12 @@ func ResourceModelPackageGroup() *schema.Resource { func resourceModelPackageGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("model_package_group_name").(string) input := &sagemaker.CreateModelPackageGroupInput{ ModelPackageGroupName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("model_package_group_description"); ok { @@ -89,7 +92,7 @@ func resourceModelPackageGroupCreate(ctx context.Context, d *schema.ResourceData func resourceModelPackageGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) mpg, err := FindModelPackageGroupByName(ctx, conn, d.Id()) if err != nil { @@ -119,7 +122,7 @@ func resourceModelPackageGroupUpdate(ctx context.Context, d *schema.ResourceData func resourceModelPackageGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteModelPackageGroupInput{ ModelPackageGroupName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/model_package_group_policy.go b/internal/service/sagemaker/model_package_group_policy.go index 
69462e490cc..9023e861ddc 100644 --- a/internal/service/sagemaker/model_package_group_policy.go +++ b/internal/service/sagemaker/model_package_group_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -51,7 +54,7 @@ func ResourceModelPackageGroupPolicy() *schema.Resource { func resourceModelPackageGroupPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("resource_policy").(string)) if err != nil { @@ -76,7 +79,7 @@ func resourceModelPackageGroupPolicyPut(ctx context.Context, d *schema.ResourceD func resourceModelPackageGroupPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) mpg, err := FindModelPackageGroupPolicyByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -103,7 +106,7 @@ func resourceModelPackageGroupPolicyRead(ctx context.Context, d *schema.Resource func resourceModelPackageGroupPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteModelPackageGroupPolicyInput{ ModelPackageGroupName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/model_package_group_policy_test.go b/internal/service/sagemaker/model_package_group_policy_test.go index 9c645cfe0d5..81d6c7dadfb 100644 --- a/internal/service/sagemaker/model_package_group_policy_test.go +++ b/internal/service/sagemaker/model_package_group_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -95,7 +98,7 @@ func TestAccSageMakerModelPackageGroupPolicy_Disappears_modelPackageGroup(t *tes func testAccCheckModelPackageGroupPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_model_package_group_policy" { @@ -127,7 +130,7 @@ func testAccCheckModelPackageGroupPolicyExists(ctx context.Context, n string, mp return fmt.Errorf("No SageMaker Model Package Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindModelPackageGroupPolicyByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/model_package_group_test.go b/internal/service/sagemaker/model_package_group_test.go index 4b82913b64c..42da9ef8298 100644 --- a/internal/service/sagemaker/model_package_group_test.go +++ b/internal/service/sagemaker/model_package_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -146,7 +149,7 @@ func TestAccSageMakerModelPackageGroup_disappears(t *testing.T) { func testAccCheckModelPackageGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_model_package_group" { @@ -183,7 +186,7 @@ func testAccCheckModelPackageGroupExists(ctx context.Context, n string, mpg *sag return fmt.Errorf("No SageMaker Model Package Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindModelPackageGroupByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/model_test.go b/internal/service/sagemaker/model_test.go index afcc3ccf031..1d41fb416d6 100644 --- a/internal/service/sagemaker/model_test.go +++ b/internal/service/sagemaker/model_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -262,6 +265,33 @@ func TestAccSageMakerModel_primaryContainerModeSingle(t *testing.T) { }) } +func TestAccSageMakerModel_primaryContainerModelPackageName(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_sagemaker_model.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, sagemaker.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckModelDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccModelConfig_primaryContainerPackageName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckModelExists(ctx, resourceName), + resource.TestCheckResourceAttrSet(resourceName, "primary_container.0.model_package_name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccSageMakerModel_containers(t *testing.T) { ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -401,7 +431,7 @@ func TestAccSageMakerModel_disappears(t *testing.T) { func testAccCheckModelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_model" { @@ -439,7 +469,7 @@ func testAccCheckModelExists(ctx context.Context, n string) resource.TestCheckFu return fmt.Errorf("No SageMaker model ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) DescribeModelOpts := &sagemaker.DescribeModelInput{ ModelName: aws.String(rs.Primary.ID), } @@ -449,7 +479,7 @@
func testAccCheckModelExists(ctx context.Context, n string) resource.TestCheckFu } } -func testAccModelConfigBase(rName string) string { +func testAccModelConfig_base(rName string) string { return fmt.Sprintf(` resource "aws_iam_role" "test" { name = %[1]q @@ -475,7 +505,7 @@ data "aws_sagemaker_prebuilt_ecr_image" "test" { } func testAccModelConfig_basic(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -488,7 +518,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_inferenceExecution(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -509,7 +539,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_tags1(rName, tagKey1, tagValue1 string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -526,7 +556,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -544,7 +574,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_primaryContainerDataURL(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), 
fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -605,11 +635,6 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = "model.tar.gz" @@ -618,8 +643,54 @@ resource "aws_s3_object" "test" { `, rName)) } +// lintignore:AWSAT003,AWSAT005 +func testAccModelConfig_primaryContainerPackageName(rName string) string { + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` +data "aws_region" "current" {} + +locals { + region_account_map = { + us-east-1 = "865070037744" + us-east-2 = "057799348421" + us-west-1 = "382657785993" + us-west-2 = "594846645681" + ca-central-1 = "470592106596" + eu-central-1 = "446921602837" + eu-west-1 = "985815980388" + eu-west-2 = "856760150666" + eu-west-3 = "843114510376" + eu-north-1 = "136758871317" + ap-southeast-1 = "192199979996" + ap-southeast-2 = "666831318237" + ap-northeast-2 = "745090734665" + ap-northeast-1 = "977537786026" + ap-south-1 = "077584701553" + sa-east-1 = "270155090741" + } + + account = local.region_account_map[data.aws_region.current.name] + + model_package_name = format( + "arn:aws:sagemaker:%%s:%%s:model-package/hf-textgeneration-gpt2-cpu-b73b575105d336b680d151277ebe4ee0", + data.aws_region.current.name, + local.account + ) +} + +resource "aws_sagemaker_model" "test" { + name = %[1]q + enable_network_isolation = true + execution_role_arn = aws_iam_role.test.arn + + primary_container { + model_package_name = local.model_package_name + } +} +`, rName)) +} + func testAccModelConfig_primaryContainerHostname(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource 
"aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -633,7 +704,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_primaryContainerImage(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -650,7 +721,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_primaryContainerEnvironment(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -667,7 +738,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_primaryContainerModeSingle(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -681,7 +752,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_containers(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -698,7 +769,7 @@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_networkIsolation(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -712,7 +783,7 
@@ resource "aws_sagemaker_model" "test" { } func testAccModelConfig_vpcBasic(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), acctest.ConfigVPCWithSubnets(rName, 2), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -723,50 +794,15 @@ resource "aws_sagemaker_model" "test" { } vpc_config { - subnets = [aws_subnet.test.id, aws_subnet.bar.id] - security_group_ids = [aws_security_group.test.id, aws_security_group.bar.id] - } -} - -resource "aws_vpc" "test" { - cidr_block = "10.1.0.0/16" - - tags = { - Name = %[1]q - } -} - -resource "aws_subnet" "test" { - cidr_block = "10.1.1.0/24" - availability_zone = data.aws_availability_zones.available.names[0] - vpc_id = aws_vpc.test.id - - tags = { - Name = %[1]q - } -} - -resource "aws_subnet" "bar" { - cidr_block = "10.1.2.0/24" - availability_zone = data.aws_availability_zones.available.names[0] - vpc_id = aws_vpc.test.id - - tags = { - Name = %[1]q + subnets = aws_subnet.test[*].id + security_group_ids = aws_security_group.test[*].id } } resource "aws_security_group" "test" { - name = "%[1]s-1" - vpc_id = aws_vpc.test.id - - tags = { - Name = %[1]q - } -} + count = 2 -resource "aws_security_group" "bar" { - name = "%[1]s-2" + name = "%[1]s-${count.index}" vpc_id = aws_vpc.test.id tags = { @@ -778,7 +814,7 @@ resource "aws_security_group" "bar" { // lintignore:AWSAT003,AWSAT005 func testAccModelConfig_primaryContainerPrivateDockerRegistry(rName string) string { - return acctest.ConfigCompose(testAccModelConfigBase(rName), acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` + return acctest.ConfigCompose(testAccModelConfig_base(rName), acctest.ConfigVPCWithSubnets(rName, 1), fmt.Sprintf(` resource "aws_sagemaker_model" "test" { name = %[1]q execution_role_arn = aws_iam_role.test.arn @@ -797,31 +833,13 @@ resource 
"aws_sagemaker_model" "test" { } vpc_config { - subnets = [aws_subnet.test.id] + subnets = aws_subnet.test[*].id security_group_ids = [aws_security_group.test.id] } } -resource "aws_vpc" "test" { - cidr_block = "10.1.0.0/16" - - tags = { - Name = %[1]q - } -} - -resource "aws_subnet" "test" { - cidr_block = "10.1.1.0/24" - availability_zone = data.aws_availability_zones.available.names[0] - vpc_id = aws_vpc.test.id - - tags = { - Name = %[1]q - } -} - resource "aws_security_group" "test" { - name = "%[1]s-1" + name = %[1]q vpc_id = aws_vpc.test.id tags = { diff --git a/internal/service/sagemaker/monitoring_schedule.go b/internal/service/sagemaker/monitoring_schedule.go index 2ae448d0d9f..92fabceef19 100644 --- a/internal/service/sagemaker/monitoring_schedule.go +++ b/internal/service/sagemaker/monitoring_schedule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -92,7 +95,7 @@ func ResourceMonitoringSchedule() *schema.Resource { func resourceMonitoringScheduleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) var name string if v, ok := d.GetOk("name"); ok { @@ -104,7 +107,7 @@ func resourceMonitoringScheduleCreate(ctx context.Context, d *schema.ResourceDat createOpts := &sagemaker.CreateMonitoringScheduleInput{ MonitoringScheduleConfig: expandMonitoringScheduleConfig(d.Get("monitoring_schedule_config").([]interface{})), MonitoringScheduleName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } _, err := conn.CreateMonitoringScheduleWithContext(ctx, createOpts) @@ -122,7 +125,7 @@ func resourceMonitoringScheduleCreate(ctx context.Context, d *schema.ResourceDat func resourceMonitoringScheduleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) monitoringSchedule, err := FindMonitoringScheduleByName(ctx, conn, d.Id()) @@ -148,7 +151,7 @@ func resourceMonitoringScheduleRead(ctx context.Context, d *schema.ResourceData, func resourceMonitoringScheduleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChanges("monitoring_schedule_config") { modifyOpts := &sagemaker.UpdateMonitoringScheduleInput{ @@ -173,7 +176,7 @@ func resourceMonitoringScheduleUpdate(ctx context.Context, d *schema.ResourceDat func resourceMonitoringScheduleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deleteOpts := &sagemaker.DeleteMonitoringScheduleInput{ MonitoringScheduleName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/monitoring_schedule_test.go b/internal/service/sagemaker/monitoring_schedule_test.go index a3f3dbe8e8c..da1d496941a 100644 --- a/internal/service/sagemaker/monitoring_schedule_test.go +++ b/internal/service/sagemaker/monitoring_schedule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -165,7 +168,7 @@ func TestAccSageMakerMonitoringSchedule_disappears(t *testing.T) { func testAccCheckMonitoringScheduleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_monitoring_schedule" { @@ -199,7 +202,7 @@ func testAccCheckMonitoringScheduleExists(ctx context.Context, n string) resourc return fmt.Errorf("no SageMaker Monitoring Schedule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) _, err := tfsagemaker.FindMonitoringScheduleByName(ctx, conn, rs.Primary.ID) return err diff --git a/internal/service/sagemaker/notebook_instance.go b/internal/service/sagemaker/notebook_instance.go index 6f49b63057f..7c3ea6eb3d0 100644 --- a/internal/service/sagemaker/notebook_instance.go +++ b/internal/service/sagemaker/notebook_instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -159,7 +162,7 @@ func ResourceNotebookInstance() *schema.Resource { func resourceNotebookInstanceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("name").(string) input := &sagemaker.CreateNotebookInstanceInput{ @@ -168,7 +171,7 @@ func resourceNotebookInstanceCreate(ctx context.Context, d *schema.ResourceData, NotebookInstanceName: aws.String(name), RoleArn: aws.String(d.Get("role_arn").(string)), SecurityGroupIds: flex.ExpandStringSet(d.Get("security_groups").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("accelerator_types"); ok && v.(*schema.Set).Len() > 0 { @@ -229,7 +232,7 @@ func resourceNotebookInstanceCreate(ctx context.Context, d *schema.ResourceData, func resourceNotebookInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) notebookInstance, err := FindNotebookInstanceByName(ctx, conn, d.Id()) @@ -270,7 +273,7 @@ func resourceNotebookInstanceRead(ctx context.Context, d *schema.ResourceData, m func resourceNotebookInstanceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &sagemaker.UpdateNotebookInstanceInput{ @@ -365,7 +368,7 @@ func resourceNotebookInstanceUpdate(ctx context.Context, d *schema.ResourceData, func resourceNotebookInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) notebook, err := FindNotebookInstanceByName(ctx, conn, d.Id()) diff --git a/internal/service/sagemaker/notebook_instance_lifecycle_configuration.go b/internal/service/sagemaker/notebook_instance_lifecycle_configuration.go index 1ba8ea1e3db..a1290526072 100644 --- a/internal/service/sagemaker/notebook_instance_lifecycle_configuration.go +++ b/internal/service/sagemaker/notebook_instance_lifecycle_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -56,7 +59,7 @@ func ResourceNotebookInstanceLifeCycleConfiguration() *schema.Resource { func resourceNotebookInstanceLifeCycleConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) var name string if v, ok := d.GetOk("name"); ok { @@ -93,7 +96,7 @@ func resourceNotebookInstanceLifeCycleConfigurationCreate(ctx context.Context, d func resourceNotebookInstanceLifeCycleConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) request := &sagemaker.DescribeNotebookInstanceLifecycleConfigInput{ NotebookInstanceLifecycleConfigName: aws.String(d.Id()), @@ -134,7 +137,7 @@ func resourceNotebookInstanceLifeCycleConfigurationRead(ctx context.Context, d * func resourceNotebookInstanceLifeCycleConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) updateOpts := &sagemaker.UpdateNotebookInstanceLifecycleConfigInput{ 
NotebookInstanceLifecycleConfigName: aws.String(d.Get("name").(string)), @@ -159,7 +162,7 @@ func resourceNotebookInstanceLifeCycleConfigurationUpdate(ctx context.Context, d func resourceNotebookInstanceLifeCycleConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) deleteOpts := &sagemaker.DeleteNotebookInstanceLifecycleConfigInput{ NotebookInstanceLifecycleConfigName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/notebook_instance_lifecycle_configuration_test.go b/internal/service/sagemaker/notebook_instance_lifecycle_configuration_test.go index 2d6a02760b1..c6e34a64acf 100644 --- a/internal/service/sagemaker/notebook_instance_lifecycle_configuration_test.go +++ b/internal/service/sagemaker/notebook_instance_lifecycle_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -97,7 +100,7 @@ func testAccCheckNotebookInstanceLifecycleConfigurationExists(ctx context.Contex return fmt.Errorf("no ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := conn.DescribeNotebookInstanceLifecycleConfigWithContext(ctx, &sagemaker.DescribeNotebookInstanceLifecycleConfigInput{ NotebookInstanceLifecycleConfigName: aws.String(rs.Primary.ID), }) @@ -123,7 +126,7 @@ func testAccCheckNotebookInstanceLifecycleConfigurationDestroy(ctx context.Conte continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) lifecycleConfig, err := conn.DescribeNotebookInstanceLifecycleConfigWithContext(ctx, &sagemaker.DescribeNotebookInstanceLifecycleConfigInput{ NotebookInstanceLifecycleConfigName: aws.String(rs.Primary.ID), 
}) diff --git a/internal/service/sagemaker/notebook_instance_test.go b/internal/service/sagemaker/notebook_instance_test.go index 2942a3370e5..944ff81e62c 100644 --- a/internal/service/sagemaker/notebook_instance_test.go +++ b/internal/service/sagemaker/notebook_instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -670,7 +673,7 @@ func TestAccSageMakerNotebookInstance_acceleratorTypes(t *testing.T) { func testAccCheckNotebookInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_notebook_instance" { @@ -705,7 +708,7 @@ func testAccCheckNotebookInstanceExists(ctx context.Context, n string, v *sagema return fmt.Errorf("No SageMaker Notebook Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindNotebookInstanceByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sagemaker/prebuilt_ecr_image_data_source.go b/internal/service/sagemaker/prebuilt_ecr_image_data_source.go index eeba9a2d2a0..ff79088f271 100644 --- a/internal/service/sagemaker/prebuilt_ecr_image_data_source.go +++ b/internal/service/sagemaker/prebuilt_ecr_image_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/prebuilt_ecr_image_data_source_test.go b/internal/service/sagemaker/prebuilt_ecr_image_data_source_test.go index affc70d65b7..ae6683d0c36 100644 --- a/internal/service/sagemaker/prebuilt_ecr_image_data_source_test.go +++ b/internal/service/sagemaker/prebuilt_ecr_image_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( diff --git a/internal/service/sagemaker/project.go b/internal/service/sagemaker/project.go index 6061d9a4a3d..fa99101afc7 100644 --- a/internal/service/sagemaker/project.go +++ b/internal/service/sagemaker/project.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -106,13 +109,13 @@ func ResourceProject() *schema.Resource { func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("project_name").(string) input := &sagemaker.CreateProjectInput{ ProjectName: aws.String(name), ServiceCatalogProvisioningDetails: expandProjectServiceCatalogProvisioningDetails(d.Get("service_catalog_provisioning_details").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("project_description"); ok { @@ -137,7 +140,7 @@ func resourceProjectCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceProjectRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) project, err := FindProjectByName(ctx, conn, d.Id()) if err != nil { @@ -164,7 +167,7 @@ func resourceProjectRead(ctx context.Context, d 
*schema.ResourceData, meta inter func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChangesExcept("tags_all", "tags") { input := &sagemaker.UpdateProjectInput{ @@ -195,7 +198,7 @@ func resourceProjectUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceProjectDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteProjectInput{ ProjectName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/project_test.go b/internal/service/sagemaker/project_test.go index 2f80918d866..36aa9394821 100644 --- a/internal/service/sagemaker/project_test.go +++ b/internal/service/sagemaker/project_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -169,7 +172,7 @@ func TestAccSageMakerProject_disappears(t *testing.T) { func testAccCheckProjectDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_project" { @@ -206,7 +209,7 @@ func testAccCheckProjectExists(ctx context.Context, n string, mpg *sagemaker.Des return fmt.Errorf("No sagemaker Project ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) resp, err := tfsagemaker.FindProjectByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/sagemaker/sagemaker_test.go b/internal/service/sagemaker/sagemaker_test.go index e385ad6313c..23edd438d19 100644 --- a/internal/service/sagemaker/sagemaker_test.go +++ b/internal/service/sagemaker/sagemaker_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( diff --git a/internal/service/sagemaker/service_package_gen.go b/internal/service/sagemaker/service_package_gen.go index 88ad731aba1..6ea2603d7d3 100644 --- a/internal/service/sagemaker/service_package_gen.go +++ b/internal/service/sagemaker/service_package_gen.go @@ -5,6 +5,10 @@ package sagemaker import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + sagemaker_sdkv1 "github.com/aws/aws-sdk-go/service/sagemaker" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -228,4 +232,13 @@ func (p *servicePackage) ServicePackageName() string { return names.SageMaker } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*sagemaker_sdkv1.SageMaker, error) { + sess := config["session"].(*session_sdkv1.Session) + + return sagemaker_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/sagemaker/servicecatalog_portfolio_status.go b/internal/service/sagemaker/servicecatalog_portfolio_status.go index b38c22de908..e8b35ea7112 100644 --- a/internal/service/sagemaker/servicecatalog_portfolio_status.go +++ b/internal/service/sagemaker/servicecatalog_portfolio_status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -34,7 +37,7 @@ func ResourceServicecatalogPortfolioStatus() *schema.Resource { func resourceServicecatalogPortfolioStatusPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) status := d.Get("status").(string) var err error @@ -55,7 +58,7 @@ func resourceServicecatalogPortfolioStatusPut(ctx context.Context, d *schema.Res func resourceServicecatalogPortfolioStatusRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) resp, err := conn.GetSagemakerServicecatalogPortfolioStatusWithContext(ctx, &sagemaker.GetSagemakerServicecatalogPortfolioStatusInput{}) if err != nil { diff --git a/internal/service/sagemaker/servicecatalog_portfolio_status_test.go b/internal/service/sagemaker/servicecatalog_portfolio_status_test.go index 80752ac8725..fb6fb60c60d 100644 --- a/internal/service/sagemaker/servicecatalog_portfolio_status_test.go +++ b/internal/service/sagemaker/servicecatalog_portfolio_status_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -64,7 +67,7 @@ func testAccCheckServicecatalogPortfolioStatusExistsConfig(ctx context.Context, return fmt.Errorf("No SageMaker Studio Lifecycle Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := conn.GetSagemakerServicecatalogPortfolioStatusWithContext(ctx, &sagemaker.GetSagemakerServicecatalogPortfolioStatusInput{}) if err != nil { diff --git a/internal/service/sagemaker/space.go b/internal/service/sagemaker/space.go index 939ca4fe035..d3815fd27cc 100644 --- a/internal/service/sagemaker/space.go +++ b/internal/service/sagemaker/space.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -202,14 +205,14 @@ func ResourceSpace() *schema.Resource { func resourceSpaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) domainId := d.Get("domain_id").(string) spaceName := d.Get("space_name").(string) input := &sagemaker.CreateSpaceInput{ SpaceName: aws.String(spaceName), DomainId: aws.String(domainId), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("space_settings"); ok && len(v.([]interface{})) > 0 { @@ -233,7 +236,7 @@ func resourceSpaceCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceSpaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) domainID, name, err := decodeSpaceName(d.Id()) if err != nil { @@ -265,7 +268,7 @@ func resourceSpaceRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceSpaceUpdate(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChange("space_settings") { domainID := d.Get("domain_id").(string) @@ -293,7 +296,7 @@ func resourceSpaceUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceSpaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("space_name").(string) domainID := d.Get("domain_id").(string) diff --git a/internal/service/sagemaker/space_test.go b/internal/service/sagemaker/space_test.go index 2fc18067299..a3378d0ca2c 100644 --- a/internal/service/sagemaker/space_test.go +++ b/internal/service/sagemaker/space_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -256,7 +259,7 @@ func testAccSpace_disappears(t *testing.T) { func testAccCheckSpaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_space" { @@ -297,7 +300,7 @@ func testAccCheckSpaceExists(ctx context.Context, n string, space *sagemaker.Des return fmt.Errorf("No sagemaker domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) domainID := rs.Primary.Attributes["domain_id"] spaceName := rs.Primary.Attributes["space_name"] diff --git a/internal/service/sagemaker/status.go b/internal/service/sagemaker/status.go index 9e5fc4aa81e..b7645548e71 100644 ---
a/internal/service/sagemaker/status.go +++ b/internal/service/sagemaker/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/studio_lifecycle_config.go b/internal/service/sagemaker/studio_lifecycle_config.go index 77b9933e17b..f0b97abd44a 100644 --- a/internal/service/sagemaker/studio_lifecycle_config.go +++ b/internal/service/sagemaker/studio_lifecycle_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -67,14 +70,14 @@ func ResourceStudioLifecycleConfig() *schema.Resource { func resourceStudioLifecycleConfigCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("studio_lifecycle_config_name").(string) input := &sagemaker.CreateStudioLifecycleConfigInput{ StudioLifecycleConfigName: aws.String(name), StudioLifecycleConfigAppType: aws.String(d.Get("studio_lifecycle_config_app_type").(string)), StudioLifecycleConfigContent: aws.String(d.Get("studio_lifecycle_config_content").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating SageMaker Studio Lifecycle Config : %s", input) @@ -91,7 +94,7 @@ func resourceStudioLifecycleConfigCreate(ctx context.Context, d *schema.Resource func resourceStudioLifecycleConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) image, err := FindStudioLifecycleConfigByName(ctx, conn, d.Id()) @@ -124,7 +127,7 @@ func resourceStudioLifecycleConfigUpdate(ctx context.Context, d *schema.Resource func resourceStudioLifecycleConfigDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.DeleteStudioLifecycleConfigInput{ StudioLifecycleConfigName: aws.String(d.Id()), diff --git a/internal/service/sagemaker/studio_lifecycle_config_test.go b/internal/service/sagemaker/studio_lifecycle_config_test.go index d6bbf53363f..65b587de30d 100644 --- a/internal/service/sagemaker/studio_lifecycle_config_test.go +++ b/internal/service/sagemaker/studio_lifecycle_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -120,7 +123,7 @@ func TestAccSageMakerStudioLifecycleConfig_disappears(t *testing.T) { func testAccCheckStudioLifecycleDestroyConfig(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_studio_lifecycle_config" { @@ -155,7 +158,7 @@ func testAccCheckStudioLifecycleExistsConfig(ctx context.Context, n string, conf return fmt.Errorf("No SageMaker Studio Lifecycle Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindStudioLifecycleConfigByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sagemaker/sweep.go b/internal/service/sagemaker/sweep.go index c2775fe1763..515f3c12c8e 100644 --- a/internal/service/sagemaker/sweep.go +++ b/internal/service/sagemaker/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/sagemaker" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -153,11 +155,11 @@ func init() { func sweepAppImagesConfig(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -193,7 +195,7 @@ func sweepAppImagesConfig(region string) error { input.NextToken = output.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker App Image Configs: %w", err)) } @@ -202,11 +204,11 @@ func sweepAppImagesConfig(region string) error { func sweepSpaces(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -234,7 +236,7 @@ func sweepSpaces(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Spaces: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != 
nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Spaces: %w", err)) } @@ -243,11 +245,11 @@ func sweepSpaces(region string) error { func sweepApps(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -282,7 +284,7 @@ func sweepApps(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Apps: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Apps: %w", err)) } @@ -291,11 +293,11 @@ func sweepApps(region string) error { func sweepCodeRepositories(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -320,7 +322,7 @@ func sweepCodeRepositories(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Code Repositories: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Code Repositories: %w", err)) } @@ -329,11 +331,11 @@ func sweepCodeRepositories(region string) error { func 
sweepDeviceFleets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -360,7 +362,7 @@ func sweepDeviceFleets(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Device Fleets: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Device Fleets: %w", err)) } @@ -369,11 +371,11 @@ func sweepDeviceFleets(region string) error { func sweepDomains(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -400,7 +402,7 @@ func sweepDomains(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Domains: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping API Gateway VPC Links: %w", err)) } @@ -409,11 +411,11 @@ func sweepDomains(region string) error { func sweepEndpointConfigurations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -441,7 +443,7 @@ func sweepEndpointConfigurations(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Endpoint Configs: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Endpoint Configs: %w", err)) } @@ -450,11 +452,11 @@ func sweepEndpointConfigurations(region string) error { func sweepEndpoints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) req := &sagemaker.ListEndpointsInput{ NameContains: aws.String(sweep.ResourcePrefix), @@ -483,11 +485,11 @@ func sweepEndpoints(region string) error { func sweepFeatureGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -512,7 +514,7 @@ func sweepFeatureGroups(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Feature Groups: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := 
sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Feature Groups: %w", err)) } @@ -521,11 +523,11 @@ func sweepFeatureGroups(region string) error { func sweepFlowDefinitions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -551,7 +553,7 @@ func sweepFlowDefinitions(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Flow Definitions: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Flow Definitions: %w", err)) } @@ -560,11 +562,11 @@ func sweepFlowDefinitions(region string) error { func sweepHumanTaskUIs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -590,7 +592,7 @@ func sweepHumanTaskUIs(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker HumanTaskUis: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker 
HumanTaskUis: %w", err)) } @@ -599,11 +601,11 @@ func sweepHumanTaskUIs(region string) error { func sweepImages(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -628,7 +630,7 @@ func sweepImages(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Images: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Images: %w", err)) } @@ -637,11 +639,11 @@ func sweepImages(region string) error { func sweepModelPackageGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -666,7 +668,7 @@ func sweepModelPackageGroups(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Model Package Groups: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Model Package Groups: %w", err)) } @@ -675,11 +677,11 @@ func sweepModelPackageGroups(region string) error { func sweepModels(region string) error { ctx := 
sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -704,7 +706,7 @@ func sweepModels(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Models: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Models: %w", err)) } @@ -713,11 +715,11 @@ func sweepModels(region string) error { func sweepNotebookInstanceLifecycleConfiguration(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -747,7 +749,7 @@ func sweepNotebookInstanceLifecycleConfiguration(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Notebook Instance Lifecycle Configurations: %s", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Notebook Instance Lifecycle Configurations: %w", err)) } @@ -756,11 +758,11 @@ func sweepNotebookInstanceLifecycleConfiguration(region string) error { func sweepNotebookInstances(region string) error { ctx := sweep.Context(region) - client, err := 
sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -787,7 +789,7 @@ func sweepNotebookInstances(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Notebook Instances: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Notebook Instances: %w", err)) } @@ -796,11 +798,11 @@ func sweepNotebookInstances(region string) error { func sweepStudioLifecyclesConfig(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -826,7 +828,7 @@ func sweepStudioLifecyclesConfig(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Studio Lifecycle Configs: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Studio Lifecycle Configs: %w", err)) } @@ -835,11 +837,11 @@ func sweepStudioLifecyclesConfig(region string) error { func sweepUserProfiles(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err :=
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -867,7 +869,7 @@ func sweepUserProfiles(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker User Profiles: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker User Profiles: %w", err)) } @@ -876,11 +878,11 @@ func sweepUserProfiles(region string) error { func sweepWorkforces(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -906,7 +908,7 @@ func sweepWorkforces(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Workforces: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Workforces: %w", err)) } @@ -915,11 +917,11 @@ func sweepWorkforces(region string) error { func sweepWorkteams(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := 
client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -945,7 +947,7 @@ func sweepWorkteams(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Workteams: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Workteams: %w", err)) } @@ -954,11 +956,11 @@ func sweepWorkteams(region string) error { func sweepProjects(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SageMakerConn() + conn := client.SageMakerConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -985,7 +987,7 @@ func sweepProjects(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("retrieving SageMaker Projects: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping SageMaker Projects: %w", err)) } diff --git a/internal/service/sagemaker/tags_gen.go b/internal/service/sagemaker/tags_gen.go index f8ee073814c..3f2461046f7 100644 --- a/internal/service/sagemaker/tags_gen.go +++ b/internal/service/sagemaker/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists sagemaker service tags. +// listTags lists sagemaker service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn sagemakeriface.SageMakerAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn sagemakeriface.SageMakerAPI, identifier string) (tftags.KeyValueTags, error) { input := &sagemaker.ListTagsInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn sagemakeriface.SageMakerAPI, identifier // ListTags lists sagemaker service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SageMakerConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SageMakerConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*sagemaker.Tag) tftags.KeyValueTag return tftags.New(ctx, m) } -// GetTagsIn returns sagemaker service tags from Context. +// getTagsIn returns sagemaker service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*sagemaker.Tag { +func getTagsIn(ctx context.Context) []*sagemaker.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*sagemaker.Tag { return nil } -// SetTagsOut sets sagemaker service tags in Context. -func SetTagsOut(ctx context.Context, tags []*sagemaker.Tag) { +// setTagsOut sets sagemaker service tags in Context. +func setTagsOut(ctx context.Context, tags []*sagemaker.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates sagemaker service tags. +// updateTags updates sagemaker service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn sagemakeriface.SageMakerAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn sagemakeriface.SageMakerAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn sagemakeriface.SageMakerAPI, identifie // UpdateTags updates sagemaker service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SageMakerConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SageMakerConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/sagemaker/test-fixtures/sagemaker-human-task-ui-tmpl.html b/internal/service/sagemaker/test-fixtures/sagemaker-human-task-ui-tmpl.html index f8b547b8a8f..eeb044524e0 100644 --- a/internal/service/sagemaker/test-fixtures/sagemaker-human-task-ui-tmpl.html +++ b/internal/service/sagemaker/test-fixtures/sagemaker-human-task-ui-tmpl.html @@ -1,3 +1,8 @@ + + diff --git a/internal/service/sagemaker/user_profile.go b/internal/service/sagemaker/user_profile.go index e1f40d54bb8..09d401b2a41 100644 --- a/internal/service/sagemaker/user_profile.go +++ b/internal/service/sagemaker/user_profile.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -412,12 +415,12 @@ func ResourceUserProfile() *schema.Resource { func resourceUserProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.CreateUserProfileInput{ UserProfileName: aws.String(d.Get("user_profile_name").(string)), DomainId: aws.String(d.Get("domain_id").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("user_settings"); ok { @@ -455,7 +458,7 @@ func resourceUserProfileCreate(ctx context.Context, d *schema.ResourceData, meta func resourceUserProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) domainID, userProfileName, err := decodeUserProfileName(d.Id()) if err != nil { @@ -489,7 +492,7 @@ func resourceUserProfileRead(ctx context.Context, d *schema.ResourceData, meta i func resourceUserProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChange("user_settings") { domainID := d.Get("domain_id").(string) @@ -517,7 +520,7 @@ func resourceUserProfileUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceUserProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) userProfileName := d.Get("user_profile_name").(string) domainID := d.Get("domain_id").(string) diff --git a/internal/service/sagemaker/user_profile_test.go 
b/internal/service/sagemaker/user_profile_test.go index 59d8090e0f8..64ddfe58c4c 100644 --- a/internal/service/sagemaker/user_profile_test.go +++ b/internal/service/sagemaker/user_profile_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -330,7 +333,7 @@ func testAccUserProfile_disappears(t *testing.T) { func testAccCheckUserProfileDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_user_profile" { @@ -371,7 +374,7 @@ func testAccCheckUserProfileExists(ctx context.Context, n string, userProfile *s return fmt.Errorf("No SageMaker domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) domainID := rs.Primary.Attributes["domain_id"] userProfileName := rs.Primary.Attributes["user_profile_name"] diff --git a/internal/service/sagemaker/validate.go b/internal/service/sagemaker/validate.go index 86a1184fafe..e3383ea5eab 100644 --- a/internal/service/sagemaker/validate.go +++ b/internal/service/sagemaker/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/validate_test.go b/internal/service/sagemaker/validate_test.go index 84270e8289e..fe00065929d 100644 --- a/internal/service/sagemaker/validate_test.go +++ b/internal/service/sagemaker/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/wait.go b/internal/service/sagemaker/wait.go index d011e142af1..3cceea30ff7 100644 --- a/internal/service/sagemaker/wait.go +++ b/internal/service/sagemaker/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( diff --git a/internal/service/sagemaker/workforce.go b/internal/service/sagemaker/workforce.go index dddeb476fc7..9ee56c5790b 100644 --- a/internal/service/sagemaker/workforce.go +++ b/internal/service/sagemaker/workforce.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -184,7 +187,7 @@ func ResourceWorkforce() *schema.Resource { func resourceWorkforceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("workforce_name").(string) input := &sagemaker.CreateWorkforceInput{ @@ -224,7 +227,7 @@ func resourceWorkforceCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkforceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) workforce, err := FindWorkforceByName(ctx, conn, d.Id()) @@ -265,7 +268,7 @@ func resourceWorkforceRead(ctx context.Context, d *schema.ResourceData, meta int func resourceWorkforceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) input := &sagemaker.UpdateWorkforceInput{ WorkforceName: aws.String(d.Id()), @@ -298,7 +301,7 @@ func resourceWorkforceUpdate(ctx 
context.Context, d *schema.ResourceData, meta i func resourceWorkforceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) log.Printf("[DEBUG] Deleting SageMaker Workforce: %s", d.Id()) _, err := conn.DeleteWorkforceWithContext(ctx, &sagemaker.DeleteWorkforceInput{ diff --git a/internal/service/sagemaker/workforce_test.go b/internal/service/sagemaker/workforce_test.go index fa889410afe..853ecfaa429 100644 --- a/internal/service/sagemaker/workforce_test.go +++ b/internal/service/sagemaker/workforce_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -230,7 +233,7 @@ func testAccWorkforce_disappears(t *testing.T) { func testAccCheckWorkforceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_workforce" { @@ -265,7 +268,7 @@ func testAccCheckWorkforceExists(ctx context.Context, n string, workforce *sagem return fmt.Errorf("No SageMaker Workforce ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindWorkforceByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sagemaker/workteam.go b/internal/service/sagemaker/workteam.go index a78858ec862..d1734e264ae 100644 --- a/internal/service/sagemaker/workteam.go +++ b/internal/service/sagemaker/workteam.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sagemaker import ( @@ -136,7 +139,7 @@ func ResourceWorkteam() *schema.Resource { func resourceWorkteamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) name := d.Get("workteam_name").(string) input := &sagemaker.CreateWorkteamInput{ @@ -144,7 +147,7 @@ func resourceWorkteamCreate(ctx context.Context, d *schema.ResourceData, meta in WorkforceName: aws.String(d.Get("workforce_name").(string)), Description: aws.String(d.Get("description").(string)), MemberDefinitions: expandWorkteamMemberDefinition(d.Get("member_definition").([]interface{})), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("notification_configuration"); ok { @@ -167,7 +170,7 @@ func resourceWorkteamCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceWorkteamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) workteam, err := FindWorkteamByName(ctx, conn, d.Id()) @@ -200,7 +203,7 @@ func resourceWorkteamRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceWorkteamUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := meta.(*conns.AWSClient).SageMakerConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &sagemaker.UpdateWorkteamInput{ @@ -229,7 +232,7 @@ func resourceWorkteamUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceWorkteamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SageMakerConn() + conn := 
meta.(*conns.AWSClient).SageMakerConn(ctx) log.Printf("[DEBUG] Deleting SageMaker Workteam: %s", d.Id()) _, err := conn.DeleteWorkteamWithContext(ctx, &sagemaker.DeleteWorkteamInput{ diff --git a/internal/service/sagemaker/workteam_test.go b/internal/service/sagemaker/workteam_test.go index 636dd08ef81..e37a20fe051 100644 --- a/internal/service/sagemaker/workteam_test.go +++ b/internal/service/sagemaker/workteam_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sagemaker_test import ( @@ -273,7 +276,7 @@ func testAccWorkteam_disappears(t *testing.T) { func testAccCheckWorkteamDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sagemaker_workteam" { @@ -308,7 +311,7 @@ func testAccCheckWorkteamExists(ctx context.Context, n string, workteam *sagemak return fmt.Errorf("No SageMaker Workteam ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SageMakerConn(ctx) output, err := tfsagemaker.FindWorkteamByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/scheduler/exports_test.go b/internal/service/scheduler/exports_test.go index 56dacaccb30..6e354af2834 100644 --- a/internal/service/scheduler/exports_test.go +++ b/internal/service/scheduler/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package scheduler // Exports for use in tests only. diff --git a/internal/service/scheduler/find.go b/internal/service/scheduler/find.go index ef52c586d08..cdd279b93bf 100644 --- a/internal/service/scheduler/find.go +++ b/internal/service/scheduler/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( diff --git a/internal/service/scheduler/flex.go b/internal/service/scheduler/flex.go index e98f3038af6..556e7e92a72 100644 --- a/internal/service/scheduler/flex.go +++ b/internal/service/scheduler/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( diff --git a/internal/service/scheduler/generate.go b/internal/service/scheduler/generate.go index 1ead4ff033a..4507f6aab30 100644 --- a/internal/service/scheduler/generate.go +++ b/internal/service/scheduler/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -UpdateTags -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package scheduler diff --git a/internal/service/scheduler/retry.go b/internal/service/scheduler/retry.go index aea870cacea..90d328eb318 100644 --- a/internal/service/scheduler/retry.go +++ b/internal/service/scheduler/retry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( diff --git a/internal/service/scheduler/schedule.go b/internal/service/scheduler/schedule.go index 161805d4880..5d6fc91587e 100644 --- a/internal/service/scheduler/schedule.go +++ b/internal/service/scheduler/schedule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( @@ -429,7 +432,7 @@ const ( ) func resourceScheduleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) @@ -507,7 +510,7 @@ func resourceScheduleCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceScheduleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.scheduler-in-func-name - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) groupName, scheduleName, err := ResourceScheduleParseID(d.Id()) @@ -563,7 +566,7 @@ func resourceScheduleRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceScheduleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) in := &scheduler.UpdateScheduleInput{ FlexibleTimeWindow: expandFlexibleTimeWindow(d.Get("flexible_time_window").([]interface{})[0].(map[string]interface{})), @@ -613,7 +616,7 @@ func resourceScheduleUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceScheduleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) groupName, scheduleName, err := ResourceScheduleParseID(d.Id()) diff --git a/internal/service/scheduler/schedule_group.go b/internal/service/scheduler/schedule_group.go index d4e9085268d..421d28f9266 100644 --- a/internal/service/scheduler/schedule_group.go +++ b/internal/service/scheduler/schedule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( @@ -92,13 +95,13 @@ const ( ) func resourceScheduleGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) in := &scheduler.CreateScheduleGroupInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateScheduleGroup(ctx, in) @@ -120,7 +123,7 @@ func resourceScheduleGroupCreate(ctx context.Context, d *schema.ResourceData, me } func resourceScheduleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) out, err := findScheduleGroupByName(ctx, conn, d.Id()) @@ -150,7 +153,7 @@ func resourceScheduleGroupUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceScheduleGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchedulerClient() + conn := meta.(*conns.AWSClient).SchedulerClient(ctx) log.Printf("[INFO] Deleting EventBridge Scheduler ScheduleGroup %s", d.Id()) diff --git a/internal/service/scheduler/schedule_group_test.go b/internal/service/scheduler/schedule_group_test.go index 2fdf14c939c..fd469fb23b4 100644 --- a/internal/service/scheduler/schedule_group_test.go +++ b/internal/service/scheduler/schedule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package scheduler_test import ( @@ -225,7 +228,7 @@ func TestAccSchedulerScheduleGroup_tags(t *testing.T) { func testAccCheckScheduleGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SchedulerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchedulerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_scheduler_schedule_group" { @@ -261,7 +264,7 @@ func testAccCheckScheduleGroupExists(ctx context.Context, name string, scheduleg return create.Error(names.Scheduler, create.ErrActionCheckingExistence, tfscheduler.ResNameScheduleGroup, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SchedulerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchedulerClient(ctx) resp, err := conn.GetScheduleGroup(ctx, &scheduler.GetScheduleGroupInput{ Name: aws.String(rs.Primary.ID), @@ -278,7 +281,7 @@ func testAccCheckScheduleGroupExists(ctx context.Context, name string, scheduleg } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SchedulerClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchedulerClient(ctx) input := &scheduler.ListScheduleGroupsInput{} _, err := conn.ListScheduleGroups(ctx, input) diff --git a/internal/service/scheduler/schedule_test.go b/internal/service/scheduler/schedule_test.go index 895d4e8b884..808c9d780a3 100644 --- a/internal/service/scheduler/schedule_test.go +++ b/internal/service/scheduler/schedule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package scheduler_test import ( @@ -1585,7 +1588,7 @@ func TestAccSchedulerSchedule_targetSQSParameters(t *testing.T) { func testAccCheckScheduleDestroy(ctx context.Context, t *testing.T) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.ProviderMeta(t).SchedulerClient() + conn := acctest.ProviderMeta(t).SchedulerClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_scheduler_schedule" { @@ -1632,7 +1635,7 @@ func testAccCheckScheduleExists(ctx context.Context, t *testing.T, name string, return err } - conn := acctest.ProviderMeta(t).SchedulerClient() + conn := acctest.ProviderMeta(t).SchedulerClient(ctx) output, err := tfscheduler.FindScheduleByTwoPartKey(ctx, conn, groupName, scheduleName) diff --git a/internal/service/scheduler/service_package_gen.go b/internal/service/scheduler/service_package_gen.go index b7ea6809d21..b1f84560464 100644 --- a/internal/service/scheduler/service_package_gen.go +++ b/internal/service/scheduler/service_package_gen.go @@ -5,6 +5,9 @@ package scheduler import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + scheduler_sdkv2 "github.com/aws/aws-sdk-go-v2/service/scheduler" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -44,4 +47,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Scheduler } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*scheduler_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return scheduler_sdkv2.NewFromConfig(cfg, func(o *scheduler_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = scheduler_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/scheduler/status.go b/internal/service/scheduler/status.go index 88aec603ce8..d8b576064d2 100644 --- a/internal/service/scheduler/status.go +++ b/internal/service/scheduler/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( diff --git a/internal/service/scheduler/sweep.go b/internal/service/scheduler/sweep.go index 45adc7b8826..eb48552bf69 100644 --- a/internal/service/scheduler/sweep.go +++ b/internal/service/scheduler/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/scheduler" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -32,13 +34,13 @@ func init() { func sweepScheduleGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SchedulerClient() + conn := client.SchedulerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -68,7 +70,7 @@ func sweepScheduleGroups(region string) error { } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Schedule Group for %s: %w", region, err)) } @@ -82,13 +84,13 @@ func sweepScheduleGroups(region string) error { func sweepSchedules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SchedulerClient() + conn := client.SchedulerClient(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -114,7 +116,7 @@ func sweepSchedules(region string) error { } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Schedule for %s: %w", region, err)) } diff --git a/internal/service/scheduler/tags_gen.go 
b/internal/service/scheduler/tags_gen.go index 1575755dd55..071f9aa5a03 100644 --- a/internal/service/scheduler/tags_gen.go +++ b/internal/service/scheduler/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists scheduler service tags. +// listTags lists scheduler service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *scheduler.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *scheduler.Client, identifier string) (tftags.KeyValueTags, error) { input := &scheduler.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *scheduler.Client, identifier string) (t // ListTags lists scheduler service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SchedulerClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SchedulerClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns scheduler service tags from Context. +// getTagsIn returns scheduler service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets scheduler service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets scheduler service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates scheduler service tags. +// updateTags updates scheduler service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *scheduler.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *scheduler.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *scheduler.Client, identifier string, // UpdateTags updates scheduler service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SchedulerClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SchedulerClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/scheduler/wait.go b/internal/service/scheduler/wait.go index 66b167b95ae..9f03fb5687c 100644 --- a/internal/service/scheduler/wait.go +++ b/internal/service/scheduler/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package scheduler import ( diff --git a/internal/service/schemas/discoverer.go b/internal/service/schemas/discoverer.go index 320181d6cfd..c5ecf827fc7 100644 --- a/internal/service/schemas/discoverer.go +++ b/internal/service/schemas/discoverer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schemas import ( @@ -58,12 +61,12 @@ func ResourceDiscoverer() *schema.Resource { func resourceDiscovererCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) sourceARN := d.Get("source_arn").(string) input := &schemas.CreateDiscovererInput{ SourceArn: aws.String(sourceARN), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -84,7 +87,7 @@ func resourceDiscovererCreate(ctx context.Context, d *schema.ResourceData, meta func resourceDiscovererRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) output, err := FindDiscovererByID(ctx, conn, d.Id()) @@ -107,7 +110,7 @@ func resourceDiscovererRead(ctx context.Context, d *schema.ResourceData, meta in func resourceDiscovererUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) if d.HasChange("description") { input := &schemas.UpdateDiscovererInput{ @@ -128,7 +131,7 @@ func resourceDiscovererUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceDiscovererDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) log.Printf("[INFO] Deleting EventBridge Schemas Discoverer (%s)", d.Id()) _, err := conn.DeleteDiscovererWithContext(ctx, &schemas.DeleteDiscovererInput{ diff --git a/internal/service/schemas/discoverer_test.go b/internal/service/schemas/discoverer_test.go index e5527c7f813..0104677ee83 
100644 --- a/internal/service/schemas/discoverer_test.go +++ b/internal/service/schemas/discoverer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas_test import ( @@ -159,7 +162,7 @@ func TestAccSchemasDiscoverer_tags(t *testing.T) { func testAccCheckDiscovererDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_schemas_discoverer" { @@ -194,7 +197,7 @@ func testAccCheckDiscovererExists(ctx context.Context, n string, v *schemas.Desc return fmt.Errorf("No EventBridge Schemas Discoverer ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) output, err := tfschemas.FindDiscovererByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/schemas/find.go b/internal/service/schemas/find.go index bb9b5a14eed..bc5de784df5 100644 --- a/internal/service/schemas/find.go +++ b/internal/service/schemas/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas import ( diff --git a/internal/service/schemas/generate.go b/internal/service/schemas/generate.go index 9ed84b3014f..d1ac8908d49 100644 --- a/internal/service/schemas/generate.go +++ b/internal/service/schemas/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package schemas diff --git a/internal/service/schemas/id.go b/internal/service/schemas/id.go index cb695e49eed..91d538d782a 100644 --- a/internal/service/schemas/id.go +++ b/internal/service/schemas/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas import ( diff --git a/internal/service/schemas/registry.go b/internal/service/schemas/registry.go index 08053f4c5e7..b463bfcff41 100644 --- a/internal/service/schemas/registry.go +++ b/internal/service/schemas/registry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas import ( @@ -62,12 +65,12 @@ func ResourceRegistry() *schema.Resource { func resourceRegistryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) name := d.Get("name").(string) input := &schemas.CreateRegistryInput{ RegistryName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -88,7 +91,7 @@ func resourceRegistryCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceRegistryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) output, err := FindRegistryByName(ctx, conn, d.Id()) @@ -111,7 +114,7 @@ func resourceRegistryRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceRegistryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) if d.HasChanges("description") { input := &schemas.UpdateRegistryInput{ @@ -132,7 +135,7 @@ func resourceRegistryUpdate(ctx 
context.Context, d *schema.ResourceData, meta in func resourceRegistryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) log.Printf("[INFO] Deleting EventBridge Schemas Registry (%s)", d.Id()) _, err := conn.DeleteRegistryWithContext(ctx, &schemas.DeleteRegistryInput{ diff --git a/internal/service/schemas/registry_policy.go b/internal/service/schemas/registry_policy.go index eebbd3d4e87..828045b73f9 100644 --- a/internal/service/schemas/registry_policy.go +++ b/internal/service/schemas/registry_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas import ( @@ -56,7 +59,7 @@ const ( ) func resourceRegistryPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) registryName := d.Get("registry_name").(string) policy, err := structure.ExpandJsonFromString(d.Get("policy").(string)) @@ -82,7 +85,7 @@ func resourceRegistryPolicyCreate(ctx context.Context, d *schema.ResourceData, m } func resourceRegistryPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) output, err := FindRegistryPolicyByName(ctx, conn, d.Id()) @@ -108,7 +111,7 @@ func resourceRegistryPolicyRead(ctx context.Context, d *schema.ResourceData, met } func resourceRegistryPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) policy, err := structure.ExpandJsonFromString(d.Get("policy").(string)) if err != nil { @@ -133,7 +136,7 @@ func resourceRegistryPolicyUpdate(ctx 
context.Context, d *schema.ResourceData, m } func resourceRegistryPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) log.Printf("[INFO] Deleting EventBridge Schemas Registry Policy (%s)", d.Id()) _, err := conn.DeleteResourcePolicyWithContext(ctx, &schemas.DeleteResourcePolicyInput{ diff --git a/internal/service/schemas/registry_policy_test.go b/internal/service/schemas/registry_policy_test.go index 5e93c9f9dfc..30609df9bee 100644 --- a/internal/service/schemas/registry_policy_test.go +++ b/internal/service/schemas/registry_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas_test import ( @@ -130,7 +133,7 @@ func testAccCheckRegistryPolicyExists(ctx context.Context, name string, v *schem return fmt.Errorf("No EventBridge Schemas Registry ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) output, err := tfschemas.FindRegistryPolicyByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -144,7 +147,7 @@ func testAccCheckRegistryPolicyExists(ctx context.Context, name string, v *schem func testAccCheckRegistryPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_schemas_registry_policy" { @@ -202,7 +205,7 @@ func testAccCheckRegistryPolicy(ctx context.Context, name string, expectedSid st ] }`, expectedSid, partition, region, account_id, rs.Primary.ID) - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) policy, err := 
tfschemas.FindRegistryPolicyByName(ctx, conn, rs.Primary.ID) if err != nil { return err diff --git a/internal/service/schemas/registry_test.go b/internal/service/schemas/registry_test.go index cfb020bf4f0..03a06d85536 100644 --- a/internal/service/schemas/registry_test.go +++ b/internal/service/schemas/registry_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas_test import ( @@ -160,7 +163,7 @@ func TestAccSchemasRegistry_tags(t *testing.T) { func testAccCheckRegistryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_schemas_registry" { @@ -195,7 +198,7 @@ func testAccCheckRegistryExists(ctx context.Context, n string, v *schemas.Descri return fmt.Errorf("No EventBridge Schemas Registry ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) output, err := tfschemas.FindRegistryByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/schemas/schema.go b/internal/service/schemas/schema.go index 47abbacde7e..f2b479fa938 100644 --- a/internal/service/schemas/schema.go +++ b/internal/service/schemas/schema.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package schemas import ( @@ -96,7 +99,7 @@ func ResourceSchema() *schema.Resource { func resourceSchemaCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) name := d.Get("name").(string) registryName := d.Get("registry_name").(string) @@ -104,7 +107,7 @@ func resourceSchemaCreate(ctx context.Context, d *schema.ResourceData, meta inte Content: aws.String(d.Get("content").(string)), RegistryName: aws.String(registryName), SchemaName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: aws.String(d.Get("type").(string)), } @@ -128,7 +131,7 @@ func resourceSchemaCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSchemaRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) name, registryName, err := SchemaParseResourceID(d.Id()) @@ -171,7 +174,7 @@ func resourceSchemaRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceSchemaUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) if d.HasChanges("content", "description", "type") { name, registryName, err := SchemaParseResourceID(d.Id()) @@ -207,7 +210,7 @@ func resourceSchemaUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSchemaDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SchemasConn() + conn := meta.(*conns.AWSClient).SchemasConn(ctx) name, registryName, err := SchemaParseResourceID(d.Id()) diff --git 
a/internal/service/schemas/schema_test.go b/internal/service/schemas/schema_test.go index be60bb2e720..3dad536d424 100644 --- a/internal/service/schemas/schema_test.go +++ b/internal/service/schemas/schema_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package schemas_test import ( @@ -223,7 +226,7 @@ func TestAccSchemasSchema_tags(t *testing.T) { func testAccCheckSchemaDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_schemas_schema" { @@ -270,7 +273,7 @@ func testAccCheckSchemaExists(ctx context.Context, n string, v *schemas.Describe return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SchemasConn(ctx) output, err := tfschemas.FindSchemaByNameAndRegistryName(ctx, conn, name, registryName) diff --git a/internal/service/schemas/service_package_gen.go b/internal/service/schemas/service_package_gen.go index 7bde40d4897..111d632ba89 100644 --- a/internal/service/schemas/service_package_gen.go +++ b/internal/service/schemas/service_package_gen.go @@ -5,6 +5,10 @@ package schemas import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + schemas_sdkv1 "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -60,4 +64,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Schemas } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*schemas_sdkv1.Schemas, error) { + sess := config["session"].(*session_sdkv1.Session) + + return schemas_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/schemas/sweep.go b/internal/service/schemas/sweep.go index eabc58cf489..03367eb610b 100644 --- a/internal/service/schemas/sweep.go +++ b/internal/service/schemas/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/aws/aws-sdk-go/service/schemas" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -38,11 +40,11 @@ func init() { func sweepDiscoverers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SchemasConn() + conn := client.SchemasConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -72,7 +74,7 @@ func sweepDiscoverers(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("listing EventBridge Schemas Discoverers: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("sweeping EventBridge Schemas Discoverers: %w", err)) } @@ -81,11 +83,11 @@ func sweepDiscoverers(region string) error { func sweepRegistries(region string) 
error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SchemasConn() + conn := client.SchemasConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -120,7 +122,7 @@ func sweepRegistries(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("listing EventBridge Schemas Registries: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EventBridge Schemas Registries: %w", err)) } @@ -129,11 +131,11 @@ func sweepRegistries(region string) error { func sweepSchemas(region string) error { // nosemgrep:ci.schemas-in-func-name ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SchemasConn() + conn := client.SchemasConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -188,7 +190,7 @@ func sweepSchemas(region string) error { // nosemgrep:ci.schemas-in-func-name sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("listing EventBridge Schemas Schemas (%s): %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping EventBridge Schemas Schemas (%s): %w", region, err)) } diff --git a/internal/service/schemas/tags_gen.go b/internal/service/schemas/tags_gen.go index 178e32c5c0e..9828faf9848 100644 --- 
a/internal/service/schemas/tags_gen.go +++ b/internal/service/schemas/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists schemas service tags. +// listTags lists schemas service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn schemasiface.SchemasAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn schemasiface.SchemasAPI, identifier string) (tftags.KeyValueTags, error) { input := &schemas.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn schemasiface.SchemasAPI, identifier stri // ListTags lists schemas service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SchemasConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SchemasConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from schemas service tags. +// KeyValueTags creates tftags.KeyValueTags from schemas service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns schemas service tags from Context. +// getTagsIn returns schemas service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets schemas service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets schemas service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates schemas service tags. +// updateTags updates schemas service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn schemasiface.SchemasAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn schemasiface.SchemasAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn schemasiface.SchemasAPI, identifier st // UpdateTags updates schemas service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SchemasConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SchemasConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/secretsmanager/consts.go b/internal/service/secretsmanager/consts.go index 1df547c7771..c17ad53267f 100644 --- a/internal/service/secretsmanager/consts.go +++ b/internal/service/secretsmanager/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( diff --git a/internal/service/secretsmanager/generate.go b/internal/service/secretsmanager/generate.go index db8e84b2b55..c1b1ca83239 100644 --- a/internal/service/secretsmanager/generate.go +++ b/internal/service/secretsmanager/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTagsInIDElem=SecretId -ServiceTagsSlice -TagInIDElem=SecretId -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package secretsmanager diff --git a/internal/service/secretsmanager/random_password_data_source.go b/internal/service/secretsmanager/random_password_data_source.go index b83e6aeca63..839de8b39ff 100644 --- a/internal/service/secretsmanager/random_password_data_source.go +++ b/internal/service/secretsmanager/random_password_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -61,7 +64,7 @@ func DataSourceRandomPassword() *schema.Resource { func dataSourceRandomPasswordRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) var excludeCharacters string if v, ok := d.GetOk("exclude_characters"); ok { diff --git a/internal/service/secretsmanager/random_password_data_source_test.go b/internal/service/secretsmanager/random_password_data_source_test.go index 697aa4e0497..f14c46e92a3 100644 --- a/internal/service/secretsmanager/random_password_data_source_test.go +++ b/internal/service/secretsmanager/random_password_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( diff --git a/internal/service/secretsmanager/secret.go b/internal/service/secretsmanager/secret.go index 5251c0168af..d2fa147abbf 100644 --- a/internal/service/secretsmanager/secret.go +++ b/internal/service/secretsmanager/secret.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -6,7 +9,6 @@ import ( "errors" "fmt" "log" - "regexp" "time" "github.com/aws/aws-sdk-go/aws" @@ -126,52 +128,6 @@ func ResourceSecret() *schema.Resource { }, }, }, - "rotation_enabled": { - Deprecated: "Use the aws_secretsmanager_secret_rotation resource instead", - Type: schema.TypeBool, - Computed: true, - }, - "rotation_lambda_arn": { - Deprecated: "Use the aws_secretsmanager_secret_rotation resource instead", - Type: schema.TypeString, - Optional: true, - Computed: true, - }, - "rotation_rules": { - Deprecated: "Use the aws_secretsmanager_secret_rotation resource instead", - Type: schema.TypeList, - Computed: true, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "automatically_after_days": { - Type: schema.TypeInt, - Optional: true, - ConflictsWith: []string{"rotation_rules.0.schedule_expression"}, - ExactlyOneOf: []string{"rotation_rules.0.automatically_after_days", "rotation_rules.0.schedule_expression"}, - DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - _, exists := d.GetOk("rotation_rules.0.schedule_expression") - return exists - }, - DiffSuppressOnRefresh: true, - ValidateFunc: validation.IntBetween(1, 1000), - }, - "duration": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringMatch(regexp.MustCompile(`[0-9h]+`), ""), - }, - "schedule_expression": { - Type: schema.TypeString, - Optional: true, - ConflictsWith: []string{"rotation_rules.0.automatically_after_days"}, - ExactlyOneOf: []string{"rotation_rules.0.automatically_after_days", "rotation_rules.0.schedule_expression"}, - ValidateFunc: validation.StringMatch(regexp.MustCompile(`[0-9A-Za-z\(\)#\?\*\-\/, ]+`), ""), - }, - }, - }, - }, names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), }, @@ -182,14 +138,14 @@ func ResourceSecret() *schema.Resource { func resourceSecretCreate(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretName := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &secretsmanager.CreateSecretInput{ Description: aws.String(d.Get("description").(string)), ForceOverwriteReplicaSecret: aws.Bool(d.Get("force_overwrite_replica_secret").(bool)), Name: aws.String(secretName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_id"); ok { @@ -257,45 +213,18 @@ func resourceSecretCreate(ctx context.Context, d *schema.ResourceData, meta inte } } - if v, ok := d.GetOk("rotation_lambda_arn"); ok && v.(string) != "" { - input := &secretsmanager.RotateSecretInput{ - RotationLambdaARN: aws.String(v.(string)), - RotationRules: expandRotationRules(d.Get("rotation_rules").([]interface{})), - SecretId: aws.String(d.Id()), - } - - log.Printf("[DEBUG] Enabling Secrets Manager Secret rotation: %s", input) - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - _, err := conn.RotateSecretWithContext(ctx, input) - if err != nil { - // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. - if tfawserr.ErrCodeEquals(err, "AccessDeniedException") { - return retry.RetryableError(err) - } - return retry.NonRetryableError(err) - } - return nil - }) - if tfresource.TimedOut(err) { - _, err = conn.RotateSecretWithContext(ctx, input) - } - if err != nil { - return sdkdiag.AppendErrorf(diags, "enabling Secrets Manager Secret %q rotation: %s", d.Id(), err) - } - } - return append(diags, resourceSecretRead(ctx, d, meta)...) 
} func resourceSecretRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, PropagationTimeout, func() (interface{}, error) { return FindSecretByID(ctx, conn, d.Id()) }, d.IsNewResource()) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, secretsmanager.ErrCodeResourceNotFoundException) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Secrets Manager Secret (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -354,26 +283,14 @@ func resourceSecretRead(ctx context.Context, d *schema.ResourceData, meta interf d.Set("policy", "") } - d.Set("rotation_enabled", output.RotationEnabled) - - if aws.BoolValue(output.RotationEnabled) { - d.Set("rotation_lambda_arn", output.RotationLambdaARN) - if err := d.Set("rotation_rules", flattenRotationRules(output.RotationRules)); err != nil { - return sdkdiag.AppendErrorf(diags, "setting rotation_rules: %s", err) - } - } else { - d.Set("rotation_lambda_arn", "") - d.Set("rotation_rules", []interface{}{}) - } - - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceSecretUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) if d.HasChange("replica") { o, n := d.GetChange("replica") @@ -446,51 +363,12 @@ func resourceSecretUpdate(ctx context.Context, d *schema.ResourceData, meta inte } } - if d.HasChanges("rotation_lambda_arn", "rotation_rules") { - if v, ok := d.GetOk("rotation_lambda_arn"); ok && v.(string) != "" { - input := &secretsmanager.RotateSecretInput{ - RotationLambdaARN: aws.String(v.(string)), - RotationRules: 
expandRotationRules(d.Get("rotation_rules").([]interface{})), - SecretId: aws.String(d.Id()), - } - - log.Printf("[DEBUG] Enabling Secrets Manager Secret rotation: %s", input) - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - _, err := conn.RotateSecretWithContext(ctx, input) - if err != nil { - // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. - if tfawserr.ErrCodeEquals(err, "AccessDeniedException") { - return retry.RetryableError(err) - } - return retry.NonRetryableError(err) - } - return nil - }) - if tfresource.TimedOut(err) { - _, err = conn.RotateSecretWithContext(ctx, input) - } - if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Secrets Manager Secret %q rotation: %s", d.Id(), err) - } - } else { - input := &secretsmanager.CancelRotateSecretInput{ - SecretId: aws.String(d.Id()), - } - - log.Printf("[DEBUG] Cancelling Secrets Manager Secret rotation: %s", input) - _, err := conn.CancelRotateSecretWithContext(ctx, input) - if err != nil { - return sdkdiag.AppendErrorf(diags, "cancelling Secret Manager Secret %q rotation: %s", d.Id(), err) - } - } - } - return append(diags, resourceSecretRead(ctx, d, meta)...) } func resourceSecretDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) if v, ok := d.GetOk("replica"); ok && v.(*schema.Set).Len() > 0 { err := removeSecretReplicas(ctx, conn, d.Id(), v.(*schema.Set).List()) diff --git a/internal/service/secretsmanager/secret_data_source.go b/internal/service/secretsmanager/secret_data_source.go index 8045f0c3726..d4b1676d1e1 100644 --- a/internal/service/secretsmanager/secret_data_source.go +++ b/internal/service/secretsmanager/secret_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -44,37 +47,6 @@ func DataSourceSecret() *schema.Resource { Type: schema.TypeString, Computed: true, }, - "rotation_enabled": { - Deprecated: "Use the aws_secretsmanager_secret_rotation data source instead", - Type: schema.TypeBool, - Computed: true, - }, - "rotation_lambda_arn": { - Deprecated: "Use the aws_secretsmanager_secret_rotation data source instead", - Type: schema.TypeString, - Computed: true, - }, - "rotation_rules": { - Deprecated: "Use the aws_secretsmanager_secret_rotation data source instead", - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "automatically_after_days": { - Type: schema.TypeInt, - Computed: true, - }, - "duration": { - Type: schema.TypeString, - Computed: true, - }, - "schedule_expression": { - Type: schema.TypeString, - Computed: true, - }, - }, - }, - }, "tags": { Type: schema.TypeMap, Computed: true, @@ -86,7 +58,7 @@ func DataSourceSecret() *schema.Resource { func dataSourceSecretRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig var secretID string @@ -126,8 +98,6 @@ func dataSourceSecretRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("description", output.Description) d.Set("kms_key_id", output.KmsKeyId) d.Set("name", output.Name) - d.Set("rotation_enabled", output.RotationEnabled) - d.Set("rotation_lambda_arn", output.RotationLambdaARN) d.Set("policy", "") pIn := &secretsmanager.GetResourcePolicyInput{ @@ -147,10 +117,6 @@ func dataSourceSecretRead(ctx context.Context, d *schema.ResourceData, meta inte d.Set("policy", policy) } - if err := d.Set("rotation_rules", flattenRotationRules(output.RotationRules)); err != nil { - return 
sdkdiag.AppendErrorf(diags, "setting rotation_rules: %s", err) - } - if err := d.Set("tags", KeyValueTags(ctx, output.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { return sdkdiag.AppendErrorf(diags, "setting tags: %s", err) } diff --git a/internal/service/secretsmanager/secret_data_source_test.go b/internal/service/secretsmanager/secret_data_source_test.go index 57032f12660..9fa7404ef6c 100644 --- a/internal/service/secretsmanager/secret_data_source_test.go +++ b/internal/service/secretsmanager/secret_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( @@ -116,9 +119,6 @@ func testAccSecretCheckDataSource(datasourceName, resourceName string) resource. "kms_key_id", "name", "policy", - "rotation_enabled", - "rotation_lambda_arn", - "rotation_rules.#", "tags.#", } diff --git a/internal/service/secretsmanager/secret_policy.go b/internal/service/secretsmanager/secret_policy.go index 898cef80a8c..5c4f53d075f 100644 --- a/internal/service/secretsmanager/secret_policy.go +++ b/internal/service/secretsmanager/secret_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -57,7 +60,7 @@ func ResourceSecretPolicy() *schema.Resource { func resourceSecretPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { @@ -102,7 +105,7 @@ func resourceSecretPolicyCreate(ctx context.Context, d *schema.ResourceData, met func resourceSecretPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.GetResourcePolicyInput{ SecretId: aws.String(d.Id()), @@ -145,7 +148,7 @@ func resourceSecretPolicyRead(ctx context.Context, d *schema.ResourceData, meta func resourceSecretPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) if d.HasChanges("policy", "block_public_policy") { policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -183,7 +186,7 @@ func resourceSecretPolicyUpdate(ctx context.Context, d *schema.ResourceData, met func resourceSecretPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.DeleteResourcePolicyInput{ SecretId: aws.String(d.Id()), diff --git a/internal/service/secretsmanager/secret_policy_test.go b/internal/service/secretsmanager/secret_policy_test.go index 59b9b2ce2bb..1a3367ea38a 100644 --- 
a/internal/service/secretsmanager/secret_policy_test.go +++ b/internal/service/secretsmanager/secret_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( @@ -126,7 +129,7 @@ func TestAccSecretsManagerSecretPolicy_disappears(t *testing.T) { func testAccCheckSecretPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_secretsmanager_secret_policy" { @@ -198,7 +201,7 @@ func testAccCheckSecretPolicyExists(ctx context.Context, resourceName string, po return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.GetResourcePolicyInput{ SecretId: aws.String(rs.Primary.ID), } diff --git a/internal/service/secretsmanager/secret_rotation.go b/internal/service/secretsmanager/secret_rotation.go index 1396d5d462c..8809bcc9ee8 100644 --- a/internal/service/secretsmanager/secret_rotation.go +++ b/internal/service/secretsmanager/secret_rotation.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -8,9 +11,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/secretsmanager" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -25,16 +26,12 @@ func ResourceSecretRotation() *schema.Resource { ReadWithoutTimeout: resourceSecretRotationRead, UpdateWithoutTimeout: resourceSecretRotationUpdate, DeleteWithoutTimeout: resourceSecretRotationDelete, + Importer: &schema.ResourceImporter{ StateContext: schema.ImportStatePassthroughContext, }, Schema: map[string]*schema.Schema{ - "secret_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, "rotation_enabled": { Type: schema.TypeBool, Computed: true, @@ -76,13 +73,18 @@ func ResourceSecretRotation() *schema.Resource { }, }, }, + "secret_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, }, } } func resourceSecretRotationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID := d.Get("secret_id").(string) if v, ok := d.GetOk("rotation_lambda_arn"); ok && v.(string) != "" { @@ -92,31 +94,16 @@ func resourceSecretRotationCreate(ctx context.Context, d *schema.ResourceData, m SecretId: aws.String(secretID), } - log.Printf("[DEBUG] Enabling Secrets Manager Secret rotation: %s", input) - var output *secretsmanager.RotateSecretOutput - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - var err error - output, err = conn.RotateSecretWithContext(ctx, input) - if err != nil 
{ - // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. - if tfawserr.ErrCodeEquals(err, "AccessDeniedException") { - return retry.RetryableError(err) - } - return retry.NonRetryableError(err) - } - - return nil - }) - - if tfresource.TimedOut(err) { - output, err = conn.RotateSecretWithContext(ctx, input) - } + // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. + outputRaw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 1*time.Minute, func() (interface{}, error) { + return conn.RotateSecretWithContext(ctx, input) + }, "AccessDeniedException") if err != nil { - return sdkdiag.AppendErrorf(diags, "enabling Secrets Manager Secret %q rotation: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "creating Secrets Manager Secret Rotation (%s): %s", secretID, err) } - d.SetId(aws.StringValue(output.ARN)) + d.SetId(aws.StringValue(outputRaw.(*secretsmanager.RotateSecretOutput).ARN)) } return append(diags, resourceSecretRotationRead(ctx, d, meta)...) 
@@ -124,35 +111,13 @@ func resourceSecretRotationCreate(ctx context.Context, d *schema.ResourceData, m func resourceSecretRotationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() - - input := &secretsmanager.DescribeSecretInput{ - SecretId: aws.String(d.Id()), - } + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) - var output *secretsmanager.DescribeSecretOutput + outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, PropagationTimeout, func() (interface{}, error) { + return FindSecretByID(ctx, conn, d.Id()) + }, d.IsNewResource()) - err := retry.RetryContext(ctx, PropagationTimeout, func() *retry.RetryError { - var err error - - output, err = conn.DescribeSecretWithContext(ctx, input) - - if d.IsNewResource() && tfawserr.ErrCodeEquals(err, secretsmanager.ErrCodeResourceNotFoundException) { - return retry.RetryableError(err) - } - - if err != nil { - return retry.NonRetryableError(err) - } - - return nil - }) - - if tfresource.TimedOut(err) { - output, err = conn.DescribeSecretWithContext(ctx, input) - } - - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, secretsmanager.ErrCodeResourceNotFoundException) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Secrets Manager Secret Rotation (%s) not found, removing from state", d.Id()) d.SetId("") return diags @@ -162,13 +127,10 @@ func resourceSecretRotationRead(ctx context.Context, d *schema.ResourceData, met return sdkdiag.AppendErrorf(diags, "reading Secrets Manager Secret Rotation (%s): %s", d.Id(), err) } - if output == nil { - return sdkdiag.AppendErrorf(diags, "reading Secrets Manager Secret Rotation (%s): empty response", d.Id()) - } + output := outputRaw.(*secretsmanager.DescribeSecretOutput) d.Set("secret_id", d.Id()) d.Set("rotation_enabled", output.RotationEnabled) - if aws.BoolValue(output.RotationEnabled) { d.Set("rotation_lambda_arn", 
output.RotationLambdaARN) if err := d.Set("rotation_rules", flattenRotationRules(output.RotationRules)); err != nil { @@ -184,7 +146,7 @@ func resourceSecretRotationRead(ctx context.Context, d *schema.ResourceData, met func resourceSecretRotationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID := d.Get("secret_id").(string) if d.HasChanges("rotation_lambda_arn", "rotation_rules") { @@ -195,35 +157,23 @@ func resourceSecretRotationUpdate(ctx context.Context, d *schema.ResourceData, m SecretId: aws.String(secretID), } - log.Printf("[DEBUG] Enabling Secrets Manager Secret Rotation: %s", input) - err := retry.RetryContext(ctx, 1*time.Minute, func() *retry.RetryError { - _, err := conn.RotateSecretWithContext(ctx, input) - if err != nil { - // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. - if tfawserr.ErrCodeEquals(err, "AccessDeniedException") { - return retry.RetryableError(err) - } - return retry.NonRetryableError(err) - } - return nil - }) - - if tfresource.TimedOut(err) { - _, err = conn.RotateSecretWithContext(ctx, input) - } + // AccessDeniedException: Secrets Manager cannot invoke the specified Lambda function. 
+ _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 1*time.Minute, func() (interface{}, error) { + return conn.RotateSecretWithContext(ctx, input) + }, "AccessDeniedException") if err != nil { - return sdkdiag.AppendErrorf(diags, "updating Secrets Manager Secret Rotation %q : %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating Secrets Manager Secret Rotation (%s): %s", d.Id(), err) } } else { input := &secretsmanager.CancelRotateSecretInput{ SecretId: aws.String(d.Id()), } - log.Printf("[DEBUG] Cancelling Secrets Manager Secret Rotation: %s", input) _, err := conn.CancelRotateSecretWithContext(ctx, input) + if err != nil { - return sdkdiag.AppendErrorf(diags, "cancelling Secret Manager Secret Rotation %q : %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "cancelling Secrets Manager Secret Rotation (%s): %s", d.Id(), err) } } } @@ -233,17 +183,14 @@ func resourceSecretRotationUpdate(ctx context.Context, d *schema.ResourceData, m func resourceSecretRotationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() - secretID := d.Get("secret_id").(string) + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) - input := &secretsmanager.CancelRotateSecretInput{ - SecretId: aws.String(secretID), - } + _, err := conn.CancelRotateSecretWithContext(ctx, &secretsmanager.CancelRotateSecretInput{ + SecretId: aws.String(d.Get("secret_id").(string)), + }) - log.Printf("[DEBUG] Deleting Secrets Manager Rotation: %s", input) - _, err := conn.CancelRotateSecretWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "cancelling Secret Manager Secret %q rotation: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting Secrets Manager Secret Rotation (%s): %s", d.Id(), err) } return diags @@ -279,7 +226,8 @@ func flattenRotationRules(rules *secretsmanager.RotationRulesType) []interface{} m := map[string]interface{}{} -
if v := rules.AutomaticallyAfterDays; v != nil { + if v := rules.AutomaticallyAfterDays; v != nil && rules.ScheduleExpression == nil { + // Only populate automatically_after_days if schedule_expression is not set, otherwise we won't be able to update the resource m["automatically_after_days"] = int(aws.Int64Value(v)) } diff --git a/internal/service/secretsmanager/secret_rotation_data_source.go b/internal/service/secretsmanager/secret_rotation_data_source.go index 2065157b45f..8ed3b4457fc 100644 --- a/internal/service/secretsmanager/secret_rotation_data_source.go +++ b/internal/service/secretsmanager/secret_rotation_data_source.go @@ -1,11 +1,12 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( "context" - "log" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/secretsmanager" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -19,11 +20,6 @@ func DataSourceSecretRotation() *schema.Resource { ReadWithoutTimeout: dataSourceSecretRotationRead, Schema: map[string]*schema.Schema{ - "secret_id": { - Type: schema.TypeString, - ValidateFunc: validation.StringLenBetween(1, 2048), - Required: true, - }, "rotation_enabled": { Type: schema.TypeBool, Computed: true, @@ -52,33 +48,29 @@ func DataSourceSecretRotation() *schema.Resource { }, }, }, + "secret_id": { + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 2048), + Required: true, + }, }, } } func dataSourceSecretRotationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() - secretID := d.Get("secret_id").(string) + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) - input := &secretsmanager.DescribeSecretInput{ - SecretId: aws.String(secretID), - } + secretID := 
d.Get("secret_id").(string) + output, err := FindSecretByID(ctx, conn, secretID) - log.Printf("[DEBUG] Reading Secrets Manager Secret: %s", input) - output, err := conn.DescribeSecretWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "reading Secrets Manager Secret: %s", err) - } - - if output.ARN == nil { - return sdkdiag.AppendErrorf(diags, "Secrets Manager Secret %q not found", secretID) + return sdkdiag.AppendErrorf(diags, "reading Secrets Manager Secret (%s): %s", secretID, err) } d.SetId(aws.StringValue(output.ARN)) d.Set("rotation_enabled", output.RotationEnabled) d.Set("rotation_lambda_arn", output.RotationLambdaARN) - if err := d.Set("rotation_rules", flattenRotationRules(output.RotationRules)); err != nil { return sdkdiag.AppendErrorf(diags, "setting rotation_rules: %s", err) } diff --git a/internal/service/secretsmanager/secret_rotation_data_source_test.go b/internal/service/secretsmanager/secret_rotation_data_source_test.go index df232f370fb..af05f4e33ee 100644 --- a/internal/service/secretsmanager/secret_rotation_data_source_test.go +++ b/internal/service/secretsmanager/secret_rotation_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( @@ -24,7 +27,7 @@ func TestAccSecretsManagerSecretRotationDataSource_basic(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccSecretRotationDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`ResourceNotFoundException`), + ExpectError: regexp.MustCompile(`couldn't find resource`), }, { Config: testAccSecretRotationDataSourceConfig_default(rName, 7), @@ -45,7 +48,7 @@ data "aws_secretsmanager_secret_rotation" "test" { ` func testAccSecretRotationDataSourceConfig_default(rName string, automaticallyAfterDays int) string { - return acctest.ConfigLambdaBase(rName, rName, rName) + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigLambdaBase(rName, rName, rName), fmt.Sprintf(` # Not a real rotation function resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" @@ -63,7 +66,7 @@ resource "aws_lambda_permission" "test" { } resource "aws_secretsmanager_secret" "test" { - name = "%[1]s" + name = %[1]q } resource "aws_secretsmanager_secret_rotation" "test" { @@ -78,5 +81,5 @@ resource "aws_secretsmanager_secret_rotation" "test" { data "aws_secretsmanager_secret_rotation" "test" { secret_id = aws_secretsmanager_secret_rotation.test.secret_id } -`, rName, automaticallyAfterDays) +`, rName, automaticallyAfterDays)) } diff --git a/internal/service/secretsmanager/secret_rotation_test.go b/internal/service/secretsmanager/secret_rotation_test.go index aebf0eb250e..ea14134047a 100644 --- a/internal/service/secretsmanager/secret_rotation_test.go +++ b/internal/service/secretsmanager/secret_rotation_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( @@ -8,12 +11,13 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/secretsmanager" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfsecretsmanager "github.com/hashicorp/terraform-provider-aws/internal/service/secretsmanager" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func TestAccSecretsManagerSecretRotation_basic(t *testing.T) { @@ -108,6 +112,52 @@ func TestAccSecretsManagerSecretRotation_scheduleExpression(t *testing.T) { }) } +func TestAccSecretsManagerSecretRotation_scheduleExpressionHours(t *testing.T) { + ctx := acctest.Context(t) + var secret secretsmanager.DescribeSecretOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_secretsmanager_secret_rotation.test" + lambdaFunctionResourceName := "aws_lambda_function.test1" + scheduleExpression := "rate(6 hours)" + scheduleExpression02 := "rate(10 hours)" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, secretsmanager.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSecretRotationDestroy(ctx), + Steps: []resource.TestStep{ + // Test creating secret rotation resource + { + Config: testAccSecretRotationConfig_scheduleExpression(rName, scheduleExpression), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecretRotationExists(ctx, resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), + 
resource.TestCheckResourceAttrPair(resourceName, "rotation_lambda_arn", lambdaFunctionResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.0.schedule_expression", scheduleExpression), + testSecretValueIsCurrent(ctx, rName), + ), + }, + { + Config: testAccSecretRotationConfig_scheduleExpression(rName, scheduleExpression02), + Check: resource.ComposeTestCheckFunc( + testAccCheckSecretRotationExists(ctx, resourceName, &secret), + resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), + resource.TestCheckResourceAttrPair(resourceName, "rotation_lambda_arn", lambdaFunctionResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rotation_rules.0.schedule_expression", scheduleExpression02), + ), + }, + // Test importing secret rotation + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccSecretsManagerSecretRotation_duration(t *testing.T) { ctx := acctest.Context(t) var secret secretsmanager.DescribeSecretOutput @@ -147,69 +197,65 @@ func TestAccSecretsManagerSecretRotation_duration(t *testing.T) { func testAccCheckSecretRotationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_secretsmanager_secret_rotation" { continue } - input := &secretsmanager.DescribeSecretInput{ - SecretId: aws.String(rs.Primary.ID), - } + output, err := tfsecretsmanager.FindSecretByID(ctx, conn, rs.Primary.ID) - output, err := conn.DescribeSecretWithContext(ctx, input) + if tfresource.NotFound(err) { + continue + } if err != nil { - if tfawserr.ErrCodeEquals(err, 
secretsmanager.ErrCodeResourceNotFoundException) { - continue - } return err } - if output != nil && aws.BoolValue(output.RotationEnabled) { - return fmt.Errorf("Secret rotation for %q still enabled", rs.Primary.ID) + if !aws.BoolValue(output.RotationEnabled) { + continue } + + return fmt.Errorf("Secrets Manager Secret %s rotation still enabled", rs.Primary.ID) } return nil } } -func testAccCheckSecretRotationExists(ctx context.Context, resourceName string, secret *secretsmanager.DescribeSecretOutput) resource.TestCheckFunc { +func testAccCheckSecretRotationExists(ctx context.Context, n string, v *secretsmanager.DescribeSecretOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", resourceName) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() - input := &secretsmanager.DescribeSecretInput{ - SecretId: aws.String(rs.Primary.ID), + if rs.Primary.ID == "" { + return fmt.Errorf("No Secrets Manager Secret Rotation ID is set") } - output, err := conn.DescribeSecretWithContext(ctx, input) + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) + + output, err := tfsecretsmanager.FindSecretByID(ctx, conn, rs.Primary.ID) if err != nil { return err } - if output == nil { - return fmt.Errorf("Secret %q does not exist", rs.Primary.ID) - } - if !aws.BoolValue(output.RotationEnabled) { - return fmt.Errorf("Secret rotation %q is not enabled", rs.Primary.ID) + return fmt.Errorf("Secrets Manager Secret %s rotation not enabled", rs.Primary.ID) } - *secret = *output + *v = *output return nil } } func testAccSecretRotationConfig_basic(rName string, automaticallyAfterDays int) string { - return acctest.ConfigLambdaBase(rName, rName, rName) + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigLambdaBase(rName, rName, rName), fmt.Sprintf(` # 
Not a real rotation function resource "aws_lambda_function" "test1" { filename = "test-fixtures/lambdatest.zip" @@ -243,7 +289,7 @@ resource "aws_lambda_permission" "test2" { } resource "aws_secretsmanager_secret" "test" { - name = "%[1]s" + name = %[1]q } resource "aws_secretsmanager_secret_rotation" "test" { @@ -256,11 +302,11 @@ resource "aws_secretsmanager_secret_rotation" "test" { depends_on = [aws_lambda_permission.test1] } -`, rName, automaticallyAfterDays) +`, rName, automaticallyAfterDays)) } func testAccSecretRotationConfig_scheduleExpression(rName string, scheduleExpression string) string { - return acctest.ConfigLambdaBase(rName, rName, rName) + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigLambdaBase(rName, rName, rName), fmt.Sprintf(` # Not a real rotation function resource "aws_lambda_function" "test1" { filename = "test-fixtures/lambdatest.zip" @@ -294,7 +340,7 @@ resource "aws_lambda_permission" "test2" { } resource "aws_secretsmanager_secret" "test" { - name = "%[1]s" + name = %[1]q } resource "aws_secretsmanager_secret_rotation" "test" { @@ -307,11 +353,11 @@ resource "aws_secretsmanager_secret_rotation" "test" { depends_on = [aws_lambda_permission.test1] } -`, rName, scheduleExpression) +`, rName, scheduleExpression)) } func testAccSecretRotationConfig_duration(rName string, automaticallyAfterDays int, duration string) string { - return acctest.ConfigLambdaBase(rName, rName, rName) + fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigLambdaBase(rName, rName, rName), fmt.Sprintf(` # Not a real rotation function resource "aws_lambda_function" "test1" { filename = "test-fixtures/lambdatest.zip" @@ -345,7 +391,7 @@ resource "aws_lambda_permission" "test2" { } resource "aws_secretsmanager_secret" "test" { - name = "%[1]s" + name = %[1]q } resource "aws_secretsmanager_secret_rotation" "test" { @@ -359,5 +405,38 @@ resource "aws_secretsmanager_secret_rotation" "test" { depends_on = [aws_lambda_permission.test1] } -`, rName, 
automaticallyAfterDays, duration) +`, rName, automaticallyAfterDays, duration)) +} + +func testSecretValueIsCurrent(ctx context.Context, rName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) + // Write a new secret value to clear the in-rotation state, otherwise updating the secret rotation + // will fail with "A previous rotation isn't complete. That rotation will be reattempted." + putSecretValueInput := &secretsmanager.PutSecretValueInput{ + SecretId: aws.String(rName), + SecretString: aws.String("secret-value"), + } + _, err := conn.PutSecretValueWithContext(ctx, putSecretValueInput) + if err != nil { + return err + } + input := &secretsmanager.DescribeSecretInput{ + SecretId: aws.String(rName), + } + output, err := conn.DescribeSecretWithContext(ctx, input) + if err != nil { + return err + } + // Ensure that some version of the secret carries the AWSCURRENT staging label. + // VersionIdsToStages is a map, so iteration order is random; check every staging label + // of every version rather than only the first one encountered. + for _, stages := range output.VersionIdsToStages { + for _, stage := range stages { + if aws.StringValue(stage) == "AWSCURRENT" { + return nil + } + } + } + return fmt.Errorf("no version of Secrets Manager Secret (%s) is in the AWSCURRENT stage", rName) + } +} diff --git a/internal/service/secretsmanager/secret_test.go b/internal/service/secretsmanager/secret_test.go index d6842acebd6..a52d0c59279 100644 --- a/internal/service/secretsmanager/secret_test.go +++ b/internal/service/secretsmanager/secret_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( @@ -39,9 +42,6 @@ func TestAccSecretsManagerSecret_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "name_prefix", ""), resource.TestCheckResourceAttr(resourceName, "recovery_window_in_days", "30"), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "false"), - resource.TestCheckResourceAttr(resourceName, "rotation_lambda_arn", ""), - resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "0"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -254,61 +254,7 @@ func TestAccSecretsManagerSecret_RecoveryWindowInDays_recreate(t *testing.T) { }) } -func TestAccSecretsManagerSecret_rotationLambdaARN(t *testing.T) { - ctx := acctest.Context(t) - var secret secretsmanager.DescribeSecretOutput - rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - resourceName := "aws_secretsmanager_secret.test" - lambdaFunctionResourceName := "aws_lambda_function.test1" - - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, secretsmanager.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckSecretDestroy(ctx), - Steps: []resource.TestStep{ - // Test enabling rotation on resource creation - { - Config: testAccSecretConfig_rotationLambdaARN(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckSecretExists(ctx, resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), - resource.TestCheckResourceAttrPair(resourceName, "rotation_lambda_arn", lambdaFunctionResourceName, "arn"), - ), - }, - // Test updating rotation - // We need a valid rotation function for this testing - // InvalidRequestException: A previous rotation isn’t complete. That rotation will be reattempted. 
- /* - { - Config: testAccSecretConfig_managerRotationLambdaARNUpdated(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckSecretExists(resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), - resource.TestMatchResourceAttr(resourceName, "rotation_lambda_arn", regexp.MustCompile(fmt.Sprintf("^arn:[^:]+:lambda:[^:]+:[^:]+:function:%s-2$", rName))), - ), - }, - */ - // Test importing rotation - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"recovery_window_in_days", "force_overwrite_replica_secret"}, - }, - // Test removing rotation on resource update - { - Config: testAccSecretConfig_name(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckSecretExists(ctx, resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), // Must be removed with aws_secretsmanager_secret_rotation after version 2.67.0 - ), - }, - }, - }) -} - -func TestAccSecretsManagerSecret_rotationRules(t *testing.T) { +func TestAccSecretsManagerSecret_tags(t *testing.T) { ctx := acctest.Context(t) var secret secretsmanager.DescribeSecretOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -320,100 +266,37 @@ func TestAccSecretsManagerSecret_rotationRules(t *testing.T) { ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckSecretDestroy(ctx), Steps: []resource.TestStep{ - // Test creating rotation rules on resource creation { - Config: testAccSecretConfig_rotationRules(rName, 7), + Config: testAccSecretConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( testAccCheckSecretExists(ctx, resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), - resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "1"), - resource.TestCheckResourceAttr(resourceName, "rotation_rules.0.automatically_after_days", "7"), + 
resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), }, - // Test updating rotation rules - // We need a valid rotation function for this testing - // InvalidRequestException: A previous rotation isn’t complete. That rotation will be reattempted. - /* - { - Config: testAccSecretConfig_rotationRules(rName, 1), - Check: resource.ComposeTestCheckFunc( - testAccCheckSecretExists(resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), - resource.TestCheckResourceAttr(resourceName, "rotation_rules.#", "1"), - resource.TestCheckResourceAttr(resourceName, "rotation_rules.0.automatically_after_days", "1"), - ), - }, - */ - // Test importing rotation rules { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, ImportStateVerifyIgnore: []string{"recovery_window_in_days", "force_overwrite_replica_secret"}, }, - // Test removing rotation rules on resource update - { - Config: testAccSecretConfig_name(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckSecretExists(ctx, resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "rotation_enabled", "true"), // Must be removed with aws_secretsmanager_secret_rotation after version 2.67.0 - ), - }, - }, - }) -} - -func TestAccSecretsManagerSecret_tags(t *testing.T) { - ctx := acctest.Context(t) - var secret secretsmanager.DescribeSecretOutput - rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - resourceName := "aws_secretsmanager_secret.test" - - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, secretsmanager.EndpointsID), - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckSecretDestroy(ctx), - Steps: []resource.TestStep{ - { - Config: testAccSecretConfig_tagsSingle(rName), - Check: resource.ComposeTestCheckFunc( - 
testAccCheckSecretExists(ctx, resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), - ), - }, { - Config: testAccSecretConfig_tagsSingleUpdated(rName), - Check: resource.ComposeTestCheckFunc( - testAccCheckSecretExists(ctx, resourceName, &secret), - resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value-updated"), - ), - }, - { - Config: testAccSecretConfig_tagsMultiple(rName), + Config: testAccSecretConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckSecretExists(ctx, resourceName, &secret), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), - resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), - resource.TestCheckResourceAttr(resourceName, "tags.tag2", "tag2value"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, { - Config: testAccSecretConfig_tagsSingle(rName), + Config: testAccSecretConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( testAccCheckSecretExists(ctx, resourceName, &secret), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), - resource.TestCheckResourceAttr(resourceName, "tags.tag1", "tag1value"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"recovery_window_in_days", "force_overwrite_replica_secret"}, - }, }, }) } @@ -461,7 +344,7 @@ func TestAccSecretsManagerSecret_policy(t *testing.T) { func testAccCheckSecretDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_secretsmanager_secret" { @@ -496,7 +379,7 @@ func testAccCheckSecretExists(ctx context.Context, n string, v *secretsmanager.D return fmt.Errorf("No Secrets Manager Secret ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) output, err := tfsecretsmanager.FindSecretByID(ctx, conn, rs.Primary.ID) @@ -511,7 +394,7 @@ func testAccCheckSecretExists(ctx context.Context, n string, v *secretsmanager.D } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.ListSecretsInput{} @@ -529,16 +412,14 @@ func testAccPreCheck(ctx context.Context, t *testing.T) { func testAccSecretConfig_description(rName, description string) string { return fmt.Sprintf(` resource "aws_secretsmanager_secret" "test" { - description = "%s" - name = "%s" + description = %[1]q + name = %[2]q } `, description, rName) } func testAccSecretConfig_basicReplica(rName string) string { - return acctest.ConfigCompose( - acctest.ConfigMultipleRegionProvider(2), - fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(2), fmt.Sprintf(` data "aws_region" "alternate" { provider = awsalternate } @@ -554,9 +435,7 @@ resource "aws_secretsmanager_secret" "test" { } func testAccSecretConfig_overwriteReplica(rName string, force_overwrite_replica_secret bool) string { - return acctest.ConfigCompose( - acctest.ConfigMultipleRegionProvider(3), - fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(3), fmt.Sprintf(` resource "aws_kms_key" "test" { provider = awsalternate deletion_window_in_days = 7 @@ -584,9 +463,7 @@ resource "aws_secretsmanager_secret" 
"test" { } func testAccSecretConfig_overwriteReplicaUpdate(rName string, force_overwrite_replica_secret bool) string { - return acctest.ConfigCompose( - acctest.ConfigMultipleRegionProvider(3), - fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(3), fmt.Sprintf(` resource "aws_kms_key" "test" { provider = awsalternate deletion_window_in_days = 7 @@ -616,7 +493,7 @@ resource "aws_secretsmanager_secret" "test" { func testAccSecretConfig_name(rName string) string { return fmt.Sprintf(` resource "aws_secretsmanager_secret" "test" { - name = "%s" + name = %[1]q } `, rName) } @@ -641,7 +518,7 @@ resource "aws_kms_key" "test2" { resource "aws_secretsmanager_secret" "test" { kms_key_id = aws_kms_key.test1.id - name = "%s" + name = %[1]q } `, rName) } @@ -658,7 +535,7 @@ resource "aws_kms_key" "test2" { resource "aws_secretsmanager_secret" "test" { kms_key_id = aws_kms_key.test2.id - name = "%s" + name = %[1]q } `, rName) } @@ -666,121 +543,35 @@ resource "aws_secretsmanager_secret" "test" { func testAccSecretConfig_recoveryWindowInDays(rName string, recoveryWindowInDays int) string { return fmt.Sprintf(` resource "aws_secretsmanager_secret" "test" { - name = %q - recovery_window_in_days = %d + name = %[1]q + recovery_window_in_days = %[2]d } `, rName, recoveryWindowInDays) } -func testAccSecretConfig_rotationLambdaARN(rName string) string { - return acctest.ConfigLambdaBase(rName, rName, rName) + fmt.Sprintf(` -resource "aws_secretsmanager_secret" "test" { - name = "%[1]s" - rotation_lambda_arn = aws_lambda_function.test1.arn - - depends_on = [aws_lambda_permission.test1] -} - -# Not a real rotation function -resource "aws_lambda_function" "test1" { - filename = "test-fixtures/lambdatest.zip" - function_name = "%[1]s-1" - handler = "exports.example" - role = aws_iam_role.iam_for_lambda.arn - runtime = "nodejs16.x" -} - -resource "aws_lambda_permission" "test1" { - action = "lambda:InvokeFunction" - function_name = 
aws_lambda_function.test1.function_name - principal = "secretsmanager.amazonaws.com" - statement_id = "AllowExecutionFromSecretsManager1" -} - -# Not a real rotation function -resource "aws_lambda_function" "test2" { - filename = "test-fixtures/lambdatest.zip" - function_name = "%[1]s-2" - handler = "exports.example" - role = aws_iam_role.iam_for_lambda.arn - runtime = "nodejs16.x" -} - -resource "aws_lambda_permission" "test2" { - action = "lambda:InvokeFunction" - function_name = aws_lambda_function.test2.function_name - principal = "secretsmanager.amazonaws.com" - statement_id = "AllowExecutionFromSecretsManager2" -} -`, rName) -} - -func testAccSecretConfig_rotationRules(rName string, automaticallyAfterDays int) string { - return acctest.ConfigLambdaBase(rName, rName, rName) + fmt.Sprintf(` -# Not a real rotation function -resource "aws_lambda_function" "test" { - filename = "test-fixtures/lambdatest.zip" - function_name = "%[1]s" - handler = "exports.example" - role = aws_iam_role.iam_for_lambda.arn - runtime = "nodejs16.x" -} - -resource "aws_lambda_permission" "test" { - action = "lambda:InvokeFunction" - function_name = aws_lambda_function.test.function_name - principal = "secretsmanager.amazonaws.com" - statement_id = "AllowExecutionFromSecretsManager1" -} - -resource "aws_secretsmanager_secret" "test" { - name = "%[1]s" - rotation_lambda_arn = aws_lambda_function.test.arn - - rotation_rules { - automatically_after_days = %[2]d - } - - depends_on = [aws_lambda_permission.test] -} -`, rName, automaticallyAfterDays) -} - -func testAccSecretConfig_tagsSingle(rName string) string { - return fmt.Sprintf(` -resource "aws_secretsmanager_secret" "test" { - name = "%s" - - tags = { - tag1 = "tag1value" - } -} -`, rName) -} - -func testAccSecretConfig_tagsSingleUpdated(rName string) string { +func testAccSecretConfig_tags1(rName, tagKey1, tagValue1 string) string { return fmt.Sprintf(` resource "aws_secretsmanager_secret" "test" { - name = "%s" + name = %[1]q tags = 
{ - tag1 = "tag1value-updated" + %[2]q = %[3]q } } -`, rName) +`, rName, tagKey1, tagValue1) } -func testAccSecretConfig_tagsMultiple(rName string) string { +func testAccSecretConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { return fmt.Sprintf(` resource "aws_secretsmanager_secret" "test" { - name = "%s" + name = %[1]q tags = { - tag1 = "tag1value" - tag2 = "tag2value" + %[2]q = %[3]q + %[4]q = %[5]q } } -`, rName) +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) } func testAccSecretConfig_policy(rName string) string { diff --git a/internal/service/secretsmanager/secret_version.go b/internal/service/secretsmanager/secret_version.go index 76590f6d364..e088430315b 100644 --- a/internal/service/secretsmanager/secret_version.go +++ b/internal/service/secretsmanager/secret_version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -71,7 +74,7 @@ func ResourceSecretVersion() *schema.Resource { func resourceSecretVersionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID := d.Get("secret_id").(string) input := &secretsmanager.PutSecretValueInput{ @@ -114,7 +117,7 @@ func resourceSecretVersionCreate(ctx context.Context, d *schema.ResourceData, me func resourceSecretVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID, versionID, err := DecodeSecretVersionID(d.Id()) if err != nil { @@ -187,7 +190,7 @@ func resourceSecretVersionRead(ctx context.Context, d *schema.ResourceData, meta func resourceSecretVersionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var 
diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID, versionID, err := DecodeSecretVersionID(d.Id()) if err != nil { @@ -237,7 +240,7 @@ func resourceSecretVersionUpdate(ctx context.Context, d *schema.ResourceData, me func resourceSecretVersionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID, versionID, err := DecodeSecretVersionID(d.Id()) if err != nil { diff --git a/internal/service/secretsmanager/secret_version_data_source.go b/internal/service/secretsmanager/secret_version_data_source.go index 78c17a2c074..4228b6b4d76 100644 --- a/internal/service/secretsmanager/secret_version_data_source.go +++ b/internal/service/secretsmanager/secret_version_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -60,7 +63,7 @@ func DataSourceSecretVersion() *schema.Resource { func dataSourceSecretVersionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) secretID := d.Get("secret_id").(string) var version string diff --git a/internal/service/secretsmanager/secret_version_data_source_test.go b/internal/service/secretsmanager/secret_version_data_source_test.go index 0c7158cd5c4..3099f0f242f 100644 --- a/internal/service/secretsmanager/secret_version_data_source_test.go +++ b/internal/service/secretsmanager/secret_version_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( diff --git a/internal/service/secretsmanager/secret_version_test.go b/internal/service/secretsmanager/secret_version_test.go index bb5165cde73..4c9fc8c0bf6 100644 --- a/internal/service/secretsmanager/secret_version_test.go +++ b/internal/service/secretsmanager/secret_version_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( @@ -137,7 +140,7 @@ func TestAccSecretsManagerSecretVersion_versionStages(t *testing.T) { func testAccCheckSecretVersionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_secretsmanager_secret_version" { @@ -197,7 +200,7 @@ func testAccCheckSecretVersionExists(ctx context.Context, resourceName string, v return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.GetSecretValueInput{ SecretId: aws.String(secretID), diff --git a/internal/service/secretsmanager/secrets_data_source.go b/internal/service/secretsmanager/secrets_data_source.go index aedf8250011..e33af97f5ed 100644 --- a/internal/service/secretsmanager/secrets_data_source.go +++ b/internal/service/secretsmanager/secrets_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package secretsmanager import ( @@ -34,7 +37,7 @@ func DataSourceSecrets() *schema.Resource { func dataSourceSecretsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecretsManagerConn() + conn := meta.(*conns.AWSClient).SecretsManagerConn(ctx) input := &secretsmanager.ListSecretsInput{} diff --git a/internal/service/secretsmanager/secrets_data_source_test.go b/internal/service/secretsmanager/secrets_data_source_test.go index 41711a99032..a247c79e653 100644 --- a/internal/service/secretsmanager/secrets_data_source_test.go +++ b/internal/service/secretsmanager/secrets_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package secretsmanager_test import ( diff --git a/internal/service/secretsmanager/service_package_gen.go b/internal/service/secretsmanager/service_package_gen.go index 39a6791d4de..93068a3ee7b 100644 --- a/internal/service/secretsmanager/service_package_gen.go +++ b/internal/service/secretsmanager/service_package_gen.go @@ -5,6 +5,10 @@ package secretsmanager import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + secretsmanager_sdkv1 "github.com/aws/aws-sdk-go/service/secretsmanager" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -73,4 +77,13 @@ func (p *servicePackage) ServicePackageName() string { return names.SecretsManager } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*secretsmanager_sdkv1.SecretsManager, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return secretsmanager_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/secretsmanager/sweep.go b/internal/service/secretsmanager/sweep.go
index ae5c87c2808..a2402603dd7 100644
--- a/internal/service/secretsmanager/sweep.go
+++ b/internal/service/secretsmanager/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
@@ -11,7 +14,6 @@ import (
 	"github.com/aws/aws-sdk-go/service/secretsmanager"
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
@@ -29,11 +31,11 @@ func init() {
 func sweepSecretPolicies(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).SecretsManagerConn()
+	conn := client.SecretsManagerConn(ctx)
 
 	err = conn.ListSecretsPagesWithContext(ctx, &secretsmanager.ListSecretsInput{}, func(page *secretsmanager.ListSecretsOutput, lastPage bool) bool {
 		if len(page.SecretList) == 0 {
@@ -72,11 +74,11 @@ func sweepSecretPolicies(region string) error {
 func sweepSecrets(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).SecretsManagerConn()
+	conn := client.SecretsManagerConn(ctx)
 
 	err = conn.ListSecretsPagesWithContext(ctx, &secretsmanager.ListSecretsInput{}, func(page *secretsmanager.ListSecretsOutput, lastPage bool) bool {
 		if len(page.SecretList) == 0 {
diff --git a/internal/service/secretsmanager/tags_gen.go b/internal/service/secretsmanager/tags_gen.go
index 19d10332f0e..bff4ae1b05d 100644
--- a/internal/service/secretsmanager/tags_gen.go
+++ b/internal/service/secretsmanager/tags_gen.go
@@ -43,9 +43,9 @@ func KeyValueTags(ctx context.Context, tags []*secretsmanager.Tag) tftags.KeyVal
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns secretsmanager service tags from Context.
+// getTagsIn returns secretsmanager service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*secretsmanager.Tag {
+func getTagsIn(ctx context.Context) []*secretsmanager.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -55,17 +55,17 @@ func GetTagsIn(ctx context.Context) []*secretsmanager.Tag {
 	return nil
 }
 
-// SetTagsOut sets secretsmanager service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*secretsmanager.Tag) {
+// setTagsOut sets secretsmanager service tags in Context.
+func setTagsOut(ctx context.Context, tags []*secretsmanager.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates secretsmanager service tags.
+// updateTags updates secretsmanager service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn secretsmanageriface.SecretsManagerAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn secretsmanageriface.SecretsManagerAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -105,5 +105,5 @@ func UpdateTags(ctx context.Context, conn secretsmanageriface.SecretsManagerAPI,
 // UpdateTags updates secretsmanager service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).SecretsManagerConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).SecretsManagerConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/secretsmanager/validate.go b/internal/service/secretsmanager/validate.go
index 5e6d9aa965e..36dc04fb3d2 100644
--- a/internal/service/secretsmanager/validate.go
+++ b/internal/service/secretsmanager/validate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package secretsmanager
 
 import (
diff --git a/internal/service/secretsmanager/validate_test.go b/internal/service/secretsmanager/validate_test.go
index 777902fb37c..3b7b6bfc748 100644
--- a/internal/service/secretsmanager/validate_test.go
+++ b/internal/service/secretsmanager/validate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package secretsmanager
 
 import (
diff --git a/internal/service/securityhub/account.go b/internal/service/securityhub/account.go
index 903e018d813..93e59f2f6bc 100644
--- a/internal/service/securityhub/account.go
+++ b/internal/service/securityhub/account.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -73,7 +76,7 @@ func ResourceAccount() *schema.Resource {
 func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.EnableSecurityHubInput{
 		EnableDefaultStandards: aws.Bool(d.Get("enable_default_standards").(bool)),
@@ -97,7 +100,7 @@ func resourceAccountCreate(ctx context.Context, d *schema.ResourceData, meta int
 func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	_, err := FindStandardsSubscriptions(ctx, conn, &securityhub.GetEnabledStandardsInput{})
@@ -130,7 +133,7 @@ func resourceAccountRead(ctx context.Context, d *schema.ResourceData, meta inter
 func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.UpdateSecurityHubConfigurationInput{
 		ControlFindingGenerator: aws.String(d.Get("control_finding_generator").(string)),
@@ -148,7 +151,7 @@ func resourceAccountUpdate(ctx context.Context, d *schema.ResourceData, meta int
 func resourceAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	log.Printf("[DEBUG] Deleting Security Hub Account: %s", d.Id())
 	_, err := tfresource.RetryWhenAWSErrMessageContains(ctx, adminAccountNotFoundTimeout, func() (interface{}, error) {
diff --git a/internal/service/securityhub/account_test.go b/internal/service/securityhub/account_test.go
index 43147e84df6..49f61769a5b 100644
--- a/internal/service/securityhub/account_test.go
+++ b/internal/service/securityhub/account_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -160,7 +163,7 @@ func testAccCheckAccountExists(ctx context.Context, n string) resource.TestCheck
 			return fmt.Errorf("No Security Hub Account ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		_, err := tfsecurityhub.FindStandardsSubscriptions(ctx, conn, &securityhub.GetEnabledStandardsInput{})
@@ -170,7 +173,7 @@ func testAccCheckAccountExists(ctx context.Context, n string) resource.TestCheck
 func testAccCheckAccountDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_account" {
diff --git a/internal/service/securityhub/action_target.go b/internal/service/securityhub/action_target.go
index fee2a519b7e..a5d0f7942a5 100644
--- a/internal/service/securityhub/action_target.go
+++ b/internal/service/securityhub/action_target.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -58,7 +61,7 @@ func ResourceActionTarget() *schema.Resource {
 func resourceActionTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	description := d.Get("description").(string)
 	name := d.Get("name").(string)
 	identifier := d.Get("identifier").(string)
@@ -92,7 +95,7 @@ func resourceActionTargetParseIdentifier(identifier string) (string, error) {
 func resourceActionTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	log.Printf("[DEBUG] Reading Security Hub Action Targets to find %s", d.Id())
@@ -124,7 +127,7 @@ func resourceActionTargetRead(ctx context.Context, d *schema.ResourceData, meta
 func resourceActionTargetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.UpdateActionTargetInput{
 		ActionTargetArn: aws.String(d.Id()),
@@ -161,7 +164,7 @@ func ActionTargetCheckExists(ctx context.Context, conn *securityhub.SecurityHub,
 func resourceActionTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	log.Printf("[DEBUG] Deleting Security Hub Action Target %s", d.Id())
 
 	_, err := conn.DeleteActionTargetWithContext(ctx, &securityhub.DeleteActionTargetInput{
diff --git a/internal/service/securityhub/action_target_test.go b/internal/service/securityhub/action_target_test.go
index 986762d1134..953e56c0d74 100644
--- a/internal/service/securityhub/action_target_test.go
+++ b/internal/service/securityhub/action_target_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -142,7 +145,7 @@ func testAccCheckActionTargetExists(ctx context.Context, n string) resource.Test
 			return fmt.Errorf("No Security Hub custom action ARN is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		action, err := tfsecurityhub.ActionTargetCheckExists(ctx, conn, rs.Primary.ID)
@@ -160,7 +163,7 @@ func testAccCheckActionTargetExists(ctx context.Context, n string) resource.Test
 func testAccCheckActionTargetDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_action_target" {
diff --git a/internal/service/securityhub/arn.go b/internal/service/securityhub/arn.go
index 105541e3021..a7484cd00f5 100644
--- a/internal/service/securityhub/arn.go
+++ b/internal/service/securityhub/arn.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -17,7 +20,7 @@ func StandardsControlARNToStandardsSubscriptionARN(inputARN string) (string, err
 	parsedARN, err := arn.Parse(inputARN)
 
 	if err != nil {
-		return "", fmt.Errorf("error parsing ARN (%s): %w", inputARN, err)
+		return "", fmt.Errorf("parsing ARN (%s): %w", inputARN, err)
 	}
 
 	if actual, expected := parsedARN.Service, ARNService; actual != expected {
diff --git a/internal/service/securityhub/arn_test.go b/internal/service/securityhub/arn_test.go
index 50877fa10a5..e3e6c5ee559 100644
--- a/internal/service/securityhub/arn_test.go
+++ b/internal/service/securityhub/arn_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -19,12 +22,12 @@ func TestStandardsControlARNToStandardsSubscriptionARN(t *testing.T) {
 		{
 			TestName:      "empty ARN",
 			InputARN:      "",
-			ExpectedError: regexp.MustCompile(`error parsing ARN`),
+			ExpectedError: regexp.MustCompile(`parsing ARN`),
 		},
 		{
 			TestName:      "unparsable ARN",
 			InputARN:      "test",
-			ExpectedError: regexp.MustCompile(`error parsing ARN`),
+			ExpectedError: regexp.MustCompile(`parsing ARN`),
 		},
 		{
 			TestName: "invalid ARN service",
diff --git a/internal/service/securityhub/find.go b/internal/service/securityhub/find.go
index 1cd2aee03ff..4ec936dba2a 100644
--- a/internal/service/securityhub/find.go
+++ b/internal/service/securityhub/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
diff --git a/internal/service/securityhub/finding_aggregator.go b/internal/service/securityhub/finding_aggregator.go
index 5fd7237fb49..8034e342381 100644
--- a/internal/service/securityhub/finding_aggregator.go
+++ b/internal/service/securityhub/finding_aggregator.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -55,7 +58,7 @@ func ResourceFindingAggregator() *schema.Resource {
 func resourceFindingAggregatorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	linkingMode := d.Get("linking_mode").(string)
@@ -82,7 +85,7 @@ func resourceFindingAggregatorCreate(ctx context.Context, d *schema.ResourceData
 func resourceFindingAggregatorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	aggregatorArn := d.Id()
@@ -137,7 +140,7 @@ func FindingAggregatorCheckExists(ctx context.Context, conn *securityhub.Securit
 func resourceFindingAggregatorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	aggregatorArn := d.Id()
@@ -165,7 +168,7 @@ func resourceFindingAggregatorUpdate(ctx context.Context, d *schema.ResourceData
 func resourceFindingAggregatorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	aggregatorArn := d.Id()
diff --git a/internal/service/securityhub/finding_aggregator_test.go b/internal/service/securityhub/finding_aggregator_test.go
index 33692e29fcb..78fb144bc32 100644
--- a/internal/service/securityhub/finding_aggregator_test.go
+++ b/internal/service/securityhub/finding_aggregator_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -91,7 +94,7 @@ func testAccCheckFindingAggregatorExists(ctx context.Context, n string) resource
 			return fmt.Errorf("No Security Hub finding aggregator ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		_, err := conn.GetFindingAggregatorWithContext(ctx, &securityhub.GetFindingAggregatorInput{
 			FindingAggregatorArn: &rs.Primary.ID,
@@ -107,7 +110,7 @@ func testAccCheckFindingAggregatorExists(ctx context.Context, n string) resource
 func testAccCheckFindingAggregatorDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_finding_aggregator" {
diff --git a/internal/service/securityhub/generate.go b/internal/service/securityhub/generate.go
index 27f24e8a506..4c686ec0ac2 100644
--- a/internal/service/securityhub/generate.go
+++ b/internal/service/securityhub/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package securityhub
diff --git a/internal/service/securityhub/insight.go b/internal/service/securityhub/insight.go
index c5288ac9b5a..da8b2c5c953 100644
--- a/internal/service/securityhub/insight.go
+++ b/internal/service/securityhub/insight.go
@@ -1,8 +1,10 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
 	"context"
-	"fmt"
 	"log"
 	"strconv"
@@ -145,7 +147,7 @@ func ResourceInsight() *schema.Resource {
 }
 
 func resourceInsightCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	name := d.Get("name").(string)
@@ -161,11 +163,11 @@ func resourceInsightCreate(ctx context.Context, d *schema.ResourceData, meta int
 	output, err := conn.CreateInsightWithContext(ctx, input)
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error creating Security Hub Insight (%s): %w", name, err))
+		return diag.Errorf("creating Security Hub Insight (%s): %s", name, err)
 	}
 
 	if output == nil {
-		return diag.FromErr(fmt.Errorf("error creating Security Hub Insight (%s): empty output", name))
+		return diag.Errorf("creating Security Hub Insight (%s): empty output", name)
 	}
 
 	d.SetId(aws.StringValue(output.InsightArn))
@@ -174,7 +176,7 @@ func resourceInsightCreate(ctx context.Context, d *schema.ResourceData, meta int
 }
 
 func resourceInsightRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	insight, err := FindInsight(ctx, conn, d.Id())
@@ -185,12 +187,12 @@ func resourceInsightRead(ctx context.Context, d *schema.ResourceData, meta inter
 	}
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error reading Security Hub Insight (%s): %w", d.Id(), err))
+		return diag.Errorf("reading Security Hub Insight (%s): %s", d.Id(), err)
 	}
 
 	if insight == nil {
 		if d.IsNewResource() {
-			return diag.FromErr(fmt.Errorf("error reading Security Hub Insight (%s): empty output", d.Id()))
+			return diag.Errorf("reading Security Hub Insight (%s): empty output", d.Id())
 		}
 		log.Printf("[WARN] Security Hub Insight (%s) not found, removing from state", d.Id())
 		d.SetId("")
@@ -199,7 +201,7 @@ func resourceInsightRead(ctx context.Context, d *schema.ResourceData, meta inter
 	d.Set("arn", insight.InsightArn)
 
 	if err := d.Set("filters", flattenSecurityFindingFilters(insight.Filters)); err != nil {
-		return diag.FromErr(fmt.Errorf("error setting filters: %w", err))
+		return diag.Errorf("setting filters: %s", err)
 	}
 
 	d.Set("group_by_attribute", insight.GroupByAttribute)
 	d.Set("name", insight.Name)
@@ -208,7 +210,7 @@ func resourceInsightRead(ctx context.Context, d *schema.ResourceData, meta inter
 }
 
 func resourceInsightUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.UpdateInsightInput{
 		InsightArn: aws.String(d.Id()),
@@ -229,14 +231,14 @@ func resourceInsightUpdate(ctx context.Context, d *schema.ResourceData, meta int
 	_, err := conn.UpdateInsightWithContext(ctx, input)
 
 	if err != nil {
-		return diag.FromErr(fmt.Errorf("error updating Security Hub Insight (%s): %w", d.Id(), err))
+		return diag.Errorf("updating Security Hub Insight (%s): %s", d.Id(), err)
 	}
 
 	return resourceInsightRead(ctx, d, meta)
 }
 
 func resourceInsightDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.DeleteInsightInput{
 		InsightArn: aws.String(d.Id()),
@@ -248,7 +250,7 @@ func resourceInsightDelete(ctx context.Context, d *schema.ResourceData, meta int
 		if tfawserr.ErrCodeEquals(err, securityhub.ErrCodeResourceNotFoundException) {
 			return nil
 		}
-		return diag.FromErr(fmt.Errorf("error deleting Security Hub Insight (%s): %w", d.Id(), err))
+		return diag.Errorf("deleting Security Hub Insight (%s): %s", d.Id(), err)
 	}
 
 	return nil
diff --git a/internal/service/securityhub/insight_test.go b/internal/service/securityhub/insight_test.go
index e013952614a..a2ac057ccec 100644
--- a/internal/service/securityhub/insight_test.go
+++ b/internal/service/securityhub/insight_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -449,7 +452,7 @@ func testAccInsight_WorkflowStatus(t *testing.T) {
 func testAccCheckInsightDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_insight" {
@@ -484,7 +487,7 @@ func testAccCheckInsightExists(ctx context.Context, n string) resource.TestCheck
 			return fmt.Errorf("Not found: %s", n)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		insight, err := tfsecurityhub.FindInsight(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/securityhub/invite_accepter.go b/internal/service/securityhub/invite_accepter.go
index 0deacf68b9f..e1771e5c7b5 100644
--- a/internal/service/securityhub/invite_accepter.go
+++ b/internal/service/securityhub/invite_accepter.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -41,7 +44,7 @@ func ResourceInviteAccepter() *schema.Resource {
 func resourceInviteAccepterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	log.Print("[DEBUG] Accepting Security Hub invitation")
 	invitationId, err := resourceInviteAccepterGetInvitationID(ctx, conn, d.Get("master_id").(string))
@@ -85,7 +88,7 @@ func resourceInviteAccepterGetInvitationID(ctx context.Context, conn *securityhu
 func resourceInviteAccepterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	log.Print("[DEBUG] Reading Security Hub master account")
 
 	resp, err := conn.GetMasterAccountWithContext(ctx, &securityhub.GetMasterAccountInput{})
@@ -114,7 +117,7 @@ func resourceInviteAccepterRead(ctx context.Context, d *schema.ResourceData, met
 func resourceInviteAccepterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	log.Print("[DEBUG] Disassociating from Security Hub master account")
 
 	_, err := conn.DisassociateFromMasterAccountWithContext(ctx, &securityhub.DisassociateFromMasterAccountInput{})
diff --git a/internal/service/securityhub/invite_accepter_test.go b/internal/service/securityhub/invite_accepter_test.go
index 21881597a19..e10f7c7f80b 100644
--- a/internal/service/securityhub/invite_accepter_test.go
+++ b/internal/service/securityhub/invite_accepter_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -50,7 +53,7 @@ func testAccCheckInviteAccepterExists(ctx context.Context, resourceName string)
 			return fmt.Errorf("Not found: %s", resourceName)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		resp, err := conn.GetMasterAccountWithContext(ctx, &securityhub.GetMasterAccountInput{})
@@ -68,7 +71,7 @@ func testAccCheckInviteAccepterExists(ctx context.Context, resourceName string)
 func testAccCheckInviteAccepterDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_invite_accepter" {
diff --git a/internal/service/securityhub/member.go b/internal/service/securityhub/member.go
index e42b8c65ae4..ad2466b150e 100644
--- a/internal/service/securityhub/member.go
+++ b/internal/service/securityhub/member.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -69,7 +72,7 @@ func ResourceMember() *schema.Resource {
 func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	accountID := d.Get("account_id").(string)
 	input := &securityhub.CreateMembersInput{
@@ -115,7 +118,7 @@ func resourceMemberCreate(ctx context.Context, d *schema.ResourceData, meta inte
 func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	member, err := FindMemberByAccountID(ctx, conn, d.Id())
@@ -142,7 +145,7 @@ func resourceMemberRead(ctx context.Context, d *schema.ResourceData, meta interf
 func resourceMemberDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	_, err := conn.DisassociateMembersWithContext(ctx, &securityhub.DisassociateMembersInput{
 		AccountIds: aws.StringSlice([]string{d.Id()}),
diff --git a/internal/service/securityhub/member_test.go b/internal/service/securityhub/member_test.go
index ceba0bfbb6d..a0f9e70d515 100644
--- a/internal/service/securityhub/member_test.go
+++ b/internal/service/securityhub/member_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -81,7 +84,7 @@ func testAccCheckMemberExists(ctx context.Context, n string, v *securityhub.Memb
 			return fmt.Errorf("Not found: %s", n)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		output, err := tfsecurityhub.FindMemberByAccountID(ctx, conn, rs.Primary.ID)
@@ -97,7 +100,7 @@ func testAccCheckMemberExists(ctx context.Context, n string, v *securityhub.Memb
 func testAccCheckMemberDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_member" {
diff --git a/internal/service/securityhub/organization_admin_account.go b/internal/service/securityhub/organization_admin_account.go
index 41d210b995f..069d567a6ef 100644
--- a/internal/service/securityhub/organization_admin_account.go
+++ b/internal/service/securityhub/organization_admin_account.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -38,7 +41,7 @@ func ResourceOrganizationAdminAccount() *schema.Resource {
 func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	adminAccountID := d.Get("admin_account_id").(string)
@@ -63,7 +66,7 @@ func resourceOrganizationAdminAccountCreate(ctx context.Context, d *schema.Resou
 func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	adminAccount, err := FindAdminAccount(ctx, conn, d.Id())
@@ -94,7 +97,7 @@ func resourceOrganizationAdminAccountRead(ctx context.Context, d *schema.Resourc
 func resourceOrganizationAdminAccountDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.DisableOrganizationAdminAccountInput{
 		AdminAccountId: aws.String(d.Id()),
diff --git a/internal/service/securityhub/organization_admin_account_test.go b/internal/service/securityhub/organization_admin_account_test.go
index f5900a0b649..c80bdcb17ae 100644
--- a/internal/service/securityhub/organization_admin_account_test.go
+++ b/internal/service/securityhub/organization_admin_account_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -98,7 +101,7 @@ func testAccOrganizationAdminAccount_MultiRegion(t *testing.T) {
 func testAccCheckOrganizationAdminAccountDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_organization_admin_account" {
@@ -135,7 +138,7 @@ func testAccCheckOrganizationAdminAccountExists(ctx context.Context, resourceNam
 			return fmt.Errorf("Not found: %s", resourceName)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		adminAccount, err := tfsecurityhub.FindAdminAccount(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/securityhub/organization_configuration.go b/internal/service/securityhub/organization_configuration.go
index e6fea23ae26..2a087b90bc0 100644
--- a/internal/service/securityhub/organization_configuration.go
+++ b/internal/service/securityhub/organization_configuration.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -41,7 +44,7 @@ func ResourceOrganizationConfiguration() *schema.Resource {
 func resourceOrganizationConfigurationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	input := &securityhub.UpdateOrganizationConfigurationInput{
 		AutoEnable: aws.Bool(d.Get("auto_enable").(bool)),
@@ -66,7 +69,7 @@ func resourceOrganizationConfigurationUpdate(ctx context.Context, d *schema.Reso
 func resourceOrganizationConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	output, err := conn.DescribeOrganizationConfigurationWithContext(ctx, &securityhub.DescribeOrganizationConfigurationInput{})
diff --git a/internal/service/securityhub/organization_configuration_test.go b/internal/service/securityhub/organization_configuration_test.go
index a3e98a77bf1..69701f3cd0b 100644
--- a/internal/service/securityhub/organization_configuration_test.go
+++ b/internal/service/securityhub/organization_configuration_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -89,7 +92,7 @@ func testAccOrganizationConfigurationExists(ctx context.Context, n string) resou
 			return fmt.Errorf("Not found: %s", n)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		_, err := conn.DescribeOrganizationConfigurationWithContext(ctx, &securityhub.DescribeOrganizationConfigurationInput{})
diff --git a/internal/service/securityhub/product_subscription.go b/internal/service/securityhub/product_subscription.go
index 70998dd5e23..f7b556d778f 100644
--- a/internal/service/securityhub/product_subscription.go
+++ b/internal/service/securityhub/product_subscription.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub
 
 import (
@@ -42,7 +45,7 @@ func ResourceProductSubscription() *schema.Resource {
 func resourceProductSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	productArn := d.Get("product_arn").(string)
 
 	log.Printf("[DEBUG] Enabling Security Hub Product Subscription for product %s", productArn)
@@ -62,7 +65,7 @@ func resourceProductSubscriptionCreate(ctx context.Context, d *schema.ResourceDa
 func resourceProductSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 
 	productArn, productSubscriptionArn, err := ProductSubscriptionParseID(d.Id())
@@ -123,7 +126,7 @@ func ProductSubscriptionParseID(id string) (string, string, error) {
 func resourceProductSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SecurityHubConn()
+	conn := meta.(*conns.AWSClient).SecurityHubConn(ctx)
 	log.Printf("[DEBUG] Disabling Security Hub Product Subscription %s", d.Id())
 
 	_, productSubscriptionArn, err := ProductSubscriptionParseID(d.Id())
diff --git a/internal/service/securityhub/product_subscription_test.go b/internal/service/securityhub/product_subscription_test.go
index 908da3b937c..6080ccd3b25 100644
--- a/internal/service/securityhub/product_subscription_test.go
+++ b/internal/service/securityhub/product_subscription_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
@@ -35,7 +38,7 @@ func testAccProductSubscription_basic(t *testing.T) {
 				// AWS product subscriptions happen automatically when enabling Security Hub.
 				// Here we attempt to remove one so we can attempt to (re-)enable it.
 				PreConfig: func() {
-					conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+					conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 					productSubscriptionARN := arn.ARN{
 						AccountID: acctest.AccountID(),
 						Partition: acctest.Partition(),
@@ -80,7 +83,7 @@ func testAccCheckProductSubscriptionExists(ctx context.Context, n string) resour
 			return fmt.Errorf("Not found: %s", n)
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		_, productSubscriptionArn, err := tfsecurityhub.ProductSubscriptionParseID(rs.Primary.ID)
@@ -104,7 +107,7 @@ func testAccCheckProductSubscriptionExists(ctx context.Context, n string) resour
 func testAccCheckProductSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_securityhub_product_subscription" {
diff --git a/internal/service/securityhub/securityhub_test.go b/internal/service/securityhub/securityhub_test.go
index afb7de01a03..42cf0495a83 100644
--- a/internal/service/securityhub/securityhub_test.go
+++ b/internal/service/securityhub/securityhub_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package securityhub_test
 
 import (
diff --git a/internal/service/securityhub/service_package.go b/internal/service/securityhub/service_package.go
new file mode 100644
index 00000000000..c59033f052a
--- /dev/null
+++ b/internal/service/securityhub/service_package.go
@@ -0,0 +1,28 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+package securityhub
+
+import (
+	"context"
+
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	request_sdkv1 "github.com/aws/aws-sdk-go/aws/request"
+	securityhub_sdkv1 "github.com/aws/aws-sdk-go/service/securityhub"
+	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
+)
+
+// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *securityhub_sdkv1.SecurityHub) (*securityhub_sdkv1.SecurityHub, error) {
+	// Reference: https://github.com/hashicorp/terraform-provider-aws/issues/17996.
+	conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) {
+		switch r.Operation.Name {
+		case "EnableOrganizationAdminAccount":
+			if tfawserr.ErrCodeEquals(r.Error, securityhub_sdkv1.ErrCodeResourceConflictException) {
+				r.Retryable = aws_sdkv1.Bool(true)
+			}
+		}
+	})
+
+	return conn, nil
+}
diff --git a/internal/service/securityhub/service_package_gen.go b/internal/service/securityhub/service_package_gen.go
index 08f68974641..b9b5a844b4c 100644
--- a/internal/service/securityhub/service_package_gen.go
+++ b/internal/service/securityhub/service_package_gen.go
@@ -5,6 +5,10 @@ package securityhub
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	securityhub_sdkv1 "github.com/aws/aws-sdk-go/service/securityhub"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -76,4 +80,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.SecurityHub
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*securityhub_sdkv1.SecurityHub, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return securityhub_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/securityhub/standards_control.go b/internal/service/securityhub/standards_control.go
index 45b6aa8bebb..4e2254fc6c6 100644
--- a/internal/service/securityhub/standards_control.go
+++ b/internal/service/securityhub/standards_control.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp,
+// SPDX-License-Identifier: MPL-2.0 + package securityhub import ( @@ -83,7 +86,7 @@ func ResourceStandardsControl() *schema.Resource { } func resourceStandardsControlRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SecurityHubConn() + conn := meta.(*conns.AWSClient).SecurityHubConn(ctx) standardsSubscriptionARN, err := StandardsControlARNToStandardsSubscriptionARN(d.Id()) @@ -100,7 +103,7 @@ func resourceStandardsControlRead(ctx context.Context, d *schema.ResourceData, m } if err != nil { - return diag.Errorf("error reading Security Hub Standards Control (%s): %s", d.Id(), err) + return diag.Errorf("reading Security Hub Standards Control (%s): %s", d.Id(), err) } d.Set("control_id", control.ControlId) @@ -118,7 +121,7 @@ func resourceStandardsControlRead(ctx context.Context, d *schema.ResourceData, m } func resourceStandardsControlPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SecurityHubConn() + conn := meta.(*conns.AWSClient).SecurityHubConn(ctx) d.SetId(d.Get("standards_control_arn").(string)) @@ -132,7 +135,7 @@ func resourceStandardsControlPut(ctx context.Context, d *schema.ResourceData, me _, err := conn.UpdateStandardsControlWithContext(ctx, input) if err != nil { - return diag.Errorf("error updating Security Hub Standards Control (%s): %s", d.Id(), err) + return diag.Errorf("updating Security Hub Standards Control (%s): %s", d.Id(), err) } return resourceStandardsControlRead(ctx, d, meta) diff --git a/internal/service/securityhub/standards_control_test.go b/internal/service/securityhub/standards_control_test.go index e174ac10753..2bca68f5ddc 100644 --- a/internal/service/securityhub/standards_control_test.go +++ b/internal/service/securityhub/standards_control_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package securityhub_test import ( @@ -95,7 +98,7 @@ func testAccCheckStandardsControlExists(ctx context.Context, n string, control * return fmt.Errorf("No Security Hub Standards Control ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx) standardsSubscriptionARN, err := tfsecurityhub.StandardsControlARNToStandardsSubscriptionARN(rs.Primary.ID) diff --git a/internal/service/securityhub/standards_subscription.go b/internal/service/securityhub/standards_subscription.go index e6a41d16bc8..0f470871ca9 100644 --- a/internal/service/securityhub/standards_subscription.go +++ b/internal/service/securityhub/standards_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package securityhub import ( @@ -41,7 +44,7 @@ func ResourceStandardsSubscription() *schema.Resource { func resourceStandardsSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecurityHubConn() + conn := meta.(*conns.AWSClient).SecurityHubConn(ctx) standardsARN := d.Get("standards_arn").(string) input := &securityhub.BatchEnableStandardsInput{ @@ -67,7 +70,7 @@ func resourceStandardsSubscriptionCreate(ctx context.Context, d *schema.Resource func resourceStandardsSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SecurityHubConn() + conn := meta.(*conns.AWSClient).SecurityHubConn(ctx) output, err := FindStandardsSubscriptionByARN(ctx, conn, d.Id()) @@ -88,7 +91,7 @@ func resourceStandardsSubscriptionRead(ctx context.Context, d *schema.ResourceDa func resourceStandardsSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn 
:= meta.(*conns.AWSClient).SecurityHubConn() + conn := meta.(*conns.AWSClient).SecurityHubConn(ctx) log.Printf("[DEBUG] Deleting Security Hub Standards Subscription: %s", d.Id()) _, err := conn.BatchDisableStandardsWithContext(ctx, &securityhub.BatchDisableStandardsInput{ diff --git a/internal/service/securityhub/standards_subscription_test.go b/internal/service/securityhub/standards_subscription_test.go index 0ef3d247840..5f84f55ee7e 100644 --- a/internal/service/securityhub/standards_subscription_test.go +++ b/internal/service/securityhub/standards_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package securityhub_test import ( @@ -75,7 +78,7 @@ func testAccCheckStandardsSubscriptionExists(ctx context.Context, n string, stan return fmt.Errorf("No Security Hub Standards Subscription ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx) output, err := tfsecurityhub.FindStandardsSubscriptionByARN(ctx, conn, rs.Primary.ID) @@ -91,7 +94,7 @@ func testAccCheckStandardsSubscriptionExists(ctx context.Context, n string, stan func testAccCheckStandardsSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SecurityHubConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_securityhub_standards_subscription" { diff --git a/internal/service/securityhub/status.go b/internal/service/securityhub/status.go index d8898f67040..8aab1dc5382 100644 --- a/internal/service/securityhub/status.go +++ b/internal/service/securityhub/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package securityhub import ( diff --git a/internal/service/securityhub/tags_gen.go b/internal/service/securityhub/tags_gen.go index 9e852ce8ae1..165c99bc28c 100644 --- a/internal/service/securityhub/tags_gen.go +++ b/internal/service/securityhub/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists securityhub service tags. +// listTags lists securityhub service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn securityhubiface.SecurityHubAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn securityhubiface.SecurityHubAPI, identifier string) (tftags.KeyValueTags, error) { input := &securityhub.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn securityhubiface.SecurityHubAPI, identif // ListTags lists securityhub service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SecurityHubConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SecurityHubConn(ctx), identifier) if err != nil { return err @@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from securityhub service tags. +// KeyValueTags creates tftags.KeyValueTags from securityhub service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns securityhub service tags from Context. +// getTagsIn returns securityhub service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets securityhub service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets securityhub service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates securityhub service tags. +// updateTags updates securityhub service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn securityhubiface.SecurityHubAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn securityhubiface.SecurityHubAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn securityhubiface.SecurityHubAPI, ident // UpdateTags updates securityhub service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SecurityHubConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SecurityHubConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/securityhub/wait.go b/internal/service/securityhub/wait.go index a2f404a1fde..0f15cb72e69 100644 --- a/internal/service/securityhub/wait.go +++ b/internal/service/securityhub/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package securityhub import ( diff --git a/internal/service/securitylake/generate.go b/internal/service/securitylake/generate.go new file mode 100644 index 00000000000..311d0a09137 --- /dev/null +++ b/internal/service/securitylake/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package securitylake diff --git a/internal/service/securitylake/service_package_gen.go b/internal/service/securitylake/service_package_gen.go index 62cab68ea0d..49d5d8ed0a3 100644 --- a/internal/service/securitylake/service_package_gen.go +++ b/internal/service/securitylake/service_package_gen.go @@ -5,6 +5,9 @@ package securitylake import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + securitylake_sdkv2 "github.com/aws/aws-sdk-go-v2/service/securitylake" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,4 +34,17 @@ func (p *servicePackage) ServicePackageName() string { return names.SecurityLake } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*securitylake_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return securitylake_sdkv2.NewFromConfig(cfg, func(o *securitylake_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = securitylake_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/serverlessrepo/application_data_source.go b/internal/service/serverlessrepo/application_data_source.go index 0f39521d938..ef3a5408cc1 100644 --- a/internal/service/serverlessrepo/application_data_source.go +++ b/internal/service/serverlessrepo/application_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package serverlessrepo import ( @@ -52,7 +55,7 @@ func DataSourceApplication() *schema.Resource { func dataSourceApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServerlessRepoConn() + conn := meta.(*conns.AWSClient).ServerlessRepoConn(ctx) applicationID := d.Get("application_id").(string) semanticVersion := d.Get("semantic_version").(string) diff --git a/internal/service/serverlessrepo/application_data_source_test.go b/internal/service/serverlessrepo/application_data_source_test.go index 1edcd76fdc4..b32554c8b35 100644 --- a/internal/service/serverlessrepo/application_data_source_test.go +++ b/internal/service/serverlessrepo/application_data_source_test.go @@ -1,8 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package serverlessrepo_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository" @@ -32,10 +34,6 @@ func TestAccServerlessRepoApplicationDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrSet(datasourceName, "required_capabilities.#"), ), }, - { - Config: testAccApplicationDataSourceConfig_nonExistent(), - ExpectError: regexp.MustCompile(`error getting Serverless Application Repository application`), - }, }, }) } @@ -79,10 +77,6 @@ func TestAccServerlessRepoApplicationDataSource_versioned(t *testing.T) { resource.TestCheckTypeSetElemAttr(datasourceName, "required_capabilities.*", "CAPABILITY_RESOURCE_POLICY"), ), }, - { - Config: testAccApplicationDataSourceConfig_versionedNonExistent(appARN), - ExpectError: regexp.MustCompile(`error getting Serverless Application Repository application`), - }, }, }) } @@ -109,20 +103,6 @@ data "aws_serverlessapplicationrepository_application" "secrets_manager_postgres `, appARN) } -func testAccApplicationDataSourceConfig_nonExistent() string { - return ` -data "aws_caller_identity" "current" {} - -data "aws_partition" "current" {} - -data "aws_region" "current" {} - -data "aws_serverlessapplicationrepository_application" "no_such_function" { - application_id = "arn:${data.aws_partition.current.partition}:serverlessrepo:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:applications/ThisFunctionDoesNotExist" -} -` -} - func testAccApplicationDataSourceConfig_versioned(appARN, version string) string { return fmt.Sprintf(` data "aws_serverlessapplicationrepository_application" "secrets_manager_postgres_single_user_rotator" { @@ -131,12 +111,3 @@ data "aws_serverlessapplicationrepository_application" "secrets_manager_postgres } `, appARN, version) } - -func testAccApplicationDataSourceConfig_versionedNonExistent(appARN string) string { - return fmt.Sprintf(` -data 
"aws_serverlessapplicationrepository_application" "secrets_manager_postgres_single_user_rotator" { - application_id = %[1]q - semantic_version = "42.13.7" -} -`, appARN) -} diff --git a/internal/service/serverlessrepo/cloudformation_stack.go b/internal/service/serverlessrepo/cloudformation_stack.go index 46094060acd..a922188dbf9 100644 --- a/internal/service/serverlessrepo/cloudformation_stack.go +++ b/internal/service/serverlessrepo/cloudformation_stack.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package serverlessrepo -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "log" @@ -97,7 +100,7 @@ func ResourceCloudFormationStack() *schema.Resource { func resourceCloudFormationStackCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - cfConn := meta.(*conns.AWSClient).CloudFormationConn() + cfConn := meta.(*conns.AWSClient).CloudFormationConn(ctx) changeSet, err := createCloudFormationChangeSet(ctx, d, meta.(*conns.AWSClient)) if err != nil { @@ -131,8 +134,8 @@ func resourceCloudFormationStackCreate(ctx context.Context, d *schema.ResourceDa func resourceCloudFormationStackRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - serverlessConn := meta.(*conns.AWSClient).ServerlessRepoConn() - cfConn := meta.(*conns.AWSClient).CloudFormationConn() + serverlessConn := meta.(*conns.AWSClient).ServerlessRepoConn(ctx) + cfConn := meta.(*conns.AWSClient).CloudFormationConn(ctx) stack, err := tfcloudformation.FindStackByID(ctx, cfConn, d.Id()) @@ -165,7 +168,7 @@ func resourceCloudFormationStackRead(ctx context.Context, d *schema.ResourceData return sdkdiag.AppendErrorf(diags, "describing Serverless Application Repository CloudFormation Stack (%s): missing required tag \"%s\"", d.Id(), 
cloudFormationStackTagSemanticVersion) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) if err = d.Set("outputs", flattenCloudFormationOutputs(stack.Outputs)); err != nil { return sdkdiag.AppendErrorf(diags, "to set outputs: %s", err) @@ -216,7 +219,7 @@ func flattenParameterDefinitions(parameterDefinitions []*serverlessrepo.Paramete func resourceCloudFormationStackUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - cfConn := meta.(*conns.AWSClient).CloudFormationConn() + cfConn := meta.(*conns.AWSClient).CloudFormationConn(ctx) changeSet, err := createCloudFormationChangeSet(ctx, d, meta.(*conns.AWSClient)) if err != nil { @@ -248,7 +251,7 @@ func resourceCloudFormationStackUpdate(ctx context.Context, d *schema.ResourceDa func resourceCloudFormationStackDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - cfConn := meta.(*conns.AWSClient).CloudFormationConn() + cfConn := meta.(*conns.AWSClient).CloudFormationConn(ctx) requestToken := id.UniqueId() input := &cloudformation.DeleteStackInput{ @@ -281,7 +284,7 @@ func resourceCloudFormationStackImport(ctx context.Context, d *schema.ResourceDa } } - cfConn := meta.(*conns.AWSClient).CloudFormationConn() + cfConn := meta.(*conns.AWSClient).CloudFormationConn(ctx) stack, err := tfcloudformation.FindStackByID(ctx, cfConn, stackID) if err != nil { return nil, fmt.Errorf("describing Serverless Application Repository CloudFormation Stack (%s): %w", stackID, err) @@ -293,15 +296,15 @@ func resourceCloudFormationStackImport(ctx context.Context, d *schema.ResourceDa } func createCloudFormationChangeSet(ctx context.Context, d *schema.ResourceData, client *conns.AWSClient) (*cloudformation.DescribeChangeSetOutput, error) { - serverlessConn := client.ServerlessRepoConn() - cfConn := client.CloudFormationConn() + serverlessConn := client.ServerlessRepoConn(ctx) + cfConn := 
client.CloudFormationConn(ctx) stackName := d.Get("name").(string) changeSetRequest := serverlessrepo.CreateCloudFormationChangeSetRequest{ StackName: aws.String(stackName), ApplicationId: aws.String(d.Get("application_id").(string)), Capabilities: flex.ExpandStringSet(d.Get("capabilities").(*schema.Set)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("semantic_version"); ok { changeSetRequest.SemanticVersion = aws.String(v.(string)) diff --git a/internal/service/serverlessrepo/cloudformation_stack_test.go b/internal/service/serverlessrepo/cloudformation_stack_test.go index d6a6e7fba2e..c303d8a3b12 100644 --- a/internal/service/serverlessrepo/cloudformation_stack_test.go +++ b/internal/service/serverlessrepo/cloudformation_stack_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package serverlessrepo_test import ( @@ -281,7 +284,7 @@ func testAccCheckCloudFormationStackExists(ctx context.Context, n string, stack return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) params := &cloudformation.DescribeStacksInput{ StackName: aws.String(rs.Primary.ID), } @@ -546,7 +549,7 @@ resource "aws_serverlessapplicationrepository_cloudformation_stack" "postgres-ro func testAccCheckAMIDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ami" { @@ -579,7 +582,7 @@ func testAccCheckAMIDestroy(ctx context.Context) resource.TestCheckFunc { func testAccCheckCloudFormationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_cloudformation_stack" { diff --git a/internal/service/serverlessrepo/find.go b/internal/service/serverlessrepo/find.go index 7fcda5a246d..dda60b0c33c 100644 --- a/internal/service/serverlessrepo/find.go +++ b/internal/service/serverlessrepo/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package serverlessrepo import ( diff --git a/internal/service/serverlessrepo/generate.go b/internal/service/serverlessrepo/generate.go index b2ff1369473..71aeaf48552 100644 --- a/internal/service/serverlessrepo/generate.go +++ b/internal/service/serverlessrepo/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package serverlessrepo diff --git a/internal/service/serverlessrepo/service_package_gen.go b/internal/service/serverlessrepo/service_package_gen.go index c53afa08d1d..3639a275c06 100644 --- a/internal/service/serverlessrepo/service_package_gen.go +++ b/internal/service/serverlessrepo/service_package_gen.go @@ -5,6 +5,10 @@ package serverlessrepo import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + serverlessapplicationrepository_sdkv1 "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -43,4 +47,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ServerlessRepo } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*serverlessapplicationrepository_sdkv1.ServerlessApplicationRepository, error) { + sess := config["session"].(*session_sdkv1.Session) + + return serverlessapplicationrepository_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/serverlessrepo/tags_gen.go b/internal/service/serverlessrepo/tags_gen.go index 2b74ab779bb..109d9bd9882 100644 --- a/internal/service/serverlessrepo/tags_gen.go +++ b/internal/service/serverlessrepo/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*serverlessapplicationrepository.T return tftags.New(ctx, m) } -// GetTagsIn returns serverlessrepo service tags from Context. +// getTagsIn returns serverlessrepo service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*serverlessapplicationrepository.Tag { +func getTagsIn(ctx context.Context) []*serverlessapplicationrepository.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*serverlessapplicationrepository.Tag { return nil } -// SetTagsOut sets serverlessrepo service tags in Context. -func SetTagsOut(ctx context.Context, tags []*serverlessapplicationrepository.Tag) { +// setTagsOut sets serverlessrepo service tags in Context. +func setTagsOut(ctx context.Context, tags []*serverlessapplicationrepository.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/serverlessrepo/wait.go b/internal/service/serverlessrepo/wait.go index e19060c42fc..6afa90687fa 100644 --- a/internal/service/serverlessrepo/wait.go +++ b/internal/service/serverlessrepo/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package serverlessrepo import ( diff --git a/internal/service/servicecatalog/budget_resource_association.go b/internal/service/servicecatalog/budget_resource_association.go index fae96c296e8..d81ffe83795 100644 --- a/internal/service/servicecatalog/budget_resource_association.go +++ b/internal/service/servicecatalog/budget_resource_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -48,7 +51,7 @@ func ResourceBudgetResourceAssociation() *schema.Resource { func resourceBudgetResourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.AssociateBudgetWithResourceInput{ BudgetName: aws.String(d.Get("budget_name").(string)), @@ -91,7 +94,7 @@ func resourceBudgetResourceAssociationCreate(ctx context.Context, d *schema.Reso func resourceBudgetResourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) budgetName, resourceID, err := BudgetResourceAssociationParseID(d.Id()) @@ -123,7 +126,7 @@ func resourceBudgetResourceAssociationRead(ctx context.Context, d *schema.Resour func resourceBudgetResourceAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) budgetName, resourceID, err := BudgetResourceAssociationParseID(d.Id()) diff --git a/internal/service/servicecatalog/budget_resource_association_test.go b/internal/service/servicecatalog/budget_resource_association_test.go index 17beb53eabe..2b7e804e869 100644 --- a/internal/service/servicecatalog/budget_resource_association_test.go +++ b/internal/service/servicecatalog/budget_resource_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -70,7 +73,7 @@ func TestAccServiceCatalogBudgetResourceAssociation_disappears(t *testing.T) { func testAccCheckBudgetResourceAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_budget_resource_association" { @@ -112,7 +115,7 @@ func testAccCheckBudgetResourceAssociationExists(ctx context.Context, resourceNa return fmt.Errorf("could not parse ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) _, err = tfservicecatalog.WaitBudgetResourceAssociationReady(ctx, conn, budgetName, resourceID, tfservicecatalog.BudgetResourceAssociationReadyTimeout) diff --git a/internal/service/servicecatalog/constraint.go b/internal/service/servicecatalog/constraint.go index 0803655213d..c27738af865 100644 --- a/internal/service/servicecatalog/constraint.go +++ b/internal/service/servicecatalog/constraint.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -84,7 +87,7 @@ func ResourceConstraint() *schema.Resource { func resourceConstraintCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.CreateConstraintInput{ IdempotencyToken: aws.String(id.UniqueId()), @@ -142,7 +145,7 @@ func resourceConstraintCreate(ctx context.Context, d *schema.ResourceData, meta func resourceConstraintRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitConstraintReady(ctx, conn, d.Get("accept_language").(string), d.Id(), d.Timeout(schema.TimeoutRead)) @@ -184,7 +187,7 @@ func resourceConstraintRead(ctx context.Context, d *schema.ResourceData, meta in func resourceConstraintUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.UpdateConstraintInput{ Id: aws.String(d.Id()), @@ -229,7 +232,7 @@ func resourceConstraintUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceConstraintDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DeleteConstraintInput{ Id: aws.String(d.Id()), diff --git a/internal/service/servicecatalog/constraint_data_source.go b/internal/service/servicecatalog/constraint_data_source.go index 9701e70d4ce..bec183cb34c 100644 --- 
a/internal/service/servicecatalog/constraint_data_source.go +++ b/internal/service/servicecatalog/constraint_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -66,7 +69,7 @@ func DataSourceConstraint() *schema.Resource { func dataSourceConstraintRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitConstraintReady(ctx, conn, d.Get("accept_language").(string), d.Get("id").(string), d.Timeout(schema.TimeoutRead)) diff --git a/internal/service/servicecatalog/constraint_data_source_test.go b/internal/service/servicecatalog/constraint_data_source_test.go index cd49df32ed1..5daa01a1e34 100644 --- a/internal/service/servicecatalog/constraint_data_source_test.go +++ b/internal/service/servicecatalog/constraint_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/constraint_test.go b/internal/service/servicecatalog/constraint_test.go index 548a59e6ae8..1836d172053 100644 --- a/internal/service/servicecatalog/constraint_test.go +++ b/internal/service/servicecatalog/constraint_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -105,7 +108,7 @@ func TestAccServiceCatalogConstraint_update(t *testing.T) { func testAccCheckConstraintDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_constraint" { @@ -143,7 +146,7 @@ func testAccCheckConstraintExists(ctx context.Context, resourceName string) reso return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DescribeConstraintInput{ Id: aws.String(rs.Primary.ID), diff --git a/internal/service/servicecatalog/diff.go b/internal/service/servicecatalog/diff.go index 78263ddd1b6..94a977c9310 100644 --- a/internal/service/servicecatalog/diff.go +++ b/internal/service/servicecatalog/diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( diff --git a/internal/service/servicecatalog/enum.go b/internal/service/servicecatalog/enum.go index 90f3ea3ccfe..3873d07f40a 100644 --- a/internal/service/servicecatalog/enum.go +++ b/internal/service/servicecatalog/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog const ( diff --git a/internal/service/servicecatalog/find.go b/internal/service/servicecatalog/find.go index 2df619e32b7..ab05b8269f8 100644 --- a/internal/service/servicecatalog/find.go +++ b/internal/service/servicecatalog/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( diff --git a/internal/service/servicecatalog/generate.go b/internal/service/servicecatalog/generate.go index e6bcd29e4f8..bf6c4db1fc5 100644 --- a/internal/service/servicecatalog/generate.go +++ b/internal/service/servicecatalog/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package servicecatalog diff --git a/internal/service/servicecatalog/id.go b/internal/service/servicecatalog/id.go index a15f1a759a2..a977cd77e89 100644 --- a/internal/service/servicecatalog/id.go +++ b/internal/service/servicecatalog/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( diff --git a/internal/service/servicecatalog/launch_paths_data_source.go b/internal/service/servicecatalog/launch_paths_data_source.go index e4c77f51ba2..29d81dbf06f 100644 --- a/internal/service/servicecatalog/launch_paths_data_source.go +++ b/internal/service/servicecatalog/launch_paths_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -72,7 +75,7 @@ func DataSourceLaunchPaths() *schema.Resource { func dataSourceLaunchPathsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig summaries, err := WaitLaunchPathsReady(ctx, conn, d.Get("accept_language").(string), d.Get("product_id").(string), d.Timeout(schema.TimeoutRead)) diff --git a/internal/service/servicecatalog/launch_paths_data_source_test.go b/internal/service/servicecatalog/launch_paths_data_source_test.go index 2fa50be8995..a089c326b52 100644 --- a/internal/service/servicecatalog/launch_paths_data_source_test.go +++ b/internal/service/servicecatalog/launch_paths_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/organizations_access.go b/internal/service/servicecatalog/organizations_access.go index 41b963353ab..d736c82fa60 100644 --- a/internal/service/servicecatalog/organizations_access.go +++ b/internal/service/servicecatalog/organizations_access.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -35,7 +38,7 @@ func ResourceOrganizationsAccess() *schema.Resource { func resourceOrganizationsAccessCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) d.SetId(meta.(*conns.AWSClient).AccountID) @@ -63,7 +66,7 @@ func resourceOrganizationsAccessCreate(ctx context.Context, d *schema.ResourceDa func resourceOrganizationsAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitOrganizationsAccessStable(ctx, conn, d.Timeout(schema.TimeoutRead)) @@ -93,7 +96,7 @@ func resourceOrganizationsAccessRead(ctx context.Context, d *schema.ResourceData func resourceOrganizationsAccessDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) // During create, if enabled = "true", then Enable Access and vice versa // During delete, the opposite diff --git a/internal/service/servicecatalog/organizations_access_test.go b/internal/service/servicecatalog/organizations_access_test.go index e221b71efce..9d33d6c9b4c 100644 --- a/internal/service/servicecatalog/organizations_access_test.go +++ b/internal/service/servicecatalog/organizations_access_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -40,7 +43,7 @@ func testAccOrganizationsAccess_basic(t *testing.T) { func testAccCheckOrganizationsAccessDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_organizations_access" { @@ -72,7 +75,7 @@ func testAccCheckOrganizationsAccessExists(ctx context.Context, resourceName str return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := tfservicecatalog.WaitOrganizationsAccessStable(ctx, conn, tfservicecatalog.OrganizationsAccessStableTimeout) diff --git a/internal/service/servicecatalog/portfolio.go b/internal/service/servicecatalog/portfolio.go index c6f237917af..2a1e79d3300 100644 --- a/internal/service/servicecatalog/portfolio.go +++ b/internal/service/servicecatalog/portfolio.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -75,14 +78,14 @@ func ResourcePortfolio() *schema.Resource { } func resourcePortfolioCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) name := d.Get("name").(string) input := &servicecatalog.CreatePortfolioInput{ AcceptLanguage: aws.String(AcceptLanguageEnglish), DisplayName: aws.String(name), IdempotencyToken: aws.String(id.UniqueId()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -106,7 +109,7 @@ func resourcePortfolioCreate(ctx context.Context, d *schema.ResourceData, meta i func resourcePortfolioRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := FindPortfolioByID(ctx, conn, d.Id()) @@ -127,14 +130,14 @@ func resourcePortfolioRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("name", portfolioDetail.DisplayName) d.Set("provider_name", portfolioDetail.ProviderName) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourcePortfolioUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.UpdatePortfolioInput{ AcceptLanguage: aws.String(AcceptLanguageEnglish), @@ -175,7 +178,7 @@ func resourcePortfolioUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourcePortfolioDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := 
meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) log.Printf("[DEBUG] Deleting Service Catalog Portfolio: %s", d.Id()) _, err := conn.DeletePortfolioWithContext(ctx, &servicecatalog.DeletePortfolioInput{ diff --git a/internal/service/servicecatalog/portfolio_constraints_data_source.go b/internal/service/servicecatalog/portfolio_constraints_data_source.go index c0f59305c68..d064657a0a1 100644 --- a/internal/service/servicecatalog/portfolio_constraints_data_source.go +++ b/internal/service/servicecatalog/portfolio_constraints_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -74,7 +77,7 @@ func DataSourcePortfolioConstraints() *schema.Resource { func dataSourcePortfolioConstraintsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitPortfolioConstraintsReady(ctx, conn, d.Get("accept_language").(string), d.Get("portfolio_id").(string), d.Get("product_id").(string), d.Timeout(schema.TimeoutRead)) diff --git a/internal/service/servicecatalog/portfolio_constraints_data_source_test.go b/internal/service/servicecatalog/portfolio_constraints_data_source_test.go index f836d59603e..dd689f04ac5 100644 --- a/internal/service/servicecatalog/portfolio_constraints_data_source_test.go +++ b/internal/service/servicecatalog/portfolio_constraints_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/portfolio_data_source.go b/internal/service/servicecatalog/portfolio_data_source.go index 810717cd177..e80a520159f 100644 --- a/internal/service/servicecatalog/portfolio_data_source.go +++ b/internal/service/servicecatalog/portfolio_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -62,7 +65,7 @@ func DataSourcePortfolio() *schema.Resource { func dataSourcePortfolioRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DescribePortfolioInput{ Id: aws.String(d.Get("id").(string)), diff --git a/internal/service/servicecatalog/portfolio_data_source_test.go b/internal/service/servicecatalog/portfolio_data_source_test.go index 4db3ec0d0d2..236bd396787 100644 --- a/internal/service/servicecatalog/portfolio_data_source_test.go +++ b/internal/service/servicecatalog/portfolio_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/portfolio_share.go b/internal/service/servicecatalog/portfolio_share.go index 0c88a9a0e5a..568639c6177 100644 --- a/internal/service/servicecatalog/portfolio_share.go +++ b/internal/service/servicecatalog/portfolio_share.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -98,7 +101,7 @@ func ResourcePortfolioShare() *schema.Resource { func resourcePortfolioShareCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.CreatePortfolioShareInput{ PortfolioId: aws.String(d.Get("portfolio_id").(string)), @@ -178,7 +181,7 @@ func resourcePortfolioShareCreate(ctx context.Context, d *schema.ResourceData, m func resourcePortfolioShareRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) portfolioID, shareType, principalID, err := PortfolioShareParseResourceID(d.Id()) @@ -216,7 +219,7 @@ func resourcePortfolioShareRead(ctx context.Context, d *schema.ResourceData, met func resourcePortfolioShareUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.UpdatePortfolioShareInput{ PortfolioId: aws.String(d.Get("portfolio_id").(string)), @@ -274,7 +277,7 @@ func resourcePortfolioShareUpdate(ctx context.Context, d *schema.ResourceData, m func resourcePortfolioShareDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DeletePortfolioShareInput{ PortfolioId: aws.String(d.Get("portfolio_id").(string)), diff --git a/internal/service/servicecatalog/portfolio_share_test.go b/internal/service/servicecatalog/portfolio_share_test.go 
index 93121ce3e19..cbae8a242cf 100644 --- a/internal/service/servicecatalog/portfolio_share_test.go +++ b/internal/service/servicecatalog/portfolio_share_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -182,7 +185,7 @@ func testAccPortfolioShare_disappears(t *testing.T) { func testAccCheckPortfolioShareDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_portfolio_share" { @@ -221,7 +224,7 @@ func testAccCheckPortfolioShareExists(ctx context.Context, n string) resource.Te return fmt.Errorf("No Service Catalog Portfolio Share ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) _, err := tfservicecatalog.FindPortfolioShare(ctx, conn, rs.Primary.Attributes["portfolio_id"], diff --git a/internal/service/servicecatalog/portfolio_test.go b/internal/service/servicecatalog/portfolio_test.go index 38d54a7ceb7..c75cae5406e 100644 --- a/internal/service/servicecatalog/portfolio_test.go +++ b/internal/service/servicecatalog/portfolio_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -133,7 +136,7 @@ func testAccCheckPortfolioExists(ctx context.Context, n string, v *servicecatalo return fmt.Errorf("No Service Catalog Portfolio ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := tfservicecatalog.FindPortfolioByID(ctx, conn, rs.Primary.ID) @@ -149,7 +152,7 @@ func testAccCheckPortfolioExists(ctx context.Context, n string, v *servicecatalo func testAccCheckServiceCatlaogPortfolioDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_portfolio" { diff --git a/internal/service/servicecatalog/principal_portfolio_association.go b/internal/service/servicecatalog/principal_portfolio_association.go index 7ddab527fb3..7b3a98060e9 100644 --- a/internal/service/servicecatalog/principal_portfolio_association.go +++ b/internal/service/servicecatalog/principal_portfolio_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -63,7 +66,7 @@ func ResourcePrincipalPortfolioAssociation() *schema.Resource { func resourcePrincipalPortfolioAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.AssociatePrincipalWithPortfolioInput{ PortfolioId: aws.String(d.Get("portfolio_id").(string)), @@ -114,7 +117,7 @@ func resourcePrincipalPortfolioAssociationCreate(ctx context.Context, d *schema. 
func resourcePrincipalPortfolioAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) acceptLanguage, principalARN, portfolioID, err := PrincipalPortfolioAssociationParseID(d.Id()) @@ -152,7 +155,7 @@ func resourcePrincipalPortfolioAssociationRead(ctx context.Context, d *schema.Re func resourcePrincipalPortfolioAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) acceptLanguage, principalARN, portfolioID, err := PrincipalPortfolioAssociationParseID(d.Id()) diff --git a/internal/service/servicecatalog/principal_portfolio_association_test.go b/internal/service/servicecatalog/principal_portfolio_association_test.go index a316e2477df..db2ab4c93c8 100644 --- a/internal/service/servicecatalog/principal_portfolio_association_test.go +++ b/internal/service/servicecatalog/principal_portfolio_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -71,7 +74,7 @@ func TestAccServiceCatalogPrincipalPortfolioAssociation_disappears(t *testing.T) func testAccCheckPrincipalPortfolioAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_principal_portfolio_association" { @@ -113,7 +116,7 @@ func testAccCheckPrincipalPortfolioAssociationExists(ctx context.Context, resour return fmt.Errorf("could not parse ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) _, err = tfservicecatalog.WaitPrincipalPortfolioAssociationReady(ctx, conn, acceptLanguage, principalARN, portfolioID, tfservicecatalog.PrincipalPortfolioAssociationReadyTimeout) diff --git a/internal/service/servicecatalog/product.go b/internal/service/servicecatalog/product.go index f7ca32bc256..01b6ba38df5 100644 --- a/internal/service/servicecatalog/product.go +++ b/internal/service/servicecatalog/product.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -162,7 +165,7 @@ func ResourceProduct() *schema.Resource { func resourceProductCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.CreateProductInput{ IdempotencyToken: aws.String(id.UniqueId()), @@ -172,7 +175,7 @@ func resourceProductCreate(ctx context.Context, d *schema.ResourceData, meta int ProvisioningArtifactParameters: expandProvisioningArtifactParameters( d.Get("provisioning_artifact_parameters").([]interface{})[0].(map[string]interface{}), ), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("accept_language"); ok { @@ -248,7 +251,7 @@ func resourceProductCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceProductRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitProductReady(ctx, conn, d.Get("accept_language").(string), d.Id(), d.Timeout(schema.TimeoutRead)) @@ -283,14 +286,14 @@ func resourceProductRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("support_url", pvs.SupportUrl) d.Set("type", pvs.Type) - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceProductUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &servicecatalog.UpdateProductInput{ @@ -365,7 +368,7 @@ func resourceProductUpdate(ctx context.Context, d *schema.ResourceData, meta int func 
resourceProductDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DeleteProductInput{ Id: aws.String(d.Id()), diff --git a/internal/service/servicecatalog/product_data_source.go b/internal/service/servicecatalog/product_data_source.go index ed7759cabba..4afc5da8b07 100644 --- a/internal/service/servicecatalog/product_data_source.go +++ b/internal/service/servicecatalog/product_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -88,7 +91,7 @@ func DataSourceProduct() *schema.Resource { func dataSourceProductRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitProductReady(ctx, conn, d.Get("accept_language").(string), d.Get("id").(string), d.Timeout(schema.TimeoutRead)) diff --git a/internal/service/servicecatalog/product_data_source_test.go b/internal/service/servicecatalog/product_data_source_test.go index 5cde16fc89b..945ff774cb7 100644 --- a/internal/service/servicecatalog/product_data_source_test.go +++ b/internal/service/servicecatalog/product_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/product_portfolio_association.go b/internal/service/servicecatalog/product_portfolio_association.go index f8085ab4396..28c5274799f 100644 --- a/internal/service/servicecatalog/product_portfolio_association.go +++ b/internal/service/servicecatalog/product_portfolio_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -61,7 +64,7 @@ func ResourceProductPortfolioAssociation() *schema.Resource { func resourceProductPortfolioAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.AssociateProductWithPortfolioInput{ PortfolioId: aws.String(d.Get("portfolio_id").(string)), @@ -112,7 +115,7 @@ func resourceProductPortfolioAssociationCreate(ctx context.Context, d *schema.Re func resourceProductPortfolioAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) acceptLanguage, portfolioID, productID, err := ProductPortfolioAssociationParseID(d.Id()) @@ -146,7 +149,7 @@ func resourceProductPortfolioAssociationRead(ctx context.Context, d *schema.Reso func resourceProductPortfolioAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) acceptLanguage, portfolioID, productID, err := ProductPortfolioAssociationParseID(d.Id()) diff --git a/internal/service/servicecatalog/product_portfolio_association_test.go b/internal/service/servicecatalog/product_portfolio_association_test.go index 03d95266d8c..e8af4cec28c 100644 --- a/internal/service/servicecatalog/product_portfolio_association_test.go +++ b/internal/service/servicecatalog/product_portfolio_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -74,7 +77,7 @@ func TestAccServiceCatalogProductPortfolioAssociation_disappears(t *testing.T) { func testAccCheckProductPortfolioAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_product_portfolio_association" { @@ -116,7 +119,7 @@ func testAccCheckProductPortfolioAssociationExists(ctx context.Context, resource return fmt.Errorf("could not parse ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) _, err = tfservicecatalog.WaitProductPortfolioAssociationReady(ctx, conn, acceptLanguage, portfolioID, productID, tfservicecatalog.ProductPortfolioAssociationReadyTimeout) diff --git a/internal/service/servicecatalog/product_test.go b/internal/service/servicecatalog/product_test.go index 6082bca5014..38247be0507 100644 --- a/internal/service/servicecatalog/product_test.go +++ b/internal/service/servicecatalog/product_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -214,7 +217,7 @@ func TestAccServiceCatalogProduct_physicalID(t *testing.T) { func testAccCheckProductDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_product" { @@ -252,7 +255,7 @@ func testAccCheckProductExists(ctx context.Context, resourceName string) resourc return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DescribeProductAsAdminInput{ Id: aws.String(rs.Primary.ID), diff --git a/internal/service/servicecatalog/provisioned_product.go b/internal/service/servicecatalog/provisioned_product.go index aa2c22920a4..a91adf4023b 100644 --- a/internal/service/servicecatalog/provisioned_product.go +++ b/internal/service/servicecatalog/provisioned_product.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -277,12 +280,12 @@ func refreshOutputsDiff(_ context.Context, diff *schema.ResourceDiff, meta inter func resourceProvisionedProductCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.ProvisionProductInput{ ProvisionToken: aws.String(id.UniqueId()), ProvisionedProductName: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("accept_language"); ok { @@ -374,7 +377,7 @@ func resourceProvisionedProductCreate(ctx context.Context, d *schema.ResourceDat func resourceProvisionedProductRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) // There are two API operations for getting information about provisioned products: // 1. 
DescribeProvisionedProduct (used in WaitProvisionedProductReady) and @@ -476,14 +479,14 @@ func resourceProvisionedProductRead(ctx context.Context, d *schema.ResourceData, d.Set("path_id", recordOutput.RecordDetail.PathId) - SetTagsOut(ctx, Tags(recordKeyValueTags(ctx, recordOutput.RecordDetail.RecordTags))) + setTagsOut(ctx, Tags(recordKeyValueTags(ctx, recordOutput.RecordDetail.RecordTags))) return diags } func resourceProvisionedProductUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.UpdateProvisionedProductInput{ UpdateToken: aws.String(id.UniqueId()), @@ -526,7 +529,7 @@ func resourceProvisionedProductUpdate(ctx context.Context, d *schema.ResourceDat } if d.HasChanges("tags", "tags_all") { - input.Tags = GetTagsIn(ctx) + input.Tags = getTagsIn(ctx) } err := retry.RetryContext(ctx, d.Timeout(schema.TimeoutUpdate), func() *retry.RetryError { @@ -560,7 +563,7 @@ func resourceProvisionedProductUpdate(ctx context.Context, d *schema.ResourceDat func resourceProvisionedProductDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.TerminateProvisionedProductInput{ TerminateToken: aws.String(id.UniqueId()), diff --git a/internal/service/servicecatalog/provisioned_product_test.go b/internal/service/servicecatalog/provisioned_product_test.go index b2b329790e9..998588d09e8 100644 --- a/internal/service/servicecatalog/provisioned_product_test.go +++ b/internal/service/servicecatalog/provisioned_product_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -204,60 +207,55 @@ func TestAccServiceCatalogProvisionedProduct_stackSetProvisioningPreferences(t * }) } -// NOTE: This test is dependent on a v5.0.0 feature which fixes how -// products are modified when provisioning_artifact_parameters are -// changed. Once v5.0.0 is released, this test can be re-enabled. -// Ref: https://github.com/hashicorp/terraform-provider-aws/pull/31061 -// -// func TestAccServiceCatalogProvisionedProduct_ProductName_update(t *testing.T) { -// ctx := acctest.Context(t) -// resourceName := "aws_servicecatalog_provisioned_product.test" - -// rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) -// productName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) -// productNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) -// domain := fmt.Sprintf("http://%s", acctest.RandomDomainName()) -// var pprod servicecatalog.ProvisionedProductDetail - -// resource.ParallelTest(t, resource.TestCase{ -// PreCheck: func() { acctest.PreCheck(ctx, t) }, -// ErrorCheck: acctest.ErrorCheck(t, servicecatalog.EndpointsID), -// ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, -// CheckDestroy: testAccCheckProvisionedProductDestroy(ctx), -// Steps: []resource.TestStep{ -// { -// Config: testAccProvisionedProductConfig_productName(rName, domain, acctest.DefaultEmailAddress, "10.1.0.0/16", productName), -// Check: resource.ComposeTestCheckFunc( -// testAccCheckProvisionedProductExists(ctx, resourceName, &pprod), -// resource.TestCheckResourceAttrPair(resourceName, "product_name", "aws_servicecatalog_product.test", "name"), -// resource.TestCheckResourceAttrPair(resourceName, "product_id", "aws_servicecatalog_product.test", "id"), -// ), -// }, -// { -// // update the product name, but keep provisioned product name as-is to trigger an in-place update -// Config: testAccProvisionedProductConfig_productName(rName, domain, acctest.DefaultEmailAddress, 
"10.1.0.0/16", productNameUpdated), -// Check: resource.ComposeTestCheckFunc( -// testAccCheckProvisionedProductExists(ctx, resourceName, &pprod), -// resource.TestCheckResourceAttrPair(resourceName, "product_name", "aws_servicecatalog_product.test", "name"), -// resource.TestCheckResourceAttrPair(resourceName, "product_id", "aws_servicecatalog_product.test", "id"), -// ), -// }, -// { -// ResourceName: resourceName, -// ImportState: true, -// ImportStateVerify: true, -// ImportStateVerifyIgnore: []string{ -// "accept_language", -// "ignore_errors", -// "product_name", -// "provisioning_artifact_name", -// "provisioning_parameters", -// "retain_physical_resources", -// }, -// }, -// }, -// }) -// } +func TestAccServiceCatalogProvisionedProduct_ProductName_update(t *testing.T) { + ctx := acctest.Context(t) + resourceName := "aws_servicecatalog_provisioned_product.test" + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + productName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + productNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + domain := fmt.Sprintf("http://%s", acctest.RandomDomainName()) + var pprod servicecatalog.ProvisionedProductDetail + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, servicecatalog.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckProvisionedProductDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProvisionedProductConfig_productName(rName, domain, acctest.DefaultEmailAddress, "10.1.0.0/16", productName), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedProductExists(ctx, resourceName, &pprod), + resource.TestCheckResourceAttrPair(resourceName, "product_name", "aws_servicecatalog_product.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "product_id", "aws_servicecatalog_product.test", "id"), + ), + }, + { + // 
update the product name, but keep provisioned product name as-is to trigger an in-place update + Config: testAccProvisionedProductConfig_productName(rName, domain, acctest.DefaultEmailAddress, "10.1.0.0/16", productNameUpdated), + Check: resource.ComposeTestCheckFunc( + testAccCheckProvisionedProductExists(ctx, resourceName, &pprod), + resource.TestCheckResourceAttrPair(resourceName, "product_name", "aws_servicecatalog_product.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "product_id", "aws_servicecatalog_product.test", "id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "accept_language", + "ignore_errors", + "product_name", + "provisioning_artifact_name", + "provisioning_parameters", + "retain_physical_resources", + }, + }, + }, + }) +} // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/26271 func TestAccServiceCatalogProvisionedProduct_ProvisioningArtifactName_update(t *testing.T) { @@ -463,7 +461,7 @@ func TestAccServiceCatalogProvisionedProduct_errorOnUpdate(t *testing.T) { func testAccCheckProvisionedProductDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_provisioned_product" { @@ -499,7 +497,7 @@ func testAccCheckProvisionedProductExists(ctx context.Context, resourceName stri return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) out, err := tfservicecatalog.WaitProvisionedProductReady(ctx, conn, tfservicecatalog.AcceptLanguageEnglish, rs.Primary.ID, "", tfservicecatalog.ProvisionedProductReadyTimeout) if 
err != nil { @@ -818,27 +816,27 @@ resource "aws_servicecatalog_provisioned_product" "test" { `, rName, vpcCidr, failureToleranceCount, maxConcurrencyCount)) } -// func testAccProvisionedProductConfig_productName(rName, domain, email, vpcCidr, productName string) string { -// return acctest.ConfigCompose(testAccProvisionedProductTemplateURLBaseConfig(productName, domain, email), -// fmt.Sprintf(` -// resource "aws_servicecatalog_provisioned_product" "test" { -// name = %[1]q -// product_name = aws_servicecatalog_product.test.name -// provisioning_artifact_name = aws_servicecatalog_product.test.provisioning_artifact_parameters[0].name -// path_id = data.aws_servicecatalog_launch_paths.test.summaries[0].path_id - -// provisioning_parameters { -// key = "VPCPrimaryCIDR" -// value = %[2]q -// } - -// provisioning_parameters { -// key = "LeaveMeEmpty" -// value = "" -// } -// } -// `, rName, vpcCidr)) -// } +func testAccProvisionedProductConfig_productName(rName, domain, email, vpcCidr, productName string) string { + return acctest.ConfigCompose(testAccProvisionedProductTemplateURLBaseConfig(productName, domain, email), + fmt.Sprintf(` +resource "aws_servicecatalog_provisioned_product" "test" { + name = %[1]q + product_name = aws_servicecatalog_product.test.name + provisioning_artifact_name = aws_servicecatalog_product.test.provisioning_artifact_parameters[0].name + path_id = data.aws_servicecatalog_launch_paths.test.summaries[0].path_id + + provisioning_parameters { + key = "VPCPrimaryCIDR" + value = %[2]q + } + + provisioning_parameters { + key = "LeaveMeEmpty" + value = "" + } +} +`, rName, vpcCidr)) +} func testAccProvisionedProductConfig_ProvisionedArtifactName_update(rName, domain, email, vpcCidr, artifactName string) string { return acctest.ConfigCompose(testAccProvisionedProductTemplateURLBaseConfig(rName, domain, email), diff --git a/internal/service/servicecatalog/provisioning_artifact.go b/internal/service/servicecatalog/provisioning_artifact.go index 
5a08d664cf2..7e30143a529 100644 --- a/internal/service/servicecatalog/provisioning_artifact.go +++ b/internal/service/servicecatalog/provisioning_artifact.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -112,7 +115,7 @@ func ResourceProvisioningArtifact() *schema.Resource { func resourceProvisioningArtifactCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) parameters := make(map[string]interface{}) parameters["description"] = d.Get("description") @@ -171,7 +174,7 @@ func resourceProvisioningArtifactCreate(ctx context.Context, d *schema.ResourceD func resourceProvisioningArtifactRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) artifactID, productID, err := ProvisioningArtifactParseID(d.Id()) @@ -221,7 +224,7 @@ func resourceProvisioningArtifactRead(ctx context.Context, d *schema.ResourceDat func resourceProvisioningArtifactUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) if d.HasChanges("accept_language", "active", "description", "guidance", "name", "product_id") { artifactID, productID, err := ProvisioningArtifactParseID(d.Id()) @@ -280,7 +283,7 @@ func resourceProvisioningArtifactUpdate(ctx context.Context, d *schema.ResourceD func resourceProvisioningArtifactDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := 
meta.(*conns.AWSClient).ServiceCatalogConn(ctx) artifactID, productID, err := ProvisioningArtifactParseID(d.Id()) diff --git a/internal/service/servicecatalog/provisioning_artifact_test.go b/internal/service/servicecatalog/provisioning_artifact_test.go index f9a8c28728f..72b7c25c3c7 100644 --- a/internal/service/servicecatalog/provisioning_artifact_test.go +++ b/internal/service/servicecatalog/provisioning_artifact_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -180,7 +183,7 @@ func TestAccServiceCatalogProvisioningArtifact_physicalID(t *testing.T) { func testAccCheckProvisioningArtifactDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_provisioning_artifact" { @@ -225,7 +228,7 @@ func testAccCheckProvisioningArtifactExists(ctx context.Context, resourceName st return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) artifactID, productID, err := tfservicecatalog.ProvisioningArtifactParseID(rs.Primary.ID) diff --git a/internal/service/servicecatalog/provisioning_artifacts_data_source.go b/internal/service/servicecatalog/provisioning_artifacts_data_source.go index 96747f40164..58a9ea1bf25 100644 --- a/internal/service/servicecatalog/provisioning_artifacts_data_source.go +++ b/internal/service/servicecatalog/provisioning_artifacts_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -74,7 +77,7 @@ func DataSourceProvisioningArtifacts() *schema.Resource { func dataSourceProvisioningArtifactsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) productID := d.Get("product_id").(string) input := &servicecatalog.ListProvisioningArtifactsInput{ diff --git a/internal/service/servicecatalog/provisioning_artifacts_data_source_test.go b/internal/service/servicecatalog/provisioning_artifacts_data_source_test.go index d54a10cbf66..598bfa6c118 100644 --- a/internal/service/servicecatalog/provisioning_artifacts_data_source_test.go +++ b/internal/service/servicecatalog/provisioning_artifacts_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/service_action.go b/internal/service/servicecatalog/service_action.go index 9e25cda35d9..2c9b3b54813 100644 --- a/internal/service/servicecatalog/service_action.go +++ b/internal/service/servicecatalog/service_action.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -91,7 +94,7 @@ func ResourceServiceAction() *schema.Resource { func resourceServiceActionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.CreateServiceActionInput{ IdempotencyToken: aws.String(id.UniqueId()), @@ -144,7 +147,7 @@ func resourceServiceActionCreate(ctx context.Context, d *schema.ResourceData, me func resourceServiceActionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitServiceActionReady(ctx, conn, d.Get("accept_language").(string), d.Id(), d.Timeout(schema.TimeoutRead)) @@ -178,7 +181,7 @@ func resourceServiceActionRead(ctx context.Context, d *schema.ResourceData, meta func resourceServiceActionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.UpdateServiceActionInput{ Id: aws.String(d.Id()), @@ -227,7 +230,7 @@ func resourceServiceActionUpdate(ctx context.Context, d *schema.ResourceData, me func resourceServiceActionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DeleteServiceActionInput{ Id: aws.String(d.Id()), diff --git a/internal/service/servicecatalog/service_action_test.go b/internal/service/servicecatalog/service_action_test.go index 2cfb4899dc0..b828e7a6e2f 100644 --- 
a/internal/service/servicecatalog/service_action_test.go +++ b/internal/service/servicecatalog/service_action_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -113,7 +116,7 @@ func TestAccServiceCatalogServiceAction_update(t *testing.T) { func testAccCheckServiceActionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_service_action" { @@ -151,7 +154,7 @@ func testAccCheckServiceActionExists(ctx context.Context, resourceName string) r return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DescribeServiceActionInput{ Id: aws.String(rs.Primary.ID), diff --git a/internal/service/servicecatalog/service_package_gen.go b/internal/service/servicecatalog/service_package_gen.go index d1737735d34..bb0eb4609cd 100644 --- a/internal/service/servicecatalog/service_package_gen.go +++ b/internal/service/servicecatalog/service_package_gen.go @@ -5,6 +5,10 @@ package servicecatalog import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + servicecatalog_sdkv1 "github.com/aws/aws-sdk-go/service/servicecatalog" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -115,4 +119,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ServiceCatalog } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this 
service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*servicecatalog_sdkv1.ServiceCatalog, error) { + sess := config["session"].(*session_sdkv1.Session) + + return servicecatalog_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/servicecatalog/servicecatalog_test.go b/internal/service/servicecatalog/servicecatalog_test.go index 7599a3d6c7b..34d707ecb6f 100644 --- a/internal/service/servicecatalog/servicecatalog_test.go +++ b/internal/service/servicecatalog/servicecatalog_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( diff --git a/internal/service/servicecatalog/status.go b/internal/service/servicecatalog/status.go index 8f72160aad1..207d75f3953 100644 --- a/internal/service/servicecatalog/status.go +++ b/internal/service/servicecatalog/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -36,11 +39,11 @@ func StatusProduct(ctx context.Context, conn *servicecatalog.ServiceCatalog, acc } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing product status: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing product status: %w", err) } if output == nil || output.ProductViewDetail == nil { - return nil, StatusUnavailable, fmt.Errorf("error describing product status: empty product view detail") + return nil, StatusUnavailable, fmt.Errorf("describing product status: empty product view detail") } return output, aws.StringValue(output.ProductViewDetail.Status), err @@ -60,11 +63,11 @@ func StatusTagOption(ctx context.Context, conn *servicecatalog.ServiceCatalog, i } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing tag option: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing tag option: %w", err) } if output == nil || output.TagOptionDetail == nil { - return nil, StatusUnavailable, fmt.Errorf("error describing tag option: empty tag option detail") + return nil, StatusUnavailable, fmt.Errorf("describing tag option: empty tag option detail") } return output.TagOptionDetail, servicecatalog.StatusAvailable, err @@ -83,11 +86,11 @@ func StatusPortfolioShareWithToken(ctx context.Context, conn *servicecatalog.Ser } if err != nil { - return nil, servicecatalog.ShareStatusError, fmt.Errorf("error describing portfolio share status: %w", err) + return nil, servicecatalog.ShareStatusError, fmt.Errorf("describing portfolio share status: %w", err) } if output == nil { - return nil, StatusUnavailable, fmt.Errorf("error describing portfolio share status: empty response") + return nil, StatusUnavailable, fmt.Errorf("describing portfolio share status: empty response") } return output, aws.StringValue(output.Status), err @@ -125,11 +128,11 @@ func StatusOrganizationsAccess(ctx 
context.Context, conn *servicecatalog.Service } if err != nil { - return nil, OrganizationAccessStatusError, fmt.Errorf("error getting Organizations Access: %w", err) + return nil, OrganizationAccessStatusError, fmt.Errorf("getting Organizations Access: %w", err) } if output == nil { - return nil, StatusUnavailable, fmt.Errorf("error getting Organizations Access: empty response") + return nil, StatusUnavailable, fmt.Errorf("getting Organizations Access: empty response") } return output, aws.StringValue(output.AccessStatus), err @@ -155,7 +158,7 @@ func StatusConstraint(ctx context.Context, conn *servicecatalog.ServiceCatalog, } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing constraint: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing constraint: %w", err) } if output == nil || output.ConstraintDetail == nil { @@ -179,7 +182,7 @@ func StatusProductPortfolioAssociation(ctx context.Context, conn *servicecatalog } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing product portfolio association: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing product portfolio association: %w", err) } if output == nil { @@ -209,11 +212,11 @@ func StatusServiceAction(ctx context.Context, conn *servicecatalog.ServiceCatalo } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing Service Action: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing Service Action: %w", err) } if output == nil || output.ServiceActionDetail == nil { - return nil, StatusUnavailable, fmt.Errorf("error describing Service Action: empty Service Action Detail") + return nil, StatusUnavailable, fmt.Errorf("describing Service Action: empty Service Action Detail") } return output.ServiceActionDetail, servicecatalog.StatusAvailable, nil @@ -231,7 +234,7 @@ func StatusBudgetResourceAssociation(ctx context.Context, conn 
*servicecatalog.S } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing tag option resource association: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing budget resource association: %w", err) } if output == nil { @@ -255,7 +258,7 @@ func StatusTagOptionResourceAssociation(ctx context.Context, conn *servicecatalo } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing tag option resource association: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing tag option resource association: %w", err) } if output == nil { @@ -302,7 +305,7 @@ func StatusPrincipalPortfolioAssociation(ctx context.Context, conn *servicecatal } if err != nil { - return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing principal portfolio association: %w", err) + return nil, servicecatalog.StatusFailed, fmt.Errorf("describing principal portfolio association: %w", err) } if output == nil { diff --git a/internal/service/servicecatalog/sweep.go b/internal/service/servicecatalog/sweep.go index 2dad31b8689..e378ac164b2 100644 --- a/internal/service/servicecatalog/sweep.go +++ b/internal/service/servicecatalog/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -14,7 +17,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -84,13 +86,13 @@ func init() { func sweepBudgetResourceAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -182,7 +184,7 @@ func sweepBudgetResourceAssociations(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Budget Resource (Product) Associations for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Budget Resource Associations for %s: %w", region, err)) } @@ -196,13 +198,13 @@ func sweepBudgetResourceAssociations(region string) error { func sweepConstraints(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -252,7 +254,7 @@ func sweepConstraints(region string) error { errs = multierror.Append(errs, 
fmt.Errorf("error describing Service Catalog Constraints for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Constraints for %s: %w", region, err)) } @@ -266,13 +268,13 @@ func sweepConstraints(region string) error { func sweepPrincipalPortfolioAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -325,7 +327,7 @@ func sweepPrincipalPortfolioAssociations(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Principal Portfolio Associations for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Principal Portfolio Associations for %s: %w", region, err)) } @@ -339,13 +341,13 @@ func sweepPrincipalPortfolioAssociations(region string) error { func sweepProductPortfolioAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -418,7 +420,7 @@ func sweepProductPortfolioAssociations(region string) error { 
errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Product Portfolio Associations for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Product Portfolio Associations for %s: %w", region, err)) } @@ -432,13 +434,13 @@ func sweepProductPortfolioAssociations(region string) error { func sweepProducts(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -470,7 +472,7 @@ func sweepProducts(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Products for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Products for %s: %w", region, err)) } @@ -484,13 +486,13 @@ func sweepProducts(region string) error { func sweepProvisionedProducts(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -525,7 +527,7 @@ func sweepProvisionedProducts(region string) error { errs = multierror.Append(errs, fmt.Errorf("error 
describing Service Catalog Provisioned Products for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Provisioned Products for %s: %w", region, err)) } @@ -539,13 +541,13 @@ func sweepProvisionedProducts(region string) error { func sweepProvisioningArtifacts(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -607,7 +609,7 @@ func sweepProvisioningArtifacts(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Provisioning Artifacts for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Provisioning Artifacts for %s: %w", region, err)) } @@ -621,13 +623,13 @@ func sweepProvisioningArtifacts(region string) error { func sweepServiceActions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -659,7 +661,7 @@ func sweepServiceActions(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service 
Catalog Service Actions for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Service Actions for %s: %w", region, err)) } @@ -673,13 +675,13 @@ func sweepServiceActions(region string) error { func sweepTagOptionResourceAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -731,7 +733,7 @@ func sweepTagOptionResourceAssociations(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Tag Option Resource Associations for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Tag Option Resource Associations for %s: %w", region, err)) } @@ -745,13 +747,13 @@ func sweepTagOptionResourceAssociations(region string) error { func sweepTagOptions(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).ServiceCatalogConn() + conn := client.ServiceCatalogConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -787,7 +789,7 @@ func sweepTagOptions(region string) error { errs = multierror.Append(errs, fmt.Errorf("error describing Service 
Catalog Tag Options for %s: %w", region, err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Tag Options for %s: %w", region, err)) } diff --git a/internal/service/servicecatalog/tag_option.go b/internal/service/servicecatalog/tag_option.go index 6d2530c37c3..12fa6683880 100644 --- a/internal/service/servicecatalog/tag_option.go +++ b/internal/service/servicecatalog/tag_option.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -58,7 +61,7 @@ func ResourceTagOption() *schema.Resource { func resourceTagOptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.CreateTagOptionInput{ Key: aws.String(d.Get("key").(string)), @@ -115,7 +118,7 @@ func resourceTagOptionCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceTagOptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) output, err := WaitTagOptionReady(ctx, conn, d.Id(), d.Timeout(schema.TimeoutRead)) @@ -143,7 +146,7 @@ func resourceTagOptionRead(ctx context.Context, d *schema.ResourceData, meta int func resourceTagOptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.UpdateTagOptionInput{ Id: aws.String(d.Id()), @@ -187,7 +190,7 @@ func resourceTagOptionUpdate(ctx 
context.Context, d *schema.ResourceData, meta i func resourceTagOptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DeleteTagOptionInput{ Id: aws.String(d.Id()), diff --git a/internal/service/servicecatalog/tag_option_resource_association.go b/internal/service/servicecatalog/tag_option_resource_association.go index e4cfc920e91..c54ded2dd5d 100644 --- a/internal/service/servicecatalog/tag_option_resource_association.go +++ b/internal/service/servicecatalog/tag_option_resource_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( @@ -65,7 +68,7 @@ func ResourceTagOptionResourceAssociation() *schema.Resource { func resourceTagOptionResourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.AssociateTagOptionWithResourceInput{ ResourceId: aws.String(d.Get("resource_id").(string)), @@ -108,7 +111,7 @@ func resourceTagOptionResourceAssociationCreate(ctx context.Context, d *schema.R func resourceTagOptionResourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) tagOptionID, resourceID, err := TagOptionResourceAssociationParseID(d.Id()) @@ -147,7 +150,7 @@ func resourceTagOptionResourceAssociationRead(ctx context.Context, d *schema.Res func resourceTagOptionResourceAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - 
conn := meta.(*conns.AWSClient).ServiceCatalogConn() + conn := meta.(*conns.AWSClient).ServiceCatalogConn(ctx) tagOptionID, resourceID, err := TagOptionResourceAssociationParseID(d.Id()) diff --git a/internal/service/servicecatalog/tag_option_resource_association_test.go b/internal/service/servicecatalog/tag_option_resource_association_test.go index f92f10762c6..323394b4302 100644 --- a/internal/service/servicecatalog/tag_option_resource_association_test.go +++ b/internal/service/servicecatalog/tag_option_resource_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -70,7 +73,7 @@ func TestAccServiceCatalogTagOptionResourceAssociation_disappears(t *testing.T) func testAccCheckTagOptionResourceAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_tag_option_resource_association" { @@ -112,7 +115,7 @@ func testAccCheckTagOptionResourceAssociationExists(ctx context.Context, resourc return fmt.Errorf("could not parse ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) _, err = tfservicecatalog.WaitTagOptionResourceAssociationReady(ctx, conn, tagOptionID, resourceID, tfservicecatalog.TagOptionResourceAssociationReadyTimeout) diff --git a/internal/service/servicecatalog/tag_option_test.go b/internal/service/servicecatalog/tag_option_test.go index dab16c48a44..b6eb3303f84 100644 --- a/internal/service/servicecatalog/tag_option_test.go +++ b/internal/service/servicecatalog/tag_option_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog_test import ( @@ -161,7 +164,7 @@ func TestAccServiceCatalogTagOption_notActive(t *testing.T) { func testAccCheckTagOptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_servicecatalog_tag_option" { @@ -199,7 +202,7 @@ func testAccCheckTagOptionExists(ctx context.Context, resourceName string) resou return fmt.Errorf("resource not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceCatalogConn(ctx) input := &servicecatalog.DescribeTagOptionInput{ Id: aws.String(rs.Primary.ID), diff --git a/internal/service/servicecatalog/tags.go b/internal/service/servicecatalog/tags.go index af6f3277a5d..720d6fe435c 100644 --- a/internal/service/servicecatalog/tags.go +++ b/internal/service/servicecatalog/tags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build !generate // +build !generate diff --git a/internal/service/servicecatalog/tags_gen.go b/internal/service/servicecatalog/tags_gen.go index 475c7f3c5fd..92386011e0c 100644 --- a/internal/service/servicecatalog/tags_gen.go +++ b/internal/service/servicecatalog/tags_gen.go @@ -39,9 +39,9 @@ func KeyValueTags(ctx context.Context, tags []*servicecatalog.Tag) tftags.KeyVal return tftags.New(ctx, m) } -// GetTagsIn returns servicecatalog service tags from Context. +// getTagsIn returns servicecatalog service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*servicecatalog.Tag { +func getTagsIn(ctx context.Context) []*servicecatalog.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -51,8 +51,8 @@ func GetTagsIn(ctx context.Context) []*servicecatalog.Tag { return nil } -// SetTagsOut sets servicecatalog service tags in Context. -func SetTagsOut(ctx context.Context, tags []*servicecatalog.Tag) { +// setTagsOut sets servicecatalog service tags in Context. +func setTagsOut(ctx context.Context, tags []*servicecatalog.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } diff --git a/internal/service/servicecatalog/validate.go b/internal/service/servicecatalog/validate.go index 3adaddef31e..45625dceff1 100644 --- a/internal/service/servicecatalog/validate.go +++ b/internal/service/servicecatalog/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( diff --git a/internal/service/servicecatalog/validate_test.go b/internal/service/servicecatalog/validate_test.go index cdf1a3e505c..a5e46739af8 100644 --- a/internal/service/servicecatalog/validate_test.go +++ b/internal/service/servicecatalog/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( diff --git a/internal/service/servicecatalog/wait.go b/internal/service/servicecatalog/wait.go index bafdcdda716..051abc1fc46 100644 --- a/internal/service/servicecatalog/wait.go +++ b/internal/service/servicecatalog/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicecatalog import ( diff --git a/internal/service/servicediscovery/dns_namespace_data_source.go b/internal/service/servicediscovery/dns_namespace_data_source.go index ba2cfd567e3..8589a91e582 100644 --- a/internal/service/servicediscovery/dns_namespace_data_source.go +++ b/internal/service/servicediscovery/dns_namespace_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -49,7 +52,7 @@ func DataSourceDNSNamespace() *schema.Resource { } func dataSourceDNSNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -79,7 +82,7 @@ func dataSourceDNSNamespaceRead(ctx context.Context, d *schema.ResourceData, met } d.Set("name", ns.Name) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return diag.Errorf("listing tags for Service Discovery %s Namespace (%s): %s", nsType, arn, err) diff --git a/internal/service/servicediscovery/dns_namespace_data_source_test.go b/internal/service/servicediscovery/dns_namespace_data_source_test.go index e68020179ae..c17bfa9f10e 100644 --- a/internal/service/servicediscovery/dns_namespace_data_source_test.go +++ b/internal/service/servicediscovery/dns_namespace_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( diff --git a/internal/service/servicediscovery/find.go b/internal/service/servicediscovery/find.go index 83adde8c113..270f15d4a3f 100644 --- a/internal/service/servicediscovery/find.go +++ b/internal/service/servicediscovery/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( diff --git a/internal/service/servicediscovery/generate.go b/internal/service/servicediscovery/generate.go index 6b1426ca436..7eb072888d4 100644 --- a/internal/service/servicediscovery/generate.go +++ b/internal/service/servicediscovery/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package servicediscovery diff --git a/internal/service/servicediscovery/http_namespace.go b/internal/service/servicediscovery/http_namespace.go index 1c177528bc7..d52ea853c6f 100644 --- a/internal/service/servicediscovery/http_namespace.go +++ b/internal/service/servicediscovery/http_namespace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -60,13 +63,13 @@ func ResourceHTTPNamespace() *schema.Resource { } func resourceHTTPNamespaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) name := d.Get("name").(string) input := &servicediscovery.CreateHttpNamespaceInput{ CreatorRequestId: aws.String(id.UniqueId()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -98,7 +101,7 @@ func resourceHTTPNamespaceCreate(ctx context.Context, d *schema.ResourceData, me } func resourceHTTPNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) ns, err := FindNamespaceByID(ctx, conn, d.Id()) @@ -131,7 +134,7 @@ func resourceHTTPNamespaceUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceHTTPNamespaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) log.Printf("[INFO] Deleting Service Discovery HTTP Namespace: %s", d.Id()) outputRaw, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, 2*time.Minute, func() (interface{}, error) { diff --git a/internal/service/servicediscovery/http_namespace_data_source.go b/internal/service/servicediscovery/http_namespace_data_source.go index 08a75cf9e2d..06ce864d530 100644 --- a/internal/service/servicediscovery/http_namespace_data_source.go +++ b/internal/service/servicediscovery/http_namespace_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -40,7 +43,7 @@ func DataSourceHTTPNamespace() *schema.Resource { } func dataSourceHTTPNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -69,7 +72,7 @@ func dataSourceHTTPNamespaceRead(ctx context.Context, d *schema.ResourceData, me } d.Set("name", ns.Name) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return diag.Errorf("listing tags for Service Discovery HTTP Namespace (%s): %s", arn, err) diff --git a/internal/service/servicediscovery/http_namespace_data_source_test.go b/internal/service/servicediscovery/http_namespace_data_source_test.go index 535adcde38c..3581b700469 100644 --- a/internal/service/servicediscovery/http_namespace_data_source_test.go +++ b/internal/service/servicediscovery/http_namespace_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( diff --git a/internal/service/servicediscovery/http_namespace_test.go b/internal/service/servicediscovery/http_namespace_test.go index b85dbb60d97..d2ca19be7f3 100644 --- a/internal/service/servicediscovery/http_namespace_test.go +++ b/internal/service/servicediscovery/http_namespace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( @@ -161,7 +164,7 @@ func TestAccServiceDiscoveryHTTPNamespace_tags(t *testing.T) { func testAccCheckHTTPNamespaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_service_discovery_http_namespace" { @@ -196,7 +199,7 @@ func testAccCheckHTTPNamespaceExists(ctx context.Context, n string) resource.Tes return fmt.Errorf("No Service Discovery HTTP Namespace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) _, err := tfservicediscovery.FindNamespaceByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/servicediscovery/instance.go b/internal/service/servicediscovery/instance.go index 37536ded284..15a980df811 100644 --- a/internal/service/servicediscovery/instance.go +++ b/internal/service/servicediscovery/instance.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -63,7 +66,7 @@ func ResourceInstance() *schema.Resource { } func resourceInstancePut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) instanceID := d.Get("instance_id").(string) input := &servicediscovery.RegisterInstanceInput{ @@ -92,7 +95,7 @@ func resourceInstancePut(ctx context.Context, d *schema.ResourceData, meta inter } func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) instance, err := FindInstanceByServiceIDAndInstanceID(ctx, conn, d.Get("service_id").(string), d.Get("instance_id").(string)) @@ -120,7 +123,7 @@ func resourceInstanceRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceInstanceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) err := deregisterInstance(ctx, conn, d.Get("service_id").(string), d.Get("instance_id").(string)) diff --git a/internal/service/servicediscovery/instance_test.go b/internal/service/servicediscovery/instance_test.go index 2dd526db48f..0ce5276f283 100644 --- a/internal/service/servicediscovery/instance_test.go +++ b/internal/service/servicediscovery/instance_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( @@ -162,7 +165,7 @@ func testAccCheckInstanceExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("No Service Discovery Instance ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) _, err := tfservicediscovery.FindInstanceByServiceIDAndInstanceID(ctx, conn, rs.Primary.Attributes["service_id"], rs.Primary.Attributes["instance_id"]) @@ -183,7 +186,7 @@ func testAccInstanceImportStateIdFunc(resourceName string) resource.ImportStateI func testAccCheckInstanceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_service_discovery_instance" { diff --git a/internal/service/servicediscovery/private_dns_namespace.go b/internal/service/servicediscovery/private_dns_namespace.go index 38564587dc9..87d477dbef2 100644 --- a/internal/service/servicediscovery/private_dns_namespace.go +++ b/internal/service/servicediscovery/private_dns_namespace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -47,7 +50,6 @@ func ResourcePrivateDNSNamespace() *schema.Resource { "description": { Type: schema.TypeString, Optional: true, - ForceNew: true, }, "hosted_zone": { Type: schema.TypeString, @@ -73,13 +75,13 @@ func ResourcePrivateDNSNamespace() *schema.Resource { } func resourcePrivateDNSNamespaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) name := d.Get("name").(string) input := &servicediscovery.CreatePrivateDnsNamespaceInput{ CreatorRequestId: aws.String(id.UniqueId()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Vpc: aws.String(d.Get("vpc").(string)), } @@ -87,7 +89,6 @@ func resourcePrivateDNSNamespaceCreate(ctx context.Context, d *schema.ResourceDa input.Description = aws.String(v.(string)) } - log.Printf("[DEBUG] Creating Service Discovery Private DNS Namespace: %s", input) output, err := conn.CreatePrivateDnsNamespaceWithContext(ctx, input) if err != nil { @@ -112,7 +113,7 @@ func resourcePrivateDNSNamespaceCreate(ctx context.Context, d *schema.ResourceDa } func resourcePrivateDNSNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) ns, err := FindNamespaceByID(ctx, conn, d.Id()) @@ -140,12 +141,35 @@ func resourcePrivateDNSNamespaceRead(ctx context.Context, d *schema.ResourceData } func resourcePrivateDNSNamespaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - // Tags only. 
+ conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) + + if d.HasChange("description") { + input := &servicediscovery.UpdatePrivateDnsNamespaceInput{ + Id: aws.String(d.Id()), + Namespace: &servicediscovery.PrivateDnsNamespaceChange{ + Description: aws.String(d.Get("description").(string)), + }, + UpdaterRequestId: aws.String(id.UniqueId()), + } + + output, err := conn.UpdatePrivateDnsNamespaceWithContext(ctx, input) + + if err != nil { + return diag.Errorf("updating Service Discovery Private DNS Namespace (%s): %s", d.Id(), err) + } + + if output != nil && output.OperationId != nil { + if _, err := WaitOperationSuccess(ctx, conn, aws.StringValue(output.OperationId)); err != nil { + return diag.Errorf("waiting for Service Discovery Private DNS Namespace (%s) update: %s", d.Id(), err) + } + } + } + return resourcePrivateDNSNamespaceRead(ctx, d, meta) } func resourcePrivateDNSNamespaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) log.Printf("[INFO] Deleting Service Discovery Private DNS Namespace: %s", d.Id()) output, err := conn.DeleteNamespaceWithContext(ctx, &servicediscovery.DeleteNamespaceInput{ diff --git a/internal/service/servicediscovery/private_dns_namespace_test.go b/internal/service/servicediscovery/private_dns_namespace_test.go index 556f3c89f72..5c648347469 100644 --- a/internal/service/servicediscovery/private_dns_namespace_test.go +++ b/internal/service/servicediscovery/private_dns_namespace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( @@ -94,10 +97,17 @@ func TestAccServiceDiscoveryPrivateDNSNamespace_description(t *testing.T) { CheckDestroy: testAccCheckPrivateDNSNamespaceDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPrivateDNSNamespaceConfig_description(rName, "test"), + Config: testAccPrivateDNSNamespaceConfig_description(rName, "desc1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckPrivateDNSNamespaceExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "desc1"), + ), + }, + { + Config: testAccPrivateDNSNamespaceConfig_description(rName, "desc2"), Check: resource.ComposeTestCheckFunc( testAccCheckPrivateDNSNamespaceExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "description", "test"), + resource.TestCheckResourceAttr(resourceName, "description", "desc2"), ), }, }, @@ -181,7 +191,7 @@ func TestAccServiceDiscoveryPrivateDNSNamespace_tags(t *testing.T) { func testAccCheckPrivateDNSNamespaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_service_discovery_private_dns_namespace" { @@ -216,7 +226,7 @@ func testAccCheckPrivateDNSNamespaceExists(ctx context.Context, n string) resour return fmt.Errorf("No Service Discovery Private DNS Namespace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) _, err := tfservicediscovery.FindNamespaceByID(ctx, conn, rs.Primary.ID) @@ -225,7 +235,7 @@ func testAccCheckPrivateDNSNamespaceExists(ctx context.Context, n string) resour } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := 
acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) input := &servicediscovery.ListNamespacesInput{} diff --git a/internal/service/servicediscovery/public_dns_namespace.go b/internal/service/servicediscovery/public_dns_namespace.go index db93e5dbae9..d5aef120a29 100644 --- a/internal/service/servicediscovery/public_dns_namespace.go +++ b/internal/service/servicediscovery/public_dns_namespace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -37,7 +40,6 @@ func ResourcePublicDNSNamespace() *schema.Resource { "description": { Type: schema.TypeString, Optional: true, - ForceNew: true, }, "hosted_zone": { Type: schema.TypeString, @@ -58,20 +60,19 @@ func ResourcePublicDNSNamespace() *schema.Resource { } func resourcePublicDNSNamespaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) name := d.Get("name").(string) input := &servicediscovery.CreatePublicDnsNamespaceInput{ CreatorRequestId: aws.String(id.UniqueId()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { input.Description = aws.String(v.(string)) } - log.Printf("[DEBUG] Creating Service Discovery Public DNS Namespace: %s", input) output, err := conn.CreatePublicDnsNamespaceWithContext(ctx, input) if err != nil { @@ -96,7 +97,7 @@ func resourcePublicDNSNamespaceCreate(ctx context.Context, d *schema.ResourceDat } func resourcePublicDNSNamespaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) ns, err := FindNamespaceByID(ctx, conn, d.Id()) @@ -124,12 +125,35 @@ func 
resourcePublicDNSNamespaceRead(ctx context.Context, d *schema.ResourceData, } func resourcePublicDNSNamespaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - // Tags only. + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) + + if d.HasChange("description") { + input := &servicediscovery.UpdatePublicDnsNamespaceInput{ + Id: aws.String(d.Id()), + Namespace: &servicediscovery.PublicDnsNamespaceChange{ + Description: aws.String(d.Get("description").(string)), + }, + UpdaterRequestId: aws.String(id.UniqueId()), + } + + output, err := conn.UpdatePublicDnsNamespaceWithContext(ctx, input) + + if err != nil { + return diag.Errorf("updating Service Discovery Public DNS Namespace (%s): %s", d.Id(), err) + } + + if output != nil && output.OperationId != nil { + if _, err := WaitOperationSuccess(ctx, conn, aws.StringValue(output.OperationId)); err != nil { + return diag.Errorf("waiting for Service Discovery Public DNS Namespace (%s) update: %s", d.Id(), err) + } + } + } + return resourcePublicDNSNamespaceRead(ctx, d, meta) } func resourcePublicDNSNamespaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) log.Printf("[INFO] Deleting Service Discovery Public DNS Namespace: %s", d.Id()) output, err := conn.DeleteNamespaceWithContext(ctx, &servicediscovery.DeleteNamespaceInput{ diff --git a/internal/service/servicediscovery/public_dns_namespace_test.go b/internal/service/servicediscovery/public_dns_namespace_test.go index 2e73823cacf..cb8745ce4f9 100644 --- a/internal/service/servicediscovery/public_dns_namespace_test.go +++ b/internal/service/servicediscovery/public_dns_namespace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( @@ -93,10 +96,17 @@ func TestAccServiceDiscoveryPublicDNSNamespace_description(t *testing.T) { CheckDestroy: testAccCheckPublicDNSNamespaceDestroy(ctx), Steps: []resource.TestStep{ { - Config: testAccPublicDNSNamespaceConfig_description(rName, "test"), + Config: testAccPublicDNSNamespaceConfig_description(rName, "desc1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckPublicDNSNamespaceExists(ctx, resourceName), + resource.TestCheckResourceAttr(resourceName, "description", "desc1"), + ), + }, + { + Config: testAccPublicDNSNamespaceConfig_description(rName, "desc2"), Check: resource.ComposeTestCheckFunc( testAccCheckPublicDNSNamespaceExists(ctx, resourceName), - resource.TestCheckResourceAttr(resourceName, "description", "test"), + resource.TestCheckResourceAttr(resourceName, "description", "desc2"), ), }, }, @@ -154,7 +164,7 @@ func TestAccServiceDiscoveryPublicDNSNamespace_tags(t *testing.T) { func testAccCheckPublicDNSNamespaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_service_discovery_public_dns_namespace" { @@ -189,7 +199,7 @@ func testAccCheckPublicDNSNamespaceExists(ctx context.Context, n string) resourc return fmt.Errorf("No Service Discovery Public DNS Namespace ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) _, err := tfservicediscovery.FindNamespaceByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/servicediscovery/service.go b/internal/service/servicediscovery/service.go index 041cff6de06..a6e2282daa8 100644 --- a/internal/service/servicediscovery/service.go +++ 
b/internal/service/servicediscovery/service.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -151,13 +154,13 @@ func ResourceService() *schema.Resource { } func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) name := d.Get("name").(string) input := &servicediscovery.CreateServiceInput{ CreatorRequestId: aws.String(id.UniqueId()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -196,7 +199,7 @@ func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) service, err := FindServiceByID(ctx, conn, d.Id()) @@ -242,7 +245,7 @@ func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &servicediscovery.UpdateServiceInput{ @@ -277,7 +280,7 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceServiceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) if d.Get("force_destroy").(bool) { var deletionErrs *multierror.Error diff --git a/internal/service/servicediscovery/service_data_source.go 
b/internal/service/servicediscovery/service_data_source.go index 56f26d2ab9f..0664e6be8b8 100644 --- a/internal/service/servicediscovery/service_data_source.go +++ b/internal/service/servicediscovery/service_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( @@ -109,7 +112,7 @@ func DataSourceService() *schema.Resource { } func dataSourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).ServiceDiscoveryConn() + conn := meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -155,7 +158,7 @@ func dataSourceServiceRead(ctx context.Context, d *schema.ResourceData, meta int d.Set("name", service.Name) d.Set("namespace_id", service.NamespaceId) - tags, err := ListTags(ctx, conn, arn) + tags, err := listTags(ctx, conn, arn) if err != nil { return diag.Errorf("listing tags for Service Discovery Service (%s): %s", arn, err) diff --git a/internal/service/servicediscovery/service_data_source_test.go b/internal/service/servicediscovery/service_data_source_test.go index caf7db6e408..c6ad538f337 100644 --- a/internal/service/servicediscovery/service_data_source_test.go +++ b/internal/service/servicediscovery/service_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( diff --git a/internal/service/servicediscovery/service_package_gen.go b/internal/service/servicediscovery/service_package_gen.go index a608346b405..fdc70ea5e12 100644 --- a/internal/service/servicediscovery/service_package_gen.go +++ b/internal/service/servicediscovery/service_package_gen.go @@ -5,6 +5,10 @@ package servicediscovery import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + servicediscovery_sdkv1 "github.com/aws/aws-sdk-go/service/servicediscovery" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -81,4 +85,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ServiceDiscovery } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*servicediscovery_sdkv1.ServiceDiscovery, error) { + sess := config["session"].(*session_sdkv1.Session) + + return servicediscovery_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/servicediscovery/service_test.go b/internal/service/servicediscovery/service_test.go index edda010ac1e..f22abc2b427 100644 --- a/internal/service/servicediscovery/service_test.go +++ b/internal/service/servicediscovery/service_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
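The generated `NewConn` added to `service_package_gen.go` above builds a service client from a shared session plus an optional endpoint override read out of a `map[string]any` config. A minimal sketch of that shape, assuming a hypothetical `config` type and a string stand-in for the SDK client (the real code returns `servicediscovery.New(sess.Copy(&aws.Config{Endpoint: aws.String(ep)}))`):

```go
package main

import "fmt"

// config stands in for the map the provider passes to each service
// package's NewConn; "endpoint" may be empty when no override is set.
type config map[string]any

// newConn is a hypothetical analogue of the generated NewConn: it reads
// the endpoint override and returns a description of the client it would
// build. The real code copies the *session.Session with that endpoint
// applied so the override stays scoped to this one service package.
func newConn(cfg config) string {
	ep, _ := cfg["endpoint"].(string)
	if ep == "" {
		return "client(default endpoint)"
	}
	return fmt.Sprintf("client(endpoint=%s)", ep)
}

func main() {
	fmt.Println(newConn(config{"endpoint": "http://localhost:4566"}))
}
```

Copying the session rather than mutating the shared one is the design point: each service package can point at a custom endpoint (e.g. LocalStack) without affecting clients built for other services.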
+// SPDX-License-Identifier: MPL-2.0 + package servicediscovery_test import ( @@ -280,7 +283,7 @@ func TestAccServiceDiscoveryService_tags(t *testing.T) { func testAccCheckServiceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_service_discovery_service" { @@ -315,7 +318,7 @@ func testAccCheckServiceExists(ctx context.Context, n string) resource.TestCheck return fmt.Errorf("No Service Discovery Service ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceDiscoveryConn(ctx) _, err := tfservicediscovery.FindServiceByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/servicediscovery/status.go b/internal/service/servicediscovery/status.go index 7661c9a5496..da24245cd7c 100644 --- a/internal/service/servicediscovery/status.go +++ b/internal/service/servicediscovery/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( diff --git a/internal/service/servicediscovery/sweep.go b/internal/service/servicediscovery/sweep.go index a55d08591c3..4a1aa5460f7 100644 --- a/internal/service/servicediscovery/sweep.go +++ b/internal/service/servicediscovery/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/servicediscovery" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -47,11 +49,11 @@ func init() { func sweepHTTPNamespaces(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ServiceDiscoveryConn() + conn := client.ServiceDiscoveryConn(ctx) sweepResources := make([]sweep.Sweepable, 0) namespaces, err := findNamespacesByType(ctx, conn, servicediscovery.NamespaceTypeHttp) @@ -73,7 +75,7 @@ func sweepHTTPNamespaces(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Service Discovery HTTP Namespaces (%s): %w", region, err) @@ -84,11 +86,11 @@ func sweepHTTPNamespaces(region string) error { func sweepPrivateDNSNamespaces(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ServiceDiscoveryConn() + conn := client.ServiceDiscoveryConn(ctx) sweepResources := make([]sweep.Sweepable, 0) namespaces, err := findNamespacesByType(ctx, conn, servicediscovery.NamespaceTypeDnsPrivate) @@ -110,7 +112,7 @@ func sweepPrivateDNSNamespaces(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, 
client)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Service Discovery Private DNS Namespaces (%s): %w", region, err) @@ -121,11 +123,11 @@ func sweepPrivateDNSNamespaces(region string) error { func sweepPublicDNSNamespaces(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ServiceDiscoveryConn() + conn := client.ServiceDiscoveryConn(ctx) sweepResources := make([]sweep.Sweepable, 0) namespaces, err := findNamespacesByType(ctx, conn, servicediscovery.NamespaceTypeDnsPublic) @@ -147,7 +149,7 @@ func sweepPublicDNSNamespaces(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Service Discovery Public DNS Namespaces (%s): %w", region, err) @@ -158,11 +160,11 @@ func sweepPublicDNSNamespaces(region string) error { func sweepServices(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).ServiceDiscoveryConn() + conn := client.ServiceDiscoveryConn(ctx) input := &servicediscovery.ListServicesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -186,7 +188,7 @@ func sweepServices(region string) error { sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) 
if err != nil { return fmt.Errorf("error sweeping Service Discovery Services (%s): %w", region, err) diff --git a/internal/service/servicediscovery/tags_gen.go b/internal/service/servicediscovery/tags_gen.go index b576faebfce..c18dc6b33ed 100644 --- a/internal/service/servicediscovery/tags_gen.go +++ b/internal/service/servicediscovery/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists servicediscovery service tags. +// listTags lists servicediscovery service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn servicediscoveryiface.ServiceDiscoveryAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn servicediscoveryiface.ServiceDiscoveryAPI, identifier string) (tftags.KeyValueTags, error) { input := &servicediscovery.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn servicediscoveryiface.ServiceDiscoveryAP // ListTags lists servicediscovery service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).ServiceDiscoveryConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*servicediscovery.Tag) tftags.KeyV return tftags.New(ctx, m) } -// GetTagsIn returns servicediscovery service tags from Context. +// getTagsIn returns servicediscovery service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*servicediscovery.Tag { +func getTagsIn(ctx context.Context) []*servicediscovery.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*servicediscovery.Tag { return nil } -// SetTagsOut sets servicediscovery service tags in Context. -func SetTagsOut(ctx context.Context, tags []*servicediscovery.Tag) { +// setTagsOut sets servicediscovery service tags in Context. +func setTagsOut(ctx context.Context, tags []*servicediscovery.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates servicediscovery service tags. +// updateTags updates servicediscovery service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn servicediscoveryiface.ServiceDiscoveryAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn servicediscoveryiface.ServiceDiscoveryAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn servicediscoveryiface.ServiceDiscovery // UpdateTags updates servicediscovery service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).ServiceDiscoveryConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).ServiceDiscoveryConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/servicediscovery/validate.go b/internal/service/servicediscovery/validate.go index 7c2f6b433d0..1ec8e58459d 100644 --- a/internal/service/servicediscovery/validate.go +++ b/internal/service/servicediscovery/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( diff --git a/internal/service/servicediscovery/validate_test.go b/internal/service/servicediscovery/validate_test.go index b8c9953b701..cb5764a5f06 100644 --- a/internal/service/servicediscovery/validate_test.go +++ b/internal/service/servicediscovery/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( diff --git a/internal/service/servicediscovery/wait.go b/internal/service/servicediscovery/wait.go index ca8d35eba6b..534eb064199 100644 --- a/internal/service/servicediscovery/wait.go +++ b/internal/service/servicediscovery/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicediscovery import ( diff --git a/internal/service/servicequotas/acc_test.go b/internal/service/servicequotas/acc_test.go index ce88e4261ef..5f9ff82c9d9 100644 --- a/internal/service/servicequotas/acc_test.go +++ b/internal/service/servicequotas/acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicequotas_test import ( @@ -26,7 +29,7 @@ const ( ) func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn(ctx) input := &servicequotas.ListServicesInput{} @@ -43,7 +46,7 @@ func testAccPreCheck(ctx context.Context, t *testing.T) { // nosemgrep:ci.servicequotas-in-func-name func preCheckServiceQuotaSet(ctx context.Context, serviceCode, quotaCode string, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn(ctx) input := &servicequotas.GetServiceQuotaInput{ QuotaCode: aws.String(quotaCode), @@ -60,7 +63,7 @@ func preCheckServiceQuotaSet(ctx context.Context, serviceCode, quotaCode string, } func preCheckServiceQuotaUnset(ctx context.Context, serviceCode, quotaCode string, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn(ctx) input := &servicequotas.GetServiceQuotaInput{ QuotaCode: aws.String(quotaCode), @@ -77,7 +80,7 @@ func preCheckServiceQuotaUnset(ctx context.Context, serviceCode, quotaCode strin } func preCheckServiceQuotaHasUsageMetric(ctx context.Context, serviceCode, quotaCode string, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ServiceQuotasConn(ctx) input := &servicequotas.GetAWSDefaultServiceQuotaInput{ QuotaCode: aws.String(quotaCode), diff --git a/internal/service/servicequotas/find.go b/internal/service/servicequotas/find.go index 4abd401a624..fd67bf3a796 100644 --- a/internal/service/servicequotas/find.go +++ b/internal/service/servicequotas/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicequotas import ( diff --git a/internal/service/servicequotas/generate.go b/internal/service/servicequotas/generate.go new file mode 100644 index 00000000000..83f9adc9719 --- /dev/null +++ b/internal/service/servicequotas/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package servicequotas diff --git a/internal/service/servicequotas/service_data_source.go b/internal/service/servicequotas/service_data_source.go index 4b68a28b027..c9d5b26f344 100644 --- a/internal/service/servicequotas/service_data_source.go +++ b/internal/service/servicequotas/service_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicequotas import ( @@ -31,7 +34,7 @@ func DataSourceService() *schema.Resource { func dataSourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceQuotasConn() + conn := meta.(*conns.AWSClient).ServiceQuotasConn(ctx) serviceName := d.Get("service_name").(string) diff --git a/internal/service/servicequotas/service_data_source_test.go b/internal/service/servicequotas/service_data_source_test.go index b43ae05719e..1bbef1388e6 100644 --- a/internal/service/servicequotas/service_data_source_test.go +++ b/internal/service/servicequotas/service_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicequotas_test import ( diff --git a/internal/service/servicequotas/service_package_gen.go b/internal/service/servicequotas/service_package_gen.go index 3816ba05da2..5f2630a6e04 100644 --- a/internal/service/servicequotas/service_package_gen.go +++ b/internal/service/servicequotas/service_package_gen.go @@ -5,6 +5,10 @@ package servicequotas import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + servicequotas_sdkv1 "github.com/aws/aws-sdk-go/service/servicequotas" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -45,4 +49,13 @@ func (p *servicePackage) ServicePackageName() string { return names.ServiceQuotas } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*servicequotas_sdkv1.ServiceQuotas, error) { + sess := config["session"].(*session_sdkv1.Session) + + return servicequotas_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/servicequotas/service_quota.go b/internal/service/servicequotas/service_quota.go index 6dbf56812e2..2dccba2af4a 100644 --- a/internal/service/servicequotas/service_quota.go +++ b/internal/service/servicequotas/service_quota.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicequotas import ( @@ -131,7 +134,7 @@ func ResourceServiceQuota() *schema.Resource { func resourceServiceQuotaCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceQuotasConn() + conn := meta.(*conns.AWSClient).ServiceQuotasConn(ctx) quotaCode := d.Get("quota_code").(string) serviceCode := d.Get("service_code").(string) @@ -180,7 +183,7 @@ func resourceServiceQuotaCreate(ctx context.Context, d *schema.ResourceData, met func resourceServiceQuotaRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceQuotasConn() + conn := meta.(*conns.AWSClient).ServiceQuotasConn(ctx) serviceCode, quotaCode, err := resourceServiceQuotaParseID(d.Id()) @@ -257,7 +260,7 @@ func resourceServiceQuotaRead(ctx context.Context, d *schema.ResourceData, meta func resourceServiceQuotaUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceQuotasConn() + conn := meta.(*conns.AWSClient).ServiceQuotasConn(ctx) value := d.Get("value").(float64) serviceCode, quotaCode, err := resourceServiceQuotaParseID(d.Id()) diff --git a/internal/service/servicequotas/service_quota_data_source.go b/internal/service/servicequotas/service_quota_data_source.go index cd38cde601e..6bd3c03328e 100644 --- a/internal/service/servicequotas/service_quota_data_source.go +++ b/internal/service/servicequotas/service_quota_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package servicequotas import ( @@ -137,7 +140,7 @@ func flattenUsageMetric(usageMetric *servicequotas.MetricInfo) []interface{} { func dataSourceServiceQuotaRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ServiceQuotasConn() + conn := meta.(*conns.AWSClient).ServiceQuotasConn(ctx) quotaCode := d.Get("quota_code").(string) quotaName := d.Get("quota_name").(string) diff --git a/internal/service/servicequotas/service_quota_data_source_test.go b/internal/service/servicequotas/service_quota_data_source_test.go index f65d3d43344..43ed6f8b969 100644 --- a/internal/service/servicequotas/service_quota_data_source_test.go +++ b/internal/service/servicequotas/service_quota_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicequotas_test import ( diff --git a/internal/service/servicequotas/service_quota_test.go b/internal/service/servicequotas/service_quota_test.go index 82418b119e3..5a8da870a88 100644 --- a/internal/service/servicequotas/service_quota_test.go +++ b/internal/service/servicequotas/service_quota_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package servicequotas_test import ( diff --git a/internal/service/ses/active_receipt_rule_set.go b/internal/service/ses/active_receipt_rule_set.go index 12c1ec89711..ad6e2cc5ed2 100644 --- a/internal/service/ses/active_receipt_rule_set.go +++ b/internal/service/ses/active_receipt_rule_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -24,6 +27,10 @@ func ResourceActiveReceiptRuleSet() *schema.Resource { ReadWithoutTimeout: resourceActiveReceiptRuleSetRead, DeleteWithoutTimeout: resourceActiveReceiptRuleSetDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceActiveReceiptRuleSetImport, + }, + Schema: map[string]*schema.Schema{ "arn": { Type: schema.TypeString, @@ -40,7 +47,7 @@ func ResourceActiveReceiptRuleSet() *schema.Resource { func resourceActiveReceiptRuleSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) ruleSetName := d.Get("rule_set_name").(string) @@ -60,7 +67,7 @@ func resourceActiveReceiptRuleSetUpdate(ctx context.Context, d *schema.ResourceD func resourceActiveReceiptRuleSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) describeOpts := &ses.DescribeActiveReceiptRuleSetInput{} @@ -96,7 +103,7 @@ func resourceActiveReceiptRuleSetRead(ctx context.Context, d *schema.ResourceDat func resourceActiveReceiptRuleSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) deleteOpts := &ses.SetActiveReceiptRuleSetInput{ RuleSetName: nil, @@ -109,3 +116,35 @@ func resourceActiveReceiptRuleSetDelete(ctx context.Context, d *schema.ResourceD return diags } + +func resourceActiveReceiptRuleSetImport(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + conn := meta.(*conns.AWSClient).SESConn(ctx) + + describeOpts := &ses.DescribeActiveReceiptRuleSetInput{} + + response, err := 
conn.DescribeActiveReceiptRuleSetWithContext(ctx, describeOpts) + if err != nil { + return nil, err + } + + if response.Metadata == nil { + return nil, fmt.Errorf("no active Receipt Rule Set found") + } + + if aws.StringValue(response.Metadata.Name) != d.Id() { + return nil, fmt.Errorf("SES Receipt Rule Set (%s) belonging to SES Active Receipt Rule Set not found", d.Id()) + } + + d.Set("rule_set_name", response.Metadata.Name) + + arnValue := arn.ARN{ + Partition: meta.(*conns.AWSClient).Partition, + Service: "ses", + Region: meta.(*conns.AWSClient).Region, + AccountID: meta.(*conns.AWSClient).AccountID, + Resource: fmt.Sprintf("receipt-rule-set/%s", d.Id()), + }.String() + d.Set("arn", arnValue) + + return []*schema.ResourceData{d}, nil +} diff --git a/internal/service/ses/active_receipt_rule_set_data_source.go b/internal/service/ses/active_receipt_rule_set_data_source.go index 1146adb4f62..54ef9a54d80 100644 --- a/internal/service/ses/active_receipt_rule_set_data_source.go +++ b/internal/service/ses/active_receipt_rule_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -34,7 +37,7 @@ func DataSourceActiveReceiptRuleSet() *schema.Resource { func dataSourceActiveReceiptRuleSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) data, err := conn.DescribeActiveReceiptRuleSetWithContext(ctx, &ses.DescribeActiveReceiptRuleSetInput{}) diff --git a/internal/service/ses/active_receipt_rule_set_data_source_test.go b/internal/service/ses/active_receipt_rule_set_data_source_test.go index 36efdf26534..95551c50e35 100644 --- a/internal/service/ses/active_receipt_rule_set_data_source_test.go +++ b/internal/service/ses/active_receipt_rule_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
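The importer added above reconstructs the resource's `arn` attribute from the current client's partition, region, and account ID. The SDK's `arn.ARN` struct serializes to the familiar colon-separated form, so the value it produces can be sketched with plain formatting — `receiptRuleSetARN` here is a hypothetical helper for illustration, not provider code:

```go
package main

import "fmt"

// receiptRuleSetARN sketches the ARN the importer sets for
// aws_ses_active_receipt_rule_set. The real code builds an arn.ARN
// struct and calls String(), which yields the same
// arn:<partition>:ses:<region>:<account>:receipt-rule-set/<name> form.
func receiptRuleSetARN(partition, region, accountID, name string) string {
	return fmt.Sprintf("arn:%s:ses:%s:%s:receipt-rule-set/%s", partition, region, accountID, name)
}

func main() {
	// → arn:aws:ses:us-east-1:123456789012:receipt-rule-set/primary
	fmt.Println(receiptRuleSetARN("aws", "us-east-1", "123456789012", "primary"))
}
```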
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -81,7 +84,7 @@ data "aws_ses_active_receipt_rule_set" "test" {} } func testAccPreCheckUnsetActiveRuleSet(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) output, err := conn.DescribeActiveReceiptRuleSetWithContext(ctx, &ses.DescribeActiveReceiptRuleSetInput{}) if acctest.PreCheckSkipError(err) { diff --git a/internal/service/ses/active_receipt_rule_set_test.go b/internal/service/ses/active_receipt_rule_set_test.go index 146e33eaa91..c6044e378ec 100644 --- a/internal/service/ses/active_receipt_rule_set_test.go +++ b/internal/service/ses/active_receipt_rule_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -56,6 +59,11 @@ func testAccActiveReceiptRuleSet_basic(t *testing.T) { acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "ses", fmt.Sprintf("receipt-rule-set/%s", rName)), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, }, }) } @@ -89,7 +97,7 @@ func testAccActiveReceiptRuleSet_disappears(t *testing.T) { func testAccCheckActiveReceiptRuleSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_active_receipt_rule_set" { @@ -121,7 +129,7 @@ func testAccCheckActiveReceiptRuleSetExists(ctx context.Context, n string) resou return fmt.Errorf("SES Active Receipt Rule Set name not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) response, err := conn.DescribeActiveReceiptRuleSetWithContext(ctx, &ses.DescribeActiveReceiptRuleSetInput{}) if 
err != nil { diff --git a/internal/service/ses/configuration_set.go b/internal/service/ses/configuration_set.go index e5e86684090..f36bde471cd 100644 --- a/internal/service/ses/configuration_set.go +++ b/internal/service/ses/configuration_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -89,7 +92,7 @@ func ResourceConfigurationSet() *schema.Resource { func resourceConfigurationSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) configurationSetName := d.Get("name").(string) @@ -159,7 +162,7 @@ func resourceConfigurationSetCreate(ctx context.Context, d *schema.ResourceData, func resourceConfigurationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) configSetInput := &ses.DescribeConfigurationSetInput{ ConfigurationSetName: aws.String(d.Id()), @@ -211,7 +214,7 @@ func resourceConfigurationSetRead(ctx context.Context, d *schema.ResourceData, m func resourceConfigurationSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) if d.HasChange("delivery_options") { input := &ses.PutConfigurationSetDeliveryOptionsInput{ @@ -266,7 +269,7 @@ func resourceConfigurationSetUpdate(ctx context.Context, d *schema.ResourceData, func resourceConfigurationSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) log.Printf("[DEBUG] SES Delete Configuration Rule Set: %s", d.Id()) input := 
&ses.DeleteConfigurationSetInput{ diff --git a/internal/service/ses/configuration_set_test.go b/internal/service/ses/configuration_set_test.go index dd0285c8624..eace5f418d2 100644 --- a/internal/service/ses/configuration_set_test.go +++ b/internal/service/ses/configuration_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -380,7 +383,7 @@ func testAccCheckConfigurationSetExists(ctx context.Context, n string) resource. return fmt.Errorf("SES configuration set ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) response, err := conn.DescribeConfigurationSetWithContext(ctx, &ses.DescribeConfigurationSetInput{ ConfigurationSetName: aws.String(rs.Primary.ID), @@ -399,7 +402,7 @@ func testAccCheckConfigurationSetExists(ctx context.Context, n string) resource. func testAccCheckConfigurationSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_configuration_set" { diff --git a/internal/service/ses/domain_dkim.go b/internal/service/ses/domain_dkim.go index 057a8f90722..81099ff17bf 100644 --- a/internal/service/ses/domain_dkim.go +++ b/internal/service/ses/domain_dkim.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -39,7 +42,7 @@ func ResourceDomainDKIM() *schema.Resource { func resourceDomainDKIMCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Get("domain").(string) @@ -59,7 +62,7 @@ func resourceDomainDKIMCreate(ctx context.Context, d *schema.ResourceData, meta func resourceDomainDKIMRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Id() d.Set("domain", domainName) diff --git a/internal/service/ses/domain_dkim_test.go b/internal/service/ses/domain_dkim_test.go index a6b26dd3841..b4d482f1726 100644 --- a/internal/service/ses/domain_dkim_test.go +++ b/internal/service/ses/domain_dkim_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -42,7 +45,7 @@ func TestAccSESDomainDKIM_basic(t *testing.T) { func testAccCheckDomainDKIMDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_domain_dkim" { @@ -83,7 +86,7 @@ func testAccCheckDomainDKIMExists(ctx context.Context, n string) resource.TestCh } domain := rs.Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.GetIdentityDkimAttributesInput{ Identities: []*string{ diff --git a/internal/service/ses/domain_identity.go b/internal/service/ses/domain_identity.go index 762eb9135a2..2c8f6e9d327 100644 --- a/internal/service/ses/domain_identity.go +++ b/internal/service/ses/domain_identity.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -47,7 +50,7 @@ func ResourceDomainIdentity() *schema.Resource { func resourceDomainIdentityCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Get("domain").(string) @@ -67,7 +70,7 @@ func resourceDomainIdentityCreate(ctx context.Context, d *schema.ResourceData, m func resourceDomainIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Id() d.Set("domain", domainName) @@ -104,7 +107,7 @@ func resourceDomainIdentityRead(ctx context.Context, d *schema.ResourceData, met func resourceDomainIdentityDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Get("domain").(string) diff --git a/internal/service/ses/domain_identity_data_source.go b/internal/service/ses/domain_identity_data_source.go index cb306678679..6f31e800748 100644 --- a/internal/service/ses/domain_identity_data_source.go +++ b/internal/service/ses/domain_identity_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -39,7 +42,7 @@ func DataSourceDomainIdentity() *schema.Resource { func dataSourceDomainIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Get("domain").(string) d.SetId(domainName) diff --git a/internal/service/ses/domain_identity_data_source_test.go b/internal/service/ses/domain_identity_data_source_test.go index 4dce8214198..0eef4c41c65 100644 --- a/internal/service/ses/domain_identity_data_source_test.go +++ b/internal/service/ses/domain_identity_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( diff --git a/internal/service/ses/domain_identity_test.go b/internal/service/ses/domain_identity_test.go index 3ff3c23c352..e5e7cfc9f01 100644 --- a/internal/service/ses/domain_identity_test.go +++ b/internal/service/ses/domain_identity_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -81,7 +84,7 @@ func TestAccSESDomainIdentity_trailingPeriod(t *testing.T) { func testAccCheckDomainIdentityDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_domain_identity" { @@ -121,7 +124,7 @@ func testAccCheckDomainIdentityExists(ctx context.Context, n string) resource.Te } domain := rs.Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.GetIdentityVerificationAttributesInput{ Identities: []*string{ @@ -144,7 +147,7 @@ func testAccCheckDomainIdentityExists(ctx context.Context, n string) resource.Te func testAccCheckDomainIdentityDisappears(ctx context.Context, identity string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) input := &ses.DeleteIdentityInput{ Identity: aws.String(identity), @@ -178,7 +181,7 @@ func testAccCheckDomainIdentityARN(n string, domain string) resource.TestCheckFu } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) input := &ses.ListIdentitiesInput{} diff --git a/internal/service/ses/domain_identity_verification.go b/internal/service/ses/domain_identity_verification.go index 337f0a807e2..b35013ac70c 100644 --- a/internal/service/ses/domain_identity_verification.go +++ b/internal/service/ses/domain_identity_verification.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -53,7 +56,7 @@ func getIdentityVerificationAttributes(ctx context.Context, conn *ses.SES, domai response, err := conn.GetIdentityVerificationAttributesWithContext(ctx, input) if err != nil { - return nil, fmt.Errorf("Error getting identity verification attributes: %s", err) + return nil, fmt.Errorf("getting identity verification attributes: %s", err) } return response.VerificationAttributes[domainName], nil @@ -61,12 +64,12 @@ func getIdentityVerificationAttributes(ctx context.Context, conn *ses.SES, domai func resourceDomainIdentityVerificationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Get("domain").(string) err := retry.RetryContext(ctx, d.Timeout(schema.TimeoutCreate), func() *retry.RetryError { att, err := getIdentityVerificationAttributes(ctx, conn, domainName) if err != nil { - return retry.NonRetryableError(fmt.Errorf("Error getting identity verification attributes: %s", err)) + return retry.NonRetryableError(fmt.Errorf("getting identity verification attributes: %s", err)) } if att == nil { @@ -98,7 +101,7 @@ func resourceDomainIdentityVerificationCreate(ctx context.Context, d *schema.Res func resourceDomainIdentityVerificationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Id() d.Set("domain", domainName) diff --git a/internal/service/ses/domain_identity_verification_test.go b/internal/service/ses/domain_identity_verification_test.go index b712b7b64d5..cf43543377d 100644 --- a/internal/service/ses/domain_identity_verification_test.go +++ b/internal/service/ses/domain_identity_verification_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -95,7 +98,7 @@ func testAccCheckDomainIdentityVerificationPassed(ctx context.Context, n string) } domain := rs.Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.GetIdentityVerificationAttributesInput{ Identities: []*string{ diff --git a/internal/service/ses/domain_mail_from.go b/internal/service/ses/domain_mail_from.go index 7fa4c415aaf..f7e44a0acfb 100644 --- a/internal/service/ses/domain_mail_from.go +++ b/internal/service/ses/domain_mail_from.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -44,7 +47,7 @@ func ResourceDomainMailFrom() *schema.Resource { func resourceDomainMailFromSet(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) behaviorOnMxFailure := d.Get("behavior_on_mx_failure").(string) domainName := d.Get("domain").(string) @@ -68,7 +71,7 @@ func resourceDomainMailFromSet(ctx context.Context, d *schema.ResourceData, meta func resourceDomainMailFromRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Id() @@ -105,7 +108,7 @@ func resourceDomainMailFromRead(ctx context.Context, d *schema.ResourceData, met func resourceDomainMailFromDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) domainName := d.Id() diff --git a/internal/service/ses/domain_mail_from_test.go b/internal/service/ses/domain_mail_from_test.go index 84e87afc7a8..ce495a45419 100644 --- 
a/internal/service/ses/domain_mail_from_test.go +++ b/internal/service/ses/domain_mail_from_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -150,7 +153,7 @@ func testAccCheckDomainMailFromExists(ctx context.Context, n string) resource.Te } domain := rs.Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.GetIdentityMailFromDomainAttributesInput{ Identities: []*string{ @@ -173,7 +176,7 @@ func testAccCheckDomainMailFromExists(ctx context.Context, n string) resource.Te func testAccCheckDomainMailFromDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_domain_mail_from" { @@ -199,7 +202,7 @@ func testAccCheckDomainMailFromDestroy(ctx context.Context) resource.TestCheckFu func testAccCheckDomainMailFromDisappears(ctx context.Context, identity string) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) input := &ses.SetIdentityMailFromDomainInput{ Identity: aws.String(identity), diff --git a/internal/service/ses/email_identity.go b/internal/service/ses/email_identity.go index 436bb7fb2dd..67931b7343e 100644 --- a/internal/service/ses/email_identity.go +++ b/internal/service/ses/email_identity.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -44,7 +47,7 @@ func ResourceEmailIdentity() *schema.Resource { func resourceEmailIdentityCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) email := d.Get("email").(string) email = strings.TrimSuffix(email, ".") @@ -65,7 +68,7 @@ func resourceEmailIdentityCreate(ctx context.Context, d *schema.ResourceData, me func resourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) email := d.Id() d.Set("email", email) @@ -101,7 +104,7 @@ func resourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta func resourceEmailIdentityDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) email := d.Get("email").(string) diff --git a/internal/service/ses/email_identity_data_dource_test.go b/internal/service/ses/email_identity_data_dource_test.go index dd6c7389314..966bcc6335e 100644 --- a/internal/service/ses/email_identity_data_dource_test.go +++ b/internal/service/ses/email_identity_data_dource_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( diff --git a/internal/service/ses/email_identity_data_source.go b/internal/service/ses/email_identity_data_source.go index 6491dbe6350..e0f625c58cd 100644 --- a/internal/service/ses/email_identity_data_source.go +++ b/internal/service/ses/email_identity_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -36,7 +39,7 @@ func DataSourceEmailIdentity() *schema.Resource { func dataSourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) email := d.Get("email").(string) email = strings.TrimSuffix(email, ".") diff --git a/internal/service/ses/email_identity_test.go b/internal/service/ses/email_identity_test.go index 71381247388..d56d7c2edf2 100644 --- a/internal/service/ses/email_identity_test.go +++ b/internal/service/ses/email_identity_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -71,7 +74,7 @@ func TestAccSESEmailIdentity_trailingPeriod(t *testing.T) { func testAccCheckEmailIdentityDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_email_identity" { @@ -111,7 +114,7 @@ func testAccCheckEmailIdentityExists(ctx context.Context, n string) resource.Tes } email := rs.Primary.ID - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.GetIdentityVerificationAttributesInput{ Identities: []*string{ diff --git a/internal/service/ses/event_destination.go b/internal/service/ses/event_destination.go index cf2bfeef5e6..643b2ee9374 100644 --- a/internal/service/ses/event_destination.go +++ b/internal/service/ses/event_destination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -139,7 +142,7 @@ func ResourceEventDestination() *schema.Resource { func resourceEventDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) configurationSetName := d.Get("configuration_set_name").(string) eventDestinationName := d.Get("name").(string) @@ -196,7 +199,7 @@ func resourceEventDestinationCreate(ctx context.Context, d *schema.ResourceData, func resourceEventDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) configurationSetName := d.Get("configuration_set_name").(string) input := &ses.DescribeConfigurationSetInput{ @@ -257,7 +260,7 @@ func resourceEventDestinationRead(ctx context.Context, d *schema.ResourceData, m func resourceEventDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) log.Printf("[DEBUG] SES Delete Configuration Set Destination: %s", d.Id()) _, err := conn.DeleteConfigurationSetEventDestinationWithContext(ctx, &ses.DeleteConfigurationSetEventDestinationInput{ diff --git a/internal/service/ses/event_destination_test.go b/internal/service/ses/event_destination_test.go index c574d408d33..c96c3e542a1 100644 --- a/internal/service/ses/event_destination_test.go +++ b/internal/service/ses/event_destination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -107,7 +110,7 @@ func TestAccSESEventDestination_disappears(t *testing.T) { func testAccCheckEventDestinationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_configuration_set" { @@ -146,7 +149,7 @@ func testAccCheckEventDestinationExists(ctx context.Context, n string, v *ses.Ev return fmt.Errorf("SES event destination ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) response, err := conn.DescribeConfigurationSetWithContext(ctx, &ses.DescribeConfigurationSetInput{ ConfigurationSetAttributeNames: aws.StringSlice([]string{ses.ConfigurationSetAttributeEventDestinations}), @@ -173,11 +176,6 @@ resource "aws_s3_bucket" "test" { bucket = %[2]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_iam_role" "test" { name = %[2]q @@ -207,9 +205,9 @@ EOF resource "aws_kinesis_firehose_delivery_stream" "test" { name = %[2]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.test.arn bucket_arn = aws_s3_bucket.test.arn } diff --git a/internal/service/ses/generate.go b/internal/service/ses/generate.go new file mode 100644 index 00000000000..9dc354bec86 --- /dev/null +++ b/internal/service/ses/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. 
+ +package ses diff --git a/internal/service/ses/identity_notification_topic.go b/internal/service/ses/identity_notification_topic.go index 421e1ea89c4..cc29ea202aa 100644 --- a/internal/service/ses/identity_notification_topic.go +++ b/internal/service/ses/identity_notification_topic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -62,7 +65,7 @@ func ResourceIdentityNotificationTopic() *schema.Resource { func resourceNotificationTopicSet(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) notification := d.Get("notification_type").(string) identity := d.Get("identity").(string) includeOriginalHeaders := d.Get("include_original_headers").(bool) @@ -101,7 +104,7 @@ func resourceNotificationTopicSet(ctx context.Context, d *schema.ResourceData, m func resourceIdentityNotificationTopicRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) identity, notificationType, err := decodeIdentityNotificationTopicID(d.Id()) if err != nil { @@ -148,7 +151,7 @@ func resourceIdentityNotificationTopicRead(ctx context.Context, d *schema.Resour func resourceIdentityNotificationTopicDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) identity, notificationType, err := decodeIdentityNotificationTopicID(d.Id()) if err != nil { diff --git a/internal/service/ses/identity_notification_topic_test.go b/internal/service/ses/identity_notification_topic_test.go index 7a0d91694a9..5715a04d88d 100644 --- a/internal/service/ses/identity_notification_topic_test.go +++ 
b/internal/service/ses/identity_notification_topic_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -60,7 +63,7 @@ func TestAccSESIdentityNotificationTopic_basic(t *testing.T) { func testAccCheckIdentityNotificationTopicDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_identity_notification_topic" { @@ -100,7 +103,7 @@ func testAccCheckIdentityNotificationTopicExists(ctx context.Context, n string) } identity := rs.Primary.Attributes["identity"] - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.GetIdentityNotificationAttributesInput{ Identities: []*string{aws.String(identity)}, diff --git a/internal/service/ses/identity_policy.go b/internal/service/ses/identity_policy.go index 45041e769f3..2e77c61944a 100644 --- a/internal/service/ses/identity_policy.go +++ b/internal/service/ses/identity_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -61,7 +64,7 @@ func ResourceIdentityPolicy() *schema.Resource { func resourceIdentityPolicyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) identity := d.Get("identity").(string) policyName := d.Get("name").(string) @@ -89,7 +92,7 @@ func resourceIdentityPolicyCreate(ctx context.Context, d *schema.ResourceData, m func resourceIdentityPolicyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) identity, policyName, err := IdentityPolicyParseID(d.Id()) if err != nil { @@ -117,7 +120,7 @@ func resourceIdentityPolicyUpdate(ctx context.Context, d *schema.ResourceData, m func resourceIdentityPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) identity, policyName, err := IdentityPolicyParseID(d.Id()) if err != nil { @@ -167,7 +170,7 @@ func resourceIdentityPolicyRead(ctx context.Context, d *schema.ResourceData, met func resourceIdentityPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) identity, policyName, err := IdentityPolicyParseID(d.Id()) if err != nil { diff --git a/internal/service/ses/identity_policy_test.go b/internal/service/ses/identity_policy_test.go index 73f9b500083..dd81d70c4c3 100644 --- a/internal/service/ses/identity_policy_test.go +++ b/internal/service/ses/identity_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -128,7 +131,7 @@ func TestAccSESIdentityPolicy_ignoreEquivalent(t *testing.T) { func testAccCheckIdentityPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_identity_policy" { @@ -171,7 +174,7 @@ func testAccCheckIdentityPolicyExists(ctx context.Context, resourceName string) return fmt.Errorf("SES Identity Policy ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) identityARN, policyName, err := tfses.IdentityPolicyParseID(rs.Primary.ID) if err != nil { diff --git a/internal/service/ses/receipt_filter.go b/internal/service/ses/receipt_filter.go index cd21f944e34..42fabcc768d 100644 --- a/internal/service/ses/receipt_filter.go +++ b/internal/service/ses/receipt_filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -68,7 +71,7 @@ func ResourceReceiptFilter() *schema.Resource { func resourceReceiptFilterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) name := d.Get("name").(string) @@ -94,7 +97,7 @@ func resourceReceiptFilterCreate(ctx context.Context, d *schema.ResourceData, me func resourceReceiptFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) listOpts := &ses.ListReceiptFiltersInput{} @@ -136,7 +139,7 @@ func resourceReceiptFilterRead(ctx context.Context, d *schema.ResourceData, meta func resourceReceiptFilterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) deleteOpts := &ses.DeleteReceiptFilterInput{ FilterName: aws.String(d.Id()), diff --git a/internal/service/ses/receipt_filter_test.go b/internal/service/ses/receipt_filter_test.go index 9a58358a21c..80ed9361b1f 100644 --- a/internal/service/ses/receipt_filter_test.go +++ b/internal/service/ses/receipt_filter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -70,7 +73,7 @@ func TestAccSESReceiptFilter_disappears(t *testing.T) { func testAccCheckReceiptFilterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_receipt_filter" { @@ -104,7 +107,7 @@ func testAccCheckReceiptFilterExists(ctx context.Context, n string) resource.Tes return fmt.Errorf("SES receipt filter ID not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) response, err := conn.ListReceiptFiltersWithContext(ctx, &ses.ListReceiptFiltersInput{}) if err != nil { diff --git a/internal/service/ses/receipt_rule.go b/internal/service/ses/receipt_rule.go index b90f26c209d..896f4c35bc4 100644 --- a/internal/service/ses/receipt_rule.go +++ b/internal/service/ses/receipt_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -271,7 +274,7 @@ func ResourceReceiptRule() *schema.Resource { func resourceReceiptRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) name := d.Get("name").(string) input := &ses.CreateReceiptRuleInput{ @@ -296,7 +299,7 @@ func resourceReceiptRuleCreate(ctx context.Context, d *schema.ResourceData, meta func resourceReceiptRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) ruleSetName := d.Get("rule_set_name").(string) rule, err := FindReceiptRuleByTwoPartKey(ctx, conn, d.Id(), ruleSetName) @@ -477,7 +480,7 @@ func resourceReceiptRuleRead(ctx context.Context, d *schema.ResourceData, meta i func resourceReceiptRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) input := &ses.UpdateReceiptRuleInput{ Rule: buildReceiptRule(d), @@ -509,7 +512,7 @@ func resourceReceiptRuleUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceReceiptRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) log.Printf("[DEBUG] Deleting SES Receipt Rule: %s", d.Id()) _, err := conn.DeleteReceiptRuleWithContext(ctx, &ses.DeleteReceiptRuleInput{ diff --git a/internal/service/ses/receipt_rule_set.go b/internal/service/ses/receipt_rule_set.go index 9d34c1a773b..9b909f5100c 100644 --- a/internal/service/ses/receipt_rule_set.go +++ b/internal/service/ses/receipt_rule_set.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -43,7 +46,7 @@ func ResourceReceiptRuleSet() *schema.Resource { func resourceReceiptRuleSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) ruleSetName := d.Get("rule_set_name").(string) @@ -63,7 +66,7 @@ func resourceReceiptRuleSetCreate(ctx context.Context, d *schema.ResourceData, m func resourceReceiptRuleSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) input := &ses.DescribeReceiptRuleSetInput{ RuleSetName: aws.String(d.Id()), @@ -103,7 +106,7 @@ func resourceReceiptRuleSetRead(ctx context.Context, d *schema.ResourceData, met func resourceReceiptRuleSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) log.Printf("[DEBUG] SES Delete Receipt Rule Set: %s", d.Id()) input := &ses.DeleteReceiptRuleSetInput{ diff --git a/internal/service/ses/receipt_rule_set_test.go b/internal/service/ses/receipt_rule_set_test.go index e0014477e46..836e2b6a92d 100644 --- a/internal/service/ses/receipt_rule_set_test.go +++ b/internal/service/ses/receipt_rule_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -69,7 +72,7 @@ func TestAccSESReceiptRuleSet_disappears(t *testing.T) { func testAccCheckReceiptRuleSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_receipt_rule_set" { @@ -108,7 +111,7 @@ func testAccCheckReceiptRuleSetExists(ctx context.Context, n string) resource.Te return fmt.Errorf("SES Receipt Rule Set name not set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) params := &ses.DescribeReceiptRuleSetInput{ RuleSetName: aws.String(rs.Primary.ID), diff --git a/internal/service/ses/receipt_rule_test.go b/internal/service/ses/receipt_rule_test.go index 9b72c2b006f..347d708078b 100644 --- a/internal/service/ses/receipt_rule_test.go +++ b/internal/service/ses/receipt_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -360,7 +363,7 @@ func TestAccSESReceiptRule_disappears(t *testing.T) { func testAccCheckReceiptRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_receipt_rule" { @@ -395,7 +398,7 @@ func testAccCheckReceiptRuleExists(ctx context.Context, n string, v *ses.Receipt return fmt.Errorf("No SES Receipt Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) output, err := tfses.FindReceiptRuleByTwoPartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["rule_set_name"]) @@ -421,7 +424,7 @@ func testAccReceiptRuleImportStateIdFunc(resourceName string) resource.ImportSta } func testAccPreCheckReceiptRule(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) input := &ses.DescribeReceiptRuleInput{ RuleName: aws.String("MyRule"), diff --git a/internal/service/ses/service_package_gen.go b/internal/service/ses/service_package_gen.go index 7f98247ac51..1bf148b99b6 100644 --- a/internal/service/ses/service_package_gen.go +++ b/internal/service/ses/service_package_gen.go @@ -5,6 +5,10 @@ package ses import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ses_sdkv1 "github.com/aws/aws-sdk-go/service/ses" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -101,4 +105,13 @@ func (p *servicePackage) ServicePackageName() string { return names.SES } -var ServicePackage = &servicePackage{} 
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ses_sdkv1.SES, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ses_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ses/sweep.go b/internal/service/ses/sweep.go index c1857bee739..e66e7d62119 100644 --- a/internal/service/ses/sweep.go +++ b/internal/service/ses/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,7 +15,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -40,11 +42,11 @@ func init() { func sweepConfigurationSets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SESConn() + conn := client.SESConn(ctx) input := &ses.ListConfigurationSetsInput{} var sweeperErrs *multierror.Error @@ -88,11 +90,11 @@ func sweepConfigurationSets(region string) error { func sweepIdentities(region, identityType string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SESConn() + conn := client.SESConn(ctx) input := &ses.ListIdentitiesInput{ IdentityType: 
aws.String(identityType), } @@ -133,11 +135,11 @@ func sweepIdentities(region, identityType string) error { func sweepReceiptRuleSets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SESConn() + conn := client.SESConn(ctx) // You cannot delete the receipt rule set that is currently active. // Setting the name of the receipt rule set to make active to null disables all email receiving. diff --git a/internal/service/ses/template.go b/internal/service/ses/template.go index ac746026107..46dab7b68b5 100644 --- a/internal/service/ses/template.go +++ b/internal/service/ses/template.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ses import ( @@ -58,7 +61,7 @@ func ResourceTemplate() *schema.Resource { } func resourceTemplateCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) templateName := d.Get("name").(string) @@ -94,7 +97,7 @@ func resourceTemplateCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceTemplateRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) input := ses.GetTemplateInput{ TemplateName: aws.String(d.Id()), } @@ -129,7 +132,7 @@ func resourceTemplateRead(ctx context.Context, d *schema.ResourceData, meta inte func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) templateName := d.Id() @@ 
-153,10 +156,9 @@ func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta in Template: &template, } - log.Printf("[DEBUG] Update SES template: %#v", input) _, err := conn.UpdateTemplateWithContext(ctx, &input) if err != nil { - return sdkdiag.AppendErrorf(diags, "Updating SES template '%s' failed: %s", templateName, err.Error()) + return sdkdiag.AppendErrorf(diags, "updating SES template (%s): %s", templateName, err) } return append(diags, resourceTemplateRead(ctx, d, meta)...) @@ -164,7 +166,7 @@ func resourceTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceTemplateDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SESConn() + conn := meta.(*conns.AWSClient).SESConn(ctx) input := ses.DeleteTemplateInput{ TemplateName: aws.String(d.Id()), } diff --git a/internal/service/ses/template_test.go b/internal/service/ses/template_test.go index c39e0142f02..dd5220dba9a 100644 --- a/internal/service/ses/template_test.go +++ b/internal/service/ses/template_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ses_test import ( @@ -127,7 +130,7 @@ func TestAccSESTemplate_disappears(t *testing.T) { func testAccCheckTemplateExists(ctx context.Context, pr string, template *ses.Template) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) rs, ok := s.RootModule().Resources[pr] if !ok { return fmt.Errorf("Not found: %s", pr) @@ -158,7 +161,7 @@ func testAccCheckTemplateExists(ctx context.Context, pr string, template *ses.Te func testAccCheckTemplateDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ses_template" { diff --git a/internal/service/sesv2/configuration_set.go b/internal/service/sesv2/configuration_set.go index f3548268d2f..8aaff77f63b 100644 --- a/internal/service/sesv2/configuration_set.go +++ b/internal/service/sesv2/configuration_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -181,11 +184,11 @@ const ( ) func resourceConfigurationSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.CreateConfigurationSetInput{ ConfigurationSetName: aws.String(d.Get("configuration_set_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("delivery_options"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -227,7 +230,7 @@ func resourceConfigurationSetCreate(ctx context.Context, d *schema.ResourceData, } func resourceConfigurationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindConfigurationSetByID(ctx, conn, d.Id()) @@ -296,7 +299,7 @@ func resourceConfigurationSetRead(ctx context.Context, d *schema.ResourceData, m } func resourceConfigurationSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) if d.HasChanges("delivery_options") { in := &sesv2.PutConfigurationSetDeliveryOptionsInput{ @@ -422,7 +425,7 @@ func resourceConfigurationSetUpdate(ctx context.Context, d *schema.ResourceData, } func resourceConfigurationSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 ConfigurationSet %s", d.Id()) diff --git a/internal/service/sesv2/configuration_set_data_source.go b/internal/service/sesv2/configuration_set_data_source.go index df6bbf3e26a..7e970eee8e6 100644 --- a/internal/service/sesv2/configuration_set_data_source.go +++ 
b/internal/service/sesv2/configuration_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -141,7 +144,7 @@ const ( ) func dataSourceConfigurationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) name := d.Get("configuration_set_name").(string) @@ -203,7 +206,7 @@ func dataSourceConfigurationSetRead(ctx context.Context, d *schema.ResourceData, d.Set("vdm_options", nil) } - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { return create.DiagError(names.SESV2, create.ErrActionReading, DSNameConfigurationSet, d.Id(), err) } diff --git a/internal/service/sesv2/configuration_set_data_source_test.go b/internal/service/sesv2/configuration_set_data_source_test.go index 7130bc4613c..54cb7576969 100644 --- a/internal/service/sesv2/configuration_set_data_source_test.go +++ b/internal/service/sesv2/configuration_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( diff --git a/internal/service/sesv2/configuration_set_event_destination.go b/internal/service/sesv2/configuration_set_event_destination.go index 587517385ac..301926cd85f 100644 --- a/internal/service/sesv2/configuration_set_event_destination.go +++ b/internal/service/sesv2/configuration_set_event_destination.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -179,7 +182,7 @@ const ( ) func resourceConfigurationSetEventDestinationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.CreateConfigurationSetEventDestinationInput{ ConfigurationSetName: aws.String(d.Get("configuration_set_name").(string)), @@ -204,7 +207,7 @@ func resourceConfigurationSetEventDestinationCreate(ctx context.Context, d *sche } func resourceConfigurationSetEventDestinationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) configurationSetName, _, err := ParseConfigurationSetEventDestinationID(d.Id()) if err != nil { @@ -234,7 +237,7 @@ func resourceConfigurationSetEventDestinationRead(ctx context.Context, d *schema } func resourceConfigurationSetEventDestinationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) configurationSetName, eventDestinationName, err := ParseConfigurationSetEventDestinationID(d.Id()) if err != nil { @@ -259,7 +262,7 @@ func resourceConfigurationSetEventDestinationUpdate(ctx context.Context, d *sche } func resourceConfigurationSetEventDestinationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 ConfigurationSetEventDestination %s", d.Id()) diff --git a/internal/service/sesv2/configuration_set_event_destination_test.go b/internal/service/sesv2/configuration_set_event_destination_test.go index 7ba96ef3a64..b0db61e44da 100644 --- 
a/internal/service/sesv2/configuration_set_event_destination_test.go +++ b/internal/service/sesv2/configuration_set_event_destination_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -246,7 +249,7 @@ func TestAccSESV2ConfigurationSetEventDestination_disappears(t *testing.T) { func testAccCheckConfigurationSetEventDestinationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sesv2_configuration_set_event_destination" { @@ -280,7 +283,7 @@ func testAccCheckConfigurationSetEventDestinationExists(ctx context.Context, nam return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameConfigurationSetEventDestination, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := tfsesv2.FindConfigurationSetEventDestinationByID(ctx, conn, rs.Primary.ID) @@ -348,11 +351,6 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_iam_role" "bucket" { name = "%[1]s2" @@ -422,9 +420,9 @@ func testAccConfigurationSetEventDestinationConfig_kinesisFirehoseDestination1(r fmt.Sprintf(` resource "aws_kinesis_firehose_delivery_stream" "test1" { name = "%[1]s-1" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.bucket.arn bucket_arn = aws_s3_bucket.test.arn } @@ -458,9 +456,9 @@ func testAccConfigurationSetEventDestinationConfig_kinesisFirehoseDestination2(r fmt.Sprintf(` resource "aws_kinesis_firehose_delivery_stream" "test2" { name = "%[1]s-2" - 
destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.bucket.arn bucket_arn = aws_s3_bucket.test.arn } diff --git a/internal/service/sesv2/configuration_set_test.go b/internal/service/sesv2/configuration_set_test.go index 3442e72bc61..2549f06b036 100644 --- a/internal/service/sesv2/configuration_set_test.go +++ b/internal/service/sesv2/configuration_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -338,7 +341,7 @@ func TestAccSESV2ConfigurationSet_tags(t *testing.T) { func testAccCheckConfigurationSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sesv2_configuration_set" { @@ -373,7 +376,7 @@ func testAccCheckConfigurationSetExists(ctx context.Context, name string) resour return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameConfigurationSet, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := tfsesv2.FindConfigurationSetByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sesv2/contact_list.go b/internal/service/sesv2/contact_list.go index 0f94093f724..b95a225ac88 100644 --- a/internal/service/sesv2/contact_list.go +++ b/internal/service/sesv2/contact_list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -96,11 +99,11 @@ const ( ) func resourceContactListCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.CreateContactListInput{ ContactListName: aws.String(d.Get("contact_list_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -126,7 +129,7 @@ func resourceContactListCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceContactListRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindContactListByID(ctx, conn, d.Id()) @@ -162,7 +165,7 @@ func resourceContactListRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceContactListUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.UpdateContactListInput{ ContactListName: aws.String(d.Id()), @@ -182,7 +185,7 @@ func resourceContactListUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceContactListDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 ContactList %s", d.Id()) diff --git a/internal/service/sesv2/contact_list_test.go b/internal/service/sesv2/contact_list_test.go index e61ce4c305c..bab445efcd6 100644 --- a/internal/service/sesv2/contact_list_test.go +++ b/internal/service/sesv2/contact_list_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -191,7 +194,7 @@ func TestAccSESV2ContactList_disappears(t *testing.T) { func testAccCheckContactListDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sesv2_contact_list" { @@ -226,7 +229,7 @@ func testAccCheckContactListExists(ctx context.Context, name string) resource.Te return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameContactList, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := tfsesv2.FindContactListByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sesv2/dedicated_ip_assignment.go b/internal/service/sesv2/dedicated_ip_assignment.go index 9bdd6edaae4..c55792c88e0 100644 --- a/internal/service/sesv2/dedicated_ip_assignment.go +++ b/internal/service/sesv2/dedicated_ip_assignment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -58,7 +61,7 @@ const ( ) func resourceDedicatedIPAssignmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.PutDedicatedIpInPoolInput{ Ip: aws.String(d.Get("ip").(string)), @@ -77,7 +80,7 @@ func resourceDedicatedIPAssignmentCreate(ctx context.Context, d *schema.Resource } func resourceDedicatedIPAssignmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindDedicatedIPAssignmentByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -89,14 +92,14 @@ func resourceDedicatedIPAssignmentRead(ctx context.Context, d *schema.ResourceDa return create.DiagError(names.SESV2, create.ErrActionReading, ResNameDedicatedIPAssignment, d.Id(), err) } - d.Set("ip", aws.ToString(out.Ip)) - d.Set("destination_pool_name", aws.ToString(out.PoolName)) + d.Set("ip", out.Ip) + d.Set("destination_pool_name", out.PoolName) return nil } func resourceDedicatedIPAssignmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) ip, _ := splitID(d.Id()) log.Printf("[INFO] Deleting SESV2 DedicatedIPAssignment %s", d.Id()) diff --git a/internal/service/sesv2/dedicated_ip_assignment_test.go b/internal/service/sesv2/dedicated_ip_assignment_test.go index 9bf24280cbe..72772a93cf9 100644 --- a/internal/service/sesv2/dedicated_ip_assignment_test.go +++ b/internal/service/sesv2/dedicated_ip_assignment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -92,7 +95,7 @@ func testAccSESV2DedicatedIPAssignment_disappears(t *testing.T) { // nosemgrep:c func testAccCheckDedicatedIPAssignmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sesv2_dedicated_ip_assignment" { @@ -129,7 +132,7 @@ func testAccCheckDedicatedIPAssignmentExists(ctx context.Context, name string) r return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameDedicatedIPAssignment, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := tfsesv2.FindDedicatedIPAssignmentByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/sesv2/dedicated_ip_pool.go b/internal/service/sesv2/dedicated_ip_pool.go index 12c8e2d4b17..29fc9843fad 100644 --- a/internal/service/sesv2/dedicated_ip_pool.go +++ b/internal/service/sesv2/dedicated_ip_pool.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -72,11 +75,11 @@ const ( ) func resourceDedicatedIPPoolCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.CreateDedicatedIpPoolInput{ PoolName: aws.String(d.Get("pool_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("scaling_mode"); ok { in.ScalingMode = types.ScalingMode(v.(string)) @@ -95,7 +98,7 @@ func resourceDedicatedIPPoolCreate(ctx context.Context, d *schema.ResourceData, } func resourceDedicatedIPPoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindDedicatedIPPoolByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -120,7 +123,7 @@ func resourceDedicatedIPPoolUpdate(ctx context.Context, d *schema.ResourceData, } func resourceDedicatedIPPoolDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 DedicatedIPPool %s", d.Id()) _, err := conn.DeleteDedicatedIpPool(ctx, &sesv2.DeleteDedicatedIpPoolInput{ diff --git a/internal/service/sesv2/dedicated_ip_pool_data_source.go b/internal/service/sesv2/dedicated_ip_pool_data_source.go index 8f488e8517e..2c5a76a50cc 100644 --- a/internal/service/sesv2/dedicated_ip_pool_data_source.go +++ b/internal/service/sesv2/dedicated_ip_pool_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -65,7 +68,7 @@ const ( ) func dataSourceDedicatedIPPoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindDedicatedIPPoolByID(ctx, conn, d.Get("pool_name").(string)) if err != nil { @@ -82,7 +85,7 @@ func dataSourceDedicatedIPPoolRead(ctx context.Context, d *schema.ResourceData, } d.Set("dedicated_ips", flattenDedicatedIPs(outIP.DedicatedIps)) - tags, err := ListTags(ctx, conn, d.Get("arn").(string)) + tags, err := listTags(ctx, conn, d.Get("arn").(string)) if err != nil { return create.DiagError(names.SESV2, create.ErrActionReading, DSNameDedicatedIPPool, d.Id(), err) } diff --git a/internal/service/sesv2/dedicated_ip_pool_data_source_test.go b/internal/service/sesv2/dedicated_ip_pool_data_source_test.go index e693dd0e67a..5a56993910d 100644 --- a/internal/service/sesv2/dedicated_ip_pool_data_source_test.go +++ b/internal/service/sesv2/dedicated_ip_pool_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( diff --git a/internal/service/sesv2/dedicated_ip_pool_test.go b/internal/service/sesv2/dedicated_ip_pool_test.go index 48be91c567f..d94e24e5d83 100644 --- a/internal/service/sesv2/dedicated_ip_pool_test.go +++ b/internal/service/sesv2/dedicated_ip_pool_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -156,7 +159,7 @@ func TestAccSESV2DedicatedIPPool_tags(t *testing.T) { func testAccCheckDedicatedIPPoolDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sesv2_dedicated_ip_pool" { @@ -189,7 +192,7 @@ func testAccCheckDedicatedIPPoolExists(ctx context.Context, name string) resourc return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameDedicatedIPPool, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := tfsesv2.FindDedicatedIPPoolByID(ctx, conn, rs.Primary.ID) if err != nil { @@ -201,7 +204,7 @@ func testAccCheckDedicatedIPPoolExists(ctx context.Context, name string) resourc } func testAccPreCheckDedicatedIPPool(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := conn.ListDedicatedIpPools(ctx, &sesv2.ListDedicatedIpPoolsInput{}) if acctest.PreCheckSkipError(err) { diff --git a/internal/service/sesv2/email_identity.go b/internal/service/sesv2/email_identity.go index 1bd441642a0..313901481e9 100644 --- a/internal/service/sesv2/email_identity.go +++ b/internal/service/sesv2/email_identity.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -134,11 +137,11 @@ const ( ) func resourceEmailIdentityCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.CreateEmailIdentityInput{ EmailIdentity: aws.String(d.Get("email_identity").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("configuration_set_name"); ok { @@ -163,8 +166,18 @@ func resourceEmailIdentityCreate(ctx context.Context, d *schema.ResourceData, me return resourceEmailIdentityRead(ctx, d, meta) } +func emailIdentityNameToARN(meta interface{}, emailIdentityName string) string { + return arn.ARN{ + Partition: meta.(*conns.AWSClient).Partition, + Service: "ses", + Region: meta.(*conns.AWSClient).Region, + AccountID: meta.(*conns.AWSClient).AccountID, + Resource: fmt.Sprintf("identity/%s", emailIdentityName), + }.String() +} + func resourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindEmailIdentityByID(ctx, conn, d.Id()) @@ -178,13 +191,7 @@ func resourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta return create.DiagError(names.SESV2, create.ErrActionReading, ResNameEmailIdentity, d.Id(), err) } - arn := arn.ARN{ - Partition: meta.(*conns.AWSClient).Partition, - Service: "ses", - Region: meta.(*conns.AWSClient).Region, - AccountID: meta.(*conns.AWSClient).AccountID, - Resource: fmt.Sprintf("identity/%s", d.Id()), - }.String() + arn := emailIdentityNameToARN(meta, d.Id()) d.Set("arn", arn) d.Set("configuration_set_name", out.ConfigurationSetName) @@ -209,7 +216,7 @@ func resourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta } func resourceEmailIdentityUpdate(ctx context.Context, d *schema.ResourceData, 
meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) if d.HasChanges("configuration_set_name") { in := &sesv2.PutEmailIdentityConfigurationSetAttributesInput{ @@ -249,7 +256,7 @@ func resourceEmailIdentityUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceEmailIdentityDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 EmailIdentity %s", d.Id()) diff --git a/internal/service/sesv2/email_identity_data_source.go b/internal/service/sesv2/email_identity_data_source.go new file mode 100644 index 00000000000..162f42cfa23 --- /dev/null +++ b/internal/service/sesv2/email_identity_data_source.go @@ -0,0 +1,128 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package sesv2 + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKDataSource("aws_sesv2_email_identity") +// @Tags(identifierAttribute="arn") +func DataSourceEmailIdentity() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceEmailIdentityRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "configuration_set_name": { + Type: schema.TypeString, + Computed: true, + }, + "dkim_signing_attributes": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "current_signing_key_length": { + Type: schema.TypeString, + Computed: true, + }, + 
"domain_signing_private_key": { + Type: schema.TypeString, + Computed: true, + }, + "domain_signing_selector": { + Type: schema.TypeString, + Computed: true, + }, + "last_key_generation_timestamp": { + Type: schema.TypeString, + Computed: true, + }, + "next_signing_key_length": { + Type: schema.TypeString, + Computed: true, + }, + "signing_attributes_origin": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "tokens": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "email_identity": { + Type: schema.TypeString, + Required: true, + }, + "identity_type": { + Type: schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchemaComputed(), + "verified_for_sending_status": { + Type: schema.TypeBool, + Computed: true, + }, + }, + } +} + +const ( + DSNameEmailIdentity = "Email Identity Data Source" +) + +func dataSourceEmailIdentityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).SESV2Client(ctx) + + name := d.Get("email_identity").(string) + + out, err := FindEmailIdentityByID(ctx, conn, name) + if err != nil { + return append(diags, create.DiagError(names.SESV2, create.ErrActionReading, DSNameEmailIdentity, name, err)...) 
+ } + + arn := emailIdentityNameToARN(meta, name) + + d.SetId(name) + d.Set("arn", arn) + d.Set("configuration_set_name", out.ConfigurationSetName) + d.Set("email_identity", name) + + if out.DkimAttributes != nil { + tfMap := flattenDKIMAttributes(out.DkimAttributes) + tfMap["domain_signing_private_key"] = d.Get("dkim_signing_attributes.0.domain_signing_private_key").(string) + tfMap["domain_signing_selector"] = d.Get("dkim_signing_attributes.0.domain_signing_selector").(string) + + if err := d.Set("dkim_signing_attributes", []interface{}{tfMap}); err != nil { + return append(diags, create.DiagError(names.SESV2, create.ErrActionSetting, DSNameEmailIdentity, name, err)...) + } + } else { + d.Set("dkim_signing_attributes", nil) + } + + d.Set("identity_type", string(out.IdentityType)) + d.Set("verified_for_sending_status", out.VerifiedForSendingStatus) + + return diags +} diff --git a/internal/service/sesv2/email_identity_data_source_test.go b/internal/service/sesv2/email_identity_data_source_test.go new file mode 100644 index 00000000000..ea4278763ad --- /dev/null +++ b/internal/service/sesv2/email_identity_data_source_test.go @@ -0,0 +1,59 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + +package sesv2_test + +import ( + "fmt" + "testing" + + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccSESV2EmailIdentityDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_sesv2_email_identity.test" + dataSourceName := "data.aws_sesv2_email_identity.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.SESV2EndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckEmailIdentityDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccEmailIdentityDataSourceConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckEmailIdentityExists(ctx, dataSourceName), + resource.TestCheckResourceAttrPair(resourceName, "arn", dataSourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "email_identity", dataSourceName, "email_identity"), + resource.TestCheckResourceAttrPair(resourceName, "dkim_signing_attributes.#", dataSourceName, "dkim_signing_attributes.#"), + resource.TestCheckResourceAttrPair(resourceName, "dkim_signing_attributes.0.current_signing_key_length", dataSourceName, "dkim_signing_attributes.0.current_signing_key_length"), + resource.TestCheckResourceAttrPair(resourceName, "dkim_signing_attributes.0.last_key_generation_timestamp", dataSourceName, "dkim_signing_attributes.0.last_key_generation_timestamp"), + resource.TestCheckResourceAttrPair(resourceName, "dkim_signing_attributes.0.next_signing_key_length", dataSourceName, "dkim_signing_attributes.0.next_signing_key_length"), + resource.TestCheckResourceAttrPair(resourceName, 
"dkim_signing_attributes.0.signing_attributes_origin", dataSourceName, "dkim_signing_attributes.0.signing_attributes_origin"), + resource.TestCheckResourceAttrPair(resourceName, "dkim_signing_attributes.0.status", dataSourceName, "dkim_signing_attributes.0.status"), + resource.TestCheckResourceAttrPair(resourceName, "dkim_signing_attributes.0.tokens.#", dataSourceName, "dkim_signing_attributes.0.tokens.#"), + resource.TestCheckResourceAttrPair(resourceName, "identity_type", dataSourceName, "identity_type"), + resource.TestCheckResourceAttrPair(resourceName, "verified_for_sending_status", dataSourceName, "verified_for_sending_status"), + ), + }, + }, + }) +} + +func testAccEmailIdentityDataSourceConfig_basic(rName string) string { + return fmt.Sprintf(` +resource "aws_sesv2_email_identity" "test" { + email_identity = %[1]q +} + +data "aws_sesv2_email_identity" "test" { + email_identity = aws_sesv2_email_identity.test.email_identity +} +`, rName) +} diff --git a/internal/service/sesv2/email_identity_feedback_attributes.go b/internal/service/sesv2/email_identity_feedback_attributes.go index 882c8c91cf9..314ebab34aa 100644 --- a/internal/service/sesv2/email_identity_feedback_attributes.go +++ b/internal/service/sesv2/email_identity_feedback_attributes.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -48,7 +51,7 @@ const ( ) func resourceEmailIdentityFeedbackAttributesCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.PutEmailIdentityFeedbackAttributesInput{ EmailIdentity: aws.String(d.Get("email_identity").(string)), @@ -70,7 +73,7 @@ func resourceEmailIdentityFeedbackAttributesCreate(ctx context.Context, d *schem } func resourceEmailIdentityFeedbackAttributesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindEmailIdentityByID(ctx, conn, d.Id()) @@ -91,7 +94,7 @@ func resourceEmailIdentityFeedbackAttributesRead(ctx context.Context, d *schema. } func resourceEmailIdentityFeedbackAttributesUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) update := false @@ -118,7 +121,7 @@ func resourceEmailIdentityFeedbackAttributesUpdate(ctx context.Context, d *schem } func resourceEmailIdentityFeedbackAttributesDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 EmailIdentityFeedbackAttributes %s", d.Id()) diff --git a/internal/service/sesv2/email_identity_feedback_attributes_test.go b/internal/service/sesv2/email_identity_feedback_attributes_test.go index 3e2c88daa3c..2448dd6e585 100644 --- a/internal/service/sesv2/email_identity_feedback_attributes_test.go +++ b/internal/service/sesv2/email_identity_feedback_attributes_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -138,7 +141,7 @@ func testAccCheckEmailIdentityFeedbackAttributesExist(ctx context.Context, name return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameEmailIdentity, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) out, err := tfsesv2.FindEmailIdentityByID(ctx, conn, rs.Primary.ID) if err != nil { diff --git a/internal/service/sesv2/email_identity_mail_from_attributes.go b/internal/service/sesv2/email_identity_mail_from_attributes.go index a7344772d6e..673afef7f8c 100644 --- a/internal/service/sesv2/email_identity_mail_from_attributes.go +++ b/internal/service/sesv2/email_identity_mail_from_attributes.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( @@ -56,7 +59,7 @@ const ( ) func resourceEmailIdentityMailFromAttributesCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) in := &sesv2.PutEmailIdentityMailFromAttributesInput{ EmailIdentity: aws.String(d.Get("email_identity").(string)), @@ -89,7 +92,7 @@ func resourceEmailIdentityMailFromAttributesCreate(ctx context.Context, d *schem } func resourceEmailIdentityMailFromAttributesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) out, err := FindEmailIdentityByID(ctx, conn, d.Id()) @@ -117,7 +120,7 @@ func resourceEmailIdentityMailFromAttributesRead(ctx context.Context, d *schema. 
} func resourceEmailIdentityMailFromAttributesUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) update := false @@ -150,7 +153,7 @@ func resourceEmailIdentityMailFromAttributesUpdate(ctx context.Context, d *schem } func resourceEmailIdentityMailFromAttributesDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SESV2Client() + conn := meta.(*conns.AWSClient).SESV2Client(ctx) log.Printf("[INFO] Deleting SESV2 EmailIdentityMailFromAttributes %s", d.Id()) diff --git a/internal/service/sesv2/email_identity_mail_from_attributes_data_source.go b/internal/service/sesv2/email_identity_mail_from_attributes_data_source.go new file mode 100644 index 00000000000..896d10c7e4f --- /dev/null +++ b/internal/service/sesv2/email_identity_mail_from_attributes_data_source.go @@ -0,0 +1,66 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package sesv2 + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKDataSource("aws_sesv2_email_identity_mail_from_attributes") +func DataSourceEmailIdentityMailFromAttributes() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceEmailIdentityMailFromAttributesRead, + + Schema: map[string]*schema.Schema{ + "behavior_on_mx_failure": { + Type: schema.TypeString, + Computed: true, + }, + "email_identity": { + Type: schema.TypeString, + Required: true, + }, + "mail_from_domain": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +const ( + DSNameEmailIdentityMailFromAttributes = "Email Identity Mail From Attributes Data Source" +) + +func dataSourceEmailIdentityMailFromAttributesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).SESV2Client(ctx) + + name := d.Get("email_identity").(string) + + out, err := FindEmailIdentityByID(ctx, conn, name) + + if err != nil { + return append(diags, create.DiagError(names.SESV2, create.ErrActionReading, DSNameEmailIdentityMailFromAttributes, name, err)...)
+ } + + d.SetId(name) + d.Set("email_identity", name) + + if out.MailFromAttributes != nil { + d.Set("behavior_on_mx_failure", out.MailFromAttributes.BehaviorOnMxFailure) + d.Set("mail_from_domain", out.MailFromAttributes.MailFromDomain) + } else { + d.Set("behavior_on_mx_failure", nil) + d.Set("mail_from_domain", nil) + } + + return diags +} diff --git a/internal/service/sesv2/email_identity_mail_from_attributes_data_source_test.go b/internal/service/sesv2/email_identity_mail_from_attributes_data_source_test.go new file mode 100644 index 00000000000..ef9cdda1945 --- /dev/null +++ b/internal/service/sesv2/email_identity_mail_from_attributes_data_source_test.go @@ -0,0 +1,60 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package sesv2_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go-v2/service/sesv2/types" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccSESV2EmailIdentityMailFromAttributesDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + domain := acctest.RandomDomain() + mailFromDomain1 := domain.Subdomain("test1") + + rName := domain.String() + resourceName := "aws_sesv2_email_identity_mail_from_attributes.test" + dataSourceName := "data.aws_sesv2_email_identity_mail_from_attributes.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.SESV2EndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckEmailIdentityDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccEmailIdentityMailFromAttributesDataSourceConfig_basic(rName, string(types.BehaviorOnMxFailureRejectMessage), mailFromDomain1.String()), + Check: resource.ComposeTestCheckFunc( + testAccCheckEmailIdentityMailFromAttributesExists(ctx, 
dataSourceName), + resource.TestCheckResourceAttrPair(resourceName, "email_identity", dataSourceName, "email_identity"), + resource.TestCheckResourceAttrPair(resourceName, "behavior_on_mx_failure", dataSourceName, "behavior_on_mx_failure"), + resource.TestCheckResourceAttrPair(resourceName, "mail_from_domain", dataSourceName, "mail_from_domain"), + ), + }, + }, + }) +} + +func testAccEmailIdentityMailFromAttributesDataSourceConfig_basic(rName, behaviorOnMXFailure, mailFromDomain string) string { + return fmt.Sprintf(` +resource "aws_sesv2_email_identity" "test" { + email_identity = %[1]q +} + +resource "aws_sesv2_email_identity_mail_from_attributes" "test" { + email_identity = aws_sesv2_email_identity.test.email_identity + behavior_on_mx_failure = %[2]q + mail_from_domain = %[3]q +} + +data "aws_sesv2_email_identity_mail_from_attributes" "test" { + email_identity = aws_sesv2_email_identity_mail_from_attributes.test.email_identity +} +`, rName, behaviorOnMXFailure, mailFromDomain) +} diff --git a/internal/service/sesv2/email_identity_mail_from_attributes_test.go b/internal/service/sesv2/email_identity_mail_from_attributes_test.go index 12e3f8a4baf..37fe5e1cf5d 100644 --- a/internal/service/sesv2/email_identity_mail_from_attributes_test.go +++ b/internal/service/sesv2/email_identity_mail_from_attributes_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -180,7 +183,7 @@ func testAccCheckEmailIdentityMailFromAttributesExists(ctx context.Context, name return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameEmailIdentityMailFromAttributes, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) out, err := tfsesv2.FindEmailIdentityByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sesv2/email_identity_test.go b/internal/service/sesv2/email_identity_test.go index 9f8d1330bbf..990909edc0e 100644 --- a/internal/service/sesv2/email_identity_test.go +++ b/internal/service/sesv2/email_identity_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2_test import ( @@ -278,7 +281,7 @@ func TestAccSESV2EmailIdentity_tags(t *testing.T) { func testAccCheckEmailIdentityDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sesv2_email_identity" { @@ -313,7 +316,7 @@ func testAccCheckEmailIdentityExists(ctx context.Context, name string) resource. return create.Error(names.SESV2, create.ErrActionCheckingExistence, tfsesv2.ResNameEmailIdentity, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client() + conn := acctest.Provider.Meta().(*conns.AWSClient).SESV2Client(ctx) _, err := tfsesv2.FindEmailIdentityByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sesv2/generate.go b/internal/service/sesv2/generate.go index d89d2067fd4..d2c7c7719b4 100644 --- a/internal/service/sesv2/generate.go +++ b/internal/service/sesv2/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ServiceTagsSlice -ListTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package sesv2 diff --git a/internal/service/sesv2/list.go b/internal/service/sesv2/list.go index 9f5389386b4..c0c0d505f4a 100644 --- a/internal/service/sesv2/list.go +++ b/internal/service/sesv2/list.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sesv2 import ( diff --git a/internal/service/sesv2/service_package_gen.go b/internal/service/sesv2/service_package_gen.go index 9ccf96a1142..b211e0901b8 100644 --- a/internal/service/sesv2/service_package_gen.go +++ b/internal/service/sesv2/service_package_gen.go @@ -5,6 +5,9 @@ package sesv2 import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + sesv2_sdkv2 "github.com/aws/aws-sdk-go-v2/service/sesv2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -29,6 +32,17 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac Factory: DataSourceDedicatedIPPool, TypeName: "aws_sesv2_dedicated_ip_pool", }, + { + Factory: DataSourceEmailIdentity, + TypeName: "aws_sesv2_email_identity", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "arn", + }, + }, + { + Factory: DataSourceEmailIdentityMailFromAttributes, + TypeName: "aws_sesv2_email_identity_mail_from_attributes", + }, } } @@ -89,4 +103,17 @@ func (p *servicePackage) ServicePackageName() string { return names.SESV2 } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*sesv2_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return sesv2_sdkv2.NewFromConfig(cfg, func(o *sesv2_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = sesv2_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/sesv2/sweep.go b/internal/service/sesv2/sweep.go index a8ebc040422..83c6c0c6f2a 100644 --- a/internal/service/sesv2/sweep.go +++ b/internal/service/sesv2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/sesv2" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,13 +31,13 @@ func init() { func sweepConfigurationSets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SESV2Client() + conn := client.SESV2Client(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -62,7 +64,7 @@ func sweepConfigurationSets(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing Configuration Sets for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Configuration Sets for %s: %w", region, err)) } @@ -76,13 +78,13 @@ 
func sweepConfigurationSets(region string) error { func sweepContactLists(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SESV2Client() + conn := client.SESV2Client(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error @@ -109,7 +111,7 @@ func sweepContactLists(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing Contact Lists for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Contact Lists for %s: %w", region, err)) } diff --git a/internal/service/sesv2/tags_gen.go b/internal/service/sesv2/tags_gen.go index 14b0fc63d5d..c99b1ce1afa 100644 --- a/internal/service/sesv2/tags_gen.go +++ b/internal/service/sesv2/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists sesv2 service tags. +// listTags lists sesv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *sesv2.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *sesv2.Client, identifier string) (tftags.KeyValueTags, error) { input := &sesv2.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *sesv2.Client, identifier string) (tftag // ListTags lists sesv2 service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SESV2Client(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SESV2Client(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns sesv2 service tags from Context. +// getTagsIn returns sesv2 service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets sesv2 service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets sesv2 service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates sesv2 service tags. +// updateTags updates sesv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *sesv2.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *sesv2.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *sesv2.Client, identifier string, oldT // UpdateTags updates sesv2 service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SESV2Client(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SESV2Client(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/sfn/activity.go b/internal/service/sfn/activity.go index 8444c5f1522..815edbde635 100644 --- a/internal/service/sfn/activity.go +++ b/internal/service/sfn/activity.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sfn import ( @@ -52,12 +55,12 @@ func ResourceActivity() *schema.Resource { } func resourceActivityCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SFNConn() + conn := meta.(*conns.AWSClient).SFNConn(ctx) name := d.Get("name").(string) input := &sfn.CreateActivityInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateActivityWithContext(ctx, input) @@ -72,7 +75,7 @@ func resourceActivityCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceActivityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SFNConn() + conn := meta.(*conns.AWSClient).SFNConn(ctx) output, err := FindActivityByARN(ctx, conn, d.Id()) @@ -98,7 +101,7 @@ func resourceActivityUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceActivityDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SFNConn() + conn := meta.(*conns.AWSClient).SFNConn(ctx) log.Printf("[DEBUG] Deleting Step Functions Activity: %s", d.Id()) _, err := conn.DeleteActivityWithContext(ctx, &sfn.DeleteActivityInput{ diff --git a/internal/service/sfn/activity_data_source.go b/internal/service/sfn/activity_data_source.go index 
4846c2f3c0e..c64ed750f55 100644 --- a/internal/service/sfn/activity_data_source.go +++ b/internal/service/sfn/activity_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sfn import ( @@ -44,7 +47,7 @@ func DataSourceActivity() *schema.Resource { } func dataSourceActivityRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SFNConn() + conn := meta.(*conns.AWSClient).SFNConn(ctx) if v, ok := d.GetOk("name"); ok { name := v.(string) diff --git a/internal/service/sfn/activity_data_source_test.go b/internal/service/sfn/activity_data_source_test.go index 2eac17b0d6e..c8acb430b5e 100644 --- a/internal/service/sfn/activity_data_source_test.go +++ b/internal/service/sfn/activity_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sfn_test import ( diff --git a/internal/service/sfn/activity_test.go b/internal/service/sfn/activity_test.go index c2b06a92fba..d8360381766 100644 --- a/internal/service/sfn/activity_test.go +++ b/internal/service/sfn/activity_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sfn_test import ( @@ -125,7 +128,7 @@ func testAccCheckActivityExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("No Step Functions Activity ID set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SFNConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SFNConn(ctx) _, err := tfsfn.FindActivityByARN(ctx, conn, rs.Primary.ID) @@ -135,7 +138,7 @@ func testAccCheckActivityExists(ctx context.Context, n string) resource.TestChec func testAccCheckActivityDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SFNConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SFNConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sfn_activity" { diff --git a/internal/service/sfn/alias.go b/internal/service/sfn/alias.go new file mode 100644 index 00000000000..379acb54360 --- /dev/null +++ b/internal/service/sfn/alias.go @@ -0,0 +1,283 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package sfn + +import ( + "context" + "errors" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/sfn" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_sfn_alias") +func ResourceAlias() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceAliasCreate, + ReadWithoutTimeout: resourceAliasRead, + UpdateWithoutTimeout: resourceAliasUpdate, + DeleteWithoutTimeout: resourceAliasDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(30 * time.Minute), + Update: schema.DefaultTimeout(30 * time.Minute), + Delete: schema.DefaultTimeout(30 * time.Minute), + }, + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "creation_date": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "routing_configuration": { + Type: schema.TypeList, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "state_machine_version_arn": { + Type: schema.TypeString, + Required: true, + }, + "weight": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + }, + } +} + +const ( + ResNameAlias = "Alias" +) + +func resourceAliasCreate(ctx context.Context, d 
*schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).SFNConn(ctx) + + in := &sfn.CreateStateMachineAliasInput{ + Name: aws.String(d.Get("name").(string)), + Description: aws.String(d.Get("description").(string)), + } + + if v, ok := d.GetOk("routing_configuration"); ok && len(v.([]interface{})) > 0 { + in.RoutingConfiguration = expandAliasRoutingConfiguration(v.([]interface{})) + } + + out, err := conn.CreateStateMachineAliasWithContext(ctx, in) + if err != nil { + return create.DiagError(names.SFN, create.ErrActionCreating, ResNameAlias, d.Get("name").(string), err) + } + + if out == nil || out.StateMachineAliasArn == nil { + return create.DiagError(names.SFN, create.ErrActionCreating, ResNameAlias, d.Get("name").(string), errors.New("empty output")) + } + + d.SetId(aws.StringValue(out.StateMachineAliasArn)) + + return resourceAliasRead(ctx, d, meta) +} + +func resourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).SFNConn(ctx) + + out, err := FindAliasByARN(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] SFN Alias (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return create.DiagError(names.SFN, create.ErrActionReading, ResNameAlias, d.Id(), err) + } + + d.Set("arn", out.StateMachineAliasArn) + d.Set("name", out.Name) + d.Set("description", out.Description) + d.Set("creation_date", aws.TimeValue(out.CreationDate).Format(time.RFC3339)) + d.SetId(aws.StringValue(out.StateMachineAliasArn)) + + if err := d.Set("routing_configuration", flattenAliasRoutingConfiguration(out.RoutingConfiguration)); err != nil { + return create.DiagError(names.SFN, create.ErrActionSetting, ResNameAlias, d.Id(), err) + } + return nil +} + +func resourceAliasUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := 
meta.(*conns.AWSClient).SFNConn(ctx) + + update := false + + in := &sfn.UpdateStateMachineAliasInput{ + StateMachineAliasArn: aws.String(d.Id()), + } + + if d.HasChanges("description") { + in.Description = aws.String(d.Get("description").(string)) + update = true + } + + if d.HasChange("routing_configuration") { + in.RoutingConfiguration = expandAliasRoutingConfiguration(d.Get("routing_configuration").([]interface{})) + update = true + } + + if !update { + return nil + } + + log.Printf("[DEBUG] Updating SFN Alias (%s): %#v", d.Id(), in) + _, err := conn.UpdateStateMachineAliasWithContext(ctx, in) + if err != nil { + return create.DiagError(names.SFN, create.ErrActionUpdating, ResNameAlias, d.Id(), err) + } + + return resourceAliasRead(ctx, d, meta) +} + +func resourceAliasDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).SFNConn(ctx) + log.Printf("[INFO] Deleting SFN Alias %s", d.Id()) + + _, err := conn.DeleteStateMachineAliasWithContext(ctx, &sfn.DeleteStateMachineAliasInput{ + StateMachineAliasArn: aws.String(d.Id()), + }) + + if err != nil { + return create.DiagError(names.SFN, create.ErrActionDeleting, ResNameAlias, d.Id(), err) + } + + return nil +} + +func FindAliasByARN(ctx context.Context, conn *sfn.SFN, arn string) (*sfn.DescribeStateMachineAliasOutput, error) { + in := &sfn.DescribeStateMachineAliasInput{ + StateMachineAliasArn: aws.String(arn), + } + out, err := conn.DescribeStateMachineAliasWithContext(ctx, in) + if tfawserr.ErrCodeEquals(err, sfn.ErrCodeResourceNotFound) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, + } + } + + if err != nil { + return nil, err + } + + if out == nil { + return nil, tfresource.NewEmptyResultError(in) + } + + return out, nil +} + +func flattenAliasRoutingConfigurationItem(apiObject *sfn.RoutingConfigurationListItem) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + 
+ if v := apiObject.StateMachineVersionArn; v != nil { + tfMap["state_machine_version_arn"] = aws.StringValue(v) + } + + if v := apiObject.Weight; v != nil { + tfMap["weight"] = aws.Int64Value(v) + } + + return tfMap +} + +func flattenAliasRoutingConfiguration(apiObjects []*sfn.RoutingConfigurationListItem) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenAliasRoutingConfigurationItem(apiObject)) + } + + return tfList +} + +func expandAliasRoutingConfiguration(tfList []interface{}) []*sfn.RoutingConfigurationListItem { + if len(tfList) == 0 { + return nil + } + var configurationListItems []*sfn.RoutingConfigurationListItem + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + configurationListItem := expandAliasRoutingConfigurationItem(tfMap) + + if configurationListItem == nil { + continue + } + + configurationListItems = append(configurationListItems, configurationListItem) + } + + return configurationListItems +} + +func expandAliasRoutingConfigurationItem(tfMap map[string]interface{}) *sfn.RoutingConfigurationListItem { + if tfMap == nil { + return nil + } + + apiObject := &sfn.RoutingConfigurationListItem{} + if v, ok := tfMap["state_machine_version_arn"].(string); ok && v != "" { + apiObject.StateMachineVersionArn = aws.String(v) + } + + if v, ok := tfMap["weight"].(int); ok && v != 0 { + apiObject.Weight = aws.Int64(int64(v)) + } + + return apiObject +} diff --git a/internal/service/sfn/alias_data_source.go b/internal/service/sfn/alias_data_source.go new file mode 100644 index 00000000000..a0d3724d7a5 --- /dev/null +++ b/internal/service/sfn/alias_data_source.go @@ -0,0 +1,115 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package sfn + +import ( + "context" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/sfn" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKDataSource("aws_sfn_alias") +func DataSourceAlias() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceAliasRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "creation_date": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "routing_configuration": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "state_machine_version_arn": { + Type: schema.TypeString, + Required: true, + }, + "weight": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "statemachine_arn": { + Type: schema.TypeString, + Required: true, + }, + }, + } +} + +const ( + DSNameAlias = "Alias Data Source" +) + +func dataSourceAliasRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).SFNConn(ctx) + aliasArn := "" + + in := &sfn.ListStateMachineAliasesInput{ + StateMachineArn: aws.String(d.Get("statemachine_arn").(string)), + } + + out, err := conn.ListStateMachineAliasesWithContext(ctx, in) + + if err != nil { + return diag.Errorf("listing Step Functions State Machine Aliases: %s", err) + } + + if n := len(out.StateMachineAliases); n == 0 { + return diag.Errorf("no Step Functions State Machine Aliases matched") + } + + for _, in := range out.StateMachineAliases { + if v :=
aws.StringValue(in.StateMachineAliasArn); strings.HasSuffix(v, d.Get("name").(string)) { + aliasArn = v + } + } + + if aliasArn == "" { + return diag.Errorf("no Step Functions State Machine Aliases matched") + } + + output, err := FindAliasByARN(ctx, conn, aliasArn) + + if err != nil { + return diag.Errorf("reading Step Functions State Machine Alias (%s): %s", aliasArn, err) + } + + d.SetId(aliasArn) + d.Set("arn", output.StateMachineAliasArn) + d.Set("name", output.Name) + d.Set("description", output.Description) + d.Set("creation_date", aws.TimeValue(output.CreationDate).Format(time.RFC3339)) + + if err := d.Set("routing_configuration", flattenAliasRoutingConfiguration(output.RoutingConfiguration)); err != nil { + return create.DiagError(names.SFN, create.ErrActionSetting, ResNameAlias, d.Id(), err) + } + + return nil +} diff --git a/internal/service/sfn/alias_data_source_test.go b/internal/service/sfn/alias_data_source_test.go new file mode 100644 index 00000000000..cf1d2c7f992 --- /dev/null +++ b/internal/service/sfn/alias_data_source_test.go @@ -0,0 +1,49 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package sfn_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/sfn" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccSFNAliasDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + dataSourceName := "data.aws_sfn_alias.test" + resourceName := "aws_sfn_alias.test" + rString := sdkacctest.RandString(8) + stateMachineName := fmt.Sprintf("tf_acc_state_machine_alias_basic_%s", rString) + aliasName := fmt.Sprintf("tf_acc_state_machine_alias_basic_%s", rString) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, sfn.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccAliasDataSourceConfig_basic(stateMachineName, aliasName, 10), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "id", dataSourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "creation_date", dataSourceName, "creation_date"), + resource.TestCheckResourceAttrPair(resourceName, "description", dataSourceName, "description"), + resource.TestCheckResourceAttrPair(resourceName, "name", dataSourceName, "name"), + ), + }, + }, + }) +} + +func testAccAliasDataSourceConfig_basic(statemachineName string, aliasName string, rMaxAttempts int) string { + return acctest.ConfigCompose(testAccStateMachineAliasConfig_basic(statemachineName, aliasName, rMaxAttempts), ` +data "aws_sfn_alias" "test" { + name = aws_sfn_alias.test.name + statemachine_arn = aws_sfn_state_machine.test.arn +} +`) +} diff --git a/internal/service/sfn/alias_test.go b/internal/service/sfn/alias_test.go new file mode 100644 index 00000000000..4bf815ac6f7 --- /dev/null +++ 
b/internal/service/sfn/alias_test.go @@ -0,0 +1,295 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package sfn_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/sfn" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfsfn "github.com/hashicorp/terraform-provider-aws/internal/service/sfn" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccSFNAlias_basic(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var alias sfn.DescribeStateMachineAliasOutput + rString := sdkacctest.RandString(8) + stateMachineName := fmt.Sprintf("tf_acc_state_machine_alias_basic_%s", rString) + aliasName := fmt.Sprintf("tf_acc_state_machine_alias_basic_%s", rString) + resourceName := "aws_sfn_alias.test" + functionArnResourcePart := fmt.Sprintf("stateMachine:%s:%s", stateMachineName, aliasName) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, sfn.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAliasDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccStateMachineAliasConfig_basic(stateMachineName, aliasName, 10), + Check: resource.ComposeTestCheckFunc( + testAccCheckAliasExists(ctx, resourceName, &alias), + testAccCheckAliasAttributes(&alias), + resource.TestCheckResourceAttrSet(resourceName, "creation_date"), + resource.TestCheckResourceAttr(resourceName, "name", aliasName), + acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "states", functionArnResourcePart), + ), + }, + { + 
ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccSFNAlias_disappears(t *testing.T) { + ctx := acctest.Context(t) + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var alias sfn.DescribeStateMachineAliasOutput + rString := sdkacctest.RandString(8) + stateMachineName := fmt.Sprintf("tf_acc_state_machine_alias_basic_%s", rString) + aliasName := fmt.Sprintf("tf_acc_state_machine_alias_basic_%s", rString) + resourceName := "aws_sfn_alias.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, sfn.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAliasDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccStateMachineAliasConfig_basic(stateMachineName, aliasName, 10), + Check: resource.ComposeTestCheckFunc( + testAccCheckAliasExists(ctx, resourceName, &alias), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tfsfn.ResourceAlias(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAliasAttributes(mapping *sfn.DescribeStateMachineAliasOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + name := *mapping.Name + arn := *mapping.StateMachineAliasArn + if arn == "" { + return fmt.Errorf("Could not read StateMachine alias ARN") + } + if name == "" { + return fmt.Errorf("Could not read StateMachine alias name") + } + return nil + } +} + +func testAccCheckAliasDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).SFNConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_sfn_alias" { + continue + } + + _, err := tfsfn.FindAliasByARN(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return 
fmt.Errorf("Step Functions State Machine Alias %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccCheckAliasExists(ctx context.Context, name string, v *sfn.DescribeStateMachineAliasOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[name] + if !ok { + return fmt.Errorf("Not found: %s", name) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Step Functions State Machine Alias ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).SFNConn(ctx) + + output, err := tfsfn.FindAliasByARN(ctx, conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccStateMachineAliasConfig_base(rName string, rMaxAttempts int) string { + return fmt.Sprintf(` +resource "aws_iam_role_policy" "for_lambda" { + name = "%[1]s-lambda" + role = aws_iam_role.for_lambda.id + + policy = < 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*sfn.Tag { return nil } -// SetTagsOut sets sfn service tags in Context. -func SetTagsOut(ctx context.Context, tags []*sfn.Tag) { +// setTagsOut sets sfn service tags in Context. +func setTagsOut(ctx context.Context, tags []*sfn.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates sfn service tags. +// updateTags updates sfn service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn sfniface.SFNAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn sfniface.SFNAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn sfniface.SFNAPI, identifier string, ol // UpdateTags updates sfn service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SFNConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SFNConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/shield/generate.go b/internal/service/shield/generate.go index 6fb866d8cfc..e4f0db018e8 100644 --- a/internal/service/shield/generate.go +++ b/internal/service/shield/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package shield diff --git a/internal/service/shield/id.go b/internal/service/shield/id.go index 7ee3e22eafe..6f8da58b1ec 100644 --- a/internal/service/shield/id.go +++ b/internal/service/shield/id.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package shield import ( diff --git a/internal/service/shield/protection.go b/internal/service/shield/protection.go index 0238875e975..783095187e0 100644 --- a/internal/service/shield/protection.go +++ b/internal/service/shield/protection.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package shield import ( @@ -53,12 +56,12 @@ func ResourceProtection() *schema.Resource { func resourceProtectionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) input := &shield.CreateProtectionInput{ Name: aws.String(d.Get("name").(string)), ResourceArn: aws.String(d.Get("resource_arn").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } resp, err := conn.CreateProtectionWithContext(ctx, input) @@ -71,7 +74,7 @@ func resourceProtectionCreate(ctx context.Context, d *schema.ResourceData, meta func resourceProtectionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) input := &shield.DescribeProtectionInput{ ProtectionId: aws.String(d.Id()), @@ -107,7 +110,7 @@ func resourceProtectionUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceProtectionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) input := &shield.DeleteProtectionInput{ ProtectionId: aws.String(d.Id()), diff --git a/internal/service/shield/protection_group.go b/internal/service/shield/protection_group.go index 0c200cf0940..a1737271d66 100644 --- a/internal/service/shield/protection_group.go +++ b/internal/service/shield/protection_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package shield import ( @@ -79,14 +82,14 @@ func ResourceProtectionGroup() *schema.Resource { func resourceProtectionGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) protectionGroupID := d.Get("protection_group_id").(string) input := &shield.CreateProtectionGroupInput{ Aggregation: aws.String(d.Get("aggregation").(string)), Pattern: aws.String(d.Get("pattern").(string)), ProtectionGroupId: aws.String(protectionGroupID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("members"); ok { @@ -111,7 +114,7 @@ func resourceProtectionGroupCreate(ctx context.Context, d *schema.ResourceData, func resourceProtectionGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) input := &shield.DescribeProtectionGroupInput{ + ProtectionGroupId: aws.String(d.Id()), @@ -142,7 +145,7 @@ func resourceProtectionGroupRead(ctx context.Context, d *schema.ResourceData, me func resourceProtectionGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &shield.UpdateProtectionGroupInput{ @@ -172,7 +175,7 @@ func resourceProtectionGroupUpdate(ctx context.Context, d *schema.ResourceData, func resourceProtectionGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) log.Printf("[DEBUG] Deleting Shield Protection Group: %s", d.Id()) _, err :=
conn.DeleteProtectionGroupWithContext(ctx, &shield.DeleteProtectionGroupInput{ diff --git a/internal/service/shield/protection_group_test.go b/internal/service/shield/protection_group_test.go index 4621729feab..6afc7b07af1 100644 --- a/internal/service/shield/protection_group_test.go +++ b/internal/service/shield/protection_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package shield_test import ( @@ -296,7 +299,7 @@ func TestAccShieldProtectionGroup_tags(t *testing.T) { func testAccCheckProtectionGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_shield_protection_group" { @@ -333,7 +336,7 @@ func testAccCheckProtectionGroupExists(ctx context.Context, name string) resourc return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) input := &shield.DescribeProtectionGroupInput{ ProtectionGroupId: aws.String(rs.Primary.ID), diff --git a/internal/service/shield/protection_health_check_association.go b/internal/service/shield/protection_health_check_association.go index cbc3dd283cc..04fe573e634 100644 --- a/internal/service/shield/protection_health_check_association.go +++ b/internal/service/shield/protection_health_check_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package shield import ( @@ -41,7 +44,7 @@ func ResourceProtectionHealthCheckAssociation() *schema.Resource { func ResourceProtectionHealthCheckAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) protectionId := d.Get("shield_protection_id").(string) healthCheckArn := d.Get("health_check_arn").(string) @@ -62,7 +65,7 @@ func ResourceProtectionHealthCheckAssociationCreate(ctx context.Context, d *sche func ResourceProtectionHealthCheckAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) protectionId, healthCheckArn, err := ProtectionHealthCheckAssociationParseResourceID(d.Id()) @@ -100,7 +103,7 @@ func ResourceProtectionHealthCheckAssociationRead(ctx context.Context, d *schema func ResourceProtectionHealthCheckAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).ShieldConn() + conn := meta.(*conns.AWSClient).ShieldConn(ctx) protectionId, healthCheckId, err := ProtectionHealthCheckAssociationParseResourceID(d.Id()) diff --git a/internal/service/shield/protection_health_check_association_test.go b/internal/service/shield/protection_health_check_association_test.go index 67404e88462..ae5ff790be4 100644 --- a/internal/service/shield/protection_health_check_association_test.go +++ b/internal/service/shield/protection_health_check_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package shield_test import ( @@ -75,7 +78,7 @@ func TestAccShieldProtectionHealthCheckAssociation_disappears(t *testing.T) { func testAccCheckProtectionHealthCheckAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_shield_protection_health_check_association" { @@ -128,7 +131,7 @@ func testAccCheckProtectionHealthCheckAssociationExists(ctx context.Context, res return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) input := &shield.DescribeProtectionInput{ ProtectionId: aws.String(protectionId), @@ -167,7 +170,7 @@ data "aws_caller_identity" "current" {} data "aws_partition" "current" {} resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { foo = "bar" diff --git a/internal/service/shield/protection_test.go b/internal/service/shield/protection_test.go index 390036949d6..03fe4c82280 100644 --- a/internal/service/shield/protection_test.go +++ b/internal/service/shield/protection_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package shield_test import ( @@ -291,7 +294,7 @@ func TestAccShieldProtection_route53(t *testing.T) { func testAccCheckProtectionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_shield_protection" { @@ -328,7 +331,7 @@ func testAccCheckProtectionExists(ctx context.Context, name string) resource.Tes return fmt.Errorf("Not found: %s", name) } - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) input := &shield.DescribeProtectionInput{ ProtectionId: aws.String(rs.Primary.ID), @@ -341,7 +344,7 @@ func testAccCheckProtectionExists(ctx context.Context, name string) resource.Tes } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).ShieldConn(ctx) input := &shield.ListProtectionsInput{} @@ -773,7 +776,7 @@ data "aws_caller_identity" "current" {} data "aws_partition" "current" {} resource "aws_eip" "test" { - vpc = true + domain = "vpc" tags = { foo = "bar" diff --git a/internal/service/shield/service_package.go b/internal/service/shield/service_package.go new file mode 100644 index 00000000000..7ae182f311c --- /dev/null +++ b/internal/service/shield/service_package.go @@ -0,0 +1,26 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
+package shield
+
+import (
+	"context"
+
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	endpoints_sdkv1 "github.com/aws/aws-sdk-go/aws/endpoints"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	shield_sdkv1 "github.com/aws/aws-sdk-go/service/shield"
+)
+
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*shield_sdkv1.Shield, error) {
+	sess := m["session"].(*session_sdkv1.Session)
+	config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(m["endpoint"].(string))}
+
+	// Force "global" services to correct Regions.
+	if m["partition"].(string) == endpoints_sdkv1.AwsPartitionID {
+		config.Region = aws_sdkv1.String(endpoints_sdkv1.UsEast1RegionID)
+	}
+
+	return shield_sdkv1.New(sess.Copy(config)), nil
+}
diff --git a/internal/service/shield/service_package_gen.go b/internal/service/shield/service_package_gen.go
index 4e0318f9796..91d134d2748 100644
--- a/internal/service/shield/service_package_gen.go
+++ b/internal/service/shield/service_package_gen.go
@@ -5,6 +5,7 @@ package shield
 import (
 	"context"
 
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -52,4 +53,6 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.Shield
 }
 
-var ServicePackage = &servicePackage{}
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/shield/tags_gen.go b/internal/service/shield/tags_gen.go
index a8c84539f3a..7941e5312f5 100644
--- a/internal/service/shield/tags_gen.go
+++ b/internal/service/shield/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists shield service tags.
+// listTags lists shield service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn shieldiface.ShieldAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn shieldiface.ShieldAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &shield.ListTagsForResourceInput{
 		ResourceARN: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn shieldiface.ShieldAPI, identifier string
 // ListTags lists shield service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).ShieldConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).ShieldConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*shield.Tag) tftags.KeyValueTags {
 	return tftags.New(ctx, m)
 }
 
-// GetTagsIn returns shield service tags from Context.
+// getTagsIn returns shield service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) []*shield.Tag {
+func getTagsIn(ctx context.Context) []*shield.Tag {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*shield.Tag {
 	return nil
 }
 
-// SetTagsOut sets shield service tags in Context.
-func SetTagsOut(ctx context.Context, tags []*shield.Tag) {
+// setTagsOut sets shield service tags in Context.
+func setTagsOut(ctx context.Context, tags []*shield.Tag) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates shield service tags.
+// updateTags updates shield service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn shieldiface.ShieldAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn shieldiface.ShieldAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn shieldiface.ShieldAPI, identifier stri
 // UpdateTags updates shield service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).ShieldConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).ShieldConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/signer/consts.go b/internal/service/signer/consts.go
index bcda216448a..4c03e15edf1 100644
--- a/internal/service/signer/consts.go
+++ b/internal/service/signer/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer
 
 import (
diff --git a/internal/service/signer/generate.go b/internal/service/signer/generate.go
index c6e72642f8c..aa8b49cfd55 100644
--- a/internal/service/signer/generate.go
+++ b/internal/service/signer/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsMap -UpdateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package signer
diff --git a/internal/service/signer/service_package_gen.go b/internal/service/signer/service_package_gen.go
index 4f6d7c07ad4..464cb866823 100644
--- a/internal/service/signer/service_package_gen.go
+++ b/internal/service/signer/service_package_gen.go
@@ -5,6 +5,10 @@ package signer
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	signer_sdkv1 "github.com/aws/aws-sdk-go/service/signer"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -57,4 +61,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.Signer
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*signer_sdkv1.Signer, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return signer_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/signer/signing_job.go b/internal/service/signer/signing_job.go
index ae823d7070a..3b87c8f36a5 100644
--- a/internal/service/signer/signing_job.go
+++ b/internal/service/signer/signing_job.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer
 
 import (
@@ -200,7 +203,7 @@ func ResourceSigningJob() *schema.Resource {
 
 func resourceSigningJobCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 	profileName := d.Get("profile_name")
 	source := d.Get("source").([]interface{})
 	destination := d.Get("destination").([]interface{})
@@ -237,7 +240,7 @@ func resourceSigningJobCreate(ctx context.Context, d *schema.ResourceData, meta
 
 func resourceSigningJobRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 	jobId := d.Id()
 
 	describeSigningJobOutput, err := conn.DescribeSigningJobWithContext(ctx, &signer.DescribeSigningJobInput{
diff --git a/internal/service/signer/signing_job_data_source.go b/internal/service/signer/signing_job_data_source.go
index 9a584102c0a..c0ea2c965cc 100644
--- a/internal/service/signer/signing_job_data_source.go
+++ b/internal/service/signer/signing_job_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer
 
 import (
@@ -148,7 +151,7 @@ func DataSourceSigningJob() *schema.Resource {
 
 func dataSourceSigningJobRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 	jobId := d.Get("job_id").(string)
 
 	describeSigningJobOutput, err := conn.DescribeSigningJobWithContext(ctx, &signer.DescribeSigningJobInput{
diff --git a/internal/service/signer/signing_job_data_source_test.go b/internal/service/signer/signing_job_data_source_test.go
index 302d7284382..9179067bf3c 100644
--- a/internal/service/signer/signing_job_data_source_test.go
+++ b/internal/service/signer/signing_job_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer_test
 
 import (
diff --git a/internal/service/signer/signing_job_test.go b/internal/service/signer/signing_job_test.go
index b4c8eabde10..11d0111857c 100644
--- a/internal/service/signer/signing_job_test.go
+++ b/internal/service/signer/signing_job_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer_test
 
 import (
@@ -111,7 +114,7 @@ func testAccCheckSigningJobExists(ctx context.Context, res string, job *signer.D
 			return fmt.Errorf("Signing job with that ID does not exist")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn(ctx)
 
 		params := &signer.DescribeSigningJobInput{
 			JobId: aws.String(rs.Primary.ID),
diff --git a/internal/service/signer/signing_profile.go b/internal/service/signer/signing_profile.go
index 54cc55cd6ef..247e629594d 100644
--- a/internal/service/signer/signing_profile.go
+++ b/internal/service/signer/signing_profile.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer
 
 import (
@@ -130,7 +133,7 @@ func ResourceSigningProfile() *schema.Resource {
 
 func resourceSigningProfileCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 
 	log.Printf("[DEBUG] Creating Signer signing profile")
 
@@ -140,7 +143,7 @@ func resourceSigningProfileCreate(ctx context.Context, d *schema.ResourceData, m
 	signingProfileInput := &signer.PutSigningProfileInput{
 		ProfileName: aws.String(profileName),
 		PlatformId:  aws.String(d.Get("platform_id").(string)),
-		Tags:        GetTagsIn(ctx),
+		Tags:        getTagsIn(ctx),
 	}
 
 	if v, exists := d.GetOk("signature_validity_period"); exists {
@@ -163,7 +166,7 @@ func resourceSigningProfileCreate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceSigningProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 
 	signingProfileOutput, err := conn.GetSigningProfileWithContext(ctx, &signer.GetSigningProfileInput{
 		ProfileName: aws.String(d.Id()),
@@ -216,7 +219,7 @@ func resourceSigningProfileRead(ctx context.Context, d *schema.ResourceData, met
 		return sdkdiag.AppendErrorf(diags, "setting signer signing profile status: %s", err)
 	}
 
-	SetTagsOut(ctx, signingProfileOutput.Tags)
+	setTagsOut(ctx, signingProfileOutput.Tags)
 
 	if err := d.Set("revocation_record", flattenSigningProfileRevocationRecord(signingProfileOutput.RevocationRecord)); err != nil {
 		return sdkdiag.AppendErrorf(diags, "setting signer signing profile revocation record: %s", err)
@@ -235,7 +238,7 @@ func resourceSigningProfileUpdate(ctx context.Context, d *schema.ResourceData, m
 
 func resourceSigningProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 
 	_, err := conn.CancelSigningProfileWithContext(ctx, &signer.CancelSigningProfileInput{
 		ProfileName: aws.String(d.Id()),
diff --git a/internal/service/signer/signing_profile_data_source.go b/internal/service/signer/signing_profile_data_source.go
index 5d1b46db971..b6e4df3823a 100644
--- a/internal/service/signer/signing_profile_data_source.go
+++ b/internal/service/signer/signing_profile_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer
 
 import (
@@ -89,7 +92,7 @@ func DataSourceSigningProfile() *schema.Resource {
 
 func dataSourceSigningProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 	ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig
 
 	profileName := d.Get("name").(string)
diff --git a/internal/service/signer/signing_profile_data_source_test.go b/internal/service/signer/signing_profile_data_source_test.go
index 9efabf2fdbd..f9d819c9abe 100644
--- a/internal/service/signer/signing_profile_data_source_test.go
+++ b/internal/service/signer/signing_profile_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer_test
 
 import (
diff --git a/internal/service/signer/signing_profile_permission.go b/internal/service/signer/signing_profile_permission.go
index d64b3c39863..b24bc411295 100644
--- a/internal/service/signer/signing_profile_permission.go
+++ b/internal/service/signer/signing_profile_permission.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer
 
 import (
@@ -81,7 +84,7 @@ func ResourceSigningProfilePermission() *schema.Resource {
 
 func resourceSigningProfilePermissionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 
 	profileName := d.Get("profile_name").(string)
 
@@ -148,7 +151,7 @@ func resourceSigningProfilePermissionCreate(ctx context.Context, d *schema.Resou
 
 func resourceSigningProfilePermissionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 
 	listProfilePermissionsInput := &signer.ListProfilePermissionsInput{
 		ProfileName: aws.String(d.Get("profile_name").(string)),
@@ -223,7 +226,7 @@ func getProfilePermission(permissions []*signer.Permission, statementId string)
 
 func resourceSigningProfilePermissionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SignerConn()
+	conn := meta.(*conns.AWSClient).SignerConn(ctx)
 
 	profileName := d.Get("profile_name").(string)
diff --git a/internal/service/signer/signing_profile_permission_test.go b/internal/service/signer/signing_profile_permission_test.go
index 7c22b6dfec0..8775633c306 100644
--- a/internal/service/signer/signing_profile_permission_test.go
+++ b/internal/service/signer/signing_profile_permission_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer_test
 
 import (
@@ -255,7 +258,7 @@ func testAccCheckSigningProfilePermissionExists(ctx context.Context, res, profil
 			return fmt.Errorf("Signing Profile with that ARN does not exist")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn(ctx)
 
 		params := &signer.ListProfilePermissionsInput{
 			ProfileName: aws.String(profileName),
diff --git a/internal/service/signer/signing_profile_test.go b/internal/service/signer/signing_profile_test.go
index 687a69d859d..772ef85e024 100644
--- a/internal/service/signer/signing_profile_test.go
+++ b/internal/service/signer/signing_profile_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package signer_test
 
 import (
@@ -177,7 +180,7 @@ func TestAccSignerSigningProfile_signatureValidityPeriod(t *testing.T) {
 }
 
 func testAccPreCheckSingerSigningProfile(ctx context.Context, t *testing.T, platformID string) {
-	conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn()
+	conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn(ctx)
 
 	input := &signer.ListSigningPlatformsInput{}
 
@@ -297,7 +300,7 @@ func testAccCheckSigningProfileExists(ctx context.Context, res string, sp *signe
 			return fmt.Errorf("Signing Profile with that ARN does not exist")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn(ctx)
 
 		params := &signer.GetSigningProfileInput{
 			ProfileName: aws.String(rs.Primary.ID),
@@ -316,7 +319,7 @@ func testAccCheckSigningProfileExists(ctx context.Context, res string, sp *signe
 
 func testAccCheckSigningProfileDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SignerConn(ctx)
 
 		time.Sleep(5 * time.Second)
diff --git a/internal/service/signer/tags_gen.go b/internal/service/signer/tags_gen.go
index 68974edddb5..165293e0ace 100644
--- a/internal/service/signer/tags_gen.go
+++ b/internal/service/signer/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists signer service tags.
+// listTags lists signer service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn signeriface.SignerAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn signeriface.SignerAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &signer.ListTagsForResourceInput{
 		ResourceArn: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn signeriface.SignerAPI, identifier string
 // ListTags lists signer service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).SignerConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).SignerConn(ctx), identifier)
 
 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }
 
-// KeyValueTags creates KeyValueTags from signer service tags.
+// KeyValueTags creates tftags.KeyValueTags from signer service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }
 
-// GetTagsIn returns signer service tags from Context.
+// getTagsIn returns signer service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }
 
-// SetTagsOut sets signer service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets signer service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
 }
 
-// UpdateTags updates signer service tags.
+// updateTags updates signer service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn signeriface.SignerAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn signeriface.SignerAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn signeriface.SignerAPI, identifier stri
 // UpdateTags updates signer service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).SignerConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).SignerConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/simpledb/domain.go b/internal/service/simpledb/domain.go
index c99ae6f523a..66c6ba8e1a7 100644
--- a/internal/service/simpledb/domain.go
+++ b/internal/service/simpledb/domain.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package simpledb
 
 import (
@@ -16,8 +19,8 @@ import (
 	"github.com/hashicorp/terraform-plugin-log/tflog"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag"
-	"github.com/hashicorp/terraform-provider-aws/internal/flex"
 	"github.com/hashicorp/terraform-provider-aws/internal/framework"
+	"github.com/hashicorp/terraform-provider-aws/internal/framework/flex"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
 )
 
@@ -65,7 +68,7 @@ func (r *resourceDomain) Create(ctx context.Context, request resource.CreateRequ
 		return
 	}
 
-	conn := r.Meta().SimpleDBConn()
+	conn := r.Meta().SimpleDBConn(ctx)
 
 	name := data.Name.ValueString()
 	input := &simpledb.CreateDomainInput{
@@ -96,7 +99,7 @@ func (r *resourceDomain) Read(ctx context.Context, request resource.ReadRequest,
 		return
 	}
 
-	conn := r.Meta().SimpleDBConn()
+	conn := r.Meta().SimpleDBConn(ctx)
 
 	_, err := FindDomainByName(ctx, conn, data.ID.ValueString())
 
@@ -136,7 +139,7 @@ func (r *resourceDomain) Delete(ctx context.Context, request resource.DeleteRequ
 		return
 	}
 
-	conn := r.Meta().SimpleDBConn()
+	conn := r.Meta().SimpleDBConn(ctx)
 
 	tflog.Debug(ctx, "deleting SimpleDB Domain", map[string]interface{}{
 		"id": data.ID.ValueString(),
diff --git a/internal/service/simpledb/domain_test.go b/internal/service/simpledb/domain_test.go
index 50e3974c83f..0e97f28c037 100644
--- a/internal/service/simpledb/domain_test.go
+++ b/internal/service/simpledb/domain_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package simpledb_test
 
 import (
@@ -99,7 +102,7 @@ func TestAccSimpleDBDomain_MigrateFromPluginSDK(t *testing.T) {
 
 func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SimpleDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SimpleDBConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_simpledb_domain" {
@@ -134,7 +137,7 @@ func testAccCheckDomainExists(ctx context.Context, n string) resource.TestCheckF
 			return fmt.Errorf("No SimpleDB Domain ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SimpleDBConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SimpleDBConn(ctx)
 
 		_, err := tfsimpledb.FindDomainByName(ctx, conn, rs.Primary.ID)
diff --git a/internal/service/simpledb/exports_test.go b/internal/service/simpledb/exports_test.go
index 8c66737d1c7..4e604242d3b 100644
--- a/internal/service/simpledb/exports_test.go
+++ b/internal/service/simpledb/exports_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package simpledb
 
 // Exports for use in tests only.
diff --git a/internal/service/simpledb/generate.go b/internal/service/simpledb/generate.go
new file mode 100644
index 00000000000..775f0211f8a
--- /dev/null
+++ b/internal/service/simpledb/generate.go
@@ -0,0 +1,7 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+//go:generate go run ../../generate/servicepackage/main.go
+// ONLY generate directives and package declaration! Do not add anything else to this file.
+
+package simpledb
diff --git a/internal/service/simpledb/service_package_gen.go b/internal/service/simpledb/service_package_gen.go
index 78faa16465d..3bb1625ebd7 100644
--- a/internal/service/simpledb/service_package_gen.go
+++ b/internal/service/simpledb/service_package_gen.go
@@ -5,6 +5,10 @@ package simpledb
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	simpledb_sdkv1 "github.com/aws/aws-sdk-go/service/simpledb"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -35,4 +39,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.SimpleDB
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*simpledb_sdkv1.SimpleDB, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return simpledb_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/simpledb/sweep.go b/internal/service/simpledb/sweep.go
index 6f122d36b9a..3c31164a659 100644
--- a/internal/service/simpledb/sweep.go
+++ b/internal/service/simpledb/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -10,8 +13,8 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/simpledb"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
+	"github.com/hashicorp/terraform-provider-aws/internal/sweep/framework"
 )
 
 func init() {
@@ -23,11 +26,11 @@ func init() {
 
 func sweepDomains(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-	conn := client.(*conns.AWSClient).SimpleDBConn()
+	conn := client.SimpleDBConn(ctx)
 	input := &simpledb.ListDomainsInput{}
 	sweepResources := make([]sweep.Sweepable, 0)
 
@@ -37,7 +40,9 @@ func sweepDomains(region string) error {
 		}
 
 		for _, v := range page.DomainNames {
-			sweepResources = append(sweepResources, sweep.NewSweepFrameworkResource(newResourceDomain, aws.StringValue(v), client))
+			sweepResources = append(sweepResources, framework.NewSweepResource(newResourceDomain, client,
+				framework.NewAttribute("id", aws.StringValue(v)),
+			))
 		}
 
 		return !lastPage
@@ -52,7 +57,7 @@ func sweepDomains(region string) error {
 		return fmt.Errorf("error listing SimpleDB Domains (%s): %w", region, err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping SimpleDB Domains (%s): %w", region, err)
diff --git a/internal/service/sns/consts.go b/internal/service/sns/consts.go
index 67e60be3022..032c40da382 100644
--- a/internal/service/sns/consts.go
+++ b/internal/service/sns/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sns
 
 import (
diff --git a/internal/service/sns/find.go b/internal/service/sns/find.go
index a997906f11f..e7b6406a0ef 100644
--- a/internal/service/sns/find.go
+++ b/internal/service/sns/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sns
 
 import (
diff --git a/internal/service/sns/generate.go b/internal/service/sns/generate.go
index 25149ddd874..43fe4e8a6cb 100644
--- a/internal/service/sns/generate.go
+++ b/internal/service/sns/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -UpdateTags -CreateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 
 package sns
diff --git a/internal/service/sns/platform_application.go b/internal/service/sns/platform_application.go
index 18683e99f88..634be0a234c 100644
--- a/internal/service/sns/platform_application.go
+++ b/internal/service/sns/platform_application.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sns
 
 import (
@@ -114,7 +117,7 @@ func ResourcePlatformApplication() *schema.Resource {
 }
 
 func resourcePlatformApplicationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	attributes, err := platformApplicationAttributeMap.ResourceDataToAPIAttributesCreate(d)
 
@@ -143,7 +146,7 @@ func resourcePlatformApplicationCreate(ctx context.Context, d *schema.ResourceDa
 }
 
 func resourcePlatformApplicationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	// There is no SNS Describe/GetPlatformApplication to fetch attributes like name and platform
 	// We will use the ID, which should be a platform application ARN, to:
@@ -181,7 +184,7 @@ func resourcePlatformApplicationRead(ctx context.Context, d *schema.ResourceData
 }
 
 func resourcePlatformApplicationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	attributes, err := platformApplicationAttributeMap.ResourceDataToAPIAttributesUpdate(d)
 
@@ -234,7 +237,7 @@ func resourcePlatformApplicationUpdate(ctx context.Context, d *schema.ResourceDa
 }
 
 func resourcePlatformApplicationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	log.Printf("[DEBUG] Deleting SNS Platform Application: %s", d.Id())
 	_, err := conn.DeletePlatformApplicationWithContext(ctx, &sns.DeletePlatformApplicationInput{
diff --git a/internal/service/sns/platform_application_test.go b/internal/service/sns/platform_application_test.go
index 0d02d583df2..a21b77b820a 100644
--- a/internal/service/sns/platform_application_test.go
+++ b/internal/service/sns/platform_application_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sns_test
 
 import (
@@ -474,7 +477,7 @@ func testAccCheckPlatformApplicationExists(ctx context.Context, n string) resour
 			return fmt.Errorf("No SNS Platform Application ID is set")
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx)
 
 		_, err := tfsns.FindPlatformApplicationAttributesByARN(ctx, conn, rs.Primary.ID)
 
@@ -484,7 +487,7 @@ func testAccCheckPlatformApplicationExists(ctx context.Context, n string) resour
 
 func testAccCheckPlatformApplicationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx)
 
 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_sns_platform_application" {
diff --git a/internal/service/sns/service_package_gen.go b/internal/service/sns/service_package_gen.go
index c7dddd11024..4c2126a82b0 100644
--- a/internal/service/sns/service_package_gen.go
+++ b/internal/service/sns/service_package_gen.go
@@ -5,6 +5,10 @@ package sns
 import (
 	"context"
 
+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	sns_sdkv1 "github.com/aws/aws-sdk-go/service/sns"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -65,4 +69,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.SNS
 }
 
-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*sns_sdkv1.SNS, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return sns_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/sns/sms_preferences.go b/internal/service/sns/sms_preferences.go
index 5c416f083bd..5253905a9b1 100644
--- a/internal/service/sns/sms_preferences.go
+++ b/internal/service/sns/sms_preferences.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sns
 
 import (
@@ -17,7 +20,7 @@ import (
 func validateMonthlySpend(v interface{}, k string) (ws []string, errors []error) {
 	vInt := v.(int)
 	if vInt < 0 {
-		errors = append(errors, fmt.Errorf("error setting SMS preferences: monthly spend limit value [%d] must be >= 0", vInt))
+		errors = append(errors, fmt.Errorf("setting SMS preferences: monthly spend limit value [%d] must be >= 0", vInt))
 	}
 	return
 }
@@ -25,7 +28,7 @@ func validateMonthlySpend(v interface{}, k string) (ws []string, errors []error)
 func validateDeliverySamplingRate(v interface{}, k string) (ws []string, errors []error) {
 	vInt, _ := strconv.Atoi(v.(string))
 	if vInt < 0 || vInt > 100 {
-		errors = append(errors, fmt.Errorf("error setting SMS preferences: default percentage of success to sample value [%d] must be between 0 and 100", vInt))
+		errors = append(errors, fmt.Errorf("setting SMS preferences: default percentage of success to sample value [%d] must be between 0 and 100", vInt))
 	}
 	return
 }
@@ -138,7 +141,7 @@ func ResourceSMSPreferences() *schema.Resource {
 }
 
 func resourceSMSPreferencesSet(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	attributes, err := SMSPreferencesAttributeMap.ResourceDataToAPIAttributesCreate(d)
 
@@ -162,7 +165,7 @@ func resourceSMSPreferencesSet(ctx context.Context, d *schema.ResourceData, meta
 }
 
 func resourceSMSPreferencesGet(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	output, err := conn.GetSMSAttributesWithContext(ctx, &sns.GetSMSAttributesInput{})
 
@@ -174,7 +177,7 @@ func resourceSMSPreferencesGet(ctx context.Context, d *schema.ResourceData, meta
 }
 
 func resourceSMSPreferencesDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SNSConn()
+	conn := meta.(*conns.AWSClient).SNSConn(ctx)
 
 	// Reset the attributes to their default value.
 	attributes := make(map[string]string)
diff --git a/internal/service/sns/sms_preferences_test.go b/internal/service/sns/sms_preferences_test.go
index 90fa95ad552..b4371c29797 100644
--- a/internal/service/sns/sms_preferences_test.go
+++ b/internal/service/sns/sms_preferences_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sns_test
 
 import (
@@ -106,7 +109,7 @@ func testAccCheckSMSPreferencesDestroy(ctx context.Context) resource.TestCheckFu
 			continue
 		}
 
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx)
 
 		attrs, err := conn.GetSMSAttributesWithContext(ctx, &sns.GetSMSAttributesInput{})
diff --git a/internal/service/sns/sweep.go b/internal/service/sns/sweep.go
index 79575928f7f..3c483c44b37 100644
--- a/internal/service/sns/sweep.go
+++ b/internal/service/sns/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep
 
@@ -10,7 +13,6 @@ import (
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/sns"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
 )
 
@@ -53,12 +55,12 @@ func init() {
 
 func sweepPlatformApplications(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 	input := &sns.ListPlatformApplicationsInput{}
-	conn := client.(*conns.AWSClient).SNSConn()
+	conn := client.SNSConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.ListPlatformApplicationsPagesWithContext(ctx, input, func(page *sns.ListPlatformApplicationsOutput, lastPage bool) bool {
@@ -86,7 +88,7 @@ func sweepPlatformApplications(region string) error {
 		return fmt.Errorf("error listing SNS Platform Applications: %w", err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping SNS Platform Applications (%s): %w", region, err)
@@ -97,12 +99,12 @@ func sweepPlatformApplications(region string) error {
 
 func sweepTopics(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 	input := &sns.ListTopicsInput{}
-	conn := client.(*conns.AWSClient).SNSConn()
+	conn := client.SNSConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.ListTopicsPagesWithContext(ctx, input, func(page *sns.ListTopicsOutput, lastPage bool) bool {
@@ -130,7 +132,7 @@ func sweepTopics(region string) error {
 		return fmt.Errorf("error listing SNS Topics: %w", err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping SNS Topics (%s): %w", region, err)
@@ -141,12 +143,12 @@ func sweepTopics(region string) error {
 
 func sweepTopicSubscriptions(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
 	input := &sns.ListSubscriptionsInput{}
-	conn := client.(*conns.AWSClient).SNSConn()
+	conn := client.SNSConn(ctx)
 	sweepResources := make([]sweep.Sweepable, 0)
 
 	err = conn.ListSubscriptionsPagesWithContext(ctx, input, func(page *sns.ListSubscriptionsOutput, lastPage bool) bool {
@@ -179,7 +181,7 @@ func sweepTopicSubscriptions(region string) error {
 		return fmt.Errorf("error listing SNS Topic Subscriptions: %w", err)
 	}
 
-	err = sweep.SweepOrchestratorWithContext(ctx, sweepResources)
+	err = sweep.SweepOrchestrator(ctx, sweepResources)
 
 	if err != nil {
 		return fmt.Errorf("error sweeping SNS Topic Subscriptions (%s): %w", region, err)
diff --git a/internal/service/sns/tags_gen.go b/internal/service/sns/tags_gen.go
index 36ad1647820..a29f5b6a21d 100644
--- a/internal/service/sns/tags_gen.go
+++ b/internal/service/sns/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
 
-// ListTags lists sns service tags.
+// listTags lists sns service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn snsiface.SNSAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn snsiface.SNSAPI, identifier string) (tftags.KeyValueTags, error) { input := &sns.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn snsiface.SNSAPI, identifier string) (tft // ListTags lists sns service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SNSConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SNSConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*sns.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns sns service tags from Context. +// getTagsIn returns sns service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*sns.Tag { +func getTagsIn(ctx context.Context) []*sns.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,8 +88,8 @@ func GetTagsIn(ctx context.Context) []*sns.Tag { return nil } -// SetTagsOut sets sns service tags in Context. -func SetTagsOut(ctx context.Context, tags []*sns.Tag) { +// setTagsOut sets sns service tags in Context. +func setTagsOut(ctx context.Context, tags []*sns.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -101,13 +101,13 @@ func createTags(ctx context.Context, conn snsiface.SNSAPI, identifier string, ta return nil } - return UpdateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates sns service tags. 
+// updateTags updates sns service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn snsiface.SNSAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn snsiface.SNSAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -147,5 +147,5 @@ func UpdateTags(ctx context.Context, conn snsiface.SNSAPI, identifier string, ol // UpdateTags updates sns service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SNSConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SNSConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/sns/topic.go b/internal/service/sns/topic.go index f22a1ebe167..1add16a4360 100644 --- a/internal/service/sns/topic.go +++ b/internal/service/sns/topic.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sns import ( @@ -236,7 +239,7 @@ func ResourceTopic() *schema.Resource { func resourceTopicCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) var name string fifoTopic := d.Get("fifo_topic").(bool) @@ -248,7 +251,7 @@ func resourceTopicCreate(ctx context.Context, d *schema.ResourceData, meta inter input := &sns.CreateTopicInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } attributes, err := topicAttributeMap.ResourceDataToAPIAttributesCreate(d) @@ -293,7 +296,7 @@ func resourceTopicCreate(ctx context.Context, d *schema.ResourceData, meta inter } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -311,7 +314,7 @@ func resourceTopicCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceTopicRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) attributes, err := FindTopicAttributesByARN(ctx, conn, d.Id()) @@ -348,7 +351,7 @@ func resourceTopicRead(ctx context.Context, d *schema.ResourceData, meta any) di } func resourceTopicUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { attributes, err := topicAttributeMap.ResourceDataToAPIAttributesUpdate(d) @@ -368,7 +371,7 @@ func resourceTopicUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceTopicDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) log.Printf("[DEBUG] Deleting SNS Topic: %s", d.Id()) _, err := conn.DeleteTopicWithContext(ctx, &sns.DeleteTopicInput{ diff --git a/internal/service/sns/topic_data_protection_policy.go b/internal/service/sns/topic_data_protection_policy.go index a92e57b5c32..d2d918b859b 100644 --- a/internal/service/sns/topic_data_protection_policy.go +++ b/internal/service/sns/topic_data_protection_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sns import ( @@ -52,7 +55,7 @@ func ResourceTopicDataProtectionPolicy() *schema.Resource { func resourceTopicDataProtectionPolicyUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) topicArn := d.Get("arn").(string) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -81,7 +84,7 @@ func resourceTopicDataProtectionPolicyUpsert(ctx context.Context, d *schema.Reso func resourceTopicDataProtectionPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) output, err := conn.GetDataProtectionPolicyWithContext(ctx, &sns.GetDataProtectionPolicyInput{ ResourceArn: aws.String(d.Id()), @@ -111,7 +114,7 @@ func resourceTopicDataProtectionPolicyRead(ctx context.Context, d *schema.Resour func resourceTopicDataProtectionPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) _, err := conn.PutDataProtectionPolicyWithContext(ctx, &sns.PutDataProtectionPolicyInput{ DataProtectionPolicy: aws.String(""), diff --git a/internal/service/sns/topic_data_protection_policy_test.go b/internal/service/sns/topic_data_protection_policy_test.go index f784623a7ed..5f5c8fdf83b 100644 --- a/internal/service/sns/topic_data_protection_policy_test.go +++ b/internal/service/sns/topic_data_protection_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sns_test import ( @@ -71,7 +74,7 @@ func TestAccSNSTopicDataProtectionPolicy_disappears(t *testing.T) { func testAccCheckTopicDataProtectionPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sns_topic_data_protection_policy" { diff --git a/internal/service/sns/topic_data_source.go b/internal/service/sns/topic_data_source.go index bf8e1e591ee..3d2f35771fc 100644 --- a/internal/service/sns/topic_data_source.go +++ b/internal/service/sns/topic_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sns import ( @@ -31,7 +34,7 @@ func DataSourceTopic() *schema.Resource { } func dataSourceTopicRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) resourceArn := "" name := d.Get("name").(string) diff --git a/internal/service/sns/topic_data_source_test.go b/internal/service/sns/topic_data_source_test.go index f2476b636c5..487b3d49cc7 100644 --- a/internal/service/sns/topic_data_source_test.go +++ b/internal/service/sns/topic_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sns_test import ( diff --git a/internal/service/sns/topic_policy.go b/internal/service/sns/topic_policy.go index 7f72ec462a1..75a8e19a7dc 100644 --- a/internal/service/sns/topic_policy.go +++ b/internal/service/sns/topic_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sns import ( @@ -54,7 +57,7 @@ func ResourceTopicPolicy() *schema.Resource { } func resourceTopicPolicyUpsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) if err != nil { @@ -77,7 +80,7 @@ func resourceTopicPolicyUpsert(ctx context.Context, d *schema.ResourceData, meta } func resourceTopicPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) attributes, err := FindTopicAttributesByARN(ctx, conn, d.Id()) @@ -115,7 +118,7 @@ func resourceTopicPolicyRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceTopicPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) // It is impossible to delete a policy or set to empty // (confirmed by AWS Support representative) diff --git a/internal/service/sns/topic_policy_test.go b/internal/service/sns/topic_policy_test.go index 2ff9f153297..ec251e09158 100644 --- a/internal/service/sns/topic_policy_test.go +++ b/internal/service/sns/topic_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sns_test import ( @@ -161,7 +164,7 @@ func TestAccSNSTopicPolicy_ignoreEquivalent(t *testing.T) { func testAccCheckTopicPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sns_topic_policy" { diff --git a/internal/service/sns/topic_subscription.go b/internal/service/sns/topic_subscription.go index a9d24446d70..97e1d8a056f 100644 --- a/internal/service/sns/topic_subscription.go +++ b/internal/service/sns/topic_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sns import ( @@ -146,7 +149,7 @@ func ResourceTopicSubscription() *schema.Resource { } func resourceTopicSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) attributes, err := subscriptionAttributeMap.ResourceDataToAPIAttributesCreate(d) @@ -201,7 +204,7 @@ func resourceTopicSubscriptionCreate(ctx context.Context, d *schema.ResourceData } func resourceTopicSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) outputRaw, err := tfresource.RetryWhenNewResourceNotFound(ctx, subscriptionCreateTimeout, func() (interface{}, error) { return FindSubscriptionAttributesByARN(ctx, conn, d.Id()) @@ -223,7 +226,7 @@ func resourceTopicSubscriptionRead(ctx context.Context, d *schema.ResourceData, } func resourceTopicSubscriptionUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) 
attributes, err := subscriptionAttributeMap.ResourceDataToAPIAttributesUpdate(d) @@ -241,7 +244,7 @@ func resourceTopicSubscriptionUpdate(ctx context.Context, d *schema.ResourceData } func resourceTopicSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SNSConn() + conn := meta.(*conns.AWSClient).SNSConn(ctx) log.Printf("[DEBUG] Deleting SNS Topic Subscription: %s", d.Id()) _, err := conn.UnsubscribeWithContext(ctx, &sns.UnsubscribeInput{ diff --git a/internal/service/sns/topic_subscription_test.go b/internal/service/sns/topic_subscription_test.go index 12751376b9b..498eea0ba55 100644 --- a/internal/service/sns/topic_subscription_test.go +++ b/internal/service/sns/topic_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sns_test import ( @@ -691,7 +694,7 @@ func TestAccSNSTopicSubscription_Disappears_topic(t *testing.T) { func testAccCheckTopicSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sns_topic_subscription" { @@ -730,7 +733,7 @@ func testAccCheckTopicSubscriptionExists(ctx context.Context, n string, v *map[s return fmt.Errorf("No SNS Topic Subscription ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) output, err := tfsns.FindSubscriptionAttributesByARN(ctx, conn, rs.Primary.ID) @@ -1315,11 +1318,6 @@ resource "aws_s3_bucket" "bucket" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.bucket.id - acl = "private" -} - data "aws_partition" "current" {} resource "aws_iam_role" "firehose_role" { @@ -1344,9 +1342,9 @@ EOF resource 
"aws_kinesis_firehose_delivery_stream" "test_stream" { name = %[1]q - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose_role.arn bucket_arn = aws_s3_bucket.bucket.arn } diff --git a/internal/service/sns/topic_test.go b/internal/service/sns/topic_test.go index db1381d1856..2e35e8931a6 100644 --- a/internal/service/sns/topic_test.go +++ b/internal/service/sns/topic_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sns_test import ( @@ -549,7 +552,7 @@ func testAccCheckTopicHasPolicy(ctx context.Context, n string, expectedPolicyTex return fmt.Errorf("No SNS Topic ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) attributes, err := tfsns.FindTopicAttributesByARN(ctx, conn, rs.Primary.ID) @@ -589,7 +592,7 @@ func testAccCheckTopicHasDeliveryPolicy(ctx context.Context, n string, expectedP return fmt.Errorf("No SNS Topic ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) attributes, err := tfsns.FindTopicAttributesByARN(ctx, conn, rs.Primary.ID) @@ -616,7 +619,7 @@ func testAccCheckTopicHasDeliveryPolicy(ctx context.Context, n string, expectedP func testAccCheckTopicDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_sns_topic" { @@ -651,7 +654,7 @@ func testAccCheckTopicExists(ctx context.Context, n string, v *map[string]string return fmt.Errorf("No SNS Topic ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SNSConn(ctx) output, err := 
tfsns.GetTopicAttributesByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/sqs/attribute_funcs.go b/internal/service/sqs/attribute_funcs.go index 99de743867c..ecc66cc0e03 100644 --- a/internal/service/sqs/attribute_funcs.go +++ b/internal/service/sqs/attribute_funcs.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs import ( @@ -22,7 +25,7 @@ type queueAttributeHandler struct { } func (h *queueAttributeHandler) Upsert(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) attrValue, err := structure.NormalizeJsonString(d.Get(h.SchemaKey).(string)) if err != nil { @@ -55,7 +58,7 @@ func (h *queueAttributeHandler) Upsert(ctx context.Context, d *schema.ResourceDa } func (h *queueAttributeHandler) Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) outputRaw, err := tfresource.RetryWhenNotFound(ctx, queueAttributeReadTimeout, func() (interface{}, error) { return FindQueueAttributeByURL(ctx, conn, d.Id(), h.AttributeName) @@ -90,7 +93,7 @@ func (h *queueAttributeHandler) Read(ctx context.Context, d *schema.ResourceData } func (h *queueAttributeHandler) Delete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) log.Printf("[DEBUG] Deleting SQS Queue (%s) attribute: %s", d.Id(), h.AttributeName) attributes := map[string]string{ diff --git a/internal/service/sqs/consts.go b/internal/service/sqs/consts.go index 4b8c4c86b87..1c20ff993aa 100644 --- a/internal/service/sqs/consts.go +++ b/internal/service/sqs/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sqs const ( diff --git a/internal/service/sqs/exports_test.go b/internal/service/sqs/exports_test.go index 47fb0711104..3493b5953a0 100644 --- a/internal/service/sqs/exports_test.go +++ b/internal/service/sqs/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs // Exports for use in tests only. diff --git a/internal/service/sqs/find.go b/internal/service/sqs/find.go index 63d2ec030a6..76f4aba9cb1 100644 --- a/internal/service/sqs/find.go +++ b/internal/service/sqs/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs import ( diff --git a/internal/service/sqs/generate.go b/internal/service/sqs/generate.go index 27a2b1d7ef4..ed0ae9547ca 100644 --- a/internal/service/sqs/generate.go +++ b/internal/service/sqs/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListQueueTags -ListTagsInIDElem=QueueUrl -ServiceTagsMap -TagOp=TagQueue -TagInIDElem=QueueUrl -UntagOp=UntagQueue -UpdateTags -CreateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package sqs diff --git a/internal/service/sqs/json.go b/internal/service/sqs/json.go index 73f07ca2c67..503efc89bf2 100644 --- a/internal/service/sqs/json.go +++ b/internal/service/sqs/json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs import ( diff --git a/internal/service/sqs/json_test.go b/internal/service/sqs/json_test.go index a98864b888d..772db163b79 100644 --- a/internal/service/sqs/json_test.go +++ b/internal/service/sqs/json_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sqs import ( diff --git a/internal/service/sqs/name.go b/internal/service/sqs/name.go index 9653911ecba..0eab8355e86 100644 --- a/internal/service/sqs/name.go +++ b/internal/service/sqs/name.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs import ( diff --git a/internal/service/sqs/name_test.go b/internal/service/sqs/name_test.go index 8c8b28fb309..13cd91c902d 100644 --- a/internal/service/sqs/name_test.go +++ b/internal/service/sqs/name_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs_test import ( diff --git a/internal/service/sqs/queue.go b/internal/service/sqs/queue.go index 5d26d20b2e1..7baa6ce93f0 100644 --- a/internal/service/sqs/queue.go +++ b/internal/service/sqs/queue.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs import ( @@ -196,7 +199,7 @@ func ResourceQueue() *schema.Resource { } func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) var name string fifoQueue := d.Get("fifo_queue").(bool) @@ -208,7 +211,7 @@ func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta inter input := &sqs.CreateQueueInput{ QueueName: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } attributes, err := queueAttributeMap.ResourceDataToAPIAttributesCreate(d) @@ -243,7 +246,7 @@ func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta inter } // For partitions not supporting tag-on-create, attempt tag after create. - if tags := GetTagsIn(ctx); input.Tags == nil && len(tags) > 0 { + if tags := getTagsIn(ctx); input.Tags == nil && len(tags) > 0 { err := createTags(ctx, conn, d.Id(), tags) // If default tags only, continue. Otherwise, error. 
@@ -260,7 +263,7 @@ func resourceQueueCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) outputRaw, err := tfresource.RetryWhenNotFound(ctx, queueReadTimeout, func() (interface{}, error) { return FindQueueAttributesByURL(ctx, conn, d.Id()) @@ -307,7 +310,7 @@ func resourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) if d.HasChangesExcept("tags", "tags_all") { attributes, err := queueAttributeMap.ResourceDataToAPIAttributesUpdate(d) @@ -339,7 +342,7 @@ func resourceQueueUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceQueueDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) log.Printf("[DEBUG] Deleting SQS Queue: %s", d.Id()) _, err := conn.DeleteQueueWithContext(ctx, &sqs.DeleteQueueInput{ diff --git a/internal/service/sqs/queue_data_source.go b/internal/service/sqs/queue_data_source.go index 3515dd3bee6..93484685448 100644 --- a/internal/service/sqs/queue_data_source.go +++ b/internal/service/sqs/queue_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sqs import ( @@ -37,7 +40,7 @@ func DataSourceQueue() *schema.Resource { } func dataSourceQueueRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SQSConn() + conn := meta.(*conns.AWSClient).SQSConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) @@ -64,7 +67,7 @@ func dataSourceQueueRead(ctx context.Context, d *schema.ResourceData, meta inter d.Set("url", queueURL) d.SetId(queueURL) - tags, err := ListTags(ctx, conn, queueURL) + tags, err := listTags(ctx, conn, queueURL) if verify.ErrorISOUnsupported(conn.PartitionID, err) { // Some partitions may not support tagging, giving error diff --git a/internal/service/sqs/queue_data_source_test.go b/internal/service/sqs/queue_data_source_test.go index d87e800ec88..c2470518191 100644 --- a/internal/service/sqs/queue_data_source_test.go +++ b/internal/service/sqs/queue_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs_test import ( diff --git a/internal/service/sqs/queue_policy.go b/internal/service/sqs/queue_policy.go index f86384c5a09..c3c2ad9d16d 100644 --- a/internal/service/sqs/queue_policy.go +++ b/internal/service/sqs/queue_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sqs import ( diff --git a/internal/service/sqs/queue_policy_migrate.go b/internal/service/sqs/queue_policy_migrate.go index 207ffe12a4e..0340acaa81d 100644 --- a/internal/service/sqs/queue_policy_migrate.go +++ b/internal/service/sqs/queue_policy_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs

 import (
diff --git a/internal/service/sqs/queue_policy_migrate_test.go b/internal/service/sqs/queue_policy_migrate_test.go
index e432bc2e2a9..f68cf338ee4 100644
--- a/internal/service/sqs/queue_policy_migrate_test.go
+++ b/internal/service/sqs/queue_policy_migrate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs_test

 import (
diff --git a/internal/service/sqs/queue_policy_test.go b/internal/service/sqs/queue_policy_test.go
index c6e3458b1bf..0e61e6e48f8 100644
--- a/internal/service/sqs/queue_policy_test.go
+++ b/internal/service/sqs/queue_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs_test

 import (
diff --git a/internal/service/sqs/queue_redrive_allow_policy.go b/internal/service/sqs/queue_redrive_allow_policy.go
index 44c30116888..f9b50a833ec 100644
--- a/internal/service/sqs/queue_redrive_allow_policy.go
+++ b/internal/service/sqs/queue_redrive_allow_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs

 import (
diff --git a/internal/service/sqs/queue_redrive_allow_policy_test.go b/internal/service/sqs/queue_redrive_allow_policy_test.go
index ffcc0014c90..6bbbb2162aa 100644
--- a/internal/service/sqs/queue_redrive_allow_policy_test.go
+++ b/internal/service/sqs/queue_redrive_allow_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs_test

 import (
diff --git a/internal/service/sqs/queue_redrive_policy.go b/internal/service/sqs/queue_redrive_policy.go
index 9f93ea1090f..7f55a8e6e8e 100644
--- a/internal/service/sqs/queue_redrive_policy.go
+++ b/internal/service/sqs/queue_redrive_policy.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs

 import (
diff --git a/internal/service/sqs/queue_redrive_policy_test.go b/internal/service/sqs/queue_redrive_policy_test.go
index b020037959e..ce6525d3884 100644
--- a/internal/service/sqs/queue_redrive_policy_test.go
+++ b/internal/service/sqs/queue_redrive_policy_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs_test

 import (
diff --git a/internal/service/sqs/queue_test.go b/internal/service/sqs/queue_test.go
index 7641b9d99ff..4152c144ca0 100644
--- a/internal/service/sqs/queue_test.go
+++ b/internal/service/sqs/queue_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs_test

 import (
@@ -837,7 +840,7 @@ func testAccCheckQueueExists(ctx context.Context, resourceName string, v *map[st
 			return fmt.Errorf("No SQS Queue URL is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SQSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SQSConn(ctx)

 		output, err := tfsqs.FindQueueAttributesByURL(ctx, conn, rs.Primary.ID)
@@ -853,7 +856,7 @@ func testAccCheckQueueExists(ctx context.Context, resourceName string, v *map[st
 func testAccCheckQueueDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SQSConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SQSConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_sqs_queue" {
diff --git a/internal/service/sqs/queues_data_source.go b/internal/service/sqs/queues_data_source.go
index 3ede8419c12..5ce15f94531 100644
--- a/internal/service/sqs/queues_data_source.go
+++ b/internal/service/sqs/queues_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs

 import (
@@ -36,7 +39,7 @@ const (
 )

 func dataSourceQueuesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
-	conn := meta.(*conns.AWSClient).SQSConn()
+	conn := meta.(*conns.AWSClient).SQSConn(ctx)

 	input := &sqs.ListQueuesInput{}
diff --git a/internal/service/sqs/queues_data_source_test.go b/internal/service/sqs/queues_data_source_test.go
index da03ba8feb9..780c20fcf0b 100644
--- a/internal/service/sqs/queues_data_source_test.go
+++ b/internal/service/sqs/queues_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs_test

 import (
diff --git a/internal/service/sqs/service_package_gen.go b/internal/service/sqs/service_package_gen.go
index 615bfa793bc..235612d3bff 100644
--- a/internal/service/sqs/service_package_gen.go
+++ b/internal/service/sqs/service_package_gen.go
@@ -5,6 +5,10 @@ package sqs
 import (
 	"context"

+	aws_sdkv1 "github.com/aws/aws-sdk-go/aws"
+	session_sdkv1 "github.com/aws/aws-sdk-go/aws/session"
+	sqs_sdkv1 "github.com/aws/aws-sdk-go/service/sqs"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/types"
 	"github.com/hashicorp/terraform-provider-aws/names"
 )
@@ -61,4 +65,13 @@ func (p *servicePackage) ServicePackageName() string {
 	return names.SQS
 }

-var ServicePackage = &servicePackage{}
+// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API.
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*sqs_sdkv1.SQS, error) {
+	sess := config["session"].(*session_sdkv1.Session)
+
+	return sqs_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil
+}
+
+func ServicePackage(ctx context.Context) conns.ServicePackage {
+	return &servicePackage{}
+}
diff --git a/internal/service/sqs/status.go b/internal/service/sqs/status.go
index 9af1464ec71..3f58082acd1 100644
--- a/internal/service/sqs/status.go
+++ b/internal/service/sqs/status.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs

 import (
diff --git a/internal/service/sqs/sweep.go b/internal/service/sqs/sweep.go
index e4d3464a4cf..d51edb45830 100644
--- a/internal/service/sqs/sweep.go
+++ b/internal/service/sqs/sweep.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:build sweep
 // +build sweep

@@ -11,8 +14,8 @@ import (
 	"github.com/aws/aws-sdk-go/service/sqs"
 	"github.com/hashicorp/go-multierror"
 	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
-	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/sweep"
+	"github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk"
 )

 func init() {
@@ -33,11 +36,11 @@ func init() {
 func sweepQueues(region string) error {
 	ctx := sweep.Context(region)
-	client, err := sweep.SharedRegionalSweepClient(region)
+	client, err := sweep.SharedRegionalSweepClient(ctx, region)
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-	conn := client.(*conns.AWSClient).SQSConn()
+	conn := client.SQSConn(ctx)
 	input := &sqs.ListQueuesInput{}
 	var sweeperErrs *multierror.Error
@@ -51,7 +54,7 @@ func sweepQueues(region string) error {
 			r := ResourceQueue()
 			d := r.Data(nil)
 			d.SetId(aws.StringValue(queueUrl))
-			err = sweep.DeleteResource(ctx, r, d, client)
+			err = sdk.DeleteResource(ctx, r, d, client)
 			if err != nil {
 				log.Printf("[ERROR] %s", err)
diff --git a/internal/service/sqs/tags_gen.go b/internal/service/sqs/tags_gen.go
index b768860a608..bb633ed4298 100644
--- a/internal/service/sqs/tags_gen.go
+++ b/internal/service/sqs/tags_gen.go
@@ -14,10 +14,10 @@ import (
 	"github.com/hashicorp/terraform-provider-aws/names"
 )

-// ListTags lists sqs service tags.
+// listTags lists sqs service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func ListTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string) (tftags.KeyValueTags, error) {
+func listTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string) (tftags.KeyValueTags, error) {
 	input := &sqs.ListQueueTagsInput{
 		QueueUrl: aws.String(identifier),
 	}
@@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string) (tft
 // ListTags lists sqs service tags and set them in Context.
 // It is called from outside this package.
 func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error {
-	tags, err := ListTags(ctx, meta.(*conns.AWSClient).SQSConn(), identifier)
+	tags, err := listTags(ctx, meta.(*conns.AWSClient).SQSConn(ctx), identifier)

 	if err != nil {
 		return err
@@ -54,14 +54,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string {
 	return aws.StringMap(tags.Map())
 }

-// KeyValueTags creates KeyValueTags from sqs service tags.
+// KeyValueTags creates tftags.KeyValueTags from sqs service tags.
 func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags {
 	return tftags.New(ctx, tags)
 }

-// GetTagsIn returns sqs service tags from Context.
+// getTagsIn returns sqs service tags from Context.
 // nil is returned if there are no input tags.
-func GetTagsIn(ctx context.Context) map[string]*string {
+func getTagsIn(ctx context.Context) map[string]*string {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 {
 			return tags
@@ -71,8 +71,8 @@ func GetTagsIn(ctx context.Context) map[string]*string {
 	return nil
 }

-// SetTagsOut sets sqs service tags in Context.
-func SetTagsOut(ctx context.Context, tags map[string]*string) {
+// setTagsOut sets sqs service tags in Context.
+func setTagsOut(ctx context.Context, tags map[string]*string) {
 	if inContext, ok := tftags.FromContext(ctx); ok {
 		inContext.TagsOut = types.Some(KeyValueTags(ctx, tags))
 	}
@@ -84,13 +84,13 @@ func createTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string, ta
 		return nil
 	}

-	return UpdateTags(ctx, conn, identifier, nil, tags)
+	return updateTags(ctx, conn, identifier, nil, tags)
 }

-// UpdateTags updates sqs service tags.
+// updateTags updates sqs service tags.
 // The identifier is typically the Amazon Resource Name (ARN), although
 // it may also be a different identifier depending on the service.
-func UpdateTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string, oldTagsMap, newTagsMap any) error {
+func updateTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string, oldTagsMap, newTagsMap any) error {
 	oldTags := tftags.New(ctx, oldTagsMap)
 	newTags := tftags.New(ctx, newTagsMap)
@@ -130,5 +130,5 @@ func UpdateTags(ctx context.Context, conn sqsiface.SQSAPI, identifier string, ol
 // UpdateTags updates sqs service tags.
 // It is called from outside this package.
 func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error {
-	return UpdateTags(ctx, meta.(*conns.AWSClient).SQSConn(), identifier, oldTags, newTags)
+	return updateTags(ctx, meta.(*conns.AWSClient).SQSConn(ctx), identifier, oldTags, newTags)
 }
diff --git a/internal/service/sqs/wait.go b/internal/service/sqs/wait.go
index 36b73fbe154..5c57c8ee85d 100644
--- a/internal/service/sqs/wait.go
+++ b/internal/service/sqs/wait.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package sqs

 import (
diff --git a/internal/service/ssm/activation.go b/internal/service/ssm/activation.go
index fed9c047f54..664b56eba0d 100644
--- a/internal/service/ssm/activation.go
+++ b/internal/service/ssm/activation.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -81,13 +84,13 @@ func ResourceActivation() *schema.Resource {
 func resourceActivationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	name := d.Get("name").(string)
 	input := &ssm.CreateActivationInput{
 		DefaultInstanceName: aws.String(name),
 		IamRole:             aws.String(d.Get("iam_role").(string)),
-		Tags:                GetTagsIn(ctx),
+		Tags:                getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("description"); ok {
@@ -121,7 +124,7 @@ func resourceActivationCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceActivationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	activation, err := FindActivationByID(ctx, conn, d.Id())
@@ -143,14 +146,14 @@ func resourceActivationRead(ctx context.Context, d *schema.ResourceData, meta in
 	d.Set("registration_count", activation.RegistrationsCount)
 	d.Set("registration_limit", activation.RegistrationLimit)

-	SetTagsOut(ctx, activation.Tags)
+	setTagsOut(ctx, activation.Tags)

 	return diags
 }

 func resourceActivationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[DEBUG] Deleting SSM Activation: %s", d.Id())
 	_, err := conn.DeleteActivationWithContext(ctx, &ssm.DeleteActivationInput{
diff --git a/internal/service/ssm/activation_test.go b/internal/service/ssm/activation_test.go
index 0e79c5aa125..c17f94c102b 100644
--- a/internal/service/ssm/activation_test.go
+++ b/internal/service/ssm/activation_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
@@ -154,7 +157,7 @@ func testAccCheckActivationExists(ctx context.Context, n string, v *ssm.Activati
 			return fmt.Errorf("No SSM Activation ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		output, err := tfssm.FindActivationByID(ctx, conn, rs.Primary.ID)
@@ -170,7 +173,7 @@ func testAccCheckActivationExists(ctx context.Context, n string, v *ssm.Activati
 func testAccCheckActivationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ssm_activation" {
diff --git a/internal/service/ssm/association.go b/internal/service/ssm/association.go
index 43432bd52ed..ecaf36a3cdc 100644
--- a/internal/service/ssm/association.go
+++ b/internal/service/ssm/association.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -159,7 +162,7 @@ func ResourceAssociation() *schema.Resource {
 func resourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[DEBUG] SSM association create: %s", d.Id())
@@ -239,7 +242,7 @@ func resourceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta
 func resourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[DEBUG] Reading SSM Association: %s", d.Id())
@@ -290,7 +293,7 @@ func resourceAssociationRead(ctx context.Context, d *schema.ResourceData, meta i
 func resourceAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[DEBUG] SSM Association update: %s", d.Id())
@@ -353,7 +356,7 @@ func resourceAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta
 func resourceAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[DEBUG] Deleting SSM Association: %s", d.Id())
diff --git a/internal/service/ssm/association_migrate.go b/internal/service/ssm/association_migrate.go
index d536d3a1138..81c10f32883 100644
--- a/internal/service/ssm/association_migrate.go
+++ b/internal/service/ssm/association_migrate.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
diff --git a/internal/service/ssm/association_migrate_test.go b/internal/service/ssm/association_migrate_test.go
index 99e51fe0aaf..e82689fabbb 100644
--- a/internal/service/ssm/association_migrate_test.go
+++ b/internal/service/ssm/association_migrate_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
diff --git a/internal/service/ssm/association_test.go b/internal/service/ssm/association_test.go
index ec9c412d19c..8d1881f1085 100644
--- a/internal/service/ssm/association_test.go
+++ b/internal/service/ssm/association_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
@@ -619,7 +622,7 @@ func testAccCheckAssociationExists(ctx context.Context, n string) resource.TestC
 			return fmt.Errorf("No SSM Assosciation ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		_, err := tfssm.FindAssociationById(ctx, conn, rs.Primary.ID)
@@ -629,7 +632,7 @@ func testAccCheckAssociationExists(ctx context.Context, n string) resource.TestC
 func testAccCheckAssociationDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ssm_association" {
diff --git a/internal/service/ssm/consts.go b/internal/service/ssm/consts.go
index 798dd711248..2f48b424cd3 100644
--- a/internal/service/ssm/consts.go
+++ b/internal/service/ssm/consts.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
diff --git a/internal/service/ssm/default_patch_baseline.go b/internal/service/ssm/default_patch_baseline.go
index de5107e3dca..a9c755b109b 100644
--- a/internal/service/ssm/default_patch_baseline.go
+++ b/internal/service/ssm/default_patch_baseline.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -15,8 +18,10 @@ import (
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+	"github.com/hashicorp/terraform-provider-aws/internal/conns"
 	"github.com/hashicorp/terraform-provider-aws/internal/create"
 	"github.com/hashicorp/terraform-provider-aws/internal/enum"
+	"github.com/hashicorp/terraform-provider-aws/internal/errs"
 	"github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag"
 	tfslices "github.com/hashicorp/terraform-provider-aws/internal/slices"
 	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
@@ -28,10 +33,6 @@ const (
 	patchBaselineIDRegexPattern = `pb-[0-9a-f]{17}`
 )

-type ssmClient interface {
-	SSMClient() *ssm.Client
-}
-
 // @SDKResource("aws_ssm_default_patch_baseline")
 func ResourceDefaultPatchBaseline() *schema.Resource {
 	return &schema.Resource{
@@ -44,7 +45,7 @@ func ResourceDefaultPatchBaseline() *schema.Resource {
 				id := d.Id()
 				if isPatchBaselineID(id) || isPatchBaselineARN(id) {
-					conn := meta.(ssmClient).SSMClient()
+					conn := meta.(*conns.AWSClient).SSMClient(ctx)

 					patchbaseline, err := findPatchBaselineByID(ctx, conn, id)
 					if err != nil {
@@ -169,7 +170,7 @@ const (
 )

 func resourceDefaultPatchBaselineCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-	conn := meta.(ssmClient).SSMClient()
+	conn := meta.(*conns.AWSClient).SSMClient(ctx)

 	baselineID := d.Get("baseline_id").(string)
@@ -199,7 +200,7 @@ func resourceDefaultPatchBaselineCreate(ctx context.Context, d *schema.ResourceD
 }

 func resourceDefaultPatchBaselineRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-	conn := meta.(ssmClient).SSMClient()
+	conn := meta.(*conns.AWSClient).SSMClient(ctx)

 	out, err := FindDefaultPatchBaseline(ctx, conn, types.OperatingSystem(d.Id()))
 	if !d.IsNewResource() && tfresource.NotFound(err) {
@@ -245,12 +246,10 @@ func ownerIsSelfFilter() types.PatchOrchestratorFilter { //nolint:unused // This
 }

 func resourceDefaultPatchBaselineDelete(ctx context.Context, d *schema.ResourceData, meta any) (diags diag.Diagnostics) {
-	return defaultPatchBaselineRestoreOSDefault(ctx, meta.(ssmClient), types.OperatingSystem(d.Id()))
+	return defaultPatchBaselineRestoreOSDefault(ctx, meta.(*conns.AWSClient).SSMClient(ctx), types.OperatingSystem(d.Id()))
 }

-func defaultPatchBaselineRestoreOSDefault(ctx context.Context, meta ssmClient, os types.OperatingSystem) (diags diag.Diagnostics) {
-	conn := meta.SSMClient()
-
+func defaultPatchBaselineRestoreOSDefault(ctx context.Context, conn *ssm.Client, os types.OperatingSystem) (diags diag.Diagnostics) {
 	baselineID, err := FindDefaultDefaultPatchBaselineIDForOS(ctx, conn, os)
 	if errors.Is(err, tfresource.ErrEmptyResult) {
 		diags = sdkdiag.AppendWarningf(diags, "no AWS-owned default Patch Baseline found for operating system %q", os)
@@ -282,15 +281,15 @@ func FindDefaultPatchBaseline(ctx context.Context, conn *ssm.Client, os types.Op
 		OperatingSystem: os,
 	}
 	out, err := conn.GetDefaultPatchBaseline(ctx, in)
-	if err != nil {
-		var nfe *types.DoesNotExistException
-		if errors.As(err, &nfe) {
-			return nil, &retry.NotFoundError{
-				LastError:   err,
-				LastRequest: in,
-			}
+
+	if errs.IsA[*types.DoesNotExistException](err) {
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: in,
 		}
+	}
+
+	if err != nil {
 		return nil, err
 	}
@@ -306,15 +305,15 @@ func findPatchBaselineByID(ctx context.Context, conn *ssm.Client, id string) (*s
 		BaselineId: aws.String(id),
 	}
 	out, err := conn.GetPatchBaseline(ctx, in)
-	if err != nil {
-		var nfe *types.DoesNotExistException
-		if errors.As(err, &nfe) {
-			return nil, &retry.NotFoundError{
-				LastError:   err,
-				LastRequest: in,
-			}
+
+	if errs.IsA[*types.DoesNotExistException](err) {
+		return nil, &retry.NotFoundError{
+			LastError:   err,
+			LastRequest: in,
 		}
+	}
+
+	if err != nil {
 		return nil, err
 	}
diff --git a/internal/service/ssm/default_patch_baseline_test.go b/internal/service/ssm/default_patch_baseline_test.go
index 6cc4cb2bd86..6918b114c53 100644
--- a/internal/service/ssm/default_patch_baseline_test.go
+++ b/internal/service/ssm/default_patch_baseline_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
@@ -332,7 +335,7 @@ func testAccSSMDefaultPatchBaseline_multiRegion(t *testing.T) {
 func testAccCheckDefaultPatchBaselineDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMClient()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMClient(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ssm_default_patch_baseline" {
@@ -375,7 +378,7 @@ func testAccCheckDefaultPatchBaselineExists(ctx context.Context, name string, de
 			return create.Error(names.SSM, create.ErrActionCheckingExistence, tfssm.ResNameDefaultPatchBaseline, name, errors.New("not set"))
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMClient()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMClient(ctx)

 		resp, err := tfssm.FindDefaultPatchBaseline(ctx, conn, types.OperatingSystem(rs.Primary.ID))
 		if err != nil {
diff --git a/internal/service/ssm/document.go b/internal/service/ssm/document.go
index a8476c0ae79..cf48b30ec54 100644
--- a/internal/service/ssm/document.go
+++ b/internal/service/ssm/document.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -247,7 +250,7 @@ func ResourceDocument() *schema.Resource {
 func resourceDocumentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	name := d.Get("name").(string)
 	input := &ssm.CreateDocumentInput{
@@ -255,7 +258,7 @@ func resourceDocumentCreate(ctx context.Context, d *schema.ResourceData, meta in
 		DocumentFormat: aws.String(d.Get("document_format").(string)),
 		DocumentType:   aws.String(d.Get("document_type").(string)),
 		Name:           aws.String(name),
-		Tags:           GetTagsIn(ctx),
+		Tags:           getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("attachments_source"); ok && len(v.([]interface{})) > 0 {
@@ -309,7 +312,7 @@ func resourceDocumentCreate(ctx context.Context, d *schema.ResourceData, meta in
 func resourceDocumentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	doc, err := FindDocumentByName(ctx, conn, d.Id())
@@ -389,14 +392,14 @@ func resourceDocumentRead(ctx context.Context, d *schema.ResourceData, meta inte
 		}
 	}

-	SetTagsOut(ctx, doc.Tags)
+	setTagsOut(ctx, doc.Tags)

 	return diags
 }

 func resourceDocumentUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	if d.HasChange("permissions") {
 		var oldAccountIDs, newAccountIDs flex.Set[string]
@@ -503,7 +506,7 @@ func resourceDocumentUpdate(ctx context.Context, d *schema.ResourceData, meta in
 func resourceDocumentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	if v, ok := d.GetOk("permissions"); ok && len(v.(map[string]interface{})) > 0 {
 		tfMap := flex.ExpandStringValueMap(v.(map[string]interface{}))
diff --git a/internal/service/ssm/document_data_source.go b/internal/service/ssm/document_data_source.go
index 46e2961115c..ed7c1e1dcac 100644
--- a/internal/service/ssm/document_data_source.go
+++ b/internal/service/ssm/document_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -53,7 +56,7 @@ func DataSourceDocument() *schema.Resource {
 func dataDocumentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	name := d.Get("name").(string)
 	input := &ssm.GetDocumentInput{
diff --git a/internal/service/ssm/document_data_source_test.go b/internal/service/ssm/document_data_source_test.go
index 87278c9c7aa..b136b4ea437 100644
--- a/internal/service/ssm/document_data_source_test.go
+++ b/internal/service/ssm/document_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
diff --git a/internal/service/ssm/document_test.go b/internal/service/ssm/document_test.go
index b20fe5f6590..ca9e791203a 100644
--- a/internal/service/ssm/document_test.go
+++ b/internal/service/ssm/document_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
@@ -623,7 +626,7 @@ func testAccCheckDocumentExists(ctx context.Context, n string) resource.TestChec
 			return fmt.Errorf("No SSM Document ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		_, err := tfssm.FindDocumentByName(ctx, conn, rs.Primary.ID)
@@ -633,7 +636,7 @@ func testAccCheckDocumentExists(ctx context.Context, n string) resource.TestChec
 func testAccCheckDocumentDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ssm_document" {
diff --git a/internal/service/ssm/find.go b/internal/service/ssm/find.go
index c27c332889c..19e5e95f817 100644
--- a/internal/service/ssm/find.go
+++ b/internal/service/ssm/find.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
diff --git a/internal/service/ssm/flex.go b/internal/service/ssm/flex.go
index 6a6052917ee..64ebd97c948 100644
--- a/internal/service/ssm/flex.go
+++ b/internal/service/ssm/flex.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
diff --git a/internal/service/ssm/generate.go b/internal/service/ssm/generate.go
index 859dda8f0fc..bb8460a6a78 100644
--- a/internal/service/ssm/generate.go
+++ b/internal/service/ssm/generate.go
@@ -1,4 +1,8 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceId -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceId -TagResTypeElem=ResourceType -UntagOp=RemoveTagsFromResource -UpdateTags -CreateTags
+//go:generate go run ../../generate/servicepackage/main.go
 // ONLY generate directives and package declaration! Do not add anything else to this file.
 package ssm
diff --git a/internal/service/ssm/instances_data_source.go b/internal/service/ssm/instances_data_source.go
index 44499339d65..e43c393ac04 100644
--- a/internal/service/ssm/instances_data_source.go
+++ b/internal/service/ssm/instances_data_source.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -46,7 +49,7 @@ func DataSourceInstances() *schema.Resource {
 func dataSourceInstancesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	input := &ssm.DescribeInstanceInformationInput{}
diff --git a/internal/service/ssm/instances_data_source_test.go b/internal/service/ssm/instances_data_source_test.go
index edd8912173e..807f957b400 100644
--- a/internal/service/ssm/instances_data_source_test.go
+++ b/internal/service/ssm/instances_data_source_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
diff --git a/internal/service/ssm/maintenance_window.go b/internal/service/ssm/maintenance_window.go
index 25a17bda4bd..40e2c0cf7fa 100644
--- a/internal/service/ssm/maintenance_window.go
+++ b/internal/service/ssm/maintenance_window.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -90,7 +93,7 @@ func ResourceMaintenanceWindow() *schema.Resource {
 func resourceMaintenanceWindowCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	name := d.Get("name").(string)
 	input := &ssm.CreateMaintenanceWindowInput{
@@ -99,7 +102,7 @@ func resourceMaintenanceWindowCreate(ctx context.Context, d *schema.ResourceData
 		Duration: aws.Int64(int64(d.Get("duration").(int))),
 		Name:     aws.String(name),
 		Schedule: aws.String(d.Get("schedule").(string)),
-		Tags:     GetTagsIn(ctx),
+		Tags:     getTagsIn(ctx),
 	}

 	if v, ok := d.GetOk("description"); ok {
@@ -148,7 +151,7 @@ func resourceMaintenanceWindowCreate(ctx context.Context, d *schema.ResourceData
 func resourceMaintenanceWindowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	output, err := FindMaintenanceWindowByID(ctx, conn, d.Id())
@@ -179,7 +182,7 @@ func resourceMaintenanceWindowRead(ctx context.Context, d *schema.ResourceData,
 func resourceMaintenanceWindowUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	if d.HasChangesExcept("tags", "tags_all") {
 		// Replace must be set otherwise its not possible to remove optional attributes, e.g.
@@ -227,7 +230,7 @@ func resourceMaintenanceWindowUpdate(ctx context.Context, d *schema.ResourceData
 func resourceMaintenanceWindowDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[INFO] Deleting SSM Maintenance Window: %s", d.Id())
 	_, err := conn.DeleteMaintenanceWindowWithContext(ctx, &ssm.DeleteMaintenanceWindowInput{
diff --git a/internal/service/ssm/maintenance_window_target.go b/internal/service/ssm/maintenance_window_target.go
index 631b863dba6..f6a5bf9e77a 100644
--- a/internal/service/ssm/maintenance_window_target.go
+++ b/internal/service/ssm/maintenance_window_target.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -99,7 +102,7 @@ func ResourceMaintenanceWindowTarget() *schema.Resource {
 func resourceMaintenanceWindowTargetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[INFO] Registering SSM Maintenance Window Target")
@@ -133,7 +136,7 @@ func resourceMaintenanceWindowTargetCreate(ctx context.Context, d *schema.Resour
 func resourceMaintenanceWindowTargetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	windowID := d.Get("window_id").(string)
 	params := &ssm.DescribeMaintenanceWindowTargetsInput{
@@ -186,7 +189,7 @@ func resourceMaintenanceWindowTargetRead(ctx context.Context, d *schema.Resource
 func resourceMaintenanceWindowTargetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[INFO] Updating SSM Maintenance Window Target: %s", d.Id())
@@ -218,7 +221,7 @@ func resourceMaintenanceWindowTargetUpdate(ctx context.Context, d *schema.Resour
 func resourceMaintenanceWindowTargetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[INFO] Deregistering SSM Maintenance Window Target: %s", d.Id())
diff --git a/internal/service/ssm/maintenance_window_target_test.go b/internal/service/ssm/maintenance_window_target_test.go
index 13a14678f94..efdc06bb0f6 100644
--- a/internal/service/ssm/maintenance_window_target_test.go
+++ b/internal/service/ssm/maintenance_window_target_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
@@ -263,7 +266,7 @@ func testAccCheckMaintenanceWindowTargetExists(ctx context.Context, n string, mW
 			return fmt.Errorf("No SSM Maintenance Window Target Window ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		resp, err := conn.DescribeMaintenanceWindowTargetsWithContext(ctx, &ssm.DescribeMaintenanceWindowTargetsInput{
 			WindowId: aws.String(rs.Primary.Attributes["window_id"]),
@@ -291,7 +294,7 @@ func testAccCheckMaintenanceWindowTargetExists(ctx context.Context, n string, mW
 func testAccCheckMaintenanceWindowTargetDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ssm_maintenance_window_target" {
diff --git a/internal/service/ssm/maintenance_window_task.go b/internal/service/ssm/maintenance_window_task.go
index 4beccd218a0..3f09c67cd4e 100644
--- a/internal/service/ssm/maintenance_window_task.go
+++ b/internal/service/ssm/maintenance_window_task.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm

 import (
@@ -673,7 +676,7 @@ func flattenTaskInvocationCommonParameters(parameters map[string][]*string) []in
 func resourceMaintenanceWindowTaskCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[INFO] Registering SSM Maintenance Window Task")
@@ -731,7 +734,7 @@ func resourceMaintenanceWindowTaskCreate(ctx context.Context, d *schema.Resource
 func resourceMaintenanceWindowTaskRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	windowID := d.Get("window_id").(string)
 	params := &ssm.GetMaintenanceWindowTaskInput{
@@ -785,7 +788,7 @@ func resourceMaintenanceWindowTaskRead(ctx context.Context, d *schema.ResourceDa
 func resourceMaintenanceWindowTaskUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	windowID := d.Get("window_id").(string)
 	params := &ssm.UpdateMaintenanceWindowTaskInput{
@@ -841,7 +844,7 @@ func resourceMaintenanceWindowTaskUpdate(ctx context.Context, d *schema.Resource
 func resourceMaintenanceWindowTaskDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
 	var diags diag.Diagnostics
-	conn := meta.(*conns.AWSClient).SSMConn()
+	conn := meta.(*conns.AWSClient).SSMConn(ctx)

 	log.Printf("[INFO] Deregistering SSM Maintenance Window Task: %s", d.Id())
diff --git a/internal/service/ssm/maintenance_window_task_test.go b/internal/service/ssm/maintenance_window_task_test.go
index 6b6d5129c17..f66f2c4d690 100644
--- a/internal/service/ssm/maintenance_window_task_test.go
+++ b/internal/service/ssm/maintenance_window_task_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
 package ssm_test

 import (
@@ -496,7 +499,7 @@ func testAccCheckMaintenanceWindowTaskExists(ctx context.Context, n string, task
 			return fmt.Errorf("No SSM Maintenance Window Task Window ID is set")
 		}

-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		resp, err := conn.DescribeMaintenanceWindowTasksWithContext(ctx, &ssm.DescribeMaintenanceWindowTasksInput{
 			WindowId: aws.String(rs.Primary.Attributes["window_id"]),
@@ -518,7 +521,7 @@ func testAccCheckMaintenanceWindowTaskExists(ctx context.Context, n string, task
 func testAccCheckMaintenanceWindowTaskDestroy(ctx context.Context) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
-		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn()
+		conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx)

 		for _, rs := range s.RootModule().Resources {
 			if rs.Type != "aws_ssm_maintenance_window_task" {
diff --git a/internal/service/ssm/maintenance_window_test.go b/internal/service/ssm/maintenance_window_test.go
index e13e4cc717b..25ae690d3f1 100644
--- a/internal/service/ssm/maintenance_window_test.go
+++ b/internal/service/ssm/maintenance_window_test.go
@@ -1,3 +1,6 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -515,7 +518,7 @@ func testAccCheckMaintenanceWindowExists(ctx context.Context, n string, v *ssm.G return fmt.Errorf("No SSM Maintenance Window ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) output, err := tfssm.FindMaintenanceWindowByID(ctx, conn, rs.Primary.ID) @@ -531,7 +534,7 @@ func testAccCheckMaintenanceWindowExists(ctx context.Context, n string, v *ssm.G func testAccCheckMaintenanceWindowDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssm_maintenance_window" { diff --git a/internal/service/ssm/maintenance_windows_data_source.go b/internal/service/ssm/maintenance_windows_data_source.go index 1cdfe33e675..559c390aea6 100644 --- a/internal/service/ssm/maintenance_windows_data_source.go +++ b/internal/service/ssm/maintenance_windows_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -46,7 +49,7 @@ func DataSourceMaintenanceWindows() *schema.Resource { func dataMaintenanceWindowsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) input := &ssm.DescribeMaintenanceWindowsInput{} diff --git a/internal/service/ssm/maintenance_windows_data_source_test.go b/internal/service/ssm/maintenance_windows_data_source_test.go index 39d9028a06a..c07a797acba 100644 --- a/internal/service/ssm/maintenance_windows_data_source_test.go +++ b/internal/service/ssm/maintenance_windows_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -33,7 +36,7 @@ func TestAccSSMMaintenanceWindowsDataSource_filter(t *testing.T) { { Config: testAccMaintenanceWindowsDataSourceConfig_filterEnabled(rName1, rName2, rName3), Check: resource.ComposeAggregateTestCheckFunc( - acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "1"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", 1), ), }, }, diff --git a/internal/service/ssm/parameter.go b/internal/service/ssm/parameter.go index b4c9e26a387..544385cfa6c 100644 --- a/internal/service/ssm/parameter.go +++ b/internal/service/ssm/parameter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -84,8 +87,9 @@ func ResourceParameter() *schema.Resource { ValidateFunc: validation.StringLenBetween(1, 2048), }, "overwrite": { - Type: schema.TypeBool, - Optional: true, + Type: schema.TypeBool, + Optional: true, + Deprecated: "this attribute has been deprecated", }, names.AttrTags: tftags.TagsSchema(), names.AttrTagsAll: tftags.TagsSchemaComputed(), @@ -142,7 +146,7 @@ func ResourceParameter() *schema.Resource { func resourceParameterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) name := d.Get("name").(string) @@ -178,7 +182,7 @@ func resourceParameterCreate(ctx context.Context, d *schema.ResourceData, meta i // AWS SSM Service only supports PutParameter requests with Tags // iff Overwrite is not provided or is false; in this resource's case, // the Overwrite value is always set in the paramInput so we check for the value - tags := GetTagsIn(ctx) + tags := getTagsIn(ctx) if !aws.BoolValue(input.Overwrite) { input.Tags = tags } @@ -211,7 +215,7 @@ func resourceParameterCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceParameterRead(ctx 
context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) input := &ssm.GetParameterInput{ Name: aws.String(d.Id()), @@ -224,7 +228,7 @@ func resourceParameterRead(ctx context.Context, d *schema.ResourceData, meta int resp, err = conn.GetParameterWithContext(ctx, input) if tfawserr.ErrCodeEquals(err, ssm.ErrCodeParameterNotFound) && d.IsNewResource() && d.Get("data_type").(string) == "aws:ec2:image" { - return retry.RetryableError(fmt.Errorf("error reading SSM Parameter (%s) after creation: this can indicate that the provided parameter value could not be validated by SSM", d.Id())) + return retry.RetryableError(fmt.Errorf("reading SSM Parameter (%s) after creation: this can indicate that the provided parameter value could not be validated by SSM", d.Id())) } if err != nil { @@ -297,9 +301,9 @@ func resourceParameterRead(ctx context.Context, d *schema.ResourceData, meta int func resourceParameterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) - if d.HasChangesExcept("tags", "tags_all") { + if d.HasChangesExcept("overwrite", "tags", "tags_all") { value := d.Get("value").(string) if v, ok := d.Get("insecure_value").(string); ok && v != "" { @@ -350,7 +354,7 @@ func resourceParameterUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceParameterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) _, err := conn.DeleteParameterWithContext(ctx, &ssm.DeleteParameterInput{ Name: aws.String(d.Get("name").(string)), diff --git a/internal/service/ssm/parameter_data_source.go 
b/internal/service/ssm/parameter_data_source.go index 80aa6625f46..9ab93933a80 100644 --- a/internal/service/ssm/parameter_data_source.go +++ b/internal/service/ssm/parameter_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -21,6 +24,10 @@ func DataSourceParameter() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "insecure_value": { + Type: schema.TypeString, + Computed: true, + }, "name": { Type: schema.TypeString, Required: true, @@ -49,7 +56,7 @@ func DataSourceParameter() *schema.Resource { func dataParameterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) name := d.Get("name").(string) @@ -69,9 +76,13 @@ func dataParameterRead(ctx context.Context, d *schema.ResourceData, meta interfa d.SetId(aws.StringValue(param.Name)) d.Set("arn", param.ARN) + d.Set("value", param.Value) + d.Set("insecure_value", nil) + if aws.StringValue(param.Type) != ssm.ParameterTypeSecureString { + d.Set("insecure_value", param.Value) + } d.Set("name", param.Name) d.Set("type", param.Type) - d.Set("value", param.Value) d.Set("version", param.Version) return diags diff --git a/internal/service/ssm/parameter_data_source_test.go b/internal/service/ssm/parameter_data_source_test.go index d0c8abf8842..bfc9b7bb376 100644 --- a/internal/service/ssm/parameter_data_source_test.go +++ b/internal/service/ssm/parameter_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -69,6 +72,30 @@ func TestAccSSMParameterDataSource_fullPath(t *testing.T) { }) } +func TestAccSSMParameterDataSource_insecureValue(t *testing.T) { + ctx := acctest.Context(t) + var param ssm.Parameter + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_ssm_parameter.test" + dataSourceName := "data.aws_ssm_parameter.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ssm.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckParameterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccParameterConfig_insecureValue(rName, "String"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, dataSourceName, &param), + resource.TestCheckResourceAttrPair(dataSourceName, "insecure_value", resourceName, "insecure_value"), + ), + }, + }, + }) +} + func testAccParameterDataSourceConfig_basic(name string, withDecryption string) string { return fmt.Sprintf(` resource "aws_ssm_parameter" "test" { @@ -83,3 +110,17 @@ data "aws_ssm_parameter" "test" { } `, name, withDecryption) } + +func testAccParameterConfig_insecureValue(rName, pType string) string { + return fmt.Sprintf(` +resource "aws_ssm_parameter" "test" { + name = %[1]q + type = %[2]q + insecure_value = "notsecret" +} + +data "aws_ssm_parameter" "test" { + name = aws_ssm_parameter.test.name +} +`, rName, pType) +} diff --git a/internal/service/ssm/parameter_test.go b/internal/service/ssm/parameter_test.go index 4eaeb5962c6..f76fb13f00e 100644 --- a/internal/service/ssm/parameter_test.go +++ b/internal/service/ssm/parameter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -40,6 +43,101 @@ func TestAccSSMParameter_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), resource.TestCheckResourceAttrSet(resourceName, "version"), resource.TestCheckResourceAttr(resourceName, "data_type", "text"), + resource.TestCheckNoResourceAttr(resourceName, "overwrite"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"overwrite"}, + }, + }, + }) +} + +func TestAccSSMParameter_updateValue(t *testing.T) { + ctx := acctest.Context(t) + var param ssm.Parameter + name := fmt.Sprintf("%s_%s", t.Name(), sdkacctest.RandString(10)) + resourceName := "aws_ssm_parameter.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ssm.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckParameterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccParameterConfig_basic(name, "String", "test"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, resourceName, &param), + resource.TestCheckResourceAttr(resourceName, "type", "String"), + resource.TestCheckResourceAttr(resourceName, "value", "test"), + resource.TestCheckNoResourceAttr(resourceName, "overwrite"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"overwrite"}, + }, + { + Config: testAccParameterConfig_basic(name, "String", "test2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, resourceName, &param), + resource.TestCheckResourceAttr(resourceName, "type", "String"), + resource.TestCheckResourceAttr(resourceName, "value", "test2"), + resource.TestCheckNoResourceAttr(resourceName, "overwrite"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + 
ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"overwrite"}, + }, + }, + }) +} + +func TestAccSSMParameter_updateDescription(t *testing.T) { + ctx := acctest.Context(t) + var param ssm.Parameter + name := fmt.Sprintf("%s_%s", t.Name(), sdkacctest.RandString(10)) + resourceName := "aws_ssm_parameter.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ssm.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckParameterDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccParameterConfig_description(name, "description", "String", "test"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, resourceName, &param), + resource.TestCheckResourceAttr(resourceName, "description", "description"), + resource.TestCheckResourceAttr(resourceName, "type", "String"), + resource.TestCheckResourceAttr(resourceName, "value", "test"), + resource.TestCheckNoResourceAttr(resourceName, "overwrite"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"overwrite"}, + }, + { + Config: testAccParameterConfig_description(name, "updated description", "String", "test"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, resourceName, &param), + resource.TestCheckResourceAttr(resourceName, "description", "updated description"), + resource.TestCheckResourceAttr(resourceName, "type", "String"), + resource.TestCheckResourceAttr(resourceName, "value", "test"), + resource.TestCheckNoResourceAttr(resourceName, "overwrite"), ), }, { @@ -297,7 +395,7 @@ func TestAccSSMParameter_Overwrite_basic(t *testing.T) { { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) input := &ssm.PutParameterInput{ Name: aws.String(fmt.Sprintf("%s-%s", 
"test_parameter", name)), @@ -313,12 +411,14 @@ func TestAccSSMParameter_Overwrite_basic(t *testing.T) { Config: testAccParameterConfig_basicOverwrite(name, "String", "This value is set using Terraform"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr(resourceName, "version", "2"), + resource.TestCheckResourceAttr(resourceName, "overwrite", "true"), ), }, { Config: testAccParameterConfig_basicOverwrite(name, "String", "test2"), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttr(resourceName, "version", "3"), + resource.TestCheckResourceAttr(resourceName, "overwrite", "true"), ), }, { @@ -334,6 +434,7 @@ func TestAccSSMParameter_Overwrite_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "value", "test3"), resource.TestCheckResourceAttr(resourceName, "type", "String"), resource.TestCheckResourceAttr(resourceName, "version", "4"), + resource.TestCheckResourceAttr(resourceName, "overwrite", "true"), ), }, }, @@ -465,6 +566,41 @@ func TestAccSSMParameter_Overwrite_updateToTags(t *testing.T) { }, }) } +func TestAccSSMParameter_Overwrite_removeAttribute(t *testing.T) { + ctx := acctest.Context(t) + var param ssm.Parameter + rName := fmt.Sprintf("%s_%s", t.Name(), sdkacctest.RandString(10)) + resourceName := "aws_ssm_parameter.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, ssm.EndpointsID), + CheckDestroy: testAccCheckParameterDestroy(ctx), + Steps: []resource.TestStep{ + { + ExternalProviders: map[string]resource.ExternalProvider{ + "aws": { + Source: "hashicorp/aws", + VersionConstraint: "4.67.0", + }, + }, + Config: testAccParameterConfig_overwriteRemove_Setup(rName, "String", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, resourceName, &param), + resource.TestCheckResourceAttr(resourceName, "overwrite", "true"), + ), + }, + { + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, 
+ Config: testAccParameterConfig_overwriteRemove_Remove(rName, "String", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckParameterExists(ctx, resourceName, &param), + resource.TestCheckResourceAttr(resourceName, "overwrite", "false"), + ), + }, + }, + }) +} func TestAccSSMParameter_tags(t *testing.T) { ctx := acctest.Context(t) @@ -910,7 +1046,7 @@ func testAccCheckParameterExists(ctx context.Context, n string, param *ssm.Param return fmt.Errorf("No SSM Parameter ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) paramInput := &ssm.GetParametersInput{ Names: []*string{ @@ -936,7 +1072,7 @@ func testAccCheckParameterExists(ctx context.Context, n string, param *ssm.Param func testAccCheckParameterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssm_parameter" { @@ -980,6 +1116,17 @@ resource "aws_ssm_parameter" "test" { `, rName, pType, value) } +func testAccParameterConfig_description(rName, description, pType, value string) string { + return fmt.Sprintf(` +resource "aws_ssm_parameter" "test" { + name = %[1]q + description = %[2]q + type = %[3]q + value = %[4]q +} +`, rName, description, pType, value) +} + func testAccParameterConfig_insecure(rName, pType, value string) string { return fmt.Sprintf(` resource "aws_ssm_parameter" "test" { @@ -1121,6 +1268,29 @@ resource "aws_ssm_parameter" "test_downstream" { `, rName, value) } +func testAccParameterConfig_overwriteRemove_Setup(rName, pType, value string) string { + return fmt.Sprintf(` +resource "aws_ssm_parameter" "test" { + name = "test_parameter-%[1]s" + description = "description for parameter %[1]s" + type = "%[2]s" + value = "%[3]s" + overwrite = true +} +`, rName, pType, 
value) +} + +func testAccParameterConfig_overwriteRemove_Remove(rName, pType, value string) string { + return fmt.Sprintf(` +resource "aws_ssm_parameter" "test" { + name = "test_parameter-%[1]s" + description = "description for parameter %[1]s" + type = "%[2]s" + value = "%[3]s" +} +`, rName, pType, value) +} + func testAccParameterConfig_secure(rName string, value string) string { return fmt.Sprintf(` resource "aws_ssm_parameter" "test" { diff --git a/internal/service/ssm/parameters_by_path_data_source.go b/internal/service/ssm/parameters_by_path_data_source.go index 95cf0546a8f..1d9ba9f2fab 100644 --- a/internal/service/ssm/parameters_by_path_data_source.go +++ b/internal/service/ssm/parameters_by_path_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -58,7 +61,7 @@ func DataSourceParametersByPath() *schema.Resource { func dataSourceParametersReadByPath(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) path := d.Get("path").(string) input := &ssm.GetParametersByPathInput{ diff --git a/internal/service/ssm/parameters_by_path_data_source_test.go b/internal/service/ssm/parameters_by_path_data_source_test.go index ee043425e2a..72b31e14bf9 100644 --- a/internal/service/ssm/parameters_by_path_data_source_test.go +++ b/internal/service/ssm/parameters_by_path_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( diff --git a/internal/service/ssm/patch_baseline.go b/internal/service/ssm/patch_baseline.go index 87f55467d0b..5c38a0e8a2d 100644 --- a/internal/service/ssm/patch_baseline.go +++ b/internal/service/ssm/patch_baseline.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -232,14 +235,14 @@ const ( func resourcePatchBaselineCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) name := d.Get("name").(string) input := &ssm.CreatePatchBaselineInput{ ApprovedPatchesComplianceLevel: aws.String(d.Get("approved_patches_compliance_level").(string)), Name: aws.String(name), OperatingSystem: aws.String(d.Get("operating_system").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -287,7 +290,7 @@ func resourcePatchBaselineCreate(ctx context.Context, d *schema.ResourceData, me func resourcePatchBaselineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) params := &ssm.GetPatchBaselineInput{ BaselineId: aws.String(d.Id()), @@ -337,7 +340,7 @@ func resourcePatchBaselineRead(ctx context.Context, d *schema.ResourceData, meta func resourcePatchBaselineUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &ssm.UpdatePatchBaselineInput{ @@ -395,7 +398,7 @@ func resourcePatchBaselineUpdate(ctx context.Context, d *schema.ResourceData, me } func resourcePatchBaselineDelete(ctx context.Context, d *schema.ResourceData, meta any) (diags diag.Diagnostics) { - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) log.Printf("[INFO] Deleting SSM Patch Baseline: %s", d.Id()) @@ -406,7 +409,7 @@ func resourcePatchBaselineDelete(ctx context.Context, d *schema.ResourceData, me _, err := 
conn.DeletePatchBaselineWithContext(ctx, params) if tfawserr.ErrCodeEquals(err, ssm.ErrCodeResourceInUseException) { // Reset the default patch baseline before retrying - diags = append(diags, defaultPatchBaselineRestoreOSDefault(ctx, meta.(ssmClient), types.OperatingSystem(d.Get("operating_system").(string)))...) + diags = append(diags, defaultPatchBaselineRestoreOSDefault(ctx, meta.(*conns.AWSClient).SSMClient(ctx), types.OperatingSystem(d.Get("operating_system").(string)))...) if diags.HasError() { return } diff --git a/internal/service/ssm/patch_baseline_data_source.go b/internal/service/ssm/patch_baseline_data_source.go index 97759844373..5ae469620ac 100644 --- a/internal/service/ssm/patch_baseline_data_source.go +++ b/internal/service/ssm/patch_baseline_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -152,7 +155,7 @@ func DataSourcePatchBaseline() *schema.Resource { func dataPatchBaselineRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) filters := []*ssm.PatchOrchestratorFilter{ { diff --git a/internal/service/ssm/patch_baseline_data_source_test.go b/internal/service/ssm/patch_baseline_data_source_test.go index 9c852310f51..f075cc4c702 100644 --- a/internal/service/ssm/patch_baseline_data_source_test.go +++ b/internal/service/ssm/patch_baseline_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( diff --git a/internal/service/ssm/patch_baseline_test.go b/internal/service/ssm/patch_baseline_test.go index 9e274b23ccf..9dfdcfb9f6b 100644 --- a/internal/service/ssm/patch_baseline_test.go +++ b/internal/service/ssm/patch_baseline_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -364,7 +367,7 @@ func testAccSSMPatchBaseline_deleteDefault(t *testing.T) { }, { PreConfig: func() { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) input := &ssm.RegisterDefaultPatchBaselineInput{ BaselineId: ssmPatch.BaselineId, @@ -400,7 +403,7 @@ func testAccCheckPatchBaselineExists(ctx context.Context, n string, patch *ssm.P return fmt.Errorf("No SSM Patch Baseline ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) resp, err := conn.DescribePatchBaselinesWithContext(ctx, &ssm.DescribePatchBaselinesInput{ Filters: []*ssm.PatchOrchestratorFilter{ @@ -427,7 +430,7 @@ func testAccCheckPatchBaselineExists(ctx context.Context, n string, patch *ssm.P func testAccCheckPatchBaselineDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssm_patch_baseline" { diff --git a/internal/service/ssm/patch_group.go b/internal/service/ssm/patch_group.go index 75fd879bdba..e5f26c6d452 100644 --- a/internal/service/ssm/patch_group.go +++ b/internal/service/ssm/patch_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -48,7 +51,7 @@ func ResourcePatchGroup() *schema.Resource { func resourcePatchGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) baselineId := d.Get("baseline_id").(string) patchGroup := d.Get("patch_group").(string) @@ -70,7 +73,7 @@ func resourcePatchGroupCreate(ctx context.Context, d *schema.ResourceData, meta func resourcePatchGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) patchGroup, baselineId, err := ParsePatchGroupID(d.Id()) if err != nil { @@ -106,7 +109,7 @@ func resourcePatchGroupRead(ctx context.Context, d *schema.ResourceData, meta in func resourcePatchGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) patchGroup, baselineId, err := ParsePatchGroupID(d.Id()) if err != nil { diff --git a/internal/service/ssm/patch_group_migrate.go b/internal/service/ssm/patch_group_migrate.go index 703dbcb19f8..90d4194f70f 100644 --- a/internal/service/ssm/patch_group_migrate.go +++ b/internal/service/ssm/patch_group_migrate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( diff --git a/internal/service/ssm/patch_group_migrate_test.go b/internal/service/ssm/patch_group_migrate_test.go index a72d021cf0b..54479207650 100644 --- a/internal/service/ssm/patch_group_migrate_test.go +++ b/internal/service/ssm/patch_group_migrate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( diff --git a/internal/service/ssm/patch_group_test.go b/internal/service/ssm/patch_group_test.go index dabe00c0e02..b9259e6a011 100644 --- a/internal/service/ssm/patch_group_test.go +++ b/internal/service/ssm/patch_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -85,7 +88,7 @@ func TestAccSSMPatchGroup_multipleBaselines(t *testing.T) { func testAccCheckPatchGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssm_patch_group" { @@ -128,7 +131,7 @@ func testAccCheckPatchGroupExists(ctx context.Context, n string) resource.TestCh return fmt.Errorf("error parsing SSM Patch Group ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) group, err := tfssm.FindPatchGroup(ctx, conn, patchGroup, baselineId) diff --git a/internal/service/ssm/resource_data_sync.go b/internal/service/ssm/resource_data_sync.go index b6c74bff5b0..6d446caecfb 100644 --- a/internal/service/ssm/resource_data_sync.go +++ b/internal/service/ssm/resource_data_sync.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -75,7 +78,7 @@ func ResourceResourceDataSync() *schema.Resource { func resourceResourceDataSyncCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) name := d.Get("name").(string) @@ -108,7 +111,7 @@ func resourceResourceDataSyncCreate(ctx context.Context, d *schema.ResourceData, func resourceResourceDataSyncRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) syncItem, err := FindResourceDataSyncItem(ctx, conn, d.Id()) @@ -129,7 +132,7 @@ func resourceResourceDataSyncRead(ctx context.Context, d *schema.ResourceData, m func resourceResourceDataSyncDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) input := &ssm.DeleteResourceDataSyncInput{ SyncName: aws.String(d.Id()), diff --git a/internal/service/ssm/resource_data_sync_test.go b/internal/service/ssm/resource_data_sync_test.go index a6f124bd142..67c31ef1e5a 100644 --- a/internal/service/ssm/resource_data_sync_test.go +++ b/internal/service/ssm/resource_data_sync_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -75,7 +78,7 @@ func TestAccSSMResourceDataSync_update(t *testing.T) { func testAccCheckResourceDataSyncDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssm_resource_data_sync" { diff --git a/internal/service/ssm/service_package_gen.go b/internal/service/ssm/service_package_gen.go index 3fba9b897ae..1ce65459ab2 100644 --- a/internal/service/ssm/service_package_gen.go +++ b/internal/service/ssm/service_package_gen.go @@ -5,6 +5,12 @@ package ssm import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + ssm_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssm" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ssm_sdkv1 "github.com/aws/aws-sdk-go/service/ssm" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -127,4 +133,24 @@ func (p *servicePackage) ServicePackageName() string { return names.SSM } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ssm_sdkv1.SSM, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ssm_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*ssm_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return ssm_sdkv2.NewFromConfig(cfg, func(o *ssm_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = ssm_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ssm/service_setting.go b/internal/service/ssm/service_setting.go index 9be18544c8a..5df2bcd2f4d 100644 --- a/internal/service/ssm/service_setting.go +++ b/internal/service/ssm/service_setting.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( @@ -51,7 +54,7 @@ func ResourceServiceSetting() *schema.Resource { func resourceServiceSettingUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) log.Printf("[DEBUG] SSM service setting create: %s", d.Get("setting_id").(string)) @@ -75,7 +78,7 @@ func resourceServiceSettingUpdate(ctx context.Context, d *schema.ResourceData, m func resourceServiceSettingRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) log.Printf("[DEBUG] Reading SSM Activation: %s", d.Id()) @@ -96,7 +99,7 @@ func resourceServiceSettingRead(ctx context.Context, d *schema.ResourceData, met func resourceServiceSettingReset(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSMConn() + conn := meta.(*conns.AWSClient).SSMConn(ctx) log.Printf("[DEBUG] Deleting SSM Service Setting: %s", d.Id()) diff --git 
a/internal/service/ssm/service_setting_test.go b/internal/service/ssm/service_setting_test.go index 657ba774ae4..8d1646dce57 100644 --- a/internal/service/ssm/service_setting_test.go +++ b/internal/service/ssm/service_setting_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( @@ -53,7 +56,7 @@ func TestAccSSMServiceSetting_basic(t *testing.T) { func testAccCheckServiceSettingDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssm_service_setting" { @@ -92,7 +95,7 @@ func testAccServiceSettingExists(ctx context.Context, n string, v *ssm.ServiceSe return fmt.Errorf("No SSM Service Setting ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMConn(ctx) output, err := tfssm.FindServiceSettingByID(ctx, conn, rs.Primary.Attributes["setting_id"]) diff --git a/internal/service/ssm/ssm_test.go b/internal/service/ssm/ssm_test.go index ba0a817bb00..c56e401c79a 100644 --- a/internal/service/ssm/ssm_test.go +++ b/internal/service/ssm/ssm_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm_test import ( diff --git a/internal/service/ssm/status.go b/internal/service/ssm/status.go index e88b979dbc8..0452199c0f2 100644 --- a/internal/service/ssm/status.go +++ b/internal/service/ssm/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssm import ( diff --git a/internal/service/ssm/sweep.go b/internal/service/ssm/sweep.go index 116bad4ae03..1715fb93ead 100644 --- a/internal/service/ssm/sweep.go +++ b/internal/service/ssm/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,12 +14,12 @@ import ( "time" "github.com/aws/aws-sdk-go-v2/aws" + ssm_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssm" "github.com/aws/aws-sdk-go-v2/service/ssm/types" - "github.com/aws/aws-sdk-go/service/ssm" + ssm_sdkv1 "github.com/aws/aws-sdk-go/service/ssm" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" tfslices "github.com/hashicorp/terraform-provider-aws/internal/slices" "github.com/hashicorp/terraform-provider-aws/internal/sweep" @@ -50,13 +53,12 @@ func init() { func sweepResourceDefaultPatchBaselines(region string) error { ctx := sweep.Context(region) - c, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - client := c.(ssmClient) - conn := client.SSMClient() + conn := client.SSMClient(ctx) var sweepables []sweep.Sweepable var errs *multierror.Error @@ -79,13 +81,13 @@ func sweepResourceDefaultPatchBaselines(region string) error { continue } sweepables = append(sweepables, defaultPatchBaselineSweeper{ - client: client, - os: pb.OperatingSystem, + conn: conn, + os: pb.OperatingSystem, }) } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepables); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepables); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Default Patch Baselines for %s: %w", region, err)) } @@ -98,12 +100,12 @@ func sweepResourceDefaultPatchBaselines(region string) error { } type defaultPatchBaselineSweeper struct { - client ssmClient - os types.OperatingSystem + conn *ssm_sdkv2.Client + os types.OperatingSystem } func (s defaultPatchBaselineSweeper) Delete(ctx 
context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) (err error) { - diags := defaultPatchBaselineRestoreOSDefault(ctx, s.client, s.os) + diags := defaultPatchBaselineRestoreOSDefault(ctx, s.conn, s.os) for _, d := range sdkdiag.Warnings(diags) { log.Printf("[WARN] %s", sdkdiag.DiagnosticString(d)) @@ -117,14 +119,15 @@ func (s defaultPatchBaselineSweeper) Delete(ctx context.Context, timeout time.Du func sweepMaintenanceWindows(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %s", err) } - conn := client.(*conns.AWSClient).SSMConn() - input := &ssm.DescribeMaintenanceWindowsInput{} + conn := client.SSMConn(ctx) + + input := &ssm_sdkv1.DescribeMaintenanceWindowsInput{} var sweeperErrs *multierror.Error for { @@ -141,7 +144,7 @@ func sweepMaintenanceWindows(region string) error { for _, window := range output.WindowIdentities { id := aws.ToString(window.WindowId) - input := &ssm.DeleteMaintenanceWindowInput{ + input := &ssm_sdkv1.DeleteMaintenanceWindowInput{ WindowId: window.WindowId, } @@ -149,7 +152,7 @@ func sweepMaintenanceWindows(region string) error { _, err := conn.DeleteMaintenanceWindowWithContext(ctx, input) - if tfawserr.ErrCodeEquals(err, ssm.ErrCodeDoesNotExistException) { + if tfawserr.ErrCodeEquals(err, ssm_sdkv1.ErrCodeDoesNotExistException) { continue } @@ -173,13 +176,12 @@ func sweepMaintenanceWindows(region string) error { func sweepResourcePatchBaselines(region string) error { ctx := sweep.Context(region) - c, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - client := c.(ssmClient) - conn := client.SSMClient() + conn := client.SSMClient(ctx) var sweepables []sweep.Sweepable var errs *multierror.Error @@ -203,7 +205,7 @@ func 
sweepResourcePatchBaselines(region string) error { } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepables); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepables); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping Patch Baselines for %s: %w", region, err)) } @@ -217,19 +219,20 @@ func sweepResourcePatchBaselines(region string) error { func sweepResourceDataSyncs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("getting client: %w", err) } - conn := client.(*conns.AWSClient).SSMConn() + conn := client.SSMConn(ctx) + sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error - input := &ssm.ListResourceDataSyncInput{} + input := &ssm_sdkv1.ListResourceDataSyncInput{} - err = conn.ListResourceDataSyncPagesWithContext(ctx, input, func(page *ssm.ListResourceDataSyncOutput, lastPage bool) bool { + err = conn.ListResourceDataSyncPagesWithContext(ctx, input, func(page *ssm_sdkv1.ListResourceDataSyncOutput, lastPage bool) bool { if page == nil { return !lastPage } @@ -251,7 +254,7 @@ func sweepResourceDataSyncs(region string) error { errs = multierror.Append(errs, fmt.Errorf("listing SSM Resource Data Sync for %s: %w", region, err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("sweeping SSM Resource Data Sync for %s: %w", region, err)) } diff --git a/internal/service/ssm/tags_gen.go b/internal/service/ssm/tags_gen.go index ab9b44cb66a..5d8a1d3eabf 100644 --- a/internal/service/ssm/tags_gen.go +++ b/internal/service/ssm/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ssm service tags. +// listTags lists ssm service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceType string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceType string) (tftags.KeyValueTags, error) { input := &ssm.ListTagsForResourceInput{ ResourceId: aws.String(identifier), ResourceType: aws.String(resourceType), @@ -35,7 +35,7 @@ func ListTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceTyp // ListTags lists ssm service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier, resourceType string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SSMConn(), identifier, resourceType) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SSMConn(ctx), identifier, resourceType) if err != nil { return err @@ -77,9 +77,9 @@ func KeyValueTags(ctx context.Context, tags []*ssm.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns ssm service tags from Context. +// getTagsIn returns ssm service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*ssm.Tag { +func getTagsIn(ctx context.Context) []*ssm.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -89,8 +89,8 @@ func GetTagsIn(ctx context.Context) []*ssm.Tag { return nil } -// SetTagsOut sets ssm service tags in Context. -func SetTagsOut(ctx context.Context, tags []*ssm.Tag) { +// setTagsOut sets ssm service tags in Context. 
+func setTagsOut(ctx context.Context, tags []*ssm.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } @@ -102,13 +102,13 @@ func createTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceT return nil } - return UpdateTags(ctx, conn, identifier, resourceType, nil, KeyValueTags(ctx, tags)) + return updateTags(ctx, conn, identifier, resourceType, nil, KeyValueTags(ctx, tags)) } -// UpdateTags updates ssm service tags. +// updateTags updates ssm service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceType string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceType string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -150,5 +150,5 @@ func UpdateTags(ctx context.Context, conn ssmiface.SSMAPI, identifier, resourceT // UpdateTags updates ssm service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier, resourceType string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SSMConn(), identifier, resourceType, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SSMConn(ctx), identifier, resourceType, oldTags, newTags) } diff --git a/internal/service/ssm/wait.go b/internal/service/ssm/wait.go index cd6dd61e421..e817bde1bf5 100644 --- a/internal/service/ssm/wait.go +++ b/internal/service/ssm/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssm import ( diff --git a/internal/service/ssmcontacts/contact.go b/internal/service/ssmcontacts/contact.go index 6698ab7d17e..fdd50b15a6e 100644 --- a/internal/service/ssmcontacts/contact.go +++ b/internal/service/ssmcontacts/contact.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( @@ -63,13 +66,13 @@ const ( ) func resourceContactCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMContactsClient() + client := meta.(*conns.AWSClient).SSMContactsClient(ctx) input := &ssmcontacts.CreateContactInput{ Alias: aws.String(d.Get("alias").(string)), DisplayName: aws.String(d.Get("display_name").(string)), Plan: &types.Plan{Stages: []types.Stage{}}, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: types.ContactType(d.Get("type").(string)), } @@ -88,7 +91,7 @@ func resourceContactCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceContactRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) out, err := findContactByID(ctx, conn, d.Id()) @@ -110,7 +113,7 @@ func resourceContactRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceContactUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) if d.HasChanges("display_name") { in := &ssmcontacts.UpdateContactInput{ @@ -128,7 +131,7 @@ func resourceContactUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceContactDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := 
meta.(*conns.AWSClient).SSMContactsClient(ctx) log.Printf("[INFO] Deleting SSMContacts Contact %s", d.Id()) diff --git a/internal/service/ssmcontacts/contact_channel.go b/internal/service/ssmcontacts/contact_channel.go index 783289954a0..d66f1a2cb2e 100644 --- a/internal/service/ssmcontacts/contact_channel.go +++ b/internal/service/ssmcontacts/contact_channel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( @@ -73,7 +76,7 @@ const ( ) func resourceContactChannelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) delivery_address := expandContactChannelAddress(d.Get("delivery_address").([]interface{})) in := &ssmcontacts.CreateContactChannelInput{ @@ -99,7 +102,7 @@ func resourceContactChannelCreate(ctx context.Context, d *schema.ResourceData, m } func resourceContactChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) out, err := findContactChannelByID(ctx, conn, d.Id()) @@ -121,7 +124,7 @@ func resourceContactChannelRead(ctx context.Context, d *schema.ResourceData, met } func resourceContactChannelUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) update := false @@ -153,7 +156,7 @@ func resourceContactChannelUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceContactChannelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) log.Printf("[INFO] Deleting SSMContacts ContactChannel %s", d.Id()) 
diff --git a/internal/service/ssmcontacts/contact_channel_data_source.go b/internal/service/ssmcontacts/contact_channel_data_source.go index 2fefef64e57..655907ec84c 100644 --- a/internal/service/ssmcontacts/contact_channel_data_source.go +++ b/internal/service/ssmcontacts/contact_channel_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( @@ -58,7 +61,7 @@ const ( ) func dataSourceContactChannelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) arn := d.Get("arn").(string) diff --git a/internal/service/ssmcontacts/contact_channel_data_source_test.go b/internal/service/ssmcontacts/contact_channel_data_source_test.go index 184a97689f1..5754577660f 100644 --- a/internal/service/ssmcontacts/contact_channel_data_source_test.go +++ b/internal/service/ssmcontacts/contact_channel_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( diff --git a/internal/service/ssmcontacts/contact_channel_test.go b/internal/service/ssmcontacts/contact_channel_test.go index 70852596db4..5a56817e86c 100644 --- a/internal/service/ssmcontacts/contact_channel_test.go +++ b/internal/service/ssmcontacts/contact_channel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( @@ -326,7 +329,7 @@ func testContactChannel_type(t *testing.T) { func testAccCheckContactChannelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssmcontacts_contact_channel" { @@ -371,7 +374,7 @@ func testAccCheckContactChannelExists(ctx context.Context, name string) resource return create.Error(names.SSMContacts, create.ErrActionCheckingExistence, tfssmcontacts.ResNameContactChannel, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) _, err := conn.GetContactChannel(ctx, &ssmcontacts.GetContactChannelInput{ ContactChannelId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/ssmcontacts/contact_data_source.go b/internal/service/ssmcontacts/contact_data_source.go index 97149879c7a..f5172ab19a6 100644 --- a/internal/service/ssmcontacts/contact_data_source.go +++ b/internal/service/ssmcontacts/contact_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( @@ -44,7 +47,7 @@ const ( ) func dataSourceContactRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) arn := d.Get("arn").(string) @@ -59,7 +62,7 @@ func dataSourceContactRead(ctx context.Context, d *schema.ResourceData, meta int return create.DiagError(names.SSMContacts, create.ErrActionSetting, DSNameContact, d.Id(), err) } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return create.DiagError(names.SSMContacts, create.ErrActionReading, DSNameContact, d.Id(), err) } diff --git a/internal/service/ssmcontacts/contact_data_source_test.go b/internal/service/ssmcontacts/contact_data_source_test.go index 9f05d9e5fa9..c80ed308857 100644 --- a/internal/service/ssmcontacts/contact_data_source_test.go +++ b/internal/service/ssmcontacts/contact_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( diff --git a/internal/service/ssmcontacts/contact_test.go b/internal/service/ssmcontacts/contact_test.go index bfaa0eba520..434ec525d8a 100644 --- a/internal/service/ssmcontacts/contact_test.go +++ b/internal/service/ssmcontacts/contact_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( @@ -347,7 +350,7 @@ func testContact_updateTags(t *testing.T) { func testAccCheckContactDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssmcontacts_contact" { @@ -392,7 +395,7 @@ func testAccCheckContactExists(ctx context.Context, name string) resource.TestCh return create.Error(names.SSMContacts, create.ErrActionCheckingExistence, tfssmcontacts.ResNameContact, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) _, err := conn.GetContact(ctx, &ssmcontacts.GetContactInput{ ContactId: aws.String(rs.Primary.ID), @@ -407,7 +410,7 @@ func testAccCheckContactExists(ctx context.Context, name string) resource.TestCh } func testAccContactPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) input := &ssmcontacts.ListContactsInput{} _, err := conn.ListContacts(ctx, input) diff --git a/internal/service/ssmcontacts/find.go b/internal/service/ssmcontacts/find.go index c69edfcfad3..373b2ea09da 100644 --- a/internal/service/ssmcontacts/find.go +++ b/internal/service/ssmcontacts/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( diff --git a/internal/service/ssmcontacts/flex.go b/internal/service/ssmcontacts/flex.go index 6155b716dd9..ad5bfaec505 100644 --- a/internal/service/ssmcontacts/flex.go +++ b/internal/service/ssmcontacts/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( diff --git a/internal/service/ssmcontacts/generate.go b/internal/service/ssmcontacts/generate.go index d7d1ec99216..a8f164c78ed 100644 --- a/internal/service/ssmcontacts/generate.go +++ b/internal/service/ssmcontacts/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsSlice -ListTags -UpdateTags -AWSSDKVersion=2 -TagInIDElem=ResourceARN -ListTagsInIDElem=ResourceARN +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ssmcontacts diff --git a/internal/service/ssmcontacts/helper.go b/internal/service/ssmcontacts/helper.go index c8316140ca9..3e541c4a404 100644 --- a/internal/service/ssmcontacts/helper.go +++ b/internal/service/ssmcontacts/helper.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( diff --git a/internal/service/ssmcontacts/plan.go b/internal/service/ssmcontacts/plan.go index 7e821ef8f54..81ded8a6bbc 100644 --- a/internal/service/ssmcontacts/plan.go +++ b/internal/service/ssmcontacts/plan.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( @@ -96,7 +99,7 @@ const ( ) func resourcePlanCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) contactId := d.Get("contact_id").(string) stages := expandStages(d.Get("stage").([]interface{})) @@ -126,7 +129,7 @@ func resourcePlanCreate(ctx context.Context, d *schema.ResourceData, meta interf } func resourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) out, err := findContactByID(ctx, conn, d.Id()) @@ -148,7 +151,7 @@ func resourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interfac } func resourcePlanUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) update := false @@ -178,7 +181,7 @@ func resourcePlanUpdate(ctx context.Context, d *schema.ResourceData, meta interf } func resourcePlanDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) log.Printf("[INFO] Deleting SSMContacts Plan %s", d.Id()) diff --git a/internal/service/ssmcontacts/plan_data_source.go b/internal/service/ssmcontacts/plan_data_source.go index c0527ceba7e..f816da18d7f 100644 --- a/internal/service/ssmcontacts/plan_data_source.go +++ b/internal/service/ssmcontacts/plan_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts import ( @@ -82,7 +85,7 @@ const ( ) func dataSourcePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SSMContactsClient() + conn := meta.(*conns.AWSClient).SSMContactsClient(ctx) contactId := d.Get("contact_id").(string) diff --git a/internal/service/ssmcontacts/plan_data_source_test.go b/internal/service/ssmcontacts/plan_data_source_test.go index 7935f33d2b3..3e2f5a19452 100644 --- a/internal/service/ssmcontacts/plan_data_source_test.go +++ b/internal/service/ssmcontacts/plan_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( diff --git a/internal/service/ssmcontacts/plan_test.go b/internal/service/ssmcontacts/plan_test.go index c6d700ce654..bcdd5d92066 100644 --- a/internal/service/ssmcontacts/plan_test.go +++ b/internal/service/ssmcontacts/plan_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( @@ -558,7 +561,7 @@ func testAccCheckPlanExists(ctx context.Context, name string) resource.TestCheck return create.Error(names.SSMContacts, create.ErrActionCheckingExistence, tfssmcontacts.ResNamePlan, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) output, err := conn.GetContact(ctx, &ssmcontacts.GetContactInput{ ContactId: aws.String(rs.Primary.ID), @@ -574,7 +577,7 @@ func testAccCheckPlanExists(ctx context.Context, name string) resource.TestCheck func testAccCheckPlanDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSMContactsClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssmcontacts_plan" { diff --git a/internal/service/ssmcontacts/service_package_gen.go b/internal/service/ssmcontacts/service_package_gen.go index d24f8b62828..971fb749add 100644 --- a/internal/service/ssmcontacts/service_package_gen.go +++ b/internal/service/ssmcontacts/service_package_gen.go @@ -5,6 +5,9 @@ package ssmcontacts import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + ssmcontacts_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssmcontacts" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -63,4 +66,17 @@ func (p *servicePackage) ServicePackageName() string { return names.SSMContacts } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*ssmcontacts_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return ssmcontacts_sdkv2.NewFromConfig(cfg, func(o *ssmcontacts_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = ssmcontacts_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ssmcontacts/ssmcontacts_test.go b/internal/service/ssmcontacts/ssmcontacts_test.go index 48c650a66af..2c872cd9ee6 100644 --- a/internal/service/ssmcontacts/ssmcontacts_test.go +++ b/internal/service/ssmcontacts/ssmcontacts_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmcontacts_test import ( diff --git a/internal/service/ssmcontacts/tags_gen.go b/internal/service/ssmcontacts/tags_gen.go index 78435c6a953..cfb30b4b9a4 100644 --- a/internal/service/ssmcontacts/tags_gen.go +++ b/internal/service/ssmcontacts/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ssmcontacts service tags. +// listTags lists ssmcontacts service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *ssmcontacts.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *ssmcontacts.Client, identifier string) (tftags.KeyValueTags, error) { input := &ssmcontacts.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *ssmcontacts.Client, identifier string) // ListTags lists ssmcontacts service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SSMContactsClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SSMContactsClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns ssmcontacts service tags from Context. +// getTagsIn returns ssmcontacts service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets ssmcontacts service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets ssmcontacts service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ssmcontacts service tags. +// updateTags updates ssmcontacts service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *ssmcontacts.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *ssmcontacts.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *ssmcontacts.Client, identifier string // UpdateTags updates ssmcontacts service tags. 
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SSMContactsClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SSMContactsClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ssmincidents/find.go b/internal/service/ssmincidents/find.go index 7f00715a16d..acb0d515062 100644 --- a/internal/service/ssmincidents/find.go +++ b/internal/service/ssmincidents/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( diff --git a/internal/service/ssmincidents/flex.go b/internal/service/ssmincidents/flex.go index 904d12033f9..85faad23dd4 100644 --- a/internal/service/ssmincidents/flex.go +++ b/internal/service/ssmincidents/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( diff --git a/internal/service/ssmincidents/generate.go b/internal/service/ssmincidents/generate.go index b8a40acb237..5634f6d59a9 100644 --- a/internal/service/ssmincidents/generate.go +++ b/internal/service/ssmincidents/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -TagInIDElem=ResourceArn -ListTags -ListTagsInIDElem=ResourceArn -ServiceTagsMap -UpdateTags -KVTValues -SkipTypesImp +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package ssmincidents diff --git a/internal/service/ssmincidents/helper.go b/internal/service/ssmincidents/helper.go index d94ebb1f676..f08c85a662c 100644 --- a/internal/service/ssmincidents/helper.go +++ b/internal/service/ssmincidents/helper.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( diff --git a/internal/service/ssmincidents/replication_set.go b/internal/service/ssmincidents/replication_set.go index 82a2c83d41d..54ab5f336ec 100644 --- a/internal/service/ssmincidents/replication_set.go +++ b/internal/service/ssmincidents/replication_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( @@ -103,11 +106,11 @@ func ResourceReplicationSet() *schema.Resource { } func resourceReplicationSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) input := &ssmincidents.CreateReplicationSetInput{ Regions: expandRegions(d.Get("region").(*schema.Set).List()), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } createReplicationSetOutput, err := client.CreateReplicationSet(ctx, input) @@ -133,7 +136,7 @@ func resourceReplicationSetCreate(ctx context.Context, d *schema.ResourceData, m } func resourceReplicationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) replicationSet, err := FindReplicationSetByID(ctx, client, d.Id()) @@ -160,7 +163,7 @@ func resourceReplicationSetRead(ctx context.Context, d *schema.ResourceData, met } func resourceReplicationSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) if d.HasChanges("region") { input := &ssmincidents.UpdateReplicationSetInput{ @@ -190,7 +193,7 @@ func resourceReplicationSetUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceReplicationSetDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) log.Printf("[INFO] Deleting SSMIncidents ReplicationSet: %s", d.Id()) _, err := client.DeleteReplicationSet(ctx, &ssmincidents.DeleteReplicationSetInput{ @@ -275,10 +278,10 @@ func updateRegionsInput(d *schema.ResourceData, input *ssmincidents.UpdateReplic return nil } -func resourceReplicationSetImport(context context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { - client := meta.(*conns.AWSClient).SSMIncidentsClient() +func resourceReplicationSetImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) - arn, err := getReplicationSetARN(context, client) + arn, err := getReplicationSetARN(ctx, client) if err != nil { return nil, err diff --git a/internal/service/ssmincidents/replication_set_data_source.go b/internal/service/ssmincidents/replication_set_data_source.go index 455b5eee741..aa56d45a85d 100644 --- a/internal/service/ssmincidents/replication_set_data_source.go +++ b/internal/service/ssmincidents/replication_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( @@ -72,7 +75,7 @@ const ( ) func dataSourceReplicationSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) arn, err := getReplicationSetARN(ctx, client) @@ -98,7 +101,7 @@ func dataSourceReplicationSetRead(ctx context.Context, d *schema.ResourceData, m return create.DiagError(names.SSMIncidents, create.ErrActionSetting, ResNameReplicationSet, d.Id(), err) } - tags, err := ListTags(ctx, client, d.Id()) + tags, err := listTags(ctx, client, d.Id()) if err != nil { return create.DiagError(names.SSMIncidents, create.ErrActionReading, DSNameReplicationSet, d.Id(), err) diff --git a/internal/service/ssmincidents/replication_set_data_source_test.go b/internal/service/ssmincidents/replication_set_data_source_test.go index 0b6a0129a7a..37c1ec56a58 100644 --- a/internal/service/ssmincidents/replication_set_data_source_test.go +++ b/internal/service/ssmincidents/replication_set_data_source_test.go @@ -1,7 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmincidents_test import ( - "context" "fmt" "regexp" "testing" @@ -16,8 +18,7 @@ func testReplicationSetDataSource_basic(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) dataSourceName := "data.aws_ssmincidents_replication_set.test" resourceName := "aws_ssmincidents_replication_set.test" @@ -28,12 +29,10 @@ func testReplicationSetDataSource_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckReplicationSetDestroy, Steps: []resource.TestStep{ { Config: testAccReplicationSetDataSourceConfig_basic(), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(dataSourceName), resource.TestCheckResourceAttrPair(resourceName, "arn", dataSourceName, "arn"), resource.TestCheckResourceAttrPair(resourceName, "created_by", dataSourceName, "created_by"), resource.TestCheckResourceAttrPair(resourceName, "deletion_protected", dataSourceName, "deletion_protected"), diff --git a/internal/service/ssmincidents/replication_set_test.go b/internal/service/ssmincidents/replication_set_test.go index c83a46c6a7f..8b6b2104273 100644 --- a/internal/service/ssmincidents/replication_set_test.go +++ b/internal/service/ssmincidents/replication_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmincidents_test import ( @@ -24,8 +27,7 @@ func testReplicationSet_basic(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) resourceName := "aws_ssmincidents_replication_set.test" region1 := acctest.Region() region2 := acctest.AlternateRegion() @@ -38,12 +40,12 @@ func testReplicationSet_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckReplicationSetDestroy, + CheckDestroy: testAccCheckReplicationSetDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccReplicationSetConfig_basicTwoRegion(region1, region2), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "region.#", "2"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": region1, @@ -70,8 +72,7 @@ func testReplicationSet_updateRegionsWithoutCMK(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) resourceName := "aws_ssmincidents_replication_set.test" region1 := acctest.Region() region2 := acctest.AlternateRegion() @@ -84,12 +85,12 @@ func testReplicationSet_updateRegionsWithoutCMK(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesMultipleRegions(ctx, t, 2), - CheckDestroy: testAccCheckReplicationSetDestroy, + CheckDestroy: testAccCheckReplicationSetDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccReplicationSetConfig_basicOneRegion(region1), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), 
resource.TestCheckResourceAttr(resourceName, "region.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": region1, @@ -108,7 +109,7 @@ func testReplicationSet_updateRegionsWithoutCMK(t *testing.T) { { Config: testAccReplicationSetConfig_basicTwoRegion(region1, region2), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "region.#", "2"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": region1, @@ -131,7 +132,7 @@ func testReplicationSet_updateRegionsWithoutCMK(t *testing.T) { { Config: testAccReplicationSetConfig_basicOneRegion(region1), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "region.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": region1, @@ -156,8 +157,7 @@ func testReplicationSet_updateRegionsWithCMK(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) resourceName := "aws_ssmincidents_replication_set.test" resource.Test(t, resource.TestCase{ @@ -168,12 +168,12 @@ func testReplicationSet_updateRegionsWithCMK(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5FactoriesMultipleRegions(ctx, t, 2), - CheckDestroy: testAccCheckReplicationSetDestroy, + CheckDestroy: testAccCheckReplicationSetDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccReplicationSetConfig_oneRegionWithCMK(), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "region.#", 
"1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": acctest.Region(), @@ -190,7 +190,7 @@ func testReplicationSet_updateRegionsWithCMK(t *testing.T) { { Config: testAccReplicationSetConfig_twoRegionWithCMK(), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "region.#", "2"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": acctest.Region(), @@ -210,7 +210,7 @@ func testReplicationSet_updateRegionsWithCMK(t *testing.T) { { Config: testAccReplicationSetConfig_oneRegionWithCMK(), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "region.#", "1"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "region.*", map[string]string{ "name": acctest.Region(), @@ -233,8 +233,7 @@ func testReplicationSet_updateTags(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) resourceName := "aws_ssmincidents_replication_set.test" rKey1 := sdkacctest.RandString(26) @@ -260,7 +259,7 @@ func testReplicationSet_updateTags(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckReplicationSetDestroy, + CheckDestroy: testAccCheckReplicationSetDestroy(ctx), Steps: []resource.TestStep{ { Config: acctest.ConfigCompose( @@ -268,7 +267,7 @@ func testReplicationSet_updateTags(t *testing.T) { testAccReplicationSetConfig_oneTag(rKey1, rVal1Ini), ), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", 
"1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, rVal1Ini), resource.TestCheckResourceAttr(resourceName, "tags_all.%", "2"), @@ -287,7 +286,7 @@ func testReplicationSet_updateTags(t *testing.T) { testAccReplicationSetConfig_oneTag(rKey1, rVal1Updated), ), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, rVal1Updated), resource.TestCheckResourceAttr(resourceName, "tags_all.%", "2"), @@ -306,7 +305,7 @@ func testReplicationSet_updateTags(t *testing.T) { testAccReplicationSetConfig_twoTags(rKey2, rVal2, rKey3, rVal3), ), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey2, rVal2), resource.TestCheckResourceAttr(resourceName, "tags."+rKey3, rVal3), @@ -330,8 +329,7 @@ func testReplicationSet_updateEmptyTags(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) resourceName := "aws_ssmincidents_replication_set.test" rKey1 := sdkacctest.RandString(26) @@ -344,12 +342,12 @@ func testReplicationSet_updateEmptyTags(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckReplicationSetDestroy, + CheckDestroy: testAccCheckReplicationSetDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccReplicationSetConfig_oneTag(rKey1, ""), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), 
resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, ""), acctest.MatchResourceAttrGlobalARN(resourceName, "arn", "ssm-incidents", regexp.MustCompile(`replication-set\/+.`)), @@ -363,7 +361,7 @@ func testReplicationSet_updateEmptyTags(t *testing.T) { { Config: testAccReplicationSetConfig_twoTags(rKey1, "", rKey2, ""), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, ""), resource.TestCheckResourceAttr(resourceName, "tags."+rKey2, ""), @@ -378,7 +376,7 @@ func testReplicationSet_updateEmptyTags(t *testing.T) { { Config: testAccReplicationSetConfig_oneTag(rKey1, ""), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + testAccCheckReplicationSetExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, ""), acctest.MatchResourceAttrGlobalARN(resourceName, "arn", "ssm-incidents", regexp.MustCompile(`replication-set\/+.`)), @@ -393,8 +391,7 @@ func testReplicationSet_disappears(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) resourceName := "aws_ssmincidents_replication_set.test" region1 := acctest.Region() region2 := acctest.AlternateRegion() @@ -406,12 +403,12 @@ func testReplicationSet_disappears(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckReplicationSetDestroy, + CheckDestroy: testAccCheckReplicationSetDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccReplicationSetConfig_basicTwoRegion(region1, region2), Check: resource.ComposeTestCheckFunc( - testAccCheckReplicationSetExists(resourceName), + 
testAccCheckReplicationSetExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfssmincidents.ResourceReplicationSet(), resourceName), ), @@ -421,38 +418,39 @@ func testReplicationSet_disappears(t *testing.T) { }) } -func testAccCheckReplicationSetDestroy(s *terraform.State) error { - client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient() - context := context.Background() +func testAccCheckReplicationSetDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient(ctx) - for _, resource := range s.RootModule().Resources { - if resource.Type != "aws_ssmincidents_replication_set" { - continue - } + for _, resource := range s.RootModule().Resources { + if resource.Type != "aws_ssmincidents_replication_set" { + continue + } - log.Printf("Checking Deletion of replication set resource: %s with ID: %s \n", resource.Type, resource.Primary.ID) + log.Printf("Checking Deletion of replication set resource: %s with ID: %s \n", resource.Type, resource.Primary.ID) - _, err := tfssmincidents.FindReplicationSetByID(context, client, resource.Primary.ID) + _, err := tfssmincidents.FindReplicationSetByID(ctx, client, resource.Primary.ID) - if tfresource.NotFound(err) { - log.Printf("Replication Resource correctly returns NotFound Error... \n") - continue - } + if tfresource.NotFound(err) { + log.Printf("Replication Resource correctly returns NotFound Error... 
\n") + continue + } - log.Printf("Replication Set Resource has incorrect Error\n") + log.Printf("Replication Set Resource has incorrect Error\n") - if err != nil { - return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameReplicationSet, resource.Primary.ID, - errors.New("expected resource not found error, received an unexpected error")) + if err != nil { + return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameReplicationSet, resource.Primary.ID, + errors.New("expected resource not found error, received an unexpected error")) + } + + return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameReplicationSet, resource.Primary.ID, errors.New("not destroyed")) } - return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameReplicationSet, resource.Primary.ID, errors.New("not destroyed")) + return nil } - - return nil } -func testAccCheckReplicationSetExists(name string) resource.TestCheckFunc { +func testAccCheckReplicationSetExists(ctx context.Context, name string) resource.TestCheckFunc { return func(s *terraform.State) error { resource, ok := s.RootModule().Resources[name] if !ok { @@ -463,10 +461,9 @@ func testAccCheckReplicationSetExists(name string) resource.TestCheckFunc { return create.Error(names.SSMIncidents, create.ErrActionCheckingExistence, tfssmincidents.ResNameReplicationSet, name, errors.New("not set")) } - client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient() - context := context.Background() + client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient(ctx) - _, err := tfssmincidents.FindReplicationSetByID(context, client, resource.Primary.ID) + _, err := tfssmincidents.FindReplicationSetByID(ctx, client, resource.Primary.ID) if err != nil { return create.Error(names.SSMIncidents, create.ErrActionCheckingExistence, tfssmincidents.ResNameReplicationSet, 
resource.Primary.ID, err) diff --git a/internal/service/ssmincidents/response_plan.go b/internal/service/ssmincidents/response_plan.go index fb89aae1eec..e388fefd01d 100644 --- a/internal/service/ssmincidents/response_plan.go +++ b/internal/service/ssmincidents/response_plan.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( @@ -191,7 +194,7 @@ func ResourceResponsePlan() *schema.Resource { } func resourceResponsePlanCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) input := &ssmincidents.CreateResponsePlanInput{ Actions: expandAction(d.Get("action").([]interface{})), @@ -201,7 +204,7 @@ func resourceResponsePlanCreate(ctx context.Context, d *schema.ResourceData, met IncidentTemplate: expandIncidentTemplate(d.Get("incident_template").([]interface{})), Integrations: expandIntegration(d.Get("integration").([]interface{})), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := client.CreateResponsePlan(ctx, input) @@ -220,7 +223,7 @@ func resourceResponsePlanCreate(ctx context.Context, d *schema.ResourceData, met } func resourceResponsePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) responsePlan, err := FindResponsePlanByID(ctx, client, d.Id()) @@ -242,7 +245,7 @@ func resourceResponsePlanRead(ctx context.Context, d *schema.ResourceData, meta } func resourceResponsePlanUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) if d.HasChangesExcept("tags", "tags_all") { input := 
&ssmincidents.UpdateResponsePlanInput{ @@ -286,7 +289,7 @@ func resourceResponsePlanUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceResponsePlanDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) log.Printf("[INFO] Deleting SSMIncidents ResponsePlan %s", d.Id()) diff --git a/internal/service/ssmincidents/response_plan_data_source.go b/internal/service/ssmincidents/response_plan_data_source.go index 2b02a529993..f23d01ada10 100644 --- a/internal/service/ssmincidents/response_plan_data_source.go +++ b/internal/service/ssmincidents/response_plan_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmincidents import ( @@ -170,7 +173,7 @@ const ( ) func dataSourceResponsePlanRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - client := meta.(*conns.AWSClient).SSMIncidentsClient() + client := meta.(*conns.AWSClient).SSMIncidentsClient(ctx) d.SetId(d.Get("arn").(string)) @@ -184,7 +187,7 @@ func dataSourceResponsePlanRead(ctx context.Context, d *schema.ResourceData, met return create.DiagError(names.SSMIncidents, create.ErrActionReading, DSNameResponsePlan, d.Id(), err) } - tags, err := ListTags(ctx, client, d.Id()) + tags, err := listTags(ctx, client, d.Id()) if err != nil { return create.DiagError(names.SSMIncidents, create.ErrActionReading, DSNameResponsePlan, d.Id(), err) } diff --git a/internal/service/ssmincidents/response_plan_data_source_test.go b/internal/service/ssmincidents/response_plan_data_source_test.go index db9e868470e..ba95d8e810d 100644 --- a/internal/service/ssmincidents/response_plan_data_source_test.go +++ b/internal/service/ssmincidents/response_plan_data_source_test.go @@ -1,7 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmincidents_test import ( - "context" "fmt" "regexp" "testing" @@ -13,8 +15,7 @@ import ( ) func testResponsePlanDataSource_basic(t *testing.T) { - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rTitle := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) dataSourceName := "data.aws_ssmincidents_response_plan.test" @@ -33,7 +34,6 @@ func testResponsePlanDataSource_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, Steps: []resource.TestStep{ { Config: testAccResponsePlanDataSourceConfig_basic( @@ -45,8 +45,6 @@ func testResponsePlanDataSource_basic(t *testing.T) { chatChannelTopic, ), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(dataSourceName), - resource.TestCheckResourceAttrPair(resourceName, "name", dataSourceName, "name"), resource.TestCheckResourceAttrPair(resourceName, "incident_template.0.title", dataSourceName, "incident_template.0.title"), resource.TestCheckResourceAttrPair(resourceName, "incident_template.0.impact", dataSourceName, "incident_template.0.impact"), diff --git a/internal/service/ssmincidents/response_plan_test.go b/internal/service/ssmincidents/response_plan_test.go index 17fcabaf9bb..560f823cab8 100644 --- a/internal/service/ssmincidents/response_plan_test.go +++ b/internal/service/ssmincidents/response_plan_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssmincidents_test import ( @@ -22,8 +25,7 @@ func testResponsePlan_basic(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rTitle := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rImpact := "3" @@ -37,12 +39,12 @@ func testResponsePlan_basic(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_basic(rName, rTitle, rImpact), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "incident_template.0.title", rTitle), resource.TestCheckResourceAttr(resourceName, "incident_template.0.impact", rImpact), @@ -61,7 +63,7 @@ func testResponsePlan_basic(t *testing.T) { // because CheckDestroy will run after the replication set has been destroyed and destroying // the replication set will destroy all other resources. 
Config: testAccResponsePlanConfig_none(), - Check: testAccCheckResponsePlanDestroy, + Check: testAccCheckResponsePlanDestroy(ctx), }, }, }) @@ -72,8 +74,7 @@ func testResponsePlan_updateRequiredFields(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) iniName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) updName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -91,12 +92,12 @@ func testResponsePlan_updateRequiredFields(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_basic(iniName, iniTitle, iniImpact), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "name", iniName), resource.TestCheckResourceAttr(resourceName, "incident_template.0.title", iniTitle), resource.TestCheckResourceAttr(resourceName, "incident_template.0.impact", iniImpact), @@ -113,7 +114,7 @@ func testResponsePlan_updateRequiredFields(t *testing.T) { { Config: testAccResponsePlanConfig_basic(iniName, updTitle, updImpact), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "name", iniName), resource.TestCheckResourceAttr(resourceName, "incident_template.0.title", updTitle), resource.TestCheckResourceAttr(resourceName, "incident_template.0.impact", updImpact), @@ -130,7 +131,7 @@ func testResponsePlan_updateRequiredFields(t *testing.T) { { Config: testAccResponsePlanConfig_basic(updName, updTitle, updImpact), Check: resource.ComposeTestCheckFunc( - 
testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "name", updName), resource.TestCheckResourceAttr(resourceName, "incident_template.0.title", updTitle), resource.TestCheckResourceAttr(resourceName, "incident_template.0.impact", updImpact), @@ -153,8 +154,7 @@ func testResponsePlan_updateTags(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rTitle := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -183,7 +183,7 @@ func testResponsePlan_updateTags(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: acctest.ConfigCompose( @@ -191,7 +191,7 @@ func testResponsePlan_updateTags(t *testing.T) { testAccResponsePlanConfig_oneTag(rName, rTitle, rKey1, rVal1Ini), ), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, rVal1Ini), resource.TestCheckResourceAttr(resourceName, "tags_all.%", "2"), @@ -210,7 +210,7 @@ func testResponsePlan_updateTags(t *testing.T) { testAccResponsePlanConfig_oneTag(rName, rTitle, rKey1, rVal1Upd), ), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, rVal1Upd), resource.TestCheckResourceAttr(resourceName, "tags_all.%", "2"), @@ -229,7 +229,7 @@ func 
testResponsePlan_updateTags(t *testing.T) { testAccResponsePlanConfig_twoTags(rName, rTitle, rKey2, rVal2, rKey3, rVal3), ), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey2, rVal2), resource.TestCheckResourceAttr(resourceName, "tags."+rKey3, rVal3), @@ -253,8 +253,7 @@ func testResponsePlan_updateEmptyTags(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rTitle := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -270,12 +269,12 @@ func testResponsePlan_updateEmptyTags(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_oneTag(rName, rTitle, rKey1, ""), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, ""), ), @@ -289,7 +288,7 @@ func testResponsePlan_updateEmptyTags(t *testing.T) { { Config: testAccResponsePlanConfig_twoTags(rName, rTitle, rKey1, "", rKey2, ""), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, ""), resource.TestCheckResourceAttr(resourceName, "tags."+rKey2, ""), @@ -304,7 +303,7 @@ func testResponsePlan_updateEmptyTags(t 
*testing.T) { { Config: testAccResponsePlanConfig_oneTag(rName, rTitle, rKey1, ""), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags."+rKey1, ""), ), @@ -324,8 +323,7 @@ func testResponsePlan_disappears(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rTitle := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) impact := "3" @@ -338,12 +336,12 @@ func testResponsePlan_disappears(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_basic(rName, rTitle, impact), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tfssmincidents.ResourceResponsePlan(), resourceName), ), ExpectNonEmptyPlan: true, @@ -357,8 +355,7 @@ func testResponsePlan_incidentTemplateOptionalFields(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rTitle := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -384,12 +381,12 @@ func testResponsePlan_incidentTemplateOptionalFields(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: 
[]resource.TestStep{ { Config: testAccResponsePlanConfig_incidentTemplateOptionalFields(rName, rTitle, rDedupeStringIni, rSummaryIni, rTagKeyIni, rTagValIni, snsTopic1, snsTopic2), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "incident_template.0.title", rTitle), resource.TestCheckResourceAttr(resourceName, "incident_template.0.dedupe_string", rDedupeStringIni), resource.TestCheckResourceAttr(resourceName, "incident_template.0.summary", rSummaryIni), @@ -407,7 +404,7 @@ func testResponsePlan_incidentTemplateOptionalFields(t *testing.T) { { Config: testAccResponsePlanConfig_incidentTemplateOptionalFields(rName, rTitle, rDedupeStringUpd, rSummaryUpd, rTagKeyUpd, rTagValUpd, snsTopic2, snsTopic3), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "incident_template.0.title", rTitle), resource.TestCheckResourceAttr(resourceName, "incident_template.0.dedupe_string", rDedupeStringUpd), resource.TestCheckResourceAttr(resourceName, "incident_template.0.summary", rSummaryUpd), @@ -431,8 +428,7 @@ func testResponsePlan_displayName(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) oldDisplayName := rName + "-old-display-name" newDisplayName := rName + "-new-display-name" @@ -446,12 +442,12 @@ func testResponsePlan_displayName(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_displayName(rName, oldDisplayName), 
Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "display_name", oldDisplayName), ), }, @@ -464,7 +460,7 @@ func testResponsePlan_displayName(t *testing.T) { { Config: testAccResponsePlanConfig_displayName(rName, newDisplayName), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "display_name", newDisplayName), ), }, @@ -483,8 +479,7 @@ func testResponsePlan_chatChannel(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) chatChannelTopic1 := "aws_sns_topic.topic1" chatChannelTopic2 := "aws_sns_topic.topic2" @@ -498,12 +493,12 @@ func testResponsePlan_chatChannel(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_chatChannel(rName, chatChannelTopic1), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "chat_channel.#", "1"), resource.TestCheckTypeSetElemAttrPair(resourceName, "chat_channel.0", chatChannelTopic1, "arn"), ), @@ -517,7 +512,7 @@ func testResponsePlan_chatChannel(t *testing.T) { { Config: testAccResponsePlanConfig_chatChannel(rName, chatChannelTopic2), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "chat_channel.#", "1"), 
resource.TestCheckTypeSetElemAttrPair(resourceName, "chat_channel.0", chatChannelTopic2, "arn"), ), @@ -531,7 +526,7 @@ func testResponsePlan_chatChannel(t *testing.T) { { Config: testAccResponsePlanConfig_twoChatChannels(rName, chatChannelTopic1, chatChannelTopic2), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "chat_channel.#", "2"), resource.TestCheckTypeSetElemAttrPair(resourceName, "chat_channel.*", chatChannelTopic1, "arn"), resource.TestCheckTypeSetElemAttrPair(resourceName, "chat_channel.*", chatChannelTopic2, "arn"), @@ -546,7 +541,7 @@ func testResponsePlan_chatChannel(t *testing.T) { { Config: testAccResponsePlanConfig_emptyChatChannel(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "chat_channel.#", "0"), ), }, @@ -565,8 +560,7 @@ func testResponsePlan_engagement(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) //lintignore:AWSAT003 //lintignore:AWSAT005 @@ -584,12 +578,12 @@ func testResponsePlan_engagement(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_engagement(rName, contactArn1), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "engagements.#", "1"), resource.TestCheckResourceAttr(resourceName, "engagements.0", contactArn1), ), @@ -603,7 
+597,7 @@ func testResponsePlan_engagement(t *testing.T) { { Config: testAccResponsePlanConfig_engagement(rName, contactArn2), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "engagements.#", "1"), resource.TestCheckResourceAttr(resourceName, "engagements.0", contactArn2), ), @@ -617,7 +611,7 @@ func testResponsePlan_engagement(t *testing.T) { { Config: testAccResponsePlanConfig_twoEngagements(rName, contactArn1, contactArn2), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "engagements.#", "2"), resource.TestCheckResourceAttr(resourceName, "engagements.0", contactArn1), resource.TestCheckResourceAttr(resourceName, "engagements.1", contactArn2), @@ -632,7 +626,7 @@ func testResponsePlan_engagement(t *testing.T) { { Config: testAccResponsePlanConfig_emptyEngagements(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "engagements.#", "0"), ), }, @@ -651,8 +645,7 @@ func testResponsePlan_action(t *testing.T) { t.Skip("skipping long-running test in short mode") } - ctx := context.Background() - + ctx := acctest.Context(t) rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_ssmincidents_response_plan.test" @@ -664,12 +657,12 @@ func testResponsePlan_action(t *testing.T) { }, ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, - CheckDestroy: testAccCheckResponsePlanDestroy, + CheckDestroy: testAccCheckResponsePlanDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccResponsePlanConfig_action1(rName), Check: resource.ComposeTestCheckFunc( - 
testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "action.#", "1"), resource.TestCheckResourceAttr(resourceName, "action.0.ssm_automation.#", "1"), resource.TestCheckTypeSetElemAttrPair( @@ -730,7 +723,7 @@ func testResponsePlan_action(t *testing.T) { { Config: testAccResponsePlanConfig_action2(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckResponsePlanExists(resourceName), + testAccCheckResponsePlanExists(ctx, resourceName), resource.TestCheckResourceAttr(resourceName, "action.#", "1"), resource.TestCheckResourceAttr(resourceName, "action.0.ssm_automation.#", "1"), resource.TestCheckTypeSetElemAttrPair( @@ -798,8 +791,7 @@ func testResponsePlan_action(t *testing.T) { // t.Skip("skipping long-running test in short mode") // } // -// ctx := context.Background() -// +// ctx := acctest.Context(t) // rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) // // resourceName := "aws_ssmincidents_response_plan.test" @@ -814,7 +806,7 @@ func testResponsePlan_action(t *testing.T) { // }, // ErrorCheck: acctest.ErrorCheck(t, names.SSMIncidentsEndpointID), // ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, -// CheckDestroy: testAccCheckResponsePlanDestroy, +// CheckDestroy: testAccCheckResponsePlanDestroy(ctx), // Steps: []resource.TestStep{ // { // Config: testAccResponsePlanConfig_pagerdutyIntegration( @@ -824,7 +816,7 @@ func testResponsePlan_action(t *testing.T) { // pagerdutySecretId, // ), // Check: resource.ComposeTestCheckFunc( -// testAccCheckResponsePlanExists(resourceName), +// testAccCheckResponsePlanExists(ctx, resourceName), // resource.TestCheckResourceAttr(resourceName, "integration.#", "1"), // resource.TestCheckResourceAttr(resourceName, "integration.0.pagerduty.#", "1"), // resource.TestCheckResourceAttr( @@ -854,33 +846,34 @@ func testResponsePlan_action(t *testing.T) { // }) //} -func testAccCheckResponsePlanDestroy(s
*terraform.State) error { - client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient() - ctx := context.Background() +func testAccCheckResponsePlanDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient(ctx) - for _, resource := range s.RootModule().Resources { - if resource.Type != "aws_ssmincidents_response_plan" { - continue - } + for _, resource := range s.RootModule().Resources { + if resource.Type != "aws_ssmincidents_response_plan" { + continue + } - _, err := tfssmincidents.FindResponsePlanByID(ctx, client, resource.Primary.ID) + _, err := tfssmincidents.FindResponsePlanByID(ctx, client, resource.Primary.ID) - if tfresource.NotFound(err) { - continue - } + if tfresource.NotFound(err) { + continue + } - if err != nil { - return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameResponsePlan, resource.Primary.ID, - errors.New("expected resource not found error, received an unexpected error")) + if err != nil { + return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameResponsePlan, resource.Primary.ID, + errors.New("expected resource not found error, received an unexpected error")) + } + + return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameResponsePlan, resource.Primary.ID, errors.New("not destroyed")) } - return create.Error(names.SSMIncidents, create.ErrActionCheckingDestroyed, tfssmincidents.ResNameResponsePlan, resource.Primary.ID, errors.New("not destroyed")) + return nil } - - return nil } -func testAccCheckResponsePlanExists(name string) resource.TestCheckFunc { +func testAccCheckResponsePlanExists(ctx context.Context, name string) resource.TestCheckFunc { return func(s *terraform.State) error { resource, ok := s.RootModule().Resources[name] if !ok { @@ -891,8 +884,7 @@ func 
testAccCheckResponsePlanExists(name string) resource.TestCheckFunc { return create.Error(names.SSMIncidents, create.ErrActionCheckingExistence, tfssmincidents.ResNameResponsePlan, name, errors.New("not set")) } - client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient() - ctx := context.Background() + client := acctest.Provider.Meta().(*conns.AWSClient).SSMIncidentsClient(ctx) _, err := tfssmincidents.FindResponsePlanByID(ctx, client, resource.Primary.ID) diff --git a/internal/service/ssmincidents/service_package_gen.go b/internal/service/ssmincidents/service_package_gen.go index 4a95410044a..758bcb8f00d 100644 --- a/internal/service/ssmincidents/service_package_gen.go +++ b/internal/service/ssmincidents/service_package_gen.go @@ -5,6 +5,9 @@ package ssmincidents import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + ssmincidents_sdkv2 "github.com/aws/aws-sdk-go-v2/service/ssmincidents" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -57,4 +60,17 @@ func (p *servicePackage) ServicePackageName() string { return names.SSMIncidents } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*ssmincidents_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return ssmincidents_sdkv2.NewFromConfig(cfg, func(o *ssmincidents_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = ssmincidents_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ssmincidents/ssmincidents_test.go b/internal/service/ssmincidents/ssmincidents_test.go index 6458dddadd1..848715775f5 100644 --- a/internal/service/ssmincidents/ssmincidents_test.go +++ b/internal/service/ssmincidents/ssmincidents_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssmincidents_test import ( diff --git a/internal/service/ssmincidents/tags_gen.go b/internal/service/ssmincidents/tags_gen.go index 21b840e1ebf..7356f4966f0 100644 --- a/internal/service/ssmincidents/tags_gen.go +++ b/internal/service/ssmincidents/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ssmincidents service tags. +// listTags lists ssmincidents service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *ssmincidents.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *ssmincidents.Client, identifier string) (tftags.KeyValueTags, error) { input := &ssmincidents.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *ssmincidents.Client, identifier string) // ListTags lists ssmincidents service tags and set them in Context. // It is called from outside this package. 
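The generated `NewClient` above overrides the service endpoint only when one is configured, via the AWS SDK for Go v2 functional-options pattern. A self-contained sketch of that conditional override, with `Options`/`NewFromConfig` as simplified stand-ins for the real `ssmincidents` client (the actual SDK resolves endpoints through `EndpointResolverFromURL`; this just shows the option-function mechanics):

```go
package main

import "fmt"

// Options and Client are simplified stand-ins for the AWS SDK for Go v2
// client types; the default endpoint below is illustrative, not the SDK's.
type Options struct{ Endpoint string }

type Client struct{ opts Options }

// NewFromConfig applies option functions over defaults, as the SDK does.
func NewFromConfig(optFns ...func(*Options)) *Client {
	o := Options{Endpoint: "https://ssm-incidents.example.amazonaws.com"}
	for _, fn := range optFns {
		fn(&o)
	}
	return &Client{opts: o}
}

func main() {
	// Mirrors the diff: only override when a custom endpoint is configured
	// (e.g. pointing acceptance tests at a local emulator).
	endpoint := "http://localhost:4566"
	c := NewFromConfig(func(o *Options) {
		if endpoint != "" {
			o.Endpoint = endpoint
		}
	})
	fmt.Println(c.opts.Endpoint)
}
```

Because the option function closes over the provider configuration, the generated code needs no per-service branching: an empty `endpoint` string simply leaves the SDK default in place.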
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SSMIncidentsClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SSMIncidentsClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from ssmincidents service tags. +// KeyValueTags creates tftags.KeyValueTags from ssmincidents service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns ssmincidents service tags from Context. +// getTagsIn returns ssmincidents service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets ssmincidents service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets ssmincidents service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ssmincidents service tags. +// updateTags updates ssmincidents service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn *ssmincidents.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *ssmincidents.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *ssmincidents.Client, identifier strin // UpdateTags updates ssmincidents service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SSMIncidentsClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SSMIncidentsClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/ssoadmin/account_assignment.go b/internal/service/ssoadmin/account_assignment.go index 0d0b5c1ba88..9c415c945c8 100644 --- a/internal/service/ssoadmin/account_assignment.go +++ b/internal/service/ssoadmin/account_assignment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
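The `updateTags` function above (renamed from `UpdateTags` as part of unexporting the tag helpers) diffs the old and new tag maps to decide which keys to untag and which to (re)tag. A hedged sketch of that bookkeeping with plain maps; `diffTags` is an illustrative name, not the provider's real `tftags.KeyValueTags` API:

```go
package main

import "fmt"

// diffTags computes the keys to remove and the tags to create when moving
// from oldTags to newTags — the same split updateTags performs before
// calling the service's Untag/Tag APIs (simplified assumption of that logic).
func diffTags(oldTags, newTags map[string]string) (remove []string, create map[string]string) {
	create = map[string]string{}
	for k := range oldTags {
		if _, ok := newTags[k]; !ok {
			remove = append(remove, k) // key dropped entirely
		}
	}
	for k, v := range newTags {
		if old, ok := oldTags[k]; !ok || old != v {
			create[k] = v // new key, or same key with a changed value
		}
	}
	return remove, create
}

func main() {
	rm, cr := diffTags(
		map[string]string{"a": "1", "b": "2"},
		map[string]string{"b": "3", "c": "4"},
	)
	fmt.Println(len(rm), cr["b"], cr["c"])
}
```

Keys with unchanged values fall into neither set, so a no-op update makes no API calls — the behavior the real helper also relies on.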
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -80,7 +83,7 @@ func ResourceAccountAssignment() *schema.Resource { func resourceAccountAssignmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) instanceArn := d.Get("instance_arn").(string) permissionSetArn := d.Get("permission_set_arn").(string) @@ -132,7 +135,7 @@ func resourceAccountAssignmentCreate(ctx context.Context, d *schema.ResourceData func resourceAccountAssignmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) idParts, err := ParseAccountAssignmentID(d.Id()) if err != nil { @@ -180,7 +183,7 @@ func resourceAccountAssignmentRead(ctx context.Context, d *schema.ResourceData, func resourceAccountAssignmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) idParts, err := ParseAccountAssignmentID(d.Id()) if err != nil { diff --git a/internal/service/ssoadmin/account_assignment_test.go b/internal/service/ssoadmin/account_assignment_test.go index d9d08128f28..5761ebaec05 100644 --- a/internal/service/ssoadmin/account_assignment_test.go +++ b/internal/service/ssoadmin/account_assignment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -115,7 +118,7 @@ func TestAccSSOAdminAccountAssignment_disappears(t *testing.T) { func testAccCheckAccountAssignmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_account_assignment" { @@ -164,7 +167,7 @@ func testAccCheckAccountAssignmentExists(ctx context.Context, resourceName strin return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) idParts, err := tfssoadmin.ParseAccountAssignmentID(rs.Primary.ID) diff --git a/internal/service/ssoadmin/customer_managed_policy_attachment.go b/internal/service/ssoadmin/customer_managed_policy_attachment.go index 74b62d9b1c2..5be527001aa 100644 --- a/internal/service/ssoadmin/customer_managed_policy_attachment.go +++ b/internal/service/ssoadmin/customer_managed_policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -76,7 +79,7 @@ func ResourceCustomerManagedPolicyAttachment() *schema.Resource { func resourceCustomerManagedPolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) tfMap := d.Get("customer_managed_policy_reference").([]interface{})[0].(map[string]interface{}) policyName := tfMap["name"].(string) @@ -111,7 +114,7 @@ func resourceCustomerManagedPolicyAttachmentCreate(ctx context.Context, d *schem func resourceCustomerManagedPolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) policyName, policyPath, permissionSetARN, instanceARN, err := CustomerManagedPolicyAttachmentParseResourceID(d.Id()) @@ -142,7 +145,7 @@ func resourceCustomerManagedPolicyAttachmentRead(ctx context.Context, d *schema. func resourceCustomerManagedPolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) policyName, policyPath, permissionSetARN, instanceARN, err := CustomerManagedPolicyAttachmentParseResourceID(d.Id()) diff --git a/internal/service/ssoadmin/customer_managed_policy_attachment_test.go b/internal/service/ssoadmin/customer_managed_policy_attachment_test.go index 0c4aeb46e03..49413100eab 100644 --- a/internal/service/ssoadmin/customer_managed_policy_attachment_test.go +++ b/internal/service/ssoadmin/customer_managed_policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -179,7 +182,7 @@ func TestAccSSOAdminCustomerManagedPolicyAttachment_multipleManagedPolicies(t *t func testAccCheckCustomerManagedPolicyAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_customer_managed_policy_attachment" { @@ -226,7 +229,7 @@ func testAccCheckCustomerManagedPolicyAttachmentExists(ctx context.Context, n st return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) _, err = tfssoadmin.FindCustomerManagedPolicy(ctx, conn, policyName, policyPath, permissionSetARN, instanceARN) diff --git a/internal/service/ssoadmin/find.go b/internal/service/ssoadmin/find.go index 11bedb37d06..179003c7bb5 100644 --- a/internal/service/ssoadmin/find.go +++ b/internal/service/ssoadmin/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( diff --git a/internal/service/ssoadmin/generate.go b/internal/service/ssoadmin/generate.go index 15ed7f052b0..6ba7cb154cb 100644 --- a/internal/service/ssoadmin/generate.go +++ b/internal/service/ssoadmin/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -TagResTypeElem=InstanceArn -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package ssoadmin diff --git a/internal/service/ssoadmin/instance_access_control_attributes.go b/internal/service/ssoadmin/instance_access_control_attributes.go index 58ac60f232e..4df0c8e69f1 100644 --- a/internal/service/ssoadmin/instance_access_control_attributes.go +++ b/internal/service/ssoadmin/instance_access_control_attributes.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -77,7 +80,7 @@ func ResourceAccessControlAttributes() *schema.Resource { func resourceAccessControlAttributesCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) instanceARN := d.Get("instance_arn").(string) input := &ssoadmin.CreateInstanceAccessControlAttributeConfigurationInput{ @@ -100,7 +103,7 @@ func resourceAccessControlAttributesCreate(ctx context.Context, d *schema.Resour func resourceAccessControlAttributesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) output, err := FindInstanceAttributeControlAttributesByARN(ctx, conn, d.Id()) @@ -126,7 +129,7 @@ func resourceAccessControlAttributesRead(ctx context.Context, d *schema.Resource func resourceAccessControlAttributesUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) input := &ssoadmin.UpdateInstanceAccessControlAttributeConfigurationInput{ InstanceArn: aws.String(d.Id()), @@ -146,7 +149,7 @@ func resourceAccessControlAttributesUpdate(ctx context.Context, d *schema.Resour func resourceAccessControlAttributesDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) _, err := conn.DeleteInstanceAccessControlAttributeConfigurationWithContext(ctx, &ssoadmin.DeleteInstanceAccessControlAttributeConfigurationInput{ InstanceArn: aws.String(d.Id()), diff --git a/internal/service/ssoadmin/instance_access_control_attributes_test.go b/internal/service/ssoadmin/instance_access_control_attributes_test.go index 3db144aeed5..9abc8f2cf11 100644 --- a/internal/service/ssoadmin/instance_access_control_attributes_test.go +++ b/internal/service/ssoadmin/instance_access_control_attributes_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -138,7 +141,7 @@ func testAccInstanceAccessControlAttributes_update(t *testing.T) { func testAccCheckInstanceAccessControlAttributesDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_instance_access_control_attributes" { @@ -173,7 +176,7 @@ func testAccCheckInstanceAccessControlAttributesExists(ctx context.Context, reso return fmt.Errorf("No SSO Instance Access Control Attributes ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) _, err := tfssoadmin.FindInstanceAttributeControlAttributesByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/ssoadmin/instances_data_source.go b/internal/service/ssoadmin/instances_data_source.go index 9dc1f1cb900..7be220da16a 100644 --- a/internal/service/ssoadmin/instances_data_source.go +++ b/internal/service/ssoadmin/instances_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -33,7 +36,7 @@ func DataSourceInstances() *schema.Resource { func dataSourceInstancesRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) output, err := findInstanceMetadatas(ctx, conn) diff --git a/internal/service/ssoadmin/instances_data_source_test.go b/internal/service/ssoadmin/instances_data_source_test.go index 41eeec1112f..49e32554f0a 100644 --- a/internal/service/ssoadmin/instances_data_source_test.go +++ b/internal/service/ssoadmin/instances_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -12,7 +15,7 @@ import ( ) func testAccPreCheckInstances(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) var instances []*ssoadmin.InstanceMetadata err := conn.ListInstancesPagesWithContext(ctx, &ssoadmin.ListInstancesInput{}, func(page *ssoadmin.ListInstancesOutput, lastPage bool) bool { diff --git a/internal/service/ssoadmin/managed_policy_attachment.go b/internal/service/ssoadmin/managed_policy_attachment.go index 4ce5c53021e..ed0fb988f43 100644 --- a/internal/service/ssoadmin/managed_policy_attachment.go +++ b/internal/service/ssoadmin/managed_policy_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -57,7 +60,7 @@ func ResourceManagedPolicyAttachment() *schema.Resource { func resourceManagedPolicyAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) instanceArn := d.Get("instance_arn").(string) managedPolicyArn := d.Get("managed_policy_arn").(string) @@ -87,7 +90,7 @@ func resourceManagedPolicyAttachmentCreate(ctx context.Context, d *schema.Resour func resourceManagedPolicyAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) managedPolicyArn, permissionSetArn, instanceArn, err := ParseManagedPolicyAttachmentID(d.Id()) if err != nil { @@ -122,7 +125,7 @@ func resourceManagedPolicyAttachmentRead(ctx context.Context, d *schema.Resource func resourceManagedPolicyAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) managedPolicyArn, permissionSetArn, instanceArn, err := ParseManagedPolicyAttachmentID(d.Id()) if err != nil { @@ -155,7 +158,7 @@ func resourceManagedPolicyAttachmentDelete(ctx context.Context, d *schema.Resour func ParseManagedPolicyAttachmentID(id string) (string, string, string, error) { idParts := strings.Split(id, ",") if len(idParts) != 3 || idParts[0] == "" || idParts[1] == "" || idParts[2] == "" { - return "", "", "", fmt.Errorf("error parsing ID: expected MANAGED_POLICY_ARN,PERMISSION_SET_ARN,INSTANCE_ARN") + return "", "", "", fmt.Errorf("parsing ID: expected MANAGED_POLICY_ARN,PERMISSION_SET_ARN,INSTANCE_ARN") } return idParts[0], idParts[1], idParts[2], nil } diff 
--git a/internal/service/ssoadmin/managed_policy_attachment_test.go b/internal/service/ssoadmin/managed_policy_attachment_test.go index e0054427c77..ebe20a3ae8f 100644 --- a/internal/service/ssoadmin/managed_policy_attachment_test.go +++ b/internal/service/ssoadmin/managed_policy_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -175,7 +178,7 @@ func TestAccSSOAdminManagedPolicyAttachment_multipleManagedPolicies(t *testing.T func testAccCheckManagedPolicyAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_managed_policy_attachment" { @@ -225,7 +228,7 @@ func testAccCheckManagedPolicyAttachmentExists(ctx context.Context, resourceName return fmt.Errorf("error parsing SSO Managed Policy Attachment ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) policy, err := tfssoadmin.FindManagedPolicy(ctx, conn, managedPolicyArn, permissionSetArn, instanceArn) diff --git a/internal/service/ssoadmin/permission_set.go b/internal/service/ssoadmin/permission_set.go index fe7dafa3b5f..555f8eeb619 100644 --- a/internal/service/ssoadmin/permission_set.go +++ b/internal/service/ssoadmin/permission_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -92,14 +95,14 @@ func ResourcePermissionSet() *schema.Resource { func resourcePermissionSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) instanceARN := d.Get("instance_arn").(string) name := d.Get("name").(string) input := &ssoadmin.CreatePermissionSetInput{ InstanceArn: aws.String(instanceARN), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -127,7 +130,7 @@ func resourcePermissionSetCreate(ctx context.Context, d *schema.ResourceData, me func resourcePermissionSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) arn, instanceARN, err := ParseResourceID(d.Id()) @@ -163,20 +166,20 @@ func resourcePermissionSetRead(ctx context.Context, d *schema.ResourceData, meta d.Set("relay_state", permissionSet.RelayState) d.Set("session_duration", permissionSet.SessionDuration) - tags, err := ListTags(ctx, conn, arn, instanceARN) + tags, err := listTags(ctx, conn, arn, instanceARN) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for SSO Permission Set (%s): %s", arn, err) } - SetTagsOut(ctx, Tags(tags)) + setTagsOut(ctx, Tags(tags)) return diags } func resourcePermissionSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) arn, instanceARN, err := ParseResourceID(d.Id()) @@ -215,7 +218,7 @@ func resourcePermissionSetUpdate(ctx context.Context, d *schema.ResourceData, me if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if 
err := UpdateTags(ctx, conn, arn, instanceARN, o, n); err != nil { + if err := updateTags(ctx, conn, arn, instanceARN, o, n); err != nil { return sdkdiag.AppendErrorf(diags, "updating tags: %s", err) } } @@ -230,7 +233,7 @@ func resourcePermissionSetUpdate(ctx context.Context, d *schema.ResourceData, me func resourcePermissionSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) arn, instanceARN, err := ParseResourceID(d.Id()) @@ -293,16 +296,16 @@ func provisionPermissionSet(ctx context.Context, conn *ssoadmin.SSOAdmin, arn, i } if err != nil { - return fmt.Errorf("error provisioning SSO Permission Set (%s): %w", arn, err) + return fmt.Errorf("provisioning SSO Permission Set (%s): %w", arn, err) } if output == nil || output.PermissionSetProvisioningStatus == nil { - return fmt.Errorf("error provisioning SSO Permission Set (%s): empty output", arn) + return fmt.Errorf("provisioning SSO Permission Set (%s): empty output", arn) } _, err = waitPermissionSetProvisioned(ctx, conn, instanceArn, aws.StringValue(output.PermissionSetProvisioningStatus.RequestId)) if err != nil { - return fmt.Errorf("error waiting for SSO Permission Set (%s) to provision: %w", arn, err) + return fmt.Errorf("waiting for SSO Permission Set (%s) to provision: %w", arn, err) } return nil diff --git a/internal/service/ssoadmin/permission_set_data_source.go b/internal/service/ssoadmin/permission_set_data_source.go index 348f08195e4..8e763e0d2bb 100644 --- a/internal/service/ssoadmin/permission_set_data_source.go +++ b/internal/service/ssoadmin/permission_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -74,7 +77,7 @@ func DataSourcePermissionSet() *schema.Resource { func dataSourcePermissionSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig instanceArn := d.Get("instance_arn").(string) @@ -163,7 +166,7 @@ func dataSourcePermissionSetRead(ctx context.Context, d *schema.ResourceData, me d.Set("session_duration", permissionSet.SessionDuration) d.Set("relay_state", permissionSet.RelayState) - tags, err := ListTags(ctx, conn, arn, instanceArn) + tags, err := listTags(ctx, conn, arn, instanceArn) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags for SSO Permission Set (%s): %s", arn, err) } diff --git a/internal/service/ssoadmin/permission_set_data_source_test.go b/internal/service/ssoadmin/permission_set_data_source_test.go index 21dfe85a192..19f04bbee19 100644 --- a/internal/service/ssoadmin/permission_set_data_source_test.go +++ b/internal/service/ssoadmin/permission_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( diff --git a/internal/service/ssoadmin/permission_set_inline_policy.go b/internal/service/ssoadmin/permission_set_inline_policy.go index eb99c564735..482bad045bd 100644 --- a/internal/service/ssoadmin/permission_set_inline_policy.go +++ b/internal/service/ssoadmin/permission_set_inline_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -57,7 +60,7 @@ func ResourcePermissionSetInlinePolicy() *schema.Resource { func resourcePermissionSetInlinePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) instanceArn := d.Get("instance_arn").(string) permissionSetArn := d.Get("permission_set_arn").(string) @@ -91,7 +94,7 @@ func resourcePermissionSetInlinePolicyPut(ctx context.Context, d *schema.Resourc func resourcePermissionSetInlinePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) permissionSetArn, instanceArn, err := ParseResourceID(d.Id()) if err != nil { @@ -135,7 +138,7 @@ func resourcePermissionSetInlinePolicyRead(ctx context.Context, d *schema.Resour func resourcePermissionSetInlinePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) permissionSetArn, instanceArn, err := ParseResourceID(d.Id()) if err != nil { diff --git a/internal/service/ssoadmin/permission_set_inline_policy_test.go b/internal/service/ssoadmin/permission_set_inline_policy_test.go index 31accf8a37d..fb5736ef944 100644 --- a/internal/service/ssoadmin/permission_set_inline_policy_test.go +++ b/internal/service/ssoadmin/permission_set_inline_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -130,7 +133,7 @@ func TestAccSSOAdminPermissionSetInlinePolicy_Disappears_permissionSet(t *testin func testAccCheckPermissionSetInlinePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_permission_set_inline_policy" { @@ -190,7 +193,7 @@ func testAccCheckPermissionSetInlinePolicyExists(ctx context.Context, resourceNa return fmt.Errorf("error parsing SSO Permission Set Inline Policy ID (%s): %w", rs.Primary.ID, err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) input := &ssoadmin.GetInlinePolicyForPermissionSetInput{ InstanceArn: aws.String(instanceArn), diff --git a/internal/service/ssoadmin/permission_set_test.go b/internal/service/ssoadmin/permission_set_test.go index f0ee2d41ac6..ff9cfdca72e 100644 --- a/internal/service/ssoadmin/permission_set_test.go +++ b/internal/service/ssoadmin/permission_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -277,7 +280,7 @@ func TestAccSSOAdminPermissionSet_mixedPolicyAttachments(t *testing.T) { func testAccCheckPermissionSetDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_permission_set" { @@ -323,7 +326,7 @@ func testAccCheckSOAdminPermissionSetExists(ctx context.Context, resourceName st return fmt.Errorf("Resource (%s) ID not set", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) arn, instanceArn, err := tfssoadmin.ParseResourceID(rs.Primary.ID) diff --git a/internal/service/ssoadmin/permissions_boundary_attachment.go b/internal/service/ssoadmin/permissions_boundary_attachment.go index d2f2f8776aa..5fd271c538d 100644 --- a/internal/service/ssoadmin/permissions_boundary_attachment.go +++ b/internal/service/ssoadmin/permissions_boundary_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( @@ -93,7 +96,7 @@ func ResourcePermissionsBoundaryAttachment() *schema.Resource { func resourcePermissionsBoundaryAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) tfMap := d.Get("permissions_boundary").([]interface{})[0].(map[string]interface{}) instanceARN := d.Get("instance_arn").(string) @@ -126,7 +129,7 @@ func resourcePermissionsBoundaryAttachmentCreate(ctx context.Context, d *schema. 
func resourcePermissionsBoundaryAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) permissionSetARN, instanceARN, err := PermissionsBoundaryAttachmentParseResourceID(d.Id()) @@ -157,7 +160,7 @@ func resourcePermissionsBoundaryAttachmentRead(ctx context.Context, d *schema.Re func resourcePermissionsBoundaryAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SSOAdminConn() + conn := meta.(*conns.AWSClient).SSOAdminConn(ctx) permissionSetARN, instanceARN, err := PermissionsBoundaryAttachmentParseResourceID(d.Id()) diff --git a/internal/service/ssoadmin/permissions_boundary_attachment_test.go b/internal/service/ssoadmin/permissions_boundary_attachment_test.go index fec98eea2e8..cd0eee3c098 100644 --- a/internal/service/ssoadmin/permissions_boundary_attachment_test.go +++ b/internal/service/ssoadmin/permissions_boundary_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin_test import ( @@ -159,7 +162,7 @@ func TestAccSSOAdminPermissionsBoundaryAttachment_managedPolicyAndCustomerManage func testAccCheckPermissionsBoundaryAttachmentDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_ssoadmin_permissions_boundary_attachment" { @@ -206,7 +209,7 @@ func testAccCheckPermissionsBoundaryAttachmentExists(ctx context.Context, n stri return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SSOAdminConn(ctx) _, err = tfssoadmin.FindPermissionsBoundary(ctx, conn, permissionSetARN, instanceARN) diff --git a/internal/service/ssoadmin/service_package.go b/internal/service/ssoadmin/service_package.go new file mode 100644 index 00000000000..2863eefd692 --- /dev/null +++ b/internal/service/ssoadmin/service_package.go @@ -0,0 +1,27 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package ssoadmin + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + ssoadmin_sdkv1 "github.com/aws/aws-sdk-go/service/ssoadmin" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *ssoadmin_sdkv1.SSOAdmin) (*ssoadmin_sdkv1.SSOAdmin, error) { + // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/19215. 
+ conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if r.Operation.Name == "AttachManagedPolicyToPermissionSet" || r.Operation.Name == "DetachManagedPolicyFromPermissionSet" { + if tfawserr.ErrCodeEquals(r.Error, ssoadmin_sdkv1.ErrCodeConflictException) { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} diff --git a/internal/service/ssoadmin/service_package_gen.go b/internal/service/ssoadmin/service_package_gen.go index 079834abab5..449c444a0ef 100644 --- a/internal/service/ssoadmin/service_package_gen.go +++ b/internal/service/ssoadmin/service_package_gen.go @@ -5,6 +5,10 @@ package ssoadmin import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + ssoadmin_sdkv1 "github.com/aws/aws-sdk-go/service/ssoadmin" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -71,4 +75,13 @@ func (p *servicePackage) ServicePackageName() string { return names.SSOAdmin } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*ssoadmin_sdkv1.SSOAdmin, error) { + sess := config["session"].(*session_sdkv1.Session) + + return ssoadmin_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/ssoadmin/status.go b/internal/service/ssoadmin/status.go index 4e1b5ac2d19..d19ab879479 100644 --- a/internal/service/ssoadmin/status.go +++ b/internal/service/ssoadmin/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( diff --git a/internal/service/ssoadmin/sweep.go b/internal/service/ssoadmin/sweep.go index 317674a695a..db0d3c19880 100644 --- a/internal/service/ssoadmin/sweep.go +++ b/internal/service/ssoadmin/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -6,15 +9,15 @@ package ssoadmin import ( "fmt" "log" + "regexp" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ssoadmin" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk" ) func init() { @@ -34,23 +37,25 @@ func sweepAccountAssignments(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).SSOAdminConn() + conn := client.SSOAdminConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error + accessDenied := regexp.MustCompile(`AccessDeniedException: .+ is not authorized to perform:`) + // Need to Read the SSO Instance first; assumes the first instance returned // is where the permission sets exist as AWS SSO currently supports only 1 instance ds := DataSourceInstances() dsData := ds.Data(nil) - err = sweep.ReadResource(ctx, ds, dsData, client) + err = sdk.ReadResource(ctx, ds, dsData, client) - if tfawserr.ErrCodeContains(err, "AccessDenied") { + if err != nil && accessDenied.MatchString(err.Error()) { log.Printf("[WARN] Skipping SSO Account Assignment
sweep for %s: %s", region, err) return nil } @@ -80,7 +85,7 @@ func sweepAccountAssignments(region string) error { permissionSetArn := aws.StringValue(permissionSet) input := &ssoadmin.ListAccountAssignmentsInput{ - AccountId: aws.String(client.(*conns.AWSClient).AccountID), + AccountId: aws.String(client.AccountID), InstanceArn: aws.String(instanceArn), PermissionSetArn: permissionSet, } @@ -130,7 +135,7 @@ func sweepAccountAssignments(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error retrieving SSO Permission Sets for Account Assignment sweep: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping SSO Account Assignments: %w", err)) } @@ -139,23 +144,25 @@ func sweepPermissionSets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).SSOAdminConn() + conn := client.SSOAdminConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error + accessDenied := regexp.MustCompile(`AccessDeniedException: .+ is not authorized to perform:`) + // Need to Read the SSO Instance first; assumes the first instance returned // is where the permission sets exist as AWS SSO currently supports only 1 instance ds := DataSourceInstances() dsData := ds.Data(nil) - err = sdk.ReadResource(ctx, ds, dsData, client) - if tfawserr.ErrCodeContains(err, "AccessDenied") { + err = sdk.ReadResource(ctx, ds, dsData, client) + if err != nil && accessDenied.MatchString(err.Error()) { log.Printf("[WARN] Skipping SSO Permission Set sweep for %s: %s", region, err) return nil } @@ -202,7 +209,7 @@ func
sweepPermissionSets(region string) error { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error retrieving SSO Permission Sets: %w", err)) } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping SSO Permission Sets: %w", err)) } diff --git a/internal/service/ssoadmin/tags_gen.go b/internal/service/ssoadmin/tags_gen.go index f388688ffa1..e7835f9ecbf 100644 --- a/internal/service/ssoadmin/tags_gen.go +++ b/internal/service/ssoadmin/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists ssoadmin service tags. +// listTags lists ssoadmin service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn ssoadminiface.SSOAdminAPI, identifier, resourceType string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn ssoadminiface.SSOAdminAPI, identifier, resourceType string) (tftags.KeyValueTags, error) { input := &ssoadmin.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), InstanceArn: aws.String(resourceType), @@ -35,7 +35,7 @@ func ListTags(ctx context.Context, conn ssoadminiface.SSOAdminAPI, identifier, r // ListTags lists ssoadmin service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier, resourceType string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SSOAdminConn(), identifier, resourceType) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SSOAdminConn(ctx), identifier, resourceType) if err != nil { return err @@ -77,9 +77,9 @@ func KeyValueTags(ctx context.Context, tags []*ssoadmin.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns ssoadmin service tags from Context. +// getTagsIn returns ssoadmin service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*ssoadmin.Tag { +func getTagsIn(ctx context.Context) []*ssoadmin.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -89,17 +89,17 @@ func GetTagsIn(ctx context.Context) []*ssoadmin.Tag { return nil } -// SetTagsOut sets ssoadmin service tags in Context. -func SetTagsOut(ctx context.Context, tags []*ssoadmin.Tag) { +// setTagsOut sets ssoadmin service tags in Context. +func setTagsOut(ctx context.Context, tags []*ssoadmin.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates ssoadmin service tags. +// updateTags updates ssoadmin service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn ssoadminiface.SSOAdminAPI, identifier, resourceType string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn ssoadminiface.SSOAdminAPI, identifier, resourceType string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -141,5 +141,5 @@ func UpdateTags(ctx context.Context, conn ssoadminiface.SSOAdminAPI, identifier, // UpdateTags updates ssoadmin service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier, resourceType string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SSOAdminConn(), identifier, resourceType, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SSOAdminConn(ctx), identifier, resourceType, oldTags, newTags) } diff --git a/internal/service/ssoadmin/wait.go b/internal/service/ssoadmin/wait.go index e7e711e4ae7..b9d8c34cf16 100644 --- a/internal/service/ssoadmin/wait.go +++ b/internal/service/ssoadmin/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package ssoadmin import ( diff --git a/internal/service/storagegateway/cache.go b/internal/service/storagegateway/cache.go index 17a27e4fe32..24fa56c78e6 100644 --- a/internal/service/storagegateway/cache.go +++ b/internal/service/storagegateway/cache.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -44,7 +47,7 @@ func ResourceCache() *schema.Resource { func resourceCacheCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) diskID := d.Get("disk_id").(string) gatewayARN := d.Get("gateway_arn").(string) @@ -94,7 +97,7 @@ func resourceCacheCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceCacheRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN, diskID, err := DecodeCacheID(d.Id()) if err != nil { diff --git a/internal/service/storagegateway/cache_test.go b/internal/service/storagegateway/cache_test.go index 60269ee3f52..c2367a24b24 100644 --- a/internal/service/storagegateway/cache_test.go +++ b/internal/service/storagegateway/cache_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -142,7 +145,7 @@ func testAccCheckCacheExists(ctx context.Context, resourceName string) resource. return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN, diskID, err := tfstoragegateway.DecodeCacheID(rs.Primary.ID) if err != nil { diff --git a/internal/service/storagegateway/cached_iscsi_volume.go b/internal/service/storagegateway/cached_iscsi_volume.go index b64c24a6851..0e3bbd7751d 100644 --- a/internal/service/storagegateway/cached_iscsi_volume.go +++ b/internal/service/storagegateway/cached_iscsi_volume.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -119,7 +122,7 @@ func ResourceCachediSCSIVolume() *schema.Resource { func resourceCachediSCSIVolumeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.CreateCachediSCSIVolumeInput{ ClientToken: aws.String(id.UniqueId()), @@ -127,7 +130,7 @@ func resourceCachediSCSIVolumeCreate(ctx context.Context, d *schema.ResourceData NetworkInterfaceId: aws.String(d.Get("network_interface_id").(string)), TargetName: aws.String(d.Get("target_name").(string)), VolumeSizeInBytes: aws.Int64(int64(d.Get("volume_size_in_bytes").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("snapshot_id"); ok { @@ -159,7 +162,7 @@ func resourceCachediSCSIVolumeCreate(ctx context.Context, d *schema.ResourceData func resourceCachediSCSIVolumeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DescribeCachediSCSIVolumesInput{ VolumeARNs: []*string{aws.String(d.Id())}, @@ -228,7 +231,7 @@ func resourceCachediSCSIVolumeUpdate(ctx context.Context, d *schema.ResourceData func resourceCachediSCSIVolumeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DeleteVolumeInput{ VolumeARN: aws.String(d.Id()), diff --git a/internal/service/storagegateway/cached_iscsi_volume_test.go b/internal/service/storagegateway/cached_iscsi_volume_test.go index 271a1b45bdb..e533671c7be 100644 --- 
a/internal/service/storagegateway/cached_iscsi_volume_test.go +++ b/internal/service/storagegateway/cached_iscsi_volume_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -304,7 +307,7 @@ func testAccCheckCachediSCSIVolumeExists(ctx context.Context, resourceName strin return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DescribeCachediSCSIVolumesInput{ VolumeARNs: []*string{aws.String(rs.Primary.ID)}, @@ -328,7 +331,7 @@ func testAccCheckCachediSCSIVolumeExists(ctx context.Context, resourceName strin func testAccCheckCachediSCSIVolumeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storagegateway_cached_iscsi_volume" { diff --git a/internal/service/storagegateway/enum.go b/internal/service/storagegateway/enum.go index a3fc16f57a5..de7e6d2d97d 100644 --- a/internal/service/storagegateway/enum.go +++ b/internal/service/storagegateway/enum.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/errors.go b/internal/service/storagegateway/errors.go index 33bc7e35d50..39fc3395971 100644 --- a/internal/service/storagegateway/errors.go +++ b/internal/service/storagegateway/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/file_system_association.go b/internal/service/storagegateway/file_system_association.go index 387994e940f..8a882a15cf4 100644 --- a/internal/service/storagegateway/file_system_association.go +++ b/internal/service/storagegateway/file_system_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -99,7 +102,7 @@ func ResourceFileSystemAssociation() *schema.Resource { func resourceFileSystemAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN := d.Get("gateway_arn").(string) input := &storagegateway.AssociateFileSystemInput{ @@ -107,7 +110,7 @@ func resourceFileSystemAssociationCreate(ctx context.Context, d *schema.Resource GatewayARN: aws.String(gatewayARN), LocationARN: aws.String(d.Get("location_arn").(string)), Password: aws.String(d.Get("password").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UserName: aws.String(d.Get("username").(string)), } @@ -136,7 +139,7 @@ func resourceFileSystemAssociationCreate(ctx context.Context, d *schema.Resource func resourceFileSystemAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) filesystem, err := FindFileSystemAssociationByARN(ctx, conn, d.Id()) @@ -159,14 +162,14 @@ func resourceFileSystemAssociationRead(ctx context.Context, d *schema.ResourceDa return sdkdiag.AppendErrorf(diags, "setting cache_attributes: %s", err) } - SetTagsOut(ctx, filesystem.Tags) + setTagsOut(ctx, filesystem.Tags) return diags } func 
resourceFileSystemAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) if d.HasChangesExcept("tags_all") { input := &storagegateway.UpdateFileSystemAssociationInput{ @@ -196,7 +199,7 @@ func resourceFileSystemAssociationUpdate(ctx context.Context, d *schema.Resource func resourceFileSystemAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DisassociateFileSystemInput{ FileSystemAssociationARN: aws.String(d.Id()), diff --git a/internal/service/storagegateway/file_system_association_test.go b/internal/service/storagegateway/file_system_association_test.go index d0d35fe81dd..56b5411f37f 100644 --- a/internal/service/storagegateway/file_system_association_test.go +++ b/internal/service/storagegateway/file_system_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -273,7 +276,7 @@ func TestAccStorageGatewayFileSystemAssociation_Disappears_fsxFileSystem(t *test func testAccCheckFileSystemAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storagegateway_file_system_association" { @@ -304,7 +307,7 @@ func testAccCheckFileSystemAssociationExists(ctx context.Context, resourceName s return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) output, err := tfstoragegateway.FindFileSystemAssociationByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/storagegateway/find.go b/internal/service/storagegateway/find.go index 24959947767..24c86c5252b 100644 --- a/internal/service/storagegateway/find.go +++ b/internal/service/storagegateway/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/gateway.go b/internal/service/storagegateway/gateway.go index 388ed834f08..80ff3a5f8b9 100644 --- a/internal/service/storagegateway/gateway.go +++ b/internal/service/storagegateway/gateway.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -21,10 +24,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" - "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/types/nullable" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -273,7 +276,7 @@ func ResourceGateway() *schema.Resource { func resourceGatewayCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) region := meta.(*conns.AWSClient).Region activationKey := d.Get("activation_key").(string) @@ -353,7 +356,7 @@ func resourceGatewayCreate(ctx context.Context, d *schema.ResourceData, meta int GatewayName: aws.String(d.Get("gateway_name").(string)), GatewayTimezone: aws.String(d.Get("gateway_timezone").(string)), GatewayType: aws.String(d.Get("gateway_type").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("medium_changer_type"); ok { @@ -484,7 +487,7 @@ func resourceGatewayCreate(ctx context.Context, d *schema.ResourceData, meta int func resourceGatewayRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) output, err := FindGatewayByARN(ctx, conn, d.Id()) @@ -498,7 +501,7 @@ func resourceGatewayRead(ctx context.Context, d 
*schema.ResourceData, meta inter return sdkdiag.AppendErrorf(diags, "reading Storage Gateway Gateway (%s): %s", d.Id(), err) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) smbSettingsInput := &storagegateway.DescribeSMBSettingsInput{ GatewayARN: aws.String(d.Id()), @@ -643,7 +646,7 @@ func resourceGatewayRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) if d.HasChanges("gateway_name", "gateway_timezone", "cloudwatch_log_group_arn") { input := &storagegateway.UpdateGatewayInformationInput{ @@ -785,7 +788,7 @@ func resourceGatewayUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceGatewayDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) log.Printf("[DEBUG] Deleting Storage Gateway Gateway: %s", d.Id()) _, err := conn.DeleteGatewayWithContext(ctx, &storagegateway.DeleteGatewayInput{ diff --git a/internal/service/storagegateway/gateway_test.go b/internal/service/storagegateway/gateway_test.go index f78f183140c..afbb36995c7 100644 --- a/internal/service/storagegateway/gateway_test.go +++ b/internal/service/storagegateway/gateway_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -860,7 +863,7 @@ func TestAccStorageGatewayGateway_maintenanceStartTime(t *testing.T) { func testAccCheckGatewayDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storagegateway_gateway" { @@ -891,7 +894,7 @@ func testAccCheckGatewayExists(ctx context.Context, resourceName string, gateway return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) output, err := tfstoragegateway.FindGatewayByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/storagegateway/generate.go b/internal/service/storagegateway/generate.go index 3d5e1cb268f..81f7facca18 100644 --- a/internal/service/storagegateway/generate.go +++ b/internal/service/storagegateway/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagOp=AddTagsToResource -TagInIDElem=ResourceARN -UntagOp=RemoveTagsFromResource -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package storagegateway diff --git a/internal/service/storagegateway/local_disk_data_source.go b/internal/service/storagegateway/local_disk_data_source.go index 4b86587b056..5491285a5a4 100644 --- a/internal/service/storagegateway/local_disk_data_source.go +++ b/internal/service/storagegateway/local_disk_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -44,7 +47,7 @@ func DataSourceLocalDisk() *schema.Resource { func dataSourceLocalDiskRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.ListLocalDisksInput{ GatewayARN: aws.String(d.Get("gateway_arn").(string)), diff --git a/internal/service/storagegateway/local_disk_data_source_test.go b/internal/service/storagegateway/local_disk_data_source_test.go index af778ee07a6..a4fbe222b24 100644 --- a/internal/service/storagegateway/local_disk_data_source_test.go +++ b/internal/service/storagegateway/local_disk_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( diff --git a/internal/service/storagegateway/nfs_file_share.go b/internal/service/storagegateway/nfs_file_share.go index 8255987137e..c2905f6a3fb 100644 --- a/internal/service/storagegateway/nfs_file_share.go +++ b/internal/service/storagegateway/nfs_file_share.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -216,7 +219,7 @@ func ResourceNFSFileShare() *schema.Resource { func resourceNFSFileShareCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) fileShareDefaults, err := expandNFSFileShareDefaults(d.Get("nfs_file_share_defaults").([]interface{})) @@ -238,7 +241,7 @@ func resourceNFSFileShareCreate(ctx context.Context, d *schema.ResourceData, met RequesterPays: aws.Bool(d.Get("requester_pays").(bool)), Role: aws.String(d.Get("role_arn").(string)), Squash: aws.String(d.Get("squash").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("audit_destination_arn"); ok { @@ -287,7 +290,7 @@ func resourceNFSFileShareCreate(ctx context.Context, d *schema.ResourceData, met func resourceNFSFileShareRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) fileshare, err := FindNFSFileShareByARN(ctx, conn, d.Id()) @@ -330,14 +333,14 @@ func resourceNFSFileShareRead(ctx context.Context, d *schema.ResourceData, meta d.Set("squash", fileshare.Squash) d.Set("vpc_endpoint_dns_name", fileshare.VPCEndpointDNSName) - SetTagsOut(ctx, fileshare.Tags) + setTagsOut(ctx, fileshare.Tags) return diags } func resourceNFSFileShareUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) if d.HasChangesExcept("tags_all", "tags") { fileShareDefaults, err := expandNFSFileShareDefaults(d.Get("nfs_file_share_defaults").([]interface{})) @@ -396,7 +399,7 @@ func resourceNFSFileShareUpdate(ctx 
context.Context, d *schema.ResourceData, met func resourceNFSFileShareDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) log.Printf("[DEBUG] Deleting Storage Gateway NFS File Share: %s", d.Id()) _, err := conn.DeleteFileShareWithContext(ctx, &storagegateway.DeleteFileShareInput{ diff --git a/internal/service/storagegateway/nfs_file_share_test.go b/internal/service/storagegateway/nfs_file_share_test.go index f2bd908eaa6..d3def74b450 100644 --- a/internal/service/storagegateway/nfs_file_share_test.go +++ b/internal/service/storagegateway/nfs_file_share_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -681,7 +684,7 @@ func TestAccStorageGatewayNFSFileShare_disappears(t *testing.T) { func testAccCheckNFSFileShareDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storagegateway_nfs_file_share" { @@ -712,7 +715,7 @@ func testAccCheckNFSFileShareExists(ctx context.Context, resourceName string, nf return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) output, err := tfstoragegateway.FindNFSFileShareByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/storagegateway/service_package.go b/internal/service/storagegateway/service_package.go new file mode 100644 index 00000000000..94d404b25ea --- /dev/null +++ b/internal/service/storagegateway/service_package.go @@ -0,0 +1,25 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + +package storagegateway + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + storagegateway_sdkv1 "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) CustomizeConn(ctx context.Context, conn *storagegateway_sdkv1.StorageGateway) (*storagegateway_sdkv1.StorageGateway, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + // InvalidGatewayRequestException: The specified gateway proxy network connection is busy. + if tfawserr.ErrMessageContains(r.Error, storagegateway_sdkv1.ErrCodeInvalidGatewayRequestException, "The specified gateway proxy network connection is busy") { + r.Retryable = aws_sdkv1.Bool(true) + } + }) + + return conn, nil +} diff --git a/internal/service/storagegateway/service_package_gen.go b/internal/service/storagegateway/service_package_gen.go index 9b517a38731..0cf1748bd0d 100644 --- a/internal/service/storagegateway/service_package_gen.go +++ b/internal/service/storagegateway/service_package_gen.go @@ -5,6 +5,10 @@ package storagegateway import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + storagegateway_sdkv1 "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -105,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string { return names.StorageGateway } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*storagegateway_sdkv1.StorageGateway, error) { + sess := config["session"].(*session_sdkv1.Session) + + return storagegateway_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/storagegateway/smb_file_share.go b/internal/service/storagegateway/smb_file_share.go index 7405a2cfcb1..a47b5b6337f 100644 --- a/internal/service/storagegateway/smb_file_share.go +++ b/internal/service/storagegateway/smb_file_share.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -207,7 +210,7 @@ func ResourceSMBFileShare() *schema.Resource { func resourceSMBFileShareCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.CreateSMBFileShareInput{ AccessBasedEnumeration: aws.Bool(d.Get("access_based_enumeration").(bool)), @@ -220,7 +223,7 @@ func resourceSMBFileShareCreate(ctx context.Context, d *schema.ResourceData, met RequesterPays: aws.Bool(d.Get("requester_pays").(bool)), Role: aws.String(d.Get("role_arn").(string)), SMBACLEnabled: aws.Bool(d.Get("smb_acl_enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("admin_user_list"); ok && v.(*schema.Set).Len() > 0 { @@ -301,7 +304,7 @@ func resourceSMBFileShareCreate(ctx context.Context, d *schema.ResourceData, met func resourceSMBFileShareRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) fileshare, err := 
FindSMBFileShareByARN(ctx, conn, d.Id()) @@ -351,14 +354,14 @@ func resourceSMBFileShareRead(ctx context.Context, d *schema.ResourceData, meta d.Set("valid_user_list", aws.StringValueSlice(fileshare.ValidUserList)) d.Set("vpc_endpoint_dns_name", fileshare.VPCEndpointDNSName) - SetTagsOut(ctx, fileshare.Tags) + setTagsOut(ctx, fileshare.Tags) return diags } func resourceSMBFileShareUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &storagegateway.UpdateSMBFileShareInput{ @@ -402,6 +405,8 @@ func resourceSMBFileShareUpdate(ctx context.Context, d *schema.ResourceData, met // This value can only be set when KMSEncrypted is true. if d.HasChange("kms_key_arn") && d.Get("kms_encrypted").(bool) { input.KMSKey = aws.String(d.Get("kms_key_arn").(string)) + } else if d.Get("kms_encrypted").(bool) && d.Get("kms_key_arn").(string) != "" { + input.KMSKey = aws.String(d.Get("kms_key_arn").(string)) } if d.HasChange("notification_policy") { @@ -437,7 +442,7 @@ func resourceSMBFileShareUpdate(ctx context.Context, d *schema.ResourceData, met func resourceSMBFileShareDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) log.Printf("[DEBUG] Deleting Storage Gateway SMB File Share: %s", d.Id()) _, err := conn.DeleteFileShareWithContext(ctx, &storagegateway.DeleteFileShareInput{ diff --git a/internal/service/storagegateway/smb_file_share_test.go b/internal/service/storagegateway/smb_file_share_test.go index 0c55f090865..de2d6d0660c 100644 --- a/internal/service/storagegateway/smb_file_share_test.go +++ b/internal/service/storagegateway/smb_file_share_test.go @@ -1,3 +1,6 @@ +// Copyright 
(c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -237,6 +240,38 @@ func TestAccStorageGatewaySMBFileShare_defaultStorageClass(t *testing.T) { }) } +func TestAccStorageGatewaySMBFileShare_encryptedUpdate(t *testing.T) { + ctx := acctest.Context(t) + var smbFileShare storagegateway.SMBFileShareInfo + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_storagegateway_smb_file_share.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, storagegateway.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckSMBFileShareDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccSMBFileShareConfig_encryptedUpdate(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckSMBFileShareExists(ctx, resourceName, &smbFileShare), + resource.TestCheckResourceAttr(resourceName, "read_only", "false"), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "true"), + ), + }, + { + Config: testAccSMBFileShareConfig_encryptedUpdate(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckSMBFileShareExists(ctx, resourceName, &smbFileShare), + resource.TestCheckResourceAttr(resourceName, "read_only", "true"), + resource.TestCheckResourceAttr(resourceName, "kms_encrypted", "true"), + ), + }, + }, + }) +} + func TestAccStorageGatewaySMBFileShare_fileShareName(t *testing.T) { ctx := acctest.Context(t) var smbFileShare storagegateway.SMBFileShareInfo @@ -911,7 +946,7 @@ func TestAccStorageGatewaySMBFileShare_adminUserList(t *testing.T) { func testAccCheckSMBFileShareDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range 
s.RootModule().Resources { if rs.Type != "aws_storagegateway_smb_file_share" { @@ -942,7 +977,7 @@ func testAccCheckSMBFileShareExists(ctx context.Context, resourceName string, sm return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) output, err := tfstoragegateway.FindSMBFileShareByARN(ctx, conn, rs.Primary.ID) @@ -1117,6 +1152,26 @@ resource "aws_storagegateway_smb_file_share" "test" { `, defaultStorageClass)) } +func testAccSMBFileShareConfig_encryptedUpdate(rName string, readOnly bool) string { + return acctest.ConfigCompose(testAcc_SMBFileShare_GuestAccessBase(rName), fmt.Sprintf(` +resource "aws_kms_key" "test" { + deletion_window_in_days = 7 + description = "Terraform Acceptance Testing" +} + +resource "aws_storagegateway_smb_file_share" "test" { + # Use GuestAccess to simplify testing + authentication = "GuestAccess" + gateway_arn = aws_storagegateway_gateway.test.arn + kms_encrypted = true + kms_key_arn = aws_kms_key.test.arn + location_arn = aws_s3_bucket.test.arn + role_arn = aws_iam_role.test.arn + read_only = %[1]t +} +`, readOnly)) +} + func testAccSMBFileShareConfig_name(rName, fileShareName string) string { return acctest.ConfigCompose(testAcc_SMBFileShare_GuestAccessBase(rName), fmt.Sprintf(` resource "aws_storagegateway_smb_file_share" "test" { diff --git a/internal/service/storagegateway/status.go b/internal/service/storagegateway/status.go index 65dc49499c8..c2cd9a1b339 100644 --- a/internal/service/storagegateway/status.go +++ b/internal/service/storagegateway/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/stored_iscsi_volume.go b/internal/service/storagegateway/stored_iscsi_volume.go index b81a4b6423a..3a5d783ca86 100644 --- a/internal/service/storagegateway/stored_iscsi_volume.go +++ b/internal/service/storagegateway/stored_iscsi_volume.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -126,7 +129,7 @@ func ResourceStorediSCSIVolume() *schema.Resource { func resourceStorediSCSIVolumeCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.CreateStorediSCSIVolumeInput{ DiskId: aws.String(d.Get("disk_id").(string)), @@ -134,7 +137,7 @@ func resourceStorediSCSIVolumeCreate(ctx context.Context, d *schema.ResourceData NetworkInterfaceId: aws.String(d.Get("network_interface_id").(string)), TargetName: aws.String(d.Get("target_name").(string)), PreserveExistingData: aws.Bool(d.Get("preserve_existing_data").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("snapshot_id"); ok { @@ -168,7 +171,7 @@ func resourceStorediSCSIVolumeCreate(ctx context.Context, d *schema.ResourceData func resourceStorediSCSIVolumeRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DescribeStorediSCSIVolumesInput{ VolumeARNs: []*string{aws.String(d.Id())}, @@ -236,7 +239,7 @@ func resourceStorediSCSIVolumeUpdate(ctx context.Context, d *schema.ResourceData func resourceStorediSCSIVolumeDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DeleteVolumeInput{ VolumeARN: aws.String(d.Id()), diff --git a/internal/service/storagegateway/stored_iscsi_volume_test.go b/internal/service/storagegateway/stored_iscsi_volume_test.go index 889c5568b73..b3576ff59d6 100644 --- a/internal/service/storagegateway/stored_iscsi_volume_test.go +++ b/internal/service/storagegateway/stored_iscsi_volume_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -207,7 +210,7 @@ func testAccCheckStorediSCSIVolumeExists(ctx context.Context, resourceName strin return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DescribeStorediSCSIVolumesInput{ VolumeARNs: []*string{aws.String(rs.Primary.ID)}, @@ -231,7 +234,7 @@ func testAccCheckStorediSCSIVolumeExists(ctx context.Context, resourceName strin func testAccCheckStorediSCSIVolumeDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storagegateway_stored_iscsi_volume" { diff --git a/internal/service/storagegateway/sweep.go b/internal/service/storagegateway/sweep.go index 95335faec0f..55bfba0d973 100644 --- a/internal/service/storagegateway/sweep.go +++ b/internal/service/storagegateway/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/storagegateway" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -36,11 +38,11 @@ func init() { func sweepGateways(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).StorageGatewayConn() + conn := client.StorageGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.ListGatewaysPagesWithContext(ctx, &storagegateway.ListGatewaysInput{}, func(page *storagegateway.ListGatewaysOutput, lastPage bool) bool { @@ -69,7 +71,7 @@ func sweepGateways(region string) error { return fmt.Errorf("error listing Storage Gateway Gateways (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Storage Gateway Gateways (%s): %w", region, err) @@ -80,11 +82,11 @@ func sweepGateways(region string) error { func sweepTapePools(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).StorageGatewayConn() + conn := client.StorageGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.ListTapePoolsPagesWithContext(ctx, &storagegateway.ListTapePoolsInput{}, func(page *storagegateway.ListTapePoolsOutput, lastPage bool) bool { @@ -113,7 +115,7 @@ func sweepTapePools(region string) error { 
return fmt.Errorf("error listing Storage Gateway Tape Pools (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Storage Gateway Gateways (%s): %w", region, err) @@ -124,11 +126,11 @@ func sweepTapePools(region string) error { func sweepFileSystemAssociations(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).StorageGatewayConn() + conn := client.StorageGatewayConn(ctx) sweepResources := make([]sweep.Sweepable, 0) err = conn.ListFileSystemAssociationsPagesWithContext(ctx, &storagegateway.ListFileSystemAssociationsInput{}, func(page *storagegateway.ListFileSystemAssociationsOutput, lastPage bool) bool { @@ -157,7 +159,7 @@ func sweepFileSystemAssociations(region string) error { return fmt.Errorf("error listing Storage Gateway File System Associations (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Storage Gateway File System Associations (%s): %w", region, err) diff --git a/internal/service/storagegateway/tags_gen.go b/internal/service/storagegateway/tags_gen.go index 02efcbc0cf5..c39265634c2 100644 --- a/internal/service/storagegateway/tags_gen.go +++ b/internal/service/storagegateway/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists storagegateway service tags. +// listTags lists storagegateway service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn storagegatewayiface.StorageGatewayAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn storagegatewayiface.StorageGatewayAPI, identifier string) (tftags.KeyValueTags, error) { input := &storagegateway.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn storagegatewayiface.StorageGatewayAPI, i // ListTags lists storagegateway service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).StorageGatewayConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).StorageGatewayConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*storagegateway.Tag) tftags.KeyVal return tftags.New(ctx, m) } -// GetTagsIn returns storagegateway service tags from Context. +// getTagsIn returns storagegateway service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*storagegateway.Tag { +func getTagsIn(ctx context.Context) []*storagegateway.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*storagegateway.Tag { return nil } -// SetTagsOut sets storagegateway service tags in Context. -func SetTagsOut(ctx context.Context, tags []*storagegateway.Tag) { +// setTagsOut sets storagegateway service tags in Context. +func setTagsOut(ctx context.Context, tags []*storagegateway.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates storagegateway service tags. +// updateTags updates storagegateway service tags. 
// The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn storagegatewayiface.StorageGatewayAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn storagegatewayiface.StorageGatewayAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn storagegatewayiface.StorageGatewayAPI, // UpdateTags updates storagegateway service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).StorageGatewayConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).StorageGatewayConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/storagegateway/tape_pool.go b/internal/service/storagegateway/tape_pool.go index e007132362d..0c8052dde99 100644 --- a/internal/service/storagegateway/tape_pool.go +++ b/internal/service/storagegateway/tape_pool.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -69,14 +72,14 @@ func ResourceTapePool() *schema.Resource { func resourceTapePoolCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.CreateTapePoolInput{ PoolName: aws.String(d.Get("pool_name").(string)), StorageClass: aws.String(d.Get("storage_class").(string)), RetentionLockType: aws.String(d.Get("retention_lock_type").(string)), RetentionLockTimeInDays: aws.Int64(int64(d.Get("retention_lock_time_in_days").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } log.Printf("[DEBUG] Creating Storage Gateway Tape Pool: %s", input) @@ -92,7 +95,7 @@ func resourceTapePoolCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceTapePoolRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.ListTapePoolsInput{ PoolARNs: []*string{aws.String(d.Id())}, @@ -133,7 +136,7 @@ func resourceTapePoolUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceTapePoolDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.DeleteTapePoolInput{ PoolARN: aws.String(d.Id()), diff --git a/internal/service/storagegateway/tape_pool_test.go b/internal/service/storagegateway/tape_pool_test.go index 6deee3d1f5f..273bfdd23da 100644 --- a/internal/service/storagegateway/tape_pool_test.go +++ b/internal/service/storagegateway/tape_pool_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -157,7 +160,7 @@ func testAccCheckTapePoolExists(ctx context.Context, resourceName string, TapePo return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.ListTapePoolsInput{ PoolARNs: []*string{aws.String(rs.Primary.ID)}, @@ -181,7 +184,7 @@ func testAccCheckTapePoolExists(ctx context.Context, resourceName string, TapePo func testAccCheckTapePoolDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_storagegateway_tape_pool" { diff --git a/internal/service/storagegateway/upload_buffer.go b/internal/service/storagegateway/upload_buffer.go index 2aa1495a864..db89b463e07 100644 --- a/internal/service/storagegateway/upload_buffer.go +++ b/internal/service/storagegateway/upload_buffer.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -53,7 +56,7 @@ func ResourceUploadBuffer() *schema.Resource { func resourceUploadBufferCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) input := &storagegateway.AddUploadBufferInput{} @@ -103,7 +106,7 @@ func resourceUploadBufferCreate(ctx context.Context, d *schema.ResourceData, met func resourceUploadBufferRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN, diskID, err := DecodeUploadBufferID(d.Id()) if err != nil { diff --git a/internal/service/storagegateway/upload_buffer_test.go b/internal/service/storagegateway/upload_buffer_test.go index 76390408137..d9c06da7063 100644 --- a/internal/service/storagegateway/upload_buffer_test.go +++ b/internal/service/storagegateway/upload_buffer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -147,7 +150,7 @@ func testAccCheckUploadBufferExists(ctx context.Context, resourceName string) re return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN, diskID, err := tfstoragegateway.DecodeUploadBufferID(rs.Primary.ID) if err != nil { diff --git a/internal/service/storagegateway/validate.go b/internal/service/storagegateway/validate.go index fea400d7477..5ca15a3116d 100644 --- a/internal/service/storagegateway/validate.go +++ b/internal/service/storagegateway/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/validate_test.go b/internal/service/storagegateway/validate_test.go index ca9dcef54be..8658b8c8e45 100644 --- a/internal/service/storagegateway/validate_test.go +++ b/internal/service/storagegateway/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/wait.go b/internal/service/storagegateway/wait.go index 5fe91090ef2..4ecd5eecce9 100644 --- a/internal/service/storagegateway/wait.go +++ b/internal/service/storagegateway/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( diff --git a/internal/service/storagegateway/working_storage.go b/internal/service/storagegateway/working_storage.go index 941e9577934..eb854ef452e 100644 --- a/internal/service/storagegateway/working_storage.go +++ b/internal/service/storagegateway/working_storage.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway import ( @@ -44,7 +47,7 @@ func ResourceWorkingStorage() *schema.Resource { func resourceWorkingStorageCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) diskID := d.Get("disk_id").(string) gatewayARN := d.Get("gateway_arn").(string) @@ -67,7 +70,7 @@ func resourceWorkingStorageCreate(ctx context.Context, d *schema.ResourceData, m func resourceWorkingStorageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).StorageGatewayConn() + conn := meta.(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN, diskID, err := DecodeWorkingStorageID(d.Id()) if err != nil { diff --git a/internal/service/storagegateway/working_storage_test.go b/internal/service/storagegateway/working_storage_test.go index 6e7bc7e1ce1..b47278df58d 100644 --- a/internal/service/storagegateway/working_storage_test.go +++ b/internal/service/storagegateway/working_storage_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package storagegateway_test import ( @@ -112,7 +115,7 @@ func testAccCheckWorkingStorageExists(ctx context.Context, resourceName string) return fmt.Errorf("Not found: %s", resourceName) } - conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn(ctx) gatewayARN, diskID, err := tfstoragegateway.DecodeWorkingStorageID(rs.Primary.ID) if err != nil { diff --git a/internal/service/sts/caller_identity_data_source.go b/internal/service/sts/caller_identity_data_source.go index 09c82fcfe4e..f19339cc9e0 100644 --- a/internal/service/sts/caller_identity_data_source.go +++ b/internal/service/sts/caller_identity_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sts import ( @@ -7,8 +10,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" ) // @FrameworkDataSource @@ -61,7 +64,7 @@ func (d *dataSourceCallerIdentity) Read(ctx context.Context, request datasource. return } - conn := d.Meta().STSConn() + conn := d.Meta().STSConn(ctx) output, err := FindCallerIdentity(ctx, conn) diff --git a/internal/service/sts/caller_identity_data_source_test.go b/internal/service/sts/caller_identity_data_source_test.go index df528cb7426..bafd5132283 100644 --- a/internal/service/sts/caller_identity_data_source_test.go +++ b/internal/service/sts/caller_identity_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sts_test import ( diff --git a/internal/service/sts/find.go b/internal/service/sts/find.go index 25a02ecf302..7c66d2fac90 100644 --- a/internal/service/sts/find.go +++ b/internal/service/sts/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package sts import ( diff --git a/internal/service/sts/generate.go b/internal/service/sts/generate.go new file mode 100644 index 00000000000..13d93447bc2 --- /dev/null +++ b/internal/service/sts/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package sts diff --git a/internal/service/sts/service_package.go b/internal/service/sts/service_package.go new file mode 100644 index 00000000000..43d41ed0710 --- /dev/null +++ b/internal/service/sts/service_package.go @@ -0,0 +1,24 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package sts + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + sts_sdkv1 "github.com/aws/aws-sdk-go/service/sts" +) + +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, m map[string]any) (*sts_sdkv1.STS, error) { + sess := m["session"].(*session_sdkv1.Session) + config := &aws_sdkv1.Config{Endpoint: aws_sdkv1.String(m["endpoint"].(string))} + + if stsRegion := m["sts_region"].(string); stsRegion != "" { + config.Region = aws_sdkv1.String(stsRegion) + } + + return sts_sdkv1.New(sess.Copy(config)), nil +} diff --git a/internal/service/sts/service_package_gen.go b/internal/service/sts/service_package_gen.go index 295656d10f8..87d193deb30 100644 --- a/internal/service/sts/service_package_gen.go +++ b/internal/service/sts/service_package_gen.go @@ -5,6 +5,7 @@ package sts import ( "context" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -35,4 +36,6 @@ func (p *servicePackage) ServicePackageName() string { return names.STS } -var ServicePackage = &servicePackage{} +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/swf/domain.go b/internal/service/swf/domain.go index f7ebe1243dd..e854bbeae49 100644 --- a/internal/service/swf/domain.go +++ b/internal/service/swf/domain.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package swf import ( @@ -6,14 +9,16 @@ import ( "log" "strconv" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/swf" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/swf" + "github.com/aws/aws-sdk-go-v2/service/swf/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -22,7 +27,7 @@ import ( // @SDKResource("aws_swf_domain", name="Domain") // @Tags(identifierAttribute="arn") -func ResourceDomain() *schema.Resource { +func resourceDomain() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceDomainCreate, ReadWithoutTimeout: resourceDomainRead, @@ -79,12 +84,13 @@ func ResourceDomain() *schema.Resource { } func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SWFConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).SWFClient(ctx) name := create.Name(d.Get("name").(string), d.Get("name_prefix").(string)) input := &swf.RegisterDomainInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), WorkflowExecutionRetentionPeriodInDays: aws.String(d.Get("workflow_execution_retention_period_in_days").(string)), } @@ -92,10 +98,10 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte 
input.Description = aws.String(v.(string)) } - _, err := conn.RegisterDomainWithContext(ctx, input) + _, err := conn.RegisterDomain(ctx, input) if err != nil { - return diag.Errorf("creating SWF Domain (%s): %s", name, err) + return sdkdiag.AppendErrorf(diags, "creating SWF Domain (%s): %s", name, err) } d.SetId(name) @@ -104,9 +110,10 @@ func resourceDomainCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).SWFConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).SWFClient(ctx) - output, err := FindDomainByName(ctx, conn, d.Id()) + output, err := findDomainByName(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] SWF Domain (%s) not found, removing from state", d.Id()) @@ -115,17 +122,17 @@ func resourceDomainRead(ctx context.Context, d *schema.ResourceData, meta interf } if err != nil { - return diag.Errorf("reading SWF Domain (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "reading SWF Domain (%s): %s", d.Id(), err) } - arn := aws.StringValue(output.DomainInfo.Arn) + arn := aws.ToString(output.DomainInfo.Arn) d.Set("arn", arn) d.Set("description", output.DomainInfo.Description) d.Set("name", output.DomainInfo.Name) - d.Set("name_prefix", create.NamePrefixFromName(aws.StringValue(output.DomainInfo.Name))) + d.Set("name_prefix", create.NamePrefixFromName(aws.ToString(output.DomainInfo.Name))) d.Set("workflow_execution_retention_period_in_days", output.Configuration.WorkflowExecutionRetentionPeriodInDays) - return nil + return diags } func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { @@ -134,31 +141,32 @@ func resourceDomainUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceDomainDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { 
- conn := meta.(*conns.AWSClient).SWFConn() + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).SWFClient(ctx) - _, err := conn.DeprecateDomainWithContext(ctx, &swf.DeprecateDomainInput{ + _, err := conn.DeprecateDomain(ctx, &swf.DeprecateDomainInput{ Name: aws.String(d.Get("name").(string)), }) - if tfawserr.ErrCodeEquals(err, swf.ErrCodeDomainDeprecatedFault, swf.ErrCodeUnknownResourceFault) { - return nil + if errs.IsA[*types.DomainDeprecatedFault](err) || errs.IsA[*types.UnknownResourceFault](err) { + return diags } if err != nil { - return diag.Errorf("deleting SWF Domain (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting SWF Domain (%s): %s", d.Id(), err) } - return nil + return diags } -func FindDomainByName(ctx context.Context, conn *swf.SWF, name string) (*swf.DescribeDomainOutput, error) { +func findDomainByName(ctx context.Context, conn *swf.Client, name string) (*swf.DescribeDomainOutput, error) { input := &swf.DescribeDomainInput{ Name: aws.String(name), } - output, err := conn.DescribeDomainWithContext(ctx, input) + output, err := conn.DescribeDomain(ctx, input) - if tfawserr.ErrCodeEquals(err, swf.ErrCodeUnknownResourceFault) { + if errs.IsA[*types.UnknownResourceFault](err) { return nil, &retry.NotFoundError{ LastError: err, LastRequest: input, @@ -173,9 +181,9 @@ func FindDomainByName(ctx context.Context, conn *swf.SWF, name string) (*swf.Des return nil, tfresource.NewEmptyResultError(input) } - if status := aws.StringValue(output.DomainInfo.Status); status == swf.RegistrationStatusDeprecated { + if status := output.DomainInfo.Status; status == types.RegistrationStatusDeprecated { return nil, &retry.NotFoundError{ - Message: status, + Message: string(status), LastRequest: input, } } diff --git a/internal/service/swf/domain_test.go b/internal/service/swf/domain_test.go index 8511d988e48..51f9f2ad292 100644 --- a/internal/service/swf/domain_test.go +++ b/internal/service/swf/domain_test.go @@ -1,3 +1,6 @@ +// Copyright 
(c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package swf_test import ( @@ -8,9 +11,7 @@ import ( "testing" "time" - "github.com/aws/aws-sdk-go/service/swf" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -18,6 +19,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tfswf "github.com/hashicorp/terraform-provider-aws/internal/service/swf" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func testAccPreCheckDomainTestingEnabled(t *testing.T) { @@ -39,7 +41,7 @@ func TestAccSWFDomain_basic(t *testing.T) { acctest.PreCheck(ctx, t) testAccPreCheckDomainTestingEnabled(t) }, - ErrorCheck: acctest.ErrorCheck(t, swf.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.SWFEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ @@ -64,6 +66,32 @@ func TestAccSWFDomain_basic(t *testing.T) { }) } +func TestAccSWFDomain_disappears(t *testing.T) { + ctx := acctest.Context(t) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_swf_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + testAccPreCheckDomainTestingEnabled(t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.SWFEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckDomainDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckDomainExists(ctx, resourceName), + acctest.CheckResourceDisappears(ctx, acctest.Provider, 
tfswf.ResourceDomain(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + func TestAccSWFDomain_nameGenerated(t *testing.T) { ctx := acctest.Context(t) resourceName := "aws_swf_domain.test" @@ -73,7 +101,7 @@ func TestAccSWFDomain_nameGenerated(t *testing.T) { acctest.PreCheck(ctx, t) testAccPreCheckDomainTestingEnabled(t) }, - ErrorCheck: acctest.ErrorCheck(t, swf.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.SWFEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ @@ -103,7 +131,7 @@ func TestAccSWFDomain_namePrefix(t *testing.T) { acctest.PreCheck(ctx, t) testAccPreCheckDomainTestingEnabled(t) }, - ErrorCheck: acctest.ErrorCheck(t, swf.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.SWFEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ @@ -134,7 +162,7 @@ func TestAccSWFDomain_tags(t *testing.T) { acctest.PreCheck(ctx, t) testAccPreCheckDomainTestingEnabled(t) }, - ErrorCheck: acctest.ErrorCheck(t, swf.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.SWFEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ @@ -182,7 +210,7 @@ func TestAccSWFDomain_description(t *testing.T) { acctest.PreCheck(ctx, t) testAccPreCheckDomainTestingEnabled(t) }, - ErrorCheck: acctest.ErrorCheck(t, swf.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.SWFEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDomainDestroy(ctx), Steps: []resource.TestStep{ @@ -204,7 +232,7 @@ func TestAccSWFDomain_description(t *testing.T) { func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SWFConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).SWFClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_swf_domain" { @@ -212,18 +240,8 @@ func testAccCheckDomainDestroy(ctx context.Context) resource.TestCheckFunc { } // Retrying as Read after Delete is not always consistent. - err := retry.RetryContext(ctx, 2*time.Minute, func() *retry.RetryError { - _, err := tfswf.FindDomainByName(ctx, conn, rs.Primary.ID) - - if tfresource.NotFound(err) { - return nil - } - - if err != nil { - return retry.NonRetryableError(err) - } - - return retry.RetryableError(fmt.Errorf("SWF Domain still exists: %s", rs.Primary.ID)) + _, err := tfresource.RetryUntilNotFound(ctx, 2*time.Minute, func() (interface{}, error) { + return tfswf.FindDomainByName(ctx, conn, rs.Primary.ID) }) return err @@ -244,7 +262,7 @@ func testAccCheckDomainExists(ctx context.Context, n string) resource.TestCheckF return fmt.Errorf("No SWF Domain ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SWFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SWFClient(ctx) _, err := tfswf.FindDomainByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/swf/exports_test.go b/internal/service/swf/exports_test.go new file mode 100644 index 00000000000..eeeb6a6782c --- /dev/null +++ b/internal/service/swf/exports_test.go @@ -0,0 +1,11 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package swf + +// Exports for use in tests only. +var ( + FindDomainByName = findDomainByName + + ResourceDomain = resourceDomain +) diff --git a/internal/service/swf/generate.go b/internal/service/swf/generate.go index 4dbdd7b1f5e..88fd0126292 100644 --- a/internal/service/swf/generate.go +++ b/internal/service/swf/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTags -ServiceTagsSlice -TagType=ResourceTag -UpdateTags +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ServiceTagsSlice -TagType=ResourceTag -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package swf diff --git a/internal/service/swf/service_package_gen.go b/internal/service/swf/service_package_gen.go index 0601f7f2383..ea1aca666d7 100644 --- a/internal/service/swf/service_package_gen.go +++ b/internal/service/swf/service_package_gen.go @@ -5,6 +5,9 @@ package swf import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + swf_sdkv2 "github.com/aws/aws-sdk-go-v2/service/swf" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -26,7 +29,7 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ { - Factory: ResourceDomain, + Factory: resourceDomain, TypeName: "aws_swf_domain", Name: "Domain", Tags: &types.ServicePackageResourceTags{ @@ -40,4 +43,17 @@ func (p *servicePackage) ServicePackageName() string { return names.SWF } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*swf_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return swf_sdkv2.NewFromConfig(cfg, func(o *swf_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = swf_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/swf/sweep.go b/internal/service/swf/sweep.go index 89c7fae094b..e7eec341a90 100644 --- a/internal/service/swf/sweep.go +++ b/internal/service/swf/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -7,10 +10,10 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/swf" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/swf" + "github.com/aws/aws-sdk-go-v2/service/swf/types" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -23,42 +26,39 @@ func init() { func sweepDomains(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).SWFConn() + conn := client.SWFClient(ctx) input := &swf.ListDomainsInput{ - RegistrationStatus: aws.String(swf.RegistrationStatusRegistered), + RegistrationStatus: types.RegistrationStatusRegistered, } sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListDomainsPagesWithContext(ctx, input, func(page *swf.ListDomainsOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := swf.NewListDomainsPaginator(conn, 
input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping SWF Domain sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing SWF Domains (%s): %w", region, err) } for _, v := range page.DomainInfos { - r := ResourceDomain() + r := resourceDomain() d := r.Data(nil) - d.SetId(aws.StringValue(v.Name)) + d.SetId(aws.ToString(v.Name)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping SWF Domain sweep for %s: %s", region, err) - return nil - } - - if err != nil { - return fmt.Errorf("error listing SWF Domains (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping SWF Domains (%s): %w", region, err) diff --git a/internal/service/swf/tags_gen.go b/internal/service/swf/tags_gen.go index eeac71a4b78..0931f4b2e44 100644 --- a/internal/service/swf/tags_gen.go +++ b/internal/service/swf/tags_gen.go @@ -5,24 +5,24 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/swf" - "github.com/aws/aws-sdk-go/service/swf/swfiface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/swf" + awstypes "github.com/aws/aws-sdk-go-v2/service/swf/types" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists swf service tags. +// listTags lists swf service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn swfiface.SWFAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *swf.Client, identifier string) (tftags.KeyValueTags, error) { input := &swf.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } - output, err := conn.ListTagsForResourceWithContext(ctx, input) + output, err := conn.ListTagsForResource(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn swfiface.SWFAPI, identifier string) (tft // ListTags lists swf service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).SWFConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).SWFClient(ctx), identifier) if err != nil { return err @@ -50,11 +50,11 @@ func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier stri // []*SERVICE.Tag handling // Tags returns swf service tags. -func Tags(tags tftags.KeyValueTags) []*swf.ResourceTag { - result := make([]*swf.ResourceTag, 0, len(tags)) +func Tags(tags tftags.KeyValueTags) []awstypes.ResourceTag { + result := make([]awstypes.ResourceTag, 0, len(tags)) for k, v := range tags.Map() { - tag := &swf.ResourceTag{ + tag := awstypes.ResourceTag{ Key: aws.String(k), Value: aws.String(v), } @@ -66,19 +66,19 @@ func Tags(tags tftags.KeyValueTags) []*swf.ResourceTag { } // KeyValueTags creates tftags.KeyValueTags from swf service tags. 
-func KeyValueTags(ctx context.Context, tags []*swf.ResourceTag) tftags.KeyValueTags { +func KeyValueTags(ctx context.Context, tags []awstypes.ResourceTag) tftags.KeyValueTags { m := make(map[string]*string, len(tags)) for _, tag := range tags { - m[aws.StringValue(tag.Key)] = tag.Value + m[aws.ToString(tag.Key)] = tag.Value } return tftags.New(ctx, m) } -// GetTagsIn returns swf service tags from Context. +// getTagsIn returns swf service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*swf.ResourceTag { +func getTagsIn(ctx context.Context) []awstypes.ResourceTag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*swf.ResourceTag { return nil } -// SetTagsOut sets swf service tags in Context. -func SetTagsOut(ctx context.Context, tags []*swf.ResourceTag) { +// setTagsOut sets swf service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.ResourceTag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates swf service tags. +// updateTags updates swf service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn swfiface.SWFAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *swf.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -107,10 +107,10 @@ func UpdateTags(ctx context.Context, conn swfiface.SWFAPI, identifier string, ol if len(removedTags) > 0 { input := &swf.UntagResourceInput{ ResourceArn: aws.String(identifier), - TagKeys: aws.StringSlice(removedTags.Keys()), + TagKeys: removedTags.Keys(), } - _, err := conn.UntagResourceWithContext(ctx, input) + _, err := conn.UntagResource(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -125,7 +125,7 @@ func UpdateTags(ctx context.Context, conn swfiface.SWFAPI, identifier string, ol Tags: Tags(updatedTags), } - _, err := conn.TagResourceWithContext(ctx, input) + _, err := conn.TagResource(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn swfiface.SWFAPI, identifier string, ol // UpdateTags updates swf service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SWFConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SWFClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/synthetics/canary.go b/internal/service/synthetics/canary.go index 7960ab2d081..5360be7c933 100644 --- a/internal/service/synthetics/canary.go +++ b/internal/service/synthetics/canary.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( @@ -269,7 +272,7 @@ func ResourceCanary() *schema.Resource { func resourceCanaryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) name := d.Get("name").(string) input := &synthetics.CreateCanaryInput{ @@ -277,7 +280,7 @@ func resourceCanaryCreate(ctx context.Context, d *schema.ResourceData, meta inte ExecutionRoleArn: aws.String(d.Get("execution_role_arn").(string)), Name: aws.String(name), RuntimeVersion: aws.String(d.Get("runtime_version").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if code, err := expandCanaryCode(d); err != nil { @@ -356,7 +359,7 @@ func resourceCanaryCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceCanaryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) canary, err := FindCanaryByName(ctx, conn, d.Id()) @@ -414,14 +417,14 @@ func resourceCanaryRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "setting artifact_config: %s", err) } - SetTagsOut(ctx, canary.Tags) + setTagsOut(ctx, canary.Tags) return diags } func resourceCanaryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) if d.HasChangesExcept("tags", "tags_all", "start_canary") { input := &synthetics.UpdateCanaryInput{ @@ -527,7 +530,7 @@ func resourceCanaryUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceCanaryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) if status := d.Get("status").(string); status == synthetics.CanaryStateRunning { if err := stopCanary(ctx, d.Id(), conn); err != nil { diff --git a/internal/service/synthetics/canary_test.go b/internal/service/synthetics/canary_test.go index abc6ed2eae9..e230e9aa057 100644 --- a/internal/service/synthetics/canary_test.go +++ b/internal/service/synthetics/canary_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics_test import ( @@ -560,7 +563,7 @@ func TestAccSyntheticsCanary_disappears(t *testing.T) { func testAccCheckCanaryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_synthetics_canary" { @@ -595,7 +598,7 @@ func testAccCheckCanaryExists(ctx context.Context, n string, canary *synthetics. return fmt.Errorf("No Synthetics Canary ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn(ctx) output, err := tfsynthetics.FindCanaryByName(ctx, conn, rs.Primary.ID) diff --git a/internal/service/synthetics/consts.go b/internal/service/synthetics/consts.go index a52d31bbd77..c3d4b7a4b85 100644 --- a/internal/service/synthetics/consts.go +++ b/internal/service/synthetics/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( diff --git a/internal/service/synthetics/find.go b/internal/service/synthetics/find.go index d4a5555d185..d4af7e238fa 100644 --- a/internal/service/synthetics/find.go +++ b/internal/service/synthetics/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( diff --git a/internal/service/synthetics/generate.go b/internal/service/synthetics/generate.go index de1354827e5..dbf96f1556d 100644 --- a/internal/service/synthetics/generate.go +++ b/internal/service/synthetics/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ServiceTagsMap -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package synthetics diff --git a/internal/service/synthetics/group.go b/internal/service/synthetics/group.go index fbdc81d0949..56cab5bc09a 100644 --- a/internal/service/synthetics/group.go +++ b/internal/service/synthetics/group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( @@ -54,12 +57,12 @@ func ResourceGroup() *schema.Resource { func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) name := d.Get("name").(string) in := &synthetics.CreateGroupInput{ Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateGroupWithContext(ctx, in) @@ -79,7 +82,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) group, err := FindGroupByName(ctx, conn, d.Id()) @@ -97,7 +100,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa d.Set("group_id", group.Id) d.Set("name", group.Name) - 
SetTagsOut(ctx, group.Tags) + setTagsOut(ctx, group.Tags) return diags } @@ -109,7 +112,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) log.Printf("[DEBUG] Deleting Synthetics Group %s", d.Id()) diff --git a/internal/service/synthetics/group_association.go b/internal/service/synthetics/group_association.go index 636c330c170..f6926e0f7aa 100644 --- a/internal/service/synthetics/group_association.go +++ b/internal/service/synthetics/group_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( @@ -52,7 +55,7 @@ func ResourceGroupAssociation() *schema.Resource { func resourceGroupAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) canaryArn := d.Get("canary_arn").(string) groupName := d.Get("group_name").(string) @@ -79,7 +82,7 @@ func resourceGroupAssociationCreate(ctx context.Context, d *schema.ResourceData, func resourceGroupAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := meta.(*conns.AWSClient).SyntheticsConn(ctx) canaryArn, groupName, err := GroupAssociationParseResourceID(d.Id()) @@ -109,7 +112,7 @@ func resourceGroupAssociationRead(ctx context.Context, d *schema.ResourceData, m func resourceGroupAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).SyntheticsConn() + conn := 
meta.(*conns.AWSClient).SyntheticsConn(ctx) log.Printf("[DEBUG] Deleting Synthetics Group Association %s", d.Id()) diff --git a/internal/service/synthetics/group_association_test.go b/internal/service/synthetics/group_association_test.go index 82e6f8ca30f..e19be00f03d 100644 --- a/internal/service/synthetics/group_association_test.go +++ b/internal/service/synthetics/group_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics_test import ( @@ -88,7 +91,7 @@ func testAccCheckGroupAssociationExists(ctx context.Context, name string, v *syn return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn(ctx) output, err := tfsynthetics.FindAssociatedGroup(ctx, conn, canaryArn, groupName) if err != nil { @@ -103,7 +106,7 @@ func testAccCheckGroupAssociationExists(ctx context.Context, name string, v *syn func testAccCheckGroupAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_synthetics_group_association" { diff --git a/internal/service/synthetics/group_test.go b/internal/service/synthetics/group_test.go index d0d1dcc50a4..160eebca3ed 100644 --- a/internal/service/synthetics/group_test.go +++ b/internal/service/synthetics/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package synthetics_test import ( @@ -127,7 +130,7 @@ func testAccCheckGroupExists(ctx context.Context, name string, group *synthetics return fmt.Errorf("no Synthetics Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn(ctx) output, err := tfsynthetics.FindGroupByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -142,7 +145,7 @@ func testAccCheckGroupExists(ctx context.Context, name string, group *synthetics func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).SyntheticsConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_synthetics_group" { diff --git a/internal/service/synthetics/retry.go b/internal/service/synthetics/retry.go index b96a7f396c0..52575aa7ad2 100644 --- a/internal/service/synthetics/retry.go +++ b/internal/service/synthetics/retry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( @@ -29,17 +32,17 @@ func retryCreateCanary(ctx context.Context, conn *synthetics.Synthetics, d *sche // delete canary because it is the only way to reprovision if in an error state err = deleteCanary(ctx, conn, d.Id()) if err != nil { - return output, fmt.Errorf("error deleting Synthetics Canary on retry (%s): %w", d.Id(), err) + return output, fmt.Errorf("deleting Synthetics Canary on retry (%s): %w", d.Id(), err) } _, err = conn.CreateCanaryWithContext(ctx, input) if err != nil { - return output, fmt.Errorf("error creating Synthetics Canary on retry (%s): %w", d.Id(), err) + return output, fmt.Errorf("creating Synthetics Canary on retry (%s): %w", d.Id(), err) } _, err = waitCanaryReady(ctx, conn, d.Id()) if err != nil { - return output, fmt.Errorf("error waiting on Synthetics Canary on retry (%s): %w", d.Id(), err) + return output, fmt.Errorf("waiting on Synthetics Canary on retry (%s): %w", d.Id(), err) } } } @@ -57,13 +60,13 @@ func deleteCanary(ctx context.Context, conn *synthetics.Synthetics, name string) } if err != nil { - return fmt.Errorf("error deleting Synthetics Canary (%s): %w", name, err) + return fmt.Errorf("deleting Synthetics Canary (%s): %w", name, err) } _, err = waitCanaryDeleted(ctx, conn, name) if err != nil { - return fmt.Errorf("error waiting for Synthetics Canary (%s) delete: %w", name, err) + return fmt.Errorf("waiting for Synthetics Canary (%s) delete: %w", name, err) } return nil diff --git a/internal/service/synthetics/service_package_gen.go b/internal/service/synthetics/service_package_gen.go index 1a87d615fc2..4e14ba3a5f1 100644 --- a/internal/service/synthetics/service_package_gen.go +++ b/internal/service/synthetics/service_package_gen.go @@ -5,6 +5,10 @@ package synthetics import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + synthetics_sdkv1 "github.com/aws/aws-sdk-go/service/synthetics" + 
"github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -53,4 +57,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Synthetics } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*synthetics_sdkv1.Synthetics, error) { + sess := config["session"].(*session_sdkv1.Session) + + return synthetics_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/synthetics/status.go b/internal/service/synthetics/status.go index 16bd489a6f0..ce34b8e61fd 100644 --- a/internal/service/synthetics/status.go +++ b/internal/service/synthetics/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( diff --git a/internal/service/synthetics/sweep.go b/internal/service/synthetics/sweep.go index 201672eac36..95e24cf76b1 100644 --- a/internal/service/synthetics/sweep.go +++ b/internal/service/synthetics/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/synthetics" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,11 +31,11 @@ func init() { func sweepCanaries(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).SyntheticsConn() + conn := client.SyntheticsConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var sweeperErrs *multierror.Error @@ -66,7 +68,7 @@ func sweepCanaries(region string) error { input.NextToken = output.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Synthetics Canaries: %w", err)) } diff --git a/internal/service/synthetics/tags_gen.go b/internal/service/synthetics/tags_gen.go index 37144c81115..e6db84cd6aa 100644 --- a/internal/service/synthetics/tags_gen.go +++ b/internal/service/synthetics/tags_gen.go @@ -21,14 +21,14 @@ func Tags(tags tftags.KeyValueTags) map[string]*string { return aws.StringMap(tags.Map()) } -// KeyValueTags creates KeyValueTags from synthetics service tags. +// KeyValueTags creates tftags.KeyValueTags from synthetics service tags. func KeyValueTags(ctx context.Context, tags map[string]*string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns synthetics service tags from Context. +// getTagsIn returns synthetics service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]*string { +func getTagsIn(ctx context.Context) map[string]*string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -38,17 +38,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets synthetics service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets synthetics service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates synthetics service tags. +// updateTags updates synthetics service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn syntheticsiface.SyntheticsAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn syntheticsiface.SyntheticsAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -88,5 +88,5 @@ func UpdateTags(ctx context.Context, conn syntheticsiface.SyntheticsAPI, identif // UpdateTags updates synthetics service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).SyntheticsConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).SyntheticsConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/synthetics/wait.go b/internal/service/synthetics/wait.go index 6b223479a97..54f61f1a8fd 100644 --- a/internal/service/synthetics/wait.go +++ b/internal/service/synthetics/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package synthetics import ( diff --git a/internal/service/timestreamwrite/database.go b/internal/service/timestreamwrite/database.go index bc46f8bfa28..c0e19a0f42f 100644 --- a/internal/service/timestreamwrite/database.go +++ b/internal/service/timestreamwrite/database.go @@ -1,26 +1,31 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package timestreamwrite import ( "context" - "fmt" "log" "regexp" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) // 
@SDKResource("aws_timestreamwrite_database", name="Database") // @Tags(identifierAttribute="arn") -func ResourceDatabase() *schema.Resource { +func resourceDatabase() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceDatabaseCreate, ReadWithoutTimeout: resourceDatabaseRead, @@ -36,7 +41,6 @@ func ResourceDatabase() *schema.Resource { Type: schema.TypeString, Computed: true, }, - "database_name": { Type: schema.TypeString, Required: true, @@ -46,7 +50,6 @@ func ResourceDatabase() *schema.Resource { validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9_.-]+$`), "must only include alphanumeric, underscore, period, or hyphen characters"), ), }, - "kms_key_id": { Type: schema.TypeString, Optional: true, @@ -57,7 +60,6 @@ func ResourceDatabase() *schema.Resource { // To avoid importing an extra service in this resource, input here is restricted to only ARNs. ValidateFunc: verify.ValidARN, }, - "table_count": { Type: schema.TypeInt, Computed: true, @@ -71,60 +73,45 @@ func ResourceDatabase() *schema.Resource { } func resourceDatabaseCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) - dbName := d.Get("database_name").(string) + name := d.Get("database_name").(string) input := &timestreamwrite.CreateDatabaseInput{ - DatabaseName: aws.String(dbName), - Tags: GetTagsIn(ctx), + DatabaseName: aws.String(name), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("kms_key_id"); ok { input.KmsKeyId = aws.String(v.(string)) } - resp, err := conn.CreateDatabaseWithContext(ctx, input) + output, err := conn.CreateDatabase(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Timestream Database (%s): %w", dbName, err)) + return diag.Errorf("creating Timestream Database (%s): %s", name, err) } - if resp == nil || resp.Database == nil { - return diag.FromErr(fmt.Errorf("error creating Timestream 
Database (%s): empty output", dbName)) - } - - d.SetId(aws.StringValue(resp.Database.DatabaseName)) + d.SetId(aws.ToString(output.Database.DatabaseName)) return resourceDatabaseRead(ctx, d, meta) } func resourceDatabaseRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() - - input := &timestreamwrite.DescribeDatabaseInput{ - DatabaseName: aws.String(d.Id()), - } + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) - resp, err := conn.DescribeDatabaseWithContext(ctx, input) + db, err := findDatabaseByName(ctx, conn, d.Id()) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, timestreamwrite.ErrCodeResourceNotFoundException) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Timestream Database %s not found, removing from state", d.Id()) d.SetId("") return nil } if err != nil { - return diag.FromErr(fmt.Errorf("error reading Timestream Database (%s): %w", d.Id(), err)) + return diag.Errorf("reading Timestream Database (%s): %s", d.Id(), err) } - if resp == nil || resp.Database == nil { - return diag.FromErr(fmt.Errorf("error reading Timestream Database (%s): empty output", d.Id())) - } - - db := resp.Database - arn := aws.StringValue(db.Arn) - - d.Set("arn", arn) + d.Set("arn", db.Arn) d.Set("database_name", db.DatabaseName) d.Set("kms_key_id", db.KmsKeyId) d.Set("table_count", db.TableCount) @@ -133,7 +120,7 @@ func resourceDatabaseRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) if d.HasChange("kms_key_id") { input := &timestreamwrite.UpdateDatabaseInput{ @@ -141,10 +128,10 @@ func resourceDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta in KmsKeyId: aws.String(d.Get("kms_key_id").(string)), } - _, 
err := conn.UpdateDatabaseWithContext(ctx, input) + _, err := conn.UpdateDatabase(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Timestream Database (%s): %w", d.Id(), err)) + return diag.Errorf("updating Timestream Database (%s): %s", d.Id(), err) } } @@ -152,20 +139,45 @@ func resourceDatabaseUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceDatabaseDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) log.Printf("[INFO] Deleting Timestream Database: %s", d.Id()) - _, err := conn.DeleteDatabaseWithContext(ctx, &timestreamwrite.DeleteDatabaseInput{ + _, err := conn.DeleteDatabase(ctx, &timestreamwrite.DeleteDatabaseInput{ DatabaseName: aws.String(d.Id()), }) - if tfawserr.ErrCodeEquals(err, timestreamwrite.ErrCodeResourceNotFoundException) { + if errs.IsA[*types.ResourceNotFoundException](err) { return nil } if err != nil { - return diag.FromErr(fmt.Errorf("error deleting Timestream Database (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Timestream Database (%s): %s", d.Id(), err) } return nil } + +func findDatabaseByName(ctx context.Context, conn *timestreamwrite.Client, name string) (*types.Database, error) { + input := &timestreamwrite.DescribeDatabaseInput{ + DatabaseName: aws.String(name), + } + + output, err := conn.DescribeDatabase(ctx, input) + + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Database == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.Database, nil +} diff --git a/internal/service/timestreamwrite/database_test.go b/internal/service/timestreamwrite/database_test.go index 662f987bbf6..54502486d89 100644 ---
a/internal/service/timestreamwrite/database_test.go +++ b/internal/service/timestreamwrite/database_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package timestreamwrite_test import ( @@ -6,15 +9,15 @@ import ( "regexp" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftimestreamwrite "github.com/hashicorp/terraform-provider-aws/internal/service/timestreamwrite" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccTimestreamWriteDatabase_basic(t *testing.T) { @@ -24,7 +27,7 @@ func TestAccTimestreamWriteDatabase_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -54,7 +57,7 @@ func TestAccTimestreamWriteDatabase_disappears(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), 
Steps: []resource.TestStep{ @@ -78,7 +81,7 @@ func TestAccTimestreamWriteDatabase_kmsKey(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -107,7 +110,7 @@ func TestAccTimestreamWriteDatabase_updateKMSKey(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -148,7 +151,7 @@ func TestAccTimestreamWriteDatabase_tags(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDatabaseDestroy(ctx), Steps: []resource.TestStep{ @@ -195,18 +198,16 @@ func testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_timestreamwrite_database" { continue } - output, err := conn.DescribeDatabaseWithContext(ctx, &timestreamwrite.DescribeDatabaseInput{ - DatabaseName:
aws.String(rs.Primary.ID), - }) + _, err := tftimestreamwrite.FindDatabaseByName(ctx, conn, rs.Primary.ID) - if tfawserr.ErrCodeEquals(err, timestreamwrite.ErrCodeResourceNotFoundException) { + if tfresource.NotFound(err) { continue } @@ -214,9 +215,7 @@ func testAccCheckDatabaseDestroy(ctx context.Context) resource.TestCheckFunc { return err } - if output != nil && output.Database != nil { - return fmt.Errorf("Timestream Database (%s) still exists", rs.Primary.ID) - } + return fmt.Errorf("Timestream Database %s still exists", rs.Primary.ID) } return nil @@ -230,34 +229,20 @@ func testAccCheckDatabaseExists(ctx context.Context, n string) resource.TestChec return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("no resource ID is set") - } + conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteClient(ctx) - conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteConn() + _, err := tftimestreamwrite.FindDatabaseByName(ctx, conn, rs.Primary.ID) - output, err := conn.DescribeDatabaseWithContext(ctx, &timestreamwrite.DescribeDatabaseInput{ - DatabaseName: aws.String(rs.Primary.ID), - }) - - if err != nil { - return err - } - - if output == nil || output.Database == nil { - return fmt.Errorf("Timestream Database (%s) not found", rs.Primary.ID) - } - - return nil + return err } } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteClient(ctx) input := &timestreamwrite.ListDatabasesInput{} - _, err := conn.ListDatabasesWithContext(ctx, input) + _, err := conn.ListDatabases(ctx, input) if acctest.PreCheckSkipError(err) { t.Skipf("skipping acceptance testing: %s", err) diff --git a/internal/service/timestreamwrite/export_test.go b/internal/service/timestreamwrite/export_test.go new file mode 100644 index 00000000000..ca5ac4f150e --- /dev/null +++
b/internal/service/timestreamwrite/export_test.go @@ -0,0 +1,15 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package timestreamwrite + +// Exports for use in tests only. +var ( + ResourceDatabase = resourceDatabase + ResourceTable = resourceTable + + FindDatabaseByName = findDatabaseByName + FindTableByTwoPartKey = findTableByTwoPartKey + + TableParseResourceID = tableParseResourceID +) diff --git a/internal/service/timestreamwrite/generate.go b/internal/service/timestreamwrite/generate.go index 759788015f9..dc4fbf7160b 100644 --- a/internal/service/timestreamwrite/generate.go +++ b/internal/service/timestreamwrite/generate.go @@ -1,4 +1,8 @@ -//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package timestreamwrite diff --git a/internal/service/timestreamwrite/service_package_gen.go b/internal/service/timestreamwrite/service_package_gen.go index 7720d7bb1c5..9258c3ba0ef 100644 --- a/internal/service/timestreamwrite/service_package_gen.go +++ b/internal/service/timestreamwrite/service_package_gen.go @@ -5,6 +5,9 @@ package timestreamwrite import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + timestreamwrite_sdkv2 "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -26,7 +29,7 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { return []*types.ServicePackageSDKResource{ { - Factory: ResourceDatabase, + Factory: resourceDatabase, TypeName: "aws_timestreamwrite_database", Name: "Database", Tags: &types.ServicePackageResourceTags{ @@ -34,7 +37,7 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka }, }, { - Factory: ResourceTable, + Factory: resourceTable, TypeName: "aws_timestreamwrite_table", Name: "Table", Tags: &types.ServicePackageResourceTags{ @@ -48,4 +51,17 @@ func (p *servicePackage) ServicePackageName() string { return names.TimestreamWrite } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*timestreamwrite_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return timestreamwrite_sdkv2.NewFromConfig(cfg, func(o *timestreamwrite_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = timestreamwrite_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/timestreamwrite/sweep.go b/internal/service/timestreamwrite/sweep.go index 44160c99c73..8445a927c0c 100644 --- a/internal/service/timestreamwrite/sweep.go +++ b/internal/service/timestreamwrite/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -7,10 +10,9 @@ import ( "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/timestreamwrite" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -29,40 +31,37 @@ func init() { func sweepDatabases(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &timestreamwrite.ListDatabasesInput{} - conn := client.(*conns.AWSClient).TimestreamWriteConn() + conn := client.TimestreamWriteClient(ctx) sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListDatabasesPagesWithContext(ctx, input, func(page *timestreamwrite.ListDatabasesOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := timestreamwrite.NewListDatabasesPaginator(conn,
input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Timestream Database sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing Timestream Databases (%s): %w", region, err) } for _, v := range page.Databases { - r := ResourceDatabase() + r := resourceDatabase() d := r.Data(nil) - d.SetId(aws.StringValue(v.DatabaseName)) + d.SetId(aws.ToString(v.DatabaseName)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Timestream Database sweep for %s: %s", region, err) - return nil - } - - if err != nil { - return fmt.Errorf("error listing Timestream Databases (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Timestream Databases (%s): %w", region, err) @@ -73,40 +72,37 @@ func sweepDatabases(region string) error { func sweepTables(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } input := &timestreamwrite.ListTablesInput{} - conn := client.(*conns.AWSClient).TimestreamWriteConn() + conn := client.TimestreamWriteClient(ctx) sweepResources := make([]sweep.Sweepable, 0) - err = conn.ListTablesPagesWithContext(ctx, input, func(page *timestreamwrite.ListTablesOutput, lastPage bool) bool { - if page == nil { - return !lastPage + pages := timestreamwrite.NewListTablesPaginator(conn, input) + for pages.HasMorePages() { + page, err := pages.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Timestream Table sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return
fmt.Errorf("error listing Timestream Tables (%s): %w", region, err) } for _, v := range page.Tables { - r := ResourceTable() + r := resourceTable() d := r.Data(nil) - d.SetId(fmt.Sprintf("%s:%s", aws.StringValue(v.TableName), aws.StringValue(v.DatabaseName))) + d.SetId(tableCreateResourceID(aws.ToString(v.TableName), aws.ToString(v.DatabaseName))) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping Timestream Table sweep for %s: %s", region, err) - return nil - } - - if err != nil { - return fmt.Errorf("error listing Timestream Tables (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Timestream Tables (%s): %w", region, err) diff --git a/internal/service/timestreamwrite/table.go b/internal/service/timestreamwrite/table.go index b62f095c2ac..a5dc607ab6e 100644 --- a/internal/service/timestreamwrite/table.go +++ b/internal/service/timestreamwrite/table.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package timestreamwrite import ( @@ -7,21 +10,25 @@ import ( "regexp" "strings" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" + "github.com/hashicorp/terraform-provider-aws/internal/errs" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) // @SDKResource("aws_timestreamwrite_table", name="Table") // @Tags(identifierAttribute="arn") -func ResourceTable() *schema.Resource { +func resourceTable() *schema.Resource { return &schema.Resource{ CreateWithoutTimeout: resourceTableCreate, ReadWithoutTimeout: resourceTableRead, @@ -37,7 +44,6 @@ func ResourceTable() *schema.Resource { Type: schema.TypeString, Computed: true, }, - "database_name": { Type: schema.TypeString, Required: true, @@ -76,9 +82,9 @@ func ResourceTable() *schema.Resource { Optional: true, }, "encryption_option": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice(timestreamwrite.S3EncryptionOption_Values(), false), + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.S3EncryptionOption](), }, "kms_key_id": { Type: schema.TypeString, @@ -98,7 +104,6 @@ func ResourceTable() 
*schema.Resource { }, }, }, - "retention_properties": { Type: schema.TypeList, Optional: true, @@ -111,7 +116,6 @@ func ResourceTable() *schema.Resource { Required: true, ValidateFunc: validation.IntBetween(1, 73000), }, - "memory_store_retention_period_in_hours": { Type: schema.TypeInt, Required: true, @@ -120,7 +124,44 @@ func ResourceTable() *schema.Resource { }, }, }, - + "schema": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "composite_partition_key": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enforcement_in_record": { + Type: schema.TypeString, + Optional: true, + ValidateDiagFunc: enum.Validate[types.PartitionKeyEnforcementLevel](), + }, + "name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateDiagFunc: enum.Validate[types.PartitionKeyType](), + }, + }, + }, + }, + }, + }, + }, "table_name": { Type: schema.TypeString, Required: true, @@ -139,89 +180,81 @@ func ResourceTable() *schema.Resource { } func resourceTableCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) + databaseName := d.Get("database_name").(string) tableName := d.Get("table_name").(string) + id := tableCreateResourceID(tableName, databaseName) input := &timestreamwrite.CreateTableInput{ - DatabaseName: aws.String(d.Get("database_name").(string)), + DatabaseName: aws.String(databaseName), TableName: aws.String(tableName), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("magnetic_store_write_properties"); ok && len(v.([]interface{})) > 0 && v.([]interface{}) != nil { +
input.MagneticStoreWriteProperties = expandMagneticStoreWriteProperties(v.([]interface{})) } if v, ok := d.GetOk("retention_properties"); ok && len(v.([]interface{})) > 0 && v.([]interface{}) != nil { input.RetentionProperties = expandRetentionProperties(v.([]interface{})) } - if v, ok := d.GetOk("magnetic_store_write_properties"); ok && len(v.([]interface{})) > 0 && v.([]interface{}) != nil { - input.MagneticStoreWriteProperties = expandMagneticStoreWriteProperties(v.([]interface{})) + if v, ok := d.GetOk("schema"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Schema = expandSchema(v.([]interface{})[0].(map[string]interface{})) } - output, err := conn.CreateTableWithContext(ctx, input) + _, err := conn.CreateTable(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error creating Timestream Table (%s): %w", tableName, err)) + return diag.Errorf("creating Timestream Table (%s): %s", id, err) } - if output == nil || output.Table == nil { - return diag.FromErr(fmt.Errorf("error creating Timestream Table (%s): empty output", tableName)) - } - - d.SetId(fmt.Sprintf("%s:%s", aws.StringValue(output.Table.TableName), aws.StringValue(output.Table.DatabaseName))) + d.SetId(id) return resourceTableRead(ctx, d, meta) } func resourceTableRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() - - tableName, databaseName, err := TableParseID(d.Id()) + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) + tableName, databaseName, err := tableParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - input := &timestreamwrite.DescribeTableInput{ - DatabaseName: aws.String(databaseName), - TableName: aws.String(tableName), - } - - output, err := conn.DescribeTableWithContext(ctx, input) + table, err := findTableByTwoPartKey(ctx, conn, databaseName, tableName) - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err,
timestreamwrite.ErrCodeResourceNotFoundException) { + if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Timestream Table %s not found, removing from state", d.Id()) d.SetId("") return nil } - if output == nil || output.Table == nil { - return diag.FromErr(fmt.Errorf("error reading Timestream Table (%s): empty output", d.Id())) - } - - table := output.Table - arn := aws.StringValue(table.Arn) - - d.Set("arn", arn) + d.Set("arn", table.Arn) d.Set("database_name", table.DatabaseName) - + if err := d.Set("magnetic_store_write_properties", flattenMagneticStoreWriteProperties(table.MagneticStoreWriteProperties)); err != nil { + return diag.Errorf("setting magnetic_store_write_properties: %s", err) + } if err := d.Set("retention_properties", flattenRetentionProperties(table.RetentionProperties)); err != nil { - return diag.FromErr(fmt.Errorf("error setting retention_properties: %w", err)) + return diag.Errorf("setting retention_properties: %s", err) } - - if err := d.Set("magnetic_store_write_properties", flattenMagneticStoreWriteProperties(table.MagneticStoreWriteProperties)); err != nil { - return diag.FromErr(fmt.Errorf("error setting magnetic_store_write_properties: %w", err)) + if table.Schema != nil { + if err := d.Set("schema", []interface{}{flattenSchema(table.Schema)}); err != nil { + return diag.Errorf("setting schema: %s", err) + } + } else { + d.Set("schema", nil) } - d.Set("table_name", table.TableName) return nil } func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) if d.HasChangesExcept("tags", "tags_all") { - tableName, databaseName, err := TableParseID(d.Id()) - + tableName, databaseName, err := tableParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) } @@ -231,18 +264,24 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta 
inter TableName: aws.String(tableName), } + if d.HasChange("magnetic_store_write_properties") { + input.MagneticStoreWriteProperties = expandMagneticStoreWriteProperties(d.Get("magnetic_store_write_properties").([]interface{})) + } + if d.HasChange("retention_properties") { input.RetentionProperties = expandRetentionProperties(d.Get("retention_properties").([]interface{})) } - if d.HasChange("magnetic_store_write_properties") { - input.MagneticStoreWriteProperties = expandMagneticStoreWriteProperties(d.Get("magnetic_store_write_properties").([]interface{})) + if d.HasChange("schema") { + if v, ok := d.GetOk("schema"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Schema = expandSchema(v.([]interface{})[0].(map[string]interface{})) + } } - _, err = conn.UpdateTableWithContext(ctx, input) + _, err = conn.UpdateTable(ctx, input) if err != nil { - return diag.FromErr(fmt.Errorf("error updating Timestream Table (%s): %w", d.Id(), err)) + return diag.Errorf("updating Timestream Table (%s): %s", d.Id(), err) } } @@ -250,186 +289,329 @@ func resourceTableUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceTableDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TimestreamWriteConn() - - tableName, databaseName, err := TableParseID(d.Id()) + conn := meta.(*conns.AWSClient).TimestreamWriteClient(ctx) + tableName, databaseName, err := tableParseResourceID(d.Id()) if err != nil { return diag.FromErr(err) } log.Printf("[INFO] Deleting Timestream Table: %s", d.Id()) - _, err = conn.DeleteTableWithContext(ctx, &timestreamwrite.DeleteTableInput{ + _, err = conn.DeleteTable(ctx, &timestreamwrite.DeleteTableInput{ DatabaseName: aws.String(databaseName), TableName: aws.String(tableName), }) - if tfawserr.ErrCodeEquals(err, timestreamwrite.ErrCodeResourceNotFoundException) { + if errs.IsA[*types.ResourceNotFoundException](err) { return nil } if err != nil { - return
diag.FromErr(fmt.Errorf("error deleting Timestream Table (%s): %w", d.Id(), err)) + return diag.Errorf("deleting Timestream Table (%s): %s", d.Id(), err) } return nil } -func expandRetentionProperties(l []interface{}) *timestreamwrite.RetentionProperties { - if len(l) == 0 || l[0] == nil { +func findTableByTwoPartKey(ctx context.Context, conn *timestreamwrite.Client, databaseName, tableName string) (*types.Table, error) { + input := &timestreamwrite.DescribeTableInput{ + DatabaseName: aws.String(databaseName), + TableName: aws.String(tableName), + } + + output, err := conn.DescribeTable(ctx, input) + + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Table == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.Table, nil +} + +func expandRetentionProperties(tfList []interface{}) *types.RetentionProperties { + if len(tfList) == 0 || tfList[0] == nil { return nil } - tfMap, ok := l[0].(map[string]interface{}) + tfMap, ok := tfList[0].(map[string]interface{}) if !ok { return nil } - rp := &timestreamwrite.RetentionProperties{} + apiObject := &types.RetentionProperties{} if v, ok := tfMap["magnetic_store_retention_period_in_days"].(int); ok { - rp.MagneticStoreRetentionPeriodInDays = aws.Int64(int64(v)) + apiObject.MagneticStoreRetentionPeriodInDays = int64(v) } if v, ok := tfMap["memory_store_retention_period_in_hours"].(int); ok { - rp.MemoryStoreRetentionPeriodInHours = aws.Int64(int64(v)) + apiObject.MemoryStoreRetentionPeriodInHours = int64(v) } - return rp + return apiObject } -func flattenRetentionProperties(rp *timestreamwrite.RetentionProperties) []interface{} { - if rp == nil { +func flattenRetentionProperties(apiObject *types.RetentionProperties) []interface{} { + if apiObject == nil { return []interface{}{} } - m := map[string]interface{}{ -
"magnetic_store_retention_period_in_days": aws.Int64Value(rp.MagneticStoreRetentionPeriodInDays), - "memory_store_retention_period_in_hours": aws.Int64Value(rp.MemoryStoreRetentionPeriodInHours), + tfMap := map[string]interface{}{ + "magnetic_store_retention_period_in_days": apiObject.MagneticStoreRetentionPeriodInDays, + "memory_store_retention_period_in_hours": apiObject.MemoryStoreRetentionPeriodInHours, } - return []interface{}{m} + return []interface{}{tfMap} } -func expandMagneticStoreWriteProperties(l []interface{}) *timestreamwrite.MagneticStoreWriteProperties { - if len(l) == 0 || l[0] == nil { +func expandMagneticStoreWriteProperties(tfList []interface{}) *types.MagneticStoreWriteProperties { + if len(tfList) == 0 || tfList[0] == nil { return nil } - tfMap, ok := l[0].(map[string]interface{}) + tfMap, ok := tfList[0].(map[string]interface{}) if !ok { return nil } - rp := &timestreamwrite.MagneticStoreWriteProperties{ + apiObject := &types.MagneticStoreWriteProperties{ EnableMagneticStoreWrites: aws.Bool(tfMap["enable_magnetic_store_writes"].(bool)), } if v, ok := tfMap["magnetic_store_rejected_data_location"].([]interface{}); ok && len(v) > 0 { - rp.MagneticStoreRejectedDataLocation = expandMagneticStoreRejectedDataLocation(v) + apiObject.MagneticStoreRejectedDataLocation = expandMagneticStoreRejectedDataLocation(v) } - return rp + return apiObject } -func flattenMagneticStoreWriteProperties(rp *timestreamwrite.MagneticStoreWriteProperties) []interface{} { - if rp == nil { +func flattenMagneticStoreWriteProperties(apiObject *types.MagneticStoreWriteProperties) []interface{} { + if apiObject == nil { return []interface{}{} } - m := map[string]interface{}{ - "enable_magnetic_store_writes": aws.BoolValue(rp.EnableMagneticStoreWrites), - "magnetic_store_rejected_data_location": flattenMagneticStoreRejectedDataLocation(rp.MagneticStoreRejectedDataLocation), + tfMap := map[string]interface{}{ + "enable_magnetic_store_writes":
aws.ToBool(apiObject.EnableMagneticStoreWrites), + "magnetic_store_rejected_data_location": flattenMagneticStoreRejectedDataLocation(apiObject.MagneticStoreRejectedDataLocation), } - return []interface{}{m} + return []interface{}{tfMap} } -func expandMagneticStoreRejectedDataLocation(l []interface{}) *timestreamwrite.MagneticStoreRejectedDataLocation { - if len(l) == 0 || l[0] == nil { +func expandMagneticStoreRejectedDataLocation(tfList []interface{}) *types.MagneticStoreRejectedDataLocation { + if len(tfList) == 0 || tfList[0] == nil { return nil } - tfMap, ok := l[0].(map[string]interface{}) + tfMap, ok := tfList[0].(map[string]interface{}) if !ok { return nil } - rp := &timestreamwrite.MagneticStoreRejectedDataLocation{} + apiObject := &types.MagneticStoreRejectedDataLocation{} if v, ok := tfMap["s3_configuration"].([]interface{}); ok && len(v) > 0 { - rp.S3Configuration = expandS3Configuration(v) + apiObject.S3Configuration = expandS3Configuration(v) } - return rp + return apiObject } -func flattenMagneticStoreRejectedDataLocation(rp *timestreamwrite.MagneticStoreRejectedDataLocation) []interface{} { - if rp == nil { +func flattenMagneticStoreRejectedDataLocation(apiObject *types.MagneticStoreRejectedDataLocation) []interface{} { + if apiObject == nil { return []interface{}{} } - m := map[string]interface{}{ - "s3_configuration": flattenS3Configuration(rp.S3Configuration), + tfMap := map[string]interface{}{ + "s3_configuration": flattenS3Configuration(apiObject.S3Configuration), } - return []interface{}{m} + return []interface{}{tfMap} } -func expandS3Configuration(l []interface{}) *timestreamwrite.S3Configuration { - if len(l) == 0 || l[0] == nil { +func expandS3Configuration(tfList []interface{}) *types.S3Configuration { + if len(tfList) == 0 || tfList[0] == nil { return nil } - tfMap, ok := l[0].(map[string]interface{}) + tfMap, ok := tfList[0].(map[string]interface{}) if !ok { return nil } - rp := &timestreamwrite.S3Configuration{} + apiObject :=
&types.S3Configuration{} if v, ok := tfMap["bucket_name"].(string); ok && v != "" { - rp.BucketName = aws.String(v) + apiObject.BucketName = aws.String(v) } - if v, ok := tfMap["object_key_prefix"].(string); ok && v != "" { - rp.ObjectKeyPrefix = aws.String(v) + if v, ok := tfMap["encryption_option"].(string); ok && v != "" { + apiObject.EncryptionOption = types.S3EncryptionOption(v) } if v, ok := tfMap["kms_key_id"].(string); ok && v != "" { - rp.KmsKeyId = aws.String(v) + apiObject.KmsKeyId = aws.String(v) } - if v, ok := tfMap["encryption_option"].(string); ok && v != "" { - rp.EncryptionOption = aws.String(v) + if v, ok := tfMap["object_key_prefix"].(string); ok && v != "" { + apiObject.ObjectKeyPrefix = aws.String(v) } - return rp + return apiObject } -func flattenS3Configuration(rp *timestreamwrite.S3Configuration) []interface{} { - if rp == nil { +func flattenS3Configuration(apiObject *types.S3Configuration) []interface{} { + if apiObject == nil { return []interface{}{} } - m := map[string]interface{}{ - "bucket_name": aws.StringValue(rp.BucketName), - "object_key_prefix": aws.StringValue(rp.ObjectKeyPrefix), - "kms_key_id": aws.StringValue(rp.KmsKeyId), - "encryption_option": aws.StringValue(rp.EncryptionOption), + tfMap := map[string]interface{}{ + "bucket_name": aws.ToString(apiObject.BucketName), + "encryption_option": string(apiObject.EncryptionOption), + "kms_key_id": aws.ToString(apiObject.KmsKeyId), + "object_key_prefix": aws.ToString(apiObject.ObjectKeyPrefix), + } + + return []interface{}{tfMap} +} + +func expandSchema(tfMap map[string]interface{}) *types.Schema { + if tfMap == nil { + return nil + } + + apiObject := &types.Schema{} + + if v, ok := tfMap["composite_partition_key"].([]interface{}); ok && len(v) > 0 { + apiObject.CompositePartitionKey = expandPartitionKeys(v) + } + + return apiObject +} + +func expandPartitionKey(tfMap map[string]interface{}) *types.PartitionKey { + if tfMap == nil { + return nil + } + + apiObject := 
&types.PartitionKey{} + + if v, ok := tfMap["enforcement_in_record"].(string); ok && v != "" { + apiObject.EnforcementInRecord = types.PartitionKeyEnforcementLevel(v) } - return []interface{}{m} + if v, ok := tfMap["name"].(string); ok && v != "" { + apiObject.Name = aws.String(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = types.PartitionKeyType(v) + } + + return apiObject } -func TableParseID(id string) (string, string, error) { - idParts := strings.SplitN(id, ":", 2) - if len(idParts) != 2 || idParts[0] == "" || idParts[1] == "" { - return "", "", fmt.Errorf("unexpected format of ID (%s), expected table_name:database_name", id) +func expandPartitionKeys(tfList []interface{}) []types.PartitionKey { + if len(tfList) == 0 { + return nil } - return idParts[0], idParts[1], nil + + var apiObjects []types.PartitionKey + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandPartitionKey(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, *apiObject) + } + + return apiObjects +} + +func flattenSchema(apiObject *types.Schema) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CompositePartitionKey; v != nil { + tfMap["composite_partition_key"] = flattenPartitionKeys(v) + } + + return tfMap +} + +func flattenPartitionKey(apiObject *types.PartitionKey) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{ + "enforcement_in_record": apiObject.EnforcementInRecord, + "type": apiObject.Type, + } + + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.ToString(v) + } + + return tfMap +} + +func flattenPartitionKeys(apiObjects []types.PartitionKey) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + tfList = append(tfList, 
flattenPartitionKey(&apiObject)) + } + + return tfList +} + +const tableIDSeparator = ":" + +func tableCreateResourceID(tableName, databaseName string) string { + parts := []string{tableName, databaseName} + id := strings.Join(parts, tableIDSeparator) + + return id +} + +func tableParseResourceID(id string) (string, string, error) { + parts := strings.SplitN(id, tableIDSeparator, 2) + + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%[1]s), expected table_name%[2]sdatabase_name", id, tableIDSeparator) + } + + return parts[0], parts[1], nil } diff --git a/internal/service/timestreamwrite/table_test.go b/internal/service/timestreamwrite/table_test.go index 4bdfae06485..776452dcccc 100644 --- a/internal/service/timestreamwrite/table_test.go +++ b/internal/service/timestreamwrite/table_test.go @@ -1,43 +1,54 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package timestreamwrite_test import ( "context" + "errors" "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftimestreamwrite "github.com/hashicorp/terraform-provider-aws/internal/service/timestreamwrite" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/names" ) func TestAccTimestreamWriteTable_basic(t *testing.T) { ctx := acctest.Context(t) + var table types.Table rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) 
resourceName := "aws_timestreamwrite_table.test" dbResourceName := "aws_timestreamwrite_database.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "timestream", fmt.Sprintf("database/%[1]s/table/%[1]s", rName)), resource.TestCheckResourceAttrPair(resourceName, "database_name", dbResourceName, "database_name"), - resource.TestCheckResourceAttr(resourceName, "retention_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "false"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.magnetic_store_rejected_data_location.#", "0"), + resource.TestCheckResourceAttr(resourceName, "retention_properties.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.enforcement_in_record", ""), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.name", ""), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.type", "MEASURE"), resource.TestCheckResourceAttr(resourceName, "table_name", rName), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), @@ -53,19 +64,20 
@@ func TestAccTimestreamWriteTable_basic(t *testing.T) { func TestAccTimestreamWriteTable_magneticStoreWriteProperties(t *testing.T) { ctx := acctest.Context(t) + var table types.Table rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_timestreamwrite_table.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_magneticStoreWriteProperties(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "true"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.magnetic_store_rejected_data_location.#", "0"), @@ -79,7 +91,7 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties(t *testing.T) { { Config: testAccTableConfig_magneticStoreWriteProperties(rName, false), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "false"), ), @@ -87,7 +99,7 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties(t *testing.T) { { Config: testAccTableConfig_magneticStoreWriteProperties(rName, true), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + 
testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "true"), ), @@ -98,6 +110,7 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties(t *testing.T) { func TestAccTimestreamWriteTable_magneticStoreWriteProperties_s3Config(t *testing.T) { ctx := acctest.Context(t) + var table types.Table rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -105,14 +118,14 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties_s3Config(t *testin resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_magneticStoreWritePropertiesS3(rName, rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "true"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.magnetic_store_rejected_data_location.#", "1"), @@ -129,7 +142,7 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties_s3Config(t *testin { Config: testAccTableConfig_magneticStoreWritePropertiesS3(rName, rNameUpdated), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), 
resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "true"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.magnetic_store_rejected_data_location.#", "1"), @@ -144,20 +157,21 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties_s3Config(t *testin func TestAccTimestreamWriteTable_magneticStoreWriteProperties_s3KMSConfig(t *testing.T) { ctx := acctest.Context(t) + var table types.Table rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_timestreamwrite_table.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_magneticStoreWritePropertiesS3KMS(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.enable_magnetic_store_writes", "true"), resource.TestCheckResourceAttr(resourceName, "magnetic_store_write_properties.0.magnetic_store_rejected_data_location.#", "1"), @@ -179,19 +193,20 @@ func TestAccTimestreamWriteTable_magneticStoreWriteProperties_s3KMSConfig(t *tes func TestAccTimestreamWriteTable_disappears(t *testing.T) { ctx := acctest.Context(t) + var table types.Table resourceName := "aws_timestreamwrite_table.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: 
func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), acctest.CheckResourceDisappears(ctx, acctest.Provider, tftimestreamwrite.ResourceTable(), resourceName), acctest.CheckResourceDisappears(ctx, acctest.Provider, tftimestreamwrite.ResourceTable(), resourceName), ), @@ -203,19 +218,20 @@ func TestAccTimestreamWriteTable_disappears(t *testing.T) { func TestAccTimestreamWriteTable_retentionProperties(t *testing.T) { ctx := acctest.Context(t) + var table types.Table rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_timestreamwrite_table.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_retentionProperties(rName, 30, 120), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "retention_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "retention_properties.0.magnetic_store_retention_period_in_days", "30"), resource.TestCheckResourceAttr(resourceName, "retention_properties.0.memory_store_retention_period_in_hours", "120"), @@ -229,7 +245,7 @@ func 
TestAccTimestreamWriteTable_retentionProperties(t *testing.T) { { Config: testAccTableConfig_retentionProperties(rName, 300, 7), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "retention_properties.#", "1"), resource.TestCheckResourceAttr(resourceName, "retention_properties.0.magnetic_store_retention_period_in_days", "300"), resource.TestCheckResourceAttr(resourceName, "retention_properties.0.memory_store_retention_period_in_hours", "7"), @@ -243,7 +259,7 @@ func TestAccTimestreamWriteTable_retentionProperties(t *testing.T) { { Config: testAccTableConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "retention_properties.#", "1"), ), }, @@ -253,19 +269,20 @@ func TestAccTimestreamWriteTable_retentionProperties(t *testing.T) { func TestAccTimestreamWriteTable_tags(t *testing.T) { ctx := acctest.Context(t) + var table types.Table resourceName := "aws_timestreamwrite_table.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, timestreamwrite.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckTableDestroy(ctx), Steps: []resource.TestStep{ { Config: testAccTableConfig_tags1(rName, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), resource.TestCheckResourceAttr(resourceName, 
"tags_all.%", "1"), @@ -275,7 +292,7 @@ func TestAccTimestreamWriteTable_tags(t *testing.T) { { Config: testAccTableConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), @@ -287,7 +304,7 @@ func TestAccTimestreamWriteTable_tags(t *testing.T) { { Config: testAccTableConfig_tags1(rName, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckTableExists(ctx, resourceName), + testAccCheckTableExists(ctx, resourceName, &table), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), @@ -303,29 +320,68 @@ func TestAccTimestreamWriteTable_tags(t *testing.T) { }) } +func TestAccTimestreamWriteTable_schema(t *testing.T) { + ctx := acctest.Context(t) + var table1, table2 types.Table + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_timestreamwrite_table.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, names.TimestreamWriteEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckTableDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccTableConfig_schema(rName, "OPTIONAL"), + Check: resource.ComposeTestCheckFunc( + testAccCheckTableExists(ctx, resourceName, &table1), + resource.TestCheckResourceAttr(resourceName, "schema.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
"schema.0.composite_partition_key.0.enforcement_in_record", "OPTIONAL"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.name", "attr1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.type", "DIMENSION"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccTableConfig_schema(rName, "REQUIRED"), + Check: resource.ComposeTestCheckFunc( + testAccCheckTableExists(ctx, resourceName, &table2), + testAccCheckTableNotRecreated(&table2, &table1), + resource.TestCheckResourceAttr(resourceName, "schema.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.#", "1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.enforcement_in_record", "REQUIRED"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.name", "attr1"), + resource.TestCheckResourceAttr(resourceName, "schema.0.composite_partition_key.0.type", "DIMENSION"), + ), + }, + }, + }) +} + func testAccCheckTableDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_timestreamwrite_table" { continue } - tableName, dbName, err := tftimestreamwrite.TableParseID(rs.Primary.ID) + tableName, databaseName, err := tftimestreamwrite.TableParseResourceID(rs.Primary.ID) if err != nil { return err } - input := &timestreamwrite.DescribeTableInput{ - DatabaseName: aws.String(dbName), - TableName: aws.String(tableName), - } - - output, err := conn.DescribeTableWithContext(ctx, input) + _, err = tftimestreamwrite.FindTableByTwoPartKey(ctx, conn, databaseName, tableName) - if tfawserr.ErrCodeEquals(err, timestreamwrite.ErrCodeResourceNotFoundException) { 
+ if tfresource.NotFound(err) { continue } @@ -333,91 +389,82 @@ func testAccCheckTableDestroy(ctx context.Context) resource.TestCheckFunc { return err } - if output != nil && output.Table != nil { - return fmt.Errorf("Timestream Table (%s) still exists", rs.Primary.ID) - } + return fmt.Errorf("Timestream Table %s still exists", rs.Primary.ID) } return nil } } -func testAccCheckTableExists(ctx context.Context, n string) resource.TestCheckFunc { +func testAccCheckTableExists(ctx context.Context, n string, v *types.Table) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - if rs.Primary.ID == "" { - return fmt.Errorf("no resource ID is set") - } - - tableName, dbName, err := tftimestreamwrite.TableParseID(rs.Primary.ID) + tableName, databaseName, err := tftimestreamwrite.TableParseResourceID(rs.Primary.ID) if err != nil { return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TimestreamWriteClient(ctx) - input := &timestreamwrite.DescribeTableInput{ - DatabaseName: aws.String(dbName), - TableName: aws.String(tableName), - } - - output, err := conn.DescribeTableWithContext(ctx, input) + output, err := tftimestreamwrite.FindTableByTwoPartKey(ctx, conn, databaseName, tableName) if err != nil { return err } - if output == nil || output.Table == nil { - return fmt.Errorf("Timestream Table (%s) not found", rs.Primary.ID) + *v = *output + + return err + } +} + +func testAccCheckTableNotRecreated(i, j *types.Table) resource.TestCheckFunc { + return func(s *terraform.State) error { + if !aws.ToTime(i.CreationTime).Equal(aws.ToTime(j.CreationTime)) { + return errors.New("Timestream Table was recreated") } return nil } } -func testAccTableBaseConfig(rName string) string { +func testAccTableConfig_base(rName string) string { return fmt.Sprintf(` resource "aws_timestreamwrite_database" "test" { - 
database_name = %q + database_name = %[1]q } `, rName) } func testAccTableConfig_basic(rName string) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource "aws_timestreamwrite_table" "test" { database_name = aws_timestreamwrite_database.test.database_name - table_name = %q + table_name = %[1]q } `, rName)) } func testAccTableConfig_magneticStoreWriteProperties(rName string, enable bool) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource "aws_timestreamwrite_table" "test" { database_name = aws_timestreamwrite_database.test.database_name - table_name = %q + table_name = %[1]q magnetic_store_write_properties { - enable_magnetic_store_writes = %t + enable_magnetic_store_writes = %[2]t } } `, rName, enable)) } func testAccTableConfig_magneticStoreWritePropertiesS3(rName, prefix string) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q force_destroy = true @@ -442,9 +489,7 @@ resource "aws_timestreamwrite_table" "test" { } func testAccTableConfig_magneticStoreWritePropertiesS3KMS(rName string) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q force_destroy = true @@ -476,25 +521,21 @@ resource "aws_timestreamwrite_table" "test" { } func testAccTableConfig_retentionProperties(rName string, magneticStoreDays, memoryStoreHours int) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource 
"aws_timestreamwrite_table" "test" { database_name = aws_timestreamwrite_database.test.database_name - table_name = %q + table_name = %[1]q retention_properties { - magnetic_store_retention_period_in_days = %d - memory_store_retention_period_in_hours = %d + magnetic_store_retention_period_in_days = %[2]d + memory_store_retention_period_in_hours = %[3]d } } `, rName, magneticStoreDays, memoryStoreHours)) } func testAccTableConfig_tags1(rName, tagKey1, tagValue1 string) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource "aws_timestreamwrite_table" "test" { database_name = aws_timestreamwrite_database.test.database_name table_name = %[1]q @@ -507,9 +548,7 @@ resource "aws_timestreamwrite_table" "test" { } func testAccTableConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { - return acctest.ConfigCompose( - testAccTableBaseConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` resource "aws_timestreamwrite_table" "test" { database_name = aws_timestreamwrite_database.test.database_name table_name = %[1]q @@ -521,3 +560,20 @@ resource "aws_timestreamwrite_table" "test" { } `, rName, tagKey1, tagValue1, tagKey2, tagValue2)) } + +func testAccTableConfig_schema(rName, enforcementInRecord string) string { + return acctest.ConfigCompose(testAccTableConfig_base(rName), fmt.Sprintf(` +resource "aws_timestreamwrite_table" "test" { + database_name = aws_timestreamwrite_database.test.database_name + table_name = %[1]q + + schema { + composite_partition_key { + enforcement_in_record = %[2]q + name = "attr1" + type = "DIMENSION" + } + } +} +`, rName, enforcementInRecord)) +} diff --git a/internal/service/timestreamwrite/tags_gen.go b/internal/service/timestreamwrite/tags_gen.go index 84e5911af31..4c0f767afca 100644 --- a/internal/service/timestreamwrite/tags_gen.go +++ 
b/internal/service/timestreamwrite/tags_gen.go @@ -5,24 +5,24 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/timestreamwrite" - "github.com/aws/aws-sdk-go/service/timestreamwrite/timestreamwriteiface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/timestreamwrite" + awstypes "github.com/aws/aws-sdk-go-v2/service/timestreamwrite/types" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists timestreamwrite service tags. +// listTags lists timestreamwrite service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn timestreamwriteiface.TimestreamWriteAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *timestreamwrite.Client, identifier string) (tftags.KeyValueTags, error) { input := &timestreamwrite.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } - output, err := conn.ListTagsForResourceWithContext(ctx, input) + output, err := conn.ListTagsForResource(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn timestreamwriteiface.TimestreamWriteAPI, // ListTags lists timestreamwrite service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).TimestreamWriteConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).TimestreamWriteClient(ctx), identifier) if err != nil { return err @@ -50,11 +50,11 @@ func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier stri // []*SERVICE.Tag handling // Tags returns timestreamwrite service tags. -func Tags(tags tftags.KeyValueTags) []*timestreamwrite.Tag { - result := make([]*timestreamwrite.Tag, 0, len(tags)) +func Tags(tags tftags.KeyValueTags) []awstypes.Tag { + result := make([]awstypes.Tag, 0, len(tags)) for k, v := range tags.Map() { - tag := &timestreamwrite.Tag{ + tag := awstypes.Tag{ Key: aws.String(k), Value: aws.String(v), } @@ -66,19 +66,19 @@ func Tags(tags tftags.KeyValueTags) []*timestreamwrite.Tag { } // KeyValueTags creates tftags.KeyValueTags from timestreamwrite service tags. -func KeyValueTags(ctx context.Context, tags []*timestreamwrite.Tag) tftags.KeyValueTags { +func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags { m := make(map[string]*string, len(tags)) for _, tag := range tags { - m[aws.StringValue(tag.Key)] = tag.Value + m[aws.ToString(tag.Key)] = tag.Value } return tftags.New(ctx, m) } -// GetTagsIn returns timestreamwrite service tags from Context. +// getTagsIn returns timestreamwrite service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*timestreamwrite.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*timestreamwrite.Tag { return nil } -// SetTagsOut sets timestreamwrite service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*timestreamwrite.Tag) { +// setTagsOut sets timestreamwrite service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates timestreamwrite service tags. +// updateTags updates timestreamwrite service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn timestreamwriteiface.TimestreamWriteAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *timestreamwrite.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -107,10 +107,10 @@ func UpdateTags(ctx context.Context, conn timestreamwriteiface.TimestreamWriteAP if len(removedTags) > 0 { input := &timestreamwrite.UntagResourceInput{ ResourceARN: aws.String(identifier), - TagKeys: aws.StringSlice(removedTags.Keys()), + TagKeys: removedTags.Keys(), } - _, err := conn.UntagResourceWithContext(ctx, input) + _, err := conn.UntagResource(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -125,7 +125,7 @@ func UpdateTags(ctx context.Context, conn timestreamwriteiface.TimestreamWriteAP Tags: Tags(updatedTags), } - _, err := conn.TagResourceWithContext(ctx, input) + _, err := conn.TagResource(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn timestreamwriteiface.TimestreamWriteAP // UpdateTags updates timestreamwrite service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).TimestreamWriteConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).TimestreamWriteClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/transcribe/generate.go b/internal/service/transcribe/generate.go index 3a8547b87a1..26a51406962 100644 --- a/internal/service/transcribe/generate.go +++ b/internal/service/transcribe/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -TagInIDElem=ResourceArn -ListTags -ListTagsInIDElem=ResourceArn -ServiceTagsSlice -UpdateTags -UntagInTagsElem=TagKeys +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package transcribe diff --git a/internal/service/transcribe/language_model.go b/internal/service/transcribe/language_model.go index 65a0f8fc300..3e1d6028580 100644 --- a/internal/service/transcribe/language_model.go +++ b/internal/service/transcribe/language_model.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transcribe import ( @@ -106,13 +109,13 @@ const ( ) func resourceLanguageModelCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) in := &transcribe.CreateLanguageModelInput{ BaseModelName: types.BaseModelName(d.Get("base_model_name").(string)), LanguageCode: types.CLMLanguageCode(d.Get("language_code").(string)), ModelName: aws.String(d.Get("model_name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("input_data_config"); ok && len(v.([]interface{})) > 0 { @@ -146,7 +149,7 @@ func resourceLanguageModelCreate(ctx context.Context, d *schema.ResourceData, me } func resourceLanguageModelRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) out, err := FindLanguageModelByName(ctx, conn, d.Id()) @@ -186,7 +189,7 @@ func resourceLanguageModelUpdate(ctx context.Context, d *schema.ResourceData, me } func resourceLanguageModelDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) log.Printf("[INFO] Deleting Transcribe LanguageModel %s", d.Id()) diff --git a/internal/service/transcribe/language_model_test.go b/internal/service/transcribe/language_model_test.go index 17e0e11af8c..c3157dfd8f9 100644 --- a/internal/service/transcribe/language_model_test.go +++ b/internal/service/transcribe/language_model_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transcribe_test import ( @@ -138,7 +141,7 @@ func TestAccTranscribeLanguageModel_disappears(t *testing.T) { func testAccCheckLanguageModelDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transcribe_language_model" { @@ -171,7 +174,7 @@ func testAccCheckLanguageModelExists(ctx context.Context, name string, languageM return fmt.Errorf("No Transcribe LanguageModel is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) resp, err := tftranscribe.FindLanguageModelByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -185,7 +188,7 @@ func testAccCheckLanguageModelExists(ctx context.Context, name string, languageM } func testAccLanguageModelsPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) input := &transcribe.ListLanguageModelsInput{} diff --git a/internal/service/transcribe/medical_vocabulary.go b/internal/service/transcribe/medical_vocabulary.go index edad1adac81..7ed117a44bd 100644 --- a/internal/service/transcribe/medical_vocabulary.go +++ b/internal/service/transcribe/medical_vocabulary.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transcribe import ( @@ -76,14 +79,14 @@ func ResourceMedicalVocabulary() *schema.Resource { } func resourceMedicalVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) vocabularyName := d.Get("vocabulary_name").(string) in := &transcribe.CreateMedicalVocabularyInput{ VocabularyName: aws.String(vocabularyName), VocabularyFileUri: aws.String(d.Get("vocabulary_file_uri").(string)), LanguageCode: types.LanguageCode(d.Get("language_code").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateMedicalVocabulary(ctx, in) @@ -105,7 +108,7 @@ func resourceMedicalVocabularyCreate(ctx context.Context, d *schema.ResourceData } func resourceMedicalVocabularyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) out, err := FindMedicalVocabularyByName(ctx, conn, d.Id()) @@ -136,7 +139,7 @@ func resourceMedicalVocabularyRead(ctx context.Context, d *schema.ResourceData, } func resourceMedicalVocabularyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &transcribe.UpdateMedicalVocabularyInput{ @@ -163,7 +166,7 @@ func resourceMedicalVocabularyUpdate(ctx context.Context, d *schema.ResourceData } func resourceMedicalVocabularyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) log.Printf("[INFO] Deleting Transcribe MedicalVocabulary %s", d.Id()) diff --git 
a/internal/service/transcribe/medical_vocabulary_test.go b/internal/service/transcribe/medical_vocabulary_test.go index 80ab28eaa71..dec05ad9857 100644 --- a/internal/service/transcribe/medical_vocabulary_test.go +++ b/internal/service/transcribe/medical_vocabulary_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transcribe_test import ( @@ -180,7 +183,7 @@ func TestAccTranscribeMedicalVocabulary_disappears(t *testing.T) { func testAccCheckMedicalVocabularyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transcribe_medical_vocabulary" { @@ -215,7 +218,7 @@ func testAccCheckMedicalVocabularyExists(ctx context.Context, name string, medic return fmt.Errorf("No Transcribe MedicalVocabulary is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) resp, err := tftranscribe.FindMedicalVocabularyByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -229,7 +232,7 @@ func testAccCheckMedicalVocabularyExists(ctx context.Context, name string, medic } func testAccMedicalVocabularyPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) input := &transcribe.ListMedicalVocabulariesInput{} diff --git a/internal/service/transcribe/service_package_gen.go b/internal/service/transcribe/service_package_gen.go index 3a590e5a6bd..2dc96453ab1 100644 --- a/internal/service/transcribe/service_package_gen.go +++ b/internal/service/transcribe/service_package_gen.go @@ -5,6 +5,9 @@ package transcribe import ( "context" + aws_sdkv2 
"github.com/aws/aws-sdk-go-v2/aws" + transcribe_sdkv2 "github.com/aws/aws-sdk-go-v2/service/transcribe" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -64,4 +67,17 @@ func (p *servicePackage) ServicePackageName() string { return names.Transcribe } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*transcribe_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return transcribe_sdkv2.NewFromConfig(cfg, func(o *transcribe_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = transcribe_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/transcribe/sweep.go b/internal/service/transcribe/sweep.go index 8479258824a..e69e6ea195c 100644 --- a/internal/service/transcribe/sweep.go +++ b/internal/service/transcribe/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go-v2/service/transcribe" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -52,12 +54,12 @@ func init() { func sweepLanguageModels(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).TranscribeClient() + conn := client.TranscribeClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &transcribe.ListLanguageModelsInput{} var errs *multierror.Error @@ -88,7 +90,7 @@ func sweepLanguageModels(region string) error { } } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Transcribe Language Models for %s: %w", region, err)) } @@ -102,12 +104,12 @@ func sweepLanguageModels(region string) error { func sweepMedicalVocabularies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).TranscribeClient() + conn := client.TranscribeClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &transcribe.ListMedicalVocabulariesInput{} var errs *multierror.Error @@ -139,7 +141,7 @@ func sweepMedicalVocabularies(region string) error { in.NextToken = out.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, 
sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Transcribe Medical Vocabularies for %s: %w", region, err)) } @@ -153,12 +155,12 @@ func sweepMedicalVocabularies(region string) error { func sweepVocabularies(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).TranscribeClient() + conn := client.TranscribeClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &transcribe.ListVocabulariesInput{} var errs *multierror.Error @@ -190,7 +192,7 @@ func sweepVocabularies(region string) error { in.NextToken = out.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Transcribe Vocabularies for %s: %w", region, err)) } @@ -204,12 +206,12 @@ func sweepVocabularies(region string) error { func sweepVocabularyFilters(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).TranscribeClient() + conn := client.TranscribeClient(ctx) sweepResources := make([]sweep.Sweepable, 0) in := &transcribe.ListVocabularyFiltersInput{} var errs *multierror.Error @@ -242,7 +244,7 @@ func sweepVocabularyFilters(region string) error { in.NextToken = out.NextToken } - if err := sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err := sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping Transcribe Vocabulary Filters for %s: %w", region, err)) } diff --git 
a/internal/service/transcribe/tags_gen.go b/internal/service/transcribe/tags_gen.go index e725a28f762..464e577ef68 100644 --- a/internal/service/transcribe/tags_gen.go +++ b/internal/service/transcribe/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists transcribe service tags. +// listTags lists transcribe service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *transcribe.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *transcribe.Client, identifier string) (tftags.KeyValueTags, error) { input := &transcribe.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *transcribe.Client, identifier string) ( // ListTags lists transcribe service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).TranscribeClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).TranscribeClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns transcribe service tags from Context. +// getTagsIn returns transcribe service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets transcribe service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets transcribe service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates transcribe service tags. +// updateTags updates transcribe service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *transcribe.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *transcribe.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *transcribe.Client, identifier string, // UpdateTags updates transcribe service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).TranscribeClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).TranscribeClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/transcribe/validate.go b/internal/service/transcribe/validate.go index 8b17fa74b2a..33e3765b6d1 100644 --- a/internal/service/transcribe/validate.go +++ b/internal/service/transcribe/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transcribe import ( diff --git a/internal/service/transcribe/vocabulary.go b/internal/service/transcribe/vocabulary.go index 16d18d9856e..41d1847d42c 100644 --- a/internal/service/transcribe/vocabulary.go +++ b/internal/service/transcribe/vocabulary.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transcribe import ( @@ -90,12 +93,12 @@ const ( ) func resourceVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) in := &transcribe.CreateVocabularyInput{ VocabularyName: aws.String(d.Get("vocabulary_name").(string)), LanguageCode: types.LanguageCode(d.Get("language_code").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("vocabulary_file_uri"); ok { @@ -125,7 +128,7 @@ func resourceVocabularyCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceVocabularyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) out, err := FindVocabularyByName(ctx, conn, d.Id()) @@ -156,7 +159,7 @@ func resourceVocabularyRead(ctx context.Context, d *schema.ResourceData, 
meta in } func resourceVocabularyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &transcribe.UpdateVocabularyInput{ @@ -187,7 +190,7 @@ func resourceVocabularyUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceVocabularyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) log.Printf("[INFO] Deleting Transcribe Vocabulary %s", d.Id()) diff --git a/internal/service/transcribe/vocabulary_filter.go b/internal/service/transcribe/vocabulary_filter.go index ca2a1ec66b4..14409313a77 100644 --- a/internal/service/transcribe/vocabulary_filter.go +++ b/internal/service/transcribe/vocabulary_filter.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transcribe import ( @@ -92,12 +95,12 @@ const ( ) func resourceVocabularyFilterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) in := &transcribe.CreateVocabularyFilterInput{ VocabularyFilterName: aws.String(d.Get("vocabulary_filter_name").(string)), LanguageCode: types.LanguageCode(d.Get("language_code").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("vocabulary_filter_file_uri"); ok { @@ -123,7 +126,7 @@ func resourceVocabularyFilterCreate(ctx context.Context, d *schema.ResourceData, } func resourceVocabularyFilterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) out, err := FindVocabularyFilterByName(ctx, conn, d.Id()) @@ -160,7 +163,7 @@ func resourceVocabularyFilterRead(ctx context.Context, d *schema.ResourceData, m } func resourceVocabularyFilterUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &transcribe.UpdateVocabularyFilterInput{ @@ -186,7 +189,7 @@ func resourceVocabularyFilterUpdate(ctx context.Context, d *schema.ResourceData, } func resourceVocabularyFilterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TranscribeClient() + conn := meta.(*conns.AWSClient).TranscribeClient(ctx) log.Printf("[INFO] Deleting Transcribe VocabularyFilter %s", d.Id()) diff --git a/internal/service/transcribe/vocabulary_filter_test.go b/internal/service/transcribe/vocabulary_filter_test.go index e686c64f920..e98511dd49b 100644 --- 
a/internal/service/transcribe/vocabulary_filter_test.go +++ b/internal/service/transcribe/vocabulary_filter_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transcribe_test import ( @@ -215,7 +218,7 @@ func TestAccTranscribeVocabularyFilter_disappears(t *testing.T) { func testAccCheckVocabularyFilterDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transcribe_vocabulary_filter" { @@ -250,7 +253,7 @@ func testAccCheckVocabularyFilterExists(ctx context.Context, name string, vocabu return create.Error(names.Transcribe, create.ErrActionCheckingExistence, tftranscribe.ResNameVocabularyFilter, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) resp, err := tftranscribe.FindVocabularyFilterByName(ctx, conn, rs.Primary.ID) @@ -265,7 +268,7 @@ func testAccCheckVocabularyFilterExists(ctx context.Context, name string, vocabu } func testAccVocabularyFiltersPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) input := &transcribe.ListVocabularyFiltersInput{} _, err := conn.ListVocabularyFilters(ctx, input) diff --git a/internal/service/transcribe/vocabulary_test.go b/internal/service/transcribe/vocabulary_test.go index 61c23877ce2..dd591f78b95 100644 --- a/internal/service/transcribe/vocabulary_test.go +++ b/internal/service/transcribe/vocabulary_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transcribe_test import ( @@ -215,7 +218,7 @@ func TestAccTranscribeVocabulary_disappears(t *testing.T) { func testAccCheckVocabularyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transcribe_vocabulary" { @@ -250,7 +253,7 @@ func testAccCheckVocabularyExists(ctx context.Context, name string, vocabulary * return create.Error(names.Transcribe, create.ErrActionCheckingExistence, tftranscribe.ResNameVocabulary, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) resp, err := tftranscribe.FindVocabularyByName(ctx, conn, rs.Primary.ID) if err != nil { @@ -264,7 +267,7 @@ func testAccCheckVocabularyExists(ctx context.Context, name string, vocabulary * } func testAccVocabulariesPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).TranscribeClient(ctx) input := &transcribe.ListVocabulariesInput{} diff --git a/internal/service/transfer/acc_test.go b/internal/service/transfer/acc_test.go index 7ea9f37fcef..9012979df87 100644 --- a/internal/service/transfer/acc_test.go +++ b/internal/service/transfer/acc_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -10,7 +13,7 @@ import ( ) func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) input := &transfer.ListServersInput{} diff --git a/internal/service/transfer/access.go b/internal/service/transfer/access.go index 570db4c72f9..7d299a2befd 100644 --- a/internal/service/transfer/access.go +++ b/internal/service/transfer/access.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer import ( @@ -124,7 +127,7 @@ func ResourceAccess() *schema.Resource { func resourceAccessCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) externalID := d.Get("external_id").(string) serverID := d.Get("server_id").(string) @@ -177,7 +180,7 @@ func resourceAccessCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceAccessRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID, externalID, err := AccessParseResourceID(d.Id()) @@ -185,7 +188,7 @@ func resourceAccessRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "parsing Transfer Access ID: %s", err) } - access, err := FindAccessByServerIDAndExternalID(ctx, conn, serverID, externalID) + access, err := FindAccessByTwoPartKey(ctx, conn, serverID, externalID) if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] Transfer Access (%s) not found, removing from state", d.Id()) @@ -224,7 +227,7 @@ func resourceAccessRead(ctx context.Context, d *schema.ResourceData, meta 
interf func resourceAccessUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID, externalID, err := AccessParseResourceID(d.Id()) @@ -278,7 +281,7 @@ func resourceAccessUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceAccessDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID, externalID, err := AccessParseResourceID(d.Id()) diff --git a/internal/service/transfer/access_test.go b/internal/service/transfer/access_test.go index dadb23ba5ba..5bbcb4ed6a0 100644 --- a/internal/service/transfer/access_test.go +++ b/internal/service/transfer/access_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -182,9 +185,9 @@ func testAccCheckAccessExists(ctx context.Context, n string, v *transfer.Describ return fmt.Errorf("error parsing Transfer Access ID: %w", err) } - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) - output, err := tftransfer.FindAccessByServerIDAndExternalID(ctx, conn, serverID, externalID) + output, err := tftransfer.FindAccessByTwoPartKey(ctx, conn, serverID, externalID) if err != nil { return err @@ -198,7 +201,7 @@ func testAccCheckAccessExists(ctx context.Context, n string, v *transfer.Describ func testAccCheckAccessDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transfer_access" { 
@@ -210,7 +213,7 @@ func testAccCheckAccessDestroy(ctx context.Context) resource.TestCheckFunc { if err != nil { return fmt.Errorf("error parsing Transfer Access ID: %w", err) } - _, err = tftransfer.FindAccessByServerIDAndExternalID(ctx, conn, serverID, externalID) + _, err = tftransfer.FindAccessByTwoPartKey(ctx, conn, serverID, externalID) if tfresource.NotFound(err) { continue diff --git a/internal/service/transfer/agreement.go b/internal/service/transfer/agreement.go new file mode 100644 index 00000000000..15b0de8b78c --- /dev/null +++ b/internal/service/transfer/agreement.go @@ -0,0 +1,210 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package transfer + +import ( + "context" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/transfer" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKResource("aws_transfer_agreement", name="Agreement") +// @Tags(identifierAttribute="agreement_id") +func ResourceAgreement() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceAgreementCreate, + ReadWithoutTimeout: resourceAgreementRead, + UpdateWithoutTimeout: resourceAgreementUpdate, + DeleteWithoutTimeout: resourceAgreementDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Schema: map[string]*schema.Schema{ + "access_role": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, 
+ "agreement_id": { + Type: schema.TypeString, + Computed: true, + }, + "base_directory": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "local_profile_id": { + Type: schema.TypeString, + Required: true, + }, + "partner_profile_id": { + Type: schema.TypeString, + Required: true, + }, + "server_id": { + Type: schema.TypeString, + Required: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + }, + CustomizeDiff: verify.SetTagsDiff, + } +} + +func resourceAgreementCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + serverID := d.Get("server_id").(string) + input := &transfer.CreateAgreementInput{ + AccessRole: aws.String(d.Get("access_role").(string)), + BaseDirectory: aws.String(d.Get("base_directory").(string)), + LocalProfileId: aws.String(d.Get("local_profile_id").(string)), + PartnerProfileId: aws.String(d.Get("partner_profile_id").(string)), + ServerId: aws.String(serverID), + Tags: getTagsIn(ctx), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + output, err := conn.CreateAgreementWithContext(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "creating Transfer Agreement: %s", err) + } + + d.SetId(AgreementCreateResourceID(serverID, aws.StringValue(output.AgreementId))) + + return append(diags, resourceAgreementRead(ctx, d, meta)...) 
+} + +func resourceAgreementRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + serverID, agreementID, err := AgreementParseResourceID(d.Id()) + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + output, err := FindAgreementByTwoPartKey(ctx, conn, serverID, agreementID) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Transfer Agreement (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading Transfer Agreement (%s): %s", d.Id(), err) + } + + d.Set("access_role", output.AccessRole) + d.Set("agreement_id", output.AgreementId) + d.Set("base_directory", output.BaseDirectory) + d.Set("description", output.Description) + d.Set("local_profile_id", output.LocalProfileId) + d.Set("partner_profile_id", output.PartnerProfileId) + d.Set("server_id", output.ServerId) + d.Set("status", output.Status) + setTagsOut(ctx, output.Tags) + + return diags +} + +func resourceAgreementUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + serverID, agreementID, err := AgreementParseResourceID(d.Id()) + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + if d.HasChangesExcept("tags", "tags_all") { + input := &transfer.UpdateAgreementInput{ + AgreementId: aws.String(agreementID), + ServerId: aws.String(serverID), + } + + if d.HasChange("access_role") { + input.AccessRole = aws.String(d.Get("access_role").(string)) + } + + if d.HasChange("base_directory") { + input.BaseDirectory = aws.String(d.Get("base_directory").(string)) + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + + if d.HasChange("local_profile_id") { + input.LocalProfileId = 
aws.String(d.Get("local_profile_id").(string)) + } + + if d.HasChange("partner_profile_id") { + input.PartnerProfileId = aws.String(d.Get("partner_profile_id").(string)) + } + + _, err := conn.UpdateAgreementWithContext(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "updating Transfer Agreement (%s): %s", d.Id(), err) + } + } + + return append(diags, resourceAgreementRead(ctx, d, meta)...) +} + +func resourceAgreementDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + serverID, agreementID, err := AgreementParseResourceID(d.Id()) + + if err != nil { + return sdkdiag.AppendFromErr(diags, err) + } + + log.Printf("[DEBUG] Deleting Transfer Agreement: %s", d.Id()) + _, err = conn.DeleteAgreementWithContext(ctx, &transfer.DeleteAgreementInput{ + AgreementId: aws.String(agreementID), + ServerId: aws.String(serverID), + }) + + if tfawserr.ErrCodeEquals(err, transfer.ErrCodeResourceNotFoundException) { + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Transfer Agreement (%s): %s", d.Id(), err) + } + + return diags +} diff --git a/internal/service/transfer/agreement_test.go b/internal/service/transfer/agreement_test.go new file mode 100644 index 00000000000..f36b90e2f0d --- /dev/null +++ b/internal/service/transfer/agreement_test.go @@ -0,0 +1,295 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package transfer_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/transfer" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftransfer "github.com/hashicorp/terraform-provider-aws/internal/service/transfer" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func testAccAgreement_basic(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedAgreement + baseDirectory1 := "/DOC-EXAMPLE-BUCKET/home/mydirectory1" + baseDirectory2 := "/DOC-EXAMPLE-BUCKET/home/mydirectory2" + resourceName := "aws_transfer_agreement.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAgreementDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAgreementConfig_basic(rName, baseDirectory1), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAgreementExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "base_directory", baseDirectory1), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"status"}, + }, + { + Config: testAccAgreementConfig_basic(rName, baseDirectory2), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAgreementExists(ctx, resourceName, 
&conf), + resource.TestCheckResourceAttr(resourceName, "base_directory", baseDirectory2), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + }, + }) +} + +func testAccAgreement_disappears(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedAgreement + resourceName := "aws_transfer_agreement.test" + baseDirectory := "/DOC-EXAMPLE-BUCKET/home/mydirectory" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAgreementDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAgreementConfig_basic(rName, baseDirectory), + Check: resource.ComposeTestCheckFunc( + testAccCheckAgreementExists(ctx, resourceName, &conf), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tftransfer.ResourceAgreement(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccAgreement_tags(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedAgreement + baseDirectory := "/DOC-EXAMPLE-BUCKET/home/mydirectory" + resourceName := "aws_transfer_agreement.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckAgreementDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccAgreementConfig_tags1(rName, baseDirectory, "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAgreementExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, 
"tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"status"}, + }, + { + Config: testAccAgreementConfig_tags2(rName, baseDirectory, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAgreementExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAgreementConfig_tags1(rName, baseDirectory, "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAgreementExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckAgreementExists(ctx context.Context, n string, v *transfer.DescribedAgreement) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Transfer Agreement ID is set") + } + + serverID, agreementID, err := tftransfer.AgreementParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) + + output, err := tftransfer.FindAgreementByTwoPartKey(ctx, conn, serverID, agreementID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccCheckAgreementDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != 
"aws_transfer_agreement" { + continue + } + + serverID, agreementID, err := tftransfer.AgreementParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + _, err = tftransfer.FindAgreementByTwoPartKey(ctx, conn, serverID, agreementID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Transfer Agreement %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccAgreementConfig_base(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %[1]q + assume_role_policy = < 0 && v.([]interface{})[0] != nil { + input.As2Config = expandAs2Config(v.([]interface{})[0].(map[string]interface{})) + } + } + + if d.HasChange("logging_role") { + input.LoggingRole = aws.String(d.Get("logging_role").(string)) + } + + if d.HasChange("url") { + input.Url = aws.String(d.Get("url").(string)) + } + + _, err := conn.UpdateConnectorWithContext(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "updating Transfer Connector (%s): %s", d.Id(), err) + } + } + + return append(diags, resourceConnectorRead(ctx, d, meta)...) 
+} + +func resourceConnectorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + log.Printf("[DEBUG] Deleting Transfer Connector: %s", d.Id()) + _, err := conn.DeleteConnectorWithContext(ctx, &transfer.DeleteConnectorInput{ + ConnectorId: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, transfer.ErrCodeResourceNotFoundException) { + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Transfer Connector (%s): %s", d.Id(), err) + } + + return diags +} + +func expandAs2Config(tfMap map[string]interface{}) *transfer.As2ConnectorConfig { + if tfMap == nil { + return nil + } + + apiObject := &transfer.As2ConnectorConfig{} + + if v, ok := tfMap["compression"].(string); ok && v != "" { + apiObject.Compression = aws.String(v) + } + + if v, ok := tfMap["encryption_algorithm"].(string); ok && v != "" { + apiObject.EncryptionAlgorithm = aws.String(v) + } + + if v, ok := tfMap["local_profile_id"].(string); ok && v != "" { + apiObject.LocalProfileId = aws.String(v) + } + + if v, ok := tfMap["mdn_response"].(string); ok && v != "" { + apiObject.MdnResponse = aws.String(v) + } + + if v, ok := tfMap["mdn_signing_algorithm"].(string); ok && v != "" { + apiObject.MdnSigningAlgorithm = aws.String(v) + } + + if v, ok := tfMap["message_subject"].(string); ok && v != "" { + apiObject.MessageSubject = aws.String(v) + } + + if v, ok := tfMap["partner_profile_id"].(string); ok && v != "" { + apiObject.PartnerProfileId = aws.String(v) + } + + if v, ok := tfMap["signing_algorithm"].(string); ok && v != "" { + apiObject.SigningAlgorithm = aws.String(v) + } + + return apiObject +} + +func flattenAs2Config(apiObject *transfer.As2ConnectorConfig) []interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Compression; v != nil { + tfMap["compression"] = aws.StringValue(v) + } + + 
if v := apiObject.EncryptionAlgorithm; v != nil { + tfMap["encryption_algorithm"] = aws.StringValue(v) + } + + if v := apiObject.LocalProfileId; v != nil { + tfMap["local_profile_id"] = aws.StringValue(v) + } + + if v := apiObject.MdnResponse; v != nil { + tfMap["mdn_response"] = aws.StringValue(v) + } + + if v := apiObject.MdnSigningAlgorithm; v != nil { + tfMap["mdn_signing_algorithm"] = aws.StringValue(v) + } + + if v := apiObject.MessageSubject; v != nil { + tfMap["message_subject"] = aws.StringValue(v) + } + + if v := apiObject.PartnerProfileId; v != nil { + tfMap["partner_profile_id"] = aws.StringValue(v) + } + + if v := apiObject.SigningAlgorithm; v != nil { + tfMap["signing_algorithm"] = aws.StringValue(v) + } + + return []interface{}{tfMap} +} diff --git a/internal/service/transfer/connector_test.go b/internal/service/transfer/connector_test.go new file mode 100644 index 00000000000..8505da4b871 --- /dev/null +++ b/internal/service/transfer/connector_test.go @@ -0,0 +1,300 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package transfer_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/transfer" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftransfer "github.com/hashicorp/terraform-provider-aws/internal/service/transfer" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccTransferConnector_basic(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedConnector + resourceName := "aws_transfer_connector.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckConnectorDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccConnectorConfig_basic(rName, "http://www.example.com"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "url", "http://www.example.com"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccConnectorConfig_basic(rName, "http://www.example.net"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "url", "http://www.example.net"), + ), + }, + }, + }) +} + +func 
TestAccTransferConnector_disappears(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedConnector + resourceName := "aws_transfer_connector.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckConnectorDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccConnectorConfig_basic(rName, "http://www.example.com"), + Check: resource.ComposeTestCheckFunc( + testAccCheckConnectorExists(ctx, resourceName, &conf), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tftransfer.ResourceConnector(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccTransferConnector_tags(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedConnector + resourceName := "aws_transfer_connector.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckConnectorDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccConnectorConfig_tags1(rName, "http://www.example.com", "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccConnectorConfig_tags2(rName, "http://www.example.com", "key1", "value1updated", "key2", "value2"), + Check: 
resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccConnectorConfig_tags1(rName, "http://www.example.com", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckConnectorExists(ctx context.Context, n string, v *transfer.DescribedConnector) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Transfer Connector ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) + + output, err := tftransfer.FindConnectorByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccCheckConnectorDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_transfer_connector" { + continue + } + + _, err := tftransfer.FindConnectorByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Transfer Connector %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccConnectorConfig_base(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %[1]q + assume_role_policy = < 0 { + input.CertificateIds = 
flex.ExpandStringSet(v.(*schema.Set)) + } + + output, err := conn.CreateProfileWithContext(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "creating Transfer Profile: %s", err) + } + + d.SetId(aws.StringValue(output.ProfileId)) + + return append(diags, resourceProfileRead(ctx, d, meta)...) +} + +func resourceProfileRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + output, err := FindProfileByID(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Transfer Profile (%s) not found, removing from state", d.Id()) + d.SetId("") + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading Transfer Profile (%s): %s", d.Id(), err) + } + + d.Set("as2_id", output.As2Id) + d.Set("certificate_ids", aws.StringValueSlice(output.CertificateIds)) + d.Set("profile_id", output.ProfileId) + d.Set("profile_type", output.ProfileType) + setTagsOut(ctx, output.Tags) + + return diags +} + +func resourceProfileUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + if d.HasChangesExcept("tags", "tags_all") { + input := &transfer.UpdateProfileInput{ + ProfileId: aws.String(d.Id()), + } + + if d.HasChange("certificate_ids") { + input.CertificateIds = flex.ExpandStringSet(d.Get("certificate_ids").(*schema.Set)) + } + + _, err := conn.UpdateProfileWithContext(ctx, input) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "updating Transfer Profile (%s): %s", d.Id(), err) + } + } + + return append(diags, resourceProfileRead(ctx, d, meta)...) 
+} + +func resourceProfileDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics + conn := meta.(*conns.AWSClient).TransferConn(ctx) + + log.Printf("[DEBUG] Deleting Transfer Profile: %s", d.Id()) + _, err := conn.DeleteProfileWithContext(ctx, &transfer.DeleteProfileInput{ + ProfileId: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, transfer.ErrCodeResourceNotFoundException) { + return diags + } + + if err != nil { + return sdkdiag.AppendErrorf(diags, "deleting Transfer Profile (%s): %s", d.Id(), err) + } + + return diags +} diff --git a/internal/service/transfer/profile_test.go b/internal/service/transfer/profile_test.go new file mode 100644 index 00000000000..bbb4150e176 --- /dev/null +++ b/internal/service/transfer/profile_test.go @@ -0,0 +1,254 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package transfer_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/transfer" + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftransfer "github.com/hashicorp/terraform-provider-aws/internal/service/transfer" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccTransferProfile_basic(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedProfile + resourceName := "aws_transfer_profile.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: 
testAccCheckProfileDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProfileConfig_basic(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProfileExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "as2_id", rName), + resource.TestCheckResourceAttr(resourceName, "certificate_ids.#", "0"), + resource.TestCheckResourceAttrSet(resourceName, "profile_id"), + resource.TestCheckResourceAttr(resourceName, "profile_type", "LOCAL"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccTransferProfile_certificateIDs(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedProfile + resourceName := "aws_transfer_profile.test" + key := acctest.TLSRSAPrivateKeyPEM(t, 2048) + certificate := acctest.TLSRSAX509SelfSignedCertificatePEM(t, key, acctest.RandomSubdomain()) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckProfileDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProfileConfig_certificateIDs(rName, certificate, key), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProfileExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "certificate_ids.#", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccTransferProfile_disappears(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedProfile + resourceName := "aws_transfer_profile.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, 
resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckProfileDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProfileConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckProfileExists(ctx, resourceName, &conf), + acctest.CheckResourceDisappears(ctx, acctest.Provider, tftransfer.ResourceProfile(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccTransferProfile_tags(t *testing.T) { + ctx := acctest.Context(t) + var conf transfer.DescribedProfile + resourceName := "aws_transfer_profile.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(ctx, t); testAccPreCheck(ctx, t) }, + ErrorCheck: acctest.ErrorCheck(t, transfer.EndpointsID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckProfileDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccProfileConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProfileExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccProfileConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProfileExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: 
testAccProfileConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProfileExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckProfileExists(ctx context.Context, n string, v *transfer.DescribedProfile) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Transfer Profile ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) + + output, err := tftransfer.FindProfileByID(ctx, conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccCheckProfileDestroy(ctx context.Context) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_transfer_profile" { + continue + } + + _, err := tftransfer.FindProfileByID(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Transfer Profile %s still exists", rs.Primary.ID) + } + + return nil + } +} + +func testAccProfileConfig_basic(rName string) string { + return fmt.Sprintf(` +resource "aws_transfer_profile" "test" { + as2_id = %[1]q + profile_type = "LOCAL" +} +`, rName) +} + +func testAccProfileConfig_certificateIDs(rName, certificate, privateKey string) string { + return fmt.Sprintf(` +resource "aws_transfer_certificate" "test" { + certificate = %[2]q + private_key = %[3]q + usage = "SIGNING" +} + +resource "aws_transfer_profile" "test" { + as2_id = %[1]q + certificate_ids = [aws_transfer_certificate.test.certificate_id] + profile_type = "LOCAL" +} +`, 
rName, certificate, privateKey) +} + +func testAccProfileConfig_tags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_transfer_profile" "test" { + as2_id = %[1]q + profile_type = "LOCAL" + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccProfileConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_transfer_profile" "test" { + as2_id = %[1]q + profile_type = "LOCAL" + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +} diff --git a/internal/service/transfer/server.go b/internal/service/transfer/server.go index 22cdc29d019..8ed6a4ae768 100644 --- a/internal/service/transfer/server.go +++ b/internal/service/transfer/server.go @@ -1,6 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer -import ( // nosemgrep:ci.aws-sdk-go-multiple-service-imports +import ( // nosemgrep:ci.semgrep.aws.multiple-service-imports "context" "fmt" "log" @@ -283,10 +286,10 @@ func ResourceServer() *schema.Resource { func resourceServerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) input := &transfer.CreateServerInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("certificate"); ok { @@ -419,7 +422,7 @@ func resourceServerCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceServerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) output, err := FindServerByID(ctx, conn, d.Id()) @@ -448,7 +451,7 @@ func resourceServerRead(ctx context.Context, d *schema.ResourceData, meta interf // Security Group IDs 
are not returned for VPC endpoints. if aws.StringValue(output.EndpointType) == transfer.EndpointTypeVpc && len(output.EndpointDetails.SecurityGroupIds) == 0 { vpcEndpointID := aws.StringValue(output.EndpointDetails.VpcEndpointId) - output, err := tfec2.FindVPCEndpointByID(ctx, meta.(*conns.AWSClient).EC2Conn(), vpcEndpointID) + output, err := tfec2.FindVPCEndpointByID(ctx, meta.(*conns.AWSClient).EC2Conn(ctx), vpcEndpointID) if err != nil { return sdkdiag.AppendErrorf(diags, "reading Transfer Server (%s) VPC Endpoint (%s): %s", d.Id(), vpcEndpointID, err) @@ -495,14 +498,14 @@ func resourceServerRead(ctx context.Context, d *schema.ResourceData, meta interf return sdkdiag.AppendErrorf(diags, "setting workflow_details: %s", err) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } func resourceServerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) if d.HasChangesExcept("tags", "tags_all") { var newEndpointTypeVpc bool @@ -575,7 +578,7 @@ func resourceServerUpdate(ctx context.Context, d *schema.ResourceData, meta inte // You can edit the SecurityGroupIds property in the UpdateServer API only if you are changing the EndpointType from PUBLIC or VPC_ENDPOINT to VPC. // To change security groups associated with your server's VPC endpoint after creation, use the Amazon EC2 ModifyVpcEndpoint API. 
if d.HasChange("endpoint_details.0.security_group_ids") && newEndpointTypeVpc && oldEndpointTypeVpc { - conn := meta.(*conns.AWSClient).EC2Conn() + conn := meta.(*conns.AWSClient).EC2Conn(ctx) vpcEndpointID := d.Get("endpoint_details.0.vpc_endpoint_id").(string) input := &ec2.ModifyVpcEndpointInput{ @@ -715,7 +718,7 @@ func resourceServerUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceServerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) if d.Get("force_destroy").(bool) && d.Get("identity_provider_type").(string) == transfer.IdentityProviderTypeServiceManaged { input := &transfer.ListUsersInput{ diff --git a/internal/service/transfer/server_data_source.go b/internal/service/transfer/server_data_source.go index 663db8e8bc8..4cd84c216d8 100644 --- a/internal/service/transfer/server_data_source.go +++ b/internal/service/transfer/server_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer import ( @@ -84,7 +87,7 @@ func DataSourceServer() *schema.Resource { func dataSourceServerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID := d.Get("server_id").(string) diff --git a/internal/service/transfer/server_data_source_test.go b/internal/service/transfer/server_data_source_test.go index ce4f094f8d0..30614e30ed9 100644 --- a/internal/service/transfer/server_data_source_test.go +++ b/internal/service/transfer/server_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( diff --git a/internal/service/transfer/server_test.go b/internal/service/transfer/server_test.go index f3bfe22eeb7..e33a7d41088 100644 --- a/internal/service/transfer/server_test.go +++ b/internal/service/transfer/server_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -244,6 +247,13 @@ func testAccServer_securityPolicy(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "security_policy_name", "TransferSecurityPolicy-2022-03"), ), }, + { + Config: testAccServerConfig_securityPolicy(rName, "TransferSecurityPolicy-2023-05"), + Check: resource.ComposeTestCheckFunc( + testAccCheckServerExists(ctx, resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "security_policy_name", "TransferSecurityPolicy-2023-05"), + ), + }, }, }) } @@ -1169,7 +1179,7 @@ func testAccCheckServerExists(ctx context.Context, n string, v *transfer.Describ return fmt.Errorf("No Transfer Server ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) output, err := tftransfer.FindServerByID(ctx, conn, rs.Primary.ID) @@ -1185,7 +1195,7 @@ func testAccCheckServerExists(ctx context.Context, n string, v *transfer.Describ func testAccCheckServerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transfer_server" { @@ -1284,7 +1294,7 @@ resource "aws_default_security_group" "test" { resource "aws_eip" "test" { count = 2 - vpc = true + domain = "vpc" tags = { Name = %[1]q diff --git a/internal/service/transfer/service_package_gen.go b/internal/service/transfer/service_package_gen.go index 
659bb1ea316..7160bb5c549 100644 --- a/internal/service/transfer/service_package_gen.go +++ b/internal/service/transfer/service_package_gen.go @@ -5,6 +5,10 @@ package transfer import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + transfer_sdkv1 "github.com/aws/aws-sdk-go/service/transfer" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -34,6 +38,38 @@ func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePacka Factory: ResourceAccess, TypeName: "aws_transfer_access", }, + { + Factory: ResourceAgreement, + TypeName: "aws_transfer_agreement", + Name: "Agreement", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "agreement_id", + }, + }, + { + Factory: ResourceCertificate, + TypeName: "aws_transfer_certificate", + Name: "Certificate", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "certificate_id", + }, + }, + { + Factory: ResourceConnector, + TypeName: "aws_transfer_connector", + Name: "Connector", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "connector_id", + }, + }, + { + Factory: ResourceProfile, + TypeName: "aws_transfer_profile", + Name: "Profile", + Tags: &types.ServicePackageResourceTags{ + IdentifierAttribute: "profile_id", + }, + }, { Factory: ResourceServer, TypeName: "aws_transfer_server", @@ -73,4 +109,13 @@ func (p *servicePackage) ServicePackageName() string { return names.Transfer } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*transfer_sdkv1.Transfer, error) { + sess := config["session"].(*session_sdkv1.Session) + + return transfer_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/transfer/ssh_key.go b/internal/service/transfer/ssh_key.go index 2b52bdf3c8e..7f8c166e9d3 100644 --- a/internal/service/transfer/ssh_key.go +++ b/internal/service/transfer/ssh_key.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer import ( @@ -55,7 +58,7 @@ func ResourceSSHKey() *schema.Resource { func resourceSSHKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) userName := d.Get("user_name").(string) serverID := d.Get("server_id").(string) @@ -79,7 +82,7 @@ func resourceSSHKeyCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceSSHKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID, userName, sshKeyID, err := DecodeSSHKeyID(d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "parsing Transfer SSH Public Key ID: %s", err) @@ -121,7 +124,7 @@ func resourceSSHKeyRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceSSHKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID, userName, sshKeyID, err := DecodeSSHKeyID(d.Id()) if err != nil { 
return sdkdiag.AppendErrorf(diags, "parsing Transfer SSH Public Key ID: %s", err) diff --git a/internal/service/transfer/ssh_key_test.go b/internal/service/transfer/ssh_key_test.go index 27760ef228b..c5eb228f99d 100644 --- a/internal/service/transfer/ssh_key_test.go +++ b/internal/service/transfer/ssh_key_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -63,7 +66,7 @@ func testAccCheckSSHKeyExists(ctx context.Context, n string, res *transfer.SshPu return fmt.Errorf("No Transfer Ssh Public Key ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) serverID, userName, sshKeyID, err := tftransfer.DecodeSSHKeyID(rs.Primary.ID) if err != nil { return fmt.Errorf("error parsing Transfer SSH Public Key ID: %s", err) @@ -91,7 +94,7 @@ func testAccCheckSSHKeyExists(ctx context.Context, n string, res *transfer.SshPu func testAccCheckSSHKeyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transfer_ssh_key" { diff --git a/internal/service/transfer/status.go b/internal/service/transfer/status.go index 4c20fc7b45d..94582a24da2 100644 --- a/internal/service/transfer/status.go +++ b/internal/service/transfer/status.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer import ( diff --git a/internal/service/transfer/sweep.go b/internal/service/transfer/sweep.go index 97ca70fb61a..68a139d66c1 100644 --- a/internal/service/transfer/sweep.go +++ b/internal/service/transfer/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/transfer" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -31,11 +33,11 @@ func init() { func sweepServers(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).TransferConn() + conn := client.TransferConn(ctx) input := &transfer.ListServersInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -66,7 +68,7 @@ func sweepServers(region string) error { return fmt.Errorf("error listing Transfer Servers (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Transfer Servers (%s): %w", region, err) @@ -77,11 +79,11 @@ func sweepServers(region string) error { func sweepWorkflows(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).TransferConn() + conn := client.TransferConn(ctx) input := &transfer.ListWorkflowsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -110,7 +112,7 @@ func sweepWorkflows(region string) error { return fmt.Errorf("error listing Transfer Workflows (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping Transfer Workflows (%s): %w", 
region, err) diff --git a/internal/service/transfer/tag_gen.go b/internal/service/transfer/tag_gen.go index a0eec4a1008..b2674f3591b 100644 --- a/internal/service/transfer/tag_gen.go +++ b/internal/service/transfer/tag_gen.go @@ -46,13 +46,13 @@ func ResourceTag() *schema.Resource { } func resourceTagCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { // nosemgrep:ci.semgrep.tags.calling-UpdateTags-in-resource-create - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) identifier := d.Get("resource_arn").(string) key := d.Get("key").(string) value := d.Get("value").(string) - if err := UpdateTagsNoIgnoreSystem(ctx, conn, identifier, nil, map[string]string{key: value}); err != nil { + if err := updateTagsNoIgnoreSystem(ctx, conn, identifier, nil, map[string]string{key: value}); err != nil { return diag.Errorf("creating %s resource (%s) tag (%s): %s", transfer.ServiceID, identifier, key, err) } @@ -62,7 +62,7 @@ func resourceTagCreate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTagRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { @@ -89,14 +89,14 @@ func resourceTagRead(ctx context.Context, d *schema.ResourceData, meta interface } func resourceTagUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - if err := UpdateTagsNoIgnoreSystem(ctx, conn, identifier, nil, map[string]string{key: d.Get("value").(string)}); err != nil { + if err := updateTagsNoIgnoreSystem(ctx, conn, identifier, nil, map[string]string{key: 
d.Get("value").(string)}); err != nil { return diag.Errorf("updating %s resource (%s) tag (%s): %s", transfer.ServiceID, identifier, key, err) } @@ -104,14 +104,14 @@ func resourceTagUpdate(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceTagDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) identifier, key, err := tftags.GetResourceID(d.Id()) if err != nil { return diag.FromErr(err) } - if err := UpdateTagsNoIgnoreSystem(ctx, conn, identifier, map[string]string{key: d.Get("value").(string)}, nil); err != nil { + if err := updateTagsNoIgnoreSystem(ctx, conn, identifier, map[string]string{key: d.Get("value").(string)}, nil); err != nil { return diag.Errorf("deleting %s resource (%s) tag (%s): %s", transfer.ServiceID, identifier, key, err) } diff --git a/internal/service/transfer/tag_gen_test.go b/internal/service/transfer/tag_gen_test.go index bb2abc5c74a..e706bb6df34 100644 --- a/internal/service/transfer/tag_gen_test.go +++ b/internal/service/transfer/tag_gen_test.go @@ -18,7 +18,7 @@ import ( func testAccCheckTagDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transfer_tag" { @@ -64,7 +64,7 @@ func testAccCheckTagExists(ctx context.Context, resourceName string) resource.Te return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) _, err = tftransfer.GetTag(ctx, conn, identifier, key) diff --git a/internal/service/transfer/tag_test.go b/internal/service/transfer/tag_test.go index 9bdf0336e56..eb46bcfcc81 100644 --- a/internal/service/transfer/tag_test.go +++ 
b/internal/service/transfer/tag_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( diff --git a/internal/service/transfer/tags_gen.go b/internal/service/transfer/tags_gen.go index f7aad2401a2..35258c080e6 100644 --- a/internal/service/transfer/tags_gen.go +++ b/internal/service/transfer/tags_gen.go @@ -17,11 +17,11 @@ import ( // GetTag fetches an individual transfer service tag for a resource. // Returns whether the key value and any errors. A NotFoundError is used to signal that no value was found. -// This function will optimise the handling over ListTags, if possible. +// This function will optimise the handling over listTags, if possible. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. func GetTag(ctx context.Context, conn transferiface.TransferAPI, identifier, key string) (*string, error) { - listTags, err := ListTags(ctx, conn, identifier) + listTags, err := listTags(ctx, conn, identifier) if err != nil { return nil, err @@ -34,10 +34,10 @@ func GetTag(ctx context.Context, conn transferiface.TransferAPI, identifier, key return listTags.KeyValue(key), nil } -// ListTags lists transfer service tags. +// listTags lists transfer service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn transferiface.TransferAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn transferiface.TransferAPI, identifier string) (tftags.KeyValueTags, error) { input := &transfer.ListTagsForResourceInput{ Arn: aws.String(identifier), } @@ -54,7 +54,7 @@ func ListTags(ctx context.Context, conn transferiface.TransferAPI, identifier st // ListTags lists transfer service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).TransferConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).TransferConn(ctx), identifier) if err != nil { return err @@ -96,9 +96,9 @@ func KeyValueTags(ctx context.Context, tags []*transfer.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns transfer service tags from Context. +// getTagsIn returns transfer service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*transfer.Tag { +func getTagsIn(ctx context.Context) []*transfer.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -108,17 +108,17 @@ func GetTagsIn(ctx context.Context) []*transfer.Tag { return nil } -// SetTagsOut sets transfer service tags in Context. -func SetTagsOut(ctx context.Context, tags []*transfer.Tag) { +// setTagsOut sets transfer service tags in Context. +func setTagsOut(ctx context.Context, tags []*transfer.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates transfer service tags. +// updateTags updates transfer service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn transferiface.TransferAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn transferiface.TransferAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -158,5 +158,5 @@ func UpdateTags(ctx context.Context, conn transferiface.TransferAPI, identifier // UpdateTags updates transfer service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).TransferConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).TransferConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/transfer/transfer_test.go b/internal/service/transfer/transfer_test.go index 0ffc497d03f..02569d63b09 100644 --- a/internal/service/transfer/transfer_test.go +++ b/internal/service/transfer/transfer_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -16,6 +19,11 @@ func TestAccTransfer_serial(t *testing.T) { "S3Basic": testAccAccess_s3_basic, "S3Policy": testAccAccess_s3_policy, }, + "Agreement": { + "basic": testAccAgreement_basic, + "disappears": testAccAgreement_disappears, + "tags": testAccAgreement_tags, + }, "Server": { "basic": testAccServer_basic, "disappears": testAccServer_disappears, diff --git a/internal/service/transfer/update_tags_no_system_ignore_gen.go b/internal/service/transfer/update_tags_no_system_ignore_gen.go index 010529ee999..40c81fc7041 100644 --- a/internal/service/transfer/update_tags_no_system_ignore_gen.go +++ b/internal/service/transfer/update_tags_no_system_ignore_gen.go @@ -8,14 +8,13 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/transfer" "github.com/aws/aws-sdk-go/service/transfer/transferiface" - "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) -// UpdateTagsNoIgnoreSystem updates transfer service tags. +// updateTagsNoIgnoreSystem updates transfer service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTagsNoIgnoreSystem(ctx context.Context, conn transferiface.TransferAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTagsNoIgnoreSystem(ctx context.Context, conn transferiface.TransferAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -49,9 +48,3 @@ func UpdateTagsNoIgnoreSystem(ctx context.Context, conn transferiface.TransferAP return nil } - -// UpdateTagsNoIgnoreSystem updates transfer service tags. -// It is called from outside this package. -func (p *servicePackage) UpdateTagsNoIgnoreSystem(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTagsNoIgnoreSystem(ctx, meta.(*conns.AWSClient).TransferConn(), identifier, oldTags, newTags) -} diff --git a/internal/service/transfer/user.go b/internal/service/transfer/user.go index f2316c447d8..d144a995969 100644 --- a/internal/service/transfer/user.go +++ b/internal/service/transfer/user.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transfer import ( @@ -134,7 +137,7 @@ func ResourceUser() *schema.Resource { func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID := d.Get("server_id").(string) userName := d.Get("user_name").(string) @@ -142,7 +145,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf input := &transfer.CreateUserInput{ Role: aws.String(d.Get("role").(string)), ServerId: aws.String(serverID), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), UserName: aws.String(userName), } @@ -184,7 +187,7 @@ func resourceUserCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) serverID, userName, err := UserParseResourceID(d.Id()) @@ -224,14 +227,14 @@ func resourceUserRead(ctx context.Context, d *schema.ResourceData, meta interfac d.Set("server_id", serverID) d.Set("user_name", user.UserName) - SetTagsOut(ctx, user.Tags) + setTagsOut(ctx, user.Tags) return diags } func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) if d.HasChangesExcept("tags", "tags_all") { serverID, userName, err := UserParseResourceID(d.Id()) @@ -286,7 +289,7 @@ func resourceUserUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceUserDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := 
meta.(*conns.AWSClient).TransferConn(ctx) serverID, userName, err := UserParseResourceID(d.Id()) diff --git a/internal/service/transfer/user_test.go b/internal/service/transfer/user_test.go index 79a48f5bf64..842ad832fe1 100644 --- a/internal/service/transfer/user_test.go +++ b/internal/service/transfer/user_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -305,7 +308,7 @@ func testAccCheckUserExists(ctx context.Context, n string, v *transfer.Described return fmt.Errorf("No Transfer User ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) serverID, userName, err := tftransfer.UserParseResourceID(rs.Primary.ID) @@ -327,7 +330,7 @@ func testAccCheckUserExists(ctx context.Context, n string, v *transfer.Described func testAccCheckUserDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transfer_user" { diff --git a/internal/service/transfer/validate.go b/internal/service/transfer/validate.go index 562af6e33d3..de5e65b29d3 100644 --- a/internal/service/transfer/validate.go +++ b/internal/service/transfer/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer import ( diff --git a/internal/service/transfer/wait.go b/internal/service/transfer/wait.go index 1de840b2d7f..dcf1766ee72 100644 --- a/internal/service/transfer/wait.go +++ b/internal/service/transfer/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package transfer import ( diff --git a/internal/service/transfer/workflow.go b/internal/service/transfer/workflow.go index 0ff60a59dba..8b9ccc80c02 100644 --- a/internal/service/transfer/workflow.go +++ b/internal/service/transfer/workflow.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer import ( @@ -682,10 +685,10 @@ func ResourceWorkflow() *schema.Resource { func resourceWorkflowCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) input := &transfer.CreateWorkflowInput{ - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -713,7 +716,7 @@ func resourceWorkflowCreate(ctx context.Context, d *schema.ResourceData, meta in func resourceWorkflowRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) output, err := FindWorkflowByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfresource.NotFound(err) { @@ -735,7 +738,7 @@ func resourceWorkflowRead(ctx context.Context, d *schema.ResourceData, meta inte return sdkdiag.AppendErrorf(diags, "setting steps: %s", err) } - SetTagsOut(ctx, output.Tags) + setTagsOut(ctx, output.Tags) return diags } @@ -750,7 +753,7 @@ func resourceWorkflowUpdate(ctx context.Context, d *schema.ResourceData, meta in func resourceWorkflowDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).TransferConn() + conn := meta.(*conns.AWSClient).TransferConn(ctx) log.Printf("[DEBUG] Deleting Transfer Workflow: %s", d.Id()) _, err := conn.DeleteWorkflowWithContext(ctx, &transfer.DeleteWorkflowInput{ diff --git 
a/internal/service/transfer/workflow_test.go b/internal/service/transfer/workflow_test.go index 2c435bc945b..faeb20c5774 100644 --- a/internal/service/transfer/workflow_test.go +++ b/internal/service/transfer/workflow_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package transfer_test import ( @@ -303,7 +306,7 @@ func testAccCheckWorkflowExists(ctx context.Context, n string, v *transfer.Descr return fmt.Errorf("No Transfer Workflow ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) output, err := tftransfer.FindWorkflowByID(ctx, conn, rs.Primary.ID) @@ -319,7 +322,7 @@ func testAccCheckWorkflowExists(ctx context.Context, n string, v *transfer.Descr func testAccCheckWorkflowDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).TransferConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_transfer_workflow" { diff --git a/internal/service/verifiedpermissions/README.md b/internal/service/verifiedpermissions/README.md new file mode 100644 index 00000000000..51fe3fcb9e8 --- /dev/null +++ b/internal/service/verifiedpermissions/README.md @@ -0,0 +1,10 @@ +# Terraform AWS Provider VerifiedPermissions Package + +This area is primarily for AWS provider contributors and maintainers. For information on _using_ Terraform and the AWS provider, see the links below. + +## Handy Links + +* [Find out about contributing](https://hashicorp.github.io/terraform-provider-aws/#contribute) to the AWS provider! 
+* AWS Provider Docs: [Home](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) +* AWS Provider Docs: [One of the VerifiedPermissions resources](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/verifiedpermissions_policy_store) +* AWS Docs: [AWS SDK for Go VerifiedPermissions](https://docs.aws.amazon.com/sdk-for-go/api/service/verifiedpermissions/) diff --git a/internal/service/verifiedpermissions/generate.go b/internal/service/verifiedpermissions/generate.go new file mode 100644 index 00000000000..6ea5165dc1f --- /dev/null +++ b/internal/service/verifiedpermissions/generate.go @@ -0,0 +1,7 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/servicepackage/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package verifiedpermissions diff --git a/internal/service/verifiedpermissions/service_package_gen.go b/internal/service/verifiedpermissions/service_package_gen.go new file mode 100644 index 00000000000..7cd0659dfcf --- /dev/null +++ b/internal/service/verifiedpermissions/service_package_gen.go @@ -0,0 +1,50 @@ +// Code generated by internal/generate/servicepackages/main.go; DO NOT EDIT. 
+ +package verifiedpermissions + +import ( + "context" + + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + verifiedpermissions_sdkv2 "github.com/aws/aws-sdk-go-v2/service/verifiedpermissions" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/types" + "github.com/hashicorp/terraform-provider-aws/names" +) + +type servicePackage struct{} + +func (p *servicePackage) FrameworkDataSources(ctx context.Context) []*types.ServicePackageFrameworkDataSource { + return []*types.ServicePackageFrameworkDataSource{} +} + +func (p *servicePackage) FrameworkResources(ctx context.Context) []*types.ServicePackageFrameworkResource { + return []*types.ServicePackageFrameworkResource{} +} + +func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePackageSDKDataSource { + return []*types.ServicePackageSDKDataSource{} +} + +func (p *servicePackage) SDKResources(ctx context.Context) []*types.ServicePackageSDKResource { + return []*types.ServicePackageSDKResource{} +} + +func (p *servicePackage) ServicePackageName() string { + return names.VerifiedPermissions +} + +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*verifiedpermissions_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return verifiedpermissions_sdkv2.NewFromConfig(cfg, func(o *verifiedpermissions_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = verifiedpermissions_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/vpclattice/access_log_subscription.go b/internal/service/vpclattice/access_log_subscription.go index c0a0b85d93f..189d1c34060 100644 --- a/internal/service/vpclattice/access_log_subscription.go +++ b/internal/service/vpclattice/access_log_subscription.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -66,13 +69,13 @@ const ( ) func resourceAccessLogSubscriptionCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) in := &vpclattice.CreateAccessLogSubscriptionInput{ ClientToken: aws.String(id.UniqueId()), DestinationArn: aws.String(d.Get("destination_arn").(string)), ResourceIdentifier: aws.String(d.Get("resource_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateAccessLogSubscription(ctx, in) @@ -87,7 +90,7 @@ func resourceAccessLogSubscriptionCreate(ctx context.Context, d *schema.Resource } func resourceAccessLogSubscriptionRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) out, err := findAccessLogSubscriptionByID(ctx, conn, d.Id()) @@ -115,7 +118,7 @@ func resourceAccessLogSubscriptionUpdate(ctx 
context.Context, d *schema.Resource } func resourceAccessLogSubscriptionDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPCLattice AccessLogSubscription %s", d.Id()) _, err := conn.DeleteAccessLogSubscription(ctx, &vpclattice.DeleteAccessLogSubscriptionInput{ diff --git a/internal/service/vpclattice/access_log_subscription_test.go b/internal/service/vpclattice/access_log_subscription_test.go index 16dfd7a9fd2..a7eea7d7386 100644 --- a/internal/service/vpclattice/access_log_subscription_test.go +++ b/internal/service/vpclattice/access_log_subscription_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -137,7 +140,7 @@ func TestAccVPCLatticeAccessLogSubscription_tags(t *testing.T) { func testAccCheckAccessLogSubscriptionDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_access_log_subscription" { @@ -173,7 +176,7 @@ func testAccCheckAccessLogSubscriptionExists(ctx context.Context, name string, a return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameAccessLogSubscription, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetAccessLogSubscription(ctx, &vpclattice.GetAccessLogSubscriptionInput{ AccessLogSubscriptionIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/vpclattice/auth_policy.go b/internal/service/vpclattice/auth_policy.go index 
ff2be0458fb..c2efd14ab1e 100644 --- a/internal/service/vpclattice/auth_policy.go +++ b/internal/service/vpclattice/auth_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -68,7 +71,7 @@ const ( ) func resourceAuthPolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) resourceId := d.Get("resource_identifier").(string) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -94,7 +97,7 @@ func resourceAuthPolicyPut(ctx context.Context, d *schema.ResourceData, meta int } func resourceAuthPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) resourceId := d.Id() log.Printf("[DEBUG] Reading VPCLattice Auth Policy for resource: %s", resourceId) @@ -128,7 +131,7 @@ func resourceAuthPolicyRead(ctx context.Context, d *schema.ResourceData, meta in } func resourceAuthPolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPCLattice AuthPolicy: %s", d.Id()) _, err := conn.DeleteAuthPolicy(ctx, &vpclattice.DeleteAuthPolicyInput{ diff --git a/internal/service/vpclattice/auth_policy_data_source.go b/internal/service/vpclattice/auth_policy_data_source.go index 782db8ae746..394ce89d479 100644 --- a/internal/service/vpclattice/auth_policy_data_source.go +++ b/internal/service/vpclattice/auth_policy_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -42,7 +45,7 @@ const ( ) func dataSourceAuthPolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) resourceID := d.Get("resource_identifier").(string) out, err := findAuthPolicy(ctx, conn, resourceID) diff --git a/internal/service/vpclattice/auth_policy_data_source_test.go b/internal/service/vpclattice/auth_policy_data_source_test.go index b38103eafb1..efa932a87a3 100644 --- a/internal/service/vpclattice/auth_policy_data_source_test.go +++ b/internal/service/vpclattice/auth_policy_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( diff --git a/internal/service/vpclattice/auth_policy_test.go b/internal/service/vpclattice/auth_policy_test.go index 957b24efd3b..3661a05f5c1 100644 --- a/internal/service/vpclattice/auth_policy_test.go +++ b/internal/service/vpclattice/auth_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -85,7 +88,7 @@ func TestAccVPCLatticeAuthPolicy_disappears(t *testing.T) { func testAccCheckAuthPolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_auth_policy" { @@ -123,7 +126,7 @@ func testAccCheckAuthPolicyExists(ctx context.Context, name string, authpolicy * return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameAuthPolicy, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetAuthPolicy(ctx, &vpclattice.GetAuthPolicyInput{ ResourceIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/vpclattice/exports_test.go b/internal/service/vpclattice/exports_test.go index de5ad314c18..a60aebfb708 100644 --- a/internal/service/vpclattice/exports_test.go +++ b/internal/service/vpclattice/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice // Exports for use in tests only. diff --git a/internal/service/vpclattice/generate.go b/internal/service/vpclattice/generate.go index 61423534b2e..d6eb39aaa62 100644 --- a/internal/service/vpclattice/generate.go +++ b/internal/service/vpclattice/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ServiceTagsMap -KVTValues -SkipTypesImp -ListTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. 
package vpclattice diff --git a/internal/service/vpclattice/listener.go b/internal/service/vpclattice/listener.go index a0cd7a58054..bc25e839b18 100644 --- a/internal/service/vpclattice/listener.go +++ b/internal/service/vpclattice/listener.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -163,13 +166,13 @@ const ( ) func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) in := &vpclattice.CreateListenerInput{ Name: aws.String(d.Get("name").(string)), DefaultAction: expandDefaultAction(d.Get("default_action").([]interface{})), Protocol: types.ListenerProtocol(d.Get("protocol").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("port"); ok && v != nil { @@ -185,7 +188,7 @@ func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta in } if in.ServiceIdentifier == nil { - return diag.FromErr(fmt.Errorf("must specify either service_arn or service_identifier")) + return diag.Errorf("must specify either service_arn or service_identifier") } out, err := conn.CreateListener(ctx, in) @@ -213,7 +216,7 @@ func resourceListenerCreate(ctx context.Context, d *schema.ResourceData, meta in } func resourceListenerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) // GetListener requires the ID or Amazon Resource Name (ARN) of the service serviceId := d.Get("service_identifier").(string) @@ -249,7 +252,7 @@ func resourceListenerRead(ctx context.Context, d *schema.ResourceData, meta inte } func resourceListenerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := 
meta.(*conns.AWSClient).VPCLatticeClient(ctx) serviceId := d.Get("service_identifier").(string) listenerId := d.Get("listener_id").(string) @@ -276,7 +279,7 @@ func resourceListenerUpdate(ctx context.Context, d *schema.ResourceData, meta in } func resourceListenerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPC Lattice Listener %s", d.Id()) diff --git a/internal/service/vpclattice/listener_data_source.go b/internal/service/vpclattice/listener_data_source.go index 6a0a3ab784b..0f64b6fdc4b 100644 --- a/internal/service/vpclattice/listener_data_source.go +++ b/internal/service/vpclattice/listener_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -122,7 +125,7 @@ const ( ) func dataSourceListenerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) serviceId := d.Get("service_identifier").(string) listenerId := d.Get("listener_identifier").(string) @@ -151,7 +154,7 @@ func dataSourceListenerRead(ctx context.Context, d *schema.ResourceData, meta in // Set tags ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - tags, err := ListTags(ctx, conn, aws.ToString(out.Arn)) + tags, err := listTags(ctx, conn, aws.ToString(out.Arn)) if err != nil { return create.DiagError(names.VPCLattice, create.ErrActionReading, DSNameListener, d.Id(), err) diff --git a/internal/service/vpclattice/listener_data_source_test.go b/internal/service/vpclattice/listener_data_source_test.go index db76a740393..01abb9fb675 100644 --- a/internal/service/vpclattice/listener_data_source_test.go +++ b/internal/service/vpclattice/listener_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) 
HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( diff --git a/internal/service/vpclattice/listener_rule.go b/internal/service/vpclattice/listener_rule.go index dccc5f8f03a..e14b2f6b8e8 100644 --- a/internal/service/vpclattice/listener_rule.go +++ b/internal/service/vpclattice/listener_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -248,7 +251,7 @@ const ( ) func resourceListenerRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) name := d.Get("name").(string) in := &vpclattice.CreateRuleInput{ @@ -258,7 +261,7 @@ func resourceListenerRuleCreate(ctx context.Context, d *schema.ResourceData, met Match: expandRuleMatch(d.Get("match").([]interface{})[0].(map[string]interface{})), Name: aws.String(name), ServiceIdentifier: aws.String(d.Get("service_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("priority"); ok { @@ -290,7 +293,7 @@ func resourceListenerRuleCreate(ctx context.Context, d *schema.ResourceData, met } func resourceListenerRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) serviceId := d.Get("service_identifier").(string) listenerId := d.Get("listener_identifier").(string) @@ -327,7 +330,7 @@ func resourceListenerRuleRead(ctx context.Context, d *schema.ResourceData, meta } func resourceListenerRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) serviceId := d.Get("service_identifier").(string) listenerId := d.Get("listener_identifier").(string) @@ -361,7 
+364,7 @@ func resourceListenerRuleUpdate(ctx context.Context, d *schema.ResourceData, met } func resourceListenerRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) serviceId := d.Get("service_identifier").(string) listenerId := d.Get("listener_identifier").(string) diff --git a/internal/service/vpclattice/listener_rule_test.go b/internal/service/vpclattice/listener_rule_test.go index 6c137c604d4..15053c76491 100644 --- a/internal/service/vpclattice/listener_rule_test.go +++ b/internal/service/vpclattice/listener_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -168,7 +171,7 @@ func testAccCheckListenerRuleExists(ctx context.Context, name string, rule *vpcl serviceIdentifier := rs.Primary.Attributes["service_identifier"] listenerIdentifier := rs.Primary.Attributes["listener_identifier"] - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetRule(ctx, &vpclattice.GetRuleInput{ RuleIdentifier: aws.String(rs.Primary.Attributes["arn"]), ListenerIdentifier: aws.String(listenerIdentifier), @@ -187,7 +190,7 @@ func testAccCheckListenerRuleExists(ctx context.Context, name string, rule *vpcl func testAccChecklistenerRuleDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_listener_rule" { diff --git a/internal/service/vpclattice/listener_test.go b/internal/service/vpclattice/listener_test.go index 6e4cc6263b6..ddd0fe0f3fd 100644 --- 
a/internal/service/vpclattice/listener_test.go +++ b/internal/service/vpclattice/listener_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -448,7 +451,7 @@ func TestAccVPCLatticeListener_tags(t *testing.T) { func testAccCheckListenerDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_listener" { @@ -485,7 +488,7 @@ func testAccCheckListenerExists(ctx context.Context, name string, listener *vpcl return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameListener, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetListener(ctx, &vpclattice.GetListenerInput{ ListenerIdentifier: aws.String(rs.Primary.Attributes["listener_id"]), ServiceIdentifier: aws.String(rs.Primary.Attributes["service_identifier"]), diff --git a/internal/service/vpclattice/resource_policy.go b/internal/service/vpclattice/resource_policy.go index 8f467aab3fa..822a420d8fd 100644 --- a/internal/service/vpclattice/resource_policy.go +++ b/internal/service/vpclattice/resource_policy.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -59,7 +62,7 @@ const ( ) func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) resourceArn := d.Get("resource_arn").(string) policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) @@ -85,7 +88,7 @@ func resourceResourcePolicyPut(ctx context.Context, d *schema.ResourceData, meta } func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) resourceArn := d.Id() @@ -118,7 +121,7 @@ func resourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, met } func resourceResourcePolicyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPCLattice ResourcePolicy: %s", d.Id()) _, err := conn.DeleteResourcePolicy(ctx, &vpclattice.DeleteResourcePolicyInput{ diff --git a/internal/service/vpclattice/resource_policy_data_source.go b/internal/service/vpclattice/resource_policy_data_source.go new file mode 100644 index 00000000000..2518e830226 --- /dev/null +++ b/internal/service/vpclattice/resource_policy_data_source.go @@ -0,0 +1,58 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package vpclattice + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" +) + +// @SDKDataSource("aws_vpclattice_resource_policy", name="Resource Policy") +func DataSourceResourcePolicy() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceResourcePolicyRead, + + Schema: map[string]*schema.Schema{ + "policy": { + Type: schema.TypeString, + Computed: true, + }, + "resource_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + }, + } +} + +const ( + DSNameResourcePolicy = "Resource Policy Data Source" +) + +func dataSourceResourcePolicyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) + + resourceArn := d.Get("resource_arn").(string) + + out, err := findResourcePolicyByID(ctx, conn, resourceArn) + if err != nil { + return create.DiagError(names.VPCLattice, create.ErrActionReading, DSNameResourcePolicy, d.Id(), err) + } + + if out == nil { + return create.DiagError(names.VPCLattice, create.ErrActionReading, DSNameResourcePolicy, d.Id(), err) + } + + d.SetId(resourceArn) + d.Set("policy", out.Policy) + + return nil +} diff --git a/internal/service/vpclattice/resource_policy_data_source_test.go b/internal/service/vpclattice/resource_policy_data_source_test.go new file mode 100644 index 00000000000..2914ee39917 --- /dev/null +++ b/internal/service/vpclattice/resource_policy_data_source_test.go @@ -0,0 +1,81 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package vpclattice_test + +import ( + "fmt" + "regexp" + "testing" + + sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/names" +) + +func TestAccVPCLatticeResourcePolicyDataSource_basic(t *testing.T) { + ctx := acctest.Context(t) + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_vpclattice_resource_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckPartitionHasService(t, names.VPCLatticeEndpointID) + testAccPreCheck(ctx, t) + }, + ErrorCheck: acctest.ErrorCheck(t, names.VPCLatticeEndpointID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, + CheckDestroy: testAccCheckResourcePolicyDestroy(ctx), + Steps: []resource.TestStep{ + { + Config: testAccResourcePolicyDataSourceConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestMatchResourceAttr(dataSourceName, "policy", regexp.MustCompile(`"vpc-lattice:CreateServiceNetworkVpcAssociation","vpc-lattice:CreateServiceNetworkServiceAssociation","vpc-lattice:GetServiceNetwork"`)), + resource.TestCheckResourceAttrPair(dataSourceName, "resource_arn", "aws_vpclattice_service_network.test", "arn"), + ), + }, + }, + }) +} +func testAccResourcePolicyDataSourceConfig_create(rName string) string { + return fmt.Sprintf(` +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +resource "aws_vpclattice_service_network" "test" { + name = %[1]q +} + +resource "aws_vpclattice_resource_policy" "test" { + resource_arn = aws_vpclattice_service_network.test.arn + + policy = jsonencode({ + Version = "2012-10-17", + Statement = [{ + Sid = "test-pol-principals-6" + Effect = "Allow" + Principal = { + "AWS" = 
"arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:root" + } + Action = [ + "vpc-lattice:CreateServiceNetworkVpcAssociation", + "vpc-lattice:CreateServiceNetworkServiceAssociation", + "vpc-lattice:GetServiceNetwork" + ] + Resource = aws_vpclattice_service_network.test.arn + }] + }) +} +`, rName) +} + +func testAccResourcePolicyDataSourceConfig_basic(rName string) string { + return acctest.ConfigCompose(testAccResourcePolicyDataSourceConfig_create(rName), ` +data "aws_vpclattice_resource_policy" "test" { + resource_arn = aws_vpclattice_resource_policy.test.resource_arn +} +`) +} diff --git a/internal/service/vpclattice/resource_policy_test.go b/internal/service/vpclattice/resource_policy_test.go index 65f129ebe2c..015e90f515a 100644 --- a/internal/service/vpclattice/resource_policy_test.go +++ b/internal/service/vpclattice/resource_policy_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -85,7 +88,7 @@ func TestAccVPCLatticeResourcePolicy_disappears(t *testing.T) { func testAccCheckResourcePolicyDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_resource_policy" { @@ -123,7 +126,7 @@ func testAccCheckResourcePolicyExists(ctx context.Context, name string, resource return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameResourcePolicy, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetResourcePolicy(ctx, &vpclattice.GetResourcePolicyInput{ ResourceArn: aws.String(rs.Primary.ID), }) diff --git 
a/internal/service/vpclattice/service.go b/internal/service/vpclattice/service.go index a3ea158bf89..e4bd20be4dd 100644 --- a/internal/service/vpclattice/service.go +++ b/internal/service/vpclattice/service.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -103,12 +106,12 @@ const ( ) func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) in := &vpclattice.CreateServiceInput{ ClientToken: aws.String(id.UniqueId()), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("auth_type"); ok { @@ -142,7 +145,7 @@ func resourceServiceCreate(ctx context.Context, d *schema.ResourceData, meta int } func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) out, err := findServiceByID(ctx, conn, d.Id()) @@ -174,7 +177,7 @@ func resourceServiceRead(ctx context.Context, d *schema.ResourceData, meta inter } func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &vpclattice.UpdateServiceInput{ @@ -200,7 +203,7 @@ func resourceServiceUpdate(ctx context.Context, d *schema.ResourceData, meta int } func resourceServiceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPC Lattice Service: %s", d.Id()) _, err := conn.DeleteService(ctx, 
&vpclattice.DeleteServiceInput{ diff --git a/internal/service/vpclattice/service_data_source.go b/internal/service/vpclattice/service_data_source.go index 3e2e3ef0771..86c355739df 100644 --- a/internal/service/vpclattice/service_data_source.go +++ b/internal/service/vpclattice/service_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -72,7 +75,7 @@ const ( ) func dataSourceServiceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) service_id := d.Get("service_identifier").(string) out, err := findServiceByID(ctx, conn, service_id) diff --git a/internal/service/vpclattice/service_data_source_test.go b/internal/service/vpclattice/service_data_source_test.go index 5963b1e3d00..7b1baab5a7f 100644 --- a/internal/service/vpclattice/service_data_source_test.go +++ b/internal/service/vpclattice/service_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( diff --git a/internal/service/vpclattice/service_network.go b/internal/service/vpclattice/service_network.go index 9a97b84d07f..ef574a819b6 100644 --- a/internal/service/vpclattice/service_network.go +++ b/internal/service/vpclattice/service_network.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -65,12 +68,12 @@ const ( ) func resourceServiceNetworkCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) in := &vpclattice.CreateServiceNetworkInput{ ClientToken: aws.String(id.UniqueId()), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("auth_type"); ok { @@ -92,7 +95,7 @@ func resourceServiceNetworkCreate(ctx context.Context, d *schema.ResourceData, m } func resourceServiceNetworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) out, err := findServiceNetworkByID(ctx, conn, d.Id()) @@ -114,7 +117,7 @@ func resourceServiceNetworkRead(ctx context.Context, d *schema.ResourceData, met } func resourceServiceNetworkUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &vpclattice.UpdateServiceNetworkInput{ @@ -136,7 +139,7 @@ func resourceServiceNetworkUpdate(ctx context.Context, d *schema.ResourceData, m } func resourceServiceNetworkDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPC Lattice Service Network: %s", d.Id()) _, err := conn.DeleteServiceNetwork(ctx, &vpclattice.DeleteServiceNetworkInput{ diff --git a/internal/service/vpclattice/service_network_data_source.go b/internal/service/vpclattice/service_network_data_source.go index 45560ce0737..4a89e92c2da 100644 --- 
a/internal/service/vpclattice/service_network_data_source.go +++ b/internal/service/vpclattice/service_network_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -69,7 +72,7 @@ const ( ) func dataSourceServiceNetworkRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) serviceNetworkID := d.Get("service_network_identifier").(string) @@ -89,7 +92,7 @@ func dataSourceServiceNetworkRead(ctx context.Context, d *schema.ResourceData, m d.Set("number_of_associated_vpcs", out.NumberOfAssociatedVPCs) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - tags, err := ListTags(ctx, conn, aws.ToString(out.Arn)) + tags, err := listTags(ctx, conn, aws.ToString(out.Arn)) if err != nil { return create.DiagError(names.VPCLattice, create.ErrActionReading, DSNameServiceNetwork, serviceNetworkID, err) diff --git a/internal/service/vpclattice/service_network_data_source_test.go b/internal/service/vpclattice/service_network_data_source_test.go index ffbf96df29b..6b4e4569a60 100644 --- a/internal/service/vpclattice/service_network_data_source_test.go +++ b/internal/service/vpclattice/service_network_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( diff --git a/internal/service/vpclattice/service_network_service_association.go b/internal/service/vpclattice/service_network_service_association.go index c9528dc512f..b3c0498d935 100644 --- a/internal/service/vpclattice/service_network_service_association.go +++ b/internal/service/vpclattice/service_network_service_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -97,13 +100,13 @@ const ( ) func resourceServiceNetworkServiceAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) in := &vpclattice.CreateServiceNetworkServiceAssociationInput{ ClientToken: aws.String(id.UniqueId()), ServiceIdentifier: aws.String(d.Get("service_identifier").(string)), ServiceNetworkIdentifier: aws.String(d.Get("service_network_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } out, err := conn.CreateServiceNetworkServiceAssociation(ctx, in) @@ -125,7 +128,7 @@ func resourceServiceNetworkServiceAssociationCreate(ctx context.Context, d *sche } func resourceServiceNetworkServiceAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) out, err := findServiceNetworkServiceAssociationByID(ctx, conn, d.Id()) @@ -162,7 +165,7 @@ func resourceServiceNetworkServiceAssociationUpdate(ctx context.Context, d *sche } func resourceServiceNetworkServiceAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPCLattice Service Network Association %s", d.Id()) diff --git a/internal/service/vpclattice/service_network_service_association_test.go b/internal/service/vpclattice/service_network_service_association_test.go index d438d8fcd82..4e903ee821c 100644 --- a/internal/service/vpclattice/service_network_service_association_test.go +++ b/internal/service/vpclattice/service_network_service_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -134,7 +137,7 @@ func TestAccVPCLatticeServiceNetworkServiceAssociation_tags(t *testing.T) { func testAccCheckServiceNetworkServiceAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_service_network_service_association" { @@ -170,7 +173,7 @@ func testAccCheckServiceNetworkServiceAssociationExists(ctx context.Context, nam return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameService, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetServiceNetworkServiceAssociation(ctx, &vpclattice.GetServiceNetworkServiceAssociationInput{ ServiceNetworkServiceAssociationIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/vpclattice/service_network_test.go b/internal/service/vpclattice/service_network_test.go index f8d4faebe7d..87eac468701 100644 --- a/internal/service/vpclattice/service_network_test.go +++ b/internal/service/vpclattice/service_network_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -166,7 +169,7 @@ func TestAccVPCLatticeServiceNetwork_tags(t *testing.T) { func testAccCheckServiceNetworkDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_service_network" { @@ -202,7 +205,7 @@ func testAccCheckServiceNetworkExists(ctx context.Context, name string, servicen return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameServiceNetwork, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetServiceNetwork(ctx, &vpclattice.GetServiceNetworkInput{ ServiceNetworkIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/vpclattice/service_network_vpc_association.go b/internal/service/vpclattice/service_network_vpc_association.go index 6df39ad6a46..4f2ae54ba87 100644 --- a/internal/service/vpclattice/service_network_vpc_association.go +++ b/internal/service/vpclattice/service_network_vpc_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -84,13 +87,13 @@ const ( ) func resourceServiceNetworkVPCAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) in := &vpclattice.CreateServiceNetworkVpcAssociationInput{ ClientToken: aws.String(id.UniqueId()), ServiceNetworkIdentifier: aws.String(d.Get("service_network_identifier").(string)), VpcIdentifier: aws.String(d.Get("vpc_identifier").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("security_group_ids"); ok { @@ -116,7 +119,7 @@ func resourceServiceNetworkVPCAssociationCreate(ctx context.Context, d *schema.R } func resourceServiceNetworkVPCAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) out, err := findServiceNetworkVPCAssociationByID(ctx, conn, d.Id()) @@ -141,7 +144,7 @@ func resourceServiceNetworkVPCAssociationRead(ctx context.Context, d *schema.Res } func resourceServiceNetworkVPCAssociationUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &vpclattice.UpdateServiceNetworkVpcAssociationInput{ ServiceNetworkVpcAssociationIdentifier: aws.String(d.Id()), @@ -162,7 +165,7 @@ func resourceServiceNetworkVPCAssociationUpdate(ctx context.Context, d *schema.R } func resourceServiceNetworkVPCAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VPCLattice Service Network VPC 
Association %s", d.Id()) diff --git a/internal/service/vpclattice/service_network_vpc_association_test.go b/internal/service/vpclattice/service_network_vpc_association_test.go index 6565c04d53e..706d356c7ba 100644 --- a/internal/service/vpclattice/service_network_vpc_association_test.go +++ b/internal/service/vpclattice/service_network_vpc_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -170,7 +173,7 @@ func TestAccVPCLatticeServiceNetworkVPCAssociation_tags(t *testing.T) { func testAccCheckServiceNetworkVPCAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_service_network_vpc_association" { @@ -206,7 +209,7 @@ func testAccCheckServiceNetworkVPCAssociationExists(ctx context.Context, name st return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameService, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetServiceNetworkVpcAssociation(ctx, &vpclattice.GetServiceNetworkVpcAssociationInput{ ServiceNetworkVpcAssociationIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/vpclattice/service_package_gen.go b/internal/service/vpclattice/service_package_gen.go index 1b5a81adadf..dd682c2fd80 100644 --- a/internal/service/vpclattice/service_package_gen.go +++ b/internal/service/vpclattice/service_package_gen.go @@ -5,6 +5,9 @@ package vpclattice import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + vpclattice_sdkv2 "github.com/aws/aws-sdk-go-v2/service/vpclattice" + 
"github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -31,6 +34,11 @@ func (p *servicePackage) SDKDataSources(ctx context.Context) []*types.ServicePac TypeName: "aws_vpclattice_listener", Name: "Listener", }, + { + Factory: DataSourceResourcePolicy, + TypeName: "aws_vpclattice_resource_policy", + Name: "Resource Policy", + }, { Factory: DataSourceService, TypeName: "aws_vpclattice_service", @@ -129,4 +137,17 @@ func (p *servicePackage) ServicePackageName() string { return names.VPCLattice } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*vpclattice_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return vpclattice_sdkv2.NewFromConfig(cfg, func(o *vpclattice_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = vpclattice_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/vpclattice/service_test.go b/internal/service/vpclattice/service_test.go index a0c82cc9990..3e54a056048 100644 --- a/internal/service/vpclattice/service_test.go +++ b/internal/service/vpclattice/service_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -166,7 +169,7 @@ func TestAccVPCLatticeService_tags(t *testing.T) { func testAccCheckServiceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_service" { @@ -202,7 +205,7 @@ func testAccCheckServiceExists(ctx context.Context, name string, service *vpclat return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameService, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetService(ctx, &vpclattice.GetServiceInput{ ServiceIdentifier: aws.String(rs.Primary.ID), }) @@ -218,7 +221,7 @@ func testAccCheckServiceExists(ctx context.Context, name string, service *vpclat } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) input := &vpclattice.ListServicesInput{} _, err := conn.ListServices(ctx, input) diff --git a/internal/service/vpclattice/sweep.go b/internal/service/vpclattice/sweep.go index 1227ed7a205..003cefacfc1 100644 --- a/internal/service/vpclattice/sweep.go +++ b/internal/service/vpclattice/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -10,7 +13,6 @@ import ( "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/vpclattice" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -31,11 +33,11 @@ func init() { func sweepServices(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).VPCLatticeClient() + conn := client.VPCLatticeClient(ctx) input := &vpclattice.ListServicesInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -61,7 +63,7 @@ func sweepServices(region string) error { } } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping VPC Lattice Services (%s): %w", region, err) @@ -72,11 +74,11 @@ func sweepServices(region string) error { func sweepServiceNetworks(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).VPCLatticeClient() + conn := client.VPCLatticeClient(ctx) input := &vpclattice.ListServiceNetworksInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -102,7 +104,7 @@ func sweepServiceNetworks(region string) error { } } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping VPC Lattice Service Networks (%s): %w", region, err) diff --git a/internal/service/vpclattice/tags_gen.go 
b/internal/service/vpclattice/tags_gen.go index 1e98d260781..bde902ecdca 100644 --- a/internal/service/vpclattice/tags_gen.go +++ b/internal/service/vpclattice/tags_gen.go @@ -13,10 +13,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists vpclattice service tags. +// listTags lists vpclattice service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *vpclattice.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *vpclattice.Client, identifier string) (tftags.KeyValueTags, error) { input := &vpclattice.ListTagsForResourceInput{ ResourceArn: aws.String(identifier), } @@ -33,7 +33,7 @@ func ListTags(ctx context.Context, conn *vpclattice.Client, identifier string) ( // ListTags lists vpclattice service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).VPCLatticeClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).VPCLatticeClient(ctx), identifier) if err != nil { return err @@ -53,14 +53,14 @@ func Tags(tags tftags.KeyValueTags) map[string]string { return tags.Map() } -// KeyValueTags creates KeyValueTags from vpclattice service tags. +// KeyValueTags creates tftags.KeyValueTags from vpclattice service tags. func KeyValueTags(ctx context.Context, tags map[string]string) tftags.KeyValueTags { return tftags.New(ctx, tags) } -// GetTagsIn returns vpclattice service tags from Context. +// getTagsIn returns vpclattice service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) map[string]string { +func getTagsIn(ctx context.Context) map[string]string { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -70,17 +70,17 @@ func GetTagsIn(ctx context.Context) map[string]string { return nil } -// SetTagsOut sets vpclattice service tags in Context. -func SetTagsOut(ctx context.Context, tags map[string]string) { +// setTagsOut sets vpclattice service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates vpclattice service tags. +// updateTags updates vpclattice service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *vpclattice.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *vpclattice.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -120,5 +120,5 @@ func UpdateTags(ctx context.Context, conn *vpclattice.Client, identifier string, // UpdateTags updates vpclattice service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).VPCLatticeClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).VPCLatticeClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/vpclattice/target_group.go b/internal/service/vpclattice/target_group.go index 473e0efa195..9c8b5dc173a 100644 --- a/internal/service/vpclattice/target_group.go +++ b/internal/service/vpclattice/target_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -219,13 +222,13 @@ const ( ) func resourceTargetGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) name := d.Get("name").(string) in := &vpclattice.CreateTargetGroupInput{ ClientToken: aws.String(id.UniqueId()), Name: aws.String(name), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), Type: types.TargetGroupType(d.Get("type").(string)), } @@ -249,7 +252,7 @@ func resourceTargetGroupCreate(ctx context.Context, d *schema.ResourceData, meta } func resourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) out, err := FindTargetGroupByID(ctx, conn, d.Id()) @@ -279,7 +282,7 @@ func resourceTargetGroupRead(ctx context.Context, d *schema.ResourceData, meta i } func resourceTargetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) if d.HasChangesExcept("tags", "tags_all") { in := &vpclattice.UpdateTargetGroupInput{ @@ -315,7 +318,7 @@ func 
resourceTargetGroupUpdate(ctx context.Context, d *schema.ResourceData, meta } func resourceTargetGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) log.Printf("[INFO] Deleting VpcLattice TargetGroup: %s", d.Id()) _, err := conn.DeleteTargetGroup(ctx, &vpclattice.DeleteTargetGroupInput{ diff --git a/internal/service/vpclattice/target_group_attachment.go b/internal/service/vpclattice/target_group_attachment.go index e38b49d1481..d0de3998257 100644 --- a/internal/service/vpclattice/target_group_attachment.go +++ b/internal/service/vpclattice/target_group_attachment.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice import ( @@ -68,7 +71,7 @@ func resourceTargetGroupAttachment() *schema.Resource { } func resourceTargetGroupAttachmentCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) targetGroupID := d.Get("target_group_identifier").(string) target := expandTarget(d.Get("target").([]interface{})[0].(map[string]interface{})) @@ -96,7 +99,7 @@ func resourceTargetGroupAttachmentCreate(ctx context.Context, d *schema.Resource } func resourceTargetGroupAttachmentRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) targetGroupID := d.Get("target_group_identifier").(string) target := expandTarget(d.Get("target").([]interface{})[0].(map[string]interface{})) @@ -124,7 +127,7 @@ func resourceTargetGroupAttachmentRead(ctx context.Context, d *schema.ResourceDa } func resourceTargetGroupAttachmentDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn 
:= meta.(*conns.AWSClient).VPCLatticeClient() + conn := meta.(*conns.AWSClient).VPCLatticeClient(ctx) targetGroupID := d.Get("target_group_identifier").(string) target := expandTarget(d.Get("target").([]interface{})[0].(map[string]interface{})) diff --git a/internal/service/vpclattice/target_group_attachment_test.go b/internal/service/vpclattice/target_group_attachment_test.go index 924aff36ec4..d20d401e0d7 100644 --- a/internal/service/vpclattice/target_group_attachment_test.go +++ b/internal/service/vpclattice/target_group_attachment_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -334,7 +337,7 @@ func testAccCheckTargetsExists(ctx context.Context, n string) resource.TestCheck } } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) _, err = tfvpclattice.FindTargetByThreePartKey(ctx, conn, rs.Primary.Attributes["target_group_identifier"], rs.Primary.Attributes["target.0.id"], port) @@ -344,7 +347,7 @@ func testAccCheckTargetsExists(ctx context.Context, n string) resource.TestCheck func testAccCheckRegisterTargetsDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_register_targets" { diff --git a/internal/service/vpclattice/target_group_test.go b/internal/service/vpclattice/target_group_test.go index 9348409e5d1..ef6bec2d68a 100644 --- a/internal/service/vpclattice/target_group_test.go +++ b/internal/service/vpclattice/target_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package vpclattice_test import ( @@ -308,7 +311,7 @@ func TestAccVPCLatticeTargetGroup_alb(t *testing.T) { func testAccCheckTargetGroupDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_vpclattice_target_group" { @@ -344,7 +347,7 @@ func testAccCheckTargetGroupExists(ctx context.Context, name string, targetGroup return create.Error(names.VPCLattice, create.ErrActionCheckingExistence, tfvpclattice.ResNameService, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).VPCLatticeClient(ctx) resp, err := conn.GetTargetGroup(ctx, &vpclattice.GetTargetGroupInput{ TargetGroupIdentifier: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/byte_match_set.go b/internal/service/waf/byte_match_set.go index 570fff23882..91493a7ae29 100644 --- a/internal/service/waf/byte_match_set.go +++ b/internal/service/waf/byte_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -76,7 +79,7 @@ func ResourceByteMatchSet() *schema.Resource { func resourceByteMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Creating WAF ByteMatchSet: %s", d.Get("name").(string)) @@ -100,7 +103,7 @@ func resourceByteMatchSetCreate(ctx context.Context, d *schema.ResourceData, met func resourceByteMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Reading WAF ByteMatchSet: %s", d.Get("name").(string)) params := &waf.GetByteMatchSetInput{ ByteMatchSetId: aws.String(d.Id()), @@ -125,7 +128,7 @@ func resourceByteMatchSetRead(ctx context.Context, d *schema.ResourceData, meta func resourceByteMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Updating WAF ByteMatchSet: %s", d.Get("name").(string)) @@ -143,7 +146,7 @@ func resourceByteMatchSetUpdate(ctx context.Context, d *schema.ResourceData, met func resourceByteMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldTuples := d.Get("byte_match_tuples").(*schema.Set).List() if len(oldTuples) > 0 { diff --git a/internal/service/waf/byte_match_set_test.go b/internal/service/waf/byte_match_set_test.go index a165a427a1c..909f6435ef5 100644 --- a/internal/service/waf/byte_match_set_test.go +++ b/internal/service/waf/byte_match_set_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -223,7 +226,7 @@ func TestAccWAFByteMatchSet_disappears(t *testing.T) { func testAccCheckByteMatchSetDisappears(ctx context.Context, v *waf.ByteMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) wr := tfwaf.NewRetryer(conn) _, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -277,7 +280,7 @@ func testAccCheckByteMatchSetExists(ctx context.Context, n string, v *waf.ByteMa return fmt.Errorf("No WAF ByteMatchSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetByteMatchSetWithContext(ctx, &waf.GetByteMatchSetInput{ ByteMatchSetId: aws.String(rs.Primary.ID), }) @@ -302,7 +305,7 @@ func testAccCheckByteMatchSetDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetByteMatchSetWithContext(ctx, &waf.GetByteMatchSetInput{ ByteMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/find.go b/internal/service/waf/find.go index 55625a0a182..dcbb162f998 100644 --- a/internal/service/waf/find.go +++ b/internal/service/waf/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( diff --git a/internal/service/waf/flex.go b/internal/service/waf/flex.go index a97d86b6bbf..2d7a014b05b 100644 --- a/internal/service/waf/flex.go +++ b/internal/service/waf/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( diff --git a/internal/service/waf/generate.go b/internal/service/waf/generate.go index 11d4dba4525..fd5dcd8f21d 100644 --- a/internal/service/waf/generate.go +++ b/internal/service/waf/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=ListByteMatchSets,ListGeoMatchSets,ListIPSets,ListRateBasedRules,ListRegexMatchSets,ListRegexPatternSets,ListRuleGroups,ListRules,ListSizeConstraintSets,ListSqlInjectionMatchSets,ListWebACLs,ListXssMatchSets -Paginator=NextMarker //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ListTagsOutTagsElem=TagInfoForResource.TagList -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package waf diff --git a/internal/service/waf/geo_match_set.go b/internal/service/waf/geo_match_set.go index 1e490235a53..5acd3b72faf 100644 --- a/internal/service/waf/geo_match_set.go +++ b/internal/service/waf/geo_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -58,7 +61,7 @@ func ResourceGeoMatchSet() *schema.Resource { func resourceGeoMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Creating GeoMatchSet: %s", d.Get("name").(string)) @@ -83,7 +86,7 @@ func resourceGeoMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceGeoMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Reading GeoMatchSet: %s", d.Get("name").(string)) params := &waf.GetGeoMatchSetInput{ GeoMatchSetId: aws.String(d.Id()), @@ -116,7 +119,7 @@ func resourceGeoMatchSetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceGeoMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("geo_match_constraint") { o, n := d.GetChange("geo_match_constraint") @@ -133,7 +136,7 @@ func resourceGeoMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceGeoMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldConstraints := d.Get("geo_match_constraint").(*schema.Set).List() if len(oldConstraints) > 0 { diff --git a/internal/service/waf/geo_match_set_test.go b/internal/service/waf/geo_match_set_test.go index 631236249bb..0b1a6d84b16 100644 --- a/internal/service/waf/geo_match_set_test.go +++ b/internal/service/waf/geo_match_set_test.go @@ -1,3 +1,6 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -201,7 +204,7 @@ func TestAccWAFGeoMatchSet_noConstraints(t *testing.T) { func testAccCheckGeoMatchSetDisappears(ctx context.Context, v *waf.GeoMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) wr := tfwaf.NewRetryer(conn) _, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -251,7 +254,7 @@ func testAccCheckGeoMatchSetExists(ctx context.Context, n string, v *waf.GeoMatc return fmt.Errorf("No WAF GeoMatchSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetGeoMatchSetWithContext(ctx, &waf.GetGeoMatchSetInput{ GeoMatchSetId: aws.String(rs.Primary.ID), }) @@ -276,7 +279,7 @@ func testAccCheckGeoMatchSetDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetGeoMatchSetWithContext(ctx, &waf.GetGeoMatchSetInput{ GeoMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/helpers.go b/internal/service/waf/helpers.go index daf54d562e0..d1918ac991d 100644 --- a/internal/service/waf/helpers.go +++ b/internal/service/waf/helpers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( diff --git a/internal/service/waf/ipset.go b/internal/service/waf/ipset.go index 2c04d3a41d5..94dabbc8c86 100644 --- a/internal/service/waf/ipset.go +++ b/internal/service/waf/ipset.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -67,7 +70,7 @@ func ResourceIPSet() *schema.Resource { func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) wr := NewRetryer(conn) out, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -95,7 +98,7 @@ func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) params := &waf.GetIPSetInput{ IPSetId: aws.String(d.Id()), @@ -139,7 +142,7 @@ func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("ip_set_descriptors") { o, n := d.GetChange("ip_set_descriptors") @@ -156,7 +159,7 @@ func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldDescriptors := d.Get("ip_set_descriptors").(*schema.Set).List() diff --git a/internal/service/waf/ipset_data_source.go b/internal/service/waf/ipset_data_source.go index 5d1d723177a..196ae31efd0 100644 --- a/internal/service/waf/ipset_data_source.go +++ b/internal/service/waf/ipset_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -27,7 +30,7 @@ func DataSourceIPSet() *schema.Resource { func dataSourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name := d.Get("name").(string) ipsets := make([]*waf.IPSetSummary, 0) diff --git a/internal/service/waf/ipset_data_source_test.go b/internal/service/waf/ipset_data_source_test.go index 1ec7fa5d1d8..186cedc943a 100644 --- a/internal/service/waf/ipset_data_source_test.go +++ b/internal/service/waf/ipset_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( diff --git a/internal/service/waf/ipset_test.go b/internal/service/waf/ipset_test.go index da4014c647b..5b444db1680 100644 --- a/internal/service/waf/ipset_test.go +++ b/internal/service/waf/ipset_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -381,7 +384,7 @@ func TestAccWAFIPSet_ipv6(t *testing.T) { func testAccCheckIPSetDisappears(ctx context.Context, v *waf.IPSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) wr := tfwaf.NewRetryer(conn) _, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -428,7 +431,7 @@ func testAccCheckIPSetDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetIPSetWithContext(ctx, &waf.GetIPSetInput{ IPSetId: aws.String(rs.Primary.ID), }) @@ -462,7 +465,7 @@ func testAccCheckIPSetExists(ctx context.Context, n string, v *waf.IPSet) resour return fmt.Errorf("No WAF IPSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetIPSetWithContext(ctx, &waf.GetIPSetInput{ IPSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/rate_based_rule.go b/internal/service/waf/rate_based_rule.go index c47958da0a4..4d2a274df4e 100644 --- a/internal/service/waf/rate_based_rule.go +++ b/internal/service/waf/rate_based_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -91,7 +94,7 @@ func ResourceRateBasedRule() *schema.Resource { func resourceRateBasedRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name := d.Get("name").(string) input := &waf.CreateRateBasedRuleInput{ @@ -99,7 +102,7 @@ func resourceRateBasedRuleCreate(ctx context.Context, d *schema.ResourceData, me Name: aws.String(name), RateKey: aws.String(d.Get("rate_key").(string)), RateLimit: aws.Int64(int64(d.Get("rate_limit").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } wr := NewRetryer(conn) @@ -131,7 +134,7 @@ func resourceRateBasedRuleCreate(ctx context.Context, d *schema.ResourceData, me func resourceRateBasedRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) rule, err := FindRateBasedRuleByID(ctx, conn, d.Id()) @@ -177,7 +180,7 @@ func resourceRateBasedRuleRead(ctx context.Context, d *schema.ResourceData, meta func resourceRateBasedRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChanges("predicates", "rate_limit") { o, n := d.GetChange("predicates") @@ -196,7 +199,7 @@ func resourceRateBasedRuleUpdate(ctx context.Context, d *schema.ResourceData, me func resourceRateBasedRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldPredicates := d.Get("predicates").(*schema.Set).List() if len(oldPredicates) > 0 { diff --git 
a/internal/service/waf/rate_based_rule_data_source.go b/internal/service/waf/rate_based_rule_data_source.go index 8ab29a09298..5d946ccda32 100644 --- a/internal/service/waf/rate_based_rule_data_source.go +++ b/internal/service/waf/rate_based_rule_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -27,7 +30,7 @@ func DataSourceRateBasedRule() *schema.Resource { func dataSourceRateBasedRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name := d.Get("name").(string) rules := make([]*waf.RuleSummary, 0) diff --git a/internal/service/waf/rate_based_rule_data_source_test.go b/internal/service/waf/rate_based_rule_data_source_test.go index 3d546ceeae3..ec95b5a4bd8 100644 --- a/internal/service/waf/rate_based_rule_data_source_test.go +++ b/internal/service/waf/rate_based_rule_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( diff --git a/internal/service/waf/rate_based_rule_test.go b/internal/service/waf/rate_based_rule_test.go index 4e7702673b9..2984c6aff67 100644 --- a/internal/service/waf/rate_based_rule_test.go +++ b/internal/service/waf/rate_based_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -318,7 +321,7 @@ func testAccCheckRateBasedRuleDestroy(ctx context.Context) resource.TestCheckFun continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) _, err := tfwaf.FindRateBasedRuleByID(ctx, conn, rs.Primary.ID) @@ -348,7 +351,7 @@ func testAccCheckRateBasedRuleExists(ctx context.Context, n string, v *waf.RateB return fmt.Errorf("No WAF Rate Based Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) output, err := tfwaf.FindRateBasedRuleByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/waf/regex_match_set.go b/internal/service/waf/regex_match_set.go index 437632d8212..009583fb618 100644 --- a/internal/service/waf/regex_match_set.go +++ b/internal/service/waf/regex_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -80,7 +83,7 @@ func ResourceRegexMatchSet() *schema.Resource { func resourceRegexMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Creating WAF Regex Match Set: %s", d.Get("name").(string)) @@ -104,7 +107,7 @@ func resourceRegexMatchSetCreate(ctx context.Context, d *schema.ResourceData, me func resourceRegexMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Reading WAF Regex Match Set: %s", d.Get("name").(string)) params := &waf.GetRegexMatchSetInput{ RegexMatchSetId: aws.String(d.Id()), @@ -137,7 +140,7 @@ func resourceRegexMatchSetRead(ctx context.Context, d 
*schema.ResourceData, meta func resourceRegexMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Updating WAF Regex Match Set: %s", d.Get("name").(string)) @@ -155,7 +158,7 @@ func resourceRegexMatchSetUpdate(ctx context.Context, d *schema.ResourceData, me func resourceRegexMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldTuples := d.Get("regex_match_tuple").(*schema.Set).List() if len(oldTuples) > 0 { diff --git a/internal/service/waf/regex_match_set_test.go b/internal/service/waf/regex_match_set_test.go index a140f7c91a3..4361f541e74 100644 --- a/internal/service/waf/regex_match_set_test.go +++ b/internal/service/waf/regex_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -206,7 +209,7 @@ func computeRegexMatchSetTuple(patternSet *waf.RegexPatternSet, fieldToMatch *wa func testAccCheckRegexMatchSetDisappears(ctx context.Context, set *waf.RegexMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) wr := tfwaf.NewRetryer(conn) _, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -254,7 +257,7 @@ func testAccCheckRegexMatchSetExists(ctx context.Context, n string, v *waf.Regex return fmt.Errorf("No WAF Regex Match Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRegexMatchSetWithContext(ctx, &waf.GetRegexMatchSetInput{ RegexMatchSetId: aws.String(rs.Primary.ID), }) @@ -279,7 +282,7 @@ func testAccCheckRegexMatchSetDestroy(ctx context.Context) resource.TestCheckFun continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRegexMatchSetWithContext(ctx, &waf.GetRegexMatchSetInput{ RegexMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/regex_pattern_set.go b/internal/service/waf/regex_pattern_set.go index 042eac2551d..cbdbdd61057 100644 --- a/internal/service/waf/regex_pattern_set.go +++ b/internal/service/waf/regex_pattern_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -47,7 +50,7 @@ func ResourceRegexPatternSet() *schema.Resource { func resourceRegexPatternSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Creating WAF Regex Pattern Set: %s", d.Get("name").(string)) @@ -71,7 +74,7 @@ func resourceRegexPatternSetCreate(ctx context.Context, d *schema.ResourceData, func resourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Reading WAF Regex Pattern Set: %s", d.Get("name").(string)) params := &waf.GetRegexPatternSetInput{ RegexPatternSetId: aws.String(d.Id()), @@ -104,7 +107,7 @@ func resourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, me func resourceRegexPatternSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Updating WAF Regex Pattern Set: %s", d.Get("name").(string)) @@ -122,7 +125,7 @@ func resourceRegexPatternSetUpdate(ctx context.Context, d *schema.ResourceData, func resourceRegexPatternSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldPatterns := d.Get("regex_pattern_strings").(*schema.Set).List() if len(oldPatterns) > 0 { diff --git a/internal/service/waf/regex_pattern_set_test.go b/internal/service/waf/regex_pattern_set_test.go index 75d834488c9..35c87b9bc3d 100644 --- a/internal/service/waf/regex_pattern_set_test.go +++ 
b/internal/service/waf/regex_pattern_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -161,7 +164,7 @@ func testAccRegexPatternSet_disappears(t *testing.T) { func testAccCheckRegexPatternSetDisappears(ctx context.Context, set *waf.RegexPatternSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) wr := tfwaf.NewRetryer(conn) _, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -210,7 +213,7 @@ func testAccCheckRegexPatternSetExists(ctx context.Context, n string, v *waf.Reg return fmt.Errorf("No WAF Regex Pattern Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRegexPatternSetWithContext(ctx, &waf.GetRegexPatternSetInput{ RegexPatternSetId: aws.String(rs.Primary.ID), }) @@ -235,7 +238,7 @@ func testAccCheckRegexPatternSetDestroy(ctx context.Context) resource.TestCheckF continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRegexPatternSetWithContext(ctx, &waf.GetRegexPatternSetInput{ RegexPatternSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/rule.go b/internal/service/waf/rule.go index f814aa26426..6e70952c452 100644 --- a/internal/service/waf/rule.go +++ b/internal/service/waf/rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -87,7 +90,7 @@ func ResourceRule() *schema.Resource { func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) wr := NewRetryer(conn) out, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -95,7 +98,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf ChangeToken: token, MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateRuleWithContext(ctx, input) @@ -122,7 +125,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) params := &waf.GetRuleInput{ RuleId: aws.String(d.Id()), @@ -176,7 +179,7 @@ func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interfac func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("predicates") { o, n := d.GetChange("predicates") @@ -193,7 +196,7 @@ func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldPredicates := d.Get("predicates").(*schema.Set).List() if len(oldPredicates) > 0 { diff --git a/internal/service/waf/rule_data_source.go 
b/internal/service/waf/rule_data_source.go index 61f5054dbb3..6278f727e6d 100644 --- a/internal/service/waf/rule_data_source.go +++ b/internal/service/waf/rule_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -27,7 +30,7 @@ func DataSourceRule() *schema.Resource { func dataSourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name := d.Get("name").(string) rules := make([]*waf.RuleSummary, 0) diff --git a/internal/service/waf/rule_data_source_test.go b/internal/service/waf/rule_data_source_test.go index 59a71a956f9..670a140c964 100644 --- a/internal/service/waf/rule_data_source_test.go +++ b/internal/service/waf/rule_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( diff --git a/internal/service/waf/rule_group.go b/internal/service/waf/rule_group.go index 1c2e4d5ca67..181ae17403c 100644 --- a/internal/service/waf/rule_group.go +++ b/internal/service/waf/rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -90,7 +93,7 @@ func ResourceRuleGroup() *schema.Resource { func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) wr := NewRetryer(conn) out, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -98,7 +101,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i ChangeToken: token, MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateRuleGroupWithContext(ctx, input) @@ -124,7 +127,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) params := &waf.GetRuleGroupInput{ RuleGroupId: aws.String(d.Id()), @@ -164,7 +167,7 @@ func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta int func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("activated_rule") { o, n := d.GetChange("activated_rule") @@ -172,7 +175,7 @@ func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i err := updateRuleGroupResource(ctx, d.Id(), oldRules, newRules, conn) if err != nil { - return sdkdiag.AppendErrorf(diags, "Updating WAF Rule Group: %s", err) + return sdkdiag.AppendErrorf(diags, "updating WAF Rule Group (%s): %s", d.Id(), err) } } @@ -181,7 +184,7 @@ func resourceRuleGroupUpdate(ctx context.Context, d 
*schema.ResourceData, meta i func resourceRuleGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldRules := d.Get("activated_rule").(*schema.Set).List() if err := deleteRuleGroup(ctx, d.Id(), oldRules, conn); err != nil { @@ -195,7 +198,7 @@ func deleteRuleGroup(ctx context.Context, id string, oldRules []interface{}, con noRules := []interface{}{} err := updateRuleGroupResource(ctx, id, oldRules, noRules, conn) if err != nil { - return fmt.Errorf("Error updating WAF Rule Group Predicates: %s", err) + return fmt.Errorf("updating WAF Rule Group Predicates: %s", err) } } @@ -209,7 +212,7 @@ func deleteRuleGroup(ctx context.Context, id string, oldRules []interface{}, con return conn.DeleteRuleGroupWithContext(ctx, req) }) if err != nil { - return fmt.Errorf("Error deleting WAF Rule Group: %s", err) + return fmt.Errorf("deleting WAF Rule Group: %s", err) } return nil } @@ -226,7 +229,7 @@ func updateRuleGroupResource(ctx context.Context, id string, oldRules, newRules return conn.UpdateRuleGroupWithContext(ctx, req) }) if err != nil { - return fmt.Errorf("Error Updating WAF Rule Group: %s", err) + return fmt.Errorf("updating WAF Rule Group: %s", err) } return nil diff --git a/internal/service/waf/rule_group_test.go b/internal/service/waf/rule_group_test.go index 264207ec3c5..7294befe3ef 100644 --- a/internal/service/waf/rule_group_test.go +++ b/internal/service/waf/rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -205,7 +208,7 @@ func TestAccWAFRuleGroup_changeActivatedRules(t *testing.T) { // which isn't static because ruleId is generated as part of the test func computeActivatedRuleWithRuleId(rule *waf.Rule, actionType string, priority int, idx *int) resource.TestCheckFunc { return func(s *terraform.State) error { - ruleResource := tfwaf.ResourceRuleGroup().Schema["activated_rule"].Elem.(*schema.Resource) + ruleResource := tfwaf.ResourceRuleGroup().SchemaMap()["activated_rule"].Elem.(*schema.Resource) m := map[string]interface{}{ "action": []interface{}{ @@ -303,7 +306,7 @@ func TestAccWAFRuleGroup_noActivatedRules(t *testing.T) { func testAccCheckRuleGroupDisappears(ctx context.Context, group *waf.RuleGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) rResp, err := conn.ListActivatedRulesInRuleGroupWithContext(ctx, &waf.ListActivatedRulesInRuleGroupInput{ RuleGroupId: group.RuleGroupId, @@ -354,7 +357,7 @@ func testAccCheckRuleGroupDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRuleGroupWithContext(ctx, &waf.GetRuleGroupInput{ RuleGroupId: aws.String(rs.Primary.ID), }) @@ -387,7 +390,7 @@ func testAccCheckRuleGroupExists(ctx context.Context, n string, group *waf.RuleG return fmt.Errorf("No WAF Rule Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRuleGroupWithContext(ctx, &waf.GetRuleGroupInput{ RuleGroupId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/rule_test.go b/internal/service/waf/rule_test.go index 7fee26bcfbc..3e9f003e494 100644 --- 
a/internal/service/waf/rule_test.go +++ b/internal/service/waf/rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -295,7 +298,7 @@ func testAccCheckRuleDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRuleWithContext(ctx, &waf.GetRuleInput{ RuleId: aws.String(rs.Primary.ID), }) @@ -328,7 +331,7 @@ func testAccCheckRuleExists(ctx context.Context, n string, v *waf.Rule) resource return fmt.Errorf("No WAF Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetRuleWithContext(ctx, &waf.GetRuleInput{ RuleId: aws.String(rs.Primary.ID), }) @@ -347,7 +350,7 @@ func testAccCheckRuleExists(ctx context.Context, n string, v *waf.Rule) resource } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) input := &waf.ListRulesInput{} diff --git a/internal/service/waf/service_package_gen.go b/internal/service/waf/service_package_gen.go index 1f1edab6e31..215fcb2c743 100644 --- a/internal/service/waf/service_package_gen.go +++ b/internal/service/waf/service_package_gen.go @@ -5,6 +5,10 @@ package waf import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + waf_sdkv1 "github.com/aws/aws-sdk-go/service/waf" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -117,4 +121,13 @@ func (p *servicePackage) ServicePackageName() string { return names.WAF } -var ServicePackage = &servicePackage{} +// NewConn returns a new 
AWS SDK for Go v1 client for this service package's AWS API. +func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*waf_sdkv1.WAF, error) { + sess := config["session"].(*session_sdkv1.Session) + + return waf_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/waf/size_constraint_set.go b/internal/service/waf/size_constraint_set.go index d39ec21824c..76f296c542c 100644 --- a/internal/service/waf/size_constraint_set.go +++ b/internal/service/waf/size_constraint_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -35,7 +38,7 @@ func ResourceSizeConstraintSet() *schema.Resource { func resourceSizeConstraintSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name := d.Get("name").(string) input := &waf.CreateSizeConstraintSetInput{ @@ -60,7 +63,7 @@ func resourceSizeConstraintSetCreate(ctx context.Context, d *schema.ResourceData func resourceSizeConstraintSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) sizeConstraintSet, err := FindSizeConstraintSetByID(ctx, conn, d.Id()) @@ -91,7 +94,7 @@ func resourceSizeConstraintSetRead(ctx context.Context, d *schema.ResourceData, func resourceSizeConstraintSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("size_constraints") { o, n := d.GetChange("size_constraints") @@ -109,7 
+112,7 @@ func resourceSizeConstraintSetUpdate(ctx context.Context, d *schema.ResourceData func resourceSizeConstraintSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldConstraints := d.Get("size_constraints").(*schema.Set).List() diff --git a/internal/service/waf/size_constraint_set_test.go b/internal/service/waf/size_constraint_set_test.go index a4c284135b7..2480150ce62 100644 --- a/internal/service/waf/size_constraint_set_test.go +++ b/internal/service/waf/size_constraint_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -215,7 +218,7 @@ func testAccCheckSizeConstraintSetExists(ctx context.Context, n string, v *waf.S return fmt.Errorf("No WAF Size Constraint Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) output, err := tfwaf.FindSizeConstraintSetByID(ctx, conn, rs.Primary.ID) @@ -236,7 +239,7 @@ func testAccCheckSizeConstraintSetDestroy(ctx context.Context) resource.TestChec continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) _, err := tfwaf.FindSizeConstraintSetByID(ctx, conn, rs.Primary.ID) diff --git a/internal/service/waf/sql_injection_match_set.go b/internal/service/waf/sql_injection_match_set.go index 5fae249c04c..ccb653be6bb 100644 --- a/internal/service/waf/sql_injection_match_set.go +++ b/internal/service/waf/sql_injection_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -66,7 +69,7 @@ func ResourceSQLInjectionMatchSet() *schema.Resource { func resourceSQLInjectionMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Creating SqlInjectionMatchSet: %s", d.Get("name").(string)) @@ -90,7 +93,7 @@ func resourceSQLInjectionMatchSetCreate(ctx context.Context, d *schema.ResourceD func resourceSQLInjectionMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Reading SqlInjectionMatchSet: %s", d.Get("name").(string)) params := &waf.GetSqlInjectionMatchSetInput{ SqlInjectionMatchSetId: aws.String(d.Id()), @@ -118,7 +121,7 @@ func resourceSQLInjectionMatchSetRead(ctx context.Context, d *schema.ResourceDat func resourceSQLInjectionMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("sql_injection_match_tuples") { o, n := d.GetChange("sql_injection_match_tuples") @@ -135,7 +138,7 @@ func resourceSQLInjectionMatchSetUpdate(ctx context.Context, d *schema.ResourceD func resourceSQLInjectionMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldTuples := d.Get("sql_injection_match_tuples").(*schema.Set).List() @@ -176,7 +179,7 @@ func updateSQLInjectionMatchSetResource(ctx context.Context, id string, oldT, ne return conn.UpdateSqlInjectionMatchSetWithContext(ctx, req) }) if err != nil { - return 
fmt.Errorf("Error updating SqlInjectionMatchSet: %s", err) + return fmt.Errorf("updating SqlInjectionMatchSet: %s", err) } return nil diff --git a/internal/service/waf/sql_injection_match_set_test.go b/internal/service/waf/sql_injection_match_set_test.go index 835d6fb7727..119385c1410 100644 --- a/internal/service/waf/sql_injection_match_set_test.go +++ b/internal/service/waf/sql_injection_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -187,7 +190,7 @@ func TestAccWAFSQLInjectionMatchSet_noTuples(t *testing.T) { func testAccCheckSQLInjectionMatchSetDisappears(ctx context.Context, v *waf.SqlInjectionMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) wr := tfwaf.NewRetryer(conn) _, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -237,7 +240,7 @@ func testAccCheckSQLInjectionMatchSetExists(ctx context.Context, n string, v *wa return fmt.Errorf("No WAF SqlInjectionMatchSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetSqlInjectionMatchSetWithContext(ctx, &waf.GetSqlInjectionMatchSetInput{ SqlInjectionMatchSetId: aws.String(rs.Primary.ID), }) @@ -262,7 +265,7 @@ func testAccCheckSQLInjectionMatchSetDestroy(ctx context.Context) resource.TestC continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetSqlInjectionMatchSetWithContext(ctx, &waf.GetSqlInjectionMatchSetInput{ SqlInjectionMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/waf/subscribed_rule_group.go b/internal/service/waf/subscribed_rule_group.go index 2ae6e445727..5ff045e59dc 100644 --- 
a/internal/service/waf/subscribed_rule_group.go +++ b/internal/service/waf/subscribed_rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -35,7 +38,7 @@ func DataSourceSubscribedRuleGroup() *schema.Resource { } func dataSourceSubscribedRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name, nameOk := d.Get("name").(string) metricName, metricNameOk := d.Get("metric_name").(string) diff --git a/internal/service/waf/subscribed_rule_group_test.go b/internal/service/waf/subscribed_rule_group_test.go index e7ba6af199c..e24de22badd 100644 --- a/internal/service/waf/subscribed_rule_group_test.go +++ b/internal/service/waf/subscribed_rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( diff --git a/internal/service/waf/sweep.go b/internal/service/waf/sweep.go index 017e6f01cb4..e7af4147bba 100644 --- a/internal/service/waf/sweep.go +++ b/internal/service/waf/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -12,8 +15,8 @@ import ( "github.com/aws/aws-sdk-go/service/waf" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" + "github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk" ) func init() { @@ -131,13 +134,13 @@ func init() { func sweepByteMatchSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -160,7 +163,7 @@ func sweepByteMatchSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in byte_match_tuples attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Byte Match Set (%s): %w", id, err) @@ -192,7 +195,7 @@ func sweepByteMatchSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Byte Match Sets: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Byte Match Set for %s: %w", region, err)) } @@ -206,13 +209,13 @@ func sweepByteMatchSet(region string) error { func sweepGeoMatchSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { 
return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -235,7 +238,7 @@ func sweepGeoMatchSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in geo_match_constraint attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Geo Match Set (%s): %w", id, err) @@ -267,7 +270,7 @@ func sweepGeoMatchSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Geo Match Sets: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Geo Match Set for %s: %w", region, err)) } @@ -281,13 +284,13 @@ func sweepGeoMatchSet(region string) error { func sweepIPSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -310,7 +313,7 @@ func sweepIPSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in ip_set_descriptors attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF IP Set (%s): %w", id, err) @@ -342,7 +345,7 @@ func sweepIPSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently 
reading WAF IP Sets: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF IP Set for %s: %w", region, err)) } @@ -356,13 +359,13 @@ func sweepIPSet(region string) error { func sweepRateBasedRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -385,7 +388,7 @@ func sweepRateBasedRules(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in predicates attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Rate Based Rule (%s): %w", id, err) @@ -417,7 +420,7 @@ func sweepRateBasedRules(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Rate Based Rules: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Rate Based Rule for %s: %w", region, err)) } @@ -431,13 +434,13 @@ func sweepRateBasedRules(region string) error { func sweepRegexMatchSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) 
sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -460,7 +463,7 @@ func sweepRegexMatchSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in regex_match_tuple attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Regex Match Set (%s): %w", id, err) @@ -492,7 +495,7 @@ func sweepRegexMatchSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Regex Match Sets: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Regex Match Set for %s: %w", region, err)) } @@ -506,13 +509,13 @@ func sweepRegexMatchSet(region string) error { func sweepRegexPatternSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -535,7 +538,7 @@ func sweepRegexPatternSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in regex_pattern_strings attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Regex Pattern Set (%s): %w", id, err) @@ -567,7 +570,7 @@ func sweepRegexPatternSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Regex Pattern Sets: %w", err)) } - if err = 
sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Regex Pattern Set for %s: %w", region, err)) } @@ -581,13 +584,13 @@ func sweepRegexPatternSet(region string) error { func sweepRuleGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -610,7 +613,7 @@ func sweepRuleGroups(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in activated_rule attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Rule Group (%s): %w", id, err) @@ -642,7 +645,7 @@ func sweepRuleGroups(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Rule Groups: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Rule Group for %s: %w", region, err)) } @@ -656,13 +659,13 @@ func sweepRuleGroups(region string) error { func sweepRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs 
*multierror.Error var g multierror.Group @@ -689,7 +692,7 @@ func sweepRules(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in predicates attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Rule (%s): %w", id, err) @@ -721,7 +724,7 @@ func sweepRules(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Rules: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Rules for %s: %w", region, err)) } @@ -735,13 +738,13 @@ func sweepRules(region string) error { func sweepSizeConstraintSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -764,7 +767,7 @@ func sweepSizeConstraintSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in size_constraints attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF Size Constraint Set (%s): %w", id, err) @@ -796,7 +799,7 @@ func sweepSizeConstraintSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Size Constraint Sets: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != 
nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Size Constraint Sets for %s: %w", region, err)) } @@ -810,13 +813,13 @@ func sweepSizeConstraintSet(region string) error { func sweepSQLInjectionMatchSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -839,7 +842,7 @@ func sweepSQLInjectionMatchSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in sql_injection_match_tuples attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF SQL Injection Match Set (%s): %w", id, err) @@ -871,7 +874,7 @@ func sweepSQLInjectionMatchSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF SQL Injection Matches: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF SQL Injection Matches for %s: %w", region, err)) } @@ -885,13 +888,13 @@ func sweepSQLInjectionMatchSet(region string) error { func sweepWebACLs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -918,7 
+921,7 @@ func sweepWebACLs(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in rules argument - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { readErr := fmt.Errorf("error reading WAF Web ACL (%s): %w", id, err) @@ -950,7 +953,7 @@ func sweepWebACLs(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF Web ACLs: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF Web ACLs for %s: %w", region, err)) } @@ -964,13 +967,13 @@ func sweepWebACLs(region string) error { func sweepXSSMatchSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFConn() + conn := client.WAFConn(ctx) sweepResources := make([]sweep.Sweepable, 0) var errs *multierror.Error var g multierror.Group @@ -993,7 +996,7 @@ func sweepXSSMatchSet(region string) error { // read concurrently and gather errors g.Go(func() error { // Need to Read first to fill in xss_match_tuples attribute - err := sweep.ReadResource(ctx, r, d, client) + err := sdk.ReadResource(ctx, r, d, client) if err != nil { sweeperErr := fmt.Errorf("error reading WAF XSS Match Set (%s): %w", id, err) @@ -1025,7 +1028,7 @@ func sweepXSSMatchSet(region string) error { errs = multierror.Append(errs, fmt.Errorf("error concurrently reading WAF XSS Match Sets: %w", err)) } - if err = sweep.SweepOrchestratorWithContext(ctx, sweepResources); err != nil { + if err = sweep.SweepOrchestrator(ctx, sweepResources); err != nil { errs = multierror.Append(errs, fmt.Errorf("error sweeping WAF XSS 
Match Sets for %s: %w", region, err)) } diff --git a/internal/service/waf/tags_gen.go b/internal/service/waf/tags_gen.go index 6364c1da77b..39708c7e35d 100644 --- a/internal/service/waf/tags_gen.go +++ b/internal/service/waf/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists waf service tags. +// listTags lists waf service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn wafiface.WAFAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn wafiface.WAFAPI, identifier string) (tftags.KeyValueTags, error) { input := &waf.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn wafiface.WAFAPI, identifier string) (tft // ListTags lists waf service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).WAFConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).WAFConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*waf.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns waf service tags from Context. +// getTagsIn returns waf service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*waf.Tag { +func getTagsIn(ctx context.Context) []*waf.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*waf.Tag { return nil } -// SetTagsOut sets waf service tags in Context. 
-func SetTagsOut(ctx context.Context, tags []*waf.Tag) { +// setTagsOut sets waf service tags in Context. +func setTagsOut(ctx context.Context, tags []*waf.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates waf service tags. +// updateTags updates waf service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn wafiface.WAFAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn wafiface.WAFAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn wafiface.WAFAPI, identifier string, ol // UpdateTags updates waf service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).WAFConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).WAFConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/waf/token_handlers.go b/internal/service/waf/token_handlers.go index 91fdc22ac94..eaa7820d10a 100644 --- a/internal/service/waf/token_handlers.go +++ b/internal/service/waf/token_handlers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -44,7 +47,7 @@ func (t *WafRetryer) RetryWithToken(ctx context.Context, f withTokenFunc) (inter tokenOut, err = t.Connection.GetChangeToken(&waf.GetChangeTokenInput{}) if err != nil { - return nil, fmt.Errorf("error getting WAF change token: %w", err) + return nil, fmt.Errorf("getting WAF change token: %w", err) } out, err = f(tokenOut.ChangeToken) diff --git a/internal/service/waf/validate.go b/internal/service/waf/validate.go index 93b969194ad..3e06cf3719b 100644 --- a/internal/service/waf/validate.go +++ b/internal/service/waf/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( diff --git a/internal/service/waf/validate_test.go b/internal/service/waf/validate_test.go index c886971fd33..cf6eca68b44 100644 --- a/internal/service/waf/validate_test.go +++ b/internal/service/waf/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( diff --git a/internal/service/waf/web_acl.go b/internal/service/waf/web_acl.go index 9e27b42fac3..e9a0aaa6ffa 100644 --- a/internal/service/waf/web_acl.go +++ b/internal/service/waf/web_acl.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -157,7 +160,7 @@ func ResourceWebACL() *schema.Resource { func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) wr := NewRetryer(conn) out, err := wr.RetryWithToken(ctx, func(token *string) (interface{}, error) { @@ -166,7 +169,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte DefaultAction: ExpandAction(d.Get("default_action").([]interface{})), MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateWebACLWithContext(ctx, input) @@ -220,7 +223,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) params := &waf.GetWebACLInput{ WebACLId: aws.String(d.Id()), @@ -283,7 +286,7 @@ func resourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChanges("default_action", "rules") { o, n := d.GetChange("rules") @@ -331,7 +334,7 @@ func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceWebACLDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) // First, need to delete all rules rules := 
d.Get("rules").(*schema.Set).List() diff --git a/internal/service/waf/web_acl_data_source.go b/internal/service/waf/web_acl_data_source.go index 45e3dbcc606..537e5285384 100644 --- a/internal/service/waf/web_acl_data_source.go +++ b/internal/service/waf/web_acl_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -27,7 +30,7 @@ func DataSourceWebACL() *schema.Resource { func dataSourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) name := d.Get("name").(string) acls := make([]*waf.WebACLSummary, 0) diff --git a/internal/service/waf/web_acl_data_source_test.go b/internal/service/waf/web_acl_data_source_test.go index 4eb64848a5f..93b7317d041 100644 --- a/internal/service/waf/web_acl_data_source_test.go +++ b/internal/service/waf/web_acl_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( diff --git a/internal/service/waf/web_acl_test.go b/internal/service/waf/web_acl_test.go index e4939af6e12..bd2941be8f6 100644 --- a/internal/service/waf/web_acl_test.go +++ b/internal/service/waf/web_acl_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -326,7 +329,7 @@ func testAccCheckWebACLDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetWebACLWithContext(ctx, &waf.GetWebACLInput{ WebACLId: aws.String(rs.Primary.ID), }) @@ -359,7 +362,7 @@ func testAccCheckWebACLExists(ctx context.Context, n string, v *waf.WebACL) reso return fmt.Errorf("No WebACL ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetWebACLWithContext(ctx, &waf.GetWebACLInput{ WebACLId: aws.String(rs.Primary.ID), }) @@ -560,11 +563,6 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_iam_role" "test" { name = %[1]q @@ -583,15 +581,14 @@ resource "aws_iam_role" "test" { ] } EOF - } resource "aws_kinesis_firehose_delivery_stream" "test" { # the name must begin with aws-waf-logs- name = "aws-waf-logs-%[1]s" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.test.arn bucket_arn = aws_s3_bucket.test.arn } @@ -631,11 +628,6 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_iam_role" "test" { name = %[1]q @@ -660,9 +652,9 @@ EOF resource "aws_kinesis_firehose_delivery_stream" "test" { # the name must begin with aws-waf-logs- name = "aws-waf-logs-%[1]s" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.test.arn bucket_arn = aws_s3_bucket.test.arn } diff --git a/internal/service/waf/xss_match_set.go b/internal/service/waf/xss_match_set.go index 
ddc2236325e..9455219ab6b 100644 --- a/internal/service/waf/xss_match_set.go +++ b/internal/service/waf/xss_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf import ( @@ -74,7 +77,7 @@ func ResourceXSSMatchSet() *schema.Resource { func resourceXSSMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Creating XssMatchSet: %s", d.Get("name").(string)) @@ -105,7 +108,7 @@ func resourceXSSMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceXSSMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) log.Printf("[INFO] Reading WAF XSS Match Set: %s", d.Get("name").(string)) params := &waf.GetXssMatchSetInput{ XssMatchSetId: aws.String(d.Id()), @@ -140,7 +143,7 @@ func resourceXSSMatchSetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceXSSMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) if d.HasChange("xss_match_tuples") { o, n := d.GetChange("xss_match_tuples") @@ -157,7 +160,7 @@ func resourceXSSMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceXSSMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFConn() + conn := meta.(*conns.AWSClient).WAFConn(ctx) oldTuples := d.Get("xss_match_tuples").(*schema.Set).List() if len(oldTuples) > 0 { @@ -196,7 +199,7 @@ func updateXSSMatchSetResource(ctx context.Context, id string, oldT, newT []inte 
return conn.UpdateXssMatchSetWithContext(ctx, req) }) if err != nil { - return fmt.Errorf("Error updating WAF XSS Match Set: %w", err) + return fmt.Errorf("updating WAF XSS Match Set: %w", err) } return nil diff --git a/internal/service/waf/xss_match_set_test.go b/internal/service/waf/xss_match_set_test.go index 6f6e6292738..7ac2f4b8b44 100644 --- a/internal/service/waf/xss_match_set_test.go +++ b/internal/service/waf/xss_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package waf_test import ( @@ -222,7 +225,7 @@ func testAccCheckXSSMatchSetExists(ctx context.Context, n string, v *waf.XssMatc return fmt.Errorf("No WAF XSS Match Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetXssMatchSetWithContext(ctx, &waf.GetXssMatchSetInput{ XssMatchSetId: aws.String(rs.Primary.ID), }) @@ -247,7 +250,7 @@ func testAccCheckXSSMatchSetDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFConn(ctx) resp, err := conn.GetXssMatchSetWithContext(ctx, &waf.GetXssMatchSetInput{ XssMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/byte_match_set.go b/internal/service/wafregional/byte_match_set.go index dfbae024728..92d042b2f19 100644 --- a/internal/service/wafregional/byte_match_set.go +++ b/internal/service/wafregional/byte_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -76,7 +79,7 @@ func ResourceByteMatchSet() *schema.Resource { func resourceByteMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Creating ByteMatchSet: %s", d.Get("name").(string)) @@ -102,7 +105,7 @@ func resourceByteMatchSetCreate(ctx context.Context, d *schema.ResourceData, met func resourceByteMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) log.Printf("[INFO] Reading ByteMatchSet: %s", d.Get("name").(string)) @@ -160,7 +163,7 @@ func flattenByteMatchTuplesWR(in []*waf.ByteMatchTuple) []interface{} { func resourceByteMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Updating ByteMatchSet: %s", d.Get("name").(string)) @@ -178,7 +181,7 @@ func resourceByteMatchSetUpdate(ctx context.Context, d *schema.ResourceData, met func resourceByteMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Deleting ByteMatchSet: %s", d.Get("name").(string)) @@ -221,7 +224,7 @@ func updateByteMatchSetResourceWR(ctx context.Context, d *schema.ResourceData, o return conn.UpdateByteMatchSetWithContext(ctx, req) }) if err != nil { - 
return fmt.Errorf("Error updating ByteMatchSet: %s", err) + return fmt.Errorf("updating ByteMatchSet: %s", err) } return nil diff --git a/internal/service/wafregional/byte_match_set_test.go b/internal/service/wafregional/byte_match_set_test.go index 283a1bb5e1b..0fafcf21a3d 100644 --- a/internal/service/wafregional/byte_match_set_test.go +++ b/internal/service/wafregional/byte_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -264,7 +267,7 @@ func TestAccWAFRegionalByteMatchSet_disappears(t *testing.T) { func testAccCheckByteMatchSetDisappears(ctx context.Context, v *waf.ByteMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -319,7 +322,7 @@ func testAccCheckByteMatchSetExists(ctx context.Context, n string, v *waf.ByteMa return fmt.Errorf("No WAF ByteMatchSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetByteMatchSetWithContext(ctx, &waf.GetByteMatchSetInput{ ByteMatchSetId: aws.String(rs.Primary.ID), }) @@ -344,7 +347,7 @@ func testAccCheckByteMatchSetDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetByteMatchSetWithContext(ctx, &waf.GetByteMatchSetInput{ ByteMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/find.go b/internal/service/wafregional/find.go index 48a08e2bdd3..a46a57ffb7b 100644 --- a/internal/service/wafregional/find.go +++ 
b/internal/service/wafregional/find.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( diff --git a/internal/service/wafregional/generate.go b/internal/service/wafregional/generate.go index 2ccf23cd0a1..0db74f64561 100644 --- a/internal/service/wafregional/generate.go +++ b/internal/service/wafregional/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ListTagsOutTagsElem=TagInfoForResource.TagList -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package wafregional diff --git a/internal/service/wafregional/geo_match_set.go b/internal/service/wafregional/geo_match_set.go index 7539826d26c..b364b547b6a 100644 --- a/internal/service/wafregional/geo_match_set.go +++ b/internal/service/wafregional/geo_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -55,7 +58,7 @@ func ResourceGeoMatchSet() *schema.Resource { func resourceGeoMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) @@ -79,7 +82,7 @@ func resourceGeoMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceGeoMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetGeoMatchSetInput{ GeoMatchSetId: aws.String(d.Id()), @@ -103,7 +106,7 @@ func resourceGeoMatchSetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceGeoMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("geo_match_constraint") { @@ -121,7 +124,7 @@ func resourceGeoMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceGeoMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldConstraints := d.Get("geo_match_constraint").(*schema.Set).List() diff --git a/internal/service/wafregional/geo_match_set_test.go b/internal/service/wafregional/geo_match_set_test.go index 373ccdcf18f..cba3704ccbd 100644 --- a/internal/service/wafregional/geo_match_set_test.go +++ 
b/internal/service/wafregional/geo_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -222,7 +225,7 @@ func testAccCheckGeoMatchSetIdDiffers(before, after *waf.GeoMatchSet) resource.T func testAccCheckGeoMatchSetDisappears(ctx context.Context, v *waf.GeoMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -273,7 +276,7 @@ func testAccCheckGeoMatchSetExists(ctx context.Context, n string, v *waf.GeoMatc return fmt.Errorf("No WAF Regional Geo Match Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetGeoMatchSetWithContext(ctx, &waf.GetGeoMatchSetInput{ GeoMatchSetId: aws.String(rs.Primary.ID), }) @@ -298,7 +301,7 @@ func testAccCheckGeoMatchSetDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetGeoMatchSetWithContext(ctx, &waf.GetGeoMatchSetInput{ GeoMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/ipset.go b/internal/service/wafregional/ipset.go index 967d2a96347..0c853db80f5 100644 --- a/internal/service/wafregional/ipset.go +++ b/internal/service/wafregional/ipset.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -62,7 +65,7 @@ func ResourceIPSet() *schema.Resource { func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) @@ -83,7 +86,7 @@ func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetIPSetInput{ IPSetId: aws.String(d.Id()), @@ -131,7 +134,7 @@ func flattenIPSetDescriptorWR(in []*waf.IPSetDescriptor) []interface{} { func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("ip_set_descriptor") { @@ -148,7 +151,7 @@ func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldD := d.Get("ip_set_descriptor").(*schema.Set).List() diff --git a/internal/service/wafregional/ipset_data_source.go b/internal/service/wafregional/ipset_data_source.go index 523696c7c89..f0da1915479 100644 --- a/internal/service/wafregional/ipset_data_source.go +++ b/internal/service/wafregional/ipset_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, 
Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -27,7 +30,7 @@ func DataSourceIPSet() *schema.Resource { func dataSourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) name := d.Get("name").(string) ipsets := make([]*waf.IPSetSummary, 0) diff --git a/internal/service/wafregional/ipset_data_source_test.go b/internal/service/wafregional/ipset_data_source_test.go index 0f393796d2f..26201ba1202 100644 --- a/internal/service/wafregional/ipset_data_source_test.go +++ b/internal/service/wafregional/ipset_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( diff --git a/internal/service/wafregional/ipset_test.go b/internal/service/wafregional/ipset_test.go index 5539eb1788d..d7f4c9eaf1a 100644 --- a/internal/service/wafregional/ipset_test.go +++ b/internal/service/wafregional/ipset_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -355,7 +358,7 @@ func TestDiffIPSetDescriptors(t *testing.T) { func testAccCheckIPSetDisappears(ctx context.Context, v *waf.IPSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -403,7 +406,7 @@ func testAccCheckIPSetDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetIPSetWithContext(ctx, &waf.GetIPSetInput{ IPSetId: aws.String(rs.Primary.ID), }) @@ -436,7 +439,7 @@ func testAccCheckIPSetExists(ctx context.Context, n string, v *waf.IPSet) resour return fmt.Errorf("No WAF IPSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetIPSetWithContext(ctx, &waf.GetIPSetInput{ IPSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/rate_based_rule.go b/internal/service/wafregional/rate_based_rule.go index efe199b2b96..7f58e5f2438 100644 --- a/internal/service/wafregional/rate_based_rule.go +++ b/internal/service/wafregional/rate_based_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -90,7 +93,7 @@ func ResourceRateBasedRule() *schema.Resource { func resourceRateBasedRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) @@ -101,7 +104,7 @@ func resourceRateBasedRuleCreate(ctx context.Context, d *schema.ResourceData, me Name: aws.String(d.Get("name").(string)), RateKey: aws.String(d.Get("rate_key").(string)), RateLimit: aws.Int64(int64(d.Get("rate_limit").(int))), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateRateBasedRuleWithContext(ctx, input) @@ -126,7 +129,7 @@ func resourceRateBasedRuleCreate(ctx context.Context, d *schema.ResourceData, me func resourceRateBasedRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetRateBasedRuleInput{ RuleId: aws.String(d.Id()), @@ -171,7 +174,7 @@ func resourceRateBasedRuleRead(ctx context.Context, d *schema.ResourceData, meta func resourceRateBasedRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChanges("predicate", "rate_limit") { @@ -190,7 +193,7 @@ func resourceRateBasedRuleUpdate(ctx context.Context, d *schema.ResourceData, me func resourceRateBasedRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := 
meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldPredicates := d.Get("predicate").(*schema.Set).List() diff --git a/internal/service/wafregional/rate_based_rule_data_source.go b/internal/service/wafregional/rate_based_rule_data_source.go index 74d5fe8ef16..2212dcc33ee 100644 --- a/internal/service/wafregional/rate_based_rule_data_source.go +++ b/internal/service/wafregional/rate_based_rule_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -27,7 +30,7 @@ func DataSourceRateBasedRule() *schema.Resource { func dataSourceRateBasedRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) name := d.Get("name").(string) rules := make([]*waf.RuleSummary, 0) diff --git a/internal/service/wafregional/rate_based_rule_data_source_test.go b/internal/service/wafregional/rate_based_rule_data_source_test.go index 5ae82b8ada7..283da060110 100644 --- a/internal/service/wafregional/rate_based_rule_data_source_test.go +++ b/internal/service/wafregional/rate_based_rule_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( diff --git a/internal/service/wafregional/rate_based_rule_test.go b/internal/service/wafregional/rate_based_rule_test.go index a1f9db6d7f4..e2aa9ea0953 100644 --- a/internal/service/wafregional/rate_based_rule_test.go +++ b/internal/service/wafregional/rate_based_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -291,7 +294,7 @@ func testAccCheckRateBasedRuleDestroy(ctx context.Context) resource.TestCheckFun continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRateBasedRuleWithContext(ctx, &waf.GetRateBasedRuleInput{ RuleId: aws.String(rs.Primary.ID), }) @@ -325,7 +328,7 @@ func testAccCheckRateBasedRuleExists(ctx context.Context, n string, v *waf.RateB return fmt.Errorf("No WAF Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRateBasedRuleWithContext(ctx, &waf.GetRateBasedRuleInput{ RuleId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/regex_match_set.go b/internal/service/wafregional/regex_match_set.go index 4e84045742e..9f533d4fa56 100644 --- a/internal/service/wafregional/regex_match_set.go +++ b/internal/service/wafregional/regex_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -77,7 +80,7 @@ func ResourceRegexMatchSet() *schema.Resource { func resourceRegexMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Creating WAF Regional Regex Match Set: %s", d.Get("name").(string)) @@ -102,7 +105,7 @@ func resourceRegexMatchSetCreate(ctx context.Context, d *schema.ResourceData, me func resourceRegexMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) set, err := FindRegexMatchSetByID(ctx, conn, d.Id()) if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, wafregional.ErrCodeWAFNonexistentItemException) { @@ -122,7 +125,7 @@ func resourceRegexMatchSetRead(ctx context.Context, d *schema.ResourceData, meta func resourceRegexMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("regex_match_tuple") { @@ -139,7 +142,7 @@ func resourceRegexMatchSetUpdate(ctx context.Context, d *schema.ResourceData, me func resourceRegexMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region err := DeleteRegexMatchSetResource(ctx, conn, region, "global", d.Id(), getRegexMatchTuplesFromResourceData(d)) @@ -188,7 +191,7 @@ func clearRegexMatchTuples(ctx 
context.Context, conn *wafregional.WAFRegional, r return conn.UpdateRegexMatchSetWithContext(ctx, input) }) if err != nil { - return fmt.Errorf("error clearing WAF Regional Regex Match Set (%s): %w", id, err) + return fmt.Errorf("clearing WAF Regional Regex Match Set (%s): %w", id, err) } } return nil @@ -205,7 +208,7 @@ func deleteRegexMatchSet(ctx context.Context, conn *wafregional.WAFRegional, reg return conn.DeleteRegexMatchSetWithContext(ctx, req) }) if err != nil { - return fmt.Errorf("error deleting WAF Regional Regex Match Set (%s): %w", id, err) + return fmt.Errorf("deleting WAF Regional Regex Match Set (%s): %w", id, err) } return nil } diff --git a/internal/service/wafregional/regex_match_set_test.go b/internal/service/wafregional/regex_match_set_test.go index 784f2aa3d9d..2d2224fdae3 100644 --- a/internal/service/wafregional/regex_match_set_test.go +++ b/internal/service/wafregional/regex_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -201,7 +204,7 @@ func testAccCheckRegexMatchSetExists(ctx context.Context, n string, v *waf.Regex return fmt.Errorf("No WAF Regional Regex Match Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRegexMatchSetWithContext(ctx, &waf.GetRegexMatchSetInput{ RegexMatchSetId: aws.String(rs.Primary.ID), }) @@ -226,7 +229,7 @@ func testAccCheckRegexMatchSetDestroy(ctx context.Context) resource.TestCheckFun continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRegexMatchSetWithContext(ctx, &waf.GetRegexMatchSetInput{ RegexMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/regex_pattern_set.go b/internal/service/wafregional/regex_pattern_set.go index 
19ac06ab706..cc359469169 100644 --- a/internal/service/wafregional/regex_pattern_set.go +++ b/internal/service/wafregional/regex_pattern_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -43,7 +46,7 @@ func ResourceRegexPatternSet() *schema.Resource { func resourceRegexPatternSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Creating WAF Regional Regex Pattern Set: %s", d.Get("name").(string)) @@ -68,7 +71,7 @@ func resourceRegexPatternSetCreate(ctx context.Context, d *schema.ResourceData, func resourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) log.Printf("[INFO] Reading WAF Regional Regex Pattern Set: %s", d.Get("name").(string)) params := &waf.GetRegexPatternSetInput{ @@ -94,7 +97,7 @@ func resourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, me func resourceRegexPatternSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Updating WAF Regional Regex Pattern Set: %s", d.Get("name").(string)) @@ -113,7 +116,7 @@ func resourceRegexPatternSetUpdate(ctx context.Context, d *schema.ResourceData, func resourceRegexPatternSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := 
meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldPatterns := d.Get("regex_pattern_strings").(*schema.Set).List() diff --git a/internal/service/wafregional/regex_pattern_set_test.go b/internal/service/wafregional/regex_pattern_set_test.go index aa0b76ffb82..28fa75d0b23 100644 --- a/internal/service/wafregional/regex_pattern_set_test.go +++ b/internal/service/wafregional/regex_pattern_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -160,7 +163,7 @@ func testAccRegexPatternSet_disappears(t *testing.T) { func testAccCheckRegexPatternSetDisappears(ctx context.Context, set *waf.RegexPatternSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -210,7 +213,7 @@ func testAccCheckRegexPatternSetExists(ctx context.Context, n string, patternSet return fmt.Errorf("No WAF Regional Regex Pattern Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRegexPatternSetWithContext(ctx, &waf.GetRegexPatternSetInput{ RegexPatternSetId: aws.String(rs.Primary.ID), }) @@ -235,7 +238,7 @@ func testAccCheckRegexPatternSetDestroy(ctx context.Context) resource.TestCheckF continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRegexPatternSetWithContext(ctx, &waf.GetRegexPatternSetInput{ RegexPatternSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/rule.go b/internal/service/wafregional/rule.go index b2d9a7f372a..b6712fddb2f 
100644 --- a/internal/service/wafregional/rule.go +++ b/internal/service/wafregional/rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -80,7 +83,7 @@ func ResourceRule() *schema.Resource { func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) @@ -89,7 +92,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf ChangeToken: token, MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateRuleWithContext(ctx, input) @@ -113,7 +116,7 @@ func resourceRuleCreate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetRuleInput{ RuleId: aws.String(d.Id()), @@ -163,7 +166,7 @@ func resourceRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interf func resourceRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldPredicates := d.Get("predicate").(*schema.Set).List() @@ -192,7 +195,7 @@ func resourceRuleDelete(ctx context.Context, d *schema.ResourceData, meta interf } func updateRuleResource(ctx context.Context, id string, oldP, newP []interface{}, meta interface{}) error { - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := 
meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) diff --git a/internal/service/wafregional/rule_data_source.go b/internal/service/wafregional/rule_data_source.go index 22d85f9c563..66fc4ff73a7 100644 --- a/internal/service/wafregional/rule_data_source.go +++ b/internal/service/wafregional/rule_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -27,7 +30,7 @@ func DataSourceRule() *schema.Resource { func dataSourceRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) name := d.Get("name").(string) rules := make([]*waf.RuleSummary, 0) diff --git a/internal/service/wafregional/rule_data_source_test.go b/internal/service/wafregional/rule_data_source_test.go index 35f5868bcf9..fb23efd5f79 100644 --- a/internal/service/wafregional/rule_data_source_test.go +++ b/internal/service/wafregional/rule_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( diff --git a/internal/service/wafregional/rule_group.go b/internal/service/wafregional/rule_group.go index 4cef341aeb2..23f52009aa1 100644 --- a/internal/service/wafregional/rule_group.go +++ b/internal/service/wafregional/rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -92,7 +95,7 @@ func ResourceRuleGroup() *schema.Resource { func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) @@ -101,7 +104,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i ChangeToken: token, MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateRuleGroupWithContext(ctx, input) @@ -127,7 +130,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetRuleGroupInput{ RuleGroupId: aws.String(d.Id()), @@ -168,7 +171,7 @@ func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta int func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("activated_rule") { @@ -186,7 +189,7 @@ func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceRuleGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldRules := 
d.Get("activated_rule").(*schema.Set).List() diff --git a/internal/service/wafregional/rule_group_test.go b/internal/service/wafregional/rule_group_test.go index a66c77c4587..9ba8a4e01f0 100644 --- a/internal/service/wafregional/rule_group_test.go +++ b/internal/service/wafregional/rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -280,7 +283,7 @@ func TestAccWAFRegionalRuleGroup_noActivatedRules(t *testing.T) { func testAccCheckRuleGroupDisappears(ctx context.Context, group *waf.RuleGroup) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region rResp, err := conn.ListActivatedRulesInRuleGroupWithContext(ctx, &waf.ListActivatedRulesInRuleGroupInput{ @@ -332,7 +335,7 @@ func testAccCheckRuleGroupDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRuleGroupWithContext(ctx, &waf.GetRuleGroupInput{ RuleGroupId: aws.String(rs.Primary.ID), }) @@ -365,7 +368,7 @@ func testAccCheckRuleGroupExists(ctx context.Context, n string, group *waf.RuleG return fmt.Errorf("No WAF Regional Rule Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRuleGroupWithContext(ctx, &waf.GetRuleGroupInput{ RuleGroupId: aws.String(rs.Primary.ID), }) @@ -525,7 +528,7 @@ resource "aws_wafregional_rule_group" "test" { // which isn't static because ruleId is generated as part of the test func computeActivatedRuleWithRuleId(rule *waf.Rule, actionType string, priority int, idx *int) 
resource.TestCheckFunc { return func(s *terraform.State) error { - ruleResource := tfwafregional.ResourceRuleGroup().Schema["activated_rule"].Elem.(*schema.Resource) + ruleResource := tfwafregional.ResourceRuleGroup().SchemaMap()["activated_rule"].Elem.(*schema.Resource) m := map[string]interface{}{ "action": []interface{}{ diff --git a/internal/service/wafregional/rule_test.go b/internal/service/wafregional/rule_test.go index 09dea1d2d2e..8ca6ca4e09d 100644 --- a/internal/service/wafregional/rule_test.go +++ b/internal/service/wafregional/rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -250,7 +253,7 @@ func TestAccWAFRegionalRule_changePredicates(t *testing.T) { // Calculates the index which isn't static because dataId is generated as part of the test func computeRulePredicate(dataId **string, negated bool, pType string, idx *int) resource.TestCheckFunc { return func(s *terraform.State) error { - predicateResource := tfwafregional.ResourceRule().Schema["predicate"].Elem.(*schema.Resource) + predicateResource := tfwafregional.ResourceRule().SchemaMap()["predicate"].Elem.(*schema.Resource) m := map[string]interface{}{ "data_id": **dataId, "negated": negated, @@ -266,7 +269,7 @@ func computeRulePredicate(dataId **string, negated bool, pType string, idx *int) func testAccCheckRuleDisappears(ctx context.Context, v *waf.Rule) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -315,7 +318,7 @@ func testAccCheckRuleDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) 
resp, err := conn.GetRuleWithContext(ctx, &waf.GetRuleInput{ RuleId: aws.String(rs.Primary.ID), }) @@ -349,7 +352,7 @@ func testAccCheckRuleExists(ctx context.Context, n string, v *waf.Rule) resource return fmt.Errorf("No WAF Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetRuleWithContext(ctx, &waf.GetRuleInput{ RuleId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/service_package_gen.go b/internal/service/wafregional/service_package_gen.go index 4b56a51f731..2fc3536c951 100644 --- a/internal/service/wafregional/service_package_gen.go +++ b/internal/service/wafregional/service_package_gen.go @@ -5,6 +5,10 @@ package wafregional import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + wafregional_sdkv1 "github.com/aws/aws-sdk-go/service/wafregional" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -121,4 +125,13 @@ func (p *servicePackage) ServicePackageName() string { return names.WAFRegional } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*wafregional_sdkv1.WAFRegional, error) { + sess := config["session"].(*session_sdkv1.Session) + + return wafregional_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/wafregional/size_constraint_set.go b/internal/service/wafregional/size_constraint_set.go index bed1afd771d..68631524cb2 100644 --- a/internal/service/wafregional/size_constraint_set.go +++ b/internal/service/wafregional/size_constraint_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -32,7 +35,7 @@ func ResourceSizeConstraintSet() *schema.Resource { func resourceSizeConstraintSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region name := d.Get("name").(string) @@ -60,7 +63,7 @@ func resourceSizeConstraintSetCreate(ctx context.Context, d *schema.ResourceData func resourceSizeConstraintSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) log.Printf("[INFO] Reading WAF Regional SizeConstraintSet: %s", d.Get("name").(string)) params := &waf.GetSizeConstraintSetInput{ @@ -91,7 +94,7 @@ func resourceSizeConstraintSetUpdate(ctx context.Context, d *schema.ResourceData o, n := d.GetChange("size_constraints") oldConstraints, newConstraints := o.(*schema.Set).List(), n.(*schema.Set).List() - err := updateRegionalSizeConstraintSetResource(ctx, d.Id(), oldConstraints, newConstraints, 
client.WAFRegionalConn(), client.Region) + err := updateRegionalSizeConstraintSetResource(ctx, d.Id(), oldConstraints, newConstraints, client.WAFRegionalConn(ctx), client.Region) if err != nil { return sdkdiag.AppendErrorf(diags, "updating WAF Regional SizeConstraintSet(%s): %s", d.Id(), err) } @@ -102,7 +105,7 @@ func resourceSizeConstraintSetUpdate(ctx context.Context, d *schema.ResourceData func resourceSizeConstraintSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldConstraints := d.Get("size_constraints").(*schema.Set).List() diff --git a/internal/service/wafregional/size_constraint_set_test.go b/internal/service/wafregional/size_constraint_set_test.go index 7d59455adc8..d616b49f9f9 100644 --- a/internal/service/wafregional/size_constraint_set_test.go +++ b/internal/service/wafregional/size_constraint_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -218,7 +221,7 @@ func TestAccWAFRegionalSizeConstraintSet_noConstraints(t *testing.T) { func testAccCheckSizeConstraintSetDisappears(ctx context.Context, constraints *waf.SizeConstraintSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -269,7 +272,7 @@ func testAccCheckSizeConstraintSetExists(ctx context.Context, n string, constrai return fmt.Errorf("No WAF SizeConstraintSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetSizeConstraintSetWithContext(ctx, &waf.GetSizeConstraintSetInput{ SizeConstraintSetId: aws.String(rs.Primary.ID), }) @@ -294,7 +297,7 @@ func testAccCheckSizeConstraintSetDestroy(ctx context.Context) resource.TestChec continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetSizeConstraintSetWithContext(ctx, &waf.GetSizeConstraintSetInput{ SizeConstraintSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/sql_injection_match_set.go b/internal/service/wafregional/sql_injection_match_set.go index e0cbe56a2a0..08b4eb9e71c 100644 --- a/internal/service/wafregional/sql_injection_match_set.go +++ b/internal/service/wafregional/sql_injection_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -76,7 +79,7 @@ func ResourceSQLInjectionMatchSet() *schema.Resource { func resourceSQLInjectionMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Creating Regional WAF SQL Injection Match Set: %s", d.Get("name").(string)) @@ -101,7 +104,7 @@ func resourceSQLInjectionMatchSetCreate(ctx context.Context, d *schema.ResourceD func resourceSQLInjectionMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) log.Printf("[INFO] Reading Regional WAF SQL Injection Match Set: %s", d.Get("name").(string)) params := &waf.GetSqlInjectionMatchSetInput{ SqlInjectionMatchSetId: aws.String(d.Id()), @@ -125,7 +128,7 @@ func resourceSQLInjectionMatchSetRead(ctx context.Context, d *schema.ResourceDat func resourceSQLInjectionMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("sql_injection_match_tuple") { @@ -144,7 +147,7 @@ func resourceSQLInjectionMatchSetUpdate(ctx context.Context, d *schema.ResourceD func resourceSQLInjectionMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region oldTuples := d.Get("sql_injection_match_tuple").(*schema.Set).List() diff --git 
a/internal/service/wafregional/sql_injection_match_set_test.go b/internal/service/wafregional/sql_injection_match_set_test.go index 2e88c41f9ce..0081e7fec43 100644 --- a/internal/service/wafregional/sql_injection_match_set_test.go +++ b/internal/service/wafregional/sql_injection_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -204,7 +207,7 @@ func TestAccWAFRegionalSQLInjectionMatchSet_noTuples(t *testing.T) { func testAccCheckSQLInjectionMatchSetDisappears(ctx context.Context, v *waf.SqlInjectionMatchSet) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -255,7 +258,7 @@ func testAccCheckSQLInjectionMatchSetExists(ctx context.Context, n string, v *wa return fmt.Errorf("No WAF SqlInjectionMatchSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetSqlInjectionMatchSetWithContext(ctx, &waf.GetSqlInjectionMatchSetInput{ SqlInjectionMatchSetId: aws.String(rs.Primary.ID), }) @@ -279,7 +282,7 @@ func testAccCheckSQLInjectionMatchSetDestroy(ctx context.Context) resource.TestC continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetSqlInjectionMatchSetWithContext(ctx, &waf.GetSqlInjectionMatchSetInput{ SqlInjectionMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafregional/subscribed_rule_group.go b/internal/service/wafregional/subscribed_rule_group.go index 670b6c2c5bd..85090f20636 100644 --- 
a/internal/service/wafregional/subscribed_rule_group.go +++ b/internal/service/wafregional/subscribed_rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -35,7 +38,7 @@ func DataSourceSubscribedRuleGroup() *schema.Resource { } func dataSourceSubscribedRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) name, nameOk := d.Get("name").(string) metricName, metricNameOk := d.Get("metric_name").(string) diff --git a/internal/service/wafregional/subscribed_rule_group_test.go b/internal/service/wafregional/subscribed_rule_group_test.go index 8b51e83fc10..ded3f1c04a1 100644 --- a/internal/service/wafregional/subscribed_rule_group_test.go +++ b/internal/service/wafregional/subscribed_rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( diff --git a/internal/service/wafregional/sweep.go b/internal/service/wafregional/sweep.go index 36a3b91c2ad..91117d3ef6c 100644 --- a/internal/service/wafregional/sweep.go +++ b/internal/service/wafregional/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -14,7 +17,6 @@ import ( "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/go-multierror" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" tfwaf "github.com/hashicorp/terraform-provider-aws/internal/service/waf" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -54,11 +56,11 @@ func init() { func sweepRateBasedRules(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFRegionalConn() + conn := client.WAFRegionalConn(ctx) input := &waf.ListRateBasedRulesInput{} @@ -148,11 +150,11 @@ func sweepRateBasedRules(region string) error { func sweepRegexMatchSet(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFRegionalConn() + conn := client.WAFRegionalConn(ctx) var sweeperErrs *multierror.Error @@ -195,11 +197,11 @@ func sweepRegexMatchSet(region string) error { func sweepRuleGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFRegionalConn() + conn := client.WAFRegionalConn(ctx) req := &waf.ListRuleGroupsInput{} resp, err := conn.ListRuleGroupsWithContext(ctx, req) @@ -235,11 +237,11 @@ func sweepRuleGroups(region string) error { func sweepRules(region string) error { ctx := sweep.Context(region) - 
client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFRegionalConn() + conn := client.WAFRegionalConn(ctx) input := &waf.ListRulesInput{} @@ -328,11 +330,11 @@ func sweepRules(region string) error { func sweepWebACLs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFRegionalConn() + conn := client.WAFRegionalConn(ctx) input := &waf.ListWebACLsInput{} diff --git a/internal/service/wafregional/tags_gen.go b/internal/service/wafregional/tags_gen.go index d24ef71c98b..6ffcab4b19a 100644 --- a/internal/service/wafregional/tags_gen.go +++ b/internal/service/wafregional/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists wafregional service tags. +// listTags lists wafregional service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn wafregionaliface.WAFRegionalAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn wafregionaliface.WAFRegionalAPI, identifier string) (tftags.KeyValueTags, error) { input := &waf.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn wafregionaliface.WAFRegionalAPI, identif // ListTags lists wafregional service tags and set them in Context. // It is called from outside this package. 
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).WAFRegionalConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).WAFRegionalConn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*waf.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns wafregional service tags from Context. +// getTagsIn returns wafregional service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*waf.Tag { +func getTagsIn(ctx context.Context) []*waf.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*waf.Tag { return nil } -// SetTagsOut sets wafregional service tags in Context. -func SetTagsOut(ctx context.Context, tags []*waf.Tag) { +// setTagsOut sets wafregional service tags in Context. +func setTagsOut(ctx context.Context, tags []*waf.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates wafregional service tags. +// updateTags updates wafregional service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn wafregionaliface.WAFRegionalAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn wafregionaliface.WAFRegionalAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn wafregionaliface.WAFRegionalAPI, ident // UpdateTags updates wafregional service tags. 
// It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).WAFRegionalConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).WAFRegionalConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/wafregional/token_handlers.go b/internal/service/wafregional/token_handlers.go index 5a741f47d92..67e4c9906a6 100644 --- a/internal/service/wafregional/token_handlers.go +++ b/internal/service/wafregional/token_handlers.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -47,7 +50,7 @@ func (t *WafRegionalRetryer) RetryWithToken(ctx context.Context, f withRegionalT tokenOut, err = t.Connection.GetChangeToken(&waf.GetChangeTokenInput{}) if err != nil { - return nil, fmt.Errorf("error getting WAF Regional change token: %w", err) + return nil, fmt.Errorf("getting WAF Regional change token: %w", err) } out, err = f(tokenOut.ChangeToken) diff --git a/internal/service/wafregional/validate.go b/internal/service/wafregional/validate.go index 7a7198b709d..3488935c705 100644 --- a/internal/service/wafregional/validate.go +++ b/internal/service/wafregional/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( diff --git a/internal/service/wafregional/validate_test.go b/internal/service/wafregional/validate_test.go index 2b9da3bc418..0f4c857067a 100644 --- a/internal/service/wafregional/validate_test.go +++ b/internal/service/wafregional/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( diff --git a/internal/service/wafregional/web_acl.go b/internal/service/wafregional/web_acl.go index 1a3b70eb58f..c0b5465ba4f 100644 --- a/internal/service/wafregional/web_acl.go +++ b/internal/service/wafregional/web_acl.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -177,7 +180,7 @@ func ResourceWebACL() *schema.Resource { func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region wr := NewRetryer(conn, region) @@ -187,7 +190,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte DefaultAction: tfwaf.ExpandAction(d.Get("default_action").([]interface{})), MetricName: aws.String(d.Get("metric_name").(string)), Name: aws.String(d.Get("name").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } return conn.CreateWebACLWithContext(ctx, input) @@ -219,7 +222,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte log.Printf("[DEBUG] Updating WAF Regional Web ACL (%s) Logging Configuration: %s", d.Id(), input) if _, err := conn.PutLoggingConfigurationWithContext(ctx, input); err != nil { - return sdkdiag.AppendErrorf(diags, "Updating WAF Regional Web ACL (%s) Logging Configuration: %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "updating WAF Regional Web ACL (%s) Logging Configuration: %s", d.Id(), err) } } @@ -236,7 +239,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte return conn.UpdateWebACLWithContext(ctx, req) }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Updating WAF Regional ACL: %s", err) + return sdkdiag.AppendErrorf(diags, "updating WAF Regional Web ACL (%s): %s", d.Id(), 
err) } } @@ -245,7 +248,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte func resourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetWebACLInput{ WebACLId: aws.String(d.Id()), @@ -315,7 +318,7 @@ func resourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interf func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChanges("default_action", "rule") { @@ -333,7 +336,7 @@ func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta inte return conn.UpdateWebACLWithContext(ctx, req) }) if err != nil { - return sdkdiag.AppendErrorf(diags, "Updating WAF Regional ACL: %s", err) + return sdkdiag.AppendErrorf(diags, "updating WAF Regional Web ACL (%s): %s", d.Id(), err) } } @@ -366,7 +369,7 @@ func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta inte func resourceWebACLDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region // First, need to delete all rules diff --git a/internal/service/wafregional/web_acl_association.go b/internal/service/wafregional/web_acl_association.go index 711802af127..a5160606b32 100644 --- a/internal/service/wafregional/web_acl_association.go +++ b/internal/service/wafregional/web_acl_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -45,7 +48,7 @@ func ResourceWebACLAssociation() *schema.Resource { func resourceWebACLAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) log.Printf( "[INFO] Creating WAF Regional Web ACL association: %s => %s", @@ -60,7 +63,7 @@ func resourceWebACLAssociationCreate(ctx context.Context, d *schema.ResourceData // create association and wait on retryable error // no response body var err error - err = retry.RetryContext(ctx, 2*time.Minute, func() *retry.RetryError { + err = retry.RetryContext(ctx, 5*time.Minute, func() *retry.RetryError { _, err = conn.AssociateWebACLWithContext(ctx, params) if err != nil { if tfawserr.ErrCodeEquals(err, wafregional.ErrCodeWAFUnavailableEntityException) { @@ -85,7 +88,7 @@ func resourceWebACLAssociationCreate(ctx context.Context, d *schema.ResourceData func resourceWebACLAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) resourceArn := WebACLAssociationParseID(d.Id()) @@ -119,7 +122,7 @@ func resourceWebACLAssociationRead(ctx context.Context, d *schema.ResourceData, func resourceWebACLAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) resourceArn := WebACLAssociationParseID(d.Id()) diff --git a/internal/service/wafregional/web_acl_association_test.go b/internal/service/wafregional/web_acl_association_test.go index 1dd582ffb13..6a4572779ba 100644 --- a/internal/service/wafregional/web_acl_association_test.go +++ 
b/internal/service/wafregional/web_acl_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -119,7 +122,7 @@ func TestAccWAFRegionalWebACLAssociation_ResourceARN_apiGatewayStage(t *testing. func testAccCheckWebACLAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_wafregional_web_acl_association" { @@ -162,7 +165,7 @@ func testAccCheckWebACLAssociationExists(ctx context.Context, n string) resource resourceArn := tfwafregional.WebACLAssociationParseID(rs.Primary.ID) - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) input := &wafregional.GetWebACLForResourceInput{ ResourceArn: aws.String(resourceArn), @@ -189,7 +192,7 @@ func testAccCheckWebACLAssociationDisappears(ctx context.Context, resourceName s resourceArn := tfwafregional.WebACLAssociationParseID(rs.Primary.ID) - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) input := &wafregional.DisassociateWebACLInput{ ResourceArn: aws.String(resourceArn), diff --git a/internal/service/wafregional/web_acl_data_source.go b/internal/service/wafregional/web_acl_data_source.go index 38ef9af3dbe..3e8d2c9e4df 100644 --- a/internal/service/wafregional/web_acl_data_source.go +++ b/internal/service/wafregional/web_acl_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -27,7 +30,7 @@ func DataSourceWebACL() *schema.Resource { func dataSourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) name := d.Get("name").(string) acls := make([]*waf.WebACLSummary, 0) diff --git a/internal/service/wafregional/web_acl_data_source_test.go b/internal/service/wafregional/web_acl_data_source_test.go index 4da507873c3..da9bdf901e4 100644 --- a/internal/service/wafregional/web_acl_data_source_test.go +++ b/internal/service/wafregional/web_acl_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( diff --git a/internal/service/wafregional/web_acl_test.go b/internal/service/wafregional/web_acl_test.go index 02f1d5ae205..fea5ed655de 100644 --- a/internal/service/wafregional/web_acl_test.go +++ b/internal/service/wafregional/web_acl_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -404,7 +407,7 @@ func TestAccWAFRegionalWebACL_logging(t *testing.T) { // Calculates the index which isn't static because ruleId is generated as part of the test func computeWebACLRuleIndex(ruleId **string, priority int, ruleType string, actionType string, idx *int) resource.TestCheckFunc { return func(s *terraform.State) error { - ruleResource := tfwafregional.ResourceWebACL().Schema["rule"].Elem.(*schema.Resource) + ruleResource := tfwafregional.ResourceWebACL().SchemaMap()["rule"].Elem.(*schema.Resource) actionMap := map[string]interface{}{ "type": actionType, } @@ -425,7 +428,7 @@ func computeWebACLRuleIndex(ruleId **string, priority int, ruleType string, acti func testAccCheckWebACLDisappears(ctx context.Context, v *waf.WebACL) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) region := acctest.Provider.Meta().(*conns.AWSClient).Region wr := tfwafregional.NewRetryer(conn, region) @@ -474,7 +477,7 @@ func testAccCheckWebACLDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetWebACLWithContext(ctx, &waf.GetWebACLInput{ WebACLId: aws.String(rs.Primary.ID), }) @@ -508,7 +511,7 @@ func testAccCheckWebACLExists(ctx context.Context, n string, v *waf.WebACL) reso return fmt.Errorf("No WebACL ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetWebACLWithContext(ctx, &waf.GetWebACLInput{ WebACLId: aws.String(rs.Primary.ID), }) @@ -808,11 +811,6 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - 
bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_iam_role" "test" { name = %[1]q @@ -837,9 +835,9 @@ EOF resource "aws_kinesis_firehose_delivery_stream" "test" { # the name must begin with aws-waf-logs- name = "aws-waf-logs-%[1]s" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.test.arn bucket_arn = aws_s3_bucket.test.arn } @@ -866,11 +864,6 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_acl" "test" { - bucket = aws_s3_bucket.test.id - acl = "private" -} - resource "aws_iam_role" "test" { name = %[1]q @@ -895,9 +888,9 @@ EOF resource "aws_kinesis_firehose_delivery_stream" "test" { # the name must begin with aws-waf-logs- name = "aws-waf-logs-%[1]s" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.test.arn bucket_arn = aws_s3_bucket.test.arn } diff --git a/internal/service/wafregional/xss_match_set.go b/internal/service/wafregional/xss_match_set.go index e3fb9c14358..1459036c8a5 100644 --- a/internal/service/wafregional/xss_match_set.go +++ b/internal/service/wafregional/xss_match_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafregional import ( @@ -71,7 +74,7 @@ func ResourceXSSMatchSet() *schema.Resource { func resourceXSSMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region log.Printf("[INFO] Creating regional WAF XSS Match Set: %s", d.Get("name").(string)) @@ -104,7 +107,7 @@ func resourceXSSMatchSetCreate(ctx context.Context, d *schema.ResourceData, meta func resourceXSSMatchSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) params := &waf.GetXssMatchSetInput{ XssMatchSetId: aws.String(d.Id()), } @@ -132,7 +135,7 @@ func resourceXSSMatchSetRead(ctx context.Context, d *schema.ResourceData, meta i func resourceXSSMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if d.HasChange("xss_match_tuple") { @@ -150,7 +153,7 @@ func resourceXSSMatchSetUpdate(ctx context.Context, d *schema.ResourceData, meta func resourceXSSMatchSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFRegionalConn() + conn := meta.(*conns.AWSClient).WAFRegionalConn(ctx) region := meta.(*conns.AWSClient).Region if v, ok := d.GetOk("xss_match_tuple"); ok { diff --git a/internal/service/wafregional/xss_match_set_test.go b/internal/service/wafregional/xss_match_set_test.go index 524aceba8fd..117643827b7 100644 --- a/internal/service/wafregional/xss_match_set_test.go +++ 
b/internal/service/wafregional/xss_match_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafregional_test import ( @@ -221,7 +224,7 @@ func testAccCheckXSSMatchSetExists(ctx context.Context, n string, v *waf.XssMatc return fmt.Errorf("No regional WAF XSS Match Set ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetXssMatchSetWithContext(ctx, &waf.GetXssMatchSetInput{ XssMatchSetId: aws.String(rs.Primary.ID), }) @@ -246,7 +249,7 @@ func testAccCheckXSSMatchSetDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFRegionalConn(ctx) resp, err := conn.GetXssMatchSetWithContext(ctx, &waf.GetXssMatchSetInput{ XssMatchSetId: aws.String(rs.Primary.ID), }) diff --git a/internal/service/wafv2/consts.go b/internal/service/wafv2/consts.go index 8a8e5ea9d85..a60ea16dbe1 100644 --- a/internal/service/wafv2/consts.go +++ b/internal/service/wafv2/consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 const ( diff --git a/internal/service/wafv2/flex.go b/internal/service/wafv2/flex.go index d9458fe3eb6..c27fbd4d704 100644 --- a/internal/service/wafv2/flex.go +++ b/internal/service/wafv2/flex.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( diff --git a/internal/service/wafv2/generate.go b/internal/service/wafv2/generate.go index a9219aa529e..640e200ab1e 100644 --- a/internal/service/wafv2/generate.go +++ b/internal/service/wafv2/generate.go @@ -1,5 +1,9 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/listpages/main.go -ListOps=ListIPSets,ListRegexPatternSets,ListRuleGroups,ListWebACLs -Paginator=NextMarker //go:generate go run ../../generate/tags/main.go -ListTags -ListTagsInIDElem=ResourceARN -ListTagsOutTagsElem=TagInfoForResource.TagList -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package wafv2 diff --git a/internal/service/wafv2/ip_set.go b/internal/service/wafv2/ip_set.go index a873d1485f7..3595c224ab1 100644 --- a/internal/service/wafv2/ip_set.go +++ b/internal/service/wafv2/ip_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -18,6 +21,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + itypes "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/internal/verify" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -51,69 +55,71 @@ func ResourceIPSet() *schema.Resource { }, }, - Schema: map[string]*schema.Schema{ - "addresses": { - Type: schema.TypeSet, - Optional: true, - MaxItems: 10000, - Elem: &schema.Schema{Type: schema.TypeString}, - DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - if d.GetRawPlan().GetAttr("addresses").IsWhollyKnown() { - o, n := d.GetChange("addresses") - oldAddresses := o.(*schema.Set).List() - newAddresses := n.(*schema.Set).List() - if len(oldAddresses) == len(newAddresses) { - for _, ov := range oldAddresses { - hasAddress := false - for _, nv := range newAddresses { - if verify.CIDRBlocksEqual(ov.(string), nv.(string)) { - hasAddress = true - break + SchemaFunc: func() 
map[string]*schema.Schema { + return map[string]*schema.Schema{ + "addresses": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 10000, + Elem: &schema.Schema{Type: schema.TypeString}, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if d.GetRawPlan().GetAttr("addresses").IsWhollyKnown() { + o, n := d.GetChange("addresses") + oldAddresses := o.(*schema.Set).List() + newAddresses := n.(*schema.Set).List() + if len(oldAddresses) == len(newAddresses) { + for _, ov := range oldAddresses { + hasAddress := false + for _, nv := range newAddresses { + if itypes.CIDRBlocksEqual(ov.(string), nv.(string)) { + hasAddress = true + break + } + } + if !hasAddress { + return false } } - if !hasAddress { - return false - } + return true } - return true } - } - return false + return false + }, }, - }, - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 256), - }, - "ip_address_version": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(wafv2.IPAddressVersion_Values(), false), - }, - "lock_token": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, - "scope": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + "ip_address_version": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(wafv2.IPAddressVersion_Values(), false), + }, + 
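
The `DiffSuppressFunc` above compares old and new `addresses` entries with `itypes.CIDRBlocksEqual` (replacing the previous `verify.CIDRBlocksEqual`), so that equivalent CIDR notations do not produce a spurious diff. A minimal stand-in for that helper, using only the standard library (`cidrBlocksEqual` here is an illustrative name, not the provider's exported API):

```go
package main

import (
	"fmt"
	"net"
)

// cidrBlocksEqual reports whether two CIDR strings denote the same address
// and network. This is a sketch of what a helper like the provider's
// internal itypes.CIDRBlocksEqual might do; both the IP and the masked
// network must match after parsing and re-stringifying.
func cidrBlocksEqual(cidr1, cidr2 string) bool {
	ip1, ipnet1, err := net.ParseCIDR(cidr1)
	if err != nil {
		return false
	}
	ip2, ipnet2, err := net.ParseCIDR(cidr2)
	if err != nil {
		return false
	}
	return ip2.String() == ip1.String() && ipnet2.String() == ipnet1.String()
}

func main() {
	fmt.Println(cidrBlocksEqual("10.2.2.0/24", "10.2.2.0/24")) // true
	fmt.Println(cidrBlocksEqual("10.2.2.0/24", "10.2.3.0/24")) // false
}
```

In the diff-suppress path this comparison runs only when the planned set is wholly known, and the diff is suppressed only when every old address has a CIDR-equal counterpart in the new set.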
"lock_token": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "scope": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + } }, CustomizeDiff: verify.SetTagsDiff, @@ -121,7 +127,7 @@ func ResourceIPSet() *schema.Resource { } func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) input := &wafv2.CreateIPSetInput{ @@ -129,7 +135,7 @@ func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta inter IPAddressVersion: aws.String(d.Get("ip_address_version").(string)), Name: aws.String(name), Scope: aws.String(d.Get("scope").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("addresses"); ok && v.(*schema.Set).Len() > 0 { @@ -152,7 +158,7 @@ func resourceIPSetCreate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) output, err := FindIPSetByThreePartKey(ctx, conn, d.Id(), d.Get("name").(string), d.Get("scope").(string)) @@ -179,7 +185,7 @@ func resourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interfa } func resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &wafv2.UpdateIPSetInput{ @@ -210,7 +216,7 @@ func 
resourceIPSetUpdate(ctx context.Context, d *schema.ResourceData, meta inter } func resourceIPSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) input := &wafv2.DeleteIPSetInput{ Id: aws.String(d.Id()), diff --git a/internal/service/wafv2/ip_set_data_source.go b/internal/service/wafv2/ip_set_data_source.go index 2ccee6f88e8..fecabb3d9fe 100644 --- a/internal/service/wafv2/ip_set_data_source.go +++ b/internal/service/wafv2/ip_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -18,40 +21,42 @@ func DataSourceIPSet() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceIPSetRead, - Schema: map[string]*schema.Schema{ - "addresses": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Computed: true, - }, - "ip_address_version": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "addresses": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "ip_address_version": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "scope": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), + }, + } }, } } func 
dataSourceIPSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) var foundIpSet *wafv2.IPSetSummary diff --git a/internal/service/wafv2/ip_set_data_source_test.go b/internal/service/wafv2/ip_set_data_source_test.go index 7bfdce319bd..4e7e34768d4 100644 --- a/internal/service/wafv2/ip_set_data_source_test.go +++ b/internal/service/wafv2/ip_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( diff --git a/internal/service/wafv2/ip_set_test.go b/internal/service/wafv2/ip_set_test.go index 3c95da408b8..8694acb4a89 100644 --- a/internal/service/wafv2/ip_set_test.go +++ b/internal/service/wafv2/ip_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( @@ -315,7 +318,7 @@ func testAccCheckIPSetDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) _, err := tfwafv2.FindIPSetByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["name"], rs.Primary.Attributes["scope"]) @@ -345,7 +348,7 @@ func testAccCheckIPSetExists(ctx context.Context, n string, v *wafv2.IPSet) reso return fmt.Errorf("No WAFv2 IPSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) output, err := tfwafv2.FindIPSetByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["name"], rs.Primary.Attributes["scope"]) @@ -379,7 +382,7 @@ resource "aws_wafv2_ip_set" "ip_set" { func testAccIPSetConfig_addresses(name string) string { return fmt.Sprintf(` resource "aws_eip" "test" { - vpc = true + domain = "vpc" } resource 
"aws_wafv2_ip_set" "ip_set" { diff --git a/internal/service/wafv2/regex_pattern_set.go b/internal/service/wafv2/regex_pattern_set.go index 9e4a4544160..8c2a86d8ea0 100644 --- a/internal/service/wafv2/regex_pattern_set.go +++ b/internal/service/wafv2/regex_pattern_set.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -50,51 +53,53 @@ func ResourceRegexPatternSet() *schema.Resource { }, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 256), - }, - "lock_token": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, - "regular_expression": { - Type: schema.TypeSet, - Optional: true, - MaxItems: 10, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "regex_string": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 200), - validation.StringIsValidRegExp, - ), + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + "lock_token": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "regular_expression": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 10, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "regex_string": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 200), + validation.StringIsValidRegExp, + ), + }, 
}, }, }, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), + "scope": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + } }, CustomizeDiff: verify.SetTagsDiff, @@ -102,14 +107,14 @@ func ResourceRegexPatternSet() *schema.Resource { } func resourceRegexPatternSetCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) input := &wafv2.CreateRegexPatternSetInput{ Name: aws.String(name), RegularExpressionList: []*wafv2.Regex{}, Scope: aws.String(d.Get("scope").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("description"); ok { @@ -132,7 +137,7 @@ func resourceRegexPatternSetCreate(ctx context.Context, d *schema.ResourceData, } func resourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) output, err := FindRegexPatternSetByThreePartKey(ctx, conn, d.Id(), d.Get("name").(string), d.Get("scope").(string)) @@ -160,7 +165,7 @@ func resourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, me } func resourceRegexPatternSetUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &wafv2.UpdateRegexPatternSetInput{ @@ -191,7 +196,7 @@ func resourceRegexPatternSetUpdate(ctx 
context.Context, d *schema.ResourceData, } func resourceRegexPatternSetDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) input := &wafv2.DeleteRegexPatternSetInput{ Id: aws.String(d.Id()), diff --git a/internal/service/wafv2/regex_pattern_set_data_source.go b/internal/service/wafv2/regex_pattern_set_data_source.go index ecb0ab2daab..b0965ff8134 100644 --- a/internal/service/wafv2/regex_pattern_set_data_source.go +++ b/internal/service/wafv2/regex_pattern_set_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -17,43 +20,45 @@ func DataSourceRegexPatternSet() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceRegexPatternSetRead, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - }, - "regular_expression": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "regex_string": { - Type: schema.TypeString, - Computed: true, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "regular_expression": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "regex_string": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, + "scope": { + Type: schema.TypeString, + Required: true, + ValidateFunc: 
validation.StringInSlice(wafv2.Scope_Values(), false), + }, + } }, } } func dataSourceRegexPatternSetRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) var foundRegexPatternSet *wafv2.RegexPatternSetSummary diff --git a/internal/service/wafv2/regex_pattern_set_data_source_test.go b/internal/service/wafv2/regex_pattern_set_data_source_test.go index fbb067123e4..75c2e62ffb0 100644 --- a/internal/service/wafv2/regex_pattern_set_data_source_test.go +++ b/internal/service/wafv2/regex_pattern_set_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( diff --git a/internal/service/wafv2/regex_pattern_set_test.go b/internal/service/wafv2/regex_pattern_set_test.go index 112c1e46489..f7838872a08 100644 --- a/internal/service/wafv2/regex_pattern_set_test.go +++ b/internal/service/wafv2/regex_pattern_set_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
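
A recurring change across these resources and data sources is replacing the eager `Schema:` map with `SchemaFunc:`, so a resource's (often large) schema map is only allocated when it is actually needed rather than when the provider's resource registry is populated. A simplified sketch of the mechanism, with toy types standing in for the SDK's `*schema.Schema` and `*schema.Resource` (all names here are illustrative):

```go
package main

import "fmt"

// attr is a toy stand-in for the SDK's *schema.Schema.
type attr struct {
	Type     string
	Required bool
}

// resource mirrors the two ways helper/schema lets a resource declare its
// attributes: an eagerly built map, or a function that builds it on demand.
type resource struct {
	Schema     map[string]*attr        // allocated at provider init
	SchemaFunc func() map[string]*attr // allocated only when first called
}

// schema resolves the attribute map, preferring the lazy form.
func (r *resource) schema() map[string]*attr {
	if r.SchemaFunc != nil {
		return r.SchemaFunc()
	}
	return r.Schema
}

func main() {
	r := &resource{
		SchemaFunc: func() map[string]*attr {
			// The map is built here, not when the resource is registered.
			return map[string]*attr{
				"name":  {Type: "string", Required: true},
				"scope": {Type: "string", Required: true},
			}
		},
	}
	fmt.Println(len(r.schema())) // 2
}
```

The body of each converted schema in the diff is unchanged; only the wrapping moves from a map literal to a function returning that literal.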
+// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( @@ -224,7 +227,7 @@ func testAccCheckRegexPatternSetDestroy(ctx context.Context) resource.TestCheckF continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) _, err := tfwafv2.FindRegexPatternSetByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["name"], rs.Primary.Attributes["scope"]) @@ -254,7 +257,7 @@ func testAccCheckRegexPatternSetExists(ctx context.Context, n string, v *wafv2.R return fmt.Errorf("No WAFv2 RegexPatternSet ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) output, err := tfwafv2.FindRegexPatternSetByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["name"], rs.Primary.Attributes["scope"]) diff --git a/internal/service/wafv2/rule_group.go b/internal/service/wafv2/rule_group.go index 1c4abd8f2fd..ba1095a4587 100644 --- a/internal/service/wafv2/rule_group.go +++ b/internal/service/wafv2/rule_group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -53,80 +56,82 @@ func ResourceRuleGroup() *schema.Resource { }, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "capacity": { - Type: schema.TypeInt, - Required: true, - ForceNew: true, - ValidateFunc: validation.IntAtLeast(1), - }, - "custom_response_body": customResponseBodySchema(), - "description": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 256), - }, - "lock_token": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 128), - validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9-_]+$`), "must contain only alphanumeric hyphen and underscore characters"), - ), - }, - "rule": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "action": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "allow": allowConfigSchema(), - "block": blockConfigSchema(), - "captcha": captchaConfigSchema(), - "challenge": challengeConfigSchema(), - "count": countConfigSchema(), + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "capacity": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "custom_response_body": customResponseBodySchema(), + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + "lock_token": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 128), + 
validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9-_]+$`), "must contain only alphanumeric hyphen and underscore characters"), + ), + }, + "rule": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allow": allowConfigSchema(), + "block": blockConfigSchema(), + "captcha": captchaConfigSchema(), + "challenge": challengeConfigSchema(), + "count": countConfigSchema(), + }, }, }, + "captcha_config": outerCaptchaConfigSchema(), + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "priority": { + Type: schema.TypeInt, + Required: true, + }, + "rule_label": ruleLabelsSchema(), + "statement": ruleGroupRootStatementSchema(ruleGroupRootStatementSchemaLevel), + "visibility_config": visibilityConfigSchema(), }, - "captcha_config": outerCaptchaConfigSchema(), - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, - "priority": { - Type: schema.TypeInt, - Required: true, - }, - "rule_label": ruleLabelsSchema(), - "statement": ruleGroupRootStatementSchema(ruleGroupRootStatementSchemaLevel), - "visibility_config": visibilityConfigSchema(), }, }, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), - "visibility_config": visibilityConfigSchema(), + "scope": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), + }, + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "visibility_config": visibilityConfigSchema(), + } }, CustomizeDiff: verify.SetTagsDiff, @@ 
-134,7 +139,7 @@ func ResourceRuleGroup() *schema.Resource { } func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) input := &wafv2.CreateRuleGroupInput{ @@ -142,7 +147,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i Name: aws.String(name), Rules: expandRules(d.Get("rule").(*schema.Set).List()), Scope: aws.String(d.Get("scope").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VisibilityConfig: expandVisibilityConfig(d.Get("visibility_config").([]interface{})), } @@ -170,7 +175,7 @@ func resourceRuleGroupCreate(ctx context.Context, d *schema.ResourceData, meta i } func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) output, err := FindRuleGroupByThreePartKey(ctx, conn, d.Id(), d.Get("name").(string), d.Get("scope").(string)) @@ -204,7 +209,7 @@ func resourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta int } func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &wafv2.UpdateRuleGroupInput{ @@ -238,7 +243,7 @@ func resourceRuleGroupUpdate(ctx context.Context, d *schema.ResourceData, meta i } func resourceRuleGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) input := &wafv2.DeleteRuleGroupInput{ Id: aws.String(d.Id()), diff --git a/internal/service/wafv2/rule_group_data_source.go b/internal/service/wafv2/rule_group_data_source.go index 
c2a32ef868c..df3c50c5f92 100644 --- a/internal/service/wafv2/rule_group_data_source.go +++ b/internal/service/wafv2/rule_group_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -17,31 +20,33 @@ func DataSourceRuleGroup() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceRuleGroupRead, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "scope": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), + }, + } }, } } func dataSourceRuleGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) var foundRuleGroup *wafv2.RuleGroupSummary diff --git a/internal/service/wafv2/rule_group_data_source_test.go b/internal/service/wafv2/rule_group_data_source_test.go index 34b23867be1..7605845a865 100644 --- a/internal/service/wafv2/rule_group_data_source_test.go +++ b/internal/service/wafv2/rule_group_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( diff --git a/internal/service/wafv2/rule_group_test.go b/internal/service/wafv2/rule_group_test.go index 1fdd39f5d48..20a081553e0 100644 --- a/internal/service/wafv2/rule_group_test.go +++ b/internal/service/wafv2/rule_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( @@ -2106,7 +2109,7 @@ func TestAccWAFV2RuleGroup_Operators_maxNested(t *testing.T) { } func testAccPreCheckScopeRegional(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) input := &wafv2.ListRuleGroupsInput{ Scope: aws.String(wafv2.ScopeRegional), @@ -2130,7 +2133,7 @@ func testAccCheckRuleGroupDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) _, err := tfwafv2.FindRuleGroupByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["name"], rs.Primary.Attributes["scope"]) @@ -2160,7 +2163,7 @@ func testAccCheckRuleGroupExists(ctx context.Context, n string, v *wafv2.RuleGro return fmt.Errorf("No WAFv2 RuleGroup ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) output, err := tfwafv2.FindRuleGroupByThreePartKey(ctx, conn, rs.Primary.ID, rs.Primary.Attributes["name"], rs.Primary.Attributes["scope"]) diff --git a/internal/service/wafv2/schemas.go b/internal/service/wafv2/schemas.go index 5f50eddc150..4b6771d2f4b 100644 --- a/internal/service/wafv2/schemas.go +++ b/internal/service/wafv2/schemas.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -10,27 +13,17 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/verify" ) -func emptySchema() *schema.Schema { - return &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{}, - }, - } +var listOfEmptyObjectSchema *schema.Schema = &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{}, + }, } -func emptyDeprecatedSchema() *schema.Schema { - return &schema.Schema{ - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{}, - }, - Deprecated: "Not supported by WAFv2 API", - } +func emptySchema() *schema.Schema { + return listOfEmptyObjectSchema } func ruleLabelsSchema() *schema.Schema { diff --git a/internal/service/wafv2/service_package.go b/internal/service/wafv2/service_package.go new file mode 100644 index 00000000000..159a4ff06f9 --- /dev/null +++ b/internal/service/wafv2/service_package.go @@ -0,0 +1,39 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package wafv2 + +import ( + "context" + + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + request_sdkv1 "github.com/aws/aws-sdk-go/aws/request" + wafv2_sdkv1 "github.com/aws/aws-sdk-go/service/wafv2" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" +) + +// CustomizeConn customizes a new AWS SDK for Go v1 client for this service package's AWS API. 
+func (p *servicePackage) CustomizeConn(ctx context.Context, conn *wafv2_sdkv1.WAFV2) (*wafv2_sdkv1.WAFV2, error) { + conn.Handlers.Retry.PushBack(func(r *request_sdkv1.Request) { + if tfawserr.ErrMessageContains(r.Error, wafv2_sdkv1.ErrCodeWAFInternalErrorException, "Retry your request") { + r.Retryable = aws_sdkv1.Bool(true) + } + + if tfawserr.ErrMessageContains(r.Error, wafv2_sdkv1.ErrCodeWAFServiceLinkedRoleErrorException, "Retry") { + r.Retryable = aws_sdkv1.Bool(true) + } + + if r.Operation.Name == "CreateIPSet" || r.Operation.Name == "CreateRegexPatternSet" || + r.Operation.Name == "CreateRuleGroup" || r.Operation.Name == "CreateWebACL" { + // WAFv2 supports tag on create which can result in the below error codes according to the documentation + if tfawserr.ErrMessageContains(r.Error, wafv2_sdkv1.ErrCodeWAFTagOperationException, "Retry your request") { + r.Retryable = aws_sdkv1.Bool(true) + } + if tfawserr.ErrMessageContains(r.Error, wafv2_sdkv1.ErrCodeWAFTagOperationInternalErrorException, "Retry your request") { + r.Retryable = aws_sdkv1.Bool(true) + } + } + }) + + return conn, nil +} diff --git a/internal/service/wafv2/service_package_gen.go b/internal/service/wafv2/service_package_gen.go index 3f1beb5df66..e9ce0dfe4cd 100644 --- a/internal/service/wafv2/service_package_gen.go +++ b/internal/service/wafv2/service_package_gen.go @@ -5,6 +5,10 @@ package wafv2 import ( "context" + aws_sdkv1 "github.com/aws/aws-sdk-go/aws" + session_sdkv1 "github.com/aws/aws-sdk-go/aws/session" + wafv2_sdkv1 "github.com/aws/aws-sdk-go/service/wafv2" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -89,4 +93,13 @@ func (p *servicePackage) ServicePackageName() string { return names.WAFV2 } -var ServicePackage = &servicePackage{} +// NewConn returns a new AWS SDK for Go v1 client for this service package's AWS API. 
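
The retry policy that `CustomizeConn` pushes onto the request handler chain can be summarized as a pure decision function: internal errors and service-linked-role errors are always retryable, while tag-operation errors are retryable only for the `Create*` operations that tag on create. A self-contained sketch with plain strings standing in for the SDK's error codes (`isRetryableWAFV2Error` is a hypothetical name, not part of the provider):

```go
package main

import (
	"fmt"
	"strings"
)

// isRetryableWAFV2Error mirrors the retry policy installed by CustomizeConn:
// the operation name and the error's code and message together decide whether
// the request handler marks the request retryable.
func isRetryableWAFV2Error(operation, code, message string) bool {
	if code == "WAFInternalErrorException" && strings.Contains(message, "Retry your request") {
		return true
	}
	if code == "WAFServiceLinkedRoleErrorException" && strings.Contains(message, "Retry") {
		return true
	}
	// WAFv2 supports tagging on create, which can surface transient
	// tag-operation errors on these operations only.
	createOps := map[string]bool{
		"CreateIPSet":           true,
		"CreateRegexPatternSet": true,
		"CreateRuleGroup":       true,
		"CreateWebACL":          true,
	}
	if createOps[operation] && strings.Contains(message, "Retry your request") &&
		(code == "WAFTagOperationException" || code == "WAFTagOperationInternalErrorException") {
		return true
	}
	return false
}

func main() {
	fmt.Println(isRetryableWAFV2Error("CreateIPSet", "WAFTagOperationException", "Retry your request")) // true
	fmt.Println(isRetryableWAFV2Error("UpdateIPSet", "WAFTagOperationException", "Retry your request")) // false
}
```

In the real handler the same checks run against `r.Error` via `tfawserr.ErrMessageContains`, and a match sets `r.Retryable = aws.Bool(true)` instead of returning a boolean.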
+func (p *servicePackage) NewConn(ctx context.Context, config map[string]any) (*wafv2_sdkv1.WAFV2, error) { + sess := config["session"].(*session_sdkv1.Session) + + return wafv2_sdkv1.New(sess.Copy(&aws_sdkv1.Config{Endpoint: aws_sdkv1.String(config["endpoint"].(string))})), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/wafv2/sweep.go b/internal/service/wafv2/sweep.go index a0a456ea977..898d54aef41 100644 --- a/internal/service/wafv2/sweep.go +++ b/internal/service/wafv2/sweep.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep @@ -11,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/wafv2" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) @@ -50,11 +52,11 @@ func init() { func sweepIPSets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFV2Conn() + conn := client.WAFV2Conn(ctx) input := &wafv2.ListIPSetsInput{ Scope: aws.String(wafv2.ScopeRegional), } @@ -88,7 +90,7 @@ func sweepIPSets(region string) error { return fmt.Errorf("error listing WAFv2 IPSets (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping WAFv2 IPSets (%s): %w", region, err) @@ -99,11 +101,11 @@ func sweepIPSets(region string) error { func sweepRegexPatternSets(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := 
sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFV2Conn() + conn := client.WAFV2Conn(ctx) input := &wafv2.ListRegexPatternSetsInput{ Scope: aws.String(wafv2.ScopeRegional), } @@ -137,7 +139,7 @@ func sweepRegexPatternSets(region string) error { return fmt.Errorf("error listing WAFv2 RegexPatternSets (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping WAFv2 RegexPatternSets (%s): %w", region, err) @@ -148,11 +150,11 @@ func sweepRegexPatternSets(region string) error { func sweepRuleGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFV2Conn() + conn := client.WAFV2Conn(ctx) input := &wafv2.ListRuleGroupsInput{ Scope: aws.String(wafv2.ScopeRegional), } @@ -186,7 +188,7 @@ func sweepRuleGroups(region string) error { return fmt.Errorf("error listing WAFv2 RuleGroups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping WAFv2 RuleGroups (%s): %w", region, err) @@ -197,11 +199,11 @@ func sweepRuleGroups(region string) error { func sweepWebACLs(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WAFV2Conn() + conn := client.WAFV2Conn(ctx) input := &wafv2.ListWebACLsInput{ Scope: aws.String(wafv2.ScopeRegional), } @@ -245,7 +247,7 @@ func 
sweepWebACLs(region string) error { return fmt.Errorf("error listing WAFv2 WebACLs (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping WAFv2 WebACLs (%s): %w", region, err) diff --git a/internal/service/wafv2/tags_gen.go b/internal/service/wafv2/tags_gen.go index 0d69478cd73..673213b475b 100644 --- a/internal/service/wafv2/tags_gen.go +++ b/internal/service/wafv2/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists wafv2 service tags. +// listTags lists wafv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn wafv2iface.WAFV2API, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn wafv2iface.WAFV2API, identifier string) (tftags.KeyValueTags, error) { input := &wafv2.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn wafv2iface.WAFV2API, identifier string) // ListTags lists wafv2 service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).WAFV2Conn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).WAFV2Conn(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []*wafv2.Tag) tftags.KeyValueTags { return tftags.New(ctx, m) } -// GetTagsIn returns wafv2 service tags from Context. +// getTagsIn returns wafv2 service tags from Context. // nil is returned if there are no input tags. 
-func GetTagsIn(ctx context.Context) []*wafv2.Tag { +func getTagsIn(ctx context.Context) []*wafv2.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*wafv2.Tag { return nil } -// SetTagsOut sets wafv2 service tags in Context. -func SetTagsOut(ctx context.Context, tags []*wafv2.Tag) { +// setTagsOut sets wafv2 service tags in Context. +func setTagsOut(ctx context.Context, tags []*wafv2.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates wafv2 service tags. +// updateTags updates wafv2 service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn wafv2iface.WAFV2API, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn wafv2iface.WAFV2API, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn wafv2iface.WAFV2API, identifier string // UpdateTags updates wafv2 service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).WAFV2Conn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).WAFV2Conn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/wafv2/web_acl.go b/internal/service/wafv2/web_acl.go index ed505ae1889..2cd463fcfb3 100644 --- a/internal/service/wafv2/web_acl.go +++ b/internal/service/wafv2/web_acl.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -54,112 +57,114 @@ func ResourceWebACL() *schema.Resource { }, }, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "capacity": { - Type: schema.TypeInt, - Computed: true, - }, - "captcha_config": outerCaptchaConfigSchema(), - "custom_response_body": customResponseBodySchema(), - "default_action": { - Type: schema.TypeList, - Required: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "allow": allowConfigSchema(), - "block": blockConfigSchema(), + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "capacity": { + Type: schema.TypeInt, + Computed: true, + }, + "captcha_config": outerCaptchaConfigSchema(), + "custom_response_body": customResponseBodySchema(), + "default_action": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allow": allowConfigSchema(), + "block": blockConfigSchema(), + }, }, }, - }, - "description": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 256), - }, - "lock_token": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 128), - validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9-_]+$`), "must contain only alphanumeric hyphen and underscore characters"), - ), - }, - "rule": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "action": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "allow": allowConfigSchema(), - "block": blockConfigSchema(), - "captcha": captchaConfigSchema(), - "challenge": 
challengeConfigSchema(), - "count": countConfigSchema(), + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 256), + }, + "lock_token": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 128), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9-_]+$`), "must contain only alphanumeric hyphen and underscore characters"), + ), + }, + "rule": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allow": allowConfigSchema(), + "block": blockConfigSchema(), + "captcha": captchaConfigSchema(), + "challenge": challengeConfigSchema(), + "count": countConfigSchema(), + }, }, }, - }, - "captcha_config": outerCaptchaConfigSchema(), - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringLenBetween(1, 128), - }, - "override_action": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "count": emptySchema(), - "none": emptySchema(), + "captcha_config": outerCaptchaConfigSchema(), + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "override_action": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "count": emptySchema(), + "none": emptySchema(), + }, }, }, + "priority": { + Type: schema.TypeInt, + Required: true, + }, + "rule_label": ruleLabelsSchema(), + "statement": webACLRootStatementSchema(webACLRootStatementSchemaLevel), + "visibility_config": visibilityConfigSchema(), }, - "priority": { - Type: schema.TypeInt, - Required: true, - }, - 
"rule_label": ruleLabelsSchema(), - "statement": webACLRootStatementSchema(webACLRootStatementSchemaLevel), - "visibility_config": visibilityConfigSchema(), }, }, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, - names.AttrTags: tftags.TagsSchema(), - names.AttrTagsAll: tftags.TagsSchemaComputed(), - "token_domains": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 253), - validation.StringMatch(regexp.MustCompile(`^[\w\.\-/]+$`), "must contain only alphanumeric, hyphen, dot, underscore and forward-slash characters"), - ), + "scope": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), }, - }, - "visibility_config": visibilityConfigSchema(), + names.AttrTags: tftags.TagsSchema(), + names.AttrTagsAll: tftags.TagsSchemaComputed(), + "token_domains": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 253), + validation.StringMatch(regexp.MustCompile(`^[\w\.\-/]+$`), "must contain only alphanumeric, hyphen, dot, underscore and forward-slash characters"), + ), + }, + }, + "visibility_config": visibilityConfigSchema(), + } }, CustomizeDiff: verify.SetTagsDiff, @@ -167,7 +172,7 @@ func ResourceWebACL() *schema.Resource { } func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) input := &wafv2.CreateWebACLInput{ @@ -176,7 +181,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte Name: aws.String(name), Rules: 
expandWebACLRules(d.Get("rule").(*schema.Set).List()), Scope: aws.String(d.Get("scope").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), VisibilityConfig: expandVisibilityConfig(d.Get("visibility_config").([]interface{})), } @@ -208,7 +213,7 @@ func resourceWebACLCreate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) output, err := FindWebACLByThreePartKey(ctx, conn, d.Id(), d.Get("name").(string), d.Get("scope").(string)) @@ -251,7 +256,7 @@ func resourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interf } func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &wafv2.UpdateWebACLInput{ @@ -294,7 +299,7 @@ func resourceWebACLUpdate(ctx context.Context, d *schema.ResourceData, meta inte } func resourceWebACLDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) input := &wafv2.DeleteWebACLInput{ Id: aws.String(d.Id()), diff --git a/internal/service/wafv2/web_acl_association.go b/internal/service/wafv2/web_acl_association.go index 9bcd9c448bb..705d1357429 100644 --- a/internal/service/wafv2/web_acl_association.go +++ b/internal/service/wafv2/web_acl_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -33,25 +36,27 @@ func ResourceWebACLAssociation() *schema.Resource { Create: schema.DefaultTimeout(5 * time.Minute), }, - Schema: map[string]*schema.Schema{ - "resource_arn": { - Type: schema.TypeString, - ForceNew: true, - Required: true, - ValidateFunc: verify.ValidARN, - }, - "web_acl_arn": { - Type: schema.TypeString, - ForceNew: true, - Required: true, - ValidateFunc: verify.ValidARN, - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "resource_arn": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "web_acl_arn": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + ValidateFunc: verify.ValidARN, + }, + } }, } } func resourceWebACLAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) webACLARN := d.Get("web_acl_arn").(string) resourceARN := d.Get("resource_arn").(string) @@ -76,7 +81,7 @@ func resourceWebACLAssociationCreate(ctx context.Context, d *schema.ResourceData } func resourceWebACLAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) _, resourceARN, err := WebACLAssociationParseResourceID(d.Id()) @@ -103,7 +108,7 @@ func resourceWebACLAssociationRead(ctx context.Context, d *schema.ResourceData, } func resourceWebACLAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) _, resourceARN, err := WebACLAssociationParseResourceID(d.Id()) diff --git a/internal/service/wafv2/web_acl_association_test.go b/internal/service/wafv2/web_acl_association_test.go index 
ce081fdd17d..98c84f46a53 100644 --- a/internal/service/wafv2/web_acl_association_test.go +++ b/internal/service/wafv2/web_acl_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( @@ -97,7 +100,7 @@ func testAccCheckWebACLAssociationDestroy(ctx context.Context) resource.TestChec return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) _, err = tfwafv2.FindWebACLByResourceARN(ctx, conn, resourceARN) @@ -133,7 +136,7 @@ func testAccCheckWebACLAssociationExists(ctx context.Context, n string) resource return err } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) _, err = tfwafv2.FindWebACLByResourceARN(ctx, conn, resourceARN) diff --git a/internal/service/wafv2/web_acl_data_source.go b/internal/service/wafv2/web_acl_data_source.go index 24108853b87..f4a4bf9d39f 100644 --- a/internal/service/wafv2/web_acl_data_source.go +++ b/internal/service/wafv2/web_acl_data_source.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -17,31 +20,33 @@ func DataSourceWebACL() *schema.Resource { return &schema.Resource{ ReadWithoutTimeout: dataSourceWebACLRead, - Schema: map[string]*schema.Schema{ - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "description": { - Type: schema.TypeString, - Computed: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - }, - "scope": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), - }, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "scope": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.Scope_Values(), false), + }, + } }, } } func dataSourceWebACLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) name := d.Get("name").(string) var foundWebACL *wafv2.WebACLSummary diff --git a/internal/service/wafv2/web_acl_data_source_test.go b/internal/service/wafv2/web_acl_data_source_test.go index c29785e75c8..8d9586401d0 100644 --- a/internal/service/wafv2/web_acl_data_source_test.go +++ b/internal/service/wafv2/web_acl_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( diff --git a/internal/service/wafv2/web_acl_logging_configuration.go b/internal/service/wafv2/web_acl_logging_configuration.go index 9946122ba77..70a12b0ba39 100644 --- a/internal/service/wafv2/web_acl_logging_configuration.go +++ b/internal/service/wafv2/web_acl_logging_configuration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package wafv2 import ( @@ -11,12 +14,14 @@ import ( "github.com/aws/aws-sdk-go/service/wafv2" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -32,174 +37,149 @@ func ResourceWebACLLoggingConfiguration() *schema.Resource { StateContext: schema.ImportStatePassthroughContext, }, - Schema: map[string]*schema.Schema{ - "log_destination_configs": { - Type: schema.TypeSet, - Required: true, - ForceNew: true, - MinItems: 1, - MaxItems: 100, - Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: verify.ValidARN, + SchemaFunc: func() map[string]*schema.Schema { + return map[string]*schema.Schema{ + "log_destination_configs": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + MinItems: 1, + MaxItems: 100, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidARN, + }, + Description: "AWS Kinesis Firehose Delivery Stream ARNs", }, - Description: "AWS Kinesis Firehose Delivery Stream ARNs", - }, - "logging_filter": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "default_behavior": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.FilterBehavior_Values(), false), - }, - "filter": { - Type: schema.TypeSet, - Required: true, - Elem: 
&schema.Resource{ - Schema: map[string]*schema.Schema{ - "behavior": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.FilterBehavior_Values(), false), - }, - "condition": { - Type: schema.TypeSet, - Required: true, - MinItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "action_condition": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "action": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.ActionValue_Values(), false), + "logging_filter": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "default_behavior": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.FilterBehavior_Values(), false), + }, + "filter": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "behavior": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.FilterBehavior_Values(), false), + }, + "condition": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action_condition": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.ActionValue_Values(), false), + }, }, }, }, - }, - "label_name_condition": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "label_name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 1024), - validation.StringMatch(regexp.MustCompile(`^[0-9A-Za-z_\-:]+$`), "must contain only 
alphanumeric characters, underscores, hyphens, and colons"), - ), + "label_name_condition": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "label_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 1024), + validation.StringMatch(regexp.MustCompile(`^[0-9A-Za-z_\-:]+$`), "must contain only alphanumeric characters, underscores, hyphens, and colons"), + ), + }, }, }, }, }, }, }, - }, - "requirement": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(wafv2.FilterRequirement_Values(), false), + "requirement": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(wafv2.FilterRequirement_Values(), false), + }, }, }, }, }, }, }, - }, - "redacted_fields": { - // To allow this argument and its nested fields with Empty Schemas (e.g. "method") - // to be correctly interpreted, this argument must be of type List, - // otherwise, at apply-time a field configured as an empty block - // (e.g. body {}) will result in a nil redacted_fields attribute - Type: schema.TypeList, - Optional: true, - MaxItems: 100, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - // TODO: remove attributes marked as Deprecated - // as they are not supported by the WAFv2 API - // in the context of WebACL Logging Configurations - "all_query_arguments": emptyDeprecatedSchema(), - "body": emptyDeprecatedSchema(), - "method": emptySchema(), - "query_string": emptySchema(), - "single_header": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 40), - // The value is returned in lower case by the API. 
- // Trying to solve it with StateFunc and/or DiffSuppressFunc resulted in hash problem of the rule field or didn't work. - validation.StringMatch(regexp.MustCompile(`^[a-z0-9-_]+$`), "must contain only lowercase alphanumeric characters, underscores, and hyphens"), - ), - }, - }, - }, - }, - "single_query_argument": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(1, 30), - // The value is returned in lower case by the API. - // Trying to solve it with StateFunc and/or DiffSuppressFunc resulted in hash problem of the rule field or didn't work. - validation.StringMatch(regexp.MustCompile(`^[a-z0-9-_]+$`), "must contain only lowercase alphanumeric characters, underscores, and hyphens"), - ), - Deprecated: "Not supported by WAFv2 API", + "redacted_fields": { + // To allow this argument and its nested fields with Empty Schemas (e.g. "method") + // to be correctly interpreted, this argument must be of type List, + // otherwise, at apply-time a field configured as an empty block + // (e.g. body {}) will result in a nil redacted_fields attribute + Type: schema.TypeList, + Optional: true, + MaxItems: 100, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "method": emptySchema(), + "query_string": emptySchema(), + "single_header": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 40), + // The value is returned in lower case by the API. + // Trying to solve it with StateFunc and/or DiffSuppressFunc resulted in hash problem of the rule field or didn't work. 
+ validation.StringMatch(regexp.MustCompile(`^[a-z0-9-_]+$`), "must contain only lowercase alphanumeric characters, underscores, and hyphens"), + ), + }, }, }, }, - Deprecated: "Not supported by WAFv2 API", + "uri_path": emptySchema(), }, - "uri_path": emptySchema(), }, + Description: "Parts of the request to exclude from logs", + DiffSuppressFunc: suppressRedactedFieldsDiff, + }, + "resource_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + Description: "AWS WebACL ARN", }, - Description: "Parts of the request to exclude from logs", - DiffSuppressFunc: suppressRedactedFieldsDiff, - }, - "resource_arn": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: verify.ValidARN, - Description: "AWS WebACL ARN", - }, + } }, } } func resourceWebACLLoggingConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() - - resourceArn := d.Get("resource_arn").(string) + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) + resourceARN := d.Get("resource_arn").(string) config := &wafv2.LoggingConfiguration{ LogDestinationConfigs: flex.ExpandStringSet(d.Get("log_destination_configs").(*schema.Set)), - ResourceArn: aws.String(resourceArn), + ResourceArn: aws.String(resourceARN), } if v, ok := d.GetOk("logging_filter"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { @@ -219,61 +199,41 @@ func resourceWebACLLoggingConfigurationPut(ctx context.Context, d *schema.Resour output, err := conn.PutLoggingConfigurationWithContext(ctx, input) if err != nil { - return sdkdiag.AppendErrorf(diags, "putting WAFv2 Logging Configuration for resource (%s): %s", resourceArn, err) + return sdkdiag.AppendErrorf(diags, "putting WAFv2 WebACL Logging Configuration (%s): %s", resourceARN, err) } - if output == nil || output.LoggingConfiguration == nil { - return sdkdiag.AppendErrorf(diags, "putting 
WAFv2 Logging Configuration for resource (%s): empty response", resourceArn) + if d.IsNewResource() { + d.SetId(aws.StringValue(output.LoggingConfiguration.ResourceArn)) } - d.SetId(aws.StringValue(output.LoggingConfiguration.ResourceArn)) - return append(diags, resourceWebACLLoggingConfigurationRead(ctx, d, meta)...) } func resourceWebACLLoggingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) - input := &wafv2.GetLoggingConfigurationInput{ - ResourceArn: aws.String(d.Id()), - } + loggingConfig, err := FindLoggingConfigurationByARN(ctx, conn, d.Id()) - output, err := conn.GetLoggingConfigurationWithContext(ctx, input) - - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, wafv2.ErrCodeWAFNonexistentItemException) { - log.Printf("[WARN] WAFv2 Logging Configuration for WebACL with ARN %s not found, removing from state", d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] WAFv2 WebACL Logging Configuration (%s) not found, removing from state", d.Id()) d.SetId("") return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "reading WAFv2 Logging Configuration for resource (%s): %s", d.Id(), err) + return diag.Errorf("reading WAFv2 WebACL Logging Configuration (%s): %s", d.Id(), err) } - if output == nil || output.LoggingConfiguration == nil { - if d.IsNewResource() { - return sdkdiag.AppendErrorf(diags, "reading WAFv2 Logging Configuration for resource (%s): empty response after creation", d.Id()) - } - log.Printf("[WARN] WAFv2 Logging Configuration for WebACL with ARN %s not found, removing from state", d.Id()) - d.SetId("") - return diags - } - - loggingConfig := output.LoggingConfiguration - if err := d.Set("log_destination_configs", flex.FlattenStringList(loggingConfig.LogDestinationConfigs)); err != nil { return sdkdiag.AppendErrorf(diags, "setting 
log_destination_configs: %s", err) } - if err := d.Set("logging_filter", flattenLoggingFilter(loggingConfig.LoggingFilter)); err != nil { return sdkdiag.AppendErrorf(diags, "setting logging_filter: %s", err) } - if err := d.Set("redacted_fields", flattenRedactedFields(loggingConfig.RedactedFields)); err != nil { return sdkdiag.AppendErrorf(diags, "setting redacted_fields: %s", err) } - d.Set("resource_arn", loggingConfig.ResourceArn) return diags @@ -281,25 +241,49 @@ func resourceWebACLLoggingConfigurationRead(ctx context.Context, d *schema.Resou func resourceWebACLLoggingConfigurationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WAFV2Conn() + conn := meta.(*conns.AWSClient).WAFV2Conn(ctx) - input := &wafv2.DeleteLoggingConfigurationInput{ + log.Printf("[INFO] Deleting WAFv2 WebACL Logging Configuration: %s", d.Id()) + _, err := conn.DeleteLoggingConfigurationWithContext(ctx, &wafv2.DeleteLoggingConfigurationInput{ ResourceArn: aws.String(d.Id()), - } - - _, err := conn.DeleteLoggingConfigurationWithContext(ctx, input) + }) if tfawserr.ErrCodeEquals(err, wafv2.ErrCodeWAFNonexistentItemException) { return diags } if err != nil { - return sdkdiag.AppendErrorf(diags, "deleting WAFv2 Logging Configuration for resource (%s): %s", d.Id(), err) + return sdkdiag.AppendErrorf(diags, "deleting WAFv2 WebACL Logging Configuration (%s): %s", d.Id(), err) } return diags } +func FindLoggingConfigurationByARN(ctx context.Context, conn *wafv2.WAFV2, arn string) (*wafv2.LoggingConfiguration, error) { + input := &wafv2.GetLoggingConfigurationInput{ + ResourceArn: aws.String(arn), + } + + output, err := conn.GetLoggingConfigurationWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, wafv2.ErrCodeWAFNonexistentItemException) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || 
output.LoggingConfiguration == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.LoggingConfiguration, nil +} + func expandLoggingFilter(l []interface{}) *wafv2.LoggingFilter { if len(l) == 0 || l[0] == nil { return nil diff --git a/internal/service/wafv2/web_acl_logging_configuration_test.go b/internal/service/wafv2/web_acl_logging_configuration_test.go index 0d78f653e0a..a24b0047699 100644 --- a/internal/service/wafv2/web_acl_logging_configuration_test.go +++ b/internal/service/wafv2/web_acl_logging_configuration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package wafv2_test import ( @@ -5,15 +8,14 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/wafv2" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfwafv2 "github.com/hashicorp/terraform-provider-aws/internal/service/wafv2" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func TestAccWAFV2WebACLLoggingConfiguration_basic(t *testing.T) { @@ -623,25 +625,19 @@ func testAccCheckWebACLLoggingConfigurationDestroy(ctx context.Context) resource continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() - resp, err := conn.GetLoggingConfigurationWithContext(ctx, &wafv2.GetLoggingConfigurationInput{ - ResourceArn: aws.String(rs.Primary.ID), - }) + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) + + _, err := tfwafv2.FindLoggingConfigurationByARN(ctx, conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } if err != nil { - // Continue checking resources in state if a WebACL Logging 
Configuration is already destroyed - if tfawserr.ErrCodeEquals(err, wafv2.ErrCodeWAFNonexistentItemException) { - continue - } return err } - if resp == nil || resp.LoggingConfiguration == nil { - return fmt.Errorf("Error getting WAFv2 WebACL Logging Configuration") - } - if aws.StringValue(resp.LoggingConfiguration.ResourceArn) == rs.Primary.ID { - return fmt.Errorf("WAFv2 WebACL Logging Configuration for WebACL ARN %s still exists", rs.Primary.ID) - } + return fmt.Errorf("WAFv2 WebACL Logging Configuration %s still exists", rs.Primary.ID) } return nil @@ -659,36 +655,28 @@ func testAccCheckWebACLLoggingConfigurationExists(ctx context.Context, n string, return fmt.Errorf("No WAFv2 WebACL Logging Configuration ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn() - resp, err := conn.GetLoggingConfigurationWithContext(ctx, &wafv2.GetLoggingConfigurationInput{ - ResourceArn: aws.String(rs.Primary.ID), - }) + conn := acctest.Provider.Meta().(*conns.AWSClient).WAFV2Conn(ctx) + + output, err := tfwafv2.FindLoggingConfigurationByARN(ctx, conn, rs.Primary.ID) if err != nil { return err } - if resp == nil || resp.LoggingConfiguration == nil { - return fmt.Errorf("Error getting WAFv2 WebACL Logging Configuration") - } - - if aws.StringValue(resp.LoggingConfiguration.ResourceArn) == rs.Primary.ID { - *v = *resp.LoggingConfiguration - return nil - } + *v = *output - return fmt.Errorf("WAFv2 WebACL Logging Configuration (%s) not found", rs.Primary.ID) + return nil } } -func testAccWebACLLoggingConfigurationDependenciesConfig(rName string) string { +func testAccWebACLLoggingConfigurationConfig_base(rName string) string { return fmt.Sprintf(` data "aws_partition" "current" {} data "aws_caller_identity" "current" {} resource "aws_iam_role" "firehose" { - name = "%[1]s" + name = %[1]q assume_role_policy = < 0 { return tags @@ -71,17 +71,17 @@ func GetTagsIn(ctx context.Context) map[string]*string { return nil } -// SetTagsOut sets worklink service tags in 
Context. -func SetTagsOut(ctx context.Context, tags map[string]*string) { +// setTagsOut sets worklink service tags in Context. +func setTagsOut(ctx context.Context, tags map[string]*string) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates worklink service tags. +// updateTags updates worklink service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn worklinkiface.WorkLinkAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn worklinkiface.WorkLinkAPI, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -121,5 +121,5 @@ func UpdateTags(ctx context.Context, conn worklinkiface.WorkLinkAPI, identifier // UpdateTags updates worklink service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).WorkLinkConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).WorkLinkConn(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/worklink/website_certificate_authority_association.go b/internal/service/worklink/website_certificate_authority_association.go index b789b96fefa..98f80d3a66d 100644 --- a/internal/service/worklink/website_certificate_authority_association.go +++ b/internal/service/worklink/website_certificate_authority_association.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package worklink import ( @@ -55,7 +58,7 @@ func ResourceWebsiteCertificateAuthorityAssociation() *schema.Resource { func resourceWebsiteCertificateAuthorityAssociationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkLinkConn() + conn := meta.(*conns.AWSClient).WorkLinkConn(ctx) input := &worklink.AssociateWebsiteCertificateAuthorityInput{ FleetArn: aws.String(d.Get("fleet_arn").(string)), @@ -78,7 +81,7 @@ func resourceWebsiteCertificateAuthorityAssociationCreate(ctx context.Context, d func resourceWebsiteCertificateAuthorityAssociationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkLinkConn() + conn := meta.(*conns.AWSClient).WorkLinkConn(ctx) fleetArn, websiteCaID, err := DecodeWebsiteCertificateAuthorityAssociationResourceID(d.Id()) if err != nil { @@ -110,7 +113,7 @@ func resourceWebsiteCertificateAuthorityAssociationRead(ctx context.Context, d * func resourceWebsiteCertificateAuthorityAssociationDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkLinkConn() + conn := meta.(*conns.AWSClient).WorkLinkConn(ctx) fleetArn, websiteCaID, err := DecodeWebsiteCertificateAuthorityAssociationResourceID(d.Id()) if err != nil { diff --git a/internal/service/worklink/website_certificate_authority_association_test.go b/internal/service/worklink/website_certificate_authority_association_test.go index 0a5890241f1..a2ce25906d8 100644 --- a/internal/service/worklink/website_certificate_authority_association_test.go +++ b/internal/service/worklink/website_certificate_authority_association_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package worklink_test import ( @@ -109,7 +112,7 @@ func TestAccWorkLinkWebsiteCertificateAuthorityAssociation_disappears(t *testing func testAccCheckWebsiteCertificateAuthorityAssociationDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkLinkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkLinkConn(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_worklink_website_certificate_authority_association" { @@ -146,7 +149,7 @@ func testAccCheckWebsiteCertificateAuthorityAssociationDisappears(ctx context.Co return fmt.Errorf("No resource ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkLinkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkLinkConn(ctx) fleetArn, websiteCaID, err := tfworklink.DecodeWebsiteCertificateAuthorityAssociationResourceID(rs.Primary.ID) if err != nil { return err @@ -191,7 +194,7 @@ func testAccCheckWebsiteCertificateAuthorityAssociationExists(ctx context.Contex return fmt.Errorf("WorkLink Fleet ARN is missing, should be set.") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkLinkConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkLinkConn(ctx) fleetArn, websiteCaID, err := tfworklink.DecodeWebsiteCertificateAuthorityAssociationResourceID(rs.Primary.ID) if err != nil { return err diff --git a/internal/service/workspaces/bundle_data_source.go b/internal/service/workspaces/bundle_data_source.go index 9f969c00780..8b423e1518e 100644 --- a/internal/service/workspaces/bundle_data_source.go +++ b/internal/service/workspaces/bundle_data_source.go @@ -1,11 +1,15 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -79,13 +83,13 @@ func DataSourceBundle() *schema.Resource { func dataSourceWorkspaceBundleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) - var bundle *workspaces.WorkspaceBundle + var bundle types.WorkspaceBundle if bundleID, ok := d.GetOk("bundle_id"); ok { - resp, err := conn.DescribeWorkspaceBundlesWithContext(ctx, &workspaces.DescribeWorkspaceBundlesInput{ - BundleIds: []*string{aws.String(bundleID.(string))}, + resp, err := conn.DescribeWorkspaceBundles(ctx, &workspaces.DescribeWorkspaceBundlesInput{ + BundleIds: []string{bundleID.(string)}, }) if err != nil { return sdkdiag.AppendErrorf(diags, "reading WorkSpaces Workspace Bundle (%s): %s", bundleID, err) @@ -95,11 +99,11 @@ func dataSourceWorkspaceBundleRead(ctx context.Context, d *schema.ResourceData, return sdkdiag.AppendErrorf(diags, "expected 1 result for WorkSpaces Workspace Bundle %q, found %d", bundleID, len(resp.Bundles)) } - bundle = resp.Bundles[0] - - if bundle == nil { + if len(resp.Bundles) <= 0 { return sdkdiag.AppendErrorf(diags, "no WorkSpaces Workspace Bundle with ID %q found", bundleID) } + + bundle = resp.Bundles[0] } if name, ok := d.GetOk("name"); ok { @@ -112,26 +116,31 @@ func dataSourceWorkspaceBundleRead(ctx context.Context, d *schema.ResourceData, } name := name.(string) - err := conn.DescribeWorkspaceBundlesPagesWithContext(ctx, 
input, func(out *workspaces.DescribeWorkspaceBundlesOutput, lastPage bool) bool { + + paginator := workspaces.NewDescribeWorkspaceBundlesPaginator(conn, input, func(out *workspaces.DescribeWorkspaceBundlesPaginatorOptions) {}) + + entryNotFound := true + for paginator.HasMorePages() && entryNotFound { + out, err := paginator.NextPage(ctx) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "reading WorkSpaces Workspace Bundle (%s): %s", id, err) + } + for _, b := range out.Bundles { - if aws.StringValue(b.Name) == name { + if aws.ToString(b.Name) == name { bundle = b - return true + entryNotFound = false } } - - return !lastPage - }) - if err != nil { - return sdkdiag.AppendErrorf(diags, "reading WorkSpaces Workspace Bundle (%s): %s", id, err) } - if bundle == nil { + if entryNotFound { return sdkdiag.AppendErrorf(diags, "no WorkSpaces Workspace Bundle with name %q found", name) } } - d.SetId(aws.StringValue(bundle.BundleId)) + d.SetId(aws.ToString(bundle.BundleId)) d.Set("bundle_id", bundle.BundleId) d.Set("description", bundle.Description) d.Set("name", bundle.Name) @@ -140,7 +149,7 @@ func dataSourceWorkspaceBundleRead(ctx context.Context, d *schema.ResourceData, computeType := make([]map[string]interface{}, 1) if bundle.ComputeType != nil { computeType[0] = map[string]interface{}{ - "name": aws.StringValue(bundle.ComputeType.Name), + "name": string(bundle.ComputeType.Name), } } if err := d.Set("compute_type", computeType); err != nil { @@ -150,7 +159,7 @@ func dataSourceWorkspaceBundleRead(ctx context.Context, d *schema.ResourceData, rootStorage := make([]map[string]interface{}, 1) if bundle.RootStorage != nil { rootStorage[0] = map[string]interface{}{ - "capacity": aws.StringValue(bundle.RootStorage.Capacity), + "capacity": aws.ToString(bundle.RootStorage.Capacity), } } if err := d.Set("root_storage", rootStorage); err != nil { @@ -160,7 +169,7 @@ func dataSourceWorkspaceBundleRead(ctx context.Context, d *schema.ResourceData, userStorage := 
make([]map[string]interface{}, 1) if bundle.UserStorage != nil { userStorage[0] = map[string]interface{}{ - "capacity": aws.StringValue(bundle.UserStorage.Capacity), + "capacity": aws.ToString(bundle.UserStorage.Capacity), } } if err := d.Set("user_storage", userStorage); err != nil { diff --git a/internal/service/workspaces/bundle_data_source_test.go b/internal/service/workspaces/bundle_data_source_test.go index e01fa3a4081..8b054c12b57 100644 --- a/internal/service/workspaces/bundle_data_source_test.go +++ b/internal/service/workspaces/bundle_data_source_test.go @@ -1,12 +1,16 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( "fmt" "os" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) @@ -17,7 +21,7 @@ func testAccWorkspaceBundleDataSource_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -45,7 +49,7 @@ func testAccWorkspaceBundleDataSource_byOwnerName(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -71,7 +75,7 @@ func testAccWorkspaceBundleDataSource_bundleIDAndNameConflict(t *testing.T) { ctx := acctest.Context(t) resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, 
workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -92,7 +96,7 @@ func testAccWorkspaceBundleDataSource_privateOwner(t *testing.T) { acctest.PreCheck(ctx, t) testAccBundlePreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { diff --git a/internal/service/workspaces/directory.go b/internal/service/workspaces/directory.go index ea51475eccd..5a9c8f164b3 100644 --- a/internal/service/workspaces/directory.go +++ b/internal/service/workspaces/directory.go @@ -1,16 +1,20 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -127,42 +131,42 @@ func ResourceDirectory() *schema.Resource { "device_type_android": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: 
validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_chromeos": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_ios": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_linux": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_osx": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_web": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_windows": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, "device_type_zeroclient": { Type: schema.TypeString, Optional: true, - ValidateFunc: validation.StringInSlice(workspaces.AccessPropertyValue_Values(), false), + ValidateFunc: 
validation.StringInSlice(flattenAccessPropertyEnumValues(types.AccessPropertyValue("").Values()), false), }, }, }, @@ -212,29 +216,25 @@ func ResourceDirectory() *schema.Resource { func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) directoryID := d.Get("directory_id").(string) input := &workspaces.RegisterWorkspaceDirectoryInput{ DirectoryId: aws.String(directoryID), EnableSelfService: aws.Bool(false), // this is handled separately below EnableWorkDocs: aws.Bool(false), - Tenancy: aws.String(workspaces.TenancyShared), - Tags: GetTagsIn(ctx), + Tenancy: types.TenancyShared, + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("subnet_ids"); ok { - input.SubnetIds = flex.ExpandStringSet(v.(*schema.Set)) + input.SubnetIds = flex.ExpandStringValueSet(v.(*schema.Set)) } - log.Printf("[DEBUG] Registering WorkSpaces Directory: %s", input) - _, err := tfresource.RetryWhenAWSErrCodeEquals(ctx, DirectoryRegisterInvalidResourceStateTimeout, + _, err := tfresource.RetryWhenIsA[*types.InvalidResourceStateException](ctx, DirectoryRegisterInvalidResourceStateTimeout, func() (interface{}, error) { - return conn.RegisterWorkspaceDirectoryWithContext(ctx, input) - }, - // "error registering WorkSpaces Directory (d-000000000000): InvalidResourceStateException: The specified directory is not in a valid state. Confirm that the directory has a status of Active, and try again." 
- workspaces.ErrCodeInvalidResourceStateException, - ) + return conn.RegisterWorkspaceDirectory(ctx, input) + }) if err != nil { return sdkdiag.AppendErrorf(diags, "registering WorkSpaces Directory (%s): %s", directoryID, err) @@ -250,7 +250,7 @@ func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta i if v, ok := d.GetOk("self_service_permissions"); ok { log.Printf("[DEBUG] Modifying WorkSpaces Directory (%s) self-service permissions", directoryID) - _, err := conn.ModifySelfservicePermissionsWithContext(ctx, &workspaces.ModifySelfservicePermissionsInput{ + _, err := conn.ModifySelfservicePermissions(ctx, &workspaces.ModifySelfservicePermissionsInput{ ResourceId: aws.String(directoryID), SelfservicePermissions: ExpandSelfServicePermissions(v.([]interface{})), }) @@ -262,7 +262,7 @@ func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta i if v, ok := d.GetOk("workspace_access_properties"); ok { log.Printf("[DEBUG] Modifying WorkSpaces Directory (%s) access properties", directoryID) - _, err := conn.ModifyWorkspaceAccessPropertiesWithContext(ctx, &workspaces.ModifyWorkspaceAccessPropertiesInput{ + _, err := conn.ModifyWorkspaceAccessProperties(ctx, &workspaces.ModifyWorkspaceAccessPropertiesInput{ ResourceId: aws.String(directoryID), WorkspaceAccessProperties: ExpandWorkspaceAccessProperties(v.([]interface{})), }) @@ -274,7 +274,7 @@ func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta i if v, ok := d.GetOk("workspace_creation_properties"); ok { log.Printf("[DEBUG] Modifying WorkSpaces Directory (%s) creation properties", directoryID) - _, err := conn.ModifyWorkspaceCreationPropertiesWithContext(ctx, &workspaces.ModifyWorkspaceCreationPropertiesInput{ + _, err := conn.ModifyWorkspaceCreationProperties(ctx, &workspaces.ModifyWorkspaceCreationPropertiesInput{ ResourceId: aws.String(directoryID), WorkspaceCreationProperties: ExpandWorkspaceCreationProperties(v.([]interface{})), }) @@ -287,9 
+287,9 @@ func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta i if v, ok := d.GetOk("ip_group_ids"); ok && v.(*schema.Set).Len() > 0 { ipGroupIds := v.(*schema.Set) log.Printf("[DEBUG] Associating WorkSpaces Directory (%s) with IP Groups %s", directoryID, ipGroupIds.List()) - _, err := conn.AssociateIpGroupsWithContext(ctx, &workspaces.AssociateIpGroupsInput{ + _, err := conn.AssociateIpGroups(ctx, &workspaces.AssociateIpGroupsInput{ DirectoryId: aws.String(directoryID), - GroupIds: flex.ExpandStringSet(ipGroupIds), + GroupIds: flex.ExpandStringValueSet(ipGroupIds), }) if err != nil { return sdkdiag.AppendErrorf(diags, "associating WorkSpaces Directory (%s) IP Groups: %s", directoryID, err) @@ -302,7 +302,7 @@ func resourceDirectoryCreate(ctx context.Context, d *schema.ResourceData, meta i func resourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) directory, err := FindDirectoryByID(ctx, conn, d.Id()) @@ -317,7 +317,7 @@ func resourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta int } d.Set("directory_id", directory.DirectoryId) - if err := d.Set("subnet_ids", flex.FlattenStringSet(directory.SubnetIds)); err != nil { + if err := d.Set("subnet_ids", flex.FlattenStringValueSet(directory.SubnetIds)); err != nil { return sdkdiag.AppendErrorf(diags, "setting subnet_ids: %s", err) } d.Set("workspace_security_group_id", directory.WorkspaceSecurityGroupId) @@ -339,11 +339,11 @@ func resourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta int return sdkdiag.AppendErrorf(diags, "setting workspace_creation_properties: %s", err) } - if err := d.Set("ip_group_ids", flex.FlattenStringSet(directory.IpGroupIds)); err != nil { + if err := d.Set("ip_group_ids", flex.FlattenStringValueSet(directory.IpGroupIds)); err != nil { return 
sdkdiag.AppendErrorf(diags, "setting ip_group_ids: %s", err) } - if err := d.Set("dns_ip_addresses", flex.FlattenStringSet(directory.DnsIpAddresses)); err != nil { + if err := d.Set("dns_ip_addresses", flex.FlattenStringValueSet(directory.DnsIpAddresses)); err != nil { return sdkdiag.AppendErrorf(diags, "setting dns_ip_addresses: %s", err) } @@ -352,13 +352,13 @@ func resourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta int func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) if d.HasChange("self_service_permissions") { log.Printf("[DEBUG] Modifying WorkSpaces Directory (%s) self-service permissions", d.Id()) permissions := d.Get("self_service_permissions").([]interface{}) - _, err := conn.ModifySelfservicePermissionsWithContext(ctx, &workspaces.ModifySelfservicePermissionsInput{ + _, err := conn.ModifySelfservicePermissions(ctx, &workspaces.ModifySelfservicePermissionsInput{ ResourceId: aws.String(d.Id()), SelfservicePermissions: ExpandSelfServicePermissions(permissions), }) @@ -372,7 +372,7 @@ func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta i log.Printf("[DEBUG] Modifying WorkSpaces Directory (%s) access properties", d.Id()) properties := d.Get("workspace_access_properties").([]interface{}) - _, err := conn.ModifyWorkspaceAccessPropertiesWithContext(ctx, &workspaces.ModifyWorkspaceAccessPropertiesInput{ + _, err := conn.ModifyWorkspaceAccessProperties(ctx, &workspaces.ModifyWorkspaceAccessPropertiesInput{ ResourceId: aws.String(d.Id()), WorkspaceAccessProperties: ExpandWorkspaceAccessProperties(properties), }) @@ -386,7 +386,7 @@ func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta i log.Printf("[DEBUG] Modifying WorkSpaces Directory (%s) creation properties", d.Id()) properties := 
d.Get("workspace_creation_properties").([]interface{}) - _, err := conn.ModifyWorkspaceCreationPropertiesWithContext(ctx, &workspaces.ModifyWorkspaceCreationPropertiesInput{ + _, err := conn.ModifyWorkspaceCreationProperties(ctx, &workspaces.ModifyWorkspaceCreationPropertiesInput{ ResourceId: aws.String(d.Id()), WorkspaceCreationProperties: ExpandWorkspaceCreationProperties(properties), }) @@ -404,18 +404,18 @@ func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta i removed := old.Difference(new) log.Printf("[DEBUG] Associating WorkSpaces Directory (%s) with IP Groups %s", d.Id(), added.GoString()) - _, err := conn.AssociateIpGroupsWithContext(ctx, &workspaces.AssociateIpGroupsInput{ + _, err := conn.AssociateIpGroups(ctx, &workspaces.AssociateIpGroupsInput{ DirectoryId: aws.String(d.Id()), - GroupIds: flex.ExpandStringSet(added), + GroupIds: flex.ExpandStringValueSet(added), }) if err != nil { return sdkdiag.AppendErrorf(diags, "associating WorkSpaces Directory (%s) IP Groups: %s", d.Id(), err) } log.Printf("[DEBUG] Disassociating WorkSpaces Directory (%s) with IP Groups %s", d.Id(), removed.GoString()) - _, err = conn.DisassociateIpGroupsWithContext(ctx, &workspaces.DisassociateIpGroupsInput{ + _, err = conn.DisassociateIpGroups(ctx, &workspaces.DisassociateIpGroupsInput{ DirectoryId: aws.String(d.Id()), - GroupIds: flex.ExpandStringSet(removed), + GroupIds: flex.ExpandStringValueSet(removed), }) if err != nil { return sdkdiag.AppendErrorf(diags, "disassociating WorkSpaces Directory (%s) IP Groups: %s", d.Id(), err) @@ -429,20 +429,17 @@ func resourceDirectoryUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceDirectoryDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) log.Printf("[DEBUG] Deregistering WorkSpaces Directory: %s", d.Id()) - _, err 
:= tfresource.RetryWhenAWSErrCodeEquals(ctx, DirectoryDeregisterInvalidResourceStateTimeout, + _, err := tfresource.RetryWhenIsA[*types.InvalidResourceStateException](ctx, DirectoryDeregisterInvalidResourceStateTimeout, func() (interface{}, error) { - return conn.DeregisterWorkspaceDirectoryWithContext(ctx, &workspaces.DeregisterWorkspaceDirectoryInput{ + return conn.DeregisterWorkspaceDirectory(ctx, &workspaces.DeregisterWorkspaceDirectoryInput{ DirectoryId: aws.String(d.Id()), }) - }, - // "error deregistering WorkSpaces Directory (d-000000000000): InvalidResourceStateException: The specified directory is not in a valid state. Confirm that the directory has a status of Active, and try again." - workspaces.ErrCodeInvalidResourceStateException, - ) + }) - if tfawserr.ErrCodeEquals(err, workspaces.ErrCodeResourceNotFoundException) { + if errs.IsA[*types.ResourceNotFoundException](err) { return diags } @@ -459,100 +456,100 @@ func resourceDirectoryDelete(ctx context.Context, d *schema.ResourceData, meta i return diags } -func ExpandWorkspaceAccessProperties(properties []interface{}) *workspaces.WorkspaceAccessProperties { +func ExpandWorkspaceAccessProperties(properties []interface{}) *types.WorkspaceAccessProperties { if len(properties) == 0 || properties[0] == nil { return nil } - result := &workspaces.WorkspaceAccessProperties{} + result := &types.WorkspaceAccessProperties{} p := properties[0].(map[string]interface{}) if p["device_type_android"].(string) != "" { - result.DeviceTypeAndroid = aws.String(p["device_type_android"].(string)) + result.DeviceTypeAndroid = types.AccessPropertyValue(p["device_type_android"].(string)) } if p["device_type_chromeos"].(string) != "" { - result.DeviceTypeChromeOs = aws.String(p["device_type_chromeos"].(string)) + result.DeviceTypeChromeOs = types.AccessPropertyValue(p["device_type_chromeos"].(string)) } if p["device_type_ios"].(string) != "" { - result.DeviceTypeIos = aws.String(p["device_type_ios"].(string)) + 
result.DeviceTypeIos = types.AccessPropertyValue(p["device_type_ios"].(string)) } if p["device_type_linux"].(string) != "" { - result.DeviceTypeLinux = aws.String(p["device_type_linux"].(string)) + result.DeviceTypeLinux = types.AccessPropertyValue(p["device_type_linux"].(string)) } if p["device_type_osx"].(string) != "" { - result.DeviceTypeOsx = aws.String(p["device_type_osx"].(string)) + result.DeviceTypeOsx = types.AccessPropertyValue(p["device_type_osx"].(string)) } if p["device_type_web"].(string) != "" { - result.DeviceTypeWeb = aws.String(p["device_type_web"].(string)) + result.DeviceTypeWeb = types.AccessPropertyValue(p["device_type_web"].(string)) } if p["device_type_windows"].(string) != "" { - result.DeviceTypeWindows = aws.String(p["device_type_windows"].(string)) + result.DeviceTypeWindows = types.AccessPropertyValue(p["device_type_windows"].(string)) } if p["device_type_zeroclient"].(string) != "" { - result.DeviceTypeZeroClient = aws.String(p["device_type_zeroclient"].(string)) + result.DeviceTypeZeroClient = types.AccessPropertyValue(p["device_type_zeroclient"].(string)) } return result } -func ExpandSelfServicePermissions(permissions []interface{}) *workspaces.SelfservicePermissions { +func ExpandSelfServicePermissions(permissions []interface{}) *types.SelfservicePermissions { if len(permissions) == 0 || permissions[0] == nil { return nil } - result := &workspaces.SelfservicePermissions{} + result := &types.SelfservicePermissions{} p := permissions[0].(map[string]interface{}) if p["change_compute_type"].(bool) { - result.ChangeComputeType = aws.String(workspaces.ReconnectEnumEnabled) + result.ChangeComputeType = types.ReconnectEnumEnabled } else { - result.ChangeComputeType = aws.String(workspaces.ReconnectEnumDisabled) + result.ChangeComputeType = types.ReconnectEnumDisabled } if p["increase_volume_size"].(bool) { - result.IncreaseVolumeSize = aws.String(workspaces.ReconnectEnumEnabled) + result.IncreaseVolumeSize = types.ReconnectEnumEnabled } 
else { - result.IncreaseVolumeSize = aws.String(workspaces.ReconnectEnumDisabled) + result.IncreaseVolumeSize = types.ReconnectEnumDisabled } if p["rebuild_workspace"].(bool) { - result.RebuildWorkspace = aws.String(workspaces.ReconnectEnumEnabled) + result.RebuildWorkspace = types.ReconnectEnumEnabled } else { - result.RebuildWorkspace = aws.String(workspaces.ReconnectEnumDisabled) + result.RebuildWorkspace = types.ReconnectEnumDisabled } if p["restart_workspace"].(bool) { - result.RestartWorkspace = aws.String(workspaces.ReconnectEnumEnabled) + result.RestartWorkspace = types.ReconnectEnumEnabled } else { - result.RestartWorkspace = aws.String(workspaces.ReconnectEnumDisabled) + result.RestartWorkspace = types.ReconnectEnumDisabled } if p["switch_running_mode"].(bool) { - result.SwitchRunningMode = aws.String(workspaces.ReconnectEnumEnabled) + result.SwitchRunningMode = types.ReconnectEnumEnabled } else { - result.SwitchRunningMode = aws.String(workspaces.ReconnectEnumDisabled) + result.SwitchRunningMode = types.ReconnectEnumDisabled } return result } -func ExpandWorkspaceCreationProperties(properties []interface{}) *workspaces.WorkspaceCreationProperties { +func ExpandWorkspaceCreationProperties(properties []interface{}) *types.WorkspaceCreationProperties { if len(properties) == 0 || properties[0] == nil { return nil } p := properties[0].(map[string]interface{}) - result := &workspaces.WorkspaceCreationProperties{ + result := &types.WorkspaceCreationProperties{ EnableInternetAccess: aws.Bool(p["enable_internet_access"].(bool)), EnableMaintenanceMode: aws.Bool(p["enable_maintenance_mode"].(bool)), UserEnabledAsLocalAdministrator: aws.Bool(p["user_enabled_as_local_administrator"].(bool)), @@ -569,72 +566,72 @@ func ExpandWorkspaceCreationProperties(properties []interface{}) *workspaces.Wor return result } -func FlattenWorkspaceAccessProperties(properties *workspaces.WorkspaceAccessProperties) []interface{} { +func FlattenWorkspaceAccessProperties(properties 
*types.WorkspaceAccessProperties) []interface{} { if properties == nil { return []interface{}{} } return []interface{}{ map[string]interface{}{ - "device_type_android": aws.StringValue(properties.DeviceTypeAndroid), - "device_type_chromeos": aws.StringValue(properties.DeviceTypeChromeOs), - "device_type_ios": aws.StringValue(properties.DeviceTypeIos), - "device_type_linux": aws.StringValue(properties.DeviceTypeLinux), - "device_type_osx": aws.StringValue(properties.DeviceTypeOsx), - "device_type_web": aws.StringValue(properties.DeviceTypeWeb), - "device_type_windows": aws.StringValue(properties.DeviceTypeWindows), - "device_type_zeroclient": aws.StringValue(properties.DeviceTypeZeroClient), + "device_type_android": string(properties.DeviceTypeAndroid), + "device_type_chromeos": string(properties.DeviceTypeChromeOs), + "device_type_ios": string(properties.DeviceTypeIos), + "device_type_linux": string(properties.DeviceTypeLinux), + "device_type_osx": string(properties.DeviceTypeOsx), + "device_type_web": string(properties.DeviceTypeWeb), + "device_type_windows": string(properties.DeviceTypeWindows), + "device_type_zeroclient": string(properties.DeviceTypeZeroClient), }, } } -func FlattenSelfServicePermissions(permissions *workspaces.SelfservicePermissions) []interface{} { +func FlattenSelfServicePermissions(permissions *types.SelfservicePermissions) []interface{} { if permissions == nil { return []interface{}{} } result := map[string]interface{}{} - switch *permissions.ChangeComputeType { - case workspaces.ReconnectEnumEnabled: + switch permissions.ChangeComputeType { + case types.ReconnectEnumEnabled: result["change_compute_type"] = true - case workspaces.ReconnectEnumDisabled: + case types.ReconnectEnumDisabled: result["change_compute_type"] = false default: result["change_compute_type"] = nil } - switch *permissions.IncreaseVolumeSize { - case workspaces.ReconnectEnumEnabled: + switch permissions.IncreaseVolumeSize { + case types.ReconnectEnumEnabled: 
result["increase_volume_size"] = true - case workspaces.ReconnectEnumDisabled: + case types.ReconnectEnumDisabled: result["increase_volume_size"] = false default: result["increase_volume_size"] = nil } - switch *permissions.RebuildWorkspace { - case workspaces.ReconnectEnumEnabled: + switch permissions.RebuildWorkspace { + case types.ReconnectEnumEnabled: result["rebuild_workspace"] = true - case workspaces.ReconnectEnumDisabled: + case types.ReconnectEnumDisabled: result["rebuild_workspace"] = false default: result["rebuild_workspace"] = nil } - switch *permissions.RestartWorkspace { - case workspaces.ReconnectEnumEnabled: + switch permissions.RestartWorkspace { + case types.ReconnectEnumEnabled: result["restart_workspace"] = true - case workspaces.ReconnectEnumDisabled: + case types.ReconnectEnumDisabled: result["restart_workspace"] = false default: result["restart_workspace"] = nil } - switch *permissions.SwitchRunningMode { - case workspaces.ReconnectEnumEnabled: + switch permissions.SwitchRunningMode { + case types.ReconnectEnumEnabled: result["switch_running_mode"] = true - case workspaces.ReconnectEnumDisabled: + case types.ReconnectEnumDisabled: result["switch_running_mode"] = false default: result["switch_running_mode"] = nil @@ -643,18 +640,28 @@ func FlattenSelfServicePermissions(permissions *workspaces.SelfservicePermission return []interface{}{result} } -func FlattenWorkspaceCreationProperties(properties *workspaces.DefaultWorkspaceCreationProperties) []interface{} { +func FlattenWorkspaceCreationProperties(properties *types.DefaultWorkspaceCreationProperties) []interface{} { if properties == nil { return []interface{}{} } return []interface{}{ map[string]interface{}{ - "custom_security_group_id": aws.StringValue(properties.CustomSecurityGroupId), - "default_ou": aws.StringValue(properties.DefaultOu), - "enable_internet_access": aws.BoolValue(properties.EnableInternetAccess), - "enable_maintenance_mode": aws.BoolValue(properties.EnableMaintenanceMode), 
- "user_enabled_as_local_administrator": aws.BoolValue(properties.UserEnabledAsLocalAdministrator), + "custom_security_group_id": aws.ToString(properties.CustomSecurityGroupId), + "default_ou": aws.ToString(properties.DefaultOu), + "enable_internet_access": aws.ToBool(properties.EnableInternetAccess), + "enable_maintenance_mode": aws.ToBool(properties.EnableMaintenanceMode), + "user_enabled_as_local_administrator": aws.ToBool(properties.UserEnabledAsLocalAdministrator), }, } } + +func flattenAccessPropertyEnumValues(t []types.AccessPropertyValue) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} diff --git a/internal/service/workspaces/directory_data_source.go b/internal/service/workspaces/directory_data_source.go index 0a799cbdd4a..e25a65221ad 100644 --- a/internal/service/workspaces/directory_data_source.go +++ b/internal/service/workspaces/directory_data_source.go @@ -1,9 +1,12 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -168,7 +171,7 @@ func DataSourceDirectory() *schema.Resource { func dataSourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig directoryID := d.Get("directory_id").(string) @@ -177,13 +180,13 @@ func dataSourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta i if err != nil { return sdkdiag.AppendErrorf(diags, "getting WorkSpaces Directory (%s): %s", directoryID, err) } - if state == 
workspaces.WorkspaceDirectoryStateDeregistered { + if state == string(types.WorkspaceDirectoryStateDeregistered) { return sdkdiag.AppendErrorf(diags, "WorkSpaces directory %s was not found", directoryID) } d.SetId(directoryID) - directory := rawOutput.(*workspaces.WorkspaceDirectory) + directory := rawOutput.(*types.WorkspaceDirectory) d.Set("directory_id", directory.DirectoryId) d.Set("workspace_security_group_id", directory.WorkspaceSecurityGroupId) d.Set("iam_role_id", directory.IamRoleId) @@ -192,7 +195,7 @@ func dataSourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta i d.Set("directory_type", directory.DirectoryType) d.Set("alias", directory.Alias) - if err := d.Set("subnet_ids", flex.FlattenStringSet(directory.SubnetIds)); err != nil { + if err := d.Set("subnet_ids", flex.FlattenStringValueSet(directory.SubnetIds)); err != nil { return sdkdiag.AppendErrorf(diags, "setting subnet_ids: %s", err) } @@ -208,15 +211,15 @@ func dataSourceDirectoryRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "setting workspace_creation_properties: %s", err) } - if err := d.Set("ip_group_ids", flex.FlattenStringSet(directory.IpGroupIds)); err != nil { + if err := d.Set("ip_group_ids", flex.FlattenStringValueSet(directory.IpGroupIds)); err != nil { return sdkdiag.AppendErrorf(diags, "setting ip_group_ids: %s", err) } - if err := d.Set("dns_ip_addresses", flex.FlattenStringSet(directory.DnsIpAddresses)); err != nil { + if err := d.Set("dns_ip_addresses", flex.FlattenStringValueSet(directory.DnsIpAddresses)); err != nil { return sdkdiag.AppendErrorf(diags, "setting dns_ip_addresses: %s", err) } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags: %s", err) } diff --git a/internal/service/workspaces/directory_data_source_test.go b/internal/service/workspaces/directory_data_source_test.go index d12f7d6d19d..411eb23a665 100644 
--- a/internal/service/workspaces/directory_data_source_test.go +++ b/internal/service/workspaces/directory_data_source_test.go @@ -1,10 +1,14 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( "fmt" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -25,7 +29,7 @@ func testAccDirectoryDataSource_basic(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { diff --git a/internal/service/workspaces/directory_test.go b/internal/service/workspaces/directory_test.go index 4612546a662..f196394ba6e 100644 --- a/internal/service/workspaces/directory_test.go +++ b/internal/service/workspaces/directory_test.go @@ -1,13 +1,18 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( "context" "fmt" "reflect" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -19,7 +24,7 @@ import ( func testAccDirectory_basic(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -35,7 +40,7 @@ func testAccDirectory_basic(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -46,7 +51,7 @@ func testAccDirectory_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "alias", directoryResourceName, "alias"), resource.TestCheckResourceAttrPair(resourceName, "directory_id", directoryResourceName, "id"), resource.TestCheckResourceAttrPair(resourceName, "directory_name", directoryResourceName, "name"), - resource.TestCheckResourceAttr(resourceName, "directory_type", workspaces.WorkspaceDirectoryTypeSimpleAd), + resource.TestCheckResourceAttr(resourceName, "directory_type", string(types.WorkspaceDirectoryTypeSimpleAd)), resource.TestCheckResourceAttr(resourceName, "dns_ip_addresses.#", "2"), resource.TestCheckResourceAttrPair(resourceName, "iam_role_id", iamRoleDataSourceName, "arn"), 
resource.TestCheckResourceAttr(resourceName, "ip_group_ids.#", "0"), @@ -89,7 +94,7 @@ func testAccDirectory_basic(t *testing.T) { func testAccDirectory_disappears(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -103,7 +108,7 @@ func testAccDirectory_disappears(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -121,7 +126,7 @@ func testAccDirectory_disappears(t *testing.T) { func testAccDirectory_subnetIDs(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -135,7 +140,7 @@ func testAccDirectory_subnetIDs(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -157,7 +162,7 @@ func testAccDirectory_subnetIDs(t *testing.T) { func testAccDirectory_tags(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -171,7 +176,7 @@ func testAccDirectory_tags(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) 
acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -211,7 +216,7 @@ func testAccDirectory_tags(t *testing.T) { func testAccDirectory_selfServicePermissions(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -225,7 +230,7 @@ func testAccDirectory_selfServicePermissions(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -247,7 +252,7 @@ func testAccDirectory_selfServicePermissions(t *testing.T) { func testAccDirectory_workspaceAccessProperties(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -261,7 +266,7 @@ func testAccDirectory_workspaceAccessProperties(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -286,7 +291,7 @@ func testAccDirectory_workspaceAccessProperties(t *testing.T) 
{ func testAccDirectory_workspaceCreationProperties(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -301,7 +306,7 @@ func testAccDirectory_workspaceCreationProperties(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -323,7 +328,7 @@ func testAccDirectory_workspaceCreationProperties(t *testing.T) { func testAccDirectory_workspaceCreationProperties_customSecurityGroupId_defaultOu(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.main" @@ -338,7 +343,7 @@ func testAccDirectory_workspaceCreationProperties_customSecurityGroupId_defaultO acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -376,7 +381,7 @@ func testAccDirectory_workspaceCreationProperties_customSecurityGroupId_defaultO func testAccDirectory_ipGroupIDs(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.WorkspaceDirectory + var v types.WorkspaceDirectory rName := sdkacctest.RandString(8) resourceName := "aws_workspaces_directory.test" @@ -385,7 +390,7 @@ func testAccDirectory_ipGroupIDs(t *testing.T) { resource.Test(t, 
resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckDirectoryDestroy(ctx), Steps: []resource.TestStep{ @@ -425,7 +430,7 @@ func TestExpandSelfServicePermissions(t *testing.T) { cases := []struct { input []interface{} - expected *workspaces.SelfservicePermissions + expected *types.SelfservicePermissions }{ // Empty { @@ -443,12 +448,12 @@ func TestExpandSelfServicePermissions(t *testing.T) { "switch_running_mode": true, }, }, - expected: &workspaces.SelfservicePermissions{ - ChangeComputeType: aws.String(workspaces.ReconnectEnumDisabled), - IncreaseVolumeSize: aws.String(workspaces.ReconnectEnumDisabled), - RebuildWorkspace: aws.String(workspaces.ReconnectEnumEnabled), - RestartWorkspace: aws.String(workspaces.ReconnectEnumEnabled), - SwitchRunningMode: aws.String(workspaces.ReconnectEnumEnabled), + expected: &types.SelfservicePermissions{ + ChangeComputeType: types.ReconnectEnumDisabled, + IncreaseVolumeSize: types.ReconnectEnumDisabled, + RebuildWorkspace: types.ReconnectEnumEnabled, + RestartWorkspace: types.ReconnectEnumEnabled, + SwitchRunningMode: types.ReconnectEnumEnabled, }, }, } @@ -465,7 +470,7 @@ func TestFlattenSelfServicePermissions(t *testing.T) { t.Parallel() cases := []struct { - input *workspaces.SelfservicePermissions + input *types.SelfservicePermissions expected []interface{} }{ // Empty @@ -475,12 +480,12 @@ func TestFlattenSelfServicePermissions(t *testing.T) { }, // Full { - input: &workspaces.SelfservicePermissions{ - ChangeComputeType: aws.String(workspaces.ReconnectEnumDisabled), - IncreaseVolumeSize: aws.String(workspaces.ReconnectEnumDisabled), - RebuildWorkspace: aws.String(workspaces.ReconnectEnumEnabled), - RestartWorkspace: 
aws.String(workspaces.ReconnectEnumEnabled), - SwitchRunningMode: aws.String(workspaces.ReconnectEnumEnabled), + input: &types.SelfservicePermissions{ + ChangeComputeType: types.ReconnectEnumDisabled, + IncreaseVolumeSize: types.ReconnectEnumDisabled, + RebuildWorkspace: types.ReconnectEnumEnabled, + RestartWorkspace: types.ReconnectEnumEnabled, + SwitchRunningMode: types.ReconnectEnumEnabled, }, expected: []interface{}{ map[string]interface{}{ @@ -507,7 +512,7 @@ func TestExpandWorkspaceAccessProperties(t *testing.T) { cases := []struct { input []interface{} - expected *workspaces.WorkspaceAccessProperties + expected *types.WorkspaceAccessProperties }{ // Empty { @@ -528,15 +533,15 @@ func TestExpandWorkspaceAccessProperties(t *testing.T) { "device_type_zeroclient": "DENY", }, }, - expected: &workspaces.WorkspaceAccessProperties{ - DeviceTypeAndroid: aws.String("ALLOW"), - DeviceTypeChromeOs: aws.String("ALLOW"), - DeviceTypeIos: aws.String("ALLOW"), - DeviceTypeLinux: aws.String("DENY"), - DeviceTypeOsx: aws.String("ALLOW"), - DeviceTypeWeb: aws.String("DENY"), - DeviceTypeWindows: aws.String("DENY"), - DeviceTypeZeroClient: aws.String("DENY"), + expected: &types.WorkspaceAccessProperties{ + DeviceTypeAndroid: types.AccessPropertyValue("ALLOW"), + DeviceTypeChromeOs: types.AccessPropertyValue("ALLOW"), + DeviceTypeIos: types.AccessPropertyValue("ALLOW"), + DeviceTypeLinux: types.AccessPropertyValue("DENY"), + DeviceTypeOsx: types.AccessPropertyValue("ALLOW"), + DeviceTypeWeb: types.AccessPropertyValue("DENY"), + DeviceTypeWindows: types.AccessPropertyValue("DENY"), + DeviceTypeZeroClient: types.AccessPropertyValue("DENY"), }, }, } @@ -553,7 +558,7 @@ func TestFlattenWorkspaceAccessProperties(t *testing.T) { t.Parallel() cases := []struct { - input *workspaces.WorkspaceAccessProperties + input *types.WorkspaceAccessProperties expected []interface{} }{ // Empty @@ -563,15 +568,15 @@ func TestFlattenWorkspaceAccessProperties(t *testing.T) { }, // Full { - input: 
&workspaces.WorkspaceAccessProperties{ - DeviceTypeAndroid: aws.String("ALLOW"), - DeviceTypeChromeOs: aws.String("ALLOW"), - DeviceTypeIos: aws.String("ALLOW"), - DeviceTypeLinux: aws.String("DENY"), - DeviceTypeOsx: aws.String("ALLOW"), - DeviceTypeWeb: aws.String("DENY"), - DeviceTypeWindows: aws.String("DENY"), - DeviceTypeZeroClient: aws.String("DENY"), + input: &types.WorkspaceAccessProperties{ + DeviceTypeAndroid: types.AccessPropertyValue("ALLOW"), + DeviceTypeChromeOs: types.AccessPropertyValue("ALLOW"), + DeviceTypeIos: types.AccessPropertyValue("ALLOW"), + DeviceTypeLinux: types.AccessPropertyValue("DENY"), + DeviceTypeOsx: types.AccessPropertyValue("ALLOW"), + DeviceTypeWeb: types.AccessPropertyValue("DENY"), + DeviceTypeWindows: types.AccessPropertyValue("DENY"), + DeviceTypeZeroClient: types.AccessPropertyValue("DENY"), }, expected: []interface{}{ map[string]interface{}{ @@ -601,7 +606,7 @@ func TestExpandWorkspaceCreationProperties(t *testing.T) { cases := []struct { input []interface{} - expected *workspaces.WorkspaceCreationProperties + expected *types.WorkspaceCreationProperties }{ // Empty { @@ -619,7 +624,7 @@ func TestExpandWorkspaceCreationProperties(t *testing.T) { "user_enabled_as_local_administrator": true, }, }, - expected: &workspaces.WorkspaceCreationProperties{ + expected: &types.WorkspaceCreationProperties{ CustomSecurityGroupId: aws.String("sg-123456789012"), DefaultOu: aws.String("OU=AWS,DC=Workgroup,DC=Example,DC=com"), EnableInternetAccess: aws.Bool(true), @@ -638,7 +643,7 @@ func TestExpandWorkspaceCreationProperties(t *testing.T) { "user_enabled_as_local_administrator": true, }, }, - expected: &workspaces.WorkspaceCreationProperties{ + expected: &types.WorkspaceCreationProperties{ CustomSecurityGroupId: nil, DefaultOu: nil, EnableInternetAccess: aws.Bool(true), @@ -660,7 +665,7 @@ func TestFlattenWorkspaceCreationProperties(t *testing.T) { t.Parallel() cases := []struct { - input *workspaces.DefaultWorkspaceCreationProperties + 
input *types.DefaultWorkspaceCreationProperties expected []interface{} }{ // Empty @@ -670,7 +675,7 @@ func TestFlattenWorkspaceCreationProperties(t *testing.T) { }, // Full { - input: &workspaces.DefaultWorkspaceCreationProperties{ + input: &types.DefaultWorkspaceCreationProperties{ CustomSecurityGroupId: aws.String("sg-123456789012"), DefaultOu: aws.String("OU=AWS,DC=Workgroup,DC=Example,DC=com"), EnableInternetAccess: aws.Bool(true), @@ -699,7 +704,7 @@ func TestFlattenWorkspaceCreationProperties(t *testing.T) { func testAccCheckDirectoryDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_workspaces_directory" { @@ -723,7 +728,7 @@ func testAccCheckDirectoryDestroy(ctx context.Context) resource.TestCheckFunc { } } -func testAccCheckDirectoryExists(ctx context.Context, n string, v *workspaces.WorkspaceDirectory) resource.TestCheckFunc { +func testAccCheckDirectoryExists(ctx context.Context, n string, v *types.WorkspaceDirectory) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -734,7 +739,7 @@ func testAccCheckDirectoryExists(ctx context.Context, n string, v *workspaces.Wo return fmt.Errorf("No WorkSpaces Directory ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) output, err := tfworkspaces.FindDirectoryByID(ctx, conn, rs.Primary.ID) @@ -749,11 +754,11 @@ func testAccCheckDirectoryExists(ctx context.Context, n string, v *workspaces.Wo } func testAccPreCheckDirectory(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() + conn := 
acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) input := &workspaces.DescribeWorkspaceDirectoriesInput{} - _, err := conn.DescribeWorkspaceDirectoriesWithContext(ctx, input) + _, err := conn.DescribeWorkspaceDirectories(ctx, input) if acctest.PreCheckSkipError(err) { t.Skipf("skipping acceptance testing: %s", err) diff --git a/internal/service/workspaces/find.go b/internal/service/workspaces/find.go index 91027bdb338..2e2e891fc10 100644 --- a/internal/service/workspaces/find.go +++ b/internal/service/workspaces/find.go @@ -1,25 +1,29 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" + "reflect" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" ) -func FindDirectoryByID(ctx context.Context, conn *workspaces.WorkSpaces, id string) (*workspaces.WorkspaceDirectory, error) { +func FindDirectoryByID(ctx context.Context, conn *workspaces.Client, id string) (*types.WorkspaceDirectory, error) { input := &workspaces.DescribeWorkspaceDirectoriesInput{ - DirectoryIds: aws.StringSlice([]string{id}), + DirectoryIds: []string{id}, } - output, err := conn.DescribeWorkspaceDirectoriesWithContext(ctx, input) + output, err := conn.DescribeWorkspaceDirectories(ctx, input) if err != nil { return nil, err } - if output == nil || len(output.Directories) == 0 || output.Directories[0] == nil { + if output == nil || len(output.Directories) == 0 || reflect.DeepEqual(output.Directories[0], (types.WorkspaceDirectory{})) { return nil, &retry.NotFoundError{ Message: "Empty result", LastRequest: input, @@ -31,12 +35,12 @@ func FindDirectoryByID(ctx context.Context, conn *workspaces.WorkSpaces, id stri directory := output.Directories[0] - if state := aws.StringValue(directory.State); state == 
workspaces.WorkspaceDirectoryStateDeregistered { + if state := string(directory.State); state == string(types.WorkspaceDirectoryStateDeregistered) { return nil, &retry.NotFoundError{ Message: state, LastRequest: input, } } - return directory, nil + return &directory, nil } diff --git a/internal/service/workspaces/generate.go b/internal/service/workspaces/generate.go index f9ee768a5ac..623b9952b0f 100644 --- a/internal/service/workspaces/generate.go +++ b/internal/service/workspaces/generate.go @@ -1,5 +1,8 @@ -//go:generate go run ../../generate/listpages/main.go -ListOps=DescribeIpGroups -//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=DescribeTags -ListTagsInIDElem=ResourceId -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=CreateTags -TagInIDElem=ResourceId -UntagOp=DeleteTags -UpdateTags +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsOp=DescribeTags -ListTagsInIDElem=ResourceId -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=CreateTags -TagInIDElem=ResourceId -UntagOp=DeleteTags -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package workspaces diff --git a/internal/service/workspaces/image_data_source.go b/internal/service/workspaces/image_data_source.go index dfa9f2371b8..fe24acecc4a 100644 --- a/internal/service/workspaces/image_data_source.go +++ b/internal/service/workspaces/image_data_source.go @@ -1,10 +1,12 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -47,14 +49,14 @@ func DataSourceImage() *schema.Resource { func dataSourceImageRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) imageID := d.Get("image_id").(string) input := &workspaces.DescribeWorkspaceImagesInput{ - ImageIds: []*string{aws.String(imageID)}, + ImageIds: []string{imageID}, } - resp, err := conn.DescribeWorkspaceImagesWithContext(ctx, input) + resp, err := conn.DescribeWorkspaceImages(ctx, input) if err != nil { return sdkdiag.AppendErrorf(diags, "describe workspaces images: %s", err) } diff --git a/internal/service/workspaces/image_data_source_test.go b/internal/service/workspaces/image_data_source_test.go index e80257e05ae..3bb87a26c12 100644 --- a/internal/service/workspaces/image_data_source_test.go +++ b/internal/service/workspaces/image_data_source_test.go @@ -1,13 +1,19 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( "context" "fmt" "os" + "reflect" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -16,7 +22,7 @@ import ( func testAccImageDataSource_basic(t *testing.T) { ctx := acctest.Context(t) - var image workspaces.WorkspaceImage + var image types.WorkspaceImage imageID := os.Getenv("AWS_WORKSPACES_IMAGE_ID") dataSourceName := "data.aws_workspaces_image.test" @@ -25,7 +31,7 @@ func testAccImageDataSource_basic(t *testing.T) { acctest.PreCheck(ctx, t) testAccImagePreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -55,34 +61,34 @@ data aws_workspaces_image test { `, imageID) } -func testAccCheckImageExists(ctx context.Context, n string, image *workspaces.WorkspaceImage) resource.TestCheckFunc { +func testAccCheckImageExists(ctx context.Context, n string, image *types.WorkspaceImage) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() - resp, err := conn.DescribeWorkspaceImagesWithContext(ctx, &workspaces.DescribeWorkspaceImagesInput{ - ImageIds: []*string{aws.String(rs.Primary.ID)}, + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) + resp, err := conn.DescribeWorkspaceImages(ctx, &workspaces.DescribeWorkspaceImagesInput{ + ImageIds: 
[]string{rs.Primary.ID}, }) if err != nil { return fmt.Errorf("Failed describe workspaces images: %w", err) } - if resp == nil || len(resp.Images) == 0 || resp.Images[0] == nil { + if resp == nil || len(resp.Images) == 0 || reflect.DeepEqual(resp.Images[0], (types.WorkspaceImage{})) { return fmt.Errorf("Workspace image %s was not found", rs.Primary.ID) } - if aws.StringValue(resp.Images[0].ImageId) != rs.Primary.ID { + if aws.ToString(resp.Images[0].ImageId) != rs.Primary.ID { return fmt.Errorf("Workspace image ID mismatch - existing: %q, state: %q", *resp.Images[0].ImageId, rs.Primary.ID) } - *image = *resp.Images[0] + *image = resp.Images[0] return nil } } -func testAccCheckImageAttributes(n string, image *workspaces.WorkspaceImage) resource.TestCheckFunc { +func testAccCheckImageAttributes(n string, image *types.WorkspaceImage) resource.TestCheckFunc { return func(s *terraform.State) error { _, ok := s.RootModule().Resources[n] if !ok { @@ -101,15 +107,15 @@ func testAccCheckImageAttributes(n string, image *workspaces.WorkspaceImage) res return err } - if err := resource.TestCheckResourceAttr(n, "operating_system_type", *image.OperatingSystem.Type)(s); err != nil { + if err := resource.TestCheckResourceAttr(n, "operating_system_type", string(image.OperatingSystem.Type))(s); err != nil { return err } - if err := resource.TestCheckResourceAttr(n, "required_tenancy", *image.RequiredTenancy)(s); err != nil { + if err := resource.TestCheckResourceAttr(n, "required_tenancy", string(image.RequiredTenancy))(s); err != nil { return err } - if err := resource.TestCheckResourceAttr(n, "state", *image.State)(s); err != nil { + if err := resource.TestCheckResourceAttr(n, "state", string(image.State))(s); err != nil { return err } diff --git a/internal/service/workspaces/ip_group.go b/internal/service/workspaces/ip_group.go index b66064b7b60..eb00917c7b5 100644 --- a/internal/service/workspaces/ip_group.go +++ b/internal/service/workspaces/ip_group.go @@ -1,11 +1,15 @@ +// 
Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -66,31 +70,31 @@ func ResourceIPGroup() *schema.Resource { func resourceIPGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) rules := d.Get("rules").(*schema.Set).List() - resp, err := conn.CreateIpGroupWithContext(ctx, &workspaces.CreateIpGroupInput{ + resp, err := conn.CreateIpGroup(ctx, &workspaces.CreateIpGroupInput{ GroupName: aws.String(d.Get("name").(string)), GroupDesc: aws.String(d.Get("description").(string)), UserRules: expandIPGroupRules(rules), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), }) if err != nil { return sdkdiag.AppendErrorf(diags, "creating WorkSpaces IP Group: %s", err) } - d.SetId(aws.StringValue(resp.GroupId)) + d.SetId(aws.ToString(resp.GroupId)) return append(diags, resourceIPGroupRead(ctx, d, meta)...) 
} func resourceIPGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) - resp, err := conn.DescribeIpGroupsWithContext(ctx, &workspaces.DescribeIpGroupsInput{ - GroupIds: []*string{aws.String(d.Id())}, + resp, err := conn.DescribeIpGroups(ctx, &workspaces.DescribeIpGroupsInput{ + GroupIds: []string{d.Id()}, }) if err != nil { if len(resp.Result) == 0 { @@ -121,12 +125,12 @@ func resourceIPGroupRead(ctx context.Context, d *schema.ResourceData, meta inter func resourceIPGroupUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) if d.HasChange("rules") { rules := d.Get("rules").(*schema.Set).List() - _, err := conn.UpdateRulesOfIpGroupWithContext(ctx, &workspaces.UpdateRulesOfIpGroupInput{ + _, err := conn.UpdateRulesOfIpGroup(ctx, &workspaces.UpdateRulesOfIpGroupInput{ GroupId: aws.String(d.Id()), UserRules: expandIPGroupRules(rules), }) @@ -140,34 +144,38 @@ func resourceIPGroupUpdate(ctx context.Context, d *schema.ResourceData, meta int func resourceIPGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) var found bool log.Printf("[DEBUG] Finding directories associated with WorkSpaces IP Group (%s)", d.Id()) - err := conn.DescribeWorkspaceDirectoriesPagesWithContext(ctx, nil, func(page *workspaces.DescribeWorkspaceDirectoriesOutput, lastPage bool) bool { - for _, dir := range page.Directories { + paginator := workspaces.NewDescribeWorkspaceDirectoriesPaginator(conn, &workspaces.DescribeWorkspaceDirectoriesInput{}, func(out 
*workspaces.DescribeWorkspaceDirectoriesPaginatorOptions) {}) + + for paginator.HasMorePages() { + out, err := paginator.NextPage(ctx) + + if err != nil { + return sdkdiag.AppendErrorf(diags, "describing WorkSpaces Directories: %s", err) + } + for _, dir := range out.Directories { for _, ipg := range dir.IpGroupIds { - groupID := aws.StringValue(ipg) + groupID := ipg if groupID == d.Id() { found = true - log.Printf("[DEBUG] WorkSpaces IP Group (%s) associated with WorkSpaces Directory (%s), disassociating", groupID, aws.StringValue(dir.DirectoryId)) - _, err := conn.DisassociateIpGroupsWithContext(ctx, &workspaces.DisassociateIpGroupsInput{ + log.Printf("[DEBUG] WorkSpaces IP Group (%s) associated with WorkSpaces Directory (%s), disassociating", groupID, aws.ToString(dir.DirectoryId)) + _, err := conn.DisassociateIpGroups(ctx, &workspaces.DisassociateIpGroupsInput{ DirectoryId: dir.DirectoryId, - GroupIds: aws.StringSlice([]string{d.Id()}), + GroupIds: []string{d.Id()}, }) if err != nil { - diags = sdkdiag.AppendErrorf(diags, "disassociating WorkSpaces IP Group (%s) from WorkSpaces Directory (%s): %s", d.Id(), aws.StringValue(dir.DirectoryId), err) + diags = sdkdiag.AppendErrorf(diags, "disassociating WorkSpaces IP Group (%s) from WorkSpaces Directory (%s): %s", d.Id(), aws.ToString(dir.DirectoryId), err) continue } - log.Printf("[INFO] WorkSpaces IP Group (%s) disassociated from WorkSpaces Directory (%s)", d.Id(), aws.ToString(dir.DirectoryId)) + log.Printf("[INFO] WorkSpaces IP Group (%s) disassociated from WorkSpaces Directory (%s)", d.Id(), aws.ToString(dir.DirectoryId)) } } } - return !lastPage - }) - if err != nil { - diags = sdkdiag.AppendErrorf(diags, "describing WorkSpaces Directories: %s", err) } + if diags.HasError() { return diags } @@ -177,7 +185,7 @@ func resourceIPGroupDelete(ctx context.Context, d *schema.ResourceData, meta int } log.Printf("[DEBUG] Deleting WorkSpaces IP Group (%s)", d.Id()) - _, err = conn.DeleteIpGroupWithContext(ctx,
&workspaces.DeleteIpGroupInput{ + _, err := conn.DeleteIpGroup(ctx, &workspaces.DeleteIpGroupInput{ GroupId: aws.String(d.Id()), }) if err != nil { @@ -188,12 +196,12 @@ func resourceIPGroupDelete(ctx context.Context, d *schema.ResourceData, meta int return diags } -func expandIPGroupRules(rules []interface{}) []*workspaces.IpRuleItem { - var result []*workspaces.IpRuleItem +func expandIPGroupRules(rules []interface{}) []types.IpRuleItem { + var result []types.IpRuleItem for _, rule := range rules { r := rule.(map[string]interface{}) - result = append(result, &workspaces.IpRuleItem{ + result = append(result, types.IpRuleItem{ IpRule: aws.String(r["source"].(string)), RuleDesc: aws.String(r["description"].(string)), }) @@ -201,17 +209,17 @@ func expandIPGroupRules(rules []interface{}) []*workspaces.IpRuleItem { return result } -func flattenIPGroupRules(rules []*workspaces.IpRuleItem) []map[string]interface{} { +func flattenIPGroupRules(rules []types.IpRuleItem) []map[string]interface{} { result := make([]map[string]interface{}, 0, len(rules)) for _, rule := range rules { r := map[string]interface{}{} if v := rule.IpRule; v != nil { - r["source"] = aws.StringValue(v) + r["source"] = aws.ToString(v) } if v := rule.RuleDesc; v != nil { - r["description"] = aws.StringValue(rule.RuleDesc) + r["description"] = aws.ToString(rule.RuleDesc) } result = append(result, r) diff --git a/internal/service/workspaces/ip_group_test.go b/internal/service/workspaces/ip_group_test.go index 3548f383330..f0a3af93fa5 100644 --- a/internal/service/workspaces/ip_group_test.go +++ b/internal/service/workspaces/ip_group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( @@ -6,8 +9,8 @@ import ( "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -18,7 +21,7 @@ import ( func testAccIPGroup_basic(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.IpGroup + var v types.WorkspacesIpGroup ipGroupName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) ipGroupNewName := sdkacctest.RandomWithPrefix("tf-acc-test-upd") ipGroupDescription := fmt.Sprintf("Terraform Acceptance Test %s", strings.Title(sdkacctest.RandString(20))) @@ -26,7 +29,7 @@ func testAccIPGroup_basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckIPGroupDestroy(ctx), Steps: []resource.TestStep{ @@ -65,13 +68,13 @@ func testAccIPGroup_basic(t *testing.T) { func testAccIPGroup_tags(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.IpGroup + var v types.WorkspacesIpGroup resourceName := "aws_workspaces_ip_group.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckIPGroupDestroy(ctx), Steps: []resource.TestStep{ @@ -111,14 +114,14 @@ func 
testAccIPGroup_tags(t *testing.T) { func testAccIPGroup_disappears(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.IpGroup + var v types.WorkspacesIpGroup ipGroupName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) ipGroupDescription := fmt.Sprintf("Terraform Acceptance Test %s", strings.Title(sdkacctest.RandString(20))) resourceName := "aws_workspaces_ip_group.test" resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckIPGroupDestroy(ctx), Steps: []resource.TestStep{ @@ -136,8 +139,8 @@ func testAccIPGroup_disappears(t *testing.T) { func testAccIPGroup_MultipleDirectories(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.IpGroup - var d1, d2 workspaces.WorkspaceDirectory + var v types.WorkspacesIpGroup + var d1, d2 types.WorkspaceDirectory ipGroupName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) domain := acctest.RandomDomainName() @@ -147,8 +150,11 @@ func testAccIPGroup_MultipleDirectories(t *testing.T) { directoryResourceName2 := "aws_workspaces_directory.test2" resource.Test(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + PreCheck: func() { + acctest.PreCheck(ctx, t) + acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") + }, + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckIPGroupDestroy(ctx), Steps: []resource.TestStep{ @@ -173,9 +179,9 @@ func testAccCheckIPGroupDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() - resp, err := conn.DescribeIpGroupsWithContext(ctx, 
&workspaces.DescribeIpGroupsInput{ - GroupIds: []*string{aws.String(rs.Primary.ID)}, + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) + resp, err := conn.DescribeIpGroups(ctx, &workspaces.DescribeIpGroupsInput{ + GroupIds: []string{rs.Primary.ID}, }) if err != nil { @@ -196,7 +202,7 @@ func testAccCheckIPGroupDestroy(ctx context.Context) resource.TestCheckFunc { } } -func testAccCheckIPGroupExists(ctx context.Context, n string, v *workspaces.IpGroup) resource.TestCheckFunc { +func testAccCheckIPGroupExists(ctx context.Context, n string, v *types.WorkspacesIpGroup) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -207,16 +213,16 @@ func testAccCheckIPGroupExists(ctx context.Context, n string, v *workspaces.IpGr return fmt.Errorf("No Workpsaces IP Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() - resp, err := conn.DescribeIpGroupsWithContext(ctx, &workspaces.DescribeIpGroupsInput{ - GroupIds: []*string{aws.String(rs.Primary.ID)}, + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) + resp, err := conn.DescribeIpGroups(ctx, &workspaces.DescribeIpGroupsInput{ + GroupIds: []string{rs.Primary.ID}, }) if err != nil { return err } if *resp.Result[0].GroupId == rs.Primary.ID { - *v = *resp.Result[0] + *v = resp.Result[0] return nil } diff --git a/internal/service/workspaces/list_pages_gen.go b/internal/service/workspaces/list_pages_gen.go deleted file mode 100644 index 8295ec77b90..00000000000 --- a/internal/service/workspaces/list_pages_gen.go +++ /dev/null @@ -1,28 +0,0 @@ -// Code generated by "internal/generate/listpages/main.go -ListOps=DescribeIpGroups"; DO NOT EDIT. 
- -package workspaces - -import ( - "context" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" - "github.com/aws/aws-sdk-go/service/workspaces/workspacesiface" -) - -func describeIPGroupsPages(ctx context.Context, conn workspacesiface.WorkSpacesAPI, input *workspaces.DescribeIpGroupsInput, fn func(*workspaces.DescribeIpGroupsOutput, bool) bool) error { - for { - output, err := conn.DescribeIpGroupsWithContext(ctx, input) - if err != nil { - return err - } - - lastPage := aws.StringValue(output.NextToken) == "" - if !fn(output, lastPage) || lastPage { - break - } - - input.NextToken = output.NextToken - } - return nil -} diff --git a/internal/service/workspaces/service_package_gen.go b/internal/service/workspaces/service_package_gen.go index feca2acf6c2..46a3af1b461 100644 --- a/internal/service/workspaces/service_package_gen.go +++ b/internal/service/workspaces/service_package_gen.go @@ -5,6 +5,9 @@ package workspaces import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + workspaces_sdkv2 "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -73,4 +76,17 @@ func (p *servicePackage) ServicePackageName() string { return names.WorkSpaces } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. 
+func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*workspaces_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return workspaces_sdkv2.NewFromConfig(cfg, func(o *workspaces_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = workspaces_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/workspaces/status.go b/internal/service/workspaces/status.go index 847812dfb03..5601ba964d2 100644 --- a/internal/service/workspaces/status.go +++ b/internal/service/workspaces/status.go @@ -1,15 +1,18 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func StatusDirectoryState(ctx context.Context, conn *workspaces.WorkSpaces, id string) retry.StateRefreshFunc { +func StatusDirectoryState(ctx context.Context, conn *workspaces.Client, id string) retry.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindDirectoryByID(ctx, conn, id) @@ -21,32 +24,32 @@ func StatusDirectoryState(ctx context.Context, conn *workspaces.WorkSpaces, id s return nil, "", err } - return output, aws.StringValue(output.State), nil + return output, string(output.State), nil } } // nosemgrep:ci.workspaces-in-func-name -func StatusWorkspaceState(ctx context.Context, conn *workspaces.WorkSpaces, workspaceID string) retry.StateRefreshFunc { +func StatusWorkspaceState(ctx context.Context, conn *workspaces.Client, workspaceID string) retry.StateRefreshFunc { return 
func() (interface{}, string, error) { - output, err := conn.DescribeWorkspacesWithContext(ctx, &workspaces.DescribeWorkspacesInput{ - WorkspaceIds: aws.StringSlice([]string{workspaceID}), + output, err := conn.DescribeWorkspaces(ctx, &workspaces.DescribeWorkspacesInput{ + WorkspaceIds: []string{workspaceID}, }) if err != nil { - return nil, workspaces.WorkspaceStateError, err + return nil, string(types.WorkspaceStateError), err } if len(output.Workspaces) == 0 { - return output, workspaces.WorkspaceStateTerminated, nil + return output, string(types.WorkspaceStateTerminated), nil } workspace := output.Workspaces[0] // https://docs.aws.amazon.com/workspaces/latest/api/API_TerminateWorkspaces.html // State TERMINATED is overridden with TERMINATING to catch up directory metadata clean up. - if aws.StringValue(workspace.State) == workspaces.WorkspaceStateTerminated { - return workspace, workspaces.WorkspaceStateTerminating, nil + if workspace.State == types.WorkspaceStateTerminated { + return workspace, string(types.WorkspaceStateTerminating), nil } - return workspace, aws.StringValue(workspace.State), nil + return workspace, string(workspace.State), nil } } diff --git a/internal/service/workspaces/sweep.go b/internal/service/workspaces/sweep.go index 133e771431f..c27fa440eba 100644 --- a/internal/service/workspaces/sweep.go +++ b/internal/service/workspaces/sweep.go @@ -1,25 +1,30 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + //go:build sweep // +build sweep package workspaces import ( + "context" "fmt" "log" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" - "github.com/hashicorp/go-multierror" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/sweep" ) func init() { resource.AddTestSweepers("aws_workspaces_directory", &resource.Sweeper{ - Name: "aws_workspaces_directory", - F: sweepDirectories, - Dependencies: []string{"aws_workspaces_workspace", "aws_workspaces_ip_group"}, + Name: "aws_workspaces_directory", + F: sweepDirectories, + Dependencies: []string{ + "aws_workspaces_workspace", + "aws_workspaces_ip_group", + }, }) resource.AddTestSweepers("aws_workspaces_ip_group", &resource.Sweeper{ @@ -35,40 +40,37 @@ func init() { func sweepDirectories(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WorkSpacesConn() input := &workspaces.DescribeWorkspaceDirectoriesInput{} + conn := client.WorkSpacesClient(ctx) sweepResources := make([]sweep.Sweepable, 0) - err = conn.DescribeWorkspaceDirectoriesPagesWithContext(ctx, input, func(page *workspaces.DescribeWorkspaceDirectoriesOutput, lastPage bool) bool { - if page == nil { - return !lastPage + paginator := workspaces.NewDescribeWorkspaceDirectoriesPaginator(conn, input) + for paginator.HasMorePages() { + page, err := paginator.NextPage(ctx) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping WorkSpaces Directory sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing 
WorkSpaces Directories (%s): %w", region, err) } - for _, directory := range page.Directories { + for _, v := range page.Directories { r := ResourceDirectory() d := r.Data(nil) - d.SetId(aws.StringValue(directory.DirectoryId)) + d.SetId(aws.ToString(v.DirectoryId)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - - return !lastPage - }) - - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping WorkSpaces Directory sweep for %s: %s", region, err) - return nil - } - - if err != nil { - return fmt.Errorf("error listing WorkSpaces Directories (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { return fmt.Errorf("error sweeping WorkSpaces Directories (%s): %w", region, err) @@ -79,11 +81,11 @@ func sweepDirectories(region string) error { func sweepIPGroups(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %s", err) } - conn := client.(*conns.AWSClient).WorkSpacesConn() + conn := client.WorkSpacesClient(ctx) input := &workspaces.DescribeIpGroupsInput{} sweepResources := make([]sweep.Sweepable, 0) @@ -92,10 +94,10 @@ func sweepIPGroups(region string) error { return !lastPage } - for _, ipGroup := range page.Result { + for _, v := range page.Result { r := ResourceIPGroup() d := r.Data(nil) - d.SetId(aws.StringValue(ipGroup.GroupId)) + d.SetId(aws.ToString(v.GroupId)) sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } @@ -104,50 +106,77 @@ func sweepIPGroups(region string) error { }) if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping WorkSpaces Ip Group sweep for %s: %s", region, err) + log.Printf("[WARN] Skipping WorkSpaces IP Group sweep for %s: %s", region, err) return nil } if err != nil { - return 
fmt.Errorf("error listing WorkSpaces Ip Groups (%s): %w", region, err) + return fmt.Errorf("error listing WorkSpaces IP Groups (%s): %w", region, err) } - err = sweep.SweepOrchestratorWithContext(ctx, sweepResources) + err = sweep.SweepOrchestrator(ctx, sweepResources) if err != nil { - return fmt.Errorf("error sweeping WorkSpaces Ip Groups (%s): %w", region, err) + return fmt.Errorf("error sweeping WorkSpaces IP Groups (%s): %w", region, err) } return nil } +func describeIPGroupsPages(ctx context.Context, conn *workspaces.Client, input *workspaces.DescribeIpGroupsInput, fn func(*workspaces.DescribeIpGroupsOutput, bool) bool) error { + for { + output, err := conn.DescribeIpGroups(ctx, input) + if err != nil { + return err + } + + lastPage := aws.ToString(output.NextToken) == "" + if !fn(output, lastPage) || lastPage { + break + } + + input.NextToken = output.NextToken + } + return nil +} + func sweepWorkspace(region string) error { ctx := sweep.Context(region) - client, err := sweep.SharedRegionalSweepClient(region) + client, err := sweep.SharedRegionalSweepClient(ctx, region) if err != nil { return fmt.Errorf("error getting client: %w", err) } - conn := client.(*conns.AWSClient).WorkSpacesConn() - - var errors error input := &workspaces.DescribeWorkspacesInput{} - err = conn.DescribeWorkspacesPagesWithContext(ctx, input, func(resp *workspaces.DescribeWorkspacesOutput, _ bool) bool { - for _, workspace := range resp.Workspaces { - err := WorkspaceDelete(ctx, conn, aws.StringValue(workspace.WorkspaceId), WorkspaceTerminatedTimeout) - if err != nil { - errors = multierror.Append(errors, err) - } + conn := client.WorkSpacesClient(ctx) + sweepResources := make([]sweep.Sweepable, 0) + + paginator := workspaces.NewDescribeWorkspacesPaginator(conn, input) + for paginator.HasMorePages() { + page, err := paginator.NextPage(ctx) + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping WorkSpaces Workspace sweep for %s: %s", region, err) + return nil + } + + if err != 
nil { + return fmt.Errorf("error listing WorkSpaces Workspaces (%s): %w", region, err) + } + + for _, v := range page.Workspaces { + r := ResourceWorkspace() + d := r.Data(nil) + d.SetId(aws.ToString(v.WorkspaceId)) + + sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) } - return true - }) - if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping workspaces sweep for %s: %s", region, err) - return errors // In case we have completed some pages, but had errors } + + err = sweep.SweepOrchestrator(ctx, sweepResources) + if err != nil { - errors = multierror.Append(errors, fmt.Errorf("error listing workspaces: %s", err)) + return fmt.Errorf("error sweeping WorkSpaces Workspaces (%s): %w", region, err) } - return errors + return nil } diff --git a/internal/service/workspaces/tags_gen.go b/internal/service/workspaces/tags_gen.go index 3ad65cd227c..51f4a9081cc 100644 --- a/internal/service/workspaces/tags_gen.go +++ b/internal/service/workspaces/tags_gen.go @@ -5,24 +5,24 @@ import ( "context" "fmt" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" - "github.com/aws/aws-sdk-go/service/workspaces/workspacesiface" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + awstypes "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists workspaces service tags. +// listTags lists workspaces service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func ListTags(ctx context.Context, conn workspacesiface.WorkSpacesAPI, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *workspaces.Client, identifier string) (tftags.KeyValueTags, error) { input := &workspaces.DescribeTagsInput{ ResourceId: aws.String(identifier), } - output, err := conn.DescribeTagsWithContext(ctx, input) + output, err := conn.DescribeTags(ctx, input) if err != nil { return tftags.New(ctx, nil), err @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn workspacesiface.WorkSpacesAPI, identifie // ListTags lists workspaces service tags and set them in Context. // It is called from outside this package. func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).WorkSpacesConn(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).WorkSpacesClient(ctx), identifier) if err != nil { return err @@ -50,11 +50,11 @@ func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier stri // []*SERVICE.Tag handling // Tags returns workspaces service tags. -func Tags(tags tftags.KeyValueTags) []*workspaces.Tag { - result := make([]*workspaces.Tag, 0, len(tags)) +func Tags(tags tftags.KeyValueTags) []awstypes.Tag { + result := make([]awstypes.Tag, 0, len(tags)) for k, v := range tags.Map() { - tag := &workspaces.Tag{ + tag := awstypes.Tag{ Key: aws.String(k), Value: aws.String(v), } @@ -66,19 +66,19 @@ func Tags(tags tftags.KeyValueTags) []*workspaces.Tag { } // KeyValueTags creates tftags.KeyValueTags from workspaces service tags. 
-func KeyValueTags(ctx context.Context, tags []*workspaces.Tag) tftags.KeyValueTags { +func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags { m := make(map[string]*string, len(tags)) for _, tag := range tags { - m[aws.StringValue(tag.Key)] = tag.Value + m[aws.ToString(tag.Key)] = tag.Value } return tftags.New(ctx, m) } -// GetTagsIn returns workspaces service tags from Context. +// getTagsIn returns workspaces service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []*workspaces.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []*workspaces.Tag { return nil } -// SetTagsOut sets workspaces service tags in Context. -func SetTagsOut(ctx context.Context, tags []*workspaces.Tag) { +// setTagsOut sets workspaces service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates workspaces service tags. +// updateTags updates workspaces service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
-func UpdateTags(ctx context.Context, conn workspacesiface.WorkSpacesAPI, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *workspaces.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -107,10 +107,10 @@ func UpdateTags(ctx context.Context, conn workspacesiface.WorkSpacesAPI, identif if len(removedTags) > 0 { input := &workspaces.DeleteTagsInput{ ResourceId: aws.String(identifier), - TagKeys: aws.StringSlice(removedTags.Keys()), + TagKeys: removedTags.Keys(), } - _, err := conn.DeleteTagsWithContext(ctx, input) + _, err := conn.DeleteTags(ctx, input) if err != nil { return fmt.Errorf("untagging resource (%s): %w", identifier, err) @@ -125,7 +125,7 @@ func UpdateTags(ctx context.Context, conn workspacesiface.WorkSpacesAPI, identif Tags: Tags(updatedTags), } - _, err := conn.CreateTagsWithContext(ctx, input) + _, err := conn.CreateTags(ctx, input) if err != nil { return fmt.Errorf("tagging resource (%s): %w", identifier, err) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn workspacesiface.WorkSpacesAPI, identif // UpdateTags updates workspaces service tags. // It is called from outside this package. func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).WorkSpacesConn(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).WorkSpacesClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/service/workspaces/wait.go b/internal/service/workspaces/wait.go index c0bbbfb200b..820305d6cd9 100644 --- a/internal/service/workspaces/wait.go +++ b/internal/service/workspaces/wait.go @@ -1,11 +1,16 @@ +// Copyright (c) HashiCorp, Inc. 
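The `tags_gen.go` hunks above follow the SDK v2 migration pattern of swapping pointer slices (`[]*workspaces.Tag`) for value slices (`[]awstypes.Tag`) and `aws.StringValue` for the nil-safe `aws.ToString`. A minimal stdlib-only sketch of that round trip, with `strPtr`/`deref` as hypothetical stand-ins for `aws.String`/`aws.ToString` (the real code uses the SDK helpers):

```go
package main

import "fmt"

// Tag mirrors the SDK v2 shape: the slice element is a value type,
// though its fields remain *string.
type Tag struct {
	Key   *string
	Value *string
}

// strPtr stands in for aws.String.
func strPtr(s string) *string { return &s }

// deref stands in for aws.ToString: a nil-safe dereference.
func deref(p *string) string {
	if p == nil {
		return ""
	}
	return *p
}

// Tags converts a plain map to a value slice — where the v1 generator
// emitted []*Tag, v2 appends struct values directly.
func Tags(m map[string]string) []Tag {
	result := make([]Tag, 0, len(m))
	for k, v := range m {
		result = append(result, Tag{Key: strPtr(k), Value: strPtr(v)})
	}
	return result
}

// KeyValueTags reverses the conversion using the nil-safe deref.
func KeyValueTags(tags []Tag) map[string]string {
	m := make(map[string]string, len(tags))
	for _, t := range tags {
		m[deref(t.Key)] = deref(t.Value)
	}
	return m
}

func main() {
	in := map[string]string{"Name": "example"}
	fmt.Println(KeyValueTags(Tags(in))["Name"])
}
```

The value-slice shape is why the generated code no longer takes a `&` when building each tag: the struct is appended by value.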
+// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" "time" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-provider-aws/internal/enum" ) const ( @@ -31,30 +36,30 @@ const ( WorkspaceTerminatedTimeout = 10 * time.Minute ) -func WaitDirectoryRegistered(ctx context.Context, conn *workspaces.WorkSpaces, directoryID string) (*workspaces.WorkspaceDirectory, error) { +func WaitDirectoryRegistered(ctx context.Context, conn *workspaces.Client, directoryID string) (*types.WorkspaceDirectory, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{workspaces.WorkspaceDirectoryStateRegistering}, - Target: []string{workspaces.WorkspaceDirectoryStateRegistered}, + Pending: enum.Slice(types.WorkspaceDirectoryStateRegistering), + Target: enum.Slice(types.WorkspaceDirectoryStateRegistered), Refresh: StatusDirectoryState(ctx, conn, directoryID), Timeout: DirectoryRegisteredTimeout, } outputRaw, err := stateConf.WaitForStateContext(ctx) - if v, ok := outputRaw.(*workspaces.WorkspaceDirectory); ok { + if v, ok := outputRaw.(*types.WorkspaceDirectory); ok { return v, err } return nil, err } -func WaitDirectoryDeregistered(ctx context.Context, conn *workspaces.WorkSpaces, directoryID string) (*workspaces.WorkspaceDirectory, error) { +func WaitDirectoryDeregistered(ctx context.Context, conn *workspaces.Client, directoryID string) (*types.WorkspaceDirectory, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{ - workspaces.WorkspaceDirectoryStateRegistering, - workspaces.WorkspaceDirectoryStateRegistered, - workspaces.WorkspaceDirectoryStateDeregistering, - }, + Pending: enum.Slice( + types.WorkspaceDirectoryStateRegistering, + types.WorkspaceDirectoryStateRegistered, + types.WorkspaceDirectoryStateDeregistering, + ), Target: []string{}, 
Refresh: StatusDirectoryState(ctx, conn, directoryID), Timeout: DirectoryDeregisteredTimeout, @@ -62,78 +67,78 @@ func WaitDirectoryDeregistered(ctx context.Context, conn *workspaces.WorkSpaces, outputRaw, err := stateConf.WaitForStateContext(ctx) - if v, ok := outputRaw.(*workspaces.WorkspaceDirectory); ok { + if v, ok := outputRaw.(*types.WorkspaceDirectory); ok { return v, err } return nil, err } -func WaitWorkspaceAvailable(ctx context.Context, conn *workspaces.WorkSpaces, workspaceID string, timeout time.Duration) (*workspaces.Workspace, error) { +func WaitWorkspaceAvailable(ctx context.Context, conn *workspaces.Client, workspaceID string, timeout time.Duration) (*types.Workspace, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{ - workspaces.WorkspaceStatePending, - workspaces.WorkspaceStateStarting, - }, - Target: []string{workspaces.WorkspaceStateAvailable}, + Pending: enum.Slice( + types.WorkspaceStatePending, + types.WorkspaceStateStarting, + ), + Target: enum.Slice(types.WorkspaceStateAvailable), Refresh: StatusWorkspaceState(ctx, conn, workspaceID), Timeout: timeout, } outputRaw, err := stateConf.WaitForStateContext(ctx) - if v, ok := outputRaw.(*workspaces.Workspace); ok { + if v, ok := outputRaw.(*types.Workspace); ok { return v, err } return nil, err } -func WaitWorkspaceTerminated(ctx context.Context, conn *workspaces.WorkSpaces, workspaceID string, timeout time.Duration) (*workspaces.Workspace, error) { +func WaitWorkspaceTerminated(ctx context.Context, conn *workspaces.Client, workspaceID string, timeout time.Duration) (*types.Workspace, error) { stateConf := &retry.StateChangeConf{ - Pending: []string{ - workspaces.WorkspaceStatePending, - workspaces.WorkspaceStateAvailable, - workspaces.WorkspaceStateImpaired, - workspaces.WorkspaceStateUnhealthy, - workspaces.WorkspaceStateRebooting, - workspaces.WorkspaceStateStarting, - workspaces.WorkspaceStateRebuilding, - workspaces.WorkspaceStateRestoring, - 
workspaces.WorkspaceStateMaintenance, - workspaces.WorkspaceStateAdminMaintenance, - workspaces.WorkspaceStateSuspended, - workspaces.WorkspaceStateUpdating, - workspaces.WorkspaceStateStopping, - workspaces.WorkspaceStateStopped, - workspaces.WorkspaceStateTerminating, - workspaces.WorkspaceStateError, - }, - Target: []string{workspaces.WorkspaceStateTerminated}, + Pending: enum.Slice( + types.WorkspaceStatePending, + types.WorkspaceStateAvailable, + types.WorkspaceStateImpaired, + types.WorkspaceStateUnhealthy, + types.WorkspaceStateRebooting, + types.WorkspaceStateStarting, + types.WorkspaceStateRebuilding, + types.WorkspaceStateRestoring, + types.WorkspaceStateMaintenance, + types.WorkspaceStateAdminMaintenance, + types.WorkspaceStateSuspended, + types.WorkspaceStateUpdating, + types.WorkspaceStateStopping, + types.WorkspaceStateStopped, + types.WorkspaceStateTerminating, + types.WorkspaceStateError, + ), + Target: enum.Slice(types.WorkspaceStateTerminated), Refresh: StatusWorkspaceState(ctx, conn, workspaceID), Timeout: timeout, } outputRaw, err := stateConf.WaitForStateContext(ctx) - if v, ok := outputRaw.(*workspaces.Workspace); ok { + if v, ok := outputRaw.(*types.Workspace); ok { return v, err } return nil, err } -func WaitWorkspaceUpdated(ctx context.Context, conn *workspaces.WorkSpaces, workspaceID string, timeout time.Duration) (*workspaces.Workspace, error) { +func WaitWorkspaceUpdated(ctx context.Context, conn *workspaces.Client, workspaceID string, timeout time.Duration) (*types.Workspace, error) { // OperationInProgressException: The properties of this WorkSpace are currently under modification. Please try again in a moment. // AWS Workspaces service doesn't change instance status to "Updating" during property modification. Respective AWS Support feature request has been created. Meanwhile, artificial delay is placed here as a workaround. 
stateConf := &retry.StateChangeConf{ - Pending: []string{ - workspaces.WorkspaceStateUpdating, - }, - Target: []string{ - workspaces.WorkspaceStateAvailable, - workspaces.WorkspaceStateStopped, - }, + Pending: enum.Slice( + types.WorkspaceStateUpdating, + ), + Target: enum.Slice( + types.WorkspaceStateAvailable, + types.WorkspaceStateStopped, + ), Refresh: StatusWorkspaceState(ctx, conn, workspaceID), Delay: WorkspaceUpdatingDelay, Timeout: timeout, @@ -141,7 +146,7 @@ func WaitWorkspaceUpdated(ctx context.Context, conn *workspaces.WorkSpaces, work outputRaw, err := stateConf.WaitForStateContext(ctx) - if v, ok := outputRaw.(*workspaces.Workspace); ok { + if v, ok := outputRaw.(*types.Workspace); ok { return v, err } diff --git a/internal/service/workspaces/workspace.go b/internal/service/workspaces/workspace.go index 772b1e041e1..f15207b4379 100644 --- a/internal/service/workspaces/workspace.go +++ b/internal/service/workspaces/workspace.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( @@ -6,12 +9,14 @@ import ( "log" "time" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/enum" "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/verify" @@ -84,8 +89,8 @@ func ResourceWorkspace() *schema.Resource { "compute_type_name": { Type: schema.TypeString, Optional: true, - Default: workspaces.ComputeValue, - ValidateFunc: validation.StringInSlice(workspaces.Compute_Values(), false), + Default: string(types.ComputeValue), + ValidateFunc: validation.StringInSlice(flattenComputeEnumValues(types.Compute("").Values()), false), }, "root_volume_size_gib": { Type: schema.TypeInt, @@ -99,11 +104,11 @@ func ResourceWorkspace() *schema.Resource { "running_mode": { Type: schema.TypeString, Optional: true, - Default: workspaces.RunningModeAlwaysOn, - ValidateFunc: validation.StringInSlice([]string{ - workspaces.RunningModeAlwaysOn, - workspaces.RunningModeAutoStop, - }, false), + Default: string(types.RunningModeAlwaysOn), + ValidateFunc: validation.StringInSlice(enum.Slice( + types.RunningModeAlwaysOn, + types.RunningModeAutoStop, + ), false), }, "running_mode_auto_stop_timeout_in_minutes": { Type: schema.TypeInt, @@ -145,15 +150,15 @@ func ResourceWorkspace() *schema.Resource { func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags 
diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) - input := &workspaces.WorkspaceRequest{ + input := types.WorkspaceRequest{ BundleId: aws.String(d.Get("bundle_id").(string)), DirectoryId: aws.String(d.Get("directory_id").(string)), UserName: aws.String(d.Get("user_name").(string)), RootVolumeEncryptionEnabled: aws.Bool(d.Get("root_volume_encryption_enabled").(bool)), UserVolumeEncryptionEnabled: aws.Bool(d.Get("user_volume_encryption_enabled").(bool)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("volume_encryption_key"); ok { @@ -162,8 +167,8 @@ func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta i input.WorkspaceProperties = ExpandWorkspaceProperties(d.Get("workspace_properties").([]interface{})) - resp, err := conn.CreateWorkspacesWithContext(ctx, &workspaces.CreateWorkspacesInput{ - Workspaces: []*workspaces.WorkspaceRequest{input}, + resp, err := conn.CreateWorkspaces(ctx, &workspaces.CreateWorkspacesInput{ + Workspaces: []types.WorkspaceRequest{input}, }) if err != nil { return sdkdiag.AppendErrorf(diags, "creating WorkSpaces Workspace: %s", err) @@ -171,10 +176,10 @@ func resourceWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta i wsFail := resp.FailedRequests if len(wsFail) > 0 { - return sdkdiag.AppendErrorf(diags, "creating WorkSpaces Workspace: %s: %s", aws.StringValue(wsFail[0].ErrorCode), aws.StringValue(wsFail[0].ErrorMessage)) + return sdkdiag.AppendErrorf(diags, "creating WorkSpaces Workspace: %s: %s", aws.ToString(wsFail[0].ErrorCode), aws.ToString(wsFail[0].ErrorMessage)) } - workspaceID := aws.StringValue(resp.PendingRequests[0].WorkspaceId) + workspaceID := aws.ToString(resp.PendingRequests[0].WorkspaceId) d.SetId(workspaceID) _, err = WaitWorkspaceAvailable(ctx, conn, workspaceID, d.Timeout(schema.TimeoutCreate)) @@ -187,19 +192,19 @@ func resourceWorkspaceCreate(ctx context.Context, d 
*schema.ResourceData, meta i func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) rawOutput, state, err := StatusWorkspaceState(ctx, conn, d.Id())() if err != nil { return sdkdiag.AppendErrorf(diags, "reading WorkSpaces Workspace (%s): %s", d.Id(), err) } - if state == workspaces.WorkspaceStateTerminated { + if state == string(types.WorkspaceStateTerminated) { log.Printf("[WARN] WorkSpaces Workspace (%s) not found, removing from state", d.Id()) d.SetId("") return diags } - workspace := rawOutput.(*workspaces.Workspace) + workspace := rawOutput.(types.Workspace) d.Set("bundle_id", workspace.BundleId) d.Set("directory_id", workspace.DirectoryId) d.Set("ip_address", workspace.IpAddress) @@ -218,7 +223,7 @@ func resourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta int func resourceWorkspaceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) // IMPORTANT: Only one workspace property could be changed in a time. // I've create AWS Support feature request to allow multiple properties modification in a time. 
@@ -259,7 +264,7 @@ func resourceWorkspaceUpdate(ctx context.Context, d *schema.ResourceData, meta i func resourceWorkspaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) if err := WorkspaceDelete(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { return sdkdiag.AppendFromErr(diags, err) @@ -267,9 +272,9 @@ func resourceWorkspaceDelete(ctx context.Context, d *schema.ResourceData, meta i return diags } -func WorkspaceDelete(ctx context.Context, conn *workspaces.WorkSpaces, id string, timeout time.Duration) error { - resp, err := conn.TerminateWorkspacesWithContext(ctx, &workspaces.TerminateWorkspacesInput{ - TerminateWorkspaceRequests: []*workspaces.TerminateRequest{ +func WorkspaceDelete(ctx context.Context, conn *workspaces.Client, id string, timeout time.Duration) error { + resp, err := conn.TerminateWorkspaces(ctx, &workspaces.TerminateWorkspacesInput{ + TerminateWorkspaceRequests: []types.TerminateRequest{ { WorkspaceId: aws.String(id), }, @@ -281,7 +286,7 @@ func WorkspaceDelete(ctx context.Context, conn *workspaces.WorkSpaces, id string wsFail := resp.FailedRequests if len(wsFail) > 0 { - return fmt.Errorf("deleting WorkSpaces Workspace (%s): %s: %s", id, aws.StringValue(wsFail[0].ErrorCode), aws.StringValue(wsFail[0].ErrorMessage)) + return fmt.Errorf("deleting WorkSpaces Workspace (%s): %s: %s", id, aws.ToString(wsFail[0].ErrorCode), aws.ToString(wsFail[0].ErrorMessage)) } _, err = WaitWorkspaceTerminated(ctx, conn, id, timeout) @@ -292,40 +297,40 @@ func WorkspaceDelete(ctx context.Context, conn *workspaces.WorkSpaces, id string return nil } -func workspacePropertyUpdate(ctx context.Context, p string, conn *workspaces.WorkSpaces, d *schema.ResourceData) error { +func workspacePropertyUpdate(ctx context.Context, p string, conn *workspaces.Client, d *schema.ResourceData) 
error { id := d.Id() - var wsp *workspaces.WorkspaceProperties + var wsp *types.WorkspaceProperties switch p { case "compute_type_name": - wsp = &workspaces.WorkspaceProperties{ - ComputeTypeName: aws.String(d.Get("workspace_properties.0.compute_type_name").(string)), + wsp = &types.WorkspaceProperties{ + ComputeTypeName: types.Compute(d.Get("workspace_properties.0.compute_type_name").(string)), } case "root_volume_size_gib": - wsp = &workspaces.WorkspaceProperties{ - RootVolumeSizeGib: aws.Int64(int64(d.Get("workspace_properties.0.root_volume_size_gib").(int))), + wsp = &types.WorkspaceProperties{ + RootVolumeSizeGib: aws.Int32(int32(d.Get("workspace_properties.0.root_volume_size_gib").(int))), } case "running_mode": - wsp = &workspaces.WorkspaceProperties{ - RunningMode: aws.String(d.Get("workspace_properties.0.running_mode").(string)), + wsp = &types.WorkspaceProperties{ + RunningMode: types.RunningMode(d.Get("workspace_properties.0.running_mode").(string)), } case "running_mode_auto_stop_timeout_in_minutes": - if d.Get("workspace_properties.0.running_mode") != workspaces.RunningModeAutoStop { + if d.Get("workspace_properties.0.running_mode") != types.RunningModeAutoStop { log.Printf("[DEBUG] Property running_mode_auto_stop_timeout_in_minutes makes sense only for AUTO_STOP running mode") return nil } - wsp = &workspaces.WorkspaceProperties{ - RunningModeAutoStopTimeoutInMinutes: aws.Int64(int64(d.Get("workspace_properties.0.running_mode_auto_stop_timeout_in_minutes").(int))), + wsp = &types.WorkspaceProperties{ + RunningModeAutoStopTimeoutInMinutes: aws.Int32(int32(d.Get("workspace_properties.0.running_mode_auto_stop_timeout_in_minutes").(int))), } case "user_volume_size_gib": - wsp = &workspaces.WorkspaceProperties{ - UserVolumeSizeGib: aws.Int64(int64(d.Get("workspace_properties.0.user_volume_size_gib").(int))), + wsp = &types.WorkspaceProperties{ + UserVolumeSizeGib: aws.Int32(int32(d.Get("workspace_properties.0.user_volume_size_gib").(int))), } } - _, err := 
conn.ModifyWorkspacePropertiesWithContext(ctx, &workspaces.ModifyWorkspacePropertiesInput{ + _, err := conn.ModifyWorkspaceProperties(ctx, &workspaces.ModifyWorkspacePropertiesInput{ WorkspaceId: aws.String(id), WorkspaceProperties: wsp, }) @@ -341,7 +346,7 @@ func workspacePropertyUpdate(ctx context.Context, p string, conn *workspaces.Wor return nil } -func ExpandWorkspaceProperties(properties []interface{}) *workspaces.WorkspaceProperties { +func ExpandWorkspaceProperties(properties []interface{}) *types.WorkspaceProperties { log.Printf("[DEBUG] Expand Workspace properties: %+v ", properties) if len(properties) == 0 || properties[0] == nil { @@ -350,21 +355,21 @@ func ExpandWorkspaceProperties(properties []interface{}) *workspaces.WorkspacePr p := properties[0].(map[string]interface{}) - workspaceProperties := &workspaces.WorkspaceProperties{ - ComputeTypeName: aws.String(p["compute_type_name"].(string)), - RootVolumeSizeGib: aws.Int64(int64(p["root_volume_size_gib"].(int))), - RunningMode: aws.String(p["running_mode"].(string)), - UserVolumeSizeGib: aws.Int64(int64(p["user_volume_size_gib"].(int))), + workspaceProperties := &types.WorkspaceProperties{ + ComputeTypeName: types.Compute(p["compute_type_name"].(string)), + RootVolumeSizeGib: aws.Int32(int32(p["root_volume_size_gib"].(int))), + RunningMode: types.RunningMode(p["running_mode"].(string)), + UserVolumeSizeGib: aws.Int32(int32(p["user_volume_size_gib"].(int))), } - if p["running_mode"].(string) == workspaces.RunningModeAutoStop { - workspaceProperties.RunningModeAutoStopTimeoutInMinutes = aws.Int64(int64(p["running_mode_auto_stop_timeout_in_minutes"].(int))) + if p["running_mode"].(string) == string(types.RunningModeAutoStop) { + workspaceProperties.RunningModeAutoStopTimeoutInMinutes = aws.Int32(int32(p["running_mode_auto_stop_timeout_in_minutes"].(int))) } return workspaceProperties } -func FlattenWorkspaceProperties(properties *workspaces.WorkspaceProperties) []map[string]interface{} { +func 
FlattenWorkspaceProperties(properties *types.WorkspaceProperties) []map[string]interface{} { log.Printf("[DEBUG] Flatten workspace properties: %+v ", properties) if properties == nil { @@ -373,11 +378,21 @@ func FlattenWorkspaceProperties(properties *workspaces.WorkspaceProperties) []ma return []map[string]interface{}{ { - "compute_type_name": aws.StringValue(properties.ComputeTypeName), - "root_volume_size_gib": int(aws.Int64Value(properties.RootVolumeSizeGib)), - "running_mode": aws.StringValue(properties.RunningMode), - "running_mode_auto_stop_timeout_in_minutes": int(aws.Int64Value(properties.RunningModeAutoStopTimeoutInMinutes)), - "user_volume_size_gib": int(aws.Int64Value(properties.UserVolumeSizeGib)), + "compute_type_name": string(properties.ComputeTypeName), + "root_volume_size_gib": int(aws.ToInt32(properties.RootVolumeSizeGib)), + "running_mode": string(properties.RunningMode), + "running_mode_auto_stop_timeout_in_minutes": int(aws.ToInt32(properties.RunningModeAutoStopTimeoutInMinutes)), + "user_volume_size_gib": int(aws.ToInt32(properties.UserVolumeSizeGib)), }, } } + +func flattenComputeEnumValues(t []types.Compute) []string { + var out []string + + for _, v := range t { + out = append(out, string(v)) + } + + return out +} diff --git a/internal/service/workspaces/workspace_data_source.go b/internal/service/workspaces/workspace_data_source.go index 94e9b98b1f6..dca2e5eaaac 100644 --- a/internal/service/workspaces/workspace_data_source.go +++ b/internal/service/workspaces/workspace_data_source.go @@ -1,10 +1,15 @@ +// Copyright (c) HashiCorp, Inc. 
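The `workspace.go` hunks above also narrow the numeric fields: SDK v2 models these properties as `*int32`, so the schema's `int` values go through `aws.Int32(int32(...))` on the way in and `int(aws.ToInt32(...))` on the way out. A stdlib-only sketch of that conversion, with `int32Ptr`/`toInt32` as hypothetical stand-ins for the SDK helpers:

```go
package main

import "fmt"

// int32Ptr stands in for aws.Int32.
func int32Ptr(v int32) *int32 { return &v }

// toInt32 stands in for aws.ToInt32: a nil-safe dereference
// returning the zero value for a nil pointer.
func toInt32(p *int32) int32 {
	if p == nil {
		return 0
	}
	return *p
}

func main() {
	// Schema values arrive as int; the v2 field is *int32, so the
	// migration narrows on the way in and widens back on the way out.
	size := 80
	field := int32Ptr(int32(size))
	fmt.Println(int(toInt32(field)), int(toInt32(nil)))
}
```

The nil-safe dereference is what lets the flatten function run unconditionally even when the API omits a property.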
+// SPDX-License-Identifier: MPL-2.0 + package workspaces import ( "context" + "reflect" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -101,14 +106,14 @@ func DataSourceWorkspace() *schema.Resource { func dataSourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).WorkSpacesConn() + conn := meta.(*conns.AWSClient).WorkSpacesClient(ctx) ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - var workspace *workspaces.Workspace + var workspace types.Workspace if workspaceID, ok := d.GetOk("workspace_id"); ok { - resp, err := conn.DescribeWorkspacesWithContext(ctx, &workspaces.DescribeWorkspacesInput{ - WorkspaceIds: aws.StringSlice([]string{workspaceID.(string)}), + resp, err := conn.DescribeWorkspaces(ctx, &workspaces.DescribeWorkspacesInput{ + WorkspaceIds: []string{workspaceID.(string)}, }) if err != nil { return sdkdiag.AppendErrorf(diags, "reading WorkSpaces Workspace (%s): %s", workspaceID, err) @@ -120,14 +125,14 @@ func dataSourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta i workspace = resp.Workspaces[0] - if workspace == nil { + if reflect.DeepEqual(workspace, (types.Workspace{})) { return sdkdiag.AppendErrorf(diags, "no WorkSpaces Workspace with ID %q found", workspaceID) } } if directoryID, ok := d.GetOk("directory_id"); ok { userName := d.Get("user_name").(string) - resp, err := conn.DescribeWorkspacesWithContext(ctx, &workspaces.DescribeWorkspacesInput{ + resp, err := conn.DescribeWorkspaces(ctx, &workspaces.DescribeWorkspacesInput{ DirectoryId: 
aws.String(directoryID.(string)), UserName: aws.String(userName), }) @@ -141,12 +146,12 @@ func dataSourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta i workspace = resp.Workspaces[0] - if workspace == nil { + if reflect.DeepEqual(workspace, (types.Workspace{})) { return sdkdiag.AppendErrorf(diags, "no %q Workspace in %q directory found", userName, directoryID) } } - d.SetId(aws.StringValue(workspace.WorkspaceId)) + d.SetId(aws.ToString(workspace.WorkspaceId)) d.Set("bundle_id", workspace.BundleId) d.Set("directory_id", workspace.DirectoryId) d.Set("ip_address", workspace.IpAddress) @@ -160,7 +165,7 @@ func dataSourceWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta i return sdkdiag.AppendErrorf(diags, "setting workspace properties: %s", err) } - tags, err := ListTags(ctx, conn, d.Id()) + tags, err := listTags(ctx, conn, d.Id()) if err != nil { return sdkdiag.AppendErrorf(diags, "listing tags: %s", err) } diff --git a/internal/service/workspaces/workspace_data_source_test.go b/internal/service/workspaces/workspace_data_source_test.go index 4a9b3078050..1bf7b6879f3 100644 --- a/internal/service/workspaces/workspace_data_source_test.go +++ b/internal/service/workspaces/workspace_data_source_test.go @@ -1,10 +1,14 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/acctest" @@ -19,7 +23,7 @@ func testAccWorkspaceDataSource_byWorkspaceID(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -54,7 +58,7 @@ func testAccWorkspaceDataSource_byDirectoryID_userName(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { @@ -85,7 +89,7 @@ func testAccWorkspaceDataSource_workspaceIDAndDirectoryIDConflict(t *testing.T) resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t); acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, Steps: []resource.TestStep{ { diff --git a/internal/service/workspaces/workspace_test.go b/internal/service/workspaces/workspace_test.go index b10db27409c..e066aa9167f 100644 --- a/internal/service/workspaces/workspace_test.go +++ 
b/internal/service/workspaces/workspace_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( @@ -5,10 +8,12 @@ import ( "fmt" "reflect" "regexp" + "strings" "testing" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/workspaces" + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/service/workspaces" + "github.com/aws/aws-sdk-go-v2/service/workspaces/types" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -19,7 +24,7 @@ import ( func testAccWorkspace_basic(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.Workspace + var v types.Workspace rName := sdkacctest.RandString(8) domain := acctest.RandomDomainName() @@ -34,7 +39,7 @@ func testAccWorkspace_basic(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -46,14 +51,14 @@ func testAccWorkspace_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "directory_id", directoryResourceName, "id"), resource.TestCheckResourceAttrPair(resourceName, "bundle_id", bundleDataSourceName, "id"), resource.TestMatchResourceAttr(resourceName, "ip_address", regexp.MustCompile(`\d+\.\d+\.\d+\.\d+`)), - resource.TestCheckResourceAttr(resourceName, "state", workspaces.WorkspaceStateAvailable), + resource.TestCheckResourceAttr(resourceName, "state", string(types.WorkspaceStateAvailable)), resource.TestCheckResourceAttr(resourceName, "root_volume_encryption_enabled", "false"), 
resource.TestCheckResourceAttr(resourceName, "user_name", "Administrator"), resource.TestCheckResourceAttr(resourceName, "volume_encryption_key", ""), resource.TestCheckResourceAttr(resourceName, "workspace_properties.#", "1"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", workspaces.ComputeValue), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", string(types.ComputeValue)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.root_volume_size_gib", "80"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", workspaces.RunningModeAlwaysOn), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", string(types.RunningModeAlwaysOn)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode_auto_stop_timeout_in_minutes", "0"), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.user_volume_size_gib", "10"), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), @@ -71,7 +76,7 @@ func testAccWorkspace_basic(t *testing.T) { func testAccWorkspace_tags(t *testing.T) { ctx := acctest.Context(t) - var v1, v2, v3 workspaces.Workspace + var v1, v2, v3 types.Workspace rName := sdkacctest.RandString(8) domain := acctest.RandomDomainName() @@ -84,7 +89,7 @@ func testAccWorkspace_tags(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -125,7 +130,7 @@ func testAccWorkspace_tags(t *testing.T) { func testAccWorkspace_workspaceProperties(t *testing.T) { ctx := acctest.Context(t) - var v1, v2, v3 workspaces.Workspace + 
var v1, v2, v3 types.Workspace rName := sdkacctest.RandString(8) domain := acctest.RandomDomainName() @@ -138,7 +143,7 @@ func testAccWorkspace_workspaceProperties(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -148,9 +153,9 @@ func testAccWorkspace_workspaceProperties(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckWorkspaceExists(ctx, resourceName, &v1), resource.TestCheckResourceAttr(resourceName, "workspace_properties.#", "1"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", workspaces.ComputeValue), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", string(types.ComputeValue)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.root_volume_size_gib", "80"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", workspaces.RunningModeAutoStop), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", string(types.RunningModeAutoStop)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode_auto_stop_timeout_in_minutes", "120"), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.user_volume_size_gib", "10"), ), @@ -165,9 +170,9 @@ func testAccWorkspace_workspaceProperties(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckWorkspaceExists(ctx, resourceName, &v2), resource.TestCheckResourceAttr(resourceName, "workspace_properties.#", "1"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", 
workspaces.ComputeValue), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", string(types.ComputeValue)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.root_volume_size_gib", "80"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", workspaces.RunningModeAlwaysOn), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", string(types.RunningModeAlwaysOn)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode_auto_stop_timeout_in_minutes", "0"), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.user_volume_size_gib", "10"), ), @@ -177,9 +182,9 @@ func testAccWorkspace_workspaceProperties(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckWorkspaceExists(ctx, resourceName, &v3), resource.TestCheckResourceAttr(resourceName, "workspace_properties.#", "1"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", workspaces.ComputeValue), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", string(types.ComputeValue)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.root_volume_size_gib", "80"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", workspaces.RunningModeAlwaysOn), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", string(types.RunningModeAlwaysOn)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode_auto_stop_timeout_in_minutes", "0"), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.user_volume_size_gib", "10"), ), @@ -193,7 +198,7 @@ func testAccWorkspace_workspaceProperties(t *testing.T) { // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/13558 func testAccWorkspace_workspaceProperties_runningModeAlwaysOn(t *testing.T) { ctx 
:= acctest.Context(t) - var v1 workspaces.Workspace + var v1 types.Workspace rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_workspaces_workspace.test" domain := acctest.RandomDomainName() @@ -205,7 +210,7 @@ func testAccWorkspace_workspaceProperties_runningModeAlwaysOn(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -214,9 +219,9 @@ func testAccWorkspace_workspaceProperties_runningModeAlwaysOn(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckWorkspaceExists(ctx, resourceName, &v1), resource.TestCheckResourceAttr(resourceName, "workspace_properties.#", "1"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", workspaces.ComputeValue), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.compute_type_name", string(types.ComputeValue)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.root_volume_size_gib", "80"), - resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", workspaces.RunningModeAlwaysOn), + resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode", string(types.RunningModeAlwaysOn)), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.running_mode_auto_stop_timeout_in_minutes", "0"), resource.TestCheckResourceAttr(resourceName, "workspace_properties.0.user_volume_size_gib", "10"), ), @@ -237,7 +242,7 @@ func testAccWorkspace_validateRootVolumeSize(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, 
workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -256,7 +261,7 @@ func testAccWorkspace_validateUserVolumeSize(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(ctx, t) }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -270,7 +275,7 @@ func testAccWorkspace_validateUserVolumeSize(t *testing.T) { func testAccWorkspace_recreate(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.Workspace + var v types.Workspace rName := sdkacctest.RandString(8) domain := acctest.RandomDomainName() @@ -283,7 +288,7 @@ func testAccWorkspace_recreate(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -306,7 +311,7 @@ func testAccWorkspace_recreate(t *testing.T) { func testAccWorkspace_timeout(t *testing.T) { ctx := acctest.Context(t) - var v workspaces.Workspace + var v types.Workspace rName := sdkacctest.RandString(8) domain := acctest.RandomDomainName() @@ -319,7 +324,7 @@ func testAccWorkspace_timeout(t *testing.T) { acctest.PreCheckDirectoryServiceSimpleDirectory(ctx, t) acctest.PreCheckHasIAMRole(ctx, t, "workspaces_DefaultRole") }, - ErrorCheck: acctest.ErrorCheck(t, workspaces.EndpointsID), + ErrorCheck: acctest.ErrorCheck(t, 
strings.ToLower(workspaces.ServiceID)), ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories, CheckDestroy: testAccCheckWorkspaceDestroy(ctx), Steps: []resource.TestStep{ @@ -336,15 +341,15 @@ func testAccWorkspace_timeout(t *testing.T) { func testAccCheckWorkspaceDestroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) for _, rs := range s.RootModule().Resources { if rs.Type != "aws_workspaces_workspace" { continue } - resp, err := conn.DescribeWorkspacesWithContext(ctx, &workspaces.DescribeWorkspacesInput{ - WorkspaceIds: []*string{aws.String(rs.Primary.ID)}, + resp, err := conn.DescribeWorkspaces(ctx, &workspaces.DescribeWorkspacesInput{ + WorkspaceIds: []string{rs.Primary.ID}, }) if err != nil { return err @@ -355,7 +360,7 @@ func testAccCheckWorkspaceDestroy(ctx context.Context) resource.TestCheckFunc { } ws := resp.Workspaces[0] - if *ws.State != workspaces.WorkspaceStateTerminating && *ws.State != workspaces.WorkspaceStateTerminated { + if ws.State != types.WorkspaceStateTerminating && ws.State != types.WorkspaceStateTerminated { return fmt.Errorf("workspace %q was not terminated", rs.Primary.ID) } } @@ -364,24 +369,24 @@ func testAccCheckWorkspaceDestroy(ctx context.Context) resource.TestCheckFunc { } } -func testAccCheckWorkspaceExists(ctx context.Context, n string, v *workspaces.Workspace) resource.TestCheckFunc { +func testAccCheckWorkspaceExists(ctx context.Context, n string, v *types.Workspace) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesConn() + conn := acctest.Provider.Meta().(*conns.AWSClient).WorkSpacesClient(ctx) - output, err := conn.DescribeWorkspacesWithContext(ctx, 
&workspaces.DescribeWorkspacesInput{ - WorkspaceIds: []*string{aws.String(rs.Primary.ID)}, + output, err := conn.DescribeWorkspaces(ctx, &workspaces.DescribeWorkspacesInput{ + WorkspaceIds: []string{rs.Primary.ID}, }) if err != nil { return err } if *output.Workspaces[0].WorkspaceId == rs.Primary.ID { - *v = *output.Workspaces[0] + *v = output.Workspaces[0] return nil } @@ -634,7 +639,7 @@ func TestExpandWorkspaceProperties(t *testing.T) { cases := []struct { input []interface{} - expected *workspaces.WorkspaceProperties + expected *types.WorkspaceProperties }{ // Empty { @@ -645,19 +650,19 @@ func TestExpandWorkspaceProperties(t *testing.T) { { input: []interface{}{ map[string]interface{}{ - "compute_type_name": workspaces.ComputeValue, + "compute_type_name": string(types.ComputeValue), "root_volume_size_gib": 80, - "running_mode": workspaces.RunningModeAutoStop, + "running_mode": string(types.RunningModeAutoStop), "running_mode_auto_stop_timeout_in_minutes": 60, "user_volume_size_gib": 10, }, }, - expected: &workspaces.WorkspaceProperties{ - ComputeTypeName: aws.String(workspaces.ComputeValue), - RootVolumeSizeGib: aws.Int64(80), - RunningMode: aws.String(workspaces.RunningModeAutoStop), - RunningModeAutoStopTimeoutInMinutes: aws.Int64(60), - UserVolumeSizeGib: aws.Int64(10), + expected: &types.WorkspaceProperties{ + ComputeTypeName: types.ComputeValue, + RootVolumeSizeGib: aws.Int32(80), + RunningMode: types.RunningModeAutoStop, + RunningModeAutoStopTimeoutInMinutes: aws.Int32(60), + UserVolumeSizeGib: aws.Int32(10), }, }, } @@ -674,7 +679,7 @@ func TestFlattenWorkspaceProperties(t *testing.T) { t.Parallel() cases := []struct { - input *workspaces.WorkspaceProperties + input *types.WorkspaceProperties expected []map[string]interface{} }{ // Empty @@ -684,18 +689,18 @@ func TestFlattenWorkspaceProperties(t *testing.T) { }, // Full { - input: &workspaces.WorkspaceProperties{ - ComputeTypeName: aws.String(workspaces.ComputeValue), - RootVolumeSizeGib: 
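The `DescribeWorkspaces` hunks change `WorkspaceIds` from `[]*string{aws.String(rs.Primary.ID)}` to `[]string{rs.Primary.ID}`: AWS SDK for Go v2 input structs take value slices, so the per-element pointer boxing that v1 required disappears. A self-contained sketch of the two call shapes (`strPtr`, `v1WorkspaceIDs`, and `v2WorkspaceIDs` are illustrative helpers, not SDK functions):

```go
package main

import "fmt"

// strPtr mimics aws.String from the v1 SDK, which boxed every element.
func strPtr(s string) *string { return &s }

// v1WorkspaceIDs shows the old input shape: []*string, one heap pointer per ID.
func v1WorkspaceIDs(ids ...string) []*string {
	out := make([]*string, 0, len(ids))
	for _, id := range ids {
		out = append(out, strPtr(id))
	}
	return out
}

// v2WorkspaceIDs shows the new input shape: a plain []string, no boxing.
func v2WorkspaceIDs(ids ...string) []string { return ids }

func main() {
	fmt.Println(*v1WorkspaceIDs("ws-123")[0]) // ws-123
	fmt.Println(v2WorkspaceIDs("ws-123")[0])  // ws-123
}
```

The same shift explains `*v = *output.Workspaces[0]` becoming `*v = output.Workspaces[0]`: v2 returns slices of values, not slices of pointers.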
aws.Int64(80), - RunningMode: aws.String(workspaces.RunningModeAutoStop), - RunningModeAutoStopTimeoutInMinutes: aws.Int64(60), - UserVolumeSizeGib: aws.Int64(10), + input: &types.WorkspaceProperties{ + ComputeTypeName: types.ComputeValue, + RootVolumeSizeGib: aws.Int32(80), + RunningMode: types.RunningModeAutoStop, + RunningModeAutoStopTimeoutInMinutes: aws.Int32(60), + UserVolumeSizeGib: aws.Int32(10), }, expected: []map[string]interface{}{ { - "compute_type_name": workspaces.ComputeValue, + "compute_type_name": string(types.ComputeValue), "root_volume_size_gib": 80, - "running_mode": workspaces.RunningModeAutoStop, + "running_mode": string(types.RunningModeAutoStop), "running_mode_auto_stop_timeout_in_minutes": 60, "user_volume_size_gib": 10, }, diff --git a/internal/service/workspaces/workspaces_data_source_test.go b/internal/service/workspaces/workspaces_data_source_test.go index 10b08592c9c..da39971e975 100644 --- a/internal/service/workspaces/workspaces_data_source_test.go +++ b/internal/service/workspaces/workspaces_data_source_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( diff --git a/internal/service/workspaces/workspaces_test.go b/internal/service/workspaces/workspaces_test.go index 3f31e51b2c8..808d1d589af 100644 --- a/internal/service/workspaces/workspaces_test.go +++ b/internal/service/workspaces/workspaces_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package workspaces_test import ( diff --git a/internal/service/xray/encryption_config.go b/internal/service/xray/encryption_config.go index 1c1f0fe6db3..4d85729313e 100644 --- a/internal/service/xray/encryption_config.go +++ b/internal/service/xray/encryption_config.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
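The hunks above repeatedly rewrite `workspaces.ComputeValue` as `string(types.ComputeValue)` and drop pointer dereferences from comparisons such as `ws.State != types.WorkspaceStateTerminating`. In the AWS SDK for Go v2, enum values are constants of a defined string type rather than the plain `string` constants of v1, so helpers that accept `string` need an explicit conversion. A minimal sketch of the pattern (the `RunningMode` type and its values are stand-ins, not the real SDK definitions):

```go
package main

import "fmt"

// RunningMode mimics an AWS SDK for Go v2 enum: a defined type whose
// underlying type is string, unlike the plain string constants of SDK v1.
type RunningMode string

const (
	RunningModeAlwaysOn RunningMode = "ALWAYS_ON"
	RunningModeAutoStop RunningMode = "AUTO_STOP"
)

// checkAttr mimics a test helper such as resource.TestCheckResourceAttr,
// which accepts a plain string: comparing against the typed enum requires
// an explicit string() conversion at the call site.
func checkAttr(want string, mode RunningMode) bool {
	return want == string(mode)
}

func main() {
	// In v1 the constant was already a string; in v2 the conversion is explicit.
	fmt.Println(checkAttr("ALWAYS_ON", RunningModeAlwaysOn)) // true
}
```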
+// SPDX-License-Identifier: MPL-2.0 + package xray import ( @@ -47,7 +50,7 @@ func resourceEncryptionConfig() *schema.Resource { func resourceEncryptionPutConfig(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) input := &xray.PutEncryptionConfigInput{ Type: types.EncryptionType(d.Get("type").(string)), @@ -74,7 +77,7 @@ func resourceEncryptionPutConfig(ctx context.Context, d *schema.ResourceData, me func resourceEncryptionConfigRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) config, err := findEncryptionConfig(ctx, conn) diff --git a/internal/service/xray/encryption_config_test.go b/internal/service/xray/encryption_config_test.go index fe083da1646..baa998aae38 100644 --- a/internal/service/xray/encryption_config_test.go +++ b/internal/service/xray/encryption_config_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package xray_test import ( @@ -70,7 +73,7 @@ func testAccCheckEncryptionConfigExists(ctx context.Context, n string, v *types. return fmt.Errorf("No XRay Encryption Config ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient(ctx) output, err := tfxray.FindEncryptionConfig(ctx, conn) diff --git a/internal/service/xray/exports_test.go b/internal/service/xray/exports_test.go index 6c8fa8febe3..a93ae442f2a 100644 --- a/internal/service/xray/exports_test.go +++ b/internal/service/xray/exports_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package xray // Exports for use in tests only. 
diff --git a/internal/service/xray/generate.go b/internal/service/xray/generate.go index 4d00c7602e8..b4b94e80f91 100644 --- a/internal/service/xray/generate.go +++ b/internal/service/xray/generate.go @@ -1,4 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../../generate/tags/main.go -AWSSDKVersion=2 -ListTags -ListTagsInIDElem=ResourceARN -ServiceTagsSlice -TagInIDElem=ResourceARN -UpdateTags +//go:generate go run ../../generate/servicepackage/main.go // ONLY generate directives and package declaration! Do not add anything else to this file. package xray diff --git a/internal/service/xray/group.go b/internal/service/xray/group.go index c17d0d469c8..3aadd90c9c4 100644 --- a/internal/service/xray/group.go +++ b/internal/service/xray/group.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package xray import ( @@ -75,13 +78,13 @@ func resourceGroup() *schema.Resource { func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) name := d.Get("group_name").(string) input := &xray.CreateGroupInput{ GroupName: aws.String(name), FilterExpression: aws.String(d.Get("filter_expression").(string)), - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("insights_configuration"); ok { @@ -101,7 +104,7 @@ func resourceGroupCreate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) group, err := findGroupByARN(ctx, conn, d.Id()) @@ -127,7 +130,7 @@ func resourceGroupRead(ctx context.Context, d *schema.ResourceData, meta interfa func resourceGroupUpdate(ctx context.Context, 
d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) if d.HasChangesExcept("tags", "tags_all") { input := &xray.UpdateGroupInput{GroupARN: aws.String(d.Id())} @@ -152,7 +155,7 @@ func resourceGroupUpdate(ctx context.Context, d *schema.ResourceData, meta inter func resourceGroupDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) log.Printf("[INFO] Deleting XRay Group: %s", d.Id()) _, err := conn.DeleteGroup(ctx, &xray.DeleteGroupInput{ diff --git a/internal/service/xray/group_test.go b/internal/service/xray/group_test.go index b4ea7b218bd..0b88da7df76 100644 --- a/internal/service/xray/group_test.go +++ b/internal/service/xray/group_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package xray_test import ( @@ -181,7 +184,7 @@ func testAccCheckGroupExists(ctx context.Context, n string, v *types.Group) reso return fmt.Errorf("No XRay Group ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient(ctx) output, err := tfxray.FindGroupByARN(ctx, conn, rs.Primary.ID) @@ -202,7 +205,7 @@ func testAccCheckGroupDestroy(ctx context.Context) resource.TestCheckFunc { continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient(ctx) _, err := tfxray.FindGroupByARN(ctx, conn, rs.Primary.ID) diff --git a/internal/service/xray/sampling_rule.go b/internal/service/xray/sampling_rule.go index b636470cf9f..01885f82c21 100644 --- a/internal/service/xray/sampling_rule.go +++ b/internal/service/xray/sampling_rule.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package xray import ( @@ -112,7 +115,7 @@ func resourceSamplingRule() *schema.Resource { func resourceSamplingRuleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) name := d.Get("rule_name").(string) samplingRule := &types.SamplingRule{ @@ -135,7 +138,7 @@ func resourceSamplingRuleCreate(ctx context.Context, d *schema.ResourceData, met input := &xray.CreateSamplingRuleInput{ SamplingRule: samplingRule, - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } output, err := conn.CreateSamplingRule(ctx, input) @@ -151,7 +154,7 @@ func resourceSamplingRuleCreate(ctx context.Context, d *schema.ResourceData, met func resourceSamplingRuleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) samplingRule, err := findSamplingRuleByName(ctx, conn, d.Id()) @@ -184,7 +187,7 @@ func resourceSamplingRuleRead(ctx context.Context, d *schema.ResourceData, meta func resourceSamplingRuleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) if d.HasChangesExcept("tags", "tags_all") { samplingRuleUpdate := &types.SamplingRuleUpdate{ @@ -220,7 +223,7 @@ func resourceSamplingRuleUpdate(ctx context.Context, d *schema.ResourceData, met func resourceSamplingRuleDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { var diags diag.Diagnostics - conn := meta.(*conns.AWSClient).XRayClient() + conn := meta.(*conns.AWSClient).XRayClient(ctx) log.Printf("[INFO] Deleting XRay Sampling Rule: %s", d.Id()) _, err := conn.DeleteSamplingRule(ctx, 
&xray.DeleteSamplingRuleInput{ diff --git a/internal/service/xray/sampling_rule_test.go b/internal/service/xray/sampling_rule_test.go index 1983b5183a5..19d01b9f4d9 100644 --- a/internal/service/xray/sampling_rule_test.go +++ b/internal/service/xray/sampling_rule_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package xray_test import ( @@ -197,7 +200,7 @@ func testAccCheckSamplingRuleExists(ctx context.Context, n string, v *types.Samp return fmt.Errorf("No XRay Sampling Rule ID is set") } - conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient(ctx) output, err := tfxray.FindSamplingRuleByName(ctx, conn, rs.Primary.ID) @@ -218,7 +221,7 @@ func testAccCheckSamplingRuleDestroy(ctx context.Context) resource.TestCheckFunc continue } - conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient(ctx) _, err := tfxray.FindSamplingRuleByName(ctx, conn, rs.Primary.ID) @@ -238,7 +241,7 @@ func testAccCheckSamplingRuleDestroy(ctx context.Context) resource.TestCheckFunc } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient() + conn := acctest.Provider.Meta().(*conns.AWSClient).XRayClient(ctx) input := &xray.GetSamplingRulesInput{} diff --git a/internal/service/xray/service_package_gen.go b/internal/service/xray/service_package_gen.go index 9a8a81dce9d..def73f6ae7d 100644 --- a/internal/service/xray/service_package_gen.go +++ b/internal/service/xray/service_package_gen.go @@ -5,6 +5,9 @@ package xray import ( "context" + aws_sdkv2 "github.com/aws/aws-sdk-go-v2/aws" + xray_sdkv2 "github.com/aws/aws-sdk-go-v2/service/xray" + "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -52,4 +55,17 @@ func (p 
*servicePackage) ServicePackageName() string { return names.XRay } -var ServicePackage = &servicePackage{} +// NewClient returns a new AWS SDK for Go v2 client for this service package's AWS API. +func (p *servicePackage) NewClient(ctx context.Context, config map[string]any) (*xray_sdkv2.Client, error) { + cfg := *(config["aws_sdkv2_config"].(*aws_sdkv2.Config)) + + return xray_sdkv2.NewFromConfig(cfg, func(o *xray_sdkv2.Options) { + if endpoint := config["endpoint"].(string); endpoint != "" { + o.EndpointResolver = xray_sdkv2.EndpointResolverFromURL(endpoint) + } + }), nil +} + +func ServicePackage(ctx context.Context) conns.ServicePackage { + return &servicePackage{} +} diff --git a/internal/service/xray/tags_gen.go b/internal/service/xray/tags_gen.go index 3c65677ad10..6353fb5aa31 100644 --- a/internal/service/xray/tags_gen.go +++ b/internal/service/xray/tags_gen.go @@ -14,10 +14,10 @@ import ( "github.com/hashicorp/terraform-provider-aws/names" ) -// ListTags lists xray service tags. +// listTags lists xray service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func ListTags(ctx context.Context, conn *xray.Client, identifier string) (tftags.KeyValueTags, error) { +func listTags(ctx context.Context, conn *xray.Client, identifier string) (tftags.KeyValueTags, error) { input := &xray.ListTagsForResourceInput{ ResourceARN: aws.String(identifier), } @@ -34,7 +34,7 @@ func ListTags(ctx context.Context, conn *xray.Client, identifier string) (tftags // ListTags lists xray service tags and set them in Context. // It is called from outside this package. 
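The generated `NewClient` above wires an optional endpoint override through the v2 SDK's functional-options constructor. A self-contained sketch of that pattern with stand-in types (`Options`, `Client`, and the default endpoint string here are illustrative, not the real `xray` package):

```go
package main

import "fmt"

// Options stands in for xray.Options; the real struct has many more fields.
type Options struct {
	Endpoint string
}

// Client stands in for xray.Client.
type Client struct{ opts Options }

// NewFromConfig mimics the v2 constructor shape: it builds default options,
// then applies each functional option before returning the client.
func NewFromConfig(optFns ...func(*Options)) *Client {
	opts := Options{Endpoint: "https://xray.us-east-1.amazonaws.com"}
	for _, fn := range optFns {
		fn(&opts)
	}
	return &Client{opts: opts}
}

func main() {
	endpoint := "http://localhost:4566" // hypothetical override, e.g. a local emulator
	c := NewFromConfig(func(o *Options) {
		if endpoint != "" {
			o.Endpoint = endpoint
		}
	})
	fmt.Println(c.opts.Endpoint) // http://localhost:4566
}
```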
func (p *servicePackage) ListTags(ctx context.Context, meta any, identifier string) error { - tags, err := ListTags(ctx, meta.(*conns.AWSClient).XRayClient(), identifier) + tags, err := listTags(ctx, meta.(*conns.AWSClient).XRayClient(ctx), identifier) if err != nil { return err @@ -76,9 +76,9 @@ func KeyValueTags(ctx context.Context, tags []awstypes.Tag) tftags.KeyValueTags return tftags.New(ctx, m) } -// GetTagsIn returns xray service tags from Context. +// getTagsIn returns xray service tags from Context. // nil is returned if there are no input tags. -func GetTagsIn(ctx context.Context) []awstypes.Tag { +func getTagsIn(ctx context.Context) []awstypes.Tag { if inContext, ok := tftags.FromContext(ctx); ok { if tags := Tags(inContext.TagsIn.UnwrapOrDefault()); len(tags) > 0 { return tags @@ -88,17 +88,17 @@ func GetTagsIn(ctx context.Context) []awstypes.Tag { return nil } -// SetTagsOut sets xray service tags in Context. -func SetTagsOut(ctx context.Context, tags []awstypes.Tag) { +// setTagsOut sets xray service tags in Context. +func setTagsOut(ctx context.Context, tags []awstypes.Tag) { if inContext, ok := tftags.FromContext(ctx); ok { inContext.TagsOut = types.Some(KeyValueTags(ctx, tags)) } } -// UpdateTags updates xray service tags. +// updateTags updates xray service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. -func UpdateTags(ctx context.Context, conn *xray.Client, identifier string, oldTagsMap, newTagsMap any) error { +func updateTags(ctx context.Context, conn *xray.Client, identifier string, oldTagsMap, newTagsMap any) error { oldTags := tftags.New(ctx, oldTagsMap) newTags := tftags.New(ctx, newTagsMap) @@ -138,5 +138,5 @@ func UpdateTags(ctx context.Context, conn *xray.Client, identifier string, oldTa // UpdateTags updates xray service tags. // It is called from outside this package. 
func (p *servicePackage) UpdateTags(ctx context.Context, meta any, identifier string, oldTags, newTags any) error { - return UpdateTags(ctx, meta.(*conns.AWSClient).XRayClient(), identifier, oldTags, newTags) + return updateTags(ctx, meta.(*conns.AWSClient).XRayClient(ctx), identifier, oldTags, newTags) } diff --git a/internal/slices/filters.go b/internal/slices/filters.go index bdb62aedea7..3ca377bde10 100644 --- a/internal/slices/filters.go +++ b/internal/slices/filters.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package slices func FilterEquals[T comparable](v T) FilterFunc[T] { diff --git a/internal/slices/slices.go b/internal/slices/slices.go index 8d4089d7109..f21ffcd4164 100644 --- a/internal/slices/slices.go +++ b/internal/slices/slices.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package slices import "golang.org/x/exp/slices" diff --git a/internal/slices/slices_test.go b/internal/slices/slices_test.go index 1d34fb3dad6..cd08f2c8d76 100644 --- a/internal/slices/slices_test.go +++ b/internal/slices/slices_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package slices import ( diff --git a/internal/sweep/context.go b/internal/sweep/context.go index e636b0c5118..300c140abb7 100644 --- a/internal/sweep/context.go +++ b/internal/sweep/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
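The `tags_gen.go` hunks unexport the tag helpers (`ListTags`→`listTags`, `GetTagsIn`→`getTagsIn`, `UpdateTags`→`updateTags`) without changing behavior. The core of such an `updateTags` helper is a diff of old and new tag maps; a hypothetical condensed sketch (`diffTags` is illustrative, not the generated code, which goes on to call the service's tag/untag APIs):

```go
package main

import "fmt"

// diffTags sketches the core of a generated updateTags helper: compare old
// and new tag maps and report keys to untag and key/values to upsert.
func diffTags(oldTags, newTags map[string]string) (removeKeys []string, upsert map[string]string) {
	upsert = make(map[string]string)
	for k := range oldTags {
		if _, ok := newTags[k]; !ok {
			removeKeys = append(removeKeys, k) // present before, gone now
		}
	}
	for k, v := range newTags {
		if oldTags[k] != v {
			upsert[k] = v // new key or changed value
		}
	}
	return removeKeys, upsert
}

func main() {
	removeKeys, upsert := diffTags(
		map[string]string{"Env": "dev", "Team": "infra"},
		map[string]string{"Env": "prod"},
	)
	fmt.Println(removeKeys, upsert) // [Team] map[Env:prod]
}
```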
+// SPDX-License-Identifier: MPL-2.0 + package sweep import "context" diff --git a/internal/sweep/framework.go b/internal/sweep/framework.go deleted file mode 100644 index f68b30f43e4..00000000000 --- a/internal/sweep/framework.go +++ /dev/null @@ -1,115 +0,0 @@ -package sweep - -import ( - "context" - "log" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-framework/path" - fwresource "github.com/hashicorp/terraform-plugin-framework/resource" - "github.com/hashicorp/terraform-plugin-framework/tfsdk" - "github.com/hashicorp/terraform-plugin-go/tftypes" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" - "github.com/hashicorp/terraform-provider-aws/internal/tfresource" -) - -// Terraform Plugin Framework variants of sweeper helpers. - -type FrameworkSupplementalAttribute struct { - Path string - Value any -} - -type FrameworkSupplementalAttributes []FrameworkSupplementalAttribute - -type SweepFrameworkResource struct { - factory func(context.Context) (fwresource.ResourceWithConfigure, error) - id string - meta interface{} - - // supplementalAttributes stores additional attributes to set in state. - // - // This can be used in situations where the Delete method requires multiple attributes - // to destroy the underlying resource. 
- supplementalAttributes []FrameworkSupplementalAttribute -} - -func NewFrameworkSupplementalAttributes() FrameworkSupplementalAttributes { - return FrameworkSupplementalAttributes{} -} - -func (f *FrameworkSupplementalAttributes) Add(path string, value any) { - item := FrameworkSupplementalAttribute{ - Path: path, - Value: value, - } - - *f = append(*f, item) -} - -func NewSweepFrameworkResource(factory func(context.Context) (fwresource.ResourceWithConfigure, error), id string, meta interface{}, supplementalAttributes ...FrameworkSupplementalAttribute) *SweepFrameworkResource { - return &SweepFrameworkResource{ - factory: factory, - id: id, - meta: meta, - supplementalAttributes: supplementalAttributes, - } -} - -func (sr *SweepFrameworkResource) Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error { - err := tfresource.Retry(ctx, timeout, func() *retry.RetryError { - err := deleteFrameworkResource(ctx, sr.factory, sr.id, sr.meta, sr.supplementalAttributes) - - if err != nil { - if strings.Contains(err.Error(), "Throttling") { - log.Printf("[INFO] While sweeping resource (%s), encountered throttling error (%s). Retrying...", sr.id, err) - return retry.RetryableError(err) - } - - return retry.NonRetryableError(err) - } - - return nil - }, optFns...) 
- - if tfresource.TimedOut(err) { - err = deleteFrameworkResource(ctx, sr.factory, sr.id, sr.meta, sr.supplementalAttributes) - } - - return err -} - -func deleteFrameworkResource(ctx context.Context, factory func(context.Context) (fwresource.ResourceWithConfigure, error), id string, meta interface{}, supplementalAttributes []FrameworkSupplementalAttribute) error { - resource, err := factory(ctx) - - if err != nil { - return err - } - - resource.Configure(ctx, fwresource.ConfigureRequest{ProviderData: meta}, &fwresource.ConfigureResponse{}) - - schemaResp := fwresource.SchemaResponse{} - resource.Schema(ctx, fwresource.SchemaRequest{}, &schemaResp) - - // Simple Terraform State that contains just the resource ID. - state := tfsdk.State{ - Raw: tftypes.NewValue(schemaResp.Schema.Type().TerraformType(ctx), nil), - Schema: schemaResp.Schema, - } - state.SetAttribute(ctx, path.Root("id"), id) - - // Set supplemental attibutes, if provided - for _, attr := range supplementalAttributes { - d := state.SetAttribute(ctx, path.Root(attr.Path), attr.Value) - if d.HasError() { - return fwdiag.DiagnosticsError(d) - } - } - - response := fwresource.DeleteResponse{} - resource.Delete(ctx, fwresource.DeleteRequest{State: state}, &response) - - return fwdiag.DiagnosticsError(response.Diagnostics) -} diff --git a/internal/sweep/framework/resource.go b/internal/sweep/framework/resource.go new file mode 100644 index 00000000000..85706495a52 --- /dev/null +++ b/internal/sweep/framework/resource.go @@ -0,0 +1,114 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package framework + +import ( + "context" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-framework/path" + fwresource "github.com/hashicorp/terraform-plugin-framework/resource" + "github.com/hashicorp/terraform-plugin-framework/tfsdk" + "github.com/hashicorp/terraform-plugin-go/tftypes" + "github.com/hashicorp/terraform-plugin-log/tflog" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/fwdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +type attribute struct { + path string + value any +} + +func NewAttribute(path string, value any) attribute { + return attribute{ + path: path, + value: value, + } +} + +type sweepResource struct { + factory func(context.Context) (fwresource.ResourceWithConfigure, error) + meta *conns.AWSClient + attributes []attribute +} + +func NewSweepResource(factory func(context.Context) (fwresource.ResourceWithConfigure, error), meta *conns.AWSClient, attributes ...attribute) *sweepResource { + return &sweepResource{ + factory: factory, + meta: meta, + attributes: attributes, + } +} + +func (sr *sweepResource) Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error { + resource, err := sr.factory(ctx) + + if err != nil { + return err + } + + metadata := resourceMetadata(ctx, resource) + ctx = tflog.SetField(ctx, "resource_type", metadata.TypeName) + + resource.Configure(ctx, fwresource.ConfigureRequest{ProviderData: sr.meta}, &fwresource.ConfigureResponse{}) + + schemaResp := fwresource.SchemaResponse{} + resource.Schema(ctx, fwresource.SchemaRequest{}, &schemaResp) + + state := tfsdk.State{ + Raw: tftypes.NewValue(schemaResp.Schema.Type().TerraformType(ctx), nil), + Schema: schemaResp.Schema, + } + + for _, attr := range sr.attributes { + d := state.SetAttribute(ctx, path.Root(attr.path), 
attr.value) + if d.HasError() { + return fwdiag.DiagnosticsError(d) + } + ctx = tflog.SetField(ctx, attr.path, attr.value) + } + + tflog.Info(ctx, "Sweeping resource") + + err = tfresource.Retry(ctx, timeout, func() *retry.RetryError { + err := deleteResource(ctx, state, resource) + + if err != nil { + if strings.Contains(err.Error(), "Throttling") { + tflog.Info(ctx, "Retrying throttling error", map[string]any{ + "err": err.Error(), + }) + return retry.RetryableError(err) + } + + return retry.NonRetryableError(err) + } + + return nil + }, optFns...) + + if tfresource.TimedOut(err) { + err = deleteResource(ctx, state, resource) + } + + return err +} + +func deleteResource(ctx context.Context, state tfsdk.State, resource fwresource.Resource) error { + var response fwresource.DeleteResponse + resource.Delete(ctx, fwresource.DeleteRequest{State: state}, &response) + + return fwdiag.DiagnosticsError(response.Diagnostics) +} + +func resourceMetadata(ctx context.Context, resource fwresource.Resource) fwresource.MetadataResponse { + var response fwresource.MetadataResponse + resource.Metadata(ctx, fwresource.MetadataRequest{}, &response) + + return response +} diff --git a/internal/sweep/generate_test.go b/internal/sweep/generate_test.go new file mode 100644 index 00000000000..a33bfbcf00b --- /dev/null +++ b/internal/sweep/generate_test.go @@ -0,0 +1,8 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:generate go run ../generate/servicepackages/main.go -- service_packages_gen_test.go +//go:generate go run ../generate/sweepimp/main.go +// ONLY generate directives and package declaration! Do not add anything else to this file. + +package sweep_test diff --git a/internal/sweep/sdk/resource.go b/internal/sweep/sdk/resource.go new file mode 100644 index 00000000000..a51af1d010f --- /dev/null +++ b/internal/sweep/sdk/resource.go @@ -0,0 +1,96 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package sdk + +import ( + "context" + "strings" + "time" + + "github.com/hashicorp/terraform-plugin-log/tflog" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +type sweepResource struct { + d *schema.ResourceData + meta *conns.AWSClient + resource *schema.Resource +} + +func NewSweepResource(resource *schema.Resource, d *schema.ResourceData, meta *conns.AWSClient) *sweepResource { + return &sweepResource{ + d: d, + meta: meta, + resource: resource, + } +} + +func (sr *sweepResource) Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error { + ctx = tflog.SetField(ctx, "id", sr.d.Id()) + + err := tfresource.Retry(ctx, timeout, func() *retry.RetryError { + err := deleteResource(ctx, sr.resource, sr.d, sr.meta) + + if err != nil { + if strings.Contains(err.Error(), "Throttling") { + tflog.Info(ctx, "Retrying throttling error", map[string]any{ + "err": err.Error(), + }) + return retry.RetryableError(err) + } + + return retry.NonRetryableError(err) + } + + return nil + }, optFns...) 
+ + if tfresource.TimedOut(err) { + err = deleteResource(ctx, sr.resource, sr.d, sr.meta) + } + + return err +} + +func deleteResource(ctx context.Context, resource *schema.Resource, d *schema.ResourceData, meta *conns.AWSClient) error { + if resource.DeleteContext != nil || resource.DeleteWithoutTimeout != nil { + var diags diag.Diagnostics + + if resource.DeleteContext != nil { + diags = resource.DeleteContext(ctx, d, meta) + } else { + diags = resource.DeleteWithoutTimeout(ctx, d, meta) + } + + return sdkdiag.DiagnosticsError(diags) + } + + return resource.Delete(d, meta) +} + +// Deprecated: Create a list of Sweepables and pass them to SweepOrchestrator instead +func DeleteResource(ctx context.Context, resource *schema.Resource, d *schema.ResourceData, meta *conns.AWSClient) error { + return deleteResource(ctx, resource, d, meta) +} + +func ReadResource(ctx context.Context, resource *schema.Resource, d *schema.ResourceData, meta *conns.AWSClient) error { + if resource.ReadContext != nil || resource.ReadWithoutTimeout != nil { + var diags diag.Diagnostics + + if resource.ReadContext != nil { + diags = resource.ReadContext(ctx, d, meta) + } else { + diags = resource.ReadWithoutTimeout(ctx, d, meta) + } + + return sdkdiag.DiagnosticsError(diags) + } + + return resource.Read(d, meta) +} diff --git a/internal/sweep/sdk_resource.go b/internal/sweep/sdk_resource.go new file mode 100644 index 00000000000..1bf42c0099e --- /dev/null +++ b/internal/sweep/sdk_resource.go @@ -0,0 +1,10 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package sweep + +import ( + "github.com/hashicorp/terraform-provider-aws/internal/sweep/sdk" +) + +var NewSweepResource = sdk.NewSweepResource diff --git a/internal/sweep/service_packages_gen_test.go b/internal/sweep/service_packages_gen_test.go new file mode 100644 index 00000000000..b4307fea096 --- /dev/null +++ b/internal/sweep/service_packages_gen_test.go @@ -0,0 +1,423 @@ +// Code generated by internal/generate/servicepackages/main.go; DO NOT EDIT. + +package sweep_test + +import ( + "context" + + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/service/accessanalyzer" + "github.com/hashicorp/terraform-provider-aws/internal/service/account" + "github.com/hashicorp/terraform-provider-aws/internal/service/acm" + "github.com/hashicorp/terraform-provider-aws/internal/service/acmpca" + "github.com/hashicorp/terraform-provider-aws/internal/service/amp" + "github.com/hashicorp/terraform-provider-aws/internal/service/amplify" + "github.com/hashicorp/terraform-provider-aws/internal/service/apigateway" + "github.com/hashicorp/terraform-provider-aws/internal/service/apigatewayv2" + "github.com/hashicorp/terraform-provider-aws/internal/service/appautoscaling" + "github.com/hashicorp/terraform-provider-aws/internal/service/appconfig" + "github.com/hashicorp/terraform-provider-aws/internal/service/appflow" + "github.com/hashicorp/terraform-provider-aws/internal/service/appintegrations" + "github.com/hashicorp/terraform-provider-aws/internal/service/applicationinsights" + "github.com/hashicorp/terraform-provider-aws/internal/service/appmesh" + "github.com/hashicorp/terraform-provider-aws/internal/service/apprunner" + "github.com/hashicorp/terraform-provider-aws/internal/service/appstream" + "github.com/hashicorp/terraform-provider-aws/internal/service/appsync" + "github.com/hashicorp/terraform-provider-aws/internal/service/athena" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/auditmanager" + "github.com/hashicorp/terraform-provider-aws/internal/service/autoscaling" + "github.com/hashicorp/terraform-provider-aws/internal/service/autoscalingplans" + "github.com/hashicorp/terraform-provider-aws/internal/service/backup" + "github.com/hashicorp/terraform-provider-aws/internal/service/batch" + "github.com/hashicorp/terraform-provider-aws/internal/service/budgets" + "github.com/hashicorp/terraform-provider-aws/internal/service/ce" + "github.com/hashicorp/terraform-provider-aws/internal/service/chime" + "github.com/hashicorp/terraform-provider-aws/internal/service/chimesdkmediapipelines" + "github.com/hashicorp/terraform-provider-aws/internal/service/chimesdkvoice" + "github.com/hashicorp/terraform-provider-aws/internal/service/cleanrooms" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloud9" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudcontrol" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudformation" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudfront" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudhsmv2" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudsearch" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudtrail" + "github.com/hashicorp/terraform-provider-aws/internal/service/cloudwatch" + "github.com/hashicorp/terraform-provider-aws/internal/service/codeartifact" + "github.com/hashicorp/terraform-provider-aws/internal/service/codebuild" + "github.com/hashicorp/terraform-provider-aws/internal/service/codecommit" + "github.com/hashicorp/terraform-provider-aws/internal/service/codegurureviewer" + "github.com/hashicorp/terraform-provider-aws/internal/service/codepipeline" + "github.com/hashicorp/terraform-provider-aws/internal/service/codestarconnections" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/codestarnotifications" + "github.com/hashicorp/terraform-provider-aws/internal/service/cognitoidentity" + "github.com/hashicorp/terraform-provider-aws/internal/service/cognitoidp" + "github.com/hashicorp/terraform-provider-aws/internal/service/comprehend" + "github.com/hashicorp/terraform-provider-aws/internal/service/computeoptimizer" + "github.com/hashicorp/terraform-provider-aws/internal/service/configservice" + "github.com/hashicorp/terraform-provider-aws/internal/service/connect" + "github.com/hashicorp/terraform-provider-aws/internal/service/controltower" + "github.com/hashicorp/terraform-provider-aws/internal/service/cur" + "github.com/hashicorp/terraform-provider-aws/internal/service/dataexchange" + "github.com/hashicorp/terraform-provider-aws/internal/service/datapipeline" + "github.com/hashicorp/terraform-provider-aws/internal/service/datasync" + "github.com/hashicorp/terraform-provider-aws/internal/service/dax" + "github.com/hashicorp/terraform-provider-aws/internal/service/deploy" + "github.com/hashicorp/terraform-provider-aws/internal/service/detective" + "github.com/hashicorp/terraform-provider-aws/internal/service/devicefarm" + "github.com/hashicorp/terraform-provider-aws/internal/service/directconnect" + "github.com/hashicorp/terraform-provider-aws/internal/service/dlm" + "github.com/hashicorp/terraform-provider-aws/internal/service/dms" + "github.com/hashicorp/terraform-provider-aws/internal/service/docdb" + "github.com/hashicorp/terraform-provider-aws/internal/service/docdbelastic" + "github.com/hashicorp/terraform-provider-aws/internal/service/ds" + "github.com/hashicorp/terraform-provider-aws/internal/service/dynamodb" + "github.com/hashicorp/terraform-provider-aws/internal/service/ec2" + "github.com/hashicorp/terraform-provider-aws/internal/service/ecr" + "github.com/hashicorp/terraform-provider-aws/internal/service/ecrpublic" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/ecs" + "github.com/hashicorp/terraform-provider-aws/internal/service/efs" + "github.com/hashicorp/terraform-provider-aws/internal/service/eks" + "github.com/hashicorp/terraform-provider-aws/internal/service/elasticache" + "github.com/hashicorp/terraform-provider-aws/internal/service/elasticbeanstalk" + "github.com/hashicorp/terraform-provider-aws/internal/service/elasticsearch" + "github.com/hashicorp/terraform-provider-aws/internal/service/elastictranscoder" + "github.com/hashicorp/terraform-provider-aws/internal/service/elb" + "github.com/hashicorp/terraform-provider-aws/internal/service/elbv2" + "github.com/hashicorp/terraform-provider-aws/internal/service/emr" + "github.com/hashicorp/terraform-provider-aws/internal/service/emrcontainers" + "github.com/hashicorp/terraform-provider-aws/internal/service/emrserverless" + "github.com/hashicorp/terraform-provider-aws/internal/service/events" + "github.com/hashicorp/terraform-provider-aws/internal/service/evidently" + "github.com/hashicorp/terraform-provider-aws/internal/service/finspace" + "github.com/hashicorp/terraform-provider-aws/internal/service/firehose" + "github.com/hashicorp/terraform-provider-aws/internal/service/fis" + "github.com/hashicorp/terraform-provider-aws/internal/service/fms" + "github.com/hashicorp/terraform-provider-aws/internal/service/fsx" + "github.com/hashicorp/terraform-provider-aws/internal/service/gamelift" + "github.com/hashicorp/terraform-provider-aws/internal/service/glacier" + "github.com/hashicorp/terraform-provider-aws/internal/service/globalaccelerator" + "github.com/hashicorp/terraform-provider-aws/internal/service/glue" + "github.com/hashicorp/terraform-provider-aws/internal/service/grafana" + "github.com/hashicorp/terraform-provider-aws/internal/service/greengrass" + "github.com/hashicorp/terraform-provider-aws/internal/service/guardduty" + "github.com/hashicorp/terraform-provider-aws/internal/service/healthlake" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/iam" + "github.com/hashicorp/terraform-provider-aws/internal/service/identitystore" + "github.com/hashicorp/terraform-provider-aws/internal/service/imagebuilder" + "github.com/hashicorp/terraform-provider-aws/internal/service/inspector" + "github.com/hashicorp/terraform-provider-aws/internal/service/inspector2" + "github.com/hashicorp/terraform-provider-aws/internal/service/internetmonitor" + "github.com/hashicorp/terraform-provider-aws/internal/service/iot" + "github.com/hashicorp/terraform-provider-aws/internal/service/iotanalytics" + "github.com/hashicorp/terraform-provider-aws/internal/service/iotevents" + "github.com/hashicorp/terraform-provider-aws/internal/service/ivs" + "github.com/hashicorp/terraform-provider-aws/internal/service/ivschat" + "github.com/hashicorp/terraform-provider-aws/internal/service/kafka" + "github.com/hashicorp/terraform-provider-aws/internal/service/kafkaconnect" + "github.com/hashicorp/terraform-provider-aws/internal/service/kendra" + "github.com/hashicorp/terraform-provider-aws/internal/service/keyspaces" + "github.com/hashicorp/terraform-provider-aws/internal/service/kinesis" + "github.com/hashicorp/terraform-provider-aws/internal/service/kinesisanalytics" + "github.com/hashicorp/terraform-provider-aws/internal/service/kinesisanalyticsv2" + "github.com/hashicorp/terraform-provider-aws/internal/service/kinesisvideo" + "github.com/hashicorp/terraform-provider-aws/internal/service/kms" + "github.com/hashicorp/terraform-provider-aws/internal/service/lakeformation" + "github.com/hashicorp/terraform-provider-aws/internal/service/lambda" + "github.com/hashicorp/terraform-provider-aws/internal/service/lexmodels" + "github.com/hashicorp/terraform-provider-aws/internal/service/licensemanager" + "github.com/hashicorp/terraform-provider-aws/internal/service/lightsail" + "github.com/hashicorp/terraform-provider-aws/internal/service/location" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/logs" + "github.com/hashicorp/terraform-provider-aws/internal/service/macie2" + "github.com/hashicorp/terraform-provider-aws/internal/service/mediaconnect" + "github.com/hashicorp/terraform-provider-aws/internal/service/mediaconvert" + "github.com/hashicorp/terraform-provider-aws/internal/service/medialive" + "github.com/hashicorp/terraform-provider-aws/internal/service/mediapackage" + "github.com/hashicorp/terraform-provider-aws/internal/service/mediastore" + "github.com/hashicorp/terraform-provider-aws/internal/service/memorydb" + "github.com/hashicorp/terraform-provider-aws/internal/service/meta" + "github.com/hashicorp/terraform-provider-aws/internal/service/mq" + "github.com/hashicorp/terraform-provider-aws/internal/service/mwaa" + "github.com/hashicorp/terraform-provider-aws/internal/service/neptune" + "github.com/hashicorp/terraform-provider-aws/internal/service/networkfirewall" + "github.com/hashicorp/terraform-provider-aws/internal/service/networkmanager" + "github.com/hashicorp/terraform-provider-aws/internal/service/oam" + "github.com/hashicorp/terraform-provider-aws/internal/service/opensearch" + "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" + "github.com/hashicorp/terraform-provider-aws/internal/service/opsworks" + "github.com/hashicorp/terraform-provider-aws/internal/service/organizations" + "github.com/hashicorp/terraform-provider-aws/internal/service/outposts" + "github.com/hashicorp/terraform-provider-aws/internal/service/pinpoint" + "github.com/hashicorp/terraform-provider-aws/internal/service/pipes" + "github.com/hashicorp/terraform-provider-aws/internal/service/pricing" + "github.com/hashicorp/terraform-provider-aws/internal/service/qldb" + "github.com/hashicorp/terraform-provider-aws/internal/service/quicksight" + "github.com/hashicorp/terraform-provider-aws/internal/service/ram" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/rbin" + "github.com/hashicorp/terraform-provider-aws/internal/service/rds" + "github.com/hashicorp/terraform-provider-aws/internal/service/redshift" + "github.com/hashicorp/terraform-provider-aws/internal/service/redshiftdata" + "github.com/hashicorp/terraform-provider-aws/internal/service/redshiftserverless" + "github.com/hashicorp/terraform-provider-aws/internal/service/resourceexplorer2" + "github.com/hashicorp/terraform-provider-aws/internal/service/resourcegroups" + "github.com/hashicorp/terraform-provider-aws/internal/service/resourcegroupstaggingapi" + "github.com/hashicorp/terraform-provider-aws/internal/service/rolesanywhere" + "github.com/hashicorp/terraform-provider-aws/internal/service/route53" + "github.com/hashicorp/terraform-provider-aws/internal/service/route53domains" + "github.com/hashicorp/terraform-provider-aws/internal/service/route53recoverycontrolconfig" + "github.com/hashicorp/terraform-provider-aws/internal/service/route53recoveryreadiness" + "github.com/hashicorp/terraform-provider-aws/internal/service/route53resolver" + "github.com/hashicorp/terraform-provider-aws/internal/service/rum" + "github.com/hashicorp/terraform-provider-aws/internal/service/s3" + "github.com/hashicorp/terraform-provider-aws/internal/service/s3control" + "github.com/hashicorp/terraform-provider-aws/internal/service/s3outposts" + "github.com/hashicorp/terraform-provider-aws/internal/service/sagemaker" + "github.com/hashicorp/terraform-provider-aws/internal/service/scheduler" + "github.com/hashicorp/terraform-provider-aws/internal/service/schemas" + "github.com/hashicorp/terraform-provider-aws/internal/service/secretsmanager" + "github.com/hashicorp/terraform-provider-aws/internal/service/securityhub" + "github.com/hashicorp/terraform-provider-aws/internal/service/securitylake" + "github.com/hashicorp/terraform-provider-aws/internal/service/serverlessrepo" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/servicecatalog" + "github.com/hashicorp/terraform-provider-aws/internal/service/servicediscovery" + "github.com/hashicorp/terraform-provider-aws/internal/service/servicequotas" + "github.com/hashicorp/terraform-provider-aws/internal/service/ses" + "github.com/hashicorp/terraform-provider-aws/internal/service/sesv2" + "github.com/hashicorp/terraform-provider-aws/internal/service/sfn" + "github.com/hashicorp/terraform-provider-aws/internal/service/shield" + "github.com/hashicorp/terraform-provider-aws/internal/service/signer" + "github.com/hashicorp/terraform-provider-aws/internal/service/simpledb" + "github.com/hashicorp/terraform-provider-aws/internal/service/sns" + "github.com/hashicorp/terraform-provider-aws/internal/service/sqs" + "github.com/hashicorp/terraform-provider-aws/internal/service/ssm" + "github.com/hashicorp/terraform-provider-aws/internal/service/ssmcontacts" + "github.com/hashicorp/terraform-provider-aws/internal/service/ssmincidents" + "github.com/hashicorp/terraform-provider-aws/internal/service/ssoadmin" + "github.com/hashicorp/terraform-provider-aws/internal/service/storagegateway" + "github.com/hashicorp/terraform-provider-aws/internal/service/sts" + "github.com/hashicorp/terraform-provider-aws/internal/service/swf" + "github.com/hashicorp/terraform-provider-aws/internal/service/synthetics" + "github.com/hashicorp/terraform-provider-aws/internal/service/timestreamwrite" + "github.com/hashicorp/terraform-provider-aws/internal/service/transcribe" + "github.com/hashicorp/terraform-provider-aws/internal/service/transfer" + "github.com/hashicorp/terraform-provider-aws/internal/service/verifiedpermissions" + "github.com/hashicorp/terraform-provider-aws/internal/service/vpclattice" + "github.com/hashicorp/terraform-provider-aws/internal/service/waf" + "github.com/hashicorp/terraform-provider-aws/internal/service/wafregional" + 
"github.com/hashicorp/terraform-provider-aws/internal/service/wafv2" + "github.com/hashicorp/terraform-provider-aws/internal/service/worklink" + "github.com/hashicorp/terraform-provider-aws/internal/service/workspaces" + "github.com/hashicorp/terraform-provider-aws/internal/service/xray" + "golang.org/x/exp/slices" +) + +func servicePackages(ctx context.Context) []conns.ServicePackage { + v := []conns.ServicePackage{ + accessanalyzer.ServicePackage(ctx), + account.ServicePackage(ctx), + acm.ServicePackage(ctx), + acmpca.ServicePackage(ctx), + amp.ServicePackage(ctx), + amplify.ServicePackage(ctx), + apigateway.ServicePackage(ctx), + apigatewayv2.ServicePackage(ctx), + appautoscaling.ServicePackage(ctx), + appconfig.ServicePackage(ctx), + appflow.ServicePackage(ctx), + appintegrations.ServicePackage(ctx), + applicationinsights.ServicePackage(ctx), + appmesh.ServicePackage(ctx), + apprunner.ServicePackage(ctx), + appstream.ServicePackage(ctx), + appsync.ServicePackage(ctx), + athena.ServicePackage(ctx), + auditmanager.ServicePackage(ctx), + autoscaling.ServicePackage(ctx), + autoscalingplans.ServicePackage(ctx), + backup.ServicePackage(ctx), + batch.ServicePackage(ctx), + budgets.ServicePackage(ctx), + ce.ServicePackage(ctx), + chime.ServicePackage(ctx), + chimesdkmediapipelines.ServicePackage(ctx), + chimesdkvoice.ServicePackage(ctx), + cleanrooms.ServicePackage(ctx), + cloud9.ServicePackage(ctx), + cloudcontrol.ServicePackage(ctx), + cloudformation.ServicePackage(ctx), + cloudfront.ServicePackage(ctx), + cloudhsmv2.ServicePackage(ctx), + cloudsearch.ServicePackage(ctx), + cloudtrail.ServicePackage(ctx), + cloudwatch.ServicePackage(ctx), + codeartifact.ServicePackage(ctx), + codebuild.ServicePackage(ctx), + codecommit.ServicePackage(ctx), + codegurureviewer.ServicePackage(ctx), + codepipeline.ServicePackage(ctx), + codestarconnections.ServicePackage(ctx), + codestarnotifications.ServicePackage(ctx), + cognitoidentity.ServicePackage(ctx), + 
cognitoidp.ServicePackage(ctx), + comprehend.ServicePackage(ctx), + computeoptimizer.ServicePackage(ctx), + configservice.ServicePackage(ctx), + connect.ServicePackage(ctx), + controltower.ServicePackage(ctx), + cur.ServicePackage(ctx), + dataexchange.ServicePackage(ctx), + datapipeline.ServicePackage(ctx), + datasync.ServicePackage(ctx), + dax.ServicePackage(ctx), + deploy.ServicePackage(ctx), + detective.ServicePackage(ctx), + devicefarm.ServicePackage(ctx), + directconnect.ServicePackage(ctx), + dlm.ServicePackage(ctx), + dms.ServicePackage(ctx), + docdb.ServicePackage(ctx), + docdbelastic.ServicePackage(ctx), + ds.ServicePackage(ctx), + dynamodb.ServicePackage(ctx), + ec2.ServicePackage(ctx), + ecr.ServicePackage(ctx), + ecrpublic.ServicePackage(ctx), + ecs.ServicePackage(ctx), + efs.ServicePackage(ctx), + eks.ServicePackage(ctx), + elasticache.ServicePackage(ctx), + elasticbeanstalk.ServicePackage(ctx), + elasticsearch.ServicePackage(ctx), + elastictranscoder.ServicePackage(ctx), + elb.ServicePackage(ctx), + elbv2.ServicePackage(ctx), + emr.ServicePackage(ctx), + emrcontainers.ServicePackage(ctx), + emrserverless.ServicePackage(ctx), + events.ServicePackage(ctx), + evidently.ServicePackage(ctx), + finspace.ServicePackage(ctx), + firehose.ServicePackage(ctx), + fis.ServicePackage(ctx), + fms.ServicePackage(ctx), + fsx.ServicePackage(ctx), + gamelift.ServicePackage(ctx), + glacier.ServicePackage(ctx), + globalaccelerator.ServicePackage(ctx), + glue.ServicePackage(ctx), + grafana.ServicePackage(ctx), + greengrass.ServicePackage(ctx), + guardduty.ServicePackage(ctx), + healthlake.ServicePackage(ctx), + iam.ServicePackage(ctx), + identitystore.ServicePackage(ctx), + imagebuilder.ServicePackage(ctx), + inspector.ServicePackage(ctx), + inspector2.ServicePackage(ctx), + internetmonitor.ServicePackage(ctx), + iot.ServicePackage(ctx), + iotanalytics.ServicePackage(ctx), + iotevents.ServicePackage(ctx), + ivs.ServicePackage(ctx), + ivschat.ServicePackage(ctx), + 
kafka.ServicePackage(ctx), + kafkaconnect.ServicePackage(ctx), + kendra.ServicePackage(ctx), + keyspaces.ServicePackage(ctx), + kinesis.ServicePackage(ctx), + kinesisanalytics.ServicePackage(ctx), + kinesisanalyticsv2.ServicePackage(ctx), + kinesisvideo.ServicePackage(ctx), + kms.ServicePackage(ctx), + lakeformation.ServicePackage(ctx), + lambda.ServicePackage(ctx), + lexmodels.ServicePackage(ctx), + licensemanager.ServicePackage(ctx), + lightsail.ServicePackage(ctx), + location.ServicePackage(ctx), + logs.ServicePackage(ctx), + macie2.ServicePackage(ctx), + mediaconnect.ServicePackage(ctx), + mediaconvert.ServicePackage(ctx), + medialive.ServicePackage(ctx), + mediapackage.ServicePackage(ctx), + mediastore.ServicePackage(ctx), + memorydb.ServicePackage(ctx), + meta.ServicePackage(ctx), + mq.ServicePackage(ctx), + mwaa.ServicePackage(ctx), + neptune.ServicePackage(ctx), + networkfirewall.ServicePackage(ctx), + networkmanager.ServicePackage(ctx), + oam.ServicePackage(ctx), + opensearch.ServicePackage(ctx), + opensearchserverless.ServicePackage(ctx), + opsworks.ServicePackage(ctx), + organizations.ServicePackage(ctx), + outposts.ServicePackage(ctx), + pinpoint.ServicePackage(ctx), + pipes.ServicePackage(ctx), + pricing.ServicePackage(ctx), + qldb.ServicePackage(ctx), + quicksight.ServicePackage(ctx), + ram.ServicePackage(ctx), + rbin.ServicePackage(ctx), + rds.ServicePackage(ctx), + redshift.ServicePackage(ctx), + redshiftdata.ServicePackage(ctx), + redshiftserverless.ServicePackage(ctx), + resourceexplorer2.ServicePackage(ctx), + resourcegroups.ServicePackage(ctx), + resourcegroupstaggingapi.ServicePackage(ctx), + rolesanywhere.ServicePackage(ctx), + route53.ServicePackage(ctx), + route53domains.ServicePackage(ctx), + route53recoverycontrolconfig.ServicePackage(ctx), + route53recoveryreadiness.ServicePackage(ctx), + route53resolver.ServicePackage(ctx), + rum.ServicePackage(ctx), + s3.ServicePackage(ctx), + s3control.ServicePackage(ctx), + 
s3outposts.ServicePackage(ctx), + sagemaker.ServicePackage(ctx), + scheduler.ServicePackage(ctx), + schemas.ServicePackage(ctx), + secretsmanager.ServicePackage(ctx), + securityhub.ServicePackage(ctx), + securitylake.ServicePackage(ctx), + serverlessrepo.ServicePackage(ctx), + servicecatalog.ServicePackage(ctx), + servicediscovery.ServicePackage(ctx), + servicequotas.ServicePackage(ctx), + ses.ServicePackage(ctx), + sesv2.ServicePackage(ctx), + sfn.ServicePackage(ctx), + shield.ServicePackage(ctx), + signer.ServicePackage(ctx), + simpledb.ServicePackage(ctx), + sns.ServicePackage(ctx), + sqs.ServicePackage(ctx), + ssm.ServicePackage(ctx), + ssmcontacts.ServicePackage(ctx), + ssmincidents.ServicePackage(ctx), + ssoadmin.ServicePackage(ctx), + storagegateway.ServicePackage(ctx), + sts.ServicePackage(ctx), + swf.ServicePackage(ctx), + synthetics.ServicePackage(ctx), + timestreamwrite.ServicePackage(ctx), + transcribe.ServicePackage(ctx), + transfer.ServicePackage(ctx), + verifiedpermissions.ServicePackage(ctx), + vpclattice.ServicePackage(ctx), + waf.ServicePackage(ctx), + wafregional.ServicePackage(ctx), + wafv2.ServicePackage(ctx), + worklink.ServicePackage(ctx), + workspaces.ServicePackage(ctx), + xray.ServicePackage(ctx), + } + + return slices.Clone(v) +} diff --git a/internal/sweep/sweep.go b/internal/sweep/sweep.go index 4aed3372695..6535e41963a 100644 --- a/internal/sweep/sweep.go +++ b/internal/sweep/sweep.go @@ -1,25 +1,22 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package sweep import ( "context" "errors" "fmt" - "log" "net" "os" "strconv" - "strings" "time" "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" multierror "github.com/hashicorp/go-multierror" - "github.com/hashicorp/terraform-plugin-sdk/v2/diag" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/envvar" - "github.com/hashicorp/terraform-provider-aws/internal/errs/sdkdiag" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -31,18 +28,16 @@ const ( const defaultSweeperAssumeRoleDurationSeconds = 3600 -// SweeperClients is a shared cache of regional conns.AWSClient -// This prevents client re-initialization for every resource with no benefit. -var SweeperClients map[string]interface{} +// ServicePackages is set in TestMain in order to break an import cycle. +var ServicePackages []conns.ServicePackage -// SharedRegionalSweepClient returns a common conns.AWSClient setup needed for the sweeper -// functions for a given region -func SharedRegionalSweepClient(region string) (interface{}, error) { - return SharedRegionalSweepClientWithContext(Context(region), region) -} +// sweeperClients is a shared cache of regional conns.AWSClient +// This prevents client re-initialization for every resource with no benefit. +var sweeperClients map[string]*conns.AWSClient = make(map[string]*conns.AWSClient) -func SharedRegionalSweepClientWithContext(ctx context.Context, region string) (interface{}, error) { - if client, ok := SweeperClients[region]; ok { +// SharedRegionalSweepClient returns a common conns.AWSClient setup needed for the sweeper functions for a given Region. 
+func SharedRegionalSweepClient(ctx context.Context, region string) (*conns.AWSClient, error) { + if client, ok := sweeperClients[region]; ok { return client, nil } @@ -58,6 +53,14 @@ func SharedRegionalSweepClientWithContext(ctx context.Context, region string) (i } } + meta := new(conns.AWSClient) + servicePackageMap := make(map[string]conns.ServicePackage) + for _, sp := range ServicePackages { + servicePackageName := sp.ServicePackageName() + servicePackageMap[servicePackageName] = sp + } + meta.ServicePackages = servicePackageMap + conf := &conns.Config{ MaxRetries: 5, Region: region, @@ -86,13 +89,13 @@ func SharedRegionalSweepClientWithContext(ctx context.Context, region string) (i } // configures a default client for the region, using the above env vars - client, diags := conf.ConfigureProvider(ctx, &conns.AWSClient{}) + client, diags := conf.ConfigureProvider(ctx, meta) if diags.HasError() { return nil, fmt.Errorf("getting AWS client: %#v", diags) } - SweeperClients[region] = client + sweeperClients[region] = client return client, nil } @@ -101,48 +104,7 @@ type Sweepable interface { Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error } -type SweepResource struct { - d *schema.ResourceData - meta interface{} - resource *schema.Resource -} - -func NewSweepResource(resource *schema.Resource, d *schema.ResourceData, meta interface{}) *SweepResource { - return &SweepResource{ - d: d, - meta: meta, - resource: resource, - } -} - -func (sr *SweepResource) Delete(ctx context.Context, timeout time.Duration, optFns ...tfresource.OptionsFunc) error { - err := tfresource.Retry(ctx, timeout, func() *retry.RetryError { - err := DeleteResource(ctx, sr.resource, sr.d, sr.meta) - - if err != nil { - if strings.Contains(err.Error(), "Throttling") { - log.Printf("[INFO] While sweeping resource (%s), encountered throttling error (%s). 
Retrying...", sr.d.Id(), err) - return retry.RetryableError(err) - } - - return retry.NonRetryableError(err) - } - - return nil - }, optFns...) - - if tfresource.TimedOut(err) { - err = DeleteResource(ctx, sr.resource, sr.d, sr.meta) - } - - return err -} - -func SweepOrchestrator(sweepables []Sweepable) error { - return SweepOrchestratorWithContext(context.Background(), sweepables) -} - -func SweepOrchestratorWithContext(ctx context.Context, sweepables []Sweepable, optFns ...tfresource.OptionsFunc) error { +func SweepOrchestrator(ctx context.Context, sweepables []Sweepable, optFns ...tfresource.OptionsFunc) error { var g multierror.Group for _, sweepable := range sweepables { @@ -232,38 +194,6 @@ func SkipSweepError(err error) bool { return false } -func DeleteResource(ctx context.Context, resource *schema.Resource, d *schema.ResourceData, meta any) error { - if resource.DeleteContext != nil || resource.DeleteWithoutTimeout != nil { - var diags diag.Diagnostics - - if resource.DeleteContext != nil { - diags = resource.DeleteContext(ctx, d, meta) - } else { - diags = resource.DeleteWithoutTimeout(ctx, d, meta) - } - - return sdkdiag.DiagnosticsError(diags) - } - - return resource.Delete(d, meta) -} - -func ReadResource(ctx context.Context, resource *schema.Resource, d *schema.ResourceData, meta any) error { - if resource.ReadContext != nil || resource.ReadWithoutTimeout != nil { - var diags diag.Diagnostics - - if resource.ReadContext != nil { - diags = resource.ReadContext(ctx, d, meta) - } else { - diags = resource.ReadWithoutTimeout(ctx, d, meta) - } - - return sdkdiag.DiagnosticsError(diags) - } - - return resource.Read(d, meta) -} - func Partition(region string) string { if partition, ok := endpoints.PartitionForRegion(endpoints.DefaultPartitions(), region); ok { return partition.ID() diff --git a/internal/sweep/sweep_test.go b/internal/sweep/sweep_test.go index ff46cd35a7a..fb7fac9b3fe 100644 --- a/internal/sweep/sweep_test.go +++ 
b/internal/sweep/sweep_test.go @@ -3,6 +3,7 @@ package sweep_test import ( + "context" "testing" "github.com/hashicorp/terraform-plugin-testing/helper/resource" @@ -105,6 +106,7 @@ import ( _ "github.com/hashicorp/terraform-provider-aws/internal/service/networkmanager" _ "github.com/hashicorp/terraform-provider-aws/internal/service/oam" _ "github.com/hashicorp/terraform-provider-aws/internal/service/opensearch" + _ "github.com/hashicorp/terraform-provider-aws/internal/service/opensearchserverless" _ "github.com/hashicorp/terraform-provider-aws/internal/service/opsworks" _ "github.com/hashicorp/terraform-provider-aws/internal/service/pinpoint" _ "github.com/hashicorp/terraform-provider-aws/internal/service/pipes" @@ -115,6 +117,7 @@ import ( _ "github.com/hashicorp/terraform-provider-aws/internal/service/redshift" _ "github.com/hashicorp/terraform-provider-aws/internal/service/redshiftserverless" _ "github.com/hashicorp/terraform-provider-aws/internal/service/resourceexplorer2" + _ "github.com/hashicorp/terraform-provider-aws/internal/service/resourcegroups" _ "github.com/hashicorp/terraform-provider-aws/internal/service/route53" _ "github.com/hashicorp/terraform-provider-aws/internal/service/route53recoverycontrolconfig" _ "github.com/hashicorp/terraform-provider-aws/internal/service/route53resolver" @@ -150,6 +153,6 @@ import ( ) func TestMain(m *testing.M) { - sweep.SweeperClients = make(map[string]interface{}) + sweep.ServicePackages = servicePackages(context.Background()) resource.TestMain(m) } diff --git a/internal/tags/context.go b/internal/tags/context.go index 9b1329f061b..71663164895 100644 --- a/internal/tags/context.go +++ b/internal/tags/context.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tags import ( diff --git a/internal/tags/framework.go b/internal/tags/framework.go index 297b69347ea..733e300cacd 100644 --- a/internal/tags/framework.go +++ b/internal/tags/framework.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tags import ( diff --git a/internal/tags/key_value_tags.go b/internal/tags/key_value_tags.go index 0b2581ccf76..5020c0d995c 100644 --- a/internal/tags/key_value_tags.go +++ b/internal/tags/key_value_tags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tags import ( @@ -16,7 +19,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/resource" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -749,27 +752,37 @@ type schemaResourceData interface { GetRawState() cty.Value } +// tagSource is an enum that identifies the source of the tag +type tagSource int + +const ( + configuration tagSource = iota + plan + state +) + +// configTag contains the value and source of the incoming tag +type configTag struct { + value string + source tagSource +} + +// ResolveDuplicates resolves differences between incoming tags, defaultTags, and ignoreConfig func (tags KeyValueTags) ResolveDuplicates(ctx context.Context, defaultConfig *DefaultConfig, ignoreConfig *IgnoreConfig, d schemaResourceData) KeyValueTags { // remove default config. 
t := tags.RemoveDefaultConfig(defaultConfig) + cf := d.GetRawConfig() + configExists := !cf.IsNull() && cf.IsKnown() + result := make(map[string]string) for k, v := range t { result[k] = v.ValueString() } - configTags := make(map[string]string) - if config := d.GetRawPlan(); !config.IsNull() && config.IsKnown() { - c := config.GetAttr("tags") - if !c.IsNull() && c.IsKnown() { - for k, v := range c.AsValueMap() { - configTags[k] = v.AsString() - } - } - } - - if config := d.GetRawConfig(); !config.IsNull() && config.IsKnown() { - c := config.GetAttr("tags") + configTags := make(map[string]configTag) + if configExists { + c := cf.GetAttr(names.AttrTags) // if the config is null just return the incoming tags // no duplicates to calculate @@ -778,30 +791,37 @@ func (tags KeyValueTags) ResolveDuplicates(ctx context.Context, defaultConfig *D } if !c.IsNull() && c.IsKnown() { - for k, v := range c.AsValueMap() { - if _, ok := configTags[k]; !ok { - configTags[k] = v.AsString() - } - } + normalizeTagsFromRaw(c.AsValueMap(), configTags, configuration) } } - if state := d.GetRawState(); !state.IsNull() && state.IsKnown() { - c := state.GetAttr("tags") + if pl := d.GetRawPlan(); !pl.IsNull() && pl.IsKnown() { + c := pl.GetAttr(names.AttrTags) + if !c.IsNull() && c.IsKnown() { + normalizeTagsFromRaw(c.AsValueMap(), configTags, plan) + } + } + + if st := d.GetRawState(); !st.IsNull() && st.IsKnown() { + c := st.GetAttr(names.AttrTags) if !c.IsNull() { - for k, v := range c.AsValueMap() { - if _, ok := configTags[k]; !ok { - configTags[k] = v.AsString() - } - } + normalizeTagsFromRaw(c.AsValueMap(), configTags, state) } } for k, v := range configTags { if _, ok := result[k]; !ok { if defaultConfig != nil { - if val, ok := defaultConfig.Tags[k]; ok && val.ValueString() == v { - result[k] = v + if val, ok := defaultConfig.Tags[k]; ok && val.ValueString() == v.value { + // config does not exist during a refresh. 
+ // set duplicate values from other sources for refresh diff calculation + if !configExists { + result[k] = v.value + } else { + if v.source == configuration { + result[k] = v.value + } + } } } } @@ -810,6 +830,7 @@ func (tags KeyValueTags) ResolveDuplicates(ctx context.Context, defaultConfig *D return New(ctx, result).IgnoreConfig(ignoreConfig) } +// ResolveDuplicatesFramework resolves differences between incoming tags, defaultTags, and ignoreConfig func (tags KeyValueTags) ResolveDuplicatesFramework(ctx context.Context, defaultConfig *DefaultConfig, ignoreConfig *IgnoreConfig, resp *resource.ReadResponse, diags fwdiag.Diagnostics) KeyValueTags { // remove default config. t := tags.RemoveDefaultConfig(defaultConfig) @@ -857,3 +878,16 @@ func ToSnakeCase(str string) string { result = regexp.MustCompile("([a-z0-9])([A-Z])").ReplaceAllString(result, "${1}_${2}") return strings.ToLower(result) } + +func normalizeTagsFromRaw(m map[string]cty.Value, incoming map[string]configTag, source tagSource) { + for k, v := range m { + if !v.IsNull() { + if _, ok := incoming[k]; !ok { + incoming[k] = configTag{ + value: v.AsString(), + source: source, + } + } + } + } +} diff --git a/internal/tags/key_value_tags_test.go b/internal/tags/key_value_tags_test.go index 99dc555a7d4..ea7981fc9ef 100644 --- a/internal/tags/key_value_tags_test.go +++ b/internal/tags/key_value_tags_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tags import ( diff --git a/internal/tags/tag_resources.go b/internal/tags/tag_resources.go index cc9971cd41f..25e2e73ddf3 100644 --- a/internal/tags/tag_resources.go +++ b/internal/tags/tag_resources.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tags import ( diff --git a/internal/tags/tag_resources_test.go b/internal/tags/tag_resources_test.go index 7f781b20534..40b6190d7b3 100644 --- a/internal/tags/tag_resources_test.go +++ b/internal/tags/tag_resources_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tags import ( diff --git a/internal/tags/tags.go b/internal/tags/tags.go index fbe5fe2385a..12d3b9013eb 100644 --- a/internal/tags/tags.go +++ b/internal/tags/tags.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tags import ( diff --git a/internal/tfresource/errors.go b/internal/tfresource/errors.go index 84e54a85187..e470df026f0 100644 --- a/internal/tfresource/errors.go +++ b/internal/tfresource/errors.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource import ( diff --git a/internal/tfresource/errors_test.go b/internal/tfresource/errors_test.go index bcca4d5cdbf..ccd602328d5 100644 --- a/internal/tfresource/errors_test.go +++ b/internal/tfresource/errors_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource_test import ( diff --git a/internal/tfresource/not_found_error.go b/internal/tfresource/not_found_error.go index c942e01d201..bc668a44234 100644 --- a/internal/tfresource/not_found_error.go +++ b/internal/tfresource/not_found_error.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource import ( diff --git a/internal/tfresource/not_found_error_test.go b/internal/tfresource/not_found_error_test.go index af16647c8f4..d10f603529d 100644 --- a/internal/tfresource/not_found_error_test.go +++ b/internal/tfresource/not_found_error_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package tfresource import ( diff --git a/internal/tfresource/retry.go b/internal/tfresource/retry.go index 764462f9b27..fe2aabd9b2a 100644 --- a/internal/tfresource/retry.go +++ b/internal/tfresource/retry.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource import ( diff --git a/internal/tfresource/retry_test.go b/internal/tfresource/retry_test.go index 2a6cf64c39a..fe10a6c48b1 100644 --- a/internal/tfresource/retry_test.go +++ b/internal/tfresource/retry_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource_test import ( diff --git a/internal/tfresource/wait.go b/internal/tfresource/wait.go index a6f0150863c..c84e65831c9 100644 --- a/internal/tfresource/wait.go +++ b/internal/tfresource/wait.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource import ( diff --git a/internal/tfresource/wait_test.go b/internal/tfresource/wait_test.go index ac5ff19983d..4867625a630 100644 --- a/internal/tfresource/wait_test.go +++ b/internal/tfresource/wait_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package tfresource_test import ( diff --git a/internal/verify/cidr.go b/internal/types/cidr_block.go similarity index 62% rename from internal/verify/cidr.go rename to internal/types/cidr_block.go index e115f56b7c6..ecdc58ee93e 100644 --- a/internal/verify/cidr.go +++ b/internal/types/cidr_block.go @@ -1,9 +1,29 @@ -package verify +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package types import ( + "fmt" "net" ) +// ValidateCIDRBlock validates that the specified CIDR block is valid: +// - The CIDR block parses to an IP address and network +// - The CIDR block is the CIDR block for the network +func ValidateCIDRBlock(cidr string) error { + _, ipnet, err := net.ParseCIDR(cidr) + if err != nil { + return fmt.Errorf("%q is not a valid CIDR block: %w", cidr, err) + } + + if !CIDRBlocksEqual(cidr, ipnet.String()) { + return fmt.Errorf("%q is not a valid CIDR block; did you mean %q?", cidr, ipnet) + } + + return nil +} + // CIDRBlocksEqual returns whether or not two CIDR blocks are equal: // - Both CIDR blocks parse to an IP address and network // - The string representation of the IP addresses are equal diff --git a/internal/verify/cidr_test.go b/internal/types/cidr_block_test.go similarity index 58% rename from internal/verify/cidr_test.go rename to internal/types/cidr_block_test.go index fd22d383a64..e153aa3eb8f 100644 --- a/internal/verify/cidr_test.go +++ b/internal/types/cidr_block_test.go @@ -1,8 +1,35 @@ -package verify +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 -import ( - "testing" -) +package types + +import "testing" + +func TestValidateCIDRBlock(t *testing.T) { + t.Parallel() + + for _, ts := range []struct { + cidr string + valid bool + }{ + {"10.2.2.0/24", true}, + {"10.2.2.0/1234", false}, + {"10.2.2.2/24", false}, + {"::/0", true}, + {"::0/0", true}, + {"2000::/15", true}, + {"2001::/15", false}, + {"", false}, + } { + err := ValidateCIDRBlock(ts.cidr) + if !ts.valid && err == nil { + t.Fatalf("Input '%s' should error but didn't!", ts.cidr) + } + if ts.valid && err != nil { + t.Fatalf("Got unexpected error for '%s' input: %s", ts.cidr, err) + } + } +} func TestCIDRBlocksEqual(t *testing.T) { t.Parallel() diff --git a/internal/types/duration/duration.go b/internal/types/duration/duration.go index ae8eae6bf7a..add57ee6411 100644 --- a/internal/types/duration/duration.go +++ b/internal/types/duration/duration.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package duration import ( diff --git a/internal/types/duration/duration_test.go b/internal/types/duration/duration_test.go index 06049067e02..b3469890bc4 100644 --- a/internal/types/duration/duration_test.go +++ b/internal/types/duration/duration_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package duration import ( diff --git a/internal/experimental/nullable/bool.go b/internal/types/nullable/bool.go similarity index 86% rename from internal/experimental/nullable/bool.go rename to internal/types/nullable/bool.go index cedbb494220..0f2a8f815ad 100644 --- a/internal/experimental/nullable/bool.go +++ b/internal/types/nullable/bool.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package nullable import ( @@ -48,6 +51,11 @@ func ValidateTypeStringNullableBool(v interface{}, k string) (ws []string, es [] if _, err := strconv.ParseBool(value); err != nil { es = append(es, fmt.Errorf("%s: cannot parse '%s' as boolean: %w", k, value, err)) + return + } + + if value != "true" && value != "false" { + ws = append(ws, fmt.Sprintf(`%s: the use of values other than "true" and "false" is deprecated and will be removed in a future version of the provider`, k)) } return diff --git a/internal/experimental/nullable/bool_test.go b/internal/types/nullable/bool_test.go similarity index 92% rename from internal/experimental/nullable/bool_test.go rename to internal/types/nullable/bool_test.go index cc39dac4ee9..bbedbeaa2a0 100644 --- a/internal/experimental/nullable/bool_test.go +++ b/internal/types/nullable/bool_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package nullable import ( @@ -83,18 +86,19 @@ func TestValidationBool(t *testing.T) { f: ValidateTypeStringNullableBool, }, { - val: "1", - f: ValidateTypeStringNullableBool, + val: "1", + f: ValidateTypeStringNullableBool, + expectedWarning: regexp.MustCompile(`^\w+: the use of values other than "true" and "false" is deprecated and will be removed in a future version of the provider$`), }, { val: "A", f: ValidateTypeStringNullableBool, - expectedErr: regexp.MustCompile(`[\w]+: cannot parse 'A' as boolean`), + expectedErr: regexp.MustCompile(`^\w+: cannot parse 'A' as boolean: .+$`), }, { val: 1, f: ValidateTypeStringNullableBool, - expectedErr: regexp.MustCompile(`expected type of [\w]+ to be string`), + expectedErr: regexp.MustCompile(`^expected type of \w+ to be string$`), }, }) } diff --git a/internal/experimental/nullable/float.go b/internal/types/nullable/float.go similarity index 93% rename from internal/experimental/nullable/float.go rename to internal/types/nullable/float.go index 80cf71e345b..d7de0e61ed1 
100644 --- a/internal/experimental/nullable/float.go +++ b/internal/types/nullable/float.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package nullable import ( diff --git a/internal/experimental/nullable/float_test.go b/internal/types/nullable/float_test.go similarity index 89% rename from internal/experimental/nullable/float_test.go rename to internal/types/nullable/float_test.go index 8c1ddc09af3..5aa4d930a56 100644 --- a/internal/experimental/nullable/float_test.go +++ b/internal/types/nullable/float_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package nullable import ( @@ -88,12 +91,12 @@ func TestValidationFloat(t *testing.T) { { val: "A", f: ValidateTypeStringNullableFloat, - expectedErr: regexp.MustCompile(`[\w]+: cannot parse 'A' as float: .*`), + expectedErr: regexp.MustCompile(`^\w+: cannot parse 'A' as float: .+$`), }, { val: 1, f: ValidateTypeStringNullableFloat, - expectedErr: regexp.MustCompile(`expected type of [\w]+ to be string`), + expectedErr: regexp.MustCompile(`^expected type of \w+ to be string$`), }, }) } diff --git a/internal/experimental/nullable/int.go b/internal/types/nullable/int.go similarity index 97% rename from internal/experimental/nullable/int.go rename to internal/types/nullable/int.go index 25feb8112bb..da00af0dbda 100644 --- a/internal/experimental/nullable/int.go +++ b/internal/types/nullable/int.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package nullable import ( diff --git a/internal/experimental/nullable/int_test.go b/internal/types/nullable/int_test.go similarity index 84% rename from internal/experimental/nullable/int_test.go rename to internal/types/nullable/int_test.go index bcb371ca2d1..a40299b2b2f 100644 --- a/internal/experimental/nullable/int_test.go +++ b/internal/types/nullable/int_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package nullable import ( @@ -70,12 +73,12 @@ func TestValidationInt(t *testing.T) { { val: "A", f: ValidateTypeStringNullableInt, - expectedErr: regexp.MustCompile(`[\w]+: cannot parse 'A' as int: .*`), + expectedErr: regexp.MustCompile(`^\w+: cannot parse 'A' as int: .*`), }, { val: 1, f: ValidateTypeStringNullableInt, - expectedErr: regexp.MustCompile(`expected type of [\w]+ to be string`), + expectedErr: regexp.MustCompile(`^expected type of \w+ to be string`), }, }) } @@ -95,12 +98,12 @@ func TestValidationIntAtLeast(t *testing.T) { { val: "1", f: ValidateTypeStringNullableIntAtLeast(2), - expectedErr: regexp.MustCompile(`expected [\w]+ to be at least \(2\), got 1`), + expectedErr: regexp.MustCompile(`expected \w+ to be at least \(2\), got 1`), }, { val: 1, f: ValidateTypeStringNullableIntAtLeast(2), - expectedErr: regexp.MustCompile(`expected type of [\w]+ to be string`), + expectedErr: regexp.MustCompile(`expected type of \w+ to be string`), }, }) } diff --git a/internal/types/nullable/testing.go b/internal/types/nullable/testing.go new file mode 100644 index 00000000000..e1e14ea1104 --- /dev/null +++ b/internal/types/nullable/testing.go @@ -0,0 +1,62 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + +package nullable + +import ( + "regexp" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +type testCase struct { + val interface{} + f schema.SchemaValidateFunc + expectedErr *regexp.Regexp + expectedWarning *regexp.Regexp +} + +func runValidationTestCases(t *testing.T, cases []testCase) { + t.Helper() + + matchErr := func(errs []error, r *regexp.Regexp) bool { + // err must match one provided + for _, err := range errs { + if r.MatchString(err.Error()) { + return true + } + } + + return false + } + + matchWarning := func(warnings []string, r *regexp.Regexp) bool { + // warning must match one provided + for _, warning := range warnings { + if r.MatchString(warning) { + return true + } + } + + return false + } + + for i, tc := range cases { + warnings, errs := tc.f(tc.val, "test_property") + + if tc.expectedErr == nil && len(errs) != 0 { + t.Errorf("expected test case %d to produce no errors, got %v", i, errs) + } + if tc.expectedErr != nil && !matchErr(errs, tc.expectedErr) { + t.Errorf("expected test case %d to produce error matching \"%s\", got %v", i, tc.expectedErr, errs) + } + + if tc.expectedWarning == nil && len(warnings) != 0 { + t.Errorf("expected test case %d to produce no warnings, got %v", i, warnings) + } + if tc.expectedWarning != nil && !matchWarning(warnings, tc.expectedWarning) { + t.Errorf("expected test case %d to produce warning matching \"%s\", got %v", i, tc.expectedWarning, warnings) + } + } +} diff --git a/internal/types/option.go b/internal/types/option.go index 5ad1cc8d89d..77d07ec500f 100644 --- a/internal/types/option.go +++ b/internal/types/option.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package types type Option[T any] []T diff --git a/internal/types/option_test.go b/internal/types/option_test.go index 7d3d04ce271..7b4abb51280 100644 --- a/internal/types/option_test.go +++ b/internal/types/option_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types import ( diff --git a/internal/types/service_package.go b/internal/types/service_package.go index be3273eafab..ada3a89ca99 100644 --- a/internal/types/service_package.go +++ b/internal/types/service_package.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package types import ( diff --git a/internal/types/timestamp/timestamp.go b/internal/types/timestamp/timestamp.go index 454cf8f4f1c..b31ad0dd74d 100644 --- a/internal/types/timestamp/timestamp.go +++ b/internal/types/timestamp/timestamp.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package timestamp import ( diff --git a/internal/types/timestamp/timestamp_test.go b/internal/types/timestamp/timestamp_test.go index 76d6b0401c0..8f7ac80ac45 100644 --- a/internal/types/timestamp/timestamp_test.go +++ b/internal/types/timestamp/timestamp_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package timestamp import "testing" diff --git a/internal/types/zero.go b/internal/types/zero.go new file mode 100644 index 00000000000..ba7b61f15ad --- /dev/null +++ b/internal/types/zero.go @@ -0,0 +1,13 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package types + +import ( + "reflect" +) + +// IsZero returns true if `v` is `nil` or points to the zero value of `T`. 
+func IsZero[T any](v *T) bool { + return v == nil || reflect.ValueOf(*v).IsZero() +} diff --git a/internal/types/zero_test.go b/internal/types/zero_test.go new file mode 100644 index 00000000000..f90df68dbdf --- /dev/null +++ b/internal/types/zero_test.go @@ -0,0 +1,55 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +package types + +import ( + "testing" +) + +type AIsZero struct { + Key string + Value int +} + +func TestIsZero(t *testing.T) { + t.Parallel() + + testCases := []struct { + Name string + Ptr *AIsZero + Expected bool + }{ + { + Name: "nil pointer", + Expected: true, + }, + { + Name: "pointer to zero value", + Ptr: &AIsZero{}, + Expected: true, + }, + { + Name: "pointer to non-zero value Key", + Ptr: &AIsZero{Key: "test"}, + }, + { + Name: "pointer to non-zero value Value", + Ptr: &AIsZero{Value: 42}, + }, + } + + for _, testCase := range testCases { + testCase := testCase + + t.Run(testCase.Name, func(t *testing.T) { + t.Parallel() + + got := IsZero(testCase.Ptr) + + if got != testCase.Expected { + t.Errorf("got %t, expected %t", got, testCase.Expected) + } + }) + } +} diff --git a/internal/vault/helper/pgpkeys/encrypt_decrypt.go b/internal/vault/helper/pgpkeys/encrypt_decrypt.go index 58f0b07b05e..bde3f749599 100644 --- a/internal/vault/helper/pgpkeys/encrypt_decrypt.go +++ b/internal/vault/helper/pgpkeys/encrypt_decrypt.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pgpkeys import ( @@ -28,11 +31,11 @@ func EncryptShares(input [][]byte, pgpKeys []string) ([]string, [][]byte, error) ctBuf := bytes.NewBuffer(nil) pt, err := openpgp.Encrypt(ctBuf, []*openpgp.Entity{entity}, nil, nil, nil) if err != nil { - return nil, nil, fmt.Errorf("error setting up encryption for PGP message: %w", err) + return nil, nil, fmt.Errorf("setting up encryption for PGP message: %w", err) } _, err = pt.Write(input[i]) if err != nil { - return nil, nil, fmt.Errorf("error encrypting PGP message: %w", err) + return nil, nil, fmt.Errorf("encrypting PGP message: %w", err) } pt.Close() encryptedShares = append(encryptedShares, ctBuf.Bytes()) @@ -72,11 +75,11 @@ func GetEntities(pgpKeys []string) ([]*openpgp.Entity, error) { for _, keystring := range pgpKeys { data, err := base64.StdEncoding.DecodeString(keystring) if err != nil { - return nil, fmt.Errorf("error decoding given PGP key: %w", err) + return nil, fmt.Errorf("decoding given PGP key: %w", err) } entity, err := openpgp.ReadEntity(packet.NewReader(bytes.NewBuffer(data))) if err != nil { - return nil, fmt.Errorf("error parsing given PGP key: %w", err) + return nil, fmt.Errorf("parsing given PGP key: %w", err) } ret = append(ret, entity) } @@ -91,30 +94,30 @@ func GetEntities(pgpKeys []string) ([]*openpgp.Entity, error) { func DecryptBytes(encodedCrypt, privKey string) (*bytes.Buffer, error) { privKeyBytes, err := base64.StdEncoding.DecodeString(privKey) if err != nil { - return nil, fmt.Errorf("error decoding base64 private key: %w", err) + return nil, fmt.Errorf("decoding base64 private key: %w", err) } cryptBytes, err := base64.StdEncoding.DecodeString(encodedCrypt) if err != nil { - return nil, fmt.Errorf("error decoding base64 crypted bytes: %w", err) + return nil, fmt.Errorf("decoding base64 crypted bytes: %w", err) } entity, err := openpgp.ReadEntity(packet.NewReader(bytes.NewBuffer(privKeyBytes))) if err != nil { - return nil, fmt.Errorf("error 
parsing private key: %w", err) + return nil, fmt.Errorf("parsing private key: %w", err) } entityList := &openpgp.EntityList{entity} md, err := openpgp.ReadMessage(bytes.NewBuffer(cryptBytes), entityList, nil, nil) if err != nil { - return nil, fmt.Errorf("error decrypting the messages: %w", err) + return nil, fmt.Errorf("decrypting the messages: %w", err) } ptBuf := bytes.NewBuffer(nil) _, err = ptBuf.ReadFrom(md.UnverifiedBody) if err != nil { - return nil, fmt.Errorf("error reading the messages: %w", err) + return nil, fmt.Errorf("reading the messages: %w", err) } return ptBuf, nil diff --git a/internal/vault/helper/pgpkeys/keybase.go b/internal/vault/helper/pgpkeys/keybase.go index dc91344b70d..06c13c4ddd9 100644 --- a/internal/vault/helper/pgpkeys/keybase.go +++ b/internal/vault/helper/pgpkeys/keybase.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package pgpkeys import ( @@ -101,7 +104,7 @@ func FetchKeybasePubkeys(input []string) (map[string]string, error) { serializedEntity.Reset() err = entityList[0].Serialize(serializedEntity) if err != nil { - return nil, fmt.Errorf("error serializing entity for user %q: %w", usernames[i], err) + return nil, fmt.Errorf("serializing entity for user %q: %w", usernames[i], err) } // The API returns values in the same ordering requested, so this should properly match diff --git a/internal/vault/helper/pgpkeys/keybase_test.go b/internal/vault/helper/pgpkeys/keybase_test.go index 52d3b67a3a8..554e3279160 100644 --- a/internal/vault/helper/pgpkeys/keybase_test.go +++ b/internal/vault/helper/pgpkeys/keybase_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package pgpkeys import ( diff --git a/internal/vault/sdk/helper/jsonutil/json.go b/internal/vault/sdk/helper/jsonutil/json.go index 51cca5f2643..11dde9bae2d 100644 --- a/internal/vault/sdk/helper/jsonutil/json.go +++ b/internal/vault/sdk/helper/jsonutil/json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package jsonutil import ( diff --git a/internal/vault/sdk/helper/jsonutil/json_test.go b/internal/vault/sdk/helper/jsonutil/json_test.go index fde05295c36..6b73c798534 100644 --- a/internal/vault/sdk/helper/jsonutil/json_test.go +++ b/internal/vault/sdk/helper/jsonutil/json_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package jsonutil import ( diff --git a/internal/verify/base64.go b/internal/verify/base64.go index 766e5c984bc..9927d226da9 100644 --- a/internal/verify/base64.go +++ b/internal/verify/base64.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/base64_test.go b/internal/verify/base64_test.go index 69b485c5121..7e5284db0ba 100644 --- a/internal/verify/base64_test.go +++ b/internal/verify/base64_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/diff.go b/internal/verify/diff.go index 64f5c80d6fb..5e9c779a285 100644 --- a/internal/verify/diff.go +++ b/internal/verify/diff.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package verify import ( @@ -36,7 +39,7 @@ func SetTagsDiff(ctx context.Context, diff *schema.ResourceDiff, meta interface{ if !diff.GetRawPlan().GetAttr("tags").IsWhollyKnown() { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } return nil } @@ -47,25 +50,25 @@ func SetTagsDiff(ctx context.Context, diff *schema.ResourceDiff, meta interface{ if newTags.HasZeroValue() { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } } if len(allTags) > 0 && (!newTags.HasZeroValue() || !allTags.HasZeroValue()) { if err := diff.SetNew("tags_all", allTags.Map()); err != nil { - return fmt.Errorf("error setting new tags_all diff: %w", err) + return fmt.Errorf("setting new tags_all diff: %w", err) } } if len(allTags) == 0 { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } } } else if !diff.HasChange("tags") { if len(allTags) > 0 && !allTags.HasZeroValue() { if err := diff.SetNew("tags_all", allTags.Map()); err != nil { - return fmt.Errorf("error setting new tags_all diff: %w", err) + return fmt.Errorf("setting new tags_all diff: %w", err) } return nil } @@ -76,7 +79,7 @@ func SetTagsDiff(ctx context.Context, diff *schema.ResourceDiff, meta interface{ } if len(allTags) > 0 && !ta.DeepEqual(allTags) && allTags.HasZeroValue() { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } return nil } @@ -85,17 +88,17 @@ func SetTagsDiff(ctx context.Context, diff *schema.ResourceDiff, meta interface{ if !ta.DeepEqual(allTags) { 
if allTags.HasZeroValue() { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } } } } else if len(diff.Get("tags_all").(map[string]interface{})) > 0 { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } } else if diff.HasChange("tags_all") { if err := diff.SetNewComputed("tags_all"); err != nil { - return fmt.Errorf("error setting tags_all to computed: %w", err) + return fmt.Errorf("setting tags_all to computed: %w", err) } } diff --git a/internal/verify/diff_test.go b/internal/verify/diff_test.go index 96c365850e4..3ddc36b4c3e 100644 --- a/internal/verify/diff_test.go +++ b/internal/verify/diff_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/json.go b/internal/verify/json.go index 23646e134e1..cd74663c5ff 100644 --- a/internal/verify/json.go +++ b/internal/verify/json.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/json_test.go b/internal/verify/json_test.go index daa4d52256d..e759971fd77 100644 --- a/internal/verify/json_test.go +++ b/internal/verify/json_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/resource_differ.go b/internal/verify/resource_differ.go index 2ce5b84738d..20977039f0b 100644 --- a/internal/verify/resource_differ.go +++ b/internal/verify/resource_differ.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package verify // ResourceDiffer exposes the interface for accessing changes in a resource diff --git a/internal/verify/semver.go b/internal/verify/semver.go index 30225036a6f..898d3c78e7a 100644 --- a/internal/verify/semver.go +++ b/internal/verify/semver.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/semver_test.go b/internal/verify/semver_test.go index ab734bcad3c..5bac5020062 100644 --- a/internal/verify/semver_test.go +++ b/internal/verify/semver_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/validate.go b/internal/verify/validate.go index 170846dd01d..8d9a9e9e27b 100644 --- a/internal/verify/validate.go +++ b/internal/verify/validate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( @@ -16,6 +19,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/errs" + "github.com/hashicorp/terraform-provider-aws/internal/types" "github.com/hashicorp/terraform-provider-aws/internal/types/timestamp" ) @@ -133,26 +137,10 @@ func ValidAccountID(v interface{}, k string) (ws []string, errors []error) { return } -// ValidateCIDRBlock validates that the specified CIDR block is valid: -// - The CIDR block parses to an IP address and network -// - The CIDR block is the CIDR block for the network -func ValidateCIDRBlock(cidr string) error { - _, ipnet, err := net.ParseCIDR(cidr) - if err != nil { - return fmt.Errorf("%q is not a valid CIDR block: %w", cidr, err) - } - - if !CIDRBlocksEqual(cidr, ipnet.String()) { - return fmt.Errorf("%q is not a valid CIDR block; did you mean %q?", cidr, ipnet) - } - - return nil -} - // 
ValidCIDRNetworkAddress ensures that the string value is a valid CIDR that // represents a network address - it adds an error otherwise func ValidCIDRNetworkAddress(v interface{}, k string) (ws []string, errors []error) { - if err := ValidateCIDRBlock(v.(string)); err != nil { + if err := types.ValidateCIDRBlock(v.(string)); err != nil { errors = append(errors, err) return } @@ -217,7 +205,7 @@ func ValidateIPv4CIDRBlock(cidr string) error { return fmt.Errorf("%q is not a valid IPv4 CIDR block", cidr) } - if !CIDRBlocksEqual(cidr, ipnet.String()) { + if !types.CIDRBlocksEqual(cidr, ipnet.String()) { return fmt.Errorf("%q is not a valid IPv4 CIDR block; did you mean %q?", cidr, ipnet) } @@ -239,7 +227,7 @@ func ValidateIPv6CIDRBlock(cidr string) error { return fmt.Errorf("%q is not a valid IPv6 CIDR block", cidr) } - if !CIDRBlocksEqual(cidr, ipnet.String()) { + if !types.CIDRBlocksEqual(cidr, ipnet.String()) { return fmt.Errorf("%q is not a valid IPv6 CIDR block; did you mean %q?", cidr, ipnet) } diff --git a/internal/verify/validate_test.go b/internal/verify/validate_test.go index d3db084ee9f..0b5b20616c5 100644 --- a/internal/verify/validate_test.go +++ b/internal/verify/validate_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package verify import ( @@ -220,32 +223,6 @@ func TestValidARN(t *testing.T) { } } -func TestValidateCIDRBlock(t *testing.T) { - t.Parallel() - - for _, ts := range []struct { - cidr string - valid bool - }{ - {"10.2.2.0/24", true}, - {"10.2.2.0/1234", false}, - {"10.2.2.2/24", false}, - {"::/0", true}, - {"::0/0", true}, - {"2000::/15", true}, - {"2001::/15", false}, - {"", false}, - } { - err := ValidateCIDRBlock(ts.cidr) - if !ts.valid && err == nil { - t.Fatalf("Input '%s' should error but didn't!", ts.cidr) - } - if ts.valid && err != nil { - t.Fatalf("Got unexpected error for '%s' input: %s", ts.cidr, err) - } - } -} - func TestValidCIDRNetworkAddress(t *testing.T) { t.Parallel() diff --git a/internal/verify/verify.go b/internal/verify/verify.go index 701aa6a5cef..a47f76d96b4 100644 --- a/internal/verify/verify.go +++ b/internal/verify/verify.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/internal/verify/verify_test.go b/internal/verify/verify_test.go index 0f15051bb3d..d210f4498db 100644 --- a/internal/verify/verify_test.go +++ b/internal/verify/verify_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package verify import ( diff --git a/main.go b/main.go index b40c75d2eb2..b73cae18c64 100644 --- a/main.go +++ b/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 + package main import ( diff --git a/mkdocs.yml b/mkdocs.yml index bd423ec3c60..451bc548058 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -31,13 +31,13 @@ nav: - Data Handling and Conversion: data-handling-and-conversion.md - Dependency Updates: dependency-updates.md - AWS SDK for Go Versions: aws-go-sdk-versions.md + - Terraform Plugin Versions: terraform-plugin-versions.md - AWS Go SDK Base: aws-go-sdk-base.md - Error Handling: error-handling.md - Provider Design: provider-design.md - Provider Scaffolding (skaff): skaff.md - Naming Standards: naming.md - Retries and Waiters: retries-and-waiters.md - - Service Packages Refactor: service-package-pullrequest-guide.md - Submit an Issue: issue-reporting-and-lifecycle.md - FAQ: faq.md diff --git a/names/attr_consts.go b/names/attr_consts.go index 91bf443e703..a6bcf5b960d 100644 --- a/names/attr_consts.go +++ b/names/attr_consts.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package names const ( diff --git a/names/columns.go b/names/columns.go index 73071386695..552b71c30b5 100644 --- a/names/columns.go +++ b/names/columns.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package names const ( diff --git a/names/consts_gen.go b/names/consts_gen.go index c58019cbe79..7045181edad 100644 --- a/names/consts_gen.go +++ b/names/consts_gen.go @@ -305,6 +305,7 @@ const ( Transfer = "transfer" Translate = "translate" VPCLattice = "vpclattice" + VerifiedPermissions = "verifiedpermissions" VoiceID = "voiceid" WAF = "waf" WAFRegional = "wafregional" diff --git a/names/generate.go b/names/generate.go index 5108487e594..3344ae3ce54 100644 --- a/names/generate.go +++ b/names/generate.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + //go:generate go run ../internal/generate/namesconsts/main.go // ONLY generate directives and package declaration! 
Do not add anything else to this file. diff --git a/names/names.go b/names/names.go index b0f8c31ea65..5452046d9cb 100644 --- a/names/names.go +++ b/names/names.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + // Package names provides constants for AWS service names that are used as keys // for the endpoints slice in internal/conns/conns.go. The package also exposes // access to data found in the names_data.csv file, which provides additional @@ -26,19 +29,25 @@ const ( AccountEndpointID = "account" ACMEndpointID = "acm" AuditManagerEndpointID = "auditmanager" + CleanRoomsEndpointID = "cleanrooms" CloudWatchLogsEndpointID = "logs" ComprehendEndpointID = "comprehend" ComputeOptimizerEndpointID = "computeoptimizer" DSEndpointID = "ds" + GlacierEndpointID = "glacier" IdentityStoreEndpointID = "identitystore" Inspector2EndpointID = "inspector2" + InternetMonitorEndpointID = "internetmonitor" IVSChatEndpointID = "ivschat" KendraEndpointID = "kendra" + KeyspacesEndpointID = "keyspaces" LambdaEndpointID = "lambda" MediaLiveEndpointID = "medialive" ObservabilityAccessManagerEndpointID = "oam" OpenSearchServerlessEndpointID = "aoss" PipesEndpointID = "pipes" + PricingEndpointID = "pricing" + QLDBEndpointID = "qldb" ResourceExplorer2EndpointID = "resource-explorer-2" RolesAnywhereEndpointID = "rolesanywhere" Route53DomainsEndpointID = "route53domains" @@ -47,6 +56,8 @@ const ( SSMEndpointID = "ssm" SSMContactsEndpointID = "ssm-contacts" SSMIncidentsEndpointID = "ssm-incidents" + SWFEndpointID = "swf" + TimestreamWriteEndpointID = "ingest.timestream" TranscribeEndpointID = "transcribe" VPCLatticeEndpointID = "vpc-lattice" XRayEndpointID = "xray" diff --git a/names/names_data.csv b/names/names_data.csv index 70ff8bfb18d..bf01d4253e6 100644 --- a/names/names_data.csv +++ b/names/names_data.csv @@ -15,7 +15,7 @@ apigatewayv2,apigatewayv2,apigatewayv2,apigatewayv2,,apigatewayv2,,,APIGatewayV2 
appmesh,appmesh,appmesh,appmesh,,appmesh,,,AppMesh,AppMesh,,1,,,aws_appmesh_,,appmesh_,App Mesh,AWS,,,,, apprunner,apprunner,apprunner,apprunner,,apprunner,,,AppRunner,AppRunner,,1,,,aws_apprunner_,,apprunner_,App Runner,AWS,,,,, ,,,,,,,,,,,,,,,,,App2Container,AWS,x,,,,No SDK support -appconfig,appconfig,appconfig,appconfig,,appconfig,,,AppConfig,AppConfig,,1,,,aws_appconfig_,,appconfig_,AppConfig,AWS,,,,, +appconfig,appconfig,appconfig,appconfig,,appconfig,,,AppConfig,AppConfig,,1,2,,aws_appconfig_,,appconfig_,AppConfig,AWS,,,,, appconfigdata,appconfigdata,appconfigdata,appconfigdata,,appconfigdata,,,AppConfigData,AppConfigData,,1,,,aws_appconfigdata_,,appconfigdata_,AppConfig Data,AWS,,,,, appflow,appflow,appflow,appflow,,appflow,,,AppFlow,Appflow,,1,,,aws_appflow_,,appflow_,AppFlow,Amazon,,,,, appintegrations,appintegrations,appintegrationsservice,appintegrations,,appintegrations,,appintegrationsservice,AppIntegrations,AppIntegrationsService,,1,,,aws_appintegrations_,,appintegrations_,AppIntegrations,Amazon,,,,, @@ -68,7 +68,7 @@ cloudtrail,cloudtrail,cloudtrail,cloudtrail,,cloudtrail,,,CloudTrail,CloudTrail, cloudwatch,cloudwatch,cloudwatch,cloudwatch,,cloudwatch,,,CloudWatch,CloudWatch,,1,,aws_cloudwatch_(?!(event_|log_|query_)),aws_cloudwatch_,,cloudwatch_dashboard;cloudwatch_metric_;cloudwatch_composite_,CloudWatch,Amazon,,,,, application-insights,applicationinsights,applicationinsights,applicationinsights,,applicationinsights,,,ApplicationInsights,ApplicationInsights,,1,,,aws_applicationinsights_,,applicationinsights_,CloudWatch Application Insights,Amazon,,,,, evidently,evidently,cloudwatchevidently,evidently,,evidently,,cloudwatchevidently,Evidently,CloudWatchEvidently,,1,,,aws_evidently_,,evidently_,CloudWatch Evidently,Amazon,,,,, -internetmonitor,internetmonitor,internetmonitor,internetmonitor,,internetmonitor,,,InternetMonitor,InternetMonitor,,1,,,aws_internetmonitor_,,internetmonitor_,CloudWatch Internet Monitor,Amazon,,,,, 
+internetmonitor,internetmonitor,internetmonitor,internetmonitor,,internetmonitor,,,InternetMonitor,InternetMonitor,,,2,,aws_internetmonitor_,,internetmonitor_,CloudWatch Internet Monitor,Amazon,,,,, logs,logs,cloudwatchlogs,cloudwatchlogs,,logs,,cloudwatchlog;cloudwatchlogs,Logs,CloudWatchLogs,,1,2,aws_cloudwatch_(log_|query_),aws_logs_,,cloudwatch_log_;cloudwatch_query_,CloudWatch Logs,Amazon,,,,, rum,rum,cloudwatchrum,rum,,rum,,cloudwatchrum,RUM,CloudWatchRUM,,1,,,aws_rum_,,rum_,CloudWatch RUM,Amazon,,,,, synthetics,synthetics,synthetics,synthetics,,synthetics,,,Synthetics,Synthetics,,1,,,aws_synthetics_,,synthetics_,CloudWatch Synthetics,Amazon,,,,, @@ -154,7 +154,7 @@ emr-serverless,emrserverless,emrserverless,emrserverless,,emrserverless,,,EMRSer events,events,eventbridge,eventbridge,,events,,eventbridge;cloudwatchevents,Events,EventBridge,,1,,aws_cloudwatch_event_,aws_events_,,cloudwatch_event_,EventBridge,Amazon,,,,, schemas,schemas,schemas,schemas,,schemas,,,Schemas,Schemas,,1,,,aws_schemas_,,schemas_,EventBridge Schemas,Amazon,,,,, fis,fis,fis,fis,,fis,,,FIS,FIS,,,2,,aws_fis_,,fis_,FIS (Fault Injection Simulator),AWS,,,,, -finspace,finspace,finspace,finspace,,finspace,,,FinSpace,Finspace,,1,,,aws_finspace_,,finspace_,FinSpace,Amazon,,,,, +finspace,finspace,finspace,finspace,,finspace,,,FinSpace,Finspace,,,2,,aws_finspace_,,finspace_,FinSpace,Amazon,,,,, finspace-data,finspacedata,finspacedata,finspacedata,,finspacedata,,,FinSpaceData,FinSpaceData,,1,,,aws_finspacedata_,,finspacedata_,FinSpace Data,Amazon,,,,, fms,fms,fms,fms,,fms,,,FMS,FMS,,1,,,aws_fms_,,fms_,FMS (Firewall Manager),AWS,,,,, forecast,forecast,forecastservice,forecast,,forecast,,forecastservice,Forecast,ForecastService,,1,,,aws_forecast_,,forecast_,Forecast,Amazon,,,,, @@ -199,7 +199,7 @@ iotwireless,iotwireless,iotwireless,iotwireless,,iotwireless,,,IoTWireless,IoTWi ivs,ivs,ivs,ivs,,ivs,,,IVS,IVS,,1,,,aws_ivs_,,ivs_,IVS (Interactive Video),Amazon,,,,, 
ivschat,ivschat,ivschat,ivschat,,ivschat,,,IVSChat,Ivschat,,,2,,aws_ivschat_,,ivschat_,IVS (Interactive Video) Chat,Amazon,,,,, kendra,kendra,kendra,kendra,,kendra,,,Kendra,Kendra,,,2,,aws_kendra_,,kendra_,Kendra,Amazon,,,,, -keyspaces,keyspaces,keyspaces,,,keyspaces,,,Keyspaces,Keyspaces,,1,,,aws_keyspaces_,,keyspaces_,Keyspaces (for Apache Cassandra),Amazon,,,,, +keyspaces,keyspaces,keyspaces,keyspaces,,keyspaces,,,Keyspaces,Keyspaces,,,2,,aws_keyspaces_,,keyspaces_,Keyspaces (for Apache Cassandra),Amazon,,,,, kinesis,kinesis,kinesis,kinesis,,kinesis,,,Kinesis,Kinesis,,1,,aws_kinesis_stream,aws_kinesis_,,kinesis_stream,Kinesis,Amazon,,,,, kinesisanalytics,kinesisanalytics,kinesisanalytics,kinesisanalytics,,kinesisanalytics,,,KinesisAnalytics,KinesisAnalytics,,1,,aws_kinesis_analytics_,aws_kinesisanalytics_,,kinesis_analytics_,Kinesis Analytics,Amazon,,,,, kinesisanalyticsv2,kinesisanalyticsv2,kinesisanalyticsv2,kinesisanalyticsv2,,kinesisanalyticsv2,,,KinesisAnalyticsV2,KinesisAnalyticsV2,,1,,,aws_kinesisanalyticsv2_,,kinesisanalyticsv2_,Kinesis Analytics V2,Amazon,,,,, @@ -217,7 +217,7 @@ lexv2-models,lexv2models,lexmodelsv2,lexmodelsv2,,lexmodelsv2,,lexv2models,LexMo lex-runtime,lexruntime,lexruntimeservice,lexruntimeservice,,lexruntime,,lexruntimeservice,LexRuntime,LexRuntimeService,,1,,,aws_lexruntime_,,lexruntime_,Lex Runtime,Amazon,,,,, lexv2-runtime,lexv2runtime,lexruntimev2,lexruntimev2,,lexruntimev2,,lexv2runtime,LexRuntimeV2,LexRuntimeV2,,1,,,aws_lexruntimev2_,,lexruntimev2_,Lex Runtime V2,Amazon,,,,, license-manager,licensemanager,licensemanager,licensemanager,,licensemanager,,,LicenseManager,LicenseManager,,1,,,aws_licensemanager_,,licensemanager_,License Manager,AWS,,,,, -lightsail,lightsail,lightsail,lightsail,,lightsail,,,Lightsail,Lightsail,,1,,,aws_lightsail_,,lightsail_,Lightsail,Amazon,,,,, +lightsail,lightsail,lightsail,lightsail,,lightsail,,,Lightsail,Lightsail,x,,2,,aws_lightsail_,,lightsail_,Lightsail,Amazon,,,,, 
location,location,locationservice,location,,location,,locationservice,Location,LocationService,,1,,,aws_location_,,location_,Location,Amazon,,,,, lookoutequipment,lookoutequipment,lookoutequipment,lookoutequipment,,lookoutequipment,,,LookoutEquipment,LookoutEquipment,,1,,,aws_lookoutequipment_,,lookoutequipment_,Lookout for Equipment,Amazon,,,,, lookoutmetrics,lookoutmetrics,lookoutmetrics,lookoutmetrics,,lookoutmetrics,,,LookoutMetrics,LookoutMetrics,,1,,,aws_lookoutmetrics_,,lookoutmetrics_,Lookout for Metrics,Amazon,,,,, @@ -275,9 +275,9 @@ pinpoint-sms-voice,pinpointsmsvoice,pinpointsmsvoice,pinpointsmsvoice,,pinpoints pipes,pipes,pipes,pipes,,pipes,,,Pipes,Pipes,,,2,,aws_pipes_,,pipes_,EventBridge Pipes,Amazon,,,,, polly,polly,polly,polly,,polly,,,Polly,Polly,,1,,,aws_polly_,,polly_,Polly,Amazon,,,,, ,,,,,,,,,,,,,,,,,Porting Assistant for .NET,,x,,,,No SDK support -pricing,pricing,pricing,pricing,,pricing,,,Pricing,Pricing,,1,,,aws_pricing_,,pricing_,Pricing Calculator,AWS,,,,, +pricing,pricing,pricing,pricing,,pricing,,,Pricing,Pricing,,,2,,aws_pricing_,,pricing_,Pricing Calculator,AWS,,,,, proton,proton,proton,proton,,proton,,,Proton,Proton,,1,,,aws_proton_,,proton_,Proton,AWS,,,,, -qldb,qldb,qldb,qldb,,qldb,,,QLDB,QLDB,,1,,,aws_qldb_,,qldb_,QLDB (Quantum Ledger Database),Amazon,,,,, +qldb,qldb,qldb,qldb,,qldb,,,QLDB,QLDB,,,2,,aws_qldb_,,qldb_,QLDB (Quantum Ledger Database),Amazon,,,,, qldb-session,qldbsession,qldbsession,qldbsession,,qldbsession,,,QLDBSession,QLDBSession,,1,,,aws_qldbsession_,,qldbsession_,QLDB Session,Amazon,,,,, quicksight,quicksight,quicksight,quicksight,,quicksight,,,QuickSight,QuickSight,,1,,,aws_quicksight_,,quicksight_,QuickSight,Amazon,,,,, ram,ram,ram,ram,,ram,,,RAM,RAM,,1,,,aws_ram_,,ram_,RAM (Resource Access Manager),AWS,,,,, @@ -304,7 +304,7 @@ route53-recovery-readiness,route53recoveryreadiness,route53recoveryreadiness,rou 
route53resolver,route53resolver,route53resolver,route53resolver,,route53resolver,,,Route53Resolver,Route53Resolver,,1,,aws_route53_resolver_,aws_route53resolver_,,route53_resolver_,Route 53 Resolver,Amazon,,,,, s3api,s3api,s3,s3,,s3,,s3api,S3,S3,x,1,,aws_(canonical_user_id|s3_bucket|s3_object),aws_s3_,,s3_bucket;s3_object;canonical_user_id,S3 (Simple Storage),Amazon,,,AWS_S3_ENDPOINT,TF_AWS_S3_ENDPOINT, s3control,s3control,s3control,s3control,,s3control,,,S3Control,S3Control,,1,2,aws_(s3_account_|s3control_|s3_access_),aws_s3control_,,s3control;s3_account_;s3_access_,S3 Control,Amazon,,,,, -glacier,glacier,glacier,glacier,,glacier,,,Glacier,Glacier,,1,,,aws_glacier_,,glacier_,S3 Glacier,Amazon,,,,, +glacier,glacier,glacier,glacier,,glacier,,,Glacier,Glacier,,,2,,aws_glacier_,,glacier_,S3 Glacier,Amazon,,,,, s3outposts,s3outposts,s3outposts,s3outposts,,s3outposts,,,S3Outposts,S3Outposts,,1,,,aws_s3outposts_,,s3outposts_,S3 on Outposts,Amazon,,,,, sagemaker,sagemaker,sagemaker,sagemaker,,sagemaker,,,SageMaker,SageMaker,,1,,,aws_sagemaker_,,sagemaker_,SageMaker,Amazon,,,,, sagemaker-a2i-runtime,sagemakera2iruntime,augmentedairuntime,sagemakera2iruntime,,sagemakera2iruntime,,augmentedairuntime,SageMakerA2IRuntime,AugmentedAIRuntime,,1,,,aws_sagemakera2iruntime_,,sagemakera2iruntime_,SageMaker A2I (Augmented AI),Amazon,,,,, @@ -344,11 +344,11 @@ storagegateway,storagegateway,storagegateway,storagegateway,,storagegateway,,,St sts,sts,sts,sts,,sts,,,STS,STS,x,1,,aws_caller_identity,aws_sts_,,caller_identity,STS (Security Token),AWS,,,AWS_STS_ENDPOINT,TF_AWS_STS_ENDPOINT, ,,,,,,,,,,,,,,,,,Sumerian,Amazon,x,,,,No SDK support support,support,support,support,,support,,,Support,Support,,1,,,aws_support_,,support_,Support,AWS,,,,, -swf,swf,swf,swf,,swf,,,SWF,SWF,,1,,,aws_swf_,,swf_,SWF (Simple Workflow),Amazon,,,,, +swf,swf,swf,swf,,swf,,,SWF,SWF,,,2,,aws_swf_,,swf_,SWF (Simple Workflow),Amazon,,,,, ,,,,,,,,,,,,,,,,,Tag Editor,AWS,x,,,,Part of Resource Groups Tagging 
textract,textract,textract,textract,,textract,,,Textract,Textract,,1,,,aws_textract_,,textract_,Textract,Amazon,,,,, timestream-query,timestreamquery,timestreamquery,timestreamquery,,timestreamquery,,,TimestreamQuery,TimestreamQuery,,1,,,aws_timestreamquery_,,timestreamquery_,Timestream Query,Amazon,,,,, -timestream-write,timestreamwrite,timestreamwrite,timestreamwrite,,timestreamwrite,,,TimestreamWrite,TimestreamWrite,,1,,,aws_timestreamwrite_,,timestreamwrite_,Timestream Write,Amazon,,,,, +timestream-write,timestreamwrite,timestreamwrite,timestreamwrite,,timestreamwrite,,,TimestreamWrite,TimestreamWrite,,,2,,aws_timestreamwrite_,,timestreamwrite_,Timestream Write,Amazon,,,,, ,,,,,,,,,,,,,,,,,Tools for PowerShell,AWS,x,,,,No SDK support ,,,,,,,,,,,,,,,,,Training and Certification,AWS,x,,,,No SDK support transcribe,transcribe,transcribeservice,transcribe,,transcribe,,transcribeservice,Transcribe,TranscribeService,,,2,,aws_transcribe_,,transcribe_,Transcribe,Amazon,,,,, @@ -374,6 +374,7 @@ workdocs,workdocs,workdocs,workdocs,,workdocs,,,WorkDocs,WorkDocs,,1,,,aws_workd worklink,worklink,worklink,worklink,,worklink,,,WorkLink,WorkLink,,1,,,aws_worklink_,,worklink_,WorkLink,Amazon,,,,, workmail,workmail,workmail,workmail,,workmail,,,WorkMail,WorkMail,,1,,,aws_workmail_,,workmail_,WorkMail,Amazon,,,,, workmailmessageflow,workmailmessageflow,workmailmessageflow,workmailmessageflow,,workmailmessageflow,,,WorkMailMessageFlow,WorkMailMessageFlow,,1,,,aws_workmailmessageflow_,,workmailmessageflow_,WorkMail Message Flow,Amazon,,,,, -workspaces,workspaces,workspaces,workspaces,,workspaces,,,WorkSpaces,WorkSpaces,,1,,,aws_workspaces_,,workspaces_,WorkSpaces,Amazon,,,,, +workspaces,workspaces,workspaces,workspaces,,workspaces,,,WorkSpaces,WorkSpaces,,,2,,aws_workspaces_,,workspaces_,WorkSpaces,Amazon,,,,, workspaces-web,workspacesweb,workspacesweb,workspacesweb,,workspacesweb,,,WorkSpacesWeb,WorkSpacesWeb,,1,,,aws_workspacesweb_,,workspacesweb_,WorkSpaces Web,Amazon,,,,, 
xray,xray,xray,xray,,xray,,,XRay,XRay,,,2,,aws_xray_,,xray_,X-Ray,AWS,,,,, +verifiedpermissions,verifiedpermissions,verifiedpermissions,verifiedpermissions,,verifiedpermissions,,,VerifiedPermissions,VerifiedPermissions,,,2,,aws_verifiedpermissions_,,verifiedpermissions_,Verified Permissions,Amazon,,,,, diff --git a/names/names_test.go b/names/names_test.go index 637f9cd4f2b..1e994eb68e2 100644 --- a/names/names_test.go +++ b/names/names_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package names import ( diff --git a/skaff/cmd/datasource.go b/skaff/cmd/datasource.go index 5a2efdf99a6..c4f395da675 100644 --- a/skaff/cmd/datasource.go +++ b/skaff/cmd/datasource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cmd import ( diff --git a/skaff/cmd/resource.go b/skaff/cmd/resource.go index e38094b399c..33669ccfde3 100644 --- a/skaff/cmd/resource.go +++ b/skaff/cmd/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cmd import ( diff --git a/skaff/cmd/root.go b/skaff/cmd/root.go index b7082283d2e..34c3a9b6395 100644 --- a/skaff/cmd/root.go +++ b/skaff/cmd/root.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package cmd import ( diff --git a/skaff/datasource/datasource.go b/skaff/datasource/datasource.go index baaf3a97491..ef4a5bbbaeb 100644 --- a/skaff/datasource/datasource.go +++ b/skaff/datasource/datasource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasource import ( diff --git a/skaff/datasource/datasource.tmpl b/skaff/datasource/datasource.tmpl index c7753d921f1..c3eb17c04ab 100644 --- a/skaff/datasource/datasource.tmpl +++ b/skaff/datasource/datasource.tmpl @@ -17,10 +17,7 @@ package {{ .ServicePackage }} // In other words, as generated, this is a rough outline of the work you will // need to do. 
If something doesn't make sense for your situation, get rid of // it. -// -// Remember to register this new data source in the provider -// (internal/provider/provider.go) once you finish. Otherwise, Terraform won't -// know about it.{{- end }} +{{- end }} import ( {{- if .IncludeComments }} @@ -47,8 +44,7 @@ import ( "regexp" "strings" "time" - -{{- if .AWSGoSDKV2 }} +{{ if .AWSGoSDKV2 }} "github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/service/{{ .ServicePackage }}" "github.com/aws/aws-sdk-go-v2/service/{{ .ServicePackage }}/types" @@ -67,6 +63,7 @@ import ( tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/hashicorp/terraform-provider-aws/names" ) {{ if .IncludeComments }} // TIP: ==== FILE STRUCTURE ==== @@ -151,6 +148,7 @@ const ( ) func dataSource{{ .DataSource }}Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics {{- if .IncludeComments }} // TIP: ==== RESOURCE READ ==== // Generally, the Read function should do the following things. Make @@ -161,13 +159,13 @@ func dataSource{{ .DataSource }}Read(ctx context.Context, d *schema.ResourceData // 3. Set the ID // 4. Set the arguments and attributes // 5. Set the tags - // 6. Return nil + // 6. Return diags {{- end }} {{- if .IncludeComments }} // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Get information about a resource from AWS using an API Get, // List, or Describe-type function, or, better yet, using a finder. 
Data @@ -179,7 +177,7 @@ func dataSource{{ .DataSource }}Read(ctx context.Context, d *schema.ResourceData out, err := find{{ .DataSource }}ByName(ctx, conn, name) if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionReading, DSName{{ .DataSource }}, name, err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionReading, DSName{{ .DataSource }}, name, err)...) } {{ if .IncludeComments }} // TIP: -- 3. Set the ID @@ -215,19 +213,19 @@ func dataSource{{ .DataSource }}Read(ctx context.Context, d *schema.ResourceData // https://hashicorp.github.io/terraform-provider-aws/data-handling-and-conversion/ {{- end }} if err := d.Set("complex_argument", flattenComplexArguments(out.ComplexArguments)); err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionSetting, DSName{{ .DataSource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionSetting, DSName{{ .DataSource }}, d.Id(), err)...) } {{ if .IncludeComments }} // TIP: Setting a JSON string to avoid errorneous diffs. {{- end }} - p, err := verify.SecondJSONUnlessEquivalent(d.Get("policy").(string), aws.ToString(out.Policy)) + p, err := verify.SecondJSONUnlessEquivalent(d.Get("policy").(string), {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(out.Policy)) if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionSetting, DSName{{ .DataSource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionSetting, DSName{{ .DataSource }}, d.Id(), err)...) } p, err = structure.NormalizeJsonString(p) if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionReading, DSName{{ .DataSource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionReading, DSName{{ .DataSource }}, d.Id(), err)...) 
} d.Set("policy", p) @@ -243,10 +241,10 @@ func dataSource{{ .DataSource }}Read(ctx context.Context, d *schema.ResourceData //lintignore:AWSR002 if err := d.Set("tags", KeyValueTags(out.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionSetting, DSName{{ .DataSource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionSetting, DSName{{ .DataSource }}, d.Id(), err)...) } {{ if .IncludeComments }} - // TIP: -- 6. Return nil + // TIP: -- 6. Return diags {{- end }} - return nil + return diags } diff --git a/skaff/datasource/datasource_test.go b/skaff/datasource/datasource_test.go index 0d7172bf8db..41a59b2efda 100644 --- a/skaff/datasource/datasource_test.go +++ b/skaff/datasource/datasource_test.go @@ -1 +1,4 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package datasource diff --git a/skaff/datasource/datasourcefw.tmpl b/skaff/datasource/datasourcefw.tmpl index 50668b4b8fd..fd1f1aa2f52 100644 --- a/skaff/datasource/datasourcefw.tmpl +++ b/skaff/datasource/datasourcefw.tmpl @@ -48,8 +48,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/datasource/schema" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/names" ) @@ -165,7 +165,7 @@ func (d *dataSource{{ .DataSource }}) Read(ctx context.Context, req datasource.R {{- if .IncludeComments }} // TIP: -- 1. 
Get a client connection to the relevant service {{- end }} - conn := d.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := d.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Fetch the config {{- end }} diff --git a/skaff/datasource/datasourcetest.tmpl b/skaff/datasource/datasourcetest.tmpl index 0c1034f7763..6e30043c052 100644 --- a/skaff/datasource/datasourcetest.tmpl +++ b/skaff/datasource/datasourcetest.tmpl @@ -17,10 +17,7 @@ package {{ .ServicePackage }}_test // In other words, as generated, this is a rough outline of the work you will // need to do. If something doesn't make sense for your situation, get rid of // it. -// -// Remember to register this new data source in the provider -// (internal/provider/provider.go) once you finish. Otherwise, Terraform won't -// know about it.{{- end }} +{{- end }} import ( {{- if .IncludeComments }} @@ -102,6 +99,8 @@ import ( // intricate, they should be unit tested. {{- end }} func Test{{ .DataSource }}ExampleUnitTest(t *testing.T) { + t.Parallel() + testCases := []struct { TestName string Input string @@ -129,7 +128,9 @@ func Test{{ .DataSource }}ExampleUnitTest(t *testing.T) { } for _, testCase := range testCases { + testCase := testCase t.Run(testCase.TestName, func(t *testing.T) { + t.Parallel() got, err := tf{{ .ServicePackage }}.FunctionFromDataSource(testCase.Input) if err != nil && !testCase.Error { diff --git a/skaff/datasource/websitedoc.tmpl b/skaff/datasource/websitedoc.tmpl index a956b26c064..bbb3d88c851 100644 --- a/skaff/datasource/websitedoc.tmpl +++ b/skaff/datasource/websitedoc.tmpl @@ -6,6 +6,17 @@ description: |- Terraform data source for managing an AWS {{ .HumanFriendlyService }} {{ .HumanDataSourceName }}. 
--- +{{- if .IncludeComments }} + +{{- end }} + # Data Source: aws_{{ .ServicePackage }}_{{ .DataSourceSnake }} Terraform data source for managing an AWS {{ .HumanFriendlyService }} {{ .HumanDataSourceName }}. diff --git a/skaff/go.mod b/skaff/go.mod index 84b82c0c3c1..98f97798fdb 100644 --- a/skaff/go.mod +++ b/skaff/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/terraform-provider-aws/skaff -go 1.19 +go 1.20 require ( github.com/hashicorp/terraform-provider-aws v1.60.1-0.20220322001452-8f7a597d0c24 diff --git a/skaff/main.go b/skaff/main.go index eaefc257384..68a0e29bd38 100644 --- a/skaff/main.go +++ b/skaff/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package main import "github.com/hashicorp/terraform-provider-aws/skaff/cmd" diff --git a/skaff/resource/resource.go b/skaff/resource/resource.go index 5df8b712db8..ccaaff232ef 100644 --- a/skaff/resource/resource.go +++ b/skaff/resource/resource.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/skaff/resource/resource.tmpl b/skaff/resource/resource.tmpl index 364c7e12209..d0ba353acce 100644 --- a/skaff/resource/resource.tmpl +++ b/skaff/resource/resource.tmpl @@ -17,10 +17,7 @@ package {{ .ServicePackage }} // In other words, as generated, this is a rough outline of the work you will // need to do. If something doesn't make sense for your situation, get rid of // it. -// -// Remember to register this new resource in the provider -// (internal/provider/provider.go) once you finish. 
Otherwise, Terraform won't -// know about it.{{- end }} +{{- end }} import ( {{- if .IncludeComments }} @@ -63,6 +60,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -205,6 +203,7 @@ const ( ) func resource{{ .Resource }}Create(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics {{- if .IncludeComments }} // TIP: ==== RESOURCE CREATE ==== // Generally, the Create function should do the following things. Make @@ -223,7 +222,7 @@ func resource{{ .Resource }}Create(ctx context.Context, d *schema.ResourceData, // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Populate a create input structure {{- end }} @@ -240,7 +239,7 @@ func resource{{ .Resource }}Create(ctx context.Context, d *schema.ResourceData, // below. Many resources do include tags so this a reminder to include them // where possible. {{- end }} - Tags: GetTagsIn(ctx), + Tags: getTagsIn(ctx), } if v, ok := d.GetOk("max_size"); ok { @@ -271,30 +270,31 @@ func resource{{ .Resource }}Create(ctx context.Context, d *schema.ResourceData, // TIP: Since d.SetId() has not been called yet, you cannot use d.Id() // in error messages at this point. 
{{- end }} - return create.DiagError(names.{{ .Service }}, create.ErrActionCreating, ResName{{ .Resource }}, d.Get("name").(string), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionCreating, ResName{{ .Resource }}, d.Get("name").(string), err)...) } if out == nil || out.{{ .Resource }} == nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionCreating, ResName{{ .Resource }}, d.Get("name").(string), errors.New("empty output")) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionCreating, ResName{{ .Resource }}, d.Get("name").(string), errors.New("empty output"))...) } {{ if .IncludeComments }} // TIP: -- 4. Set the minimum arguments and/or attributes for the Read function to // work. {{- end }} - d.SetId(aws.ToString(out.{{ .Resource }}.{{ .Resource }}ID)) + d.SetId({{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(out.{{ .Resource }}.{{ .Resource }}ID)) {{ if .IncludeComments }} // TIP: -- 5. Use a waiter to wait for create to complete {{- end }} if _, err := wait{{ .Resource }}Created(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionWaitingForCreation, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionWaitingForCreation, ResName{{ .Resource }}, d.Id(), err)...) } {{ if .IncludeComments }} // TIP: -- 6. Call the Read function in the Create return {{- end }} - return resource{{ .Resource }}Read(ctx, d, meta) + return append(diags, resource{{ .Resource }}Read(ctx, d, meta)...) } func resource{{ .Resource }}Read(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics {{- if .IncludeComments }} // TIP: ==== RESOURCE READ ==== // Generally, the Read function should do the following things. 
Make @@ -305,13 +305,13 @@ func resource{{ .Resource }}Read(ctx context.Context, d *schema.ResourceData, me // 3. Set ID to empty where resource is not new and not found // 4. Set the arguments and attributes // 5. Set the tags - // 6. Return nil + // 6. Return diags {{- end }} {{- if .IncludeComments }} // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Get the resource from AWS using an API Get, List, or Describe- // type function, or, better yet, using a finder. @@ -323,11 +323,11 @@ func resource{{ .Resource }}Read(ctx context.Context, d *schema.ResourceData, me if !d.IsNewResource() && tfresource.NotFound(err) { log.Printf("[WARN] {{ .Service }} {{ .Resource }} (%s) not found, removing from state", d.Id()) d.SetId("") - return nil + return diags } if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionReading, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionReading, ResName{{ .Resource }}, d.Id(), err)...) } {{ if .IncludeComments }} // TIP: -- 4. Set the arguments and attributes @@ -354,30 +354,31 @@ func resource{{ .Resource }}Read(ctx context.Context, d *schema.ResourceData, me // https://hashicorp.github.io/terraform-provider-aws/data-handling-and-conversion/#root-typeset-of-resource-and-aws-list-of-structure {{- end }} if err := d.Set("complex_argument", flattenComplexArguments(out.ComplexArguments)); err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionSetting, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionSetting, ResName{{ .Resource }}, d.Id(), err)...) 
} {{ if .IncludeComments }} // TIP: Setting a JSON string to avoid erroneous diffs. {{- end }} - p, err := verify.SecondJSONUnlessEquivalent(d.Get("policy").(string), aws.ToString(out.Policy)) + p, err := verify.SecondJSONUnlessEquivalent(d.Get("policy").(string), {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(out.Policy)) if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionSetting, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionSetting, ResName{{ .Resource }}, d.Id(), err)...) } p, err = structure.NormalizeJsonString(p) if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionSetting, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionSetting, ResName{{ .Resource }}, d.Id(), err)...) } d.Set("policy", p) {{ if .IncludeComments }} - // TIP: -- 6. Return nil + // TIP: -- 6. Return diags {{- end }} - return nil + return diags } func resource{{ .Resource }}Update(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics {{- if .IncludeComments }} // TIP: ==== RESOURCE UPDATE ==== // Not all resources have Update functions. There are a few reasons: @@ -403,7 +404,7 @@ func resource{{ .Resource }}Update(ctx context.Context, d *schema.ResourceData, // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2.
Populate a modify input structure and check for changes // @@ -426,9 +427,9 @@ func resource{{ .Resource }}Update(ctx context.Context, d *schema.ResourceData, if !update { {{- if .IncludeComments }} // TIP: If update doesn't do anything at all, which is rare, you can - // return nil. Otherwise, return a read call, as below. + // return diags. Otherwise, return a read call, as below. {{- end }} - return nil + return diags } {{ if .IncludeComments }} // TIP: -- 3. Call the AWS modify/update function @@ -440,21 +441,22 @@ func resource{{ .Resource }}Update(ctx context.Context, d *schema.ResourceData, out, err := conn.Update{{ .Resource }}WithContext(ctx, in) {{- end }} if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionUpdating, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionUpdating, ResName{{ .Resource }}, d.Id(), err)...) } {{ if .IncludeComments }} // TIP: -- 4. Use a waiter to wait for update to complete {{- end }} - if _, err := wait{{ .Resource }}Updated(ctx, conn, aws.ToString(out.OperationId), d.Timeout(schema.TimeoutUpdate)); err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionWaitingForUpdate, ResName{{ .Resource }}, d.Id(), err) + if _, err := wait{{ .Resource }}Updated(ctx, conn, {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(out.OperationId), d.Timeout(schema.TimeoutUpdate)); err != nil { + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionWaitingForUpdate, ResName{{ .Resource }}, d.Id(), err)...) } {{ if .IncludeComments }} // TIP: -- 5. Call the Read function in the Update return {{- end }} - return resource{{ .Resource }}Read(ctx, d, meta) + return append(diags, resource{{ .Resource }}Read(ctx, d, meta)...) 
} func resource{{ .Resource }}Delete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + var diags diag.Diagnostics {{- if .IncludeComments }} // TIP: ==== RESOURCE DELETE ==== // Most resources have Delete functions. There are rare situations @@ -470,13 +472,13 @@ func resource{{ .Resource }}Delete(ctx context.Context, d *schema.ResourceData, // 2. Populate a delete input structure // 3. Call the AWS delete function // 4. Use a waiter to wait for delete to complete - // 5. Return nil + // 5. Return diags {{- end }} {{- if .IncludeComments }} // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := meta.(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Populate a delete input structure {{- end }} @@ -498,33 +500,30 @@ func resource{{ .Resource }}Delete(ctx context.Context, d *schema.ResourceData, // resource. If that happens, we don't want it to show up as an error. {{- end }} {{- if .AWSGoSDKV2 }} + if errs.IsA[*types.ResourceNotFoundException](err) { + return diags + } if err != nil { - var nfe *types.ResourceNotFoundException - if errors.As(err, &nfe) { - return nil - } - - return create.DiagError(names.{{ .Service }}, create.ErrActionDeleting, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionDeleting, ResName{{ .Resource }}, d.Id(), err)...) } {{- else }} if tfawserr.ErrCodeEquals(err, {{ .ServiceLower }}.ErrCodeResourceNotFoundException) { - return nil + return diags } - if err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionDeleting, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionDeleting, ResName{{ .Resource }}, d.Id(), err)...)
} {{- end }} {{ if .IncludeComments }} // TIP: -- 4. Use a waiter to wait for delete to complete {{- end }} if _, err := wait{{ .Resource }}Deleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - return create.DiagError(names.{{ .Service }}, create.ErrActionWaitingForDeletion, ResName{{ .Resource }}, d.Id(), err) + return append(diags, create.DiagError(names.{{ .Service }}, create.ErrActionWaitingForDeletion, ResName{{ .Resource }}, d.Id(), err)...) } {{ if .IncludeComments }} - // TIP: -- 5. Return nil + // TIP: -- 5. Return diags {{- end }} - return nil + return diags } {{ if .IncludeComments }} // TIP: ==== STATUS CONSTANTS ==== @@ -648,7 +647,7 @@ func status{{ .Resource }}(ctx context.Context, conn *{{ .ServiceLower }}.{{ .Se return nil, "", err } - return out, aws.ToString(out.Status), nil + return out, {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(out.Status), nil } } {{ if .IncludeComments }} @@ -669,15 +668,13 @@ func find{{ .Resource }}ByID(ctx context.Context, conn *{{ .ServiceLower }}.{{ . {{- if .AWSGoSDKV2 }} out, err := conn.Get{{ .Resource }}(ctx, in) - if err != nil { - var nfe *types.ResourceNotFoundException - if errors.As(err, &nfe) { - return nil, &retry.NotFoundError{ - LastError: err, - LastRequest: in, - } + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil, &retry.NotFoundError{ + LastError: err, + LastRequest: in, } - + } + if err != nil { return nil, err } {{- else }} @@ -688,7 +685,6 @@ func find{{ .Resource }}ByID(ctx context.Context, conn *{{ .ServiceLower }}.{{ .
LastRequest: in, } } - if err != nil { return nil, err } @@ -721,11 +717,11 @@ func flattenComplexArgument(apiObject *{{ .ServiceLower }}.ComplexArgument) map[ m := map[string]interface{}{} if v := apiObject.SubFieldOne; v != nil { - m["sub_field_one"] = aws.ToString(v) + m["sub_field_one"] = {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(v) } if v := apiObject.SubFieldTwo; v != nil { - m["sub_field_two"] = aws.ToString(v) + m["sub_field_two"] = {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(v) } return m diff --git a/skaff/resource/resource_test.go b/skaff/resource/resource_test.go index 6f8b9fce615..7fc60594929 100644 --- a/skaff/resource/resource_test.go +++ b/skaff/resource/resource_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package resource import ( diff --git a/skaff/resource/resourcefw.tmpl b/skaff/resource/resourcefw.tmpl index d551963298f..223af814e79 100644 --- a/skaff/resource/resourcefw.tmpl +++ b/skaff/resource/resourcefw.tmpl @@ -60,8 +60,8 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-provider-aws/internal/create" - "github.com/hashicorp/terraform-provider-aws/internal/flex" "github.com/hashicorp/terraform-provider-aws/internal/framework" + "github.com/hashicorp/terraform-provider-aws/internal/framework/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/names" @@ -242,7 +242,7 @@ func (r *resource{{ .Resource }}) Create(ctx context.Context, req resource.Creat // TIP: -- 1. 
Get a client connection to the relevant service {{- end }} - conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Fetch the plan {{- end }} @@ -349,7 +349,7 @@ func (r *resource{{ .Resource }}) Read(ctx context.Context, req resource.ReadReq // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Fetch the state {{- end }} @@ -434,7 +434,7 @@ func (r *resource{{ .Resource }}) Update(ctx context.Context, req resource.Updat {{- if .IncludeComments }} // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Fetch the plan {{- end }} @@ -552,7 +552,7 @@ func (r *resource{{ .Resource }}) Delete(ctx context.Context, req resource.Delet {{- if .IncludeComments }} // TIP: -- 1. Get a client connection to the relevant service {{- end }} - conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := r.Meta().{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} {{ if .IncludeComments }} // TIP: -- 2. Fetch the state {{- end }} diff --git a/skaff/resource/resourcetest.tmpl b/skaff/resource/resourcetest.tmpl index f22c05865f9..e44a046b20a 100644 --- a/skaff/resource/resourcetest.tmpl +++ b/skaff/resource/resourcetest.tmpl @@ -17,10 +17,7 @@ package {{ .ServicePackage }}_test // In other words, as generated, this is a rough outline of the work you will // need to do. 
If something doesn't make sense for your situation, get rid of // it. -// -// Remember to register this new resource in the provider -// (internal/provider/provider.go) once you finish. Otherwise, Terraform won't -// know about it.{{- end }} +{{- end }} import ( {{- if .IncludeComments }} @@ -53,23 +50,21 @@ import ( "github.com/aws/aws-sdk-go/service/{{ .ServicePackage }}" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" {{- end }} - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" sdkacctest "github.com/hashicorp/terraform-plugin-testing/helper/acctest" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/create" + "github.com/hashicorp/terraform-provider-aws/internal/errs" + "github.com/hashicorp/terraform-provider-aws/names" {{- if .IncludeComments }} // TIP: You will often need to import the package that this test file lives - // in. Since it is in the "test" context, it must import the package to use - // any normal context constants, variables, or functions. + // in. Since it is in the "test" context, it must import the package to use + // any normal context constants, variables, or functions. {{- end }} tf{{ .ServicePackage }} "github.com/hashicorp/terraform-provider-aws/internal/service/{{ .ServicePackage }}" -{{- if .AWSGoSDKV2 }} - "github.com/hashicorp/terraform-provider-aws/names" -{{- end }} ) {{ if .IncludeComments }} // TIP: File Structure. The basic outline for all test files should be as @@ -85,7 +80,7 @@ import ( // 7. Helper functions (exists, destroy, check, etc.) // 8. Functions that return Terraform configurations {{- end }} -{{ if .IncludeComments }} +{{- if .IncludeComments }} // TIP: ==== UNIT TESTS ==== // This is an example of a unit test. 
Its name is not prefixed with @@ -103,6 +98,8 @@ import ( // intricate, they should be unit tested. {{- end }} func Test{{ .Resource }}ExampleUnitTest(t *testing.T) { + t.Parallel() + testCases := []struct { TestName string Input string @@ -130,7 +127,9 @@ func Test{{ .Resource }}ExampleUnitTest(t *testing.T) { } for _, testCase := range testCases { + testCase := testCase t.Run(testCase.TestName, func(t *testing.T) { + t.Parallel() got, err := tf{{ .ServicePackage }}.FunctionFromResource(testCase.Input) if err != nil && !testCase.Error { @@ -148,7 +147,6 @@ func Test{{ .Resource }}ExampleUnitTest(t *testing.T) { } } {{ if .IncludeComments }} - // TIP: ==== ACCEPTANCE TESTS ==== // This is an example of a basic acceptance test. This should test as much of // standard functionality of the resource as possible, and test importing, if @@ -160,8 +158,8 @@ func Test{{ .Resource }}ExampleUnitTest(t *testing.T) { func TestAcc{{ .Service }}{{ .Resource }}_basic(t *testing.T) { ctx := acctest.Context(t) {{- if .IncludeComments }} - // TIP: This is a long-running test guard for tests that run longer than - // 300s (5 min) generally. + // TIP: This is a long-running test guard for tests that run longer than + // 300s (5 min) generally. 
{{- end }} if testing.Short() { t.Skip("skipping long-running test in short mode") @@ -268,7 +266,7 @@ func TestAcc{{ .Service }}{{ .Resource }}_disappears(t *testing.T) { func testAccCheck{{ .Resource }}Destroy(ctx context.Context) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} for _, rs := range s.RootModule().Resources { if rs.Type != "aws_{{ .ServicePackage }}_{{ .ResourceSnake }}" { @@ -288,18 +286,17 @@ func testAccCheck{{ .Resource }}Destroy(ctx context.Context) resource.TestCheckF {{ .Resource }}Id: aws.String(rs.Primary.ID), }) {{- end }} + {{- if .AWSGoSDKV2 }} + if errs.IsA[*types.ResourceNotFoundException](err) { + return nil + } + {{ else }} + if tfawserr.ErrCodeEquals(err, {{ .ServicePackage }}.ErrCodeNotFoundException) { + return nil + } + {{ end -}} if err != nil { - {{- if .AWSGoSDKV2 }} - var nfe *types.ResourceNotFoundException - if errors.As(err, &nfe) { - return nil - } - {{- else }} - if tfawserr.ErrCodeEquals(err, {{ .ServicePackage }}.ErrCodeNotFoundException) { - return nil - } - {{- end }} return err } return create.Error(names.{{ .Service }}, create.ErrActionCheckingDestroyed, tf{{ .ServicePackage }}.ResName{{ .Resource }}, rs.Primary.ID, errors.New("not destroyed")) @@ -320,7 +317,7 @@ func testAccCheck{{ .Resource }}Exists(ctx context.Context, name string, {{ .Res return create.Error(names.{{ .Service }}, create.ErrActionCheckingExistence, tf{{ .ServicePackage }}.ResName{{ .Resource }}, name, errors.New("not set")) } - conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }}
{{- if .AWSGoSDKV2 }} resp, err := conn.Describe{{ .Resource }}(ctx, &{{ .ServicePackage }}.Describe{{ .Resource }}Input{ @@ -343,7 +340,7 @@ func testAccCheck{{ .Resource }}Exists(ctx context.Context, name string, {{ .Res } func testAccPreCheck(ctx context.Context, t *testing.T) { - conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(){{ else }}Conn(){{ end }} + conn := acctest.Provider.Meta().(*conns.AWSClient).{{ .Service }}{{ if .AWSGoSDKV2 }}Client(ctx){{ else }}Conn(ctx){{ end }} input := &{{ .ServicePackage }}.List{{ .Resource }}sInput{} @@ -356,7 +353,6 @@ func testAccPreCheck(ctx context.Context, t *testing.T) { if acctest.PreCheckSkipError(err) { t.Skipf("skipping acceptance testing: %s", err) } - if err != nil { t.Fatalf("unexpected PreCheck error: %s", err) } @@ -364,8 +360,8 @@ func testAccPreCheck(ctx context.Context, t *testing.T) { func testAccCheck{{ .Resource }}NotRecreated(before, after *{{ .ServicePackage }}.Describe{{ .Resource }}Response) resource.TestCheckFunc { return func(s *terraform.State) error { - if before, after := aws.StringValue(before.{{ .Resource }}Id), aws.StringValue(after.{{ .Resource }}Id); before != after { - return create.Error(names.{{ .Service }}, create.ErrActionCheckingNotRecreated, tf{{ .ServicePackage }}.ResName{{ .Resource }}, aws.StringValue(before.{{ .Resource }}Id), errors.New("recreated")) + if before, after := {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(before.{{ .Resource }}Id), {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(after.{{ .Resource }}Id); before != after { + return create.Error(names.{{ .Service }}, create.ErrActionCheckingNotRecreated, tf{{ .ServicePackage }}.ResName{{ .Resource }}, {{ if .AWSGoSDKV2 }}aws.ToString{{ else }}aws.StringValue{{ end }}(before.{{ .Resource }}Id), errors.New("recreated")) } return nil diff --git a/skaff/resource/websitedoc.tmpl b/skaff/resource/websitedoc.tmpl index c78b975aced..e4121155fc3 
100644 --- a/skaff/resource/websitedoc.tmpl +++ b/skaff/resource/websitedoc.tmpl @@ -6,6 +6,16 @@ description: |- Terraform resource for managing an AWS {{ .HumanFriendlyService }} {{ .HumanResourceName }}. --- +{{- if .IncludeComments }} +` +{{- end }} # Resource: aws_{{ .ServicePackage }}_{{ .ResourceSnake }} Terraform resource for managing an AWS {{ .HumanFriendlyService }} {{ .HumanResourceName }}. diff --git a/tools/tfsdk2fw/go.mod b/tools/tfsdk2fw/go.mod index 7c95d97bcff..d4026cc3a94 100644 --- a/tools/tfsdk2fw/go.mod +++ b/tools/tfsdk2fw/go.mod @@ -1,101 +1,125 @@ module github.com/hashicorp/terraform-provider-aws/tools/tfsdk2fw -go 1.19 +go 1.20 require ( - github.com/hashicorp/terraform-plugin-sdk/v2 v2.25.0 + github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0 github.com/hashicorp/terraform-provider-aws v1.60.1-0.20220322001452-8f7a597d0c24 - golang.org/x/exp v0.0.0-20230206171751-46f607a40771 + golang.org/x/exp v0.0.0-20230510235704-dd950f8aeaea ) require ( github.com/Masterminds/goutils v1.1.1 // indirect github.com/Masterminds/semver/v3 v3.1.1 // indirect github.com/Masterminds/sprig/v3 v3.2.1 // indirect - github.com/ProtonMail/go-crypto v0.0.0-20230201104953-d1d05f4e2bfb // indirect + github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 // indirect github.com/agext/levenshtein v1.2.3 // indirect github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310 // indirect - github.com/aws/aws-sdk-go v1.44.226 // indirect - github.com/aws/aws-sdk-go-v2 v1.17.7 // indirect - github.com/aws/aws-sdk-go-v2/config v1.18.12 // indirect - github.com/aws/aws-sdk-go-v2/credentials v1.13.12 // indirect - github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.1 // indirect - github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.31 // indirect - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.25 // indirect - github.com/aws/aws-sdk-go-v2/internal/ini v1.3.29 // indirect - 
github.com/aws/aws-sdk-go-v2/service/auditmanager v1.24.3 // indirect - github.com/aws/aws-sdk-go-v2/service/cloudcontrol v1.11.7 // indirect - github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.20.7 // indirect - github.com/aws/aws-sdk-go-v2/service/comprehend v1.22.2 // indirect - github.com/aws/aws-sdk-go-v2/service/computeoptimizer v1.21.5 // indirect - github.com/aws/aws-sdk-go-v2/service/ec2 v1.91.0 // indirect - github.com/aws/aws-sdk-go-v2/service/fis v1.14.6 // indirect - github.com/aws/aws-sdk-go-v2/service/healthlake v1.15.6 // indirect - github.com/aws/aws-sdk-go-v2/service/iam v1.19.7 // indirect - github.com/aws/aws-sdk-go-v2/service/identitystore v1.16.6 // indirect - github.com/aws/aws-sdk-go-v2/service/inspector2 v1.11.7 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.25 // indirect - github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.0 // indirect - github.com/aws/aws-sdk-go-v2/service/ivschat v1.4.1 // indirect - github.com/aws/aws-sdk-go-v2/service/kendra v1.38.7 // indirect - github.com/aws/aws-sdk-go-v2/service/lambda v1.30.2 // indirect - github.com/aws/aws-sdk-go-v2/service/medialive v1.30.2 // indirect - github.com/aws/aws-sdk-go-v2/service/oam v1.1.7 // indirect - github.com/aws/aws-sdk-go-v2/service/opensearchserverless v1.1.7 // indirect - github.com/aws/aws-sdk-go-v2/service/pipes v1.2.2 // indirect - github.com/aws/aws-sdk-go-v2/service/rbin v1.8.7 // indirect - github.com/aws/aws-sdk-go-v2/service/rds v1.40.7 // indirect - github.com/aws/aws-sdk-go-v2/service/resourceexplorer2 v1.2.8 // indirect - github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.1.7 // indirect - github.com/aws/aws-sdk-go-v2/service/route53domains v1.14.6 // indirect - github.com/aws/aws-sdk-go-v2/service/s3control v1.31.1 // indirect - github.com/aws/aws-sdk-go-v2/service/scheduler v1.1.6 // indirect - github.com/aws/aws-sdk-go-v2/service/sesv2 v1.17.2 // indirect - github.com/aws/aws-sdk-go-v2/service/ssm v1.35.7 // 
indirect - github.com/aws/aws-sdk-go-v2/service/ssmcontacts v1.14.6 // indirect - github.com/aws/aws-sdk-go-v2/service/ssmincidents v1.20.6 // indirect - github.com/aws/aws-sdk-go-v2/service/sso v1.12.6 // indirect - github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.6 // indirect - github.com/aws/aws-sdk-go-v2/service/sts v1.18.7 // indirect - github.com/aws/aws-sdk-go-v2/service/transcribe v1.26.2 // indirect + github.com/aws/aws-sdk-go v1.44.299 // indirect + github.com/aws/aws-sdk-go-v2 v1.18.1 // indirect + github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 // indirect + github.com/aws/aws-sdk-go-v2/config v1.18.27 // indirect + github.com/aws/aws-sdk-go-v2/credentials v1.13.26 // indirect + github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.4 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.34 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.28 // indirect + github.com/aws/aws-sdk-go-v2/internal/ini v1.3.35 // indirect + github.com/aws/aws-sdk-go-v2/service/accessanalyzer v1.19.14 // indirect + github.com/aws/aws-sdk-go-v2/service/account v1.10.8 // indirect + github.com/aws/aws-sdk-go-v2/service/acm v1.17.13 // indirect + github.com/aws/aws-sdk-go-v2/service/appconfig v1.17.11 // indirect + github.com/aws/aws-sdk-go-v2/service/auditmanager v1.25.0 // indirect + github.com/aws/aws-sdk-go-v2/service/cleanrooms v1.2.0 // indirect + github.com/aws/aws-sdk-go-v2/service/cloudcontrol v1.11.14 // indirect + github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.22.0 // indirect + github.com/aws/aws-sdk-go-v2/service/comprehend v1.24.4 // indirect + github.com/aws/aws-sdk-go-v2/service/computeoptimizer v1.24.2 // indirect + github.com/aws/aws-sdk-go-v2/service/directoryservice v1.17.3 // indirect + github.com/aws/aws-sdk-go-v2/service/docdbelastic v1.1.12 // indirect + github.com/aws/aws-sdk-go-v2/service/ec2 v1.103.0 // indirect + github.com/aws/aws-sdk-go-v2/service/finspace v1.10.2 // indirect + 
github.com/aws/aws-sdk-go-v2/service/fis v1.14.12 // indirect + github.com/aws/aws-sdk-go-v2/service/glacier v1.14.13 // indirect + github.com/aws/aws-sdk-go-v2/service/healthlake v1.16.2 // indirect + github.com/aws/aws-sdk-go-v2/service/iam v1.21.0 // indirect + github.com/aws/aws-sdk-go-v2/service/identitystore v1.16.13 // indirect + github.com/aws/aws-sdk-go-v2/service/inspector2 v1.15.0 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.7.28 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.28 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.3 // indirect + github.com/aws/aws-sdk-go-v2/service/internetmonitor v1.3.0 // indirect + github.com/aws/aws-sdk-go-v2/service/ivschat v1.4.7 // indirect + github.com/aws/aws-sdk-go-v2/service/kendra v1.41.0 // indirect + github.com/aws/aws-sdk-go-v2/service/keyspaces v1.3.2 // indirect + github.com/aws/aws-sdk-go-v2/service/lambda v1.37.0 // indirect + github.com/aws/aws-sdk-go-v2/service/lightsail v1.27.1 // indirect + github.com/aws/aws-sdk-go-v2/service/medialive v1.32.0 // indirect + github.com/aws/aws-sdk-go-v2/service/oam v1.1.13 // indirect + github.com/aws/aws-sdk-go-v2/service/opensearchserverless v1.2.6 // indirect + github.com/aws/aws-sdk-go-v2/service/pipes v1.2.8 // indirect + github.com/aws/aws-sdk-go-v2/service/pricing v1.20.0 // indirect + github.com/aws/aws-sdk-go-v2/service/qldb v1.15.13 // indirect + github.com/aws/aws-sdk-go-v2/service/rbin v1.8.14 // indirect + github.com/aws/aws-sdk-go-v2/service/rds v1.46.1 // indirect + github.com/aws/aws-sdk-go-v2/service/resourceexplorer2 v1.2.15 // indirect + github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.2.2 // indirect + github.com/aws/aws-sdk-go-v2/service/route53domains v1.15.0 // indirect + github.com/aws/aws-sdk-go-v2/service/s3control v1.31.7 // indirect + github.com/aws/aws-sdk-go-v2/service/scheduler v1.1.13 // indirect + 
github.com/aws/aws-sdk-go-v2/service/securitylake v1.4.3 // indirect + github.com/aws/aws-sdk-go-v2/service/sesv2 v1.18.2 // indirect + github.com/aws/aws-sdk-go-v2/service/ssm v1.36.7 // indirect + github.com/aws/aws-sdk-go-v2/service/ssmcontacts v1.15.7 // indirect + github.com/aws/aws-sdk-go-v2/service/ssmincidents v1.21.6 // indirect + github.com/aws/aws-sdk-go-v2/service/sso v1.12.12 // indirect + github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.12 // indirect + github.com/aws/aws-sdk-go-v2/service/sts v1.19.2 // indirect + github.com/aws/aws-sdk-go-v2/service/swf v1.15.2 // indirect + github.com/aws/aws-sdk-go-v2/service/timestreamwrite v1.17.2 // indirect + github.com/aws/aws-sdk-go-v2/service/transcribe v1.26.8 // indirect + github.com/aws/aws-sdk-go-v2/service/verifiedpermissions v1.0.4 // indirect + github.com/aws/aws-sdk-go-v2/service/vpclattice v1.0.7 // indirect + github.com/aws/aws-sdk-go-v2/service/workspaces v1.28.15 // indirect + github.com/aws/aws-sdk-go-v2/service/xray v1.16.13 // indirect github.com/aws/smithy-go v1.13.5 // indirect - github.com/beevik/etree v1.1.0 // indirect + github.com/beevik/etree v1.2.0 // indirect github.com/bgentry/speakeasy v0.1.0 // indirect - github.com/cloudflare/circl v1.3.2 // indirect - github.com/fatih/color v1.14.1 // indirect - github.com/golang/protobuf v1.5.2 // indirect + github.com/cloudflare/circl v1.3.3 // indirect + github.com/fatih/color v1.15.0 // indirect + github.com/golang/protobuf v1.5.3 // indirect github.com/google/go-cmp v0.5.9 // indirect github.com/google/uuid v1.3.0 // indirect - github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.20.0 // indirect - github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.24 // indirect - github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.25 // indirect + github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.21.0 // indirect + github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.31 // indirect + 
github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.32 // indirect github.com/hashicorp/awspolicyequivalence v1.6.0 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect github.com/hashicorp/go-checkpoint v0.5.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 // indirect - github.com/hashicorp/go-hclog v1.4.0 // indirect + github.com/hashicorp/go-hclog v1.5.0 // indirect github.com/hashicorp/go-multierror v1.1.1 // indirect - github.com/hashicorp/go-plugin v1.4.8 // indirect + github.com/hashicorp/go-plugin v1.4.10 // indirect github.com/hashicorp/go-uuid v1.0.3 // indirect github.com/hashicorp/go-version v1.6.0 // indirect - github.com/hashicorp/hc-install v0.5.0 // indirect - github.com/hashicorp/hcl/v2 v2.16.1 // indirect + github.com/hashicorp/hc-install v0.5.2 // indirect + github.com/hashicorp/hcl/v2 v2.17.0 // indirect github.com/hashicorp/logutils v1.0.0 // indirect - github.com/hashicorp/terraform-exec v0.17.3 // indirect - github.com/hashicorp/terraform-json v0.15.0 // indirect - github.com/hashicorp/terraform-plugin-framework v1.1.1 // indirect - github.com/hashicorp/terraform-plugin-framework-timeouts v0.3.1 // indirect + github.com/hashicorp/terraform-exec v0.18.1 // indirect + github.com/hashicorp/terraform-json v0.17.0 // indirect + github.com/hashicorp/terraform-plugin-framework v1.3.2 // indirect + github.com/hashicorp/terraform-plugin-framework-timeouts v0.4.1 // indirect github.com/hashicorp/terraform-plugin-framework-validators v0.10.0 // indirect - github.com/hashicorp/terraform-plugin-go v0.14.3 // indirect - github.com/hashicorp/terraform-plugin-log v0.8.0 // indirect - github.com/hashicorp/terraform-plugin-mux v0.9.0 // indirect - github.com/hashicorp/terraform-registry-address v0.1.0 // indirect - github.com/hashicorp/terraform-svchost v0.1.0 // indirect + github.com/hashicorp/terraform-plugin-go v0.18.0 // indirect + 
github.com/hashicorp/terraform-plugin-log v0.9.0 // indirect + github.com/hashicorp/terraform-plugin-mux v0.11.1 // indirect + github.com/hashicorp/terraform-plugin-testing v1.3.0 // indirect + github.com/hashicorp/terraform-registry-address v0.2.1 // indirect + github.com/hashicorp/terraform-svchost v0.1.1 // indirect github.com/hashicorp/yamux v0.1.1 // indirect github.com/huandu/xstrings v1.3.2 // indirect - github.com/imdario/mergo v0.3.12 // indirect + github.com/imdario/mergo v0.3.13 // indirect github.com/jmespath/go-jmespath v0.4.0 // indirect github.com/mattbaird/jsonpatch v0.0.0-20200820163806-098863c1fc24 // indirect github.com/mattn/go-colorable v0.1.13 // indirect @@ -112,23 +136,23 @@ require ( github.com/shopspring/decimal v1.3.1 // indirect github.com/spf13/cast v1.3.1 // indirect github.com/vmihailenco/msgpack v4.0.4+incompatible // indirect - github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect - github.com/vmihailenco/tagparser v0.1.2 // indirect + github.com/vmihailenco/msgpack/v5 v5.3.5 // indirect + github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect github.com/xeipuuv/gojsonschema v1.2.0 // indirect - github.com/zclconf/go-cty v1.12.1 // indirect - go.opentelemetry.io/otel v1.13.0 // indirect - go.opentelemetry.io/otel/trace v1.13.0 // indirect - golang.org/x/crypto v0.7.0 // indirect - golang.org/x/mod v0.8.0 // indirect - golang.org/x/net v0.8.0 // indirect - golang.org/x/sys v0.6.0 // indirect - golang.org/x/text v0.8.0 // indirect + github.com/zclconf/go-cty v1.13.2 // indirect + go.opentelemetry.io/otel v1.16.0 // indirect + go.opentelemetry.io/otel/trace v1.16.0 // indirect + golang.org/x/crypto v0.11.0 // indirect + golang.org/x/mod v0.10.0 // indirect + golang.org/x/net v0.11.0 // indirect + golang.org/x/sys v0.10.0 // indirect + golang.org/x/text v0.11.0 // indirect 
google.golang.org/appengine v1.6.7 // indirect - google.golang.org/genproto v0.0.0-20230202175211-008b39050e57 // indirect - google.golang.org/grpc v1.53.0 // indirect - google.golang.org/protobuf v1.28.1 // indirect + google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect + google.golang.org/grpc v1.56.1 // indirect + google.golang.org/protobuf v1.31.0 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect ) diff --git a/tools/tfsdk2fw/go.sum b/tools/tfsdk2fw/go.sum index 4ac3dd18759..4a28c433b24 100644 --- a/tools/tfsdk2fw/go.sum +++ b/tools/tfsdk2fw/go.sum @@ -1,163 +1,181 @@ -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc= github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs= github.com/Masterminds/sprig/v3 v3.2.1 h1:n6EPaDyLSvCEa3frruQvAiHuNp2dhBlMSmkEr+HuzGc= github.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk= -github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= -github.com/Microsoft/go-winio v0.4.16 h1:FtSW/jqD+l4ba5iPBj9CODVtgfYAD8w2wS923g/cFDk= -github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0= -github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7/go.mod h1:z4/9nQmJSSwwds7ejkxaJwO37dru3geImFUdJlaLzQo= -github.com/ProtonMail/go-crypto v0.0.0-20230201104953-d1d05f4e2bfb h1:Vx1Bw/nGULx+FuY7Sw+8ZDpOx9XOdA+mOfo678SqkbU= -github.com/ProtonMail/go-crypto v0.0.0-20230201104953-d1d05f4e2bfb/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g= -github.com/acomagu/bufpipe v1.0.3 h1:fxAGrHZTgQ9w5QqVItgzwj235/uYZYgbXitB+dLupOk= -github.com/acomagu/bufpipe v1.0.3/go.mod 
h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4= +github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 h1:wPbRQzjjwFc0ih8puEVAOFGELsn1zoIIYdxvML7mDxA= +github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g= +github.com/acomagu/bufpipe v1.0.4 h1:e3H4WUzM3npvo5uv95QuJM3cQspFNtFBzvJ2oNjKIDQ= github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo= github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= -github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c= -github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk= github.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJEynmyUenKwP++x/+DdGV/Ec= github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw= github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo= github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310 h1:BUAU3CGlLvorLI26FmByPp2eC2qla6E1Tw+scpcg/to= github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= -github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= -github.com/aws/aws-sdk-go v1.44.226 h1:lqTNeHJUq0U6dpMGJc9ZcmfTUkuAjklcwewj96RhMlc= -github.com/aws/aws-sdk-go v1.44.226/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= -github.com/aws/aws-sdk-go-v2 v1.17.4/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw= -github.com/aws/aws-sdk-go-v2 v1.17.7 h1:CLSjnhJSTSogvqUGhIC6LqFKATMRexcxLZ0i/Nzk9Eg= -github.com/aws/aws-sdk-go-v2 v1.17.7/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw= -github.com/aws/aws-sdk-go-v2/config v1.18.12 
h1:fKs/I4wccmfrNRO9rdrbMO1NgLxct6H9rNMiPdBxHWw= -github.com/aws/aws-sdk-go-v2/config v1.18.12/go.mod h1:J36fOhj1LQBr+O4hJCiT8FwVvieeoSGOtPuvhKlsNu8= -github.com/aws/aws-sdk-go-v2/credentials v1.13.12 h1:Cb+HhuEnV19zHRaYYVglwvdHGMJWbdsyP4oHhw04xws= -github.com/aws/aws-sdk-go-v2/credentials v1.13.12/go.mod h1:37HG2MBroXK3jXfxVGtbM2J48ra2+Ltu+tmwr/jO0KA= -github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.22/go.mod h1:YGSIJyQ6D6FjKMQh16hVFSIUD54L4F7zTGePqYMYYJU= -github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.1 h1:gt57MN3liKiyGopcqgNzJb2+d9MJaKT/q1OksHNXVE4= -github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.1/go.mod h1:lfUx8puBRdM5lVVMQlwt2v+ofiG/X6Ms+dy0UkG/kXw= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.28/go.mod h1:3lwChorpIM/BhImY/hy+Z6jekmN92cXGPI1QJasVPYY= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.31 h1:sJLYcS+eZn5EeNINGHSCRAwUJMFVqklwkH36Vbyai7M= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.31/go.mod h1:QT0BqUvX1Bh2ABdTGnjqEjvjzrCfIniM9Sc8zn9Yndo= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.22/go.mod h1:EqK7gVrIGAHyZItrD1D8B0ilgwMD1GiWAmbU4u/JHNk= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.25 h1:1mnRASEKnkqsntcxHaysxwgVoUUp5dkiB+l3llKnqyg= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.25/go.mod h1:zBHOPwhBc3FlQjQJE/D3IfPWiWaQmT06Vq9aNukDo0k= -github.com/aws/aws-sdk-go-v2/internal/ini v1.3.29 h1:J4xhFd6zHhdF9jPP0FQJ6WknzBboGMBNjKOv4iTuw4A= -github.com/aws/aws-sdk-go-v2/internal/ini v1.3.29/go.mod h1:TwuqRBGzxjQJIwH16/fOZodwXt2Zxa9/cwJC5ke4j7s= -github.com/aws/aws-sdk-go-v2/service/auditmanager v1.24.3 h1:qOE+q5FYSmlVuFah/2m+5ChKmJX2eAD0R6f1O8Hb1Y0= -github.com/aws/aws-sdk-go-v2/service/auditmanager v1.24.3/go.mod h1:kYU0Qgn0LzE/NqIJQruZh/3lyJw83fyJBBV2D/iPaAM= -github.com/aws/aws-sdk-go-v2/service/cloudcontrol v1.11.7 h1:JwKSQTZV35fxTgy+xcu+SQt6/shCXqVmnANUIye0UKc= -github.com/aws/aws-sdk-go-v2/service/cloudcontrol v1.11.7/go.mod 
h1:sMVHADAF+VIxSMJ1LNQo1+FmAjKlLMToHJuu0XxUzYQ= -github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.20.7 h1:Sv9ixBhjrihZUZih+SJfyo892LXutFspfqPt5XQGc9Q= -github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.20.7/go.mod h1:pvT0/gXJx7Xe2pcs+/wXWHBiD45zml+gwO2bhCBFq+Q= -github.com/aws/aws-sdk-go-v2/service/comprehend v1.22.2 h1:QEZ3opVJyT5I6NdUoKh48YNM4Q0HlaCcVBKrQW3xM6c= -github.com/aws/aws-sdk-go-v2/service/comprehend v1.22.2/go.mod h1:Oo3a8+A4QcEw0m5u1XOd65fmckULX/IvfgmW2VnLtJc= -github.com/aws/aws-sdk-go-v2/service/computeoptimizer v1.21.5 h1:5jEXwqB7128t9Jj4RdOek4GrQKmnvkzOo1zxVFmpFUc= -github.com/aws/aws-sdk-go-v2/service/computeoptimizer v1.21.5/go.mod h1:B+fjhR6uAO48wa5tDUwZ4luJg50MQ1BGfLdh0fzQGtU= -github.com/aws/aws-sdk-go-v2/service/ec2 v1.91.0 h1:b4Qme29Ml9nl3QBxWobytF5UxlfmYUJI7+u1FTqjehs= -github.com/aws/aws-sdk-go-v2/service/ec2 v1.91.0/go.mod h1:ZZLfkd1Y7fjXujjMg1CFqNmaTl314eCbShlHQO7VTWo= -github.com/aws/aws-sdk-go-v2/service/fis v1.14.6 h1:jJXQJdNLk2GWHYH+GKvsTkR6uY7EsUAjW9VDkH/TsX0= -github.com/aws/aws-sdk-go-v2/service/fis v1.14.6/go.mod h1:eTAgBKUuR9QXdBLIeBIg445yaZSxgsMpSrG9mR6DL8E= -github.com/aws/aws-sdk-go-v2/service/healthlake v1.15.6 h1:/CtuJEMjbkyKYcqWhc2sF/nxmGh6qUwdYAIT6nj+04w= -github.com/aws/aws-sdk-go-v2/service/healthlake v1.15.6/go.mod h1:zMZPXgmqVOKXaTrANxHTzDTD1xvPpRKPqh2/Q0DHP9U= -github.com/aws/aws-sdk-go-v2/service/iam v1.19.7 h1:10pGhchIHbjCuJWgXJ/bNm8443s4SxbQRgeSQYxOZrs= -github.com/aws/aws-sdk-go-v2/service/iam v1.19.7/go.mod h1:lf/oAjt//UvPsmnOgPT61F+q4K6U0q4zDd1s1yx2NZs= -github.com/aws/aws-sdk-go-v2/service/identitystore v1.16.6 h1:tzB5Y88Xb4//jybVjrk6ECAcUhCk1ZwV2NuBnpil+w0= -github.com/aws/aws-sdk-go-v2/service/identitystore v1.16.6/go.mod h1:qdai9l1KfhYZ4J0HWFtN9KJ8pG+TwG9UxCHcdf0EdTE= -github.com/aws/aws-sdk-go-v2/service/inspector2 v1.11.7 h1:JXUUeeOirLweZMt28JMBarO3vTVRxq7hTKo9QDt/IBU= -github.com/aws/aws-sdk-go-v2/service/inspector2 v1.11.7/go.mod h1:wD8JZDhQhYMFA4dtTQFIijSRb3ofHlL4d2pkVbLov40= 
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.22/go.mod h1:xt0Au8yPIwYXf/GYPy/vl4K3CgwhfQMYbrH7DlUUIws= -github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.25 h1:5LHn8JQ0qvjD9L9JhMtylnkcw7j05GDZqM9Oin6hpr0= -github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.25/go.mod h1:/95IA+0lMnzW6XzqYJRpjjsAbKEORVeO0anQqjd2CNU= -github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.0 h1:e2ooMhpYGhDnBfSvIyusvAwX7KexuZaHbQY2Dyei7VU= -github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.0/go.mod h1:bh2E0CXKZsQN+faiKVqC40vfNMAWheoULBCnEgO9K+8= -github.com/aws/aws-sdk-go-v2/service/ivschat v1.4.1 h1:WstlpV2TBkDlhQ6Vnr3jremcKd6SEg1yi30mcvqnyn8= -github.com/aws/aws-sdk-go-v2/service/ivschat v1.4.1/go.mod h1:H+9KIxjBxlVnvkwL8emP3A3tdP0byWf6HxEI6r4uCjs= -github.com/aws/aws-sdk-go-v2/service/kendra v1.38.7 h1:Lu6GMrhWI4uNREMKeRwkh60bmIB3Wws4Gx4ioL8rwCQ= -github.com/aws/aws-sdk-go-v2/service/kendra v1.38.7/go.mod h1:AIygUOfpUxR0mTrIsaDmBqkP5nyWPCF0yAqLA2N0/Og= -github.com/aws/aws-sdk-go-v2/service/lambda v1.30.2 h1:JEUEgBM8HZ27ahhZsIlgfj7xPITxkRoHXdpW7lLzGB0= -github.com/aws/aws-sdk-go-v2/service/lambda v1.30.2/go.mod h1:PmNd6f36wPbp2+B3ZSuvHqqSwggfagEdI18tIb8s91o= -github.com/aws/aws-sdk-go-v2/service/medialive v1.30.2 h1:A1tB0k90suzaxdarNUsNTCbKDUjHaAAUpnUDdDvATyM= -github.com/aws/aws-sdk-go-v2/service/medialive v1.30.2/go.mod h1:xMm9ftEcjgzlYu/taqbHr2q4EKjus4F2bi98LgFtPl0= -github.com/aws/aws-sdk-go-v2/service/oam v1.1.7 h1:EHkc5p3bd/isOPhTbDK9EOONAblY76dU9R02j+QzCxE= -github.com/aws/aws-sdk-go-v2/service/oam v1.1.7/go.mod h1:vy3iHiV8djFWziYqJP9VDM/A4ksOjbP6/g+pfxm82XA= -github.com/aws/aws-sdk-go-v2/service/opensearchserverless v1.1.7 h1:zyaFTkxNk092+PmARcVLk3R8bG4TmLhz/I/mmBJax6A= -github.com/aws/aws-sdk-go-v2/service/opensearchserverless v1.1.7/go.mod h1:e4tF159DHMz41KBqOS7VRFcC9KzbIzVj1Zjgp/P6Cu8= -github.com/aws/aws-sdk-go-v2/service/pipes v1.2.2 h1:pGVj3iv8Zhj3HopofOTgt8xc+OxYx7pbfW4gyyi0JPo= 
-github.com/aws/aws-sdk-go-v2/service/pipes v1.2.2/go.mod h1:ix9n6B+j4sBxDm+ti64CsyrkDGFI4d2aw1pKbnw3TX0= -github.com/aws/aws-sdk-go-v2/service/rbin v1.8.7 h1:fgM+Kqn31cIkZriqdR+D6AeC431doyIa9TVcG0+2iEg= -github.com/aws/aws-sdk-go-v2/service/rbin v1.8.7/go.mod h1:V71Cf9QQKukM1z5FUZW3vULuuri0txE27uiZpxrNMJY= -github.com/aws/aws-sdk-go-v2/service/rds v1.40.7 h1:sM5284kIpXLvkHUgR3FBfWMf7HJ8KZjSG1ST0hTqhxk= -github.com/aws/aws-sdk-go-v2/service/rds v1.40.7/go.mod h1:es+Xl+GSYsY3ESUW8H6zwieX0ePwycTheaC91KgrpJI= -github.com/aws/aws-sdk-go-v2/service/resourceexplorer2 v1.2.8 h1:zesthvVJhxTGhyrjjEMFpqk7gt7b+fdeABuHxoDW0l0= -github.com/aws/aws-sdk-go-v2/service/resourceexplorer2 v1.2.8/go.mod h1:VvzVAxo5lqTxoeq1GAyZ5vMfP+UIadaHVANyJblTA2A= -github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.1.7 h1:R9JkLhNkCtpGC2f7jLj7NaeRrFrggxOA4TOdgCmrs60= -github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.1.7/go.mod h1:Eq8/wDq/iyO0dqRG2QRLKiX0KI61JWHi2HXQ7IPnExQ= -github.com/aws/aws-sdk-go-v2/service/route53domains v1.14.6 h1:o5GiOms/NArcl79ek+nhThsKOUKYDeffp8JbhT8wBZU= -github.com/aws/aws-sdk-go-v2/service/route53domains v1.14.6/go.mod h1:CX9ZPOGGMXTq66ezBylyy3nIbc0veZZ5oYx+bhPftfY= -github.com/aws/aws-sdk-go-v2/service/s3control v1.31.1 h1:Ro2Sn4iDcZtXSkXLxAo7CjIe2v9JbDrvcbaFHmEY3OU= -github.com/aws/aws-sdk-go-v2/service/s3control v1.31.1/go.mod h1:5NoUY8X5HcZ+u93hu5+Y/EB4u72t+Uythw+A1DYloIs= -github.com/aws/aws-sdk-go-v2/service/scheduler v1.1.6 h1:FiFUXvyS91yPdQo+Bty+ZGrwJ6cUaEQa5+HzQ6r8UiU= -github.com/aws/aws-sdk-go-v2/service/scheduler v1.1.6/go.mod h1:4Ac3JoGbiIfpUlZMNqMpJbAVCiMpcO7FGeCnYqB9ALg= -github.com/aws/aws-sdk-go-v2/service/sesv2 v1.17.2 h1:jxYG0lW2AGc11834RsUneuOOg4aZON8IABvG76iBTkg= -github.com/aws/aws-sdk-go-v2/service/sesv2 v1.17.2/go.mod h1:Ym44Peh6n3qTmR7rZmlpj7jGNXQqZL5lJQjVA0Cb7E8= -github.com/aws/aws-sdk-go-v2/service/ssm v1.35.7 h1:mt7DqUE5Itjj1KGYVbxqwzotnuE71E2fVSU1t1huJy0= -github.com/aws/aws-sdk-go-v2/service/ssm v1.35.7/go.mod 
h1:nCdeJmEFby1HKwKhDdKdVxPOJQUNht7Ngw+ejzbzvDU= -github.com/aws/aws-sdk-go-v2/service/ssmcontacts v1.14.6 h1:JNsez6Q4VWLZCmWrBKJUtChbzE3d+7ApUlNzaW3LVOY= -github.com/aws/aws-sdk-go-v2/service/ssmcontacts v1.14.6/go.mod h1:m+xmImOW5SRsyufOGEH6UZ0jil/CisWohdOy/bSzga8= -github.com/aws/aws-sdk-go-v2/service/ssmincidents v1.20.6 h1:KyWljVWxCtu8xQGs5oMKDqE8NFAfijjjOypi3uXyoqs= -github.com/aws/aws-sdk-go-v2/service/ssmincidents v1.20.6/go.mod h1:W1l+oXqtBfjRn44ZE1ds1Jn6CtZeLFdAluGPwRbTKjM= -github.com/aws/aws-sdk-go-v2/service/sso v1.12.1/go.mod h1:IgV8l3sj22nQDd5qcAGY0WenwCzCphqdbFOpfktZPrI= -github.com/aws/aws-sdk-go-v2/service/sso v1.12.6 h1:5V7DWLBd7wTELVz5bPpwzYy/sikk0gsgZfj40X+l5OI= -github.com/aws/aws-sdk-go-v2/service/sso v1.12.6/go.mod h1:Y1VOmit/Fn6Tz1uFAeCO6Q7M2fmfXSCLeL5INVYsLuY= -github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.1/go.mod h1:O1YSOg3aekZibh2SngvCRRG+cRHKKlYgxf/JBF/Kr/k= -github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.6 h1:B8cauxOH1W1v7rd8RdI/MWnoR4Ze0wIHWrb90qczxj4= -github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.6/go.mod h1:Lh/bc9XUf8CfOY6Jp5aIkQtN+j1mc+nExc+KXj9jx2s= -github.com/aws/aws-sdk-go-v2/service/sts v1.18.3/go.mod h1:b+psTJn33Q4qGoDaM7ZiOVVG8uVjGI6HaZ8WBHdgDgU= -github.com/aws/aws-sdk-go-v2/service/sts v1.18.7 h1:bWNgNdRko2x6gqa0blfATqAZKZokPIeM1vfmQt2pnvM= -github.com/aws/aws-sdk-go-v2/service/sts v1.18.7/go.mod h1:JuTnSoeePXmMVe9G8NcjjwgOKEfZ4cOjMuT2IBT/2eI= -github.com/aws/aws-sdk-go-v2/service/transcribe v1.26.2 h1:fvMxT3O8EG8Yv2qPNjU+rGhqoFDMQuVGfDdlOh8PWk4= -github.com/aws/aws-sdk-go-v2/service/transcribe v1.26.2/go.mod h1:bvUGSKVvtOCyY0Cxke18dAdjRgscTdqZSzuvvTcFYYk= +github.com/aws/aws-sdk-go v1.44.299 h1:HVD9lU4CAFHGxleMJp95FV/sRhtg7P4miHD1v88JAQk= +github.com/aws/aws-sdk-go v1.44.299/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= +github.com/aws/aws-sdk-go-v2 v1.18.1 h1:+tefE750oAb7ZQGzla6bLkOwfcQCEtC5y2RqoqCeqKo= +github.com/aws/aws-sdk-go-v2 v1.18.1/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw= 
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 h1:dK82zF6kkPeCo8J1e+tGx4JdvDIQzj7ygIoLg8WMuGs= +github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10/go.mod h1:VeTZetY5KRJLuD/7fkQXMU6Mw7H5m/KP2J5Iy9osMno= +github.com/aws/aws-sdk-go-v2/config v1.18.27 h1:Az9uLwmssTE6OGTpsFqOnaGpLnKDqNYOJzWuC6UAYzA= +github.com/aws/aws-sdk-go-v2/config v1.18.27/go.mod h1:0My+YgmkGxeqjXZb5BYme5pc4drjTnM+x1GJ3zv42Nw= +github.com/aws/aws-sdk-go-v2/credentials v1.13.26 h1:qmU+yhKmOCyujmuPY7tf5MxR/RKyZrOPO3V4DobiTUk= +github.com/aws/aws-sdk-go-v2/credentials v1.13.26/go.mod h1:GoXt2YC8jHUBbA4jr+W3JiemnIbkXOfxSXcisUsZ3os= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.4 h1:LxK/bitrAr4lnh9LnIS6i7zWbCOdMsfzKFBI6LUCS0I= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.13.4/go.mod h1:E1hLXN/BL2e6YizK1zFlYd8vsfi2GTjbjBazinMmeaM= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.34 h1:A5UqQEmPaCFpedKouS4v+dHCTUo2sKqhoKO9U5kxyWo= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.34/go.mod h1:wZpTEecJe0Btj3IYnDx/VlUzor9wm3fJHyvLpQF0VwY= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.28 h1:srIVS45eQuewqz6fKKu6ZGXaq6FuFg5NzgQBAM6g8Y4= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.28/go.mod h1:7VRpKQQedkfIEXb4k52I7swUnZP0wohVajJMRn3vsUw= +github.com/aws/aws-sdk-go-v2/internal/ini v1.3.35 h1:LWA+3kDM8ly001vJ1X1waCuLJdtTl48gwkPKWy9sosI= +github.com/aws/aws-sdk-go-v2/internal/ini v1.3.35/go.mod h1:0Eg1YjxE0Bhn56lx+SHJwCzhW+2JGtizsrx+lCqrfm0= +github.com/aws/aws-sdk-go-v2/service/accessanalyzer v1.19.14 h1:zNXf4WC1KZZKKMjlhkSfjKA90p6fQJHjmnBSj7JKHlk= +github.com/aws/aws-sdk-go-v2/service/accessanalyzer v1.19.14/go.mod h1:3SKaBO7H4AJHi3Tv8WPxSlAKF3N6eVTxvqPTo1BvG/o= +github.com/aws/aws-sdk-go-v2/service/account v1.10.8 h1:nvUpdu6IHqY9reKI8InrYpOa1cGadxiAgGITOa+vVyo= +github.com/aws/aws-sdk-go-v2/service/account v1.10.8/go.mod h1:hC6WhRtoLcuUTRxx99fwVoSt5IzgELQBNdnzEZYzlPA= +github.com/aws/aws-sdk-go-v2/service/acm v1.17.13 
h1:v858/efJsg0ydr33NEbX6CidcOVGx0T0iIHgUrOh+xg= +github.com/aws/aws-sdk-go-v2/service/acm v1.17.13/go.mod h1:3cAmFXTtyBKv5+JYgHK5EZ6ufCAlgHXCzDDWYG5AUaY= +github.com/aws/aws-sdk-go-v2/service/appconfig v1.17.11 h1:z/fg62eLb5Hynz7c5R+/B9S9wxDZd20ofvt+rBmlzjU= +github.com/aws/aws-sdk-go-v2/service/appconfig v1.17.11/go.mod h1:tpRhHndw04n6hgzicRXhsrLTCLagWJ8jtt7e3Wet7RU= +github.com/aws/aws-sdk-go-v2/service/auditmanager v1.25.0 h1:+pED0OdEasz/zHWgy4s+hDzNXL1rfK+GBi74k54QkVE= +github.com/aws/aws-sdk-go-v2/service/auditmanager v1.25.0/go.mod h1:ia6+sGa4z5LM8ho3hfV06VktSCpmFJo530vSsBDkbT4= +github.com/aws/aws-sdk-go-v2/service/cleanrooms v1.2.0 h1:5l52k6yHz88DfRy7kcYHS+J3BsgU47451TWw87L9zL4= +github.com/aws/aws-sdk-go-v2/service/cleanrooms v1.2.0/go.mod h1:kEOOfzYBRPmMvAhxer+ztu/WC0KC8l4nTytGUemU/Og= +github.com/aws/aws-sdk-go-v2/service/cloudcontrol v1.11.14 h1:4r+ZNZuPN9qlYsSONoEK12IaEwzTCHBguZ7QVMq3FKQ= +github.com/aws/aws-sdk-go-v2/service/cloudcontrol v1.11.14/go.mod h1:WDIYAtWm2b8ET3y8hSuUHLwSsOB5ZwBZkK7ioTTo7S4= +github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.22.0 h1:t39KKEz+/tgTtJIc51uta5md4I8K+H2ddznofUtYPiM= +github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.22.0/go.mod h1:zuZWQM3kYD+ibkP3GBrAMbsfUvHK7p7yOwWh9MKsnYQ= +github.com/aws/aws-sdk-go-v2/service/comprehend v1.24.4 h1:jGIp04h4FvujUjsGLUgwfe1cg/TPViomE5EBtxaCxv4= +github.com/aws/aws-sdk-go-v2/service/comprehend v1.24.4/go.mod h1:t1X+vZJVxh7GXE8xoXpr8SG1RT31bXSF9BN6Rw9pt1Y= +github.com/aws/aws-sdk-go-v2/service/computeoptimizer v1.24.2 h1:pw1ZCQIa25gKXEp7In+VHqS25xJRzpHX96Ha/U9qiVM= +github.com/aws/aws-sdk-go-v2/service/computeoptimizer v1.24.2/go.mod h1:sBj1DJdinQBuuaeAD/KJAfWEKd/3CtbW8vhgqE0phyw= +github.com/aws/aws-sdk-go-v2/service/directoryservice v1.17.3 h1:Ij9KqZAu7JM6H+1pcysufBUi56aaWDSNAVXcNqDHLXg= +github.com/aws/aws-sdk-go-v2/service/directoryservice v1.17.3/go.mod h1:cFv2leRY0jEdFFXPC9iT4HSdYuC3LO3dhDEfMPkObtg= +github.com/aws/aws-sdk-go-v2/service/docdbelastic v1.1.12 
h1:p4VxoGiHg+nJHPAXIpTSuXF+1SoHmNPfplzkR3Fmi0k= +github.com/aws/aws-sdk-go-v2/service/docdbelastic v1.1.12/go.mod h1:2UtZqzdVwPZMvQfbMXaiTFmqHzH0djspNEJWAmP6U9c= +github.com/aws/aws-sdk-go-v2/service/ec2 v1.103.0 h1:T0m2UzMD5l+yxqlaI46FJiHwAvvS7+X6Fkv5MZVHBYM= +github.com/aws/aws-sdk-go-v2/service/ec2 v1.103.0/go.mod h1:tIctCeX9IbzsUTKHt53SVEcgyfxV2ElxJeEB+QUbc4M= +github.com/aws/aws-sdk-go-v2/service/finspace v1.10.2 h1:SszO0FFI23oCM30QIJCJ1IKfl+FhCxoLuEwWSw5CkSg= +github.com/aws/aws-sdk-go-v2/service/finspace v1.10.2/go.mod h1:go0pcProAlBpDBh9g1l9h8XvOL8zfeBqA2tc2DvH9AM= +github.com/aws/aws-sdk-go-v2/service/fis v1.14.12 h1:ePwiKxedbAePZLNza3Yb0C52dguII3gLEsxBg7D/3AI= +github.com/aws/aws-sdk-go-v2/service/fis v1.14.12/go.mod h1:tygYAthjrOZANZe156t+zECxhYHw/kYVNkCU6rvI5cI= +github.com/aws/aws-sdk-go-v2/service/glacier v1.14.13 h1:2kfsHVFJTIAcc6dOkxv9syph4211EJe0pIcwrRJ1U6M= +github.com/aws/aws-sdk-go-v2/service/glacier v1.14.13/go.mod h1:ojK9fioQX8Cz8xdoXRp304sF9s5YHvLvhIrr4GswCzs= +github.com/aws/aws-sdk-go-v2/service/healthlake v1.16.2 h1:DE9750UM0zcleUk5meiZL/bHPZzDOQHIDhGdcPIp+7w= +github.com/aws/aws-sdk-go-v2/service/healthlake v1.16.2/go.mod h1:7dapfq6ZyDrAl4hv6hgxYuLYbUUQd5E+ngSdXvun6ys= +github.com/aws/aws-sdk-go-v2/service/iam v1.21.0 h1:8hEpu60CWlrp7iEBUFRZhgPoX6+gadaGL1sD4LoRYS0= +github.com/aws/aws-sdk-go-v2/service/iam v1.21.0/go.mod h1:aQZ8BI+reeaY7RI/QQp7TKCSUHOesTdrzzylp3CW85c= +github.com/aws/aws-sdk-go-v2/service/identitystore v1.16.13 h1:xYUE2IwzSq1rNK9uf8BsxVVxFSeJzKgZ2VFc1zDT+iU= +github.com/aws/aws-sdk-go-v2/service/identitystore v1.16.13/go.mod h1:siVgFYduB/ThkiyUhVIBQ5AG3wBpbFho4KIQOHQLR2s= +github.com/aws/aws-sdk-go-v2/service/inspector2 v1.15.0 h1:68YcB08CjQnszydJGLtT6qxxpcCiN8RymaQfZwhZA3I= +github.com/aws/aws-sdk-go-v2/service/inspector2 v1.15.0/go.mod h1:a2EqXrt+5o49Cnp4iMc2Tpt38ZUiX8aJsNy9ZzH+y/s= +github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.7.28 h1:/D994rtMQd1jQ2OY+7tvUlMlrv1L1c7Xtma/FhkbVtY= 
+github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.7.28/go.mod h1:3bJI2pLY3ilrqO5EclusI1GbjFJh1iXYrhOItf2sjKw= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.28 h1:bkRyG4a929RCnpVSTvLM2j/T4ls015ZhhYApbmYs15s= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.28/go.mod h1:jj7znCIg05jXlaGBlFMGP8+7UN3VtCkRBG2spnmRQkU= +github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.3 h1:dBL3StFxHtpBzJJ/mNEsjXVgfO+7jR0dAIEwLqMapEA= +github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.14.3/go.mod h1:f1QyiAsvIv4B49DmCqrhlXqyaR+0IxMmyX+1P+AnzOM= +github.com/aws/aws-sdk-go-v2/service/internetmonitor v1.3.0 h1:qy8Ko+RdwqmhmHmFdTX9BBGArWEbQV7iuIvFroxfy/g= +github.com/aws/aws-sdk-go-v2/service/internetmonitor v1.3.0/go.mod h1:dopruDWBqM3sxYZWprHj065umhsYqKfzTgpv21od6us= +github.com/aws/aws-sdk-go-v2/service/ivschat v1.4.7 h1:pI950CQHVEFW2/+UklRO4TWzHdO83bedFO9s6vH1R3k= +github.com/aws/aws-sdk-go-v2/service/ivschat v1.4.7/go.mod h1:oOLFrfP14cyQdTsc3I8VohdYb8g86I7xduoMALyKLj0= +github.com/aws/aws-sdk-go-v2/service/kendra v1.41.0 h1:QZIaiIYfU8KCUuT4nCif9fifXryv+/pI/ounZMB4/0s= +github.com/aws/aws-sdk-go-v2/service/kendra v1.41.0/go.mod h1:huU0BQC+O9qcc3vrANhwkC9KP0hGg1quffiyfbJ1Ktg= +github.com/aws/aws-sdk-go-v2/service/keyspaces v1.3.2 h1:nL5UBozBBCgCV5nv03YvDnuuCH11iI4pNjH/EchcBEU= +github.com/aws/aws-sdk-go-v2/service/keyspaces v1.3.2/go.mod h1:kce6B9bwqIlk810LDwZC4bT928MQc9c88OtXkqGIymc= +github.com/aws/aws-sdk-go-v2/service/lambda v1.37.0 h1:xzyM5ZR9kZW0/Bkw5EiihOy6B+BYclp5K+yb6OHjc7s= +github.com/aws/aws-sdk-go-v2/service/lambda v1.37.0/go.mod h1:Q8zQi5nZpjUF/H55dKEpKfEvFWJkgZzjjqvDb2AR5b4= +github.com/aws/aws-sdk-go-v2/service/lightsail v1.27.1 h1:q9YZIsBPkeafzYPAJUquHZW5WkdmsLLu4jWlWzNEyjI= +github.com/aws/aws-sdk-go-v2/service/lightsail v1.27.1/go.mod h1:il65k5YeNrQVAw6GXT/7/ZGmiQ0nKXKLsc3T9+/GXbI= +github.com/aws/aws-sdk-go-v2/service/medialive v1.32.0 h1:tg8Qq4Wmj2Em89foTd4NJmAKJe3qcNkX+m0xXgPLH7M= 
+github.com/aws/aws-sdk-go-v2/service/medialive v1.32.0/go.mod h1:HY+J2fuuc8zncrXB5MRAjsQDGIl9tmCpf+7X7ySZ8V8= +github.com/aws/aws-sdk-go-v2/service/oam v1.1.13 h1:JaXK3OSWCUs+gKG84P1ojFhI8PC9o/c2JRrdSjYd11k= +github.com/aws/aws-sdk-go-v2/service/oam v1.1.13/go.mod h1:w6pNi/5sYUHurOQW0hpUr4mxGYvUzEtbAQ2xg3qNHR0= +github.com/aws/aws-sdk-go-v2/service/opensearchserverless v1.2.6 h1:tADh6fIpsh0XTvqY45kZbkk44dJ2VYt0jOF7SGXB0+4= +github.com/aws/aws-sdk-go-v2/service/opensearchserverless v1.2.6/go.mod h1:Ju/rPQaOhCUQAFIZlf+drYB+YJQSuwbCYlAzfCGQVQc= +github.com/aws/aws-sdk-go-v2/service/pipes v1.2.8 h1:TPr7uAucpAvgE0ac9zMs/LTx9O0dmLWHWr2mkZjL4iY= +github.com/aws/aws-sdk-go-v2/service/pipes v1.2.8/go.mod h1:T8pM2eiirH5Ld58Ek+t7tK3SqQNUwp9zkqkslW1v8sM= +github.com/aws/aws-sdk-go-v2/service/pricing v1.20.0 h1:x5gKeerbKIQ/tdhmaAGNpivSfmb+p2rdt0wyjCGz+4Q= +github.com/aws/aws-sdk-go-v2/service/pricing v1.20.0/go.mod h1:JjpnqJdEW/5An429Ou+5Kb3UkwjXv16gRD2ZdGA2Gw8= +github.com/aws/aws-sdk-go-v2/service/qldb v1.15.13 h1:iwKtlmYoS0wXm8VaCMnEpLjLrnyW3rT7YBsr9VlACCA= +github.com/aws/aws-sdk-go-v2/service/qldb v1.15.13/go.mod h1:AVsFo7PSMNr+/LvWa3YjXnP3poj5UJHDFVbCWkz6rJw= +github.com/aws/aws-sdk-go-v2/service/rbin v1.8.14 h1:oEeNqSze4JYbhsDg7udtz+AtLwrIXlvjItAd09Z2zTw= +github.com/aws/aws-sdk-go-v2/service/rbin v1.8.14/go.mod h1:JehbzDgwXYQi1wqKKFLstaNqsRPgPu9Kd6DiwhtFxoA= +github.com/aws/aws-sdk-go-v2/service/rds v1.46.1 h1:aUHvtQGMuU8C0evJGNw9KGKERuoxc+EugpziQ/m9vag= +github.com/aws/aws-sdk-go-v2/service/rds v1.46.1/go.mod h1:goBDR4OPrsnKpYyU0GHGcEnlTmL8O+eKGsWeyOAFJ5M= +github.com/aws/aws-sdk-go-v2/service/resourceexplorer2 v1.2.15 h1:V9D5mkuxoHp0emavQh6YDRZgh++sI+Yhl0jWcXmGyzU= +github.com/aws/aws-sdk-go-v2/service/resourceexplorer2 v1.2.15/go.mod h1:JkXv1RqDLs6Bpgj2EtFpVW4J83lEewuTiQJwzj8dzvs= +github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.2.2 h1:Dc3UON/Ry+dcDbXNLsyi2JPlfhNbaBRXqYg7ItYvJF8= +github.com/aws/aws-sdk-go-v2/service/rolesanywhere v1.2.2/go.mod 
h1:PKn6HQ0RaVbZn+nPAl7UpxbVvDJv0JxWj7m+XTMzZZI= +github.com/aws/aws-sdk-go-v2/service/route53domains v1.15.0 h1:H7c53dt/1Wsp3fWbu5v5+XVNsThWzmxzYdyYLltWtec= +github.com/aws/aws-sdk-go-v2/service/route53domains v1.15.0/go.mod h1:k75fR3BhOo/0umex9ICEJ+CKTA55qxvFWAlyVHTS6Qg= +github.com/aws/aws-sdk-go-v2/service/s3control v1.31.7 h1:dTPOxNtrOt1NU/3DstjpNOAT7qjnMcZ08ah4A3zuu2w= +github.com/aws/aws-sdk-go-v2/service/s3control v1.31.7/go.mod h1:mRBY8itC2zX/tZL7IzZBORR7fqCpFy6GCNWYVx9jE7o= +github.com/aws/aws-sdk-go-v2/service/scheduler v1.1.13 h1:g3mR/LmI0SQILlRqWqENskXVaU1E6kbRzoWVkOJOAes= +github.com/aws/aws-sdk-go-v2/service/scheduler v1.1.13/go.mod h1:B0FqPtgWXfV5NSlEPtR+V7zBlaAScNWL6uOsXu4owfI= +github.com/aws/aws-sdk-go-v2/service/securitylake v1.4.3 h1:lkme0ZphoOBLozbJ7d+oiWdhn4XgUjqL7IkCBWHZxH4= +github.com/aws/aws-sdk-go-v2/service/securitylake v1.4.3/go.mod h1:2+ps4raDazjHdffquUq0KO6G4MsCLUMAEfITwPZkPzw= +github.com/aws/aws-sdk-go-v2/service/sesv2 v1.18.2 h1:+Q13MHUoGJ02puTt1fTtePY8FvpGhAvkpbCgHJq/0m4= +github.com/aws/aws-sdk-go-v2/service/sesv2 v1.18.2/go.mod h1:CirPyae+ZPYmzWUJfRvxhRs00yAyK9mh/r61+oKk3SM= +github.com/aws/aws-sdk-go-v2/service/ssm v1.36.7 h1:vFJ9Fsp6nQf6xTU1P2GipYC1XVQPPri5vCvQpoeBT/s= +github.com/aws/aws-sdk-go-v2/service/ssm v1.36.7/go.mod h1:NdyMyZH/FzmCaybTrVMBD0nTCGrs1G4cOPKHFywx9Ns= +github.com/aws/aws-sdk-go-v2/service/ssmcontacts v1.15.7 h1:iASnyic9WZcfO/WI+NTQUpMPuDI8AEXI+2e3N0AhaEQ= +github.com/aws/aws-sdk-go-v2/service/ssmcontacts v1.15.7/go.mod h1:acZsEd16A53TrcSdmUZ4emKmmSp2dCJLf7cJbou7ArI= +github.com/aws/aws-sdk-go-v2/service/ssmincidents v1.21.6 h1:7tFvTeZ8EsVVkBByv9Q66WqenXLk7T62HC7cFwxAaIs= +github.com/aws/aws-sdk-go-v2/service/ssmincidents v1.21.6/go.mod h1:8wiHOckjElK6ywHQlWjbDrmEZOg7JGcicM2zkEeojUI= +github.com/aws/aws-sdk-go-v2/service/sso v1.12.12 h1:nneMBM2p79PGWBQovYO/6Xnc2ryRMw3InnDJq1FHkSY= +github.com/aws/aws-sdk-go-v2/service/sso v1.12.12/go.mod h1:HuCOxYsF21eKrerARYO6HapNeh9GBNq7fius2AcwodY= 
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.12 h1:2qTR7IFk7/0IN/adSFhYu9Xthr0zVFTgBrmPldILn80= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.14.12/go.mod h1:E4VrHCPzmVB/KFXtqBGKb3c8zpbNBgKe3fisDNLAW5w= +github.com/aws/aws-sdk-go-v2/service/sts v1.19.2 h1:XFJ2Z6sNUUcAz9poj+245DMkrHE4h2j5I9/xD50RHfE= +github.com/aws/aws-sdk-go-v2/service/sts v1.19.2/go.mod h1:dp0yLPsLBOi++WTxzCjA/oZqi6NPIhoR+uF7GeMU9eg= +github.com/aws/aws-sdk-go-v2/service/swf v1.15.2 h1:2ozdtzVA8NpZZZrE1BwYbWVbtA0uJ9rph2+LIgVF22k= +github.com/aws/aws-sdk-go-v2/service/swf v1.15.2/go.mod h1:Jj8X0ex+9XIXgNI/zD/g0jvv2wd5LHEZq29FlJj1RUc= +github.com/aws/aws-sdk-go-v2/service/timestreamwrite v1.17.2 h1:PlsApCYTMqiDmaCDikifXGYqQ53QWQ5UAOEZIevfcL8= +github.com/aws/aws-sdk-go-v2/service/timestreamwrite v1.17.2/go.mod h1:xes7zyfBSJ6kgfmaIvTFbEfPc3CfU1HLx46qcEc6n2s= +github.com/aws/aws-sdk-go-v2/service/transcribe v1.26.8 h1:KfCL992IXYjCPT62KGBMCOxf4cvu5OwwqcJZRBORL+U= +github.com/aws/aws-sdk-go-v2/service/transcribe v1.26.8/go.mod h1:F8gPtIYU0JYmVyPeQI0zf8geCpbupPJfo3wPoIV6wy0= +github.com/aws/aws-sdk-go-v2/service/verifiedpermissions v1.0.4 h1:BG7eKUi0YMDRNWyvvgObHTDNbSBTq8Ya7U4S6373rrM= +github.com/aws/aws-sdk-go-v2/service/verifiedpermissions v1.0.4/go.mod h1:DcBzv8o6EYm1gQf/qJo+PE81VlrXEvIa4nUVwVOD1VU= +github.com/aws/aws-sdk-go-v2/service/vpclattice v1.0.7 h1:SovLGhSpSnv9AQyW2ANCxzc1X4ri1ptsLM5nIR0rTFs= +github.com/aws/aws-sdk-go-v2/service/vpclattice v1.0.7/go.mod h1:ZYUcLmMNXSVsWPC8r2d6DXhAlu5uqdU6u2lxLDOvf/8= +github.com/aws/aws-sdk-go-v2/service/workspaces v1.28.15 h1:ZNrMTe7SRpnJHSUwSI/wcAfA4+7yCy3j+Cwro2icTBw= +github.com/aws/aws-sdk-go-v2/service/workspaces v1.28.15/go.mod h1:WT1eObgrrk0qD76J5ddKsS6lm+25zhcibLlsFosaHe0= +github.com/aws/aws-sdk-go-v2/service/xray v1.16.13 h1:I1j641YyiML0WBbTYkitzVkO7rwhTtUdhACrwEirCBg= +github.com/aws/aws-sdk-go-v2/service/xray v1.16.13/go.mod h1:8jQIOnrQNc+m4x1CxrGulz7Xa27YtGRbpDb4NZp4uyc= github.com/aws/smithy-go v1.13.5 
h1:hgz0X/DX0dGqTYpGALqXJoRKRj5oQ7150i5FdTePzO8= github.com/aws/smithy-go v1.13.5/go.mod h1:Tg+OJXh4MB2R/uN61Ko2f6hTZwB/ZYGOtib8J3gBHzA= -github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs= -github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A= +github.com/beevik/etree v1.2.0 h1:l7WETslUG/T+xOPs47dtd6jov2Ii/8/OjCldk5fYfQw= +github.com/beevik/etree v1.2.0/go.mod h1:aiPf89g/1k3AShMVAzriilpcE4R/Vuor90y83zVZWFc= github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY= github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc h1:biVzkmvwrH8WK8raXaxBx6fRVTlJILwEwQGL1I/ByEI= github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I= -github.com/cloudflare/circl v1.3.2 h1:VWp8dY3yH69fdM7lM6A1+NhhVoDu9vqK0jOgmkQHFWk= -github.com/cloudflare/circl v1.3.2/go.mod h1:+CauBF6R70Jqcyl8N2hC8pAXYbWkGIezuSbuGLtRhnw= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs= +github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/emirpasic/gods v1.12.0 h1:QAUIPSaCu4G+POclxeqb3F+WPpdKqFGlw36+yOzGlrg= -github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o= +github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc= github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k= 
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= -github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w= -github.com/fatih/color v1.14.1/go.mod h1:2oHN61fhTpgcxD3TSWCgKDiH1+x4OiDVVGH8WlgGZGg= -github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= -github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0= +github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs= +github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw= github.com/go-git/gcfg v1.5.0 h1:Q5ViNfGF8zFgyJWPqYwA7qGFoMTEiBmdlkcfRmpIMa4= -github.com/go-git/gcfg v1.5.0/go.mod h1:5m20vg6GwYabIxaOonVkTdrILxQMpEShl1xiMF4ua+E= -github.com/go-git/go-billy/v5 v5.2.0/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0= -github.com/go-git/go-billy/v5 v5.3.1 h1:CPiOUAzKtMRvolEKw+bG1PLRpT7D3LIs3/3ey4Aiu34= -github.com/go-git/go-billy/v5 v5.3.1/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0= -github.com/go-git/go-git-fixtures/v4 v4.2.1/go.mod h1:K8zd3kDUAykwTdDCr+I0per6Y6vMiRR/nnVTBtavnB0= -github.com/go-git/go-git/v5 v5.4.2 h1:BXyZu9t0VkbiHtqrsvdq39UDhGJTl1h55VW6CSC4aY4= -github.com/go-git/go-git/v5 v5.4.2/go.mod h1:gQ1kArt6d+n+BGd+/B/I74HwRTLhth2+zti4ihgckDc= +github.com/go-git/go-billy/v5 v5.4.1 h1:Uwp5tDRkPr+l/TnbHOQzp+tmJfLceOlbVucgpTz8ix4= +github.com/go-git/go-git/v5 v5.6.1 h1:q4ZRqQl4pR/ZJHc1L5CFjGA1a10u76aV1iC+nh+bHsk= github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68= github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.4/go.mod 
h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= +github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= @@ -167,12 +185,12 @@ github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+ github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I= github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.20.0 h1:xc1OYpWvNo6dhnzemfjwtbNxeu3Ag4Wr6yT8BOo0/q0= -github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.20.0/go.mod h1:cdTE6F2pCKQobug+RqRaQp7Kz9hIEqiSvpPmb6E5G1w= -github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.24 h1:syXF0qM0fnGTDqCfrh4gaPeFKlxKxOjKmoI8E+DJMRY= -github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.24/go.mod h1:VYfmMo8LdxeZg4sH/4/cbgxx9BOEm/U48RHCs2SlmhM= -github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.25 h1:H9SxZgt7pRcMabtsfe/+3M3rOETawbWVyOFSHGCbl3E= -github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.25/go.mod h1:FOhbkFFrS1j55uGWjwOYTV4WohDVbe+zTNBMaOjFExo= +github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.21.0 
h1:IUypt/TbXiJBkBbE3926CgnjD8IltAitdn7Yive61DY= +github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.21.0/go.mod h1:cdTE6F2pCKQobug+RqRaQp7Kz9hIEqiSvpPmb6E5G1w= +github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.31 h1:e7DeQmtc8j93yk0n7cTEUCGzE0MMqtpW3EK1gqKcSmo= +github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.31/go.mod h1:O41BJwjbJMFlZz1shJlRYScBbIxtj5GweRghNSnyVdQ= +github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.32 h1:gAzRlvTIeKIMKSoTylI/BtDspTY1/aWzhzLOido3q+E= +github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.32/go.mod h1:zl+mlH6pWkSLElnldWIpZXqwMIa5fg2VaRnz3FwT1bA= github.com/hashicorp/awspolicyequivalence v1.6.0 h1:7aadmkalbc5ewStC6g3rljx1iNvP4QyAhg2KsHx8bU8= github.com/hashicorp/awspolicyequivalence v1.6.0/go.mod h1:9IOaIHx+a7C0NfUNk1A93M7kHd5rJ19aoUx37LZGC14= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= @@ -181,82 +199,73 @@ github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brv github.com/hashicorp/go-checkpoint v0.5.0 h1:MFYpPZCnQqQTE18jFwSII6eUQrD/oxMFp3mlgcqk5mU= github.com/hashicorp/go-checkpoint v0.5.0/go.mod h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg= github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= -github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80= github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ= github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 h1:1/D3zfFHttUKaCaGKZ/dR2roBXv0vKbSCnssIldfQdI= github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320/go.mod h1:EiZBMaudVLy8fmjf9Npq1dq9RalhveqZG5w/yz3mHWs= -github.com/hashicorp/go-hclog v1.4.0 h1:ctuWFGrhFha8BnnzxqeRGidlEcQkDyL5u8J8t5eA11I= -github.com/hashicorp/go-hclog v1.4.0/go.mod 
h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M= +github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c= +github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M= github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= -github.com/hashicorp/go-plugin v1.4.8 h1:CHGwpxYDOttQOY7HOWgETU9dyVjOXzniXDqJcYJE1zM= -github.com/hashicorp/go-plugin v1.4.8/go.mod h1:viDMjcLJuDui6pXb8U4HVfb8AamCWhHGUjr2IrTF67s= +github.com/hashicorp/go-plugin v1.4.10 h1:xUbmA4jC6Dq163/fWcp8P3JuHilrHHMLNRxzGQJ9hNk= +github.com/hashicorp/go-plugin v1.4.10/go.mod h1:6/1TEzT0eQznvI/gV2CM29DLSkAK/e58mUWKVsPaph0= github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8= github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro= -github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek= github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= -github.com/hashicorp/hc-install v0.5.0 h1:D9bl4KayIYKEeJ4vUDe9L5huqxZXczKaykSRcmQ0xY0= -github.com/hashicorp/hc-install v0.5.0/go.mod h1:JyzMfbzfSBSjoDCRPna1vi/24BEDxFaCPfdHtM5SCdo= -github.com/hashicorp/hcl/v2 v2.16.1 h1:BwuxEMD/tsYgbhIW7UuI3crjovf3MzuFWiVgiv57iHg= -github.com/hashicorp/hcl/v2 v2.16.1/go.mod h1:JRmR89jycNkrrqnMmvPDMd56n1rQJ2Q6KocSLCMCXng= +github.com/hashicorp/hc-install v0.5.2 h1:SfwMFnEXVVirpwkDuSF5kymUOhrUxrTq3udEseZdOD0= +github.com/hashicorp/hc-install v0.5.2/go.mod h1:9QISwe6newMWIfEiXpzuu1k9HAGtQYgnSH8H9T8wmoI= +github.com/hashicorp/hcl/v2 v2.17.0 
h1:z1XvSUyXd1HP10U4lrLg5e0JMVz6CPaJvAgxM0KNZVY= +github.com/hashicorp/hcl/v2 v2.17.0/go.mod h1:gJyW2PTShkJqQBKpAmPO3yxMxIuoXkOF2TpqXzrQyx4= github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= -github.com/hashicorp/terraform-exec v0.17.3 h1:MX14Kvnka/oWGmIkyuyvL6POx25ZmKrjlaclkx3eErU= -github.com/hashicorp/terraform-exec v0.17.3/go.mod h1:+NELG0EqQekJzhvikkeQsOAZpsw0cv/03rbeQJqscAI= -github.com/hashicorp/terraform-json v0.15.0 h1:/gIyNtR6SFw6h5yzlbDbACyGvIhKtQi8mTsbkNd79lE= -github.com/hashicorp/terraform-json v0.15.0/go.mod h1:+L1RNzjDU5leLFZkHTFTbJXaoqUC6TqXlFgDoOXrtvk= -github.com/hashicorp/terraform-plugin-framework v1.1.1 h1:PbnEKHsIU8KTTzoztHQGgjZUWx7Kk8uGtpGMMc1p+oI= -github.com/hashicorp/terraform-plugin-framework v1.1.1/go.mod h1:DyZPxQA+4OKK5ELxFIIcqggcszqdWWUpTLPHAhS/tkY= -github.com/hashicorp/terraform-plugin-framework-timeouts v0.3.1 h1:5GhozvHUsrqxqku+yd0UIRTkmDLp2QPX5paL1Kq5uUA= -github.com/hashicorp/terraform-plugin-framework-timeouts v0.3.1/go.mod h1:ThtYDU8p6sJ9+SI+TYxXrw28vXxgBwYOpoPv1EojSJI= +github.com/hashicorp/terraform-exec v0.18.1 h1:LAbfDvNQU1l0NOQlTuudjczVhHj061fNX5H8XZxHlH4= +github.com/hashicorp/terraform-exec v0.18.1/go.mod h1:58wg4IeuAJ6LVsLUeD2DWZZoc/bYi6dzhLHzxM41980= +github.com/hashicorp/terraform-json v0.17.0 h1:EiA1Wp07nknYQAiv+jIt4dX4Cq5crgP+TsTE45MjMmM= +github.com/hashicorp/terraform-json v0.17.0/go.mod h1:Huy6zt6euxaY9knPAFKjUITn8QxUFIe9VuSzb4zn/0o= +github.com/hashicorp/terraform-plugin-framework v1.3.2 h1:aQ6GSD0CTnvoALEWvKAkcH/d8jqSE0Qq56NYEhCexUs= +github.com/hashicorp/terraform-plugin-framework v1.3.2/go.mod h1:oimsRAPJOYkZ4kY6xIGfR0PHjpHLDLaknzuptl6AvnY= +github.com/hashicorp/terraform-plugin-framework-timeouts v0.4.1 h1:gm5b1kHgFFhaKFhm4h2TgvMUlNzFAtUqlcOWnWPm+9E= +github.com/hashicorp/terraform-plugin-framework-timeouts v0.4.1/go.mod h1:MsjL1sQ9L7wGwzJ5RjcI6FzEMdyoBnw+XK8ZnOvQOLY= 
github.com/hashicorp/terraform-plugin-framework-validators v0.10.0 h1:4L0tmy/8esP6OcvocVymw52lY0HyQ5OxB7VNl7k4bS0= github.com/hashicorp/terraform-plugin-framework-validators v0.10.0/go.mod h1:qdQJCdimB9JeX2YwOpItEu+IrfoJjWQ5PhLpAOMDQAE= -github.com/hashicorp/terraform-plugin-go v0.14.3 h1:nlnJ1GXKdMwsC8g1Nh05tK2wsC3+3BL/DBBxFEki+j0= -github.com/hashicorp/terraform-plugin-go v0.14.3/go.mod h1:7ees7DMZ263q8wQ6E4RdIdR6nHHJtrdt4ogX5lPkX1A= -github.com/hashicorp/terraform-plugin-log v0.8.0 h1:pX2VQ/TGKu+UU1rCay0OlzosNKe4Nz1pepLXj95oyy0= -github.com/hashicorp/terraform-plugin-log v0.8.0/go.mod h1:1myFrhVsBLeylQzYYEV17VVjtG8oYPRFdaZs7xdW2xs= -github.com/hashicorp/terraform-plugin-mux v0.9.0 h1:a2Xh63cunDB/1GZECrV02cGA74AhQGUjY9X8W3P/L7k= -github.com/hashicorp/terraform-plugin-mux v0.9.0/go.mod h1:8NUFbgeMigms7Tma/r2Vgi5Jv5mPv4xcJ05pJtIOhwc= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.25.0 h1:iNRjaJCatQS1rIbHs/vDvJ0GECsaGgxx780chA2Irpk= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.25.0/go.mod h1:XnVNLIS6bdMJbjSDujhX4Rlk24QpbGKbnrVFM4tZ7OU= -github.com/hashicorp/terraform-registry-address v0.1.0 h1:W6JkV9wbum+m516rCl5/NjKxCyTVaaUBbzYcMzBDO3U= -github.com/hashicorp/terraform-registry-address v0.1.0/go.mod h1:EnyO2jYO6j29DTHbJcm00E5nQTFeTtyZH3H5ycydQ5A= -github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg= -github.com/hashicorp/terraform-svchost v0.1.0 h1:0+RcgZdZYNd81Vw7tu62g9JiLLvbOigp7QtyNh6CjXk= -github.com/hashicorp/terraform-svchost v0.1.0/go.mod h1:ut8JaH0vumgdCfJaihdcZULqkAwHdQNwNH7taIDdsZM= +github.com/hashicorp/terraform-plugin-go v0.18.0 h1:IwTkOS9cOW1ehLd/rG0y+u/TGLK9y6fGoBjXVUquzpE= +github.com/hashicorp/terraform-plugin-go v0.18.0/go.mod h1:l7VK+2u5Kf2y+A+742GX0ouLut3gttudmvMgN0PA74Y= +github.com/hashicorp/terraform-plugin-log v0.9.0 h1:i7hOA+vdAItN1/7UrfBqBwvYPQ9TFvymaRGZED3FCV0= +github.com/hashicorp/terraform-plugin-log v0.9.0/go.mod 
h1:rKL8egZQ/eXSyDqzLUuwUYLVdlYeamldAHSxjUFADow= +github.com/hashicorp/terraform-plugin-mux v0.11.1 h1:cDCrmkrNHf/c2zC1oREIMdgCYWi1QP9U/qNXeNSYoFk= +github.com/hashicorp/terraform-plugin-mux v0.11.1/go.mod h1:eMZPcv8b5y+alMeQmocgaphPj4zAnM3uXiMLd1emqMQ= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0 h1:I8efBnjuDrgPjNF1MEypHy48VgcTIUY4X6rOFunrR3Y= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.27.0/go.mod h1:cUEP4ly/nxlHy5HzD6YRrHydtlheGvGRJDhiWqqVik4= +github.com/hashicorp/terraform-plugin-testing v1.3.0 h1:4Pn8fSspPCRUc5zRGPNZYc00VhQmQPEH6y6Pv4e/42M= +github.com/hashicorp/terraform-plugin-testing v1.3.0/go.mod h1:mGOfGFTVIhP9buGPZyDQhmZFIO/Ig8E0Fo694UACr64= +github.com/hashicorp/terraform-registry-address v0.2.1 h1:QuTf6oJ1+WSflJw6WYOHhLgwUiQ0FrROpHPYFtwTYWM= +github.com/hashicorp/terraform-registry-address v0.2.1/go.mod h1:BSE9fIFzp0qWsJUUyGquo4ldV9k2n+psif6NYkBRS3Y= +github.com/hashicorp/terraform-svchost v0.1.1 h1:EZZimZ1GxdqFRinZ1tpJwVxxt49xc/S52uzrw4x0jKQ= +github.com/hashicorp/terraform-svchost v0.1.1/go.mod h1:mNsjQfZyf/Jhz35v6/0LWcv26+X7JPS+buii2c9/ctc= github.com/hashicorp/yamux v0.1.1 h1:yrQxtgseBDrq9Y652vSRDvsKCJKOUD+GzTS4Y0Y8pvE= github.com/hashicorp/yamux v0.1.1/go.mod h1:CtWFDAQgb7dxtzFs4tWbplKIe2jSi3+5vKbgIO0SLnQ= github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= github.com/huandu/xstrings v1.3.2 h1:L18LIDzqlW6xN2rEkpdV8+oL/IXWJ1APd+vsdYy4Wdw= github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= -github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU= -github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= +github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk= +github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg= github.com/jbenet/go-context 
v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A= -github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo= -github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4= github.com/jhump/protoreflect v1.6.0 h1:h5jfMVslIg6l29nsMs0D8Wj17RDVdNYti0vDN/PZZoE= github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= -github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351 h1:DowS9hvgyYSX4TO5NpyC606/Z4SxnNYbT+WX27or6Ck= -github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM= -github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= -github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= -github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k= 
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= -github.com/matryer/is v1.2.0/go.mod h1:2fLPjFQM9rhQ15aVEtbuwhJinnOqrmgXPNdZsdwlWXA= github.com/mattbaird/jsonpatch v0.0.0-20200820163806-098863c1fc24 h1:uYuGXJBAi1umT+ZS4oQJUgKtfXCAYTR+n9zw1ViT0vA= github.com/mattbaird/jsonpatch v0.0.0-20200820163806-098863c1fc24/go.mod h1:M1qoD/MqPgTZIk0EWKB38wE28ACRfVcn+cU08jyArI0= github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= @@ -286,10 +295,9 @@ github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RR github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ= github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= -github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= github.com/oklog/run v1.1.0 h1:GEenZ1cK0+q0+wsJew9qUg/DyD8k3JzYsZAi5gYi2mA= github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU= -github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pjbgf/sha1cd v0.3.0 h1:4D5XXmUUBUl/xQ6IjCkEAbqXskkq/4O7LmGn0AqMDs4= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= @@ -298,35 +306,28 @@ github.com/posener/complete v1.1.1 h1:ccV59UEOTzVDnDUEFdT95ZzHVZ+5+158q8+SJb2QV5 github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI= github.com/pquerna/otp v1.4.0 h1:wZvl1TIVxKRThZIBiwOOHOGP/1+nZyWBil9Y2XNEDzg= github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8= -github.com/sebdah/goldie v1.0.0/go.mod 
h1:jXP4hmWywNEwZzhMuv2ccnqTSFpuq8iyQhtQdkkZBH4= -github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ= github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8= github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= -github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= +github.com/skeema/knownhosts v1.1.0 h1:Wvr9V0MxhjRbl3f9nMnKnFfiWTJmtECJ9Njkea3ysW0= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals= -github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk= +github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= github.com/vmihailenco/msgpack v4.0.4+incompatible 
h1:dSLoQfGFAo3F6OoNhwUmLwVgaUXK79GlxNBwueZn0xI= github.com/vmihailenco/msgpack v4.0.4+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= -github.com/vmihailenco/msgpack/v4 v4.3.12 h1:07s4sz9IReOgdikxLTKNbBdqDMLsjPKXwvCazn8G65U= -github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4= -github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI= -github.com/vmihailenco/tagparser v0.1.2 h1:gnjoVuB/kljJ5wICEEOpx98oXMWPLj22G67Vbd1qPqc= -github.com/vmihailenco/tagparser v0.1.2/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI= -github.com/xanzy/ssh-agent v0.3.0 h1:wUMzuKtKilRgBAD1sUb8gOwwRr2FGoBVumcjoOACClI= -github.com/xanzy/ssh-agent v0.3.0/go.mod h1:3s9xbODqPuuhK9JV1R321M/FlMZSBvE5aY6eAcqrDh0= +github.com/vmihailenco/msgpack/v5 v5.3.5 h1:5gO0H1iULLWGhs2H5tbAHIZTV8/cYafcFOr9znI5mJU= +github.com/vmihailenco/msgpack/v5 v5.3.5/go.mod h1:7xyJ9e+0+9SaZT0Wt1RGleJXzli6Q/V5KbhBonMG9jc= +github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g= +github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds= +github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM= github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo= github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= @@ -335,65 +336,39 @@ github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1: github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74= github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= 
-github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= -github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= -github.com/zclconf/go-cty v1.10.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk= -github.com/zclconf/go-cty v1.12.1 h1:PcupnljUm9EIvbgSHQnHhUr3fO6oFmkOrvs2BAFNXXY= -github.com/zclconf/go-cty v1.12.1/go.mod h1:s9IfD1LK5ccNMSWCVFCE2rJfHiZgi7JijgeWIMfhLvA= -github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8= -go.opentelemetry.io/otel v1.13.0 h1:1ZAKnNQKwBBxFtww/GwxNUyTf0AxkZzrukO8MeXqe4Y= -go.opentelemetry.io/otel v1.13.0/go.mod h1:FH3RtdZCzRkJYFTCsAKDy9l/XYjMdNv6QrkFFB8DvVg= -go.opentelemetry.io/otel/trace v1.13.0 h1:CBgRZ6ntv+Amuj1jDsMhZtlAPT6gbyIRdaIzFhfBSdY= -go.opentelemetry.io/otel/trace v1.13.0/go.mod h1:muCvmmO9KKpvuXSf3KKAXXB2ygNYHQ+ZfI5X08d3tds= -golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +github.com/zclconf/go-cty v1.13.2 h1:4GvrUxe/QUDYuJKAav4EYqdM47/kZa672LwmXFmEKT0= +github.com/zclconf/go-cty v1.13.2/go.mod h1:YKQzy/7pZ7iq2jNFzy5go57xdxdWoLLpaEp4u238AE0= +go.opentelemetry.io/otel v1.16.0 h1:Z7GVAX/UkAXPKsy94IU+i6thsQS4nb7LviLpnaNeW8s= +go.opentelemetry.io/otel v1.16.0/go.mod h1:vl0h9NUa1D5s1nv3A5vZOYWn8av4K8Ml6JDeHrT/bx4= +go.opentelemetry.io/otel/trace v1.16.0 h1:8JRpaObFoW0pxuVPapkgH8UhHQj+bJW8jJsCZEu5MQs= +go.opentelemetry.io/otel/trace v1.16.0/go.mod h1:Yt9vYq1SdNz3xdjZZK7wcXv1qv2pwLkqr2QVwea0ef0= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= 
-golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU= -golang.org/x/crypto v0.7.0 h1:AvwMYaRytfdeVt3u6mLaxYtErKYjxA2OXjJ1HHq6t3A= -golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= -golang.org/x/exp v0.0.0-20230206171751-46f607a40771 h1:xP7rWLUr1e1n2xkK5YB4LI0hPEy3LJC6Wk+D4pGlOJg= -golang.org/x/exp v0.0.0-20230206171751-46f607a40771/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc= +golang.org/x/crypto v0.11.0 h1:6Ewdq3tDic1mg5xRO4milcWCfMVQhI4NkqWWvqejpuA= +golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio= +golang.org/x/exp v0.0.0-20230510235704-dd950f8aeaea h1:vLCWI/yYrdEHyN2JzIzPO3aaQJHQdp89IZBA/+azVC4= +golang.org/x/exp v0.0.0-20230510235704-dd950f8aeaea/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8= -golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk= +golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net 
v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210326060303-6b1517762897/go.mod h1:uSPa2vr4CLtc/ILN5odXGNXS6mhrKVzTaCXzk9m6W3k= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= -golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws= -golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ= -golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= -golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= +golang.org/x/net v0.11.0 h1:Gi2tvZIJyBtO9SDr1q9h5hEQCp/4L2RQ+ar0qjx2oNU= +golang.org/x/net v0.11.0/go.mod h1:2L/ixqYpgIVXmeoSA/4Lu7BzTG4KIyPIryS4IsOd1oQ= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys 
v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210502180810-71e4cd670f79/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -403,57 +378,46 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.6.0 
h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ= -golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA= +golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ= -golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw= +golang.org/x/term v0.10.0 h1:3R7pNqamzBraeqj/Tj8qt1aQ2HpmlC+Cx/qL/7hn4/c= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68= -golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.11.0 h1:LAntKIrcmeSKERyiOh0XMV39LXS8IE9UL2yP7+f5ij4= +golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= 
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/genproto v0.0.0-20230202175211-008b39050e57 h1:vArvWooPH749rNHpBGgVl+U9B9dATjiEhJzcWGlovNs= -google.golang.org/genproto v0.0.0-20230202175211-008b39050e57/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM= -google.golang.org/grpc v1.53.0 h1:LAv2ds7cmFV/XTS3XG1NneeENYrXGmorPxsBbptIjNc= -google.golang.org/grpc v1.53.0/go.mod h1:OnIrk0ipVdj4N5d9IUoFUx72/VlD7+jUsHwZgwSMQpw= +google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A= +google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU= +google.golang.org/grpc v1.56.1 h1:z0dNfjIl0VpaZ9iSVjA6daGatAYwPGstTjt5vkRMFkQ= +google.golang.org/grpc v1.56.1/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w= -google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= +google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8= +google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= 
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/dnaeon/go-vcr.v3 v3.1.2 h1:F1smfXBqQqwpVifDfUBQG6zzaGjzT+EnVZakrOdr5wA= gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME= -gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/tools/tfsdk2fw/main.go b/tools/tfsdk2fw/main.go index 75ff8b29cb4..41bec03ec99 
100644 --- a/tools/tfsdk2fw/main.go +++ b/tools/tfsdk2fw/main.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package main import ( diff --git a/tools/tfsdk2fw/naming/camel.go b/tools/tfsdk2fw/naming/camel.go index 09d88b0959f..7adb3b0b64a 100644 --- a/tools/tfsdk2fw/naming/camel.go +++ b/tools/tfsdk2fw/naming/camel.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package naming import ( diff --git a/tools/tfsdk2fw/naming/camel_test.go b/tools/tfsdk2fw/naming/camel_test.go index 0f743838c06..d6803409df9 100644 --- a/tools/tfsdk2fw/naming/camel_test.go +++ b/tools/tfsdk2fw/naming/camel_test.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package naming_test import ( diff --git a/version/version.go b/version/version.go index 081f1948ffe..19de2550b44 100644 --- a/version/version.go +++ b/version/version.go @@ -1,3 +1,6 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + package version // ProviderVersion is set during the release process to the release version of the binary diff --git a/website/allowed-subcategories.txt b/website/allowed-subcategories.txt index 09921f32f4d..7b04dbea8df 100644 --- a/website/allowed-subcategories.txt +++ b/website/allowed-subcategories.txt @@ -312,6 +312,7 @@ VPC Lattice VPN (Client) VPN (Site-to-Site) Verified Access +Verified Permissions WAF WAF Classic WAF Classic Regional diff --git a/website/docs/cdktf/python/d/ec2_client_vpn_endpoint.html.markdown b/website/docs/cdktf/python/d/ec2_client_vpn_endpoint.html.markdown new file mode 100644 index 00000000000..964c619ccc9 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_client_vpn_endpoint.html.markdown @@ -0,0 +1,96 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_endpoint" +description: |- + Get information on an EC2 Client VPN endpoint +--- + +# Data Source: aws_ec2_client_vpn_endpoint + 
+Get information on an EC2 Client VPN endpoint. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_client_vpn_endpoint.DataAwsEc2ClientVpnEndpoint(self, "example", + filter=[DataAwsEc2ClientVpnEndpointFilter( + name="tag:Name", + values=["ExampleVpn"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_client_vpn_endpoint.DataAwsEc2ClientVpnEndpoint(self, "example", + client_vpn_endpoint_id="cvpn-endpoint-083cf50d6eb314f21" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `client_vpn_endpoint_id` - (Optional) ID of the Client VPN endpoint. +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired endpoint. + +### filter + +This block allows for complex filters. You can use one or more `filter` blocks. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeClientVpnEndpoints.html). +* `values` - (Required) Set of values that are accepted for the given field. An endpoint will be selected if any one of the given values matches. 
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the Client VPN endpoint. +* `authentication_options` - Information about the authentication method used by the Client VPN endpoint. +* `client_cidr_block` - IPv4 address range, in CIDR notation, from which client IP addresses are assigned. +* `client_connect_options` - The options for managing connection authorization for new client connections. +* `client_login_banner_options` - Options for enabling a customizable text banner that will be displayed on AWS provided clients when a VPN session is established. +* `connection_log_options` - Information about the client connection logging options for the Client VPN endpoint. +* `description` - Brief description of the endpoint. +* `dns_name` - DNS name to be used by clients when connecting to the Client VPN endpoint. +* `dns_servers` - Information about the DNS servers to be used for DNS resolution. +* `security_group_ids` - IDs of the security groups for the target network associated with the Client VPN endpoint. +* `self_service_portal` - Whether the self-service portal for the Client VPN endpoint is enabled. +* `server_certificate_arn` - The ARN of the server certificate. +* `session_timeout_hours` - The maximum VPN session duration time in hours. +* `split_tunnel` - Whether split-tunnel is enabled in the AWS Client VPN endpoint. +* `transport_protocol` - Transport protocol used by the Client VPN endpoint. +* `vpc_id` - ID of the VPC associated with the Client VPN endpoint. +* `vpn_port` - Port number for the Client VPN endpoint. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_coip_pool.html.markdown b/website/docs/cdktf/python/d/ec2_coip_pool.html.markdown new file mode 100644 index 00000000000..7e892b484f3 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_coip_pool.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_coip_pool" +description: |- + Provides details about a specific EC2 Customer-Owned IP Pool +--- + +# Data Source: aws_ec2_coip_pool + +Provides details about a specific EC2 Customer-Owned IP Pool. + +This data source can prove useful when a module accepts a COIP pool ID as +an input variable and needs to, for example, determine the CIDR block of that +COIP Pool. + +## Example Usage + +The following example looks up a specific COIP pool by ID. + +```terraform +variable "coip_pool_id" {} + +data "aws_ec2_coip_pool" "selected" { + pool_id = var.coip_pool_id +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +COIP Pools in the current region. The given filters must match exactly one +COIP Pool whose data will be exported as attributes. + +* `local_gateway_route_table_id` - (Optional) Local Gateway Route Table ID assigned to the desired COIP Pool. + +* `pool_id` - (Optional) ID of the specific COIP Pool to retrieve. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired COIP Pool. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeCoipPools.html).
+ +* `values` - (Required) Set of values that are accepted for the given field. + A COIP Pool will be selected if any one of the given values matches. + +## Attributes Reference + +All of the argument attributes except `filter` blocks are also exported as +result attributes. This data source will complete the data by populating +any fields that are not included in the configuration with the data for +the selected COIP Pool. + +In addition, the following attributes are exported: + +* `arn` - ARN of the COIP pool +* `pool_cidrs` - Set of CIDR blocks in pool + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_coip_pools.html.markdown b/website/docs/cdktf/python/d/ec2_coip_pools.html.markdown new file mode 100644 index 00000000000..26216a83d79 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_coip_pools.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_coip_pools" +description: |- + Provides information for multiple EC2 Customer-Owned IP Pools +--- + +# Data Source: aws_ec2_coip_pools + +Provides information for multiple EC2 Customer-Owned IP Pools, such as their identifiers. + +## Example Usage + +The following shows outputting all COIP Pool Ids. + +```terraform +data "aws_ec2_coip_pools" "foo" {} + +output "foo" { + value = data.aws_ec2_coip_pools.foo.ids +} +``` + +## Argument Reference + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired aws_ec2_coip_pools. + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeCoipPools.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A COIP Pool will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. +* `pool_ids` - Set of COIP Pool Identifiers + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_host.html.markdown b/website/docs/cdktf/python/d/ec2_host.html.markdown new file mode 100644 index 00000000000..7764d71aa1f --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_host.html.markdown @@ -0,0 +1,95 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_host" +description: |- + Get information on an EC2 Host. +--- + +# Data Source: aws_ec2_host + +Use this data source to get information about an EC2 Dedicated Host. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws_ec2_host_test = aws.ec2_host.Ec2Host(self, "test", + availability_zone="us-west-2a", + instance_type="c5.18xlarge" + ) + data_aws_ec2_host_test = aws.data_aws_ec2_host.DataAwsEc2Host(self, "test_1", + host_id=cdktf.Token.as_string(aws_ec2_host_test.id) + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
+ data_aws_ec2_host_test.override_logical_id("test") +``` + +### Filter Example + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_host.DataAwsEc2Host(self, "test", + filter=[DataAwsEc2HostFilter( + name="instance-type", + values=["c5.18xlarge"] + ) + ] + ) +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available EC2 Hosts in the current region. +The given filters must match exactly one host whose data will be exported as attributes. + +* `filter` - (Optional) Configuration block. Detailed below. +* `host_id` - (Optional) ID of the Dedicated Host. + +### filter + +This block allows for complex filters. You can use one or more `filter` blocks. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeHosts.html). +* `values` - (Required) Set of values that are accepted for the given field. A host will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to the attributes above, the following attributes are exported: + +* `id` - ID of the Dedicated Host. +* `arn` - ARN of the Dedicated Host. +* `auto_placement` - Whether auto-placement is on or off. +* `availability_zone` - Availability Zone of the Dedicated Host. +* `cores` - Number of cores on the Dedicated Host. +* `host_recovery` - Whether host recovery is enabled or disabled for the Dedicated Host. +* `instance_family` - Instance family supported by the Dedicated Host. For example, "m5". +* `instance_type` - Instance type supported by the Dedicated Host. For example, "m5.large". 
If the host supports multiple instance types, no instanceType is returned. +* `outpost_arn` - ARN of the AWS Outpost on which the Dedicated Host is allocated. +* `owner_id` - ID of the AWS account that owns the Dedicated Host. +* `sockets` - Number of sockets on the Dedicated Host. +* `total_vcpus` - Total number of vCPUs on the Dedicated Host. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_instance_type.html.markdown b/website/docs/cdktf/python/d/ec2_instance_type.html.markdown new file mode 100644 index 00000000000..e0d9e62703d --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_instance_type.html.markdown @@ -0,0 +1,108 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_type" +description: |- + Information about a single EC2 Instance Type. +--- + + +# Data Source: aws_ec2_instance_type + +Get characteristics for a single EC2 Instance Type. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_instance_type.DataAwsEc2InstanceType(self, "example", + instance_type="t2.micro" + ) +``` + +## Argument Reference + +The following argument is supported: + +* `instance_type` - (Required) Instance type, e.g., `t2.micro`. + +## Attribute Reference + +In addition to the argument above, the following attributes are exported: + +~> **NOTE:** Not all attributes are set for every instance type. + +* `auto_recovery_supported` - `true` if auto recovery is supported. +* `bare_metal` - `true` if it is a bare metal instance type.
+* `burstable_performance_supported` - `true` if the instance type is a burstable performance instance type. +* `current_generation` - `true` if the instance type is a current generation. +* `dedicated_hosts_supported` - `true` if Dedicated Hosts are supported on the instance type. +* `default_cores` - Default number of cores for the instance type. +* `default_threads_per_core` - The default number of threads per core for the instance type. +* `default_vcpus` - Default number of vCPUs for the instance type. +* `ebs_encryption_support` - Indicates whether Amazon EBS encryption is supported. +* `ebs_nvme_support` - Whether non-volatile memory express (NVMe) is supported. +* `ebs_optimized_support` - Indicates that the instance type is Amazon EBS-optimized. +* `ebs_performance_baseline_bandwidth` - The baseline bandwidth performance for an EBS-optimized instance type, in Mbps. +* `ebs_performance_baseline_iops` - The baseline input/output storage operations per seconds for an EBS-optimized instance type. +* `ebs_performance_baseline_throughput` - The baseline throughput performance for an EBS-optimized instance type, in MBps. +* `ebs_performance_maximum_bandwidth` - The maximum bandwidth performance for an EBS-optimized instance type, in Mbps. +* `ebs_performance_maximum_iops` - The maximum input/output storage operations per second for an EBS-optimized instance type. +* `ebs_performance_maximum_throughput` - The maximum throughput performance for an EBS-optimized instance type, in MBps. +* `efa_supported` - Whether Elastic Fabric Adapter (EFA) is supported. +* `ena_support` - Whether Elastic Network Adapter (ENA) is supported. +* `encryption_in_transit_supported` - Indicates whether encryption in-transit between instances is supported. +* `fpgas` - Describes the FPGA accelerator settings for the instance type. + * `fpgas.#.count` - The count of FPGA accelerators for the instance type. + * `fpgas.#.manufacturer` - The manufacturer of the FPGA accelerator. 
+ * `fpgas.#.memory_size` - The size (in MiB) for the memory available to the FPGA accelerator. + * `fpgas.#.name` - The name of the FPGA accelerator. +* `free_tier_eligible` - `true` if the instance type is eligible for the free tier. +* `gpus` - Describes the GPU accelerators for the instance type. + * `gpus.#.count` - The number of GPUs for the instance type. + * `gpus.#.manufacturer` - The manufacturer of the GPU accelerator. + * `gpus.#.memory_size` - The size (in MiB) for the memory available to the GPU accelerator. + * `gpus.#.name` - The name of the GPU accelerator. +* `hibernation_supported` - `true` if On-Demand hibernation is supported. +* `hypervisor` - Hypervisor used for the instance type. +* `inference_accelerators` Describes the Inference accelerators for the instance type. + * `inference_accelerators.#.count` - The number of Inference accelerators for the instance type. + * `inference_accelerators.#.manufacturer` - The manufacturer of the Inference accelerator. + * `inference_accelerators.#.name` - The name of the Inference accelerator. +* `instance_disks` - Describes the disks for the instance type. + * `instance_disks.#.count` - The number of disks with this configuration. + * `instance_disks.#.size` - The size of the disk in GB. + * `instance_disks.#.type` - The type of disk. +* `instance_storage_supported` - `true` if instance storage is supported. +* `ipv6_supported` - `true` if IPv6 is supported. +* `maximum_ipv4_addresses_per_interface` - The maximum number of IPv4 addresses per network interface. +* `maximum_ipv6_addresses_per_interface` - The maximum number of IPv6 addresses per network interface. +* `maximum_network_interfaces` - The maximum number of network interfaces for the instance type. +* `memory_size` - Size of the instance memory, in MiB. +* `network_performance` - Describes the network performance. +* `supported_architectures` - A list of architectures supported by the instance type. 
+* `supported_placement_strategies` - A list of supported placement group types. +* `supported_root_device_types` - Indicates the supported root device types. +* `supported_usages_classes` - Indicates whether the instance type is offered for spot or On-Demand. +* `supported_virtualization_types` - The supported virtualization types. +* `sustained_clock_speed` - The speed of the processor, in GHz. +* `total_fpga_memory` - Total memory of all FPGA accelerators for the instance type (in MiB). +* `total_gpu_memory` - Total size of the memory for the GPU accelerators for the instance type (in MiB). +* `total_instance_storage` - The total size of the instance disks, in GB. +* `valid_cores` - List of the valid number of cores that can be configured for the instance type. +* `valid_threads_per_core` - List of the valid number of threads per core that can be configured for the instance type. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_instance_type_offering.html.markdown b/website/docs/cdktf/python/d/ec2_instance_type_offering.html.markdown new file mode 100644 index 00000000000..95edfae440f --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_instance_type_offering.html.markdown @@ -0,0 +1,60 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_type_offering" +description: |- + Information about a single EC2 Instance Type Offering. +--- + +# Data Source: aws_ec2_instance_type_offering + +Information about a single EC2 Instance Type Offering. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_instance_type_offering.DataAwsEc2InstanceTypeOffering(self, "example", + filter=[DataAwsEc2InstanceTypeOfferingFilter( + name="instance-type", + values=["t2.micro", "t3.micro"] + ) + ], + preferred_instance_types=["t3.micro", "t2.micro"] + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTypeOfferings.html) for supported filters. Detailed below. +* `location_type` - (Optional) Location type. Defaults to `region`. Valid values: `availability-zone`, `availability-zone-id`, and `region`. +* `preferred_instance_types` - (Optional) Ordered list of preferred EC2 Instance Types. The first match in this list will be returned. If no preferred matches are found and the original search returned more than one result, an error is returned. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. The `location` filter depends on the top-level `location_type` argument and if not specified, defaults to the current region. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Instance Type. +* `instance_type` - EC2 Instance Type. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_instance_type_offerings.html.markdown b/website/docs/cdktf/python/d/ec2_instance_type_offerings.html.markdown new file mode 100644 index 00000000000..02bcc7bf2bb --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_instance_type_offerings.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_type_offerings" +description: |- + Information about EC2 Instance Type Offerings. +--- + +# Data Source: aws_ec2_instance_type_offerings + +Information about EC2 Instance Type Offerings. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_instance_type_offerings.DataAwsEc2InstanceTypeOfferings(self, "example", + filter=[DataAwsEc2InstanceTypeOfferingsFilter( + name="instance-type", + values=["t2.micro", "t3.micro"] + ), DataAwsEc2InstanceTypeOfferingsFilter( + name="location", + values=["usw2-az4"] + ) + ], + location_type="availability-zone-id" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTypeOfferings.html) for supported filters. Detailed below. +* `location_type` - (Optional) Location type. Defaults to `region`. Valid values: `availability-zone`, `availability-zone-id`, and `region`. 
+ +### filter Argument Reference + +* `name` - (Required) Name of the filter. The `location` filter depends on the top-level `location_type` argument and if not specified, defaults to the current region. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `instance_types` - List of EC2 Instance Types. +* `locations` - List of locations. +* `location_types` - List of location types. + +Note that the indexes of Instance Type Offering instance types, locations and location types correspond. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_instance_types.html.markdown b/website/docs/cdktf/python/d/ec2_instance_types.html.markdown new file mode 100644 index 00000000000..4fcf7bedeb8 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_instance_types.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_types" +description: |- + Information about EC2 Instance Types. +--- + +# Data Source: aws_ec2_instance_types + +Information about EC2 Instance Types. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_instance_types.DataAwsEc2InstanceTypes(self, "test", + filter=[DataAwsEc2InstanceTypesFilter( + name="auto-recovery-supported", + values=["true"] + ), DataAwsEc2InstanceTypesFilter( + name="network-info.encryption-in-transit-supported", + values=["true"] + ), DataAwsEc2InstanceTypesFilter( + name="instance-storage-supported", + values=["true"] + ), DataAwsEc2InstanceTypesFilter( + name="instance-type", + values=["g5.2xlarge", "g5.4xlarge"] + ) + ] + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTypes.html) for supported filters. Detailed below. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `instance_types` - List of EC2 Instance Types. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateway.html.markdown b/website/docs/cdktf/python/d/ec2_local_gateway.html.markdown new file mode 100644 index 00000000000..cd328f6e6c0 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateway.html.markdown @@ -0,0 +1,78 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway" +description: |- + Provides details about an EC2 Local Gateway +--- + +# Data Source: aws_ec2_local_gateway + +Provides details about an EC2 Local Gateway. 
+ +## Example Usage + +The following example shows how one might accept a local gateway id as a variable. + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + # Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. + # You can read more about this at https://cdk.tf/variables + local_gateway_id = cdktf.TerraformVariable(self, "local_gateway_id") + aws.data_aws_ec2_local_gateway.DataAwsEc2LocalGateway(self, "selected", + id=local_gateway_id.string_value + ) +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Local Gateways in the current region. The given filters must match exactly one +Local Gateway whose data will be exported as attributes. + +* `filter` - (Optional) Custom filter block as described below. + +* `id` - (Optional) Id of the specific Local Gateway to retrieve. + +* `state` - (Optional) Current state of the desired Local Gateway. + Can be either `"pending"` or `"available"`. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired Local Gateway. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGateways.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Local Gateway will be selected if any one of the given values matches. + +## Attributes Reference + +All of the argument attributes except `filter` blocks are also exported as +result attributes. 
This data source will complete the data by populating +any fields that are not included in the configuration with the data for +the selected Local Gateway. + +The following attributes are additionally exported: + +* `outpost_arn` - ARN of Outpost +* `owner_id` - AWS account identifier that owns the Local Gateway. +* `state` - State of the local gateway. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateway_route_table.html.markdown b/website/docs/cdktf/python/d/ec2_local_gateway_route_table.html.markdown new file mode 100644 index 00000000000..acbc0a2581b --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateway_route_table.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route_table" +description: |- + Provides details about an EC2 Local Gateway Route Table +--- + +# Data Source: aws_ec2_local_gateway_route_table + +Provides details about an EC2 Local Gateway Route Table. + +This data source can prove useful when a module accepts a local gateway route table id as +an input variable and needs to, for example, find the associated Outpost or Local Gateway. + +## Example Usage + +The following example returns a specific local gateway route table ID + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + # Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. 
+    # You can read more about this at https://cdk.tf/variables
+        aws_ec2_local_gateway_route_table = cdktf.TerraformVariable(self, "aws_ec2_local_gateway_route_table")
+        aws.data_aws_ec2_local_gateway_route_table.DataAwsEc2LocalGatewayRouteTable(self, "selected",
+            local_gateway_route_table_id=aws_ec2_local_gateway_route_table.string_value
+        )
+```
+
+## Argument Reference
+
+The arguments of this data source act as filters for querying the available
+Local Gateway Route Tables in the current region. The given filters must match exactly one
+Local Gateway Route Table whose data will be exported as attributes.
+
+* `local_gateway_route_table_id` - (Optional) ID of the specific local gateway route table to retrieve.
+
+* `local_gateway_id` - (Optional) ID of the local gateway the route table is associated with.
+
+* `outpost_arn` - (Optional) ARN of the Outpost the local gateway route table is associated with.
+
+* `state` - (Optional) State of the local gateway route table.
+
+* `tags` - (Optional) Mapping of tags, each pair of which must exactly match
+  a pair on the desired local gateway route table.
+
+More complex filters can be expressed using one or more `filter` sub-blocks,
+which take the following arguments:
+
+* `name` - (Required) Name of the field to filter by, as defined by
+  [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayRouteTables.html).
+
+* `values` - (Required) Set of values that are accepted for the given field.
+  A local gateway route table will be selected if any one of the given values matches.
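For orientation, each `filter` sub-block corresponds to one entry in the `Filters` parameter of the underlying `DescribeLocalGatewayRouteTables` API call. A minimal sketch of that mapping (the helper name and sample values are illustrative, not part of the provider):

```python
# Sketch: how `filter` sub-blocks map onto the EC2 API's Filters parameter.
# The helper name and sample values are illustrative only.
def to_ec2_filters(filters):
    """Convert {name: [values]} pairs into the EC2 Filters request shape."""
    return [{"Name": name, "Values": list(values)} for name, values in filters.items()]

# The equivalent of two `filter` sub-blocks, narrowing by local gateway and state.
print(to_ec2_filters({
    "local-gateway-id": ["lgw-0123456789abcdef0"],
    "state": ["available"],
}))
```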
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateway_route_tables.html.markdown b/website/docs/cdktf/python/d/ec2_local_gateway_route_tables.html.markdown new file mode 100644 index 00000000000..df9d201f1ab --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateway_route_tables.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route_tables" +description: |- + Provides information for multiple EC2 Local Gateway Route Tables +--- + +# Data Source: aws_ec2_local_gateway_route_tables + +Provides information for multiple EC2 Local Gateway Route Tables, such as their identifiers. + +## Example Usage + +The following shows outputting all Local Gateway Route Table Ids. + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + data_aws_ec2_local_gateway_route_tables_foo = + aws.data_aws_ec2_local_gateway_route_tables.DataAwsEc2LocalGatewayRouteTables(self, "foo") + cdktf_terraform_output_foo = cdktf.TerraformOutput(self, "foo_1", + value=data_aws_ec2_local_gateway_route_tables_foo.ids + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + cdktf_terraform_output_foo.override_logical_id("foo") +``` + +## Argument Reference + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired local gateway route table. + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayRouteTables.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Local Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. +* `ids` - Set of Local Gateway Route Table identifiers + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface.html.markdown b/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface.html.markdown new file mode 100644 index 00000000000..6e7f86766be --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_virtual_interface" +description: |- + Provides details about an EC2 Local Gateway Virtual Interface +--- + +# Data Source: aws_ec2_local_gateway_virtual_interface + +Provides details about an EC2 Local Gateway Virtual Interface. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). + +## Example Usage + +```terraform +data "aws_ec2_local_gateway_virtual_interface" "example" { + for_each = data.aws_ec2_local_gateway_virtual_interface_group.example.local_gateway_virtual_interface_ids + + id = each.value +} +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. 
See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayVirtualInterfaces.html) for supported filters. Detailed below.
+* `id` - (Optional) Identifier of EC2 Local Gateway Virtual Interface.
+* `tags` - (Optional) Key-value map of resource tags, each pair of which must exactly match a pair on the desired local gateway virtual interface.
+
+### filter Argument Reference
+
+The `filter` configuration block supports the following arguments:
+
+* `name` - (Required) Name of the filter.
+* `values` - (Required) List of one or more values for the filter.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `local_address` - Local address.
+* `local_bgp_asn` - Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the EC2 Local Gateway.
+* `local_gateway_id` - Identifier of the EC2 Local Gateway.
+* `peer_address` - Peer address.
+* `peer_bgp_asn` - Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the peer.
+* `vlan` - Virtual Local Area Network.
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `read` - (Default `20m`)
+
+ \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface_group.html.markdown new file mode 100644 index 00000000000..e20f807a803 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface_group.html.markdown @@ -0,0 +1,49 @@
+---
+subcategory: "Outposts (EC2)"
+layout: "aws"
+page_title: "AWS: aws_ec2_local_gateway_virtual_interface_group"
+description: |-
+  Provides details about an EC2 Local Gateway Virtual Interface Group
+---
+
+# Data Source: aws_ec2_local_gateway_virtual_interface_group
+
+Provides details about an EC2 Local Gateway Virtual Interface Group.
More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing).
+
+## Example Usage
+
+```terraform
+data "aws_ec2_local_gateway_virtual_interface_group" "example" {
+  local_gateway_id = data.aws_ec2_local_gateway.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are optional:
+
+* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayVirtualInterfaceGroups.html) for supported filters. Detailed below.
+* `id` - (Optional) Identifier of EC2 Local Gateway Virtual Interface Group.
+* `local_gateway_id` - (Optional) Identifier of EC2 Local Gateway.
+* `tags` - (Optional) Key-value map of resource tags, each pair of which must exactly match a pair on the desired local gateway virtual interface group.
+
+### filter Argument Reference
+
+The `filter` configuration block supports the following arguments:
+
+* `name` - (Required) Name of the filter.
+* `values` - (Required) List of one or more values for the filter.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `local_gateway_virtual_interface_ids` - Set of EC2 Local Gateway Virtual Interface identifiers.
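The tag-matching rule used throughout these data sources — every configured key/value pair must exactly match a pair on the resource — can be sketched in plain Python (the sample tags are invented):

```python
# Sketch of the documented tag-matching semantics. Sample tags are invented.
def tags_match(resource_tags, desired_tags):
    """True only when every desired key/value pair appears verbatim on the resource."""
    return all(resource_tags.get(key) == value for key, value in desired_tags.items())

# A resource carrying extra tags still matches a subset of desired pairs.
print(tags_match({"Env": "prod", "Team": "network"}, {"Env": "prod"}))  # True
# A differing value for a desired key rules the resource out.
print(tags_match({"Env": "staging"}, {"Env": "prod"}))  # False
```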
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface_groups.html.markdown b/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface_groups.html.markdown new file mode 100644 index 00000000000..4578304cdc6 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateway_virtual_interface_groups.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_virtual_interface_groups" +description: |- + Provides details about multiple EC2 Local Gateway Virtual Interface Groups +--- + +# Data Source: aws_ec2_local_gateway_virtual_interface_groups + +Provides details about multiple EC2 Local Gateway Virtual Interface Groups, such as identifiers. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_local_gateway_virtual_interface_groups.DataAwsEc2LocalGatewayVirtualInterfaceGroups(self, "all") +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayVirtualInterfaceGroups.html) for supported filters. Detailed below. 
+* `tags` - (Optional) Key-value map of resource tags, each pair of which must exactly match a pair on the desired local gateway route table. + +### filter Argument Reference + +The `filter` configuration block supports the following arguments: + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of EC2 Local Gateway Virtual Interface Group identifiers. +* `local_gateway_virtual_interface_ids` - Set of EC2 Local Gateway Virtual Interface identifiers. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_local_gateways.html.markdown b/website/docs/cdktf/python/d/ec2_local_gateways.html.markdown new file mode 100644 index 00000000000..abda280ec16 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_local_gateways.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateways" +description: |- + Provides information for multiple EC2 Local Gateways +--- + +# Data Source: aws_ec2_local_gateways + +Provides information for multiple EC2 Local Gateways, such as their identifiers. + +## Example Usage + +The following example retrieves Local Gateways with a resource tag of `service` set to `production`. + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + data_aws_ec2_local_gateways_foo = + aws.data_aws_ec2_local_gateways.DataAwsEc2LocalGateways(self, "foo", + tags={ + "service": "production" + } + ) + cdktf_terraform_output_foo = cdktf.TerraformOutput(self, "foo_1", + value=data_aws_ec2_local_gateways_foo.ids + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + cdktf_terraform_output_foo.override_logical_id("foo") +``` + +## Argument Reference + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired local_gateways. + +* `filter` - (Optional) Custom filter block as described below. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGateways.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Local Gateway will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. 
+* `ids` - Set of all the Local Gateway identifiers + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_managed_prefix_list.html.markdown b/website/docs/cdktf/python/d/ec2_managed_prefix_list.html.markdown new file mode 100644 index 00000000000..086ae45d916 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_managed_prefix_list.html.markdown @@ -0,0 +1,89 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_managed_prefix_list" +description: |- + Provides details about a specific managed prefix list +--- + +# Data Source: aws_ec2_managed_prefix_list + +`aws_ec2_managed_prefix_list` provides details about a specific AWS prefix list or +customer-managed prefix list in the current region. + +## Example Usage + +### Find the regional DynamoDB prefix list + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + data_aws_region_current = aws.data_aws_region.DataAwsRegion(self, "current") + aws.data_aws_ec2_managed_prefix_list.DataAwsEc2ManagedPrefixList(self, "example", + name="com.amazonaws.${" + data_aws_region_current.name + "}.dynamodb" + ) +``` + +### Find a managed prefix list using filters + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws
+class MyConvertedCode(cdktf.TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        aws.data_aws_ec2_managed_prefix_list.DataAwsEc2ManagedPrefixList(self, "example",
+            filter=[DataAwsEc2ManagedPrefixListFilter(
+                name="prefix-list-name",
+                values=["my-prefix-list"]
+            )
+            ]
+        )
+```
+
+## Argument Reference
+
+The arguments of this data source act as filters for querying the available
+prefix lists. The given filters must match exactly one prefix list
+whose data will be exported as attributes.
+
+* `id` - (Optional) ID of the prefix list to select.
+* `name` - (Optional) Name of the prefix list to select.
+* `filter` - (Optional) Configuration block(s) for filtering. Detailed below.
+
+### filter Configuration Block
+
+The following arguments are supported by the `filter` configuration block:
+
+* `name` - (Required) Name of the filter field. Valid values can be found in the EC2 [DescribeManagedPrefixLists](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeManagedPrefixLists.html) API Reference.
+* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - ID of the selected prefix list.
+* `arn` - ARN of the selected prefix list.
+* `name` - Name of the selected prefix list.
+* `entries` - Set of entries in this prefix list. Each entry is an object with `cidr` and `description`.
+* `owner_id` - Account ID of the owner of a customer-managed prefix list, or `AWS` otherwise.
+* `address_family` - Address family of the prefix list. Valid values are `IPv4` and `IPv6`.
+* `max_entries` - When the prefix list is managed, the maximum number of entries it supports, or null otherwise.
+* `tags` - Map of tags assigned to the resource.
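Since `entries` is a set of objects rather than plain strings, downstream code typically projects out the field it needs. A sketch of extracting the CIDR blocks (the entry values below are invented sample data, not real AWS ranges):

```python
# Sketch: project the `cidr` field out of a prefix list's `entries` attribute.
# The entries below are invented sample data.
entries = [
    {"cidr": "198.51.100.0/24", "description": "example range A"},
    {"cidr": "203.0.113.0/24", "description": "example range B"},
]
cidrs = sorted(entry["cidr"] for entry in entries)
print(cidrs)
```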
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_managed_prefix_lists.html.markdown b/website/docs/cdktf/python/d/ec2_managed_prefix_lists.html.markdown new file mode 100644 index 00000000000..dc7c5d0e364 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_managed_prefix_lists.html.markdown @@ -0,0 +1,74 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_managed_prefix_lists" +description: |- + Get information on managed prefix lists +--- + +# Data Source: aws_ec2_managed_prefix_lists + +This resource can be useful for getting back a list of managed prefix list ids to be referenced elsewhere. + +## Example Usage + +The following returns all managed prefix lists filtered by tags + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + data_aws_ec2_managed_prefix_lists_test_env = + aws.data_aws_ec2_managed_prefix_lists.DataAwsEc2ManagedPrefixLists(self, "test_env", + tags={ + "Env": "test" + } + ) + # In most cases loops should be handled in the programming language context and + # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input + # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source + # you need to keep this like it is. 
+        data_aws_ec2_managed_prefix_list_test_env_count = cdktf.TerraformCount.of(
+            cdktf.Fn.length_of(data_aws_ec2_managed_prefix_lists_test_env.ids))
+        data_aws_ec2_managed_prefix_list_test_env =
+            aws.data_aws_ec2_managed_prefix_list.DataAwsEc2ManagedPrefixList(self, "test_env_1",
+                id=cdktf.Token.as_string(
+                    cdktf.property_access(
+                        cdktf.Fn.tolist(data_aws_ec2_managed_prefix_lists_test_env.ids), [data_aws_ec2_managed_prefix_list_test_env_count.index])),
+                count=data_aws_ec2_managed_prefix_list_test_env_count
+            )
+        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
+        data_aws_ec2_managed_prefix_list_test_env.override_logical_id("test_env")
+```
+
+## Argument Reference
+
+* `filter` - (Optional) Custom filter block as described below.
+* `tags` - (Optional) Map of tags, each pair of which must exactly match
+  a pair on the desired managed prefix lists.
+
+More complex filters can be expressed using one or more `filter` sub-blocks,
+which take the following arguments:
+
+* `name` - (Required) Name of the field to filter by, as defined by
+  [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeManagedPrefixLists.html).
+* `values` - (Required) Set of values that are accepted for the given field.
+  A managed prefix list will be selected if any one of the given values matches.
+
+## Attributes Reference
+
+* `id` - AWS Region.
+* `ids` - List of all the managed prefix list ids found.
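Under the hood, a `tags` argument like the one in the example above is typically expressed to `DescribeManagedPrefixLists` as `tag:<key>` filters. A sketch of that translation (the helper name is illustrative, and the exact request shape is an assumption about the provider's internals):

```python
# Sketch: render a tags map as the EC2 API's tag:<key> filter entries.
# Helper name is illustrative; the exact provider behavior is assumed.
def tags_to_filters(tags):
    """Turn {key: value} tag pairs into tag:<key> filters for the request."""
    return [{"Name": f"tag:{key}", "Values": [value]} for key, value in tags.items()]

# The `tags = {"Env": "test"}` argument from the example above.
print(tags_to_filters({"Env": "test"}))
```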
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_network_insights_analysis.html.markdown b/website/docs/cdktf/python/d/ec2_network_insights_analysis.html.markdown new file mode 100644 index 00000000000..eed346f74e5 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_network_insights_analysis.html.markdown @@ -0,0 +1,54 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_analysis" +description: |- + Provides details about a specific Network Insights Analysis. +--- + +# Data Source: aws_ec2_network_insights_analysis + +`aws_ec2_network_insights_analysis` provides details about a specific Network Insights Analysis. + +## Example Usage + +```terraform +data "aws_ec2_network_insights_analysis" "example" { + network_insights_analysis_id = aws_ec2_network_insights_analysis.example.id +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Network Insights Analyses. The given filters must match exactly one Network Insights Analysis +whose data will be exported as attributes. + +* `network_insights_analysis_id` - (Optional) ID of the Network Insights Analysis to select. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the EC2 [`DescribeNetworkInsightsAnalyses`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInsightsAnalyses.html) API Reference. +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `alternate_path_hints` - Potential intermediate components of a feasible path. +* `arn` - ARN of the selected Network Insights Analysis. +* `explanations` - Explanation codes for an unreachable path. +* `filter_in_arns` - ARNs of the AWS resources that the path must traverse. +* `forward_path_components` - The components in the path from source to destination. +* `network_insights_path_id` - The ID of the path. +* `path_found` - Set to `true` if the destination was reachable. +* `return_path_components` - The components in the path from destination to source. +* `start_date` - Date/time the analysis was started. +* `status` - Status of the analysis. `succeeded` means the analysis was completed, not that a path was found, for that see `path_found`. +* `status_message` - Message to provide more context when the `status` is `failed`. +* `warning_message` - Warning message. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_network_insights_path.html.markdown b/website/docs/cdktf/python/d/ec2_network_insights_path.html.markdown new file mode 100644 index 00000000000..d55516021a1 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_network_insights_path.html.markdown @@ -0,0 +1,50 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_path" +description: |- + Provides details about a specific Network Insights Path. +--- + +# Data Source: aws_ec2_network_insights_path + +`aws_ec2_network_insights_path` provides details about a specific Network Insights Path. + +## Example Usage + +```terraform +data "aws_ec2_network_insights_path" "example" { + network_insights_path_id = aws_ec2_network_insights_path.example.id +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Network Insights Paths. 
The given filters must match exactly one Network Insights Path +whose data will be exported as attributes. + +* `network_insights_path_id` - (Optional) ID of the Network Insights Path to select. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the EC2 [`DescribeNetworkInsightsPaths`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInsightsPaths.html) API Reference. +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the selected Network Insights Path. +* `destination` - AWS resource that is the destination of the path. +* `destination_ip` - IP address of the AWS resource that is the destination of the path. +* `destination_port` - Destination port. +* `protocol` - Protocol. +* `source` - AWS resource that is the source of the path. +* `source_ip` - IP address of the AWS resource that is the source of the path. +* `tags` - Map of tags assigned to the resource. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_public_ipv4_pool.html.markdown b/website/docs/cdktf/python/d/ec2_public_ipv4_pool.html.markdown new file mode 100644 index 00000000000..ed3037f5406 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_public_ipv4_pool.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_public_ipv4_pool" +description: |- + Provides details about a specific AWS EC2 Public IPv4 Pool. +--- + +# Data Source: aws_ec2_public_ipv4_pool + +Provides details about a specific AWS EC2 Public IPv4 Pool. 
+
+## Example Usage
+
+### Basic Usage
+
+```python
+import constructs as constructs
+import cdktf as cdktf
+# Provider bindings are generated by running cdktf get.
+# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws
+class MyConvertedCode(cdktf.TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        aws.data_aws_ec2_public_ipv4_pool.DataAwsEc2PublicIpv4Pool(self, "example",
+            pool_id="ipv4pool-ec2-000df99cff0c1ec10"
+        )
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `pool_id` - (Required) AWS resource ID of a public IPv4 pool (as a string) for which this data source will fetch detailed information.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `description` - Description of the pool, if any.
+* `network_border_group` - Name of the location from which the address pool is advertised.
+* `pool_address_ranges` - List of Address Ranges in the Pool; each address range record contains:
+    * `address_count` - Number of addresses in the range.
+    * `available_address_count` - Number of available addresses in the range.
+    * `first_address` - First address in the range.
+    * `last_address` - Last address in the range.
+* `tags` - Any tags for the address pool.
+* `total_address_count` - Total number of addresses in the pool.
+* `total_available_address_count` - Total number of available addresses in the pool.
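The per-range counts should roll up into the pool-level totals, which makes a quick local consistency check possible. A sketch with invented sample data:

```python
# Sketch: cross-check pool totals against the per-range counts.
# All values below are invented sample data.
pool_address_ranges = [
    {"first_address": "203.0.113.0", "last_address": "203.0.113.127",
     "address_count": 128, "available_address_count": 100},
    {"first_address": "203.0.113.128", "last_address": "203.0.113.255",
     "address_count": 128, "available_address_count": 28},
]
total_address_count = sum(r["address_count"] for r in pool_address_ranges)
total_available_address_count = sum(r["available_address_count"] for r in pool_address_ranges)
print(total_address_count, total_available_address_count)  # 256 128
```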
+ + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_public_ipv4_pools.html.markdown b/website/docs/cdktf/python/d/ec2_public_ipv4_pools.html.markdown new file mode 100644 index 00000000000..f2e779ba37b --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_public_ipv4_pools.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_public_ipv4_pools" +description: |- + Terraform data source for getting information about AWS EC2 Public IPv4 Pools. +--- + +# Data Source: aws_ec2_public_ipv4_pools + +Terraform data source for getting information about AWS EC2 Public IPv4 Pools. + +## Example Usage + +### Basic Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_public_ipv4_pools.DataAwsEc2PublicIpv4Pools(self, "example") +``` + +### Usage with Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_public_ipv4_pools.DataAwsEc2PublicIpv4Pools(self, "example", + filter=[DataAwsEc2PublicIpv4PoolsFilter( + name="tag-key", + values=["ExampleTagKey"] + ) + ] + ) +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) Custom filter block as described below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired pools. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribePublicIpv4Pools.html). +* `values` - (Required) Set of values that are accepted for the given field. Pool IDs will be selected if any one of the given values match. + +## Attributes Reference + +* `pool_ids` - List of all the pool IDs found. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_serial_console_access.html.markdown b/website/docs/cdktf/python/d/ec2_serial_console_access.html.markdown new file mode 100644 index 00000000000..b3bd01b034d --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_serial_console_access.html.markdown @@ -0,0 +1,40 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_serial_console_access" +description: |- + Checks whether serial console access is enabled for your AWS account in the current AWS region. +--- + +# Data Source: aws_ec2_serial_console_access + +Provides a way to check whether serial console access is enabled for your AWS account in the current AWS region. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_serial_console_access.DataAwsEc2SerialConsoleAccess(self, "current") +``` + +## Attributes Reference + +The following attributes are exported: + +* `enabled` - Whether or not serial console access is enabled. Returns as `true` or `false`. +* `id` - Region of serial console access. 
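+As a sketch in plain Terraform, the `enabled` flag can be surfaced directly (the output name is illustrative):
+
+```terraform
+data "aws_ec2_serial_console_access" "current" {}
+
+# true when EC2 serial console access is enabled account-wide in this region.
+output "serial_console_access_enabled" {
+  value = data.aws_ec2_serial_console_access.current.enabled
+}
+```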
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_spot_price.html.markdown b/website/docs/cdktf/python/d/ec2_spot_price.html.markdown new file mode 100644 index 00000000000..6ec7e3ee99b --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_spot_price.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_spot_price" +description: |- + Information about most recent Spot Price for a given EC2 instance. +--- + +# Data Source: aws_ec2_spot_price + +Information about most recent Spot Price for a given EC2 instance. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_spot_price.DataAwsEc2SpotPrice(self, "example", + availability_zone="us-west-2a", + filter=[DataAwsEc2SpotPriceFilter( + name="product-description", + values=["Linux/UNIX"] + ) + ], + instance_type="t3.medium" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `instance_type` - (Optional) Type of instance for which to query Spot Price information. +* `availability_zone` - (Optional) Availability zone in which to query Spot price information. +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSpotPriceHistory.html) for supported filters. Detailed below. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. 
+* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `spot_price` - Most recent Spot Price value for the given instance type and AZ. +* `spot_price_timestamp` - The timestamp at which the Spot Price value was published. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway.html.markdown new file mode 100644 index 00000000000..db3fcdb43ce --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway.html.markdown @@ -0,0 +1,89 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway" +description: |- + Get information on an EC2 Transit Gateway +--- + +# Data Source: aws_ec2_transit_gateway + +Get information on an EC2 Transit Gateway. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway.DataAwsEc2TransitGateway(self, "example", + filter=[DataAwsEc2TransitGatewayFilter( + name="options.amazon-side-asn", + values=["64512"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway.DataAwsEc2TransitGateway(self, "example", + id="tgw-12345678" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway. + +### filter Argument Reference + +* `name` - (Required) Name of the field to filter by, as defined by the [underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGateways.html). +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `amazon_side_asn` - Private Autonomous System Number (ASN) for the Amazon side of a BGP session +* `arn` - EC2 Transit Gateway ARN +* `association_default_route_table_id` - Identifier of the default association route table +* `auto_accept_shared_attachments` - Whether resource attachment requests are automatically accepted +* `default_route_table_association` - Whether resource attachments are automatically associated with the default association route table +* `default_route_table_propagation` - Whether resource attachments automatically propagate routes to the default propagation route table +* `description` - Description of the EC2 Transit Gateway +* `dns_support` - Whether DNS support is enabled +* `multicast_support` - Whether Multicast support is enabled +* `id` - EC2 Transit Gateway identifier +* `owner_id` - Identifier of the AWS account that owns the EC2 Transit Gateway +* `propagation_default_route_table_id` - Identifier of the default propagation route table +* `tags` - Key-value tags for the EC2 Transit Gateway +* `transit_gateway_cidr_blocks` - The list of associated CIDR blocks +* 
`vpn_ecmp_support` - Whether VPN Equal Cost Multipath Protocol support is enabled + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_attachment.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_attachment.html.markdown new file mode 100644 index 00000000000..62925f85bd4 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_attachment.html.markdown @@ -0,0 +1,56 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_attachment" +description: |- + Get information on an EC2 Transit Gateway's attachment to a resource +--- + +# Data Source: aws_ec2_transit_gateway_attachment + +Get information on an EC2 Transit Gateway's attachment to a resource. + +## Example Usage + +```terraform +data "aws_ec2_transit_gateway_attachment" "example" { + filter { + name = "transit-gateway-id" + values = [aws_ec2_transit_gateway.example.id] + } + + filter { + name = "resource-type" + values = ["peering"] + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transit_gateway_attachment_id` - (Optional) ID of the attachment. + +### filter Argument Reference + +* `name` - (Required) Name of the field to filter by, as defined by the [underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html). +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the attachment. 
+* `association_state` - The state of the association (see [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TransitGatewayAttachmentAssociation.html) for valid values). +* `association_transit_gateway_route_table_id` - The ID of the route table for the transit gateway. +* `resource_id` - ID of the resource. +* `resource_owner_id` - ID of the AWS account that owns the resource. +* `resource_type` - Resource type. +* `state` - Attachment state. +* `tags` - Key-value tags for the attachment. +* `transit_gateway_id` - ID of the transit gateway. +* `transit_gateway_owner_id` - The ID of the AWS account that owns the transit gateway. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_attachments.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_attachments.html.markdown new file mode 100644 index 00000000000..5cf23027ab9 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_attachments.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_attachments" +description: |- + Get information on EC2 Transit Gateway Attachments +--- + +# Data Source: aws_ec2_transit_gateway_attachments + +Get information on EC2 Transit Gateway Attachments. + +## Example Usage + +### By Filter + +```hcl +data "aws_ec2_transit_gateway_attachments" "filtered" { + filter { + name = "state" + values = ["pendingAcceptance"] + } + + filter { + name = "resource-type" + values = ["vpc"] + } +} + +data "aws_ec2_transit_gateway_attachment" "unit" { + count = length(data.aws_ec2_transit_gateway_attachments.filtered.ids) + id = data.aws_ec2_transit_gateway_attachments.filtered.ids[count.index] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. 
+
+### filter Argument Reference
+
+* `name` - (Required) Name of the field to filter by; see the [official documentation][1] for available values.
+* `values` - (Required) List of one or more values for the filter.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `ids` - A list of all attachment IDs matching the filter. You can retrieve more information about the attachment using the [aws_ec2_transit_gateway_attachment][2] data source, searching by identifier.
+
+[1]: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html
+[2]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_transit_gateway_attachment
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `read` - (Default `20m`)
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_connect.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_connect.html.markdown
new file mode 100644
index 00000000000..77a2acc4f3e
--- /dev/null
+++ b/website/docs/cdktf/python/d/ec2_transit_gateway_connect.html.markdown
@@ -0,0 +1,78 @@
+---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway_connect"
+description: |-
+  Get information on an EC2 Transit Gateway Connect
+---
+
+# Data Source: aws_ec2_transit_gateway_connect
+
+Get information on an EC2 Transit Gateway Connect.
+
+## Example Usage
+
+### By Filter
+
+```python
+import constructs as constructs
+import cdktf as cdktf
+# Provider bindings are generated by running cdktf get.
+# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_connect.DataAwsEc2TransitGatewayConnect(self, "example", + filter=[DataAwsEc2TransitGatewayConnectFilter( + name="transport-transit-gateway-attachment-id", + values=["tgw-attach-12345678"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_connect.DataAwsEc2TransitGatewayConnect(self, "example", + transit_gateway_connect_id="tgw-attach-12345678" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transit_gateway_connect_id` - (Optional) Identifier of the EC2 Transit Gateway Connect. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. 
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `protocol` - Tunnel protocol
+* `tags` - Key-value tags for the EC2 Transit Gateway Connect
+* `transit_gateway_id` - EC2 Transit Gateway identifier
+* `transport_attachment_id` - The underlying VPC attachment
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `read` - (Default `20m`)
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_connect_peer.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_connect_peer.html.markdown
new file mode 100644
index 00000000000..7f82e400e85
--- /dev/null
+++ b/website/docs/cdktf/python/d/ec2_transit_gateway_connect_peer.html.markdown
@@ -0,0 +1,81 @@
+---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway_connect_peer"
+description: |-
+  Get information on an EC2 Transit Gateway Connect Peer
+---
+
+# Data Source: aws_ec2_transit_gateway_connect_peer
+
+Get information on an EC2 Transit Gateway Connect Peer.
+
+## Example Usage
+
+### By Filter
+
+```python
+import constructs as constructs
+import cdktf as cdktf
+# Provider bindings are generated by running cdktf get.
+# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws
+class MyConvertedCode(cdktf.TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        aws.data_aws_ec2_transit_gateway_connect_peer.DataAwsEc2TransitGatewayConnectPeer(self, "example",
+            filter=[DataAwsEc2TransitGatewayConnectPeerFilter(
+                name="transit-gateway-attachment-id",
+                values=["tgw-attach-12345678"]
+            )
+            ]
+        )
+```
+
+### By Identifier
+
+```python
+import constructs as constructs
+import cdktf as cdktf
+# Provider bindings are generated by running cdktf get.
+# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws
+class MyConvertedCode(cdktf.TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        aws.data_aws_ec2_transit_gateway_connect_peer.DataAwsEc2TransitGatewayConnectPeer(self, "example",
+            transit_gateway_connect_peer_id="tgw-connect-peer-12345678"
+        )
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below.
+* `transit_gateway_connect_peer_id` - (Optional) Identifier of the EC2 Transit Gateway Connect Peer.
+
+### filter Argument Reference
+
+* `name` - (Required) Name of the filter.
+* `values` - (Required) List of one or more values for the filter.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `arn` - EC2 Transit Gateway Connect Peer ARN
+* `bgp_asn` - BGP ASN assigned to the customer device
+* `inside_cidr_blocks` - CIDR blocks that will be used for addressing within the tunnel.
+* `peer_address` - IP address assigned to the customer device, which is used as the tunnel endpoint
+* `tags` - Key-value tags for the EC2 Transit Gateway Connect Peer
+* `transit_gateway_address` - The IP address assigned to the Transit Gateway, which is used as the tunnel endpoint.
+* `transit_gateway_attachment_id` - The Transit Gateway Connect attachment identifier
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `read` - (Default `20m`)
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_dx_gateway_attachment.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_dx_gateway_attachment.html.markdown
new file mode 100644
index 00000000000..76ccb6872d5
--- /dev/null
+++ b/website/docs/cdktf/python/d/ec2_transit_gateway_dx_gateway_attachment.html.markdown
@@ -0,0 +1,53 @@
+---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway_dx_gateway_attachment"
+description: |-
+  Get information on an EC2 Transit Gateway's attachment to a Direct Connect Gateway
+---
+
+# Data Source: aws_ec2_transit_gateway_dx_gateway_attachment
+
+Get information on an EC2 Transit Gateway's attachment to a Direct Connect Gateway.
+
+## Example Usage
+
+### By Transit Gateway and Direct Connect Gateway Identifiers
+
+```terraform
+data "aws_ec2_transit_gateway_dx_gateway_attachment" "example" {
+  transit_gateway_id = aws_ec2_transit_gateway.example.id
+  dx_gateway_id      = aws_dx_gateway.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `transit_gateway_id` - (Optional) Identifier of the EC2 Transit Gateway.
+* `dx_gateway_id` - (Optional) Identifier of the Direct Connect Gateway.
+* `filter` - (Optional) Configuration block(s) for filtering. Detailed below.
+* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired Transit Gateway Direct Connect Gateway Attachment.
+
+### filter Configuration Block
+
+The following arguments are supported by the `filter` configuration block:
+
+* `name` - (Required) Name of the filter field.
Valid values can be found in the [EC2 DescribeTransitGatewayAttachments API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html). +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tags` - Key-value tags for the EC2 Transit Gateway Attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_multicast_domain.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_multicast_domain.html.markdown new file mode 100644 index 00000000000..6bcf5d5f5a4 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_multicast_domain.html.markdown @@ -0,0 +1,95 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_domain" +description: |- + Get information on an EC2 Transit Gateway Multicast Domain +--- + +# Data Source: aws_ec2_transit_gateway_multicast_domain + +Get information on an EC2 Transit Gateway Multicast Domain. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_multicast_domain.DataAwsEc2TransitGatewayMulticastDomain(self, "example", + filter=[DataAwsEc2TransitGatewayMulticastDomainFilter( + name="transit-gateway-multicast-domain-id", + values=["tgw-mcast-domain-12345678"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_multicast_domain.DataAwsEc2TransitGatewayMulticastDomain(self, "example", + transit_gateway_multicast_domain_id="tgw-mcast-domain-12345678" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transit_gateway_multicast_domain_id` - (Optional) Identifier of the EC2 Transit Gateway Multicast Domain. + +### filter Argument Reference + +This block allows for complex filters. You can use one or more `filter` blocks. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayMulticastDomains.html). +* `values` - (Required) Set of values that are accepted for the given field. A multicast domain will be selected if any one of the given values matches. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Domain identifier. +* `arn` - EC2 Transit Gateway Multicast Domain ARN. 
+* `associations` - EC2 Transit Gateway Multicast Domain Associations + * `subnet_id` - The ID of the subnet associated with the transit gateway multicast domain. + * `transit_gateway_attachment_id` - The ID of the transit gateway attachment. +* `auto_accept_shared_associations` - Whether to automatically accept cross-account subnet associations that are associated with the EC2 Transit Gateway Multicast Domain. +* `igmpv2_support` - Whether to enable Internet Group Management Protocol (IGMP) version 2 for the EC2 Transit Gateway Multicast Domain. +* `members` - EC2 Multicast Domain Group Members + * `group_ip_address` - The IP address assigned to the transit gateway multicast group. + * `network_interface_id` - The group members' network interface ID. +* `owner_id` - Identifier of the AWS account that owns the EC2 Transit Gateway Multicast Domain. +* `sources` - EC2 Multicast Domain Group Sources + * `group_ip_address` - The IP address assigned to the transit gateway multicast group. + * `network_interface_id` - The group members' network interface ID. +* `static_sources_support` - Whether to enable support for statically configuring multicast group sources for the EC2 Transit Gateway Multicast Domain. +* `tags` - Key-value tags for the EC2 Transit Gateway Multicast Domain. +* `transit_gateway_id` - EC2 Transit Gateway identifier. 
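+For example, the nested `associations` attribute can be flattened in plain Terraform (the domain ID and output name below are illustrative):
+
+```terraform
+data "aws_ec2_transit_gateway_multicast_domain" "example" {
+  transit_gateway_multicast_domain_id = "tgw-mcast-domain-12345678"
+}
+
+# Collect the subnet IDs associated with the multicast domain.
+output "associated_subnet_ids" {
+  value = [
+    for assoc in data.aws_ec2_transit_gateway_multicast_domain.example.associations : assoc.subnet_id
+  ]
+}
+```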
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_peering_attachment.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_peering_attachment.html.markdown new file mode 100644 index 00000000000..51d5ad58170 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_peering_attachment.html.markdown @@ -0,0 +1,83 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_peering_attachment" +description: |- + Get information on an EC2 Transit Gateway Peering Attachment +--- + +# Data Source: aws_ec2_transit_gateway_peering_attachment + +Get information on an EC2 Transit Gateway Peering Attachment. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_peering_attachment.DataAwsEc2TransitGatewayPeeringAttachment(self, "example", + filter=[DataAwsEc2TransitGatewayPeeringAttachmentFilter( + name="transit-gateway-attachment-id", + values=["tgw-attach-12345678"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws
+class MyConvertedCode(cdktf.TerraformStack):
+    def __init__(self, scope, name):
+        super().__init__(scope, name)
+        aws.data_aws_ec2_transit_gateway_peering_attachment.DataAwsEc2TransitGatewayPeeringAttachment(self, "attachment",
+            id="tgw-attach-12345678"
+        )
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below.
+* `id` - (Optional) Identifier of the EC2 Transit Gateway Peering Attachment.
+* `tags` - (Optional) Mapping of tags, each pair of which must exactly match
+  a pair on the specific EC2 Transit Gateway Peering Attachment to retrieve.
+
+More complex filters can be expressed using one or more `filter` sub-blocks,
+which take the following arguments:
+
+* `name` - (Required) Name of the field to filter by, as defined by
+  [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayPeeringAttachments.html).
+* `values` - (Required) Set of values that are accepted for the given field.
+  An EC2 Transit Gateway Peering Attachment will be selected if any one of the given values matches.
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `peer_account_id` - Identifier of the peer AWS account +* `peer_region` - Identifier of the peer AWS region +* `peer_transit_gateway_id` - Identifier of the peer EC2 Transit Gateway +* `transit_gateway_id` - Identifier of the local EC2 Transit Gateway + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_route_table.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_route_table.html.markdown new file mode 100644 index 00000000000..fd856512692 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_route_table.html.markdown @@ -0,0 +1,83 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table" +description: |- + Get information on an EC2 Transit Gateway Route Table +--- + +# Data Source: aws_ec2_transit_gateway_route_table + +Get information on an EC2 Transit Gateway Route Table. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_route_table.DataAwsEc2TransitGatewayRouteTable(self, "example", + filter=[DataAwsEc2TransitGatewayRouteTableFilter( + name="default-association-route-table", + values=["true"] + ), DataAwsEc2TransitGatewayRouteTableFilter( + name="transit-gateway-id", + values=["tgw-12345678"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_route_table.DataAwsEc2TransitGatewayRouteTable(self, "example", + id="tgw-rtb-12345678" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway Route Table. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Route Table ARN. 
+* `default_association_route_table` - Boolean whether this is the default association route table for the EC2 Transit Gateway +* `default_propagation_route_table` - Boolean whether this is the default propagation route table for the EC2 Transit Gateway +* `id` - EC2 Transit Gateway Route Table identifier +* `transit_gateway_id` - EC2 Transit Gateway identifier +* `tags` - Key-value tags for the EC2 Transit Gateway Route Table + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_route_table_associations.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_route_table_associations.html.markdown new file mode 100644 index 00000000000..7f8572ce6b2 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_route_table_associations.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_associations" +description: |- + Provides information for multiple EC2 Transit Gateway Route Table Associations +--- + +# Data Source: aws_ec2_transit_gateway_route_table_associations + +Provides information for multiple EC2 Transit Gateway Route Table Associations, such as their identifiers. + +## Example Usage + +### By Transit Gateway Identifier + +```terraform +data "aws_ec2_transit_gateway_route_table_associations" "example" { + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. + +The following arguments are optional: + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetTransitGatewayRouteTableAssociations.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Transit Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of Transit Gateway Route Table Association identifiers. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_route_table_propagations.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_route_table_propagations.html.markdown new file mode 100644 index 00000000000..957a13f48bc --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_route_table_propagations.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_propagations" +description: |- + Provides information for multiple EC2 Transit Gateway Route Table Propagations +--- + +# Data Source: aws_ec2_transit_gateway_route_table_propagations + +Provides information for multiple EC2 Transit Gateway Route Table Propagations, such as their identifiers. + +## Example Usage + +### By Transit Gateway Identifier + +```terraform +data "aws_ec2_transit_gateway_route_table_propagations" "example" { + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. + +The following arguments are optional: + +* `filter` - (Optional) Custom filter block as described below.
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetTransitGatewayRouteTablePropagations.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Transit Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of Transit Gateway Route Table Propagation identifiers. + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_route_tables.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_route_tables.html.markdown new file mode 100644 index 00000000000..d00ff15c5f7 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_route_tables.html.markdown @@ -0,0 +1,56 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_tables" +description: |- + Provides information for multiple EC2 Transit Gateway Route Tables +--- + +# Data Source: aws_ec2_transit_gateway_route_tables + +Provides information for multiple EC2 Transit Gateway Route Tables, such as their identifiers. + +## Example Usage + +The following shows outputting all Transit Gateway Route Table IDs. + +```terraform +data "aws_ec2_transit_gateway_route_tables" "example" {} + +output "example" { + value = data.aws_ec2_transit_gateway_route_tables.example.ids +} +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) Custom filter block as described below. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired transit gateway route table.
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayRouteTables.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Transit Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of Transit Gateway Route Table identifiers. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_vpc_attachment.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_vpc_attachment.html.markdown new file mode 100644 index 00000000000..c113fcf3487 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_vpc_attachment.html.markdown @@ -0,0 +1,83 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachment" +description: |- + Get information on an EC2 Transit Gateway VPC Attachment +--- + +# Data Source: aws_ec2_transit_gateway_vpc_attachment + +Get information on an EC2 Transit Gateway VPC Attachment. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_vpc_attachment.DataAwsEc2TransitGatewayVpcAttachment(self, "example", + filter=[DataAwsEc2TransitGatewayVpcAttachmentFilter( + name="vpc-id", + values=["vpc-12345678"] + ) + ] + ) +``` + +### By Identifier + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_vpc_attachment.DataAwsEc2TransitGatewayVpcAttachment(self, "example", + id="tgw-attach-12345678" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway VPC Attachment. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `appliance_mode_support` - Whether Appliance Mode support is enabled. +* `dns_support` - Whether DNS support is enabled. +* `id` - EC2 Transit Gateway VPC Attachment identifier +* `ipv6_support` - Whether IPv6 support is enabled. +* `subnet_ids` - Identifiers of EC2 Subnets. +* `transit_gateway_id` - EC2 Transit Gateway identifier +* `tags` - Key-value tags for the EC2 Transit Gateway VPC Attachment +* `vpc_id` - Identifier of EC2 VPC. +* `vpc_owner_id` - Identifier of the AWS account that owns the EC2 VPC. 
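+ +For readers working in plain Terraform rather than CDKTF, the same lookup can be sketched in HCL (the VPC identifier below is a placeholder): + +```terraform +# Look up the attachment for a given VPC; "vpc-12345678" is a placeholder ID. +data "aws_ec2_transit_gateway_vpc_attachment" "example" { + filter { + name = "vpc-id" + values = ["vpc-12345678"] + } +} + +output "attachment_subnet_ids" { + value = data.aws_ec2_transit_gateway_vpc_attachment.example.subnet_ids +} +```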
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_vpc_attachments.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_vpc_attachments.html.markdown new file mode 100644 index 00000000000..009c280d325 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_vpc_attachments.html.markdown @@ -0,0 +1,74 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachments" +description: |- + Get information on EC2 Transit Gateway VPC Attachments +--- + +# Data Source: aws_ec2_transit_gateway_vpc_attachments + +Get information on EC2 Transit Gateway VPC Attachments. + +## Example Usage + +### By Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + data_aws_ec2_transit_gateway_vpc_attachments_filtered = + aws.data_aws_ec2_transit_gateway_vpc_attachments.DataAwsEc2TransitGatewayVpcAttachments(self, "filtered", + filter=[DataAwsEc2TransitGatewayVpcAttachmentsFilter( + name="state", + values=["pendingAcceptance"] + ) + ] + ) + # In most cases loops should be handled in the programming language context and + # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input + # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source + # you need to keep this like it is. 
+ data_aws_ec2_transit_gateway_vpc_attachment_unit_count = \ + cdktf.TerraformCount.of( + cdktf.Fn.length_of(data_aws_ec2_transit_gateway_vpc_attachments_filtered.ids)) + aws.data_aws_ec2_transit_gateway_vpc_attachment.DataAwsEc2TransitGatewayVpcAttachment(self, "unit", + id=cdktf.Token.as_string( + cdktf.property_access(data_aws_ec2_transit_gateway_vpc_attachments_filtered.ids, [data_aws_ec2_transit_gateway_vpc_attachment_unit_count.index])), + count=data_aws_ec2_transit_gateway_vpc_attachment_unit_count + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. Check the available values in the [official documentation][1]. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `ids` - A list of all attachment IDs matching the filter. You can retrieve more information about each attachment using the [aws_ec2_transit_gateway_vpc_attachment][2] data source, searching by identifier.
+ +[1]: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayVpcAttachments.html +[2]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_transit_gateway_vpc_attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/d/ec2_transit_gateway_vpn_attachment.html.markdown b/website/docs/cdktf/python/d/ec2_transit_gateway_vpn_attachment.html.markdown new file mode 100644 index 00000000000..4f3dde31294 --- /dev/null +++ b/website/docs/cdktf/python/d/ec2_transit_gateway_vpn_attachment.html.markdown @@ -0,0 +1,75 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpn_attachment" +description: |- + Get information on an EC2 Transit Gateway VPN Attachment +--- + +# Data Source: aws_ec2_transit_gateway_vpn_attachment + +Get information on an EC2 Transit Gateway VPN Attachment. + +-> EC2 Transit Gateway VPN Attachments are implicitly created by VPN Connections referencing an EC2 Transit Gateway so there is no managed resource. For ease, the [`aws_vpn_connection` resource](/docs/providers/aws/r/vpn_connection.html) includes a `transit_gateway_attachment_id` attribute which can replace some usage of this data source. For tagging the attachment, see the [`aws_ec2_tag` resource](/docs/providers/aws/r/ec2_tag.html). + +## Example Usage + +### By Transit Gateway and VPN Connection Identifiers + +```terraform +data "aws_ec2_transit_gateway_vpn_attachment" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpn_connection_id = aws_vpn_connection.example.id +} +``` + +### Filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.data_aws_ec2_transit_gateway_vpn_attachment.DataAwsEc2TransitGatewayVpnAttachment(self, "test", + filter=[DataAwsEc2TransitGatewayVpnAttachmentFilter( + name="resource-id", + values=["some-resource"] + ) + ] + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_id` - (Optional) Identifier of the EC2 Transit Gateway. +* `vpn_connection_id` - (Optional) Identifier of the EC2 VPN Connection. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired Transit Gateway VPN Attachment. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the [EC2 DescribeTransitGatewayAttachments API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html). +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. 
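+ +As a sketch of combining the arguments above in plain HCL, a lookup by filter plus tags might look like this (the tag value is a placeholder): + +```terraform +# Select the VPN attachment on a given transit gateway that carries a specific Name tag. +data "aws_ec2_transit_gateway_vpn_attachment" "tagged" { + filter { + name = "transit-gateway-id" + values = [aws_ec2_transit_gateway.example.id] + } + + tags = { + Name = "example-vpn-attachment" # placeholder tag value + } +} +```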
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway VPN Attachment identifier +* `tags` - Key-value tags for the EC2 Transit Gateway VPN Attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_availability_zone_group.html.markdown b/website/docs/cdktf/python/r/ec2_availability_zone_group.html.markdown new file mode 100644 index 00000000000..1b23003c61b --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_availability_zone_group.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_availability_zone_group" +description: |- + Manages an EC2 Availability Zone Group. +--- + +# Resource: aws_ec2_availability_zone_group + +Manages an EC2 Availability Zone Group, such as updating its opt-in status. + +~> **NOTE:** This is an advanced Terraform resource. Terraform will automatically assume management of the EC2 Availability Zone Group without import and perform no actions on removal from configuration. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.ec2_availability_zone_group.Ec2AvailabilityZoneGroup(self, "example", + group_name="us-west-2-lax-1", + opt_in_status="opted-in" + ) +``` + +## Argument Reference + +The following arguments are required: + +* `group_name` - (Required) Name of the Availability Zone Group. +* `opt_in_status` - (Required) Indicates whether to enable or disable Availability Zone Group. 
Valid values: `opted-in` or `not-opted-in`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Name of the Availability Zone Group. + +## Import + +EC2 Availability Zone Groups can be imported using the group name, e.g., + +``` +$ terraform import aws_ec2_availability_zone_group.example us-west-2-lax-1 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown b/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown new file mode 100644 index 00000000000..fea396044a8 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_capacity_reservation.html.markdown @@ -0,0 +1,67 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_capacity_reservation" +description: |- + Provides an EC2 Capacity Reservation. This allows you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. +--- + +# Resource: aws_ec2_capacity_reservation + +Provides an EC2 Capacity Reservation. This allows you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.ec2_capacity_reservation.Ec2CapacityReservation(self, "default", + availability_zone="eu-west-1a", + instance_count=1, + instance_platform="Linux/UNIX", + instance_type="t2.micro" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `availability_zone` - (Required) The Availability Zone in which to create the Capacity Reservation. 
+* `ebs_optimized` - (Optional) Indicates whether the Capacity Reservation supports EBS-optimized instances. +* `end_date` - (Optional) The date and time at which the Capacity Reservation expires. When a Capacity Reservation expires, the reserved capacity is released and you can no longer launch instances into it. Valid values: [RFC3339 time string](https://tools.ietf.org/html/rfc3339#section-5.8) (`YYYY-MM-DDTHH:MM:SSZ`) +* `end_date_type` - (Optional) Indicates the way in which the Capacity Reservation ends. Specify either `unlimited` or `limited`. +* `ephemeral_storage` - (Optional) Indicates whether the Capacity Reservation supports instances with temporary, block-level storage. +* `instance_count` - (Required) The number of instances for which to reserve capacity. +* `instance_match_criteria` - (Optional) Indicates the type of instance launches that the Capacity Reservation accepts. Specify either `open` or `targeted`. +* `instance_platform` - (Required) The type of operating system for which to reserve capacity. Valid options are `Linux/UNIX`, `Red Hat Enterprise Linux`, `SUSE Linux`, `Windows`, `Windows with SQL Server`, `Windows with SQL Server Enterprise`, `Windows with SQL Server Standard` or `Windows with SQL Server Web`. +* `instance_type` - (Required) The instance type for which to reserve capacity. +* `outpost_arn` - (Optional) The Amazon Resource Name (ARN) of the Outpost on which to create the Capacity Reservation. +* `placement_group_arn` - (Optional) The Amazon Resource Name (ARN) of the cluster placement group in which to create the Capacity Reservation. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `tenancy` - (Optional) Indicates the tenancy of the Capacity Reservation. 
Specify either `default` or `dedicated`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The Capacity Reservation ID. +* `owner_id` - The ID of the AWS account that owns the Capacity Reservation. +* `arn` - The ARN of the Capacity Reservation. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) + +## Import + +Capacity Reservations can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_capacity_reservation.web cr-0123456789abcdef0 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_carrier_gateway.html.markdown b/website/docs/cdktf/python/r/ec2_carrier_gateway.html.markdown new file mode 100644 index 00000000000..93772d19dda --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_carrier_gateway.html.markdown @@ -0,0 +1,50 @@ +--- +subcategory: "Wavelength" +layout: "aws" +page_title: "AWS: aws_ec2_carrier_gateway" +description: |- + Manages an EC2 Carrier Gateway. +--- + +# Resource: aws_ec2_carrier_gateway + +Manages an EC2 Carrier Gateway. See the AWS [documentation](https://docs.aws.amazon.com/vpc/latest/userguide/Carrier_Gateway.html) for more information. + +## Example Usage + +```terraform +resource "aws_ec2_carrier_gateway" "example" { + vpc_id = aws_vpc.example.id + + tags = { + Name = "example-carrier-gateway" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
+* `vpc_id` - (Required) The ID of the VPC to associate with the carrier gateway. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the carrier gateway. +* `id` - The ID of the carrier gateway. +* `owner_id` - The AWS account ID of the owner of the carrier gateway. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`aws_ec2_carrier_gateway` can be imported using the carrier gateway's ID, +e.g., + +``` +$ terraform import aws_ec2_carrier_gateway.example cgw-12345 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_client_vpn_authorization_rule.html.markdown b/website/docs/cdktf/python/r/ec2_client_vpn_authorization_rule.html.markdown new file mode 100644 index 00000000000..132561ee3c9 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_client_vpn_authorization_rule.html.markdown @@ -0,0 +1,57 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_authorization_rule" +description: |- + Provides authorization rules for AWS Client VPN endpoints. +--- + +# Resource: aws_ec2_client_vpn_authorization_rule + +Provides authorization rules for AWS Client VPN endpoints. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). + +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_authorization_rule" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + target_network_cidr = aws_subnet.example.cidr_block + authorize_all_groups = true +} +``` + +## Argument Reference + +The following arguments are supported: + +* `client_vpn_endpoint_id` - (Required) The ID of the Client VPN endpoint. 
+* `target_network_cidr` - (Required) The IPv4 address range, in CIDR notation, of the network to which the authorization rule applies. +* `access_group_id` - (Optional) The ID of the group to which the authorization rule grants access. One of `access_group_id` or `authorize_all_groups` must be set. +* `authorize_all_groups` - (Optional) Indicates whether the authorization rule grants access to all clients. One of `access_group_id` or `authorize_all_groups` must be set. +* `description` - (Optional) A brief description of the authorization rule. + +## Attributes Reference + +No additional attributes are exported. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +AWS Client VPN authorization rules can be imported using the endpoint ID and target network CIDR. If the rule grants access to a specific group, the group name is included as well. All values are separated by a `,`. + +``` +$ terraform import aws_ec2_client_vpn_authorization_rule.example cvpn-endpoint-0ac3a1abbccddd666,10.1.0.0/24 +``` + +``` +$ terraform import aws_ec2_client_vpn_authorization_rule.example cvpn-endpoint-0ac3a1abbccddd666,10.1.0.0/24,team-a +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_client_vpn_endpoint.html.markdown b/website/docs/cdktf/python/r/ec2_client_vpn_endpoint.html.markdown new file mode 100644 index 00000000000..d950ed10bd2 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_client_vpn_endpoint.html.markdown @@ -0,0 +1,101 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_endpoint" +description: |- + Provides an AWS Client VPN endpoint for OpenVPN clients. +--- + +# Resource: aws_ec2_client_vpn_endpoint + +Provides an AWS Client VPN endpoint for OpenVPN clients.
For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). + +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_endpoint" "example" { + description = "terraform-clientvpn-example" + server_certificate_arn = aws_acm_certificate.cert.arn + client_cidr_block = "10.0.0.0/16" + + authentication_options { + type = "certificate-authentication" + root_certificate_chain_arn = aws_acm_certificate.root_cert.arn + } + + connection_log_options { + enabled = true + cloudwatch_log_group = aws_cloudwatch_log_group.lg.name + cloudwatch_log_stream = aws_cloudwatch_log_stream.ls.name + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `authentication_options` - (Required) Information about the authentication method to be used to authenticate clients. +* `client_cidr_block` - (Required) The IPv4 address range, in CIDR notation, from which to assign client IP addresses. The address range cannot overlap with the local CIDR of the VPC in which the associated subnet is located, or the routes that you add manually. The address range cannot be changed after the Client VPN endpoint has been created. The CIDR block should be /22 or greater. +* `client_connect_options` - (Optional) The options for managing connection authorization for new client connections. +* `client_login_banner_options` - (Optional) Options for enabling a customizable text banner that will be displayed on AWS provided clients when a VPN session is established. +* `connection_log_options` - (Required) Information about the client connection logging options. +* `description` - (Optional) A brief description of the Client VPN endpoint. +* `dns_servers` - (Optional) Information about the DNS servers to be used for DNS resolution. A Client VPN endpoint can have up to two DNS servers. If no DNS server is specified, the DNS address of the connecting device is used. 
+* `security_group_ids` - (Optional) The IDs of one or more security groups to apply to the target network. You must also specify the ID of the VPC that contains the security groups. +* `self_service_portal` - (Optional) Specify whether to enable the self-service portal for the Client VPN endpoint. Values can be `enabled` or `disabled`. Default value is `disabled`. +* `server_certificate_arn` - (Required) The ARN of the ACM server certificate. +* `session_timeout_hours` - (Optional) The maximum session duration is a trigger by which end-users are required to re-authenticate prior to establishing a VPN session. Default value is `24` - Valid values: `8 | 10 | 12 | 24` +* `split_tunnel` - (Optional) Indicates whether split-tunnel is enabled on VPN endpoint. Default value is `false`. +* `tags` - (Optional) A mapping of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transport_protocol` - (Optional) The transport protocol to be used by the VPN session. Default value is `udp`. +* `vpc_id` - (Optional) The ID of the VPC to associate with the Client VPN endpoint. If no security group IDs are specified in the request, the default security group for the VPC is applied. +* `vpn_port` - (Optional) The port number for the Client VPN endpoint. Valid values are `443` and `1194`. Default value is `443`. + +### `authentication_options` Argument Reference + +One of the following arguments must be supplied: + +* `active_directory_id` - (Optional) The ID of the Active Directory to be used for authentication if type is `directory-service-authentication`. +* `root_certificate_chain_arn` - (Optional) The ARN of the client certificate. 
The certificate must be signed by a certificate authority (CA) and it must be provisioned in AWS Certificate Manager (ACM). Only necessary when type is set to `certificate-authentication`. +* `saml_provider_arn` - (Optional) The ARN of the IAM SAML identity provider if type is `federated-authentication`. +* `self_service_saml_provider_arn` - (Optional) The ARN of the IAM SAML identity provider for the self service portal if type is `federated-authentication`. +* `type` - (Required) The type of client authentication to be used. Specify `certificate-authentication` to use certificate-based authentication, `directory-service-authentication` to use Active Directory authentication, or `federated-authentication` to use Federated Authentication via SAML 2.0. + +### `client_connect_options` Argument reference + +* `enabled` - (Optional) Indicates whether client connect options are enabled. The default is `false` (not enabled). +* `lambda_function_arn` - (Optional) The Amazon Resource Name (ARN) of the Lambda function used for connection authorization. + +### `client_login_banner_options` Argument reference + +* `banner_text` - (Optional) Customizable text that will be displayed in a banner on AWS provided clients when a VPN session is established. UTF-8 encoded characters only. Maximum of 1400 characters. +* `enabled` - (Optional) Enable or disable a customizable text banner that will be displayed on AWS provided clients when a VPN session is established. The default is `false` (not enabled). + +### `connection_log_options` Argument Reference + +One of the following arguments must be supplied: + +* `cloudwatch_log_group` - (Optional) The name of the CloudWatch Logs log group. +* `cloudwatch_log_stream` - (Optional) The name of the CloudWatch Logs log stream to which the connection data is published. +* `enabled` - (Required) Indicates whether connection logging is enabled. 
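+ +The authentication types described above differ mainly in which ARN they require. As a hedged sketch, a federated (SAML) endpoint might be configured as follows, assuming the ACM certificate and IAM SAML provider resources are defined elsewhere in the configuration: + +```terraform +resource "aws_ec2_client_vpn_endpoint" "federated" { + description = "terraform-clientvpn-saml-example" + server_certificate_arn = aws_acm_certificate.cert.arn # assumed to exist elsewhere + client_cidr_block = "10.20.0.0/22" # must be /22 or larger + + authentication_options { + type = "federated-authentication" + saml_provider_arn = aws_iam_saml_provider.example.arn # assumed to exist elsewhere + } + + connection_log_options { + enabled = false + } +} +```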
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the Client VPN endpoint. +* `dns_name` - The DNS name to be used by clients when establishing their VPN session. +* `id` - The ID of the Client VPN endpoint. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +AWS Client VPN endpoints can be imported using the `id` value found via `aws ec2 describe-client-vpn-endpoints`, e.g., + +``` +$ terraform import aws_ec2_client_vpn_endpoint.example cvpn-endpoint-0ac3a1abbccddd666 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_client_vpn_network_association.html.markdown b/website/docs/cdktf/python/r/ec2_client_vpn_network_association.html.markdown new file mode 100644 index 00000000000..089c77dbb88 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_client_vpn_network_association.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_network_association" +description: |- + Provides network associations for AWS Client VPN endpoints. +--- + +# Resource: aws_ec2_client_vpn_network_association + +Provides network associations for AWS Client VPN endpoints. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). + +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_network_association" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + subnet_id = aws_subnet.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `client_vpn_endpoint_id` - (Required) The ID of the Client VPN endpoint. 
+* `subnet_id` - (Required) The ID of the subnet to associate with the Client VPN endpoint. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The unique ID of the target network association. +* `association_id` - The unique ID of the target network association. +* `vpc_id` - The ID of the VPC in which the target subnet is located. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `30m`) +- `delete` - (Default `30m`) + +## Import + +AWS Client VPN network associations can be imported using the endpoint ID and the association ID. Values are separated by a `,`. + +``` +$ terraform import aws_ec2_client_vpn_network_association.example cvpn-endpoint-0ac3a1abbccddd666,vpn-assoc-0b8db902465d069ad +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_client_vpn_route.html.markdown b/website/docs/cdktf/python/r/ec2_client_vpn_route.html.markdown new file mode 100644 index 00000000000..eed51fdd474 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_client_vpn_route.html.markdown @@ -0,0 +1,76 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_route" +description: |- + Provides additional routes for AWS Client VPN endpoints. +--- + +# Resource: aws_ec2_client_vpn_route + +Provides additional routes for AWS Client VPN endpoints. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). 
+ +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_route" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + destination_cidr_block = "0.0.0.0/0" + target_vpc_subnet_id = aws_ec2_client_vpn_network_association.example.subnet_id +} + +resource "aws_ec2_client_vpn_network_association" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + subnet_id = aws_subnet.example.id +} + +resource "aws_ec2_client_vpn_endpoint" "example" { + description = "Example Client VPN endpoint" + server_certificate_arn = aws_acm_certificate.example.arn + client_cidr_block = "10.0.0.0/16" + + authentication_options { + type = "certificate-authentication" + root_certificate_chain_arn = aws_acm_certificate.example.arn + } + + connection_log_options { + enabled = false + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `client_vpn_endpoint_id` - (Required) The ID of the Client VPN endpoint. +* `destination_cidr_block` - (Required) The IPv4 address range, in CIDR notation, of the route destination. +* `description` - (Optional) A brief description of the route. +* `target_vpc_subnet_id` - (Required) The ID of the Subnet to route the traffic through. It must already be attached to the Client VPN. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the Client VPN endpoint. +* `origin` - Indicates how the Client VPN route was added. Will be `add-route` for routes created by this resource. +* `type` - The type of the route. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `4m`) +- `delete` - (Default `4m`) + +## Import + +AWS Client VPN routes can be imported using the endpoint ID, target subnet ID, and destination CIDR block. All values are separated by a `,`. 
+ +``` +$ terraform import aws_ec2_client_vpn_route.example cvpn-endpoint-1234567890abcdef,subnet-9876543210fedcba,10.1.0.0/24 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_fleet.html.markdown b/website/docs/cdktf/python/r/ec2_fleet.html.markdown new file mode 100644 index 00000000000..2cac1b0a692 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_fleet.html.markdown @@ -0,0 +1,234 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_fleet" +description: |- + Provides a resource to manage EC2 Fleets +--- + +# Resource: aws_ec2_fleet + +Provides a resource to manage EC2 Fleets. + +## Example Usage + +```terraform +resource "aws_ec2_fleet" "example" { + launch_template_config { + launch_template_specification { + launch_template_id = aws_launch_template.example.id + version = aws_launch_template.example.latest_version + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 5 + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `context` - (Optional) Reserved. +* `excess_capacity_termination_policy` - (Optional) Whether running instances should be terminated if the total target capacity of the EC2 Fleet is decreased below the current size of the EC2 Fleet. Valid values: `no-termination`, `termination`. Defaults to `termination`. Supported only for fleets of type `maintain`. +* `launch_template_config` - (Required) Nested argument containing EC2 Launch Template configurations. Defined below. +* `on_demand_options` - (Optional) Nested argument containing On-Demand configurations. Defined below. +* `replace_unhealthy_instances` - (Optional) Whether EC2 Fleet should replace unhealthy instances. Defaults to `false`. Supported only for fleets of type `maintain`. +* `spot_options` - (Optional) Nested argument containing Spot configurations. Defined below. +* `tags` - (Optional) Map of Fleet tags.
To tag instances at launch, specify the tags in the Launch Template. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `target_capacity_specification` - (Required) Nested argument containing target capacity configurations. Defined below. +* `terminate_instances` - (Optional) Whether to terminate instances for an EC2 Fleet if it is deleted successfully. Defaults to `false`. +* `terminate_instances_with_expiration` - (Optional) Whether running instances should be terminated when the EC2 Fleet expires. Defaults to `false`. +* `type` - (Optional) The type of request. Indicates whether the EC2 Fleet only requests the target capacity, or also attempts to maintain it. Valid values: `maintain`, `request`, `instant`. Defaults to `maintain`. +* `valid_from` - (Optional) The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately. +* `valid_until` - (Optional) The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new EC2 Fleet requests are placed or able to fulfill the request. If no value is specified, the request remains until you cancel it. + +### launch_template_config + +Describes a launch template and overrides. + +* `launch_template_specification` - (Optional) Nested argument containing EC2 Launch Template to use. Defined below. +* `override` - (Optional) Nested argument(s) containing parameters to override the same parameters in the Launch Template. Defined below. + +#### launch_template_specification + +The launch template to use. You must specify either the launch template ID or launch template name in the request. + +* `launch_template_id` - (Optional) The ID of the launch template. 
+* `launch_template_name` - (Optional) The name of the launch template. +* `version` - (Required) The launch template version number, `$Latest`, or `$Default`. + +#### override + +Any parameters that you specify override the same parameters in the launch template. For fleets of type `request` and `maintain`, a maximum of 300 items is allowed across all launch templates. + +Example: + +```terraform +resource "aws_ec2_fleet" "example" { + # ... other configuration ... + + launch_template_config { + # ... other configuration ... + + override { + instance_type = "m4.xlarge" + weighted_capacity = 1 + } + + override { + instance_type = "m4.2xlarge" + weighted_capacity = 2 + } + } +} +``` + +* `availability_zone` - (Optional) Availability Zone in which to launch the instances. +* `instance_requirements` - (Optional) Override the instance type in the Launch Template with instance types that satisfy the requirements. +* `instance_type` - (Optional) Instance type. +* `max_price` - (Optional) Maximum price per unit hour that you are willing to pay for a Spot Instance. +* `priority` - (Optional) Priority for the launch template override. If `on_demand_options` `allocation_strategy` is set to `prioritized`, EC2 Fleet uses priority to determine which launch template override to use first in fulfilling On-Demand capacity. The highest priority is launched first. The lower the number, the higher the priority. If no number is set, the launch template override has the lowest priority. Valid values are whole numbers starting at 0. +* `subnet_id` - (Optional) ID of the subnet in which to launch the instances. +* `weighted_capacity` - (Optional) Number of units provided by the specified instance type. + +##### instance_requirements + +The attributes for the instance types. For a list of currently supported values, please see [`InstanceRequirementsRequest`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_InstanceRequirementsRequest.html).
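Example (a minimal sketch; the attribute values are illustrative only):

```terraform
resource "aws_ec2_fleet" "example" {
  # ... other configuration ...

  launch_template_config {
    # ... other configuration ...

    override {
      instance_requirements {
        memory_mib {
          min = 8192
        }

        vcpu_count {
          min = 2
          max = 8
        }
      }
    }
  }
}
```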
+ +This configuration block supports the following: + +~> **NOTE:** Both `memory_mib.min` and `vcpu_count.min` must be specified. + +* `accelerator_count` - (Optional) Block describing the minimum and maximum number of accelerators (GPUs, FPGAs, or AWS Inferentia chips). Default is no minimum or maximum limits. + * `min` - (Optional) Minimum. + * `max` - (Optional) Maximum. Set to `0` to exclude instance types with accelerators. +* `accelerator_manufacturers` - (Optional) List of accelerator manufacturer names. Default is any manufacturer. +* `accelerator_names` - (Optional) List of accelerator names. Default is any accelerator. +* `accelerator_total_memory_mib` - (Optional) Block describing the minimum and maximum total memory of the accelerators. Default is no minimum or maximum. + * `min` - (Optional) The minimum amount of accelerator memory, in MiB. To specify no minimum limit, omit this parameter. + * `max` - (Optional) The maximum amount of accelerator memory, in MiB. To specify no maximum limit, omit this parameter. +* `accelerator_types` - (Optional) The accelerator types that must be on the instance type. Default is any accelerator type. +* `allowed_instance_types` - (Optional) The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes. You can use strings with one or more wild cards, represented by an asterisk (\*). The following are examples: `c5*`, `m5a.*`, `r*`, `*3*`. For example, if you specify `c5*`, you are allowing the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are allowing all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is any instance type. + + If you specify `AllowedInstanceTypes`, you can't specify `ExcludedInstanceTypes`.
+ +* `bare_metal` - (Optional) Indicate whether bare metal instance types should be `included`, `excluded`, or `required`. Default is `excluded`. +* `baseline_ebs_bandwidth_mbps` - (Optional) Block describing the minimum and maximum baseline EBS bandwidth, in Mbps. Default is no minimum or maximum. + * `min` - (Optional) The minimum baseline bandwidth, in Mbps. To specify no minimum limit, omit this parameter. + * `max` - (Optional) The maximum baseline bandwidth, in Mbps. To specify no maximum limit, omit this parameter. +* `burstable_performance` - (Optional) Indicates whether burstable performance T instance types are `included`, `excluded`, or `required`. Default is `excluded`. +* `cpu_manufacturers` - (Optional) The CPU manufacturers to include. Default is any manufacturer. + ~> **NOTE:** Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template. +* `excluded_instance_types` - (Optional) The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (\*). The following are examples: `c5*`, `m5a.*`, `r*`, `*3*`. For example, if you specify `c5*`, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are excluding all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is no excluded instance types. + + If you specify `AllowedInstanceTypes`, you can't specify `ExcludedInstanceTypes`. + +* `instance_generations` - (Optional) Indicates whether current or previous generation instance types are included. The current generation instance types are recommended for use. Valid values are `current` and `previous`. Default is `current` and `previous` generation instance types.
+* `local_storage` - (Optional) Indicate whether instance types with local storage volumes are `included`, `excluded`, or `required`. Default is `included`. +* `local_storage_types` - (Optional) List of local storage type names. Valid values are `hdd` and `ssd`. Default is any storage type. +* `memory_gib_per_vcpu` - (Optional) Block describing the minimum and maximum amount of memory (GiB) per vCPU. Default is no minimum or maximum. + * `min` - (Optional) The minimum amount of memory per vCPU, in GiB. To specify no minimum limit, omit this parameter. + * `max` - (Optional) The maximum amount of memory per vCPU, in GiB. To specify no maximum limit, omit this parameter. +* `memory_mib` - (Required) Block describing the minimum and maximum amount of memory, in MiB. Default is no maximum. + * `min` - (Required) The minimum amount of memory, in MiB. To specify no minimum limit, specify `0`. + * `max` - (Optional) The maximum amount of memory, in MiB. To specify no maximum limit, omit this parameter. +* `network_bandwidth_gbps` - (Optional) The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps). Default is no minimum or maximum. + * `min` - (Optional) The minimum amount of network bandwidth, in Gbps. To specify no minimum limit, omit this parameter. + * `max` - (Optional) The maximum amount of network bandwidth, in Gbps. To specify no maximum limit, omit this parameter. +* `network_interface_count` - (Optional) Block describing the minimum and maximum number of network interfaces. Default is no minimum or maximum. + * `min` - (Optional) The minimum number of network interfaces. To specify no minimum limit, omit this parameter. + * `max` - (Optional) The maximum number of network interfaces. To specify no maximum limit, omit this parameter. +* `on_demand_max_price_percentage_over_lowest_price` - (Optional) The price protection threshold for On-Demand Instances.
This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999. Default is 20. + + If you set `target_capacity_unit_type` to `vcpu` or `memory-mib`, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price. + +* `require_hibernate_support` - (Optional) Indicate whether instance types must support On-Demand Instance Hibernation, either `true` or `false`. Default is `false`. +* `spot_max_price_percentage_over_lowest_price` - (Optional) The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999. Default is 100. + + If you set `target_capacity_unit_type` to `vcpu` or `memory-mib`, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price. + +* `total_local_storage_gb` - (Optional) Block describing the minimum and maximum total local storage (GB). Default is no minimum or maximum. + * `min` - (Optional) The minimum amount of total local storage, in GB. To specify no minimum limit, omit this parameter. + * `max` - (Optional) The maximum amount of total local storage, in GB.
To specify no maximum limit, omit this parameter. +* `vcpu_count` - (Required) Block describing the minimum and maximum number of vCPUs. Default is no maximum. + * `min` - (Required) The minimum number of vCPUs. To specify no minimum limit, specify `0`. + * `max` - (Optional) The maximum number of vCPUs. To specify no maximum limit, omit this parameter. + +### on_demand_options + +* `allocation_strategy` - (Optional) The order of the launch template overrides to use in fulfilling On-Demand capacity. Valid values: `lowestPrice`, `prioritized`. Default: `lowestPrice`. +* `capacity_reservation_options` - (Optional) The strategy for using unused Capacity Reservations for fulfilling On-Demand capacity. Supported only for fleets of type `instant`. + * `usage_strategy` - (Optional) Indicates whether to use unused Capacity Reservations for fulfilling On-Demand capacity. Valid values: `use-capacity-reservations-first`. +* `max_total_price` - (Optional) The maximum amount per hour for On-Demand Instances that you're willing to pay. +* `min_target_capacity` - (Optional) The minimum target capacity for On-Demand Instances in the fleet. If the minimum target capacity is not reached, the fleet launches no instances. Supported only for fleets of type `instant`. + If you specify `min_target_capacity`, at least one of the following must be specified: `single_availability_zone` or `single_instance_type`. + +* `single_availability_zone` - (Optional) Indicates that the fleet launches all On-Demand Instances into a single Availability Zone. Supported only for fleets of type `instant`. +* `single_instance_type` - (Optional) Indicates that the fleet uses a single instance type to launch all On-Demand Instances in the fleet. Supported only for fleets of type `instant`. + +### spot_options + +* `allocation_strategy` - (Optional) How to allocate the target capacity across the Spot pools.
Valid values: `diversified`, `lowestPrice`, `capacity-optimized`, `capacity-optimized-prioritized` and `price-capacity-optimized`. Default: `lowestPrice`. +* `instance_interruption_behavior` - (Optional) Behavior when a Spot Instance is interrupted. Valid values: `hibernate`, `stop`, `terminate`. Default: `terminate`. +* `instance_pools_to_use_count` - (Optional) Number of Spot pools across which to allocate your target Spot capacity. Valid only when Spot `allocation_strategy` is set to `lowestPrice`. Default: `1`. +* `maintenance_strategies` - (Optional) Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below. +* `max_total_price` - (Optional) The maximum amount per hour for Spot Instances that you're willing to pay. +* `min_target_capacity` - (Optional) The minimum target capacity for Spot Instances in the fleet. If the minimum target capacity is not reached, the fleet launches no instances. Supported only for fleets of type `instant`. +* `single_availability_zone` - (Optional) Indicates that the fleet launches all Spot Instances into a single Availability Zone. Supported only for fleets of type `instant`. +* `single_instance_type` - (Optional) Indicates that the fleet uses a single instance type to launch all Spot Instances in the fleet. Supported only for fleets of type `instant`. + +### maintenance_strategies + +* `capacity_rebalance` - (Optional) Nested argument containing the capacity rebalance for your fleet request. Defined below. + +### capacity_rebalance + +* `replacement_strategy` - (Optional) The replacement strategy to use. Only available for fleets of `type` set to `maintain`. Valid values: `launch`. + +### target_capacity_specification + +* `default_target_capacity_type` - (Required) Default target capacity type. Valid values: `on-demand`, `spot`. +* `on_demand_target_capacity` - (Optional) The number of On-Demand units to request. 
+* `spot_target_capacity` - (Optional) The number of Spot units to request. +* `target_capacity_unit_type` - (Optional) The unit for the target capacity. + If you specify `target_capacity_unit_type`, `instance_requirements` must be specified. + +* `total_target_capacity` - (Required) The number of units to request, filled using `default_target_capacity_type`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Fleet identifier. +* `arn` - The ARN of the fleet. +* `fleet_instance_set` - Information about the instances that were launched by the fleet. Available only when `type` is set to `instant`. + * `instance_ids` - The IDs of the instances. + * `instance_type` - The instance type. + * `lifecycle` - Indicates if the instance that was launched is a Spot Instance or On-Demand Instance. + * `platform` - The value is `Windows` for Windows instances. Otherwise, the value is blank. +* `fleet_state` - The state of the EC2 Fleet. +* `fulfilled_capacity` - The number of units fulfilled by this request compared to the set target capacity. +* `fulfilled_on_demand_capacity` - The number of units fulfilled by this request compared to the set target On-Demand capacity. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
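For fleets of type `instant`, the launched instance IDs can be read back from `fleet_instance_set`, for example (a sketch that assumes the `aws_ec2_fleet.example` resource above):

```terraform
output "fleet_instance_ids" {
  value = aws_ec2_fleet.example.fleet_instance_set[0].instance_ids
}
```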
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `10m`) +* `update` - (Default `10m`) +* `delete` - (Default `10m`) + +## Import + +`aws_ec2_fleet` can be imported by using the Fleet identifier, e.g., + +``` +$ terraform import aws_ec2_fleet.example fleet-b9b55d27-c5fc-41ac-a6f3-48fcc91f080c +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_host.html.markdown b/website/docs/cdktf/python/r/ec2_host.html.markdown new file mode 100644 index 00000000000..46583ee083d --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_host.html.markdown @@ -0,0 +1,61 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_host" +description: |- + Provides an EC2 Host resource. This allows Dedicated Hosts to be allocated, modified, and released. +--- + +# Resource: aws_ec2_host + +Provides an EC2 Host resource. This allows Dedicated Hosts to be allocated, modified, and released. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.ec2_host.Ec2Host(self, "test", + auto_placement="on", + availability_zone="us-west-2a", + host_recovery="on", + instance_type="c5.18xlarge" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `auto_placement` - (Optional) Indicates whether the host accepts any untargeted instance launches that match its instance type configuration, or if it only accepts Host tenancy instance launches that specify its unique host ID. Valid values: `on`, `off`. Default: `on`. +* `availability_zone` - (Required) The Availability Zone in which to allocate the Dedicated Host. 
+* `host_recovery` - (Optional) Indicates whether to enable or disable host recovery for the Dedicated Host. Valid values: `on`, `off`. Default: `off`. +* `instance_family` - (Optional) Specifies the instance family to be supported by the Dedicated Hosts. If you specify an instance family, the Dedicated Hosts support multiple instance types within that instance family. Exactly one of `instance_family` or `instance_type` must be specified. +* `instance_type` - (Optional) Specifies the instance type to be supported by the Dedicated Hosts. If you specify an instance type, the Dedicated Hosts support instances of the specified instance type only. Exactly one of `instance_family` or `instance_type` must be specified. +* `outpost_arn` - (Optional) The Amazon Resource Name (ARN) of the AWS Outpost on which to allocate the Dedicated Host. +* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the allocated Dedicated Host. This is used to launch an instance onto a specific host. +* `arn` - The ARN of the Dedicated Host. +* `owner_id` - The ID of the AWS account that owns the Dedicated Host. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Import + +Hosts can be imported using the host `id`, e.g., + +``` +$ terraform import aws_ec2_host.example h-0385a99d0e4b20cbb +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_instance_state.html.markdown b/website/docs/cdktf/python/r/ec2_instance_state.html.markdown new file mode 100644 index 00000000000..01da15e39d7 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_instance_state.html.markdown @@ -0,0 +1,86 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_state" +description: |- + Provides an EC2 instance state resource. This allows managing an instance power state. +--- + +# Resource: aws_ec2_instance_state + +Provides an EC2 instance state resource. This allows managing an instance power state. + +~> **NOTE on Instance State Management:** AWS does not currently have an EC2 API operation to determine whether an instance has finished processing user data. As a result, this resource can interfere with user data processing. For example, this resource may stop an instance while the user data script is mid-run. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + data_aws_ami_ubuntu = aws.data_aws_ami.DataAwsAmi(self, "ubuntu", + filter=[DataAwsAmiFilter( + name="name", + values=["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] + ), DataAwsAmiFilter( + name="virtualization-type", + values=["hvm"] + ) + ], + most_recent=True, + owners=["099720109477"] + ) + aws_instance_test = aws.instance.Instance(self, "test", + ami=cdktf.Token.as_string(data_aws_ami_ubuntu.id), + instance_type="t3.micro", + tags={ + "Name": "HelloWorld" + } + ) + aws_ec2_instance_state_test = aws.ec2_instance_state.Ec2InstanceState(self, "test_2", + instance_id=cdktf.Token.as_string(aws_instance_test.id), + state="stopped" + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_ec2_instance_state_test.override_logical_id("test") +``` + +## Argument Reference + +The following arguments are required: + +* `instance_id` - (Required) ID of the instance. +* `state` - (Required) State of the instance. Valid values are `stopped`, `running`. + +The following arguments are optional: + +* `force` - (Optional) Whether to request a forced stop when `state` is `stopped`. Otherwise (_i.e._, `state` is `running`), ignored. When an instance is forced to stop, it does not flush file system caches or file system metadata, and you must subsequently perform file system check and repair. Not recommended for Windows instances. Defaults to `false`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - ID of the instance (matches `instance_id`).
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `10m`) +* `update` - (Default `10m`) +* `delete` - (Default `1m`) + +## Import + +`aws_ec2_instance_state` can be imported by using the `instance_id` attribute, e.g., + +``` +$ terraform import aws_ec2_instance_state.test i-02cae6557dfcf2f96 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_local_gateway_route.html.markdown b/website/docs/cdktf/python/r/ec2_local_gateway_route.html.markdown new file mode 100644 index 00000000000..815dec7e437 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_local_gateway_route.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route" +description: |- + Manages an EC2 Local Gateway Route +--- + +# Resource: aws_ec2_local_gateway_route + +Manages an EC2 Local Gateway Route. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). + +## Example Usage + +```terraform +resource "aws_ec2_local_gateway_route" "example" { + destination_cidr_block = "172.16.0.0/16" + local_gateway_route_table_id = data.aws_ec2_local_gateway_route_table.example.id + local_gateway_virtual_interface_group_id = data.aws_ec2_local_gateway_virtual_interface_group.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `destination_cidr_block` - (Required) IPv4 CIDR range used for destination matches. Routing decisions are based on the most specific match. +* `local_gateway_route_table_id` - (Required) Identifier of EC2 Local Gateway Route Table. +* `local_gateway_virtual_interface_group_id` - (Required) Identifier of EC2 Local Gateway Virtual Interface Group. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Local Gateway Route Table identifier and destination CIDR block separated by underscores (`_`) + +## Import + +`aws_ec2_local_gateway_route` can be imported by using the EC2 Local Gateway Route Table identifier and destination CIDR block separated by underscores (`_`), e.g., + +``` +$ terraform import aws_ec2_local_gateway_route.example lgw-rtb-12345678_172.16.0.0/16 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_local_gateway_route_table_vpc_association.html.markdown b/website/docs/cdktf/python/r/ec2_local_gateway_route_table_vpc_association.html.markdown new file mode 100644 index 00000000000..842b4f5b10b --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_local_gateway_route_table_vpc_association.html.markdown @@ -0,0 +1,68 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route_table_vpc_association" +description: |- + Manages an EC2 Local Gateway Route Table VPC Association +--- + +# Resource: aws_ec2_local_gateway_route_table_vpc_association + +Manages an EC2 Local Gateway Route Table VPC Association. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-local-gateways.html#vpc-associations). + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws_vpc_example = aws.vpc.Vpc(self, "example", + cidr_block="10.0.0.0/16" + ) + data_aws_ec2_local_gateway_route_table_example = aws.data_aws_ec2_local_gateway_route_table.DataAwsEc2LocalGatewayRouteTable(self, "example_1", + outpost_arn="arn:aws:outposts:us-west-2:123456789012:outpost/op-1234567890abcdef" + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + data_aws_ec2_local_gateway_route_table_example.override_logical_id("example") + aws_ec2_local_gateway_route_table_vpc_association_example = aws.ec2_local_gateway_route_table_vpc_association.Ec2LocalGatewayRouteTableVpcAssociation(self, "example_2", + local_gateway_route_table_id=cdktf.Token.as_string(data_aws_ec2_local_gateway_route_table_example.id), + vpc_id=cdktf.Token.as_string(aws_vpc_example.id) + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_ec2_local_gateway_route_table_vpc_association_example.override_logical_id("example") +``` + +## Argument Reference + +The following arguments are required: + +* `local_gateway_route_table_id` - (Required) Identifier of EC2 Local Gateway Route Table. +* `vpc_id` - (Required) Identifier of EC2 VPC. + +The following arguments are optional: + +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Identifier of EC2 Local Gateway Route Table VPC Association.
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`aws_ec2_local_gateway_route_table_vpc_association` can be imported by using the Local Gateway Route Table VPC Association identifier, e.g., + +``` +$ terraform import aws_ec2_local_gateway_route_table_vpc_association.example lgw-vpc-assoc-1234567890abcdef +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_managed_prefix_list.html.markdown b/website/docs/cdktf/python/r/ec2_managed_prefix_list.html.markdown new file mode 100644 index 00000000000..304664078e6 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_managed_prefix_list.html.markdown @@ -0,0 +1,84 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_managed_prefix_list" +description: |- + Provides a managed prefix list resource. +--- + +# Resource: aws_ec2_managed_prefix_list + +Provides a managed prefix list resource. + +~> **NOTE on Managed Prefix Lists and Managed Prefix List Entries:** Terraform +currently provides both a standalone [Managed Prefix List Entry resource](ec2_managed_prefix_list_entry.html) (a single entry), +and a Managed Prefix List resource with entries defined in-line. At this time you +cannot use a Managed Prefix List with in-line rules in conjunction with any Managed +Prefix List Entry resources. Doing so will cause a conflict of entries and will overwrite entries. + +~> **NOTE on `max_entries`:** When you reference a Prefix List in a resource, +the maximum number of entries for the prefix lists counts as the same number of rules +or entries for the resource. For example, if you create a prefix list with a maximum +of 20 entries and you reference that prefix list in a security group rule, this counts +as 20 rules for the security group. 
+ +## Example Usage + +Basic usage + +```terraform +resource "aws_ec2_managed_prefix_list" "example" { + name = "All VPC CIDR-s" + address_family = "IPv4" + max_entries = 5 + + entry { + cidr = aws_vpc.example.cidr_block + description = "Primary" + } + + entry { + cidr = aws_vpc_ipv4_cidr_block_association.example.cidr_block + description = "Secondary" + } + + tags = { + Env = "live" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `address_family` - (Required, Forces new resource) Address family (`IPv4` or `IPv6`) of this prefix list. +* `entry` - (Optional) Configuration block for prefix list entry. Detailed below. Different entries may have overlapping CIDR blocks, but a particular CIDR should not be duplicated. +* `max_entries` - (Required) Maximum number of entries that this prefix list can contain. +* `name` - (Required) Name of this resource. The name must not start with `com.amazonaws`. +* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +### `entry` + +* `cidr` - (Required) CIDR block of this entry. +* `description` - (Optional) Description of this entry. Due to API limitations, updating only the description of an existing entry requires temporarily removing and re-adding the entry. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the prefix list. +* `id` - ID of the prefix list. +* `owner_id` - ID of the AWS account that owns this prefix list. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+* `version` - Latest version of this prefix list. + +## Import + +Prefix Lists can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_managed_prefix_list.default pl-0570a1d2d725c16be +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_managed_prefix_list_entry.html.markdown b/website/docs/cdktf/python/r/ec2_managed_prefix_list_entry.html.markdown new file mode 100644 index 00000000000..2c2617cd846 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_managed_prefix_list_entry.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_managed_prefix_list_entry" +description: |- + Provides a managed prefix list entry resource. +--- + +# Resource: aws_ec2_managed_prefix_list_entry + +Provides a managed prefix list entry resource. + +~> **NOTE on Managed Prefix Lists and Managed Prefix List Entries:** Terraform +currently provides both a standalone Managed Prefix List Entry resource (a single entry), +and a [Managed Prefix List resource](ec2_managed_prefix_list.html) with entries defined +in-line. At this time you cannot use a Managed Prefix List with in-line rules in +conjunction with any Managed Prefix List Entry resources. Doing so will cause a conflict +of entries and will overwrite entries. + +~> **NOTE on Managed Prefix Lists with many entries:** To improve execution times on larger +updates, if you plan to create a prefix list with more than 100 entries, it is **recommended** +that you use the inline `entry` block as part of the [Managed Prefix List resource](ec2_managed_prefix_list.html) +instead.
+ +## Example Usage + +Basic usage + +```terraform +resource "aws_ec2_managed_prefix_list" "example" { + name = "All VPC CIDR-s" + address_family = "IPv4" + max_entries = 5 + + tags = { + Env = "live" + } +} + +resource "aws_ec2_managed_prefix_list_entry" "entry_1" { + cidr = aws_vpc.example.cidr_block + description = "Primary" + prefix_list_id = aws_ec2_managed_prefix_list.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `cidr` - (Required) CIDR block of this entry. +* `description` - (Optional) Description of this entry. Due to API limitations, updating only the description of an entry requires recreating the entry. +* `prefix_list_id` - (Required) ID of the prefix list to add this entry to. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - ID of the managed prefix list entry. + +## Import + +Prefix List Entries can be imported using the `prefix_list_id` and `cidr` separated by a `,`, e.g., + +``` +$ terraform import aws_ec2_managed_prefix_list_entry.default pl-0570a1d2d725c16be,10.0.3.0/24 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_network_insights_analysis.html.markdown b/website/docs/cdktf/python/r/ec2_network_insights_analysis.html.markdown new file mode 100644 index 00000000000..74ecc094ad2 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_network_insights_analysis.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_analysis" +description: |- + Provides a Network Insights Analysis resource. +--- + +# Resource: aws_ec2_network_insights_analysis + +Provides a Network Insights Analysis resource. Part of the "Reachability Analyzer" service in the AWS VPC console.
+ +## Example Usage + +```terraform +resource "aws_ec2_network_insights_path" "path" { + source = aws_network_interface.source.id + destination = aws_network_interface.destination.id + protocol = "tcp" +} + +resource "aws_ec2_network_insights_analysis" "analysis" { + network_insights_path_id = aws_ec2_network_insights_path.path.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `network_insights_path_id` - (Required) ID of the Network Insights Path to run an analysis on. + +The following arguments are optional: + +* `filter_in_arns` - (Optional) A list of ARNs for resources the path must traverse. +* `wait_for_completion` - (Optional) If enabled, the resource will wait for the Network Insights Analysis status to change to `succeeded` or `failed`. Setting this to `false` will skip the process. Default: `true`. +* `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `alternate_path_hints` - Potential intermediate components of a feasible path. Described below. +* `arn` - ARN of the Network Insights Analysis. +* `explanations` - Explanation codes for an unreachable path. See the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Explanation.html) for details. +* `forward_path_components` - The components in the path from source to destination. See the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_PathComponent.html) for details. +* `id` - ID of the Network Insights Analysis. +* `path_found` - Set to `true` if the destination was reachable. +* `return_path_components` - The components in the path from destination to source. 
See the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_PathComponent.html) for details. +* `start_date` - The date/time the analysis was started. +* `status` - The status of the analysis. `succeeded` means the analysis completed; it does not mean a path was found. See `path_found` for that. +* `status_message` - A message to provide more context when the `status` is `failed`. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). +* `warning_message` - The warning message. + +The `alternate_path_hints` object supports the following: + +* `component_arn` - The Amazon Resource Name (ARN) of the component. +* `component_id` - The ID of the component. + +## Import + +Network Insights Analyses can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_network_insights_analysis.test nia-0462085c957f11a55 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown b/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown new file mode 100644 index 00000000000..2b0b1ba32fa --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_network_insights_path.html.markdown @@ -0,0 +1,54 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_path" +description: |- + Provides a Network Insights Path resource. +--- + +# Resource: aws_ec2_network_insights_path + +Provides a Network Insights Path resource. Part of the "Reachability Analyzer" service in the AWS VPC console.
+ +## Example Usage + +```terraform +resource "aws_ec2_network_insights_path" "test" { + source = aws_network_interface.source.id + destination = aws_network_interface.destination.id + protocol = "tcp" +} +``` + +## Argument Reference + +The following arguments are required: + +* `source` - (Required) ID of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. +* `destination` - (Required) ID of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. +* `protocol` - (Required) Protocol to use for analysis. Valid options are `tcp` or `udp`. + +The following arguments are optional: + +* `source_ip` - (Optional) IP address of the source resource. +* `destination_ip` - (Optional) IP address of the destination resource. +* `destination_port` - (Optional) Destination port to analyze access to. +* `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the Network Insights Path. +* `id` - ID of the Network Insights Path. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Import + +Network Insights Paths can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_network_insights_path.test nip-00edfba169923aefd +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_serial_console_access.html.markdown b/website/docs/cdktf/python/r/ec2_serial_console_access.html.markdown new file mode 100644 index 00000000000..10586885f97 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_serial_console_access.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_serial_console_access" +description: |- + Manages whether serial console access is enabled for your AWS account in the current AWS region. +--- + +# Resource: aws_ec2_serial_console_access + +Provides a resource to manage whether serial console access is enabled for your AWS account in the current AWS region. + +~> **NOTE:** Removing this Terraform resource disables serial console access. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.ec2_serial_console_access.Ec2SerialConsoleAccess(self, "example", + enabled=True + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `enabled` - (Optional) Whether or not serial console access is enabled. Valid values are `true` or `false`. Defaults to `true`. + +## Attributes Reference + +No additional attributes are exported. 
+ +## Import + +Serial console access state can be imported, e.g., + +``` +$ terraform import aws_ec2_serial_console_access.example default +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_subnet_cidr_reservation.html.markdown b/website/docs/cdktf/python/r/ec2_subnet_cidr_reservation.html.markdown new file mode 100644 index 00000000000..059efaf893e --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_subnet_cidr_reservation.html.markdown @@ -0,0 +1,47 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_subnet_cidr_reservation" +description: |- + Provides a subnet CIDR reservation resource. +--- + +# Resource: aws_ec2_subnet_cidr_reservation + +Provides a subnet CIDR reservation resource. + +## Example Usage + +```terraform +resource "aws_ec2_subnet_cidr_reservation" "example" { + cidr_block = "10.0.0.16/28" + reservation_type = "prefix" + subnet_id = aws_subnet.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `cidr_block` - (Required) The CIDR block for the reservation. +* `reservation_type` - (Required) The type of reservation to create. Valid values: `explicit`, `prefix` +* `subnet_id` - (Required) The ID of the subnet to create the reservation for. +* `description` - (Optional) A brief description of the reservation. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - ID of the CIDR reservation. +* `owner_id` - ID of the AWS account that owns this CIDR reservation. 
+ +## Import + +Existing CIDR reservations can be imported using `SUBNET_ID:RESERVATION_ID`, e.g., + +``` +$ terraform import aws_ec2_subnet_cidr_reservation.example subnet-01llsxvsxabqiymcz:scr-4mnvz6wb7otksjcs9 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_tag.html.markdown b/website/docs/cdktf/python/r/ec2_tag.html.markdown new file mode 100644 index 00000000000..15b700f40d9 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_tag.html.markdown @@ -0,0 +1,75 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_tag" +description: |- + Manages an individual EC2 resource tag +--- + +# Resource: aws_ec2_tag + +Manages an individual EC2 resource tag. This resource should only be used in cases where EC2 resources are created outside Terraform (e.g., AMIs), shared via Resource Access Manager (RAM), or implicitly created by other means (e.g., Transit Gateway VPN Attachments). + +~> **NOTE:** This tagging resource should not be combined with the Terraform resource for managing the parent resource. For example, using `aws_vpc` and `aws_ec2_tag` to manage tags of the same VPC will cause a perpetual difference where the `aws_vpc` resource will try to remove the tag being added by the `aws_ec2_tag` resource. + +~> **NOTE:** This tagging resource does not use the [provider `ignore_tags` configuration](/docs/providers/aws/index.html#ignore_tags). + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws_customer_gateway_example = aws.customer_gateway.CustomerGateway(self, "example", + bgp_asn=cdktf.Token.as_string(65000), + ip_address="172.0.0.1", + type="ipsec.1" + ) + aws_ec2_transit_gateway_example = aws.ec2_transit_gateway.Ec2TransitGateway(self, "example_1") + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_ec2_transit_gateway_example.override_logical_id("example") + aws_vpn_connection_example = aws.vpn_connection.VpnConnection(self, "example_2", + customer_gateway_id=cdktf.Token.as_string(aws_customer_gateway_example.id), + transit_gateway_id=cdktf.Token.as_string(aws_ec2_transit_gateway_example.id), + type=cdktf.Token.as_string(aws_customer_gateway_example.type) + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_vpn_connection_example.override_logical_id("example") + aws_ec2_tag_example = aws.ec2_tag.Ec2Tag(self, "example_3", + key="Name", + resource_id=cdktf.Token.as_string(aws_vpn_connection_example.transit_gateway_attachment_id), + value="Hello World" + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. + aws_ec2_tag_example.override_logical_id("example") +``` + +## Argument Reference + +The following arguments are supported: + +* `resource_id` - (Required) The ID of the EC2 resource to manage the tag for. +* `key` - (Required) The tag name. +* `value` - (Required) The value of the tag.
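The same pattern as a minimal plain-Terraform sketch (the AMI ID is a hypothetical placeholder for a resource created outside Terraform):

```terraform
resource "aws_ec2_tag" "shared_ami" {
  resource_id = "ami-0123456789abcdef0" # hypothetical AMI shared into this account
  key         = "Environment"
  value       = "production"
}
```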
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 resource identifier and key, separated by a comma (`,`). + +## Import + +`aws_ec2_tag` can be imported by using the EC2 resource identifier and key, separated by a comma (`,`), e.g., + +``` +$ terraform import aws_ec2_tag.example tgw-attach-1234567890abcdef,Name +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_traffic_mirror_filter.html.markdown b/website/docs/cdktf/python/r/ec2_traffic_mirror_filter.html.markdown new file mode 100644 index 00000000000..1fc8bf46f48 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_traffic_mirror_filter.html.markdown @@ -0,0 +1,57 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_filter" +description: |- + Provides a Traffic mirror filter +--- + +# Resource: aws_ec2_traffic_mirror_filter + +Provides a Traffic mirror filter. +Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create a basic traffic mirror filter + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.ec2_traffic_mirror_filter.Ec2TrafficMirrorFilter(self, "foo", + description="traffic mirror filter - terraform example", + network_services=["amazon-dns"] + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional, Forces new resource) A description of the filter. +* `network_services` - (Optional) List of Amazon network services that should be mirrored. Valid values: `amazon-dns`.
+* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the traffic mirror filter. +* `id` - The ID of the traffic mirror filter. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +Traffic mirror filters can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_traffic_mirror_filter.foo tmf-0fbb93ddf38198f64 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_traffic_mirror_filter_rule.html.markdown b/website/docs/cdktf/python/r/ec2_traffic_mirror_filter_rule.html.markdown new file mode 100644 index 00000000000..86094c8e80c --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_traffic_mirror_filter_rule.html.markdown @@ -0,0 +1,96 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_filter_rule" +description: |- + Provides a Traffic mirror filter rule +--- + +# Resource: aws_ec2_traffic_mirror_filter_rule + +Provides a Traffic mirror filter rule. +Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create basic traffic mirror filter rules + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws_ec2_traffic_mirror_filter_filter = aws.ec2_traffic_mirror_filter.Ec2TrafficMirrorFilter(self, "filter", + description="traffic mirror filter - terraform example", + network_services=["amazon-dns"] + ) + aws.ec2_traffic_mirror_filter_rule.Ec2TrafficMirrorFilterRule(self, "rulein", + description="test rule", + destination_cidr_block="10.0.0.0/8", + destination_port_range=aws.ec2_traffic_mirror_filter_rule.Ec2TrafficMirrorFilterRuleDestinationPortRange( + from_port=22, + to_port=53 + ), + protocol=6, + rule_action="accept", + rule_number=1, + source_cidr_block="10.0.0.0/8", + source_port_range=aws.ec2_traffic_mirror_filter_rule.Ec2TrafficMirrorFilterRuleSourcePortRange( + from_port=0, + to_port=10 + ), + traffic_direction="ingress", + traffic_mirror_filter_id=cdktf.Token.as_string(aws_ec2_traffic_mirror_filter_filter.id) + ) + aws.ec2_traffic_mirror_filter_rule.Ec2TrafficMirrorFilterRule(self, "ruleout", + description="test rule", + destination_cidr_block="10.0.0.0/8", + rule_action="accept", + rule_number=1, + source_cidr_block="10.0.0.0/8", + traffic_direction="egress", + traffic_mirror_filter_id=cdktf.Token.as_string(aws_ec2_traffic_mirror_filter_filter.id) + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional) Description of the traffic mirror filter rule. +* `traffic_mirror_filter_id` - (Required) ID of the traffic mirror filter to which this rule should be added. +* `destination_cidr_block` - (Required) Destination CIDR block to assign to the Traffic Mirror rule. +* `destination_port_range` - (Optional) Destination port range. Supported only when the protocol is set to TCP(6) or UDP(17). See Traffic mirror port range documented below. +* `protocol` - (Optional) Protocol number, for example 17 (UDP), to assign to the Traffic Mirror rule.
For information about the protocol value, see [Protocol Numbers](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) on the Internet Assigned Numbers Authority (IANA) website. +* `rule_action` - (Required) Action to take on the filtered traffic. Valid values are `accept` and `reject`. +* `rule_number` - (Required) Number of the Traffic Mirror rule. This number must be unique for each Traffic Mirror rule in a given direction. The rules are processed in ascending order by rule number. +* `source_cidr_block` - (Required) Source CIDR block to assign to the Traffic Mirror rule. +* `source_port_range` - (Optional) Source port range. Supported only when the protocol is set to TCP(6) or UDP(17). See Traffic mirror port range documented below. +* `traffic_direction` - (Required) Direction of traffic to be captured. Valid values are `ingress` and `egress`. + +The traffic mirror port range configuration blocks support the following attributes: + +* `from_port` - (Optional) Starting port of the range. +* `to_port` - (Optional) Ending port of the range. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the traffic mirror filter rule. +* `id` - ID of the traffic mirror filter rule.
+ +## Import + +Traffic mirror rules can be imported using the `traffic_mirror_filter_id` and `id` separated by `:`, e.g., + +``` +$ terraform import aws_ec2_traffic_mirror_filter_rule.rule tmf-0fbb93ddf38198f64:tmfr-05a458f06445d0aee +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_traffic_mirror_session.html.markdown b/website/docs/cdktf/python/r/ec2_traffic_mirror_session.html.markdown new file mode 100644 index 00000000000..58b0d2dc980 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_traffic_mirror_session.html.markdown @@ -0,0 +1,67 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_session" +description: |- + Provides a Traffic mirror session +--- + +# Resource: aws_ec2_traffic_mirror_session + +Provides a Traffic mirror session. +Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create a basic traffic mirror session + +```terraform +resource "aws_ec2_traffic_mirror_filter" "filter" { + description = "traffic mirror filter - terraform example" + network_services = ["amazon-dns"] +} + +resource "aws_ec2_traffic_mirror_target" "target" { + network_load_balancer_arn = aws_lb.lb.arn +} + +resource "aws_ec2_traffic_mirror_session" "session" { + description = "traffic mirror session - terraform example" + network_interface_id = aws_instance.test.primary_network_interface_id + session_number = 1 + traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.filter.id + traffic_mirror_target_id = aws_ec2_traffic_mirror_target.target.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional) A description of the traffic mirror session. +* `network_interface_id` - (Required, Forces new) ID of the source network interface. Not all network interfaces are eligible as mirror sources.
Only Nitro-based EC2 instances support mirroring. +* `traffic_mirror_filter_id` - (Required) ID of the traffic mirror filter to be used. +* `traffic_mirror_target_id` - (Required) ID of the traffic mirror target to be used. +* `packet_length` - (Optional) The number of bytes in each packet to mirror. These are bytes after the VXLAN header. Do not specify this parameter when you want to mirror the entire packet. To mirror a subset of the packet, set this to the length (in bytes) that you want to mirror. +* `session_number` - (Required) The session number determines the order in which sessions are evaluated when an interface is used by multiple sessions. The first session with a matching filter is the one that mirrors the packets. +* `virtual_network_id` - (Optional) The VXLAN ID for the Traffic Mirror session. For more information about the VXLAN protocol, see RFC 7348. If you do not specify a VirtualNetworkId, an account-wide unique ID is chosen at random. +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the traffic mirror session. +* `id` - The ID of the session. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `owner_id` - The AWS account ID of the session owner.
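Building on the example above, a sketch that exercises the optional truncation and VXLAN arguments (the specific values are illustrative):

```terraform
resource "aws_ec2_traffic_mirror_session" "truncated" {
  description              = "mirror only the first 100 bytes after the VXLAN header"
  network_interface_id     = aws_instance.test.primary_network_interface_id
  packet_length            = 100
  session_number           = 2
  virtual_network_id       = 4242 # illustrative VXLAN ID
  traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.filter.id
  traffic_mirror_target_id = aws_ec2_traffic_mirror_target.target.id
}
```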
+
+## Import
+
+Traffic mirror sessions can be imported using the `id`, e.g.,
+
+```
+$ terraform import aws_ec2_traffic_mirror_session.session tms-0d8aa3ca35897b82e
+```
+
+ \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_traffic_mirror_target.html.markdown b/website/docs/cdktf/python/r/ec2_traffic_mirror_target.html.markdown new file mode 100644 index 00000000000..a926f203af1 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_traffic_mirror_target.html.markdown @@ -0,0 +1,64 @@ +---
+subcategory: "VPC (Virtual Private Cloud)"
+layout: "aws"
+page_title: "AWS: aws_ec2_traffic_mirror_target"
+description: |-
+  Provides a Traffic mirror target
+---
+
+# Resource: aws_ec2_traffic_mirror_target
+
+Provides a Traffic mirror target.
+Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring.
+
+## Example Usage
+
+To create a basic traffic mirror target:
+
+```terraform
+resource "aws_ec2_traffic_mirror_target" "nlb" {
+  description               = "NLB target"
+  network_load_balancer_arn = aws_lb.lb.arn
+}
+
+resource "aws_ec2_traffic_mirror_target" "eni" {
+  description          = "ENI target"
+  network_interface_id = aws_instance.test.primary_network_interface_id
+}
+
+resource "aws_ec2_traffic_mirror_target" "gwlb" {
+  description                       = "GWLB target"
+  gateway_load_balancer_endpoint_id = aws_vpc_endpoint.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `description` - (Optional, Forces new) A description of the traffic mirror target.
+* `network_interface_id` - (Optional, Forces new) The network interface ID that is associated with the target.
+* `network_load_balancer_arn` - (Optional, Forces new) The Amazon Resource Name (ARN) of the Network Load Balancer that is associated with the target.
+* `gateway_load_balancer_endpoint_id` - (Optional, Forces new) The VPC Endpoint ID of the Gateway Load Balancer that is associated with the target.
+* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+
+**NOTE:** Exactly one of `network_interface_id`, `network_load_balancer_arn` or `gateway_load_balancer_endpoint_id` must be specified; do not specify more than one.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - The ID of the Traffic Mirror target.
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
+* `arn` - The ARN of the traffic mirror target.
+* `owner_id` - The ID of the AWS account that owns the traffic mirror target.
+
+## Import
+
+Traffic mirror targets can be imported using the `id`, e.g.,
+
+```
+$ terraform import aws_ec2_traffic_mirror_target.target tmt-0c13a005422b86606
+```
+
+ \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway.html.markdown new file mode 100644 index 00000000000..54cff4fa604 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway.html.markdown @@ -0,0 +1,74 @@ +---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway"
+description: |-
+  Manages an EC2 Transit Gateway
+---
+
+# Resource: aws_ec2_transit_gateway
+
+Manages an EC2 Transit Gateway.
+
+## Example Usage
+
+```python
+import constructs as constructs
+import cdktf as cdktf
+# Provider bindings are generated by running cdktf get.
+# See https://cdk.tf/provider-generation for more details.
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws.ec2_transit_gateway.Ec2TransitGateway(self, "example", + description="example" + ) +``` + +## Argument Reference + +The following arguments are supported: + +* `amazon_side_asn` - (Optional) Private Autonomous System Number (ASN) for the Amazon side of a BGP session. The range is `64512` to `65534` for 16-bit ASNs and `4200000000` to `4294967294` for 32-bit ASNs. Default value: `64512`. + +-> **NOTE:** Modifying `amazon_side_asn` on a Transit Gateway with active BGP sessions is [not allowed](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyTransitGatewayOptions.html). You must first delete all Transit Gateway attachments that have BGP configured prior to modifying `amazon_side_asn`. + +* `auto_accept_shared_attachments` - (Optional) Whether resource attachment requests are automatically accepted. Valid values: `disable`, `enable`. Default value: `disable`. +* `default_route_table_association` - (Optional) Whether resource attachments are automatically associated with the default association route table. Valid values: `disable`, `enable`. Default value: `enable`. +* `default_route_table_propagation` - (Optional) Whether resource attachments automatically propagate routes to the default propagation route table. Valid values: `disable`, `enable`. Default value: `enable`. +* `description` - (Optional) Description of the EC2 Transit Gateway. +* `dns_support` - (Optional) Whether DNS support is enabled. Valid values: `disable`, `enable`. Default value: `enable`. +* `multicast_support` - (Optional) Whether Multicast support is enabled. Required to use `ec2_transit_gateway_multicast_domain`. Valid values: `disable`, `enable`. Default value: `disable`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway. 
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transit_gateway_cidr_blocks` - (Optional) One or more IPv4 or IPv6 CIDR blocks for the transit gateway. Must be a size /24 CIDR block or larger for IPv4, or a size /64 CIDR block or larger for IPv6. +* `vpn_ecmp_support` - (Optional) Whether VPN Equal Cost Multipath Protocol support is enabled. Valid values: `disable`, `enable`. Default value: `enable`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Amazon Resource Name (ARN) +* `association_default_route_table_id` - Identifier of the default association route table +* `id` - EC2 Transit Gateway identifier +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+* `owner_id` - Identifier of the AWS account that owns the EC2 Transit Gateway
+* `propagation_default_route_table_id` - Identifier of the default propagation route table
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `create` - (Default `10m`)
+- `update` - (Default `10m`)
+- `delete` - (Default `10m`)
+
+## Import
+
+`aws_ec2_transit_gateway` can be imported by using the EC2 Transit Gateway identifier, e.g.,
+
+```
+$ terraform import aws_ec2_transit_gateway.example tgw-12345678
+```
+
+ \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_connect.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_connect.html.markdown new file mode 100644 index 00000000000..b879075850f --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_connect.html.markdown @@ -0,0 +1,62 @@ +---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway_connect"
+description: |-
+  Manages an EC2 Transit Gateway Connect
+---
+
+# Resource: aws_ec2_transit_gateway_connect
+
+Manages an EC2 Transit Gateway Connect.
+
+## Example Usage
+
+```terraform
+resource "aws_ec2_transit_gateway_vpc_attachment" "example" {
+  subnet_ids         = [aws_subnet.example.id]
+  transit_gateway_id = aws_ec2_transit_gateway.example.id
+  vpc_id             = aws_vpc.example.id
+}
+
+resource "aws_ec2_transit_gateway_connect" "attachment" {
+  transport_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id
+  transit_gateway_id      = aws_ec2_transit_gateway.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `protocol` - (Optional) The tunnel protocol. Valid values: `gre`. Default is `gre`.
+* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Connect.
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+* `transit_gateway_default_route_table_association` - (Optional) Boolean whether the Connect should be associated with the EC2 Transit Gateway association default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways. Default value: `true`.
+* `transit_gateway_default_route_table_propagation` - (Optional) Boolean whether the Connect should propagate routes with the EC2 Transit Gateway propagation default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways. Default value: `true`.
+* `transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway.
+* `transport_attachment_id` - (Required) The underlying VPC attachment.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - EC2 Transit Gateway Attachment identifier
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
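+
+When routing for the Connect attachment is managed with custom route tables, the two default route table booleans described under Argument Reference can be disabled; a hedged sketch reusing the resources from Example Usage:
+
+```terraform
+# Sketch: keep the Connect attachment out of the Transit Gateway's default
+# association and propagation route tables (custom route tables assumed).
+resource "aws_ec2_transit_gateway_connect" "custom_routing" {
+  transport_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id
+  transit_gateway_id      = aws_ec2_transit_gateway.example.id
+
+  transit_gateway_default_route_table_association = false
+  transit_gateway_default_route_table_propagation = false
+}
+```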
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `create` - (Default `10m`)
+- `update` - (Default `10m`)
+- `delete` - (Default `10m`)
+
+## Import
+
+`aws_ec2_transit_gateway_connect` can be imported by using the EC2 Transit Gateway Connect identifier, e.g.,
+
+```
+$ terraform import aws_ec2_transit_gateway_connect.example tgw-attach-12345678
+```
+
+ \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_connect_peer.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_connect_peer.html.markdown new file mode 100644 index 00000000000..5ce8eae9788 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_connect_peer.html.markdown @@ -0,0 +1,62 @@ +---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway_connect_peer"
+description: |-
+  Manages an EC2 Transit Gateway Connect Peer
+---
+
+# Resource: aws_ec2_transit_gateway_connect_peer
+
+Manages an EC2 Transit Gateway Connect Peer.
+
+## Example Usage
+
+```terraform
+resource "aws_ec2_transit_gateway_connect" "example" {
+  transport_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id
+  transit_gateway_id      = aws_ec2_transit_gateway.example.id
+}
+
+resource "aws_ec2_transit_gateway_connect_peer" "example" {
+  peer_address                  = "10.1.2.3"
+  inside_cidr_blocks            = ["169.254.100.0/29"]
+  transit_gateway_attachment_id = aws_ec2_transit_gateway_connect.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `bgp_asn` - (Optional) The BGP ASN number assigned to the customer device. If not provided, it will use the same BGP ASN as is associated with the Transit Gateway.
+* `inside_cidr_blocks` - (Required) The CIDR block that will be used for addressing within the tunnel. It must contain exactly one IPv4 CIDR block and up to one IPv6 CIDR block.
The IPv4 CIDR block must be a /29 and must be within the 169.254.0.0/16 range, with the exception of: 169.254.0.0/29, 169.254.1.0/29, 169.254.2.0/29, 169.254.3.0/29, 169.254.4.0/29, 169.254.5.0/29, 169.254.169.248/29. The IPv6 CIDR block must be a /125 and must be within fd00::/8. The first IP from each CIDR block is assigned to the customer gateway; the second and third are for the Transit Gateway (for example, from the range 169.254.100.0/29, .1 is assigned to the customer gateway and .2 and .3 are assigned to the Transit Gateway).
+* `peer_address` - (Required) The IP address assigned to the customer device, which will be used as the tunnel endpoint. It can be an IPv4 or IPv6 address, but must be the same address family as `transit_gateway_address`.
+* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Connect Peer. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+* `transit_gateway_address` - (Optional) The IP address assigned to the Transit Gateway, which will be used as the tunnel endpoint. This address must be from the associated Transit Gateway CIDR block and from the same address family as `peer_address`. If not set explicitly, it will be selected from the associated Transit Gateway CIDR blocks.
+* `transit_gateway_attachment_id` - (Required) The Transit Gateway Connect attachment identifier.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - EC2 Transit Gateway Connect Peer identifier
+* `arn` - EC2 Transit Gateway Connect Peer ARN
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
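+
+The optional `bgp_asn` and `transit_gateway_address` arguments can be pinned instead of derived; a hedged sketch (the addresses and ASN are illustrative and must satisfy the CIDR rules above):
+
+```terraform
+# Sketch: fully pinned Connect Peer. Per the addressing rules above, .1 of the
+# inside /29 goes to the customer device and .2/.3 to the Transit Gateway.
+resource "aws_ec2_transit_gateway_connect_peer" "pinned" {
+  peer_address                  = "10.1.2.3"  # customer-side tunnel endpoint
+  transit_gateway_address       = "10.0.0.10" # must come from a TGW CIDR block
+  bgp_asn                       = "64515"     # customer-side ASN
+  inside_cidr_blocks            = ["169.254.100.0/29"]
+  transit_gateway_attachment_id = aws_ec2_transit_gateway_connect.example.id
+}
```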
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +`aws_ec2_transit_gateway_connect_peer` can be imported by using the EC2 Transit Gateway Connect Peer identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_connect_peer.example tgw-connect-peer-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_domain.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_domain.html.markdown new file mode 100644 index 00000000000..93bb146aed9 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_domain.html.markdown @@ -0,0 +1,173 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_domain" +description: |- + Manages an EC2 Transit Gateway Multicast Domain +--- + +# Resource: aws_ec2_transit_gateway_multicast_domain + +Manages an EC2 Transit Gateway Multicast Domain. + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. 
+import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws_ec2_transit_gateway_tgw = aws.ec2_transit_gateway.Ec2TransitGateway(self, "tgw", + multicast_support="enable" + ) + aws_ec2_transit_gateway_multicast_domain_domain = + aws.ec2_transit_gateway_multicast_domain.Ec2TransitGatewayMulticastDomain(self, "domain", + static_sources_support="enable", + tags={ + "Name": "Transit_Gateway_Multicast_Domain_Example" + }, + transit_gateway_id=cdktf.Token.as_string(aws_ec2_transit_gateway_tgw.id) + ) + aws_vpc_vpc1 = aws.vpc.Vpc(self, "vpc1", + cidr_block="10.0.0.0/16" + ) + aws_vpc_vpc2 = aws.vpc.Vpc(self, "vpc2", + cidr_block="10.1.0.0/16" + ) + data_aws_ami_amazon_linux = aws.data_aws_ami.DataAwsAmi(self, "amazon_linux", + filter=[DataAwsAmiFilter( + name="name", + values=["amzn-ami-hvm-*-x86_64-gp2"] + ), DataAwsAmiFilter( + name="owner-alias", + values=["amazon"] + ) + ], + most_recent=True, + owners=["amazon"] + ) + data_aws_availability_zones_available = + aws.data_aws_availability_zones.DataAwsAvailabilityZones(self, "available", + state="available" + ) + aws_subnet_subnet1 = aws.subnet.Subnet(self, "subnet1", + availability_zone=cdktf.Token.as_string( + cdktf.property_access(data_aws_availability_zones_available.names, ["0"])), + cidr_block="10.0.1.0/24", + vpc_id=cdktf.Token.as_string(aws_vpc_vpc1.id) + ) + aws_subnet_subnet2 = aws.subnet.Subnet(self, "subnet2", + availability_zone=cdktf.Token.as_string( + cdktf.property_access(data_aws_availability_zones_available.names, ["1"])), + cidr_block="10.0.2.0/24", + vpc_id=cdktf.Token.as_string(aws_vpc_vpc1.id) + ) + aws_subnet_subnet3 = aws.subnet.Subnet(self, "subnet3", + availability_zone=cdktf.Token.as_string( + cdktf.property_access(data_aws_availability_zones_available.names, ["0"])), + cidr_block="10.1.1.0/24", + vpc_id=cdktf.Token.as_string(aws_vpc_vpc2.id) + ) + aws_ec2_transit_gateway_vpc_attachment_attachment1 = + 
aws.ec2_transit_gateway_vpc_attachment.Ec2TransitGatewayVpcAttachment(self, "attachment1", + subnet_ids=[ + cdktf.Token.as_string(aws_subnet_subnet1.id), + cdktf.Token.as_string(aws_subnet_subnet2.id) + ], + transit_gateway_id=cdktf.Token.as_string(aws_ec2_transit_gateway_tgw.id), + vpc_id=cdktf.Token.as_string(aws_vpc_vpc1.id) + ) + aws_ec2_transit_gateway_vpc_attachment_attachment2 = + aws.ec2_transit_gateway_vpc_attachment.Ec2TransitGatewayVpcAttachment(self, "attachment2", + subnet_ids=[cdktf.Token.as_string(aws_subnet_subnet3.id)], + transit_gateway_id=cdktf.Token.as_string(aws_ec2_transit_gateway_tgw.id), + vpc_id=cdktf.Token.as_string(aws_vpc_vpc2.id) + ) + aws_instance_instance1 = aws.instance.Instance(self, "instance1", + ami=cdktf.Token.as_string(data_aws_ami_amazon_linux.id), + instance_type="t2.micro", + subnet_id=cdktf.Token.as_string(aws_subnet_subnet1.id) + ) + aws_instance_instance2 = aws.instance.Instance(self, "instance2", + ami=cdktf.Token.as_string(data_aws_ami_amazon_linux.id), + instance_type="t2.micro", + subnet_id=cdktf.Token.as_string(aws_subnet_subnet2.id) + ) + aws_instance_instance3 = aws.instance.Instance(self, "instance3", + ami=cdktf.Token.as_string(data_aws_ami_amazon_linux.id), + instance_type="t2.micro", + subnet_id=cdktf.Token.as_string(aws_subnet_subnet3.id) + ) + aws_ec2_transit_gateway_multicast_domain_association_association1 = + aws.ec2_transit_gateway_multicast_domain_association.Ec2TransitGatewayMulticastDomainAssociation(self, "association1", + subnet_id=cdktf.Token.as_string(aws_subnet_subnet1.id), + transit_gateway_attachment_id=cdktf.Token.as_string(aws_ec2_transit_gateway_vpc_attachment_attachment1.id), + transit_gateway_multicast_domain_id=cdktf.Token.as_string(aws_ec2_transit_gateway_multicast_domain_domain.id) + ) + aws.ec2_transit_gateway_multicast_domain_association.Ec2TransitGatewayMulticastDomainAssociation(self, "association2", + subnet_id=cdktf.Token.as_string(aws_subnet_subnet2.id), + 
transit_gateway_attachment_id=cdktf.Token.as_string(aws_ec2_transit_gateway_vpc_attachment_attachment2.id), + transit_gateway_multicast_domain_id=cdktf.Token.as_string(aws_ec2_transit_gateway_multicast_domain_domain.id) + ) + aws_ec2_transit_gateway_multicast_domain_association_association3 = + aws.ec2_transit_gateway_multicast_domain_association.Ec2TransitGatewayMulticastDomainAssociation(self, "association3", + subnet_id=cdktf.Token.as_string(aws_subnet_subnet3.id), + transit_gateway_attachment_id=cdktf.Token.as_string(aws_ec2_transit_gateway_vpc_attachment_attachment2.id), + transit_gateway_multicast_domain_id=cdktf.Token.as_string(aws_ec2_transit_gateway_multicast_domain_domain.id) + ) + aws.ec2_transit_gateway_multicast_group_member.Ec2TransitGatewayMulticastGroupMember(self, "member1", + group_ip_address="224.0.0.1", + network_interface_id=cdktf.Token.as_string(aws_instance_instance1.primary_network_interface_id), + transit_gateway_multicast_domain_id=cdktf.Token.as_string(aws_ec2_transit_gateway_multicast_domain_association_association1.transit_gateway_multicast_domain_id) + ) + aws.ec2_transit_gateway_multicast_group_member.Ec2TransitGatewayMulticastGroupMember(self, "member2", + group_ip_address="224.0.0.1", + network_interface_id=cdktf.Token.as_string(aws_instance_instance2.primary_network_interface_id), + transit_gateway_multicast_domain_id=cdktf.Token.as_string(aws_ec2_transit_gateway_multicast_domain_association_association1.transit_gateway_multicast_domain_id) + ) + aws.ec2_transit_gateway_multicast_group_source.Ec2TransitGatewayMulticastGroupSource(self, "source", + group_ip_address="224.0.0.1", + network_interface_id=cdktf.Token.as_string(aws_instance_instance3.primary_network_interface_id), + transit_gateway_multicast_domain_id=cdktf.Token.as_string(aws_ec2_transit_gateway_multicast_domain_association_association3.transit_gateway_multicast_domain_id) + ) +``` + +## Argument Reference + +The following arguments are supported: + +* 
`transit_gateway_id` - (Required) EC2 Transit Gateway identifier. The EC2 Transit Gateway must have `multicast_support` enabled. +* `auto_accept_shared_associations` - (Optional) Whether to automatically accept cross-account subnet associations that are associated with the EC2 Transit Gateway Multicast Domain. Valid values: `disable`, `enable`. Default value: `disable`. +* `igmpv2_support` - (Optional) Whether to enable Internet Group Management Protocol (IGMP) version 2 for the EC2 Transit Gateway Multicast Domain. Valid values: `disable`, `enable`. Default value: `disable`. +* `static_sources_support` - (Optional) Whether to enable support for statically configuring multicast group sources for the EC2 Transit Gateway Multicast Domain. Valid values: `disable`, `enable`. Default value: `disable`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Multicast Domain. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Domain identifier. +* `arn` - EC2 Transit Gateway Multicast Domain Amazon Resource Name (ARN). +* `owner_id` - Identifier of the AWS account that owns the EC2 Transit Gateway Multicast Domain. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
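+
+Since the example above is CDKTF-generated Python, the core of the setup may be easier to read as plain Terraform; a hedged sketch of just the gateway and domain (resource names assumed):
+
+```terraform
+# Sketch: minimal multicast domain. The Transit Gateway must have
+# multicast_support enabled before a domain can be created on it.
+resource "aws_ec2_transit_gateway" "tgw" {
+  multicast_support = "enable"
+}
+
+resource "aws_ec2_transit_gateway_multicast_domain" "domain" {
+  transit_gateway_id     = aws_ec2_transit_gateway.tgw.id
+  static_sources_support = "enable" # allow statically registered group sources
+}
+```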
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +`aws_ec2_transit_gateway_multicast_domain` can be imported by using the EC2 Transit Gateway Multicast Domain identifier, e.g., + +``` +terraform import aws_ec2_transit_gateway_multicast_domain.example tgw-mcast-domain-12345 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_domain_association.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_domain_association.html.markdown new file mode 100644 index 00000000000..6a56a74beee --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_domain_association.html.markdown @@ -0,0 +1,58 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_domain_association" +description: |- + Manages an EC2 Transit Gateway Multicast Domain Association +--- + +# Resource: aws_ec2_transit_gateway_multicast_domain_association + +Associates the specified subnet and transit gateway attachment with the specified transit gateway multicast domain. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway" "example" { + multicast_support = "enable" +} + +resource "aws_ec2_transit_gateway_vpc_attachment" "example" { + subnet_ids = [aws_subnet.example.id] + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpc_id = aws_vpc.example.id +} + +resource "aws_ec2_transit_gateway_multicast_domain" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id +} + +resource "aws_ec2_transit_gateway_multicast_domain_association" "example" { + subnet_id = aws_subnet.example.id + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `subnet_id` - (Required) The ID of the subnet to associate with the transit gateway multicast domain. +* `transit_gateway_attachment_id` - (Required) The ID of the transit gateway attachment. +* `transit_gateway_multicast_domain_id` - (Required) The ID of the transit gateway multicast domain. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Domain Association identifier. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_group_member.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_group_member.html.markdown new file mode 100644 index 00000000000..f2dd0248c9b --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_group_member.html.markdown @@ -0,0 +1,38 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_group_member" +description: |- + Manages an EC2 Transit Gateway Multicast Group Member +--- + +# Resource: aws_ec2_transit_gateway_multicast_group_member + +Registers members (network interfaces) with the transit gateway multicast group. +A member is a network interface associated with a supported EC2 instance that receives multicast traffic. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_multicast_group_member" "example" { + group_ip_address = "224.0.0.1" + network_interface_id = aws_network_interface.example.id + transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `group_ip_address` - (Required) The IP address assigned to the transit gateway multicast group. +* `network_interface_id` - (Required) The group members' network interface ID to register with the transit gateway multicast group. +* `transit_gateway_multicast_domain_id` - (Required) The ID of the transit gateway multicast domain. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Group Member identifier. 
+
+ \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_group_source.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_group_source.html.markdown new file mode 100644 index 00000000000..41d11afb87f --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_multicast_group_source.html.markdown @@ -0,0 +1,38 @@ +---
+subcategory: "Transit Gateway"
+layout: "aws"
+page_title: "AWS: aws_ec2_transit_gateway_multicast_group_source"
+description: |-
+  Manages an EC2 Transit Gateway Multicast Group Source
+---
+
+# Resource: aws_ec2_transit_gateway_multicast_group_source
+
+Registers sources (network interfaces) with the transit gateway multicast group.
+A multicast source is a network interface attached to a supported instance that sends multicast traffic.
+
+## Example Usage
+
+```terraform
+resource "aws_ec2_transit_gateway_multicast_group_source" "example" {
+  group_ip_address                    = "224.0.0.1"
+  network_interface_id                = aws_network_interface.example.id
+  transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `group_ip_address` - (Required) The IP address assigned to the transit gateway multicast group.
+* `network_interface_id` - (Required) The group source's network interface ID to register with the transit gateway multicast group.
+* `transit_gateway_multicast_domain_id` - (Required) The ID of the transit gateway multicast domain.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - EC2 Transit Gateway Multicast Group Source identifier.
+ + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown new file mode 100644 index 00000000000..1bbcb3b3a96 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment.html.markdown @@ -0,0 +1,92 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_peering_attachment" +description: |- + Manages an EC2 Transit Gateway Peering Attachment +--- + +# Resource: aws_ec2_transit_gateway_peering_attachment + +Manages an EC2 Transit Gateway Peering Attachment. +For examples of custom route table association and propagation, see the [EC2 Transit Gateway Networking Examples Guide](https://docs.aws.amazon.com/vpc/latest/tgw/TGW_Scenarios.html). + +## Example Usage + +```python +import constructs as constructs +import cdktf as cdktf +# Provider bindings are generated by running cdktf get. +# See https://cdk.tf/provider-generation for more details. +import ...gen.providers.aws as aws +class MyConvertedCode(cdktf.TerraformStack): + def __init__(self, scope, name): + super().__init__(scope, name) + aws_local = aws.provider.AwsProvider(self, "aws", + alias="local", + region="us-east-1" + ) + aws_peer = aws.provider.AwsProvider(self, "aws_1", + alias="peer", + region="us-west-2" + ) + aws_ec2_transit_gateway_local = + aws.ec2_transit_gateway.Ec2TransitGateway(self, "local", + provider=aws_local, + tags={ + "Name": "Local TGW" + } + ) + aws_ec2_transit_gateway_peer = + aws.ec2_transit_gateway.Ec2TransitGateway(self, "peer", + provider=aws_peer, + tags={ + "Name": "Peer TGW" + } + ) + data_aws_region_peer = aws.data_aws_region.DataAwsRegion(self, "peer_4", + provider=aws_peer + ) + # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
+        data_aws_region_peer.override_logical_id("peer")
+        aws.ec2_transit_gateway_peering_attachment.Ec2TransitGatewayPeeringAttachment(self, "example",
+            peer_account_id=cdktf.Token.as_string(aws_ec2_transit_gateway_peer.owner_id),
+            peer_region=cdktf.Token.as_string(data_aws_region_peer.name),
+            peer_transit_gateway_id=cdktf.Token.as_string(aws_ec2_transit_gateway_peer.id),
+            tags={
+                "Name": "TGW Peering Requestor"
+            },
+            transit_gateway_id=cdktf.Token.as_string(aws_ec2_transit_gateway_local.id)
+        )
+```
+
+A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach it to a Transit Gateway in the second account via the `aws_ec2_transit_gateway_peering_attachment` resource can be found in [the `./examples/transit-gateway-cross-account-peering-attachment` directory within the GitHub Repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-peering-attachment).
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `peer_account_id` - (Optional) Account ID of EC2 Transit Gateway to peer with. Defaults to the account ID the [AWS provider][1] is currently connected to.
+* `peer_region` - (Required) Region of EC2 Transit Gateway to peer with.
+* `peer_transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway to peer with.
+* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Peering Attachment. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+* `transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway.
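+
+In plain Terraform, the requester side shown in the CDKTF example above reduces to the following hedged sketch (the `aws.local`/`aws.peer` provider aliases and the two gateways are assumed to be configured as in the example):
+
+```terraform
+# Sketch: requester side of a cross-region peering. The accepter side is
+# managed separately with aws_ec2_transit_gateway_peering_attachment_accepter.
+resource "aws_ec2_transit_gateway_peering_attachment" "example" {
+  provider = aws.local
+
+  peer_account_id         = aws_ec2_transit_gateway.peer.owner_id
+  peer_region             = data.aws_region.peer.name
+  peer_transit_gateway_id = aws_ec2_transit_gateway.peer.id
+  transit_gateway_id      = aws_ec2_transit_gateway.local.id
+
+  tags = {
+    Name = "TGW Peering Requestor"
+  }
+}
+```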
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`aws_ec2_transit_gateway_peering_attachment` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +```sh +terraform import aws_ec2_transit_gateway_peering_attachment.example tgw-attach-12345678 +``` + +[1]: /docs/providers/aws/index.html + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment_accepter.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment_accepter.html.markdown new file mode 100644 index 00000000000..f09ba0dea8f --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_peering_attachment_accepter.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_peering_attachment_accepter" +description: |- + Manages the accepter's side of an EC2 Transit Gateway peering Attachment +--- + +# Resource: aws_ec2_transit_gateway_peering_attachment_accepter + +Manages the accepter's side of an EC2 Transit Gateway Peering Attachment. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_peering_attachment_accepter" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_peering_attachment.example.id + + tags = { + Name = "Example cross-account attachment" + } +} +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach it to a Transit Gateway in the second account via the `aws_ec2_transit_gateway_peering_attachment` resource can be found in [the `./examples/transit-gateway-cross-account-peering-attachment` directory within the GitHub repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-peering-attachment). + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_attachment_id` - (Required) The ID of the EC2 Transit Gateway Peering Attachment to manage. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Peering Attachment. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `transit_gateway_id` - Identifier of EC2 Transit Gateway. +* `peer_transit_gateway_id` - Identifier of EC2 Transit Gateway to peer with. +* `peer_account_id` - Identifier of the AWS account that owns the EC2 Transit Gateway Peering Attachment. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
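+Once accepted, the peering attachment can be associated with a custom Transit Gateway route table on the accepter's side. A minimal sketch of this pattern (resource names are illustrative):
+
+```terraform
+resource "aws_ec2_transit_gateway_route_table" "example" {
+  transit_gateway_id = aws_ec2_transit_gateway.example.id
+}
+
+resource "aws_ec2_transit_gateway_route_table_association" "example" {
+  transit_gateway_attachment_id  = aws_ec2_transit_gateway_peering_attachment_accepter.example.id
+  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id
+}
+```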
+ +## Import + +`aws_ec2_transit_gateway_peering_attachment_accepter` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_peering_attachment_accepter.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_policy_table.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_policy_table.html.markdown new file mode 100644 index 00000000000..dc9ab805bca --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_policy_table.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_policy_table" +description: |- + Manages an EC2 Transit Gateway Policy Table +--- + +# Resource: aws_ec2_transit_gateway_policy_table + +Manages an EC2 Transit Gateway Policy Table. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_policy_table" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id + + tags = { + Name = "Example Policy Table" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_id` - (Required) EC2 Transit Gateway identifier. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Policy Table. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Policy Table Amazon Resource Name (ARN). +* `id` - EC2 Transit Gateway Policy Table identifier. +* `state` - The state of the EC2 Transit Gateway Policy Table. 
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`aws_ec2_transit_gateway_policy_table` can be imported by using the EC2 Transit Gateway Policy Table identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_policy_table.example tgw-rtb-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_policy_table_association.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_policy_table_association.html.markdown new file mode 100644 index 00000000000..ecf06015b0d --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_policy_table_association.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_policy_table_association" +description: |- + Manages an EC2 Transit Gateway Policy Table association +--- + +# Resource: aws_ec2_transit_gateway_policy_table_association + +Manages an EC2 Transit Gateway Policy Table association. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_policy_table_association" "example" { + transit_gateway_attachment_id = aws_networkmanager_transit_gateway_peering.example.transit_gateway_peering_attachment_id + transit_gateway_policy_table_id = aws_ec2_transit_gateway_policy_table.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_attachment_id` - (Required) Identifier of EC2 Transit Gateway Attachment. +* `transit_gateway_policy_table_id` - (Required) Identifier of EC2 Transit Gateway Policy Table. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Policy Table identifier combined with EC2 Transit Gateway Attachment identifier +* `resource_id` - Identifier of the resource +* `resource_type` - Type of the resource + +## Import + +`aws_ec2_transit_gateway_policy_table_association` can be imported by using the EC2 Transit Gateway Policy Table identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_policy_table_association.example tgw-rtb-12345678_tgw-attach-87654321 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_prefix_list_reference.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_prefix_list_reference.html.markdown new file mode 100644 index 00000000000..6dcdfb21f4f --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_prefix_list_reference.html.markdown @@ -0,0 +1,61 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_prefix_list_reference" +description: |- + Manages an EC2 Transit Gateway Prefix List Reference +--- + +# Resource: aws_ec2_transit_gateway_prefix_list_reference + +Manages an EC2 Transit Gateway Prefix List Reference. 
+ +## Example Usage + +### Attachment Routing + +```terraform +resource "aws_ec2_transit_gateway_prefix_list_reference" "example" { + prefix_list_id = aws_ec2_managed_prefix_list.example.id + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +### Blackhole Routing + +```terraform +resource "aws_ec2_transit_gateway_prefix_list_reference" "example" { + blackhole = true + prefix_list_id = aws_ec2_managed_prefix_list.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +## Argument Reference + +The following arguments are required: + +* `prefix_list_id` - (Required) Identifier of EC2 Prefix List. +* `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. + +The following arguments are optional: + +* `blackhole` - (Optional) Indicates whether to drop traffic that matches the Prefix List. Defaults to `false`. +* `transit_gateway_attachment_id` - (Optional) Identifier of EC2 Transit Gateway Attachment. 
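+The examples above reference an existing `aws_ec2_managed_prefix_list`. A minimal sketch of such a prerequisite (the name, address family, and entries are illustrative):
+
+```terraform
+resource "aws_ec2_managed_prefix_list" "example" {
+  name           = "example"
+  address_family = "IPv4"
+  max_entries    = 5
+
+  entry {
+    cidr        = "10.0.0.0/16"
+    description = "Example network"
+  }
+}
+```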
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier and EC2 Prefix List identifier, separated by an underscore (`_`) + +## Import + +`aws_ec2_transit_gateway_prefix_list_reference` can be imported by using the EC2 Transit Gateway Route Table identifier and EC2 Prefix List identifier, separated by an underscore (`_`), e.g., + +```console +$ terraform import aws_ec2_transit_gateway_prefix_list_reference.example tgw-rtb-12345678_pl-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_route.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_route.html.markdown new file mode 100644 index 00000000000..383121e4257 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_route.html.markdown @@ -0,0 +1,58 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route" +description: |- + Manages an EC2 Transit Gateway Route +--- + +# Resource: aws_ec2_transit_gateway_route + +Manages an EC2 Transit Gateway Route. + +## Example Usage + +### Standard usage + +```terraform +resource "aws_ec2_transit_gateway_route" "example" { + destination_cidr_block = "0.0.0.0/0" + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +### Blackhole route + +```terraform +resource "aws_ec2_transit_gateway_route" "example" { + destination_cidr_block = "0.0.0.0/0" + blackhole = true + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `destination_cidr_block` - (Required) IPv4 or IPv6 CIDR block used for destination matches. Routing decisions are based on the most specific match. 
+* `transit_gateway_attachment_id` - (Optional) Identifier of EC2 Transit Gateway Attachment (required if `blackhole` is set to `false`). +* `blackhole` - (Optional) Indicates whether to drop traffic that matches this route (defaults to `false`). +* `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier combined with destination + +## Import + +`aws_ec2_transit_gateway_route` can be imported by using the EC2 Transit Gateway Route Table, an underscore, and the destination, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route.example tgw-rtb-12345678_0.0.0.0/0 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_route_table.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_route_table.html.markdown new file mode 100644 index 00000000000..013d344a1e1 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_route_table.html.markdown @@ -0,0 +1,46 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table" +description: |- + Manages an EC2 Transit Gateway Route Table +--- + +# Resource: aws_ec2_transit_gateway_route_table + +Manages an EC2 Transit Gateway Route Table. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_route_table" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Route Table. 
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Route Table Amazon Resource Name (ARN). +* `default_association_route_table` - Boolean whether this is the default association route table for the EC2 Transit Gateway. +* `default_propagation_route_table` - Boolean whether this is the default propagation route table for the EC2 Transit Gateway. +* `id` - EC2 Transit Gateway Route Table identifier +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`aws_ec2_transit_gateway_route_table` can be imported by using the EC2 Transit Gateway Route Table identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route_table.example tgw-rtb-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_route_table_association.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_route_table_association.html.markdown new file mode 100644 index 00000000000..44073ddc56f --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_route_table_association.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_association" +description: |- + Manages an EC2 Transit Gateway Route Table association +--- + +# Resource: aws_ec2_transit_gateway_route_table_association + +Manages an EC2 Transit Gateway Route Table association. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_route_table_association" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_attachment_id` - (Required) Identifier of EC2 Transit Gateway Attachment. +* `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier combined with EC2 Transit Gateway Attachment identifier +* `resource_id` - Identifier of the resource +* `resource_type` - Type of the resource + +## Import + +`aws_ec2_transit_gateway_route_table_association` can be imported by using the EC2 Transit Gateway Route Table identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route_table_association.example tgw-rtb-12345678_tgw-attach-87654321 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_route_table_propagation.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_route_table_propagation.html.markdown new file mode 100644 index 00000000000..302b9252679 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_route_table_propagation.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_propagation" +description: |- + Manages an EC2 Transit Gateway Route Table propagation +--- + +# Resource: aws_ec2_transit_gateway_route_table_propagation + +Manages an EC2 Transit Gateway Route Table propagation. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_route_table_propagation" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_attachment_id` - (Required) Identifier of EC2 Transit Gateway Attachment. +* `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier combined with EC2 Transit Gateway Attachment identifier +* `resource_id` - Identifier of the resource +* `resource_type` - Type of the resource + +## Import + +`aws_ec2_transit_gateway_route_table_propagation` can be imported by using the EC2 Transit Gateway Route Table identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route_table_propagation.example tgw-rtb-12345678_tgw-attach-87654321 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_vpc_attachment.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_vpc_attachment.html.markdown new file mode 100644 index 00000000000..7c61855a797 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_vpc_attachment.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachment" +description: |- + Manages an EC2 Transit Gateway VPC Attachment +--- + +# Resource: aws_ec2_transit_gateway_vpc_attachment + +Manages an EC2 Transit Gateway VPC Attachment. For examples of custom route table association and propagation, see the [EC2 Transit Gateway Networking Examples Guide](https://docs.aws.amazon.com/vpc/latest/tgw/TGW_Scenarios.html). 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_vpc_attachment" "example" { + subnet_ids = [aws_subnet.example.id] + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpc_id = aws_vpc.example.id +} +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach a VPC in the second account to the Transit Gateway via the `aws_ec2_transit_gateway_vpc_attachment` and `aws_ec2_transit_gateway_vpc_attachment_accepter` resources can be found in [the `./examples/transit-gateway-cross-account-vpc-attachment` directory within the Github Repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-vpc-attachment). + +## Argument Reference + +The following arguments are supported: + +* `subnet_ids` - (Required) Identifiers of EC2 Subnets. +* `transit_gateway_id` - (Required) Identifier of EC2 Transit Gateway. +* `vpc_id` - (Required) Identifier of EC2 VPC. +* `appliance_mode_support` - (Optional) Whether Appliance Mode support is enabled. If enabled, a traffic flow between a source and destination uses the same Availability Zone for the VPC attachment for the lifetime of that flow. Valid values: `disable`, `enable`. Default value: `disable`. +* `dns_support` - (Optional) Whether DNS support is enabled. Valid values: `disable`, `enable`. Default value: `enable`. +* `ipv6_support` - (Optional) Whether IPv6 support is enabled. Valid values: `disable`, `enable`. Default value: `disable`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway VPC Attachment. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
+* `transit_gateway_default_route_table_association` - (Optional) Boolean whether the VPC Attachment should be associated with the EC2 Transit Gateway association default route table. This cannot be configured, and drift detection is not performed, when the EC2 Transit Gateway is shared via Resource Access Manager. Default value: `true`. +* `transit_gateway_default_route_table_propagation` - (Optional) Boolean whether the VPC Attachment should propagate routes with the EC2 Transit Gateway propagation default route table. This cannot be configured, and drift detection is not performed, when the EC2 Transit Gateway is shared via Resource Access Manager. Default value: `true`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `vpc_owner_id` - Identifier of the AWS account that owns the EC2 VPC. 
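+Disabling the default route table association and propagation allows the attachment to be wired to a custom route table instead. A sketch of that pattern (resource names are illustrative):
+
+```terraform
+resource "aws_ec2_transit_gateway_vpc_attachment" "example" {
+  subnet_ids         = [aws_subnet.example.id]
+  transit_gateway_id = aws_ec2_transit_gateway.example.id
+  vpc_id             = aws_vpc.example.id
+
+  # Opt out of the defaults so a custom route table can be used.
+  transit_gateway_default_route_table_association = false
+  transit_gateway_default_route_table_propagation = false
+}
+
+resource "aws_ec2_transit_gateway_route_table" "example" {
+  transit_gateway_id = aws_ec2_transit_gateway.example.id
+}
+
+resource "aws_ec2_transit_gateway_route_table_association" "example" {
+  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.example.id
+  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id
+}
+```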
+ +## Import + +`aws_ec2_transit_gateway_vpc_attachment` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_vpc_attachment.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/python/r/ec2_transit_gateway_vpc_attachment_accepter.html.markdown b/website/docs/cdktf/python/r/ec2_transit_gateway_vpc_attachment_accepter.html.markdown new file mode 100644 index 00000000000..a1096e1d9a3 --- /dev/null +++ b/website/docs/cdktf/python/r/ec2_transit_gateway_vpc_attachment_accepter.html.markdown @@ -0,0 +1,64 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachment_accepter" +description: |- + Manages the accepter's side of an EC2 Transit Gateway VPC Attachment +--- + +# Resource: aws_ec2_transit_gateway_vpc_attachment_accepter + +Manages the accepter's side of an EC2 Transit Gateway VPC Attachment. + +When a cross-account (requester's AWS account differs from the accepter's AWS account) EC2 Transit Gateway VPC Attachment +is created, an EC2 Transit Gateway VPC Attachment resource is automatically created in the accepter's account. +The requester can use the `aws_ec2_transit_gateway_vpc_attachment` resource to manage its side of the connection +and the accepter can use the `aws_ec2_transit_gateway_vpc_attachment_accepter` resource to "adopt" its side of the +connection into management. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_vpc_attachment_accepter" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + + tags = { + Name = "Example cross-account attachment" + } +} +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach a VPC in the second account to the Transit Gateway via the `aws_ec2_transit_gateway_vpc_attachment` and `aws_ec2_transit_gateway_vpc_attachment_accepter` resources can be found in [the `./examples/transit-gateway-cross-account-vpc-attachment` directory within the Github Repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-vpc-attachment). + +## Argument Reference + +The following arguments are supported: + +* `transit_gateway_attachment_id` - (Required) The ID of the EC2 Transit Gateway Attachment to manage. +* `transit_gateway_default_route_table_association` - (Optional) Boolean whether the VPC Attachment should be associated with the EC2 Transit Gateway association default route table. Default value: `true`. +* `transit_gateway_default_route_table_propagation` - (Optional) Boolean whether the VPC Attachment should propagate routes with the EC2 Transit Gateway propagation default route table. Default value: `true`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway VPC Attachment. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `appliance_mode_support` - Whether Appliance Mode support is enabled. Valid values: `disable`, `enable`. +* `dns_support` - Whether DNS support is enabled. Valid values: `disable`, `enable`. +* `ipv6_support` - Whether IPv6 support is enabled. Valid values: `disable`, `enable`. +* `subnet_ids` - Identifiers of EC2 Subnets. +* `transit_gateway_id` - Identifier of EC2 Transit Gateway. +* `vpc_id` - Identifier of EC2 VPC. +* `vpc_owner_id` - Identifier of the AWS account that owns the EC2 VPC. + +## Import + +`aws_ec2_transit_gateway_vpc_attachment_accepter` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_vpc_attachment_accepter.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_client_vpn_endpoint.html.markdown b/website/docs/cdktf/typescript/d/ec2_client_vpn_endpoint.html.markdown new file mode 100644 index 00000000000..95855761c0a --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_client_vpn_endpoint.html.markdown @@ -0,0 +1,111 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_endpoint" +description: |- + Get information on an EC2 Client VPN endpoint +--- + +# Data Source: aws_ec2_client_vpn_endpoint + +Get information on an EC2 Client VPN endpoint. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2ClientVpnEndpoint.DataAwsEc2ClientVpnEndpoint( + this, + "example", + { + filter: [ + { + name: "tag:Name", + values: ["ExampleVpn"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2ClientVpnEndpoint.DataAwsEc2ClientVpnEndpoint( + this, + "example", + { + clientVpnEndpointId: "cvpn-endpoint-083cf50d6eb314f21", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `clientVpnEndpointId` - (Optional) ID of the Client VPN endpoint. +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired endpoint. + +### filter + +This block allows for complex filters. You can use one or more `filter` blocks. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeClientVpnEndpoints.html). +* `values` - (Required) Set of values that are accepted for the given field. An endpoint will be selected if any one of the given values matches. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the Client VPN endpoint. 
+* `authenticationOptions` - Information about the authentication method used by the Client VPN endpoint. +* `clientCidrBlock` - IPv4 address range, in CIDR notation, from which client IP addresses are assigned. +* `clientConnectOptions` - The options for managing connection authorization for new client connections. +* `clientLoginBannerOptions` - Options for enabling a customizable text banner that will be displayed on AWS provided clients when a VPN session is established. +* `connectionLogOptions` - Information about the client connection logging options for the Client VPN endpoint. +* `description` - Brief description of the endpoint. +* `dnsName` - DNS name to be used by clients when connecting to the Client VPN endpoint. +* `dnsServers` - Information about the DNS servers to be used for DNS resolution. +* `securityGroupIds` - IDs of the security groups for the target network associated with the Client VPN endpoint. +* `selfServicePortal` - Whether the self-service portal for the Client VPN endpoint is enabled. +* `serverCertificateArn` - The ARN of the server certificate. +* `sessionTimeoutHours` - The maximum VPN session duration time in hours. +* `splitTunnel` - Whether split-tunnel is enabled in the AWS Client VPN endpoint. +* `transportProtocol` - Transport protocol used by the Client VPN endpoint. +* `vpcId` - ID of the VPC associated with the Client VPN endpoint. +* `vpnPort` - Port number for the Client VPN endpoint. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_coip_pool.html.markdown b/website/docs/cdktf/typescript/d/ec2_coip_pool.html.markdown new file mode 100644 index 00000000000..aedff5df3de --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_coip_pool.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_coip_pool" +description: |- + Provides details about a specific EC2 Customer-Owned IP Pool +--- + +# Data Source: aws_ec2_coip_pool + +Provides details about a specific EC2 Customer-Owned IP Pool. + +This data source can prove useful when a module accepts a coip pool id as +an input variable and needs to, for example, determine the CIDR block of that +COIP Pool. + +## Example Usage + +The following example returns a specific coip pool ID + +```terraform +variable "coip_pool_id" {} + +data "aws_ec2_coip_pool" "selected" { + id = var.coip_pool_id +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +COIP Pools in the current region. The given filters must match exactly one +COIP Pool whose data will be exported as attributes. + +* `localGatewayRouteTableId` - (Optional) Local Gateway Route Table Id assigned to desired COIP Pool + +* `poolId` - (Optional) ID of the specific COIP Pool to retrieve. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired COIP Pool. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeCoipPools.html). 
+ +* `values` - (Required) Set of values that are accepted for the given field. + A COIP Pool will be selected if any one of the given values matches. + +## Attributes Reference + +All of the argument attributes except `filter` blocks are also exported as +result attributes. This data source will complete the data by populating +any fields that are not included in the configuration with the data for +the selected COIP Pool. + +In addition, the following attributes are exported: + +* `arn` - ARN of the COIP pool +* `poolCidrs` - Set of CIDR blocks in pool + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_coip_pools.html.markdown b/website/docs/cdktf/typescript/d/ec2_coip_pools.html.markdown new file mode 100644 index 00000000000..548b6a0d6dd --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_coip_pools.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_coip_pools" +description: |- + Provides information for multiple EC2 Customer-Owned IP Pools +--- + +# Data Source: aws_ec2_coip_pools + +Provides information for multiple EC2 Customer-Owned IP Pools, such as their identifiers. + +## Example Usage + +The following shows outputting all COIP Pool Ids. + +```terraform +data "aws_ec2_coip_pools" "foo" {} + +output "foo" { + value = data.aws_ec2_coip_pools.foo.ids +} +``` + +## Argument Reference + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired aws_ec2_coip_pools. + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeCoipPools.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A COIP Pool will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. +* `poolIds` - Set of COIP Pool Identifiers + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_host.html.markdown b/website/docs/cdktf/typescript/d/ec2_host.html.markdown new file mode 100644 index 00000000000..ebfe3f56944 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_host.html.markdown @@ -0,0 +1,106 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_host" +description: |- + Get information on an EC2 Host. +--- + +# Data Source: aws_ec2_host + +Use this data source to get information about an EC2 Dedicated Host. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const awsEc2HostTest = new aws.ec2Host.Ec2Host(this, "test", { + availabilityZone: "us-west-2a", + instanceType: "c5.18xlarge", + }); + const dataAwsEc2HostTest = new aws.dataAwsEc2Host.DataAwsEc2Host( + this, + "test_1", + { + hostId: cdktf.Token.asString(awsEc2HostTest.id), + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + dataAwsEc2HostTest.overrideLogicalId("test"); + } +} + +``` + +### Filter Example + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2Host.DataAwsEc2Host(this, "test", { + filter: [ + { + name: "instance-type", + values: ["c5.18xlarge"], + }, + ], + }); + } +} + +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available EC2 Hosts in the current region. +The given filters must match exactly one host whose data will be exported as attributes. + +* `filter` - (Optional) Configuration block. Detailed below. +* `hostId` - (Optional) ID of the Dedicated Host. + +### filter + +This block allows for complex filters. You can use one or more `filter` blocks. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeHosts.html). 
+* `values` - (Required) Set of values that are accepted for the given field. A host will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to the attributes above, the following attributes are exported: + +* `id` - ID of the Dedicated Host. +* `arn` - ARN of the Dedicated Host. +* `autoPlacement` - Whether auto-placement is on or off. +* `availabilityZone` - Availability Zone of the Dedicated Host. +* `cores` - Number of cores on the Dedicated Host. +* `hostRecovery` - Whether host recovery is enabled or disabled for the Dedicated Host. +* `instanceFamily` - Instance family supported by the Dedicated Host. For example, "m5". +* `instanceType` - Instance type supported by the Dedicated Host. For example, "m5.large". If the host supports multiple instance types, no instanceType is returned. +* `outpostArn` - ARN of the AWS Outpost on which the Dedicated Host is allocated. +* `ownerId` - ID of the AWS account that owns the Dedicated Host. +* `sockets` - Number of sockets on the Dedicated Host. +* `totalVcpus` - Total number of vCPUs on the Dedicated Host. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_instance_type.html.markdown b/website/docs/cdktf/typescript/d/ec2_instance_type.html.markdown new file mode 100644 index 00000000000..9628bf7d587 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_instance_type.html.markdown @@ -0,0 +1,111 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_type" +description: |- + Information about single EC2 Instance Type. +--- + + +# Data Source: aws_ec2_instance_type + +Get characteristics for a single EC2 Instance Type. 
+ +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2InstanceType.DataAwsEc2InstanceType(this, "example", { + instanceType: "t2.micro", + }); + } +} + +``` + +## Argument Reference + +The following argument is supported: + +* `instanceType` - (Required) Instance + +## Attribute Reference + +In addition to the argument above, the following attributes are exported: + +~> **NOTE:** Not all attributes are set for every instance type. + +* `autoRecoverySupported` - `true` if auto recovery is supported. +* `bareMetal` - `true` if it is a bare metal instance type. +* `burstablePerformanceSupported` - `true` if the instance type is a burstable performance instance type. +* `currentGeneration` - `true` if the instance type is a current generation. +* `dedicatedHostsSupported` - `true` if Dedicated Hosts are supported on the instance type. +* `defaultCores` - Default number of cores for the instance type. +* `defaultThreadsPerCore` - The default number of threads per core for the instance type. +* `defaultVcpus` - Default number of vCPUs for the instance type. +* `ebsEncryptionSupport` - Indicates whether Amazon EBS encryption is supported. +* `ebsNvmeSupport` - Whether non-volatile memory express (NVMe) is supported. +* `ebsOptimizedSupport` - Indicates that the instance type is Amazon EBS-optimized. +* `ebsPerformanceBaselineBandwidth` - The baseline bandwidth performance for an EBS-optimized instance type, in Mbps. +* `ebsPerformanceBaselineIops` - The baseline input/output storage operations per seconds for an EBS-optimized instance type. 
+* `ebsPerformanceBaselineThroughput` - The baseline throughput performance for an EBS-optimized instance type, in MBps. +* `ebsPerformanceMaximumBandwidth` - The maximum bandwidth performance for an EBS-optimized instance type, in Mbps. +* `ebsPerformanceMaximumIops` - The maximum input/output storage operations per second for an EBS-optimized instance type. +* `ebsPerformanceMaximumThroughput` - The maximum throughput performance for an EBS-optimized instance type, in MBps. +* `efaSupported` - Whether Elastic Fabric Adapter (EFA) is supported. +* `enaSupport` - Whether Elastic Network Adapter (ENA) is supported. +* `encryptionInTransitSupported` - Indicates whether encryption in-transit between instances is supported. +* `fpgas` - Describes the FPGA accelerator settings for the instance type. + * `fpgas.#Count` - The count of FPGA accelerators for the instance type. + * `fpgas.#Manufacturer` - The manufacturer of the FPGA accelerator. + * `fpgas.#MemorySize` - The size (in MiB) for the memory available to the FPGA accelerator. + * `fpgas.#Name` - The name of the FPGA accelerator. +* `freeTierEligible` - `true` if the instance type is eligible for the free tier. +* `gpus` - Describes the GPU accelerators for the instance type. + * `gpus.#Count` - The number of GPUs for the instance type. + * `gpus.#Manufacturer` - The manufacturer of the GPU accelerator. + * `gpus.#MemorySize` - The size (in MiB) for the memory available to the GPU accelerator. + * `gpus.#Name` - The name of the GPU accelerator. +* `hibernationSupported` - `true` if On-Demand hibernation is supported. +* `hypervisor` - Hypervisor used for the instance type. +* `inferenceAccelerators` Describes the Inference accelerators for the instance type. + * `inferenceAccelerators.#Count` - The number of Inference accelerators for the instance type. + * `inferenceAccelerators.#Manufacturer` - The manufacturer of the Inference accelerator. 
+ * `inferenceAccelerators.#Name` - The name of the Inference accelerator. +* `instanceDisks` - Describes the disks for the instance type. + * `instanceDisks.#Count` - The number of disks with this configuration. + * `instanceDisks.#Size` - The size of the disk in GB. + * `instanceDisks.#Type` - The type of disk. +* `instanceStorageSupported` - `true` if instance storage is supported. +* `ipv6Supported` - `true` if IPv6 is supported. +* `maximumIpv4AddressesPerInterface` - The maximum number of IPv4 addresses per network interface. +* `maximumIpv6AddressesPerInterface` - The maximum number of IPv6 addresses per network interface. +* `maximumNetworkInterfaces` - The maximum number of network interfaces for the instance type. +* `memorySize` - Size of the instance memory, in MiB. +* `networkPerformance` - Describes the network performance. +* `supportedArchitectures` - A list of architectures supported by the instance type. +* `supportedPlacementStrategies` - A list of supported placement groups types. +* `supportedRootDeviceTypes` - Indicates the supported root device types. +* `supportedUsagesClasses` - Indicates whether the instance type is offered for spot or On-Demand. +* `supportedVirtualizationTypes` - The supported virtualization types. +* `sustainedClockSpeed` - The speed of the processor, in GHz. +* `totalFpgaMemory` - Total memory of all FPGA accelerators for the instance type (in MiB). +* `totalGpuMemory` - Total size of the memory for the GPU accelerators for the instance type (in MiB). +* `totalInstanceStorage` - The total size of the instance disks, in GB. +* `validCores` - List of the valid number of cores that can be configured for the instance type. +* `validThreadsPerCore` - List of the valid number of threads per core that can be configured for the instance type. 
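+A few of these attributes can be surfaced as outputs; a minimal sketch in plain Terraform syntax (attribute names are snake_case there):
+
+```terraform
+data "aws_ec2_instance_type" "example" {
+  instance_type = "t2.micro"
+}
+
+# Default vCPU count for the instance type
+output "t2_micro_vcpus" {
+  value = data.aws_ec2_instance_type.example.default_vcpus
+}
+
+# Instance memory, in MiB
+output "t2_micro_memory_mib" {
+  value = data.aws_ec2_instance_type.example.memory_size
+}
+```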
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_instance_type_offering.html.markdown b/website/docs/cdktf/typescript/d/ec2_instance_type_offering.html.markdown new file mode 100644 index 00000000000..c6603fda8d4 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_instance_type_offering.html.markdown @@ -0,0 +1,68 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_type_offering" +description: |- + Information about single EC2 Instance Type Offering. +--- + +# Data Source: aws_ec2_instance_type_offering + +Information about single EC2 Instance Type Offering. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2InstanceTypeOffering.DataAwsEc2InstanceTypeOffering( + this, + "example", + { + filter: [ + { + name: "instance-type", + values: ["t2.micro", "t3.micro"], + }, + ], + preferredInstanceTypes: ["t3.micro", "t2.micro"], + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTypeOfferings.html) for supported filters. Detailed below. +* `locationType` - (Optional) Location type. Defaults to `region`. Valid values: `availabilityZone`, `availabilityZoneId`, and `region`. 
+* `preferredInstanceTypes` - (Optional) Ordered list of preferred EC2 Instance Types. The first match in this list will be returned. If no preferred matches are found and the original search returned more than one result, an error is returned. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. The `location` filter depends on the top-level `locationType` argument and if not specified, defaults to the current region. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Instance Type. +* `instanceType` - EC2 Instance Type. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_instance_type_offerings.html.markdown b/website/docs/cdktf/typescript/d/ec2_instance_type_offerings.html.markdown new file mode 100644 index 00000000000..63d24b7e9d1 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_instance_type_offerings.html.markdown @@ -0,0 +1,75 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_type_offerings" +description: |- + Information about EC2 Instance Type Offerings. +--- + +# Data Source: aws_ec2_instance_type_offerings + +Information about EC2 Instance Type Offerings. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2InstanceTypeOfferings.DataAwsEc2InstanceTypeOfferings( + this, + "example", + { + filter: [ + { + name: "instance-type", + values: ["t2.micro", "t3.micro"], + }, + { + name: "location", + values: ["usw2-az4"], + }, + ], + locationType: "availability-zone-id", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTypeOfferings.html) for supported filters. Detailed below. +* `locationType` - (Optional) Location type. Defaults to `region`. Valid values: `availabilityZone`, `availabilityZoneId`, and `region`. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. The `location` filter depends on the top-level `locationType` argument and if not specified, defaults to the current region. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `instanceTypes` - List of EC2 Instance Types. +* `locations` - List of locations. +* `locationTypes` - List of location types. + +Note that the indexes of Instance Type Offering instance types, locations and location types correspond. 
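+Because the indexes correspond, the lists can be paired element-by-element; a sketch in plain Terraform syntax (snake_case attribute names):
+
+```terraform
+data "aws_ec2_instance_type_offerings" "example" {
+  location_type = "availability-zone"
+}
+
+# Pair each instance type with the location at the same index
+output "offerings_by_location" {
+  value = [
+    for i, t in data.aws_ec2_instance_type_offerings.example.instance_types :
+    "${t} in ${data.aws_ec2_instance_type_offerings.example.locations[i]}"
+  ]
+}
+```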
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_instance_types.html.markdown b/website/docs/cdktf/typescript/d/ec2_instance_types.html.markdown new file mode 100644 index 00000000000..bb586ca30ac --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_instance_types.html.markdown @@ -0,0 +1,73 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_types" +description: |- + Information about EC2 Instance Types. +--- + +# Data Source: aws_ec2_instance_types + +Information about EC2 Instance Types. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2InstanceTypes.DataAwsEc2InstanceTypes(this, "test", { + filter: [ + { + name: "auto-recovery-supported", + values: ["true"], + }, + { + name: "network-info.encryption-in-transit-supported", + values: ["true"], + }, + { + name: "instance-storage-supported", + values: ["true"], + }, + { + name: "instance-type", + values: ["g5.2xlarge", "g5.4xlarge"], + }, + ], + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTypes.html) for supported filters. Detailed below. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. 
+* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `instanceTypes` - List of EC2 Instance Types. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateway.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateway.html.markdown new file mode 100644 index 00000000000..e88cf56e10e --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateway.html.markdown @@ -0,0 +1,85 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway" +description: |- + Provides details about an EC2 Local Gateway +--- + +# Data Source: aws_ec2_local_gateway + +Provides details about an EC2 Local Gateway. + +## Example Usage + +The following example shows how one might accept a local gateway id as a variable. + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + /*Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. 
+ You can read more about this at https://cdk.tf/variables*/ + const localGatewayId = new cdktf.TerraformVariable( + this, + "local_gateway_id", + {} + ); + new aws.dataAwsEc2LocalGateway.DataAwsEc2LocalGateway(this, "selected", { + id: localGatewayId.stringValue, + }); + } +} + +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Local Gateways in the current region. The given filters must match exactly one +Local Gateway whose data will be exported as attributes. + +* `filter` - (Optional) Custom filter block as described below. + +* `id` - (Optional) Id of the specific Local Gateway to retrieve. + +* `state` - (Optional) Current state of the desired Local Gateway. + Can be either `"pending"` or `"available"`. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired Local Gateway. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGateways.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Local Gateway will be selected if any one of the given values matches. + +## Attributes Reference + +All of the argument attributes except `filter` blocks are also exported as +result attributes. This data source will complete the data by populating +any fields that are not included in the configuration with the data for +the selected Local Gateway. + +The following attributes are additionally exported: + +* `outpostArn` - ARN of Outpost +* `ownerId` - AWS account identifier that owns the Local Gateway. +* `state` - State of the local gateway. 
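+As an alternative to looking the gateway up by `id`, a tag match can be used; a minimal sketch in plain Terraform syntax (the tag value is a placeholder, and the tags must match exactly one Local Gateway):
+
+```terraform
+data "aws_ec2_local_gateway" "by_tag" {
+  tags = {
+    Name = "example" # placeholder tag value
+  }
+}
+
+output "local_gateway_outpost_arn" {
+  value = data.aws_ec2_local_gateway.by_tag.outpost_arn
+}
+```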
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateway_route_table.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateway_route_table.html.markdown new file mode 100644 index 00000000000..0d7ff06c103 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateway_route_table.html.markdown @@ -0,0 +1,80 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route_table" +description: |- + Provides details about an EC2 Local Gateway Route Table +--- + +# Data Source: aws_ec2_local_gateway_route_table + +Provides details about an EC2 Local Gateway Route Table. + +This data source can prove useful when a module accepts a local gateway route table id as +an input variable and needs to, for example, find the associated Outpost or Local Gateway. + +## Example Usage + +The following example returns a specific local gateway route table ID + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + /*Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. 
+ You can read more about this at https://cdk.tf/variables*/ + const awsEc2LocalGatewayRouteTable = new cdktf.TerraformVariable( + this, + "aws_ec2_local_gateway_route_table", + {} + ); + new aws.dataAwsEc2LocalGatewayRouteTable.DataAwsEc2LocalGatewayRouteTable( + this, + "selected", + { + localGatewayRouteTableId: awsEc2LocalGatewayRouteTable.stringValue, + } + ); + } +} + +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Local Gateway Route Tables in the current region. The given filters must match exactly one +Local Gateway Route Table whose data will be exported as attributes. + +* `localGatewayRouteTableId` - (Optional) Local Gateway Route Table Id assigned to desired local gateway route table + +* `localGatewayId` - (Optional) ID of the specific local gateway route table to retrieve. + +* `outpostArn` - (Optional) ARN of the Outpost the local gateway route table is associated with. + +* `state` - (Optional) State of the local gateway route table. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired local gateway route table. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayRouteTables.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A local gateway route table will be selected if any one of the given values matches. 
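+The `filter` sub-blocks described above can be sketched in plain Terraform syntax; this assumes exactly one local gateway route table is in the `available` state in the current region:
+
+```terraform
+data "aws_ec2_local_gateway_route_table" "selected" {
+  filter {
+    name   = "state"
+    values = ["available"]
+  }
+}
+
+output "selected_route_table_id" {
+  value = data.aws_ec2_local_gateway_route_table.selected.local_gateway_route_table_id
+}
+```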
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateway_route_tables.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateway_route_tables.html.markdown new file mode 100644 index 00000000000..ebf9a5249d7 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateway_route_tables.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route_tables" +description: |- + Provides information for multiple EC2 Local Gateway Route Tables +--- + +# Data Source: aws_ec2_local_gateway_route_tables + +Provides information for multiple EC2 Local Gateway Route Tables, such as their identifiers. + +## Example Usage + +The following shows outputting all Local Gateway Route Table Ids. + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const dataAwsEc2LocalGatewayRouteTablesFoo = + new aws.dataAwsEc2LocalGatewayRouteTables.DataAwsEc2LocalGatewayRouteTables( + this, + "foo", + {} + ); + const cdktfTerraformOutputFoo = new cdktf.TerraformOutput(this, "foo_1", { + value: dataAwsEc2LocalGatewayRouteTablesFoo.ids, + }); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + cdktfTerraformOutputFoo.overrideLogicalId("foo"); + } +} + +``` + +## Argument Reference + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired local gateway route table. 
+ +* `filter` - (Optional) Custom filter block as described below. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayRouteTables.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Local Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. +* `ids` - Set of Local Gateway Route Table identifiers + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface.html.markdown new file mode 100644 index 00000000000..e03e15d73b0 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_virtual_interface" +description: |- + Provides details about an EC2 Local Gateway Virtual Interface +--- + +# Data Source: aws_ec2_local_gateway_virtual_interface + +Provides details about an EC2 Local Gateway Virtual Interface. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). 
+ +## Example Usage + +```terraform +data "aws_ec2_local_gateway_virtual_interface" "example" { + for_each = data.aws_ec2_local_gateway_virtual_interface_group.example.local_gateway_virtual_interface_ids + + id = each.value +} +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayVirtualInterfaces.html) for supported filters. Detailed below. +* `id` - (Optional) Identifier of EC2 Local Gateway Virtual Interface. +* `tags` - (Optional) Key-value map of resource tags, each pair of which must exactly match a pair on the desired local gateway route table. + +### filter Argument Reference + +The `filter` configuration block supports the following arguments: + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `localAddress` - Local address. +* `localBgpAsn` - Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the EC2 Local Gateway. +* `localGatewayId` - Identifier of the EC2 Local Gateway. +* `peerAddress` - Peer address. +* `peerBgpAsn` - Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the peer. +* `vlan` - Virtual Local Area Network. 
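+Building on the `for_each` example above, the exported attributes can be collected across all interfaces; a sketch in plain Terraform syntax (snake_case attribute names):
+
+```terraform
+# Map each virtual interface ID to its VLAN
+output "local_gateway_vlans" {
+  value = {
+    for id, vif in data.aws_ec2_local_gateway_virtual_interface.example :
+    id => vif.vlan
+  }
+}
+```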
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface_group.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface_group.html.markdown new file mode 100644 index 00000000000..b8eaa49f856 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface_group.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_virtual_interface_group" +description: |- + Provides details about an EC2 Local Gateway Virtual Interface Group +--- + +# Data Source: aws_ec2_local_gateway_virtual_interface_group + +Provides details about an EC2 Local Gateway Virtual Interface Group. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). + +## Example Usage + +```terraform +data "aws_ec2_local_gateway_virtual_interface_group" "example" { + local_gateway_id = data.aws_ec2_local_gateway.example.id +} +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayVirtualInterfaceGroups.html) for supported filters. Detailed below. +* `id` - (Optional) Identifier of EC2 Local Gateway Virtual Interface Group. +* `localGatewayId` - (Optional) Identifier of EC2 Local Gateway. +* `tags` - (Optional) Key-value map of resource tags, each pair of which must exactly match a pair on the desired local gateway route table. 
+ +### filter Argument Reference + +The `filter` configuration block supports the following arguments: + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `localGatewayVirtualInterfaceIds` - Set of EC2 Local Gateway Virtual Interface identifiers. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface_groups.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface_groups.html.markdown new file mode 100644 index 00000000000..72a97d90ebf --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateway_virtual_interface_groups.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_virtual_interface_groups" +description: |- + Provides details about multiple EC2 Local Gateway Virtual Interface Groups +--- + +# Data Source: aws_ec2_local_gateway_virtual_interface_groups + +Provides details about multiple EC2 Local Gateway Virtual Interface Groups, such as identifiers. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2LocalGatewayVirtualInterfaceGroups.DataAwsEc2LocalGatewayVirtualInterfaceGroups( + this, + "all", + {} + ); + } +} + +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGatewayVirtualInterfaceGroups.html) for supported filters. Detailed below. +* `tags` - (Optional) Key-value map of resource tags, each pair of which must exactly match a pair on the desired local gateway route table. + +### filter Argument Reference + +The `filter` configuration block supports the following arguments: + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of EC2 Local Gateway Virtual Interface Group identifiers. +* `localGatewayVirtualInterfaceIds` - Set of EC2 Local Gateway Virtual Interface identifiers. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_local_gateways.html.markdown b/website/docs/cdktf/typescript/d/ec2_local_gateways.html.markdown new file mode 100644 index 00000000000..58e8d79ce11 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_local_gateways.html.markdown @@ -0,0 +1,69 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateways" +description: |- + Provides information for multiple EC2 Local Gateways +--- + +# Data Source: aws_ec2_local_gateways + +Provides information for multiple EC2 Local Gateways, such as their identifiers. + +## Example Usage + +The following example retrieves Local Gateways with a resource tag of `service` set to `production`. + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const dataAwsEc2LocalGatewaysFoo = + new aws.dataAwsEc2LocalGateways.DataAwsEc2LocalGateways(this, "foo", { + tags: { + service: "production", + }, + }); + const cdktfTerraformOutputFoo = new cdktf.TerraformOutput(this, "foo_1", { + value: dataAwsEc2LocalGatewaysFoo.ids, + }); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + cdktfTerraformOutputFoo.overrideLogicalId("foo"); + } +} + +``` + +## Argument Reference + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired local_gateways. + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeLocalGateways.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Local Gateway will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. +* `ids` - Set of all the Local Gateway identifiers + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_managed_prefix_list.html.markdown b/website/docs/cdktf/typescript/d/ec2_managed_prefix_list.html.markdown new file mode 100644 index 00000000000..c33c136869d --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_managed_prefix_list.html.markdown @@ -0,0 +1,108 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_managed_prefix_list" +description: |- + Provides details about a specific managed prefix list +--- + +# Data Source: aws_ec2_managed_prefix_list + +`awsEc2ManagedPrefixList` provides details about a specific AWS prefix list or +customer-managed prefix list in the current region. + +## Example Usage + +### Find the regional DynamoDB prefix list + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const dataAwsRegionCurrent = new aws.dataAwsRegion.DataAwsRegion( + this, + "current", + {} + ); + new aws.dataAwsEc2ManagedPrefixList.DataAwsEc2ManagedPrefixList( + this, + "example", + { + name: "com.amazonaws.${" + dataAwsRegionCurrent.name + "}.dynamodb", + } + ); + } +} + +``` + +### Find a managed prefix list using filters + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2ManagedPrefixList.DataAwsEc2ManagedPrefixList( + this, + "example", + { + filter: [ + { + name: "prefix-list-name", + values: ["my-prefix-list"], + }, + ], + } + ); + } +} + +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +prefix lists. The given filters must match exactly one prefix list +whose data will be exported as attributes. + +* `id` - (Optional) ID of the prefix list to select. +* `name` - (Optional) Name of the prefix list to select. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the EC2 [DescribeManagedPrefixLists](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeManagedPrefixLists.html) API Reference. +* `values` - (Required) Set of values that are accepted for the given filter field. 
Results will be selected if any given value matches.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - ID of the selected prefix list.
+* `arn` - ARN of the selected prefix list.
+* `name` - Name of the selected prefix list.
+* `entries` - Set of entries in this prefix list. Each entry is an object with `cidr` and `description`.
+* `ownerId` - Account ID of the owner of a customer-managed prefix list, or `aws` otherwise.
+* `addressFamily` - Address family of the prefix list. Valid values are `iPv4` and `iPv6`.
+* `maxEntries` - When the prefix list is managed, the maximum number of entries it supports, or null otherwise.
+* `tags` - Map of tags assigned to the resource.
+
+## Timeouts
+
+[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts):
+
+- `read` - (Default `20M`)
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/d/ec2_managed_prefix_lists.html.markdown b/website/docs/cdktf/typescript/d/ec2_managed_prefix_lists.html.markdown
new file mode 100644
index 00000000000..e6269fd1ed3
--- /dev/null
+++ b/website/docs/cdktf/typescript/d/ec2_managed_prefix_lists.html.markdown
@@ -0,0 +1,89 @@
+---
+subcategory: "VPC (Virtual Private Cloud)"
+layout: "aws"
+page_title: "AWS: aws_ec2_managed_prefix_lists"
+description: |-
+  Get information on managed prefix lists
+---
+
+# Data Source: aws_ec2_managed_prefix_lists
+
+This data source can be useful for getting back a list of managed prefix list ids to be referenced elsewhere.
+
+## Example Usage
+
+The following returns all managed prefix lists filtered by tags.
+
+```typescript
+import * as constructs from "constructs";
+import * as cdktf from "cdktf";
+/*Provider bindings are generated by running cdktf get.
+See https://cdk.tf/provider-generation for more details.*/
+import * as aws from "./.gen/providers/aws";
+class MyConvertedCode extends cdktf.TerraformStack {
+  constructor(scope: constructs.Construct, name: string) {
+    super(scope, name);
+    const dataAwsEc2ManagedPrefixListsTestEnv =
+      new aws.dataAwsEc2ManagedPrefixLists.DataAwsEc2ManagedPrefixLists(
+        this,
+        "test_env",
+        {
+          tags: {
+            Env: "test",
+          },
+        }
+      );
+    /*In most cases loops should be handled in the programming language context and
+    not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input
+    you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source
+    you need to keep this like it is.*/
+    const dataAwsEc2ManagedPrefixListTestEnvCount = cdktf.TerraformCount.of(
+      cdktf.Fn.lengthOf(dataAwsEc2ManagedPrefixListsTestEnv.ids)
+    );
+    const dataAwsEc2ManagedPrefixListTestEnv =
+      new aws.dataAwsEc2ManagedPrefixList.DataAwsEc2ManagedPrefixList(
+        this,
+        "test_env_1",
+        {
+          id: cdktf.Token.asString(
+            cdktf.propertyAccess(
+              cdktf.Fn.tolist(dataAwsEc2ManagedPrefixListsTestEnv.ids),
+              [dataAwsEc2ManagedPrefixListTestEnvCount.index]
+            )
+          ),
+          count: dataAwsEc2ManagedPrefixListTestEnvCount,
+        }
+      );
+    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
+    dataAwsEc2ManagedPrefixListTestEnv.overrideLogicalId("test_env");
+  }
+}
+
+```
+
+## Argument Reference
+
+* `filter` - (Optional) Custom filter block as described below.
+* `tags` - (Optional) Map of tags, each pair of which must exactly match
+  a pair on the desired managed prefix lists.
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeManagedPrefixLists.html). +* `values` - (Required) Set of values that are accepted for the given field. + A managed prefix list will be selected if any one of the given values matches. + +## Attributes Reference + +* `id` - AWS Region. +* `ids` - List of all the managed prefix list ids found. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_network_insights_analysis.html.markdown b/website/docs/cdktf/typescript/d/ec2_network_insights_analysis.html.markdown new file mode 100644 index 00000000000..5427a4642e4 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_network_insights_analysis.html.markdown @@ -0,0 +1,54 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_analysis" +description: |- + Provides details about a specific Network Insights Analysis. +--- + +# Data Source: aws_ec2_network_insights_analysis + +`awsEc2NetworkInsightsAnalysis` provides details about a specific Network Insights Analysis. + +## Example Usage + +```terraform +data "aws_ec2_network_insights_analysis" "example" { + network_insights_analysis_id = aws_ec2_network_insights_analysis.example.id +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Network Insights Analyses. The given filters must match exactly one Network Insights Analysis +whose data will be exported as attributes. + +* `networkInsightsAnalysisId` - (Optional) ID of the Network Insights Analysis to select. 
+* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the EC2 [`describeNetworkInsightsAnalyses`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInsightsAnalyses.html) API Reference. +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `alternatePathHints` - Potential intermediate components of a feasible path. +* `arn` - ARN of the selected Network Insights Analysis. +* `explanations` - Explanation codes for an unreachable path. +* `filterInArns` - ARNs of the AWS resources that the path must traverse. +* `forwardPathComponents` - The components in the path from source to destination. +* `networkInsightsPathId` - The ID of the path. +* `pathFound` - Set to `true` if the destination was reachable. +* `returnPathComponents` - The components in the path from destination to source. +* `startDate` - Date/time the analysis was started. +* `status` - Status of the analysis. `succeeded` means the analysis was completed, not that a path was found, for that see `pathFound`. +* `statusMessage` - Message to provide more context when the `status` is `failed`. +* `warningMessage` - Warning message. 
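+
+Because `status` only indicates that the analysis completed, a reachability check combines it with `pathFound`. A minimal sketch (reusing the `example` data source above; the output name is arbitrary):
+
+```terraform
+output "path_is_reachable" {
+  # true only when the analysis completed and an actual path was found
+  value = data.aws_ec2_network_insights_analysis.example.status == "succeeded" && data.aws_ec2_network_insights_analysis.example.path_found
+}
+```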
+ + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_network_insights_path.html.markdown b/website/docs/cdktf/typescript/d/ec2_network_insights_path.html.markdown new file mode 100644 index 00000000000..31db2845dcc --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_network_insights_path.html.markdown @@ -0,0 +1,50 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_path" +description: |- + Provides details about a specific Network Insights Path. +--- + +# Data Source: aws_ec2_network_insights_path + +`awsEc2NetworkInsightsPath` provides details about a specific Network Insights Path. + +## Example Usage + +```terraform +data "aws_ec2_network_insights_path" "example" { + network_insights_path_id = aws_ec2_network_insights_path.example.id +} +``` + +## Argument Reference + +The arguments of this data source act as filters for querying the available +Network Insights Paths. The given filters must match exactly one Network Insights Path +whose data will be exported as attributes. + +* `networkInsightsPathId` - (Optional) ID of the Network Insights Path to select. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the EC2 [`describeNetworkInsightsPaths`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInsightsPaths.html) API Reference. +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the selected Network Insights Path. +* `destination` - AWS resource that is the destination of the path. 
+* `destinationIp` - IP address of the AWS resource that is the destination of the path. +* `destinationPort` - Destination port. +* `protocol` - Protocol. +* `source` - AWS resource that is the source of the path. +* `sourceIp` - IP address of the AWS resource that is the source of the path. +* `tags` - Map of tags assigned to the resource. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_public_ipv4_pool.html.markdown b/website/docs/cdktf/typescript/d/ec2_public_ipv4_pool.html.markdown new file mode 100644 index 00000000000..c78106e05c8 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_public_ipv4_pool.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_public_ipv4_pool" +description: |- + Provides details about a specific AWS EC2 Public IPv4 Pool. +--- + +# Data Source: aws_ec2_public_ipv4_pool + +Provides details about a specific AWS EC2 Public IPv4 Pool. + +## Example Usage + +### Basic Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2PublicIpv4Pool.DataAwsEc2PublicIpv4Pool(this, "example", { + poolId: "ipv4pool-ec2-000df99cff0c1ec10", + }); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `poolId` - (Required) AWS resource IDs of a public IPv4 pool (as a string) for which this data source will fetch detailed information. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `description` - Description of the pool, if any. 
+* `networkBorderGroup` - Name of the location from which the address pool is advertised.
+* `poolAddressRanges` - List of Address Ranges in the Pool; each address range record contains:
+    * `addressCount` - Number of addresses in the range.
+    * `availableAddressCount` - Number of available addresses in the range.
+    * `firstAddress` - First address in the range.
+    * `lastAddress` - Last address in the range.
+* `tags` - Any tags for the address pool.
+* `totalAddressCount` - Total number of addresses in the pool.
+* `totalAvailableAddressCount` - Total number of available addresses in the pool.
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/d/ec2_public_ipv4_pools.html.markdown b/website/docs/cdktf/typescript/d/ec2_public_ipv4_pools.html.markdown
new file mode 100644
index 00000000000..c8b8ea41dd3
--- /dev/null
+++ b/website/docs/cdktf/typescript/d/ec2_public_ipv4_pools.html.markdown
@@ -0,0 +1,81 @@
+---
+subcategory: "EC2 (Elastic Compute Cloud)"
+layout: "aws"
+page_title: "AWS: aws_ec2_public_ipv4_pools"
+description: |-
+  Terraform data source for getting information about AWS EC2 Public IPv4 Pools.
+---
+
+# Data Source: aws_ec2_public_ipv4_pools
+
+Terraform data source for getting information about AWS EC2 Public IPv4 Pools.
+
+## Example Usage
+
+### Basic Usage
+
+```typescript
+import * as constructs from "constructs";
+import * as cdktf from "cdktf";
+/*Provider bindings are generated by running cdktf get.
+See https://cdk.tf/provider-generation for more details.*/
+import * as aws from "./.gen/providers/aws";
+class MyConvertedCode extends cdktf.TerraformStack {
+  constructor(scope: constructs.Construct, name: string) {
+    super(scope, name);
+    new aws.dataAwsEc2PublicIpv4Pools.DataAwsEc2PublicIpv4Pools(
+      this,
+      "example",
+      {}
+    );
+  }
+}
+
+```
+
+### Usage with Filter
+
+```typescript
+import * as constructs from "constructs";
+import * as cdktf from "cdktf";
+/*Provider bindings are generated by running cdktf get.
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2PublicIpv4Pools.DataAwsEc2PublicIpv4Pools( + this, + "example", + { + filter: [ + { + name: "tag-key", + values: ["ExampleTagKey"], + }, + ], + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are optional: + +* `filter` - (Optional) Custom filter block as described below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired pools. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribePublicIpv4Pools.html). +* `values` - (Required) Set of values that are accepted for the given field. Pool IDs will be selected if any one of the given values match. + +## Attributes Reference + +* `poolIds` - List of all the pool IDs found. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_serial_console_access.html.markdown b/website/docs/cdktf/typescript/d/ec2_serial_console_access.html.markdown new file mode 100644 index 00000000000..7e1b7114154 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_serial_console_access.html.markdown @@ -0,0 +1,47 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_serial_console_access" +description: |- + Checks whether serial console access is enabled for your AWS account in the current AWS region. +--- + +# Data Source: aws_ec2_serial_console_access + +Provides a way to check whether serial console access is enabled for your AWS account in the current AWS region. 
+ +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2SerialConsoleAccess.DataAwsEc2SerialConsoleAccess( + this, + "current", + {} + ); + } +} + +``` + +## Attributes Reference + +The following attributes are exported: + +* `enabled` - Whether or not serial console access is enabled. Returns as `true` or `false`. +* `id` - Region of serial console access. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_spot_price.html.markdown b/website/docs/cdktf/typescript/d/ec2_spot_price.html.markdown new file mode 100644 index 00000000000..88c18ff7117 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_spot_price.html.markdown @@ -0,0 +1,66 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_spot_price" +description: |- + Information about most recent Spot Price for a given EC2 instance. +--- + +# Data Source: aws_ec2_spot_price + +Information about most recent Spot Price for a given EC2 instance. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2SpotPrice.DataAwsEc2SpotPrice(this, "example", { + availabilityZone: "us-west-2a", + filter: [ + { + name: "product-description", + values: ["Linux/UNIX"], + }, + ], + instanceType: "t3.medium", + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `instanceType` - (Optional) Type of instance for which to query Spot Price information. +* `availabilityZone` - (Optional) Availability zone in which to query Spot price information. +* `filter` - (Optional) One or more configuration blocks containing name-values filters. See the [EC2 API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSpotPriceHistory.html) for supported filters. Detailed below. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `spotPrice` - Most recent Spot Price value for the given instance type and AZ. +* `spotPriceTimestamp` - The timestamp at which the Spot Price value was published. 
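+
+The price is returned as a string. As a plain-Terraform sketch building on the CDKTF example above (the output name is arbitrary), it can be surfaced for use elsewhere:
+
+```terraform
+output "latest_spot_price" {
+  # Most recent Spot Price, as a string, for the queried instance type and AZ
+  value = data.aws_ec2_spot_price.example.spot_price
+}
+```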
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway.html.markdown new file mode 100644 index 00000000000..b4dfa0c4ff7 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway.html.markdown @@ -0,0 +1,96 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway" +description: |- + Get information on an EC2 Transit Gateway +--- + +# Data Source: aws_ec2_transit_gateway + +Get information on an EC2 Transit Gateway. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGateway.DataAwsEc2TransitGateway(this, "example", { + filter: [ + { + name: "options.amazon-side-asn", + values: ["64512"], + }, + ], + }); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGateway.DataAwsEc2TransitGateway(this, "example", { + id: "tgw-12345678", + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway. + +### filter Argument Reference + +* `name` - (Required) Name of the field to filter by, as defined by the [underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGateways.html). +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `amazonSideAsn` - Private Autonomous System Number (ASN) for the Amazon side of a BGP session +* `arn` - EC2 Transit Gateway ARN +* `associationDefaultRouteTableId` - Identifier of the default association route table +* `autoAcceptSharedAttachments` - Whether resource attachment requests are automatically accepted +* `defaultRouteTableAssociation` - Whether resource attachments are automatically associated with the default association route table +* `defaultRouteTablePropagation` - Whether resource attachments automatically propagate routes to the default propagation route table +* `description` - Description of the EC2 Transit Gateway +* `dnsSupport` - Whether DNS support is enabled +* `multicastSupport` - Whether Multicast support is enabled +* `id` - EC2 Transit Gateway identifier +* `ownerId` - Identifier of the AWS account that owns the EC2 Transit Gateway +* `propagationDefaultRouteTableId` - Identifier of the default propagation route table +* `tags` - Key-value tags for the EC2 
Transit Gateway +* `transitGatewayCidrBlocks` - The list of associated CIDR blocks +* `vpnEcmpSupport` - Whether VPN Equal Cost Multipath Protocol support is enabled + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_attachment.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_attachment.html.markdown new file mode 100644 index 00000000000..c6a362490e8 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_attachment.html.markdown @@ -0,0 +1,56 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_attachment" +description: |- + Get information on an EC2 Transit Gateway's attachment to a resource +--- + +# Data Source: aws_ec2_transit_gateway_attachment + +Get information on an EC2 Transit Gateway's attachment to a resource. + +## Example Usage + +```terraform +data "aws_ec2_transit_gateway_attachment" "example" { + filter { + name = "transit-gateway-id" + values = [aws_ec2_transit_gateway.example.id] + } + + filter { + name = "resource-type" + values = ["peering"] + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transitGatewayAttachmentId` - (Optional) ID of the attachment. + +### filter Argument Reference + +* `name` - (Required) Name of the field to filter by, as defined by the [underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html). +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the attachment. 
+* `associationState` - The state of the association (see [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TransitGatewayAttachmentAssociation.html) for valid values). +* `associationTransitGatewayRouteTableId` - The ID of the route table for the transit gateway. +* `resourceId` - ID of the resource. +* `resourceOwnerId` - ID of the AWS account that owns the resource. +* `resourceType` - Resource type. +* `state` - Attachment state. +* `tags` - Key-value tags for the attachment. +* `transitGatewayId` - ID of the transit gateway. +* `transitGatewayOwnerId` - The ID of the AWS account that owns the transit gateway. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_attachments.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_attachments.html.markdown new file mode 100644 index 00000000000..c48843e19d7 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_attachments.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_attachments" +description: |- + Get information on EC2 Transit Gateway Attachments +--- + +# Data Source: aws_ec2_transit_gateway_attachments + +Get information on EC2 Transit Gateway Attachments. + +## Example Usage + +### By Filter + +```hcl +data "aws_ec2_transit_gateway_attachments" "filtered" { + filter { + name = "state" + values = ["pendingAcceptance"] + } + + filter { + name = "resource-type" + values = ["vpc"] + } +} + +data "aws_ec2_transit_gateway_attachment" "unit" { + count = length(data.aws_ec2_transit_gateway_attachments.filtered.ids) + id = data.aws_ec2_transit_gateway_attachments.filtered.ids[count.index] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. 
+ +### filter Argument Reference + +* `name` - (Required) Name of the field to filter by. For valid values, see the [official documentation][1]. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `ids` - A list of all attachment IDs matching the filter. You can retrieve more information about the attachment using the [aws_ec2_transit_gateway_attachment][2] data source, searching by identifier. + +[1]: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html +[2]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_transit_gateway_attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_connect.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_connect.html.markdown new file mode 100644 index 00000000000..cadcb3d6633 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_connect.html.markdown @@ -0,0 +1,93 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_connect" +description: |- + Get information on an EC2 Transit Gateway Connect +--- + +# Data Source: aws_ec2_transit_gateway_connect + +Get information on an EC2 Transit Gateway Connect. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayConnect.DataAwsEc2TransitGatewayConnect( + this, + "example", + { + filter: [ + { + name: "transport-transit-gateway-attachment-id", + values: ["tgw-attach-12345678"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayConnect.DataAwsEc2TransitGatewayConnect( + this, + "example", + { + transitGatewayConnectId: "tgw-attach-12345678", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transitGatewayConnectId` - (Optional) Identifier of the EC2 Transit Gateway Connect. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. 
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `protocol` - Tunnel protocol +* `tags` - Key-value tags for the EC2 Transit Gateway Connect +* `transitGatewayId` - EC2 Transit Gateway identifier +* `transportAttachmentId` - The underlying VPC attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_connect_peer.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_connect_peer.html.markdown new file mode 100644 index 00000000000..8ea68196119 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_connect_peer.html.markdown @@ -0,0 +1,96 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_connect_peer" +description: |- + Get information on an EC2 Transit Gateway Connect Peer +--- + +# Data Source: aws_ec2_transit_gateway_connect_peer + +Get information on an EC2 Transit Gateway Connect Peer. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayConnectPeer.DataAwsEc2TransitGatewayConnectPeer( + this, + "example", + { + filter: [ + { + name: "transit-gateway-attachment-id", + values: ["tgw-attach-12345678"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayConnectPeer.DataAwsEc2TransitGatewayConnectPeer( + this, + "example", + { + transitGatewayConnectPeerId: "tgw-connect-peer-12345678", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transitGatewayConnectPeerId` - (Optional) Identifier of the EC2 Transit Gateway Connect Peer. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Connect Peer ARN +* `bgpAsn` - BGP ASN assigned to the customer device +* `insideCidrBlocks` - CIDR blocks that will be used for addressing within the tunnel. +* `peerAddress` - IP address assigned to the customer device, which is used as the tunnel endpoint +* `tags` - Key-value tags for the EC2 Transit Gateway Connect Peer +* `transitGatewayAddress` - The IP address assigned to the Transit Gateway, which is used as the tunnel endpoint. 
+* `transitGatewayAttachmentId` - The Transit Gateway Connect attachment identifier + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_dx_gateway_attachment.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_dx_gateway_attachment.html.markdown new file mode 100644 index 00000000000..10802b4f9c4 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_dx_gateway_attachment.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_dx_gateway_attachment" +description: |- + Get information on an EC2 Transit Gateway's attachment to a Direct Connect Gateway +--- + +# Data Source: aws_ec2_transit_gateway_dx_gateway_attachment + +Get information on an EC2 Transit Gateway's attachment to a Direct Connect Gateway. + +## Example Usage + +### By Transit Gateway and Direct Connect Gateway Identifiers + +```terraform
data "aws_ec2_transit_gateway_dx_gateway_attachment" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id + dx_gateway_id = aws_dx_gateway.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayId` - (Optional) Identifier of the EC2 Transit Gateway. +* `dxGatewayId` - (Optional) Identifier of the Direct Connect Gateway. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired Transit Gateway Direct Connect Gateway Attachment. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. 
Valid values can be found in the [EC2 DescribeTransitGatewayAttachments API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html). +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tags` - Key-value tags for the EC2 Transit Gateway Attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_multicast_domain.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_multicast_domain.html.markdown new file mode 100644 index 00000000000..5300c9ae90c --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_multicast_domain.html.markdown @@ -0,0 +1,110 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_domain" +description: |- + Get information on an EC2 Transit Gateway Multicast Domain +--- + +# Data Source: aws_ec2_transit_gateway_multicast_domain + +Get information on an EC2 Transit Gateway Multicast Domain. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayMulticastDomain.DataAwsEc2TransitGatewayMulticastDomain( + this, + "example", + { + filter: [ + { + name: "transit-gateway-multicast-domain-id", + values: ["tgw-mcast-domain-12345678"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayMulticastDomain.DataAwsEc2TransitGatewayMulticastDomain( + this, + "example", + { + transitGatewayMulticastDomainId: "tgw-mcast-domain-12345678", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `transitGatewayMulticastDomainId` - (Optional) Identifier of the EC2 Transit Gateway Multicast Domain. + +### filter Argument Reference + +This block allows for complex filters. You can use one or more `filter` blocks. + +The following arguments are required: + +* `name` - (Required) Name of the field to filter by, as defined by [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayMulticastDomains.html). +* `values` - (Required) Set of values that are accepted for the given field. A multicast domain will be selected if any one of the given values matches. 
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Domain identifier. +* `arn` - EC2 Transit Gateway Multicast Domain ARN. +* `associations` - EC2 Transit Gateway Multicast Domain Associations + * `subnetId` - The ID of the subnet associated with the transit gateway multicast domain. + * `transitGatewayAttachmentId` - The ID of the transit gateway attachment. +* `autoAcceptSharedAssociations` - Whether to automatically accept cross-account subnet associations that are associated with the EC2 Transit Gateway Multicast Domain. +* `igmpv2Support` - Whether to enable Internet Group Management Protocol (IGMP) version 2 for the EC2 Transit Gateway Multicast Domain. +* `members` - EC2 Multicast Domain Group Members + * `groupIpAddress` - The IP address assigned to the transit gateway multicast group. + * `networkInterfaceId` - The group members' network interface ID. +* `ownerId` - Identifier of the AWS account that owns the EC2 Transit Gateway Multicast Domain. +* `sources` - EC2 Multicast Domain Group Sources + * `groupIpAddress` - The IP address assigned to the transit gateway multicast group. + * `networkInterfaceId` - The group members' network interface ID. +* `staticSourcesSupport` - Whether to enable support for statically configuring multicast group sources for the EC2 Transit Gateway Multicast Domain. +* `tags` - Key-value tags for the EC2 Transit Gateway Multicast Domain. +* `transitGatewayId` - EC2 Transit Gateway identifier. 
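+ +The nested `associations`, `members`, and `sources` lists can be consumed like any other computed attribute. A minimal HCL sketch (the data source name `example` is assumed): + +```terraform
data "aws_ec2_transit_gateway_multicast_domain" "example" { + transit_gateway_multicast_domain_id = "tgw-mcast-domain-12345678" +} + +output "association_subnet_ids" { + value = data.aws_ec2_transit_gateway_multicast_domain.example.associations[*].subnet_id +} +``` 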
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_peering_attachment.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_peering_attachment.html.markdown new file mode 100644 index 00000000000..48906585d84 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_peering_attachment.html.markdown @@ -0,0 +1,98 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_peering_attachment" +description: |- + Get information on an EC2 Transit Gateway Peering Attachment +--- + +# Data Source: aws_ec2_transit_gateway_peering_attachment + +Get information on an EC2 Transit Gateway Peering Attachment. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayPeeringAttachment.DataAwsEc2TransitGatewayPeeringAttachment( + this, + "example", + { + filter: [ + { + name: "transit-gateway-attachment-id", + values: ["tgw-attach-12345678"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayPeeringAttachment.DataAwsEc2TransitGatewayPeeringAttachment( + this, + "attachment", + { + id: "tgw-attach-12345678", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway Peering Attachment. +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the specific EC2 Transit Gateway Peering Attachment to retrieve. + +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayPeeringAttachments.html). +* `values` - (Required) Set of values that are accepted for the given field. + An EC2 Transit Gateway Peering Attachment will be selected if any one of the given values matches. 
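+ +When the attachment is uniquely tagged, the `tags` argument offers a simpler alternative to `filter` blocks. An HCL sketch (the `Name` tag value is assumed): + +```terraform
data "aws_ec2_transit_gateway_peering_attachment" "example" { + tags = { + Name = "example-peering-attachment" + } +} +``` 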
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `peerAccountId` - Identifier of the peer AWS account +* `peerRegion` - Identifier of the peer AWS region +* `peerTransitGatewayId` - Identifier of the peer EC2 Transit Gateway +* `transitGatewayId` - Identifier of the local EC2 Transit Gateway + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table.html.markdown new file mode 100644 index 00000000000..08b3f7255b2 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table.html.markdown @@ -0,0 +1,99 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table" +description: |- + Get information on an EC2 Transit Gateway Route Table +--- + +# Data Source: aws_ec2_transit_gateway_route_table + +Get information on an EC2 Transit Gateway Route Table. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayRouteTable.DataAwsEc2TransitGatewayRouteTable( + this, + "example", + { + filter: [ + { + name: "default-association-route-table", + values: ["true"], + }, + { + name: "transit-gateway-id", + values: ["tgw-12345678"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayRouteTable.DataAwsEc2TransitGatewayRouteTable( + this, + "example", + { + id: "tgw-rtb-12345678", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway Route Table. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Route Table ARN. 
+* `defaultAssociationRouteTable` - Boolean whether this is the default association route table for the EC2 Transit Gateway +* `defaultPropagationRouteTable` - Boolean whether this is the default propagation route table for the EC2 Transit Gateway +* `id` - EC2 Transit Gateway Route Table identifier +* `transitGatewayId` - EC2 Transit Gateway identifier +* `tags` - Key-value tags for the EC2 Transit Gateway Route Table + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table_associations.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table_associations.html.markdown new file mode 100644 index 00000000000..962adeaeb85 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table_associations.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_associations" +description: |- + Provides information for multiple EC2 Transit Gateway Route Table Associations +--- + +# Data Source: aws_ec2_transit_gateway_route_table_associations + +Provides information for multiple EC2 Transit Gateway Route Table Associations, such as their identifiers. + +## Example Usage + +### By Transit Gateway Identifier + +```terraform +data "aws_ec2_transit_gateway_route_table_associations" "example" { + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `transitGatewayRouteTableId` - (Required) Identifier of EC2 Transit Gateway Route Table. + +The following arguments are optional: + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetTransitGatewayRouteTableAssociations.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Transit Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of Transit Gateway Route Table Association identifiers. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table_propagations.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table_propagations.html.markdown new file mode 100644 index 00000000000..1cf37dfd563 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_table_propagations.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_propagations" +description: |- + Provides information for multiple EC2 Transit Gateway Route Table Propagations +--- + +# Data Source: aws_ec2_transit_gateway_route_table_propagations + +Provides information for multiple EC2 Transit Gateway Route Table Propagations, such as their identifiers. + +## Example Usage + +### By Transit Gateway Identifier + +```terraform
data "aws_ec2_transit_gateway_route_table_propagations" "example" { + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `transitGatewayRouteTableId` - (Required) Identifier of EC2 Transit Gateway Route Table. + +The following arguments are optional: + +* `filter` - (Optional) Custom filter block as described below. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetTransitGatewayRouteTablePropagations.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Transit Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of Transit Gateway Route Table Propagation identifiers. + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_tables.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_tables.html.markdown new file mode 100644 index 00000000000..77961c10613 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_route_tables.html.markdown @@ -0,0 +1,56 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_tables" +description: |- + Provides information for multiple EC2 Transit Gateway Route Tables +--- + +# Data Source: aws_ec2_transit_gateway_route_tables + +Provides information for multiple EC2 Transit Gateway Route Tables, such as their identifiers. + +## Example Usage + +The following shows outputting all Transit Gateway Route Table IDs. + +```terraform
data "aws_ec2_transit_gateway_route_tables" "example" {} + +output "example" { + value = data.aws_ec2_transit_gateway_route_tables.example.ids +} +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) Custom filter block as described below. + +* `tags` - (Optional) Mapping of tags, each pair of which must exactly match + a pair on the desired transit gateway route table. 
+ +More complex filters can be expressed using one or more `filter` sub-blocks, +which take the following arguments: + +* `name` - (Required) Name of the field to filter by, as defined by + [the underlying AWS API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayRouteTables.html). + +* `values` - (Required) Set of values that are accepted for the given field. + A Transit Gateway Route Table will be selected if any one of the given values matches. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS Region. +* `ids` - Set of Transit Gateway Route Table identifiers. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpc_attachment.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpc_attachment.html.markdown new file mode 100644 index 00000000000..22e76d5a21a --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpc_attachment.html.markdown @@ -0,0 +1,98 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachment" +description: |- + Get information on an EC2 Transit Gateway VPC Attachment +--- + +# Data Source: aws_ec2_transit_gateway_vpc_attachment + +Get information on an EC2 Transit Gateway VPC Attachment. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayVpcAttachment.DataAwsEc2TransitGatewayVpcAttachment( + this, + "example", + { + filter: [ + { + name: "vpc-id", + values: ["vpc-12345678"], + }, + ], + } + ); + } +} + +``` + +### By Identifier + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayVpcAttachment.DataAwsEc2TransitGatewayVpcAttachment( + this, + "example", + { + id: "tgw-attach-12345678", + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-values filters. Detailed below. +* `id` - (Optional) Identifier of the EC2 Transit Gateway VPC Attachment. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `applianceModeSupport` - Whether Appliance Mode support is enabled. +* `dnsSupport` - Whether DNS support is enabled. +* `id` - EC2 Transit Gateway VPC Attachment identifier +* `ipv6Support` - Whether IPv6 support is enabled. +* `subnetIds` - Identifiers of EC2 Subnets. +* `transitGatewayId` - EC2 Transit Gateway identifier +* `tags` - Key-value tags for the EC2 Transit Gateway VPC Attachment +* `vpcId` - Identifier of EC2 VPC. 
+* `vpcOwnerId` - Identifier of the AWS account that owns the EC2 VPC. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpc_attachments.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpc_attachments.html.markdown new file mode 100644 index 00000000000..1d2b67dd3a2 --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpc_attachments.html.markdown @@ -0,0 +1,91 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachments" +description: |- + Get information on EC2 Transit Gateway VPC Attachments +--- + +# Data Source: aws_ec2_transit_gateway_vpc_attachments + +Get information on EC2 Transit Gateway VPC Attachments. + +## Example Usage + +### By Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const dataAwsEc2TransitGatewayVpcAttachmentsFiltered = + new aws.dataAwsEc2TransitGatewayVpcAttachments.DataAwsEc2TransitGatewayVpcAttachments( + this, + "filtered", + { + filter: [ + { + name: "state", + values: ["pendingAcceptance"], + }, + ], + } + ); + /*In most cases loops should be handled in the programming language context and + not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input + you should consider using a for loop. If you are looping over something only known to Terraform, e.g. 
a result of a data source + you need to keep this like it is.*/ + const dataAwsEc2TransitGatewayVpcAttachmentUnitCount = + cdktf.TerraformCount.of( + cdktf.Fn.lengthOf(dataAwsEc2TransitGatewayVpcAttachmentsFiltered.ids) + ); + new aws.dataAwsEc2TransitGatewayVpcAttachment.DataAwsEc2TransitGatewayVpcAttachment( + this, + "unit", + { + id: cdktf.Token.asString( + cdktf.propertyAccess( + dataAwsEc2TransitGatewayVpcAttachmentsFiltered.ids, + [dataAwsEc2TransitGatewayVpcAttachmentUnitCount.index] + ) + ), + count: dataAwsEc2TransitGatewayVpcAttachmentUnitCount, + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `filter` - (Optional) One or more configuration blocks containing name-value filters. Detailed below. + +### filter Argument Reference + +* `name` - (Required) Name of the filter. Check the available values in the [official documentation][1]. +* `values` - (Required) List of one or more values for the filter. + +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `ids` - A list of all attachment IDs matching the filter. You can retrieve more information about the attachment using the [aws_ec2_transit_gateway_vpc_attachment][2] data source, searching by identifier. 
+ +[1]: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayVpcAttachments.html +[2]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ec2_transit_gateway_vpc_attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpn_attachment.html.markdown b/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpn_attachment.html.markdown new file mode 100644 index 00000000000..34ac73fbd3b --- /dev/null +++ b/website/docs/cdktf/typescript/d/ec2_transit_gateway_vpn_attachment.html.markdown @@ -0,0 +1,83 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpn_attachment" +description: |- + Get information on an EC2 Transit Gateway VPN Attachment +--- + +# Data Source: aws_ec2_transit_gateway_vpn_attachment + +Get information on an EC2 Transit Gateway VPN Attachment. + +-> EC2 Transit Gateway VPN Attachments are implicitly created by VPN Connections referencing an EC2 Transit Gateway so there is no managed resource. For ease, the [`awsVpnConnection` resource](/docs/providers/aws/r/vpn_connection.html) includes a `transitGatewayAttachmentId` attribute which can replace some usage of this data source. For tagging the attachment, see the [`awsEc2Tag` resource](/docs/providers/aws/r/ec2_tag.html). + +## Example Usage + +### By Transit Gateway and VPN Connection Identifiers + +```terraform +data "aws_ec2_transit_gateway_vpn_attachment" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpn_connection_id = aws_vpn_connection.example.id +} +``` + +### Filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.dataAwsEc2TransitGatewayVpnAttachment.DataAwsEc2TransitGatewayVpnAttachment( + this, + "test", + { + filter: [ + { + name: "resource-id", + values: ["some-resource"], + }, + ], + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayId` - (Optional) Identifier of the EC2 Transit Gateway. +* `vpnConnectionId` - (Optional) Identifier of the EC2 VPN Connection. +* `filter` - (Optional) Configuration block(s) for filtering. Detailed below. +* `tags` - (Optional) Map of tags, each pair of which must exactly match a pair on the desired Transit Gateway VPN Attachment. + +### filter Configuration Block + +The following arguments are supported by the `filter` configuration block: + +* `name` - (Required) Name of the filter field. Valid values can be found in the [EC2 DescribeTransitGatewayAttachments API Reference](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTransitGatewayAttachments.html). +* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches. 
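+The first example above ("By Transit Gateway and VPN Connection Identifiers") is shown in Terraform HCL. A rough CDKTF (TypeScript) sketch of the same lookup follows; the binding name matches the one used in the filter example, and the literal identifiers are placeholders standing in for references to real `aws_ec2_transit_gateway` and `aws_vpn_connection` resources:
+
+```typescript
+import * as constructs from "constructs";
+import * as cdktf from "cdktf";
+/*Provider bindings are generated by running cdktf get.
+See https://cdk.tf/provider-generation for more details.*/
+import * as aws from "./.gen/providers/aws";
+class MyConvertedCode extends cdktf.TerraformStack {
+  constructor(scope: constructs.Construct, name: string) {
+    super(scope, name);
+    new aws.dataAwsEc2TransitGatewayVpnAttachment.DataAwsEc2TransitGatewayVpnAttachment(
+      this,
+      "example",
+      {
+        // Placeholder identifiers; in practice these would reference
+        // aws_ec2_transit_gateway and aws_vpn_connection resources.
+        transitGatewayId: "tgw-12345678",
+        vpnConnectionId: "vpn-12345678",
+      }
+    );
+  }
+}
+```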
+ +## Attribute Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway VPN Attachment identifier +* `tags` - Key-value tags for the EC2 Transit Gateway VPN Attachment + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `read` - (Default `20M`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_availability_zone_group.html.markdown b/website/docs/cdktf/typescript/r/ec2_availability_zone_group.html.markdown new file mode 100644 index 00000000000..d85ad9d6400 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_availability_zone_group.html.markdown @@ -0,0 +1,56 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_availability_zone_group" +description: |- + Manages an EC2 Availability Zone Group. +--- + +# Resource: aws_ec2_availability_zone_group + +Manages an EC2 Availability Zone Group, such as updating its opt-in status. + +~> **NOTE:** This is an advanced Terraform resource. Terraform will automatically assume management of the EC2 Availability Zone Group without import and perform no actions on removal from configuration. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.ec2AvailabilityZoneGroup.Ec2AvailabilityZoneGroup(this, "example", { + groupName: "us-west-2-lax-1", + optInStatus: "opted-in", + }); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `groupName` - (Required) Name of the Availability Zone Group. 
+* `optInStatus` - (Required) Indicates whether to enable or disable the Availability Zone Group. Valid values: `opted-in` or `not-opted-in`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Name of the Availability Zone Group. + +## Import + +EC2 Availability Zone Groups can be imported using the group name, e.g., + +``` +$ terraform import aws_ec2_availability_zone_group.example us-west-2-lax-1 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown b/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown new file mode 100644 index 00000000000..cff3cff8264 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_capacity_reservation.html.markdown @@ -0,0 +1,70 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_capacity_reservation" +description: |- + Provides an EC2 Capacity Reservation. This allows you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. +--- + +# Resource: aws_ec2_capacity_reservation + +Provides an EC2 Capacity Reservation. This allows you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.ec2CapacityReservation.Ec2CapacityReservation(this, "default", { + availabilityZone: "eu-west-1a", + instanceCount: 1, + instancePlatform: "Linux/UNIX", + instanceType: "t2.micro", + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `availabilityZone` - (Required) The Availability Zone in which to create the Capacity Reservation. +* `ebsOptimized` - (Optional) Indicates whether the Capacity Reservation supports EBS-optimized instances. +* `endDate` - (Optional) The date and time at which the Capacity Reservation expires. When a Capacity Reservation expires, the reserved capacity is released and you can no longer launch instances into it. Valid values: [RFC3339 time string](https://tools.ietf.org/html/rfc3339#section-5.8) (`YYYY-MM-DDTHH:MM:SSZ`) +* `endDateType` - (Optional) Indicates the way in which the Capacity Reservation ends. Specify either `unlimited` or `limited`. +* `ephemeralStorage` - (Optional) Indicates whether the Capacity Reservation supports instances with temporary, block-level storage. +* `instanceCount` - (Required) The number of instances for which to reserve capacity. +* `instanceMatchCriteria` - (Optional) Indicates the type of instance launches that the Capacity Reservation accepts. Specify either `open` or `targeted`. +* `instancePlatform` - (Required) The type of operating system for which to reserve capacity. Valid options are `Linux/UNIX`, `Red Hat Enterprise Linux`, `SUSE Linux`, `Windows`, `Windows with SQL Server`, `Windows with SQL Server Enterprise`, `Windows with SQL Server Standard` or `Windows with SQL Server Web`. +* `instanceType` - (Required) The instance type for which to reserve capacity. 
+* `outpostArn` - (Optional) The Amazon Resource Name (ARN) of the Outpost on which to create the Capacity Reservation. +* `placementGroupArn` - (Optional) The Amazon Resource Name (ARN) of the cluster placement group in which to create the Capacity Reservation. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `tenancy` - (Optional) Indicates the tenancy of the Capacity Reservation. Specify either `default` or `dedicated`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The Capacity Reservation ID. +* `ownerId` - The ID of the AWS account that owns the Capacity Reservation. +* `arn` - The ARN of the Capacity Reservation. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) + +## Import + +Capacity Reservations can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_capacity_reservation.web cr-0123456789abcdef0 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_carrier_gateway.html.markdown b/website/docs/cdktf/typescript/r/ec2_carrier_gateway.html.markdown new file mode 100644 index 00000000000..6f25fe24500 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_carrier_gateway.html.markdown @@ -0,0 +1,50 @@ +--- +subcategory: "Wavelength" +layout: "aws" +page_title: "AWS: aws_ec2_carrier_gateway" +description: |- + Manages an EC2 Carrier Gateway. +--- + +# Resource: aws_ec2_carrier_gateway + +Manages an EC2 Carrier Gateway. 
See the AWS [documentation](https://docs.aws.amazon.com/vpc/latest/userguide/Carrier_Gateway.html) for more information. + +## Example Usage + +```terraform +resource "aws_ec2_carrier_gateway" "example" { + vpc_id = aws_vpc.example.id + + tags = { + Name = "example-carrier-gateway" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `vpcId` - (Required) The ID of the VPC to associate with the carrier gateway. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the carrier gateway. +* `id` - The ID of the carrier gateway. +* `ownerId` - The AWS account ID of the owner of the carrier gateway. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`awsEc2CarrierGateway` can be imported using the carrier gateway's ID, +e.g., + +``` +$ terraform import aws_ec2_carrier_gateway.example cgw-12345 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_client_vpn_authorization_rule.html.markdown b/website/docs/cdktf/typescript/r/ec2_client_vpn_authorization_rule.html.markdown new file mode 100644 index 00000000000..acc80488b16 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_client_vpn_authorization_rule.html.markdown @@ -0,0 +1,57 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_authorization_rule" +description: |- + Provides authorization rules for AWS Client VPN endpoints. 
+--- + +# Resource: aws_ec2_client_vpn_authorization_rule + +Provides authorization rules for AWS Client VPN endpoints. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). + +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_authorization_rule" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + target_network_cidr = aws_subnet.example.cidr_block + authorize_all_groups = true +} +``` + +## Argument Reference + +The following arguments are supported: + +* `clientVpnEndpointId` - (Required) The ID of the Client VPN endpoint. +* `targetNetworkCidr` - (Required) The IPv4 address range, in CIDR notation, of the network to which the authorization rule applies. +* `accessGroupId` - (Optional) The ID of the group to which the authorization rule grants access. One of `accessGroupId` or `authorizeAllGroups` must be set. +* `authorizeAllGroups` - (Optional) Indicates whether the authorization rule grants access to all clients. One of `accessGroupId` or `authorizeAllGroups` must be set. +* `description` - (Optional) A brief description of the authorization rule. + +## Attributes Reference + +No additional attributes are exported. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10M`) +- `delete` - (Default `10M`) + +## Import + +AWS Client VPN authorization rules can be imported using the endpoint ID and target network CIDR. If there is a specific group name, it is included as well. All values are separated by a `,`. 
+ +``` +$ terraform import aws_ec2_client_vpn_authorization_rule.example cvpn-endpoint-0ac3a1abbccddd666,10.1.0.0/24 +``` + +``` +$ terraform import aws_ec2_client_vpn_authorization_rule.example cvpn-endpoint-0ac3a1abbccddd666,10.1.0.0/24,team-a +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_client_vpn_endpoint.html.markdown b/website/docs/cdktf/typescript/r/ec2_client_vpn_endpoint.html.markdown new file mode 100644 index 00000000000..eff705144e8 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_client_vpn_endpoint.html.markdown @@ -0,0 +1,101 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_endpoint" +description: |- + Provides an AWS Client VPN endpoint for OpenVPN clients. +--- + +# Resource: aws_ec2_client_vpn_endpoint + +Provides an AWS Client VPN endpoint for OpenVPN clients. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). + +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_endpoint" "example" { + description = "terraform-clientvpn-example" + server_certificate_arn = aws_acm_certificate.cert.arn + client_cidr_block = "10.0.0.0/16" + + authentication_options { + type = "certificate-authentication" + root_certificate_chain_arn = aws_acm_certificate.root_cert.arn + } + + connection_log_options { + enabled = true + cloudwatch_log_group = aws_cloudwatch_log_group.lg.name + cloudwatch_log_stream = aws_cloudwatch_log_stream.ls.name + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `authenticationOptions` - (Required) Information about the authentication method to be used to authenticate clients. +* `clientCidrBlock` - (Required) The IPv4 address range, in CIDR notation, from which to assign client IP addresses. 
The address range cannot overlap with the local CIDR of the VPC in which the associated subnet is located, or the routes that you add manually. The address range cannot be changed after the Client VPN endpoint has been created. The CIDR block should be /22 or greater. +* `clientConnectOptions` - (Optional) The options for managing connection authorization for new client connections. +* `clientLoginBannerOptions` - (Optional) Options for enabling a customizable text banner that will be displayed on AWS provided clients when a VPN session is established. +* `connectionLogOptions` - (Required) Information about the client connection logging options. +* `description` - (Optional) A brief description of the Client VPN endpoint. +* `dnsServers` - (Optional) Information about the DNS servers to be used for DNS resolution. A Client VPN endpoint can have up to two DNS servers. If no DNS server is specified, the DNS address of the connecting device is used. +* `securityGroupIds` - (Optional) The IDs of one or more security groups to apply to the target network. You must also specify the ID of the VPC that contains the security groups. +* `selfServicePortal` - (Optional) Specify whether to enable the self-service portal for the Client VPN endpoint. Values can be `enabled` or `disabled`. Default value is `disabled`. +* `serverCertificateArn` - (Required) The ARN of the ACM server certificate. +* `sessionTimeoutHours` - (Optional) The maximum VPN session duration, in hours, after which end-users are required to re-authenticate. Default value is `24`. Valid values: `8`, `10`, `12`, `24`. +* `splitTunnel` - (Optional) Indicates whether split-tunnel is enabled on the VPN endpoint. Default value is `false`. +* `tags` - (Optional) A mapping of tags to assign to the resource. 
If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transportProtocol` - (Optional) The transport protocol to be used by the VPN session. Default value is `udp`. +* `vpcId` - (Optional) The ID of the VPC to associate with the Client VPN endpoint. If no security group IDs are specified in the request, the default security group for the VPC is applied. +* `vpnPort` - (Optional) The port number for the Client VPN endpoint. Valid values are `443` and `1194`. Default value is `443`. + +### `authenticationOptions` Argument Reference + +One of the following arguments must be supplied: + +* `activeDirectoryId` - (Optional) The ID of the Active Directory to be used for authentication if type is `directoryServiceAuthentication`. +* `rootCertificateChainArn` - (Optional) The ARN of the client certificate. The certificate must be signed by a certificate authority (CA) and it must be provisioned in AWS Certificate Manager (ACM). Only necessary when type is set to `certificateAuthentication`. +* `samlProviderArn` - (Optional) The ARN of the IAM SAML identity provider if type is `federatedAuthentication`. +* `selfServiceSamlProviderArn` - (Optional) The ARN of the IAM SAML identity provider for the self service portal if type is `federatedAuthentication`. +* `type` - (Required) The type of client authentication to be used. Specify `certificateAuthentication` to use certificate-based authentication, `directoryServiceAuthentication` to use Active Directory authentication, or `federatedAuthentication` to use Federated Authentication via SAML 2.0. + +### `clientConnectOptions` Argument reference + +* `enabled` - (Optional) Indicates whether client connect options are enabled. The default is `false` (not enabled). 
+* `lambdaFunctionArn` - (Optional) The Amazon Resource Name (ARN) of the Lambda function used for connection authorization. + +### `clientLoginBannerOptions` Argument reference + +* `bannerText` - (Optional) Customizable text that will be displayed in a banner on AWS provided clients when a VPN session is established. UTF-8 encoded characters only. Maximum of 1400 characters. +* `enabled` - (Optional) Enable or disable a customizable text banner that will be displayed on AWS provided clients when a VPN session is established. The default is `false` (not enabled). + +### `connectionLogOptions` Argument Reference + +One of the following arguments must be supplied: + +* `cloudwatchLogGroup` - (Optional) The name of the CloudWatch Logs log group. +* `cloudwatchLogStream` - (Optional) The name of the CloudWatch Logs log stream to which the connection data is published. +* `enabled` - (Required) Indicates whether connection logging is enabled. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the Client VPN endpoint. +* `dnsName` - The DNS name to be used by clients when establishing their VPN session. +* `id` - The ID of the Client VPN endpoint. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
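+The Example Usage above is shown in Terraform HCL. A CDKTF (TypeScript) sketch of the same endpoint follows; the ARNs and CloudWatch names are placeholders standing in for references to real resources, and the block shapes (list vs. single object) are assumptions based on the generated bindings:
+
+```typescript
+import * as constructs from "constructs";
+import * as cdktf from "cdktf";
+/*Provider bindings are generated by running cdktf get.
+See https://cdk.tf/provider-generation for more details.*/
+import * as aws from "./.gen/providers/aws";
+class MyConvertedCode extends cdktf.TerraformStack {
+  constructor(scope: constructs.Construct, name: string) {
+    super(scope, name);
+    new aws.ec2ClientVpnEndpoint.Ec2ClientVpnEndpoint(this, "example", {
+      description: "terraform-clientvpn-example",
+      // Placeholder ARN; in practice this would reference an aws_acm_certificate resource.
+      serverCertificateArn: "arn:aws:acm:eu-west-1:123456789012:certificate/example",
+      clientCidrBlock: "10.0.0.0/16",
+      authenticationOptions: [
+        {
+          type: "certificate-authentication",
+          // Placeholder ARN for the root certificate chain.
+          rootCertificateChainArn: "arn:aws:acm:eu-west-1:123456789012:certificate/example-root",
+        },
+      ],
+      connectionLogOptions: {
+        enabled: true,
+        cloudwatchLogGroup: "example-log-group", // placeholder log group name
+        cloudwatchLogStream: "example-log-stream", // placeholder log stream name
+      },
+    });
+  }
+}
+```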
+ +## Import + +AWS Client VPN endpoints can be imported using the `id` value found via `aws ec2 describe-client-vpn-endpoints`, e.g., + +``` +$ terraform import aws_ec2_client_vpn_endpoint.example cvpn-endpoint-0ac3a1abbccddd666 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_client_vpn_network_association.html.markdown b/website/docs/cdktf/typescript/r/ec2_client_vpn_network_association.html.markdown new file mode 100644 index 00000000000..3aa5700d035 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_client_vpn_network_association.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_network_association" +description: |- + Provides network associations for AWS Client VPN endpoints. +--- + +# Resource: aws_ec2_client_vpn_network_association + +Provides network associations for AWS Client VPN endpoints. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). + +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_network_association" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + subnet_id = aws_subnet.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `clientVpnEndpointId` - (Required) The ID of the Client VPN endpoint. +* `subnetId` - (Required) The ID of the subnet to associate with the Client VPN endpoint. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The unique ID of the target network association. +* `associationId` - The unique ID of the target network association. +* `vpcId` - The ID of the VPC in which the target subnet is located. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `30M`) +- `delete` - (Default `30M`) + +## Import + +AWS Client VPN network associations can be imported using the endpoint ID and the association ID. Values are separated by a `,`. + +``` +$ terraform import aws_ec2_client_vpn_network_association.example cvpn-endpoint-0ac3a1abbccddd666,vpn-assoc-0b8db902465d069ad +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_client_vpn_route.html.markdown b/website/docs/cdktf/typescript/r/ec2_client_vpn_route.html.markdown new file mode 100644 index 00000000000..41bf409c140 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_client_vpn_route.html.markdown @@ -0,0 +1,76 @@ +--- +subcategory: "VPN (Client)" +layout: "aws" +page_title: "AWS: aws_ec2_client_vpn_route" +description: |- + Provides additional routes for AWS Client VPN endpoints. +--- + +# Resource: aws_ec2_client_vpn_route + +Provides additional routes for AWS Client VPN endpoints. For more information on usage, please see the +[AWS Client VPN Administrator's Guide](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html). 
+ +## Example Usage + +```terraform +resource "aws_ec2_client_vpn_route" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + destination_cidr_block = "0.0.0.0/0" + target_vpc_subnet_id = aws_ec2_client_vpn_network_association.example.subnet_id +} + +resource "aws_ec2_client_vpn_network_association" "example" { + client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id + subnet_id = aws_subnet.example.id +} + +resource "aws_ec2_client_vpn_endpoint" "example" { + description = "Example Client VPN endpoint" + server_certificate_arn = aws_acm_certificate.example.arn + client_cidr_block = "10.0.0.0/16" + + authentication_options { + type = "certificate-authentication" + root_certificate_chain_arn = aws_acm_certificate.example.arn + } + + connection_log_options { + enabled = false + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `clientVpnEndpointId` - (Required) The ID of the Client VPN endpoint. +* `destinationCidrBlock` - (Required) The IPv4 address range, in CIDR notation, of the route destination. +* `description` - (Optional) A brief description of the route. +* `targetVpcSubnetId` - (Required) The ID of the Subnet to route the traffic through. It must already be attached to the Client VPN. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the Client VPN endpoint. +* `origin` - Indicates how the Client VPN route was added. Will be `addRoute` for routes created by this resource. +* `type` - The type of the route. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `4M`) +- `delete` - (Default `4M`) + +## Import + +AWS Client VPN routes can be imported using the endpoint ID, target subnet ID, and destination CIDR block. All values are separated by a `,`. 
+ +``` +$ terraform import aws_ec2_client_vpn_route.example cvpn-endpoint-1234567890abcdef,subnet-9876543210fedcba,10.1.0.0/24 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_fleet.html.markdown b/website/docs/cdktf/typescript/r/ec2_fleet.html.markdown new file mode 100644 index 00000000000..1a3f6c29677 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_fleet.html.markdown @@ -0,0 +1,234 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_fleet" +description: |- + Provides a resource to manage EC2 Fleets +--- + +# Resource: aws_ec2_fleet + +Provides a resource to manage EC2 Fleets. + +## Example Usage + +```terraform +resource "aws_ec2_fleet" "example" { + launch_template_config { + launch_template_specification { + launch_template_id = aws_launch_template.example.id + version = aws_launch_template.example.latest_version + } + } + + target_capacity_specification { + default_target_capacity_type = "spot" + total_target_capacity = 5 + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `context` - (Optional) Reserved. +* `excessCapacityTerminationPolicy` - (Optional) Whether running instances should be terminated if the total target capacity of the EC2 Fleet is decreased below the current size of the EC2. Valid values: `noTermination`, `termination`. Defaults to `termination`. Supported only for fleets of type `maintain`. +* `launchTemplateConfig` - (Required) Nested argument containing EC2 Launch Template configurations. Defined below. +* `onDemandOptions` - (Optional) Nested argument containing On-Demand configurations. Defined below. +* `replaceUnhealthyInstances` - (Optional) Whether EC2 Fleet should replace unhealthy instances. Defaults to `false`. Supported only for fleets of type `maintain`. +* `spotOptions` - (Optional) Nested argument containing Spot configurations. Defined below. +* `tags` - (Optional) Map of Fleet tags. 
To tag instances at launch, specify the tags in the Launch Template. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `targetCapacitySpecification` - (Required) Nested argument containing target capacity configurations. Defined below. +* `terminateInstances` - (Optional) Whether to terminate instances for an EC2 Fleet if it is deleted successfully. Defaults to `false`. +* `terminateInstancesWithExpiration` - (Optional) Whether running instances should be terminated when the EC2 Fleet expires. Defaults to `false`. +* `type` - (Optional) The type of request. Indicates whether the EC2 Fleet only requests the target capacity, or also attempts to maintain it. Valid values: `maintain`, `request`, `instant`. Defaults to `maintain`. +* `validFrom` - (Optional) The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately. +* `validUntil` - (Optional) The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new EC2 Fleet requests are placed or able to fulfill the request. If no value is specified, the request remains until you cancel it. + +### launch_template_config + +Describes a launch template and overrides. + +* `launchTemplateSpecification` - (Optional) Nested argument containing EC2 Launch Template to use. Defined below. +* `override` - (Optional) Nested argument(s) containing parameters to override the same parameters in the Launch Template. Defined below. + +#### launch_template_specification + +The launch template to use. You must specify either the launch template ID or launch template name in the request. + +* `launchTemplateId` - (Optional) The ID of the launch template. 
+* `launchTemplateName` - (Optional) The name of the launch template. +* `version` - (Required) The launch template version number, `$latest`, or `$default` + +#### override + +Any parameters that you specify override the same parameters in the launch template. For fleets of type `request` and `maintain`, a maximum of 300 items is allowed across all launch templates. + +Example: + +```terraform +resource "aws_ec2_fleet" "example" { + # ... other configuration ... + + launch_template_config { + # ... other configuration ... + + override { + instance_type = "m4.xlarge" + weighted_capacity = 1 + } + + override { + instance_type = "m4.2xlarge" + weighted_capacity = 2 + } + } +} +``` + +* `availabilityZone` - (Optional) Availability Zone in which to launch the instances. +* `instanceRequirements` - (Optional) Override the instance type in the Launch Template with instance types that satisfy the requirements. +* `instanceType` - (Optional) Instance type. +* `maxPrice` - (Optional) Maximum price per unit hour that you are willing to pay for a Spot Instance. +* `priority` - (Optional) Priority for the launch template override. If `onDemandOptions` `allocationStrategy` is set to `prioritized`, EC2 Fleet uses priority to determine which launch template override to use first in fulfilling On-Demand capacity. The highest priority is launched first. The lower the number, the higher the priority. If no number is set, the launch template override has the lowest priority. Valid values are whole numbers starting at 0. +* `subnetId` - (Optional) ID of the subnet in which to launch the instances. +* `weightedCapacity` - (Optional) Number of units provided by the specified instance type. + +##### instance_requirements + +The attributes for the instance types. For a list of currently supported values, please see ['InstanceRequirementsRequest'](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_InstanceRequirementsRequest.html). 
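+As a hedged illustration (CDKTF TypeScript; binding names, block shapes, and property names are assumptions based on the generated-provider convention, and the launch template ID is a placeholder), an override that selects instance types by requirements rather than by name might look like:
+
+```typescript
+import * as constructs from "constructs";
+import * as cdktf from "cdktf";
+/*Provider bindings are generated by running cdktf get.
+See https://cdk.tf/provider-generation for more details.*/
+import * as aws from "./.gen/providers/aws";
+class MyConvertedCode extends cdktf.TerraformStack {
+  constructor(scope: constructs.Construct, name: string) {
+    super(scope, name);
+    new aws.ec2Fleet.Ec2Fleet(this, "example", {
+      launchTemplateConfig: {
+        launchTemplateSpecification: {
+          launchTemplateId: "lt-12345678", // placeholder launch template ID
+          version: "$Latest",
+        },
+        override: [
+          {
+            // Replace the launch template's instance type with any type
+            // satisfying these requirements; both minimums must be set.
+            instanceRequirements: {
+              memoryMib: { min: 8192 },
+              vcpuCount: { min: 2 },
+            },
+          },
+        ],
+      },
+      targetCapacitySpecification: {
+        defaultTargetCapacityType: "spot",
+        totalTargetCapacity: 5,
+      },
+    });
+  }
+}
+```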
+
+This configuration block supports the following:
+
+~> **NOTE:** Both `memoryMibMin` and `vcpuCountMin` must be specified.
+
+* `acceleratorCount` - (Optional) Block describing the minimum and maximum number of accelerators (GPUs, FPGAs, or AWS Inferentia chips). Default is no minimum or maximum limits.
+    * `min` - (Optional) Minimum.
+    * `max` - (Optional) Maximum. Set to `0` to exclude instance types with accelerators.
+* `acceleratorManufacturers` - (Optional) List of accelerator manufacturer names. Default is any manufacturer.
+* `acceleratorNames` - (Optional) List of accelerator names. Default is any accelerator.
+* `acceleratorTotalMemoryMib` - (Optional) Block describing the minimum and maximum total memory of the accelerators. Default is no minimum or maximum.
+    * `min` - (Optional) The minimum amount of accelerator memory, in MiB. To specify no minimum limit, omit this parameter.
+    * `max` - (Optional) The maximum amount of accelerator memory, in MiB. To specify no maximum limit, omit this parameter.
+* `acceleratorTypes` - (Optional) The accelerator types that must be on the instance type. Default is any accelerator type.
+* `allowedInstanceTypes` - (Optional) The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes. You can use strings with one or more wildcards, represented by an asterisk (\*), for example `c5*`, `m5a.*`, `r*`, `*3*`. If you specify `c5*`, you are allowing the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are allowing all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is any instance type.
+
+    If you specify `allowedInstanceTypes`, you can't specify `excludedInstanceTypes`.
+
+* `bareMetal` - (Optional) Indicate whether bare metal instance types should be `included`, `excluded`, or `required`. Default is `excluded`.
+* `baselineEbsBandwidthMbps` - (Optional) Block describing the minimum and maximum baseline EBS bandwidth, in Mbps. Default is no minimum or maximum.
+    * `min` - (Optional) The minimum baseline bandwidth, in Mbps. To specify no minimum limit, omit this parameter.
+    * `max` - (Optional) The maximum baseline bandwidth, in Mbps. To specify no maximum limit, omit this parameter.
+* `burstablePerformance` - (Optional) Indicates whether burstable performance T instance types are `included`, `excluded`, or `required`. Default is `excluded`.
+* `cpuManufacturers` - (Optional) The CPU manufacturers to include. Default is any manufacturer.
+    ~> **NOTE:** Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
+* `excludedInstanceTypes` - (Optional) The instance types to exclude. You can use strings with one or more wildcards, represented by an asterisk (\*), for example `c5*`, `m5a.*`, `r*`, `*3*`. If you specify `c5*`, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are excluding all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is no excluded instance types.
+
+    If you specify `allowedInstanceTypes`, you can't specify `excludedInstanceTypes`.
+
+* `instanceGenerations` - (Optional) Indicates whether current or previous generation instance types are included. The current generation instance types are recommended for use. Valid values are `current` and `previous`. Default is `current` and `previous` generation instance types.
+* `localStorage` - (Optional) Indicate whether instance types with local storage volumes are `included`, `excluded`, or `required`. Default is `included`.
+* `localStorageTypes` - (Optional) List of local storage type names. Valid values are `hdd` and `ssd`. Default is any storage type.
+* `memoryGibPerVcpu` - (Optional) Block describing the minimum and maximum amount of memory (GiB) per vCPU. Default is no minimum or maximum.
+    * `min` - (Optional) The minimum amount of memory per vCPU, in GiB. To specify no minimum limit, omit this parameter.
+    * `max` - (Optional) The maximum amount of memory per vCPU, in GiB. To specify no maximum limit, omit this parameter.
+* `memoryMib` - (Required) Block describing the minimum and maximum amount of memory, in MiB. Default is no maximum limit.
+    * `min` - (Required) The minimum amount of memory, in MiB. To specify no minimum limit, specify `0`.
+    * `max` - (Optional) The maximum amount of memory, in MiB. To specify no maximum limit, omit this parameter.
+* `networkBandwidthGbps` - (Optional) The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps). Default is no minimum or maximum.
+    * `min` - (Optional) The minimum amount of network bandwidth, in Gbps. To specify no minimum limit, omit this parameter.
+    * `max` - (Optional) The maximum amount of network bandwidth, in Gbps. To specify no maximum limit, omit this parameter.
+* `networkInterfaceCount` - (Optional) Block describing the minimum and maximum number of network interfaces. Default is no minimum or maximum.
+    * `min` - (Optional) The minimum number of network interfaces. To specify no minimum limit, omit this parameter.
+    * `max` - (Optional) The maximum number of network interfaces. To specify no maximum limit, omit this parameter.
+* `onDemandMaxPricePercentageOverLowestPrice` - (Optional) The price protection threshold for On-Demand Instances.
This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which is interpreted as a percentage. To turn off price protection, specify a high value, such as `999999`. Default is `20`.
+
+    If you set `targetCapacityUnitType` to `vcpu` or `memoryMib`, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price.
+
+* `requireHibernateSupport` - (Optional) Indicate whether instance types must support On-Demand Instance Hibernation, either `true` or `false`. Default is `false`.
+* `spotMaxPricePercentageOverLowestPrice` - (Optional) The price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 selects instance types with your attributes, it will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which is interpreted as a percentage. To turn off price protection, specify a high value, such as `999999`. Default is `100`.
+
+    If you set `targetCapacityUnitType` to `vcpu` or `memoryMib`, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price.
+
+* `totalLocalStorageGb` - (Optional) Block describing the minimum and maximum total local storage (GB). Default is no minimum or maximum.
+    * `min` - (Optional) The minimum amount of total local storage, in GB. To specify no minimum limit, omit this parameter.
+    * `max` - (Optional) The maximum amount of total local storage, in GB. To specify no maximum limit, omit this parameter.
+* `vcpuCount` - (Required) Block describing the minimum and maximum number of vCPUs. Default is no maximum.
+    * `min` - (Required) The minimum number of vCPUs. To specify no minimum limit, specify `0`.
+    * `max` - (Optional) The maximum number of vCPUs. To specify no maximum limit, omit this parameter.
+
+### on_demand_options
+
+* `allocationStrategy` - (Optional) The order of the launch template overrides to use in fulfilling On-Demand capacity. Valid values: `lowestPrice`, `prioritized`. Default: `lowestPrice`.
+* `capacityReservationOptions` - (Optional) The strategy for using unused Capacity Reservations for fulfilling On-Demand capacity. Supported only for fleets of type `instant`.
+    * `usageStrategy` - (Optional) Indicates whether to use unused Capacity Reservations for fulfilling On-Demand capacity. Valid values: `useCapacityReservationsFirst`.
+* `maxTotalPrice` - (Optional) The maximum amount per hour for On-Demand Instances that you're willing to pay.
+* `minTargetCapacity` - (Optional) The minimum target capacity for On-Demand Instances in the fleet. If the minimum target capacity is not reached, the fleet launches no instances. Supported only for fleets of type `instant`.
+
+    If you specify `minTargetCapacity`, at least one of the following must be specified: `singleAvailabilityZone` or `singleInstanceType`.
+
+* `singleAvailabilityZone` - (Optional) Indicates that the fleet launches all On-Demand Instances into a single Availability Zone. Supported only for fleets of type `instant`.
+* `singleInstanceType` - (Optional) Indicates that the fleet uses a single instance type to launch all On-Demand Instances in the fleet. Supported only for fleets of type `instant`.
+
+### spot_options
+
+* `allocationStrategy` - (Optional) How to allocate the target capacity across the Spot pools. Valid values: `diversified`, `lowestPrice`, `capacityOptimized`, `capacityOptimizedPrioritized` and `priceCapacityOptimized`. Default: `lowestPrice`.
+* `instanceInterruptionBehavior` - (Optional) Behavior when a Spot Instance is interrupted. Valid values: `hibernate`, `stop`, `terminate`. Default: `terminate`. +* `instancePoolsToUseCount` - (Optional) Number of Spot pools across which to allocate your target Spot capacity. Valid only when Spot `allocationStrategy` is set to `lowestPrice`. Default: `1`. +* `maintenanceStrategies` - (Optional) Nested argument containing maintenance strategies for managing your Spot Instances that are at an elevated risk of being interrupted. Defined below. +* `maxTotalPrice` - (Optional) The maximum amount per hour for Spot Instances that you're willing to pay. +* `minTargetCapacity` - (Optional) The minimum target capacity for Spot Instances in the fleet. If the minimum target capacity is not reached, the fleet launches no instances. Supported only for fleets of type `instant`. +* `singleAvailabilityZone` - (Optional) Indicates that the fleet launches all Spot Instances into a single Availability Zone. Supported only for fleets of type `instant`. +* `singleInstanceType` - (Optional) Indicates that the fleet uses a single instance type to launch all Spot Instances in the fleet. Supported only for fleets of type `instant`. + +### maintenance_strategies + +* `capacityRebalance` - (Optional) Nested argument containing the capacity rebalance for your fleet request. Defined below. + +### capacity_rebalance + +* `replacementStrategy` - (Optional) The replacement strategy to use. Only available for fleets of `type` set to `maintain`. Valid values: `launch`. + +### target_capacity_specification + +* `defaultTargetCapacityType` - (Required) Default target capacity type. Valid values: `onDemand`, `spot`. +* `onDemandTargetCapacity` - (Optional) The number of On-Demand units to request. +* `spotTargetCapacity` - (Optional) The number of Spot units to request. +* `targetCapacityUnitType` - (Optional) The unit for the target capacity. 
+ If you specify `targetCapacityUnitType`, `instanceRequirements` must be specified. + +* `totalTargetCapacity` - (Required) The number of units to request, filled using `defaultTargetCapacityType`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Fleet identifier +* `arn` - The ARN of the fleet +* `fleetInstanceSet` - Information about the instances that were launched by the fleet. Available only when `type` is set to `instant`. + * `instanceIds` - The IDs of the instances. + * `instanceType` - The instance type. + * `lifecycle` - Indicates if the instance that was launched is a Spot Instance or On-Demand Instance. + * `platform` - The value is `windows` for Windows instances. Otherwise, the value is blank. +* `fleetState` - The state of the EC2 Fleet. +* `fulfilledCapacity` - The number of units fulfilled by this request compared to the set target capacity. +* `fulfilledOnDemandCapacity` - The number of units fulfilled by this request compared to the set target On-Demand capacity. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
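+
+As a sketch of how the argument blocks above fit together (the referenced `aws_launch_template.example` is assumed to exist elsewhere in the configuration; all values are illustrative), a fleet splitting capacity between On-Demand and Spot might look like:
+
+```terraform
+resource "aws_ec2_fleet" "example" {
+  type = "maintain"
+
+  launch_template_config {
+    launch_template_specification {
+      launch_template_id = aws_launch_template.example.id # assumed to exist
+      version            = aws_launch_template.example.latest_version
+    }
+  }
+
+  spot_options {
+    allocation_strategy = "diversified"
+  }
+
+  # 5 total units: 1 filled On-Demand, 4 filled from Spot (the default type).
+  target_capacity_specification {
+    default_target_capacity_type = "spot"
+    on_demand_target_capacity    = 1
+    spot_target_capacity         = 4
+    total_target_capacity        = 5
+  }
+}
+```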
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `10M`) +* `update` - (Default `10M`) +* `delete` - (Default `10M`) + +## Import + +`awsEc2Fleet` can be imported by using the Fleet identifier, e.g., + +``` +$ terraform import aws_ec2_fleet.example fleet-b9b55d27-c5fc-41ac-a6f3-48fcc91f080c +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_host.html.markdown b/website/docs/cdktf/typescript/r/ec2_host.html.markdown new file mode 100644 index 00000000000..2036435dbc9 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_host.html.markdown @@ -0,0 +1,64 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_host" +description: |- + Provides an EC2 Host resource. This allows Dedicated Hosts to be allocated, modified, and released. +--- + +# Resource: aws_ec2_host + +Provides an EC2 Host resource. This allows Dedicated Hosts to be allocated, modified, and released. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.ec2Host.Ec2Host(this, "test", { + autoPlacement: "on", + availabilityZone: "us-west-2a", + hostRecovery: "on", + instanceType: "c5.18xlarge", + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `autoPlacement` - (Optional) Indicates whether the host accepts any untargeted instance launches that match its instance type configuration, or if it only accepts Host tenancy instance launches that specify its unique host ID. Valid values: `on`, `off`. Default: `on`. 
+* `availabilityZone` - (Required) The Availability Zone in which to allocate the Dedicated Host. +* `hostRecovery` - (Optional) Indicates whether to enable or disable host recovery for the Dedicated Host. Valid values: `on`, `off`. Default: `off`. +* `instanceFamily` - (Optional) Specifies the instance family to be supported by the Dedicated Hosts. If you specify an instance family, the Dedicated Hosts support multiple instance types within that instance family. Exactly one of `instanceFamily` or `instanceType` must be specified. +* `instanceType` - (Optional) Specifies the instance type to be supported by the Dedicated Hosts. If you specify an instance type, the Dedicated Hosts support instances of the specified instance type only. Exactly one of `instanceFamily` or `instanceType` must be specified. +* `outpostArn` - (Optional) The Amazon Resource Name (ARN) of the AWS Outpost on which to allocate the Dedicated Host. +* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the allocated Dedicated Host. This is used to launch an instance onto a specific host. +* `arn` - The ARN of the Dedicated Host. +* `ownerId` - The ID of the AWS account that owns the Dedicated Host. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
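+
+For example, the exported `id` can be used to place an instance on the allocated Dedicated Host (a sketch; `host_id` and `tenancy` are arguments of the separate `aws_instance` resource, and the AMI ID is illustrative):
+
+```terraform
+resource "aws_ec2_host" "example" {
+  instance_type     = "c5.18xlarge"
+  availability_zone = "us-west-2a"
+}
+
+resource "aws_instance" "example" {
+  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
+  instance_type = "c5.18xlarge"
+  tenancy       = "host"
+  host_id       = aws_ec2_host.example.id # pins the instance to the Dedicated Host
+}
+```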
+ +## Import + +Hosts can be imported using the host `id`, e.g., + +``` +$ terraform import aws_ec2_host.example h-0385a99d0e4b20cbb +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_instance_state.html.markdown b/website/docs/cdktf/typescript/r/ec2_instance_state.html.markdown new file mode 100644 index 00000000000..bb48123e169 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_instance_state.html.markdown @@ -0,0 +1,95 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_state" +description: |- + Provides an EC2 instance state resource. This allows managing an instance power state. +--- + +# Resource: aws_ec2_instance_state + +Provides an EC2 instance state resource. This allows managing an instance power state. + +~> **NOTE on Instance State Management:** AWS does not currently have an EC2 API operation to determine an instance has finished processing user data. As a result, this resource can interfere with user data processing. For example, this resource may stop an instance while the user data script is in mid run. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const dataAwsAmiUbuntu = new aws.dataAwsAmi.DataAwsAmi(this, "ubuntu", { + filter: [ + { + name: "name", + values: ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"], + }, + { + name: "virtualization-type", + values: ["hvm"], + }, + ], + mostRecent: true, + owners: ["099720109477"], + }); + const awsInstanceTest = new aws.instance.Instance(this, "test", { + ami: cdktf.Token.asString(dataAwsAmiUbuntu.id), + instanceType: "t3.micro", + tags: { + Name: "HelloWorld", + }, + }); + const awsEc2InstanceStateTest = new aws.ec2InstanceState.Ec2InstanceState( + this, + "test_2", + { + instanceId: cdktf.Token.asString(awsInstanceTest.id), + state: "stopped", + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsEc2InstanceStateTest.overrideLogicalId("test"); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `instanceId` - (Required) ID of the instance. +* `state` - (Required) - State of the instance. Valid values are `stopped`, `running`. + +The following arguments are optional: + +* `force` - (Optional) Whether to request a forced stop when `state` is `stopped`. Otherwise (_i.e._, `state` is `running`), ignored. When an instance is forced to stop, it does not flush file system caches or file system metadata, and you must subsequently perform file system check and repair. Not recommended for Windows instances. Defaults to `false`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - ID of the instance (matches `instanceId`). 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `10M`) +* `update` - (Default `10M`) +* `delete` - (Default `1M`) + +## Import + +`awsEc2InstanceState` can be imported by using the `instanceId` attribute, e.g., + +``` +$ terraform import aws_ec2_instance_state.test i-02cae6557dfcf2f96 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_local_gateway_route.html.markdown b/website/docs/cdktf/typescript/r/ec2_local_gateway_route.html.markdown new file mode 100644 index 00000000000..28e1e07636b --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_local_gateway_route.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route" +description: |- + Manages an EC2 Local Gateway Route +--- + +# Resource: aws_ec2_local_gateway_route + +Manages an EC2 Local Gateway Route. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#routing). + +## Example Usage + +```terraform +resource "aws_ec2_local_gateway_route" "example" { + destination_cidr_block = "172.16.0.0/16" + local_gateway_route_table_id = data.aws_ec2_local_gateway_route_table.example.id + local_gateway_virtual_interface_group_id = data.aws_ec2_local_gateway_virtual_interface_group.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `destinationCidrBlock` - (Required) IPv4 CIDR range used for destination matches. Routing decisions are based on the most specific match. +* `localGatewayRouteTableId` - (Required) Identifier of EC2 Local Gateway Route Table. +* `localGatewayVirtualInterfaceGroupId` - (Required) Identifier of EC2 Local Gateway Virtual Interface Group. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Local Gateway Route Table identifier and destination CIDR block separated by underscores (`_`) + +## Import + +`awsEc2LocalGatewayRoute` can be imported by using the EC2 Local Gateway Route Table identifier and destination CIDR block separated by underscores (`_`), e.g., + +``` +$ terraform import aws_ec2_local_gateway_route.example lgw-rtb-12345678_172.16.0.0/16 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_local_gateway_route_table_vpc_association.html.markdown b/website/docs/cdktf/typescript/r/ec2_local_gateway_route_table_vpc_association.html.markdown new file mode 100644 index 00000000000..c9c869c999f --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_local_gateway_route_table_vpc_association.html.markdown @@ -0,0 +1,84 @@ +--- +subcategory: "Outposts (EC2)" +layout: "aws" +page_title: "AWS: aws_ec2_local_gateway_route_table_vpc_association" +description: |- + Manages an EC2 Local Gateway Route Table VPC Association +--- + +# Resource: aws_ec2_local_gateway_route_table_vpc_association + +Manages an EC2 Local Gateway Route Table VPC Association. More information can be found in the [Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-local-gateways.html#vpc-associations). + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const awsVpcExample = new aws.vpc.Vpc(this, "example", { + cidrBlock: "10.0.0.0/16", + }); + const dataAwsEc2LocalGatewayRouteTableExample = + new aws.dataAwsEc2LocalGatewayRouteTable.DataAwsEc2LocalGatewayRouteTable( + this, + "example_1", + { + outpostArn: + "arn:aws:outposts:us-west-2:123456789012:outpost/op-1234567890abcdef", + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + dataAwsEc2LocalGatewayRouteTableExample.overrideLogicalId("example"); + const awsEc2LocalGatewayRouteTableVpcAssociationExample = + new aws.ec2LocalGatewayRouteTableVpcAssociation.Ec2LocalGatewayRouteTableVpcAssociation( + this, + "example_2", + { + localGatewayRouteTableId: cdktf.Token.asString( + dataAwsEc2LocalGatewayRouteTableExample.id + ), + vpcId: cdktf.Token.asString(awsVpcExample.id), + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsEc2LocalGatewayRouteTableVpcAssociationExample.overrideLogicalId( + "example" + ); + } +} + +``` + +## Argument Reference + +The following arguments are required: + +* `localGatewayRouteTableId` - (Required) Identifier of EC2 Local Gateway Route Table. +* `vpcId` - (Required) Identifier of EC2 VPC. + +The following arguments are optional: + +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Identifier of EC2 Local Gateway Route Table VPC Association. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`awsEc2LocalGatewayRouteTableVpcAssociation` can be imported by using the Local Gateway Route Table VPC Association identifier, e.g., + +``` +$ terraform import aws_ec2_local_gateway_route_table_vpc_association.example lgw-vpc-assoc-1234567890abcdef +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_managed_prefix_list.html.markdown b/website/docs/cdktf/typescript/r/ec2_managed_prefix_list.html.markdown new file mode 100644 index 00000000000..af7161ae114 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_managed_prefix_list.html.markdown @@ -0,0 +1,84 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_managed_prefix_list" +description: |- + Provides a managed prefix list resource. +--- + +# Resource: aws_ec2_managed_prefix_list + +Provides a managed prefix list resource. + +~> **NOTE on Managed Prefix Lists and Managed Prefix List Entries:** Terraform +currently provides both a standalone [Managed Prefix List Entry resource](ec2_managed_prefix_list_entry.html) (a single entry), +and a Managed Prefix List resource with entries defined in-line. At this time you +cannot use a Managed Prefix List with in-line rules in conjunction with any Managed +Prefix List Entry resources. Doing so will cause a conflict of entries and will overwrite entries. + +~> **NOTE on `maxEntries`:** When you reference a Prefix List in a resource, +the maximum number of entries for the prefix lists counts as the same number of rules +or entries for the resource. 
For example, if you create a prefix list with a maximum
+of 20 entries and you reference that prefix list in a security group rule, this counts
+as 20 rules for the security group.
+
+## Example Usage
+
+Basic usage
+
+```terraform
+resource "aws_ec2_managed_prefix_list" "example" {
+  name           = "All VPC CIDR-s"
+  address_family = "IPv4"
+  max_entries    = 5
+
+  entry {
+    cidr        = aws_vpc.example.cidr_block
+    description = "Primary"
+  }
+
+  entry {
+    cidr        = aws_vpc_ipv4_cidr_block_association.example.cidr_block
+    description = "Secondary"
+  }
+
+  tags = {
+    Env = "live"
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `addressFamily` - (Required, Forces new resource) Address family (`IPv4` or `IPv6`) of this prefix list.
+* `entry` - (Optional) Configuration block for prefix list entry. Detailed below. Different entries may have overlapping CIDR blocks, but a particular CIDR should not be duplicated.
+* `maxEntries` - (Required) Maximum number of entries that this prefix list can contain.
+* `name` - (Required) Name of this resource. The name must not start with `com.amazonaws`.
+* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+
+### `entry`
+
+* `cidr` - (Required) CIDR block of this entry.
+* `description` - (Optional) Description of this entry. Due to API limitations, updating only the description of an existing entry requires temporarily removing and re-adding the entry.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `arn` - ARN of the prefix list.
+* `id` - ID of the prefix list.
+* `ownerId` - ID of the AWS account that owns this prefix list.
+* `tagsAll` - Map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
+* `version` - Latest version of this prefix list.
+
+## Import
+
+Prefix Lists can be imported using the `id`, e.g.,
+
+```
+$ terraform import aws_ec2_managed_prefix_list.default pl-0570a1d2d725c16be
+```
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/r/ec2_managed_prefix_list_entry.html.markdown b/website/docs/cdktf/typescript/r/ec2_managed_prefix_list_entry.html.markdown
new file mode 100644
index 00000000000..8dc5c9c2981
--- /dev/null
+++ b/website/docs/cdktf/typescript/r/ec2_managed_prefix_list_entry.html.markdown
@@ -0,0 +1,69 @@
+---
+subcategory: "VPC (Virtual Private Cloud)"
+layout: "aws"
+page_title: "AWS: aws_ec2_managed_prefix_list_entry"
+description: |-
+  Provides a managed prefix list entry resource.
+---
+
+# Resource: aws_ec2_managed_prefix_list_entry
+
+Provides a managed prefix list entry resource.
+
+~> **NOTE on Managed Prefix Lists and Managed Prefix List Entries:** Terraform
+currently provides both a standalone Managed Prefix List Entry resource (a single entry),
+and a [Managed Prefix List resource](ec2_managed_prefix_list.html) with entries defined
+in-line. At this time you cannot use a Managed Prefix List with in-line rules in
+conjunction with any Managed Prefix List Entry resources. Doing so will cause a conflict
+of entries and will overwrite entries.
+
+~> **NOTE on Managed Prefix Lists with many entries:** To improve execution times on larger
+updates, if you plan to create a prefix list with more than 100 entries, it is **recommended**
+that you use the inline `entry` block as part of the [Managed Prefix List resource](ec2_managed_prefix_list.html)
+resource instead.
+
+## Example Usage
+
+Basic usage
+
+```terraform
+resource "aws_ec2_managed_prefix_list" "example" {
+  name           = "All VPC CIDR-s"
+  address_family = "IPv4"
+  max_entries    = 5
+
+  tags = {
+    Env = "live"
+  }
+}
+
+resource "aws_ec2_managed_prefix_list_entry" "entry_1" {
+  cidr           = aws_vpc.example.cidr_block
+  description    = "Primary"
+  prefix_list_id = aws_ec2_managed_prefix_list.example.id
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `cidr` - (Required) CIDR block of this entry.
+* `description` - (Optional) Description of this entry. Due to API limitations, updating only the description of an entry requires recreating the entry.
+* `prefixListId` - (Required) ID of the prefix list to which this entry belongs.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - ID of the managed prefix list entry.
+
+## Import
+
+Prefix List Entries can be imported using the `prefixListId` and `cidr` separated by a `,`, e.g.,
+
+```
+$ terraform import aws_ec2_managed_prefix_list_entry.default pl-0570a1d2d725c16be,10.0.3.0/24
+```
+
+ 
\ No newline at end of file
diff --git a/website/docs/cdktf/typescript/r/ec2_network_insights_analysis.html.markdown b/website/docs/cdktf/typescript/r/ec2_network_insights_analysis.html.markdown
new file mode 100644
index 00000000000..151003b3057
--- /dev/null
+++ b/website/docs/cdktf/typescript/r/ec2_network_insights_analysis.html.markdown
@@ -0,0 +1,69 @@
+---
+subcategory: "VPC (Virtual Private Cloud)"
+layout: "aws"
+page_title: "AWS: aws_ec2_network_insights_analysis"
+description: |-
+  Provides a Network Insights Analysis resource.
+---
+
+# Resource: aws_ec2_network_insights_analysis
+
+Provides a Network Insights Analysis resource. Part of the "Reachability Analyzer" service in the AWS VPC console.
+ +## Example Usage + +```terraform +resource "aws_ec2_network_insights_path" "path" { + source = aws_network_interface.source.id + destination = aws_network_interface.destination.id + protocol = "tcp" +} + +resource "aws_ec2_network_insights_analysis" "analysis" { + network_insights_path_id = aws_ec2_network_insights_path.path.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `networkInsightsPathId` - (Required) ID of the Network Insights Path to run an analysis on. + +The following arguments are optional: + +* `filterInArns` - (Optional) A list of ARNs for resources the path must traverse. +* `waitForCompletion` - (Optional) If enabled, the resource will wait for the Network Insights Analysis status to change to `succeeded` or `failed`. Setting this to `false` will skip the process. Default: `true`. +* `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `alternatePathHints` - Potential intermediate components of a feasible path. Described below. +* `arn` - ARN of the Network Insights Analysis. +* `explanations` - Explanation codes for an unreachable path. See the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Explanation.html) for details. +* `forwardPathComponents` - The components in the path from source to destination. See the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_PathComponent.html) for details. +* `id` - ID of the Network Insights Analysis. +* `pathFound` - Set to `true` if the destination was reachable. +* `returnPathComponents` - The components in the path from destination to source. 
See the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_PathComponent.html) for details. +* `startDate` - The date/time the analysis was started. +* `status` - The status of the analysis. `succeeded` means the analysis completed; it does not mean a path was found (see `pathFound` for that). +* `statusMessage` - A message to provide more context when the `status` is `failed`. +* `tagsAll` - Map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). +* `warningMessage` - The warning message. + +The `alternatePathHints` object supports the following: + +* `componentArn` - The Amazon Resource Name (ARN) of the component. +* `componentId` - The ID of the component. + +## Import + +Network Insights Analyses can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_network_insights_analysis.test nia-0462085c957f11a55 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown b/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown new file mode 100644 index 00000000000..db27bf3ed0b --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_network_insights_path.html.markdown @@ -0,0 +1,54 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_network_insights_path" +description: |- + Provides a Network Insights Path resource. +--- + +# Resource: aws_ec2_network_insights_path + +Provides a Network Insights Path resource. Part of the "Reachability Analyzer" service in the AWS VPC console. 
+ +## Example Usage + +```terraform +resource "aws_ec2_network_insights_path" "test" { + source = aws_network_interface.source.id + destination = aws_network_interface.destination.id + protocol = "tcp" +} +``` + +## Argument Reference + +The following arguments are required: + +* `source` - (Required) ID of the resource which is the source of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. +* `destination` - (Required) ID of the resource which is the destination of the path. Can be an Instance, Internet Gateway, Network Interface, Transit Gateway, VPC Endpoint, VPC Peering Connection or VPN Gateway. +* `protocol` - (Required) Protocol to use for analysis. Valid options are `tcp` or `udp`. + +The following arguments are optional: + +* `sourceIp` - (Optional) IP address of the source resource. +* `destinationIp` - (Optional) IP address of the destination resource. +* `destinationPort` - (Optional) Destination port to analyze access to. +* `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the Network Insights Path. +* `id` - ID of the Network Insights Path. +* `tagsAll` - Map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
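+The optional `destinationIp` and `destinationPort` arguments narrow the analysis to a specific flow. A minimal sketch reusing the network interfaces from the example above (the port value is illustrative):
+
+```terraform
+resource "aws_ec2_network_insights_path" "https" {
+  source           = aws_network_interface.source.id
+  destination      = aws_network_interface.destination.id
+  destination_port = 443
+  protocol         = "tcp"
+}
+```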
+ +## Import + +Network Insights Paths can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_network_insights_path.test nip-00edfba169923aefd +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_serial_console_access.html.markdown b/website/docs/cdktf/typescript/r/ec2_serial_console_access.html.markdown new file mode 100644 index 00000000000..e4778997a8d --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_serial_console_access.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_serial_console_access" +description: |- + Manages whether serial console access is enabled for your AWS account in the current AWS region. +--- + +# Resource: aws_ec2_serial_console_access + +Provides a resource to manage whether serial console access is enabled for your AWS account in the current AWS region. + +~> **NOTE:** Removing this Terraform resource disables serial console access. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.ec2SerialConsoleAccess.Ec2SerialConsoleAccess(this, "example", { + enabled: true, + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `enabled` - (Optional) Whether or not serial console access is enabled. Valid values are `true` or `false`. Defaults to `true`. + +## Attributes Reference + +No additional attributes are exported. 
+ +## Import + +Serial console access state can be imported, e.g., + +``` +$ terraform import aws_ec2_serial_console_access.example default +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_subnet_cidr_reservation.html.markdown b/website/docs/cdktf/typescript/r/ec2_subnet_cidr_reservation.html.markdown new file mode 100644 index 00000000000..2047c1e5cf4 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_subnet_cidr_reservation.html.markdown @@ -0,0 +1,47 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_subnet_cidr_reservation" +description: |- + Provides a subnet CIDR reservation resource. +--- + +# Resource: aws_ec2_subnet_cidr_reservation + +Provides a subnet CIDR reservation resource. + +## Example Usage + +```terraform +resource "aws_ec2_subnet_cidr_reservation" "example" { + cidr_block = "10.0.0.16/28" + reservation_type = "prefix" + subnet_id = aws_subnet.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `cidrBlock` - (Required) The CIDR block for the reservation. +* `reservationType` - (Required) The type of reservation to create. Valid values: `explicit`, `prefix` +* `subnetId` - (Required) The ID of the subnet to create the reservation for. +* `description` - (Optional) A brief description of the reservation. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - ID of the CIDR reservation. +* `ownerId` - ID of the AWS account that owns this CIDR reservation. 
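+A reservation with `reservation_type = "explicit"` keeps AWS from automatically assigning the reserved addresses, leaving them for manual assignment. A sketch (the CIDR value and description are illustrative):
+
+```terraform
+resource "aws_ec2_subnet_cidr_reservation" "explicit_example" {
+  cidr_block       = "10.0.0.32/28"
+  reservation_type = "explicit"
+  description      = "Reserved for manually assigned addresses"
+  subnet_id        = aws_subnet.example.id
+}
+```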
+ +## Import + +Existing CIDR reservations can be imported using `subnetId:reservationId`, e.g., + +``` +$ terraform import aws_ec2_subnet_cidr_reservation.example subnet-01llsxvsxabqiymcz:scr-4mnvz6wb7otksjcs9 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_tag.html.markdown b/website/docs/cdktf/typescript/r/ec2_tag.html.markdown new file mode 100644 index 00000000000..b928731433f --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_tag.html.markdown @@ -0,0 +1,88 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_tag" +description: |- + Manages an individual EC2 resource tag +--- + +# Resource: aws_ec2_tag + +Manages an individual EC2 resource tag. This resource should only be used in cases where EC2 resources are created outside Terraform (e.g., AMIs), shared via Resource Access Manager (RAM), or implicitly created by other means (e.g., Transit Gateway VPN Attachments). + +~> **NOTE:** This tagging resource should not be combined with the Terraform resource for managing the parent resource. For example, using `awsVpc` and `awsEc2Tag` to manage tags of the same VPC will cause a perpetual difference where the `awsVpc` resource will try to remove the tag being added by the `awsEc2Tag` resource. + +~> **NOTE:** This tagging resource does not use the [provider `ignoreTags` configuration](/docs/providers/aws/index.html#ignore_tags). + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const awsCustomerGatewayExample = new aws.customerGateway.CustomerGateway( + this, + "example", + { + bgpAsn: cdktf.Token.asString(65000), + ipAddress: "172.0.0.1", + type: "ipsec.1", + } + ); + const awsEc2TransitGatewayExample = + new aws.ec2TransitGateway.Ec2TransitGateway(this, "example_1", {}); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsEc2TransitGatewayExample.overrideLogicalId("example"); + const awsVpnConnectionExample = new aws.vpnConnection.VpnConnection( + this, + "example_2", + { + customerGatewayId: cdktf.Token.asString(awsCustomerGatewayExample.id), + transitGatewayId: cdktf.Token.asString(awsEc2TransitGatewayExample.id), + type: cdktf.Token.asString(awsCustomerGatewayExample.type), + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsVpnConnectionExample.overrideLogicalId("example"); + const awsEc2TagExample = new aws.ec2Tag.Ec2Tag(this, "example_3", { + key: "Name", + resourceId: cdktf.Token.asString( + awsVpnConnectionExample.transitGatewayAttachmentId + ), + value: "Hello World", + }); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + awsEc2TagExample.overrideLogicalId("example"); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `resourceId` - (Required) The ID of the EC2 resource to manage the tag for. +* `key` - (Required) The tag name. +* `value` - (Required) The value of the tag. 
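+Since each `aws_ec2_tag` resource manages a single key, multiple tags on the same resource can be managed with `for_each`. A sketch reusing the VPN connection from the example above (the map contents are illustrative):
+
+```terraform
+resource "aws_ec2_tag" "example_tags" {
+  # One aws_ec2_tag instance per key/value pair in the map
+  for_each = {
+    Name = "Hello World"
+    Env  = "live"
+  }
+
+  resource_id = aws_vpn_connection.example.transit_gateway_attachment_id
+  key         = each.key
+  value       = each.value
+}
+```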
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 resource identifier and key, separated by a comma (`,`). + +## Import + +`awsEc2Tag` can be imported by using the EC2 resource identifier and key, separated by a comma (`,`), e.g., + +``` +$ terraform import aws_ec2_tag.example tgw-attach-1234567890abcdef,Name +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_traffic_mirror_filter.html.markdown b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_filter.html.markdown new file mode 100644 index 00000000000..8ede9b4237a --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_filter.html.markdown @@ -0,0 +1,60 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_filter" +description: |- + Provides a Traffic mirror filter +--- + +# Resource: aws_ec2_traffic_mirror_filter + +Provides a Traffic mirror filter. +Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create a basic traffic mirror filter + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.ec2TrafficMirrorFilter.Ec2TrafficMirrorFilter(this, "foo", { + description: "traffic mirror filter - terraform example", + networkServices: ["amazon-dns"], + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional, Forces new resource) A description of the filter. 
+* `networkServices` - (Optional) List of Amazon network services that should be mirrored. Valid values: `amazon-dns`. +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the traffic mirror filter. +* `id` - The ID of the traffic mirror filter. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +Traffic mirror filters can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_traffic_mirror_filter.foo tmf-0fbb93ddf38198f64 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_traffic_mirror_filter_rule.html.markdown b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_filter_rule.html.markdown new file mode 100644 index 00000000000..5a28fc21392 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_filter_rule.html.markdown @@ -0,0 +1,111 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_filter_rule" +description: |- + Provides a Traffic mirror filter rule +--- + +# Resource: aws_ec2_traffic_mirror_filter_rule + +Provides a Traffic mirror filter rule. 
+Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create basic traffic mirror filter rules + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. +See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const awsEc2TrafficMirrorFilterFilter = + new aws.ec2TrafficMirrorFilter.Ec2TrafficMirrorFilter(this, "filter", { + description: "traffic mirror filter - terraform example", + networkServices: ["amazon-dns"], + }); + new aws.ec2TrafficMirrorFilterRule.Ec2TrafficMirrorFilterRule( + this, + "rulein", + { + description: "test rule", + destinationCidrBlock: "10.0.0.0/8", + destinationPortRange: { + fromPort: 22, + toPort: 53, + }, + protocol: 6, + ruleAction: "accept", + ruleNumber: 1, + sourceCidrBlock: "10.0.0.0/8", + sourcePortRange: { + fromPort: 0, + toPort: 10, + }, + trafficDirection: "ingress", + trafficMirrorFilterId: cdktf.Token.asString( + awsEc2TrafficMirrorFilterFilter.id + ), + } + ); + new aws.ec2TrafficMirrorFilterRule.Ec2TrafficMirrorFilterRule( + this, + "ruleout", + { + description: "test rule", + destinationCidrBlock: "10.0.0.0/8", + ruleAction: "accept", + ruleNumber: 1, + sourceCidrBlock: "10.0.0.0/8", + trafficDirection: "egress", + trafficMirrorFilterId: cdktf.Token.asString( + awsEc2TrafficMirrorFilterFilter.id + ), + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional) Description of the traffic mirror filter rule. 
+* `trafficMirrorFilterId` - (Required) ID of the traffic mirror filter to which this rule should be added. +* `destinationCidrBlock` - (Required) Destination CIDR block to assign to the Traffic Mirror rule. +* `destinationPortRange` - (Optional) Destination port range. Supported only when the protocol is set to TCP(6) or UDP(17). See Traffic mirror port range documented below. +* `protocol` - (Optional) Protocol number, for example 17 (UDP), to assign to the Traffic Mirror rule. For information about the protocol value, see [Protocol Numbers](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml) on the Internet Assigned Numbers Authority (IANA) website. +* `ruleAction` - (Required) Action to take on the filtered traffic. Valid values are `accept` and `reject`. +* `ruleNumber` - (Required) Number of the Traffic Mirror rule. This number must be unique for each Traffic Mirror rule in a given direction. The rules are processed in ascending order by rule number. +* `sourceCidrBlock` - (Required) Source CIDR block to assign to the Traffic Mirror rule. +* `sourcePortRange` - (Optional) Source port range. Supported only when the protocol is set to TCP(6) or UDP(17). See Traffic mirror port range documented below. +* `trafficDirection` - (Required) Direction of traffic to be captured. Valid values are `ingress` and `egress`. + +The Traffic mirror port range object supports the following attributes: + +* `fromPort` - (Optional) Starting port of the range. +* `toPort` - (Optional) Ending port of the range. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the traffic mirror filter rule. +* `id` - ID of the traffic mirror filter rule. 
+ +## Import + +Traffic mirror rules can be imported using the `trafficMirrorFilterId` and `id` separated by `:`, e.g., + +``` +$ terraform import aws_ec2_traffic_mirror_filter_rule.rule tmf-0fbb93ddf38198f64:tmfr-05a458f06445d0aee +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_traffic_mirror_session.html.markdown b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_session.html.markdown new file mode 100644 index 00000000000..deed6b09f89 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_session.html.markdown @@ -0,0 +1,67 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_session" +description: |- + Provides a Traffic mirror session +--- + +# Resource: aws_ec2_traffic_mirror_session + +Provides a Traffic mirror session. +Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create a basic traffic mirror session + +```terraform +resource "aws_ec2_traffic_mirror_filter" "filter" { + description = "traffic mirror filter - terraform example" + network_services = ["amazon-dns"] +} + +resource "aws_ec2_traffic_mirror_target" "target" { + network_load_balancer_arn = aws_lb.lb.arn +} + +resource "aws_ec2_traffic_mirror_session" "session" { + description = "traffic mirror session - terraform example" + network_interface_id = aws_instance.test.primary_network_interface_id + session_number = 1 + traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.filter.id + traffic_mirror_target_id = aws_ec2_traffic_mirror_target.target.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional) A description of the traffic mirror session. +* `networkInterfaceId` - (Required, Forces new) ID of the source network interface. Not all network interfaces are eligible as mirror sources. 
Only Nitro-based EC2 instances support mirroring. +* `trafficMirrorFilterId` - (Required) ID of the traffic mirror filter to be used. +* `trafficMirrorTargetId` - (Required) ID of the traffic mirror target to be used. +* `packetLength` - (Optional) The number of bytes in each packet to mirror. These are bytes after the VXLAN header. Do not specify this parameter when you want to mirror the entire packet. To mirror a subset of the packet, set this to the length (in bytes) that you want to mirror. +* `sessionNumber` - (Required) The session number determines the order in which sessions are evaluated when an interface is used by multiple sessions. The first session with a matching filter is the one that mirrors the packets. +* `virtualNetworkId` - (Optional) The VXLAN ID for the Traffic Mirror session. For more information about the VXLAN protocol, see RFC 7348. If you do not specify a VirtualNetworkId, an account-wide unique id is chosen at random. +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the traffic mirror session. +* `id` - The ID of the traffic mirror session. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `ownerId` - The AWS account ID of the session owner. 
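+The optional `packet_length` and `virtual_network_id` arguments described above can be combined to mirror a truncated copy of each packet on a fixed VXLAN ID. A sketch reusing the filter and target from the example (the numeric values are illustrative):
+
+```terraform
+resource "aws_ec2_traffic_mirror_session" "truncated" {
+  description              = "truncated traffic mirror session"
+  network_interface_id     = aws_instance.test.primary_network_interface_id
+  session_number           = 2
+  packet_length            = 100
+  virtual_network_id       = 4242
+  traffic_mirror_filter_id = aws_ec2_traffic_mirror_filter.filter.id
+  traffic_mirror_target_id = aws_ec2_traffic_mirror_target.target.id
+}
+```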
+ +## Import + +Traffic mirror sessions can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_traffic_mirror_session.session tms-0d8aa3ca35897b82e +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_traffic_mirror_target.html.markdown b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_target.html.markdown new file mode 100644 index 00000000000..a1f502299e1 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_traffic_mirror_target.html.markdown @@ -0,0 +1,64 @@ +--- +subcategory: "VPC (Virtual Private Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_traffic_mirror_target" +description: |- + Provides a Traffic mirror target +--- + +# Resource: aws_ec2_traffic_mirror_target + +Provides a Traffic mirror target. +Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring. + +## Example Usage + +To create basic traffic mirror targets + +```terraform +resource "aws_ec2_traffic_mirror_target" "nlb" { + description = "NLB target" + network_load_balancer_arn = aws_lb.lb.arn +} + +resource "aws_ec2_traffic_mirror_target" "eni" { + description = "ENI target" + network_interface_id = aws_instance.test.primary_network_interface_id +} + +resource "aws_ec2_traffic_mirror_target" "gwlb" { + description = "GWLB target" + gateway_load_balancer_endpoint_id = aws_vpc_endpoint.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional, Forces new) A description of the traffic mirror target. +* `networkInterfaceId` - (Optional, Forces new) The network interface ID that is associated with the target. +* `networkLoadBalancerArn` - (Optional, Forces new) The Amazon Resource Name (ARN) of the Network Load Balancer that is associated with the target. +* `gatewayLoadBalancerEndpointId` - (Optional, Forces new) The VPC Endpoint ID of the Gateway Load Balancer that is associated with the target. 
+* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +**NOTE:** Exactly one of `networkInterfaceId`, `networkLoadBalancerArn` or `gatewayLoadBalancerEndpointId` must be specified; these arguments cannot be combined. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The ID of the Traffic Mirror target. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `arn` - The ARN of the traffic mirror target. +* `ownerId` - The ID of the AWS account that owns the traffic mirror target. + +## Import + +Traffic mirror targets can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_traffic_mirror_target.target tmt-0c13a005422b86606 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway.html.markdown new file mode 100644 index 00000000000..e073514f633 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway.html.markdown @@ -0,0 +1,77 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway" +description: |- + Manages an EC2 Transit Gateway +--- + +# Resource: aws_ec2_transit_gateway + +Manages an EC2 Transit Gateway. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + new aws.ec2TransitGateway.Ec2TransitGateway(this, "example", { + description: "example", + }); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `amazonSideAsn` - (Optional) Private Autonomous System Number (ASN) for the Amazon side of a BGP session. The range is `64512` to `65534` for 16-bit ASNs and `4200000000` to `4294967294` for 32-bit ASNs. Default value: `64512`. + +-> **NOTE:** Modifying `amazonSideAsn` on a Transit Gateway with active BGP sessions is [not allowed](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyTransitGatewayOptions.html). You must first delete all Transit Gateway attachments that have BGP configured prior to modifying `amazonSideAsn`. + +* `autoAcceptSharedAttachments` - (Optional) Whether resource attachment requests are automatically accepted. Valid values: `disable`, `enable`. Default value: `disable`. +* `defaultRouteTableAssociation` - (Optional) Whether resource attachments are automatically associated with the default association route table. Valid values: `disable`, `enable`. Default value: `enable`. +* `defaultRouteTablePropagation` - (Optional) Whether resource attachments automatically propagate routes to the default propagation route table. Valid values: `disable`, `enable`. Default value: `enable`. +* `description` - (Optional) Description of the EC2 Transit Gateway. +* `dnsSupport` - (Optional) Whether DNS support is enabled. Valid values: `disable`, `enable`. Default value: `enable`. +* `multicastSupport` - (Optional) Whether Multicast support is enabled. Required to use `ec2TransitGatewayMulticastDomain`. Valid values: `disable`, `enable`. Default value: `disable`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway. 
If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transitGatewayCidrBlocks` - (Optional) One or more IPv4 or IPv6 CIDR blocks for the transit gateway. Must be a size /24 CIDR block or larger for IPv4, or a size /64 CIDR block or larger for IPv6. +* `vpnEcmpSupport` - (Optional) Whether VPN Equal Cost Multipath Protocol support is enabled. Valid values: `disable`, `enable`. Default value: `enable`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Amazon Resource Name (ARN) +* `associationDefaultRouteTableId` - Identifier of the default association route table +* `id` - EC2 Transit Gateway identifier +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +* `ownerId` - Identifier of the AWS account that owns the EC2 Transit Gateway +* `propagationDefaultRouteTableId` - Identifier of the default propagation route table + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `update` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +`awsEc2TransitGateway` can be imported by using the EC2 Transit Gateway identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway.example tgw-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_connect.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_connect.html.markdown new file mode 100644 index 00000000000..59e486de9a5 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_connect.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_connect" +description: |- + Manages an EC2 Transit Gateway Connect +--- + +# Resource: aws_ec2_transit_gateway_connect + +Manages an EC2 Transit Gateway Connect. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_vpc_attachment" "example" { + subnet_ids = [aws_subnet.example.id] + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpc_id = aws_vpc.example.id +} + +resource "aws_ec2_transit_gateway_connect" "attachment" { + transport_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_id = aws_ec2_transit_gateway.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `protocol` - (Optional) The tunnel protocol. Valid values: `gre`. Default is `gre`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Connect. 
If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transitGatewayDefaultRouteTableAssociation` - (Optional) Boolean whether the Connect should be associated with the EC2 Transit Gateway association default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways. Default value: `true`. +* `transitGatewayDefaultRouteTablePropagation` - (Optional) Boolean whether the Connect should propagate routes with the EC2 Transit Gateway propagation default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways. Default value: `true`. +* `transitGatewayId` - (Required) Identifier of EC2 Transit Gateway. +* `transportAttachmentId` - (Required) Identifier of the underlying VPC attachment. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `update` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +`awsEc2TransitGatewayConnect` can be imported by using the EC2 Transit Gateway Connect identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_connect.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_connect_peer.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_connect_peer.html.markdown new file mode 100644 index 00000000000..5c05f186fd7 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_connect_peer.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_connect_peer" +description: |- + Manages an EC2 Transit Gateway Connect Peer +--- + +# Resource: aws_ec2_transit_gateway_connect_peer + +Manages an EC2 Transit Gateway Connect Peer. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_connect" "example" { + transport_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_id = aws_ec2_transit_gateway.example.id +} + +resource "aws_ec2_transit_gateway_connect_peer" "example" { + peer_address = "10.1.2.3" + inside_cidr_blocks = ["169.254.100.0/29"] + transit_gateway_attachment_id = aws_ec2_transit_gateway_connect.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bgpAsn` - (Optional) The BGP ASN number assigned to the customer device. If not provided, it will use the same BGP ASN as is associated with the Transit Gateway. +* `insideCidrBlocks` - (Required) The CIDR block that will be used for addressing within the tunnel. It must contain exactly one IPv4 CIDR block and up to one IPv6 CIDR block. 
The IPv4 CIDR block must be a /29 and must be within the 169.254.0.0/16 range, with the exception of: 169.254.0.0/29, 169.254.1.0/29, 169.254.2.0/29, 169.254.3.0/29, 169.254.4.0/29, 169.254.5.0/29, 169.254.169.248/29. The IPv6 CIDR block must be a /125 and must be within fd00::/8. The first IP from each CIDR block is assigned to the customer gateway; the second and third are for the Transit Gateway (for example, from the range 169.254.100.0/29, .1 is assigned to the customer gateway and .2 and .3 are assigned to the Transit Gateway). +* `peerAddress` - (Required) The IP address assigned to the customer device, which will be used as the tunnel endpoint. It can be an IPv4 or IPv6 address, but must be the same address family as `transitGatewayAddress`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Connect Peer. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transitGatewayAddress` - (Optional) The IP address assigned to the Transit Gateway, which will be used as the tunnel endpoint. This address must be from the associated Transit Gateway CIDR block and from the same address family as `peerAddress`. If not set explicitly, it will be selected from the associated Transit Gateway CIDR blocks. +* `transitGatewayAttachmentId` - (Required) Identifier of the EC2 Transit Gateway Connect attachment. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Connect Peer identifier +* `arn` - EC2 Transit Gateway Connect Peer ARN +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
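To make the inside-CIDR addressing rules above concrete, here is a minimal sketch of a Connect Peer that pins every address explicitly. All resource names, the ASN, and the `10.0.0.10` gateway address are illustrative assumptions, not values from this page:

```terraform
# Illustrative only: 169.254.100.0/29 sits inside 169.254.0.0/16 and is not
# one of the reserved /29 blocks, so AWS assigns .1 to the customer gateway
# and .2/.3 to the Transit Gateway side of the tunnel.
resource "aws_ec2_transit_gateway_connect_peer" "sketch" {
  peer_address                  = "10.1.2.3"
  bgp_asn                       = "64512" # assumed private ASN for the customer device
  inside_cidr_blocks            = ["169.254.100.0/29"]
  transit_gateway_address       = "10.0.0.10" # must fall within the gateway's transit_gateway_cidr_blocks
  transit_gateway_attachment_id = aws_ec2_transit_gateway_connect.example.id
}
```

If `transit_gateway_address` is omitted, AWS selects one from the associated Transit Gateway CIDR blocks automatically.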
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +`awsEc2TransitGatewayConnectPeer` can be imported by using the EC2 Transit Gateway Connect Peer identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_connect_peer.example tgw-connect-peer-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_domain.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_domain.html.markdown new file mode 100644 index 00000000000..02f15ef5c67 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_domain.html.markdown @@ -0,0 +1,253 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_domain" +description: |- + Manages an EC2 Transit Gateway Multicast Domain +--- + +# Resource: aws_ec2_transit_gateway_multicast_domain + +Manages an EC2 Transit Gateway Multicast Domain. + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const awsEc2TransitGatewayTgw = new aws.ec2TransitGateway.Ec2TransitGateway( + this, + "tgw", + { + multicastSupport: "enable", + } + ); + const awsEc2TransitGatewayMulticastDomainDomain = + new aws.ec2TransitGatewayMulticastDomain.Ec2TransitGatewayMulticastDomain( + this, + "domain", + { + staticSourcesSupport: "enable", + tags: { + Name: "Transit_Gateway_Multicast_Domain_Example", + }, + transitGatewayId: cdktf.Token.asString(awsEc2TransitGatewayTgw.id), + } + ); + const awsVpcVpc1 = new aws.vpc.Vpc(this, "vpc1", { + cidrBlock: "10.0.0.0/16", + }); + const awsVpcVpc2 = new aws.vpc.Vpc(this, "vpc2", { + cidrBlock: "10.1.0.0/16", + }); + const dataAwsAmiAmazonLinux = new aws.dataAwsAmi.DataAwsAmi( + this, + "amazon_linux", + { + filter: [ + { + name: "name", + values: ["amzn-ami-hvm-*-x86_64-gp2"], + }, + { + name: "owner-alias", + values: ["amazon"], + }, + ], + mostRecent: true, + owners: ["amazon"], + } + ); + const dataAwsAvailabilityZonesAvailable = + new aws.dataAwsAvailabilityZones.DataAwsAvailabilityZones( + this, + "available", + { + state: "available", + } + ); + const awsSubnetSubnet1 = new aws.subnet.Subnet(this, "subnet1", { + availabilityZone: cdktf.Token.asString( + cdktf.propertyAccess(dataAwsAvailabilityZonesAvailable.names, ["0"]) + ), + cidrBlock: "10.0.1.0/24", + vpcId: cdktf.Token.asString(awsVpcVpc1.id), + }); + const awsSubnetSubnet2 = new aws.subnet.Subnet(this, "subnet2", { + availabilityZone: cdktf.Token.asString( + cdktf.propertyAccess(dataAwsAvailabilityZonesAvailable.names, ["1"]) + ), + cidrBlock: "10.0.2.0/24", + vpcId: cdktf.Token.asString(awsVpcVpc1.id), + }); + const awsSubnetSubnet3 = new aws.subnet.Subnet(this, "subnet3", { + availabilityZone: cdktf.Token.asString( + 
cdktf.propertyAccess(dataAwsAvailabilityZonesAvailable.names, ["0"]) + ), + cidrBlock: "10.1.1.0/24", + vpcId: cdktf.Token.asString(awsVpcVpc2.id), + }); + const awsEc2TransitGatewayVpcAttachmentAttachment1 = + new aws.ec2TransitGatewayVpcAttachment.Ec2TransitGatewayVpcAttachment( + this, + "attachment1", + { + subnetIds: [ + cdktf.Token.asString(awsSubnetSubnet1.id), + cdktf.Token.asString(awsSubnetSubnet2.id), + ], + transitGatewayId: cdktf.Token.asString(awsEc2TransitGatewayTgw.id), + vpcId: cdktf.Token.asString(awsVpcVpc1.id), + } + ); + const awsEc2TransitGatewayVpcAttachmentAttachment2 = + new aws.ec2TransitGatewayVpcAttachment.Ec2TransitGatewayVpcAttachment( + this, + "attachment2", + { + subnetIds: [cdktf.Token.asString(awsSubnetSubnet3.id)], + transitGatewayId: cdktf.Token.asString(awsEc2TransitGatewayTgw.id), + vpcId: cdktf.Token.asString(awsVpcVpc2.id), + } + ); + const awsInstanceInstance1 = new aws.instance.Instance(this, "instance1", { + ami: cdktf.Token.asString(dataAwsAmiAmazonLinux.id), + instanceType: "t2.micro", + subnetId: cdktf.Token.asString(awsSubnetSubnet1.id), + }); + const awsInstanceInstance2 = new aws.instance.Instance(this, "instance2", { + ami: cdktf.Token.asString(dataAwsAmiAmazonLinux.id), + instanceType: "t2.micro", + subnetId: cdktf.Token.asString(awsSubnetSubnet2.id), + }); + const awsInstanceInstance3 = new aws.instance.Instance(this, "instance3", { + ami: cdktf.Token.asString(dataAwsAmiAmazonLinux.id), + instanceType: "t2.micro", + subnetId: cdktf.Token.asString(awsSubnetSubnet3.id), + }); + const awsEc2TransitGatewayMulticastDomainAssociationAssociation1 = + new aws.ec2TransitGatewayMulticastDomainAssociation.Ec2TransitGatewayMulticastDomainAssociation( + this, + "association1", + { + subnetId: cdktf.Token.asString(awsSubnetSubnet1.id), + transitGatewayAttachmentId: cdktf.Token.asString( + awsEc2TransitGatewayVpcAttachmentAttachment1.id + ), + transitGatewayMulticastDomainId: cdktf.Token.asString( + 
awsEc2TransitGatewayMulticastDomainDomain.id + ), + } + ); + new aws.ec2TransitGatewayMulticastDomainAssociation.Ec2TransitGatewayMulticastDomainAssociation( + this, + "association2", + { + subnetId: cdktf.Token.asString(awsSubnetSubnet2.id), + transitGatewayAttachmentId: cdktf.Token.asString( + awsEc2TransitGatewayVpcAttachmentAttachment2.id + ), + transitGatewayMulticastDomainId: cdktf.Token.asString( + awsEc2TransitGatewayMulticastDomainDomain.id + ), + } + ); + const awsEc2TransitGatewayMulticastDomainAssociationAssociation3 = + new aws.ec2TransitGatewayMulticastDomainAssociation.Ec2TransitGatewayMulticastDomainAssociation( + this, + "association3", + { + subnetId: cdktf.Token.asString(awsSubnetSubnet3.id), + transitGatewayAttachmentId: cdktf.Token.asString( + awsEc2TransitGatewayVpcAttachmentAttachment2.id + ), + transitGatewayMulticastDomainId: cdktf.Token.asString( + awsEc2TransitGatewayMulticastDomainDomain.id + ), + } + ); + new aws.ec2TransitGatewayMulticastGroupMember.Ec2TransitGatewayMulticastGroupMember( + this, + "member1", + { + groupIpAddress: "224.0.0.1", + networkInterfaceId: cdktf.Token.asString( + awsInstanceInstance1.primaryNetworkInterfaceId + ), + transitGatewayMulticastDomainId: cdktf.Token.asString( + awsEc2TransitGatewayMulticastDomainAssociationAssociation1.transitGatewayMulticastDomainId + ), + } + ); + new aws.ec2TransitGatewayMulticastGroupMember.Ec2TransitGatewayMulticastGroupMember( + this, + "member2", + { + groupIpAddress: "224.0.0.1", + networkInterfaceId: cdktf.Token.asString( + awsInstanceInstance2.primaryNetworkInterfaceId + ), + transitGatewayMulticastDomainId: cdktf.Token.asString( + awsEc2TransitGatewayMulticastDomainAssociationAssociation1.transitGatewayMulticastDomainId + ), + } + ); + new aws.ec2TransitGatewayMulticastGroupSource.Ec2TransitGatewayMulticastGroupSource( + this, + "source", + { + groupIpAddress: "224.0.0.1", + networkInterfaceId: cdktf.Token.asString( + awsInstanceInstance3.primaryNetworkInterfaceId + ), + 
transitGatewayMulticastDomainId: cdktf.Token.asString( + awsEc2TransitGatewayMulticastDomainAssociationAssociation3.transitGatewayMulticastDomainId + ), + } + ); + } +} + +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayId` - (Required) EC2 Transit Gateway identifier. The EC2 Transit Gateway must have `multicastSupport` enabled. +* `autoAcceptSharedAssociations` - (Optional) Whether to automatically accept cross-account subnet associations that are associated with the EC2 Transit Gateway Multicast Domain. Valid values: `disable`, `enable`. Default value: `disable`. +* `igmpv2Support` - (Optional) Whether to enable Internet Group Management Protocol (IGMP) version 2 for the EC2 Transit Gateway Multicast Domain. Valid values: `disable`, `enable`. Default value: `disable`. +* `staticSourcesSupport` - (Optional) Whether to enable support for statically configuring multicast group sources for the EC2 Transit Gateway Multicast Domain. Valid values: `disable`, `enable`. Default value: `disable`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Multicast Domain. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Domain identifier. +* `arn` - EC2 Transit Gateway Multicast Domain Amazon Resource Name (ARN). +* `ownerId` - Identifier of the AWS account that owns the EC2 Transit Gateway Multicast Domain. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + +## Import + +`awsEc2TransitGatewayMulticastDomain` can be imported by using the EC2 Transit Gateway Multicast Domain identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_multicast_domain.example tgw-mcast-domain-12345 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_domain_association.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_domain_association.html.markdown new file mode 100644 index 00000000000..cfb4ded0a39 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_domain_association.html.markdown @@ -0,0 +1,58 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_domain_association" +description: |- + Manages an EC2 Transit Gateway Multicast Domain Association +--- + +# Resource: aws_ec2_transit_gateway_multicast_domain_association + +Associates the specified subnet and transit gateway attachment with the specified transit gateway multicast domain. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway" "example" { + multicast_support = "enable" +} + +resource "aws_ec2_transit_gateway_vpc_attachment" "example" { + subnet_ids = [aws_subnet.example.id] + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpc_id = aws_vpc.example.id +} + +resource "aws_ec2_transit_gateway_multicast_domain" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id +} + +resource "aws_ec2_transit_gateway_multicast_domain_association" "example" { + subnet_id = aws_subnet.example.id + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `subnetId` - (Required) The ID of the subnet to associate with the transit gateway multicast domain. +* `transitGatewayAttachmentId` - (Required) The ID of the transit gateway attachment. +* `transitGatewayMulticastDomainId` - (Required) The ID of the transit gateway multicast domain. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Domain Association identifier. 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_group_member.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_group_member.html.markdown new file mode 100644 index 00000000000..614c940b015 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_group_member.html.markdown @@ -0,0 +1,38 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_group_member" +description: |- + Manages an EC2 Transit Gateway Multicast Group Member +--- + +# Resource: aws_ec2_transit_gateway_multicast_group_member + +Registers members (network interfaces) with the transit gateway multicast group. +A member is a network interface associated with a supported EC2 instance that receives multicast traffic. + +## Example Usage + +```terraform
resource "aws_ec2_transit_gateway_multicast_group_member" "example" { + group_ip_address = "224.0.0.1" + network_interface_id = aws_network_interface.example.id + transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `groupIpAddress` - (Required) The IP address assigned to the transit gateway multicast group. +* `networkInterfaceId` - (Required) The group member's network interface ID to register with the transit gateway multicast group. +* `transitGatewayMulticastDomainId` - (Required) The ID of the transit gateway multicast domain. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Group Member identifier. 
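Because a member is simply a network interface on a supported instance, an instance's primary ENI can be registered directly. A minimal sketch, assuming an `aws_instance.example` resource (not defined on this page) and the multicast domain from the example above:

```terraform
# Sketch: register an instance's primary network interface as a
# multicast group member. aws_instance.example is an assumed resource.
resource "aws_ec2_transit_gateway_multicast_group_member" "via_instance" {
  group_ip_address                    = "224.0.0.1"
  network_interface_id                = aws_instance.example.primary_network_interface_id
  transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id
}
```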
+ + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_group_source.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_group_source.html.markdown new file mode 100644 index 00000000000..265105d03c8 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_multicast_group_source.html.markdown @@ -0,0 +1,38 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_multicast_group_source" +description: |- + Manages an EC2 Transit Gateway Multicast Group Source +--- + +# Resource: aws_ec2_transit_gateway_multicast_group_source + +Registers sources (network interfaces) with the transit gateway multicast group. +A multicast source is a network interface attached to a supported instance that sends multicast traffic. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_multicast_group_source" "example" { + group_ip_address = "224.0.0.1" + network_interface_id = aws_network_interface.example.id + transit_gateway_multicast_domain_id = aws_ec2_transit_gateway_multicast_domain.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `groupIpAddress` - (Required) The IP address assigned to the transit gateway multicast group. +* `networkInterfaceId` - (Required) The group source's network interface ID to register with the transit gateway multicast group. +* `transitGatewayMulticastDomainId` - (Required) The ID of the transit gateway multicast domain. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Multicast Group Source identifier. 
+ + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown new file mode 100644 index 00000000000..5ac8ab0ffbd --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment.html.markdown @@ -0,0 +1,103 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_peering_attachment" +description: |- + Manages an EC2 Transit Gateway Peering Attachment +--- + +# Resource: aws_ec2_transit_gateway_peering_attachment + +Manages an EC2 Transit Gateway Peering Attachment. +For examples of custom route table association and propagation, see the [EC2 Transit Gateway Networking Examples Guide](https://docs.aws.amazon.com/vpc/latest/tgw/TGW_Scenarios.html). + +## Example Usage + +```typescript +import * as constructs from "constructs"; +import * as cdktf from "cdktf"; +/*Provider bindings are generated by running cdktf get. 
+See https://cdk.tf/provider-generation for more details.*/ +import * as aws from "./.gen/providers/aws"; +class MyConvertedCode extends cdktf.TerraformStack { + constructor(scope: constructs.Construct, name: string) { + super(scope, name); + const awsLocal = new aws.provider.AwsProvider(this, "aws", { + alias: "local", + region: "us-east-1", + }); + const awsPeer = new aws.provider.AwsProvider(this, "aws_1", { + alias: "peer", + region: "us-west-2", + }); + const awsEc2TransitGatewayLocal = + new aws.ec2TransitGateway.Ec2TransitGateway(this, "local", { + provider: awsLocal, + tags: { + Name: "Local TGW", + }, + }); + const awsEc2TransitGatewayPeer = + new aws.ec2TransitGateway.Ec2TransitGateway(this, "peer", { + provider: awsPeer, + tags: { + Name: "Peer TGW", + }, + }); + const dataAwsRegionPeer = new aws.dataAwsRegion.DataAwsRegion( + this, + "peer_4", + { + provider: awsPeer, + } + ); + /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ + dataAwsRegionPeer.overrideLogicalId("peer"); + new aws.ec2TransitGatewayPeeringAttachment.Ec2TransitGatewayPeeringAttachment( + this, + "example", + { + peerAccountId: cdktf.Token.asString(awsEc2TransitGatewayPeer.ownerId), + peerRegion: cdktf.Token.asString(dataAwsRegionPeer.name), + peerTransitGatewayId: cdktf.Token.asString(awsEc2TransitGatewayPeer.id), + tags: { + Name: "TGW Peering Requestor", + }, + transitGatewayId: cdktf.Token.asString(awsEc2TransitGatewayLocal.id), + } + ); + } +} + +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach it to a Transit Gateway in the second account via the `awsEc2TransitGatewayPeeringAttachment` resource can be found in [the `/examples/transitGatewayCrossAccountPeeringAttachment` directory within the GitHub repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-peering-attachment). 
+ +## Argument Reference + +The following arguments are supported: + +* `peerAccountId` - (Optional) Account ID of EC2 Transit Gateway to peer with. Defaults to the account ID the [AWS provider][1] is currently connected to. +* `peerRegion` - (Required) Region of EC2 Transit Gateway to peer with. +* `peerTransitGatewayId` - (Required) Identifier of EC2 Transit Gateway to peer with. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Peering Attachment. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transitGatewayId` - (Required) Identifier of EC2 Transit Gateway. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Import + +`awsEc2TransitGatewayPeeringAttachment` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +```sh +terraform import aws_ec2_transit_gateway_peering_attachment.example tgw-attach-12345678 +``` + +[1]: /docs/providers/aws/index.html + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment_accepter.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment_accepter.html.markdown new file mode 100644 index 00000000000..fb86c563aa0 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_peering_attachment_accepter.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_peering_attachment_accepter" +description: |- + Manages the accepter's side of an EC2 Transit Gateway Peering Attachment +--- + +# Resource: aws_ec2_transit_gateway_peering_attachment_accepter + +Manages the accepter's side of an EC2 Transit Gateway Peering Attachment. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_peering_attachment_accepter" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_peering_attachment.example.id + + tags = { + Name = "Example cross-account attachment" + } +} +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach it to a Transit Gateway in the second account via the `awsEc2TransitGatewayPeeringAttachment` resource can be found in [the `/examples/transitGatewayCrossAccountPeeringAttachment` directory within the GitHub repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-peering-attachment). + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayAttachmentId` - (Required) The ID of the EC2 Transit Gateway Peering Attachment to manage. 
+* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Peering Attachment. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `transitGatewayId` - Identifier of EC2 Transit Gateway. +* `peerTransitGatewayId` - Identifier of EC2 Transit Gateway to peer with. +* `peerAccountId` - Identifier of the AWS account that owns the EC2 TGW peering. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`awsEc2TransitGatewayPeeringAttachmentAccepter` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_peering_attachment_accepter.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_policy_table.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_policy_table.html.markdown new file mode 100644 index 00000000000..9e57efea412 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_policy_table.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_policy_table" +description: |- + Manages an EC2 Transit Gateway Policy Table +--- + +# Resource: aws_ec2_transit_gateway_policy_table + +Manages an EC2 Transit Gateway Policy Table. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_policy_table" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id + + tags = { + Name = "Example Policy Table" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayId` - (Required) EC2 Transit Gateway identifier. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Policy Table. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Policy Table Amazon Resource Name (ARN). +* `id` - EC2 Transit Gateway Policy Table identifier. +* `state` - The state of the EC2 Transit Gateway Policy Table. +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Import + +`awsEc2TransitGatewayPolicyTable` can be imported by using the EC2 Transit Gateway Policy Table identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_policy_table.example tgw-rtb-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_policy_table_association.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_policy_table_association.html.markdown new file mode 100644 index 00000000000..f895a0e215c --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_policy_table_association.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_policy_table_association" +description: |- + Manages an EC2 Transit Gateway Policy Table association +--- + +# Resource: aws_ec2_transit_gateway_policy_table_association + +Manages an EC2 Transit Gateway Policy Table association. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_policy_table_association" "example" { + transit_gateway_attachment_id = aws_networkmanager_transit_gateway_peering.example.transit_gateway_peering_attachment_id + transit_gateway_policy_table_id = aws_ec2_transit_gateway_policy_table.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayAttachmentId` - (Required) Identifier of EC2 Transit Gateway Attachment. +* `transitGatewayPolicyTableId` - (Required) Identifier of EC2 Transit Gateway Policy Table. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Policy Table identifier combined with EC2 Transit Gateway Attachment identifier +* `resourceId` - Identifier of the resource +* `resourceType` - Type of the resource + +## Import + +`awsEc2TransitGatewayPolicyTableAssociation` can be imported by using the EC2 Transit Gateway Policy Table identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_policy_table_association.example tgw-rtb-12345678_tgw-attach-87654321 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_prefix_list_reference.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_prefix_list_reference.html.markdown new file mode 100644 index 00000000000..1b191a17c9d --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_prefix_list_reference.html.markdown @@ -0,0 +1,61 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_prefix_list_reference" +description: |- + Manages an EC2 Transit Gateway Prefix List Reference +--- + +# Resource: aws_ec2_transit_gateway_prefix_list_reference + +Manages an EC2 Transit Gateway Prefix List Reference. 
+ +## Example Usage + +### Attachment Routing + +```terraform +resource "aws_ec2_transit_gateway_prefix_list_reference" "example" { + prefix_list_id = aws_ec2_managed_prefix_list.example.id + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +### Blackhole Routing + +```terraform +resource "aws_ec2_transit_gateway_prefix_list_reference" "example" { + blackhole = true + prefix_list_id = aws_ec2_managed_prefix_list.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +## Argument Reference + +The following arguments are required: + +* `prefixListId` - (Required) Identifier of EC2 Prefix List. +* `transitGatewayRouteTableId` - (Required) Identifier of EC2 Transit Gateway Route Table. + +The following arguments are optional: + +* `blackhole` - (Optional) Indicates whether to drop traffic that matches the Prefix List. Defaults to `false`. +* `transitGatewayAttachmentId` - (Optional) Identifier of EC2 Transit Gateway Attachment. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier and EC2 Prefix List identifier, separated by an underscore (`_`) + +## Import + +`awsEc2TransitGatewayPrefixListReference` can be imported by using the EC2 Transit Gateway Route Table identifier and EC2 Prefix List identifier, separated by an underscore (`_`), e.g., + +``` +$ terraform import aws_ec2_transit_gateway_prefix_list_reference.example tgw-rtb-12345678_pl-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_route.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route.html.markdown new file mode 100644 index 00000000000..df3c812a24e --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route.html.markdown @@ -0,0 +1,58 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route" +description: |- + Manages an EC2 Transit Gateway Route +--- + +# Resource: aws_ec2_transit_gateway_route + +Manages an EC2 Transit Gateway Route. + +## Example Usage + +### Standard usage + +```terraform +resource "aws_ec2_transit_gateway_route" "example" { + destination_cidr_block = "0.0.0.0/0" + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +### Blackhole route + +```terraform +resource "aws_ec2_transit_gateway_route" "example" { + destination_cidr_block = "0.0.0.0/0" + blackhole = true + transit_gateway_route_table_id = aws_ec2_transit_gateway.example.association_default_route_table_id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `destinationCidrBlock` - (Required) IPv4 or IPv6 CIDR used for destination matches. Routing decisions are based on the most specific match. 
+* `transitGatewayAttachmentId` - (Optional) Identifier of EC2 Transit Gateway Attachment (required if `blackhole` is set to `false`). +* `blackhole` - (Optional) Indicates whether to drop traffic that matches this route (defaults to `false`). +* `transitGatewayRouteTableId` - (Required) Identifier of EC2 Transit Gateway Route Table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier combined with destination + +## Import + +`awsEc2TransitGatewayRoute` can be imported by using the EC2 Transit Gateway Route Table identifier, an underscore, and the destination, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route.example tgw-rtb-12345678_0.0.0.0/0 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table.html.markdown new file mode 100644 index 00000000000..61103f6bd36 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table.html.markdown @@ -0,0 +1,46 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table" +description: |- + Manages an EC2 Transit Gateway Route Table +--- + +# Resource: aws_ec2_transit_gateway_route_table + +Manages an EC2 Transit Gateway Route Table. + +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_route_table" "example" { + transit_gateway_id = aws_ec2_transit_gateway.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayId` - (Required) Identifier of EC2 Transit Gateway. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Route Table. 
If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - EC2 Transit Gateway Route Table Amazon Resource Name (ARN). +* `defaultAssociationRouteTable` - Boolean whether this is the default association route table for the EC2 Transit Gateway. +* `defaultPropagationRouteTable` - Boolean whether this is the default propagation route table for the EC2 Transit Gateway. +* `id` - EC2 Transit Gateway Route Table identifier +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +`awsEc2TransitGatewayRouteTable` can be imported by using the EC2 Transit Gateway Route Table identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route_table.example tgw-rtb-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table_association.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table_association.html.markdown new file mode 100644 index 00000000000..e8aa797c703 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table_association.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_association" +description: |- + Manages an EC2 Transit Gateway Route Table association +--- + +# Resource: aws_ec2_transit_gateway_route_table_association + +Manages an EC2 Transit Gateway Route Table association. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_route_table_association" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayAttachmentId` - (Required) Identifier of EC2 Transit Gateway Attachment. +* `transitGatewayRouteTableId` - (Required) Identifier of EC2 Transit Gateway Route Table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier combined with EC2 Transit Gateway Attachment identifier +* `resourceId` - Identifier of the resource +* `resourceType` - Type of the resource + +## Import + +`awsEc2TransitGatewayRouteTableAssociation` can be imported by using the EC2 Transit Gateway Route Table identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route_table_association.example tgw-rtb-12345678_tgw-attach-87654321 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table_propagation.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table_propagation.html.markdown new file mode 100644 index 00000000000..49921b2b128 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_route_table_propagation.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_route_table_propagation" +description: |- + Manages an EC2 Transit Gateway Route Table propagation +--- + +# Resource: aws_ec2_transit_gateway_route_table_propagation + +Manages an EC2 Transit Gateway Route Table propagation. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_route_table_propagation" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayAttachmentId` - (Required) Identifier of EC2 Transit Gateway Attachment. +* `transitGatewayRouteTableId` - (Required) Identifier of EC2 Transit Gateway Route Table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Route Table identifier combined with EC2 Transit Gateway Attachment identifier +* `resourceId` - Identifier of the resource +* `resourceType` - Type of the resource + +## Import + +`awsEc2TransitGatewayRouteTablePropagation` can be imported by using the EC2 Transit Gateway Route Table identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_route_table_propagation.example tgw-rtb-12345678_tgw-attach-87654321 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_vpc_attachment.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_vpc_attachment.html.markdown new file mode 100644 index 00000000000..d188b741ff5 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_vpc_attachment.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachment" +description: |- + Manages an EC2 Transit Gateway VPC Attachment +--- + +# Resource: aws_ec2_transit_gateway_vpc_attachment + +Manages an EC2 Transit Gateway VPC Attachment. For examples of custom route table association and propagation, see the EC2 Transit Gateway Networking Examples Guide. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_vpc_attachment" "example" { + subnet_ids = [aws_subnet.example.id] + transit_gateway_id = aws_ec2_transit_gateway.example.id + vpc_id = aws_vpc.example.id +} +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach a VPC in the second account to the Transit Gateway via the `awsEc2TransitGatewayVpcAttachment` and `awsEc2TransitGatewayVpcAttachmentAccepter` resources can be found in [the `/examples/transitGatewayCrossAccountVpcAttachment` directory within the GitHub repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-vpc-attachment). + +## Argument Reference + +The following arguments are supported: + +* `subnetIds` - (Required) Identifiers of EC2 Subnets. +* `transitGatewayId` - (Required) Identifier of EC2 Transit Gateway. +* `vpcId` - (Required) Identifier of EC2 VPC. +* `applianceModeSupport` - (Optional) Whether Appliance Mode support is enabled. If enabled, a traffic flow between a source and destination uses the same Availability Zone for the VPC attachment for the lifetime of that flow. Valid values: `disable`, `enable`. Default value: `disable`. +* `dnsSupport` - (Optional) Whether DNS support is enabled. Valid values: `disable`, `enable`. Default value: `enable`. +* `ipv6Support` - (Optional) Whether IPv6 support is enabled. Valid values: `disable`, `enable`. Default value: `disable`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway VPC Attachment. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
+* `transitGatewayDefaultRouteTableAssociation` - (Optional) Boolean whether the VPC Attachment should be associated with the EC2 Transit Gateway association default route table. This cannot be configured, and drift detection is not performed, for Resource Access Manager shared EC2 Transit Gateways. Default value: `true`. +* `transitGatewayDefaultRouteTablePropagation` - (Optional) Boolean whether the VPC Attachment should propagate routes with the EC2 Transit Gateway propagation default route table. This cannot be configured, and drift detection is not performed, for Resource Access Manager shared EC2 Transit Gateways. Default value: `true`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `vpcOwnerId` - Identifier of the AWS account that owns the EC2 VPC. 
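When the Transit Gateway has been shared from another account via Resource Access Manager, the two default route table arguments above are typically set to `false`, since only the owning account manages the Transit Gateway's route tables. A minimal sketch (resource names are illustrative, not taken from this document):

```terraform
# Sketch: VPC attachment to a RAM-shared Transit Gateway. The sharing
# account manages route table association/propagation, so both default
# route table behaviors are disabled on the attachment.
resource "aws_ec2_transit_gateway_vpc_attachment" "shared" {
  subnet_ids         = [aws_subnet.example.id]
  transit_gateway_id = aws_ec2_transit_gateway.example.id
  vpc_id             = aws_vpc.example.id

  transit_gateway_default_route_table_association = false
  transit_gateway_default_route_table_propagation = false
}
```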
+ +## Import + +`awsEc2TransitGatewayVpcAttachment` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_vpc_attachment.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/cdktf/typescript/r/ec2_transit_gateway_vpc_attachment_accepter.html.markdown b/website/docs/cdktf/typescript/r/ec2_transit_gateway_vpc_attachment_accepter.html.markdown new file mode 100644 index 00000000000..90fe5a37126 --- /dev/null +++ b/website/docs/cdktf/typescript/r/ec2_transit_gateway_vpc_attachment_accepter.html.markdown @@ -0,0 +1,64 @@ +--- +subcategory: "Transit Gateway" +layout: "aws" +page_title: "AWS: aws_ec2_transit_gateway_vpc_attachment_accepter" +description: |- + Manages the accepter's side of an EC2 Transit Gateway VPC Attachment +--- + +# Resource: aws_ec2_transit_gateway_vpc_attachment_accepter + +Manages the accepter's side of an EC2 Transit Gateway VPC Attachment. + +When a cross-account (requester's AWS account differs from the accepter's AWS account) EC2 Transit Gateway VPC Attachment +is created, an EC2 Transit Gateway VPC Attachment resource is automatically created in the accepter's account. +The requester can use the `awsEc2TransitGatewayVpcAttachment` resource to manage its side of the connection +and the accepter can use the `awsEc2TransitGatewayVpcAttachmentAccepter` resource to "adopt" its side of the +connection into management. 
+ +## Example Usage + +```terraform +resource "aws_ec2_transit_gateway_vpc_attachment_accepter" "example" { + transit_gateway_attachment_id = aws_ec2_transit_gateway_vpc_attachment.example.id + + tags = { + Name = "Example cross-account attachment" + } +} +``` + +A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach a VPC in the second account to the Transit Gateway via the `awsEc2TransitGatewayVpcAttachment` and `awsEc2TransitGatewayVpcAttachmentAccepter` resources can be found in [the `/examples/transitGatewayCrossAccountVpcAttachment` directory within the GitHub repository](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/transit-gateway-cross-account-vpc-attachment). + +## Argument Reference + +The following arguments are supported: + +* `transitGatewayAttachmentId` - (Required) The ID of the EC2 Transit Gateway Attachment to manage. +* `transitGatewayDefaultRouteTableAssociation` - (Optional) Boolean whether the VPC Attachment should be associated with the EC2 Transit Gateway association default route table. Default value: `true`. +* `transitGatewayDefaultRouteTablePropagation` - (Optional) Boolean whether the VPC Attachment should propagate routes with the EC2 Transit Gateway propagation default route table. Default value: `true`. +* `tags` - (Optional) Key-value tags for the EC2 Transit Gateway VPC Attachment. If configured with a provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - EC2 Transit Gateway Attachment identifier +* `tagsAll` - A map of tags assigned to the resource, including those inherited from the provider [`defaultTags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `applianceModeSupport` - Whether Appliance Mode support is enabled. Valid values: `disable`, `enable`. +* `dnsSupport` - Whether DNS support is enabled. Valid values: `disable`, `enable`. +* `ipv6Support` - Whether IPv6 support is enabled. Valid values: `disable`, `enable`. +* `subnetIds` - Identifiers of EC2 Subnets. +* `transitGatewayId` - Identifier of EC2 Transit Gateway. +* `vpcId` - Identifier of EC2 VPC. +* `vpcOwnerId` - Identifier of the AWS account that owns the EC2 VPC. + +## Import + +`awsEc2TransitGatewayVpcAttachmentAccepter` can be imported by using the EC2 Transit Gateway Attachment identifier, e.g., + +``` +$ terraform import aws_ec2_transit_gateway_vpc_attachment_accepter.example tgw-attach-12345678 +``` + + \ No newline at end of file diff --git a/website/docs/d/autoscaling_group.html.markdown b/website/docs/d/autoscaling_group.html.markdown index 615f922956a..ff1045a63b0 100644 --- a/website/docs/d/autoscaling_group.html.markdown +++ b/website/docs/d/autoscaling_group.html.markdown @@ -116,6 +116,9 @@ interpolation. * `propagate_at_launch` - Whether the tag is propagated to Amazon EC2 instances launched via this ASG. * `target_group_arns` - ARNs of the target groups for your load balancer. * `termination_policies` - The termination policies for the group. +* `traffic_source` - Traffic sources. + * `identifier` - Identifies the traffic source. For Application Load Balancers, Gateway Load Balancers, Network Load Balancers, and VPC Lattice, this will be the Amazon Resource Name (ARN) for a target group in this account and Region. 
For Classic Load Balancers, this will be the name of the Classic Load Balancer in this account and Region. + * `type` - Traffic source type. * `vpc_zone_identifier` - VPC ID for the group. * `warm_pool` - List of warm pool configuration objects. * `instance_reuse_policy` - List of instance reuse policy objects. diff --git a/website/docs/d/budgets_budget.html.markdown b/website/docs/d/budgets_budget.html.markdown new file mode 100644 index 00000000000..9597fd3d775 --- /dev/null +++ b/website/docs/d/budgets_budget.html.markdown @@ -0,0 +1,133 @@ +--- +subcategory: "Web Services Budgets" +layout: "aws" +page_title: "AWS: aws_budgets_budget" +description: |- + Terraform data source for managing an AWS Web Services Budgets Budget. +--- + +# Data Source: aws_budgets_budget + +Terraform data source for managing an AWS Web Services Budgets Budget. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_budgets_budget" "test" { + name = aws_budgets_budget.test.name +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - The name of a budget. Unique within accounts. + +The following arguments are optional: + +* `account_id` - The ID of the target account for the budget. Defaults to the current user's account ID if omitted. +* `name_prefix` - The prefix of the name of a budget. Unique within accounts. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `auto_adjust_data` - Object containing [AutoAdjustData](#auto-adjust-data) which determines the budget amount for an auto-adjusting budget. +* `budget_type` - Whether this budget tracks monetary cost or usage. +* `budget_limit` - The total amount of cost, usage, RI utilization, RI coverage, Savings Plans utilization, or Savings Plans coverage that you want to track with your budget. Contains object [Spend](#spend). +* `calculated_spend` - The spend objects that are associated with this budget. 
The [actualSpend](#actual-spend) tracks how much you've used (cost, usage, RI units, or Savings Plans units), and the [forecastedSpend](#forecasted-spend) tracks how much you're predicted to spend based on your historical usage profile. +* `cost_filter` - A list of [CostFilter](#cost-filter) name/value pairs to apply to the budget. +* `cost_types` - Object containing [CostTypes](#cost-types): the types of cost included in a budget, such as tax and subscriptions. +* `time_period_end` - The end of the time period covered by the budget. There are no restrictions on the end date. Format: `2017-01-01_12:00`. +* `time_period_start` - The start of the time period covered by the budget. If you don't specify a start date, AWS defaults to the start of your chosen time period. The start date must come before the end date. Format: `2017-01-01_12:00`. +* `time_unit` - The length of time until a budget resets the actual and forecasted spend. Valid values: `MONTHLY`, `QUARTERLY`, `ANNUALLY`, and `DAILY`. +* `notification` - Object containing [Budget Notifications](#budget-notification). Can be used multiple times to define more than one budget notification. +* `planned_limit` - Object containing [Planned Budget Limits](#planned-budget-limits). Can be used multiple times to plan more than one budget limit. See [PlannedBudgetLimits](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_budgets_Budget.html#awscostmanagement-Type-budgets_Budget-PlannedBudgetLimits) documentation. + +### Actual Spend + +The amount of cost, usage, RI units, or Savings Plans units that you used. Type is [Spend](#spend) + +### Auto Adjust Data + +The parameters that determine the budget amount for an auto-adjusting budget. + +`auto_adjust_type` (Required) - The string that defines whether your budget auto-adjusts based on historical or forecasted data. Valid values: `FORECAST`, `HISTORICAL` +`historical_options` (Optional) - Configuration block of [Historical Options](#historical-options). 
Required for `auto_adjust_type` of `HISTORICAL`. Configuration block that defines the historical data that your auto-adjusting budget is based on. +`last_auto_adjust_time` (Optional) - The last time that your budget was auto-adjusted. + +### Budget Notification + +Valid keys for `notification` parameter. + +* `comparison_operator` - (Required) Comparison operator to use to evaluate the condition. Can be `LESS_THAN`, `EQUAL_TO` or `GREATER_THAN`. +* `threshold` - (Required) Threshold when the notification should be sent. +* `threshold_type` - (Required) What kind of threshold is defined. Can be `PERCENTAGE` or `ABSOLUTE_VALUE`. +* `notification_type` - (Required) What kind of budget value to notify on. Can be `ACTUAL` or `FORECASTED`. +* `subscriber_email_addresses` - (Optional) Email addresses to notify. Either this or `subscriber_sns_topic_arns` is required. +* `subscriber_sns_topic_arns` - (Optional) SNS topics to notify. Either this or `subscriber_email_addresses` is required. + +### Cost Filter + +Based on your choice of budget type, you can choose one or more of the available budget filters. + +* `PurchaseType` +* `UsageTypeGroup` +* `Service` +* `Operation` +* `UsageType` +* `BillingEntity` +* `CostCategory` +* `LinkedAccount` +* `TagKeyValue` +* `LegalEntityName` +* `InvoicingEntity` +* `AZ` +* `Region` +* `InstanceType` + +Refer to [AWS CostFilter documentation](https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-create-filters.html) for further detail. + +### Cost Types + +Valid keys for `cost_types` parameter. + +* `include_credit` - A boolean value whether to include credits in the cost budget. Defaults to `true` +* `include_discount` - Whether a budget includes discounts. Defaults to `true` +* `include_other_subscription` - A boolean value whether to include other subscription costs in the cost budget. Defaults to `true` +* `include_recurring` - A boolean value whether to include recurring costs in the cost budget. 
Defaults to `true` +* `include_refund` - A boolean value whether to include refunds in the cost budget. Defaults to `true` +* `include_subscription` - A boolean value whether to include subscriptions in the cost budget. Defaults to `true` +* `include_support` - A boolean value whether to include support costs in the cost budget. Defaults to `true` +* `include_tax` - A boolean value whether to include tax in the cost budget. Defaults to `true` +* `include_upfront` - A boolean value whether to include upfront costs in the cost budget. Defaults to `true` +* `use_amortized` - Whether a budget uses the amortized rate. Defaults to `false` +* `use_blended` - A boolean value whether to use blended costs in the cost budget. Defaults to `false` + +Refer to [AWS CostTypes documentation](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_budgets_CostTypes.html) for further detail. + +### Forecasted Spend + +The amount of cost, usage, RI units, or Savings Plans units that you're forecasted to use. +Type is [Spend](#spend) + +### Historical Options + +`budget_adjustment_period` (Required) - The number of budget periods included in the moving-average calculation that determines your auto-adjusted budget amount. +`lookback_available_periods` (Optional) - The integer that describes how many budget periods in your BudgetAdjustmentPeriod are included in the calculation of your current budget limit. If the first budget period in your BudgetAdjustmentPeriod has no cost data, then that budget period isn’t included in the average that determines your budget limit. You can’t set your own LookBackAvailablePeriods. The value is automatically calculated from the `budget_adjustment_period` and your historical cost data. + +### Planned Budget Limits + +Valid keys for `planned_limit` parameter. + +* `start_time` - (Required) The start time of the budget limit. Format: `2017-01-01_12:00`. 
See [PlannedBudgetLimits](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_budgets_Budget.html#awscostmanagement-Type-budgets_Budget-PlannedBudgetLimits) documentation. +* `amount` - (Required) The amount of cost or usage being measured for a budget. +* `unit` - (Required) The unit of measurement used for the budget forecast, actual spend, or budget threshold, such as dollars or GB. See [Spend](http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/data-type-spend.html) documentation. + +### Spend + +`amount` - The cost or usage amount that's associated with a budget forecast, actual spend, or budget threshold. Length Constraints: Minimum length of `1`. Maximum length of `2147483647`. +`unit` - The unit of measurement that's used for the budget forecast, actual spend, or budget threshold, such as USD or GBP. Length Constraints: Minimum length of `1`. Maximum length of `2147483647`. diff --git a/website/docs/d/connect_hours_of_operation.html.markdown b/website/docs/d/connect_hours_of_operation.html.markdown index 3e9db8766b6..b4a117cdfb7 100644 --- a/website/docs/d/connect_hours_of_operation.html.markdown +++ b/website/docs/d/connect_hours_of_operation.html.markdown @@ -47,7 +47,6 @@ In addition to all of the arguments above, the following attributes are exported * `arn` - ARN of the Hours of Operation. * `config` - Configuration information for the hours of operation: day, start time, and end time . Config blocks are documented below. Config blocks are documented below. * `description` - Description of the Hours of Operation. -* `hours_of_operation_arn` - (**Deprecated**) ARN of the Hours of Operation. * `hours_of_operation_id` - The identifier for the hours of operation. * `instance_id` - Identifier of the hosting Amazon Connect Instance. * `name` - Name of the Hours of Operation. 
diff --git a/website/docs/d/db_instance.html.markdown b/website/docs/d/db_instance.html.markdown index c86f1326cab..de7e872520a 100644 --- a/website/docs/d/db_instance.html.markdown +++ b/website/docs/d/db_instance.html.markdown @@ -26,6 +26,8 @@ The following arguments are supported: ## Attributes Reference +~> **NOTE:** The `port` field may be empty while an Aurora cluster is still in the process of being created. This can occur if the cluster was initiated with the [AWS CLI `create-db-cluster`](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) command, but no DB instance has yet been added to it. + In addition to all arguments above, the following attributes are exported: * `address` - Hostname of the RDS instance. See also `endpoint` and `port`. @@ -55,7 +57,7 @@ In addition to all arguments above, the following attributes are exported: * `multi_az` - If the DB instance is a Multi-AZ deployment. * `network_type` - Network type of the DB instance. * `option_group_memberships` - Provides the list of option group memberships for this DB instance. -* `port` - Database port. +* `port` - Database endpoint port, primarily used by an Aurora DB cluster. For a conventional RDS DB instance, the `db_instance_port` is typically the preferred choice. * `preferred_backup_window` - Specifies the daily time range during which automated backups are created. * `preferred_maintenance_window` - Specifies the weekly time range during which system maintenance can occur in UTC. * `publicly_accessible` - Accessibility options for the DB instance. 
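To illustrate the `port` vs. `db_instance_port` distinction described above, a minimal sketch (the instance identifier and output name are placeholders, not taken from this document):

```terraform
# Sketch: for a conventional (non-Aurora) RDS instance, read
# db_instance_port rather than port, which is primarily meaningful
# for members of an Aurora DB cluster.
data "aws_db_instance" "database" {
  db_instance_identifier = "my-database" # placeholder identifier
}

output "database_port" {
  value = data.aws_db_instance.database.db_instance_port
}
```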
diff --git a/website/docs/d/ec2_transit_gateway_connect_peer.html.markdown b/website/docs/d/ec2_transit_gateway_connect_peer.html.markdown index e73e29be1ce..c8615df1a9c 100644 --- a/website/docs/d/ec2_transit_gateway_connect_peer.html.markdown +++ b/website/docs/d/ec2_transit_gateway_connect_peer.html.markdown @@ -49,6 +49,8 @@ In addition to all arguments above, the following attributes are exported: * `arn` - EC2 Transit Gateway Connect Peer ARN * `bgp_asn` - BGP ASN number assigned customer device +* `bgp_peer_address` - The IP address assigned to the customer device, which is used as the BGP IP address. +* `bgp_transit_gateway_addresses` - The IP addresses assigned to the Transit Gateway, which are used as the BGP IP addresses. * `inside_cidr_blocks` - CIDR blocks that will be used for addressing within the tunnel. * `peer_address` - IP addressed assigned to customer device, which is used as tunnel endpoint * `tags` - Key-value tags for the EC2 Transit Gateway Connect Peer diff --git a/website/docs/d/ec2_transit_gateway_route_tables.html.markdown b/website/docs/d/ec2_transit_gateway_route_tables.html.markdown index 04130769699..dbc00f455cb 100644 --- a/website/docs/d/ec2_transit_gateway_route_tables.html.markdown +++ b/website/docs/d/ec2_transit_gateway_route_tables.html.markdown @@ -18,7 +18,7 @@ The following shows outputting all Transit Gateway Route Table Ids. 
data "aws_ec2_transit_gateway_route_tables" "example" {} output "example" { - value = data.aws_ec2_transit_gateway_route_table.example.ids + value = data.aws_ec2_transit_gateway_route_tables.example.ids } ``` diff --git a/website/docs/d/ecr_pull_through_cache_rule.html.markdown b/website/docs/d/ecr_pull_through_cache_rule.html.markdown new file mode 100644 index 00000000000..b43ce8f4df0 --- /dev/null +++ b/website/docs/d/ecr_pull_through_cache_rule.html.markdown @@ -0,0 +1,33 @@ +--- +subcategory: "ECR (Elastic Container Registry)" +layout: "aws" +page_title: "AWS: aws_ecr_pull_through_cache_rule" +description: |- + Provides details about an ECR Pull Through Cache Rule +--- + +# Data Source: aws_ecr_pull_through_cache_rule + +The ECR Pull Through Cache Rule data source allows the upstream registry URL and registry ID to be retrieved for a Pull Through Cache Rule. + +## Example Usage + +```terraform +data "aws_ecr_pull_through_cache_rule" "ecr_public" { + ecr_repository_prefix = "ecr-public" +} +``` + +## Argument Reference + +The following arguments are supported: + +- `ecr_repository_prefix` - (Required) The repository name prefix to use when caching images from the source registry. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +- `id` - The repository name prefix. +- `upstream_registry_url` - The registry URL of the upstream public registry to use as the source. +- `registry_id` - The registry ID where the repository was created. diff --git a/website/docs/d/guardduty_finding_ids.html.markdown b/website/docs/d/guardduty_finding_ids.html.markdown new file mode 100644 index 00000000000..6ab4b0d970d --- /dev/null +++ b/website/docs/d/guardduty_finding_ids.html.markdown @@ -0,0 +1,34 @@ +--- +subcategory: "GuardDuty" +layout: "aws" +page_title: "AWS: aws_guardduty_finding_ids" +description: |- + Terraform data source for managing an AWS GuardDuty Finding Ids. 
+--- + +# Data Source: aws_guardduty_finding_ids + +Terraform data source for managing AWS GuardDuty finding IDs. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_guardduty_finding_ids" "example" { + detector_id = aws_guardduty_detector.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `detector_id` - (Required) ID of the GuardDuty detector. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `has_findings` - Indicates whether findings are present for the specified detector. +* `finding_ids` - A list of finding IDs for the specified detector. diff --git a/website/docs/d/iam_principal_policy_simulation.html.markdown b/website/docs/d/iam_principal_policy_simulation.html.markdown new file mode 100644 index 00000000000..b5a9e41b735 --- /dev/null +++ b/website/docs/d/iam_principal_policy_simulation.html.markdown @@ -0,0 +1,220 @@ +--- +subcategory: "IAM (Identity & Access Management)" +layout: "aws" +page_title: "AWS: aws_iam_principal_policy_simulation" +description: |- + Runs a simulation of the IAM policies of a particular principal against a given hypothetical request. +--- + +# Data Source: aws_iam_principal_policy_simulation + +Runs a simulation of the IAM policies of a particular principal against a given hypothetical request. + +You can use this data source in conjunction with +[Preconditions and Postconditions](https://www.terraform.io/language/expressions/custom-conditions#preconditions-and-postconditions) so that your configuration can test either whether it should have sufficient access to do its own work, or whether policies your configuration declares itself are sufficient for their intended use elsewhere. + +-> **Note:** Correctly using this data source requires familiarity with various details of AWS Identity and Access Management, and how various AWS services integrate with it.
For general information on the AWS IAM policy simulator, see [Testing IAM policies with the IAM policy simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html). This data source wraps the `iam:SimulatePrincipalPolicy` API action described on that page. + +## Example Usage + +### Self Access-checking Example + +The following example raises an error if the credentials passed to the AWS provider do not have access to perform the three actions `s3:GetObject`, `s3:PutObject`, and `s3:DeleteObject` on the S3 bucket with the given ARN. It combines `aws_iam_principal_policy_simulation` with the core Terraform postconditions feature. + +```terraform +data "aws_caller_identity" "current" {} + +data "aws_iam_principal_policy_simulation" "s3_object_access" { + action_names = [ + "s3:GetObject", + "s3:PutObject", + "s3:DeleteObject", + ] + policy_source_arn = data.aws_caller_identity.current.arn + resource_arns = ["arn:aws:s3:::my-test-bucket"] + + # The "lifecycle" and "postcondition" block types are part of + # the main Terraform language, not part of this data source. + lifecycle { + postcondition { + condition = self.all_allowed + error_message = < **Tip:** A "provisioning artifact" is also referred to as a "version." A "distributor" is also referred to as a "vendor." +~> **NOTE:** A "provisioning artifact" is also known as a "version," and a "distributor" is also known as a "vendor." ## Example Usage @@ -26,26 +26,26 @@ data "aws_servicecatalog_product" "example" { The following arguments are required: -* `id` - (Required) Product ID. +* `id` - (Required) ID of the product. The following arguments are optional: -* `accept_language` - (Optional) Language code. Valid values: `en` (English), `jp` (Japanese), `zh` (Chinese). Default value is `en`. +* `accept_language` - (Optional) Language code. Valid values are `en` (English), `jp` (Japanese), `zh` (Chinese). The default value is `en`. 
-## Attributes Reference +## Attribute Reference -In addition to all arguments above, the following attributes are exported: +This data source exports the following attributes in addition to the arguments above: * `arn` - ARN of the product. * `created_time` - Time when the product was created. * `description` - Description of the product. -* `distributor` - Distributor (i.e., vendor) of the product. +* `distributor` - Vendor of the product. * `has_default_path` - Whether the product has a default path. * `name` - Name of the product. * `owner` - Owner of the product. * `status` - Status of the product. -* `support_description` - Support information about the product. +* `support_description` - Field that provides support information about the product. * `support_email` - Contact email for product support. * `support_url` - Contact URL for product support. -* `tags` - Tags to apply to the product. +* `tags` - Tags applied to the product. * `type` - Type of product. diff --git a/website/docs/d/sesv2_email_identity.html.markdown b/website/docs/d/sesv2_email_identity.html.markdown new file mode 100644 index 00000000000..18cf93cd549 --- /dev/null +++ b/website/docs/d/sesv2_email_identity.html.markdown @@ -0,0 +1,43 @@ +--- +subcategory: "SESv2 (Simple Email V2)" +layout: "aws" +page_title: "AWS: aws_sesv2_email_identity" +description: |- + Terraform data source for managing an AWS SESv2 (Simple Email V2) Email Identity. +--- + +# Data Source: aws_sesv2_email_identity + +Terraform data source for managing an AWS SESv2 (Simple Email V2) Email Identity. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_sesv2_email_identity" "example" { + email_identity = "example.com" +} +``` + +## Argument Reference + +The following arguments are required: + +* `email_identity` - (Required) The name of the email identity. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the Email Identity. 
+* `dkim_signing_attributes` - A list of objects that contains at most one element, with information about the private key and selector used to configure DKIM for the identity with Bring Your Own DKIM (BYODKIM), or the key length used for Easy DKIM. + * `current_signing_key_length` - [Easy DKIM] The key length of the DKIM key pair in use. + * `last_key_generation_timestamp` - [Easy DKIM] The last time a key pair was generated for this identity. + * `next_signing_key_length` - [Easy DKIM] The key length of the future DKIM key pair to be generated. This can be changed at most once per day. + * `signing_attributes_origin` - A string that indicates how DKIM was configured for the identity. `AWS_SES` indicates that DKIM was configured for the identity by using Easy DKIM. `EXTERNAL` indicates that DKIM was configured for the identity by using Bring Your Own DKIM (BYODKIM). + * `status` - Describes whether or not Amazon SES has successfully located the DKIM records in the DNS records for the domain. See the [AWS SES API v2 Reference](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_DkimAttributes.html#SES-Type-DkimAttributes-Status) for supported statuses. + * `tokens` - If you used Easy DKIM to configure DKIM authentication for the domain, then this object contains a set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete. If you configured DKIM authentication for the domain by providing your own public-private key pair, then this object contains the selector for the public key. +* `identity_type` - The email identity type. Valid values: `EMAIL_ADDRESS`, `DOMAIN`. +* `tags` - Key-value mapping of resource tags. +* `verified_for_sending_status` - Specifies whether or not the identity is verified.
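For Easy DKIM, the `tokens` attribute above is typically used to publish one CNAME record per token. A minimal sketch, assuming a hypothetical Route 53 hosted zone `aws_route53_zone.example` for the `example.com` domain:

```terraform
data "aws_sesv2_email_identity" "example" {
  email_identity = "example.com"
}

# Easy DKIM: each token becomes a CNAME record of the form
# <token>._domainkey.<domain> pointing at <token>.dkim.amazonses.com
resource "aws_route53_record" "dkim" {
  count   = 3
  zone_id = aws_route53_zone.example.zone_id
  name    = "${data.aws_sesv2_email_identity.example.dkim_signing_attributes[0].tokens[count.index]}._domainkey.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["${data.aws_sesv2_email_identity.example.dkim_signing_attributes[0].tokens[count.index]}.dkim.amazonses.com"]
}
```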
diff --git a/website/docs/d/sesv2_email_identity_mail_from_attributes.html.markdown b/website/docs/d/sesv2_email_identity_mail_from_attributes.html.markdown new file mode 100644 index 00000000000..01e4096b503 --- /dev/null +++ b/website/docs/d/sesv2_email_identity_mail_from_attributes.html.markdown @@ -0,0 +1,38 @@ +--- +subcategory: "SESv2 (Simple Email V2)" +layout: "aws" +page_title: "AWS: aws_sesv2_email_identity_mail_from_attributes" +description: |- + Terraform data source for managing an AWS SESv2 (Simple Email V2) Email Identity Mail From Attributes. +--- + +# Data Source: aws_sesv2_email_identity_mail_from_attributes + +Terraform data source for managing an AWS SESv2 (Simple Email V2) Email Identity Mail From Attributes. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_sesv2_email_identity" "example" { + email_identity = "example.com" +} + +data "aws_sesv2_email_identity_mail_from_attributes" "example" { + email_identity = data.aws_sesv2_email_identity.example.email_identity +} +``` + +## Argument Reference + +The following arguments are required: + +* `email_identity` - (Required) The name of the email identity. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `behavior_on_mx_failure` - The action to take if the required MX record isn't found when you send an email. Valid values: `USE_DEFAULT_VALUE`, `REJECT_MESSAGE`. +* `mail_from_domain` - The custom MAIL FROM domain that you want the verified identity to use. diff --git a/website/docs/d/sfn_alias.html.markdown b/website/docs/d/sfn_alias.html.markdown new file mode 100644 index 00000000000..1d5b0695a36 --- /dev/null +++ b/website/docs/d/sfn_alias.html.markdown @@ -0,0 +1,38 @@ +--- +subcategory: "SFN (Step Functions)" +layout: "aws" +page_title: "AWS: aws_sfn_alias" +description: |- + Terraform data source for managing an AWS SFN (Step Functions) State Machine Alias. 
+--- + +# Data Source: aws_sfn_alias + +Terraform data source for managing an AWS SFN (Step Functions) State Machine Alias. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_sfn_alias" "example" { + name = "my_sfn_alias" + statemachine_arn = aws_sfn_state_machine.sfn_test.arn +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name of the State Machine alias. +* `statemachine_arn` - (Required) ARN of the State Machine. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN identifying the State Machine alias. +* `creation_date` - Date the state machine alias was created. +* `description` - Description of the state machine alias. +* `routing_configuration` - Routing configuration of the state machine alias. diff --git a/website/docs/d/sfn_state_machine.html.markdown b/website/docs/d/sfn_state_machine.html.markdown index 0b7e2e0b6cf..9c2168200a4 100644 --- a/website/docs/d/sfn_state_machine.html.markdown +++ b/website/docs/d/sfn_state_machine.html.markdown @@ -31,4 +31,5 @@ data "aws_sfn_state_machine" "example" { * `role_arn` - Set to the role_arn used by the state function. * `definition` - Set to the state machine definition. * `creation_date` - Date the state machine was created. +* `revision_id` - The revision identifier for the state machine. * `status` - Set to the current status of the state machine. diff --git a/website/docs/d/sfn_state_machine_versions.html.markdown b/website/docs/d/sfn_state_machine_versions.html.markdown new file mode 100644 index 00000000000..4b1050c2683 --- /dev/null +++ b/website/docs/d/sfn_state_machine_versions.html.markdown @@ -0,0 +1,34 @@ +--- +subcategory: "SFN (Step Functions)" +layout: "aws" +page_title: "AWS: aws_sfn_state_machine_versions" +description: |- + Terraform data source for managing AWS SFN (Step Functions) State Machine Versions.
+--- + +# Data Source: aws_sfn_state_machine_versions + +Terraform data source for managing AWS SFN (Step Functions) State Machine Versions. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_sfn_state_machine_versions" "test" { + statemachine_arn = aws_sfn_state_machine.test.arn +} +``` + +## Argument Reference + +The following arguments are required: + +* `statemachine_arn` - (Required) ARN of the State Machine. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `statemachine_versions` - List of ARNs identifying the state machine versions. diff --git a/website/docs/d/ssm_parameter.html.markdown b/website/docs/d/ssm_parameter.html.markdown index 6f929ab6aed..0a21e3758e6 100644 --- a/website/docs/d/ssm_parameter.html.markdown +++ b/website/docs/d/ssm_parameter.html.markdown @@ -36,4 +36,5 @@ In addition to all arguments above, the following attributes are exported: * `name` - Name of the parameter. * `type` - Type of the parameter. Valid types are `String`, `StringList` and `SecureString`. * `value` - Value of the parameter. This value is always marked as sensitive in the Terraform plan output, regardless of `type`. In Terraform CLI version 0.15 and later, this may require additional configuration handling for certain scenarios. For more information, see the [Terraform v0.15 Upgrade Guide](https://www.terraform.io/upgrade-guides/0-15.html#sensitive-output-values). +* `insecure_value` - Value of the parameter. **Use caution:** This value is never marked as sensitive. * `version` - Version of the parameter.
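The difference between `value` and `insecure_value` matters when the parameter feeds an output. A brief sketch (the parameter name is hypothetical):

```terraform
data "aws_ssm_parameter" "endpoint" {
  name = "/app/config/endpoint"
}

# `value` is always marked sensitive, so an output built from it
# must be declared sensitive as well.
output "endpoint_sensitive" {
  value     = data.aws_ssm_parameter.endpoint.value
  sensitive = true
}

# `insecure_value` is never marked sensitive; use it only for
# parameters that genuinely hold non-secret configuration.
output "endpoint_plain" {
  value = data.aws_ssm_parameter.endpoint.insecure_value
}
```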
diff --git a/website/docs/d/vpclattice_resource_policy.html.markdown b/website/docs/d/vpclattice_resource_policy.html.markdown new file mode 100644 index 00000000000..c7ccff9e2ad --- /dev/null +++ b/website/docs/d/vpclattice_resource_policy.html.markdown @@ -0,0 +1,33 @@ +--- +subcategory: "VPC Lattice" +layout: "aws" +page_title: "AWS: aws_vpclattice_resource_policy" +description: |- + Terraform data source for managing an AWS VPC Lattice Resource Policy. +--- + +# Data Source: aws_vpclattice_resource_policy + +Terraform data source for managing an AWS VPC Lattice Resource Policy. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_vpclattice_resource_policy" "example" { + resource_arn = aws_vpclattice_service_network.example.arn +} +``` + +## Argument Reference + +The following arguments are required: + +* `resource_arn` - (Required) Resource ARN of the resource for which a policy is retrieved. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `policy` - JSON-encoded string representation of the applied resource policy. diff --git a/website/docs/d/wafv2_ip_set.html.markdown b/website/docs/d/wafv2_ip_set.html.markdown index 580047739cd..e1c2603a2ef 100644 --- a/website/docs/d/wafv2_ip_set.html.markdown +++ b/website/docs/d/wafv2_ip_set.html.markdown @@ -30,7 +30,7 @@ The following arguments are supported: In addition to all arguments above, the following attributes are exported: -* `addresses` - An array of strings that specify one or more IP addresses or blocks of IP addresses in Classless Inter-Domain Routing (CIDR) notation. +* `addresses` - An array of strings that specifies zero or more IP addresses or blocks of IP addresses in Classless Inter-Domain Routing (CIDR) notation. * `arn` - ARN of the entity. * `description` - Description of the set that helps with identification. * `id` - Unique identifier for the set. 
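The `policy` attribute returned by `aws_vpclattice_resource_policy` is a JSON-encoded string, so it can be decoded for inspection with Terraform's built-in `jsondecode` function. A sketch, assuming a hypothetical service network resource named `example`:

```terraform
data "aws_vpclattice_resource_policy" "example" {
  resource_arn = aws_vpclattice_service_network.example.arn
}

output "policy_statement_actions" {
  # jsondecode turns the JSON string into a Terraform object,
  # from which the Statement actions can be extracted.
  value = jsondecode(data.aws_vpclattice_resource_policy.example.policy).Statement[*].Action
}
```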
diff --git a/website/docs/guides/continuous-validation-examples.html.md b/website/docs/guides/continuous-validation-examples.html.md new file mode 100644 index 00000000000..6f8bc513fd6 --- /dev/null +++ b/website/docs/guides/continuous-validation-examples.html.md @@ -0,0 +1,131 @@ +--- +subcategory: "" +layout: "aws" +page_title: "Using Terraform Cloud's Continuous Validation feature with the AWS Provider" +description: |- + Using Terraform Cloud's Continuous Validation feature with the AWS Provider +--- + +# Using Terraform Cloud's Continuous Validation feature with the AWS Provider + +## Continuous Validation in Terraform Cloud + +The Continuous Validation feature in Terraform Cloud (TFC) allows users to make assertions about their infrastructure between applied runs. This helps users to identify issues at the time they first appear and avoid situations where a change is only identified once it causes a customer-facing problem. + +Users can add checks to their Terraform configuration using check blocks. Check blocks contain assertions that are defined with a custom condition expression and an error message. When the condition expression evaluates to true the check passes, but when the expression evaluates to false Terraform will show a warning message that includes the user-defined error message. + +Custom conditions can be created using data from Terraform providers’ resources and data sources. Data can also be combined from multiple sources; for example, you can use checks to monitor expirable resources by comparing a resource’s expiration date attribute to the current time returned by Terraform’s built-in time functions. + +Below, this guide shows examples of how data returned by the AWS provider can be used to define checks in your Terraform configuration. + +## Example - Ensure your AWS account is within budget (aws_budgets_budget) + +AWS Budgets allows you to track and take action on your AWS costs and usage. 
You can use AWS Budgets to monitor your aggregate utilization and coverage metrics for your Reserved Instances (RIs) or Savings Plans. + +You can use AWS Budgets to enable simple-to-complex cost and usage tracking. Some examples include: + +- Setting a monthly cost budget with a fixed target amount to track all costs associated with your account. + +- Setting a monthly cost budget with a variable target amount, with each subsequent month growing the budget target by 5 percent. + +- Setting a monthly usage budget with a fixed usage amount and forecasted notifications to help ensure that you are staying within the service limits for a specific service. + +- Setting a daily utilization or coverage budget to track your RI or Savings Plans. + +The example below shows how a check block can be used to assert that you remain in compliance for the budgets that have been established. + +```hcl +check "check_budget_exceeded" { + data "aws_budgets_budget" "example" { + name = aws_budgets_budget.example.name + } + + assert { + condition = !data.aws_budgets_budget.example.budget_exceeded + error_message = format("AWS budget has been exceeded! Calculated spend: '%s' and budget limit: '%s'", + data.aws_budgets_budget.example.calculated_spend[0].actual_spend[0].amount, + data.aws_budgets_budget.example.budget_limit[0].amount + ) + } +} +``` + +If the budget exceeds the set limit, the check block assertion will return a warning similar to the following: + +``` +│ Warning: Check block assertion failed +│ +│ on main.tf line 43, in check "check_budget_exceeded": +│ 43: condition = !data.aws_budgets_budget.example.budget_exceeded +│ ├──────────────── +│ │ data.aws_budgets_budget.example.budget_exceeded is true +│ +│ AWS budget has been exceeded!
Calculated spend: '1550.0' and budget limit: '1200.0' +``` + +## Example - Check GuardDuty for Threats (aws_guardduty_finding_ids) + +Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your Amazon Web Services accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in Amazon Web Services Cloud. + +The following example outlines how a check block can be utilized to assert that no threats have been identified from AWS GuardDuty. + +```hcl +data "aws_guardduty_detector" "example" {} + +check "check_guardduty_findings" { + data "aws_guardduty_finding_ids" "example" { + detector_id = data.aws_guardduty_detector.example.id + } + + assert { + condition = !data.aws_guardduty_finding_ids.example.has_findings + error_message = format("AWS GuardDuty detector '%s' has %d open findings!", + data.aws_guardduty_finding_ids.example.detector_id, + length(data.aws_guardduty_finding_ids.example.finding_ids), + ) + } +} +``` + +If findings are present, the check block assertion will return a warning similar to the following: + +``` +│ Warning: Check block assertion failed +│ +│ on main.tf line 24, in check "check_guardduty_findings": +│ 24: condition = !data.aws_guardduty_finding_ids.example.has_findings +│ ├──────────────── +│ │ data.aws_guardduty_finding_ids.example.has_findings is true +│ +│ AWS GuardDuty detector 'abcdef123456' has 9 open findings! +``` + +## Example - Check for unused IAM roles (aws_iam_role) + +AWS IAM tracks role usage, including the [last used date and region](https://docs.aws.amazon.com/IAM/latest/APIReference/API_RoleLastUsed.html). 
This information is returned with the [`aws_iam_role`](../d/iam_role.html.markdown) data source, and can be used in continuous validation to check for unused roles. AWS reports activity for the trailing 400 days. If a role is unused within that period, the `last_used_date` will be an empty string (`""`). + +In the example below, the [`timecmp`](https://developer.hashicorp.com/terraform/language/functions/timecmp) function checks for a `last_used_date` more recent than the `unused_limit` local variable (30 days ago). The [`coalesce`](https://developer.hashicorp.com/terraform/language/functions/coalesce) function handles empty (`""`) `last_used_date` values safely, falling back to the `unused_limit` local, and automatically triggering a failed condition. + +```hcl +locals { + unused_limit = timeadd(timestamp(), "-720h") +} + +check "check_iam_role_unused" { + data "aws_iam_role" "example" { + name = aws_iam_role.example.name + } + + assert { + condition = ( + timecmp( + coalesce(data.aws_iam_role.example.role_last_used[0].last_used_date, local.unused_limit), + local.unused_limit, + ) > 0 + ) + error_message = format("AWS IAM role '%s' is unused in the last 30 days!", + data.aws_iam_role.example.name, + ) + } +} +``` diff --git a/website/docs/guides/custom-service-endpoints.html.md b/website/docs/guides/custom-service-endpoints.html.md index bf5da3f2d8f..83116bcc844 100644 --- a/website/docs/guides/custom-service-endpoints.html.md +++ b/website/docs/guides/custom-service-endpoints.html.md @@ -372,6 +372,7 @@ provider "aws" {
  • transcribestreaming (or transcribestreamingservice)
  • transfer
  • translate
  • +
  • verifiedpermissions
  • voiceid
  • vpclattice
  • waf
  • diff --git a/website/docs/guides/version-5-upgrade.html.md b/website/docs/guides/version-5-upgrade.html.md index b1bdb6ac6b2..18141522af3 100644 --- a/website/docs/guides/version-5-upgrade.html.md +++ b/website/docs/guides/version-5-upgrade.html.md @@ -8,9 +8,7 @@ description: |- # Terraform AWS Provider Version 5 Upgrade Guide -Version 5.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. We intend this guide to help with that process and focus only on changes from version 4.X to version 5.0.0. See the [Version 4 Upgrade Guide](/docs/providers/aws/guides/version-4-upgrade.html) for information about upgrading from 3.X to version 4.0.0. - -We previously marked most of the changes we outline in this guide as deprecated in the Terraform plan/apply output throughout previous provider releases. You can find these changes, including deprecation notices, in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). +Version 5.0.0 of the AWS provider for Terraform is a major release and includes changes that you need to consider when upgrading. This guide will help with that process and focuses only on changes from version 4.x to version 5.0.0. See the [Version 4 Upgrade Guide](/docs/providers/aws/guides/version-4-upgrade.html) for information on upgrading from 3.x to version 4.0.0. 
Upgrade topics: @@ -19,36 +17,88 @@ Upgrade topics: - [Provider Version Configuration](#provider-version-configuration) - [Provider Arguments](#provider-arguments) - [Default Tags](#default-tags) -- [Data Source: aws_api_gateway_rest_api](#data-source-aws_api_gateway_rest_api) -- [Data Source: aws_identitystore_group](#data-source-aws_identitystore_group) -- [Data Source: aws_identitystore_user](#data-source-aws_identitystore_user) -- [Data Source: aws_redshift_service_account](#data-source-aws_redshift_service_account) -- [Data Source: aws_subnet_ids](#data-source-aws_subnet_ids) -- [Resource: aws_acmpca_certificate_authority](#resource-aws_acmpca_certificate_authority) -- [Resource: aws_api_gateway_rest_api](#resource-aws_api_gateway_rest_api) -- [Resource: aws_autoscaling_group](#resource-aws_autoscaling_group) -- [Resource: aws_budgets_budget](#resource-aws_budgets_budget) -- [Resource: aws_ce_anomaly_subscription](#resource-aws_ce_anomaly_subscription) -- [Resource: aws_cloudwatch_event_target](#resource-aws_cloudwatch_event_target) -- [Resource: aws_connect_queue](#resource-aws_connect_queue) -- [Resource: aws_connect_routing_profile](#resource-aws_connect_routing_profile) -- [Resource: aws_docdb_cluster](#resource-aws_docdb_cluster) -- [Resource: aws_ec2_client_vpn_endpoint](#resource-aws_ec2_client_vpn_endpoint) -- [Resource: aws_ec2_client_vpn_network_association](#resource-aws_ec2_client_vpn_network_association) -- [Resource: aws_ecs_cluster](#resource-aws_ecs_cluster) -- [Resource: aws_msk_cluster](#resource-aws_msk_cluster) -- [Resource: aws_neptune_cluster](#resource-aws_neptune_cluster) -- [Resource: aws_rds_cluster](#resource-aws_rds_cluster) -- [Resource: aws_wafv2_web_acl](#resource-aws_wafv2_web_acl) - - - -Additional Topics: - - - - [EC2-Classic Retirement](#ec2-classic-retirement) - [Macie Classic Retirement](#macie-classic-retirement) +- [resource/aws_acmpca_certificate_authority](#resourceaws_acmpca_certificate_authority) +- 
[resource/aws_api_gateway_rest_api](#resourceaws_api_gateway_rest_api) +- [resource/aws_autoscaling_attachment](#resourceaws_autoscaling_attachment) +- [resource/aws_autoscaling_group](#resourceaws_autoscaling_group) +- [resource/aws_budgets_budget](#resourceaws_budgets_budget) +- [resource/aws_ce_anomaly_subscription](#resourceaws_ce_anomaly_subscription) +- [resource/aws_cloudwatch_event_target](#resourceaws_cloudwatch_event_target) +- [resource/aws_codebuild_project](#resourceaws_codebuild_project) +- [resource/aws_connect_hours_of_operation](#resourceaws_connect_hours_of_operation) +- [resource/aws_connect_queue](#resourceaws_connect_queue) +- [resource/aws_connect_routing_profile](#resourceaws_connect_routing_profile) +- [resource/aws_db_event_subscription](#resourceaws_db_event_subscription) +- [resource/aws_db_instance_role_association](#resourceaws_db_instance_role_association) +- [resource/aws_db_instance](#resourceaws_db_instance) +- [resource/aws_db_proxy_target](#resourceaws_db_proxy_target) +- [resource/aws_db_security_group](#resourceaws_db_security_group) +- [resource/aws_db_snapshot](#resourceaws_db_snapshot) +- [resource/aws_default_vpc](#resourceaws_default_vpc) +- [resource/aws_dms_endpoint](#resourceaws_dms_endpoint) +- [resource/aws_docdb_cluster](#resourceaws_docdb_cluster) +- [resource/aws_dx_gateway_association](#resourceaws_dx_gateway_association) +- [resource/aws_ec2_client_vpn_endpoint](#resourceaws_ec2_client_vpn_endpoint) +- [resource/aws_ec2_client_vpn_network_association](#resourceaws_ec2_client_vpn_network_association) +- [resource/aws_ecs_cluster](#resourceaws_ecs_cluster) +- [resource/aws_eip](#resourceaws_eip) +- [resource/aws_eip_association](#resourceaws_eip_association) +- [resource/aws_eks_addon](#resourceaws_eks_addon) +- [resource/aws_elasticache_cluster](#resourceaws_elasticache_cluster) +- [resource/aws_elasticache_replication_group](#resourceaws_elasticache_replication_group) +- 
[resource/aws_elasticache_security_group](#resourceaws_elasticache_security_group) +- [resource/aws_flow_log](#resourceaws_flow_log) +- [resource/aws_guardduty_organization_configuration](#resourceaws_guardduty_organization_configuration) +- [resource/aws_kinesis_firehose_delivery_stream](#resourceaws_kinesis_firehose_delivery_stream) +- [resource/aws_launch_configuration](#resourceaws_launch_configuration) +- [resource/aws_launch_template](#resourceaws_launch_template) +- [resource/aws_lightsail_instance](#resourceaws_lightsail_instance) +- [resource/aws_macie_member_account_association](#resourceaws_macie_member_account_association) +- [resource/aws_macie_s3_bucket_association](#resourceaws_macie_s3_bucket_association) +- [resource/aws_medialive_multiplex_program](#resourceaws_medialive_multiplex_program) +- [resource/aws_msk_cluster](#resourceaws_msk_cluster) +- [resource/aws_neptune_cluster](#resourceaws_neptune_cluster) +- [resource/aws_networkmanager_core_network](#resourceaws_networkmanager_core_network) +- [resource/aws_opensearch_domain](#resourceaws_opensearch_domain) +- [resource/aws_rds_cluster](#resourceaws_rds_cluster) +- [resource/aws_rds_cluster_instance](#resourceaws_rds_cluster_instance) +- [resource/aws_redshift_cluster](#resourceaws_redshift_cluster) +- [resource/aws_redshift_security_group](#resourceaws_redshift_security_group) +- [resource/aws_route](#resourceaws_route) +- [resource/aws_route_table](#resourceaws_route_table) +- [resource/aws_s3_object](#resourceaws_s3_object) +- [resource/aws_s3_object_copy](#resourceaws_s3_object_copy) +- [resource/aws_secretsmanager_secret](#resourceaws_secretsmanager_secret) +- [resource/aws_security_group](#resourceaws_security_group) +- [resource/aws_security_group_rule](#resourceaws_security_group_rule) +- [resource/aws_servicecatalog_product](#resourceaws_servicecatalog_product) +- [resource/aws_ssm_association](#resourceaws_ssm_association) +- [resource/aws_ssm_parameter](#resourceaws_ssm_parameter) +- 
[resource/aws_vpc](#resourceaws_vpc) +- [resource/aws_vpc_peering_connection](#resourceaws_vpc_peering_connection) +- [resource/aws_vpc_peering_connection_accepter](#resourceaws_vpc_peering_connection_accepter) +- [resource/aws_vpc_peering_connection_options](#resourceaws_vpc_peering_connection_options) +- [resource/aws_wafv2_web_acl](#resourceaws_wafv2_web_acl) +- [resource/aws_wafv2_web_acl_logging_configuration](#resourceaws_wafv2_web_acl_logging_configuration) +- [data-source/aws_api_gateway_rest_api](#data-sourceaws_api_gateway_rest_api) +- [data-source/aws_connect_hours_of_operation](#data-sourceaws_connect_hours_of_operation) +- [data-source/aws_db_instance](#data-sourceaws_db_instance) +- [data-source/aws_elasticache_cluster](#data-sourceaws_elasticache_cluster) +- [data-source/aws_elasticache_replication_group](#data-sourceaws_elasticache_replication_group) +- [data-source/aws_iam_policy_document](#data-sourceaws_iam_policy_document) +- [data-source/aws_identitystore_group](#data-sourceaws_identitystore_group) +- [data-source/aws_identitystore_user](#data-sourceaws_identitystore_user) +- [data-source/aws_launch_configuration](#data-sourceaws_launch_configuration) +- [data-source/aws_opensearch_domain](#data-sourceaws_opensearch_domain) +- [data-source/aws_quicksight_data_set](#data-sourceaws_quicksight_data_set) +- [data-source/aws_redshift_cluster](#data-sourceaws_redshift_cluster) +- [data-source/aws_redshift_service_account](#data-sourceaws_redshift_service_account) +- [data-source/aws_secretsmanager_secret](#data-sourceaws_secretsmanager_secret) +- [data-source/aws_service_discovery_service](#data-sourceaws_service_discovery_service) +- [data-source/aws_subnet_ids](#data-sourceaws_subnet_ids) +- [data-source/aws_vpc_peering_connection](#data-sourceaws_vpc_peering_connection) @@ -102,131 +152,508 @@ Version 5.0.0 removes these `provider` arguments: * `shared_credentials_file` - Use `shared_credentials_files` instead * `skip_get_ec2_platforms` - Removed 
following the retirement of EC2-Classic -## Resource: aws_acmpca_certificate_authority +## Default Tags -The `status` attribute is superfluous and sometimes incorrect. It has been removed. +The following enhancements are included: -## Resource: aws_api_gateway_rest_api +* Duplicate `default_tags` can now be included and will be overwritten by resource `tags`. +* Zero value tags, `""`, can now be included in both `default_tags` and resource `tags`. +* Tags can now be `computed`. + +## EC2-Classic Retirement + +Following the retirement of EC2-Classic, we removed a number of resources, arguments, and attributes. This list summarizes what we _removed_: + +* `aws_db_security_group` resource +* `aws_elasticache_security_group` resource +* `aws_redshift_security_group` resource +* [`aws_db_instance`](/docs/providers/aws/r/db_instance.html) resource's `security_group_names` argument +* [`aws_elasticache_cluster`](/docs/providers/aws/r/elasticache_cluster.html) resource's `security_group_names` argument +* [`aws_redshift_cluster`](/docs/providers/aws/r/redshift_cluster.html) resource's `cluster_security_groups` argument +* [`aws_launch_configuration`](/docs/providers/aws/r/launch_configuration.html) resource's `vpc_classic_link_id` and `vpc_classic_link_security_groups` arguments +* [`aws_vpc`](/docs/providers/aws/r/vpc.html) resource's `enable_classiclink` and `enable_classiclink_dns_support` arguments +* [`aws_default_vpc`](/docs/providers/aws/r/default_vpc.html) resource's `enable_classiclink` and `enable_classiclink_dns_support` arguments +* [`aws_vpc_peering_connection`](/docs/providers/aws/r/vpc_peering_connection.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments +* [`aws_vpc_peering_connection_accepter`](/docs/providers/aws/r/vpc_peering_connection_accepter.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments +* 
[`aws_vpc_peering_connection_options`](/docs/providers/aws/r/vpc_peering_connection_options.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments +* [`aws_db_instance`](/docs/providers/aws/d/db_instance.html) data source's `db_security_groups` attribute +* [`aws_elasticache_cluster`](/docs/providers/aws/d/elasticache_cluster.html) data source's `security_group_names` attribute +* [`aws_redshift_cluster`](/docs/providers/aws/d/redshift_cluster.html) data source's `cluster_security_groups` attribute +* [`aws_launch_configuration`](/docs/providers/aws/d/launch_configuration.html) data source's `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes + +## Macie Classic Retirement + +Following the retirement of Amazon Macie Classic, we removed these resources: + +* `aws_macie_member_account_association` +* `aws_macie_s3_bucket_association` + +## resource/aws_acmpca_certificate_authority + +Remove `status` from configurations as it no longer exists. + +## resource/aws_api_gateway_rest_api The `minimum_compression_size` attribute is now a String type, allowing it to be computed when set via the `body` attribute. Valid values remain the same. -## Resource: aws_autoscaling_group +## resource/aws_autoscaling_attachment + +Change `alb_target_group_arn`, which no longer exists, to `lb_target_group_arn` in configurations. + +## resource/aws_autoscaling_group -The `tags` attribute has been removed. Use the `tag` attribute instead. For use cases requiring dynamic tags, see the [Dynamic Tagging example](../r/autoscaling_group.html.markdown#dynamic-tagging). +Remove `tags` from configurations as it no longer exists. Use the `tag` attribute instead. For use cases requiring dynamic tags, see the [Dynamic Tagging example](../r/autoscaling_group.html.markdown#dynamic-tagging). -## Resource: aws_budgets_budget +## resource/aws_budgets_budget -The `cost_filters` attribute has been removed. 
+Remove `cost_filters` from configurations as it no longer exists. -## Resource: aws_ce_anomaly_subscription +## resource/aws_ce_anomaly_subscription -The `threshold` attribute has been removed. +Remove `threshold` from configurations as it no longer exists. -## Resource: aws_cloudwatch_event_target +## resource/aws_cloudwatch_event_target The `ecs_target.propagate_tags` attribute now has no default value. If no value is specified, the tags are not propagated. -## Resource: aws_connect_queue +## resource/aws_codebuild_project -The `quick_connect_ids_associated` attribute has been removed. +Remove `secondary_sources.auth` and `source.auth` from configurations as they no longer exist. -## Resource: aws_connect_routing_profile +## resource/aws_connect_hours_of_operation -The `queue_configs_associated` attribute has been removed. +Remove `hours_of_operation_arn` from configurations as it no longer exists. -## Resource: aws_docdb_cluster +## resource/aws_connect_queue -Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings behavior of the cluster `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`. +Remove `quick_connect_ids_associated` from configurations as it no longer exists. -Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. +## resource/aws_connect_routing_profile + +Remove `queue_configs_associated` from configurations as it no longer exists. + +## resource/aws_db_event_subscription + +Configurations that define `source_ids` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. 
For example, `source_ids = [aws_db_instance.example.id]` must be updated to `source_ids = [aws_db_instance.example.identifier]`. + +## resource/aws_db_instance + +`aws_db_instance` has had a number of changes: + +1. [`id` is no longer the identifier](#aws_db_instanceid-is-no-longer-the-identifier) +2. [Use `db_name` instead of `name`](#use-db_name-instead-of-name) +3. [Remove `db_security_groups`](#remove-db_security_groups) + +### aws_db_instance.id is no longer the identifier + +**What `id` _is_ has changed and can have far-reaching consequences.** Fortunately, fixing configurations is straightforward. + +`id` is _now_ the DBI Resource ID (_i.e._, `dbi-resource-id`), an immutable "identifier" for an instance. `id` is now the same as the `resource_id`. (We recommend using `resource_id` rather than `id` when you need to refer to the DBI Resource ID.) _Previously_, `id` was the DB Identifier. Now when you need to refer to the _DB Identifier_, use `identifier`. + +Fixing configurations involves changing any `id` references to `identifier`, where the reference expects the DB Identifier. For example, if you're replicating an `aws_db_instance`, you can no longer use `id` to define the `replicate_source_db`. + +This configuration will now result in an error since `replicate_source_db` expects a _DB Identifier_: + +```terraform +resource "aws_db_instance" "test" { + replicate_source_db = aws_db_instance.source.id + # ...other configuration... +} +``` + +You can fix the configuration like this: + +```terraform +resource "aws_db_instance" "test" { + replicate_source_db = aws_db_instance.source.identifier + # ...other configuration... +} +``` + +### Use `db_name` instead of `name` + +Change `name` to `db_name` in configurations as `name` no longer exists. + +### Remove `db_security_groups` + +Remove `db_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. 
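+
+Taken together, a hypothetical v4-era configuration would be updated for v5 as follows (a minimal sketch; the identifier and database name are illustrative):
+
+```terraform
+# v4 configuration (no longer valid)
+resource "aws_db_instance" "example" {
+  identifier = "example"
+  name       = "mydb" # `name` no longer exists; use `db_name`
+  # ...other configuration...
+}
+
+# v5 configuration
+resource "aws_db_instance" "example" {
+  identifier = "example"
+  db_name    = "mydb"
+  # ...other configuration...
+}
+```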
+ +## resource/aws_db_instance_role_association + +Configurations that define `db_instance_identifier` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`. + +## resource/aws_db_proxy_target -## Resource: aws_ec2_client_vpn_endpoint +Configurations that define `db_instance_identifier` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`. -The `security_groups` and `status` attributes have been removed. +## resource/aws_db_security_group -## Resource: aws_ec2_client_vpn_network_association +We removed this resource as part of the EC2-Classic retirement. -The `status` attribute has been removed. +## resource/aws_db_snapshot -## Resource: aws_ecs_cluster +Configurations that define `db_instance_identifier` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`. -The `capacity_providers` and `default_capacity_provider_strategy` attributes have been removed. +## resource/aws_default_vpc -## Resource: aws_msk_cluster +Remove `enable_classiclink` and `enable_classiclink_dns_support` from configurations as they no longer exist. They were part of the EC2-Classic retirement. -The `broker_node_group_info.ebs_volume_size` attribute has been removed. +## resource/aws_dms_endpoint -## Resource: aws_neptune_cluster +Remove `s3_settings.ignore_headers_row` from configurations as it no longer exists. 
**Be careful not to confuse `ignore_headers_row`, which no longer exists, with `ignore_header_rows`, which still exists.** + +## resource/aws_docdb_cluster Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings behavior of the cluster `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`. Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. -## Resource: aws_rds_cluster +## resource/aws_dx_gateway_association + +The `vpn_gateway_id` attribute has been deprecated. All configurations using `vpn_gateway_id` should be updated to use the `associated_gateway_id` attribute instead. + +## resource/aws_ec2_client_vpn_endpoint + +Remove `status` from configurations as it no longer exists. + +## resource/aws_ec2_client_vpn_network_association + +Remove `security_groups` and `status` from configurations as they no longer exist. + +## resource/aws_ecs_cluster + +Remove `capacity_providers` and `default_capacity_provider_strategy` from configurations as they no longer exist. + +## resource/aws_eip + +* With the retirement of EC2-Classic, the `standard` domain is no longer supported. +* The `vpc` argument has been deprecated. Use the `domain` argument instead. + +## resource/aws_eip_association + +With the retirement of EC2-Classic, the `standard` domain is no longer supported. + +## resource/aws_eks_addon + +The `resolve_conflicts` argument has been deprecated. Use the `resolve_conflicts_on_create` and/or `resolve_conflicts_on_update` arguments instead. + +## resource/aws_elasticache_cluster + +Remove `security_group_names` from configurations as it no longer exists.
We removed it as part of the EC2-Classic retirement. + +## resource/aws_elasticache_replication_group + +* Remove the `cluster_mode` configuration block. Use top-level `num_node_groups` and `replicas_per_node_group` instead. +* Remove the `availability_zones`, `number_cache_clusters`, and `replication_group_description` arguments from configurations as they no longer exist. Use `preferred_cache_cluster_azs`, `num_cache_clusters`, and `description`, respectively, instead. + +## resource/aws_elasticache_security_group + +We removed this resource as part of the EC2-Classic retirement. + +## resource/aws_flow_log + +The `log_group_name` attribute has been deprecated. All configurations using `log_group_name` should be updated to use the `log_destination` attribute instead. + +## resource/aws_guardduty_organization_configuration + +The `auto_enable` argument has been deprecated. Use the `auto_enable_organization_members` argument instead. + +## resource/aws_kinesis_firehose_delivery_stream + +* Remove the `s3_configuration` attribute from the root of the resource. `s3_configuration` is now a part of the following blocks: `elasticsearch_configuration`, `opensearch_configuration`, `redshift_configuration`, `splunk_configuration`, and `http_endpoint_configuration`. +* Remove `s3` as an option for `destination`. Use `extended_s3` instead. +* Rename `extended_s3_configuration.0.s3_backup_configuration.0.buffer_size` and `extended_s3_configuration.0.s3_backup_configuration.0.buffer_interval` to `extended_s3_configuration.0.s3_backup_configuration.0.buffering_size` and `extended_s3_configuration.0.s3_backup_configuration.0.buffering_interval`, respectively. +* Rename `redshift_configuration.0.s3_backup_configuration.0.buffer_size` and `redshift_configuration.0.s3_backup_configuration.0.buffer_interval` to `redshift_configuration.0.s3_backup_configuration.0.buffering_size` and `redshift_configuration.0.s3_backup_configuration.0.buffering_interval`, respectively.
+* Rename `s3_configuration.0.buffer_size` and `s3_configuration.0.buffer_interval` to `s3_configuration.0.buffering_size` and `s3_configuration.0.buffering_interval`, respectively. + +## resource/aws_launch_configuration + +Remove `vpc_classic_link_id` and `vpc_classic_link_security_groups` from configurations as they no longer exist. We removed them as part of the EC2-Classic retirement. + +## resource/aws_launch_template + +We removed defaults from `metadata_options`. Launch template metadata options will now default to unset values, which is the AWS default behavior. + +## resource/aws_lightsail_instance + +Remove `ipv6_address` from configurations as it no longer exists. + +## resource/aws_macie_member_account_association + +We removed this resource as part of the Macie Classic retirement. + +## resource/aws_macie_s3_bucket_association + +We removed this resource as part of the Macie Classic retirement. + +## resource/aws_medialive_multiplex_program + +Change `statemux_settings`, which no longer exists, to `statmux_settings` in configurations. + +## resource/aws_msk_cluster + +Remove `broker_node_group_info.ebs_volume_size` from configurations as it no longer exists. + +## resource/aws_neptune_cluster Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings behavior of the cluster `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`. Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. -## Resource: aws_wafv2_web_acl +## resource/aws_networkmanager_core_network + +Remove `policy_document` from configurations as it no longer exists.
Use the `aws_networkmanager_core_network_policy_attachment` resource instead. + +## resource/aws_opensearch_domain + +* The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead. +* The `engine_version` attribute no longer has a default value. Omitting this attribute will now create a domain with the latest OpenSearch version, consistent with the behavior of the AWS API. + +## resource/aws_rds_cluster + +* Update configurations to always include `engine` since it is now required and has no default. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster. +* Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings behavior of the cluster `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`. **NOTE:** Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. + +## resource/aws_rds_cluster_instance -The `statement.managed_rule_group_statement.excluded_rule` and `statement.rule_group_reference_statement.excluded_rule` attributes have been removed. +Update configurations to always include `engine` since it is now required and has no default. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster. -The `statement.rule_group_reference_statement.rule_action_override` attribute has been added. +## resource/aws_redshift_cluster -## Data Source: aws_api_gateway_rest_api +Remove `cluster_security_groups` from configurations as it no longer exists. 
We removed it as part of the EC2-Classic retirement. + +## resource/aws_redshift_security_group + +We removed this resource as part of the EC2-Classic retirement. + +## resource/aws_route + +Update configurations to use `network_interface_id` rather than `instance_id`, which no longer exists. + +For example, this configuration is _no longer valid_: + +```terraform +resource "aws_route" "example" { + instance_id = aws_instance.example.id + # ...other configuration... +} +``` + +One possible way to fix this configuration involves referring to the `primary_network_interface_id` of an instance: + +```terraform +resource "aws_route" "example" { + network_interface_id = aws_instance.example.primary_network_interface_id + # ...other configuration... +} +``` + +Another fix is to use an ENI: + +```terraform +resource "aws_network_interface" "example" { + # ...other configuration... +} + +resource "aws_instance" "example" { + network_interface { + network_interface_id = aws_network_interface.example.id + # ...other configuration... + } + + # ...other configuration... +} + +resource "aws_route" "example" { + network_interface_id = aws_network_interface.example.id + # ...other configuration... + + # Wait for the ENI attachment + depends_on = [aws_instance.example] +} +``` + +## resource/aws_route_table + +Update configurations to use `route.*.network_interface_id` rather than `route.*.instance_id`, which no longer exists. + +For example, this configuration is _no longer valid_: + +```terraform +resource "aws_route_table" "example" { + route { + instance_id = aws_instance.example.id + # ...other configuration... + } + # ...other configuration... +} +``` + +One possible way to fix this configuration involves referring to the `primary_network_interface_id` of an instance: + +```terraform +resource "aws_route_table" "example" { + route { + network_interface_id = aws_instance.example.primary_network_interface_id + # ...other configuration... + } + + # ...other configuration... 
+} +``` + +Another fix is to use an ENI: + +```terraform +resource "aws_network_interface" "example" { + # ...other configuration... +} + +resource "aws_instance" "example" { + network_interface { + network_interface_id = aws_network_interface.example.id + # ...other configuration... + } + + # ...other configuration... +} + +resource "aws_route_table" "example" { + route { + network_interface_id = aws_network_interface.example.id + # ...other configuration... + } + + # ...other configuration... + + # Wait for the ENI attachment + depends_on = [aws_instance.example] +} +``` + +## resource/aws_s3_object + +The `acl` attribute no longer has a default value. Previously this was set to `private` when omitted. Objects requiring a private ACL should now explicitly set this attribute. + +## resource/aws_s3_object_copy + +The `acl` attribute no longer has a default value. Previously this was set to `private` when omitted. Object copies requiring a private ACL should now explicitly set this attribute. + +## resource/aws_secretsmanager_secret + +Remove `rotation_enabled`, `rotation_lambda_arn` and `rotation_rules` from configurations as they no longer exist. + +## resource/aws_security_group + +With the retirement of EC2-Classic, non-VPC security groups are no longer supported. + +## resource/aws_security_group_rule + +With the retirement of EC2-Classic, non-VPC security groups are no longer supported. + +## resource/aws_servicecatalog_product + +Changes to any `provisioning_artifact_parameters` arguments now properly trigger a replacement. This fixes incorrect behavior, but may technically be breaking for configurations expecting non-functional in-place updates. + +## resource/aws_ssm_association + +The `instance_id` attribute has been deprecated. All configurations using `instance_id` should be updated to use the `targets` attribute instead. + +## resource/aws_ssm_parameter + +The `overwrite` attribute has been deprecated. 
Existing parameters should be explicitly imported rather than relying on the "import on create" behavior previously enabled by setting `overwrite = true`. In a future major version the `overwrite` attribute will be removed and attempting to create a parameter that already exists will fail. + +## resource/aws_vpc + +Remove `enable_classiclink` and `enable_classiclink_dns_support` from configurations as they no longer exist. They were part of the EC2-Classic retirement. + +## resource/aws_vpc_peering_connection + +Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement. + +## resource/aws_vpc_peering_connection_accepter + +Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement. + +## resource/aws_vpc_peering_connection_options + +Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement. + +## resource/aws_wafv2_web_acl + +* Remove `statement.managed_rule_group_statement.excluded_rule` and `statement.rule_group_reference_statement.excluded_rule` from configurations as they no longer exist. +* The `statement.rule_group_reference_statement.rule_action_override` attribute has been added. + +## resource/aws_wafv2_web_acl_logging_configuration + +Remove `redacted_fields.all_query_arguments`, `redacted_fields.body` and `redacted_fields.single_query_argument` from configurations as they no longer exist. + +## data-source/aws_api_gateway_rest_api The `minimum_compression_size` attribute is now a String type, allowing it to be computed when set via the `body` attribute. -## Data Source: aws_identitystore_group +## data-source/aws_connect_hours_of_operation -The `filter` argument has been removed. 
+Remove `hours_of_operation_arn` from configurations as it no longer exists. -## Data Source: aws_identitystore_user +## data-source/aws_db_instance -The `filter` argument has been removed. +Remove `db_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. -## Data Source: aws_redshift_service_account +## data-source/aws_elasticache_cluster -[AWS document](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) that [a service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) be used instead of AWS account ID in any relevant IAM policy. -The [`aws_redshift_service_account`](/docs/providers/aws/d/redshift_service_account.html) data source should now be considered deprecated and will be removed in a future version. +Remove `security_group_names` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. -## Data Source: aws_subnet_ids +## data-source/aws_elasticache_replication_group -The `aws_subnet_ids` data source has been removed. Use the [`aws_subnets`](/docs/providers/aws/d/subnets.html) data source instead. +Rename `number_cache_clusters` and `replication_group_description`, which no longer exist, to `num_cache_clusters`, and `description`, respectively. -## Default Tags +## data-source/aws_iam_policy_document -The following enhancements are included: +* Remove `source_json` and `override_json` from configurations. Use `source_policy_documents` and `override_policy_documents`, respectively, instead. +* Don't add empty `statement.sid` values to `json` attribute value. -* Duplicate `default_tags` can now be included and will be overwritten by resource `tags`. -* Zero value tags, `""`, can now be included in both `default_tags` and resource `tags`. -* Tags can now be `computed`. 
+## data-source/aws_identitystore_group -## EC2-Classic Retirement +Remove `filter` from configurations as it no longer exists. -Following the retirement of EC2-Classic a number of resources and attributes have been removed. - -* The `aws_db_security_group` resource has been removed -* The `aws_elasticache_security_group` resource has been removed -* The `aws_redshift_security_group` resource has been removed -* The [`aws_db_instance`](/docs/providers/aws/r/db_instance.html) resource's `security_group_names` argument has been removed -* The [`aws_elasticache_cluster`](/docs/providers/aws/r/elasticache_cluster.html) resource's `security_group_names` argument has been removed -* The [`aws_redshift_cluster`](/docs/providers/aws/r/redshift_cluster.html) resource's `cluster_security_groups` argument has been removed -* The [`aws_launch_configuration`](/docs/providers/aws/r/launch_configuration.html) resource's `vpc_classic_link_id` and `vpc_classic_link_security_groups` arguments have been removed -* The [`aws_vpc`](/docs/providers/aws/r/vpc.html) resource's `enable_classiclink` and `enable_classiclink_dns_support` arguments have been removed -* The [`aws_default_vpc`](/docs/providers/aws/r/default_vpc.html) resource's `enable_classiclink` and `enable_classiclink_dns_support` arguments have been removed -* The [`aws_vpc_peering_connection`](/docs/providers/aws/r/vpc_peering_connection.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments have been removed -* The [`aws_vpc_peering_connection_accepter`](/docs/providers/aws/r/vpc_peering_connection_accepter.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments have been removed -* The [`aws_vpc_peering_connection_options`](/docs/providers/aws/r/vpc_peering_connection_options.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments have been removed -* The 
[`aws_db_instance`](/docs/providers/aws/d/db_instance.html) data source's `db_security_groups` attribute has been removed -The [`aws_elasticache_cluster`](/docs/providers/aws/d/elasticache_cluster.html) data source's `security_group_names` attribute has been removed -The [`aws_redshift_cluster`](/docs/providers/aws/d/redshift_cluster.html) data source's `cluster_security_groups` attribute has been removed -The [`aws_launch_configuration`](/docs/providers/aws/d/launch_configuration.html) data source's `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes have been removed +## data-source/aws_identitystore_user -## Macie Classic Retirement +Remove `filter` from configurations as it no longer exists. + +## data-source/aws_launch_configuration + +Remove `vpc_classic_link_id` and `vpc_classic_link_security_groups` from configurations as they no longer exist. They were part of the EC2-Classic retirement. + +## data-source/aws_opensearch_domain + +The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead. + +## data-source/aws_quicksight_data_set + +The `tags_all` attribute has been deprecated and will be removed in a future version. + +## data-source/aws_redshift_cluster + +Remove `cluster_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. + +## data-source/aws_redshift_service_account + +AWS [documents](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) that [a service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) be used instead of an AWS account ID in any relevant IAM policy. +The [`aws_redshift_service_account`](/docs/providers/aws/d/redshift_service_account.html) data source should now be considered deprecated and will be removed in a future version.
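+
+For example, instead of granting access to the account ID returned by this data source, a bucket policy can name the Redshift service principal directly. This is a sketch only; the `aws_s3_bucket.audit` resource and the policy's scope are assumptions for illustration:
+
+```terraform
+data "aws_iam_policy_document" "redshift_audit_logging" {
+  statement {
+    actions   = ["s3:PutObject"]
+    resources = ["${aws_s3_bucket.audit.arn}/*"]
+
+    # Redshift service principal, per the AWS audit-logging documentation,
+    # replaces the per-region account ID previously looked up by this data source.
+    principals {
+      type        = "Service"
+      identifiers = ["redshift.amazonaws.com"]
+    }
+  }
+}
+```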
+ +## data-source/aws_service_discovery_service + +The `tags_all` attribute has been deprecated and will be removed in a future version. + +## data-source/aws_secretsmanager_secret + +Remove `rotation_enabled`, `rotation_lambda_arn` and `rotation_rules` from configurations as they no longer exist. + +## data-source/aws_subnet_ids + +We removed the `aws_subnet_ids` data source. Use the [`aws_subnets`](/docs/providers/aws/d/subnets.html) data source instead. -Following the retirement of Amazon Macie Classic a couple of resources have been removed. +## data-source/aws_vpc_peering_connection -* The `aws_macie_member_account_association` resource has been removed -* The `aws_macie_s3_bucket_association` resource has been removed +Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement. diff --git a/website/docs/index.html.markdown b/website/docs/index.html.markdown index 13f4e2d8fd6..6edb9fd8d1e 100644 --- a/website/docs/index.html.markdown +++ b/website/docs/index.html.markdown @@ -11,7 +11,7 @@ Use the Amazon Web Services (AWS) provider to interact with the many resources supported by AWS. You must configure the provider with the proper credentials before you can use it. -Use the navigation to the left to read about the available resources. +Use the navigation to the left to read about the available resources. There are currently 1234 resources and 507 data sources available in the provider. To learn the basics of Terraform using this provider, follow the hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). 
Interact with AWS services, @@ -27,7 +27,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 4.0" + version = "~> 5.0" } } } @@ -48,7 +48,7 @@ Terraform 0.12 and earlier: ```terraform # Configure the AWS Provider provider "aws" { - version = "~> 4.0" + version = "~> 5.0" region = "us-east-1" } @@ -238,8 +238,9 @@ credential_process = custom-process --username jdoe |HTTP Proxy|`http_proxy`|`HTTP_PROXY` or `HTTPS_PROXY`|N/A| |Max Retries|`max_retries`|`AWS_MAX_ATTEMPTS`|`max_attempts`| |Profile|`profile`|`AWS_PROFILE` or `AWS_DEFAULT_PROFILE`|N/A| +|Retry Mode|`retry_mode`|`AWS_RETRY_MODE`|`retry_mode`| |Shared Config Files|`shared_config_files`|`AWS_CONFIG_FILE`|N/A| -|Shared Credentials Files|`shared_credentials_files` or `shared_credentials_file`|`AWS_SHARED_CREDENTIALS_FILE`|N/A| +|Shared Credentials Files|`shared_credentials_files`|`AWS_SHARED_CREDENTIALS_FILE`|N/A| |Use DualStack Endpoints|`use_dualstack_endpoint`|`AWS_USE_DUALSTACK_ENDPOINT`|`use_dualstack_endpoint`| |Use FIPS Endpoints|`use_fips_endpoint`|`AWS_USE_FIPS_ENDPOINT`|`use_fips_endpoint`| @@ -253,7 +254,7 @@ See the [assume role documentation](https://docs.aws.amazon.com/cli/latest/userg |Setting|Provider|[Environment Variable][envvars]|[Shared Config][config]| |-------|--------|--------|-----------------------| |Role ARN|`role_arn`|`AWS_ROLE_ARN`|`role_arn`| -|Duration|`duration` or `duration_seconds`|N/A|`duration_seconds`| +|Duration|`duration`|N/A|`duration_seconds`| |External ID|`external_id`|N/A|`external_id`| |Policy|`policy`|N/A|N/A| |Policy ARNs|`policy_arns`|N/A|N/A| @@ -322,6 +323,9 @@ In addition to [generic `provider` arguments](https://www.terraform.io/docs/conf Can also be set with either the `AWS_REGION` or `AWS_DEFAULT_REGION` environment variables, or via a shared config file parameter `region` if `profile` is used. If credentials are retrieved from the EC2 Instance Metadata Service, the region can also be retrieved from the metadata. 
+* `retry_mode` - (Optional) Specifies how retries are attempted. + Valid values are `standard` and `adaptive`. + Can also be configured using the `AWS_RETRY_MODE` environment variable or the shared config file parameter `retry_mode`. * `s3_use_path_style` - (Optional) Whether to enable the request to use path-style addressing, i.e., `https://s3.amazonaws.com/BUCKET/KEY`. By default, the S3 client will use virtual hosted bucket addressing, `https://BUCKET.s3.amazonaws.com/KEY`, when possible. Specific to the Amazon S3 service. * `secret_key` - (Optional) AWS secret key. Can also be set with the `AWS_SECRET_ACCESS_KEY` environment variable, or via a shared configuration and credentials files if `profile` is used. See also `access_key`. * `shared_config_files` - (Optional) List of paths to AWS shared config files. If not set, the default is `[~/.aws/config]`. A single value can also be set with the `AWS_CONFIG_FILE` environment variable. @@ -454,11 +458,7 @@ In addition to [generic `provider` arguments](https://www.terraform.io/docs/conf The `assume_role` configuration block supports the following arguments: -* `duration` - (Optional, Conflicts with `duration_seconds`) Duration of the assume role session. - You can provide a value from 15 minutes up to the maximum session duration setting for the role. - Represented by a string such as `1h`, `2h45m`, or `30m15s`. -* `duration_seconds` - (Optional, **Deprecated** use `duration` instead) Number of seconds to restrict the assume role session duration. - You can provide a value from 900 seconds (15 minutes) up to the maximum session duration setting for the role. +* `duration` - (Optional) Duration of the assume role session. You can provide a value from 15 minutes up to the maximum session duration setting for the role. Represented by a string such as `1h`, `2h45m`, or `30m15s`. * `external_id` - (Optional) External identifier to use when assuming the role. 
* `policy` - (Optional) IAM Policy JSON describing further restricting permissions for the IAM Role being assumed. * `policy_arns` - (Optional) Set of Amazon Resource Names (ARNs) of IAM Policies describing further restricting permissions for the IAM Role being assumed. diff --git a/website/docs/r/account_primary_contact.html.markdown b/website/docs/r/account_primary_contact.html.markdown index 96f38e01bc4..3f4f4eead73 100644 --- a/website/docs/r/account_primary_contact.html.markdown +++ b/website/docs/r/account_primary_contact.html.markdown @@ -48,3 +48,11 @@ The following arguments are supported: ## Attributes Reference No additional attributes are exported. + +## Import + +The Primary Contact can be imported using the `account_id`, e.g., + +``` +$ terraform import aws_account_primary_contact.test 1234567890 +``` diff --git a/website/docs/r/acm_certificate.html.markdown b/website/docs/r/acm_certificate.html.markdown index 29f823de912..79cb0376693 100644 --- a/website/docs/r/acm_certificate.html.markdown +++ b/website/docs/r/acm_certificate.html.markdown @@ -143,7 +143,7 @@ The following arguments are supported: * Creating an Amazon issued certificate * `domain_name` - (Required) Domain name for which the certificate should be issued * `subject_alternative_names` - (Optional) Set of domains that should be SANs in the issued certificate. To remove all elements of a previously configured list, set this value equal to an empty list (`[]`) or use the [`terraform taint` command](https://www.terraform.io/docs/commands/taint.html) to trigger recreation. - * `validation_method` - (Required) Which method to use for validation. `DNS` or `EMAIL` are valid, `NONE` can be used for certificates that were imported into ACM and then into Terraform. + * `validation_method` - (Optional) Which method to use for validation. `DNS` or `EMAIL` are valid. This parameter must not be set for certificates that were imported into ACM and then into Terraform. 
* `key_algorithm` - (Optional) Specifies the algorithm of the public and private key pair that your Amazon issued certificate uses to encrypt data. See [ACM Certificate characteristics](https://docs.aws.amazon.com/acm/latest/userguide/acm-certificate.html#algorithms) for more details. * `options` - (Optional) Configuration block used to set certificate options. Detailed below. * `validation_option` - (Optional) Configuration block used to specify information about the initial validation of each domain name. Detailed below. diff --git a/website/docs/r/appconfig_environment.html.markdown b/website/docs/r/appconfig_environment.html.markdown index e5464c1fe50..291c3246341 100644 --- a/website/docs/r/appconfig_environment.html.markdown +++ b/website/docs/r/appconfig_environment.html.markdown @@ -60,7 +60,7 @@ The `monitor` block supports the following: In addition to all arguments above, the following attributes are exported: * `arn` - ARN of the AppConfig Environment. -* `id` - AppConfig environment ID and application ID separated by a colon (`:`). +* `id` - (**Deprecated**) AppConfig environment ID and application ID separated by a colon (`:`). * `environment_id` - AppConfig environment ID. * `state` - State of the environment. Possible values are `READY_FOR_DEPLOYMENT`, `DEPLOYING`, `ROLLING_BACK` or `ROLLED_BACK`. diff --git a/website/docs/r/appflow_flow.html.markdown b/website/docs/r/appflow_flow.html.markdown index 2aeb4103885..c0886e15ed2 100644 --- a/website/docs/r/appflow_flow.html.markdown +++ b/website/docs/r/appflow_flow.html.markdown @@ -14,7 +14,7 @@ Provides an AppFlow flow resource. 
```terraform resource "aws_s3_bucket" "example_source" { - bucket = "example_source" + bucket = "example-source" } data "aws_iam_policy_document" "example_source" { @@ -51,31 +51,33 @@ resource "aws_s3_object" "example" { } resource "aws_s3_bucket" "example_destination" { - bucket = "example_destination" + bucket = "example-destination" } data "aws_iam_policy_document" "example_destination" { - sid = "AllowAppFlowDestinationActions" - effect = "Allow" + statement { + sid = "AllowAppFlowDestinationActions" + effect = "Allow" - principals { - type = "Service" - identifiers = ["appflow.amazonaws.com"] - } + principals { + type = "Service" + identifiers = ["appflow.amazonaws.com"] + } - actions = [ - "s3:PutObject", - "s3:AbortMultipartUpload", - "s3:ListMultipartUploadParts", - "s3:ListBucketMultipartUploads", - "s3:GetBucketAcl", - "s3:PutObjectAcl", - ] - - resources = [ - "arn:aws:s3:::example_destination", - "arn:aws:s3:::example_destination/*", - ] + actions = [ + "s3:PutObject", + "s3:AbortMultipartUpload", + "s3:ListMultipartUploadParts", + "s3:ListBucketMultipartUploads", + "s3:GetBucketAcl", + "s3:PutObjectAcl", + ] + + resources = [ + "arn:aws:s3:::example_destination", + "arn:aws:s3:::example_destination/*", + ] + } } resource "aws_s3_bucket_policy" "example_destination" { diff --git a/website/docs/r/appsync_datasource.html.markdown b/website/docs/r/appsync_datasource.html.markdown index 30681a874b4..e863d33c042 100644 --- a/website/docs/r/appsync_datasource.html.markdown +++ b/website/docs/r/appsync_datasource.html.markdown @@ -81,78 +81,86 @@ The following arguments are supported: * `name` - (Required) User-supplied name for the data source. * `type` - (Required) Type of the Data Source. Valid values: `AWS_LAMBDA`, `AMAZON_DYNAMODB`, `AMAZON_ELASTICSEARCH`, `HTTP`, `NONE`, `RELATIONAL_DATABASE`, `AMAZON_EVENTBRIDGE`. * `description` - (Optional) Description of the data source. -* `dynamodb_config` - (Optional) DynamoDB settings. 
See [below](#dynamodb_config) -* `elasticsearch_config` - (Optional) Amazon Elasticsearch settings. See [below](#elasticsearch_config) -* `event_bridge_config` - (Optional) AWS EventBridge settings. See [below](#event_bridge_config) -* `http_config` - (Optional) HTTP settings. See [below](#http_config) -* `lambda_config` - (Optional) AWS Lambda settings. See [below](#lambda_config) -* `opensearchservice_config` - (Optional) Amazon OpenSearch Service settings. See [below](#opensearchservice_config) -* `relational_database_config` (Optional) AWS RDS settings. See [Relational Database Config](#relational_database_config) +* `dynamodb_config` - (Optional) DynamoDB settings. See [DynamoDB Config](#dynamodb-config) +* `elasticsearch_config` - (Optional) Amazon Elasticsearch settings. See [ElasticSearch Config](#elasticsearch-config) +* `event_bridge_config` - (Optional) AWS EventBridge settings. See [Event Bridge Config](#event-bridge-config) +* `http_config` - (Optional) HTTP settings. See [HTTP Config](#http-config) +* `lambda_config` - (Optional) AWS Lambda settings. See [Lambda Config](#lambda-config) +* `opensearchservice_config` - (Optional) Amazon OpenSearch Service settings. See [OpenSearch Service Config](#opensearch-service-config) +* `relational_database_config` (Optional) AWS RDS settings. See [Relational Database Config](#relational-database-config) * `service_role_arn` - (Optional) IAM service role ARN for the data source. -### dynamodb_config +### DynamoDB Config The following arguments are supported: * `table_name` - (Required) Name of the DynamoDB table. * `region` - (Optional) AWS region of the DynamoDB table. Defaults to current region. * `use_caller_credentials` - (Optional) Set to `true` to use Amazon Cognito credentials with this data source. +* `delta_sync_config` - (Optional) The DeltaSyncConfig for a versioned data source. 
See [Delta Sync Config](#delta-sync-config) +* `versioned` - (Optional) Set to `true` to use Conflict Detection and Resolution with this data source. -### elasticsearch_config +### Delta Sync Config + +* `base_table_ttl` - (Optional) The number of minutes that an Item is stored in the data source. +* `delta_sync_table_name` - (Required) The table name. +* `delta_sync_table_ttl` - (Optional) The number of minutes that a Delta Sync log entry is stored in the Delta Sync table. + +### ElasticSearch Config The following arguments are supported: * `endpoint` - (Required) HTTP endpoint of the Elasticsearch domain. * `region` - (Optional) AWS region of Elasticsearch domain. Defaults to current region. -### event_bridge_config +### Event Bridge Config The following arguments are supported: * `event_bus_arn` - (Required) ARN for the EventBridge bus. -### http_config +### HTTP Config The following arguments are supported: * `endpoint` - (Required) HTTP URL. -* `authorization_config` - (Optional) Authorization configuration in case the HTTP endpoint requires authorization. See [Authorization Config](#authorization_config). +* `authorization_config` - (Optional) Authorization configuration in case the HTTP endpoint requires authorization. See [Authorization Config](#authorization-config). -#### authorization_config +#### Authorization Config The following arguments are supported: * `authorization_type` - (Optional) Authorization type that the HTTP endpoint requires. Default value is `AWS_IAM`. -* `aws_iam_config` - (Optional) Identity and Access Management (IAM) settings. See [AWS IAM Config](#aws_iam_config). +* `aws_iam_config` - (Optional) Identity and Access Management (IAM) settings. See [AWS IAM Config](#aws-iam-config). -##### aws_iam_config +##### AWS IAM Config The following arguments are supported: * `signing_region` - (Optional) Signing Amazon Web Services Region for IAM authorization. * `signing_service_name` - (Optional) Signing service name for IAM authorization.
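The HTTP Config, Authorization Config, and AWS IAM Config blocks above nest inside a single data source. A minimal sketch of how they compose — the resource names, endpoint URL, and signing values (here pointing at AWS Step Functions) are purely illustrative:

```terraform
# Hypothetical example: an HTTP data source whose requests are signed with IAM.
resource "aws_appsync_datasource" "example" {
  api_id           = aws_appsync_graphql_api.example.id
  name             = "http_example"
  type             = "HTTP"
  service_role_arn = aws_iam_role.example.arn

  http_config {
    endpoint = "https://states.us-east-1.amazonaws.com/"

    authorization_config {
      authorization_type = "AWS_IAM"

      aws_iam_config {
        signing_region       = "us-east-1"
        signing_service_name = "states"
      }
    }
  }
}
```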
-### lambda_config +### Lambda Config The following arguments are supported: * `function_arn` - (Required) ARN for the Lambda function. -### opensearchservice_config +### OpenSearch Service Config The following arguments are supported: * `endpoint` - (Required) HTTP endpoint of the OpenSearch domain. * `region` - (Optional) AWS region of the OpenSearch domain. Defaults to current region. -### relational_database_config +### Relational Database Config The following arguments are supported: -* `http_endpoint_config` - (Required) Amazon RDS HTTP endpoint configuration. See [HTTP Endpoint Config](#http_endpoint_config). +* `http_endpoint_config` - (Required) Amazon RDS HTTP endpoint configuration. See [HTTP Endpoint Config](#http-endpoint-config). * `source_type` - (Optional) Source type for the relational database. Valid values: `RDS_HTTP_ENDPOINT`. -#### http_endpoint_config +#### HTTP Endpoint Config The following arguments are supported: diff --git a/website/docs/r/appsync_graphql_api.html.markdown b/website/docs/r/appsync_graphql_api.html.markdown index 02440f2f2eb..4859c2aaf7a 100644 --- a/website/docs/r/appsync_graphql_api.html.markdown +++ b/website/docs/r/appsync_graphql_api.html.markdown @@ -212,6 +212,7 @@ The following arguments are supported: * `additional_authentication_provider` - (Optional) One or more additional authentication providers for the GraphqlApi. Defined below. * `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `xray_enabled` - (Optional) Whether tracing with X-ray is enabled. Defaults to false. +* `visibility` - (Optional) Sets the visibility of the GraphQL API to public (`GLOBAL`) or private (`PRIVATE`). If no value is provided, the visibility will be set to `GLOBAL` by default.
This value cannot be changed once the API has been created. ### log_config diff --git a/website/docs/r/autoscaling_attachment.html.markdown b/website/docs/r/autoscaling_attachment.html.markdown index 0fb26bc6f49..daf034cc6f2 100644 --- a/website/docs/r/autoscaling_attachment.html.markdown +++ b/website/docs/r/autoscaling_attachment.html.markdown @@ -3,54 +3,30 @@ subcategory: "Auto Scaling" layout: "aws" page_title: "AWS: aws_autoscaling_attachment" description: |- - Provides an AutoScaling Group Attachment resource. + Terraform resource for managing an AWS Auto Scaling Attachment. --- # Resource: aws_autoscaling_attachment -Provides an Auto Scaling Attachment resource. +Attaches a load balancer to an Auto Scaling group. -~> **NOTE on Auto Scaling Groups and ASG Attachments:** Terraform currently provides -both a standalone [`aws_autoscaling_attachment`](autoscaling_attachment.html) resource -(describing an ASG attached to an ELB or ALB), and an [`aws_autoscaling_group`](autoscaling_group.html) -with `load_balancers` and `target_group_arns` defined in-line. These two methods are not -mutually-exclusive. If `aws_autoscaling_attachment` resources are used, either alone or with inline -`load_balancers` or `target_group_arns`, the `aws_autoscaling_group` resource must be configured -to ignore changes to the `load_balancers` and `target_group_arns` arguments within a -[`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html). 
+~> **NOTE on Auto Scaling Groups, Attachments and Traffic Source Attachments:** Terraform provides standalone Attachment (for attaching Classic Load Balancers and Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target groups) and [Traffic Source Attachment](autoscaling_traffic_source_attachment.html) (for attaching Load Balancers and VPC Lattice target groups) resources and an [Auto Scaling Group](autoscaling_group.html) resource with `load_balancers`, `target_group_arns` and `traffic_source` attributes. Do not use the same traffic source in more than one of these resources. Doing so will cause a conflict of attachments. A [`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html) can be used to suppress differences if necessary. ## Example Usage ```terraform # Create a new load balancer attachment -resource "aws_autoscaling_attachment" "asg_attachment_bar" { - autoscaling_group_name = aws_autoscaling_group.asg.id - elb = aws_elb.bar.id +resource "aws_autoscaling_attachment" "example" { + autoscaling_group_name = aws_autoscaling_group.example.id + elb = aws_elb.example.id } ``` ```terraform # Create a new ALB Target Group attachment -resource "aws_autoscaling_attachment" "asg_attachment_bar" { - autoscaling_group_name = aws_autoscaling_group.asg.id - lb_target_group_arn = aws_lb_target_group.test.arn -} -``` - -## With An AutoScaling Group Resource - -```terraform -resource "aws_autoscaling_group" "asg" { - # ... other configuration ... 
- - lifecycle { - ignore_changes = [load_balancers, target_group_arns] - } -} - -resource "aws_autoscaling_attachment" "asg_attachment_bar" { - autoscaling_group_name = aws_autoscaling_group.asg.id - elb = aws_elb.test.id +resource "aws_autoscaling_attachment" "example" { + autoscaling_group_name = aws_autoscaling_group.example.id + lb_target_group_arn = aws_lb_target_group.example.arn } ``` diff --git a/website/docs/r/autoscaling_group.html.markdown b/website/docs/r/autoscaling_group.html.markdown index ddad3cf370a..96f7e754b92 100644 --- a/website/docs/r/autoscaling_group.html.markdown +++ b/website/docs/r/autoscaling_group.html.markdown @@ -12,14 +12,7 @@ Provides an Auto Scaling Group resource. -> **Note:** You must specify either `launch_configuration`, `launch_template`, or `mixed_instances_policy`. -~> **NOTE on Auto Scaling Groups and ASG Attachments:** Terraform currently provides -both a standalone [`aws_autoscaling_attachment`](autoscaling_attachment.html) resource -(describing an ASG attached to an ELB or ALB), and an [`aws_autoscaling_group`](autoscaling_group.html) -with `load_balancers` and `target_group_arns` defined in-line. These two methods are not -mutually-exclusive. If `aws_autoscaling_attachment` resources are used, either alone or with inline -`load_balancers` or `target_group_arns`, the `aws_autoscaling_group` resource must be configured -to ignore changes to the `load_balancers` and `target_group_arns` arguments within a -[`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html). 
+~> **NOTE on Auto Scaling Groups, Attachments and Traffic Source Attachments:** Terraform provides standalone [Attachment](autoscaling_attachment.html) (for attaching Classic Load Balancers and Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target groups) and [Traffic Source Attachment](autoscaling_traffic_source_attachment.html) (for attaching Load Balancers and VPC Lattice target groups) resources and an Auto Scaling Group resource with `load_balancers`, `target_group_arns` and `traffic_source` attributes. Do not use the same traffic source in more than one of these resources. Doing so will cause a conflict of attachments. A [`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html) can be used to suppress differences if necessary. > **Hands-on:** Try the [Manage AWS Auto Scaling Groups](https://learn.hashicorp.com/tutorials/terraform/aws-asg?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial on HashiCorp Learn. @@ -377,79 +370,100 @@ resource "aws_autoscaling_group" "example" { } ``` +### Auto Scaling group with Traffic Sources + +```terraform +resource "aws_autoscaling_group" "test" { + vpc_zone_identifier = [aws_subnet.test.id] + max_size = 1 + min_size = 1 + + force_delete = true + dynamic "traffic_source" { + for_each = aws_vpclattice_target_group.test[*] + content { + identifier = traffic_source.value.arn + type = "vpc-lattice" + } + } +} +``` + ## Argument Reference The following arguments are supported: -* `name` - (Optional) Name of the Auto Scaling Group. By default generated by Terraform. Conflicts with `name_prefix`. -* `name_prefix` - (Optional) Creates a unique name beginning with the specified +- `name` - (Optional) Name of the Auto Scaling Group. By default generated by Terraform. Conflicts with `name_prefix`. +- `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`.
-* `max_size` - (Required) Maximum size of the Auto Scaling Group. -* `min_size` - (Required) Minimum size of the Auto Scaling Group. - (See also [Waiting for Capacity](#waiting-for-capacity) below.) -* `availability_zones` - (Optional) A list of Availability Zones where instances in the Auto Scaling group can be created. Used for launching into the default VPC subnet in each Availability Zone when not using the `vpc_zone_identifier` attribute, or for attaching a network interface when an existing network interface ID is specified in a launch template. Conflicts with `vpc_zone_identifier`. -* `capacity_rebalance` - (Optional) Whether capacity rebalance is enabled. Otherwise, capacity rebalance is disabled. -* `context` - (Optional) Reserved. -* `default_cooldown` - (Optional) Amount of time, in seconds, after a scaling activity completes before another scaling activity can start. -* `default_instance_warmup` - (Optional) Amount of time, in seconds, until a newly launched instance can contribute to the Amazon CloudWatch metrics. This delay lets an instance finish initializing before Amazon EC2 Auto Scaling aggregates instance metrics, resulting in more reliable usage data. Set this value equal to the amount of time that it takes for resource consumption to become stable after an instance reaches the InService state. (See [Set the default instance warmup for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-default-instance-warmup.html)) -* `launch_configuration` - (Optional) Name of the launch configuration to use. -* `launch_template` - (Optional) Nested argument with Launch template specification to use to launch instances. See [Launch Template](#launch_template) below for more details. -* `mixed_instances_policy` (Optional) Configuration block containing settings to define launch targets for Auto Scaling groups. See [Mixed Instances Policy](#mixed_instances_policy) below for more details. 
-* `initial_lifecycle_hook` - (Optional) One or more +- `max_size` - (Required) Maximum size of the Auto Scaling Group. +- `min_size` - (Required) Minimum size of the Auto Scaling Group. + (See also [Waiting for Capacity](#waiting-for-capacity) below.) +- `availability_zones` - (Optional) A list of Availability Zones where instances in the Auto Scaling group can be created. Used for launching into the default VPC subnet in each Availability Zone when not using the `vpc_zone_identifier` attribute, or for attaching a network interface when an existing network interface ID is specified in a launch template. Conflicts with `vpc_zone_identifier`. +- `capacity_rebalance` - (Optional) Whether capacity rebalance is enabled. Otherwise, capacity rebalance is disabled. +- `context` - (Optional) Reserved. +- `default_cooldown` - (Optional) Amount of time, in seconds, after a scaling activity completes before another scaling activity can start. +- `default_instance_warmup` - (Optional) Amount of time, in seconds, until a newly launched instance can contribute to the Amazon CloudWatch metrics. This delay lets an instance finish initializing before Amazon EC2 Auto Scaling aggregates instance metrics, resulting in more reliable usage data. Set this value equal to the amount of time that it takes for resource consumption to become stable after an instance reaches the InService state. (See [Set the default instance warmup for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-default-instance-warmup.html)) +- `launch_configuration` - (Optional) Name of the launch configuration to use. +- `launch_template` - (Optional) Nested argument with Launch template specification to use to launch instances. See [Launch Template](#launch_template) below for more details. +- `mixed_instances_policy` (Optional) Configuration block containing settings to define launch targets for Auto Scaling groups. 
See [Mixed Instances Policy](#mixed_instances_policy) below for more details. +- `initial_lifecycle_hook` - (Optional) One or more [Lifecycle Hooks](http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html) to attach to the Auto Scaling Group **before** instances are launched. The syntax is exactly the same as the separate [`aws_autoscaling_lifecycle_hook`](/docs/providers/aws/r/autoscaling_lifecycle_hook.html) resource, without the `autoscaling_group_name` attribute. Please note that this will only work when creating a new Auto Scaling Group. For all other use-cases, please use `aws_autoscaling_lifecycle_hook` resource. -* `health_check_grace_period` - (Optional, Default: 300) Time (in seconds) after instance comes into service before checking health. -* `health_check_type` - (Optional) "EC2" or "ELB". Controls how health checking is done. -* `desired_capacity` - (Optional) Number of Amazon EC2 instances that - should be running in the group. (See also [Waiting for - Capacity](#waiting-for-capacity) below.) -* `desired_capacity_type` - (Optional) The unit of measurement for the value specified for `desired_capacity`. Supported for attribute-based instance type selection only. Valid values: `"units"`, `"vcpu"`, `"memory-mib"`. -* `force_delete` - (Optional) Allows deleting the Auto Scaling Group without waiting - for all instances in the pool to terminate. You can force an Auto Scaling Group to delete - even if it's in the process of scaling a resource. Normally, Terraform - drains all the instances before deleting the group. This bypasses that - behavior and potentially leaves resources dangling. -* `load_balancers` (Optional) List of elastic load balancer names to add to the autoscaling - group names. Only valid for classic load balancers. For ALBs, use `target_group_arns` instead. -* `vpc_zone_identifier` (Optional) List of subnet IDs to launch resources in. Subnets automatically determine which availability zones the group will reside. 
Conflicts with `availability_zones`. -* `target_group_arns` (Optional) Set of `aws_alb_target_group` ARNs, for use with Application or Network Load Balancing. -* `termination_policies` (Optional) List of policies to decide how the instances in the Auto Scaling Group should be terminated. The allowed values are `OldestInstance`, `NewestInstance`, `OldestLaunchConfiguration`, `ClosestToNextInstanceHour`, `OldestLaunchTemplate`, `AllocationStrategy`, `Default`. Additionally, the ARN of a Lambda function can be specified for custom termination policies. -* `suspended_processes` - (Optional) List of processes to suspend for the Auto Scaling Group. The allowed values are `Launch`, `Terminate`, `HealthCheck`, `ReplaceUnhealthy`, `AZRebalance`, `AlarmNotification`, `ScheduledActions`, `AddToLoadBalancer`, `InstanceRefresh`. -Note that if you suspend either the `Launch` or `Terminate` process types, it can prevent your Auto Scaling Group from functioning properly. -* `tag` (Optional) Configuration block(s) containing resource tags. See [Tag](#tag) below for more details. -* `placement_group` (Optional) Name of the placement group into which you'll launch your instances, if any. -* `metrics_granularity` - (Optional) Granularity to associate with the metrics to collect. The only valid value is `1Minute`. Default is `1Minute`. -* `enabled_metrics` - (Optional) List of metrics to collect. The allowed values are defined by the [underlying AWS API](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_EnableMetricsCollection.html). -* `wait_for_capacity_timeout` (Default: "10m") Maximum +- `health_check_grace_period` - (Optional, Default: 300) Time (in seconds) after instance comes into service before checking health. +- `health_check_type` - (Optional) "EC2" or "ELB". Controls how health checking is done. +- `desired_capacity` - (Optional) Number of Amazon EC2 instances that + should be running in the group. 
(See also [Waiting for + Capacity](#waiting-for-capacity) below.) +- `desired_capacity_type` - (Optional) The unit of measurement for the value specified for `desired_capacity`. Supported for attribute-based instance type selection only. Valid values: `"units"`, `"vcpu"`, `"memory-mib"`. +- `force_delete` - (Optional) Allows deleting the Auto Scaling Group without waiting + for all instances in the pool to terminate. You can force an Auto Scaling Group to delete + even if it's in the process of scaling a resource. Normally, Terraform + drains all the instances before deleting the group. This bypasses that + behavior and potentially leaves resources dangling. +- `load_balancers` (Optional) List of elastic load balancer names to add to the autoscaling + group names. Only valid for classic load balancers. For ALBs, use `target_group_arns` instead. To remove all load balancer attachments an empty list should be specified. +- `traffic_source` (Optional) Attaches one or more traffic sources to the specified Auto Scaling group. +- `vpc_zone_identifier` (Optional) List of subnet IDs to launch resources in. Subnets automatically determine which availability zones the group will reside. Conflicts with `availability_zones`. +- `target_group_arns` (Optional) Set of `aws_alb_target_group` ARNs, for use with Application or Network Load Balancing. To remove all target group attachments an empty list should be specified. +- `termination_policies` (Optional) List of policies to decide how the instances in the Auto Scaling Group should be terminated. The allowed values are `OldestInstance`, `NewestInstance`, `OldestLaunchConfiguration`, `ClosestToNextInstanceHour`, `OldestLaunchTemplate`, `AllocationStrategy`, `Default`. Additionally, the ARN of a Lambda function can be specified for custom termination policies. +- `suspended_processes` - (Optional) List of processes to suspend for the Auto Scaling Group. 
The allowed values are `Launch`, `Terminate`, `HealthCheck`, `ReplaceUnhealthy`, `AZRebalance`, `AlarmNotification`, `ScheduledActions`, `AddToLoadBalancer`, `InstanceRefresh`. + Note that if you suspend either the `Launch` or `Terminate` process types, it can prevent your Auto Scaling Group from functioning properly. +- `tag` (Optional) Configuration block(s) containing resource tags. See [Tag](#tag) below for more details. +- `placement_group` (Optional) Name of the placement group into which you'll launch your instances, if any. +- `metrics_granularity` - (Optional) Granularity to associate with the metrics to collect. The only valid value is `1Minute`. Default is `1Minute`. +- `enabled_metrics` - (Optional) List of metrics to collect. The allowed values are defined by the [underlying AWS API](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_EnableMetricsCollection.html). +- `wait_for_capacity_timeout` (Default: "10m") Maximum [duration](https://golang.org/pkg/time/#ParseDuration) that Terraform should - wait for ASG instances to be healthy before timing out. (See also [Waiting + wait for ASG instances to be healthy before timing out. (See also [Waiting for Capacity](#waiting-for-capacity) below.) Setting this to "0" causes Terraform to skip all Capacity Waiting behavior. -* `min_elb_capacity` - (Optional) Setting this causes Terraform to wait for +- `min_elb_capacity` - (Optional) Setting this causes Terraform to wait for this number of instances from this Auto Scaling Group to show up healthy in the ELB only on creation. Updates will not wait on ELB instance number changes. (See also [Waiting for Capacity](#waiting-for-capacity) below.) -* `wait_for_elb_capacity` - (Optional) Setting this will cause Terraform to wait +- `wait_for_elb_capacity` - (Optional) Setting this will cause Terraform to wait for exactly this number of healthy instances from this Auto Scaling Group in all attached load balancers on both create and update operations. 
(Takes precedence over `min_elb_capacity` behavior.) (See also [Waiting for Capacity](#waiting-for-capacity) below.) -* `protect_from_scale_in` (Optional) Whether newly launched instances +- `protect_from_scale_in` (Optional) Whether newly launched instances are automatically protected from termination by Amazon EC2 Auto Scaling when scaling in. For more information about preventing instances from terminating on scale in, see [Using instance scale-in protection](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-instance-protection.html) in the Amazon EC2 Auto Scaling User Guide. -* `service_linked_role_arn` (Optional) ARN of the service-linked role that the ASG will use to call other AWS services -* `max_instance_lifetime` (Optional) Maximum amount of time, in seconds, that an instance can be in service, values must be either equal to 0 or between 86400 and 31536000 seconds. -* `instance_refresh` - (Optional) If this block is configured, start an - [Instance Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) - when this Auto Scaling Group is updated. Defined [below](#instance_refresh). -* `warm_pool` - (Optional) If this block is configured, add a [Warm Pool](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) - to the specified Auto Scaling group. Defined [below](#warm_pool) +- `service_linked_role_arn` (Optional) ARN of the service-linked role that the ASG will use to call other AWS services +- `max_instance_lifetime` (Optional) Maximum amount of time, in seconds, that an instance can be in service, values must be either equal to 0 or between 86400 and 31536000 seconds. +- `instance_refresh` - (Optional) If this block is configured, start an + [Instance Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) + when this Auto Scaling Group is updated. Defined [below](#instance_refresh). 
+- `warm_pool` - (Optional) If this block is configured, add a [Warm Pool](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) + to the specified Auto Scaling group. Defined [below](#warm_pool) ### launch_template @@ -457,32 +471,32 @@ Note that if you suspend either the `Launch` or `Terminate` process types, it ca The top-level `launch_template` block supports the following: -* `id` - (Optional) ID of the launch template. Conflicts with `name`. -* `name` - (Optional) Name of the launch template. Conflicts with `id`. -* `version` - (Optional) Template version. Can be version number, `$Latest`, or `$Default`. (Default: `$Default`). +- `id` - (Optional) ID of the launch template. Conflicts with `name`. +- `name` - (Optional) Name of the launch template. Conflicts with `id`. +- `version` - (Optional) Template version. Can be version number, `$Latest`, or `$Default`. (Default: `$Default`). ### mixed_instances_policy -* `instances_distribution` - (Optional) Nested argument containing settings on how to mix on-demand and Spot instances in the Auto Scaling group. Defined below. -* `launch_template` - (Required) Nested argument containing launch template settings along with the overrides to specify multiple instance types and weights. Defined below. +- `instances_distribution` - (Optional) Nested argument containing settings on how to mix on-demand and Spot instances in the Auto Scaling group. Defined below. +- `launch_template` - (Required) Nested argument containing launch template settings along with the overrides to specify multiple instance types and weights. Defined below. #### mixed_instances_policy instances_distribution This configuration block supports the following: -* `on_demand_allocation_strategy` - (Optional) Strategy to use when launching on-demand instances. Valid values: `prioritized`, `lowest-price`. Default: `prioritized`. 
-* `on_demand_base_capacity` - (Optional) Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances. Default: `0`. -* `on_demand_percentage_above_base_capacity` - (Optional) Percentage split between on-demand and Spot instances above the base on-demand capacity. Default: `100`. -* `spot_allocation_strategy` - (Optional) How to allocate capacity across the Spot pools. Valid values: `lowest-price`, `capacity-optimized`, `capacity-optimized-prioritized`, and `price-capacity-optimized`. Default: `lowest-price`. -* `spot_instance_pools` - (Optional) Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify. Only available with `spot_allocation_strategy` set to `lowest-price`. Otherwise it must be set to `0`, if it has been defined before. Default: `2`. -* `spot_max_price` - (Optional) Maximum price per unit hour that the user is willing to pay for the Spot instances. Default: an empty string which means the on-demand price. +- `on_demand_allocation_strategy` - (Optional) Strategy to use when launching on-demand instances. Valid values: `prioritized`, `lowest-price`. Default: `prioritized`. +- `on_demand_base_capacity` - (Optional) Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances. Default: `0`. +- `on_demand_percentage_above_base_capacity` - (Optional) Percentage split between on-demand and Spot instances above the base on-demand capacity. Default: `100`. +- `spot_allocation_strategy` - (Optional) How to allocate capacity across the Spot pools. Valid values: `lowest-price`, `capacity-optimized`, `capacity-optimized-prioritized`, and `price-capacity-optimized`. Default: `lowest-price`. +- `spot_instance_pools` - (Optional) Number of Spot pools per availability zone to allocate capacity. 
EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify. Only available with `spot_allocation_strategy` set to `lowest-price`. Otherwise it must be set to `0`, if it has been defined before. Default: `2`. +- `spot_max_price` - (Optional) Maximum price per unit hour that the user is willing to pay for the Spot instances. Default: an empty string which means the on-demand price. #### mixed_instances_policy launch_template This configuration block supports the following: -* `launch_template_specification` - (Required) Nested argument defines the Launch Template. Defined below. -* `override` - (Optional) List of nested arguments provides the ability to specify multiple instance types. This will override the same parameter in the launch template. For on-demand instances, Auto Scaling considers the order of preference of instance types to launch based on the order specified in the overrides list. Defined below. +- `launch_template_specification` - (Required) Nested argument defines the Launch Template. Defined below. +- `override` - (Optional) List of nested arguments provides the ability to specify multiple instance types. This will override the same parameter in the launch template. For on-demand instances, Auto Scaling considers the order of preference of instance types to launch based on the order specified in the overrides list. Defined below. ##### mixed_instances_policy launch_template launch_template_specification @@ -490,18 +504,18 @@ This configuration block supports the following: This configuration block supports the following: -* `launch_template_id` - (Optional) ID of the launch template. Conflicts with `launch_template_name`. -* `launch_template_name` - (Optional) Name of the launch template. Conflicts with `launch_template_id`. -* `version` - (Optional) Template version. Can be version number, `$Latest`, or `$Default`. (Default: `$Default`). 
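Taken together, the `mixed_instances_policy`, `instances_distribution`, and `launch_template` arguments above nest as in the sketch below. This is illustrative only; the launch template, subnet reference, instance types, and all numeric values are placeholder assumptions, not recommendations.

```terraform
resource "aws_autoscaling_group" "example" {
  min_size            = 1
  max_size            = 4
  vpc_zone_identifier = [aws_subnet.example.id] # placeholder subnet

  mixed_instances_policy {
    instances_distribution {
      on_demand_base_capacity                  = 1
      on_demand_percentage_above_base_capacity = 25
      spot_allocation_strategy                 = "capacity-optimized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.example.id # placeholder template
        version            = "$Latest"
      }

      # Each override adds an instance type; weights make a c5.large
      # count as two capacity units toward desired_capacity.
      override {
        instance_type     = "c5.large"
        weighted_capacity = "2"
      }

      override {
        instance_type     = "c5a.large"
        weighted_capacity = "2"
      }
    }
  }
}
```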
+- `launch_template_id` - (Optional) ID of the launch template. Conflicts with `launch_template_name`. +- `launch_template_name` - (Optional) Name of the launch template. Conflicts with `launch_template_id`. +- `version` - (Optional) Template version. Can be version number, `$Latest`, or `$Default`. (Default: `$Default`). ##### mixed_instances_policy launch_template override This configuration block supports the following: -* `instance_type` - (Optional) Override the instance type in the Launch Template. -* `instance_requirements` - (Optional) Override the instance type in the Launch Template with instance types that satisfy the requirements. -* `launch_template_specification` - (Optional) Override the instance launch template specification in the Launch Template. -* `weighted_capacity` - (Optional) Number of capacity units, which gives the instance type a proportional weight to other instance types. +- `instance_type` - (Optional) Override the instance type in the Launch Template. +- `instance_requirements` - (Optional) Override the instance type in the Launch Template with instance types that satisfy the requirements. +- `launch_template_specification` - (Optional) Override the instance launch template specification in the Launch Template. +- `weighted_capacity` - (Optional) Number of capacity units, which gives the instance type a proportional weight to other instance types. ###### mixed_instances_policy launch_template override instance_requirements @@ -509,120 +523,123 @@ This configuration block supports the following: ~> **NOTE:** Both `memory_mib.min` and `vcpu_count.min` must be specified. -* `accelerator_count` - (Optional) Block describing the minimum and maximum number of accelerators (GPUs, FPGAs, or AWS Inferentia chips). Default is no minimum or maximum. - * `min` - (Optional) Minimum. - * `max` - (Optional) Maximum. Set to `0` to exclude instance types with accelerators. 
-* `accelerator_manufacturers` - (Optional) List of accelerator manufacturer names. Default is any manufacturer. - - ``` - Valid names: - * amazon-web-services - * amd - * nvidia - * xilinx - ``` - -* `accelerator_names` - (Optional) List of accelerator names. Default is any acclerator. - - ``` - Valid names: - * a100 - NVIDIA A100 GPUs - * v100 - NVIDIA V100 GPUs - * k80 - NVIDIA K80 GPUs - * t4 - NVIDIA T4 GPUs - * m60 - NVIDIA M60 GPUs - * radeon-pro-v520 - AMD Radeon Pro V520 GPUs - * vu9p - Xilinx VU9P FPGAs - ``` - -* `accelerator_total_memory_mib` - (Optional) Block describing the minimum and maximum total memory of the accelerators. Default is no minimum or maximum. - * `min` - (Optional) Minimum. - * `max` - (Optional) Maximum. - -* `accelerator_types` - (Optional) List of accelerator types. Default is any accelerator type. - - ``` - Valid types: - * fpga - * gpu - * inference - ``` - -* `allowed_instance_types` - (Optional) List of instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes. You can use strings with one or more wild cards, represented by an asterisk (\*), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge`, `c5*.*`, `m5a.*`, `r*`, `*3*`. For example, if you specify `c5*`, you are allowing the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are allowing all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is all instance types. - - ~> **NOTE:** If you specify `allowed_instance_types`, you can't specify `excluded_instance_types`. - -* `bare_metal` - (Optional) Indicate whether bare metal instace types should be `included`, `excluded`, or `required`. Default is `excluded`. 
-* `baseline_ebs_bandwidth_mbps` - (Optional) Block describing the minimum and maximum baseline EBS bandwidth, in Mbps. Default is no minimum or maximum. - * `min` - (Optional) Minimum. - * `max` - (Optional) Maximum. -* `burstable_performance` - (Optional) Indicate whether burstable performance instance types should be `included`, `excluded`, or `required`. Default is `excluded`. -* `cpu_manufacturers` (Optional) List of CPU manufacturer names. Default is any manufacturer. - - ~> **NOTE:** Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template. - - ``` - Valid names: - * amazon-web-services - * amd - * intel - ``` - -* `excluded_instance_types` - (Optional) List of instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (\*), to exclude an instance type, size, or generation. The following are examples: `m5.8xlarge`, `c5*.*`, `m5a.*`, `r*`, `*3*`. For example, if you specify `c5*`, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are excluding all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is no excluded instance types. - - ~> **NOTE:** If you specify `excluded_instance_types`, you can't specify `allowed_instance_types`. - -* `instance_generations` - (Optional) List of instance generation names. Default is any generation. - - ``` - Valid names: - * current - Recommended for best performance. - * previous - For existing applications optimized for older instance types. - ``` - -* `local_storage` - (Optional) Indicate whether instance types with local storage volumes are `included`, `excluded`, or `required`. Default is `included`. 
-* `local_storage_types` - (Optional) List of local storage type names. Default any storage type. - - ``` - Value names: - * hdd - hard disk drive - * ssd - solid state drive - ``` - -* `memory_gib_per_vcpu` - (Optional) Block describing the minimum and maximum amount of memory (GiB) per vCPU. Default is no minimum or maximum. - * `min` - (Optional) Minimum. May be a decimal number, e.g. `0.5`. - * `max` - (Optional) Maximum. May be a decimal number, e.g. `0.5`. -* `memory_mib` - (Required) Block describing the minimum and maximum amount of memory (MiB). Default is no maximum. - * `min` - (Required) Minimum. - * `max` - (Optional) Maximum. -* `network_bandwidth_gbps` - (Optional) Block describing the minimum and maximum amount of network bandwidth, in gigabits per second (Gbps). Default is no minimum or maximum. - * `min` - (Optional) Minimum. - * `max` - (Optional) Maximum. -* `network_interface_count` - (Optional) Block describing the minimum and maximum number of network interfaces. Default is no minimum or maximum. - * `min` - (Optional) Minimum. - * `max` - (Optional) Maximum. -* `on_demand_max_price_percentage_over_lowest_price` - (Optional) Price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999. Default is 20. - - If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price. 
-* `require_hibernate_support` - (Optional) Indicate whether instance types must support On-Demand Instance Hibernation, either `true` or `false`. Default is `false`. -* `spot_max_price_percentage_over_lowest_price` - (Optional) Price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999. Default is 100. - - If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price. -* `total_local_storage_gb` - (Optional) Block describing the minimum and maximum total local storage (GB). Default is no minimum or maximum. - * `min` - (Optional) Minimum. May be a decimal number, e.g. `0.5`. - * `max` - (Optional) Maximum. May be a decimal number, e.g. `0.5`. -* `vcpu_count` - (Required) Block describing the minimum and maximum number of vCPUs. Default is no maximum. - * `min` - (Required) Minimum. - * `max` - (Optional) Maximum. +- `accelerator_count` - (Optional) Block describing the minimum and maximum number of accelerators (GPUs, FPGAs, or AWS Inferentia chips). Default is no minimum or maximum. + - `min` - (Optional) Minimum. + - `max` - (Optional) Maximum. Set to `0` to exclude instance types with accelerators. +- `accelerator_manufacturers` - (Optional) List of accelerator manufacturer names. Default is any manufacturer. + + ``` + Valid names: + * amazon-web-services + * amd + * nvidia + * xilinx + ``` + +- `accelerator_names` - (Optional) List of accelerator names. Default is any accelerator. 
+ + ``` + Valid names: + * a100 - NVIDIA A100 GPUs + * v100 - NVIDIA V100 GPUs + * k80 - NVIDIA K80 GPUs + * t4 - NVIDIA T4 GPUs + * m60 - NVIDIA M60 GPUs + * radeon-pro-v520 - AMD Radeon Pro V520 GPUs + * vu9p - Xilinx VU9P FPGAs + ``` + +- `accelerator_total_memory_mib` - (Optional) Block describing the minimum and maximum total memory of the accelerators. Default is no minimum or maximum. + + - `min` - (Optional) Minimum. + - `max` - (Optional) Maximum. + +- `accelerator_types` - (Optional) List of accelerator types. Default is any accelerator type. + + ``` + Valid types: + * fpga + * gpu + * inference + ``` + +- `allowed_instance_types` - (Optional) List of instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes. You can use strings with one or more wild cards, represented by an asterisk (\*), to allow an instance type, size, or generation. The following are examples: `m5.8xlarge`, `c5*.*`, `m5a.*`, `r*`, `*3*`. For example, if you specify `c5*`, you are allowing the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are allowing all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is all instance types. + + ~> **NOTE:** If you specify `allowed_instance_types`, you can't specify `excluded_instance_types`. + +- `bare_metal` - (Optional) Indicate whether bare metal instance types should be `included`, `excluded`, or `required`. Default is `excluded`. +- `baseline_ebs_bandwidth_mbps` - (Optional) Block describing the minimum and maximum baseline EBS bandwidth, in Mbps. Default is no minimum or maximum. + - `min` - (Optional) Minimum. + - `max` - (Optional) Maximum. +- `burstable_performance` - (Optional) Indicate whether burstable performance instance types should be `included`, `excluded`, or `required`. Default is `excluded`. 
+- `cpu_manufacturers` - (Optional) List of CPU manufacturer names. Default is any manufacturer. + + ~> **NOTE:** Don't confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template. + + ``` + Valid names: + * amazon-web-services + * amd + * intel + ``` + +- `excluded_instance_types` - (Optional) List of instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (\*), to exclude an instance type, size, or generation. The following are examples: `m5.8xlarge`, `c5*.*`, `m5a.*`, `r*`, `*3*`. For example, if you specify `c5*`, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify `m5a.*`, you are excluding all the M5a instance types, but not the M5n instance types. Maximum of 400 entries in the list; each entry is limited to 30 characters. Default is no excluded instance types. + + ~> **NOTE:** If you specify `excluded_instance_types`, you can't specify `allowed_instance_types`. + +- `instance_generations` - (Optional) List of instance generation names. Default is any generation. + + ``` + Valid names: + * current - Recommended for best performance. + * previous - For existing applications optimized for older instance types. + ``` + +- `local_storage` - (Optional) Indicate whether instance types with local storage volumes are `included`, `excluded`, or `required`. Default is `included`. +- `local_storage_types` - (Optional) List of local storage type names. Default is any storage type. + + ``` + Valid names: + * hdd - hard disk drive + * ssd - solid state drive + ``` + +- `memory_gib_per_vcpu` - (Optional) Block describing the minimum and maximum amount of memory (GiB) per vCPU. Default is no minimum or maximum. + - `min` - (Optional) Minimum. May be a decimal number, e.g. `0.5`. + - `max` - (Optional) Maximum. 
May be a decimal number, e.g. `0.5`. +- `memory_mib` - (Required) Block describing the minimum and maximum amount of memory (MiB). Default is no maximum. + - `min` - (Required) Minimum. + - `max` - (Optional) Maximum. +- `network_bandwidth_gbps` - (Optional) Block describing the minimum and maximum amount of network bandwidth, in gigabits per second (Gbps). Default is no minimum or maximum. + - `min` - (Optional) Minimum. + - `max` - (Optional) Maximum. +- `network_interface_count` - (Optional) Block describing the minimum and maximum number of network interfaces. Default is no minimum or maximum. + - `min` - (Optional) Minimum. + - `max` - (Optional) Maximum. +- `on_demand_max_price_percentage_over_lowest_price` - (Optional) Price protection threshold for On-Demand Instances. This is the maximum you’ll pay for an On-Demand Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999. Default is 20. + + If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price. + +- `require_hibernate_support` - (Optional) Indicate whether instance types must support On-Demand Instance Hibernation, either `true` or `false`. Default is `false`. +- `spot_max_price_percentage_over_lowest_price` - (Optional) Price protection threshold for Spot Instances. This is the maximum you’ll pay for a Spot Instance, expressed as a percentage higher than the cheapest M, C, or R instance type with your specified attributes. 
When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price is higher than your threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as 999999. Default is 100. + + If you set DesiredCapacityType to vcpu or memory-mib, the price protection threshold is applied based on the per vCPU or per memory price instead of the per instance price. + +- `total_local_storage_gb` - (Optional) Block describing the minimum and maximum total local storage (GB). Default is no minimum or maximum. + - `min` - (Optional) Minimum. May be a decimal number, e.g. `0.5`. + - `max` - (Optional) Maximum. May be a decimal number, e.g. `0.5`. +- `vcpu_count` - (Required) Block describing the minimum and maximum number of vCPUs. Default is no maximum. + - `min` - (Required) Minimum. + - `max` - (Optional) Maximum. ### tag The `tag` attribute accepts exactly one tag declaration with the following fields: -* `key` - (Required) Key -* `value` - (Required) Value -* `propagate_at_launch` - (Required) Enables propagation of the tag to - Amazon EC2 instances launched via this ASG +- `key` - (Required) Key +- `value` - (Required) Value +- `propagate_at_launch` - (Required) Enables propagation of the tag to + Amazon EC2 instances launched via this ASG To declare multiple tags, additional `tag` blocks can be specified. @@ -632,15 +649,15 @@ To declare multiple tags, additional `tag` blocks can be specified. This configuration block supports the following: -* `strategy` - (Required) Strategy to use for instance refresh. The only allowed value is `Rolling`. See [StartInstanceRefresh Action](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_StartInstanceRefresh.html#API_StartInstanceRefresh_RequestParameters) for more information. -* `preferences` - (Optional) Override default parameters for Instance Refresh. 
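As a sketch of the attribute-based selection described by `instance_requirements` above, an override can request instance types by requirements instead of naming them. All values here are illustrative assumptions, and the launch template and subnet references are placeholders.

```terraform
resource "aws_autoscaling_group" "example" {
  min_size            = 1
  max_size            = 4
  vpc_zone_identifier = [aws_subnet.example.id] # placeholder subnet

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.example.id # placeholder template
      }

      override {
        instance_requirements {
          # Both memory_mib.min and vcpu_count.min must be specified.
          memory_mib {
            min = 4096
          }

          vcpu_count {
            min = 2
            max = 8
          }

          instance_generations  = ["current"]
          burstable_performance = "excluded"
        }
      }
    }
  }
}
```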
- * `checkpoint_delay` - (Optional) Number of seconds to wait after a checkpoint. Defaults to `3600`. - * `checkpoint_percentages` - (Optional) List of percentages for each checkpoint. Values must be unique and in ascending order. To replace all instances, the final number must be `100`. - * `instance_warmup` - (Optional) Number of seconds until a newly launched instance is configured and ready to use. Default behavior is to use the Auto Scaling Group's health check grace period. - * `min_healthy_percentage` - (Optional) Amount of capacity in the Auto Scaling group that must remain healthy during an instance refresh to allow the operation to continue, as a percentage of the desired capacity of the Auto Scaling group. Defaults to `90`. - * `skip_matching` - (Optional) Replace instances that already have your desired configuration. Defaults to `false`. - * `auto_rollback` - (Optional) Automatically rollback if instance refresh fails. Defaults to `false`. -* `triggers` - (Optional) Set of additional property names that will trigger an Instance Refresh. A refresh will always be triggered by a change in any of `launch_configuration`, `launch_template`, or `mixed_instances_policy`. +- `strategy` - (Required) Strategy to use for instance refresh. The only allowed value is `Rolling`. See [StartInstanceRefresh Action](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_StartInstanceRefresh.html#API_StartInstanceRefresh_RequestParameters) for more information. +- `preferences` - (Optional) Override default parameters for Instance Refresh. + - `checkpoint_delay` - (Optional) Number of seconds to wait after a checkpoint. Defaults to `3600`. + - `checkpoint_percentages` - (Optional) List of percentages for each checkpoint. Values must be unique and in ascending order. To replace all instances, the final number must be `100`. + - `instance_warmup` - (Optional) Number of seconds until a newly launched instance is configured and ready to use. 
Default behavior is to use the Auto Scaling Group's health check grace period. + - `min_healthy_percentage` - (Optional) Amount of capacity in the Auto Scaling group that must remain healthy during an instance refresh to allow the operation to continue, as a percentage of the desired capacity of the Auto Scaling group. Defaults to `90`. + - `skip_matching` - (Optional) Replace instances that already have your desired configuration. Defaults to `false`. + - `auto_rollback` - (Optional) Automatically rollback if instance refresh fails. Defaults to `false`. +- `triggers` - (Optional) Set of additional property names that will trigger an Instance Refresh. A refresh will always be triggered by a change in any of `launch_configuration`, `launch_template`, or `mixed_instances_policy`. ~> **NOTE:** A refresh is started when any of the following Auto Scaling Group properties change: `launch_configuration`, `launch_template`, `mixed_instances_policy`. Additional properties can be specified in the `triggers` property of `instance_refresh`. @@ -654,36 +671,45 @@ This configuration block supports the following: This configuration block supports the following: -* `instance_reuse_policy` - (Optional) Whether instances in the Auto Scaling group can be returned to the warm pool on scale in. The default is to terminate instances in the Auto Scaling group when the group scales in. -* `max_group_prepared_capacity` - (Optional) Total maximum number of instances that are allowed to be in the warm pool or in any state except Terminated for the Auto Scaling group. -* `min_size` - (Optional) Minimum number of instances to maintain in the warm pool. This helps you to ensure that there is always a certain number of warmed instances available to handle traffic spikes. Defaults to 0 if not specified. -* `pool_state` - (Optional) Sets the instance state to transition to after the lifecycle hooks finish. Valid values are: Stopped (default), Running or Hibernated. 
+- `instance_reuse_policy` - (Optional) Whether instances in the Auto Scaling group can be returned to the warm pool on scale in. The default is to terminate instances in the Auto Scaling group when the group scales in. +- `max_group_prepared_capacity` - (Optional) Total maximum number of instances that are allowed to be in the warm pool or in any state except Terminated for the Auto Scaling group. +- `min_size` - (Optional) Minimum number of instances to maintain in the warm pool. This helps you to ensure that there is always a certain number of warmed instances available to handle traffic spikes. Defaults to 0 if not specified. +- `pool_state` - (Optional) Sets the instance state to transition to after the lifecycle hooks finish. Valid values are: Stopped (default), Running or Hibernated. + +### traffic_source + +- `identifier` - Identifies the traffic source. For Application Load Balancers, Gateway Load Balancers, Network Load Balancers, and VPC Lattice, this will be the Amazon Resource Name (ARN) for a target group in this account and Region. For Classic Load Balancers, this will be the name of the Classic Load Balancer in this account and Region. +- `type` - Provides additional context for the value of `identifier`. + The following lists the valid values: + `elb` if `identifier` is the name of a Classic Load Balancer. + `elbv2` if `identifier` is the ARN of an Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target group. + `vpc-lattice` if `identifier` is the ARN of a VPC Lattice target group. ##### instance_reuse_policy This configuration block supports the following: -* `reuse_on_scale_in` - (Optional) Whether instances in the Auto Scaling group can be returned to the warm pool on scale in. +- `reuse_on_scale_in` - (Optional) Whether instances in the Auto Scaling group can be returned to the warm pool on scale in. 
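The `instance_refresh`, `warm_pool`, and `instance_reuse_policy` blocks described above combine as in the following sketch. The numeric values are illustrative assumptions only, and the group's other required arguments are elided.

```terraform
resource "aws_autoscaling_group" "example" {
  # ... other required arguments (sizes, launch template, subnets) ...

  instance_refresh {
    strategy = "Rolling"

    preferences {
      min_healthy_percentage = 90
      instance_warmup        = 300
      skip_matching          = true
    }

    # Refresh also when tags change, in addition to the default triggers.
    triggers = ["tag"]
  }

  warm_pool {
    pool_state                  = "Stopped"
    min_size                    = 1
    max_group_prepared_capacity = 5

    instance_reuse_policy {
      reuse_on_scale_in = true
    }
  }
}
```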
## Attributes Reference In addition to all arguments above, the following attributes are exported: -* `id` - Auto Scaling Group id. -* `arn` - ARN for this Auto Scaling Group -* `availability_zones` - Availability zones of the Auto Scaling Group. -* `min_size` - Minimum size of the Auto Scaling Group -* `max_size` - Maximum size of the Auto Scaling Group -* `default_cooldown` - Time between a scaling activity and the succeeding scaling activity. -* `default_instance_warmup` - The duration of the default instance warmup, in seconds. -* `name` - Name of the Auto Scaling Group -* `health_check_grace_period` - Time after instance comes into service before checking health. -* `health_check_type` - "EC2" or "ELB". Controls how health checking is done. -* `desired_capacity` -The number of Amazon EC2 instances that should be running in the group. -* `launch_configuration` - The launch configuration of the Auto Scaling Group -* `predicted_capacity` - Predicted capacity of the group. -* `vpc_zone_identifier` (Optional) - The VPC zone identifier -* `warm_pool_size` - Current size of the warm pool. +- `id` - Auto Scaling Group id. +- `arn` - ARN for this Auto Scaling Group +- `availability_zones` - Availability zones of the Auto Scaling Group. +- `min_size` - Minimum size of the Auto Scaling Group +- `max_size` - Maximum size of the Auto Scaling Group +- `default_cooldown` - Time between a scaling activity and the succeeding scaling activity. +- `default_instance_warmup` - The duration of the default instance warmup, in seconds. +- `name` - Name of the Auto Scaling Group +- `health_check_grace_period` - Time after instance comes into service before checking health. +- `health_check_type` - "EC2" or "ELB". Controls how health checking is done. +- `desired_capacity` - The number of Amazon EC2 instances that should be running in the group. +- `launch_configuration` - The launch configuration of the Auto Scaling Group +- `predicted_capacity` - Predicted capacity of the group. 
+- `vpc_zone_identifier` (Optional) - The VPC zone identifier +- `warm_pool_size` - Current size of the warm pool. ~> **NOTE:** When using `ELB` as the `health_check_type`, `health_check_grace_period` is required. @@ -745,7 +771,7 @@ via the `load_balancers` attribute or with ALBs specified with `target_group_arn The `min_elb_capacity` parameter causes Terraform to wait for at least the requested number of instances to show up `"InService"` in all attached ELBs -during ASG creation. It has no effect on ASG updates. +during ASG creation. It has no effect on ASG updates. If `wait_for_elb_capacity` is set, Terraform will wait for exactly that number of Instances to be `"InService"` in all attached ELBs on both creation and diff --git a/website/docs/r/autoscaling_traffic_source_attachment.html.markdown b/website/docs/r/autoscaling_traffic_source_attachment.html.markdown new file mode 100644 index 00000000000..33a8713d478 --- /dev/null +++ b/website/docs/r/autoscaling_traffic_source_attachment.html.markdown @@ -0,0 +1,48 @@ +--- +subcategory: "Auto Scaling" +layout: "aws" +page_title: "AWS: aws_autoscaling_traffic_source_attachment" +description: |- + Terraform resource for managing an AWS Auto Scaling Traffic Source Attachment. +--- + +# Resource: aws_autoscaling_traffic_source_attachment + +Attaches a traffic source to an Auto Scaling group. + +~> **NOTE on Auto Scaling Groups, Attachments and Traffic Source Attachments:** Terraform provides standalone [Attachment](autoscaling_attachment.html) (for attaching Classic Load Balancers and Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target groups) and Traffic Source Attachment (for attaching Load Balancers and VPC Lattice target groups) resources and an [Auto Scaling Group](autoscaling_group.html) resource with `load_balancers`, `target_group_arns` and `traffic_source` attributes. Do not use the same traffic source in more than one of these resources. 
Doing so will cause a conflict of attachments. A [`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html) can be used to suppress differences if necessary. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_autoscaling_traffic_source_attachment" "example" { + autoscaling_group_name = aws_autoscaling_group.example.id + + traffic_source { + identifier = aws_lb_target_group.example.arn + type = "elbv2" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +- `autoscaling_group_name` - (Required) The name of the Auto Scaling group. +- `traffic_source` - (Required) The unique identifier of a traffic source. + +`traffic_source` supports the following: + +- `identifier` - (Required) Identifies the traffic source. For Application Load Balancers, Gateway Load Balancers, Network Load Balancers, and VPC Lattice, this will be the Amazon Resource Name (ARN) for a target group in this account and Region. For Classic Load Balancers, this will be the name of the Classic Load Balancer in this account and Region. +- `type` - (Required) Provides additional context for the value of `identifier`. + The following lists the valid values: + `elb` if `identifier` is the name of a Classic Load Balancer. + `elbv2` if `identifier` is the ARN of an Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target group. + `vpc-lattice` if `identifier` is the ARN of a VPC Lattice target group. + +## Attributes Reference + +No additional attributes are exported. 
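Where a standalone attachment resource and in-line group attributes could both touch the same traffic source, one way to apply the `lifecycle` suppression mentioned in the note above is on the Auto Scaling group itself. Which attributes to ignore depends on your configuration, so treat this as an assumption-laden sketch rather than a prescription.

```terraform
resource "aws_autoscaling_group" "example" {
  # ... required arguments elided ...

  lifecycle {
    # Ignore attachment-managed attributes so standalone attachment
    # resources do not produce a perpetual diff.
    ignore_changes = [load_balancers, target_group_arns]
  }
}
```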
diff --git a/website/docs/r/backup_region_settings.html.markdown b/website/docs/r/backup_region_settings.html.markdown index e85da6b65b6..67d4246d712 100644 --- a/website/docs/r/backup_region_settings.html.markdown +++ b/website/docs/r/backup_region_settings.html.markdown @@ -40,7 +40,7 @@ resource "aws_backup_region_settings" "test" { The following arguments are supported: * `resource_type_opt_in_preference` - (Required) A map of services along with the opt-in preferences for the Region. -* `resource_type_management_preference` - (Optional) A map of services along with the management preferences for the Region. +* `resource_type_management_preference` - (Optional) A map of services along with the management preferences for the Region. For more information, see the [AWS Documentation](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateRegionSettings.html#API_UpdateRegionSettings_RequestSyntax). ## Attributes Reference diff --git a/website/docs/r/batch_compute_environment.html.markdown b/website/docs/r/batch_compute_environment.html.markdown index e760f901edc..314d0f0430e 100644 --- a/website/docs/r/batch_compute_environment.html.markdown +++ b/website/docs/r/batch_compute_environment.html.markdown @@ -92,6 +92,11 @@ resource "aws_subnet" "sample" { cidr_block = "10.1.1.0/24" } +resource "aws_placement_group" "sample" { + name = "sample" + strategy = "cluster" +} + resource "aws_batch_compute_environment" "sample" { compute_environment_name = "sample" @@ -105,6 +110,8 @@ resource "aws_batch_compute_environment" "sample" { max_vcpus = 16 min_vcpus = 0 + placement_group = aws_placement_group.sample.name + security_group_ids = [ aws_security_group.sample.id, ] @@ -172,6 +179,7 @@ resource "aws_batch_compute_environment" "sample" { * `launch_template` - (Optional) The launch template to use for your compute resources. See details below. This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified. 
* `max_vcpus` - (Required) The maximum number of EC2 vCPUs that an environment can reach. * `min_vcpus` - (Optional) The minimum number of EC2 vCPUs that an environment should maintain. For `EC2` or `SPOT` compute environments, if the parameter is not explicitly defined, a `0` default value will be set. This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified. +* `placement_group` - (Optional) The Amazon EC2 placement group to associate with your compute resources. * `security_group_ids` - (Optional) A list of EC2 security group that are associated with instances launched in the compute environment. This parameter is required for Fargate compute environments. * `spot_iam_fleet_role` - (Optional) The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a SPOT compute environment. This parameter is required for SPOT compute environments. This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified. * `subnets` - (Required) A list of VPC subnets into which the compute resources are launched. diff --git a/website/docs/r/chime_voice_connector.html.markdown b/website/docs/r/chime_voice_connector.html.markdown index fe663a0ad16..42142b0e43a 100644 --- a/website/docs/r/chime_voice_connector.html.markdown +++ b/website/docs/r/chime_voice_connector.html.markdown @@ -22,17 +22,23 @@ resource "aws_chime_voice_connector" "test" { ## Argument Reference -The following arguments are supported: +The following arguments are required: * `name` - (Required) The name of the Amazon Chime Voice Connector. * `require_encryption` - (Required) When enabled, requires encryption for the Amazon Chime Voice Connector. + +The following arguments are optional: + * `aws_region` - (Optional) The AWS Region in which the Amazon Chime Voice Connector is created. Default value: `us-east-1` +* `tags` - (Optional) Key-value mapping of resource tags. 
If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ## Attributes Reference In addition to all arguments above, the following attributes are exported: +* `arn` - ARN (Amazon Resource Name) of the Amazon Chime Voice Connector. * `outbound_host_name` - The outbound host name for the Amazon Chime Voice Connector. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). ## Import diff --git a/website/docs/r/chimesdkvoice_global_settings.html.markdown b/website/docs/r/chimesdkvoice_global_settings.html.markdown new file mode 100644 index 00000000000..86aaa45ee65 --- /dev/null +++ b/website/docs/r/chimesdkvoice_global_settings.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Chime SDK Voice" +layout: "aws" +page_title: "AWS: aws_chimesdkvoice_global_settings" +description: |- + Terraform resource for managing Amazon Chime SDK Voice Global Settings. +--- + +# Resource: aws_chimesdkvoice_global_settings + +Terraform resource for managing Amazon Chime SDK Voice Global Settings. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_chimesdkvoice_global_settings" "example" { + voice_connector { + cdr_bucket = "example-bucket-name" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `voice_connector` - (Required) The Voice Connector settings. See [voice_connector](#voice_connector). + +### `voice_connector` + +The Amazon Chime SDK Voice Connector settings. Includes any Amazon S3 buckets designated for storing call detail records. + +* `cdr_bucket` - (Optional) The S3 bucket that stores the Voice Connector's call detail records. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - AWS account ID for which the settings are applied. + +## Import + +AWS Chime SDK Voice Global Settings can be imported using the `id` (AWS account ID), e.g., + +``` +$ terraform import aws_chimesdkvoice_global_settings.example 123456789012 +``` diff --git a/website/docs/r/chimesdkvoice_sip_media_application.html.markdown b/website/docs/r/chimesdkvoice_sip_media_application.html.markdown new file mode 100644 index 00000000000..0659c4eb77c --- /dev/null +++ b/website/docs/r/chimesdkvoice_sip_media_application.html.markdown @@ -0,0 +1,59 @@ +--- +subcategory: "Chime SDK Voice" +layout: "aws" +page_title: "AWS: aws_chimesdkvoice_sip_media_application" +description: |- + A ChimeSDKVoice SIP Media Application is a managed object that passes values from a SIP rule to a target AWS Lambda function. +--- + +# Resource: aws_chimesdkvoice_sip_media_application + +A ChimeSDKVoice SIP Media Application is a managed object that passes values from a SIP rule to a target AWS Lambda function. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_chimesdkvoice_sip_media_application" "example" { + aws_region = "us-east-1" + name = "example-sip-media-application" + endpoints { + lambda_arn = aws_lambda_function.test.arn + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `aws_region` - (Required) The AWS Region in which the AWS Chime SDK Voice Sip Media Application is created. +* `endpoints` - (Required) List of endpoints (Lambda Amazon Resource Names) specified for the SIP media application. Currently, only one endpoint is supported. See [`endpoints`](#endpoints). +* `name` - (Required) The name of the AWS Chime SDK Voice Sip Media Application. + +The following arguments are optional: + +* `tags` - (Optional) Key-value mapping of resource tags. 
If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +### `endpoints` + +The endpoint assigned to the SIP media application. + +* `lambda_arn` - (Required) Valid Amazon Resource Name (ARN) of the Lambda function, version, or alias. The function must be created in the same AWS Region as the SIP media application. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN (Amazon Resource Name) of the AWS Chime SDK Voice Sip Media Application +* `id` - The SIP media application ID. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). + +## Import + +A ChimeSDKVoice SIP Media Application can be imported using the `id`, e.g., + +``` +$ terraform import aws_chimesdkvoice_sip_media_application.example abcdef123456 +``` diff --git a/website/docs/r/chimesdkvoice_sip_rule.html.markdown b/website/docs/r/chimesdkvoice_sip_rule.html.markdown new file mode 100644 index 00000000000..c9c8480f46d --- /dev/null +++ b/website/docs/r/chimesdkvoice_sip_rule.html.markdown @@ -0,0 +1,62 @@ +--- +subcategory: "Chime SDK Voice" +layout: "aws" +page_title: "AWS: aws_chimesdkvoice_sip_rule" +description: |- + A SIP rule associates your SIP media application with a phone number or a Request URI hostname. You can associate a SIP rule with more than one SIP media application. Each application then runs only that rule. +--- +# Resource: aws_chimesdkvoice_sip_rule + +A SIP rule associates your SIP media application with a phone number or a Request URI hostname. You can associate a SIP rule with more than one SIP media application. Each application then runs only that rule. 
+ +## Example Usage + +### Basic Usage + +```terraform +resource "aws_chimesdkvoice_sip_rule" "example" { + name = "example-sip-rule" + trigger_type = "RequestUriHostname" + trigger_value = aws_chime_voice_connector.example-voice-connector.outbound_host_name + target_applications { + priority = 1 + sip_media_application_id = aws_chimesdkvoice_sip_media_application.example-sma.id + aws_region = "us-east-1" + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) The name of the SIP rule. +* `target_applications` - (Required) List of SIP media applications with priority and AWS Region. Only one SIP application per AWS Region can be used. See [`target_applications`](#target_applications). +* `trigger_type` - (Required) The type of trigger assigned to the SIP rule in `trigger_value`. Valid values are `RequestUriHostname` or `ToPhoneNumber`. +* `trigger_value` - (Required) If `trigger_type` is `RequestUriHostname`, the value can be the outbound host name of an Amazon Chime Voice Connector. If `trigger_type` is `ToPhoneNumber`, the value can be a customer-owned phone number in the E164 format. The Sip Media Application specified in the Sip Rule is triggered if the request URI in an incoming SIP request matches the `RequestUriHostname`, or if the "To" header in the incoming SIP request matches the `ToPhoneNumber` value. + +The following arguments are optional: + +* `disabled` - (Optional) Enables or disables a rule. You must disable rules before you can delete them. + +### `target_applications` + +List of SIP media applications with priority and AWS Region. Only one SIP application per AWS Region can be used. + +* `aws_region` - (Required) The AWS Region of the target application. +* `priority` - (Required) Priority of the SIP media application in the target list. +* `sip_media_application_id` - (Required) The SIP media application ID. 
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The SIP rule ID. + +## Import + +A ChimeSDKVoice SIP Rule can be imported using the `id`, e.g., + +``` +$ terraform import aws_chimesdkvoice_sip_rule.example abcdef123456 +``` diff --git a/website/docs/r/cleanrooms_collaboration.html.markdown b/website/docs/r/cleanrooms_collaboration.html.markdown new file mode 100644 index 00000000000..8489f6159fc --- /dev/null +++ b/website/docs/r/cleanrooms_collaboration.html.markdown @@ -0,0 +1,93 @@ +--- +subcategory: "Clean Rooms" +layout: "aws" +page_title: "AWS: aws_cleanrooms_collaboration" +description: |- + Provides a Clean Rooms Collaboration. --- + +# Resource: aws_cleanrooms_collaboration + +Provides an AWS Clean Rooms collaboration. All members included in the definition will be invited to +join the collaboration and can create memberships. + +## Example Usage + +### Collaboration with tags + +```terraform +resource "aws_cleanrooms_collaboration" "test_collaboration" { + name = "terraform-example-collaboration" + creator_member_abilities = ["CAN_QUERY", "CAN_RECEIVE_RESULTS"] + creator_display_name = "Creator " + description = "I made this collaboration with terraform!" + query_log_status = "DISABLED" + + data_encryption_metadata { + allow_clear_text = true + allow_duplicates = true + allow_joins_on_columns_with_different_names = true + preserve_nulls = false + } + + member { + account_id = 123456789012 + display_name = "Other member" + member_abilities = [] + } + + tags = { + Project = "Terraform" + } + +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) - The name of the collaboration. Collaboration names do not need to be unique. +* `description` - (Required) - A description for a collaboration. +* `creator_member_abilities` - (Required - Forces new resource) - The list of member abilities for the creator of the collaboration. 
Valid values [may be found here](https://docs.aws.amazon.com/clean-rooms/latest/apireference/API_CreateCollaboration.html#API-CreateCollaboration-request-creatorMemberAbilities). +* `creator_display_name` - (Required - Forces new resource) - The name for the member record for the collaboration creator. +* `query_log_status` - (Required - Forces new resource) - Determines if members of the collaboration can enable query logs within their own memberships. Valid values [may be found here](https://docs.aws.amazon.com/clean-rooms/latest/apireference/API_CreateCollaboration.html#API-CreateCollaboration-request-queryLogStatus). +* `data_encryption_metadata` - (Required - Forces new resource) - A collection of settings which determine how the [c3r client](https://docs.aws.amazon.com/clean-rooms/latest/userguide/crypto-computing.html) will encrypt data for use within this collaboration. +* `data_encryption_metadata.allow_clear_text` - (Required - Forces new resource) - Indicates whether encrypted tables can contain cleartext data. This is a boolean field. +* `data_encryption_metadata.allow_duplicates` - (Required - Forces new resource) - Indicates whether Fingerprint columns can contain duplicate entries. This is a boolean field. +* `data_encryption_metadata.allow_joins_on_columns_with_different_names` - (Required - Forces new resource) - Indicates whether Fingerprint columns can be joined on any other Fingerprint column with a different name. This is a boolean field. +* `data_encryption_metadata.preserve_nulls` - (Required - Forces new resource) - Indicates whether NULL values are to be copied as NULL to encrypted tables (true) or cryptographically processed (false). +* `member` - (Optional - Forces new resource) - Additional members of the collaboration which will be invited to join the collaboration. 
+* `member.account_id` - (Required - Forces new resource) - The account ID for the invited member +* `member.display_name` - (Required - Forces new resource) - The display name for the invited member +* `member.member_abilities` - (Required - Forces new resource) - The list of abilities for the invited member. Valid values [may be found here](https://docs.aws.amazon.com/clean-rooms/latest/apireference/API_CreateCollaboration.html#API-CreateCollaboration-request-creatorMemberAbilities). +* `tags` - (Optional) - Key-value pairs which tag the collaboration. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The ARN of the collaboration +* `id` - The ID of the collaboration +* `create_time` - The date and time the collaboration was created +* `member status` - For each member included in the collaboration, an additional computed `status` attribute is added. These values [may be found here](https://docs.aws.amazon.com/clean-rooms/latest/apireference/API_MemberSummary.html#API-Type-MemberSummary-status) +* `updated_time` - The date and time the collaboration was last updated + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `1m`) +- `update` - (Default `1m`) +- `delete` - (Default `1m`) diff --git a/website/docs/r/cloud9_environment_ec2.html.markdown b/website/docs/r/cloud9_environment_ec2.html.markdown index 6d6aa154deb..d93844c570d 100644 --- a/website/docs/r/cloud9_environment_ec2.html.markdown +++ b/website/docs/r/cloud9_environment_ec2.html.markdown @@ -58,7 +58,7 @@ data "aws_instance" "cloud9_instance" { resource "aws_eip" "cloud9_eip" { instance = data.aws_instance.cloud9_instance.id - vpc = true + domain = "vpc" } output "cloud9_public_ip" { diff --git a/website/docs/r/cloudformation_stack_set.html.markdown b/website/docs/r/cloudformation_stack_set.html.markdown index c88d754bbc7..875155af1e0 
100644 --- a/website/docs/r/cloudformation_stack_set.html.markdown +++ b/website/docs/r/cloudformation_stack_set.html.markdown @@ -97,6 +97,8 @@ The following arguments are supported: * `operation_preferences` - (Optional) Preferences for how AWS CloudFormation performs a stack set update. * `description` - (Optional) Description of the StackSet. * `execution_role_name` - (Optional) Name of the IAM Role in all target accounts for StackSet operations. Defaults to `AWSCloudFormationStackSetExecutionRole` when using the `SELF_MANAGED` permission model. This should not be defined when using the `SERVICE_MANAGED` permission model. +* `managed_execution` - (Optional) Configuration block to allow StackSets to perform non-conflicting operations concurrently and queue conflicting operations. + * `active` - (Optional) When set to true, StackSets performs non-conflicting operations concurrently and queues conflicting operations. After conflicting operations finish, StackSets starts queued operations in request order. Default is false. * `parameters` - (Optional) Key-value map of input parameters for the StackSet template. All template parameters, including those with a `Default`, must be configured or ignored with `lifecycle` configuration block `ignore_changes` argument. All `NoEcho` template parameters must be ignored with the `lifecycle` configuration block `ignore_changes` argument. * `permission_model` - (Optional) Describes how the IAM roles required for your StackSet are created. Valid values: `SELF_MANAGED` (default), `SERVICE_MANAGED`. * `call_as` - (Optional) Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account. Valid values: `SELF` (default), `DELEGATED_ADMIN`. 
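The `managed_execution` block sits at the top level of the resource. A minimal sketch, assuming the `SELF_MANAGED` permission model; the resource name and the SNS topic template are illustrative, not part of this changeset:

```terraform
resource "aws_cloudformation_stack_set" "example" {
  name             = "example"
  permission_model = "SELF_MANAGED"

  # Allow non-conflicting StackSet operations to run concurrently;
  # conflicting operations are queued and started in request order.
  managed_execution {
    active = true
  }

  template_body = jsonencode({
    Resources = {
      ExampleTopic = {
        Type = "AWS::SNS::Topic"
      }
    }
  })
}
```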
diff --git a/website/docs/r/cloudfront_cache_policy.html.markdown b/website/docs/r/cloudfront_cache_policy.html.markdown index f1a0eb8132a..4b4c0cab663 100644 --- a/website/docs/r/cloudfront_cache_policy.html.markdown +++ b/website/docs/r/cloudfront_cache_policy.html.markdown @@ -3,19 +3,14 @@ subcategory: "CloudFront" layout: "aws" page_title: "AWS: aws_cloudfront_cache_policy" description: |- - Provides a cache policy for a CloudFront ditribution. When it’s attached to a cache behavior, - the cache policy determines the values that CloudFront includes in the cache key. These - values can include HTTP headers, cookies, and URL query strings. CloudFront uses the cache - key to find an object in its cache that it can return to the viewer. It also determines the - default, minimum, and maximum time to live (TTL) values that you want objects to stay in the - CloudFront cache. + Use the `aws_cloudfront_cache_policy` resource to manage cache policies for CloudFront distributions. This resource allows you to attach cache policies to cache behaviors, which determine the values included in the cache key, such as HTTP headers, cookies, and URL query strings. CloudFront uses the cache key to locate cached objects and return them to viewers. Additionally, the cache policy sets the default, minimum, and maximum time to live (TTL) values for objects in the CloudFront cache. --- # Resource: aws_cloudfront_cache_policy ## Example Usage -The following example below creates a CloudFront cache policy. +Use the `aws_cloudfront_cache_policy` resource to create a cache policy for CloudFront. ```terraform resource "aws_cloudfront_cache_policy" "example" { @@ -49,52 +44,52 @@ resource "aws_cloudfront_cache_policy" "example" { ## Argument Reference -The following arguments are supported: +This resource supports the following arguments: -* `name` - (Required) A unique name to identify the cache policy. 
-* `min_ttl` - (Required) The minimum amount of time, in seconds, that you want objects to stay in the CloudFront cache before CloudFront sends another request to the origin to see if the object has been updated. -* `max_ttl` - (Optional) The maximum amount of time, in seconds, that objects stay in the CloudFront cache before CloudFront sends another request to the origin to see if the object has been updated. -* `default_ttl` - (Optional) The default amount of time, in seconds, that you want objects to stay in the CloudFront cache before CloudFront sends another request to the origin to see if the object has been updated. -* `comment` - (Optional) A comment to describe the cache policy. -* `parameters_in_cache_key_and_forwarded_to_origin` - (Required) The HTTP headers, cookies, and URL query strings to include in the cache key. See [Parameters In Cache Key And Forwarded To Origin](#parameters-in-cache-key-and-forwarded-to-origin) for more information. +* `name` - (Required) Unique name used to identify the cache policy. +* `min_ttl` - (Required) Minimum amount of time, in seconds, that objects should remain in the CloudFront cache before a new request is sent to the origin to check for updates. +* `max_ttl` - (Optional) Maximum amount of time, in seconds, that objects stay in the CloudFront cache before CloudFront sends another request to the origin to see if the object has been updated. +* `default_ttl` - (Optional) Amount of time, in seconds, that objects are allowed to remain in the CloudFront cache before CloudFront sends a new request to the origin server to check if the object has been updated. +* `comment` - (Optional) Description for the cache policy. +* `parameters_in_cache_key_and_forwarded_to_origin` - (Required) Configuration for including HTTP headers, cookies, and URL query strings in the cache key. For more information, refer to the [Parameters In Cache Key And Forwarded To Origin](#parameters-in-cache-key-and-forwarded-to-origin) section. 
### Parameters In Cache Key And Forwarded To Origin -* `cookies_config` - (Required) Object that determines whether any cookies in viewer requests (and if so, which cookies) are included in the cache key and automatically included in requests that CloudFront sends to the origin. See [Cookies Config](#cookies-config) for more information. -* `headers_config` - (Required) Object that determines whether any HTTP headers (and if so, which headers) are included in the cache key and automatically included in requests that CloudFront sends to the origin. See [Headers Config](#headers-config) for more information. -* `query_strings_config` - (Required) Object that determines whether any URL query strings in viewer requests (and if so, which query strings) are included in the cache key and automatically included in requests that CloudFront sends to the origin. See [Query String Config](#query-string-config) for more information. -* `enable_accept_encoding_brotli` - (Optional) A flag that can affect whether the Accept-Encoding HTTP header is included in the cache key and included in requests that CloudFront sends to the origin. -* `enable_accept_encoding_gzip` - (Optional) A flag that can affect whether the Accept-Encoding HTTP header is included in the cache key and included in requests that CloudFront sends to the origin. +* `cookies_config` - (Required) Whether any cookies in viewer requests are included in the cache key and automatically included in requests that CloudFront sends to the origin. See [Cookies Config](#cookies-config) for more information. +* `headers_config` - (Required) Whether any HTTP headers are included in the cache key and automatically included in requests that CloudFront sends to the origin. See [Headers Config](#headers-config) for more information. +* `query_strings_config` - (Required) Whether any URL query strings in viewer requests are included in the cache key. 
It also automatically includes these query strings in requests that CloudFront sends to the origin. Please refer to the [Query String Config](#query-string-config) for more information. +* `enable_accept_encoding_brotli` - (Optional) Flag that determines whether the Accept-Encoding HTTP header is included in the cache key and in requests that CloudFront sends to the origin. +* `enable_accept_encoding_gzip` - (Optional) Whether the Accept-Encoding HTTP header is included in the cache key and in requests sent to the origin by CloudFront. ### Cookies Config -* `cookie_behavior` - (Required) Determines whether any cookies in viewer requests are included in the cache key and automatically included in requests that CloudFront sends to the origin. Valid values are `none`, `whitelist`, `allExcept`, `all`. +* `cookie_behavior` - (Required) Whether any cookies in viewer requests are included in the cache key and automatically included in requests that CloudFront sends to the origin. Valid values for `cookie_behavior` are `none`, `whitelist`, `allExcept`, and `all`. * `cookies` - (Optional) Object that contains a list of cookie names. See [Items](#items) for more information. ### Headers Config -* `header_behavior` - (Required) Determines whether any HTTP headers are included in the cache key and automatically included in requests that CloudFront sends to the origin. Valid values are `none`, `whitelist`. -* `headers` - (Optional) Object that contains a list of header names. See [Items](#items) for more information. +* `header_behavior` - (Required) Whether any HTTP headers are included in the cache key and automatically included in requests that CloudFront sends to the origin. Valid values for `header_behavior` are `none` and `whitelist`. +* `headers` - (Optional) Object that contains a list of header names. See [Items](#items) for more information. 
### Query String Config -* `query_string_behavior` - (Required) Determines whether any URL query strings in viewer requests are included in the cache key and automatically included in requests that CloudFront sends to the origin. Valid values are `none`, `whitelist`, `allExcept`, `all`. -* `query_strings` - (Optional) Object that contains a list of query string names. See [Items](#items) for more information. +* `query_string_behavior` - (Required) Whether URL query strings in viewer requests are included in the cache key and automatically included in requests that CloudFront sends to the origin. Valid values for `query_string_behavior` are `none`, `whitelist`, `allExcept`, and `all`. +* `query_strings` - (Optional) Configuration parameter that contains a list of query string names. See [Items](#items) for more information. ### Items -* `items` - (Required) A list of item names (cookies, headers, or query strings). +* `items` - (Required) List of item names, such as cookies, headers, or query strings. -## Attributes Reference +## Attribute Reference -In addition to all arguments above, the following attributes are exported: +This resource exports the following attributes in addition to the arguments above: -* `etag` - The current version of the cache policy. -* `id` - The identifier for the cache policy. +* `etag` - Current version of the cache policy. +* `id` - Identifier for the cache policy. ## Import -Cloudfront Cache Policies can be imported using the `id`, e.g., +To import CloudFront cache policies, use the `id` of the cache policy. 
For example: ``` $ terraform import aws_cloudfront_cache_policy.policy 658327ea-f89d-4fab-a63d-7e88639e58f6 diff --git a/website/docs/r/cloudfront_distribution.html.markdown b/website/docs/r/cloudfront_distribution.html.markdown index 29e4953cae1..747369c03a7 100644 --- a/website/docs/r/cloudfront_distribution.html.markdown +++ b/website/docs/r/cloudfront_distribution.html.markdown @@ -258,6 +258,8 @@ The CloudFront distribution argument layout is a complex structure composed of s #### Cache Behavior Arguments +~> **NOTE:** To achieve the setting of 'Use origin cache headers' without a linked cache policy, use the following TTL values: `min_ttl` = 0, `max_ttl` = 31536000, `default_ttl` = 86400. See [this issue](https://github.com/hashicorp/terraform-provider-aws/issues/19382) for additional context. + * `allowed_methods` (Required) - Controls which HTTP methods CloudFront processes and forwards to your Amazon S3 bucket or your custom origin. * `cached_methods` (Required) - Controls whether CloudFront caches the response to requests using the specified HTTP methods. * `cache_policy_id` (Optional) - Unique identifier of the cache policy that is attached to the cache behavior. If configuring the `default_cache_behavior` either `cache_policy_id` or `forwarded_values` must be set. diff --git a/website/docs/r/cloudtrail.html.markdown b/website/docs/r/cloudtrail.html.markdown index 31bfff62a32..b5980b819c3 100644 --- a/website/docs/r/cloudtrail.html.markdown +++ b/website/docs/r/cloudtrail.html.markdown @@ -22,21 +22,19 @@ Enable CloudTrail to capture all compatible management events in region. For capturing events from services like IAM, `include_global_service_events` must be enabled. 
```terraform -data "aws_caller_identity" "current" {} - -resource "aws_cloudtrail" "foobar" { - name = "tf-trail-foobar" - s3_bucket_name = aws_s3_bucket.foo.id +resource "aws_cloudtrail" "example" { + name = "example" + s3_bucket_name = aws_s3_bucket.example.id s3_key_prefix = "prefix" include_global_service_events = false } -resource "aws_s3_bucket" "foo" { +resource "aws_s3_bucket" "example" { bucket = "tf-test-trail" force_destroy = true } -data "aws_iam_policy_document" "foo" { +data "aws_iam_policy_document" "example" { statement { sid = "AWSCloudTrailAclCheck" effect = "Allow" @@ -47,7 +45,12 @@ data "aws_iam_policy_document" "foo" { } actions = ["s3:GetBucketAcl"] - resources = [aws_s3_bucket.foo.arn] + resources = [aws_s3_bucket.example.arn] + condition { + test = "StringEquals" + variable = "aws:SourceArn" + values = ["arn:${data.aws_partition.current.partition}:cloudtrail:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:trail/example"] + } } statement { @@ -60,19 +63,30 @@ data "aws_iam_policy_document" "foo" { } actions = ["s3:PutObject"] - resources = ["${aws_s3_bucket.foo.arn}/prefix/AWSLogs/${data.aws_caller_identity.current.account_id}/*"] + resources = ["${aws_s3_bucket.example.arn}/prefix/AWSLogs/${data.aws_caller_identity.current.account_id}/*"] condition { test = "StringEquals" variable = "s3:x-amz-acl" values = ["bucket-owner-full-control"] } + condition { + test = "StringEquals" + variable = "aws:SourceArn" + values = ["arn:${data.aws_partition.current.partition}:cloudtrail:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:trail/example"] + } } } -resource "aws_s3_bucket_policy" "foo" { - bucket = aws_s3_bucket.foo.id - policy = data.aws_iam_policy_document.foo.json +resource "aws_s3_bucket_policy" "example" { + bucket = aws_s3_bucket.example.id + policy = data.aws_iam_policy_document.example.json } + +data "aws_caller_identity" "current" {} + +data "aws_partition" "current" {} + +data 
"aws_region" "current" {} ``` ### Data Event Logging diff --git a/website/docs/r/cloudwatch_event_target.html.markdown b/website/docs/r/cloudwatch_event_target.html.markdown index 546142a3c8e..642a87dad7b 100644 --- a/website/docs/r/cloudwatch_event_target.html.markdown +++ b/website/docs/r/cloudwatch_event_target.html.markdown @@ -389,7 +389,8 @@ data "aws_iam_policy_document" "example_log_policy" { principals { type = "Service" identifiers = [ - "events.amazonaws.com" + "events.amazonaws.com", + "delivery.logs.amazonaws.com" ] } } @@ -406,7 +407,8 @@ data "aws_iam_policy_document" "example_log_policy" { principals { type = "Service" identifiers = [ - "events.amazonaws.com" + "events.amazonaws.com", + "delivery.logs.amazonaws.com" ] } diff --git a/website/docs/r/cloudwatch_metric_stream.html.markdown b/website/docs/r/cloudwatch_metric_stream.html.markdown index d7904e2e81d..9d4af7f6700 100644 --- a/website/docs/r/cloudwatch_metric_stream.html.markdown +++ b/website/docs/r/cloudwatch_metric_stream.html.markdown @@ -124,9 +124,9 @@ resource "aws_iam_role_policy" "firehose_to_s3" { resource "aws_kinesis_firehose_delivery_stream" "s3_stream" { name = "metric-stream-test-stream" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose_to_s3.arn bucket_arn = aws_s3_bucket.bucket.arn } diff --git a/website/docs/r/codeartifact_repository_permissions_policy.html.markdown b/website/docs/r/codeartifact_repository_permissions_policy.html.markdown index 4fb8d621a71..53d72d0e7e5 100644 --- a/website/docs/r/codeartifact_repository_permissions_policy.html.markdown +++ b/website/docs/r/codeartifact_repository_permissions_policy.html.markdown @@ -36,8 +36,8 @@ data "aws_iam_policy_document" "example" { identifiers = ["*"] } - actions = ["codeartifact:CreateRepository"] - resources = [aws_codeartifact_domain.example.arn] + actions = ["codeartifact:ReadFromRepository"] + resources = 
[aws_codeartifact_repository.example.arn] } } resource "aws_codeartifact_repository_permissions_policy" "example" { diff --git a/website/docs/r/codebuild_project.html.markdown b/website/docs/r/codebuild_project.html.markdown index cf790d70166..bafcbc290a7 100755 --- a/website/docs/r/codebuild_project.html.markdown +++ b/website/docs/r/codebuild_project.html.markdown @@ -123,7 +123,7 @@ resource "aws_codebuild_project" "example" { environment { compute_type = "BUILD_GENERAL1_SMALL" - image = "aws/codebuild/standard:1.0" + image = "aws/codebuild/amazonlinux2-x86_64-standard:4.0" type = "LINUX_CONTAINER" image_pull_credentials_type = "CODEBUILD" @@ -201,7 +201,7 @@ resource "aws_codebuild_project" "project-with-cache" { environment { compute_type = "BUILD_GENERAL1_SMALL" - image = "aws/codebuild/standard:1.0" + image = "aws/codebuild/amazonlinux2-x86_64-standard:4.0" type = "LINUX_CONTAINER" image_pull_credentials_type = "CODEBUILD" @@ -291,7 +291,7 @@ The following arguments are optional: * `compute_type` - (Required) Information about the compute resources the build project will use. Valid values: `BUILD_GENERAL1_SMALL`, `BUILD_GENERAL1_MEDIUM`, `BUILD_GENERAL1_LARGE`, `BUILD_GENERAL1_2XLARGE`. `BUILD_GENERAL1_SMALL` is only valid if `type` is set to `LINUX_CONTAINER`. When `type` is set to `LINUX_GPU_CONTAINER`, `compute_type` must be `BUILD_GENERAL1_LARGE`. * `environment_variable` - (Optional) Configuration block. Detailed below. * `image_pull_credentials_type` - (Optional) Type of credentials AWS CodeBuild uses to pull images in your build. Valid values: `CODEBUILD`, `SERVICE_ROLE`. When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CodeBuild credentials. Defaults to `CODEBUILD`. -* `image` - (Required) Docker image to use for this build project. 
Valid values include [Docker images provided by CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html) (e.g `aws/codebuild/standard:2.0`), [Docker Hub images](https://hub.docker.com/) (e.g., `hashicorp/terraform:latest`), and full Docker repository URIs such as those for ECR (e.g., `137112412989.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest`). +* `image` - (Required) Docker image to use for this build project. Valid values include [Docker images provided by CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html) (e.g `aws/codebuild/amazonlinux2-x86_64-standard:4.0`), [Docker Hub images](https://hub.docker.com/) (e.g., `hashicorp/terraform:latest`), and full Docker repository URIs such as those for ECR (e.g., `137112412989.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest`). * `privileged_mode` - (Optional) Whether to enable running the Docker daemon inside a Docker container. Defaults to `false`. * `registry_credential` - (Optional) Configuration block. Detailed below. * `type` - (Required) Type of build environment to use for related builds. Valid values: `LINUX_CONTAINER`, `LINUX_GPU_CONTAINER`, `WINDOWS_CONTAINER` (deprecated), `WINDOWS_SERVER_2019_CONTAINER`, `ARM_CONTAINER`. For additional information, see the [CodeBuild User Guide](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html). @@ -352,7 +352,6 @@ See [ProjectFileSystemLocation](https://docs.aws.amazon.com/codebuild/latest/API ### secondary_sources -* `auth` - (Optional, **Deprecated**) Configuration block with the authorization settings for AWS CodeBuild to access the source code to be built. This information is for the AWS CodeBuild console's use only. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. Auth blocks are documented below. * `buildspec` - (Optional) The build spec declaration to use for this build project's related builds. 
This must be set when `type` is `NO_SOURCE`. It can either be a path to a file residing in the repository to be built or a local file path leveraging the `file()` built-in. * `git_clone_depth` - (Optional) Truncate git history to this many commits. Use `0` for a `Full` checkout which you need to run commands like `git branch --show-current`. See [AWS CodePipeline User Guide: Tutorial: Use full clone with a GitHub pipeline source](https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-github-gitclone.html) for details. * `git_submodules_config` - (Optional) Configuration block. Detailed below. @@ -363,11 +362,6 @@ See [ProjectFileSystemLocation](https://docs.aws.amazon.com/codebuild/latest/API * `source_identifier` - (Required) An identifier for this project source. The identifier can only contain alphanumeric characters and underscores, and must be less than 128 characters in length. * `type` - (Required) Type of repository that contains the source code to be built. Valid values: `CODECOMMIT`, `CODEPIPELINE`, `GITHUB`, `GITHUB_ENTERPRISE`, `BITBUCKET` or `S3`. -#### secondary_sources: auth - -* `resource` - (Optional, **Deprecated**) Resource value that applies to the specified authorization type. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. -* `type` - (Required, **Deprecated**) Authorization type to use. The only valid value is `OAUTH`. This data type is deprecated and is no longer accurate or used. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. - #### secondary_sources: git_submodules_config This block is only valid when the `type` is `CODECOMMIT`, `GITHUB` or `GITHUB_ENTERPRISE`. @@ -386,7 +380,6 @@ This block is only valid when the `type` is `CODECOMMIT`, `GITHUB` or `GITHUB_EN ### source -* `auth` - (Optional, **Deprecated**) Configuration block with the authorization settings for AWS CodeBuild to access the source code to be built. 
This information is for the AWS CodeBuild console's use only. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. Auth blocks are documented below. * `buildspec` - (Optional) Build specification to use for this build project's related builds. This must be set when `type` is `NO_SOURCE`. * `git_clone_depth` - (Optional) Truncate git history to this many commits. Use `0` for a `Full` checkout which you need to run commands like `git branch --show-current`. See [AWS CodePipeline User Guide: Tutorial: Use full clone with a GitHub pipeline source](https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-github-gitclone.html) for details. * `git_submodules_config` - (Optional) Configuration block. Detailed below. @@ -396,11 +389,6 @@ This block is only valid when the `type` is `CODECOMMIT`, `GITHUB` or `GITHUB_EN * `build_status_config` - (Optional) Configuration block that contains information that defines how the build project reports the build status to the source provider. This option is only used when the source provider is `GITHUB`, `GITHUB_ENTERPRISE`, or `BITBUCKET`. `build_status_config` blocks are documented below. * `type` - (Required) Type of repository that contains the source code to be built. Valid values: `CODECOMMIT`, `CODEPIPELINE`, `GITHUB`, `GITHUB_ENTERPRISE`, `BITBUCKET`, `S3`, `NO_SOURCE`. -#### source: auth - -* `resource` - (Optional, **Deprecated**) Resource value that applies to the specified authorization type. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. -* `type` - (Required, **Deprecated**) Authorization type to use. The only valid value is `OAUTH`. This data type is deprecated and is no longer accurate or used. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. - #### source: git_submodules_config This block is only valid when the `type` is `CODECOMMIT`, `GITHUB` or `GITHUB_ENTERPRISE`. 
diff --git a/website/docs/r/codedeploy_deployment_group.html.markdown b/website/docs/r/codedeploy_deployment_group.html.markdown index 562643d2df5..b3dc41cd67f 100644 --- a/website/docs/r/codedeploy_deployment_group.html.markdown +++ b/website/docs/r/codedeploy_deployment_group.html.markdown @@ -315,7 +315,7 @@ The `target_group_pair_info` configuration block supports the following: The `prod_traffic_route` configuration block supports the following: -* `listener_arns` - (Required) List of Amazon Resource Names (ARNs) of the load balancer listeners. +* `listener_arns` - (Required) List of Amazon Resource Names (ARNs) of the load balancer listeners. Must contain exactly one listener ARN. ##### load_balancer_info target_group_pair_info target_group Argument Reference diff --git a/website/docs/r/cognito_identity_pool.markdown b/website/docs/r/cognito_identity_pool.markdown index 61bd8a21bf1..944a957900b 100644 --- a/website/docs/r/cognito_identity_pool.markdown +++ b/website/docs/r/cognito_identity_pool.markdown @@ -70,7 +70,7 @@ backend and the Cognito service to communicate about the developer provider. In addition to all arguments above, the following attributes are exported: -* `id` - An identity pool ID, e.g. `us-west-2_abc123`. +* `id` - An identity pool ID, e.g. `us-west-2:1a234567-8901-234b-5cde-f6789g01h2i3`. * `arn` - The ARN of the identity pool. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
@@ -79,5 +79,5 @@ In addition to all arguments above, the following attributes are exported: Cognito Identity Pool can be imported using its ID, e.g., ``` -$ terraform import aws_cognito_identity_pool.mypool us-west-2_abc123 +$ terraform import aws_cognito_identity_pool.mypool us-west-2:1a234567-8901-234b-5cde-f6789g01h2i3 ``` diff --git a/website/docs/r/cognito_managed_user_pool_client.html.markdown b/website/docs/r/cognito_managed_user_pool_client.html.markdown index 94abe7d1b99..73eb74a605d 100644 --- a/website/docs/r/cognito_managed_user_pool_client.html.markdown +++ b/website/docs/r/cognito_managed_user_pool_client.html.markdown @@ -3,21 +3,18 @@ subcategory: "Cognito IDP (Identity Provider)" layout: "aws" page_title: "AWS: aws_cognito_managed_user_pool_client" description: |- - Manages a Cognito User Pool Client resource created by another service. + Use the `aws_cognito_managed_user_pool_client` resource to manage a Cognito User Pool Client that is created by another service. --- # Resource: aws_cognito_managed_user_pool_client -Manages a Cognito User Pool Client resource created by another service. +Use the `aws_cognito_managed_user_pool_client` resource to manage a Cognito User Pool Client created by another service. -**This is an advanced resource** and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource. +**This resource is advanced** and has special caveats to consider before use. Please read this document completely before using the resource. -The `aws_cognito_managed_user_pool_client` resource should only be used to manage a Cognito User Pool Client created automatically by an AWS service. -For example, when [configuring an OpenSearch Domain to use Cognito authentication](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/cognito-auth.html), -the OpenSearch service will create the User Pool Client on setup and delete it when no longer needed.
-Therefore, the `aws_cognito_managed_user_pool_client` resource does not _create_ or _delete_ this resource, but instead "adopts" it into management. +Use the `aws_cognito_managed_user_pool_client` resource to manage a Cognito User Pool Client that is automatically created by an AWS service. For instance, when [configuring an OpenSearch Domain to use Cognito authentication](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/cognito-auth.html), the OpenSearch service creates the User Pool Client during setup and removes it when it is no longer required. As a result, the `aws_cognito_managed_user_pool_client` resource does not create or delete this resource, but instead assumes management of it. -For normal uses of a Cognito User Pool Client, use the [`aws_cognito_managed_user_pool_client` resource](cognito_user_pool_client.html) instead. +Use the [`aws_cognito_user_pool_client`](cognito_user_pool_client.html) resource to manage Cognito User Pool Clients for normal use cases. ## Example Usage @@ -97,70 +94,62 @@ data "aws_partition" "current" {} The following arguments are required: -* `user_pool_id` - (Required) User pool the client belongs to. -* `name_pattern` - (Required, one of `name_pattern` or `name_prefix`) Regular expression that matches the name of the desired User Pool Client. - Must match only one User Pool Client. -* `name_prefix` - (Required, one of `name_prefix` or `name_pattern`) String that matches the beginning of the name of the desired User Pool Client. - Must match only one User Pool Client. +* `user_pool_id` - (Required) User pool that the client belongs to. +* `name_pattern` - (Required, one of `name_pattern` or `name_prefix`) Regular expression that matches the name of the desired User Pool Client. It must only match one User Pool Client. +* `name_prefix` - (Required, one of `name_prefix` or `name_pattern`) String that matches the beginning of the name of the desired User Pool Client. It must match only one User Pool Client. 
The following arguments are optional: -* `access_token_validity` - (Optional) Time limit, between 5 minutes and 1 day, after which the access token is no longer valid and cannot be used. - By default, the unit is hours. - The unit can be overridden by a value in `token_validity_units.access_token`. -* `allowed_oauth_flows_user_pool_client` - (Optional) Whether the client is allowed to follow the OAuth protocol when interacting with Cognito user pools. -* `allowed_oauth_flows` - (Optional) List of allowed OAuth flows (code, implicit, client_credentials). -* `allowed_oauth_scopes` - (Optional) List of allowed OAuth scopes (phone, email, openid, profile, and aws.cognito.signin.user.admin). -* `analytics_configuration` - (Optional) Configuration block for Amazon Pinpoint analytics for collecting metrics for this user pool. [Detailed below](#analytics_configuration). -* `auth_session_validity` - (Optional) Amazon Cognito creates a session token for each API request in an authentication flow. AuthSessionValidity is the duration, in minutes, of that session token. Your user pool native user must respond to each authentication challenge before the session expires. Valid values between `3` and `15`. Default value is `3`. +* `access_token_validity` - (Optional) Time limit, between 5 minutes and 1 day, after which the access token is no longer valid and cannot be used. By default, the unit is hours. The unit can be overridden by a value in `token_validity_units.access_token`. +* `allowed_oauth_flows_user_pool_client` - (Optional) Whether the client is allowed to use the OAuth protocol when interacting with Cognito user pools. +* `allowed_oauth_flows` - (Optional) List of allowed OAuth flows, including code, implicit, and client_credentials. +* `allowed_oauth_scopes` - (Optional) List of allowed OAuth scopes, including phone, email, openid, profile, and aws.cognito.signin.user.admin. 
+* `analytics_configuration` - (Optional) Configuration block for Amazon Pinpoint analytics that collects metrics for this user pool. See [details below](#analytics_configuration). +* `auth_session_validity` - (Optional) Duration, in minutes, of the session token created by Amazon Cognito for each API request in an authentication flow. The native user of the user pool must respond to each authentication challenge before the session expires. Valid values for `auth_session_validity` are between `3` and `15`, with a default value of `3`. * `callback_urls` - (Optional) List of allowed callback URLs for the identity providers. -* `default_redirect_uri` - (Optional) Default redirect URI. Must be in the list of callback URLs. +* `default_redirect_uri` - (Optional) Default redirect URI, which must be included in the list of callback URLs. * `enable_token_revocation` - (Optional) Enables or disables token revocation. -* `enable_propagate_additional_user_context_data` - (Optional) Activates the propagation of additional user context data. -* `explicit_auth_flows` - (Optional) List of authentication flows (ADMIN_NO_SRP_AUTH, CUSTOM_AUTH_FLOW_ONLY, USER_PASSWORD_AUTH, ALLOW_ADMIN_USER_PASSWORD_AUTH, ALLOW_CUSTOM_AUTH, ALLOW_USER_PASSWORD_AUTH, ALLOW_USER_SRP_AUTH, ALLOW_REFRESH_TOKEN_AUTH). -* `generate_secret` - (Optional) Should an application secret be generated. -* `id_token_validity` - (Optional) Time limit, between 5 minutes and 1 day, after which the ID token is no longer valid and cannot be used. - By default, the unit is hours. - The unit can be overridden by a value in `token_validity_units.id_token`. +* `enable_propagate_additional_user_context_data` - (Optional) Enables the propagation of additional user context data. +* `explicit_auth_flows` - (Optional) List of authentication flows.
The available options include ADMIN_NO_SRP_AUTH, CUSTOM_AUTH_FLOW_ONLY, USER_PASSWORD_AUTH, ALLOW_ADMIN_USER_PASSWORD_AUTH, ALLOW_CUSTOM_AUTH, ALLOW_USER_PASSWORD_AUTH, ALLOW_USER_SRP_AUTH, and ALLOW_REFRESH_TOKEN_AUTH. +* `generate_secret` - (Optional) Boolean flag indicating whether an application secret should be generated. +* `id_token_validity` - (Optional) Time limit, between 5 minutes and 1 day, after which the ID token is no longer valid and cannot be used. By default, the unit is hours. The unit can be overridden by a value in `token_validity_units.id_token`. * `logout_urls` - (Optional) List of allowed logout URLs for the identity providers. -* `prevent_user_existence_errors` - (Optional) Choose which errors and responses are returned by Cognito APIs during authentication, account confirmation, and password recovery when the user does not exist in the user pool. When set to `ENABLED` and the user does not exist, authentication returns an error indicating either the username or password was incorrect, and account confirmation and password recovery return a response indicating a code was sent to a simulated destination. When set to `LEGACY`, those APIs will return a `UserNotFoundException` exception if the user does not exist in the user pool. -* `read_attributes` - (Optional) List of user pool attributes the application client can read from. -* `refresh_token_validity` - (Optional) Time limit, between 60 minutes and 10 years, after which the refresh token is no longer valid and cannot be used. - By default, the unit is days. - The unit can be overridden by a value in `token_validity_units.refresh_token`. -* `supported_identity_providers` - (Optional) List of provider names for the identity providers that are supported on this client. Uses the `provider_name` attribute of `aws_cognito_identity_provider` resource(s), or the equivalent string(s). 
-* `token_validity_units` - (Optional) Configuration block for units in which the validity times are represented in. [Detailed below](#token_validity_units). -* `write_attributes` - (Optional) List of user pool attributes the application client can write to. +* `prevent_user_existence_errors` - (Optional) Setting determines the errors and responses returned by Cognito APIs when a user does not exist in the user pool during authentication, account confirmation, and password recovery. +* `read_attributes` - (Optional) List of user pool attributes that the application client can read from. +* `refresh_token_validity` - (Optional) Time limit, between 60 minutes and 10 years, after which the refresh token is no longer valid and cannot be used. By default, the unit is days. The unit can be overridden by a value in `token_validity_units.refresh_token`. +* `supported_identity_providers` - (Optional) List of provider names for the identity providers that are supported on this client. It uses the `provider_name` attribute of the `aws_cognito_identity_provider` resource(s), or the equivalent string(s). +* `token_validity_units` - (Optional) Configuration block for representing the validity times in units. [Detailed below](#token_validity_units). +* `write_attributes` - (Optional) List of user pool attributes that the application client can write to. ### analytics_configuration -Either `application_arn` or `application_id` is required. +Either `application_arn` or `application_id` is required for this configuration block. -* `application_arn` - (Optional) Application ARN for an Amazon Pinpoint application. Conflicts with `external_id` and `role_arn`. -* `application_id` - (Optional) Application ID for an Amazon Pinpoint application. -* `external_id` - (Optional) ID for the Analytics Configuration. Conflicts with `application_arn`. -* `role_arn` - (Optional) ARN of an IAM role that authorizes Amazon Cognito to publish events to Amazon Pinpoint analytics.
Conflicts with `application_arn`. -* `user_data_shared` (Optional) If set to `true`, Amazon Cognito will include user data in the events it publishes to Amazon Pinpoint analytics. +* `application_arn` - (Optional) Application ARN for an Amazon Pinpoint application. It conflicts with `external_id` and `role_arn`. +* `application_id` - (Optional) Unique identifier for an Amazon Pinpoint application. +* `external_id` - (Optional) ID for the Analytics Configuration. Conflicts with `application_arn`. +* `role_arn` - (Optional) ARN of an IAM role that authorizes Amazon Cognito to publish events to Amazon Pinpoint analytics. It conflicts with `application_arn`. +* `user_data_shared` - (Optional) If `user_data_shared` is set to `true`, Amazon Cognito will include user data in the events it publishes to Amazon Pinpoint analytics. ### token_validity_units -Valid values for the following arguments are: `seconds`, `minutes`, `hours` or `days`. +Valid values for the following arguments are: `seconds`, `minutes`, `hours`, or `days`. -* `access_token` - (Optional) Time unit in for the value in `access_token_validity`, defaults to `hours`. -* `id_token` - (Optional) Time unit in for the value in `id_token_validity`, defaults to `hours`. -* `refresh_token` - (Optional) Time unit in for the value in `refresh_token_validity`, defaults to `days`. +* `access_token` - (Optional) Time unit for the value in `access_token_validity`. Defaults to `hours`. +* `id_token` - (Optional) Time unit for the value in `id_token_validity`. Defaults to `hours`. +* `refresh_token` - (Optional) Time unit for the value in `refresh_token_validity`. Defaults to `days`. -## Attributes Reference +## Attribute Reference -In addition to all arguments above, the following attributes are exported: +This resource exports the following attributes in addition to the arguments above: * `client_secret` - Client secret of the user pool client. -* `id` - ID of the user pool client.
+* `id` - Unique identifier for the user pool client. * `name` - Name of the user pool client. ## Import -Cognito User Pool Clients can be imported using the `id` of the Cognito User Pool, and the `id` of the Cognito User Pool Client, e.g., +To import Cognito User Pool Clients, use the `id` of the Cognito User Pool and the `id` of the Cognito User Pool Client. For example: ``` $ terraform import aws_cognito_managed_user_pool_client.client us-west-2_abc123/3ho4ek12345678909nh3fmhpko diff --git a/website/docs/r/cognito_user_pool.markdown b/website/docs/r/cognito_user_pool.markdown index 53f51ffcdd2..16816740e39 100644 --- a/website/docs/r/cognito_user_pool.markdown +++ b/website/docs/r/cognito_user_pool.markdown @@ -80,7 +80,7 @@ The following arguments are optional: * `email_verification_subject` - (Optional) String representing the email verification subject. Conflicts with `verification_message_template` configuration block `email_subject` argument. * `lambda_config` - (Optional) Configuration block for the AWS Lambda triggers associated with the user pool. [Detailed below](#lambda_config). * `mfa_configuration` - (Optional) Multi-Factor Authentication (MFA) configuration for the User Pool. Defaults of `OFF`. Valid values are `OFF` (MFA Tokens are not required), `ON` (MFA is required for all users to sign in; requires at least one of `sms_configuration` or `software_token_mfa_configuration` to be configured), or `OPTIONAL` (MFA Will be required only for individual users who have MFA Enabled; requires at least one of `sms_configuration` or `software_token_mfa_configuration` to be configured). -* `password_policy` - (Optional) Configuration blocked for information about the user pool password policy. [Detailed below](#password_policy). +* `password_policy` - (Optional) Configuration block for information about the user pool password policy. [Detailed below](#password_policy). * `schema` - (Optional) Configuration block for the schema attributes of a user pool. 
[Detailed below](#schema). Schema attributes from the [standard attribute set](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#cognito-user-pools-standard-attributes) only need to be specified if they are different from the default configuration. Attributes can be added, but not modified or removed. Maximum of 50 attributes. * `sms_authentication_message` - (Optional) String representing the SMS authentication message. The Message must contain the `{####}` placeholder, which will be replaced with the code. * `sms_configuration` - (Optional) Configuration block for Short Message Service (SMS) settings. [Detailed below](#sms_configuration). These settings apply to SMS user verification and SMS Multi-Factor Authentication (MFA). Due to Cognito API restrictions, the SMS configuration cannot be removed without recreating the Cognito User Pool. For user data safety, this resource will ignore the removal of this configuration by disabling drift detection. To force resource recreation after this configuration has been applied, see the [`taint` command](https://www.terraform.io/docs/commands/taint.html). @@ -118,7 +118,7 @@ The following arguments are optional: ### email_configuration * `configuration_set` - (Optional) Email configuration set name from SES. -* `email_sending_account` - (Optional) Email delivery method to use. `COGNITO_DEFAULT` for the default email functionality built into Cognito or `DEVELOPER` to use your Amazon SES configuration. +* `email_sending_account` - (Optional) Email delivery method to use. `COGNITO_DEFAULT` for the default email functionality built into Cognito or `DEVELOPER` to use your Amazon SES configuration. Required to be `DEVELOPER` if `from_email_address` is set. * `from_email_address` - (Optional) Sender’s email address or sender’s display name with their email address (e.g., `john@example.com`, `John Smith ` or `\"John Smith Ph.D.\" `). 
Escaped double quotes are required around display names that contain certain characters as specified in [RFC 5322](https://tools.ietf.org/html/rfc5322). * `reply_to_email_address` - (Optional) REPLY-TO email address. * `source_arn` - (Optional) ARN of the SES verified email identity to use. Required if `email_sending_account` is set to `DEVELOPER`. diff --git a/website/docs/r/config_configuration_recorder.html.markdown b/website/docs/r/config_configuration_recorder.html.markdown index b4c98a5ea63..76966ef692d 100644 --- a/website/docs/r/config_configuration_recorder.html.markdown +++ b/website/docs/r/config_configuration_recorder.html.markdown @@ -50,9 +50,15 @@ The following arguments are supported: ### `recording_group` * `all_supported` - (Optional) Specifies whether AWS Config records configuration changes for every supported type of regional resource (which includes any new type that will become supported in the future). Conflicts with `resource_types`. Defaults to `true`. +* `exclusion_by_resource_types` - (Optional) An object that specifies how AWS Config excludes resource types from being recorded by the configuration recorder. To use this option, you must set `recording_strategy.use_only` to `EXCLUSION_BY_RESOURCE_TYPES`. Requires `all_supported = false`. Conflicts with `resource_types`. * `include_global_resource_types` - (Optional) Specifies whether AWS Config includes all supported types of _global resources_ with the resources that it records. Requires `all_supported = true`. Conflicts with `resource_types`. +* `recording_strategy` - (Optional) Recording strategy for the configuration recorder. Detailed below. * `resource_types` - (Optional) A list that specifies the types of AWS resources for which AWS Config records configuration changes (for example, `AWS::EC2::Instance` or `AWS::CloudTrail::Trail`).
See [relevant part of AWS Docs](http://docs.aws.amazon.com/config/latest/APIReference/API_ResourceIdentifier.html#config-Type-ResourceIdentifier-resourceType) for available types. In order to use this attribute, `all_supported` must be set to false. +#### `recording_strategy` + +* `use_only` - (Optional) The recording strategy for the configuration recorder. See the [relevant part of AWS Docs](https://docs.aws.amazon.com/config/latest/APIReference/API_RecordingStrategy.html) for details. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/connect_hours_of_operation.html.markdown b/website/docs/r/connect_hours_of_operation.html.markdown index 14994d00077..e319c57c1e1 100644 --- a/website/docs/r/connect_hours_of_operation.html.markdown +++ b/website/docs/r/connect_hours_of_operation.html.markdown @@ -86,7 +86,6 @@ A `start_time` block supports the following arguments: In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name (ARN) of the Hours of Operation. -* `hours_of_operation_arn` - (**Deprecated**) The Amazon Resource Name (ARN) of the Hours of Operation. * `hours_of_operation_id` - The identifier for the hours of operation. * `id` - The identifier of the hosting Amazon Connect Instance and identifier of the Hours of Operation separated by a colon (`:`). * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
diff --git a/website/docs/r/connect_instance_storage_config.html.markdown b/website/docs/r/connect_instance_storage_config.html.markdown index 79608e8b66a..a3e8f6ba6e2 100644 --- a/website/docs/r/connect_instance_storage_config.html.markdown +++ b/website/docs/r/connect_instance_storage_config.html.markdown @@ -111,7 +111,7 @@ resource "aws_connect_instance_storage_config" "example" { The following arguments are supported: * `instance_id` - (Required) Specifies the identifier of the hosting Amazon Connect Instance. -* `resource_type` - (Required) A valid resource type. Valid Values: `CHAT_TRANSCRIPTS` | `CALL_RECORDINGS` | `SCHEDULED_REPORTS` | `MEDIA_STREAMS` | `CONTACT_TRACE_RECORDS` | `AGENT_EVENTS` | `REAL_TIME_CONTACT_ANALYSIS_SEGMENTS`. +* `resource_type` - (Required) A valid resource type. Valid Values: `AGENT_EVENTS` | `ATTACHMENTS` | `CALL_RECORDINGS` | `CHAT_TRANSCRIPTS` | `CONTACT_EVALUATIONS` | `CONTACT_TRACE_RECORDS` | `MEDIA_STREAMS` | `REAL_TIME_CONTACT_ANALYSIS_SEGMENTS` | `SCHEDULED_REPORTS`. * `storage_config` - (Required) Specifies the storage configuration options for the Connect Instance. [Documented below](#storage_config). ### `storage_config` diff --git a/website/docs/r/cur_report_definition.html.markdown b/website/docs/r/cur_report_definition.html.markdown index fc0b4ad855a..902da981a91 100644 --- a/website/docs/r/cur_report_definition.html.markdown +++ b/website/docs/r/cur_report_definition.html.markdown @@ -12,8 +12,6 @@ Manages Cost and Usage Report Definitions. ~> *NOTE:* The AWS Cost and Usage Report service is only available in `us-east-1` currently. -~> *NOTE:* If AWS Organizations is enabled, only the master account can use this resource. 
- ## Example Usage ```terraform @@ -22,7 +20,7 @@ resource "aws_cur_report_definition" "example_cur_report_definition" { time_unit = "HOURLY" format = "textORcsv" compression = "GZIP" - additional_schema_elements = ["RESOURCES"] + additional_schema_elements = ["RESOURCES", "SPLIT_COST_ALLOCATION_DATA"] s3_bucket = "example-bucket-name" s3_region = "us-east-1" additional_artifacts = ["REDSHIFT", "QUICKSIGHT"] @@ -37,7 +35,7 @@ The following arguments are supported: * `time_unit` - (Required) The frequency on which report data are measured and displayed. Valid values are: `DAILY`, `HOURLY`, `MONTHLY`. * `format` - (Required) Format for report. Valid values are: `textORcsv`, `Parquet`. If `Parquet` is used, then Compression must also be `Parquet`. * `compression` - (Required) Compression format for report. Valid values are: `GZIP`, `ZIP`, `Parquet`. If `Parquet` is used, then format must also be `Parquet`. -* `additional_schema_elements` - (Required) A list of schema elements. Valid values are: `RESOURCES`. +* `additional_schema_elements` - (Required) A list of schema elements. Valid values are: `RESOURCES`, `SPLIT_COST_ALLOCATION_DATA`. * `s3_bucket` - (Required) Name of the existing S3 bucket to hold generated reports. * `s3_prefix` - (Optional) Report path prefix. Limited to 256 characters. * `s3_region` - (Required) Region of the existing S3 bucket to hold generated reports. diff --git a/website/docs/r/datasync_location_object_storage.html.markdown b/website/docs/r/datasync_location_object_storage.html.markdown index a3feed1c265..87a32961fea 100644 --- a/website/docs/r/datasync_location_object_storage.html.markdown +++ b/website/docs/r/datasync_location_object_storage.html.markdown @@ -30,7 +30,7 @@ The following arguments are supported: * `access_key` - (Optional) The access key is used if credentials are required to access the self-managed object storage server. 
If your object storage requires a user name and password to authenticate, use `access_key` and `secret_key` to provide the user name and password, respectively. * `bucket_name` - (Required) The bucket on the self-managed object storage server that is used to read data from. * `secret_key` - (Optional) The secret key is used if credentials are required to access the self-managed object storage server. If your object storage requires a user name and password to authenticate, use `access_key` and `secret_key` to provide the user name and password, respectively. -* `server_certificate` - (Optional) Specifies a certificate to authenticate with an object storage system that uses a private or self-signed certificate authority (CA). You must specify a Base64-encoded .pem file (for example, file:///home/user/.ssh/storage_sys_certificate.pem). The certificate can be up to 32768 bytes (before Base64 encoding). +* `server_certificate` - (Optional) Specifies a certificate to authenticate with an object storage system that uses a private or self-signed certificate authority (CA). You must specify a Base64-encoded .pem string. The certificate can be up to 32768 bytes (before Base64 encoding). * `server_hostname` - (Required) The name of the self-managed object storage server. This value is the IP address or Domain Name Service (DNS) name of the object storage server. An agent uses this host name to mount the object storage server in a network. * `server_protocol` - (Optional) The protocol that the object storage server uses to communicate. Valid values are `HTTP` or `HTTPS`. * `server_port` - (Optional) The port that your self-managed object storage server accepts inbound network traffic on. The server port is set by default to TCP 80 (`HTTP`) or TCP 443 (`HTTPS`). You can specify a custom port if your self-managed object storage server requires one. 
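Since `server_certificate` now takes the certificate contents rather than a `file://` path, a configuration might read the PEM file with Terraform's `file()` function. A sketch under the assumption that the certificate lives alongside the module and that the agent/bucket names exist:

```terraform
resource "aws_datasync_location_object_storage" "example" {
  agent_arns      = [aws_datasync_agent.example.arn]
  server_hostname = "storage.example.com"
  bucket_name     = "example-bucket"

  # Pass the PEM contents directly instead of a file:// URI.
  server_certificate = file("${path.module}/storage_sys_certificate.pem")
}
```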
diff --git a/website/docs/r/datasync_task.html.markdown b/website/docs/r/datasync_task.html.markdown index fe4684244dd..126b8c52691 100644 --- a/website/docs/r/datasync_task.html.markdown +++ b/website/docs/r/datasync_task.html.markdown @@ -83,11 +83,12 @@ The following arguments are supported inside the `options` configuration block: * `gid` - (Optional) Group identifier of the file's owners. Valid values: `BOTH`, `INT_VALUE`, `NAME`, `NONE`. Default: `INT_VALUE` (preserve integer value of the ID). * `log_level` - (Optional) Determines the type of logs that DataSync publishes to a log stream in the Amazon CloudWatch log group that you provide. Valid values: `OFF`, `BASIC`, `TRANSFER`. Default: `OFF`. * `mtime` - (Optional) A file metadata that indicates the last time a file was modified (written to) before the sync `PREPARING` phase. Value values: `NONE`, `PRESERVE`. Default: `PRESERVE`. +* `object_tags` - (Optional) Specifies whether object tags are maintained when transferring between object storage systems. If you want your DataSync task to ignore object tags, specify the NONE value. Valid values: `PRESERVE`, `NONE`. Default value: `PRESERVE`. * `overwrite_mode` - (Optional) Determines whether files at the destination should be overwritten or preserved when copying files. Valid values: `ALWAYS`, `NEVER`. Default: `ALWAYS`. * `posix_permissions` - (Optional) Determines which users or groups can access a file for a specific purpose such as reading, writing, or execution of the file. Valid values: `NONE`, `PRESERVE`. Default: `PRESERVE`. * `preserve_deleted_files` - (Optional) Whether files deleted in the source should be removed or preserved in the destination file system. Valid values: `PRESERVE`, `REMOVE`. Default: `PRESERVE`. * `preserve_devices` - (Optional) Whether the DataSync Task should preserve the metadata of block and character devices in the source files system, and recreate the files with that device name and metadata on the destination. 
The DataSync Task can’t sync the actual contents of such devices, because many of the devices are non-terminal and don’t return an end of file (EOF) marker. Valid values: `NONE`, `PRESERVE`. Default: `NONE` (ignore special devices). -* `security_descriptor_copy_flags` - (Optional) Determines which components of the SMB security descriptor are copied from source to destination objects. This value is only used for transfers between SMB and Amazon FSx for Windows File Server locations, or between two Amazon FSx for Windows File Server locations. Valid values: `NONE`, `OWNER_DACL`, `OWNER_DACL_SACL`. +* `security_descriptor_copy_flags` - (Optional) Determines which components of the SMB security descriptor are copied from source to destination objects. This value is only used for transfers between SMB and Amazon FSx for Windows File Server locations, or between two Amazon FSx for Windows File Server locations. Valid values: `NONE`, `OWNER_DACL`, `OWNER_DACL_SACL`. Default: `OWNER_DACL`. * `task_queueing` - (Optional) Determines whether tasks should be queued before executing the tasks. Valid values: `ENABLED`, `DISABLED`. Default `ENABLED`. * `transfer_mode` - (Optional) Determines whether DataSync transfers only the data and metadata that differ between the source and the destination location, or whether DataSync transfers all the content from the source, without comparing to the destination location. Valid values: `CHANGED`, `ALL`. Default: `CHANGED` * `uid` - (Optional) User identifier of the file's owners. Valid values: `BOTH`, `INT_VALUE`, `NAME`, `NONE`. Default: `INT_VALUE` (preserve integer value of the ID). 
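A minimal sketch of the new `object_tags` option in a task's `options` block (location resources and names are illustrative, not from the diff):

```terraform
resource "aws_datasync_task" "example" {
  name                     = "example"
  source_location_arn      = aws_datasync_location_object_storage.source.arn
  destination_location_arn = aws_datasync_location_s3.destination.arn

  options {
    # Ignore object tags during transfer; the default is PRESERVE.
    object_tags = "NONE"
  }
}
```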
diff --git a/website/docs/r/db_event_subscription.html.markdown b/website/docs/r/db_event_subscription.html.markdown index ff18234bc3a..2f276d76ee2 100644 --- a/website/docs/r/db_event_subscription.html.markdown +++ b/website/docs/r/db_event_subscription.html.markdown @@ -59,7 +59,7 @@ The following arguments are supported: * `name_prefix` - (Optional) The name of the DB event subscription. Conflicts with `name`. * `sns_topic` - (Required) The SNS topic to send events to. * `source_ids` - (Optional) A list of identifiers of the event sources for which events will be returned. If not specified, then all sources are included in the response. If specified, a source_type must also be specified. -* `source_type` - (Optional) The type of source that will be generating the events. Valid options are `db-instance`, `db-security-group`, `db-parameter-group`, `db-snapshot`, `db-cluster` or `db-cluster-snapshot`. If not set, all sources will be subscribed to. +* `source_type` - (Optional) The type of source that will be generating the events. Valid options are `db-instance`, `db-security-group`, `db-parameter-group`, `db-snapshot`, `db-cluster`, `db-cluster-snapshot`, or `db-proxy`. If not set, all sources will be subscribed to. * `event_categories` - (Optional) A list of event categories for a SourceType that you want to subscribe to. See http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html or run `aws rds describe-event-categories`. * `enabled` - (Optional) A boolean flag to enable/disable the subscription. Defaults to true. * `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. 
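The new `db-proxy` source type above could be used like this. A hedged sketch: the SNS topic, proxy resource, and subscription name are assumptions for illustration.

```terraform
resource "aws_db_event_subscription" "proxy_events" {
  name        = "proxy-events"
  sns_topic   = aws_sns_topic.example.arn
  source_type = "db-proxy"
  source_ids  = [aws_db_proxy.example.name]
}
```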
diff --git a/website/docs/r/db_instance.html.markdown b/website/docs/r/db_instance.html.markdown index 9338493459d..09d7e747a5b 100644 --- a/website/docs/r/db_instance.html.markdown +++ b/website/docs/r/db_instance.html.markdown @@ -269,7 +269,6 @@ information on the [AWS Documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html) what IAM permissions are needed to allow Enhanced Monitoring for RDS Instances. * `multi_az` - (Optional) Specifies if the RDS instance is multi-AZ -* `name` - (Optional, **Deprecated** use `db_name` instead) The name of the database to create when the DB instance is created. If this parameter is not specified, no database is created in the DB instance. Note that this does not apply for Oracle or SQL Server engines. See the [AWS documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/create-db-instance.html) for more details on what applies for those engines. If you are providing an Oracle db name, it needs to be in all upper case. Cannot be specified for a replica. * `nchar_character_set_name` - (Optional, Forces new resource) The national character set is used in the NCHAR, NVARCHAR2, and NCLOB data types for Oracle instances. This can't be changed. See [Oracle Character Sets Supported in Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.OracleCharacterSets.html). * `network_type` - (Optional) The network type of the DB instance. Valid values: `IPV4`, `DUAL`. @@ -371,7 +370,7 @@ This will not recreate the resource if the S3 object changes in some way. It's ## blue_green_update -* `enabled` - (Optional) Enables [low-downtime updates](#Low-Downtime Updates) when `true`. +* `enabled` - (Optional) Enables [low-downtime updates](#low-downtime-updates) when `true`. Default is `false`. [instance-replication]: @@ -408,7 +407,6 @@ in a Route 53 Alias record). * `maintenance_window` - The instance maintenance window. 
* `master_user_secret` - A block that specifies the master user secret. Only available when `manage_master_user_password` is set to true. [Documented below](#master_user_secret). * `multi_az` - If the RDS instance is multi AZ enabled. -* `name` - The database name. * `port` - The database port. * `resource_id` - The RDS Resource ID of this instance. * `status` - The RDS instance status. diff --git a/website/docs/r/db_instance_automated_backups_replication.markdown b/website/docs/r/db_instance_automated_backups_replication.markdown index 9c042bd5c79..18c44197689 100644 --- a/website/docs/r/db_instance_automated_backups_replication.markdown +++ b/website/docs/r/db_instance_automated_backups_replication.markdown @@ -61,14 +61,14 @@ resource "aws_db_instance" "default" { resource "aws_kms_key" "default" { description = "Encryption key for automated backups" - provider = "aws.replica" + provider = aws.replica } resource "aws_db_instance_automated_backups_replication" "default" { source_db_instance_arn = aws_db_instance.default.arn kms_key_id = aws_kms_key.default.arn - provider = "aws.replica" + provider = aws.replica } ``` diff --git a/website/docs/r/db_proxy.html.markdown b/website/docs/r/db_proxy.html.markdown index 29894c1356a..344d6269451 100644 --- a/website/docs/r/db_proxy.html.markdown +++ b/website/docs/r/db_proxy.html.markdown @@ -44,7 +44,7 @@ The following arguments are supported: * `name` - (Required) The identifier for the proxy. This name must be unique for all proxies owned by your AWS account in the specified AWS Region. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it can't end with a hyphen or contain two consecutive hyphens. * `auth` - (Required) Configuration block(s) with authorization mechanisms to connect to the associated instances or clusters. Described below. * `debug_logging` - (Optional) Whether the proxy includes detailed information about SQL statements in its logs. 
This information helps you to debug issues involving SQL behavior or the performance and scalability of the proxy connections. The debug information includes the text of SQL statements that you submit through the proxy. Thus, only enable this setting when needed for debugging, and only when you have security measures in place to safeguard any sensitive information that appears in the logs. -* `engine_family` - (Required, Forces new resource) The kinds of databases that the proxy can connect to. This value determines which database network protocol the proxy recognizes when it interprets network traffic to and from the database. The engine family applies to MySQL and PostgreSQL for both RDS and Aurora. Valid values are `MYSQL` and `POSTGRESQL`. +* `engine_family` - (Required, Forces new resource) The kinds of databases that the proxy can connect to. This value determines which database network protocol the proxy recognizes when it interprets network traffic to and from the database. For Aurora MySQL, RDS for MariaDB, and RDS for MySQL databases, specify `MYSQL`. For Aurora PostgreSQL and RDS for PostgreSQL databases, specify `POSTGRESQL`. For RDS for Microsoft SQL Server, specify `SQLSERVER`. Valid values are `MYSQL`, `POSTGRESQL`, and `SQLSERVER`. * `idle_client_timeout` - (Optional) The number of seconds that a connection to the proxy can be inactive before the proxy disconnects it. You can set this value higher or lower than the connection timeout limit for the associated database. * `require_tls` - (Optional) A Boolean parameter that specifies whether Transport Layer Security (TLS) encryption is required for connections to the proxy. By enabling this setting, you can enforce encrypted TLS connections to the proxy. * `role_arn` - (Required) The Amazon Resource Name (ARN) of the IAM role that the proxy uses to access secrets in AWS Secrets Manager. 
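With `SQLSERVER` added as a valid `engine_family`, a proxy for RDS for Microsoft SQL Server might be sketched as below (the IAM role, subnets, and secret are illustrative assumptions):

```terraform
resource "aws_db_proxy" "example" {
  name           = "example-sqlserver-proxy"
  engine_family  = "SQLSERVER"
  role_arn       = aws_iam_role.example.arn
  vpc_subnet_ids = aws_subnet.example[*].id

  auth {
    auth_scheme = "SECRETS"
    secret_arn  = aws_secretsmanager_secret.example.arn
  }
}
```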
diff --git a/website/docs/r/dms_replication_instance.html.markdown b/website/docs/r/dms_replication_instance.html.markdown index e09b30b8c0d..815aae44c3b 100644 --- a/website/docs/r/dms_replication_instance.html.markdown +++ b/website/docs/r/dms_replication_instance.html.markdown @@ -17,7 +17,7 @@ Create required roles and then create a DMS instance, setting the depends_on to ```terraform # Database Migration Service requires the below IAM Roles to be created before # replication instances can be created. See the DMS Documentation for -# additional information: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.APIRole +# additional information: https://docs.aws.amazon.com/dms/latest/userguide/security-iam.html#CHAP_Security.APIRole # * dms-vpc-role # * dms-cloudwatch-logs-role # * dms-access-for-endpoint diff --git a/website/docs/r/dms_replication_task.html.markdown b/website/docs/r/dms_replication_task.html.markdown index 35f486cbbd7..873345a76f7 100644 --- a/website/docs/r/dms_replication_task.html.markdown +++ b/website/docs/r/dms_replication_task.html.markdown @@ -51,7 +51,6 @@ The following arguments are supported: * `replication_task_settings` - (Optional) An escaped JSON string that contains the task settings. For a complete list of task settings, see [Task Settings for AWS Database Migration Service Tasks](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html). * `source_endpoint_arn` - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the source endpoint. * `start_replication_task` - (Optional) Whether to run or stop the replication task. -* `status` - Replication Task status. * `table_mappings` - (Required) An escaped JSON string that contains the table mappings. 
For information on table mapping see [Using Table Mapping with an AWS Database Migration Service Task to Select and Filter Data](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html) * `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `target_endpoint_arn` - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the target endpoint. @@ -61,6 +60,7 @@ The following arguments are supported: In addition to all arguments above, the following attributes are exported: * `replication_task_arn` - The Amazon Resource Name (ARN) for the replication task. +* `status` - Replication Task status. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
## Import diff --git a/website/docs/r/dynamodb_table_replica.html.markdown b/website/docs/r/dynamodb_table_replica.html.markdown index c5b67bb9d6e..b21a8a375b8 100644 --- a/website/docs/r/dynamodb_table_replica.html.markdown +++ b/website/docs/r/dynamodb_table_replica.html.markdown @@ -30,7 +30,7 @@ provider "aws" { } resource "aws_dynamodb_table" "example" { - provider = "aws.main" + provider = aws.main name = "TestTable" hash_key = "BrodoBaggins" billing_mode = "PAY_PER_REQUEST" @@ -48,7 +48,7 @@ resource "aws_dynamodb_table" "example" { } resource "aws_dynamodb_table_replica" "example" { - provider = "aws.alt" + provider = aws.alt global_table_arn = aws_dynamodb_table.example.arn tags = { diff --git a/website/docs/r/dynamodb_tag.html.markdown b/website/docs/r/dynamodb_tag.html.markdown index 2ebee54d635..02290d215a1 100644 --- a/website/docs/r/dynamodb_tag.html.markdown +++ b/website/docs/r/dynamodb_tag.html.markdown @@ -27,7 +27,7 @@ provider "aws" { } data "aws_region" "replica" { - provider = "aws.replica" + provider = aws.replica } data "aws_region" "current" {} @@ -41,7 +41,7 @@ resource "aws_dynamodb_table" "example" { } resource "aws_dynamodb_tag" "test" { - provider = "aws.replica" + provider = aws.replica resource_arn = replace(aws_dynamodb_table.test.arn, data.aws_region.current.name, data.aws_region.replica.name) key = "testkey" diff --git a/website/docs/r/ec2_instance_connect_endpoint.html.markdown b/website/docs/r/ec2_instance_connect_endpoint.html.markdown new file mode 100644 index 00000000000..845f45ba147 --- /dev/null +++ b/website/docs/r/ec2_instance_connect_endpoint.html.markdown @@ -0,0 +1,56 @@ +--- +subcategory: "EC2 (Elastic Compute Cloud)" +layout: "aws" +page_title: "AWS: aws_ec2_instance_connect_endpoint" +description: |- + Provides an EC2 Instance Connect Endpoint resource. +--- + +# Resource: aws_ec2_instance_connect_endpoint + +Manages an EC2 Instance Connect Endpoint. 
+ +## Example Usage + +```terraform +resource "aws_ec2_instance_connect_endpoint" "example" { + subnet_id = aws_subnet.example.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `preserve_client_ip` - (Optional) Indicates whether your client's IP address is preserved as the source. Default: `true`. +* `security_group_ids` - (Optional) One or more security groups to associate with the endpoint. If you don't specify a security group, the default security group for the VPC will be associated with the endpoint. +* `subnet_id` - (Required) The ID of the subnet in which to create the EC2 Instance Connect Endpoint. +* `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `10m`) +- `delete` - (Default `10m`) + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the EC2 Instance Connect Endpoint. +* `availability_zone` - The Availability Zone of the EC2 Instance Connect Endpoint. +* `dns_name` - The DNS name of the EC2 Instance Connect Endpoint. +* `fips_dns_name` - The DNS name of the EC2 Instance Connect FIPS Endpoint. +* `network_interface_ids` - The IDs of the ENIs that Amazon EC2 automatically created when creating the EC2 Instance Connect Endpoint. +* `owner_id` - The ID of the AWS account that created the EC2 Instance Connect Endpoint. 
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `vpc_id` - The ID of the VPC in which the EC2 Instance Connect Endpoint was created. + +## Import + +EC2 Instance Connect Endpoints can be imported using the `id`, e.g., + +``` +$ terraform import aws_ec2_instance_connect_endpoint.example eice-012345678 +``` diff --git a/website/docs/r/ec2_managed_prefix_list_entry.html.markdown b/website/docs/r/ec2_managed_prefix_list_entry.html.markdown index 672d624fba6..262d37b2335 100644 --- a/website/docs/r/ec2_managed_prefix_list_entry.html.markdown +++ b/website/docs/r/ec2_managed_prefix_list_entry.html.markdown @@ -3,28 +3,20 @@ subcategory: "VPC (Virtual Private Cloud)" layout: "aws" page_title: "AWS: aws_ec2_managed_prefix_list_entry" description: |- - Provides a managed prefix list entry resource. + Use the `aws_ec2_managed_prefix_list_entry` resource to manage a managed prefix list entry. --- # Resource: aws_ec2_managed_prefix_list_entry -Provides a managed prefix list entry resource. +Use the `aws_ec2_managed_prefix_list_entry` resource to manage a managed prefix list entry. -~> **NOTE on Managed Prefix Lists and Managed Prefix List Entries:** Terraform -currently provides both a standalone Managed Prefix List Entry resource (a single entry), -and a [Managed Prefix List resource](ec2_managed_prefix_list.html) with entries defined -in-line. At this time you cannot use a Managed Prefix List with in-line rules in -conjunction with any Managed Prefix List Entry resources. Doing so will cause a conflict -of entries and will overwrite entries. +~> **NOTE:** Terraform currently provides two resources for managing Managed Prefix Lists and Managed Prefix List Entries. The standalone resource, [Managed Prefix List Entry](ec2_managed_prefix_list_entry.html), is used to manage a single entry.
The [Managed Prefix List resource](ec2_managed_prefix_list.html) is used to manage multiple entries defined in-line. It is important to note that you cannot use a Managed Prefix List with in-line rules in conjunction with any Managed Prefix List Entry resources. This will result in a conflict of entries and will cause the entries to be overwritten. -~> **NOTE on Managed Prefix Lists with many entries:** To improved execution times on larger -updates, if you plan to create a prefix list with more than 100 entries, it is **recommended** -that you use the inline `entry` block as part of the [Managed Prefix List resource](ec2_managed_prefix_list.html) -resource instead. +~> **NOTE:** To improve execution times on larger updates, it is recommended to use the inline `entry` block as part of the Managed Prefix List resource when creating a prefix list with more than 100 entries. You can find more information about the resource [here](ec2_managed_prefix_list.html). ## Example Usage -Basic usage +Basic usage. ```terraform resource "aws_ec2_managed_prefix_list" "example" { @@ -46,21 +38,21 @@ resource "aws_ec2_managed_prefix_list_entry" "entry_1" { ## Argument Reference -The following arguments are supported: +This resource supports the following arguments: * `cidr` - (Required) CIDR block of this entry. -* `description` - (Optional) Description of this entry. Due to API limitations, updating only the description of an entry requires recreating the entry. +* `description` - (Optional) Description of this entry. Please note that due to API limitations, updating only the description of an entry will require recreating the entry. * `prefix_list_id` - (Required) CIDR block of this entry. -## Attributes Reference +## Attribute Reference -In addition to all arguments above, the following attributes are exported: +This resource exports the following attributes in addition to the arguments above: * `id` - ID of the managed prefix list entry. 
## Import -Prefix List Entries can be imported using the `prefix_list_id` and `cidr` separated by a `,`, e.g., +To import Prefix List Entries, use the `prefix_list_id` and `cidr`. Separate them with a comma (`,`). For example: ``` $ terraform import aws_ec2_managed_prefix_list_entry.default pl-0570a1d2d725c16be,10.0.3.0/24 diff --git a/website/docs/r/ec2_transit_gateway_connect.html.markdown b/website/docs/r/ec2_transit_gateway_connect.html.markdown index 4f08f86ed78..82e99877bf6 100644 --- a/website/docs/r/ec2_transit_gateway_connect.html.markdown +++ b/website/docs/r/ec2_transit_gateway_connect.html.markdown @@ -29,7 +29,7 @@ resource "aws_ec2_transit_gateway_connect" "attachment" { The following arguments are supported: -* `protocol` - (Optional) The tunnel protocol. Valida values: `gre`. Default is `gre`. +* `protocol` - (Optional) The tunnel protocol. Valid values: `gre`. Default is `gre`. * `tags` - (Optional) Key-value tags for the EC2 Transit Gateway Connect. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `transit_gateway_default_route_table_association` - (Optional) Boolean whether the Connect should be associated with the EC2 Transit Gateway association default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways. Default value: `true`. * `transit_gateway_default_route_table_propagation` - (Optional) Boolean whether the Connect should propagate routes with the EC2 Transit Gateway propagation default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways. Default value: `true`. 
diff --git a/website/docs/r/ec2_transit_gateway_connect_peer.html.markdown b/website/docs/r/ec2_transit_gateway_connect_peer.html.markdown index 378e86d488c..484f87a40e8 100644 --- a/website/docs/r/ec2_transit_gateway_connect_peer.html.markdown +++ b/website/docs/r/ec2_transit_gateway_connect_peer.html.markdown @@ -42,6 +42,8 @@ In addition to all arguments above, the following attributes are exported: * `id` - EC2 Transit Gateway Connect Peer identifier * `arn` - EC2 Transit Gateway Connect Peer ARN +* `bgp_peer_address` - The IP address assigned to customer device, which is used as BGP IP address. +* `bgp_transit_gateway_addresses` - The IP addresses assigned to Transit Gateway, which are used as BGP IP addresses. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). ## Timeouts diff --git a/website/docs/r/ec2_transit_gateway_route_table_association.html.markdown b/website/docs/r/ec2_transit_gateway_route_table_association.html.markdown index f46d1f7158d..63c2f242f9b 100644 --- a/website/docs/r/ec2_transit_gateway_route_table_association.html.markdown +++ b/website/docs/r/ec2_transit_gateway_route_table_association.html.markdown @@ -25,6 +25,7 @@ The following arguments are supported: * `transit_gateway_attachment_id` - (Required) Identifier of EC2 Transit Gateway Attachment. * `transit_gateway_route_table_id` - (Required) Identifier of EC2 Transit Gateway Route Table. +* `replace_existing_association` - (Optional) Boolean whether the Gateway Attachment should remove any current Route Table association before associating with the specified Route Table. Default value: `false`. 
This argument is intended for use with EC2 Transit Gateways shared into the current account, otherwise the `transit_gateway_default_route_table_association` argument of the `aws_ec2_transit_gateway_vpc_attachment` resource should be used. ## Attributes Reference diff --git a/website/docs/r/eip.html.markdown b/website/docs/r/eip.html.markdown index f41808cf57c..50b2cdabf0f 100644 --- a/website/docs/r/eip.html.markdown +++ b/website/docs/r/eip.html.markdown @@ -21,7 +21,7 @@ Provides an Elastic IP resource. ```terraform resource "aws_eip" "lb" { instance = aws_instance.web.id - vpc = true + domain = "vpc" } ``` @@ -34,13 +34,13 @@ resource "aws_network_interface" "multi-ip" { } resource "aws_eip" "one" { - vpc = true + domain = "vpc" network_interface = aws_network_interface.multi-ip.id associate_with_private_ip = "10.0.0.10" } resource "aws_eip" "two" { - vpc = true + domain = "vpc" network_interface = aws_network_interface.multi-ip.id associate_with_private_ip = "10.0.0.11" } @@ -76,7 +76,7 @@ resource "aws_instance" "foo" { } resource "aws_eip" "bar" { - vpc = true + domain = "vpc" instance = aws_instance.foo.id associate_with_private_ip = "10.0.0.12" @@ -88,7 +88,7 @@ resource "aws_eip" "bar" { ```terraform resource "aws_eip" "byoip-ip" { - vpc = true + domain = "vpc" public_ipv4_pool = "ipv4pool-ec2-012345" } ``` @@ -100,13 +100,14 @@ The following arguments are supported: * `address` - (Optional) IP address from an EC2 BYOIP pool. This option is only available for VPC EIPs. * `associate_with_private_ip` - (Optional) User-specified primary or secondary private IP address to associate with the Elastic IP address. If no private IP address is specified, the Elastic IP address is associated with the primary private IP address. * `customer_owned_ipv4_pool` - (Optional) ID of a customer-owned address pool. 
For more on customer-owned IP addresses, check out the [Customer-owned IP addresses guide](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-networking-components.html#ip-addressing). +* `domain` - Indicates if this EIP is for use in VPC (`vpc`). * `instance` - (Optional) EC2 instance ID. * `network_border_group` - (Optional) Location from which the IP address is advertised. Use this parameter to limit the address to this location. * `network_interface` - (Optional) Network interface ID to associate with. * `public_ipv4_pool` - (Optional) EC2 IPv4 address pool identifier or `amazon`. This option is only available for VPC EIPs. * `tags` - (Optional) Map of tags to assign to the resource. Tags can only be applied to EIPs in a VPC. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. -* `vpc` - (Optional) Boolean if the EIP is in a VPC or not. +* `vpc` - (Optional, **Deprecated**) Boolean if the EIP is in a VPC or not. Use `domain` instead. Defaults to `true` unless the region supports EC2-Classic. ~> **NOTE:** You can specify either the `instance` ID or the `network_interface` ID, but not both. Including both will **not** return an error from the AWS API, but will have undefined behavior. See the relevant [AssociateAddress API Call][1] for more information. @@ -122,7 +123,6 @@ In addition to all arguments above, the following attributes are exported: * `association_id` - ID representing the association of the address with an instance in a VPC. * `carrier_ip` - Carrier IP address. * `customer_owned_ip` - Customer owned IP. -* `domain` - Indicates if this EIP is for use in VPC (`vpc`) or EC2-Classic (`standard`). * `id` - Contains the EIP allocation ID. * `private_dns` - The Private DNS associated with the Elastic IP address (if in VPC).
* `private_ip` - Contains the private IP address (if in VPC). diff --git a/website/docs/r/eip_association.html.markdown b/website/docs/r/eip_association.html.markdown index 8a33bc84539..93ffac9f6ce 100644 --- a/website/docs/r/eip_association.html.markdown +++ b/website/docs/r/eip_association.html.markdown @@ -35,7 +35,7 @@ resource "aws_instance" "web" { } resource "aws_eip" "example" { - vpc = true + domain = "vpc" } ``` diff --git a/website/docs/r/eks_addon.html.markdown b/website/docs/r/eks_addon.html.markdown index dd7d66e995e..03b61352d82 100644 --- a/website/docs/r/eks_addon.html.markdown +++ b/website/docs/r/eks_addon.html.markdown @@ -10,11 +10,6 @@ description: |- Manages an EKS add-on. -~> **Note:** Amazon EKS add-on can only be used with Amazon EKS Clusters -running version 1.18 with platform version eks.3 or later -because add-ons rely on the Server-side Apply Kubernetes feature, -which is only available in Kubernetes 1.18 and later. - ## Example Usage ```terraform @@ -32,7 +27,7 @@ resource "aws_eks_addon" "example" { resource "aws_eks_addon" "example" { cluster_name = aws_eks_cluster.example.name addon_name = "coredns" - addon_version = "v1.8.7-eksbuild.3" #e.g., previous version v1.8.7-eksbuild.2 and the new version is v1.8.7-eksbuild.3 + addon_version = "v1.10.1-eksbuild.1" #e.g., previous version v1.9.3-eksbuild.3 and the new version is v1.10.1-eksbuild.1 resolve_conflicts_on_update = "PRESERVE" } ``` @@ -49,7 +44,7 @@ This below is an example for extracting the `configuration_values` schema for `c ```bash aws eks describe-addon-configuration \ --addon-name coredns \ - --addon-version v1.8.7-eksbuild.2 + --addon-version v1.10.1-eksbuild.1 ``` Example to create a `coredns` managed addon with custom `configuration_values`. @@ -58,7 +53,7 @@ Example to create a `coredns` managed addon with custom `configuration_values`. 
resource "aws_eks_addon" "example" { cluster_name = "mycluster" addon_name = "coredns" - addon_version = "v1.8.7-eksbuild.3" + addon_version = "v1.10.1-eksbuild.1" resolve_conflicts_on_create = "OVERWRITE" configuration_values = jsonencode({ @@ -137,7 +132,7 @@ The following arguments are optional: match one of the versions returned by [describe-addon-versions](https://docs.aws.amazon.com/cli/latest/reference/eks/describe-addon-versions.html). * `configuration_values` - (Optional) custom configuration values for addons with single JSON string. This JSON string value must match the JSON schema derived from [describe-addon-configuration](https://docs.aws.amazon.com/cli/latest/reference/eks/describe-addon-configuration.html). * `resolve_conflicts_on_create` - (Optional) How to resolve field value conflicts when migrating a self-managed add-on to an Amazon EKS add-on. Valid values are `NONE` and `OVERWRITE`. For more details see the [CreateAddon](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateAddon.html) API Docs. -* `resolve_conflicts_on_update` - (Optional) How to resolve field value conflicts for an Amazon EKS add-on if you've changed a value from the Amazon EKS default value. Valid values are `NONE` and `OVERWRITE`. For more details see the [UpdateAddon](https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateAddon.html) API Docs. +* `resolve_conflicts_on_update` - (Optional) How to resolve field value conflicts for an Amazon EKS add-on if you've changed a value from the Amazon EKS default value. Valid values are `NONE`, `OVERWRITE`, and `PRESERVE`. For more details see the [UpdateAddon](https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateAddon.html) API Docs. 
* `resolve_conflicts` - (**Deprecated** use the `resolve_conflicts_on_create` and `resolve_conflicts_on_update` attributes instead) Define how to resolve parameter value conflicts when migrating an existing add-on to an Amazon EKS add-on or when applying version updates to the add-on. Valid values are `NONE`, `OVERWRITE` and `PRESERVE`. Note that `PRESERVE` is only valid on addon update, not for initial addon creation. If you need to set this to `PRESERVE`, use the `resolve_conflicts_on_create` and `resolve_conflicts_on_update` attributes instead. For more details check [UpdateAddon](https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateAddon.html) API Docs. * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `preserve` - (Optional) Indicates if you want to preserve the created resources when deleting the EKS add-on. @@ -147,7 +142,7 @@ The following arguments are optional: an existing IAM role, then the add-on uses the permissions assigned to the node IAM role. For more information, see [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) in the Amazon EKS User Guide. - + ~> **Note:** To specify an existing IAM role, you must have an IAM OpenID Connect (OIDC) provider created for your cluster. 
For more information, [see Enabling IAM roles for service accounts on your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) diff --git a/website/docs/r/eks_node_group.html.markdown b/website/docs/r/eks_node_group.html.markdown index d3b24af346d..86ee76a8bd8 100644 --- a/website/docs/r/eks_node_group.html.markdown +++ b/website/docs/r/eks_node_group.html.markdown @@ -127,10 +127,6 @@ resource "aws_subnet" "example" { availability_zone = data.aws_availability_zones.available.names[count.index] cidr_block = cidrsubnet(aws_vpc.example.cidr_block, 8, count.index) vpc_id = aws_vpc.example.id - - tags = { - "kubernetes.io/cluster/${aws_eks_cluster.example.name}" = "shared" - } } ``` @@ -140,8 +136,8 @@ The following arguments are required: * `cluster_name` – (Required) Name of the EKS Cluster. Must be between 1-100 characters in length. Must begin with an alphanumeric character, and must only contain alphanumeric characters, dashes and underscores (`^[0-9A-Za-z][A-Za-z0-9\-_]+$`). * `node_role_arn` – (Required) Amazon Resource Name (ARN) of the IAM Role that provides permissions for the EKS Node Group. -* `scaling_config` - (Required) Configuration block with scaling settings. Detailed below. -* `subnet_ids` – (Required) Identifiers of EC2 Subnets to associate with the EKS Node Group. These subnets must have the following resource tag: `kubernetes.io/cluster/CLUSTER_NAME` (where `CLUSTER_NAME` is replaced with the name of the EKS Cluster). +* `scaling_config` - (Required) Configuration block with scaling settings. See [`scaling_config`](#scaling_config-configuration-block) below for details. +* `subnet_ids` – (Required) Identifiers of EC2 Subnets to associate with the EKS Node Group. The following arguments are optional: @@ -151,13 +147,14 @@ The following arguments are optional: * `force_update_version` - (Optional) Force version update if existing pods are unable to be drained due to a pod disruption budget issue. 
* `instance_types` - (Optional) List of instance types associated with the EKS Node Group. Defaults to `["t3.medium"]`. Terraform will only perform drift detection if a configuration value is provided. * `labels` - (Optional) Key-value map of Kubernetes labels. Only labels that are applied with the EKS API are managed by this argument. Other Kubernetes labels applied to the EKS Node Group will not be managed. -* `launch_template` - (Optional) Configuration block with Launch Template settings. Detailed below. +* `launch_template` - (Optional) Configuration block with Launch Template settings. See [`launch_template`](#launch_template-configuration-block) below for details. * `node_group_name` – (Optional) Name of the EKS Node Group. If omitted, Terraform will assign a random, unique name. Conflicts with `node_group_name_prefix`. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. * `node_group_name_prefix` – (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `node_group_name`. * `release_version` – (Optional) AMI version of the EKS Node Group. Defaults to latest version for Kubernetes version. -* `remote_access` - (Optional) Configuration block with remote access settings. Detailed below. +* `remote_access` - (Optional) Configuration block with remote access settings. See [`remote_access`](#remote_access-configuration-block) below for details. * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. -* `taint` - (Optional) The Kubernetes taints to be applied to the nodes in the node group. Maximum of 50 taints per node group. Detailed below. 
+* `taint` - (Optional) The Kubernetes taints to be applied to the nodes in the node group. Maximum of 50 taints per node group. See [taint](#taint-configuration-block) below for details. +* `update_config` - (Optional) Configuration block with update settings. See [`update_config`](#update_config-configuration-block) below for details. * `version` – (Optional) Kubernetes version. Defaults to EKS Cluster Kubernetes version. Terraform will only perform drift detection if a configuration value is provided. ### launch_template Configuration Block diff --git a/website/docs/r/elasticache_cluster.html.markdown b/website/docs/r/elasticache_cluster.html.markdown index b683d44c438..120f693f8ad 100644 --- a/website/docs/r/elasticache_cluster.html.markdown +++ b/website/docs/r/elasticache_cluster.html.markdown @@ -156,7 +156,8 @@ The following arguments are optional: * `engine_version` – (Optional) Version number of the cache engine to be used. If not set, defaults to the latest version. See [Describe Cache Engine Versions](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-engine-versions.html) in the AWS Documentation for supported versions. - When `engine` is `redis` and the version is 6 or higher, the major and minor version can be set, e.g., `6.2`, + When `engine` is `redis` and the version is 7 or higher, the major and minor version should be set, e.g., `7.2`. + When the version is 6, the major and minor version can be set, e.g., `6.2`, or the minor version can be unspecified which will use the latest version at creation time, e.g., `6.x`. Otherwise, specify the full version desired, e.g., `5.0.6`. The actual engine version used is returned in the attribute `engine_version_actual`, see [Attributes Reference](#attributes-reference) below. 
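The Redis version rules described above for `aws_elasticache_cluster` can be sketched in configuration as follows (the resource name, node type, and version value are illustrative, not taken from this changeset):

```terraform
resource "aws_elasticache_cluster" "example" {
  cluster_id = "example"
  engine     = "redis"

  # Redis 6: the minor version may be left unspecified ("6.x") to use the
  # latest 6.x version at creation time; for Redis 7+ set the major and
  # minor version explicitly, e.g. "7.1".
  engine_version = "6.x"

  node_type       = "cache.t3.micro"
  num_cache_nodes = 1
}
```

The actual version provisioned is reported in the `engine_version_actual` attribute, which is useful when the minor version is left unspecified.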
diff --git a/website/docs/r/elasticache_global_replication_group.html.markdown b/website/docs/r/elasticache_global_replication_group.html.markdown index 48bd8a8d14d..12074ac6421 100644 --- a/website/docs/r/elasticache_global_replication_group.html.markdown +++ b/website/docs/r/elasticache_global_replication_group.html.markdown @@ -23,24 +23,24 @@ resource "aws_elasticache_global_replication_group" "example" { } resource "aws_elasticache_replication_group" "primary" { - replication_group_id = "example-primary" - replication_group_description = "primary replication group" + replication_group_id = "example-primary" + description = "primary replication group" engine = "redis" engine_version = "5.0.6" node_type = "cache.m5.large" - number_cache_clusters = 1 + num_cache_clusters = 1 } resource "aws_elasticache_replication_group" "secondary" { provider = aws.other_region - replication_group_id = "example-secondary" - replication_group_description = "secondary replication group" - global_replication_group_id = aws_elasticache_global_replication_group.example.global_replication_group_id + replication_group_id = "example-secondary" + description = "secondary replication group" + global_replication_group_id = aws_elasticache_global_replication_group.example.global_replication_group_id - number_cache_clusters = 1 + num_cache_clusters = 1 } ``` @@ -67,14 +67,14 @@ resource "aws_elasticache_global_replication_group" "example" { } resource "aws_elasticache_replication_group" "primary" { - replication_group_id = "example-primary" - replication_group_description = "primary replication group" + replication_group_id = "example-primary" + description = "primary replication group" engine = "redis" engine_version = "6.0" node_type = "cache.m5.large" - number_cache_clusters = 1 + num_cache_clusters = 1 lifecycle { ignore_changes = [engine_version] @@ -84,11 +84,11 @@ resource "aws_elasticache_replication_group" "primary" { resource "aws_elasticache_replication_group" "secondary" { 
provider = aws.other_region - replication_group_id = "example-secondary" - replication_group_description = "secondary replication group" - global_replication_group_id = aws_elasticache_global_replication_group.example.global_replication_group_id + replication_group_id = "example-secondary" + description = "secondary replication group" + global_replication_group_id = aws_elasticache_global_replication_group.example.global_replication_group_id - number_cache_clusters = 1 + num_cache_clusters = 1 lifecycle { ignore_changes = [engine_version] @@ -110,7 +110,8 @@ The following arguments are supported: When creating, by default the Global Replication Group inherits the version of the primary replication group. If a version is specified, the Global Replication Group and all member replication groups will be upgraded to this version. Cannot be downgraded without replacing the Global Replication Group and all member replication groups. - If the version is 6 or higher, the major and minor version can be set, e.g., `6.2`, + When the version is 7 or higher, the major and minor version should be set, e.g., `7.2`. + When the version is 6, the major and minor version can be set, e.g., `6.2`, or the minor version can be unspecified which will use the latest version at creation time, e.g., `6.x`. The actual engine version used is returned in the attribute `engine_version_actual`, see [Attributes Reference](#attributes-reference) below. * `global_replication_group_id_suffix` – (Required) The suffix name of a Global Datastore. If `global_replication_group_id_suffix` is changed, creates a new resource. 
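The same `engine_version` convention applies at the Global Datastore level; a minimal sketch (the suffix and version value are illustrative):

```terraform
resource "aws_elasticache_global_replication_group" "example" {
  global_replication_group_id_suffix = "example"
  primary_replication_group_id       = aws_elasticache_replication_group.primary.id

  # Redis 6: "6.x" uses the latest 6.x version at creation time;
  # Redis 7+: set the major and minor version explicitly, e.g. "7.1".
  engine_version = "6.2"
}
```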
diff --git a/website/docs/r/elasticache_replication_group.html.markdown b/website/docs/r/elasticache_replication_group.html.markdown index 76aa7268994..9f740118fd1 100644 --- a/website/docs/r/elasticache_replication_group.html.markdown +++ b/website/docs/r/elasticache_replication_group.html.markdown @@ -181,7 +181,8 @@ The following arguments are optional: * `data_tiering_enabled` - (Optional) Enables data tiering. Data tiering is only supported for replication groups using the r6gd node type. This parameter must be set to `true` when using r6gd nodes. * `engine` - (Optional) Name of the cache engine to be used for the clusters in this replication group. The only valid value is `redis`. * `engine_version` - (Optional) Version number of the cache engine to be used for the cache clusters in this replication group. - If the version is 6 or higher, the major and minor version can be set, e.g., `6.2`, + If the version is 7 or higher, the major and minor version should be set, e.g., `7.2`. + If the version is 6, the major and minor version can be set, e.g., `6.2`, or the minor version can be unspecified which will use the latest version at creation time, e.g., `6.x`. Otherwise, specify the full version desired, e.g., `5.0.6`. The actual engine version used is returned in the attribute `engine_version_actual`, see [Attributes Reference](#attributes-reference) below. diff --git a/website/docs/r/elb.html.markdown b/website/docs/r/elb.html.markdown index ff695499378..09bac26ece4 100644 --- a/website/docs/r/elb.html.markdown +++ b/website/docs/r/elb.html.markdown @@ -79,7 +79,7 @@ The following arguments are supported: * `availability_zones` - (Required for an EC2-classic ELB) The AZ's to serve traffic in. * `security_groups` - (Optional) A list of security group IDs to assign to the ELB. Only valid if creating an ELB within a VPC -* `subnets` - (Required for a VPC ELB) A list of subnet IDs to attach to the ELB. 
+* `subnets` - (Required for a VPC ELB) A list of subnet IDs to attach to the ELB. An update that would remove all of the current subnets forces a new resource. * `instances` - (Optional) A list of instance ids to place in the ELB pool. * `internal` - (Optional) If true, ELB will be an internal ELB. * `listener` - (Required) A list of listener blocks. Listeners documented below. diff --git a/website/docs/r/emr_cluster.html.markdown b/website/docs/r/emr_cluster.html.markdown index 874be9e86ca..7b7f258ad4b 100644 --- a/website/docs/r/emr_cluster.html.markdown +++ b/website/docs/r/emr_cluster.html.markdown @@ -715,7 +715,7 @@ The instance fleet configuration is available only in Amazon EMR versions 4.8.0 The launch specification for Spot instances in the fleet, which determines the defined duration, provisioning timeout behavior, and allocation strategy. -* `allocation_strategy` - (Required) Specifies the strategy to use in launching Spot instance fleets. Currently, the only option is `capacity-optimized` (the default), which launches instances from Spot instance pools with optimal capacity for the number of instances that are launching. +* `allocation_strategy` - (Required) Specifies the strategy to use in launching Spot instance fleets. Valid values include `capacity-optimized`, `diversified`, `lowest-price`, and `price-capacity-optimized`. See the [AWS documentation](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-fleet.html#emr-instance-fleet-allocation-strategy) for details on each strategy type. * `block_duration_minutes` - (Optional) Defined duration for Spot instances (also known as Spot blocks) in minutes. When specified, the Spot instance does not terminate before the defined duration expires, and defined duration pricing for Spot instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot instance receives its instance ID.
At the end of the duration, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates. * `timeout_action` - (Required) Action to take when TargetSpotCapacity has not been fulfilled when the TimeoutDurationMinutes has expired; that is, when all Spot instances could not be provisioned within the Spot provisioning timeout. Valid values are `TERMINATE_CLUSTER` and `SWITCH_TO_ON_DEMAND`. SWITCH_TO_ON_DEMAND specifies that if no Spot instances are available, On-Demand Instances should be provisioned to fulfill any remaining Spot capacity. * `timeout_duration_minutes` - (Required) Spot provisioning timeout period in minutes. If Spot instances are not provisioned within this time period, the TimeOutAction is taken. Minimum value is 5 and maximum value is 1440. The timeout applies only during initial provisioning, when the cluster is first created. diff --git a/website/docs/r/emrcontainers_job_template.markdown b/website/docs/r/emrcontainers_job_template.markdown new file mode 100644 index 00000000000..10f8df84ba4 --- /dev/null +++ b/website/docs/r/emrcontainers_job_template.markdown @@ -0,0 +1,107 @@ +--- +subcategory: "EMR Containers" +layout: "aws" +page_title: "AWS: aws_emrcontainers_job_template" +description: |- + Manages an EMR Containers (EMR on EKS) Job Template +--- + +# Resource: aws_emrcontainers_job_template + +Manages an EMR Containers (EMR on EKS) Job Template. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_emrcontainers_job_template" "example" { + job_template_data { + execution_role_arn = aws_iam_role.example.arn + release_label = "emr-6.10.0-latest" + + job_driver { + spark_sql_job_driver { + entry_point = "default" + } + } + } + + name = "example" +} +``` + +## Argument Reference + +The following arguments are required: + +* `job_template_data` - (Required) The job template data which holds values of StartJobRun API request. 
+* `kms_key_arn` - (Optional) The KMS key ARN used to encrypt the job template. +* `name` – (Required) The specified name of the job template. +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +### job_template_data Arguments + +* `configuration_overrides` - (Optional) The configuration settings that are used to override the default configuration. +* `execution_role_arn` - (Required) The execution role ARN of the job run. +* `job_driver` - (Required) Specifies the driver that the job runs on. Exactly one of the two available job drivers is required, either `spark_sql_job_driver` or `spark_submit_job_driver`. +* `job_tags` - (Optional) The tags assigned to jobs started using the job template. +* `release_label` - (Required) The release version of Amazon EMR. + +#### configuration_overrides Arguments + +* `application_configuration` - (Optional) The configurations for the application run by the job run. +* `monitoring_configuration` - (Optional) The configurations for monitoring. + +##### application_configuration Arguments + +* `classification` - (Required) The classification within a configuration. +* `configurations` - (Optional) A list of additional configurations to apply within a configuration object. +* `properties` - (Optional) A set of properties specified within a configuration classification. + +##### monitoring_configuration Arguments + +* `cloud_watch_monitoring_configuration` - (Optional) Monitoring configurations for CloudWatch. +* `persistent_app_ui` - (Optional) Monitoring configurations for the persistent application UI. +* `s3_monitoring_configuration` - (Optional) Amazon S3 configuration for monitoring log publishing.
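As a sketch of how the `configuration_overrides` arguments above nest together (the role reference, log group, and bucket names are hypothetical, not from this changeset):

```terraform
resource "aws_emrcontainers_job_template" "example" {
  name = "example"

  job_template_data {
    execution_role_arn = aws_iam_role.example.arn
    release_label      = "emr-6.10.0-latest"

    job_driver {
      spark_sql_job_driver {
        entry_point = "default"
      }
    }

    configuration_overrides {
      monitoring_configuration {
        cloud_watch_monitoring_configuration {
          log_group_name         = "/emr-containers/example" # hypothetical log group
          log_stream_name_prefix = "example"
        }

        s3_monitoring_configuration {
          log_uri = "s3://example-bucket/logs/" # hypothetical bucket
        }
      }
    }
  }
}
```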
+ +###### cloud_watch_monitoring_configuration Arguments + +* `log_group_name` - (Required) The name of the log group for log publishing. +* `log_stream_name_prefix` - (Optional) The specified name prefix for log streams. + +###### s3_monitoring_configuration Arguments + +* `log_uri` - (Optional) Amazon S3 destination URI for log publishing. + +#### job_driver Arguments + +* `spark_sql_job_driver` - (Optional) The job driver for the Spark SQL job type. +* `spark_submit_job_driver` - (Optional) The job driver parameters specified for Spark submit. + +##### spark_sql_job_driver Arguments + +* `entry_point` - (Optional) The SQL file to be executed. +* `spark_sql_parameters` - (Optional) The Spark parameters to be included in the Spark SQL command. + +##### spark_submit_job_driver Arguments + +* `entry_point` - (Required) The entry point of the job application. +* `entry_point_arguments` - (Optional) The arguments for the job application. +* `spark_submit_parameters` - (Optional) The Spark submit parameters that are used for job runs. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the job template. +* `id` - The ID of the job template. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). + +## Import + +EMR Containers job templates can be imported using the `id`, e.g.
+ +``` +$ terraform import aws_emrcontainers_job_template.example a1b2c3d4e5f6g7h8i9j10k11l +``` diff --git a/website/docs/r/finspace_kx_cluster.html.markdown b/website/docs/r/finspace_kx_cluster.html.markdown new file mode 100644 index 00000000000..ade42519139 --- /dev/null +++ b/website/docs/r/finspace_kx_cluster.html.markdown @@ -0,0 +1,188 @@ +--- +subcategory: "FinSpace" +layout: "aws" +page_title: "AWS: aws_finspace_kx_cluster" +description: |- + Terraform resource for managing an AWS FinSpace Kx Cluster. +--- + +# Resource: aws_finspace_kx_cluster + +Terraform resource for managing an AWS FinSpace Kx Cluster. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_finspace_kx_cluster" "example" { + name = "my-tf-kx-cluster" + environment_id = aws_finspace_kx_environment.example.id + type = "HDB" + release_label = "1.0" + az_mode = "SINGLE" + availability_zone_id = "use1-az2" + + capacity_configuration { + node_type = "kx.s.2xlarge" + node_count = 2 + } + + vpc_configuration { + vpc_id = aws_vpc.test.id + security_group_ids = [aws_security_group.example.id] + subnet_ids = [aws_subnet.example.id] + ip_address_type = "IP_V4" + } + + cache_storage_configurations { + type = "CACHE_1000" + size = 1200 + } + + database { + database_name = aws_finspace_kx_database.example.name + cache_configuration { + cache_type = "CACHE_1000" + db_paths = "/" + } + } + + code { + s3_bucket = aws_s3_bucket.test.id + s3_key = aws_s3_object.object.key + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `az_mode` - (Required) The number of availability zones you want to assign per cluster. This can be one of the following: + * SINGLE - Assigns one availability zone per cluster. + * MULTI - Assigns all the availability zones per cluster. +* `capacity_configuration` - (Required) Structure for the metadata of a cluster. Includes information like the CPUs needed, memory of instances, and number of instances. 
See [capacity_configuration](#capacity_configuration). +* `environment_id` - (Required) Unique identifier for the KX environment. +* `name` - (Required) Unique name for the cluster that you want to create. +* `release_label` - (Required) Version of FinSpace Managed kdb to run. +* `type` - (Required) Type of KDB database. The following types are available: + * HDB - Historical Database. The data is only accessible with read-only permissions from one of the FinSpace managed KX databases mounted to the cluster. + * RDB - Realtime Database. This type of database captures all the data from a ticker plant and stores it in memory until the end of day, after which it writes all of its data to a disk and reloads the HDB. This cluster type requires local storage for temporary storage of data during the savedown process. If you specify this field in your request, you must provide the `savedownStorageConfiguration` parameter. + * GATEWAY - A gateway cluster allows you to access data across processes in kdb systems. It allows you to create your own routing logic using the initialization scripts and custom code. This type of cluster does not require a writable local storage. +* `vpc_configuration` - (Required) Configuration details about the network where the Privatelink endpoint of the cluster resides. See [vpc_configuration](#vpc_configuration). + +The following arguments are optional: + +* `auto_scaling_configuration` - (Optional) Configuration based on which FinSpace will scale in or scale out nodes in your cluster. See [auto_scaling_configuration](#auto_scaling_configuration). +* `availability_zone_id` - (Optional) The availability zone identifiers for the requested regions. Required when `az_mode` is set to SINGLE. +* `cache_storage_configurations` - (Optional) Configurations for a read only cache storage associated with a cluster. This cache will be stored as an FSx Lustre that reads from the S3 store. See [cache_storage_configuration](#cache_storage_configuration). 
+* `code` - (Optional) Details of the custom code that you want to use inside a cluster when analyzing data. Consists of the S3 source bucket, location, object version, and the relative path from where the custom code is loaded into the cluster. See [code](#code). +* `command_line_arguments` - (Optional) List of key-value pairs to make available inside the cluster. +* `database` - (Optional) KX database that will be available for querying. Defined below. +* `description` - (Optional) Description of the cluster. +* `execution_role` - (Optional) An IAM role that defines a set of permissions associated with a cluster. These permissions are assumed when a cluster attempts to access another cluster. +* `initialization_script` - (Optional) Path to Q program that will be run at launch of a cluster. This is a relative path within .zip file that contains the custom code, which will be loaded on the cluster. It must include the file name itself. For example, somedir/init.q. +* `savedown_storage_configuration` - (Optional) Size and type of the temporary storage that is used to hold data during the savedown process. This parameter is required when you choose `type` as RDB. All the data written to this storage space is lost when the cluster node is restarted. See [savedown_storage_configuration](#savedown_storage_configuration). +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +### auto_scaling_configuration + +The auto_scaling_configuration block supports the following arguments: + +* `auto_scaling_metric` - (Required) Metric your cluster will track in order to scale in and out. For example, CPU_UTILIZATION_PERCENTAGE is the average CPU usage across all nodes in a cluster. +* `min_node_count` - (Required) Lowest number of nodes to scale. 
Must be at least 1 and less than the `max_node_count`. If nodes in the cluster belong to multiple availability zones, then `min_node_count` must be at least 3. +* `max_node_count` - (Required) Highest number of nodes to scale. Cannot be greater than 5. +* `metric_target` - (Required) Desired value of the chosen `auto_scaling_metric`. When the metric drops below this value, the cluster will scale in. When the metric goes above this value, the cluster will scale out. Can be set between 0 and 100 percent. +* `scale_in_cooldown_seconds` - (Required) Duration in seconds that FinSpace will wait after a scale in event before initiating another scaling event. +* `scale_out_cooldown_seconds` - (Required) Duration in seconds that FinSpace will wait after a scale out event before initiating another scaling event. + +### capacity_configuration + +The capacity_configuration block supports the following arguments: + +* `node_type` - (Required) Determines the hardware of the host computer used for your cluster instance. Each node type offers different memory and storage capabilities. Choose a node type based on the requirements of the application or software that you plan to run on your instance. + + You can only specify one of the following values: + * kx.s.large – The node type with a configuration of 12 GiB memory and 2 vCPUs. + * kx.s.xlarge – The node type with a configuration of 27 GiB memory and 4 vCPUs. + * kx.s.2xlarge – The node type with a configuration of 54 GiB memory and 8 vCPUs. + * kx.s.4xlarge – The node type with a configuration of 108 GiB memory and 16 vCPUs. + * kx.s.8xlarge – The node type with a configuration of 216 GiB memory and 32 vCPUs. + * kx.s.16xlarge – The node type with a configuration of 432 GiB memory and 64 vCPUs. + * kx.s.32xlarge – The node type with a configuration of 864 GiB memory and 128 vCPUs. +* `node_count` - (Required) Number of instances running in a cluster. Must be at least 1 and at most 5.
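A minimal sketch of the `auto_scaling_configuration` block described above (the thresholds, counts, and cooldowns are illustrative values, not defaults):

```terraform
resource "aws_finspace_kx_cluster" "example" {
  # ... required arguments as shown in the Basic Usage example ...

  auto_scaling_configuration {
    auto_scaling_metric        = "CPU_UTILIZATION_PERCENTAGE"
    min_node_count             = 1
    max_node_count             = 3
    metric_target              = 70.0
    scale_in_cooldown_seconds  = 300
    scale_out_cooldown_seconds = 300
  }
}
```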
+ +### cache_storage_configuration + +The cache_storage_configuration block supports the following arguments: + +* `type` - (Required) Type of cache storage. The valid values are: + * `CACHE_1000` - Provides at least 1000 MB/s disk access throughput. +* `size` - (Required) Size of the cache in gigabytes. + +### code + +The code block supports the following arguments: + +* `s3_bucket` - (Required) Unique name for the S3 bucket. +* `s3_key` - (Required) Full S3 path (excluding bucket) to the .zip file that contains the code to be loaded onto the cluster when it’s started. +* `s3_object_version` - (Optional) Version of the S3 object. + +### database + +The database block supports the following arguments: + +* `database_name` - (Required) Name of the KX database. +* `cache_configurations` - (Required) Configuration details for the disk cache used to increase the performance of reading from a KX database mounted to the cluster. See [cache_configurations](#cache_configurations). +* `changeset_id` - (Optional) A unique identifier of the changeset that is associated with the cluster. + +#### cache_configurations + +The cache_configurations block supports the following arguments: + +* `cache_type` - (Required) Type of disk cache. +* `db_paths` - (Required) Paths within the database to cache. + +### savedown_storage_configuration + +The savedown_storage_configuration block supports the following arguments: + +* `type` - (Required) Type of writeable storage space for temporarily storing your savedown data. The valid values are: + * `SDS01` - Represents 3000 IOPS and the `io2` EBS volume type. +* `size` - (Required) Size of temporary storage in bytes. + +### vpc_configuration + +The vpc_configuration block supports the following arguments: + +* `vpc_id` - (Required) Identifier of the VPC endpoint. +* `security_group_ids` - (Required) Unique identifiers of the VPC security groups applied to the VPC endpoint ENI for the cluster.
+* `subnet_ids` - (Required) Identifiers of the subnets that the PrivateLink VPC endpoint uses to connect to the cluster. +* `ip_address_type` - (Required) IP address type for cluster network configuration parameters. The following type is available: `IP_V4` - IP address version 4. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name (ARN) identifier of the KX cluster. +* `created_timestamp` - Timestamp at which the cluster was created in FinSpace. Value determined as epoch time in seconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000. +* `id` - A comma-delimited string joining environment ID and cluster name. +* `last_modified_timestamp` - Last timestamp at which the cluster was updated in FinSpace. Value determined as epoch time in seconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block).
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30m`) +* `update` - (Default `2m`) +* `delete` - (Default `40m`) + +## Import + +An AWS FinSpace Kx Cluster can be imported using the `id` (environment ID and cluster name, comma-delimited), e.g., + +``` +$ terraform import aws_finspace_kx_cluster.example n3ceo7wqxoxcti5tujqwzs,my-tf-kx-cluster +``` diff --git a/website/docs/r/finspace_kx_database.html.markdown b/website/docs/r/finspace_kx_database.html.markdown new file mode 100644 index 00000000000..a9458eea0e6 --- /dev/null +++ b/website/docs/r/finspace_kx_database.html.markdown @@ -0,0 +1,71 @@ +--- +subcategory: "FinSpace" +layout: "aws" +page_title: "AWS: aws_finspace_kx_database" +description: |- + Terraform resource for managing an AWS FinSpace Kx Database. +--- + +# Resource: aws_finspace_kx_database + +Terraform resource for managing an AWS FinSpace Kx Database. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_kms_key" "example" { + description = "Example KMS Key" + deletion_window_in_days = 7 +} + +resource "aws_finspace_kx_environment" "example" { + name = "my-tf-kx-environment" + kms_key_id = aws_kms_key.example.arn +} + +resource "aws_finspace_kx_database" "example" { + environment_id = aws_finspace_kx_environment.example.id + name = "my-tf-kx-database" + description = "Example database description" +} +``` + +## Argument Reference + +The following arguments are required: + +* `environment_id` - (Required) Unique identifier for the KX environment. +* `name` - (Required) Name of the KX database. + +The following arguments are optional: + +* `description` - (Optional) Description of the KX database. +* `tags` - (Optional) Key-value mapping of resource tags. 
If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name (ARN) identifier of the KX database. +* `created_timestamp` - Timestamp at which the database was created in FinSpace. Value determined as epoch time in seconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000. +* `id` - A comma-delimited string joining environment ID and database name. +* `last_modified_timestamp` - Last timestamp at which the database was updated in FinSpace. Value determined as epoch time in seconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block).
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30m`) +* `update` - (Default `30m`) +* `delete` - (Default `30m`) + +## Import + +An AWS FinSpace Kx Database can be imported using the `id` (environment ID and database name, comma-delimited), e.g., + +``` +$ terraform import aws_finspace_kx_database.example n3ceo7wqxoxcti5tujqwzs,my-tf-kx-database +``` diff --git a/website/docs/r/finspace_kx_environment.html.markdown b/website/docs/r/finspace_kx_environment.html.markdown new file mode 100644 index 00000000000..e6d5b1532ca --- /dev/null +++ b/website/docs/r/finspace_kx_environment.html.markdown @@ -0,0 +1,113 @@ +--- +subcategory: "FinSpace" +layout: "aws" +page_title: "AWS: aws_finspace_kx_environment" +description: |- + Terraform resource for managing an AWS FinSpace Kx Environment. +--- + +# Resource: aws_finspace_kx_environment + +Terraform resource for managing an AWS FinSpace Kx Environment. 
+ +## Example Usage + +### Basic Usage + +```terraform +resource "aws_kms_key" "example" { + description = "Sample KMS Key" + deletion_window_in_days = 7 +} + +resource "aws_finspace_kx_environment" "example" { + name = "my-tf-kx-environment" + kms_key_id = aws_kms_key.example.arn +} +``` + +### With Network Setup + +```terraform +resource "aws_kms_key" "example" { + description = "Sample KMS Key" + deletion_window_in_days = 7 +} + +resource "aws_ec2_transit_gateway" "example" { + description = "example" +} + +resource "aws_finspace_kx_environment" "example_env" { + name = "my-tf-kx-environment" + description = "Environment description" + kms_key_id = aws_kms_key.example.arn + + transit_gateway_configuration { + transit_gateway_id = aws_ec2_transit_gateway.example.id + routable_cidr_space = "100.64.0.0/26" + } + + custom_dns_configuration { + custom_dns_server_name = "example.finspace.amazonaws.com" + custom_dns_server_ip = "10.0.0.76" + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name of the KX environment that you want to create. +* `kms_key_id` - (Required) KMS key ID to encrypt your data in the FinSpace environment. + +The following arguments are optional: + +* `custom_dns_configuration` - (Optional) List of DNS server name and server IP. This is used to set up Route-53 outbound resolvers. Defined below. +* `description` - (Optional) Description for the KX environment. +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `transit_gateway_configuration` - (Optional) Transit gateway and network configuration that is used to connect the KX environment to an internal network. Defined below. 
+ +### custom_dns_configuration + +The custom_dns_configuration block supports the following arguments: + +* `custom_dns_server_ip` - (Required) IP address of the DNS server. +* `custom_dns_server_name` - (Required) Name of the DNS server. + +### transit_gateway_configuration + +The transit_gateway_configuration block supports the following arguments: + +* `routable_cidr_space` - (Required) Routing CIDR on behalf of the KX environment. It can be any `/26` range in the 100.64.0.0 CIDR space. Once provided, it is added to the customer's transit gateway routing table so that traffic can be routed to the KX network. +* `transit_gateway_id` - (Required) Identifier of the transit gateway created by the customer to connect outbound traffic from the KX network to your internal network. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name (ARN) identifier of the KX environment. +* `availability_zones` - AWS Availability Zone IDs that this environment is available in. Important when selecting VPC subnets to use in cluster creation. +* `created_timestamp` - Timestamp at which the environment was created in FinSpace. Value determined as epoch time in seconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000. +* `id` - Unique identifier for the KX environment. +* `infrastructure_account_id` - Unique identifier for the AWS environment infrastructure account. +* `last_modified_timestamp` - Last timestamp at which the environment was updated in FinSpace. Value determined as epoch time in seconds. For example, the value for Monday, November 1, 2021 12:00:00 PM UTC is specified as 1635768000. +* `status` - Status of environment creation. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block).
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30m`) +* `update` - (Default `30m`) +* `delete` - (Default `30m`) + +## Import + +An AWS FinSpace Kx Environment can be imported using the `id`, e.g., + +``` +$ terraform import aws_finspace_kx_environment.example n3ceo7wqxoxcti5tujqwzs +``` diff --git a/website/docs/r/finspace_kx_user.html.markdown b/website/docs/r/finspace_kx_user.html.markdown new file mode 100644 index 00000000000..a0439092c55 --- /dev/null +++ b/website/docs/r/finspace_kx_user.html.markdown @@ -0,0 +1,87 @@ +--- +subcategory: "FinSpace" +layout: "aws" +page_title: "AWS: aws_finspace_kx_user" +description: |- + Terraform resource for managing an AWS FinSpace Kx User. +--- + +# Resource: aws_finspace_kx_user + +Terraform resource for managing an AWS FinSpace Kx User. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_kms_key" "example" { + description = "Example KMS Key" + deletion_window_in_days = 7 +} + +resource "aws_finspace_kx_environment" "example" { + name = "my-tf-kx-environment" + kms_key_id = aws_kms_key.example.arn +} + +resource "aws_iam_role" "example" { + name = "example-role" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + Service = "ec2.amazonaws.com" + } + }, + ] + }) +} + +resource "aws_finspace_kx_user" "example" { + name = "my-tf-kx-user" + environment_id = aws_finspace_kx_environment.example.id + iam_role = aws_iam_role.example.arn +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) A unique identifier for the user. +* `environment_id` - (Required) Unique identifier for the KX environment. +* `iam_role` - (Required) IAM role ARN to be associated with the user. 
+ +The following arguments are optional: + +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name (ARN) identifier of the KX user. +* `id` - A comma-delimited string joining environment ID and user name. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30m`) +* `update` - (Default `30m`) +* `delete` - (Default `30m`) + +## Import + +An AWS FinSpace Kx User can be imported using the `id` (environment ID and user name, comma-delimited), e.g., + +``` +$ terraform import aws_finspace_kx_user.example n3ceo7wqxoxcti5tujqwzs,my-tf-kx-user +``` diff --git a/website/docs/r/fis_experiment_template.html.markdown b/website/docs/r/fis_experiment_template.html.markdown index 12de70430ef..3e95c4d178d 100644 --- a/website/docs/r/fis_experiment_template.html.markdown +++ b/website/docs/r/fis_experiment_template.html.markdown @@ -61,6 +61,7 @@ The following arguments are optional: * `tags` - (Optional) Key-value mapping of tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `target` - (Optional) Target of an action. See below. +* `log_configuration` - (Optional) The configuration for experiment logging. See below. 
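+With the `log_configuration` argument above, experiment logs can be delivered to CloudWatch Logs; a minimal sketch (the log group name and schema version are illustrative, and the `:*` ARN suffix reflects the FIS requirement for log group ARNs):
+
+```terraform
+resource "aws_cloudwatch_log_group" "example" {
+  name = "fis-example"
+}
+
+resource "aws_fis_experiment_template" "example" {
+  # ... description, role_arn, action, stop_condition, target omitted ...
+
+  log_configuration {
+    log_schema_version = 2
+
+    cloudwatch_logs_configuration {
+      log_group_arn = "${aws_cloudwatch_log_group.example.arn}:*"
+    }
+  }
+}
+```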
+ ### `action` @@ -80,7 +81,7 @@ For a list of parameters supported by each action, see [AWS FIS actions referenc #### `target` (`action.*.target`) -* `key` - (Required) Target type. Valid values are `Cluster` (EKS Cluster), `Clusters` (ECS Clusters), `DBInstances` (RDS DB Instances), `Instances` (EC2 Instances), `Nodegroups` (EKS Node groups), `Roles` (IAM Roles), `SpotInstances` (EC2 Spot Instances), `Subnets` (VPC Subnets). +* `key` - (Required) Target type. Valid values are `Cluster` (EKS Cluster), `Clusters` (ECS Clusters), `DBInstances` (RDS DB Instances), `Instances` (EC2 Instances), `Nodegroups` (EKS Node groups), `Roles` (IAM Roles), `SpotInstances` (EC2 Spot Instances), `Subnets` (VPC Subnets), `Volumes` (EBS Volumes), `Pods` (EKS Pods), `Tasks` (ECS Tasks). See the [documentation](https://docs.aws.amazon.com/fis/latest/userguide/actions.html#action-targets) for more details. * `value` - (Required) Target name, referencing a corresponding target. ### `stop_condition` @@ -96,6 +97,7 @@ For a list of parameters supported by each action, see [AWS FIS actions referenc * `filter` - (Optional) Filter(s) for the target. Filters can be used to select resources based on specific attributes returned by the respective describe action of the resource type. For more information, see [Targets for AWS FIS](https://docs.aws.amazon.com/fis/latest/userguide/targets.html#target-filters). See below. * `resource_arns` - (Optional) Set of ARNs of the resources to target with an action. Conflicts with `resource_tag`. * `resource_tag` - (Optional) Tag(s) the resources need to have to be considered a valid target for an action. Conflicts with `resource_arns`. See below. +* `parameters` - (Optional) The resource type parameters. ~> **NOTE:** The `target` configuration block requires either `resource_arns` or `resource_tag`. @@ -111,6 +113,21 @@ For a list of parameters supported by each action, see [AWS FIS actions referenc * `key` - (Required) Tag key.
* `value` - (Required) Tag value. +### `log_configuration` + +* `log_schema_version` - (Required) The schema version. See [documentation](https://docs.aws.amazon.com/fis/latest/userguide/monitoring-logging.html#experiment-log-schema) for the list of schema versions. +* `cloudwatch_logs_configuration` - (Optional) The configuration for experiment logging to Amazon CloudWatch Logs. See below. +* `s3_configuration` - (Optional) The configuration for experiment logging to Amazon S3. See below. + +#### `cloudwatch_logs_configuration` + +* `log_group_arn` - (Required) The Amazon Resource Name (ARN) of the destination Amazon CloudWatch Logs log group. + +#### `s3_configuration` + +* `bucket_name` - (Required) The name of the destination bucket. +* `prefix` - (Optional) The bucket prefix. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/fsx_ontap_volume.html.markdown b/website/docs/r/fsx_ontap_volume.html.markdown index 54b38ec3319..61410fa873c 100644 --- a/website/docs/r/fsx_ontap_volume.html.markdown +++ b/website/docs/r/fsx_ontap_volume.html.markdown @@ -49,10 +49,12 @@ resource "aws_fsx_ontap_volume" "test" { The following arguments are supported: * `name` - (Required) The name of the Volume. You can use a maximum of 203 alphanumeric characters, plus the underscore (_) special character. -* `junction_path` - (Required) Specifies the location in the storage virtual machine's namespace where the volume is mounted. The junction_path must have a leading forward slash, such as `/vol3` -* `security_style` - (Optional) Specifies the volume security style, Valid values are `UNIX`, `NTFS`, and `MIXED`. Default value is `UNIX`. +* `junction_path` - (Optional) Specifies the location in the storage virtual machine's namespace where the volume is mounted. 
The `junction_path` must have a leading forward slash, such as `/vol3`. +* `ontap_volume_type` - (Optional) Specifies the type of volume. Valid values are `RW` and `DP`. Default value is `RW`. These can be set by the ONTAP CLI or API. This setting is used as part of migration and replication. See [Migrating to Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/migrating-fsx-ontap.html). +* `security_style` - (Optional) Specifies the volume security style. Valid values are `UNIX`, `NTFS`, and `MIXED`. * `size_in_megabytes` - (Required) Specifies the size of the volume, in megabytes (MB), that you are creating. -* `storage_efficiency_enabled` - (Required) Set to true to enable deduplication, compression, and compaction storage efficiency features on the volume. +* `skip_final_backup` - (Optional) When enabled, skips the default final backup taken when the volume is deleted. This configuration must be applied separately before attempting to delete the resource to have the desired behavior. Defaults to `false`. +* `storage_efficiency_enabled` - (Optional) Set to true to enable deduplication, compression, and compaction storage efficiency features on the volume. * `storage_virtual_machine_id` - (Required) Specifies the storage virtual machine in which to create the volume. * `tags` - (Optional) A map of tags to assign to the volume. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. @@ -61,7 +63,7 @@ The following arguments are supported: The following arguments are supported for `tiering_policy` configuration block: * `name` - (Required) Specifies the tiering policy for the ONTAP volume for moving data to the capacity pool storage. Valid values are `SNAPSHOT_ONLY`, `AUTO`, `ALL`, `NONE`. Default value is `SNAPSHOT_ONLY`.
-* `cooling_policy` - (Optional) Specifies the number of days that user data in a volume must remain inactive before it is considered "cold" and moved to the capacity pool. Used with `AUTO` and `SNAPSHOT_ONLY` tiering policies only. Valid values are whole numbers between 2 and 183. Default values are 31 days for `AUTO` and 2 days for `SNAPSHOT_ONLY`. +* `cooling_period` - (Optional) Specifies the number of days that user data in a volume must remain inactive before it is considered "cold" and moved to the capacity pool. Used with `AUTO` and `SNAPSHOT_ONLY` tiering policies only. Valid values are whole numbers between 2 and 183. Default values are 31 days for `AUTO` and 2 days for `SNAPSHOT_ONLY`. ## Attributes Reference @@ -71,7 +73,6 @@ In addition to all arguments above, the following attributes are exported: * `id` - Identifier of the volume, e.g., `fsvol-12345678` * `file_system_id` - Describes the file system for the volume, e.g. `fs-12345679` * `flexcache_endpoint_type` - Specifies the FlexCache endpoint type of the volume, Valid values are `NONE`, `ORIGIN`, `CACHE`. Default value is `NONE`. These can be set by the ONTAP CLI or API and are use with FlexCache feature. -* `ontap_volume_type` - Specifies the type of volume, Valid values are `RW`, `DP`, and `LS`. Default value is `RW`. These can be set by the ONTAP CLI or API. This setting is used as part of migration and replication [Migrating to Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/migrating-fsx-ontap.html) * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). * `uuid` - The Volume's UUID (universally unique identifier). * `volume_type` - The type of volume, currently the only valid value is `ONTAP`. 
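+Taken together, the optional arguments above might be combined like this (the name, path, sizes, and cooling period are illustrative):
+
+```terraform
+resource "aws_fsx_ontap_volume" "example" {
+  name                       = "example"
+  junction_path              = "/example"
+  size_in_megabytes          = 1024
+  skip_final_backup          = true
+  storage_efficiency_enabled = true
+  storage_virtual_machine_id = aws_fsx_ontap_storage_virtual_machine.example.id
+
+  tiering_policy {
+    name           = "AUTO"
+    cooling_period = 31
+  }
+}
+```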
diff --git a/website/docs/r/glue_catalog_database.html.markdown b/website/docs/r/glue_catalog_database.html.markdown index a8ee23a5f40..6509488c465 100644 --- a/website/docs/r/glue_catalog_database.html.markdown +++ b/website/docs/r/glue_catalog_database.html.markdown @@ -51,6 +51,7 @@ The following arguments are supported: * `catalog_id` - (Required) ID of the Data Catalog in which the database resides. * `database_name` - (Required) Name of the catalog database. +* `region` - (Optional) Region of the target database. ### create_table_default_permission diff --git a/website/docs/r/glue_crawler.html.markdown b/website/docs/r/glue_crawler.html.markdown index 03f90afa3d8..228a7c76823 100644 --- a/website/docs/r/glue_crawler.html.markdown +++ b/website/docs/r/glue_crawler.html.markdown @@ -138,10 +138,12 @@ The following arguments are supported: * `classifiers` (Optional) List of custom classifiers. By default, all AWS classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification. * `configuration` (Optional) JSON string of configuration information. For more details see [Setting Crawler Configuration Options](https://docs.aws.amazon.com/glue/latest/dg/crawler-configuration.html). * `description` (Optional) Description of the crawler. +* `delta_target` (Optional) List of nested Delta Lake target arguments. See [Delta Target](#delta-target) below. * `dynamodb_target` (Optional) List of nested DynamoDB target arguments. See [Dynamodb Target](#dynamodb-target) below. * `jdbc_target` (Optional) List of nested JBDC target arguments. See [JDBC Target](#jdbc-target) below. * `s3_target` (Optional) List nested Amazon S3 target arguments. See [S3 Target](#s3-target) below. * `mongodb_target` (Optional) List nested MongoDB target arguments. See [MongoDB Target](#mongodb-target) below. +* `iceberg_target` (Optional) List nested Iceberg target arguments. See [Iceberg Target](#iceberg-target) below. 
* `schedule` (Optional) A cron expression used to specify the schedule. For more information, see [Time-Based Schedules for Jobs and Crawlers](https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-schedule.html). For example, to run something every day at 12:15 UTC, you would specify: `cron(15 12 * * ? *)`. * `schema_change_policy` (Optional) Policy for the crawler's update and deletion behavior. See [Schema Change Policy](#schema-change-policy) below. * `lake_formation_configuration` (Optional) Specifies Lake Formation configuration settings for the crawler. See [Lake Formation Configuration](#lake-formation-configuration) below. @@ -191,6 +193,13 @@ The following arguments are supported: * `path` - (Required) The path of the Amazon DocumentDB or MongoDB target (database/collection). * `scan_all` - (Optional) Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table. Default value is `true`. +### Iceberg Target + +* `connection_name` - (Optional) The name of the connection to use to connect to the Iceberg target. +* `paths` - (Required) One or more Amazon S3 paths that contain Iceberg metadata folders, such as `s3://bucket/prefix`. +* `exclusions` - (Optional) A list of glob patterns used to exclude from the crawl. +* `maximum_traversal_depth` - (Required) The maximum depth of Amazon S3 paths that the crawler can traverse to discover the Iceberg metadata folder in your Amazon S3 path. Used to limit the crawler run time. Valid values are between `1` and `20`. + ### Delta Target * `connection_name` - (Optional) The name of the connection to use to connect to the Delta table target. @@ -210,7 +219,7 @@ The following arguments are supported: ### Lineage Configuration -* `crawler_lineage_settings` - (Optional) Specifies whether data lineage is enabled for the crawler. Valid values are: `ENABLE` and `DISABLE`. Default value is `Disable`.
+* `crawler_lineage_settings` - (Optional) Specifies whether data lineage is enabled for the crawler. Valid values are: `ENABLE` and `DISABLE`. Default value is `DISABLE`. ### Recrawl Policy diff --git a/website/docs/r/glue_data_quality_ruleset.html.markdown b/website/docs/r/glue_data_quality_ruleset.html.markdown new file mode 100644 index 00000000000..2be8a278cb7 --- /dev/null +++ b/website/docs/r/glue_data_quality_ruleset.html.markdown @@ -0,0 +1,93 @@ +--- +subcategory: "Glue" +layout: "aws" +page_title: "AWS: aws_glue_data_quality_ruleset" +description: |- + Provides a Glue Data Quality Ruleset. +--- + +# Resource: aws_glue_data_quality_ruleset + +Provides a Glue Data Quality Ruleset Resource. You can refer to the [Glue Developer Guide](https://docs.aws.amazon.com/glue/latest/dg/glue-data-quality.html) for a full explanation of the Glue Data Quality Ruleset functionality + +## Example Usage + +### Basic + +```terraform +resource "aws_glue_data_quality_ruleset" "example" { + name = "example" + ruleset = "Rules = [Completeness \"colA\" between 0.4 and 0.8]" +} +``` + +### With description + +```terraform +resource "aws_glue_data_quality_ruleset" "example" { + name = "example" + description = "example" + ruleset = "Rules = [Completeness \"colA\" between 0.4 and 0.8]" +} +``` + +### With tags + +```terraform +resource "aws_glue_data_quality_ruleset" "example" { + name = "example" + ruleset = "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + + tags = { + "hello" = "world" + } +} +``` + +### With target_table + +```terraform +resource "aws_glue_data_quality_ruleset" "example" { + name = "example" + ruleset = "Rules = [Completeness \"colA\" between 0.4 and 0.8]" + + target_table { + database_name = aws_glue_catalog_database.example.name + table_name = aws_glue_catalog_table.example.name + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional) Description of the data quality ruleset. 
+* `name` - (Required, Forces new resource) Name of the data quality ruleset. +* `ruleset` - (Optional) A Data Quality Definition Language (DQDL) ruleset. For more information, see the AWS Glue developer guide. +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `target_table` - (Optional, Forces new resource) A configuration block specifying a target table associated with the data quality ruleset. See [`target_table`](#target_table) below. + +### target_table + +* `catalog_id` - (Optional, Forces new resource) The catalog ID where the AWS Glue table exists. +* `database_name` - (Required, Forces new resource) Name of the database where the AWS Glue table exists. +* `table_name` - (Required, Forces new resource) Name of the AWS Glue table. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the Glue Data Quality Ruleset. +* `created_on` - The time and date that this data quality ruleset was created. +* `last_modified_on` - The time and date that this data quality ruleset was last modified. +* `recommendation_run_id` - When a ruleset was created from a recommendation run, this run ID is generated to link the two together. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block).
+ +## Import + +Glue Data Quality Ruleset can be imported using the `name`, e.g., + +``` +$ terraform import aws_glue_data_quality_ruleset.example exampleName +``` diff --git a/website/docs/r/iam_role.html.markdown b/website/docs/r/iam_role.html.markdown index f4f58521916..0580c5852f0 100644 --- a/website/docs/r/iam_role.html.markdown +++ b/website/docs/r/iam_role.html.markdown @@ -208,15 +208,9 @@ In addition to all arguments above, the following attributes are exported: * `create_date` - Creation date of the IAM role. * `id` - Name of the role. * `name` - Name of the role. -* `role_last_used` - Contains information about the last time that an IAM role was used. See [`role_last_used`](#role_last_used) for details. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). * `unique_id` - Stable and unique string identifying the role. -### role_last_used - -* `region` - The name of the AWS Region in which the role was last used. -* `last_used_time` - The date and time, in RFC 3339 format, that the role was last used. - ## Import IAM Roles can be imported using the `name`, e.g., diff --git a/website/docs/r/iam_virtual_mfa_device.html.markdown b/website/docs/r/iam_virtual_mfa_device.html.markdown index c252a48de29..80f447216d0 100644 --- a/website/docs/r/iam_virtual_mfa_device.html.markdown +++ b/website/docs/r/iam_virtual_mfa_device.html.markdown @@ -13,6 +13,10 @@ Provides an IAM Virtual MFA Device. ~> **Note:** All attributes will be stored in the raw state as plain-text. [Read more about sensitive data in state](https://www.terraform.io/docs/state/sensitive-data.html). +~> **Note:** A virtual MFA device cannot be directly associated with an IAM User from Terraform. 
+ To associate the virtual MFA device with a user and enable it, use the code returned in either `base_32_string_seed` or `qr_code_png` to generate TOTP authentication codes. + The authentication codes can then be used with the AWS CLI command [`aws iam enable-mfa-device`](https://docs.aws.amazon.com/cli/latest/reference/iam/enable-mfa-device.html) or the AWS API call [`EnableMFADevice`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_EnableMFADevice.html). + ## Example Usage **Using certs on file:** @@ -37,8 +41,10 @@ In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name (ARN) specifying the virtual mfa device. * `base_32_string_seed` - The base32 seed defined as specified in [RFC3548](https://tools.ietf.org/html/rfc3548.txt). The `base_32_string_seed` is base64-encoded. -* `qr_code_png` - A QR code PNG image that encodes `otpauth://totp/$virtualMFADeviceName@$AccountName?secret=$Base32String` where `$virtualMFADeviceName` is one of the create call arguments. AccountName is the user name if set (otherwise, the account ID otherwise), and Base32String is the seed in base32 format. +* `enable_date` - The date and time when the virtual MFA device was enabled. +* `qr_code_png` - A QR code PNG image that encodes `otpauth://totp/$virtualMFADeviceName@$AccountName?secret=$Base32String` where `$virtualMFADeviceName` is one of the create call arguments. AccountName is the user name if set (otherwise, the account ID), and Base32String is the seed in base32 format. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `user_name` - The associated IAM User name if the virtual MFA device is enabled. 
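+To illustrate the enrollment flow described in the note above, a minimal sketch (device name hypothetical; note the seed is stored in state as plain text):
+
+```terraform
+resource "aws_iam_virtual_mfa_device" "example" {
+  virtual_mfa_device_name = "example"
+}
+
+# Expose the seed so it can be fed to an authenticator app, after which
+# two consecutive TOTP codes are passed to `aws iam enable-mfa-device`.
+output "base_32_string_seed" {
+  value     = aws_iam_virtual_mfa_device.example.base_32_string_seed
+  sensitive = true
+}
+```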
## Import diff --git a/website/docs/r/instance.html.markdown b/website/docs/r/instance.html.markdown index f04809e3e6c..225d48799d9 100644 --- a/website/docs/r/instance.html.markdown +++ b/website/docs/r/instance.html.markdown @@ -41,6 +41,36 @@ resource "aws_instance" "web" { } ``` +### Spot instance example + +```terraform +data "aws_ami" "this" { + most_recent = true + owners = ["amazon"] + filter { + name = "architecture" + values = ["arm64"] + } + filter { + name = "name" + values = ["al2023-ami-2023*"] + } +} + +resource "aws_instance" "this" { + ami = data.aws_ami.this.id + instance_market_options { + spot_options { + max_price = 0.0031 + } + } + instance_type = "t4g.nano" + tags = { + Name = "test-spot" + } +} +``` + ### Network and credit specification example ```terraform @@ -176,6 +206,7 @@ The following arguments are supported: * `host_resource_group_arn` - (Optional) ARN of the host resource group in which to launch the instances. If you specify an ARN, omit the `tenancy` parameter or set it to `host`. * `iam_instance_profile` - (Optional) IAM Instance Profile to launch the instance with. Specified as the name of the Instance Profile. Ensure your credentials have the correct permission to assign the instance profile according to the [EC2 documentation](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html#roles-usingrole-ec2instance-permissions), notably `iam:PassRole`. * `instance_initiated_shutdown_behavior` - (Optional) Shutdown behavior for the instance. Amazon defaults this to `stop` for EBS-backed instances and `terminate` for instance-store instances. Cannot be set on instance-store instances. See [Shutdown Behavior](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingInstanceInitiatedShutdownBehavior) for more information. +* `instance_market_options` - (Optional) Describes the market (purchasing) option for the instances. 
See [Market Options](#market-options) below for details on attributes. * `instance_type` - (Optional) Instance type to use for the instance. Required unless `launch_template` is specified and the Launch Template specifies an instance type. If an instance type is specified in the Launch Template, setting `instance_type` will override the instance type specified in the Launch Template. Updates to this field will trigger a stop/start of the EC2 instance. * `ipv6_address_count`- (Optional) Number of IPv6 addresses to associate with the primary network interface. Amazon EC2 chooses the IPv6 addresses from the range of your subnet. * `ipv6_addresses` - (Optional) Specify one or more IPv6 addresses from the range of the subnet to associate with the primary network interface @@ -310,6 +341,13 @@ The `maintenance_options` block supports the following: * `auto_recovery` - (Optional) Automatic recovery behavior of the Instance. Can be `"default"` or `"disabled"`. See [Recover your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html) for more details. +### Market Options + +The `instance_market_options` block supports the following: + +* `market_type` - (Optional) Type of market for the instance. Valid value is `spot`. Defaults to `spot`. +* `spot_options` - (Optional) Block to configure the options for Spot Instances. See [Spot Options](#spot-options) below for details on attributes. + ### Metadata Options Metadata options can be applied/modified to the EC2 Instance at any time. @@ -344,6 +382,15 @@ The `private_dns_name_options` block supports the following: * `enable_resource_name_dns_a_record` - Indicates whether to respond to DNS queries for instance hostnames with DNS A records. * `hostname_type` - Type of hostname for Amazon EC2 instances. For IPv4 only subnets, an instance DNS name must be based on the instance IPv4 address. For IPv6 native subnets, an instance DNS name must be based on the instance ID. 
For dual-stack subnets, you can specify whether DNS names use the instance IPv4 address or the instance ID. Valid values: `ip-name` and `resource-name`. +### Spot Options + +The `spot_options` block supports the following: + +* `instance_interruption_behavior` - (Optional) The behavior when a Spot Instance is interrupted. Valid values include `hibernate`, `stop`, `terminate`. The default is `terminate`. +* `max_price` - (Optional) The maximum hourly price that you're willing to pay for a Spot Instance. +* `spot_instance_type` - (Optional) The Spot Instance request type. Valid values include `one-time`, `persistent`. Persistent Spot Instance requests are only supported when the instance interruption behavior is either `hibernate` or `stop`. The default is `one-time`. +* `valid_until` - (Optional) The end date of the request, in UTC format (YYYY-MM-DDTHH:MM:SSZ). Supported only for persistent requests. + ### Launch Template Specification -> **Note:** Launch Template parameters will be used only once during instance creation. If you want to update existing instance you need to change parameters @@ -381,6 +428,11 @@ For `root_block_device`, in addition to the arguments above, the following attri * `volume_id` - ID of the volume. For example, the ID can be accessed like this, `aws_instance.web.root_block_device.0.volume_id`. * `device_name` - Device name, e.g., `/dev/sdh` or `xvdh`. +For `instance_market_options`, in addition to the arguments above, the following attributes are exported: + +* `instance_lifecycle` - Indicates whether this is a Spot Instance or a Scheduled Instance. +* `spot_instance_request_id` - If the request is a Spot Instance request, the ID of the request.
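+Putting `instance_market_options` and `spot_options` together, a persistent Spot request might be sketched as (AMI reference, price, and instance type are placeholders):
+
+```terraform
+resource "aws_instance" "example" {
+  ami           = data.aws_ami.this.id
+  instance_type = "t4g.nano"
+
+  instance_market_options {
+    market_type = "spot"
+    spot_options {
+      # A persistent request requires an interruption behavior of
+      # hibernate or stop; one-time requests default to terminate.
+      instance_interruption_behavior = "stop"
+      spot_instance_type             = "persistent"
+      max_price                      = 0.0031
+    }
+  }
+}
+```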
+ ## Timeouts [Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): diff --git a/website/docs/r/internetmonitor_monitor.html.markdown b/website/docs/r/internetmonitor_monitor.html.markdown index 2c0da1616ce..239224d9c6f 100644 --- a/website/docs/r/internetmonitor_monitor.html.markdown +++ b/website/docs/r/internetmonitor_monitor.html.markdown @@ -26,13 +26,21 @@ The following arguments are required: The following arguments are optional: +* `health_events_config` - (Optional) Health event thresholds. A health event threshold percentage, for performance and availability, determines when Internet Monitor creates a health event when there's an internet issue that affects your application end users. See [Health Events Config](#health-events-config) below. * `internet_measurements_log_delivery` - (Optional) Publish internet measurements for Internet Monitor to an Amazon S3 bucket in addition to CloudWatch Logs. * `max_city_networks_to_monitor` - (Optional) The maximum number of city-networks to monitor for your resources. A city-network is the location (city) where clients access your application resources from and the network or ASN, such as an internet service provider (ISP), that clients access the resources through. This limit helps control billing costs. -* `resources` - (Optional)The resources to include in a monitor, which you provide as a set of Amazon Resource Names (ARNs). +* `resources` - (Optional) The resources to include in a monitor, which you provide as a set of Amazon Resource Names (ARNs). * `status` - (Optional) The status for a monitor. The accepted values for Status with the UpdateMonitor API call are the following: `ACTIVE` and `INACTIVE`. * `tags` - (Optional) Map of tags to assign to the resource. 
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `traffic_percentage_to_monitor` - (Optional) The percentage of the internet-facing traffic for your application that you want to monitor with this monitor. +### Health Events Config + +Defines the health event threshold percentages, for performance score and availability score. Amazon CloudWatch Internet Monitor creates a health event when there's an internet issue that affects your application end users where a health score percentage is at or below a set threshold. If you don't set a health event threshold, the default value is 95%. + +* `availability_score_threshold` - (Optional) The health event threshold percentage set for availability scores. +* `performance_score_threshold` - (Optional) The health event threshold percentage set for performance scores. 
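+A minimal sketch applying the thresholds above (monitor name illustrative):
+
+```terraform
+resource "aws_internetmonitor_monitor" "example" {
+  monitor_name = "example"
+
+  # Create a health event when availability drops to 50% or below,
+  # or performance to 95% or below (95% is also the service default).
+  health_events_config {
+    availability_score_threshold = 50
+    performance_score_threshold  = 95
+  }
+}
+```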
+ ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/kendra_data_source.html.markdown b/website/docs/r/kendra_data_source.html.markdown index f2cf34ce1ae..521830bcd80 100644 --- a/website/docs/r/kendra_data_source.html.markdown +++ b/website/docs/r/kendra_data_source.html.markdown @@ -59,10 +59,11 @@ resource "aws_kendra_data_source" "example" { configuration { s3_configuration { + bucket_name = aws_s3_bucket.example.id + access_control_list_configuration { key_path = "s3://${aws_s3_bucket.example.id}/path-1" } - bucket_name = aws_s3_bucket.example.id } } } @@ -79,12 +80,14 @@ resource "aws_kendra_data_source" "example" { configuration { s3_configuration { - bucket_name = aws_s3_bucket.example.id - s3_prefix = "example" - + bucket_name = aws_s3_bucket.example.id exclusion_patterns = ["example"] inclusion_patterns = ["hello"] inclusion_prefixes = ["world"] + + documents_metadata_configuration { + s3_prefix = "example" + } } } } @@ -336,69 +339,71 @@ resource "aws_kendra_data_source" "example" { The following arguments are required: -* `index_id` - (Required, Forces new resource) The identifier of the index for your Amazon Kendra data_source. -* `name` - (Required) A name for your Data Source connector. +* `index_id` - (Required, Forces new resource) The identifier of the index for your Amazon Kendra data source. +* `name` - (Required) A name for your data source connector. * `role_arn` - (Required, Optional in one scenario) The Amazon Resource Name (ARN) of a role with permission to access the data source connector. For more information, see [IAM roles for Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/iam-roles.html). You can't specify the `role_arn` parameter when the `type` parameter is set to `CUSTOM`. The `role_arn` parameter is required for all other data sources. * `type` - (Required, Forces new resource) The type of data source repository. 
For an updated list of values, refer to [Valid Values for Type](https://docs.aws.amazon.com/kendra/latest/dg/API_CreateDataSource.html#Kendra-CreateDataSource-request-Type). The following arguments are optional: -* `configuration` - (Optional) A block with the configuration information to connect to your Data Source repository. You can't specify the `configuration` argument when the `type` parameter is set to `CUSTOM`. [Detailed below](#configuration). -* `custom_document_enrichment_configuration` - (Optional) A block with the configuration information for altering document metadata and content during the document ingestion process. For more information on how to create, modify and delete document metadata, or make other content alterations when you ingest documents into Amazon Kendra, see [Customizing document metadata during the ingestion process](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html). [Detailed below](#custom_document_enrichment_configuration). +* `configuration` - (Optional) A block with the configuration information to connect to your Data Source repository. You can't specify the `configuration` block when the `type` parameter is set to `CUSTOM`. [Detailed below](#configuration-block). +* `custom_document_enrichment_configuration` - (Optional) A block with the configuration information for altering document metadata and content during the document ingestion process. For more information on how to create, modify and delete document metadata, or make other content alterations when you ingest documents into Amazon Kendra, see [Customizing document metadata during the ingestion process](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html). [Detailed below](#custom_document_enrichment_configuration-block). * `description` - (Optional) A description for the Data Source connector. * `language_code` - (Optional) The code for a language. 
This allows you to support a language for all documents when creating the Data Source connector. English is supported by default. For more information on supported languages, including their codes, see [Adding documents in languages other than English](https://docs.aws.amazon.com/kendra/latest/dg/in-adding-languages.html). * `schedule` - (Optional) Sets the frequency for Amazon Kendra to check the documents in your Data Source repository and update the index. If you don't set a schedule Amazon Kendra will not periodically update the index. You can call the `StartDataSourceSyncJob` API to update the index. * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. -### `configuration` +### configuration Block The `configuration` configuration block supports the following arguments: -* `s3_configuration` - (Required if `type` is set to `S3`) A block that provides the configuration information to connect to an Amazon S3 bucket as your data source. [Detailed below](#s3_configuration). -* `web_crawler_configuration` - (Required if `type` is set to `WEBCRAWLER`) A block that provides the configuration information required for Amazon Kendra Web Crawler. [Detailed below](#web_crawler_configuration). +* `s3_configuration` - (Required if `type` is set to `S3`) A block that provides the configuration information to connect to an Amazon S3 bucket as your data source. [Detailed below](#s3_configuration-block). +* `web_crawler_configuration` - (Required if `type` is set to `WEBCRAWLER`) A block that provides the configuration information required for Amazon Kendra Web Crawler. [Detailed below](#web_crawler_configuration-block). 
-#### `s3_configuration` +### s3_configuration Block The `s3_configuration` configuration block supports the following arguments: -* `access_control_list_configuration` - (Optional) A block that provides the path to the S3 bucket that contains the user context filtering files for the data source. For the format of the file, see [Access control for S3 data sources](https://docs.aws.amazon.com/kendra/latest/dg/s3-acl.html). [Detailed below](#access_control_list_configuration). +* `access_control_list_configuration` - (Optional) A block that provides the path to the S3 bucket that contains the user context filtering files for the data source. For the format of the file, see [Access control for S3 data sources](https://docs.aws.amazon.com/kendra/latest/dg/s3-acl.html). [Detailed below](#access_control_list_configuration-block). * `bucket_name` - (Required) The name of the bucket that contains the documents. -* `documents_metadata_configuration` - (Optional) A block that defines the Document metadata files that contain information such as the document access control information, source URI, document author, and custom attributes. Each metadata file contains metadata about a single document. [Detailed below](#documents_metadata_configuration). +* `documents_metadata_configuration` - (Optional) A block that defines the Document metadata files that contain information such as the document access control information, source URI, document author, and custom attributes. Each metadata file contains metadata about a single document. [Detailed below](#documents_metadata_configuration-block). * `exclusion_patterns` - (Optional) A list of glob patterns for documents that should not be indexed. If a document that matches an inclusion prefix or inclusion pattern also matches an exclusion pattern, the document is not indexed. 
Refer to [Exclusion Patterns for more examples](https://docs.aws.amazon.com/kendra/latest/dg/API_S3DataSourceConfiguration.html#Kendra-Type-S3DataSourceConfiguration-ExclusionPatterns). * `inclusion_patterns` - (Optional) A list of glob patterns for documents that should be indexed. If a document that matches an inclusion pattern also matches an exclusion pattern, the document is not indexed. Refer to [Inclusion Patterns for more examples](https://docs.aws.amazon.com/kendra/latest/dg/API_S3DataSourceConfiguration.html#Kendra-Type-S3DataSourceConfiguration-InclusionPatterns). * `inclusion_prefixes` - (Optional) A list of S3 prefixes for the documents that should be included in the index. -##### `access_control_list_configuration` +### access_control_list_configuration Block The `access_control_list_configuration` configuration block supports the following arguments: * `key_path` - (Optional) Path to the AWS S3 bucket that contains the ACL files. -##### `documents_metadata_configuration` +### documents_metadata_configuration Block The `documents_metadata_configuration` configuration block supports the following arguments: * `s3_prefix` - (Optional) A prefix used to filter metadata configuration files in the AWS S3 bucket. The S3 bucket might contain multiple metadata files. Use `s3_prefix` to include only the desired metadata files. -#### `web_crawler_configuration` +### web_crawler_configuration Block The `web_crawler_configuration` configuration block supports the following arguments: -* `authentication_configuration` - (Optional) A block with the configuration information required to connect to websites using authentication. You can connect to websites using basic authentication of user name and password. You use a secret in AWS Secrets Manager to store your authentication credentials. You must provide the website host name and port number. 
For example, the host name of `https://a.example.com/page1.html` is `"a.example.com"` and the port is `443`, the standard port for HTTPS. [Detailed below](#authentication_configuration). +* `authentication_configuration` - (Optional) A block with the configuration information required to connect to websites using authentication. You can connect to websites using basic authentication of user name and password. You use a secret in AWS Secrets Manager to store your authentication credentials. You must provide the website host name and port number. For example, the host name of `https://a.example.com/page1.html` is `"a.example.com"` and the port is `443`, the standard port for HTTPS. [Detailed below](#authentication_configuration-block). * `crawl_depth` - (Optional) Specifies the number of levels in a website that you want to crawl. The first level begins from the website seed or starting point URL. For example, if a website has 3 levels – index level (i.e. seed in this example), sections level, and subsections level – and you are only interested in crawling information up to the sections level (i.e. levels 0-1), you can set your depth to 1. The default crawl depth is set to `2`. Minimum value of `0`. Maximum value of `10`. * `max_content_size_per_page_in_mega_bytes` - (Optional) The maximum size (in MB) of a webpage or attachment to crawl. Files larger than this size (in MB) are skipped/not crawled. The default maximum size of a webpage or attachment is set to `50` MB. Minimum value of `1.0e-06`. Maximum value of `50`. * `max_links_per_page` - (Optional) The maximum number of URLs on a webpage to include when crawling a website. This number is per webpage. As a website’s webpages are crawled, any URLs the webpages link to are also crawled. URLs on a webpage are crawled in order of appearance. The default maximum links per page is `100`. Minimum value of `1`. Maximum value of `1000`. 
* `max_urls_per_minute_crawl_rate` - (Optional) The maximum number of URLs crawled per website host per minute. The default maximum number of URLs crawled per website host per minute is `300`. Minimum value of `1`. Maximum value of `300`. -* `proxy_configuration` - (Optional) Configuration information required to connect to your internal websites via a web proxy. You must provide the website host name and port number. For example, the host name of `https://a.example.com/page1.html` is `"a.example.com"` and the port is `443`, the standard port for HTTPS. Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). [Detailed below](#proxy_configuration). +* `proxy_configuration` - (Optional) Configuration information required to connect to your internal websites via a web proxy. You must provide the website host name and port number. For example, the host name of `https://a.example.com/page1.html` is `"a.example.com"` and the port is `443`, the standard port for HTTPS. Web proxy credentials are optional and you can use them to connect to a web proxy server that requires basic authentication. To store web proxy credentials, you use a secret in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). [Detailed below](#proxy_configuration-block). * `url_exclusion_patterns` - (Optional) A list of regular expression patterns to exclude certain URLs to crawl. URLs that match the patterns are excluded from the index. URLs that don't match the patterns are included in the index. If a URL matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the URL file isn't included in the index. Array Members: Minimum number of `0` items. Maximum number of `100` items. 
Length Constraints: Minimum length of `1`. Maximum length of `150`. * `url_inclusion_patterns` - (Optional) A list of regular expression patterns to include certain URLs to crawl. URLs that match the patterns are included in the index. URLs that don't match the patterns are excluded from the index. If a URL matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the URL file isn't included in the index. Array Members: Minimum number of `0` items. Maximum number of `100` items. Length Constraints: Minimum length of `1`. Maximum length of `150`. -* `urls` - (Required) A block that specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. You can include website subdomains. You can list up to `100` seed URLs and up to `3` sitemap URLs. You can only crawl websites that use the secure communication protocol, Hypertext Transfer Protocol Secure (HTTPS). If you receive an error when crawling a website, it could be that the website is blocked from crawling. When selecting websites to index, you must adhere to the [Amazon Acceptable Use Policy](https://aws.amazon.com/aup/) and all other Amazon terms. Remember that you must only use Amazon Kendra Web Crawler to index your own webpages, or webpages that you have authorization to index. [Detailed below](#urls). +* `urls` - (Required) A block that specifies the seed or starting point URLs of the websites or the sitemap URLs of the websites you want to crawl. You can include website subdomains. You can list up to `100` seed URLs and up to `3` sitemap URLs. You can only crawl websites that use the secure communication protocol, Hypertext Transfer Protocol Secure (HTTPS). If you receive an error when crawling a website, it could be that the website is blocked from crawling. When selecting websites to index, you must adhere to the [Amazon Acceptable Use Policy](https://aws.amazon.com/aup/) and all other Amazon terms. 
Remember that you must only use Amazon Kendra Web Crawler to index your own webpages, or webpages that you have authorization to index. [Detailed below](#urls-block). -##### `authentication_configuration` +### authentication_configuration Block The `authentication_configuration` configuration block supports the following arguments: -* `basic_authentication` - (Optional) The list of configuration information that's required to connect to and crawl a website host using basic authentication credentials. The list includes the name and port number of the website host. Detailed below. +* `basic_authentication` - (Optional) The list of configuration information that's required to connect to and crawl a website host using basic authentication credentials. The list includes the name and port number of the website host. [Detailed below](#basic_authentication-block). + +### basic_authentication Block The `basic_authentication` configuration block supports the following arguments: @@ -406,7 +411,7 @@ The `basic_authentication` configuration block supports the following arguments: * `host` - (Required) The name of the website host you want to connect to using authentication credentials. For example, the host name of `https://a.example.com/page1.html` is `"a.example.com"`. * `port` - (Required) The port number of the website host you want to connect to using authentication credentials. For example, the port for `https://a.example.com/page1.html` is `443`, the standard port for HTTPS. -##### `proxy_configuration` +### proxy_configuration Block The `proxy_configuration` configuration block supports the following arguments: @@ -414,12 +419,14 @@ The `proxy_configuration` configuration block supports the following arguments: * `host` - (Required) The name of the website host you want to connect to via a web proxy server. For example, the host name of `https://a.example.com/page1.html` is `"a.example.com"`. 
* `port` - (Required) The port number of the website host you want to connect to via a web proxy server. For example, the port for `https://a.example.com/page1.html` is `443`, the standard port for HTTPS. -##### `urls` +### urls Block The `urls` configuration block supports the following arguments: -* `seed_url_configuration` - (Optional) A block that specifies the configuration of the seed or starting point URLs of the websites you want to crawl. You can choose to crawl only the website host names, or the website host names with subdomains, or the website host names with subdomains and other domains that the webpages link to. You can list up to `100` seed URLs. Detailed below. -* `site_maps_configuration` - (Optional) A block that specifies the configuration of the sitemap URLs of the websites you want to crawl. Only URLs belonging to the same website host names are crawled. You can list up to `3` sitemap URLs. Detailed below. +* `seed_url_configuration` - (Optional) A block that specifies the configuration of the seed or starting point URLs of the websites you want to crawl. You can choose to crawl only the website host names, or the website host names with subdomains, or the website host names with subdomains and other domains that the webpages link to. You can list up to `100` seed URLs. [Detailed below](#seed_url_configuration-block). +* `site_maps_configuration` - (Optional) A block that specifies the configuration of the sitemap URLs of the websites you want to crawl. Only URLs belonging to the same website host names are crawled. You can list up to `3` sitemap URLs. [Detailed below](#site_maps_configuration-block). + +### seed_url_configuration Block The `seed_url_configuration` configuration block supports the following arguments: @@ -429,55 +436,73 @@ The `seed_url_configuration` configuration block supports the following argument * `SUBDOMAINS` – crawl the website host names with subdomains. 
For example, if the seed URL is `"abc.example.com"`, then `"a.abc.example.com"` and `"b.abc.example.com"` are also crawled. * `EVERYTHING` – crawl the website host names with subdomains and other domains that the webpages link to. +### site_maps_configuration Block + The `site_maps_configuration` configuration block supports the following arguments: * `site_maps` - (Required) The list of sitemap URLs of the websites you want to crawl. The list can include a maximum of `3` sitemap URLs. -### `custom_document_enrichment_configuration` +### custom_document_enrichment_configuration Block The `custom_document_enrichment_configuration` configuration block supports the following arguments: -* `inline_configurations` - (Optional) Configuration information to alter document attributes or metadata fields and content when ingesting documents into Amazon Kendra. Minimum number of `0` items. Maximum number of `100` items. [Detailed below](#inline_configurations). -* `post_extraction_hook_configuration` - (Optional) A block that specifies the configuration information for invoking a Lambda function in AWS Lambda on the structured documents with their metadata and text extracted. You can use a Lambda function to apply advanced logic for creating, modifying, or deleting document metadata and content. For more information, see [Advanced data manipulation](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html#advanced-data-manipulation). [Detailed below](#hook_configuration). -* `pre_extraction_hook_configuration` - (Optional) Configuration information for invoking a Lambda function in AWS Lambda on the original or raw documents before extracting their metadata and text. You can use a Lambda function to apply advanced logic for creating, modifying, or deleting document metadata and content. For more information, see [Advanced data manipulation](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html#advanced-data-manipulation). 
[Detailed below](#hook_configuration). +* `inline_configurations` - (Optional) Configuration information to alter document attributes or metadata fields and content when ingesting documents into Amazon Kendra. Minimum number of `0` items. Maximum number of `100` items. [Detailed below](#inline_configurations-block). +* `post_extraction_hook_configuration` - (Optional) A block that specifies the configuration information for invoking a Lambda function in AWS Lambda on the structured documents with their metadata and text extracted. You can use a Lambda function to apply advanced logic for creating, modifying, or deleting document metadata and content. For more information, see [Advanced data manipulation](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html#advanced-data-manipulation). [Detailed below](#pre_extraction_hook_configuration-and-post_extraction_hook_configuration-blocks). +* `pre_extraction_hook_configuration` - (Optional) Configuration information for invoking a Lambda function in AWS Lambda on the original or raw documents before extracting their metadata and text. You can use a Lambda function to apply advanced logic for creating, modifying, or deleting document metadata and content. For more information, see [Advanced data manipulation](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html#advanced-data-manipulation). [Detailed below](#pre_extraction_hook_configuration-and-post_extraction_hook_configuration-blocks). * `role_arn` - (Optional) The Amazon Resource Name (ARN) of a role with permission to run `pre_extraction_hook_configuration` and `post_extraction_hook_configuration` for altering document metadata and content during the document ingestion process. For more information, see [IAM roles for Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/iam-roles.html). 
-#### `inline_configurations` +### inline_configurations Block The `inline_configurations` configuration block supports the following arguments: -* `condition` - (Optional) Configuration of the condition used for the target document attribute or metadata field when ingesting documents into Amazon Kendra. See [Document Attribute Condition](#document-attribute-condition). +* `condition` - (Optional) Configuration of the condition used for the target document attribute or metadata field when ingesting documents into Amazon Kendra. See [condition](#condition-block). * `document_content_deletion` - (Optional) `TRUE` to delete content if the condition used for the target attribute is met. -* `target` - (Optional) Configuration of the target document attribute or metadata field when ingesting documents into Amazon Kendra. You can also include a value. [Detailed below](#target). +* `target` - (Optional) Configuration of the target document attribute or metadata field when ingesting documents into Amazon Kendra. You can also include a value. [Detailed below](#target-block). + +### condition Block -##### `target` +The `condition` configuration block supports the following arguments: + +* `condition_document_attribute_key` - (Required) The identifier of the document attribute used for the condition. For example, `_source_uri` could be an identifier for the attribute or metadata field that contains source URIs associated with the documents. Amazon Kendra currently does not support `_document_body` as an attribute key used for the condition. +* `condition_on_value` - (Optional) The value used by the operator. For example, you can specify the value 'financial' for strings in the `_source_uri` field that partially match or contain this value. See [condition_on_value](#condition_on_value-block). +* `operator` - (Required) The condition operator. For example, you can use `Contains` to partially match a string.
Valid Values: `GreaterThan` | `GreaterThanOrEquals` | `LessThan` | `LessThanOrEquals` | `Equals` | `NotEquals` | `Contains` | `NotContains` | `Exists` | `NotExists` | `BeginsWith`. + +### target Block The `target` configuration block supports the following arguments: * `target_document_attribute_key` - (Optional) The identifier of the target document attribute or metadata field. For example, 'Department' could be an identifier for the target attribute or metadata field that includes the department names associated with the documents. -* `target_document_attribute_value` - (Optional) The target value you want to create for the target attribute. For example, 'Finance' could be the target value for the target attribute key 'Department'. -See [Document Attribute Value](#document-attribute-value). +* `target_document_attribute_value` - (Optional) The target value you want to create for the target attribute. For example, 'Finance' could be the target value for the target attribute key 'Department'. See [target_document_attribute_value](#target_document_attribute_value-block). * `target_document_attribute_value_deletion` - (Optional) `TRUE` to delete the existing target value for your specified target attribute key. You cannot create a target value and set this to `TRUE`. To create a target value (`TargetDocumentAttributeValue`), set this to `FALSE`. -#### `hook_configuration` +### target_document_attribute_value Block + +The `target_document_attribute_value` configuration block supports the following arguments: + +* `date_value` - (Optional) A date expressed as an ISO 8601 string. It is important for the time zone to be included in the ISO 8601 date-time format. As of this writing only UTC is supported. For example, `2012-03-25T12:30:10+00:00`. +* `long_value` - (Optional) A long integer value. +* `string_list_value` - (Optional) A list of strings. +* `string` - (Optional) A string, such as "department".
+ +### pre_extraction_hook_configuration and post_extraction_hook_configuration Blocks -The `hook_configuration` configuration block supports the following arguments: +The `pre_extraction_hook_configuration` and `post_extraction_hook_configuration` configuration blocks each support the following arguments: -* `invocation_condition` - (Optional) A block that specifies the condition used for when a Lambda function should be invoked. For example, you can specify a condition that if there are empty date-time values, then Amazon Kendra should invoke a function that inserts the current date-time. See [Document Attribute Condition](#document-attribute-condition). +* `invocation_condition` - (Optional) A block that specifies the condition used for when a Lambda function should be invoked. For example, you can specify a condition that if there are empty date-time values, then Amazon Kendra should invoke a function that inserts the current date-time. See [invocation_condition](#invocation_condition-block). * `lambda_arn` - (Required) The Amazon Resource Name (ARN) of a Lambda Function that can manipulate your document metadata fields or attributes and content. * `s3_bucket` - (Required) Stores the original, raw documents or the structured, parsed documents before and after altering them. For more information, see [Data contracts for Lambda functions](https://docs.aws.amazon.com/kendra/latest/dg/custom-document-enrichment.html#cde-data-contracts-lambda). -#### Document Attribute Condition +### invocation_condition Block -The `condition` and `invocation_condition` configuration blocks supports the following arguments: +The `invocation_condition` configuration block supports the following arguments: * `condition_document_attribute_key` - (Required) The identifier of the document attribute used for the condition. For example, `_source_uri` could be an identifier for the attribute or metadata field that contains source URIs associated with the documents.
Amazon Kendra currently does not support `_document_body` as an attribute key used for the condition. -* `condition_on_value` - (Optional) The value used by the operator. For example, you can specify the value 'financial' for strings in the `_source_uri` field that partially match or contain this value. See [Document Attribute Value](#document-attribute-value). +* `condition_on_value` - (Optional) The value used by the operator. For example, you can specify the value 'financial' for strings in the `_source_uri` field that partially match or contain this value. See [condition_on_value](#condition_on_value-block). * `operator` - (Required) The condition operator. For example, you can use `Contains` to partially match a string. Valid Values: `GreaterThan` | `GreaterThanOrEquals` | `LessThan` | `LessThanOrEquals` | `Equals` | `NotEquals` | `Contains` | `NotContains` | `Exists` | `NotExists` | `BeginsWith`. -#### Document Attribute Value +### condition_on_value Block -The `condition_on_value` and `target_document_attribute_value` configuration blocks supports the following arguments: +The `condition_on_value` configuration block supports the following arguments: * `date_value` - (Optional) A date expressed as an ISO 8601 string. It is important for the time zone to be included in the ISO 8601 date-time format. As of this writing only UTC is supported. For example, `2012-03-25T12:30:10+00:00`. * `long_value` - (Optional) A long integer value.
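Putting the blocks above together, the following sketch removes an existing 'Department' attribute during ingestion whenever it is present. The index, role, and data source references are hypothetical placeholders, not values from this changeset:

```terraform
resource "aws_kendra_data_source" "example" {
  index_id = aws_kendra_index.example.id
  name     = "example"
  type     = "CUSTOM"
  role_arn = aws_iam_role.example.arn

  custom_document_enrichment_configuration {
    role_arn = aws_iam_role.example.arn

    inline_configurations {
      # Delete any existing 'Department' value when the attribute is present.
      condition {
        condition_document_attribute_key = "Department"
        operator                         = "Exists"
      }

      target {
        target_document_attribute_key            = "Department"
        target_document_attribute_value_deletion = true
      }
    }
  }
}
```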
diff --git a/website/docs/r/kendra_index.html.markdown b/website/docs/r/kendra_index.html.markdown index c4f9e644348..ff0bac87db7 100644 --- a/website/docs/r/kendra_index.html.markdown +++ b/website/docs/r/kendra_index.html.markdown @@ -55,10 +55,25 @@ resource "aws_kendra_index" "example" { } ``` +### With user group resolution configuration + +```terraform +resource "aws_kendra_index" "example" { + name = "example" + role_arn = aws_iam_role.this.arn + + user_group_resolution_configuration { + user_group_resolution_mode = "AWS_SSO" + } +} +``` + ### With Document Metadata Configuration Updates #### Specifying the predefined elements +Refer to [Amazon Kendra documentation on built-in document fields](https://docs.aws.amazon.com/kendra/latest/dg/hiw-index.html#index-reserved-fields) for more information. + ```terraform resource "aws_kendra_index" "example" { name = "example" @@ -232,6 +247,21 @@ resource "aws_kendra_index" "example" { } } + document_metadata_configuration_updates { + name = "_tenant_id" + type = "STRING_VALUE" + search { + displayable = false + facetable = false + searchable = false + sortable = true + } + relevance { + importance = 1 + values_importance_map = {} + } + } + document_metadata_configuration_updates { name = "_version" type = "STRING_VALUE" @@ -441,6 +471,21 @@ resource "aws_kendra_index" "example" { } } + document_metadata_configuration_updates { + name = "_tenant_id" + type = "STRING_VALUE" + search { + displayable = false + facetable = false + searchable = false + sortable = true + } + relevance { + importance = 1 + values_importance_map = {} + } + } + document_metadata_configuration_updates { name = "_version" type = "STRING_VALUE" diff --git a/website/docs/r/kendra_query_suggestions_block_list.html.markdown b/website/docs/r/kendra_query_suggestions_block_list.html.markdown index 02d9c818f75..e17af83db5a 100644 --- a/website/docs/r/kendra_query_suggestions_block_list.html.markdown +++ 
b/website/docs/r/kendra_query_suggestions_block_list.html.markdown @@ -8,7 +8,7 @@ description: |- # Resource: aws_kendra_query_suggestions_block_list -Terraform resource for managing an AWS Kendra block list used for query suggestions for an index. +Use the `aws_kendra_query_suggestions_block_list` resource to manage an AWS Kendra block list used for query suggestions for an index. ## Example Usage @@ -35,32 +35,32 @@ resource "aws_kendra_query_suggestions_block_list" "example" { The following arguments are required: -* `index_id`- (Required, Forces new resource) The identifier of the index for a block list. -* `name` - (Required) The name for the block list. -* `role_arn` - (Required) The IAM (Identity and Access Management) role used to access the block list text file in S3. -* `source_s3_path` - (Required) The S3 path where your block list text file sits in S3. Detailed below. +* `index_id` - (Required, Forces New Resource) Identifier of the index for a block list. +* `name` - (Required) Name for the block list. +* `role_arn` - (Required) IAM (Identity and Access Management) role used to access the block list text file in S3. +* `source_s3_path` - (Required) S3 path where your block list text file is located. See details below. The `source_s3_path` configuration block supports the following arguments: -* `bucket` - (Required) The name of the S3 bucket that contains the file. -* `key` - (Required) The name of the file. +* `bucket` - (Required) Name of the S3 bucket that contains the file. +* `key` - (Required) Name of the file. The following arguments are optional: -* `description` - (Optional) The description for a block list. -* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+* `description` - (Optional) Description for a block list. +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block), tags with matching keys will overwrite those defined at the provider-level. -## Attributes Reference +## Attribute Reference -In addition to all arguments above, the following attributes are exported: +This resource exports the following attributes in addition to the arguments above: * `arn` - ARN of the block list. -* `query_suggestions_block_list_id` - The unique indentifier of the block list. -* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). +* `query_suggestions_block_list_id` - Unique identifier of the block list. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider's [default_tags configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). ## Timeouts -[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): +Configuration options for operation timeouts can be found [here](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts). 
* `create` - (Default `30m`) * `update` - (Default `30m`) @@ -68,7 +68,7 @@ In addition to all arguments above, the following attributes are exported: ## Import -`aws_kendra_query_suggestions_block_list` can be imported using the unique identifiers of the block list and index separated by a slash (`/`), e.g., +The `aws_kendra_query_suggestions_block_list` resource can be imported using the unique identifiers of the block list and index separated by a slash (`/`), for example: ``` $ terraform import aws_kendra_query_suggestions_block_list.example blocklist-123456780/idx-8012925589 diff --git a/website/docs/r/keyspaces_table.html.markdown b/website/docs/r/keyspaces_table.html.markdown index 0fa5b97e1fd..194361a8b6b 100644 --- a/website/docs/r/keyspaces_table.html.markdown +++ b/website/docs/r/keyspaces_table.html.markdown @@ -42,6 +42,7 @@ The following arguments are required: The following arguments are optional: * `capacity_specification` - (Optional) Specifies the read/write throughput capacity mode for the table. +* `client_side_timestamps` - (Optional) Enables client-side timestamps for the table. By default, the setting is disabled. * `comment` - (Optional) A description of the table. * `default_time_to_live` - (Optional) The default Time to Live setting in seconds for the table. More information can be found in the [Developer Guide](https://docs.aws.amazon.com/keyspaces/latest/devguide/TTL-how-it-works.html#ttl-howitworks_default_ttl). * `encryption_specification` - (Optional) Specifies how the encryption key for encryption at rest is managed for the table. More information can be found in the [Developer Guide](https://docs.aws.amazon.com/keyspaces/latest/devguide/EncryptionAtRest.html). @@ -56,6 +57,10 @@ The `capacity_specification` object takes the following arguments: * `throughput_mode` - (Optional) The read/write throughput capacity mode for a table. Valid values: `PAY_PER_REQUEST`, `PROVISIONED`. The default value is `PAY_PER_REQUEST`. 
* `write_capacity_units` - (Optional) The throughput capacity specified for write operations defined in write capacity units (WCUs). +The `client_side_timestamps` object takes the following arguments: + +* `status` - (Required) Shows how to enable client-side timestamps settings for the specified table. Valid values: `ENABLED`. + The `comment` object takes the following arguments: * `message` - (Required) A description of the table. diff --git a/website/docs/r/kinesis_firehose_delivery_stream.html.markdown b/website/docs/r/kinesis_firehose_delivery_stream.html.markdown index 0bbfe569316..dda84090332 100644 --- a/website/docs/r/kinesis_firehose_delivery_stream.html.markdown +++ b/website/docs/r/kinesis_firehose_delivery_stream.html.markdown @@ -152,6 +152,44 @@ resource "aws_kinesis_firehose_delivery_stream" "extended_s3_stream" { } ``` +Multiple Dynamic Partitioning Keys (maximum of 50) can be added by comma separating the `parameter_value`. + +The following example adds the Dynamic Partitioning Keys: `store_id` and `customer_id` to the S3 prefix. 
+ +```terraform +resource "aws_kinesis_firehose_delivery_stream" "extended_s3_stream" { + name = "terraform-kinesis-firehose-extended-s3-test-stream" + destination = "extended_s3" + extended_s3_configuration { + role_arn = aws_iam_role.firehose_role.arn + bucket_arn = aws_s3_bucket.bucket.arn + buffering_size = 64 + # https://docs.aws.amazon.com/firehose/latest/dev/dynamic-partitioning.html + dynamic_partitioning_configuration { + enabled = "true" + } + # Example prefix using partitionKeyFromQuery, applicable to JQ processor + prefix = "data/store_id=!{partitionKeyFromQuery:store_id}/customer_id=!{partitionKeyFromQuery:customer_id}/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/" + error_output_prefix = "errors/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/!{firehose:error-output-type}/" + processing_configuration { + enabled = "true" + # JQ processor example + processors { + type = "MetadataExtraction" + parameters { + parameter_name = "JsonParsingEngine" + parameter_value = "JQ-1.6" + } + parameters { + parameter_name = "MetadataExtractionQuery" + parameter_value = "{store_id:.store_id,customer_id:.customer_id}" + } + } + } + } +} +``` + ### Redshift Destination ```terraform @@ -661,7 +699,7 @@ The `request_configuration` object supports the following: The `common_attributes` array objects support the following: * `name` - (Required) The name of the HTTP endpoint common attribute. -* `value` - (Optional) The value of the HTTP endpoint common attribute. +* `value` - (Required) The value of the HTTP endpoint common attribute. 
The `vpc_config` object supports the following: diff --git a/website/docs/r/lambda_event_source_mapping.html.markdown b/website/docs/r/lambda_event_source_mapping.html.markdown index c013c6e17ae..e1c6dc44cc2 100644 --- a/website/docs/r/lambda_event_source_mapping.html.markdown +++ b/website/docs/r/lambda_event_source_mapping.html.markdown @@ -161,7 +161,7 @@ resource "aws_lambda_event_source_mapping" "example" { * `maximum_record_age_in_seconds`: - (Optional) The maximum age of a record that Lambda sends to a function for processing. Only available for stream sources (DynamoDB and Kinesis). Must be either -1 (forever, and the default value) or between 60 and 604800 (inclusive). * `maximum_retry_attempts`: - (Optional) The maximum number of times to retry when the function returns an error. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of -1 (forever), maximum of 10000. * `parallelization_factor`: - (Optional) The number of batches to process from each shard concurrently. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of 1, maximum of 10. -* `queues` - (Optional) The name of the Amazon MQ broker destination queue to consume. Only available for MQ sources. A single queue name must be specified. +* `queues` - (Optional) The name of the Amazon MQ broker destination queue to consume. Only available for MQ sources. The list must contain exactly one queue name. * `scaling_config` - (Optional) Scaling configuration of the event source. Only available for SQS queues. Detailed below. * `self_managed_event_source`: - (Optional) For Self Managed Kafka sources, the location of the self managed cluster. If set, configuration must also include `source_access_configuration`. Detailed below. * `self_managed_kafka_event_source_config` - (Optional) Additional configuration block for Self Managed Kafka sources. Incompatible with "event_source_arn" and "amazon_managed_kafka_event_source_config". Detailed below. 
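To illustrate the single-queue constraint on `queues`, a minimal Amazon MQ mapping might look like the following sketch. The broker, function, and secret references are hypothetical placeholders:

```terraform
resource "aws_lambda_event_source_mapping" "example" {
  batch_size       = 10
  event_source_arn = aws_mq_broker.example.arn
  function_name    = aws_lambda_function.example.arn

  # The list must contain exactly one queue name for MQ sources.
  queues = ["ExampleQueue"]

  source_access_configuration {
    type = "BASIC_AUTH"
    uri  = aws_secretsmanager_secret_version.example.arn
  }
}
```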
diff --git a/website/docs/r/lambda_function.html.markdown b/website/docs/r/lambda_function.html.markdown index 56c842bdf57..a2ad2da3bf4 100644 --- a/website/docs/r/lambda_function.html.markdown +++ b/website/docs/r/lambda_function.html.markdown @@ -59,7 +59,7 @@ resource "aws_lambda_function" "test_lambda" { source_code_hash = data.archive_file.lambda.output_base64sha256 - runtime = "nodejs16.x" + runtime = "nodejs18.x" environment { variables = { @@ -112,7 +112,7 @@ resource "aws_lambda_function" "test_lambda" { function_name = "lambda_function_name" role = aws_iam_role.iam_for_lambda.arn handler = "index.test" - runtime = "nodejs14.x" + runtime = "nodejs18.x" ephemeral_storage { size = 10240 # Min 512 MB and the Max 10240 MB @@ -274,8 +274,8 @@ The following arguments are optional: * `package_type` - (Optional) Lambda deployment package type. Valid values are `Zip` and `Image`. Defaults to `Zip`. * `publish` - (Optional) Whether to publish creation/change as new Lambda Function Version. Defaults to `false`. * `reserved_concurrent_executions` - (Optional) Amount of reserved concurrent executions for this lambda function. A value of `0` disables lambda from being triggered and `-1` removes any concurrency limitations. Defaults to Unreserved Concurrency Limits `-1`. See [Managing Concurrency][9] -* `replace_security_groups_on_destroy` - (Optional) Whether to replace the security groups on associated lambda network interfaces upon destruction. Removing these security groups from orphaned network interfaces can speed up security group deletion times by avoiding a dependency on AWS's internal cleanup operations. By default, the ENI security groups will be replaced with the `default` security group in the function's VPC. Set the `replacement_security_group_ids` attribute to use a custom list of security groups for replacement. 
-* `replacement_security_group_ids` - (Optional) List of security group IDs to assign to orphaned Lambda function network interfaces upon destruction. `replace_security_groups_on_destroy` must be set to `true` to use this attribute. +* `replace_security_groups_on_destroy` - (Optional, **Deprecated**) **AWS no longer supports this operation. This attribute now has no effect and will be removed in a future major version.** Whether to replace the security groups on associated lambda network interfaces upon destruction. Removing these security groups from orphaned network interfaces can speed up security group deletion times by avoiding a dependency on AWS's internal cleanup operations. By default, the ENI security groups will be replaced with the `default` security group in the function's VPC. Set the `replacement_security_group_ids` attribute to use a custom list of security groups for replacement. +* `replacement_security_group_ids` - (Optional, **Deprecated**) List of security group IDs to assign to orphaned Lambda function network interfaces upon destruction. `replace_security_groups_on_destroy` must be set to `true` to use this attribute. * `runtime` - (Optional) Identifier of the function's runtime. See [Runtimes][6] for valid values. * `s3_bucket` - (Optional) S3 bucket location containing the function's deployment package. This bucket must reside in the same AWS region where you are creating the Lambda function. Exactly one of `filename`, `image_uri`, or `s3_bucket` must be specified. When `s3_bucket` is set, `s3_key` is required. * `s3_key` - (Optional) S3 key of an object containing the function's deployment package. When `s3_bucket` is set, `s3_key` is required. 
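A minimal sketch of the S3-based deployment path described by `s3_bucket` and `s3_key` follows; the bucket name, object key, and role reference are hypothetical placeholders:

```terraform
resource "aws_lambda_function" "from_s3" {
  function_name = "example"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "index.handler"
  runtime       = "nodejs18.x"

  # Exactly one of filename, image_uri, or s3_bucket may be specified;
  # when s3_bucket is set, s3_key is required.
  s3_bucket = "my-deployment-bucket" # hypothetical bucket in the same region
  s3_key    = "lambda/example.zip"   # hypothetical object key
}
```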
diff --git a/website/docs/r/lambda_invocation.html.markdown b/website/docs/r/lambda_invocation.html.markdown index 21f2baf30a8..0ae51de64aa 100644 --- a/website/docs/r/lambda_invocation.html.markdown +++ b/website/docs/r/lambda_invocation.html.markdown @@ -10,7 +10,7 @@ description: |- Use this resource to invoke a lambda function. The lambda function is invoked with the [RequestResponse](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax) invocation type. -~> **NOTE:** This resource _only_ invokes the function when the arguments call for a create or update. In other words, after an initial invocation on _apply_, if the arguments do not change, a subsequent _apply_ does not invoke the function again. To dynamically invoke the function, see the `triggers` example below. To always invoke a function on each _apply_, see the [`aws_lambda_invocation`](/docs/providers/aws/d/lambda_invocation.html) data source. +~> **NOTE:** By default this resource _only_ invokes the function when the arguments call for a create or replace. In other words, after an initial invocation on _apply_, if the arguments do not change, a subsequent _apply_ does not invoke the function again. To dynamically invoke the function, see the `triggers` example below. To always invoke a function on each _apply_, see the [`aws_lambda_invocation`](/docs/providers/aws/d/lambda_invocation.html) data source. To invoke the lambda function when the terraform resource is updated and deleted, see the [CRUD Lifecycle Scope](#crud-lifecycle-scope) example below. ~> **NOTE:** If you get a `KMSAccessDeniedException: Lambda was unable to decrypt the environment variables because KMS access was denied` error when invoking an [`aws_lambda_function`](/docs/providers/aws/r/lambda_function.html) with environment variables, the IAM role associated with the function may have been deleted and recreated _after_ the function was created. 
You can fix the problem two ways: 1) updating the function's role to another role and then updating it back again to the recreated role, or 2) by using Terraform to `taint` the function and `apply` your configuration again to recreate the function. (When you create a function, Lambda grants permissions on the KMS key to the function's IAM role. If the IAM role is recreated, the grant is no longer valid. Changing the function's role or recreating the function causes Lambda to update the grant.) @@ -52,6 +52,73 @@ resource "aws_lambda_invocation" "example" { } ``` +### CRUD Lifecycle Scope + +```terraform +resource "aws_lambda_invocation" "example" { + function_name = aws_lambda_function.lambda_function_test.function_name + + input = jsonencode({ + key1 = "value1" + key2 = "value2" + }) + + lifecycle_scope = "CRUD" +} +``` + +~> **NOTE:** `lifecycle_scope = "CRUD"` will inject a key `tf` in the input event to pass lifecycle information! This allows the lambda function to handle different lifecycle transitions uniquely. If you need to use a key `tf` in your own input JSON, the default key name can be overridden with the `terraform_key` argument. + +The key `tf` gets added with subkeys: + +* `action` - Action Terraform performs on the resource. Values are `create`, `update`, or `delete`. +* `prev_input` - Input JSON payload from the previous invocation. This can be used to handle update and delete events. 
+ +When the resource from the example above is created, the Lambda will get the following JSON payload: + +```json +{ + "key1": "value1", + "key2": "value2", + "tf": { + "action": "create", + "prev_input": null + } +} +``` + +If the input value of `key1` changes to "valueB", then the lambda will be invoked again with the following JSON payload: + +```json +{ + "key1": "valueB", + "key2": "value2", + "tf": { + "action": "update", + "prev_input": { + "key1": "value1", + "key2": "value2" + } + } +} +``` + +When the invocation resource is removed, the final invocation will have the following JSON payload: + +```json +{ + "key1": "valueB", + "key2": "value2", + "tf": { + "action": "delete", + "prev_input": { + "key1": "valueB", + "key2": "value2" + } + } +} +``` + ## Argument Reference The following arguments are required: @@ -61,7 +128,9 @@ The following arguments are optional: +* `lifecycle_scope` - (Optional) Lifecycle scope of the resource to manage. Valid values are `CREATE_ONLY` and `CRUD`. Defaults to `CREATE_ONLY`. `CREATE_ONLY` will invoke the function only on creation or replacement. `CRUD` will invoke the function on each lifecycle event, and augment the input JSON payload with additional lifecycle information. * `qualifier` - (Optional) Qualifier (i.e., version) of the lambda function. Defaults to `$LATEST`. +* `terraform_key` - (Optional) The JSON key used to store lifecycle information in the input JSON payload. Defaults to `tf`. This additional key is only included when `lifecycle_scope` is set to `CRUD`. * `triggers` - (Optional) Map of arbitrary keys and values that, when changed, will trigger a re-invocation. To force a re-invocation without changing these keys/values, use the [`terraform taint` command](https://www.terraform.io/docs/commands/taint.html).
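The `triggers` argument can be keyed off another resource so the function is re-invoked whenever that resource changes. The sketch below (with hypothetical resource names) hashes the function's environment, forcing a re-invocation whenever the environment variables change:

```terraform
resource "aws_lambda_invocation" "example" {
  function_name = aws_lambda_function.lambda_function_test.function_name

  # Re-invoke whenever the function's environment changes.
  triggers = {
    redeployment = sha1(jsonencode([
      aws_lambda_function.lambda_function_test.environment
    ]))
  }

  input = jsonencode({
    key1 = "value1"
    key2 = "value2"
  })
}
```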
## Attributes Reference diff --git a/website/docs/r/lambda_layer_version_permission.html.markdown b/website/docs/r/lambda_layer_version_permission.html.markdown index 94211f1709e..f3e9f760af2 100644 --- a/website/docs/r/lambda_layer_version_permission.html.markdown +++ b/website/docs/r/lambda_layer_version_permission.html.markdown @@ -12,6 +12,8 @@ Provides a Lambda Layer Version Permission resource. It allows you to share you For information about Lambda Layer Permissions and how to use them, see [Using Resource-based Policies for AWS Lambda][1] +~> **NOTE:** Setting `skip_destroy` to `true` means that the AWS Provider will _not_ destroy any layer version permission, even when running `terraform destroy`. Layer version permissions are thus intentional dangling resources that are _not_ managed by Terraform and may incur extra expense in your AWS account. + ## Example Usage ```terraform @@ -34,6 +36,7 @@ The following arguments are supported: * `principal` - (Required) AWS account ID which should be able to use your Lambda Layer. `*` can be used here, if you want to share your Lambda Layer widely. * `statement_id` - (Required) The name of Lambda Layer Permission, for example `dev-account` - human readable note about what is this permission for. * `version_number` (Required) Version of Lambda Layer, which you want to grant access to. Note: permissions only apply to a single version of a layer. +* `skip_destroy` - (Optional) Whether to retain the old version of a previously deployed Lambda Layer. Default is `false`. When this is not set to `true`, changing any of `compatible_architectures`, `compatible_runtimes`, `description`, `filename`, `layer_name`, `license_info`, `s3_bucket`, `s3_key`, `s3_object_version`, or `source_code_hash` forces deletion of the existing layer version and creation of a new layer version. 
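A sketch of the `skip_destroy` behavior described above; the layer ARN and account ID are hypothetical placeholders:

```terraform
resource "aws_lambda_layer_version_permission" "example" {
  layer_name     = "arn:aws:lambda:us-west-2:123456789012:layer:example" # hypothetical layer ARN
  version_number = 1
  principal      = "111111111111" # hypothetical account ID
  action         = "lambda:GetLayerVersion"
  statement_id   = "dev-account"

  # Retain the permission on destroy; it becomes an unmanaged dangling resource.
  skip_destroy = true
}
```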
## Attributes Reference diff --git a/website/docs/r/lambda_permission.html.markdown b/website/docs/r/lambda_permission.html.markdown index 87d8f75f8fe..859f9570d49 100644 --- a/website/docs/r/lambda_permission.html.markdown +++ b/website/docs/r/lambda_permission.html.markdown @@ -12,6 +12,8 @@ Gives an external source (like an EventBridge Rule, SNS, or S3) permission to ac ## Example Usage +### Basic Usage + ```terraform resource "aws_lambda_permission" "allow_cloudwatch" { statement_id = "AllowExecutionFromCloudWatch" @@ -58,7 +60,7 @@ resource "aws_iam_role" "iam_for_lambda" { } ``` -## Usage with SNS +### With SNS ```terraform resource "aws_lambda_permission" "with_sns" { @@ -108,7 +110,7 @@ resource "aws_iam_role" "default" { } ``` -## Specify Lambda permissions for API Gateway REST API +### With API Gateway REST API ```terraform resource "aws_api_gateway_rest_api" "MyDemoAPI" { @@ -128,7 +130,7 @@ resource "aws_lambda_permission" "lambda_permission" { } ``` -## Usage with CloudWatch log group +### With CloudWatch Log Group ```terraform resource "aws_lambda_permission" "logging" { @@ -177,7 +179,7 @@ resource "aws_iam_role" "default" { } ``` -## Example function URL cross-account invoke policy +### With Cross-Account Invocation Policy ```terraform resource "aws_lambda_function_url" "url" { @@ -204,6 +206,25 @@ resource "aws_lambda_permission" "url" { } ``` +### With `replace_triggered_by` Lifecycle Configuration + +If omitting the `qualifier` argument (which forces re-creation each time a function version is published), a `lifecycle` block can be used to ensure permissions are re-applied on any change to the underlying function. 
+ +```terraform +resource "aws_lambda_permission" "logging" { + action = "lambda:InvokeFunction" + function_name = aws_lambda_function.example.function_name + principal = "events.amazonaws.com" + source_arn = "arn:aws:events:eu-west-1:111122223333:rule/RunDaily" + + lifecycle { + replace_triggered_by = [ + aws_lambda_function.example + ] + } +} +``` + ## Argument Reference * `action` - (Required) The AWS Lambda action you want to allow in this statement. (e.g., `lambda:InvokeFunction`) diff --git a/website/docs/r/lambda_provisioned_concurrency_config.html.markdown b/website/docs/r/lambda_provisioned_concurrency_config.html.markdown index c3fab20022f..421a2b674fe 100644 --- a/website/docs/r/lambda_provisioned_concurrency_config.html.markdown +++ b/website/docs/r/lambda_provisioned_concurrency_config.html.markdown @@ -10,6 +10,8 @@ description: |- Manages a Lambda Provisioned Concurrency Configuration. +~> **NOTE:** Setting `skip_destroy` to `true` means that the AWS Provider will _not_ destroy a provisioned concurrency configuration, even when running `terraform destroy`. The configuration is thus an intentional dangling resource that is _not_ managed by Terraform and may incur extra expense in your AWS account. + ## Example Usage ### Alias Name @@ -40,11 +42,15 @@ The following arguments are required: * `provisioned_concurrent_executions` - (Required) Amount of capacity to allocate. Must be greater than or equal to `1`. * `qualifier` - (Required) Lambda Function version or Lambda Alias name. +The following arguments are optional: + +* `skip_destroy` - (Optional) Whether to retain the provisioned concurrency configuration upon destruction. Defaults to `false`. If set to `true`, the resource is simply removed from state instead. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: -* `id` - Lambda Function name and qualifier separated by a colon (`:`).
+* `id` - Lambda Function name and qualifier separated by a comma (`,`). ## Timeouts @@ -55,8 +61,8 @@ In addition to all arguments above, the following attributes are exported: ## Import -Lambda Provisioned Concurrency Configs can be imported using the `function_name` and `qualifier` separated by a colon (`:`), e.g., +A Lambda Provisioned Concurrency Configuration can be imported using the `function_name` and `qualifier` separated by a comma (`,`), e.g., ``` -$ terraform import aws_lambda_provisioned_concurrency_config.example my_function:production +$ terraform import aws_lambda_provisioned_concurrency_config.example my_function,production ``` diff --git a/website/docs/r/launch_template.html.markdown b/website/docs/r/launch_template.html.markdown index 3da2fcc9dd7..13ae4cd081c 100644 --- a/website/docs/r/launch_template.html.markdown +++ b/website/docs/r/launch_template.html.markdown @@ -415,11 +415,11 @@ The metadata options for the instances. The `metadata_options` block supports the following: -* `http_endpoint` - (Optional) Whether the metadata service is available. Can be `enabled` or `disabled`. -* `http_protocol_ipv6` - (Optional) Enables or disables the IPv6 endpoint for the instance metadata service. (Default: `disabled`). -* `http_put_response_hop_limit` - (Optional) The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel. Can be an integer from `1` to `64`. -* `http_tokens` - (Optional) Whether or not the metadata service requires session tokens, also referred to as _Instance Metadata Service Version 2 (IMDSv2)_. Can be `optional` or `required`. -* `instance_metadata_tags` - (Optional) Enables or disables access to instance tags from the instance metadata service. (Default: `disabled`). +* `http_endpoint` - (Optional) Whether the metadata service is available. Can be `"enabled"` or `"disabled"`. (Default: `"enabled"`). 
+* `http_tokens` - (Optional) Whether or not the metadata service requires session tokens, also referred to as _Instance Metadata Service Version 2 (IMDSv2)_. Can be `"optional"` or `"required"`. (Default: `"optional"`). +* `http_put_response_hop_limit` - (Optional) The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel. Can be an integer from `1` to `64`. (Default: `1`). +* `http_protocol_ipv6` - (Optional) Enables or disables the IPv6 endpoint for the instance metadata service. Can be `"enabled"` or `"disabled"`. +* `instance_metadata_tags` - (Optional) Enables or disables access to instance tags from the instance metadata service. Can be `"enabled"` or `"disabled"`. For more information, see the documentation on the [Instance Metadata Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html). diff --git a/website/docs/r/lb_target_group_attachment.html.markdown b/website/docs/r/lb_target_group_attachment.html.markdown index c44d4c7f678..494b4a2c77a 100644 --- a/website/docs/r/lb_target_group_attachment.html.markdown +++ b/website/docs/r/lb_target_group_attachment.html.markdown @@ -15,6 +15,8 @@ Provides the ability to register instances and containers with an Application Lo ## Example Usage +### Basic Usage + ```terraform resource "aws_lb_target_group_attachment" "test" { target_group_arn = aws_lb_target_group.test.arn @@ -31,7 +33,7 @@ resource "aws_instance" "test" { } ``` -## Usage with lambda +### Lambda Target ```terraform resource "aws_lambda_permission" "with_lb" { @@ -60,18 +62,21 @@ resource "aws_lb_target_group_attachment" "test" { ## Argument Reference -The following arguments are supported: +The following arguments are required: + +* `target_group_arn` - (Required) The ARN of the target group with which to register targets. +* `target_id` (Required) The ID of the target. 
This is the Instance ID for an instance, or the container ID for an ECS container. If the target type is `ip`, specify an IP address. If the target type is `lambda`, specify the Lambda function ARN. If the target type is `alb`, specify the ALB ARN. + +The following arguments are optional: -* `target_group_arn` - (Required) The ARN of the target group with which to register targets -* `target_id` (Required) The ID of the target. This is the Instance ID for an instance, or the container ID for an ECS container. If the target type is ip, specify an IP address. If the target type is lambda, specify the arn of lambda. If the target type is alb, specify the arn of alb. +* `availability_zone` - (Optional) The Availability Zone where the IP address of the target is to be registered. If the private IP address is outside of the VPC scope, this value must be set to `all`. * `port` - (Optional) The port on which targets receive traffic. -* `availability_zone` - (Optional) The Availability Zone where the IP address of the target is to be registered. If the private ip address is outside of the VPC scope, this value must be set to 'all'. ## Attributes Reference In addition to all arguments above, the following attributes are exported: -* `id` - A unique identifier for the attachment +* `id` - A unique identifier for the attachment. ## Import diff --git a/website/docs/r/lightsail_instance.html.markdown b/website/docs/r/lightsail_instance.html.markdown index 146b9029c9a..c9b1ae4bdbc 100644 --- a/website/docs/r/lightsail_instance.html.markdown +++ b/website/docs/r/lightsail_instance.html.markdown @@ -150,7 +150,6 @@ In addition to all arguments above, the following attributes are exported: * `created_at` - The timestamp when the instance was created. * `cpu_count` - The number of vCPUs the instance has. * `ram_size` - The amount of RAM in GB on the instance (e.g., 1.0). -* `ipv6_address` - (**Deprecated**) The first IPv6 address of the Lightsail instance. 
Use `ipv6_addresses` attribute instead. * `ipv6_addresses` - List of IPv6 addresses for the Lightsail instance. * `private_ip_address` - The private IP address of the instance. * `public_ip_address` - The public IP address of the instance. diff --git a/website/docs/r/mq_broker.html.markdown b/website/docs/r/mq_broker.html.markdown index 970fdf40522..afb6fa2448a 100644 --- a/website/docs/r/mq_broker.html.markdown +++ b/website/docs/r/mq_broker.html.markdown @@ -146,6 +146,7 @@ The following arguments are required: * `console_access` - (Optional) Whether to enable access to the [ActiveMQ Web Console](http://activemq.apache.org/web-console.html) for the user. Applies to `engine_type` of `ActiveMQ` only. * `groups` - (Optional) List of groups (20 maximum) to which the ActiveMQ user belongs. Applies to `engine_type` of `ActiveMQ` only. * `password` - (Required) Password of the user. It must be 12 to 250 characters long, at least 4 unique characters, and must not contain commas. +* `replication_user` - (Optional) Whether to set the replication user. Defaults to `false`. * `username` - (Required) Username of the user. ~> **NOTE:** AWS currently does not support updating RabbitMQ users. Updates to users can only be made in the RabbitMQ UI.
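As a minimal sketch of the new `replication_user` argument (broker name, engine version, instance type, and credentials below are illustrative assumptions, not values from this changeset):

```terraform
resource "aws_mq_broker" "example" {
  broker_name        = "example"          # hypothetical name
  engine_type        = "ActiveMQ"
  engine_version     = "5.17.6"           # assumed supported version
  host_instance_type = "mq.m5.large"
  security_groups    = [aws_security_group.example.id]

  user {
    username         = "ExampleUser"
    password         = "ExamplePassword1234" # must be 12-250 chars, no commas
    replication_user = true                  # designate this user for replication
  }
}
```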
diff --git a/website/docs/r/msk_cluster.html.markdown b/website/docs/r/msk_cluster.html.markdown index 85a4190d4d1..7029e1e9d2b 100644 --- a/website/docs/r/msk_cluster.html.markdown +++ b/website/docs/r/msk_cluster.html.markdown @@ -84,9 +84,9 @@ resource "aws_iam_role" "firehose_role" { resource "aws_kinesis_firehose_delivery_stream" "test_stream" { name = "terraform-kinesis-firehose-msk-broker-logs-stream" - destination = "s3" + destination = "extended_s3" - s3_configuration { + extended_s3_configuration { role_arn = aws_iam_role.firehose_role.arn bucket_arn = aws_s3_bucket.bucket.arn } diff --git a/website/docs/r/networkfirewall_firewall_policy.html.markdown b/website/docs/r/networkfirewall_firewall_policy.html.markdown index ff82d96197d..7216826551d 100644 --- a/website/docs/r/networkfirewall_firewall_policy.html.markdown +++ b/website/docs/r/networkfirewall_firewall_policy.html.markdown @@ -103,7 +103,9 @@ The `stateful_engine_options` block supports the following argument: ~> **NOTE:** If the `STRICT_ORDER` rule order is specified, this firewall policy can only reference stateful rule groups that utilize `STRICT_ORDER`. -* `rule_order` - (Required) Indicates how to manage the order of stateful rule evaluation for the policy. Default value: `DEFAULT_ACTION_ORDER`. Valid values: `DEFAULT_ACTION_ORDER`, `STRICT_ORDER`. +* `rule_order` - Indicates how to manage the order of stateful rule evaluation for the policy. Default value: `DEFAULT_ACTION_ORDER`. Valid values: `DEFAULT_ACTION_ORDER`, `STRICT_ORDER`. + +* `stream_exception_policy` - Describes how to treat traffic which has broken midstream. Default value: `DROP`. Valid values: `DROP`, `CONTINUE`, `REJECT`. 
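A sketch combining the two `stateful_engine_options` arguments described above (the stateless default actions are standard required arguments of this resource, not part of this change):

```terraform
resource "aws_networkfirewall_firewall_policy" "example" {
  name = "example"

  firewall_policy {
    stateless_default_actions          = ["aws:pass"]
    stateless_fragment_default_actions = ["aws:drop"]

    stateful_engine_options {
      rule_order              = "STRICT_ORDER"
      stream_exception_policy = "REJECT" # drop broken-midstream traffic and reset the connection
    }
  }
}
```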
### Stateful Rule Group Reference diff --git a/website/docs/r/networkmanager_attachment_accepter.html.markdown b/website/docs/r/networkmanager_attachment_accepter.html.markdown index 2110a44f3b1..8b4d35bfbff 100644 --- a/website/docs/r/networkmanager_attachment_accepter.html.markdown +++ b/website/docs/r/networkmanager_attachment_accepter.html.markdown @@ -35,7 +35,7 @@ resource "aws_networkmanager_attachment_accepter" "test" { The following arguments are required: - `attachment_id` - (Required) The ID of the attachment. -- `attachment_type` - The type of attachment. Valid values can be found in the [AWS Documentation](https://docs.aws.amazon.com/networkmanager/latest/APIReference/API_ListAttachments.html#API_ListAttachments_RequestSyntax) +- `attachment_type` - (Required) The type of attachment. Valid values can be found in the [AWS Documentation](https://docs.aws.amazon.com/networkmanager/latest/APIReference/API_ListAttachments.html#API_ListAttachments_RequestSyntax) ## Attributes Reference diff --git a/website/docs/r/opensearch_domain.html.markdown b/website/docs/r/opensearch_domain.html.markdown index 586cfa3a348..b8a3fd18fd1 100644 --- a/website/docs/r/opensearch_domain.html.markdown +++ b/website/docs/r/opensearch_domain.html.markdown @@ -328,13 +328,16 @@ The following arguments are optional: * `cognito_options` - (Optional) Configuration block for authenticating dashboard with Cognito. Detailed below. * `domain_endpoint_options` - (Optional) Configuration block for domain endpoint HTTP(S) related options. Detailed below. * `ebs_options` - (Optional) Configuration block for EBS related options, may be required based on chosen [instance size](https://aws.amazon.com/opensearch-service/pricing/). Detailed below. -* `engine_version` - (Optional) Either `Elasticsearch_X.Y` or `OpenSearch_X.Y` to specify the engine version for the Amazon OpenSearch Service domain. For example, `OpenSearch_1.0` or `Elasticsearch_7.9`. 
See [Creating and managing Amazon OpenSearch Service domains](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/createupdatedomains.html#createdomains). Defaults to `OpenSearch_1.1`. +* `engine_version` - (Optional) Either `Elasticsearch_X.Y` or `OpenSearch_X.Y` to specify the engine version for the Amazon OpenSearch Service domain. For example, `OpenSearch_1.0` or `Elasticsearch_7.9`. + See [Creating and managing Amazon OpenSearch Service domains](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/createupdatedomains.html#createdomains). + Defaults to the latest version of OpenSearch. * `encrypt_at_rest` - (Optional) Configuration block for encrypt at rest options. Only available for [certain instance types](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/encryption-at-rest.html). Detailed below. * `log_publishing_options` - (Optional) Configuration block for publishing slow and application logs to CloudWatch Logs. This block can be declared multiple times, for each log_type, within the same resource. Detailed below. * `node_to_node_encryption` - (Optional) Configuration block for node-to-node encryption options. Detailed below. * `snapshot_options` - (Optional) Configuration block for snapshot related options. Detailed below. DEPRECATED. For domains running OpenSearch 5.3 and later, Amazon OpenSearch takes hourly automated snapshots, making this setting irrelevant. For domains running earlier versions, OpenSearch takes daily automated snapshots. * `tags` - (Optional) Map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `vpc_options` - (Optional) Configuration block for VPC related options.
Adding or removing this configuration forces a new resource ([documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/vpc.html)). Detailed below. +* `off_peak_window_options` - (Optional) Configuration block for the off-peak update window ([documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/off-peak.html)). Detailed below. ### advanced_security_options @@ -447,6 +450,16 @@ AWS documentation: [VPC Support for Amazon OpenSearch Service Domains](https://d * `security_group_ids` - (Optional) List of VPC Security Group IDs to be applied to the OpenSearch domain endpoints. If omitted, the default Security Group for the VPC will be used. * `subnet_ids` - (Required) List of VPC Subnet IDs for the OpenSearch domain endpoints to be created in. +### off_peak_window_options +​ +AWS documentation: [Off Peak Hours Support for Amazon OpenSearch Service Domains](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/off-peak.html) + +* `enabled` - (Optional) Whether to enable the off-peak update window. +* `off_peak_window` - (Optional) Configuration block for the off-peak window. + * `window_start_time` - (Optional) Start time of the 10-hour window for updates. + * `hours` - (Required) Starting hour of the 10-hour window for updates. + * `minutes` - (Required) Starting minute of the 10-hour window for updates. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: @@ -456,7 +469,7 @@ In addition to all arguments above, the following attributes are exported: * `domain_name` - Name of the OpenSearch domain. * `endpoint` - Domain-specific endpoint used to submit index, search, and data upload requests. * `dashboard_endpoint` - Domain-specific endpoint for Dashboard without https scheme. -* `kibana_endpoint` - Domain-specific endpoint for kibana without https scheme. OpenSearch Dashboards do not use Kibana, so this attribute will be **DEPRECATED** in a future version.
+* `kibana_endpoint` - (**Deprecated**) Domain-specific endpoint for kibana without https scheme. Use the `dashboard_endpoint` attribute instead. * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). * `vpc_options.0.availability_zones` - If the domain was created inside a VPC, the names of the availability zones the configured `subnet_ids` were created inside. * `vpc_options.0.vpc_id` - If the domain was created inside a VPC, the ID of the VPC. diff --git a/website/docs/r/opensearchserverless_access_policy.html.markdown b/website/docs/r/opensearchserverless_access_policy.html.markdown new file mode 100644 index 00000000000..10e52d5e5e0 --- /dev/null +++ b/website/docs/r/opensearchserverless_access_policy.html.markdown @@ -0,0 +1,73 @@ +--- +subcategory: "OpenSearch Serverless" +layout: "aws" +page_title: "AWS: aws_opensearchserverless_access_policy" +description: |- + Terraform resource for managing an AWS OpenSearch Serverless Access Policy. +--- + +# Resource: aws_opensearchserverless_access_policy + +Terraform resource for managing an AWS OpenSearch Serverless Access Policy. 
+ +## Example Usage + +### Basic Usage + +```terraform +data "aws_caller_identity" "current" {} +data "aws_partition" "current" {} + +resource "aws_opensearchserverless_access_policy" "test" { + name = "example" + type = "data" + policy = jsonencode([ + { + "Rules" : [ + { + "ResourceType" : "index", + "Resource" : [ + "index/books/*" + ], + "Permission" : [ + "aoss:CreateIndex", + "aoss:ReadDocument", + "aoss:UpdateIndex", + "aoss:DeleteIndex", + "aoss:WriteDocument" + ] + } + ], + "Principal" : [ + "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:user/admin" + ] + } + ]) +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name of the policy. +* `policy` - (Required) JSON policy document to use as the content for the new policy +* `type` - (Required) Type of access policy. Must be `data`. + +The following arguments are optional: + +* `description` - (Optional) Description of the policy. Typically used to store information about the permissions defined in the policy. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `policy_version` - Version of the policy. + +## Import + +OpenSearchServerless Access Policy can be imported using the `name` and `type` arguments separated by a slash (`/`), e.g., + +``` +$ terraform import aws_opensearchserverless_access_policy.example example/data +``` diff --git a/website/docs/r/opensearchserverless_collection.html.markdown b/website/docs/r/opensearchserverless_collection.html.markdown new file mode 100644 index 00000000000..ad66ad1eb95 --- /dev/null +++ b/website/docs/r/opensearchserverless_collection.html.markdown @@ -0,0 +1,76 @@ +--- +subcategory: "OpenSearch Serverless" +layout: "aws" +page_title: "AWS: aws_opensearchserverless_collection" +description: |- + Terraform resource for managing an AWS OpenSearch Collection. 
+--- + +# Resource: aws_opensearchserverless_collection + +Terraform resource for managing an AWS OpenSearch Serverless Collection. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "encryption" + policy = jsonencode({ + "Rules" = [ + { + "Resource" = [ + "collection/example" + ], + "ResourceType" = "collection" + } + ], + "AWSOwnedKey" = true + }) +} + +resource "aws_opensearchserverless_collection" "example" { + name = "example" + + depends_on = [aws_opensearchserverless_security_policy.example] +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name of the collection. + +The following arguments are optional: + +* `description` - (Optional) Description of the collection. +* `tags` - (Optional) A map of tags to assign to the collection. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `type` - (Optional) Type of collection. One of `SEARCH` or `TIMESERIES`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - Amazon Resource Name (ARN) of the collection. +* `collection_endpoint` - Collection-specific endpoint used to submit index, search, and data upload requests to an OpenSearch Serverless collection. +* `dashboard_endpoint` - Collection-specific endpoint used to access OpenSearch Dashboards. +* `kms_key_arn` - The ARN of the Amazon Web Services KMS key used to encrypt the collection. +* `id` - Unique identifier for the collection.
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `20m`) +- `delete` - (Default `20m`) + +## Import + +OpenSearchServerless Collection can be imported using the `id`, e.g., + +``` +$ terraform import aws_opensearchserverless_collection.example example +``` diff --git a/website/docs/r/opensearchserverless_security_config.html.markdown b/website/docs/r/opensearchserverless_security_config.html.markdown new file mode 100644 index 00000000000..b9c066c2556 --- /dev/null +++ b/website/docs/r/opensearchserverless_security_config.html.markdown @@ -0,0 +1,55 @@ +--- +subcategory: "OpenSearch Serverless" +layout: "aws" +page_title: "AWS: aws_opensearchserverless_security_config" +description: |- + Terraform resource for managing an AWS OpenSearch Serverless Security Config. +--- + +# Resource: aws_opensearchserverless_security_config + +Terraform resource for managing an AWS OpenSearch Serverless Security Config. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_opensearchserverless_security_config" "example" { + name = "example" + type = "saml" + saml_options { + metadata = file("${path.module}/idp-metadata.xml") + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required, Forces new resource) Name of the policy. +* `saml_options` - (Required) Configuration block for SAML options. +* `type` - (Required) Type of configuration. Must be `saml`. + +The following arguments are optional: + +* `description` - (Optional) Description of the security configuration. + +### saml_options + +* `group_attribute` - (Optional) Group attribute for this SAML integration. +* `metadata` - (Required) The XML IdP metadata file generated from your identity provider. +* `session_timeout` - (Optional) Session timeout, in minutes. Minimum is 5 minutes and maximum is 720 minutes (12 hours). Default is 60 minutes. 
+* `user_attribute` - (Optional) User attribute for this SAML integration. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `config_version` - Version of the configuration. + +## Import + +OpenSearchServerless Security Config can be imported using the `name` argument prefixed with the string `saml/account_id/`, e.g., + +``` +$ terraform import aws_opensearchserverless_security_config.example saml/123456789012/example +``` diff --git a/website/docs/r/opensearchserverless_security_policy.html.markdown b/website/docs/r/opensearchserverless_security_policy.html.markdown new file mode 100644 index 00000000000..892c6024832 --- /dev/null +++ b/website/docs/r/opensearchserverless_security_policy.html.markdown @@ -0,0 +1,215 @@ +--- +subcategory: "OpenSearch Serverless" +layout: "aws" +page_title: "AWS: aws_opensearchserverless_security_policy" +description: |- + Terraform resource for managing an AWS OpenSearch Serverless Security Policy. +--- + +# Resource: aws_opensearchserverless_security_policy + +Terraform resource for managing an AWS OpenSearch Serverless Security Policy. See AWS documentation for [encryption policies](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-encryption.html#serverless-encryption-policies) and [network policies](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-network.html#serverless-network-policies).
+ +## Example Usage + +### Encryption Security Policy + +#### Applies to a single collection + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "encryption" + description = "encryption security policy for example-collection" + policy = jsonencode({ + Rules = [ + { + Resource = [ + "collection/example-collection" + ], + ResourceType = "collection" + } + ], + AWSOwnedKey = true + }) +} +``` + +#### Applies to multiple collections + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "encryption" + description = "encryption security policy for collections that begin with \"example\"" + policy = jsonencode({ + Rules = [ + { + Resource = [ + "collection/example*" + ], + ResourceType = "collection" + } + ], + AWSOwnedKey = true + }) +} +``` + +#### Using a customer managed key + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "encryption" + description = "encryption security policy using customer KMS key" + policy = jsonencode({ + Rules = [ + { + Resource = [ + "collection/customer-managed-key-collection" + ], + ResourceType = "collection" + } + ], + AWSOwnedKey = false + KmsARN = "arn:aws:kms:us-east-1:123456789012:key/93fd6da4-a317-4c17-bfe9-382b5d988b36" + }) +} +``` + +### Network Security Policy + +#### Allow public access to the collection endpoint and the Dashboards endpoint + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "network" + description = "Public access" + policy = jsonencode([ + { + Description = "Public access to collection and Dashboards endpoint for example collection", + Rules = [ + { + ResourceType = "collection", + Resource = [ + "collection/example-collection" + ] + }, + { + ResourceType = "dashboard" + Resource = [ + "collection/example-collection" + ] + } + ], + AllowFromPublic = true + } + ]) +} +``` + +#### Allow 
VPC access to the collection endpoint and the Dashboards endpoint + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "network" + description = "VPC access" + policy = jsonencode([ + { + Description = "VPC access to collection and Dashboards endpoint for example collection", + Rules = [ + { + ResourceType = "collection", + Resource = [ + "collection/example-collection" + ] + }, + { + ResourceType = "dashboard" + Resource = [ + "collection/example-collection" + ] + } + ], + AllowFromPublic = false, + SourceVPCEs = [ + "vpce-050f79086ee71ac05" + ] + } + ]) +} +``` + +#### Mixed access for different collections + +```terraform +resource "aws_opensearchserverless_security_policy" "example" { + name = "example" + type = "network" + description = "Mixed access for marketing and sales" + policy = jsonencode([ + { + "Description" : "Marketing access", + "Rules" : [ + { + "ResourceType" : "collection", + "Resource" : [ + "collection/marketing*" + ] + }, + { + "ResourceType" : "dashboard", + "Resource" : [ + "collection/marketing*" + ] + } + ], + "AllowFromPublic" : false, + "SourceVPCEs" : [ + "vpce-050f79086ee71ac05" + ] + }, + { + "Description" : "Sales access", + "Rules" : [ + { + "ResourceType" : "collection", + "Resource" : [ + "collection/finance" + ] + } + ], + "AllowFromPublic" : true + } + ]) +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name of the policy. +* `policy` - (Required) JSON policy document to use as the content for the new policy +* `type` - (Required) Type of security policy. One of `encryption` or `network`. + +The following arguments are optional: + +* `description` - (Optional) Description of the policy. Typically used to store information about the permissions defined in the policy. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `policy_version` - Version of the policy. 
+ +## Import + +OpenSearchServerless Security Policy can be imported using the `name` and `type` arguments separated by a slash (`/`), e.g., + +``` +$ terraform import aws_opensearchserverless_security_policy.example example/encryption +``` diff --git a/website/docs/r/opensearchserverless_vpc_endpoint.html.markdown b/website/docs/r/opensearchserverless_vpc_endpoint.html.markdown new file mode 100644 index 00000000000..577be166e36 --- /dev/null +++ b/website/docs/r/opensearchserverless_vpc_endpoint.html.markdown @@ -0,0 +1,57 @@ +--- +subcategory: "OpenSearch Serverless" +layout: "aws" +page_title: "AWS: aws_opensearchserverless_vpc_endpoint" +description: |- + Terraform resource for managing an AWS OpenSearch Serverless VPC Endpoint. +--- + +# Resource: aws_opensearchserverless_vpc_endpoint + +Terraform resource for managing an AWS OpenSearch Serverless VPC Endpoint. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_opensearchserverless_vpc_endpoint" "example" { + name = "myendpoint" + subnet_ids = [aws_subnet.example.id] + vpc_id = aws_vpc.example.id +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name of the interface endpoint. + +* `subnet_ids` - (Required) One or more subnet IDs from which you'll access OpenSearch Serverless. Up to 6 subnets can be provided. +* `vpc_id` - (Required) ID of the VPC from which you'll access OpenSearch Serverless. + +The following arguments are optional: + +* `security_group_ids` - (Optional) One or more security groups that define the ports, protocols, and sources for inbound traffic that you are authorizing into your endpoint. Up to 5 security groups can be provided. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Unique identifier of the VPC endpoint.
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `30m`) +* `update` - (Default `30m`) +* `delete` - (Default `30m`) + +## Import + +OpenSearchServerless VPC Endpoints can be imported using the `id`, e.g., + +``` +$ terraform import aws_opensearchserverless_vpc_endpoint.example vpce-8012925589 +``` diff --git a/website/docs/r/organizations_resource_policy.html.markdown b/website/docs/r/organizations_resource_policy.html.markdown new file mode 100644 index 00000000000..fd0a316418b --- /dev/null +++ b/website/docs/r/organizations_resource_policy.html.markdown @@ -0,0 +1,73 @@ +--- +subcategory: "Organizations" +layout: "aws" +page_title: "AWS: aws_organizations_resource_policy" +description: |- + Provides a resource to manage an AWS Organizations resource policy. +--- + +# Resource: aws_organizations_resource_policy + +Provides a resource to manage a resource-based delegation policy that can be used to delegate policy management for AWS Organizations to specified member accounts to perform policy actions that are by default available only to the management account. See the [_AWS Organizations User Guide_](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_delegate_policies.html) for more information. + +## Example Usage + +```terraform +resource "aws_organizations_resource_policy" "example" { + content = < **Note:** EventBridge was formerly known as CloudWatch Events. The functionality is identical.
## Example Usage @@ -87,9 +89,79 @@ resource "aws_pipes_pipe" "example" { role_arn = aws_iam_role.example.arn source = aws_sqs_queue.source.arn target = aws_sqs_queue.target.arn +} +``` + +### Enrichment Usage + +```terraform +resource "aws_pipes_pipe" "example" { + name = "example-pipe" + role_arn = aws_iam_role.example.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + enrichment = aws_cloudwatch_event_api_destination.example.arn - source_parameters {} - target_parameters {} + enrichment_parameters { + http_parameters = { + "example-header" = "example-value" + "second-example-header" = "second-example-value" + } + + path_parameter_values = ["example-path-param"] + + query_string_parameters = { + "example-query-string" = "example-value" + "second-example-query-string" = "second-example-value" + } + } +} +``` + +### Filter Usage + +```terraform +resource "aws_pipes_pipe" "example" { + name = "example-pipe" + role_arn = aws_iam_role.example.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + source_parameters { + filter_criteria { + filter { + pattern = jsonencode({ + source = ["event-source"] + }) + } + } + } +} +``` + +### SQS Source and Target Configuration Usage + +```terraform +resource "aws_pipes_pipe" "example" { + name = "example-pipe" + role_arn = aws_iam_role.example.arn + source = aws_sqs_queue.source.arn + target = aws_sqs_queue.target.arn + + source_parameters { + sqs_queue_parameters { + batch_size = 1 + maximum_batching_window_in_seconds = 2 + } + } + + target_parameters { + sqs_queue { + message_deduplication_id = "example-dedupe" + message_group_id = "example-group" + } + } } ``` @@ -100,21 +172,44 @@ The following arguments are required: * `role_arn` - (Required) ARN of the role that allows the pipe to send data to the target. * `source` - (Required) Source resource of the pipe (typically an ARN). * `target` - (Required) Target resource of the pipe (typically an ARN). 
-* `source_parameters` - (Required) Parameters required to set up a source for the pipe. Detailed below. -* `target_parameters` - (Required) Parameters required to set up a target for your pipe. Detailed below. The following arguments are optional: * `description` - (Optional) A description of the pipe. At most 512 characters. * `desired_state` - (Optional) The state the pipe should be in. One of: `RUNNING`, `STOPPED`. * `enrichment` - (Optional) Enrichment resource of the pipe (typically an ARN). Read more about enrichment in the [User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html#pipes-enrichment). +* `enrichment_parameters` - (Optional) Parameters to configure enrichment for your pipe. Detailed below. * `name` - (Optional) Name of the pipe. If omitted, Terraform will assign a random, unique name. Conflicts with `name_prefix`. * `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`. +* `source_parameters` - (Optional) Parameters to configure a source for the pipe. Detailed below. +* `target_parameters` - (Optional) Parameters to configure a target for your pipe. Detailed below. * `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +### enrichment_parameters Configuration Block + +You can find out more about EventBridge Pipes Enrichment in the [User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-enrichment.html). + +* `input_template` - (Optional) Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. Maximum length of 8192 characters. +* `http_parameters` - (Optional) Contains the HTTP parameters to use when the target is an API Gateway REST endpoint or EventBridge ApiDestination.
If you specify an API Gateway REST API or EventBridge ApiDestination as a target, you can use this parameter to specify headers, path parameters, and query string keys/values as part of your target invoking request. If you're using ApiDestinations, the corresponding Connection can also have these values configured. In case of any conflicting keys, values from the Connection take precedence. Detailed below. + +#### enrichment_parameters.http_parameters Configuration Block + +* `header_parameters` - (Optional) Key-value mapping of the headers that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination. +* `path_parameter_values` - (Optional) The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards ("*"). +* `query_string_parameters` - (Optional) Key-value mapping of the query strings that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination. + ### source_parameters Configuration Block -* `filter_criteria` - (Optional) The collection of event patterns used to filter events. Detailed below. +You can find out more about EventBridge Pipes Sources in the [User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-source.html). + +* `activemq_broker_parameters` - (Optional) The parameters for using an Active MQ broker as a source. Detailed below. +* `dynamodb_stream_parameters` - (Optional) The parameters for using a DynamoDB stream as a source. Detailed below. +* `filter_criteria` - (Optional) The collection of event patterns used to [filter events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-filtering.html). Detailed below. +* `kinesis_stream_parameters` - (Optional) The parameters for using a Kinesis stream as a source. Detailed below. +* `managed_streaming_kafka_parameters` - (Optional) The parameters for using an MSK stream as a source. Detailed below.
+* `rabbitmq_broker_parameters` - (Optional) The parameters for using a Rabbit MQ broker as a source. Detailed below. +* `self_managed_kafka_parameters` - (Optional) The parameters for using a self-managed Apache Kafka stream as a source. Detailed below. +* `sqs_queue_parameters` - (Optional) The parameters for using an Amazon SQS queue as a source. Detailed below. #### source_parameters.filter_criteria Configuration Block @@ -124,9 +219,302 @@ The following arguments are optional: * `pattern` - (Required) The event pattern. At most 4096 characters. +#### source_parameters.activemq_broker_parameters Configuration Block + +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `credentials` - (Required) The credentials needed to access the resource. Detailed below. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. +* `queue_name` - (Required) The name of the destination queue to consume. Maximum length of 1000. + +##### source_parameters.activemq_broker_parameters.credentials Configuration Block + +* `basic_auth` - (Required) The ARN of the Secrets Manager secret containing the basic auth credentials. + +#### source_parameters.dynamodb_stream_parameters Configuration Block + +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `dead_letter_config` - (Optional) Define the target queue to send dead-letter queue events to. Detailed below. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. +* `maximum_record_age_in_seconds` - (Optional) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records. Maximum value of 604,800.
+* `maximum_retry_attempts` - (Optional) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source. Maximum value of 10,000. +* `on_partial_batch_item_failure` - (Optional) Define how to handle item process failures. AUTOMATIC_BISECT halves each batch and retries each half until all the records are processed or there is one failed message left in the batch. Valid values: AUTOMATIC_BISECT. +* `parallelization_factor` - (Optional) The number of batches to process concurrently from each shard. The default value is 1. Maximum value of 10. +* `starting_position` - (Optional) The position in a stream from which to start reading. Valid values: TRIM_HORIZON, LATEST. + +##### source_parameters.dynamodb_stream_parameters.dead_letter_config Configuration Block + +* `arn` - (Optional) The ARN of the Amazon SQS queue specified as the target for the dead-letter queue. + +#### source_parameters.kinesis_stream_parameters Configuration Block + +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `dead_letter_config` - (Optional) Define the target queue to send dead-letter queue events to. Detailed below. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. +* `maximum_record_age_in_seconds` - (Optional) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records. Maximum value of 604,800. +* `maximum_retry_attempts` - (Optional) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite.
When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source. Maximum value of 10,000. +* `on_partial_batch_item_failure` - (Optional) Define how to handle item process failures. AUTOMATIC_BISECT halves each batch and retries each half until all the records are processed or there is one failed message left in the batch. Valid values: AUTOMATIC_BISECT. +* `parallelization_factor` - (Optional) The number of batches to process concurrently from each shard. The default value is 1. Maximum value of 10. +* `starting_position` - (Required) The position in a stream from which to start reading. Valid values: TRIM_HORIZON, LATEST, AT_TIMESTAMP. +* `starting_position_timestamp` - (Optional) With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. + +##### source_parameters.kinesis_stream_parameters.dead_letter_config Configuration Block + +* `arn` - (Optional) The ARN of the Amazon SQS queue specified as the target for the dead-letter queue. + +#### source_parameters.managed_streaming_kafka_parameters Configuration Block + +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `consumer_group_id` - (Optional) The ID of the consumer group to use. Maximum length of 200. +* `credentials` - (Optional) The credentials needed to access the resource. Detailed below. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. +* `starting_position` - (Optional) The position in a stream from which to start reading. Valid values: TRIM_HORIZON, LATEST. +* `topic_name` - (Required) The name of the topic that the pipe will read from. Maximum length of 249. + +##### source_parameters.managed_streaming_kafka_parameters.credentials Configuration Block + +* `client_certificate_tls_auth` - (Optional) The ARN of the Secrets Manager secret containing the credentials.
+* `sasl_scram_512_auth` - (Optional) The ARN of the Secrets Manager secret containing the credentials. + +#### source_parameters.rabbitmq_broker_parameters Configuration Block + +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `credentials` - (Required) The credentials needed to access the resource. Detailed below. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. +* `queue_name` - (Required) The name of the destination queue to consume. Maximum length of 1000. +* `virtual_host` - (Optional) The name of the virtual host associated with the source broker. Maximum length of 200. + +##### source_parameters.rabbitmq_broker_parameters.credentials Configuration Block + +* `basic_auth` - (Required) The ARN of the Secrets Manager secret containing the credentials. + +#### source_parameters.self_managed_kafka_parameters Configuration Block + +* `additional_bootstrap_servers` - (Optional) An array of server URLs. Maximum number of 2 items, each of maximum length 300. +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `consumer_group_id` - (Optional) The ID of the consumer group to use. Maximum length of 200. +* `credentials` - (Optional) The credentials needed to access the resource. Detailed below. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. +* `server_root_ca_certificate` - (Optional) The ARN of the Secrets Manager secret used for certification. +* `starting_position` - (Optional) The position in a stream from which to start reading. Valid values: TRIM_HORIZON, LATEST. +* `topic_name` - (Required) The name of the topic that the pipe will read from. Maximum length of 249.
+* `vpc` - (Optional) This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used. Detailed below. + +##### source_parameters.self_managed_kafka_parameters.credentials Configuration Block + +* `basic_auth` - (Optional) The ARN of the Secrets Manager secret containing the credentials. +* `client_certificate_tls_auth` - (Optional) The ARN of the Secrets Manager secret containing the credentials. +* `sasl_scram_256_auth` - (Optional) The ARN of the Secrets Manager secret containing the credentials. +* `sasl_scram_512_auth` - (Optional) The ARN of the Secrets Manager secret containing the credentials. + +##### source_parameters.self_managed_kafka_parameters.vpc Configuration Block + +* `security_groups` - (Optional) List of security groups associated with the stream. These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used. +* `subnets` - (Optional) List of the subnets associated with the stream. These subnets must all be in the same VPC. You can specify as many as 16 subnets. + +#### source_parameters.sqs_queue_parameters Configuration Block + +* `batch_size` - (Optional) The maximum number of records to include in each batch. Maximum value of 10000. +* `maximum_batching_window_in_seconds` - (Optional) The maximum length of time to wait for events. Maximum value of 300. + ### target_parameters Configuration Block -* `input_template` - (Optional) Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. +You can find out more about EventBridge Pipes Targets in the [User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html). + +* `batch_job_parameters` - (Optional) The parameters for using an AWS Batch job as a target. Detailed below.
+* `cloudwatch_logs_parameters` - (Optional) The parameters for using a CloudWatch Logs log stream as a target. Detailed below. +* `ecs_task_parameters` - (Optional) The parameters for using an Amazon ECS task as a target. Detailed below. +* `eventbridge_event_bus_parameters` - (Optional) The parameters for using an EventBridge event bus as a target. Detailed below. +* `http_parameters` - (Optional) These are custom parameters to be used when the target is an API Gateway REST API or EventBridge ApiDestination. Detailed below. +* `input_template` - (Optional) Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. Maximum length of 8192 characters. +* `kinesis_stream_parameters` - (Optional) The parameters for using a Kinesis stream as a target. Detailed below. +* `lambda_function_parameters` - (Optional) The parameters for using a Lambda function as a target. Detailed below. +* `redshift_data_parameters` - (Optional) These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement. Detailed below. +* `sagemaker_pipeline_parameters` - (Optional) The parameters for using a SageMaker pipeline as a target. Detailed below. +* `sqs_queue_parameters` - (Optional) The parameters for using an Amazon SQS queue as a target. Detailed below. +* `step_function_state_machine_parameters` - (Optional) The parameters for using a Step Functions state machine as a target. Detailed below. + +#### target_parameters.batch_job_parameters Configuration Block + +* `array_properties` - (Optional) The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is an AWS Batch job. Detailed below. +* `container_overrides` - (Optional) The overrides that are sent to a container. Detailed below.
+* `depends_on` - (Optional) A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin. Detailed below. +* `job_definition` - (Required) The job definition used by this job. This value can be one of name, name:revision, or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision then the latest active revision is used. +* `job_name` - (Required) The name of the job. It can be up to 128 letters long. +* `parameters` - (Optional) Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters included here override any corresponding parameter defaults from the job definition. Detailed below. +* `retry_strategy` - (Optional) The retry strategy to use for failed jobs. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition. Detailed below. + +##### target_parameters.batch_job_parameters.array_properties Configuration Block + +* `size` - (Optional) The size of the array, if this is an array batch job. Minimum value of 2. Maximum value of 10,000. + +##### target_parameters.batch_job_parameters.container_overrides Configuration Block + +* `command` - (Optional) List of commands to send to the container that overrides the default command from the Docker image or the task definition. +* `environment` - (Optional) The environment variables to send to the container. 
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. Environment variables cannot start with "AWS_BATCH". This naming convention is reserved for variables that AWS Batch sets. Detailed below. +* `instance_type` - (Optional) The instance type to use for a multi-node parallel job. This parameter isn't applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn't be provided. +* `resource_requirement` - (Optional) The type and amount of resources to assign to a container. This overrides the settings in the job definition. The supported resources include GPU, MEMORY, and VCPU. Detailed below. + +###### target_parameters.batch_job_parameters.container_overrides.environment Configuration Block + +* `name` - (Optional) The name of the key-value pair. For environment variables, this is the name of the environment variable. +* `value` - (Optional) The value of the key-value pair. For environment variables, this is the value of the environment variable. + +###### target_parameters.batch_job_parameters.container_overrides.resource_requirement Configuration Block + +* `type` - (Optional) The type of resource to assign to a container. The supported resources include GPU, MEMORY, and VCPU. +* `value` - (Optional) The quantity of the specified resource to reserve for the container. [The values vary based on the type specified](https://docs.aws.amazon.com/eventbridge/latest/pipes-reference/API_BatchResourceRequirement.html). + +##### target_parameters.batch_job_parameters.depends_on Configuration Block + +* `job_id` - (Optional) The job ID of the AWS Batch job that's associated with this dependency. +* `type` - (Optional) The type of the job dependency. Valid Values: N_TO_N, SEQUENTIAL.
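Taken together, a minimal AWS Batch target might look like the following sketch. This is illustrative only: the job queue, job definition, and dependency job ID (`example-job-id`) are hypothetical placeholders, not values from this document.

```terraform
resource "aws_pipes_pipe" "batch_example" {
  name     = "example-pipe"
  role_arn = aws_iam_role.example.arn
  source   = aws_sqs_queue.source.arn
  target   = aws_batch_job_queue.example.arn # assumed Batch job queue

  target_parameters {
    batch_job_parameters {
      job_definition = aws_batch_job_definition.example.arn
      job_name       = "example-job"

      # Hypothetical dependency on a previously submitted job.
      depends_on {
        job_id = "example-job-id"
        type   = "SEQUENTIAL"
      }
    }
  }
}
```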
+ +##### target_parameters.batch_job_parameters.retry_strategy Configuration Block + +* `attempts` - (Optional) The number of times to move a job to the RUNNABLE status. If the value of attempts is greater than one, the job is retried on failure the same number of attempts as the value. Maximum value of 10. + +#### target_parameters.cloudwatch_logs_parameters Configuration Block + +* `log_stream_name` - (Optional) The name of the log stream. +* `timestamp` - (Optional) The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. This is the JSON path to the field in the event e.g. $.detail.timestamp + +#### target_parameters.ecs_task_parameters Configuration Block + +* `capacity_provider_strategy` - (Optional) List of capacity provider strategies to use for the task. If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used. Detailed below. +* `enable_ecs_managed_tags` - (Optional) Specifies whether to enable Amazon ECS managed tags for the task. Valid values: true, false. +* `enable_execute_command` - (Optional) Whether or not to enable the execute command functionality for the containers in this task. If true, this enables execute command functionality on all containers in the task. Valid values: true, false. +* `group` - (Optional) Specifies an Amazon ECS task group for the task. The maximum length is 255 characters. +* `launch_type` - (Optional) Specifies the launch type on which your task is running. The launch type that you specify here must match one of the launch type (compatibilities) of the target task. The FARGATE value is supported only in the Regions where AWS Fargate with Amazon ECS is supported. Valid Values: EC2, FARGATE, EXTERNAL +* `network_configuration` - (Optional) Use this structure if the Amazon ECS task uses the awsvpc network mode. 
This structure specifies the VPC subnets and security groups associated with the task, and whether a public IP address is to be used. This structure is required if LaunchType is FARGATE because the awsvpc mode is required for Fargate tasks. If you specify NetworkConfiguration when the target ECS task does not use the awsvpc network mode, the task fails. Detailed below. +* `overrides` - (Optional) The overrides that are associated with a task. Detailed below. +* `placement_constraint` - (Optional) An array of placement constraint objects to use for the task. You can specify up to 10 constraints per task (including constraints in the task definition and those specified at runtime). Detailed below. +* `placement_strategy` - (Optional) The placement strategy objects to use for the task. You can specify a maximum of five strategy rules per task. Detailed below. +* `platform_version` - (Optional) Specifies the platform version for the task. Specify only the numeric portion of the platform version, such as 1.1.0. This structure is used only if LaunchType is FARGATE. +* `propagate_tags` - (Optional) Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags are not propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action. Valid Values: TASK_DEFINITION +* `reference_id` - (Optional) The reference ID to use for the task. Maximum length of 1,024. +* `tags` - (Optional) Key-value map of tags that you apply to the task to help you categorize and organize them. +* `task_count` - (Optional) The number of tasks to create based on TaskDefinition. The default is 1. +* `task_definition_arn` - (Optional) The ARN of the task definition to use if the event target is an Amazon ECS task. 
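As a sketch of how the `ecs_task_parameters` arguments above fit together for a Fargate task, assuming hypothetical cluster, task definition, and subnet resources exist elsewhere in the configuration:

```terraform
resource "aws_pipes_pipe" "ecs_example" {
  name     = "example-pipe"
  role_arn = aws_iam_role.example.arn
  source   = aws_sqs_queue.source.arn
  target   = aws_ecs_cluster.example.arn # assumed ECS cluster

  target_parameters {
    ecs_task_parameters {
      task_definition_arn = aws_ecs_task_definition.example.arn
      launch_type         = "FARGATE"
      task_count          = 1

      # awsvpc networking is required for Fargate tasks.
      network_configuration {
        aws_vpc_configuration {
          assign_public_ip = "DISABLED"
          subnets          = [aws_subnet.example.id]
        }
      }
    }
  }
}
```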
+ +##### target_parameters.ecs_task_parameters.capacity_provider_strategy Configuration Block + +* `base` - (Optional) The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used. Maximum value of 100,000. +* `capacity_provider` - (Optional) The short name of the capacity provider. Maximum value of 255. +* `weight` - (Optional) The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied. Maximum value of 1,000. + +##### target_parameters.ecs_task_parameters.network_configuration Configuration Block + +* `aws_vpc_configuration` - (Optional) Use this structure to specify the VPC subnets and security groups for the task, and whether a public IP address is to be used. This structure is relevant only for ECS tasks that use the awsvpc network mode. Detailed below. + +###### target_parameters.ecs_task_parameters.network_configuration.aws_vpc_configuration Configuration Block + +* `assign_public_ip` - (Optional) Specifies whether the task's elastic network interface receives a public IP address. You can specify ENABLED only when LaunchType in EcsParameters is set to FARGATE. Valid Values: ENABLED, DISABLED. +* `security_groups` - (Optional) Specifies the security groups associated with the task. These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used. +* `subnets` - (Optional) Specifies the subnets associated with the task. These subnets must all be in the same VPC. You can specify as many as 16 subnets. 
+ +##### target_parameters.ecs_task_parameters.overrides Configuration Block + +* `container_override` - (Optional) One or more container overrides that are sent to a task. Detailed below. +* `cpu` - (Optional) The cpu override for the task. +* `ephemeral_storage` - (Optional) The ephemeral storage setting override for the task. Detailed below. +* `execution_role_arn` - (Optional) The Amazon Resource Name (ARN) of the task execution IAM role override for the task. +* `inference_accelerator_override` - (Optional) List of Elastic Inference accelerator overrides for the task. Detailed below. +* `memory` - (Optional) The memory override for the task. +* `task_role_arn` - (Optional) The Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. + +###### target_parameters.ecs_task_parameters.overrides.container_override Configuration Block + +* `command` - (Optional) List of commands to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name. +* `cpu` - (Optional) The number of cpu units reserved for the container, instead of the default value from the task definition. You must also specify a container name. +* `environment` - (Optional) The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name. Detailed below. +* `environment_file` - (Optional) A list of files containing the environment variables to pass to a container, instead of the value from the container definition. Detailed below. +* `memory` - (Optional) The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. 
If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name. +* `memory_reservation` - (Optional) The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name. +* `name` - (Optional) The name of the container that receives the override. This parameter is required if any override is specified. +* `resource_requirement` - (Optional) The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU. Detailed below. + +###### target_parameters.ecs_task_parameters.overrides.container_override.environment Configuration Block + +* `name` - (Optional) The name of the key-value pair. For environment variables, this is the name of the environment variable. +* `value` - (Optional) The value of the key-value pair. For environment variables, this is the value of the environment variable. + +###### target_parameters.ecs_task_parameters.overrides.container_override.environment_file Configuration Block + +* `type` - (Optional) The file type to use. The only supported value is s3. +* `value` - (Optional) The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file. + +###### target_parameters.ecs_task_parameters.overrides.container_override.resource_requirement Configuration Block + +* `type` - (Optional) The type of resource to assign to a container. The supported values are GPU or InferenceAccelerator. +* `value` - (Optional) The value for the specified resource type. If the GPU type is used, the value is the number of physical GPUs the Amazon ECS container agent reserves for the container. The number of GPUs that's reserved for all containers in a task can't exceed the number of available GPUs on the container instance that the task is launched on. 
If the InferenceAccelerator type is used, the value matches the deviceName for an InferenceAccelerator specified in a task definition. + +###### target_parameters.ecs_task_parameters.overrides.ephemeral_storage Configuration Block + +* `size_in_gib` - (Required) The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB. + +###### target_parameters.ecs_task_parameters.overrides.inference_accelerator_override Configuration Block + +* `device_name` - (Optional) The Elastic Inference accelerator device name to override for the task. This parameter must match a deviceName specified in the task definition. +* `device_type` - (Optional) The Elastic Inference accelerator type to use. + +##### target_parameters.ecs_task_parameters.placement_constraint Configuration Block + +* `expression` - (Optional) A cluster query language expression to apply to the constraint. You cannot specify an expression if the constraint type is distinctInstance. Maximum length of 2,000. +* `type` - (Optional) The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates. Valid Values: distinctInstance, memberOf. + +##### target_parameters.ecs_task_parameters.placement_strategy Configuration Block + +* `field` - (Optional) The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used. Maximum length of 255. +* `type` - (Optional) The type of placement strategy. The random placement strategy randomly places tasks on available candidates. 
The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that is specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory (but still enough to run the task). Valid Values: random, spread, binpack. + +#### target_parameters.eventbridge_event_bus_parameters Configuration Block + +* `detail_type` - (Optional) A free-form string, with a maximum of 128 characters, used to decide what fields to expect in the event detail. +* `endpoint_id` - (Optional) The URL subdomain of the endpoint. For example, if the URL for Endpoint is https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is abcde.veo. +* `resources` - (Optional) List of AWS resources, identified by Amazon Resource Name (ARN), which the event primarily concerns. Any number, including zero, may be present. +* `source` - (Optional) The source of the event. Maximum length of 256. +* `time` - (Optional) The time stamp of the event, per RFC3339. If no time stamp is provided, the time stamp of the PutEvents call is used. This is the JSON path to the field in the event e.g. $.detail.timestamp + +#### target_parameters.http_parameters Configuration Block + +* `header_parameters` - (Optional) Key-value mapping of the headers that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination. +* `path_parameter_values` - (Optional) The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards ("*"). +* `query_string_parameters` - (Optional) Key-value mapping of the query strings that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
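The target `http_parameters` block above might be combined with an API destination target as in the following sketch; the header and query string names are illustrative placeholders, not values taken from this document.

```terraform
resource "aws_pipes_pipe" "http_example" {
  name     = "example-pipe"
  role_arn = aws_iam_role.example.arn
  source   = aws_sqs_queue.source.arn
  target   = aws_cloudwatch_event_api_destination.example.arn # assumed API destination

  target_parameters {
    http_parameters {
      # Hypothetical header and query string keys/values.
      header_parameters = {
        "example-header" = "example-value"
      }

      path_parameter_values = ["example-path-param"]

      query_string_parameters = {
        "example-query-string" = "example-value"
      }
    }
  }
}
```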
+ +#### target_parameters.kinesis_stream_parameters Configuration Block + +* `partition_key` - (Required) Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream. + +#### target_parameters.lambda_function_parameters Configuration Block + +* `invocation_type` - (Optional) Specify whether to invoke the function synchronously or asynchronously. Valid Values: REQUEST_RESPONSE, FIRE_AND_FORGET. + +#### target_parameters.redshift_data_parameters Configuration Block + +* `database` - (Required) The name of the database. Required when authenticating using temporary credentials. +* `db_user` - (Optional) The database user name. Required when authenticating using temporary credentials. +* `secret_manager_arn` - (Optional) The name or ARN of the secret that enables access to the database. Required when authenticating using Secrets Manager. +* `sqls` - (Optional) List of SQL statements text to run, each of maximum length of 100,000. +* `statement_name` - (Optional) The name of the SQL statement. You can name the SQL statement when you create it to identify the query. +* `with_event` - (Optional) Indicates whether to send an event back to EventBridge after the SQL statement runs. + +#### target_parameters.sagemaker_pipeline_parameters Configuration Block + +* `pipeline_parameter` - (Optional) List of Parameter names and values for SageMaker Model Building Pipeline execution. Detailed below. 
+ +##### target_parameters.sagemaker_pipeline_parameters.parameters Configuration Block + +* `name` - (Optional) Name of parameter to start execution of a SageMaker Model Building Pipeline. Maximum length of 256. +* `value` - (Optional) Value of parameter to start execution of a SageMaker Model Building Pipeline. Maximum length of 1024. + +#### target_parameters.sqs_queue_parameters Configuration Block + +* `message_deduplication_id` - (Optional) This parameter applies only to FIFO (first-in-first-out) queues. The token used for deduplication of sent messages. +* `message_group_id` - (Optional) The FIFO message group ID to use as the target. + +#### target_parameters.step_function_state_machine_parameters Configuration Block + +* `invocation_type` - (Optional) Specify whether to invoke the function synchronously or asynchronously. Valid Values: REQUEST_RESPONSE, FIRE_AND_FORGET. ## Attributes Reference diff --git a/website/docs/r/qldb_stream.html.markdown b/website/docs/r/qldb_stream.html.markdown index 20a0b14280a..5f425baae03 100644 --- a/website/docs/r/qldb_stream.html.markdown +++ b/website/docs/r/qldb_stream.html.markdown @@ -56,3 +56,10 @@ In addition to all arguments above, the following attributes are exported: * `id` - The ID of the QLDB Stream. * `arn` - The ARN of the QLDB Stream. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
+ +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +- `create` - (Default `8m`) +- `delete` - (Default `5m`) diff --git a/website/docs/r/quicksight_analysis.html.markdown b/website/docs/r/quicksight_analysis.html.markdown new file mode 100644 index 00000000000..78255a274a6 --- /dev/null +++ b/website/docs/r/quicksight_analysis.html.markdown @@ -0,0 +1,166 @@ +--- +subcategory: "QuickSight" +layout: "aws" +page_title: "AWS: aws_quicksight_analysis" +description: |- + Manages a QuickSight Analysis. +--- + +# Resource: aws_quicksight_analysis + +Resource for managing a QuickSight Analysis. + +## Example Usage + +### From Source Template + +```terraform +resource "aws_quicksight_analysis" "example" { + analysis_id = "example-id" + name = "example-name" + source_entity { + source_template { + arn = aws_quicksight_template.source.arn + data_set_references { + data_set_arn = aws_quicksight_data_set.dataset.arn + data_set_placeholder = "1" + } + } + } +} +``` + +### With Definition + +```terraform +resource "aws_quicksight_analysis" "example" { + analysis_id = "example-id" + name = "example-name" + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.dataset.arn + identifier = "1" + } + sheets { + title = "Example" + sheet_id = "Example1" + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Example" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +``` + +## Argument Reference + +The following arguments are required: + +* 
`analysis_id` - (Required, Forces new resource) Identifier for the analysis. +* `name` - (Required) Display name for the analysis. + +The following arguments are optional: + +* `aws_account_id` - (Optional, Forces new resource) AWS account ID. +* `definition` - (Optional) A detailed analysis definition. Only one of `definition` or `source_entity` should be configured. See [definition](#definition). +* `parameters` - (Optional) The parameters for the creation of the analysis, which you want to use to override the default settings. An analysis can have any type of parameters, and some parameters might accept multiple values. See [parameters](#parameters). +* `permissions` - (Optional) A set of resource permissions on the analysis. Maximum of 64 items. See [permissions](#permissions). +* `recovery_window_in_days` - (Optional) A value that specifies the number of days that Amazon QuickSight waits before it deletes the analysis. Use `0` to force deletion without recovery. Minimum value of `7`. Maximum value of `30`. Defaults to `30`. +* `source_entity` - (Optional) The entity that you are using as a source when you create the analysis (template). Only one of `definition` or `source_entity` should be configured. See [source_entity](#source_entity). +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `theme_arn` - (Optional) The Amazon Resource Name (ARN) of the theme that is being used for this analysis. The theme ARN must exist in the same AWS account where you create the analysis. + +### permissions + +* `actions` - (Required) List of IAM actions to grant or revoke permissions on. +* `principal` - (Required) ARN of the principal.
See the [ResourcePermission documentation](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ResourcePermission.html) for the applicable ARN values. + +### source_entity + +* `source_template` - (Optional) The source template. See [source_template](#source_template). + +### source_template + +* `arn` - (Required) The Amazon Resource Name (ARN) of the resource. +* `data_set_references` - (Required) List of dataset references. See [data_set_references](#data_set_references). + +### data_set_references + +* `data_set_arn` - (Required) Dataset Amazon Resource Name (ARN). +* `data_set_placeholder` - (Required) Dataset placeholder. + +### parameters + +* `date_time_parameters` - (Optional) A list of parameters that have a data type of date-time. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DateTimeParameter.html). +* `decimal_parameters` - (Optional) A list of parameters that have a data type of decimal. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DecimalParameter.html). +* `integer_parameters` - (Optional) A list of parameters that have a data type of integer. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_IntegerParameter.html). +* `string_parameters` - (Optional) A list of parameters that have a data type of string. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_StringParameter.html). + +### definition + +* `data_set_identifiers_declarations` - (Required) A list of dataset identifier declarations. With this mapping, you can use dataset identifiers instead of dataset Amazon Resource Names (ARNs) throughout the analysis sub-structures. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetIdentifierDeclaration.html).
+* `analysis_defaults` - (Optional) The configuration for default analysis settings. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AnalysisDefaults.html). +* `calculated_fields` - (Optional) A list of calculated field definitions for the analysis. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CalculatedField.html). +* `column_configurations` - (Optional) A list of analysis-level column configurations. Column configurations are used to set default formatting for a column that's used throughout an analysis. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ColumnConfiguration.html). +* `filter_groups` - (Optional) A list of filter definitions for an analysis. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterGroup.html). For more information, see [Filtering Data](https://docs.aws.amazon.com/quicksight/latest/user/filtering-visual-data.html) in Amazon QuickSight User Guide. +* `parameters_declarations` - (Optional) A list of parameter declarations for an analysis. Parameters are named variables that can transfer a value for use by an action or an object. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ParameterDeclaration.html). For more information, see [Parameters in Amazon QuickSight](https://docs.aws.amazon.com/quicksight/latest/user/parameters-in-quicksight.html) in the Amazon QuickSight User Guide. +* `sheets` - (Optional) A list of sheet definitions for an analysis. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetDefinition.html).
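A minimal sketch of the `permissions` argument described above, layered onto the source-template example; the action names and the `aws_quicksight_user` reference are illustrative assumptions, not a definitive permission set:

```terraform
resource "aws_quicksight_analysis" "example" {
  analysis_id = "example-id"
  name        = "example-name"

  permissions {
    actions   = ["quicksight:DescribeAnalysis", "quicksight:QueryAnalysis"] # illustrative actions
    principal = aws_quicksight_user.example.arn                             # hypothetical user resource
  }

  source_entity {
    source_template {
      arn = aws_quicksight_template.source.arn
      data_set_references {
        data_set_arn         = aws_quicksight_data_set.dataset.arn
        data_set_placeholder = "1"
      }
    }
  }
}
```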
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the analysis. +* `created_time` - The time that the analysis was created. +* `id` - A comma-delimited string joining AWS account ID and analysis ID. +* `last_updated_time` - The time that the analysis was last updated. +* `status` - The analysis creation status. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5m`) +* `update` - (Default `5m`) +* `delete` - (Default `5m`) + +## Import + +A QuickSight Analysis can be imported using the AWS account ID and analysis ID separated by a comma (`,`) e.g., + +``` +$ terraform import aws_quicksight_analysis.example 123456789012,example-id +``` diff --git a/website/docs/r/quicksight_dashboard.html.markdown b/website/docs/r/quicksight_dashboard.html.markdown new file mode 100644 index 00000000000..ebe1826e478 --- /dev/null +++ b/website/docs/r/quicksight_dashboard.html.markdown @@ -0,0 +1,224 @@ +--- +subcategory: "QuickSight" +layout: "aws" +page_title: "AWS: aws_quicksight_dashboard" +description: |- + Manages a QuickSight Dashboard. +--- + +# Resource: aws_quicksight_dashboard + +Resource for managing a QuickSight Dashboard. 
+ +## Example Usage + +### From Source Template + +```terraform +resource "aws_quicksight_dashboard" "example" { + dashboard_id = "example-id" + name = "example-name" + version_description = "version" + source_entity { + source_template { + arn = aws_quicksight_template.source.arn + data_set_references { + data_set_arn = aws_quicksight_data_set.dataset.arn + data_set_placeholder = "1" + } + } + } +} +``` + +### With Definition + +```terraform +resource "aws_quicksight_dashboard" "example" { + dashboard_id = "example-id" + name = "example-name" + version_description = "version" + definition { + data_set_identifiers_declarations { + data_set_arn = aws_quicksight_data_set.dataset.arn + identifier = "1" + } + sheets { + title = "Example" + sheet_id = "Example1" + visuals { + line_chart_visual { + visual_id = "LineChart" + title { + format_text { + plain_text = "Line Chart Example" + } + } + chart_configuration { + field_wells { + line_chart_aggregated_field_wells { + category { + categorical_dimension_field { + field_id = "1" + column { + data_set_identifier = "1" + column_name = "Column1" + } + } + } + values { + categorical_measure_field { + field_id = "2" + column { + data_set_identifier = "1" + column_name = "Column1" + } + aggregation_function = "COUNT" + } + } + } + } + } + } + } + } + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `dashboard_id` - (Required, Forces new resource) Identifier for the dashboard. +* `name` - (Required) Display name for the dashboard. +* `version_description` - (Required) A description of the current dashboard version being created/updated. + +The following arguments are optional: + +* `aws_account_id` - (Optional, Forces new resource) AWS account ID. +* `dashboard_publish_options` - (Optional) Options for publishing the dashboard. See [dashboard_publish_options](#dashboard_publish_options). +* `definition` - (Optional) A detailed dashboard definition. 
Only one of `definition` or `source_entity` should be configured. See [definition](#definition). +* `parameters` - (Optional) The parameters for the creation of the dashboard, which you want to use to override the default settings. A dashboard can have any type of parameters, and some parameters might accept multiple values. See [parameters](#parameters). +* `permissions` - (Optional) A set of resource permissions on the dashboard. Maximum of 64 items. See [permissions](#permissions). +* `source_entity` - (Optional) The entity that you are using as a source when you create the dashboard (template). Only one of `definition` or `source_entity` should be configured. See [source_entity](#source_entity). +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `theme_arn` - (Optional) The Amazon Resource Name (ARN) of the theme that is being used for this dashboard. The theme ARN must exist in the same AWS account where you create the dashboard. + +### permissions + +* `actions` - (Required) List of IAM actions to grant or revoke permissions on. +* `principal` - (Required) ARN of the principal. See the [ResourcePermission documentation](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ResourcePermission.html) for the applicable ARN values. + +### source_entity + +* `source_template` - (Optional) The source template. See [source_template](#source_template). + +### source_template + +* `arn` - (Required) The Amazon Resource Name (ARN) of the resource. +* `data_set_references` - (Required) List of dataset references. See [data_set_references](#data_set_references). + +### data_set_references + +* `data_set_arn` - (Required) Dataset Amazon Resource Name (ARN). +* `data_set_placeholder` - (Required) Dataset placeholder. 
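As a sketch of how publishing behavior can be tuned, the following layers `dashboard_publish_options` (documented in the following section) onto the source-template example; the specific option values here are illustrative:

```terraform
resource "aws_quicksight_dashboard" "example" {
  dashboard_id        = "example-id"
  name                = "example-name"
  version_description = "version"

  source_entity {
    source_template {
      arn = aws_quicksight_template.source.arn
      data_set_references {
        data_set_arn         = aws_quicksight_data_set.dataset.arn
        data_set_placeholder = "1"
      }
    }
  }

  dashboard_publish_options {
    export_to_csv_option {
      availability_status = "DISABLED" # illustrative: disable CSV export
    }
    export_with_hidden_fields_option {
      availability_status = "ENABLED" # illustrative: include hidden fields in exports
    }
  }
}
```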
+ +### dashboard_publish_options + +* `ad_hoc_filtering_option` - (Optional) Ad hoc (one-time) filtering option. See [ad_hoc_filtering_option](#ad_hoc_filtering_option). +* `data_point_drill_up_down_option` - (Optional) The drill-down options of data points in a dashboard. See [data_point_drill_up_down_option](#data_point_drill_up_down_option). +* `data_point_menu_label_option` - (Optional) The data point menu label options of a dashboard. See [data_point_menu_label_option](#data_point_menu_label_option). +* `data_point_tooltip_option` - (Optional) The data point tooltip options of a dashboard. See [data_point_tooltip_option](#data_point_tooltip_option). +* `export_to_csv_option` - (Optional) Export to .csv option. See [export_to_csv_option](#export_to_csv_option). +* `export_with_hidden_fields_option` - (Optional) Determines if hidden fields are exported with a dashboard. See [export_with_hidden_fields_option](#export_with_hidden_fields_option). +* `sheet_controls_option` - (Optional) Sheet controls option. See [sheet_controls_option](#sheet_controls_option). +* `sheet_layout_element_maximization_option` - (Optional) The sheet layout maximization options of a dashboard. See [sheet_layout_element_maximization_option](#sheet_layout_element_maximization_option). +* `visual_axis_sort_option` - (Optional) The axis sort options of a dashboard. See [visual_axis_sort_option](#visual_axis_sort_option). +* `visual_menu_option` - (Optional) The menu options of a visual in a dashboard. See [visual_menu_option](#visual_menu_option). + +### ad_hoc_filtering_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### data_point_drill_up_down_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### data_point_menu_label_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED.
+ +### data_point_tooltip_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### export_to_csv_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### export_with_hidden_fields_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### sheet_controls_option + +* `visibility_state` - (Optional) Visibility state. Possible values: EXPANDED, COLLAPSED. + +### sheet_layout_element_maximization_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### visual_axis_sort_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### visual_menu_option + +* `availability_status` - (Optional) Availability status. Possible values: ENABLED, DISABLED. + +### parameters + +* `date_time_parameters` - (Optional) A list of parameters that have a data type of date-time. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DateTimeParameter.html). +* `decimal_parameters` - (Optional) A list of parameters that have a data type of decimal. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DecimalParameter.html). +* `integer_parameters` - (Optional) A list of parameters that have a data type of integer. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_IntegerParameter.html). +* `string_parameters` - (Optional) A list of parameters that have a data type of string. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_StringParameter.html). + +### definition + +* `data_set_identifiers_declarations` - (Required) A list of dataset identifier declarations.
With this mapping, you can use dataset identifiers instead of dataset Amazon Resource Names (ARNs) throughout the dashboard's sub-structures. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_DataSetIdentifierDeclaration.html). +* `analysis_defaults` - (Optional) The configuration for default analysis settings. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AnalysisDefaults.html). +* `calculated_fields` - (Optional) A list of calculated field definitions for the dashboard. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_CalculatedField.html). +* `column_configurations` - (Optional) A list of dashboard-level column configurations. Column configurations are used to set default formatting for a column that's used throughout a dashboard. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ColumnConfiguration.html). +* `filter_groups` - (Optional) A list of filter definitions for a dashboard. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_FilterGroup.html). For more information, see [Filtering Data](https://docs.aws.amazon.com/quicksight/latest/user/filtering-visual-data.html) in Amazon QuickSight User Guide. +* `parameters_declarations` - (Optional) A list of parameter declarations for a dashboard. Parameters are named variables that can transfer a value for use by an action or an object. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ParameterDeclaration.html). For more information, see [Parameters in Amazon QuickSight](https://docs.aws.amazon.com/quicksight/latest/user/parameters-in-quicksight.html) in the Amazon QuickSight User Guide.
+* `sheets` - (Optional) A list of sheet definitions for a dashboard. See [AWS API Documentation for complete description](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_SheetDefinition.html). + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the dashboard. +* `created_time` - The time that the dashboard was created. +* `id` - A comma-delimited string joining AWS account ID and dashboard ID. +* `last_updated_time` - The time that the dashboard was last updated. +* `source_entity_arn` - Amazon Resource Name (ARN) of a template that was used to create this dashboard. +* `status` - The dashboard creation status. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). +* `version_number` - The version number of the dashboard version. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5m`) +* `update` - (Default `5m`) +* `delete` - (Default `5m`) + +## Import + +A QuickSight Dashboard can be imported using the AWS account ID and dashboard ID separated by a comma (`,`) e.g., + +``` +$ terraform import aws_quicksight_dashboard.example 123456789012,example-id +``` diff --git a/website/docs/r/quicksight_refresh_schedule.html.markdown b/website/docs/r/quicksight_refresh_schedule.html.markdown index 0e12825f130..c213312b276 100644 --- a/website/docs/r/quicksight_refresh_schedule.html.markdown +++ b/website/docs/r/quicksight_refresh_schedule.html.markdown @@ -40,9 +40,9 @@ resource "aws_quicksight_refresh_schedule" "example" { refresh_type = "INCREMENTAL_REFRESH" schedule_frequency { - interval = "WEEKLY" - time_of_day = "01:00" - timezone = "Europe/London" + interval = "WEEKLY" + time_of_the_day = "01:00" + timezone = "Europe/London" 
refresh_on_day { day_of_week = "MONDAY" } @@ -62,9 +62,9 @@ resource "aws_quicksight_refresh_schedule" "example" { refresh_type = "INCREMENTAL_REFRESH" schedule_frequency { - interval = "MONTHLY" - time_of_day = "01:00" - timezone = "Europe/London" + interval = "MONTHLY" + time_of_the_day = "01:00" + timezone = "Europe/London" refresh_on_day { day_of_month = "1" } @@ -94,7 +94,7 @@ The following arguments are optional: ### schedule_frequency * `interval` - (Required) The interval between scheduled refreshes. Valid values are `MINUTE15`, `MINUTE30`, `HOURLY`, `DAILY`, `WEEKLY` and `MONTHLY`. -* `time_of_day` - (Optional) The time of day that you want the dataset to refresh. This value is expressed in `HH:MM` format. This field is not required for schedules that refresh hourly. +* `time_of_the_day` - (Optional) The time of day that you want the dataset to refresh. This value is expressed in `HH:MM` format. This field is not required for schedules that refresh hourly. * `timezone` - (Optional) The timezone that you want the refresh schedule to use. * `refresh_on_day` - (Optional) The [refresh on entity](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ScheduleRefreshOnEntity.html) configuration for weekly or monthly schedules. See [refresh_on_day](#refresh_on_day). diff --git a/website/docs/r/quicksight_theme.html.markdown b/website/docs/r/quicksight_theme.html.markdown new file mode 100644 index 00000000000..a634de07213 --- /dev/null +++ b/website/docs/r/quicksight_theme.html.markdown @@ -0,0 +1,161 @@ +--- +subcategory: "QuickSight" +layout: "aws" +page_title: "AWS: aws_quicksight_theme" +description: |- + Manages a QuickSight Theme. +--- + +# Resource: aws_quicksight_theme + +Resource for managing a QuickSight Theme. 
+ +## Example Usage + +### Basic Usage + +```terraform +resource "aws_quicksight_theme" "example" { + theme_id = "example" + name = "example" + + base_theme_id = "MIDNIGHT" + + configuration { + data_color_palette { + colors = [ + "#FFFFFF", + "#111111", + "#222222", + "#333333", + "#444444", + "#555555", + "#666666", + "#777777", + "#888888", + "#999999" + ] + empty_fill_color = "#FFFFFF" + min_max_gradient = [ + "#FFFFFF", + "#111111", + ] + } + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `theme_id` - (Required, Forces new resource) Identifier of the theme. +* `base_theme_id` - (Required) The ID of the theme that a custom theme will inherit from. All themes inherit from one of the starting themes defined by Amazon QuickSight. For a list of the starting themes, use ListThemes or choose Themes from within an analysis. +* `name` - (Required) Display name of the theme. +* `configuration` - (Required) The theme configuration, which contains the theme display properties. See [configuration](#configuration). + +The following arguments are optional: + +* `aws_account_id` - (Optional, Forces new resource) AWS account ID. +* `permissions` - (Optional) A set of resource permissions on the theme. Maximum of 64 items. See [permissions](#permissions). +* `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `version_description` - (Optional) A description of the current theme version being created/updated. + +### permissions + +* `actions` - (Required) List of IAM actions to grant or revoke permissions on. +* `principal` - (Required) ARN of the principal. See the [ResourcePermission documentation](https://docs.aws.amazon.com/quicksight/latest/APIReference/API_ResourcePermission.html) for the applicable ARN values. 
+ +### configuration + +* `data_color_palette` - (Optional) Color properties that apply to chart data colors. See [data_color_palette](#data_color_palette). +* `sheet` - (Optional) Display options related to sheets. See [sheet](#sheet). +* `typography` - (Optional) Determines the typography options. See [typography](#typography). +* `ui_color_palette` - (Optional) Color properties that apply to the UI and to charts, excluding the colors that apply to data. See [ui_color_palette](#ui_color_palette). + +### data_color_palette + +* `colors` - (Optional) List of hexadecimal codes for the colors. Minimum of 8 items and maximum of 20 items. +* `empty_fill_color` - (Optional) The hexadecimal code of a color that applies to charts where a lack of data is highlighted. +* `min_max_gradient` - (Optional) The minimum and maximum hexadecimal codes that describe a color gradient. List of exactly 2 items. + +### sheet + +* `tile` - (Optional) The display options for tiles. See [tile](#tile). +* `tile_layout` - (Optional) The layout options for tiles. See [tile_layout](#tile_layout). + +### tile + +* `border` - (Optional) The border around a tile. See [border](#border). + +### border + +* `show` - (Optional) The option to enable display of borders for visuals. + +### tile_layout + +* `gutter` - (Optional) The gutter settings that apply between tiles. See [gutter](#gutter). +* `margin` - (Optional) The margin settings that apply around the outside edge of sheets. See [margin](#margin). + +### gutter + +* `show` - (Optional) This Boolean value controls whether to display a gutter space between sheet tiles. + +### margin + +* `show` - (Optional) This Boolean value controls whether to display sheet margins. + +### typography + +* `font_families` - (Optional) Determines the list of font families. Maximum number of 5 items. See [font_families](#font_families). + +### font_families + +* `font_family` - (Optional) Font family name. 
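A sketch combining the `sheet` and `typography` blocks above into a single theme configuration; the boolean choices and the font family value are illustrative assumptions:

```terraform
resource "aws_quicksight_theme" "example" {
  theme_id      = "example"
  name          = "example"
  base_theme_id = "MIDNIGHT"

  configuration {
    sheet {
      tile {
        border {
          show = true # draw borders around visuals
        }
      }
      tile_layout {
        gutter {
          show = false # no gutter space between tiles
        }
        margin {
          show = true # keep sheet margins
        }
      }
    }
    typography {
      font_families {
        font_family = "Amazon Ember" # illustrative font family
      }
    }
  }
}
```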
+ +### ui_color_palette + +* `accent` - (Optional) Color (hexadecimal) that applies to selected states and buttons. +* `accent_foreground` - (Optional) Color (hexadecimal) that applies to any text or other elements that appear over the accent color. +* `danger` - (Optional) Color (hexadecimal) that applies to error messages. +* `danger_foreground` - (Optional) Color (hexadecimal) that applies to any text or other elements that appear over the error color. +* `dimension` - (Optional) Color (hexadecimal) that applies to the names of fields that are identified as dimensions. +* `dimension_foreground` - (Optional) Color (hexadecimal) that applies to any text or other elements that appear over the dimension color. +* `measure` - (Optional) Color (hexadecimal) that applies to the names of fields that are identified as measures. +* `measure_foreground` - (Optional) Color (hexadecimal) that applies to any text or other elements that appear over the measure color. +* `primary_background` - (Optional) Color (hexadecimal) that applies to visuals and other high emphasis UI. +* `primary_foreground` - (Optional) Color (hexadecimal) of text and other foreground elements that appear over the primary background regions, such as grid lines, borders, table banding, icons, and so on. +* `secondary_background` - (Optional) Color (hexadecimal) that applies to the sheet background and sheet controls. +* `secondary_foreground` - (Optional) Color (hexadecimal) that applies to any sheet title, sheet control text, or UI that appears over the secondary background. +* `success` - (Optional) Color (hexadecimal) that applies to success messages, for example the check mark for a successful download. +* `success_foreground` - (Optional) Color (hexadecimal) that applies to any text or other elements that appear over the success color. +* `warning` - (Optional) Color (hexadecimal) that applies to warning and informational messages. 
+* `warning_foreground` - (Optional) Color (hexadecimal) that applies to any text or other elements that appear over the warning color. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the theme. +* `created_time` - The time that the theme was created. +* `id` - A comma-delimited string joining AWS account ID and theme ID. +* `last_updated_time` - The time that the theme was last updated. +* `status` - The theme creation status. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). +* `version_number` - The version number of the theme version. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5m`) +* `update` - (Default `5m`) +* `delete` - (Default `5m`) + +## Import + +A QuickSight Theme can be imported using the AWS account ID and theme ID separated by a comma (`,`) e.g., + +``` +$ terraform import aws_quicksight_theme.example 123456789012,example-id +``` diff --git a/website/docs/r/rds_cluster.html.markdown b/website/docs/r/rds_cluster.html.markdown index 98c7c2cf9f3..5834201ef4d 100644 --- a/website/docs/r/rds_cluster.html.markdown +++ b/website/docs/r/rds_cluster.html.markdown @@ -230,7 +230,6 @@ The following arguments are supported: * `cluster_identifier` - (Optional, Forces new resources) The cluster identifier. If omitted, Terraform will assign a random, unique identifier. * `copy_tags_to_snapshot` – (Optional, boolean) Copy all Cluster `tags` to snapshots. Default is `false`. * `database_name` - (Optional) Name for an automatically created database on cluster creation. 
There are different naming restrictions per database engine: [RDS Naming Constraints][5] -* `db_cluster_instance_class` - (Optional) Compute and memory capacity of each DB instance in the Multi-AZ DB cluster, for example db.m6g.xlarge. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes and availability for your engine, see [DB instance class](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html) in the Amazon RDS User Guide. (This setting is required to create a Multi-AZ DB cluster). * `db_cluster_instance_class` - (Optional) (Required for Multi-AZ DB cluster) The compute and memory capacity of each DB instance in the Multi-AZ DB cluster, for example db.m6g.xlarge. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes and availability for your engine, see [DB instance class](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html) in the Amazon RDS User Guide. * `db_instance_parameter_group_name` - (Optional) Instance parameter group to associate with all instances of the DB cluster. The `db_instance_parameter_group_name` parameter is only valid in combination with the `allow_major_version_upgrade` parameter. * `db_subnet_group_name` - (Optional) DB subnet group to associate with this DB instance. **NOTE:** This must match the `db_subnet_group_name` specified on every [`aws_rds_cluster_instance`](/docs/providers/aws/r/rds_cluster_instance.html) in the cluster. diff --git a/website/docs/r/redshift_cluster.html.markdown b/website/docs/r/redshift_cluster.html.markdown index d0340cd9fb0..d482e5ad3bb 100644 --- a/website/docs/r/redshift_cluster.html.markdown +++ b/website/docs/r/redshift_cluster.html.markdown @@ -61,7 +61,9 @@ The following arguments are supported: The version selected runs on all the nodes in the cluster. 
* `allow_version_upgrade` - (Optional) If true , major version upgrades can be applied during the maintenance window to the Amazon Redshift engine that is running on the cluster. Default is `true`. * `apply_immediately` - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next maintenance window. Default is `false`. -* `aqua_configuration_status` - (Optional) The value represents how the cluster is configured to use AQUA (Advanced Query Accelerator) after the cluster is restored. Possible values are `enabled`, `disabled`, and `auto`. Requires Cluster reboot. +* `aqua_configuration_status` - (Optional, **Deprecated**) The value represents how the cluster is configured to use AQUA (Advanced Query Accelerator) after the cluster is restored. + No longer supported by the AWS API. + Always returns `auto`. * `number_of_nodes` - (Optional) The number of compute nodes in the cluster. This parameter is required when the ClusterType parameter is specified as multi-node. Default is 1. * `publicly_accessible` - (Optional) If true, the cluster can be accessed from a public network. Default is `true`. * `encrypted` - (Optional) If true , the data in the cluster is encrypted at rest. @@ -121,6 +123,7 @@ In addition to all arguments above, the following attributes are exported: * `cluster_public_key` - The public key for the cluster * `cluster_revision_number` - The specific revision number of the database in the cluster * `cluster_nodes` - The nodes in the cluster. Cluster node blocks are documented below +* `cluster_namespace_arn` - The namespace Amazon Resource Name (ARN) of the cluster * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
Cluster nodes (for `cluster_nodes`) support the following attributes: diff --git a/website/docs/r/redshiftserverless_workgroup.html.markdown b/website/docs/r/redshiftserverless_workgroup.html.markdown index ac53634235a..b3d25d43a49 100644 --- a/website/docs/r/redshiftserverless_workgroup.html.markdown +++ b/website/docs/r/redshiftserverless_workgroup.html.markdown @@ -38,7 +38,7 @@ The following arguments are optional: ### Config Parameter -* `parameter_key` - (Required) The key of the parameter. The options are `datestyle`, `enable_user_activity_logging`, `query_group`, `search_path`, and `max_query_execution_time`. +* `parameter_key` - (Required) The key of the parameter. The options are `auto_mv`, `datestyle`, `enable_case_sensitive_identifier`, `enable_user_activity_logging`, `query_group`, `search_path` and [query monitoring metrics](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-wlm-query-monitoring-rules.html#cm-c-wlm-query-monitoring-metrics-serverless) that let you define performance boundaries: `max_query_cpu_time`, `max_query_blocks_read`, `max_scan_row_count`, `max_query_execution_time`, `max_query_queue_time`, `max_query_cpu_usage_percent`, `max_query_temp_blocks_to_disk`, `max_join_row_count` and `max_nested_loop_join_row_count`. * `parameter_value` - (Required) The value of the parameter to set. ## Attributes Reference diff --git a/website/docs/r/resourcegroups_resource.html.markdown b/website/docs/r/resourcegroups_resource.html.markdown new file mode 100644 index 00000000000..e881d18d187 --- /dev/null +++ b/website/docs/r/resourcegroups_resource.html.markdown @@ -0,0 +1,57 @@ +--- +subcategory: "Resource Groups" +layout: "aws" +page_title: "AWS: aws_resourcegroups_resource" +description: |- + Terraform resource for managing an AWS Resource Groups Resource. +--- + +# Resource: aws_resourcegroups_resource + +Terraform resource for managing an AWS Resource Groups Resource. 
+ +## Example Usage + +### Basic Usage + +```terraform +resource "aws_ec2_host" "example" { + instance_family = "t3" + availability_zone = "us-east-1a" + host_recovery = "off" + auto_placement = "on" +} + +resource "aws_resourcegroups_group" "example" { + name = "example" +} + +resource "aws_resourcegroups_resource" "example" { + group_arn = aws_resourcegroups_group.example.arn + resource_arn = aws_ec2_host.example.arn +} +``` + +## Argument Reference + +The following arguments are required: + +* `group_arn` - (Required) The name or the ARN of the resource group to add resources to. +* `resource_arn` - (Required) The ARN of the resource to be added to the group. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `resource_type` - The resource type of a resource, such as `AWS::EC2::Instance`. + +## Timeouts + +[Configuration options](https://developer.hashicorp.com/terraform/language/resources/syntax#operation-timeouts): + +* `create` - (Default `5m`) +* `delete` - (Default `5m`) diff --git a/website/docs/r/route53_health_check.html.markdown index 4d5eb5a6ece..9d26efce621 100644 --- a/website/docs/r/route53_health_check.html.markdown +++ b/website/docs/r/route53_health_check.html.markdown @@ -87,7 +87,7 @@ The following arguments are supported: * `reference_name` - (Optional) This is a reference name used in Caller Reference (helpful for identifying single health_check set amongst others) -* `fqdn` - (Optional) The fully qualified domain name of the endpoint to be checked. +* `fqdn` - (Optional) The fully qualified domain name of the endpoint to be checked. If a value is set for `ip_address`, the value set for `fqdn` will be passed in the `Host` header. * `ip_address` - (Optional) The IP address of the endpoint to be checked. * `port` - (Optional) The port of the endpoint to be checked.
* `type` - (Required) The protocol to use when performing health checks. Valid values are `HTTP`, `HTTPS`, `HTTP_STR_MATCH`, `HTTPS_STR_MATCH`, `TCP`, `CALCULATED`, `CLOUDWATCH_METRIC` and `RECOVERY_CONTROL`. diff --git a/website/docs/r/route53_vpc_association_authorization.html.markdown b/website/docs/r/route53_vpc_association_authorization.html.markdown index cdf851ea040..6c76dcbeca3 100644 --- a/website/docs/r/route53_vpc_association_authorization.html.markdown +++ b/website/docs/r/route53_vpc_association_authorization.html.markdown @@ -42,7 +42,7 @@ resource "aws_route53_zone" "example" { } resource "aws_vpc" "alternate" { - provider = "aws.alternate" + provider = aws.alternate cidr_block = "10.7.0.0/16" enable_dns_hostnames = true @@ -55,7 +55,7 @@ resource "aws_route53_vpc_association_authorization" "example" { } resource "aws_route53_zone_association" "example" { - provider = "aws.alternate" + provider = aws.alternate vpc_id = aws_route53_vpc_association_authorization.example.vpc_id zone_id = aws_route53_vpc_association_authorization.example.zone_id diff --git a/website/docs/r/s3_bucket.html.markdown b/website/docs/r/s3_bucket.html.markdown index 7cac50dadc9..3972d49f570 100644 --- a/website/docs/r/s3_bucket.html.markdown +++ b/website/docs/r/s3_bucket.html.markdown @@ -345,7 +345,7 @@ In addition to all arguments above, the following attributes are exported: * `id` - Name of the bucket. * `arn` - ARN of the bucket. Will be of format `arn:aws:s3:::bucketname`. * `bucket_domain_name` - Bucket domain name. Will be of format `bucketname.s3.amazonaws.com`. -* `bucket_regional_domain_name` - Bucket region-specific domain name. The bucket domain name including the region name, please refer [here](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) for format. 
Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent [redirect issues](https://forums.aws.amazon.com/thread.jspa?threadID=216814) from CloudFront to S3 Origin URL. +* `bucket_regional_domain_name` - The bucket region-specific domain name. The bucket domain name including the region name. Please refer to the [S3 endpoints reference](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region) for format. Note: AWS CloudFront allows specifying an S3 region-specific endpoint when creating an S3 origin. This will prevent redirect issues from CloudFront to the S3 Origin URL. For more information, see the [Virtual Hosted-Style Requests for Other Regions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#deprecated-global-endpoint) section in the AWS S3 User Guide. * `hosted_zone_id` - [Route 53 Hosted Zone ID](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_website_region_endpoints) for this bucket's region. * `region` - AWS region this bucket resides in. * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
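As the CloudFront note above suggests, `bucket_regional_domain_name` can be referenced directly as an S3 origin to avoid the global-endpoint redirect. A minimal sketch (bucket reference and origin ID are assumptions; the remaining required distribution arguments are elided):

```terraform
resource "aws_cloudfront_distribution" "example" {
  origin {
    # Region-specific endpoint avoids redirects from the global S3 endpoint.
    domain_name = aws_s3_bucket.example.bucket_regional_domain_name
    origin_id   = "s3-example"
  }

  # ... remaining required distribution arguments (enabled,
  # default_cache_behavior, restrictions, viewer_certificate) omitted ...
}
```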
diff --git a/website/docs/r/s3_bucket_notification.html.markdown b/website/docs/r/s3_bucket_notification.html.markdown index c4d4e83efbf..f9922dc169a 100644 --- a/website/docs/r/s3_bucket_notification.html.markdown +++ b/website/docs/r/s3_bucket_notification.html.markdown @@ -305,6 +305,19 @@ For Terraform's [JSON syntax](https://www.terraform.io/docs/configuration/syntax } ``` +### Emit events to EventBridge + +```terraform +resource "aws_s3_bucket" "bucket" { + bucket = "your-bucket-name" +} + +resource "aws_s3_bucket_notification" "bucket_notification" { + bucket = aws_s3_bucket.bucket.id + eventbridge = true +} +``` + ## Argument Reference The following arguments are required: @@ -313,7 +326,7 @@ The following arguments are required: The following arguments are optional: -* `eventbridge` - (Optional) Whether to enable Amazon EventBridge notifications. +* `eventbridge` - (Optional) Whether to enable Amazon EventBridge notifications. Defaults to `false`. * `lambda_function` - (Optional, Multiple) Used to configure notifications to a Lambda Function. See below. * `queue` - (Optional) Notification configuration to SQS Queue. See below. * `topic` - (Optional) Notification configuration to SNS Topic. See below. diff --git a/website/docs/r/s3_object.html.markdown b/website/docs/r/s3_object.html.markdown index 3caa9132332..39fa32c6815 100644 --- a/website/docs/r/s3_object.html.markdown +++ b/website/docs/r/s3_object.html.markdown @@ -140,7 +140,7 @@ The following arguments are required: The following arguments are optional: -* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Valid values are `private`, `public-read`, `public-read-write`, `aws-exec-read`, `authenticated-read`, `bucket-owner-read`, and `bucket-owner-full-control`. Defaults to `private`. +* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. 
Valid values are `private`, `public-read`, `public-read-write`, `aws-exec-read`, `authenticated-read`, `bucket-owner-read`, and `bucket-owner-full-control`. * `bucket_key_enabled` - (Optional) Whether or not to use [Amazon S3 Bucket Keys](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html) for SSE-KMS. * `cache_control` - (Optional) Caching behavior along the request/reply chain Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details. * `content_base64` - (Optional, conflicts with `source` and `content`) Base64-encoded data that will be decoded and uploaded as raw bytes for the object content. This allows safely uploading non-UTF8 binary data, but is recommended only for small content such as the result of the `gzipbase64` function with small text strings. For larger objects, use `source` to stream the content from a disk file. diff --git a/website/docs/r/s3_object_copy.html.markdown b/website/docs/r/s3_object_copy.html.markdown index bb676fbafe7..74f6ebe2867 100644 --- a/website/docs/r/s3_object_copy.html.markdown +++ b/website/docs/r/s3_object_copy.html.markdown @@ -36,7 +36,7 @@ The following arguments are required: The following arguments are optional: -* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Defaults to `private`. Valid values are `private`, `public-read`, `public-read-write`, `authenticated-read`, `aws-exec-read`, `bucket-owner-read`, and `bucket-owner-full-control`. Conflicts with `grant`. +* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Valid values are `private`, `public-read`, `public-read-write`, `authenticated-read`, `aws-exec-read`, `bucket-owner-read`, and `bucket-owner-full-control`. Conflicts with `grant`. 
* `cache_control` - (Optional) Specifies caching behavior along the request/reply chain Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details. * `content_disposition` - (Optional) Specifies presentational information for the object. Read [w3c content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information. * `content_encoding` - (Optional) Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read [w3c content encoding](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information. diff --git a/website/docs/r/sagemaker_endpoint_configuration.html.markdown index aa1238f0f58..c7d8f17433e 100644 --- a/website/docs/r/sagemaker_endpoint_configuration.html.markdown +++ b/website/docs/r/sagemaker_endpoint_configuration.html.markdown @@ -68,6 +68,7 @@ The following arguments are supported: * `max_concurrency` - (Required) The maximum number of concurrent invocations your serverless endpoint can process. Valid values are between `1` and `200`. * `memory_size_in_mb` - (Required) The memory size of your serverless endpoint. Valid values are in 1 GB increments: `1024` MB, `2048` MB, `3072` MB, `4096` MB, `5120` MB, or `6144` MB. +* `provisioned_concurrency` - (Optional) The amount of provisioned concurrency to allocate for the serverless endpoint. Should be less than or equal to `max_concurrency`. Valid values are between `1` and `200`.
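A sketch of a serverless endpoint configuration using the new argument (the model reference and names are assumptions); `provisioned_concurrency` should not exceed `max_concurrency`:

```terraform
resource "aws_sagemaker_endpoint_configuration" "example" {
  name = "example"

  production_variants {
    model_name   = aws_sagemaker_model.example.name
    variant_name = "variant-1"

    serverless_config {
      max_concurrency   = 10
      memory_size_in_mb = 2048

      # Keep a portion of capacity warm; must be <= max_concurrency.
      provisioned_concurrency = 5
    }
  }
}
```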
### data_capture_config diff --git a/website/docs/r/sagemaker_model.html.markdown b/website/docs/r/sagemaker_model.html.markdown index 1d581d96f49..1889ac38b0f 100644 --- a/website/docs/r/sagemaker_model.html.markdown +++ b/website/docs/r/sagemaker_model.html.markdown @@ -59,9 +59,10 @@ The following arguments are supported: The `primary_container` and `container` block both support: -* `image` - (Required) The registry path where the inference code image is stored in Amazon ECR. +* `image` - (Optional) The registry path where the inference code image is stored in Amazon ECR. * `mode` - (Optional) The container hosts value `SingleModel/MultiModel`. The default value is `SingleModel`. * `model_data_url` - (Optional) The URL for the S3 location where model artifacts are stored. +* `model_package_name` - (Optional) The Amazon Resource Name (ARN) of the model package to use to create the model. * `container_hostname` - (Optional) The DNS host name for the container. * `environment` - (Optional) Environment variables for the Docker container. A list of key value pairs. 
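With `image` now optional, a model can instead be created from a registered model package via `model_package_name`; a sketch (the role reference and package ARN are illustrative):

```terraform
resource "aws_sagemaker_model" "example" {
  name               = "example"
  execution_role_arn = aws_iam_role.example.arn

  primary_container {
    # Use a registered model package instead of an ECR inference image.
    model_package_name = "arn:aws:sagemaker:us-east-1:123456789012:model-package/example/1"
  }
}
```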
diff --git a/website/docs/r/scheduler_schedule.html.markdown b/website/docs/r/scheduler_schedule.html.markdown index 0d63c00ebdb..c6fc073d8aa 100644 --- a/website/docs/r/scheduler_schedule.html.markdown +++ b/website/docs/r/scheduler_schedule.html.markdown @@ -27,7 +27,7 @@ resource "aws_scheduler_schedule" "example" { mode = "OFF" } - schedule_expression = "rate(1 hour)" + schedule_expression = "rate(1 hours)" target { arn = aws_sqs_queue.example.arn @@ -48,7 +48,7 @@ resource "aws_scheduler_schedule" "example" { mode = "OFF" } - schedule_expression = "rate(1 hour)" + schedule_expression = "rate(1 hours)" target { arn = "arn:aws:scheduler:::aws-sdk:sqs:sendMessage" diff --git a/website/docs/r/secretsmanager_secret.html.markdown b/website/docs/r/secretsmanager_secret.html.markdown index 2cd409d4f90..82550b9583f 100644 --- a/website/docs/r/secretsmanager_secret.html.markdown +++ b/website/docs/r/secretsmanager_secret.html.markdown @@ -20,25 +20,6 @@ resource "aws_secretsmanager_secret" "example" { } ``` -### Rotation Configuration - -To enable automatic secret rotation, the Secrets Manager service requires usage of a Lambda function. The [Rotate Secrets section in the Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) provides additional information about deploying a prebuilt Lambda functions for supported credential rotation (e.g., RDS) or deploying a custom Lambda function. - -~> **NOTE:** Configuring rotation causes the secret to rotate once as soon as you store the secret. Before you do this, you must ensure that all of your applications that use the credentials stored in the secret are updated to retrieve the secret from AWS Secrets Manager. The old credentials might no longer be usable after the initial rotation and any applications that you fail to update will break as soon as the old credentials are no longer valid. 
- -~> **NOTE:** If you cancel a rotation that is in progress (by removing the `rotation` configuration), it can leave the VersionStage labels in an unexpected state. Depending on what step of the rotation was in progress, you might need to remove the staging label AWSPENDING from the partially created version, specified by the SecretVersionId response value. You should also evaluate the partially rotated new version to see if it should be deleted, which you can do by removing all staging labels from the new version's VersionStage field. - -```terraform -resource "aws_secretsmanager_secret" "rotation-example" { - name = "rotation-example" - rotation_lambda_arn = aws_lambda_function.example.arn - - rotation_rules { - automatically_after_days = 7 - } -} -``` - ## Argument Reference The following arguments are supported: @@ -51,8 +32,6 @@ The following arguments are supported: * `recovery_window_in_days` - (Optional) Number of days that AWS Secrets Manager waits before it can delete the secret. This value can be `0` to force deletion without recovery or range from `7` to `30` days. The default value is `30`. * `replica` - (Optional) Configuration block to support secret replication. See details below. * `force_overwrite_replica_secret` - (Optional) Accepts boolean value to specify whether to overwrite a secret with the same name in the destination Region. -* `rotation_lambda_arn` - (Optional, **DEPRECATED**) ARN of the Lambda function that can rotate the secret. Use the `aws_secretsmanager_secret_rotation` resource to manage this configuration instead. As of version 2.67.0, removal of this configuration will no longer remove rotation due to supporting the new resource. Either import the new resource and remove the configuration or manually remove rotation. -* `rotation_rules` - (Optional, **DEPRECATED**) Configuration block for the rotation configuration of this secret. Defined below. 
Use the `aws_secretsmanager_secret_rotation` resource to manage this configuration instead. As of version 2.67.0, removal of this configuration will no longer remove rotation due to supporting the new resource. Either import the new resource and remove the configuration or manually remove rotation. * `tags` - (Optional) Key-value map of user-defined tags that are attached to the secret. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ### replica @@ -60,17 +39,12 @@ The following arguments are supported: * `kms_key_id` - (Optional) ARN, Key ID, or Alias of the AWS KMS key within the region secret is replicated to. If one is not specified, then Secrets Manager defaults to using the AWS account's default KMS key (`aws/secretsmanager`) in the region or creates one for use if non-existent. * `region` - (Required) Region for replicating the secret. -### rotation_rules - -* `automatically_after_days` - (Required) Specifies the number of days between automatic scheduled rotations of the secret. - ## Attributes Reference In addition to all arguments above, the following attributes are exported: * `id` - ARN of the secret. * `arn` - ARN of the secret. -* `rotation_enabled` - Whether automatic rotation is enabled for this secret. * `replica` - Attributes of a replica are described below. * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block). 
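With the deprecated `rotation_lambda_arn` and `rotation_rules` arguments removed above, rotation is configured through the standalone `aws_secretsmanager_secret_rotation` resource; a sketch (the Lambda function reference is an assumption):

```terraform
resource "aws_secretsmanager_secret" "example" {
  name = "rotation-example"
}

resource "aws_secretsmanager_secret_rotation" "example" {
  secret_id           = aws_secretsmanager_secret.example.id
  rotation_lambda_arn = aws_lambda_function.example.arn

  rotation_rules {
    automatically_after_days = 7
  }
}
```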
diff --git a/website/docs/r/security_group.html.markdown b/website/docs/r/security_group.html.markdown index 4ed02d408e6..c26e654874d 100644 --- a/website/docs/r/security_group.html.markdown +++ b/website/docs/r/security_group.html.markdown @@ -93,6 +93,20 @@ resource "aws_vpc_endpoint" "my_endpoint" { You can also find a specific Prefix List using the `aws_prefix_list` data source. +### Removing All Ingress and Egress Rules + +The `ingress` and `egress` arguments are processed in [attributes-as-blocks](https://developer.hashicorp.com/terraform/language/attr-as-blocks) mode. Due to this, removing these arguments from the configuration will **not** cause Terraform to destroy the managed rules. To subsequently remove all managed ingress and egress rules: + +```terraform +resource "aws_security_group" "example" { + name = "sg" + vpc_id = aws_vpc.example.id + + ingress = [] + egress = [] +} +``` + ### Recreating a Security Group A simple security group `name` change "forces new" the security group--Terraform destroys the security group and creates a new one. (Likewise, `description`, `name_prefix`, or `vpc_id` [cannot be changed](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#creating-security-group).) Attempting to recreate the security group leads to a variety of complications depending on how it is used. @@ -236,6 +250,8 @@ The following arguments are required: The following arguments are optional: +~> **Note** Although `cidr_blocks`, `ipv6_cidr_blocks`, `prefix_list_ids`, and `security_groups` are all marked as optional, you _must_ provide one of them in order to configure the source of the traffic. + * `cidr_blocks` - (Optional) List of CIDR blocks. * `description` - (Optional) Description of this ingress rule. * `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. 
@@ -254,6 +270,8 @@ The following arguments are required: The following arguments are optional: +~> **Note** Although `cidr_blocks`, `ipv6_cidr_blocks`, `prefix_list_ids`, and `security_groups` are all marked as optional, you _must_ provide one of them in order to configure the destination of the traffic. + * `cidr_blocks` - (Optional) List of CIDR blocks. * `description` - (Optional) Description of this egress rule. * `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. diff --git a/website/docs/r/security_group_rule.html.markdown b/website/docs/r/security_group_rule.html.markdown index 342283ddab0..ebcfd153467 100644 --- a/website/docs/r/security_group_rule.html.markdown +++ b/website/docs/r/security_group_rule.html.markdown @@ -99,6 +99,8 @@ or `egress` (outbound). The following arguments are optional: +~> **Note** Although `cidr_blocks`, `ipv6_cidr_blocks`, `prefix_list_ids`, and `source_security_group_id` are all marked as optional, you _must_ provide one of them in order to configure the source of the traffic. + * `cidr_blocks` - (Optional) List of CIDR blocks. Cannot be specified with `source_security_group_id` or `self`. * `description` - (Optional) Description of the rule. * `ipv6_cidr_blocks` - (Optional) List of IPv6 CIDR blocks. Cannot be specified with `source_security_group_id` or `self`. diff --git a/website/docs/r/securityhub_standards_control.markdown b/website/docs/r/securityhub_standards_control.markdown index f4a4d9612e5..a791fcdd688 100644 --- a/website/docs/r/securityhub_standards_control.markdown +++ b/website/docs/r/securityhub_standards_control.markdown @@ -37,7 +37,7 @@ resource "aws_securityhub_standards_control" "ensure_iam_password_policy_prevent The following arguments are supported: -* `standards_control_arn` - (Required) The standards control ARN. +* `standards_control_arn` - (Required) The standards control ARN. 
See the AWS documentation for how to list existing controls using [`get-enabled-standards`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securityhub/get-enabled-standards.html) and [`describe-standards-controls`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securityhub/describe-standards-controls.html). * `control_status` – (Required) The control status could be `ENABLED` or `DISABLED`. You have to specify `disabled_reason` argument for `DISABLED` control status. * `disabled_reason` – (Optional) A description of the reason why you are disabling a security standard control. If you specify this attribute, `control_status` will be set to `DISABLED` automatically. diff --git a/website/docs/r/ses_active_receipt_rule_set.html.markdown b/website/docs/r/ses_active_receipt_rule_set.html.markdown index fb8ed8e2d3c..5b54b2ed6e2 100644 --- a/website/docs/r/ses_active_receipt_rule_set.html.markdown +++ b/website/docs/r/ses_active_receipt_rule_set.html.markdown @@ -30,3 +30,11 @@ In addition to all arguments above, the following attributes are exported: * `id` - The SES receipt rule set name. * `arn` - The SES receipt rule set ARN. + +## Import + +Active SES receipt rule sets can be imported using the rule set name. + +``` +$ terraform import aws_ses_active_receipt_rule_set.my_rule_set my_rule_set_name +``` diff --git a/website/docs/r/sesv2_email_identity.html.markdown b/website/docs/r/sesv2_email_identity.html.markdown index c40c28fef7c..d59df831e45 100644 --- a/website/docs/r/sesv2_email_identity.html.markdown +++ b/website/docs/r/sesv2_email_identity.html.markdown @@ -58,11 +58,15 @@ resource "aws_sesv2_email_identity" "example" { ## Argument Reference -The following arguments are supported: +The following arguments are required: * `email_identity` - (Required) The email address or domain to verify. 
+ +The following arguments are optional: + * `configuration_set_name` - (Optional) The configuration set to use by default when sending from this identity. Note that any configuration set defined in the email sending request takes precedence. * `dkim_signing_attributes` - (Optional) The configuration of the DKIM authentication settings for an email domain identity. +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ### dkim_signing_attributes @@ -78,7 +82,7 @@ The following arguments are supported: In addition to all arguments above, the following attributes are exported: * `arn` - ARN of the Email Identity. -* `dkim_signing_attributes` - An object that contains information about the private key and selector that you want to use to configure DKIM for the identity for Bring Your Own DKIM (BYODKIM) for the identity, or, configures the key length to be used for Easy DKIM. +* `dkim_signing_attributes` - A list of objects that contains at most one element with information about the private key and selector that you want to use to configure DKIM for the identity for Bring Your Own DKIM (BYODKIM) for the identity, or, configures the key length to be used for Easy DKIM. * `current_signing_key_length` - [Easy DKIM] The key length of the DKIM key pair in use. * `last_key_generation_timestamp` - [Easy DKIM] The last time a key pair was generated for this identity. * `next_signing_key_length` - [Easy DKIM] The key length of the future DKIM key pair to be generated. This can be changed at most once per day. @@ -86,7 +90,7 @@ In addition to all arguments above, the following attributes are exported: * `status` - Describes whether or not Amazon SES has successfully located the DKIM records in the DNS records for the domain. 
See the [AWS SES API v2 Reference](https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_DkimAttributes.html#SES-Type-DkimAttributes-Status) for supported statuses. * `tokens` - If you used Easy DKIM to configure DKIM authentication for the domain, then this object contains a set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete. If you configured DKIM authentication for the domain by providing your own public-private key pair, then this object contains the selector for the public key. * `identity_type` - The email identity type. Valid values: `EMAIL_ADDRESS`, `DOMAIN`. -* `tags` - (Optional) A map of tags to assign to the service. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). * `verified_for_sending_status` - Specifies whether or not the identity is verified. ## Import diff --git a/website/docs/r/sfn_alias.html.markdown b/website/docs/r/sfn_alias.html.markdown new file mode 100644 index 00000000000..9df6c666ed4 --- /dev/null +++ b/website/docs/r/sfn_alias.html.markdown @@ -0,0 +1,72 @@ +--- +subcategory: "SFN (Step Functions)" +layout: "aws" +page_title: "AWS: aws_sfn_alias" +description: |- + Provides a Step Function State Machine Alias. +--- + +# Resource: aws_sfn_alias + +Provides a Step Function State Machine Alias. 
+ +## Example Usage + +### Basic Usage + +```terraform +resource "aws_sfn_alias" "sfn_alias" { + name = "my_sfn_alias" + + routing_configuration { + state_machine_version_arn = aws_sfn_state_machine.sfn_test.state_machine_version_arn + weight = 100 + } +} + +resource "aws_sfn_alias" "my_sfn_alias" { + name = "my_sfn_alias" + + routing_configuration { + state_machine_version_arn = "arn:aws:states:us-east-1:12345:stateMachine:demo:3" + weight = 50 + } + + routing_configuration { + state_machine_version_arn = "arn:aws:states:us-east-1:12345:stateMachine:demo:2" + weight = 50 + } +} +``` + +## Argument Reference + +The following arguments are required: + +* `name` - (Required) Name for the alias you are creating. +* `routing_configuration` - (Required) The state machine alias' routing configuration settings. Fields documented below. + +For **routing_configuration** the following attributes are supported: + +* `state_machine_version_arn` - (Required) A version of the state machine. +* `weight` - (Required) Percentage of traffic routed to the state machine version. + +The following arguments are optional: + +* `description` - (Optional) Description of the alias. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) identifying your state machine alias. +* `creation_date` - The date the state machine alias was created.
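The two-version pattern above also supports gradual rollouts; a sketch (the version ARNs and split are illustrative — the `weight` values are percentages and across all `routing_configuration` blocks should total 100):

```terraform
# Canary-style split: keep most traffic on the current version while a
# small share exercises the newly published one. ARNs are illustrative.
resource "aws_sfn_alias" "canary" {
  name = "canary"

  routing_configuration {
    state_machine_version_arn = "arn:aws:states:us-east-1:123456789012:stateMachine:demo:2"
    weight                    = 90
  }

  routing_configuration {
    state_machine_version_arn = "arn:aws:states:us-east-1:123456789012:stateMachine:demo:3"
    weight                    = 10
  }
}
```

Shifting the weights toward the new version on subsequent applies completes the rollout without changing the alias consumers use.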
+ +## Import + +SFN (Step Functions) Alias can be imported using the `arn`, e.g., + +``` +$ terraform import aws_sfn_alias.foo arn:aws:states:us-east-1:123456789098:stateMachine:myStateMachine:foo +``` diff --git a/website/docs/r/sfn_state_machine.html.markdown b/website/docs/r/sfn_state_machine.html.markdown index ca4d0c0722e..200bab2f87d 100644 --- a/website/docs/r/sfn_state_machine.html.markdown +++ b/website/docs/r/sfn_state_machine.html.markdown @@ -63,6 +63,33 @@ EOF } ``` +### Publish (Publish SFN version) + +```terraform +# ... + +resource "aws_sfn_state_machine" "sfn_state_machine" { + name = "my-state-machine" + role_arn = aws_iam_role.iam_for_sfn.arn + publish = true + type = "EXPRESS" + + definition = <<EOF +# ... (state machine definition) +EOF +} +``` ~> *NOTE:* See the [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) for more information about enabling Step Function logging. @@ -104,6 +131,7 @@ The following arguments are supported: * `logging_configuration` - (Optional) Defines what execution history events are logged and where they are logged. The `logging_configuration` parameter is only valid when `type` is set to `EXPRESS`. Defaults to `OFF`. For more information see [Logging Express Workflows](https://docs.aws.amazon.com/step-functions/latest/dg/cw-logs.html) and [Log Levels](https://docs.aws.amazon.com/step-functions/latest/dg/cloudwatch-log-level.html) in the AWS Step Functions User Guide. * `name` - (Optional) The name of the state machine. The name should only contain `0`-`9`, `A`-`Z`, `a`-`z`, `-` and `_`. If omitted, Terraform will assign a random, unique name. * `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`. +* `publish` - (Optional) Set to true to publish a version of the state machine during creation. Default: false. * `role_arn` - (Required) The Amazon Resource Name (ARN) of the IAM role to use for this state machine. * `tags` - (Optional) Key-value map of resource tags.
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `tracing_configuration` - (Optional) Selects whether AWS X-Ray tracing is enabled. diff --git a/website/docs/r/shield_protection.html.markdown b/website/docs/r/shield_protection.html.markdown index c0891611249..51ca843c6d4 100644 --- a/website/docs/r/shield_protection.html.markdown +++ b/website/docs/r/shield_protection.html.markdown @@ -21,7 +21,7 @@ data "aws_region" "current" {} data "aws_caller_identity" "current" {} resource "aws_eip" "example" { - vpc = true + domain = "vpc" } resource "aws_shield_protection" "example" { diff --git a/website/docs/r/shield_protection_group.html.markdown b/website/docs/r/shield_protection_group.html.markdown index 7954143748d..45acba0a9fd 100644 --- a/website/docs/r/shield_protection_group.html.markdown +++ b/website/docs/r/shield_protection_group.html.markdown @@ -31,7 +31,7 @@ data "aws_region" "current" {} data "aws_caller_identity" "current" {} resource "aws_eip" "example" { - vpc = true + domain = "vpc" } resource "aws_shield_protection" "example" { diff --git a/website/docs/r/shield_protection_health_check_association.html.markdown b/website/docs/r/shield_protection_health_check_association.html.markdown index 69220b785f3..776e7a94757 100644 --- a/website/docs/r/shield_protection_health_check_association.html.markdown +++ b/website/docs/r/shield_protection_health_check_association.html.markdown @@ -23,7 +23,7 @@ data "aws_caller_identity" "current" {} data "aws_partition" "current" {} resource "aws_eip" "example" { - vpc = true + domain = "vpc" tags = { Name = "example" } diff --git a/website/docs/r/sns_topic_subscription.html.markdown b/website/docs/r/sns_topic_subscription.html.markdown index d44b1618a1c..4470a1c1a3f 100644 --- 
a/website/docs/r/sns_topic_subscription.html.markdown +++ b/website/docs/r/sns_topic_subscription.html.markdown @@ -205,20 +205,20 @@ provider "aws" { } resource "aws_sns_topic" "sns-topic" { - provider = "aws.sns" + provider = aws.sns name = var.sns["name"] display_name = var.sns["display_name"] policy = data.aws_iam_policy_document.sns-topic-policy.json } resource "aws_sqs_queue" "sqs-queue" { - provider = "aws.sqs" + provider = aws.sqs name = var.sqs["name"] policy = data.aws_iam_policy_document.sqs-queue-policy.json } resource "aws_sns_topic_subscription" "sns-topic" { - provider = "aws.sns2sqs" + provider = aws.sns2sqs topic_arn = aws_sns_topic.sns-topic.arn protocol = "sqs" endpoint = aws_sqs_queue.sqs-queue.arn diff --git a/website/docs/r/spot_instance_request.html.markdown b/website/docs/r/spot_instance_request.html.markdown index d1f73ede98f..8d5f8c8698a 100644 --- a/website/docs/r/spot_instance_request.html.markdown +++ b/website/docs/r/spot_instance_request.html.markdown @@ -32,7 +32,7 @@ documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-in for more information. ~> **NOTE [AWS strongly discourages](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html#which-spot-request-method-to-use) the use of the legacy APIs called by this resource. -We recommend using the [EC2 Fleet](ec2_fleet.html) or [Auto Scaling Group](autoscaling_group.html) resources instead. +We recommend using the [EC2 Instance](instance.html) resource with `instance_market_options` instead. ## Example Usage diff --git a/website/docs/r/ssm_parameter.html.markdown b/website/docs/r/ssm_parameter.html.markdown index 0f995b42ece..93e313072d1 100644 --- a/website/docs/r/ssm_parameter.html.markdown +++ b/website/docs/r/ssm_parameter.html.markdown @@ -69,7 +69,7 @@ The following arguments are optional: * `description` - (Optional) Description of the parameter. 
* `insecure_value` - (Optional, exactly one of `value` or `insecure_value` is required) Value of the parameter. **Use caution:** This value is _never_ marked as sensitive in the Terraform plan output. This argument is not valid with a `type` of `SecureString`. * `key_id` - (Optional) KMS key ID or ARN for encrypting a SecureString. -* `overwrite` - (Optional) Overwrite an existing parameter. If not specified, will default to `false` if the resource has not been created by terraform to avoid overwrite of existing resource and will default to `true` otherwise (terraform lifecycle rules should then be used to manage the update behavior). +* `overwrite` - (Optional, **Deprecated**) Overwrite an existing parameter. If not specified, will default to `false` if the resource has not been created by terraform to avoid overwrite of existing resource and will default to `true` otherwise (terraform lifecycle rules should then be used to manage the update behavior). * `tags` - (Optional) Map of tags to assign to the object. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `tier` - (Optional) Parameter tier to assign to the parameter. If not specified, will use the default parameter tier for the region. Valid tiers are `Standard`, `Advanced`, and `Intelligent-Tiering`. Downgrading an `Advanced` tier parameter to `Standard` will recreate the resource. For more information on parameter tiers, see the [AWS SSM Parameter tier comparison and guide](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-advanced-parameters.html). * `value` - (Optional, exactly one of `value` or `insecure_value` is required) Value of the parameter. This value is always marked as sensitive in the Terraform plan output, regardless of `type`. 
In Terraform CLI version 0.15 and later, this may require additional configuration handling for certain scenarios. For more information, see the [Terraform v0.15 Upgrade Guide](https://www.terraform.io/upgrade-guides/0-15.html#sensitive-output-values). diff --git a/website/docs/r/ssm_patch_baseline.html.markdown b/website/docs/r/ssm_patch_baseline.html.markdown index 180ac63c853..86c3d1e3151 100644 --- a/website/docs/r/ssm_patch_baseline.html.markdown +++ b/website/docs/r/ssm_patch_baseline.html.markdown @@ -164,9 +164,11 @@ The following arguments are supported: * `description` - (Optional) The description of the patch baseline. * `operating_system` - (Optional) The operating system the patch baseline applies to. Valid values are + `ALMA_LINUX`, `AMAZON_LINUX`, `AMAZON_LINUX_2`, `AMAZON_LINUX_2022`, + `AMAZON_LINUX_2023`, `CENTOS`, `DEBIAN`, `MACOS`, diff --git a/website/docs/r/timestreamwrite_table.html.markdown b/website/docs/r/timestreamwrite_table.html.markdown index ce0535bba81..7da26afcc05 100644 --- a/website/docs/r/timestreamwrite_table.html.markdown +++ b/website/docs/r/timestreamwrite_table.html.markdown @@ -39,6 +39,23 @@ resource "aws_timestreamwrite_table" "example" { } ``` +### Customer-defined Partition Key + +```hcl +resource "aws_timestreamwrite_table" "example" { + database_name = aws_timestreamwrite_database.example.database_name + table_name = "example" + + schema { + composite_partition_key { + enforcement_in_record = "REQUIRED" + name = "attr1" + type = "DIMENSION" + } + } +} +``` + ## Argument Reference The following arguments are supported: @@ -46,6 +63,7 @@ The following arguments are supported: * `database_name` – (Required) The name of the Timestream database. * `magnetic_store_write_properties` - (Optional) Contains properties to set on the table when enabling magnetic store writes. See [Magnetic Store Write Properties](#magnetic-store-write-properties) below for more details. 
* `retention_properties` - (Optional) The retention duration for the memory store and magnetic store. See [Retention Properties](#retention-properties) below for more details. If not provided, `magnetic_store_retention_period_in_days` default to 73000 and `memory_store_retention_period_in_hours` defaults to 6. +* `schema` - (Optional) The schema of the table. See [Schema](#schema) below for more details. * `table_name` - (Required) The name of the Timestream table. * `tags` - (Optional) Map of tags to assign to this resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. @@ -78,6 +96,20 @@ The `retention_properties` block supports the following arguments: * `magnetic_store_retention_period_in_days` - (Required) The duration for which data must be stored in the magnetic store. Minimum value of 1. Maximum value of 73000. * `memory_store_retention_period_in_hours` - (Required) The duration for which data must be stored in the memory store. Minimum value of 1. Maximum value of 8766. +### Schema + +The `schema` block supports the following arguments: + +* `composite_partition_key` - (Required) A non-empty list of partition keys defining the attributes used to partition the table data. The order of the list determines the partition hierarchy. The name and type of each partition key as well as the partition key order cannot be changed after the table is created. However, the enforcement level of each partition key can be changed. See [Composite Partition Key](#composite-partition-key) below for more details. + +### Composite Partition Key + +The `composite_partition_key` block supports the following arguments: + +* `enforcement_in_record` - (Optional) The level of enforcement for the specification of a dimension key in ingested records. 
Valid values: `REQUIRED`, `OPTIONAL`. +* `name` - (Optional) The name of the attribute used for a dimension key. +* `type` - (Required) The type of the partition key. Valid values: `DIMENSION`, `MEASURE`. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/transfer_agreement.html.markdown b/website/docs/r/transfer_agreement.html.markdown new file mode 100644 index 00000000000..4a863e8428f --- /dev/null +++ b/website/docs/r/transfer_agreement.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "Transfer Family" +layout: "aws" +page_title: "AWS: aws_transfer_agreement" +description: |- + Provides an AWS Transfer AS2 Agreement Resource +--- + +# Resource: aws_transfer_agreement + +Provides an AWS Transfer AS2 Agreement resource. + +## Example Usage + +### Basic + +```terraform +resource "aws_transfer_agreement" "example" { + access_role = aws_iam_role.test.arn + base_directory = "/DOC-EXAMPLE-BUCKET/home/mydirectory" + description = "example" + local_profile_id = aws_transfer_profile.local.profile_id + partner_profile_id = aws_transfer_profile.partner.profile_id + server_id = aws_transfer_server.test.id +} +``` + +## Argument Reference + +The following arguments are supported: + +* `access_role` - (Required) The IAM Role which provides read and write access to the parent directory of the file location mentioned in the StartFileTransfer request. +* `base_directory` - (Required) The landing directory for the files transferred by using the AS2 protocol. +* `description` - (Optional) An optional description of the agreement. +* `local_profile_id` - (Required) The unique identifier for the AS2 local profile. +* `partner_profile_id` - (Required) The unique identifier for the AS2 partner profile. +* `server_id` - (Required) The unique server identifier for the server instance. This is the specific server the agreement uses. +* `tags` - (Optional) A map of tags to assign to the resource.
If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `agreement_id` - The unique identifier for the AS2 agreement +* `status` - The status of the agreement, either `ACTIVE` or `INACTIVE`. + +## Import + +Transfer AS2 Agreement can be imported using the `server_id/agreement_id`, e.g., + +``` +$ terraform import aws_transfer_agreement.example s-4221a88afd5f4362a/a-4221a88afd5f4362a +``` diff --git a/website/docs/r/transfer_certificate.html.markdown b/website/docs/r/transfer_certificate.html.markdown new file mode 100644 index 00000000000..7b1f4ac9799 --- /dev/null +++ b/website/docs/r/transfer_certificate.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "Transfer Family" +layout: "aws" +page_title: "AWS: aws_transfer_certificate" +description: |- + Provides an AWS Transfer AS2 Certificate Resource +--- + +# Resource: aws_transfer_certificate + +Provides an AWS Transfer AS2 Certificate resource. + +## Example Usage + +### Basic + +```terraform +resource "aws_transfer_certificate" "example" { + certificate = file("${path.module}/example.com/example.crt") + certificate_chain = file("${path.module}/example.com/ca.crt") + private_key = file("${path.module}/example.com/example.key") + description = "example" + usage = "SIGNING" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `certificate` - (Required) The valid certificate file required for the transfer. +* `certificate_chain` - (Optional) The optional list of certificates that make up the chain for the certificate that is being imported. +* `description` - (Optional) A short description that helps identify the certificate.
+* `private_key` - (Optional) The private key associated with the certificate being imported. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `usage` - (Required) Specifies if a certificate is being used for signing or encryption. The valid values are `SIGNING` and `ENCRYPTION`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `certificate_id` - The unique identifier for the AS2 certificate +* `active_date` - The date when the certificate becomes active +* `inactive_date` - The date when the certificate becomes inactive + +## Import + +Transfer AS2 Certificate can be imported using the `certificate_id`, e.g., + +``` +$ terraform import aws_transfer_certificate.example c-4221a88afd5f4362a +``` diff --git a/website/docs/r/transfer_connector.html.markdown b/website/docs/r/transfer_connector.html.markdown new file mode 100644 index 00000000000..d84cac651e0 --- /dev/null +++ b/website/docs/r/transfer_connector.html.markdown @@ -0,0 +1,67 @@ +--- +subcategory: "Transfer Family" +layout: "aws" +page_title: "AWS: aws_transfer_connector" +description: |- + Provides an AWS Transfer AS2 Connector Resource +--- + +# Resource: aws_transfer_connector + +Provides an AWS Transfer AS2 Connector resource.
+ +## Example Usage + +### Basic + +```terraform +resource "aws_transfer_connector" "example" { + access_role = aws_iam_role.test.arn + as2_config { + compression = "DISABLED" + encryption_algorithm = "AES128_CBC" + message_subject = "For Connector" + local_profile_id = aws_transfer_profile.local.profile_id + mdn_response = "NONE" + mdn_signing_algorithm = "NONE" + partner_profile_id = aws_transfer_profile.partner.profile_id + signing_algorithm = "NONE" + } + url = "http://www.test.com" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `access_role` - (Required) The IAM Role which provides read and write access to the parent directory of the file location mentioned in the StartFileTransfer request. +* `as2_config` - (Required) The parameters to configure for the connector object. Fields documented below. +* `logging_role` - (Optional) The IAM Role which is required for allowing the connector to turn on CloudWatch logging for Amazon S3 events. +* `url` - (Required) The URL of the partner's AS2 endpoint. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +### As2Config Details + +* `compression` - (Required) Specifies whether the AS2 file is compressed. The valid values are `ZLIB` and `DISABLED`. +* `encryption_algorithm` - (Required) The algorithm that is used to encrypt the file. The valid values are `AES128_CBC` | `AES192_CBC` | `AES256_CBC` | `NONE`. +* `local_profile_id` - (Required) The unique identifier for the AS2 local profile. +* `mdn_response` - (Required) Used for outbound requests to determine if a partner response for transfers is synchronous or asynchronous. The valid values are `SYNC` and `NONE`. +* `mdn_signing_algorithm` - (Optional) The signing algorithm for the MDN response.
The valid values are `SHA256` | `SHA384` | `SHA512` | `SHA1` | `NONE` | `DEFAULT`. +* `message_subject` - (Optional) Used as the subject HTTP header attribute in AS2 messages that are being sent with the connector. +* `partner_profile_id` - (Required) The unique identifier for the AS2 partner profile. +* `signing_algorithm` - (Required) The algorithm that is used to sign AS2 messages sent with the connector. The valid values are `SHA256` | `SHA384` | `SHA512` | `SHA1` | `NONE`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `connector_id` - The unique identifier for the AS2 connector + +## Import + +Transfer AS2 Connector can be imported using the `connector_id`, e.g., + +``` +$ terraform import aws_transfer_connector.example c-4221a88afd5f4362a +``` diff --git a/website/docs/r/transfer_profile.html.markdown b/website/docs/r/transfer_profile.html.markdown new file mode 100644 index 00000000000..ea374046a23 --- /dev/null +++ b/website/docs/r/transfer_profile.html.markdown @@ -0,0 +1,46 @@ +--- +subcategory: "Transfer Family" +layout: "aws" +page_title: "AWS: aws_transfer_profile" +description: |- + Provides an AWS Transfer AS2 Profile Resource +--- + +# Resource: aws_transfer_profile + +Provides an AWS Transfer AS2 Profile resource. + +## Example Usage + +### Basic + +```terraform +resource "aws_transfer_profile" "example" { + as2_id = "example" + certificate_ids = [aws_transfer_certificate.example.certificate_id] + profile_type = "LOCAL" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `as2_id` - (Required) The AS2 name, as defined in RFC 4130. For inbound transfers, this is the AS2 From header for the AS2 messages sent from the partner. For outbound messages, this is the AS2 To header for the AS2 messages sent to the partner. This ID cannot include spaces. +* `certificate_ids` - (Optional) The list of certificate IDs from the imported certificate operation.
+* `profile_type` - (Required) The profile type. Valid values are `LOCAL` or `PARTNER`. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `profile_id` - The unique identifier for the AS2 profile + +## Import + +Transfer AS2 Profile can be imported using the `profile_id`, e.g., + +``` +$ terraform import aws_transfer_profile.example p-4221a88afd5f4362a +``` diff --git a/website/docs/r/transfer_server.html.markdown b/website/docs/r/transfer_server.html.markdown index 962f3facbd1..bf8b2b681c9 100644 --- a/website/docs/r/transfer_server.html.markdown +++ b/website/docs/r/transfer_server.html.markdown @@ -109,7 +109,7 @@ The following arguments are supported: * `post_authentication_login_banner`- (Optional) Specify a string to display when users connect to a server. This string is displayed after the user authenticates. The SFTP protocol does not support post-authentication display banners. * `pre_authentication_login_banner`- (Optional) Specify a string to display when users connect to a server. This string is displayed before the user authenticates. * `protocol_details`- (Optional) The protocol settings that are configured for your server. -* `security_policy_name` - (Optional) Specifies the name of the security policy that is attached to the server. Possible values are `TransferSecurityPolicy-2018-11`, `TransferSecurityPolicy-2020-06`, `TransferSecurityPolicy-FIPS-2020-06` and `TransferSecurityPolicy-2022-03`. Default value is: `TransferSecurityPolicy-2018-11`. +* `security_policy_name` - (Optional) Specifies the name of the security policy that is attached to the server.
Possible values are `TransferSecurityPolicy-2018-11`, `TransferSecurityPolicy-2020-06`, `TransferSecurityPolicy-FIPS-2020-06`, `TransferSecurityPolicy-2022-03` and `TransferSecurityPolicy-2023-05`. Default value is: `TransferSecurityPolicy-2018-11`. * `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `workflow_details` - (Optional) Specifies the workflow details. See Workflow Details below. diff --git a/website/docs/r/transfer_user.html.markdown b/website/docs/r/transfer_user.html.markdown index c717196b538..8fbd961daba 100644 --- a/website/docs/r/transfer_user.html.markdown +++ b/website/docs/r/transfer_user.html.markdown @@ -51,7 +51,7 @@ data "aws_iam_policy_document" "foo" { resource "aws_iam_role_policy" "foo" { name = "tf-test-transfer-user-iam-policy" role = aws_iam_role.foo.id - policy = data.aws_iam_role_policy.foo.json + policy = data.aws_iam_policy_document.foo.json } resource "aws_transfer_user" "foo" { @@ -78,7 +78,7 @@ The following arguments are supported: * `home_directory_type` - (Optional) The type of landing directory (folder) you mapped for your users' home directory. Valid values are `PATH` and `LOGICAL`. * `policy` - (Optional) An IAM JSON policy document that scopes down user access to portions of their Amazon S3 bucket. IAM variables you can use inside this policy include `${Transfer:UserName}`, `${Transfer:HomeDirectory}`, and `${Transfer:HomeBucket}`. Since the IAM variable syntax matches Terraform's interpolation syntax, they must be escaped inside Terraform configuration strings (`$${Transfer:UserName}`). These are evaluated on-the-fly when navigating the bucket. 
* `posix_profile` - (Optional) Specifies the full POSIX identity, including user ID (Uid), group ID (Gid), and any secondary groups IDs (SecondaryGids), that controls your users' access to your Amazon EFS file systems. See [Posix Profile](#posix-profile) below. -* `role` - (Required) Amazon Resource Name (ARN) of an IAM role that allows the service to controls your user’s access to your Amazon S3 bucket. +* `role` - (Required) Amazon Resource Name (ARN) of an IAM role that allows the service to control your user’s access to your Amazon S3 bucket. * `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ### Home Directory Mappings diff --git a/website/docs/r/vpc_endpoint.html.markdown b/website/docs/r/vpc_endpoint.html.markdown index b2190cf16eb..4cab5713801 100644 --- a/website/docs/r/vpc_endpoint.html.markdown +++ b/website/docs/r/vpc_endpoint.html.markdown @@ -132,6 +132,7 @@ If no security groups are specified, the VPC's [default security group](https:// ### dns_options * `dns_record_ip_type` - (Optional) The DNS records created for the endpoint. Valid values are `ipv4`, `dualstack`, `service-defined`, and `ipv6`. +* `private_dns_only_for_inbound_resolver_endpoint` - (Optional) Indicates whether to enable private DNS only for inbound endpoints. This option is available only for services that support both gateway and interface endpoints. It routes traffic that originates from the VPC to the gateway endpoint and traffic that originates from on-premises to the interface endpoint. Can only be specified if `private_dns_enabled` is `true`. 
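A sketch of how `private_dns_only_for_inbound_resolver_endpoint` fits into a `dns_options` block (the VPC and subnet references are illustrative; Amazon S3 is one service that supports both gateway and interface endpoints):

```terraform
resource "aws_vpc_endpoint" "s3_interface" {
  vpc_id            = aws_vpc.example.id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Interface"
  subnet_ids        = [aws_subnet.example.id]

  # private_dns_enabled must be true for the inbound-resolver-only option below
  private_dns_enabled = true

  dns_options {
    dns_record_ip_type                             = "ipv4"
    private_dns_only_for_inbound_resolver_endpoint = true
  }
}
```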
## Timeouts diff --git a/website/docs/r/vpc_endpoint_connection_accepter.html.markdown b/website/docs/r/vpc_endpoint_connection_accepter.html.markdown index 5ec194ac583..cdb0d38a32a 100644 --- a/website/docs/r/vpc_endpoint_connection_accepter.html.markdown +++ b/website/docs/r/vpc_endpoint_connection_accepter.html.markdown @@ -21,7 +21,7 @@ resource "aws_vpc_endpoint_service" "example" { } resource "aws_vpc_endpoint" "example" { - provider = "aws.alternate" + provider = aws.alternate vpc_id = aws_vpc.test_alternate.id service_name = aws_vpc_endpoint_service.test.service_name diff --git a/website/docs/r/vpc_peering_connection_options.html.markdown b/website/docs/r/vpc_peering_connection_options.html.markdown index 6aea4f38c7e..33a1a7409e1 100644 --- a/website/docs/r/vpc_peering_connection_options.html.markdown +++ b/website/docs/r/vpc_peering_connection_options.html.markdown @@ -138,22 +138,14 @@ The following arguments are supported: * `vpc_peering_connection_id` - (Required) The ID of the requester VPC peering connection. -* `accepter` (Optional) - An optional configuration block that allows for [VPC Peering Connection] -(https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) options to be set for the VPC that accepts -the peering connection (a maximum of one). -* `requester` (Optional) - A optional configuration block that allows for [VPC Peering Connection] -(https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) options to be set for the VPC that requests -the peering connection (a maximum of one). +* `accepter` (Optional) - An optional configuration block that allows for [VPC Peering Connection](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) options to be set for the VPC that accepts the peering connection (a maximum of one).
+* `requester` (Optional) - An optional configuration block that allows for [VPC Peering Connection](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) options to be set for the VPC that requests the peering connection (a maximum of one). #### Accepter and Requester Arguments --> **Note:** When enabled, the DNS resolution feature requires that VPCs participating in the peering -must have support for the DNS hostnames enabled. This can be done using the [`enable_dns_hostnames`] -(vpc.html#enable_dns_hostnames) attribute in the [`aws_vpc`](vpc.html) resource. See [Using DNS with Your VPC] -(http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) user guide for more information. +-> **Note:** When enabled, the DNS resolution feature requires that VPCs participating in the peering must have support for the DNS hostnames enabled. This can be done using the [`enable_dns_hostnames`](vpc.html#enable_dns_hostnames) attribute in the [`aws_vpc`](vpc.html) resource. See the [Using DNS with Your VPC](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) user guide for more information. -* `allow_remote_vpc_dns_resolution` - (Optional) Allow a local VPC to resolve public DNS hostnames to -private IP addresses when queried from instances in the peer VPC. +* `allow_remote_vpc_dns_resolution` - (Optional) Allow a local VPC to resolve public DNS hostnames to private IP addresses when queried from instances in the peer VPC. 
## Attributes Reference diff --git a/website/docs/r/vpc_security_group_egress_rule.html.markdown b/website/docs/r/vpc_security_group_egress_rule.html.markdown index 4944910ee8c..8fea624aaaf 100644 --- a/website/docs/r/vpc_security_group_egress_rule.html.markdown +++ b/website/docs/r/vpc_security_group_egress_rule.html.markdown @@ -26,19 +26,21 @@ resource "aws_vpc_security_group_egress_rule" "example" { cidr_ipv4 = "10.0.0.0/8" from_port = 80 ip_protocol = "tcp" - to_port = 8080 + to_port = 80 } ``` ## Argument Reference +~> **Note** Although `cidr_ipv4`, `cidr_ipv6`, `prefix_list_id`, and `referenced_security_group_id` are all marked as optional, you *must* provide one of them in order to configure the destination of the traffic. The `from_port` and `to_port` arguments are required unless `ip_protocol` is set to `-1` or `icmpv6`. + The following arguments are supported: * `cidr_ipv4` - (Optional) The destination IPv4 CIDR range. * `cidr_ipv6` - (Optional) The destination IPv6 CIDR range. * `description` - (Optional) The security group rule description. * `from_port` - (Optional) The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 type. -* `ip_protocol` - (Optional) The IP protocol name or number. Use `-1` to specify all protocols. +* `ip_protocol` - (Optional) The IP protocol name or number. Use `-1` to specify all protocols. Note that if `ip_protocol` is set to `-1`, it translates to all protocols, all port ranges, and `from_port` and `to_port` values should not be defined. * `prefix_list_id` - (Optional) The ID of the destination prefix list. * `referenced_security_group_id` - (Optional) The destination security group that is referenced in the rule. * `security_group_id` - (Required) The ID of the security group. 
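A hedged sketch of the constraint the egress-rule note describes (resource names are illustrative): exactly one destination argument is supplied, and `from_port`/`to_port` are omitted because `ip_protocol` is `-1`.

```terraform
# Hypothetical sketch: allow all outbound traffic. cidr_ipv4 satisfies the
# "must provide one destination" rule; ports are omitted since ip_protocol = "-1".
resource "aws_vpc_security_group_egress_rule" "all_outbound" {
  security_group_id = aws_security_group.example.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1"
}
```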
diff --git a/website/docs/r/vpc_security_group_ingress_rule.html.markdown b/website/docs/r/vpc_security_group_ingress_rule.html.markdown index 39f06a491d1..75ea178f0ff 100644 --- a/website/docs/r/vpc_security_group_ingress_rule.html.markdown +++ b/website/docs/r/vpc_security_group_ingress_rule.html.markdown @@ -26,7 +26,7 @@ resource "aws_vpc_security_group_ingress_rule" "example" { cidr_ipv4 = "10.0.0.0/8" from_port = 80 ip_protocol = "tcp" - to_port = 8080 + to_port = 80 } ``` @@ -34,11 +34,13 @@ resource "aws_vpc_security_group_ingress_rule" "example" { The following arguments are supported: +~> **Note** Although `cidr_ipv4`, `cidr_ipv6`, `prefix_list_id`, and `referenced_security_group_id` are all marked as optional, you *must* provide one of them in order to configure the source of the traffic. The `from_port` and `to_port` arguments are required unless `ip_protocol` is set to `-1` or `icmpv6`. + * `cidr_ipv4` - (Optional) The source IPv4 CIDR range. * `cidr_ipv6` - (Optional) The source IPv6 CIDR range. * `description` - (Optional) The security group rule description. * `from_port` - (Optional) The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 type. -* `ip_protocol` - (Optional) The IP protocol name or number. Use `-1` to specify all protocols. +* `ip_protocol` - (Required) The IP protocol name or number. Use `-1` to specify all protocols. Note that if `ip_protocol` is set to `-1`, it translates to all protocols, all port ranges, and `from_port` and `to_port` values should not be defined. * `prefix_list_id` - (Optional) The ID of the source prefix list. * `referenced_security_group_id` - (Optional) The source security group that is referenced in the rule. * `security_group_id` - (Required) The ID of the security group. 
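For the ingress variant, the same one-of constraint applies to the source arguments; a sketch under assumed resource names, using `referenced_security_group_id` as the required source:

```terraform
# Hypothetical sketch: allow HTTPS from another security group.
# referenced_security_group_id supplies the required source argument;
# from_port/to_port are set because ip_protocol is "tcp", not "-1".
resource "aws_vpc_security_group_ingress_rule" "https_from_app" {
  security_group_id            = aws_security_group.example.id
  referenced_security_group_id = aws_security_group.app.id
  ip_protocol                  = "tcp"
  from_port                    = 443
  to_port                      = 443
}
```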
diff --git a/website/docs/r/wafv2_ip_set.html.markdown b/website/docs/r/wafv2_ip_set.html.markdown index 320595bd329..770b719ec72 100644 --- a/website/docs/r/wafv2_ip_set.html.markdown +++ b/website/docs/r/wafv2_ip_set.html.markdown @@ -35,7 +35,7 @@ The following arguments are supported: * `description` - (Optional) A friendly description of the IP set. * `scope` - (Required) Specifies whether this is for an AWS CloudFront distribution or for a regional application. Valid values are `CLOUDFRONT` or `REGIONAL`. To work with CloudFront, you must also specify the Region US East (N. Virginia). * `ip_address_version` - (Required) Specify IPV4 or IPV6. Valid values are `IPV4` or `IPV6`. -* `addresses` - (Required) Contains an array of strings that specify one or more IP addresses or blocks of IP addresses in Classless Inter-Domain Routing (CIDR) notation. AWS WAF supports all address ranges for IP versions IPv4 and IPv6. +* `addresses` - (Required) Contains an array of strings that specifies zero or more IP addresses or blocks of IP addresses. All addresses must be specified using Classless Inter-Domain Routing (CIDR) notation. WAF supports all IPv4 and IPv6 CIDR ranges except for `/0`. * `tags` - (Optional) An array of key:value pairs to associate with the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ## Attributes Reference diff --git a/website/docs/r/wafv2_regex_pattern_set.html.markdown b/website/docs/r/wafv2_regex_pattern_set.html.markdown index 92b0d6e9b5a..4d96bc5cbfe 100644 --- a/website/docs/r/wafv2_regex_pattern_set.html.markdown +++ b/website/docs/r/wafv2_regex_pattern_set.html.markdown @@ -40,7 +40,7 @@ The following arguments are supported: * `name` - (Required) A friendly name of the regular expression pattern set. 
* `description` - (Optional) A friendly description of the regular expression pattern set. * `scope` - (Required) Specifies whether this is for an AWS CloudFront distribution or for a regional application. Valid values are `CLOUDFRONT` or `REGIONAL`. To work with CloudFront, you must also specify the region `us-east-1` (N. Virginia) on the AWS provider. -* `regular_expression` - (Optional) One or more blocks of regular expression patterns that you want AWS WAF to search for, such as `B[a@]dB[o0]t`. See [Regular Expression](#regular-expression) below for details. +* `regular_expression` - (Optional) One or more blocks of regular expression patterns that you want AWS WAF to search for, such as `B[a@]dB[o0]t`. See [Regular Expression](#regular-expression) below for details. A maximum of 10 `regular_expression` blocks may be specified. * `tags` - (Optional) An array of key:value pairs to associate with the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ### Regular Expression diff --git a/website/docs/r/wafv2_rule_group.html.markdown b/website/docs/r/wafv2_rule_group.html.markdown index b2528448808..b92f29d1a4d 100644 --- a/website/docs/r/wafv2_rule_group.html.markdown +++ b/website/docs/r/wafv2_rule_group.html.markdown @@ -504,10 +504,10 @@ You can't nest a `rate_based_statement`, for example for use inside a `not_state The `rate_based_statement` block supports the following arguments: -* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `FORWARDED_IP` or `IP`. Default: `IP`. +* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `CONSTANT`, `FORWARDED_IP` or `IP`. Default: `IP`. 
* `forwarded_ip_config` - (Optional) The configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregate_key_type` is set to `FORWARDED_IP`, this block is required. See [Forwarded IP Config](#forwarded-ip-config) below for details. * `limit` - (Required) The limit on requests per 5-minute period for a single originating IP address. -* `scope_down_statement` - (Optional) An optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [Statement](#statement) above for details. +* `scope_down_statement` - (Optional) An optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [Statement](#statement) above for details. If `aggregate_key_type` is set to `CONSTANT`, this block is required. ### Regex Match Statement diff --git a/website/docs/r/wafv2_web_acl.html.markdown b/website/docs/r/wafv2_web_acl.html.markdown index 0e6c4892d2c..e948172df2f 100644 --- a/website/docs/r/wafv2_web_acl.html.markdown +++ b/website/docs/r/wafv2_web_acl.html.markdown @@ -10,7 +10,7 @@ description: |- Creates a WAFv2 Web ACL resource. -~> **Note:** In `field_to_match` blocks, _e.g._, in `byte_match_statement`, the `body` block includes an optional argument `oversize_handling`. AWS indicates this argument will be required starting February 2023. To avoid configurations breaking when that change happens, treat the `oversize_handling` argument as **required** as soon as possible. +~> **Note** In `field_to_match` blocks, _e.g._, in `byte_match_statement`, the `body` block includes an optional argument `oversize_handling`. 
AWS indicates this argument will be required starting February 2023. To avoid configurations breaking when that change happens, treat the `oversize_handling` argument as **required** as soon as possible. ## Example Usage @@ -89,7 +89,7 @@ resource "aws_wafv2_web_acl" "example" { ### Account Takeover Protection -``` +```terraform resource "aws_wafv2_web_acl" "atp-example" { name = "managed-atp-example" description = "Example of a managed ATP rule." @@ -349,202 +349,200 @@ resource "aws_wafv2_web_acl" "test" { The following arguments are supported: -* `custom_response_body` - (Optional) Defines custom response bodies that can be referenced by `custom_response` actions. See [`custom_response_body`](#custom_response_body) below for details. -* `default_action` - (Required) Action to perform if none of the `rules` contained in the WebACL match. See [`default_ action`](#default_action) below for details. +* `custom_response_body` - (Optional) Defines custom response bodies that can be referenced by `custom_response` actions. See [`custom_response_body`](#custom_response_body-block) below for details. +* `default_action` - (Required) Action to perform if none of the `rules` contained in the WebACL match. See [`default_action`](#default_action-block) below for details. * `description` - (Optional) Friendly description of the WebACL. * `name` - (Required) Friendly name of the WebACL. -* `rule` - (Optional) Rule blocks used to identify the web requests that you want to `allow`, `block`, or `count`. See [`rule`](#rule) below for details. +* `rule` - (Optional) Rule blocks used to identify the web requests that you want to `allow`, `block`, or `count`. See [`rule`](#rule-block) below for details. * `scope` - (Required) Specifies whether this is for an AWS CloudFront distribution or for a regional application. Valid values are `CLOUDFRONT` or `REGIONAL`. To work with CloudFront, you must also specify the region `us-east-1` (N. Virginia) on the AWS provider. 
* `tags` - (Optional) Map of key-value pairs to associate with the resource. If configured with a provider [`default_tags` configuration block](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `token_domains` - (Optional) Specifies the domains that AWS WAF should accept in a web request token. This enables the use of tokens across multiple protected websites. When AWS WAF provides a token, it uses the domain of the AWS resource that the web ACL is protecting. If you don't specify a list of token domains, AWS WAF accepts tokens only for the domain of the protected resource. With a token domain list, AWS WAF accepts the resource's host domain plus all domains in the token domain list, including their prefixed subdomains. -* `visibility_config` - (Required) Defines and enables Amazon CloudWatch metrics and web request sample collection. See [`visibility_config`](#visibility_config) below for details. +* `visibility_config` - (Required) Defines and enables Amazon CloudWatch metrics and web request sample collection. See [`visibility_config`](#visibility_config-block) below for details. -### `custom_response_body` +### `custom_response_body` Block Each `custom_response_body` block supports the following arguments: -* `key` - (Required) Unique key identifying the custom response body. This is referenced by the `custom_response_body_key` argument in the [`custom_response`](#custom_response) block. +* `key` - (Required) Unique key identifying the custom response body. This is referenced by the `custom_response_body_key` argument in the [`custom_response`](#custom_response-block) block. * `content` - (Required) Payload of the custom response. * `content_type` - (Required) Type of content in the payload that you are defining in the `content` argument. Valid values are `TEXT_PLAIN`, `TEXT_HTML`, or `APPLICATION_JSON`. 
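A short sketch of a `custom_response_body` block as described above (the key and payload are illustrative assumptions); the `key` is what a `custom_response` block's `custom_response_body_key` would later reference:

```terraform
# Hypothetical fragment inside an aws_wafv2_web_acl resource:
custom_response_body {
  key          = "rate-limited"                          # referenced by custom_response_body_key
  content      = "{\"message\": \"too many requests\"}"  # payload returned to the client
  content_type = "APPLICATION_JSON"                      # one of TEXT_PLAIN, TEXT_HTML, APPLICATION_JSON
}
```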
-### `default_action` +### `default_action` Block The `default_action` block supports the following arguments: -~> **NOTE:** One of `allow` or `block`, expressed as an empty configuration block `{}`, is required when specifying a `default_action` +~> **Note** One of `allow` or `block`, expressed as an empty configuration block `{}`, is required when specifying a `default_action` -* `allow` - (Optional) Specifies that AWS WAF should allow requests by default. See [`allow`](#allow) below for details. -* `block` - (Optional) Specifies that AWS WAF should block requests by default. See [`block`](#block) below for details. +* `allow` - (Optional) Specifies that AWS WAF should allow requests by default. See [`allow`](#allow-block) below for details. +* `block` - (Optional) Specifies that AWS WAF should block requests by default. See [`block`](#block-block) below for details. -### `rule` +### `rule` Block -~> **NOTE:** One of `action` or `override_action` is required when specifying a rule +~> **Note** One of `action` or `override_action` is required when specifying a rule Each `rule` supports the following arguments: -* `action` - (Optional) Action that AWS WAF should take on a web request when it matches the rule's statement. This is used only for rules whose **statements do not reference a rule group**. See [`action`](#action) below for details. -* `captcha_config` - (Optional) Specifies how AWS WAF should handle CAPTCHA evaluations. See [Captcha Configuration](#captcha-configuration) below for details. -* `name` - (Required) Friendly name of the rule. **NOTE:** The provider assumes that rules with names matching this pattern, `^ShieldMitigationRuleGroup___.*`, are AWS-added for [automatic application layer DDoS mitigation activities](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-automatic-app-layer-response-rg.html). 
Such rules will be ignored by the provider unless you explicitly include them in your configuration (for example, by using the AWS CLI to discover their properties and creating matching configuration). However, since these rules are owned and managed by AWS, you may get permission errors. -* `override_action` - (Optional) Override action to apply to the rules in a rule group. Used only for rule **statements that reference a rule group**, like `rule_group_reference_statement` and `managed_rule_group_statement`. See [`override_action`](#override_action) below for details. +* `action` - (Optional) Action that AWS WAF should take on a web request when it matches the rule's statement. This is used only for rules whose **statements do not reference a rule group**. See [`action`](#action-block) for details. +* `captcha_config` - (Optional) Specifies how AWS WAF should handle CAPTCHA evaluations. See [`captcha_config`](#captcha_config-block) below for details. +* `name` - (Required) Friendly name of the rule. Note that the provider assumes that rules with names matching this pattern, `^ShieldMitigationRuleGroup___.*`, are AWS-added for [automatic application layer DDoS mitigation activities](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-automatic-app-layer-response-rg.html). Such rules will be ignored by the provider unless you explicitly include them in your configuration (for example, by using the AWS CLI to discover their properties and creating matching configuration). However, since these rules are owned and managed by AWS, you may get permission errors. +* `override_action` - (Optional) Override action to apply to the rules in a rule group. Used only for rule **statements that reference a rule group**, like `rule_group_reference_statement` and `managed_rule_group_statement`. See [`override_action`](#override_action-block) below for details. 
* `priority` - (Required) If you define more than one Rule in a WebACL, AWS WAF evaluates each request against the `rules` in order based on the value of `priority`. AWS WAF processes rules with lower priority first. -* `rule_label` - (Optional) Labels to apply to web requests that match the rule match statement. See [`rule_label`](#rule_label) below for details. -* `statement` - (Required) The AWS WAF processing statement for the rule, for example `byte_match_statement` or `geo_match_statement`. See [`statement`](#statement) below for details. -* `visibility_config` - (Required) Defines and enables Amazon CloudWatch metrics and web request sample collection. See [`visibility_config`](#visibility_config) below for details. +* `rule_label` - (Optional) Labels to apply to web requests that match the rule match statement. See [`rule_label`](#rule_label-block) below for details. +* `statement` - (Required) The AWS WAF processing statement for the rule, for example `byte_match_statement` or `geo_match_statement`. See [`statement`](#statement-block) below for details. +* `visibility_config` - (Required) Defines and enables Amazon CloudWatch metrics and web request sample collection. See [`visibility_config`](#visibility_config-block) below for details. -#### `action` +### `action` Block The `action` block supports the following arguments: -~> **NOTE:** One of `allow`, `block`, or `count`, is required when specifying an `action`. +~> **Note** One of `allow`, `block`, or `count`, is required when specifying an `action`. -* `allow` - (Optional) Instructs AWS WAF to allow the web request. See [`allow`](#allow) below for details. -* `block` - (Optional) Instructs AWS WAF to block the web request. See [`block`](#block) below for details. -* `captcha` - (Optional) Instructs AWS WAF to run a Captcha check against the web request. See [`captcha`](#captcha) below for details. 
-* `challenge` - (Optional) Instructs AWS WAF to run a check against the request to verify that the request is coming from a legitimate client session. See [`challenge`](#challenge) below for details. -* `count` - (Optional) Instructs AWS WAF to count the web request and allow it. See [`count`](#count) below for details. +* `allow` - (Optional) Instructs AWS WAF to allow the web request. See [`allow`](#allow-block) below for details. +* `block` - (Optional) Instructs AWS WAF to block the web request. See [`block`](#block-block) below for details. +* `captcha` - (Optional) Instructs AWS WAF to run a Captcha check against the web request. See [`captcha`](#captcha-block) below for details. +* `challenge` - (Optional) Instructs AWS WAF to run a check against the request to verify that the request is coming from a legitimate client session. See [`challenge`](#challenge-block) below for details. +* `count` - (Optional) Instructs AWS WAF to count the web request and allow it. See [`count`](#count-block) below for details. -#### `override_action` +### `override_action` Block The `override_action` block supports the following arguments: -~> **NOTE:** One of `count` or `none`, expressed as an empty configuration block `{}`, is required when specifying an `override_action` +~> **Note** One of `count` or `none`, expressed as an empty configuration block `{}`, is required when specifying an `override_action` * `count` - (Optional) Override the rule action setting to count (i.e., only count matches). Configured as an empty block `{}`. * `none` - (Optional) Don't override the rule action setting. Configured as an empty block `{}`. -#### `allow` +### `allow` Block The `allow` block supports the following arguments: -* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling) below for details. +* `custom_request_handling` - (Optional) Defines custom handling for the web request. 
See [`custom_request_handling`](#custom_request_handling-block) below for details. -#### `block` +### `block` Block The `block` block supports the following arguments: -* `custom_response` - (Optional) Defines a custom response for the web request. See [`custom_response`](#custom_response) below for details. +* `custom_response` - (Optional) Defines a custom response for the web request. See [`custom_response`](#custom_response-block) below for details. -#### `captcha` +### `captcha` Block The `captcha` block supports the following arguments: -* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling) below for details. +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling-block) below for details. -#### `challenge` +### `challenge` Block The `challenge` block supports the following arguments: -* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling) below for details. +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling-block) below for details. -#### `count` +### `count` Block The `count` block supports the following arguments: -* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling) below for details. +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [`custom_request_handling`](#custom_request_handling-block) below for details. -#### `custom_request_handling` +### `custom_request_handling` Block The `custom_request_handling` block supports the following arguments: -* `insert_header` - (Required) The `insert_header` blocks used to define HTTP headers added to the request. 
See [`insert_header`](#insert_header) below for details. +* `insert_header` - (Required) The `insert_header` blocks used to define HTTP headers added to the request. See [`insert_header`](#insert_header-block) below for details. -#### `insert_header` +### `insert_header` Block Each `insert_header` block supports the following arguments. Duplicate header names are not allowed: * `name` - Name of the custom header. For custom request header insertion, when AWS WAF inserts the header into the request, it prefixes this name `x-amzn-waf-`, to avoid confusion with the headers that are already in the request. For example, for the header name `sample`, AWS WAF inserts the header `x-amzn-waf-sample`. * `value` - Value of the custom header. -#### `custom_response` +### `custom_response` Block The `custom_response` block supports the following arguments: * `custom_response_body_key` - (Optional) References the response body that you want AWS WAF to return to the web request client. This must reference a `key` defined in a `custom_response_body` block of this resource. * `response_code` - (Required) The HTTP status code to return to the client. -* `response_header` - (Optional) The `response_header` blocks used to define the HTTP response headers added to the response. See [`response_header`](#response_header) below for details. +* `response_header` - (Optional) The `response_header` blocks used to define the HTTP response headers added to the response. See [`response_header`](#response_header-block) below for details. -#### `response_header` +### `response_header` Block Each `response_header` block supports the following arguments. Duplicate header names are not allowed: * `name` - Name of the custom header. For custom request header insertion, when AWS WAF inserts the header into the request, it prefixes this name `x-amzn-waf-`, to avoid confusion with the headers that are already in the request. 
For example, for the header name `sample`, AWS WAF inserts the header `x-amzn-waf-sample`. * `value` - Value of the custom header. -#### `rule_label` +### `rule_label` Block Each block supports the following arguments: * `name` - Label string. -#### `statement` +### `statement` Block The processing guidance for a Rule, used by AWS WAF to determine whether a web request matches the rule. See the [documentation](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statements-list.html) for more information. --> **NOTE:** Although the `statement` block is recursive, currently only 3 levels are supported. +-> **Note** Although the `statement` block is recursive, currently only 3 levels are supported. The `statement` block supports the following arguments: -* `and_statement` - (Optional) Logical rule statement used to combine other rule statements with AND logic. See [`and_statement`](#and_statement) below for details. -* `byte_match_statement` - (Optional) Rule statement that defines a string match search for AWS WAF to apply to web requests. See [`byte_match_statement`](#byte_match_statement) below for details. -* `geo_match_statement` - (Optional) Rule statement used to identify web requests based on country of origin. See [`geo_match_statement`](#geo_match_statement) below for details. -* `ip_set_reference_statement` - (Optional) Rule statement used to detect web requests coming from particular IP addresses or address ranges. See [IP Set Reference Statement](#ip_set_reference_statement) below for details. -* `label_match_statement` - (Optional) Rule statement that defines a string match search against labels that have been added to the web request by rules that have already run in the web ACL. See [`label_match_statement`](#label_match_statement) below for details. -* `managed_rule_group_statement` - (Optional) Rule statement used to run the rules that are defined in a managed rule group. This statement can not be nested. 
See [Managed Rule Group Statement](#managed_rule_group_statement) below for details. -* `not_statement` - (Optional) Logical rule statement used to negate the results of another rule statement. See [`not_statement`](#not_statement) below for details. -* `or_statement` - (Optional) Logical rule statement used to combine other rule statements with OR logic. See [`or_statement`](#or_statement) below for details. -* `rate_based_statement` - (Optional) Rate-based rule tracks the rate of requests for each originating `IP address`, and triggers the rule action when the rate exceeds a limit that you specify on the number of requests in any `5-minute` time span. This statement can not be nested. See [`rate_based_statement`](#rate_based_statement) below for details. -* `regex_match_statement` - (Optional) Rule statement used to search web request components for a match against a single regular expression. See [`regex_match_statement`](#regex_match_statement) below for details. -* `regex_pattern_set_reference_statement` - (Optional) Rule statement used to search web request components for matches with regular expressions. See [Regex Pattern Set Reference Statement](#regex_pattern_set_reference_statement) below for details. -* `rule_group_reference_statement` - (Optional) Rule statement used to run the rules that are defined in an WAFv2 Rule Group. See [Rule Group Reference Statement](#rule_group_reference_statement) below for details. -* `size_constraint_statement` - (Optional) Rule statement that compares a number of bytes against the size of a request component, using a comparison operator, such as greater than (>) or less than (<). See [`size_constraint_statement`](#size_constraint_statement) below for more details. -* `sqli_match_statement` - (Optional) An SQL injection match condition identifies the part of web requests, such as the URI or the query string, that you want AWS WAF to inspect. See [`sqli_match_statement`](#sqli_match_statement) below for details. 
-* `xss_match_statement` - (Optional) Rule statement that defines a cross-site scripting (XSS) match search for AWS WAF to apply to web requests. See [`xss_match_statement`](#xss_match_statement) below for details. - -#### `and_statement` +* `and_statement` - (Optional) Logical rule statement used to combine other rule statements with AND logic. See [`and_statement`](#and_statement-block) below for details. +* `byte_match_statement` - (Optional) Rule statement that defines a string match search for AWS WAF to apply to web requests. See [`byte_match_statement`](#byte_match_statement-block) below for details. +* `geo_match_statement` - (Optional) Rule statement used to identify web requests based on country of origin. See [`geo_match_statement`](#geo_match_statement-block) below for details. +* `ip_set_reference_statement` - (Optional) Rule statement used to detect web requests coming from particular IP addresses or address ranges. See [`ip_set_reference_statement`](#ip_set_reference_statement-block) below for details. +* `label_match_statement` - (Optional) Rule statement that defines a string match search against labels that have been added to the web request by rules that have already run in the web ACL. See [`label_match_statement`](#label_match_statement-block) below for details. +* `managed_rule_group_statement` - (Optional) Rule statement used to run the rules that are defined in a managed rule group. This statement can not be nested. See [`managed_rule_group_statement`](#managed_rule_group_statement-block) below for details. +* `not_statement` - (Optional) Logical rule statement used to negate the results of another rule statement. See [`not_statement`](#not_statement-block) below for details. +* `or_statement` - (Optional) Logical rule statement used to combine other rule statements with OR logic. See [`or_statement`](#or_statement-block) below for details. 
+* `rate_based_statement` - (Optional) Rate-based rule that tracks the rate of requests for each originating IP address and triggers the rule action when the rate exceeds a limit that you specify on the number of requests in any 5-minute time span. This statement cannot be nested. See [`rate_based_statement`](#rate_based_statement-block) below for details. +* `regex_match_statement` - (Optional) Rule statement used to search web request components for a match against a single regular expression. See [`regex_match_statement`](#regex_match_statement-block) below for details. +* `regex_pattern_set_reference_statement` - (Optional) Rule statement used to search web request components for matches with regular expressions. See [`regex_pattern_set_reference_statement`](#regex_pattern_set_reference_statement-block) below for details. +* `rule_group_reference_statement` - (Optional) Rule statement used to run the rules that are defined in a WAFv2 Rule Group. See [`rule_group_reference_statement`](#rule_group_reference_statement-block) below for details. +* `size_constraint_statement` - (Optional) Rule statement that compares a number of bytes against the size of a request component, using a comparison operator, such as greater than (>) or less than (<). See [`size_constraint_statement`](#size_constraint_statement-block) below for more details. +* `sqli_match_statement` - (Optional) An SQL injection match condition identifies the part of web requests, such as the URI or the query string, that you want AWS WAF to inspect. See [`sqli_match_statement`](#sqli_match_statement-block) below for details. +* `xss_match_statement` - (Optional) Rule statement that defines a cross-site scripting (XSS) match search for AWS WAF to apply to web requests. See [`xss_match_statement`](#xss_match_statement-block) below for details. + +### `and_statement` Block A logical rule statement used to combine other rule statements with `AND` logic.
You provide more than one `statement` within the `and_statement`. The `and_statement` block supports the following arguments: -* `statement` - (Required) Statements to combine with `AND` logic. You can use any statements that can be nested. See [`statement`](#statement) above for details. +* `statement` - (Required) Statements to combine with `AND` logic. You can use any statements that can be nested. See [`statement`](#statement-block) above for details. -#### `byte_match_statement` +### `byte_match_statement` Block The byte match statement provides the bytes to search for, the location in requests that you want AWS WAF to search, and other settings. The bytes to search for are typically a string that corresponds with ASCII characters. The `byte_match_statement` block supports the following arguments: -* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match) below for details. +* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match-block) below for details. * `positional_constraint` - (Required) Area within the portion of a web request that you want AWS WAF to search for `search_string`. Valid values include the following: `EXACTLY`, `STARTS_WITH`, `ENDS_WITH`, `CONTAINS`, `CONTAINS_WORD`. See the AWS [documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchStatement.html) for more information. * `search_string` - (Required) String value that you want AWS WAF to search for. AWS WAF searches only in the part of web requests that you designate for inspection in `field_to_match`. The maximum length of the value is 50 bytes. -* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. - At least one required. - See [`text_transformation`](#text_transformation) below for details. 
+* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one transformation is required. See [`text_transformation`](#text_transformation-block) below for details. -#### `geo_match_statement` +### `geo_match_statement` Block The `geo_match_statement` block supports the following arguments: * `country_codes` - (Required) Array of two-character country codes, for example, [ "US", "CN" ], from the alpha-2 country ISO codes of the `ISO 3166` international standard. See the [documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_GeoMatchStatement.html) for valid values. -* `forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. See [`forwarded_ip_config`](#forwarded_ip_config) below for details. +* `forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. See [`forwarded_ip_config`](#forwarded_ip_config-block) below for details. -#### `ip_set_reference_statement` +### `ip_set_reference_statement` Block A rule statement used to detect web requests coming from particular IP addresses or address ranges. To use this, create an `aws_wafv2_ip_set` that specifies the addresses you want to detect, then use the `ARN` of that set in this statement. The `ip_set_reference_statement` block supports the following arguments: * `arn` - (Required) The Amazon Resource Name (ARN) of the IP Set that this statement references. -* `ip_set_forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. 
See [`ip_set_forwarded_ip_config`](#ip_set_forwarded_ip_config) below for more details. +* `ip_set_forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. See [`ip_set_forwarded_ip_config`](#ip_set_forwarded_ip_config-block) below for more details. -#### `label_match_statement` +### `label_match_statement` Block The `label_match_statement` block supports the following arguments: * `scope` - (Required) Specify whether you want to match using the label name or just the namespace. Valid values are `LABEL` or `NAMESPACE`. * `key` - (Required) String to match against. -#### `managed_rule_group_statement` +### `managed_rule_group_statement` Block A rule statement used to run the rules that are defined in a managed rule group. @@ -553,29 +551,29 @@ You can't nest a `managed_rule_group_statement`, for example for use inside a `n The `managed_rule_group_statement` block supports the following arguments: * `name` - (Required) Name of the managed rule group. -* `rule_action_override` - (Optional) Action settings to use in the place of the rule actions that are configured inside the rule group. You specify one override for each rule whose action you want to change. See [`rule_action_override`](#rule_action_override) below for details. -* `managed_rule_group_configs`- (Optional) Additional information that's used by a managed rule group. Only one rule attribute is allowed in each config. See [Managed Rule Group Configs](#managed_rule_group_configs) for more details -* `scope_down_statement` - Narrows the scope of the statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [`statement`](#statement) above for details. +* `rule_action_override` - (Optional) Action settings to use in the place of the rule actions that are configured inside the rule group. 
You specify one override for each rule whose action you want to change. See [`rule_action_override`](#rule_action_override-block) below for details. +* `managed_rule_group_configs` - (Optional) Additional information that's used by a managed rule group. Only one rule attribute is allowed in each config. See [`managed_rule_group_configs`](#managed_rule_group_configs-block) for more details. +* `scope_down_statement` - (Optional) Narrows the scope of the statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [`statement`](#statement-block) above for details. * `vendor_name` - (Required) Name of the managed rule group vendor. * `version` - (Optional) Version of the managed rule group. You can set, for example, `Version_1.0` or `Version_1.1`. If you want to use the default version, do not set anything. -#### `not_statement` +### `not_statement` Block A logical rule statement used to negate the results of another rule statement. You provide one `statement` within the `not_statement`. The `not_statement` block supports the following arguments: -* `statement` - (Required) Statement to negate. You can use any statement that can be nested. See [`statement`](#statement) above for details. -#### `or_statement` +### `or_statement` Block A logical rule statement used to combine other rule statements with `OR` logic. You provide more than one `statement` within the `or_statement`. The `or_statement` block supports the following arguments: -* `statement` - (Required) Statements to combine with `OR` logic. You can use any statements that can be nested.
See [`statement`](#statement-block) above for details. -#### `rate_based_statement` +### `rate_based_statement` Block A rate-based rule tracks the rate of requests for each originating IP address, and triggers the rule action when the rate exceeds a limit that you specify on the number of requests in any 5-minute time span. You can use this to put a temporary block on requests from an IP address that is sending excessive requests. See the [documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_RateBasedStatement.html) for more information. @@ -583,36 +581,32 @@ You can't nest a `rate_based_statement`, for example for use inside a `not_state The `rate_based_statement` block supports the following arguments: -* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `FORWARDED_IP` or `IP`. Default: `IP`. -* `forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregate_key_type` is set to `FORWARDED_IP`, this block is required. See [`forwarded_ip_config`](#forwarded_ip_config) below for details. +* `aggregate_key_type` - (Optional) Setting that indicates how to aggregate the request counts. Valid values include: `CONSTANT`, `FORWARDED_IP` or `IP`. Default: `IP`. +* `forwarded_ip_config` - (Optional) Configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. If `aggregate_key_type` is set to `FORWARDED_IP`, this block is required. See [`forwarded_ip_config`](#forwarded_ip_config-block) below for details. * `limit` - (Required) Limit on requests per 5-minute period for a single originating IP address. -* `scope_down_statement` - (Optional) Optional nested statement that narrows the scope of the rate-based statement to matching web requests. 
This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [`statement`](#statement) above for details. +* `scope_down_statement` - (Optional) Optional nested statement that narrows the scope of the rate-based statement to matching web requests. This can be any nestable statement, and you can nest statements at any level below this scope-down statement. See [`statement`](#statement-block) above for details. If `aggregate_key_type` is set to `CONSTANT`, this block is required. -#### `regex_match_statement` +### `regex_match_statement` Block A rule statement used to search web request components for a match against a single regular expression. The `regex_match_statement` block supports the following arguments: * `regex_string` - (Required) String representing the regular expression. Minimum of `1` and maximum of `512` characters. -* `field_to_match` - (Required) The part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match) below for details. -* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. - At least one required. - See [`text_transformation`](#text_transformation) below for details. +* `field_to_match` - (Required) The part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match-block) below for details. +* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one transformation is required. See [`text_transformation`](#text_transformation-block) below for details. -#### `regex_pattern_set_reference_statement` +### `regex_pattern_set_reference_statement` Block A rule statement used to search web request components for matches with regular expressions. 
To use this, create an `aws_wafv2_regex_pattern_set` that specifies the expressions that you want to detect, then use the `ARN` of that set in this statement. A web request matches the pattern set rule statement if the request component matches any of the patterns in the set. The `regex_pattern_set_reference_statement` block supports the following arguments: * `arn` - (Required) The Amazon Resource Name (ARN) of the Regex Pattern Set that this statement references. -* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match) below for details. -* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. - At least one required. - See [`text_transformation`](#text_transformation) below for details. +* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match-block) below for details. +* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one transformation is required. See [`text_transformation`](#text_transformation-block) below for details. -#### `rule_group_reference_statement` +### `rule_group_reference_statement` Block A rule statement used to run the rules that are defined in a WAFv2 Rule Group or `aws_wafv2_rule_group` resource. @@ -621,9 +615,9 @@ You can't nest a `rule_group_reference_statement`, for example for use inside a The `rule_group_reference_statement` block supports the following arguments: * `arn` - (Required) The Amazon Resource Name (ARN) of the `aws_wafv2_rule_group` resource. -* `rule_action_override` - (Optional) Action settings to use in the place of the rule actions that are configured inside the rule group. You specify one override for each rule whose action you want to change.
See [`rule_action_override`](#rule_action_override) below for details. +* `rule_action_override` - (Optional) Action settings to use in the place of the rule actions that are configured inside the rule group. You specify one override for each rule whose action you want to change. See [`rule_action_override`](#rule_action_override-block) below for details. -#### `size_constraint_statement` +### `size_constraint_statement` Block A rule statement that uses a comparison operator to compare a number of bytes against the size of a request component. AWS WAFv2 inspects up to the first 8192 bytes (8 KB) of a request body, and when inspecting the request URI Path, the slash `/` in the URI counts as one character. @@ -631,137 +625,128 @@ the URI counts as one character. The `size_constraint_statement` block supports the following arguments: * `comparison_operator` - (Required) Operator to use to compare the request part to the size setting. Valid values include: `EQ`, `NE`, `LE`, `LT`, `GE`, or `GT`. -* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match) below for details. +* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match-block) below for details. * `size` - (Required) Size, in bytes, to compare to the request part, after any transformations. Valid values are integers between 0 and 21474836480, inclusive. -* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. - At least one required. - See [`text_transformation`](#text_transformation) below for details. +* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one transformation is required. 
See [`text_transformation`](#text_transformation-block) below for details. -#### `sqli_match_statement` +### `sqli_match_statement` Block An SQL injection match condition identifies the part of web requests, such as the URI or the query string, that you want AWS WAF to inspect. Later in the process, when you create a web ACL, you specify whether to allow or block requests that appear to contain malicious SQL code. The `sqli_match_statement` block supports the following arguments: -* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match) below for details. -* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. - At least one required. - See [`text_transformation`](#text_transformation) below for details. +* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match-block) below for details. +* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one transformation is required. See [`text_transformation`](#text_transformation-block) below for details. -#### `xss_match_statement` +### `xss_match_statement` Block The XSS match statement provides the location in requests that you want AWS WAF to search and text transformations to use on the search area before AWS WAF searches for character sequences that are likely to be malicious strings. The `xss_match_statement` block supports the following arguments: -* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match) below for details. 
-* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. - At least one required. - See [`text_transformation`](#text_transformation) below for details. +* `field_to_match` - (Optional) Part of a web request that you want AWS WAF to inspect. See [`field_to_match`](#field_to_match-block) below for details. +* `text_transformation` - (Required) Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. At least one transformation is required. See [`text_transformation`](#text_transformation-block) below for details. -#### `rule_action_override` +### `rule_action_override` Block The `rule_action_override` block supports the following arguments: -* `action_to_use` - (Required) Override action to use, in place of the configured action of the rule in the rule group. See [`action`](#action) below for details. +* `action_to_use` - (Required) Override action to use, in place of the configured action of the rule in the rule group. See [`action`](#action-block) for details. * `name` - (Required) Name of the rule to override. See the [documentation](https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-list.html) for a list of names in the appropriate rule group in use. -#### `managed_rule_group_configs` +### `managed_rule_group_configs` Block The `managed_rule_group_configs` block supports the following arguments: -* `aws_managed_rules_bot_control_rule_set` - (Optional) Additional configuration for using the Bot Control managed rule group.
Use this to specify the inspection level that you want to use. See [`aws_managed_rules_bot_control_rule_set`](#aws_managed_rules_bot_control_rule_set-block) for more details. * `aws_managed_rules_atp_rule_set` - (Optional) Additional configuration for using the Account Takeover Protection managed rule group. Use this to specify information such as the sign-in page of your application and the type of content to accept or reject from the client. * `login_path` - (Optional, **Deprecated**) The path of the login endpoint for your application. -* `password_field` - (Optional, **Deprecated**) Details about your login page password field. See [`password_field`](#password_field) for more details. +* `password_field` - (Optional, **Deprecated**) Details about your login page password field. See [`password_field`](#password_field-block) for more details. * `payload_type` - (Optional, **Deprecated**) The payload type for your login endpoint, either JSON or form encoded. -* `username_field` - (Optional, **Deprecated**) Details about your login page username field. See [`username_field`](#username_field) for more details. +* `username_field` - (Optional, **Deprecated**) Details about your login page username field. See [`username_field`](#username_field-block) for more details. -#### `aws_managed_rules_bot_control_rule_set` +### `aws_managed_rules_bot_control_rule_set` Block * `inspection_level` - (Optional) The inspection level to use for the Bot Control rule group. -#### `aws_managed_rules_atp_rule_set` +### `aws_managed_rules_atp_rule_set` Block * `login_path` - (Required) The path of the login endpoint for your application. -* `request_inspection` - (Optional) The criteria for inspecting login requests, used by the ATP rule group to validate credentials usage. See [`request_inspection`](#request_inspection) for more details. -* `response_inspection` - (Optional) The criteria for inspecting responses to login requests, used by the ATP rule group to track login failure rates.
Note that Response Inspection is available only on web ACLs that protect CloudFront distributions. See [`response_inspection`](#response_inspection) for more details. +* `request_inspection` - (Optional) The criteria for inspecting login requests, used by the ATP rule group to validate credentials usage. See [`request_inspection`](#request_inspection-block) for more details. +* `response_inspection` - (Optional) The criteria for inspecting responses to login requests, used by the ATP rule group to track login failure rates. Note that Response Inspection is available only on web ACLs that protect CloudFront distributions. See [`response_inspection`](#response_inspection-block) for more details. -#### `request_inspection` +### `request_inspection` Block * `payload_type` (Required) The payload type for your login endpoint, either JSON or form encoded. -* `username_field` (Required) Details about your login page username field. See [`username_field`](#username_field) for more details. -* `password_field` (Required) Details about your login page password field. See [`password_field`](#password_field) for more details. +* `username_field` (Required) Details about your login page username field. See [`username_field`](#username_field-block) for more details. +* `password_field` (Required) Details about your login page password field. See [`password_field`](#password_field-block) for more details. -#### `password_field` +### `password_field` Block * `identifier` - (Optional) The name of the password field. -#### `username_field` +### `username_field` Block * `identifier` - (Optional) The name of the username field. -#### `response_inspection` +### `response_inspection` Block -* `body_contains` (Optional) Configures inspection of the response body. See [`body_contains`](#body_contains) for more details. -* `header` (Optional) Configures inspection of the response header.See [`header`](#header) for more details. 
-* `json` (Optional) Configures inspection of the response JSON. See [`json`](#json) for more details. -* `status_code` (Optional) Configures inspection of the response status code.See [`status_code`](#status_code) for more details. +* `body_contains` (Optional) Configures inspection of the response body. See [`body_contains`](#body_contains-block) for more details. +* `header` (Optional) Configures inspection of the response header. See [`header`](#header-block) for more details. +* `json` (Optional) Configures inspection of the response JSON. See [`json`](#json-block) for more details. +* `status_code` (Optional) Configures inspection of the response status code. See [`status_code`](#status_code-block) for more details. -#### `body_contains` +### `body_contains` Block * `success_strings` (Required) Strings in the body of the response that indicate a successful login attempt. * `failure_strings` (Required) Strings in the body of the response that indicate a failed login attempt. -#### `header` +### `header` Block * `name` (Required) The name of the header to match against. The name must be an exact match, including case. * `success_values` (Required) Values in the response header with the specified name that indicate a successful login attempt. * `failure_values` (Required) Values in the response header with the specified name that indicate a failed login attempt. -#### `json` +### `json` Block * `identifier` (Required) The identifier for the value to match against in the JSON. * `success_strings` (Required) Strings in the body of the response that indicate a successful login attempt. * `failure_strings` (Required) Strings in the body of the response that indicate a failed login attempt. -#### `status_code` +### `status_code` Block * `success_codes` (Required) Status codes in the response that indicate a successful login attempt. * `failure_codes` (Required) Status codes in the response that indicate a failed login attempt.
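For reference, the ATP arguments described above nest together as in the following sketch. This is an illustrative fragment only: the login path, field identifiers, and status codes are hypothetical placeholders, and it assumes the AWS-managed `AWSManagedRulesATPRuleSet` rule group.

```terraform
# Illustrative sketch only. The login path, JSON pointers, and status
# codes below are hypothetical placeholders, not values from this page.
managed_rule_group_statement {
  name        = "AWSManagedRulesATPRuleSet"
  vendor_name = "AWS"

  managed_rule_group_configs {
    aws_managed_rules_atp_rule_set {
      login_path = "/api/login" # hypothetical sign-in endpoint

      request_inspection {
        payload_type = "JSON"
        username_field {
          identifier = "/username"
        }
        password_field {
          identifier = "/password"
        }
      }

      # Response inspection is available only on web ACLs that
      # protect CloudFront distributions.
      response_inspection {
        status_code {
          success_codes = [200]
          failure_codes = [401, 403]
        }
      }
    }
  }
}
```

Each `response_inspection` criterion (`body_contains`, `header`, `json`, `status_code`) is optional; the sketch shows only `status_code`.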
-#### `field_to_match` +### `field_to_match` Block The part of a web request that you want AWS WAF to inspect. Include the single `field_to_match` type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in `field_to_match` for each rule statement that requires it. To inspect more than one component of a web request, create a separate rule statement for each component. See the [documentation](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-fields.html#waf-rule-statement-request-component) for more details. The `field_to_match` block supports the following arguments: -~> **NOTE:** Only one of `all_query_arguments`, `body`, `cookies`, `headers`, `json_body`, `method`, `query_string`, `single_header`, `single_query_argument`, or `uri_path` can be specified. -An empty configuration block `{}` should be used when specifying `all_query_arguments`, `method`, or `query_string` attributes. +~> **Note** Only one of `all_query_arguments`, `body`, `cookies`, `headers`, `json_body`, `method`, `query_string`, `single_header`, `single_query_argument`, or `uri_path` can be specified. An empty configuration block `{}` should be used when specifying `all_query_arguments`, `method`, or `query_string` attributes. * `all_query_arguments` - (Optional) Inspect all query arguments. -* `body` - (Optional) Inspect the request body, which immediately follows the request headers. See [`body`](#body) below for details. -* `cookies` - (Optional) Inspect the cookies in the web request. See [`cookies`](#cookies) below for details. -* `headers` - (Optional) Inspect the request headers. See [`headers`](#headers) below for details. -* `json_body` - (Optional) Inspect the request body as JSON. See [`json_body`](#json_body) for details. +* `body` - (Optional) Inspect the request body, which immediately follows the request headers. See [`body`](#body-block) below for details. 
+* `cookies` - (Optional) Inspect the cookies in the web request. See [`cookies`](#cookies-block) below for details. +* `headers` - (Optional) Inspect the request headers. See [`headers`](#headers-block) below for details. +* `json_body` - (Optional) Inspect the request body as JSON. See [`json_body`](#json_body-block) for details. * `method` - (Optional) Inspect the HTTP method. The method indicates the type of operation that the request is asking the origin to perform. * `query_string` - (Optional) Inspect the query string. This is the part of a URL that appears after a `?` character, if any. -* `single_header` - (Optional) Inspect a single header. See [`single_header`](#single_header) below for details. -* `single_query_argument` - (Optional) Inspect a single query argument. See [`single_query_argument`](#single_query_argument) below for details. +* `single_header` - (Optional) Inspect a single header. See [`single_header`](#single_header-block) below for details. +* `single_query_argument` - (Optional) Inspect a single query argument. See [`single_query_argument`](#single_query_argument-block) below for details. * `uri_path` - (Optional) Inspect the request URI path. This is the part of a web request that identifies a resource, for example, `/images/daily-ad.jpg`. -#### `forwarded_ip_config` +### `forwarded_ip_config` Block -The configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. Commonly, this is the X-Forwarded-For (XFF) header, but you can specify -any header name. If the specified header isn't present in the request, AWS WAFv2 doesn't apply the rule to the web request at all. -AWS WAFv2 only evaluates the first IP address found in the specified HTTP header. +The configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. 
Commonly, this is the X-Forwarded-For (XFF) header, but you can specify any header name. If the specified header isn't present in the request, AWS WAFv2 doesn't apply the rule to the web request at all. AWS WAFv2 only evaluates the first IP address found in the specified HTTP header. The `forwarded_ip_config` block supports the following arguments: * `fallback_behavior` - (Required) - Match status to assign to the web request if the request doesn't have a valid IP address in the specified position. Valid values include: `MATCH` or `NO_MATCH`. * `header_name` - (Required) - Name of the HTTP header to use for the IP address. -#### `ip_set_forwarded_ip_config` +### `ip_set_forwarded_ip_config` Block The configuration for inspecting IP addresses in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. Commonly, this is the X-Forwarded-For (XFF) header, but you can specify any header name. @@ -771,7 +756,7 @@ The `ip_set_forwarded_ip_config` block supports the following arguments: * `header_name` - (Required) - Name of the HTTP header to use for the IP address. * `position` - (Required) - Position in the header to search for the IP address. Valid values include: `FIRST`, `LAST`, or `ANY`. If `ANY` is specified and the header contains more than 10 IP addresses, AWS WAFv2 inspects the last 10. -#### `headers` +### `headers` Block Inspect the request headers. @@ -784,7 +769,7 @@ The `headers` block supports the following arguments: * `match_scope` - (Required) The parts of the headers to inspect with the rule inspection criteria. If you specify `ALL`, AWS WAF inspects both keys and values. Valid values include the following: `ALL`, `KEY`, `VALUE`. * `oversize_handling` - (Required) Oversize handling tells AWS WAF what to do with a web request when the request component that the rule inspects is over the limits. Valid values include the following: `CONTINUE`, `MATCH`, `NO_MATCH`.
See the AWS [documentation](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-oversize-handling.html) for more information. -#### `json_body` +### `json_body` Block The `json_body` block supports the following arguments: @@ -793,7 +778,7 @@ The `json_body` block supports the following arguments: * `match_scope` - (Required) The parts of the JSON to match against using the `match_pattern`. Valid values are `ALL`, `KEY` and `VALUE`. * `oversize_handling` - (Optional) What to do if the body is larger than can be inspected. Valid values are `CONTINUE` (default), `MATCH` and `NO_MATCH`. -#### `single_header` +### `single_header` Block Inspect a single header. Provide the name of the header to inspect, for example, `User-Agent` or `Referer` (provided as lowercase strings). @@ -801,7 +786,7 @@ The `single_header` block supports the following arguments: * `name` - (Optional) Name of the query header to inspect. This setting must be provided as lower case characters. -#### `single_query_argument` +### `single_query_argument` Block Inspect a single query argument. Provide the name of the query argument to inspect, such as `UserName` or `SalesRegion` (provided as lowercase strings). @@ -809,16 +794,15 @@ The `single_query_argument` block supports the following arguments: * `name` - (Optional) Name of the query header to inspect. This setting must be provided as lower case characters. -#### `body` +### `body` Block The `body` block supports the following arguments: * `oversize_handling` - (Optional) What WAF should do if the body is larger than WAF can inspect. WAF does not support inspecting the entire contents of the body of a web request when the body exceeds 8 KB (8192 bytes). Only the first 8 KB of the request body are forwarded to WAF by the underlying host service. Valid values: `CONTINUE`, `MATCH`, `NO_MATCH`. -#### `cookies` +### `cookies` Block -Inspect the cookies in the web request. 
You can specify the parts of the cookies to inspect and you can narrow the set of cookies to inspect by including or excluding specific keys. -This is used to indicate the web request component to inspect, in the [FieldToMatch](https://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) specification. +Inspect the cookies in the web request. You can specify the parts of the cookies to inspect and you can narrow the set of cookies to inspect by including or excluding specific keys. This is used to indicate the web request component to inspect, in the [FieldToMatch](https://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) specification. The `cookies` block supports the following arguments: @@ -826,14 +810,14 @@ The `cookies` block supports the following arguments: * `match_scope` - (Required) The parts of the cookies to inspect with the rule inspection criteria. If you specify All, AWS WAF inspects both keys and values. Valid values: `ALL`, `KEY`, `VALUE` * `oversize_handling` - (Required) What AWS WAF should do if the cookies of the request are larger than AWS WAF can inspect. AWS WAF does not support inspecting the entire contents of request cookies when they exceed 8 KB (8192 bytes) or 200 total cookies. The underlying host service forwards a maximum of 200 cookies and at most 8 KB of cookie contents to AWS WAF. Valid values: `CONTINUE`, `MATCH`, `NO_MATCH`. -#### `text_transformation` +### `text_transformation` Block The `text_transformation` block supports the following arguments: * `priority` - (Required) Relative processing order for multiple transformations that are defined for a rule statement. AWS WAF processes all transformations, from lowest priority to highest, before inspecting the transformed content. * `type` - (Required) Transformation to apply, please refer to the Text Transformation [documentation](https://docs.aws.amazon.com/waf/latest/APIReference/API_TextTransformation.html) for more details. 
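As a hedged illustration of how a `field_to_match` component such as `cookies` combines with `text_transformation` inside a rule statement, a sketch might look like the following. The rule name, search string, and metric names are illustrative assumptions, not values taken from this page:

```terraform
# Sketch only: all names and values below are hypothetical.
resource "aws_wafv2_web_acl" "example" {
  name  = "example"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "inspect-cookie-values"
    priority = 1

    action {
      block {}
    }

    statement {
      byte_match_statement {
        search_string         = "badvalue"
        positional_constraint = "CONTAINS"

        # Inspect all cookie values; continue evaluation on oversize cookies.
        field_to_match {
          cookies {
            match_scope       = "VALUE"
            oversize_handling = "CONTINUE"

            match_pattern {
              all {}
            }
          }
        }

        # Lowercase the inspected values before matching.
        text_transformation {
          priority = 0
          type     = "LOWERCASE"
        }
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "inspect-cookie-values"
      sampled_requests_enabled   = false
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "example"
    sampled_requests_enabled   = false
  }
}
```

The `match_pattern` block with `all {}` is assumed here to scope inspection to every cookie; narrowing the set with included or excluded keys would follow the same shape.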
-### `visibility_config` +### `visibility_config` Block The `visibility_config` block supports the following arguments: @@ -841,13 +825,13 @@ The `visibility_config` block supports the following arguments: * `metric_name` - (Required) A friendly name of the CloudWatch metric. The name can contain only alphanumeric characters (A-Z, a-z, 0-9) hyphen(-) and underscore (\_), with length from one to 128 characters. It can't contain whitespace or metric names reserved for AWS WAF, for example `All` and `Default_Action`. * `sampled_requests_enabled` - (Required) Whether AWS WAF should store a sampling of the web requests that match the rules. You can view the sampled requests through the AWS WAF console. -### Captcha Configuration +### `captcha_config` Block The `captcha_config` block supports the following arguments: -* `immunity_time_property` - (Optional) Defines custom immunity time. See [Immunity Time Property](#immunity-time-property) below for details. +* `immunity_time_property` - (Optional) Defines custom immunity time. See [`immunity_time_property`](#immunity_time_property-block) below for details. -### Immunity Time Property +### `immunity_time_property` Block The `immunity_time_property` block supports the following arguments: diff --git a/website/docs/r/wafv2_web_acl_logging_configuration.html.markdown b/website/docs/r/wafv2_web_acl_logging_configuration.html.markdown index 5498e512a5d..32feb75ce67 100644 --- a/website/docs/r/wafv2_web_acl_logging_configuration.html.markdown +++ b/website/docs/r/wafv2_web_acl_logging_configuration.html.markdown @@ -3,16 +3,16 @@ subcategory: "WAF" layout: "aws" page_title: "AWS: aws_wafv2_web_acl_logging_configuration" description: |- - Creates a WAFv2 Web ACL Logging Configuration resource. + Create a resource for WAFv2 Web ACL Logging Configuration. --- # Resource: aws_wafv2_web_acl_logging_configuration -Creates a WAFv2 Web ACL Logging Configuration resource. +This resource creates a WAFv2 Web ACL Logging Configuration. 
--> **Note:** To start logging from a WAFv2 Web ACL, an Amazon Kinesis Data Firehose (e.g., [`aws_kinesis_firehose_delivery_stream` resource](/docs/providers/aws/r/kinesis_firehose_delivery_stream.html) must also be created with a PUT source (not a stream) and in the region that you are operating. -If you are capturing logs for Amazon CloudFront, always create the firehose in US East (N. Virginia). -Be sure to give the data firehose, cloudwatch log group, and/or s3 bucket a name that starts with the prefix `aws-waf-logs-`. +~> **NOTE:** To start logging from a WAFv2 Web ACL, you need to create an Amazon Kinesis Data Firehose resource, such as the [`aws_kinesis_firehose_delivery_stream`](/docs/providers/aws/r/kinesis_firehose_delivery_stream.html) resource. Make sure to create the firehose with a PUT source (not a stream) in the region where you are operating. If you are capturing logs for Amazon CloudFront, create the firehose in the US East (N. Virginia) region. It is important to name the data firehose, CloudWatch log group, and/or S3 bucket with a prefix of `aws-waf-logs-`. + +!> **WARNING:** When logging from a WAFv2 Web ACL to a CloudWatch Log Group, the WAFv2 service tries to create or update a generic Log Resource Policy named `AWSWAF-LOGS`. However, if there are a large number of Web ACLs or if the account frequently creates and deletes Web ACLs, this policy may exceed the maximum policy size. As a result, this resource type will fail to be created. More details about this issue can be found in [this issue](https://github.com/hashicorp/terraform-provider-aws/issues/25296). To prevent this issue, you can manage a specific resource policy. Please refer to the [example](#with-cloudwatch-log-group-and-managed-cloudwatch-log-resource-policy) below for managing a CloudWatch Log Group with a managed CloudWatch Log Resource Policy. 
## Example Usage @@ -73,90 +73,124 @@ resource "aws_wafv2_web_acl_logging_configuration" "example" { } ``` +### With CloudWatch Log Group and managed CloudWatch Log Resource Policy + +```terraform +resource "aws_cloudwatch_log_group" "example" { + name = "aws-waf-logs-some-uniq-suffix" +} + +resource "aws_wafv2_web_acl_logging_configuration" "example" { + log_destination_configs = [aws_cloudwatch_log_group.example.arn] + resource_arn = aws_wafv2_web_acl.example.arn +} + +resource "aws_cloudwatch_log_resource_policy" "example" { + policy_document = data.aws_iam_policy_document.example.json + policy_name = "webacl-policy-uniq-name" +} + +data "aws_iam_policy_document" "example" { + version = "2012-10-17" + statement { + effect = "Allow" + principals { + identifiers = ["delivery.logs.amazonaws.com"] + type = "Service" + } + actions = ["logs:CreateLogStream", "logs:PutLogEvents"] + resources = ["${aws_cloudwatch_log_group.example.arn}:*"] + condition { + test = "ArnLike" + values = ["arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:*"] + variable = "aws:SourceArn" + } + condition { + test = "StringEquals" + values = [tostring(data.aws_caller_identity.current.account_id)] + variable = "aws:SourceAccount" + } + } +} + +data "aws_region" "current" {} + +data "aws_caller_identity" "current" {} +``` + ## Argument Reference -The following arguments are supported: +This resource supports the following arguments: -* `log_destination_configs` - (Required) The Amazon Kinesis Data Firehose, Cloudwatch Log log group, or S3 bucket Amazon Resource Names (ARNs) that you want to associate with the web ACL. -* `logging_filter` - (Optional) A configuration block that specifies which web requests are kept in the logs and which are dropped. You can filter on the rule action and on the web request labels that were applied by matching rules during web ACL evaluation. See [Logging Filter](#logging-filter) below for more details.
-* `redacted_fields` - (Optional) The parts of the request that you want to keep out of the logs. Up to 100 `redacted_fields` blocks are supported. See [Redacted Fields](#redacted-fields) below for more details. -* `resource_arn` - (Required) The Amazon Resource Name (ARN) of the web ACL that you want to associate with `log_destination_configs`. +* `log_destination_configs` - (Required) Set of Amazon Kinesis Data Firehose, CloudWatch Logs log group, or S3 bucket Amazon Resource Names (ARNs) that you want to associate with the web ACL. +* `logging_filter` - (Optional) Configuration block that specifies which web requests are kept in the logs and which are dropped. It allows filtering based on the rule action and the web request labels applied by matching rules during web ACL evaluation. For more details, refer to the [Logging Filter](#logging-filter) section below. +* `redacted_fields` - (Optional) Configuration for parts of the request that you want to keep out of the logs. Up to 100 `redacted_fields` blocks are supported. See [Redacted Fields](#redacted-fields) below for more details. +* `resource_arn` - (Required) Amazon Resource Name (ARN) of the web ACL that you want to associate with `log_destination_configs`. ### Logging Filter The `logging_filter` block supports the following arguments: -* `default_behavior` - (Required) Default handling for logs that don't match any of the specified filtering conditions. Valid values: `KEEP` or `DROP`. +* `default_behavior` - (Required) Default handling for logs that don't match any of the specified filtering conditions. Valid values for `default_behavior` are `KEEP` or `DROP`. * `filter` - (Required) Filter(s) that you want to apply to the logs. See [Filter](#filter) below for more details. ### Filter The `filter` block supports the following arguments: -* `behavior` - (Required) How to handle logs that satisfy the filter's conditions and requirement. Valid values: `KEEP` or `DROP`.
+* `behavior` - (Required) Parameter that determines how to handle logs that meet the conditions and requirements of the filter. The valid values for `behavior` are `KEEP` or `DROP`. * `condition` - (Required) Match condition(s) for the filter. See [Condition](#condition) below for more details. -* `requirement` - (Required) Logic to apply to the filtering conditions. You can specify that, in order to satisfy the filter, a log must match all conditions or must match at least one condition. Valid values: `MEETS_ALL` or `MEETS_ANY`. +* `requirement` - (Required) Logic to apply to the filtering conditions. You can specify that a log must match all conditions or at least one condition in order to satisfy the filter. Valid values for `requirement` are `MEETS_ALL` or `MEETS_ANY`. ### Condition The `condition` block supports the following arguments: -~> **Note:** Either `action_condition` or `label_name_condition` must be specified. +~> **NOTE:** Either the `action_condition` or `label_name_condition` must be specified. -* `action_condition` - (Optional) A single action condition. See [Action Condition](#action-condition) below for more details. -* `label_name_condition` - (Optional) A single label name condition. See [Label Name Condition](#label-name-condition) below for more details. +* `action_condition` - (Optional) Configuration for a single action condition. See [Action Condition](#action-condition) below for more details. +* `label_name_condition` - (Optional) Condition for a single label name. See [Label Name Condition](#label-name-condition) below for more details. ### Action Condition The `action_condition` block supports the following argument: -* `action` - (Required) The action setting that a log record must contain in order to meet the condition. Valid values: `ALLOW`, `BLOCK`, `COUNT`. +* `action` - (Required) Action setting that a log record must contain in order to meet the condition. Valid values for `action` are `ALLOW`, `BLOCK`, and `COUNT`. 
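Taken together, the `logging_filter`, `filter`, and `condition` arguments described above compose as in this hedged sketch; the delivery stream, web ACL, and label name are assumptions defined elsewhere in the configuration:

```terraform
# Sketch only: drop COUNT-action records and records carrying a given label,
# keeping everything else. The referenced resources are assumed to exist.
resource "aws_wafv2_web_acl_logging_configuration" "example" {
  log_destination_configs = [aws_kinesis_firehose_delivery_stream.example.arn]
  resource_arn            = aws_wafv2_web_acl.example.arn

  logging_filter {
    default_behavior = "KEEP"

    filter {
      behavior    = "DROP"
      requirement = "MEETS_ANY"

      condition {
        action_condition {
          action = "COUNT"
        }
      }

      condition {
        label_name_condition {
          label_name = "awswaf:111122223333:rulegroup:example:LabelA"
        }
      }
    }
  }
}
```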
### Label Name Condition The `label_name_condition` block supports the following argument: -* `label_name` - (Required) The label name that a log record must contain in order to meet the condition. This must be a fully qualified label name. Fully qualified labels have a prefix, optional namespaces, and label name. The prefix identifies the rule group or web ACL context of the rule that added the label. +* `label_name` - (Required) Name of the label that a log record must contain in order to meet the condition. It must be a fully qualified label name, which includes a prefix, optional namespaces, and the label name itself. The prefix identifies the rule group or web ACL context of the rule that added the label. ### Redacted Fields The `redacted_fields` block supports the following arguments: -~> **NOTE:** Only one of `method`, `query_string`, `single_header` or `uri_path` can be specified. +~> **NOTE:** You can only specify one of the following: `method`, `query_string`, `single_header`, or `uri_path`. -* `all_query_arguments` - (Optional, **DEPRECATED**) Redact all query arguments. -* `body` - (Optional, **DEPRECATED**) Redact the request body, which immediately follows the request headers. -* `method` - (Optional) Redact the HTTP method. Must be specified as an empty configuration block `{}`. The method indicates the type of operation that the request is asking the origin to perform. -* `query_string` - (Optional) Redact the query string. Must be specified as an empty configuration block `{}`. This is the part of a URL that appears after a `?` character, if any. -* `single_header` - (Optional) Redact a single header. See [Single Header](#single-header) below for details. -* `single_query_argument` - (Optional, **DEPRECATED**) Redact a single query argument. See [Single Query Argument](#single-query-argument-deprecated) below for details. -* `uri_path` - (Optional) Redact the request URI path. Must be specified as an empty configuration block `{}`. 
This is the part of a web request that identifies a resource, for example, `/images/daily-ad.jpg`. +* `method` - (Optional) Configuration block that redacts the HTTP method. It must be specified as an empty configuration block `{}`. The method indicates the type of operation that the request is asking the origin to perform. +* `query_string` - (Optional) Configuration block that redacts the query string. It must be specified as an empty configuration block `{}`. The query string is the part of a URL that appears after a `?` character, if any. +* `single_header` - (Optional) Configuration block that redacts a single header. See [Single Header](#single-header) below for details. +* `uri_path` - (Optional) Configuration block that redacts the request URI path. It must be specified as an empty configuration block `{}`. The URI path is the part of a web request that identifies a resource, such as `/images/daily-ad.jpg`. ### Single Header -Redact a single header. Provide the name of the header to redact, for example, `User-Agent` or `Referer` (provided as lowercase strings). +To redact a single header, provide the name of the header to be redacted. For example, use `User-Agent` or `Referer` (provided as lowercase strings). The `single_header` block supports the following arguments: -* `name` - (Optional) The name of the query header to redact. This setting must be provided as lower case characters. - -### Single Query Argument (**DEPRECATED**) - -Redact a single query argument. Provide the name of the query argument to redact, such as `UserName` or `SalesRegion` (provided as lowercase strings). - -The `single_query_argument` block supports the following arguments: - -* `name` - (Optional) The name of the query header to redact. This setting must be provided as lower case characters. +* `name` - (Optional) Name of the header to redact. This setting must be provided in lowercase characters.
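A minimal sketch of the `redacted_fields` arguments above, assuming a firehose and web ACL defined elsewhere, would redact a single header from the logs:

```terraform
# Sketch only: redact the User-Agent header (lowercase name) from WAF logs.
resource "aws_wafv2_web_acl_logging_configuration" "example" {
  log_destination_configs = [aws_kinesis_firehose_delivery_stream.example.arn]
  resource_arn            = aws_wafv2_web_acl.example.arn

  redacted_fields {
    single_header {
      name = "user-agent"
    }
  }
}
```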
-## Attributes Reference +## Attribute Reference -In addition to all arguments above, the following attributes are exported: +This resource exports the following attributes in addition to the arguments above: -* `id` - The Amazon Resource Name (ARN) of the WAFv2 Web ACL. +* `id` - Amazon Resource Name (ARN) of the WAFv2 Web ACL. ## Import -WAFv2 Web ACL Logging Configurations can be imported using the WAFv2 Web ACL ARN e.g., +To import WAFv2 Web ACL Logging Configurations, use the ARN of the WAFv2 Web ACL. For example: ``` $ terraform import aws_wafv2_web_acl_logging_configuration.example arn:aws:wafv2:us-west-2:123456789012:regional/webacl/test-logs/a1b2c3d4-5678-90ab-cdef diff --git a/website/docs/r/xray_sampling_rule.html.markdown b/website/docs/r/xray_sampling_rule.html.markdown index 42572bc90c9..7e58b9c60c7 100644 --- a/website/docs/r/xray_sampling_rule.html.markdown +++ b/website/docs/r/xray_sampling_rule.html.markdown @@ -15,7 +15,7 @@ Creates and manages an AWS XRay Sampling Rule. ```terraform resource "aws_xray_sampling_rule" "example" { rule_name = "example" - priority = 10000 + priority = 9999 version = 1 reservoir_size = 1 fixed_rate = 0.05